Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/usb/qmi_wwan.c
include/net/dst.h

Trivial merge conflicts; both were overlapping changes.

Signed-off-by: David S. Miller <davem@davemloft.net>
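The conflicted files listed above were resolved by hand during the merge. As a minimal illustration of what "overlapping changes" means here, the sketch below reproduces the situation in a throwaway repository (the file name `dst.h` is only an echo of the conflicted path, not the kernel tree itself): two branches edit the same line, and `git merge` stops with conflict markers for manual resolution.

```shell
# Throwaway repo demonstrating an overlapping-change merge conflict.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name dev
printf 'old_line\n' > dst.h
git add dst.h && git commit -qm base
main=$(git symbolic-ref --short HEAD)          # works for master or main
git checkout -qb net
printf 'net_line\n' > dst.h
git commit -qam "net change"
git checkout -q "$main"
printf 'mainline_line\n' > dst.h
git commit -qam "mainline change"
# Both branches changed the same line, so the merge cannot auto-resolve:
git merge net >/dev/null 2>&1 || echo "CONFLICT in dst.h"
grep -c '^<<<<<<<' dst.h                       # prints 1: one conflict hunk
```

After editing the file to the intended combined content, `git add dst.h && git commit` concludes the merge, which is essentially what the resolution recorded in this commit did for the two conflicted kernel files.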

+4351 -2992
+4 -4
Documentation/ABI/stable/sysfs-bus-usb
···
37 37		that the USB device has been connected to the machine. This
38 38		file is read-only.
39 39	Users:
40  -		PowerTOP <power@bughost.org>
41  -		http://www.lesswatts.org/projects/powertop/
40  +		PowerTOP <powertop@lists.01.org>
41  +		https://01.org/powertop/
42 42
43 43	What:		/sys/bus/usb/device/.../power/active_duration
44 44	Date:		January 2008
···
57 57		will give an integer percentage. Note that this does not
58 58		account for counter wrap.
59 59	Users:
60  -		PowerTOP <power@bughost.org>
61  -		http://www.lesswatts.org/projects/powertop/
60  +		PowerTOP <powertop@lists.01.org>
61  +		https://01.org/powertop/
62 62
63 63	What:		/sys/bus/usb/devices/<busnum>-<port[.port]>...:<config num>-<interface num>/supports_autosuspend
64 64	Date:		January 2008
+16 -16
Documentation/ABI/testing/sysfs-devices-power
···
1 1	What:		/sys/devices/.../power/
2 2	Date:		January 2009
3  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
3  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
4 4	Description:
5 5		The /sys/devices/.../power directory contains attributes
6 6		allowing the user space to check and modify some power
···
8 8
9 9	What:		/sys/devices/.../power/wakeup
10 10	Date:		January 2009
11  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
11  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
12 12	Description:
13 13		The /sys/devices/.../power/wakeup attribute allows the user
14 14		space to check if the device is enabled to wake up the system
···
34 34
35 35	What:		/sys/devices/.../power/control
36 36	Date:		January 2009
37  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
37  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
38 38	Description:
39 39		The /sys/devices/.../power/control attribute allows the user
40 40		space to control the run-time power management of the device.
···
53 53
54 54	What:		/sys/devices/.../power/async
55 55	Date:		January 2009
56  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
56  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
57 57	Description:
58 58		The /sys/devices/.../async attribute allows the user space to
59 59		enable or diasble the device's suspend and resume callbacks to
···
79 79
80 80	What:		/sys/devices/.../power/wakeup_count
81 81	Date:		September 2010
82  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
82  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
83 83	Description:
84 84		The /sys/devices/.../wakeup_count attribute contains the number
85 85		of signaled wakeup events associated with the device. This
···
88 88
89 89	What:		/sys/devices/.../power/wakeup_active_count
90 90	Date:		September 2010
91  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
91  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
92 92	Description:
93 93		The /sys/devices/.../wakeup_active_count attribute contains the
94 94		number of times the processing of wakeup events associated with
···
98 98
99 99	What:		/sys/devices/.../power/wakeup_abort_count
100 100	Date:		February 2012
101  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
101  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
102 102	Description:
103 103		The /sys/devices/.../wakeup_abort_count attribute contains the
104 104		number of times the processing of a wakeup event associated with
···
109 109
110 110	What:		/sys/devices/.../power/wakeup_expire_count
111 111	Date:		February 2012
112  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
112  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
113 113	Description:
114 114		The /sys/devices/.../wakeup_expire_count attribute contains the
115 115		number of times a wakeup event associated with the device has
···
119 119
120 120	What:		/sys/devices/.../power/wakeup_active
121 121	Date:		September 2010
122  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
122  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
123 123	Description:
124 124		The /sys/devices/.../wakeup_active attribute contains either 1,
125 125		or 0, depending on whether or not a wakeup event associated with
···
129 129
130 130	What:		/sys/devices/.../power/wakeup_total_time_ms
131 131	Date:		September 2010
132  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
132  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
133 133	Description:
134 134		The /sys/devices/.../wakeup_total_time_ms attribute contains
135 135		the total time of processing wakeup events associated with the
···
139 139
140 140	What:		/sys/devices/.../power/wakeup_max_time_ms
141 141	Date:		September 2010
142  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
142  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
143 143	Description:
144 144		The /sys/devices/.../wakeup_max_time_ms attribute contains
145 145		the maximum time of processing a single wakeup event associated
···
149 149
150 150	What:		/sys/devices/.../power/wakeup_last_time_ms
151 151	Date:		September 2010
152  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
152  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
153 153	Description:
154 154		The /sys/devices/.../wakeup_last_time_ms attribute contains
155 155		the value of the monotonic clock corresponding to the time of
···
160 160
161 161	What:		/sys/devices/.../power/wakeup_prevent_sleep_time_ms
162 162	Date:		February 2012
163  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
163  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
164 164	Description:
165 165		The /sys/devices/.../wakeup_prevent_sleep_time_ms attribute
166 166		contains the total time the device has been preventing
···
189 189
190 190	What:		/sys/devices/.../power/pm_qos_latency_us
191 191	Date:		March 2012
192  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
192  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
193 193	Description:
194 194		The /sys/devices/.../power/pm_qos_resume_latency_us attribute
195 195		contains the PM QoS resume latency limit for the given device,
···
207 207
208 208	What:		/sys/devices/.../power/pm_qos_no_power_off
209 209	Date:		September 2012
210  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
210  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
211 211	Description:
212 212		The /sys/devices/.../power/pm_qos_no_power_off attribute
213 213		is used for manipulating the PM QoS "no power off" flag. If
···
222 222
223 223	What:		/sys/devices/.../power/pm_qos_remote_wakeup
224 224	Date:		September 2012
225  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
225  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
226 226	Description:
227 227		The /sys/devices/.../power/pm_qos_remote_wakeup attribute
228 228		is used for manipulating the PM QoS "remote wakeup required"
+11 -11
Documentation/ABI/testing/sysfs-power
···
1 1	What:		/sys/power/
2 2	Date:		August 2006
3  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
3  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
4 4	Description:
5 5		The /sys/power directory will contain files that will
6 6		provide a unified interface to the power management
···
8 8
9 9	What:		/sys/power/state
10 10	Date:		August 2006
11  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
11  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
12 12	Description:
13 13		The /sys/power/state file controls the system power state.
14 14		Reading from this file returns what states are supported,
···
22 22
23 23	What:		/sys/power/disk
24 24	Date:		September 2006
25  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
25  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
26 26	Description:
27 27		The /sys/power/disk file controls the operating mode of the
28 28		suspend-to-disk mechanism. Reading from this file returns
···
67 67
68 68	What:		/sys/power/image_size
69 69	Date:		August 2006
70  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
70  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
71 71	Description:
72 72		The /sys/power/image_size file controls the size of the image
73 73		created by the suspend-to-disk mechanism. It can be written a
···
84 84
85 85	What:		/sys/power/pm_trace
86 86	Date:		August 2006
87  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
87  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
88 88	Description:
89 89		The /sys/power/pm_trace file controls the code which saves the
90 90		last PM event point in the RTC across reboots, so that you can
···
133 133
134 134	What:		/sys/power/pm_async
135 135	Date:		January 2009
136  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
136  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
137 137	Description:
138 138		The /sys/power/pm_async file controls the switch allowing the
139 139		user space to enable or disable asynchronous suspend and resume
···
146 146
147 147	What:		/sys/power/wakeup_count
148 148	Date:		July 2010
149  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
149  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
150 150	Description:
151 151		The /sys/power/wakeup_count file allows user space to put the
152 152		system into a sleep state while taking into account the
···
161 161
162 162	What:		/sys/power/reserved_size
163 163	Date:		May 2011
164  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
164  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
165 165	Description:
166 166		The /sys/power/reserved_size file allows user space to control
167 167		the amount of memory reserved for allocations made by device
···
175 175
176 176	What:		/sys/power/autosleep
177 177	Date:		April 2012
178  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
178  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
179 179	Description:
180 180		The /sys/power/autosleep file can be written one of the strings
181 181		returned by reads from /sys/power/state. If that happens, a
···
192 192
193 193	What:		/sys/power/wake_lock
194 194	Date:		February 2012
195  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
195  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
196 196	Description:
197 197		The /sys/power/wake_lock file allows user space to create
198 198		wakeup source objects and activate them on demand (if one of
···
219 219
220 220	What:		/sys/power/wake_unlock
221 221	Date:		February 2012
222  -	Contact:	Rafael J. Wysocki <rjw@sisk.pl>
222  +	Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
223 223	Description:
224 224		The /sys/power/wake_unlock file allows user space to deactivate
225 225		wakeup sources created with the help of /sys/power/wake_lock.
+1 -1
Documentation/acpi/dsdt-override.txt
···
4 4
5 5	When to use this method is described in detail on the
6 6	Linux/ACPI home page:
7  -	http://www.lesswatts.org/projects/acpi/overridingDSDT.php
7  +	https://01.org/linux-acpi/documentation/overriding-dsdt
-168
Documentation/devicetree/bindings/memory.txt
···
1   -	*** Memory binding ***
2   -
3   -	The /memory node provides basic information about the address and size
4   -	of the physical memory. This node is usually filled or updated by the
5   -	bootloader, depending on the actual memory configuration of the given
6   -	hardware.
7   -
8   -	The memory layout is described by the following node:
9   -
10  -	/ {
11  -		#address-cells = <(n)>;
12  -		#size-cells = <(m)>;
13  -		memory {
14  -			device_type = "memory";
15  -			reg = <(baseaddr1) (size1)
16  -			       (baseaddr2) (size2)
17  -			       ...
18  -			       (baseaddrN) (sizeN)>;
19  -		};
20  -		...
21  -	};
22  -
23  -	A memory node follows the typical device tree rules for "reg" property:
24  -	n:		number of cells used to store base address value
25  -	m:		number of cells used to store size value
26  -	baseaddrX:	defines a base address of the defined memory bank
27  -	sizeX:		the size of the defined memory bank
28  -
29  -
30  -	More than one memory bank can be defined.
31  -
32  -
33  -	*** Reserved memory regions ***
34  -
35  -	In /memory/reserved-memory node one can create child nodes describing
36  -	particular reserved (excluded from normal use) memory regions. Such
37  -	memory regions are usually designed for the special usage by various
38  -	device drivers. A good example are contiguous memory allocations or
39  -	memory sharing with other operating system on the same hardware board.
40  -	Those special memory regions might depend on the board configuration and
41  -	devices used on the target system.
42  -
43  -	Parameters for each memory region can be encoded into the device tree
44  -	with the following convention:
45  -
46  -	[(label):] (name) {
47  -		compatible = "linux,contiguous-memory-region", "reserved-memory-region";
48  -		reg = <(address) (size)>;
49  -		(linux,default-contiguous-region);
50  -	};
51  -
52  -	compatible:	one or more of:
53  -		- "linux,contiguous-memory-region" - enables binding of this
54  -		  region to Contiguous Memory Allocator (special region for
55  -		  contiguous memory allocations, shared with movable system
56  -		  memory, Linux kernel-specific).
57  -		- "reserved-memory-region" - compatibility is defined, given
58  -		  region is assigned for exclusive usage for by the respective
59  -		  devices.
60  -
61  -	reg:	standard property defining the base address and size of
62  -		the memory region
63  -
64  -	linux,default-contiguous-region: property indicating that the region
65  -		is the default region for all contiguous memory
66  -		allocations, Linux specific (optional)
67  -
68  -	It is optional to specify the base address, so if one wants to use
69  -	autoconfiguration of the base address, '0' can be specified as a base
70  -	address in the 'reg' property.
71  -
72  -	The /memory/reserved-memory node must contain the same #address-cells
73  -	and #size-cells value as the root node.
74  -
75  -
76  -	*** Device node's properties ***
77  -
78  -	Once regions in the /memory/reserved-memory node have been defined, they
79  -	may be referenced by other device nodes. Bindings that wish to reference
80  -	memory regions should explicitly document their use of the following
81  -	property:
82  -
83  -	memory-region = <&phandle_to_defined_region>;
84  -
85  -	This property indicates that the device driver should use the memory
86  -	region pointed by the given phandle.
87  -
88  -
89  -	*** Example ***
90  -
91  -	This example defines a memory consisting of 4 memory banks. 3 contiguous
92  -	regions are defined for Linux kernel, one default of all device drivers
93  -	(named contig_mem, placed at 0x72000000, 64MiB), one dedicated to the
94  -	framebuffer device (labelled display_mem, placed at 0x78000000, 8MiB)
95  -	and one for multimedia processing (labelled multimedia_mem, placed at
96  -	0x77000000, 64MiB). 'display_mem' region is then assigned to fb@12300000
97  -	device for DMA memory allocations (Linux kernel drivers will use CMA is
98  -	available or dma-exclusive usage otherwise). 'multimedia_mem' is
99  -	assigned to scaler@12500000 and codec@12600000 devices for contiguous
100 -	memory allocations when CMA driver is enabled.
101 -
102 -	The reason for creating a separate region for framebuffer device is to
103 -	match the framebuffer base address to the one configured by bootloader,
104 -	so once Linux kernel drivers starts no glitches on the displayed boot
105 -	logo appears. Scaller and codec drivers should share the memory
106 -	allocations.
107 -
108 -	/ {
109 -		#address-cells = <1>;
110 -		#size-cells = <1>;
111 -
112 -		/* ... */
113 -
114 -		memory {
115 -			reg = <0x40000000 0x10000000
116 -			       0x50000000 0x10000000
117 -			       0x60000000 0x10000000
118 -			       0x70000000 0x10000000>;
119 -
120 -			reserved-memory {
121 -				#address-cells = <1>;
122 -				#size-cells = <1>;
123 -
124 -				/*
125 -				 * global autoconfigured region for contiguous allocations
126 -				 * (used only with Contiguous Memory Allocator)
127 -				 */
128 -				contig_region@0 {
129 -					compatible = "linux,contiguous-memory-region";
130 -					reg = <0x0 0x4000000>;
131 -					linux,default-contiguous-region;
132 -				};
133 -
134 -				/*
135 -				 * special region for framebuffer
136 -				 */
137 -				display_region: region@78000000 {
138 -					compatible = "linux,contiguous-memory-region", "reserved-memory-region";
139 -					reg = <0x78000000 0x800000>;
140 -				};
141 -
142 -				/*
143 -				 * special region for multimedia processing devices
144 -				 */
145 -				multimedia_region: region@77000000 {
146 -					compatible = "linux,contiguous-memory-region";
147 -					reg = <0x77000000 0x4000000>;
148 -				};
149 -			};
150 -		};
151 -
152 -		/* ... */
153 -
154 -		fb0: fb@12300000 {
155 -			status = "okay";
156 -			memory-region = <&display_region>;
157 -		};
158 -
159 -		scaler: scaler@12500000 {
160 -			status = "okay";
161 -			memory-region = <&multimedia_region>;
162 -		};
163 -
164 -		codec: codec@12600000 {
165 -			status = "okay";
166 -			memory-region = <&multimedia_region>;
167 -		};
168 -	};
+10 -7
Documentation/devicetree/bindings/mmc/tmio_mmc.txt
···
9 9	described in mmc.txt, can be used. Additionally the following tmio_mmc-specific
10 10	optional bindings can be used.
11 11
12  +	Required properties:
13  +	- compatible:	"renesas,sdhi-shmobile" - a generic sh-mobile SDHI unit
14  +			"renesas,sdhi-sh7372" - SDHI IP on SH7372 SoC
15  +			"renesas,sdhi-sh73a0" - SDHI IP on SH73A0 SoC
16  +			"renesas,sdhi-r8a73a4" - SDHI IP on R8A73A4 SoC
17  +			"renesas,sdhi-r8a7740" - SDHI IP on R8A7740 SoC
18  +			"renesas,sdhi-r8a7778" - SDHI IP on R8A7778 SoC
19  +			"renesas,sdhi-r8a7779" - SDHI IP on R8A7779 SoC
20  +			"renesas,sdhi-r8a7790" - SDHI IP on R8A7790 SoC
21  +
12 22	Optional properties:
13 23	- toshiba,mmc-wrprotect-disable: write-protect detection is unavailable
14  -
15  -	When used with Renesas SDHI hardware, the following compatibility strings
16  -	configure various model-specific properties:
17  -
18  -	"renesas,sh7372-sdhi": (default) compatible with SH7372
19  -	"renesas,r8a7740-sdhi": compatible with R8A7740: certain MMC/SD commands have to
20  -	wait for the interface to become idle.
+1
Documentation/sound/alsa/HD-Audio-Models.txt
···
28 28	  alc269-dmic		Enable ALC269(VA) digital mic workaround
29 29	  alc271-dmic		Enable ALC271X digital mic workaround
30 30	  inv-dmic		Inverted internal mic workaround
31  +	  headset-mic		Indicates a combined headset (headphone+mic) jack
31 32	  lenovo-dock		Enables docking station I/O for some Lenovos
32 33	  dell-headset-multi	Headset jack, which can also be used as mic-in
33 34	  dell-headset-dock	Headset jack (without mic-in), and also dock I/O
+40 -14
MAINTAINERS
···
237 237
238 238	ACPI
239 239	M:	Len Brown <lenb@kernel.org>
240  -	M:	Rafael J. Wysocki <rjw@sisk.pl>
240  +	M:	Rafael J. Wysocki <rjw@rjwysocki.net>
241 241	L:	linux-acpi@vger.kernel.org
242  -	W:	http://www.lesswatts.org/projects/acpi/
243  -	Q:	http://patchwork.kernel.org/project/linux-acpi/list/
244  -	T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux
242  +	W:	https://01.org/linux-acpi
243  +	Q:	https://patchwork.kernel.org/project/linux-acpi/list/
244  +	T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
245 245	S:	Supported
246 246	F:	drivers/acpi/
247 247	F:	drivers/pnp/pnpacpi/
···
256 256	ACPI FAN DRIVER
257 257	M:	Zhang Rui <rui.zhang@intel.com>
258 258	L:	linux-acpi@vger.kernel.org
259  -	W:	http://www.lesswatts.org/projects/acpi/
259  +	W:	https://01.org/linux-acpi
260 260	S:	Supported
261 261	F:	drivers/acpi/fan.c
262 262
263 263	ACPI THERMAL DRIVER
264 264	M:	Zhang Rui <rui.zhang@intel.com>
265 265	L:	linux-acpi@vger.kernel.org
266  -	W:	http://www.lesswatts.org/projects/acpi/
266  +	W:	https://01.org/linux-acpi
267 267	S:	Supported
268 268	F:	drivers/acpi/*thermal*
269 269
270 270	ACPI VIDEO DRIVER
271 271	M:	Zhang Rui <rui.zhang@intel.com>
272 272	L:	linux-acpi@vger.kernel.org
273  -	W:	http://www.lesswatts.org/projects/acpi/
273  +	W:	https://01.org/linux-acpi
274 274	S:	Supported
275 275	F:	drivers/acpi/video.c
276 276
···
824 824	F:	arch/arm/mach-gemini/
825 825
826 826	ARM/CSR SIRFPRIMA2 MACHINE SUPPORT
827  -	M:	Barry Song <baohua.song@csr.com>
827  +	M:	Barry Song <baohua@kernel.org>
828 828	L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
829 829	T:	git git://git.kernel.org/pub/scm/linux/kernel/git/baohua/linux.git
830 830	S:	Maintained
831 831	F:	arch/arm/mach-prima2/
832  +	F:	drivers/clk/clk-prima2.c
833  +	F:	drivers/clocksource/timer-prima2.c
834  +	F:	drivers/clocksource/timer-marco.c
832 835	F:	drivers/dma/sirf-dma.c
833 836	F:	drivers/i2c/busses/i2c-sirf.c
837  +	F:	drivers/input/misc/sirfsoc-onkey.c
838  +	F:	drivers/irqchip/irq-sirfsoc.c
834 839	F:	drivers/mmc/host/sdhci-sirf.c
835 840	F:	drivers/pinctrl/sirf/
841  +	F:	drivers/rtc/rtc-sirfsoc.c
836 842	F:	drivers/spi/spi-sirf.c
837 843
838 844	ARM/EBSA110 MACHINE SUPPORT
···
2301 2295	F:	drivers/net/ethernet/ti/cpmac.c
2302 2296
2303 2297	CPU FREQUENCY DRIVERS
2304    -	M:	Rafael J. Wysocki <rjw@sisk.pl>
2298    +	M:	Rafael J. Wysocki <rjw@rjwysocki.net>
2305 2299	M:	Viresh Kumar <viresh.kumar@linaro.org>
2306 2300	L:	cpufreq@vger.kernel.org
2307 2301	L:	linux-pm@vger.kernel.org
···
2332 2326	F:	drivers/cpuidle/cpuidle-big_little.c
2333 2327
2334 2328	CPUIDLE DRIVERS
2335    -	M:	Rafael J. Wysocki <rjw@sisk.pl>
2329    +	M:	Rafael J. Wysocki <rjw@rjwysocki.net>
2336 2330	M:	Daniel Lezcano <daniel.lezcano@linaro.org>
2337 2331	L:	linux-pm@vger.kernel.org
2338 2332	S:	Maintained
···
3554 3548
3555 3549	FREEZER
3556 3550	M:	Pavel Machek <pavel@ucw.cz>
3557    -	M:	"Rafael J. Wysocki" <rjw@sisk.pl>
3551    +	M:	"Rafael J. Wysocki" <rjw@rjwysocki.net>
3558 3552	L:	linux-pm@vger.kernel.org
3559 3553	S:	Supported
3560 3554	F:	Documentation/power/freezing-of-tasks.txt
···
3624 3618	L:	linux-scsi@vger.kernel.org
3625 3619	S:	Odd Fixes (e.g., new signatures)
3626 3620	F:	drivers/scsi/fdomain.*
3621    +
3622    +	GCOV BASED KERNEL PROFILING
3623    +	M:	Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
3624    +	S:	Maintained
3625    +	F:	kernel/gcov/
3626    +	F:	Documentation/gcov.txt
3627 3627
3628 3628	GDT SCSI DISK ARRAY CONTROLLER DRIVER
3629 3629	M:	Achim Leubner <achim_leubner@adaptec.com>
···
3896 3884
3897 3885	HIBERNATION (aka Software Suspend, aka swsusp)
3898 3886	M:	Pavel Machek <pavel@ucw.cz>
3899    -	M:	"Rafael J. Wysocki" <rjw@sisk.pl>
3887    +	M:	"Rafael J. Wysocki" <rjw@rjwysocki.net>
3900 3888	L:	linux-pm@vger.kernel.org
3901 3889	S:	Supported
3902 3890	F:	arch/x86/power/
···
4346 4334	INTEL MENLOW THERMAL DRIVER
4347 4335	M:	Sujith Thomas <sujith.thomas@intel.com>
4348 4336	L:	platform-driver-x86@vger.kernel.org
4349    -	W:	http://www.lesswatts.org/projects/acpi/
4337    +	W:	https://01.org/linux-acpi
4350 4338	S:	Supported
4351 4339	F:	drivers/platform/x86/intel_menlow.c
···
4482 4470	L:	linux-serial@vger.kernel.org
4483 4471	S:	Maintained
4484 4472	F:	drivers/tty/serial/ioc3_serial.c
4473    +
4474    +	IOMMU DRIVERS
4475    +	M:	Joerg Roedel <joro@8bytes.org>
4476    +	L:	iommu@lists.linux-foundation.org
4477    +	T:	git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
4478    +	S:	Maintained
4479    +	F:	drivers/iommu/
4485 4480
4486 4481	IP MASQUERADING
4487 4482	M:	Juanjo Ciarlante <jjciarla@raiz.uncu.edu.ar>
···
7831 7812	F:	sound/soc/
7832 7813	F:	include/sound/soc*
7833 7814
7815    +	SOUND - DMAENGINE HELPERS
7816    +	M:	Lars-Peter Clausen <lars@metafoo.de>
7817    +	S:	Supported
7818    +	F:	include/sound/dmaengine_pcm.h
7819    +	F:	sound/core/pcm_dmaengine.c
7820    +	F:	sound/soc/soc-generic-dmaengine-pcm.c
7821    +
7834 7822	SPARC + UltraSPARC (sparc/sparc64)
7835 7823	M:	"David S. Miller" <davem@davemloft.net>
7836 7824	L:	sparclinux@vger.kernel.org
···
8117 8091	SUSPEND TO RAM
8118 8092	M:	Len Brown <len.brown@intel.com>
8119 8093	M:	Pavel Machek <pavel@ucw.cz>
8120    -	M:	"Rafael J. Wysocki" <rjw@sisk.pl>
8094    +	M:	"Rafael J. Wysocki" <rjw@rjwysocki.net>
8121 8095	L:	linux-pm@vger.kernel.org
8122 8096	S:	Supported
8123 8097	F:	Documentation/power/
+1 -1
Makefile
···
1 1	VERSION = 3
2 2	PATCHLEVEL = 12
3 3	SUBLEVEL = 0
4  -	EXTRAVERSION = -rc3
4  +	EXTRAVERSION = -rc6
5 5	NAME = One Giant Leap for Frogkind
6 6
7 7	# *DOCUMENTATION*
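The four fields in this hunk are what the top-level Makefile concatenates into the kernel release string (cf. its KERNELVERSION definition); the shell rendering below is only an illustration of that concatenation, not the Makefile itself.

```shell
# Illustrative only: how the version fields above combine into the
# release string reported by `make kernelversion`.
VERSION=3
PATCHLEVEL=12
SUBLEVEL=0
EXTRAVERSION=-rc6
echo "${VERSION}.${PATCHLEVEL}.${SUBLEVEL}${EXTRAVERSION}"   # prints 3.12.0-rc6
```

So this merge bumps the tree from 3.12.0-rc3 to 3.12.0-rc6.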
+1 -1
arch/arc/kernel/ptrace.c
···
102 102		REG_IGNORE_ONE(pad2);
103 103		REG_IN_CHUNK(callee, efa, cregs);	/* callee_regs[r25..r13] */
104 104		REG_IGNORE_ONE(efa);			/* efa update invalid */
105  -		REG_IN_ONE(stop_pc, &ptregs->ret);	/* stop_pc: PC update */
105  +		REG_IGNORE_ONE(stop_pc);		/* PC updated via @ret */
106 106
107 107		return ret;
108 108	}
+13 -12
arch/arc/kernel/signal.c
···
101 101	{
102 102		struct rt_sigframe __user *sf;
103 103		unsigned int magic;
104  -		int err;
105 104		struct pt_regs *regs = current_pt_regs();
106 105
107 106		/* Always make any pending restarted system calls return -EINTR */
···
118 119		if (!access_ok(VERIFY_READ, sf, sizeof(*sf)))
119 120			goto badframe;
120 121
121  -		err = restore_usr_regs(regs, sf);
122  -		err |= __get_user(magic, &sf->sigret_magic);
123  -		if (err)
122  +		if (__get_user(magic, &sf->sigret_magic))
124 123			goto badframe;
125 124
126 125		if (unlikely(is_do_ss_needed(magic)))
127 126			if (restore_altstack(&sf->uc.uc_stack))
128 127				goto badframe;
128  +
129  +		if (restore_usr_regs(regs, sf))
130  +			goto badframe;
129 131
130 132		/* Don't restart from sigreturn */
131 133		syscall_wont_restart(regs);
···
191 191			return 1;
192 192
193 193		/*
194  +	 * w/o SA_SIGINFO, struct ucontext is partially populated (only
195  +	 * uc_mcontext/uc_sigmask) for kernel's normal user state preservation
196  +	 * during signal handler execution. This works for SA_SIGINFO as well
197  +	 * although the semantics are now overloaded (the same reg state can be
198  +	 * inspected by userland: but are they allowed to fiddle with it ?
199  +	 */
200  +		err |= stash_usr_regs(sf, regs, set);
201  +
202  +		/*
194 203		 * SA_SIGINFO requires 3 args to signal handler:
195 204		 *  #1: sig-no (common to any handler)
196 205		 *  #2: struct siginfo
···
222 213		magic = MAGIC_SIGALTSTK;
223 214	}
224 215
225  -	/*
226  -	 * w/o SA_SIGINFO, struct ucontext is partially populated (only
227  -	 * uc_mcontext/uc_sigmask) for kernel's normal user state preservation
228  -	 * during signal handler execution. This works for SA_SIGINFO as well
229  -	 * although the semantics are now overloaded (the same reg state can be
230  -	 * inspected by userland: but are they allowed to fiddle with it ?
231  -	 */
232  -	err |= stash_usr_regs(sf, regs, set);
233 216	err |= __put_user(magic, &sf->sigret_magic);
234 217	if (err)
235 218		return err;
+7 -2
arch/arm/Makefile
···
296 296	# Convert bzImage to zImage
297 297	bzImage: zImage
298 298
299  -	zImage Image xipImage bootpImage uImage: vmlinux
299  +	BOOT_TARGETS	= zImage Image xipImage bootpImage uImage
300  +	INSTALL_TARGETS	= zinstall uinstall install
301  +
302  +	PHONY += bzImage $(BOOT_TARGETS) $(INSTALL_TARGETS)
303  +
304  +	$(BOOT_TARGETS): vmlinux
300 305		$(Q)$(MAKE) $(build)=$(boot) MACHINE=$(MACHINE) $(boot)/$@
301 306
302  -	zinstall uinstall install: vmlinux
307  +	$(INSTALL_TARGETS):
303 308		$(Q)$(MAKE) $(build)=$(boot) MACHINE=$(MACHINE) $@
304 309
305 310	%.dtb: | scripts
+8 -8
arch/arm/boot/Makefile
···
95 95		@test "$(INITRD)" != "" || \
96 96		(echo You must specify INITRD; exit -1)
97 97
98  -	install: $(obj)/Image
99  -		$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
98  +	install:
99  +		$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
100 100		$(obj)/Image System.map "$(INSTALL_PATH)"
101 101
102  -	zinstall: $(obj)/zImage
103  -		$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
102  +	zinstall:
103  +		$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
104 104		$(obj)/zImage System.map "$(INSTALL_PATH)"
105 105
106  -	uinstall: $(obj)/uImage
107  -		$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
106  +	uinstall:
107  +		$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
108 108		$(obj)/uImage System.map "$(INSTALL_PATH)"
109 109
110 110	zi:
111  -		$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
111  +		$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
112 112		$(obj)/zImage System.map "$(INSTALL_PATH)"
113 113
114 114	i:
115  -		$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
115  +		$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
116 116		$(obj)/Image System.map "$(INSTALL_PATH)"
117 117
118 118	subdir- := bootp compressed dts
+2
arch/arm/boot/dts/Makefile
···
41 41	dtb-$(CONFIG_ARCH_AT91) += sama5d34ek.dtb
42 42	dtb-$(CONFIG_ARCH_AT91) += sama5d35ek.dtb
43 43
44  +	dtb-$(CONFIG_ARCH_ATLAS6) += atlas6-evb.dtb
45  +
44 46	dtb-$(CONFIG_ARCH_BCM2835) += bcm2835-rpi-b.dtb
45 47	dtb-$(CONFIG_ARCH_BCM) += bcm11351-brt.dtb \
46 48		bcm28155-ap.dtb
+32 -17
arch/arm/boot/dts/armada-370-netgear-rn102.dts
···
27 27	};
28 28
29 29	soc {
30  +		ranges = <MBUS_ID(0xf0, 0x01) 0 0xd0000000 0x100000
31  +			  MBUS_ID(0x01, 0xe0) 0 0xfff00000 0x100000>;
32  +
33  +		pcie-controller {
34  +			status = "okay";
35  +
36  +			/* Connected to Marvell SATA controller */
37  +			pcie@1,0 {
38  +				/* Port 0, Lane 0 */
39  +				status = "okay";
40  +			};
41  +
42  +			/* Connected to FL1009 USB 3.0 controller */
43  +			pcie@2,0 {
44  +				/* Port 1, Lane 0 */
45  +				status = "okay";
46  +			};
47  +		};
48  +
30 49		internal-regs {
31 50			serial@12000 {
32 51				clock-frequency = <200000000>;
···
74 55
75 56			backup_led_pin: backup-led-pin {
76 57				marvell,pins = "mpp56";
58  +				marvell,function = "gpio";
59  +			};
60  +
61  +			poweroff: poweroff {
62  +				marvell,pins = "mpp8";
77 63				marvell,function = "gpio";
78 64			};
79 65		};
···
111 87			fan_gear_mode = <0>;
112 88			fan_startv = <1>;
113 89			pwm_polarity = <0>;
114  -		};
115  -	};
116  -
117  -	pcie-controller {
118  -		status = "okay";
119  -
120  -		/* Connected to Marvell SATA controller */
121  -		pcie@1,0 {
122  -			/* Port 0, Lane 0 */
123  -			status = "okay";
124  -		};
125  -
126  -		/* Connected to FL1009 USB 3.0 controller */
127  -		pcie@2,0 {
128  -			/* Port 1, Lane 0 */
129  -			status = "okay";
130  -		};
131 90		};
132 91	};
···
168 160		button@1 {
169 161			label = "Power Button";
170 162			linux,code = <116>;	/* KEY_POWER */
171  -			gpios = <&gpio1 30 1>;
163  +			gpios = <&gpio1 30 0>;
172 164		};
173 165
174 166		button@2 {
···
182 174			linux,code = <133>;	/* KEY_COPY */
183 175			gpios = <&gpio1 26 1>;
184 176		};
177  +	};
178  +
179  +	gpio_poweroff {
180  +		compatible = "gpio-poweroff";
181  +		pinctrl-0 = <&poweroff>;
182  +		pinctrl-names = "default";
183  +		gpios = <&gpio0 8 1>;
185 184	};
186 185
187 186	};
+11
arch/arm/boot/dts/armada-xp.dtsi
···
70 70
71 71		timer@20300 {
72 72			compatible = "marvell,armada-xp-timer";
73  +			clocks = <&coreclk 2>, <&refclk>;
74  +			clock-names = "nbclk", "fixed";
73 75		};
74 76
75 77		coreclk: mvebu-sar@18230 {
···
169 167				0x184d0 0x4>;
170 168			status = "okay";
171 169		};
170  +	};
171  +	};
172  +
173  +	clocks {
174  +		/* 25 MHz reference crystal */
175  +		refclk: oscillator {
176  +			compatible = "fixed-clock";
177  +			#clock-cells = <0>;
178  +			clock-frequency = <25000000>;
172 179		};
173 180	};
174 181	};
+4 -2
arch/arm/boot/dts/at91sam9x5.dtsi
···
190 190			AT91_PIOA 8 AT91_PERIPH_A AT91_PINCTRL_NONE>;	/* PA8 periph A */
191 191		};
192 192
193  -	pinctrl_uart2_rts: uart2_rts-0 {
193  +	pinctrl_usart2_rts: usart2_rts-0 {
194 194		atmel,pins =
195 195			<AT91_PIOB 0 AT91_PERIPH_B AT91_PINCTRL_NONE>;	/* PB0 periph B */
196 196	};
197 197
198  -	pinctrl_uart2_cts: uart2_cts-0 {
198  +	pinctrl_usart2_cts: usart2_cts-0 {
199 199		atmel,pins =
200 200			<AT91_PIOB 1 AT91_PERIPH_B AT91_PINCTRL_NONE>;	/* PB1 periph B */
201 201	};
···
556 556		interrupts = <12 IRQ_TYPE_LEVEL_HIGH 0>;
557 557		dmas = <&dma0 1 AT91_DMA_CFG_PER_ID(0)>;
558 558		dma-names = "rxtx";
559  +		pinctrl-names = "default";
559 560		#address-cells = <1>;
560 561		#size-cells = <0>;
561 562		status = "disabled";
···
568 567		interrupts = <26 IRQ_TYPE_LEVEL_HIGH 0>;
569 568		dmas = <&dma1 1 AT91_DMA_CFG_PER_ID(0)>;
570 569		dma-names = "rxtx";
570  +		pinctrl-names = "default";
571 571		#address-cells = <1>;
572 572		#size-cells = <0>;
573 573		status = "disabled";
+12
arch/arm/boot/dts/atlas6.dtsi
···
181 181		interrupts = <17>;
182 182		fifosize = <128>;
183 183		clocks = <&clks 13>;
184  +		sirf,uart-dma-rx-channel = <21>;
185  +		sirf,uart-dma-tx-channel = <2>;
184 186	};
185 187
186 188	uart1: uart@b0060000 {
···
201 199		interrupts = <19>;
202 200		fifosize = <128>;
203 201		clocks = <&clks 15>;
202  +		sirf,uart-dma-rx-channel = <6>;
203  +		sirf,uart-dma-tx-channel = <7>;
204 204	};
205 205
206 206	usp0: usp@b0080000 {
···
210 206		compatible = "sirf,prima2-usp";
211 207		reg = <0xb0080000 0x10000>;
212 208		interrupts = <20>;
209  +		fifosize = <128>;
213 210		clocks = <&clks 28>;
211  +		sirf,usp-dma-rx-channel = <17>;
212  +		sirf,usp-dma-tx-channel = <18>;
214 213	};
215 214
216 215	usp1: usp@b0090000 {
···
221 214		compatible = "sirf,prima2-usp";
222 215		reg = <0xb0090000 0x10000>;
223 216		interrupts = <21>;
217  +		fifosize = <128>;
224 218		clocks = <&clks 29>;
219  +		sirf,usp-dma-rx-channel = <14>;
220  +		sirf,usp-dma-tx-channel = <15>;
225 221	};
226 222
227 223	dmac0: dma-controller@b00b0000 {
···
247 237		compatible = "sirf,prima2-vip";
248 238		reg = <0xb00C0000 0x10000>;
249 239		clocks = <&clks 31>;
240  +		interrupts = <14>;
241  +		sirf,vip-dma-rx-channel = <16>;
250 242	};
251 243
252 244	spi0: spi@b00d0000 {
+5
arch/arm/boot/dts/exynos5250.dtsi
··· 96 96 <1 14 0xf08>, 97 97 <1 11 0xf08>, 98 98 <1 10 0xf08>; 99 + /* Unfortunately we need this since some versions of U-Boot 100 + * on Exynos don't set the CNTFRQ register, so we need the 101 + * value from DT. 102 + */ 103 + clock-frequency = <24000000>; 99 104 }; 100 105 101 106 mct@101C0000 {
+2 -1
arch/arm/boot/dts/kirkwood.dtsi
··· 13 13 cpu@0 { 14 14 device_type = "cpu"; 15 15 compatible = "marvell,feroceon"; 16 + reg = <0>; 16 17 clocks = <&core_clk 1>, <&core_clk 3>, <&gate_clk 11>; 17 18 clock-names = "cpu_clk", "ddrclk", "powersave"; 18 19 }; ··· 168 167 xor@60900 { 169 168 compatible = "marvell,orion-xor"; 170 169 reg = <0x60900 0x100 171 - 0xd0B00 0x100>; 170 + 0x60B00 0x100>; 172 171 status = "okay"; 173 172 clocks = <&gate_clk 16>; 174 173
+1 -1
arch/arm/boot/dts/omap3-beagle-xm.dts
··· 11 11 12 12 / { 13 13 model = "TI OMAP3 BeagleBoard xM"; 14 - compatible = "ti,omap3-beagle-xm", "ti,omap3-beagle", "ti,omap3"; 14 + compatible = "ti,omap3-beagle-xm", "ti,omap36xx", "ti,omap3"; 15 15 16 16 cpus { 17 17 cpu@0 {
+2 -2
arch/arm/boot/dts/omap3.dtsi
··· 108 108 #address-cells = <1>; 109 109 #size-cells = <0>; 110 110 pinctrl-single,register-width = <16>; 111 - pinctrl-single,function-mask = <0x7f1f>; 111 + pinctrl-single,function-mask = <0xff1f>; 112 112 }; 113 113 114 114 omap3_pmx_wkup: pinmux@0x48002a00 { ··· 117 117 #address-cells = <1>; 118 118 #size-cells = <0>; 119 119 pinctrl-single,register-width = <16>; 120 - pinctrl-single,function-mask = <0x7f1f>; 120 + pinctrl-single,function-mask = <0xff1f>; 121 121 }; 122 122 123 123 gpio1: gpio@48310000 {
+23 -4
arch/arm/boot/dts/prima2.dtsi
··· 171 171 compatible = "simple-bus"; 172 172 #address-cells = <1>; 173 173 #size-cells = <1>; 174 - ranges = <0xb0000000 0xb0000000 0x180000>; 174 + ranges = <0xb0000000 0xb0000000 0x180000>, 175 + <0x56000000 0x56000000 0x1b00000>; 175 176 176 177 timer@b0020000 { 177 178 compatible = "sirf,prima2-tick"; ··· 197 196 uart0: uart@b0050000 { 198 197 cell-index = <0>; 199 198 compatible = "sirf,prima2-uart"; 200 - reg = <0xb0050000 0x10000>; 199 + reg = <0xb0050000 0x1000>; 201 200 interrupts = <17>; 201 + fifosize = <128>; 202 202 clocks = <&clks 13>; 203 + sirf,uart-dma-rx-channel = <21>; 204 + sirf,uart-dma-tx-channel = <2>; 203 205 }; 204 206 205 207 uart1: uart@b0060000 { 206 208 cell-index = <1>; 207 209 compatible = "sirf,prima2-uart"; 208 - reg = <0xb0060000 0x10000>; 210 + reg = <0xb0060000 0x1000>; 209 211 interrupts = <18>; 212 + fifosize = <32>; 210 213 clocks = <&clks 14>; 211 214 }; 212 215 213 216 uart2: uart@b0070000 { 214 217 cell-index = <2>; 215 218 compatible = "sirf,prima2-uart"; 216 - reg = <0xb0070000 0x10000>; 219 + reg = <0xb0070000 0x1000>; 217 220 interrupts = <19>; 221 + fifosize = <128>; 218 222 clocks = <&clks 15>; 223 + sirf,uart-dma-rx-channel = <6>; 224 + sirf,uart-dma-tx-channel = <7>; 219 225 }; 220 226 221 227 usp0: usp@b0080000 { ··· 230 222 compatible = "sirf,prima2-usp"; 231 223 reg = <0xb0080000 0x10000>; 232 224 interrupts = <20>; 225 + fifosize = <128>; 233 226 clocks = <&clks 28>; 227 + sirf,usp-dma-rx-channel = <17>; 228 + sirf,usp-dma-tx-channel = <18>; 234 229 }; 235 230 236 231 usp1: usp@b0090000 { ··· 241 230 compatible = "sirf,prima2-usp"; 242 231 reg = <0xb0090000 0x10000>; 243 232 interrupts = <21>; 233 + fifosize = <128>; 244 234 clocks = <&clks 29>; 235 + sirf,usp-dma-rx-channel = <14>; 236 + sirf,usp-dma-tx-channel = <15>; 245 237 }; 246 238 247 239 usp2: usp@b00a0000 { ··· 252 238 compatible = "sirf,prima2-usp"; 253 239 reg = <0xb00a0000 0x10000>; 254 240 interrupts = <22>; 241 + fifosize = <128>; 255 242 clocks = <&clks 30>; 243 + sirf,usp-dma-rx-channel = <10>; 244 + sirf,usp-dma-tx-channel = <11>; 256 245 }; 257 246 258 247 dmac0: dma-controller@b00b0000 { ··· 278 261 compatible = "sirf,prima2-vip"; 279 262 reg = <0xb00C0000 0x10000>; 280 263 clocks = <&clks 31>; 264 + interrupts = <14>; 265 + sirf,vip-dma-rx-channel = <16>; 281 266 }; 282 267 283 268 spi0: spi@b00d0000 {
+3 -3
arch/arm/boot/dts/r8a73a4.dtsi
··· 193 193 }; 194 194 195 195 sdhi0: sdhi@ee100000 { 196 - compatible = "renesas,r8a73a4-sdhi"; 196 + compatible = "renesas,sdhi-r8a73a4"; 197 197 reg = <0 0xee100000 0 0x100>; 198 198 interrupt-parent = <&gic>; 199 199 interrupts = <0 165 4>; ··· 202 202 }; 203 203 204 204 sdhi1: sdhi@ee120000 { 205 - compatible = "renesas,r8a73a4-sdhi"; 205 + compatible = "renesas,sdhi-r8a73a4"; 206 206 reg = <0 0xee120000 0 0x100>; 207 207 interrupt-parent = <&gic>; 208 208 interrupts = <0 166 4>; ··· 211 211 }; 212 212 213 213 sdhi2: sdhi@ee140000 { 214 - compatible = "renesas,r8a73a4-sdhi"; 214 + compatible = "renesas,sdhi-r8a73a4"; 215 215 reg = <0 0xee140000 0 0x100>; 216 216 interrupt-parent = <&gic>; 217 217 interrupts = <0 167 4>;
-1
arch/arm/boot/dts/r8a7778.dtsi
··· 96 96 pfc: pfc@fffc0000 { 97 97 compatible = "renesas,pfc-r8a7778"; 98 98 reg = <0xfffc000 0x118>; 99 - #gpio-range-cells = <3>; 100 99 }; 101 100 };
-1
arch/arm/boot/dts/r8a7779.dtsi
··· 188 188 pfc: pfc@fffc0000 { 189 189 compatible = "renesas,pfc-r8a7779"; 190 190 reg = <0xfffc0000 0x23c>; 191 - #gpio-range-cells = <3>; 192 191 }; 193 192 194 193 thermal@ffc48000 {
+4 -5
arch/arm/boot/dts/r8a7790.dtsi
··· 148 148 pfc: pfc@e6060000 { 149 149 compatible = "renesas,pfc-r8a7790"; 150 150 reg = <0 0xe6060000 0 0x250>; 151 - #gpio-range-cells = <3>; 152 151 }; 153 152 154 153 sdhi0: sdhi@ee100000 { 155 - compatible = "renesas,r8a7790-sdhi"; 154 + compatible = "renesas,sdhi-r8a7790"; 156 155 reg = <0 0xee100000 0 0x100>; 157 156 interrupt-parent = <&gic>; 158 157 interrupts = <0 165 4>; ··· 160 161 }; 161 162 162 163 sdhi1: sdhi@ee120000 { 163 - compatible = "renesas,r8a7790-sdhi"; 164 + compatible = "renesas,sdhi-r8a7790"; 164 165 reg = <0 0xee120000 0 0x100>; 165 166 interrupt-parent = <&gic>; 166 167 interrupts = <0 166 4>; ··· 169 170 }; 170 171 171 172 sdhi2: sdhi@ee140000 { 172 - compatible = "renesas,r8a7790-sdhi"; 173 + compatible = "renesas,sdhi-r8a7790"; 173 174 reg = <0 0xee140000 0 0x100>; 174 175 interrupt-parent = <&gic>; 175 176 interrupts = <0 167 4>; ··· 178 179 }; 179 180 180 181 sdhi3: sdhi@ee160000 { 181 - compatible = "renesas,r8a7790-sdhi"; 182 + compatible = "renesas,sdhi-r8a7790"; 182 183 reg = <0 0xee160000 0 0x100>; 183 184 interrupt-parent = <&gic>; 184 185 interrupts = <0 168 4>;
+3 -3
arch/arm/boot/dts/sh73a0.dtsi
··· 196 196 }; 197 197 198 198 sdhi0: sdhi@ee100000 { 199 - compatible = "renesas,r8a7740-sdhi"; 199 + compatible = "renesas,sdhi-r8a7740"; 200 200 reg = <0xee100000 0x100>; 201 201 interrupt-parent = <&gic>; 202 202 interrupts = <0 83 4 ··· 208 208 209 209 /* SDHI1 and SDHI2 have no CD pins, no need for CD IRQ */ 210 210 sdhi1: sdhi@ee120000 { 211 - compatible = "renesas,r8a7740-sdhi"; 211 + compatible = "renesas,sdhi-r8a7740"; 212 212 reg = <0xee120000 0x100>; 213 213 interrupt-parent = <&gic>; 214 214 interrupts = <0 88 4 ··· 219 219 }; 220 220 221 221 sdhi2: sdhi@ee140000 { 222 - compatible = "renesas,r8a7740-sdhi"; 222 + compatible = "renesas,sdhi-r8a7740"; 223 223 reg = <0xee140000 0x100>; 224 224 interrupt-parent = <&gic>; 225 225 interrupts = <0 104 4
+14
arch/arm/boot/install.sh
··· 20 20 # $4 - default install path (blank if root directory) 21 21 # 22 22 23 + verify () { 24 + if [ ! -f "$1" ]; then 25 + echo "" 1>&2 26 + echo " *** Missing file: $1" 1>&2 27 + echo ' *** You need to run "make" before "make install".' 1>&2 28 + echo "" 1>&2 29 + exit 1 30 + fi 31 + } 32 + 33 + # Make sure the files actually exist 34 + verify "$2" 35 + verify "$3" 36 + 23 37 # User may have a custom install script 24 38 if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi 25 39 if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
+31 -7
arch/arm/common/edma.c
··· 269 269 .ccnt = 1, 270 270 }; 271 271 272 + static const struct of_device_id edma_of_ids[] = { 273 + { .compatible = "ti,edma3", }, 274 + {} 275 + }; 276 + 272 277 /*****************************************************************************/ 273 278 274 279 static void map_dmach_queue(unsigned ctlr, unsigned ch_no, ··· 565 560 static int prepare_unused_channel_list(struct device *dev, void *data) 566 561 { 567 562 struct platform_device *pdev = to_platform_device(dev); 568 - int i, ctlr; 563 + int i, count, ctlr; 564 + struct of_phandle_args dma_spec; 569 565 566 + if (dev->of_node) { 567 + count = of_property_count_strings(dev->of_node, "dma-names"); 568 + if (count < 0) 569 + return 0; 570 + for (i = 0; i < count; i++) { 571 + if (of_parse_phandle_with_args(dev->of_node, "dmas", 572 + "#dma-cells", i, 573 + &dma_spec)) 574 + continue; 575 + 576 + if (!of_match_node(edma_of_ids, dma_spec.np)) { 577 + of_node_put(dma_spec.np); 578 + continue; 579 + } 580 + 581 + clear_bit(EDMA_CHAN_SLOT(dma_spec.args[0]), 582 + edma_cc[0]->edma_unused); 583 + of_node_put(dma_spec.np); 584 + } 585 + return 0; 586 + } 587 + 588 + /* For non-OF case */ 570 589 for (i = 0; i < pdev->num_resources; i++) { 571 590 if ((pdev->resource[i].flags & IORESOURCE_DMA) && 572 591 (int)pdev->resource[i].start >= 0) { 573 592 ctlr = EDMA_CTLR(pdev->resource[i].start); 574 593 clear_bit(EDMA_CHAN_SLOT(pdev->resource[i].start), 575 - edma_cc[ctlr]->edma_unused); 594 + edma_cc[ctlr]->edma_unused); 576 595 } 577 596 } 578 597 ··· 1790 1761 1791 1762 return 0; 1792 1763 } 1793 - 1794 - static const struct of_device_id edma_of_ids[] = { 1795 - { .compatible = "ti,edma3", }, 1796 - {} 1797 - }; 1798 1764 1799 1765 static struct platform_driver edma_driver = { 1800 1766 .driver = {
+4 -2
arch/arm/common/mcpm_entry.c
··· 51 51 { 52 52 phys_reset_t phys_reset; 53 53 54 - BUG_ON(!platform_ops); 54 + if (WARN_ON_ONCE(!platform_ops || !platform_ops->power_down)) 55 + return; 55 56 BUG_ON(!irqs_disabled()); 56 57 57 58 /* ··· 94 93 { 95 94 phys_reset_t phys_reset; 96 95 97 - BUG_ON(!platform_ops); 96 + if (WARN_ON_ONCE(!platform_ops || !platform_ops->suspend)) 97 + return; 98 98 BUG_ON(!irqs_disabled()); 99 99 100 100 /* Very similar to mcpm_cpu_power_down() */
+4 -1
arch/arm/common/sharpsl_param.c
··· 15 15 #include <linux/module.h> 16 16 #include <linux/string.h> 17 17 #include <asm/mach/sharpsl_param.h> 18 + #include <asm/memory.h> 18 19 19 20 /* 20 21 * Certain hardware parameters determined at the time of device manufacture, ··· 26 25 */ 27 26 #ifdef CONFIG_ARCH_SA1100 28 27 #define PARAM_BASE 0xe8ffc000 28 + #define param_start(x) (void *)(x) 29 29 #else 30 30 #define PARAM_BASE 0xa0000a00 31 + #define param_start(x) __va(x) 31 32 #endif 32 33 #define MAGIC_CHG(a,b,c,d) ( ( d << 24 ) | ( c << 16 ) | ( b << 8 ) | a ) 33 34 ··· 44 41 45 42 void sharpsl_save_param(void) 46 43 { 47 - memcpy(&sharpsl_param, (void *)PARAM_BASE, sizeof(struct sharpsl_param_info)); 44 + memcpy(&sharpsl_param, param_start(PARAM_BASE), sizeof(struct sharpsl_param_info)); 48 45 49 46 if (sharpsl_param.comadj_keyword != COMADJ_MAGIC) 50 47 sharpsl_param.comadj=-1;
+1
arch/arm/configs/multi_v7_defconfig
··· 135 135 CONFIG_MMC_ARMMMCI=y 136 136 CONFIG_MMC_SDHCI=y 137 137 CONFIG_MMC_SDHCI_PLTFM=y 138 + CONFIG_MMC_SDHCI_ESDHC_IMX=y 138 139 CONFIG_MMC_SDHCI_TEGRA=y 139 140 CONFIG_MMC_SDHCI_SPEAR=y 140 141 CONFIG_MMC_OMAP=y
-1
arch/arm/include/asm/Kbuild
··· 31 31 generic-y += termios.h 32 32 generic-y += timex.h 33 33 generic-y += trace_clock.h 34 - generic-y += types.h 35 34 generic-y += unaligned.h
+1 -1
arch/arm/include/asm/jump_label.h
··· 16 16 17 17 static __always_inline bool arch_static_branch(struct static_key *key) 18 18 { 19 - asm goto("1:\n\t" 19 + asm_volatile_goto("1:\n\t" 20 20 JUMP_LABEL_NOP "\n\t" 21 21 ".pushsection __jump_table, \"aw\"\n\t" 22 22 ".word 1b, %l[l_yes], %c0\n\t"
+10 -4
arch/arm/include/asm/mcpm.h
··· 76 76 * 77 77 * This must be called with interrupts disabled. 78 78 * 79 - * This does not return. Re-entry in the kernel is expected via 80 - * mcpm_entry_point. 79 + * On success this does not return. Re-entry in the kernel is expected 80 + * via mcpm_entry_point. 81 + * 82 + * This will return if mcpm_platform_register() has not been called 83 + * previously in which case the caller should take appropriate action. 81 84 */ 82 85 void mcpm_cpu_power_down(void); 83 86 ··· 101 98 * 102 99 * This must be called with interrupts disabled. 103 100 * 104 - * This does not return. Re-entry in the kernel is expected via 105 - * mcpm_entry_point. 101 + * On success this does not return. Re-entry in the kernel is expected 102 + * via mcpm_entry_point. 103 + * 104 + * This will return if mcpm_platform_register() has not been called 105 + * previously in which case the caller should take appropriate action. 106 106 */ 107 107 void mcpm_cpu_suspend(u64 expected_residency); 108 108
+6
arch/arm/include/asm/syscall.h
··· 57 57 unsigned int i, unsigned int n, 58 58 unsigned long *args) 59 59 { 60 + if (n == 0) 61 + return; 62 + 60 63 if (i + n > SYSCALL_MAX_ARGS) { 61 64 unsigned long *args_bad = args + SYSCALL_MAX_ARGS - i; 62 65 unsigned int n_bad = n + i - SYSCALL_MAX_ARGS; ··· 84 81 unsigned int i, unsigned int n, 85 82 const unsigned long *args) 86 83 { 84 + if (n == 0) 85 + return; 86 + 87 87 if (i + n > SYSCALL_MAX_ARGS) { 88 88 pr_warning("%s called with max args %d, handling only %d\n", 89 89 __func__, i + n, SYSCALL_MAX_ARGS);
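The clamping this hunk guards can be sketched stand-alone; `pt_regs` is reduced to a plain array here and the helper names are illustrative, not the kernel's:

```c
#include <assert.h>
#include <string.h>

#define SYSCALL_MAX_ARGS 7

/* Sketch of syscall_get_arguments() clamping with the new n == 0 guard. */
static void get_args(const unsigned long *regs, unsigned int i,
		     unsigned int n, unsigned long *args)
{
	if (n == 0)
		return;				/* the early return this patch adds */
	if (i + n > SYSCALL_MAX_ARGS) {		/* clamp and zero the excess */
		unsigned int n_bad = n + i - SYSCALL_MAX_ARGS;
		memset(args + SYSCALL_MAX_ARGS - i, 0, n_bad * sizeof(*args));
		n = SYSCALL_MAX_ARGS - i;
	}
	memcpy(args, regs + i, n * sizeof(*args));
}

/* Test helper: copy n args starting at i from a fixed register set
 * into a zeroed buffer and return slot idx. */
static unsigned long arg_at(unsigned int i, unsigned int n, unsigned int idx)
{
	unsigned long regs[SYSCALL_MAX_ARGS] = { 1, 2, 3, 4, 5, 6, 7 };
	unsigned long out[SYSCALL_MAX_ARGS] = { 0 };

	get_args(regs, i, n, out);
	return out[idx];
}
```

Without the guard, `n == 0` combined with `i == SYSCALL_MAX_ARGS` would fall into the clamping branch and touch memory it should not.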
+20 -1
arch/arm/kernel/head.S
··· 487 487 mrc p15, 0, r0, c0, c0, 5 @ read MPIDR 488 488 and r0, r0, #0xc0000000 @ multiprocessing extensions and 489 489 teq r0, #0x80000000 @ not part of a uniprocessor system? 490 - moveq pc, lr @ yes, assume SMP 490 + bne __fixup_smp_on_up @ no, assume UP 491 + 492 + @ Core indicates it is SMP. Check for Aegis SOC where a single 493 + @ Cortex-A9 CPU is present but SMP operations fault. 494 + mov r4, #0x41000000 495 + orr r4, r4, #0x0000c000 496 + orr r4, r4, #0x00000090 497 + teq r3, r4 @ Check for ARM Cortex-A9 498 + movne pc, lr @ Not ARM Cortex-A9, 499 + 500 + @ If a future SoC *does* use 0x0 as the PERIPH_BASE, then the 501 + @ below address check will need to be #ifdef'd or equivalent 502 + @ for the Aegis platform. 503 + mrc p15, 4, r0, c15, c0 @ get SCU base address 504 + teq r0, #0x0 @ '0' on actual UP A9 hardware 505 + beq __fixup_smp_on_up @ So its an A9 UP 506 + ldr r0, [r0, #4] @ read SCU Config 507 + and r0, r0, #0x3 @ number of CPUs 508 + teq r0, #0x0 @ is 1? 509 + movne pc, lr 491 510 492 511 __fixup_smp_on_up: 493 512 adr r0, 1f
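The constant built into r4 above is 0x41000000 | 0x0000c000 | 0x00000090 = 0x4100c090: implementer 0x41 (ARM) in MIDR[31:24] and primary part number 0xC09 (Cortex-A9) in MIDR[15:4]. A sketch of the same comparison in C, assuming (as the assembly does) that the variant and revision fields have already been masked off:

```c
#include <assert.h>
#include <stdint.h>

/* Does a masked MIDR value identify an ARM Cortex-A9?  Mirrors the
 * r3/r4 comparison added to __fixup_smp; field positions follow the
 * standard MIDR layout. */
static int is_arm_cortex_a9(uint32_t midr_masked)
{
	const uint32_t a9 = 0x41000000 | 0x0000c000 | 0x00000090;

	return midr_masked == a9;
}
```

On a match, the added code goes on to read the SCU Configuration register and treats a CPU count field of 0 (one CPU) as uniprocessor.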
+1 -1
arch/arm/mach-at91/at91rm9200_time.c
··· 93 93 94 94 static struct irqaction at91rm9200_timer_irq = { 95 95 .name = "at91_tick", 96 - .flags = IRQF_SHARED | IRQF_DISABLED | IRQF_TIMER | IRQF_IRQPOLL, 96 + .flags = IRQF_SHARED | IRQF_TIMER | IRQF_IRQPOLL, 97 97 .handler = at91rm9200_timer_interrupt, 98 98 .irq = NR_IRQS_LEGACY + AT91_ID_SYS, 99 99 };
+1 -1
arch/arm/mach-at91/at91sam926x_time.c
··· 171 171 172 172 static struct irqaction at91sam926x_pit_irq = { 173 173 .name = "at91_tick", 174 - .flags = IRQF_SHARED | IRQF_DISABLED | IRQF_TIMER | IRQF_IRQPOLL, 174 + .flags = IRQF_SHARED | IRQF_TIMER | IRQF_IRQPOLL, 175 175 .handler = at91sam926x_pit_interrupt, 176 176 .irq = NR_IRQS_LEGACY + AT91_ID_SYS, 177 177 };
+8
arch/arm/mach-at91/at91sam9g45_reset.S
··· 16 16 #include "at91_rstc.h" 17 17 .arm 18 18 19 + /* 20 + * at91_ramc_base is an array void* 21 + * init at NULL if only one DDR controler is present in or DT 22 + */ 19 23 .globl at91sam9g45_restart 20 24 21 25 at91sam9g45_restart: 22 26 ldr r5, =at91_ramc_base @ preload constants 23 27 ldr r0, [r5] 28 + ldr r5, [r5, #4] @ ddr1 29 + cmp r5, #0 24 30 ldr r4, =at91_rstc_base 25 31 ldr r1, [r4] 26 32 ··· 36 30 37 31 .balign 32 @ align to cache line 38 32 33 + strne r2, [r5, #AT91_DDRSDRC_RTR] @ disable DDR1 access 34 + strne r3, [r5, #AT91_DDRSDRC_LPR] @ power down DDR1 39 35 str r2, [r0, #AT91_DDRSDRC_RTR] @ disable DDR0 access 40 36 str r3, [r0, #AT91_DDRSDRC_LPR] @ power down DDR0 41 37 str r4, [r1, #AT91_RSTC_CR] @ reset processor
+1 -1
arch/arm/mach-at91/at91x40_time.c
··· 57 57 58 58 static struct irqaction at91x40_timer_irq = { 59 59 .name = "at91_tick", 60 - .flags = IRQF_DISABLED | IRQF_TIMER, 60 + .flags = IRQF_TIMER, 61 61 .handler = at91x40_timer_interrupt 62 62 }; 63 63
+1 -1
arch/arm/mach-davinci/board-dm365-evm.c
··· 176 176 .context = (void *)0x7f00, 177 177 }; 178 178 179 - static struct snd_platform_data dm365_evm_snd_data = { 179 + static struct snd_platform_data dm365_evm_snd_data __maybe_unused = { 180 180 .asp_chan_q = EVENTQ_3, 181 181 }; 182 182
+2 -2
arch/arm/mach-davinci/include/mach/serial.h
··· 15 15 16 16 #include <mach/hardware.h> 17 17 18 - #include <linux/platform_device.h> 19 - 20 18 #define DAVINCI_UART0_BASE (IO_PHYS + 0x20000) 21 19 #define DAVINCI_UART1_BASE (IO_PHYS + 0x20400) 22 20 #define DAVINCI_UART2_BASE (IO_PHYS + 0x20800) ··· 37 39 #define UART_DM646X_SCR_TX_WATERMARK 0x08 38 40 39 41 #ifndef __ASSEMBLY__ 42 + #include <linux/platform_device.h> 43 + 40 44 extern int davinci_serial_init(struct platform_device *); 41 45 #endif 42 46
+7
arch/arm/mach-integrator/pci_v3.h
··· 1 1 /* Simple oneliner include to the PCIv3 early init */ 2 + #ifdef CONFIG_PCI 2 3 extern int pci_v3_early_init(void); 4 + #else 5 + static inline int pci_v3_early_init(void) 6 + { 7 + return 0; 8 + } 9 + #endif
+7 -1
arch/arm/mach-mvebu/coherency.c
··· 140 140 coherency_base = of_iomap(np, 0); 141 141 coherency_cpu_base = of_iomap(np, 1); 142 142 set_cpu_coherent(cpu_logical_map(smp_processor_id()), 0); 143 + of_node_put(np); 143 144 } 144 145 145 146 return 0; ··· 148 147 149 148 static int __init coherency_late_init(void) 150 149 { 151 - if (of_find_matching_node(NULL, of_coherency_table)) 150 + struct device_node *np; 151 + 152 + np = of_find_matching_node(NULL, of_coherency_table); 153 + if (np) { 152 154 bus_register_notifier(&platform_bus_type, 153 155 &mvebu_hwcc_platform_nb); 156 + of_node_put(np); 157 + } 154 158 return 0; 155 159 } 156 160
+1
arch/arm/mach-mvebu/pmsu.c
··· 67 67 pr_info("Initializing Power Management Service Unit\n"); 68 68 pmsu_mp_base = of_iomap(np, 0); 69 69 pmsu_reset_base = of_iomap(np, 1); 70 + of_node_put(np); 70 71 } 71 72 72 73 return 0;
+1
arch/arm/mach-mvebu/system-controller.c
··· 98 98 BUG_ON(!match); 99 99 system_controller_base = of_iomap(np, 0); 100 100 mvebu_sc = (struct mvebu_system_controller *)match->data; 101 + of_node_put(np); 101 102 } 102 103 103 104 return 0;
+18
arch/arm/mach-omap2/board-generic.c
··· 129 129 .restart = omap3xxx_restart, 130 130 MACHINE_END 131 131 132 + static const char *omap36xx_boards_compat[] __initdata = { 133 + "ti,omap36xx", 134 + NULL, 135 + }; 136 + 137 + DT_MACHINE_START(OMAP36XX_DT, "Generic OMAP36xx (Flattened Device Tree)") 138 + .reserve = omap_reserve, 139 + .map_io = omap3_map_io, 140 + .init_early = omap3630_init_early, 141 + .init_irq = omap_intc_of_init, 142 + .handle_irq = omap3_intc_handle_irq, 143 + .init_machine = omap_generic_init, 144 + .init_late = omap3_init_late, 145 + .init_time = omap3_sync32k_timer_init, 146 + .dt_compat = omap36xx_boards_compat, 147 + .restart = omap3xxx_restart, 148 + MACHINE_END 149 + 132 150 static const char *omap3_gp_boards_compat[] __initdata = { 133 151 "ti,omap3-beagle", 134 152 "timll,omap3-devkit8000",
+9
arch/arm/mach-omap2/board-rx51-peripherals.c
··· 167 167 .name = "lp5523:kb1", 168 168 .chan_nr = 0, 169 169 .led_current = 50, 170 + .max_current = 100, 170 171 }, { 171 172 .name = "lp5523:kb2", 172 173 .chan_nr = 1, 173 174 .led_current = 50, 175 + .max_current = 100, 174 176 }, { 175 177 .name = "lp5523:kb3", 176 178 .chan_nr = 2, 177 179 .led_current = 50, 180 + .max_current = 100, 178 181 }, { 179 182 .name = "lp5523:kb4", 180 183 .chan_nr = 3, 181 184 .led_current = 50, 185 + .max_current = 100, 182 186 }, { 183 187 .name = "lp5523:b", 184 188 .chan_nr = 4, 185 189 .led_current = 50, 190 + .max_current = 100, 186 191 }, { 187 192 .name = "lp5523:g", 188 193 .chan_nr = 5, 189 194 .led_current = 50, 195 + .max_current = 100, 190 196 }, { 191 197 .name = "lp5523:r", 192 198 .chan_nr = 6, 193 199 .led_current = 50, 200 + .max_current = 100, 194 201 }, { 195 202 .name = "lp5523:kb5", 196 203 .chan_nr = 7, 197 204 .led_current = 50, 205 + .max_current = 100, 198 206 }, { 199 207 .name = "lp5523:kb6", 200 208 .chan_nr = 8, 201 209 .led_current = 50, 210 + .max_current = 100, 202 211 } 203 212 }; 204 213
+11 -1
arch/arm/mach-omap2/gpmc-onenand.c
··· 272 272 struct gpmc_timings t; 273 273 int ret; 274 274 275 - if (gpmc_onenand_data->of_node) 275 + if (gpmc_onenand_data->of_node) { 276 276 gpmc_read_settings_dt(gpmc_onenand_data->of_node, 277 277 &onenand_async); 278 + if (onenand_async.sync_read || onenand_async.sync_write) { 279 + if (onenand_async.sync_write) 280 + gpmc_onenand_data->flags |= 281 + ONENAND_SYNC_READWRITE; 282 + else 283 + gpmc_onenand_data->flags |= ONENAND_SYNC_READ; 284 + onenand_async.sync_read = false; 285 + onenand_async.sync_write = false; 286 + } 287 + } 278 288 279 289 omap2_onenand_set_async_mode(onenand_base); 280 290
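The flag translation this hunk performs can be sketched in isolation; the flag bit values below are stand-ins, not the kernel's `ONENAND_SYNC_*` definitions:

```c
#include <assert.h>
#include <stdbool.h>

#define ONENAND_SYNC_READ	(1 << 0)	/* stand-in flag values */
#define ONENAND_SYNC_READWRITE	(1 << 1)

/* Sketch of the DT translation: sync_read/sync_write from the GPMC
 * settings become the OneNAND platform flags (write implies both),
 * after which the settings themselves are cleared so the probe still
 * starts in async mode. */
static int onenand_flags_from_dt(bool sync_read, bool sync_write)
{
	int flags = 0;

	if (sync_read || sync_write)
		flags |= sync_write ? ONENAND_SYNC_READWRITE
				    : ONENAND_SYNC_READ;
	return flags;
}
```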
+1 -3
arch/arm/mach-omap2/mux.h
··· 28 28 #define OMAP_PULL_UP (1 << 4) 29 29 #define OMAP_ALTELECTRICALSEL (1 << 5) 30 30 31 - /* 34xx specific mux bit defines */ 31 + /* omap3/4/5 specific mux bit defines */ 32 32 #define OMAP_INPUT_EN (1 << 8) 33 33 #define OMAP_OFF_EN (1 << 9) 34 34 #define OMAP_OFFOUT_EN (1 << 10) ··· 36 36 #define OMAP_OFF_PULL_EN (1 << 12) 37 37 #define OMAP_OFF_PULL_UP (1 << 13) 38 38 #define OMAP_WAKEUP_EN (1 << 14) 39 - 40 - /* 44xx specific mux bit defines */ 41 39 #define OMAP_WAKEUP_EVENT (1 << 15) 42 40 43 41 /* Active pin states */
+2 -2
arch/arm/mach-omap2/timer.c
··· 628 628 #endif /* CONFIG_HAVE_ARM_TWD */ 629 629 #endif /* CONFIG_ARCH_OMAP4 */ 630 630 631 - #ifdef CONFIG_SOC_OMAP5 631 + #if defined(CONFIG_SOC_OMAP5) || defined(CONFIG_SOC_DRA7XX) 632 632 void __init omap5_realtime_timer_init(void) 633 633 { 634 634 omap4_sync32k_timer_init(); ··· 636 636 637 637 clocksource_of_init(); 638 638 } 639 - #endif /* CONFIG_SOC_OMAP5 */ 639 + #endif /* CONFIG_SOC_OMAP5 || CONFIG_SOC_DRA7XX */ 640 640 641 641 /** 642 642 * omap_timer_init - build and register timer device with an
+2 -2
arch/arm/mach-shmobile/board-armadillo800eva.c
··· 1108 1108 PIN_MAP_MUX_GROUP_DEFAULT("asoc-simple-card.1", "pfc-r8a7740", 1109 1109 "fsib_mclk_in", "fsib"), 1110 1110 /* GETHER */ 1111 - PIN_MAP_MUX_GROUP_DEFAULT("sh-eth", "pfc-r8a7740", 1111 + PIN_MAP_MUX_GROUP_DEFAULT("r8a7740-gether", "pfc-r8a7740", 1112 1112 "gether_mii", "gether"), 1113 - PIN_MAP_MUX_GROUP_DEFAULT("sh-eth", "pfc-r8a7740", 1113 + PIN_MAP_MUX_GROUP_DEFAULT("r8a7740-gether", "pfc-r8a7740", 1114 1114 "gether_int", "gether"), 1115 1115 /* HDMI */ 1116 1116 PIN_MAP_MUX_GROUP_DEFAULT("sh-mobile-hdmi", "pfc-r8a7740",
+26 -1
arch/arm/mach-shmobile/board-lager.c
··· 29 29 #include <linux/pinctrl/machine.h> 30 30 #include <linux/platform_data/gpio-rcar.h> 31 31 #include <linux/platform_device.h> 32 + #include <linux/phy.h> 32 33 #include <linux/regulator/fixed.h> 33 34 #include <linux/regulator/machine.h> 34 35 #include <linux/sh_eth.h> ··· 156 155 &ether_pdata, sizeof(ether_pdata)); 157 156 } 158 157 158 + /* 159 + * Ether LEDs on the Lager board are named LINK and ACTIVE which corresponds 160 + * to non-default 01 setting of the Micrel KSZ8041 PHY control register 1 bits 161 + * 14-15. We have to set them back to 01 from the default 00 value each time 162 + * the PHY is reset. It's also important because the PHY's LED0 signal is 163 + * connected to SoC's ETH_LINK signal and in the PHY's default mode it will 164 + * bounce on and off after each packet, which we apparently want to avoid. 165 + */ 166 + static int lager_ksz8041_fixup(struct phy_device *phydev) 167 + { 168 + u16 phyctrl1 = phy_read(phydev, 0x1e); 169 + 170 + phyctrl1 &= ~0xc000; 171 + phyctrl1 |= 0x4000; 172 + return phy_write(phydev, 0x1e, phyctrl1); 173 + } 174 + 175 + static void __init lager_init(void) 176 + { 177 + lager_add_standard_devices(); 178 + 179 + phy_register_fixup_for_id("r8a7790-ether-ff:01", lager_ksz8041_fixup); 180 + } 181 + 159 182 static const char *lager_boards_compat_dt[] __initdata = { 160 183 "renesas,lager", 161 184 NULL, ··· 188 163 DT_MACHINE_START(LAGER_DT, "lager") 189 164 .init_early = r8a7790_init_delay, 190 165 .init_time = r8a7790_timer_init, 191 - .init_machine = lager_add_standard_devices, 166 + .init_machine = lager_init, 192 167 .dt_compat = lager_boards_compat_dt, 193 168 MACHINE_END
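The read-modify-write the fixup applies to KSZ8041 register 0x1e, isolated as a pure function (register layout taken from the comment in the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Rewrite bits 15:14 of the KSZ8041 PHY control register from their
 * 00 reset value to 01 (LINK/ACTIVITY LED mode), leaving the other
 * bits untouched. */
static uint16_t ksz8041_led_mode(uint16_t phyctrl1)
{
	phyctrl1 &= ~0xc000;	/* clear the LED mode field, bits 15:14 */
	phyctrl1 |= 0x4000;	/* select mode 01 */
	return phyctrl1;
}
```

In the board code this runs inside a `phy_register_fixup_for_id()` callback so the field is restored after every PHY reset.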
+10 -1
arch/arm/mach-vexpress/tc2_pm.c
··· 131 131 } else 132 132 BUG(); 133 133 134 + /* 135 + * If the CPU is committed to power down, make sure 136 + * the power controller will be in charge of waking it 137 + * up upon IRQ, ie IRQ lines are cut from GIC CPU IF 138 + * to the CPU by disabling the GIC CPU IF to prevent wfi 139 + * from completing execution behind power controller back 140 + */ 141 + if (!skip_wfi) 142 + gic_cpu_if_down(); 143 + 134 144 if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) { 135 145 arch_spin_unlock(&tc2_pm_lock); 136 146 ··· 241 231 cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0); 242 232 cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1); 243 233 ve_spc_set_resume_addr(cluster, cpu, virt_to_phys(mcpm_entry_point)); 244 - gic_cpu_if_down(); 245 234 tc2_pm_down(residency); 246 235 } 247 236
+28 -15
arch/arm/mm/dma-mapping.c
··· 1232 1232 break; 1233 1233 1234 1234 len = (j - i) << PAGE_SHIFT; 1235 - ret = iommu_map(mapping->domain, iova, phys, len, 0); 1235 + ret = iommu_map(mapping->domain, iova, phys, len, 1236 + IOMMU_READ|IOMMU_WRITE); 1236 1237 if (ret < 0) 1237 1238 goto fail; 1238 1239 iova += len; ··· 1432 1431 GFP_KERNEL); 1433 1432 } 1434 1433 1434 + static int __dma_direction_to_prot(enum dma_data_direction dir) 1435 + { 1436 + int prot; 1437 + 1438 + switch (dir) { 1439 + case DMA_BIDIRECTIONAL: 1440 + prot = IOMMU_READ | IOMMU_WRITE; 1441 + break; 1442 + case DMA_TO_DEVICE: 1443 + prot = IOMMU_READ; 1444 + break; 1445 + case DMA_FROM_DEVICE: 1446 + prot = IOMMU_WRITE; 1447 + break; 1448 + default: 1449 + prot = 0; 1450 + } 1451 + 1452 + return prot; 1453 + } 1454 + 1435 1455 /* 1436 1456 * Map a part of the scatter-gather list into contiguous io address space 1437 1457 */ ··· 1466 1444 int ret = 0; 1467 1445 unsigned int count; 1468 1446 struct scatterlist *s; 1447 + int prot; 1469 1448 1470 1449 size = PAGE_ALIGN(size); 1471 1450 *handle = DMA_ERROR_CODE; ··· 1483 1460 !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs)) 1484 1461 __dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir); 1485 1462 1486 - ret = iommu_map(mapping->domain, iova, phys, len, 0); 1463 + prot = __dma_direction_to_prot(dir); 1464 + 1465 + ret = iommu_map(mapping->domain, iova, phys, len, prot); 1487 1466 if (ret < 0) 1488 1467 goto fail; 1489 1468 count += len >> PAGE_SHIFT; ··· 1690 1665 if (dma_addr == DMA_ERROR_CODE) 1691 1666 return dma_addr; 1692 1667 1693 - switch (dir) { 1694 - case DMA_BIDIRECTIONAL: 1695 - prot = IOMMU_READ | IOMMU_WRITE; 1696 - break; 1697 - case DMA_TO_DEVICE: 1698 - prot = IOMMU_READ; 1699 - break; 1700 - case DMA_FROM_DEVICE: 1701 - prot = IOMMU_WRITE; 1702 - break; 1703 - default: 1704 - prot = 0; 1705 - } 1668 + prot = __dma_direction_to_prot(dir); 1706 1669 1707 1670 ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len, prot); 1708 1671 if (ret < 0)
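The helper this patch factors out maps a DMA direction to IOMMU permission bits; a stand-alone sketch with simplified stand-ins for the kernel's enum and flag values:

```c
#include <assert.h>

enum dma_data_direction {
	DMA_BIDIRECTIONAL, DMA_TO_DEVICE, DMA_FROM_DEVICE, DMA_NONE
};

#define IOMMU_READ	(1 << 0)
#define IOMMU_WRITE	(1 << 1)

/* Mirror of __dma_direction_to_prot(): the device reads from memory
 * when DMA goes to the device, and writes memory when DMA comes from
 * the device. */
static int dma_direction_to_prot(enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_BIDIRECTIONAL:
		return IOMMU_READ | IOMMU_WRITE;
	case DMA_TO_DEVICE:
		return IOMMU_READ;
	case DMA_FROM_DEVICE:
		return IOMMU_WRITE;
	default:
		return 0;
	}
}
```

The first two hunks also fix call sites that passed a literal `0` prot to `iommu_map()`, which would map the range with no access permissions at all.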
-3
arch/arm/mm/init.c
··· 17 17 #include <linux/nodemask.h> 18 18 #include <linux/initrd.h> 19 19 #include <linux/of_fdt.h> 20 - #include <linux/of_reserved_mem.h> 21 20 #include <linux/highmem.h> 22 21 #include <linux/gfp.h> 23 22 #include <linux/memblock.h> ··· 377 378 /* reserve any platform specific memblock areas */ 378 379 if (mdesc->reserve) 379 380 mdesc->reserve(); 380 - 381 - early_init_dt_scan_reserved_mem(); 382 381 383 382 /* 384 383 * reserve memory for DMA contigouos allocations,
-7
arch/arm64/Kconfig.debug
··· 6 6 bool 7 7 default y 8 8 9 - config DEBUG_STACK_USAGE 10 - bool "Enable stack utilization instrumentation" 11 - depends on DEBUG_KERNEL 12 - help 13 - Enables the display of the minimum amount of free stack which each 14 - task has ever had available in the sysrq-T output. 15 - 16 9 config EARLY_PRINTK 17 10 bool "Early printk support" 18 11 default y
+4 -1
arch/arm64/configs/defconfig
··· 42 42 # CONFIG_WIRELESS is not set 43 43 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 44 44 CONFIG_DEVTMPFS=y 45 - # CONFIG_BLK_DEV is not set 45 + CONFIG_BLK_DEV=y 46 46 CONFIG_SCSI=y 47 47 # CONFIG_SCSI_PROC_FS is not set 48 48 CONFIG_BLK_DEV_SD=y ··· 72 72 # CONFIG_IOMMU_SUPPORT is not set 73 73 CONFIG_EXT2_FS=y 74 74 CONFIG_EXT3_FS=y 75 + CONFIG_EXT4_FS=y 75 76 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 76 77 # CONFIG_EXT3_FS_XATTR is not set 77 78 CONFIG_FUSE_FS=y ··· 91 90 CONFIG_DEBUG_INFO=y 92 91 # CONFIG_FTRACE is not set 93 92 CONFIG_ATOMIC64_SELFTEST=y 93 + CONFIG_VIRTIO_MMIO=y 94 + CONFIG_VIRTIO_BLK=y
+6 -4
arch/arm64/include/asm/uaccess.h
··· 166 166 167 167 #define get_user(x, ptr) \ 168 168 ({ \ 169 + __typeof__(*(ptr)) __user *__p = (ptr); \ 169 170 might_fault(); \ 170 - access_ok(VERIFY_READ, (ptr), sizeof(*(ptr))) ? \ 171 - __get_user((x), (ptr)) : \ 171 + access_ok(VERIFY_READ, __p, sizeof(*__p)) ? \ 172 + __get_user((x), __p) : \ 172 173 ((x) = 0, -EFAULT); \ 173 174 }) 174 175 ··· 228 227 229 228 #define put_user(x, ptr) \ 230 229 ({ \ 230 + __typeof__(*(ptr)) __user *__p = (ptr); \ 231 231 might_fault(); \ 232 - access_ok(VERIFY_WRITE, (ptr), sizeof(*(ptr))) ? \ 233 - __put_user((x), (ptr)) : \ 232 + access_ok(VERIFY_WRITE, __p, sizeof(*__p)) ? \ 233 + __put_user((x), __p) : \ 234 234 -EFAULT; \ 235 235 }) 236 236
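One effect of binding `(ptr)` to a local `__p` is that the argument expression is evaluated exactly once, even though the macro body uses it in both the `access_ok()` check and the copy. A hypothetical illustration of the difference, not the kernel macros themselves:

```c
#include <assert.h>

static int calls;	/* counts evaluations of the pointer expression */
static int value = 42;

static int *ptr_with_side_effect(void)
{
	calls++;
	return &value;
}

/* Naive macro: expands its argument twice. */
#define GET_TWICE(x, p)	((x) = (*(p) != 0) ? *(p) : 0)
/* Patched style: bind the argument to a local once. */
#define GET_ONCE(x, p)	do { int *__p = (p); \
			     (x) = (*__p != 0) ? *__p : 0; } while (0)

static int evaluations_get_twice(void)
{
	int x;

	calls = 0;
	GET_TWICE(x, ptr_with_side_effect());
	(void)x;
	return calls;	/* the pointer expression ran twice */
}

static int evaluations_get_once(void)
{
	int x;

	calls = 0;
	GET_ONCE(x, ptr_with_side_effect());
	(void)x;
	return calls;	/* exactly once */
}
```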
+2
arch/arm64/kernel/fpsimd.c
··· 80 80 81 81 void fpsimd_flush_thread(void) 82 82 { 83 + preempt_disable(); 83 84 memset(&current->thread.fpsimd_state, 0, sizeof(struct fpsimd_state)); 84 85 fpsimd_load_state(&current->thread.fpsimd_state); 86 + preempt_enable(); 85 87 } 86 88 87 89 #ifdef CONFIG_KERNEL_MODE_NEON
+1 -1
arch/arm64/mm/tlb.S
··· 35 35 */ 36 36 ENTRY(__cpu_flush_user_tlb_range) 37 37 vma_vm_mm x3, x2 // get vma->vm_mm 38 - mmid x3, x3 // get vm_mm->context.id 38 + mmid w3, x3 // get vm_mm->context.id 39 39 dsb sy 40 40 lsr x0, x0, #12 // align address 41 41 lsr x1, x1, #12
+1 -1
arch/mips/alchemy/board-mtx1.c
··· 276 276 .resource = alchemy_pci_host_res, 277 277 }; 278 278 279 - static struct __initdata platform_device * mtx1_devs[] = { 279 + static struct platform_device *mtx1_devs[] __initdata = { 280 280 &mtx1_pci_host, 281 281 &mtx1_gpio_leds, 282 282 &mtx1_wdt,
+1 -1
arch/mips/include/asm/jump_label.h
··· 22 22 23 23 static __always_inline bool arch_static_branch(struct static_key *key) 24 24 { 25 - asm goto("1:\tnop\n\t" 25 + asm_volatile_goto("1:\tnop\n\t" 26 26 "nop\n\t" 27 27 ".pushsection __jump_table, \"aw\"\n\t" 28 28 WORD_INSN " 1b, %l[l_yes], %0\n\t"
+1 -1
arch/mips/kernel/octeon_switch.S
··· 73 73 3: 74 74 75 75 #if defined(CONFIG_CC_STACKPROTECTOR) && !defined(CONFIG_SMP) 76 - PTR_L t8, __stack_chk_guard 76 + PTR_LA t8, __stack_chk_guard 77 77 LONG_L t9, TASK_STACK_CANARY(a1) 78 78 LONG_S t9, 0(t8) 79 79 #endif
+1 -1
arch/mips/kernel/r2300_switch.S
··· 67 67 1: 68 68 69 69 #if defined(CONFIG_CC_STACKPROTECTOR) && !defined(CONFIG_SMP) 70 - PTR_L t8, __stack_chk_guard 70 + PTR_LA t8, __stack_chk_guard 71 71 LONG_L t9, TASK_STACK_CANARY(a1) 72 72 LONG_S t9, 0(t8) 73 73 #endif
+1 -1
arch/mips/kernel/r4k_switch.S
··· 69 69 1: 70 70 71 71 #if defined(CONFIG_CC_STACKPROTECTOR) && !defined(CONFIG_SMP) 72 - PTR_L t8, __stack_chk_guard 72 + PTR_LA t8, __stack_chk_guard 73 73 LONG_L t9, TASK_STACK_CANARY(a1) 74 74 LONG_S t9, 0(t8) 75 75 #endif
+2
arch/mips/mm/c-r4k.c
··· 609 609 r4k_blast_scache(); 610 610 else 611 611 blast_scache_range(addr, addr + size); 612 + preempt_enable(); 612 613 __sync(); 613 614 return; 614 615 } ··· 651 650 */ 652 651 blast_inv_scache_range(addr, addr + size); 653 652 } 653 + preempt_enable(); 654 654 __sync(); 655 655 return; 656 656 }
+2
arch/parisc/configs/712_defconfig
··· 40 40 CONFIG_LLC2=m 41 41 CONFIG_NET_PKTGEN=m 42 42 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 43 + CONFIG_DEVTMPFS=y 44 + CONFIG_DEVTMPFS_MOUNT=y 43 45 # CONFIG_STANDALONE is not set 44 46 # CONFIG_PREVENT_FIRMWARE_BUILD is not set 45 47 CONFIG_PARPORT=y
+2
arch/parisc/configs/a500_defconfig
··· 79 79 CONFIG_LLC2=m 80 80 CONFIG_NET_PKTGEN=m 81 81 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 82 + CONFIG_DEVTMPFS=y 83 + CONFIG_DEVTMPFS_MOUNT=y 82 84 # CONFIG_STANDALONE is not set 83 85 # CONFIG_PREVENT_FIRMWARE_BUILD is not set 84 86 CONFIG_BLK_DEV_UMEM=m
+3
arch/parisc/configs/b180_defconfig
··· 4 4 CONFIG_IKCONFIG_PROC=y 5 5 CONFIG_LOG_BUF_SHIFT=16 6 6 CONFIG_SYSFS_DEPRECATED_V2=y 7 + CONFIG_BLK_DEV_INITRD=y 7 8 CONFIG_SLAB=y 8 9 CONFIG_MODULES=y 9 10 CONFIG_MODVERSIONS=y ··· 28 27 # CONFIG_INET_LRO is not set 29 28 CONFIG_IPV6=y 30 29 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 30 + CONFIG_DEVTMPFS=y 31 + CONFIG_DEVTMPFS_MOUNT=y 31 32 # CONFIG_PREVENT_FIRMWARE_BUILD is not set 32 33 CONFIG_PARPORT=y 33 34 CONFIG_PARPORT_PC=y
+3
arch/parisc/configs/c3000_defconfig
··· 5 5 CONFIG_IKCONFIG_PROC=y 6 6 CONFIG_LOG_BUF_SHIFT=16 7 7 CONFIG_SYSFS_DEPRECATED_V2=y 8 + CONFIG_BLK_DEV_INITRD=y 8 9 # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set 9 10 CONFIG_EXPERT=y 10 11 CONFIG_KALLSYMS_ALL=y ··· 40 39 CONFIG_IP_NF_QUEUE=m 41 40 CONFIG_NET_PKTGEN=m 42 41 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 42 + CONFIG_DEVTMPFS=y 43 + CONFIG_DEVTMPFS_MOUNT=y 43 44 # CONFIG_STANDALONE is not set 44 45 # CONFIG_PREVENT_FIRMWARE_BUILD is not set 45 46 CONFIG_BLK_DEV_UMEM=m
+2
arch/parisc/configs/c8000_defconfig
··· 62 62 CONFIG_LLC2=m 63 63 CONFIG_DNS_RESOLVER=y 64 64 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 65 + CONFIG_DEVTMPFS=y 66 + CONFIG_DEVTMPFS_MOUNT=y 65 67 # CONFIG_STANDALONE is not set 66 68 CONFIG_PARPORT=y 67 69 CONFIG_PARPORT_PC=y
+2
arch/parisc/configs/default_defconfig
··· 49 49 CONFIG_INET6_IPCOMP=y 50 50 CONFIG_LLC2=m 51 51 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 52 + CONFIG_DEVTMPFS=y 53 + CONFIG_DEVTMPFS_MOUNT=y 52 54 # CONFIG_STANDALONE is not set 53 55 # CONFIG_PREVENT_FIRMWARE_BUILD is not set 54 56 CONFIG_PARPORT=y
+1 -1
arch/parisc/include/asm/traps.h
··· 6 6 7 7 /* traps.c */ 8 8 void parisc_terminate(char *msg, struct pt_regs *regs, 9 - int code, unsigned long offset); 9 + int code, unsigned long offset) __noreturn __cold; 10 10 11 11 /* mm/fault.c */ 12 12 void do_page_fault(struct pt_regs *regs, unsigned long code,
+1 -7
arch/parisc/kernel/smp.c
··· 72 72 IPI_NOP=0, 73 73 IPI_RESCHEDULE=1, 74 74 IPI_CALL_FUNC, 75 - IPI_CALL_FUNC_SINGLE, 76 75 IPI_CPU_START, 77 76 IPI_CPU_STOP, 78 77 IPI_CPU_TEST ··· 161 162 case IPI_CALL_FUNC: 162 163 smp_debug(100, KERN_DEBUG "CPU%d IPI_CALL_FUNC\n", this_cpu); 163 164 generic_smp_call_function_interrupt(); 164 - break; 165 - 166 - case IPI_CALL_FUNC_SINGLE: 167 - smp_debug(100, KERN_DEBUG "CPU%d IPI_CALL_FUNC_SINGLE\n", this_cpu); 168 - generic_smp_call_function_single_interrupt(); 169 165 break; 170 166 171 167 case IPI_CPU_START: ··· 254 260 255 261 void arch_send_call_function_single_ipi(int cpu) 256 262 { 257 - send_IPI_single(cpu, IPI_CALL_FUNC_SINGLE); 263 + send_IPI_single(cpu, IPI_CALL_FUNC); 258 264 } 259 265 260 266 /*
+3 -8
arch/parisc/kernel/traps.c
··· 291 291 do_exit(SIGSEGV); 292 292 } 293 293 294 - int syscall_ipi(int (*syscall) (struct pt_regs *), struct pt_regs *regs) 295 - { 296 - return syscall(regs); 297 - } 298 - 299 294 /* gdb uses break 4,8 */ 300 295 #define GDB_BREAK_INSN 0x10004 301 296 static void handle_gdb_break(struct pt_regs *regs, int wot) ··· 800 805 else { 801 806 802 807 /* 803 - * The kernel should never fault on its own address space. 808 + * The kernel should never fault on its own address space, 809 + * unless pagefault_disable() was called before. 804 810 */ 805 811 806 - if (fault_space == 0) 812 + if (fault_space == 0 && !in_atomic()) 807 813 { 808 814 pdc_chassis_send_status(PDC_CHASSIS_DIRECT_PANIC); 809 815 parisc_terminate("Kernel Fault", regs, code, fault_address); 810 - 811 816 } 812 817 } 813 818
+14 -1
arch/parisc/lib/memcpy.c
··· 56 56 #ifdef __KERNEL__ 57 57 #include <linux/module.h> 58 58 #include <linux/compiler.h> 59 - #include <asm/uaccess.h> 59 + #include <linux/uaccess.h> 60 60 #define s_space "%%sr1" 61 61 #define d_space "%%sr2" 62 62 #else ··· 524 524 EXPORT_SYMBOL(copy_from_user); 525 525 EXPORT_SYMBOL(copy_in_user); 526 526 EXPORT_SYMBOL(memcpy); 527 + 528 + long probe_kernel_read(void *dst, const void *src, size_t size) 529 + { 530 + unsigned long addr = (unsigned long)src; 531 + 532 + if (size < 0 || addr < PAGE_SIZE) 533 + return -EFAULT; 534 + 535 + /* check for I/O space F_EXTEND(0xfff00000) access as well? */ 536 + 537 + return __probe_kernel_read(dst, src, size); 538 + } 539 + 527 540 #endif
+10 -5
arch/parisc/mm/fault.c
··· 171 171 unsigned long address) 172 172 { 173 173 struct vm_area_struct *vma, *prev_vma; 174 - struct task_struct *tsk = current; 175 - struct mm_struct *mm = tsk->mm; 174 + struct task_struct *tsk; 175 + struct mm_struct *mm; 176 176 unsigned long acc_type; 177 177 int fault; 178 - unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE; 178 + unsigned int flags; 179 179 180 - if (in_atomic() || !mm) 180 + if (in_atomic()) 181 181 goto no_context; 182 182 183 + tsk = current; 184 + mm = tsk->mm; 185 + if (!mm) 186 + goto no_context; 187 + 188 + flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE; 183 189 if (user_mode(regs)) 184 190 flags |= FAULT_FLAG_USER; 185 191 186 192 acc_type = parisc_acctyp(code, regs->iir); 187 - 188 193 if (acc_type & VM_WRITE) 189 194 flags |= FAULT_FLAG_WRITE; 190 195 retry:
+1 -1
arch/powerpc/include/asm/jump_label.h
··· 19 19 20 20 static __always_inline bool arch_static_branch(struct static_key *key) 21 21 { 22 - asm goto("1:\n\t" 22 + asm_volatile_goto("1:\n\t" 23 23 "nop\n\t" 24 24 ".pushsection __jump_table, \"aw\"\n\t" 25 25 JUMP_ENTRY_TYPE "1b, %l[l_yes], %c0\n\t"
+1 -1
arch/powerpc/kernel/iommu.c
··· 661 661 /* number of bytes needed for the bitmap */ 662 662 sz = BITS_TO_LONGS(tbl->it_size) * sizeof(unsigned long); 663 663 664 - page = alloc_pages_node(nid, GFP_ATOMIC, get_order(sz)); 664 + page = alloc_pages_node(nid, GFP_KERNEL, get_order(sz)); 665 665 if (!page) 666 666 panic("iommu_init_table: Can't allocate %ld bytes\n", sz); 667 667 tbl->it_map = page_address(page);
+3 -2
arch/powerpc/kernel/irq.c
··· 495 495 void do_IRQ(struct pt_regs *regs) 496 496 { 497 497 struct pt_regs *old_regs = set_irq_regs(regs); 498 - struct thread_info *curtp, *irqtp; 498 + struct thread_info *curtp, *irqtp, *sirqtp; 499 499 500 500 /* Switch to the irq stack to handle this */ 501 501 curtp = current_thread_info(); 502 502 irqtp = hardirq_ctx[raw_smp_processor_id()]; 503 + sirqtp = softirq_ctx[raw_smp_processor_id()]; 503 504 504 505 /* Already there ? */ 505 - if (unlikely(curtp == irqtp)) { 506 + if (unlikely(curtp == irqtp || curtp == sirqtp)) { 506 507 __do_irq(regs); 507 508 set_irq_regs(old_regs); 508 509 return;
+16 -2
arch/powerpc/kernel/sysfs.c
··· 17 17 #include <asm/machdep.h> 18 18 #include <asm/smp.h> 19 19 #include <asm/pmc.h> 20 + #include <asm/firmware.h> 20 21 21 22 #include "cacheinfo.h" 22 23 ··· 180 179 SYSFS_PMCSETUP(dscr, SPRN_DSCR); 181 180 SYSFS_PMCSETUP(pir, SPRN_PIR); 182 181 182 + /* 183 + Lets only enable read for phyp resources and 184 + enable write when needed with a separate function. 185 + Lets be conservative and default to pseries. 186 + */ 183 187 static DEVICE_ATTR(mmcra, 0600, show_mmcra, store_mmcra); 184 188 static DEVICE_ATTR(spurr, 0400, show_spurr, NULL); 185 189 static DEVICE_ATTR(dscr, 0600, show_dscr, store_dscr); 186 - static DEVICE_ATTR(purr, 0600, show_purr, store_purr); 190 + static DEVICE_ATTR(purr, 0400, show_purr, store_purr); 187 191 static DEVICE_ATTR(pir, 0400, show_pir, NULL); 188 192 189 193 unsigned long dscr_default = 0; 190 194 EXPORT_SYMBOL(dscr_default); 195 + 196 + static void add_write_permission_dev_attr(struct device_attribute *attr) 197 + { 198 + attr->attr.mode |= 0200; 199 + } 191 200 192 201 static ssize_t show_dscr_default(struct device *dev, 193 202 struct device_attribute *attr, char *buf) ··· 405 394 if (cpu_has_feature(CPU_FTR_MMCRA)) 406 395 device_create_file(s, &dev_attr_mmcra); 407 396 408 - if (cpu_has_feature(CPU_FTR_PURR)) 397 + if (cpu_has_feature(CPU_FTR_PURR)) { 398 + if (!firmware_has_feature(FW_FEATURE_LPAR)) 399 + add_write_permission_dev_attr(&dev_attr_purr); 409 400 device_create_file(s, &dev_attr_purr); 401 + } 410 402 411 403 if (cpu_has_feature(CPU_FTR_SPURR)) 412 404 device_create_file(s, &dev_attr_spurr);
+65 -32
arch/powerpc/kernel/tm.S
··· 79 79 TABORT(R3) 80 80 blr 81 81 82 + .section ".toc","aw" 83 + DSCR_DEFAULT: 84 + .tc dscr_default[TC],dscr_default 85 + 86 + .section ".text" 82 87 83 88 /* void tm_reclaim(struct thread_struct *thread, 84 89 * unsigned long orig_msr, ··· 128 123 mr r15, r14 129 124 ori r15, r15, MSR_FP 130 125 li r16, MSR_RI 126 + ori r16, r16, MSR_EE /* IRQs hard off */ 131 127 andc r15, r15, r16 132 128 oris r15, r15, MSR_VEC@h 133 129 #ifdef CONFIG_VSX ··· 193 187 std r1, PACATMSCRATCH(r13) 194 188 ld r1, PACAR1(r13) 195 189 190 + /* Store the PPR in r11 and reset to decent value */ 191 + std r11, GPR11(r1) /* Temporary stash */ 192 + mfspr r11, SPRN_PPR 193 + HMT_MEDIUM 194 + 196 195 /* Now get some more GPRS free */ 197 196 std r7, GPR7(r1) /* Temporary stash */ 198 197 std r12, GPR12(r1) /* '' '' '' */ 199 198 ld r12, STACK_PARAM(0)(r1) /* Param 0, thread_struct * */ 200 + std r11, THREAD_TM_PPR(r12) /* Store PPR and free r11 */ 200 201 201 202 addi r7, r12, PT_CKPT_REGS /* Thread's ckpt_regs */ ··· 216 203 SAVE_GPR(0, r7) /* user r0 */ 217 204 SAVE_GPR(2, r7) /* user r2 */ 218 205 SAVE_4GPRS(3, r7) /* user r3-r6 */ 219 - SAVE_4GPRS(8, r7) /* user r8-r11 */ 206 + SAVE_GPR(8, r7) /* user r8 */ 207 + SAVE_GPR(9, r7) /* user r9 */ 208 + SAVE_GPR(10, r7) /* user r10 */ 220 209 ld r3, PACATMSCRATCH(r13) /* user r1 */ 221 210 ld r4, GPR7(r1) /* user r7 */ 222 - ld r5, GPR12(r1) /* user r12 */ 223 - GET_SCRATCH0(6) /* user r13 */ 211 + ld r5, GPR11(r1) /* user r11 */ 212 + ld r6, GPR12(r1) /* user r12 */ 213 + GET_SCRATCH0(8) /* user r13 */ 224 214 std r3, GPR1(r7) 225 215 std r4, GPR7(r7) 226 - std r5, GPR12(r7) 227 - std r6, GPR13(r7) 216 + std r5, GPR11(r7) 217 + std r6, GPR12(r7) 218 + std r8, GPR13(r7) 228 219 229 220 SAVE_NVGPRS(r7) /* user r14-r31 */ ··· 251 234 std r6, _XER(r7) 252 235 253 236 254 - /* ******************** TAR, PPR, DSCR ********** */ 237 + /* ******************** TAR, DSCR ********** */ 255 238 mfspr r3, SPRN_TAR 256 - mfspr r4, SPRN_PPR 257 - mfspr r5, SPRN_DSCR 239 + mfspr r4, SPRN_DSCR 258 240 259 241 std r3, THREAD_TM_TAR(r12) 260 - std r4, THREAD_TM_PPR(r12) 261 - std r5, THREAD_TM_DSCR(r12) 242 + std r4, THREAD_TM_DSCR(r12) 262 243 263 244 /* MSR and flags: We don't change CRs, and we don't need to alter 264 245 * MSR. ··· 273 258 std r3, THREAD_TM_TFHAR(r12) 274 259 std r4, THREAD_TM_TFIAR(r12) 275 260 276 - /* AMR and PPR are checkpointed too, but are unsupported by Linux. */ 261 + /* AMR is checkpointed too, but is unsupported by Linux. */ 277 262 278 263 /* Restore original MSR/IRQ state & clear TM mode */ 279 264 ld r14, TM_FRAME_L0(r1) /* Orig MSR */ ··· 289 274 mtcr r4 290 275 mtlr r0 291 276 ld r2, 40(r1) 277 + 278 + /* Load system default DSCR */ 279 + ld r4, DSCR_DEFAULT@toc(r2) 280 + ld r0, 0(r4) 281 + mtspr SPRN_DSCR, r0 282 + 292 283 blr 293 284 294 285 ··· 379 358 380 359 restore_gprs: 381 360 382 - /* ******************** TAR, PPR, DSCR ********** */ 383 - ld r4, THREAD_TM_TAR(r3) 384 - ld r5, THREAD_TM_PPR(r3) 385 - ld r6, THREAD_TM_DSCR(r3) 386 - 387 - mtspr SPRN_TAR, r4 388 - mtspr SPRN_PPR, r5 389 - mtspr SPRN_DSCR, r6 390 - 391 361 /* ******************** CR,LR,CCR,MSR ********** */ 392 - ld r3, _CTR(r7) 393 - ld r4, _LINK(r7) 394 - ld r5, _CCR(r7) 395 - ld r6, _XER(r7) 362 + ld r4, _CTR(r7) 363 + ld r5, _LINK(r7) 364 + ld r6, _CCR(r7) 365 + ld r8, _XER(r7) 396 366 397 - mtctr r3 398 - mtlr r4 399 - mtcr r5 400 - mtxer r6 367 + mtctr r4 368 + mtlr r5 369 + mtcr r6 370 + mtxer r8 371 + 372 + /* ******************** TAR ******************** */ 373 + ld r4, THREAD_TM_TAR(r3) 374 + mtspr SPRN_TAR, r4 375 + 376 + /* Load up the PPR and DSCR in GPRs only at this stage */ 377 + ld r5, THREAD_TM_DSCR(r3) 378 + ld r6, THREAD_TM_PPR(r3) 401 379 402 380 /* Clear the MSR RI since we are about to change R1. EE is already off 403 381 */ ··· 404 384 mtmsrd r4, 1 405 385 406 386 REST_4GPRS(0, r7) /* GPR0-3 */ 407 - REST_GPR(4, r7) /* GPR4-6 */ 408 - REST_GPR(5, r7) 409 - REST_GPR(6, r7) 387 + REST_GPR(4, r7) /* GPR4 */ 410 388 REST_4GPRS(8, r7) /* GPR8-11 */ 411 389 REST_2GPRS(12, r7) /* GPR12-13 */ 412 390 413 391 REST_NVGPRS(r7) /* GPR14-31 */ 414 392 415 - ld r7, GPR7(r7) /* GPR7 */ 393 + /* Load up PPR and DSCR here so we don't run with user values for long 394 + */ 395 + mtspr SPRN_DSCR, r5 396 + mtspr SPRN_PPR, r6 397 + 398 + REST_GPR(5, r7) /* GPR5-7 */ 399 + REST_GPR(6, r7) 400 + ld r7, GPR7(r7) 416 401 417 402 /* Commit register state as checkpointed state: */ 418 403 TRECHKPT 404 + 405 + HMT_MEDIUM 419 406 420 407 /* Our transactional state has now changed. 421 408 * ··· 446 419 mtcr r4 447 420 mtlr r0 448 421 ld r2, 40(r1) 422 + 423 + /* Load system default DSCR */ 424 + ld r4, DSCR_DEFAULT@toc(r2) 425 + ld r0, 0(r4) 426 + mtspr SPRN_DSCR, r0 427 + 449 428 blr 450 429 451 430 /* ****************************************************************** */
+8 -4
arch/powerpc/kernel/vio.c
··· 1530 1530 const char *cp; 1531 1531 1532 1532 dn = dev->of_node; 1533 - if (!dn) 1534 - return -ENODEV; 1533 + if (!dn) { 1534 + strcat(buf, "\n"); 1535 + return strlen(buf); 1536 + } 1535 1537 cp = of_get_property(dn, "compatible", NULL); 1536 - if (!cp) 1537 - return -ENODEV; 1538 + if (!cp) { 1539 + strcat(buf, "\n"); 1540 + return strlen(buf); 1541 + } 1538 1542 1539 1543 return sprintf(buf, "vio:T%sS%s\n", vio_dev->type, cp); 1540 1544 }
+1 -1
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 1066 1066 BEGIN_FTR_SECTION 1067 1067 mfspr r8, SPRN_DSCR 1068 1068 ld r7, HSTATE_DSCR(r13) 1069 - std r8, VCPU_DSCR(r7) 1069 + std r8, VCPU_DSCR(r9) 1070 1070 mtspr SPRN_DSCR, r7 1071 1071 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206) 1072 1072
+17 -1
arch/powerpc/kvm/e500_mmu_host.c
··· 332 332 unsigned long hva; 333 333 int pfnmap = 0; 334 334 int tsize = BOOK3E_PAGESZ_4K; 335 + int ret = 0; 336 + unsigned long mmu_seq; 337 + struct kvm *kvm = vcpu_e500->vcpu.kvm; 338 + 339 + /* used to check for invalidations in progress */ 340 + mmu_seq = kvm->mmu_notifier_seq; 341 + smp_rmb(); 335 342 336 343 /* 337 344 * Translate guest physical to true physical, acquiring ··· 456 449 gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1); 457 450 } 458 451 452 + spin_lock(&kvm->mmu_lock); 453 + if (mmu_notifier_retry(kvm, mmu_seq)) { 454 + ret = -EAGAIN; 455 + goto out; 456 + } 457 + 459 458 kvmppc_e500_ref_setup(ref, gtlbe, pfn); 460 459 461 460 kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, ··· 470 457 /* Clear i-cache for new pages */ 471 458 kvmppc_mmu_flush_icache(pfn); 472 459 460 + out: 461 + spin_unlock(&kvm->mmu_lock); 462 + 473 463 /* Drop refcount on page, so that mmu notifiers can clear it */ 474 464 kvm_release_pfn_clean(pfn); 475 465 476 - return 0; 466 + return ret; 477 467 } 478 468 479 469 /* XXX only map the one-one case, for now use TLB0 */
+42 -16
arch/powerpc/lib/checksum_64.S
··· 226 226 blr 227 227 228 228 229 - .macro source 229 + .macro srcnr 230 230 100: 231 231 .section __ex_table,"a" 232 232 .align 3 233 - .llong 100b,.Lsrc_error 233 + .llong 100b,.Lsrc_error_nr 234 + .previous 235 + .endm 236 + 237 + .macro source 238 + 150: 239 + .section __ex_table,"a" 240 + .align 3 241 + .llong 150b,.Lsrc_error 242 + .previous 243 + .endm 244 + 245 + .macro dstnr 246 + 200: 247 + .section __ex_table,"a" 248 + .align 3 249 + .llong 200b,.Ldest_error_nr 234 250 .previous 235 251 .endm 236 252 237 253 .macro dest 238 - 200: 254 + 250: 239 255 .section __ex_table,"a" 240 256 .align 3 241 - .llong 200b,.Ldest_error 257 + .llong 250b,.Ldest_error 242 258 .previous 243 259 .endm 244 260 ··· 285 269 rldicl. r6,r3,64-1,64-2 /* r6 = (r3 & 0x3) >> 1 */ 286 270 beq .Lcopy_aligned 287 271 288 - li r7,4 289 - sub r6,r7,r6 272 + li r9,4 273 + sub r6,r9,r6 290 274 mtctr r6 291 275 292 276 1: 293 - source; lhz r6,0(r3) /* align to doubleword */ 277 + srcnr; lhz r6,0(r3) /* align to doubleword */ 294 278 subi r5,r5,2 295 279 addi r3,r3,2 296 280 adde r0,r0,r6 297 - dest; sth r6,0(r4) 281 + dstnr; sth r6,0(r4) 298 282 addi r4,r4,2 299 283 bdnz 1b 300 284 ··· 408 392 409 393 mtctr r6 410 394 3: 411 - source; ld r6,0(r3) 395 + srcnr; ld r6,0(r3) 412 396 addi r3,r3,8 413 397 adde r0,r0,r6 414 - dest; std r6,0(r4) 398 + dstnr; std r6,0(r4) 415 399 addi r4,r4,8 416 400 bdnz 3b 417 401 ··· 421 405 srdi. r6,r5,2 422 406 beq .Lcopy_tail_halfword 423 407 424 - source; lwz r6,0(r3) 408 + srcnr; lwz r6,0(r3) 425 409 addi r3,r3,4 426 410 adde r0,r0,r6 427 - dest; stw r6,0(r4) 411 + dstnr; stw r6,0(r4) 428 412 addi r4,r4,4 429 413 subi r5,r5,4 430 414 ··· 432 416 srdi. r6,r5,1 433 417 beq .Lcopy_tail_byte 434 418 435 - source; lhz r6,0(r3) 419 + srcnr; lhz r6,0(r3) 436 420 addi r3,r3,2 437 421 adde r0,r0,r6 438 - dest; sth r6,0(r4) 422 + dstnr; sth r6,0(r4) 439 423 addi r4,r4,2 440 424 subi r5,r5,2 441 425 ··· 443 427 andi. r6,r5,1 444 428 beq .Lcopy_finish 445 429 446 - source; lbz r6,0(r3) 430 + srcnr; lbz r6,0(r3) 447 431 sldi r9,r6,8 /* Pad the byte out to 16 bits */ 448 432 adde r0,r0,r9 449 - dest; stb r6,0(r4) 433 + dstnr; stb r6,0(r4) 450 434 451 435 .Lcopy_finish: 452 436 addze r0,r0 /* add in final carry */ ··· 456 440 blr 457 441 458 442 .Lsrc_error: 443 + ld r14,STK_REG(R14)(r1) 444 + ld r15,STK_REG(R15)(r1) 445 + ld r16,STK_REG(R16)(r1) 446 + addi r1,r1,STACKFRAMESIZE 447 + .Lsrc_error_nr: 459 448 cmpdi 0,r7,0 460 449 beqlr 461 450 li r6,-EFAULT ··· 468 447 blr 469 448 470 449 .Ldest_error: 450 + ld r14,STK_REG(R14)(r1) 451 + ld r15,STK_REG(R15)(r1) 452 + ld r16,STK_REG(R16)(r1) 453 + addi r1,r1,STACKFRAMESIZE 454 + .Ldest_error_nr: 471 455 cmpdi 0,r8,0 472 456 beqlr 473 457 li r6,-EFAULT
+4
arch/powerpc/mm/init_64.c
··· 300 300 { 301 301 } 302 302 303 + void register_page_bootmem_memmap(unsigned long section_nr, 304 + struct page *start_page, unsigned long size) 305 + { 306 + } 303 307 #endif /* CONFIG_SPARSEMEM_VMEMMAP */ 304 308
+9
arch/powerpc/mm/mem.c
··· 297 297 } 298 298 #endif /* ! CONFIG_NEED_MULTIPLE_NODES */ 299 299 300 + static void __init register_page_bootmem_info(void) 301 + { 302 + int i; 303 + 304 + for_each_online_node(i) 305 + register_page_bootmem_info_node(NODE_DATA(i)); 306 + } 307 + 300 308 void __init mem_init(void) 301 309 { 302 310 #ifdef CONFIG_SWIOTLB 303 311 swiotlb_init(0); 304 312 #endif 305 313 314 + register_page_bootmem_info(); 306 315 high_memory = (void *) __va(max_low_pfn * PAGE_SIZE); 307 316 set_max_mapnr(max_pfn); 308 317 free_all_bootmem();
+3 -2
arch/powerpc/perf/power8-pmu.c
··· 199 199 #define MMCR1_UNIT_SHIFT(pmc) (60 - (4 * ((pmc) - 1))) 200 200 #define MMCR1_COMBINE_SHIFT(pmc) (35 - ((pmc) - 1)) 201 201 #define MMCR1_PMCSEL_SHIFT(pmc) (24 - (((pmc) - 1)) * 8) 202 + #define MMCR1_FAB_SHIFT 36 202 203 #define MMCR1_DC_QUAL_SHIFT 47 203 204 #define MMCR1_IC_QUAL_SHIFT 46 204 205 ··· 389 388 * the threshold bits are used for the match value. 390 389 */ 391 390 if (event_is_fab_match(event[i])) { 392 - mmcr1 |= (event[i] >> EVENT_THR_CTL_SHIFT) & 393 - EVENT_THR_CTL_MASK; 391 + mmcr1 |= ((event[i] >> EVENT_THR_CTL_SHIFT) & 392 + EVENT_THR_CTL_MASK) << MMCR1_FAB_SHIFT; 394 393 } else { 395 394 val = (event[i] >> EVENT_THR_CTL_SHIFT) & EVENT_THR_CTL_MASK; 396 395 mmcra |= val << MMCRA_THR_CTL_SHIFT;
+1 -1
arch/s390/include/asm/jump_label.h
··· 15 15 16 16 static __always_inline bool arch_static_branch(struct static_key *key) 17 17 { 18 - asm goto("0: brcl 0,0\n" 18 + asm_volatile_goto("0: brcl 0,0\n" 19 19 ".pushsection __jump_table, \"aw\"\n" 20 20 ASM_ALIGN "\n" 21 21 ASM_PTR " 0b, %l[label], %0\n"
+3 -1
arch/s390/include/asm/pgtable.h
··· 748 748 749 749 static inline void pgste_set_pte(pte_t *ptep, pte_t entry) 750 750 { 751 - if (!MACHINE_HAS_ESOP && (pte_val(entry) & _PAGE_WRITE)) { 751 + if (!MACHINE_HAS_ESOP && 752 + (pte_val(entry) & _PAGE_PRESENT) && 753 + (pte_val(entry) & _PAGE_WRITE)) { 752 754 /* 753 755 * Without enhanced suppression-on-protection force 754 756 * the dirty bit on for all writable ptes.
+14 -14
arch/s390/include/asm/timex.h
··· 71 71 72 72 typedef unsigned long long cycles_t; 73 73 74 - static inline unsigned long long get_tod_clock(void) 75 - { 76 - unsigned long long clk; 77 - 78 - #ifdef CONFIG_HAVE_MARCH_Z9_109_FEATURES 79 - asm volatile(".insn s,0xb27c0000,%0" : "=Q" (clk) : : "cc"); 80 - #else 81 - asm volatile("stck %0" : "=Q" (clk) : : "cc"); 82 - #endif 83 - return clk; 84 - } 85 - 86 74 static inline void get_tod_clock_ext(char *clk) 87 75 { 88 76 asm volatile("stcke %0" : "=Q" (*clk) : : "cc"); 89 77 } 90 78 91 - static inline unsigned long long get_tod_clock_xt(void) 79 + static inline unsigned long long get_tod_clock(void) 92 80 { 93 81 unsigned char clk[16]; 94 82 get_tod_clock_ext(clk); 95 83 return *((unsigned long long *)&clk[1]); 84 + } 85 + 86 + static inline unsigned long long get_tod_clock_fast(void) 87 + { 88 + #ifdef CONFIG_HAVE_MARCH_Z9_109_FEATURES 89 + unsigned long long clk; 90 + 91 + asm volatile("stckf %0" : "=Q" (clk) : : "cc"); 92 + return clk; 93 + #else 94 + return get_tod_clock(); 95 + #endif 96 96 } 97 97 98 98 static inline cycles_t get_cycles(void) ··· 125 125 */ 126 126 static inline unsigned long long get_tod_clock_monotonic(void) 127 127 { 128 - return get_tod_clock_xt() - sched_clock_base_cc; 128 + return get_tod_clock() - sched_clock_base_cc; 129 129 } 130 130 131 131 /**
+2 -2
arch/s390/kernel/compat_signal.c
··· 99 99 break; 100 100 } 101 101 } 102 - return err; 102 + return err ? -EFAULT : 0; 103 103 } 104 104 105 105 int copy_siginfo_from_user32(siginfo_t *to, compat_siginfo_t __user *from) ··· 148 148 break; 149 149 } 150 150 } 151 - return err; 151 + return err ? -EFAULT : 0; 152 152 } 153 153 154 154 static int save_sigregs32(struct pt_regs *regs, _sigregs32 __user *sregs)
+20 -22
arch/s390/kernel/crash_dump.c
··· 40 40 } 41 41 42 42 /* 43 - * Copy up to one page to vmalloc or real memory 43 + * Copy real to virtual or real memory 44 44 */ 45 - static ssize_t copy_page_real(void *buf, void *src, size_t csize) 45 + static int copy_from_realmem(void *dest, void *src, size_t count) 46 46 { 47 - size_t size; 47 + unsigned long size; 48 + int rc; 48 49 49 - if (is_vmalloc_addr(buf)) { 50 - BUG_ON(csize >= PAGE_SIZE); 51 - /* If buf is not page aligned, copy first part */ 52 - size = min(roundup(__pa(buf), PAGE_SIZE) - __pa(buf), csize); 53 - if (size) { 54 - if (memcpy_real(load_real_addr(buf), src, size)) 55 - return -EFAULT; 56 - buf += size; 57 - src += size; 58 - } 59 - /* Copy second part */ 60 - size = csize - size; 61 - return (size) ? memcpy_real(load_real_addr(buf), src, size) : 0; 62 - } else { 63 - return memcpy_real(buf, src, csize); 64 - } 50 + if (!count) 51 + return 0; 52 + if (!is_vmalloc_or_module_addr(dest)) 53 + return memcpy_real(dest, src, count); 54 + do { 55 + size = min(count, PAGE_SIZE - (__pa(dest) & ~PAGE_MASK)); 56 + if (memcpy_real(load_real_addr(dest), src, size)) 57 + return -EFAULT; 58 + count -= size; 59 + dest += size; 60 + src += size; 61 + } while (count); 62 + return 0; 65 63 } 66 64 67 65 /* ··· 112 114 rc = copy_to_user_real((void __force __user *) buf, 113 115 (void *) src, csize); 114 116 else 115 - rc = copy_page_real(buf, (void *) src, csize); 117 + rc = copy_from_realmem(buf, (void *) src, csize); 116 118 return (rc == 0) ? rc : csize; 117 119 } 118 120 ··· 208 210 if (OLDMEM_BASE) { 209 211 if ((unsigned long) src < OLDMEM_SIZE) { 210 212 copied = min(count, OLDMEM_SIZE - (unsigned long) src); 211 - rc = memcpy_real(dest, src + OLDMEM_BASE, copied); 213 + rc = copy_from_realmem(dest, src + OLDMEM_BASE, copied); 212 214 if (rc) 213 215 return rc; 214 216 } ··· 221 223 return rc; 222 224 } 223 225 } 224 - return memcpy_real(dest + copied, src + copied, count - copied); 226 + return copy_from_realmem(dest + copied, src + copied, count - copied); 225 227 } 226 228 227 229 /*
+1 -1
arch/s390/kernel/debug.c
··· 867 867 debug_finish_entry(debug_info_t * id, debug_entry_t* active, int level, 868 868 int exception) 869 869 { 870 - active->id.stck = get_tod_clock(); 870 + active->id.stck = get_tod_clock_fast(); 871 871 active->id.fields.cpuid = smp_processor_id(); 872 872 active->caller = __builtin_return_address(0); 873 873 active->id.fields.exception = exception;
+1
arch/s390/kernel/entry.S
··· 266 266 tm __TI_flags+3(%r12),_TIF_SYSCALL 267 267 jno sysc_return 268 268 lm %r2,%r7,__PT_R2(%r11) # load svc arguments 269 + l %r10,__TI_sysc_table(%r12) # 31 bit system call table 269 270 xr %r8,%r8 # svc 0 returns -ENOSYS 270 271 clc __PT_INT_CODE+2(2,%r11),BASED(.Lnr_syscalls+2) 271 272 jnl sysc_nr_ok # invalid svc number -> do svc 0
+1
arch/s390/kernel/entry64.S
··· 297 297 tm __TI_flags+7(%r12),_TIF_SYSCALL 298 298 jno sysc_return 299 299 lmg %r2,%r7,__PT_R2(%r11) # load svc arguments 300 + lg %r10,__TI_sysc_table(%r12) # address of system call table 300 301 lghi %r8,0 # svc 0 returns -ENOSYS 301 302 llgh %r1,__PT_INT_CODE+2(%r11) # load new svc number 302 303 cghi %r1,NR_syscalls
+5 -1
arch/s390/kernel/kprobes.c
··· 67 67 case 0xac: /* stnsm */ 68 68 case 0xad: /* stosm */ 69 69 return -EINVAL; 70 + case 0xc6: 71 + switch (insn[0] & 0x0f) { 72 + case 0x00: /* exrl */ 73 + return -EINVAL; 74 + } 70 75 } 71 76 switch (insn[0]) { 72 77 case 0x0101: /* pr */ ··· 185 180 break; 186 181 case 0xc6: 187 182 switch (insn[0] & 0x0f) { 188 - case 0x00: /* exrl */ 189 183 case 0x02: /* pfdrl */ 190 184 case 0x04: /* cghrl */ 191 185 case 0x05: /* chrl */
+3 -3
arch/s390/kvm/interrupt.c
··· 385 385 } 386 386 387 387 if ((!rc) && (vcpu->arch.sie_block->ckc < 388 - get_tod_clock() + vcpu->arch.sie_block->epoch)) { 388 + get_tod_clock_fast() + vcpu->arch.sie_block->epoch)) { 389 389 if ((!psw_extint_disabled(vcpu)) && 390 390 (vcpu->arch.sie_block->gcr[0] & 0x800ul)) 391 391 rc = 1; ··· 425 425 goto no_timer; 426 426 } 427 427 428 - now = get_tod_clock() + vcpu->arch.sie_block->epoch; 428 + now = get_tod_clock_fast() + vcpu->arch.sie_block->epoch; 429 429 if (vcpu->arch.sie_block->ckc < now) { 430 430 __unset_cpu_idle(vcpu); 431 431 return 0; ··· 515 515 } 516 516 517 517 if ((vcpu->arch.sie_block->ckc < 518 - get_tod_clock() + vcpu->arch.sie_block->epoch)) 518 + get_tod_clock_fast() + vcpu->arch.sie_block->epoch)) 519 519 __try_deliver_ckc_interrupt(vcpu); 520 520 521 521 if (atomic_read(&fi->active)) {
+7 -7
arch/s390/lib/delay.c
··· 44 44 do { 45 45 set_clock_comparator(end); 46 46 vtime_stop_cpu(); 47 - } while (get_tod_clock() < end); 47 + } while (get_tod_clock_fast() < end); 48 48 lockdep_on(); 49 49 __ctl_load(cr0, 0, 0); 50 50 __ctl_load(cr6, 6, 6); ··· 55 55 { 56 56 u64 clock_saved, end; 57 57 58 - end = get_tod_clock() + (usecs << 12); 58 + end = get_tod_clock_fast() + (usecs << 12); 59 59 do { 60 60 clock_saved = 0; 61 61 if (end < S390_lowcore.clock_comparator) { ··· 65 65 vtime_stop_cpu(); 66 66 if (clock_saved) 67 67 local_tick_enable(clock_saved); 68 - } while (get_tod_clock() < end); 68 + } while (get_tod_clock_fast() < end); 69 69 } 70 70 71 71 /* ··· 109 109 { 110 110 u64 end; 111 111 112 - end = get_tod_clock() + (usecs << 12); 113 - while (get_tod_clock() < end) 112 + end = get_tod_clock_fast() + (usecs << 12); 113 + while (get_tod_clock_fast() < end) 114 114 cpu_relax(); 115 115 } 116 116 ··· 120 120 121 121 nsecs <<= 9; 122 122 do_div(nsecs, 125); 123 - end = get_tod_clock() + nsecs; 123 + end = get_tod_clock_fast() + nsecs; 124 124 if (nsecs & ~0xfffUL) 125 125 __udelay(nsecs >> 12); 126 - while (get_tod_clock() < end) 126 + while (get_tod_clock_fast() < end) 127 127 barrier(); 128 128 } 129 129 EXPORT_SYMBOL(__ndelay);
+6 -1
arch/sparc/Kconfig
··· 506 506 Only choose N if you know in advance that you will not need to modify 507 507 OpenPROM settings on the running system. 508 508 509 - # Makefile helper 509 + # Makefile helpers 510 510 config SPARC64_PCI 511 511 bool 512 512 default y 513 513 depends on SPARC64 && PCI 514 + 515 + config SPARC64_PCI_MSI 516 + bool 517 + default y 518 + depends on SPARC64_PCI && PCI_MSI 514 519 515 520 endmenu 516 521
+1 -1
arch/sparc/include/asm/floppy_64.h
··· 254 254 once = 1; 255 255 256 256 error = request_irq(FLOPPY_IRQ, sparc_floppy_irq, 257 - IRQF_DISABLED, "floppy", NULL); 257 + 0, "floppy", NULL); 258 258 259 259 return ((error == 0) ? 0 : -1); 260 260 }
+1 -1
arch/sparc/include/asm/jump_label.h
··· 9 9 10 10 static __always_inline bool arch_static_branch(struct static_key *key) 11 11 { 12 - asm goto("1:\n\t" 12 + asm_volatile_goto("1:\n\t" 13 13 "nop\n\t" 14 14 "nop\n\t" 15 15 ".pushsection __jump_table, \"aw\"\n\t"
+2 -1
arch/sparc/kernel/Makefile
··· 1 + 1 2 # 2 3 # Makefile for the linux kernel. 3 4 # ··· 100 99 obj-$(CONFIG_SPARC64_PCI) += pci.o pci_common.o psycho_common.o 101 100 obj-$(CONFIG_SPARC64_PCI) += pci_psycho.o pci_sabre.o pci_schizo.o 102 101 obj-$(CONFIG_SPARC64_PCI) += pci_sun4v.o pci_sun4v_asm.o pci_fire.o 103 - obj-$(CONFIG_PCI_MSI) += pci_msi.o 102 + obj-$(CONFIG_SPARC64_PCI_MSI) += pci_msi.o 104 103 105 104 obj-$(CONFIG_COMPAT) += sys32.o sys_sparc32.o signal32.o 106 105
+2 -3
arch/sparc/kernel/ds.c
··· 849 849 if (boot_command && strlen(boot_command)) { 850 850 unsigned long len; 851 851 852 - strcpy(full_boot_str, "boot "); 853 - strlcpy(full_boot_str + strlen("boot "), boot_command, 854 - sizeof(full_boot_str)); 852 + snprintf(full_boot_str, sizeof(full_boot_str), "boot %s", 853 + boot_command); 855 854 len = strlen(full_boot_str); 856 855 857 856 if (reboot_data_supported) {
+2 -2
arch/sparc/kernel/ldc.c
··· 1249 1249 snprintf(lp->rx_irq_name, LDC_IRQ_NAME_MAX, "%s RX", name); 1250 1250 snprintf(lp->tx_irq_name, LDC_IRQ_NAME_MAX, "%s TX", name); 1251 1251 1252 - err = request_irq(lp->cfg.rx_irq, ldc_rx, IRQF_DISABLED, 1252 + err = request_irq(lp->cfg.rx_irq, ldc_rx, 0, 1253 1253 lp->rx_irq_name, lp); 1254 1254 if (err) 1255 1255 return err; 1256 1256 1257 - err = request_irq(lp->cfg.tx_irq, ldc_tx, IRQF_DISABLED, 1257 + err = request_irq(lp->cfg.tx_irq, ldc_tx, 0, 1258 1258 lp->tx_irq_name, lp); 1259 1259 if (err) { 1260 1260 free_irq(lp->cfg.rx_irq, lp);
+3 -2
arch/tile/include/asm/atomic.h
··· 166 166 * 167 167 * Atomically sets @v to @i and returns old @v 168 168 */ 169 - static inline u64 atomic64_xchg(atomic64_t *v, u64 n) 169 + static inline long long atomic64_xchg(atomic64_t *v, long long n) 170 170 { 171 171 return xchg64(&v->counter, n); 172 172 } ··· 180 180 * Atomically checks if @v holds @o and replaces it with @n if so. 181 181 * Returns the old value at @v. 182 182 */ 183 - static inline u64 atomic64_cmpxchg(atomic64_t *v, u64 o, u64 n) 183 + static inline long long atomic64_cmpxchg(atomic64_t *v, long long o, 184 + long long n) 184 185 { 185 186 return cmpxchg64(&v->counter, o, n); 186 187 }
+15 -12
arch/tile/include/asm/atomic_32.h
··· 80 80 /* A 64bit atomic type */ 81 81 82 82 typedef struct { 83 - u64 __aligned(8) counter; 83 + long long counter; 84 84 } atomic64_t; 85 85 86 86 #define ATOMIC64_INIT(val) { (val) } ··· 91 91 * 92 92 * Atomically reads the value of @v. 93 93 */ 94 - static inline u64 atomic64_read(const atomic64_t *v) 94 + static inline long long atomic64_read(const atomic64_t *v) 95 95 { 96 96 /* 97 97 * Requires an atomic op to read both 32-bit parts consistently. 98 98 * Casting away const is safe since the atomic support routines 99 99 * do not write to memory if the value has not been modified. 100 100 */ 101 - return _atomic64_xchg_add((u64 *)&v->counter, 0); 101 + return _atomic64_xchg_add((long long *)&v->counter, 0); 102 102 } 103 103 104 104 /** ··· 108 108 * 109 109 * Atomically adds @i to @v. 110 110 */ 111 - static inline void atomic64_add(u64 i, atomic64_t *v) 111 + static inline void atomic64_add(long long i, atomic64_t *v) 112 112 { 113 113 _atomic64_xchg_add(&v->counter, i); 114 114 } ··· 120 120 * 121 121 * Atomically adds @i to @v and returns @i + @v 122 122 */ 123 - static inline u64 atomic64_add_return(u64 i, atomic64_t *v) 123 + static inline long long atomic64_add_return(long long i, atomic64_t *v) 124 124 { 125 125 smp_mb(); /* barrier for proper semantics */ 126 126 return _atomic64_xchg_add(&v->counter, i) + i; ··· 135 135 * Atomically adds @a to @v, so long as @v was not already @u. 136 136 * Returns non-zero if @v was not @u, and zero otherwise. 137 137 */ 138 - static inline u64 atomic64_add_unless(atomic64_t *v, u64 a, u64 u) 138 + static inline long long atomic64_add_unless(atomic64_t *v, long long a, 139 + long long u) 139 140 { 140 141 smp_mb(); /* barrier for proper semantics */ 141 142 return _atomic64_xchg_add_unless(&v->counter, a, u) != u; ··· 152 151 * atomic64_set() can't be just a raw store, since it would be lost if it 153 152 * fell between the load and store of one of the other atomic ops. 
154 153 */ 155 - static inline void atomic64_set(atomic64_t *v, u64 n) 154 + static inline void atomic64_set(atomic64_t *v, long long n) 156 155 { 157 156 _atomic64_xchg(&v->counter, n); 158 157 } ··· 237 236 extern struct __get_user __atomic_or(volatile int *p, int *lock, int n); 238 237 extern struct __get_user __atomic_andn(volatile int *p, int *lock, int n); 239 238 extern struct __get_user __atomic_xor(volatile int *p, int *lock, int n); 240 - extern u64 __atomic64_cmpxchg(volatile u64 *p, int *lock, u64 o, u64 n); 241 - extern u64 __atomic64_xchg(volatile u64 *p, int *lock, u64 n); 242 - extern u64 __atomic64_xchg_add(volatile u64 *p, int *lock, u64 n); 243 - extern u64 __atomic64_xchg_add_unless(volatile u64 *p, 244 - int *lock, u64 o, u64 n); 239 + extern long long __atomic64_cmpxchg(volatile long long *p, int *lock, 240 + long long o, long long n); 241 + extern long long __atomic64_xchg(volatile long long *p, int *lock, long long n); 242 + extern long long __atomic64_xchg_add(volatile long long *p, int *lock, 243 + long long n); 244 + extern long long __atomic64_xchg_add_unless(volatile long long *p, 245 + int *lock, long long o, long long n); 245 246 246 247 /* Return failure from the atomic wrappers. */ 247 248 struct __get_user __atomic_bad_address(int __user *addr);
+17 -11
arch/tile/include/asm/cmpxchg.h
··· 35 35 int _atomic_xchg_add(int *v, int i); 36 36 int _atomic_xchg_add_unless(int *v, int a, int u); 37 37 int _atomic_cmpxchg(int *ptr, int o, int n); 38 - u64 _atomic64_xchg(u64 *v, u64 n); 39 - u64 _atomic64_xchg_add(u64 *v, u64 i); 40 - u64 _atomic64_xchg_add_unless(u64 *v, u64 a, u64 u); 41 - u64 _atomic64_cmpxchg(u64 *v, u64 o, u64 n); 38 + long long _atomic64_xchg(long long *v, long long n); 39 + long long _atomic64_xchg_add(long long *v, long long i); 40 + long long _atomic64_xchg_add_unless(long long *v, long long a, long long u); 41 + long long _atomic64_cmpxchg(long long *v, long long o, long long n); 42 42 43 43 #define xchg(ptr, n) \ 44 44 ({ \ ··· 53 53 if (sizeof(*(ptr)) != 4) \ 54 54 __cmpxchg_called_with_bad_pointer(); \ 55 55 smp_mb(); \ 56 - (typeof(*(ptr)))_atomic_cmpxchg((int *)ptr, (int)o, (int)n); \ 56 + (typeof(*(ptr)))_atomic_cmpxchg((int *)ptr, (int)o, \ 57 + (int)n); \ 57 58 }) 58 59 59 60 #define xchg64(ptr, n) \ ··· 62 61 if (sizeof(*(ptr)) != 8) \ 63 62 __xchg_called_with_bad_pointer(); \ 64 63 smp_mb(); \ 65 - (typeof(*(ptr)))_atomic64_xchg((u64 *)(ptr), (u64)(n)); \ 64 + (typeof(*(ptr)))_atomic64_xchg((long long *)(ptr), \ 65 + (long long)(n)); \ 66 66 }) 67 67 68 68 #define cmpxchg64(ptr, o, n) \ ··· 71 69 if (sizeof(*(ptr)) != 8) \ 72 70 __cmpxchg_called_with_bad_pointer(); \ 73 71 smp_mb(); \ 74 - (typeof(*(ptr)))_atomic64_cmpxchg((u64 *)ptr, (u64)o, (u64)n); \ 72 + (typeof(*(ptr)))_atomic64_cmpxchg((long long *)ptr, \ 73 + (long long)o, (long long)n); \ 75 74 }) 76 75 77 76 #else ··· 84 81 switch (sizeof(*(ptr))) { \ 85 82 case 4: \ 86 83 __x = (typeof(__x))(unsigned long) \ 87 - __insn_exch4((ptr), (u32)(unsigned long)(n)); \ 84 + __insn_exch4((ptr), \ 85 + (u32)(unsigned long)(n)); \ 88 86 break; \ 89 87 case 8: \ 90 - __x = (typeof(__x)) \ 88 + __x = (typeof(__x)) \ 91 89 __insn_exch((ptr), (unsigned long)(n)); \ 92 90 break; \ 93 91 default: \ ··· 107 103 switch (sizeof(*(ptr))) { \ 108 104 case 4: \ 109 105 __x = 
(typeof(__x))(unsigned long) \ 110 - __insn_cmpexch4((ptr), (u32)(unsigned long)(n)); \ 106 + __insn_cmpexch4((ptr), \ 107 + (u32)(unsigned long)(n)); \ 111 108 break; \ 112 109 case 8: \ 113 - __x = (typeof(__x))__insn_cmpexch((ptr), (u64)(n)); \ 110 + __x = (typeof(__x))__insn_cmpexch((ptr), \ 111 + (long long)(n)); \ 114 112 break; \ 115 113 default: \ 116 114 __cmpxchg_called_with_bad_pointer(); \
+31 -3
arch/tile/include/asm/percpu.h
··· 15 15 #ifndef _ASM_TILE_PERCPU_H 16 16 #define _ASM_TILE_PERCPU_H 17 17 18 - register unsigned long __my_cpu_offset __asm__("tp"); 19 - #define __my_cpu_offset __my_cpu_offset 20 - #define set_my_cpu_offset(tp) (__my_cpu_offset = (tp)) 18 + register unsigned long my_cpu_offset_reg asm("tp"); 19 + 20 + #ifdef CONFIG_PREEMPT 21 + /* 22 + * For full preemption, we can't just use the register variable 23 + * directly, since we need barrier() to hazard against it, causing the 24 + * compiler to reload anything computed from a previous "tp" value. 25 + * But we also don't want to use volatile asm, since we'd like the 26 + * compiler to be able to cache the value across multiple percpu reads. 27 + * So we use a fake stack read as a hazard against barrier(). 28 + * The 'U' constraint is like 'm' but disallows postincrement. 29 + */ 30 + static inline unsigned long __my_cpu_offset(void) 31 + { 32 + unsigned long tp; 33 + register unsigned long *sp asm("sp"); 34 + asm("move %0, tp" : "=r" (tp) : "U" (*sp)); 35 + return tp; 36 + } 37 + #define __my_cpu_offset __my_cpu_offset() 38 + #else 39 + /* 40 + * We don't need to hazard against barrier() since "tp" doesn't ever 41 + * change with PREEMPT_NONE, and with PREEMPT_VOLUNTARY it only 42 + * changes at function call points, at which we are already re-reading 43 + * the value of "tp" due to "my_cpu_offset_reg" being a global variable. 44 + */ 45 + #define __my_cpu_offset my_cpu_offset_reg 46 + #endif 47 + 48 + #define set_my_cpu_offset(tp) (my_cpu_offset_reg = (tp)) 21 49 22 50 #include <asm-generic/percpu.h> 23 51
+3 -3
arch/tile/kernel/hardwall.c
··· 66 66 0, 67 67 "udn", 68 68 LIST_HEAD_INIT(hardwall_types[HARDWALL_UDN].list), 69 - __SPIN_LOCK_INITIALIZER(hardwall_types[HARDWALL_UDN].lock), 69 + __SPIN_LOCK_UNLOCKED(hardwall_types[HARDWALL_UDN].lock), 70 70 NULL 71 71 }, 72 72 #ifndef __tilepro__ ··· 77 77 1, /* disabled pending hypervisor support */ 78 78 "idn", 79 79 LIST_HEAD_INIT(hardwall_types[HARDWALL_IDN].list), 80 - __SPIN_LOCK_INITIALIZER(hardwall_types[HARDWALL_IDN].lock), 80 + __SPIN_LOCK_UNLOCKED(hardwall_types[HARDWALL_IDN].lock), 81 81 NULL 82 82 }, 83 83 { /* access to user-space IPI */ ··· 87 87 0, 88 88 "ipi", 89 89 LIST_HEAD_INIT(hardwall_types[HARDWALL_IPI].list), 90 - __SPIN_LOCK_INITIALIZER(hardwall_types[HARDWALL_IPI].lock), 90 + __SPIN_LOCK_UNLOCKED(hardwall_types[HARDWALL_IPI].lock), 91 91 NULL 92 92 }, 93 93 #endif
+3
arch/tile/kernel/intvec_32.S
··· 815 815 } 816 816 bzt r28, 1f 817 817 bnz r29, 1f 818 + /* Disable interrupts explicitly for preemption. */ 819 + IRQ_DISABLE(r20,r21) 820 + TRACE_IRQS_OFF 818 821 jal preempt_schedule_irq 819 822 FEEDBACK_REENTER(interrupt_return) 820 823 1:
+3
arch/tile/kernel/intvec_64.S
··· 841 841 } 842 842 beqzt r28, 1f 843 843 bnez r29, 1f 844 + /* Disable interrupts explicitly for preemption. */ 845 + IRQ_DISABLE(r20,r21) 846 + TRACE_IRQS_OFF 844 847 jal preempt_schedule_irq 845 848 FEEDBACK_REENTER(interrupt_return) 846 849 1:
+5 -7
arch/tile/kernel/stack.c
··· 23 23 #include <linux/mmzone.h> 24 24 #include <linux/dcache.h> 25 25 #include <linux/fs.h> 26 + #include <linux/string.h> 26 27 #include <asm/backtrace.h> 27 28 #include <asm/page.h> 28 29 #include <asm/ucontext.h> ··· 333 332 } 334 333 335 334 if (vma->vm_file) { 336 - char *s; 337 335 p = d_path(&vma->vm_file->f_path, buf, bufsize); 338 336 if (IS_ERR(p)) 339 337 p = "?"; 340 - s = strrchr(p, '/'); 341 - if (s) 342 - p = s+1; 338 + name = kbasename(p); 343 339 } else { 344 - p = "anon"; 340 + name = "anon"; 345 341 } 346 342 347 343 /* Generate a string description of the vma info. */ 348 - namelen = strlen(p); 344 + namelen = strlen(name); 349 345 remaining = (bufsize - 1) - namelen; 350 - memmove(buf, p, namelen); 346 + memmove(buf, name, namelen); 351 347 snprintf(buf + namelen, remaining, "[%lx+%lx] ", 352 348 vma->vm_start, vma->vm_end - vma->vm_start); 353 349 }
+4 -4
arch/tile/lib/atomic_32.c
··· 107 107 EXPORT_SYMBOL(_atomic_xor); 108 108 109 109 110 - u64 _atomic64_xchg(u64 *v, u64 n) 110 + long long _atomic64_xchg(long long *v, long long n) 111 111 { 112 112 return __atomic64_xchg(v, __atomic_setup(v), n); 113 113 } 114 114 EXPORT_SYMBOL(_atomic64_xchg); 115 115 116 - u64 _atomic64_xchg_add(u64 *v, u64 i) 116 + long long _atomic64_xchg_add(long long *v, long long i) 117 117 { 118 118 return __atomic64_xchg_add(v, __atomic_setup(v), i); 119 119 } 120 120 EXPORT_SYMBOL(_atomic64_xchg_add); 121 121 122 - u64 _atomic64_xchg_add_unless(u64 *v, u64 a, u64 u) 122 + long long _atomic64_xchg_add_unless(long long *v, long long a, long long u) 123 123 { 124 124 /* 125 125 * Note: argument order is switched here since it is easier ··· 130 130 } 131 131 EXPORT_SYMBOL(_atomic64_xchg_add_unless); 132 132 133 - u64 _atomic64_cmpxchg(u64 *v, u64 o, u64 n) 133 + long long _atomic64_cmpxchg(long long *v, long long o, long long n) 134 134 { 135 135 return __atomic64_cmpxchg(v, __atomic_setup(v), o, n); 136 136 }
+4 -3
arch/x86/Kconfig
··· 860 860 861 861 config X86_UP_APIC 862 862 bool "Local APIC support on uniprocessors" 863 - depends on X86_32 && !SMP && !X86_32_NON_STANDARD 863 + depends on X86_32 && !SMP && !X86_32_NON_STANDARD && !PCI_MSI 864 864 ---help--- 865 865 A local APIC (Advanced Programmable Interrupt Controller) is an 866 866 integrated interrupt controller in the CPU. If you have a single-CPU ··· 885 885 886 886 config X86_LOCAL_APIC 887 887 def_bool y 888 - depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC 888 + depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC || PCI_MSI 889 889 890 890 config X86_IO_APIC 891 891 def_bool y 892 - depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_IOAPIC 892 + depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_IOAPIC || PCI_MSI 893 893 894 894 config X86_VISWS_APIC 895 895 def_bool y ··· 1033 1033 1034 1034 config MICROCODE 1035 1035 tristate "CPU microcode loading support" 1036 + depends on CPU_SUP_AMD || CPU_SUP_INTEL 1036 1037 select FW_LOADER 1037 1038 ---help--- 1038 1039
+3 -3
arch/x86/include/asm/cpufeature.h
··· 374 374 * Catch too early usage of this before alternatives 375 375 * have run. 376 376 */ 377 - asm goto("1: jmp %l[t_warn]\n" 377 + asm_volatile_goto("1: jmp %l[t_warn]\n" 378 378 "2:\n" 379 379 ".section .altinstructions,\"a\"\n" 380 380 " .long 1b - .\n" ··· 388 388 389 389 #endif 390 390 391 - asm goto("1: jmp %l[t_no]\n" 391 + asm_volatile_goto("1: jmp %l[t_no]\n" 392 392 "2:\n" 393 393 ".section .altinstructions,\"a\"\n" 394 394 " .long 1b - .\n" ··· 453 453 * have. Thus, we force the jump to the widest, 4-byte, signed relative 454 454 * offset even though the last would often fit in less bytes. 455 455 */ 456 - asm goto("1: .byte 0xe9\n .long %l[t_dynamic] - 2f\n" 456 + asm_volatile_goto("1: .byte 0xe9\n .long %l[t_dynamic] - 2f\n" 457 457 "2:\n" 458 458 ".section .altinstructions,\"a\"\n" 459 459 " .long 1b - .\n" /* src offset */
+1 -1
arch/x86/include/asm/jump_label.h
··· 18 18 19 19 static __always_inline bool arch_static_branch(struct static_key *key) 20 20 { 21 - asm goto("1:" 21 + asm_volatile_goto("1:" 22 22 ".byte " __stringify(STATIC_KEY_INIT_NOP) "\n\t" 23 23 ".pushsection __jump_table, \"aw\" \n\t" 24 24 _ASM_ALIGN "\n\t"
+2 -2
arch/x86/include/asm/mutex_64.h
··· 20 20 static inline void __mutex_fastpath_lock(atomic_t *v, 21 21 void (*fail_fn)(atomic_t *)) 22 22 { 23 - asm volatile goto(LOCK_PREFIX " decl %0\n" 23 + asm_volatile_goto(LOCK_PREFIX " decl %0\n" 24 24 " jns %l[exit]\n" 25 25 : : "m" (v->counter) 26 26 : "memory", "cc" ··· 75 75 static inline void __mutex_fastpath_unlock(atomic_t *v, 76 76 void (*fail_fn)(atomic_t *)) 77 77 { 78 - asm volatile goto(LOCK_PREFIX " incl %0\n" 78 + asm_volatile_goto(LOCK_PREFIX " incl %0\n" 79 79 " jg %l[exit]\n" 80 80 : : "m" (v->counter) 81 81 : "memory", "cc"
+1 -1
arch/x86/kernel/apic/x2apic_uv_x.c
··· 113 113 break; 114 114 case UV3_HUB_PART_NUMBER: 115 115 case UV3_HUB_PART_NUMBER_X: 116 - uv_min_hub_revision_id += UV3_HUB_REVISION_BASE - 1; 116 + uv_min_hub_revision_id += UV3_HUB_REVISION_BASE; 117 117 break; 118 118 } 119 119
+3 -8
arch/x86/kernel/cpu/perf_event.c
··· 1888 1888 userpg->cap_user_rdpmc = x86_pmu.attr_rdpmc; 1889 1889 userpg->pmc_width = x86_pmu.cntval_bits; 1890 1890 1891 - if (!boot_cpu_has(X86_FEATURE_CONSTANT_TSC)) 1892 - return; 1893 - 1894 - if (!boot_cpu_has(X86_FEATURE_NONSTOP_TSC)) 1891 + if (!sched_clock_stable) 1895 1892 return; 1896 1893 1897 1894 userpg->cap_user_time = 1; ··· 1896 1899 userpg->time_shift = CYC2NS_SCALE_FACTOR; 1897 1900 userpg->time_offset = this_cpu_read(cyc2ns_offset) - now; 1898 1901 1899 - if (sched_clock_stable && !check_tsc_disabled()) { 1900 - userpg->cap_user_time_zero = 1; 1901 - userpg->time_zero = this_cpu_read(cyc2ns_offset); 1902 - } 1902 + userpg->cap_user_time_zero = 1; 1903 + userpg->time_zero = this_cpu_read(cyc2ns_offset); 1903 1904 } 1904 1905 1905 1906 /*
+15 -4
arch/x86/kernel/kvm.c
··· 775 775 if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) 776 776 return; 777 777 778 - printk(KERN_INFO "KVM setup paravirtual spinlock\n"); 779 - 780 - static_key_slow_inc(&paravirt_ticketlocks_enabled); 781 - 782 778 pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning); 783 779 pv_lock_ops.unlock_kick = kvm_unlock_kick; 784 780 } 781 + 782 + static __init int kvm_spinlock_init_jump(void) 783 + { 784 + if (!kvm_para_available()) 785 + return 0; 786 + if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) 787 + return 0; 788 + 789 + static_key_slow_inc(&paravirt_ticketlocks_enabled); 790 + printk(KERN_INFO "KVM setup paravirtual spinlock\n"); 791 + 792 + return 0; 793 + } 794 + early_initcall(kvm_spinlock_init_jump); 795 + 785 796 #endif /* CONFIG_PARAVIRT_SPINLOCKS */
+8
arch/x86/kernel/reboot.c
··· 326 326 DMI_MATCH(DMI_PRODUCT_NAME, "Latitude E6320"), 327 327 }, 328 328 }, 329 + { /* Handle problems with rebooting on the Latitude E5410. */ 330 + .callback = set_pci_reboot, 331 + .ident = "Dell Latitude E5410", 332 + .matches = { 333 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 334 + DMI_MATCH(DMI_PRODUCT_NAME, "Latitude E5410"), 335 + }, 336 + }, 329 337 { /* Handle problems with rebooting on the Latitude E5420. */ 330 338 .callback = set_pci_reboot, 331 339 .ident = "Dell Latitude E5420",
+2 -2
arch/x86/kernel/sysfb_simplefb.c
··· 72 72 * the part that is occupied by the framebuffer */ 73 73 len = mode->height * mode->stride; 74 74 len = PAGE_ALIGN(len); 75 - if (len > si->lfb_size << 16) { 75 + if (len > (u64)si->lfb_size << 16) { 76 76 printk(KERN_WARNING "sysfb: VRAM smaller than advertised\n"); 77 77 return -EINVAL; 78 78 } 79 79 80 80 /* setup IORESOURCE_MEM as framebuffer memory */ 81 81 memset(&res, 0, sizeof(res)); 82 - res.flags = IORESOURCE_MEM; 82 + res.flags = IORESOURCE_MEM | IORESOURCE_BUSY; 83 83 res.name = simplefb_resname; 84 84 res.start = si->lfb_base; 85 85 res.end = si->lfb_base + len - 1;
+12 -12
arch/x86/kvm/vmx.c
··· 3255 3255 3256 3256 static void ept_load_pdptrs(struct kvm_vcpu *vcpu) 3257 3257 { 3258 + struct kvm_mmu *mmu = vcpu->arch.walk_mmu; 3259 + 3258 3260 if (!test_bit(VCPU_EXREG_PDPTR, 3259 3261 (unsigned long *)&vcpu->arch.regs_dirty)) 3260 3262 return; 3261 3263 3262 3264 if (is_paging(vcpu) && is_pae(vcpu) && !is_long_mode(vcpu)) { 3263 - vmcs_write64(GUEST_PDPTR0, vcpu->arch.mmu.pdptrs[0]); 3264 - vmcs_write64(GUEST_PDPTR1, vcpu->arch.mmu.pdptrs[1]); 3265 - vmcs_write64(GUEST_PDPTR2, vcpu->arch.mmu.pdptrs[2]); 3266 - vmcs_write64(GUEST_PDPTR3, vcpu->arch.mmu.pdptrs[3]); 3265 + vmcs_write64(GUEST_PDPTR0, mmu->pdptrs[0]); 3266 + vmcs_write64(GUEST_PDPTR1, mmu->pdptrs[1]); 3267 + vmcs_write64(GUEST_PDPTR2, mmu->pdptrs[2]); 3268 + vmcs_write64(GUEST_PDPTR3, mmu->pdptrs[3]); 3267 3269 } 3268 3270 } 3269 3271 3270 3272 static void ept_save_pdptrs(struct kvm_vcpu *vcpu) 3271 3273 { 3274 + struct kvm_mmu *mmu = vcpu->arch.walk_mmu; 3275 + 3272 3276 if (is_paging(vcpu) && is_pae(vcpu) && !is_long_mode(vcpu)) { 3273 - vcpu->arch.mmu.pdptrs[0] = vmcs_read64(GUEST_PDPTR0); 3274 - vcpu->arch.mmu.pdptrs[1] = vmcs_read64(GUEST_PDPTR1); 3275 - vcpu->arch.mmu.pdptrs[2] = vmcs_read64(GUEST_PDPTR2); 3276 - vcpu->arch.mmu.pdptrs[3] = vmcs_read64(GUEST_PDPTR3); 3277 + mmu->pdptrs[0] = vmcs_read64(GUEST_PDPTR0); 3278 + mmu->pdptrs[1] = vmcs_read64(GUEST_PDPTR1); 3279 + mmu->pdptrs[2] = vmcs_read64(GUEST_PDPTR2); 3280 + mmu->pdptrs[3] = vmcs_read64(GUEST_PDPTR3); 3277 3281 } 3278 3282 3279 3283 __set_bit(VCPU_EXREG_PDPTR, ··· 7781 7777 vmcs_write64(GUEST_PDPTR1, vmcs12->guest_pdptr1); 7782 7778 vmcs_write64(GUEST_PDPTR2, vmcs12->guest_pdptr2); 7783 7779 vmcs_write64(GUEST_PDPTR3, vmcs12->guest_pdptr3); 7784 - __clear_bit(VCPU_EXREG_PDPTR, 7785 - (unsigned long *)&vcpu->arch.regs_avail); 7786 - __clear_bit(VCPU_EXREG_PDPTR, 7787 - (unsigned long *)&vcpu->arch.regs_dirty); 7788 7780 } 7789 7781 7790 7782 kvm_register_write(vcpu, VCPU_REGS_RSP, vmcs12->guest_rsp);
+6 -1
arch/x86/pci/mmconfig-shared.c
··· 700 700 if (!(pci_probe & PCI_PROBE_MMCONF) || pci_mmcfg_arch_init_failed) 701 701 return -ENODEV; 702 702 703 - if (start > end || !addr) 703 + if (start > end) 704 704 return -EINVAL; 705 705 706 706 mutex_lock(&pci_mmcfg_lock); ··· 714 714 cfg->segment, cfg->start_bus, cfg->end_bus); 715 715 mutex_unlock(&pci_mmcfg_lock); 716 716 return -EEXIST; 717 + } 718 + 719 + if (!addr) { 720 + mutex_unlock(&pci_mmcfg_lock); 721 + return -EINVAL; 717 722 } 718 723 719 724 rc = -EBUSY;
+9
arch/x86/xen/smp.c
··· 278 278 old memory can be recycled */ 279 279 make_lowmem_page_readwrite(xen_initial_gdt); 280 280 281 + #ifdef CONFIG_X86_32 282 + /* 283 + * Xen starts us with XEN_FLAT_RING1_DS, but linux code 284 + * expects __USER_DS 285 + */ 286 + loadsegment(ds, __USER_DS); 287 + loadsegment(es, __USER_DS); 288 + #endif 289 + 281 290 xen_filter_cpu_maps(); 282 291 xen_setup_vcpu_info_placement(); 283 292 }
+6 -1
block/partitions/efi.c
··· 222 222 * the disk size. 223 223 * 224 224 * Hybrid MBRs do not necessarily comply with this. 225 + * 226 + * Consider a bad value here to be a warning to support dd'ing 227 + * an image from a smaller disk to a larger disk. 225 228 */ 226 229 if (ret == GPT_MBR_PROTECTIVE) { 227 230 sz = le32_to_cpu(mbr->partition_record[part].size_in_lba); 228 231 if (sz != (uint32_t) total_sectors - 1 && sz != 0xFFFFFFFF) 229 - ret = 0; 232 + pr_debug("GPT: mbr size in lba (%u) different than whole disk (%u).\n", 233 + sz, min_t(uint32_t, 234 + total_sectors - 1, 0xFFFFFFFF)); 230 235 } 231 236 done: 232 237 return ret;
+4 -4
drivers/acpi/Kconfig
··· 24 24 are configured, ACPI is used. 25 25 26 26 The project home page for the Linux ACPI subsystem is here: 27 - <http://www.lesswatts.org/projects/acpi/> 27 + <https://01.org/linux-acpi> 28 28 29 29 Linux support for ACPI is based on Intel Corporation's ACPI 30 30 Component Architecture (ACPI CA). For more information on the ··· 123 123 default y 124 124 help 125 125 This driver handles events on the power, sleep, and lid buttons. 126 - A daemon reads /proc/acpi/event and perform user-defined actions 127 - such as shutting down the system. This is necessary for 128 - software-controlled poweroff. 126 + A daemon reads events from input devices or via netlink and 127 + performs user-defined actions such as shutting down the system. 128 + This is necessary for software-controlled poweroff. 129 129 130 130 To compile this driver as a module, choose M here: 131 131 the module will be called button.
-56
drivers/acpi/device_pm.c
··· 1025 1025 } 1026 1026 } 1027 1027 EXPORT_SYMBOL_GPL(acpi_dev_pm_detach); 1028 - 1029 - /** 1030 - * acpi_dev_pm_add_dependent - Add physical device depending for PM. 1031 - * @handle: Handle of ACPI device node. 1032 - * @depdev: Device depending on that node for PM. 1033 - */ 1034 - void acpi_dev_pm_add_dependent(acpi_handle handle, struct device *depdev) 1035 - { 1036 - struct acpi_device_physical_node *dep; 1037 - struct acpi_device *adev; 1038 - 1039 - if (!depdev || acpi_bus_get_device(handle, &adev)) 1040 - return; 1041 - 1042 - mutex_lock(&adev->physical_node_lock); 1043 - 1044 - list_for_each_entry(dep, &adev->power_dependent, node) 1045 - if (dep->dev == depdev) 1046 - goto out; 1047 - 1048 - dep = kzalloc(sizeof(*dep), GFP_KERNEL); 1049 - if (dep) { 1050 - dep->dev = depdev; 1051 - list_add_tail(&dep->node, &adev->power_dependent); 1052 - } 1053 - 1054 - out: 1055 - mutex_unlock(&adev->physical_node_lock); 1056 - } 1057 - EXPORT_SYMBOL_GPL(acpi_dev_pm_add_dependent); 1058 - 1059 - /** 1060 - * acpi_dev_pm_remove_dependent - Remove physical device depending for PM. 1061 - * @handle: Handle of ACPI device node. 1062 - * @depdev: Device depending on that node for PM. 1063 - */ 1064 - void acpi_dev_pm_remove_dependent(acpi_handle handle, struct device *depdev) 1065 - { 1066 - struct acpi_device_physical_node *dep; 1067 - struct acpi_device *adev; 1068 - 1069 - if (!depdev || acpi_bus_get_device(handle, &adev)) 1070 - return; 1071 - 1072 - mutex_lock(&adev->physical_node_lock); 1073 - 1074 - list_for_each_entry(dep, &adev->power_dependent, node) 1075 - if (dep->dev == depdev) { 1076 - list_del(&dep->node); 1077 - kfree(dep); 1078 - break; 1079 - } 1080 - 1081 - mutex_unlock(&adev->physical_node_lock); 1082 - } 1083 - EXPORT_SYMBOL_GPL(acpi_dev_pm_remove_dependent); 1084 1028 #endif /* CONFIG_PM */
+4 -100
drivers/acpi/power.c
··· 59 59 #define ACPI_POWER_RESOURCE_STATE_ON 0x01 60 60 #define ACPI_POWER_RESOURCE_STATE_UNKNOWN 0xFF 61 61 62 - struct acpi_power_dependent_device { 63 - struct list_head node; 64 - struct acpi_device *adev; 65 - struct work_struct work; 66 - }; 67 - 68 62 struct acpi_power_resource { 69 63 struct acpi_device device; 70 64 struct list_head list_node; 71 - struct list_head dependent; 72 65 char *name; 73 66 u32 system_level; 74 67 u32 order; ··· 226 233 return 0; 227 234 } 228 235 229 - static void acpi_power_resume_dependent(struct work_struct *work) 230 - { 231 - struct acpi_power_dependent_device *dep; 232 - struct acpi_device_physical_node *pn; 233 - struct acpi_device *adev; 234 - int state; 235 - 236 - dep = container_of(work, struct acpi_power_dependent_device, work); 237 - adev = dep->adev; 238 - if (acpi_power_get_inferred_state(adev, &state)) 239 - return; 240 - 241 - if (state > ACPI_STATE_D0) 242 - return; 243 - 244 - mutex_lock(&adev->physical_node_lock); 245 - 246 - list_for_each_entry(pn, &adev->physical_node_list, node) 247 - pm_request_resume(pn->dev); 248 - 249 - list_for_each_entry(pn, &adev->power_dependent, node) 250 - pm_request_resume(pn->dev); 251 - 252 - mutex_unlock(&adev->physical_node_lock); 253 - } 254 - 255 236 static int __acpi_power_on(struct acpi_power_resource *resource) 256 237 { 257 238 acpi_status status = AE_OK; ··· 250 283 resource->name)); 251 284 } else { 252 285 result = __acpi_power_on(resource); 253 - if (result) { 286 + if (result) 254 287 resource->ref_count--; 255 - } else { 256 - struct acpi_power_dependent_device *dep; 257 - 258 - list_for_each_entry(dep, &resource->dependent, node) 259 - schedule_work(&dep->work); 260 - } 261 288 } 262 289 return result; 263 290 } ··· 351 390 return result; 352 391 } 353 392 354 - static void acpi_power_add_dependent(struct acpi_power_resource *resource, 355 - struct acpi_device *adev) 356 - { 357 - struct acpi_power_dependent_device *dep; 358 - 359 - 
mutex_lock(&resource->resource_lock); 360 - 361 - list_for_each_entry(dep, &resource->dependent, node) 362 - if (dep->adev == adev) 363 - goto out; 364 - 365 - dep = kzalloc(sizeof(*dep), GFP_KERNEL); 366 - if (!dep) 367 - goto out; 368 - 369 - dep->adev = adev; 370 - INIT_WORK(&dep->work, acpi_power_resume_dependent); 371 - list_add_tail(&dep->node, &resource->dependent); 372 - 373 - out: 374 - mutex_unlock(&resource->resource_lock); 375 - } 376 - 377 - static void acpi_power_remove_dependent(struct acpi_power_resource *resource, 378 - struct acpi_device *adev) 379 - { 380 - struct acpi_power_dependent_device *dep; 381 - struct work_struct *work = NULL; 382 - 383 - mutex_lock(&resource->resource_lock); 384 - 385 - list_for_each_entry(dep, &resource->dependent, node) 386 - if (dep->adev == adev) { 387 - list_del(&dep->node); 388 - work = &dep->work; 389 - break; 390 - } 391 - 392 - mutex_unlock(&resource->resource_lock); 393 - 394 - if (work) { 395 - cancel_work_sync(work); 396 - kfree(dep); 397 - } 398 - } 399 - 400 393 static struct attribute *attrs[] = { 401 394 NULL, 402 395 }; ··· 439 524 440 525 void acpi_power_add_remove_device(struct acpi_device *adev, bool add) 441 526 { 442 - struct acpi_device_power_state *ps; 443 - struct acpi_power_resource_entry *entry; 444 527 int state; 445 528 446 529 if (adev->wakeup.flags.valid) ··· 447 534 448 535 if (!adev->power.flags.power_resources) 449 536 return; 450 - 451 - ps = &adev->power.states[ACPI_STATE_D0]; 452 - list_for_each_entry(entry, &ps->resources, node) { 453 - struct acpi_power_resource *resource = entry->resource; 454 - 455 - if (add) 456 - acpi_power_add_dependent(resource, adev); 457 - else 458 - acpi_power_remove_dependent(resource, adev); 459 - } 460 537 461 538 for (state = ACPI_STATE_D0; state <= ACPI_STATE_D3_HOT; state++) 462 539 acpi_power_expose_hide(adev, ··· 785 882 acpi_init_device_object(device, handle, ACPI_BUS_TYPE_POWER, 786 883 ACPI_STA_DEFAULT); 787 884 
mutex_init(&resource->resource_lock); 788 - INIT_LIST_HEAD(&resource->dependent); 789 885 INIT_LIST_HEAD(&resource->list_node); 790 886 resource->name = device->pnp.bus_id; 791 887 strcpy(acpi_device_name(device), ACPI_POWER_DEVICE_NAME); ··· 838 936 mutex_lock(&resource->resource_lock); 839 937 840 938 result = acpi_power_get_state(resource->device.handle, &state); 841 - if (result) 939 + if (result) { 940 + mutex_unlock(&resource->resource_lock); 842 941 continue; 942 + } 843 943 844 944 if (state == ACPI_POWER_RESOURCE_STATE_OFF 845 945 && resource->ref_count) {
+1 -2
drivers/acpi/scan.c
··· 968 968 } 969 969 return 0; 970 970 } 971 - EXPORT_SYMBOL_GPL(acpi_bus_get_device); 971 + EXPORT_SYMBOL(acpi_bus_get_device); 972 972 973 973 int acpi_device_add(struct acpi_device *device, 974 974 void (*release)(struct device *)) ··· 999 999 INIT_LIST_HEAD(&device->wakeup_list); 1000 1000 INIT_LIST_HEAD(&device->physical_node_list); 1001 1001 mutex_init(&device->physical_node_lock); 1002 - INIT_LIST_HEAD(&device->power_dependent); 1003 1002 1004 1003 new_bus_id = kzalloc(sizeof(struct acpi_device_bus_id), GFP_KERNEL); 1005 1004 if (!new_bus_id) {
+1 -1
drivers/ata/ahci.c
··· 1343 1343 if (!(hpriv->cap & HOST_CAP_SSS) || ahci_ignore_sss) 1344 1344 host->flags |= ATA_HOST_PARALLEL_SCAN; 1345 1345 else 1346 - printk(KERN_INFO "ahci: SSS flag set, parallel bus scan disabled\n"); 1346 + dev_info(&pdev->dev, "SSS flag set, parallel bus scan disabled\n"); 1347 1347 1348 1348 if (pi.flags & ATA_FLAG_EM) 1349 1349 ahci_reset_em(host);
+1 -1
drivers/ata/ahci_platform.c
··· 184 184 if (!(hpriv->cap & HOST_CAP_SSS) || ahci_ignore_sss) 185 185 host->flags |= ATA_HOST_PARALLEL_SCAN; 186 186 else 187 - printk(KERN_INFO "ahci: SSS flag set, parallel bus scan disabled\n"); 187 + dev_info(dev, "SSS flag set, parallel bus scan disabled\n"); 188 188 189 189 if (pi.flags & ATA_FLAG_EM) 190 190 ahci_reset_em(host);
+9 -1
drivers/ata/libahci.c
··· 778 778 rc = ap->ops->transmit_led_message(ap, 779 779 emp->led_state, 780 780 4); 781 + /* 782 + * If busy, give a breather but do not 783 + * release EH ownership by using msleep() 784 + * instead of ata_msleep(). EM Transmit 785 + * bit is busy for the whole host and 786 + * releasing ownership will cause other 787 + * ports to fail the same way. 788 + */ 781 789 if (rc == -EBUSY) 782 - ata_msleep(ap, 1); 790 + msleep(1); 783 791 else 784 792 break; 785 793 }
-14
drivers/ata/libata-acpi.c
··· 1035 1035 { 1036 1036 ata_acpi_clear_gtf(dev); 1037 1037 } 1038 - 1039 - void ata_scsi_acpi_bind(struct ata_device *dev) 1040 - { 1041 - acpi_handle handle = ata_dev_acpi_handle(dev); 1042 - if (handle) 1043 - acpi_dev_pm_add_dependent(handle, &dev->sdev->sdev_gendev); 1044 - } 1045 - 1046 - void ata_scsi_acpi_unbind(struct ata_device *dev) 1047 - { 1048 - acpi_handle handle = ata_dev_acpi_handle(dev); 1049 - if (handle) 1050 - acpi_dev_pm_remove_dependent(handle, &dev->sdev->sdev_gendev); 1051 - }
+3 -3
drivers/ata/libata-eh.c
··· 1322 1322 * should be retried. To be used from EH. 1323 1323 * 1324 1324 * SCSI midlayer limits the number of retries to scmd->allowed. 1325 - * scmd->retries is decremented for commands which get retried 1325 + * scmd->allowed is incremented for commands which get retried 1326 1326 * due to unrelated failures (qc->err_mask is zero). 1327 1327 */ 1328 1328 void ata_eh_qc_retry(struct ata_queued_cmd *qc) 1329 1329 { 1330 1330 struct scsi_cmnd *scmd = qc->scsicmd; 1331 - if (!qc->err_mask && scmd->retries) 1332 - scmd->retries--; 1331 + if (!qc->err_mask) 1332 + scmd->allowed++; 1333 1333 __ata_eh_qc_complete(qc); 1334 1334 } 1335 1335
-3
drivers/ata/libata-scsi.c
··· 3679 3679 if (!IS_ERR(sdev)) { 3680 3680 dev->sdev = sdev; 3681 3681 scsi_device_put(sdev); 3682 - ata_scsi_acpi_bind(dev); 3683 3682 } else { 3684 3683 dev->sdev = NULL; 3685 3684 } ··· 3765 3766 struct ata_port *ap = dev->link->ap; 3766 3767 struct scsi_device *sdev; 3767 3768 unsigned long flags; 3768 - 3769 - ata_scsi_acpi_unbind(dev); 3770 3769 3771 3770 /* Alas, we need to grab scan_mutex to ensure SCSI device 3772 3771 * state doesn't change underneath us and thus
-4
drivers/ata/libata.h
··· 121 121 extern void ata_acpi_bind_port(struct ata_port *ap); 122 122 extern void ata_acpi_bind_dev(struct ata_device *dev); 123 123 extern acpi_handle ata_dev_acpi_handle(struct ata_device *dev); 124 - extern void ata_scsi_acpi_bind(struct ata_device *dev); 125 - extern void ata_scsi_acpi_unbind(struct ata_device *dev); 126 124 #else 127 125 static inline void ata_acpi_dissociate(struct ata_host *host) { } 128 126 static inline int ata_acpi_on_suspend(struct ata_port *ap) { return 0; } ··· 131 133 pm_message_t state) { } 132 134 static inline void ata_acpi_bind_port(struct ata_port *ap) {} 133 135 static inline void ata_acpi_bind_dev(struct ata_device *dev) {} 134 - static inline void ata_scsi_acpi_bind(struct ata_device *dev) {} 135 - static inline void ata_scsi_acpi_unbind(struct ata_device *dev) {} 136 136 #endif 137 137 138 138 /* libata-scsi.c */
+1 -1
drivers/ata/pata_isapnp.c
··· 78 78 79 79 ap->ioaddr.cmd_addr = cmd_addr; 80 80 81 - if (pnp_port_valid(idev, 1) == 0) { 81 + if (pnp_port_valid(idev, 1)) { 82 82 ctl_addr = devm_ioport_map(&idev->dev, 83 83 pnp_port_start(idev, 1), 1); 84 84 ap->ioaddr.altstatus_addr = ctl_addr;
+5 -2
drivers/base/memory.c
··· 333 333 online_type = ONLINE_KEEP; 334 334 else if (!strncmp(buf, "offline", min_t(int, count, 7))) 335 335 online_type = -1; 336 - else 337 - return -EINVAL; 336 + else { 337 + ret = -EINVAL; 338 + goto err; 339 + } 338 340 339 341 switch (online_type) { 340 342 case ONLINE_KERNEL: ··· 359 357 ret = -EINVAL; /* should never happen */ 360 358 } 361 359 360 + err: 362 361 unlock_device_hotplug(); 363 362 364 363 if (ret)
+9 -3
drivers/bus/mvebu-mbus.c
··· 700 700 phys_addr_t sdramwins_phys_base, 701 701 size_t sdramwins_size) 702 702 { 703 + struct device_node *np; 703 704 int win; 704 705 705 706 mbus->mbuswins_base = ioremap(mbuswins_phys_base, mbuswins_size); ··· 713 712 return -ENOMEM; 714 713 } 715 714 716 - if (of_find_compatible_node(NULL, NULL, "marvell,coherency-fabric")) 715 + np = of_find_compatible_node(NULL, NULL, "marvell,coherency-fabric"); 716 + if (np) { 717 717 mbus->hw_io_coherency = 1; 718 + of_node_put(np); 719 + } 718 720 719 721 for (win = 0; win < mbus->soc->num_wins; win++) 720 722 mvebu_mbus_disable_window(mbus, win); ··· 865 861 int ret; 866 862 867 863 /* 868 - * These are optional, so we clear them and they'll 869 - * be zero if they are missing from the DT. 864 + * These are optional, so we make sure that resource_size(x) will 865 + * return 0. 870 866 */ 871 867 memset(mem, 0, sizeof(struct resource)); 868 + mem->end = -1; 872 869 memset(io, 0, sizeof(struct resource)); 870 + io->end = -1; 873 871 874 872 ret = of_property_read_u32_array(np, "pcie-mem-aperture", reg, ARRAY_SIZE(reg)); 875 873 if (!ret) {
+5 -6
drivers/char/random.c
··· 640 640 */ 641 641 void add_device_randomness(const void *buf, unsigned int size) 642 642 { 643 - unsigned long time = get_cycles() ^ jiffies; 643 + unsigned long time = random_get_entropy() ^ jiffies; 644 644 645 645 mix_pool_bytes(&input_pool, buf, size, NULL); 646 646 mix_pool_bytes(&input_pool, &time, sizeof(time), NULL); ··· 677 677 goto out; 678 678 679 679 sample.jiffies = jiffies; 680 - sample.cycles = get_cycles(); 680 + sample.cycles = random_get_entropy(); 681 681 sample.num = num; 682 682 mix_pool_bytes(&input_pool, &sample, sizeof(sample), NULL); 683 683 ··· 744 744 struct fast_pool *fast_pool = &__get_cpu_var(irq_randomness); 745 745 struct pt_regs *regs = get_irq_regs(); 746 746 unsigned long now = jiffies; 747 - __u32 input[4], cycles = get_cycles(); 747 + __u32 input[4], cycles = random_get_entropy(); 748 748 749 749 input[0] = cycles ^ jiffies; 750 750 input[1] = irq; ··· 1459 1459 1460 1460 static u32 random_int_secret[MD5_MESSAGE_BYTES / 4] ____cacheline_aligned; 1461 1461 1462 - static int __init random_int_secret_init(void) 1462 + int random_int_secret_init(void) 1463 1463 { 1464 1464 get_random_bytes(random_int_secret, sizeof(random_int_secret)); 1465 1465 return 0; 1466 1466 } 1467 - late_initcall(random_int_secret_init); 1468 1467 1469 1468 /* 1470 1469 * Get a random word for internal kernel use only. Similar to urandom but ··· 1482 1483 1483 1484 hash = get_cpu_var(get_random_int_hash); 1484 1485 1485 - hash[0] += current->pid + jiffies + get_cycles(); 1486 + hash[0] += current->pid + jiffies + random_get_entropy(); 1486 1487 md5_transform(hash, random_int_secret); 1487 1488 ret = hash[0]; 1488 1489 put_cpu_var(get_random_int_hash);
+1
drivers/char/tpm/xen-tpmfront.c
··· 10 10 #include <linux/errno.h> 11 11 #include <linux/err.h> 12 12 #include <linux/interrupt.h> 13 + #include <xen/xen.h> 13 14 #include <xen/events.h> 14 15 #include <xen/interface/io/tpmif.h> 15 16 #include <xen/grant_table.h>
+1 -1
drivers/cpufreq/cpufreq-cpu0.c
··· 229 229 if (of_property_read_u32(np, "clock-latency", &transition_latency)) 230 230 transition_latency = CPUFREQ_ETERNAL; 231 231 232 - if (cpu_reg) { 232 + if (!IS_ERR(cpu_reg)) { 233 233 struct opp *opp; 234 234 unsigned long min_uV, max_uV; 235 235 int i;
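This one-line change matters because `regulator_get()`-style APIs return an `ERR_PTR`-encoded pointer on failure, never NULL, so `if (cpu_reg)` is always true and the error path is never taken. A self-contained sketch of the ERR_PTR convention, mirroring the helpers in `<linux/err.h>`:

```c
#include <assert.h>
#include <errno.h>   /* ENODEV */

#define MAX_ERRNO 4095

/* Same trick as the kernel: small negative errno values are folded
 * into the top page of the address space, which no valid pointer
 * occupies, so one return value carries pointer-or-error. */
static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Stand-in for a failing regulator_get(): it returns an encoded
 * errno, not NULL, so a plain truthiness test cannot catch it. */
static void *fake_regulator_get(void)
{
	return ERR_PTR(-ENODEV);
}
```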
+8 -5
drivers/cpufreq/intel_pstate.c
··· 383 383 static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate) 384 384 { 385 385 int max_perf, min_perf; 386 + u64 val; 386 387 387 388 intel_pstate_get_min_max(cpu, &min_perf, &max_perf); 388 389 ··· 395 394 trace_cpu_frequency(pstate * 100000, cpu->cpu); 396 395 397 396 cpu->pstate.current_pstate = pstate; 398 - wrmsrl(MSR_IA32_PERF_CTL, pstate << 8); 397 + val = pstate << 8; 398 + if (limits.no_turbo) 399 + val |= (u64)1 << 32; 399 400 401 + wrmsrl(MSR_IA32_PERF_CTL, val); 400 402 } 401 403 402 404 static inline void intel_pstate_pstate_increase(struct cpudata *cpu, int steps) ··· 638 634 639 635 static int intel_pstate_cpu_init(struct cpufreq_policy *policy) 640 636 { 641 - int rc, min_pstate, max_pstate; 642 637 struct cpudata *cpu; 638 + int rc; 643 639 644 640 rc = intel_pstate_init_cpu(policy->cpu); 645 641 if (rc) ··· 653 649 else 654 650 policy->policy = CPUFREQ_POLICY_POWERSAVE; 655 651 656 - intel_pstate_get_min_max(cpu, &min_pstate, &max_pstate); 657 - policy->min = min_pstate * 100000; 658 - policy->max = max_pstate * 100000; 652 + policy->min = cpu->pstate.min_pstate * 100000; 653 + policy->max = cpu->pstate.turbo_pstate * 100000; 659 654 660 655 /* cpuinfo and default policy values */ 661 656 policy->cpuinfo.min_freq = cpu->pstate.min_pstate * 100000;
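The new `u64 val` in `intel_pstate_set_pstate()` is not cosmetic: per the hunk, the turbo-disable bit sits at bit 32 of the PERF_CTL value, and shifting a plain `int` constant by 32 is undefined behavior, so the constant must be widened before the shift. A sketch of the value construction (hypothetical helper, same bit layout as the hunk):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper mirroring the hunk: requested pstate goes in
 * bits 8+, and the no-turbo bit is bit 32, which only fits if the
 * constant is widened to 64 bits BEFORE shifting. */
static uint64_t build_perf_ctl(int pstate, int no_turbo)
{
	uint64_t val = (uint64_t)pstate << 8;

	if (no_turbo)
		val |= (uint64_t)1 << 32;	/* (1 << 32) on int is UB */

	return val;
}
```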
+1 -1
drivers/cpufreq/s3c64xx-cpufreq.c
··· 166 166 if (freq->frequency == CPUFREQ_ENTRY_INVALID) 167 167 continue; 168 168 169 - dvfs = &s3c64xx_dvfs_table[freq->index]; 169 + dvfs = &s3c64xx_dvfs_table[freq->driver_data]; 170 170 found = 0; 171 171 172 172 for (i = 0; i < count; i++) {
+1 -1
drivers/cpufreq/spear-cpufreq.c
··· 113 113 unsigned int target_freq, unsigned int relation) 114 114 { 115 115 struct cpufreq_freqs freqs; 116 - unsigned long newfreq; 116 + long newfreq; 117 117 struct clk *srcclk; 118 118 int index, ret, mult = 1; 119 119
+1
drivers/dma/Kconfig
··· 198 198 depends on ARCH_DAVINCI || ARCH_OMAP 199 199 select DMA_ENGINE 200 200 select DMA_VIRTUAL_CHANNELS 201 + select TI_PRIV_EDMA 201 202 default n 202 203 help 203 204 Enable support for the TI EDMA controller. This DMA
+2 -1
drivers/dma/edma.c
··· 306 306 EDMA_SLOT_ANY); 307 307 if (echan->slot[i] < 0) { 308 308 dev_err(dev, "Failed to allocate slot\n"); 309 + kfree(edesc); 309 310 return NULL; 310 311 } 311 312 } ··· 750 749 } 751 750 module_exit(edma_exit); 752 751 753 - MODULE_AUTHOR("Matt Porter <mporter@ti.com>"); 752 + MODULE_AUTHOR("Matt Porter <matt.porter@linaro.org>"); 754 753 MODULE_DESCRIPTION("TI EDMA DMA engine driver"); 755 754 MODULE_LICENSE("GPL v2");
+15 -16
drivers/dma/imx-dma.c
··· 437 437 struct imxdma_engine *imxdma = imxdmac->imxdma; 438 438 int chno = imxdmac->channel; 439 439 struct imxdma_desc *desc; 440 + unsigned long flags; 440 441 441 - spin_lock(&imxdma->lock); 442 + spin_lock_irqsave(&imxdma->lock, flags); 442 443 if (list_empty(&imxdmac->ld_active)) { 443 - spin_unlock(&imxdma->lock); 444 + spin_unlock_irqrestore(&imxdma->lock, flags); 444 445 goto out; 445 446 } 446 447 447 448 desc = list_first_entry(&imxdmac->ld_active, 448 449 struct imxdma_desc, 449 450 node); 450 - spin_unlock(&imxdma->lock); 451 + spin_unlock_irqrestore(&imxdma->lock, flags); 451 452 452 453 if (desc->sg) { 453 454 u32 tmp; ··· 520 519 { 521 520 struct imxdma_channel *imxdmac = to_imxdma_chan(d->desc.chan); 522 521 struct imxdma_engine *imxdma = imxdmac->imxdma; 523 - unsigned long flags; 524 522 int slot = -1; 525 523 int i; 526 524 ··· 527 527 switch (d->type) { 528 528 case IMXDMA_DESC_INTERLEAVED: 529 529 /* Try to get a free 2D slot */ 530 - spin_lock_irqsave(&imxdma->lock, flags); 531 530 for (i = 0; i < IMX_DMA_2D_SLOTS; i++) { 532 531 if ((imxdma->slots_2d[i].count > 0) && 533 532 ((imxdma->slots_2d[i].xsr != d->x) || ··· 536 537 slot = i; 537 538 break; 538 539 } 539 - if (slot < 0) { 540 - spin_unlock_irqrestore(&imxdma->lock, flags); 540 + if (slot < 0) 541 541 return -EBUSY; 542 - } 543 542 544 543 imxdma->slots_2d[slot].xsr = d->x; 545 544 imxdma->slots_2d[slot].ysr = d->y; ··· 546 549 547 550 imxdmac->slot_2d = slot; 548 551 imxdmac->enabled_2d = true; 549 - spin_unlock_irqrestore(&imxdma->lock, flags); 550 552 551 553 if (slot == IMX_DMA_2D_SLOT_A) { 552 554 d->config_mem &= ~CCR_MSEL_B; ··· 621 625 struct imxdma_channel *imxdmac = (void *)data; 622 626 struct imxdma_engine *imxdma = imxdmac->imxdma; 623 627 struct imxdma_desc *desc; 628 + unsigned long flags; 624 629 625 - spin_lock(&imxdma->lock); 630 + spin_lock_irqsave(&imxdma->lock, flags); 626 631 627 632 if (list_empty(&imxdmac->ld_active)) { 628 633 /* Someone might have called 
terminate all */ 629 - goto out; 634 + spin_unlock_irqrestore(&imxdma->lock, flags); 635 + return; 630 636 } 631 637 desc = list_first_entry(&imxdmac->ld_active, struct imxdma_desc, node); 632 - 633 - if (desc->desc.callback) 634 - desc->desc.callback(desc->desc.callback_param); 635 638 636 639 /* If we are dealing with a cyclic descriptor, keep it on ld_active 637 640 * and dont mark the descriptor as complete. ··· 658 663 __func__, imxdmac->channel); 659 664 } 660 665 out: 661 - spin_unlock(&imxdma->lock); 666 + spin_unlock_irqrestore(&imxdma->lock, flags); 667 + 668 + if (desc->desc.callback) 669 + desc->desc.callback(desc->desc.callback_param); 670 + 662 671 } 663 672 664 673 static int imxdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd, ··· 882 883 kfree(imxdmac->sg_list); 883 884 884 885 imxdmac->sg_list = kcalloc(periods + 1, 885 - sizeof(struct scatterlist), GFP_KERNEL); 886 + sizeof(struct scatterlist), GFP_ATOMIC); 886 887 if (!imxdmac->sg_list) 887 888 return NULL; 888 889
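The tasklet change above moves the completion callback out of the spinlock's critical section: invoking client code while holding the channel lock invites deadlock when the callback submits a new descriptor and retakes the same lock. A toy sketch of the corrected ordering (a flag stands in for the real spinlock so the callback's view can be inspected):

```c
#include <assert.h>

/* Toy lock: a flag we can observe from inside the callback. */
static int lock_held;
static void lock(void)   { lock_held = 1; }
static void unlock(void) { lock_held = 0; }

static int callback_saw_lock_held = -1;

static void completion_callback(void)
{
	/* Record what the callback observes; in the driver, a callback
	 * that retakes the channel lock here would deadlock if the
	 * tasklet still held it. */
	callback_saw_lock_held = lock_held;
}

/* Fixed ordering from the hunk: pick the descriptor under the lock,
 * drop the lock, and only then invoke the callback. */
static void tasklet_body(void)
{
	lock();
	/* ... dequeue desc from ld_active, update list state ... */
	unlock();
	completion_callback();
}
```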
+5 -4
drivers/dma/sh/rcar-hpbdma.c
··· 93 93 void __iomem *base; 94 94 const struct hpb_dmae_slave_config *cfg; 95 95 char dev_id[16]; /* unique name per DMAC of channel */ 96 + dma_addr_t slave_addr; 96 97 }; 97 98 98 99 struct hpb_dmae_device { ··· 433 432 hpb_chan->xfer_mode = XFER_DOUBLE; 434 433 } else { 435 434 dev_err(hpb_chan->shdma_chan.dev, "DCR setting error"); 436 - shdma_free_irq(&hpb_chan->shdma_chan); 437 435 return -EINVAL; 438 436 } 439 437 ··· 446 446 return 0; 447 447 } 448 448 449 - static int hpb_dmae_set_slave(struct shdma_chan *schan, int slave_id, bool try) 449 + static int hpb_dmae_set_slave(struct shdma_chan *schan, int slave_id, 450 + dma_addr_t slave_addr, bool try) 450 451 { 451 452 struct hpb_dmae_chan *chan = to_chan(schan); 452 453 const struct hpb_dmae_slave_config *sc = ··· 458 457 if (try) 459 458 return 0; 460 459 chan->cfg = sc; 460 + chan->slave_addr = slave_addr ? : sc->addr; 461 461 return hpb_dmae_alloc_chan_resources(chan, sc); 462 462 } 463 463 ··· 470 468 { 471 469 struct hpb_dmae_chan *chan = to_chan(schan); 472 470 473 - return chan->cfg->addr; 471 + return chan->slave_addr; 474 472 } 475 473 476 474 static struct shdma_desc *hpb_dmae_embedded_desc(void *buf, int i) ··· 616 614 shdma_for_each_chan(schan, &hpbdev->shdma_dev, i) { 617 615 BUG_ON(!schan); 618 616 619 - shdma_free_irq(schan); 620 617 shdma_chan_remove(schan); 621 618 } 622 619 dma_dev->chancnt = 0;
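The assignment `chan->slave_addr = slave_addr ? : sc->addr;` uses the GCC "omitted middle operand" extension: a nonzero caller-supplied address is used as-is, while 0 means "fall back to the per-slave default from the config table". The portable spelling of the same choice:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;

/* Portable equivalent of `slave_addr ? : sc_addr`: the address
 * passed by the caller wins; 0 selects the config-table default. */
static dma_addr_t pick_slave_addr(dma_addr_t slave_addr, dma_addr_t sc_addr)
{
	return slave_addr ? slave_addr : sc_addr;
}
```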
+3 -2
drivers/gpio/gpio-lynxpoint.c
··· 248 248 struct lp_gpio *lg = irq_data_get_irq_handler_data(data); 249 249 struct irq_chip *chip = irq_data_get_irq_chip(data); 250 250 u32 base, pin, mask; 251 - unsigned long reg, pending; 251 + unsigned long reg, ena, pending; 252 252 unsigned virq; 253 253 254 254 /* check from GPIO controller which pin triggered the interrupt */ 255 255 for (base = 0; base < lg->chip.ngpio; base += 32) { 256 256 reg = lp_gpio_reg(&lg->chip, base, LP_INT_STAT); 257 + ena = lp_gpio_reg(&lg->chip, base, LP_INT_ENABLE); 257 258 258 - while ((pending = inl(reg))) { 259 + while ((pending = (inl(reg) & inl(ena)))) { 259 260 pin = __ffs(pending); 260 261 mask = BIT(pin); 261 262 /* Clear before handling so we don't lose an edge */
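The fix above masks the raw status register with the enable register so a disabled pin that still latches its status bit cannot trigger spurious handling; the loop then walks the set bits with `__ffs`. A userspace sketch of the same walk, using `__builtin_ctzl` in place of `__ffs`:

```c
#include <assert.h>

/* Walk the set bits of (status & enabled), clearing each as it is
 * handled, mirroring the masked loop in the hunk. Records serviced
 * pins in order and returns how many were handled. */
static int service_pending(unsigned long status, unsigned long enabled,
			   int *pins, int max)
{
	unsigned long pending;
	int n = 0;

	while ((pending = (status & enabled)) != 0 && n < max) {
		int pin = __builtin_ctzl(pending);	/* like __ffs */
		unsigned long mask = 1UL << pin;

		pins[n++] = pin;
		status &= ~mask;	/* "clear before handling" */
	}
	return n;
}
```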
+101 -57
drivers/gpio/gpio-omap.c
··· 63 63 struct gpio_chip chip; 64 64 struct clk *dbck; 65 65 u32 mod_usage; 66 + u32 irq_usage; 66 67 u32 dbck_enable_mask; 67 68 bool dbck_enabled; 68 69 struct device *dev; ··· 86 85 #define GPIO_INDEX(bank, gpio) (gpio % bank->width) 87 86 #define GPIO_BIT(bank, gpio) (1 << GPIO_INDEX(bank, gpio)) 88 87 #define GPIO_MOD_CTRL_BIT BIT(0) 88 + 89 + #define BANK_USED(bank) (bank->mod_usage || bank->irq_usage) 90 + #define LINE_USED(line, offset) (line & (1 << offset)) 89 91 90 92 static int irq_to_gpio(struct gpio_bank *bank, unsigned int gpio_irq) 91 93 { ··· 424 420 return 0; 425 421 } 426 422 423 + static void _enable_gpio_module(struct gpio_bank *bank, unsigned offset) 424 + { 425 + if (bank->regs->pinctrl) { 426 + void __iomem *reg = bank->base + bank->regs->pinctrl; 427 + 428 + /* Claim the pin for MPU */ 429 + __raw_writel(__raw_readl(reg) | (1 << offset), reg); 430 + } 431 + 432 + if (bank->regs->ctrl && !BANK_USED(bank)) { 433 + void __iomem *reg = bank->base + bank->regs->ctrl; 434 + u32 ctrl; 435 + 436 + ctrl = __raw_readl(reg); 437 + /* Module is enabled, clocks are not gated */ 438 + ctrl &= ~GPIO_MOD_CTRL_BIT; 439 + __raw_writel(ctrl, reg); 440 + bank->context.ctrl = ctrl; 441 + } 442 + } 443 + 444 + static void _disable_gpio_module(struct gpio_bank *bank, unsigned offset) 445 + { 446 + void __iomem *base = bank->base; 447 + 448 + if (bank->regs->wkup_en && 449 + !LINE_USED(bank->mod_usage, offset) && 450 + !LINE_USED(bank->irq_usage, offset)) { 451 + /* Disable wake-up during idle for dynamic tick */ 452 + _gpio_rmw(base, bank->regs->wkup_en, 1 << offset, 0); 453 + bank->context.wake_en = 454 + __raw_readl(bank->base + bank->regs->wkup_en); 455 + } 456 + 457 + if (bank->regs->ctrl && !BANK_USED(bank)) { 458 + void __iomem *reg = bank->base + bank->regs->ctrl; 459 + u32 ctrl; 460 + 461 + ctrl = __raw_readl(reg); 462 + /* Module is disabled, clocks are gated */ 463 + ctrl |= GPIO_MOD_CTRL_BIT; 464 + __raw_writel(ctrl, reg); 465 + bank->context.ctrl = 
ctrl; 466 + } 467 + } 468 + 469 + static int gpio_is_input(struct gpio_bank *bank, int mask) 470 + { 471 + void __iomem *reg = bank->base + bank->regs->direction; 472 + 473 + return __raw_readl(reg) & mask; 474 + } 475 + 427 476 static int gpio_irq_type(struct irq_data *d, unsigned type) 428 477 { 429 478 struct gpio_bank *bank = irq_data_get_irq_chip_data(d); 430 479 unsigned gpio = 0; 431 480 int retval; 432 481 unsigned long flags; 482 + unsigned offset; 433 483 434 - if (WARN_ON(!bank->mod_usage)) 435 - return -EINVAL; 484 + if (!BANK_USED(bank)) 485 + pm_runtime_get_sync(bank->dev); 436 486 437 487 #ifdef CONFIG_ARCH_OMAP1 438 488 if (d->irq > IH_MPUIO_BASE) ··· 504 446 return -EINVAL; 505 447 506 448 spin_lock_irqsave(&bank->lock, flags); 507 - retval = _set_gpio_triggering(bank, GPIO_INDEX(bank, gpio), type); 449 + offset = GPIO_INDEX(bank, gpio); 450 + retval = _set_gpio_triggering(bank, offset, type); 451 + if (!LINE_USED(bank->mod_usage, offset)) { 452 + _enable_gpio_module(bank, offset); 453 + _set_gpio_direction(bank, offset, 1); 454 + } else if (!gpio_is_input(bank, 1 << offset)) { 455 + spin_unlock_irqrestore(&bank->lock, flags); 456 + return -EINVAL; 457 + } 458 + 459 + bank->irq_usage |= 1 << GPIO_INDEX(bank, gpio); 508 460 spin_unlock_irqrestore(&bank->lock, flags); 509 461 510 462 if (type & (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_LEVEL_HIGH)) ··· 671 603 * If this is the first gpio_request for the bank, 672 604 * enable the bank module. 673 605 */ 674 - if (!bank->mod_usage) 606 + if (!BANK_USED(bank)) 675 607 pm_runtime_get_sync(bank->dev); 676 608 677 609 spin_lock_irqsave(&bank->lock, flags); 678 610 /* Set trigger to none. You need to enable the desired trigger with 679 - * request_irq() or set_irq_type(). 611 + * request_irq() or set_irq_type(). Only do this if the IRQ line has 612 + * not already been requested. 
680 613 */ 681 - _set_gpio_triggering(bank, offset, IRQ_TYPE_NONE); 682 - 683 - if (bank->regs->pinctrl) { 684 - void __iomem *reg = bank->base + bank->regs->pinctrl; 685 - 686 - /* Claim the pin for MPU */ 687 - __raw_writel(__raw_readl(reg) | (1 << offset), reg); 614 + if (!LINE_USED(bank->irq_usage, offset)) { 615 + _set_gpio_triggering(bank, offset, IRQ_TYPE_NONE); 616 + _enable_gpio_module(bank, offset); 688 617 } 689 - 690 - if (bank->regs->ctrl && !bank->mod_usage) { 691 - void __iomem *reg = bank->base + bank->regs->ctrl; 692 - u32 ctrl; 693 - 694 - ctrl = __raw_readl(reg); 695 - /* Module is enabled, clocks are not gated */ 696 - ctrl &= ~GPIO_MOD_CTRL_BIT; 697 - __raw_writel(ctrl, reg); 698 - bank->context.ctrl = ctrl; 699 - } 700 - 701 618 bank->mod_usage |= 1 << offset; 702 - 703 619 spin_unlock_irqrestore(&bank->lock, flags); 704 620 705 621 return 0; ··· 692 640 static void omap_gpio_free(struct gpio_chip *chip, unsigned offset) 693 641 { 694 642 struct gpio_bank *bank = container_of(chip, struct gpio_bank, chip); 695 - void __iomem *base = bank->base; 696 643 unsigned long flags; 697 644 698 645 spin_lock_irqsave(&bank->lock, flags); 699 - 700 - if (bank->regs->wkup_en) { 701 - /* Disable wake-up during idle for dynamic tick */ 702 - _gpio_rmw(base, bank->regs->wkup_en, 1 << offset, 0); 703 - bank->context.wake_en = 704 - __raw_readl(bank->base + bank->regs->wkup_en); 705 - } 706 - 707 646 bank->mod_usage &= ~(1 << offset); 708 - 709 - if (bank->regs->ctrl && !bank->mod_usage) { 710 - void __iomem *reg = bank->base + bank->regs->ctrl; 711 - u32 ctrl; 712 - 713 - ctrl = __raw_readl(reg); 714 - /* Module is disabled, clocks are gated */ 715 - ctrl |= GPIO_MOD_CTRL_BIT; 716 - __raw_writel(ctrl, reg); 717 - bank->context.ctrl = ctrl; 718 - } 719 - 647 + _disable_gpio_module(bank, offset); 720 648 _reset_gpio(bank, bank->chip.base + offset); 721 649 spin_unlock_irqrestore(&bank->lock, flags); 722 650 ··· 704 672 * If this is the last gpio to be freed in 
the bank, 705 673 * disable the bank module. 706 674 */ 707 - if (!bank->mod_usage) 675 + if (!BANK_USED(bank)) 708 676 pm_runtime_put(bank->dev); 709 677 } 710 678 ··· 794 762 struct gpio_bank *bank = irq_data_get_irq_chip_data(d); 795 763 unsigned int gpio = irq_to_gpio(bank, d->hwirq); 796 764 unsigned long flags; 765 + unsigned offset = GPIO_INDEX(bank, gpio); 797 766 798 767 spin_lock_irqsave(&bank->lock, flags); 768 + bank->irq_usage &= ~(1 << offset); 769 + _disable_gpio_module(bank, offset); 799 770 _reset_gpio(bank, gpio); 800 771 spin_unlock_irqrestore(&bank->lock, flags); 772 + 773 + /* 774 + * If this is the last IRQ to be freed in the bank, 775 + * disable the bank module. 776 + */ 777 + if (!BANK_USED(bank)) 778 + pm_runtime_put(bank->dev); 801 779 } 802 780 803 781 static void gpio_ack_irq(struct irq_data *d) ··· 939 897 return 0; 940 898 } 941 899 942 - static int gpio_is_input(struct gpio_bank *bank, int mask) 943 - { 944 - void __iomem *reg = bank->base + bank->regs->direction; 945 - 946 - return __raw_readl(reg) & mask; 947 - } 948 - 949 900 static int gpio_get(struct gpio_chip *chip, unsigned offset) 950 901 { 951 902 struct gpio_bank *bank; ··· 957 922 { 958 923 struct gpio_bank *bank; 959 924 unsigned long flags; 925 + int retval = 0; 960 926 961 927 bank = container_of(chip, struct gpio_bank, chip); 962 928 spin_lock_irqsave(&bank->lock, flags); 929 + 930 + if (LINE_USED(bank->irq_usage, offset)) { 931 + retval = -EINVAL; 932 + goto exit; 933 + } 934 + 963 935 bank->set_dataout(bank, offset, value); 964 936 _set_gpio_direction(bank, offset, 0); 937 + 938 + exit: 965 939 spin_unlock_irqrestore(&bank->lock, flags); 966 - return 0; 940 + return retval; 967 941 } 968 942 969 943 static int gpio_debounce(struct gpio_chip *chip, unsigned offset, ··· 1444 1400 struct gpio_bank *bank; 1445 1401 1446 1402 list_for_each_entry(bank, &omap_gpio_list, node) { 1447 - if (!bank->mod_usage || !bank->loses_context) 1403 + if (!BANK_USED(bank) || 
!bank->loses_context) 1448 1404 continue; 1449 1405 1450 1406 bank->power_mode = pwr_mode; ··· 1458 1414 struct gpio_bank *bank; 1459 1415 1460 1416 list_for_each_entry(bank, &omap_gpio_list, node) { 1461 - if (!bank->mod_usage || !bank->loses_context) 1417 + if (!BANK_USED(bank) || !bank->loses_context) 1462 1418 continue; 1463 1419 1464 1420 pm_runtime_get_sync(bank->dev);
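Tracking `irq_usage` separately from `mod_usage` is what lets the driver keep a bank powered while either a GPIO user or an IRQ user remains; both are plain one-bit-per-line masks, combined by the new `BANK_USED()`/`LINE_USED()` macros. A toy model of the bookkeeping:

```c
#include <assert.h>

/* Toy model of the hunk's accounting: one bit per line per mask. */
struct gpio_bank_toy {
	unsigned int mod_usage;	/* lines requested as GPIOs */
	unsigned int irq_usage;	/* lines requested as IRQs */
};

#define BANK_USED(b)		((b)->mod_usage || (b)->irq_usage)
#define LINE_USED(mask, off)	((mask) & (1u << (off)))

static void request_gpio(struct gpio_bank_toy *b, unsigned off)
{
	b->mod_usage |= 1u << off;
}

static void free_gpio(struct gpio_bank_toy *b, unsigned off)
{
	b->mod_usage &= ~(1u << off);
}

static void request_irq_line(struct gpio_bank_toy *b, unsigned off)
{
	b->irq_usage |= 1u << off;
}
```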
+3 -4
drivers/gpio/gpio-rcar.c
··· 293 293 if (pdata) { 294 294 p->config = *pdata; 295 295 } else if (IS_ENABLED(CONFIG_OF) && np) { 296 - ret = of_parse_phandle_with_args(np, "gpio-ranges", 297 - "#gpio-range-cells", 0, &args); 298 - p->config.number_of_pins = ret == 0 && args.args_count == 3 299 - ? args.args[2] 296 + ret = of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, 0, 297 + &args); 298 + p->config.number_of_pins = ret == 0 ? args.args[2] 300 299 : RCAR_MAX_GPIO_PER_BANK; 301 300 p->config.gpio_base = -1; 302 301 }
+4 -2
drivers/gpio/gpiolib.c
··· 136 136 */ 137 137 static int desc_to_gpio(const struct gpio_desc *desc) 138 138 { 139 - return desc->chip->base + gpio_chip_hwgpio(desc); 139 + return desc - &gpio_desc[0]; 140 140 } 141 141 142 142 ··· 1398 1398 int status = -EPROBE_DEFER; 1399 1399 unsigned long flags; 1400 1400 1401 - if (!desc || !desc->chip) { 1401 + if (!desc) { 1402 1402 pr_warn("%s: invalid GPIO\n", __func__); 1403 1403 return -EINVAL; 1404 1404 } ··· 1406 1406 spin_lock_irqsave(&gpio_lock, flags); 1407 1407 1408 1408 chip = desc->chip; 1409 + if (chip == NULL) 1410 + goto done; 1409 1411 1410 1412 if (!try_module_get(chip->owner)) 1411 1413 goto done;
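The new `desc - &gpio_desc[0]` computes the global GPIO number by pointer subtraction into the static descriptor array, which keeps working when `desc->chip` is still NULL (the old form dereferenced the chip). A sketch of the idiom with a toy descriptor array:

```c
#include <assert.h>
#include <stddef.h>

struct gpio_desc_toy {
	void *chip;	/* may be NULL before any chip is registered */
};

#define NR_GPIOS_TOY 16
static struct gpio_desc_toy gpio_desc_toy[NR_GPIOS_TOY];

/* Pointer subtraction between elements of the same array yields the
 * element index directly, without touching desc->chip at all. */
static int desc_to_gpio_toy(const struct gpio_desc_toy *desc)
{
	return (int)(desc - &gpio_desc_toy[0]);
}
```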
+8 -1
drivers/gpu/drm/drm_drv.c
··· 402 402 cmd = ioctl->cmd_drv; 403 403 } 404 404 else if ((nr >= DRM_COMMAND_END) || (nr < DRM_COMMAND_BASE)) { 405 + u32 drv_size; 406 + 405 407 ioctl = &drm_ioctls[nr]; 406 - cmd = ioctl->cmd; 408 + 409 + drv_size = _IOC_SIZE(ioctl->cmd); 407 410 usize = asize = _IOC_SIZE(cmd); 411 + if (drv_size > asize) 412 + asize = drv_size; 413 + 414 + cmd = ioctl->cmd; 408 415 } else 409 416 goto err_i1; 410 417
+2
drivers/gpu/drm/drm_edid.c
··· 2925 2925 /* Speaker Allocation Data Block */ 2926 2926 if (dbl == 3) { 2927 2927 *sadb = kmalloc(dbl, GFP_KERNEL); 2928 + if (!*sadb) 2929 + return -ENOMEM; 2928 2930 memcpy(*sadb, &db[1], dbl); 2929 2931 count = dbl; 2930 2932 break;
-8
drivers/gpu/drm/drm_fb_helper.c
··· 416 416 return; 417 417 418 418 /* 419 - * fbdev->blank can be called from irq context in case of a panic. 420 - * Since we already have our own special panic handler which will 421 - * restore the fbdev console mode completely, just bail out early. 422 - */ 423 - if (oops_in_progress) 424 - return; 425 - 426 - /* 427 419 * For each CRTC in this fb, turn the connectors on/off. 428 420 */ 429 421 drm_modeset_lock_all(dev);
+1
drivers/gpu/drm/gma500/gtt.c
··· 204 204 if (IS_ERR(pages)) 205 205 return PTR_ERR(pages); 206 206 207 + gt->npage = gt->gem.size / PAGE_SIZE; 207 208 gt->pages = pages; 208 209 209 210 return 0;
+3 -12
drivers/gpu/drm/i915/i915_dma.c
··· 1290 1290 * then we do not take part in VGA arbitration and the 1291 1291 * vga_client_register() fails with -ENODEV. 1292 1292 */ 1293 - if (!HAS_PCH_SPLIT(dev)) { 1294 - ret = vga_client_register(dev->pdev, dev, NULL, 1295 - i915_vga_set_decode); 1296 - if (ret && ret != -ENODEV) 1297 - goto out; 1298 - } 1293 + ret = vga_client_register(dev->pdev, dev, NULL, i915_vga_set_decode); 1294 + if (ret && ret != -ENODEV) 1295 + goto out; 1299 1296 1300 1297 intel_register_dsm_handler(); 1301 1298 ··· 1347 1350 * tiny window where we will loose hotplug notifactions. 1348 1351 */ 1349 1352 intel_fbdev_initial_config(dev); 1350 - 1351 - /* 1352 - * Must do this after fbcon init so that 1353 - * vgacon_save_screen() works during the handover. 1354 - */ 1355 - i915_disable_vga_mem(dev); 1356 1353 1357 1354 /* Only enable hotplug handling once the fbdev is fully set up. */ 1358 1355 dev_priv->enable_hotplug_processing = true;
+4 -1
drivers/gpu/drm/i915/i915_drv.c
··· 505 505 intel_modeset_suspend_hw(dev); 506 506 } 507 507 508 + i915_gem_suspend_gtt_mappings(dev); 509 + 508 510 i915_save_state(dev); 509 511 510 512 intel_opregion_fini(dev); ··· 650 648 mutex_lock(&dev->struct_mutex); 651 649 i915_gem_restore_gtt_mappings(dev); 652 650 mutex_unlock(&dev->struct_mutex); 653 - } 651 + } else if (drm_core_check_feature(dev, DRIVER_MODESET)) 652 + i915_check_and_clear_faults(dev); 654 653 655 654 __i915_drm_thaw(dev); 656 655
+6 -2
drivers/gpu/drm/i915/i915_drv.h
··· 497 497 498 498 /* FIXME: Need a more generic return type */ 499 499 gen6_gtt_pte_t (*pte_encode)(dma_addr_t addr, 500 - enum i915_cache_level level); 500 + enum i915_cache_level level, 501 + bool valid); /* Create a valid PTE */ 501 502 void (*clear_range)(struct i915_address_space *vm, 502 503 unsigned int first_entry, 503 - unsigned int num_entries); 504 + unsigned int num_entries, 505 + bool use_scratch); 504 506 void (*insert_entries)(struct i915_address_space *vm, 505 507 struct sg_table *st, 506 508 unsigned int first_entry, ··· 2067 2065 void i915_ppgtt_unbind_object(struct i915_hw_ppgtt *ppgtt, 2068 2066 struct drm_i915_gem_object *obj); 2069 2067 2068 + void i915_check_and_clear_faults(struct drm_device *dev); 2069 + void i915_gem_suspend_gtt_mappings(struct drm_device *dev); 2070 2070 void i915_gem_restore_gtt_mappings(struct drm_device *dev); 2071 2071 int __must_check i915_gem_gtt_prepare_object(struct drm_i915_gem_object *obj); 2072 2072 void i915_gem_gtt_bind_object(struct drm_i915_gem_object *obj,
+85 -24
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 58 58 #define HSW_WT_ELLC_LLC_AGE0 HSW_CACHEABILITY_CONTROL(0x6) 59 59 60 60 static gen6_gtt_pte_t snb_pte_encode(dma_addr_t addr, 61 - enum i915_cache_level level) 61 + enum i915_cache_level level, 62 + bool valid) 62 63 { 63 - gen6_gtt_pte_t pte = GEN6_PTE_VALID; 64 + gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0; 64 65 pte |= GEN6_PTE_ADDR_ENCODE(addr); 65 66 66 67 switch (level) { ··· 80 79 } 81 80 82 81 static gen6_gtt_pte_t ivb_pte_encode(dma_addr_t addr, 83 - enum i915_cache_level level) 82 + enum i915_cache_level level, 83 + bool valid) 84 84 { 85 - gen6_gtt_pte_t pte = GEN6_PTE_VALID; 85 + gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0; 86 86 pte |= GEN6_PTE_ADDR_ENCODE(addr); 87 87 88 88 switch (level) { ··· 107 105 #define BYT_PTE_SNOOPED_BY_CPU_CACHES (1 << 2) 108 106 109 107 static gen6_gtt_pte_t byt_pte_encode(dma_addr_t addr, 110 - enum i915_cache_level level) 108 + enum i915_cache_level level, 109 + bool valid) 111 110 { 112 - gen6_gtt_pte_t pte = GEN6_PTE_VALID; 111 + gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0; 113 112 pte |= GEN6_PTE_ADDR_ENCODE(addr); 114 113 115 114 /* Mark the page as writeable. Other platforms don't have a ··· 125 122 } 126 123 127 124 static gen6_gtt_pte_t hsw_pte_encode(dma_addr_t addr, 128 - enum i915_cache_level level) 125 + enum i915_cache_level level, 126 + bool valid) 129 127 { 130 - gen6_gtt_pte_t pte = GEN6_PTE_VALID; 128 + gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0; 131 129 pte |= HSW_PTE_ADDR_ENCODE(addr); 132 130 133 131 if (level != I915_CACHE_NONE) ··· 138 134 } 139 135 140 136 static gen6_gtt_pte_t iris_pte_encode(dma_addr_t addr, 141 - enum i915_cache_level level) 137 + enum i915_cache_level level, 138 + bool valid) 142 139 { 143 - gen6_gtt_pte_t pte = GEN6_PTE_VALID; 140 + gen6_gtt_pte_t pte = valid ? 
GEN6_PTE_VALID : 0; 144 141 pte |= HSW_PTE_ADDR_ENCODE(addr); 145 142 146 143 switch (level) { ··· 241 236 /* PPGTT support for Sandybdrige/Gen6 and later */ 242 237 static void gen6_ppgtt_clear_range(struct i915_address_space *vm, 243 238 unsigned first_entry, 244 - unsigned num_entries) 239 + unsigned num_entries, 240 + bool use_scratch) 245 241 { 246 242 struct i915_hw_ppgtt *ppgtt = 247 243 container_of(vm, struct i915_hw_ppgtt, base); ··· 251 245 unsigned first_pte = first_entry % I915_PPGTT_PT_ENTRIES; 252 246 unsigned last_pte, i; 253 247 254 - scratch_pte = vm->pte_encode(vm->scratch.addr, I915_CACHE_LLC); 248 + scratch_pte = vm->pte_encode(vm->scratch.addr, I915_CACHE_LLC, true); 255 249 256 250 while (num_entries) { 257 251 last_pte = first_pte + num_entries; ··· 288 282 dma_addr_t page_addr; 289 283 290 284 page_addr = sg_page_iter_dma_address(&sg_iter); 291 - pt_vaddr[act_pte] = vm->pte_encode(page_addr, cache_level); 285 + pt_vaddr[act_pte] = vm->pte_encode(page_addr, cache_level, true); 292 286 if (++act_pte == I915_PPGTT_PT_ENTRIES) { 293 287 kunmap_atomic(pt_vaddr); 294 288 act_pt++; ··· 373 367 } 374 368 375 369 ppgtt->base.clear_range(&ppgtt->base, 0, 376 - ppgtt->num_pd_entries * I915_PPGTT_PT_ENTRIES); 370 + ppgtt->num_pd_entries * I915_PPGTT_PT_ENTRIES, true); 377 371 378 372 ppgtt->pd_offset = first_pd_entry_in_global_pt * sizeof(gen6_gtt_pte_t); 379 373 ··· 450 444 { 451 445 ppgtt->base.clear_range(&ppgtt->base, 452 446 i915_gem_obj_ggtt_offset(obj) >> PAGE_SHIFT, 453 - obj->base.size >> PAGE_SHIFT); 447 + obj->base.size >> PAGE_SHIFT, 448 + true); 454 449 } 455 450 456 451 extern int intel_iommu_gfx_mapped; ··· 492 485 dev_priv->mm.interruptible = interruptible; 493 486 } 494 487 488 + void i915_check_and_clear_faults(struct drm_device *dev) 489 + { 490 + struct drm_i915_private *dev_priv = dev->dev_private; 491 + struct intel_ring_buffer *ring; 492 + int i; 493 + 494 + if (INTEL_INFO(dev)->gen < 6) 495 + return; 496 + 497 + 
for_each_ring(ring, dev_priv, i) { 498 + u32 fault_reg; 499 + fault_reg = I915_READ(RING_FAULT_REG(ring)); 500 + if (fault_reg & RING_FAULT_VALID) { 501 + DRM_DEBUG_DRIVER("Unexpected fault\n" 502 + "\tAddr: 0x%08lx\\n" 503 + "\tAddress space: %s\n" 504 + "\tSource ID: %d\n" 505 + "\tType: %d\n", 506 + fault_reg & PAGE_MASK, 507 + fault_reg & RING_FAULT_GTTSEL_MASK ? "GGTT" : "PPGTT", 508 + RING_FAULT_SRCID(fault_reg), 509 + RING_FAULT_FAULT_TYPE(fault_reg)); 510 + I915_WRITE(RING_FAULT_REG(ring), 511 + fault_reg & ~RING_FAULT_VALID); 512 + } 513 + } 514 + POSTING_READ(RING_FAULT_REG(&dev_priv->ring[RCS])); 515 + } 516 + 517 + void i915_gem_suspend_gtt_mappings(struct drm_device *dev) 518 + { 519 + struct drm_i915_private *dev_priv = dev->dev_private; 520 + 521 + /* Don't bother messing with faults pre GEN6 as we have little 522 + * documentation supporting that it's a good idea. 523 + */ 524 + if (INTEL_INFO(dev)->gen < 6) 525 + return; 526 + 527 + i915_check_and_clear_faults(dev); 528 + 529 + dev_priv->gtt.base.clear_range(&dev_priv->gtt.base, 530 + dev_priv->gtt.base.start / PAGE_SIZE, 531 + dev_priv->gtt.base.total / PAGE_SIZE, 532 + false); 533 + } 534 + 495 535 void i915_gem_restore_gtt_mappings(struct drm_device *dev) 496 536 { 497 537 struct drm_i915_private *dev_priv = dev->dev_private; 498 538 struct drm_i915_gem_object *obj; 499 539 540 + i915_check_and_clear_faults(dev); 541 + 500 542 /* First fill our portion of the GTT with scratch pages */ 501 543 dev_priv->gtt.base.clear_range(&dev_priv->gtt.base, 502 544 dev_priv->gtt.base.start / PAGE_SIZE, 503 - dev_priv->gtt.base.total / PAGE_SIZE); 545 + dev_priv->gtt.base.total / PAGE_SIZE, 546 + true); 504 547 505 548 list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) { 506 549 i915_gem_clflush_object(obj, obj->pin_display); ··· 593 536 594 537 for_each_sg_page(st->sgl, &sg_iter, st->nents, 0) { 595 538 addr = sg_page_iter_dma_address(&sg_iter); 596 - iowrite32(vm->pte_encode(addr, level), 
&gtt_entries[i]); 539 + iowrite32(vm->pte_encode(addr, level, true), &gtt_entries[i]); 597 540 i++; 598 541 } 599 542 ··· 605 548 */ 606 549 if (i != 0) 607 550 WARN_ON(readl(&gtt_entries[i-1]) != 608 - vm->pte_encode(addr, level)); 551 + vm->pte_encode(addr, level, true)); 609 552 610 553 /* This next bit makes the above posting read even more important. We 611 554 * want to flush the TLBs only after we're certain all the PTE updates ··· 617 560 618 561 static void gen6_ggtt_clear_range(struct i915_address_space *vm, 619 562 unsigned int first_entry, 620 - unsigned int num_entries) 563 + unsigned int num_entries, 564 + bool use_scratch) 621 565 { 622 566 struct drm_i915_private *dev_priv = vm->dev->dev_private; 623 567 gen6_gtt_pte_t scratch_pte, __iomem *gtt_base = ··· 631 573 first_entry, num_entries, max_entries)) 632 574 num_entries = max_entries; 633 575 634 - scratch_pte = vm->pte_encode(vm->scratch.addr, I915_CACHE_LLC); 576 + scratch_pte = vm->pte_encode(vm->scratch.addr, I915_CACHE_LLC, use_scratch); 577 + 635 578 for (i = 0; i < num_entries; i++) 636 579 iowrite32(scratch_pte, &gtt_base[i]); 637 580 readl(gtt_base); ··· 653 594 654 595 static void i915_ggtt_clear_range(struct i915_address_space *vm, 655 596 unsigned int first_entry, 656 - unsigned int num_entries) 597 + unsigned int num_entries, 598 + bool unused) 657 599 { 658 600 intel_gtt_clear_range(first_entry, num_entries); 659 601 } ··· 682 622 683 623 dev_priv->gtt.base.clear_range(&dev_priv->gtt.base, 684 624 entry, 685 - obj->base.size >> PAGE_SHIFT); 625 + obj->base.size >> PAGE_SHIFT, 626 + true); 686 627 687 628 obj->has_global_gtt_mapping = 0; 688 629 } ··· 770 709 const unsigned long count = (hole_end - hole_start) / PAGE_SIZE; 771 710 DRM_DEBUG_KMS("clearing unused GTT space: [%lx, %lx]\n", 772 711 hole_start, hole_end); 773 - ggtt_vm->clear_range(ggtt_vm, hole_start / PAGE_SIZE, count); 712 + ggtt_vm->clear_range(ggtt_vm, hole_start / PAGE_SIZE, count, true); 774 713 } 775 714 776 715 /* 
And finally clear the reserved guard page */ 777 - ggtt_vm->clear_range(ggtt_vm, end / PAGE_SIZE - 1, 1); 716 + ggtt_vm->clear_range(ggtt_vm, end / PAGE_SIZE - 1, 1, true); 778 717 } 779 718 780 719 static bool
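Threading a `valid` flag through every `pte_encode()` variant is what lets the new suspend path write genuinely invalid PTEs (valid bit clear) instead of pointing every entry at the scratch page. A toy encode in the same shape as the hunk's variants (bit layout here is illustrative, not the hardware's):

```c
#include <assert.h>
#include <stdint.h>

#define PTE_VALID_TOY	((uint64_t)1 << 0)
#define PTE_ADDR_MASK	(~(uint64_t)0xfff)

/* Toy pte_encode in the shape of the hunk: the address always lands
 * in the PTE, but the valid bit is only set on request, so a
 * "cleared" entry faults instead of silently aliasing scratch. */
static uint64_t pte_encode_toy(uint64_t addr, int valid)
{
	uint64_t pte = valid ? PTE_VALID_TOY : 0;

	pte |= addr & PTE_ADDR_MASK;
	return pte;
}
```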
+12
drivers/gpu/drm/i915/i915_reg.h
··· 604 604 #define ARB_MODE_SWIZZLE_IVB (1<<5) 605 605 #define RENDER_HWS_PGA_GEN7 (0x04080) 606 606 #define RING_FAULT_REG(ring) (0x4094 + 0x100*(ring)->id) 607 + #define RING_FAULT_GTTSEL_MASK (1<<11) 608 + #define RING_FAULT_SRCID(x) ((x >> 3) & 0xff) 609 + #define RING_FAULT_FAULT_TYPE(x) ((x >> 1) & 0x3) 610 + #define RING_FAULT_VALID (1<<0) 607 611 #define DONE_REG 0x40b0 608 612 #define BSD_HWS_PGA_GEN7 (0x04180) 609 613 #define BLT_HWS_PGA_GEN7 (0x04280) ··· 3885 3881 #define GEN7_SQ_CHICKEN_MBCUNIT_CONFIG 0x9030 3886 3882 #define GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB (1<<11) 3887 3883 3884 + #define HSW_SCRATCH1 0xb038 3885 + #define HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE (1<<27) 3886 + 3888 3887 #define HSW_FUSE_STRAP 0x42014 3889 3888 #define HSW_CDCLK_LIMIT (1 << 24) 3890 3889 ··· 4283 4276 #define FDI_RX_CHICKEN(pipe) _PIPE(pipe, _FDI_RXA_CHICKEN, _FDI_RXB_CHICKEN) 4284 4277 4285 4278 #define SOUTH_DSPCLK_GATE_D 0xc2020 4279 + #define PCH_DPLUNIT_CLOCK_GATE_DISABLE (1<<30) 4286 4280 #define PCH_DPLSUNIT_CLOCK_GATE_DISABLE (1<<29) 4281 + #define PCH_CPUNIT_CLOCK_GATE_DISABLE (1<<14) 4287 4282 #define PCH_LP_PARTITION_LEVEL_DISABLE (1<<12) 4288 4283 4289 4284 /* CPU: FDI_TX */ ··· 4736 4727 #define GEN7_ROW_CHICKEN2 0xe4f4 4737 4728 #define GEN7_ROW_CHICKEN2_GT2 0xf4f4 4738 4729 #define DOP_CLOCK_GATING_DISABLE (1<<0) 4730 + 4731 + #define HSW_ROW_CHICKEN3 0xe49c 4732 + #define HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE (1 << 6) 4739 4733 4740 4734 #define G4X_AUD_VID_DID (dev_priv->info->display_mmio_offset + 0x62020) 4741 4735 #define INTEL_AUDIO_DEVCL 0x808629FB
+2 -36
drivers/gpu/drm/i915/intel_display.c
··· 3941 3941 * consider. */ 3942 3942 void intel_connector_dpms(struct drm_connector *connector, int mode) 3943 3943 { 3944 - struct intel_encoder *encoder = intel_attached_encoder(connector); 3945 - 3946 3944 /* All the simple cases only support two dpms states. */ 3947 3945 if (mode != DRM_MODE_DPMS_ON) 3948 3946 mode = DRM_MODE_DPMS_OFF; ··· 3951 3953 connector->dpms = mode; 3952 3954 3953 3955 /* Only need to change hw state when actually enabled */ 3954 - if (encoder->base.crtc) 3955 - intel_encoder_dpms(encoder, mode); 3956 - else 3957 - WARN_ON(encoder->connectors_active != false); 3956 + if (connector->encoder) 3957 + intel_encoder_dpms(to_intel_encoder(connector->encoder), mode); 3958 3958 3959 3959 intel_modeset_check_state(connector->dev); 3960 3960 } ··· 10045 10049 POSTING_READ(vga_reg); 10046 10050 } 10047 10051 10048 - static void i915_enable_vga_mem(struct drm_device *dev) 10049 - { 10050 - /* Enable VGA memory on Intel HD */ 10051 - if (HAS_PCH_SPLIT(dev)) { 10052 - vga_get_uninterruptible(dev->pdev, VGA_RSRC_LEGACY_IO); 10053 - outb(inb(VGA_MSR_READ) | VGA_MSR_MEM_EN, VGA_MSR_WRITE); 10054 - vga_set_legacy_decoding(dev->pdev, VGA_RSRC_LEGACY_IO | 10055 - VGA_RSRC_LEGACY_MEM | 10056 - VGA_RSRC_NORMAL_IO | 10057 - VGA_RSRC_NORMAL_MEM); 10058 - vga_put(dev->pdev, VGA_RSRC_LEGACY_IO); 10059 - } 10060 - } 10061 - 10062 - void i915_disable_vga_mem(struct drm_device *dev) 10063 - { 10064 - /* Disable VGA memory on Intel HD */ 10065 - if (HAS_PCH_SPLIT(dev)) { 10066 - vga_get_uninterruptible(dev->pdev, VGA_RSRC_LEGACY_IO); 10067 - outb(inb(VGA_MSR_READ) & ~VGA_MSR_MEM_EN, VGA_MSR_WRITE); 10068 - vga_set_legacy_decoding(dev->pdev, VGA_RSRC_LEGACY_IO | 10069 - VGA_RSRC_NORMAL_IO | 10070 - VGA_RSRC_NORMAL_MEM); 10071 - vga_put(dev->pdev, VGA_RSRC_LEGACY_IO); 10072 - } 10073 - } 10074 - 10075 10052 void intel_modeset_init_hw(struct drm_device *dev) 10076 10053 { 10077 10054 intel_init_power_well(dev); ··· 10323 10354 if (I915_READ(vga_reg) != 
VGA_DISP_DISABLE) { 10324 10355 DRM_DEBUG_KMS("Something enabled VGA plane, disabling it\n"); 10325 10356 i915_disable_vga(dev); 10326 - i915_disable_vga_mem(dev); 10327 10357 } 10328 10358 } 10329 10359 ··· 10535 10567 } 10536 10568 10537 10569 intel_disable_fbc(dev); 10538 - 10539 - i915_enable_vga_mem(dev); 10540 10570 10541 10571 intel_disable_gt_powersave(dev); 10542 10572
+1 -1
drivers/gpu/drm/i915/intel_dp.c
··· 1467 1467 1468 1468 /* Avoid continuous PSR exit by masking memup and hpd */ 1469 1469 I915_WRITE(EDP_PSR_DEBUG_CTL, EDP_PSR_DEBUG_MASK_MEMUP | 1470 - EDP_PSR_DEBUG_MASK_HPD); 1470 + EDP_PSR_DEBUG_MASK_HPD | EDP_PSR_DEBUG_MASK_LPSP); 1471 1471 1472 1472 intel_dp->psr_setup_done = true; 1473 1473 }
-1
drivers/gpu/drm/i915/intel_drv.h
··· 793 793 extern void hsw_pc8_restore_interrupts(struct drm_device *dev); 794 794 extern void intel_aux_display_runtime_get(struct drm_i915_private *dev_priv); 795 795 extern void intel_aux_display_runtime_put(struct drm_i915_private *dev_priv); 796 - extern void i915_disable_vga_mem(struct drm_device *dev); 797 796 798 797 #endif /* __INTEL_DRV_H__ */
+10 -3
drivers/gpu/drm/i915/intel_pm.c
··· 3864 3864 dev_priv->rps.rpe_delay), 3865 3865 dev_priv->rps.rpe_delay); 3866 3866 3867 - INIT_DELAYED_WORK(&dev_priv->rps.vlv_work, vlv_rps_timer_work); 3868 - 3869 3867 valleyview_set_rps(dev_priv->dev, dev_priv->rps.rpe_delay); 3870 3868 3871 3869 gen6_enable_rps_interrupts(dev); ··· 4759 4761 * gating for the panel power sequencer or it will fail to 4760 4762 * start up when no ports are active. 4761 4763 */ 4762 - I915_WRITE(SOUTH_DSPCLK_GATE_D, PCH_DPLSUNIT_CLOCK_GATE_DISABLE); 4764 + I915_WRITE(SOUTH_DSPCLK_GATE_D, PCH_DPLSUNIT_CLOCK_GATE_DISABLE | 4765 + PCH_DPLUNIT_CLOCK_GATE_DISABLE | 4766 + PCH_CPUNIT_CLOCK_GATE_DISABLE); 4763 4767 I915_WRITE(SOUTH_CHICKEN2, I915_READ(SOUTH_CHICKEN2) | 4764 4768 DPLS_EDP_PPS_FIX_DIS); 4765 4769 /* The below fixes the weird display corruption, a few pixels shifted ··· 4954 4954 GEN7_WA_FOR_GEN7_L3_CONTROL); 4955 4955 I915_WRITE(GEN7_L3_CHICKEN_MODE_REGISTER, 4956 4956 GEN7_WA_L3_CHICKEN_MODE); 4957 + 4958 + /* L3 caching of data atomics doesn't work -- disable it. */ 4959 + I915_WRITE(HSW_SCRATCH1, HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE); 4960 + I915_WRITE(HSW_ROW_CHICKEN3, 4961 + _MASKED_BIT_ENABLE(HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE)); 4957 4962 4958 4963 /* This is required by WaCatErrorRejectionIssue:hsw */ 4959 4964 I915_WRITE(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG, ··· 5686 5681 5687 5682 INIT_DELAYED_WORK(&dev_priv->rps.delayed_resume_work, 5688 5683 intel_gen6_powersave_work); 5684 + 5685 + INIT_DELAYED_WORK(&dev_priv->rps.vlv_work, vlv_rps_timer_work); 5689 5686 } 5690 5687
+1 -1
drivers/gpu/drm/nouveau/core/subdev/mc/base.c
··· 113 113 pmc->use_msi = false; 114 114 break; 115 115 default: 116 - pmc->use_msi = nouveau_boolopt(device->cfgopt, "NvMSI", true); 116 + pmc->use_msi = nouveau_boolopt(device->cfgopt, "NvMSI", false); 117 117 if (pmc->use_msi) { 118 118 pmc->use_msi = pci_enable_msi(device->pdev) == 0; 119 119 if (pmc->use_msi) {
+36 -18
drivers/gpu/drm/radeon/atombios_encoders.c
··· 707 707 switch (connector->connector_type) { 708 708 case DRM_MODE_CONNECTOR_DVII: 709 709 case DRM_MODE_CONNECTOR_HDMIB: /* HDMI-B is basically DL-DVI; analog works fine */ 710 - if ((radeon_connector->audio == RADEON_AUDIO_ENABLE) || 711 - (drm_detect_hdmi_monitor(radeon_connector->edid) && 712 - (radeon_connector->audio == RADEON_AUDIO_AUTO))) 713 - return ATOM_ENCODER_MODE_HDMI; 714 - else if (radeon_connector->use_digital) 710 + if (radeon_audio != 0) { 711 + if (radeon_connector->use_digital && 712 + (radeon_connector->audio == RADEON_AUDIO_ENABLE)) 713 + return ATOM_ENCODER_MODE_HDMI; 714 + else if (drm_detect_hdmi_monitor(radeon_connector->edid) && 715 + (radeon_connector->audio == RADEON_AUDIO_AUTO)) 716 + return ATOM_ENCODER_MODE_HDMI; 717 + else if (radeon_connector->use_digital) 718 + return ATOM_ENCODER_MODE_DVI; 719 + else 720 + return ATOM_ENCODER_MODE_CRT; 721 + } else if (radeon_connector->use_digital) { 715 722 return ATOM_ENCODER_MODE_DVI; 716 - else 723 + } else { 717 724 return ATOM_ENCODER_MODE_CRT; 725 + } 718 726 break; 719 727 case DRM_MODE_CONNECTOR_DVID: 720 728 case DRM_MODE_CONNECTOR_HDMIA: 721 729 default: 722 - if ((radeon_connector->audio == RADEON_AUDIO_ENABLE) || 723 - (drm_detect_hdmi_monitor(radeon_connector->edid) && 724 - (radeon_connector->audio == RADEON_AUDIO_AUTO))) 725 - return ATOM_ENCODER_MODE_HDMI; 726 - else 730 + if (radeon_audio != 0) { 731 + if (radeon_connector->audio == RADEON_AUDIO_ENABLE) 732 + return ATOM_ENCODER_MODE_HDMI; 733 + else if (drm_detect_hdmi_monitor(radeon_connector->edid) && 734 + (radeon_connector->audio == RADEON_AUDIO_AUTO)) 735 + return ATOM_ENCODER_MODE_HDMI; 736 + else 737 + return ATOM_ENCODER_MODE_DVI; 738 + } else { 727 739 return ATOM_ENCODER_MODE_DVI; 740 + } 728 741 break; 729 742 case DRM_MODE_CONNECTOR_LVDS: 730 743 return ATOM_ENCODER_MODE_LVDS; ··· 745 732 case DRM_MODE_CONNECTOR_DisplayPort: 746 733 dig_connector = radeon_connector->con_priv; 747 734 if 
((dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) || 748 - (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP)) 735 + (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP)) { 749 736 return ATOM_ENCODER_MODE_DP; 750 - else if ((radeon_connector->audio == RADEON_AUDIO_ENABLE) || 751 - (drm_detect_hdmi_monitor(radeon_connector->edid) && 752 - (radeon_connector->audio == RADEON_AUDIO_AUTO))) 753 - return ATOM_ENCODER_MODE_HDMI; 754 - else 737 + } else if (radeon_audio != 0) { 738 + if (radeon_connector->audio == RADEON_AUDIO_ENABLE) 739 + return ATOM_ENCODER_MODE_HDMI; 740 + else if (drm_detect_hdmi_monitor(radeon_connector->edid) && 741 + (radeon_connector->audio == RADEON_AUDIO_AUTO)) 742 + return ATOM_ENCODER_MODE_HDMI; 743 + else 744 + return ATOM_ENCODER_MODE_DVI; 745 + } else { 755 746 return ATOM_ENCODER_MODE_DVI; 747 + } 756 748 break; 757 749 case DRM_MODE_CONNECTOR_eDP: 758 750 return ATOM_ENCODER_MODE_DP; ··· 1673 1655 * does the same thing and more. 1674 1656 */ 1675 1657 if ((rdev->family != CHIP_RV710) && (rdev->family != CHIP_RV730) && 1676 - (rdev->family != CHIP_RS880)) 1658 + (rdev->family != CHIP_RS780) && (rdev->family != CHIP_RS880)) 1677 1659 atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0); 1678 1660 } 1679 1661 if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && connector) {
+3 -3
drivers/gpu/drm/radeon/btc_dpm.c
··· 1930 1930 } 1931 1931 j++; 1932 1932 1933 - if (j > SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE) 1933 + if (j >= SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE) 1934 1934 return -EINVAL; 1935 1935 1936 1936 tmp = RREG32(MC_PMG_CMD_MRS); ··· 1945 1945 } 1946 1946 j++; 1947 1947 1948 - if (j > SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE) 1948 + if (j >= SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE) 1949 1949 return -EINVAL; 1950 1950 break; 1951 1951 case MC_SEQ_RESERVE_M >> 2: ··· 1959 1959 } 1960 1960 j++; 1961 1961 1962 - if (j > SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE) 1962 + if (j >= SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE) 1963 1963 return -EINVAL; 1964 1964 break; 1965 1965 default:
+10
drivers/gpu/drm/radeon/cik.c
··· 77 77 static void cik_program_aspm(struct radeon_device *rdev); 78 78 static void cik_init_pg(struct radeon_device *rdev); 79 79 static void cik_init_cg(struct radeon_device *rdev); 80 + static void cik_fini_pg(struct radeon_device *rdev); 81 + static void cik_fini_cg(struct radeon_device *rdev); 80 82 static void cik_enable_gui_idle_interrupt(struct radeon_device *rdev, 81 83 bool enable); 82 84 ··· 1694 1692 fw_name); 1695 1693 release_firmware(rdev->smc_fw); 1696 1694 rdev->smc_fw = NULL; 1695 + err = 0; 1697 1696 } else if (rdev->smc_fw->size != smc_req_size) { 1698 1697 printk(KERN_ERR 1699 1698 "cik_smc: Bogus length %zu in firmware \"%s\"\n", ··· 3183 3180 r = radeon_ib_get(rdev, ring->idx, &ib, NULL, 256); 3184 3181 if (r) { 3185 3182 DRM_ERROR("radeon: failed to get ib (%d).\n", r); 3183 + radeon_scratch_free(rdev, scratch); 3186 3184 return r; 3187 3185 } 3188 3186 ib.ptr[0] = PACKET3(PACKET3_SET_UCONFIG_REG, 1); ··· 3200 3196 r = radeon_fence_wait(ib.fence, false); 3201 3197 if (r) { 3202 3198 DRM_ERROR("radeon: fence wait failed (%d).\n", r); 3199 + radeon_scratch_free(rdev, scratch); 3200 + radeon_ib_free(rdev, &ib); 3203 3201 return r; 3204 3202 } 3205 3203 for (i = 0; i < rdev->usec_timeout; i++) { ··· 4190 4184 RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR)); 4191 4185 dev_info(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", 4192 4186 RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS)); 4187 + 4188 + /* disable CG/PG */ 4189 + cik_fini_pg(rdev); 4190 + cik_fini_cg(rdev); 4193 4191 4194 4192 /* stop the rlc */ 4195 4193 cik_rlc_stop(rdev);
+3
drivers/gpu/drm/radeon/dce6_afmt.c
··· 113 113 u8 *sadb; 114 114 int sad_count; 115 115 116 + /* XXX: setting this register causes hangs on some asics */ 117 + return; 118 + 116 119 if (!dig->afmt->pin) 117 120 return; 118 121
+1 -1
drivers/gpu/drm/radeon/evergreen.c
··· 3131 3131 rdev->config.evergreen.sx_max_export_size = 256; 3132 3132 rdev->config.evergreen.sx_max_export_pos_size = 64; 3133 3133 rdev->config.evergreen.sx_max_export_smx_size = 192; 3134 - rdev->config.evergreen.max_hw_contexts = 8; 3134 + rdev->config.evergreen.max_hw_contexts = 4; 3135 3135 rdev->config.evergreen.sq_num_cf_insts = 2; 3136 3136 3137 3137 rdev->config.evergreen.sc_prim_fifo_size = 0x40;
+4 -2
drivers/gpu/drm/radeon/evergreen_hdmi.c
··· 67 67 u8 *sadb; 68 68 int sad_count; 69 69 70 + /* XXX: setting this register causes hangs on some asics */ 71 + return; 72 + 70 73 list_for_each_entry(connector, &encoder->dev->mode_config.connector_list, head) { 71 74 if (connector->encoder == encoder) 72 75 radeon_connector = to_radeon_connector(connector); ··· 291 288 /* fglrx clears sth in AFMT_AUDIO_PACKET_CONTROL2 here */ 292 289 293 290 WREG32(HDMI_ACR_PACKET_CONTROL + offset, 294 - HDMI_ACR_AUTO_SEND | /* allow hw to sent ACR packets when required */ 295 - HDMI_ACR_SOURCE); /* select SW CTS value */ 291 + HDMI_ACR_AUTO_SEND); /* allow hw to sent ACR packets when required */ 296 292 297 293 evergreen_hdmi_update_ACR(encoder, mode->clock); 298 294
+2 -2
drivers/gpu/drm/radeon/evergreend.h
··· 1501 1501 * 6. COMMAND [29:22] | BYTE_COUNT [20:0] 1502 1502 */ 1503 1503 # define PACKET3_CP_DMA_DST_SEL(x) ((x) << 20) 1504 - /* 0 - SRC_ADDR 1504 + /* 0 - DST_ADDR 1505 1505 * 1 - GDS 1506 1506 */ 1507 1507 # define PACKET3_CP_DMA_ENGINE(x) ((x) << 27) ··· 1516 1516 # define PACKET3_CP_DMA_CP_SYNC (1 << 31) 1517 1517 /* COMMAND */ 1518 1518 # define PACKET3_CP_DMA_DIS_WC (1 << 21) 1519 - # define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 23) 1519 + # define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 22) 1520 1520 /* 0 - none 1521 1521 * 1 - 8 in 16 1522 1522 * 2 - 8 in 32
+1
drivers/gpu/drm/radeon/ni.c
··· 804 804 fw_name); 805 805 release_firmware(rdev->smc_fw); 806 806 rdev->smc_fw = NULL; 807 + err = 0; 807 808 } else if (rdev->smc_fw->size != smc_req_size) { 808 809 printk(KERN_ERR 809 810 "ni_mc: Bogus length %zu in firmware \"%s\"\n",
+1
drivers/gpu/drm/radeon/r600.c
··· 2302 2302 fw_name); 2303 2303 release_firmware(rdev->smc_fw); 2304 2304 rdev->smc_fw = NULL; 2305 + err = 0; 2305 2306 } else if (rdev->smc_fw->size != smc_req_size) { 2306 2307 printk(KERN_ERR 2307 2308 "smc: Bogus length %zu in firmware \"%s\"\n",
+17 -7
drivers/gpu/drm/radeon/r600_hdmi.c
··· 57 57 static const struct radeon_hdmi_acr r600_hdmi_predefined_acr[] = { 58 58 /* 32kHz 44.1kHz 48kHz */ 59 59 /* Clock N CTS N CTS N CTS */ 60 - { 25174, 4576, 28125, 7007, 31250, 6864, 28125 }, /* 25,20/1.001 MHz */ 60 + { 25175, 4576, 28125, 7007, 31250, 6864, 28125 }, /* 25,20/1.001 MHz */ 61 61 { 25200, 4096, 25200, 6272, 28000, 6144, 25200 }, /* 25.20 MHz */ 62 62 { 27000, 4096, 27000, 6272, 30000, 6144, 27000 }, /* 27.00 MHz */ 63 63 { 27027, 4096, 27027, 6272, 30030, 6144, 27027 }, /* 27.00*1.001 MHz */ 64 64 { 54000, 4096, 54000, 6272, 60000, 6144, 54000 }, /* 54.00 MHz */ 65 65 { 54054, 4096, 54054, 6272, 60060, 6144, 54054 }, /* 54.00*1.001 MHz */ 66 - { 74175, 11648, 210937, 17836, 234375, 11648, 140625 }, /* 74.25/1.001 MHz */ 66 + { 74176, 11648, 210937, 17836, 234375, 11648, 140625 }, /* 74.25/1.001 MHz */ 67 67 { 74250, 4096, 74250, 6272, 82500, 6144, 74250 }, /* 74.25 MHz */ 68 - { 148351, 11648, 421875, 8918, 234375, 5824, 140625 }, /* 148.50/1.001 MHz */ 68 + { 148352, 11648, 421875, 8918, 234375, 5824, 140625 }, /* 148.50/1.001 MHz */ 69 69 { 148500, 4096, 148500, 6272, 165000, 6144, 148500 }, /* 148.50 MHz */ 70 70 { 0, 4096, 0, 6272, 0, 6144, 0 } /* Other */ 71 71 }; ··· 75 75 */ 76 76 static void r600_hdmi_calc_cts(uint32_t clock, int *CTS, int N, int freq) 77 77 { 78 - if (*CTS == 0) 79 - *CTS = clock * N / (128 * freq) * 1000; 78 + u64 n; 79 + u32 d; 80 + 81 + if (*CTS == 0) { 82 + n = (u64)clock * (u64)N * 1000ULL; 83 + d = 128 * freq; 84 + do_div(n, d); 85 + *CTS = n; 86 + } 80 87 DRM_DEBUG("Using ACR timing N=%d CTS=%d for frequency %d\n", 81 88 N, *CTS, freq); 82 89 } ··· 309 302 u8 *sadb; 310 303 int sad_count; 311 304 305 + /* XXX: setting this register causes hangs on some asics */ 306 + return; 307 + 312 308 list_for_each_entry(connector, &encoder->dev->mode_config.connector_list, head) { 313 309 if (connector->encoder == encoder) 314 310 radeon_connector = to_radeon_connector(connector); ··· 454 444 } 455 445 456 446 
WREG32(HDMI0_ACR_PACKET_CONTROL + offset, 457 - HDMI0_ACR_AUTO_SEND | /* allow hw to sent ACR packets when required */ 458 - HDMI0_ACR_SOURCE); /* select SW CTS value */ 447 + HDMI0_ACR_SOURCE | /* select SW CTS value - XXX verify that hw CTS works on all families */ 448 + HDMI0_ACR_AUTO_SEND); /* allow hw to sent ACR packets when required */ 459 449 460 450 WREG32(HDMI0_VBI_PACKET_CONTROL + offset, 461 451 HDMI0_NULL_SEND | /* send null packets when required */
+1 -1
drivers/gpu/drm/radeon/r600d.h
··· 1523 1523 */ 1524 1524 # define PACKET3_CP_DMA_CP_SYNC (1 << 31) 1525 1525 /* COMMAND */ 1526 - # define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 23) 1526 + # define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 22) 1527 1527 /* 0 - none 1528 1528 * 1 - 8 in 16 1529 1529 * 2 - 8 in 32
+21 -12
drivers/gpu/drm/radeon/radeon_connectors.c
··· 1658 1658 drm_object_attach_property(&radeon_connector->base.base, 1659 1659 rdev->mode_info.underscan_vborder_property, 1660 1660 0); 1661 - drm_object_attach_property(&radeon_connector->base.base, 1662 - rdev->mode_info.audio_property, 1663 - RADEON_AUDIO_DISABLE); 1661 + if (radeon_audio != 0) 1662 + drm_object_attach_property(&radeon_connector->base.base, 1663 + rdev->mode_info.audio_property, 1664 + (radeon_audio == 1) ? 1665 + RADEON_AUDIO_AUTO : 1666 + RADEON_AUDIO_DISABLE); 1664 1667 subpixel_order = SubPixelHorizontalRGB; 1665 1668 connector->interlace_allowed = true; 1666 1669 if (connector_type == DRM_MODE_CONNECTOR_HDMIB) ··· 1757 1754 rdev->mode_info.underscan_vborder_property, 1758 1755 0); 1759 1756 } 1760 - if (ASIC_IS_DCE2(rdev)) { 1757 + if (ASIC_IS_DCE2(rdev) && (radeon_audio != 0)) { 1761 1758 drm_object_attach_property(&radeon_connector->base.base, 1762 - rdev->mode_info.audio_property, 1763 - RADEON_AUDIO_DISABLE); 1759 + rdev->mode_info.audio_property, 1760 + (radeon_audio == 1) ? 1761 + RADEON_AUDIO_AUTO : 1762 + RADEON_AUDIO_DISABLE); 1764 1763 } 1765 1764 if (connector_type == DRM_MODE_CONNECTOR_DVII) { 1766 1765 radeon_connector->dac_load_detect = true; ··· 1804 1799 rdev->mode_info.underscan_vborder_property, 1805 1800 0); 1806 1801 } 1807 - if (ASIC_IS_DCE2(rdev)) { 1802 + if (ASIC_IS_DCE2(rdev) && (radeon_audio != 0)) { 1808 1803 drm_object_attach_property(&radeon_connector->base.base, 1809 - rdev->mode_info.audio_property, 1810 - RADEON_AUDIO_DISABLE); 1804 + rdev->mode_info.audio_property, 1805 + (radeon_audio == 1) ? 
1806 + RADEON_AUDIO_AUTO : 1807 + RADEON_AUDIO_DISABLE); 1811 1808 } 1812 1809 subpixel_order = SubPixelHorizontalRGB; 1813 1810 connector->interlace_allowed = true; ··· 1850 1843 rdev->mode_info.underscan_vborder_property, 1851 1844 0); 1852 1845 } 1853 - if (ASIC_IS_DCE2(rdev)) { 1846 + if (ASIC_IS_DCE2(rdev) && (radeon_audio != 0)) { 1854 1847 drm_object_attach_property(&radeon_connector->base.base, 1855 - rdev->mode_info.audio_property, 1856 - RADEON_AUDIO_DISABLE); 1848 + rdev->mode_info.audio_property, 1849 + (radeon_audio == 1) ? 1850 + RADEON_AUDIO_AUTO : 1851 + RADEON_AUDIO_DISABLE); 1857 1852 } 1858 1853 connector->interlace_allowed = true; 1859 1854 /* in theory with a DP to VGA converter... */
+1 -2
drivers/gpu/drm/radeon/radeon_cs.c
··· 85 85 VRAM, also but everything into VRAM on AGP cards to avoid 86 86 image corruptions */ 87 87 if (p->ring == R600_RING_TYPE_UVD_INDEX && 88 - p->rdev->family < CHIP_PALM && 89 88 (i == 0 || drm_pci_device_is_agp(p->rdev->ddev))) { 90 - 89 + /* TODO: is this still needed for NI+ ? */ 91 90 p->relocs[i].lobj.domain = 92 91 RADEON_GEM_DOMAIN_VRAM; 93 92
+2 -2
drivers/gpu/drm/radeon/radeon_drv.c
··· 153 153 int radeon_testing = 0; 154 154 int radeon_connector_table = 0; 155 155 int radeon_tv = 1; 156 - int radeon_audio = 1; 156 + int radeon_audio = -1; 157 157 int radeon_disp_priority = 0; 158 158 int radeon_hw_i2c = 0; 159 159 int radeon_pcie_gen2 = -1; ··· 196 196 MODULE_PARM_DESC(tv, "TV enable (0 = disable)"); 197 197 module_param_named(tv, radeon_tv, int, 0444); 198 198 199 - MODULE_PARM_DESC(audio, "Audio enable (1 = enable)"); 199 + MODULE_PARM_DESC(audio, "Audio enable (-1 = auto, 0 = disable, 1 = enable)"); 200 200 module_param_named(audio, radeon_audio, int, 0444); 201 201 202 202 MODULE_PARM_DESC(disp_priority, "Display Priority (0 = auto, 1 = normal, 2 = high)");
+3
drivers/gpu/drm/radeon/radeon_pm.c
··· 945 945 if (enable) { 946 946 mutex_lock(&rdev->pm.mutex); 947 947 rdev->pm.dpm.uvd_active = true; 948 + /* disable this for now */ 949 + #if 0 948 950 if ((rdev->pm.dpm.sd == 1) && (rdev->pm.dpm.hd == 0)) 949 951 dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_SD; 950 952 else if ((rdev->pm.dpm.sd == 2) && (rdev->pm.dpm.hd == 0)) ··· 956 954 else if ((rdev->pm.dpm.sd == 0) && (rdev->pm.dpm.hd == 2)) 957 955 dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD2; 958 956 else 957 + #endif 959 958 dpm_state = POWER_STATE_TYPE_INTERNAL_UVD; 960 959 rdev->pm.dpm.state = dpm_state; 961 960 mutex_unlock(&rdev->pm.mutex);
+2 -2
drivers/gpu/drm/radeon/radeon_test.c
··· 36 36 struct radeon_bo *vram_obj = NULL; 37 37 struct radeon_bo **gtt_obj = NULL; 38 38 uint64_t gtt_addr, vram_addr; 39 - unsigned i, n, size; 40 - int r, ring; 39 + unsigned n, size; 40 + int i, r, ring; 41 41 42 42 switch (flag) { 43 43 case RADEON_TEST_COPY_DMA:
+4 -2
drivers/gpu/drm/radeon/radeon_uvd.c
··· 476 476 return -EINVAL; 477 477 } 478 478 479 - if (p->rdev->family < CHIP_PALM && (cmd == 0 || cmd == 0x3) && 479 + /* TODO: is this still necessary on NI+ ? */ 480 + if ((cmd == 0 || cmd == 0x3) && 480 481 (start >> 28) != (p->rdev->uvd.gpu_addr >> 28)) { 481 482 DRM_ERROR("msg/fb buffer %LX-%LX out of 256MB segment!\n", 482 483 start, end); ··· 799 798 (rdev->pm.dpm.hd != hd)) { 800 799 rdev->pm.dpm.sd = sd; 801 800 rdev->pm.dpm.hd = hd; 802 - streams_changed = true; 801 + /* disable this for now */ 802 + /*streams_changed = true;*/ 803 803 } 804 804 } 805 805
+11
drivers/gpu/drm/radeon/si.c
··· 85 85 uint32_t incr, uint32_t flags); 86 86 static void si_enable_gui_idle_interrupt(struct radeon_device *rdev, 87 87 bool enable); 88 + static void si_fini_pg(struct radeon_device *rdev); 89 + static void si_fini_cg(struct radeon_device *rdev); 90 + static void si_rlc_stop(struct radeon_device *rdev); 88 91 89 92 static const u32 verde_rlc_save_restore_register_list[] = 90 93 { ··· 1681 1678 fw_name); 1682 1679 release_firmware(rdev->smc_fw); 1683 1680 rdev->smc_fw = NULL; 1681 + err = 0; 1684 1682 } else if (rdev->smc_fw->size != smc_req_size) { 1685 1683 printk(KERN_ERR 1686 1684 "si_smc: Bogus length %zu in firmware \"%s\"\n", ··· 3611 3607 RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR)); 3612 3608 dev_info(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", 3613 3609 RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS)); 3610 + 3611 + /* disable PG/CG */ 3612 + si_fini_pg(rdev); 3613 + si_fini_cg(rdev); 3614 + 3615 + /* stop the rlc */ 3616 + si_rlc_stop(rdev); 3614 3617 3615 3618 /* Disable CP parsing/prefetching */ 3616 3619 WREG32(CP_ME_CNTL, CP_ME_HALT | CP_PFP_HALT | CP_CE_HALT);
+3 -3
drivers/gpu/drm/radeon/si_dpm.c
··· 5208 5208 table->mc_reg_table_entry[k].mc_data[j] |= 0x100; 5209 5209 } 5210 5210 j++; 5211 - if (j > SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE) 5211 + if (j >= SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE) 5212 5212 return -EINVAL; 5213 5213 5214 5214 if (!pi->mem_gddr5) { ··· 5218 5218 table->mc_reg_table_entry[k].mc_data[j] = 5219 5219 (table->mc_reg_table_entry[k].mc_data[i] & 0xffff0000) >> 16; 5220 5220 j++; 5221 - if (j > SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE) 5221 + if (j >= SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE) 5222 5222 return -EINVAL; 5223 5223 } 5224 5224 break; ··· 5231 5231 (temp_reg & 0xffff0000) | 5232 5232 (table->mc_reg_table_entry[k].mc_data[i] & 0x0000ffff); 5233 5233 j++; 5234 - if (j > SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE) 5234 + if (j >= SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE) 5235 5235 return -EINVAL; 5236 5236 break; 5237 5237 default:
+2 -2
drivers/gpu/drm/radeon/sid.h
··· 1553 1553 * 6. COMMAND [30:21] | BYTE_COUNT [20:0] 1554 1554 */ 1555 1555 # define PACKET3_CP_DMA_DST_SEL(x) ((x) << 20) 1556 - /* 0 - SRC_ADDR 1556 + /* 0 - DST_ADDR 1557 1557 * 1 - GDS 1558 1558 */ 1559 1559 # define PACKET3_CP_DMA_ENGINE(x) ((x) << 27) ··· 1568 1568 # define PACKET3_CP_DMA_CP_SYNC (1 << 31) 1569 1569 /* COMMAND */ 1570 1570 # define PACKET3_CP_DMA_DIS_WC (1 << 21) 1571 - # define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 23) 1571 + # define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 22) 1572 1572 /* 0 - none 1573 1573 * 1 - 8 in 16 1574 1574 * 2 - 8 in 32
+1 -1
drivers/gpu/drm/radeon/trinity_dpm.c
··· 1868 1868 for (i = 0; i < SUMO_MAX_HARDWARE_POWERLEVELS; i++) 1869 1869 pi->at[i] = TRINITY_AT_DFLT; 1870 1870 1871 - pi->enable_bapm = true; 1871 + pi->enable_bapm = false; 1872 1872 pi->enable_nbps_policy = true; 1873 1873 pi->enable_sclk_ds = true; 1874 1874 pi->enable_gfx_power_gating = true;
+2 -2
drivers/gpu/drm/radeon/uvd_v1_0.c
··· 212 212 /* enable VCPU clock */ 213 213 WREG32(UVD_VCPU_CNTL, 1 << 9); 214 214 215 - /* enable UMC and NC0 */ 216 - WREG32_P(UVD_LMI_CTRL2, 1 << 13, ~((1 << 8) | (1 << 13))); 215 + /* enable UMC */ 216 + WREG32_P(UVD_LMI_CTRL2, 0, ~(1 << 8)); 217 217 218 218 /* boot up the VCPU */ 219 219 WREG32(UVD_SOFT_RESET, 0);
+12 -5
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 740 740 struct vmw_fpriv *vmw_fp; 741 741 742 742 vmw_fp = vmw_fpriv(file_priv); 743 - ttm_object_file_release(&vmw_fp->tfile); 744 - if (vmw_fp->locked_master) 743 + 744 + if (vmw_fp->locked_master) { 745 + struct vmw_master *vmaster = 746 + vmw_master(vmw_fp->locked_master); 747 + 748 + ttm_lock_set_kill(&vmaster->lock, true, SIGTERM); 749 + ttm_vt_unlock(&vmaster->lock); 745 750 drm_master_put(&vmw_fp->locked_master); 751 + } 752 + 753 + ttm_object_file_release(&vmw_fp->tfile); 746 754 kfree(vmw_fp); 747 755 } 748 756 ··· 933 925 934 926 vmw_fp->locked_master = drm_master_get(file_priv->master); 935 927 ret = ttm_vt_lock(&vmaster->lock, false, vmw_fp->tfile); 936 - vmw_execbuf_release_pinned_bo(dev_priv); 937 - 938 928 if (unlikely((ret != 0))) { 939 929 DRM_ERROR("Unable to lock TTM at VT switch.\n"); 940 930 drm_master_put(&vmw_fp->locked_master); 941 931 } 942 932 943 - ttm_lock_set_kill(&vmaster->lock, true, SIGTERM); 933 + ttm_lock_set_kill(&vmaster->lock, false, SIGTERM); 934 + vmw_execbuf_release_pinned_bo(dev_priv); 944 935 945 936 if (!dev_priv->enable_fb) { 946 937 ret = ttm_bo_evict_mm(&dev_priv->bdev, TTM_PL_VRAM);
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
··· 970 970 if (new_backup) 971 971 res->backup_offset = new_backup_offset; 972 972 973 - if (!res->func->may_evict) 973 + if (!res->func->may_evict || res->id == -1) 974 974 return; 975 975 976 976 write_lock(&dev_priv->resource_lock);
+1
drivers/hid/Kconfig
··· 241 241 - Sharkoon Drakonia / Perixx MX-2000 gaming mice 242 242 - Tracer Sniper TRM-503 / NOVA Gaming Slider X200 / 243 243 Zalman ZM-GM1 244 + - SHARKOON DarkGlider Gaming mouse 244 245 245 246 config HOLTEK_FF 246 247 bool "Holtek On Line Grip force feedback support"
+8 -5
drivers/hid/hid-core.c
··· 319 319 320 320 static int hid_parser_global(struct hid_parser *parser, struct hid_item *item) 321 321 { 322 - __u32 raw_value; 322 + __s32 raw_value; 323 323 switch (item->tag) { 324 324 case HID_GLOBAL_ITEM_TAG_PUSH: 325 325 ··· 370 370 return 0; 371 371 372 372 case HID_GLOBAL_ITEM_TAG_UNIT_EXPONENT: 373 - /* Units exponent negative numbers are given through a 374 - * two's complement. 375 - * See "6.2.2.7 Global Items" for more information. */ 376 - raw_value = item_udata(item); 373 + /* Many devices provide unit exponent as a two's complement 374 + * nibble due to the common misunderstanding of HID 375 + * specification 1.11, 6.2.2.7 Global Items. Attempt to handle 376 + * both this and the standard encoding. */ 377 + raw_value = item_sdata(item); 377 378 if (!(raw_value & 0xfffffff0)) 378 379 parser->global.unit_exponent = hid_snto32(raw_value, 4); 379 380 else ··· 1716 1715 { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD) }, 1717 1716 { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A) }, 1718 1717 { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067) }, 1718 + { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081) }, 1719 1719 { HID_USB_DEVICE(USB_VENDOR_ID_HUION, USB_DEVICE_ID_HUION_580) }, 1720 1720 { HID_USB_DEVICE(USB_VENDOR_ID_JESS2, USB_DEVICE_ID_JESS2_COLOR_RUMBLE_PAD) }, 1721 1721 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ION, USB_DEVICE_ID_ICADE) }, ··· 1871 1869 1872 1870 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PRESENTER_8K_BT) }, 1873 1871 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO, USB_DEVICE_ID_NINTENDO_WIIMOTE) }, 1872 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO2, USB_DEVICE_ID_NINTENDO_WIIMOTE) }, 1874 1873 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO, USB_DEVICE_ID_NINTENDO_WIIMOTE2) }, 1875 1874 { } 1876 1875 };
+4
drivers/hid/hid-holtek-mouse.c
··· 27 27 * - USB ID 04d9:a067, sold as Sharkoon Drakonia and Perixx MX-2000 28 28 * - USB ID 04d9:a04a, sold as Tracer Sniper TRM-503, NOVA Gaming Slider X200 29 29 * and Zalman ZM-GM1 30 + * - USB ID 04d9:a081, sold as SHARKOON DarkGlider Gaming mouse 30 31 */ 31 32 32 33 static __u8 *holtek_mouse_report_fixup(struct hid_device *hdev, __u8 *rdesc, ··· 47 46 } 48 47 break; 49 48 case USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A: 49 + case USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081: 50 50 if (*rsize >= 113 && rdesc[106] == 0xff && rdesc[107] == 0x7f 51 51 && rdesc[111] == 0xff && rdesc[112] == 0x7f) { 52 52 hid_info(hdev, "Fixing up report descriptor\n"); ··· 65 63 USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067) }, 66 64 { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, 67 65 USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A) }, 66 + { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, 67 + USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081) }, 68 68 { } 69 69 }; 70 70 MODULE_DEVICE_TABLE(hid, holtek_mouse_devices);
+7
drivers/hid/hid-ids.h
··· 450 450 #define USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD 0xa055 451 451 #define USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067 0xa067 452 452 #define USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A 0xa04a 453 + #define USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081 0xa081 453 454 454 455 #define USB_VENDOR_ID_IMATION 0x0718 455 456 #define USB_DEVICE_ID_DISC_STAKKA 0xd000 ··· 633 632 #define USB_DEVICE_ID_NEXTWINDOW_TOUCHSCREEN 0x0003 634 633 635 634 #define USB_VENDOR_ID_NINTENDO 0x057e 635 + #define USB_VENDOR_ID_NINTENDO2 0x054c 636 636 #define USB_DEVICE_ID_NINTENDO_WIIMOTE 0x0306 637 637 #define USB_DEVICE_ID_NINTENDO_WIIMOTE2 0x0330 638 638 ··· 793 791 #define USB_DEVICE_ID_SYNAPTICS_COMP_TP 0x0009 794 792 #define USB_DEVICE_ID_SYNAPTICS_WTP 0x0010 795 793 #define USB_DEVICE_ID_SYNAPTICS_DPAD 0x0013 794 + #define USB_DEVICE_ID_SYNAPTICS_LTS1 0x0af8 795 + #define USB_DEVICE_ID_SYNAPTICS_LTS2 0x1d10 796 796 797 797 #define USB_VENDOR_ID_THINGM 0x27b8 798 798 #define USB_DEVICE_ID_BLINK1 0x01ed ··· 921 917 922 918 #define USB_VENDOR_ID_PRIMAX 0x0461 923 919 #define USB_DEVICE_ID_PRIMAX_KEYBOARD 0x4e05 920 + 921 + #define USB_VENDOR_ID_SIS 0x0457 922 + #define USB_DEVICE_ID_SIS_TS 0x1013 924 923 925 924 #endif
+4 -9
drivers/hid/hid-input.c
··· 192 192 return -EINVAL; 193 193 } 194 194 195 + 195 196 /** 196 197 * hidinput_calc_abs_res - calculate an absolute axis resolution 197 198 * @field: the HID report field to calculate resolution for ··· 235 234 case ABS_MT_TOOL_Y: 236 235 case ABS_MT_TOUCH_MAJOR: 237 236 case ABS_MT_TOUCH_MINOR: 238 - if (field->unit & 0xffffff00) /* Not a length */ 239 - return 0; 240 - unit_exponent += hid_snto32(field->unit >> 4, 4) - 1; 241 - switch (field->unit & 0xf) { 242 - case 0x1: /* If centimeters */ 237 + if (field->unit == 0x11) { /* If centimeters */ 243 238 /* Convert to millimeters */ 244 239 unit_exponent += 1; 245 - break; 246 - case 0x3: /* If inches */ 240 + } else if (field->unit == 0x13) { /* If inches */ 247 241 /* Convert to millimeters */ 248 242 prev = physical_extents; 249 243 physical_extents *= 254; 250 244 if (physical_extents < prev) 251 245 return 0; 252 246 unit_exponent -= 1; 253 - break; 254 - default: 247 + } else { 255 248 return 0; 256 249 } 257 250 break;
+1 -1
drivers/hid/hid-roccat-kone.c
··· 382 382 } 383 383 #define PROFILE_ATTR(number) \ 384 384 static struct bin_attribute bin_attr_profile##number = { \ 385 - .attr = { .name = "profile##number", .mode = 0660 }, \ 385 + .attr = { .name = "profile" #number, .mode = 0660 }, \ 386 386 .size = sizeof(struct kone_profile), \ 387 387 .read = kone_sysfs_read_profilex, \ 388 388 .write = kone_sysfs_write_profilex, \
+2 -2
drivers/hid/hid-roccat-koneplus.c
··· 229 229 230 230 #define PROFILE_ATTR(number) \ 231 231 static struct bin_attribute bin_attr_profile##number##_settings = { \ 232 - .attr = { .name = "profile##number##_settings", .mode = 0440 }, \ 232 + .attr = { .name = "profile" #number "_settings", .mode = 0440 }, \ 233 233 .size = KONEPLUS_SIZE_PROFILE_SETTINGS, \ 234 234 .read = koneplus_sysfs_read_profilex_settings, \ 235 235 .private = &profile_numbers[number-1], \ 236 236 }; \ 237 237 static struct bin_attribute bin_attr_profile##number##_buttons = { \ 238 - .attr = { .name = "profile##number##_buttons", .mode = 0440 }, \ 238 + .attr = { .name = "profile" #number "_buttons", .mode = 0440 }, \ 239 239 .size = KONEPLUS_SIZE_PROFILE_BUTTONS, \ 240 240 .read = koneplus_sysfs_read_profilex_buttons, \ 241 241 .private = &profile_numbers[number-1], \
+2 -2
drivers/hid/hid-roccat-kovaplus.c
··· 257 257 258 258 #define PROFILE_ATTR(number) \ 259 259 static struct bin_attribute bin_attr_profile##number##_settings = { \ 260 - .attr = { .name = "profile##number##_settings", .mode = 0440 }, \ 260 + .attr = { .name = "profile" #number "_settings", .mode = 0440 }, \ 261 261 .size = KOVAPLUS_SIZE_PROFILE_SETTINGS, \ 262 262 .read = kovaplus_sysfs_read_profilex_settings, \ 263 263 .private = &profile_numbers[number-1], \ 264 264 }; \ 265 265 static struct bin_attribute bin_attr_profile##number##_buttons = { \ 266 - .attr = { .name = "profile##number##_buttons", .mode = 0440 }, \ 266 + .attr = { .name = "profile" #number "_buttons", .mode = 0440 }, \ 267 267 .size = KOVAPLUS_SIZE_PROFILE_BUTTONS, \ 268 268 .read = kovaplus_sysfs_read_profilex_buttons, \ 269 269 .private = &profile_numbers[number-1], \
+2 -2
drivers/hid/hid-roccat-pyra.c
··· 225 225 226 226 #define PROFILE_ATTR(number) \ 227 227 static struct bin_attribute bin_attr_profile##number##_settings = { \ 228 - .attr = { .name = "profile##number##_settings", .mode = 0440 }, \ 228 + .attr = { .name = "profile" #number "_settings", .mode = 0440 }, \ 229 229 .size = PYRA_SIZE_PROFILE_SETTINGS, \ 230 230 .read = pyra_sysfs_read_profilex_settings, \ 231 231 .private = &profile_numbers[number-1], \ 232 232 }; \ 233 233 static struct bin_attribute bin_attr_profile##number##_buttons = { \ 234 - .attr = { .name = "profile##number##_buttons", .mode = 0440 }, \ 234 + .attr = { .name = "profile" #number "_buttons", .mode = 0440 }, \ 235 235 .size = PYRA_SIZE_PROFILE_BUTTONS, \ 236 236 .read = pyra_sysfs_read_profilex_buttons, \ 237 237 .private = &profile_numbers[number-1], \
+4 -1
drivers/hid/hid-wiimote-core.c
··· 834 834 goto done; 835 835 } 836 836 837 - if (vendor == USB_VENDOR_ID_NINTENDO) { 837 + if (vendor == USB_VENDOR_ID_NINTENDO || 838 + vendor == USB_VENDOR_ID_NINTENDO2) { 838 839 if (product == USB_DEVICE_ID_NINTENDO_WIIMOTE) { 839 840 devtype = WIIMOTE_DEV_GEN10; 840 841 goto done; ··· 1855 1854 1856 1855 static const struct hid_device_id wiimote_hid_devices[] = { 1857 1856 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO, 1857 + USB_DEVICE_ID_NINTENDO_WIIMOTE) }, 1858 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO2, 1858 1859 USB_DEVICE_ID_NINTENDO_WIIMOTE) }, 1859 1860 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO, 1860 1861 USB_DEVICE_ID_NINTENDO_WIIMOTE2) },
+29 -11
drivers/hid/hid-wiimote-modules.c
··· 119 119 * the rumble motor, this flag shouldn't be set. 120 120 */ 121 121 122 + /* used by wiimod_rumble and wiipro_rumble */ 123 + static void wiimod_rumble_worker(struct work_struct *work) 124 + { 125 + struct wiimote_data *wdata = container_of(work, struct wiimote_data, 126 + rumble_worker); 127 + 128 + spin_lock_irq(&wdata->state.lock); 129 + wiiproto_req_rumble(wdata, wdata->state.cache_rumble); 130 + spin_unlock_irq(&wdata->state.lock); 131 + } 132 + 122 133 static int wiimod_rumble_play(struct input_dev *dev, void *data, 123 134 struct ff_effect *eff) 124 135 { 125 136 struct wiimote_data *wdata = input_get_drvdata(dev); 126 137 __u8 value; 127 - unsigned long flags; 128 138 129 139 /* 130 140 * The wiimote supports only a single rumble motor so if any magnitude ··· 147 137 else 148 138 value = 0; 149 139 150 - spin_lock_irqsave(&wdata->state.lock, flags); 151 - wiiproto_req_rumble(wdata, value); 152 - spin_unlock_irqrestore(&wdata->state.lock, flags); 140 + /* Locking state.lock here might deadlock with input_event() calls. 141 + * schedule_work acts as barrier. Merging multiple changes is fine. 
*/ 142 + wdata->state.cache_rumble = value; 143 + schedule_work(&wdata->rumble_worker); 153 144 154 145 return 0; 155 146 } ··· 158 147 static int wiimod_rumble_probe(const struct wiimod_ops *ops, 159 148 struct wiimote_data *wdata) 160 149 { 150 + INIT_WORK(&wdata->rumble_worker, wiimod_rumble_worker); 151 + 161 152 set_bit(FF_RUMBLE, wdata->input->ffbit); 162 153 if (input_ff_create_memless(wdata->input, NULL, wiimod_rumble_play)) 163 154 return -ENOMEM; ··· 171 158 struct wiimote_data *wdata) 172 159 { 173 160 unsigned long flags; 161 + 162 + cancel_work_sync(&wdata->rumble_worker); 174 163 175 164 spin_lock_irqsave(&wdata->state.lock, flags); 176 165 wiiproto_req_rumble(wdata, 0); ··· 1746 1731 { 1747 1732 struct wiimote_data *wdata = input_get_drvdata(dev); 1748 1733 __u8 value; 1749 - unsigned long flags; 1750 1734 1751 1735 /* 1752 1736 * The wiimote supports only a single rumble motor so if any magnitude ··· 1758 1744 else 1759 1745 value = 0; 1760 1746 1761 - spin_lock_irqsave(&wdata->state.lock, flags); 1762 - wiiproto_req_rumble(wdata, value); 1763 - spin_unlock_irqrestore(&wdata->state.lock, flags); 1747 + /* Locking state.lock here might deadlock with input_event() calls. 1748 + * schedule_work acts as barrier. Merging multiple changes is fine. 
*/ 1749 + wdata->state.cache_rumble = value; 1750 + schedule_work(&wdata->rumble_worker); 1764 1751 1765 1752 return 0; 1766 1753 } ··· 1770 1755 struct wiimote_data *wdata) 1771 1756 { 1772 1757 int ret, i; 1758 + 1759 + INIT_WORK(&wdata->rumble_worker, wiimod_rumble_worker); 1773 1760 1774 1761 wdata->extension.input = input_allocate_device(); 1775 1762 if (!wdata->extension.input) ··· 1834 1817 if (!wdata->extension.input) 1835 1818 return; 1836 1819 1820 + input_unregister_device(wdata->extension.input); 1821 + wdata->extension.input = NULL; 1822 + cancel_work_sync(&wdata->rumble_worker); 1823 + 1837 1824 spin_lock_irqsave(&wdata->state.lock, flags); 1838 1825 wiiproto_req_rumble(wdata, 0); 1839 1826 spin_unlock_irqrestore(&wdata->state.lock, flags); 1840 - 1841 - input_unregister_device(wdata->extension.input); 1842 - wdata->extension.input = NULL; 1843 1827 } 1844 1828 1845 1829 static const struct wiimod_ops wiimod_pro = {
+3 -1
drivers/hid/hid-wiimote.h
··· 133 133 __u8 *cmd_read_buf; 134 134 __u8 cmd_read_size; 135 135 136 - /* calibration data */ 136 + /* calibration/cache data */ 137 137 __u16 calib_bboard[4][3]; 138 + __u8 cache_rumble; 138 139 }; 139 140 140 141 struct wiimote_data { 141 142 struct hid_device *hdev; 142 143 struct input_dev *input; 144 + struct work_struct rumble_worker; 143 145 struct led_classdev *leds[4]; 144 146 struct input_dev *accel; 145 147 struct input_dev *ir;
+14 -7
drivers/hid/hidraw.c
··· 308 308 static void drop_ref(struct hidraw *hidraw, int exists_bit) 309 309 { 310 310 if (exists_bit) { 311 - hid_hw_close(hidraw->hid); 312 311 hidraw->exist = 0; 313 - if (hidraw->open) 312 + if (hidraw->open) { 313 + hid_hw_close(hidraw->hid); 314 314 wake_up_interruptible(&hidraw->wait); 315 + } 315 316 } else { 316 317 --hidraw->open; 317 318 } 318 - 319 - if (!hidraw->open && !hidraw->exist) { 320 - device_destroy(hidraw_class, MKDEV(hidraw_major, hidraw->minor)); 321 - hidraw_table[hidraw->minor] = NULL; 322 - kfree(hidraw); 319 + if (!hidraw->open) { 320 + if (!hidraw->exist) { 321 + device_destroy(hidraw_class, 322 + MKDEV(hidraw_major, hidraw->minor)); 323 + hidraw_table[hidraw->minor] = NULL; 324 + kfree(hidraw); 325 + } else { 326 + /* close device for last reader */ 327 + hid_hw_power(hidraw->hid, PM_HINT_NORMAL); 328 + hid_hw_close(hidraw->hid); 329 + } 323 330 } 324 331 } 325 332
+2 -1
drivers/hid/uhid.c
··· 615 615 616 616 static struct miscdevice uhid_misc = { 617 617 .fops = &uhid_fops, 618 - .minor = MISC_DYNAMIC_MINOR, 618 + .minor = UHID_MINOR, 619 619 .name = UHID_NAME, 620 620 }; 621 621 ··· 634 634 MODULE_LICENSE("GPL"); 635 635 MODULE_AUTHOR("David Herrmann <dh.herrmann@gmail.com>"); 636 636 MODULE_DESCRIPTION("User-space I/O driver support for HID subsystem"); 637 + MODULE_ALIAS_MISCDEV(UHID_MINOR); 637 638 MODULE_ALIAS("devname:" UHID_NAME);
+3
drivers/hid/usbhid/hid-quirks.c
··· 110 110 { USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X, HID_QUIRK_MULTI_INPUT }, 111 111 { USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M610X, HID_QUIRK_MULTI_INPUT }, 112 112 { USB_VENDOR_ID_NTRIG, USB_DEVICE_ID_NTRIG_DUOSENSE, HID_QUIRK_NO_INIT_REPORTS }, 113 + { USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_LTS1, HID_QUIRK_NO_INIT_REPORTS }, 114 + { USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_LTS2, HID_QUIRK_NO_INIT_REPORTS }, 115 + { USB_VENDOR_ID_SIS, USB_DEVICE_ID_SIS_TS, HID_QUIRK_NO_INIT_REPORTS }, 113 116 114 117 { 0, 0 } 115 118 };
+13
drivers/hwmon/applesmc.c
··· 230 230 231 231 static int read_smc(u8 cmd, const char *key, u8 *buffer, u8 len) 232 232 { 233 + u8 status, data = 0; 233 234 int i; 234 235 235 236 if (send_command(cmd) || send_argument(key)) { ··· 238 237 return -EIO; 239 238 } 240 239 240 + /* This has no effect on newer (2012) SMCs */ 241 241 if (send_byte(len, APPLESMC_DATA_PORT)) { 242 242 pr_warn("%.4s: read len fail\n", key); 243 243 return -EIO; ··· 251 249 } 252 250 buffer[i] = inb(APPLESMC_DATA_PORT); 253 251 } 252 + 253 + /* Read the data port until bit0 is cleared */ 254 + for (i = 0; i < 16; i++) { 255 + udelay(APPLESMC_MIN_WAIT); 256 + status = inb(APPLESMC_CMD_PORT); 257 + if (!(status & 0x01)) 258 + break; 259 + data = inb(APPLESMC_DATA_PORT); 260 + } 261 + if (i) 262 + pr_warn("flushed %d bytes, last value is: %d\n", i, data); 254 263 255 264 return 0; 256 265 }
+3 -2
drivers/i2c/busses/i2c-designware-platdrv.c
··· 270 270 MODULE_ALIAS("platform:i2c_designware"); 271 271 272 272 static struct platform_driver dw_i2c_driver = { 273 - .remove = dw_i2c_remove, 273 + .probe = dw_i2c_probe, 274 + .remove = dw_i2c_remove, 274 275 .driver = { 275 276 .name = "i2c_designware", 276 277 .owner = THIS_MODULE, ··· 283 282 284 283 static int __init dw_i2c_init_driver(void) 285 284 { 286 - return platform_driver_probe(&dw_i2c_driver, dw_i2c_probe); 285 + return platform_driver_register(&dw_i2c_driver); 287 286 } 288 287 subsys_initcall(dw_i2c_init_driver); 289 288
+6 -5
drivers/i2c/busses/i2c-imx.c
··· 365 365 clk_disable_unprepare(i2c_imx->clk); 366 366 } 367 367 368 - static void __init i2c_imx_set_clk(struct imx_i2c_struct *i2c_imx, 368 + static void i2c_imx_set_clk(struct imx_i2c_struct *i2c_imx, 369 369 unsigned int rate) 370 370 { 371 371 struct imx_i2c_clk_pair *i2c_clk_div = i2c_imx->hwdata->clk_div; ··· 589 589 .functionality = i2c_imx_func, 590 590 }; 591 591 592 - static int __init i2c_imx_probe(struct platform_device *pdev) 592 + static int i2c_imx_probe(struct platform_device *pdev) 593 593 { 594 594 const struct of_device_id *of_id = of_match_device(i2c_imx_dt_ids, 595 595 &pdev->dev); ··· 697 697 return 0; /* Return OK */ 698 698 } 699 699 700 - static int __exit i2c_imx_remove(struct platform_device *pdev) 700 + static int i2c_imx_remove(struct platform_device *pdev) 701 701 { 702 702 struct imx_i2c_struct *i2c_imx = platform_get_drvdata(pdev); 703 703 ··· 715 715 } 716 716 717 717 static struct platform_driver i2c_imx_driver = { 718 - .remove = __exit_p(i2c_imx_remove), 718 + .probe = i2c_imx_probe, 719 + .remove = i2c_imx_remove, 719 720 .driver = { 720 721 .name = DRIVER_NAME, 721 722 .owner = THIS_MODULE, ··· 727 726 728 727 static int __init i2c_adap_imx_init(void) 729 728 { 730 - return platform_driver_probe(&i2c_imx_driver, i2c_imx_probe); 729 + return platform_driver_register(&i2c_imx_driver); 731 730 } 732 731 subsys_initcall(i2c_adap_imx_init); 733 732
+2 -1
drivers/i2c/busses/i2c-mxs.c
··· 780 780 .owner = THIS_MODULE, 781 781 .of_match_table = mxs_i2c_dt_ids, 782 782 }, 783 + .probe = mxs_i2c_probe, 783 784 .remove = mxs_i2c_remove, 784 785 }; 785 786 786 787 static int __init mxs_i2c_init(void) 787 788 { 788 - return platform_driver_probe(&mxs_i2c_driver, mxs_i2c_probe); 789 + return platform_driver_register(&mxs_i2c_driver); 789 790 } 790 791 subsys_initcall(mxs_i2c_init); 791 792
+3
drivers/i2c/busses/i2c-omap.c
··· 939 939 /* 940 940 * ProDB0017052: Clear ARDY bit twice 941 941 */ 942 + if (stat & OMAP_I2C_STAT_ARDY) 943 + omap_i2c_ack_stat(dev, OMAP_I2C_STAT_ARDY); 944 + 942 945 if (stat & (OMAP_I2C_STAT_ARDY | OMAP_I2C_STAT_NACK | 943 946 OMAP_I2C_STAT_AL)) { 944 947 omap_i2c_ack_stat(dev, (OMAP_I2C_STAT_RRDY |
+5 -6
drivers/i2c/busses/i2c-stu300.c
··· 859 859 .functionality = stu300_func, 860 860 }; 861 861 862 - static int __init 863 - stu300_probe(struct platform_device *pdev) 862 + static int stu300_probe(struct platform_device *pdev) 864 863 { 865 864 struct stu300_dev *dev; 866 865 struct i2c_adapter *adap; ··· 965 966 #define STU300_I2C_PM NULL 966 967 #endif 967 968 968 - static int __exit 969 - stu300_remove(struct platform_device *pdev) 969 + static int stu300_remove(struct platform_device *pdev) 970 970 { 971 971 struct stu300_dev *dev = platform_get_drvdata(pdev); 972 972 ··· 987 989 .pm = STU300_I2C_PM, 988 990 .of_match_table = stu300_dt_match, 989 991 }, 990 - .remove = __exit_p(stu300_remove), 992 + .probe = stu300_probe, 993 + .remove = stu300_remove, 991 994 992 995 }; 993 996 994 997 static int __init stu300_init(void) 995 998 { 996 - return platform_driver_probe(&stu300_i2c_driver, stu300_probe); 999 + return platform_driver_register(&stu300_i2c_driver); 997 1000 } 998 1001 999 1002 static void __exit stu300_exit(void)
+3
drivers/i2c/i2c-core.c
··· 1134 1134 acpi_handle handle; 1135 1135 acpi_status status; 1136 1136 1137 + if (!adap->dev.parent) 1138 + return; 1139 + 1137 1140 handle = ACPI_HANDLE(adap->dev.parent); 1138 1141 if (!handle) 1139 1142 return;
+1 -1
drivers/i2c/muxes/i2c-arb-gpio-challenge.c
··· 200 200 arb->parent = of_find_i2c_adapter_by_node(parent_np); 201 201 if (!arb->parent) { 202 202 dev_err(dev, "Cannot find parent bus\n"); 203 - return -EINVAL; 203 + return -EPROBE_DEFER; 204 204 } 205 205 206 206 /* Actually add the mux adapter */
+9 -5
drivers/i2c/muxes/i2c-mux-gpio.c
··· 66 66 struct device_node *adapter_np, *child; 67 67 struct i2c_adapter *adapter; 68 68 unsigned *values, *gpios; 69 - int i = 0; 69 + int i = 0, ret; 70 70 71 71 if (!np) 72 72 return -ENODEV; ··· 79 79 adapter = of_find_i2c_adapter_by_node(adapter_np); 80 80 if (!adapter) { 81 81 dev_err(&pdev->dev, "Cannot find parent bus\n"); 82 - return -ENODEV; 82 + return -EPROBE_DEFER; 83 83 } 84 84 mux->data.parent = i2c_adapter_id(adapter); 85 85 put_device(&adapter->dev); ··· 116 116 return -ENOMEM; 117 117 } 118 118 119 - for (i = 0; i < mux->data.n_gpios; i++) 120 - gpios[i] = of_get_named_gpio(np, "mux-gpios", i); 119 + for (i = 0; i < mux->data.n_gpios; i++) { 120 + ret = of_get_named_gpio(np, "mux-gpios", i); 121 + if (ret < 0) 122 + return ret; 123 + gpios[i] = ret; 124 + } 121 125 122 126 mux->data.gpios = gpios; 123 127 ··· 181 177 if (!parent) { 182 178 dev_err(&pdev->dev, "Parent adapter (%d) not found\n", 183 179 mux->data.parent); 184 - return -ENODEV; 180 + return -EPROBE_DEFER; 185 181 } 186 182 187 183 mux->parent = parent;
+2 -2
drivers/i2c/muxes/i2c-mux-pinctrl.c
··· 113 113 adapter = of_find_i2c_adapter_by_node(adapter_np); 114 114 if (!adapter) { 115 115 dev_err(mux->dev, "Cannot find parent bus\n"); 116 - return -ENODEV; 116 + return -EPROBE_DEFER; 117 117 } 118 118 mux->pdata->parent_bus_num = i2c_adapter_id(adapter); 119 119 put_device(&adapter->dev); ··· 211 211 if (!mux->parent) { 212 212 dev_err(&pdev->dev, "Parent adapter (%d) not found\n", 213 213 mux->pdata->parent_bus_num); 214 - ret = -ENODEV; 214 + ret = -EPROBE_DEFER; 215 215 goto err; 216 216 } 217 217
+1 -3
drivers/iio/amplifiers/ad8366.c
··· 185 185 186 186 iio_device_unregister(indio_dev); 187 187 188 - if (!IS_ERR(reg)) { 188 + if (!IS_ERR(reg)) 189 189 regulator_disable(reg); 190 - regulator_put(reg); 191 - } 192 190 193 191 return 0; 194 192 }
+4 -2
drivers/iio/frequency/adf4350.c
··· 525 525 } 526 526 527 527 indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(*st)); 528 - if (indio_dev == NULL) 529 - return -ENOMEM; 528 + if (indio_dev == NULL) { 529 + ret = -ENOMEM; 530 + goto error_disable_clk; 531 + } 530 532 531 533 st = iio_priv(indio_dev); 532 534
+3
drivers/iio/industrialio-buffer.c
··· 477 477 indio_dev->currentmode = INDIO_DIRECT_MODE; 478 478 if (indio_dev->setup_ops->postdisable) 479 479 indio_dev->setup_ops->postdisable(indio_dev); 480 + 481 + if (indio_dev->available_scan_masks == NULL) 482 + kfree(indio_dev->active_scan_mask); 480 483 } 481 484 482 485 int iio_update_buffers(struct iio_dev *indio_dev,
+1 -1
drivers/iio/industrialio-core.c
··· 852 852 iio_device_unregister_trigger_consumer(indio_dev); 853 853 iio_device_unregister_eventset(indio_dev); 854 854 iio_device_unregister_sysfs(indio_dev); 855 - iio_device_unregister_debugfs(indio_dev); 856 855 857 856 ida_simple_remove(&iio_ida, indio_dev->id); 858 857 kfree(indio_dev); ··· 1086 1087 1087 1088 if (indio_dev->chrdev.dev) 1088 1089 cdev_del(&indio_dev->chrdev); 1090 + iio_device_unregister_debugfs(indio_dev); 1089 1091 1090 1092 iio_disable_all_buffers(indio_dev); 1091 1093
+9 -9
drivers/iio/magnetometer/st_magn_core.c
··· 29 29 #define ST_MAGN_NUMBER_DATA_CHANNELS 3 30 30 31 31 /* DEFAULT VALUE FOR SENSORS */ 32 - #define ST_MAGN_DEFAULT_OUT_X_L_ADDR 0X04 33 - #define ST_MAGN_DEFAULT_OUT_Y_L_ADDR 0X08 34 - #define ST_MAGN_DEFAULT_OUT_Z_L_ADDR 0X06 32 + #define ST_MAGN_DEFAULT_OUT_X_H_ADDR 0X03 33 + #define ST_MAGN_DEFAULT_OUT_Y_H_ADDR 0X07 34 + #define ST_MAGN_DEFAULT_OUT_Z_H_ADDR 0X05 35 35 36 36 /* FULLSCALE */ 37 37 #define ST_MAGN_FS_AVL_1300MG 1300 ··· 117 117 static const struct iio_chan_spec st_magn_16bit_channels[] = { 118 118 ST_SENSORS_LSM_CHANNELS(IIO_MAGN, 119 119 BIT(IIO_CHAN_INFO_RAW) | BIT(IIO_CHAN_INFO_SCALE), 120 - ST_SENSORS_SCAN_X, 1, IIO_MOD_X, 's', IIO_LE, 16, 16, 121 - ST_MAGN_DEFAULT_OUT_X_L_ADDR), 120 + ST_SENSORS_SCAN_X, 1, IIO_MOD_X, 's', IIO_BE, 16, 16, 121 + ST_MAGN_DEFAULT_OUT_X_H_ADDR), 122 122 ST_SENSORS_LSM_CHANNELS(IIO_MAGN, 123 123 BIT(IIO_CHAN_INFO_RAW) | BIT(IIO_CHAN_INFO_SCALE), 124 - ST_SENSORS_SCAN_Y, 1, IIO_MOD_Y, 's', IIO_LE, 16, 16, 125 - ST_MAGN_DEFAULT_OUT_Y_L_ADDR), 124 + ST_SENSORS_SCAN_Y, 1, IIO_MOD_Y, 's', IIO_BE, 16, 16, 125 + ST_MAGN_DEFAULT_OUT_Y_H_ADDR), 126 126 ST_SENSORS_LSM_CHANNELS(IIO_MAGN, 127 127 BIT(IIO_CHAN_INFO_RAW) | BIT(IIO_CHAN_INFO_SCALE), 128 - ST_SENSORS_SCAN_Z, 1, IIO_MOD_Z, 's', IIO_LE, 16, 16, 129 - ST_MAGN_DEFAULT_OUT_Z_L_ADDR), 128 + ST_SENSORS_SCAN_Z, 1, IIO_MOD_Z, 's', IIO_BE, 16, 16, 129 + ST_MAGN_DEFAULT_OUT_Z_H_ADDR), 130 130 IIO_CHAN_SOFT_TIMESTAMP(3) 131 131 }; 132 132
+11
drivers/infiniband/Kconfig
··· 31 31 libibverbs, libibcm and a hardware driver library from 32 32 <http://www.openfabrics.org/git/>. 33 33 34 + config INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING 35 + bool "Experimental and unstable ABI for userspace access to flow steering verbs" 36 + depends on INFINIBAND_USER_ACCESS 37 + depends on STAGING 38 + ---help--- 39 + The final ABI for userspace access to flow steering verbs 40 + has not been defined. To use the current ABI, *WHICH WILL 41 + CHANGE IN THE FUTURE*, say Y here. 42 + 43 + If unsure, say N. 44 + 34 45 config INFINIBAND_USER_MEM 35 46 bool 36 47 depends on INFINIBAND_USER_ACCESS != n
+2
drivers/infiniband/core/uverbs.h
··· 217 217 IB_UVERBS_DECLARE_CMD(create_xsrq); 218 218 IB_UVERBS_DECLARE_CMD(open_xrcd); 219 219 IB_UVERBS_DECLARE_CMD(close_xrcd); 220 + #ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING 220 221 IB_UVERBS_DECLARE_CMD(create_flow); 221 222 IB_UVERBS_DECLARE_CMD(destroy_flow); 223 + #endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */ 222 224 223 225 #endif /* UVERBS_H */
+4
drivers/infiniband/core/uverbs_cmd.c
··· 54 54 static struct uverbs_lock_class ah_lock_class = { .name = "AH-uobj" }; 55 55 static struct uverbs_lock_class srq_lock_class = { .name = "SRQ-uobj" }; 56 56 static struct uverbs_lock_class xrcd_lock_class = { .name = "XRCD-uobj" }; 57 + #ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING 57 58 static struct uverbs_lock_class rule_lock_class = { .name = "RULE-uobj" }; 59 + #endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */ 58 60 59 61 #define INIT_UDATA(udata, ibuf, obuf, ilen, olen) \ 60 62 do { \ ··· 2601 2599 return ret ? ret : in_len; 2602 2600 } 2603 2601 2602 + #ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING 2604 2603 static int kern_spec_to_ib_spec(struct ib_kern_spec *kern_spec, 2605 2604 union ib_flow_spec *ib_spec) 2606 2605 { ··· 2827 2824 2828 2825 return ret ? ret : in_len; 2829 2826 } 2827 + #endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */ 2830 2828 2831 2829 static int __uverbs_create_xsrq(struct ib_uverbs_file *file, 2832 2830 struct ib_uverbs_create_xsrq *cmd,
+6
drivers/infiniband/core/uverbs_main.c
··· 115 115 [IB_USER_VERBS_CMD_CLOSE_XRCD] = ib_uverbs_close_xrcd, 116 116 [IB_USER_VERBS_CMD_CREATE_XSRQ] = ib_uverbs_create_xsrq, 117 117 [IB_USER_VERBS_CMD_OPEN_QP] = ib_uverbs_open_qp, 118 + #ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING 118 119 [IB_USER_VERBS_CMD_CREATE_FLOW] = ib_uverbs_create_flow, 119 120 [IB_USER_VERBS_CMD_DESTROY_FLOW] = ib_uverbs_destroy_flow 121 + #endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */ 120 122 }; 121 123 122 124 static void ib_uverbs_add_one(struct ib_device *device); ··· 607 605 if (!(file->device->ib_dev->uverbs_cmd_mask & (1ull << hdr.command))) 608 606 return -ENOSYS; 609 607 608 + #ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING 610 609 if (hdr.command >= IB_USER_VERBS_CMD_THRESHOLD) { 611 610 struct ib_uverbs_cmd_hdr_ex hdr_ex; 612 611 ··· 624 621 (hdr_ex.out_words + 625 622 hdr_ex.provider_out_words) * 4); 626 623 } else { 624 + #endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */ 627 625 if (hdr.in_words * 4 != count) 628 626 return -EINVAL; 629 627 ··· 632 628 buf + sizeof(hdr), 633 629 hdr.in_words * 4, 634 630 hdr.out_words * 4); 631 + #ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING 635 632 } 633 + #endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */ 636 634 } 637 635 638 636 static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma)
+1 -1
drivers/infiniband/hw/amso1100/c2_ae.c
··· 141 141 return "C2_QP_STATE_ERROR"; 142 142 default: 143 143 return "<invalid QP state>"; 144 - }; 144 + } 145 145 } 146 146 147 147 void c2_ae_event(struct c2_dev *c2dev, u32 mq_index)
+2
drivers/infiniband/hw/mlx4/main.c
··· 1691 1691 ibdev->ib_dev.create_flow = mlx4_ib_create_flow; 1692 1692 ibdev->ib_dev.destroy_flow = mlx4_ib_destroy_flow; 1693 1693 1694 + #ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING 1694 1695 ibdev->ib_dev.uverbs_cmd_mask |= 1695 1696 (1ull << IB_USER_VERBS_CMD_CREATE_FLOW) | 1696 1697 (1ull << IB_USER_VERBS_CMD_DESTROY_FLOW); 1698 + #endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */ 1697 1699 } 1698 1700 1699 1701 mlx4_ib_alloc_eqs(dev, ibdev);
+10 -6
drivers/infiniband/hw/mlx5/main.c
··· 164 164 static int alloc_comp_eqs(struct mlx5_ib_dev *dev) 165 165 { 166 166 struct mlx5_eq_table *table = &dev->mdev.priv.eq_table; 167 + char name[MLX5_MAX_EQ_NAME]; 167 168 struct mlx5_eq *eq, *n; 168 169 int ncomp_vec; 169 170 int nent; ··· 181 180 goto clean; 182 181 } 183 182 184 - snprintf(eq->name, MLX5_MAX_EQ_NAME, "mlx5_comp%d", i); 183 + snprintf(name, MLX5_MAX_EQ_NAME, "mlx5_comp%d", i); 185 184 err = mlx5_create_map_eq(&dev->mdev, eq, 186 185 i + MLX5_EQ_VEC_COMP_BASE, nent, 0, 187 - eq->name, 188 - &dev->mdev.priv.uuari.uars[0]); 186 + name, &dev->mdev.priv.uuari.uars[0]); 189 187 if (err) { 190 188 kfree(eq); 191 189 goto clean; ··· 301 301 props->max_srq_sge = max_rq_sg - 1; 302 302 props->max_fast_reg_page_list_len = (unsigned int)-1; 303 303 props->local_ca_ack_delay = dev->mdev.caps.local_ca_ack_delay; 304 - props->atomic_cap = dev->mdev.caps.flags & MLX5_DEV_CAP_FLAG_ATOMIC ? 305 - IB_ATOMIC_HCA : IB_ATOMIC_NONE; 306 - props->masked_atomic_cap = IB_ATOMIC_HCA; 304 + props->atomic_cap = IB_ATOMIC_NONE; 305 + props->masked_atomic_cap = IB_ATOMIC_NONE; 307 306 props->max_pkeys = be16_to_cpup((__be16 *)(out_mad->data + 28)); 308 307 props->max_mcast_grp = 1 << dev->mdev.caps.log_max_mcg; 309 308 props->max_mcast_qp_attach = dev->mdev.caps.max_qp_mcg; ··· 1004 1005 1005 1006 ibev.device = &ibdev->ib_dev; 1006 1007 ibev.element.port_num = port; 1008 + 1009 + if (port < 1 || port > ibdev->num_ports) { 1010 + mlx5_ib_warn(ibdev, "warning: event on port %d\n", port); 1011 + return; 1012 + } 1007 1013 1008 1014 if (ibdev->ib_active) 1009 1015 ib_dispatch_event(&ibev);
+33 -37
drivers/infiniband/hw/mlx5/mr.c
··· 42 42 DEF_CACHE_SIZE = 10, 43 43 }; 44 44 45 + enum { 46 + MLX5_UMR_ALIGN = 2048 47 + }; 48 + 45 49 static __be64 *mr_align(__be64 *ptr, int align) 46 50 { 47 51 unsigned long mask = align - 1; ··· 65 61 66 62 static int add_keys(struct mlx5_ib_dev *dev, int c, int num) 67 63 { 68 - struct device *ddev = dev->ib_dev.dma_device; 69 64 struct mlx5_mr_cache *cache = &dev->cache; 70 65 struct mlx5_cache_ent *ent = &cache->ent[c]; 71 66 struct mlx5_create_mkey_mbox_in *in; 72 67 struct mlx5_ib_mr *mr; 73 68 int npages = 1 << ent->order; 74 - int size = sizeof(u64) * npages; 75 69 int err = 0; 76 70 int i; 77 71 ··· 85 83 } 86 84 mr->order = ent->order; 87 85 mr->umred = 1; 88 - mr->pas = kmalloc(size + 0x3f, GFP_KERNEL); 89 - if (!mr->pas) { 90 - kfree(mr); 91 - err = -ENOMEM; 92 - goto out; 93 - } 94 - mr->dma = dma_map_single(ddev, mr_align(mr->pas, 0x40), size, 95 - DMA_TO_DEVICE); 96 - if (dma_mapping_error(ddev, mr->dma)) { 97 - kfree(mr->pas); 98 - kfree(mr); 99 - err = -ENOMEM; 100 - goto out; 101 - } 102 - 103 86 in->seg.status = 1 << 6; 104 87 in->seg.xlt_oct_size = cpu_to_be32((npages + 1) / 2); 105 88 in->seg.qpn_mkey7_0 = cpu_to_be32(0xffffff << 8); ··· 95 108 sizeof(*in)); 96 109 if (err) { 97 110 mlx5_ib_warn(dev, "create mkey failed %d\n", err); 98 - dma_unmap_single(ddev, mr->dma, size, DMA_TO_DEVICE); 99 - kfree(mr->pas); 100 111 kfree(mr); 101 112 goto out; 102 113 } ··· 114 129 115 130 static void remove_keys(struct mlx5_ib_dev *dev, int c, int num) 116 131 { 117 - struct device *ddev = dev->ib_dev.dma_device; 118 132 struct mlx5_mr_cache *cache = &dev->cache; 119 133 struct mlx5_cache_ent *ent = &cache->ent[c]; 120 134 struct mlx5_ib_mr *mr; 121 - int size; 122 135 int err; 123 136 int i; 124 137 ··· 132 149 ent->size--; 133 150 spin_unlock(&ent->lock); 134 151 err = mlx5_core_destroy_mkey(&dev->mdev, &mr->mmr); 135 - if (err) { 152 + if (err) 136 153 mlx5_ib_warn(dev, "failed destroy mkey\n"); 137 - } else { 138 - size = ALIGN(sizeof(u64) * (1 
<< mr->order), 0x40); 139 - dma_unmap_single(ddev, mr->dma, size, DMA_TO_DEVICE); 140 - kfree(mr->pas); 154 + else 141 155 kfree(mr); 142 - } 143 156 } 144 157 } 145 158 ··· 387 408 388 409 static void clean_keys(struct mlx5_ib_dev *dev, int c) 389 410 { 390 - struct device *ddev = dev->ib_dev.dma_device; 391 411 struct mlx5_mr_cache *cache = &dev->cache; 392 412 struct mlx5_cache_ent *ent = &cache->ent[c]; 393 413 struct mlx5_ib_mr *mr; 394 - int size; 395 414 int err; 396 415 416 + cancel_delayed_work(&ent->dwork); 397 417 while (1) { 398 418 spin_lock(&ent->lock); 399 419 if (list_empty(&ent->head)) { ··· 405 427 ent->size--; 406 428 spin_unlock(&ent->lock); 407 429 err = mlx5_core_destroy_mkey(&dev->mdev, &mr->mmr); 408 - if (err) { 430 + if (err) 409 431 mlx5_ib_warn(dev, "failed destroy mkey\n"); 410 - } else { 411 - size = ALIGN(sizeof(u64) * (1 << mr->order), 0x40); 412 - dma_unmap_single(ddev, mr->dma, size, DMA_TO_DEVICE); 413 - kfree(mr->pas); 432 + else 414 433 kfree(mr); 415 - } 416 434 } 417 435 } 418 436 ··· 514 540 int i; 515 541 516 542 dev->cache.stopped = 1; 517 - destroy_workqueue(dev->cache.wq); 543 + flush_workqueue(dev->cache.wq); 518 544 519 545 mlx5_mr_cache_debugfs_cleanup(dev); 520 546 521 547 for (i = 0; i < MAX_MR_CACHE_ENTRIES; i++) 522 548 clean_keys(dev, i); 549 + 550 + destroy_workqueue(dev->cache.wq); 523 551 524 552 return 0; 525 553 } ··· 651 675 int page_shift, int order, int access_flags) 652 676 { 653 677 struct mlx5_ib_dev *dev = to_mdev(pd->device); 678 + struct device *ddev = dev->ib_dev.dma_device; 654 679 struct umr_common *umrc = &dev->umrc; 655 680 struct ib_send_wr wr, *bad; 656 681 struct mlx5_ib_mr *mr; 657 682 struct ib_sge sg; 683 + int size = sizeof(u64) * npages; 658 684 int err; 659 685 int i; 660 686 ··· 675 697 if (!mr) 676 698 return ERR_PTR(-EAGAIN); 677 699 678 - mlx5_ib_populate_pas(dev, umem, page_shift, mr_align(mr->pas, 0x40), 1); 700 + mr->pas = kmalloc(size + MLX5_UMR_ALIGN - 1, GFP_KERNEL); 701 + if 
(!mr->pas) { 702 + err = -ENOMEM; 703 + goto error; 704 + } 705 + 706 + mlx5_ib_populate_pas(dev, umem, page_shift, 707 + mr_align(mr->pas, MLX5_UMR_ALIGN), 1); 708 + 709 + mr->dma = dma_map_single(ddev, mr_align(mr->pas, MLX5_UMR_ALIGN), size, 710 + DMA_TO_DEVICE); 711 + if (dma_mapping_error(ddev, mr->dma)) { 712 + kfree(mr->pas); 713 + err = -ENOMEM; 714 + goto error; 715 + } 679 716 680 717 memset(&wr, 0, sizeof(wr)); 681 718 wr.wr_id = (u64)(unsigned long)mr; ··· 710 717 } 711 718 wait_for_completion(&mr->done); 712 719 up(&umrc->sem); 720 + 721 + dma_unmap_single(ddev, mr->dma, size, DMA_TO_DEVICE); 722 + kfree(mr->pas); 713 723 714 724 if (mr->status != IB_WC_SUCCESS) { 715 725 mlx5_ib_warn(dev, "reg umr failed\n");
+30 -50
drivers/infiniband/hw/mlx5/qp.c
··· 203 203 204 204 switch (qp_type) { 205 205 case IB_QPT_XRC_INI: 206 - size = sizeof(struct mlx5_wqe_xrc_seg); 206 + size += sizeof(struct mlx5_wqe_xrc_seg); 207 207 /* fall through */ 208 208 case IB_QPT_RC: 209 209 size += sizeof(struct mlx5_wqe_ctrl_seg) + ··· 211 211 sizeof(struct mlx5_wqe_raddr_seg); 212 212 break; 213 213 214 + case IB_QPT_XRC_TGT: 215 + return 0; 216 + 214 217 case IB_QPT_UC: 215 - size = sizeof(struct mlx5_wqe_ctrl_seg) + 218 + size += sizeof(struct mlx5_wqe_ctrl_seg) + 216 219 sizeof(struct mlx5_wqe_raddr_seg); 217 220 break; 218 221 219 222 case IB_QPT_UD: 220 223 case IB_QPT_SMI: 221 224 case IB_QPT_GSI: 222 - size = sizeof(struct mlx5_wqe_ctrl_seg) + 225 + size += sizeof(struct mlx5_wqe_ctrl_seg) + 223 226 sizeof(struct mlx5_wqe_datagram_seg); 224 227 break; 225 228 226 229 case MLX5_IB_QPT_REG_UMR: 227 - size = sizeof(struct mlx5_wqe_ctrl_seg) + 230 + size += sizeof(struct mlx5_wqe_ctrl_seg) + 228 231 sizeof(struct mlx5_wqe_umr_ctrl_seg) + 229 232 sizeof(struct mlx5_mkey_seg); 230 233 break; ··· 273 270 return wqe_size; 274 271 275 272 if (wqe_size > dev->mdev.caps.max_sq_desc_sz) { 276 - mlx5_ib_dbg(dev, "\n"); 273 + mlx5_ib_dbg(dev, "wqe_size(%d) > max_sq_desc_sz(%d)\n", 274 + wqe_size, dev->mdev.caps.max_sq_desc_sz); 277 275 return -EINVAL; 278 276 } 279 277 ··· 284 280 285 281 wq_size = roundup_pow_of_two(attr->cap.max_send_wr * wqe_size); 286 282 qp->sq.wqe_cnt = wq_size / MLX5_SEND_WQE_BB; 283 + if (qp->sq.wqe_cnt > dev->mdev.caps.max_wqes) { 284 + mlx5_ib_dbg(dev, "wqe count(%d) exceeds limits(%d)\n", 285 + qp->sq.wqe_cnt, dev->mdev.caps.max_wqes); 286 + return -ENOMEM; 287 + } 287 288 qp->sq.wqe_shift = ilog2(MLX5_SEND_WQE_BB); 288 289 qp->sq.max_gs = attr->cap.max_send_sge; 289 - qp->sq.max_post = 1 << ilog2(wq_size / wqe_size); 290 + qp->sq.max_post = wq_size / wqe_size; 291 + attr->cap.max_send_wr = qp->sq.max_post; 290 292 291 293 return wq_size; 292 294 } ··· 1290 1280 MLX5_QP_OPTPAR_Q_KEY, 1291 1281 [MLX5_QP_ST_MLX] = 
MLX5_QP_OPTPAR_PKEY_INDEX | 1292 1282 MLX5_QP_OPTPAR_Q_KEY, 1283 + [MLX5_QP_ST_XRC] = MLX5_QP_OPTPAR_ALT_ADDR_PATH | 1284 + MLX5_QP_OPTPAR_RRE | 1285 + MLX5_QP_OPTPAR_RAE | 1286 + MLX5_QP_OPTPAR_RWE | 1287 + MLX5_QP_OPTPAR_PKEY_INDEX, 1293 1288 }, 1294 1289 }, 1295 1290 [MLX5_QP_STATE_RTR] = { ··· 1329 1314 [MLX5_QP_STATE_RTS] = { 1330 1315 [MLX5_QP_ST_UD] = MLX5_QP_OPTPAR_Q_KEY, 1331 1316 [MLX5_QP_ST_MLX] = MLX5_QP_OPTPAR_Q_KEY, 1317 + [MLX5_QP_ST_UC] = MLX5_QP_OPTPAR_RWE, 1318 + [MLX5_QP_ST_RC] = MLX5_QP_OPTPAR_RNR_TIMEOUT | 1319 + MLX5_QP_OPTPAR_RWE | 1320 + MLX5_QP_OPTPAR_RAE | 1321 + MLX5_QP_OPTPAR_RRE, 1332 1322 }, 1333 1323 }, 1334 1324 }; ··· 1669 1649 rseg->raddr = cpu_to_be64(remote_addr); 1670 1650 rseg->rkey = cpu_to_be32(rkey); 1671 1651 rseg->reserved = 0; 1672 - } 1673 - 1674 - static void set_atomic_seg(struct mlx5_wqe_atomic_seg *aseg, struct ib_send_wr *wr) 1675 - { 1676 - if (wr->opcode == IB_WR_ATOMIC_CMP_AND_SWP) { 1677 - aseg->swap_add = cpu_to_be64(wr->wr.atomic.swap); 1678 - aseg->compare = cpu_to_be64(wr->wr.atomic.compare_add); 1679 - } else if (wr->opcode == IB_WR_MASKED_ATOMIC_FETCH_AND_ADD) { 1680 - aseg->swap_add = cpu_to_be64(wr->wr.atomic.compare_add); 1681 - aseg->compare = cpu_to_be64(wr->wr.atomic.compare_add_mask); 1682 - } else { 1683 - aseg->swap_add = cpu_to_be64(wr->wr.atomic.compare_add); 1684 - aseg->compare = 0; 1685 - } 1686 - } 1687 - 1688 - static void set_masked_atomic_seg(struct mlx5_wqe_masked_atomic_seg *aseg, 1689 - struct ib_send_wr *wr) 1690 - { 1691 - aseg->swap_add = cpu_to_be64(wr->wr.atomic.swap); 1692 - aseg->swap_add_mask = cpu_to_be64(wr->wr.atomic.swap_mask); 1693 - aseg->compare = cpu_to_be64(wr->wr.atomic.compare_add); 1694 - aseg->compare_mask = cpu_to_be64(wr->wr.atomic.compare_add_mask); 1695 1652 } 1696 1653 1697 1654 static void set_datagram_seg(struct mlx5_wqe_datagram_seg *dseg, ··· 2060 2063 2061 2064 case IB_WR_ATOMIC_CMP_AND_SWP: 2062 2065 case IB_WR_ATOMIC_FETCH_AND_ADD: 2063 - 
set_raddr_seg(seg, wr->wr.atomic.remote_addr, 2064 - wr->wr.atomic.rkey); 2065 - seg += sizeof(struct mlx5_wqe_raddr_seg); 2066 - 2067 - set_atomic_seg(seg, wr); 2068 - seg += sizeof(struct mlx5_wqe_atomic_seg); 2069 - 2070 - size += (sizeof(struct mlx5_wqe_raddr_seg) + 2071 - sizeof(struct mlx5_wqe_atomic_seg)) / 16; 2072 - break; 2073 - 2074 2066 case IB_WR_MASKED_ATOMIC_CMP_AND_SWP: 2075 - set_raddr_seg(seg, wr->wr.atomic.remote_addr, 2076 - wr->wr.atomic.rkey); 2077 - seg += sizeof(struct mlx5_wqe_raddr_seg); 2078 - 2079 - set_masked_atomic_seg(seg, wr); 2080 - seg += sizeof(struct mlx5_wqe_masked_atomic_seg); 2081 - 2082 - size += (sizeof(struct mlx5_wqe_raddr_seg) + 2083 - sizeof(struct mlx5_wqe_masked_atomic_seg)) / 16; 2084 - break; 2067 + mlx5_ib_warn(dev, "Atomic operations are not supported yet\n"); 2068 + err = -ENOSYS; 2069 + *bad_wr = wr; 2070 + goto out; 2085 2071 2086 2072 case IB_WR_LOCAL_INV: 2087 2073 next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL;
+3 -1
drivers/infiniband/hw/mlx5/srq.c
··· 295 295 mlx5_vfree(in); 296 296 if (err) { 297 297 mlx5_ib_dbg(dev, "create SRQ failed, err %d\n", err); 298 - goto err_srq; 298 + goto err_usr_kern_srq; 299 299 } 300 300 301 301 mlx5_ib_dbg(dev, "create SRQ with srqn 0x%x\n", srq->msrq.srqn); ··· 316 316 317 317 err_core: 318 318 mlx5_core_destroy_srq(&dev->mdev, &srq->msrq); 319 + 320 + err_usr_kern_srq: 319 321 if (pd->uobject) 320 322 destroy_srq_user(pd, srq); 321 323 else
+1 -1
drivers/infiniband/hw/mthca/mthca_eq.c
··· 357 357 mthca_warn(dev, "Unhandled event %02x(%02x) on EQ %d\n", 358 358 eqe->type, eqe->subtype, eq->eqn); 359 359 break; 360 - }; 360 + } 361 361 362 362 set_eqe_hw(eqe); 363 363 ++eq->cons_index;
+3 -3
drivers/infiniband/hw/ocrdma/ocrdma_hw.c
··· 150 150 return IB_QPS_SQE; 151 151 case OCRDMA_QPS_ERR: 152 152 return IB_QPS_ERR; 153 - }; 153 + } 154 154 return IB_QPS_ERR; 155 155 } 156 156 ··· 171 171 return OCRDMA_QPS_SQE; 172 172 case IB_QPS_ERR: 173 173 return OCRDMA_QPS_ERR; 174 - }; 174 + } 175 175 return OCRDMA_QPS_ERR; 176 176 } 177 177 ··· 1982 1982 break; 1983 1983 default: 1984 1984 return -EINVAL; 1985 - }; 1985 + } 1986 1986 1987 1987 cmd = ocrdma_init_emb_mqe(OCRDMA_CMD_CREATE_QP, sizeof(*cmd)); 1988 1988 if (!cmd)
+1 -1
drivers/infiniband/hw/ocrdma/ocrdma_main.c
··· 531 531 case BE_DEV_DOWN: 532 532 ocrdma_close(dev); 533 533 break; 534 - }; 534 + } 535 535 } 536 536 537 537 static struct ocrdma_driver ocrdma_drv = {
+3 -3
drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
··· 141 141 /* Unsupported */ 142 142 *ib_speed = IB_SPEED_SDR; 143 143 *ib_width = IB_WIDTH_1X; 144 - }; 144 + } 145 145 } 146 146 147 147 ··· 2331 2331 default: 2332 2332 ibwc_status = IB_WC_GENERAL_ERR; 2333 2333 break; 2334 - }; 2334 + } 2335 2335 return ibwc_status; 2336 2336 } 2337 2337 ··· 2370 2370 pr_err("%s() invalid opcode received = 0x%x\n", 2371 2371 __func__, hdr->cw & OCRDMA_WQE_OPCODE_MASK); 2372 2372 break; 2373 - }; 2373 + } 2374 2374 } 2375 2375 2376 2376 static void ocrdma_set_cqe_status_flushed(struct ocrdma_qp *qp,
+6 -8
drivers/infiniband/ulp/srpt/ib_srpt.c
··· 1588 1588 int resp_data_len; 1589 1589 int resp_len; 1590 1590 1591 - resp_data_len = (rsp_code == SRP_TSK_MGMT_SUCCESS) ? 0 : 4; 1591 + resp_data_len = 4; 1592 1592 resp_len = sizeof(*srp_rsp) + resp_data_len; 1593 1593 1594 1594 srp_rsp = ioctx->ioctx.buf; ··· 1600 1600 + atomic_xchg(&ch->req_lim_delta, 0)); 1601 1601 srp_rsp->tag = tag; 1602 1602 1603 - if (rsp_code != SRP_TSK_MGMT_SUCCESS) { 1604 - srp_rsp->flags |= SRP_RSP_FLAG_RSPVALID; 1605 - srp_rsp->resp_data_len = cpu_to_be32(resp_data_len); 1606 - srp_rsp->data[3] = rsp_code; 1607 - } 1603 + srp_rsp->flags |= SRP_RSP_FLAG_RSPVALID; 1604 + srp_rsp->resp_data_len = cpu_to_be32(resp_data_len); 1605 + srp_rsp->data[3] = rsp_code; 1608 1606 1609 1607 return resp_len; 1610 1608 } ··· 2356 2358 transport_deregister_session(se_sess); 2357 2359 ch->sess = NULL; 2358 2360 2361 + ib_destroy_cm_id(ch->cm_id); 2362 + 2359 2363 srpt_destroy_ch_ib(ch); 2360 2364 2361 2365 srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_ring, ··· 2367 2367 spin_lock_irq(&sdev->spinlock); 2368 2368 list_del(&ch->list); 2369 2369 spin_unlock_irq(&sdev->spinlock); 2370 - 2371 - ib_destroy_cm_id(ch->cm_id); 2372 2370 2373 2371 if (ch->release_done) 2374 2372 complete(ch->release_done);
+1 -1
drivers/iommu/Kconfig
··· 52 52 select PCI_PRI 53 53 select PCI_PASID 54 54 select IOMMU_API 55 - depends on X86_64 && PCI && ACPI && X86_IO_APIC 55 + depends on X86_64 && PCI && ACPI 56 56 ---help--- 57 57 With this option you can enable support for AMD IOMMU hardware in 58 58 your system. An IOMMU is a hardware component which provides
+7 -6
drivers/iommu/arm-smmu.c
··· 377 377 u32 cbar; 378 378 pgd_t *pgd; 379 379 }; 380 + #define INVALID_IRPTNDX 0xff 380 381 381 382 #define ARM_SMMU_CB_ASID(cfg) ((cfg)->cbndx) 382 383 #define ARM_SMMU_CB_VMID(cfg) ((cfg)->cbndx + 1) ··· 841 840 if (IS_ERR_VALUE(ret)) { 842 841 dev_err(smmu->dev, "failed to request context IRQ %d (%u)\n", 843 842 root_cfg->irptndx, irq); 844 - root_cfg->irptndx = -1; 843 + root_cfg->irptndx = INVALID_IRPTNDX; 845 844 goto out_free_context; 846 845 } 847 846 ··· 870 869 writel_relaxed(0, cb_base + ARM_SMMU_CB_SCTLR); 871 870 arm_smmu_tlb_inv_context(root_cfg); 872 871 873 - if (root_cfg->irptndx != -1) { 872 + if (root_cfg->irptndx != INVALID_IRPTNDX) { 874 873 irq = smmu->irqs[smmu->num_global_irqs + root_cfg->irptndx]; 875 874 free_irq(irq, domain); 876 875 } ··· 1858 1857 goto out_put_parent; 1859 1858 } 1860 1859 1861 - arm_smmu_device_reset(smmu); 1862 - 1863 1860 for (i = 0; i < smmu->num_global_irqs; ++i) { 1864 1861 err = request_irq(smmu->irqs[i], 1865 1862 arm_smmu_global_fault, ··· 1875 1876 spin_lock(&arm_smmu_devices_lock); 1876 1877 list_add(&smmu->list, &arm_smmu_devices); 1877 1878 spin_unlock(&arm_smmu_devices_lock); 1879 + 1880 + arm_smmu_device_reset(smmu); 1878 1881 return 0; 1879 1882 1880 1883 out_free_irqs: ··· 1967 1966 return ret; 1968 1967 1969 1968 /* Oh, for a proper bus abstraction */ 1970 - if (!iommu_present(&platform_bus_type)); 1969 + if (!iommu_present(&platform_bus_type)) 1971 1970 bus_set_iommu(&platform_bus_type, &arm_smmu_ops); 1972 1971 1973 - if (!iommu_present(&amba_bustype)); 1972 + if (!iommu_present(&amba_bustype)) 1974 1973 bus_set_iommu(&amba_bustype, &arm_smmu_ops); 1975 1974 1976 1975 return 0;
+2 -3
drivers/md/bcache/request.c
··· 996 996 closure_bio_submit(bio, cl, s->d); 997 997 } else { 998 998 bch_writeback_add(dc); 999 + s->op.cache_bio = bio; 999 1000 1000 1001 if (bio->bi_rw & REQ_FLUSH) { 1001 1002 /* Also need to send a flush to the backing device */ 1002 - struct bio *flush = bio_alloc_bioset(0, GFP_NOIO, 1003 + struct bio *flush = bio_alloc_bioset(GFP_NOIO, 0, 1003 1004 dc->disk.bio_split); 1004 1005 1005 1006 flush->bi_rw = WRITE_FLUSH; ··· 1009 1008 flush->bi_private = cl; 1010 1009 1011 1010 closure_bio_submit(flush, cl, s->d); 1012 - } else { 1013 - s->op.cache_bio = bio; 1014 1011 } 1015 1012 } 1016 1013 out:
+12 -6
drivers/md/dm-snap-persistent.c
··· 269 269 return NUM_SNAPSHOT_HDR_CHUNKS + ((ps->exceptions_per_area + 1) * area); 270 270 } 271 271 272 + static void skip_metadata(struct pstore *ps) 273 + { 274 + uint32_t stride = ps->exceptions_per_area + 1; 275 + chunk_t next_free = ps->next_free; 276 + if (sector_div(next_free, stride) == NUM_SNAPSHOT_HDR_CHUNKS) 277 + ps->next_free++; 278 + } 279 + 272 280 /* 273 281 * Read or write a metadata area. Remembering to skip the first 274 282 * chunk which holds the header. ··· 510 502 511 503 ps->current_area--; 512 504 505 + skip_metadata(ps); 506 + 513 507 return 0; 514 508 } 515 509 ··· 626 616 struct dm_exception *e) 627 617 { 628 618 struct pstore *ps = get_info(store); 629 - uint32_t stride; 630 - chunk_t next_free; 631 619 sector_t size = get_dev_size(dm_snap_cow(store->snap)->bdev); 632 620 633 621 /* Is there enough room ? */ ··· 638 630 * Move onto the next free pending, making sure to take 639 631 * into account the location of the metadata chunks. 640 632 */ 641 - stride = (ps->exceptions_per_area + 1); 642 - next_free = ++ps->next_free; 643 - if (sector_div(next_free, stride) == 1) 644 - ps->next_free++; 633 + ps->next_free++; 634 + skip_metadata(ps); 645 635 646 636 atomic_inc(&ps->pending_count); 647 637 return 0;
+1 -8
drivers/media/dvb-frontends/tda10071.c
··· 912 912 { 0xd5, 0x03, 0x03 }, 913 913 }; 914 914 915 - /* firmware status */ 916 - ret = tda10071_rd_reg(priv, 0x51, &tmp); 917 - if (ret) 918 - goto error; 919 - 920 - if (!tmp) { 915 + if (priv->warm) { 921 916 /* warm state - wake up device from sleep */ 922 - priv->warm = 1; 923 917 924 918 for (i = 0; i < ARRAY_SIZE(tab); i++) { 925 919 ret = tda10071_wr_reg_mask(priv, tab[i].reg, ··· 931 937 goto error; 932 938 } else { 933 939 /* cold state - try to download firmware */ 934 - priv->warm = 0; 935 940 936 941 /* request the firmware, this will block and timeout */ 937 942 ret = request_firmware(&fw, fw_file, priv->i2c->dev.parent);
+6 -9
drivers/media/i2c/ad9389b.c
··· 628 628 629 629 static const struct v4l2_dv_timings_cap ad9389b_timings_cap = { 630 630 .type = V4L2_DV_BT_656_1120, 631 - .bt = { 632 - .max_width = 1920, 633 - .max_height = 1200, 634 - .min_pixelclock = 25000000, 635 - .max_pixelclock = 170000000, 636 - .standards = V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT | 631 + /* keep this initialization for compatibility with GCC < 4.4.6 */ 632 + .reserved = { 0 }, 633 + V4L2_INIT_BT_TIMINGS(0, 1920, 0, 1200, 25000000, 170000000, 634 + V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT | 637 635 V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT, 638 - .capabilities = V4L2_DV_BT_CAP_PROGRESSIVE | 639 - V4L2_DV_BT_CAP_REDUCED_BLANKING | V4L2_DV_BT_CAP_CUSTOM, 640 - }, 636 + V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING | 637 + V4L2_DV_BT_CAP_CUSTOM) 641 638 }; 642 639 643 640 static int ad9389b_s_dv_timings(struct v4l2_subdev *sd,
+9 -9
drivers/media/i2c/adv7511.c
··· 119 119 120 120 static const struct v4l2_dv_timings_cap adv7511_timings_cap = { 121 121 .type = V4L2_DV_BT_656_1120, 122 - .bt = { 123 - .max_width = ADV7511_MAX_WIDTH, 124 - .max_height = ADV7511_MAX_HEIGHT, 125 - .min_pixelclock = ADV7511_MIN_PIXELCLOCK, 126 - .max_pixelclock = ADV7511_MAX_PIXELCLOCK, 127 - .standards = V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT | 122 + /* keep this initialization for compatibility with GCC < 4.4.6 */ 123 + .reserved = { 0 }, 124 + V4L2_INIT_BT_TIMINGS(0, ADV7511_MAX_WIDTH, 0, ADV7511_MAX_HEIGHT, 125 + ADV7511_MIN_PIXELCLOCK, ADV7511_MAX_PIXELCLOCK, 126 + V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT | 128 127 V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT, 129 - .capabilities = V4L2_DV_BT_CAP_PROGRESSIVE | 130 - V4L2_DV_BT_CAP_REDUCED_BLANKING | V4L2_DV_BT_CAP_CUSTOM, 131 - }, 128 + V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING | 129 + V4L2_DV_BT_CAP_CUSTOM) 132 130 }; 133 131 134 132 static inline struct adv7511_state *get_adv7511_state(struct v4l2_subdev *sd) ··· 1124 1126 state->i2c_edid = i2c_new_dummy(client->adapter, state->i2c_edid_addr >> 1); 1125 1127 if (state->i2c_edid == NULL) { 1126 1128 v4l2_err(sd, "failed to register edid i2c client\n"); 1129 + err = -ENOMEM; 1127 1130 goto err_entity; 1128 1131 } 1129 1132 ··· 1132 1133 state->work_queue = create_singlethread_workqueue(sd->name); 1133 1134 if (state->work_queue == NULL) { 1134 1135 v4l2_err(sd, "could not create workqueue\n"); 1136 + err = -ENOMEM; 1135 1137 goto err_unreg_cec; 1136 1138 } 1137 1139
+12 -18
drivers/media/i2c/adv7842.c
··· 546 546 547 547 static const struct v4l2_dv_timings_cap adv7842_timings_cap_analog = { 548 548 .type = V4L2_DV_BT_656_1120, 549 - .bt = { 550 - .max_width = 1920, 551 - .max_height = 1200, 552 - .min_pixelclock = 25000000, 553 - .max_pixelclock = 170000000, 554 - .standards = V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT | 549 + /* keep this initialization for compatibility with GCC < 4.4.6 */ 550 + .reserved = { 0 }, 551 + V4L2_INIT_BT_TIMINGS(0, 1920, 0, 1200, 25000000, 170000000, 552 + V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT | 555 553 V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT, 556 - .capabilities = V4L2_DV_BT_CAP_PROGRESSIVE | 557 - V4L2_DV_BT_CAP_REDUCED_BLANKING | V4L2_DV_BT_CAP_CUSTOM, 558 - }, 554 + V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING | 555 + V4L2_DV_BT_CAP_CUSTOM) 559 556 }; 560 557 561 558 static const struct v4l2_dv_timings_cap adv7842_timings_cap_digital = { 562 559 .type = V4L2_DV_BT_656_1120, 563 - .bt = { 564 - .max_width = 1920, 565 - .max_height = 1200, 566 - .min_pixelclock = 25000000, 567 - .max_pixelclock = 225000000, 568 - .standards = V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT | 560 + /* keep this initialization for compatibility with GCC < 4.4.6 */ 561 + .reserved = { 0 }, 562 + V4L2_INIT_BT_TIMINGS(0, 1920, 0, 1200, 25000000, 225000000, 563 + V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT | 569 564 V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT, 570 - .capabilities = V4L2_DV_BT_CAP_PROGRESSIVE | 571 - V4L2_DV_BT_CAP_REDUCED_BLANKING | V4L2_DV_BT_CAP_CUSTOM, 572 - }, 565 + V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING | 566 + V4L2_DV_BT_CAP_CUSTOM) 573 567 }; 574 568 575 569 static inline const struct v4l2_dv_timings_cap *
+4 -8
drivers/media/i2c/ths8200.c
··· 46 46 47 47 static const struct v4l2_dv_timings_cap ths8200_timings_cap = { 48 48 .type = V4L2_DV_BT_656_1120, 49 - .bt = { 50 - .max_width = 1920, 51 - .max_height = 1080, 52 - .min_pixelclock = 25000000, 53 - .max_pixelclock = 148500000, 54 - .standards = V4L2_DV_BT_STD_CEA861, 55 - .capabilities = V4L2_DV_BT_CAP_PROGRESSIVE, 56 - }, 49 + /* keep this initialization for compatibility with GCC < 4.4.6 */ 50 + .reserved = { 0 }, 51 + V4L2_INIT_BT_TIMINGS(0, 1920, 0, 1080, 25000000, 148500000, 52 + V4L2_DV_BT_STD_CEA861, V4L2_DV_BT_CAP_PROGRESSIVE) 57 53 }; 58 54 59 55 static inline struct ths8200_state *to_state(struct v4l2_subdev *sd)
+1
drivers/media/pci/saa7134/saa7134-video.c
··· 1455 1455 1456 1456 /* stop video capture */ 1457 1457 if (res_check(fh, RESOURCE_VIDEO)) { 1458 + pm_qos_remove_request(&dev->qos_request); 1458 1459 videobuf_streamoff(&fh->cap); 1459 1460 res_free(dev,fh,RESOURCE_VIDEO); 1460 1461 }
+1
drivers/media/platform/s5p-jpeg/jpeg-core.c
··· 1423 1423 jpeg->vfd_decoder->release = video_device_release; 1424 1424 jpeg->vfd_decoder->lock = &jpeg->lock; 1425 1425 jpeg->vfd_decoder->v4l2_dev = &jpeg->v4l2_dev; 1426 + jpeg->vfd_decoder->vfl_dir = VFL_DIR_M2M; 1426 1427 1427 1428 ret = video_register_device(jpeg->vfd_decoder, VFL_TYPE_GRABBER, -1); 1428 1429 if (ret) {
+1 -1
drivers/media/platform/sh_vou.c
··· 776 776 v4l_bound_align_image(&pix->width, 0, VOU_MAX_IMAGE_WIDTH, 1, 777 777 &pix->height, 0, VOU_MAX_IMAGE_HEIGHT, 1, 0); 778 778 779 - for (i = 0; ARRAY_SIZE(vou_fmt); i++) 779 + for (i = 0; i < ARRAY_SIZE(vou_fmt); i++) 780 780 if (vou_fmt[i].pfmt == pix->pixelformat) 781 781 return 0; 782 782
+2 -3
drivers/media/platform/soc_camera/mx3_camera.c
··· 266 266 struct idmac_channel *ichan = mx3_cam->idmac_channel[0]; 267 267 struct idmac_video_param *video = &ichan->params.video; 268 268 const struct soc_mbus_pixelfmt *host_fmt = icd->current_fmt->host_fmt; 269 - unsigned long flags; 270 269 dma_cookie_t cookie; 271 270 size_t new_size; 272 271 ··· 327 328 memset(vb2_plane_vaddr(vb, 0), 0xaa, vb2_get_plane_payload(vb, 0)); 328 329 #endif 329 330 330 - spin_lock_irqsave(&mx3_cam->lock, flags); 331 + spin_lock_irq(&mx3_cam->lock); 331 332 list_add_tail(&buf->queue, &mx3_cam->capture); 332 333 333 334 if (!mx3_cam->active) ··· 350 351 if (mx3_cam->active == buf) 351 352 mx3_cam->active = NULL; 352 353 353 - spin_unlock_irqrestore(&mx3_cam->lock, flags); 354 + spin_unlock_irq(&mx3_cam->lock); 354 355 error: 355 356 vb2_buffer_done(vb, VB2_BUF_STATE_ERROR); 356 357 }
+2 -1
drivers/media/tuners/e4000.c
··· 19 19 */ 20 20 21 21 #include "e4000_priv.h" 22 + #include <linux/math64.h> 22 23 23 24 /* write multiple registers */ 24 25 static int e4000_wr_regs(struct e4000_priv *priv, u8 reg, u8 *val, int len) ··· 234 233 * or more. 235 234 */ 236 235 f_vco = c->frequency * e4000_pll_lut[i].mul; 237 - sigma_delta = 0x10000UL * (f_vco % priv->cfg->clock) / priv->cfg->clock; 236 + sigma_delta = div_u64(0x10000ULL * (f_vco % priv->cfg->clock), priv->cfg->clock); 238 237 buf[0] = f_vco / priv->cfg->clock; 239 238 buf[1] = (sigma_delta >> 0) & 0xff; 240 239 buf[2] = (sigma_delta >> 8) & 0xff;
+7
drivers/media/usb/stkwebcam/stk-webcam.c
··· 111 111 DMI_MATCH(DMI_PRODUCT_NAME, "F3JC") 112 112 } 113 113 }, 114 + { 115 + .ident = "T12Rg-H", 116 + .matches = { 117 + DMI_MATCH(DMI_SYS_VENDOR, "HCL Infosystems Limited"), 118 + DMI_MATCH(DMI_PRODUCT_NAME, "T12Rg-H") 119 + } 120 + }, 114 121 {} 115 122 }; 116 123
+18
drivers/media/usb/uvc/uvc_driver.c
··· 2090 2090 .bInterfaceSubClass = 1, 2091 2091 .bInterfaceProtocol = 0, 2092 2092 .driver_info = UVC_QUIRK_PROBE_MINMAX }, 2093 + /* Microsoft Lifecam NX-3000 */ 2094 + { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 2095 + | USB_DEVICE_ID_MATCH_INT_INFO, 2096 + .idVendor = 0x045e, 2097 + .idProduct = 0x0721, 2098 + .bInterfaceClass = USB_CLASS_VIDEO, 2099 + .bInterfaceSubClass = 1, 2100 + .bInterfaceProtocol = 0, 2101 + .driver_info = UVC_QUIRK_PROBE_DEF }, 2093 2102 /* Microsoft Lifecam VX-7000 */ 2094 2103 { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 2095 2104 | USB_DEVICE_ID_MATCH_INT_INFO, ··· 2179 2170 | USB_DEVICE_ID_MATCH_INT_INFO, 2180 2171 .idVendor = 0x05a9, 2181 2172 .idProduct = 0x2640, 2173 + .bInterfaceClass = USB_CLASS_VIDEO, 2174 + .bInterfaceSubClass = 1, 2175 + .bInterfaceProtocol = 0, 2176 + .driver_info = UVC_QUIRK_PROBE_DEF }, 2177 + /* Dell SP2008WFP Monitor */ 2178 + { .match_flags = USB_DEVICE_ID_MATCH_DEVICE 2179 + | USB_DEVICE_ID_MATCH_INT_INFO, 2180 + .idVendor = 0x05a9, 2181 + .idProduct = 0x2641, 2182 2182 .bInterfaceClass = USB_CLASS_VIDEO, 2183 2183 .bInterfaceSubClass = 1, 2184 2184 .bInterfaceProtocol = 0,
+3 -1
drivers/media/v4l2-core/videobuf2-core.c
··· 353 353 354 354 if (b->m.planes[plane].bytesused > length) 355 355 return -EINVAL; 356 - if (b->m.planes[plane].data_offset >= 356 + 357 + if (b->m.planes[plane].data_offset > 0 && 358 + b->m.planes[plane].data_offset >= 357 359 b->m.planes[plane].bytesused) 358 360 return -EINVAL; 359 361 }
+82 -5
drivers/media/v4l2-core/videobuf2-dma-contig.c
··· 423 423 return !!(vma->vm_flags & (VM_IO | VM_PFNMAP)); 424 424 } 425 425 426 + static int vb2_dc_get_user_pfn(unsigned long start, int n_pages, 427 + struct vm_area_struct *vma, unsigned long *res) 428 + { 429 + unsigned long pfn, start_pfn, prev_pfn; 430 + unsigned int i; 431 + int ret; 432 + 433 + if (!vma_is_io(vma)) 434 + return -EFAULT; 435 + 436 + ret = follow_pfn(vma, start, &pfn); 437 + if (ret) 438 + return ret; 439 + 440 + start_pfn = pfn; 441 + start += PAGE_SIZE; 442 + 443 + for (i = 1; i < n_pages; ++i, start += PAGE_SIZE) { 444 + prev_pfn = pfn; 445 + ret = follow_pfn(vma, start, &pfn); 446 + 447 + if (ret) { 448 + pr_err("no page for address %lu\n", start); 449 + return ret; 450 + } 451 + if (pfn != prev_pfn + 1) 452 + return -EINVAL; 453 + } 454 + 455 + *res = start_pfn; 456 + return 0; 457 + } 458 + 426 459 static int vb2_dc_get_user_pages(unsigned long start, struct page **pages, 427 460 int n_pages, struct vm_area_struct *vma, int write) 428 461 { ··· 465 432 for (i = 0; i < n_pages; ++i, start += PAGE_SIZE) { 466 433 unsigned long pfn; 467 434 int ret = follow_pfn(vma, start, &pfn); 435 + 436 + if (!pfn_valid(pfn)) 437 + return -EINVAL; 468 438 469 439 if (ret) { 470 440 pr_err("no page for address %lu\n", start); ··· 504 468 struct vb2_dc_buf *buf = buf_priv; 505 469 struct sg_table *sgt = buf->dma_sgt; 506 470 507 - dma_unmap_sg(buf->dev, sgt->sgl, sgt->orig_nents, buf->dma_dir); 508 - if (!vma_is_io(buf->vma)) 509 - vb2_dc_sgt_foreach_page(sgt, vb2_dc_put_dirty_page); 471 + if (sgt) { 472 + dma_unmap_sg(buf->dev, sgt->sgl, sgt->orig_nents, buf->dma_dir); 473 + if (!vma_is_io(buf->vma)) 474 + vb2_dc_sgt_foreach_page(sgt, vb2_dc_put_dirty_page); 510 475 511 - sg_free_table(sgt); 512 - kfree(sgt); 476 + sg_free_table(sgt); 477 + kfree(sgt); 478 + } 513 479 vb2_put_vma(buf->vma); 514 480 kfree(buf); 515 481 } 482 + 483 + /* 484 + * For some kind of reserved memory there might be no struct page available, 485 + * so all that can be done to 
support such 'pages' is to try to convert 486 + * pfn to dma address or at the last resort just assume that 487 + * dma address == physical address (like it has been assumed in earlier version 488 + * of videobuf2-dma-contig 489 + */ 490 + 491 + #ifdef __arch_pfn_to_dma 492 + static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn) 493 + { 494 + return (dma_addr_t)__arch_pfn_to_dma(dev, pfn); 495 + } 496 + #elif defined(__pfn_to_bus) 497 + static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn) 498 + { 499 + return (dma_addr_t)__pfn_to_bus(pfn); 500 + } 501 + #elif defined(__pfn_to_phys) 502 + static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn) 503 + { 504 + return (dma_addr_t)__pfn_to_phys(pfn); 505 + } 506 + #else 507 + static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn) 508 + { 509 + /* really, we cannot do anything better at this point */ 510 + return (dma_addr_t)(pfn) << PAGE_SHIFT; 511 + } 512 + #endif 516 513 517 514 static void *vb2_dc_get_userptr(void *alloc_ctx, unsigned long vaddr, 518 515 unsigned long size, int write) ··· 617 548 /* extract page list from userspace mapping */ 618 549 ret = vb2_dc_get_user_pages(start, pages, n_pages, vma, write); 619 550 if (ret) { 551 + unsigned long pfn; 552 + if (vb2_dc_get_user_pfn(start, n_pages, vma, &pfn) == 0) { 553 + buf->dma_addr = vb2_dc_pfn_to_dma(buf->dev, pfn); 554 + buf->size = size; 555 + kfree(pages); 556 + return buf; 557 + } 558 + 620 559 pr_err("failed to get user pages\n"); 621 560 goto fail_vma; 622 561 }
+8 -8
drivers/mmc/host/sh_mobile_sdhi.c
··· 113 113 }; 114 114 115 115 static const struct of_device_id sh_mobile_sdhi_of_match[] = { 116 - { .compatible = "renesas,shmobile-sdhi" }, 117 - { .compatible = "renesas,sh7372-sdhi" }, 118 - { .compatible = "renesas,sh73a0-sdhi", .data = &sh_mobile_sdhi_of_cfg[0], }, 119 - { .compatible = "renesas,r8a73a4-sdhi", .data = &sh_mobile_sdhi_of_cfg[0], }, 120 - { .compatible = "renesas,r8a7740-sdhi", .data = &sh_mobile_sdhi_of_cfg[0], }, 121 - { .compatible = "renesas,r8a7778-sdhi", .data = &sh_mobile_sdhi_of_cfg[0], }, 122 - { .compatible = "renesas,r8a7779-sdhi", .data = &sh_mobile_sdhi_of_cfg[0], }, 123 - { .compatible = "renesas,r8a7790-sdhi", .data = &sh_mobile_sdhi_of_cfg[0], }, 116 + { .compatible = "renesas,sdhi-shmobile" }, 117 + { .compatible = "renesas,sdhi-sh7372" }, 118 + { .compatible = "renesas,sdhi-sh73a0", .data = &sh_mobile_sdhi_of_cfg[0], }, 119 + { .compatible = "renesas,sdhi-r8a73a4", .data = &sh_mobile_sdhi_of_cfg[0], }, 120 + { .compatible = "renesas,sdhi-r8a7740", .data = &sh_mobile_sdhi_of_cfg[0], }, 121 + { .compatible = "renesas,sdhi-r8a7778", .data = &sh_mobile_sdhi_of_cfg[0], }, 122 + { .compatible = "renesas,sdhi-r8a7779", .data = &sh_mobile_sdhi_of_cfg[0], }, 123 + { .compatible = "renesas,sdhi-r8a7790", .data = &sh_mobile_sdhi_of_cfg[0], }, 124 124 {}, 125 125 }; 126 126 MODULE_DEVICE_TABLE(of, sh_mobile_sdhi_of_match);
+15 -2
drivers/mtd/devices/m25p80.c
··· 168 168 */ 169 169 static inline int set_4byte(struct m25p *flash, u32 jedec_id, int enable) 170 170 { 171 + int status; 172 + bool need_wren = false; 173 + 171 174 switch (JEDEC_MFR(jedec_id)) { 172 - case CFI_MFR_MACRONIX: 173 175 case CFI_MFR_ST: /* Micron, actually */ 176 + /* Some Micron need WREN command; all will accept it */ 177 + need_wren = true; 178 + case CFI_MFR_MACRONIX: 174 179 case 0xEF /* winbond */: 180 + if (need_wren) 181 + write_enable(flash); 182 + 175 183 flash->command[0] = enable ? OPCODE_EN4B : OPCODE_EX4B; 176 - return spi_write(flash->spi, flash->command, 1); 184 + status = spi_write(flash->spi, flash->command, 1); 185 + 186 + if (need_wren) 187 + write_disable(flash); 188 + 189 + return status; 177 190 default: 178 191 /* Spansion style */ 179 192 flash->command[0] = OPCODE_BRWR;
+3 -5
drivers/mtd/nand/nand_base.c
··· 2869 2869 2870 2870 len = le16_to_cpu(p->ext_param_page_length) * 16; 2871 2871 ep = kmalloc(len, GFP_KERNEL); 2872 - if (!ep) { 2873 - ret = -ENOMEM; 2874 - goto ext_out; 2875 - } 2872 + if (!ep) 2873 + return -ENOMEM; 2876 2874 2877 2875 /* Send our own NAND_CMD_PARAM. */ 2878 2876 chip->cmdfunc(mtd, NAND_CMD_PARAM, 0, -1); ··· 2918 2920 } 2919 2921 2920 2922 pr_info("ONFI extended param page detected.\n"); 2921 - return 0; 2923 + ret = 0; 2922 2924 2923 2925 ext_out: 2924 2926 kfree(ep);
+2 -2
drivers/net/can/at91_can.c
··· 1405 1405 1406 1406 static const struct platform_device_id at91_can_id_table[] = { 1407 1407 { 1408 - .name = "at91_can", 1408 + .name = "at91sam9x5_can", 1409 1409 .driver_data = (kernel_ulong_t)&at91_at91sam9x5_data, 1410 1410 }, { 1411 - .name = "at91sam9x5_can", 1411 + .name = "at91_can", 1412 1412 .driver_data = (kernel_ulong_t)&at91_at91sam9263_data, 1413 1413 }, { 1414 1414 /* sentinel */
+10 -4
drivers/net/can/flexcan.c
··· 62 62 #define FLEXCAN_MCR_BCC BIT(16) 63 63 #define FLEXCAN_MCR_LPRIO_EN BIT(13) 64 64 #define FLEXCAN_MCR_AEN BIT(12) 65 - #define FLEXCAN_MCR_MAXMB(x) ((x) & 0xf) 65 + #define FLEXCAN_MCR_MAXMB(x) ((x) & 0x1f) 66 66 #define FLEXCAN_MCR_IDAM_A (0 << 8) 67 67 #define FLEXCAN_MCR_IDAM_B (1 << 8) 68 68 #define FLEXCAN_MCR_IDAM_C (2 << 8) ··· 735 735 * 736 736 */ 737 737 reg_mcr = flexcan_read(&regs->mcr); 738 + reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff); 738 739 reg_mcr |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_FEN | FLEXCAN_MCR_HALT | 739 740 FLEXCAN_MCR_SUPV | FLEXCAN_MCR_WRN_EN | 740 - FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_SRX_DIS; 741 + FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_SRX_DIS | 742 + FLEXCAN_MCR_MAXMB(FLEXCAN_TX_BUF_ID); 741 743 netdev_dbg(dev, "%s: writing mcr=0x%08x", __func__, reg_mcr); 742 744 flexcan_write(reg_mcr, &regs->mcr); 743 745 ··· 772 770 priv->reg_ctrl_default = reg_ctrl; 773 771 netdev_dbg(dev, "%s: writing ctrl=0x%08x", __func__, reg_ctrl); 774 772 flexcan_write(reg_ctrl, &regs->ctrl); 773 + 774 + /* Abort any pending TX, mark Mailbox as INACTIVE */ 775 + flexcan_write(FLEXCAN_MB_CNT_CODE(0x4), 776 + &regs->cantxfg[FLEXCAN_TX_BUF_ID].can_ctrl); 775 777 776 778 /* acceptance mask/acceptance code (accept everything) */ 777 779 flexcan_write(0x0, &regs->rxgmask); ··· 985 979 } 986 980 987 981 static const struct of_device_id flexcan_of_match[] = { 988 - { .compatible = "fsl,p1010-flexcan", .data = &fsl_p1010_devtype_data, }, 989 - { .compatible = "fsl,imx28-flexcan", .data = &fsl_imx28_devtype_data, }, 990 982 { .compatible = "fsl,imx6q-flexcan", .data = &fsl_imx6q_devtype_data, }, 983 + { .compatible = "fsl,imx28-flexcan", .data = &fsl_imx28_devtype_data, }, 984 + { .compatible = "fsl,p1010-flexcan", .data = &fsl_p1010_devtype_data, }, 991 985 { /* sentinel */ }, 992 986 }; 993 987 MODULE_DEVICE_TABLE(of, flexcan_of_match);
+10 -5
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 1197 1197 /* TM (timers) host DB constants */ 1198 1198 #define TM_ILT_PAGE_SZ_HW 0 1199 1199 #define TM_ILT_PAGE_SZ (4096 << TM_ILT_PAGE_SZ_HW) /* 4K */ 1200 - /* #define TM_CONN_NUM (CNIC_STARTING_CID+CNIC_ISCSI_CXT_MAX) */ 1201 - #define TM_CONN_NUM 1024 1200 + #define TM_CONN_NUM (BNX2X_FIRST_VF_CID + \ 1201 + BNX2X_VF_CIDS + \ 1202 + CNIC_ISCSI_CID_MAX) 1202 1203 #define TM_ILT_SZ (8 * TM_CONN_NUM) 1203 1204 #define TM_ILT_LINES DIV_ROUND_UP(TM_ILT_SZ, TM_ILT_PAGE_SZ) 1204 1205 ··· 1528 1527 #define PCI_32BIT_FLAG (1 << 1) 1529 1528 #define ONE_PORT_FLAG (1 << 2) 1530 1529 #define NO_WOL_FLAG (1 << 3) 1531 - #define USING_DAC_FLAG (1 << 4) 1532 1530 #define USING_MSIX_FLAG (1 << 5) 1533 1531 #define USING_MSI_FLAG (1 << 6) 1534 1532 #define DISABLE_MSI_FLAG (1 << 7) ··· 1622 1622 u16 rx_ticks_int; 1623 1623 u16 rx_ticks; 1624 1624 /* Maximal coalescing timeout in us */ 1625 - #define BNX2X_MAX_COALESCE_TOUT (0xf0*12) 1625 + #define BNX2X_MAX_COALESCE_TOUT (0xff*BNX2X_BTR) 1626 1626 1627 1627 u32 lin_cnt; 1628 1628 ··· 2075 2075 2076 2076 void bnx2x_prep_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae, 2077 2077 u8 src_type, u8 dst_type); 2078 - int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae); 2078 + int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae, 2079 + u32 *comp); 2079 2080 2080 2081 /* FLR related routines */ 2081 2082 u32 bnx2x_flr_clnup_poll_count(struct bnx2x *bp); ··· 2496 2495 #define NUM_MACS 8 2497 2496 2498 2497 void bnx2x_set_local_cmng(struct bnx2x *bp); 2498 + 2499 + #define MCPR_SCRATCH_BASE(bp) \ 2500 + (CHIP_IS_E1x(bp) ? MCP_REG_MCPR_SCRATCH : MCP_A_REG_MCPR_SCRATCH) 2501 + 2499 2502 #endif /* bnx2x.h */
+1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
··· 681 681 } 682 682 } 683 683 #endif 684 + skb_record_rx_queue(skb, fp->rx_queue); 684 685 napi_gro_receive(&fp->napi, skb); 685 686 } 686 687
+2 -38
drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
··· 894 894 * will re-enable parity attentions right after the dump. 895 895 */ 896 896 897 - /* Disable parity on path 0 */ 898 - bnx2x_pretend_func(bp, 0); 899 897 bnx2x_disable_blocks_parity(bp); 900 - 901 - /* Disable parity on path 1 */ 902 - bnx2x_pretend_func(bp, 1); 903 - bnx2x_disable_blocks_parity(bp); 904 - 905 - /* Return to current function */ 906 - bnx2x_pretend_func(bp, BP_ABS_FUNC(bp)); 907 898 908 899 dump_hdr.header_size = (sizeof(struct dump_header) / 4) - 1; 909 900 dump_hdr.preset = DUMP_ALL_PRESETS; ··· 922 931 /* Actually read the registers */ 923 932 __bnx2x_get_regs(bp, p); 924 933 925 - /* Re-enable parity attentions on path 0 */ 926 - bnx2x_pretend_func(bp, 0); 934 + /* Re-enable parity attentions */ 927 935 bnx2x_clear_blocks_parity(bp); 928 936 bnx2x_enable_blocks_parity(bp); 929 - 930 - /* Re-enable parity attentions on path 1 */ 931 - bnx2x_pretend_func(bp, 1); 932 - bnx2x_clear_blocks_parity(bp); 933 - bnx2x_enable_blocks_parity(bp); 934 - 935 - /* Return to current function */ 936 - bnx2x_pretend_func(bp, BP_ABS_FUNC(bp)); 937 937 } 938 938 939 939 static int bnx2x_get_preset_regs_len(struct net_device *dev, u32 preset) ··· 978 996 * will re-enable parity attentions right after the dump. 
979 997 */ 980 998 981 - /* Disable parity on path 0 */ 982 - bnx2x_pretend_func(bp, 0); 983 999 bnx2x_disable_blocks_parity(bp); 984 - 985 - /* Disable parity on path 1 */ 986 - bnx2x_pretend_func(bp, 1); 987 - bnx2x_disable_blocks_parity(bp); 988 - 989 - /* Return to current function */ 990 - bnx2x_pretend_func(bp, BP_ABS_FUNC(bp)); 991 1000 992 1001 dump_hdr.header_size = (sizeof(struct dump_header) / 4) - 1; 993 1002 dump_hdr.preset = bp->dump_preset_idx; ··· 1008 1035 /* Actually read the registers */ 1009 1036 __bnx2x_get_preset_regs(bp, p, dump_hdr.preset); 1010 1037 1011 - /* Re-enable parity attentions on path 0 */ 1012 - bnx2x_pretend_func(bp, 0); 1038 + /* Re-enable parity attentions */ 1013 1039 bnx2x_clear_blocks_parity(bp); 1014 1040 bnx2x_enable_blocks_parity(bp); 1015 - 1016 - /* Re-enable parity attentions on path 1 */ 1017 - bnx2x_pretend_func(bp, 1); 1018 - bnx2x_clear_blocks_parity(bp); 1019 - bnx2x_enable_blocks_parity(bp); 1020 - 1021 - /* Return to current function */ 1022 - bnx2x_pretend_func(bp, BP_ABS_FUNC(bp)); 1023 1041 1024 1042 return 0; 1025 1043 }
+25 -13
drivers/net/ethernet/broadcom/bnx2x/bnx2x_init.h
··· 640 640 * [30] MCP Latched ump_tx_parity 641 641 * [31] MCP Latched scpad_parity 642 642 */ 643 - #define MISC_AEU_ENABLE_MCP_PRTY_BITS \ 643 + #define MISC_AEU_ENABLE_MCP_PRTY_SUB_BITS \ 644 644 (AEU_INPUTS_ATTN_BITS_MCP_LATCHED_ROM_PARITY | \ 645 645 AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_RX_PARITY | \ 646 - AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_TX_PARITY | \ 646 + AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_TX_PARITY) 647 + 648 + #define MISC_AEU_ENABLE_MCP_PRTY_BITS \ 649 + (MISC_AEU_ENABLE_MCP_PRTY_SUB_BITS | \ 647 650 AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY) 648 651 649 652 /* Below registers control the MCP parity attention output. When 650 653 * MISC_AEU_ENABLE_MCP_PRTY_BITS are set - attentions are 651 654 * enabled, when cleared - disabled. 652 655 */ 653 - static const u32 mcp_attn_ctl_regs[] = { 654 - MISC_REG_AEU_ENABLE4_FUNC_0_OUT_0, 655 - MISC_REG_AEU_ENABLE4_NIG_0, 656 - MISC_REG_AEU_ENABLE4_PXP_0, 657 - MISC_REG_AEU_ENABLE4_FUNC_1_OUT_0, 658 - MISC_REG_AEU_ENABLE4_NIG_1, 659 - MISC_REG_AEU_ENABLE4_PXP_1 656 + static const struct { 657 + u32 addr; 658 + u32 bits; 659 + } mcp_attn_ctl_regs[] = { 660 + { MISC_REG_AEU_ENABLE4_FUNC_0_OUT_0, 661 + MISC_AEU_ENABLE_MCP_PRTY_BITS }, 662 + { MISC_REG_AEU_ENABLE4_NIG_0, 663 + MISC_AEU_ENABLE_MCP_PRTY_SUB_BITS }, 664 + { MISC_REG_AEU_ENABLE4_PXP_0, 665 + MISC_AEU_ENABLE_MCP_PRTY_SUB_BITS }, 666 + { MISC_REG_AEU_ENABLE4_FUNC_1_OUT_0, 667 + MISC_AEU_ENABLE_MCP_PRTY_BITS }, 668 + { MISC_REG_AEU_ENABLE4_NIG_1, 669 + MISC_AEU_ENABLE_MCP_PRTY_SUB_BITS }, 670 + { MISC_REG_AEU_ENABLE4_PXP_1, 671 + MISC_AEU_ENABLE_MCP_PRTY_SUB_BITS } 660 672 }; 661 673 662 674 static inline void bnx2x_set_mcp_parity(struct bnx2x *bp, u8 enable) ··· 677 665 u32 reg_val; 678 666 679 667 for (i = 0; i < ARRAY_SIZE(mcp_attn_ctl_regs); i++) { 680 - reg_val = REG_RD(bp, mcp_attn_ctl_regs[i]); 668 + reg_val = REG_RD(bp, mcp_attn_ctl_regs[i].addr); 681 669 682 670 if (enable) 683 - reg_val |= MISC_AEU_ENABLE_MCP_PRTY_BITS; 671 + reg_val |= 
mcp_attn_ctl_regs[i].bits; 684 672 else 685 - reg_val &= ~MISC_AEU_ENABLE_MCP_PRTY_BITS; 673 + reg_val &= ~mcp_attn_ctl_regs[i].bits; 686 674 687 - REG_WR(bp, mcp_attn_ctl_regs[i], reg_val); 675 + REG_WR(bp, mcp_attn_ctl_regs[i].addr, reg_val); 688 676 } 689 677 } 690 678
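The bnx2x_init.h hunk above replaces a flat array of register addresses with an `{addr, bits}` table, so each AEU enable register can mask a different set of parity bits (only the FUNC_OUT registers keep the SCPAD bit). A minimal sketch of that table-driven pattern follows; the register file, `REG_RD`/`REG_WR`, and all mask values here are mocked stand-ins, not the real device registers:

```c
#include <assert.h>
#include <stdint.h>

/* Mock register file standing in for device MMIO (hypothetical). */
static uint32_t mock_regs[4];
#define REG_RD(addr)      (mock_regs[(addr)])
#define REG_WR(addr, val) (mock_regs[(addr)] = (val))

#define PRTY_SUB_BITS 0x7u                    /* ROM | UMP_RX | UMP_TX stand-ins */
#define PRTY_BITS     (PRTY_SUB_BITS | 0x8u)  /* ... | SCPAD stand-in            */

/* Per-register masks, mirroring the patch: only the FUNC_OUT entries
 * carry the full bit set including SCPAD. */
static const struct {
	uint32_t addr;
	uint32_t bits;
} attn_ctl_regs[] = {
	{ 0, PRTY_BITS },      /* FUNC_0_OUT stand-in */
	{ 1, PRTY_SUB_BITS },  /* NIG_0 stand-in      */
	{ 2, PRTY_BITS },      /* FUNC_1_OUT stand-in */
	{ 3, PRTY_SUB_BITS },  /* PXP_1 stand-in      */
};

static void set_parity_attn(int enable)
{
	unsigned int i;

	for (i = 0; i < sizeof(attn_ctl_regs) / sizeof(attn_ctl_regs[0]); i++) {
		uint32_t v = REG_RD(attn_ctl_regs[i].addr);

		if (enable)
			v |= attn_ctl_regs[i].bits;   /* set this reg's own mask   */
		else
			v &= ~attn_ctl_regs[i].bits;  /* clear this reg's own mask */
		REG_WR(attn_ctl_regs[i].addr, v);
	}
}
```

The point of the refactor is that the loop body no longer hard-codes one shared mask; the table carries the per-register policy.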
+212 -176
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 503 503 } 504 504 505 505 /* issue a dmae command over the init-channel and wait for completion */ 506 - int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae) 506 + int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae, 507 + u32 *comp) 507 508 { 508 - u32 *wb_comp = bnx2x_sp(bp, wb_comp); 509 509 int cnt = CHIP_REV_IS_SLOW(bp) ? (400000) : 4000; 510 510 int rc = 0; 511 511 ··· 518 518 spin_lock_bh(&bp->dmae_lock); 519 519 520 520 /* reset completion */ 521 - *wb_comp = 0; 521 + *comp = 0; 522 522 523 523 /* post the command on the channel used for initializations */ 524 524 bnx2x_post_dmae(bp, dmae, INIT_DMAE_C(bp)); 525 525 526 526 /* wait for completion */ 527 527 udelay(5); 528 - while ((*wb_comp & ~DMAE_PCI_ERR_FLAG) != DMAE_COMP_VAL) { 528 + while ((*comp & ~DMAE_PCI_ERR_FLAG) != DMAE_COMP_VAL) { 529 529 530 530 if (!cnt || 531 531 (bp->recovery_state != BNX2X_RECOVERY_DONE && ··· 537 537 cnt--; 538 538 udelay(50); 539 539 } 540 - if (*wb_comp & DMAE_PCI_ERR_FLAG) { 540 + if (*comp & DMAE_PCI_ERR_FLAG) { 541 541 BNX2X_ERR("DMAE PCI error!\n"); 542 542 rc = DMAE_PCI_ERROR; 543 543 } ··· 574 574 dmae.len = len32; 575 575 576 576 /* issue the command and wait for completion */ 577 - rc = bnx2x_issue_dmae_with_comp(bp, &dmae); 577 + rc = bnx2x_issue_dmae_with_comp(bp, &dmae, bnx2x_sp(bp, wb_comp)); 578 578 if (rc) { 579 579 BNX2X_ERR("DMAE returned failure %d\n", rc); 580 580 bnx2x_panic(); ··· 611 611 dmae.len = len32; 612 612 613 613 /* issue the command and wait for completion */ 614 - rc = bnx2x_issue_dmae_with_comp(bp, &dmae); 614 + rc = bnx2x_issue_dmae_with_comp(bp, &dmae, bnx2x_sp(bp, wb_comp)); 615 615 if (rc) { 616 616 BNX2X_ERR("DMAE returned failure %d\n", rc); 617 617 bnx2x_panic(); ··· 751 751 return rc; 752 752 } 753 753 754 + #define MCPR_TRACE_BUFFER_SIZE (0x800) 755 + #define SCRATCH_BUFFER_SIZE(bp) \ 756 + (CHIP_IS_E1(bp) ? 0x10000 : (CHIP_IS_E1H(bp) ? 
0x20000 : 0x28000)) 757 + 754 758 void bnx2x_fw_dump_lvl(struct bnx2x *bp, const char *lvl) 755 759 { 756 760 u32 addr, val; ··· 779 775 trace_shmem_base = bp->common.shmem_base; 780 776 else 781 777 trace_shmem_base = SHMEM2_RD(bp, other_shmem_base_addr); 782 - addr = trace_shmem_base - 0x800; 778 + 779 + /* sanity */ 780 + if (trace_shmem_base < MCPR_SCRATCH_BASE(bp) + MCPR_TRACE_BUFFER_SIZE || 781 + trace_shmem_base >= MCPR_SCRATCH_BASE(bp) + 782 + SCRATCH_BUFFER_SIZE(bp)) { 783 + BNX2X_ERR("Unable to dump trace buffer (mark %x)\n", 784 + trace_shmem_base); 785 + return; 786 + } 787 + 788 + addr = trace_shmem_base - MCPR_TRACE_BUFFER_SIZE; 783 789 784 790 /* validate TRCB signature */ 785 791 mark = REG_RD(bp, addr); ··· 801 787 /* read cyclic buffer pointer */ 802 788 addr += 4; 803 789 mark = REG_RD(bp, addr); 804 - mark = (CHIP_IS_E1x(bp) ? MCP_REG_MCPR_SCRATCH : MCP_A_REG_MCPR_SCRATCH) 805 - + ((mark + 0x3) & ~0x3) - 0x08000000; 790 + mark = MCPR_SCRATCH_BASE(bp) + ((mark + 0x3) & ~0x3) - 0x08000000; 791 + if (mark >= trace_shmem_base || mark < addr + 4) { 792 + BNX2X_ERR("Mark doesn't fall inside Trace Buffer\n"); 793 + return; 794 + } 806 795 printk("%s" "begin fw dump (mark 0x%x)\n", lvl, mark); 807 796 808 797 printk("%s", lvl); 809 798 810 799 /* dump buffer after the mark */ 811 - for (offset = mark; offset <= trace_shmem_base; offset += 0x8*4) { 800 + for (offset = mark; offset < trace_shmem_base; offset += 0x8*4) { 812 801 for (word = 0; word < 8; word++) 813 802 data[word] = htonl(REG_RD(bp, offset + 4*word)); 814 803 data[8] = 0x0; ··· 4297 4280 pr_cont("%s%s", idx ? 
", " : "", blk); 4298 4281 } 4299 4282 4300 - static int bnx2x_check_blocks_with_parity0(struct bnx2x *bp, u32 sig, 4301 - int par_num, bool print) 4283 + static bool bnx2x_check_blocks_with_parity0(struct bnx2x *bp, u32 sig, 4284 + int *par_num, bool print) 4302 4285 { 4303 - int i = 0; 4304 - u32 cur_bit = 0; 4286 + u32 cur_bit; 4287 + bool res; 4288 + int i; 4289 + 4290 + res = false; 4291 + 4305 4292 for (i = 0; sig; i++) { 4306 - cur_bit = ((u32)0x1 << i); 4293 + cur_bit = (0x1UL << i); 4307 4294 if (sig & cur_bit) { 4308 - switch (cur_bit) { 4309 - case AEU_INPUTS_ATTN_BITS_BRB_PARITY_ERROR: 4310 - if (print) { 4311 - _print_next_block(par_num++, "BRB"); 4295 + res |= true; /* Each bit is real error! */ 4296 + 4297 + if (print) { 4298 + switch (cur_bit) { 4299 + case AEU_INPUTS_ATTN_BITS_BRB_PARITY_ERROR: 4300 + _print_next_block((*par_num)++, "BRB"); 4312 4301 _print_parity(bp, 4313 4302 BRB1_REG_BRB1_PRTY_STS); 4314 - } 4315 - break; 4316 - case AEU_INPUTS_ATTN_BITS_PARSER_PARITY_ERROR: 4317 - if (print) { 4318 - _print_next_block(par_num++, "PARSER"); 4303 + break; 4304 + case AEU_INPUTS_ATTN_BITS_PARSER_PARITY_ERROR: 4305 + _print_next_block((*par_num)++, 4306 + "PARSER"); 4319 4307 _print_parity(bp, PRS_REG_PRS_PRTY_STS); 4320 - } 4321 - break; 4322 - case AEU_INPUTS_ATTN_BITS_TSDM_PARITY_ERROR: 4323 - if (print) { 4324 - _print_next_block(par_num++, "TSDM"); 4308 + break; 4309 + case AEU_INPUTS_ATTN_BITS_TSDM_PARITY_ERROR: 4310 + _print_next_block((*par_num)++, "TSDM"); 4325 4311 _print_parity(bp, 4326 4312 TSDM_REG_TSDM_PRTY_STS); 4327 - } 4328 - break; 4329 - case AEU_INPUTS_ATTN_BITS_SEARCHER_PARITY_ERROR: 4330 - if (print) { 4331 - _print_next_block(par_num++, 4313 + break; 4314 + case AEU_INPUTS_ATTN_BITS_SEARCHER_PARITY_ERROR: 4315 + _print_next_block((*par_num)++, 4332 4316 "SEARCHER"); 4333 4317 _print_parity(bp, SRC_REG_SRC_PRTY_STS); 4334 - } 4335 - break; 4336 - case AEU_INPUTS_ATTN_BITS_TCM_PARITY_ERROR: 4337 - if (print) { 4338 - 
_print_next_block(par_num++, "TCM"); 4339 - _print_parity(bp, 4340 - TCM_REG_TCM_PRTY_STS); 4341 - } 4342 - break; 4343 - case AEU_INPUTS_ATTN_BITS_TSEMI_PARITY_ERROR: 4344 - if (print) { 4345 - _print_next_block(par_num++, "TSEMI"); 4318 + break; 4319 + case AEU_INPUTS_ATTN_BITS_TCM_PARITY_ERROR: 4320 + _print_next_block((*par_num)++, "TCM"); 4321 + _print_parity(bp, TCM_REG_TCM_PRTY_STS); 4322 + break; 4323 + case AEU_INPUTS_ATTN_BITS_TSEMI_PARITY_ERROR: 4324 + _print_next_block((*par_num)++, 4325 + "TSEMI"); 4346 4326 _print_parity(bp, 4347 4327 TSEM_REG_TSEM_PRTY_STS_0); 4348 4328 _print_parity(bp, 4349 4329 TSEM_REG_TSEM_PRTY_STS_1); 4350 - } 4351 - break; 4352 - case AEU_INPUTS_ATTN_BITS_PBCLIENT_PARITY_ERROR: 4353 - if (print) { 4354 - _print_next_block(par_num++, "XPB"); 4330 + break; 4331 + case AEU_INPUTS_ATTN_BITS_PBCLIENT_PARITY_ERROR: 4332 + _print_next_block((*par_num)++, "XPB"); 4355 4333 _print_parity(bp, GRCBASE_XPB + 4356 4334 PB_REG_PB_PRTY_STS); 4335 + break; 4357 4336 } 4358 - break; 4359 4337 } 4360 4338 4361 4339 /* Clear the bit */ ··· 4358 4346 } 4359 4347 } 4360 4348 4361 - return par_num; 4349 + return res; 4362 4350 } 4363 4351 4364 - static int bnx2x_check_blocks_with_parity1(struct bnx2x *bp, u32 sig, 4365 - int par_num, bool *global, 4352 + static bool bnx2x_check_blocks_with_parity1(struct bnx2x *bp, u32 sig, 4353 + int *par_num, bool *global, 4366 4354 bool print) 4367 4355 { 4368 - int i = 0; 4369 - u32 cur_bit = 0; 4356 + u32 cur_bit; 4357 + bool res; 4358 + int i; 4359 + 4360 + res = false; 4361 + 4370 4362 for (i = 0; sig; i++) { 4371 - cur_bit = ((u32)0x1 << i); 4363 + cur_bit = (0x1UL << i); 4372 4364 if (sig & cur_bit) { 4365 + res |= true; /* Each bit is real error! 
*/ 4373 4366 switch (cur_bit) { 4374 4367 case AEU_INPUTS_ATTN_BITS_PBF_PARITY_ERROR: 4375 4368 if (print) { 4376 - _print_next_block(par_num++, "PBF"); 4369 + _print_next_block((*par_num)++, "PBF"); 4377 4370 _print_parity(bp, PBF_REG_PBF_PRTY_STS); 4378 4371 } 4379 4372 break; 4380 4373 case AEU_INPUTS_ATTN_BITS_QM_PARITY_ERROR: 4381 4374 if (print) { 4382 - _print_next_block(par_num++, "QM"); 4375 + _print_next_block((*par_num)++, "QM"); 4383 4376 _print_parity(bp, QM_REG_QM_PRTY_STS); 4384 4377 } 4385 4378 break; 4386 4379 case AEU_INPUTS_ATTN_BITS_TIMERS_PARITY_ERROR: 4387 4380 if (print) { 4388 - _print_next_block(par_num++, "TM"); 4381 + _print_next_block((*par_num)++, "TM"); 4389 4382 _print_parity(bp, TM_REG_TM_PRTY_STS); 4390 4383 } 4391 4384 break; 4392 4385 case AEU_INPUTS_ATTN_BITS_XSDM_PARITY_ERROR: 4393 4386 if (print) { 4394 - _print_next_block(par_num++, "XSDM"); 4387 + _print_next_block((*par_num)++, "XSDM"); 4395 4388 _print_parity(bp, 4396 4389 XSDM_REG_XSDM_PRTY_STS); 4397 4390 } 4398 4391 break; 4399 4392 case AEU_INPUTS_ATTN_BITS_XCM_PARITY_ERROR: 4400 4393 if (print) { 4401 - _print_next_block(par_num++, "XCM"); 4394 + _print_next_block((*par_num)++, "XCM"); 4402 4395 _print_parity(bp, XCM_REG_XCM_PRTY_STS); 4403 4396 } 4404 4397 break; 4405 4398 case AEU_INPUTS_ATTN_BITS_XSEMI_PARITY_ERROR: 4406 4399 if (print) { 4407 - _print_next_block(par_num++, "XSEMI"); 4400 + _print_next_block((*par_num)++, 4401 + "XSEMI"); 4408 4402 _print_parity(bp, 4409 4403 XSEM_REG_XSEM_PRTY_STS_0); 4410 4404 _print_parity(bp, ··· 4419 4401 break; 4420 4402 case AEU_INPUTS_ATTN_BITS_DOORBELLQ_PARITY_ERROR: 4421 4403 if (print) { 4422 - _print_next_block(par_num++, 4404 + _print_next_block((*par_num)++, 4423 4405 "DOORBELLQ"); 4424 4406 _print_parity(bp, 4425 4407 DORQ_REG_DORQ_PRTY_STS); ··· 4427 4409 break; 4428 4410 case AEU_INPUTS_ATTN_BITS_NIG_PARITY_ERROR: 4429 4411 if (print) { 4430 - _print_next_block(par_num++, "NIG"); 4412 + 
_print_next_block((*par_num)++, "NIG"); 4431 4413 if (CHIP_IS_E1x(bp)) { 4432 4414 _print_parity(bp, 4433 4415 NIG_REG_NIG_PRTY_STS); ··· 4441 4423 break; 4442 4424 case AEU_INPUTS_ATTN_BITS_VAUX_PCI_CORE_PARITY_ERROR: 4443 4425 if (print) 4444 - _print_next_block(par_num++, 4426 + _print_next_block((*par_num)++, 4445 4427 "VAUX PCI CORE"); 4446 4428 *global = true; 4447 4429 break; 4448 4430 case AEU_INPUTS_ATTN_BITS_DEBUG_PARITY_ERROR: 4449 4431 if (print) { 4450 - _print_next_block(par_num++, "DEBUG"); 4432 + _print_next_block((*par_num)++, 4433 + "DEBUG"); 4451 4434 _print_parity(bp, DBG_REG_DBG_PRTY_STS); 4452 4435 } 4453 4436 break; 4454 4437 case AEU_INPUTS_ATTN_BITS_USDM_PARITY_ERROR: 4455 4438 if (print) { 4456 - _print_next_block(par_num++, "USDM"); 4439 + _print_next_block((*par_num)++, "USDM"); 4457 4440 _print_parity(bp, 4458 4441 USDM_REG_USDM_PRTY_STS); 4459 4442 } 4460 4443 break; 4461 4444 case AEU_INPUTS_ATTN_BITS_UCM_PARITY_ERROR: 4462 4445 if (print) { 4463 - _print_next_block(par_num++, "UCM"); 4446 + _print_next_block((*par_num)++, "UCM"); 4464 4447 _print_parity(bp, UCM_REG_UCM_PRTY_STS); 4465 4448 } 4466 4449 break; 4467 4450 case AEU_INPUTS_ATTN_BITS_USEMI_PARITY_ERROR: 4468 4451 if (print) { 4469 - _print_next_block(par_num++, "USEMI"); 4452 + _print_next_block((*par_num)++, 4453 + "USEMI"); 4470 4454 _print_parity(bp, 4471 4455 USEM_REG_USEM_PRTY_STS_0); 4472 4456 _print_parity(bp, ··· 4477 4457 break; 4478 4458 case AEU_INPUTS_ATTN_BITS_UPB_PARITY_ERROR: 4479 4459 if (print) { 4480 - _print_next_block(par_num++, "UPB"); 4460 + _print_next_block((*par_num)++, "UPB"); 4481 4461 _print_parity(bp, GRCBASE_UPB + 4482 4462 PB_REG_PB_PRTY_STS); 4483 4463 } 4484 4464 break; 4485 4465 case AEU_INPUTS_ATTN_BITS_CSDM_PARITY_ERROR: 4486 4466 if (print) { 4487 - _print_next_block(par_num++, "CSDM"); 4467 + _print_next_block((*par_num)++, "CSDM"); 4488 4468 _print_parity(bp, 4489 4469 CSDM_REG_CSDM_PRTY_STS); 4490 4470 } 4491 4471 break; 4492 4472 
case AEU_INPUTS_ATTN_BITS_CCM_PARITY_ERROR: 4493 4473 if (print) { 4494 - _print_next_block(par_num++, "CCM"); 4474 + _print_next_block((*par_num)++, "CCM"); 4495 4475 _print_parity(bp, CCM_REG_CCM_PRTY_STS); 4496 4476 } 4497 4477 break; ··· 4502 4482 } 4503 4483 } 4504 4484 4505 - return par_num; 4485 + return res; 4506 4486 } 4507 4487 4508 - static int bnx2x_check_blocks_with_parity2(struct bnx2x *bp, u32 sig, 4509 - int par_num, bool print) 4488 + static bool bnx2x_check_blocks_with_parity2(struct bnx2x *bp, u32 sig, 4489 + int *par_num, bool print) 4510 4490 { 4511 - int i = 0; 4512 - u32 cur_bit = 0; 4491 + u32 cur_bit; 4492 + bool res; 4493 + int i; 4494 + 4495 + res = false; 4496 + 4513 4497 for (i = 0; sig; i++) { 4514 - cur_bit = ((u32)0x1 << i); 4498 + cur_bit = (0x1UL << i); 4515 4499 if (sig & cur_bit) { 4516 - switch (cur_bit) { 4517 - case AEU_INPUTS_ATTN_BITS_CSEMI_PARITY_ERROR: 4518 - if (print) { 4519 - _print_next_block(par_num++, "CSEMI"); 4500 + res |= true; /* Each bit is real error! 
*/ 4501 + if (print) { 4502 + switch (cur_bit) { 4503 + case AEU_INPUTS_ATTN_BITS_CSEMI_PARITY_ERROR: 4504 + _print_next_block((*par_num)++, 4505 + "CSEMI"); 4520 4506 _print_parity(bp, 4521 4507 CSEM_REG_CSEM_PRTY_STS_0); 4522 4508 _print_parity(bp, 4523 4509 CSEM_REG_CSEM_PRTY_STS_1); 4524 - } 4525 - break; 4526 - case AEU_INPUTS_ATTN_BITS_PXP_PARITY_ERROR: 4527 - if (print) { 4528 - _print_next_block(par_num++, "PXP"); 4510 + break; 4511 + case AEU_INPUTS_ATTN_BITS_PXP_PARITY_ERROR: 4512 + _print_next_block((*par_num)++, "PXP"); 4529 4513 _print_parity(bp, PXP_REG_PXP_PRTY_STS); 4530 4514 _print_parity(bp, 4531 4515 PXP2_REG_PXP2_PRTY_STS_0); 4532 4516 _print_parity(bp, 4533 4517 PXP2_REG_PXP2_PRTY_STS_1); 4534 - } 4535 - break; 4536 - case AEU_IN_ATTN_BITS_PXPPCICLOCKCLIENT_PARITY_ERROR: 4537 - if (print) 4538 - _print_next_block(par_num++, 4539 - "PXPPCICLOCKCLIENT"); 4540 - break; 4541 - case AEU_INPUTS_ATTN_BITS_CFC_PARITY_ERROR: 4542 - if (print) { 4543 - _print_next_block(par_num++, "CFC"); 4518 + break; 4519 + case AEU_IN_ATTN_BITS_PXPPCICLOCKCLIENT_PARITY_ERROR: 4520 + _print_next_block((*par_num)++, 4521 + "PXPPCICLOCKCLIENT"); 4522 + break; 4523 + case AEU_INPUTS_ATTN_BITS_CFC_PARITY_ERROR: 4524 + _print_next_block((*par_num)++, "CFC"); 4544 4525 _print_parity(bp, 4545 4526 CFC_REG_CFC_PRTY_STS); 4546 - } 4547 - break; 4548 - case AEU_INPUTS_ATTN_BITS_CDU_PARITY_ERROR: 4549 - if (print) { 4550 - _print_next_block(par_num++, "CDU"); 4527 + break; 4528 + case AEU_INPUTS_ATTN_BITS_CDU_PARITY_ERROR: 4529 + _print_next_block((*par_num)++, "CDU"); 4551 4530 _print_parity(bp, CDU_REG_CDU_PRTY_STS); 4552 - } 4553 - break; 4554 - case AEU_INPUTS_ATTN_BITS_DMAE_PARITY_ERROR: 4555 - if (print) { 4556 - _print_next_block(par_num++, "DMAE"); 4531 + break; 4532 + case AEU_INPUTS_ATTN_BITS_DMAE_PARITY_ERROR: 4533 + _print_next_block((*par_num)++, "DMAE"); 4557 4534 _print_parity(bp, 4558 4535 DMAE_REG_DMAE_PRTY_STS); 4559 - } 4560 - break; 4561 - case 
AEU_INPUTS_ATTN_BITS_IGU_PARITY_ERROR: 4562 - if (print) { 4563 - _print_next_block(par_num++, "IGU"); 4536 + break; 4537 + case AEU_INPUTS_ATTN_BITS_IGU_PARITY_ERROR: 4538 + _print_next_block((*par_num)++, "IGU"); 4564 4539 if (CHIP_IS_E1x(bp)) 4565 4540 _print_parity(bp, 4566 4541 HC_REG_HC_PRTY_STS); 4567 4542 else 4568 4543 _print_parity(bp, 4569 4544 IGU_REG_IGU_PRTY_STS); 4570 - } 4571 - break; 4572 - case AEU_INPUTS_ATTN_BITS_MISC_PARITY_ERROR: 4573 - if (print) { 4574 - _print_next_block(par_num++, "MISC"); 4545 + break; 4546 + case AEU_INPUTS_ATTN_BITS_MISC_PARITY_ERROR: 4547 + _print_next_block((*par_num)++, "MISC"); 4575 4548 _print_parity(bp, 4576 4549 MISC_REG_MISC_PRTY_STS); 4550 + break; 4577 4551 } 4578 - break; 4579 4552 } 4580 4553 4581 4554 /* Clear the bit */ ··· 4576 4563 } 4577 4564 } 4578 4565 4579 - return par_num; 4566 + return res; 4580 4567 } 4581 4568 4582 - static int bnx2x_check_blocks_with_parity3(u32 sig, int par_num, 4583 - bool *global, bool print) 4569 + static bool bnx2x_check_blocks_with_parity3(struct bnx2x *bp, u32 sig, 4570 + int *par_num, bool *global, 4571 + bool print) 4584 4572 { 4585 - int i = 0; 4586 - u32 cur_bit = 0; 4573 + bool res = false; 4574 + u32 cur_bit; 4575 + int i; 4576 + 4587 4577 for (i = 0; sig; i++) { 4588 - cur_bit = ((u32)0x1 << i); 4578 + cur_bit = (0x1UL << i); 4589 4579 if (sig & cur_bit) { 4590 4580 switch (cur_bit) { 4591 4581 case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_ROM_PARITY: 4592 4582 if (print) 4593 - _print_next_block(par_num++, "MCP ROM"); 4583 + _print_next_block((*par_num)++, 4584 + "MCP ROM"); 4594 4585 *global = true; 4586 + res |= true; 4595 4587 break; 4596 4588 case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_RX_PARITY: 4597 4589 if (print) 4598 - _print_next_block(par_num++, 4590 + _print_next_block((*par_num)++, 4599 4591 "MCP UMP RX"); 4600 4592 *global = true; 4593 + res |= true; 4601 4594 break; 4602 4595 case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_TX_PARITY: 4603 4596 if (print) 4604 - 
_print_next_block(par_num++, 4597 + _print_next_block((*par_num)++, 4605 4598 "MCP UMP TX"); 4606 4599 *global = true; 4600 + res |= true; 4607 4601 break; 4608 4602 case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY: 4609 4603 if (print) 4610 - _print_next_block(par_num++, 4604 + _print_next_block((*par_num)++, 4611 4605 "MCP SCPAD"); 4612 - *global = true; 4606 + /* clear latched SCPAD PATIRY from MCP */ 4607 + REG_WR(bp, MISC_REG_AEU_CLR_LATCH_SIGNAL, 4608 + 1UL << 10); 4613 4609 break; 4614 4610 } 4615 4611 ··· 4627 4605 } 4628 4606 } 4629 4607 4630 - return par_num; 4608 + return res; 4631 4609 } 4632 4610 4633 - static int bnx2x_check_blocks_with_parity4(struct bnx2x *bp, u32 sig, 4634 - int par_num, bool print) 4611 + static bool bnx2x_check_blocks_with_parity4(struct bnx2x *bp, u32 sig, 4612 + int *par_num, bool print) 4635 4613 { 4636 - int i = 0; 4637 - u32 cur_bit = 0; 4614 + u32 cur_bit; 4615 + bool res; 4616 + int i; 4617 + 4618 + res = false; 4619 + 4638 4620 for (i = 0; sig; i++) { 4639 - cur_bit = ((u32)0x1 << i); 4621 + cur_bit = (0x1UL << i); 4640 4622 if (sig & cur_bit) { 4641 - switch (cur_bit) { 4642 - case AEU_INPUTS_ATTN_BITS_PGLUE_PARITY_ERROR: 4643 - if (print) { 4644 - _print_next_block(par_num++, "PGLUE_B"); 4623 + res |= true; /* Each bit is real error! 
*/ 4624 + if (print) { 4625 + switch (cur_bit) { 4626 + case AEU_INPUTS_ATTN_BITS_PGLUE_PARITY_ERROR: 4627 + _print_next_block((*par_num)++, 4628 + "PGLUE_B"); 4645 4629 _print_parity(bp, 4646 - PGLUE_B_REG_PGLUE_B_PRTY_STS); 4647 - } 4648 - break; 4649 - case AEU_INPUTS_ATTN_BITS_ATC_PARITY_ERROR: 4650 - if (print) { 4651 - _print_next_block(par_num++, "ATC"); 4630 + PGLUE_B_REG_PGLUE_B_PRTY_STS); 4631 + break; 4632 + case AEU_INPUTS_ATTN_BITS_ATC_PARITY_ERROR: 4633 + _print_next_block((*par_num)++, "ATC"); 4652 4634 _print_parity(bp, 4653 4635 ATC_REG_ATC_PRTY_STS); 4636 + break; 4654 4637 } 4655 - break; 4656 4638 } 4657 - 4658 4639 /* Clear the bit */ 4659 4640 sig &= ~cur_bit; 4660 4641 } 4661 4642 } 4662 4643 4663 - return par_num; 4644 + return res; 4664 4645 } 4665 4646 4666 4647 static bool bnx2x_parity_attn(struct bnx2x *bp, bool *global, bool print, 4667 4648 u32 *sig) 4668 4649 { 4650 + bool res = false; 4651 + 4669 4652 if ((sig[0] & HW_PRTY_ASSERT_SET_0) || 4670 4653 (sig[1] & HW_PRTY_ASSERT_SET_1) || 4671 4654 (sig[2] & HW_PRTY_ASSERT_SET_2) || ··· 4687 4660 if (print) 4688 4661 netdev_err(bp->dev, 4689 4662 "Parity errors detected in blocks: "); 4690 - par_num = bnx2x_check_blocks_with_parity0(bp, 4691 - sig[0] & HW_PRTY_ASSERT_SET_0, par_num, print); 4692 - par_num = bnx2x_check_blocks_with_parity1(bp, 4693 - sig[1] & HW_PRTY_ASSERT_SET_1, par_num, global, print); 4694 - par_num = bnx2x_check_blocks_with_parity2(bp, 4695 - sig[2] & HW_PRTY_ASSERT_SET_2, par_num, print); 4696 - par_num = bnx2x_check_blocks_with_parity3( 4697 - sig[3] & HW_PRTY_ASSERT_SET_3, par_num, global, print); 4698 - par_num = bnx2x_check_blocks_with_parity4(bp, 4699 - sig[4] & HW_PRTY_ASSERT_SET_4, par_num, print); 4663 + res |= bnx2x_check_blocks_with_parity0(bp, 4664 + sig[0] & HW_PRTY_ASSERT_SET_0, &par_num, print); 4665 + res |= bnx2x_check_blocks_with_parity1(bp, 4666 + sig[1] & HW_PRTY_ASSERT_SET_1, &par_num, global, print); 4667 + res |= 
bnx2x_check_blocks_with_parity2(bp, 4668 + sig[2] & HW_PRTY_ASSERT_SET_2, &par_num, print); 4669 + res |= bnx2x_check_blocks_with_parity3(bp, 4670 + sig[3] & HW_PRTY_ASSERT_SET_3, &par_num, global, print); 4671 + res |= bnx2x_check_blocks_with_parity4(bp, 4672 + sig[4] & HW_PRTY_ASSERT_SET_4, &par_num, print); 4700 4673 4701 4674 if (print) 4702 4675 pr_cont("\n"); 4676 + } 4703 4677 4704 - return true; 4705 - } else 4706 - return false; 4678 + return res; 4707 4679 } 4708 4680 4709 4681 /** ··· 7152 7126 int port = BP_PORT(bp); 7153 7127 int init_phase = port ? PHASE_PORT1 : PHASE_PORT0; 7154 7128 u32 low, high; 7155 - u32 val; 7129 + u32 val, reg; 7156 7130 7157 7131 DP(NETIF_MSG_HW, "starting port init port %d\n", port); 7158 7132 ··· 7296 7270 /* Enable DCBX attention for all but E1 */ 7297 7271 val |= CHIP_IS_E1(bp) ? 0 : 0x10; 7298 7272 REG_WR(bp, MISC_REG_AEU_MASK_ATTN_FUNC_0 + port*4, val); 7273 + 7274 + /* SCPAD_PARITY should NOT trigger close the gates */ 7275 + reg = port ? MISC_REG_AEU_ENABLE4_NIG_1 : MISC_REG_AEU_ENABLE4_NIG_0; 7276 + REG_WR(bp, reg, 7277 + REG_RD(bp, reg) & 7278 + ~AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY); 7279 + 7280 + reg = port ? MISC_REG_AEU_ENABLE4_PXP_1 : MISC_REG_AEU_ENABLE4_PXP_0; 7281 + REG_WR(bp, reg, 7282 + REG_RD(bp, reg) & 7283 + ~AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY); 7299 7284 7300 7285 bnx2x_init_block(bp, BLOCK_NIG, init_phase); 7301 7286 ··· 11730 11693 static int bnx2x_open(struct net_device *dev) 11731 11694 { 11732 11695 struct bnx2x *bp = netdev_priv(dev); 11733 - bool global = false; 11734 - int other_engine = BP_PATH(bp) ? 0 : 1; 11735 - bool other_load_status, load_status; 11736 11696 int rc; 11737 11697 11738 11698 bp->stats_init = true; ··· 11745 11711 * Parity recovery is only relevant for PF driver. 11746 11712 */ 11747 11713 if (IS_PF(bp)) { 11714 + int other_engine = BP_PATH(bp) ? 
0 : 1; 11715 + bool other_load_status, load_status; 11716 + bool global = false; 11717 + 11748 11718 other_load_status = bnx2x_get_load_status(bp, other_engine); 11749 11719 load_status = bnx2x_get_load_status(bp, BP_PATH(bp)); 11750 11720 if (!bnx2x_reset_is_done(bp, BP_PATH(bp)) || ··· 12141 12103 struct device *dev = &bp->pdev->dev; 12142 12104 12143 12105 if (dma_set_mask(dev, DMA_BIT_MASK(64)) == 0) { 12144 - bp->flags |= USING_DAC_FLAG; 12145 12106 if (dma_set_coherent_mask(dev, DMA_BIT_MASK(64)) != 0) { 12146 12107 dev_err(dev, "dma_set_coherent_mask failed, aborting\n"); 12147 12108 return -EIO; ··· 12311 12274 NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 | NETIF_F_HIGHDMA; 12312 12275 12313 12276 dev->features |= dev->hw_features | NETIF_F_HW_VLAN_CTAG_RX; 12314 - if (bp->flags & USING_DAC_FLAG) 12315 - dev->features |= NETIF_F_HIGHDMA; 12277 + dev->features |= NETIF_F_HIGHDMA; 12316 12278 12317 12279 /* Add Loopback capability to the device */ 12318 12280 dev->hw_features |= NETIF_F_LOOPBACK; ··· 12651 12615 return BNX2X_MULTI_TX_COS_E1X; 12652 12616 case BCM57712: 12653 12617 case BCM57712_MF: 12654 - case BCM57712_VF: 12655 12618 return BNX2X_MULTI_TX_COS_E2_E3A0; 12656 12619 case BCM57800: 12657 12620 case BCM57800_MF: 12658 - case BCM57800_VF: 12659 12621 case BCM57810: 12660 12622 case BCM57810_MF: 12661 12623 case BCM57840_4_10: 12662 12624 case BCM57840_2_20: 12663 12625 case BCM57840_O: 12664 12626 case BCM57840_MFO: 12665 - case BCM57810_VF: 12666 12627 case BCM57840_MF: 12667 - case BCM57840_VF: 12668 12628 case BCM57811: 12669 12629 case BCM57811_MF: 12670 - case BCM57811_VF: 12671 12630 return BNX2X_MULTI_TX_COS_E3B0; 12631 + case BCM57712_VF: 12632 + case BCM57800_VF: 12633 + case BCM57810_VF: 12634 + case BCM57840_VF: 12635 + case BCM57811_VF: 12672 12636 return 1; 12673 12637 default: 12674 12638 pr_err("Unknown board_type (%d), aborting\n", chip_id);
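The bnx2x_main.c parity checkers above all move from returning the running block index to returning a `bool` ("was any real error seen?") while threading the index through an `int *par_num`. Detached from the driver, the calling convention looks like this sketch (the status word is arbitrary test data, not real AEU bits):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Walk a status word bit by bit, count set bits through *par_num, and
 * report via the return value whether anything was set at all -- the
 * shape the patch moves bnx2x_check_blocks_with_parity0..4 to. */
static bool check_bits(uint32_t sig, int *par_num)
{
	bool res = false;
	uint32_t cur_bit;
	int i;

	for (i = 0; sig; i++) {
		cur_bit = 0x1UL << i;
		if (sig & cur_bit) {
			res = true;       /* each set bit is a real error  */
			(*par_num)++;     /* running index across callers  */
			sig &= ~cur_bit;  /* clear it so the loop can end  */
		}
	}
	return res;
}
```

The caller then ORs the results together (`res |= check_bits(...)`) while `par_num` keeps a single running count across all groups, exactly as `bnx2x_parity_attn()` does after the patch.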
+17 -12
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
··· 470 470 bnx2x_vfop_qdtor, cmd->done); 471 471 return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_qdtor, 472 472 cmd->block); 473 + } else { 474 + BNX2X_ERR("VF[%d] failed to add a vfop\n", vf->abs_vfid); 475 + return -ENOMEM; 473 476 } 474 - DP(BNX2X_MSG_IOV, "VF[%d] failed to add a vfop. rc %d\n", 475 - vf->abs_vfid, vfop->rc); 476 - return -ENOMEM; 477 477 } 478 478 479 479 static void ··· 3391 3391 rc = bnx2x_del_all_macs(bp, mac_obj, BNX2X_ETH_MAC, true); 3392 3392 if (rc) { 3393 3393 BNX2X_ERR("failed to delete eth macs\n"); 3394 - return -EINVAL; 3394 + rc = -EINVAL; 3395 + goto out; 3395 3396 } 3396 3397 3397 3398 /* remove existing uc list macs */ 3398 3399 rc = bnx2x_del_all_macs(bp, mac_obj, BNX2X_UC_LIST_MAC, true); 3399 3400 if (rc) { 3400 3401 BNX2X_ERR("failed to delete uc_list macs\n"); 3401 - return -EINVAL; 3402 + rc = -EINVAL; 3403 + goto out; 3402 3404 } 3403 3405 3404 3406 /* configure the new mac to device */ ··· 3408 3406 bnx2x_set_mac_one(bp, (u8 *)&bulletin->mac, mac_obj, true, 3409 3407 BNX2X_ETH_MAC, &ramrod_flags); 3410 3408 3409 + out: 3411 3410 bnx2x_unlock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_MAC); 3412 3411 } 3413 3412 ··· 3471 3468 &ramrod_flags); 3472 3469 if (rc) { 3473 3470 BNX2X_ERR("failed to delete vlans\n"); 3474 - return -EINVAL; 3471 + rc = -EINVAL; 3472 + goto out; 3475 3473 } 3476 3474 3477 3475 /* send queue update ramrod to configure default vlan and silent ··· 3506 3502 rc = bnx2x_config_vlan_mac(bp, &ramrod_param); 3507 3503 if (rc) { 3508 3504 BNX2X_ERR("failed to configure vlan\n"); 3509 - return -EINVAL; 3505 + rc = -EINVAL; 3506 + goto out; 3510 3507 } 3511 3508 3512 3509 /* configure default vlan to vf queue and set silent ··· 3525 3520 rc = bnx2x_queue_state_change(bp, &q_params); 3526 3521 if (rc) { 3527 3522 BNX2X_ERR("Failed to configure default VLAN\n"); 3528 - return rc; 3523 + goto out; 3529 3524 } 3530 3525 3531 3526 /* clear the flag indicating that this VF needs its vlan 3532 - * (will only be set 
if the HV configured th Vlan before vf was 3533 - * and we were called because the VF came up later 3527 + * (will only be set if the HV configured the Vlan before vf was 3528 + * up and we were called because the VF came up later 3534 3529 */ 3530 + out: 3535 3531 vf->cfg_flags &= ~VF_CFG_VLAN; 3536 - 3537 3532 bnx2x_unlock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_VLAN); 3538 3533 } 3539 - return 0; 3534 + return rc; 3540 3535 } 3541 3536 3542 3537 /* crc is the first field in the bulletin board. Compute the crc over the
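The bnx2x_sriov.c hunks above convert early `return`s into `rc = ...; goto out;` so the VF/PF channel lock taken at the top of the function is always released. A reduced sketch of that single-exit shape, with a mock lock in place of `bnx2x_lock_vf_pf_channel()`:

```c
#include <assert.h>
#include <stdbool.h>

static int lock_depth;                 /* mock channel lock (hypothetical) */
static void chan_lock(void)   { lock_depth++; }
static void chan_unlock(void) { lock_depth--; }

/* Every failure path jumps to `out`, so the unlock below the label
 * runs no matter which step failed -- the bug the hunks fix was
 * returning early with the channel still locked. */
static int configure(bool step1_ok, bool step2_ok)
{
	int rc = 0;

	chan_lock();
	if (!step1_ok) {
		rc = -22;              /* -EINVAL stand-in */
		goto out;
	}
	if (!step2_ok) {
		rc = -22;
		goto out;
	}
out:
	chan_unlock();
	return rc;
}
```
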
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
··· 196 196 197 197 } else if (bp->func_stx) { 198 198 *stats_comp = 0; 199 - bnx2x_post_dmae(bp, dmae, INIT_DMAE_C(bp)); 199 + bnx2x_issue_dmae_with_comp(bp, dmae, stats_comp); 200 200 } 201 201 } 202 202
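The stats.c one-liner above works because `bnx2x_issue_dmae_with_comp()` now takes a caller-supplied completion word (`u32 *comp`), so the statistics code can wait on its own `stats_comp` instead of the shared `wb_comp`. The polling contract can be sketched like this; the completion values, error codes, and `mock_hw_tick()` timing are all stand-ins:

```c
#include <assert.h>
#include <stdint.h>

#define COMP_VAL 0xD0AE0000u   /* stand-in for DMAE_COMP_VAL     */
#define ERR_FLAG 0x1u          /* stand-in for DMAE_PCI_ERR_FLAG */

static int ticks_until_done;   /* stands in for the DMA engine */

static void mock_hw_tick(uint32_t *comp)
{
	if (ticks_until_done > 0 && --ticks_until_done == 0)
		*comp = COMP_VAL;
}

/* Reset the caller's completion word, then poll it until the done
 * value appears or the retry budget runs out. */
static int issue_with_comp(uint32_t *comp, int budget)
{
	*comp = 0;                           /* reset completion      */
	/* (the real code posts the DMAE command here) */
	while ((*comp & ~ERR_FLAG) != COMP_VAL) {
		if (budget-- == 0)
			return -110;         /* -ETIMEDOUT stand-in   */
		mock_hw_tick(comp);          /* driver: udelay() wait */
	}
	if (*comp & ERR_FLAG)
		return -5;                   /* PCI-error stand-in    */
	return 0;
}
```
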
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
··· 1020 1020 dmae.len = len32; 1021 1021 1022 1022 /* issue the command and wait for completion */ 1023 - return bnx2x_issue_dmae_with_comp(bp, &dmae); 1023 + return bnx2x_issue_dmae_with_comp(bp, &dmae, bnx2x_sp(bp, wb_comp)); 1024 1024 } 1025 1025 1026 1026 static void bnx2x_vf_mbx_resp_single_tlv(struct bnx2x *bp,
+38 -18
drivers/net/ethernet/davicom/dm9000.c
··· 158 158 159 159 /* DM9000 network board routine ---------------------------- */ 160 160 161 - static void 162 - dm9000_reset(board_info_t * db) 163 - { 164 - dev_dbg(db->dev, "resetting device\n"); 165 - 166 - /* RESET device */ 167 - writeb(DM9000_NCR, db->io_addr); 168 - udelay(200); 169 - writeb(NCR_RST, db->io_data); 170 - udelay(200); 171 - } 172 - 173 161 /* 174 162 * Read a byte from I/O port 175 163 */ ··· 177 189 { 178 190 writeb(reg, db->io_addr); 179 191 writeb(value, db->io_data); 192 + } 193 + 194 + static void 195 + dm9000_reset(board_info_t *db) 196 + { 197 + dev_dbg(db->dev, "resetting device\n"); 198 + 199 + /* Reset DM9000, see DM9000 Application Notes V1.22 Jun 11, 2004 page 29 200 + * The essential point is that we have to do a double reset, and the 201 + * instruction is to set LBK into MAC internal loopback mode. 202 + */ 203 + iow(db, DM9000_NCR, 0x03); 204 + udelay(100); /* Application note says at least 20 us */ 205 + if (ior(db, DM9000_NCR) & 1) 206 + dev_err(db->dev, "dm9000 did not respond to first reset\n"); 207 + 208 + iow(db, DM9000_NCR, 0); 209 + iow(db, DM9000_NCR, 0x03); 210 + udelay(100); 211 + if (ior(db, DM9000_NCR) & 1) 212 + dev_err(db->dev, "dm9000 did not respond to second reset\n"); 180 213 } 181 214 182 215 /* routines for sending block to chip */ ··· 753 744 static void dm9000_show_carrier(board_info_t *db, 754 745 unsigned carrier, unsigned nsr) 755 746 { 747 + int lpa; 756 748 struct net_device *ndev = db->ndev; 749 + struct mii_if_info *mii = &db->mii; 757 750 unsigned ncr = dm9000_read_locked(db, DM9000_NCR); 758 751 759 - if (carrier) 760 - dev_info(db->dev, "%s: link up, %dMbps, %s-duplex, no LPA\n", 752 + if (carrier) { 753 + lpa = mii->mdio_read(mii->dev, mii->phy_id, MII_LPA); 754 + dev_info(db->dev, 755 + "%s: link up, %dMbps, %s-duplex, lpa 0x%04X\n", 761 756 ndev->name, (nsr & NSR_SPEED) ? 10 : 100, 762 - (ncr & NCR_FDX) ? "full" : "half"); 763 - else 757 + (ncr & NCR_FDX) ? 
"full" : "half", lpa); 758 + } else { 764 759 dev_info(db->dev, "%s: link down\n", ndev->name); 760 + } 765 761 } 766 762 767 763 static void ··· 904 890 (dev->features & NETIF_F_RXCSUM) ? RCSR_CSUM : 0); 905 891 906 892 iow(db, DM9000_GPCR, GPCR_GEP_CNTL); /* Let GPIO0 output */ 893 + iow(db, DM9000_GPR, 0); 907 894 908 - dm9000_phy_write(dev, 0, MII_BMCR, BMCR_RESET); /* PHY RESET */ 909 - dm9000_phy_write(dev, 0, MII_DM_DSPCR, DSPCR_INIT_PARAM); /* Init */ 895 + /* If we are dealing with DM9000B, some extra steps are required: a 896 + * manual phy reset, and setting init params. 897 + */ 898 + if (db->type == TYPE_DM9000B) { 899 + dm9000_phy_write(dev, 0, MII_BMCR, BMCR_RESET); 900 + dm9000_phy_write(dev, 0, MII_DM_DSPCR, DSPCR_INIT_PARAM); 901 + } 910 902 911 903 ncr = (db->flags & DM9000_PLATF_EXT_PHY) ? NCR_EXT_PHY : 0; 912 904
+29 -11
drivers/net/ethernet/freescale/gianfar.c
··· 88 88 89 89 #include <asm/io.h> 90 90 #include <asm/reg.h> 91 + #include <asm/mpc85xx.h> 91 92 #include <asm/irq.h> 92 93 #include <asm/uaccess.h> 93 94 #include <linux/module.h> ··· 940 939 } 941 940 } 942 941 943 - static void gfar_detect_errata(struct gfar_private *priv) 942 + static void __gfar_detect_errata_83xx(struct gfar_private *priv) 944 943 { 945 - struct device *dev = &priv->ofdev->dev; 946 944 unsigned int pvr = mfspr(SPRN_PVR); 947 945 unsigned int svr = mfspr(SPRN_SVR); 948 946 unsigned int mod = (svr >> 16) & 0xfff6; /* w/o E suffix */ ··· 957 957 (pvr == 0x80861010 && (mod & 0xfff9) == 0x80c0)) 958 958 priv->errata |= GFAR_ERRATA_76; 959 959 960 - /* MPC8313 and MPC837x all rev */ 961 - if ((pvr == 0x80850010 && mod == 0x80b0) || 962 - (pvr == 0x80861010 && (mod & 0xfff9) == 0x80c0)) 963 - priv->errata |= GFAR_ERRATA_A002; 964 - 965 - /* MPC8313 Rev < 2.0, MPC8548 rev 2.0 */ 966 - if ((pvr == 0x80850010 && mod == 0x80b0 && rev < 0x0020) || 967 - (pvr == 0x80210020 && mod == 0x8030 && rev == 0x0020)) 960 + /* MPC8313 Rev < 2.0 */ 961 + if (pvr == 0x80850010 && mod == 0x80b0 && rev < 0x0020) 968 962 priv->errata |= GFAR_ERRATA_12; 963 + } 964 + 965 + static void __gfar_detect_errata_85xx(struct gfar_private *priv) 966 + { 967 + unsigned int svr = mfspr(SPRN_SVR); 968 + 969 + if ((SVR_SOC_VER(svr) == SVR_8548) && (SVR_REV(svr) == 0x20)) 970 + priv->errata |= GFAR_ERRATA_12; 971 + if (((SVR_SOC_VER(svr) == SVR_P2020) && (SVR_REV(svr) < 0x20)) || 972 + ((SVR_SOC_VER(svr) == SVR_P2010) && (SVR_REV(svr) < 0x20))) 973 + priv->errata |= GFAR_ERRATA_76; /* aka eTSEC 20 */ 974 + } 975 + 976 + static void gfar_detect_errata(struct gfar_private *priv) 977 + { 978 + struct device *dev = &priv->ofdev->dev; 979 + 980 + /* no plans to fix */ 981 + priv->errata |= GFAR_ERRATA_A002; 982 + 983 + if (pvr_version_is(PVR_VER_E500V1) || pvr_version_is(PVR_VER_E500V2)) 984 + __gfar_detect_errata_85xx(priv); 985 + else /* non-mpc85xx parts, i.e. 
e300 core based */ 986 + __gfar_detect_errata_83xx(priv); 969 987 970 988 if (priv->errata) 971 989 dev_info(dev, "enabled errata workarounds, flags: 0x%x\n", ··· 1617 1599 /* Normaly TSEC should not hang on GRS commands, so we should 1618 1600 * actually wait for IEVENT_GRSC flag. 1619 1601 */ 1620 - if (likely(!gfar_has_errata(priv, GFAR_ERRATA_A002))) 1602 + if (!gfar_has_errata(priv, GFAR_ERRATA_A002)) 1621 1603 return 0; 1622 1604 1623 1605 /* Read the eTSEC register at offset 0xD1C. If bits 7-14 are
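The gianfar.c hunks above split errata detection by core family and, for mpc85xx, key the workarounds off the SVR's SoC id and revision instead of PVR pattern-matching. The decision logic can be sketched as below; the `SVR_SOC_VER`/`SVR_REV` bit layout and the id constants here are illustrative stand-ins, not the real `asm/mpc85xx.h` definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative SVR split: SoC id in the upper half, revision in the
 * lower half. The real macros in asm/mpc85xx.h may mask differently. */
#define SVR_SOC_VER(svr) (((uint32_t)(svr)) >> 16)
#define SVR_REV(svr)     ((svr) & 0xFFFFu)

#define SVR_8548  0x8031u      /* stand-in id values */
#define SVR_P2020 0x80E2u

#define ERRATA_12 0x1u
#define ERRATA_76 0x2u

/* Same shape as __gfar_detect_errata_85xx(): per-SoC, per-revision. */
static unsigned int detect_errata_85xx(uint32_t svr)
{
	unsigned int errata = 0;

	if (SVR_SOC_VER(svr) == SVR_8548 && SVR_REV(svr) == 0x20)
		errata |= ERRATA_12;
	if (SVR_SOC_VER(svr) == SVR_P2020 && SVR_REV(svr) < 0x20)
		errata |= ERRATA_76;   /* aka eTSEC 20 */
	return errata;
}
```

The design point is that SVR identifies the 85xx part and stepping directly, which is more precise than the PVR/mod/rev masking the old single `gfar_detect_errata()` applied to every core.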
+15 -13
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
··· 180 180 return 0; 181 181 } 182 182 183 - static void calc_block_sig(struct mlx5_cmd_prot_block *block, u8 token) 183 + static void calc_block_sig(struct mlx5_cmd_prot_block *block, u8 token, 184 + int csum) 184 185 { 185 186 block->token = token; 186 - block->ctrl_sig = ~xor8_buf(block->rsvd0, sizeof(*block) - sizeof(block->data) - 2); 187 - block->sig = ~xor8_buf(block, sizeof(*block) - 1); 187 + if (csum) { 188 + block->ctrl_sig = ~xor8_buf(block->rsvd0, sizeof(*block) - 189 + sizeof(block->data) - 2); 190 + block->sig = ~xor8_buf(block, sizeof(*block) - 1); 191 + } 188 192 } 189 193 190 - static void calc_chain_sig(struct mlx5_cmd_msg *msg, u8 token) 194 + static void calc_chain_sig(struct mlx5_cmd_msg *msg, u8 token, int csum) 191 195 { 192 196 struct mlx5_cmd_mailbox *next = msg->next; 193 197 194 198 while (next) { 195 - calc_block_sig(next->buf, token); 199 + calc_block_sig(next->buf, token, csum); 196 200 next = next->next; 197 201 } 198 202 } 199 203 200 - static void set_signature(struct mlx5_cmd_work_ent *ent) 204 + static void set_signature(struct mlx5_cmd_work_ent *ent, int csum) 201 205 { 202 206 ent->lay->sig = ~xor8_buf(ent->lay, sizeof(*ent->lay)); 203 - calc_chain_sig(ent->in, ent->token); 204 - calc_chain_sig(ent->out, ent->token); 207 + calc_chain_sig(ent->in, ent->token, csum); 208 + calc_chain_sig(ent->out, ent->token, csum); 205 209 } 206 210 207 211 static void poll_timeout(struct mlx5_cmd_work_ent *ent) ··· 543 539 lay->type = MLX5_PCI_CMD_XPORT; 544 540 lay->token = ent->token; 545 541 lay->status_own = CMD_OWNER_HW; 546 - if (!cmd->checksum_disabled) 547 - set_signature(ent); 542 + set_signature(ent, !cmd->checksum_disabled); 548 543 dump_command(dev, ent, 1); 549 544 ktime_get_ts(&ent->ts1); 550 545 ··· 776 773 777 774 copy = min_t(int, size, MLX5_CMD_DATA_BLOCK_SIZE); 778 775 block = next->buf; 779 - if (xor8_buf(block, sizeof(*block)) != 0xff) 780 - return -EINVAL; 781 776 782 777 memcpy(to, block->data, copy); 783 778 to += copy; 
··· 1362 1361 goto err_map; 1363 1362 } 1364 1363 1364 + cmd->checksum_disabled = 1; 1365 1365 cmd->max_reg_cmds = (1 << cmd->log_sz) - 1; 1366 1366 cmd->bitmask = (1 << cmd->max_reg_cmds) - 1; 1367 1367 ··· 1512 1510 case MLX5_CMD_STAT_BAD_SYS_STATE_ERR: return -EIO; 1513 1511 case MLX5_CMD_STAT_BAD_RES_ERR: return -EINVAL; 1514 1512 case MLX5_CMD_STAT_RES_BUSY: return -EBUSY; 1515 - case MLX5_CMD_STAT_LIM_ERR: return -EINVAL; 1513 + case MLX5_CMD_STAT_LIM_ERR: return -ENOMEM; 1516 1514 case MLX5_CMD_STAT_BAD_RES_STATE_ERR: return -EINVAL; 1517 1515 case MLX5_CMD_STAT_IX_ERR: return -EINVAL; 1518 1516 case MLX5_CMD_STAT_NO_RES_ERR: return -EAGAIN;
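The mlx5 hunks above thread a `csum` flag through `calc_block_sig()`/`set_signature()`, which build XOR-8 signatures over command mailboxes. `xor8_buf()` itself is not shown in the diff; this sketch assumes it XORs all bytes of the buffer, and shows why a correctly signed block always verifies to 0xff:

```c
#include <stdint.h>
#include <stddef.h>

/* XOR-8 over a buffer, in the style of the driver's xor8_buf() helper
 * (assumed semantics: XOR of every byte). */
static uint8_t xor8_buf(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint8_t sum = 0;

    while (len--)
        sum ^= *p++;
    return sum;
}

/* Sign a block in place: the last byte becomes the complement of the
 * XOR-8 over everything before it. */
static void sign_block(uint8_t *block, size_t len)
{
    block[len - 1] = ~xor8_buf(block, len - 1);
}

/* Verification: XOR-8 over the whole block, signature included, is
 * s ^ ~s == 0xff for any payload. */
static int block_sig_ok(const uint8_t *block, size_t len)
{
    return xor8_buf(block, len) == 0xff;
}
```

This also makes the `verify_block_sig()` change above legible: once checksums can be disabled, the receive path can no longer unconditionally demand that each block XORs to 0xff.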
+3 -1
drivers/net/ethernet/mellanox/mlx5/core/eq.c
··· 366 366 goto err_in; 367 367 } 368 368 369 + snprintf(eq->name, MLX5_MAX_EQ_NAME, "%s@pci:%s", 370 + name, pci_name(dev->pdev)); 369 371 eq->eqn = out.eq_number; 370 372 err = request_irq(table->msix_arr[vecidx].vector, mlx5_msix_handler, 0, 371 - name, eq); 373 + eq->name, eq); 372 374 if (err) 373 375 goto err_eq; 374 376
+5 -16
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 165 165 struct mlx5_cmd_set_hca_cap_mbox_in *set_ctx = NULL; 166 166 struct mlx5_cmd_query_hca_cap_mbox_in query_ctx; 167 167 struct mlx5_cmd_set_hca_cap_mbox_out set_out; 168 - struct mlx5_profile *prof = dev->profile; 169 168 u64 flags; 170 - int csum = 1; 171 169 int err; 172 170 173 171 memset(&query_ctx, 0, sizeof(query_ctx)); ··· 195 197 memcpy(&set_ctx->hca_cap, &query_out->hca_cap, 196 198 sizeof(set_ctx->hca_cap)); 197 199 198 - if (prof->mask & MLX5_PROF_MASK_CMDIF_CSUM) { 199 - csum = !!prof->cmdif_csum; 200 - flags = be64_to_cpu(set_ctx->hca_cap.flags); 201 - if (csum) 202 - flags |= MLX5_DEV_CAP_FLAG_CMDIF_CSUM; 203 - else 204 - flags &= ~MLX5_DEV_CAP_FLAG_CMDIF_CSUM; 205 - 206 - set_ctx->hca_cap.flags = cpu_to_be64(flags); 207 - } 208 - 209 200 if (dev->profile->mask & MLX5_PROF_MASK_QP_SIZE) 210 201 set_ctx->hca_cap.log_max_qp = dev->profile->log_max_qp; 211 202 203 + flags = be64_to_cpu(query_out->hca_cap.flags); 204 + /* disable checksum */ 205 + flags &= ~MLX5_DEV_CAP_FLAG_CMDIF_CSUM; 206 + 207 + set_ctx->hca_cap.flags = cpu_to_be64(flags); 212 208 memset(&set_out, 0, sizeof(set_out)); 213 209 set_ctx->hca_cap.log_uar_page_sz = cpu_to_be16(PAGE_SHIFT - 12); 214 210 set_ctx->hdr.opcode = cpu_to_be16(MLX5_CMD_OP_SET_HCA_CAP); ··· 216 224 err = mlx5_cmd_status_to_err(&set_out.hdr); 217 225 if (err) 218 226 goto query_ex; 219 - 220 - if (!csum) 221 - dev->cmd.checksum_disabled = 1; 222 227 223 228 query_ex: 224 229 kfree(query_out);
+14 -2
drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
··· 90 90 __be64 pas[0]; 91 91 }; 92 92 93 + enum { 94 + MAX_RECLAIM_TIME_MSECS = 5000, 95 + }; 96 + 93 97 static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u16 func_id) 94 98 { 95 99 struct rb_root *root = &dev->priv.page_root; ··· 283 279 int err; 284 280 int i; 285 281 282 + if (nclaimed) 283 + *nclaimed = 0; 284 + 286 285 memset(&in, 0, sizeof(in)); 287 286 outlen = sizeof(*out) + npages * sizeof(out->pas[0]); 288 287 out = mlx5_vzalloc(outlen); ··· 395 388 396 389 int mlx5_reclaim_startup_pages(struct mlx5_core_dev *dev) 397 390 { 398 - unsigned long end = jiffies + msecs_to_jiffies(5000); 391 + unsigned long end = jiffies + msecs_to_jiffies(MAX_RECLAIM_TIME_MSECS); 399 392 struct fw_page *fwp; 400 393 struct rb_node *p; 394 + int nclaimed = 0; 401 395 int err; 402 396 403 397 do { 404 398 p = rb_first(&dev->priv.page_root); 405 399 if (p) { 406 400 fwp = rb_entry(p, struct fw_page, rb_node); 407 - err = reclaim_pages(dev, fwp->func_id, optimal_reclaimed_pages(), NULL); 401 + err = reclaim_pages(dev, fwp->func_id, 402 + optimal_reclaimed_pages(), 403 + &nclaimed); 408 404 if (err) { 409 405 mlx5_core_warn(dev, "failed reclaiming pages (%d)\n", err); 410 406 return err; 411 407 } 408 + if (nclaimed) 409 + end = jiffies + msecs_to_jiffies(MAX_RECLAIM_TIME_MSECS); 412 410 } 413 411 if (time_after(jiffies, end)) { 414 412 mlx5_core_warn(dev, "FW did not return all pages. giving up...\n");
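The pagealloc hunk above changes the startup-page reclaim loop so the 5-second deadline is pushed out whenever a round actually reclaimed pages, giving up only after a full timeout with no progress. A generic sketch of that pattern, using a simple tick counter and an injected step function in place of jiffies and firmware commands:

```c
#include <stdbool.h>
#include <stddef.h>

#define RECLAIM_TIMEOUT_TICKS 5

/* step() returns how many items it reclaimed this round, or -1 on error. */
typedef int (*reclaim_step_fn)(void *ctx);

/* Reclaim until `total` items are back, extending the deadline on every
 * round that made progress, as mlx5_reclaim_startup_pages() now does. */
static bool reclaim_all(reclaim_step_fn step, void *ctx, int total)
{
    int now = 0, deadline = RECLAIM_TIMEOUT_TICKS, done = 0;

    while (done < total) {
        int n = step(ctx);

        if (n < 0)
            return false;               /* hard error from the "firmware" */
        done += n;
        if (n > 0)                      /* progress: push the deadline out */
            deadline = now + RECLAIM_TIMEOUT_TICKS;
        if (++now > deadline)           /* stalled for a full timeout */
            return false;
    }
    return true;
}

/* Trivial step functions for exercising the loop. */
static int step_one(void *ctx)   { (void)ctx; return 1; }
static int step_stall(void *ctx) { (void)ctx; return 0; }
```

The design choice is that a slow but steadily responding device never hits the timeout, while a wedged one still fails within one quiet window.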
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
··· 691 691 return err; 692 692 } 693 693 694 - if (channel->tx_count) { 694 + if (qlcnic_82xx_check(adapter) && channel->tx_count) { 695 695 err = qlcnic_validate_max_tx_rings(adapter, channel->tx_count); 696 696 if (err) 697 697 return err;
+1 -7
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
··· 3648 3648 u8 max_hw = QLCNIC_MAX_TX_RINGS; 3649 3649 u32 max_allowed; 3650 3650 3651 - if (!qlcnic_82xx_check(adapter)) { 3652 - netdev_err(netdev, "No Multi TX-Q support\n"); 3653 - return -EINVAL; 3654 - } 3655 - 3656 3651 if (!qlcnic_use_msi_x && !qlcnic_use_msi) { 3657 3652 netdev_err(netdev, "No Multi TX-Q support in INT-x mode\n"); 3658 3653 return -EINVAL; ··· 3687 3692 u8 max_hw = adapter->ahw->max_rx_ques; 3688 3693 u32 max_allowed; 3689 3694 3690 - if (qlcnic_82xx_check(adapter) && !qlcnic_use_msi_x && 3691 - !qlcnic_use_msi) { 3695 + if (!qlcnic_use_msi_x && !qlcnic_use_msi) { 3692 3696 netdev_err(netdev, "No RSS support in INT-x mode\n"); 3693 3697 return -EINVAL; 3694 3698 }
+4 -4
drivers/net/ethernet/renesas/sh_eth.c
··· 620 620 .eesr_err_check = EESR_TWB1 | EESR_TWB | EESR_TABT | EESR_RABT | 621 621 EESR_RFE | EESR_RDE | EESR_RFRMER | EESR_TFE | 622 622 EESR_TDE | EESR_ECI, 623 - .fdr_value = 0x0000070f, 624 - .rmcr_value = 0x00000001, 625 623 626 624 .apr = 1, 627 625 .mpr = 1, 628 626 .tpauser = 1, 629 627 .bculr = 1, 630 628 .hw_swap = 1, 631 - .rpadir = 1, 632 - .rpadir_value = 2 << 16, 633 629 .no_trimd = 1, 634 630 .no_ade = 1, 635 631 .tsu = 1, ··· 688 692 .eesr_err_check = EESR_TWB1 | EESR_TWB | EESR_TABT | EESR_RABT | 689 693 EESR_RFE | EESR_RDE | EESR_RFRMER | EESR_TFE | 690 694 EESR_TDE | EESR_ECI, 695 + .fdr_value = 0x0000070f, 696 + .rmcr_value = 0x00000001, 691 697 692 698 .apr = 1, 693 699 .mpr = 1, 694 700 .tpauser = 1, 695 701 .bculr = 1, 696 702 .hw_swap = 1, 703 + .rpadir = 1, 704 + .rpadir_value = 2 << 16, 697 705 .no_trimd = 1, 698 706 .no_ade = 1, 699 707 .tsu = 1,
+2 -4
drivers/net/ethernet/smsc/smc91x.h
··· 1124 1124 void __iomem *__ioaddr = ioaddr; \ 1125 1125 if (__len >= 2 && (unsigned long)__ptr & 2) { \ 1126 1126 __len -= 2; \ 1127 - SMC_outw(*(u16 *)__ptr, ioaddr, \ 1128 - DATA_REG(lp)); \ 1127 + SMC_outsw(ioaddr, DATA_REG(lp), __ptr, 1); \ 1129 1128 __ptr += 2; \ 1130 1129 } \ 1131 1130 if (SMC_CAN_USE_DATACS && lp->datacs) \ ··· 1132 1133 SMC_outsl(__ioaddr, DATA_REG(lp), __ptr, __len>>2); \ 1133 1134 if (__len & 2) { \ 1134 1135 __ptr += (__len & ~3); \ 1135 - SMC_outw(*((u16 *)__ptr), ioaddr, \ 1136 - DATA_REG(lp)); \ 1136 + SMC_outsw(ioaddr, DATA_REG(lp), __ptr, 1); \ 1137 1137 } \ 1138 1138 } else if (SMC_16BIT(lp)) \ 1139 1139 SMC_outsw(ioaddr, DATA_REG(lp), p, (l) >> 1); \
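The smc91x hunk above swaps `SMC_outw` for single-element `SMC_outsw` when pushing the unaligned leading and trailing 16-bit halves of a buffer into the chip's 32-bit data FIFO. The copy structure itself is worth seeing in one place; in this sketch the "FIFO" is just an output buffer so the alignment handling is testable, and even-length buffers are assumed (the real macro handles the byte-wide leftover elsewhere):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Stream an even-length buffer into a 32-bit-wide "FIFO": peel off a
 * leading 16-bit half if the pointer is only 2-byte aligned, copy whole
 * 32-bit words, then emit the trailing half, mirroring SMC_PUSH_DATA. */
static size_t push_data(uint8_t *fifo, const uint8_t *ptr, size_t len)
{
    size_t out = 0;

    if (len >= 2 && ((uintptr_t)ptr & 2)) {    /* leading halfword */
        memcpy(fifo + out, ptr, 2);
        ptr += 2;
        len -= 2;
        out += 2;
    }
    memcpy(fifo + out, ptr, len & ~(size_t)3); /* whole 32-bit words */
    out += len & ~(size_t)3;
    if (len & 2) {                             /* trailing halfword */
        memcpy(fifo + out, ptr + (len & ~(size_t)3), 2);
        out += 2;
    }
    return out;
}
```

Either way the branches resolve, every byte leaves in order; the kernel change only concerns *how* the halfwords reach the bus, not this structure.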
+1 -8
drivers/net/ethernet/ti/cpsw.c
··· 637 637 static irqreturn_t cpsw_interrupt(int irq, void *dev_id) 638 638 { 639 639 struct cpsw_priv *priv = dev_id; 640 - u32 rx, tx, rx_thresh; 641 - 642 - rx_thresh = __raw_readl(&priv->wr_regs->rx_thresh_stat); 643 - rx = __raw_readl(&priv->wr_regs->rx_stat); 644 - tx = __raw_readl(&priv->wr_regs->tx_stat); 645 - if (!rx_thresh && !rx && !tx) 646 - return IRQ_NONE; 647 640 648 641 cpsw_intr_disable(priv); 649 642 if (priv->irq_enabled == true) { ··· 1164 1171 } 1165 1172 } 1166 1173 1174 + napi_enable(&priv->napi); 1167 1175 cpdma_ctlr_start(priv->dma); 1168 1176 cpsw_intr_enable(priv); 1169 - napi_enable(&priv->napi); 1170 1177 cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_RX); 1171 1178 cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX); 1172 1179
+1 -2
drivers/net/ethernet/ti/davinci_emac.c
··· 876 876 netdev_mc_count(ndev) > EMAC_DEF_MAX_MULTICAST_ADDRESSES) { 877 877 mbp_enable = (mbp_enable | EMAC_MBP_RXMCAST); 878 878 emac_add_mcast(priv, EMAC_ALL_MULTI_SET, NULL); 879 - } 880 - if (!netdev_mc_empty(ndev)) { 879 + } else if (!netdev_mc_empty(ndev)) { 881 880 struct netdev_hw_addr *ha; 882 881 883 882 mbp_enable = (mbp_enable | EMAC_MBP_RXMCAST);
-1
drivers/net/hamradio/yam.c
··· 975 975 return -EINVAL; /* Cannot change this parameter when up */ 976 976 if ((ym = kmalloc(sizeof(struct yamdrv_ioctl_mcs), GFP_KERNEL)) == NULL) 977 977 return -ENOBUFS; 978 - ym->bitrate = 9600; 979 978 if (copy_from_user(ym, ifr->ifr_data, sizeof(struct yamdrv_ioctl_mcs))) { 980 979 kfree(ym); 981 980 return -EFAULT;
+20 -3
drivers/net/usb/ax88179_178a.c
··· 36 36 #define AX_RXHDR_L4_TYPE_TCP 16 37 37 #define AX_RXHDR_L3CSUM_ERR 2 38 38 #define AX_RXHDR_L4CSUM_ERR 1 39 - #define AX_RXHDR_CRC_ERR ((u32)BIT(31)) 40 - #define AX_RXHDR_DROP_ERR ((u32)BIT(30)) 39 + #define AX_RXHDR_CRC_ERR ((u32)BIT(29)) 40 + #define AX_RXHDR_DROP_ERR ((u32)BIT(31)) 41 41 #define AX_ACCESS_MAC 0x01 42 42 #define AX_ACCESS_PHY 0x02 43 43 #define AX_ACCESS_EEPROM 0x04 ··· 1406 1406 .tx_fixup = ax88179_tx_fixup, 1407 1407 }; 1408 1408 1409 + static const struct driver_info samsung_info = { 1410 + .description = "Samsung USB Ethernet Adapter", 1411 + .bind = ax88179_bind, 1412 + .unbind = ax88179_unbind, 1413 + .status = ax88179_status, 1414 + .link_reset = ax88179_link_reset, 1415 + .reset = ax88179_reset, 1416 + .stop = ax88179_stop, 1417 + .flags = FLAG_ETHER | FLAG_FRAMING_AX, 1418 + .rx_fixup = ax88179_rx_fixup, 1419 + .tx_fixup = ax88179_tx_fixup, 1420 + }; 1421 + 1409 1422 static const struct usb_device_id products[] = { 1410 1423 { 1411 1424 /* ASIX AX88179 10/100/1000 */ ··· 1431 1418 }, { 1432 1419 /* Sitecom USB 3.0 to Gigabit Adapter */ 1433 1420 USB_DEVICE(0x0df6, 0x0072), 1434 - .driver_info = (unsigned long) &sitecom_info, 1421 + .driver_info = (unsigned long)&sitecom_info, 1422 + }, { 1423 + /* Samsung USB Ethernet Adapter */ 1424 + USB_DEVICE(0x04e8, 0xa100), 1425 + .driver_info = (unsigned long)&samsung_info, 1435 1426 }, 1436 1427 { }, 1437 1428 };
+1
drivers/net/usb/qmi_wwan.c
··· 730 730 {QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */ 731 731 {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */ 732 732 {QMI_FIXED_INTF(0x1bc7, 0x1201, 2)}, /* Telit LE920 */ 733 + {QMI_FIXED_INTF(0x0b3c, 0xc005, 6)}, /* Olivetti Olicard 200 */ 733 734 {QMI_FIXED_INTF(0x1e2d, 0x0060, 4)}, /* Cinterion PLxx */ 734 735 735 736 /* 4. Gobi 1000 devices */
+3 -1
drivers/net/usb/usbnet.c
··· 1688 1688 if (dev->can_dma_sg && !(info->flags & FLAG_SEND_ZLP) && 1689 1689 !(info->flags & FLAG_MULTI_PACKET)) { 1690 1690 dev->padding_pkt = kzalloc(1, GFP_KERNEL); 1691 - if (!dev->padding_pkt) 1691 + if (!dev->padding_pkt) { 1692 + status = -ENOMEM; 1692 1693 goto out4; 1694 + } 1693 1695 } 1694 1696 1695 1697 status = register_netdev (net);
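The usbnet hunk above adds `status = -ENOMEM` before the `goto out4`: in the goto-unwind convention, every failure branch must set the return value before jumping, or a stale success value leaks to the caller. A self-contained sketch of that convention with hypothetical names (`struct fake_dev`, `setup_dev` are illustrative, not the driver's API):

```c
#include <stdlib.h>
#include <stddef.h>
#include <errno.h>

struct fake_dev { void *a; void *b; };

/* Two-stage setup with centralized unwinding via goto labels. */
static int setup_dev(struct fake_dev *d, int fail_second)
{
    int status = 0;

    d->b = NULL;
    d->a = malloc(16);
    if (!d->a) {
        status = -ENOMEM;
        goto out;
    }
    d->b = fail_second ? NULL : malloc(16);
    if (!d->b) {
        status = -ENOMEM;   /* the kind of line the fix above adds:
                             * without it, a stale 0 ("success") would
                             * be returned from the error path */
        goto free_a;
    }
    return 0;

free_a:
    free(d->a);
    d->a = NULL;
out:
    return status;
}
```

The unwind labels run in reverse order of acquisition, so each `goto` target frees exactly what was allocated before the failure point.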
+13 -1
drivers/net/virtio_net.c
··· 938 938 return -EINVAL; 939 939 } else { 940 940 vi->curr_queue_pairs = queue_pairs; 941 - schedule_delayed_work(&vi->refill, 0); 941 + /* virtnet_open() will refill when device is going to up. */ 942 + if (dev->flags & IFF_UP) 943 + schedule_delayed_work(&vi->refill, 0); 942 944 } 943 945 944 946 return 0; ··· 1118 1116 { 1119 1117 struct virtnet_info *vi = container_of(nfb, struct virtnet_info, nb); 1120 1118 1119 + mutex_lock(&vi->config_lock); 1120 + 1121 + if (!vi->config_enable) 1122 + goto done; 1123 + 1121 1124 switch(action & ~CPU_TASKS_FROZEN) { 1122 1125 case CPU_ONLINE: 1123 1126 case CPU_DOWN_FAILED: ··· 1135 1128 default: 1136 1129 break; 1137 1130 } 1131 + 1132 + done: 1133 + mutex_unlock(&vi->config_lock); 1138 1134 return NOTIFY_OK; 1139 1135 } 1140 1136 ··· 1743 1733 vi->config_enable = true; 1744 1734 mutex_unlock(&vi->config_lock); 1745 1735 1736 + rtnl_lock(); 1746 1737 virtnet_set_queues(vi, vi->curr_queue_pairs); 1738 + rtnl_unlock(); 1747 1739 1748 1740 return 0; 1749 1741 }
+1
drivers/net/wan/farsync.c
··· 1972 1972 } 1973 1973 1974 1974 i = port->index; 1975 + memset(&sync, 0, sizeof(sync)); 1975 1976 sync.clock_rate = FST_RDL(card, portConfig[i].lineSpeed); 1976 1977 /* Lucky card and linux use same encoding here */ 1977 1978 sync.clock_type = FST_RDB(card, portConfig[i].internalClock) ==
+1
drivers/net/wan/wanxl.c
··· 355 355 ifr->ifr_settings.size = size; /* data size wanted */ 356 356 return -ENOBUFS; 357 357 } 358 + memset(&line, 0, sizeof(line)); 358 359 line.clock_type = get_status(port)->clocking; 359 360 line.clock_rate = 0; 360 361 line.loopback = 0;
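The farsync and wanxl hunks above both add a `memset()` before filling an ioctl reply struct: any padding bytes, or fields the driver never writes, would otherwise carry stale stack data out to user space. A sketch of the pattern; the struct layout here is hypothetical (chosen so it contains padding), not the actual `sync_serial_settings`:

```c
#include <string.h>
#include <stdint.h>

/* An illustrative reply struct with internal padding after clock_type. */
struct line_settings {
    uint8_t  clock_type;
    uint32_t clock_rate;
    uint16_t loopback;
};

/* Zero the whole object first, then set only the meaningful fields, so
 * every byte that later crosses to user space is well-defined. */
static void fill_settings(struct line_settings *s, uint32_t rate)
{
    memset(s, 0, sizeof(*s));   /* no stale stack bytes survive */
    s->clock_rate = rate;
    /* clock_type and loopback stay 0 deliberately, as in the hunks */
}
```

Field-by-field initialization is not enough here, because `=` assignments never touch padding; only `memset()` (or a zero-initializer of the whole object) guarantees it.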
+11 -12
drivers/net/wireless/ath/ath9k/main.c
··· 208 208 struct ath_hw *ah = sc->sc_ah; 209 209 struct ath_common *common = ath9k_hw_common(ah); 210 210 unsigned long flags; 211 + int i; 211 212 212 213 if (ath_startrecv(sc) != 0) { 213 214 ath_err(common, "Unable to restart recv logic\n"); ··· 236 235 } 237 236 work: 238 237 ath_restart_work(sc); 238 + 239 + for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++) { 240 + if (!ATH_TXQ_SETUP(sc, i)) 241 + continue; 242 + 243 + spin_lock_bh(&sc->tx.txq[i].axq_lock); 244 + ath_txq_schedule(sc, &sc->tx.txq[i]); 245 + spin_unlock_bh(&sc->tx.txq[i].axq_lock); 246 + } 239 247 } 240 248 241 249 ieee80211_wake_queues(sc->hw); ··· 629 619 630 620 static int ath_reset(struct ath_softc *sc) 631 621 { 632 - int i, r; 622 + int r; 633 623 634 624 ath9k_ps_wakeup(sc); 635 - 636 625 r = ath_reset_internal(sc, NULL); 637 - 638 - for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++) { 639 - if (!ATH_TXQ_SETUP(sc, i)) 640 - continue; 641 - 642 - spin_lock_bh(&sc->tx.txq[i].axq_lock); 643 - ath_txq_schedule(sc, &sc->tx.txq[i]); 644 - spin_unlock_bh(&sc->tx.txq[i].axq_lock); 645 - } 646 - 647 626 ath9k_ps_restore(sc); 648 627 649 628 return r;
+2
drivers/net/wireless/cw1200/cw1200_spi.c
··· 237 237 struct hwbus_priv *self = dev_id; 238 238 239 239 if (self->core) { 240 + cw1200_spi_lock(self); 240 241 cw1200_irq_handler(self->core); 242 + cw1200_spi_unlock(self); 241 243 return IRQ_HANDLED; 242 244 } else { 243 245 return IRQ_NONE;
+6
drivers/net/wireless/iwlwifi/iwl-6000.c
··· 240 240 .ht_params = &iwl6000_ht_params, 241 241 }; 242 242 243 + const struct iwl_cfg iwl6035_2agn_sff_cfg = { 244 + .name = "Intel(R) Centrino(R) Ultimate-N 6235 AGN", 245 + IWL_DEVICE_6035, 246 + .ht_params = &iwl6000_ht_params, 247 + }; 248 + 243 249 const struct iwl_cfg iwl1030_bgn_cfg = { 244 250 .name = "Intel(R) Centrino(R) Wireless-N 1030 BGN", 245 251 IWL_DEVICE_6030,
+1
drivers/net/wireless/iwlwifi/iwl-config.h
··· 280 280 extern const struct iwl_cfg iwl2000_2bgn_d_cfg; 281 281 extern const struct iwl_cfg iwl2030_2bgn_cfg; 282 282 extern const struct iwl_cfg iwl6035_2agn_cfg; 283 + extern const struct iwl_cfg iwl6035_2agn_sff_cfg; 283 284 extern const struct iwl_cfg iwl105_bgn_cfg; 284 285 extern const struct iwl_cfg iwl105_bgn_d_cfg; 285 286 extern const struct iwl_cfg iwl135_bgn_cfg;
+4 -2
drivers/net/wireless/iwlwifi/iwl-trans.h
··· 601 601 { 602 602 int ret; 603 603 604 - WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE, 605 - "%s bad state = %d", __func__, trans->state); 604 + if (trans->state != IWL_TRANS_FW_ALIVE) { 605 + IWL_ERR(trans, "%s bad state = %d", __func__, trans->state); 606 + return -EIO; 607 + } 606 608 607 609 if (!(cmd->flags & CMD_ASYNC)) 608 610 lock_map_acquire_read(&trans->sync_cmd_lockdep_map);
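The iwl-trans.h hunk above turns a `WARN_ONCE` on a bad transport state into a logged `-EIO` return, so the command path refuses to proceed instead of warning and then touching dead firmware anyway. A sketch of that defensive shape with an illustrative state enum (the real check uses `IWL_TRANS_FW_ALIVE` and `IWL_ERR`):

```c
#include <errno.h>
#include <stdio.h>

enum trans_state { TRANS_NO_FW, TRANS_FW_ALIVE };

/* Refuse to submit work unless the firmware is alive; log and return an
 * error the caller must handle, rather than warn and continue. */
static int send_cmd(enum trans_state state, int *sent)
{
    if (state != TRANS_FW_ALIVE) {
        fprintf(stderr, "%s: bad state = %d\n", __func__, state);
        return -EIO;          /* bail out instead of proceeding */
    }
    *sent = 1;                /* stand-in for the real submission path */
    return 0;
}
```

The trade-off is deliberate: a warning documents the anomaly but leaves the crash to happen later; an early error return keeps the failure at the boundary where it is diagnosable.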
+4 -1
drivers/net/wireless/iwlwifi/mvm/power.c
··· 273 273 if (!mvmvif->queue_params[ac].uapsd) 274 274 continue; 275 275 276 - cmd->flags |= cpu_to_le16(POWER_FLAGS_ADVANCE_PM_ENA_MSK); 276 + if (mvm->cur_ucode != IWL_UCODE_WOWLAN) 277 + cmd->flags |= 278 + cpu_to_le16(POWER_FLAGS_ADVANCE_PM_ENA_MSK); 279 + 277 280 cmd->uapsd_ac_flags |= BIT(ac); 278 281 279 282 /* QNDP TID - the highest TID with no admission control */
+11 -1
drivers/net/wireless/iwlwifi/mvm/scan.c
··· 394 394 return false; 395 395 } 396 396 397 + /* 398 + * If scan cannot be aborted, it means that we had a 399 + * SCAN_COMPLETE_NOTIFICATION in the pipe and it called 400 + * ieee80211_scan_completed already. 401 + */ 397 402 IWL_DEBUG_SCAN(mvm, "Scan cannot be aborted, exit now: %d\n", 398 403 *resp); 399 404 return true; ··· 422 417 SCAN_COMPLETE_NOTIFICATION }; 423 418 int ret; 424 419 420 + if (mvm->scan_status == IWL_MVM_SCAN_NONE) 421 + return; 422 + 425 423 iwl_init_notification_wait(&mvm->notif_wait, &wait_scan_abort, 426 424 scan_abort_notif, 427 425 ARRAY_SIZE(scan_abort_notif), 428 426 iwl_mvm_scan_abort_notif, NULL); 429 427 430 - ret = iwl_mvm_send_cmd_pdu(mvm, SCAN_ABORT_CMD, CMD_SYNC, 0, NULL); 428 + ret = iwl_mvm_send_cmd_pdu(mvm, SCAN_ABORT_CMD, 429 + CMD_SYNC | CMD_SEND_IN_RFKILL, 0, NULL); 431 430 if (ret) { 432 431 IWL_ERR(mvm, "Couldn't send SCAN_ABORT_CMD: %d\n", ret); 432 + /* mac80211's state will be cleaned in the fw_restart flow */ 433 433 goto out_remove_notif; 434 434 } 435 435
+42
drivers/net/wireless/iwlwifi/pcie/drv.c
··· 139 139 140 140 /* 6x00 Series */ 141 141 {IWL_PCI_DEVICE(0x422B, 0x1101, iwl6000_3agn_cfg)}, 142 + {IWL_PCI_DEVICE(0x422B, 0x1108, iwl6000_3agn_cfg)}, 142 143 {IWL_PCI_DEVICE(0x422B, 0x1121, iwl6000_3agn_cfg)}, 144 + {IWL_PCI_DEVICE(0x422B, 0x1128, iwl6000_3agn_cfg)}, 143 145 {IWL_PCI_DEVICE(0x422C, 0x1301, iwl6000i_2agn_cfg)}, 144 146 {IWL_PCI_DEVICE(0x422C, 0x1306, iwl6000i_2abg_cfg)}, 145 147 {IWL_PCI_DEVICE(0x422C, 0x1307, iwl6000i_2bg_cfg)}, 146 148 {IWL_PCI_DEVICE(0x422C, 0x1321, iwl6000i_2agn_cfg)}, 147 149 {IWL_PCI_DEVICE(0x422C, 0x1326, iwl6000i_2abg_cfg)}, 148 150 {IWL_PCI_DEVICE(0x4238, 0x1111, iwl6000_3agn_cfg)}, 151 + {IWL_PCI_DEVICE(0x4238, 0x1118, iwl6000_3agn_cfg)}, 149 152 {IWL_PCI_DEVICE(0x4239, 0x1311, iwl6000i_2agn_cfg)}, 150 153 {IWL_PCI_DEVICE(0x4239, 0x1316, iwl6000i_2abg_cfg)}, 151 154 ··· 156 153 {IWL_PCI_DEVICE(0x0082, 0x1301, iwl6005_2agn_cfg)}, 157 154 {IWL_PCI_DEVICE(0x0082, 0x1306, iwl6005_2abg_cfg)}, 158 155 {IWL_PCI_DEVICE(0x0082, 0x1307, iwl6005_2bg_cfg)}, 156 + {IWL_PCI_DEVICE(0x0082, 0x1308, iwl6005_2agn_cfg)}, 159 157 {IWL_PCI_DEVICE(0x0082, 0x1321, iwl6005_2agn_cfg)}, 160 158 {IWL_PCI_DEVICE(0x0082, 0x1326, iwl6005_2abg_cfg)}, 159 + {IWL_PCI_DEVICE(0x0082, 0x1328, iwl6005_2agn_cfg)}, 161 160 {IWL_PCI_DEVICE(0x0085, 0x1311, iwl6005_2agn_cfg)}, 161 + {IWL_PCI_DEVICE(0x0085, 0x1318, iwl6005_2agn_cfg)}, 162 162 {IWL_PCI_DEVICE(0x0085, 0x1316, iwl6005_2abg_cfg)}, 163 163 {IWL_PCI_DEVICE(0x0082, 0xC020, iwl6005_2agn_sff_cfg)}, 164 164 {IWL_PCI_DEVICE(0x0085, 0xC220, iwl6005_2agn_sff_cfg)}, 165 + {IWL_PCI_DEVICE(0x0085, 0xC228, iwl6005_2agn_sff_cfg)}, 165 166 {IWL_PCI_DEVICE(0x0082, 0x4820, iwl6005_2agn_d_cfg)}, 166 167 {IWL_PCI_DEVICE(0x0082, 0x1304, iwl6005_2agn_mow1_cfg)},/* low 5GHz active */ 167 168 {IWL_PCI_DEVICE(0x0082, 0x1305, iwl6005_2agn_mow2_cfg)},/* high 5GHz active */ ··· 247 240 248 241 /* 6x35 Series */ 249 242 {IWL_PCI_DEVICE(0x088E, 0x4060, iwl6035_2agn_cfg)}, 243 + {IWL_PCI_DEVICE(0x088E, 0x406A, 
iwl6035_2agn_sff_cfg)}, 250 244 {IWL_PCI_DEVICE(0x088F, 0x4260, iwl6035_2agn_cfg)}, 245 + {IWL_PCI_DEVICE(0x088F, 0x426A, iwl6035_2agn_sff_cfg)}, 251 246 {IWL_PCI_DEVICE(0x088E, 0x4460, iwl6035_2agn_cfg)}, 247 + {IWL_PCI_DEVICE(0x088E, 0x446A, iwl6035_2agn_sff_cfg)}, 252 248 {IWL_PCI_DEVICE(0x088E, 0x4860, iwl6035_2agn_cfg)}, 253 249 {IWL_PCI_DEVICE(0x088F, 0x5260, iwl6035_2agn_cfg)}, 254 250 ··· 270 260 #if IS_ENABLED(CONFIG_IWLMVM) 271 261 /* 7000 Series */ 272 262 {IWL_PCI_DEVICE(0x08B1, 0x4070, iwl7260_2ac_cfg)}, 263 + {IWL_PCI_DEVICE(0x08B1, 0x4072, iwl7260_2ac_cfg)}, 273 264 {IWL_PCI_DEVICE(0x08B1, 0x4170, iwl7260_2ac_cfg)}, 274 265 {IWL_PCI_DEVICE(0x08B1, 0x4060, iwl7260_2n_cfg)}, 266 + {IWL_PCI_DEVICE(0x08B1, 0x406A, iwl7260_2n_cfg)}, 275 267 {IWL_PCI_DEVICE(0x08B1, 0x4160, iwl7260_2n_cfg)}, 276 268 {IWL_PCI_DEVICE(0x08B1, 0x4062, iwl7260_n_cfg)}, 277 269 {IWL_PCI_DEVICE(0x08B1, 0x4162, iwl7260_n_cfg)}, 278 270 {IWL_PCI_DEVICE(0x08B2, 0x4270, iwl7260_2ac_cfg)}, 271 + {IWL_PCI_DEVICE(0x08B2, 0x4272, iwl7260_2ac_cfg)}, 279 272 {IWL_PCI_DEVICE(0x08B2, 0x4260, iwl7260_2n_cfg)}, 273 + {IWL_PCI_DEVICE(0x08B2, 0x426A, iwl7260_2n_cfg)}, 280 274 {IWL_PCI_DEVICE(0x08B2, 0x4262, iwl7260_n_cfg)}, 281 275 {IWL_PCI_DEVICE(0x08B1, 0x4470, iwl7260_2ac_cfg)}, 276 + {IWL_PCI_DEVICE(0x08B1, 0x4472, iwl7260_2ac_cfg)}, 282 277 {IWL_PCI_DEVICE(0x08B1, 0x4460, iwl7260_2n_cfg)}, 278 + {IWL_PCI_DEVICE(0x08B1, 0x446A, iwl7260_2n_cfg)}, 283 279 {IWL_PCI_DEVICE(0x08B1, 0x4462, iwl7260_n_cfg)}, 284 280 {IWL_PCI_DEVICE(0x08B1, 0x4870, iwl7260_2ac_cfg)}, 285 281 {IWL_PCI_DEVICE(0x08B1, 0x486E, iwl7260_2ac_cfg)}, 286 282 {IWL_PCI_DEVICE(0x08B1, 0x4A70, iwl7260_2ac_cfg_high_temp)}, 287 283 {IWL_PCI_DEVICE(0x08B1, 0x4A6E, iwl7260_2ac_cfg_high_temp)}, 288 284 {IWL_PCI_DEVICE(0x08B1, 0x4A6C, iwl7260_2ac_cfg_high_temp)}, 285 + {IWL_PCI_DEVICE(0x08B1, 0x4570, iwl7260_2ac_cfg)}, 286 + {IWL_PCI_DEVICE(0x08B1, 0x4560, iwl7260_2n_cfg)}, 287 + {IWL_PCI_DEVICE(0x08B2, 0x4370, iwl7260_2ac_cfg)}, 288 + 
{IWL_PCI_DEVICE(0x08B2, 0x4360, iwl7260_2n_cfg)}, 289 + {IWL_PCI_DEVICE(0x08B1, 0x5070, iwl7260_2ac_cfg)}, 289 290 {IWL_PCI_DEVICE(0x08B1, 0x4020, iwl7260_2n_cfg)}, 291 + {IWL_PCI_DEVICE(0x08B1, 0x402A, iwl7260_2n_cfg)}, 290 292 {IWL_PCI_DEVICE(0x08B2, 0x4220, iwl7260_2n_cfg)}, 291 293 {IWL_PCI_DEVICE(0x08B1, 0x4420, iwl7260_2n_cfg)}, 292 294 {IWL_PCI_DEVICE(0x08B1, 0xC070, iwl7260_2ac_cfg)}, 295 + {IWL_PCI_DEVICE(0x08B1, 0xC072, iwl7260_2ac_cfg)}, 293 296 {IWL_PCI_DEVICE(0x08B1, 0xC170, iwl7260_2ac_cfg)}, 294 297 {IWL_PCI_DEVICE(0x08B1, 0xC060, iwl7260_2n_cfg)}, 298 + {IWL_PCI_DEVICE(0x08B1, 0xC06A, iwl7260_2n_cfg)}, 295 299 {IWL_PCI_DEVICE(0x08B1, 0xC160, iwl7260_2n_cfg)}, 296 300 {IWL_PCI_DEVICE(0x08B1, 0xC062, iwl7260_n_cfg)}, 297 301 {IWL_PCI_DEVICE(0x08B1, 0xC162, iwl7260_n_cfg)}, 302 + {IWL_PCI_DEVICE(0x08B1, 0xC770, iwl7260_2ac_cfg)}, 303 + {IWL_PCI_DEVICE(0x08B1, 0xC760, iwl7260_2n_cfg)}, 298 304 {IWL_PCI_DEVICE(0x08B2, 0xC270, iwl7260_2ac_cfg)}, 305 + {IWL_PCI_DEVICE(0x08B2, 0xC272, iwl7260_2ac_cfg)}, 299 306 {IWL_PCI_DEVICE(0x08B2, 0xC260, iwl7260_2n_cfg)}, 307 + {IWL_PCI_DEVICE(0x08B2, 0xC26A, iwl7260_n_cfg)}, 300 308 {IWL_PCI_DEVICE(0x08B2, 0xC262, iwl7260_n_cfg)}, 301 309 {IWL_PCI_DEVICE(0x08B1, 0xC470, iwl7260_2ac_cfg)}, 310 + {IWL_PCI_DEVICE(0x08B1, 0xC472, iwl7260_2ac_cfg)}, 302 311 {IWL_PCI_DEVICE(0x08B1, 0xC460, iwl7260_2n_cfg)}, 303 312 {IWL_PCI_DEVICE(0x08B1, 0xC462, iwl7260_n_cfg)}, 313 + {IWL_PCI_DEVICE(0x08B1, 0xC570, iwl7260_2ac_cfg)}, 314 + {IWL_PCI_DEVICE(0x08B1, 0xC560, iwl7260_2n_cfg)}, 315 + {IWL_PCI_DEVICE(0x08B2, 0xC370, iwl7260_2ac_cfg)}, 316 + {IWL_PCI_DEVICE(0x08B1, 0xC360, iwl7260_2n_cfg)}, 304 317 {IWL_PCI_DEVICE(0x08B1, 0xC020, iwl7260_2n_cfg)}, 318 + {IWL_PCI_DEVICE(0x08B1, 0xC02A, iwl7260_2n_cfg)}, 305 319 {IWL_PCI_DEVICE(0x08B2, 0xC220, iwl7260_2n_cfg)}, 306 320 {IWL_PCI_DEVICE(0x08B1, 0xC420, iwl7260_2n_cfg)}, 307 321 308 322 /* 3160 Series */ 309 323 {IWL_PCI_DEVICE(0x08B3, 0x0070, iwl3160_2ac_cfg)}, 324 + 
{IWL_PCI_DEVICE(0x08B3, 0x0072, iwl3160_2ac_cfg)}, 310 325 {IWL_PCI_DEVICE(0x08B3, 0x0170, iwl3160_2ac_cfg)}, 326 + {IWL_PCI_DEVICE(0x08B3, 0x0172, iwl3160_2ac_cfg)}, 311 327 {IWL_PCI_DEVICE(0x08B3, 0x0060, iwl3160_2n_cfg)}, 312 328 {IWL_PCI_DEVICE(0x08B3, 0x0062, iwl3160_n_cfg)}, 313 329 {IWL_PCI_DEVICE(0x08B4, 0x0270, iwl3160_2ac_cfg)}, 330 + {IWL_PCI_DEVICE(0x08B4, 0x0272, iwl3160_2ac_cfg)}, 314 331 {IWL_PCI_DEVICE(0x08B3, 0x0470, iwl3160_2ac_cfg)}, 332 + {IWL_PCI_DEVICE(0x08B3, 0x0472, iwl3160_2ac_cfg)}, 333 + {IWL_PCI_DEVICE(0x08B4, 0x0370, iwl3160_2ac_cfg)}, 315 334 {IWL_PCI_DEVICE(0x08B3, 0x8070, iwl3160_2ac_cfg)}, 335 + {IWL_PCI_DEVICE(0x08B3, 0x8072, iwl3160_2ac_cfg)}, 316 336 {IWL_PCI_DEVICE(0x08B3, 0x8170, iwl3160_2ac_cfg)}, 337 + {IWL_PCI_DEVICE(0x08B3, 0x8172, iwl3160_2ac_cfg)}, 317 338 {IWL_PCI_DEVICE(0x08B3, 0x8060, iwl3160_2n_cfg)}, 318 339 {IWL_PCI_DEVICE(0x08B3, 0x8062, iwl3160_n_cfg)}, 319 340 {IWL_PCI_DEVICE(0x08B4, 0x8270, iwl3160_2ac_cfg)}, 320 341 {IWL_PCI_DEVICE(0x08B3, 0x8470, iwl3160_2ac_cfg)}, 342 + {IWL_PCI_DEVICE(0x08B3, 0x8570, iwl3160_2ac_cfg)}, 321 343 #endif /* CONFIG_IWLMVM */ 322 344 323 345 {0}
+2
drivers/net/wireless/iwlwifi/pcie/tx.c
··· 1102 1102 * non-AGG queue. 1103 1103 */ 1104 1104 iwl_clear_bits_prph(trans, SCD_AGGR_SEL, BIT(txq_id)); 1105 + 1106 + ssn = trans_pcie->txq[txq_id].q.read_ptr; 1105 1107 } 1106 1108 1107 1109 /* Place first TFD at index corresponding to start sequence number.
+8 -2
drivers/net/wireless/mwifiex/join.c
··· 1422 1422 */ 1423 1423 int mwifiex_deauthenticate(struct mwifiex_private *priv, u8 *mac) 1424 1424 { 1425 + int ret = 0; 1426 + 1425 1427 if (!priv->media_connected) 1426 1428 return 0; 1427 1429 1428 1430 switch (priv->bss_mode) { 1429 1431 case NL80211_IFTYPE_STATION: 1430 1432 case NL80211_IFTYPE_P2P_CLIENT: 1431 - return mwifiex_deauthenticate_infra(priv, mac); 1433 + ret = mwifiex_deauthenticate_infra(priv, mac); 1434 + if (ret) 1435 + cfg80211_disconnected(priv->netdev, 0, NULL, 0, 1436 + GFP_KERNEL); 1437 + break; 1432 1438 case NL80211_IFTYPE_ADHOC: 1433 1439 return mwifiex_send_cmd_sync(priv, 1434 1440 HostCmd_CMD_802_11_AD_HOC_STOP, ··· 1446 1440 break; 1447 1441 } 1448 1442 1449 - return 0; 1443 + return ret; 1450 1444 } 1451 1445 EXPORT_SYMBOL_GPL(mwifiex_deauthenticate); 1452 1446
+2 -1
drivers/net/wireless/mwifiex/sta_event.c
··· 118 118 dev_dbg(adapter->dev, 119 119 "info: successfully disconnected from %pM: reason code %d\n", 120 120 priv->cfg_bssid, reason_code); 121 - if (priv->bss_mode == NL80211_IFTYPE_STATION) { 121 + if (priv->bss_mode == NL80211_IFTYPE_STATION || 122 + priv->bss_mode == NL80211_IFTYPE_P2P_CLIENT) { 122 123 cfg80211_disconnected(priv->netdev, reason_code, NULL, 0, 123 124 GFP_KERNEL); 124 125 }
+2 -1
drivers/net/wireless/rtlwifi/rtl8192cu/trx.c
··· 343 343 (bool)GET_RX_DESC_PAGGR(pdesc)); 344 344 rx_status->mactime = GET_RX_DESC_TSFL(pdesc); 345 345 if (phystatus) { 346 - p_drvinfo = (struct rx_fwinfo_92c *)(pdesc + RTL_RX_DESC_SIZE); 346 + p_drvinfo = (struct rx_fwinfo_92c *)(skb->data + 347 + stats->rx_bufshift); 347 348 rtl92c_translate_rx_signal_stuff(hw, skb, stats, pdesc, 348 349 p_drvinfo); 349 350 }
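The rtl8192cu fix above replaces `pdesc + RTL_RX_DESC_SIZE` with an offset from the raw `skb->data` byte pointer. A common cause of this class of bug is that adding N to a *typed* pointer advances N elements, not N bytes; whether or not that was the precise failure in this driver, the pitfall is worth a demonstration (the descriptor layout is illustrative):

```c
#include <stdint.h>
#include <stddef.h>

struct rx_desc { uint32_t dw[8]; };   /* an illustrative 32-byte descriptor */

/* How many bytes does `desc + n` really advance? n * sizeof(*desc),
 * not n — which is why byte offsets must be taken from a u8 pointer. */
static ptrdiff_t byte_offset_of_typed_add(const struct rx_desc *desc, int n)
{
    return (const uint8_t *)(desc + n) - (const uint8_t *)desc;
}
```

So `(struct rx_fwinfo_92c *)(pdesc + SIZE)` only lands where intended if `pdesc` is a byte pointer; computing from `skb->data + stats->rx_bufshift` sidesteps the question entirely.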
-6
drivers/of/Kconfig
··· 74 74 depends on MTD 75 75 def_bool y 76 76 77 - config OF_RESERVED_MEM 78 - depends on OF_FLATTREE && (DMA_CMA || (HAVE_GENERIC_DMA_COHERENT && HAVE_MEMBLOCK)) 79 - def_bool y 80 - help 81 - Initialization code for DMA reserved memory 82 - 83 77 endmenu # OF
-1
drivers/of/Makefile
··· 9 9 obj-$(CONFIG_OF_PCI) += of_pci.o 10 10 obj-$(CONFIG_OF_PCI_IRQ) += of_pci_irq.o 11 11 obj-$(CONFIG_OF_MTD) += of_mtd.o 12 - obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
+1 -3
drivers/of/base.c
··· 303 303 struct device_node *cpun, *cpus; 304 304 305 305 cpus = of_find_node_by_path("/cpus"); 306 - if (!cpus) { 307 - pr_warn("Missing cpus node, bailing out\n"); 306 + if (!cpus) 308 307 return NULL; 309 - } 310 308 311 309 for_each_child_of_node(cpus, cpun) { 312 310 if (of_node_cmp(cpun->type, "cpu"))
-12
drivers/of/fdt.c
··· 18 18 #include <linux/string.h> 19 19 #include <linux/errno.h> 20 20 #include <linux/slab.h> 21 - #include <linux/random.h> 22 21 23 22 #include <asm/setup.h> /* for COMMAND_LINE_SIZE */ 24 23 #ifdef CONFIG_PPC ··· 802 803 } 803 804 804 805 #endif /* CONFIG_OF_EARLY_FLATTREE */ 805 - 806 - /* Feed entire flattened device tree into the random pool */ 807 - static int __init add_fdt_randomness(void) 808 - { 809 - if (initial_boot_params) 810 - add_device_randomness(initial_boot_params, 811 - be32_to_cpu(initial_boot_params->totalsize)); 812 - 813 - return 0; 814 - } 815 - core_initcall(add_fdt_randomness);
-173
drivers/of/of_reserved_mem.c
··· 1 - /*
2 - * Device tree based initialization code for reserved memory.
3 - *
4 - * Copyright (c) 2013 Samsung Electronics Co., Ltd.
5 - * http://www.samsung.com
6 - * Author: Marek Szyprowski <m.szyprowski@samsung.com>
7 - *
8 - * This program is free software; you can redistribute it and/or
9 - * modify it under the terms of the GNU General Public License as
10 - * published by the Free Software Foundation; either version 2 of the
11 - * License or (at your optional) any later version of the license.
12 - */
13 -
14 - #include <linux/memblock.h>
15 - #include <linux/err.h>
16 - #include <linux/of.h>
17 - #include <linux/of_fdt.h>
18 - #include <linux/of_platform.h>
19 - #include <linux/mm.h>
20 - #include <linux/sizes.h>
21 - #include <linux/mm_types.h>
22 - #include <linux/dma-contiguous.h>
23 - #include <linux/dma-mapping.h>
24 - #include <linux/of_reserved_mem.h>
25 -
26 - #define MAX_RESERVED_REGIONS 16
27 - struct reserved_mem {
28 -     phys_addr_t base;
29 -     unsigned long size;
30 -     struct cma *cma;
31 -     char name[32];
32 - };
33 - static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
34 - static int reserved_mem_count;
35 -
36 - static int __init fdt_scan_reserved_mem(unsigned long node, const char *uname,
37 -                 int depth, void *data)
38 - {
39 -     struct reserved_mem *rmem = &reserved_mem[reserved_mem_count];
40 -     phys_addr_t base, size;
41 -     int is_cma, is_reserved;
42 -     unsigned long len;
43 -     const char *status;
44 -     __be32 *prop;
45 -
46 -     is_cma = IS_ENABLED(CONFIG_DMA_CMA) &&
47 -         of_flat_dt_is_compatible(node, "linux,contiguous-memory-region");
48 -     is_reserved = of_flat_dt_is_compatible(node, "reserved-memory-region");
49 -
50 -     if (!is_reserved && !is_cma) {
51 -         /* ignore node and scan next one */
52 -         return 0;
53 -     }
54 -
55 -     status = of_get_flat_dt_prop(node, "status", &len);
56 -     if (status && strcmp(status, "okay") != 0) {
57 -         /* ignore disabled node nad scan next one */
58 -         return 0;
59 -     }
60 -
61 -     prop = of_get_flat_dt_prop(node, "reg", &len);
62 -     if (!prop || (len < (dt_root_size_cells + dt_root_addr_cells) *
63 -              sizeof(__be32))) {
64 -         pr_err("Reserved mem: node %s, incorrect \"reg\" property\n",
65 -                uname);
66 -         /* ignore node and scan next one */
67 -         return 0;
68 -     }
69 -     base = dt_mem_next_cell(dt_root_addr_cells, &prop);
70 -     size = dt_mem_next_cell(dt_root_size_cells, &prop);
71 -
72 -     if (!size) {
73 -         /* ignore node and scan next one */
74 -         return 0;
75 -     }
76 -
77 -     pr_info("Reserved mem: found %s, memory base %lx, size %ld MiB\n",
78 -         uname, (unsigned long)base, (unsigned long)size / SZ_1M);
79 -
80 -     if (reserved_mem_count == ARRAY_SIZE(reserved_mem))
81 -         return -ENOSPC;
82 -
83 -     rmem->base = base;
84 -     rmem->size = size;
85 -     strlcpy(rmem->name, uname, sizeof(rmem->name));
86 -
87 -     if (is_cma) {
88 -         struct cma *cma;
89 -         if (dma_contiguous_reserve_area(size, base, 0, &cma) == 0) {
90 -             rmem->cma = cma;
91 -             reserved_mem_count++;
92 -             if (of_get_flat_dt_prop(node,
93 -                         "linux,default-contiguous-region",
94 -                         NULL))
95 -                 dma_contiguous_set_default(cma);
96 -         }
97 -     } else if (is_reserved) {
98 -         if (memblock_remove(base, size) == 0)
99 -             reserved_mem_count++;
100 -         else
101 -             pr_err("Failed to reserve memory for %s\n", uname);
102 -     }
103 -
104 -     return 0;
105 - }
106 -
107 - static struct reserved_mem *get_dma_memory_region(struct device *dev)
108 - {
109 -     struct device_node *node;
110 -     const char *name;
111 -     int i;
112 -
113 -     node = of_parse_phandle(dev->of_node, "memory-region", 0);
114 -     if (!node)
115 -         return NULL;
116 -
117 -     name = kbasename(node->full_name);
118 -     for (i = 0; i < reserved_mem_count; i++)
119 -         if (strcmp(name, reserved_mem[i].name) == 0)
120 -             return &reserved_mem[i];
121 -     return NULL;
122 - }
123 -
124 - /**
125 -  * of_reserved_mem_device_init() - assign reserved memory region to given device
126 -  *
127 -  * This function assign memory region pointed by "memory-region" device tree
128 -  * property to the given device.
129 -  */
130 - void of_reserved_mem_device_init(struct device *dev)
131 - {
132 -     struct reserved_mem *region = get_dma_memory_region(dev);
133 -     if (!region)
134 -         return;
135 -
136 -     if (region->cma) {
137 -         dev_set_cma_area(dev, region->cma);
138 -         pr_info("Assigned CMA %s to %s device\n", region->name,
139 -             dev_name(dev));
140 -     } else {
141 -         if (dma_declare_coherent_memory(dev, region->base, region->base,
142 -             region->size, DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE) != 0)
143 -             pr_info("Declared reserved memory %s to %s device\n",
144 -                 region->name, dev_name(dev));
145 -     }
146 - }
147 -
148 - /**
149 -  * of_reserved_mem_device_release() - release reserved memory device structures
150 -  *
151 -  * This function releases structures allocated for memory region handling for
152 -  * the given device.
153 -  */
154 - void of_reserved_mem_device_release(struct device *dev)
155 - {
156 -     struct reserved_mem *region = get_dma_memory_region(dev);
157 -     if (!region && !region->cma)
158 -         dma_release_declared_memory(dev);
159 - }
160 -
161 - /**
162 -  * early_init_dt_scan_reserved_mem() - create reserved memory regions
163 -  *
164 -  * This function grabs memory from early allocator for device exclusive use
165 -  * defined in device tree structures. It should be called by arch specific code
166 -  * once the early allocator (memblock) has been activated and all other
167 -  * subsystems have already allocated/reserved memory.
168 -  */
169 - void __init early_init_dt_scan_reserved_mem(void)
170 - {
171 -     of_scan_flat_dt_by_path("/memory/reserved-memory",
172 -                 fdt_scan_reserved_mem, NULL);
173 - }
-4
drivers/of/platform.c
··· 21 21 #include <linux/of_device.h> 22 22 #include <linux/of_irq.h> 23 23 #include <linux/of_platform.h> 24 - #include <linux/of_reserved_mem.h> 25 24 #include <linux/platform_device.h> 26 25 27 26 const struct of_device_id of_default_bus_match_table[] = { ··· 218 219 dev->dev.bus = &platform_bus_type; 219 220 dev->dev.platform_data = platform_data; 220 221 221 - of_reserved_mem_device_init(&dev->dev); 222 - 223 222 /* We do not fill the DMA ops for platform devices by default. 224 223 * This is currently the responsibility of the platform code 225 224 * to do such, possibly using a device notifier ··· 225 228 226 229 if (of_device_add(dev) != 0) { 227 230 platform_device_put(dev); 228 - of_reserved_mem_device_release(&dev->dev); 229 231 return NULL; 230 232 } 231 233
+5 -3
drivers/pci/hotplug/acpiphp_glue.c
··· 994 994 995 995 /* 996 996 * This bridge should have been registered as a hotplug function 997 - * under its parent, so the context has to be there. If not, we 998 - * are in deep goo. 997 + * under its parent, so the context should be there, unless the 998 + * parent is going to be handled by pciehp, in which case this 999 + * bridge is not interesting to us either. 999 1000 */ 1000 1001 mutex_lock(&acpiphp_context_lock); 1001 1002 context = acpiphp_get_context(handle); 1002 - if (WARN_ON(!context)) { 1003 + if (!context) { 1003 1004 mutex_unlock(&acpiphp_context_lock); 1004 1005 put_device(&bus->dev); 1006 + pci_dev_put(bridge->pci_dev); 1005 1007 kfree(bridge); 1006 1008 return; 1007 1009 }
+2 -2
drivers/pinctrl/pinconf.c
··· 490 490 * <devicename> <state> <pinname> are values that should match the pinctrl-maps 491 491 * <newvalue> reflects the new config and is driver dependant 492 492 */ 493 - static int pinconf_dbg_config_write(struct file *file, 493 + static ssize_t pinconf_dbg_config_write(struct file *file, 494 494 const char __user *user_buf, size_t count, loff_t *ppos) 495 495 { 496 496 struct pinctrl_maps *maps_node; ··· 508 508 int i; 509 509 510 510 /* Get userspace string and assure termination */ 511 - buf_size = min(count, (size_t)(sizeof(buf)-1)); 511 + buf_size = min(count, sizeof(buf) - 1); 512 512 if (copy_from_user(buf, user_buf, buf_size)) 513 513 return -EFAULT; 514 514 buf[buf_size] = 0;
+6 -6
drivers/pinctrl/pinctrl-exynos.c
··· 663 663 /* pin banks of s5pv210 pin-controller */ 664 664 static struct samsung_pin_bank s5pv210_pin_bank[] = { 665 665 EXYNOS_PIN_BANK_EINTG(8, 0x000, "gpa0", 0x00), 666 - EXYNOS_PIN_BANK_EINTG(6, 0x020, "gpa1", 0x04), 666 + EXYNOS_PIN_BANK_EINTG(4, 0x020, "gpa1", 0x04), 667 667 EXYNOS_PIN_BANK_EINTG(8, 0x040, "gpb", 0x08), 668 668 EXYNOS_PIN_BANK_EINTG(5, 0x060, "gpc0", 0x0c), 669 669 EXYNOS_PIN_BANK_EINTG(5, 0x080, "gpc1", 0x10), 670 670 EXYNOS_PIN_BANK_EINTG(4, 0x0a0, "gpd0", 0x14), 671 - EXYNOS_PIN_BANK_EINTG(4, 0x0c0, "gpd1", 0x18), 672 - EXYNOS_PIN_BANK_EINTG(5, 0x0e0, "gpe0", 0x1c), 673 - EXYNOS_PIN_BANK_EINTG(8, 0x100, "gpe1", 0x20), 674 - EXYNOS_PIN_BANK_EINTG(6, 0x120, "gpf0", 0x24), 671 + EXYNOS_PIN_BANK_EINTG(6, 0x0c0, "gpd1", 0x18), 672 + EXYNOS_PIN_BANK_EINTG(8, 0x0e0, "gpe0", 0x1c), 673 + EXYNOS_PIN_BANK_EINTG(5, 0x100, "gpe1", 0x20), 674 + EXYNOS_PIN_BANK_EINTG(8, 0x120, "gpf0", 0x24), 675 675 EXYNOS_PIN_BANK_EINTG(8, 0x140, "gpf1", 0x28), 676 676 EXYNOS_PIN_BANK_EINTG(8, 0x160, "gpf2", 0x2c), 677 - EXYNOS_PIN_BANK_EINTG(8, 0x180, "gpf3", 0x30), 677 + EXYNOS_PIN_BANK_EINTG(6, 0x180, "gpf3", 0x30), 678 678 EXYNOS_PIN_BANK_EINTG(7, 0x1a0, "gpg0", 0x34), 679 679 EXYNOS_PIN_BANK_EINTG(7, 0x1c0, "gpg1", 0x38), 680 680 EXYNOS_PIN_BANK_EINTG(7, 0x1e0, "gpg2", 0x3c),
+3 -2
drivers/pinctrl/pinctrl-palmas.c
··· 891 891 param = pinconf_to_config_param(configs[i]); 892 892 param_val = pinconf_to_config_argument(configs[i]); 893 893 894 + if (param == PIN_CONFIG_BIAS_PULL_PIN_DEFAULT) 895 + continue; 896 + 894 897 switch (param) { 895 - case PIN_CONFIG_BIAS_PULL_PIN_DEFAULT: 896 - return 0; 897 898 case PIN_CONFIG_BIAS_DISABLE: 898 899 case PIN_CONFIG_BIAS_PULL_UP: 899 900 case PIN_CONFIG_BIAS_PULL_DOWN:
+2 -3
drivers/pinctrl/pinctrl-tegra114.c
··· 3 3 * 4 4 * Copyright (c) 2012-2013, NVIDIA CORPORATION. All rights reserved. 5 5 * 6 - * Arthur: Pritesh Raithatha <praithatha@nvidia.com> 6 + * Author: Pritesh Raithatha <praithatha@nvidia.com> 7 7 * 8 8 * This program is free software; you can redistribute it and/or modify it 9 9 * under the terms and conditions of the GNU General Public License, ··· 2763 2763 }; 2764 2764 module_platform_driver(tegra114_pinctrl_driver); 2765 2765 2766 - MODULE_ALIAS("platform:tegra114-pinctrl"); 2767 2766 MODULE_AUTHOR("Pritesh Raithatha <praithatha@nvidia.com>"); 2768 - MODULE_DESCRIPTION("NVIDIA Tegra114 pincontrol driver"); 2767 + MODULE_DESCRIPTION("NVIDIA Tegra114 pinctrl driver"); 2769 2768 MODULE_LICENSE("GPL v2");
+1
drivers/platform/x86/Kconfig
··· 504 504 depends on BACKLIGHT_CLASS_DEVICE 505 505 depends on RFKILL || RFKILL = n 506 506 depends on HOTPLUG_PCI 507 + depends on ACPI_VIDEO || ACPI_VIDEO = n 507 508 select INPUT_SPARSEKMAP 508 509 select LEDS_CLASS 509 510 select NEW_LEDS
+9 -17
drivers/platform/x86/sony-laptop.c
··· 127 127     "default is -1 (automatic)");
128 128 #endif
129 129
130 - static int kbd_backlight = 1;
130 + static int kbd_backlight = -1;
131 131 module_param(kbd_backlight, int, 0444);
132 132 MODULE_PARM_DESC(kbd_backlight,
133 133     "set this to 0 to disable keyboard backlight, "
134 -     "1 to enable it (default: 0)");
134 +     "1 to enable it (default: no change from current value)");
135 135
136 - static int kbd_backlight_timeout; /* = 0 */
136 + static int kbd_backlight_timeout = -1;
137 137 module_param(kbd_backlight_timeout, int, 0444);
138 138 MODULE_PARM_DESC(kbd_backlight_timeout,
139 -     "set this to 0 to set the default 10 seconds timeout, "
140 -     "1 for 30 seconds, 2 for 60 seconds and 3 to disable timeout "
141 -     "(default: 0)");
139 +     "meaningful values vary from 0 to 3 and their meaning depends "
140 +     "on the model (default: no change from current value)");
142 141
143 142 #ifdef CONFIG_PM_SLEEP
144 143 static void sony_nc_kbd_backlight_resume(void);
··· 1843 1844     if (!kbdbl_ctl)
1844 1845         return -ENOMEM;
1845 1846
1847 +     kbdbl_ctl->mode = kbd_backlight;
1848 +     kbdbl_ctl->timeout = kbd_backlight_timeout;
1846 1849     kbdbl_ctl->handle = handle;
1847 1850     if (handle == 0x0137)
1848 1851         kbdbl_ctl->base = 0x0C00;
··· 1871 1870     if (ret)
1872 1871         goto outmode;
1873 1872
1874 -     __sony_nc_kbd_backlight_mode_set(kbd_backlight);
1875 -     __sony_nc_kbd_backlight_timeout_set(kbd_backlight_timeout);
1873 +     __sony_nc_kbd_backlight_mode_set(kbdbl_ctl->mode);
1874 +     __sony_nc_kbd_backlight_timeout_set(kbdbl_ctl->timeout);
1876 1875
1877 1876     return 0;
1878 1877
··· 1887 1886 static void sony_nc_kbd_backlight_cleanup(struct platform_device *pd)
1888 1887 {
1889 1888     if (kbdbl_ctl) {
1890 -         int result;
1891 -
1892 1889         device_remove_file(&pd->dev, &kbdbl_ctl->mode_attr);
1893 1890         device_remove_file(&pd->dev, &kbdbl_ctl->timeout_attr);
1894 -
1895 -         /* restore the default hw behaviour */
1896 -         sony_call_snc_handle(kbdbl_ctl->handle,
1897 -                 kbdbl_ctl->base | 0x10000, &result);
1898 -         sony_call_snc_handle(kbdbl_ctl->handle,
1899 -                 kbdbl_ctl->base + 0x200, &result);
1900 -
1901 1891         kfree(kbdbl_ctl);
1902 1892         kbdbl_ctl = NULL;
1903 1893     }
+71 -27
drivers/s390/block/dasd_eckd.c
··· 2077 2077     int intensity = 0;
2078 2078     int r0_perm;
2079 2079     int nr_tracks;
2080 +     int use_prefix;
2080 2081
2081 2082     startdev = dasd_alias_get_start_dev(base);
2082 2083     if (!startdev)
··· 2107 2106         intensity = fdata->intensity;
2108 2107     }
2109 2108
2109 +     use_prefix = base_priv->features.feature[8] & 0x01;
2110 +
2110 2111     switch (intensity) {
2111 2112     case 0x00: /* Normal format */
2112 2113     case 0x08: /* Normal format, use cdl. */
2113 2114         cplength = 2 + (rpt*nr_tracks);
2114 -         datasize = sizeof(struct PFX_eckd_data) +
2115 -             sizeof(struct LO_eckd_data) +
2116 -             rpt * nr_tracks * sizeof(struct eckd_count);
2115 +         if (use_prefix)
2116 +             datasize = sizeof(struct PFX_eckd_data) +
2117 +                 sizeof(struct LO_eckd_data) +
2118 +                 rpt * nr_tracks * sizeof(struct eckd_count);
2119 +         else
2120 +             datasize = sizeof(struct DE_eckd_data) +
2121 +                 sizeof(struct LO_eckd_data) +
2122 +                 rpt * nr_tracks * sizeof(struct eckd_count);
2117 2123         break;
2118 2124     case 0x01: /* Write record zero and format track. */
2119 2125     case 0x09: /* Write record zero and format track, use cdl. */
2120 2126         cplength = 2 + rpt * nr_tracks;
2121 -         datasize = sizeof(struct PFX_eckd_data) +
2122 -             sizeof(struct LO_eckd_data) +
2123 -             sizeof(struct eckd_count) +
2124 -             rpt * nr_tracks * sizeof(struct eckd_count);
2127 +         if (use_prefix)
2128 +             datasize = sizeof(struct PFX_eckd_data) +
2129 +                 sizeof(struct LO_eckd_data) +
2130 +                 sizeof(struct eckd_count) +
2131 +                 rpt * nr_tracks * sizeof(struct eckd_count);
2132 +         else
2133 +             datasize = sizeof(struct DE_eckd_data) +
2134 +                 sizeof(struct LO_eckd_data) +
2135 +                 sizeof(struct eckd_count) +
2136 +                 rpt * nr_tracks * sizeof(struct eckd_count);
2125 2137         break;
2126 2138     case 0x04: /* Invalidate track. */
2127 2139     case 0x0c: /* Invalidate track, use cdl. */
2128 2140         cplength = 3;
2129 -         datasize = sizeof(struct PFX_eckd_data) +
2130 -             sizeof(struct LO_eckd_data) +
2131 -             sizeof(struct eckd_count);
2141 +         if (use_prefix)
2142 +             datasize = sizeof(struct PFX_eckd_data) +
2143 +                 sizeof(struct LO_eckd_data) +
2144 +                 sizeof(struct eckd_count);
2145 +         else
2146 +             datasize = sizeof(struct DE_eckd_data) +
2147 +                 sizeof(struct LO_eckd_data) +
2148 +                 sizeof(struct eckd_count);
2132 2149         break;
2133 2150     default:
2134 2151         dev_warn(&startdev->cdev->dev,
··· 2166 2147
2167 2148     switch (intensity & ~0x08) {
2168 2149     case 0x00: /* Normal format. */
2169 -         prefix(ccw++, (struct PFX_eckd_data *) data,
2170 -                fdata->start_unit, fdata->stop_unit,
2171 -                DASD_ECKD_CCW_WRITE_CKD, base, startdev);
2172 -         /* grant subsystem permission to format R0 */
2173 -         if (r0_perm)
2174 -             ((struct PFX_eckd_data *)data)
2175 -                 ->define_extent.ga_extended |= 0x04;
2176 -         data += sizeof(struct PFX_eckd_data);
2150 +         if (use_prefix) {
2151 +             prefix(ccw++, (struct PFX_eckd_data *) data,
2152 +                    fdata->start_unit, fdata->stop_unit,
2153 +                    DASD_ECKD_CCW_WRITE_CKD, base, startdev);
2154 +             /* grant subsystem permission to format R0 */
2155 +             if (r0_perm)
2156 +                 ((struct PFX_eckd_data *)data)
2157 +                     ->define_extent.ga_extended |= 0x04;
2158 +             data += sizeof(struct PFX_eckd_data);
2159 +         } else {
2160 +             define_extent(ccw++, (struct DE_eckd_data *) data,
2161 +                    fdata->start_unit, fdata->stop_unit,
2162 +                    DASD_ECKD_CCW_WRITE_CKD, startdev);
2163 +             /* grant subsystem permission to format R0 */
2164 +             if (r0_perm)
2165 +                 ((struct DE_eckd_data *) data)
2166 +                     ->ga_extended |= 0x04;
2167 +             data += sizeof(struct DE_eckd_data);
2168 +         }
2177 2169         ccw[-1].flags |= CCW_FLAG_CC;
2178 2170         locate_record(ccw++, (struct LO_eckd_data *) data,
2179 2171                   fdata->start_unit, 0, rpt*nr_tracks,
··· 2193 2163         data += sizeof(struct LO_eckd_data);
2194 2164         break;
2195 2165     case 0x01: /* Write record zero + format track. */
2196 -         prefix(ccw++, (struct PFX_eckd_data *) data,
2197 -                fdata->start_unit, fdata->stop_unit,
2198 -                DASD_ECKD_CCW_WRITE_RECORD_ZERO,
2199 -                base, startdev);
2200 -         data += sizeof(struct PFX_eckd_data);
2166 +         if (use_prefix) {
2167 +             prefix(ccw++, (struct PFX_eckd_data *) data,
2168 +                    fdata->start_unit, fdata->stop_unit,
2169 +                    DASD_ECKD_CCW_WRITE_RECORD_ZERO,
2170 +                    base, startdev);
2171 +             data += sizeof(struct PFX_eckd_data);
2172 +         } else {
2173 +             define_extent(ccw++, (struct DE_eckd_data *) data,
2174 +                    fdata->start_unit, fdata->stop_unit,
2175 +                    DASD_ECKD_CCW_WRITE_RECORD_ZERO, startdev);
2176 +             data += sizeof(struct DE_eckd_data);
2177 +         }
2201 2178         ccw[-1].flags |= CCW_FLAG_CC;
2202 2179         locate_record(ccw++, (struct LO_eckd_data *) data,
2203 2180                   fdata->start_unit, 0, rpt * nr_tracks + 1,
··· 2213 2176         data += sizeof(struct LO_eckd_data);
2214 2177         break;
2215 2178     case 0x04: /* Invalidate track. */
2216 -         prefix(ccw++, (struct PFX_eckd_data *) data,
2217 -                fdata->start_unit, fdata->stop_unit,
2218 -                DASD_ECKD_CCW_WRITE_CKD, base, startdev);
2219 -         data += sizeof(struct PFX_eckd_data);
2179 +         if (use_prefix) {
2180 +             prefix(ccw++, (struct PFX_eckd_data *) data,
2181 +                    fdata->start_unit, fdata->stop_unit,
2182 +                    DASD_ECKD_CCW_WRITE_CKD, base, startdev);
2183 +             data += sizeof(struct PFX_eckd_data);
2184 +         } else {
2185 +             define_extent(ccw++, (struct DE_eckd_data *) data,
2186 +                    fdata->start_unit, fdata->stop_unit,
2187 +                    DASD_ECKD_CCW_WRITE_CKD, startdev);
2188 +             data += sizeof(struct DE_eckd_data);
2189 +         }
2220 2190         ccw[-1].flags |= CCW_FLAG_CC;
2221 2191         locate_record(ccw++, (struct LO_eckd_data *) data,
2222 2192                   fdata->start_unit, 0, 1,
+2 -2
drivers/s390/char/sclp.c
··· 486 486 timeout = 0; 487 487 if (timer_pending(&sclp_request_timer)) { 488 488 /* Get timeout TOD value */ 489 - timeout = get_tod_clock() + 489 + timeout = get_tod_clock_fast() + 490 490 sclp_tod_from_jiffies(sclp_request_timer.expires - 491 491 jiffies); 492 492 } ··· 508 508 while (sclp_running_state != sclp_running_state_idle) { 509 509 /* Check for expired request timer */ 510 510 if (timer_pending(&sclp_request_timer) && 511 - get_tod_clock() > timeout && 511 + get_tod_clock_fast() > timeout && 512 512 del_timer(&sclp_request_timer)) 513 513 sclp_request_timer.function(sclp_request_timer.data); 514 514 cpu_relax();
+5 -3
drivers/s390/char/sclp_cmd.c
··· 145 145 146 146 if (sccb->header.response_code != 0x20) 147 147 return 0; 148 - if (sccb->sclp_send_mask & (EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK)) 149 - return 1; 150 - return 0; 148 + if (!(sccb->sclp_send_mask & (EVTYP_OPCMD_MASK | EVTYP_PMSGCMD_MASK))) 149 + return 0; 150 + if (!(sccb->sclp_receive_mask & (EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK))) 151 + return 0; 152 + return 1; 151 153 } 152 154 153 155 bool __init sclp_has_vt220(void)
+1 -1
drivers/s390/char/tty3270.c
··· 810 810 struct winsize ws; 811 811 812 812 screen = tty3270_alloc_screen(tp->n_rows, tp->n_cols); 813 - if (!screen) 813 + if (IS_ERR(screen)) 814 814 return; 815 815 /* Switch to new output size */ 816 816 spin_lock_bh(&tp->view.lock);
+1 -1
drivers/s390/char/vmlogrdr.c
··· 313 313 int ret; 314 314 315 315 dev_num = iminor(inode); 316 - if (dev_num > MAXMINOR) 316 + if (dev_num >= MAXMINOR) 317 317 return -ENODEV; 318 318 logptr = &sys_ser[dev_num]; 319 319
+2 -2
drivers/s390/cio/cio.c
··· 878 878 atomic_inc(&chpid_reset_count); 879 879 } 880 880 /* Wait for machine check for all channel paths. */ 881 - timeout = get_tod_clock() + (RCHP_TIMEOUT << 12); 881 + timeout = get_tod_clock_fast() + (RCHP_TIMEOUT << 12); 882 882 while (atomic_read(&chpid_reset_count) != 0) { 883 - if (get_tod_clock() > timeout) 883 + if (get_tod_clock_fast() > timeout) 884 884 break; 885 885 cpu_relax(); 886 886 }
+5 -5
drivers/s390/cio/qdio_main.c
··· 338 338 retries++; 339 339 340 340 if (!start_time) { 341 - start_time = get_tod_clock(); 341 + start_time = get_tod_clock_fast(); 342 342 goto again; 343 343 } 344 - if ((get_tod_clock() - start_time) < QDIO_BUSY_BIT_PATIENCE) 344 + if (get_tod_clock_fast() - start_time < QDIO_BUSY_BIT_PATIENCE) 345 345 goto again; 346 346 } 347 347 if (retries) { ··· 504 504 int count, stop; 505 505 unsigned char state = 0; 506 506 507 - q->timestamp = get_tod_clock(); 507 + q->timestamp = get_tod_clock_fast(); 508 508 509 509 /* 510 510 * Don't check 128 buffers, as otherwise qdio_inbound_q_moved ··· 595 595 * At this point we know, that inbound first_to_check 596 596 * has (probably) not moved (see qdio_inbound_processing). 597 597 */ 598 - if (get_tod_clock() > q->u.in.timestamp + QDIO_INPUT_THRESHOLD) { 598 + if (get_tod_clock_fast() > q->u.in.timestamp + QDIO_INPUT_THRESHOLD) { 599 599 DBF_DEV_EVENT(DBF_INFO, q->irq_ptr, "in done:%02x", 600 600 q->first_to_check); 601 601 return 1; ··· 728 728 int count, stop; 729 729 unsigned char state = 0; 730 730 731 - q->timestamp = get_tod_clock(); 731 + q->timestamp = get_tod_clock_fast(); 732 732 733 733 if (need_siga_sync(q)) 734 734 if (((queue_type(q) != QDIO_IQDIO_QFMT) &&
+2 -1
drivers/spi/spi-atmel.c
··· 1583 1583 /* Initialize the hardware */ 1584 1584 ret = clk_prepare_enable(clk); 1585 1585 if (ret) 1586 - goto out_unmap_regs; 1586 + goto out_free_irq; 1587 1587 spi_writel(as, CR, SPI_BIT(SWRST)); 1588 1588 spi_writel(as, CR, SPI_BIT(SWRST)); /* AT91SAM9263 Rev B workaround */ 1589 1589 if (as->caps.has_wdrbt) { ··· 1614 1614 spi_writel(as, CR, SPI_BIT(SWRST)); 1615 1615 spi_writel(as, CR, SPI_BIT(SWRST)); /* AT91SAM9263 Rev B workaround */ 1616 1616 clk_disable_unprepare(clk); 1617 + out_free_irq: 1617 1618 free_irq(irq, master); 1618 1619 out_unmap_regs: 1619 1620 iounmap(as->regs);
-3
drivers/spi/spi-clps711x.c
··· 226 226 dev_name(&pdev->dev), hw); 227 227 if (ret) { 228 228 dev_err(&pdev->dev, "Can't request IRQ\n"); 229 - clk_put(hw->spi_clk); 230 229 goto clk_out; 231 230 } 232 231 ··· 246 247 gpio_free(hw->chipselect[i]); 247 248 248 249 spi_master_put(master); 249 - kfree(master); 250 250 251 251 return ret; 252 252 } ··· 261 263 gpio_free(hw->chipselect[i]); 262 264 263 265 spi_unregister_master(master); 264 - kfree(master); 265 266 266 267 return 0; 267 268 }
+2 -8
drivers/spi/spi-fsl-dspi.c
··· 476 476 master->bus_num = bus_num; 477 477 478 478 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 479 - if (!res) { 480 - dev_err(&pdev->dev, "can't get platform resource\n"); 481 - ret = -EINVAL; 482 - goto out_master_put; 483 - } 484 - 485 479 dspi->base = devm_ioremap_resource(&pdev->dev, res); 486 - if (!dspi->base) { 487 - ret = -EINVAL; 480 + if (IS_ERR(dspi->base)) { 481 + ret = PTR_ERR(dspi->base); 488 482 goto out_master_put; 489 483 } 490 484
+3 -1
drivers/spi/spi-mpc512x-psc.c
··· 522 522 psc_num = master->bus_num; 523 523 snprintf(clk_name, sizeof(clk_name), "psc%d_mclk", psc_num); 524 524 clk = devm_clk_get(dev, clk_name); 525 - if (IS_ERR(clk)) 525 + if (IS_ERR(clk)) { 526 + ret = PTR_ERR(clk); 526 527 goto free_irq; 528 + } 527 529 ret = clk_prepare_enable(clk); 528 530 if (ret) 529 531 goto free_irq;
+10 -1
drivers/spi/spi-pxa2xx.c
··· 546 546 if (pm_runtime_suspended(&drv_data->pdev->dev)) 547 547 return IRQ_NONE; 548 548 549 - sccr1_reg = read_SSCR1(reg); 549 + /* 550 + * If the device is not yet in RPM suspended state and we get an 551 + * interrupt that is meant for another device, check if status bits 552 + * are all set to one. That means that the device is already 553 + * powered off. 554 + */ 550 555 status = read_SSSR(reg); 556 + if (status == ~0) 557 + return IRQ_NONE; 558 + 559 + sccr1_reg = read_SSCR1(reg); 551 560 552 561 /* Ignore possible writes if we don't need to write */ 553 562 if (!(sccr1_reg & SSCR1_TIE))
+2 -2
drivers/spi/spi-s3c64xx.c
··· 1428 1428 S3C64XX_SPI_INT_TX_OVERRUN_EN | S3C64XX_SPI_INT_TX_UNDERRUN_EN, 1429 1429 sdd->regs + S3C64XX_SPI_INT_EN); 1430 1430 1431 + pm_runtime_enable(&pdev->dev); 1432 + 1431 1433 if (spi_register_master(master)) { 1432 1434 dev_err(&pdev->dev, "cannot register SPI master\n"); 1433 1435 ret = -EBUSY; ··· 1441 1439 dev_dbg(&pdev->dev, "\tIOmem=[%pR]\tDMA=[Rx-%d, Tx-%d]\n", 1442 1440 mem_res, 1443 1441 sdd->rx_dma.dmach, sdd->tx_dma.dmach); 1444 - 1445 - pm_runtime_enable(&pdev->dev); 1446 1442 1447 1443 return 0; 1448 1444
+2 -2
drivers/spi/spi-sh-hspi.c
··· 296 296 goto error1; 297 297 } 298 298 299 + pm_runtime_enable(&pdev->dev); 300 + 299 301 master->num_chipselect = 1; 300 302 master->bus_num = pdev->id; 301 303 master->setup = hspi_setup; ··· 310 308 dev_err(&pdev->dev, "spi_register_master error.\n"); 311 309 goto error1; 312 310 } 313 - 314 - pm_runtime_enable(&pdev->dev); 315 311 316 312 return 0; 317 313
+10 -15
drivers/staging/comedi/drivers/ni_65xx.c
··· 369 369 { 370 370 const struct ni_65xx_board *board = comedi_board(dev); 371 371 struct ni_65xx_private *devpriv = dev->private; 372 - unsigned base_bitfield_channel; 373 - const unsigned max_ports_per_bitfield = 5; 372 + int base_bitfield_channel; 374 373 unsigned read_bits = 0; 375 - unsigned j; 374 + int last_port_offset = ni_65xx_port_by_channel(s->n_chan - 1); 375 + int port_offset; 376 376 377 377 base_bitfield_channel = CR_CHAN(insn->chanspec); 378 - for (j = 0; j < max_ports_per_bitfield; ++j) { 379 - const unsigned port_offset = 380 - ni_65xx_port_by_channel(base_bitfield_channel) + j; 381 - const unsigned port = 382 - sprivate(s)->base_port + port_offset; 383 - unsigned base_port_channel; 378 + for (port_offset = ni_65xx_port_by_channel(base_bitfield_channel); 379 + port_offset <= last_port_offset; port_offset++) { 380 + unsigned port = sprivate(s)->base_port + port_offset; 381 + int base_port_channel = port_offset * ni_65xx_channels_per_port; 384 382 unsigned port_mask, port_data, port_read_bits; 385 - int bitshift; 386 - if (port >= ni_65xx_total_num_ports(board)) 383 + int bitshift = base_port_channel - base_bitfield_channel; 384 + 385 + if (bitshift >= 32) 387 386 break; 388 - base_port_channel = port_offset * ni_65xx_channels_per_port; 389 387 port_mask = data[0]; 390 388 port_data = data[1]; 391 - bitshift = base_port_channel - base_bitfield_channel; 392 - if (bitshift >= 32 || bitshift <= -32) 393 - break; 394 389 if (bitshift > 0) { 395 390 port_mask >>= bitshift; 396 391 port_data >>= bitshift;
+1
drivers/staging/media/msi3101/Kconfig
··· 1 1 config USB_MSI3101 2 2 tristate "Mirics MSi3101 SDR Dongle" 3 3 depends on USB && VIDEO_DEV && VIDEO_V4L2 4 + select VIDEOBUF2_VMALLOC
+8 -2
drivers/staging/media/msi3101/sdr-msi3101.c
··· 1131 1131 /* Absolute min and max number of buffers available for mmap() */ 1132 1132 *nbuffers = 32; 1133 1133 *nplanes = 1; 1134 - sizes[0] = PAGE_ALIGN(3 * 3072); /* 3 * 768 * 4 */ 1134 + /* 1135 + * 3, wMaxPacketSize 3x 1024 bytes 1136 + * 504, max IQ sample pairs per 1024 frame 1137 + * 2, two samples, I and Q 1138 + * 4, 32-bit float 1139 + */ 1140 + sizes[0] = PAGE_ALIGN(3 * 504 * 2 * 4); /* = 12096 */ 1135 1141 dev_dbg(&s->udev->dev, "%s: nbuffers=%d sizes[0]=%d\n", 1136 1142 __func__, *nbuffers, sizes[0]); 1137 1143 return 0; ··· 1663 1657 f->frequency * 625UL / 10UL); 1664 1658 } 1665 1659 1666 - const struct v4l2_ioctl_ops msi3101_ioctl_ops = { 1660 + static const struct v4l2_ioctl_ops msi3101_ioctl_ops = { 1667 1661 .vidioc_querycap = msi3101_querycap, 1668 1662 1669 1663 .vidioc_enum_input = msi3101_enum_input,
+9 -4
drivers/target/iscsi/iscsi_target.c
··· 753 753 754 754 static void iscsit_ack_from_expstatsn(struct iscsi_conn *conn, u32 exp_statsn) 755 755 { 756 - struct iscsi_cmd *cmd; 756 + LIST_HEAD(ack_list); 757 + struct iscsi_cmd *cmd, *cmd_p; 757 758 758 759 conn->exp_statsn = exp_statsn; 759 760 ··· 762 761 return; 763 762 764 763 spin_lock_bh(&conn->cmd_lock); 765 - list_for_each_entry(cmd, &conn->conn_cmd_list, i_conn_node) { 764 + list_for_each_entry_safe(cmd, cmd_p, &conn->conn_cmd_list, i_conn_node) { 766 765 spin_lock(&cmd->istate_lock); 767 766 if ((cmd->i_state == ISTATE_SENT_STATUS) && 768 767 iscsi_sna_lt(cmd->stat_sn, exp_statsn)) { 769 768 cmd->i_state = ISTATE_REMOVE; 770 769 spin_unlock(&cmd->istate_lock); 771 - iscsit_add_cmd_to_immediate_queue(cmd, conn, 772 - cmd->i_state); 770 + list_move_tail(&cmd->i_conn_node, &ack_list); 773 771 continue; 774 772 } 775 773 spin_unlock(&cmd->istate_lock); 776 774 } 777 775 spin_unlock_bh(&conn->cmd_lock); 776 + 777 + list_for_each_entry_safe(cmd, cmd_p, &ack_list, i_conn_node) { 778 + list_del(&cmd->i_conn_node); 779 + iscsit_free_cmd(cmd, false); 780 + } 778 781 } 779 782 780 783 static int iscsit_allocate_iovecs(struct iscsi_cmd *cmd)
+1 -1
drivers/target/iscsi/iscsi_target_nego.c
··· 1192 1192 */ 1193 1193 alloc_tags: 1194 1194 tag_num = max_t(u32, ISCSIT_MIN_TAGS, queue_depth); 1195 - tag_num += ISCSIT_EXTRA_TAGS; 1195 + tag_num += (tag_num / 2) + ISCSIT_EXTRA_TAGS; 1196 1196 tag_size = sizeof(struct iscsi_cmd) + conn->conn_transport->priv_size; 1197 1197 1198 1198 ret = transport_alloc_session_tags(sess->se_sess, tag_num, tag_size);
+2 -2
drivers/target/iscsi/iscsi_target_util.c
··· 736 736 * Fallthrough 737 737 */ 738 738 case ISCSI_OP_SCSI_TMFUNC: 739 - rc = transport_generic_free_cmd(&cmd->se_cmd, 1); 739 + rc = transport_generic_free_cmd(&cmd->se_cmd, shutdown); 740 740 if (!rc && shutdown && se_cmd && se_cmd->se_sess) { 741 741 __iscsit_free_cmd(cmd, true, shutdown); 742 742 target_put_sess_cmd(se_cmd->se_sess, se_cmd); ··· 752 752 se_cmd = &cmd->se_cmd; 753 753 __iscsit_free_cmd(cmd, true, shutdown); 754 754 755 - rc = transport_generic_free_cmd(&cmd->se_cmd, 1); 755 + rc = transport_generic_free_cmd(&cmd->se_cmd, shutdown); 756 756 if (!rc && shutdown && se_cmd->se_sess) { 757 757 __iscsit_free_cmd(cmd, true, shutdown); 758 758 target_put_sess_cmd(se_cmd->se_sess, se_cmd);
+26 -2
drivers/target/target_core_sbc.c
··· 349 349 { 350 350 struct se_device *dev = cmd->se_dev; 351 351 352 - cmd->se_cmd_flags |= SCF_COMPARE_AND_WRITE_POST; 352 + /* 353 + * Only set SCF_COMPARE_AND_WRITE_POST to force a response fall-through 354 + * within target_complete_ok_work() if the command was successfully 355 + * sent to the backend driver. 356 + */ 357 + spin_lock_irq(&cmd->t_state_lock); 358 + if ((cmd->transport_state & CMD_T_SENT) && !cmd->scsi_status) 359 + cmd->se_cmd_flags |= SCF_COMPARE_AND_WRITE_POST; 360 + spin_unlock_irq(&cmd->t_state_lock); 361 + 353 362 /* 354 363 * Unlock ->caw_sem originally obtained during sbc_compare_and_write() 355 364 * before the original READ I/O submission. ··· 372 363 { 373 364 struct se_device *dev = cmd->se_dev; 374 365 struct scatterlist *write_sg = NULL, *sg; 375 - unsigned char *buf, *addr; 366 + unsigned char *buf = NULL, *addr; 376 367 struct sg_mapping_iter m; 377 368 unsigned int offset = 0, len; 378 369 unsigned int nlbas = cmd->t_task_nolb; ··· 387 378 */ 388 379 if (!cmd->t_data_sg || !cmd->t_bidi_data_sg) 389 380 return TCM_NO_SENSE; 381 + /* 382 + * Immediately exit + release dev->caw_sem if command has already 383 + * been failed with a non-zero SCSI status. 384 + */ 385 + if (cmd->scsi_status) { 386 + pr_err("compare_and_write_callback: non zero scsi_status:" 387 + " 0x%02x\n", cmd->scsi_status); 388 + goto out; 389 + } 390 390 391 391 buf = kzalloc(cmd->data_length, GFP_KERNEL); 392 392 if (!buf) { ··· 526 508 cmd->transport_complete_callback = NULL; 527 509 return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 528 510 } 511 + /* 512 + * Reset cmd->data_length to individual block_size in order to not 513 + * confuse backend drivers that depend on this value matching the 514 + * size of the I/O being submitted. 515 + */ 516 + cmd->data_length = cmd->t_task_nolb * dev->dev_attrib.block_size; 529 517 530 518 ret = cmd->execute_rw(cmd, cmd->t_bidi_data_sg, cmd->t_bidi_data_nents, 531 519 DMA_FROM_DEVICE);
+15 -5
drivers/target/target_core_transport.c
··· 236 236 { 237 237 int rc; 238 238 239 - se_sess->sess_cmd_map = kzalloc(tag_num * tag_size, GFP_KERNEL); 239 + se_sess->sess_cmd_map = kzalloc(tag_num * tag_size, 240 + GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT); 240 241 if (!se_sess->sess_cmd_map) { 241 - pr_err("Unable to allocate se_sess->sess_cmd_map\n"); 242 - return -ENOMEM; 242 + se_sess->sess_cmd_map = vzalloc(tag_num * tag_size); 243 + if (!se_sess->sess_cmd_map) { 244 + pr_err("Unable to allocate se_sess->sess_cmd_map\n"); 245 + return -ENOMEM; 246 + } 243 247 } 244 248 245 249 rc = percpu_ida_init(&se_sess->sess_tag_pool, tag_num); 246 250 if (rc < 0) { 247 251 pr_err("Unable to init se_sess->sess_tag_pool," 248 252 " tag_num: %u\n", tag_num); 249 - kfree(se_sess->sess_cmd_map); 253 + if (is_vmalloc_addr(se_sess->sess_cmd_map)) 254 + vfree(se_sess->sess_cmd_map); 255 + else 256 + kfree(se_sess->sess_cmd_map); 250 257 se_sess->sess_cmd_map = NULL; 251 258 return -ENOMEM; 252 259 } ··· 419 412 { 420 413 if (se_sess->sess_cmd_map) { 421 414 percpu_ida_destroy(&se_sess->sess_tag_pool); 422 - kfree(se_sess->sess_cmd_map); 415 + if (is_vmalloc_addr(se_sess->sess_cmd_map)) 416 + vfree(se_sess->sess_cmd_map); 417 + else 418 + kfree(se_sess->sess_cmd_map); 423 419 } 424 420 kmem_cache_free(se_sess_cache, se_sess); 425 421 }
+2 -2
drivers/target/target_core_xcopy.c
··· 298 298 (unsigned long long)xop->dst_lba); 299 299 300 300 if (dc != 0) { 301 - xop->dbl = (desc[29] << 16) & 0xff; 302 - xop->dbl |= (desc[30] << 8) & 0xff; 301 + xop->dbl = (desc[29] & 0xff) << 16; 302 + xop->dbl |= (desc[30] & 0xff) << 8; 303 303 xop->dbl |= desc[31] & 0xff; 304 304 305 305 pr_debug("XCOPY seg desc 0x02: DC=1 w/ dbl: %u\n", xop->dbl);
-2
drivers/thermal/samsung/exynos_thermal_common.c
··· 310 310 } 311 311 312 312 th_zone = conf->pzone_data; 313 - if (th_zone->therm_dev) 314 - return; 315 313 316 314 if (th_zone->bind == false) { 317 315 for (i = 0; i < th_zone->cool_dev_size; i++) {
+8 -4
drivers/thermal/samsung/exynos_tmu.c
··· 317 317 318 318 con = readl(data->base + reg->tmu_ctrl); 319 319 320 + if (pdata->test_mux) 321 + con |= (pdata->test_mux << reg->test_mux_addr_shift); 322 + 320 323 if (pdata->reference_voltage) { 321 324 con &= ~(reg->buf_vref_sel_mask << reg->buf_vref_sel_shift); 322 325 con |= pdata->reference_voltage << reg->buf_vref_sel_shift; ··· 491 488 }, 492 489 { 493 490 .compatible = "samsung,exynos4412-tmu", 494 - .data = (void *)EXYNOS5250_TMU_DRV_DATA, 491 + .data = (void *)EXYNOS4412_TMU_DRV_DATA, 495 492 }, 496 493 { 497 494 .compatible = "samsung,exynos5250-tmu", ··· 632 629 if (ret) 633 630 return ret; 634 631 635 - if (pdata->type == SOC_ARCH_EXYNOS || 636 - pdata->type == SOC_ARCH_EXYNOS4210 || 637 - pdata->type == SOC_ARCH_EXYNOS5440) 632 + if (pdata->type == SOC_ARCH_EXYNOS4210 || 633 + pdata->type == SOC_ARCH_EXYNOS4412 || 634 + pdata->type == SOC_ARCH_EXYNOS5250 || 635 + pdata->type == SOC_ARCH_EXYNOS5440) 638 636 data->soc = pdata->type; 639 637 else { 640 638 ret = -EINVAL;
+6 -1
drivers/thermal/samsung/exynos_tmu.h
··· 41 41 42 42 enum soc_type { 43 43 SOC_ARCH_EXYNOS4210 = 1, 44 - SOC_ARCH_EXYNOS, 44 + SOC_ARCH_EXYNOS4412, 45 + SOC_ARCH_EXYNOS5250, 45 46 SOC_ARCH_EXYNOS5440, 46 47 }; 47 48 ··· 85 84 * @triminfo_reload_shift: shift of triminfo reload enable bit in triminfo_ctrl 86 85 reg. 87 86 * @tmu_ctrl: TMU main controller register. 87 + * @test_mux_addr_shift: shift bits of test mux address. 88 88 * @buf_vref_sel_shift: shift bits of reference voltage in tmu_ctrl register. 89 89 * @buf_vref_sel_mask: mask bits of reference voltage in tmu_ctrl register. 90 90 * @therm_trip_mode_shift: shift bits of tripping mode in tmu_ctrl register. ··· 152 150 u32 triminfo_reload_shift; 153 151 154 152 u32 tmu_ctrl; 153 + u32 test_mux_addr_shift; 155 154 u32 buf_vref_sel_shift; 156 155 u32 buf_vref_sel_mask; 157 156 u32 therm_trip_mode_shift; ··· 260 257 * @first_point_trim: temp value of the first point trimming 261 258 * @second_point_trim: temp value of the second point trimming 262 259 * @default_temp_offset: default temperature offset in case of no trimming 260 + * @test_mux; information if SoC supports test MUX 263 261 * @cal_type: calibration type for temperature 264 262 * @cal_mode: calibration mode for temperature 265 263 * @freq_clip_table: Table representing frequency reduction percentage. ··· 290 286 u8 first_point_trim; 291 287 u8 second_point_trim; 292 288 u8 default_temp_offset; 289 + u8 test_mux; 293 290 294 291 enum calibration_type cal_type; 295 292 enum calibration_mode cal_mode;
+24 -6
drivers/thermal/samsung/exynos_tmu_data.c
··· 90 90 };
91 91 #endif
92 92
93 - #if defined(CONFIG_SOC_EXYNOS5250) || defined(CONFIG_SOC_EXYNOS4412)
94 - static const struct exynos_tmu_registers exynos5250_tmu_registers = {
93 + #if defined(CONFIG_SOC_EXYNOS4412) || defined(CONFIG_SOC_EXYNOS5250)
94 + static const struct exynos_tmu_registers exynos4412_tmu_registers = {
95 95 .triminfo_data = EXYNOS_TMU_REG_TRIMINFO,
96 96 .triminfo_25_shift = EXYNOS_TRIMINFO_25_SHIFT,
97 97 .triminfo_85_shift = EXYNOS_TRIMINFO_85_SHIFT,
98 98 .triminfo_ctrl = EXYNOS_TMU_TRIMINFO_CON,
99 99 .triminfo_reload_shift = EXYNOS_TRIMINFO_RELOAD_SHIFT,
100 100 .tmu_ctrl = EXYNOS_TMU_REG_CONTROL,
101 + .test_mux_addr_shift = EXYNOS4412_MUX_ADDR_SHIFT,
101 102 .buf_vref_sel_shift = EXYNOS_TMU_REF_VOLTAGE_SHIFT,
102 103 .buf_vref_sel_mask = EXYNOS_TMU_REF_VOLTAGE_MASK,
103 104 .therm_trip_mode_shift = EXYNOS_TMU_TRIP_MODE_SHIFT,
··· 129 128 .emul_time_mask = EXYNOS_EMUL_TIME_MASK,
130 129 };
131 130
132 - #define EXYNOS5250_TMU_DATA \
131 132 + #define EXYNOS4412_TMU_DATA \
133 132 .threshold_falling = 10, \
134 133 .trigger_levels[0] = 85, \
135 134 .trigger_levels[1] = 103, \
··· 163 162 .temp_level = 103, \
164 163 }, \
165 164 .freq_tab_count = 2, \
166 - .type = SOC_ARCH_EXYNOS, \
167 - .registers = &exynos5250_tmu_registers, \
165 + .registers = &exynos4412_tmu_registers, \
168 166 .features = (TMU_SUPPORT_EMULATION | TMU_SUPPORT_TRIM_RELOAD | \
169 167 TMU_SUPPORT_FALLING_TRIP | TMU_SUPPORT_READY_STATUS | \
170 168 TMU_SUPPORT_EMUL_TIME)
169 + #endif
171 170
171 + #if defined(CONFIG_SOC_EXYNOS4412)
172 + struct exynos_tmu_init_data const exynos4412_default_tmu_data = {
173 + .tmu_data = {
174 + {
175 + EXYNOS4412_TMU_DATA,
176 + .type = SOC_ARCH_EXYNOS4412,
177 + .test_mux = EXYNOS4412_MUX_ADDR_VALUE,
178 + },
179 + },
180 + .tmu_count = 1,
181 + };
182 + #endif
183 +
184 + #if defined(CONFIG_SOC_EXYNOS5250)
172 185 struct exynos_tmu_init_data const exynos5250_default_tmu_data = {
173 186 .tmu_data = {
174 - { EXYNOS5250_TMU_DATA },
187 + {
188 + EXYNOS4412_TMU_DATA,
189 + .type = SOC_ARCH_EXYNOS5250,
190 + },
175 191 },
176 192 .tmu_count = 1,
177 193 };
+12 -1
drivers/thermal/samsung/exynos_tmu_data.h
··· 95 95 96 96 #define EXYNOS_MAX_TRIGGER_PER_REG 4 97 97 98 + /* Exynos4412 specific */ 99 + #define EXYNOS4412_MUX_ADDR_VALUE 6 100 + #define EXYNOS4412_MUX_ADDR_SHIFT 20 101 + 98 102 /*exynos5440 specific registers*/ 99 103 #define EXYNOS5440_TMU_S0_7_TRIM 0x000 100 104 #define EXYNOS5440_TMU_S0_7_CTRL 0x020 ··· 142 138 #define EXYNOS4210_TMU_DRV_DATA (NULL) 143 139 #endif 144 140 145 - #if (defined(CONFIG_SOC_EXYNOS5250) || defined(CONFIG_SOC_EXYNOS4412)) 141 + #if defined(CONFIG_SOC_EXYNOS4412) 142 + extern struct exynos_tmu_init_data const exynos4412_default_tmu_data; 143 + #define EXYNOS4412_TMU_DRV_DATA (&exynos4412_default_tmu_data) 144 + #else 145 + #define EXYNOS4412_TMU_DRV_DATA (NULL) 146 + #endif 147 + 148 + #if defined(CONFIG_SOC_EXYNOS5250) 146 149 extern struct exynos_tmu_init_data const exynos5250_default_tmu_data; 147 150 #define EXYNOS5250_TMU_DRV_DATA (&exynos5250_default_tmu_data) 148 151 #else
+1 -1
drivers/thermal/thermal_hwmon.c
··· 159 159 160 160 INIT_LIST_HEAD(&hwmon->tz_list); 161 161 strlcpy(hwmon->type, tz->type, THERMAL_NAME_LENGTH); 162 - hwmon->device = hwmon_device_register(&tz->device); 162 + hwmon->device = hwmon_device_register(NULL); 163 163 if (IS_ERR(hwmon->device)) { 164 164 result = PTR_ERR(hwmon->device); 165 165 goto free_mem;
+1
drivers/thermal/ti-soc-thermal/ti-thermal-common.c
··· 110 110 } else { 111 111 dev_err(bgp->dev, 112 112 "Failed to read PCB state. Using defaults\n"); 113 + ret = 0; 113 114 } 114 115 } 115 116 *temp = ti_thermal_hotspot_temperature(tmp, slope, constant);
+8 -6
drivers/thermal/x86_pkg_temp_thermal.c
··· 316 316 int phy_id = topology_physical_package_id(cpu); 317 317 struct phy_dev_entry *phdev = pkg_temp_thermal_get_phy_entry(cpu); 318 318 bool notify = false; 319 + unsigned long flags; 319 320 320 321 if (!phdev) 321 322 return; 322 323 323 - spin_lock(&pkg_work_lock); 324 + spin_lock_irqsave(&pkg_work_lock, flags); 324 325 ++pkg_work_cnt; 325 326 if (unlikely(phy_id > max_phy_id)) { 326 - spin_unlock(&pkg_work_lock); 327 + spin_unlock_irqrestore(&pkg_work_lock, flags); 327 328 return; 328 329 } 329 330 pkg_work_scheduled[phy_id] = 0; 330 - spin_unlock(&pkg_work_lock); 331 + spin_unlock_irqrestore(&pkg_work_lock, flags); 331 332 332 333 enable_pkg_thres_interrupt(); 333 334 rdmsrl(MSR_IA32_PACKAGE_THERM_STATUS, msr_val); ··· 398 397 int thres_count; 399 398 u32 eax, ebx, ecx, edx; 400 399 u8 *temp; 400 + unsigned long flags; 401 401 402 402 cpuid(6, &eax, &ebx, &ecx, &edx); 403 403 thres_count = ebx & 0x07; ··· 422 420 goto err_ret_unlock; 423 421 } 424 422 425 - spin_lock(&pkg_work_lock); 423 + spin_lock_irqsave(&pkg_work_lock, flags); 426 424 if (topology_physical_package_id(cpu) > max_phy_id) 427 425 max_phy_id = topology_physical_package_id(cpu); 428 426 temp = krealloc(pkg_work_scheduled, 429 427 (max_phy_id+1) * sizeof(u8), GFP_ATOMIC); 430 428 if (!temp) { 431 - spin_unlock(&pkg_work_lock); 429 + spin_unlock_irqrestore(&pkg_work_lock, flags); 432 430 err = -ENOMEM; 433 431 goto err_ret_free; 434 432 } 435 433 pkg_work_scheduled = temp; 436 434 pkg_work_scheduled[topology_physical_package_id(cpu)] = 0; 437 - spin_unlock(&pkg_work_lock); 435 + spin_unlock_irqrestore(&pkg_work_lock, flags); 438 436 439 437 phy_dev_entry->phys_proc_id = topology_physical_package_id(cpu); 440 438 phy_dev_entry->first_cpu = cpu;
+1
drivers/tty/hvc/hvc_xen.c
··· 636 636 .name = "xenboot", 637 637 .write = xenboot_write_console, 638 638 .flags = CON_PRINTBUFFER | CON_BOOT | CON_ANYTIME, 639 + .index = -1, 639 640 }; 640 641 #endif /* CONFIG_EARLY_PRINTK */ 641 642
+26 -20
drivers/tty/n_tty.c
··· 2183 2183 2184 2184 if (!input_available_p(tty, 0)) { 2185 2185 if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) { 2186 - retval = -EIO; 2187 - break; 2188 - } 2189 - if (tty_hung_up_p(file)) 2190 - break; 2191 - if (!timeout) 2192 - break; 2193 - if (file->f_flags & O_NONBLOCK) { 2194 - retval = -EAGAIN; 2195 - break; 2196 - } 2197 - if (signal_pending(current)) { 2198 - retval = -ERESTARTSYS; 2199 - break; 2200 - } 2201 - n_tty_set_room(tty); 2202 - up_read(&tty->termios_rwsem); 2186 + up_read(&tty->termios_rwsem); 2187 + tty_flush_to_ldisc(tty); 2188 + down_read(&tty->termios_rwsem); 2189 + if (!input_available_p(tty, 0)) { 2190 + retval = -EIO; 2191 + break; 2192 + } 2193 + } else { 2194 + if (tty_hung_up_p(file)) 2195 + break; 2196 + if (!timeout) 2197 + break; 2198 + if (file->f_flags & O_NONBLOCK) { 2199 + retval = -EAGAIN; 2200 + break; 2201 + } 2202 + if (signal_pending(current)) { 2203 + retval = -ERESTARTSYS; 2204 + break; 2205 + } 2206 + n_tty_set_room(tty); 2207 + up_read(&tty->termios_rwsem); 2203 2208 2204 - timeout = schedule_timeout(timeout); 2209 + timeout = schedule_timeout(timeout); 2205 2210 2206 - down_read(&tty->termios_rwsem); 2207 - continue; 2211 + down_read(&tty->termios_rwsem); 2212 + continue; 2213 + } 2208 2214 } 2209 2215 __set_current_state(TASK_RUNNING); 2210 2216
-3
drivers/tty/serial/imx.c
··· 1912 1912 1913 1913 sport->devdata = of_id->data; 1914 1914 1915 - if (of_device_is_stdout_path(np)) 1916 - add_preferred_console(imx_reg.cons->name, sport->port.line, 0); 1917 - 1918 1915 return 0; 1919 1916 } 1920 1917 #else
+3 -2
drivers/tty/serial/vt8500_serial.c
··· 561 561 if (!mmres || !irqres) 562 562 return -ENODEV; 563 563 564 - if (np) 564 + if (np) { 565 565 port = of_alias_get_id(np, "serial"); 566 566 if (port >= VT8500_MAX_PORTS) 567 567 port = -1; 568 - else 568 + } else { 569 569 port = -1; 570 + } 570 571 571 572 if (port < 0) { 572 573 /* calculate the port id */
+6 -1
drivers/usb/chipidea/ci_hdrc_pci.c
··· 129 129 PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x0829), 130 130 .driver_data = (kernel_ulong_t)&penwell_pci_platdata, 131 131 }, 132 - { 0, 0, 0, 0, 0, 0, 0 /* end: all zeroes */ } 132 + { 133 + /* Intel Clovertrail */ 134 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xe006), 135 + .driver_data = (kernel_ulong_t)&penwell_pci_platdata, 136 + }, 137 + { 0 } /* end: all zeroes */ 133 138 }; 134 139 MODULE_DEVICE_TABLE(pci, ci_hdrc_pci_id_table); 135 140
+4 -2
drivers/usb/chipidea/host.c
··· 100 100 { 101 101 struct usb_hcd *hcd = ci->hcd; 102 102 103 - usb_remove_hcd(hcd); 104 - usb_put_hcd(hcd); 103 + if (hcd) { 104 + usb_remove_hcd(hcd); 105 + usb_put_hcd(hcd); 106 + } 105 107 if (ci->platdata->reg_vbus) 106 108 regulator_disable(ci->platdata->reg_vbus); 107 109 }
+6
drivers/usb/core/quirks.c
··· 97 97 /* Alcor Micro Corp. Hub */ 98 98 { USB_DEVICE(0x058f, 0x9254), .driver_info = USB_QUIRK_RESET_RESUME }, 99 99 100 + /* MicroTouch Systems touchscreen */ 101 + { USB_DEVICE(0x0596, 0x051e), .driver_info = USB_QUIRK_RESET_RESUME }, 102 + 100 103 /* appletouch */ 101 104 { USB_DEVICE(0x05ac, 0x021a), .driver_info = USB_QUIRK_RESET_RESUME }, 102 105 ··· 132 129 133 130 /* Broadcom BCM92035DGROM BT dongle */ 134 131 { USB_DEVICE(0x0a5c, 0x2021), .driver_info = USB_QUIRK_RESET_RESUME }, 132 + 133 + /* MAYA44USB sound device */ 134 + { USB_DEVICE(0x0a92, 0x0091), .driver_info = USB_QUIRK_RESET_RESUME }, 135 135 136 136 /* Action Semiconductor flash disk */ 137 137 { USB_DEVICE(0x10d6, 0x2200), .driver_info =
+2
drivers/usb/gadget/f_fs.c
··· 2256 2256 data->raw_descs + ret, 2257 2257 (sizeof data->raw_descs) - ret, 2258 2258 __ffs_func_bind_do_descs, func); 2259 + if (unlikely(ret < 0)) 2260 + goto error; 2259 2261 } 2260 2262 2261 2263 /*
+5 -4
drivers/usb/gadget/pxa25x_udc.c
··· 2054 2054 /* 2055 2055 * probe - binds to the platform device 2056 2056 */ 2057 - static int __init pxa25x_udc_probe(struct platform_device *pdev) 2057 + static int pxa25x_udc_probe(struct platform_device *pdev) 2058 2058 { 2059 2059 struct pxa25x_udc *dev = &memory; 2060 2060 int retval, irq; ··· 2203 2203 pullup_off(); 2204 2204 } 2205 2205 2206 - static int __exit pxa25x_udc_remove(struct platform_device *pdev) 2206 + static int pxa25x_udc_remove(struct platform_device *pdev) 2207 2207 { 2208 2208 struct pxa25x_udc *dev = platform_get_drvdata(pdev); 2209 2209 ··· 2294 2294 2295 2295 static struct platform_driver udc_driver = { 2296 2296 .shutdown = pxa25x_udc_shutdown, 2297 - .remove = __exit_p(pxa25x_udc_remove), 2297 + .probe = pxa25x_udc_probe, 2298 + .remove = pxa25x_udc_remove, 2298 2299 .suspend = pxa25x_udc_suspend, 2299 2300 .resume = pxa25x_udc_resume, 2300 2301 .driver = { ··· 2304 2303 }, 2305 2304 }; 2306 2305 2307 - module_platform_driver_probe(udc_driver, pxa25x_udc_probe); 2306 + module_platform_driver(udc_driver); 2308 2307 2309 2308 MODULE_DESCRIPTION(DRIVER_DESC); 2310 2309 MODULE_AUTHOR("Frank Becker, Robert Schwebel, David Brownell");
+1 -1
drivers/usb/gadget/s3c-hsotg.c
··· 543 543 * FIFO, requests of >512 cause the endpoint to get stuck with a 544 544 * fragment of the end of the transfer in it. 545 545 */ 546 - if (can_write > 512) 546 + if (can_write > 512 && !periodic) 547 547 can_write = 512; 548 548 549 549 /*
+2 -2
drivers/usb/host/pci-quirks.c
··· 799 799 * switchable ports. 800 800 */ 801 801 pci_write_config_dword(xhci_pdev, USB_INTEL_USB3_PSSEN, 802 - cpu_to_le32(ports_available)); 802 + ports_available); 803 803 804 804 pci_read_config_dword(xhci_pdev, USB_INTEL_USB3_PSSEN, 805 805 &ports_available); ··· 821 821 * host. 822 822 */ 823 823 pci_write_config_dword(xhci_pdev, USB_INTEL_XUSB2PR, 824 - cpu_to_le32(ports_available)); 824 + ports_available); 825 825 826 826 pci_read_config_dword(xhci_pdev, USB_INTEL_XUSB2PR, 827 827 &ports_available);
-26
drivers/usb/host/xhci-hub.c
··· 1157 1157 t1 = xhci_port_state_to_neutral(t1); 1158 1158 if (t1 != t2) 1159 1159 xhci_writel(xhci, t2, port_array[port_index]); 1160 - 1161 - if (hcd->speed != HCD_USB3) { 1162 - /* enable remote wake up for USB 2.0 */ 1163 - __le32 __iomem *addr; 1164 - u32 tmp; 1165 - 1166 - /* Get the port power control register address. */ 1167 - addr = port_array[port_index] + PORTPMSC; 1168 - tmp = xhci_readl(xhci, addr); 1169 - tmp |= PORT_RWE; 1170 - xhci_writel(xhci, tmp, addr); 1171 - } 1172 1160 } 1173 1161 hcd->state = HC_STATE_SUSPENDED; 1174 1162 bus_state->next_statechange = jiffies + msecs_to_jiffies(10); ··· 1235 1247 xhci_ring_device(xhci, slot_id); 1236 1248 } else 1237 1249 xhci_writel(xhci, temp, port_array[port_index]); 1238 - 1239 - if (hcd->speed != HCD_USB3) { 1240 - /* disable remote wake up for USB 2.0 */ 1241 - __le32 __iomem *addr; 1242 - u32 tmp; 1243 - 1244 - /* Add one to the port status register address to get 1245 - * the port power control register address. 1246 - */ 1247 - addr = port_array[port_index] + PORTPMSC; 1248 - tmp = xhci_readl(xhci, addr); 1249 - tmp &= ~PORT_RWE; 1250 - xhci_writel(xhci, tmp, addr); 1251 - } 1252 1250 } 1253 1251 1254 1252 (void) xhci_readl(xhci, &xhci->op_regs->command);
+25
drivers/usb/host/xhci-pci.c
··· 35 35 #define PCI_VENDOR_ID_ETRON 0x1b6f 36 36 #define PCI_DEVICE_ID_ASROCK_P67 0x7023 37 37 38 + #define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI 0x8c31 39 + #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31 40 + 38 41 static const char hcd_name[] = "xhci_hcd"; 39 42 40 43 /* called after powerup, by probe or system-pm "wakeup" */ ··· 71 68 xhci_dbg_trace(xhci, trace_xhci_dbg_quirks, 72 69 "QUIRK: Fresco Logic xHC needs configure" 73 70 " endpoint cmd after reset endpoint"); 71 + } 72 + if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK && 73 + pdev->revision == 0x4) { 74 + xhci->quirks |= XHCI_SLOW_SUSPEND; 75 + xhci_dbg_trace(xhci, trace_xhci_dbg_quirks, 76 + "QUIRK: Fresco Logic xHC revision %u" 77 + "must be suspended extra slowly", 78 + pdev->revision); 74 79 } 75 80 /* Fresco Logic confirms: all revisions of this chip do not 76 81 * support MSI, even though some of them claim to in their PCI ··· 120 109 */ 121 110 xhci->quirks |= XHCI_SPURIOUS_REBOOT; 122 111 xhci->quirks |= XHCI_AVOID_BEI; 112 + } 113 + if (pdev->vendor == PCI_VENDOR_ID_INTEL && 114 + (pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI || 115 + pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI)) { 116 + /* Workaround for occasional spurious wakeups from S5 (or 117 + * any other sleep) on Haswell machines with LPT and LPT-LP 118 + * with the new Intel BIOS 119 + */ 120 + xhci->quirks |= XHCI_SPURIOUS_WAKEUP; 123 121 } 124 122 if (pdev->vendor == PCI_VENDOR_ID_ETRON && 125 123 pdev->device == PCI_DEVICE_ID_ASROCK_P67) { ··· 237 217 usb_put_hcd(xhci->shared_hcd); 238 218 } 239 219 usb_hcd_pci_remove(dev); 220 + 221 + /* Workaround for spurious wakeups at shutdown with HSW */ 222 + if (xhci->quirks & XHCI_SPURIOUS_WAKEUP) 223 + pci_set_power_state(dev, PCI_D3hot); 224 + 240 225 kfree(xhci); 241 226 } 242 227
+13 -1
drivers/usb/host/xhci.c
··· 730 730 731 731 spin_lock_irq(&xhci->lock); 732 732 xhci_halt(xhci); 733 + /* Workaround for spurious wakeups at shutdown with HSW */ 734 + if (xhci->quirks & XHCI_SPURIOUS_WAKEUP) 735 + xhci_reset(xhci); 733 736 spin_unlock_irq(&xhci->lock); 734 737 735 738 xhci_cleanup_msix(xhci); ··· 740 737 xhci_dbg_trace(xhci, trace_xhci_dbg_init, 741 738 "xhci_shutdown completed - status = %x", 742 739 xhci_readl(xhci, &xhci->op_regs->status)); 740 + 741 + /* Yet another workaround for spurious wakeups at shutdown with HSW */ 742 + if (xhci->quirks & XHCI_SPURIOUS_WAKEUP) 743 + pci_set_power_state(to_pci_dev(hcd->self.controller), PCI_D3hot); 743 744 } 744 745 745 746 #ifdef CONFIG_PM ··· 846 839 int xhci_suspend(struct xhci_hcd *xhci) 847 840 { 848 841 int rc = 0; 842 + unsigned int delay = XHCI_MAX_HALT_USEC; 849 843 struct usb_hcd *hcd = xhci_to_hcd(xhci); 850 844 u32 command; 851 845 ··· 869 861 command = xhci_readl(xhci, &xhci->op_regs->command); 870 862 command &= ~CMD_RUN; 871 863 xhci_writel(xhci, command, &xhci->op_regs->command); 864 + 865 + /* Some chips from Fresco Logic need an extraordinary delay */ 866 + delay *= (xhci->quirks & XHCI_SLOW_SUSPEND) ? 10 : 1; 867 + 872 868 if (xhci_handshake(xhci, &xhci->op_regs->status, 873 - STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC)) { 869 + STS_HALT, STS_HALT, delay)) { 874 870 xhci_warn(xhci, "WARN: xHC CMD_RUN timeout\n"); 875 871 spin_unlock_irq(&xhci->lock); 876 872 return -ETIMEDOUT;
+2
drivers/usb/host/xhci.h
··· 1548 1548 #define XHCI_COMP_MODE_QUIRK (1 << 14) 1549 1549 #define XHCI_AVOID_BEI (1 << 15) 1550 1550 #define XHCI_PLAT (1 << 16) 1551 + #define XHCI_SLOW_SUSPEND (1 << 17) 1552 + #define XHCI_SPURIOUS_WAKEUP (1 << 18) 1551 1553 unsigned int num_active_eps; 1552 1554 unsigned int limit_active_eps; 1553 1555 /* There are two roothubs to keep track of bus suspend info for */
+1 -1
drivers/usb/misc/Kconfig
··· 246 246 config USB_HSIC_USB3503 247 247 tristate "USB3503 HSIC to USB20 Driver" 248 248 depends on I2C 249 - select REGMAP 249 + select REGMAP_I2C 250 250 help 251 251 This option enables support for SMSC USB3503 HSIC to USB 2.0 Driver.
+46
drivers/usb/musb/musb_core.c
··· 922 922 } 923 923 924 924 /* 925 + * Program the HDRC to start (enable interrupts, dma, etc.). 926 + */ 927 + void musb_start(struct musb *musb) 928 + { 929 + void __iomem *regs = musb->mregs; 930 + u8 devctl = musb_readb(regs, MUSB_DEVCTL); 931 + 932 + dev_dbg(musb->controller, "<== devctl %02x\n", devctl); 933 + 934 + /* Set INT enable registers, enable interrupts */ 935 + musb->intrtxe = musb->epmask; 936 + musb_writew(regs, MUSB_INTRTXE, musb->intrtxe); 937 + musb->intrrxe = musb->epmask & 0xfffe; 938 + musb_writew(regs, MUSB_INTRRXE, musb->intrrxe); 939 + musb_writeb(regs, MUSB_INTRUSBE, 0xf7); 940 + 941 + musb_writeb(regs, MUSB_TESTMODE, 0); 942 + 943 + /* put into basic highspeed mode and start session */ 944 + musb_writeb(regs, MUSB_POWER, MUSB_POWER_ISOUPDATE 945 + | MUSB_POWER_HSENAB 946 + /* ENSUSPEND wedges tusb */ 947 + /* | MUSB_POWER_ENSUSPEND */ 948 + ); 949 + 950 + musb->is_active = 0; 951 + devctl = musb_readb(regs, MUSB_DEVCTL); 952 + devctl &= ~MUSB_DEVCTL_SESSION; 953 + 954 + /* session started after: 955 + * (a) ID-grounded irq, host mode; 956 + * (b) vbus present/connect IRQ, peripheral mode; 957 + * (c) peripheral initiates, using SRP 958 + */ 959 + if (musb->port_mode != MUSB_PORT_MODE_HOST && 960 + (devctl & MUSB_DEVCTL_VBUS) == MUSB_DEVCTL_VBUS) { 961 + musb->is_active = 1; 962 + } else { 963 + devctl |= MUSB_DEVCTL_SESSION; 964 + } 965 + 966 + musb_platform_enable(musb); 967 + musb_writeb(regs, MUSB_DEVCTL, devctl); 968 + } 969 + 970 + /* 925 971 * Make the HDRC stop (disable interrupts, etc.); 926 972 * reversible by musb_start 927 973 * called on gadget driver unregister
+1
drivers/usb/musb/musb_core.h
··· 503 503 extern const char musb_driver_name[]; 504 504 505 505 extern void musb_stop(struct musb *musb); 506 + extern void musb_start(struct musb *musb); 506 507 507 508 extern void musb_write_fifo(struct musb_hw_ep *ep, u16 len, const u8 *src); 508 509 extern void musb_read_fifo(struct musb_hw_ep *ep, u16 len, u8 *dst);
+3
drivers/usb/musb/musb_dsps.c
··· 535 535 struct dsps_glue *glue; 536 536 int ret; 537 537 538 + if (!strcmp(pdev->name, "musb-hdrc")) 539 + return -ENODEV; 540 + 538 541 match = of_match_node(musb_dsps_of_match, pdev->dev.of_node); 539 542 if (!match) { 540 543 dev_err(&pdev->dev, "fail to get matching of_match struct\n");
+6
drivers/usb/musb/musb_gadget.c
··· 1790 1790 musb->g.max_speed = USB_SPEED_HIGH; 1791 1791 musb->g.speed = USB_SPEED_UNKNOWN; 1792 1792 1793 + MUSB_DEV_MODE(musb); 1794 + musb->xceiv->otg->default_a = 0; 1795 + musb->xceiv->state = OTG_STATE_B_IDLE; 1796 + 1793 1797 /* this "gadget" abstracts/virtualizes the controller */ 1794 1798 musb->g.name = musb_driver_name; 1795 1799 musb->g.is_otg = 1; ··· 1858 1854 otg_set_peripheral(otg, &musb->g); 1859 1855 musb->xceiv->state = OTG_STATE_B_IDLE; 1860 1856 spin_unlock_irqrestore(&musb->lock, flags); 1857 + 1858 + musb_start(musb); 1861 1859 1862 1860 /* REVISIT: funcall to other code, which also 1863 1861 * handles power budgeting ... this way also
-46
drivers/usb/musb/musb_virthub.c
··· 44 44 45 45 #include "musb_core.h" 46 46 47 - /* 48 - * Program the HDRC to start (enable interrupts, dma, etc.). 49 - */ 50 - static void musb_start(struct musb *musb) 51 - { 52 - void __iomem *regs = musb->mregs; 53 - u8 devctl = musb_readb(regs, MUSB_DEVCTL); 54 - 55 - dev_dbg(musb->controller, "<== devctl %02x\n", devctl); 56 - 57 - /* Set INT enable registers, enable interrupts */ 58 - musb->intrtxe = musb->epmask; 59 - musb_writew(regs, MUSB_INTRTXE, musb->intrtxe); 60 - musb->intrrxe = musb->epmask & 0xfffe; 61 - musb_writew(regs, MUSB_INTRRXE, musb->intrrxe); 62 - musb_writeb(regs, MUSB_INTRUSBE, 0xf7); 63 - 64 - musb_writeb(regs, MUSB_TESTMODE, 0); 65 - 66 - /* put into basic highspeed mode and start session */ 67 - musb_writeb(regs, MUSB_POWER, MUSB_POWER_ISOUPDATE 68 - | MUSB_POWER_HSENAB 69 - /* ENSUSPEND wedges tusb */ 70 - /* | MUSB_POWER_ENSUSPEND */ 71 - ); 72 - 73 - musb->is_active = 0; 74 - devctl = musb_readb(regs, MUSB_DEVCTL); 75 - devctl &= ~MUSB_DEVCTL_SESSION; 76 - 77 - /* session started after: 78 - * (a) ID-grounded irq, host mode; 79 - * (b) vbus present/connect IRQ, peripheral mode; 80 - * (c) peripheral initiates, using SRP 81 - */ 82 - if (musb->port_mode != MUSB_PORT_MODE_HOST && 83 - (devctl & MUSB_DEVCTL_VBUS) == MUSB_DEVCTL_VBUS) { 84 - musb->is_active = 1; 85 - } else { 86 - devctl |= MUSB_DEVCTL_SESSION; 87 - } 88 - 89 - musb_platform_enable(musb); 90 - musb_writeb(regs, MUSB_DEVCTL, devctl); 91 - } 92 - 93 47 static void musb_port_suspend(struct musb *musb, bool do_suspend) 94 48 { 95 49 struct usb_otg *otg = musb->xceiv->otg;
+5 -6
drivers/usb/phy/phy-gpio-vbus-usb.c
··· 241 241 242 242 /* platform driver interface */ 243 243 244 - static int __init gpio_vbus_probe(struct platform_device *pdev) 244 + static int gpio_vbus_probe(struct platform_device *pdev) 245 245 { 246 246 struct gpio_vbus_mach_info *pdata = dev_get_platdata(&pdev->dev); 247 247 struct gpio_vbus_data *gpio_vbus; ··· 349 349 return err; 350 350 } 351 351 352 - static int __exit gpio_vbus_remove(struct platform_device *pdev) 352 + static int gpio_vbus_remove(struct platform_device *pdev) 353 353 { 354 354 struct gpio_vbus_data *gpio_vbus = platform_get_drvdata(pdev); 355 355 struct gpio_vbus_mach_info *pdata = dev_get_platdata(&pdev->dev); ··· 398 398 }; 399 399 #endif 400 400 401 - /* NOTE: the gpio-vbus device may *NOT* be hotplugged */ 402 - 403 401 MODULE_ALIAS("platform:gpio-vbus"); 404 402 405 403 static struct platform_driver gpio_vbus_driver = { ··· 408 410 .pm = &gpio_vbus_dev_pm_ops, 409 411 #endif 410 412 }, 411 - .remove = __exit_p(gpio_vbus_remove), 413 + .probe = gpio_vbus_probe, 414 + .remove = gpio_vbus_remove, 412 415 }; 413 416 414 - module_platform_driver_probe(gpio_vbus_driver, gpio_vbus_probe); 417 + module_platform_driver(gpio_vbus_driver); 415 418 416 419 MODULE_DESCRIPTION("simple GPIO controlled OTG transceiver driver"); 417 420 MODULE_AUTHOR("Philipp Zabel");
+227 -1
drivers/usb/serial/option.c
··· 81 81
82 82 #define HUAWEI_VENDOR_ID 0x12D1
83 83 #define HUAWEI_PRODUCT_E173 0x140C
84 + #define HUAWEI_PRODUCT_E1750 0x1406
84 85 #define HUAWEI_PRODUCT_K4505 0x1464
85 86 #define HUAWEI_PRODUCT_K3765 0x1465
86 87 #define HUAWEI_PRODUCT_K4605 0x14C6
··· 451 450 #define CHANGHONG_VENDOR_ID 0x2077
452 451 #define CHANGHONG_PRODUCT_CH690 0x7001
453 452
453 + /* Inovia */
454 + #define INOVIA_VENDOR_ID 0x20a6
455 + #define INOVIA_SEW858 0x1105
456 +
454 457 /* some devices interfaces need special handling due to a number of reasons */
455 458 enum option_blacklist_reason {
456 459 OPTION_BLACKLIST_NONE = 0,
··· 572 567 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1c23, USB_CLASS_COMM, 0x02, 0xff) },
573 568 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E173, 0xff, 0xff, 0xff),
574 569 .driver_info = (kernel_ulong_t) &net_intf1_blacklist },
570 + { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E1750, 0xff, 0xff, 0xff),
571 + .driver_info = (kernel_ulong_t) &net_intf2_blacklist },
575 572 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1441, USB_CLASS_COMM, 0x02, 0xff) },
576 573 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1442, USB_CLASS_COMM, 0x02, 0xff) },
577 574 { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K4505, 0xff, 0xff, 0xff),
··· 693 686 { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x7A) },
694 687 { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x7B) },
695 688 { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x7C) },
689 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x01) },
690 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x02) },
691 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x03) },
692 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x04) },
693 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x05) },
694 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x06) },
695 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0A) },
696 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0B) },
697 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0D) },
698 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0E) },
699 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0F) },
700 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x10) },
701 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x12) },
702 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x13) },
703 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x14) },
704 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x15) },
705 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x17) },
706 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x18) },
707 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x19) },
708 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x1A) },
709 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x1B) },
710 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x1C) },
711 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x31) },
712 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x32) },
713 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x33) },
714 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x34) },
715 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x35) },
716 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x36) },
717 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3A) },
718 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3B) },
719 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3D) },
720 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3E) },
721 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3F) },
722 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x48) },
723 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x49) },
724 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x4A) },
725 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x4B) },
726 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x4C) },
727 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x61) },
728 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x62) },
729 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x63) },
730 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x64) },
731 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x65) },
732 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x66) },
733 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6A) },
734 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6B) },
735 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6D) },
736 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6E) },
737 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6F) },
738 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x78) },
739 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x79) },
740 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x7A) },
741 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x7B) },
742 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x7C) },
743 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x01) },
744 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x02) },
745 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x03) },
746 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x04) },
747 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x05) },
748 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x06) },
749 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0A) },
750 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0B) },
751 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0D) },
752 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0E) },
753 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0F) },
754 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x10) },
755 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x12) },
756 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x13) },
757 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x14) },
758 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x15) },
759 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x17) },
760 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x18) },
761 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x19) },
762 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x1A) },
763 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x1B) },
764 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x1C) },
765 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x31) },
766 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x32) },
767 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x33) },
768 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x34) },
769 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x35) },
770 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x36) },
771 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3A) },
772 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3B) },
773 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3D) },
774 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3E) },
775 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3F) },
776 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x48) },
777 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x49) },
778 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x4A) },
779 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x4B) },
780 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x4C) },
781 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x61) },
782 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x62) },
783 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x63) },
784 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x64) },
785 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x65) },
786 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x66) },
787 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6A) },
788 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6B) },
789 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6D) },
790 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6E) },
791 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6F) },
792 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x78) },
793 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x79) },
794 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x7A) },
795 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x7B) },
796 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x7C) },
797 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x01) },
798 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x02) },
799 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x03) },
800 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x04) },
801 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x05) },
802 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x06) },
803 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0A) },
804 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0B) },
805 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0D) },
806 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0E) },
807 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0F) },
808 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x10) },
809 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x12) },
810 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x13) },
811 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x14) },
812 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x15) },
813 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x17) },
814 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x18) },
815 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x19) },
816 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x1A) },
817 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x1B) },
818 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x1C) },
819 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x31) },
820 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x32) },
821 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x33) },
822 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x34) },
823 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x35) },
824 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x36) },
825 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3A) },
826 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3B) },
827 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3D) },
828 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3E) },
829 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3F) },
830 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x48) },
831 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x49) },
832 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x4A) },
833 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x4B) },
834 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x4C) },
835 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x61) },
836 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x62) },
837 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x63) },
838 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x64) },
839 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x65) },
840 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x66) },
841 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6A) },
842 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6B) },
843 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6D) },
844 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6E) },
845 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6F) },
846 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x78) },
847 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x79) },
848 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x7A) },
849 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff,
0x05, 0x7B) }, 850 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x7C) }, 851 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x01) }, 852 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x02) }, 853 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x03) }, 854 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x04) }, 855 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x05) }, 856 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x06) }, 857 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0A) }, 858 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0B) }, 859 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0D) }, 860 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0E) }, 861 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0F) }, 862 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x10) }, 863 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x12) }, 864 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x13) }, 865 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x14) }, 866 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x15) }, 867 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x17) }, 868 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x18) }, 869 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x19) }, 870 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x1A) }, 871 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x1B) }, 872 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x1C) }, 873 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x31) }, 874 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x32) }, 875 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 
0xff, 0x06, 0x33) }, 876 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x34) }, 877 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x35) }, 878 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x36) }, 879 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3A) }, 880 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3B) }, 881 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3D) }, 882 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3E) }, 883 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3F) }, 884 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x48) }, 885 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x49) }, 886 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x4A) }, 887 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x4B) }, 888 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x4C) }, 889 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x61) }, 890 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x62) }, 891 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x63) }, 892 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x64) }, 893 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x65) }, 894 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x66) }, 895 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6A) }, 896 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6B) }, 897 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6D) }, 898 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6E) }, 899 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6F) }, 900 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x78) }, 901 + { 
USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x79) }, 902 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7A) }, 903 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7B) }, 904 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7C) }, 696 905 697 906 698 907 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_V640) }, ··· 1477 1254 1478 1255 { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100) }, 1479 1256 { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD145) }, 1480 - { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD200) }, 1257 + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD200), 1258 + .driver_info = (kernel_ulong_t)&net_intf6_blacklist 1259 + }, 1481 1260 { USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */ 1482 1261 { USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_GT_B3730, USB_CLASS_CDC_DATA, 0x00, 0x00) }, /* Samsung GT-B3730 LTE USB modem.*/ 1483 1262 { USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM600) }, ··· 1567 1342 { USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x00, 0x00) }, 1568 1343 { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */ 1569 1344 { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */ 1345 + { USB_DEVICE(INOVIA_VENDOR_ID, INOVIA_SEW858) }, 1570 1346 { } /* Terminating entry */ 1571 1347 }; 1572 1348 MODULE_DEVICE_TABLE(usb, option_ids);
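The long table above matches Huawei devices by vendor ID plus the interface class/subclass/protocol triple rather than by product ID, which is why one entry per protocol value is needed. A simplified, illustrative analog of that matching style (names here are stand-ins, not the kernel's `usb_device_id` machinery; `0x12d1` is the real HUAWEI_VENDOR_ID):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified analog of USB_VENDOR_AND_INTERFACE_INFO()
 * matching: the entry matches on vendor ID and the interface
 * class/subclass/protocol triple, ignoring the product ID entirely. */
struct usb_match {
    uint16_t vendor;
    uint8_t if_class, if_subclass, if_protocol;
};

static int usb_match_one(const struct usb_match *id,
                         uint16_t vendor, uint16_t product,
                         uint8_t cls, uint8_t sub, uint8_t proto)
{
    (void)product; /* product ID is not part of this match style */
    return id->vendor == vendor &&
           id->if_class == cls &&
           id->if_subclass == sub &&
           id->if_protocol == proto;
}
```

Two devices with different product IDs but the same interface triple hit the same table entry, which keeps the table bounded by protocol values rather than by shipped models.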
+1
drivers/usb/serial/ti_usb_3410_5052.c
··· 190 190 { USB_DEVICE(IBM_VENDOR_ID, IBM_454B_PRODUCT_ID) }, 191 191 { USB_DEVICE(IBM_VENDOR_ID, IBM_454C_PRODUCT_ID) }, 192 192 { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_PRODUCT_ID) }, 193 + { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_STRIP_PORT_ID) }, 193 194 { USB_DEVICE(TI_VENDOR_ID, FRI2_PRODUCT_ID) }, 194 195 { } /* terminator */ 195 196 };
+4 -1
drivers/usb/storage/scsiglue.c
··· 211 211 /* 212 212 * Many devices do not respond properly to READ_CAPACITY_16. 213 213 * Tell the SCSI layer to try READ_CAPACITY_10 first. 214 + * However, some USB 3.0 drive enclosures return capacity 215 + * modulo 2TB. Those must use READ_CAPACITY_16. 214 216 */ 215 - sdev->try_rc_10_first = 1; 217 + if (!(us->fflags & US_FL_NEEDS_CAP16)) 218 + sdev->try_rc_10_first = 1; 216 219 217 220 /* assume SPC3 or later devices support sense size > 18 */ 218 221 if (sdev->scsi_level > SCSI_SPC_2)
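The hunk above exists because READ CAPACITY(10) reports the last LBA in a 32-bit field: with 512-byte sectors, anything at or beyond 2 TB cannot be represented, and a bridge chip that silently truncates will report the capacity modulo 2 TB. A small arithmetic sketch of that wraparound (illustrative helper, not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Model a buggy enclosure answering READ CAPACITY(10): the real last
 * LBA is truncated to 32 bits, so the byte capacity it implies wraps
 * modulo 2 TB for 512-byte sectors. */
static uint64_t reported_bytes_rc10(uint64_t real_sectors, uint32_t sector_size)
{
    uint32_t truncated = (uint32_t)real_sectors;   /* 32-bit wraparound */
    return (uint64_t)truncated * sector_size;
}
```

A real 3 TB drive (3 * 2^31 sectors of 512 bytes) and a real 1 TB drive both come back as 1 TB through this path, which is why affected devices are flagged with US_FL_NEEDS_CAP16 and steered to READ CAPACITY(16).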
+7
drivers/usb/storage/unusual_devs.h
··· 1925 1925 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1926 1926 US_FL_IGNORE_RESIDUE ), 1927 1927 1928 + /* Reported by Oliver Neukum <oneukum@suse.com> */ 1929 + UNUSUAL_DEV( 0x174c, 0x55aa, 0x0100, 0x0100, 1930 + "ASMedia", 1931 + "AS2105", 1932 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1933 + US_FL_NEEDS_CAP16), 1934 + 1928 1935 /* Reported by Jesse Feddema <jdfeddema@gmail.com> */ 1929 1936 UNUSUAL_DEV( 0x177f, 0x0400, 0x0000, 0x0000, 1930 1937 "Yarvik",
+21 -19
drivers/vfio/vfio_iommu_type1.c
··· 545 545 long npage; 546 546 int ret = 0, prot = 0; 547 547 uint64_t mask; 548 + struct vfio_dma *dma = NULL; 549 + unsigned long pfn; 548 550 549 551 end = map->iova + map->size; 550 552 ··· 589 587 } 590 588 591 589 for (iova = map->iova; iova < end; iova += size, vaddr += size) { 592 - struct vfio_dma *dma = NULL; 593 - unsigned long pfn; 594 590 long i; 595 591 596 592 /* Pin a contiguous chunk of memory */ ··· 597 597 if (npage <= 0) { 598 598 WARN_ON(!npage); 599 599 ret = (int)npage; 600 - break; 600 + goto out; 601 601 } 602 602 603 603 /* Verify pages are not already mapped */ 604 604 for (i = 0; i < npage; i++) { 605 605 if (iommu_iova_to_phys(iommu->domain, 606 606 iova + (i << PAGE_SHIFT))) { 607 - vfio_unpin_pages(pfn, npage, prot, true); 608 607 ret = -EBUSY; 609 - break; 608 + goto out_unpin; 610 609 } 611 610 } 612 611 ··· 615 616 if (ret) { 616 617 if (ret != -EBUSY || 617 618 map_try_harder(iommu, iova, pfn, npage, prot)) { 618 - vfio_unpin_pages(pfn, npage, prot, true); 619 - break; 619 + goto out_unpin; 620 620 } 621 621 } 622 622 ··· 670 672 dma = kzalloc(sizeof(*dma), GFP_KERNEL); 671 673 if (!dma) { 672 674 iommu_unmap(iommu->domain, iova, size); 673 - vfio_unpin_pages(pfn, npage, prot, true); 674 675 ret = -ENOMEM; 675 - break; 676 + goto out_unpin; 676 677 } 677 678 678 679 dma->size = size; ··· 682 685 } 683 686 } 684 687 685 - if (ret) { 686 - struct vfio_dma *tmp; 687 - iova = map->iova; 688 - size = map->size; 689 - while ((tmp = vfio_find_dma(iommu, iova, size))) { 690 - int r = vfio_remove_dma_overlap(iommu, iova, 691 - &size, tmp); 692 - if (WARN_ON(r || !size)) 693 - break; 694 - } 688 + WARN_ON(ret); 689 + mutex_unlock(&iommu->lock); 690 + return ret; 691 + 692 + out_unpin: 693 + vfio_unpin_pages(pfn, npage, prot, true); 694 + 695 + out: 696 + iova = map->iova; 697 + size = map->size; 698 + while ((dma = vfio_find_dma(iommu, iova, size))) { 699 + int r = vfio_remove_dma_overlap(iommu, iova, 700 + &size, dma); 701 + if (WARN_ON(r 
|| !size)) 702 + break; 695 703 } 696 704 697 705 mutex_unlock(&iommu->lock);
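The vfio rework above replaces several scattered inline `vfio_unpin_pages()` calls with shared `out_unpin`/`out` labels, so the unwind code exists exactly once. A minimal sketch of that centralized-error-path pattern (the resources and failure codes are stand-ins, not the real vfio objects):

```c
#include <assert.h>

/* Counts simulated pinned pages; a successful mapping keeps its pin. */
static int pins;

static int map_range(int pin_ok, int verify_ok)
{
    int ret = 0;

    if (!pin_ok)
        return -1;          /* nothing acquired yet: plain return */
    pins++;                 /* resource acquired */

    if (!verify_ok) {
        ret = -16;          /* -EBUSY-like failure */
        goto out_unpin;     /* single unwind point */
    }
    return 0;               /* success: the mapping holds the pin */

out_unpin:
    pins--;                 /* release in reverse order of acquisition */
    return ret;
}
```

The payoff is the same as in the hunk: when a new failure case is added later, it only needs a `goto` to the right label instead of duplicating the cleanup sequence.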
+6 -1
drivers/vhost/scsi.c
··· 728 728 } 729 729 se_sess = tv_nexus->tvn_se_sess; 730 730 731 - tag = percpu_ida_alloc(&se_sess->sess_tag_pool, GFP_KERNEL); 731 + tag = percpu_ida_alloc(&se_sess->sess_tag_pool, GFP_ATOMIC); 732 + if (tag < 0) { 733 + pr_err("Unable to obtain tag for tcm_vhost_cmd\n"); 734 + return ERR_PTR(-ENOMEM); 735 + } 736 + 732 737 cmd = &((struct tcm_vhost_cmd *)se_sess->sess_cmd_map)[tag]; 733 738 sg = cmd->tvc_sgl; 734 739 pages = cmd->tvc_upages;
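The vhost-scsi fix above switches the tag allocation to GFP_ATOMIC and, because a non-sleeping allocation can fail, checks for a negative tag before using it to index `sess_cmd_map`. A minimal stand-in for that percpu_ida-style contract (not the kernel API):

```c
#include <assert.h>

/* Tiny tag pool: alloc returns a small non-negative tag, or a negative
 * value when the pool is exhausted. A caller that cannot sleep must
 * check the sign before indexing an array with the tag -- skipping that
 * check is exactly the bug the hunk above fixes. */
#define POOL_SIZE 2

static int tag_used[POOL_SIZE];

static int tag_alloc(void)
{
    for (int i = 0; i < POOL_SIZE; i++) {
        if (!tag_used[i]) {
            tag_used[i] = 1;
            return i;
        }
    }
    return -1; /* exhausted: must not be used as an index */
}

static void tag_free(int tag)
{
    tag_used[tag] = 0;
}
```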
+6
drivers/w1/w1.c
··· 613 613 sl = dev_to_w1_slave(dev); 614 614 fops = sl->family->fops; 615 615 616 + if (!fops) 617 + return 0; 618 + 616 619 switch (action) { 617 620 case BUS_NOTIFY_ADD_DEVICE: 618 621 /* if the family driver needs to initialize something... */ ··· 716 713 atomic_set(&sl->refcnt, 0); 717 714 init_completion(&sl->released); 718 715 716 + /* slave modules need to be loaded in a context with unlocked mutex */ 717 + mutex_unlock(&dev->mutex); 719 718 request_module("w1-family-0x%0x", rn->family); 719 + mutex_lock(&dev->mutex); 720 720 721 721 spin_lock(&w1_flock); 722 722 f = w1_family_registered(rn->family);
+6
drivers/watchdog/hpwdt.c
··· 802 802 return -ENODEV; 803 803 } 804 804 805 + /* 806 + * Ignore all auxiliary iLO devices with the following PCI ID 807 + */ 808 + if (dev->subsystem_device == 0x1979) 809 + return -ENODEV; 810 + 805 811 if (pci_enable_device(dev)) { 806 812 dev_warn(&dev->dev, 807 813 "Not possible to enable PCI Device: 0x%x:0x%x.\n",
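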
+1 -1
drivers/watchdog/kempld_wdt.c
··· 35 35 #define KEMPLD_WDT_STAGE_TIMEOUT(x) (0x1b + (x) * 4) 36 36 #define KEMPLD_WDT_STAGE_CFG(x) (0x18 + (x)) 37 37 #define STAGE_CFG_GET_PRESCALER(x) (((x) & 0x30) >> 4) 38 - #define STAGE_CFG_SET_PRESCALER(x) (((x) & 0x30) << 4) 38 + #define STAGE_CFG_SET_PRESCALER(x) (((x) & 0x3) << 4) 39 39 #define STAGE_CFG_PRESCALER_MASK 0x30 40 40 #define STAGE_CFG_ACTION_MASK 0x7 41 41 #define STAGE_CFG_ASSERT (1 << 3)
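The one-character kempld_wdt fix above matters more than it looks: the prescaler field occupies bits 4..5 (mask 0x30), so the setter must mask the 2-bit input with 0x3 *before* shifting. The old macro masked with 0x30 and then shifted left by 4, landing the bits at positions 8..9, outside the field. A self-contained demonstration of both macros:

```c
#include <assert.h>

/* Prescaler field lives at bits 4..5 of the stage config register. */
#define PRESCALER_MASK        0x30
#define GET_PRESCALER(x)      (((x) & 0x30) >> 4)
#define SET_PRESCALER_OLD(x)  (((x) & 0x30) << 4)  /* buggy: wrong mask */
#define SET_PRESCALER_NEW(x)  (((x) & 0x3) << 4)   /* fixed */
```

With the old macro, any legal prescaler value (0..3) is zeroed by `& 0x30` before the shift, so the register field never gets programmed.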
+2 -2
drivers/watchdog/sunxi_wdt.c
··· 146 146 .set_timeout = sunxi_wdt_set_timeout, 147 147 }; 148 148 149 - static int __init sunxi_wdt_probe(struct platform_device *pdev) 149 + static int sunxi_wdt_probe(struct platform_device *pdev) 150 150 { 151 151 struct sunxi_wdt_dev *sunxi_wdt; 152 152 struct resource *res; ··· 187 187 return 0; 188 188 } 189 189 190 - static int __exit sunxi_wdt_remove(struct platform_device *pdev) 190 + static int sunxi_wdt_remove(struct platform_device *pdev) 191 191 { 192 192 struct sunxi_wdt_dev *sunxi_wdt = platform_get_drvdata(pdev); 193 193
+2 -1
drivers/watchdog/ts72xx_wdt.c
··· 310 310 311 311 case WDIOC_GETSTATUS: 312 312 case WDIOC_GETBOOTSTATUS: 313 - return put_user(0, p); 313 + error = put_user(0, p); 314 + break; 314 315 315 316 case WDIOC_KEEPALIVE: 316 317 ts72xx_wdt_kick(wdt);
+37 -15
fs/aio.c
··· 167 167 } 168 168 __initcall(aio_setup); 169 169 170 + static void put_aio_ring_file(struct kioctx *ctx) 171 + { 172 + struct file *aio_ring_file = ctx->aio_ring_file; 173 + if (aio_ring_file) { 174 + truncate_setsize(aio_ring_file->f_inode, 0); 175 + 176 + /* Prevent further access to the kioctx from migratepages */ 177 + spin_lock(&aio_ring_file->f_inode->i_mapping->private_lock); 178 + aio_ring_file->f_inode->i_mapping->private_data = NULL; 179 + ctx->aio_ring_file = NULL; 180 + spin_unlock(&aio_ring_file->f_inode->i_mapping->private_lock); 181 + 182 + fput(aio_ring_file); 183 + } 184 + } 185 + 170 186 static void aio_free_ring(struct kioctx *ctx) 171 187 { 172 188 int i; 173 - struct file *aio_ring_file = ctx->aio_ring_file; 174 189 175 190 for (i = 0; i < ctx->nr_pages; i++) { 176 191 pr_debug("pid(%d) [%d] page->count=%d\n", current->pid, i, ··· 193 178 put_page(ctx->ring_pages[i]); 194 179 } 195 180 181 + put_aio_ring_file(ctx); 182 + 196 183 if (ctx->ring_pages && ctx->ring_pages != ctx->internal_pages) 197 184 kfree(ctx->ring_pages); 198 - 199 - if (aio_ring_file) { 200 - truncate_setsize(aio_ring_file->f_inode, 0); 201 - fput(aio_ring_file); 202 - ctx->aio_ring_file = NULL; 203 - } 204 185 } 205 186 206 187 static int aio_ring_mmap(struct file *file, struct vm_area_struct *vma) ··· 218 207 static int aio_migratepage(struct address_space *mapping, struct page *new, 219 208 struct page *old, enum migrate_mode mode) 220 209 { 221 - struct kioctx *ctx = mapping->private_data; 210 + struct kioctx *ctx; 222 211 unsigned long flags; 223 - unsigned idx = old->index; 224 212 int rc; 225 213 226 214 /* Writeback must be complete */ ··· 234 224 235 225 get_page(new); 236 226 237 - spin_lock_irqsave(&ctx->completion_lock, flags); 238 - migrate_page_copy(new, old); 239 - ctx->ring_pages[idx] = new; 240 - spin_unlock_irqrestore(&ctx->completion_lock, flags); 227 + /* We can potentially race against kioctx teardown here. 
Use the 228 + * address_space's private data lock to protect the mapping's 229 + * private_data. 230 + */ 231 + spin_lock(&mapping->private_lock); 232 + ctx = mapping->private_data; 233 + if (ctx) { 234 + pgoff_t idx; 235 + spin_lock_irqsave(&ctx->completion_lock, flags); 236 + migrate_page_copy(new, old); 237 + idx = old->index; 238 + if (idx < (pgoff_t)ctx->nr_pages) 239 + ctx->ring_pages[idx] = new; 240 + spin_unlock_irqrestore(&ctx->completion_lock, flags); 241 + } else 242 + rc = -EBUSY; 243 + spin_unlock(&mapping->private_lock); 241 244 242 245 return rc; 243 246 } ··· 640 617 out_freeref: 641 618 free_percpu(ctx->users.pcpu_count); 642 619 out_freectx: 643 - if (ctx->aio_ring_file) 644 - fput(ctx->aio_ring_file); 620 + put_aio_ring_file(ctx); 645 621 kmem_cache_free(kioctx_cachep, ctx); 646 622 pr_debug("error allocating ioctx %d\n", err); 647 623 return ERR_PTR(err);
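The aio hunks above close a race between ring-page migration and kioctx teardown: teardown clears `mapping->private_data` under `private_lock`, and `aio_migratepage()` re-reads it under the same lock, bailing with -EBUSY if the context is gone. A single-threaded model of that protocol (in the kernel both sides hold `mapping->private_lock`; names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* The context pointer hanging off the mapping is cleared at teardown
 * and checked before use, so a late migratepage call sees NULL and
 * fails cleanly instead of dereferencing a freed context. */
struct mapping { void *private_data; };

static int migratepage(struct mapping *m)
{
    /* taken under private_lock in the real code */
    if (m->private_data == NULL)
        return -16; /* -EBUSY: context already torn down */
    /* ... copy the page and update the ring under completion_lock ... */
    return 0;
}

static void teardown(struct mapping *m)
{
    /* also under private_lock, which orders it against migratepage */
    m->private_data = NULL;
}
```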
+19 -6
fs/btrfs/async-thread.c
··· 107 107 worker->idle = 1; 108 108 109 109 /* the list may be empty if the worker is just starting */ 110 - if (!list_empty(&worker->worker_list)) { 110 + if (!list_empty(&worker->worker_list) && 111 + !worker->workers->stopping) { 111 112 list_move(&worker->worker_list, 112 113 &worker->workers->idle_list); 113 114 } ··· 128 127 spin_lock_irqsave(&worker->workers->lock, flags); 129 128 worker->idle = 0; 130 129 131 - if (!list_empty(&worker->worker_list)) { 130 + if (!list_empty(&worker->worker_list) && 131 + !worker->workers->stopping) { 132 132 list_move_tail(&worker->worker_list, 133 133 &worker->workers->worker_list); 134 134 } ··· 414 412 int can_stop; 415 413 416 414 spin_lock_irq(&workers->lock); 415 + workers->stopping = 1; 417 416 list_splice_init(&workers->idle_list, &workers->worker_list); 418 417 while (!list_empty(&workers->worker_list)) { 419 418 cur = workers->worker_list.next; ··· 458 455 workers->ordered = 0; 459 456 workers->atomic_start_pending = 0; 460 457 workers->atomic_worker_start = async_helper; 458 + workers->stopping = 0; 461 459 } 462 460 463 461 /* ··· 484 480 atomic_set(&worker->num_pending, 0); 485 481 atomic_set(&worker->refs, 1); 486 482 worker->workers = workers; 487 - worker->task = kthread_run(worker_loop, worker, 488 - "btrfs-%s-%d", workers->name, 489 - workers->num_workers + 1); 483 + worker->task = kthread_create(worker_loop, worker, 484 + "btrfs-%s-%d", workers->name, 485 + workers->num_workers + 1); 490 486 if (IS_ERR(worker->task)) { 491 487 ret = PTR_ERR(worker->task); 492 - kfree(worker); 493 488 goto fail; 494 489 } 490 + 495 491 spin_lock_irq(&workers->lock); 492 + if (workers->stopping) { 493 + spin_unlock_irq(&workers->lock); 494 + goto fail_kthread; 495 + } 496 496 list_add_tail(&worker->worker_list, &workers->idle_list); 497 497 worker->idle = 1; 498 498 workers->num_workers++; ··· 504 496 WARN_ON(workers->num_workers_starting < 0); 505 497 spin_unlock_irq(&workers->lock); 506 498 499 + 
wake_up_process(worker->task); 507 500 return 0; 501 + 502 + fail_kthread: 503 + kthread_stop(worker->task); 508 504 fail: 505 + kfree(worker); 509 506 spin_lock_irq(&workers->lock); 510 507 workers->num_workers_starting--; 511 508 spin_unlock_irq(&workers->lock);
+2
fs/btrfs/async-thread.h
··· 107 107 108 108 /* extra name for this worker, used for current->name */ 109 109 char *name; 110 + 111 + int stopping; 110 112 }; 111 113 112 114 void btrfs_queue_worker(struct btrfs_workers *workers, struct btrfs_work *work);
+1 -4
fs/btrfs/dev-replace.c
··· 535 535 list_add(&tgt_device->dev_alloc_list, &fs_info->fs_devices->alloc_list); 536 536 537 537 btrfs_rm_dev_replace_srcdev(fs_info, src_device); 538 - if (src_device->bdev) { 539 - /* zero out the old super */ 540 - btrfs_scratch_superblock(src_device); 541 - } 538 + 542 539 /* 543 540 * this is again a consistent state where no dev_replace procedure 544 541 * is running, the target device is part of the filesystem, the
+5 -4
fs/btrfs/disk-io.c
··· 1561 1561 return ret; 1562 1562 } 1563 1563 1564 - struct btrfs_root *btrfs_read_fs_root_no_name(struct btrfs_fs_info *fs_info, 1565 - struct btrfs_key *location) 1564 + struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info, 1565 + struct btrfs_key *location, 1566 + bool check_ref) 1566 1567 { 1567 1568 struct btrfs_root *root; 1568 1569 int ret; ··· 1587 1586 again: 1588 1587 root = btrfs_lookup_fs_root(fs_info, location->objectid); 1589 1588 if (root) { 1590 - if (btrfs_root_refs(&root->root_item) == 0) 1589 + if (check_ref && btrfs_root_refs(&root->root_item) == 0) 1591 1590 return ERR_PTR(-ENOENT); 1592 1591 return root; 1593 1592 } ··· 1596 1595 if (IS_ERR(root)) 1597 1596 return root; 1598 1597 1599 - if (btrfs_root_refs(&root->root_item) == 0) { 1598 + if (check_ref && btrfs_root_refs(&root->root_item) == 0) { 1600 1599 ret = -ENOENT; 1601 1600 goto fail; 1602 1601 }
+11 -2
fs/btrfs/disk-io.h
··· 68 68 int btrfs_init_fs_root(struct btrfs_root *root); 69 69 int btrfs_insert_fs_root(struct btrfs_fs_info *fs_info, 70 70 struct btrfs_root *root); 71 - struct btrfs_root *btrfs_read_fs_root_no_name(struct btrfs_fs_info *fs_info, 72 - struct btrfs_key *location); 71 + 72 + struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info, 73 + struct btrfs_key *key, 74 + bool check_ref); 75 + static inline struct btrfs_root * 76 + btrfs_read_fs_root_no_name(struct btrfs_fs_info *fs_info, 77 + struct btrfs_key *location) 78 + { 79 + return btrfs_get_fs_root(fs_info, location, true); 80 + } 81 + 73 82 int btrfs_cleanup_fs_roots(struct btrfs_fs_info *fs_info); 74 83 void btrfs_btree_balance_dirty(struct btrfs_root *root); 75 84 void btrfs_btree_balance_dirty_nodelay(struct btrfs_root *root);
+13 -9
fs/btrfs/extent_io.c
··· 145 145 offsetof(struct btrfs_io_bio, bio)); 146 146 if (!btrfs_bioset) 147 147 goto free_buffer_cache; 148 + 149 + if (bioset_integrity_create(btrfs_bioset, BIO_POOL_SIZE)) 150 + goto free_bioset; 151 + 148 152 return 0; 153 + 154 + free_bioset: 155 + bioset_free(btrfs_bioset); 156 + btrfs_bioset = NULL; 149 157 150 158 free_buffer_cache: 151 159 kmem_cache_destroy(extent_buffer_cache); ··· 1490 1482 cur_start = state->end + 1; 1491 1483 node = rb_next(node); 1492 1484 total_bytes += state->end - state->start + 1; 1493 - if (total_bytes >= max_bytes) { 1494 - *end = *start + max_bytes - 1; 1485 + if (total_bytes >= max_bytes) 1495 1486 break; 1496 - } 1497 1487 if (!node) 1498 1488 break; 1499 1489 } ··· 1620 1614 *start = delalloc_start; 1621 1615 *end = delalloc_end; 1622 1616 free_extent_state(cached_state); 1623 - return found; 1617 + return 0; 1624 1618 } 1625 1619 1626 1620 /* ··· 1633 1627 1634 1628 /* 1635 1629 * make sure to limit the number of pages we try to lock down 1636 - * if we're looping. 1637 1630 */ 1638 - if (delalloc_end + 1 - delalloc_start > max_bytes && loops) 1639 - delalloc_end = delalloc_start + PAGE_CACHE_SIZE - 1; 1631 + if (delalloc_end + 1 - delalloc_start > max_bytes) 1632 + delalloc_end = delalloc_start + max_bytes - 1; 1640 1633 1641 1634 /* step two, lock all the pages after the page that has start */ 1642 1635 ret = lock_delalloc_pages(inode, locked_page, ··· 1646 1641 */ 1647 1642 free_extent_state(cached_state); 1648 1643 if (!loops) { 1649 - unsigned long offset = (*start) & (PAGE_CACHE_SIZE - 1); 1650 - max_bytes = PAGE_CACHE_SIZE - offset; 1644 + max_bytes = PAGE_CACHE_SIZE; 1651 1645 loops = 1; 1652 1646 goto again; 1653 1647 } else {
+2 -1
fs/btrfs/inode.c
··· 6437 6437 6438 6438 if (btrfs_extent_readonly(root, disk_bytenr)) 6439 6439 goto out; 6440 + btrfs_release_path(path); 6440 6441 6441 6442 /* 6442 6443 * look for other files referencing this extent, if we ··· 7987 7986 7988 7987 7989 7988 /* check for collisions, even if the name isn't there */ 7990 - ret = btrfs_check_dir_item_collision(root, new_dir->i_ino, 7989 + ret = btrfs_check_dir_item_collision(dest, new_dir->i_ino, 7991 7990 new_dentry->d_name.name, 7992 7991 new_dentry->d_name.len); 7993 7992
+1 -1
fs/btrfs/relocation.c
··· 588 588 else 589 589 key.offset = (u64)-1; 590 590 591 - return btrfs_read_fs_root_no_name(fs_info, &key); 591 + return btrfs_get_fs_root(fs_info, &key, false); 592 592 } 593 593 594 594 #ifdef BTRFS_COMPAT_EXTENT_TREE_V0
+3 -5
fs/btrfs/root-tree.c
··· 299 299 continue; 300 300 } 301 301 302 - if (btrfs_root_refs(&root->root_item) == 0) { 303 - btrfs_add_dead_root(root); 304 - continue; 305 - } 306 - 307 302 err = btrfs_init_fs_root(root); 308 303 if (err) { 309 304 btrfs_free_fs_root(root); ··· 313 318 btrfs_free_fs_root(root); 314 319 break; 315 320 } 321 + 322 + if (btrfs_root_refs(&root->root_item) == 0) 323 + btrfs_add_dead_root(root); 316 324 } 317 325 318 326 btrfs_free_path(path);
+2 -5
fs/btrfs/transaction.c
··· 1838 1838 assert_qgroups_uptodate(trans); 1839 1839 update_super_roots(root); 1840 1840 1841 - if (!root->fs_info->log_root_recovering) { 1842 - btrfs_set_super_log_root(root->fs_info->super_copy, 0); 1843 - btrfs_set_super_log_root_level(root->fs_info->super_copy, 0); 1844 - } 1845 - 1841 + btrfs_set_super_log_root(root->fs_info->super_copy, 0); 1842 + btrfs_set_super_log_root_level(root->fs_info->super_copy, 0); 1846 1843 memcpy(root->fs_info->super_for_commit, root->fs_info->super_copy, 1847 1844 sizeof(*root->fs_info->super_copy)); 1848 1845
+6 -1
fs/btrfs/volumes.c
··· 1716 1716 struct btrfs_device *srcdev) 1717 1717 { 1718 1718 WARN_ON(!mutex_is_locked(&fs_info->fs_devices->device_list_mutex)); 1719 + 1719 1720 list_del_rcu(&srcdev->dev_list); 1720 1721 list_del_rcu(&srcdev->dev_alloc_list); 1721 1722 fs_info->fs_devices->num_devices--; ··· 1726 1725 } 1727 1726 if (srcdev->can_discard) 1728 1727 fs_info->fs_devices->num_can_discard--; 1729 - if (srcdev->bdev) 1728 + if (srcdev->bdev) { 1730 1729 fs_info->fs_devices->open_devices--; 1730 + 1731 + /* zero out the old super */ 1732 + btrfs_scratch_superblock(srcdev); 1733 + } 1731 1734 1732 1735 call_rcu(&srcdev->rcu, free_device); 1733 1736 }
+12 -2
fs/buffer.c
··· 1005 1005 struct buffer_head *bh; 1006 1006 sector_t end_block; 1007 1007 int ret = 0; /* Will call free_more_memory() */ 1008 + gfp_t gfp_mask; 1008 1009 1009 - page = find_or_create_page(inode->i_mapping, index, 1010 - (mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE); 1010 + gfp_mask = mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS; 1011 + gfp_mask |= __GFP_MOVABLE; 1012 + /* 1013 + * XXX: __getblk_slow() can not really deal with failure and 1014 + * will endlessly loop on improvised global reclaim. Prefer 1015 + * looping in the allocator rather than here, at least that 1016 + * code knows what it's doing. 1017 + */ 1018 + gfp_mask |= __GFP_NOFAIL; 1019 + 1020 + page = find_or_create_page(inode->i_mapping, index, gfp_mask); 1011 1021 if (!page) 1012 1022 return ret; 1013 1023
+4 -2
fs/cifs/cifsfs.c
··· 120 120 { 121 121 struct inode *inode; 122 122 struct cifs_sb_info *cifs_sb; 123 + struct cifs_tcon *tcon; 123 124 int rc = 0; 124 125 125 126 cifs_sb = CIFS_SB(sb); 127 + tcon = cifs_sb_master_tcon(cifs_sb); 126 128 127 129 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIXACL) 128 130 sb->s_flags |= MS_POSIXACL; 129 131 130 - if (cifs_sb_master_tcon(cifs_sb)->ses->capabilities & CAP_LARGE_FILES) 132 + if (tcon->ses->capabilities & tcon->ses->server->vals->cap_large_files) 131 133 sb->s_maxbytes = MAX_LFS_FILESIZE; 132 134 else 133 135 sb->s_maxbytes = MAX_NON_LFS; ··· 149 147 goto out_no_root; 150 148 } 151 149 152 - if (cifs_sb_master_tcon(cifs_sb)->nocase) 150 + if (tcon->nocase) 153 151 sb->s_d_op = &cifs_ci_dentry_ops; 154 152 else 155 153 sb->s_d_op = &cifs_dentry_ops;
+1 -1
fs/cifs/cifsfs.h
··· 132 132 extern const struct export_operations cifs_export_ops; 133 133 #endif /* CONFIG_CIFS_NFSD_EXPORT */ 134 134 135 - #define CIFS_VERSION "2.01" 135 + #define CIFS_VERSION "2.02" 136 136 #endif /* _CIFSFS_H */
+1 -4
fs/cifs/cifsglob.h
··· 547 547 unsigned int max_rw; /* maxRw specifies the maximum */ 548 548 /* message size the server can send or receive for */ 549 549 /* SMB_COM_WRITE_RAW or SMB_COM_READ_RAW. */ 550 - unsigned int max_vcs; /* maximum number of smb sessions, at least 551 - those that can be specified uniquely with 552 - vcnumbers */ 553 550 unsigned int capabilities; /* selective disabling of caps by smb sess */ 554 551 int timeAdj; /* Adjust for difference in server time zone in sec */ 555 552 __u64 CurrentMid; /* multiplex id - rotating counter */ ··· 712 715 enum statusEnum status; 713 716 unsigned overrideSecFlg; /* if non-zero override global sec flags */ 714 717 __u16 ipc_tid; /* special tid for connection to IPC share */ 715 - __u16 vcnum; 716 718 char *serverOS; /* name of operating system underlying server */ 717 719 char *serverNOS; /* name of network operating system of server */ 718 720 char *serverDomain; /* security realm of server */ ··· 1268 1272 #define CIFS_FATTR_DELETE_PENDING 0x2 1269 1273 #define CIFS_FATTR_NEED_REVAL 0x4 1270 1274 #define CIFS_FATTR_INO_COLLISION 0x8 1275 + #define CIFS_FATTR_UNKNOWN_NLINK 0x10 1271 1276 1272 1277 struct cifs_fattr { 1273 1278 u32 cf_flags;
+24 -28
fs/cifs/cifspdu.h
··· 1491 1491 __u8 FileName[0]; 1492 1492 } __attribute__((packed)); 1493 1493 1494 - struct reparse_data { 1495 - __u32 ReparseTag; 1496 - __u16 ReparseDataLength; 1494 + /* For IO_REPARSE_TAG_SYMLINK */ 1495 + struct reparse_symlink_data { 1496 + __le32 ReparseTag; 1497 + __le16 ReparseDataLength; 1497 1498 __u16 Reserved; 1498 - __u16 SubstituteNameOffset; 1499 - __u16 SubstituteNameLength; 1500 - __u16 PrintNameOffset; 1501 - __u16 PrintNameLength; 1502 - __u32 Flags; 1499 + __le16 SubstituteNameOffset; 1500 + __le16 SubstituteNameLength; 1501 + __le16 PrintNameOffset; 1502 + __le16 PrintNameLength; 1503 + __le32 Flags; 1504 + char PathBuffer[0]; 1505 + } __attribute__((packed)); 1506 + 1507 + /* For IO_REPARSE_TAG_NFS */ 1508 + #define NFS_SPECFILE_LNK 0x00000000014B4E4C 1509 + #define NFS_SPECFILE_CHR 0x0000000000524843 1510 + #define NFS_SPECFILE_BLK 0x00000000004B4C42 1511 + #define NFS_SPECFILE_FIFO 0x000000004F464946 1512 + #define NFS_SPECFILE_SOCK 0x000000004B434F53 1513 + struct reparse_posix_data { 1514 + __le32 ReparseTag; 1515 + __le16 ReparseDataLength; 1516 + __u16 Reserved; 1517 + __le64 InodeType; /* LNK, FIFO, CHR etc. 
*/ 1503 1518 char PathBuffer[0]; 1504 1519 } __attribute__((packed)); 1505 1520 ··· 2667 2652 } __attribute__((packed)) FILE_XATTR_INFO; /* extended attribute info 2668 2653 level 0x205 */ 2669 2654 2670 - 2671 - /* flags for chattr command */ 2672 - #define EXT_SECURE_DELETE 0x00000001 /* EXT3_SECRM_FL */ 2673 - #define EXT_ENABLE_UNDELETE 0x00000002 /* EXT3_UNRM_FL */ 2674 - /* Reserved for compress file 0x4 */ 2675 - #define EXT_SYNCHRONOUS 0x00000008 /* EXT3_SYNC_FL */ 2676 - #define EXT_IMMUTABLE_FL 0x00000010 /* EXT3_IMMUTABLE_FL */ 2677 - #define EXT_OPEN_APPEND_ONLY 0x00000020 /* EXT3_APPEND_FL */ 2678 - #define EXT_DO_NOT_BACKUP 0x00000040 /* EXT3_NODUMP_FL */ 2679 - #define EXT_NO_UPDATE_ATIME 0x00000080 /* EXT3_NOATIME_FL */ 2680 - /* 0x100 through 0x800 reserved for compression flags and are GET-ONLY */ 2681 - #define EXT_HASH_TREE_INDEXED_DIR 0x00001000 /* GET-ONLY EXT3_INDEX_FL */ 2682 - /* 0x2000 reserved for IMAGIC_FL */ 2683 - #define EXT_JOURNAL_THIS_FILE 0x00004000 /* GET-ONLY EXT3_JOURNAL_DATA_FL */ 2684 - /* 0x8000 reserved for EXT3_NOTAIL_FL */ 2685 - #define EXT_SYNCHRONOUS_DIR 0x00010000 /* EXT3_DIRSYNC_FL */ 2686 - #define EXT_TOPDIR 0x00020000 /* EXT3_TOPDIR_FL */ 2687 - 2688 - #define EXT_SET_MASK 0x000300FF 2689 - #define EXT_GET_MASK 0x0003DFFF 2655 + /* flags for lsattr and chflags commands removed; now in uapi/linux/fs.h */ 2690 2656 2691 2657 typedef struct file_chattr_info { 2692 2658 __le64 mask; /* list of all possible attribute bits */
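The NFS_SPECFILE_* InodeType values added above are ASCII tags packed little-endian into a 64-bit integer: 0x524843 is "CHR", 0x4F464946 is "FIFO", and so on (NFS_SPECFILE_LNK carries an extra 0x01 byte after "LNK"). A small packer that reproduces them:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Pack an ASCII tag into a 64-bit value, first character in the least
 * significant byte, matching the little-endian NFS_SPECFILE_* layout. */
static uint64_t pack_tag(const char *s)
{
    uint64_t v = 0;
    size_t len = strlen(s);
    for (size_t i = 0; i < len && i < 8; i++)
        v |= (uint64_t)(unsigned char)s[i] << (8 * i);
    return v;
}
```

Checking constants like these against their ASCII spelling is a cheap way to catch transposed bytes when adding new tags.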
+34 -7
fs/cifs/cifssmb.c
··· 463 463 cifs_max_pending); 464 464 set_credits(server, server->maxReq); 465 465 server->maxBuf = le16_to_cpu(rsp->MaxBufSize); 466 - server->max_vcs = le16_to_cpu(rsp->MaxNumberVcs); 467 466 /* even though we do not use raw we might as well set this 468 467 accurately, in case we ever find a need for it */ 469 468 if ((le16_to_cpu(rsp->RawMode) & RAW_ENABLE) == RAW_ENABLE) { ··· 3088 3089 bool is_unicode; 3089 3090 unsigned int sub_len; 3090 3091 char *sub_start; 3091 - struct reparse_data *reparse_buf; 3092 + struct reparse_symlink_data *reparse_buf; 3093 + struct reparse_posix_data *posix_buf; 3092 3094 __u32 data_offset, data_count; 3093 3095 char *end_of_smb; 3094 3096 ··· 3138 3138 goto qreparse_out; 3139 3139 } 3140 3140 end_of_smb = 2 + get_bcc(&pSMBr->hdr) + (char *)&pSMBr->ByteCount; 3141 - reparse_buf = (struct reparse_data *) 3141 + reparse_buf = (struct reparse_symlink_data *) 3142 3142 ((char *)&pSMBr->hdr.Protocol + data_offset); 3143 3143 if ((char *)reparse_buf >= end_of_smb) { 3144 3144 rc = -EIO; 3145 3145 goto qreparse_out; 3146 3146 } 3147 - if ((reparse_buf->PathBuffer + reparse_buf->PrintNameOffset + 3148 - reparse_buf->PrintNameLength) > end_of_smb) { 3147 + if (reparse_buf->ReparseTag == cpu_to_le32(IO_REPARSE_TAG_NFS)) { 3148 + cifs_dbg(FYI, "NFS style reparse tag\n"); 3149 + posix_buf = (struct reparse_posix_data *)reparse_buf; 3150 + 3151 + if (posix_buf->InodeType != cpu_to_le64(NFS_SPECFILE_LNK)) { 3152 + cifs_dbg(FYI, "unsupported file type 0x%llx\n", 3153 + le64_to_cpu(posix_buf->InodeType)); 3154 + rc = -EOPNOTSUPP; 3155 + goto qreparse_out; 3156 + } 3157 + is_unicode = true; 3158 + sub_len = le16_to_cpu(reparse_buf->ReparseDataLength); 3159 + if (posix_buf->PathBuffer + sub_len > end_of_smb) { 3160 + cifs_dbg(FYI, "reparse buf beyond SMB\n"); 3161 + rc = -EIO; 3162 + goto qreparse_out; 3163 + } 3164 + *symlinkinfo = cifs_strndup_from_utf16(posix_buf->PathBuffer, 3165 + sub_len, is_unicode, nls_codepage); 3166 + goto qreparse_out; 3167 + } else if (reparse_buf->ReparseTag != 3168 + cpu_to_le32(IO_REPARSE_TAG_SYMLINK)) { 3169 + rc = -EOPNOTSUPP; 3170 + goto qreparse_out; 3171 + } 3172 + 3173 + /* Reparse tag is NTFS symlink */ 3174 + sub_start = le16_to_cpu(reparse_buf->SubstituteNameOffset) + 3175 + reparse_buf->PathBuffer; 3176 + sub_len = le16_to_cpu(reparse_buf->SubstituteNameLength); 3177 + if (sub_start + sub_len > end_of_smb) { 3149 3178 cifs_dbg(FYI, "reparse buf beyond SMB\n"); 3150 3179 rc = -EIO; 3151 3180 goto qreparse_out; 3152 3181 } 3153 - sub_start = reparse_buf->SubstituteNameOffset + reparse_buf->PathBuffer; 3154 - sub_len = reparse_buf->SubstituteNameLength; 3155 3182 if (pSMBr->hdr.Flags2 & SMBFLG2_UNICODE) 3156 3183 is_unicode = true; 3157 3184 else
+8
fs/cifs/file.c
··· 3254 3254 /* 3255 3255 * Reads as many pages as possible from fscache. Returns -ENOBUFS 3256 3256 * immediately if the cookie is negative 3257 + * 3258 + * After this point, every page in the list might have PG_fscache set, 3259 + * so we will need to clean that up off of every page we don't use. 3257 3260 */ 3258 3261 rc = cifs_readpages_from_fscache(mapping->host, mapping, page_list, 3259 3262 &num_pages); ··· 3379 3376 kref_put(&rdata->refcount, cifs_readdata_release); 3380 3377 } 3381 3378 3379 + /* Any pages that have been shown to fscache but didn't get added to 3380 + * the pagecache must be uncached before they get returned to the 3381 + * allocator. 3382 + */ 3383 + cifs_fscache_readpages_cancel(mapping->host, page_list); 3382 3384 return rc; 3383 3385 } 3384 3386
+7
fs/cifs/fscache.c
··· 223 223 fscache_uncache_page(CIFS_I(inode)->fscache, page); 224 224 } 225 225 226 + void __cifs_fscache_readpages_cancel(struct inode *inode, struct list_head *pages) 227 + { 228 + cifs_dbg(FYI, "%s: (fsc: %p, i: %p)\n", 229 + __func__, CIFS_I(inode)->fscache, inode); 230 + fscache_readpages_cancel(CIFS_I(inode)->fscache, pages); 231 + } 232 + 226 233 void __cifs_fscache_invalidate_page(struct page *page, struct inode *inode) 227 234 { 228 235 struct cifsInodeInfo *cifsi = CIFS_I(inode);
+13
fs/cifs/fscache.h
··· 54 54 struct address_space *, 55 55 struct list_head *, 56 56 unsigned *); 57 + extern void __cifs_fscache_readpages_cancel(struct inode *, struct list_head *); 57 58 58 59 extern void __cifs_readpage_to_fscache(struct inode *, struct page *); 59 60 ··· 90 89 { 91 90 if (PageFsCache(page)) 92 91 __cifs_readpage_to_fscache(inode, page); 92 + } 93 + 94 + static inline void cifs_fscache_readpages_cancel(struct inode *inode, 95 + struct list_head *pages) 96 + { 97 + if (CIFS_I(inode)->fscache) 98 + return __cifs_fscache_readpages_cancel(inode, pages); 93 99 } 94 100 95 101 #else /* CONFIG_CIFS_FSCACHE */ ··· 138 130 139 131 static inline void cifs_readpage_to_fscache(struct inode *inode, 140 132 struct page *page) {} 133 + 134 + static inline void cifs_fscache_readpages_cancel(struct inode *inode, 135 + struct list_head *pages) 136 + { 137 + } 141 138 142 139 #endif /* CONFIG_CIFS_FSCACHE */ 143 140
+39 -6
fs/cifs/inode.c
··· 120 120 cifs_i->invalid_mapping = true; 121 121 } 122 122 123 + /* 124 + * copy nlink to the inode, unless it wasn't provided. Provide 125 + * sane values if we don't have an existing one and none was provided 126 + */ 127 + static void 128 + cifs_nlink_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr) 129 + { 130 + /* 131 + * if we're in a situation where we can't trust what we 132 + * got from the server (readdir, some non-unix cases) 133 + * fake reasonable values 134 + */ 135 + if (fattr->cf_flags & CIFS_FATTR_UNKNOWN_NLINK) { 136 + /* only provide fake values on a new inode */ 137 + if (inode->i_state & I_NEW) { 138 + if (fattr->cf_cifsattrs & ATTR_DIRECTORY) 139 + set_nlink(inode, 2); 140 + else 141 + set_nlink(inode, 1); 142 + } 143 + return; 144 + } 145 + 146 + /* we trust the server, so update it */ 147 + set_nlink(inode, fattr->cf_nlink); 148 + } 149 + 123 150 /* populate an inode with info from a cifs_fattr struct */ 124 151 void 125 152 cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr) ··· 161 134 inode->i_mtime = fattr->cf_mtime; 162 135 inode->i_ctime = fattr->cf_ctime; 163 136 inode->i_rdev = fattr->cf_rdev; 164 - set_nlink(inode, fattr->cf_nlink); 137 + cifs_nlink_fattr_to_inode(inode, fattr); 165 138 inode->i_uid = fattr->cf_uid; 166 139 inode->i_gid = fattr->cf_gid; 167 140 ··· 568 541 fattr->cf_bytes = le64_to_cpu(info->AllocationSize); 569 542 fattr->cf_createtime = le64_to_cpu(info->CreationTime); 570 543 544 + fattr->cf_nlink = le32_to_cpu(info->NumberOfLinks); 571 545 if (fattr->cf_cifsattrs & ATTR_DIRECTORY) { 572 546 fattr->cf_mode = S_IFDIR | cifs_sb->mnt_dir_mode; 573 547 fattr->cf_dtype = DT_DIR; ··· 576 548 * Server can return wrong NumberOfLinks value for directories 577 549 * when Unix extensions are disabled - fake it. 578 550 */ 579 - fattr->cf_nlink = 2; 551 + if (!tcon->unix_ext) 552 + fattr->cf_flags |= CIFS_FATTR_UNKNOWN_NLINK; 580 553 } else if (fattr->cf_cifsattrs & ATTR_REPARSE) { 581 554 fattr->cf_mode = S_IFLNK; 582 555 fattr->cf_dtype = DT_LNK; ··· 590 561 if (fattr->cf_cifsattrs & ATTR_READONLY) 591 562 fattr->cf_mode &= ~(S_IWUGO); 592 563 593 - fattr->cf_nlink = le32_to_cpu(info->NumberOfLinks); 594 - if (fattr->cf_nlink < 1) { 595 - cifs_dbg(1, "replacing bogus file nlink value %u\n", 564 + /* 565 + * Don't accept zero nlink from non-unix servers unless 566 + * delete is pending. Instead mark it as unknown. 567 + */ 568 + if ((fattr->cf_nlink < 1) && !tcon->unix_ext && 569 + !info->DeletePending) { 570 + cifs_dbg(1, "bogus file nlink value %u\n", 596 571 fattr->cf_nlink); 597 - fattr->cf_nlink = 1; 572 + fattr->cf_flags |= CIFS_FATTR_UNKNOWN_NLINK; 598 573 } 599 574 } 600 575
+3 -1
fs/cifs/netmisc.c
··· 780 780 ERRDOS, ERRnoaccess, 0xc0000290}, { 781 781 ERRDOS, ERRbadfunc, 0xc000029c}, { 782 782 ERRDOS, ERRsymlink, NT_STATUS_STOPPED_ON_SYMLINK}, { 783 - ERRDOS, ERRinvlevel, 0x007c0001}, }; 783 + ERRDOS, ERRinvlevel, 0x007c0001}, { 784 + 0, 0, 0 } 785 + }; 784 786 785 787 /***************************************************************************** 786 788 Print an error message from the status code
+3
fs/cifs/readdir.c
··· 180 180 fattr->cf_dtype = DT_REG; 181 181 } 182 182 183 + /* non-unix readdir doesn't provide nlink */ 184 + fattr->cf_flags |= CIFS_FATTR_UNKNOWN_NLINK; 185 + 183 186 if (fattr->cf_cifsattrs & ATTR_READONLY) 184 187 fattr->cf_mode &= ~S_IWUGO; 185 188
+3 -85
fs/cifs/sess.c
··· 32 32 #include <linux/slab.h> 33 33 #include "cifs_spnego.h" 34 34 35 - /* 36 - * Checks if this is the first smb session to be reconnected after 37 - * the socket has been reestablished (so we know whether to use vc 0). 38 - * Called while holding the cifs_tcp_ses_lock, so do not block 39 - */ 40 - static bool is_first_ses_reconnect(struct cifs_ses *ses) 41 - { 42 - struct list_head *tmp; 43 - struct cifs_ses *tmp_ses; 44 - 45 - list_for_each(tmp, &ses->server->smb_ses_list) { 46 - tmp_ses = list_entry(tmp, struct cifs_ses, 47 - smb_ses_list); 48 - if (tmp_ses->need_reconnect == false) 49 - return false; 50 - } 51 - /* could not find a session that was already connected, 52 - this must be the first one we are reconnecting */ 53 - return true; 54 - } 55 - 56 - /* 57 - * vc number 0 is treated specially by some servers, and should be the 58 - * first one we request. After that we can use vcnumbers up to maxvcs, 59 - * one for each smb session (some Windows versions set maxvcs incorrectly 60 - * so maxvc=1 can be ignored). If we have too many vcs, we can reuse 61 - * any vc but zero (some servers reset the connection on vcnum zero) 62 - * 63 - */ 64 - static __le16 get_next_vcnum(struct cifs_ses *ses) 65 - { 66 - __u16 vcnum = 0; 67 - struct list_head *tmp; 68 - struct cifs_ses *tmp_ses; 69 - __u16 max_vcs = ses->server->max_vcs; 70 - __u16 i; 71 - int free_vc_found = 0; 72 - 73 - /* Quoting the MS-SMB specification: "Windows-based SMB servers set this 74 - field to one but do not enforce this limit, which allows an SMB client 75 - to establish more virtual circuits than allowed by this value ... but 76 - other server implementations can enforce this limit." */ 77 - if (max_vcs < 2) 78 - max_vcs = 0xFFFF; 79 - 80 - spin_lock(&cifs_tcp_ses_lock); 81 - if ((ses->need_reconnect) && is_first_ses_reconnect(ses)) 82 - goto get_vc_num_exit; /* vcnum will be zero */ 83 - for (i = ses->server->srv_count - 1; i < max_vcs; i++) { 84 - if (i == 0) /* this is the only connection, use vc 0 */ 85 - break; 86 - 87 - free_vc_found = 1; 88 - 89 - list_for_each(tmp, &ses->server->smb_ses_list) { 90 - tmp_ses = list_entry(tmp, struct cifs_ses, 91 - smb_ses_list); 92 - if (tmp_ses->vcnum == i) { 93 - free_vc_found = 0; 94 - break; /* found duplicate, try next vcnum */ 95 - } 96 - } 97 - if (free_vc_found) 98 - break; /* we found a vcnumber that will work - use it */ 99 - } 100 - 101 - if (i == 0) 102 - vcnum = 0; /* for most common case, ie if one smb session, use 103 - vc zero. Also for case when no free vcnum, zero 104 - is safest to send (some clients only send zero) */ 105 - else if (free_vc_found == 0) 106 - vcnum = 1; /* we can not reuse vc=0 safely, since some servers 107 - reset all uids on that, but 1 is ok. */ 108 - else 109 - vcnum = i; 110 - ses->vcnum = vcnum; 111 - get_vc_num_exit: 112 - spin_unlock(&cifs_tcp_ses_lock); 113 - 114 - return cpu_to_le16(vcnum); 115 - } 116 - 117 35 static __u32 cifs_ssetup_hdr(struct cifs_ses *ses, SESSION_SETUP_ANDX *pSMB) 118 36 { 119 37 __u32 capabilities = 0; ··· 46 128 CIFSMaxBufSize + MAX_CIFS_HDR_SIZE - 4, 47 129 USHRT_MAX)); 48 130 pSMB->req.MaxMpxCount = cpu_to_le16(ses->server->maxReq); 49 - pSMB->req.VcNumber = get_next_vcnum(ses); 131 + pSMB->req.VcNumber = __constant_cpu_to_le16(1); 50 132 51 133 /* Now no need to set SMBFLG_CASELESS or obsolete CANONICAL PATH */ 52 134 ··· 500 582 return NTLMv2; 501 583 if (global_secflags & CIFSSEC_MAY_NTLM) 502 584 return NTLM; 503 - /* Fallthrough */ 504 585 default: 505 - return Unspecified; 586 + /* Fallthrough to attempt LANMAN authentication next */ 587 + break; 506 588 } 507 589 case CIFS_NEGFLAVOR_LANMAN: 508 590 switch (requested) {
+6
fs/cifs/smb2pdu.c
··· 687 687 else 688 688 return -EIO; 689 689 690 + /* no need to send SMB logoff if uid already closed due to reconnect */ 691 + if (ses->need_reconnect) 692 + goto smb2_session_already_dead; 693 + 690 694 rc = small_smb2_init(SMB2_LOGOFF, NULL, (void **) &req); 691 695 if (rc) 692 696 return rc; ··· 705 701 * No tcon so can't do 706 702 * cifs_stats_inc(&tcon->stats.smb2_stats.smb2_com_fail[SMB2...]); 707 703 */ 704 + 705 + smb2_session_already_dead: 708 706 return rc; 709 707 } 710 708
+14
fs/cifs/smbfsctl.h
··· 97 97 #define FSCTL_QUERY_NETWORK_INTERFACE_INFO 0x001401FC /* BB add struct */ 98 98 #define FSCTL_SRV_READ_HASH 0x001441BB /* BB add struct */ 99 99 100 + /* See FSCC 2.1.2.5 */ 100 101 #define IO_REPARSE_TAG_MOUNT_POINT 0xA0000003 101 102 #define IO_REPARSE_TAG_HSM 0xC0000004 102 103 #define IO_REPARSE_TAG_SIS 0x80000007 104 + #define IO_REPARSE_TAG_HSM2 0x80000006 105 + #define IO_REPARSE_TAG_DRIVER_EXTENDER 0x80000005 106 + /* Used by the DFS filter. See MS-DFSC */ 107 + #define IO_REPARSE_TAG_DFS 0x8000000A 108 + /* Used by the DFS filter See MS-DFSC */ 109 + #define IO_REPARSE_TAG_DFSR 0x80000012 110 + #define IO_REPARSE_TAG_FILTER_MANAGER 0x8000000B 111 + /* See section MS-FSCC 2.1.2.4 */ 112 + #define IO_REPARSE_TAG_SYMLINK 0xA000000C 113 + #define IO_REPARSE_TAG_DEDUP 0x80000013 114 + #define IO_REPARSE_APPXSTREAM 0xC0000014 115 + /* NFS symlinks, Win 8/SMB3 and later */ 116 + #define IO_REPARSE_TAG_NFS 0x80000014 103 117 104 118 /* fsctl flags */ 105 119 /* If Flags is set to this value, the request is an FSCTL not ioctl request */
+7 -2
fs/cifs/transport.c
··· 410 410 wait_for_free_request(struct TCP_Server_Info *server, const int timeout, 411 411 const int optype) 412 412 { 413 - return wait_for_free_credits(server, timeout, 414 - server->ops->get_credits_field(server, optype)); 413 + int *val; 414 + 415 + val = server->ops->get_credits_field(server, optype); 416 + /* Since an echo is already inflight, no need to wait to send another */ 417 + if (*val <= 0 && optype == CIFS_ECHO_OP) 418 + return -EAGAIN; 419 + return wait_for_free_credits(server, timeout, val); 415 420 } 416 421 417 422 static int allocate_mid(struct cifs_ses *ses, struct smb_hdr *in_buf,
+7 -8
fs/dcache.c
··· 1331 1331 * list is non-empty and continue searching. 1332 1332 */ 1333 1333 1334 - /** 1335 - * have_submounts - check for mounts over a dentry 1336 - * @parent: dentry to check. 1337 - * 1338 - * Return true if the parent or its subdirectories contain 1339 - * a mount point 1340 - */ 1341 - 1342 1334 static enum d_walk_ret check_mount(void *data, struct dentry *dentry) 1343 1335 { 1344 1336 int *ret = data; ··· 1341 1349 return D_WALK_CONTINUE; 1342 1350 } 1343 1351 1352 + /** 1353 + * have_submounts - check for mounts over a dentry 1354 + * @parent: dentry to check. 1355 + * 1356 + * Return true if the parent or its subdirectories contain 1357 + * a mount point 1358 + */ 1344 1359 int have_submounts(struct dentry *parent) 1345 1360 { 1346 1361 int ret = 0;
+2 -3
fs/ext3/namei.c
··· 1783 1783 d_tmpfile(dentry, inode); 1784 1784 err = ext3_orphan_add(handle, inode); 1785 1785 if (err) 1786 - goto err_drop_inode; 1786 + goto err_unlock_inode; 1787 1787 mark_inode_dirty(inode); 1788 1788 unlock_new_inode(inode); 1789 1789 } ··· 1791 1791 if (err == -ENOSPC && ext3_should_retry_alloc(dir->i_sb, &retries)) 1792 1792 goto retry; 1793 1793 return err; 1794 - err_drop_inode: 1794 + err_unlock_inode: 1795 1795 ext3_journal_stop(handle); 1796 1796 unlock_new_inode(inode); 1797 - iput(inode); 1798 1797 return err; 1799 1798 } 1800 1799
+1 -1
fs/ext4/inode.c
··· 2563 2563 break; 2564 2564 } 2565 2565 blk_finish_plug(&plug); 2566 - if (!ret && !cycled) { 2566 + if (!ret && !cycled && wbc->nr_to_write > 0) { 2567 2567 cycled = 1; 2568 2568 mpd.last_page = writeback_index - 1; 2569 2569 mpd.first_page = 0;
+2 -3
fs/ext4/namei.c
··· 2319 2319 d_tmpfile(dentry, inode); 2320 2320 err = ext4_orphan_add(handle, inode); 2321 2321 if (err) 2322 - goto err_drop_inode; 2322 + goto err_unlock_inode; 2323 2323 mark_inode_dirty(inode); 2324 2324 unlock_new_inode(inode); 2325 2325 } ··· 2328 2328 if (err == -ENOSPC && ext4_should_retry_alloc(dir->i_sb, &retries)) 2329 2329 goto retry; 2330 2330 return err; 2331 - err_drop_inode: 2331 + err_unlock_inode: 2332 2332 ext4_journal_stop(handle); 2333 2333 unlock_new_inode(inode); 2334 - iput(inode); 2335 2334 return err; 2336 2335 } 2337 2336
+2
fs/ext4/xattr.c
··· 1350 1350 s_min_extra_isize) { 1351 1351 tried_min_extra_isize++; 1352 1352 new_extra_isize = s_min_extra_isize; 1353 + kfree(is); is = NULL; 1354 + kfree(bs); bs = NULL; 1353 1355 goto retry; 1354 1356 } 1355 1357 error = -1;
+13 -7
fs/fuse/dir.c
··· 182 182 struct inode *inode; 183 183 struct dentry *parent; 184 184 struct fuse_conn *fc; 185 + struct fuse_inode *fi; 185 186 int ret; 186 187 187 188 inode = ACCESS_ONCE(entry->d_inode); ··· 229 228 if (!err && !outarg.nodeid) 230 229 err = -ENOENT; 231 230 if (!err) { 232 - struct fuse_inode *fi = get_fuse_inode(inode); 231 + fi = get_fuse_inode(inode); 233 232 if (outarg.nodeid != get_node_id(inode)) { 234 233 fuse_queue_forget(fc, forget, outarg.nodeid, 1); 235 234 goto invalid; ··· 247 246 attr_version); 248 247 fuse_change_entry_timeout(entry, &outarg); 249 248 } else if (inode) { 250 - fc = get_fuse_conn(inode); 251 - if (fc->readdirplus_auto) { 249 + fi = get_fuse_inode(inode); 250 + if (flags & LOOKUP_RCU) { 251 + if (test_bit(FUSE_I_INIT_RDPLUS, &fi->state)) 252 + return -ECHILD; 253 + } else if (test_and_clear_bit(FUSE_I_INIT_RDPLUS, &fi->state)) { 252 254 parent = dget_parent(entry); 253 255 fuse_advise_use_readdirplus(parent->d_inode); 254 256 dput(parent); ··· 263 259 264 260 invalid: 265 261 ret = 0; 266 - if (check_submounts_and_drop(entry) != 0) 262 + 263 + if (!(flags & LOOKUP_RCU) && check_submounts_and_drop(entry) != 0) 267 264 ret = 1; 268 265 goto out; 269 266 } ··· 1068 1063 struct fuse_access_in inarg; 1069 1064 int err; 1070 1065 1066 + BUG_ON(mask & MAY_NOT_BLOCK); 1067 + 1071 1068 if (fc->no_access) 1072 1069 return 0; 1073 1070 ··· 1157 1150 noticed immediately, only after the attribute 1158 1151 timeout has expired */ 1159 1152 } else if (mask & (MAY_ACCESS | MAY_CHDIR)) { 1160 - if (mask & MAY_NOT_BLOCK) 1161 - return -ECHILD; 1162 - 1163 1153 err = fuse_access(inode, mask); 1164 1154 } else if ((mask & MAY_EXEC) && S_ISREG(inode->i_mode)) { 1165 1155 if (!(inode->i_mode & S_IXUGO)) { ··· 1295 1291 } 1296 1292 1297 1293 found: 1294 + if (fc->readdirplus_auto) 1295 + set_bit(FUSE_I_INIT_RDPLUS, &get_fuse_inode(inode)->state); 1298 1296 fuse_change_entry_timeout(dentry, o); 1299 1297 1300 1298 err = 0;
+17 -6
fs/fuse/file.c
··· 2467 2467 { 2468 2468 struct fuse_file *ff = file->private_data; 2469 2469 struct inode *inode = file->f_inode; 2470 + struct fuse_inode *fi = get_fuse_inode(inode); 2470 2471 struct fuse_conn *fc = ff->fc; 2471 2472 struct fuse_req *req; 2472 2473 struct fuse_fallocate_in inarg = { ··· 2485 2484 2486 2485 if (lock_inode) { 2487 2486 mutex_lock(&inode->i_mutex); 2488 - if (mode & FALLOC_FL_PUNCH_HOLE) 2489 - fuse_set_nowrite(inode); 2487 + if (mode & FALLOC_FL_PUNCH_HOLE) { 2488 + loff_t endbyte = offset + length - 1; 2489 + err = filemap_write_and_wait_range(inode->i_mapping, 2490 + offset, endbyte); 2491 + if (err) 2492 + goto out; 2493 + 2494 + fuse_sync_writes(inode); 2495 + } 2490 2496 } 2497 + 2498 + if (!(mode & FALLOC_FL_KEEP_SIZE)) 2499 + set_bit(FUSE_I_SIZE_UNSTABLE, &fi->state); 2491 2500 2492 2501 req = fuse_get_req_nopages(fc); 2493 2502 if (IS_ERR(req)) { ··· 2531 2520 fuse_invalidate_attr(inode); 2532 2521 2533 2522 out: 2534 - if (lock_inode) { 2535 - if (mode & FALLOC_FL_PUNCH_HOLE) 2536 - fuse_release_nowrite(inode); 2523 + if (!(mode & FALLOC_FL_KEEP_SIZE)) 2524 + clear_bit(FUSE_I_SIZE_UNSTABLE, &fi->state); 2525 + 2526 + if (lock_inode) 2537 2527 mutex_unlock(&inode->i_mutex); 2538 - } 2539 2528 2540 2529 return err; 2541 2530 }
+2
fs/fuse/fuse_i.h
··· 115 115 enum { 116 116 /** Advise readdirplus */ 117 117 FUSE_I_ADVISE_RDPLUS, 118 + /** Initialized with readdirplus */ 119 + FUSE_I_INIT_RDPLUS, 118 120 /** An operation changing file size is in progress */ 119 121 FUSE_I_SIZE_UNSTABLE, 120 122 };
+1 -2
fs/jfs/jfs_inode.c
··· 95 95 96 96 if (insert_inode_locked(inode) < 0) { 97 97 rc = -EINVAL; 98 - goto fail_unlock; 98 + goto fail_put; 99 99 } 100 100 101 101 inode_init_owner(inode, parent, mode); ··· 156 156 fail_drop: 157 157 dquot_drop(inode); 158 158 inode->i_flags |= S_NOQUOTA; 159 - fail_unlock: 160 159 clear_nlink(inode); 161 160 unlock_new_inode(inode); 162 161 fail_put:
+2 -1
fs/namei.c
··· 2294 2294 * path_mountpoint - look up a path to be umounted 2295 2295 * @dfd: directory file descriptor to start walk from 2296 2296 * @name: full pathname to walk 2297 + * @path: pointer to container for result 2297 2298 * @flags: lookup flags 2298 2299 * 2299 2300 * Look up the given name, but don't attempt to revalidate the last component. 2300 - * Returns 0 and "path" will be valid on success; Retuns error otherwise. 2301 + * Returns 0 and "path" will be valid on success; Returns error otherwise. 2301 2302 */ 2302 2303 static int 2303 2304 path_mountpoint(int dfd, const char *name, struct path *path, unsigned int flags)
+7 -3
fs/proc/inode.c
··· 288 288 static unsigned long proc_reg_get_unmapped_area(struct file *file, unsigned long orig_addr, unsigned long len, unsigned long pgoff, unsigned long flags) 289 289 { 290 290 struct proc_dir_entry *pde = PDE(file_inode(file)); 291 - int rv = -EIO; 292 - unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long); 291 + unsigned long rv = -EIO; 292 + unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long) = NULL; 293 293 if (use_pde(pde)) { 294 - get_unmapped_area = pde->proc_fops->get_unmapped_area; 294 + #ifdef CONFIG_MMU 295 + get_unmapped_area = current->mm->get_unmapped_area; 296 + #endif 297 + if (pde->proc_fops->get_unmapped_area) 298 + get_unmapped_area = pde->proc_fops->get_unmapped_area; 295 299 if (get_unmapped_area) 296 300 rv = get_unmapped_area(file, orig_addr, len, pgoff, flags); 297 301 unuse_pde(pde);
+3 -1
fs/proc/task_mmu.c
··· 941 941 frame = pte_pfn(pte); 942 942 flags = PM_PRESENT; 943 943 page = vm_normal_page(vma, addr, pte); 944 + if (pte_soft_dirty(pte)) 945 + flags2 |= __PM_SOFT_DIRTY; 944 946 } else if (is_swap_pte(pte)) { 945 947 swp_entry_t entry; 946 948 if (pte_swp_soft_dirty(pte)) ··· 962 960 963 961 if (page && !PageAnon(page)) 964 962 flags |= PM_FILE; 965 - if ((vma->vm_flags & VM_SOFTDIRTY) || pte_soft_dirty(pte)) 963 + if ((vma->vm_flags & VM_SOFTDIRTY)) 966 964 flags2 |= __PM_SOFT_DIRTY; 967 965 968 966 *pme = make_pme(PM_PFRAME(frame) | PM_STATUS2(pm->v2, flags2) | flags);
+1 -1
fs/statfs.c
··· 94 94 95 95 int fd_statfs(int fd, struct kstatfs *st) 96 96 { 97 - struct fd f = fdget(fd); 97 + struct fd f = fdget_raw(fd); 98 98 int error = -EBADF; 99 99 if (f.file) { 100 100 error = vfs_statfs(&f.file->f_path, st);
+3 -3
fs/xfs/xfs_dir2_block.c
··· 1158 1158 /* 1159 1159 * Create entry for . 1160 1160 */ 1161 - dep = xfs_dir3_data_dot_entry_p(hdr); 1161 + dep = xfs_dir3_data_dot_entry_p(mp, hdr); 1162 1162 dep->inumber = cpu_to_be64(dp->i_ino); 1163 1163 dep->namelen = 1; 1164 1164 dep->name[0] = '.'; ··· 1172 1172 /* 1173 1173 * Create entry for .. 1174 1174 */ 1175 - dep = xfs_dir3_data_dotdot_entry_p(hdr); 1175 + dep = xfs_dir3_data_dotdot_entry_p(mp, hdr); 1176 1176 dep->inumber = cpu_to_be64(xfs_dir2_sf_get_parent_ino(sfp)); 1177 1177 dep->namelen = 2; 1178 1178 dep->name[0] = dep->name[1] = '.'; ··· 1183 1183 blp[1].hashval = cpu_to_be32(xfs_dir_hash_dotdot); 1184 1184 blp[1].address = cpu_to_be32(xfs_dir2_byte_to_dataptr(mp, 1185 1185 (char *)dep - (char *)hdr)); 1186 - offset = xfs_dir3_data_first_offset(hdr); 1186 + offset = xfs_dir3_data_first_offset(mp); 1187 1187 /* 1188 1188 * Loop over existing entries, stuff them in. 1189 1189 */
+20 -31
fs/xfs/xfs_dir2_format.h
··· 497 497 /* 498 498 * Offsets of . and .. in data space (always block 0) 499 499 * 500 - * The macros are used for shortform directories as they have no headers to read 501 - * the magic number out of. Shortform directories need to know the size of the 502 - * data block header because the sfe embeds the block offset of the entry into 503 - * it so that it doesn't change when format conversion occurs. Bad Things Happen 504 - * if we don't follow this rule. 505 - * 506 500 * XXX: there is scope for significant optimisation of the logic here. Right 507 501 * now we are checking for "dir3 format" over and over again. Ideally we should 508 502 * only do it once for each operation. 509 503 */ 510 - #define XFS_DIR3_DATA_DOT_OFFSET(mp) \ 511 - xfs_dir3_data_hdr_size(xfs_sb_version_hascrc(&(mp)->m_sb)) 512 - #define XFS_DIR3_DATA_DOTDOT_OFFSET(mp) \ 513 - (XFS_DIR3_DATA_DOT_OFFSET(mp) + xfs_dir3_data_entsize(mp, 1)) 514 - #define XFS_DIR3_DATA_FIRST_OFFSET(mp) \ 515 - (XFS_DIR3_DATA_DOTDOT_OFFSET(mp) + xfs_dir3_data_entsize(mp, 2)) 516 - 517 504 static inline xfs_dir2_data_aoff_t 518 - xfs_dir3_data_dot_offset(struct xfs_dir2_data_hdr *hdr) 505 + xfs_dir3_data_dot_offset(struct xfs_mount *mp) 519 506 { 520 - return xfs_dir3_data_entry_offset(hdr); 507 + return xfs_dir3_data_hdr_size(xfs_sb_version_hascrc(&mp->m_sb)); 521 508 } 522 509 523 510 static inline xfs_dir2_data_aoff_t 524 - xfs_dir3_data_dotdot_offset(struct xfs_dir2_data_hdr *hdr) 511 + xfs_dir3_data_dotdot_offset(struct xfs_mount *mp) 525 512 { 526 - bool dir3 = hdr->magic == cpu_to_be32(XFS_DIR3_DATA_MAGIC) || 527 - hdr->magic == cpu_to_be32(XFS_DIR3_BLOCK_MAGIC); 528 - return xfs_dir3_data_dot_offset(hdr) + 529 - __xfs_dir3_data_entsize(dir3, 1); 513 + return xfs_dir3_data_dot_offset(mp) + 514 + xfs_dir3_data_entsize(mp, 1); 530 515 } 531 516 532 517 static inline xfs_dir2_data_aoff_t 533 - xfs_dir3_data_first_offset(struct xfs_dir2_data_hdr *hdr) 518 + xfs_dir3_data_first_offset(struct xfs_mount *mp) 534 519 { 535 - bool dir3 = hdr->magic == cpu_to_be32(XFS_DIR3_DATA_MAGIC) || 536 - hdr->magic == cpu_to_be32(XFS_DIR3_BLOCK_MAGIC); 537 - return xfs_dir3_data_dotdot_offset(hdr) + 538 - __xfs_dir3_data_entsize(dir3, 2); 520 + return xfs_dir3_data_dotdot_offset(mp) + 521 + xfs_dir3_data_entsize(mp, 2); 539 522 } 540 523 541 524 /* 542 525 * location of . and .. in data space (always block 0) 543 526 */ 544 527 static inline struct xfs_dir2_data_entry * 545 - xfs_dir3_data_dot_entry_p(struct xfs_dir2_data_hdr *hdr) 528 + xfs_dir3_data_dot_entry_p( 529 + struct xfs_mount *mp, 530 + struct xfs_dir2_data_hdr *hdr) 546 531 { 547 532 return (struct xfs_dir2_data_entry *) 548 - ((char *)hdr + xfs_dir3_data_dot_offset(hdr)); 533 + ((char *)hdr + xfs_dir3_data_dot_offset(mp)); 549 534 } 550 535 551 536 static inline struct xfs_dir2_data_entry * 552 - xfs_dir3_data_dotdot_entry_p(struct xfs_dir2_data_hdr *hdr) 537 + xfs_dir3_data_dotdot_entry_p( 538 + struct xfs_mount *mp, 539 + struct xfs_dir2_data_hdr *hdr) 553 540 { 554 541 return (struct xfs_dir2_data_entry *) 555 - ((char *)hdr + xfs_dir3_data_dotdot_offset(hdr)); 542 + ((char *)hdr + xfs_dir3_data_dotdot_offset(mp)); 556 543 } 557 544 558 545 static inline struct xfs_dir2_data_entry * 559 - xfs_dir3_data_first_entry_p(struct xfs_dir2_data_hdr *hdr) 546 + xfs_dir3_data_first_entry_p( 547 + struct xfs_mount *mp, 548 + struct xfs_dir2_data_hdr *hdr) 560 549 { 561 550 return (struct xfs_dir2_data_entry *) 562 - ((char *)hdr + xfs_dir3_data_first_offset(hdr)); 551 + ((char *)hdr + xfs_dir3_data_first_offset(mp)); 563 552 } 564 553 565 554 /*
+2 -2
fs/xfs/xfs_dir2_readdir.c
··· 119 119 * mp->m_dirdatablk. 120 120 */ 121 121 dot_offset = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, 122 - XFS_DIR3_DATA_DOT_OFFSET(mp)); 122 + xfs_dir3_data_dot_offset(mp)); 123 123 dotdot_offset = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk, 124 - XFS_DIR3_DATA_DOTDOT_OFFSET(mp)); 124 + xfs_dir3_data_dotdot_offset(mp)); 125 125 126 126 /* 127 127 * Put . entry unless we're starting past it.
+3 -3
fs/xfs/xfs_dir2_sf.c
··· 557 557 * to insert the new entry. 558 558 * If it's going to end up at the end then oldsfep will point there. 559 559 */ 560 - for (offset = XFS_DIR3_DATA_FIRST_OFFSET(mp), 560 + for (offset = xfs_dir3_data_first_offset(mp), 561 561 oldsfep = xfs_dir2_sf_firstentry(oldsfp), 562 562 add_datasize = xfs_dir3_data_entsize(mp, args->namelen), 563 563 eof = (char *)oldsfep == &buf[old_isize]; ··· 640 640 641 641 sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data; 642 642 size = xfs_dir3_data_entsize(mp, args->namelen); 643 - offset = XFS_DIR3_DATA_FIRST_OFFSET(mp); 643 + offset = xfs_dir3_data_first_offset(mp); 644 644 sfep = xfs_dir2_sf_firstentry(sfp); 645 645 holefit = 0; 646 646 /* ··· 713 713 mp = dp->i_mount; 714 714 715 715 sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data; 716 - offset = XFS_DIR3_DATA_FIRST_OFFSET(mp); 716 + offset = xfs_dir3_data_first_offset(mp); 717 717 ino = xfs_dir2_sf_get_parent_ino(sfp); 718 718 i8count = ino > XFS_DIR2_MAX_SHORT_INUM; 719 719
+16 -3
fs/xfs/xfs_dquot.c
··· 64 64 struct kmem_zone *xfs_qm_dqtrxzone; 65 65 static struct kmem_zone *xfs_qm_dqzone; 66 66 67 - static struct lock_class_key xfs_dquot_other_class; 67 + static struct lock_class_key xfs_dquot_group_class; 68 + static struct lock_class_key xfs_dquot_project_class; 68 69 69 70 /* 70 71 * This is called to free all the memory associated with a dquot ··· 704 703 * Make sure group quotas have a different lock class than user 705 704 * quotas. 706 705 */ 707 - if (!(type & XFS_DQ_USER)) 708 - lockdep_set_class(&dqp->q_qlock, &xfs_dquot_other_class); 706 + switch (type) { 707 + case XFS_DQ_USER: 708 + /* uses the default lock class */ 709 + break; 710 + case XFS_DQ_GROUP: 711 + lockdep_set_class(&dqp->q_qlock, &xfs_dquot_group_class); 712 + break; 713 + case XFS_DQ_PROJ: 714 + lockdep_set_class(&dqp->q_qlock, &xfs_dquot_project_class); 715 + break; 716 + default: 717 + ASSERT(0); 718 + break; 719 + } 709 720 710 721 XFS_STATS_INC(xs_qm_dquot); 711 722
+1
fs/xfs/xfs_log_recover.c
··· 1585 1585 "bad number of regions (%d) in inode log format", 1586 1586 in_f->ilf_size); 1587 1587 ASSERT(0); 1588 + kmem_free(ptr); 1588 1589 return XFS_ERROR(EIO); 1589 1590 } 1590 1591
-7
include/acpi/acpi_bus.h
··· 311 311 unsigned int physical_node_count; 312 312 struct list_head physical_node_list; 313 313 struct mutex physical_node_lock; 314 - struct list_head power_dependent; 315 314 void (*remove)(struct acpi_device *); 316 315 }; 317 316 ··· 455 456 acpi_status acpi_remove_pm_notifier(struct acpi_device *adev, 456 457 acpi_notify_handler handler); 457 458 int acpi_pm_device_sleep_state(struct device *, int *, int); 458 - void acpi_dev_pm_add_dependent(acpi_handle handle, struct device *depdev); 459 - void acpi_dev_pm_remove_dependent(acpi_handle handle, struct device *depdev); 460 459 #else 461 460 static inline acpi_status acpi_add_pm_notifier(struct acpi_device *adev, 462 461 acpi_notify_handler handler, ··· 475 478 return (m >= ACPI_STATE_D0 && m <= ACPI_STATE_D3_COLD) ? 476 479 m : ACPI_STATE_D0; 477 480 } 478 - static inline void acpi_dev_pm_add_dependent(acpi_handle handle, 479 - struct device *depdev) {} 480 - static inline void acpi_dev_pm_remove_dependent(acpi_handle handle, 481 - struct device *depdev) {} 482 481 #endif 483 482 484 483 #ifdef CONFIG_PM_RUNTIME
+2 -2
include/asm-generic/hugetlb.h
··· 6 6 return mk_pte(page, pgprot); 7 7 } 8 8 9 - static inline int huge_pte_write(pte_t pte) 9 + static inline unsigned long huge_pte_write(pte_t pte) 10 10 { 11 11 return pte_write(pte); 12 12 } 13 13 14 - static inline int huge_pte_dirty(pte_t pte) 14 + static inline unsigned long huge_pte_dirty(pte_t pte) 15 15 { 16 16 return pte_dirty(pte); 17 17 }
+1 -3
include/dt-bindings/pinctrl/omap.h
··· 23 23 #define PULL_UP (1 << 4) 24 24 #define ALTELECTRICALSEL (1 << 5) 25 25 26 - /* 34xx specific mux bit defines */ 26 + /* omap3/4/5 specific mux bit defines */ 27 27 #define INPUT_EN (1 << 8) 28 28 #define OFF_EN (1 << 9) 29 29 #define OFFOUT_EN (1 << 10) ··· 31 31 #define OFF_PULL_EN (1 << 12) 32 32 #define OFF_PULL_UP (1 << 13) 33 33 #define WAKEUP_EN (1 << 14) 34 - 35 - /* 44xx specific mux bit defines */ 36 34 #define WAKEUP_EVENT (1 << 15) 37 35 38 36 /* Active pin states */
+15
include/linux/compiler-gcc4.h
··· 65 65 #define __visible __attribute__((externally_visible)) 66 66 #endif 67 67 68 + /* 69 + * GCC 'asm goto' miscompiles certain code sequences: 70 + * 71 + * http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670 72 + * 73 + * Work it around via a compiler barrier quirk suggested by Jakub Jelinek. 74 + * Fixed in GCC 4.8.2 and later versions. 75 + * 76 + * (asm goto is automatically volatile - the naming reflects this.) 77 + */ 78 + #if GCC_VERSION <= 40801 79 + # define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0) 80 + #else 81 + # define asm_volatile_goto(x...) do { asm goto(x); } while (0) 82 + #endif 68 83 69 84 #ifdef CONFIG_ARCH_USE_BUILTIN_BSWAP 70 85 #if GCC_VERSION >= 40400
+1 -1
include/linux/intel-iommu.h
··· 55 55 #define DMAR_IQT_REG 0x88 /* Invalidation queue tail register */ 56 56 #define DMAR_IQ_SHIFT 4 /* Invalidation queue head/tail shift */ 57 57 #define DMAR_IQA_REG 0x90 /* Invalidation queue addr register */ 58 - #define DMAR_ICS_REG 0x98 /* Invalidation complete status register */ 58 + #define DMAR_ICS_REG 0x9c /* Invalidation complete status register */ 59 59 #define DMAR_IRTA_REG 0xb8 /* Interrupt remapping table addr register */ 60 60 61 61 #define OFFSET_STRIDE (9)
+11 -39
include/linux/memcontrol.h
··· 137 137 extern void mem_cgroup_replace_page_cache(struct page *oldpage, 138 138 struct page *newpage); 139 139 140 - /** 141 - * mem_cgroup_toggle_oom - toggle the memcg OOM killer for the current task 142 - * @new: true to enable, false to disable 143 - * 144 - * Toggle whether a failed memcg charge should invoke the OOM killer 145 - * or just return -ENOMEM. Returns the previous toggle state. 146 - * 147 - * NOTE: Any path that enables the OOM killer before charging must 148 - * call mem_cgroup_oom_synchronize() afterward to finalize the 149 - * OOM handling and clean up. 150 - */ 151 - static inline bool mem_cgroup_toggle_oom(bool new) 140 + static inline void mem_cgroup_oom_enable(void) 152 141 { 153 - bool old; 154 - 155 - old = current->memcg_oom.may_oom; 156 - current->memcg_oom.may_oom = new; 157 - 158 - return old; 142 + WARN_ON(current->memcg_oom.may_oom); 143 + current->memcg_oom.may_oom = 1; 159 144 } 160 145 161 - static inline void mem_cgroup_enable_oom(void) 146 + static inline void mem_cgroup_oom_disable(void) 162 147 { 163 - bool old = mem_cgroup_toggle_oom(true); 164 - 165 - WARN_ON(old == true); 166 - } 167 - 168 - static inline void mem_cgroup_disable_oom(void) 169 - { 170 - bool old = mem_cgroup_toggle_oom(false); 171 - 172 - WARN_ON(old == false); 148 + WARN_ON(!current->memcg_oom.may_oom); 149 + current->memcg_oom.may_oom = 0; 173 150 } 174 151 175 152 static inline bool task_in_memcg_oom(struct task_struct *p) 176 153 { 177 - return p->memcg_oom.in_memcg_oom; 154 + return p->memcg_oom.memcg; 178 155 } 179 156 180 - bool mem_cgroup_oom_synchronize(void); 157 + bool mem_cgroup_oom_synchronize(bool wait); 181 158 182 159 #ifdef CONFIG_MEMCG_SWAP 183 160 extern int do_swap_account; ··· 379 402 { 380 403 } 381 404 382 - static inline bool mem_cgroup_toggle_oom(bool new) 383 - { 384 - return false; 385 - } 386 - 387 - static inline void mem_cgroup_enable_oom(void) 405 + static inline void mem_cgroup_oom_enable(void) 388 406 { 389 407 } 390 408 
391 - static inline void mem_cgroup_disable_oom(void) 409 + static inline void mem_cgroup_oom_disable(void) 392 410 { 393 411 } 394 412 ··· 392 420 return false; 393 421 } 394 422 395 - static inline bool mem_cgroup_oom_synchronize(void) 423 + static inline bool mem_cgroup_oom_synchronize(bool wait) 396 424 { 397 425 return false; 398 426 }
+1
include/linux/miscdevice.h
··· 45 45 #define MAPPER_CTRL_MINOR 236 46 46 #define LOOP_CTRL_MINOR 237 47 47 #define VHOST_NET_MINOR 238 48 + #define UHID_MINOR 239 48 49 #define MISC_DYNAMIC_MINOR 255 49 50 50 51 struct device;
+2 -2
include/linux/mlx5/device.h
··· 181 181 MLX5_DEV_CAP_FLAG_TLP_HINTS = 1LL << 39, 182 182 MLX5_DEV_CAP_FLAG_SIG_HAND_OVER = 1LL << 40, 183 183 MLX5_DEV_CAP_FLAG_DCT = 1LL << 41, 184 - MLX5_DEV_CAP_FLAG_CMDIF_CSUM = 1LL << 46, 184 + MLX5_DEV_CAP_FLAG_CMDIF_CSUM = 3LL << 46, 185 185 }; 186 186 187 187 enum { ··· 417 417 struct health_buffer health; 418 418 __be32 rsvd2[884]; 419 419 __be32 health_counter; 420 - __be32 rsvd3[1023]; 420 + __be32 rsvd3[1019]; 421 421 __be64 ieee1588_clk; 422 422 __be32 ieee1588_clk_type; 423 423 __be32 clr_intx;
+2 -4
include/linux/mlx5/driver.h
··· 82 82 }; 83 83 84 84 enum { 85 - MLX5_MAX_EQ_NAME = 20 85 + MLX5_MAX_EQ_NAME = 32 86 86 }; 87 87 88 88 enum { ··· 747 747 748 748 enum { 749 749 MLX5_PROF_MASK_QP_SIZE = (u64)1 << 0, 750 - MLX5_PROF_MASK_CMDIF_CSUM = (u64)1 << 1, 751 - MLX5_PROF_MASK_MR_CACHE = (u64)1 << 2, 750 + MLX5_PROF_MASK_MR_CACHE = (u64)1 << 1, 752 751 }; 753 752 754 753 enum { ··· 757 758 struct mlx5_profile { 758 759 u64 mask; 759 760 u32 log_max_qp; 760 - int cmdif_csum; 761 761 struct { 762 762 int size; 763 763 int limit;
-14
include/linux/of_reserved_mem.h
··· 1 - #ifndef __OF_RESERVED_MEM_H 2 - #define __OF_RESERVED_MEM_H 3 - 4 - #ifdef CONFIG_OF_RESERVED_MEM 5 - void of_reserved_mem_device_init(struct device *dev); 6 - void of_reserved_mem_device_release(struct device *dev); 7 - void early_init_dt_scan_reserved_mem(void); 8 - #else 9 - static inline void of_reserved_mem_device_init(struct device *dev) { } 10 - static inline void of_reserved_mem_device_release(struct device *dev) { } 11 - static inline void early_init_dt_scan_reserved_mem(void) { } 12 - #endif 13 - 14 - #endif /* __OF_RESERVED_MEM_H */
+23 -1
include/linux/perf_event.h
··· 294 294 */ 295 295 struct perf_event { 296 296 #ifdef CONFIG_PERF_EVENTS 297 - struct list_head group_entry; 297 + /* 298 + * entry onto perf_event_context::event_list; 299 + * modifications require ctx->lock 300 + * RCU safe iterations. 301 + */ 298 302 struct list_head event_entry; 303 + 304 + /* 305 + * XXX: group_entry and sibling_list should be mutually exclusive; 306 + * either you're a sibling on a group, or you're the group leader. 307 + * Rework the code to always use the same list element. 308 + * 309 + * Locked for modification by both ctx->mutex and ctx->lock; holding 310 + * either suffices for read. 311 + */ 312 + struct list_head group_entry; 299 313 struct list_head sibling_list; 314 + 315 + /* 316 + * We need storage to track the entries in perf_pmu_migrate_context; we 317 + * cannot use the event_entry because of RCU and we want to keep the 318 + * group intact which avoids us using the other two entries. 319 + */ 320 + struct list_head migrate_entry; 321 + 300 322 struct hlist_node hlist_entry; 301 323 int nr_siblings; 302 324 int group_flags;
+1
include/linux/random.h
··· 17 17 extern void get_random_bytes(void *buf, int nbytes); 18 18 extern void get_random_bytes_arch(void *buf, int nbytes); 19 19 void generate_random_uuid(unsigned char uuid_out[16]); 20 + extern int random_int_secret_init(void); 20 21 21 22 #ifndef MODULE 22 23 extern const struct file_operations random_fops, urandom_fops;
+3 -4
include/linux/sched.h
··· 1394 1394 } memcg_batch; 1395 1395 unsigned int memcg_kmem_skip_account; 1396 1396 struct memcg_oom_info { 1397 + struct mem_cgroup *memcg; 1398 + gfp_t gfp_mask; 1399 + int order; 1397 1400 unsigned int may_oom:1; 1398 - unsigned int in_memcg_oom:1; 1399 - unsigned int oom_locked:1; 1400 - int wakeups; 1401 - struct mem_cgroup *wait_on_memcg; 1402 1401 } memcg_oom; 1403 1402 #endif 1404 1403 #ifdef CONFIG_UPROBES
+14
include/linux/timex.h
··· 64 64 65 65 #include <asm/timex.h> 66 66 67 + #ifndef random_get_entropy 68 + /* 69 + * The random_get_entropy() function is used by the /dev/random driver 70 + * in order to extract entropy via the relative unpredictability of 71 + * when an interrupt takes place versus a high speed, fine-grained 72 + * timing source or cycle counter. Since it is invoked on every 73 + * single interrupt, it must have a very low cost/overhead. 74 + * 75 + * By default we use get_cycles() for this purpose, but individual 76 + * architectures may override this in their asm/timex.h header file. 77 + */ 78 + #define random_get_entropy() get_cycles() 79 + #endif 80 + 67 81 /* 68 82 * SHIFT_PLL is used as a dampening factor to define how much we 69 83 * adjust the frequency correction for a given offset in PLL mode.
+1 -1
include/linux/usb/usb_phy_gen_xceiv.h
··· 12 12 unsigned int needs_reset:1; 13 13 }; 14 14 15 - #if IS_ENABLED(CONFIG_NOP_USB_XCEIV) 15 + #if defined(CONFIG_NOP_USB_XCEIV) || (defined(CONFIG_NOP_USB_XCEIV_MODULE) && defined(MODULE)) 16 16 /* sometimes transceivers are accessed only through e.g. ULPI */ 17 17 extern void usb_nop_xceiv_register(void); 18 18 extern void usb_nop_xceiv_unregister(void);
+3 -1
include/linux/usb_usual.h
··· 66 66 US_FLAG(INITIAL_READ10, 0x00100000) \ 67 67 /* Initial READ(10) (and others) must be retried */ \ 68 68 US_FLAG(WRITE_CACHE, 0x00200000) \ 69 - /* Write Cache status is not available */ 69 + /* Write Cache status is not available */ \ 70 + US_FLAG(NEEDS_CAP16, 0x00400000) 71 + /* cannot handle READ_CAPACITY_10 */ 70 72 71 73 #define US_FLAG(name, value) US_FL_##name = value , 72 74 enum { US_DO_ALL_FLAGS };
-7
include/linux/vgaarb.h
··· 65 65 * out of the arbitration process (and can be safe to take 66 66 * interrupts at any time. 67 67 */ 68 - #if defined(CONFIG_VGA_ARB) 69 68 extern void vga_set_legacy_decoding(struct pci_dev *pdev, 70 69 unsigned int decodes); 71 - #else 72 - static inline void vga_set_legacy_decoding(struct pci_dev *pdev, 73 - unsigned int decodes) 74 - { 75 - } 76 - #endif 77 70 78 71 /** 79 72 * vga_get - acquire & locks VGA resources
+1 -1
include/linux/yam.h
··· 77 77 78 78 struct yamdrv_ioctl_mcs { 79 79 int cmd; 80 - int bitrate; 80 + unsigned int bitrate; 81 81 unsigned char bits[YAM_FPGA_SIZE]; 82 82 };
+4 -2
include/net/cipso_ipv4.h
··· 290 290 unsigned char err_offset = 0; 291 291 u8 opt_len = opt[1]; 292 292 u8 opt_iter; 293 + u8 tag_len; 293 294 294 295 if (opt_len < 8) { 295 296 err_offset = 1; ··· 303 302 } 304 303 305 304 for (opt_iter = 6; opt_iter < opt_len;) { 306 - if (opt[opt_iter + 1] > (opt_len - opt_iter)) { 305 + tag_len = opt[opt_iter + 1]; 306 + if ((tag_len == 0) || (opt[opt_iter + 1] > (opt_len - opt_iter))) { 307 307 err_offset = opt_iter + 1; 308 308 goto out; 309 309 } 310 - opt_iter += opt[opt_iter + 1]; 310 + opt_iter += tag_len; 311 311 } 312 312 313 313 out:
+12
include/net/dst.h
··· 478 478 { 479 479 return dst_orig; 480 480 } 481 + 482 + static inline struct xfrm_state *dst_xfrm(const struct dst_entry *dst) 483 + { 484 + return NULL; 485 + } 486 + 481 487 #else 482 488 struct dst_entry *xfrm_lookup(struct net *net, struct dst_entry *dst_orig, 483 489 const struct flowi *fl, struct sock *sk, 484 490 int flags); 491 + 492 + /* skb attached with this dst needs transformation if dst->xfrm is valid */ 493 + static inline struct xfrm_state *dst_xfrm(const struct dst_entry *dst) 494 + { 495 + return dst->xfrm; 496 + } 485 497 #endif 486 498 487 499 #endif /* _NET_DST_H */
+2 -4
include/net/ip6_route.h
··· 182 182 skb_dst(skb)->dev->mtu : dst_mtu(skb_dst(skb)); 183 183 } 184 184 185 - static inline struct in6_addr *rt6_nexthop(struct rt6_info *rt, struct in6_addr *dest) 185 + static inline struct in6_addr *rt6_nexthop(struct rt6_info *rt) 186 186 { 187 - if (rt->rt6i_flags & RTF_GATEWAY) 188 - return &rt->rt6i_gateway; 189 - return dest; 187 + return &rt->rt6i_gateway; 190 188 } 191 189 192 190 #endif
+1 -1
include/net/mac802154.h
··· 133 133 134 134 /* Basic interface to register ieee802154 device */ 135 135 struct ieee802154_dev * 136 - ieee802154_alloc_device(size_t priv_data_lex, struct ieee802154_ops *ops); 136 + ieee802154_alloc_device(size_t priv_data_len, struct ieee802154_ops *ops); 137 137 void ieee802154_free_device(struct ieee802154_dev *dev); 138 138 int ieee802154_register_device(struct ieee802154_dev *dev); 139 139 void ieee802154_unregister_device(struct ieee802154_dev *dev);
+1
include/sound/rcar_snd.h
··· 68 68 * 69 69 * A : generation 70 70 */ 71 + #define RSND_GEN_MASK (0xF << 0) 71 72 #define RSND_GEN1 (1 << 0) /* fixme */ 72 73 #define RSND_GEN2 (2 << 0) /* fixme */ 73 74
+2
include/uapi/drm/drm_mode.h
··· 223 223 __u32 connection; 224 224 __u32 mm_width, mm_height; /**< HxW in millimeters */ 225 225 __u32 subpixel; 226 + 227 + __u32 pad; 226 228 }; 227 229 228 230 #define DRM_MODE_PROP_PENDING (1<<0)
+6
include/uapi/rdma/ib_user_verbs.h
··· 87 87 IB_USER_VERBS_CMD_CLOSE_XRCD, 88 88 IB_USER_VERBS_CMD_CREATE_XSRQ, 89 89 IB_USER_VERBS_CMD_OPEN_QP, 90 + #ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING 90 91 IB_USER_VERBS_CMD_CREATE_FLOW = IB_USER_VERBS_CMD_THRESHOLD, 91 92 IB_USER_VERBS_CMD_DESTROY_FLOW 93 + #endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */ 92 94 }; 93 95 94 96 /* ··· 128 126 __u16 out_words; 129 127 }; 130 128 129 + #ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING 131 130 struct ib_uverbs_cmd_hdr_ex { 132 131 __u32 command; 133 132 __u16 in_words; ··· 137 134 __u16 provider_out_words; 138 135 __u32 cmd_hdr_reserved; 139 136 }; 137 + #endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */ 140 138 141 139 struct ib_uverbs_get_context { 142 140 __u64 response; ··· 700 696 __u64 driver_data[0]; 701 697 }; 702 698 699 + #ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING 703 700 struct ib_kern_eth_filter { 704 701 __u8 dst_mac[6]; 705 702 __u8 src_mac[6]; ··· 785 780 __u32 comp_mask; 786 781 __u32 flow_handle; 787 782 }; 783 + #endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */ 788 784 789 785 struct ib_uverbs_create_srq { 790 786 __u64 response;
+2
init/main.c
··· 76 76 #include <linux/elevator.h> 77 77 #include <linux/sched_clock.h> 78 78 #include <linux/context_tracking.h> 79 + #include <linux/random.h> 79 80 80 81 #include <asm/io.h> 81 82 #include <asm/bugs.h> ··· 788 787 do_ctors(); 789 788 usermodehelper_enable(); 790 789 do_initcalls(); 790 + random_int_secret_init(); 791 791 } 792 792 793 793 static void __init do_pre_smp_initcalls(void)
+29 -13
ipc/sem.c
··· 1282 1282 1283 1283 sem_lock(sma, NULL, -1); 1284 1284 1285 + if (sma->sem_perm.deleted) { 1286 + sem_unlock(sma, -1); 1287 + rcu_read_unlock(); 1288 + return -EIDRM; 1289 + } 1290 + 1285 1291 curr = &sma->sem_base[semnum]; 1286 1292 1287 1293 ipc_assert_locked_object(&sma->sem_perm); ··· 1342 1336 int i; 1343 1337 1344 1338 sem_lock(sma, NULL, -1); 1339 + if (sma->sem_perm.deleted) { 1340 + err = -EIDRM; 1341 + goto out_unlock; 1342 + } 1345 1343 if(nsems > SEMMSL_FAST) { 1346 1344 if (!ipc_rcu_getref(sma)) { 1347 - sem_unlock(sma, -1); 1348 - rcu_read_unlock(); 1349 1345 err = -EIDRM; 1350 - goto out_free; 1346 + goto out_unlock; 1351 1347 } 1352 1348 sem_unlock(sma, -1); 1353 1349 rcu_read_unlock(); ··· 1362 1354 rcu_read_lock(); 1363 1355 sem_lock_and_putref(sma); 1364 1356 if (sma->sem_perm.deleted) { 1365 - sem_unlock(sma, -1); 1366 - rcu_read_unlock(); 1367 1357 err = -EIDRM; 1368 - goto out_free; 1358 + goto out_unlock; 1369 1359 } 1370 1360 } 1371 1361 for (i = 0; i < sma->sem_nsems; i++) ··· 1381 1375 struct sem_undo *un; 1382 1376 1383 1377 if (!ipc_rcu_getref(sma)) { 1384 - rcu_read_unlock(); 1385 - return -EIDRM; 1378 + err = -EIDRM; 1379 + goto out_rcu_wakeup; 1386 1380 } 1387 1381 rcu_read_unlock(); 1388 1382 ··· 1410 1404 rcu_read_lock(); 1411 1405 sem_lock_and_putref(sma); 1412 1406 if (sma->sem_perm.deleted) { 1413 - sem_unlock(sma, -1); 1414 - rcu_read_unlock(); 1415 1407 err = -EIDRM; 1416 - goto out_free; 1408 + goto out_unlock; 1417 1409 } 1418 1410 1419 1411 for (i = 0; i < nsems; i++) ··· 1435 1431 goto out_rcu_wakeup; 1436 1432 1437 1433 sem_lock(sma, NULL, -1); 1434 + if (sma->sem_perm.deleted) { 1435 + err = -EIDRM; 1436 + goto out_unlock; 1437 + } 1438 1438 curr = &sma->sem_base[semnum]; 1439 1439 1440 1440 switch (cmd) { ··· 1844 1836 if (error) 1845 1837 goto out_rcu_wakeup; 1846 1838 1839 + error = -EIDRM; 1840 + locknum = sem_lock(sma, sops, nsops); 1841 + if (sma->sem_perm.deleted) 1842 + goto out_unlock_free; 1847 1843 /* 1848 
1844 * semid identifiers are not unique - find_alloc_undo may have 1849 1845 * allocated an undo structure, it was invalidated by an RMID ··· 1855 1843 * This case can be detected checking un->semid. The existence of 1856 1844 * "un" itself is guaranteed by rcu. 1857 1845 */ 1858 - error = -EIDRM; 1859 - locknum = sem_lock(sma, sops, nsops); 1860 1846 if (un && un->semid == -1) 1861 1847 goto out_unlock_free; 1862 1848 ··· 2067 2057 } 2068 2058 2069 2059 sem_lock(sma, NULL, -1); 2060 + /* exit_sem raced with IPC_RMID, nothing to do */ 2061 + if (sma->sem_perm.deleted) { 2062 + sem_unlock(sma, -1); 2063 + rcu_read_unlock(); 2064 + continue; 2065 + } 2070 2066 un = __lookup_undo(ulp, semid); 2071 2067 if (un == NULL) { 2072 2068 /* exit_sem raced with IPC_RMID+semget() that created
+21 -6
ipc/util.c
··· 17 17 * Pavel Emelianov <xemul@openvz.org> 18 18 * 19 19 * General sysv ipc locking scheme: 20 - * when doing ipc id lookups, take the ids->rwsem 21 - * rcu_read_lock() 22 - * obtain the ipc object (kern_ipc_perm) 23 - * perform security, capabilities, auditing and permission checks, etc. 24 - * acquire the ipc lock (kern_ipc_perm.lock) throught ipc_lock_object() 25 - * perform data updates (ie: SET, RMID, LOCK/UNLOCK commands) 20 + * rcu_read_lock() 21 + * obtain the ipc object (kern_ipc_perm) by looking up the id in an idr 22 + * tree. 23 + * - perform initial checks (capabilities, auditing and permission, 24 + * etc). 25 + * - perform read-only operations, such as STAT, INFO commands. 26 + * acquire the ipc lock (kern_ipc_perm.lock) through 27 + * ipc_lock_object() 28 + * - perform data updates, such as SET, RMID commands and 29 + * mechanism-specific operations (semop/semtimedop, 30 + * msgsnd/msgrcv, shmat/shmdt). 31 + * drop the ipc lock, through ipc_unlock_object(). 32 + * rcu_read_unlock() 33 + * 34 + * The ids->rwsem must be taken when: 35 + * - creating, removing and iterating the existing entries in ipc 36 + * identifier sets. 37 + * - iterating through files under /proc/sysvipc/ 38 + * 39 + * Note that sems have a special fast path that avoids kern_ipc_perm.lock - 40 + * see sem_lock(). 26 41 */ 27 42 28 43 #include <linux/mm.h>
+6 -8
kernel/cgroup.c
··· 2039 2039 2040 2040 /* @tsk either already exited or can't exit until the end */ 2041 2041 if (tsk->flags & PF_EXITING) 2042 - continue; 2042 + goto next; 2043 2043 2044 2044 /* as per above, nr_threads may decrease, but not increase. */ 2045 2045 BUG_ON(i >= group_size); ··· 2047 2047 ent.cgrp = task_cgroup_from_root(tsk, root); 2048 2048 /* nothing to do if this task is already in the cgroup */ 2049 2049 if (ent.cgrp == cgrp) 2050 - continue; 2050 + goto next; 2051 2051 /* 2052 2052 * saying GFP_ATOMIC has no effect here because we did prealloc 2053 2053 * earlier, but it's good form to communicate our expectations. ··· 2055 2055 retval = flex_array_put(group, i, &ent, GFP_ATOMIC); 2056 2056 BUG_ON(retval != 0); 2057 2057 i++; 2058 - 2058 + next: 2059 2059 if (!threadgroup) 2060 2060 break; 2061 2061 } while_each_thread(leader, tsk); ··· 3188 3188 3189 3189 WARN_ON_ONCE(!rcu_read_lock_held()); 3190 3190 3191 - /* if first iteration, visit the leftmost descendant */ 3192 - if (!pos) { 3193 - next = css_leftmost_descendant(root); 3194 - return next != root ? next : NULL; 3195 - } 3191 + /* if first iteration, visit leftmost descendant which may be @root */ 3192 + if (!pos) 3193 + return css_leftmost_descendant(root); 3196 3194 3197 3195 /* if we visited @root, we're done */ 3198 3196 if (pos == root)
+3 -3
kernel/events/core.c
··· 7234 7234 perf_remove_from_context(event); 7235 7235 unaccount_event_cpu(event, src_cpu); 7236 7236 put_ctx(src_ctx); 7237 - list_add(&event->event_entry, &events); 7237 + list_add(&event->migrate_entry, &events); 7238 7238 } 7239 7239 mutex_unlock(&src_ctx->mutex); 7240 7240 7241 7241 synchronize_rcu(); 7242 7242 7243 7243 mutex_lock(&dst_ctx->mutex); 7244 - list_for_each_entry_safe(event, tmp, &events, event_entry) { 7245 - list_del(&event->event_entry); 7244 + list_for_each_entry_safe(event, tmp, &events, migrate_entry) { 7245 + list_del(&event->migrate_entry); 7246 7246 if (event->state >= PERF_EVENT_STATE_OFF) 7247 7247 event->state = PERF_EVENT_STATE_INACTIVE; 7248 7248 account_event_cpu(event, dst_cpu);
+4 -1
kernel/power/snapshot.c
··· 743 743 struct memory_bitmap *bm1, *bm2; 744 744 int error = 0; 745 745 746 - BUG_ON(forbidden_pages_map || free_pages_map); 746 + if (forbidden_pages_map && free_pages_map) 747 + return 0; 748 + else 749 + BUG_ON(forbidden_pages_map || free_pages_map); 747 750 748 751 bm1 = kzalloc(sizeof(struct memory_bitmap), GFP_KERNEL); 749 752 if (!bm1)
+8
kernel/power/user.c
··· 39 39 char frozen; 40 40 char ready; 41 41 char platform_support; 42 + bool free_bitmaps; 42 43 } snapshot_state; 43 44 44 45 atomic_t snapshot_device_available = ATOMIC_INIT(1); ··· 83 82 data->swap = -1; 84 83 data->mode = O_WRONLY; 85 84 error = pm_notifier_call_chain(PM_RESTORE_PREPARE); 85 + if (!error) { 86 + error = create_basic_memory_bitmaps(); 87 + data->free_bitmaps = !error; 88 + } 86 89 if (error) 87 90 pm_notifier_call_chain(PM_POST_RESTORE); 88 91 } ··· 116 111 pm_restore_gfp_mask(); 117 112 free_basic_memory_bitmaps(); 118 113 thaw_processes(); 114 + } else if (data->free_bitmaps) { 115 + free_basic_memory_bitmaps(); 119 116 } 120 117 pm_notifier_call_chain(data->mode == O_RDONLY ? 121 118 PM_POST_HIBERNATION : PM_POST_RESTORE); ··· 238 231 break; 239 232 pm_restore_gfp_mask(); 240 233 free_basic_memory_bitmaps(); 234 + data->free_bitmaps = false; 241 235 thaw_processes(); 242 236 data->frozen = 0; 243 237 break;
+12 -3
kernel/softirq.c
··· 328 328 329 329 static inline void invoke_softirq(void) 330 330 { 331 - if (!force_irqthreads) 332 - __do_softirq(); 333 - else 331 + if (!force_irqthreads) { 332 + /* 333 + * We can safely execute softirq on the current stack if 334 + * it is the irq stack, because it should be near empty 335 + * at this stage. But we have no way to know if the arch 336 + * calls irq_exit() on the irq stack. So call softirq 337 + * in its own stack to prevent any overrun on top 338 + * of a potentially deep task stack. 339 + */ 340 + do_softirq(); 341 + } else { 334 342 wakeup_softirqd(); 343 + } 335 344 } 336 345 337 346 static inline void tick_irq_exit(void)
+1 -1
lib/kobject.c
··· 592 592 { 593 593 struct kobject *kobj = container_of(kref, struct kobject, kref); 594 594 #ifdef CONFIG_DEBUG_KOBJECT_RELEASE 595 - pr_debug("kobject: '%s' (%p): %s, parent %p (delayed)\n", 595 + pr_info("kobject: '%s' (%p): %s, parent %p (delayed)\n", 596 596 kobject_name(kobj), kobj, __func__, kobj->parent); 597 597 INIT_DELAYED_WORK(&kobj->release, kobject_delayed_cleanup); 598 598 schedule_delayed_work(&kobj->release, HZ);
+3
lib/percpu-refcount.c
··· 53 53 ref->release = release; 54 54 return 0; 55 55 } 56 + EXPORT_SYMBOL_GPL(percpu_ref_init); 56 57 57 58 /** 58 59 * percpu_ref_cancel_init - cancel percpu_ref_init() ··· 85 84 free_percpu(ref->pcpu_count); 86 85 } 87 86 } 87 + EXPORT_SYMBOL_GPL(percpu_ref_cancel_init); 88 88 89 89 static void percpu_ref_kill_rcu(struct rcu_head *rcu) 90 90 { ··· 158 156 159 157 call_rcu_sched(&ref->rcu, percpu_ref_kill_rcu); 160 158 } 159 + EXPORT_SYMBOL_GPL(percpu_ref_kill_and_confirm);
+1 -1
mm/Kconfig
··· 183 183 config MEMORY_HOTREMOVE 184 184 bool "Allow for memory hot remove" 185 185 select MEMORY_ISOLATION 186 - select HAVE_BOOTMEM_INFO_NODE if X86_64 186 + select HAVE_BOOTMEM_INFO_NODE if (X86_64 || PPC64) 187 187 depends on MEMORY_HOTPLUG && ARCH_ENABLE_MEMORY_HOTREMOVE 188 188 depends on MIGRATION 189 189
+1 -10
mm/filemap.c
··· 1616 1616 struct inode *inode = mapping->host; 1617 1617 pgoff_t offset = vmf->pgoff; 1618 1618 struct page *page; 1619 - bool memcg_oom; 1620 1619 pgoff_t size; 1621 1620 int ret = 0; 1622 1621 ··· 1624 1625 return VM_FAULT_SIGBUS; 1625 1626 1626 1627 /* 1627 - * Do we have something in the page cache already? Either 1628 - * way, try readahead, but disable the memcg OOM killer for it 1629 - * as readahead is optional and no errors are propagated up 1630 - * the fault stack. The OOM killer is enabled while trying to 1631 - * instantiate the faulting page individually below. 1628 + * Do we have something in the page cache already? 1632 1629 */ 1633 1630 page = find_get_page(mapping, offset); 1634 1631 if (likely(page) && !(vmf->flags & FAULT_FLAG_TRIED)) { ··· 1632 1637 * We found the page, so try async readahead before 1633 1638 * waiting for the lock. 1634 1639 */ 1635 - memcg_oom = mem_cgroup_toggle_oom(false); 1636 1640 do_async_mmap_readahead(vma, ra, file, page, offset); 1637 - mem_cgroup_toggle_oom(memcg_oom); 1638 1641 } else if (!page) { 1639 1642 /* No page in the page cache at all */ 1640 - memcg_oom = mem_cgroup_toggle_oom(false); 1641 1643 do_sync_mmap_readahead(vma, ra, file, offset); 1642 - mem_cgroup_toggle_oom(memcg_oom); 1643 1644 count_vm_event(PGMAJFAULT); 1644 1645 mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT); 1645 1646 ret = VM_FAULT_MAJOR;
+9 -1
mm/huge_memory.c
··· 2697 2697 2698 2698 mmun_start = haddr; 2699 2699 mmun_end = haddr + HPAGE_PMD_SIZE; 2700 + again: 2700 2701 mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end); 2701 2702 spin_lock(&mm->page_table_lock); 2702 2703 if (unlikely(!pmd_trans_huge(*pmd))) { ··· 2720 2719 split_huge_page(page); 2721 2720 2722 2721 put_page(page); 2723 - BUG_ON(pmd_trans_huge(*pmd)); 2722 + 2723 + /* 2724 + * We don't always have down_write of mmap_sem here: a racing 2725 + * do_huge_pmd_wp_page() might have copied-on-write to another 2726 + * huge page before our split_huge_page() got the anon_vma lock. 2727 + */ 2728 + if (unlikely(pmd_trans_huge(*pmd))) 2729 + goto again; 2724 2730 } 2725 2731 2726 2732 void split_huge_page_pmd_mm(struct mm_struct *mm, unsigned long address,
+16 -1
mm/hugetlb.c
··· 653 653 BUG_ON(page_count(page)); 654 654 BUG_ON(page_mapcount(page)); 655 655 restore_reserve = PagePrivate(page); 656 + ClearPagePrivate(page); 656 657 657 658 spin_lock(&hugetlb_lock); 658 659 hugetlb_cgroup_uncharge_page(hstate_index(h), ··· 696 695 /* we rely on prep_new_huge_page to set the destructor */ 697 696 set_compound_order(page, order); 698 697 __SetPageHead(page); 698 + __ClearPageReserved(page); 699 699 for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) { 700 700 __SetPageTail(p); 701 + /* 702 + * For gigantic hugepages allocated through bootmem at 703 + * boot, it's safer to be consistent with the not-gigantic 704 + * hugepages and clear the PG_reserved bit from all tail pages 705 + * too. Otherwise drivers using get_user_pages() to access tail 706 + * pages may get the reference counting wrong if they see 707 + * PG_reserved set on a tail page (despite the head page not 708 + * having PG_reserved set). Enforcing this consistency between 709 + * head and tail pages allows drivers to optimize away a check 710 + * on the head page when they need to know if put_page() is needed 711 + * after get_user_pages(). 712 + */ 713 + __ClearPageReserved(p); 701 714 set_page_count(p, 0); 702 715 p->first_page = page; 703 716 } ··· 1344 1329 #else 1345 1330 page = virt_to_page(m); 1346 1331 #endif 1347 - __ClearPageReserved(page); 1348 1332 WARN_ON(page_count(page) != 1); 1349 1333 prep_compound_huge_page(page, h->order); 1334 + WARN_ON(PageReserved(page)); 1350 1335 prep_new_huge_page(h, page, page_to_nid(page)); 1351 1336 /* 1352 1337 * If we had gigantic hugepages allocated at boot time, we need
+72 -105
mm/memcontrol.c
··· 866 866 unsigned long val = 0; 867 867 int cpu; 868 868 869 + get_online_cpus(); 869 870 for_each_online_cpu(cpu) 870 871 val += per_cpu(memcg->stat->events[idx], cpu); 871 872 #ifdef CONFIG_HOTPLUG_CPU ··· 874 873 val += memcg->nocpu_base.events[idx]; 875 874 spin_unlock(&memcg->pcp_counter_lock); 876 875 #endif 876 + put_online_cpus(); 877 877 return val; 878 878 } 879 879 ··· 2161 2159 memcg_wakeup_oom(memcg); 2162 2160 } 2163 2161 2164 - /* 2165 - * try to call OOM killer 2166 - */ 2167 2162 static void mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int order) 2168 2163 { 2169 - bool locked; 2170 - int wakeups; 2171 - 2172 2164 if (!current->memcg_oom.may_oom) 2173 2165 return; 2174 - 2175 - current->memcg_oom.in_memcg_oom = 1; 2176 - 2177 2166 /* 2178 - * As with any blocking lock, a contender needs to start 2179 - * listening for wakeups before attempting the trylock, 2180 - * otherwise it can miss the wakeup from the unlock and sleep 2181 - * indefinitely. This is just open-coded because our locking 2182 - * is so particular to memcg hierarchies. 2167 + * We are in the middle of the charge context here, so we 2168 + * don't want to block when potentially sitting on a callstack 2169 + * that holds all kinds of filesystem and mm locks. 2170 + * 2171 + * Also, the caller may handle a failed allocation gracefully 2172 + * (like optional page cache readahead) and so an OOM killer 2173 + * invocation might not even be necessary. 2174 + * 2175 + * That's why we don't do anything here except remember the 2176 + * OOM context and then deal with it at the end of the page 2177 + * fault when the stack is unwound, the locks are released, 2178 + * and when we know whether the fault was overall successful. 
2183 2179 */ 2184 - wakeups = atomic_read(&memcg->oom_wakeups); 2180 + css_get(&memcg->css); 2181 + current->memcg_oom.memcg = memcg; 2182 + current->memcg_oom.gfp_mask = mask; 2183 + current->memcg_oom.order = order; 2184 + } 2185 + 2186 + /** 2187 + * mem_cgroup_oom_synchronize - complete memcg OOM handling 2188 + * @handle: actually kill/wait or just clean up the OOM state 2189 + * 2190 + * This has to be called at the end of a page fault if the memcg OOM 2191 + * handler was enabled. 2192 + * 2193 + * Memcg supports userspace OOM handling where failed allocations must 2194 + * sleep on a waitqueue until the userspace task resolves the 2195 + * situation. Sleeping directly in the charge context with all kinds 2196 + * of locks held is not a good idea, instead we remember an OOM state 2197 + * in the task and mem_cgroup_oom_synchronize() has to be called at 2198 + * the end of the page fault to complete the OOM handling. 2199 + * 2200 + * Returns %true if an ongoing memcg OOM situation was detected and 2201 + * completed, %false otherwise. 
2202 + */ 2203 + bool mem_cgroup_oom_synchronize(bool handle) 2204 + { 2205 + struct mem_cgroup *memcg = current->memcg_oom.memcg; 2206 + struct oom_wait_info owait; 2207 + bool locked; 2208 + 2209 + /* OOM is global, do not handle */ 2210 + if (!memcg) 2211 + return false; 2212 + 2213 + if (!handle) 2214 + goto cleanup; 2215 + 2216 + owait.memcg = memcg; 2217 + owait.wait.flags = 0; 2218 + owait.wait.func = memcg_oom_wake_function; 2219 + owait.wait.private = current; 2220 + INIT_LIST_HEAD(&owait.wait.task_list); 2221 + 2222 + prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE); 2185 2223 mem_cgroup_mark_under_oom(memcg); 2186 2224 2187 2225 locked = mem_cgroup_oom_trylock(memcg); ··· 2231 2189 2232 2190 if (locked && !memcg->oom_kill_disable) { 2233 2191 mem_cgroup_unmark_under_oom(memcg); 2234 - mem_cgroup_out_of_memory(memcg, mask, order); 2235 - mem_cgroup_oom_unlock(memcg); 2236 - /* 2237 - * There is no guarantee that an OOM-lock contender 2238 - * sees the wakeups triggered by the OOM kill 2239 - * uncharges. Wake any sleepers explicitely. 2240 - */ 2241 - memcg_oom_recover(memcg); 2192 + finish_wait(&memcg_oom_waitq, &owait.wait); 2193 + mem_cgroup_out_of_memory(memcg, current->memcg_oom.gfp_mask, 2194 + current->memcg_oom.order); 2242 2195 } else { 2243 - /* 2244 - * A system call can just return -ENOMEM, but if this 2245 - * is a page fault and somebody else is handling the 2246 - * OOM already, we need to sleep on the OOM waitqueue 2247 - * for this memcg until the situation is resolved. 2248 - * Which can take some time because it might be 2249 - * handled by a userspace task. 2250 - * 2251 - * However, this is the charge context, which means 2252 - * that we may sit on a large call stack and hold 2253 - * various filesystem locks, the mmap_sem etc. and we 2254 - * don't want the OOM handler to deadlock on them 2255 - * while we sit here and wait. Store the current OOM 2256 - * context in the task_struct, then return -ENOMEM. 
2257 - * At the end of the page fault handler, with the 2258 - * stack unwound, pagefault_out_of_memory() will check 2259 - * back with us by calling 2260 - * mem_cgroup_oom_synchronize(), possibly putting the 2261 - * task to sleep. 2262 - */ 2263 - current->memcg_oom.oom_locked = locked; 2264 - current->memcg_oom.wakeups = wakeups; 2265 - css_get(&memcg->css); 2266 - current->memcg_oom.wait_on_memcg = memcg; 2267 - } 2268 - } 2269 - 2270 - /** 2271 - * mem_cgroup_oom_synchronize - complete memcg OOM handling 2272 - * 2273 - * This has to be called at the end of a page fault if the the memcg 2274 - * OOM handler was enabled and the fault is returning %VM_FAULT_OOM. 2275 - * 2276 - * Memcg supports userspace OOM handling, so failed allocations must 2277 - * sleep on a waitqueue until the userspace task resolves the 2278 - * situation. Sleeping directly in the charge context with all kinds 2279 - * of locks held is not a good idea, instead we remember an OOM state 2280 - * in the task and mem_cgroup_oom_synchronize() has to be called at 2281 - * the end of the page fault to put the task to sleep and clean up the 2282 - * OOM state. 2283 - * 2284 - * Returns %true if an ongoing memcg OOM situation was detected and 2285 - * finalized, %false otherwise. 2286 - */ 2287 - bool mem_cgroup_oom_synchronize(void) 2288 - { 2289 - struct oom_wait_info owait; 2290 - struct mem_cgroup *memcg; 2291 - 2292 - /* OOM is global, do not handle */ 2293 - if (!current->memcg_oom.in_memcg_oom) 2294 - return false; 2295 - 2296 - /* 2297 - * We invoked the OOM killer but there is a chance that a kill 2298 - * did not free up any charges. Everybody else might already 2299 - * be sleeping, so restart the fault and keep the rampage 2300 - * going until some charges are released. 
2301 - */ 2302 - memcg = current->memcg_oom.wait_on_memcg; 2303 - if (!memcg) 2304 - goto out; 2305 - 2306 - if (test_thread_flag(TIF_MEMDIE) || fatal_signal_pending(current)) 2307 - goto out_memcg; 2308 - 2309 - owait.memcg = memcg; 2310 - owait.wait.flags = 0; 2311 - owait.wait.func = memcg_oom_wake_function; 2312 - owait.wait.private = current; 2313 - INIT_LIST_HEAD(&owait.wait.task_list); 2314 - 2315 - prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE); 2316 - /* Only sleep if we didn't miss any wakeups since OOM */ 2317 - if (atomic_read(&memcg->oom_wakeups) == current->memcg_oom.wakeups) 2318 2196 schedule(); 2319 - finish_wait(&memcg_oom_waitq, &owait.wait); 2320 - out_memcg: 2321 - mem_cgroup_unmark_under_oom(memcg); 2322 - if (current->memcg_oom.oom_locked) { 2197 + mem_cgroup_unmark_under_oom(memcg); 2198 + finish_wait(&memcg_oom_waitq, &owait.wait); 2199 + } 2200 + 2201 + if (locked) { 2323 2202 mem_cgroup_oom_unlock(memcg); 2324 2203 /* 2325 2204 * There is no guarantee that an OOM-lock contender ··· 2249 2286 */ 2250 2287 memcg_oom_recover(memcg); 2251 2288 } 2289 + cleanup: 2290 + current->memcg_oom.memcg = NULL; 2252 2291 css_put(&memcg->css); 2253 - current->memcg_oom.wait_on_memcg = NULL; 2254 - out: 2255 - current->memcg_oom.in_memcg_oom = 0; 2256 2292 return true; 2257 2293 } 2258 2294 ··· 2665 2703 || fatal_signal_pending(current))) 2666 2704 goto bypass; 2667 2705 2706 + if (unlikely(task_in_memcg_oom(current))) 2707 + goto bypass; 2708 + 2668 2709 /* 2669 2710 * We always charge the cgroup the mm_struct belongs to. 2670 2711 * The mm_struct's mem_cgroup changes on task migration if the ··· 2766 2801 return 0; 2767 2802 nomem: 2768 2803 *ptr = NULL; 2804 + if (gfp_mask & __GFP_NOFAIL) 2805 + return 0; 2769 2806 return -ENOMEM; 2770 2807 bypass: 2771 2808 *ptr = root_mem_cgroup;
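The two-phase protocol described in the mem_cgroup_oom_synchronize() kerneldoc above — record the OOM state in the task while the charge context holds its locks, then kill/wait or clean up at the end of the page fault — can be sketched in userspace C. Every name here is an illustrative stand-in modelled on the patch, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins: a refcount models css_get()/css_put(), and a
 * global models the current->memcg_oom state added by the patch. */
struct mem_cgroup { int refcount; };

struct memcg_oom_info {
    struct mem_cgroup *memcg;
    unsigned int gfp_mask;
    int order;
};

static struct memcg_oom_info current_oom;   /* models current->memcg_oom */

/* Charge context: too deep in locks to sleep, so only record the state. */
static void mem_cgroup_oom(struct mem_cgroup *memcg, unsigned int mask,
                           int order)
{
    memcg->refcount++;                      /* models css_get() */
    current_oom.memcg = memcg;
    current_oom.gfp_mask = mask;
    current_oom.order = order;
}

/* End of the page fault: either actually handle the OOM, or just clean
 * up the recorded state when the fault was handled gracefully. */
static bool mem_cgroup_oom_synchronize(bool handle)
{
    struct mem_cgroup *memcg = current_oom.memcg;

    if (!memcg)
        return false;                       /* OOM was global, not ours */

    if (handle) {
        /* the real code sleeps on memcg_oom_waitq or invokes
         * mem_cgroup_out_of_memory() here */
    }

    current_oom.memcg = NULL;
    memcg->refcount--;                      /* models css_put() */
    return true;
}
```

The split matters because the charge path may hold filesystem locks and the mmap_sem; only the unwound fault handler is a safe place to sleep.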
+14 -6
mm/memory.c
··· 837 837 */ 838 838 make_migration_entry_read(&entry); 839 839 pte = swp_entry_to_pte(entry); 840 + if (pte_swp_soft_dirty(*src_pte)) 841 + pte = pte_swp_mksoft_dirty(pte); 840 842 set_pte_at(src_mm, addr, src_pte, pte); 841 843 } 842 844 } ··· 3865 3863 * space. Kernel faults are handled more gracefully. 3866 3864 */ 3867 3865 if (flags & FAULT_FLAG_USER) 3868 - mem_cgroup_enable_oom(); 3866 + mem_cgroup_oom_enable(); 3869 3867 3870 3868 ret = __handle_mm_fault(mm, vma, address, flags); 3871 3869 3872 - if (flags & FAULT_FLAG_USER) 3873 - mem_cgroup_disable_oom(); 3874 - 3875 - if (WARN_ON(task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))) 3876 - mem_cgroup_oom_synchronize(); 3870 + if (flags & FAULT_FLAG_USER) { 3871 + mem_cgroup_oom_disable(); 3872 + /* 3873 + * The task may have entered a memcg OOM situation but 3874 + * if the allocation error was handled gracefully (no 3875 + * VM_FAULT_OOM), there is no need to kill anything. 3876 + * Just clean up the OOM state peacefully. 3877 + */ 3878 + if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM)) 3879 + mem_cgroup_oom_synchronize(false); 3880 + } 3877 3881 3878 3882 return ret; 3879 3883 }
+2
mm/migrate.c
··· 161 161 162 162 get_page(new); 163 163 pte = pte_mkold(mk_pte(new, vma->vm_page_prot)); 164 + if (pte_swp_soft_dirty(*ptep)) 165 + pte = pte_mksoft_dirty(pte); 164 166 if (is_write_migration_entry(entry)) 165 167 pte = pte_mkwrite(pte); 166 168 #ifdef CONFIG_HUGETLB_PAGE
+5 -2
mm/mprotect.c
··· 94 94 swp_entry_t entry = pte_to_swp_entry(oldpte); 95 95 96 96 if (is_write_migration_entry(entry)) { 97 + pte_t newpte; 97 98 /* 98 99 * A protection check is difficult so 99 100 * just be safe and disable write 100 101 */ 101 102 make_migration_entry_read(&entry); 102 - set_pte_at(mm, addr, pte, 103 - swp_entry_to_pte(entry)); 103 + newpte = swp_entry_to_pte(entry); 104 + if (pte_swp_soft_dirty(oldpte)) 105 + newpte = pte_swp_mksoft_dirty(newpte); 106 + set_pte_at(mm, addr, pte, newpte); 104 107 } 105 108 pages++; 106 109 }
+1 -4
mm/mremap.c
··· 25 25 #include <asm/uaccess.h> 26 26 #include <asm/cacheflush.h> 27 27 #include <asm/tlbflush.h> 28 - #include <asm/pgalloc.h> 29 28 30 29 #include "internal.h" 31 30 ··· 62 63 return NULL; 63 64 64 65 pmd = pmd_alloc(mm, pud, addr); 65 - if (!pmd) { 66 - pud_free(mm, pud); 66 + if (!pmd) 67 67 return NULL; 68 - } 69 68 70 69 VM_BUG_ON(pmd_trans_huge(*pmd)); 71 70
+1 -1
mm/oom_kill.c
··· 680 680 { 681 681 struct zonelist *zonelist; 682 682 683 - if (mem_cgroup_oom_synchronize()) 683 + if (mem_cgroup_oom_synchronize(true)) 684 684 return; 685 685 686 686 zonelist = node_zonelist(first_online_node, GFP_KERNEL);
+5 -5
mm/page-writeback.c
··· 1210 1210 return 1; 1211 1211 } 1212 1212 1213 - static long bdi_max_pause(struct backing_dev_info *bdi, 1214 - unsigned long bdi_dirty) 1213 + static unsigned long bdi_max_pause(struct backing_dev_info *bdi, 1214 + unsigned long bdi_dirty) 1215 1215 { 1216 - long bw = bdi->avg_write_bandwidth; 1217 - long t; 1216 + unsigned long bw = bdi->avg_write_bandwidth; 1217 + unsigned long t; 1218 1218 1219 1219 /* 1220 1220 * Limit pause time for small memory systems. If sleeping for too long ··· 1226 1226 t = bdi_dirty / (1 + bw / roundup_pow_of_two(1 + HZ / 8)); 1227 1227 t++; 1228 1228 1229 - return min_t(long, t, MAX_PAUSE); 1229 + return min_t(unsigned long, t, MAX_PAUSE); 1230 1230 } 1231 1231 1232 1232 static long bdi_min_pause(struct backing_dev_info *bdi,
+2
mm/slab_common.c
··· 56 56 continue; 57 57 } 58 58 59 + #if !defined(CONFIG_SLUB) || !defined(CONFIG_SLUB_DEBUG_ON) 59 60 /* 60 61 * For simplicity, we won't check this in the list of memcg 61 62 * caches. We have control over memcg naming, and if there ··· 70 69 s = NULL; 71 70 return -EINVAL; 72 71 } 72 + #endif 73 73 } 74 74 75 75 WARN_ON(strchr(name, ' ')); /* It confuses parsers */
+3 -1
mm/swapfile.c
··· 1824 1824 struct filename *pathname; 1825 1825 int i, type, prev; 1826 1826 int err; 1827 + unsigned int old_block_size; 1827 1828 1828 1829 if (!capable(CAP_SYS_ADMIN)) 1829 1830 return -EPERM; ··· 1915 1914 } 1916 1915 1917 1916 swap_file = p->swap_file; 1917 + old_block_size = p->old_block_size; 1918 1918 p->swap_file = NULL; 1919 1919 p->max = 0; 1920 1920 swap_map = p->swap_map; ··· 1940 1938 inode = mapping->host; 1941 1939 if (S_ISBLK(inode->i_mode)) { 1942 1940 struct block_device *bdev = I_BDEV(inode); 1943 - set_blocksize(bdev, p->old_block_size); 1941 + set_blocksize(bdev, old_block_size); 1944 1942 blkdev_put(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL); 1945 1943 } else { 1946 1944 mutex_lock(&inode->i_mutex);
+1
mm/vmscan.c
··· 211 211 down_write(&shrinker_rwsem); 212 212 list_del(&shrinker->list); 213 213 up_write(&shrinker_rwsem); 214 + kfree(shrinker->nr_deferred); 214 215 } 215 216 EXPORT_SYMBOL(unregister_shrinker); 216 217
+4
mm/zswap.c
··· 804 804 } 805 805 tree->rbroot = RB_ROOT; 806 806 spin_unlock(&tree->lock); 807 + 808 + zbud_destroy_pool(tree->pool); 809 + kfree(tree); 810 + zswap_trees[type] = NULL; 807 811 } 808 812 809 813 static struct zbud_ops zswap_zbud_ops = {
+2 -2
net/bridge/br_fdb.c
··· 700 700 701 701 vid = nla_get_u16(tb[NDA_VLAN]); 702 702 703 - if (vid >= VLAN_N_VID) { 703 + if (!vid || vid >= VLAN_VID_MASK) { 704 704 pr_info("bridge: RTM_NEWNEIGH with invalid vlan id %d\n", 705 705 vid); 706 706 return -EINVAL; ··· 794 794 795 795 vid = nla_get_u16(tb[NDA_VLAN]); 796 796 797 - if (vid >= VLAN_N_VID) { 797 + if (!vid || vid >= VLAN_VID_MASK) { 798 798 pr_info("bridge: RTM_NEWNEIGH with invalid vlan id %d\n", 799 799 vid); 800 800 return -EINVAL;
+1 -1
net/bridge/br_mdb.c
··· 453 453 call_rcu_bh(&p->rcu, br_multicast_free_pg); 454 454 err = 0; 455 455 456 - if (!mp->ports && !mp->mglist && mp->timer_armed && 456 + if (!mp->ports && !mp->mglist && 457 457 netif_running(br->dev)) 458 458 mod_timer(&mp->timer, jiffies); 459 459 break;
+26 -12
net/bridge/br_multicast.c
··· 272 272 del_timer(&p->timer); 273 273 call_rcu_bh(&p->rcu, br_multicast_free_pg); 274 274 275 - if (!mp->ports && !mp->mglist && mp->timer_armed && 275 + if (!mp->ports && !mp->mglist && 276 276 netif_running(br->dev)) 277 277 mod_timer(&mp->timer, jiffies); 278 278 ··· 620 620 621 621 mp->br = br; 622 622 mp->addr = *group; 623 - 624 623 setup_timer(&mp->timer, br_multicast_group_expired, 625 624 (unsigned long)mp); 626 625 ··· 659 660 struct net_bridge_mdb_entry *mp; 660 661 struct net_bridge_port_group *p; 661 662 struct net_bridge_port_group __rcu **pp; 663 + unsigned long now = jiffies; 662 664 int err; 663 665 664 666 spin_lock(&br->multicast_lock); ··· 674 674 675 675 if (!port) { 676 676 mp->mglist = true; 677 + mod_timer(&mp->timer, now + br->multicast_membership_interval); 677 678 goto out; 678 679 } 679 680 ··· 682 681 (p = mlock_dereference(*pp, br)) != NULL; 683 682 pp = &p->next) { 684 683 if (p->port == port) 685 - goto out; 684 + goto found; 686 685 if ((unsigned long)p->port < (unsigned long)port) 687 686 break; 688 687 } ··· 693 692 rcu_assign_pointer(*pp, p); 694 693 br_mdb_notify(br->dev, port, group, RTM_NEWMDB); 695 694 695 + found: 696 + mod_timer(&p->timer, now + br->multicast_membership_interval); 696 697 out: 697 698 err = 0; 698 699 ··· 1194 1191 if (!mp) 1195 1192 goto out; 1196 1193 1197 - mod_timer(&mp->timer, now + br->multicast_membership_interval); 1198 - mp->timer_armed = true; 1199 - 1200 1194 max_delay *= br->multicast_last_member_count; 1201 1195 1202 1196 if (mp->mglist && ··· 1269 1269 mp = br_mdb_ip6_get(mlock_dereference(br->mdb, br), group, vid); 1270 1270 if (!mp) 1271 1271 goto out; 1272 - 1273 - mod_timer(&mp->timer, now + br->multicast_membership_interval); 1274 - mp->timer_armed = true; 1275 1272 1276 1273 max_delay *= br->multicast_last_member_count; 1277 1274 if (mp->mglist && ··· 1355 1358 call_rcu_bh(&p->rcu, br_multicast_free_pg); 1356 1359 br_mdb_notify(br->dev, port, group, RTM_DELMDB); 1357 1360 1358 - if 
(!mp->ports && !mp->mglist && mp->timer_armed && 1361 + if (!mp->ports && !mp->mglist && 1359 1362 netif_running(br->dev)) 1360 1363 mod_timer(&mp->timer, jiffies); 1361 1364 } ··· 1367 1370 br->multicast_last_member_interval; 1368 1371 1369 1372 if (!port) { 1370 - if (mp->mglist && mp->timer_armed && 1373 + if (mp->mglist && 1371 1374 (timer_pending(&mp->timer) ? 1372 1375 time_after(mp->timer.expires, time) : 1373 1376 try_to_del_timer_sync(&mp->timer) >= 0)) { 1374 1377 mod_timer(&mp->timer, time); 1375 1378 } 1379 + 1380 + goto out; 1381 + } 1382 + 1383 + for (p = mlock_dereference(mp->ports, br); 1384 + p != NULL; 1385 + p = mlock_dereference(p->next, br)) { 1386 + if (p->port != port) 1387 + continue; 1388 + 1389 + if (!hlist_unhashed(&p->mglist) && 1390 + (timer_pending(&p->timer) ? 1391 + time_after(p->timer.expires, time) : 1392 + try_to_del_timer_sync(&p->timer) >= 0)) { 1393 + mod_timer(&p->timer, time); 1394 + } 1395 + 1396 + break; 1376 1397 } 1377 1398 out: 1378 1399 spin_unlock(&br->multicast_lock); ··· 1813 1798 hlist_for_each_entry_safe(mp, n, &mdb->mhash[i], 1814 1799 hlist[ver]) { 1815 1800 del_timer(&mp->timer); 1816 - mp->timer_armed = false; 1817 1801 call_rcu_bh(&mp->rcu, br_multicast_free_group); 1818 1802 } 1819 1803 }
+1 -1
net/bridge/br_netlink.c
··· 243 243 244 244 vinfo = nla_data(tb[IFLA_BRIDGE_VLAN_INFO]); 245 245 246 - if (vinfo->vid >= VLAN_N_VID) 246 + if (!vinfo->vid || vinfo->vid >= VLAN_VID_MASK) 247 247 return -EINVAL; 248 248 249 249 switch (cmd) {
+1 -4
net/bridge/br_private.h
··· 126 126 struct timer_list timer; 127 127 struct br_ip addr; 128 128 bool mglist; 129 - bool timer_armed; 130 129 }; 131 130 132 131 struct net_bridge_mdb_htable ··· 623 624 * vid wasn't set 624 625 */ 625 626 smp_rmb(); 626 - return (v->pvid & VLAN_TAG_PRESENT) ? 627 - (v->pvid & ~VLAN_TAG_PRESENT) : 628 - VLAN_N_VID; 627 + return v->pvid ?: VLAN_N_VID; 629 628 } 630 629 631 630 #else
+1 -1
net/bridge/br_stp_if.c
··· 134 134 135 135 if (br->bridge_forward_delay < BR_MIN_FORWARD_DELAY) 136 136 __br_set_forward_delay(br, BR_MIN_FORWARD_DELAY); 137 - else if (br->bridge_forward_delay < BR_MAX_FORWARD_DELAY) 137 + else if (br->bridge_forward_delay > BR_MAX_FORWARD_DELAY) 138 138 __br_set_forward_delay(br, BR_MAX_FORWARD_DELAY); 139 139 140 140 if (r == 0) {
+66 -57
net/bridge/br_vlan.c
··· 45 45 return 0; 46 46 } 47 47 48 - if (vid) { 49 - if (v->port_idx) { 50 - p = v->parent.port; 51 - br = p->br; 52 - dev = p->dev; 53 - } else { 54 - br = v->parent.br; 55 - dev = br->dev; 56 - } 57 - ops = dev->netdev_ops; 48 + if (v->port_idx) { 49 + p = v->parent.port; 50 + br = p->br; 51 + dev = p->dev; 52 + } else { 53 + br = v->parent.br; 54 + dev = br->dev; 55 + } 56 + ops = dev->netdev_ops; 58 57 59 - if (p && (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) { 60 - /* Add VLAN to the device filter if it is supported. 61 - * Stricly speaking, this is not necessary now, since 62 - * devices are made promiscuous by the bridge, but if 63 - * that ever changes this code will allow tagged 64 - * traffic to enter the bridge. 65 - */ 66 - err = ops->ndo_vlan_rx_add_vid(dev, htons(ETH_P_8021Q), 67 - vid); 68 - if (err) 69 - return err; 70 - } 58 + if (p && (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) { 59 + /* Add VLAN to the device filter if it is supported. 60 + * Stricly speaking, this is not necessary now, since 61 + * devices are made promiscuous by the bridge, but if 62 + * that ever changes this code will allow tagged 63 + * traffic to enter the bridge. 
64 + */ 65 + err = ops->ndo_vlan_rx_add_vid(dev, htons(ETH_P_8021Q), 66 + vid); 67 + if (err) 68 + return err; 69 + } 71 70 72 - err = br_fdb_insert(br, p, dev->dev_addr, vid); 73 - if (err) { 74 - br_err(br, "failed insert local address into bridge " 75 - "forwarding table\n"); 76 - goto out_filt; 77 - } 78 - 71 + err = br_fdb_insert(br, p, dev->dev_addr, vid); 72 + if (err) { 73 + br_err(br, "failed insert local address into bridge " 74 + "forwarding table\n"); 75 + goto out_filt; 79 76 } 80 77 81 78 set_bit(vid, v->vlan_bitmap); ··· 95 98 __vlan_delete_pvid(v, vid); 96 99 clear_bit(vid, v->untagged_bitmap); 97 100 98 - if (v->port_idx && vid) { 101 + if (v->port_idx) { 99 102 struct net_device *dev = v->parent.port->dev; 100 103 const struct net_device_ops *ops = dev->netdev_ops; 101 104 ··· 189 192 bool br_allowed_ingress(struct net_bridge *br, struct net_port_vlans *v, 190 193 struct sk_buff *skb, u16 *vid) 191 194 { 195 + int err; 196 + 192 197 /* If VLAN filtering is disabled on the bridge, all packets are 193 198 * permitted. 194 199 */ ··· 203 204 if (!v) 204 205 return false; 205 206 206 - if (br_vlan_get_tag(skb, vid)) { 207 + err = br_vlan_get_tag(skb, vid); 208 + if (!*vid) { 207 209 u16 pvid = br_get_pvid(v); 208 210 209 - /* Frame did not have a tag. See if pvid is set 210 - * on this port. That tells us which vlan untagged 211 - * traffic belongs to. 211 + /* Frame had a tag with VID 0 or did not have a tag. 212 + * See if pvid is set on this port. That tells us which 213 + * vlan untagged or priority-tagged traffic belongs to. 212 214 */ 213 215 if (pvid == VLAN_N_VID) 214 216 return false; 215 217 216 - /* PVID is set on this port. Any untagged ingress 217 - * frame is considered to belong to this vlan. 218 + /* PVID is set on this port. Any untagged or priority-tagged 219 + * ingress frame is considered to belong to this vlan. 
218 220 */ 219 - __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), pvid); 221 + *vid = pvid; 222 + if (likely(err)) 223 + /* Untagged Frame. */ 224 + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), pvid); 225 + else 226 + /* Priority-tagged Frame. 227 + * At this point, We know that skb->vlan_tci had 228 + * VLAN_TAG_PRESENT bit and its VID field was 0x000. 229 + * We update only VID field and preserve PCP field. 230 + */ 231 + skb->vlan_tci |= pvid; 232 + 220 233 return true; 221 234 } 222 235 ··· 259 248 return false; 260 249 } 261 250 262 - /* Must be protected by RTNL */ 251 + /* Must be protected by RTNL. 252 + * Must be called with vid in range from 1 to 4094 inclusive. 253 + */ 263 254 int br_vlan_add(struct net_bridge *br, u16 vid, u16 flags) 264 255 { 265 256 struct net_port_vlans *pv = NULL; ··· 291 278 return err; 292 279 } 293 280 294 - /* Must be protected by RTNL */ 281 + /* Must be protected by RTNL. 282 + * Must be called with vid in range from 1 to 4094 inclusive. 283 + */ 295 284 int br_vlan_delete(struct net_bridge *br, u16 vid) 296 285 { 297 286 struct net_port_vlans *pv; ··· 304 289 if (!pv) 305 290 return -EINVAL; 306 291 307 - if (vid) { 308 - /* If the VID !=0 remove fdb for this vid. VID 0 is special 309 - * in that it's the default and is always there in the fdb. 310 - */ 311 - spin_lock_bh(&br->hash_lock); 312 - fdb_delete_by_addr(br, br->dev->dev_addr, vid); 313 - spin_unlock_bh(&br->hash_lock); 314 - } 292 + spin_lock_bh(&br->hash_lock); 293 + fdb_delete_by_addr(br, br->dev->dev_addr, vid); 294 + spin_unlock_bh(&br->hash_lock); 315 295 316 296 __vlan_del(pv, vid); 317 297 return 0; ··· 339 329 return 0; 340 330 } 341 331 342 - /* Must be protected by RTNL */ 332 + /* Must be protected by RTNL. 333 + * Must be called with vid in range from 1 to 4094 inclusive. 
334 + */ 343 335 int nbp_vlan_add(struct net_bridge_port *port, u16 vid, u16 flags) 344 336 { 345 337 struct net_port_vlans *pv = NULL; ··· 375 363 return err; 376 364 } 377 365 378 - /* Must be protected by RTNL */ 366 + /* Must be protected by RTNL. 367 + * Must be called with vid in range from 1 to 4094 inclusive. 368 + */ 379 369 int nbp_vlan_delete(struct net_bridge_port *port, u16 vid) 380 370 { 381 371 struct net_port_vlans *pv; ··· 388 374 if (!pv) 389 375 return -EINVAL; 390 376 391 - if (vid) { 392 - /* If the VID !=0 remove fdb for this vid. VID 0 is special 393 - * in that it's the default and is always there in the fdb. 394 - */ 395 - spin_lock_bh(&port->br->hash_lock); 396 - fdb_delete_by_addr(port->br, port->dev->dev_addr, vid); 397 - spin_unlock_bh(&port->br->hash_lock); 398 - } 377 + spin_lock_bh(&port->br->hash_lock); 378 + fdb_delete_by_addr(port->br, port->dev->dev_addr, vid); 379 + spin_unlock_bh(&port->br->hash_lock); 399 380 400 381 return __vlan_del(pv, vid); 401 382 }
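The ingress change in br_vlan.c above treats priority-tagged frames (a VLAN tag with VID 0) the same as untagged frames: both fall back to the port's PVID. A minimal userspace sketch of that decision, with hypothetical parameters standing in for the skb and port state:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VLAN_N_VID    4096          /* sentinel: no PVID configured */
#define VLAN_VID_MASK 0x0fff

/* Sketch of the vid assignment in br_allowed_ingress(): "has_tag" and
 * "tag_vid" model the skb's VLAN state, "pvid" the port configuration. */
static bool assign_ingress_vid(bool has_tag, uint16_t tag_vid,
                               uint16_t pvid, uint16_t *vid)
{
    if (!has_tag || (tag_vid & VLAN_VID_MASK) == 0) {
        /* untagged or priority-tagged: both belong to the PVID vlan */
        if (pvid == VLAN_N_VID)
            return false;           /* no PVID: frame is not allowed */
        *vid = pvid;
        return true;
    }
    *vid = tag_vid & VLAN_VID_MASK; /* normally tagged: keep its VID */
    return true;
}
```

The real code additionally preserves the PCP bits of a priority tag by OR-ing the PVID into skb->vlan_tci rather than rewriting the whole tag.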
+2
net/core/secure_seq.c
··· 11 11 12 12 #include <net/secure_seq.h> 13 13 14 + #if IS_ENABLED(CONFIG_IPV6) || IS_ENABLED(CONFIG_INET) 14 15 #define NET_SECRET_SIZE (MD5_MESSAGE_BYTES / 4) 15 16 16 17 static u32 net_secret[NET_SECRET_SIZE] ____cacheline_aligned; ··· 20 19 { 21 20 net_get_random_once(net_secret, sizeof(net_secret)); 22 21 } 22 + #endif 23 23 24 24 #ifdef CONFIG_INET 25 25 static u32 seq_scale(u32 seq)
+9 -4
net/ipv4/ip_output.c
··· 772 772 /* initialize protocol header pointer */ 773 773 skb->transport_header = skb->network_header + fragheaderlen; 774 774 775 - skb->ip_summed = CHECKSUM_PARTIAL; 776 775 skb->csum = 0; 777 776 778 - /* specify the length of each IP datagram fragment */ 779 - skb_shinfo(skb)->gso_size = maxfraglen - fragheaderlen; 780 - skb_shinfo(skb)->gso_type = SKB_GSO_UDP; 777 + 781 778 __skb_queue_tail(queue, skb); 779 + } else if (skb_is_gso(skb)) { 780 + goto append; 782 781 } 783 782 783 + skb->ip_summed = CHECKSUM_PARTIAL; 784 + /* specify the length of each IP datagram fragment */ 785 + skb_shinfo(skb)->gso_size = maxfraglen - fragheaderlen; 786 + skb_shinfo(skb)->gso_type = SKB_GSO_UDP; 787 + 788 + append: 784 789 return skb_append_datato_frags(sk, skb, getfrag, from, 785 790 (length - transhdrlen)); 786 791 }
+11 -3
net/ipv4/ip_vti.c
··· 61 61 iph->saddr, iph->daddr, 0); 62 62 if (tunnel != NULL) { 63 63 struct pcpu_tstats *tstats; 64 + u32 oldmark = skb->mark; 65 + int ret; 64 66 65 - if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) 67 + 68 + /* temporarily mark the skb with the tunnel o_key, to 69 + * only match policies with this mark. 70 + */ 71 + skb->mark = be32_to_cpu(tunnel->parms.o_key); 72 + ret = xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb); 73 + skb->mark = oldmark; 74 + if (!ret) 66 75 return -1; 67 76 68 77 tstats = this_cpu_ptr(tunnel->dev->tstats); ··· 80 71 tstats->rx_bytes += skb->len; 81 72 u64_stats_update_end(&tstats->syncp); 82 73 83 - skb->mark = 0; 84 74 secpath_reset(skb); 85 75 skb->dev = tunnel->dev; 86 76 return 1; ··· 111 103 112 104 memset(&fl4, 0, sizeof(fl4)); 113 105 flowi4_init_output(&fl4, tunnel->parms.link, 114 - be32_to_cpu(tunnel->parms.i_key), RT_TOS(tos), 106 + be32_to_cpu(tunnel->parms.o_key), RT_TOS(tos), 115 107 RT_SCOPE_UNIVERSE, 116 108 IPPROTO_IPIP, 0, 117 109 dst, tiph->saddr, 0, 0);
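The ip_vti.c hunk temporarily overwrites skb->mark with the tunnel's o_key so the xfrm policy lookup matches only policies carrying that mark, then restores the caller's mark. The save/restore shape, with illustrative stand-in types (not struct sk_buff or the real xfrm API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct sk_buff_sketch { uint32_t mark; };

/* models xfrm4_policy_check(): matches when the skb mark equals the
 * mark the (hypothetical) policy was installed with */
static bool policy_check(const struct sk_buff_sketch *skb,
                         uint32_t policy_mark)
{
    return skb->mark == policy_mark;
}

static bool vti_policy_check(struct sk_buff_sketch *skb, uint32_t o_key,
                             uint32_t policy_mark)
{
    uint32_t oldmark = skb->mark;   /* remember the caller's mark */
    bool ret;

    skb->mark = o_key;              /* temporary mark for the lookup */
    ret = policy_check(skb, policy_mark);
    skb->mark = oldmark;            /* always restored, match or not */
    return ret;
}
```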
+3 -1
net/ipv4/tcp_input.c
··· 3338 3338 tcp_init_cwnd_reduction(sk, true); 3339 3339 tcp_set_ca_state(sk, TCP_CA_CWR); 3340 3340 tcp_end_cwnd_reduction(sk); 3341 - tcp_set_ca_state(sk, TCP_CA_Open); 3341 + tcp_try_keep_open(sk); 3342 3342 NET_INC_STATS_BH(sock_net(sk), 3343 3343 LINUX_MIB_TCPLOSSPROBERECOVERY); 3344 3344 } ··· 5750 5750 tcp_rearm_rto(sk); 5751 5751 } else 5752 5752 tcp_init_metrics(sk); 5753 + 5754 + tcp_update_pacing_rate(sk); 5753 5755 5754 5756 /* Prevent spurious tcp_cwnd_restart() on first data packet */ 5755 5757 tp->lsndtime = tcp_time_stamp;
+7 -5
net/ipv4/tcp_output.c
··· 986 986 static void tcp_set_skb_tso_segs(const struct sock *sk, struct sk_buff *skb, 987 987 unsigned int mss_now) 988 988 { 989 - if (skb->len <= mss_now || !sk_can_gso(sk) || 990 - skb->ip_summed == CHECKSUM_NONE) { 989 + /* Make sure we own this skb before messing gso_size/gso_segs */ 990 + WARN_ON_ONCE(skb_cloned(skb)); 991 + 992 + if (skb->len <= mss_now || skb->ip_summed == CHECKSUM_NONE) { 991 993 /* Avoid the costly divide in the normal 992 994 * non-TSO case. 993 995 */ ··· 1069 1067 if (nsize < 0) 1070 1068 nsize = 0; 1071 1069 1072 - if (skb_cloned(skb) && 1073 - skb_is_nonlinear(skb) && 1074 - pskb_expand_head(skb, 0, 0, GFP_ATOMIC)) 1070 + if (skb_unclone(skb, GFP_ATOMIC)) 1075 1071 return -ENOMEM; 1076 1072 1077 1073 /* Get a new skb... force flag on. */ ··· 2344 2344 int oldpcount = tcp_skb_pcount(skb); 2345 2345 2346 2346 if (unlikely(oldpcount > 1)) { 2347 + if (skb_unclone(skb, GFP_ATOMIC)) 2348 + return -ENOMEM; 2347 2349 tcp_init_tso_segs(sk, skb, cur_mss); 2348 2350 tcp_adjust_pcount(sk, skb, oldpcount - tcp_skb_pcount(skb)); 2349 2351 }
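The tcp_output.c hunks replace an open-coded clone-and-expand check with skb_unclone(): before touching shared metadata such as gso_size/gso_segs, the caller must hold a private copy of the buffer header. A small copy-on-write sketch of that idea, using a refcounted toy buffer rather than struct sk_buff:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in: a shared refcount models skb cloning, and
 * gso_segs models the metadata we must own before modifying. */
struct buf {
    int *refcnt;                    /* shared between clones */
    int gso_segs;
};

static int buf_cloned(const struct buf *b)
{
    return *b->refcnt > 1;
}

/* models skb_unclone(): ensure exclusive ownership, or fail */
static int buf_unclone(struct buf *b)
{
    int *priv;

    if (!buf_cloned(b))
        return 0;                   /* already the sole owner */

    priv = malloc(sizeof(*priv));
    if (!priv)
        return -1;                  /* models -ENOMEM */

    (*b->refcnt)--;                 /* drop our share of the clone */
    *priv = 1;
    b->refcnt = priv;               /* now privately owned */
    return 0;
}
```

After a successful unclone, writing b->gso_segs can no longer corrupt the metadata seen through the other clone, which is exactly the guarantee the WARN_ON_ONCE in tcp_set_skb_tso_segs() asserts.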
+1
net/ipv4/xfrm4_policy.c
··· 107 107 108 108 memset(fl4, 0, sizeof(struct flowi4)); 109 109 fl4->flowi4_mark = skb->mark; 110 + fl4->flowi4_oif = skb_dst(skb)->dev->ifindex; 110 111 111 112 if (!ip_is_fragment(iph)) { 112 113 switch (iph->protocol) {
+1 -2
net/ipv6/ah6.c
··· 618 618 struct ip_auth_hdr *ah = (struct ip_auth_hdr*)(skb->data+offset); 619 619 struct xfrm_state *x; 620 620 621 - if (type != ICMPV6_DEST_UNREACH && 622 - type != ICMPV6_PKT_TOOBIG && 621 + if (type != ICMPV6_PKT_TOOBIG && 623 622 type != NDISC_REDIRECT) 624 623 return; 625 624
+1 -2
net/ipv6/esp6.c
··· 436 436 struct ip_esp_hdr *esph = (struct ip_esp_hdr *)(skb->data + offset); 437 437 struct xfrm_state *x; 438 438 439 - if (type != ICMPV6_DEST_UNREACH && 440 - type != ICMPV6_PKT_TOOBIG && 439 + if (type != ICMPV6_PKT_TOOBIG && 441 440 type != NDISC_REDIRECT) 442 441 return; 443 442
+1 -2
net/ipv6/ip6_gre.c
··· 976 976 if (t->parms.o_flags&GRE_SEQ) 977 977 addend += 4; 978 978 } 979 + t->hlen = addend; 979 980 980 981 if (p->flags & IP6_TNL_F_CAP_XMIT) { 981 982 int strict = (ipv6_addr_type(&p->raddr) & ··· 1003 1002 } 1004 1003 ip6_rt_put(rt); 1005 1004 } 1006 - 1007 - t->hlen = addend; 1008 1005 } 1009 1006 1010 1007 static int ip6gre_tnl_change(struct ip6_tnl *t,
+16 -13
net/ipv6/ip6_output.c
··· 105 105 } 106 106 107 107 rcu_read_lock_bh(); 108 - nexthop = rt6_nexthop((struct rt6_info *)dst, &ipv6_hdr(skb)->daddr); 108 + nexthop = rt6_nexthop((struct rt6_info *)dst); 109 109 neigh = __ipv6_neigh_lookup_noref(dst->dev, nexthop); 110 110 if (unlikely(!neigh)) 111 111 neigh = __neigh_create(&nd_tbl, nexthop, dst->dev, false); ··· 874 874 */ 875 875 rt = (struct rt6_info *) *dst; 876 876 rcu_read_lock_bh(); 877 - n = __ipv6_neigh_lookup_noref(rt->dst.dev, rt6_nexthop(rt, &fl6->daddr)); 877 + n = __ipv6_neigh_lookup_noref(rt->dst.dev, rt6_nexthop(rt)); 878 878 err = n && !(n->nud_state & NUD_VALID) ? -EINVAL : 0; 879 879 rcu_read_unlock_bh(); 880 880 ··· 1008 1008 1009 1009 { 1010 1010 struct sk_buff *skb; 1011 + struct frag_hdr fhdr; 1011 1012 int err; 1012 1013 1013 1014 /* There is support for UDP large send offload by network ··· 1016 1015 * udp datagram 1017 1016 */ 1018 1017 if ((skb = skb_peek_tail(&sk->sk_write_queue)) == NULL) { 1019 - struct frag_hdr fhdr; 1020 - 1021 1018 skb = sock_alloc_send_skb(sk, 1022 1019 hh_len + fragheaderlen + transhdrlen + 20, 1023 1020 (flags & MSG_DONTWAIT), &err); ··· 1035 1036 skb->transport_header = skb->network_header + fragheaderlen; 1036 1037 1037 1038 skb->protocol = htons(ETH_P_IPV6); 1038 - skb->ip_summed = CHECKSUM_PARTIAL; 1039 1039 skb->csum = 0; 1040 1040 1041 - /* Specify the length of each IPv6 datagram fragment. 1042 - * It has to be a multiple of 8. 1043 - */ 1044 - skb_shinfo(skb)->gso_size = (mtu - fragheaderlen - 1045 - sizeof(struct frag_hdr)) & ~7; 1046 - skb_shinfo(skb)->gso_type = SKB_GSO_UDP; 1047 - ipv6_select_ident(&fhdr, rt); 1048 - skb_shinfo(skb)->ip6_frag_id = fhdr.identification; 1049 1041 __skb_queue_tail(&sk->sk_write_queue, skb); 1042 + } else if (skb_is_gso(skb)) { 1043 + goto append; 1050 1044 } 1051 1045 1046 + skb->ip_summed = CHECKSUM_PARTIAL; 1047 + /* Specify the length of each IPv6 datagram fragment. 1048 + * It has to be a multiple of 8. 
1049 + */ 1050 + skb_shinfo(skb)->gso_size = (mtu - fragheaderlen - 1051 + sizeof(struct frag_hdr)) & ~7; 1052 + skb_shinfo(skb)->gso_type = SKB_GSO_UDP; 1053 + ipv6_select_ident(&fhdr, rt); 1054 + skb_shinfo(skb)->ip6_frag_id = fhdr.identification; 1055 + 1056 + append: 1052 1057 return skb_append_datato_frags(sk, skb, getfrag, from, 1053 1058 (length - transhdrlen)); 1054 1059 }
+1 -2
net/ipv6/ipcomp6.c
··· 64 64 (struct ip_comp_hdr *)(skb->data + offset); 65 65 struct xfrm_state *x; 66 66 67 - if (type != ICMPV6_DEST_UNREACH && 68 - type != ICMPV6_PKT_TOOBIG && 67 + if (type != ICMPV6_PKT_TOOBIG && 69 68 type != NDISC_REDIRECT) 70 69 return; 71 70
+38 -10
net/ipv6/route.c
··· 476 476 } 477 477 478 478 #ifdef CONFIG_IPV6_ROUTER_PREF 479 + struct __rt6_probe_work { 480 + struct work_struct work; 481 + struct in6_addr target; 482 + struct net_device *dev; 483 + }; 484 + 485 + static void rt6_probe_deferred(struct work_struct *w) 486 + { 487 + struct in6_addr mcaddr; 488 + struct __rt6_probe_work *work = 489 + container_of(w, struct __rt6_probe_work, work); 490 + 491 + addrconf_addr_solict_mult(&work->target, &mcaddr); 492 + ndisc_send_ns(work->dev, NULL, &work->target, &mcaddr, NULL); 493 + dev_put(work->dev); 494 + kfree(w); 495 + } 496 + 479 497 static void rt6_probe(struct rt6_info *rt) 480 498 { 481 499 struct neighbour *neigh; ··· 517 499 518 500 if (!neigh || 519 501 time_after(jiffies, neigh->updated + rt->rt6i_idev->cnf.rtr_probe_interval)) { 520 - struct in6_addr mcaddr; 521 - struct in6_addr *target; 502 + struct __rt6_probe_work *work; 522 503 523 - if (neigh) { 504 + work = kmalloc(sizeof(*work), GFP_ATOMIC); 505 + 506 + if (neigh && work) 524 507 neigh->updated = jiffies; 525 - write_unlock(&neigh->lock); 526 - } 527 508 528 - target = (struct in6_addr *)&rt->rt6i_gateway; 529 - addrconf_addr_solict_mult(target, &mcaddr); 530 - ndisc_send_ns(rt->dst.dev, NULL, target, &mcaddr, NULL); 509 + if (neigh) 510 + write_unlock(&neigh->lock); 511 + 512 + if (work) { 513 + INIT_WORK(&work->work, rt6_probe_deferred); 514 + work->target = rt->rt6i_gateway; 515 + dev_hold(rt->dst.dev); 516 + work->dev = rt->dst.dev; 517 + schedule_work(&work->work); 518 + } 531 519 } else { 532 520 out: 533 521 write_unlock(&neigh->lock); ··· 875 851 if (ort->rt6i_dst.plen != 128 && 876 852 ipv6_addr_equal(&ort->rt6i_dst.addr, daddr)) 877 853 rt->rt6i_flags |= RTF_ANYCAST; 878 - rt->rt6i_gateway = *daddr; 879 854 } 880 855 881 856 rt->rt6i_flags |= RTF_CACHE; ··· 1358 1335 rt->dst.flags |= DST_HOST; 1359 1336 rt->dst.output = ip6_output; 1360 1337 atomic_set(&rt->dst.__refcnt, 1); 1338 + rt->rt6i_gateway = fl6->daddr; 1361 1339 rt->rt6i_dst.addr = 
fl6->daddr; 1362 1340 rt->rt6i_dst.plen = 128; 1363 1341 rt->rt6i_idev = idev; ··· 1894 1870 in6_dev_hold(rt->rt6i_idev); 1895 1871 rt->dst.lastuse = jiffies; 1896 1872 1897 - rt->rt6i_gateway = ort->rt6i_gateway; 1873 + if (ort->rt6i_flags & RTF_GATEWAY) 1874 + rt->rt6i_gateway = ort->rt6i_gateway; 1875 + else 1876 + rt->rt6i_gateway = *dest; 1898 1877 rt->rt6i_flags = ort->rt6i_flags; 1899 1878 if ((ort->rt6i_flags & (RTF_DEFAULT | RTF_ADDRCONF)) == 1900 1879 (RTF_DEFAULT | RTF_ADDRCONF)) ··· 2184 2157 else 2185 2158 rt->rt6i_flags |= RTF_LOCAL; 2186 2159 2160 + rt->rt6i_gateway = *addr; 2187 2161 rt->rt6i_dst.addr = *addr; 2188 2162 rt->rt6i_dst.plen = 128; 2189 2163 rt->rt6i_table = fib6_get_table(net, RT6_TABLE_LOCAL);
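The rt6_probe() change above moves the neighbour solicitation out of the write-locked section by packaging it into a work item that runs later in process context. The schedule-then-run shape can be sketched with a one-slot toy queue (the struct names mirror the patch, but this is not the kernel workqueue API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct probe_work {
    void (*fn)(struct probe_work *);
    char target[40];                /* models the IPv6 address to probe */
    int *probes_sent;
};

static struct probe_work *pending;  /* models schedule_work() */

static void rt6_probe_deferred(struct probe_work *w)
{
    (*w->probes_sent)++;            /* models ndisc_send_ns() */
}

/* Caller holds a lock and must not sleep: only package and schedule. */
static void rt6_probe_sketch(const char *target, int *probes_sent)
{
    static struct probe_work w;     /* the real code kmalloc()s GFP_ATOMIC */

    w.fn = rt6_probe_deferred;
    strncpy(w.target, target, sizeof(w.target) - 1);
    w.probes_sent = probes_sent;
    pending = &w;                   /* nothing sleeps on this path */
}

static void run_pending_work(void)  /* models the workqueue thread */
{
    if (pending) {
        pending->fn(pending);
        pending = NULL;
    }
}
```

The real patch also takes a dev_hold() on the device so it cannot disappear between scheduling and execution of the deferred probe.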
+2 -3
net/ipv6/udp.c
··· 1243 1243 if (tclass < 0) 1244 1244 tclass = np->tclass; 1245 1245 1246 - if (dontfrag < 0) 1247 - dontfrag = np->dontfrag; 1248 - 1249 1246 if (msg->msg_flags&MSG_CONFIRM) 1250 1247 goto do_confirm; 1251 1248 back_from_confirm: ··· 1261 1264 up->pending = AF_INET6; 1262 1265 1263 1266 do_append_data: 1267 + if (dontfrag < 0) 1268 + dontfrag = np->dontfrag; 1264 1269 up->len += ulen; 1265 1270 getfrag = is_udplite ? udplite_getfrag : ip_generic_getfrag; 1266 1271 err = ip6_append_data(sk, getfrag, msg->msg_iov, ulen,
+1
net/ipv6/xfrm6_policy.c
··· 138 138 139 139 memset(fl6, 0, sizeof(struct flowi6)); 140 140 fl6->flowi6_mark = skb->mark; 141 + fl6->flowi6_oif = skb_dst(skb)->dev->ifindex; 141 142 142 143 fl6->daddr = reverse ? hdr->saddr : hdr->daddr; 143 144 fl6->saddr = reverse ? hdr->daddr : hdr->saddr;
+2 -1
net/key/af_key.c
··· 1098 1098 1099 1099 x->id.proto = proto; 1100 1100 x->id.spi = sa->sadb_sa_spi; 1101 - x->props.replay_window = sa->sadb_sa_replay; 1101 + x->props.replay_window = min_t(unsigned int, sa->sadb_sa_replay, 1102 + (sizeof(x->replay.bitmap) * 8)); 1102 1103 if (sa->sadb_sa_flags & SADB_SAFLAGS_NOECN) 1103 1104 x->props.flags |= XFRM_STATE_NOECN; 1104 1105 if (sa->sadb_sa_flags & SADB_SAFLAGS_DECAP_DSCP)
+4
net/l2tp/l2tp_ppp.c
··· 353 353 goto error_put_sess_tun; 354 354 } 355 355 356 + local_bh_disable(); 356 357 l2tp_xmit_skb(session, skb, session->hdr_len); 358 + local_bh_enable(); 357 359 358 360 sock_put(ps->tunnel_sock); 359 361 sock_put(sk); ··· 424 422 skb->data[0] = ppph[0]; 425 423 skb->data[1] = ppph[1]; 426 424 425 + local_bh_disable(); 427 426 l2tp_xmit_skb(session, skb, session->hdr_len); 427 + local_bh_enable(); 428 428 429 429 sock_put(sk_tun); 430 430 sock_put(sk);
+1 -1
net/mac80211/cfg.c
··· 3564 3564 return -EINVAL; 3565 3565 } 3566 3566 band = chanctx_conf->def.chan->band; 3567 - sta = sta_info_get(sdata, peer); 3567 + sta = sta_info_get_bss(sdata, peer); 3568 3568 if (sta) { 3569 3569 qos = test_sta_flag(sta, WLAN_STA_WME); 3570 3570 } else {
+3
net/mac80211/ieee80211_i.h
··· 893 893 * that the scan completed. 894 894 * @SCAN_ABORTED: Set for our scan work function when the driver reported 895 895 * a scan complete for an aborted scan. 896 + * @SCAN_HW_CANCELLED: Set for our scan work function when the scan is being 897 + * cancelled. 896 898 */ 897 899 enum { 898 900 SCAN_SW_SCANNING, ··· 902 900 SCAN_ONCHANNEL_SCANNING, 903 901 SCAN_COMPLETED, 904 902 SCAN_ABORTED, 903 + SCAN_HW_CANCELLED, 905 904 }; 906 905 907 906 /**
+2
net/mac80211/offchannel.c
··· 394 394 395 395 if (started) 396 396 ieee80211_start_next_roc(local); 397 + else if (list_empty(&local->roc_list)) 398 + ieee80211_run_deferred_scan(local); 397 399 } 398 400 399 401 out_unlock:
+19
net/mac80211/scan.c
··· 238 238 enum ieee80211_band band; 239 239 int i, ielen, n_chans; 240 240 241 + if (test_bit(SCAN_HW_CANCELLED, &local->scanning)) 242 + return false; 243 + 241 244 do { 242 245 if (local->hw_scan_band == IEEE80211_NUM_BANDS) 243 246 return false; ··· 942 939 if (!local->scan_req) 943 940 goto out; 944 941 942 + /* 943 + * We have a scan running and the driver already reported completion, 944 + * but the worker hasn't run yet or is stuck on the mutex - mark it as 945 + * cancelled. 946 + */ 947 + if (test_bit(SCAN_HW_SCANNING, &local->scanning) && 948 + test_bit(SCAN_COMPLETED, &local->scanning)) { 949 + set_bit(SCAN_HW_CANCELLED, &local->scanning); 950 + goto out; 951 + } 952 + 945 953 if (test_bit(SCAN_HW_SCANNING, &local->scanning)) { 954 + /* 955 + * Make sure that __ieee80211_scan_completed doesn't trigger a 956 + * scan on another band. 957 + */ 958 + set_bit(SCAN_HW_CANCELLED, &local->scanning); 946 959 if (local->ops->cancel_hw_scan) 947 960 drv_cancel_hw_scan(local, 948 961 rcu_dereference_protected(local->scan_sdata,
+3
net/mac80211/status.c
··· 180 180 struct ieee80211_local *local = sta->local; 181 181 struct ieee80211_sub_if_data *sdata = sta->sdata; 182 182 183 + if (local->hw.flags & IEEE80211_HW_REPORTS_TX_ACK_STATUS) 184 + sta->last_rx = jiffies; 185 + 183 186 if (ieee80211_is_data_qos(mgmt->frame_control)) { 184 187 struct ieee80211_hdr *hdr = (void *) skb->data; 185 188 u8 *qc = ieee80211_get_qos_ctl(hdr);
+2 -1
net/mac80211/tx.c
··· 1120 1120 tx->sta = rcu_dereference(sdata->u.vlan.sta); 1121 1121 if (!tx->sta && sdata->dev->ieee80211_ptr->use_4addr) 1122 1122 return TX_DROP; 1123 - } else if (info->flags & IEEE80211_TX_CTL_INJECTED || 1123 + } else if (info->flags & (IEEE80211_TX_CTL_INJECTED | 1124 + IEEE80211_TX_INTFL_NL80211_FRAME_TX) || 1124 1125 tx->sdata->control_port_protocol == tx->skb->protocol) { 1125 1126 tx->sta = sta_info_get_bss(sdata, hdr->addr1); 1126 1127 }
+4
net/mac80211/util.c
··· 2236 2236 } 2237 2237 2238 2238 rate = cfg80211_calculate_bitrate(&ri); 2239 + if (WARN_ONCE(!rate, 2240 + "Invalid bitrate: flags=0x%x, idx=%d, vht_nss=%d\n", 2241 + status->flag, status->rate_idx, status->vht_nss)) 2242 + return 0; 2239 2243 2240 2244 /* rewind from end of MPDU */ 2241 2245 if (status->flag & RX_FLAG_MACTIME_END)
+2 -2
net/netfilter/nf_conntrack_h323_main.c
··· 778 778 flowi6_to_flowi(&fl1), false)) { 779 779 if (!afinfo->route(&init_net, (struct dst_entry **)&rt2, 780 780 flowi6_to_flowi(&fl2), false)) { 781 - if (!memcmp(&rt1->rt6i_gateway, &rt2->rt6i_gateway, 782 - sizeof(rt1->rt6i_gateway)) && 781 + if (ipv6_addr_equal(rt6_nexthop(rt1), 782 + rt6_nexthop(rt2)) && 783 783 rt1->dst.dev == rt2->dst.dev) 784 784 ret = 1; 785 785 dst_release(&rt2->dst);
+17
net/sched/sch_netem.c
··· 358 358 return PSCHED_NS2TICKS(ticks); 359 359 } 360 360 361 + static void tfifo_reset(struct Qdisc *sch) 362 + { 363 + struct netem_sched_data *q = qdisc_priv(sch); 364 + struct rb_node *p; 365 + 366 + while ((p = rb_first(&q->t_root))) { 367 + struct sk_buff *skb = netem_rb_to_skb(p); 368 + 369 + rb_erase(p, &q->t_root); 370 + skb->next = NULL; 371 + skb->prev = NULL; 372 + kfree_skb(skb); 373 + } 374 + } 375 + 361 376 static void tfifo_enqueue(struct sk_buff *nskb, struct Qdisc *sch) 362 377 { 363 378 struct netem_sched_data *q = qdisc_priv(sch); ··· 535 520 skb->next = NULL; 536 521 skb->prev = NULL; 537 522 len = qdisc_pkt_len(skb); 523 + sch->qstats.backlog -= len; 538 524 kfree_skb(skb); 539 525 } 540 526 } ··· 625 609 struct netem_sched_data *q = qdisc_priv(sch); 626 610 627 611 qdisc_reset_queue(sch); 612 + tfifo_reset(sch); 628 613 if (q->qdisc) 629 614 qdisc_reset(q->qdisc); 630 615 qdisc_watchdog_cancel(&q->watchdog);
+2 -1
net/sctp/output.c
··· 536 536 * by CRC32-C as described in <draft-ietf-tsvwg-sctpcsum-02.txt>. 537 537 */ 538 538 if (!sctp_checksum_disable) { 539 - if (!(dst->dev->features & NETIF_F_SCTP_CSUM)) { 539 + if (!(dst->dev->features & NETIF_F_SCTP_CSUM) || 540 + (dst_xfrm(dst) != NULL) || packet->ipfragok) { 540 541 __u32 crc32 = sctp_start_cksum((__u8 *)sh, cksum_buf_len); 541 542 542 543 /* 3) Put the resultant value into the checksum field in the
+10
net/unix/af_unix.c
··· 1246 1246 return 0; 1247 1247 } 1248 1248 1249 + static void unix_sock_inherit_flags(const struct socket *old, 1250 + struct socket *new) 1251 + { 1252 + if (test_bit(SOCK_PASSCRED, &old->flags)) 1253 + set_bit(SOCK_PASSCRED, &new->flags); 1254 + if (test_bit(SOCK_PASSSEC, &old->flags)) 1255 + set_bit(SOCK_PASSSEC, &new->flags); 1256 + } 1257 + 1249 1258 static int unix_accept(struct socket *sock, struct socket *newsock, int flags) 1250 1259 { 1251 1260 struct sock *sk = sock->sk; ··· 1289 1280 /* attach accepted sock to socket */ 1290 1281 unix_state_lock(tsk); 1291 1282 newsock->state = SS_CONNECTED; 1283 + unix_sock_inherit_flags(sock, newsock); 1292 1284 sock_graft(tsk, newsock); 1293 1285 unix_state_unlock(tsk); 1294 1286 return 0;
-2
net/wireless/core.c
··· 958 958 case NETDEV_PRE_UP: 959 959 if (!(wdev->wiphy->interface_modes & BIT(wdev->iftype))) 960 960 return notifier_from_errno(-EOPNOTSUPP); 961 - if (rfkill_blocked(rdev->rfkill)) 962 - return notifier_from_errno(-ERFKILL); 963 961 ret = cfg80211_can_add_interface(rdev, wdev->iftype); 964 962 if (ret) 965 963 return notifier_from_errno(ret);
+3
net/wireless/core.h
··· 402 402 cfg80211_can_add_interface(struct cfg80211_registered_device *rdev, 403 403 enum nl80211_iftype iftype) 404 404 { 405 + if (rfkill_blocked(rdev->rfkill)) 406 + return -ERFKILL; 407 + 405 408 return cfg80211_can_change_interface(rdev, NULL, iftype); 406 409 } 407 410
+6 -1
net/wireless/radiotap.c
··· 97 97 struct ieee80211_radiotap_header *radiotap_header, 98 98 int max_length, const struct ieee80211_radiotap_vendor_namespaces *vns) 99 99 { 100 + /* check the radiotap header can actually be present */ 101 + if (max_length < sizeof(struct ieee80211_radiotap_header)) 102 + return -EINVAL; 103 + 100 104 /* Linux only supports version 0 radiotap format */ 101 105 if (radiotap_header->it_version) 102 106 return -EINVAL; ··· 135 131 */ 136 132 137 133 if ((unsigned long)iterator->_arg - 138 - (unsigned long)iterator->_rtheader > 134 + (unsigned long)iterator->_rtheader + 135 + sizeof(uint32_t) > 139 136 (unsigned long)iterator->_max_length) 140 137 return -EINVAL; 141 138 }
+21 -7
net/xfrm/xfrm_policy.c
··· 334 334 335 335 atomic_inc(&policy->genid); 336 336 337 - del_timer(&policy->polq.hold_timer); 337 + if (del_timer(&policy->polq.hold_timer)) 338 + xfrm_pol_put(policy); 338 339 xfrm_queue_purge(&policy->polq.hold_queue); 339 340 340 341 if (del_timer(&policy->timer)) ··· 590 589 591 590 spin_lock_bh(&pq->hold_queue.lock); 592 591 skb_queue_splice_init(&pq->hold_queue, &list); 593 - del_timer(&pq->hold_timer); 592 + if (del_timer(&pq->hold_timer)) 593 + xfrm_pol_put(old); 594 594 spin_unlock_bh(&pq->hold_queue.lock); 595 595 596 596 if (skb_queue_empty(&list)) ··· 602 600 spin_lock_bh(&pq->hold_queue.lock); 603 601 skb_queue_splice(&list, &pq->hold_queue); 604 602 pq->timeout = XFRM_QUEUE_TMO_MIN; 605 - mod_timer(&pq->hold_timer, jiffies); 603 + if (!mod_timer(&pq->hold_timer, jiffies)) 604 + xfrm_pol_hold(new); 606 605 spin_unlock_bh(&pq->hold_queue.lock); 607 606 } 608 607 ··· 1772 1769 1773 1770 spin_lock(&pq->hold_queue.lock); 1774 1771 skb = skb_peek(&pq->hold_queue); 1772 + if (!skb) { 1773 + spin_unlock(&pq->hold_queue.lock); 1774 + goto out; 1775 + } 1775 1776 dst = skb_dst(skb); 1776 1777 sk = skb->sk; 1777 1778 xfrm_decode_session(skb, &fl, dst->ops->family); ··· 1794 1787 goto purge_queue; 1795 1788 1796 1789 pq->timeout = pq->timeout << 1; 1797 - mod_timer(&pq->hold_timer, jiffies + pq->timeout); 1790 + if (!mod_timer(&pq->hold_timer, jiffies + pq->timeout)) 1798 - return; 1791 + xfrm_pol_hold(pol); 1792 + goto out; 1799 1793 } 1800 1794 1801 1795 dst_release(dst); ··· 1827 1819 err = dst_output(skb); 1828 1820 } 1829 1821 1822 + out: 1823 + xfrm_pol_put(pol); 1830 1824 return; 1831 1825 1832 1826 purge_queue: 1833 1827 pq->timeout = 0; 1834 1828 xfrm_queue_purge(&pq->hold_queue); 1829 + xfrm_pol_put(pol); 1835 1830 } 1836 1831 1837 1832 static int xdst_queue_output(struct sk_buff *skb) ··· 1842 1831 unsigned long sched_next; 1843 1832 struct dst_entry *dst = skb_dst(skb); 1844 1833 struct xfrm_dst *xdst = (struct xfrm_dst *) dst;
1845 - struct xfrm_policy_queue *pq = &xdst->pols[0]->polq; 1834 + struct xfrm_policy *pol = xdst->pols[0]; 1835 + struct xfrm_policy_queue *pq = &pol->polq; 1846 1836 1847 1837 if (pq->hold_queue.qlen > XFRM_MAX_QUEUE_LEN) { 1848 1838 kfree_skb(skb); ··· 1862 1850 if (del_timer(&pq->hold_timer)) { 1863 1851 if (time_before(pq->hold_timer.expires, sched_next)) 1864 1852 sched_next = pq->hold_timer.expires; 1853 + xfrm_pol_put(pol); 1865 1854 } 1866 1855 1867 1856 __skb_queue_tail(&pq->hold_queue, skb); 1868 - mod_timer(&pq->hold_timer, sched_next); 1857 + if (!mod_timer(&pq->hold_timer, sched_next)) 1858 + xfrm_pol_hold(pol); 1869 1859 1870 1860 spin_unlock_bh(&pq->hold_queue.lock); 1871 1861
+29 -27
net/xfrm/xfrm_replay.c
··· 61 61 62 62 switch (event) { 63 63 case XFRM_REPLAY_UPDATE: 64 - if (x->replay_maxdiff && 65 - (x->replay.seq - x->preplay.seq < x->replay_maxdiff) && 66 - (x->replay.oseq - x->preplay.oseq < x->replay_maxdiff)) { 64 + if (!x->replay_maxdiff || 65 + ((x->replay.seq - x->preplay.seq < x->replay_maxdiff) && 66 + (x->replay.oseq - x->preplay.oseq < x->replay_maxdiff))) { 67 67 if (x->xflags & XFRM_TIME_DEFER) 68 68 event = XFRM_REPLAY_TIMEOUT; 69 69 else ··· 129 129 return 0; 130 130 131 131 diff = x->replay.seq - seq; 132 - if (diff >= min_t(unsigned int, x->props.replay_window, 133 - sizeof(x->replay.bitmap) * 8)) { 132 + if (diff >= x->props.replay_window) { 134 133 x->stats.replay_window++; 135 134 goto err; 136 135 } ··· 301 302 302 303 switch (event) { 303 304 case XFRM_REPLAY_UPDATE: 304 - if (x->replay_maxdiff && 305 - (replay_esn->seq - preplay_esn->seq < x->replay_maxdiff) && 306 - (replay_esn->oseq - preplay_esn->oseq < x->replay_maxdiff)) { 305 + if (!x->replay_maxdiff || 306 + ((replay_esn->seq - preplay_esn->seq < x->replay_maxdiff) && 307 + (replay_esn->oseq - preplay_esn->oseq 308 + < x->replay_maxdiff))) { 307 309 if (x->xflags & XFRM_TIME_DEFER) 308 310 event = XFRM_REPLAY_TIMEOUT; 309 311 else ··· 353 353 354 354 switch (event) { 355 355 case XFRM_REPLAY_UPDATE: 356 - if (!x->replay_maxdiff) 357 - break; 358 - 359 - if (replay_esn->seq_hi == preplay_esn->seq_hi) 360 - seq_diff = replay_esn->seq - preplay_esn->seq; 361 - else 362 - seq_diff = ~preplay_esn->seq + replay_esn->seq + 1; 363 - 364 - if (replay_esn->oseq_hi == preplay_esn->oseq_hi) 365 - oseq_diff = replay_esn->oseq - preplay_esn->oseq; 366 - else 367 - oseq_diff = ~preplay_esn->oseq + replay_esn->oseq + 1; 368 - 369 - if (seq_diff < x->replay_maxdiff && 370 - oseq_diff < x->replay_maxdiff) { 371 - 372 - if (x->xflags & XFRM_TIME_DEFER) 373 - event = XFRM_REPLAY_TIMEOUT; 356 + if (x->replay_maxdiff) { 357 + if (replay_esn->seq_hi == preplay_esn->seq_hi) 358 + seq_diff = replay_esn->seq - preplay_esn->seq;
374 359 else 375 - return; 360 + seq_diff = ~preplay_esn->seq + replay_esn->seq 361 + + 1; 362 + 363 + if (replay_esn->oseq_hi == preplay_esn->oseq_hi) 364 + oseq_diff = replay_esn->oseq 365 + - preplay_esn->oseq; 366 + else 367 + oseq_diff = ~preplay_esn->oseq 368 + + replay_esn->oseq + 1; 369 + 370 + if (seq_diff >= x->replay_maxdiff || 371 + oseq_diff >= x->replay_maxdiff) 372 + break; 376 373 } 374 + 375 + if (x->xflags & XFRM_TIME_DEFER) 376 + event = XFRM_REPLAY_TIMEOUT; 377 + else 378 + return; 377 379 378 380 break; 379 381
+3 -2
net/xfrm/xfrm_user.c
··· 446 446 memcpy(&x->sel, &p->sel, sizeof(x->sel)); 447 447 memcpy(&x->lft, &p->lft, sizeof(x->lft)); 448 448 x->props.mode = p->mode; 449 - x->props.replay_window = p->replay_window; 449 + x->props.replay_window = min_t(unsigned int, p->replay_window, 450 + sizeof(x->replay.bitmap) * 8); 450 451 x->props.reqid = p->reqid; 451 452 x->props.family = p->family; 452 453 memcpy(&x->props.saddr, &p->saddr, sizeof(x->props.saddr)); ··· 1857 1856 if (x->km.state != XFRM_STATE_VALID) 1858 1857 goto out; 1859 1858 1860 - err = xfrm_replay_verify_len(x->replay_esn, rp); 1859 + err = xfrm_replay_verify_len(x->replay_esn, re); 1861 1860 if (err) 1862 1861 goto out; 1863 1862
+1 -3
security/apparmor/apparmorfs.c
··· 580 580 581 581 /* check if the next ns is a sibling, parent, gp, .. */ 582 582 parent = ns->parent; 583 - while (parent) { 583 + while (ns != root) { 584 584 mutex_unlock(&ns->lock); 585 585 next = list_entry_next(ns, base.list); 586 586 if (!list_entry_is_head(next, &parent->sub_ns, base.list)) { 587 587 mutex_lock(&next->lock); 588 588 return next; 589 589 } 590 - if (parent == root) 591 - return NULL; 592 590 ns = parent; 593 591 parent = parent->parent; 594 592 }
+1
security/apparmor/policy.c
··· 610 610 aa_put_dfa(profile->policy.dfa); 611 611 aa_put_replacedby(profile->replacedby); 612 612 613 + kzfree(profile->hash); 613 614 kzfree(profile); 614 615 } 615 616
+3 -6
security/selinux/avc.c
··· 746 746 * @tclass: target security class 747 747 * @requested: requested permissions, interpreted based on @tclass 748 748 * @auditdata: auxiliary audit data 749 - * @flags: VFS walk flags 750 749 * 751 750 * Check the AVC to determine whether the @requested permissions are granted 752 751 * for the SID pair (@ssid, @tsid), interpreting the permissions ··· 755 756 * permissions are granted, -%EACCES if any permissions are denied, or 756 757 * another -errno upon other errors. 757 758 */ 758 - int avc_has_perm_flags(u32 ssid, u32 tsid, u16 tclass, 759 - u32 requested, struct common_audit_data *auditdata, 760 - unsigned flags) 759 + int avc_has_perm(u32 ssid, u32 tsid, u16 tclass, 760 + u32 requested, struct common_audit_data *auditdata) 761 761 { 762 762 struct av_decision avd; 763 763 int rc, rc2; 764 764 765 765 rc = avc_has_perm_noaudit(ssid, tsid, tclass, requested, 0, &avd); 766 766 767 - rc2 = avc_audit(ssid, tsid, tclass, requested, &avd, rc, auditdata, 768 - flags); 767 + rc2 = avc_audit(ssid, tsid, tclass, requested, &avd, rc, auditdata); 769 768 if (rc2) 770 769 return rc2; 771 770 return rc;
+7 -8
security/selinux/hooks.c
··· 1502 1502 1503 1503 rc = avc_has_perm_noaudit(sid, sid, sclass, av, 0, &avd); 1504 1504 if (audit == SECURITY_CAP_AUDIT) { 1505 - int rc2 = avc_audit(sid, sid, sclass, av, &avd, rc, &ad, 0); 1505 + int rc2 = avc_audit(sid, sid, sclass, av, &avd, rc, &ad); 1506 1506 if (rc2) 1507 1507 return rc2; 1508 1508 } ··· 1525 1525 static int inode_has_perm(const struct cred *cred, 1526 1526 struct inode *inode, 1527 1527 u32 perms, 1528 - struct common_audit_data *adp, 1529 - unsigned flags) 1528 + struct common_audit_data *adp) 1530 1529 { 1531 1530 struct inode_security_struct *isec; 1532 1531 u32 sid; ··· 1538 1539 sid = cred_sid(cred); 1539 1540 isec = inode->i_security; 1540 1541 1541 - return avc_has_perm_flags(sid, isec->sid, isec->sclass, perms, adp, flags); 1542 + return avc_has_perm(sid, isec->sid, isec->sclass, perms, adp); 1542 1543 } 1543 1544 1544 1545 /* Same as inode_has_perm, but pass explicit audit data containing ··· 1553 1554 1554 1555 ad.type = LSM_AUDIT_DATA_DENTRY; 1555 1556 ad.u.dentry = dentry; 1556 - return inode_has_perm(cred, inode, av, &ad, 0); 1557 + return inode_has_perm(cred, inode, av, &ad); 1557 1558 } 1558 1559 1559 1560 /* Same as inode_has_perm, but pass explicit audit data containing ··· 1568 1569 1569 1570 ad.type = LSM_AUDIT_DATA_PATH; 1570 1571 ad.u.path = *path; 1571 - return inode_has_perm(cred, inode, av, &ad, 0); 1572 + return inode_has_perm(cred, inode, av, &ad); 1572 1573 } 1573 1574 1574 1575 /* Same as path_has_perm, but uses the inode from the file struct. */ ··· 1580 1581 1581 1582 ad.type = LSM_AUDIT_DATA_PATH; 1582 1583 ad.u.path = file->f_path; 1583 - return inode_has_perm(cred, file_inode(file), av, &ad, 0); 1584 + return inode_has_perm(cred, file_inode(file), av, &ad); 1584 1585 } 1585 1586 1586 1587 /* Check whether a task can use an open file descriptor to ··· 1616 1617 /* av is zero if only checking access to the descriptor. */
1617 1618 rc = 0; 1618 1619 if (av) 1619 - rc = inode_has_perm(cred, inode, av, &ad, 0); 1620 + rc = inode_has_perm(cred, inode, av, &ad); 1620 1621 1621 1622 out: 1622 1623 return rc;
+5 -13
security/selinux/include/avc.h
··· 130 130 u16 tclass, u32 requested, 131 131 struct av_decision *avd, 132 132 int result, 133 - struct common_audit_data *a, unsigned flags) 133 + struct common_audit_data *a) 134 134 { 135 135 u32 audited, denied; 136 136 audited = avc_audit_required(requested, avd, result, 0, &denied); ··· 138 138 return 0; 139 139 return slow_avc_audit(ssid, tsid, tclass, 140 140 requested, audited, denied, 141 - a, flags); 141 + a, 0); 142 142 } 143 143 144 144 #define AVC_STRICT 1 /* Ignore permissive mode. */ ··· 147 147 unsigned flags, 148 148 struct av_decision *avd); 149 149 150 - int avc_has_perm_flags(u32 ssid, u32 tsid, 151 - u16 tclass, u32 requested, 152 - struct common_audit_data *auditdata, 153 - unsigned); 154 - 155 - static inline int avc_has_perm(u32 ssid, u32 tsid, 156 - u16 tclass, u32 requested, 157 - struct common_audit_data *auditdata) 158 - { 159 - return avc_has_perm_flags(ssid, tsid, tclass, requested, auditdata, 0); 160 - } 150 + int avc_has_perm(u32 ssid, u32 tsid, 151 + u16 tclass, u32 requested, 152 + struct common_audit_data *auditdata); 161 153 162 154 u32 avc_policy_seqno(void); 163 155
+1
sound/pci/ac97/ac97_codec.c
··· 175 175 { 0x54524106, 0xffffffff, "TR28026", NULL, NULL }, 176 176 { 0x54524108, 0xffffffff, "TR28028", patch_tritech_tr28028, NULL }, // added by xin jin [07/09/99] 177 177 { 0x54524123, 0xffffffff, "TR28602", NULL, NULL }, // only guess --jk [TR28023 = eMicro EM28023 (new CT1297)] 178 + { 0x54584e03, 0xffffffff, "TLV320AIC27", NULL, NULL }, 178 179 { 0x54584e20, 0xffffffff, "TLC320AD9xC", NULL, NULL }, 179 180 { 0x56494161, 0xffffffff, "VIA1612A", NULL, NULL }, // modified ICE1232 with S/PDIF 180 181 { 0x56494170, 0xffffffff, "VIA1617A", patch_vt1617a, NULL }, // modified VT1616 with S/PDIF
+1 -1
sound/pci/hda/hda_generic.c
··· 3531 3531 if (!multi) 3532 3532 err = create_single_cap_vol_ctl(codec, n, vol, sw, 3533 3533 inv_dmic); 3534 - else if (!multi_cap_vol) 3534 + else if (!multi_cap_vol && !inv_dmic) 3535 3535 err = create_bind_cap_vol_ctl(codec, n, vol, sw); 3536 3536 else 3537 3537 err = create_multi_cap_vol_ctl(codec);
+11
sound/pci/hda/patch_conexant.c
··· 3231 3231 CXT_FIXUP_INC_MIC_BOOST, 3232 3232 CXT_FIXUP_HEADPHONE_MIC_PIN, 3233 3233 CXT_FIXUP_HEADPHONE_MIC, 3234 + CXT_FIXUP_GPIO1, 3234 3235 }; 3235 3236 3236 3237 static void cxt_fixup_stereo_dmic(struct hda_codec *codec, ··· 3376 3375 .type = HDA_FIXUP_FUNC, 3377 3376 .v.func = cxt_fixup_headphone_mic, 3378 3377 }, 3378 + [CXT_FIXUP_GPIO1] = { 3379 + .type = HDA_FIXUP_VERBS, 3380 + .v.verbs = (const struct hda_verb[]) { 3381 + { 0x01, AC_VERB_SET_GPIO_MASK, 0x01 }, 3382 + { 0x01, AC_VERB_SET_GPIO_DIRECTION, 0x01 }, 3383 + { 0x01, AC_VERB_SET_GPIO_DATA, 0x01 }, 3384 + { } 3385 + }, 3386 + }, 3379 3387 }; 3380 3388 3381 3389 static const struct snd_pci_quirk cxt5051_fixups[] = { ··· 3394 3384 3395 3385 static const struct snd_pci_quirk cxt5066_fixups[] = { 3396 3386 SND_PCI_QUIRK(0x1025, 0x0543, "Acer Aspire One 522", CXT_FIXUP_STEREO_DMIC), 3387 + SND_PCI_QUIRK(0x1025, 0x054c, "Acer Aspire 3830TG", CXT_FIXUP_GPIO1), 3397 3388 SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN), 3398 3389 SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410), 3399 3390 SND_PCI_QUIRK(0x17aa, 0x215e, "Lenovo T410", CXT_PINCFG_LENOVO_TP410),
+8 -10
sound/pci/hda/patch_hdmi.c
··· 937 937 } 938 938 939 939 /* 940 + * always configure channel mapping, it may have been changed by the 941 + * user in the meantime 942 + */ 943 + hdmi_setup_channel_mapping(codec, pin_nid, non_pcm, ca, 944 + channels, per_pin->chmap, 945 + per_pin->chmap_set); 946 + 947 + /* 940 948 * sizeof(ai) is used instead of sizeof(*hdmi_ai) or 941 949 * sizeof(*dp_ai) to avoid partial match/update problems when 942 950 * the user switches between HDMI/DP monitors. ··· 955 947 "pin=%d channels=%d\n", 956 948 pin_nid, 957 949 channels); 958 - hdmi_setup_channel_mapping(codec, pin_nid, non_pcm, ca, 959 - channels, per_pin->chmap, 960 - per_pin->chmap_set); 961 950 hdmi_stop_infoframe_trans(codec, pin_nid); 962 951 hdmi_fill_audio_infoframe(codec, pin_nid, 963 952 ai.bytes, sizeof(ai)); 964 953 hdmi_start_infoframe_trans(codec, pin_nid); 965 - } else { 966 - /* For non-pcm audio switch, setup new channel mapping 967 - * accordingly */ 968 - if (per_pin->non_pcm != non_pcm) 969 - hdmi_setup_channel_mapping(codec, pin_nid, non_pcm, ca, 970 - channels, per_pin->chmap, 971 - per_pin->chmap_set); 972 954 } 973 955 974 956 per_pin->non_pcm = non_pcm;
+54
sound/pci/hda/patch_realtek.c
··· 2819 2819 alc_write_coef_idx(codec, 0x1e, coef | 0x80); 2820 2820 } 2821 2821 2822 + static void alc269_fixup_headset_mic(struct hda_codec *codec, 2823 + const struct hda_fixup *fix, int action) 2824 + { 2825 + struct alc_spec *spec = codec->spec; 2826 + 2827 + if (action == HDA_FIXUP_ACT_PRE_PROBE) 2828 + spec->parse_flags |= HDA_PINCFG_HEADSET_MIC; 2829 + } 2830 + 2822 2831 static void alc271_fixup_dmic(struct hda_codec *codec, 2823 2832 const struct hda_fixup *fix, int action) 2824 2833 { ··· 3505 3496 } 3506 3497 } 3507 3498 3499 + static void alc290_fixup_mono_speakers(struct hda_codec *codec, 3500 + const struct hda_fixup *fix, int action) 3501 + { 3502 + if (action == HDA_FIXUP_ACT_PRE_PROBE) 3503 + /* Remove DAC node 0x03, as it seems to be 3504 + giving mono output */ 3505 + snd_hda_override_wcaps(codec, 0x03, 0); 3506 + } 3507 + 3508 3508 enum { 3509 3509 ALC269_FIXUP_SONY_VAIO, 3510 3510 ALC275_FIXUP_SONY_VAIO_GPIO2, ··· 3525 3507 ALC271_FIXUP_DMIC, 3526 3508 ALC269_FIXUP_PCM_44K, 3527 3509 ALC269_FIXUP_STEREO_DMIC, 3510 + ALC269_FIXUP_HEADSET_MIC, 3528 3511 ALC269_FIXUP_QUANTA_MUTE, 3529 3512 ALC269_FIXUP_LIFEBOOK, 3530 3513 ALC269_FIXUP_AMIC, ··· 3538 3519 ALC269_FIXUP_HP_GPIO_LED, 3539 3520 ALC269_FIXUP_INV_DMIC, 3540 3521 ALC269_FIXUP_LENOVO_DOCK, 3522 + ALC286_FIXUP_SONY_MIC_NO_PRESENCE, 3541 3523 ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT, 3542 3524 ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, 3543 3525 ALC269_FIXUP_DELL2_MIC_NO_PRESENCE, 3526 + ALC269_FIXUP_DELL3_MIC_NO_PRESENCE, 3544 3527 ALC269_FIXUP_HEADSET_MODE, 3545 3528 ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC, 3546 3529 ALC269_FIXUP_ASUS_X101_FUNC, ··· 3556 3535 ALC283_FIXUP_CHROME_BOOK, 3557 3536 ALC282_FIXUP_ASUS_TX300, 3558 3537 ALC283_FIXUP_INT_MIC, 3538 + ALC290_FIXUP_MONO_SPEAKERS, 3559 3539 }; 3560 3540 3561 3541 static const struct hda_fixup alc269_fixups[] = { ··· 3624 3602 [ALC269_FIXUP_STEREO_DMIC] = { 3625 3603 .type = HDA_FIXUP_FUNC, 3626 3604 .v.func = alc269_fixup_stereo_dmic, 3605 + }, 3606 
+ [ALC269_FIXUP_HEADSET_MIC] = { 3607 + .type = HDA_FIXUP_FUNC, 3608 + .v.func = alc269_fixup_headset_mic, 3627 3609 }, 3628 3610 [ALC269_FIXUP_QUANTA_MUTE] = { 3629 3611 .type = HDA_FIXUP_FUNC, ··· 3738 3712 .chained = true, 3739 3713 .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC 3740 3714 }, 3715 + [ALC269_FIXUP_DELL3_MIC_NO_PRESENCE] = { 3716 + .type = HDA_FIXUP_PINS, 3717 + .v.pins = (const struct hda_pintbl[]) { 3718 + { 0x1a, 0x01a1913c }, /* use as headset mic, without its own jack detect */ 3719 + { } 3720 + }, 3721 + .chained = true, 3722 + .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC 3723 + }, 3741 3724 [ALC269_FIXUP_HEADSET_MODE] = { 3742 3725 .type = HDA_FIXUP_FUNC, 3743 3726 .v.func = alc_fixup_headset_mode, ··· 3754 3719 [ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC] = { 3755 3720 .type = HDA_FIXUP_FUNC, 3756 3721 .v.func = alc_fixup_headset_mode_no_hp_mic, 3722 + }, 3723 + [ALC286_FIXUP_SONY_MIC_NO_PRESENCE] = { 3724 + .type = HDA_FIXUP_PINS, 3725 + .v.pins = (const struct hda_pintbl[]) { 3726 + { 0x18, 0x01a1913c }, /* use as headset mic, without its own jack detect */ 3727 + { } 3728 + }, 3729 + .chained = true, 3730 + .chain_id = ALC269_FIXUP_HEADSET_MIC 3757 3731 }, 3758 3732 [ALC269_FIXUP_ASUS_X101_FUNC] = { 3759 3733 .type = HDA_FIXUP_FUNC, ··· 3848 3804 .chained = true, 3849 3805 .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST 3850 3806 }, 3807 + [ALC290_FIXUP_MONO_SPEAKERS] = { 3808 + .type = HDA_FIXUP_FUNC, 3809 + .v.func = alc290_fixup_mono_speakers, 3810 + .chained = true, 3811 + .chain_id = ALC269_FIXUP_DELL3_MIC_NO_PRESENCE, 3812 + }, 3851 3813 }; 3852 3814 3853 3815 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 3895 3845 SND_PCI_QUIRK(0x1028, 0x0608, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 3896 3846 SND_PCI_QUIRK(0x1028, 0x0609, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 3897 3847 SND_PCI_QUIRK(0x1028, 0x0613, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 3848 + SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", 
ALC290_FIXUP_MONO_SPEAKERS), 3898 3849 SND_PCI_QUIRK(0x1028, 0x15cc, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE), 3899 3850 SND_PCI_QUIRK(0x1028, 0x15cd, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE), 3900 3851 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2), ··· 3918 3867 SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC), 3919 3868 SND_PCI_QUIRK(0x1043, 0x83ce, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC), 3920 3869 SND_PCI_QUIRK(0x1043, 0x8516, "ASUS X101CH", ALC269_FIXUP_ASUS_X101), 3870 + SND_PCI_QUIRK(0x104d, 0x90b6, "Sony VAIO Pro 13", ALC286_FIXUP_SONY_MIC_NO_PRESENCE), 3921 3871 SND_PCI_QUIRK(0x104d, 0x9073, "Sony VAIO", ALC275_FIXUP_SONY_VAIO_GPIO2), 3922 3872 SND_PCI_QUIRK(0x104d, 0x907b, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ), 3923 3873 SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ), ··· 4004 3952 {.id = ALC269_FIXUP_STEREO_DMIC, .name = "alc269-dmic"}, 4005 3953 {.id = ALC271_FIXUP_DMIC, .name = "alc271-dmic"}, 4006 3954 {.id = ALC269_FIXUP_INV_DMIC, .name = "inv-dmic"}, 3955 + {.id = ALC269_FIXUP_HEADSET_MIC, .name = "headset-mic"}, 4007 3956 {.id = ALC269_FIXUP_LENOVO_DOCK, .name = "lenovo-dock"}, 4008 3957 {.id = ALC269_FIXUP_HP_GPIO_LED, .name = "hp-gpio-led"}, 4009 3958 {.id = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, .name = "dell-headset-multi"}, ··· 4622 4569 SND_PCI_QUIRK(0x1028, 0x05d8, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 4623 4570 SND_PCI_QUIRK(0x1028, 0x05db, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 4624 4571 SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800), 4572 + SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_ASUS_MODE4), 4625 4573 SND_PCI_QUIRK(0x1043, 0x8469, "ASUS mobo", ALC662_FIXUP_NO_JACK_DETECT), 4626 4574 SND_PCI_QUIRK(0x105b, 0x0cd6, "Foxconn", ALC662_FIXUP_ASUS_MODE2), 4627 4575 SND_PCI_QUIRK(0x144d, 0xc051, "Samsung R720", ALC662_FIXUP_IDEAPAD),
+1
sound/pci/rme9652/hdsp.c
··· 4845 4845 if ((err = hdsp_get_iobox_version(hdsp)) < 0) 4846 4846 return err; 4847 4847 } 4848 + memset(&hdsp_version, 0, sizeof(hdsp_version)); 4848 4849 hdsp_version.io_type = hdsp->io_type; 4849 4850 hdsp_version.firmware_rev = hdsp->firmware_rev; 4850 4851 if ((err = copy_to_user(argp, &hdsp_version, sizeof(hdsp_version))))
+1
sound/soc/blackfin/bf6xx-i2s.c
··· 88 88 case SNDRV_PCM_FORMAT_S8: 89 89 param.spctl |= 0x70; 90 90 sport->wdsize = 1; 91 + break; 91 92 case SNDRV_PCM_FORMAT_S16_LE: 92 93 param.spctl |= 0xf0; 93 94 sport->wdsize = 2;
+3
sound/soc/codecs/88pm860x-codec.c
··· 349 349 val = ucontrol->value.integer.value[0]; 350 350 val2 = ucontrol->value.integer.value[1]; 351 351 352 + if (val >= ARRAY_SIZE(st_table) || val2 >= ARRAY_SIZE(st_table)) 353 + return -EINVAL; 354 + 352 355 err = snd_soc_update_bits(codec, reg, 0x3f, st_table[val].m); 353 356 if (err < 0) 354 357 return err;
+6 -1
sound/soc/codecs/ab8500-codec.c
··· 1225 1225 struct ab8500_codec_drvdata *drvdata = dev_get_drvdata(codec->dev); 1226 1226 struct device *dev = codec->dev; 1227 1227 bool apply_fir, apply_iir; 1228 - int req, status; 1228 + unsigned int req; 1229 + int status; 1229 1230 1230 1231 dev_dbg(dev, "%s: Enter.\n", __func__); 1231 1232 1232 1233 mutex_lock(&drvdata->anc_lock); 1233 1234 1234 1235 req = ucontrol->value.integer.value[0]; 1236 + if (req >= ARRAY_SIZE(enum_anc_state)) { 1237 + status = -EINVAL; 1238 + goto cleanup; 1239 + } 1235 1240 if (req != ANC_APPLY_FIR_IIR && req != ANC_APPLY_FIR && 1236 1241 req != ANC_APPLY_IIR) { 1237 1242 dev_err(dev, "%s: ERROR: Unsupported status to set '%s'!\n",
+2 -2
sound/soc/codecs/max98095.c
··· 1863 1863 struct max98095_pdata *pdata = max98095->pdata; 1864 1864 int channel = max98095_get_eq_channel(kcontrol->id.name); 1865 1865 struct max98095_cdata *cdata; 1866 - int sel = ucontrol->value.integer.value[0]; 1866 + unsigned int sel = ucontrol->value.integer.value[0]; 1867 1867 struct max98095_eq_cfg *coef_set; 1868 1868 int fs, best, best_val, i; 1869 1869 int regmask, regsave; ··· 2016 2016 struct max98095_pdata *pdata = max98095->pdata; 2017 2017 int channel = max98095_get_bq_channel(codec, kcontrol->id.name); 2018 2018 struct max98095_cdata *cdata; 2019 - int sel = ucontrol->value.integer.value[0]; 2019 + unsigned int sel = ucontrol->value.integer.value[0]; 2020 2020 struct max98095_biquad_cfg *coef_set; 2021 2021 int fs, best, best_val, i; 2022 2022 int regmask, regsave;
+1 -1
sound/soc/codecs/pcm1681.c
··· 270 270 static const struct regmap_config pcm1681_regmap = { 271 271 .reg_bits = 8, 272 272 .val_bits = 8, 273 - .max_register = ARRAY_SIZE(pcm1681_reg_defaults) + 1, 273 + .max_register = 0x13, 274 274 .reg_defaults = pcm1681_reg_defaults, 275 275 .num_reg_defaults = ARRAY_SIZE(pcm1681_reg_defaults), 276 276 .writeable_reg = pcm1681_writeable_reg,
+1 -1
sound/soc/codecs/pcm1792a.c
··· 188 188 static const struct regmap_config pcm1792a_regmap = { 189 189 .reg_bits = 8, 190 190 .val_bits = 8, 191 - .max_register = 24, 191 + .max_register = 23, 192 192 .reg_defaults = pcm1792a_reg_defaults, 193 193 .num_reg_defaults = ARRAY_SIZE(pcm1792a_reg_defaults), 194 194 .writeable_reg = pcm1792a_writeable_reg,
+4
sound/soc/codecs/tlv320aic3x.c
··· 674 674 /* Left Input */ 675 675 {"Left Line1L Mux", "single-ended", "LINE1L"}, 676 676 {"Left Line1L Mux", "differential", "LINE1L"}, 677 + {"Left Line1R Mux", "single-ended", "LINE1R"}, 678 + {"Left Line1R Mux", "differential", "LINE1R"}, 677 679 678 680 {"Left Line2L Mux", "single-ended", "LINE2L"}, 679 681 {"Left Line2L Mux", "differential", "LINE2L"}, ··· 692 690 /* Right Input */ 693 691 {"Right Line1R Mux", "single-ended", "LINE1R"}, 694 692 {"Right Line1R Mux", "differential", "LINE1R"}, 693 + {"Right Line1L Mux", "single-ended", "LINE1L"}, 694 + {"Right Line1L Mux", "differential", "LINE1L"}, 695 695 696 696 {"Right Line2R Mux", "single-ended", "LINE2R"}, 697 697 {"Right Line2R Mux", "differential", "LINE2R"},
+1 -1
sound/soc/fsl/fsl_ssi.c
··· 936 936 ssi_private->ssi_phys = res.start; 937 937 938 938 ssi_private->irq = irq_of_parse_and_map(np, 0); 939 - if (ssi_private->irq == NO_IRQ) { 939 + if (ssi_private->irq == 0) { 940 940 dev_err(&pdev->dev, "no irq for node %s\n", np->full_name); 941 941 return -ENXIO; 942 942 }
+1 -1
sound/soc/fsl/imx-mc13783.c
··· 112 112 return ret; 113 113 } 114 114 115 - if (machine_is_mx31_3ds()) { 115 + if (machine_is_mx31_3ds() || machine_is_mx31moboard()) { 116 116 imx_audmux_v2_configure_port(MX31_AUDMUX_PORT4_SSI_PINS_4, 117 117 IMX_AUDMUX_V2_PTCR_SYN, 118 118 IMX_AUDMUX_V2_PDCR_RXDSEL(MX31_AUDMUX_PORT1_SSI0) |
+5 -2
sound/soc/fsl/imx-sgtl5000.c
··· 62 62 struct device_node *ssi_np, *codec_np; 63 63 struct platform_device *ssi_pdev; 64 64 struct i2c_client *codec_dev; 65 - struct imx_sgtl5000_data *data; 65 + struct imx_sgtl5000_data *data = NULL; 66 66 int int_port, ext_port; 67 67 int ret; 68 68 ··· 128 128 goto fail; 129 129 } 130 130 131 - data->codec_clk = devm_clk_get(&codec_dev->dev, NULL); 131 + data->codec_clk = clk_get(&codec_dev->dev, NULL); 132 132 if (IS_ERR(data->codec_clk)) { 133 133 ret = PTR_ERR(data->codec_clk); 134 134 goto fail; ··· 172 172 return 0; 173 173 174 174 fail: 175 + if (data && !IS_ERR(data->codec_clk)) 176 + clk_put(data->codec_clk); 175 177 if (ssi_np) 176 178 of_node_put(ssi_np); 177 179 if (codec_np) ··· 187 185 struct imx_sgtl5000_data *data = platform_get_drvdata(pdev); 188 186 189 187 snd_soc_unregister_card(&data->card); 188 + clk_put(data->codec_clk); 190 189 191 190 return 0; 192 191 }
+12 -11
sound/soc/fsl/imx-ssi.c
··· 600 600 ssi->fiq_params.dma_params_rx = &ssi->dma_params_rx; 601 601 ssi->fiq_params.dma_params_tx = &ssi->dma_params_tx; 602 602 603 - ret = imx_pcm_fiq_init(pdev, &ssi->fiq_params); 604 - if (ret) 605 - goto failed_pcm_fiq; 603 + ssi->fiq_init = imx_pcm_fiq_init(pdev, &ssi->fiq_params); 604 + ssi->dma_init = imx_pcm_dma_init(pdev); 606 605 607 - ret = imx_pcm_dma_init(pdev); 608 - if (ret) 609 - goto failed_pcm_dma; 606 + if (ssi->fiq_init && ssi->dma_init) { 607 + ret = ssi->fiq_init; 608 + goto failed_pcm; 609 + } 610 610 611 611 return 0; 612 612 613 - failed_pcm_dma: 614 - imx_pcm_fiq_exit(pdev); 615 - failed_pcm_fiq: 613 + failed_pcm: 616 614 snd_soc_unregister_component(&pdev->dev); 617 615 failed_register: 618 616 release_mem_region(res->start, resource_size(res)); ··· 626 628 struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 627 629 struct imx_ssi *ssi = platform_get_drvdata(pdev); 628 630 629 - imx_pcm_dma_exit(pdev); 630 - imx_pcm_fiq_exit(pdev); 631 + if (!ssi->dma_init) 632 + imx_pcm_dma_exit(pdev); 633 + 634 + if (!ssi->fiq_init) 635 + imx_pcm_fiq_exit(pdev); 631 636 632 637 snd_soc_unregister_component(&pdev->dev); 633 638
+2
sound/soc/fsl/imx-ssi.h
··· 211 211 struct imx_dma_data filter_data_rx; 212 212 struct imx_pcm_fiq_params fiq_params; 213 213 214 + int fiq_init; 215 + int dma_init; 214 216 int enabled; 215 217 }; 216 218
+2 -2
sound/soc/omap/Kconfig
··· 1 1 config SND_OMAP_SOC 2 2 tristate "SoC Audio for the Texas Instruments OMAP chips" 3 - depends on (ARCH_OMAP && DMA_OMAP) || (ARCH_ARM && COMPILE_TEST) 3 + depends on (ARCH_OMAP && DMA_OMAP) || (ARM && COMPILE_TEST) 4 4 select SND_DMAENGINE_PCM 5 5 6 6 config SND_OMAP_SOC_DMIC ··· 26 26 27 27 config SND_OMAP_SOC_RX51 28 28 tristate "SoC Audio support for Nokia RX-51" 29 - depends on SND_OMAP_SOC && ARCH_ARM && (MACH_NOKIA_RX51 || COMPILE_TEST) 29 + depends on SND_OMAP_SOC && ARM && (MACH_NOKIA_RX51 || COMPILE_TEST) 30 30 select SND_OMAP_SOC_MCBSP 31 31 select SND_SOC_TLV320AIC3X 32 32 select SND_SOC_TPA6130A2
+2 -2
sound/soc/sh/rcar/rsnd.h
··· 220 220 void __iomem *rsnd_gen_reg_get(struct rsnd_priv *priv, 221 221 struct rsnd_mod *mod, 222 222 enum rsnd_reg reg); 223 - #define rsnd_is_gen1(s) ((s)->info->flags & RSND_GEN1) 224 - #define rsnd_is_gen2(s) ((s)->info->flags & RSND_GEN2) 223 + #define rsnd_is_gen1(s) (((s)->info->flags & RSND_GEN_MASK) == RSND_GEN1) 224 + #define rsnd_is_gen2(s) (((s)->info->flags & RSND_GEN_MASK) == RSND_GEN2) 225 225 226 226 /* 227 227 * R-Car ADG
-1
sound/soc/soc-core.c
··· 1380 1380 return -ENODEV; 1381 1381 1382 1382 list_add(&cpu_dai->dapm.list, &card->dapm_list); 1383 - snd_soc_dapm_new_dai_widgets(&cpu_dai->dapm, cpu_dai); 1384 1383 } 1385 1384 1386 1385 if (cpu_dai->driver->probe) {
+3 -1
sound/usb/usx2y/us122l.c
··· 262 262 } 263 263 264 264 area->vm_ops = &usb_stream_hwdep_vm_ops; 265 - area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP; 265 + area->vm_flags |= VM_DONTDUMP; 266 + if (!read) 267 + area->vm_flags |= VM_DONTEXPAND; 266 268 area->vm_private_data = us122l; 267 269 atomic_inc(&us122l->mmap_count); 268 270 out:
+3 -19
sound/usb/usx2y/usbusx2yaudio.c
··· 299 299 usX2Y_clients_stop(usX2Y); 300 300 } 301 301 302 - static void usX2Y_error_sequence(struct usX2Ydev *usX2Y, 303 - struct snd_usX2Y_substream *subs, struct urb *urb) 304 - { 305 - snd_printk(KERN_ERR 306 - "Sequence Error!(hcd_frame=%i ep=%i%s;wait=%i,frame=%i).\n" 307 - "Most probably some urb of usb-frame %i is still missing.\n" 308 - "Cause could be too long delays in usb-hcd interrupt handling.\n", 309 - usb_get_current_frame_number(usX2Y->dev), 310 - subs->endpoint, usb_pipein(urb->pipe) ? "in" : "out", 311 - usX2Y->wait_iso_frame, urb->start_frame, usX2Y->wait_iso_frame); 312 - usX2Y_clients_stop(usX2Y); 313 - } 314 - 315 302 static void i_usX2Y_urb_complete(struct urb *urb) 316 303 { 317 304 struct snd_usX2Y_substream *subs = urb->context; ··· 315 328 usX2Y_error_urb_status(usX2Y, subs, urb); 316 329 return; 317 330 } 318 - if (likely((urb->start_frame & 0xFFFF) == (usX2Y->wait_iso_frame & 0xFFFF))) 319 - subs->completed_urb = urb; 320 - else { 321 - usX2Y_error_sequence(usX2Y, subs, urb); 322 - return; 323 - } 331 + 332 + subs->completed_urb = urb; 333 + 324 334 { 325 335 struct snd_usX2Y_substream *capsubs = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE], 326 336 *playbacksubs = usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+1 -6
sound/usb/usx2y/usx2yhwdeppcm.c
··· 244 244 usX2Y_error_urb_status(usX2Y, subs, urb); 245 245 return; 246 246 } 247 - if (likely((urb->start_frame & 0xFFFF) == (usX2Y->wait_iso_frame & 0xFFFF))) 248 - subs->completed_urb = urb; 249 - else { 250 - usX2Y_error_sequence(usX2Y, subs, urb); 251 - return; 252 - } 253 247 248 + subs->completed_urb = urb; 254 249 capsubs = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE]; 255 250 capsubs2 = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE + 2]; 256 251 playbacksubs = usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+1
tools/perf/Makefile
··· 770 770 install-bin: all 771 771 $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(bindir_SQ)' 772 772 $(INSTALL) $(OUTPUT)perf '$(DESTDIR_SQ)$(bindir_SQ)' 773 + $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)' 773 774 $(INSTALL) $(OUTPUT)perf-archive -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)' 774 775 ifndef NO_LIBPERL 775 776 $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/Perf-Trace-Util/lib/Perf/Trace'
+1
tools/perf/builtin-stat.c
··· 457 457 perror("failed to prepare workload"); 458 458 return -1; 459 459 } 460 + child_pid = evsel_list->workload.pid; 460 461 } 461 462 462 463 if (group)
+1 -1
tools/perf/config/feature-tests.mak
··· 219 219 220 220 int main(void) 221 221 { 222 - printf(\"error message: %s\n\", audit_errno_to_name(0)); 222 + printf(\"error message: %s\", audit_errno_to_name(0)); 223 223 return audit_open(); 224 224 } 225 225 endef
+22 -5
tools/perf/util/dwarf-aux.c
··· 426 426 * @die_mem: a buffer for result DIE 427 427 * 428 428 * Search a non-inlined function DIE which includes @addr. Stores the 429 - * DIE to @die_mem and returns it if found. Returns NULl if failed. 429 + * DIE to @die_mem and returns it if found. Returns NULL if failed. 430 430 */ 431 431 Dwarf_Die *die_find_realfunc(Dwarf_Die *cu_die, Dwarf_Addr addr, 432 432 Dwarf_Die *die_mem) ··· 454 454 } 455 455 456 456 /** 457 - * die_find_inlinefunc - Search an inlined function at given address 458 - * @cu_die: a CU DIE which including @addr 457 + * die_find_top_inlinefunc - Search the top inlined function at given address 458 + * @sp_die: a subprogram DIE which including @addr 459 459 * @addr: target address 460 460 * @die_mem: a buffer for result DIE 461 461 * 462 462 * Search an inlined function DIE which includes @addr. Stores the 463 - * DIE to @die_mem and returns it if found. Returns NULl if failed. 463 + * DIE to @die_mem and returns it if found. Returns NULL if failed. 464 + * Even if several inlined functions are expanded recursively, this 465 + * doesn't trace it down, and returns the topmost one. 466 + */ 467 + Dwarf_Die *die_find_top_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr, 468 + Dwarf_Die *die_mem) 469 + { 470 + return die_find_child(sp_die, __die_find_inline_cb, &addr, die_mem); 471 + } 472 + 473 + /** 474 + * die_find_inlinefunc - Search an inlined function at given address 475 + * @sp_die: a subprogram DIE which including @addr 476 + * @addr: target address 477 + * @die_mem: a buffer for result DIE 478 + * 479 + * Search an inlined function DIE which includes @addr. Stores the 480 + * DIE to @die_mem and returns it if found. Returns NULL if failed. 464 481 * If several inlined functions are expanded recursively, this trace 465 - * it and returns deepest one. 482 + * it down and returns deepest one. 466 483 */ 467 484 Dwarf_Die *die_find_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr, 468 485 Dwarf_Die *die_mem)
+5 -1
tools/perf/util/dwarf-aux.h
··· 79 79 extern Dwarf_Die *die_find_realfunc(Dwarf_Die *cu_die, Dwarf_Addr addr, 80 80 Dwarf_Die *die_mem); 81 81 82 - /* Search an inlined function including given address */ 82 + /* Search the top inlined function including given address */ 83 + extern Dwarf_Die *die_find_top_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr, 84 + Dwarf_Die *die_mem); 85 + 86 + /* Search the deepest inlined function including given address */ 83 87 extern Dwarf_Die *die_find_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr, 84 88 Dwarf_Die *die_mem); 85 89
+12
tools/perf/util/header.c
··· 2768 2768 if (perf_file_header__read(&f_header, header, fd) < 0) 2769 2769 return -EINVAL; 2770 2770 2771 + /* 2772 + * Sanity check that perf.data was written cleanly; data size is 2773 + * initialized to 0 and updated only if the on_exit function is run. 2774 + * If data size is still 0 then the file contains only partial 2775 + * information. Just warn user and process it as much as it can. 2776 + */ 2777 + if (f_header.data.size == 0) { 2778 + pr_warning("WARNING: The %s file's data size field is 0 which is unexpected.\n" 2779 + "Was the 'perf record' command properly terminated?\n", 2780 + session->filename); 2781 + } 2782 + 2771 2783 nr_attrs = f_header.attrs.size / f_header.attr_size; 2772 2784 lseek(fd, f_header.attrs.offset, SEEK_SET); 2773 2785
+33 -16
tools/perf/util/probe-finder.c
··· 1327 1327 struct perf_probe_point *ppt) 1328 1328 { 1329 1329 Dwarf_Die cudie, spdie, indie; 1330 - Dwarf_Addr _addr, baseaddr; 1331 - const char *fname = NULL, *func = NULL, *tmp; 1330 + Dwarf_Addr _addr = 0, baseaddr = 0; 1331 + const char *fname = NULL, *func = NULL, *basefunc = NULL, *tmp; 1332 1332 int baseline = 0, lineno = 0, ret = 0; 1333 1333 1334 1334 /* Adjust address with bias */ ··· 1349 1349 /* Find a corresponding function (name, baseline and baseaddr) */ 1350 1350 if (die_find_realfunc(&cudie, (Dwarf_Addr)addr, &spdie)) { 1351 1351 /* Get function entry information */ 1352 - tmp = dwarf_diename(&spdie); 1353 - if (!tmp || 1352 + func = basefunc = dwarf_diename(&spdie); 1353 + if (!func || 1354 1354 dwarf_entrypc(&spdie, &baseaddr) != 0 || 1355 - dwarf_decl_line(&spdie, &baseline) != 0) 1355 + dwarf_decl_line(&spdie, &baseline) != 0) { 1356 + lineno = 0; 1356 1357 goto post; 1357 - func = tmp; 1358 + } 1358 1359 1359 - if (addr == (unsigned long)baseaddr) 1360 + if (addr == (unsigned long)baseaddr) { 1360 1361 /* Function entry - Relative line number is 0 */ 1361 1362 lineno = baseline; 1362 - else if (die_find_inlinefunc(&spdie, (Dwarf_Addr)addr, 1363 - &indie)) { 1363 + fname = dwarf_decl_file(&spdie); 1364 + goto post; 1365 + } 1366 + 1367 + /* Track down the inline functions step by step */ 1368 + while (die_find_top_inlinefunc(&spdie, (Dwarf_Addr)addr, 1369 + &indie)) { 1370 + /* There is an inline function */ 1364 1371 if (dwarf_entrypc(&indie, &_addr) == 0 && 1365 - _addr == addr) 1372 + _addr == addr) { 1366 1373 /* 1367 1374 * addr is at an inline function entry. 1368 1375 * In this case, lineno should be the call-site 1369 - * line number. 1376 + * line number. (overwrite lineinfo) 1370 1377 */ 1371 1378 lineno = die_get_call_lineno(&indie); 1372 - else { 1379 + fname = die_get_call_file(&indie); 1380 + break; 1381 + } else { 1373 1382 /* 1374 1383 * addr is in an inline function body. 
1375 1384 * Since lineno points one of the lines ··· 1386 1377 * be the entry line of the inline function. 1387 1378 */ 1388 1379 tmp = dwarf_diename(&indie); 1389 - if (tmp && 1390 - dwarf_decl_line(&spdie, &baseline) == 0) 1391 - func = tmp; 1380 + if (!tmp || 1381 + dwarf_decl_line(&indie, &baseline) != 0) 1382 + break; 1383 + func = tmp; 1384 + spdie = indie; 1392 1385 } 1393 1386 } 1387 + /* Verify the lineno and baseline are in a same file */ 1388 + tmp = dwarf_decl_file(&spdie); 1389 + if (!tmp || strcmp(tmp, fname) != 0) 1390 + lineno = 0; 1394 1391 } 1395 1392 1396 1393 post: 1397 1394 /* Make a relative line number or an offset */ 1398 1395 if (lineno) 1399 1396 ppt->line = lineno - baseline; 1400 - else if (func) 1397 + else if (basefunc) { 1401 1398 ppt->offset = addr - (unsigned long)baseaddr; 1399 + func = basefunc; 1400 + } 1402 1401 1403 1402 /* Duplicate strings */ 1404 1403 if (func) {
+3 -1
tools/perf/util/session.c
··· 256 256 tool->sample = process_event_sample_stub; 257 257 if (tool->mmap == NULL) 258 258 tool->mmap = process_event_stub; 259 + if (tool->mmap2 == NULL) 260 + tool->mmap2 = process_event_stub; 259 261 if (tool->comm == NULL) 260 262 tool->comm = process_event_stub; 261 263 if (tool->fork == NULL) ··· 1312 1310 file_offset = page_offset; 1313 1311 head = data_offset - page_offset; 1314 1312 1315 - if (data_offset + data_size < file_size) 1313 + if (data_size && (data_offset + data_size < file_size)) 1316 1314 file_size = data_offset + data_size; 1317 1315 1318 1316 progress_next = file_size / 16;
+1 -1
tools/testing/selftests/timers/posix_timers.c
··· 151 151 fflush(stdout); 152 152 153 153 done = 0; 154 - timer_create(which, NULL, &id); 154 + err = timer_create(which, NULL, &id); 155 155 if (err < 0) { 156 156 perror("Can't create timer\n"); 157 157 return -1;
+4 -2
virt/kvm/kvm_main.c
··· 1064 1064 unsigned long gfn_to_hva_prot(struct kvm *kvm, gfn_t gfn, bool *writable) 1065 1065 { 1066 1066 struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn); 1067 - if (writable) 1067 + unsigned long hva = __gfn_to_hva_many(slot, gfn, NULL, false); 1068 + 1069 + if (!kvm_is_error_hva(hva) && writable) 1068 1070 *writable = !memslot_is_readonly(slot); 1069 1071 1070 - return __gfn_to_hva_many(gfn_to_memslot(kvm, gfn), gfn, NULL, false); 1072 + return hva; 1071 1073 } 1072 1074 1073 1075 static int kvm_read_hva(void *data, void __user *hva, int len)