Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v3.12-rc6' into devel

Linux 3.12-rc6

Conflicts:
drivers/gpio/gpio-lynxpoint.c

+1808 -1584
+4 -4
Documentation/ABI/stable/sysfs-bus-usb
···
 		that the USB device has been connected to the machine. This
 		file is read-only.
 Users:
-		PowerTOP <power@bughost.org>
-		http://www.lesswatts.org/projects/powertop/
+		PowerTOP <powertop@lists.01.org>
+		https://01.org/powertop/
 
What:		/sys/bus/usb/device/.../power/active_duration
Date:		January 2008
···
 		will give an integer percentage. Note that this does not
 		account for counter wrap.
Users:
-		PowerTOP <power@bughost.org>
-		http://www.lesswatts.org/projects/powertop/
+		PowerTOP <powertop@lists.01.org>
+		https://01.org/powertop/
 
What:		/sys/bus/usb/devices/<busnum>-<port[.port]>...:<config num>-<interface num>/supports_autosuspend
Date:		January 2008
+16 -16
Documentation/ABI/testing/sysfs-devices-power
···
What:		/sys/devices/.../power/
Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../power directory contains attributes
 		allowing the user space to check and modify some power
···
 
What:		/sys/devices/.../power/wakeup
Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../power/wakeup attribute allows the user
 		space to check if the device is enabled to wake up the system
···
 
What:		/sys/devices/.../power/control
Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../power/control attribute allows the user
 		space to control the run-time power management of the device.
···
 
What:		/sys/devices/.../power/async
Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../async attribute allows the user space to
 		enable or diasble the device's suspend and resume callbacks to
···
 
What:		/sys/devices/.../power/wakeup_count
Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../wakeup_count attribute contains the number
 		of signaled wakeup events associated with the device. This
···
 
What:		/sys/devices/.../power/wakeup_active_count
Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../wakeup_active_count attribute contains the
 		number of times the processing of wakeup events associated with
···
 
What:		/sys/devices/.../power/wakeup_abort_count
Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../wakeup_abort_count attribute contains the
 		number of times the processing of a wakeup event associated with
···
 
What:		/sys/devices/.../power/wakeup_expire_count
Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../wakeup_expire_count attribute contains the
 		number of times a wakeup event associated with the device has
···
 
What:		/sys/devices/.../power/wakeup_active
Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../wakeup_active attribute contains either 1,
 		or 0, depending on whether or not a wakeup event associated with
···
 
What:		/sys/devices/.../power/wakeup_total_time_ms
Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../wakeup_total_time_ms attribute contains
 		the total time of processing wakeup events associated with the
···
 
What:		/sys/devices/.../power/wakeup_max_time_ms
Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../wakeup_max_time_ms attribute contains
 		the maximum time of processing a single wakeup event associated
···
 
What:		/sys/devices/.../power/wakeup_last_time_ms
Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../wakeup_last_time_ms attribute contains
 		the value of the monotonic clock corresponding to the time of
···
 
What:		/sys/devices/.../power/wakeup_prevent_sleep_time_ms
Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../wakeup_prevent_sleep_time_ms attribute
 		contains the total time the device has been preventing
···
 
What:		/sys/devices/.../power/pm_qos_latency_us
Date:		March 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../power/pm_qos_resume_latency_us attribute
 		contains the PM QoS resume latency limit for the given device,
···
 
What:		/sys/devices/.../power/pm_qos_no_power_off
Date:		September 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../power/pm_qos_no_power_off attribute
 		is used for manipulating the PM QoS "no power off" flag. If
···
 
What:		/sys/devices/.../power/pm_qos_remote_wakeup
Date:		September 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/devices/.../power/pm_qos_remote_wakeup attribute
 		is used for manipulating the PM QoS "remote wakeup required"
+11 -11
Documentation/ABI/testing/sysfs-power
···
What:		/sys/power/
Date:		August 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/power directory will contain files that will
 		provide a unified interface to the power management
···
 
What:		/sys/power/state
Date:		August 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/power/state file controls the system power state.
 		Reading from this file returns what states are supported,
···
 
What:		/sys/power/disk
Date:		September 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/power/disk file controls the operating mode of the
 		suspend-to-disk mechanism. Reading from this file returns
···
 
What:		/sys/power/image_size
Date:		August 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/power/image_size file controls the size of the image
 		created by the suspend-to-disk mechanism. It can be written a
···
 
What:		/sys/power/pm_trace
Date:		August 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/power/pm_trace file controls the code which saves the
 		last PM event point in the RTC across reboots, so that you can
···
 
What:		/sys/power/pm_async
Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/power/pm_async file controls the switch allowing the
 		user space to enable or disable asynchronous suspend and resume
···
 
What:		/sys/power/wakeup_count
Date:		July 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/power/wakeup_count file allows user space to put the
 		system into a sleep state while taking into account the
···
 
What:		/sys/power/reserved_size
Date:		May 2011
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/power/reserved_size file allows user space to control
 		the amount of memory reserved for allocations made by device
···
 
What:		/sys/power/autosleep
Date:		April 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/power/autosleep file can be written one of the strings
 		returned by reads from /sys/power/state. If that happens, a
···
 
What:		/sys/power/wake_lock
Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/power/wake_lock file allows user space to create
 		wakeup source objects and activate them on demand (if one of
···
 
What:		/sys/power/wake_unlock
Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
 		The /sys/power/wake_unlock file allows user space to deactivate
 		wakeup sources created with the help of /sys/power/wake_lock.
+1 -1
Documentation/acpi/dsdt-override.txt
···
 
 When to use this method is described in detail on the
 Linux/ACPI home page:
-http://www.lesswatts.org/projects/acpi/overridingDSDT.php
+https://01.org/linux-acpi/documentation/overriding-dsdt
-168
Documentation/devicetree/bindings/memory.txt
···
-*** Memory binding ***
-
-The /memory node provides basic information about the address and size
-of the physical memory. This node is usually filled or updated by the
-bootloader, depending on the actual memory configuration of the given
-hardware.
-
-The memory layout is described by the following node:
-
-/ {
-	#address-cells = <(n)>;
-	#size-cells = <(m)>;
-	memory {
-		device_type = "memory";
-		reg = <(baseaddr1) (size1)
-			(baseaddr2) (size2)
-			...
-			(baseaddrN) (sizeN)>;
-	};
-	...
-};
-
-A memory node follows the typical device tree rules for "reg" property:
-n:		number of cells used to store base address value
-m:		number of cells used to store size value
-baseaddrX:	defines a base address of the defined memory bank
-sizeX:		the size of the defined memory bank
-
-
-More than one memory bank can be defined.
-
-
-*** Reserved memory regions ***
-
-In /memory/reserved-memory node one can create child nodes describing
-particular reserved (excluded from normal use) memory regions. Such
-memory regions are usually designed for the special usage by various
-device drivers. A good example are contiguous memory allocations or
-memory sharing with other operating system on the same hardware board.
-Those special memory regions might depend on the board configuration and
-devices used on the target system.
-
-Parameters for each memory region can be encoded into the device tree
-with the following convention:
-
-[(label):] (name) {
-	compatible = "linux,contiguous-memory-region", "reserved-memory-region";
-	reg = <(address) (size)>;
-	(linux,default-contiguous-region);
-};
-
-compatible:	one or more of:
-	- "linux,contiguous-memory-region" - enables binding of this
-	  region to Contiguous Memory Allocator (special region for
-	  contiguous memory allocations, shared with movable system
-	  memory, Linux kernel-specific).
-	- "reserved-memory-region" - compatibility is defined, given
-	  region is assigned for exclusive usage for by the respective
-	  devices.
-
-reg:	standard property defining the base address and size of
-	the memory region
-
-linux,default-contiguous-region: property indicating that the region
-	is the default region for all contiguous memory
-	allocations, Linux specific (optional)
-
-It is optional to specify the base address, so if one wants to use
-autoconfiguration of the base address, '0' can be specified as a base
-address in the 'reg' property.
-
-The /memory/reserved-memory node must contain the same #address-cells
-and #size-cells value as the root node.
-
-
-*** Device node's properties ***
-
-Once regions in the /memory/reserved-memory node have been defined, they
-may be referenced by other device nodes. Bindings that wish to reference
-memory regions should explicitly document their use of the following
-property:
-
-memory-region = <&phandle_to_defined_region>;
-
-This property indicates that the device driver should use the memory
-region pointed by the given phandle.
-
-
-*** Example ***
-
-This example defines a memory consisting of 4 memory banks. 3 contiguous
-regions are defined for Linux kernel, one default of all device drivers
-(named contig_mem, placed at 0x72000000, 64MiB), one dedicated to the
-framebuffer device (labelled display_mem, placed at 0x78000000, 8MiB)
-and one for multimedia processing (labelled multimedia_mem, placed at
-0x77000000, 64MiB). 'display_mem' region is then assigned to fb@12300000
-device for DMA memory allocations (Linux kernel drivers will use CMA is
-available or dma-exclusive usage otherwise). 'multimedia_mem' is
-assigned to scaler@12500000 and codec@12600000 devices for contiguous
-memory allocations when CMA driver is enabled.
-
-The reason for creating a separate region for framebuffer device is to
-match the framebuffer base address to the one configured by bootloader,
-so once Linux kernel drivers starts no glitches on the displayed boot
-logo appears. Scaller and codec drivers should share the memory
-allocations.
-
-/ {
-	#address-cells = <1>;
-	#size-cells = <1>;
-
-	/* ... */
-
-	memory {
-		reg = <0x40000000 0x10000000
-		       0x50000000 0x10000000
-		       0x60000000 0x10000000
-		       0x70000000 0x10000000>;
-
-		reserved-memory {
-			#address-cells = <1>;
-			#size-cells = <1>;
-
-			/*
-			 * global autoconfigured region for contiguous allocations
-			 * (used only with Contiguous Memory Allocator)
-			 */
-			contig_region@0 {
-				compatible = "linux,contiguous-memory-region";
-				reg = <0x0 0x4000000>;
-				linux,default-contiguous-region;
-			};
-
-			/*
-			 * special region for framebuffer
-			 */
-			display_region: region@78000000 {
-				compatible = "linux,contiguous-memory-region", "reserved-memory-region";
-				reg = <0x78000000 0x800000>;
-			};
-
-			/*
-			 * special region for multimedia processing devices
-			 */
-			multimedia_region: region@77000000 {
-				compatible = "linux,contiguous-memory-region";
-				reg = <0x77000000 0x4000000>;
-			};
-		};
-	};
-
-	/* ... */
-
-	fb0: fb@12300000 {
-		status = "okay";
-		memory-region = <&display_region>;
-	};
-
-	scaler: scaler@12500000 {
-		status = "okay";
-		memory-region = <&multimedia_region>;
-	};
-
-	codec: codec@12600000 {
-		status = "okay";
-		memory-region = <&multimedia_region>;
-	};
-};
+1
Documentation/sound/alsa/HD-Audio-Models.txt
···
 alc269-dmic		Enable ALC269(VA) digital mic workaround
 alc271-dmic		Enable ALC271X digital mic workaround
 inv-dmic		Inverted internal mic workaround
+headset-mic		Indicates a combined headset (headphone+mic) jack
 lenovo-dock		Enables docking station I/O for some Lenovos
 dell-headset-multi	Headset jack, which can also be used as mic-in
 dell-headset-dock	Headset jack (without mic-in), and also dock I/O
+19 -13
MAINTAINERS
···
 
 ACPI
 M:	Len Brown <lenb@kernel.org>
-M:	Rafael J. Wysocki <rjw@sisk.pl>
+M:	Rafael J. Wysocki <rjw@rjwysocki.net>
 L:	linux-acpi@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
-Q:	http://patchwork.kernel.org/project/linux-acpi/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux
+W:	https://01.org/linux-acpi
+Q:	https://patchwork.kernel.org/project/linux-acpi/list/
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
 S:	Supported
 F:	drivers/acpi/
 F:	drivers/pnp/pnpacpi/
···
 ACPI FAN DRIVER
 M:	Zhang Rui <rui.zhang@intel.com>
 L:	linux-acpi@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
+W:	https://01.org/linux-acpi
 S:	Supported
 F:	drivers/acpi/fan.c
 
 ACPI THERMAL DRIVER
 M:	Zhang Rui <rui.zhang@intel.com>
 L:	linux-acpi@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
+W:	https://01.org/linux-acpi
 S:	Supported
 F:	drivers/acpi/*thermal*
 
 ACPI VIDEO DRIVER
 M:	Zhang Rui <rui.zhang@intel.com>
 L:	linux-acpi@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
+W:	https://01.org/linux-acpi
 S:	Supported
 F:	drivers/acpi/video.c
···
 F:	drivers/net/ethernet/ti/cpmac.c
 
 CPU FREQUENCY DRIVERS
-M:	Rafael J. Wysocki <rjw@sisk.pl>
+M:	Rafael J. Wysocki <rjw@rjwysocki.net>
 M:	Viresh Kumar <viresh.kumar@linaro.org>
 L:	cpufreq@vger.kernel.org
 L:	linux-pm@vger.kernel.org
···
 F:	drivers/cpuidle/cpuidle-big_little.c
 
 CPUIDLE DRIVERS
-M:	Rafael J. Wysocki <rjw@sisk.pl>
+M:	Rafael J. Wysocki <rjw@rjwysocki.net>
 M:	Daniel Lezcano <daniel.lezcano@linaro.org>
 L:	linux-pm@vger.kernel.org
 S:	Maintained
···
 FREEZER
 M:	Pavel Machek <pavel@ucw.cz>
-M:	"Rafael J. Wysocki" <rjw@sisk.pl>
+M:	"Rafael J. Wysocki" <rjw@rjwysocki.net>
 L:	linux-pm@vger.kernel.org
 S:	Supported
 F:	Documentation/power/freezing-of-tasks.txt
···
 L:	linux-scsi@vger.kernel.org
 S:	Odd Fixes (e.g., new signatures)
 F:	drivers/scsi/fdomain.*
+
+GCOV BASED KERNEL PROFILING
+M:	Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
+S:	Maintained
+F:	kernel/gcov/
+F:	Documentation/gcov.txt
 
 GDT SCSI DISK ARRAY CONTROLLER DRIVER
 M:	Achim Leubner <achim_leubner@adaptec.com>
···
 HIBERNATION (aka Software Suspend, aka swsusp)
 M:	Pavel Machek <pavel@ucw.cz>
-M:	"Rafael J. Wysocki" <rjw@sisk.pl>
+M:	"Rafael J. Wysocki" <rjw@rjwysocki.net>
 L:	linux-pm@vger.kernel.org
 S:	Supported
 F:	arch/x86/power/
···
 INTEL MENLOW THERMAL DRIVER
 M:	Sujith Thomas <sujith.thomas@intel.com>
 L:	platform-driver-x86@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
+W:	https://01.org/linux-acpi
 S:	Supported
 F:	drivers/platform/x86/intel_menlow.c
···
 SUSPEND TO RAM
 M:	Len Brown <len.brown@intel.com>
 M:	Pavel Machek <pavel@ucw.cz>
-M:	"Rafael J. Wysocki" <rjw@sisk.pl>
+M:	"Rafael J. Wysocki" <rjw@rjwysocki.net>
 L:	linux-pm@vger.kernel.org
 S:	Supported
 F:	Documentation/power/
+1 -1
Makefile
···
 VERSION = 3
 PATCHLEVEL = 12
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc6
 NAME = One Giant Leap for Frogkind
 
 # *DOCUMENTATION*
+1 -1
arch/arc/kernel/ptrace.c
···
 	REG_IGNORE_ONE(pad2);
 	REG_IN_CHUNK(callee, efa, cregs);	/* callee_regs[r25..r13] */
 	REG_IGNORE_ONE(efa);			/* efa update invalid */
-	REG_IN_ONE(stop_pc, &ptregs->ret);	/* stop_pc: PC update */
+	REG_IGNORE_ONE(stop_pc);		/* PC updated via @ret */
 
 	return ret;
 }
+7 -2
arch/arm/Makefile
···
 # Convert bzImage to zImage
 bzImage: zImage
 
-zImage Image xipImage bootpImage uImage: vmlinux
+BOOT_TARGETS	= zImage Image xipImage bootpImage uImage
+INSTALL_TARGETS	= zinstall uinstall install
+
+PHONY += bzImage $(BOOT_TARGETS) $(INSTALL_TARGETS)
+
+$(BOOT_TARGETS): vmlinux
 	$(Q)$(MAKE) $(build)=$(boot) MACHINE=$(MACHINE) $(boot)/$@
 
-zinstall uinstall install: vmlinux
+$(INSTALL_TARGETS):
 	$(Q)$(MAKE) $(build)=$(boot) MACHINE=$(MACHINE) $@
 
 %.dtb: | scripts
+8 -8
arch/arm/boot/Makefile
···
 	@test "$(INITRD)" != "" || \
 	(echo You must specify INITRD; exit -1)
 
-install: $(obj)/Image
-	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
+install:
+	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
 	$(obj)/Image System.map "$(INSTALL_PATH)"
 
-zinstall: $(obj)/zImage
-	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
+zinstall:
+	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
 	$(obj)/zImage System.map "$(INSTALL_PATH)"
 
-uinstall: $(obj)/uImage
-	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
+uinstall:
+	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
 	$(obj)/uImage System.map "$(INSTALL_PATH)"
 
 zi:
-	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
+	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
 	$(obj)/zImage System.map "$(INSTALL_PATH)"
 
 i:
-	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
+	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
 	$(obj)/Image System.map "$(INSTALL_PATH)"
 
 subdir- := bootp compressed dts
+5
arch/arm/boot/dts/exynos5250.dtsi
···
 		     <1 14 0xf08>,
 		     <1 11 0xf08>,
 		     <1 10 0xf08>;
+	/* Unfortunately we need this since some versions of U-Boot
+	 * on Exynos don't set the CNTFRQ register, so we need the
+	 * value from DT.
+	 */
+	clock-frequency = <24000000>;
 };
 
 mct@101C0000 {
+1 -1
arch/arm/boot/dts/omap3-beagle-xm.dts
···
 
 / {
 	model = "TI OMAP3 BeagleBoard xM";
-	compatible = "ti,omap3-beagle-xm", "ti,omap3-beagle", "ti,omap3";
+	compatible = "ti,omap3-beagle-xm", "ti,omap36xx", "ti,omap3";
 
 	cpus {
 		cpu@0 {
+2 -2
arch/arm/boot/dts/omap3.dtsi
···
 	#address-cells = <1>;
 	#size-cells = <0>;
 	pinctrl-single,register-width = <16>;
-	pinctrl-single,function-mask = <0x7f1f>;
+	pinctrl-single,function-mask = <0xff1f>;
 };
 
 omap3_pmx_wkup: pinmux@0x48002a00 {
···
 	#address-cells = <1>;
 	#size-cells = <0>;
 	pinctrl-single,register-width = <16>;
-	pinctrl-single,function-mask = <0x7f1f>;
+	pinctrl-single,function-mask = <0xff1f>;
 };
 
 gpio1: gpio@48310000 {
+14
arch/arm/boot/install.sh
···
 # $4 - default install path (blank if root directory)
 #
 
+verify () {
+	if [ ! -f "$1" ]; then
+		echo ""                                                   1>&2
+		echo " *** Missing file: $1"                              1>&2
+		echo ' *** You need to run "make" before "make install".' 1>&2
+		echo ""                                                   1>&2
+		exit 1
+	fi
+}
+
+# Make sure the files actually exist
+verify "$2"
+verify "$3"
+
 # User may have a custom install script
 if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
 if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
+4 -2
arch/arm/common/mcpm_entry.c
···
 {
 	phys_reset_t phys_reset;
 
-	BUG_ON(!platform_ops);
+	if (WARN_ON_ONCE(!platform_ops || !platform_ops->power_down))
+		return;
 	BUG_ON(!irqs_disabled());
 
 	/*
···
 {
 	phys_reset_t phys_reset;
 
-	BUG_ON(!platform_ops);
+	if (WARN_ON_ONCE(!platform_ops || !platform_ops->suspend))
+		return;
 	BUG_ON(!irqs_disabled());
 
 	/* Very similar to mcpm_cpu_power_down() */
+4 -1
arch/arm/common/sharpsl_param.c
···
 #include <linux/module.h>
 #include <linux/string.h>
 #include <asm/mach/sharpsl_param.h>
+#include <asm/memory.h>
 
 /*
  * Certain hardware parameters determined at the time of device manufacture,
···
  */
 #ifdef CONFIG_ARCH_SA1100
 #define PARAM_BASE	0xe8ffc000
+#define param_start(x)	(void *)(x)
 #else
 #define PARAM_BASE	0xa0000a00
+#define param_start(x)	__va(x)
 #endif
 #define MAGIC_CHG(a,b,c,d) ( ( d << 24 ) | ( c << 16 ) | ( b << 8 ) | a )
···
 
 void sharpsl_save_param(void)
 {
-	memcpy(&sharpsl_param, (void *)PARAM_BASE, sizeof(struct sharpsl_param_info));
+	memcpy(&sharpsl_param, param_start(PARAM_BASE), sizeof(struct sharpsl_param_info));
 
 	if (sharpsl_param.comadj_keyword != COMADJ_MAGIC)
 		sharpsl_param.comadj=-1;
-1
arch/arm/include/asm/Kbuild
···
 generic-y += termios.h
 generic-y += timex.h
 generic-y += trace_clock.h
-generic-y += types.h
 generic-y += unaligned.h
+1 -1
arch/arm/include/asm/jump_label.h
···
 
 static __always_inline bool arch_static_branch(struct static_key *key)
 {
-	asm goto("1:\n\t"
+	asm_volatile_goto("1:\n\t"
 		 JUMP_LABEL_NOP "\n\t"
 		 ".pushsection __jump_table, \"aw\"\n\t"
 		 ".word 1b, %l[l_yes], %c0\n\t"
+10 -4
arch/arm/include/asm/mcpm.h
···
  *
  * This must be called with interrupts disabled.
  *
- * This does not return. Re-entry in the kernel is expected via
- * mcpm_entry_point.
+ * On success this does not return. Re-entry in the kernel is expected
+ * via mcpm_entry_point.
+ *
+ * This will return if mcpm_platform_register() has not been called
+ * previously in which case the caller should take appropriate action.
  */
 void mcpm_cpu_power_down(void);
 
···
  *
  * This must be called with interrupts disabled.
  *
- * This does not return. Re-entry in the kernel is expected via
- * mcpm_entry_point.
+ * On success this does not return. Re-entry in the kernel is expected
+ * via mcpm_entry_point.
+ *
+ * This will return if mcpm_platform_register() has not been called
+ * previously in which case the caller should take appropriate action.
  */
 void mcpm_cpu_suspend(u64 expected_residency);
 
+6
arch/arm/include/asm/syscall.h
···
 					 unsigned int i, unsigned int n,
 					 unsigned long *args)
 {
+	if (n == 0)
+		return;
+
 	if (i + n > SYSCALL_MAX_ARGS) {
 		unsigned long *args_bad = args + SYSCALL_MAX_ARGS - i;
 		unsigned int n_bad = n + i - SYSCALL_MAX_ARGS;
···
 					 unsigned int i, unsigned int n,
 					 const unsigned long *args)
 {
+	if (n == 0)
+		return;
+
 	if (i + n > SYSCALL_MAX_ARGS) {
 		pr_warning("%s called with max args %d, handling only %d\n",
 			   __func__, i + n, SYSCALL_MAX_ARGS);
+20 -1
arch/arm/kernel/head.S
···
 	mrc	p15, 0, r0, c0, c0, 5	@ read MPIDR
 	and	r0, r0, #0xc0000000	@ multiprocessing extensions and
 	teq	r0, #0x80000000		@ not part of a uniprocessor system?
-	moveq	pc, lr			@ yes, assume SMP
+	bne	__fixup_smp_on_up	@ no, assume UP
+
+	@ Core indicates it is SMP. Check for Aegis SOC where a single
+	@ Cortex-A9 CPU is present but SMP operations fault.
+	mov	r4, #0x41000000
+	orr	r4, r4, #0x0000c000
+	orr	r4, r4, #0x00000090
+	teq	r3, r4			@ Check for ARM Cortex-A9
+	movne	pc, lr			@ Not ARM Cortex-A9,
+
+	@ If a future SoC *does* use 0x0 as the PERIPH_BASE, then the
+	@ below address check will need to be #ifdef'd or equivalent
+	@ for the Aegis platform.
+	mrc	p15, 4, r0, c15, c0	@ get SCU base address
+	teq	r0, #0x0		@ '0' on actual UP A9 hardware
+	beq	__fixup_smp_on_up	@ So its an A9 UP
+	ldr	r0, [r0, #4]		@ read SCU Config
+	and	r0, r0, #0x3		@ number of CPUs
+	teq	r0, #0x0		@ is 1?
+	movne	pc, lr
 
 __fixup_smp_on_up:
 	adr	r0, 1f
+18
arch/arm/mach-omap2/board-generic.c
···
 	.restart	= omap3xxx_restart,
 MACHINE_END
 
+static const char *omap36xx_boards_compat[] __initdata = {
+	"ti,omap36xx",
+	NULL,
+};
+
+DT_MACHINE_START(OMAP36XX_DT, "Generic OMAP36xx (Flattened Device Tree)")
+	.reserve	= omap_reserve,
+	.map_io		= omap3_map_io,
+	.init_early	= omap3630_init_early,
+	.init_irq	= omap_intc_of_init,
+	.handle_irq	= omap3_intc_handle_irq,
+	.init_machine	= omap_generic_init,
+	.init_late	= omap3_init_late,
+	.init_time	= omap3_sync32k_timer_init,
+	.dt_compat	= omap36xx_boards_compat,
+	.restart	= omap3xxx_restart,
+MACHINE_END
+
 static const char *omap3_gp_boards_compat[] __initdata = {
 	"ti,omap3-beagle",
 	"timll,omap3-devkit8000",
+9
arch/arm/mach-omap2/board-rx51-peripherals.c
···
 		.name		= "lp5523:kb1",
 		.chan_nr	= 0,
 		.led_current	= 50,
+		.max_current	= 100,
 	}, {
 		.name		= "lp5523:kb2",
 		.chan_nr	= 1,
 		.led_current	= 50,
+		.max_current	= 100,
 	}, {
 		.name		= "lp5523:kb3",
 		.chan_nr	= 2,
 		.led_current	= 50,
+		.max_current	= 100,
 	}, {
 		.name		= "lp5523:kb4",
 		.chan_nr	= 3,
 		.led_current	= 50,
+		.max_current	= 100,
 	}, {
 		.name		= "lp5523:b",
 		.chan_nr	= 4,
 		.led_current	= 50,
+		.max_current	= 100,
 	}, {
 		.name		= "lp5523:g",
 		.chan_nr	= 5,
 		.led_current	= 50,
+		.max_current	= 100,
 	}, {
 		.name		= "lp5523:r",
 		.chan_nr	= 6,
 		.led_current	= 50,
+		.max_current	= 100,
 	}, {
 		.name		= "lp5523:kb5",
 		.chan_nr	= 7,
 		.led_current	= 50,
+		.max_current	= 100,
 	}, {
 		.name		= "lp5523:kb6",
 		.chan_nr	= 8,
 		.led_current	= 50,
+		.max_current	= 100,
 	}
 };
 
+11 -1
arch/arm/mach-omap2/gpmc-onenand.c
···
 	struct gpmc_timings t;
 	int ret;
 
-	if (gpmc_onenand_data->of_node)
+	if (gpmc_onenand_data->of_node) {
 		gpmc_read_settings_dt(gpmc_onenand_data->of_node,
 				      &onenand_async);
+		if (onenand_async.sync_read || onenand_async.sync_write) {
+			if (onenand_async.sync_write)
+				gpmc_onenand_data->flags |=
+					ONENAND_SYNC_READWRITE;
+			else
+				gpmc_onenand_data->flags |= ONENAND_SYNC_READ;
+			onenand_async.sync_read = false;
+			onenand_async.sync_write = false;
+		}
+	}
 
 	omap2_onenand_set_async_mode(onenand_base);
 
+1 -3
arch/arm/mach-omap2/mux.h
···
 #define OMAP_PULL_UP			(1 << 4)
 #define OMAP_ALTELECTRICALSEL		(1 << 5)
 
-/* 34xx specific mux bit defines */
+/* omap3/4/5 specific mux bit defines */
 #define OMAP_INPUT_EN			(1 << 8)
 #define OMAP_OFF_EN			(1 << 9)
 #define OMAP_OFFOUT_EN			(1 << 10)
···
 #define OMAP_OFF_PULL_EN		(1 << 12)
 #define OMAP_OFF_PULL_UP		(1 << 13)
 #define OMAP_WAKEUP_EN			(1 << 14)
-
-/* 44xx specific mux bit defines */
 #define OMAP_WAKEUP_EVENT		(1 << 15)
 
 /* Active pin states */
+2 -2
arch/arm/mach-omap2/timer.c
··· 628 628 #endif /* CONFIG_HAVE_ARM_TWD */ 629 629 #endif /* CONFIG_ARCH_OMAP4 */ 630 630 631 - #ifdef CONFIG_SOC_OMAP5 631 + #if defined(CONFIG_SOC_OMAP5) || defined(CONFIG_SOC_DRA7XX) 632 632 void __init omap5_realtime_timer_init(void) 633 633 { 634 634 omap4_sync32k_timer_init(); ··· 636 636 637 637 clocksource_of_init(); 638 638 } 639 - #endif /* CONFIG_SOC_OMAP5 */ 639 + #endif /* CONFIG_SOC_OMAP5 || CONFIG_SOC_DRA7XX */ 640 640 641 641 /** 642 642 * omap_timer_init - build and register timer device with an
+28 -15
arch/arm/mm/dma-mapping.c
··· 1232 1232 break; 1233 1233 1234 1234 len = (j - i) << PAGE_SHIFT; 1235 - ret = iommu_map(mapping->domain, iova, phys, len, 0); 1235 + ret = iommu_map(mapping->domain, iova, phys, len, 1236 + IOMMU_READ|IOMMU_WRITE); 1236 1237 if (ret < 0) 1237 1238 goto fail; 1238 1239 iova += len; ··· 1432 1431 GFP_KERNEL); 1433 1432 } 1434 1433 1434 + static int __dma_direction_to_prot(enum dma_data_direction dir) 1435 + { 1436 + int prot; 1437 + 1438 + switch (dir) { 1439 + case DMA_BIDIRECTIONAL: 1440 + prot = IOMMU_READ | IOMMU_WRITE; 1441 + break; 1442 + case DMA_TO_DEVICE: 1443 + prot = IOMMU_READ; 1444 + break; 1445 + case DMA_FROM_DEVICE: 1446 + prot = IOMMU_WRITE; 1447 + break; 1448 + default: 1449 + prot = 0; 1450 + } 1451 + 1452 + return prot; 1453 + } 1454 + 1435 1455 /* 1436 1456 * Map a part of the scatter-gather list into contiguous io address space 1437 1457 */ ··· 1466 1444 int ret = 0; 1467 1445 unsigned int count; 1468 1446 struct scatterlist *s; 1447 + int prot; 1469 1448 1470 1449 size = PAGE_ALIGN(size); 1471 1450 *handle = DMA_ERROR_CODE; ··· 1483 1460 !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs)) 1484 1461 __dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir); 1485 1462 1486 - ret = iommu_map(mapping->domain, iova, phys, len, 0); 1463 + prot = __dma_direction_to_prot(dir); 1464 + 1465 + ret = iommu_map(mapping->domain, iova, phys, len, prot); 1487 1466 if (ret < 0) 1488 1467 goto fail; 1489 1468 count += len >> PAGE_SHIFT; ··· 1690 1665 if (dma_addr == DMA_ERROR_CODE) 1691 1666 return dma_addr; 1692 1667 1693 - switch (dir) { 1694 - case DMA_BIDIRECTIONAL: 1695 - prot = IOMMU_READ | IOMMU_WRITE; 1696 - break; 1697 - case DMA_TO_DEVICE: 1698 - prot = IOMMU_READ; 1699 - break; 1700 - case DMA_FROM_DEVICE: 1701 - prot = IOMMU_WRITE; 1702 - break; 1703 - default: 1704 - prot = 0; 1705 - } 1668 + prot = __dma_direction_to_prot(dir); 1706 1669 1707 1670 ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len, prot); 1708 1671 if (ret < 0)
-3
arch/arm/mm/init.c
··· 17 17 #include <linux/nodemask.h> 18 18 #include <linux/initrd.h> 19 19 #include <linux/of_fdt.h> 20 - #include <linux/of_reserved_mem.h> 21 20 #include <linux/highmem.h> 22 21 #include <linux/gfp.h> 23 22 #include <linux/memblock.h> ··· 377 378 /* reserve any platform specific memblock areas */ 378 379 if (mdesc->reserve) 379 380 mdesc->reserve(); 380 - 381 - early_init_dt_scan_reserved_mem(); 382 381 383 382 /* 384 383 * reserve memory for DMA contigouos allocations,
+1 -1
arch/mips/include/asm/jump_label.h
··· 22 22 23 23 static __always_inline bool arch_static_branch(struct static_key *key) 24 24 { 25 - asm goto("1:\tnop\n\t" 25 + asm_volatile_goto("1:\tnop\n\t" 26 26 "nop\n\t" 27 27 ".pushsection __jump_table, \"aw\"\n\t" 28 28 WORD_INSN " 1b, %l[l_yes], %0\n\t"
+1 -1
arch/mips/kernel/octeon_switch.S
··· 73 73 3: 74 74 75 75 #if defined(CONFIG_CC_STACKPROTECTOR) && !defined(CONFIG_SMP) 76 - PTR_L t8, __stack_chk_guard 76 + PTR_LA t8, __stack_chk_guard 77 77 LONG_L t9, TASK_STACK_CANARY(a1) 78 78 LONG_S t9, 0(t8) 79 79 #endif
+1 -1
arch/mips/kernel/r2300_switch.S
··· 67 67 1: 68 68 69 69 #if defined(CONFIG_CC_STACKPROTECTOR) && !defined(CONFIG_SMP) 70 - PTR_L t8, __stack_chk_guard 70 + PTR_LA t8, __stack_chk_guard 71 71 LONG_L t9, TASK_STACK_CANARY(a1) 72 72 LONG_S t9, 0(t8) 73 73 #endif
+1 -1
arch/mips/kernel/r4k_switch.S
··· 69 69 1: 70 70 71 71 #if defined(CONFIG_CC_STACKPROTECTOR) && !defined(CONFIG_SMP) 72 - PTR_L t8, __stack_chk_guard 72 + PTR_LA t8, __stack_chk_guard 73 73 LONG_L t9, TASK_STACK_CANARY(a1) 74 74 LONG_S t9, 0(t8) 75 75 #endif
+1 -1
arch/parisc/include/asm/traps.h
··· 6 6 7 7 /* traps.c */ 8 8 void parisc_terminate(char *msg, struct pt_regs *regs, 9 - int code, unsigned long offset); 9 + int code, unsigned long offset) __noreturn __cold; 10 10 11 11 /* mm/fault.c */ 12 12 void do_page_fault(struct pt_regs *regs, unsigned long code,
+1
arch/parisc/kernel/cache.c
··· 602 602 __flush_cache_page(vma, vmaddr, PFN_PHYS(pfn)); 603 603 } 604 604 } 605 + EXPORT_SYMBOL_GPL(flush_cache_page); 605 606 606 607 #ifdef CONFIG_PARISC_TMPALIAS 607 608
+1 -7
arch/parisc/kernel/smp.c
··· 72 72 IPI_NOP=0, 73 73 IPI_RESCHEDULE=1, 74 74 IPI_CALL_FUNC, 75 - IPI_CALL_FUNC_SINGLE, 76 75 IPI_CPU_START, 77 76 IPI_CPU_STOP, 78 77 IPI_CPU_TEST ··· 161 162 case IPI_CALL_FUNC: 162 163 smp_debug(100, KERN_DEBUG "CPU%d IPI_CALL_FUNC\n", this_cpu); 163 164 generic_smp_call_function_interrupt(); 164 - break; 165 - 166 - case IPI_CALL_FUNC_SINGLE: 167 - smp_debug(100, KERN_DEBUG "CPU%d IPI_CALL_FUNC_SINGLE\n", this_cpu); 168 - generic_smp_call_function_single_interrupt(); 169 165 break; 170 166 171 167 case IPI_CPU_START: ··· 254 260 255 261 void arch_send_call_function_single_ipi(int cpu) 256 262 { 257 - send_IPI_single(cpu, IPI_CALL_FUNC_SINGLE); 263 + send_IPI_single(cpu, IPI_CALL_FUNC); 258 264 } 259 265 260 266 /*
+3 -8
arch/parisc/kernel/traps.c
··· 291 291 do_exit(SIGSEGV); 292 292 } 293 293 294 - int syscall_ipi(int (*syscall) (struct pt_regs *), struct pt_regs *regs) 295 - { 296 - return syscall(regs); 297 - } 298 - 299 294 /* gdb uses break 4,8 */ 300 295 #define GDB_BREAK_INSN 0x10004 301 296 static void handle_gdb_break(struct pt_regs *regs, int wot) ··· 800 805 else { 801 806 802 807 /* 803 - * The kernel should never fault on its own address space. 808 + * The kernel should never fault on its own address space, 809 + * unless pagefault_disable() was called before. 804 810 */ 805 811 806 - if (fault_space == 0) 812 + if (fault_space == 0 && !in_atomic()) 807 813 { 808 814 pdc_chassis_send_status(PDC_CHASSIS_DIRECT_PANIC); 809 815 parisc_terminate("Kernel Fault", regs, code, fault_address); 810 - 811 816 } 812 817 } 813 818
+14 -1
arch/parisc/lib/memcpy.c
··· 56 56 #ifdef __KERNEL__ 57 57 #include <linux/module.h> 58 58 #include <linux/compiler.h> 59 - #include <asm/uaccess.h> 59 + #include <linux/uaccess.h> 60 60 #define s_space "%%sr1" 61 61 #define d_space "%%sr2" 62 62 #else ··· 524 524 EXPORT_SYMBOL(copy_from_user); 525 525 EXPORT_SYMBOL(copy_in_user); 526 526 EXPORT_SYMBOL(memcpy); 527 + 528 + long probe_kernel_read(void *dst, const void *src, size_t size) 529 + { 530 + unsigned long addr = (unsigned long)src; 531 + 532 + if (size < 0 || addr < PAGE_SIZE) 533 + return -EFAULT; 534 + 535 + /* check for I/O space F_EXTEND(0xfff00000) access as well? */ 536 + 537 + return __probe_kernel_read(dst, src, size); 538 + } 539 + 527 540 #endif
+10 -5
arch/parisc/mm/fault.c
··· 171 171 unsigned long address) 172 172 { 173 173 struct vm_area_struct *vma, *prev_vma; 174 - struct task_struct *tsk = current; 175 - struct mm_struct *mm = tsk->mm; 174 + struct task_struct *tsk; 175 + struct mm_struct *mm; 176 176 unsigned long acc_type; 177 177 int fault; 178 - unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE; 178 + unsigned int flags; 179 179 180 - if (in_atomic() || !mm) 180 + if (in_atomic()) 181 181 goto no_context; 182 182 183 + tsk = current; 184 + mm = tsk->mm; 185 + if (!mm) 186 + goto no_context; 187 + 188 + flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE; 183 189 if (user_mode(regs)) 184 190 flags |= FAULT_FLAG_USER; 185 191 186 192 acc_type = parisc_acctyp(code, regs->iir); 187 - 188 193 if (acc_type & VM_WRITE) 189 194 flags |= FAULT_FLAG_WRITE; 190 195 retry:
+1 -1
arch/powerpc/include/asm/jump_label.h
··· 19 19 20 20 static __always_inline bool arch_static_branch(struct static_key *key) 21 21 { 22 - asm goto("1:\n\t" 22 + asm_volatile_goto("1:\n\t" 23 23 "nop\n\t" 24 24 ".pushsection __jump_table, \"aw\"\n\t" 25 25 JUMP_ENTRY_TYPE "1b, %l[l_yes], %c0\n\t"
+3 -2
arch/powerpc/kernel/irq.c
··· 495 495 void do_IRQ(struct pt_regs *regs) 496 496 { 497 497 struct pt_regs *old_regs = set_irq_regs(regs); 498 - struct thread_info *curtp, *irqtp; 498 + struct thread_info *curtp, *irqtp, *sirqtp; 499 499 500 500 /* Switch to the irq stack to handle this */ 501 501 curtp = current_thread_info(); 502 502 irqtp = hardirq_ctx[raw_smp_processor_id()]; 503 + sirqtp = softirq_ctx[raw_smp_processor_id()]; 503 504 504 505 /* Already there ? */ 505 - if (unlikely(curtp == irqtp)) { 506 + if (unlikely(curtp == irqtp || curtp == sirqtp)) { 506 507 __do_irq(regs); 507 508 set_irq_regs(old_regs); 508 509 return;
+1 -1
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 1066 1066 BEGIN_FTR_SECTION 1067 1067 mfspr r8, SPRN_DSCR 1068 1068 ld r7, HSTATE_DSCR(r13) 1069 - std r8, VCPU_DSCR(r7) 1069 + std r8, VCPU_DSCR(r9) 1070 1070 mtspr SPRN_DSCR, r7 1071 1071 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206) 1072 1072
+17 -1
arch/powerpc/kvm/e500_mmu_host.c
··· 332 332 unsigned long hva; 333 333 int pfnmap = 0; 334 334 int tsize = BOOK3E_PAGESZ_4K; 335 + int ret = 0; 336 + unsigned long mmu_seq; 337 + struct kvm *kvm = vcpu_e500->vcpu.kvm; 338 + 339 + /* used to check for invalidations in progress */ 340 + mmu_seq = kvm->mmu_notifier_seq; 341 + smp_rmb(); 335 342 336 343 /* 337 344 * Translate guest physical to true physical, acquiring ··· 456 449 gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1); 457 450 } 458 451 452 + spin_lock(&kvm->mmu_lock); 453 + if (mmu_notifier_retry(kvm, mmu_seq)) { 454 + ret = -EAGAIN; 455 + goto out; 456 + } 457 + 459 458 kvmppc_e500_ref_setup(ref, gtlbe, pfn); 460 459 461 460 kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, ··· 470 457 /* Clear i-cache for new pages */ 471 458 kvmppc_mmu_flush_icache(pfn); 472 459 460 + out: 461 + spin_unlock(&kvm->mmu_lock); 462 + 473 463 /* Drop refcount on page, so that mmu notifiers can clear it */ 474 464 kvm_release_pfn_clean(pfn); 475 465 476 - return 0; 466 + return ret; 477 467 } 478 468 479 469 /* XXX only map the one-one case, for now use TLB0 */
+1 -1
arch/s390/include/asm/jump_label.h
··· 15 15 16 16 static __always_inline bool arch_static_branch(struct static_key *key) 17 17 { 18 - asm goto("0: brcl 0,0\n" 18 + asm_volatile_goto("0: brcl 0,0\n" 19 19 ".pushsection __jump_table, \"aw\"\n" 20 20 ASM_ALIGN "\n" 21 21 ASM_PTR " 0b, %l[label], %0\n"
+20 -22
arch/s390/kernel/crash_dump.c
··· 40 40 } 41 41 42 42 /* 43 - * Copy up to one page to vmalloc or real memory 43 + * Copy real to virtual or real memory 44 44 */ 45 - static ssize_t copy_page_real(void *buf, void *src, size_t csize) 45 + static int copy_from_realmem(void *dest, void *src, size_t count) 46 46 { 47 - size_t size; 47 + unsigned long size; 48 + int rc; 48 49 49 - if (is_vmalloc_addr(buf)) { 50 - BUG_ON(csize >= PAGE_SIZE); 51 - /* If buf is not page aligned, copy first part */ 52 - size = min(roundup(__pa(buf), PAGE_SIZE) - __pa(buf), csize); 53 - if (size) { 54 - if (memcpy_real(load_real_addr(buf), src, size)) 55 - return -EFAULT; 56 - buf += size; 57 - src += size; 58 - } 59 - /* Copy second part */ 60 - size = csize - size; 61 - return (size) ? memcpy_real(load_real_addr(buf), src, size) : 0; 62 - } else { 63 - return memcpy_real(buf, src, csize); 64 - } 50 + if (!count) 51 + return 0; 52 + if (!is_vmalloc_or_module_addr(dest)) 53 + return memcpy_real(dest, src, count); 54 + do { 55 + size = min(count, PAGE_SIZE - (__pa(dest) & ~PAGE_MASK)); 56 + if (memcpy_real(load_real_addr(dest), src, size)) 57 + return -EFAULT; 58 + count -= size; 59 + dest += size; 60 + src += size; 61 + } while (count); 62 + return 0; 65 63 } 66 64 67 65 /* ··· 112 114 rc = copy_to_user_real((void __force __user *) buf, 113 115 (void *) src, csize); 114 116 else 115 - rc = copy_page_real(buf, (void *) src, csize); 117 + rc = copy_from_realmem(buf, (void *) src, csize); 116 118 return (rc == 0) ? rc : csize;
117 119 } 118 120 ··· 208 210 if (OLDMEM_BASE) { 209 211 if ((unsigned long) src < OLDMEM_SIZE) { 210 212 copied = min(count, OLDMEM_SIZE - (unsigned long) src); 211 - rc = memcpy_real(dest, src + OLDMEM_BASE, copied); 213 + rc = copy_from_realmem(dest, src + OLDMEM_BASE, copied); 212 214 if (rc) 213 215 return rc; 214 216 } ··· 221 223 return rc; 222 224 } 223 225 } 224 - return memcpy_real(dest + copied, src + copied, count - copied); 226 + return copy_from_realmem(dest + copied, src + copied, count - copied); 225 227 } 226 228 227 229 /*
+1
arch/s390/kernel/entry.S
··· 266 266 tm __TI_flags+3(%r12),_TIF_SYSCALL 267 267 jno sysc_return 268 268 lm %r2,%r7,__PT_R2(%r11) # load svc arguments 269 + l %r10,__TI_sysc_table(%r12) # 31 bit system call table 269 270 xr %r8,%r8 # svc 0 returns -ENOSYS 270 271 clc __PT_INT_CODE+2(2,%r11),BASED(.Lnr_syscalls+2) 271 272 jnl sysc_nr_ok # invalid svc number -> do svc 0
+1
arch/s390/kernel/entry64.S
··· 297 297 tm __TI_flags+7(%r12),_TIF_SYSCALL 298 298 jno sysc_return 299 299 lmg %r2,%r7,__PT_R2(%r11) # load svc arguments 300 + lg %r10,__TI_sysc_table(%r12) # address of system call table 300 301 lghi %r8,0 # svc 0 returns -ENOSYS 301 302 llgh %r1,__PT_INT_CODE+2(%r11) # load new svc number 302 303 cghi %r1,NR_syscalls
+5 -1
arch/s390/kernel/kprobes.c
··· 67 67 case 0xac: /* stnsm */ 68 68 case 0xad: /* stosm */ 69 69 return -EINVAL; 70 + case 0xc6: 71 + switch (insn[0] & 0x0f) { 72 + case 0x00: /* exrl */ 73 + return -EINVAL; 74 + } 70 75 } 71 76 switch (insn[0]) { 72 77 case 0x0101: /* pr */ ··· 185 180 break; 186 181 case 0xc6: 187 182 switch (insn[0] & 0x0f) { 188 - case 0x00: /* exrl */ 189 183 case 0x02: /* pfdrl */ 190 184 case 0x04: /* cghrl */ 191 185 case 0x05: /* chrl */
+1 -1
arch/sparc/include/asm/jump_label.h
··· 9 9 10 10 static __always_inline bool arch_static_branch(struct static_key *key) 11 11 { 12 - asm goto("1:\n\t" 12 + asm_volatile_goto("1:\n\t" 13 13 "nop\n\t" 14 14 "nop\n\t" 15 15 ".pushsection __jump_table, \"aw\"\n\t"
+3 -2
arch/tile/include/asm/atomic.h
··· 166 166 * 167 167 * Atomically sets @v to @i and returns old @v 168 168 */ 169 - static inline u64 atomic64_xchg(atomic64_t *v, u64 n) 169 + static inline long long atomic64_xchg(atomic64_t *v, long long n) 170 170 { 171 171 return xchg64(&v->counter, n); 172 172 } ··· 180 180 * Atomically checks if @v holds @o and replaces it with @n if so. 181 181 * Returns the old value at @v. 182 182 */ 183 - static inline u64 atomic64_cmpxchg(atomic64_t *v, u64 o, u64 n) 183 + static inline long long atomic64_cmpxchg(atomic64_t *v, long long o, 184 + long long n) 184 185 { 185 186 return cmpxchg64(&v->counter, o, n); 186 187 }
+15 -12
arch/tile/include/asm/atomic_32.h
··· 80 80 /* A 64bit atomic type */ 81 81 82 82 typedef struct { 83 - u64 __aligned(8) counter; 83 + long long counter; 84 84 } atomic64_t; 85 85 86 86 #define ATOMIC64_INIT(val) { (val) } ··· 91 91 * 92 92 * Atomically reads the value of @v. 93 93 */ 94 - static inline u64 atomic64_read(const atomic64_t *v) 94 + static inline long long atomic64_read(const atomic64_t *v) 95 95 { 96 96 /* 97 97 * Requires an atomic op to read both 32-bit parts consistently. 98 98 * Casting away const is safe since the atomic support routines 99 99 * do not write to memory if the value has not been modified. 100 100 */ 101 - return _atomic64_xchg_add((u64 *)&v->counter, 0); 101 + return _atomic64_xchg_add((long long *)&v->counter, 0); 102 102 } 103 103 104 104 /** ··· 108 108 * 109 109 * Atomically adds @i to @v. 110 110 */ 111 - static inline void atomic64_add(u64 i, atomic64_t *v) 111 + static inline void atomic64_add(long long i, atomic64_t *v) 112 112 { 113 113 _atomic64_xchg_add(&v->counter, i); 114 114 } ··· 120 120 * 121 121 * Atomically adds @i to @v and returns @i + @v 122 122 */ 123 - static inline u64 atomic64_add_return(u64 i, atomic64_t *v) 123 + static inline long long atomic64_add_return(long long i, atomic64_t *v) 124 124 { 125 125 smp_mb(); /* barrier for proper semantics */ 126 126 return _atomic64_xchg_add(&v->counter, i) + i; ··· 135 135 * Atomically adds @a to @v, so long as @v was not already @u. 136 136 * Returns non-zero if @v was not @u, and zero otherwise. 137 137 */ 138 - static inline u64 atomic64_add_unless(atomic64_t *v, u64 a, u64 u) 138 + static inline long long atomic64_add_unless(atomic64_t *v, long long a, 139 + long long u) 139 140 { 140 141 smp_mb(); /* barrier for proper semantics */ 141 142 return _atomic64_xchg_add_unless(&v->counter, a, u) != u; ··· 152 151 * atomic64_set() can't be just a raw store, since it would be lost if it 153 152 * fell between the load and store of one of the other atomic ops. 
154 153 */ 155 - static inline void atomic64_set(atomic64_t *v, u64 n) 154 + static inline void atomic64_set(atomic64_t *v, long long n) 156 155 { 157 156 _atomic64_xchg(&v->counter, n); 158 157 } ··· 237 236 extern struct __get_user __atomic_or(volatile int *p, int *lock, int n); 238 237 extern struct __get_user __atomic_andn(volatile int *p, int *lock, int n); 239 238 extern struct __get_user __atomic_xor(volatile int *p, int *lock, int n); 240 - extern u64 __atomic64_cmpxchg(volatile u64 *p, int *lock, u64 o, u64 n); 241 - extern u64 __atomic64_xchg(volatile u64 *p, int *lock, u64 n); 242 - extern u64 __atomic64_xchg_add(volatile u64 *p, int *lock, u64 n); 243 - extern u64 __atomic64_xchg_add_unless(volatile u64 *p, 244 - int *lock, u64 o, u64 n); 239 + extern long long __atomic64_cmpxchg(volatile long long *p, int *lock, 240 + long long o, long long n); 241 + extern long long __atomic64_xchg(volatile long long *p, int *lock, long long n); 242 + extern long long __atomic64_xchg_add(volatile long long *p, int *lock, 243 + long long n); 244 + extern long long __atomic64_xchg_add_unless(volatile long long *p, 245 + int *lock, long long o, long long n); 245 246 246 247 /* Return failure from the atomic wrappers. */ 247 248 struct __get_user __atomic_bad_address(int __user *addr);
+17 -11
arch/tile/include/asm/cmpxchg.h
··· 35 35 int _atomic_xchg_add(int *v, int i); 36 36 int _atomic_xchg_add_unless(int *v, int a, int u); 37 37 int _atomic_cmpxchg(int *ptr, int o, int n); 38 - u64 _atomic64_xchg(u64 *v, u64 n); 39 - u64 _atomic64_xchg_add(u64 *v, u64 i); 40 - u64 _atomic64_xchg_add_unless(u64 *v, u64 a, u64 u); 41 - u64 _atomic64_cmpxchg(u64 *v, u64 o, u64 n); 38 + long long _atomic64_xchg(long long *v, long long n); 39 + long long _atomic64_xchg_add(long long *v, long long i); 40 + long long _atomic64_xchg_add_unless(long long *v, long long a, long long u); 41 + long long _atomic64_cmpxchg(long long *v, long long o, long long n); 42 42 43 43 #define xchg(ptr, n) \ 44 44 ({ \ ··· 53 53 if (sizeof(*(ptr)) != 4) \ 54 54 __cmpxchg_called_with_bad_pointer(); \ 55 55 smp_mb(); \ 56 - (typeof(*(ptr)))_atomic_cmpxchg((int *)ptr, (int)o, (int)n); \ 56 + (typeof(*(ptr)))_atomic_cmpxchg((int *)ptr, (int)o, \ 57 + (int)n); \ 57 58 }) 58 59 59 60 #define xchg64(ptr, n) \ ··· 62 61 if (sizeof(*(ptr)) != 8) \ 63 62 __xchg_called_with_bad_pointer(); \ 64 63 smp_mb(); \ 65 - (typeof(*(ptr)))_atomic64_xchg((u64 *)(ptr), (u64)(n)); \ 64 + (typeof(*(ptr)))_atomic64_xchg((long long *)(ptr), \ 65 + (long long)(n)); \ 66 66 }) 67 67 68 68 #define cmpxchg64(ptr, o, n) \ ··· 71 69 if (sizeof(*(ptr)) != 8) \ 72 70 __cmpxchg_called_with_bad_pointer(); \ 73 71 smp_mb(); \ 74 - (typeof(*(ptr)))_atomic64_cmpxchg((u64 *)ptr, (u64)o, (u64)n); \ 72 + (typeof(*(ptr)))_atomic64_cmpxchg((long long *)ptr, \ 73 + (long long)o, (long long)n); \ 75 74 }) 76 75 77 76 #else ··· 84 81 switch (sizeof(*(ptr))) { \ 85 82 case 4: \ 86 83 __x = (typeof(__x))(unsigned long) \ 87 - __insn_exch4((ptr), (u32)(unsigned long)(n)); \ 84 + __insn_exch4((ptr), \ 85 + (u32)(unsigned long)(n)); \ 88 86 break; \ 89 87 case 8: \ 90 - __x = (typeof(__x)) \ 88 + __x = (typeof(__x)) \ 91 89 __insn_exch((ptr), (unsigned long)(n)); \ 92 90 break; \ 93 91 default: \ ··· 107 103 switch (sizeof(*(ptr))) { \ 108 104 case 4: \ 109 105 __x = (typeof(__x))(unsigned long) \
110 - __insn_cmpexch4((ptr), (u32)(unsigned long)(n)); \ 106 + __insn_cmpexch4((ptr), \ 107 + (u32)(unsigned long)(n)); \ 111 108 break; \ 112 109 case 8: \ 113 - __x = (typeof(__x))__insn_cmpexch((ptr), (u64)(n)); \ 110 + __x = (typeof(__x))__insn_cmpexch((ptr), \ 111 + (long long)(n)); \ 114 112 break; \ 115 113 default: \ 116 114 __cmpxchg_called_with_bad_pointer(); \
+31 -3
arch/tile/include/asm/percpu.h
··· 15 15 #ifndef _ASM_TILE_PERCPU_H 16 16 #define _ASM_TILE_PERCPU_H 17 17 18 - register unsigned long __my_cpu_offset __asm__("tp"); 19 - #define __my_cpu_offset __my_cpu_offset 20 - #define set_my_cpu_offset(tp) (__my_cpu_offset = (tp)) 18 + register unsigned long my_cpu_offset_reg asm("tp"); 19 + 20 + #ifdef CONFIG_PREEMPT 21 + /* 22 + * For full preemption, we can't just use the register variable 23 + * directly, since we need barrier() to hazard against it, causing the 24 + * compiler to reload anything computed from a previous "tp" value. 25 + * But we also don't want to use volatile asm, since we'd like the 26 + * compiler to be able to cache the value across multiple percpu reads. 27 + * So we use a fake stack read as a hazard against barrier(). 28 + * The 'U' constraint is like 'm' but disallows postincrement. 29 + */ 30 + static inline unsigned long __my_cpu_offset(void) 31 + { 32 + unsigned long tp; 33 + register unsigned long *sp asm("sp"); 34 + asm("move %0, tp" : "=r" (tp) : "U" (*sp)); 35 + return tp; 36 + } 37 + #define __my_cpu_offset __my_cpu_offset() 38 + #else 39 + /* 40 + * We don't need to hazard against barrier() since "tp" doesn't ever 41 + * change with PREEMPT_NONE, and with PREEMPT_VOLUNTARY it only 42 + * changes at function call points, at which we are already re-reading 43 + * the value of "tp" due to "my_cpu_offset_reg" being a global variable. 44 + */ 45 + #define __my_cpu_offset my_cpu_offset_reg 46 + #endif 47 + 48 + #define set_my_cpu_offset(tp) (my_cpu_offset_reg = (tp)) 21 49 22 50 #include <asm-generic/percpu.h> 23 51
+3 -3
arch/tile/kernel/hardwall.c
··· 66 66 0, 67 67 "udn", 68 68 LIST_HEAD_INIT(hardwall_types[HARDWALL_UDN].list), 69 - __SPIN_LOCK_INITIALIZER(hardwall_types[HARDWALL_UDN].lock), 69 + __SPIN_LOCK_UNLOCKED(hardwall_types[HARDWALL_UDN].lock), 70 70 NULL 71 71 }, 72 72 #ifndef __tilepro__ ··· 77 77 1, /* disabled pending hypervisor support */ 78 78 "idn", 79 79 LIST_HEAD_INIT(hardwall_types[HARDWALL_IDN].list), 80 - __SPIN_LOCK_INITIALIZER(hardwall_types[HARDWALL_IDN].lock), 80 + __SPIN_LOCK_UNLOCKED(hardwall_types[HARDWALL_IDN].lock), 81 81 NULL 82 82 }, 83 83 { /* access to user-space IPI */ ··· 87 87 0, 88 88 "ipi", 89 89 LIST_HEAD_INIT(hardwall_types[HARDWALL_IPI].list), 90 - __SPIN_LOCK_INITIALIZER(hardwall_types[HARDWALL_IPI].lock), 90 + __SPIN_LOCK_UNLOCKED(hardwall_types[HARDWALL_IPI].lock), 91 91 NULL 92 92 }, 93 93 #endif
+3
arch/tile/kernel/intvec_32.S
··· 815 815 } 816 816 bzt r28, 1f 817 817 bnz r29, 1f 818 + /* Disable interrupts explicitly for preemption. */ 819 + IRQ_DISABLE(r20,r21) 820 + TRACE_IRQS_OFF 818 821 jal preempt_schedule_irq 819 822 FEEDBACK_REENTER(interrupt_return) 820 823 1:
+3
arch/tile/kernel/intvec_64.S
··· 841 841 } 842 842 beqzt r28, 1f 843 843 bnez r29, 1f 844 + /* Disable interrupts explicitly for preemption. */ 845 + IRQ_DISABLE(r20,r21) 846 + TRACE_IRQS_OFF 844 847 jal preempt_schedule_irq 845 848 FEEDBACK_REENTER(interrupt_return) 846 849 1:
+5 -7
arch/tile/kernel/stack.c
··· 23 23 #include <linux/mmzone.h> 24 24 #include <linux/dcache.h> 25 25 #include <linux/fs.h> 26 + #include <linux/string.h> 26 27 #include <asm/backtrace.h> 27 28 #include <asm/page.h> 28 29 #include <asm/ucontext.h> ··· 333 332 } 334 333 335 334 if (vma->vm_file) { 336 - char *s; 337 335 p = d_path(&vma->vm_file->f_path, buf, bufsize); 338 336 if (IS_ERR(p)) 339 337 p = "?"; 340 - s = strrchr(p, '/'); 341 - if (s) 342 - p = s+1; 338 + name = kbasename(p); 343 339 } else { 344 - p = "anon"; 340 + name = "anon"; 345 341 } 346 342 347 343 /* Generate a string description of the vma info. */ 348 - namelen = strlen(p); 344 + namelen = strlen(name); 349 345 remaining = (bufsize - 1) - namelen; 350 - memmove(buf, p, namelen); 346 + memmove(buf, name, namelen); 351 347 snprintf(buf + namelen, remaining, "[%lx+%lx] ", 352 348 vma->vm_start, vma->vm_end - vma->vm_start); 353 349 }
+4 -4
arch/tile/lib/atomic_32.c
··· 107 107 EXPORT_SYMBOL(_atomic_xor); 108 108 109 109 110 - u64 _atomic64_xchg(u64 *v, u64 n) 110 + long long _atomic64_xchg(long long *v, long long n) 111 111 { 112 112 return __atomic64_xchg(v, __atomic_setup(v), n); 113 113 } 114 114 EXPORT_SYMBOL(_atomic64_xchg); 115 115 116 - u64 _atomic64_xchg_add(u64 *v, u64 i) 116 + long long _atomic64_xchg_add(long long *v, long long i) 117 117 { 118 118 return __atomic64_xchg_add(v, __atomic_setup(v), i); 119 119 } 120 120 EXPORT_SYMBOL(_atomic64_xchg_add); 121 121 122 - u64 _atomic64_xchg_add_unless(u64 *v, u64 a, u64 u) 122 + long long _atomic64_xchg_add_unless(long long *v, long long a, long long u) 123 123 { 124 124 /* 125 125 * Note: argument order is switched here since it is easier ··· 130 130 } 131 131 EXPORT_SYMBOL(_atomic64_xchg_add_unless); 132 132 133 - u64 _atomic64_cmpxchg(u64 *v, u64 o, u64 n) 133 + long long _atomic64_cmpxchg(long long *v, long long o, long long n) 134 134 { 135 135 return __atomic64_cmpxchg(v, __atomic_setup(v), o, n); 136 136 }
+4 -3
arch/x86/Kconfig
··· 860 860 861 861 config X86_UP_APIC 862 862 bool "Local APIC support on uniprocessors" 863 - depends on X86_32 && !SMP && !X86_32_NON_STANDARD 863 + depends on X86_32 && !SMP && !X86_32_NON_STANDARD && !PCI_MSI 864 864 ---help--- 865 865 A local APIC (Advanced Programmable Interrupt Controller) is an 866 866 integrated interrupt controller in the CPU. If you have a single-CPU ··· 885 885 886 886 config X86_LOCAL_APIC 887 887 def_bool y 888 - depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC 888 + depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC || PCI_MSI 889 889 890 890 config X86_IO_APIC 891 891 def_bool y 892 - depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_IOAPIC 892 + depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_IOAPIC || PCI_MSI 893 893 894 894 config X86_VISWS_APIC 895 895 def_bool y ··· 1033 1033 1034 1034 config MICROCODE 1035 1035 tristate "CPU microcode loading support" 1036 + depends on CPU_SUP_AMD || CPU_SUP_INTEL 1036 1037 select FW_LOADER 1037 1038 ---help--- 1038 1039
+3 -3
arch/x86/include/asm/cpufeature.h
··· 374 374 * Catch too early usage of this before alternatives 375 375 * have run. 376 376 */ 377 - asm goto("1: jmp %l[t_warn]\n" 377 + asm_volatile_goto("1: jmp %l[t_warn]\n" 378 378 "2:\n" 379 379 ".section .altinstructions,\"a\"\n" 380 380 " .long 1b - .\n" ··· 388 388 389 389 #endif 390 390 391 - asm goto("1: jmp %l[t_no]\n" 391 + asm_volatile_goto("1: jmp %l[t_no]\n" 392 392 "2:\n" 393 393 ".section .altinstructions,\"a\"\n" 394 394 " .long 1b - .\n" ··· 453 453 * have. Thus, we force the jump to the widest, 4-byte, signed relative 454 454 * offset even though the last would often fit in less bytes. 455 455 */ 456 - asm goto("1: .byte 0xe9\n .long %l[t_dynamic] - 2f\n" 456 + asm_volatile_goto("1: .byte 0xe9\n .long %l[t_dynamic] - 2f\n" 457 457 "2:\n" 458 458 ".section .altinstructions,\"a\"\n" 459 459 " .long 1b - .\n" /* src offset */
+1 -1
arch/x86/include/asm/jump_label.h
··· 18 18 19 19 static __always_inline bool arch_static_branch(struct static_key *key) 20 20 { 21 - asm goto("1:" 21 + asm_volatile_goto("1:" 22 22 ".byte " __stringify(STATIC_KEY_INIT_NOP) "\n\t" 23 23 ".pushsection __jump_table, \"aw\" \n\t" 24 24 _ASM_ALIGN "\n\t"
+2 -2
arch/x86/include/asm/mutex_64.h
··· 20 20 static inline void __mutex_fastpath_lock(atomic_t *v, 21 21 void (*fail_fn)(atomic_t *)) 22 22 { 23 - asm volatile goto(LOCK_PREFIX " decl %0\n" 23 + asm_volatile_goto(LOCK_PREFIX " decl %0\n" 24 24 " jns %l[exit]\n" 25 25 : : "m" (v->counter) 26 26 : "memory", "cc" ··· 75 75 static inline void __mutex_fastpath_unlock(atomic_t *v, 76 76 void (*fail_fn)(atomic_t *)) 77 77 { 78 - asm volatile goto(LOCK_PREFIX " incl %0\n" 78 + asm_volatile_goto(LOCK_PREFIX " incl %0\n" 79 79 " jg %l[exit]\n" 80 80 : : "m" (v->counter) 81 81 : "memory", "cc"
+1 -1
arch/x86/kernel/apic/x2apic_uv_x.c
··· 113 113 break; 114 114 case UV3_HUB_PART_NUMBER: 115 115 case UV3_HUB_PART_NUMBER_X: 116 - uv_min_hub_revision_id += UV3_HUB_REVISION_BASE - 1; 116 + uv_min_hub_revision_id += UV3_HUB_REVISION_BASE; 117 117 break; 118 118 } 119 119
+3 -8
arch/x86/kernel/cpu/perf_event.c
··· 1888 1888 userpg->cap_user_rdpmc = x86_pmu.attr_rdpmc; 1889 1889 userpg->pmc_width = x86_pmu.cntval_bits; 1890 1890 1891 - if (!boot_cpu_has(X86_FEATURE_CONSTANT_TSC)) 1892 - return; 1893 - 1894 - if (!boot_cpu_has(X86_FEATURE_NONSTOP_TSC)) 1891 + if (!sched_clock_stable) 1895 1892 return; 1896 1893 1897 1894 userpg->cap_user_time = 1; ··· 1896 1899 userpg->time_shift = CYC2NS_SCALE_FACTOR; 1897 1900 userpg->time_offset = this_cpu_read(cyc2ns_offset) - now; 1898 1901 1899 - if (sched_clock_stable && !check_tsc_disabled()) { 1900 - userpg->cap_user_time_zero = 1; 1901 - userpg->time_zero = this_cpu_read(cyc2ns_offset); 1902 - } 1902 + userpg->cap_user_time_zero = 1; 1903 + userpg->time_zero = this_cpu_read(cyc2ns_offset); 1903 1904 } 1904 1905 1905 1906 /*
+15 -4
arch/x86/kernel/kvm.c
··· 775 775 if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) 776 776 return; 777 777 778 - printk(KERN_INFO "KVM setup paravirtual spinlock\n"); 779 - 780 - static_key_slow_inc(&paravirt_ticketlocks_enabled); 781 - 782 778 pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning); 783 779 pv_lock_ops.unlock_kick = kvm_unlock_kick; 784 780 } 781 + 782 + static __init int kvm_spinlock_init_jump(void) 783 + { 784 + if (!kvm_para_available()) 785 + return 0; 786 + if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) 787 + return 0; 788 + 789 + static_key_slow_inc(&paravirt_ticketlocks_enabled); 790 + printk(KERN_INFO "KVM setup paravirtual spinlock\n"); 791 + 792 + return 0; 793 + } 794 + early_initcall(kvm_spinlock_init_jump); 795 + 785 796 #endif /* CONFIG_PARAVIRT_SPINLOCKS */
+8
arch/x86/kernel/reboot.c
··· 326 326 DMI_MATCH(DMI_PRODUCT_NAME, "Latitude E6320"), 327 327 }, 328 328 }, 329 + { /* Handle problems with rebooting on the Latitude E5410. */ 330 + .callback = set_pci_reboot, 331 + .ident = "Dell Latitude E5410", 332 + .matches = { 333 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 334 + DMI_MATCH(DMI_PRODUCT_NAME, "Latitude E5410"), 335 + }, 336 + }, 329 337 { /* Handle problems with rebooting on the Latitude E5420. */ 330 338 .callback = set_pci_reboot, 331 339 .ident = "Dell Latitude E5420",
+12 -12
arch/x86/kvm/vmx.c
··· 3255 3255 3256 3256 static void ept_load_pdptrs(struct kvm_vcpu *vcpu) 3257 3257 { 3258 + struct kvm_mmu *mmu = vcpu->arch.walk_mmu; 3259 + 3258 3260 if (!test_bit(VCPU_EXREG_PDPTR, 3259 3261 (unsigned long *)&vcpu->arch.regs_dirty)) 3260 3262 return; 3261 3263 3262 3264 if (is_paging(vcpu) && is_pae(vcpu) && !is_long_mode(vcpu)) { 3263 - vmcs_write64(GUEST_PDPTR0, vcpu->arch.mmu.pdptrs[0]); 3264 - vmcs_write64(GUEST_PDPTR1, vcpu->arch.mmu.pdptrs[1]); 3265 - vmcs_write64(GUEST_PDPTR2, vcpu->arch.mmu.pdptrs[2]); 3266 - vmcs_write64(GUEST_PDPTR3, vcpu->arch.mmu.pdptrs[3]); 3265 + vmcs_write64(GUEST_PDPTR0, mmu->pdptrs[0]); 3266 + vmcs_write64(GUEST_PDPTR1, mmu->pdptrs[1]); 3267 + vmcs_write64(GUEST_PDPTR2, mmu->pdptrs[2]); 3268 + vmcs_write64(GUEST_PDPTR3, mmu->pdptrs[3]); 3267 3269 } 3268 3270 } 3269 3271 3270 3272 static void ept_save_pdptrs(struct kvm_vcpu *vcpu) 3271 3273 { 3274 + struct kvm_mmu *mmu = vcpu->arch.walk_mmu; 3275 + 3272 3276 if (is_paging(vcpu) && is_pae(vcpu) && !is_long_mode(vcpu)) { 3273 - vcpu->arch.mmu.pdptrs[0] = vmcs_read64(GUEST_PDPTR0); 3274 - vcpu->arch.mmu.pdptrs[1] = vmcs_read64(GUEST_PDPTR1); 3275 - vcpu->arch.mmu.pdptrs[2] = vmcs_read64(GUEST_PDPTR2); 3276 - vcpu->arch.mmu.pdptrs[3] = vmcs_read64(GUEST_PDPTR3); 3277 + mmu->pdptrs[0] = vmcs_read64(GUEST_PDPTR0); 3278 + mmu->pdptrs[1] = vmcs_read64(GUEST_PDPTR1); 3279 + mmu->pdptrs[2] = vmcs_read64(GUEST_PDPTR2); 3280 + mmu->pdptrs[3] = vmcs_read64(GUEST_PDPTR3); 3277 3281 } 3278 3282 3279 3283 __set_bit(VCPU_EXREG_PDPTR, ··· 7781 7777 vmcs_write64(GUEST_PDPTR1, vmcs12->guest_pdptr1); 7782 7778 vmcs_write64(GUEST_PDPTR2, vmcs12->guest_pdptr2); 7783 7779 vmcs_write64(GUEST_PDPTR3, vmcs12->guest_pdptr3); 7784 - __clear_bit(VCPU_EXREG_PDPTR, 7785 - (unsigned long *)&vcpu->arch.regs_avail); 7786 - __clear_bit(VCPU_EXREG_PDPTR, 7787 - (unsigned long *)&vcpu->arch.regs_dirty); 7788 7780 } 7789 7781 7790 7782 kvm_register_write(vcpu, VCPU_REGS_RSP, vmcs12->guest_rsp);
+9
arch/x86/xen/smp.c
··· 278 278 old memory can be recycled */ 279 279 make_lowmem_page_readwrite(xen_initial_gdt); 280 280 281 + #ifdef CONFIG_X86_32 282 + /* 283 + * Xen starts us with XEN_FLAT_RING1_DS, but linux code 284 + * expects __USER_DS 285 + */ 286 + loadsegment(ds, __USER_DS); 287 + loadsegment(es, __USER_DS); 288 + #endif 289 + 281 290 xen_filter_cpu_maps(); 282 291 xen_setup_vcpu_info_placement(); 283 292 }
+6 -1
block/partitions/efi.c
··· 222 222 * the disk size. 223 223 * 224 224 * Hybrid MBRs do not necessarily comply with this. 225 + * 226 + * Consider a bad value here to be a warning to support dd'ing 227 + * an image from a smaller disk to a larger disk. 225 228 */ 226 229 if (ret == GPT_MBR_PROTECTIVE) { 227 230 sz = le32_to_cpu(mbr->partition_record[part].size_in_lba); 228 231 if (sz != (uint32_t) total_sectors - 1 && sz != 0xFFFFFFFF) 229 - ret = 0; 232 + pr_debug("GPT: mbr size in lba (%u) different than whole disk (%u).\n", 233 + sz, min_t(uint32_t, 234 + total_sectors - 1, 0xFFFFFFFF)); 230 235 } 231 236 done: 232 237 return ret;
+4 -4
drivers/acpi/Kconfig
··· 24 24 are configured, ACPI is used. 25 25 26 26 The project home page for the Linux ACPI subsystem is here: 27 - <http://www.lesswatts.org/projects/acpi/> 27 + <https://01.org/linux-acpi> 28 28 29 29 Linux support for ACPI is based on Intel Corporation's ACPI 30 30 Component Architecture (ACPI CA). For more information on the ··· 123 123 default y 124 124 help 125 125 This driver handles events on the power, sleep, and lid buttons. 126 - A daemon reads /proc/acpi/event and perform user-defined actions 127 - such as shutting down the system. This is necessary for 128 - software-controlled poweroff. 126 + A daemon reads events from input devices or via netlink and 127 + performs user-defined actions such as shutting down the system. 128 + This is necessary for software-controlled poweroff. 129 129 130 130 To compile this driver as a module, choose M here: 131 131 the module will be called button.
-56
drivers/acpi/device_pm.c
··· 1025 1025 } 1026 1026 } 1027 1027 EXPORT_SYMBOL_GPL(acpi_dev_pm_detach); 1028 - 1029 - /** 1030 - * acpi_dev_pm_add_dependent - Add physical device depending for PM. 1031 - * @handle: Handle of ACPI device node. 1032 - * @depdev: Device depending on that node for PM. 1033 - */ 1034 - void acpi_dev_pm_add_dependent(acpi_handle handle, struct device *depdev) 1035 - { 1036 - struct acpi_device_physical_node *dep; 1037 - struct acpi_device *adev; 1038 - 1039 - if (!depdev || acpi_bus_get_device(handle, &adev)) 1040 - return; 1041 - 1042 - mutex_lock(&adev->physical_node_lock); 1043 - 1044 - list_for_each_entry(dep, &adev->power_dependent, node) 1045 - if (dep->dev == depdev) 1046 - goto out; 1047 - 1048 - dep = kzalloc(sizeof(*dep), GFP_KERNEL); 1049 - if (dep) { 1050 - dep->dev = depdev; 1051 - list_add_tail(&dep->node, &adev->power_dependent); 1052 - } 1053 - 1054 - out: 1055 - mutex_unlock(&adev->physical_node_lock); 1056 - } 1057 - EXPORT_SYMBOL_GPL(acpi_dev_pm_add_dependent); 1058 - 1059 - /** 1060 - * acpi_dev_pm_remove_dependent - Remove physical device depending for PM. 1061 - * @handle: Handle of ACPI device node. 1062 - * @depdev: Device depending on that node for PM. 1063 - */ 1064 - void acpi_dev_pm_remove_dependent(acpi_handle handle, struct device *depdev) 1065 - { 1066 - struct acpi_device_physical_node *dep; 1067 - struct acpi_device *adev; 1068 - 1069 - if (!depdev || acpi_bus_get_device(handle, &adev)) 1070 - return; 1071 - 1072 - mutex_lock(&adev->physical_node_lock); 1073 - 1074 - list_for_each_entry(dep, &adev->power_dependent, node) 1075 - if (dep->dev == depdev) { 1076 - list_del(&dep->node); 1077 - kfree(dep); 1078 - break; 1079 - } 1080 - 1081 - mutex_unlock(&adev->physical_node_lock); 1082 - } 1083 - EXPORT_SYMBOL_GPL(acpi_dev_pm_remove_dependent); 1084 1028 #endif /* CONFIG_PM */
+4 -100
drivers/acpi/power.c
··· 59 59 #define ACPI_POWER_RESOURCE_STATE_ON 0x01 60 60 #define ACPI_POWER_RESOURCE_STATE_UNKNOWN 0xFF 61 61 62 - struct acpi_power_dependent_device { 63 - struct list_head node; 64 - struct acpi_device *adev; 65 - struct work_struct work; 66 - }; 67 - 68 62 struct acpi_power_resource { 69 63 struct acpi_device device; 70 64 struct list_head list_node; 71 - struct list_head dependent; 72 65 char *name; 73 66 u32 system_level; 74 67 u32 order; ··· 226 233 return 0; 227 234 } 228 235 229 - static void acpi_power_resume_dependent(struct work_struct *work) 230 - { 231 - struct acpi_power_dependent_device *dep; 232 - struct acpi_device_physical_node *pn; 233 - struct acpi_device *adev; 234 - int state; 235 - 236 - dep = container_of(work, struct acpi_power_dependent_device, work); 237 - adev = dep->adev; 238 - if (acpi_power_get_inferred_state(adev, &state)) 239 - return; 240 - 241 - if (state > ACPI_STATE_D0) 242 - return; 243 - 244 - mutex_lock(&adev->physical_node_lock); 245 - 246 - list_for_each_entry(pn, &adev->physical_node_list, node) 247 - pm_request_resume(pn->dev); 248 - 249 - list_for_each_entry(pn, &adev->power_dependent, node) 250 - pm_request_resume(pn->dev); 251 - 252 - mutex_unlock(&adev->physical_node_lock); 253 - } 254 236 255 237 static int __acpi_power_on(struct acpi_power_resource *resource) 256 238 { 257 239 acpi_status status = AE_OK; ··· 250 283 resource->name)); 251 284 } else { 252 285 result = __acpi_power_on(resource); 253 - if (result) { 286 + if (result) 254 287 resource->ref_count--; 255 - } else { 256 - struct acpi_power_dependent_device *dep; 257 - 258 - list_for_each_entry(dep, &resource->dependent, node) 259 - schedule_work(&dep->work); 260 - } 261 288 } 262 289 return result; 263 290 } ··· 351 390 return result; 352 391 } 353 392 354 - static void acpi_power_add_dependent(struct acpi_power_resource *resource, 355 - struct acpi_device *adev) 356 - { 357 - struct acpi_power_dependent_device *dep; 358 - 359 - mutex_lock(&resource->resource_lock); 360 - 361 - list_for_each_entry(dep, &resource->dependent, node) 362 - if (dep->adev == adev) 363 - goto out; 364 - 365 - dep = kzalloc(sizeof(*dep), GFP_KERNEL); 366 - if (!dep) 367 - goto out; 368 - 369 - dep->adev = adev; 370 - INIT_WORK(&dep->work, acpi_power_resume_dependent); 371 - list_add_tail(&dep->node, &resource->dependent); 372 - 373 - out: 374 - mutex_unlock(&resource->resource_lock); 375 - } 376 - 377 - static void acpi_power_remove_dependent(struct acpi_power_resource *resource, 378 - struct acpi_device *adev) 379 - { 380 - struct acpi_power_dependent_device *dep; 381 - struct work_struct *work = NULL; 382 - 383 - mutex_lock(&resource->resource_lock); 384 - 385 - list_for_each_entry(dep, &resource->dependent, node) 386 - if (dep->adev == adev) { 387 - list_del(&dep->node); 388 - work = &dep->work; 389 - break; 390 - } 391 - 392 - mutex_unlock(&resource->resource_lock); 393 - 394 - if (work) { 395 - cancel_work_sync(work); 396 - kfree(dep); 397 - } 398 - } 399 - 400 393 static struct attribute *attrs[] = { 401 394 NULL, 402 395 }; ··· 439 524 440 525 void acpi_power_add_remove_device(struct acpi_device *adev, bool add) 441 526 { 442 - struct acpi_device_power_state *ps; 443 - struct acpi_power_resource_entry *entry; 444 527 int state; 445 528 446 529 if (adev->wakeup.flags.valid) ··· 447 534 448 535 if (!adev->power.flags.power_resources) 449 536 return; 450 - 451 - ps = &adev->power.states[ACPI_STATE_D0]; 452 - list_for_each_entry(entry, &ps->resources, node) { 453 - struct acpi_power_resource *resource = entry->resource; 454 - 455 - if (add) 456 - acpi_power_add_dependent(resource, adev); 457 - else 458 - acpi_power_remove_dependent(resource, adev); 459 - } 460 537 461 538 for (state = ACPI_STATE_D0; state <= ACPI_STATE_D3_HOT; state++) 462 539 acpi_power_expose_hide(adev, ··· 785 882 acpi_init_device_object(device, handle, ACPI_BUS_TYPE_POWER, 786 883 ACPI_STA_DEFAULT); 787 884 mutex_init(&resource->resource_lock); 788 - INIT_LIST_HEAD(&resource->dependent); 789 885 INIT_LIST_HEAD(&resource->list_node); 790 886 resource->name = device->pnp.bus_id; 791 887 strcpy(acpi_device_name(device), ACPI_POWER_DEVICE_NAME); ··· 838 936 mutex_lock(&resource->resource_lock); 839 937 840 938 result = acpi_power_get_state(resource->device.handle, &state); 841 - if (result) 939 + if (result) { 940 + mutex_unlock(&resource->resource_lock); 842 941 continue; 942 + } 843 943 844 944 if (state == ACPI_POWER_RESOURCE_STATE_OFF 845 945 && resource->ref_count) {
-1
drivers/acpi/scan.c
··· 999 999 INIT_LIST_HEAD(&device->wakeup_list); 1000 1000 INIT_LIST_HEAD(&device->physical_node_list); 1001 1001 mutex_init(&device->physical_node_lock); 1002 - INIT_LIST_HEAD(&device->power_dependent); 1003 1002 1004 1003 new_bus_id = kzalloc(sizeof(struct acpi_device_bus_id), GFP_KERNEL); 1005 1004 if (!new_bus_id) {
-14
drivers/ata/libata-acpi.c
··· 1035 1035 { 1036 1036 ata_acpi_clear_gtf(dev); 1037 1037 } 1038 - 1039 - void ata_scsi_acpi_bind(struct ata_device *dev) 1040 - { 1041 - acpi_handle handle = ata_dev_acpi_handle(dev); 1042 - if (handle) 1043 - acpi_dev_pm_add_dependent(handle, &dev->sdev->sdev_gendev); 1044 - } 1045 - 1046 - void ata_scsi_acpi_unbind(struct ata_device *dev) 1047 - { 1048 - acpi_handle handle = ata_dev_acpi_handle(dev); 1049 - if (handle) 1050 - acpi_dev_pm_remove_dependent(handle, &dev->sdev->sdev_gendev); 1051 - }
-3
drivers/ata/libata-scsi.c
··· 3679 3679 if (!IS_ERR(sdev)) { 3680 3680 dev->sdev = sdev; 3681 3681 scsi_device_put(sdev); 3682 - ata_scsi_acpi_bind(dev); 3683 3682 } else { 3684 3683 dev->sdev = NULL; 3685 3684 } ··· 3765 3766 struct ata_port *ap = dev->link->ap; 3766 3767 struct scsi_device *sdev; 3767 3768 unsigned long flags; 3768 - 3769 - ata_scsi_acpi_unbind(dev); 3770 3769 3771 3770 /* Alas, we need to grab scan_mutex to ensure SCSI device 3772 3771 * state doesn't change underneath us and thus
-4
drivers/ata/libata.h
··· 121 121 extern void ata_acpi_bind_port(struct ata_port *ap); 122 122 extern void ata_acpi_bind_dev(struct ata_device *dev); 123 123 extern acpi_handle ata_dev_acpi_handle(struct ata_device *dev); 124 - extern void ata_scsi_acpi_bind(struct ata_device *dev); 125 - extern void ata_scsi_acpi_unbind(struct ata_device *dev); 126 124 #else 127 125 static inline void ata_acpi_dissociate(struct ata_host *host) { } 128 126 static inline int ata_acpi_on_suspend(struct ata_port *ap) { return 0; } ··· 131 133 pm_message_t state) { } 132 134 static inline void ata_acpi_bind_port(struct ata_port *ap) {} 133 135 static inline void ata_acpi_bind_dev(struct ata_device *dev) {} 134 - static inline void ata_scsi_acpi_bind(struct ata_device *dev) {} 135 - static inline void ata_scsi_acpi_unbind(struct ata_device *dev) {} 136 136 #endif 137 137 138 138 /* libata-scsi.c */
+5 -2
drivers/base/memory.c
··· 333 333 online_type = ONLINE_KEEP; 334 334 else if (!strncmp(buf, "offline", min_t(int, count, 7))) 335 335 online_type = -1; 336 - else 337 - return -EINVAL; 336 + else { 337 + ret = -EINVAL; 338 + goto err; 339 + } 338 340 339 341 switch (online_type) { 340 342 case ONLINE_KERNEL: ··· 359 357 ret = -EINVAL; /* should never happen */ 360 358 } 361 359 360 + err: 362 361 unlock_device_hotplug(); 363 362 364 363 if (ret)
+5 -6
drivers/char/random.c
··· 640 640 */ 641 641 void add_device_randomness(const void *buf, unsigned int size) 642 642 { 643 - unsigned long time = get_cycles() ^ jiffies; 643 + unsigned long time = random_get_entropy() ^ jiffies; 644 644 645 645 mix_pool_bytes(&input_pool, buf, size, NULL); 646 646 mix_pool_bytes(&input_pool, &time, sizeof(time), NULL); ··· 677 677 goto out; 678 678 679 679 sample.jiffies = jiffies; 680 - sample.cycles = get_cycles(); 680 + sample.cycles = random_get_entropy(); 681 681 sample.num = num; 682 682 mix_pool_bytes(&input_pool, &sample, sizeof(sample), NULL); 683 683 ··· 744 744 struct fast_pool *fast_pool = &__get_cpu_var(irq_randomness); 745 745 struct pt_regs *regs = get_irq_regs(); 746 746 unsigned long now = jiffies; 747 - __u32 input[4], cycles = get_cycles(); 747 + __u32 input[4], cycles = random_get_entropy(); 748 748 749 749 input[0] = cycles ^ jiffies; 750 750 input[1] = irq; ··· 1459 1459 1460 1460 static u32 random_int_secret[MD5_MESSAGE_BYTES / 4] ____cacheline_aligned; 1461 1461 1462 - static int __init random_int_secret_init(void) 1462 + int random_int_secret_init(void) 1463 1463 { 1464 1464 get_random_bytes(random_int_secret, sizeof(random_int_secret)); 1465 1465 return 0; 1466 1466 } 1467 - late_initcall(random_int_secret_init); 1468 1467 1469 1468 /* 1470 1469 * Get a random word for internal kernel use only. Similar to urandom but ··· 1482 1483 1483 1484 hash = get_cpu_var(get_random_int_hash); 1484 1485 1485 - hash[0] += current->pid + jiffies + get_cycles(); 1486 + hash[0] += current->pid + jiffies + random_get_entropy(); 1486 1487 md5_transform(hash, random_int_secret); 1487 1488 ret = hash[0]; 1488 1489 put_cpu_var(get_random_int_hash);
+1
drivers/char/tpm/xen-tpmfront.c
··· 10 10 #include <linux/errno.h> 11 11 #include <linux/err.h> 12 12 #include <linux/interrupt.h> 13 + #include <xen/xen.h> 13 14 #include <xen/events.h> 14 15 #include <xen/interface/io/tpmif.h> 15 16 #include <xen/grant_table.h>
+7 -7
drivers/cpufreq/intel_pstate.c
··· 383 383 static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate) 384 384 { 385 385 int max_perf, min_perf; 386 + u64 val; 386 387 387 388 intel_pstate_get_min_max(cpu, &min_perf, &max_perf); 388 389 ··· 395 394 trace_cpu_frequency(pstate * 100000, cpu->cpu); 396 395 397 396 cpu->pstate.current_pstate = pstate; 397 + val = pstate << 8; 398 398 if (limits.no_turbo) 399 - wrmsrl(MSR_IA32_PERF_CTL, BIT(32) | (pstate << 8)); 400 - else 401 - wrmsrl(MSR_IA32_PERF_CTL, pstate << 8); 399 + val |= (u64)1 << 32; 402 400 401 + wrmsrl(MSR_IA32_PERF_CTL, val); 403 402 } 404 403 405 404 static inline void intel_pstate_pstate_increase(struct cpudata *cpu, int steps) ··· 638 637 639 638 static int intel_pstate_cpu_init(struct cpufreq_policy *policy) 640 639 { 641 - int rc, min_pstate, max_pstate; 642 640 struct cpudata *cpu; 641 + int rc; 643 642 644 643 rc = intel_pstate_init_cpu(policy->cpu); 645 644 if (rc) ··· 653 652 else 654 653 policy->policy = CPUFREQ_POLICY_POWERSAVE; 655 654 656 - intel_pstate_get_min_max(cpu, &min_pstate, &max_pstate); 657 - policy->min = min_pstate * 100000; 658 - policy->max = max_pstate * 100000; 655 + policy->min = cpu->pstate.min_pstate * 100000; 656 + policy->max = cpu->pstate.turbo_pstate * 100000; 659 657 660 658 /* cpuinfo and default policy values */ 661 659 policy->cpuinfo.min_freq = cpu->pstate.min_pstate * 100000;
+1 -1
drivers/cpufreq/s3c64xx-cpufreq.c
··· 166 166 if (freq->frequency == CPUFREQ_ENTRY_INVALID) 167 167 continue; 168 168 169 - dvfs = &s3c64xx_dvfs_table[freq->index]; 169 + dvfs = &s3c64xx_dvfs_table[freq->driver_data]; 170 170 found = 0; 171 171 172 172 for (i = 0; i < count; i++) {
+1
drivers/dma/edma.c
··· 306 306 EDMA_SLOT_ANY); 307 307 if (echan->slot[i] < 0) { 308 308 dev_err(dev, "Failed to allocate slot\n"); 309 + kfree(edesc); 309 310 return NULL; 310 311 } 311 312 }
+5 -4
drivers/dma/sh/rcar-hpbdma.c
··· 93 93 void __iomem *base; 94 94 const struct hpb_dmae_slave_config *cfg; 95 95 char dev_id[16]; /* unique name per DMAC of channel */ 96 + dma_addr_t slave_addr; 96 97 }; 97 98 98 99 struct hpb_dmae_device { ··· 433 432 hpb_chan->xfer_mode = XFER_DOUBLE; 434 433 } else { 435 434 dev_err(hpb_chan->shdma_chan.dev, "DCR setting error"); 436 - shdma_free_irq(&hpb_chan->shdma_chan); 437 435 return -EINVAL; 438 436 } 439 437 ··· 446 446 return 0; 447 447 } 448 448 449 - static int hpb_dmae_set_slave(struct shdma_chan *schan, int slave_id, bool try) 449 + static int hpb_dmae_set_slave(struct shdma_chan *schan, int slave_id, 450 + dma_addr_t slave_addr, bool try) 450 451 { 451 452 struct hpb_dmae_chan *chan = to_chan(schan); 452 453 const struct hpb_dmae_slave_config *sc = ··· 458 457 if (try) 459 458 return 0; 460 459 chan->cfg = sc; 460 + chan->slave_addr = slave_addr ? : sc->addr; 461 461 return hpb_dmae_alloc_chan_resources(chan, sc); 462 462 } 463 463 ··· 470 468 { 471 469 struct hpb_dmae_chan *chan = to_chan(schan); 472 470 473 - return chan->cfg->addr; 471 + return chan->slave_addr; 474 472 } 475 473 476 474 static struct shdma_desc *hpb_dmae_embedded_desc(void *buf, int i) ··· 616 614 shdma_for_each_chan(schan, &hpbdev->shdma_dev, i) { 617 615 BUG_ON(!schan); 618 616 619 - shdma_free_irq(schan); 620 617 shdma_chan_remove(schan); 621 618 } 622 619 dma_dev->chancnt = 0;
+3 -2
drivers/gpio/gpio-lynxpoint.c
··· 248 248 struct lp_gpio *lg = irq_data_get_irq_handler_data(data); 249 249 struct irq_chip *chip = irq_data_get_irq_chip(data); 250 250 u32 base, pin, mask; 251 - unsigned long reg, pending; 251 + unsigned long reg, ena, pending; 252 252 253 253 /* check from GPIO controller which pin triggered the interrupt */ 254 254 for (base = 0; base < lg->chip.ngpio; base += 32) { 255 255 reg = lp_gpio_reg(&lg->chip, base, LP_INT_STAT); 256 + ena = lp_gpio_reg(&lg->chip, base, LP_INT_ENABLE); 256 257 257 - while ((pending = inl(reg))) { 258 + while ((pending = (inl(reg) & inl(ena)))) { 258 259 unsigned irq; 259 260 260 261 pin = __ffs(pending);
+4 -2
drivers/gpio/gpiolib.c
··· 183 183 */ 184 184 static int desc_to_gpio(const struct gpio_desc *desc) 185 185 { 186 - return desc->chip->base + gpio_chip_hwgpio(desc); 186 + return desc - &gpio_desc[0]; 187 187 } 188 188 189 189 ··· 1452 1452 int status = -EPROBE_DEFER; 1453 1453 unsigned long flags; 1454 1454 1455 - if (!desc || !desc->chip) { 1455 + if (!desc) { 1456 1456 pr_warn("%s: invalid GPIO\n", __func__); 1457 1457 return -EINVAL; 1458 1458 } ··· 1460 1460 spin_lock_irqsave(&gpio_lock, flags); 1461 1461 1462 1462 chip = desc->chip; 1463 + if (chip == NULL) 1464 + goto done; 1463 1465 1464 1466 if (!try_module_get(chip->owner)) 1465 1467 goto done;
+2
drivers/gpu/drm/drm_edid.c
··· 2925 2925 /* Speaker Allocation Data Block */ 2926 2926 if (dbl == 3) { 2927 2927 *sadb = kmalloc(dbl, GFP_KERNEL); 2928 + if (!*sadb) 2929 + return -ENOMEM; 2928 2930 memcpy(*sadb, &db[1], dbl); 2929 2931 count = dbl; 2930 2932 break;
-8
drivers/gpu/drm/drm_fb_helper.c
··· 416 416 return; 417 417 418 418 /* 419 - * fbdev->blank can be called from irq context in case of a panic. 420 - * Since we already have our own special panic handler which will 421 - * restore the fbdev console mode completely, just bail out early. 422 - */ 423 - if (oops_in_progress) 424 - return; 425 - 426 - /* 427 419 * For each CRTC in this fb, turn the connectors on/off. 428 420 */ 429 421 drm_modeset_lock_all(dev);
+1
drivers/gpu/drm/gma500/gtt.c
··· 204 204 if (IS_ERR(pages)) 205 205 return PTR_ERR(pages); 206 206 207 + gt->npage = gt->gem.size / PAGE_SIZE; 207 208 gt->pages = pages; 208 209 209 210 return 0;
+3 -12
drivers/gpu/drm/i915/i915_dma.c
··· 1290 1290 * then we do not take part in VGA arbitration and the 1291 1291 * vga_client_register() fails with -ENODEV. 1292 1292 */ 1293 - if (!HAS_PCH_SPLIT(dev)) { 1294 - ret = vga_client_register(dev->pdev, dev, NULL, 1295 - i915_vga_set_decode); 1296 - if (ret && ret != -ENODEV) 1297 - goto out; 1298 - } 1293 + ret = vga_client_register(dev->pdev, dev, NULL, i915_vga_set_decode); 1294 + if (ret && ret != -ENODEV) 1295 + goto out; 1299 1296 1300 1297 intel_register_dsm_handler(); 1301 1298 ··· 1347 1350 * tiny window where we will loose hotplug notifactions. 1348 1351 */ 1349 1352 intel_fbdev_initial_config(dev); 1350 - 1351 - /* 1352 - * Must do this after fbcon init so that 1353 - * vgacon_save_screen() works during the handover. 1354 - */ 1355 - i915_disable_vga_mem(dev); 1356 1353 1357 1354 /* Only enable hotplug handling once the fbdev is fully set up. */ 1358 1355 dev_priv->enable_hotplug_processing = true;
+6
drivers/gpu/drm/i915/i915_reg.h
··· 3881 3881 #define GEN7_SQ_CHICKEN_MBCUNIT_CONFIG 0x9030 3882 3882 #define GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB (1<<11) 3883 3883 3884 + #define HSW_SCRATCH1 0xb038 3885 + #define HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE (1<<27) 3886 + 3884 3887 #define HSW_FUSE_STRAP 0x42014 3885 3888 #define HSW_CDCLK_LIMIT (1 << 24) 3886 3889 ··· 4730 4727 #define GEN7_ROW_CHICKEN2 0xe4f4 4731 4728 #define GEN7_ROW_CHICKEN2_GT2 0xf4f4 4732 4729 #define DOP_CLOCK_GATING_DISABLE (1<<0) 4730 + 4731 + #define HSW_ROW_CHICKEN3 0xe49c 4732 + #define HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE (1 << 6) 4733 4733 4734 4734 #define G4X_AUD_VID_DID (dev_priv->info->display_mmio_offset + 0x62020) 4735 4735 #define INTEL_AUDIO_DEVCL 0x808629FB
+2 -36
drivers/gpu/drm/i915/intel_display.c
··· 3941 3941 * consider. */ 3942 3942 void intel_connector_dpms(struct drm_connector *connector, int mode) 3943 3943 { 3944 - struct intel_encoder *encoder = intel_attached_encoder(connector); 3945 - 3946 3944 /* All the simple cases only support two dpms states. */ 3947 3945 if (mode != DRM_MODE_DPMS_ON) 3948 3946 mode = DRM_MODE_DPMS_OFF; ··· 3951 3953 connector->dpms = mode; 3952 3954 3953 3955 /* Only need to change hw state when actually enabled */ 3954 - if (encoder->base.crtc) 3955 - intel_encoder_dpms(encoder, mode); 3956 - else 3957 - WARN_ON(encoder->connectors_active != false); 3956 + if (connector->encoder) 3957 + intel_encoder_dpms(to_intel_encoder(connector->encoder), mode); 3958 3958 3959 3959 intel_modeset_check_state(connector->dev); 3960 3960 } ··· 10045 10049 POSTING_READ(vga_reg); 10046 10050 } 10047 10051 10048 - static void i915_enable_vga_mem(struct drm_device *dev) 10049 - { 10050 - /* Enable VGA memory on Intel HD */ 10051 - if (HAS_PCH_SPLIT(dev)) { 10052 - vga_get_uninterruptible(dev->pdev, VGA_RSRC_LEGACY_IO); 10053 - outb(inb(VGA_MSR_READ) | VGA_MSR_MEM_EN, VGA_MSR_WRITE); 10054 - vga_set_legacy_decoding(dev->pdev, VGA_RSRC_LEGACY_IO | 10055 - VGA_RSRC_LEGACY_MEM | 10056 - VGA_RSRC_NORMAL_IO | 10057 - VGA_RSRC_NORMAL_MEM); 10058 - vga_put(dev->pdev, VGA_RSRC_LEGACY_IO); 10059 - } 10060 - } 10061 - 10062 - void i915_disable_vga_mem(struct drm_device *dev) 10063 - { 10064 - /* Disable VGA memory on Intel HD */ 10065 - if (HAS_PCH_SPLIT(dev)) { 10066 - vga_get_uninterruptible(dev->pdev, VGA_RSRC_LEGACY_IO); 10067 - outb(inb(VGA_MSR_READ) & ~VGA_MSR_MEM_EN, VGA_MSR_WRITE); 10068 - vga_set_legacy_decoding(dev->pdev, VGA_RSRC_LEGACY_IO | 10069 - VGA_RSRC_NORMAL_IO | 10070 - VGA_RSRC_NORMAL_MEM); 10071 - vga_put(dev->pdev, VGA_RSRC_LEGACY_IO); 10072 - } 10073 - } 10074 - 10075 10052 void intel_modeset_init_hw(struct drm_device *dev) 10076 10053 { 10077 10054 intel_init_power_well(dev); ··· 10323 10354 if (I915_READ(vga_reg) != VGA_DISP_DISABLE) { 10324 10355 DRM_DEBUG_KMS("Something enabled VGA plane, disabling it\n"); 10325 10356 i915_disable_vga(dev); 10326 - i915_disable_vga_mem(dev); 10327 10357 } 10328 10358 } ··· 10535 10567 } 10536 10568 10537 10569 intel_disable_fbc(dev); 10538 - 10539 - i915_enable_vga_mem(dev); 10540 10570 10541 10571 intel_disable_gt_powersave(dev); 10542 10572
+1 -1
drivers/gpu/drm/i915/intel_dp.c
··· 1467 1467 1468 1468 /* Avoid continuous PSR exit by masking memup and hpd */ 1469 1469 I915_WRITE(EDP_PSR_DEBUG_CTL, EDP_PSR_DEBUG_MASK_MEMUP | 1470 - EDP_PSR_DEBUG_MASK_HPD); 1470 + EDP_PSR_DEBUG_MASK_HPD | EDP_PSR_DEBUG_MASK_LPSP); 1471 1471 1472 1472 intel_dp->psr_setup_done = true; 1473 1473 }
-1
drivers/gpu/drm/i915/intel_drv.h
··· 793 793 extern void hsw_pc8_restore_interrupts(struct drm_device *dev); 794 794 extern void intel_aux_display_runtime_get(struct drm_i915_private *dev_priv); 795 795 extern void intel_aux_display_runtime_put(struct drm_i915_private *dev_priv); 796 - extern void i915_disable_vga_mem(struct drm_device *dev); 797 796 798 797 #endif /* __INTEL_DRV_H__ */
+7 -2
drivers/gpu/drm/i915/intel_pm.c
··· 3864 3864 dev_priv->rps.rpe_delay), 3865 3865 dev_priv->rps.rpe_delay); 3866 3866 3867 - INIT_DELAYED_WORK(&dev_priv->rps.vlv_work, vlv_rps_timer_work); 3868 - 3869 3867 valleyview_set_rps(dev_priv->dev, dev_priv->rps.rpe_delay); 3870 3868 3871 3869 gen6_enable_rps_interrupts(dev); ··· 4953 4955 I915_WRITE(GEN7_L3_CHICKEN_MODE_REGISTER, 4954 4956 GEN7_WA_L3_CHICKEN_MODE); 4955 4957 4958 + /* L3 caching of data atomics doesn't work -- disable it. */ 4959 + I915_WRITE(HSW_SCRATCH1, HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE); 4960 + I915_WRITE(HSW_ROW_CHICKEN3, 4961 + _MASKED_BIT_ENABLE(HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE)); 4962 + 4956 4963 /* This is required by WaCatErrorRejectionIssue:hsw */ 4957 4964 I915_WRITE(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG, 4958 4965 I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) | ··· 5684 5681 5685 5682 INIT_DELAYED_WORK(&dev_priv->rps.delayed_resume_work, 5686 5683 intel_gen6_powersave_work); 5684 + 5685 + INIT_DELAYED_WORK(&dev_priv->rps.vlv_work, vlv_rps_timer_work); 5687 5686 } 5688 5687
+1 -1
drivers/gpu/drm/nouveau/core/subdev/mc/base.c
··· 113 113 pmc->use_msi = false; 114 114 break; 115 115 default: 116 - pmc->use_msi = nouveau_boolopt(device->cfgopt, "NvMSI", true); 116 + pmc->use_msi = nouveau_boolopt(device->cfgopt, "NvMSI", false); 117 117 if (pmc->use_msi) { 118 118 pmc->use_msi = pci_enable_msi(device->pdev) == 0; 119 119 if (pmc->use_msi) {
+3 -3
drivers/gpu/drm/radeon/btc_dpm.c
··· 1930 1930 } 1931 1931 j++; 1932 1932 1933 - if (j > SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE) 1933 + if (j >= SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE) 1934 1934 return -EINVAL; 1935 1935 1936 1936 tmp = RREG32(MC_PMG_CMD_MRS); ··· 1945 1945 } 1946 1946 j++; 1947 1947 1948 - if (j > SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE) 1948 + if (j >= SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE) 1949 1949 return -EINVAL; 1950 1950 break; 1951 1951 case MC_SEQ_RESERVE_M >> 2: ··· 1959 1959 } 1960 1960 j++; 1961 1961 1962 - if (j > SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE) 1962 + if (j >= SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE) 1963 1963 return -EINVAL; 1964 1964 break; 1965 1965 default:
+6
drivers/gpu/drm/radeon/cik.c
··· 77 77 static void cik_program_aspm(struct radeon_device *rdev); 78 78 static void cik_init_pg(struct radeon_device *rdev); 79 79 static void cik_init_cg(struct radeon_device *rdev); 80 + static void cik_fini_pg(struct radeon_device *rdev); 81 + static void cik_fini_cg(struct radeon_device *rdev); 80 82 static void cik_enable_gui_idle_interrupt(struct radeon_device *rdev, 81 83 bool enable); 82 84 ··· 4186 4184 RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR)); 4187 4185 dev_info(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", 4188 4186 RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS)); 4187 + 4188 + /* disable CG/PG */ 4189 + cik_fini_pg(rdev); 4190 + cik_fini_cg(rdev); 4189 4191 4190 4192 /* stop the rlc */ 4191 4193 cik_rlc_stop(rdev);
+1 -1
drivers/gpu/drm/radeon/evergreen.c
··· 3131 3131 rdev->config.evergreen.sx_max_export_size = 256; 3132 3132 rdev->config.evergreen.sx_max_export_pos_size = 64; 3133 3133 rdev->config.evergreen.sx_max_export_smx_size = 192; 3134 - rdev->config.evergreen.max_hw_contexts = 8; 3134 + rdev->config.evergreen.max_hw_contexts = 4; 3135 3135 rdev->config.evergreen.sq_num_cf_insts = 2; 3136 3136 3137 3137 rdev->config.evergreen.sc_prim_fifo_size = 0x40;
+1 -2
drivers/gpu/drm/radeon/evergreen_hdmi.c
··· 288 288 /* fglrx clears sth in AFMT_AUDIO_PACKET_CONTROL2 here */ 289 289 290 290 WREG32(HDMI_ACR_PACKET_CONTROL + offset, 291 - HDMI_ACR_AUTO_SEND | /* allow hw to sent ACR packets when required */ 292 - HDMI_ACR_SOURCE); /* select SW CTS value */ 291 + HDMI_ACR_AUTO_SEND); /* allow hw to sent ACR packets when required */ 293 292 294 293 evergreen_hdmi_update_ACR(encoder, mode->clock); 295 294
+2 -2
drivers/gpu/drm/radeon/evergreend.h
··· 1501 1501 * 6. COMMAND [29:22] | BYTE_COUNT [20:0] 1502 1502 */ 1503 1503 # define PACKET3_CP_DMA_DST_SEL(x) ((x) << 20) 1504 - /* 0 - SRC_ADDR 1504 + /* 0 - DST_ADDR 1505 1505 * 1 - GDS 1506 1506 */ 1507 1507 # define PACKET3_CP_DMA_ENGINE(x) ((x) << 27) ··· 1516 1516 # define PACKET3_CP_DMA_CP_SYNC (1 << 31) 1517 1517 /* COMMAND */ 1518 1518 # define PACKET3_CP_DMA_DIS_WC (1 << 21) 1519 - # define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 23) 1519 + # define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 22) 1520 1520 /* 0 - none 1521 1521 * 1 - 8 in 16 1522 1522 * 2 - 8 in 32
+14 -7
drivers/gpu/drm/radeon/r600_hdmi.c
··· 57 57 static const struct radeon_hdmi_acr r600_hdmi_predefined_acr[] = { 58 58 /* 32kHz 44.1kHz 48kHz */ 59 59 /* Clock N CTS N CTS N CTS */ 60 - { 25174, 4576, 28125, 7007, 31250, 6864, 28125 }, /* 25,20/1.001 MHz */ 60 + { 25175, 4576, 28125, 7007, 31250, 6864, 28125 }, /* 25,20/1.001 MHz */ 61 61 { 25200, 4096, 25200, 6272, 28000, 6144, 25200 }, /* 25.20 MHz */ 62 62 { 27000, 4096, 27000, 6272, 30000, 6144, 27000 }, /* 27.00 MHz */ 63 63 { 27027, 4096, 27027, 6272, 30030, 6144, 27027 }, /* 27.00*1.001 MHz */ 64 64 { 54000, 4096, 54000, 6272, 60000, 6144, 54000 }, /* 54.00 MHz */ 65 65 { 54054, 4096, 54054, 6272, 60060, 6144, 54054 }, /* 54.00*1.001 MHz */ 66 - { 74175, 11648, 210937, 17836, 234375, 11648, 140625 }, /* 74.25/1.001 MHz */ 66 + { 74176, 11648, 210937, 17836, 234375, 11648, 140625 }, /* 74.25/1.001 MHz */ 67 67 { 74250, 4096, 74250, 6272, 82500, 6144, 74250 }, /* 74.25 MHz */ 68 - { 148351, 11648, 421875, 8918, 234375, 5824, 140625 }, /* 148.50/1.001 MHz */ 68 + { 148352, 11648, 421875, 8918, 234375, 5824, 140625 }, /* 148.50/1.001 MHz */ 69 69 { 148500, 4096, 148500, 6272, 165000, 6144, 148500 }, /* 148.50 MHz */ 70 70 { 0, 4096, 0, 6272, 0, 6144, 0 } /* Other */ 71 71 }; ··· 75 75 */ 76 76 static void r600_hdmi_calc_cts(uint32_t clock, int *CTS, int N, int freq) 77 77 { 78 - if (*CTS == 0) 79 - *CTS = clock * N / (128 * freq) * 1000; 78 + u64 n; 79 + u32 d; 80 + 81 + if (*CTS == 0) { 82 + n = (u64)clock * (u64)N * 1000ULL; 83 + d = 128 * freq; 84 + do_div(n, d); 85 + *CTS = n; 86 + } 80 87 DRM_DEBUG("Using ACR timing N=%d CTS=%d for frequency %d\n", 81 88 N, *CTS, freq); 82 89 } ··· 451 444 } 452 445 453 446 WREG32(HDMI0_ACR_PACKET_CONTROL + offset, 454 - HDMI0_ACR_AUTO_SEND | /* allow hw to sent ACR packets when required */ 455 - HDMI0_ACR_SOURCE); /* select SW CTS value */ 447 + HDMI0_ACR_SOURCE | /* select SW CTS value - XXX verify that hw CTS works on all families */ 448 + HDMI0_ACR_AUTO_SEND); /* allow hw to sent ACR packets when required */ 456 449 WREG32(HDMI0_VBI_PACKET_CONTROL + offset, 457 450 HDMI0_NULL_SEND | /* send null packets when required */
+1 -1
drivers/gpu/drm/radeon/r600d.h
··· 1523 1523 */ 1524 1524 # define PACKET3_CP_DMA_CP_SYNC (1 << 31) 1525 1525 /* COMMAND */ 1526 - # define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 23) 1526 + # define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 22) 1527 1527 /* 0 - none 1528 1528 * 1 - 8 in 16 1529 1529 * 2 - 8 in 32
+3
drivers/gpu/drm/radeon/radeon_pm.c
··· 945 945 if (enable) { 946 946 mutex_lock(&rdev->pm.mutex); 947 947 rdev->pm.dpm.uvd_active = true; 948 + /* disable this for now */ 949 + #if 0 948 950 if ((rdev->pm.dpm.sd == 1) && (rdev->pm.dpm.hd == 0)) 949 951 dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_SD; 950 952 else if ((rdev->pm.dpm.sd == 2) && (rdev->pm.dpm.hd == 0)) ··· 956 954 else if ((rdev->pm.dpm.sd == 0) && (rdev->pm.dpm.hd == 2)) 957 955 dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD2; 958 956 else 957 + #endif 959 958 dpm_state = POWER_STATE_TYPE_INTERNAL_UVD; 960 959 rdev->pm.dpm.state = dpm_state; 961 960 mutex_unlock(&rdev->pm.mutex);
+2 -2
drivers/gpu/drm/radeon/radeon_test.c
··· 36 36 struct radeon_bo *vram_obj = NULL; 37 37 struct radeon_bo **gtt_obj = NULL; 38 38 uint64_t gtt_addr, vram_addr; 39 - unsigned i, n, size; 40 - int r, ring; 39 + unsigned n, size; 40 + int i, r, ring; 41 41 42 42 switch (flag) { 43 43 case RADEON_TEST_COPY_DMA:
+2 -1
drivers/gpu/drm/radeon/radeon_uvd.c
··· 798 798 (rdev->pm.dpm.hd != hd)) { 799 799 rdev->pm.dpm.sd = sd; 800 800 rdev->pm.dpm.hd = hd; 801 - streams_changed = true; 801 + /* disable this for now */ 802 + /*streams_changed = true;*/ 802 803 } 803 804 } 804 805
+10
drivers/gpu/drm/radeon/si.c
··· 85 85 uint32_t incr, uint32_t flags); 86 86 static void si_enable_gui_idle_interrupt(struct radeon_device *rdev, 87 87 bool enable); 88 + static void si_fini_pg(struct radeon_device *rdev); 89 + static void si_fini_cg(struct radeon_device *rdev); 90 + static void si_rlc_stop(struct radeon_device *rdev); 88 91 89 92 static const u32 verde_rlc_save_restore_register_list[] = 90 93 { ··· 3610 3607 RREG32(VM_CONTEXT1_PROTECTION_FAULT_ADDR)); 3611 3608 dev_info(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n", 3612 3609 RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS)); 3610 + 3611 + /* disable PG/CG */ 3612 + si_fini_pg(rdev); 3613 + si_fini_cg(rdev); 3614 + 3615 + /* stop the rlc */ 3616 + si_rlc_stop(rdev); 3613 3617 3614 3618 /* Disable CP parsing/prefetching */ 3615 3619 WREG32(CP_ME_CNTL, CP_ME_HALT | CP_PFP_HALT | CP_CE_HALT);
+3 -3
drivers/gpu/drm/radeon/si_dpm.c
··· 5208 5208 table->mc_reg_table_entry[k].mc_data[j] |= 0x100; 5209 5209 } 5210 5210 j++; 5211 - if (j > SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE) 5211 + if (j >= SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE) 5212 5212 return -EINVAL; 5213 5213 5214 5214 if (!pi->mem_gddr5) { ··· 5218 5218 table->mc_reg_table_entry[k].mc_data[j] = 5219 5219 (table->mc_reg_table_entry[k].mc_data[i] & 0xffff0000) >> 16; 5220 5220 j++; 5221 - if (j > SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE) 5221 + if (j >= SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE) 5222 5222 return -EINVAL; 5223 5223 } 5224 5224 break; ··· 5231 5231 (temp_reg & 0xffff0000) | 5232 5232 (table->mc_reg_table_entry[k].mc_data[i] & 0x0000ffff); 5233 5233 j++; 5234 - if (j > SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE) 5234 + if (j >= SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE) 5235 5235 return -EINVAL; 5236 5236 break; 5237 5237 default:
+2 -2
drivers/gpu/drm/radeon/sid.h
··· 1553 1553 * 6. COMMAND [30:21] | BYTE_COUNT [20:0] 1554 1554 */ 1555 1555 # define PACKET3_CP_DMA_DST_SEL(x) ((x) << 20) 1556 - /* 0 - SRC_ADDR 1556 + /* 0 - DST_ADDR 1557 1557 * 1 - GDS 1558 1558 */ 1559 1559 # define PACKET3_CP_DMA_ENGINE(x) ((x) << 27) ··· 1568 1568 # define PACKET3_CP_DMA_CP_SYNC (1 << 31) 1569 1569 /* COMMAND */ 1570 1570 # define PACKET3_CP_DMA_DIS_WC (1 << 21) 1571 - # define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 23) 1571 + # define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 22) 1572 1572 /* 0 - none 1573 1573 * 1 - 8 in 16 1574 1574 * 2 - 8 in 32
+1 -1
drivers/gpu/drm/radeon/trinity_dpm.c
··· 1868 1868 for (i = 0; i < SUMO_MAX_HARDWARE_POWERLEVELS; i++) 1869 1869 pi->at[i] = TRINITY_AT_DFLT; 1870 1870 1871 - pi->enable_bapm = true; 1871 + pi->enable_bapm = false; 1872 1872 pi->enable_nbps_policy = true; 1873 1873 pi->enable_sclk_ds = true; 1874 1874 pi->enable_gfx_power_gating = true;
+1
drivers/hid/Kconfig
··· 241 241 - Sharkoon Drakonia / Perixx MX-2000 gaming mice 242 242 - Tracer Sniper TRM-503 / NOVA Gaming Slider X200 / 243 243 Zalman ZM-GM1 244 + - SHARKOON DarkGlider Gaming mouse 244 245 245 246 config HOLTEK_FF 246 247 bool "Holtek On Line Grip force feedback support"
+1
drivers/hid/hid-core.c
··· 1715 1715 { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD) }, 1716 1716 { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A) }, 1717 1717 { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067) }, 1718 + { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081) }, 1718 1719 { HID_USB_DEVICE(USB_VENDOR_ID_HUION, USB_DEVICE_ID_HUION_580) }, 1719 1720 { HID_USB_DEVICE(USB_VENDOR_ID_JESS2, USB_DEVICE_ID_JESS2_COLOR_RUMBLE_PAD) }, 1720 1721 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ION, USB_DEVICE_ID_ICADE) },
+4
drivers/hid/hid-holtek-mouse.c
··· 27 27 * - USB ID 04d9:a067, sold as Sharkoon Drakonia and Perixx MX-2000 28 28 * - USB ID 04d9:a04a, sold as Tracer Sniper TRM-503, NOVA Gaming Slider X200 29 29 * and Zalman ZM-GM1 30 + * - USB ID 04d9:a081, sold as SHARKOON DarkGlider Gaming mouse 30 31 */ 31 32 32 33 static __u8 *holtek_mouse_report_fixup(struct hid_device *hdev, __u8 *rdesc, ··· 47 46 } 48 47 break; 49 48 case USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A: 49 + case USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081: 50 50 if (*rsize >= 113 && rdesc[106] == 0xff && rdesc[107] == 0x7f 51 51 && rdesc[111] == 0xff && rdesc[112] == 0x7f) { 52 52 hid_info(hdev, "Fixing up report descriptor\n"); ··· 65 63 USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067) }, 66 64 { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, 67 65 USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A) }, 66 + { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, 67 + USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081) }, 68 68 { } 69 69 }; 70 70 MODULE_DEVICE_TABLE(hid, holtek_mouse_devices);
+1
drivers/hid/hid-ids.h
··· 450 450 #define USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD 0xa055 451 451 #define USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067 0xa067 452 452 #define USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A 0xa04a 453 + #define USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081 0xa081 453 454 454 455 #define USB_VENDOR_ID_IMATION 0x0718 455 456 #define USB_DEVICE_ID_DISC_STAKKA 0xd000
+1 -1
drivers/hid/hid-roccat-kone.c
··· 382 382 } 383 383 #define PROFILE_ATTR(number) \ 384 384 static struct bin_attribute bin_attr_profile##number = { \ 385 - .attr = { .name = "profile##number", .mode = 0660 }, \ 385 + .attr = { .name = "profile" #number, .mode = 0660 }, \ 386 386 .size = sizeof(struct kone_profile), \ 387 387 .read = kone_sysfs_read_profilex, \ 388 388 .write = kone_sysfs_write_profilex, \
+2 -2
drivers/hid/hid-roccat-koneplus.c
··· 229 229 230 230 #define PROFILE_ATTR(number) \ 231 231 static struct bin_attribute bin_attr_profile##number##_settings = { \ 232 - .attr = { .name = "profile##number##_settings", .mode = 0440 }, \ 232 + .attr = { .name = "profile" #number "_settings", .mode = 0440 }, \ 233 233 .size = KONEPLUS_SIZE_PROFILE_SETTINGS, \ 234 234 .read = koneplus_sysfs_read_profilex_settings, \ 235 235 .private = &profile_numbers[number-1], \ 236 236 }; \ 237 237 static struct bin_attribute bin_attr_profile##number##_buttons = { \ 238 - .attr = { .name = "profile##number##_buttons", .mode = 0440 }, \ 238 + .attr = { .name = "profile" #number "_buttons", .mode = 0440 }, \ 239 239 .size = KONEPLUS_SIZE_PROFILE_BUTTONS, \ 240 240 .read = koneplus_sysfs_read_profilex_buttons, \ 241 241 .private = &profile_numbers[number-1], \
+2 -2
drivers/hid/hid-roccat-kovaplus.c
··· 257 257 258 258 #define PROFILE_ATTR(number) \ 259 259 static struct bin_attribute bin_attr_profile##number##_settings = { \ 260 - .attr = { .name = "profile##number##_settings", .mode = 0440 }, \ 260 + .attr = { .name = "profile" #number "_settings", .mode = 0440 }, \ 261 261 .size = KOVAPLUS_SIZE_PROFILE_SETTINGS, \ 262 262 .read = kovaplus_sysfs_read_profilex_settings, \ 263 263 .private = &profile_numbers[number-1], \ 264 264 }; \ 265 265 static struct bin_attribute bin_attr_profile##number##_buttons = { \ 266 - .attr = { .name = "profile##number##_buttons", .mode = 0440 }, \ 266 + .attr = { .name = "profile" #number "_buttons", .mode = 0440 }, \ 267 267 .size = KOVAPLUS_SIZE_PROFILE_BUTTONS, \ 268 268 .read = kovaplus_sysfs_read_profilex_buttons, \ 269 269 .private = &profile_numbers[number-1], \
+2 -2
drivers/hid/hid-roccat-pyra.c
··· 225 225 226 226 #define PROFILE_ATTR(number) \ 227 227 static struct bin_attribute bin_attr_profile##number##_settings = { \ 228 - .attr = { .name = "profile##number##_settings", .mode = 0440 }, \ 228 + .attr = { .name = "profile" #number "_settings", .mode = 0440 }, \ 229 229 .size = PYRA_SIZE_PROFILE_SETTINGS, \ 230 230 .read = pyra_sysfs_read_profilex_settings, \ 231 231 .private = &profile_numbers[number-1], \ 232 232 }; \ 233 233 static struct bin_attribute bin_attr_profile##number##_buttons = { \ 234 - .attr = { .name = "profile##number##_buttons", .mode = 0440 }, \ 234 + .attr = { .name = "profile" #number "_buttons", .mode = 0440 }, \ 235 235 .size = PYRA_SIZE_PROFILE_BUTTONS, \ 236 236 .read = pyra_sysfs_read_profilex_buttons, \ 237 237 .private = &profile_numbers[number-1], \
+29 -11
drivers/hid/hid-wiimote-modules.c
··· 119 119 * the rumble motor, this flag shouldn't be set. 120 120 */ 121 121 122 + /* used by wiimod_rumble and wiipro_rumble */ 123 + static void wiimod_rumble_worker(struct work_struct *work) 124 + { 125 + struct wiimote_data *wdata = container_of(work, struct wiimote_data, 126 + rumble_worker); 127 + 128 + spin_lock_irq(&wdata->state.lock); 129 + wiiproto_req_rumble(wdata, wdata->state.cache_rumble); 130 + spin_unlock_irq(&wdata->state.lock); 131 + } 132 + 122 133 static int wiimod_rumble_play(struct input_dev *dev, void *data, 123 134 struct ff_effect *eff) 124 135 { 125 136 struct wiimote_data *wdata = input_get_drvdata(dev); 126 137 __u8 value; 127 - unsigned long flags; 128 138 129 139 /* 130 140 * The wiimote supports only a single rumble motor so if any magnitude ··· 147 137 else 148 138 value = 0; 149 139 150 - spin_lock_irqsave(&wdata->state.lock, flags); 151 - wiiproto_req_rumble(wdata, value); 152 - spin_unlock_irqrestore(&wdata->state.lock, flags); 140 + /* Locking state.lock here might deadlock with input_event() calls. 141 + * schedule_work acts as barrier. Merging multiple changes is fine. */ 142 + wdata->state.cache_rumble = value; 143 + schedule_work(&wdata->rumble_worker); 153 144 154 145 return 0; 155 146 } ··· 158 147 static int wiimod_rumble_probe(const struct wiimod_ops *ops, 159 148 struct wiimote_data *wdata) 160 149 { 150 + INIT_WORK(&wdata->rumble_worker, wiimod_rumble_worker); 151 + 161 152 set_bit(FF_RUMBLE, wdata->input->ffbit); 162 153 if (input_ff_create_memless(wdata->input, NULL, wiimod_rumble_play)) 163 154 return -ENOMEM; ··· 171 158 struct wiimote_data *wdata) 172 159 { 173 160 unsigned long flags; 161 + 162 + cancel_work_sync(&wdata->rumble_worker); 174 163 175 164 spin_lock_irqsave(&wdata->state.lock, flags); 176 165 wiiproto_req_rumble(wdata, 0); ··· 1746 1731 { 1747 1732 struct wiimote_data *wdata = input_get_drvdata(dev); 1748 1733 __u8 value; 1749 - unsigned long flags; 1750 1734 1751 1735 /* 1752 1736 * The wiimote supports only a single rumble motor so if any magnitude ··· 1758 1744 else 1759 1745 value = 0; 1760 1746 1761 - spin_lock_irqsave(&wdata->state.lock, flags); 1762 - wiiproto_req_rumble(wdata, value); 1763 - spin_unlock_irqrestore(&wdata->state.lock, flags); 1747 + /* Locking state.lock here might deadlock with input_event() calls. 1748 + * schedule_work acts as barrier. Merging multiple changes is fine. */ 1749 + wdata->state.cache_rumble = value; 1750 + schedule_work(&wdata->rumble_worker); 1764 1751 1765 1752 return 0; 1766 1753 } ··· 1770 1755 struct wiimote_data *wdata) 1771 1756 { 1772 1757 int ret, i; 1758 + 1759 + INIT_WORK(&wdata->rumble_worker, wiimod_rumble_worker); 1773 1760 1774 1761 wdata->extension.input = input_allocate_device(); 1775 1762 if (!wdata->extension.input) ··· 1834 1817 if (!wdata->extension.input) 1835 1818 return; 1836 1819 1820 + input_unregister_device(wdata->extension.input); 1821 + wdata->extension.input = NULL; 1822 + cancel_work_sync(&wdata->rumble_worker); 1823 + 1837 1824 spin_lock_irqsave(&wdata->state.lock, flags); 1838 1825 wiiproto_req_rumble(wdata, 0); 1839 1826 spin_unlock_irqrestore(&wdata->state.lock, flags); 1840 - 1841 - input_unregister_device(wdata->extension.input); 1842 - wdata->extension.input = NULL; 1843 1827 }
+3 -1
drivers/hid/hid-wiimote.h
··· 133 133 __u8 *cmd_read_buf; 134 134 __u8 cmd_read_size; 135 135 136 - /* calibration data */ 136 + /* calibration/cache data */ 137 137 __u16 calib_bboard[4][3]; 138 + __u8 cache_rumble; 138 139 }; 139 140 140 141 struct wiimote_data { 141 142 struct hid_device *hdev; 142 143 struct input_dev *input; 144 + struct work_struct rumble_worker; 143 145 struct led_classdev *leds[4]; 144 146 struct input_dev *accel; 145 147 struct input_dev *ir;
+14 -7
drivers/hid/hidraw.c
··· 308 308 static void drop_ref(struct hidraw *hidraw, int exists_bit) 309 309 { 310 310 if (exists_bit) { 311 - hid_hw_close(hidraw->hid); 312 311 hidraw->exist = 0; 313 - if (hidraw->open) 312 + if (hidraw->open) { 313 + hid_hw_close(hidraw->hid); 314 314 wake_up_interruptible(&hidraw->wait); 315 + } 315 316 } else { 316 317 --hidraw->open; 317 318 } 318 - 319 - if (!hidraw->open && !hidraw->exist) { 320 - device_destroy(hidraw_class, MKDEV(hidraw_major, hidraw->minor)); 321 - hidraw_table[hidraw->minor] = NULL; 322 - kfree(hidraw); 319 + if (!hidraw->open) { 320 + if (!hidraw->exist) { 321 + device_destroy(hidraw_class, 322 + MKDEV(hidraw_major, hidraw->minor)); 323 + hidraw_table[hidraw->minor] = NULL; 324 + kfree(hidraw); 325 + } else { 326 + /* close device for last reader */ 327 + hid_hw_power(hidraw->hid, PM_HINT_NORMAL); 328 + hid_hw_close(hidraw->hid); 329 + } 323 330 } 324 331 } 325 332
+2 -1
drivers/hid/uhid.c
··· 615 615 616 616 static struct miscdevice uhid_misc = { 617 617 .fops = &uhid_fops, 618 - .minor = MISC_DYNAMIC_MINOR, 618 + .minor = UHID_MINOR, 619 619 .name = UHID_NAME, 620 620 }; 621 621 ··· 634 634 MODULE_LICENSE("GPL"); 635 635 MODULE_AUTHOR("David Herrmann <dh.herrmann@gmail.com>"); 636 636 MODULE_DESCRIPTION("User-space I/O driver support for HID subsystem"); 637 + MODULE_ALIAS_MISCDEV(UHID_MINOR); 637 638 MODULE_ALIAS("devname:" UHID_NAME);
+13
drivers/hwmon/applesmc.c
··· 230 230 231 231 static int read_smc(u8 cmd, const char *key, u8 *buffer, u8 len) 232 232 { 233 + u8 status, data = 0; 233 234 int i; 234 235 235 236 if (send_command(cmd) || send_argument(key)) { ··· 238 237 return -EIO; 239 238 } 240 239 240 + /* This has no effect on newer (2012) SMCs */ 241 241 if (send_byte(len, APPLESMC_DATA_PORT)) { 242 242 pr_warn("%.4s: read len fail\n", key); 243 243 return -EIO; ··· 251 249 } 252 250 buffer[i] = inb(APPLESMC_DATA_PORT); 253 251 } 252 + 253 + /* Read the data port until bit0 is cleared */ 254 + for (i = 0; i < 16; i++) { 255 + udelay(APPLESMC_MIN_WAIT); 256 + status = inb(APPLESMC_CMD_PORT); 257 + if (!(status & 0x01)) 258 + break; 259 + data = inb(APPLESMC_DATA_PORT); 260 + } 261 + if (i) 262 + pr_warn("flushed %d bytes, last value is: %d\n", i, data); 254 263 255 264 return 0; 256 265 }
+3 -2
drivers/i2c/busses/i2c-designware-platdrv.c
··· 270 270 MODULE_ALIAS("platform:i2c_designware"); 271 271 272 272 static struct platform_driver dw_i2c_driver = { 273 - .remove = dw_i2c_remove, 273 + .probe = dw_i2c_probe, 274 + .remove = dw_i2c_remove, 274 275 .driver = { 275 276 .name = "i2c_designware", 276 277 .owner = THIS_MODULE, ··· 283 282 284 283 static int __init dw_i2c_init_driver(void) 285 284 { 286 - return platform_driver_probe(&dw_i2c_driver, dw_i2c_probe); 285 + return platform_driver_register(&dw_i2c_driver); 287 286 } 288 287 subsys_initcall(dw_i2c_init_driver); 289 288
+6 -5
drivers/i2c/busses/i2c-imx.c
··· 365 365 clk_disable_unprepare(i2c_imx->clk); 366 366 } 367 367 368 - static void __init i2c_imx_set_clk(struct imx_i2c_struct *i2c_imx, 368 + static void i2c_imx_set_clk(struct imx_i2c_struct *i2c_imx, 369 369 unsigned int rate) 370 370 { 371 371 struct imx_i2c_clk_pair *i2c_clk_div = i2c_imx->hwdata->clk_div; ··· 589 589 .functionality = i2c_imx_func, 590 590 }; 591 591 592 - static int __init i2c_imx_probe(struct platform_device *pdev) 592 + static int i2c_imx_probe(struct platform_device *pdev) 593 593 { 594 594 const struct of_device_id *of_id = of_match_device(i2c_imx_dt_ids, 595 595 &pdev->dev); ··· 697 697 return 0; /* Return OK */ 698 698 } 699 699 700 - static int __exit i2c_imx_remove(struct platform_device *pdev) 700 + static int i2c_imx_remove(struct platform_device *pdev) 701 701 { 702 702 struct imx_i2c_struct *i2c_imx = platform_get_drvdata(pdev); 703 703 ··· 715 715 } 716 716 717 717 static struct platform_driver i2c_imx_driver = { 718 - .remove = __exit_p(i2c_imx_remove), 718 + .probe = i2c_imx_probe, 719 + .remove = i2c_imx_remove, 719 720 .driver = { 720 721 .name = DRIVER_NAME, 721 722 .owner = THIS_MODULE, ··· 727 726 728 727 static int __init i2c_adap_imx_init(void) 729 728 { 730 - return platform_driver_probe(&i2c_imx_driver, i2c_imx_probe); 729 + return platform_driver_register(&i2c_imx_driver); 731 730 } 732 731 subsys_initcall(i2c_adap_imx_init); 733 732
+2 -1
drivers/i2c/busses/i2c-mxs.c
··· 780 780 .owner = THIS_MODULE, 781 781 .of_match_table = mxs_i2c_dt_ids, 782 782 }, 783 + .probe = mxs_i2c_probe, 783 784 .remove = mxs_i2c_remove, 784 785 }; 785 786 786 787 static int __init mxs_i2c_init(void) 787 788 { 788 - return platform_driver_probe(&mxs_i2c_driver, mxs_i2c_probe); 789 + return platform_driver_register(&mxs_i2c_driver); 789 790 } 790 791 subsys_initcall(mxs_i2c_init); 791 792
+3
drivers/i2c/busses/i2c-omap.c
··· 939 939 /* 940 940 * ProDB0017052: Clear ARDY bit twice 941 941 */ 942 + if (stat & OMAP_I2C_STAT_ARDY) 943 + omap_i2c_ack_stat(dev, OMAP_I2C_STAT_ARDY); 944 + 942 945 if (stat & (OMAP_I2C_STAT_ARDY | OMAP_I2C_STAT_NACK | 943 946 OMAP_I2C_STAT_AL)) { 944 947 omap_i2c_ack_stat(dev, (OMAP_I2C_STAT_RRDY |
+5 -6
drivers/i2c/busses/i2c-stu300.c
··· 859 859 .functionality = stu300_func, 860 860 }; 861 861 862 - static int __init 863 - stu300_probe(struct platform_device *pdev) 862 + static int stu300_probe(struct platform_device *pdev) 864 863 { 865 864 struct stu300_dev *dev; 866 865 struct i2c_adapter *adap; ··· 965 966 #define STU300_I2C_PM NULL 966 967 #endif 967 968 968 - static int __exit 969 - stu300_remove(struct platform_device *pdev) 969 + static int stu300_remove(struct platform_device *pdev) 970 970 { 971 971 struct stu300_dev *dev = platform_get_drvdata(pdev); 972 972 ··· 987 989 .pm = STU300_I2C_PM, 988 990 .of_match_table = stu300_dt_match, 989 991 }, 990 - .remove = __exit_p(stu300_remove), 992 + .probe = stu300_probe, 993 + .remove = stu300_remove, 991 994 992 995 }; 993 996 994 997 static int __init stu300_init(void) 995 998 { 996 - return platform_driver_probe(&stu300_i2c_driver, stu300_probe); 999 + return platform_driver_register(&stu300_i2c_driver); 997 1000 } 998 1001 999 1002 static void __exit stu300_exit(void)
+3
drivers/i2c/i2c-core.c
··· 1134 1134 acpi_handle handle; 1135 1135 acpi_status status; 1136 1136 1137 + if (!adap->dev.parent) 1138 + return; 1139 + 1137 1140 handle = ACPI_HANDLE(adap->dev.parent); 1138 1141 if (!handle) 1139 1142 return;
+1 -1
drivers/i2c/muxes/i2c-arb-gpio-challenge.c
··· 200 200 arb->parent = of_find_i2c_adapter_by_node(parent_np); 201 201 if (!arb->parent) { 202 202 dev_err(dev, "Cannot find parent bus\n"); 203 - return -EINVAL; 203 + return -EPROBE_DEFER; 204 204 } 205 205 206 206 /* Actually add the mux adapter */
+9 -5
drivers/i2c/muxes/i2c-mux-gpio.c
··· 66 66 struct device_node *adapter_np, *child; 67 67 struct i2c_adapter *adapter; 68 68 unsigned *values, *gpios; 69 - int i = 0; 69 + int i = 0, ret; 70 70 71 71 if (!np) 72 72 return -ENODEV; ··· 79 79 adapter = of_find_i2c_adapter_by_node(adapter_np); 80 80 if (!adapter) { 81 81 dev_err(&pdev->dev, "Cannot find parent bus\n"); 82 - return -ENODEV; 82 + return -EPROBE_DEFER; 83 83 } 84 84 mux->data.parent = i2c_adapter_id(adapter); 85 85 put_device(&adapter->dev); ··· 116 116 return -ENOMEM; 117 117 } 118 118 119 - for (i = 0; i < mux->data.n_gpios; i++) 120 - gpios[i] = of_get_named_gpio(np, "mux-gpios", i); 119 + for (i = 0; i < mux->data.n_gpios; i++) { 120 + ret = of_get_named_gpio(np, "mux-gpios", i); 121 + if (ret < 0) 122 + return ret; 123 + gpios[i] = ret; 124 + } 121 125 122 126 mux->data.gpios = gpios; 123 127 ··· 181 177 if (!parent) { 182 178 dev_err(&pdev->dev, "Parent adapter (%d) not found\n", 183 179 mux->data.parent); 184 - return -ENODEV; 180 + return -EPROBE_DEFER; 185 181 } 186 182 187 183 mux->parent = parent;
+2 -2
drivers/i2c/muxes/i2c-mux-pinctrl.c
··· 113 113 adapter = of_find_i2c_adapter_by_node(adapter_np); 114 114 if (!adapter) { 115 115 dev_err(mux->dev, "Cannot find parent bus\n"); 116 - return -ENODEV; 116 + return -EPROBE_DEFER; 117 117 } 118 118 mux->pdata->parent_bus_num = i2c_adapter_id(adapter); 119 119 put_device(&adapter->dev); ··· 211 211 if (!mux->parent) { 212 212 dev_err(&pdev->dev, "Parent adapter (%d) not found\n", 213 213 mux->pdata->parent_bus_num); 214 - ret = -ENODEV; 214 + ret = -EPROBE_DEFER; 215 215 goto err; 216 216 } 217 217
+4 -2
drivers/iio/frequency/adf4350.c
··· 525 525 } 526 526 527 527 indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(*st)); 528 - if (indio_dev == NULL) 529 - return -ENOMEM; 528 + if (indio_dev == NULL) { 529 + ret = -ENOMEM; 530 + goto error_disable_clk; 531 + } 530 532 531 533 st = iio_priv(indio_dev); 532 534
+3
drivers/iio/industrialio-buffer.c
··· 477 477 indio_dev->currentmode = INDIO_DIRECT_MODE; 478 478 if (indio_dev->setup_ops->postdisable) 479 479 indio_dev->setup_ops->postdisable(indio_dev); 480 + 481 + if (indio_dev->available_scan_masks == NULL) 482 + kfree(indio_dev->active_scan_mask); 480 483 } 481 484 482 485 int iio_update_buffers(struct iio_dev *indio_dev,
+1 -1
drivers/infiniband/hw/amso1100/c2_ae.c
··· 141 141 return "C2_QP_STATE_ERROR"; 142 142 default: 143 143 return "<invalid QP state>"; 144 - }; 144 + } 145 145 } 146 146 147 147 void c2_ae_event(struct c2_dev *c2dev, u32 mq_index)
+10 -6
drivers/infiniband/hw/mlx5/main.c
··· 164 164 static int alloc_comp_eqs(struct mlx5_ib_dev *dev) 165 165 { 166 166 struct mlx5_eq_table *table = &dev->mdev.priv.eq_table; 167 + char name[MLX5_MAX_EQ_NAME]; 167 168 struct mlx5_eq *eq, *n; 168 169 int ncomp_vec; 169 170 int nent; ··· 181 180 goto clean; 182 181 } 183 182 184 - snprintf(eq->name, MLX5_MAX_EQ_NAME, "mlx5_comp%d", i); 183 + snprintf(name, MLX5_MAX_EQ_NAME, "mlx5_comp%d", i); 185 184 err = mlx5_create_map_eq(&dev->mdev, eq, 186 185 i + MLX5_EQ_VEC_COMP_BASE, nent, 0, 187 - eq->name, 188 - &dev->mdev.priv.uuari.uars[0]); 186 + name, &dev->mdev.priv.uuari.uars[0]); 189 187 if (err) { 190 188 kfree(eq); 191 189 goto clean; ··· 301 301 props->max_srq_sge = max_rq_sg - 1; 302 302 props->max_fast_reg_page_list_len = (unsigned int)-1; 303 303 props->local_ca_ack_delay = dev->mdev.caps.local_ca_ack_delay; 304 - props->atomic_cap = dev->mdev.caps.flags & MLX5_DEV_CAP_FLAG_ATOMIC ? 305 - IB_ATOMIC_HCA : IB_ATOMIC_NONE; 306 - props->masked_atomic_cap = IB_ATOMIC_HCA; 304 + props->atomic_cap = IB_ATOMIC_NONE; 305 + props->masked_atomic_cap = IB_ATOMIC_NONE; 307 306 props->max_pkeys = be16_to_cpup((__be16 *)(out_mad->data + 28)); 308 307 props->max_mcast_grp = 1 << dev->mdev.caps.log_max_mcg; 309 308 props->max_mcast_qp_attach = dev->mdev.caps.max_qp_mcg; ··· 1004 1005 1005 1006 ibev.device = &ibdev->ib_dev; 1006 1007 ibev.element.port_num = port; 1008 + 1009 + if (port < 1 || port > ibdev->num_ports) { 1010 + mlx5_ib_warn(ibdev, "warning: event on port %d\n", port); 1011 + return; 1012 + } 1007 1013 1008 1014 if (ibdev->ib_active) 1009 1015 ib_dispatch_event(&ibev);
+33 -37
drivers/infiniband/hw/mlx5/mr.c
··· 42 42 DEF_CACHE_SIZE = 10, 43 43 }; 44 44 45 + enum { 46 + MLX5_UMR_ALIGN = 2048 47 + }; 48 + 45 49 static __be64 *mr_align(__be64 *ptr, int align) 46 50 { 47 51 unsigned long mask = align - 1; ··· 65 61 66 62 static int add_keys(struct mlx5_ib_dev *dev, int c, int num) 67 63 { 68 - struct device *ddev = dev->ib_dev.dma_device; 69 64 struct mlx5_mr_cache *cache = &dev->cache; 70 65 struct mlx5_cache_ent *ent = &cache->ent[c]; 71 66 struct mlx5_create_mkey_mbox_in *in; 72 67 struct mlx5_ib_mr *mr; 73 68 int npages = 1 << ent->order; 74 - int size = sizeof(u64) * npages; 75 69 int err = 0; 76 70 int i; ··· 85 83 } 86 84 mr->order = ent->order; 87 85 mr->umred = 1; 88 - mr->pas = kmalloc(size + 0x3f, GFP_KERNEL); 89 - if (!mr->pas) { 90 - kfree(mr); 91 - err = -ENOMEM; 92 - goto out; 93 - } 94 - mr->dma = dma_map_single(ddev, mr_align(mr->pas, 0x40), size, 95 - DMA_TO_DEVICE); 96 - if (dma_mapping_error(ddev, mr->dma)) { 97 - kfree(mr->pas); 98 - kfree(mr); 99 - err = -ENOMEM; 100 - goto out; 101 - } 102 - 103 86 in->seg.status = 1 << 6; 104 87 in->seg.xlt_oct_size = cpu_to_be32((npages + 1) / 2); 105 88 in->seg.qpn_mkey7_0 = cpu_to_be32(0xffffff << 8); ··· 95 108 sizeof(*in)); 96 109 if (err) { 97 110 mlx5_ib_warn(dev, "create mkey failed %d\n", err); 98 - dma_unmap_single(ddev, mr->dma, size, DMA_TO_DEVICE); 99 - kfree(mr->pas); 100 111 kfree(mr); 101 112 goto out; 102 113 } ··· 114 129 115 130 static void remove_keys(struct mlx5_ib_dev *dev, int c, int num) 116 131 { 117 - struct device *ddev = dev->ib_dev.dma_device; 118 132 struct mlx5_mr_cache *cache = &dev->cache; 119 133 struct mlx5_cache_ent *ent = &cache->ent[c]; 120 134 struct mlx5_ib_mr *mr; 121 - int size; 122 135 int err; 123 136 int i; ··· 132 149 ent->size--; 133 150 spin_unlock(&ent->lock); 134 151 err = mlx5_core_destroy_mkey(&dev->mdev, &mr->mmr); 135 - if (err) { 152 + if (err) 136 153 mlx5_ib_warn(dev, "failed destroy mkey\n"); 137 - } else { 138 - size = ALIGN(sizeof(u64) * (1 << mr->order), 0x40); 139 - dma_unmap_single(ddev, mr->dma, size, DMA_TO_DEVICE); 140 - kfree(mr->pas); 154 + else 141 155 kfree(mr); 142 - } 143 156 } 144 157 } ··· 387 408 388 409 static void clean_keys(struct mlx5_ib_dev *dev, int c) 389 410 { 390 - struct device *ddev = dev->ib_dev.dma_device; 391 411 struct mlx5_mr_cache *cache = &dev->cache; 392 412 struct mlx5_cache_ent *ent = &cache->ent[c]; 393 413 struct mlx5_ib_mr *mr; 394 - int size; 395 414 int err; 396 415 416 + cancel_delayed_work(&ent->dwork); 397 417 while (1) { 398 418 spin_lock(&ent->lock); 399 419 if (list_empty(&ent->head)) { ··· 405 427 ent->size--; 406 428 spin_unlock(&ent->lock); 407 429 err = mlx5_core_destroy_mkey(&dev->mdev, &mr->mmr); 408 - if (err) { 430 + if (err) 409 431 mlx5_ib_warn(dev, "failed destroy mkey\n"); 410 - } else { 411 - size = ALIGN(sizeof(u64) * (1 << mr->order), 0x40); 412 - dma_unmap_single(ddev, mr->dma, size, DMA_TO_DEVICE); 413 - kfree(mr->pas); 432 + else 414 433 kfree(mr); 415 - } 416 434 } 417 435 } ··· 514 540 int i; 515 541 516 542 dev->cache.stopped = 1; 517 - destroy_workqueue(dev->cache.wq); 543 + flush_workqueue(dev->cache.wq); 518 544 519 545 mlx5_mr_cache_debugfs_cleanup(dev); 520 546 521 547 for (i = 0; i < MAX_MR_CACHE_ENTRIES; i++) 522 548 clean_keys(dev, i); 549 + 550 + destroy_workqueue(dev->cache.wq); 523 551 524 552 return 0; 525 553 } ··· 651 675 int page_shift, int order, int access_flags) 652 676 { 653 677 struct mlx5_ib_dev *dev = to_mdev(pd->device); 678 + struct device *ddev = dev->ib_dev.dma_device; 654 679 struct umr_common *umrc = &dev->umrc; 655 680 struct ib_send_wr wr, *bad; 656 681 struct mlx5_ib_mr *mr; 657 682 struct ib_sge sg; 683 + int size = sizeof(u64) * npages; 658 684 int err; 659 685 int i; ··· 675 697 if (!mr) 676 698 return ERR_PTR(-EAGAIN); 677 699 678 - mlx5_ib_populate_pas(dev, umem, page_shift, mr_align(mr->pas, 0x40), 1); 700 + mr->pas = kmalloc(size + MLX5_UMR_ALIGN - 1, GFP_KERNEL); 701 + if (!mr->pas) { 702 + err = -ENOMEM; 703 + goto error; 704 + } 705 + 706 + mlx5_ib_populate_pas(dev, umem, page_shift, 707 + mr_align(mr->pas, MLX5_UMR_ALIGN), 1); 708 + 709 + mr->dma = dma_map_single(ddev, mr_align(mr->pas, MLX5_UMR_ALIGN), size, 710 + DMA_TO_DEVICE); 711 + if (dma_mapping_error(ddev, mr->dma)) { 712 + kfree(mr->pas); 713 + err = -ENOMEM; 714 + goto error; 715 + } 679 716 680 717 memset(&wr, 0, sizeof(wr)); 681 718 wr.wr_id = (u64)(unsigned long)mr; ··· 710 717 } 711 718 wait_for_completion(&mr->done); 712 719 up(&umrc->sem); 720 + 721 + dma_unmap_single(ddev, mr->dma, size, DMA_TO_DEVICE); 722 + kfree(mr->pas); 713 723 714 724 if (mr->status != IB_WC_SUCCESS) { 715 725 mlx5_ib_warn(dev, "reg umr failed\n");
+30 -50
drivers/infiniband/hw/mlx5/qp.c
··· 203 203 204 204 switch (qp_type) { 205 205 case IB_QPT_XRC_INI: 206 - size = sizeof(struct mlx5_wqe_xrc_seg); 206 + size += sizeof(struct mlx5_wqe_xrc_seg); 207 207 /* fall through */ 208 208 case IB_QPT_RC: 209 209 size += sizeof(struct mlx5_wqe_ctrl_seg) + ··· 211 211 sizeof(struct mlx5_wqe_raddr_seg); 212 212 break; 213 213 214 + case IB_QPT_XRC_TGT: 215 + return 0; 216 + 214 217 case IB_QPT_UC: 215 - size = sizeof(struct mlx5_wqe_ctrl_seg) + 218 + size += sizeof(struct mlx5_wqe_ctrl_seg) + 216 219 sizeof(struct mlx5_wqe_raddr_seg); 217 220 break; 218 221 219 222 case IB_QPT_UD: 220 223 case IB_QPT_SMI: 221 224 case IB_QPT_GSI: 222 - size = sizeof(struct mlx5_wqe_ctrl_seg) + 225 + size += sizeof(struct mlx5_wqe_ctrl_seg) + 223 226 sizeof(struct mlx5_wqe_datagram_seg); 224 227 break; 225 228 226 229 case MLX5_IB_QPT_REG_UMR: 227 - size = sizeof(struct mlx5_wqe_ctrl_seg) + 230 + size += sizeof(struct mlx5_wqe_ctrl_seg) + 228 231 sizeof(struct mlx5_wqe_umr_ctrl_seg) + 229 232 sizeof(struct mlx5_mkey_seg); 230 233 break; ··· 273 270 return wqe_size; 274 271 275 272 if (wqe_size > dev->mdev.caps.max_sq_desc_sz) { 276 - mlx5_ib_dbg(dev, "\n"); 273 + mlx5_ib_dbg(dev, "wqe_size(%d) > max_sq_desc_sz(%d)\n", 274 + wqe_size, dev->mdev.caps.max_sq_desc_sz); 277 275 return -EINVAL; 278 276 } ··· 284 280 285 281 wq_size = roundup_pow_of_two(attr->cap.max_send_wr * wqe_size); 286 282 qp->sq.wqe_cnt = wq_size / MLX5_SEND_WQE_BB; 283 + if (qp->sq.wqe_cnt > dev->mdev.caps.max_wqes) { 284 + mlx5_ib_dbg(dev, "wqe count(%d) exceeds limits(%d)\n", 285 + qp->sq.wqe_cnt, dev->mdev.caps.max_wqes); 286 + return -ENOMEM; 287 + } 287 288 qp->sq.wqe_shift = ilog2(MLX5_SEND_WQE_BB); 288 289 qp->sq.max_gs = attr->cap.max_send_sge; 289 - qp->sq.max_post = 1 << ilog2(wq_size / wqe_size); 290 + qp->sq.max_post = wq_size / wqe_size; 291 + attr->cap.max_send_wr = qp->sq.max_post; 290 292 291 293 return wq_size; 292 294 } ··· 1290 1280 MLX5_QP_OPTPAR_Q_KEY, 1291 1281 [MLX5_QP_ST_MLX] = MLX5_QP_OPTPAR_PKEY_INDEX | 1292 1282 MLX5_QP_OPTPAR_Q_KEY, 1283 + [MLX5_QP_ST_XRC] = MLX5_QP_OPTPAR_ALT_ADDR_PATH | 1284 + MLX5_QP_OPTPAR_RRE | 1285 + MLX5_QP_OPTPAR_RAE | 1286 + MLX5_QP_OPTPAR_RWE | 1287 + MLX5_QP_OPTPAR_PKEY_INDEX, 1293 1288 }, 1294 1289 }, 1295 1290 [MLX5_QP_STATE_RTR] = { ··· 1329 1314 [MLX5_QP_STATE_RTS] = { 1330 1315 [MLX5_QP_ST_UD] = MLX5_QP_OPTPAR_Q_KEY, 1331 1316 [MLX5_QP_ST_MLX] = MLX5_QP_OPTPAR_Q_KEY, 1317 + [MLX5_QP_ST_UC] = MLX5_QP_OPTPAR_RWE, 1318 + [MLX5_QP_ST_RC] = MLX5_QP_OPTPAR_RNR_TIMEOUT | 1319 + MLX5_QP_OPTPAR_RWE | 1320 + MLX5_QP_OPTPAR_RAE | 1321 + MLX5_QP_OPTPAR_RRE, 1332 1322 }, 1333 1323 }, 1334 1324 }; ··· 1669 1649 rseg->raddr = cpu_to_be64(remote_addr); 1670 1650 rseg->rkey = cpu_to_be32(rkey); 1671 1651 rseg->reserved = 0; 1672 - } 1673 - 1674 - static void set_atomic_seg(struct mlx5_wqe_atomic_seg *aseg, struct ib_send_wr *wr) 1675 - { 1676 - if (wr->opcode == IB_WR_ATOMIC_CMP_AND_SWP) { 1677 - aseg->swap_add = cpu_to_be64(wr->wr.atomic.swap); 1678 - aseg->compare = cpu_to_be64(wr->wr.atomic.compare_add); 1679 - } else if (wr->opcode == IB_WR_MASKED_ATOMIC_FETCH_AND_ADD) { 1680 - aseg->swap_add = cpu_to_be64(wr->wr.atomic.compare_add); 1681 - aseg->compare = cpu_to_be64(wr->wr.atomic.compare_add_mask); 1682 - } else { 1683 - aseg->swap_add = cpu_to_be64(wr->wr.atomic.compare_add); 1684 - aseg->compare = 0; 1685 - } 1686 - } 1687 - 1688 - static void set_masked_atomic_seg(struct mlx5_wqe_masked_atomic_seg *aseg, 1689 - struct ib_send_wr *wr) 1690 - { 1691 - aseg->swap_add = cpu_to_be64(wr->wr.atomic.swap); 1692 - aseg->swap_add_mask = cpu_to_be64(wr->wr.atomic.swap_mask); 1693 - aseg->compare = cpu_to_be64(wr->wr.atomic.compare_add); 1694 - aseg->compare_mask = cpu_to_be64(wr->wr.atomic.compare_add_mask); 1695 1652 } 1696 1653 1697 1654 static void set_datagram_seg(struct mlx5_wqe_datagram_seg *dseg, ··· 2060 2063 2061 2064 case IB_WR_ATOMIC_CMP_AND_SWP: 2062 2065 case IB_WR_ATOMIC_FETCH_AND_ADD: 2063 - set_raddr_seg(seg, wr->wr.atomic.remote_addr, 2064 - wr->wr.atomic.rkey); 2065 - seg += sizeof(struct mlx5_wqe_raddr_seg); 2066 - 2067 - set_atomic_seg(seg, wr); 2068 - seg += sizeof(struct mlx5_wqe_atomic_seg); 2069 - 2070 - size += (sizeof(struct mlx5_wqe_raddr_seg) + 2071 - sizeof(struct mlx5_wqe_atomic_seg)) / 16; 2072 - break; 2073 - 2074 2066 case IB_WR_MASKED_ATOMIC_CMP_AND_SWP: 2075 - set_raddr_seg(seg, wr->wr.atomic.remote_addr, 2076 - wr->wr.atomic.rkey); 2077 - seg += sizeof(struct mlx5_wqe_raddr_seg); 2078 - 2079 - set_masked_atomic_seg(seg, wr); 2080 - seg += sizeof(struct mlx5_wqe_masked_atomic_seg); 2081 - 2082 - size += (sizeof(struct mlx5_wqe_raddr_seg) + 2083 - sizeof(struct mlx5_wqe_masked_atomic_seg)) / 16; 2084 - break; 2067 + mlx5_ib_warn(dev, "Atomic operations are not supported yet\n"); 2068 + err = -ENOSYS; 2069 + *bad_wr = wr; 2070 + goto out; 2085 2071 2086 2072 case IB_WR_LOCAL_INV: 2087 2073 next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL;
+3 -1
drivers/infiniband/hw/mlx5/srq.c
··· 295 295 mlx5_vfree(in); 296 296 if (err) { 297 297 mlx5_ib_dbg(dev, "create SRQ failed, err %d\n", err); 298 - goto err_srq; 298 + goto err_usr_kern_srq; 299 299 } 300 300 301 301 mlx5_ib_dbg(dev, "create SRQ with srqn 0x%x\n", srq->msrq.srqn); ··· 316 316 317 317 err_core: 318 318 mlx5_core_destroy_srq(&dev->mdev, &srq->msrq); 319 + 320 + err_usr_kern_srq: 319 321 if (pd->uobject) 320 322 destroy_srq_user(pd, srq); 321 323 else
+1 -1
drivers/infiniband/hw/mthca/mthca_eq.c
··· 357 357 mthca_warn(dev, "Unhandled event %02x(%02x) on EQ %d\n", 358 358 eqe->type, eqe->subtype, eq->eqn); 359 359 break; 360 - }; 360 + } 361 361 362 362 set_eqe_hw(eqe); 363 363 ++eq->cons_index;
+3 -3
drivers/infiniband/hw/ocrdma/ocrdma_hw.c
··· 150 150 return IB_QPS_SQE; 151 151 case OCRDMA_QPS_ERR: 152 152 return IB_QPS_ERR; 153 - }; 153 + } 154 154 return IB_QPS_ERR; 155 155 } 156 156 ··· 171 171 return OCRDMA_QPS_SQE; 172 172 case IB_QPS_ERR: 173 173 return OCRDMA_QPS_ERR; 174 - }; 174 + } 175 175 return OCRDMA_QPS_ERR; 176 176 } 177 177 ··· 1982 1982 break; 1983 1983 default: 1984 1984 return -EINVAL; 1985 - }; 1985 + } 1986 1986 1987 1987 cmd = ocrdma_init_emb_mqe(OCRDMA_CMD_CREATE_QP, sizeof(*cmd)); 1988 1988 if (!cmd)
+1 -1
drivers/infiniband/hw/ocrdma/ocrdma_main.c
··· 531 531 case BE_DEV_DOWN: 532 532 ocrdma_close(dev); 533 533 break; 534 - }; 534 + } 535 535 } 536 536 537 537 static struct ocrdma_driver ocrdma_drv = {
+3 -3
drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
··· 141 141 /* Unsupported */ 142 142 *ib_speed = IB_SPEED_SDR; 143 143 *ib_width = IB_WIDTH_1X; 144 - }; 144 + } 145 145 } 146 146 147 147 ··· 2331 2331 default: 2332 2332 ibwc_status = IB_WC_GENERAL_ERR; 2333 2333 break; 2334 - }; 2334 + } 2335 2335 return ibwc_status; 2336 2336 } 2337 2337 ··· 2370 2370 pr_err("%s() invalid opcode received = 0x%x\n", 2371 2371 __func__, hdr->cw & OCRDMA_WQE_OPCODE_MASK); 2372 2372 break; 2373 - }; 2373 + } 2374 2374 } 2375 2375 2376 2376 static void ocrdma_set_cqe_status_flushed(struct ocrdma_qp *qp,
+1 -1
drivers/iommu/Kconfig
··· 52 52 select PCI_PRI 53 53 select PCI_PASID 54 54 select IOMMU_API 55 - depends on X86_64 && PCI && ACPI && X86_IO_APIC 55 + depends on X86_64 && PCI && ACPI 56 56 ---help--- 57 57 With this option you can enable support for AMD IOMMU hardware in 58 58 your system. An IOMMU is a hardware component which provides
+1 -2
drivers/md/bcache/request.c
··· 996 996 closure_bio_submit(bio, cl, s->d); 997 997 } else { 998 998 bch_writeback_add(dc); 999 + s->op.cache_bio = bio; 999 1000 1000 1001 if (bio->bi_rw & REQ_FLUSH) { 1001 1002 /* Also need to send a flush to the backing device */ ··· 1009 1008 flush->bi_private = cl; 1010 1009 1011 1010 closure_bio_submit(flush, cl, s->d); 1012 - } else { 1013 - s->op.cache_bio = bio; 1014 1011 } 1015 1012 } 1016 1013 out:
+12 -6
drivers/md/dm-snap-persistent.c
··· 269 269 return NUM_SNAPSHOT_HDR_CHUNKS + ((ps->exceptions_per_area + 1) * area); 270 270 } 271 271 272 + static void skip_metadata(struct pstore *ps) 273 + { 274 + uint32_t stride = ps->exceptions_per_area + 1; 275 + chunk_t next_free = ps->next_free; 276 + if (sector_div(next_free, stride) == NUM_SNAPSHOT_HDR_CHUNKS) 277 + ps->next_free++; 278 + } 279 + 272 280 /* 273 281 * Read or write a metadata area. Remembering to skip the first 274 282 * chunk which holds the header. ··· 510 502 511 503 ps->current_area--; 512 504 505 + skip_metadata(ps); 506 + 513 507 return 0; 514 508 } 515 509 ··· 626 616 struct dm_exception *e) 627 617 { 628 618 struct pstore *ps = get_info(store); 629 - uint32_t stride; 630 - chunk_t next_free; 631 619 sector_t size = get_dev_size(dm_snap_cow(store->snap)->bdev); 632 620 633 621 /* Is there enough room ? */ ··· 638 630 * Move onto the next free pending, making sure to take 639 631 * into account the location of the metadata chunks. 640 632 */ 641 - stride = (ps->exceptions_per_area + 1); 642 - next_free = ++ps->next_free; 643 - if (sector_div(next_free, stride) == 1) 644 - ps->next_free++; 633 + ps->next_free++; 634 + skip_metadata(ps); 645 635 646 636 atomic_inc(&ps->pending_count); 647 637 return 0;
+15 -2
drivers/mtd/devices/m25p80.c
··· 168 168 */ 169 169 static inline int set_4byte(struct m25p *flash, u32 jedec_id, int enable) 170 170 { 171 + int status; 172 + bool need_wren = false; 173 + 171 174 switch (JEDEC_MFR(jedec_id)) { 172 - case CFI_MFR_MACRONIX: 173 175 case CFI_MFR_ST: /* Micron, actually */ 176 + /* Some Micron need WREN command; all will accept it */ 177 + need_wren = true; 178 + case CFI_MFR_MACRONIX: 174 179 case 0xEF /* winbond */: 180 + if (need_wren) 181 + write_enable(flash); 182 + 175 183 flash->command[0] = enable ? OPCODE_EN4B : OPCODE_EX4B; 176 - return spi_write(flash->spi, flash->command, 1); 184 + status = spi_write(flash->spi, flash->command, 1); 185 + 186 + if (need_wren) 187 + write_disable(flash); 188 + 189 + return status; 177 190 default: 178 191 /* Spansion style */ 179 192 flash->command[0] = OPCODE_BRWR;
+3 -5
drivers/mtd/nand/nand_base.c
··· 2869 2869 2870 2870 len = le16_to_cpu(p->ext_param_page_length) * 16; 2871 2871 ep = kmalloc(len, GFP_KERNEL); 2872 - if (!ep) { 2873 - ret = -ENOMEM; 2874 - goto ext_out; 2875 - } 2872 + if (!ep) 2873 + return -ENOMEM; 2876 2874 2877 2875 /* Send our own NAND_CMD_PARAM. */ 2878 2876 chip->cmdfunc(mtd, NAND_CMD_PARAM, 0, -1); ··· 2918 2920 } 2919 2921 2920 2922 pr_info("ONFI extended param page detected.\n"); 2921 - return 0; 2923 + ret = 0; 2922 2924 2923 2925 ext_out: 2924 2926 kfree(ep);
+15 -13
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
··· 180 180 return 0; 181 181 } 182 182 183 - static void calc_block_sig(struct mlx5_cmd_prot_block *block, u8 token) 183 + static void calc_block_sig(struct mlx5_cmd_prot_block *block, u8 token, 184 + int csum) 184 185 { 185 186 block->token = token; 186 - block->ctrl_sig = ~xor8_buf(block->rsvd0, sizeof(*block) - sizeof(block->data) - 2); 187 - block->sig = ~xor8_buf(block, sizeof(*block) - 1); 187 + if (csum) { 188 + block->ctrl_sig = ~xor8_buf(block->rsvd0, sizeof(*block) - 189 + sizeof(block->data) - 2); 190 + block->sig = ~xor8_buf(block, sizeof(*block) - 1); 191 + } 188 192 } 189 193 190 - static void calc_chain_sig(struct mlx5_cmd_msg *msg, u8 token) 194 + static void calc_chain_sig(struct mlx5_cmd_msg *msg, u8 token, int csum) 191 195 { 192 196 struct mlx5_cmd_mailbox *next = msg->next; 193 197 194 198 while (next) { 195 - calc_block_sig(next->buf, token); 199 + calc_block_sig(next->buf, token, csum); 196 200 next = next->next; 197 201 } 198 202 } 199 203 200 - static void set_signature(struct mlx5_cmd_work_ent *ent) 204 + static void set_signature(struct mlx5_cmd_work_ent *ent, int csum) 201 205 { 202 206 ent->lay->sig = ~xor8_buf(ent->lay, sizeof(*ent->lay)); 203 - calc_chain_sig(ent->in, ent->token); 204 - calc_chain_sig(ent->out, ent->token); 207 + calc_chain_sig(ent->in, ent->token, csum); 208 + calc_chain_sig(ent->out, ent->token, csum); 205 209 } 206 210 207 211 static void poll_timeout(struct mlx5_cmd_work_ent *ent) ··· 543 539 lay->type = MLX5_PCI_CMD_XPORT; 544 540 lay->token = ent->token; 545 541 lay->status_own = CMD_OWNER_HW; 546 - if (!cmd->checksum_disabled) 547 - set_signature(ent); 542 + set_signature(ent, !cmd->checksum_disabled); 548 543 dump_command(dev, ent, 1); 549 544 ktime_get_ts(&ent->ts1); 550 545 ··· 776 773 777 774 copy = min_t(int, size, MLX5_CMD_DATA_BLOCK_SIZE); 778 775 block = next->buf; 779 - if (xor8_buf(block, sizeof(*block)) != 0xff) 780 - return -EINVAL; 781 776 782 777 memcpy(to, block->data, copy); 783 778 to += copy; 
··· 1362 1361 goto err_map; 1363 1362 } 1364 1363 1364 + cmd->checksum_disabled = 1; 1365 1365 cmd->max_reg_cmds = (1 << cmd->log_sz) - 1; 1366 1366 cmd->bitmask = (1 << cmd->max_reg_cmds) - 1; 1367 1367 ··· 1512 1510 case MLX5_CMD_STAT_BAD_SYS_STATE_ERR: return -EIO; 1513 1511 case MLX5_CMD_STAT_BAD_RES_ERR: return -EINVAL; 1514 1512 case MLX5_CMD_STAT_RES_BUSY: return -EBUSY; 1515 - case MLX5_CMD_STAT_LIM_ERR: return -EINVAL; 1513 + case MLX5_CMD_STAT_LIM_ERR: return -ENOMEM; 1516 1514 case MLX5_CMD_STAT_BAD_RES_STATE_ERR: return -EINVAL; 1517 1515 case MLX5_CMD_STAT_IX_ERR: return -EINVAL; 1518 1516 case MLX5_CMD_STAT_NO_RES_ERR: return -EAGAIN;
+3 -1
drivers/net/ethernet/mellanox/mlx5/core/eq.c
··· 366 366 goto err_in; 367 367 } 368 368 369 + snprintf(eq->name, MLX5_MAX_EQ_NAME, "%s@pci:%s", 370 + name, pci_name(dev->pdev)); 369 371 eq->eqn = out.eq_number; 370 372 err = request_irq(table->msix_arr[vecidx].vector, mlx5_msix_handler, 0, 371 - name, eq); 373 + eq->name, eq); 372 374 if (err) 373 375 goto err_eq; 374 376
+5 -16
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 165 165 struct mlx5_cmd_set_hca_cap_mbox_in *set_ctx = NULL; 166 166 struct mlx5_cmd_query_hca_cap_mbox_in query_ctx; 167 167 struct mlx5_cmd_set_hca_cap_mbox_out set_out; 168 - struct mlx5_profile *prof = dev->profile; 169 168 u64 flags; 170 - int csum = 1; 171 169 int err; 172 170 173 171 memset(&query_ctx, 0, sizeof(query_ctx)); ··· 195 197 memcpy(&set_ctx->hca_cap, &query_out->hca_cap, 196 198 sizeof(set_ctx->hca_cap)); 197 199 198 - if (prof->mask & MLX5_PROF_MASK_CMDIF_CSUM) { 199 - csum = !!prof->cmdif_csum; 200 - flags = be64_to_cpu(set_ctx->hca_cap.flags); 201 - if (csum) 202 - flags |= MLX5_DEV_CAP_FLAG_CMDIF_CSUM; 203 - else 204 - flags &= ~MLX5_DEV_CAP_FLAG_CMDIF_CSUM; 205 - 206 - set_ctx->hca_cap.flags = cpu_to_be64(flags); 207 - } 208 - 209 200 if (dev->profile->mask & MLX5_PROF_MASK_QP_SIZE) 210 201 set_ctx->hca_cap.log_max_qp = dev->profile->log_max_qp; 211 202 203 + flags = be64_to_cpu(query_out->hca_cap.flags); 204 + /* disable checksum */ 205 + flags &= ~MLX5_DEV_CAP_FLAG_CMDIF_CSUM; 206 + 207 + set_ctx->hca_cap.flags = cpu_to_be64(flags); 212 208 memset(&set_out, 0, sizeof(set_out)); 213 209 set_ctx->hca_cap.log_uar_page_sz = cpu_to_be16(PAGE_SHIFT - 12); 214 210 set_ctx->hdr.opcode = cpu_to_be16(MLX5_CMD_OP_SET_HCA_CAP); ··· 216 224 err = mlx5_cmd_status_to_err(&set_out.hdr); 217 225 if (err) 218 226 goto query_ex; 219 - 220 - if (!csum) 221 - dev->cmd.checksum_disabled = 1; 222 227 223 228 query_ex: 224 229 kfree(query_out);
+14 -2
drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
··· 90 90 __be64 pas[0]; 91 91 }; 92 92 93 + enum { 94 + MAX_RECLAIM_TIME_MSECS = 5000, 95 + }; 96 + 93 97 static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u16 func_id) 94 98 { 95 99 struct rb_root *root = &dev->priv.page_root; ··· 283 279 int err; 284 280 int i; 285 281 282 + if (nclaimed) 283 + *nclaimed = 0; 284 + 286 285 memset(&in, 0, sizeof(in)); 287 286 outlen = sizeof(*out) + npages * sizeof(out->pas[0]); 288 287 out = mlx5_vzalloc(outlen); ··· 395 388 396 389 int mlx5_reclaim_startup_pages(struct mlx5_core_dev *dev) 397 390 { 398 - unsigned long end = jiffies + msecs_to_jiffies(5000); 391 + unsigned long end = jiffies + msecs_to_jiffies(MAX_RECLAIM_TIME_MSECS); 399 392 struct fw_page *fwp; 400 393 struct rb_node *p; 394 + int nclaimed = 0; 401 395 int err; 402 396 403 397 do { 404 398 p = rb_first(&dev->priv.page_root); 405 399 if (p) { 406 400 fwp = rb_entry(p, struct fw_page, rb_node); 407 - err = reclaim_pages(dev, fwp->func_id, optimal_reclaimed_pages(), NULL); 401 + err = reclaim_pages(dev, fwp->func_id, 402 + optimal_reclaimed_pages(), 403 + &nclaimed); 408 404 if (err) { 409 405 mlx5_core_warn(dev, "failed reclaiming pages (%d)\n", err); 410 406 return err; 411 407 } 408 + if (nclaimed) 409 + end = jiffies + msecs_to_jiffies(MAX_RECLAIM_TIME_MSECS); 412 410 } 413 411 if (time_after(jiffies, end)) { 414 412 mlx5_core_warn(dev, "FW did not return all pages. giving up...\n");
-6
drivers/of/Kconfig
··· 74 74 depends on MTD 75 75 def_bool y 76 76 77 - config OF_RESERVED_MEM 78 - depends on OF_FLATTREE && (DMA_CMA || (HAVE_GENERIC_DMA_COHERENT && HAVE_MEMBLOCK)) 79 - def_bool y 80 - help 81 - Initialization code for DMA reserved memory 82 - 83 77 endmenu # OF
-1
drivers/of/Makefile
··· 9 9 obj-$(CONFIG_OF_PCI) += of_pci.o 10 10 obj-$(CONFIG_OF_PCI_IRQ) += of_pci_irq.o 11 11 obj-$(CONFIG_OF_MTD) += of_mtd.o 12 - obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
+1 -3
drivers/of/base.c
··· 303 303 struct device_node *cpun, *cpus; 304 304 305 305 cpus = of_find_node_by_path("/cpus"); 306 - if (!cpus) { 307 - pr_warn("Missing cpus node, bailing out\n"); 306 + if (!cpus) 308 307 return NULL; 309 - } 310 308 311 309 for_each_child_of_node(cpus, cpun) { 312 310 if (of_node_cmp(cpun->type, "cpu"))
-12
drivers/of/fdt.c
··· 18 18 #include <linux/string.h> 19 19 #include <linux/errno.h> 20 20 #include <linux/slab.h> 21 - #include <linux/random.h> 22 21 23 22 #include <asm/setup.h> /* for COMMAND_LINE_SIZE */ 24 23 #ifdef CONFIG_PPC ··· 802 803 } 803 804 804 805 #endif /* CONFIG_OF_EARLY_FLATTREE */ 805 - 806 - /* Feed entire flattened device tree into the random pool */ 807 - static int __init add_fdt_randomness(void) 808 - { 809 - if (initial_boot_params) 810 - add_device_randomness(initial_boot_params, 811 - be32_to_cpu(initial_boot_params->totalsize)); 812 - 813 - return 0; 814 - } 815 - core_initcall(add_fdt_randomness);
-173
drivers/of/of_reserved_mem.c
··· 1 - /* 2 - * Device tree based initialization code for reserved memory. 3 - * 4 - * Copyright (c) 2013 Samsung Electronics Co., Ltd. 5 - * http://www.samsung.com 6 - * Author: Marek Szyprowski <m.szyprowski@samsung.com> 7 - * 8 - * This program is free software; you can redistribute it and/or 9 - * modify it under the terms of the GNU General Public License as 10 - * published by the Free Software Foundation; either version 2 of the 11 - * License or (at your optional) any later version of the license. 12 - */ 13 - 14 - #include <linux/memblock.h> 15 - #include <linux/err.h> 16 - #include <linux/of.h> 17 - #include <linux/of_fdt.h> 18 - #include <linux/of_platform.h> 19 - #include <linux/mm.h> 20 - #include <linux/sizes.h> 21 - #include <linux/mm_types.h> 22 - #include <linux/dma-contiguous.h> 23 - #include <linux/dma-mapping.h> 24 - #include <linux/of_reserved_mem.h> 25 - 26 - #define MAX_RESERVED_REGIONS 16 27 - struct reserved_mem { 28 - phys_addr_t base; 29 - unsigned long size; 30 - struct cma *cma; 31 - char name[32]; 32 - }; 33 - static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS]; 34 - static int reserved_mem_count; 35 - 36 - static int __init fdt_scan_reserved_mem(unsigned long node, const char *uname, 37 - int depth, void *data) 38 - { 39 - struct reserved_mem *rmem = &reserved_mem[reserved_mem_count]; 40 - phys_addr_t base, size; 41 - int is_cma, is_reserved; 42 - unsigned long len; 43 - const char *status; 44 - __be32 *prop; 45 - 46 - is_cma = IS_ENABLED(CONFIG_DMA_CMA) && 47 - of_flat_dt_is_compatible(node, "linux,contiguous-memory-region"); 48 - is_reserved = of_flat_dt_is_compatible(node, "reserved-memory-region"); 49 - 50 - if (!is_reserved && !is_cma) { 51 - /* ignore node and scan next one */ 52 - return 0; 53 - } 54 - 55 - status = of_get_flat_dt_prop(node, "status", &len); 56 - if (status && strcmp(status, "okay") != 0) { 57 - /* ignore disabled node nad scan next one */ 58 - return 0; 59 - } 60 - 61 - prop = 
of_get_flat_dt_prop(node, "reg", &len); 62 - if (!prop || (len < (dt_root_size_cells + dt_root_addr_cells) * 63 - sizeof(__be32))) { 64 - pr_err("Reserved mem: node %s, incorrect \"reg\" property\n", 65 - uname); 66 - /* ignore node and scan next one */ 67 - return 0; 68 - } 69 - base = dt_mem_next_cell(dt_root_addr_cells, &prop); 70 - size = dt_mem_next_cell(dt_root_size_cells, &prop); 71 - 72 - if (!size) { 73 - /* ignore node and scan next one */ 74 - return 0; 75 - } 76 - 77 - pr_info("Reserved mem: found %s, memory base %lx, size %ld MiB\n", 78 - uname, (unsigned long)base, (unsigned long)size / SZ_1M); 79 - 80 - if (reserved_mem_count == ARRAY_SIZE(reserved_mem)) 81 - return -ENOSPC; 82 - 83 - rmem->base = base; 84 - rmem->size = size; 85 - strlcpy(rmem->name, uname, sizeof(rmem->name)); 86 - 87 - if (is_cma) { 88 - struct cma *cma; 89 - if (dma_contiguous_reserve_area(size, base, 0, &cma) == 0) { 90 - rmem->cma = cma; 91 - reserved_mem_count++; 92 - if (of_get_flat_dt_prop(node, 93 - "linux,default-contiguous-region", 94 - NULL)) 95 - dma_contiguous_set_default(cma); 96 - } 97 - } else if (is_reserved) { 98 - if (memblock_remove(base, size) == 0) 99 - reserved_mem_count++; 100 - else 101 - pr_err("Failed to reserve memory for %s\n", uname); 102 - } 103 - 104 - return 0; 105 - } 106 - 107 - static struct reserved_mem *get_dma_memory_region(struct device *dev) 108 - { 109 - struct device_node *node; 110 - const char *name; 111 - int i; 112 - 113 - node = of_parse_phandle(dev->of_node, "memory-region", 0); 114 - if (!node) 115 - return NULL; 116 - 117 - name = kbasename(node->full_name); 118 - for (i = 0; i < reserved_mem_count; i++) 119 - if (strcmp(name, reserved_mem[i].name) == 0) 120 - return &reserved_mem[i]; 121 - return NULL; 122 - } 123 - 124 - /** 125 - * of_reserved_mem_device_init() - assign reserved memory region to given device 126 - * 127 - * This function assign memory region pointed by "memory-region" device tree 128 - * property to the given 
device. 129 - */ 130 - void of_reserved_mem_device_init(struct device *dev) 131 - { 132 - struct reserved_mem *region = get_dma_memory_region(dev); 133 - if (!region) 134 - return; 135 - 136 - if (region->cma) { 137 - dev_set_cma_area(dev, region->cma); 138 - pr_info("Assigned CMA %s to %s device\n", region->name, 139 - dev_name(dev)); 140 - } else { 141 - if (dma_declare_coherent_memory(dev, region->base, region->base, 142 - region->size, DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE) != 0) 143 - pr_info("Declared reserved memory %s to %s device\n", 144 - region->name, dev_name(dev)); 145 - } 146 - } 147 - 148 - /** 149 - * of_reserved_mem_device_release() - release reserved memory device structures 150 - * 151 - * This function releases structures allocated for memory region handling for 152 - * the given device. 153 - */ 154 - void of_reserved_mem_device_release(struct device *dev) 155 - { 156 - struct reserved_mem *region = get_dma_memory_region(dev); 157 - if (!region && !region->cma) 158 - dma_release_declared_memory(dev); 159 - } 160 - 161 - /** 162 - * early_init_dt_scan_reserved_mem() - create reserved memory regions 163 - * 164 - * This function grabs memory from early allocator for device exclusive use 165 - * defined in device tree structures. It should be called by arch specific code 166 - * once the early allocator (memblock) has been activated and all other 167 - * subsystems have already allocated/reserved memory. 168 - */ 169 - void __init early_init_dt_scan_reserved_mem(void) 170 - { 171 - of_scan_flat_dt_by_path("/memory/reserved-memory", 172 - fdt_scan_reserved_mem, NULL); 173 - }
-4
drivers/of/platform.c
··· 21 21 #include <linux/of_device.h> 22 22 #include <linux/of_irq.h> 23 23 #include <linux/of_platform.h> 24 - #include <linux/of_reserved_mem.h> 25 24 #include <linux/platform_device.h> 26 25 27 26 const struct of_device_id of_default_bus_match_table[] = { ··· 218 219 dev->dev.bus = &platform_bus_type; 219 220 dev->dev.platform_data = platform_data; 220 221 221 - of_reserved_mem_device_init(&dev->dev); 222 - 223 222 /* We do not fill the DMA ops for platform devices by default. 224 223 * This is currently the responsibility of the platform code 225 224 * to do such, possibly using a device notifier ··· 225 228 226 229 if (of_device_add(dev) != 0) { 227 230 platform_device_put(dev); 228 - of_reserved_mem_device_release(&dev->dev); 229 231 return NULL; 230 232 } 231 233
+5 -3
drivers/pci/hotplug/acpiphp_glue.c
··· 994 994 995 995 /* 996 996 * This bridge should have been registered as a hotplug function 997 - * under its parent, so the context has to be there. If not, we 998 - * are in deep goo. 997 + * under its parent, so the context should be there, unless the 998 + * parent is going to be handled by pciehp, in which case this 999 + * bridge is not interesting to us either. 999 1000 */ 1000 1001 mutex_lock(&acpiphp_context_lock); 1001 1002 context = acpiphp_get_context(handle); 1002 - if (WARN_ON(!context)) { 1003 + if (!context) { 1003 1004 mutex_unlock(&acpiphp_context_lock); 1004 1005 put_device(&bus->dev); 1006 + pci_dev_put(bridge->pci_dev); 1005 1007 kfree(bridge); 1006 1008 return; 1007 1009 }
+5 -3
drivers/s390/char/sclp_cmd.c
··· 145 145 146 146 if (sccb->header.response_code != 0x20) 147 147 return 0; 148 - if (sccb->sclp_send_mask & (EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK)) 149 - return 1; 150 - return 0; 148 + if (!(sccb->sclp_send_mask & (EVTYP_OPCMD_MASK | EVTYP_PMSGCMD_MASK))) 149 + return 0; 150 + if (!(sccb->sclp_receive_mask & (EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK))) 151 + return 0; 152 + return 1; 151 153 } 152 154 153 155 bool __init sclp_has_vt220(void)
+1 -1
drivers/s390/char/tty3270.c
··· 810 810 struct winsize ws; 811 811 812 812 screen = tty3270_alloc_screen(tp->n_rows, tp->n_cols); 813 - if (!screen) 813 + if (IS_ERR(screen)) 814 814 return; 815 815 /* Switch to new output size */ 816 816 spin_lock_bh(&tp->view.lock);
+2 -1
drivers/spi/spi-atmel.c
··· 1583 1583 /* Initialize the hardware */ 1584 1584 ret = clk_prepare_enable(clk); 1585 1585 if (ret) 1586 - goto out_unmap_regs; 1586 + goto out_free_irq; 1587 1587 spi_writel(as, CR, SPI_BIT(SWRST)); 1588 1588 spi_writel(as, CR, SPI_BIT(SWRST)); /* AT91SAM9263 Rev B workaround */ 1589 1589 if (as->caps.has_wdrbt) { ··· 1614 1614 spi_writel(as, CR, SPI_BIT(SWRST)); 1615 1615 spi_writel(as, CR, SPI_BIT(SWRST)); /* AT91SAM9263 Rev B workaround */ 1616 1616 clk_disable_unprepare(clk); 1617 + out_free_irq: 1617 1618 free_irq(irq, master); 1618 1619 out_unmap_regs: 1619 1620 iounmap(as->regs);
-3
drivers/spi/spi-clps711x.c
··· 226 226 dev_name(&pdev->dev), hw); 227 227 if (ret) { 228 228 dev_err(&pdev->dev, "Can't request IRQ\n"); 229 - clk_put(hw->spi_clk); 230 229 goto clk_out; 231 230 } 232 231 ··· 246 247 gpio_free(hw->chipselect[i]); 247 248 248 249 spi_master_put(master); 249 - kfree(master); 250 250 251 251 return ret; 252 252 } ··· 261 263 gpio_free(hw->chipselect[i]); 262 264 263 265 spi_unregister_master(master); 264 - kfree(master); 265 266 266 267 return 0; 267 268 }
+2 -8
drivers/spi/spi-fsl-dspi.c
··· 476 476 master->bus_num = bus_num; 477 477 478 478 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 479 - if (!res) { 480 - dev_err(&pdev->dev, "can't get platform resource\n"); 481 - ret = -EINVAL; 482 - goto out_master_put; 483 - } 484 - 485 479 dspi->base = devm_ioremap_resource(&pdev->dev, res); 486 - if (!dspi->base) { 487 - ret = -EINVAL; 480 + if (IS_ERR(dspi->base)) { 481 + ret = PTR_ERR(dspi->base); 488 482 goto out_master_put; 489 483 } 490 484
+3 -1
drivers/spi/spi-mpc512x-psc.c
··· 522 522 psc_num = master->bus_num; 523 523 snprintf(clk_name, sizeof(clk_name), "psc%d_mclk", psc_num); 524 524 clk = devm_clk_get(dev, clk_name); 525 - if (IS_ERR(clk)) 525 + if (IS_ERR(clk)) { 526 + ret = PTR_ERR(clk); 526 527 goto free_irq; 528 + } 527 529 ret = clk_prepare_enable(clk); 528 530 if (ret) 529 531 goto free_irq;
+10 -1
drivers/spi/spi-pxa2xx.c
··· 546 546 if (pm_runtime_suspended(&drv_data->pdev->dev)) 547 547 return IRQ_NONE; 548 548 549 - sccr1_reg = read_SSCR1(reg); 549 + /* 550 + * If the device is not yet in RPM suspended state and we get an 551 + * interrupt that is meant for another device, check if status bits 552 + * are all set to one. That means that the device is already 553 + * powered off. 554 + */ 550 555 status = read_SSSR(reg); 556 + if (status == ~0) 557 + return IRQ_NONE; 558 + 559 + sccr1_reg = read_SSCR1(reg); 551 560 552 561 /* Ignore possible writes if we don't need to write */ 553 562 if (!(sccr1_reg & SSCR1_TIE))
+2 -2
drivers/spi/spi-s3c64xx.c
··· 1428 1428 S3C64XX_SPI_INT_TX_OVERRUN_EN | S3C64XX_SPI_INT_TX_UNDERRUN_EN, 1429 1429 sdd->regs + S3C64XX_SPI_INT_EN); 1430 1430 1431 + pm_runtime_enable(&pdev->dev); 1432 + 1431 1433 if (spi_register_master(master)) { 1432 1434 dev_err(&pdev->dev, "cannot register SPI master\n"); 1433 1435 ret = -EBUSY; ··· 1441 1439 dev_dbg(&pdev->dev, "\tIOmem=[%pR]\tDMA=[Rx-%d, Tx-%d]\n", 1442 1440 mem_res, 1443 1441 sdd->rx_dma.dmach, sdd->tx_dma.dmach); 1444 - 1445 - pm_runtime_enable(&pdev->dev); 1446 1442 1447 1443 return 0; 1448 1444
+2 -2
drivers/spi/spi-sh-hspi.c
··· 296 296 goto error1; 297 297 } 298 298 299 + pm_runtime_enable(&pdev->dev); 300 + 299 301 master->num_chipselect = 1; 300 302 master->bus_num = pdev->id; 301 303 master->setup = hspi_setup; ··· 310 308 dev_err(&pdev->dev, "spi_register_master error.\n"); 311 309 goto error1; 312 310 } 313 - 314 - pm_runtime_enable(&pdev->dev); 315 311 316 312 return 0; 317 313
-3
drivers/tty/serial/imx.c
··· 1912 1912 1913 1913 sport->devdata = of_id->data; 1914 1914 1915 - if (of_device_is_stdout_path(np)) 1916 - add_preferred_console(imx_reg.cons->name, sport->port.line, 0); 1917 - 1918 1915 return 0; 1919 1916 } 1920 1917 #else
+3 -2
drivers/tty/serial/vt8500_serial.c
··· 561 561 if (!mmres || !irqres) 562 562 return -ENODEV; 563 563 564 - if (np) 564 + if (np) { 565 565 port = of_alias_get_id(np, "serial"); 566 566 if (port >= VT8500_MAX_PORTS) 567 567 port = -1; 568 - else 568 + } else { 569 569 port = -1; 570 + } 570 571 571 572 if (port < 0) { 572 573 /* calculate the port id */
+4 -2
drivers/usb/chipidea/host.c
··· 100 100 { 101 101 struct usb_hcd *hcd = ci->hcd; 102 102 103 - usb_remove_hcd(hcd); 104 - usb_put_hcd(hcd); 103 + if (hcd) { 104 + usb_remove_hcd(hcd); 105 + usb_put_hcd(hcd); 106 + } 105 107 if (ci->platdata->reg_vbus) 106 108 regulator_disable(ci->platdata->reg_vbus); 107 109 }
+6
drivers/usb/core/quirks.c
··· 97 97 /* Alcor Micro Corp. Hub */ 98 98 { USB_DEVICE(0x058f, 0x9254), .driver_info = USB_QUIRK_RESET_RESUME }, 99 99 100 + /* MicroTouch Systems touchscreen */ 101 + { USB_DEVICE(0x0596, 0x051e), .driver_info = USB_QUIRK_RESET_RESUME }, 102 + 100 103 /* appletouch */ 101 104 { USB_DEVICE(0x05ac, 0x021a), .driver_info = USB_QUIRK_RESET_RESUME }, 102 105 ··· 132 129 133 130 /* Broadcom BCM92035DGROM BT dongle */ 134 131 { USB_DEVICE(0x0a5c, 0x2021), .driver_info = USB_QUIRK_RESET_RESUME }, 132 + 133 + /* MAYA44USB sound device */ 134 + { USB_DEVICE(0x0a92, 0x0091), .driver_info = USB_QUIRK_RESET_RESUME }, 135 135 136 136 /* Action Semiconductor flash disk */ 137 137 { USB_DEVICE(0x10d6, 0x2200), .driver_info =
+2 -2
drivers/usb/host/pci-quirks.c
··· 799 799 * switchable ports. 800 800 */ 801 801 pci_write_config_dword(xhci_pdev, USB_INTEL_USB3_PSSEN, 802 - cpu_to_le32(ports_available)); 802 + ports_available); 803 803 804 804 pci_read_config_dword(xhci_pdev, USB_INTEL_USB3_PSSEN, 805 805 &ports_available); ··· 821 821 * host. 822 822 */ 823 823 pci_write_config_dword(xhci_pdev, USB_INTEL_XUSB2PR, 824 - cpu_to_le32(ports_available)); 824 + ports_available); 825 825 826 826 pci_read_config_dword(xhci_pdev, USB_INTEL_XUSB2PR, 827 827 &ports_available);
-26
drivers/usb/host/xhci-hub.c
··· 1157 1157 t1 = xhci_port_state_to_neutral(t1); 1158 1158 if (t1 != t2) 1159 1159 xhci_writel(xhci, t2, port_array[port_index]); 1160 - 1161 - if (hcd->speed != HCD_USB3) { 1162 - /* enable remote wake up for USB 2.0 */ 1163 - __le32 __iomem *addr; 1164 - u32 tmp; 1165 - 1166 - /* Get the port power control register address. */ 1167 - addr = port_array[port_index] + PORTPMSC; 1168 - tmp = xhci_readl(xhci, addr); 1169 - tmp |= PORT_RWE; 1170 - xhci_writel(xhci, tmp, addr); 1171 - } 1172 1160 } 1173 1161 hcd->state = HC_STATE_SUSPENDED; 1174 1162 bus_state->next_statechange = jiffies + msecs_to_jiffies(10); ··· 1235 1247 xhci_ring_device(xhci, slot_id); 1236 1248 } else 1237 1249 xhci_writel(xhci, temp, port_array[port_index]); 1238 - 1239 - if (hcd->speed != HCD_USB3) { 1240 - /* disable remote wake up for USB 2.0 */ 1241 - __le32 __iomem *addr; 1242 - u32 tmp; 1243 - 1244 - /* Add one to the port status register address to get 1245 - * the port power control register address. 1246 - */ 1247 - addr = port_array[port_index] + PORTPMSC; 1248 - tmp = xhci_readl(xhci, addr); 1249 - tmp &= ~PORT_RWE; 1250 - xhci_writel(xhci, tmp, addr); 1251 - } 1252 1250 } 1253 1251 1254 1252 (void) xhci_readl(xhci, &xhci->op_regs->command);
+25
drivers/usb/host/xhci-pci.c
··· 35 35 #define PCI_VENDOR_ID_ETRON 0x1b6f 36 36 #define PCI_DEVICE_ID_ASROCK_P67 0x7023 37 37 38 + #define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI 0x8c31 39 + #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31 40 + 38 41 static const char hcd_name[] = "xhci_hcd"; 39 42 40 43 /* called after powerup, by probe or system-pm "wakeup" */ ··· 71 68 xhci_dbg_trace(xhci, trace_xhci_dbg_quirks, 72 69 "QUIRK: Fresco Logic xHC needs configure" 73 70 " endpoint cmd after reset endpoint"); 71 + } 72 + if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK && 73 + pdev->revision == 0x4) { 74 + xhci->quirks |= XHCI_SLOW_SUSPEND; 75 + xhci_dbg_trace(xhci, trace_xhci_dbg_quirks, 76 + "QUIRK: Fresco Logic xHC revision %u" 77 + "must be suspended extra slowly", 78 + pdev->revision); 74 79 } 75 80 /* Fresco Logic confirms: all revisions of this chip do not 76 81 * support MSI, even though some of them claim to in their PCI ··· 120 109 */ 121 110 xhci->quirks |= XHCI_SPURIOUS_REBOOT; 122 111 xhci->quirks |= XHCI_AVOID_BEI; 112 + } 113 + if (pdev->vendor == PCI_VENDOR_ID_INTEL && 114 + (pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI || 115 + pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI)) { 116 + /* Workaround for occasional spurious wakeups from S5 (or 117 + * any other sleep) on Haswell machines with LPT and LPT-LP 118 + * with the new Intel BIOS 119 + */ 120 + xhci->quirks |= XHCI_SPURIOUS_WAKEUP; 123 121 } 124 122 if (pdev->vendor == PCI_VENDOR_ID_ETRON && 125 123 pdev->device == PCI_DEVICE_ID_ASROCK_P67) { ··· 237 217 usb_put_hcd(xhci->shared_hcd); 238 218 } 239 219 usb_hcd_pci_remove(dev); 220 + 221 + /* Workaround for spurious wakeups at shutdown with HSW */ 222 + if (xhci->quirks & XHCI_SPURIOUS_WAKEUP) 223 + pci_set_power_state(dev, PCI_D3hot); 224 + 240 225 kfree(xhci); 241 226 } 242 227
+13 -1
drivers/usb/host/xhci.c
··· 730 730 731 731 spin_lock_irq(&xhci->lock); 732 732 xhci_halt(xhci); 733 + /* Workaround for spurious wakeups at shutdown with HSW */ 734 + if (xhci->quirks & XHCI_SPURIOUS_WAKEUP) 735 + xhci_reset(xhci); 733 736 spin_unlock_irq(&xhci->lock); 734 737 735 738 xhci_cleanup_msix(xhci); ··· 740 737 xhci_dbg_trace(xhci, trace_xhci_dbg_init, 741 738 "xhci_shutdown completed - status = %x", 742 739 xhci_readl(xhci, &xhci->op_regs->status)); 740 + 741 + /* Yet another workaround for spurious wakeups at shutdown with HSW */ 742 + if (xhci->quirks & XHCI_SPURIOUS_WAKEUP) 743 + pci_set_power_state(to_pci_dev(hcd->self.controller), PCI_D3hot); 743 744 } 744 745 745 746 #ifdef CONFIG_PM ··· 846 839 int xhci_suspend(struct xhci_hcd *xhci) 847 840 { 848 841 int rc = 0; 842 + unsigned int delay = XHCI_MAX_HALT_USEC; 849 843 struct usb_hcd *hcd = xhci_to_hcd(xhci); 850 844 u32 command; 851 845 ··· 869 861 command = xhci_readl(xhci, &xhci->op_regs->command); 870 862 command &= ~CMD_RUN; 871 863 xhci_writel(xhci, command, &xhci->op_regs->command); 864 + 865 + /* Some chips from Fresco Logic need an extraordinary delay */ 866 + delay *= (xhci->quirks & XHCI_SLOW_SUSPEND) ? 10 : 1; 867 + 872 868 if (xhci_handshake(xhci, &xhci->op_regs->status, 873 - STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC)) { 869 + STS_HALT, STS_HALT, delay)) { 874 870 xhci_warn(xhci, "WARN: xHC CMD_RUN timeout\n"); 875 871 spin_unlock_irq(&xhci->lock); 876 872 return -ETIMEDOUT;
+2
drivers/usb/host/xhci.h
··· 1548 1548 #define XHCI_COMP_MODE_QUIRK (1 << 14) 1549 1549 #define XHCI_AVOID_BEI (1 << 15) 1550 1550 #define XHCI_PLAT (1 << 16) 1551 + #define XHCI_SLOW_SUSPEND (1 << 17) 1552 + #define XHCI_SPURIOUS_WAKEUP (1 << 18) 1551 1553 unsigned int num_active_eps; 1552 1554 unsigned int limit_active_eps; 1553 1555 /* There are two roothubs to keep track of bus suspend info for */
+1 -1
drivers/usb/misc/Kconfig
··· 246 246 config USB_HSIC_USB3503 247 247 tristate "USB3503 HSIC to USB20 Driver" 248 248 depends on I2C 249 - select REGMAP 249 + select REGMAP_I2C 250 250 help 251 251 This option enables support for SMSC USB3503 HSIC to USB 2.0 Driver.
+46
drivers/usb/musb/musb_core.c
··· 922 922 } 923 923 924 924 /* 925 + * Program the HDRC to start (enable interrupts, dma, etc.). 926 + */ 927 + void musb_start(struct musb *musb) 928 + { 929 + void __iomem *regs = musb->mregs; 930 + u8 devctl = musb_readb(regs, MUSB_DEVCTL); 931 + 932 + dev_dbg(musb->controller, "<== devctl %02x\n", devctl); 933 + 934 + /* Set INT enable registers, enable interrupts */ 935 + musb->intrtxe = musb->epmask; 936 + musb_writew(regs, MUSB_INTRTXE, musb->intrtxe); 937 + musb->intrrxe = musb->epmask & 0xfffe; 938 + musb_writew(regs, MUSB_INTRRXE, musb->intrrxe); 939 + musb_writeb(regs, MUSB_INTRUSBE, 0xf7); 940 + 941 + musb_writeb(regs, MUSB_TESTMODE, 0); 942 + 943 + /* put into basic highspeed mode and start session */ 944 + musb_writeb(regs, MUSB_POWER, MUSB_POWER_ISOUPDATE 945 + | MUSB_POWER_HSENAB 946 + /* ENSUSPEND wedges tusb */ 947 + /* | MUSB_POWER_ENSUSPEND */ 948 + ); 949 + 950 + musb->is_active = 0; 951 + devctl = musb_readb(regs, MUSB_DEVCTL); 952 + devctl &= ~MUSB_DEVCTL_SESSION; 953 + 954 + /* session started after: 955 + * (a) ID-grounded irq, host mode; 956 + * (b) vbus present/connect IRQ, peripheral mode; 957 + * (c) peripheral initiates, using SRP 958 + */ 959 + if (musb->port_mode != MUSB_PORT_MODE_HOST && 960 + (devctl & MUSB_DEVCTL_VBUS) == MUSB_DEVCTL_VBUS) { 961 + musb->is_active = 1; 962 + } else { 963 + devctl |= MUSB_DEVCTL_SESSION; 964 + } 965 + 966 + musb_platform_enable(musb); 967 + musb_writeb(regs, MUSB_DEVCTL, devctl); 968 + } 969 + 970 + /* 925 971 * Make the HDRC stop (disable interrupts, etc.); 926 972 * reversible by musb_start 927 973 * called on gadget driver unregister
+1
drivers/usb/musb/musb_core.h
··· 503 503 extern const char musb_driver_name[]; 504 504 505 505 extern void musb_stop(struct musb *musb); 506 + extern void musb_start(struct musb *musb); 506 507 507 508 extern void musb_write_fifo(struct musb_hw_ep *ep, u16 len, const u8 *src); 508 509 extern void musb_read_fifo(struct musb_hw_ep *ep, u16 len, u8 *dst);
+3
drivers/usb/musb/musb_gadget.c
··· 1853 1853 musb->gadget_driver = driver; 1854 1854 1855 1855 spin_lock_irqsave(&musb->lock, flags); 1856 + musb->is_active = 1; 1856 1857 1857 1858 otg_set_peripheral(otg, &musb->g); 1858 1859 musb->xceiv->state = OTG_STATE_B_IDLE; 1859 1860 spin_unlock_irqrestore(&musb->lock, flags); 1861 + 1862 + musb_start(musb); 1860 1863 1861 1864 /* REVISIT: funcall to other code, which also 1862 1865 * handles power budgeting ... this way also
-46
drivers/usb/musb/musb_virthub.c
··· 44 44 45 45 #include "musb_core.h" 46 46 47 - /* 48 - * Program the HDRC to start (enable interrupts, dma, etc.). 49 - */ 50 - static void musb_start(struct musb *musb) 51 - { 52 - void __iomem *regs = musb->mregs; 53 - u8 devctl = musb_readb(regs, MUSB_DEVCTL); 54 - 55 - dev_dbg(musb->controller, "<== devctl %02x\n", devctl); 56 - 57 - /* Set INT enable registers, enable interrupts */ 58 - musb->intrtxe = musb->epmask; 59 - musb_writew(regs, MUSB_INTRTXE, musb->intrtxe); 60 - musb->intrrxe = musb->epmask & 0xfffe; 61 - musb_writew(regs, MUSB_INTRRXE, musb->intrrxe); 62 - musb_writeb(regs, MUSB_INTRUSBE, 0xf7); 63 - 64 - musb_writeb(regs, MUSB_TESTMODE, 0); 65 - 66 - /* put into basic highspeed mode and start session */ 67 - musb_writeb(regs, MUSB_POWER, MUSB_POWER_ISOUPDATE 68 - | MUSB_POWER_HSENAB 69 - /* ENSUSPEND wedges tusb */ 70 - /* | MUSB_POWER_ENSUSPEND */ 71 - ); 72 - 73 - musb->is_active = 0; 74 - devctl = musb_readb(regs, MUSB_DEVCTL); 75 - devctl &= ~MUSB_DEVCTL_SESSION; 76 - 77 - /* session started after: 78 - * (a) ID-grounded irq, host mode; 79 - * (b) vbus present/connect IRQ, peripheral mode; 80 - * (c) peripheral initiates, using SRP 81 - */ 82 - if (musb->port_mode != MUSB_PORT_MODE_HOST && 83 - (devctl & MUSB_DEVCTL_VBUS) == MUSB_DEVCTL_VBUS) { 84 - musb->is_active = 1; 85 - } else { 86 - devctl |= MUSB_DEVCTL_SESSION; 87 - } 88 - 89 - musb_platform_enable(musb); 90 - musb_writeb(regs, MUSB_DEVCTL, devctl); 91 - } 92 - 93 47 static void musb_port_suspend(struct musb *musb, bool do_suspend) 94 48 { 95 49 struct usb_otg *otg = musb->xceiv->otg;
+224 -1
drivers/usb/serial/option.c
··· 451 451 #define CHANGHONG_VENDOR_ID 0x2077 452 452 #define CHANGHONG_PRODUCT_CH690 0x7001 453 453 454 + /* Inovia */ 455 + #define INOVIA_VENDOR_ID 0x20a6 456 + #define INOVIA_SEW858 0x1105 457 + 454 458 /* some devices interfaces need special handling due to a number of reasons */ 455 459 enum option_blacklist_reason { 456 460 OPTION_BLACKLIST_NONE = 0, ··· 693 689 { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x7A) }, 694 690 { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x7B) }, 695 691 { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x7C) }, 692 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x01) }, 693 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x02) }, 694 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x03) }, 695 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x04) }, 696 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x05) }, 697 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x06) }, 698 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0A) }, 699 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0B) }, 700 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0D) }, 701 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0E) }, 702 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0F) }, 703 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x10) }, 704 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x12) }, 705 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x13) }, 706 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x14) }, 707 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x15) }, 708 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x17) }, 709 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x18) }, 710 + { 
USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x19) }, 711 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x1A) }, 712 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x1B) }, 713 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x1C) }, 714 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x31) }, 715 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x32) }, 716 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x33) }, 717 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x34) }, 718 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x35) }, 719 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x36) }, 720 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3A) }, 721 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3B) }, 722 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3D) }, 723 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3E) }, 724 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3F) }, 725 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x48) }, 726 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x49) }, 727 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x4A) }, 728 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x4B) }, 729 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x4C) }, 730 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x61) }, 731 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x62) }, 732 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x63) }, 733 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x64) }, 734 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x65) }, 735 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x66) }, 736 + 
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6A) }, 737 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6B) }, 738 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6D) }, 739 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6E) }, 740 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6F) }, 741 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x78) }, 742 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x79) }, 743 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x7A) }, 744 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x7B) }, 745 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x7C) }, 746 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x01) }, 747 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x02) }, 748 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x03) }, 749 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x04) }, 750 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x05) }, 751 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x06) }, 752 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0A) }, 753 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0B) }, 754 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0D) }, 755 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0E) }, 756 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0F) }, 757 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x10) }, 758 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x12) }, 759 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x13) }, 760 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x14) }, 761 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x15) }, 762 
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x17) }, 763 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x18) }, 764 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x19) }, 765 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x1A) }, 766 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x1B) }, 767 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x1C) }, 768 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x31) }, 769 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x32) }, 770 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x33) }, 771 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x34) }, 772 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x35) }, 773 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x36) }, 774 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3A) }, 775 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3B) }, 776 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3D) }, 777 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3E) }, 778 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3F) }, 779 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x48) }, 780 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x49) }, 781 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x4A) }, 782 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x4B) }, 783 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x4C) }, 784 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x61) }, 785 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x62) }, 786 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x63) }, 787 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x64) }, 
788 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x65) }, 789 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x66) }, 790 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6A) }, 791 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6B) }, 792 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6D) }, 793 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6E) }, 794 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6F) }, 795 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x78) }, 796 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x79) }, 797 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x7A) }, 798 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x7B) }, 799 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x7C) }, 800 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x01) }, 801 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x02) }, 802 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x03) }, 803 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x04) }, 804 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x05) }, 805 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x06) }, 806 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0A) }, 807 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0B) }, 808 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0D) }, 809 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0E) }, 810 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0F) }, 811 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x10) }, 812 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x12) }, 813 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x13) 
}, 814 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x14) }, 815 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x15) }, 816 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x17) }, 817 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x18) }, 818 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x19) }, 819 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x1A) }, 820 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x1B) }, 821 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x1C) }, 822 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x31) }, 823 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x32) }, 824 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x33) }, 825 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x34) }, 826 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x35) }, 827 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x36) }, 828 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3A) }, 829 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3B) }, 830 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3D) }, 831 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3E) }, 832 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3F) }, 833 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x48) }, 834 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x49) }, 835 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x4A) }, 836 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x4B) }, 837 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x4C) }, 838 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x61) }, 839 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 
0x62) }, 840 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x63) }, 841 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x64) }, 842 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x65) }, 843 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x66) }, 844 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6A) }, 845 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6B) }, 846 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6D) }, 847 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6E) }, 848 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6F) }, 849 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x78) }, 850 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x79) }, 851 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x7A) }, 852 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x7B) }, 853 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x7C) }, 854 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x01) }, 855 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x02) }, 856 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x03) }, 857 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x04) }, 858 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x05) }, 859 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x06) }, 860 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0A) }, 861 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0B) }, 862 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0D) }, 863 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0E) }, 864 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0F) }, 865 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 
0x06, 0x10) }, 866 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x12) }, 867 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x13) }, 868 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x14) }, 869 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x15) }, 870 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x17) }, 871 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x18) }, 872 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x19) }, 873 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x1A) }, 874 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x1B) }, 875 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x1C) }, 876 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x31) }, 877 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x32) }, 878 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x33) }, 879 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x34) }, 880 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x35) }, 881 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x36) }, 882 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3A) }, 883 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3B) }, 884 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3D) }, 885 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3E) }, 886 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3F) }, 887 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x48) }, 888 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x49) }, 889 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x4A) }, 890 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x4B) }, 891 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 
0xff, 0x06, 0x4C) }, 892 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x61) }, 893 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x62) }, 894 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x63) }, 895 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x64) }, 896 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x65) }, 897 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x66) }, 898 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6A) }, 899 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6B) }, 900 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6D) }, 901 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6E) }, 902 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6F) }, 903 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x78) }, 904 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x79) }, 905 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7A) }, 906 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7B) }, 907 + { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7C) }, 696 908 697 909 698 910 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_V640) }, ··· 1477 1257 1478 1258 { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100) }, 1479 1259 { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD145) }, 1480 - { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD200) }, 1260 + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD200), 1261 + .driver_info = (kernel_ulong_t)&net_intf6_blacklist 1262 + }, 1481 1263 { USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */ 1482 1264 { USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_GT_B3730, USB_CLASS_CDC_DATA, 0x00, 0x00) }, /* Samsung GT-B3730 LTE USB modem.*/ 1483 1265 { 
USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM600) }, ··· 1567 1345 { USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x00, 0x00) }, 1568 1346 { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */ 1569 1347 { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */ 1348 + { USB_DEVICE(INOVIA_VENDOR_ID, INOVIA_SEW858) }, 1570 1349 { } /* Terminating entry */ 1571 1350 }; 1572 1351 MODULE_DEVICE_TABLE(usb, option_ids);
+1
drivers/usb/serial/ti_usb_3410_5052.c
··· 190 190 { USB_DEVICE(IBM_VENDOR_ID, IBM_454B_PRODUCT_ID) }, 191 191 { USB_DEVICE(IBM_VENDOR_ID, IBM_454C_PRODUCT_ID) }, 192 192 { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_PRODUCT_ID) }, 193 + { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_STRIP_PORT_ID) }, 193 194 { USB_DEVICE(TI_VENDOR_ID, FRI2_PRODUCT_ID) }, 194 195 { } /* terminator */ 195 196 };
+4 -1
drivers/usb/storage/scsiglue.c
··· 211 211 /* 212 212 * Many devices do not respond properly to READ_CAPACITY_16. 213 213 * Tell the SCSI layer to try READ_CAPACITY_10 first. 214 + * However some USB 3.0 drive enclosures return capacity 215 + * modulo 2TB. Those must use READ_CAPACITY_16 214 216 */ 215 - sdev->try_rc_10_first = 1; 217 + if (!(us->fflags & US_FL_NEEDS_CAP16)) 218 + sdev->try_rc_10_first = 1; 216 219 217 220 /* assume SPC3 or latter devices support sense size > 18 */ 218 221 if (sdev->scsi_level > SCSI_SPC_2)
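The scsiglue hunk above works around enclosures whose capacity wraps at 2 TB: READ CAPACITY(10) returns the last LBA in a 32-bit field, so with 512-byte sectors anything at or above 2 TiB is reported modulo 2 TiB, while READ CAPACITY(16) uses a 64-bit field. A small illustrative model of the truncation (not the kernel's SCSI code; function names are made up):

```c
#include <stdint.h>

/* READ CAPACITY(10): last LBA squeezed into 32 bits, silently wraps */
static uint32_t rc10_last_lba(uint64_t total_sectors)
{
	return (uint32_t)(total_sectors - 1);
}

/* READ CAPACITY(16): full 64-bit last LBA, no wrap */
static uint64_t rc16_last_lba(uint64_t total_sectors)
{
	return total_sectors - 1;
}
```

With a 3 TiB drive (`3ULL << 31` sectors of 512 bytes), RC10 reports a last LBA of `0x7fffffff`, i.e. the drive shows up as 1 TiB, which is why affected devices get the `US_FL_NEEDS_CAP16` flag.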
+7
drivers/usb/storage/unusual_devs.h
··· 1925 1925 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1926 1926 US_FL_IGNORE_RESIDUE ), 1927 1927 1928 + /* Reported by Oliver Neukum <oneukum@suse.com> */ 1929 + UNUSUAL_DEV( 0x174c, 0x55aa, 0x0100, 0x0100, 1930 + "ASMedia", 1931 + "AS2105", 1932 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1933 + US_FL_NEEDS_CAP16), 1934 + 1928 1935 /* Reported by Jesse Feddema <jdfeddema@gmail.com> */ 1929 1936 UNUSUAL_DEV( 0x177f, 0x0400, 0x0000, 0x0000, 1930 1937 "Yarvik",
+21 -19
drivers/vfio/vfio_iommu_type1.c
··· 545 545 long npage; 546 546 int ret = 0, prot = 0; 547 547 uint64_t mask; 548 + struct vfio_dma *dma = NULL; 549 + unsigned long pfn; 548 550 549 551 end = map->iova + map->size; 550 552 ··· 589 587 } 590 588 591 589 for (iova = map->iova; iova < end; iova += size, vaddr += size) { 592 - struct vfio_dma *dma = NULL; 593 - unsigned long pfn; 594 590 long i; 595 591 596 592 /* Pin a contiguous chunk of memory */ ··· 597 597 if (npage <= 0) { 598 598 WARN_ON(!npage); 599 599 ret = (int)npage; 600 - break; 600 + goto out; 601 601 } 602 602 603 603 /* Verify pages are not already mapped */ 604 604 for (i = 0; i < npage; i++) { 605 605 if (iommu_iova_to_phys(iommu->domain, 606 606 iova + (i << PAGE_SHIFT))) { 607 - vfio_unpin_pages(pfn, npage, prot, true); 608 607 ret = -EBUSY; 609 - break; 608 + goto out_unpin; 610 609 } 611 610 } 612 611 ··· 615 616 if (ret) { 616 617 if (ret != -EBUSY || 617 618 map_try_harder(iommu, iova, pfn, npage, prot)) { 618 - vfio_unpin_pages(pfn, npage, prot, true); 619 - break; 619 + goto out_unpin; 620 620 } 621 621 } 622 622 ··· 670 672 dma = kzalloc(sizeof(*dma), GFP_KERNEL); 671 673 if (!dma) { 672 674 iommu_unmap(iommu->domain, iova, size); 673 - vfio_unpin_pages(pfn, npage, prot, true); 674 675 ret = -ENOMEM; 675 - break; 676 + goto out_unpin; 676 677 } 677 678 678 679 dma->size = size; ··· 682 685 } 683 686 } 684 687 685 - if (ret) { 686 - struct vfio_dma *tmp; 687 - iova = map->iova; 688 - size = map->size; 689 - while ((tmp = vfio_find_dma(iommu, iova, size))) { 690 - int r = vfio_remove_dma_overlap(iommu, iova, 691 - &size, tmp); 692 - if (WARN_ON(r || !size)) 693 - break; 694 - } 688 + WARN_ON(ret); 689 + mutex_unlock(&iommu->lock); 690 + return ret; 691 + 692 + out_unpin: 693 + vfio_unpin_pages(pfn, npage, prot, true); 694 + 695 + out: 696 + iova = map->iova; 697 + size = map->size; 698 + while ((dma = vfio_find_dma(iommu, iova, size))) { 699 + int r = vfio_remove_dma_overlap(iommu, iova, 700 + &size, dma); 701 + if (WARN_ON(r || !size)) 702 + break; 695 703 }
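The vfio_iommu_type1 hunk above converts repeated, per-failure-site cleanup calls into single-exit unwinding: every failure jumps to the label that undoes exactly what has been done so far. A minimal userspace sketch of that shape, with all names and counters invented for illustration:

```c
/* Single-exit error unwinding: each goto target undoes one stage. */
static int pin_count;   /* models pages currently pinned */
static int map_count;   /* models ranges currently mapped */

static int sketch_dma_map(int fail_stage)
{
	int ret = 0;

	pin_count++;                    /* stage 1: pin */
	if (fail_stage == 1) {
		ret = -1;
		goto out_unpin;
	}

	map_count++;                    /* stage 2: map */
	if (fail_stage == 2) {
		ret = -2;
		goto out_unmap;
	}

	return 0;                       /* success: resources stay held */

out_unmap:
	map_count--;
out_unpin:
	pin_count--;
	return ret;
}
```

The payoff is the same as in the patch: the unpin call appears once, and late failures cannot forget an earlier cleanup step.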
+6
drivers/w1/w1.c
··· 613 613 sl = dev_to_w1_slave(dev); 614 614 fops = sl->family->fops; 615 615 616 + if (!fops) 617 + return 0; 618 + 616 619 switch (action) { 617 620 case BUS_NOTIFY_ADD_DEVICE: 618 621 /* if the family driver needs to initialize something... */ ··· 716 713 atomic_set(&sl->refcnt, 0); 717 714 init_completion(&sl->released); 718 715 716 + /* slave modules need to be loaded in a context with unlocked mutex */ 717 + mutex_unlock(&dev->mutex); 719 718 request_module("w1-family-0x%0x", rn->family); 719 + mutex_lock(&dev->mutex); 720 720 721 721 spin_lock(&w1_flock); 722 722 f = w1_family_registered(rn->family);
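The w1.c change above drops `dev->mutex` around `request_module()` because module loading can re-enter code that takes the same non-recursive lock. A sketch of the pattern with a fake lock; every name here (`fake_trylock`, `load_family_module`) is an illustrative stand-in, not a kernel interface:

```c
/* Fake non-recursive lock: a second acquire while held would deadlock,
 * modelled as trylock failure. */
static int lock_held;

static int fake_trylock(void)
{
	if (lock_held)
		return -1;      /* re-entry: real code would deadlock */
	lock_held = 1;
	return 0;
}

static void fake_unlock(void)
{
	lock_held = 0;
}

/* stands in for request_module(): internally needs the same lock */
static int load_family_module(void)
{
	if (fake_trylock() != 0)
		return -1;
	fake_unlock();
	return 0;
}
```

Calling `load_family_module()` while holding the lock fails; releasing around the call and reacquiring afterwards, as the patch does, lets it proceed.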
+6
drivers/watchdog/hpwdt.c
··· 802 802 return -ENODEV; 803 803 } 804 804 805 + /* 806 + * Ignore all auxiliary iLO devices with the following PCI ID 807 + */ 808 + if (dev->subsystem_device == 0x1979) 809 + return -ENODEV; 810 + 805 811 if (pci_enable_device(dev)) { 806 812 dev_warn(&dev->dev, 807 813 "Not possible to enable PCI Device: 0x%x:0x%x.\n",
+1 -1
drivers/watchdog/kempld_wdt.c
··· 35 35 #define KEMPLD_WDT_STAGE_TIMEOUT(x) (0x1b + (x) * 4) 36 36 #define KEMPLD_WDT_STAGE_CFG(x) (0x18 + (x)) 37 37 #define STAGE_CFG_GET_PRESCALER(x) (((x) & 0x30) >> 4) 38 - #define STAGE_CFG_SET_PRESCALER(x) (((x) & 0x30) << 4) 38 + #define STAGE_CFG_SET_PRESCALER(x) (((x) & 0x3) << 4) 39 39 #define STAGE_CFG_PRESCALER_MASK 0x30 40 40 #define STAGE_CFG_ACTION_MASK 0x7 41 41 #define STAGE_CFG_ASSERT (1 << 3)
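The kempld_wdt one-liner above matters more than it looks: `SET_PRESCALER` must place a 2-bit prescaler value into bits 4-5 (mask `0x30`), the same field `GET_PRESCALER` reads back. The pre-fix mask `0x30` ANDed away every legal input value (0-3) before shifting, so the field was always written as zero. Both variants, for comparison:

```c
/* Macros from the hunk above, plus the buggy pre-fix variant */
#define STAGE_CFG_GET_PRESCALER(x)     (((x) & 0x30) >> 4)
#define STAGE_CFG_SET_PRESCALER(x)     (((x) & 0x3) << 4)   /* fixed */
#define STAGE_CFG_SET_PRESCALER_BUG(x) (((x) & 0x30) << 4)  /* before */
```

With the fix, set-then-get round-trips every 2-bit prescaler value; with the old mask, `SET(3)` produced 0.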
+2 -2
drivers/watchdog/sunxi_wdt.c
··· 146 146 .set_timeout = sunxi_wdt_set_timeout, 147 147 }; 148 148 149 - static int __init sunxi_wdt_probe(struct platform_device *pdev) 149 + static int sunxi_wdt_probe(struct platform_device *pdev) 150 150 { 151 151 struct sunxi_wdt_dev *sunxi_wdt; 152 152 struct resource *res; ··· 187 187 return 0; 188 188 } 189 189 190 - static int __exit sunxi_wdt_remove(struct platform_device *pdev) 190 + static int sunxi_wdt_remove(struct platform_device *pdev) 191 191 { 192 192 struct sunxi_wdt_dev *sunxi_wdt = platform_get_drvdata(pdev); 193 193
+2 -1
drivers/watchdog/ts72xx_wdt.c
··· 310 310 311 311 case WDIOC_GETSTATUS: 312 312 case WDIOC_GETBOOTSTATUS: 313 - return put_user(0, p); 313 + error = put_user(0, p); 314 + break; 314 315 315 316 case WDIOC_KEEPALIVE: 316 317 ts72xx_wdt_kick(wdt);
+5 -4
fs/btrfs/disk-io.c
··· 1561 1561 return ret; 1562 1562 } 1563 1563 1564 - struct btrfs_root *btrfs_read_fs_root_no_name(struct btrfs_fs_info *fs_info, 1565 - struct btrfs_key *location) 1564 + struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info, 1565 + struct btrfs_key *location, 1566 + bool check_ref) 1566 1567 { 1567 1568 struct btrfs_root *root; 1568 1569 int ret; ··· 1587 1586 again: 1588 1587 root = btrfs_lookup_fs_root(fs_info, location->objectid); 1589 1588 if (root) { 1590 - if (btrfs_root_refs(&root->root_item) == 0) 1589 + if (check_ref && btrfs_root_refs(&root->root_item) == 0) 1591 1590 return ERR_PTR(-ENOENT); 1592 1591 return root; 1593 1592 } ··· 1596 1595 if (IS_ERR(root)) 1597 1596 return root; 1598 1597 1599 - if (btrfs_root_refs(&root->root_item) == 0) { 1598 + if (check_ref && btrfs_root_refs(&root->root_item) == 0) { 1600 1599 ret = -ENOENT; 1601 1600 goto fail; 1602 1601 }
+11 -2
fs/btrfs/disk-io.h
··· 68 68 int btrfs_init_fs_root(struct btrfs_root *root); 69 69 int btrfs_insert_fs_root(struct btrfs_fs_info *fs_info, 70 70 struct btrfs_root *root); 71 - struct btrfs_root *btrfs_read_fs_root_no_name(struct btrfs_fs_info *fs_info, 72 - struct btrfs_key *location); 71 + 72 + struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info, 73 + struct btrfs_key *key, 74 + bool check_ref); 75 + static inline struct btrfs_root * 76 + btrfs_read_fs_root_no_name(struct btrfs_fs_info *fs_info, 77 + struct btrfs_key *location) 78 + { 79 + return btrfs_get_fs_root(fs_info, location, true); 80 + } 81 + 73 82 int btrfs_cleanup_fs_roots(struct btrfs_fs_info *fs_info); 74 83 void btrfs_btree_balance_dirty(struct btrfs_root *root); 75 84 void btrfs_btree_balance_dirty_nodelay(struct btrfs_root *root);
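The disk-io.h hunk above is a common API-widening pattern: the function gains a new `check_ref` parameter, and the old name survives as a `static inline` wrapper that pins the parameter to `true`, so existing callers need no changes. A toy model of the shape, using ints in place of the `btrfs_root` pointers (names and return codes are illustrative):

```c
#include <stdbool.h>

/* new, wider entry point: optionally reject roots with zero refs */
static int get_fs_root(int root_refs, bool check_ref)
{
	if (check_ref && root_refs == 0)
		return -2;              /* models ERR_PTR(-ENOENT) */
	return root_refs;               /* models the found root */
}

/* old API, preserved as an inline wrapper for existing callers */
static inline int read_fs_root_no_name(int root_refs)
{
	return get_fs_root(root_refs, true);
}
```

The relocation.c hunk below is the one caller that wants the new behavior: it passes `check_ref = false` so it can still reach roots whose refcount has already dropped to zero.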
+4 -8
fs/btrfs/extent_io.c
··· 1490 1490 cur_start = state->end + 1; 1491 1491 node = rb_next(node); 1492 1492 total_bytes += state->end - state->start + 1; 1493 - if (total_bytes >= max_bytes) { 1494 - *end = *start + max_bytes - 1; 1493 + if (total_bytes >= max_bytes) 1495 1494 break; 1496 - } 1497 1495 if (!node) 1498 1496 break; 1499 1497 } ··· 1633 1635 1634 1636 /* 1635 1637 * make sure to limit the number of pages we try to lock down 1636 - * if we're looping. 1637 1638 */ 1638 - if (delalloc_end + 1 - delalloc_start > max_bytes && loops) 1639 - delalloc_end = delalloc_start + PAGE_CACHE_SIZE - 1; 1639 + if (delalloc_end + 1 - delalloc_start > max_bytes) 1640 + delalloc_end = delalloc_start + max_bytes - 1; 1640 1641 1641 1642 /* step two, lock all the pages after the page that has start */ 1642 1643 ret = lock_delalloc_pages(inode, locked_page, ··· 1646 1649 */ 1647 1650 free_extent_state(cached_state); 1648 1651 if (!loops) { 1649 - unsigned long offset = (*start) & (PAGE_CACHE_SIZE - 1); 1650 - max_bytes = PAGE_CACHE_SIZE - offset; 1652 + max_bytes = PAGE_CACHE_SIZE; 1651 1653 loops = 1; 1652 1654 goto again; 1653 1655 } else {
+2 -1
fs/btrfs/inode.c
··· 6437 6437 6438 6438 if (btrfs_extent_readonly(root, disk_bytenr)) 6439 6439 goto out; 6440 + btrfs_release_path(path); 6440 6441 6441 6442 /* 6442 6443 * look for other files referencing this extent, if we ··· 7987 7986 7988 7987 7989 7988 /* check for collisions, even if the name isn't there */ 7990 - ret = btrfs_check_dir_item_collision(root, new_dir->i_ino, 7989 + ret = btrfs_check_dir_item_collision(dest, new_dir->i_ino, 7991 7990 new_dentry->d_name.name, 7992 7991 new_dentry->d_name.len); 7993 7992
+1 -1
fs/btrfs/relocation.c
··· 588 588 else 589 589 key.offset = (u64)-1; 590 590 591 - return btrfs_read_fs_root_no_name(fs_info, &key); 591 + return btrfs_get_fs_root(fs_info, &key, false); 592 592 } 593 593 594 594 #ifdef BTRFS_COMPAT_EXTENT_TREE_V0
+3 -5
fs/btrfs/root-tree.c
··· 299 299 continue; 300 300 } 301 301 302 - if (btrfs_root_refs(&root->root_item) == 0) { 303 - btrfs_add_dead_root(root); 304 - continue; 305 - } 306 - 307 302 err = btrfs_init_fs_root(root); 308 303 if (err) { 309 304 btrfs_free_fs_root(root); ··· 313 318 btrfs_free_fs_root(root); 314 319 break; 315 320 } 321 + 322 + if (btrfs_root_refs(&root->root_item) == 0) 323 + btrfs_add_dead_root(root); 316 324 } 317 325 318 326 btrfs_free_path(path);
+12 -2
fs/buffer.c
··· 1005 1005 struct buffer_head *bh; 1006 1006 sector_t end_block; 1007 1007 int ret = 0; /* Will call free_more_memory() */ 1008 + gfp_t gfp_mask; 1008 1009 1009 - page = find_or_create_page(inode->i_mapping, index, 1010 - (mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE); 1010 + gfp_mask = mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS; 1011 + gfp_mask |= __GFP_MOVABLE; 1012 + /* 1013 + * XXX: __getblk_slow() can not really deal with failure and 1014 + * will endlessly loop on improvised global reclaim. Prefer 1015 + * looping in the allocator rather than here, at least that 1016 + * code knows what it's doing. 1017 + */ 1018 + gfp_mask |= __GFP_NOFAIL; 1019 + 1020 + page = find_or_create_page(inode->i_mapping, index, gfp_mask); 1011 1021 if (!page) 1012 1022 return ret; 1013 1023
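The fs/buffer.c hunk above replaces an inline expression with explicit gfp-mask surgery: start from the mapping's mask, clear `__GFP_FS` so the allocation cannot recurse back into the filesystem, then add `__GFP_MOVABLE` and `__GFP_NOFAIL`. A sketch of the flag arithmetic; the bit values below are stand-ins, not the real kernel constants:

```c
/* stand-in gfp bits (real values live in the kernel's gfp.h) */
#define FAKE_GFP_FS      (1u << 0)
#define FAKE_GFP_MOVABLE (1u << 1)
#define FAKE_GFP_NOFAIL  (1u << 2)

/* compose the mask the way the hunk does */
static unsigned int grow_dev_page_gfp(unsigned int mapping_mask)
{
	unsigned int gfp = mapping_mask & ~FAKE_GFP_FS;  /* no FS recursion */

	gfp |= FAKE_GFP_MOVABLE;
	gfp |= FAKE_GFP_NOFAIL;  /* see the XXX comment: caller can't fail */
	return gfp;
}
```

Whatever the mapping's default mask contained, the result never has the FS bit and always has the movable and no-fail bits.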
+4 -2
fs/cifs/cifsfs.c
··· 120 120 { 121 121 struct inode *inode; 122 122 struct cifs_sb_info *cifs_sb; 123 + struct cifs_tcon *tcon; 123 124 int rc = 0; 124 125 125 126 cifs_sb = CIFS_SB(sb); 127 + tcon = cifs_sb_master_tcon(cifs_sb); 126 128 127 129 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIXACL) 128 130 sb->s_flags |= MS_POSIXACL; 129 131 130 - if (cifs_sb_master_tcon(cifs_sb)->ses->capabilities & CAP_LARGE_FILES) 132 + if (tcon->ses->capabilities & tcon->ses->server->vals->cap_large_files) 131 133 sb->s_maxbytes = MAX_LFS_FILESIZE; 132 134 else 133 135 sb->s_maxbytes = MAX_NON_LFS; ··· 149 147 goto out_no_root; 150 148 } 151 149 152 - if (cifs_sb_master_tcon(cifs_sb)->nocase) 150 + if (tcon->nocase) 153 151 sb->s_d_op = &cifs_ci_dentry_ops; 154 152 else 155 153 sb->s_d_op = &cifs_dentry_ops;
+23 -8
fs/cifs/cifspdu.h
··· 1491 1491 __u8 FileName[0]; 1492 1492 } __attribute__((packed)); 1493 1493 1494 - struct reparse_data { 1495 - __u32 ReparseTag; 1496 - __u16 ReparseDataLength; 1494 + /* For IO_REPARSE_TAG_SYMLINK */ 1495 + struct reparse_symlink_data { 1496 + __le32 ReparseTag; 1497 + __le16 ReparseDataLength; 1497 1498 __u16 Reserved; 1498 - __u16 SubstituteNameOffset; 1499 - __u16 SubstituteNameLength; 1500 - __u16 PrintNameOffset; 1501 - __u16 PrintNameLength; 1502 - __u32 Flags; 1499 + __le16 SubstituteNameOffset; 1500 + __le16 SubstituteNameLength; 1501 + __le16 PrintNameOffset; 1502 + __le16 PrintNameLength; 1503 + __le32 Flags; 1504 + char PathBuffer[0]; 1505 + } __attribute__((packed)); 1506 + 1507 + /* For IO_REPARSE_TAG_NFS */ 1508 + #define NFS_SPECFILE_LNK 0x00000000014B4E4C 1509 + #define NFS_SPECFILE_CHR 0x0000000000524843 1510 + #define NFS_SPECFILE_BLK 0x00000000004B4C42 1511 + #define NFS_SPECFILE_FIFO 0x000000004F464946 1512 + #define NFS_SPECFILE_SOCK 0x000000004B434F53 1513 + struct reparse_posix_data { 1514 + __le32 ReparseTag; 1515 + __le16 ReparseDataLength; 1516 + __u16 Reserved; 1517 + __le64 InodeType; /* LNK, FIFO, CHR etc. */ 1503 1518 char PathBuffer[0]; 1504 1519 } __attribute__((packed)); 1505 1520
+34 -6
fs/cifs/cifssmb.c
··· 3088 3088 bool is_unicode; 3089 3089 unsigned int sub_len; 3090 3090 char *sub_start; 3091 - struct reparse_data *reparse_buf; 3091 + struct reparse_symlink_data *reparse_buf; 3092 + struct reparse_posix_data *posix_buf; 3092 3093 __u32 data_offset, data_count; 3093 3094 char *end_of_smb; 3094 3095 ··· 3138 3137 goto qreparse_out; 3139 3138 } 3140 3139 end_of_smb = 2 + get_bcc(&pSMBr->hdr) + (char *)&pSMBr->ByteCount; 3141 - reparse_buf = (struct reparse_data *) 3140 + reparse_buf = (struct reparse_symlink_data *) 3142 3141 ((char *)&pSMBr->hdr.Protocol + data_offset); 3143 3142 if ((char *)reparse_buf >= end_of_smb) { 3144 3143 rc = -EIO; 3145 3144 goto qreparse_out; 3146 3145 } 3147 - if ((reparse_buf->PathBuffer + reparse_buf->PrintNameOffset + 3148 - reparse_buf->PrintNameLength) > end_of_smb) { 3146 + if (reparse_buf->ReparseTag == cpu_to_le32(IO_REPARSE_TAG_NFS)) { 3147 + cifs_dbg(FYI, "NFS style reparse tag\n"); 3148 + posix_buf = (struct reparse_posix_data *)reparse_buf; 3149 + 3150 + if (posix_buf->InodeType != cpu_to_le64(NFS_SPECFILE_LNK)) { 3151 + cifs_dbg(FYI, "unsupported file type 0x%llx\n", 3152 + le64_to_cpu(posix_buf->InodeType)); 3153 + rc = -EOPNOTSUPP; 3154 + goto qreparse_out; 3155 + } 3156 + is_unicode = true; 3157 + sub_len = le16_to_cpu(reparse_buf->ReparseDataLength); 3158 + if (posix_buf->PathBuffer + sub_len > end_of_smb) { 3159 + cifs_dbg(FYI, "reparse buf beyond SMB\n"); 3160 + rc = -EIO; 3161 + goto qreparse_out; 3162 + } 3163 + *symlinkinfo = cifs_strndup_from_utf16(posix_buf->PathBuffer, 3164 + sub_len, is_unicode, nls_codepage); 3165 + goto qreparse_out; 3166 + } else if (reparse_buf->ReparseTag != 3167 + cpu_to_le32(IO_REPARSE_TAG_SYMLINK)) { 3168 + rc = -EOPNOTSUPP; 3169 + goto qreparse_out; 3170 + } 3171 + 3172 + /* Reparse tag is NTFS symlink */ 3173 + sub_start = le16_to_cpu(reparse_buf->SubstituteNameOffset) + 3174 + reparse_buf->PathBuffer; 3175 + sub_len = le16_to_cpu(reparse_buf->SubstituteNameLength); 3176 + if 
(sub_start + sub_len > end_of_smb) { 3149 3177 cifs_dbg(FYI, "reparse buf beyond SMB\n"); 3150 3178 rc = -EIO; 3151 3179 goto qreparse_out; 3152 3180 } 3153 - sub_start = reparse_buf->SubstituteNameOffset + reparse_buf->PathBuffer; 3154 - sub_len = reparse_buf->SubstituteNameLength; 3155 3181 if (pSMBr->hdr.Flags2 & SMBFLG2_UNICODE) 3156 3182 is_unicode = true; 3157 3183 else
+3 -1
fs/cifs/netmisc.c
··· 780 780 ERRDOS, ERRnoaccess, 0xc0000290}, { 781 781 ERRDOS, ERRbadfunc, 0xc000029c}, { 782 782 ERRDOS, ERRsymlink, NT_STATUS_STOPPED_ON_SYMLINK}, { 783 - ERRDOS, ERRinvlevel, 0x007c0001}, }; 783 + ERRDOS, ERRinvlevel, 0x007c0001}, { 784 + 0, 0, 0 } 785 + }; 784 786 785 787 /***************************************************************************** 786 788 Print an error message from the status code
+2 -2
fs/cifs/sess.c
··· 500 500 return NTLMv2; 501 501 if (global_secflags & CIFSSEC_MAY_NTLM) 502 502 return NTLM; 503 - /* Fallthrough */ 504 503 default: 505 - return Unspecified; 504 + /* Fallthrough to attempt LANMAN authentication next */ 505 + break; 506 506 } 507 507 case CIFS_NEGFLAVOR_LANMAN: 508 508 switch (requested) {
+6
fs/cifs/smb2pdu.c
··· 687 687 else 688 688 return -EIO; 689 689 690 + /* no need to send SMB logoff if uid already closed due to reconnect */ 691 + if (ses->need_reconnect) 692 + goto smb2_session_already_dead; 693 + 690 694 rc = small_smb2_init(SMB2_LOGOFF, NULL, (void **) &req); 691 695 if (rc) 692 696 return rc; ··· 705 701 * No tcon so can't do 706 702 * cifs_stats_inc(&tcon->stats.smb2_stats.smb2_com_fail[SMB2...]); 707 703 */ 704 + 705 + smb2_session_already_dead: 708 706 return rc; 709 707 } 710 708
+14
fs/cifs/smbfsctl.h
··· 97 97 #define FSCTL_QUERY_NETWORK_INTERFACE_INFO 0x001401FC /* BB add struct */ 98 98 #define FSCTL_SRV_READ_HASH 0x001441BB /* BB add struct */ 99 99 100 + /* See FSCC 2.1.2.5 */ 100 101 #define IO_REPARSE_TAG_MOUNT_POINT 0xA0000003 101 102 #define IO_REPARSE_TAG_HSM 0xC0000004 102 103 #define IO_REPARSE_TAG_SIS 0x80000007 104 + #define IO_REPARSE_TAG_HSM2 0x80000006 105 + #define IO_REPARSE_TAG_DRIVER_EXTENDER 0x80000005 106 + /* Used by the DFS filter. See MS-DFSC */ 107 + #define IO_REPARSE_TAG_DFS 0x8000000A 108 + /* Used by the DFS filter See MS-DFSC */ 109 + #define IO_REPARSE_TAG_DFSR 0x80000012 110 + #define IO_REPARSE_TAG_FILTER_MANAGER 0x8000000B 111 + /* See section MS-FSCC 2.1.2.4 */ 112 + #define IO_REPARSE_TAG_SYMLINK 0xA000000C 113 + #define IO_REPARSE_TAG_DEDUP 0x80000013 114 + #define IO_REPARSE_APPXSTREAM 0xC0000014 115 + /* NFS symlinks, Win 8/SMB3 and later */ 116 + #define IO_REPARSE_TAG_NFS 0x80000014 103 117 104 118 /* fsctl flags */ 105 119 /* If Flags is set to this value, the request is an FSCTL not ioctl request */
+7 -2
fs/cifs/transport.c
··· 410 410 wait_for_free_request(struct TCP_Server_Info *server, const int timeout, 411 411 const int optype) 412 412 { 413 - return wait_for_free_credits(server, timeout, 414 - server->ops->get_credits_field(server, optype)); 413 + int *val; 414 + 415 + val = server->ops->get_credits_field(server, optype); 416 + /* Since an echo is already inflight, no need to wait to send another */ 417 + if (*val <= 0 && optype == CIFS_ECHO_OP) 418 + return -EAGAIN; 419 + return wait_for_free_credits(server, timeout, val); 415 420 } 416 421 417 422 static int allocate_mid(struct cifs_ses *ses, struct smb_hdr *in_buf,
+2 -3
fs/ext3/namei.c
··· 1783 1783 d_tmpfile(dentry, inode); 1784 1784 err = ext3_orphan_add(handle, inode); 1785 1785 if (err) 1786 - goto err_drop_inode; 1786 + goto err_unlock_inode; 1787 1787 mark_inode_dirty(inode); 1788 1788 unlock_new_inode(inode); 1789 1789 } ··· 1791 1791 if (err == -ENOSPC && ext3_should_retry_alloc(dir->i_sb, &retries)) 1792 1792 goto retry; 1793 1793 return err; 1794 - err_drop_inode: 1794 + err_unlock_inode: 1795 1795 ext3_journal_stop(handle); 1796 1796 unlock_new_inode(inode); 1797 - iput(inode); 1798 1797 return err; 1799 1798 } 1800 1799
+1 -1
fs/ext4/inode.c
··· 2563 2563 break; 2564 2564 } 2565 2565 blk_finish_plug(&plug); 2566 - if (!ret && !cycled) { 2566 + if (!ret && !cycled && wbc->nr_to_write > 0) { 2567 2567 cycled = 1; 2568 2568 mpd.last_page = writeback_index - 1; 2569 2569 mpd.first_page = 0;
+2 -3
fs/ext4/namei.c
··· 2319 2319 d_tmpfile(dentry, inode); 2320 2320 err = ext4_orphan_add(handle, inode); 2321 2321 if (err) 2322 - goto err_drop_inode; 2322 + goto err_unlock_inode; 2323 2323 mark_inode_dirty(inode); 2324 2324 unlock_new_inode(inode); 2325 2325 } ··· 2328 2328 if (err == -ENOSPC && ext4_should_retry_alloc(dir->i_sb, &retries)) 2329 2329 goto retry; 2330 2330 return err; 2331 - err_drop_inode: 2331 + err_unlock_inode: 2332 2332 ext4_journal_stop(handle); 2333 2333 unlock_new_inode(inode); 2334 - iput(inode); 2335 2334 return err; 2336 2335 } 2337 2336
+2
fs/ext4/xattr.c
··· 1350 1350 s_min_extra_isize) { 1351 1351 tried_min_extra_isize++; 1352 1352 new_extra_isize = s_min_extra_isize; 1353 + kfree(is); is = NULL; 1354 + kfree(bs); bs = NULL; 1353 1355 goto retry; 1354 1356 } 1355 1357 error = -1;
+7 -3
fs/proc/inode.c
··· 288 288 static unsigned long proc_reg_get_unmapped_area(struct file *file, unsigned long orig_addr, unsigned long len, unsigned long pgoff, unsigned long flags) 289 289 { 290 290 struct proc_dir_entry *pde = PDE(file_inode(file)); 291 - int rv = -EIO; 292 - unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long); 291 + unsigned long rv = -EIO; 292 + unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long) = NULL; 293 293 if (use_pde(pde)) { 294 - get_unmapped_area = pde->proc_fops->get_unmapped_area; 294 + #ifdef CONFIG_MMU 295 + get_unmapped_area = current->mm->get_unmapped_area; 296 + #endif 297 + if (pde->proc_fops->get_unmapped_area) 298 + get_unmapped_area = pde->proc_fops->get_unmapped_area; 295 299 if (get_unmapped_area) 296 300 rv = get_unmapped_area(file, orig_addr, len, pgoff, flags); 297 301 unuse_pde(pde);
+3 -1
fs/proc/task_mmu.c
··· 941 941 frame = pte_pfn(pte); 942 942 flags = PM_PRESENT; 943 943 page = vm_normal_page(vma, addr, pte); 944 + if (pte_soft_dirty(pte)) 945 + flags2 |= __PM_SOFT_DIRTY; 944 946 } else if (is_swap_pte(pte)) { 945 947 swp_entry_t entry; 946 948 if (pte_swp_soft_dirty(pte)) ··· 962 960 963 961 if (page && !PageAnon(page)) 964 962 flags |= PM_FILE; 965 - if ((vma->vm_flags & VM_SOFTDIRTY) || pte_soft_dirty(pte)) 963 + if ((vma->vm_flags & VM_SOFTDIRTY)) 966 964 flags2 |= __PM_SOFT_DIRTY; 967 965 968 966 *pme = make_pme(PM_PFRAME(frame) | PM_STATUS2(pm->v2, flags2) | flags);
+1 -1
fs/statfs.c
··· 94 94 95 95 int fd_statfs(int fd, struct kstatfs *st) 96 96 { 97 - struct fd f = fdget(fd); 97 + struct fd f = fdget_raw(fd); 98 98 int error = -EBADF; 99 99 if (f.file) { 100 100 error = vfs_statfs(&f.file->f_path, st);
-7
include/acpi/acpi_bus.h
··· 311 311 unsigned int physical_node_count; 312 312 struct list_head physical_node_list; 313 313 struct mutex physical_node_lock; 314 - struct list_head power_dependent; 315 314 void (*remove)(struct acpi_device *); 316 315 }; 317 316 ··· 455 456 acpi_status acpi_remove_pm_notifier(struct acpi_device *adev, 456 457 acpi_notify_handler handler); 457 458 int acpi_pm_device_sleep_state(struct device *, int *, int); 458 - void acpi_dev_pm_add_dependent(acpi_handle handle, struct device *depdev); 459 - void acpi_dev_pm_remove_dependent(acpi_handle handle, struct device *depdev); 460 459 #else 461 460 static inline acpi_status acpi_add_pm_notifier(struct acpi_device *adev, 462 461 acpi_notify_handler handler, ··· 475 478 return (m >= ACPI_STATE_D0 && m <= ACPI_STATE_D3_COLD) ? 476 479 m : ACPI_STATE_D0; 477 480 } 478 - static inline void acpi_dev_pm_add_dependent(acpi_handle handle, 479 - struct device *depdev) {} 480 - static inline void acpi_dev_pm_remove_dependent(acpi_handle handle, 481 - struct device *depdev) {} 482 481 #endif 483 482 484 483 #ifdef CONFIG_PM_RUNTIME
+1 -3
include/dt-bindings/pinctrl/omap.h
··· 23 23 #define PULL_UP (1 << 4) 24 24 #define ALTELECTRICALSEL (1 << 5) 25 25 26 - /* 34xx specific mux bit defines */ 26 + /* omap3/4/5 specific mux bit defines */ 27 27 #define INPUT_EN (1 << 8) 28 28 #define OFF_EN (1 << 9) 29 29 #define OFFOUT_EN (1 << 10) ··· 31 31 #define OFF_PULL_EN (1 << 12) 32 32 #define OFF_PULL_UP (1 << 13) 33 33 #define WAKEUP_EN (1 << 14) 34 - 35 - /* 44xx specific mux bit defines */ 36 34 #define WAKEUP_EVENT (1 << 15) 37 35 38 36 /* Active pin states */
+15
include/linux/compiler-gcc4.h
··· 65 65 #define __visible __attribute__((externally_visible)) 66 66 #endif 67 67 68 + /* 69 + * GCC 'asm goto' miscompiles certain code sequences: 70 + * 71 + * http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670 72 + * 73 + * Work it around via a compiler barrier quirk suggested by Jakub Jelinek. 74 + * Fixed in GCC 4.8.2 and later versions. 75 + * 76 + * (asm goto is automatically volatile - the naming reflects this.) 77 + */ 78 + #if GCC_VERSION <= 40801 79 + # define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0) 80 + #else 81 + # define asm_volatile_goto(x...) do { asm goto(x); } while (0) 82 + #endif 68 83 69 84 #ifdef CONFIG_ARCH_USE_BUILTIN_BSWAP 70 85 #if GCC_VERSION >= 40400
+11 -39
include/linux/memcontrol.h
··· 137 137 extern void mem_cgroup_replace_page_cache(struct page *oldpage, 138 138 struct page *newpage); 139 139 140 - /** 141 - * mem_cgroup_toggle_oom - toggle the memcg OOM killer for the current task 142 - * @new: true to enable, false to disable 143 - * 144 - * Toggle whether a failed memcg charge should invoke the OOM killer 145 - * or just return -ENOMEM. Returns the previous toggle state. 146 - * 147 - * NOTE: Any path that enables the OOM killer before charging must 148 - * call mem_cgroup_oom_synchronize() afterward to finalize the 149 - * OOM handling and clean up. 150 - */ 151 - static inline bool mem_cgroup_toggle_oom(bool new) 140 + static inline void mem_cgroup_oom_enable(void) 152 141 { 153 - bool old; 154 - 155 - old = current->memcg_oom.may_oom; 156 - current->memcg_oom.may_oom = new; 157 - 158 - return old; 142 + WARN_ON(current->memcg_oom.may_oom); 143 + current->memcg_oom.may_oom = 1; 159 144 } 160 145 161 - static inline void mem_cgroup_enable_oom(void) 146 + static inline void mem_cgroup_oom_disable(void) 162 147 { 163 - bool old = mem_cgroup_toggle_oom(true); 164 - 165 - WARN_ON(old == true); 166 - } 167 - 168 - static inline void mem_cgroup_disable_oom(void) 169 - { 170 - bool old = mem_cgroup_toggle_oom(false); 171 - 172 - WARN_ON(old == false); 148 + WARN_ON(!current->memcg_oom.may_oom); 149 + current->memcg_oom.may_oom = 0; 173 150 } 174 151 175 152 static inline bool task_in_memcg_oom(struct task_struct *p) 176 153 { 177 - return p->memcg_oom.in_memcg_oom; 154 + return p->memcg_oom.memcg; 178 155 } 179 156 180 - bool mem_cgroup_oom_synchronize(void); 157 + bool mem_cgroup_oom_synchronize(bool wait); 181 158 182 159 #ifdef CONFIG_MEMCG_SWAP 183 160 extern int do_swap_account; ··· 379 402 { 380 403 } 381 404 382 - static inline bool mem_cgroup_toggle_oom(bool new) 383 - { 384 - return false; 385 - } 386 - 387 - static inline void mem_cgroup_enable_oom(void) 405 + static inline void mem_cgroup_oom_enable(void) 388 406 { 389 407 } 390 408 
391 - static inline void mem_cgroup_disable_oom(void) 409 + static inline void mem_cgroup_oom_disable(void) 392 410 { 393 411 } 394 412 ··· 392 420 return false; 393 421 } 394 422 395 - static inline bool mem_cgroup_oom_synchronize(void) 423 + static inline bool mem_cgroup_oom_synchronize(bool wait) 396 424 { 397 425 return false; 398 426 }
+1
include/linux/miscdevice.h
··· 45 45 #define MAPPER_CTRL_MINOR 236 46 46 #define LOOP_CTRL_MINOR 237 47 47 #define VHOST_NET_MINOR 238 48 + #define UHID_MINOR 239 48 49 #define MISC_DYNAMIC_MINOR 255 49 50 50 51 struct device;
+2 -2
include/linux/mlx5/device.h
··· 181 181 MLX5_DEV_CAP_FLAG_TLP_HINTS = 1LL << 39, 182 182 MLX5_DEV_CAP_FLAG_SIG_HAND_OVER = 1LL << 40, 183 183 MLX5_DEV_CAP_FLAG_DCT = 1LL << 41, 184 - MLX5_DEV_CAP_FLAG_CMDIF_CSUM = 1LL << 46, 184 + MLX5_DEV_CAP_FLAG_CMDIF_CSUM = 3LL << 46, 185 185 }; 186 186 187 187 enum { ··· 417 417 struct health_buffer health; 418 418 __be32 rsvd2[884]; 419 419 __be32 health_counter; 420 - __be32 rsvd3[1023]; 420 + __be32 rsvd3[1019]; 421 421 __be64 ieee1588_clk; 422 422 __be32 ieee1588_clk_type; 423 423 __be32 clr_intx;
+2 -4
include/linux/mlx5/driver.h
··· 82 82 }; 83 83 84 84 enum { 85 - MLX5_MAX_EQ_NAME = 20 85 + MLX5_MAX_EQ_NAME = 32 86 86 }; 87 87 88 88 enum { ··· 747 747 748 748 enum { 749 749 MLX5_PROF_MASK_QP_SIZE = (u64)1 << 0, 750 - MLX5_PROF_MASK_CMDIF_CSUM = (u64)1 << 1, 751 - MLX5_PROF_MASK_MR_CACHE = (u64)1 << 2, 750 + MLX5_PROF_MASK_MR_CACHE = (u64)1 << 1, 752 751 }; 753 752 754 753 enum { ··· 757 758 struct mlx5_profile { 758 759 u64 mask; 759 760 u32 log_max_qp; 760 - int cmdif_csum; 761 761 struct { 762 762 int size; 763 763 int limit;
-14
include/linux/of_reserved_mem.h
··· 1 - #ifndef __OF_RESERVED_MEM_H 2 - #define __OF_RESERVED_MEM_H 3 - 4 - #ifdef CONFIG_OF_RESERVED_MEM 5 - void of_reserved_mem_device_init(struct device *dev); 6 - void of_reserved_mem_device_release(struct device *dev); 7 - void early_init_dt_scan_reserved_mem(void); 8 - #else 9 - static inline void of_reserved_mem_device_init(struct device *dev) { } 10 - static inline void of_reserved_mem_device_release(struct device *dev) { } 11 - static inline void early_init_dt_scan_reserved_mem(void) { } 12 - #endif 13 - 14 - #endif /* __OF_RESERVED_MEM_H */
+23 -1
include/linux/perf_event.h
··· 294 294 */ 295 295 struct perf_event { 296 296 #ifdef CONFIG_PERF_EVENTS 297 - struct list_head group_entry; 297 + /* 298 + * entry onto perf_event_context::event_list; 299 + * modifications require ctx->lock 300 + * RCU safe iterations. 301 + */ 298 302 struct list_head event_entry; 303 + 304 + /* 305 + * XXX: group_entry and sibling_list should be mutually exclusive; 306 + * either you're a sibling on a group, or you're the group leader. 307 + * Rework the code to always use the same list element. 308 + * 309 + * Locked for modification by both ctx->mutex and ctx->lock; holding 310 + * either suffices for read. 311 + */ 312 + struct list_head group_entry; 299 313 struct list_head sibling_list; 314 + 315 + /* 316 + * We need storage to track the entries in perf_pmu_migrate_context; we 317 + * cannot use the event_entry because of RCU and we want to keep the 318 + * group intact which avoids us using the other two entries. 319 + */ 320 + struct list_head migrate_entry; 321 + 300 322 struct hlist_node hlist_entry; 301 323 int nr_siblings; 302 324 int group_flags;
+1
include/linux/random.h
··· 17 17 extern void get_random_bytes(void *buf, int nbytes); 18 18 extern void get_random_bytes_arch(void *buf, int nbytes); 19 19 void generate_random_uuid(unsigned char uuid_out[16]); 20 + extern int random_int_secret_init(void); 20 21 21 22 #ifndef MODULE 22 23 extern const struct file_operations random_fops, urandom_fops;
+3 -4
include/linux/sched.h
··· 1394 1394 } memcg_batch; 1395 1395 unsigned int memcg_kmem_skip_account; 1396 1396 struct memcg_oom_info { 1397 + struct mem_cgroup *memcg; 1398 + gfp_t gfp_mask; 1399 + int order; 1397 1400 unsigned int may_oom:1; 1398 - unsigned int in_memcg_oom:1; 1399 - unsigned int oom_locked:1; 1400 - int wakeups; 1401 - struct mem_cgroup *wait_on_memcg; 1402 1401 } memcg_oom; 1403 1402 #endif 1404 1403 #ifdef CONFIG_UPROBES
+14
include/linux/timex.h
··· 64 64 65 65 #include <asm/timex.h> 66 66 67 + #ifndef random_get_entropy 68 + /* 69 + * The random_get_entropy() function is used by the /dev/random driver 70 + * in order to extract entropy via the relative unpredictability of 71 + * when an interrupt takes place versus a high speed, fine-grained 72 + * timing source or cycle counter. Since it is called on every 73 + * single interrupt, it must have a very low cost/overhead. 74 + * 75 + * By default we use get_cycles() for this purpose, but individual 76 + * architectures may override this in their asm/timex.h header file. 77 + */ 78 + #define random_get_entropy() get_cycles() 79 + #endif 80 + 67 81 /* 68 82 * SHIFT_PLL is used as a dampening factor to define how much we 69 83 * adjust the frequency correction for a given offset in PLL mode.
+1 -1
include/linux/usb/usb_phy_gen_xceiv.h
··· 12 12 unsigned int needs_reset:1; 13 13 }; 14 14 15 - #if IS_ENABLED(CONFIG_NOP_USB_XCEIV) 15 + #if defined(CONFIG_NOP_USB_XCEIV) || (defined(CONFIG_NOP_USB_XCEIV_MODULE) && defined(MODULE)) 16 16 /* sometimes transceivers are accessed only through e.g. ULPI */ 17 17 extern void usb_nop_xceiv_register(void); 18 18 extern void usb_nop_xceiv_unregister(void);
+3 -1
include/linux/usb_usual.h
··· 66 66 US_FLAG(INITIAL_READ10, 0x00100000) \ 67 67 /* Initial READ(10) (and others) must be retried */ \ 68 68 US_FLAG(WRITE_CACHE, 0x00200000) \ 69 - /* Write Cache status is not available */ 69 + /* Write Cache status is not available */ \ 70 + US_FLAG(NEEDS_CAP16, 0x00400000) 71 + /* cannot handle READ_CAPACITY_10 */ 70 72 71 73 #define US_FLAG(name, value) US_FL_##name = value , 72 74 enum { US_DO_ALL_FLAGS };
-7
include/linux/vgaarb.h
··· 65 65 * out of the arbitration process (and can be safe to take 66 66 * interrupts at any time. 67 67 */ 68 - #if defined(CONFIG_VGA_ARB) 69 68 extern void vga_set_legacy_decoding(struct pci_dev *pdev, 70 69 unsigned int decodes); 71 - #else 72 - static inline void vga_set_legacy_decoding(struct pci_dev *pdev, 73 - unsigned int decodes) 74 - { 75 - } 76 - #endif 77 70 78 71 /** 79 72 * vga_get - acquire & locks VGA resources
+2
init/main.c
··· 76 76 #include <linux/elevator.h> 77 77 #include <linux/sched_clock.h> 78 78 #include <linux/context_tracking.h> 79 + #include <linux/random.h> 79 80 80 81 #include <asm/io.h> 81 82 #include <asm/bugs.h> ··· 781 780 do_ctors(); 782 781 usermodehelper_enable(); 783 782 do_initcalls(); 783 + random_int_secret_init(); 784 784 } 785 785 786 786 static void __init do_pre_smp_initcalls(void)
+29 -13
ipc/sem.c
··· 1282 1282 1283 1283 sem_lock(sma, NULL, -1); 1284 1284 1285 + if (sma->sem_perm.deleted) { 1286 + sem_unlock(sma, -1); 1287 + rcu_read_unlock(); 1288 + return -EIDRM; 1289 + } 1290 + 1285 1291 curr = &sma->sem_base[semnum]; 1286 1292 1287 1293 ipc_assert_locked_object(&sma->sem_perm); ··· 1342 1336 int i; 1343 1337 1344 1338 sem_lock(sma, NULL, -1); 1339 + if (sma->sem_perm.deleted) { 1340 + err = -EIDRM; 1341 + goto out_unlock; 1342 + } 1345 1343 if(nsems > SEMMSL_FAST) { 1346 1344 if (!ipc_rcu_getref(sma)) { 1347 - sem_unlock(sma, -1); 1348 - rcu_read_unlock(); 1349 1345 err = -EIDRM; 1350 - goto out_free; 1346 + goto out_unlock; 1351 1347 } 1352 1348 sem_unlock(sma, -1); 1353 1349 rcu_read_unlock(); ··· 1362 1354 rcu_read_lock(); 1363 1355 sem_lock_and_putref(sma); 1364 1356 if (sma->sem_perm.deleted) { 1365 - sem_unlock(sma, -1); 1366 - rcu_read_unlock(); 1367 1357 err = -EIDRM; 1368 - goto out_free; 1358 + goto out_unlock; 1369 1359 } 1370 1360 } 1371 1361 for (i = 0; i < sma->sem_nsems; i++) ··· 1381 1375 struct sem_undo *un; 1382 1376 1383 1377 if (!ipc_rcu_getref(sma)) { 1384 - rcu_read_unlock(); 1385 - return -EIDRM; 1378 + err = -EIDRM; 1379 + goto out_rcu_wakeup; 1386 1380 } 1387 1381 rcu_read_unlock(); 1388 1382 ··· 1410 1404 rcu_read_lock(); 1411 1405 sem_lock_and_putref(sma); 1412 1406 if (sma->sem_perm.deleted) { 1413 - sem_unlock(sma, -1); 1414 - rcu_read_unlock(); 1415 1407 err = -EIDRM; 1416 - goto out_free; 1408 + goto out_unlock; 1417 1409 } 1418 1410 1419 1411 for (i = 0; i < nsems; i++) ··· 1435 1431 goto out_rcu_wakeup; 1436 1432 1437 1433 sem_lock(sma, NULL, -1); 1434 + if (sma->sem_perm.deleted) { 1435 + err = -EIDRM; 1436 + goto out_unlock; 1437 + } 1438 1438 curr = &sma->sem_base[semnum]; 1439 1439 1440 1440 switch (cmd) { ··· 1844 1836 if (error) 1845 1837 goto out_rcu_wakeup; 1846 1838 1839 + error = -EIDRM; 1840 + locknum = sem_lock(sma, sops, nsops); 1841 + if (sma->sem_perm.deleted) 1842 + goto out_unlock_free; 1847 1843 /* 1848 
1844 * semid identifiers are not unique - find_alloc_undo may have 1849 1845 * allocated an undo structure, it was invalidated by an RMID ··· 1855 1843 * This case can be detected checking un->semid. The existence of 1856 1844 * "un" itself is guaranteed by rcu. 1857 1845 */ 1858 - error = -EIDRM; 1859 - locknum = sem_lock(sma, sops, nsops); 1860 1846 if (un && un->semid == -1) 1861 1847 goto out_unlock_free; 1862 1848 ··· 2067 2057 } 2068 2058 2069 2059 sem_lock(sma, NULL, -1); 2060 + /* exit_sem raced with IPC_RMID, nothing to do */ 2061 + if (sma->sem_perm.deleted) { 2062 + sem_unlock(sma, -1); 2063 + rcu_read_unlock(); 2064 + continue; 2065 + } 2070 2066 un = __lookup_undo(ulp, semid); 2071 2067 if (un == NULL) { 2072 2068 /* exit_sem raced with IPC_RMID+semget() that created
+21 -6
ipc/util.c
··· 17 17 * Pavel Emelianov <xemul@openvz.org> 18 18 * 19 19 * General sysv ipc locking scheme: 20 - * when doing ipc id lookups, take the ids->rwsem 21 - * rcu_read_lock() 22 - * obtain the ipc object (kern_ipc_perm) 23 - * perform security, capabilities, auditing and permission checks, etc. 24 - * acquire the ipc lock (kern_ipc_perm.lock) throught ipc_lock_object() 25 - * perform data updates (ie: SET, RMID, LOCK/UNLOCK commands) 20 + * rcu_read_lock() 21 + * obtain the ipc object (kern_ipc_perm) by looking up the id in an idr 22 + * tree. 23 + * - perform initial checks (capabilities, auditing and permission, 24 + * etc). 25 + * - perform read-only operations, such as STAT, INFO commands. 26 + * acquire the ipc lock (kern_ipc_perm.lock) through 27 + * ipc_lock_object() 28 + * - perform data updates, such as SET, RMID commands and 29 + * mechanism-specific operations (semop/semtimedop, 30 + * msgsnd/msgrcv, shmat/shmdt). 31 + * drop the ipc lock, through ipc_unlock_object(). 32 + * rcu_read_unlock() 33 + * 34 + * The ids->rwsem must be taken when: 35 + * - creating, removing and iterating the existing entries in ipc 36 + * identifier sets. 37 + * - iterating through files under /proc/sysvipc/ 38 + * 39 + * Note that sems have a special fast path that avoids kern_ipc_perm.lock - 40 + * see sem_lock(). 26 41 */ 27 42 28 43 #include <linux/mm.h>
+3 -3
kernel/events/core.c
··· 7234 7234 perf_remove_from_context(event); 7235 7235 unaccount_event_cpu(event, src_cpu); 7236 7236 put_ctx(src_ctx); 7237 - list_add(&event->event_entry, &events); 7237 + list_add(&event->migrate_entry, &events); 7238 7238 } 7239 7239 mutex_unlock(&src_ctx->mutex); 7240 7240 7241 7241 synchronize_rcu(); 7242 7242 7243 7243 mutex_lock(&dst_ctx->mutex); 7244 - list_for_each_entry_safe(event, tmp, &events, event_entry) { 7245 - list_del(&event->event_entry); 7244 + list_for_each_entry_safe(event, tmp, &events, migrate_entry) { 7245 + list_del(&event->migrate_entry); 7246 7246 if (event->state >= PERF_EVENT_STATE_OFF) 7247 7247 event->state = PERF_EVENT_STATE_INACTIVE; 7248 7248 account_event_cpu(event, dst_cpu);
+1 -1
lib/kobject.c
··· 592 592 { 593 593 struct kobject *kobj = container_of(kref, struct kobject, kref); 594 594 #ifdef CONFIG_DEBUG_KOBJECT_RELEASE 595 - pr_debug("kobject: '%s' (%p): %s, parent %p (delayed)\n", 595 + pr_info("kobject: '%s' (%p): %s, parent %p (delayed)\n", 596 596 kobject_name(kobj), kobj, __func__, kobj->parent); 597 597 INIT_DELAYED_WORK(&kobj->release, kobject_delayed_cleanup); 598 598 schedule_delayed_work(&kobj->release, HZ);
+3
lib/percpu-refcount.c
··· 53 53 ref->release = release; 54 54 return 0; 55 55 } 56 + EXPORT_SYMBOL_GPL(percpu_ref_init); 56 57 57 58 /** 58 59 * percpu_ref_cancel_init - cancel percpu_ref_init() ··· 85 84 free_percpu(ref->pcpu_count); 86 85 } 87 86 } 87 + EXPORT_SYMBOL_GPL(percpu_ref_cancel_init); 88 88 89 89 static void percpu_ref_kill_rcu(struct rcu_head *rcu) 90 90 { ··· 158 156 159 157 call_rcu_sched(&ref->rcu, percpu_ref_kill_rcu); 160 158 } 159 + EXPORT_SYMBOL_GPL(percpu_ref_kill_and_confirm);
+1 -10
mm/filemap.c
··· 1616 1616 struct inode *inode = mapping->host; 1617 1617 pgoff_t offset = vmf->pgoff; 1618 1618 struct page *page; 1619 - bool memcg_oom; 1620 1619 pgoff_t size; 1621 1620 int ret = 0; 1622 1621 ··· 1624 1625 return VM_FAULT_SIGBUS; 1625 1626 1626 1627 /* 1627 - * Do we have something in the page cache already? Either 1628 - * way, try readahead, but disable the memcg OOM killer for it 1629 - * as readahead is optional and no errors are propagated up 1630 - * the fault stack. The OOM killer is enabled while trying to 1631 - * instantiate the faulting page individually below. 1628 + * Do we have something in the page cache already? 1632 1629 */ 1633 1630 page = find_get_page(mapping, offset); 1634 1631 if (likely(page) && !(vmf->flags & FAULT_FLAG_TRIED)) { ··· 1632 1637 * We found the page, so try async readahead before 1633 1638 * waiting for the lock. 1634 1639 */ 1635 - memcg_oom = mem_cgroup_toggle_oom(false); 1636 1640 do_async_mmap_readahead(vma, ra, file, page, offset); 1637 - mem_cgroup_toggle_oom(memcg_oom); 1638 1641 } else if (!page) { 1639 1642 /* No page in the page cache at all */ 1640 - memcg_oom = mem_cgroup_toggle_oom(false); 1641 1643 do_sync_mmap_readahead(vma, ra, file, offset); 1642 - mem_cgroup_toggle_oom(memcg_oom); 1643 1644 count_vm_event(PGMAJFAULT); 1644 1645 mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT); 1645 1646 ret = VM_FAULT_MAJOR;
+9 -1
mm/huge_memory.c
··· 2697 2697 2698 2698 mmun_start = haddr; 2699 2699 mmun_end = haddr + HPAGE_PMD_SIZE; 2700 + again: 2700 2701 mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end); 2701 2702 spin_lock(&mm->page_table_lock); 2702 2703 if (unlikely(!pmd_trans_huge(*pmd))) { ··· 2720 2719 split_huge_page(page); 2721 2720 2722 2721 put_page(page); 2723 - BUG_ON(pmd_trans_huge(*pmd)); 2722 + 2723 + /* 2724 + * We don't always have down_write of mmap_sem here: a racing 2725 + * do_huge_pmd_wp_page() might have copied-on-write to another 2726 + * huge page before our split_huge_page() got the anon_vma lock. 2727 + */ 2728 + if (unlikely(pmd_trans_huge(*pmd))) 2729 + goto again; 2724 2730 } 2725 2731 2726 2732 void split_huge_page_pmd_mm(struct mm_struct *mm, unsigned long address,
+16 -1
mm/hugetlb.c
··· 653 653 BUG_ON(page_count(page)); 654 654 BUG_ON(page_mapcount(page)); 655 655 restore_reserve = PagePrivate(page); 656 + ClearPagePrivate(page); 656 657 657 658 spin_lock(&hugetlb_lock); 658 659 hugetlb_cgroup_uncharge_page(hstate_index(h), ··· 696 695 /* we rely on prep_new_huge_page to set the destructor */ 697 696 set_compound_order(page, order); 698 697 __SetPageHead(page); 698 + __ClearPageReserved(page); 699 699 for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) { 700 700 __SetPageTail(p); 701 + /* 702 + * For gigantic hugepages allocated through bootmem at 703 + * boot, it's safer to be consistent with the not-gigantic 704 + * hugepages and clear the PG_reserved bit from all tail pages 705 + * too. Otherwise drivers using get_user_pages() to access tail 706 + * pages may get the reference counting wrong if they see 707 + * PG_reserved set on a tail page (despite the head page not 708 + * having PG_reserved set). Enforcing this consistency between 709 + * head and tail pages allows drivers to optimize away a check 710 + * on the head page when they need to know if put_page() is needed 711 + * after get_user_pages(). 712 + */ 713 + __ClearPageReserved(p); 701 714 set_page_count(p, 0); 702 715 p->first_page = page; 703 716 } ··· 1344 1329 #else 1345 1330 page = virt_to_page(m); 1346 1331 #endif 1347 - __ClearPageReserved(page); 1348 1332 WARN_ON(page_count(page) != 1); 1349 1333 prep_compound_huge_page(page, h->order); 1334 + WARN_ON(PageReserved(page)); 1350 1335 prep_new_huge_page(h, page, page_to_nid(page)); 1351 1336 /* 1352 1337 * If we had gigantic hugepages allocated at boot time, we need
+72 -105
mm/memcontrol.c
··· 866 866 unsigned long val = 0; 867 867 int cpu; 868 868 869 + get_online_cpus(); 869 870 for_each_online_cpu(cpu) 870 871 val += per_cpu(memcg->stat->events[idx], cpu); 871 872 #ifdef CONFIG_HOTPLUG_CPU ··· 874 873 val += memcg->nocpu_base.events[idx]; 875 874 spin_unlock(&memcg->pcp_counter_lock); 876 875 #endif 876 + put_online_cpus(); 877 877 return val; 878 878 } 879 879 ··· 2161 2159 memcg_wakeup_oom(memcg); 2162 2160 } 2163 2161 2164 - /* 2165 - * try to call OOM killer 2166 - */ 2167 2162 static void mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int order) 2168 2163 { 2169 - bool locked; 2170 - int wakeups; 2171 - 2172 2164 if (!current->memcg_oom.may_oom) 2173 2165 return; 2174 - 2175 - current->memcg_oom.in_memcg_oom = 1; 2176 - 2177 2166 /* 2178 - * As with any blocking lock, a contender needs to start 2179 - * listening for wakeups before attempting the trylock, 2180 - * otherwise it can miss the wakeup from the unlock and sleep 2181 - * indefinitely. This is just open-coded because our locking 2182 - * is so particular to memcg hierarchies. 2167 + * We are in the middle of the charge context here, so we 2168 + * don't want to block when potentially sitting on a callstack 2169 + * that holds all kinds of filesystem and mm locks. 2170 + * 2171 + * Also, the caller may handle a failed allocation gracefully 2172 + * (like optional page cache readahead) and so an OOM killer 2173 + * invocation might not even be necessary. 2174 + * 2175 + * That's why we don't do anything here except remember the 2176 + * OOM context and then deal with it at the end of the page 2177 + * fault when the stack is unwound, the locks are released, 2178 + * and when we know whether the fault was overall successful. 
2183 2179 */ 2184 - wakeups = atomic_read(&memcg->oom_wakeups); 2180 + css_get(&memcg->css); 2181 + current->memcg_oom.memcg = memcg; 2182 + current->memcg_oom.gfp_mask = mask; 2183 + current->memcg_oom.order = order; 2184 + } 2185 + 2186 + /** 2187 + * mem_cgroup_oom_synchronize - complete memcg OOM handling 2188 + * @handle: actually kill/wait or just clean up the OOM state 2189 + * 2190 + * This has to be called at the end of a page fault if the memcg OOM 2191 + * handler was enabled. 2192 + * 2193 + * Memcg supports userspace OOM handling where failed allocations must 2194 + * sleep on a waitqueue until the userspace task resolves the 2195 + * situation. Sleeping directly in the charge context with all kinds 2196 + * of locks held is not a good idea, instead we remember an OOM state 2197 + * in the task and mem_cgroup_oom_synchronize() has to be called at 2198 + * the end of the page fault to complete the OOM handling. 2199 + * 2200 + * Returns %true if an ongoing memcg OOM situation was detected and 2201 + * completed, %false otherwise. 
2202 + */ 2203 + bool mem_cgroup_oom_synchronize(bool handle) 2204 + { 2205 + struct mem_cgroup *memcg = current->memcg_oom.memcg; 2206 + struct oom_wait_info owait; 2207 + bool locked; 2208 + 2209 + /* OOM is global, do not handle */ 2210 + if (!memcg) 2211 + return false; 2212 + 2213 + if (!handle) 2214 + goto cleanup; 2215 + 2216 + owait.memcg = memcg; 2217 + owait.wait.flags = 0; 2218 + owait.wait.func = memcg_oom_wake_function; 2219 + owait.wait.private = current; 2220 + INIT_LIST_HEAD(&owait.wait.task_list); 2221 + 2222 + prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE); 2185 2223 mem_cgroup_mark_under_oom(memcg); 2186 2224 2187 2225 locked = mem_cgroup_oom_trylock(memcg); ··· 2231 2189 2232 2190 if (locked && !memcg->oom_kill_disable) { 2233 2191 mem_cgroup_unmark_under_oom(memcg); 2234 - mem_cgroup_out_of_memory(memcg, mask, order); 2235 - mem_cgroup_oom_unlock(memcg); 2236 - /* 2237 - * There is no guarantee that an OOM-lock contender 2238 - * sees the wakeups triggered by the OOM kill 2239 - * uncharges. Wake any sleepers explicitely. 2240 - */ 2241 - memcg_oom_recover(memcg); 2192 + finish_wait(&memcg_oom_waitq, &owait.wait); 2193 + mem_cgroup_out_of_memory(memcg, current->memcg_oom.gfp_mask, 2194 + current->memcg_oom.order); 2242 2195 } else { 2243 - /* 2244 - * A system call can just return -ENOMEM, but if this 2245 - * is a page fault and somebody else is handling the 2246 - * OOM already, we need to sleep on the OOM waitqueue 2247 - * for this memcg until the situation is resolved. 2248 - * Which can take some time because it might be 2249 - * handled by a userspace task. 2250 - * 2251 - * However, this is the charge context, which means 2252 - * that we may sit on a large call stack and hold 2253 - * various filesystem locks, the mmap_sem etc. and we 2254 - * don't want the OOM handler to deadlock on them 2255 - * while we sit here and wait. Store the current OOM 2256 - * context in the task_struct, then return -ENOMEM. 
2257 - * At the end of the page fault handler, with the 2258 - * stack unwound, pagefault_out_of_memory() will check 2259 - * back with us by calling 2260 - * mem_cgroup_oom_synchronize(), possibly putting the 2261 - * task to sleep. 2262 - */ 2263 - current->memcg_oom.oom_locked = locked; 2264 - current->memcg_oom.wakeups = wakeups; 2265 - css_get(&memcg->css); 2266 - current->memcg_oom.wait_on_memcg = memcg; 2267 - } 2268 - } 2269 - 2270 - /** 2271 - * mem_cgroup_oom_synchronize - complete memcg OOM handling 2272 - * 2273 - * This has to be called at the end of a page fault if the the memcg 2274 - * OOM handler was enabled and the fault is returning %VM_FAULT_OOM. 2275 - * 2276 - * Memcg supports userspace OOM handling, so failed allocations must 2277 - * sleep on a waitqueue until the userspace task resolves the 2278 - * situation. Sleeping directly in the charge context with all kinds 2279 - * of locks held is not a good idea, instead we remember an OOM state 2280 - * in the task and mem_cgroup_oom_synchronize() has to be called at 2281 - * the end of the page fault to put the task to sleep and clean up the 2282 - * OOM state. 2283 - * 2284 - * Returns %true if an ongoing memcg OOM situation was detected and 2285 - * finalized, %false otherwise. 2286 - */ 2287 - bool mem_cgroup_oom_synchronize(void) 2288 - { 2289 - struct oom_wait_info owait; 2290 - struct mem_cgroup *memcg; 2291 - 2292 - /* OOM is global, do not handle */ 2293 - if (!current->memcg_oom.in_memcg_oom) 2294 - return false; 2295 - 2296 - /* 2297 - * We invoked the OOM killer but there is a chance that a kill 2298 - * did not free up any charges. Everybody else might already 2299 - * be sleeping, so restart the fault and keep the rampage 2300 - * going until some charges are released. 
2301 - */ 2302 - memcg = current->memcg_oom.wait_on_memcg; 2303 - if (!memcg) 2304 - goto out; 2305 - 2306 - if (test_thread_flag(TIF_MEMDIE) || fatal_signal_pending(current)) 2307 - goto out_memcg; 2308 - 2309 - owait.memcg = memcg; 2310 - owait.wait.flags = 0; 2311 - owait.wait.func = memcg_oom_wake_function; 2312 - owait.wait.private = current; 2313 - INIT_LIST_HEAD(&owait.wait.task_list); 2314 - 2315 - prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE); 2316 - /* Only sleep if we didn't miss any wakeups since OOM */ 2317 - if (atomic_read(&memcg->oom_wakeups) == current->memcg_oom.wakeups) 2318 2196 schedule(); 2319 - finish_wait(&memcg_oom_waitq, &owait.wait); 2320 - out_memcg: 2321 - mem_cgroup_unmark_under_oom(memcg); 2322 - if (current->memcg_oom.oom_locked) { 2197 + mem_cgroup_unmark_under_oom(memcg); 2198 + finish_wait(&memcg_oom_waitq, &owait.wait); 2199 + } 2200 + 2201 + if (locked) { 2323 2202 mem_cgroup_oom_unlock(memcg); 2324 2203 /* 2325 2204 * There is no guarantee that an OOM-lock contender ··· 2249 2286 */ 2250 2287 memcg_oom_recover(memcg); 2251 2288 } 2289 + cleanup: 2290 + current->memcg_oom.memcg = NULL; 2252 2291 css_put(&memcg->css); 2253 - current->memcg_oom.wait_on_memcg = NULL; 2254 - out: 2255 - current->memcg_oom.in_memcg_oom = 0; 2256 2292 return true; 2257 2293 } 2258 2294 ··· 2665 2703 || fatal_signal_pending(current))) 2666 2704 goto bypass; 2667 2705 2706 + if (unlikely(task_in_memcg_oom(current))) 2707 + goto bypass; 2708 + 2668 2709 /* 2669 2710 * We always charge the cgroup the mm_struct belongs to. 2670 2711 * The mm_struct's mem_cgroup changes on task migration if the ··· 2766 2801 return 0; 2767 2802 nomem: 2768 2803 *ptr = NULL; 2804 + if (gfp_mask & __GFP_NOFAIL) 2805 + return 0; 2769 2806 return -ENOMEM; 2770 2807 bypass: 2771 2808 *ptr = root_mem_cgroup;
+14 -6
mm/memory.c
··· 837 837 */ 838 838 make_migration_entry_read(&entry); 839 839 pte = swp_entry_to_pte(entry); 840 + if (pte_swp_soft_dirty(*src_pte)) 841 + pte = pte_swp_mksoft_dirty(pte); 840 842 set_pte_at(src_mm, addr, src_pte, pte); 841 843 } 842 844 } ··· 3865 3863 * space. Kernel faults are handled more gracefully. 3866 3864 */ 3867 3865 if (flags & FAULT_FLAG_USER) 3868 - mem_cgroup_enable_oom(); 3866 + mem_cgroup_oom_enable(); 3869 3867 3870 3868 ret = __handle_mm_fault(mm, vma, address, flags); 3871 3869 3872 - if (flags & FAULT_FLAG_USER) 3873 - mem_cgroup_disable_oom(); 3874 - 3875 - if (WARN_ON(task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))) 3876 - mem_cgroup_oom_synchronize(); 3870 + if (flags & FAULT_FLAG_USER) { 3871 + mem_cgroup_oom_disable(); 3872 + /* 3873 + * The task may have entered a memcg OOM situation but 3874 + * if the allocation error was handled gracefully (no 3875 + * VM_FAULT_OOM), there is no need to kill anything. 3876 + * Just clean up the OOM state peacefully. 3877 + */ 3878 + if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM)) 3879 + mem_cgroup_oom_synchronize(false); 3880 + } 3877 3881 3878 3882 return ret; 3879 3883 }
+2
mm/migrate.c
··· 161 161 162 162 get_page(new); 163 163 pte = pte_mkold(mk_pte(new, vma->vm_page_prot)); 164 + if (pte_swp_soft_dirty(*ptep)) 165 + pte = pte_mksoft_dirty(pte); 164 166 if (is_write_migration_entry(entry)) 165 167 pte = pte_mkwrite(pte); 166 168 #ifdef CONFIG_HUGETLB_PAGE
+5 -2
mm/mprotect.c
··· 94 94 swp_entry_t entry = pte_to_swp_entry(oldpte); 95 95 96 96 if (is_write_migration_entry(entry)) { 97 + pte_t newpte; 97 98 /* 98 99 * A protection check is difficult so 99 100 * just be safe and disable write 100 101 */ 101 102 make_migration_entry_read(&entry); 102 - set_pte_at(mm, addr, pte, 103 - swp_entry_to_pte(entry)); 103 + newpte = swp_entry_to_pte(entry); 104 + if (pte_swp_soft_dirty(oldpte)) 105 + newpte = pte_swp_mksoft_dirty(newpte); 106 + set_pte_at(mm, addr, pte, newpte); 104 107 } 105 108 pages++; 106 109 }
+1 -4
mm/mremap.c
··· 25 25 #include <asm/uaccess.h> 26 26 #include <asm/cacheflush.h> 27 27 #include <asm/tlbflush.h> 28 - #include <asm/pgalloc.h> 29 28 30 29 #include "internal.h" 31 30 ··· 62 63 return NULL; 63 64 64 65 pmd = pmd_alloc(mm, pud, addr); 65 - if (!pmd) { 66 - pud_free(mm, pud); 66 + if (!pmd) 67 67 return NULL; 68 - } 69 68 70 69 VM_BUG_ON(pmd_trans_huge(*pmd)); 71 70
+1 -1
mm/oom_kill.c
··· 680 680 { 681 681 struct zonelist *zonelist; 682 682 683 - if (mem_cgroup_oom_synchronize()) 683 + if (mem_cgroup_oom_synchronize(true)) 684 684 return; 685 685 686 686 zonelist = node_zonelist(first_online_node, GFP_KERNEL);
+5 -5
mm/page-writeback.c
··· 1210 1210 return 1; 1211 1211 } 1212 1212 1213 - static long bdi_max_pause(struct backing_dev_info *bdi, 1214 - unsigned long bdi_dirty) 1213 + static unsigned long bdi_max_pause(struct backing_dev_info *bdi, 1214 + unsigned long bdi_dirty) 1215 1215 { 1216 - long bw = bdi->avg_write_bandwidth; 1217 - long t; 1216 + unsigned long bw = bdi->avg_write_bandwidth; 1217 + unsigned long t; 1218 1218 1219 1219 /* 1220 1220 * Limit pause time for small memory systems. If sleeping for too long ··· 1226 1226 t = bdi_dirty / (1 + bw / roundup_pow_of_two(1 + HZ / 8)); 1227 1227 t++; 1228 1228 1229 - return min_t(long, t, MAX_PAUSE); 1229 + return min_t(unsigned long, t, MAX_PAUSE); 1230 1230 } 1231 1231 1232 1232 static long bdi_min_pause(struct backing_dev_info *bdi,
+2
mm/slab_common.c
··· 56 56 continue; 57 57 } 58 58 59 + #if !defined(CONFIG_SLUB) || !defined(CONFIG_SLUB_DEBUG_ON) 59 60 /* 60 61 * For simplicity, we won't check this in the list of memcg 61 62 * caches. We have control over memcg naming, and if there ··· 70 69 s = NULL; 71 70 return -EINVAL; 72 71 } 72 + #endif 73 73 } 74 74 75 75 WARN_ON(strchr(name, ' ')); /* It confuses parsers */
+3 -1
mm/swapfile.c
··· 1824 1824 struct filename *pathname; 1825 1825 int i, type, prev; 1826 1826 int err; 1827 + unsigned int old_block_size; 1827 1828 1828 1829 if (!capable(CAP_SYS_ADMIN)) 1829 1830 return -EPERM; ··· 1915 1914 } 1916 1915 1917 1916 swap_file = p->swap_file; 1917 + old_block_size = p->old_block_size; 1918 1918 p->swap_file = NULL; 1919 1919 p->max = 0; 1920 1920 swap_map = p->swap_map; ··· 1940 1938 inode = mapping->host; 1941 1939 if (S_ISBLK(inode->i_mode)) { 1942 1940 struct block_device *bdev = I_BDEV(inode); 1943 - set_blocksize(bdev, p->old_block_size); 1941 + set_blocksize(bdev, old_block_size); 1944 1942 blkdev_put(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL); 1945 1943 } else { 1946 1944 mutex_lock(&inode->i_mutex);
+1
mm/vmscan.c
··· 211 211 down_write(&shrinker_rwsem); 212 212 list_del(&shrinker->list); 213 213 up_write(&shrinker_rwsem); 214 + kfree(shrinker->nr_deferred); 214 215 } 215 216 EXPORT_SYMBOL(unregister_shrinker); 216 217
+4
mm/zswap.c
··· 804 804 } 805 805 tree->rbroot = RB_ROOT; 806 806 spin_unlock(&tree->lock); 807 + 808 + zbud_destroy_pool(tree->pool); 809 + kfree(tree); 810 + zswap_trees[type] = NULL; 807 811 } 808 812 809 813 static struct zbud_ops zswap_zbud_ops = {
+1 -3
security/apparmor/apparmorfs.c
··· 580 580 581 581 /* check if the next ns is a sibling, parent, gp, .. */ 582 582 parent = ns->parent; 583 - while (parent) { 583 + while (ns != root) { 584 584 mutex_unlock(&ns->lock); 585 585 next = list_entry_next(ns, base.list); 586 586 if (!list_entry_is_head(next, &parent->sub_ns, base.list)) { 587 587 mutex_lock(&next->lock); 588 588 return next; 589 589 } 590 - if (parent == root) 591 - return NULL; 592 590 ns = parent; 593 591 parent = parent->parent; 594 592 }
+1
security/apparmor/policy.c
··· 610 610 aa_put_dfa(profile->policy.dfa); 611 611 aa_put_replacedby(profile->replacedby); 612 612 613 + kzfree(profile->hash); 613 614 kzfree(profile); 614 615 } 615 616
+1 -1
sound/pci/hda/hda_generic.c
··· 3531 3531 if (!multi) 3532 3532 err = create_single_cap_vol_ctl(codec, n, vol, sw, 3533 3533 inv_dmic); 3534 - else if (!multi_cap_vol) 3534 + else if (!multi_cap_vol && !inv_dmic) 3535 3535 err = create_bind_cap_vol_ctl(codec, n, vol, sw); 3536 3536 else 3537 3537 err = create_multi_cap_vol_ctl(codec);
+8 -10
sound/pci/hda/patch_hdmi.c
··· 937 937 } 938 938 939 939 /* 940 + * always configure channel mapping, it may have been changed by the 941 + * user in the meantime 942 + */ 943 + hdmi_setup_channel_mapping(codec, pin_nid, non_pcm, ca, 944 + channels, per_pin->chmap, 945 + per_pin->chmap_set); 946 + 947 + /* 940 948 * sizeof(ai) is used instead of sizeof(*hdmi_ai) or 941 949 * sizeof(*dp_ai) to avoid partial match/update problems when 942 950 * the user switches between HDMI/DP monitors. ··· 955 947 "pin=%d channels=%d\n", 956 948 pin_nid, 957 949 channels); 958 - hdmi_setup_channel_mapping(codec, pin_nid, non_pcm, ca, 959 - channels, per_pin->chmap, 960 - per_pin->chmap_set); 961 950 hdmi_stop_infoframe_trans(codec, pin_nid); 962 951 hdmi_fill_audio_infoframe(codec, pin_nid, 963 952 ai.bytes, sizeof(ai)); 964 953 hdmi_start_infoframe_trans(codec, pin_nid); 965 - } else { 966 - /* For non-pcm audio switch, setup new channel mapping 967 - * accordingly */ 968 - if (per_pin->non_pcm != non_pcm) 969 - hdmi_setup_channel_mapping(codec, pin_nid, non_pcm, ca, 970 - channels, per_pin->chmap, 971 - per_pin->chmap_set); 972 954 } 973 955 974 956 per_pin->non_pcm = non_pcm;
+54
sound/pci/hda/patch_realtek.c
··· 2819 2819 alc_write_coef_idx(codec, 0x1e, coef | 0x80); 2820 2820 } 2821 2821 2822 + static void alc269_fixup_headset_mic(struct hda_codec *codec, 2823 + const struct hda_fixup *fix, int action) 2824 + { 2825 + struct alc_spec *spec = codec->spec; 2826 + 2827 + if (action == HDA_FIXUP_ACT_PRE_PROBE) 2828 + spec->parse_flags |= HDA_PINCFG_HEADSET_MIC; 2829 + } 2830 + 2822 2831 static void alc271_fixup_dmic(struct hda_codec *codec, 2823 2832 const struct hda_fixup *fix, int action) 2824 2833 { ··· 3505 3496 } 3506 3497 } 3507 3498 3499 + static void alc290_fixup_mono_speakers(struct hda_codec *codec, 3500 + const struct hda_fixup *fix, int action) 3501 + { 3502 + if (action == HDA_FIXUP_ACT_PRE_PROBE) 3503 + /* Remove DAC node 0x03, as it seems to be 3504 + giving mono output */ 3505 + snd_hda_override_wcaps(codec, 0x03, 0); 3506 + } 3507 + 3508 3508 enum { 3509 3509 ALC269_FIXUP_SONY_VAIO, 3510 3510 ALC275_FIXUP_SONY_VAIO_GPIO2, ··· 3525 3507 ALC271_FIXUP_DMIC, 3526 3508 ALC269_FIXUP_PCM_44K, 3527 3509 ALC269_FIXUP_STEREO_DMIC, 3510 + ALC269_FIXUP_HEADSET_MIC, 3528 3511 ALC269_FIXUP_QUANTA_MUTE, 3529 3512 ALC269_FIXUP_LIFEBOOK, 3530 3513 ALC269_FIXUP_AMIC, ··· 3538 3519 ALC269_FIXUP_HP_GPIO_LED, 3539 3520 ALC269_FIXUP_INV_DMIC, 3540 3521 ALC269_FIXUP_LENOVO_DOCK, 3522 + ALC286_FIXUP_SONY_MIC_NO_PRESENCE, 3541 3523 ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT, 3542 3524 ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, 3543 3525 ALC269_FIXUP_DELL2_MIC_NO_PRESENCE, 3526 + ALC269_FIXUP_DELL3_MIC_NO_PRESENCE, 3544 3527 ALC269_FIXUP_HEADSET_MODE, 3545 3528 ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC, 3546 3529 ALC269_FIXUP_ASUS_X101_FUNC, ··· 3556 3535 ALC283_FIXUP_CHROME_BOOK, 3557 3536 ALC282_FIXUP_ASUS_TX300, 3558 3537 ALC283_FIXUP_INT_MIC, 3538 + ALC290_FIXUP_MONO_SPEAKERS, 3559 3539 }; 3560 3540 3561 3541 static const struct hda_fixup alc269_fixups[] = { ··· 3624 3602 [ALC269_FIXUP_STEREO_DMIC] = { 3625 3603 .type = HDA_FIXUP_FUNC, 3626 3604 .v.func = alc269_fixup_stereo_dmic, 3605 + }, 3606 
+ [ALC269_FIXUP_HEADSET_MIC] = { 3607 + .type = HDA_FIXUP_FUNC, 3608 + .v.func = alc269_fixup_headset_mic, 3627 3609 }, 3628 3610 [ALC269_FIXUP_QUANTA_MUTE] = { 3629 3611 .type = HDA_FIXUP_FUNC, ··· 3738 3712 .chained = true, 3739 3713 .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC 3740 3714 }, 3715 + [ALC269_FIXUP_DELL3_MIC_NO_PRESENCE] = { 3716 + .type = HDA_FIXUP_PINS, 3717 + .v.pins = (const struct hda_pintbl[]) { 3718 + { 0x1a, 0x01a1913c }, /* use as headset mic, without its own jack detect */ 3719 + { } 3720 + }, 3721 + .chained = true, 3722 + .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC 3723 + }, 3741 3724 [ALC269_FIXUP_HEADSET_MODE] = { 3742 3725 .type = HDA_FIXUP_FUNC, 3743 3726 .v.func = alc_fixup_headset_mode, ··· 3754 3719 [ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC] = { 3755 3720 .type = HDA_FIXUP_FUNC, 3756 3721 .v.func = alc_fixup_headset_mode_no_hp_mic, 3722 + }, 3723 + [ALC286_FIXUP_SONY_MIC_NO_PRESENCE] = { 3724 + .type = HDA_FIXUP_PINS, 3725 + .v.pins = (const struct hda_pintbl[]) { 3726 + { 0x18, 0x01a1913c }, /* use as headset mic, without its own jack detect */ 3727 + { } 3728 + }, 3729 + .chained = true, 3730 + .chain_id = ALC269_FIXUP_HEADSET_MIC 3757 3731 }, 3758 3732 [ALC269_FIXUP_ASUS_X101_FUNC] = { 3759 3733 .type = HDA_FIXUP_FUNC, ··· 3848 3804 .chained = true, 3849 3805 .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST 3850 3806 }, 3807 + [ALC290_FIXUP_MONO_SPEAKERS] = { 3808 + .type = HDA_FIXUP_FUNC, 3809 + .v.func = alc290_fixup_mono_speakers, 3810 + .chained = true, 3811 + .chain_id = ALC269_FIXUP_DELL3_MIC_NO_PRESENCE, 3812 + }, 3851 3813 }; 3852 3814 3853 3815 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 3895 3845 SND_PCI_QUIRK(0x1028, 0x0608, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 3896 3846 SND_PCI_QUIRK(0x1028, 0x0609, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 3897 3847 SND_PCI_QUIRK(0x1028, 0x0613, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 3848 + SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", 
ALC290_FIXUP_MONO_SPEAKERS), 3898 3849 SND_PCI_QUIRK(0x1028, 0x15cc, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE), 3899 3850 SND_PCI_QUIRK(0x1028, 0x15cd, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE), 3900 3851 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2), ··· 3918 3867 SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC), 3919 3868 SND_PCI_QUIRK(0x1043, 0x83ce, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC), 3920 3869 SND_PCI_QUIRK(0x1043, 0x8516, "ASUS X101CH", ALC269_FIXUP_ASUS_X101), 3870 + SND_PCI_QUIRK(0x104d, 0x90b6, "Sony VAIO Pro 13", ALC286_FIXUP_SONY_MIC_NO_PRESENCE), 3921 3871 SND_PCI_QUIRK(0x104d, 0x9073, "Sony VAIO", ALC275_FIXUP_SONY_VAIO_GPIO2), 3922 3872 SND_PCI_QUIRK(0x104d, 0x907b, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ), 3923 3873 SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ), ··· 4004 3952 {.id = ALC269_FIXUP_STEREO_DMIC, .name = "alc269-dmic"}, 4005 3953 {.id = ALC271_FIXUP_DMIC, .name = "alc271-dmic"}, 4006 3954 {.id = ALC269_FIXUP_INV_DMIC, .name = "inv-dmic"}, 3955 + {.id = ALC269_FIXUP_HEADSET_MIC, .name = "headset-mic"}, 4007 3956 {.id = ALC269_FIXUP_LENOVO_DOCK, .name = "lenovo-dock"}, 4008 3957 {.id = ALC269_FIXUP_HP_GPIO_LED, .name = "hp-gpio-led"}, 4009 3958 {.id = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, .name = "dell-headset-multi"}, ··· 4622 4569 SND_PCI_QUIRK(0x1028, 0x05d8, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 4623 4570 SND_PCI_QUIRK(0x1028, 0x05db, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 4624 4571 SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800), 4572 + SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_ASUS_MODE4), 4625 4573 SND_PCI_QUIRK(0x1043, 0x8469, "ASUS mobo", ALC662_FIXUP_NO_JACK_DETECT), 4626 4574 SND_PCI_QUIRK(0x105b, 0x0cd6, "Foxconn", ALC662_FIXUP_ASUS_MODE2), 4627 4575 SND_PCI_QUIRK(0x144d, 0xc051, "Samsung R720", ALC662_FIXUP_IDEAPAD),
+1
sound/pci/rme9652/hdsp.c
··· 4845 4845 if ((err = hdsp_get_iobox_version(hdsp)) < 0) 4846 4846 return err; 4847 4847 } 4848 + memset(&hdsp_version, 0, sizeof(hdsp_version)); 4848 4849 hdsp_version.io_type = hdsp->io_type; 4849 4850 hdsp_version.firmware_rev = hdsp->firmware_rev; 4850 4851 if ((err = copy_to_user(argp, &hdsp_version, sizeof(hdsp_version))))
+3 -1
sound/usb/usx2y/us122l.c
··· 262 262 } 263 263 264 264 area->vm_ops = &usb_stream_hwdep_vm_ops; 265 - area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP; 265 + area->vm_flags |= VM_DONTDUMP; 266 + if (!read) 267 + area->vm_flags |= VM_DONTEXPAND; 266 268 area->vm_private_data = us122l; 267 269 atomic_inc(&us122l->mmap_count); 268 270 out:
+3 -19
sound/usb/usx2y/usbusx2yaudio.c
··· 299 299 usX2Y_clients_stop(usX2Y); 300 300 } 301 301 302 - static void usX2Y_error_sequence(struct usX2Ydev *usX2Y, 303 - struct snd_usX2Y_substream *subs, struct urb *urb) 304 - { 305 - snd_printk(KERN_ERR 306 - "Sequence Error!(hcd_frame=%i ep=%i%s;wait=%i,frame=%i).\n" 307 - "Most probably some urb of usb-frame %i is still missing.\n" 308 - "Cause could be too long delays in usb-hcd interrupt handling.\n", 309 - usb_get_current_frame_number(usX2Y->dev), 310 - subs->endpoint, usb_pipein(urb->pipe) ? "in" : "out", 311 - usX2Y->wait_iso_frame, urb->start_frame, usX2Y->wait_iso_frame); 312 - usX2Y_clients_stop(usX2Y); 313 - } 314 - 315 302 static void i_usX2Y_urb_complete(struct urb *urb) 316 303 { 317 304 struct snd_usX2Y_substream *subs = urb->context; ··· 315 328 usX2Y_error_urb_status(usX2Y, subs, urb); 316 329 return; 317 330 } 318 - if (likely((urb->start_frame & 0xFFFF) == (usX2Y->wait_iso_frame & 0xFFFF))) 319 - subs->completed_urb = urb; 320 - else { 321 - usX2Y_error_sequence(usX2Y, subs, urb); 322 - return; 323 - } 331 + 332 + subs->completed_urb = urb; 333 + 324 334 { 325 335 struct snd_usX2Y_substream *capsubs = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE], 326 336 *playbacksubs = usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+1 -6
sound/usb/usx2y/usx2yhwdeppcm.c
··· 244 244 usX2Y_error_urb_status(usX2Y, subs, urb); 245 245 return; 246 246 } 247 - if (likely((urb->start_frame & 0xFFFF) == (usX2Y->wait_iso_frame & 0xFFFF))) 248 - subs->completed_urb = urb; 249 - else { 250 - usX2Y_error_sequence(usX2Y, subs, urb); 251 - return; 252 - } 253 247 248 + subs->completed_urb = urb; 254 249 capsubs = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE]; 255 250 capsubs2 = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE + 2]; 256 251 playbacksubs = usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
+1
tools/perf/Makefile
··· 770 770 install-bin: all 771 771 $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(bindir_SQ)' 772 772 $(INSTALL) $(OUTPUT)perf '$(DESTDIR_SQ)$(bindir_SQ)' 773 + $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)' 773 774 $(INSTALL) $(OUTPUT)perf-archive -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)' 774 775 ifndef NO_LIBPERL 775 776 $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/Perf-Trace-Util/lib/Perf/Trace'
+1
tools/perf/builtin-stat.c
··· 457 457 perror("failed to prepare workload"); 458 458 return -1; 459 459 } 460 + child_pid = evsel_list->workload.pid; 460 461 } 461 462 462 463 if (group)
+1 -1
tools/perf/config/feature-tests.mak
··· 219 219 220 220 int main(void) 221 221 { 222 - printf(\"error message: %s\n\", audit_errno_to_name(0)); 222 + printf(\"error message: %s\", audit_errno_to_name(0)); 223 223 return audit_open(); 224 224 } 225 225 endef
+22 -5
tools/perf/util/dwarf-aux.c
··· 426 426 * @die_mem: a buffer for result DIE 427 427 * 428 428 * Search a non-inlined function DIE which includes @addr. Stores the 429 - * DIE to @die_mem and returns it if found. Returns NULl if failed. 429 + * DIE to @die_mem and returns it if found. Returns NULL if failed. 430 430 */ 431 431 Dwarf_Die *die_find_realfunc(Dwarf_Die *cu_die, Dwarf_Addr addr, 432 432 Dwarf_Die *die_mem) ··· 454 454 } 455 455 456 456 /** 457 - * die_find_inlinefunc - Search an inlined function at given address 458 - * @cu_die: a CU DIE which including @addr 457 + * die_find_top_inlinefunc - Search the top inlined function at given address 458 + * @sp_die: a subprogram DIE which including @addr 459 459 * @addr: target address 460 460 * @die_mem: a buffer for result DIE 461 461 * 462 462 * Search an inlined function DIE which includes @addr. Stores the 463 - * DIE to @die_mem and returns it if found. Returns NULl if failed. 463 + * DIE to @die_mem and returns it if found. Returns NULL if failed. 464 + * Even if several inlined functions are expanded recursively, this 465 + * doesn't trace it down, and returns the topmost one. 466 + */ 467 + Dwarf_Die *die_find_top_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr, 468 + Dwarf_Die *die_mem) 469 + { 470 + return die_find_child(sp_die, __die_find_inline_cb, &addr, die_mem); 471 + } 472 + 473 + /** 474 + * die_find_inlinefunc - Search an inlined function at given address 475 + * @sp_die: a subprogram DIE which including @addr 476 + * @addr: target address 477 + * @die_mem: a buffer for result DIE 478 + * 479 + * Search an inlined function DIE which includes @addr. Stores the 480 + * DIE to @die_mem and returns it if found. Returns NULL if failed. 464 481 * If several inlined functions are expanded recursively, this trace 465 - * it and returns deepest one. 482 + * it down and returns deepest one. 466 483 */ 467 484 Dwarf_Die *die_find_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr, 468 485 Dwarf_Die *die_mem)
+5 -1
tools/perf/util/dwarf-aux.h
··· 79 79 extern Dwarf_Die *die_find_realfunc(Dwarf_Die *cu_die, Dwarf_Addr addr, 80 80 Dwarf_Die *die_mem); 81 81 82 - /* Search an inlined function including given address */ 82 + /* Search the top inlined function including given address */ 83 + extern Dwarf_Die *die_find_top_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr, 84 + Dwarf_Die *die_mem); 85 + 86 + /* Search the deepest inlined function including given address */ 83 87 extern Dwarf_Die *die_find_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr, 84 88 Dwarf_Die *die_mem); 85 89
+12
tools/perf/util/header.c
··· 2768 2768 if (perf_file_header__read(&f_header, header, fd) < 0) 2769 2769 return -EINVAL; 2770 2770 2771 + /* 2772 + * Sanity check that perf.data was written cleanly; data size is 2773 + * initialized to 0 and updated only if the on_exit function is run. 2774 + * If data size is still 0 then the file contains only partial 2775 + * information. Just warn user and process it as much as it can. 2776 + */ 2777 + if (f_header.data.size == 0) { 2778 + pr_warning("WARNING: The %s file's data size field is 0 which is unexpected.\n" 2779 + "Was the 'perf record' command properly terminated?\n", 2780 + session->filename); 2781 + } 2782 + 2771 2783 nr_attrs = f_header.attrs.size / f_header.attr_size; 2772 2784 lseek(fd, f_header.attrs.offset, SEEK_SET); 2773 2785
+33 -16
tools/perf/util/probe-finder.c
··· 1327 1327 struct perf_probe_point *ppt) 1328 1328 { 1329 1329 Dwarf_Die cudie, spdie, indie; 1330 - Dwarf_Addr _addr, baseaddr; 1331 - const char *fname = NULL, *func = NULL, *tmp; 1330 + Dwarf_Addr _addr = 0, baseaddr = 0; 1331 + const char *fname = NULL, *func = NULL, *basefunc = NULL, *tmp; 1332 1332 int baseline = 0, lineno = 0, ret = 0; 1333 1333 1334 1334 /* Adjust address with bias */ ··· 1349 1349 /* Find a corresponding function (name, baseline and baseaddr) */ 1350 1350 if (die_find_realfunc(&cudie, (Dwarf_Addr)addr, &spdie)) { 1351 1351 /* Get function entry information */ 1352 - tmp = dwarf_diename(&spdie); 1353 - if (!tmp || 1352 + func = basefunc = dwarf_diename(&spdie); 1353 + if (!func || 1354 1354 dwarf_entrypc(&spdie, &baseaddr) != 0 || 1355 - dwarf_decl_line(&spdie, &baseline) != 0) 1355 + dwarf_decl_line(&spdie, &baseline) != 0) { 1356 + lineno = 0; 1356 1357 goto post; 1357 - func = tmp; 1358 + } 1358 1359 1359 - if (addr == (unsigned long)baseaddr) 1360 + if (addr == (unsigned long)baseaddr) { 1360 1361 /* Function entry - Relative line number is 0 */ 1361 1362 lineno = baseline; 1362 - else if (die_find_inlinefunc(&spdie, (Dwarf_Addr)addr, 1363 - &indie)) { 1363 + fname = dwarf_decl_file(&spdie); 1364 + goto post; 1365 + } 1366 + 1367 + /* Track down the inline functions step by step */ 1368 + while (die_find_top_inlinefunc(&spdie, (Dwarf_Addr)addr, 1369 + &indie)) { 1370 + /* There is an inline function */ 1364 1371 if (dwarf_entrypc(&indie, &_addr) == 0 && 1365 - _addr == addr) 1372 + _addr == addr) { 1366 1373 /* 1367 1374 * addr is at an inline function entry. 1368 1375 * In this case, lineno should be the call-site 1369 - * line number. 1376 + * line number. (overwrite lineinfo) 1370 1377 */ 1371 1378 lineno = die_get_call_lineno(&indie); 1372 - else { 1379 + fname = die_get_call_file(&indie); 1380 + break; 1381 + } else { 1373 1382 /* 1374 1383 * addr is in an inline function body. 
1375 1384 * Since lineno points one of the lines ··· 1386 1377 * be the entry line of the inline function. 1387 1378 */ 1388 1379 tmp = dwarf_diename(&indie); 1389 - if (tmp && 1390 - dwarf_decl_line(&spdie, &baseline) == 0) 1391 - func = tmp; 1380 + if (!tmp || 1381 + dwarf_decl_line(&indie, &baseline) != 0) 1382 + break; 1383 + func = tmp; 1384 + spdie = indie; 1392 1385 } 1393 1386 } 1387 + /* Verify the lineno and baseline are in a same file */ 1388 + tmp = dwarf_decl_file(&spdie); 1389 + if (!tmp || strcmp(tmp, fname) != 0) 1390 + lineno = 0; 1394 1391 } 1395 1392 1396 1393 post: 1397 1394 /* Make a relative line number or an offset */ 1398 1395 if (lineno) 1399 1396 ppt->line = lineno - baseline; 1400 - else if (func) 1397 + else if (basefunc) { 1401 1398 ppt->offset = addr - (unsigned long)baseaddr; 1399 + func = basefunc; 1400 + } 1402 1401 1403 1402 /* Duplicate strings */ 1404 1403 if (func) {
+3 -1
tools/perf/util/session.c
··· 256 256 tool->sample = process_event_sample_stub; 257 257 if (tool->mmap == NULL) 258 258 tool->mmap = process_event_stub; 259 + if (tool->mmap2 == NULL) 260 + tool->mmap2 = process_event_stub; 259 261 if (tool->comm == NULL) 260 262 tool->comm = process_event_stub; 261 263 if (tool->fork == NULL) ··· 1312 1310 file_offset = page_offset; 1313 1311 head = data_offset - page_offset; 1314 1312 1315 - if (data_offset + data_size < file_size) 1313 + if (data_size && (data_offset + data_size < file_size)) 1316 1314 file_size = data_offset + data_size; 1317 1315 1318 1316 progress_next = file_size / 16;
+1 -1
tools/testing/selftests/timers/posix_timers.c
··· 151 151 fflush(stdout); 152 152 153 153 done = 0; 154 - timer_create(which, NULL, &id); 154 + err = timer_create(which, NULL, &id); 155 155 if (err < 0) { 156 156 perror("Can't create timer\n"); 157 157 return -1;