···
		that the USB device has been connected to the machine.  This
		file is read-only.
Users:
-		PowerTOP <power@bughost.org>
-		http://www.lesswatts.org/projects/powertop/
+		PowerTOP <powertop@lists.01.org>
+		https://01.org/powertop/

What:		/sys/bus/usb/device/.../power/active_duration
Date:		January 2008
···
		will give an integer percentage.  Note that this does not
		account for counter wrap.
Users:
-		PowerTOP <power@bughost.org>
-		http://www.lesswatts.org/projects/powertop/
+		PowerTOP <powertop@lists.01.org>
+		https://01.org/powertop/

What:		/sys/bus/usb/devices/<busnum>-<port[.port]>...:<config num>-<interface num>/supports_autosuspend
Date:		January 2008
+16 -16
Documentation/ABI/testing/sysfs-devices-power
···
What:		/sys/devices/.../power/
Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../power directory contains attributes
		allowing the user space to check and modify some power
···

What:		/sys/devices/.../power/wakeup
Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../power/wakeup attribute allows the user
		space to check if the device is enabled to wake up the system
···

What:		/sys/devices/.../power/control
Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../power/control attribute allows the user
		space to control the run-time power management of the device.
···

What:		/sys/devices/.../power/async
Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../async attribute allows the user space to
		enable or diasble the device's suspend and resume callbacks to
···

What:		/sys/devices/.../power/wakeup_count
Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../wakeup_count attribute contains the number
		of signaled wakeup events associated with the device.  This
···

What:		/sys/devices/.../power/wakeup_active_count
Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../wakeup_active_count attribute contains the
		number of times the processing of wakeup events associated with
···

What:		/sys/devices/.../power/wakeup_abort_count
Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../wakeup_abort_count attribute contains the
		number of times the processing of a wakeup event associated with
···

What:		/sys/devices/.../power/wakeup_expire_count
Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../wakeup_expire_count attribute contains the
		number of times a wakeup event associated with the device has
···

What:		/sys/devices/.../power/wakeup_active
Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../wakeup_active attribute contains either 1,
		or 0, depending on whether or not a wakeup event associated with
···

What:		/sys/devices/.../power/wakeup_total_time_ms
Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../wakeup_total_time_ms attribute contains
		the total time of processing wakeup events associated with the
···

What:		/sys/devices/.../power/wakeup_max_time_ms
Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../wakeup_max_time_ms attribute contains
		the maximum time of processing a single wakeup event associated
···

What:		/sys/devices/.../power/wakeup_last_time_ms
Date:		September 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../wakeup_last_time_ms attribute contains
		the value of the monotonic clock corresponding to the time of
···

What:		/sys/devices/.../power/wakeup_prevent_sleep_time_ms
Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../wakeup_prevent_sleep_time_ms attribute
		contains the total time the device has been preventing
···

What:		/sys/devices/.../power/pm_qos_latency_us
Date:		March 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../power/pm_qos_resume_latency_us attribute
		contains the PM QoS resume latency limit for the given device,
···

What:		/sys/devices/.../power/pm_qos_no_power_off
Date:		September 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../power/pm_qos_no_power_off attribute
		is used for manipulating the PM QoS "no power off" flag.  If
···

What:		/sys/devices/.../power/pm_qos_remote_wakeup
Date:		September 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/devices/.../power/pm_qos_remote_wakeup attribute
		is used for manipulating the PM QoS "remote wakeup required"
+11 -11
Documentation/ABI/testing/sysfs-power
···
What:		/sys/power/
Date:		August 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/power directory will contain files that will
		provide a unified interface to the power management
···

What:		/sys/power/state
Date:		August 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/power/state file controls the system power state.
		Reading from this file returns what states are supported,
···

What:		/sys/power/disk
Date:		September 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/power/disk file controls the operating mode of the
		suspend-to-disk mechanism.  Reading from this file returns
···

What:		/sys/power/image_size
Date:		August 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/power/image_size file controls the size of the image
		created by the suspend-to-disk mechanism.  It can be written a
···

What:		/sys/power/pm_trace
Date:		August 2006
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/power/pm_trace file controls the code which saves the
		last PM event point in the RTC across reboots, so that you can
···

What:		/sys/power/pm_async
Date:		January 2009
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/power/pm_async file controls the switch allowing the
		user space to enable or disable asynchronous suspend and resume
···

What:		/sys/power/wakeup_count
Date:		July 2010
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/power/wakeup_count file allows user space to put the
		system into a sleep state while taking into account the
···

What:		/sys/power/reserved_size
Date:		May 2011
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/power/reserved_size file allows user space to control
		the amount of memory reserved for allocations made by device
···

What:		/sys/power/autosleep
Date:		April 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/power/autosleep file can be written one of the strings
		returned by reads from /sys/power/state.  If that happens, a
···

What:		/sys/power/wake_lock
Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/power/wake_lock file allows user space to create
		wakeup source objects and activate them on demand (if one of
···

What:		/sys/power/wake_unlock
Date:		February 2012
-Contact:	Rafael J. Wysocki <rjw@sisk.pl>
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
		The /sys/power/wake_unlock file allows user space to deactivate
		wakeup sources created with the help of /sys/power/wake_lock.
+1 -1
Documentation/acpi/dsdt-override.txt
···

When to use this method is described in detail on the
Linux/ACPI home page:
-http://www.lesswatts.org/projects/acpi/overridingDSDT.php
+https://01.org/linux-acpi/documentation/overriding-dsdt
-168
Documentation/devicetree/bindings/memory.txt
···
-*** Memory binding ***
-
-The /memory node provides basic information about the address and size
-of the physical memory. This node is usually filled or updated by the
-bootloader, depending on the actual memory configuration of the given
-hardware.
-
-The memory layout is described by the following node:
-
-/ {
-	#address-cells = <(n)>;
-	#size-cells = <(m)>;
-	memory {
-		device_type = "memory";
-		reg =  <(baseaddr1) (size1)
-			(baseaddr2) (size2)
-			...
-			(baseaddrN) (sizeN)>;
-	};
-	...
-};
-
-A memory node follows the typical device tree rules for "reg" property:
-n:		number of cells used to store base address value
-m:		number of cells used to store size value
-baseaddrX:	defines a base address of the defined memory bank
-sizeX:		the size of the defined memory bank
-
-
-More than one memory bank can be defined.
-
-
-*** Reserved memory regions ***
-
-In /memory/reserved-memory node one can create child nodes describing
-particular reserved (excluded from normal use) memory regions. Such
-memory regions are usually designed for the special usage by various
-device drivers. A good example are contiguous memory allocations or
-memory sharing with other operating system on the same hardware board.
-Those special memory regions might depend on the board configuration and
-devices used on the target system.
-
-Parameters for each memory region can be encoded into the device tree
-with the following convention:
-
-[(label):] (name) {
-	compatible = "linux,contiguous-memory-region", "reserved-memory-region";
-	reg = <(address) (size)>;
-	(linux,default-contiguous-region);
-};
-
-compatible:	one or more of:
-	- "linux,contiguous-memory-region" - enables binding of this
-	  region to Contiguous Memory Allocator (special region for
-	  contiguous memory allocations, shared with movable system
-	  memory, Linux kernel-specific).
-	- "reserved-memory-region" - compatibility is defined, given
-	  region is assigned for exclusive usage for by the respective
-	  devices.
-
-reg:	standard property defining the base address and size of
-	the memory region
-
-linux,default-contiguous-region: property indicating that the region
-	is the default region for all contiguous memory
-	allocations, Linux specific (optional)
-
-It is optional to specify the base address, so if one wants to use
-autoconfiguration of the base address, '0' can be specified as a base
-address in the 'reg' property.
-
-The /memory/reserved-memory node must contain the same #address-cells
-and #size-cells value as the root node.
-
-
-*** Device node's properties ***
-
-Once regions in the /memory/reserved-memory node have been defined, they
-may be referenced by other device nodes. Bindings that wish to reference
-memory regions should explicitly document their use of the following
-property:
-
-memory-region = <&phandle_to_defined_region>;
-
-This property indicates that the device driver should use the memory
-region pointed by the given phandle.
-
-
-*** Example ***
-
-This example defines a memory consisting of 4 memory banks. 3 contiguous
-regions are defined for Linux kernel, one default of all device drivers
-(named contig_mem, placed at 0x72000000, 64MiB), one dedicated to the
-framebuffer device (labelled display_mem, placed at 0x78000000, 8MiB)
-and one for multimedia processing (labelled multimedia_mem, placed at
-0x77000000, 64MiB). 'display_mem' region is then assigned to fb@12300000
-device for DMA memory allocations (Linux kernel drivers will use CMA is
-available or dma-exclusive usage otherwise). 'multimedia_mem' is
-assigned to scaler@12500000 and codec@12600000 devices for contiguous
-memory allocations when CMA driver is enabled.
-
-The reason for creating a separate region for framebuffer device is to
-match the framebuffer base address to the one configured by bootloader,
-so once Linux kernel drivers starts no glitches on the displayed boot
-logo appears. Scaller and codec drivers should share the memory
-allocations.
-
-/ {
-	#address-cells = <1>;
-	#size-cells = <1>;
-
-	/* ... */
-
-	memory {
-		reg =  <0x40000000 0x10000000
-			0x50000000 0x10000000
-			0x60000000 0x10000000
-			0x70000000 0x10000000>;
-
-		reserved-memory {
-			#address-cells = <1>;
-			#size-cells = <1>;
-
-			/*
-			 * global autoconfigured region for contiguous allocations
-			 * (used only with Contiguous Memory Allocator)
-			 */
-			contig_region@0 {
-				compatible = "linux,contiguous-memory-region";
-				reg = <0x0 0x4000000>;
-				linux,default-contiguous-region;
-			};
-
-			/*
-			 * special region for framebuffer
-			 */
-			display_region: region@78000000 {
-				compatible = "linux,contiguous-memory-region", "reserved-memory-region";
-				reg = <0x78000000 0x800000>;
-			};
-
-			/*
-			 * special region for multimedia processing devices
-			 */
-			multimedia_region: region@77000000 {
-				compatible = "linux,contiguous-memory-region";
-				reg = <0x77000000 0x4000000>;
-			};
-		};
-	};
-
-	/* ... */
-
-	fb0: fb@12300000 {
-		status = "okay";
-		memory-region = <&display_region>;
-	};
-
-	scaler: scaler@12500000 {
-		status = "okay";
-		memory-region = <&multimedia_region>;
-	};
-
-	codec: codec@12600000 {
-		status = "okay";
-		memory-region = <&multimedia_region>;
-	};
-};
···
described in mmc.txt, can be used. Additionally the following tmio_mmc-specific
optional bindings can be used.

+Required properties:
+- compatible:	"renesas,sdhi-shmobile" - a generic sh-mobile SDHI unit
+		"renesas,sdhi-sh7372" - SDHI IP on SH7372 SoC
+		"renesas,sdhi-sh73a0" - SDHI IP on SH73A0 SoC
+		"renesas,sdhi-r8a73a4" - SDHI IP on R8A73A4 SoC
+		"renesas,sdhi-r8a7740" - SDHI IP on R8A7740 SoC
+		"renesas,sdhi-r8a7778" - SDHI IP on R8A7778 SoC
+		"renesas,sdhi-r8a7779" - SDHI IP on R8A7779 SoC
+		"renesas,sdhi-r8a7790" - SDHI IP on R8A7790 SoC
+
Optional properties:
- toshiba,mmc-wrprotect-disable: write-protect detection is unavailable
-
-When used with Renesas SDHI hardware, the following compatibility strings
-configure various model-specific properties:
-
-"renesas,sh7372-sdhi": (default) compatible with SH7372
-"renesas,r8a7740-sdhi": compatible with R8A7740: certain MMC/SD commands have to
-	wait for the interface to become idle.
+1
Documentation/sound/alsa/HD-Audio-Models.txt
···
  alc269-dmic		Enable ALC269(VA) digital mic workaround
  alc271-dmic		Enable ALC271X digital mic workaround
  inv-dmic		Inverted internal mic workaround
+ headset-mic		Indicates a combined headset (headphone+mic) jack
  lenovo-dock		Enables docking station I/O for some Lenovos
  dell-headset-multi	Headset jack, which can also be used as mic-in
  dell-headset-dock	Headset jack (without mic-in), and also dock I/O
+40 -14
MAINTAINERS
···

ACPI
M:	Len Brown <lenb@kernel.org>
-M:	Rafael J. Wysocki <rjw@sisk.pl>
+M:	Rafael J. Wysocki <rjw@rjwysocki.net>
L:	linux-acpi@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
-Q:	http://patchwork.kernel.org/project/linux-acpi/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux
+W:	https://01.org/linux-acpi
+Q:	https://patchwork.kernel.org/project/linux-acpi/list/
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
S:	Supported
F:	drivers/acpi/
F:	drivers/pnp/pnpacpi/
···
ACPI FAN DRIVER
M:	Zhang Rui <rui.zhang@intel.com>
L:	linux-acpi@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
+W:	https://01.org/linux-acpi
S:	Supported
F:	drivers/acpi/fan.c

ACPI THERMAL DRIVER
M:	Zhang Rui <rui.zhang@intel.com>
L:	linux-acpi@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
+W:	https://01.org/linux-acpi
S:	Supported
F:	drivers/acpi/*thermal*

ACPI VIDEO DRIVER
M:	Zhang Rui <rui.zhang@intel.com>
L:	linux-acpi@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
+W:	https://01.org/linux-acpi
S:	Supported
F:	drivers/acpi/video.c
···
F:	arch/arm/mach-gemini/

ARM/CSR SIRFPRIMA2 MACHINE SUPPORT
-M:	Barry Song <baohua.song@csr.com>
+M:	Barry Song <baohua@kernel.org>
L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
T:	git git://git.kernel.org/pub/scm/linux/kernel/git/baohua/linux.git
S:	Maintained
F:	arch/arm/mach-prima2/
+F:	drivers/clk/clk-prima2.c
+F:	drivers/clocksource/timer-prima2.c
+F:	drivers/clocksource/timer-marco.c
F:	drivers/dma/sirf-dma.c
F:	drivers/i2c/busses/i2c-sirf.c
+F:	drivers/input/misc/sirfsoc-onkey.c
+F:	drivers/irqchip/irq-sirfsoc.c
F:	drivers/mmc/host/sdhci-sirf.c
F:	drivers/pinctrl/sirf/
+F:	drivers/rtc/rtc-sirfsoc.c
F:	drivers/spi/spi-sirf.c

ARM/EBSA110 MACHINE SUPPORT
···
F:	drivers/net/ethernet/ti/cpmac.c

CPU FREQUENCY DRIVERS
-M:	Rafael J. Wysocki <rjw@sisk.pl>
+M:	Rafael J. Wysocki <rjw@rjwysocki.net>
M:	Viresh Kumar <viresh.kumar@linaro.org>
L:	cpufreq@vger.kernel.org
L:	linux-pm@vger.kernel.org
···
F:	drivers/cpuidle/cpuidle-big_little.c

CPUIDLE DRIVERS
-M:	Rafael J. Wysocki <rjw@sisk.pl>
+M:	Rafael J. Wysocki <rjw@rjwysocki.net>
M:	Daniel Lezcano <daniel.lezcano@linaro.org>
L:	linux-pm@vger.kernel.org
S:	Maintained
···

FREEZER
M:	Pavel Machek <pavel@ucw.cz>
-M:	"Rafael J. Wysocki" <rjw@sisk.pl>
+M:	"Rafael J. Wysocki" <rjw@rjwysocki.net>
L:	linux-pm@vger.kernel.org
S:	Supported
F:	Documentation/power/freezing-of-tasks.txt
···
L:	linux-scsi@vger.kernel.org
S:	Odd Fixes (e.g., new signatures)
F:	drivers/scsi/fdomain.*
+
+GCOV BASED KERNEL PROFILING
+M:	Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
+S:	Maintained
+F:	kernel/gcov/
+F:	Documentation/gcov.txt

GDT SCSI DISK ARRAY CONTROLLER DRIVER
M:	Achim Leubner <achim_leubner@adaptec.com>
···

HIBERNATION (aka Software Suspend, aka swsusp)
M:	Pavel Machek <pavel@ucw.cz>
-M:	"Rafael J. Wysocki" <rjw@sisk.pl>
+M:	"Rafael J. Wysocki" <rjw@rjwysocki.net>
L:	linux-pm@vger.kernel.org
S:	Supported
F:	arch/x86/power/
···
INTEL MENLOW THERMAL DRIVER
M:	Sujith Thomas <sujith.thomas@intel.com>
L:	platform-driver-x86@vger.kernel.org
-W:	http://www.lesswatts.org/projects/acpi/
+W:	https://01.org/linux-acpi
S:	Supported
F:	drivers/platform/x86/intel_menlow.c
···
L:	linux-serial@vger.kernel.org
S:	Maintained
F:	drivers/tty/serial/ioc3_serial.c
+
+IOMMU DRIVERS
+M:	Joerg Roedel <joro@8bytes.org>
+L:	iommu@lists.linux-foundation.org
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
+S:	Maintained
+F:	drivers/iommu/

IP MASQUERADING
M:	Juanjo Ciarlante <jjciarla@raiz.uncu.edu.ar>
···
F:	sound/soc/
F:	include/sound/soc*

+SOUND - DMAENGINE HELPERS
+M:	Lars-Peter Clausen <lars@metafoo.de>
+S:	Supported
+F:	include/sound/dmaengine_pcm.h
+F:	sound/core/pcm_dmaengine.c
+F:	sound/soc/soc-generic-dmaengine-pcm.c
+
SPARC + UltraSPARC (sparc/sparc64)
M:	"David S. Miller" <davem@davemloft.net>
L:	sparclinux@vger.kernel.org
···
SUSPEND TO RAM
M:	Len Brown <len.brown@intel.com>
M:	Pavel Machek <pavel@ucw.cz>
-M:	"Rafael J. Wysocki" <rjw@sisk.pl>
+M:	"Rafael J. Wysocki" <rjw@rjwysocki.net>
L:	linux-pm@vger.kernel.org
S:	Supported
F:	Documentation/power/
+1 -1
Makefile
···
VERSION = 3
PATCHLEVEL = 12
SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc6
NAME = One Giant Leap for Frogkind

# *DOCUMENTATION*
+1 -1
arch/arc/kernel/ptrace.c
···
	REG_IGNORE_ONE(pad2);
	REG_IN_CHUNK(callee, efa, cregs);	/* callee_regs[r25..r13] */
	REG_IGNORE_ONE(efa);			/* efa update invalid */
-	REG_IN_ONE(stop_pc, &ptregs->ret);	/* stop_pc: PC update */
+	REG_IGNORE_ONE(stop_pc);		/* PC updated via @ret */

	return ret;
}
+13 -12
arch/arc/kernel/signal.c
···
{
	struct rt_sigframe __user *sf;
	unsigned int magic;
-	int err;
	struct pt_regs *regs = current_pt_regs();

	/* Always make any pending restarted system calls return -EINTR */
···
	if (!access_ok(VERIFY_READ, sf, sizeof(*sf)))
		goto badframe;

-	err = restore_usr_regs(regs, sf);
-	err |= __get_user(magic, &sf->sigret_magic);
-	if (err)
+	if (__get_user(magic, &sf->sigret_magic))
		goto badframe;

	if (unlikely(is_do_ss_needed(magic)))
		if (restore_altstack(&sf->uc.uc_stack))
			goto badframe;
+
+	if (restore_usr_regs(regs, sf))
+		goto badframe;

	/* Don't restart from sigreturn */
	syscall_wont_restart(regs);
···
		return 1;

	/*
+	 * w/o SA_SIGINFO, struct ucontext is partially populated (only
+	 * uc_mcontext/uc_sigmask) for kernel's normal user state preservation
+	 * during signal handler execution. This works for SA_SIGINFO as well
+	 * although the semantics are now overloaded (the same reg state can be
+	 * inspected by userland: but are they allowed to fiddle with it ?
+	 */
+	err |= stash_usr_regs(sf, regs, set);
+
+	/*
	 * SA_SIGINFO requires 3 args to signal handler:
	 *  #1: sig-no (common to any handler)
	 *  #2: struct siginfo
···
		magic = MAGIC_SIGALTSTK;
	}

-	/*
-	 * w/o SA_SIGINFO, struct ucontext is partially populated (only
-	 * uc_mcontext/uc_sigmask) for kernel's normal user state preservation
-	 * during signal handler execution. This works for SA_SIGINFO as well
-	 * although the semantics are now overloaded (the same reg state can be
-	 * inspected by userland: but are they allowed to fiddle with it ?
-	 */
-	err |= stash_usr_regs(sf, regs, set);
	err |= __put_user(magic, &sf->sigret_magic);
	if (err)
		return err;
···
		     <1 14 0xf08>,
		     <1 11 0xf08>,
		     <1 10 0xf08>;
+		/* Unfortunately we need this since some versions of U-Boot
+		 * on Exynos don't set the CNTFRQ register, so we need the
+		 * value from DT.
+		 */
+		clock-frequency = <24000000>;
	};

	mct@101C0000 {
···
	};

	sdhi0: sdhi@ee100000 {
-		compatible = "renesas,r8a7740-sdhi";
+		compatible = "renesas,sdhi-r8a7740";
		reg = <0xee100000 0x100>;
		interrupt-parent = <&gic>;
		interrupts = <0 83 4
···

	/* SDHI1 and SDHI2 have no CD pins, no need for CD IRQ */
	sdhi1: sdhi@ee120000 {
-		compatible = "renesas,r8a7740-sdhi";
+		compatible = "renesas,sdhi-r8a7740";
		reg = <0xee120000 0x100>;
		interrupt-parent = <&gic>;
		interrupts = <0 88 4
···
	};

	sdhi2: sdhi@ee140000 {
-		compatible = "renesas,r8a7740-sdhi";
+		compatible = "renesas,sdhi-r8a7740";
		reg = <0xee140000 0x100>;
		interrupt-parent = <&gic>;
		interrupts = <0 104 4
+14
arch/arm/boot/install.sh
···
# $4 - default install path (blank if root directory)
#

+verify () {
+	if [ ! -f "$1" ]; then
+		echo ""                                                   1>&2
+		echo " *** Missing file: $1"                              1>&2
+		echo ' *** You need to run "make" before "make install".' 1>&2
+		echo ""                                                   1>&2
+		exit 1
+	fi
+}
+
+# Make sure the files actually exist
+verify "$2"
+verify "$3"
+
# User may have a custom install script
if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
+31 -7
arch/arm/common/edma.c
···
	.ccnt = 1,
};

+static const struct of_device_id edma_of_ids[] = {
+	{ .compatible = "ti,edma3", },
+	{}
+};
+
/*****************************************************************************/

static void map_dmach_queue(unsigned ctlr, unsigned ch_no,
···
static int prepare_unused_channel_list(struct device *dev, void *data)
{
	struct platform_device *pdev = to_platform_device(dev);
-	int i, ctlr;
+	int i, count, ctlr;
+	struct of_phandle_args  dma_spec;

+	if (dev->of_node) {
+		count = of_property_count_strings(dev->of_node, "dma-names");
+		if (count < 0)
+			return 0;
+		for (i = 0; i < count; i++) {
+			if (of_parse_phandle_with_args(dev->of_node, "dmas",
+						       "#dma-cells", i,
+						       &dma_spec))
+				continue;
+
+			if (!of_match_node(edma_of_ids, dma_spec.np)) {
+				of_node_put(dma_spec.np);
+				continue;
+			}
+
+			clear_bit(EDMA_CHAN_SLOT(dma_spec.args[0]),
+				  edma_cc[0]->edma_unused);
+			of_node_put(dma_spec.np);
+		}
+		return 0;
+	}
+
+	/* For non-OF case */
	for (i = 0; i < pdev->num_resources; i++) {
		if ((pdev->resource[i].flags & IORESOURCE_DMA) &&
				(int)pdev->resource[i].start >= 0) {
			ctlr = EDMA_CTLR(pdev->resource[i].start);
			clear_bit(EDMA_CHAN_SLOT(pdev->resource[i].start),
-					edma_cc[ctlr]->edma_unused);
+				  edma_cc[ctlr]->edma_unused);
		}
	}
···

	return 0;
}
-
-static const struct of_device_id edma_of_ids[] = {
-	{ .compatible = "ti,edma3", },
-	{}
-};

static struct platform_driver edma_driver = {
	.driver = {
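The OF branch added above walks a device's DMA specifiers and clears the matching bits in the controller's "unused channels" bitmap, so channels named in the device tree are no longer handed out as free. A minimal user-space sketch of that bitmap bookkeeping, using hypothetical names (`claim_channels`, `MAX_CHANNELS`) in place of the kernel's `clear_bit()`/`edma_unused`:

```c
#include <assert.h>
#include <limits.h>
#include <string.h>

#define MAX_CHANNELS 64
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
#define BITMAP_WORDS ((MAX_CHANNELS + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* Mark the channels claimed by one device as "used" by clearing
 * their bits in the unused-channel bitmap (analogue of clear_bit()
 * on edma_unused in the hunk above). Specifiers that fall outside
 * our channel range are skipped, like non-matching DMA phandles. */
static void claim_channels(unsigned long *unused, const int *chans, int count)
{
	for (int i = 0; i < count; i++) {
		int ch = chans[i];

		if (ch < 0 || ch >= MAX_CHANNELS)
			continue;	/* not one of ours */
		unused[ch / BITS_PER_LONG] &= ~(1UL << (ch % BITS_PER_LONG));
	}
}

static int is_unused(const unsigned long *unused, int ch)
{
	return (unused[ch / BITS_PER_LONG] >> (ch % BITS_PER_LONG)) & 1UL;
}
```

After `claim_channels()` runs for every device, whatever bits remain set are the channels the allocator may still give out, which is exactly the role `edma_unused` plays.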
+4 -2
arch/arm/common/mcpm_entry.c
···
{
	phys_reset_t phys_reset;

-	BUG_ON(!platform_ops);
+	if (WARN_ON_ONCE(!platform_ops || !platform_ops->power_down))
+		return;
	BUG_ON(!irqs_disabled());

	/*
···
{
	phys_reset_t phys_reset;

-	BUG_ON(!platform_ops);
+	if (WARN_ON_ONCE(!platform_ops || !platform_ops->suspend))
+		return;
	BUG_ON(!irqs_disabled());

	/* Very similar to mcpm_cpu_power_down() */
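The hunk above downgrades a hard `BUG_ON()` on a missing ops table to a warn-once-and-return, so a caller survives when no platform backend was ever registered. A user-space sketch of the same pattern, under assumed names (`power_ops`, `cpu_power_down`) rather than the real mcpm API:

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical stand-in for the mcpm platform_ops table. */
struct power_ops {
	void (*power_down)(void);
};

static const struct power_ops *platform_ops;	/* NULL until registered */

/* Warn once and return an error instead of aborting, mirroring the
 * BUG_ON() -> WARN_ON_ONCE() change in the hunk above. */
static int cpu_power_down(void)
{
	static int warned;

	if (!platform_ops || !platform_ops->power_down) {
		if (!warned) {
			fprintf(stderr, "power_down: no platform ops registered\n");
			warned = 1;
		}
		return -1;	/* caller can take corrective action */
	}
	platform_ops->power_down();
	return 0;
}

static void real_power_down(void)
{
	puts("powering down");
}

static void register_ops(const struct power_ops *ops)
{
	platform_ops = ops;
}
```

The corresponding mcpm.h comment change (further down in this series) documents the new contract: the call may now return, and the caller "should take appropriate action".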
+4 -1
arch/arm/common/sharpsl_param.c
···
#include <linux/module.h>
#include <linux/string.h>
#include <asm/mach/sharpsl_param.h>
+#include <asm/memory.h>

/*
 * Certain hardware parameters determined at the time of device manufacture,
···
 */
#ifdef CONFIG_ARCH_SA1100
#define PARAM_BASE	0xe8ffc000
+#define param_start(x)	(void *)(x)
#else
#define PARAM_BASE	0xa0000a00
+#define param_start(x)	__va(x)
#endif
#define MAGIC_CHG(a,b,c,d) ( ( d << 24 ) | ( c << 16 )  | ( b << 8 ) | a )

···

void sharpsl_save_param(void)
{
-	memcpy(&sharpsl_param, (void *)PARAM_BASE, sizeof(struct sharpsl_param_info));
+	memcpy(&sharpsl_param, param_start(PARAM_BASE), sizeof(struct sharpsl_param_info));

	if (sharpsl_param.comadj_keyword != COMADJ_MAGIC)
		sharpsl_param.comadj=-1;
···
 *
 * This must be called with interrupts disabled.
 *
- * This does not return.  Re-entry in the kernel is expected via
- * mcpm_entry_point.
+ * On success this does not return.  Re-entry in the kernel is expected
+ * via mcpm_entry_point.
+ *
+ * This will return if mcpm_platform_register() has not been called
+ * previously in which case the caller should take appropriate action.
 */
void mcpm_cpu_power_down(void);
···
 *
 * This must be called with interrupts disabled.
 *
- * This does not return.  Re-entry in the kernel is expected via
- * mcpm_entry_point.
+ * On success this does not return.  Re-entry in the kernel is expected
+ * via mcpm_entry_point.
+ *
+ * This will return if mcpm_platform_register() has not been called
+ * previously in which case the caller should take appropriate action.
 */
void mcpm_cpu_suspend(u64 expected_residency);
+6
arch/arm/include/asm/syscall.h
···
					 unsigned int i, unsigned int n,
					 unsigned long *args)
{
+	if (n == 0)
+		return;
+
	if (i + n > SYSCALL_MAX_ARGS) {
		unsigned long *args_bad = args + SYSCALL_MAX_ARGS - i;
		unsigned int n_bad = n + i - SYSCALL_MAX_ARGS;
···
					 unsigned int i, unsigned int n,
					 const unsigned long *args)
{
+	if (n == 0)
+		return;
+
	if (i + n > SYSCALL_MAX_ARGS) {
		pr_warning("%s called with max args %d, handling only %d\n",
			   __func__, i + n, SYSCALL_MAX_ARGS);
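The `n == 0` early return added here keeps the clamping arithmetic below from running on an empty range, where `args + SYSCALL_MAX_ARGS - i` can point outside the destination buffer. A rough user-space analogue, with `get_args()` and `MAX_ARGS` as hypothetical stand-ins for the kernel's `syscall_get_arguments()`/`SYSCALL_MAX_ARGS`:

```c
#include <assert.h>
#include <string.h>

#define MAX_ARGS 7	/* stand-in for SYSCALL_MAX_ARGS */

/* Copy n argument slots starting at index i; any portion that would
 * run past MAX_ARGS is zero-filled instead, as in the kernel helper. */
static void get_args(const unsigned long *regs, unsigned int i,
		     unsigned int n, unsigned long *args)
{
	if (n == 0)
		return;	/* empty request: skip the pointer math below */

	if (i + n > MAX_ARGS) {
		/* Without the guard above, an n == 0 call with i > MAX_ARGS
		 * would reach this block and memset() before args[0]. */
		unsigned long *args_bad = args + MAX_ARGS - i;
		unsigned int n_bad = n + i - MAX_ARGS;

		memset(args_bad, 0, n_bad * sizeof(*args));
		n = MAX_ARGS - i;
	}
	memcpy(args, &regs[i], n * sizeof(*args));
}
```

Requesting a range that straddles the end yields the valid entries followed by zeros, and an empty request is a no-op regardless of `i`.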
+20 -1
arch/arm/kernel/head.S
···
	mrc	p15, 0, r0, c0, c0, 5	@ read MPIDR
	and	r0, r0, #0xc0000000	@ multiprocessing extensions and
	teq	r0, #0x80000000		@ not part of a uniprocessor system?
-	moveq	pc, lr			@ yes, assume SMP
+	bne	__fixup_smp_on_up	@ no, assume UP
+
+	@ Core indicates it is SMP. Check for Aegis SOC where a single
+	@ Cortex-A9 CPU is present but SMP operations fault.
+	mov	r4, #0x41000000
+	orr	r4, r4, #0x0000c000
+	orr	r4, r4, #0x00000090
+	teq	r3, r4			@ Check for ARM Cortex-A9
+	movne	pc, lr			@ Not ARM Cortex-A9,
+
+	@ If a future SoC *does* use 0x0 as the PERIPH_BASE, then the
+	@ below address check will need to be #ifdef'd or equivalent
+	@ for the Aegis platform.
+	mrc	p15, 4, r0, c15, c0	@ get SCU base address
+	teq	r0, #0x0		@ '0' on actual UP A9 hardware
+	beq	__fixup_smp_on_up	@ So its an A9 UP
+	ldr	r0, [r0, #4]		@ read SCU Config
+	and	r0, r0, #0x3		@ number of CPUs
+	teq	r0, #0x0		@ is 1?
+	movne	pc, lr

__fixup_smp_on_up:
	adr	r0, 1f
···
/* Simple oneliner include to the PCIv3 early init */
+#ifdef CONFIG_PCI
extern int pci_v3_early_init(void);
+#else
+static inline int pci_v3_early_init(void)
+{
+	return 0;
+}
+#endif
···
 #include <linux/pinctrl/machine.h>
 #include <linux/platform_data/gpio-rcar.h>
 #include <linux/platform_device.h>
+#include <linux/phy.h>
 #include <linux/regulator/fixed.h>
 #include <linux/regulator/machine.h>
 #include <linux/sh_eth.h>
···
 			      &ether_pdata, sizeof(ether_pdata));
 }

+/*
+ * Ether LEDs on the Lager board are named LINK and ACTIVE which corresponds
+ * to non-default 01 setting of the Micrel KSZ8041 PHY control register 1 bits
+ * 14-15. We have to set them back to 01 from the default 00 value each time
+ * the PHY is reset. It's also important because the PHY's LED0 signal is
+ * connected to SoC's ETH_LINK signal and in the PHY's default mode it will
+ * bounce on and off after each packet, which we apparently want to avoid.
+ */
+static int lager_ksz8041_fixup(struct phy_device *phydev)
+{
+	u16 phyctrl1 = phy_read(phydev, 0x1e);
+
+	phyctrl1 &= ~0xc000;
+	phyctrl1 |= 0x4000;
+	return phy_write(phydev, 0x1e, phyctrl1);
+}
+
+static void __init lager_init(void)
+{
+	lager_add_standard_devices();
+
+	phy_register_fixup_for_id("r8a7790-ether-ff:01", lager_ksz8041_fixup);
+}
+
 static const char *lager_boards_compat_dt[] __initdata = {
 	"renesas,lager",
 	NULL,
···
 DT_MACHINE_START(LAGER_DT, "lager")
 	.init_early	= r8a7790_init_delay,
 	.init_time	= r8a7790_timer_init,
-	.init_machine	= lager_add_standard_devices,
+	.init_machine	= lager_init,
 	.dt_compat	= lager_boards_compat_dt,
 MACHINE_END
+10-1
arch/arm/mach-vexpress/tc2_pm.c
···
 	} else
 		BUG();

+	/*
+	 * If the CPU is committed to power down, make sure
+	 * the power controller will be in charge of waking it
+	 * up upon IRQ, ie IRQ lines are cut from GIC CPU IF
+	 * to the CPU by disabling the GIC CPU IF to prevent wfi
+	 * from completing execution behind power controller back
+	 */
+	if (!skip_wfi)
+		gic_cpu_if_down();
+
 	if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
 		arch_spin_unlock(&tc2_pm_lock);
···
 	cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
 	cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
 	ve_spc_set_resume_addr(cluster, cpu, virt_to_phys(mcpm_entry_point));
-	gic_cpu_if_down();
 	tc2_pm_down(residency);
 }
+28-15
arch/arm/mm/dma-mapping.c
···
 			break;

 		len = (j - i) << PAGE_SHIFT;
-		ret = iommu_map(mapping->domain, iova, phys, len, 0);
+		ret = iommu_map(mapping->domain, iova, phys, len,
+				IOMMU_READ|IOMMU_WRITE);
 		if (ret < 0)
 			goto fail;
 		iova += len;
···
 			  GFP_KERNEL);
 }

+static int __dma_direction_to_prot(enum dma_data_direction dir)
+{
+	int prot;
+
+	switch (dir) {
+	case DMA_BIDIRECTIONAL:
+		prot = IOMMU_READ | IOMMU_WRITE;
+		break;
+	case DMA_TO_DEVICE:
+		prot = IOMMU_READ;
+		break;
+	case DMA_FROM_DEVICE:
+		prot = IOMMU_WRITE;
+		break;
+	default:
+		prot = 0;
+	}
+
+	return prot;
+}
+
 /*
  * Map a part of the scatter-gather list into contiguous io address space
  */
···
 	int ret = 0;
 	unsigned int count;
 	struct scatterlist *s;
+	int prot;

 	size = PAGE_ALIGN(size);
 	*handle = DMA_ERROR_CODE;
···
 		    !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
 			__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);

-		ret = iommu_map(mapping->domain, iova, phys, len, 0);
+		prot = __dma_direction_to_prot(dir);
+
+		ret = iommu_map(mapping->domain, iova, phys, len, prot);
 		if (ret < 0)
 			goto fail;
 		count += len >> PAGE_SHIFT;
···
 	if (dma_addr == DMA_ERROR_CODE)
 		return dma_addr;

-	switch (dir) {
-	case DMA_BIDIRECTIONAL:
-		prot = IOMMU_READ | IOMMU_WRITE;
-		break;
-	case DMA_TO_DEVICE:
-		prot = IOMMU_READ;
-		break;
-	case DMA_FROM_DEVICE:
-		prot = IOMMU_WRITE;
-		break;
-	default:
-		prot = 0;
-	}
+	prot = __dma_direction_to_prot(dir);

 	ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len,
 			prot);
 	if (ret < 0)
-3
arch/arm/mm/init.c
···
 #include <linux/nodemask.h>
 #include <linux/initrd.h>
 #include <linux/of_fdt.h>
-#include <linux/of_reserved_mem.h>
 #include <linux/highmem.h>
 #include <linux/gfp.h>
 #include <linux/memblock.h>
···
 	/* reserve any platform specific memblock areas */
 	if (mdesc->reserve)
 		mdesc->reserve();
-
-	early_init_dt_scan_reserved_mem();

 	/*
 	 * reserve memory for DMA contigouos allocations,
-7
arch/arm64/Kconfig.debug
···
 	bool
 	default y

-config DEBUG_STACK_USAGE
-	bool "Enable stack utilization instrumentation"
-	depends on DEBUG_KERNEL
-	help
-	  Enables the display of the minimum amount of free stack which each
-	  task has ever had available in the sysrq-T output.
-
 config EARLY_PRINTK
 	bool "Early printk support"
 	default y
+4-1
arch/arm64/configs/defconfig
···
 # CONFIG_WIRELESS is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEVTMPFS=y
-# CONFIG_BLK_DEV is not set
+CONFIG_BLK_DEV=y
 CONFIG_SCSI=y
 # CONFIG_SCSI_PROC_FS is not set
 CONFIG_BLK_DEV_SD=y
···
 # CONFIG_IOMMU_SUPPORT is not set
 CONFIG_EXT2_FS=y
 CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
 # CONFIG_EXT3_FS_XATTR is not set
 CONFIG_FUSE_FS=y
···
 CONFIG_DEBUG_INFO=y
 # CONFIG_FTRACE is not set
 CONFIG_ATOMIC64_SELFTEST=y
+CONFIG_VIRTIO_MMIO=y
+CONFIG_VIRTIO_BLK=y
···
 CONFIG_LLC2=m
 CONFIG_NET_PKTGEN=m
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
 # CONFIG_STANDALONE is not set
 # CONFIG_PREVENT_FIRMWARE_BUILD is not set
 CONFIG_PARPORT=y
+2
arch/parisc/configs/a500_defconfig
···
 CONFIG_LLC2=m
 CONFIG_NET_PKTGEN=m
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
 # CONFIG_STANDALONE is not set
 # CONFIG_PREVENT_FIRMWARE_BUILD is not set
 CONFIG_BLK_DEV_UMEM=m
+3
arch/parisc/configs/b180_defconfig
···
 CONFIG_IKCONFIG_PROC=y
 CONFIG_LOG_BUF_SHIFT=16
 CONFIG_SYSFS_DEPRECATED_V2=y
+CONFIG_BLK_DEV_INITRD=y
 CONFIG_SLAB=y
 CONFIG_MODULES=y
 CONFIG_MODVERSIONS=y
···
 # CONFIG_INET_LRO is not set
 CONFIG_IPV6=y
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
 # CONFIG_PREVENT_FIRMWARE_BUILD is not set
 CONFIG_PARPORT=y
 CONFIG_PARPORT_PC=y
+3
arch/parisc/configs/c3000_defconfig
···
 CONFIG_IKCONFIG_PROC=y
 CONFIG_LOG_BUF_SHIFT=16
 CONFIG_SYSFS_DEPRECATED_V2=y
+CONFIG_BLK_DEV_INITRD=y
 # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
 CONFIG_EXPERT=y
 CONFIG_KALLSYMS_ALL=y
···
 CONFIG_IP_NF_QUEUE=m
 CONFIG_NET_PKTGEN=m
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
 # CONFIG_STANDALONE is not set
 # CONFIG_PREVENT_FIRMWARE_BUILD is not set
 CONFIG_BLK_DEV_UMEM=m
+2
arch/parisc/configs/c8000_defconfig
···
 CONFIG_LLC2=m
 CONFIG_DNS_RESOLVER=y
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
 # CONFIG_STANDALONE is not set
 CONFIG_PARPORT=y
 CONFIG_PARPORT_PC=y
+2
arch/parisc/configs/default_defconfig
···
 CONFIG_INET6_IPCOMP=y
 CONFIG_LLC2=m
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
 # CONFIG_STANDALONE is not set
 # CONFIG_PREVENT_FIRMWARE_BUILD is not set
 CONFIG_PARPORT=y
+1-1
arch/parisc/include/asm/traps.h
···

 /* traps.c */
 void parisc_terminate(char *msg, struct pt_regs *regs,
-		int code, unsigned long offset);
+		int code, unsigned long offset) __noreturn __cold;

 /* mm/fault.c */
 void do_page_fault(struct pt_regs *regs, unsigned long code,
···
 	do_exit(SIGSEGV);
 }

-int syscall_ipi(int (*syscall) (struct pt_regs *), struct pt_regs *regs)
-{
-	return syscall(regs);
-}
-
 /* gdb uses break 4,8 */
 #define GDB_BREAK_INSN 0x10004
 static void handle_gdb_break(struct pt_regs *regs, int wot)
···
 	else {

 		/*
-		 * The kernel should never fault on its own address space.
+		 * The kernel should never fault on its own address space,
+		 * unless pagefault_disable() was called before.
 		 */

-		if (fault_space == 0)
+		if (fault_space == 0 && !in_atomic())
 		{
 			pdc_chassis_send_status(PDC_CHASSIS_DIRECT_PANIC);
 			parisc_terminate("Kernel Fault", regs, code, fault_address);
-
 		}
 	}
+14-1
arch/parisc/lib/memcpy.c
···
 #ifdef __KERNEL__
 #include <linux/module.h>
 #include <linux/compiler.h>
-#include <asm/uaccess.h>
+#include <linux/uaccess.h>
 #define s_space "%%sr1"
 #define d_space "%%sr2"
 #else
···
 EXPORT_SYMBOL(copy_from_user);
 EXPORT_SYMBOL(copy_in_user);
 EXPORT_SYMBOL(memcpy);
+
+long probe_kernel_read(void *dst, const void *src, size_t size)
+{
+	unsigned long addr = (unsigned long)src;
+
+	if (size < 0 || addr < PAGE_SIZE)
+		return -EFAULT;
+
+	/* check for I/O space F_EXTEND(0xfff00000) access as well? */
+
+	return __probe_kernel_read(dst, src, size);
+}
+
 #endif
+10-5
arch/parisc/mm/fault.c
···
 	      unsigned long address)
 {
 	struct vm_area_struct *vma, *prev_vma;
-	struct task_struct *tsk = current;
-	struct mm_struct *mm = tsk->mm;
+	struct task_struct *tsk;
+	struct mm_struct *mm;
 	unsigned long acc_type;
 	int fault;
-	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
+	unsigned int flags;

-	if (in_atomic() || !mm)
+	if (in_atomic())
 		goto no_context;

+	tsk = current;
+	mm = tsk->mm;
+	if (!mm)
+		goto no_context;
+
+	flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
 	if (user_mode(regs))
 		flags |= FAULT_FLAG_USER;

 	acc_type = parisc_acctyp(code, regs->iir);
-
 	if (acc_type & VM_WRITE)
 		flags |= FAULT_FLAG_WRITE;
 retry:
···
 	unsigned long hva;
 	int pfnmap = 0;
 	int tsize = BOOK3E_PAGESZ_4K;
+	int ret = 0;
+	unsigned long mmu_seq;
+	struct kvm *kvm = vcpu_e500->vcpu.kvm;
+
+	/* used to check for invalidations in progress */
+	mmu_seq = kvm->mmu_notifier_seq;
+	smp_rmb();

 	/*
 	 * Translate guest physical to true physical, acquiring
···
 		gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
 	}

+	spin_lock(&kvm->mmu_lock);
+	if (mmu_notifier_retry(kvm, mmu_seq)) {
+		ret = -EAGAIN;
+		goto out;
+	}
+
 	kvmppc_e500_ref_setup(ref, gtlbe, pfn);

 	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
···
 	/* Clear i-cache for new pages */
 	kvmppc_mmu_flush_icache(pfn);

+out:
+	spin_unlock(&kvm->mmu_lock);
+
 	/* Drop refcount on page, so that mmu notifiers can clear it */
 	kvm_release_pfn_clean(pfn);

-	return 0;
+	return ret;
 }

 /* XXX only map the one-one case, for now use TLB0 */
···

 static inline void pgste_set_pte(pte_t *ptep, pte_t entry)
 {
-	if (!MACHINE_HAS_ESOP && (pte_val(entry) & _PAGE_WRITE)) {
+	if (!MACHINE_HAS_ESOP &&
+	    (pte_val(entry) & _PAGE_PRESENT) &&
+	    (pte_val(entry) & _PAGE_WRITE)) {
 		/*
 		 * Without enhanced suppression-on-protection force
 		 * the dirty bit on for all writable ptes.
+14-14
arch/s390/include/asm/timex.h
···

 typedef unsigned long long cycles_t;

-static inline unsigned long long get_tod_clock(void)
-{
-	unsigned long long clk;
-
-#ifdef CONFIG_HAVE_MARCH_Z9_109_FEATURES
-	asm volatile(".insn s,0xb27c0000,%0" : "=Q" (clk) : : "cc");
-#else
-	asm volatile("stck %0" : "=Q" (clk) : : "cc");
-#endif
-	return clk;
-}
-
 static inline void get_tod_clock_ext(char *clk)
 {
 	asm volatile("stcke %0" : "=Q" (*clk) : : "cc");
 }

-static inline unsigned long long get_tod_clock_xt(void)
+static inline unsigned long long get_tod_clock(void)
 {
 	unsigned char clk[16];
 	get_tod_clock_ext(clk);
 	return *((unsigned long long *)&clk[1]);
+}
+
+static inline unsigned long long get_tod_clock_fast(void)
+{
+#ifdef CONFIG_HAVE_MARCH_Z9_109_FEATURES
+	unsigned long long clk;
+
+	asm volatile("stckf %0" : "=Q" (clk) : : "cc");
+	return clk;
+#else
+	return get_tod_clock();
+#endif
 }

 static inline cycles_t get_cycles(void)
···
  */
 static inline unsigned long long get_tod_clock_monotonic(void)
 {
-	return get_tod_clock_xt() - sched_clock_base_cc;
+	return get_tod_clock() - sched_clock_base_cc;
 }

 /**
···
 	tm	__TI_flags+3(%r12),_TIF_SYSCALL
 	jno	sysc_return
 	lm	%r2,%r7,__PT_R2(%r11)	# load svc arguments
+	l	%r10,__TI_sysc_table(%r12)	# 31 bit system call table
 	xr	%r8,%r8			# svc 0 returns -ENOSYS
 	clc	__PT_INT_CODE+2(2,%r11),BASED(.Lnr_syscalls+2)
 	jnl	sysc_nr_ok		# invalid svc number -> do svc 0
+1
arch/s390/kernel/entry64.S
···
 	tm	__TI_flags+7(%r12),_TIF_SYSCALL
 	jno	sysc_return
 	lmg	%r2,%r7,__PT_R2(%r11)	# load svc arguments
+	lg	%r10,__TI_sysc_table(%r12)	# address of system call table
 	lghi	%r8,0			# svc 0 returns -ENOSYS
 	llgh	%r1,__PT_INT_CODE+2(%r11)	# load new svc number
 	cghi	%r1,NR_syscalls
+5-1
arch/s390/kernel/kprobes.c
···
 	case 0xac:	/* stnsm */
 	case 0xad:	/* stosm */
 		return -EINVAL;
+	case 0xc6:
+		switch (insn[0] & 0x0f) {
+		case 0x00: /* exrl */
+			return -EINVAL;
+		}
 	}
 	switch (insn[0]) {
 	case 0x0101:	/* pr */
···
 		break;
 	case 0xc6:
 		switch (insn[0] & 0x0f) {
-		case 0x00: /* exrl */
 		case 0x02: /* pfdrl */
 		case 0x04: /* cghrl */
 		case 0x05: /* chrl */
+3-3
arch/s390/kvm/interrupt.c
···
 	}

 	if ((!rc) && (vcpu->arch.sie_block->ckc <
-		get_tod_clock() + vcpu->arch.sie_block->epoch)) {
+		get_tod_clock_fast() + vcpu->arch.sie_block->epoch)) {
 		if ((!psw_extint_disabled(vcpu)) &&
 		    (vcpu->arch.sie_block->gcr[0] & 0x800ul))
 			rc = 1;
···
 		goto no_timer;
 	}

-	now = get_tod_clock() + vcpu->arch.sie_block->epoch;
+	now = get_tod_clock_fast() + vcpu->arch.sie_block->epoch;
 	if (vcpu->arch.sie_block->ckc < now) {
 		__unset_cpu_idle(vcpu);
 		return 0;
···
 	}

 	if ((vcpu->arch.sie_block->ckc <
-		get_tod_clock() + vcpu->arch.sie_block->epoch))
+		get_tod_clock_fast() + vcpu->arch.sie_block->epoch))
 		__try_deliver_ckc_interrupt(vcpu);

 	if (atomic_read(&fi->active)) {
+7-7
arch/s390/lib/delay.c
···
 	do {
 		set_clock_comparator(end);
 		vtime_stop_cpu();
-	} while (get_tod_clock() < end);
+	} while (get_tod_clock_fast() < end);
 	lockdep_on();
 	__ctl_load(cr0, 0, 0);
 	__ctl_load(cr6, 6, 6);
···
 {
 	u64 clock_saved, end;

-	end = get_tod_clock() + (usecs << 12);
+	end = get_tod_clock_fast() + (usecs << 12);
 	do {
 		clock_saved = 0;
 		if (end < S390_lowcore.clock_comparator) {
···
 		vtime_stop_cpu();
 		if (clock_saved)
 			local_tick_enable(clock_saved);
-	} while (get_tod_clock() < end);
+	} while (get_tod_clock_fast() < end);
 }

 /*
···
 {
 	u64 end;

-	end = get_tod_clock() + (usecs << 12);
-	while (get_tod_clock() < end)
+	end = get_tod_clock_fast() + (usecs << 12);
+	while (get_tod_clock_fast() < end)
 		cpu_relax();
 }
···

 	nsecs <<= 9;
 	do_div(nsecs, 125);
-	end = get_tod_clock() + nsecs;
+	end = get_tod_clock_fast() + nsecs;
 	if (nsecs & ~0xfffUL)
 		__udelay(nsecs >> 12);
-	while (get_tod_clock() < end)
+	while (get_tod_clock_fast() < end)
 		barrier();
 }
 EXPORT_SYMBOL(__ndelay);
+6-1
arch/sparc/Kconfig
···
 	  Only choose N if you know in advance that you will not need to modify
 	  OpenPROM settings on the running system.

-# Makefile helper
+# Makefile helpers
 config SPARC64_PCI
 	bool
 	default y
 	depends on SPARC64 && PCI
+
+config SPARC64_PCI_MSI
+	bool
+	default y
+	depends on SPARC64_PCI && PCI_MSI

 endmenu
···
  *
  * Atomically sets @v to @i and returns old @v
  */
-static inline u64 atomic64_xchg(atomic64_t *v, u64 n)
+static inline long long atomic64_xchg(atomic64_t *v, long long n)
 {
 	return xchg64(&v->counter, n);
 }
···
  * Atomically checks if @v holds @o and replaces it with @n if so.
  * Returns the old value at @v.
  */
-static inline u64 atomic64_cmpxchg(atomic64_t *v, u64 o, u64 n)
+static inline long long atomic64_cmpxchg(atomic64_t *v, long long o,
+					long long n)
 {
 	return cmpxchg64(&v->counter, o, n);
 }
+15-12
arch/tile/include/asm/atomic_32.h
···
 /* A 64bit atomic type */

 typedef struct {
-	u64 __aligned(8) counter;
+	long long counter;
 } atomic64_t;

 #define ATOMIC64_INIT(val) { (val) }
···
  *
  * Atomically reads the value of @v.
  */
-static inline u64 atomic64_read(const atomic64_t *v)
+static inline long long atomic64_read(const atomic64_t *v)
 {
 	/*
 	 * Requires an atomic op to read both 32-bit parts consistently.
 	 * Casting away const is safe since the atomic support routines
 	 * do not write to memory if the value has not been modified.
 	 */
-	return _atomic64_xchg_add((u64 *)&v->counter, 0);
+	return _atomic64_xchg_add((long long *)&v->counter, 0);
 }

 /**
···
  *
  * Atomically adds @i to @v.
  */
-static inline void atomic64_add(u64 i, atomic64_t *v)
+static inline void atomic64_add(long long i, atomic64_t *v)
 {
 	_atomic64_xchg_add(&v->counter, i);
 }
···
  *
  * Atomically adds @i to @v and returns @i + @v
  */
-static inline u64 atomic64_add_return(u64 i, atomic64_t *v)
+static inline long long atomic64_add_return(long long i, atomic64_t *v)
 {
 	smp_mb();  /* barrier for proper semantics */
 	return _atomic64_xchg_add(&v->counter, i) + i;
···
  * Atomically adds @a to @v, so long as @v was not already @u.
  * Returns non-zero if @v was not @u, and zero otherwise.
  */
-static inline u64 atomic64_add_unless(atomic64_t *v, u64 a, u64 u)
+static inline long long atomic64_add_unless(atomic64_t *v, long long a,
+					long long u)
 {
 	smp_mb();  /* barrier for proper semantics */
 	return _atomic64_xchg_add_unless(&v->counter, a, u) != u;
···
  * atomic64_set() can't be just a raw store, since it would be lost if it
  * fell between the load and store of one of the other atomic ops.
  */
-static inline void atomic64_set(atomic64_t *v, u64 n)
+static inline void atomic64_set(atomic64_t *v, long long n)
 {
 	_atomic64_xchg(&v->counter, n);
 }
···
 extern struct __get_user __atomic_or(volatile int *p, int *lock, int n);
 extern struct __get_user __atomic_andn(volatile int *p, int *lock, int n);
 extern struct __get_user __atomic_xor(volatile int *p, int *lock, int n);
-extern u64 __atomic64_cmpxchg(volatile u64 *p, int *lock, u64 o, u64 n);
-extern u64 __atomic64_xchg(volatile u64 *p, int *lock, u64 n);
-extern u64 __atomic64_xchg_add(volatile u64 *p, int *lock, u64 n);
-extern u64 __atomic64_xchg_add_unless(volatile u64 *p,
-				      int *lock, u64 o, u64 n);
+extern long long __atomic64_cmpxchg(volatile long long *p, int *lock,
+					long long o, long long n);
+extern long long __atomic64_xchg(volatile long long *p, int *lock, long long n);
+extern long long __atomic64_xchg_add(volatile long long *p, int *lock,
+					long long n);
+extern long long __atomic64_xchg_add_unless(volatile long long *p,
+					int *lock, long long o, long long n);

 /* Return failure from the atomic wrappers. */
 struct __get_user __atomic_bad_address(int __user *addr);
+17-11
arch/tile/include/asm/cmpxchg.h
···
 int _atomic_xchg_add(int *v, int i);
 int _atomic_xchg_add_unless(int *v, int a, int u);
 int _atomic_cmpxchg(int *ptr, int o, int n);
-u64 _atomic64_xchg(u64 *v, u64 n);
-u64 _atomic64_xchg_add(u64 *v, u64 i);
-u64 _atomic64_xchg_add_unless(u64 *v, u64 a, u64 u);
-u64 _atomic64_cmpxchg(u64 *v, u64 o, u64 n);
+long long _atomic64_xchg(long long *v, long long n);
+long long _atomic64_xchg_add(long long *v, long long i);
+long long _atomic64_xchg_add_unless(long long *v, long long a, long long u);
+long long _atomic64_cmpxchg(long long *v, long long o, long long n);

 #define xchg(ptr, n) \
 	({ \
···
 		if (sizeof(*(ptr)) != 4) \
 			__cmpxchg_called_with_bad_pointer(); \
 		smp_mb(); \
-		(typeof(*(ptr)))_atomic_cmpxchg((int *)ptr, (int)o, (int)n); \
+		(typeof(*(ptr)))_atomic_cmpxchg((int *)ptr, (int)o, \
+						(int)n); \
 	})

 #define xchg64(ptr, n) \
···
 		if (sizeof(*(ptr)) != 8) \
 			__xchg_called_with_bad_pointer(); \
 		smp_mb(); \
-		(typeof(*(ptr)))_atomic64_xchg((u64 *)(ptr), (u64)(n)); \
+		(typeof(*(ptr)))_atomic64_xchg((long long *)(ptr), \
+						(long long)(n)); \
 	})

 #define cmpxchg64(ptr, o, n) \
···
 		if (sizeof(*(ptr)) != 8) \
 			__cmpxchg_called_with_bad_pointer(); \
 		smp_mb(); \
-		(typeof(*(ptr)))_atomic64_cmpxchg((u64 *)ptr, (u64)o, (u64)n); \
+		(typeof(*(ptr)))_atomic64_cmpxchg((long long *)ptr, \
+						(long long)o, (long long)n); \
 	})

 #else
···
 		switch (sizeof(*(ptr))) { \
 		case 4: \
 			__x = (typeof(__x))(unsigned long) \
-				__insn_exch4((ptr), (u32)(unsigned long)(n)); \
+				__insn_exch4((ptr), \
+					(u32)(unsigned long)(n)); \
 			break; \
 		case 8: \
-			__x = (typeof(__x)) \
+			__x = (typeof(__x)) \
 				__insn_exch((ptr), (unsigned long)(n)); \
 			break; \
 		default: \
···
 		switch (sizeof(*(ptr))) { \
 		case 4: \
 			__x = (typeof(__x))(unsigned long) \
-				__insn_cmpexch4((ptr), (u32)(unsigned long)(n)); \
+				__insn_cmpexch4((ptr), \
+					(u32)(unsigned long)(n)); \
 			break; \
 		case 8: \
-			__x = (typeof(__x))__insn_cmpexch((ptr), (u64)(n)); \
+			__x = (typeof(__x))__insn_cmpexch((ptr), \
+					(long long)(n)); \
 			break; \
 		default: \
 			__cmpxchg_called_with_bad_pointer(); \
+31-3
arch/tile/include/asm/percpu.h
···
 #ifndef _ASM_TILE_PERCPU_H
 #define _ASM_TILE_PERCPU_H

-register unsigned long __my_cpu_offset __asm__("tp");
-#define __my_cpu_offset __my_cpu_offset
-#define set_my_cpu_offset(tp) (__my_cpu_offset = (tp))
+register unsigned long my_cpu_offset_reg asm("tp");
+
+#ifdef CONFIG_PREEMPT
+/*
+ * For full preemption, we can't just use the register variable
+ * directly, since we need barrier() to hazard against it, causing the
+ * compiler to reload anything computed from a previous "tp" value.
+ * But we also don't want to use volatile asm, since we'd like the
+ * compiler to be able to cache the value across multiple percpu reads.
+ * So we use a fake stack read as a hazard against barrier().
+ * The 'U' constraint is like 'm' but disallows postincrement.
+ */
+static inline unsigned long __my_cpu_offset(void)
+{
+	unsigned long tp;
+	register unsigned long *sp asm("sp");
+	asm("move %0, tp" : "=r" (tp) : "U" (*sp));
+	return tp;
+}
+#define __my_cpu_offset __my_cpu_offset()
+#else
+/*
+ * We don't need to hazard against barrier() since "tp" doesn't ever
+ * change with PREEMPT_NONE, and with PREEMPT_VOLUNTARY it only
+ * changes at function call points, at which we are already re-reading
+ * the value of "tp" due to "my_cpu_offset_reg" being a global variable.
+ */
+#define __my_cpu_offset my_cpu_offset_reg
+#endif
+
+#define set_my_cpu_offset(tp) (my_cpu_offset_reg = (tp))

 #include <asm-generic/percpu.h>
···
 #include <linux/mmzone.h>
 #include <linux/dcache.h>
 #include <linux/fs.h>
+#include <linux/string.h>
 #include <asm/backtrace.h>
 #include <asm/page.h>
 #include <asm/ucontext.h>
···
 	}

 	if (vma->vm_file) {
-		char *s;
 		p = d_path(&vma->vm_file->f_path, buf, bufsize);
 		if (IS_ERR(p))
 			p = "?";
-		s = strrchr(p, '/');
-		if (s)
-			p = s+1;
+		name = kbasename(p);
 	} else {
-		p = "anon";
+		name = "anon";
 	}

 	/* Generate a string description of the vma info. */
-	namelen = strlen(p);
+	namelen = strlen(name);
 	remaining = (bufsize - 1) - namelen;
-	memmove(buf, p, namelen);
+	memmove(buf, name, namelen);
 	snprintf(buf + namelen, remaining, "[%lx+%lx] ",
 		 vma->vm_start, vma->vm_end - vma->vm_start);
 }
+4-4
arch/tile/lib/atomic_32.c
···
 EXPORT_SYMBOL(_atomic_xor);


-u64 _atomic64_xchg(u64 *v, u64 n)
+long long _atomic64_xchg(long long *v, long long n)
 {
 	return __atomic64_xchg(v, __atomic_setup(v), n);
 }
 EXPORT_SYMBOL(_atomic64_xchg);

-u64 _atomic64_xchg_add(u64 *v, u64 i)
+long long _atomic64_xchg_add(long long *v, long long i)
 {
 	return __atomic64_xchg_add(v, __atomic_setup(v), i);
 }
 EXPORT_SYMBOL(_atomic64_xchg_add);

-u64 _atomic64_xchg_add_unless(u64 *v, u64 a, u64 u)
+long long _atomic64_xchg_add_unless(long long *v, long long a, long long u)
 {
 	/*
 	 * Note: argument order is switched here since it is easier
···
 }
 EXPORT_SYMBOL(_atomic64_xchg_add_unless);

-u64 _atomic64_cmpxchg(u64 *v, u64 o, u64 n)
+long long _atomic64_cmpxchg(long long *v, long long o, long long n)
 {
 	return __atomic64_cmpxchg(v, __atomic_setup(v), o, n);
 }
+4-3
arch/x86/Kconfig
···

 config X86_UP_APIC
 	bool "Local APIC support on uniprocessors"
-	depends on X86_32 && !SMP && !X86_32_NON_STANDARD
+	depends on X86_32 && !SMP && !X86_32_NON_STANDARD && !PCI_MSI
 	---help---
 	  A local APIC (Advanced Programmable Interrupt Controller) is an
 	  integrated interrupt controller in the CPU. If you have a single-CPU
···

 config X86_LOCAL_APIC
 	def_bool y
-	depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC
+	depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC || PCI_MSI

 config X86_IO_APIC
 	def_bool y
-	depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_IOAPIC
+	depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_IOAPIC || PCI_MSI

 config X86_VISWS_APIC
 	def_bool y
···

 config MICROCODE
 	tristate "CPU microcode loading support"
+	depends on CPU_SUP_AMD || CPU_SUP_INTEL
 	select FW_LOADER
 	---help---
+3-3
arch/x86/include/asm/cpufeature.h
···
 	 * Catch too early usage of this before alternatives
	 * have run.
	 */
-	asm goto("1: jmp %l[t_warn]\n"
+	asm_volatile_goto("1: jmp %l[t_warn]\n"
		"2:\n"
		".section .altinstructions,\"a\"\n"
		" .long 1b - .\n"
···

 #endif

-	asm goto("1: jmp %l[t_no]\n"
+	asm_volatile_goto("1: jmp %l[t_no]\n"
		"2:\n"
		".section .altinstructions,\"a\"\n"
		" .long 1b - .\n"
···
	 * have. Thus, we force the jump to the widest, 4-byte, signed relative
	 * offset even though the last would often fit in less bytes.
	 */
-	asm goto("1: .byte 0xe9\n .long %l[t_dynamic] - 2f\n"
+	asm_volatile_goto("1: .byte 0xe9\n .long %l[t_dynamic] - 2f\n"
		"2:\n"
		".section .altinstructions,\"a\"\n"
		" .long 1b - .\n"			/* src offset */
···
 	if (!(pci_probe & PCI_PROBE_MMCONF) || pci_mmcfg_arch_init_failed)
 		return -ENODEV;

-	if (start > end || !addr)
+	if (start > end)
 		return -EINVAL;

 	mutex_lock(&pci_mmcfg_lock);
···
 			cfg->segment, cfg->start_bus, cfg->end_bus);
 		mutex_unlock(&pci_mmcfg_lock);
 		return -EEXIST;
+	}
+
+	if (!addr) {
+		mutex_unlock(&pci_mmcfg_lock);
+		return -EINVAL;
 	}

 	rc = -EBUSY;
+9
arch/x86/xen/smp.c
···
 	   old memory can be recycled */
 	make_lowmem_page_readwrite(xen_initial_gdt);

+#ifdef CONFIG_X86_32
+	/*
+	 * Xen starts us with XEN_FLAT_RING1_DS, but linux code
+	 * expects __USER_DS
+	 */
+	loadsegment(ds, __USER_DS);
+	loadsegment(es, __USER_DS);
+#endif
+
 	xen_filter_cpu_maps();
 	xen_setup_vcpu_info_placement();
 }
+6-1
block/partitions/efi.c
···
 	 * the disk size.
	 *
	 * Hybrid MBRs do not necessarily comply with this.
+	 *
+	 * Consider a bad value here to be a warning to support dd'ing
+	 * an image from a smaller disk to a larger disk.
	 */
	if (ret == GPT_MBR_PROTECTIVE) {
		sz = le32_to_cpu(mbr->partition_record[part].size_in_lba);
		if (sz != (uint32_t) total_sectors - 1 && sz != 0xFFFFFFFF)
-			ret = 0;
+			pr_debug("GPT: mbr size in lba (%u) different than whole disk (%u).\n",
+				 sz, min_t(uint32_t,
					   total_sectors - 1, 0xFFFFFFFF));
	}
 done:
	return ret;
+4-4
drivers/acpi/Kconfig
···
 	  are configured, ACPI is used.

	  The project home page for the Linux ACPI subsystem is here:
-	  <http://www.lesswatts.org/projects/acpi/>
+	  <https://01.org/linux-acpi>

	  Linux support for ACPI is based on Intel Corporation's ACPI
	  Component Architecture (ACPI CA). For more information on the
···
	default y
	help
	  This driver handles events on the power, sleep, and lid buttons.
-	  A daemon reads /proc/acpi/event and perform user-defined actions
-	  such as shutting down the system. This is necessary for
-	  software-controlled poweroff.
+	  A daemon reads events from input devices or via netlink and
+	  performs user-defined actions such as shutting down the system.
+	  This is necessary for software-controlled poweroff.

	  To compile this driver as a module, choose M here:
	  the module will be called button.
-56
drivers/acpi/device_pm.c
···
 	}
 }
 EXPORT_SYMBOL_GPL(acpi_dev_pm_detach);
-
-/**
- * acpi_dev_pm_add_dependent - Add physical device depending for PM.
- * @handle: Handle of ACPI device node.
- * @depdev: Device depending on that node for PM.
- */
-void acpi_dev_pm_add_dependent(acpi_handle handle, struct device *depdev)
-{
-	struct acpi_device_physical_node *dep;
-	struct acpi_device *adev;
-
-	if (!depdev || acpi_bus_get_device(handle, &adev))
-		return;
-
-	mutex_lock(&adev->physical_node_lock);
-
-	list_for_each_entry(dep, &adev->power_dependent, node)
-		if (dep->dev == depdev)
-			goto out;
-
-	dep = kzalloc(sizeof(*dep), GFP_KERNEL);
-	if (dep) {
-		dep->dev = depdev;
-		list_add_tail(&dep->node, &adev->power_dependent);
-	}
-
- out:
-	mutex_unlock(&adev->physical_node_lock);
-}
-EXPORT_SYMBOL_GPL(acpi_dev_pm_add_dependent);
-
-/**
- * acpi_dev_pm_remove_dependent - Remove physical device depending for PM.
- * @handle: Handle of ACPI device node.
- * @depdev: Device depending on that node for PM.
- */
-void acpi_dev_pm_remove_dependent(acpi_handle handle, struct device *depdev)
-{
-	struct acpi_device_physical_node *dep;
-	struct acpi_device *adev;
-
-	if (!depdev || acpi_bus_get_device(handle, &adev))
-		return;
-
-	mutex_lock(&adev->physical_node_lock);
-
-	list_for_each_entry(dep, &adev->power_dependent, node)
-		if (dep->dev == depdev) {
-			list_del(&dep->node);
-			kfree(dep);
-			break;
-		}
-
-	mutex_unlock(&adev->physical_node_lock);
-}
-EXPORT_SYMBOL_GPL(acpi_dev_pm_remove_dependent);
 #endif /* CONFIG_PM */
···13431343 if (!(hpriv->cap & HOST_CAP_SSS) || ahci_ignore_sss)13441344 host->flags |= ATA_HOST_PARALLEL_SCAN;13451345 else13461346- printk(KERN_INFO "ahci: SSS flag set, parallel bus scan disabled\n");13461346+ dev_info(&pdev->dev, "SSS flag set, parallel bus scan disabled\n");1347134713481348 if (pi.flags & ATA_FLAG_EM)13491349 ahci_reset_em(host);
+1-1
drivers/ata/ahci_platform.c
···184184 if (!(hpriv->cap & HOST_CAP_SSS) || ahci_ignore_sss)185185 host->flags |= ATA_HOST_PARALLEL_SCAN;186186 else187187- printk(KERN_INFO "ahci: SSS flag set, parallel bus scan disabled\n");187187+ dev_info(dev, "SSS flag set, parallel bus scan disabled\n");188188189189 if (pi.flags & ATA_FLAG_EM)190190 ahci_reset_em(host);
+9-1
drivers/ata/libahci.c
···778778 rc = ap->ops->transmit_led_message(ap,779779 emp->led_state,780780 4);781781+ /*782782+ * If busy, give a breather but do not783783+ * release EH ownership by using msleep()784784+ * instead of ata_msleep(). EM Transmit785785+ * bit is busy for the whole host and786786+ * releasing ownership will cause other787787+ * ports to fail the same way.788788+ */781789 if (rc == -EBUSY)782782- ata_msleep(ap, 1);790790+ msleep(1);783791 else784792 break;785793 }
···13221322 * should be retried. To be used from EH.13231323 *13241324 * SCSI midlayer limits the number of retries to scmd->allowed.13251325- * scmd->retries is decremented for commands which get retried13251325+ * scmd->allowed is incremented for commands which get retried13261326 * due to unrelated failures (qc->err_mask is zero).13271327 */13281328void ata_eh_qc_retry(struct ata_queued_cmd *qc)13291329{13301330 struct scsi_cmnd *scmd = qc->scsicmd;13311331- if (!qc->err_mask && scmd->retries)13321332- scmd->retries--;13311331+ if (!qc->err_mask)13321332+ scmd->allowed++;13331333 __ata_eh_qc_complete(qc);13341334}13351335
-3
drivers/ata/libata-scsi.c
···36793679 if (!IS_ERR(sdev)) {36803680 dev->sdev = sdev;36813681 scsi_device_put(sdev);36823682- ata_scsi_acpi_bind(dev);36833682 } else {36843683 dev->sdev = NULL;36853684 }···37653766 struct ata_port *ap = dev->link->ap;37663767 struct scsi_device *sdev;37673768 unsigned long flags;37683768-37693769- ata_scsi_acpi_unbind(dev);3770376937713770 /* Alas, we need to grab scan_mutex to ensure SCSI device37723771 * state doesn't change underneath us and thus
···333333 online_type = ONLINE_KEEP;334334 else if (!strncmp(buf, "offline", min_t(int, count, 7)))335335 online_type = -1;336336- else337337- return -EINVAL;336336+ else {337337+ ret = -EINVAL;338338+ goto err;339339+ }338340339341 switch (online_type) {340342 case ONLINE_KERNEL:···359357 ret = -EINVAL; /* should never happen */360358 }361359360360+err:362361 unlock_device_hotplug();363362364363 if (ret)
+9-3
drivers/bus/mvebu-mbus.c
···700700 phys_addr_t sdramwins_phys_base,701701 size_t sdramwins_size)702702{703703+ struct device_node *np;703704 int win;704705705706 mbus->mbuswins_base = ioremap(mbuswins_phys_base, mbuswins_size);···713712 return -ENOMEM;714713 }715714716716- if (of_find_compatible_node(NULL, NULL, "marvell,coherency-fabric"))715715+ np = of_find_compatible_node(NULL, NULL, "marvell,coherency-fabric");716716+ if (np) {717717 mbus->hw_io_coherency = 1;718718+ of_node_put(np);719719+ }718720719721 for (win = 0; win < mbus->soc->num_wins; win++)720722 mvebu_mbus_disable_window(mbus, win);···865861 int ret;866862867863 /*868868- * These are optional, so we clear them and they'll869869- * be zero if they are missing from the DT.864864+ * These are optional, so we make sure that resource_size(x) will865865+ * return 0.870866 */871867 memset(mem, 0, sizeof(struct resource));868868+ mem->end = -1;872869 memset(io, 0, sizeof(struct resource));870870+ io->end = -1;873871874872 ret = of_property_read_u32_array(np, "pcie-mem-aperture", reg, ARRAY_SIZE(reg));875873 if (!ret) {
+5-6
drivers/char/random.c
···640640 */641641void add_device_randomness(const void *buf, unsigned int size)642642{643643- unsigned long time = get_cycles() ^ jiffies;643643+ unsigned long time = random_get_entropy() ^ jiffies;644644645645 mix_pool_bytes(&input_pool, buf, size, NULL);646646 mix_pool_bytes(&input_pool, &time, sizeof(time), NULL);···677677 goto out;678678679679 sample.jiffies = jiffies;680680- sample.cycles = get_cycles();680680+ sample.cycles = random_get_entropy();681681 sample.num = num;682682 mix_pool_bytes(&input_pool, &sample, sizeof(sample), NULL);683683···744744 struct fast_pool *fast_pool = &__get_cpu_var(irq_randomness);745745 struct pt_regs *regs = get_irq_regs();746746 unsigned long now = jiffies;747747- __u32 input[4], cycles = get_cycles();747747+ __u32 input[4], cycles = random_get_entropy();748748749749 input[0] = cycles ^ jiffies;750750 input[1] = irq;···1459145914601460static u32 random_int_secret[MD5_MESSAGE_BYTES / 4] ____cacheline_aligned;1461146114621462-static int __init random_int_secret_init(void)14621462+int random_int_secret_init(void)14631463{14641464 get_random_bytes(random_int_secret, sizeof(random_int_secret));14651465 return 0;14661466}14671467-late_initcall(random_int_secret_init);1468146714691468/*14701469 * Get a random word for internal kernel use only. Similar to urandom but···1482148314831484 hash = get_cpu_var(get_random_int_hash);1484148514851485- hash[0] += current->pid + jiffies + get_cycles();14861486+ hash[0] += current->pid + jiffies + random_get_entropy();14861487 md5_transform(hash, random_int_secret);14871488 ret = hash[0];14881489 put_cpu_var(get_random_int_hash);
···229229 if (of_property_read_u32(np, "clock-latency", &transition_latency))230230 transition_latency = CPUFREQ_ETERNAL;231231232232- if (cpu_reg) {232232+ if (!IS_ERR(cpu_reg)) {233233 struct opp *opp;234234 unsigned long min_uV, max_uV;235235 int i;
+8-5
drivers/cpufreq/intel_pstate.c
···383383static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate)384384{385385 int max_perf, min_perf;386386+ u64 val;386387387388 intel_pstate_get_min_max(cpu, &min_perf, &max_perf);388389···395394 trace_cpu_frequency(pstate * 100000, cpu->cpu);396395397396 cpu->pstate.current_pstate = pstate;398398- wrmsrl(MSR_IA32_PERF_CTL, pstate << 8);397397+ val = pstate << 8;398398+ if (limits.no_turbo)399399+ val |= (u64)1 << 32;399400401401+ wrmsrl(MSR_IA32_PERF_CTL, val);400402}401403402404static inline void intel_pstate_pstate_increase(struct cpudata *cpu, int steps)···638634639635static int intel_pstate_cpu_init(struct cpufreq_policy *policy)640636{641641- int rc, min_pstate, max_pstate;642637 struct cpudata *cpu;638638+ int rc;643639644640 rc = intel_pstate_init_cpu(policy->cpu);645641 if (rc)···653649 else654650 policy->policy = CPUFREQ_POLICY_POWERSAVE;655651656656- intel_pstate_get_min_max(cpu, &min_pstate, &max_pstate);657657- policy->min = min_pstate * 100000;658658- policy->max = max_pstate * 100000;652652+ policy->min = cpu->pstate.min_pstate * 100000;653653+ policy->max = cpu->pstate.turbo_pstate * 100000;659654660655 /* cpuinfo and default policy values */661656 policy->cpuinfo.min_freq = cpu->pstate.min_pstate * 100000;
+1-1
drivers/cpufreq/s3c64xx-cpufreq.c
···166166 if (freq->frequency == CPUFREQ_ENTRY_INVALID)167167 continue;168168169169- dvfs = &s3c64xx_dvfs_table[freq->index];169169+ dvfs = &s3c64xx_dvfs_table[freq->driver_data];170170 found = 0;171171172172 for (i = 0; i < count; i++) {
+1-1
drivers/cpufreq/spear-cpufreq.c
···113113 unsigned int target_freq, unsigned int relation)114114{115115 struct cpufreq_freqs freqs;116116- unsigned long newfreq;116116+ long newfreq;117117 struct clk *srcclk;118118 int index, ret, mult = 1;119119
+1
drivers/dma/Kconfig
···198198 depends on ARCH_DAVINCI || ARCH_OMAP199199 select DMA_ENGINE200200 select DMA_VIRTUAL_CHANNELS201201+ select TI_PRIV_EDMA201202 default n202203 help203204 Enable support for the TI EDMA controller. This DMA
···29252925 /* Speaker Allocation Data Block */29262926 if (dbl == 3) {29272927 *sadb = kmalloc(dbl, GFP_KERNEL);29282928+ if (!*sadb)29292929+ return -ENOMEM;29282930 memcpy(*sadb, &db[1], dbl);29292931 count = dbl;29302932 break;
-8
drivers/gpu/drm/drm_fb_helper.c
···416416 return;417417418418 /*419419- * fbdev->blank can be called from irq context in case of a panic.420420- * Since we already have our own special panic handler which will421421- * restore the fbdev console mode completely, just bail out early.422422- */423423- if (oops_in_progress)424424- return;425425-426426- /*427419 * For each CRTC in this fb, turn the connectors on/off.428420 */429421 drm_modeset_lock_all(dev);
···12901290 * then we do not take part in VGA arbitration and the12911291 * vga_client_register() fails with -ENODEV.12921292 */12931293- if (!HAS_PCH_SPLIT(dev)) {12941294- ret = vga_client_register(dev->pdev, dev, NULL,12951295- i915_vga_set_decode);12961296- if (ret && ret != -ENODEV)12971297- goto out;12981298- }12931293+ ret = vga_client_register(dev->pdev, dev, NULL, i915_vga_set_decode);12941294+ if (ret && ret != -ENODEV)12951295+ goto out;1299129613001297 intel_register_dsm_handler();13011298···13471350 * tiny window where we will loose hotplug notifactions.13481351 */13491352 intel_fbdev_initial_config(dev);13501350-13511351- /*13521352- * Must do this after fbcon init so that13531353- * vgacon_save_screen() works during the handover.13541354- */13551355- i915_disable_vga_mem(dev);1356135313571354 /* Only enable hotplug handling once the fbdev is fully set up. */13581355 dev_priv->enable_hotplug_processing = true;
···38643864 dev_priv->rps.rpe_delay),38653865 dev_priv->rps.rpe_delay);3866386638673867- INIT_DELAYED_WORK(&dev_priv->rps.vlv_work, vlv_rps_timer_work);38683868-38693867 valleyview_set_rps(dev_priv->dev, dev_priv->rps.rpe_delay);3870386838713869 gen6_enable_rps_interrupts(dev);···47594761 * gating for the panel power sequencer or it will fail to47604762 * start up when no ports are active.47614763 */47624762- I915_WRITE(SOUTH_DSPCLK_GATE_D, PCH_DPLSUNIT_CLOCK_GATE_DISABLE);47644764+ I915_WRITE(SOUTH_DSPCLK_GATE_D, PCH_DPLSUNIT_CLOCK_GATE_DISABLE |47654765+ PCH_DPLUNIT_CLOCK_GATE_DISABLE |47664766+ PCH_CPUNIT_CLOCK_GATE_DISABLE);47634767 I915_WRITE(SOUTH_CHICKEN2, I915_READ(SOUTH_CHICKEN2) |47644768 DPLS_EDP_PPS_FIX_DIS);47654769 /* The below fixes the weird display corruption, a few pixels shifted···49544954 GEN7_WA_FOR_GEN7_L3_CONTROL);49554955 I915_WRITE(GEN7_L3_CHICKEN_MODE_REGISTER,49564956 GEN7_WA_L3_CHICKEN_MODE);49574957+49584958+ /* L3 caching of data atomics doesn't work -- disable it. */49594959+ I915_WRITE(HSW_SCRATCH1, HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE);49604960+ I915_WRITE(HSW_ROW_CHICKEN3,49614961+ _MASKED_BIT_ENABLE(HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE));4957496249584963 /* This is required by WaCatErrorRejectionIssue:hsw */49594964 I915_WRITE(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG,···5686568156875682 INIT_DELAYED_WORK(&dev_priv->rps.delayed_resume_work,56885683 intel_gen6_powersave_work);56845684+56855685+ INIT_DELAYED_WORK(&dev_priv->rps.vlv_work, vlv_rps_timer_work);56895686}56905687
···113113 u8 *sadb;114114 int sad_count;115115116116+ /* XXX: setting this register causes hangs on some asics */117117+ return;118118+116119 if (!dig->afmt->pin)117120 return;118121
···16581658 drm_object_attach_property(&radeon_connector->base.base,16591659 rdev->mode_info.underscan_vborder_property,16601660 0);16611661- drm_object_attach_property(&radeon_connector->base.base,16621662- rdev->mode_info.audio_property,16631663- RADEON_AUDIO_DISABLE);16611661+ if (radeon_audio != 0)16621662+ drm_object_attach_property(&radeon_connector->base.base,16631663+ rdev->mode_info.audio_property,16641664+ (radeon_audio == 1) ?16651665+ RADEON_AUDIO_AUTO :16661666+ RADEON_AUDIO_DISABLE);16641667 subpixel_order = SubPixelHorizontalRGB;16651668 connector->interlace_allowed = true;16661669 if (connector_type == DRM_MODE_CONNECTOR_HDMIB)···17571754 rdev->mode_info.underscan_vborder_property,17581755 0);17591756 }17601760- if (ASIC_IS_DCE2(rdev)) {17571757+ if (ASIC_IS_DCE2(rdev) && (radeon_audio != 0)) {17611758 drm_object_attach_property(&radeon_connector->base.base,17621762- rdev->mode_info.audio_property,17631763- RADEON_AUDIO_DISABLE);17591759+ rdev->mode_info.audio_property,17601760+ (radeon_audio == 1) ?17611761+ RADEON_AUDIO_AUTO :17621762+ RADEON_AUDIO_DISABLE);17641763 }17651764 if (connector_type == DRM_MODE_CONNECTOR_DVII) {17661765 radeon_connector->dac_load_detect = true;···18041799 rdev->mode_info.underscan_vborder_property,18051800 0);18061801 }18071807- if (ASIC_IS_DCE2(rdev)) {18021802+ if (ASIC_IS_DCE2(rdev) && (radeon_audio != 0)) {18081803 drm_object_attach_property(&radeon_connector->base.base,18091809- rdev->mode_info.audio_property,18101810- RADEON_AUDIO_DISABLE);18041804+ rdev->mode_info.audio_property,18051805+ (radeon_audio == 1) ?18061806+ RADEON_AUDIO_AUTO :18071807+ RADEON_AUDIO_DISABLE);18111808 }18121809 subpixel_order = SubPixelHorizontalRGB;18131810 connector->interlace_allowed = true;···18501843 rdev->mode_info.underscan_vborder_property,18511844 0);18521845 }18531853- if (ASIC_IS_DCE2(rdev)) {18461846+ if (ASIC_IS_DCE2(rdev) && (radeon_audio != 0)) {18541847 drm_object_attach_property(&radeon_connector->base.base,18551855- rdev->mode_info.audio_property,18561856- RADEON_AUDIO_DISABLE);18481848+ rdev->mode_info.audio_property,18491849+ (radeon_audio == 1) ?18501850+ RADEON_AUDIO_AUTO :18511851+ RADEON_AUDIO_DISABLE);18571852 }18581853 connector->interlace_allowed = true;18591854 /* in theory with a DP to VGA converter... */
+1-2
drivers/gpu/drm/radeon/radeon_cs.c
···8585 VRAM, also but everything into VRAM on AGP cards to avoid8686 image corruptions */8787 if (p->ring == R600_RING_TYPE_UVD_INDEX &&8888- p->rdev->family < CHIP_PALM &&8988 (i == 0 || drm_pci_device_is_agp(p->rdev->ddev))) {9090-8989+ /* TODO: is this still needed for NI+ ? */9190 p->relocs[i].lobj.domain =9291 RADEON_GEM_DOMAIN_VRAM;9392
···945945 if (enable) {946946 mutex_lock(&rdev->pm.mutex);947947 rdev->pm.dpm.uvd_active = true;948948+ /* disable this for now */949949+#if 0948950 if ((rdev->pm.dpm.sd == 1) && (rdev->pm.dpm.hd == 0))949951 dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_SD;950952 else if ((rdev->pm.dpm.sd == 2) && (rdev->pm.dpm.hd == 0))···956954 else if ((rdev->pm.dpm.sd == 0) && (rdev->pm.dpm.hd == 2))957955 dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD2;958956 else957957+#endif959958 dpm_state = POWER_STATE_TYPE_INTERNAL_UVD;960959 rdev->pm.dpm.state = dpm_state;961960 mutex_unlock(&rdev->pm.mutex);
+2-2
drivers/gpu/drm/radeon/radeon_test.c
···3636 struct radeon_bo *vram_obj = NULL;3737 struct radeon_bo **gtt_obj = NULL;3838 uint64_t gtt_addr, vram_addr;3939- unsigned i, n, size;4040- int r, ring;3939+ unsigned n, size;4040+ int i, r, ring;41414242 switch (flag) {4343 case RADEON_TEST_COPY_DMA:
+4-2
drivers/gpu/drm/radeon/radeon_uvd.c
···476476 return -EINVAL;477477 }478478479479- if (p->rdev->family < CHIP_PALM && (cmd == 0 || cmd == 0x3) &&479479+ /* TODO: is this still necessary on NI+ ? */480480+ if ((cmd == 0 || cmd == 0x3) &&480481 (start >> 28) != (p->rdev->uvd.gpu_addr >> 28)) {481482 DRM_ERROR("msg/fb buffer %LX-%LX out of 256MB segment!\n",482483 start, end);···799798 (rdev->pm.dpm.hd != hd)) {800799 rdev->pm.dpm.sd = sd;801800 rdev->pm.dpm.hd = hd;802802- streams_changed = true;801801+ /* disable this for now */802802+ /*streams_changed = true;*/803803 }804804 }805805
···740740 struct vmw_fpriv *vmw_fp;741741742742 vmw_fp = vmw_fpriv(file_priv);743743- ttm_object_file_release(&vmw_fp->tfile);744744- if (vmw_fp->locked_master)743743+744744+ if (vmw_fp->locked_master) {745745+ struct vmw_master *vmaster =746746+ vmw_master(vmw_fp->locked_master);747747+748748+ ttm_lock_set_kill(&vmaster->lock, true, SIGTERM);749749+ ttm_vt_unlock(&vmaster->lock);745750 drm_master_put(&vmw_fp->locked_master);751751+ }752752+753753+ ttm_object_file_release(&vmw_fp->tfile);746754 kfree(vmw_fp);747755}748756···933925934926 vmw_fp->locked_master = drm_master_get(file_priv->master);935927 ret = ttm_vt_lock(&vmaster->lock, false, vmw_fp->tfile);936936- vmw_execbuf_release_pinned_bo(dev_priv);937937-938928 if (unlikely((ret != 0))) {939929 DRM_ERROR("Unable to lock TTM at VT switch.\n");940930 drm_master_put(&vmw_fp->locked_master);941931 }942932943943- ttm_lock_set_kill(&vmaster->lock, true, SIGTERM);933933+ ttm_lock_set_kill(&vmaster->lock, false, SIGTERM);934934+ vmw_execbuf_release_pinned_bo(dev_priv);944935945936 if (!dev_priv->enable_fb) {946937 ret = ttm_bo_evict_mm(&dev_priv->bdev, TTM_PL_VRAM);
+1-1
drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
···970970 if (new_backup)971971 res->backup_offset = new_backup_offset;972972973973- if (!res->func->may_evict)973973+ if (!res->func->may_evict || res->id == -1)974974 return;975975976976 write_lock(&dev_priv->resource_lock);
+1
drivers/hid/Kconfig
···241241 - Sharkoon Drakonia / Perixx MX-2000 gaming mice242242 - Tracer Sniper TRM-503 / NOVA Gaming Slider X200 /243243 Zalman ZM-GM1244244+ - SHARKOON DarkGlider Gaming mouse244245245246config HOLTEK_FF246247 bool "Holtek On Line Grip force feedback support"
+8-5
drivers/hid/hid-core.c
···319319320320static int hid_parser_global(struct hid_parser *parser, struct hid_item *item)321321{322322- __u32 raw_value;322322+ __s32 raw_value;323323 switch (item->tag) {324324 case HID_GLOBAL_ITEM_TAG_PUSH:325325···370370 return 0;371371372372 case HID_GLOBAL_ITEM_TAG_UNIT_EXPONENT:373373- /* Units exponent negative numbers are given through a374374- * two's complement.375375- * See "6.2.2.7 Global Items" for more information. */376376- raw_value = item_udata(item);373373+ /* Many devices provide unit exponent as a two's complement374374+ * nibble due to the common misunderstanding of HID375375+ * specification 1.11, 6.2.2.7 Global Items. Attempt to handle376376+ * both this and the standard encoding. */377377+ raw_value = item_sdata(item);377378 if (!(raw_value & 0xfffffff0))378379 parser->global.unit_exponent = hid_snto32(raw_value, 4);379380 else···17161715 { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD) },17171716 { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A) },17181717 { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067) },17181718+ { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081) },17191719 { HID_USB_DEVICE(USB_VENDOR_ID_HUION, USB_DEVICE_ID_HUION_580) },17201720 { HID_USB_DEVICE(USB_VENDOR_ID_JESS2, USB_DEVICE_ID_JESS2_COLOR_RUMBLE_PAD) },17211721 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ION, USB_DEVICE_ID_ICADE) },···1871186918721870 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PRESENTER_8K_BT) },18731871 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO, USB_DEVICE_ID_NINTENDO_WIIMOTE) },18721872+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO2, USB_DEVICE_ID_NINTENDO_WIIMOTE) },18741873 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO, USB_DEVICE_ID_NINTENDO_WIIMOTE2) },18751874 { }18761875};
+4
drivers/hid/hid-holtek-mouse.c
···2727 * - USB ID 04d9:a067, sold as Sharkoon Drakonia and Perixx MX-20002828 * - USB ID 04d9:a04a, sold as Tracer Sniper TRM-503, NOVA Gaming Slider X2002929 * and Zalman ZM-GM13030+ * - USB ID 04d9:a081, sold as SHARKOON DarkGlider Gaming mouse3031 */31323233static __u8 *holtek_mouse_report_fixup(struct hid_device *hdev, __u8 *rdesc,···4746 }4847 break;4948 case USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A:4949+ case USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081:5050 if (*rsize >= 113 && rdesc[106] == 0xff && rdesc[107] == 0x7f5151 && rdesc[111] == 0xff && rdesc[112] == 0x7f) {5252 hid_info(hdev, "Fixing up report descriptor\n");···6563 USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067) },6664 { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT,6765 USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A) },6666+ { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT,6767+ USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081) },6868 { }6969};7070MODULE_DEVICE_TABLE(hid, holtek_mouse_devices);
···230230231231static int read_smc(u8 cmd, const char *key, u8 *buffer, u8 len)232232{233233+ u8 status, data = 0;233234 int i;234235235236 if (send_command(cmd) || send_argument(key)) {···238237 return -EIO;239238 }240239240240+ /* This has no effect on newer (2012) SMCs */241241 if (send_byte(len, APPLESMC_DATA_PORT)) {242242 pr_warn("%.4s: read len fail\n", key);243243 return -EIO;···251249 }252250 buffer[i] = inb(APPLESMC_DATA_PORT);253251 }252252+253253+ /* Read the data port until bit0 is cleared */254254+ for (i = 0; i < 16; i++) {255255+ udelay(APPLESMC_MIN_WAIT);256256+ status = inb(APPLESMC_CMD_PORT);257257+ if (!(status & 0x01))258258+ break;259259+ data = inb(APPLESMC_DATA_PORT);260260+ }261261+ if (i)262262+ pr_warn("flushed %d bytes, last value is: %d\n", i, data);254263255264 return 0;256265}
···3131 libibverbs, libibcm and a hardware driver library from3232 <http://www.openfabrics.org/git/>.33333434+config INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING3535+ bool "Experimental and unstable ABI for userspace access to flow steering verbs"3636+ depends on INFINIBAND_USER_ACCESS3737+ depends on STAGING3838+ ---help---3939+ The final ABI for userspace access to flow steering verbs4040+ has not been defined. To use the current ABI, *WHICH WILL4141+ CHANGE IN THE FUTURE*, say Y here.4242+4343+ If unsure, say N.4444+3445config INFINIBAND_USER_MEM3546 bool3647 depends on INFINIBAND_USER_ACCESS != n
···5252 select PCI_PRI5353 select PCI_PASID5454 select IOMMU_API5555- depends on X86_64 && PCI && ACPI && X86_IO_APIC5555+ depends on X86_64 && PCI && ACPI5656 ---help---5757 With this option you can enable support for AMD IOMMU hardware in5858 your system. An IOMMU is a hardware component which provides
+7-6
drivers/iommu/arm-smmu.c
···377377 u32 cbar;378378 pgd_t *pgd;379379};380380+#define INVALID_IRPTNDX 0xff380381381382#define ARM_SMMU_CB_ASID(cfg) ((cfg)->cbndx)382383#define ARM_SMMU_CB_VMID(cfg) ((cfg)->cbndx + 1)···841840 if (IS_ERR_VALUE(ret)) {842841 dev_err(smmu->dev, "failed to request context IRQ %d (%u)\n",843842 root_cfg->irptndx, irq);844844- root_cfg->irptndx = -1;843843+ root_cfg->irptndx = INVALID_IRPTNDX;845844 goto out_free_context;846845 }847846···870869 writel_relaxed(0, cb_base + ARM_SMMU_CB_SCTLR);871870 arm_smmu_tlb_inv_context(root_cfg);872871873873- if (root_cfg->irptndx != -1) {872872+ if (root_cfg->irptndx != INVALID_IRPTNDX) {874873 irq = smmu->irqs[smmu->num_global_irqs + root_cfg->irptndx];875874 free_irq(irq, domain);876875 }···18581857 goto out_put_parent;18591858 }1860185918611861- arm_smmu_device_reset(smmu);18621862-18631860 for (i = 0; i < smmu->num_global_irqs; ++i) {18641861 err = request_irq(smmu->irqs[i],18651862 arm_smmu_global_fault,···18751876 spin_lock(&arm_smmu_devices_lock);18761877 list_add(&smmu->list, &arm_smmu_devices);18771878 spin_unlock(&arm_smmu_devices_lock);18791879+18801880+ arm_smmu_device_reset(smmu);18781881 return 0;1879188218801883out_free_irqs:···19671966 return ret;1968196719691968 /* Oh, for a proper bus abstraction */19701970- if (!iommu_present(&platform_bus_type));19691969+ if (!iommu_present(&platform_bus_type))19711970 bus_set_iommu(&platform_bus_type, &arm_smmu_ops);1972197119731973- if (!iommu_present(&amba_bustype));19721972+ if (!iommu_present(&amba_bustype))19741973 bus_set_iommu(&amba_bustype, &arm_smmu_ops);1975197419761975 return 0;
+2-3
drivers/md/bcache/request.c
···996996 closure_bio_submit(bio, cl, s->d);997997 } else {998998 bch_writeback_add(dc);999999+ s->op.cache_bio = bio;999100010001001 if (bio->bi_rw & REQ_FLUSH) {10011002 /* Also need to send a flush to the backing device */10021002- struct bio *flush = bio_alloc_bioset(0, GFP_NOIO,10031003+ struct bio *flush = bio_alloc_bioset(GFP_NOIO, 0,10031004 dc->disk.bio_split);1004100510051006 flush->bi_rw = WRITE_FLUSH;···10091008 flush->bi_private = cl;1010100910111010 closure_bio_submit(flush, cl, s->d);10121012- } else {10131013- s->op.cache_bio = bio;10141011 }10151012 }10161013out:
+12-6
drivers/md/dm-snap-persistent.c
···269269 return NUM_SNAPSHOT_HDR_CHUNKS + ((ps->exceptions_per_area + 1) * area);270270}271271272272+static void skip_metadata(struct pstore *ps)273273+{274274+ uint32_t stride = ps->exceptions_per_area + 1;275275+ chunk_t next_free = ps->next_free;276276+ if (sector_div(next_free, stride) == NUM_SNAPSHOT_HDR_CHUNKS)277277+ ps->next_free++;278278+}279279+272280/*273281 * Read or write a metadata area. Remembering to skip the first274282 * chunk which holds the header.···510502511503 ps->current_area--;512504505505+ skip_metadata(ps);506506+513507 return 0;514508}515509···626616 struct dm_exception *e)627617{628618 struct pstore *ps = get_info(store);629629- uint32_t stride;630630- chunk_t next_free;631619 sector_t size = get_dev_size(dm_snap_cow(store->snap)->bdev);632620633621 /* Is there enough room ? */···638630 * Move onto the next free pending, making sure to take639631 * into account the location of the metadata chunks.640632 */641641- stride = (ps->exceptions_per_area + 1);642642- next_free = ++ps->next_free;643643- if (sector_div(next_free, stride) == 1)644644- ps->next_free++;633633+ ps->next_free++;634634+ skip_metadata(ps);645635646636 atomic_inc(&ps->pending_count);647637 return 0;
+1-8
drivers/media/dvb-frontends/tda10071.c
···912912 { 0xd5, 0x03, 0x03 },913913 };914914915915- /* firmware status */916916- ret = tda10071_rd_reg(priv, 0x51, &tmp);917917- if (ret)918918- goto error;919919-920920- if (!tmp) {915915+ if (priv->warm) {921916 /* warm state - wake up device from sleep */922922- priv->warm = 1;923917924918 for (i = 0; i < ARRAY_SIZE(tab); i++) {925919 ret = tda10071_wr_reg_mask(priv, tab[i].reg,···931937 goto error;932938 } else {933939 /* cold state - try to download firmware */934934- priv->warm = 0;935940936941 /* request the firmware, this will block and timeout */937942 ret = request_firmware(&fw, fw_file, priv->i2c->dev.parent);
···776776 v4l_bound_align_image(&pix->width, 0, VOU_MAX_IMAGE_WIDTH, 1,777777 &pix->height, 0, VOU_MAX_IMAGE_HEIGHT, 1, 0);778778779779- for (i = 0; ARRAY_SIZE(vou_fmt); i++)779779+ for (i = 0; i < ARRAY_SIZE(vou_fmt); i++)780780 if (vou_fmt[i].pfmt == pix->pixelformat)781781 return 0;782782
···353353354354 if (b->m.planes[plane].bytesused > length)355355 return -EINVAL;356356- if (b->m.planes[plane].data_offset >=356356+357357+ if (b->m.planes[plane].data_offset > 0 &&358358+ b->m.planes[plane].data_offset >=357359 b->m.planes[plane].bytesused)358360 return -EINVAL;359361 }
+82-5
drivers/media/v4l2-core/videobuf2-dma-contig.c
···423423 return !!(vma->vm_flags & (VM_IO | VM_PFNMAP));424424}425425426426+static int vb2_dc_get_user_pfn(unsigned long start, int n_pages,427427+ struct vm_area_struct *vma, unsigned long *res)428428+{429429+ unsigned long pfn, start_pfn, prev_pfn;430430+ unsigned int i;431431+ int ret;432432+433433+ if (!vma_is_io(vma))434434+ return -EFAULT;435435+436436+ ret = follow_pfn(vma, start, &pfn);437437+ if (ret)438438+ return ret;439439+440440+ start_pfn = pfn;441441+ start += PAGE_SIZE;442442+443443+ for (i = 1; i < n_pages; ++i, start += PAGE_SIZE) {444444+ prev_pfn = pfn;445445+ ret = follow_pfn(vma, start, &pfn);446446+447447+ if (ret) {448448+ pr_err("no page for address %lu\n", start);449449+ return ret;450450+ }451451+ if (pfn != prev_pfn + 1)452452+ return -EINVAL;453453+ }454454+455455+ *res = start_pfn;456456+ return 0;457457+}458458+426459static int vb2_dc_get_user_pages(unsigned long start, struct page **pages,427460 int n_pages, struct vm_area_struct *vma, int write)428461{···465432 for (i = 0; i < n_pages; ++i, start += PAGE_SIZE) {466433 unsigned long pfn;467434 int ret = follow_pfn(vma, start, &pfn);435435+436436+ if (!pfn_valid(pfn))437437+ return -EINVAL;468438469439 if (ret) {470440 pr_err("no page for address %lu\n", start);···504468 struct vb2_dc_buf *buf = buf_priv;505469 struct sg_table *sgt = buf->dma_sgt;506470507507- dma_unmap_sg(buf->dev, sgt->sgl, sgt->orig_nents, buf->dma_dir);508508- if (!vma_is_io(buf->vma))509509- vb2_dc_sgt_foreach_page(sgt, vb2_dc_put_dirty_page);471471+ if (sgt) {472472+ dma_unmap_sg(buf->dev, sgt->sgl, sgt->orig_nents, buf->dma_dir);473473+ if (!vma_is_io(buf->vma))474474+ vb2_dc_sgt_foreach_page(sgt, vb2_dc_put_dirty_page);510475511511- sg_free_table(sgt);512512- kfree(sgt);476476+ sg_free_table(sgt);477477+ kfree(sgt);478478+ }513479 vb2_put_vma(buf->vma);514480 kfree(buf);515481}482482+483483+/*484484+ * For some kind of reserved memory there might be no struct page available,485485+ * so all that can be done to support such 'pages' is to try to convert486486+ * pfn to dma address or at the last resort just assume that487487+ * dma address == physical address (like it has been assumed in earlier version488488+ * of videobuf2-dma-contig489489+ */490490+491491+#ifdef __arch_pfn_to_dma492492+static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)493493+{494494+ return (dma_addr_t)__arch_pfn_to_dma(dev, pfn);495495+}496496+#elif defined(__pfn_to_bus)497497+static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)498498+{499499+ return (dma_addr_t)__pfn_to_bus(pfn);500500+}501501+#elif defined(__pfn_to_phys)502502+static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)503503+{504504+ return (dma_addr_t)__pfn_to_phys(pfn);505505+}506506+#else507507+static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)508508+{509509+ /* really, we cannot do anything better at this point */510510+ return (dma_addr_t)(pfn) << PAGE_SHIFT;511511+}512512+#endif516513517514static void *vb2_dc_get_userptr(void *alloc_ctx, unsigned long vaddr,518515 unsigned long size, int write)···617548 /* extract page list from userspace mapping */618549 ret = vb2_dc_get_user_pages(start, pages, n_pages, vma, write);619550 if (ret) {551551+ unsigned long pfn;552552+ if (vb2_dc_get_user_pfn(start, n_pages, vma, &pfn) == 0) {553553+ buf->dma_addr = vb2_dc_pfn_to_dma(buf->dev, pfn);554554+ buf->size = size;555555+ kfree(pages);556556+ return buf;557557+ }558558+620559 pr_err("failed to get user pages\n");621560 goto fail_vma;622561 }
···503503}504504505505/* issue a dmae command over the init-channel and wait for completion */506506-int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae)506506+int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae,507507+ u32 *comp)507508{508508- u32 *wb_comp = bnx2x_sp(bp, wb_comp);509509 int cnt = CHIP_REV_IS_SLOW(bp) ? (400000) : 4000;510510 int rc = 0;511511···518518 spin_lock_bh(&bp->dmae_lock);519519520520 /* reset completion */521521- *wb_comp = 0;521521+ *comp = 0;522522523523 /* post the command on the channel used for initializations */524524 bnx2x_post_dmae(bp, dmae, INIT_DMAE_C(bp));525525526526 /* wait for completion */527527 udelay(5);528528- while ((*wb_comp & ~DMAE_PCI_ERR_FLAG) != DMAE_COMP_VAL) {528528+ while ((*comp & ~DMAE_PCI_ERR_FLAG) != DMAE_COMP_VAL) {529529530530 if (!cnt ||531531 (bp->recovery_state != BNX2X_RECOVERY_DONE &&···537537 cnt--;538538 udelay(50);539539 }540540- if (*wb_comp & DMAE_PCI_ERR_FLAG) {540540+ if (*comp & DMAE_PCI_ERR_FLAG) {541541 BNX2X_ERR("DMAE PCI error!\n");542542 rc = DMAE_PCI_ERROR;543543 }···574574 dmae.len = len32;575575576576 /* issue the command and wait for completion */577577- rc = bnx2x_issue_dmae_with_comp(bp, &dmae);577577+ rc = bnx2x_issue_dmae_with_comp(bp, &dmae, bnx2x_sp(bp, wb_comp));578578 if (rc) {579579 BNX2X_ERR("DMAE returned failure %d\n", rc);580580 bnx2x_panic();···611611 dmae.len = len32;612612613613 /* issue the command and wait for completion */614614- rc = bnx2x_issue_dmae_with_comp(bp, &dmae);614614+ rc = bnx2x_issue_dmae_with_comp(bp, &dmae, bnx2x_sp(bp, wb_comp));615615 if (rc) {616616 BNX2X_ERR("DMAE returned failure %d\n", rc);617617 bnx2x_panic();···751751 return rc;752752}753753754754+#define MCPR_TRACE_BUFFER_SIZE (0x800)755755+#define SCRATCH_BUFFER_SIZE(bp) \756756+ (CHIP_IS_E1(bp) ? 0x10000 : (CHIP_IS_E1H(bp) ? 0x20000 : 0x28000))757757+754758void bnx2x_fw_dump_lvl(struct bnx2x *bp, const char *lvl)755759{756760 u32 addr, val;···779775 trace_shmem_base = bp->common.shmem_base;780776 else781777 trace_shmem_base = SHMEM2_RD(bp, other_shmem_base_addr);782782- addr = trace_shmem_base - 0x800;778778+779779+ /* sanity */780780+ if (trace_shmem_base < MCPR_SCRATCH_BASE(bp) + MCPR_TRACE_BUFFER_SIZE ||781781+ trace_shmem_base >= MCPR_SCRATCH_BASE(bp) +782782+ SCRATCH_BUFFER_SIZE(bp)) {783783+ BNX2X_ERR("Unable to dump trace buffer (mark %x)\n",784784+ trace_shmem_base);785785+ return;786786+ }787787+788788+ addr = trace_shmem_base - MCPR_TRACE_BUFFER_SIZE;783789784790 /* validate TRCB signature */785791 mark = REG_RD(bp, addr);···801787 /* read cyclic buffer pointer */802788 addr += 4;803789 mark = REG_RD(bp, addr);804804- mark = (CHIP_IS_E1x(bp) ? MCP_REG_MCPR_SCRATCH : MCP_A_REG_MCPR_SCRATCH)805805- + ((mark + 0x3) & ~0x3) - 0x08000000;790790+ mark = MCPR_SCRATCH_BASE(bp) + ((mark + 0x3) & ~0x3) - 0x08000000;791791+ if (mark >= trace_shmem_base || mark < addr + 4) {792792+ BNX2X_ERR("Mark doesn't fall inside Trace Buffer\n");793793+ return;794794+ }806795 printk("%s" "begin fw dump (mark 0x%x)\n", lvl, mark);807796808797 printk("%s", lvl);809798810799 /* dump buffer after the mark */811811- for (offset = mark; offset <= trace_shmem_base; offset += 0x8*4) {800800+ for (offset = mark; offset < trace_shmem_base; offset += 0x8*4) {812801 for (word = 0; word < 8; word++)813802 data[word] = htonl(REG_RD(bp, offset + 4*word));814803 data[8] = 0x0;···42974280 pr_cont("%s%s", idx ? ", " : "", blk);42984281}4299428243004300-static int bnx2x_check_blocks_with_parity0(struct bnx2x *bp, u32 sig,43014301- int par_num, bool print)42834283+static bool bnx2x_check_blocks_with_parity0(struct bnx2x *bp, u32 sig,42844284+ int *par_num, bool print)43024285{43034303- int i = 0;43044304- u32 cur_bit = 0;42864286+ u32 cur_bit;42874287+ bool res;42884288+ int i;42894289+42904290+ res = false;42914291+43054292 for (i = 0; sig; i++) {43064306- cur_bit = ((u32)0x1 << i);42934293+ cur_bit = (0x1UL << i);43074294 if (sig & cur_bit) {43084308- switch (cur_bit) {43094309- case AEU_INPUTS_ATTN_BITS_BRB_PARITY_ERROR:43104310- if (print) {43114311- _print_next_block(par_num++, "BRB");42954295+ res |= true; /* Each bit is real error! */42964296+42974297+ if (print) {42984298+ switch (cur_bit) {42994299+ case AEU_INPUTS_ATTN_BITS_BRB_PARITY_ERROR:43004300+ _print_next_block((*par_num)++, "BRB");43124301 _print_parity(bp,43134302 BRB1_REG_BRB1_PRTY_STS);43144314- }43154315- break;43164316- case AEU_INPUTS_ATTN_BITS_PARSER_PARITY_ERROR:43174317- if (print) {43184318- _print_next_block(par_num++, "PARSER");43034303+ break;43044304+ case AEU_INPUTS_ATTN_BITS_PARSER_PARITY_ERROR:43054305+ _print_next_block((*par_num)++,43064306+ "PARSER");43194307 _print_parity(bp, PRS_REG_PRS_PRTY_STS);43204320- }43214321- break;43224322- case AEU_INPUTS_ATTN_BITS_TSDM_PARITY_ERROR:43234323- if (print) {43244324- _print_next_block(par_num++, "TSDM");43084308+ break;43094309+ case AEU_INPUTS_ATTN_BITS_TSDM_PARITY_ERROR:43104310+ _print_next_block((*par_num)++, "TSDM");43254311 _print_parity(bp,43264312 TSDM_REG_TSDM_PRTY_STS);43274327- }43284328- break;43294329- case AEU_INPUTS_ATTN_BITS_SEARCHER_PARITY_ERROR:43304330- if (print) {43314331- _print_next_block(par_num++,43134313+ break;43144314+ case AEU_INPUTS_ATTN_BITS_SEARCHER_PARITY_ERROR:43154315+ _print_next_block((*par_num)++,43324316 "SEARCHER");43334317 _print_parity(bp, SRC_REG_SRC_PRTY_STS);43344334- }43354335- break;43364336- case 
AEU_INPUTS_ATTN_BITS_TCM_PARITY_ERROR:43374337- if (print) {43384338- _print_next_block(par_num++, "TCM");43394339- _print_parity(bp,43404340- TCM_REG_TCM_PRTY_STS);43414341- }43424342- break;43434343- case AEU_INPUTS_ATTN_BITS_TSEMI_PARITY_ERROR:43444344- if (print) {43454345- _print_next_block(par_num++, "TSEMI");43184318+ break;43194319+ case AEU_INPUTS_ATTN_BITS_TCM_PARITY_ERROR:43204320+ _print_next_block((*par_num)++, "TCM");43214321+ _print_parity(bp, TCM_REG_TCM_PRTY_STS);43224322+ break;43234323+ case AEU_INPUTS_ATTN_BITS_TSEMI_PARITY_ERROR:43244324+ _print_next_block((*par_num)++,43254325+ "TSEMI");43464326 _print_parity(bp,43474327 TSEM_REG_TSEM_PRTY_STS_0);43484328 _print_parity(bp,43494329 TSEM_REG_TSEM_PRTY_STS_1);43504350- }43514351- break;43524352- case AEU_INPUTS_ATTN_BITS_PBCLIENT_PARITY_ERROR:43534353- if (print) {43544354- _print_next_block(par_num++, "XPB");43304330+ break;43314331+ case AEU_INPUTS_ATTN_BITS_PBCLIENT_PARITY_ERROR:43324332+ _print_next_block((*par_num)++, "XPB");43554333 _print_parity(bp, GRCBASE_XPB +43564334 PB_REG_PB_PRTY_STS);43354335+ break;43574336 }43584358- break;43594337 }4360433843614339 /* Clear the bit */···43584346 }43594347 }4360434843614361- return par_num;43494349+ return res;43624350}4363435143644364-static int bnx2x_check_blocks_with_parity1(struct bnx2x *bp, u32 sig,43654365- int par_num, bool *global,43524352+static bool bnx2x_check_blocks_with_parity1(struct bnx2x *bp, u32 sig,43534353+ int *par_num, bool *global,43664354 bool print)43674355{43684368- int i = 0;43694369- u32 cur_bit = 0;43564356+ u32 cur_bit;43574357+ bool res;43584358+ int i;43594359+43604360+ res = false;43614361+43704362 for (i = 0; sig; i++) {43714371- cur_bit = ((u32)0x1 << i);43634363+ cur_bit = (0x1UL << i);43724364 if (sig & cur_bit) {43654365+ res |= true; /* Each bit is real error! 
*/43734366 switch (cur_bit) {43744367 case AEU_INPUTS_ATTN_BITS_PBF_PARITY_ERROR:43754368 if (print) {43764376- _print_next_block(par_num++, "PBF");43694369+ _print_next_block((*par_num)++, "PBF");43774370 _print_parity(bp, PBF_REG_PBF_PRTY_STS);43784371 }43794372 break;43804373 case AEU_INPUTS_ATTN_BITS_QM_PARITY_ERROR:43814374 if (print) {43824382- _print_next_block(par_num++, "QM");43754375+ _print_next_block((*par_num)++, "QM");43834376 _print_parity(bp, QM_REG_QM_PRTY_STS);43844377 }43854378 break;43864379 case AEU_INPUTS_ATTN_BITS_TIMERS_PARITY_ERROR:43874380 if (print) {43884388- _print_next_block(par_num++, "TM");43814381+ _print_next_block((*par_num)++, "TM");43894382 _print_parity(bp, TM_REG_TM_PRTY_STS);43904383 }43914384 break;43924385 case AEU_INPUTS_ATTN_BITS_XSDM_PARITY_ERROR:43934386 if (print) {43944394- _print_next_block(par_num++, "XSDM");43874387+ _print_next_block((*par_num)++, "XSDM");43954388 _print_parity(bp,43964389 XSDM_REG_XSDM_PRTY_STS);43974390 }43984391 break;43994392 case AEU_INPUTS_ATTN_BITS_XCM_PARITY_ERROR:44004393 if (print) {44014401- _print_next_block(par_num++, "XCM");43944394+ _print_next_block((*par_num)++, "XCM");44024395 _print_parity(bp, XCM_REG_XCM_PRTY_STS);44034396 }44044397 break;44054398 case AEU_INPUTS_ATTN_BITS_XSEMI_PARITY_ERROR:44064399 if (print) {44074407- _print_next_block(par_num++, "XSEMI");44004400+ _print_next_block((*par_num)++,44014401+ "XSEMI");44084402 _print_parity(bp,44094403 XSEM_REG_XSEM_PRTY_STS_0);44104404 _print_parity(bp,···44194401 break;44204402 case AEU_INPUTS_ATTN_BITS_DOORBELLQ_PARITY_ERROR:44214403 if (print) {44224422- _print_next_block(par_num++,44044404+ _print_next_block((*par_num)++,44234405 "DOORBELLQ");44244406 _print_parity(bp,44254407 DORQ_REG_DORQ_PRTY_STS);···44274409 break;44284410 case AEU_INPUTS_ATTN_BITS_NIG_PARITY_ERROR:44294411 if (print) {44304430- _print_next_block(par_num++, "NIG");44124412+ _print_next_block((*par_num)++, "NIG");44314413 if (CHIP_IS_E1x(bp)) {44324414 
_print_parity(bp,44334415 NIG_REG_NIG_PRTY_STS);···44414423 break;44424424 case AEU_INPUTS_ATTN_BITS_VAUX_PCI_CORE_PARITY_ERROR:44434425 if (print)44444444- _print_next_block(par_num++,44264426+ _print_next_block((*par_num)++,44454427 "VAUX PCI CORE");44464428 *global = true;44474429 break;44484430 case AEU_INPUTS_ATTN_BITS_DEBUG_PARITY_ERROR:44494431 if (print) {44504450- _print_next_block(par_num++, "DEBUG");44324432+ _print_next_block((*par_num)++,44334433+ "DEBUG");44514434 _print_parity(bp, DBG_REG_DBG_PRTY_STS);44524435 }44534436 break;44544437 case AEU_INPUTS_ATTN_BITS_USDM_PARITY_ERROR:44554438 if (print) {44564456- _print_next_block(par_num++, "USDM");44394439+ _print_next_block((*par_num)++, "USDM");44574440 _print_parity(bp,44584441 USDM_REG_USDM_PRTY_STS);44594442 }44604443 break;44614444 case AEU_INPUTS_ATTN_BITS_UCM_PARITY_ERROR:44624445 if (print) {44634463- _print_next_block(par_num++, "UCM");44464446+ _print_next_block((*par_num)++, "UCM");44644447 _print_parity(bp, UCM_REG_UCM_PRTY_STS);44654448 }44664449 break;44674450 case AEU_INPUTS_ATTN_BITS_USEMI_PARITY_ERROR:44684451 if (print) {44694469- _print_next_block(par_num++, "USEMI");44524452+ _print_next_block((*par_num)++,44534453+ "USEMI");44704454 _print_parity(bp,44714455 USEM_REG_USEM_PRTY_STS_0);44724456 _print_parity(bp,···44774457 break;44784458 case AEU_INPUTS_ATTN_BITS_UPB_PARITY_ERROR:44794459 if (print) {44804480- _print_next_block(par_num++, "UPB");44604460+ _print_next_block((*par_num)++, "UPB");44814461 _print_parity(bp, GRCBASE_UPB +44824462 PB_REG_PB_PRTY_STS);44834463 }44844464 break;44854465 case AEU_INPUTS_ATTN_BITS_CSDM_PARITY_ERROR:44864466 if (print) {44874487- _print_next_block(par_num++, "CSDM");44674467+ _print_next_block((*par_num)++, "CSDM");44884468 _print_parity(bp,44894469 CSDM_REG_CSDM_PRTY_STS);44904470 }44914471 break;44924472 case AEU_INPUTS_ATTN_BITS_CCM_PARITY_ERROR:44934473 if (print) {44944494- _print_next_block(par_num++, "CCM");44744474+ 
_print_next_block((*par_num)++, "CCM");44954475 _print_parity(bp, CCM_REG_CCM_PRTY_STS);44964476 }44974477 break;···45024482 }45034483 }4504448445054505- return par_num;44854485+ return res;45064486}4507448745084508-static int bnx2x_check_blocks_with_parity2(struct bnx2x *bp, u32 sig,45094509- int par_num, bool print)44884488+static bool bnx2x_check_blocks_with_parity2(struct bnx2x *bp, u32 sig,44894489+ int *par_num, bool print)45104490{45114511- int i = 0;45124512- u32 cur_bit = 0;44914491+ u32 cur_bit;44924492+ bool res;44934493+ int i;44944494+44954495+ res = false;44964496+45134497 for (i = 0; sig; i++) {45144514- cur_bit = ((u32)0x1 << i);44984498+ cur_bit = (0x1UL << i);45154499 if (sig & cur_bit) {45164516- switch (cur_bit) {45174517- case AEU_INPUTS_ATTN_BITS_CSEMI_PARITY_ERROR:45184518- if (print) {45194519- _print_next_block(par_num++, "CSEMI");45004500+ res |= true; /* Each bit is real error! */45014501+ if (print) {45024502+ switch (cur_bit) {45034503+ case AEU_INPUTS_ATTN_BITS_CSEMI_PARITY_ERROR:45044504+ _print_next_block((*par_num)++,45054505+ "CSEMI");45204506 _print_parity(bp,45214507 CSEM_REG_CSEM_PRTY_STS_0);45224508 _print_parity(bp,45234509 CSEM_REG_CSEM_PRTY_STS_1);45244524- }45254525- break;45264526- case AEU_INPUTS_ATTN_BITS_PXP_PARITY_ERROR:45274527- if (print) {45284528- _print_next_block(par_num++, "PXP");45104510+ break;45114511+ case AEU_INPUTS_ATTN_BITS_PXP_PARITY_ERROR:45124512+ _print_next_block((*par_num)++, "PXP");45294513 _print_parity(bp, PXP_REG_PXP_PRTY_STS);45304514 _print_parity(bp,45314515 PXP2_REG_PXP2_PRTY_STS_0);45324516 _print_parity(bp,45334517 PXP2_REG_PXP2_PRTY_STS_1);45344534- }45354535- break;45364536- case AEU_IN_ATTN_BITS_PXPPCICLOCKCLIENT_PARITY_ERROR:45374537- if (print)45384538- _print_next_block(par_num++,45394539- "PXPPCICLOCKCLIENT");45404540- break;45414541- case AEU_INPUTS_ATTN_BITS_CFC_PARITY_ERROR:45424542- if (print) {45434543- _print_next_block(par_num++, "CFC");45184518+ break;45194519+ case 
AEU_IN_ATTN_BITS_PXPPCICLOCKCLIENT_PARITY_ERROR:45204520+ _print_next_block((*par_num)++,45214521+ "PXPPCICLOCKCLIENT");45224522+ break;45234523+ case AEU_INPUTS_ATTN_BITS_CFC_PARITY_ERROR:45244524+ _print_next_block((*par_num)++, "CFC");45444525 _print_parity(bp,45454526 CFC_REG_CFC_PRTY_STS);45464546- }45474547- break;45484548- case AEU_INPUTS_ATTN_BITS_CDU_PARITY_ERROR:45494549- if (print) {45504550- _print_next_block(par_num++, "CDU");45274527+ break;45284528+ case AEU_INPUTS_ATTN_BITS_CDU_PARITY_ERROR:45294529+ _print_next_block((*par_num)++, "CDU");45514530 _print_parity(bp, CDU_REG_CDU_PRTY_STS);45524552- }45534553- break;45544554- case AEU_INPUTS_ATTN_BITS_DMAE_PARITY_ERROR:45554555- if (print) {45564556- _print_next_block(par_num++, "DMAE");45314531+ break;45324532+ case AEU_INPUTS_ATTN_BITS_DMAE_PARITY_ERROR:45334533+ _print_next_block((*par_num)++, "DMAE");45574534 _print_parity(bp,45584535 DMAE_REG_DMAE_PRTY_STS);45594559- }45604560- break;45614561- case AEU_INPUTS_ATTN_BITS_IGU_PARITY_ERROR:45624562- if (print) {45634563- _print_next_block(par_num++, "IGU");45364536+ break;45374537+ case AEU_INPUTS_ATTN_BITS_IGU_PARITY_ERROR:45384538+ _print_next_block((*par_num)++, "IGU");45644539 if (CHIP_IS_E1x(bp))45654540 _print_parity(bp,45664541 HC_REG_HC_PRTY_STS);45674542 else45684543 _print_parity(bp,45694544 IGU_REG_IGU_PRTY_STS);45704570- }45714571- break;45724572- case AEU_INPUTS_ATTN_BITS_MISC_PARITY_ERROR:45734573- if (print) {45744574- _print_next_block(par_num++, "MISC");45454545+ break;45464546+ case AEU_INPUTS_ATTN_BITS_MISC_PARITY_ERROR:45474547+ _print_next_block((*par_num)++, "MISC");45754548 _print_parity(bp,45764549 MISC_REG_MISC_PRTY_STS);45504550+ break;45774551 }45784578- break;45794552 }4580455345814554 /* Clear the bit */···45764563 }45774564 }4578456545794579- return par_num;45664566+ return res;45804567}4581456845824582-static int bnx2x_check_blocks_with_parity3(u32 sig, int par_num,45834583- bool *global, bool print)45694569+static bool 
bnx2x_check_blocks_with_parity3(struct bnx2x *bp, u32 sig,45704570+ int *par_num, bool *global,45714571+ bool print)45844572{45854585- int i = 0;45864586- u32 cur_bit = 0;45734573+ bool res = false;45744574+ u32 cur_bit;45754575+ int i;45764576+45874577 for (i = 0; sig; i++) {45884588- cur_bit = ((u32)0x1 << i);45784578+ cur_bit = (0x1UL << i);45894579 if (sig & cur_bit) {45904580 switch (cur_bit) {45914581 case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_ROM_PARITY:45924582 if (print)45934593- _print_next_block(par_num++, "MCP ROM");45834583+ _print_next_block((*par_num)++,45844584+ "MCP ROM");45944585 *global = true;45864586+ res |= true;45954587 break;45964588 case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_RX_PARITY:45974589 if (print)45984598- _print_next_block(par_num++,45904590+ _print_next_block((*par_num)++,45994591 "MCP UMP RX");46004592 *global = true;45934593+ res |= true;46014594 break;46024595 case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_TX_PARITY:46034596 if (print)46044604- _print_next_block(par_num++,45974597+ _print_next_block((*par_num)++,46054598 "MCP UMP TX");46064599 *global = true;46004600+ res |= true;46074601 break;46084602 case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY:46094603 if (print)46104610- _print_next_block(par_num++,46044604+ _print_next_block((*par_num)++,46114605 "MCP SCPAD");46124612- *global = true;46064606+ /* clear latched SCPAD PATIRY from MCP */46074607+ REG_WR(bp, MISC_REG_AEU_CLR_LATCH_SIGNAL,46084608+ 1UL << 10);46134609 break;46144610 }46154611···46274605 }46284606 }4629460746304630- return par_num;46084608+ return res;46314609}4632461046334633-static int bnx2x_check_blocks_with_parity4(struct bnx2x *bp, u32 sig,46344634- int par_num, bool print)46114611+static bool bnx2x_check_blocks_with_parity4(struct bnx2x *bp, u32 sig,46124612+ int *par_num, bool print)46354613{46364636- int i = 0;46374637- u32 cur_bit = 0;46144614+ u32 cur_bit;46154615+ bool res;46164616+ int i;46174617+46184618+ res = false;46194619+46384620 for (i = 0; sig; 
i++) {46394639- cur_bit = ((u32)0x1 << i);46214621+ cur_bit = (0x1UL << i);46404622 if (sig & cur_bit) {46414641- switch (cur_bit) {46424642- case AEU_INPUTS_ATTN_BITS_PGLUE_PARITY_ERROR:46434643- if (print) {46444644- _print_next_block(par_num++, "PGLUE_B");46234623+ res |= true; /* Each bit is real error! */46244624+ if (print) {46254625+ switch (cur_bit) {46264626+ case AEU_INPUTS_ATTN_BITS_PGLUE_PARITY_ERROR:46274627+ _print_next_block((*par_num)++,46284628+ "PGLUE_B");46454629 _print_parity(bp,46464646- PGLUE_B_REG_PGLUE_B_PRTY_STS);46474647- }46484648- break;46494649- case AEU_INPUTS_ATTN_BITS_ATC_PARITY_ERROR:46504650- if (print) {46514651- _print_next_block(par_num++, "ATC");46304630+ PGLUE_B_REG_PGLUE_B_PRTY_STS);46314631+ break;46324632+ case AEU_INPUTS_ATTN_BITS_ATC_PARITY_ERROR:46334633+ _print_next_block((*par_num)++, "ATC");46524634 _print_parity(bp,46534635 ATC_REG_ATC_PRTY_STS);46364636+ break;46544637 }46554655- break;46564638 }46574657-46584639 /* Clear the bit */46594640 sig &= ~cur_bit;46604641 }46614642 }4662464346634663- return par_num;46444644+ return res;46644645}4665464646664647static bool bnx2x_parity_attn(struct bnx2x *bp, bool *global, bool print,46674648 u32 *sig)46684649{46504650+ bool res = false;46514651+46694652 if ((sig[0] & HW_PRTY_ASSERT_SET_0) ||46704653 (sig[1] & HW_PRTY_ASSERT_SET_1) ||46714654 (sig[2] & HW_PRTY_ASSERT_SET_2) ||···46874660 if (print)46884661 netdev_err(bp->dev,46894662 "Parity errors detected in blocks: ");46904690- par_num = bnx2x_check_blocks_with_parity0(bp,46914691- sig[0] & HW_PRTY_ASSERT_SET_0, par_num, print);46924692- par_num = bnx2x_check_blocks_with_parity1(bp,46934693- sig[1] & HW_PRTY_ASSERT_SET_1, par_num, global, print);46944694- par_num = bnx2x_check_blocks_with_parity2(bp,46954695- sig[2] & HW_PRTY_ASSERT_SET_2, par_num, print);46964696- par_num = bnx2x_check_blocks_with_parity3(46974697- sig[3] & HW_PRTY_ASSERT_SET_3, par_num, global, print);46984698- par_num = 
bnx2x_check_blocks_with_parity4(bp,46994699- sig[4] & HW_PRTY_ASSERT_SET_4, par_num, print);46634663+ res |= bnx2x_check_blocks_with_parity0(bp,46644664+ sig[0] & HW_PRTY_ASSERT_SET_0, &par_num, print);46654665+ res |= bnx2x_check_blocks_with_parity1(bp,46664666+ sig[1] & HW_PRTY_ASSERT_SET_1, &par_num, global, print);46674667+ res |= bnx2x_check_blocks_with_parity2(bp,46684668+ sig[2] & HW_PRTY_ASSERT_SET_2, &par_num, print);46694669+ res |= bnx2x_check_blocks_with_parity3(bp,46704670+ sig[3] & HW_PRTY_ASSERT_SET_3, &par_num, global, print);46714671+ res |= bnx2x_check_blocks_with_parity4(bp,46724672+ sig[4] & HW_PRTY_ASSERT_SET_4, &par_num, print);4700467347014674 if (print)47024675 pr_cont("\n");46764676+ }4703467747044704- return true;47054705- } else47064706- return false;46784678+ return res;47074679}4708468047094681/**···71527126 int port = BP_PORT(bp);71537127 int init_phase = port ? PHASE_PORT1 : PHASE_PORT0;71547128 u32 low, high;71557155- u32 val;71297129+ u32 val, reg;7156713071577131 DP(NETIF_MSG_HW, "starting port init port %d\n", port);71587132···72967270 /* Enable DCBX attention for all but E1 */72977271 val |= CHIP_IS_E1(bp) ? 0 : 0x10;72987272 REG_WR(bp, MISC_REG_AEU_MASK_ATTN_FUNC_0 + port*4, val);72737273+72747274+ /* SCPAD_PARITY should NOT trigger close the gates */72757275+ reg = port ? MISC_REG_AEU_ENABLE4_NIG_1 : MISC_REG_AEU_ENABLE4_NIG_0;72767276+ REG_WR(bp, reg,72777277+ REG_RD(bp, reg) &72787278+ ~AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY);72797279+72807280+ reg = port ? MISC_REG_AEU_ENABLE4_PXP_1 : MISC_REG_AEU_ENABLE4_PXP_0;72817281+ REG_WR(bp, reg,72827282+ REG_RD(bp, reg) &72837283+ ~AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY);7299728473007285 bnx2x_init_block(bp, BLOCK_NIG, init_phase);73017286···1173011693static int bnx2x_open(struct net_device *dev)1173111694{1173211695 struct bnx2x *bp = netdev_priv(dev);1173311733- bool global = false;1173411734- int other_engine = BP_PATH(bp) ? 
0 : 1;1173511735- bool other_load_status, load_status;1173611696 int rc;11737116971173811698 bp->stats_init = true;···1174511711 * Parity recovery is only relevant for PF driver.1174611712 */1174711713 if (IS_PF(bp)) {1171411714+ int other_engine = BP_PATH(bp) ? 0 : 1;1171511715+ bool other_load_status, load_status;1171611716+ bool global = false;1171711717+1174811718 other_load_status = bnx2x_get_load_status(bp, other_engine);1174911719 load_status = bnx2x_get_load_status(bp, BP_PATH(bp));1175011720 if (!bnx2x_reset_is_done(bp, BP_PATH(bp)) ||···1214112103 struct device *dev = &bp->pdev->dev;12142121041214312105 if (dma_set_mask(dev, DMA_BIT_MASK(64)) == 0) {1214412144- bp->flags |= USING_DAC_FLAG;1214512106 if (dma_set_coherent_mask(dev, DMA_BIT_MASK(64)) != 0) {1214612107 dev_err(dev, "dma_set_coherent_mask failed, aborting\n");1214712108 return -EIO;···1231112274 NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 | NETIF_F_HIGHDMA;12312122751231312276 dev->features |= dev->hw_features | NETIF_F_HW_VLAN_CTAG_RX;1231412314- if (bp->flags & USING_DAC_FLAG)1231512315- dev->features |= NETIF_F_HIGHDMA;1227712277+ dev->features |= NETIF_F_HIGHDMA;12316122781231712279 /* Add Loopback capability to the device */1231812280 dev->hw_features |= NETIF_F_LOOPBACK;···1265112615 return BNX2X_MULTI_TX_COS_E1X;1265212616 case BCM57712:1265312617 case BCM57712_MF:1265412654- case BCM57712_VF:1265512618 return BNX2X_MULTI_TX_COS_E2_E3A0;1265612619 case BCM57800:1265712620 case BCM57800_MF:1265812658- case BCM57800_VF:1265912621 case BCM57810:1266012622 case BCM57810_MF:1266112623 case BCM57840_4_10:1266212624 case BCM57840_2_20:1266312625 case BCM57840_O:1266412626 case BCM57840_MFO:1266512665- case BCM57810_VF:1266612627 case BCM57840_MF:1266712667- case BCM57840_VF:1266812628 case BCM57811:1266912629 case BCM57811_MF:1267012670- case BCM57811_VF:1267112630 return BNX2X_MULTI_TX_COS_E3B0;1263112631+ case BCM57712_VF:1263212632+ case BCM57800_VF:1263312633+ case BCM57810_VF:1263412634+ 
case BCM57840_VF:1263512635+ case BCM57811_VF:1267212636 return 1;1267312637 default:1267412638 pr_err("Unknown board_type (%d), aborting\n", chip_id);
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c (+17 -12)
···
 				 bnx2x_vfop_qdtor, cmd->done);
 		return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_qdtor,
 					     cmd->block);
+	} else {
+		BNX2X_ERR("VF[%d] failed to add a vfop\n", vf->abs_vfid);
+		return -ENOMEM;
 	}
-	DP(BNX2X_MSG_IOV, "VF[%d] failed to add a vfop. rc %d\n",
-	   vf->abs_vfid, vfop->rc);
-	return -ENOMEM;
 }

 static void
···
 	rc = bnx2x_del_all_macs(bp, mac_obj, BNX2X_ETH_MAC, true);
 	if (rc) {
 		BNX2X_ERR("failed to delete eth macs\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		goto out;
 	}

 	/* remove existing uc list macs */
 	rc = bnx2x_del_all_macs(bp, mac_obj, BNX2X_UC_LIST_MAC, true);
 	if (rc) {
 		BNX2X_ERR("failed to delete uc_list macs\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		goto out;
 	}

 	/* configure the new mac to device */
···
 	bnx2x_set_mac_one(bp, (u8 *)&bulletin->mac, mac_obj, true,
 			  BNX2X_ETH_MAC, &ramrod_flags);

+out:
 	bnx2x_unlock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_MAC);
 }
···
 			  &ramrod_flags);
 	if (rc) {
 		BNX2X_ERR("failed to delete vlans\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		goto out;
 	}

 	/* send queue update ramrod to configure default vlan and silent
···
 	rc = bnx2x_config_vlan_mac(bp, &ramrod_param);
 	if (rc) {
 		BNX2X_ERR("failed to configure vlan\n");
-		return -EINVAL;
+		rc = -EINVAL;
+		goto out;
 	}

 	/* configure default vlan to vf queue and set silent
···
 	rc = bnx2x_queue_state_change(bp, &q_params);
 	if (rc) {
 		BNX2X_ERR("Failed to configure default VLAN\n");
-		return rc;
+		goto out;
 	}

 	/* clear the flag indicating that this VF needs its vlan
-	 * (will only be set if the HV configured th Vlan before vf was
-	 * and we were called because the VF came up later
+	 * (will only be set if the HV configured the Vlan before vf was
+	 * up and we were called because the VF came up later
 	 */
+out:
 	vf->cfg_flags &= ~VF_CFG_VLAN;
-
 	bnx2x_unlock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_VLAN);
 	}
-	return 0;
+	return rc;
 }

 /* crc is the first field in the bulletin board. Compute the crc over the
···
 		return err;
 	}

-	if (channel->tx_count) {
+	if (qlcnic_82xx_check(adapter) && channel->tx_count) {
 		err = qlcnic_validate_max_tx_rings(adapter, channel->tx_count);
 		if (err)
 			return err;
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c (+1 -7)
···
 	u8 max_hw = QLCNIC_MAX_TX_RINGS;
 	u32 max_allowed;

-	if (!qlcnic_82xx_check(adapter)) {
-		netdev_err(netdev, "No Multi TX-Q support\n");
-		return -EINVAL;
-	}
-
 	if (!qlcnic_use_msi_x && !qlcnic_use_msi) {
 		netdev_err(netdev, "No Multi TX-Q support in INT-x mode\n");
 		return -EINVAL;
···
 	u8 max_hw = adapter->ahw->max_rx_ques;
 	u32 max_allowed;

-	if (qlcnic_82xx_check(adapter) && !qlcnic_use_msi_x &&
-	    !qlcnic_use_msi) {
+	if (!qlcnic_use_msi_x && !qlcnic_use_msi) {
 		netdev_err(netdev, "No RSS support in INT-x mode\n");
 		return -EINVAL;
 	}
···
 	if (dev->can_dma_sg && !(info->flags & FLAG_SEND_ZLP) &&
 	    !(info->flags & FLAG_MULTI_PACKET)) {
 		dev->padding_pkt = kzalloc(1, GFP_KERNEL);
-		if (!dev->padding_pkt)
+		if (!dev->padding_pkt) {
+			status = -ENOMEM;
 			goto out4;
+		}
 	}

 	status = register_netdev (net);
drivers/net/virtio_net.c (+13 -1)
···
 		return -EINVAL;
 	} else {
 		vi->curr_queue_pairs = queue_pairs;
-		schedule_delayed_work(&vi->refill, 0);
+		/* virtnet_open() will refill when device is going to up. */
+		if (dev->flags & IFF_UP)
+			schedule_delayed_work(&vi->refill, 0);
 	}

 	return 0;
···
 {
 	struct virtnet_info *vi = container_of(nfb, struct virtnet_info, nb);

+	mutex_lock(&vi->config_lock);
+
+	if (!vi->config_enable)
+		goto done;
+
 	switch(action & ~CPU_TASKS_FROZEN) {
 	case CPU_ONLINE:
 	case CPU_DOWN_FAILED:
···
 	default:
 		break;
 	}
+
+done:
+	mutex_unlock(&vi->config_lock);
 	return NOTIFY_OK;
 }
···
 	vi->config_enable = true;
 	mutex_unlock(&vi->config_lock);

+	rtnl_lock();
 	virtnet_set_queues(vi, vi->curr_queue_pairs);
+	rtnl_unlock();

 	return 0;
 }
drivers/net/wan/farsync.c (+1)
···
 	}

 	i = port->index;
+	memset(&sync, 0, sizeof(sync));
 	sync.clock_rate = FST_RDL(card, portConfig[i].lineSpeed);
 	/* Lucky card and linux use same encoding here */
 	sync.clock_type = FST_RDB(card, portConfig[i].internalClock) ==
···
 	struct ath_hw *ah = sc->sc_ah;
 	struct ath_common *common = ath9k_hw_common(ah);
 	unsigned long flags;
+	int i;

 	if (ath_startrecv(sc) != 0) {
 		ath_err(common, "Unable to restart recv logic\n");
···
 	}
 work:
 	ath_restart_work(sc);
+
+	for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++) {
+		if (!ATH_TXQ_SETUP(sc, i))
+			continue;
+
+		spin_lock_bh(&sc->tx.txq[i].axq_lock);
+		ath_txq_schedule(sc, &sc->tx.txq[i]);
+		spin_unlock_bh(&sc->tx.txq[i].axq_lock);
+	}
 	}

 	ieee80211_wake_queues(sc->hw);
···
 static int ath_reset(struct ath_softc *sc)
 {
-	int i, r;
+	int r;

 	ath9k_ps_wakeup(sc);
-
 	r = ath_reset_internal(sc, NULL);
-
-	for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++) {
-		if (!ATH_TXQ_SETUP(sc, i))
-			continue;
-
-		spin_lock_bh(&sc->tx.txq[i].axq_lock);
-		ath_txq_schedule(sc, &sc->tx.txq[i]);
-		spin_unlock_bh(&sc->tx.txq[i].axq_lock);
-	}
-
 	ath9k_ps_restore(sc);

 	return r;
···
 {
 	int ret;

-	WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE,
-		  "%s bad state = %d", __func__, trans->state);
+	if (trans->state != IWL_TRANS_FW_ALIVE) {
+		IWL_ERR(trans, "%s bad state = %d", __func__, trans->state);
+		return -EIO;
+	}

 	if (!(cmd->flags & CMD_ASYNC))
 		lock_map_acquire_read(&trans->sync_cmd_lockdep_map);
drivers/net/wireless/iwlwifi/mvm/power.c (+4 -1)
···
 		if (!mvmvif->queue_params[ac].uapsd)
 			continue;

-		cmd->flags |= cpu_to_le16(POWER_FLAGS_ADVANCE_PM_ENA_MSK);
+		if (mvm->cur_ucode != IWL_UCODE_WOWLAN)
+			cmd->flags |=
+				cpu_to_le16(POWER_FLAGS_ADVANCE_PM_ENA_MSK);
+
 		cmd->uapsd_ac_flags |= BIT(ac);

 		/* QNDP TID - the highest TID with no admission control */
drivers/net/wireless/iwlwifi/mvm/scan.c (+11 -1)
···
 		return false;
 	}

+	/*
+	 * If scan cannot be aborted, it means that we had a
+	 * SCAN_COMPLETE_NOTIFICATION in the pipe and it called
+	 * ieee80211_scan_completed already.
+	 */
 	IWL_DEBUG_SCAN(mvm, "Scan cannot be aborted, exit now: %d\n",
 		       *resp);
 	return true;
···
 					       SCAN_COMPLETE_NOTIFICATION };
 	int ret;

+	if (mvm->scan_status == IWL_MVM_SCAN_NONE)
+		return;
+
 	iwl_init_notification_wait(&mvm->notif_wait, &wait_scan_abort,
 				   scan_abort_notif,
 				   ARRAY_SIZE(scan_abort_notif),
 				   iwl_mvm_scan_abort_notif, NULL);

-	ret = iwl_mvm_send_cmd_pdu(mvm, SCAN_ABORT_CMD, CMD_SYNC, 0, NULL);
+	ret = iwl_mvm_send_cmd_pdu(mvm, SCAN_ABORT_CMD,
+				   CMD_SYNC | CMD_SEND_IN_RFKILL, 0, NULL);
 	if (ret) {
 		IWL_ERR(mvm, "Couldn't send SCAN_ABORT_CMD: %d\n", ret);
+		/* mac80211's state will be cleaned in the fw_restart flow */
 		goto out_remove_notif;
 	}
···
 		 * non-AGG queue.
 		 */
 		iwl_clear_bits_prph(trans, SCD_AGGR_SEL, BIT(txq_id));
+
+		ssn = trans_pcie->txq[txq_id].q.read_ptr;
 	}

 	/* Place first TFD at index corresponding to start sequence number.
drivers/net/wireless/mwifiex/join.c (+8 -2)
···
  */
 int mwifiex_deauthenticate(struct mwifiex_private *priv, u8 *mac)
 {
+	int ret = 0;
+
 	if (!priv->media_connected)
 		return 0;

 	switch (priv->bss_mode) {
 	case NL80211_IFTYPE_STATION:
 	case NL80211_IFTYPE_P2P_CLIENT:
-		return mwifiex_deauthenticate_infra(priv, mac);
+		ret = mwifiex_deauthenticate_infra(priv, mac);
+		if (ret)
+			cfg80211_disconnected(priv->netdev, 0, NULL, 0,
+					      GFP_KERNEL);
+		break;
 	case NL80211_IFTYPE_ADHOC:
 		return mwifiex_send_cmd_sync(priv,
 					     HostCmd_CMD_802_11_AD_HOC_STOP,
···
 		break;
 	}

-	return 0;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(mwifiex_deauthenticate);
···
 	struct device_node *cpun, *cpus;

 	cpus = of_find_node_by_path("/cpus");
-	if (!cpus) {
-		pr_warn("Missing cpus node, bailing out\n");
+	if (!cpus)
 		return NULL;
-	}

 	for_each_child_of_node(cpus, cpun) {
 		if (of_node_cmp(cpun->type, "cpu"))
drivers/of/fdt.c (-12)
···
 #include <linux/string.h>
 #include <linux/errno.h>
 #include <linux/slab.h>
-#include <linux/random.h>

 #include <asm/setup.h>  /* for COMMAND_LINE_SIZE */
 #ifdef CONFIG_PPC
···
 }

 #endif /* CONFIG_OF_EARLY_FLATTREE */
-
-/* Feed entire flattened device tree into the random pool */
-static int __init add_fdt_randomness(void)
-{
-	if (initial_boot_params)
-		add_device_randomness(initial_boot_params,
-				be32_to_cpu(initial_boot_params->totalsize));
-
-	return 0;
-}
-core_initcall(add_fdt_randomness);
drivers/of/of_reserved_mem.c (-173, file removed)
-/*
- * Device tree based initialization code for reserved memory.
- *
- * Copyright (c) 2013 Samsung Electronics Co., Ltd.
- *		http://www.samsung.com
- * Author: Marek Szyprowski <m.szyprowski@samsung.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of the
- * License or (at your optional) any later version of the license.
- */
-
-#include <linux/memblock.h>
-#include <linux/err.h>
-#include <linux/of.h>
-#include <linux/of_fdt.h>
-#include <linux/of_platform.h>
-#include <linux/mm.h>
-#include <linux/sizes.h>
-#include <linux/mm_types.h>
-#include <linux/dma-contiguous.h>
-#include <linux/dma-mapping.h>
-#include <linux/of_reserved_mem.h>
-
-#define MAX_RESERVED_REGIONS	16
-struct reserved_mem {
-	phys_addr_t	base;
-	unsigned long	size;
-	struct cma	*cma;
-	char		name[32];
-};
-static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
-static int reserved_mem_count;
-
-static int __init fdt_scan_reserved_mem(unsigned long node, const char *uname,
-					int depth, void *data)
-{
-	struct reserved_mem *rmem = &reserved_mem[reserved_mem_count];
-	phys_addr_t base, size;
-	int is_cma, is_reserved;
-	unsigned long len;
-	const char *status;
-	__be32 *prop;
-
-	is_cma = IS_ENABLED(CONFIG_DMA_CMA) &&
-	       of_flat_dt_is_compatible(node, "linux,contiguous-memory-region");
-	is_reserved = of_flat_dt_is_compatible(node, "reserved-memory-region");
-
-	if (!is_reserved && !is_cma) {
-		/* ignore node and scan next one */
-		return 0;
-	}
-
-	status = of_get_flat_dt_prop(node, "status", &len);
-	if (status && strcmp(status, "okay") != 0) {
-		/* ignore disabled node nad scan next one */
-		return 0;
-	}
-
-	prop = of_get_flat_dt_prop(node, "reg", &len);
-	if (!prop || (len < (dt_root_size_cells + dt_root_addr_cells) *
-			     sizeof(__be32))) {
-		pr_err("Reserved mem: node %s, incorrect \"reg\" property\n",
-		       uname);
-		/* ignore node and scan next one */
-		return 0;
-	}
-	base = dt_mem_next_cell(dt_root_addr_cells, &prop);
-	size = dt_mem_next_cell(dt_root_size_cells, &prop);
-
-	if (!size) {
-		/* ignore node and scan next one */
-		return 0;
-	}
-
-	pr_info("Reserved mem: found %s, memory base %lx, size %ld MiB\n",
-		uname, (unsigned long)base, (unsigned long)size / SZ_1M);
-
-	if (reserved_mem_count == ARRAY_SIZE(reserved_mem))
-		return -ENOSPC;
-
-	rmem->base = base;
-	rmem->size = size;
-	strlcpy(rmem->name, uname, sizeof(rmem->name));
-
-	if (is_cma) {
-		struct cma *cma;
-		if (dma_contiguous_reserve_area(size, base, 0, &cma) == 0) {
-			rmem->cma = cma;
-			reserved_mem_count++;
-			if (of_get_flat_dt_prop(node,
-						"linux,default-contiguous-region",
-						NULL))
-				dma_contiguous_set_default(cma);
-		}
-	} else if (is_reserved) {
-		if (memblock_remove(base, size) == 0)
-			reserved_mem_count++;
-		else
-			pr_err("Failed to reserve memory for %s\n", uname);
-	}
-
-	return 0;
-}
-
-static struct reserved_mem *get_dma_memory_region(struct device *dev)
-{
-	struct device_node *node;
-	const char *name;
-	int i;
-
-	node = of_parse_phandle(dev->of_node, "memory-region", 0);
-	if (!node)
-		return NULL;
-
-	name = kbasename(node->full_name);
-	for (i = 0; i < reserved_mem_count; i++)
-		if (strcmp(name, reserved_mem[i].name) == 0)
-			return &reserved_mem[i];
-	return NULL;
-}
-
-/**
- * of_reserved_mem_device_init() - assign reserved memory region to given device
- *
- * This function assign memory region pointed by "memory-region" device tree
- * property to the given device.
- */
-void of_reserved_mem_device_init(struct device *dev)
-{
-	struct reserved_mem *region = get_dma_memory_region(dev);
-	if (!region)
-		return;
-
-	if (region->cma) {
-		dev_set_cma_area(dev, region->cma);
-		pr_info("Assigned CMA %s to %s device\n", region->name,
-			dev_name(dev));
-	} else {
-		if (dma_declare_coherent_memory(dev, region->base, region->base,
-		    region->size, DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE) != 0)
-			pr_info("Declared reserved memory %s to %s device\n",
-				region->name, dev_name(dev));
-	}
-}
-
-/**
- * of_reserved_mem_device_release() - release reserved memory device structures
- *
- * This function releases structures allocated for memory region handling for
- * the given device.
- */
-void of_reserved_mem_device_release(struct device *dev)
-{
-	struct reserved_mem *region = get_dma_memory_region(dev);
-	if (!region && !region->cma)
-		dma_release_declared_memory(dev);
-}
-
-/**
- * early_init_dt_scan_reserved_mem() - create reserved memory regions
- *
- * This function grabs memory from early allocator for device exclusive use
- * defined in device tree structures. It should be called by arch specific code
- * once the early allocator (memblock) has been activated and all other
- * subsystems have already allocated/reserved memory.
- */
-void __init early_init_dt_scan_reserved_mem(void)
-{
-	of_scan_flat_dt_by_path("/memory/reserved-memory",
-			fdt_scan_reserved_mem, NULL);
-}
drivers/of/platform.c | -4

···
 #include <linux/of_device.h>
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
-#include <linux/of_reserved_mem.h>
 #include <linux/platform_device.h>

 const struct of_device_id of_default_bus_match_table[] = {
···
 	dev->dev.bus = &platform_bus_type;
 	dev->dev.platform_data = platform_data;

-	of_reserved_mem_device_init(&dev->dev);
-
 	/* We do not fill the DMA ops for platform devices by default.
 	 * This is currently the responsibility of the platform code
 	 * to do such, possibly using a device notifier
···
 	if (of_device_add(dev) != 0) {
 		platform_device_put(dev);
-		of_reserved_mem_device_release(&dev->dev);
 		return NULL;
 	}
drivers/pci/hotplug/acpiphp_glue.c | +5 -3

···
 	/*
 	 * This bridge should have been registered as a hotplug function
-	 * under its parent, so the context has to be there.  If not, we
-	 * are in deep goo.
+	 * under its parent, so the context should be there, unless the
+	 * parent is going to be handled by pciehp, in which case this
+	 * bridge is not interesting to us either.
 	 */
 	mutex_lock(&acpiphp_context_lock);
 	context = acpiphp_get_context(handle);
-	if (WARN_ON(!context)) {
+	if (!context) {
 		mutex_unlock(&acpiphp_context_lock);
 		put_device(&bus->dev);
+		pci_dev_put(bridge->pci_dev);
 		kfree(bridge);
 		return;
 	}
drivers/pinctrl/pinconf.c | +2 -2

···
  * <devicename> <state> <pinname> are values that should match the pinctrl-maps
  * <newvalue> reflects the new config and is driver dependant
  */
-static int pinconf_dbg_config_write(struct file *file,
+static ssize_t pinconf_dbg_config_write(struct file *file,
 	const char __user *user_buf, size_t count, loff_t *ppos)
 {
 	struct pinctrl_maps *maps_node;
···
 	int i;

 	/* Get userspace string and assure termination */
-	buf_size = min(count, (size_t)(sizeof(buf)-1));
+	buf_size = min(count, sizeof(buf) - 1);
 	if (copy_from_user(buf, user_buf, buf_size))
 		return -EFAULT;
 	buf[buf_size] = 0;
···
 		param = pinconf_to_config_param(configs[i]);
 		param_val = pinconf_to_config_argument(configs[i]);

+		if (param == PIN_CONFIG_BIAS_PULL_PIN_DEFAULT)
+			continue;
+
 		switch (param) {
-		case PIN_CONFIG_BIAS_PULL_PIN_DEFAULT:
-			return 0;
 		case PIN_CONFIG_BIAS_DISABLE:
 		case PIN_CONFIG_BIAS_PULL_UP:
 		case PIN_CONFIG_BIAS_PULL_DOWN:
drivers/pinctrl/pinctrl-tegra114.c | +2 -3

···
  *
  * Copyright (c) 2012-2013, NVIDIA CORPORATION.  All rights reserved.
  *
- * Arthur: Pritesh Raithatha <praithatha@nvidia.com>
+ * Author: Pritesh Raithatha <praithatha@nvidia.com>
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
···
 };
 module_platform_driver(tegra114_pinctrl_driver);

-MODULE_ALIAS("platform:tegra114-pinctrl");
 MODULE_AUTHOR("Pritesh Raithatha <praithatha@nvidia.com>");
-MODULE_DESCRIPTION("NVIDIA Tegra114 pincontrol driver");
+MODULE_DESCRIPTION("NVIDIA Tegra114 pinctrl driver");
 MODULE_LICENSE("GPL v2");
drivers/platform/x86/Kconfig | +1

···
 	depends on BACKLIGHT_CLASS_DEVICE
 	depends on RFKILL || RFKILL = n
 	depends on HOTPLUG_PCI
+	depends on ACPI_VIDEO || ACPI_VIDEO = n
 	select INPUT_SPARSEKMAP
 	select LEDS_CLASS
 	select NEW_LEDS
drivers/platform/x86/sony-laptop.c | +9 -17

···
 		 "default is -1 (automatic)");
 #endif

-static int kbd_backlight = 1;
+static int kbd_backlight = -1;
 module_param(kbd_backlight, int, 0444);
 MODULE_PARM_DESC(kbd_backlight,
 		 "set this to 0 to disable keyboard backlight, "
-		 "1 to enable it (default: 0)");
+		 "1 to enable it (default: no change from current value)");

-static int kbd_backlight_timeout;	/* = 0 */
+static int kbd_backlight_timeout = -1;
 module_param(kbd_backlight_timeout, int, 0444);
 MODULE_PARM_DESC(kbd_backlight_timeout,
-		 "set this to 0 to set the default 10 seconds timeout, "
-		 "1 for 30 seconds, 2 for 60 seconds and 3 to disable timeout "
-		 "(default: 0)");
+		 "meaningful values vary from 0 to 3 and their meaning depends "
+		 "on the model (default: no change from current value)");

 #ifdef CONFIG_PM_SLEEP
 static void sony_nc_kbd_backlight_resume(void);
···
 	if (!kbdbl_ctl)
 		return -ENOMEM;

+	kbdbl_ctl->mode = kbd_backlight;
+	kbdbl_ctl->timeout = kbd_backlight_timeout;
 	kbdbl_ctl->handle = handle;
 	if (handle == 0x0137)
 		kbdbl_ctl->base = 0x0C00;
···
 	if (ret)
 		goto outmode;

-	__sony_nc_kbd_backlight_mode_set(kbd_backlight);
-	__sony_nc_kbd_backlight_timeout_set(kbd_backlight_timeout);
+	__sony_nc_kbd_backlight_mode_set(kbdbl_ctl->mode);
+	__sony_nc_kbd_backlight_timeout_set(kbdbl_ctl->timeout);

 	return 0;
···
 static void sony_nc_kbd_backlight_cleanup(struct platform_device *pd)
 {
 	if (kbdbl_ctl) {
-		int result;
-
 		device_remove_file(&pd->dev, &kbdbl_ctl->mode_attr);
 		device_remove_file(&pd->dev, &kbdbl_ctl->timeout_attr);
-
-		/* restore the default hw behaviour */
-		sony_call_snc_handle(kbdbl_ctl->handle,
-				kbdbl_ctl->base | 0x10000, &result);
-		sony_call_snc_handle(kbdbl_ctl->handle,
-				kbdbl_ctl->base + 0x200, &result);
-
 		kfree(kbdbl_ctl);
 		kbdbl_ctl = NULL;
 	}
drivers/s390/block/dasd_eckd.c | +71 -27

···
 	int intensity = 0;
 	int r0_perm;
 	int nr_tracks;
+	int use_prefix;

 	startdev = dasd_alias_get_start_dev(base);
 	if (!startdev)
···
 		intensity = fdata->intensity;
 	}

+	use_prefix = base_priv->features.feature[8] & 0x01;
+
 	switch (intensity) {
 	case 0x00:	/* Normal format */
 	case 0x08:	/* Normal format, use cdl. */
 		cplength = 2 + (rpt*nr_tracks);
-		datasize = sizeof(struct PFX_eckd_data) +
-			sizeof(struct LO_eckd_data) +
-			rpt * nr_tracks * sizeof(struct eckd_count);
+		if (use_prefix)
+			datasize = sizeof(struct PFX_eckd_data) +
+				sizeof(struct LO_eckd_data) +
+				rpt * nr_tracks * sizeof(struct eckd_count);
+		else
+			datasize = sizeof(struct DE_eckd_data) +
+				sizeof(struct LO_eckd_data) +
+				rpt * nr_tracks * sizeof(struct eckd_count);
 		break;
 	case 0x01:	/* Write record zero and format track. */
 	case 0x09:	/* Write record zero and format track, use cdl. */
 		cplength = 2 + rpt * nr_tracks;
-		datasize = sizeof(struct PFX_eckd_data) +
-			sizeof(struct LO_eckd_data) +
-			sizeof(struct eckd_count) +
-			rpt * nr_tracks * sizeof(struct eckd_count);
+		if (use_prefix)
+			datasize = sizeof(struct PFX_eckd_data) +
+				sizeof(struct LO_eckd_data) +
+				sizeof(struct eckd_count) +
+				rpt * nr_tracks * sizeof(struct eckd_count);
+		else
+			datasize = sizeof(struct DE_eckd_data) +
+				sizeof(struct LO_eckd_data) +
+				sizeof(struct eckd_count) +
+				rpt * nr_tracks * sizeof(struct eckd_count);
 		break;
 	case 0x04:	/* Invalidate track. */
 	case 0x0c:	/* Invalidate track, use cdl. */
 		cplength = 3;
-		datasize = sizeof(struct PFX_eckd_data) +
-			sizeof(struct LO_eckd_data) +
-			sizeof(struct eckd_count);
+		if (use_prefix)
+			datasize = sizeof(struct PFX_eckd_data) +
+				sizeof(struct LO_eckd_data) +
+				sizeof(struct eckd_count);
+		else
+			datasize = sizeof(struct DE_eckd_data) +
+				sizeof(struct LO_eckd_data) +
+				sizeof(struct eckd_count);
 		break;
 	default:
 		dev_warn(&startdev->cdev->dev,
···
 	switch (intensity & ~0x08) {
 	case 0x00: /* Normal format. */
-		prefix(ccw++, (struct PFX_eckd_data *) data,
-		       fdata->start_unit, fdata->stop_unit,
-		       DASD_ECKD_CCW_WRITE_CKD, base, startdev);
-		/* grant subsystem permission to format R0 */
-		if (r0_perm)
-			((struct PFX_eckd_data *)data)
-				->define_extent.ga_extended |= 0x04;
-		data += sizeof(struct PFX_eckd_data);
+		if (use_prefix) {
+			prefix(ccw++, (struct PFX_eckd_data *) data,
+			       fdata->start_unit, fdata->stop_unit,
+			       DASD_ECKD_CCW_WRITE_CKD, base, startdev);
+			/* grant subsystem permission to format R0 */
+			if (r0_perm)
+				((struct PFX_eckd_data *)data)
+					->define_extent.ga_extended |= 0x04;
+			data += sizeof(struct PFX_eckd_data);
+		} else {
+			define_extent(ccw++, (struct DE_eckd_data *) data,
+				      fdata->start_unit, fdata->stop_unit,
+				      DASD_ECKD_CCW_WRITE_CKD, startdev);
+			/* grant subsystem permission to format R0 */
+			if (r0_perm)
+				((struct DE_eckd_data *) data)
+					->ga_extended |= 0x04;
+			data += sizeof(struct DE_eckd_data);
+		}
 		ccw[-1].flags |= CCW_FLAG_CC;
 		locate_record(ccw++, (struct LO_eckd_data *) data,
 			      fdata->start_unit, 0, rpt*nr_tracks,
···
 		data += sizeof(struct LO_eckd_data);
 		break;
 	case 0x01: /* Write record zero + format track. */
-		prefix(ccw++, (struct PFX_eckd_data *) data,
-		       fdata->start_unit, fdata->stop_unit,
-		       DASD_ECKD_CCW_WRITE_RECORD_ZERO,
-		       base, startdev);
-		data += sizeof(struct PFX_eckd_data);
+		if (use_prefix) {
+			prefix(ccw++, (struct PFX_eckd_data *) data,
+			       fdata->start_unit, fdata->stop_unit,
+			       DASD_ECKD_CCW_WRITE_RECORD_ZERO,
+			       base, startdev);
+			data += sizeof(struct PFX_eckd_data);
+		} else {
+			define_extent(ccw++, (struct DE_eckd_data *) data,
+				      fdata->start_unit, fdata->stop_unit,
+				      DASD_ECKD_CCW_WRITE_RECORD_ZERO, startdev);
+			data += sizeof(struct DE_eckd_data);
+		}
 		ccw[-1].flags |= CCW_FLAG_CC;
 		locate_record(ccw++, (struct LO_eckd_data *) data,
 			      fdata->start_unit, 0, rpt * nr_tracks + 1,
···
 		data += sizeof(struct LO_eckd_data);
 		break;
 	case 0x04: /* Invalidate track. */
-		prefix(ccw++, (struct PFX_eckd_data *) data,
-		       fdata->start_unit, fdata->stop_unit,
-		       DASD_ECKD_CCW_WRITE_CKD, base, startdev);
-		data += sizeof(struct PFX_eckd_data);
+		if (use_prefix) {
+			prefix(ccw++, (struct PFX_eckd_data *) data,
+			       fdata->start_unit, fdata->stop_unit,
+			       DASD_ECKD_CCW_WRITE_CKD, base, startdev);
+			data += sizeof(struct PFX_eckd_data);
+		} else {
+			define_extent(ccw++, (struct DE_eckd_data *) data,
+				      fdata->start_unit, fdata->stop_unit,
+				      DASD_ECKD_CCW_WRITE_CKD, startdev);
+			data += sizeof(struct DE_eckd_data);
+		}
 		ccw[-1].flags |= CCW_FLAG_CC;
 		locate_record(ccw++, (struct LO_eckd_data *) data,
 			      fdata->start_unit, 0, 1,
drivers/s390/char/sclp.c | +2 -2

···
 	timeout = 0;
 	if (timer_pending(&sclp_request_timer)) {
 		/* Get timeout TOD value */
-		timeout = get_tod_clock() +
+		timeout = get_tod_clock_fast() +
 			  sclp_tod_from_jiffies(sclp_request_timer.expires -
 						jiffies);
 	}
···
 	while (sclp_running_state != sclp_running_state_idle) {
 		/* Check for expired request timer */
 		if (timer_pending(&sclp_request_timer) &&
-		    get_tod_clock() > timeout &&
+		    get_tod_clock_fast() > timeout &&
 		    del_timer(&sclp_request_timer))
 			sclp_request_timer.function(sclp_request_timer.data);
 		cpu_relax();
drivers/s390/char/sclp_cmd.c | +5 -3

···
 	if (sccb->header.response_code != 0x20)
 		return 0;
-	if (sccb->sclp_send_mask & (EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK))
-		return 1;
-	return 0;
+	if (!(sccb->sclp_send_mask & (EVTYP_OPCMD_MASK | EVTYP_PMSGCMD_MASK)))
+		return 0;
+	if (!(sccb->sclp_receive_mask & (EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK)))
+		return 0;
+	return 1;
 }

 bool __init sclp_has_vt220(void)
drivers/s390/char/tty3270.c | +1 -1

···
 	struct winsize ws;

 	screen = tty3270_alloc_screen(tp->n_rows, tp->n_cols);
-	if (!screen)
+	if (IS_ERR(screen))
 		return;
 	/* Switch to new output size */
 	spin_lock_bh(&tp->view.lock);
drivers/s390/char/vmlogrdr.c | +1 -1

···
 	int ret;

 	dev_num = iminor(inode);
-	if (dev_num > MAXMINOR)
+	if (dev_num >= MAXMINOR)
 		return -ENODEV;
 	logptr = &sys_ser[dev_num];
drivers/s390/cio/cio.c | +2 -2

···
 		atomic_inc(&chpid_reset_count);
 	}
 	/* Wait for machine check for all channel paths. */
-	timeout = get_tod_clock() + (RCHP_TIMEOUT << 12);
+	timeout = get_tod_clock_fast() + (RCHP_TIMEOUT << 12);
 	while (atomic_read(&chpid_reset_count) != 0) {
-		if (get_tod_clock() > timeout)
+		if (get_tod_clock_fast() > timeout)
 			break;
 		cpu_relax();
 	}
drivers/s390/cio/qdio_main.c | +5 -5

···
 		retries++;

 		if (!start_time) {
-			start_time = get_tod_clock();
+			start_time = get_tod_clock_fast();
 			goto again;
 		}
-		if ((get_tod_clock() - start_time) < QDIO_BUSY_BIT_PATIENCE)
+		if (get_tod_clock_fast() - start_time < QDIO_BUSY_BIT_PATIENCE)
 			goto again;
 	}
 	if (retries) {
···
 	int count, stop;
 	unsigned char state = 0;

-	q->timestamp = get_tod_clock();
+	q->timestamp = get_tod_clock_fast();

 	/*
 	 * Don't check 128 buffers, as otherwise qdio_inbound_q_moved
···
 	 * At this point we know, that inbound first_to_check
 	 * has (probably) not moved (see qdio_inbound_processing).
 	 */
-	if (get_tod_clock() > q->u.in.timestamp + QDIO_INPUT_THRESHOLD) {
+	if (get_tod_clock_fast() > q->u.in.timestamp + QDIO_INPUT_THRESHOLD) {
 		DBF_DEV_EVENT(DBF_INFO, q->irq_ptr, "in done:%02x",
 			      q->first_to_check);
 		return 1;
···
 	int count, stop;
 	unsigned char state = 0;

-	q->timestamp = get_tod_clock();
+	q->timestamp = get_tod_clock_fast();

 	if (need_siga_sync(q))
 		if (((queue_type(q) != QDIO_IQDIO_QFMT) &&
drivers/spi/spi-atmel.c | +2 -1

···
 	/* Initialize the hardware */
 	ret = clk_prepare_enable(clk);
 	if (ret)
-		goto out_unmap_regs;
+		goto out_free_irq;
 	spi_writel(as, CR, SPI_BIT(SWRST));
 	spi_writel(as, CR, SPI_BIT(SWRST)); /* AT91SAM9263 Rev B workaround */
 	if (as->caps.has_wdrbt) {
···
 	spi_writel(as, CR, SPI_BIT(SWRST));
 	spi_writel(as, CR, SPI_BIT(SWRST)); /* AT91SAM9263 Rev B workaround */
 	clk_disable_unprepare(clk);
+out_free_irq:
 	free_irq(irq, master);
 out_unmap_regs:
 	iounmap(as->regs);
···
 	master->bus_num = bus_num;

 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (!res) {
-		dev_err(&pdev->dev, "can't get platform resource\n");
-		ret = -EINVAL;
-		goto out_master_put;
-	}
-
 	dspi->base = devm_ioremap_resource(&pdev->dev, res);
-	if (!dspi->base) {
-		ret = -EINVAL;
+	if (IS_ERR(dspi->base)) {
+		ret = PTR_ERR(dspi->base);
 		goto out_master_put;
 	}
drivers/spi/spi-mpc512x-psc.c | +3 -1

···
 	psc_num = master->bus_num;
 	snprintf(clk_name, sizeof(clk_name), "psc%d_mclk", psc_num);
 	clk = devm_clk_get(dev, clk_name);
-	if (IS_ERR(clk))
+	if (IS_ERR(clk)) {
+		ret = PTR_ERR(clk);
 		goto free_irq;
+	}
 	ret = clk_prepare_enable(clk);
 	if (ret)
 		goto free_irq;
drivers/spi/spi-pxa2xx.c | +10 -1

···
 	if (pm_runtime_suspended(&drv_data->pdev->dev))
 		return IRQ_NONE;

-	sccr1_reg = read_SSCR1(reg);
+	/*
+	 * If the device is not yet in RPM suspended state and we get an
+	 * interrupt that is meant for another device, check if status bits
+	 * are all set to one. That means that the device is already
+	 * powered off.
+	 */
 	status = read_SSSR(reg);
+	if (status == ~0)
+		return IRQ_NONE;
+
+	sccr1_reg = read_SSCR1(reg);

 	/* Ignore possible writes if we don't need to write */
 	if (!(sccr1_reg & SSCR1_TIE))
···
 {
 	struct se_device *dev = cmd->se_dev;

-	cmd->se_cmd_flags |= SCF_COMPARE_AND_WRITE_POST;
+	/*
+	 * Only set SCF_COMPARE_AND_WRITE_POST to force a response fall-through
+	 * within target_complete_ok_work() if the command was successfully
+	 * sent to the backend driver.
+	 */
+	spin_lock_irq(&cmd->t_state_lock);
+	if ((cmd->transport_state & CMD_T_SENT) && !cmd->scsi_status)
+		cmd->se_cmd_flags |= SCF_COMPARE_AND_WRITE_POST;
+	spin_unlock_irq(&cmd->t_state_lock);
+
 	/*
 	 * Unlock ->caw_sem originally obtained during sbc_compare_and_write()
 	 * before the original READ I/O submission.
···
 {
 	struct se_device *dev = cmd->se_dev;
 	struct scatterlist *write_sg = NULL, *sg;
-	unsigned char *buf, *addr;
+	unsigned char *buf = NULL, *addr;
 	struct sg_mapping_iter m;
 	unsigned int offset = 0, len;
 	unsigned int nlbas = cmd->t_task_nolb;
···
 	 */
 	if (!cmd->t_data_sg || !cmd->t_bidi_data_sg)
 		return TCM_NO_SENSE;
+	/*
+	 * Immediately exit + release dev->caw_sem if command has already
+	 * been failed with a non-zero SCSI status.
+	 */
+	if (cmd->scsi_status) {
+		pr_err("compare_and_write_callback: non zero scsi_status:"
+			" 0x%02x\n", cmd->scsi_status);
+		goto out;
+	}

 	buf = kzalloc(cmd->data_length, GFP_KERNEL);
 	if (!buf) {
···
 		cmd->transport_complete_callback = NULL;
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	}
+	/*
+	 * Reset cmd->data_length to individual block_size in order to not
+	 * confuse backend drivers that depend on this value matching the
+	 * size of the I/O being submitted.
+	 */
+	cmd->data_length = cmd->t_task_nolb * dev->dev_attrib.block_size;

 	ret = cmd->execute_rw(cmd, cmd->t_bidi_data_sg, cmd->t_bidi_data_nents,
 			      DMA_FROM_DEVICE);
drivers/target/target_core_transport.c | +15 -5

···
 {
 	int rc;

-	se_sess->sess_cmd_map = kzalloc(tag_num * tag_size, GFP_KERNEL);
+	se_sess->sess_cmd_map = kzalloc(tag_num * tag_size,
+					GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
 	if (!se_sess->sess_cmd_map) {
-		pr_err("Unable to allocate se_sess->sess_cmd_map\n");
-		return -ENOMEM;
+		se_sess->sess_cmd_map = vzalloc(tag_num * tag_size);
+		if (!se_sess->sess_cmd_map) {
+			pr_err("Unable to allocate se_sess->sess_cmd_map\n");
+			return -ENOMEM;
+		}
 	}

 	rc = percpu_ida_init(&se_sess->sess_tag_pool, tag_num);
 	if (rc < 0) {
 		pr_err("Unable to init se_sess->sess_tag_pool,"
 			" tag_num: %u\n", tag_num);
-		kfree(se_sess->sess_cmd_map);
+		if (is_vmalloc_addr(se_sess->sess_cmd_map))
+			vfree(se_sess->sess_cmd_map);
+		else
+			kfree(se_sess->sess_cmd_map);
 		se_sess->sess_cmd_map = NULL;
 		return -ENOMEM;
 	}
···
 {
 	if (se_sess->sess_cmd_map) {
 		percpu_ida_destroy(&se_sess->sess_tag_pool);
-		kfree(se_sess->sess_cmd_map);
+		if (is_vmalloc_addr(se_sess->sess_cmd_map))
+			vfree(se_sess->sess_cmd_map);
+		else
+			kfree(se_sess->sess_cmd_map);
 	}
 	kmem_cache_free(se_sess_cache, se_sess);
 }
···
 	}

 	th_zone = conf->pzone_data;
-	if (th_zone->therm_dev)
-		return;

 	if (th_zone->bind == false) {
 		for (i = 0; i < th_zone->cool_dev_size; i++) {
drivers/thermal/samsung/exynos_tmu.c | +8 -4

···
 	con = readl(data->base + reg->tmu_ctrl);

+	if (pdata->test_mux)
+		con |= (pdata->test_mux << reg->test_mux_addr_shift);
+
 	if (pdata->reference_voltage) {
 		con &= ~(reg->buf_vref_sel_mask << reg->buf_vref_sel_shift);
 		con |= pdata->reference_voltage << reg->buf_vref_sel_shift;
···
 	},
 	{
 		.compatible = "samsung,exynos4412-tmu",
-		.data = (void *)EXYNOS5250_TMU_DRV_DATA,
+		.data = (void *)EXYNOS4412_TMU_DRV_DATA,
 	},
 	{
 		.compatible = "samsung,exynos5250-tmu",
···
 	if (ret)
 		return ret;

-	if (pdata->type == SOC_ARCH_EXYNOS ||
-	    pdata->type == SOC_ARCH_EXYNOS4210 ||
-	    pdata->type == SOC_ARCH_EXYNOS5440)
+	if (pdata->type == SOC_ARCH_EXYNOS4210 ||
+	    pdata->type == SOC_ARCH_EXYNOS4412 ||
+	    pdata->type == SOC_ARCH_EXYNOS5250 ||
+	    pdata->type == SOC_ARCH_EXYNOS5440)
 		data->soc = pdata->type;
 	else {
 		ret = -EINVAL;
drivers/thermal/samsung/exynos_tmu.h | +6 -1

···
 enum soc_type {
 	SOC_ARCH_EXYNOS4210 = 1,
-	SOC_ARCH_EXYNOS,
+	SOC_ARCH_EXYNOS4412,
+	SOC_ARCH_EXYNOS5250,
 	SOC_ARCH_EXYNOS5440,
 };
···
  * @triminfo_reload_shift: shift of triminfo reload enable bit in triminfo_ctrl
 	reg.
  * @tmu_ctrl: TMU main controller register.
+ * @test_mux_addr_shift: shift bits of test mux address.
  * @buf_vref_sel_shift: shift bits of reference voltage in tmu_ctrl register.
  * @buf_vref_sel_mask: mask bits of reference voltage in tmu_ctrl register.
  * @therm_trip_mode_shift: shift bits of tripping mode in tmu_ctrl register.
···
 	u32 triminfo_reload_shift;

 	u32 tmu_ctrl;
+	u32 test_mux_addr_shift;
 	u32 buf_vref_sel_shift;
 	u32 buf_vref_sel_mask;
 	u32 therm_trip_mode_shift;
···
  * @first_point_trim: temp value of the first point trimming
  * @second_point_trim: temp value of the second point trimming
  * @default_temp_offset: default temperature offset in case of no trimming
+ * @test_mux; information if SoC supports test MUX
  * @cal_type: calibration type for temperature
  * @cal_mode: calibration mode for temperature
  * @freq_clip_table: Table representing frequency reduction percentage.
···
 	u8 first_point_trim;
 	u8 second_point_trim;
 	u8 default_temp_offset;
+	u8 test_mux;

 	enum calibration_type cal_type;
 	enum calibration_mode cal_mode;
···
 	if (!mmres || !irqres)
 		return -ENODEV;

-	if (np)
+	if (np) {
 		port = of_alias_get_id(np, "serial");
 		if (port >= VT8500_MAX_PORTS)
 			port = -1;
-	else
+	} else {
 		port = -1;
+	}

 	if (port < 0) {
 		/* calculate the port id */
···
 /*
  * 	probe - binds to the platform device
  */
-static int __init pxa25x_udc_probe(struct platform_device *pdev)
+static int pxa25x_udc_probe(struct platform_device *pdev)
 {
 	struct pxa25x_udc *dev = &memory;
 	int retval, irq;
···
 	pullup_off();
 }

-static int __exit pxa25x_udc_remove(struct platform_device *pdev)
+static int pxa25x_udc_remove(struct platform_device *pdev)
 {
 	struct pxa25x_udc *dev = platform_get_drvdata(pdev);
···
 static struct platform_driver udc_driver = {
 	.shutdown	= pxa25x_udc_shutdown,
-	.remove		= __exit_p(pxa25x_udc_remove),
+	.probe		= pxa25x_udc_probe,
+	.remove		= pxa25x_udc_remove,
 	.suspend	= pxa25x_udc_suspend,
 	.resume		= pxa25x_udc_resume,
 	.driver		= {
···
 	},
 };

-module_platform_driver_probe(udc_driver, pxa25x_udc_probe);
+module_platform_driver(udc_driver);

 MODULE_DESCRIPTION(DRIVER_DESC);
 MODULE_AUTHOR("Frank Becker, Robert Schwebel, David Brownell");
drivers/usb/gadget/s3c-hsotg.c | +1 -1

···
 	 * FIFO, requests of >512 cause the endpoint to get stuck with a
 	 * fragment of the end of the transfer in it.
 	 */
-	if (can_write > 512)
+	if (can_write > 512 && !periodic)
 		can_write = 512;

 	/*
···
 		t1 = xhci_port_state_to_neutral(t1);
 		if (t1 != t2)
 			xhci_writel(xhci, t2, port_array[port_index]);
-
-		if (hcd->speed != HCD_USB3) {
-			/* enable remote wake up for USB 2.0 */
-			__le32 __iomem *addr;
-			u32 tmp;
-
-			/* Get the port power control register address. */
-			addr = port_array[port_index] + PORTPMSC;
-			tmp = xhci_readl(xhci, addr);
-			tmp |= PORT_RWE;
-			xhci_writel(xhci, tmp, addr);
-		}
 	}
 	hcd->state = HC_STATE_SUSPENDED;
 	bus_state->next_statechange = jiffies + msecs_to_jiffies(10);
···
 			xhci_ring_device(xhci, slot_id);
 		} else
 			xhci_writel(xhci, temp, port_array[port_index]);
-
-		if (hcd->speed != HCD_USB3) {
-			/* disable remote wake up for USB 2.0 */
-			__le32 __iomem *addr;
-			u32 tmp;
-
-			/* Add one to the port status register address to get
-			 * the port power control register address.
-			 */
-			addr = port_array[port_index] + PORTPMSC;
-			tmp = xhci_readl(xhci, addr);
-			tmp &= ~PORT_RWE;
-			xhci_writel(xhci, tmp, addr);
-		}
 	}

 	(void) xhci_readl(xhci, &xhci->op_regs->command);
drivers/usb/host/xhci-pci.c | +25

···
 #define PCI_VENDOR_ID_ETRON		0x1b6f
 #define PCI_DEVICE_ID_ASROCK_P67	0x7023

+#define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI	0x8c31
+#define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI	0x9c31
+
 static const char hcd_name[] = "xhci_hcd";

 /* called after powerup, by probe or system-pm "wakeup" */
···
 		xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
 				"QUIRK: Fresco Logic xHC needs configure"
 				" endpoint cmd after reset endpoint");
+	}
+	if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK &&
+			pdev->revision == 0x4) {
+		xhci->quirks |= XHCI_SLOW_SUSPEND;
+		xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
+				"QUIRK: Fresco Logic xHC revision %u"
+				"must be suspended extra slowly",
+				pdev->revision);
 	}
 	/* Fresco Logic confirms: all revisions of this chip do not
 	 * support MSI, even though some of them claim to in their PCI
···
 		 */
 		xhci->quirks |= XHCI_SPURIOUS_REBOOT;
 		xhci->quirks |= XHCI_AVOID_BEI;
+	}
+	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+		(pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI ||
+		 pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI)) {
+		/* Workaround for occasional spurious wakeups from S5 (or
+		 * any other sleep) on Haswell machines with LPT and LPT-LP
+		 * with the new Intel BIOS
+		 */
+		xhci->quirks |= XHCI_SPURIOUS_WAKEUP;
 	}
 	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
 			pdev->device == PCI_DEVICE_ID_ASROCK_P67) {
···
 		usb_put_hcd(xhci->shared_hcd);
 	}
 	usb_hcd_pci_remove(dev);
+
+	/* Workaround for spurious wakeups at shutdown with HSW */
+	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+		pci_set_power_state(dev, PCI_D3hot);
+
 	kfree(xhci);
 }
drivers/usb/host/xhci.c | +13 -1

···
 	spin_lock_irq(&xhci->lock);
 	xhci_halt(xhci);
+	/* Workaround for spurious wakeups at shutdown with HSW */
+	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+		xhci_reset(xhci);
 	spin_unlock_irq(&xhci->lock);

 	xhci_cleanup_msix(xhci);
···
 	xhci_dbg_trace(xhci, trace_xhci_dbg_init,
 			"xhci_shutdown completed - status = %x",
 			xhci_readl(xhci, &xhci->op_regs->status));
+
+	/* Yet another workaround for spurious wakeups at shutdown with HSW */
+	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+		pci_set_power_state(to_pci_dev(hcd->self.controller), PCI_D3hot);
 }

 #ifdef CONFIG_PM
···
 int xhci_suspend(struct xhci_hcd *xhci)
 {
 	int			rc = 0;
+	unsigned int		delay = XHCI_MAX_HALT_USEC;
 	struct usb_hcd		*hcd = xhci_to_hcd(xhci);
 	u32			command;
···
 	command = xhci_readl(xhci, &xhci->op_regs->command);
 	command &= ~CMD_RUN;
 	xhci_writel(xhci, command, &xhci->op_regs->command);
+
+	/* Some chips from Fresco Logic need an extraordinary delay */
+	delay *= (xhci->quirks & XHCI_SLOW_SUSPEND) ? 10 : 1;
+
 	if (xhci_handshake(xhci, &xhci->op_regs->status,
-		      STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC)) {
+		      STS_HALT, STS_HALT, delay)) {
 		xhci_warn(xhci, "WARN: xHC CMD_RUN timeout\n");
 		spin_unlock_irq(&xhci->lock);
 		return -ETIMEDOUT;
+2
drivers/usb/host/xhci.h
···
 #define XHCI_COMP_MODE_QUIRK	(1 << 14)
 #define XHCI_AVOID_BEI		(1 << 15)
 #define XHCI_PLAT		(1 << 16)
+#define XHCI_SLOW_SUSPEND	(1 << 17)
+#define XHCI_SPURIOUS_WAKEUP	(1 << 18)
 	unsigned int		num_active_eps;
 	unsigned int		limit_active_eps;
 	/* There are two roothubs to keep track of bus suspend info for */
+1-1
drivers/usb/misc/Kconfig
···
 config USB_HSIC_USB3503
 	tristate "USB3503 HSIC to USB20 Driver"
 	depends on I2C
-	select REGMAP
+	select REGMAP_I2C
 	help
 	  This option enables support for SMSC USB3503 HSIC to USB 2.0 Driver.
+46
drivers/usb/musb/musb_core.c
···
 }
 
 /*
+ * Program the HDRC to start (enable interrupts, dma, etc.).
+ */
+void musb_start(struct musb *musb)
+{
+	void __iomem	*regs = musb->mregs;
+	u8		devctl = musb_readb(regs, MUSB_DEVCTL);
+
+	dev_dbg(musb->controller, "<== devctl %02x\n", devctl);
+
+	/* Set INT enable registers, enable interrupts */
+	musb->intrtxe = musb->epmask;
+	musb_writew(regs, MUSB_INTRTXE, musb->intrtxe);
+	musb->intrrxe = musb->epmask & 0xfffe;
+	musb_writew(regs, MUSB_INTRRXE, musb->intrrxe);
+	musb_writeb(regs, MUSB_INTRUSBE, 0xf7);
+
+	musb_writeb(regs, MUSB_TESTMODE, 0);
+
+	/* put into basic highspeed mode and start session */
+	musb_writeb(regs, MUSB_POWER, MUSB_POWER_ISOUPDATE
+			| MUSB_POWER_HSENAB
+			/* ENSUSPEND wedges tusb */
+			/* | MUSB_POWER_ENSUSPEND */
+			);
+
+	musb->is_active = 0;
+	devctl = musb_readb(regs, MUSB_DEVCTL);
+	devctl &= ~MUSB_DEVCTL_SESSION;
+
+	/* session started after:
+	 * (a) ID-grounded irq, host mode;
+	 * (b) vbus present/connect IRQ, peripheral mode;
+	 * (c) peripheral initiates, using SRP
+	 */
+	if (musb->port_mode != MUSB_PORT_MODE_HOST &&
+			(devctl & MUSB_DEVCTL_VBUS) == MUSB_DEVCTL_VBUS) {
+		musb->is_active = 1;
+	} else {
+		devctl |= MUSB_DEVCTL_SESSION;
+	}
+
+	musb_platform_enable(musb);
+	musb_writeb(regs, MUSB_DEVCTL, devctl);
+}
+
+/*
  * Make the HDRC stop (disable interrupts, etc.);
  * reversible by musb_start
  * called on gadget driver unregister
···
 	struct dsps_glue *glue;
 	int ret;
 
+	if (!strcmp(pdev->name, "musb-hdrc"))
+		return -ENODEV;
+
 	match = of_match_node(musb_dsps_of_match, pdev->dev.of_node);
 	if (!match) {
 		dev_err(&pdev->dev, "fail to get matching of_match struct\n");
+6
drivers/usb/musb/musb_gadget.c
···
 	musb->g.max_speed = USB_SPEED_HIGH;
 	musb->g.speed = USB_SPEED_UNKNOWN;
 
+	MUSB_DEV_MODE(musb);
+	musb->xceiv->otg->default_a = 0;
+	musb->xceiv->state = OTG_STATE_B_IDLE;
+
 	/* this "gadget" abstracts/virtualizes the controller */
 	musb->g.name = musb_driver_name;
 	musb->g.is_otg = 1;
···
 	otg_set_peripheral(otg, &musb->g);
 	musb->xceiv->state = OTG_STATE_B_IDLE;
 	spin_unlock_irqrestore(&musb->lock, flags);
+
+	musb_start(musb);
 
 	/* REVISIT: funcall to other code, which also
 	 * handles power budgeting ... this way also
···
 	/*
 	 * Many devices do not respond properly to READ_CAPACITY_16.
 	 * Tell the SCSI layer to try READ_CAPACITY_10 first.
+	 * However some USB 3.0 drive enclosures return capacity
+	 * modulo 2TB. Those must use READ_CAPACITY_16
 	 */
-	sdev->try_rc_10_first = 1;
+	if (!(us->fflags & US_FL_NEEDS_CAP16))
+		sdev->try_rc_10_first = 1;
 
 	/* assume SPC3 or latter devices support sense size > 18 */
 	if (sdev->scsi_level > SCSI_SPC_2)
···
 	long npage;
 	int ret = 0, prot = 0;
 	uint64_t mask;
+	struct vfio_dma *dma = NULL;
+	unsigned long pfn;
 
 	end = map->iova + map->size;
···
 	}
 
 	for (iova = map->iova; iova < end; iova += size, vaddr += size) {
-		struct vfio_dma *dma = NULL;
-		unsigned long pfn;
 		long i;
 
 		/* Pin a contiguous chunk of memory */
···
 		if (npage <= 0) {
 			WARN_ON(!npage);
 			ret = (int)npage;
-			break;
+			goto out;
 		}
 
 		/* Verify pages are not already mapped */
 		for (i = 0; i < npage; i++) {
 			if (iommu_iova_to_phys(iommu->domain,
 					       iova + (i << PAGE_SHIFT))) {
-				vfio_unpin_pages(pfn, npage, prot, true);
 				ret = -EBUSY;
-				break;
+				goto out_unpin;
 			}
 		}
···
 		if (ret) {
 			if (ret != -EBUSY ||
 			    map_try_harder(iommu, iova, pfn, npage, prot)) {
-				vfio_unpin_pages(pfn, npage, prot, true);
-				break;
+				goto out_unpin;
 			}
 		}
···
 			dma = kzalloc(sizeof(*dma), GFP_KERNEL);
 			if (!dma) {
 				iommu_unmap(iommu->domain, iova, size);
-				vfio_unpin_pages(pfn, npage, prot, true);
 				ret = -ENOMEM;
-				break;
+				goto out_unpin;
 			}
 
 			dma->size = size;
···
 		}
 	}
 
-	if (ret) {
-		struct vfio_dma *tmp;
-		iova = map->iova;
-		size = map->size;
-		while ((tmp = vfio_find_dma(iommu, iova, size))) {
-			int r = vfio_remove_dma_overlap(iommu, iova,
-							&size, tmp);
-			if (WARN_ON(r || !size))
-				break;
-		}
+	WARN_ON(ret);
+	mutex_unlock(&iommu->lock);
+	return ret;
+
+out_unpin:
+	vfio_unpin_pages(pfn, npage, prot, true);
+
+out:
+	iova = map->iova;
+	size = map->size;
+	while ((dma = vfio_find_dma(iommu, iova, size))) {
+		int r = vfio_remove_dma_overlap(iommu, iova,
+						&size, dma);
+		if (WARN_ON(r || !size))
+			break;
 	}
 
 	mutex_unlock(&iommu->lock);
+6-1
drivers/vhost/scsi.c
···
 	}
 	se_sess = tv_nexus->tvn_se_sess;
 
-	tag = percpu_ida_alloc(&se_sess->sess_tag_pool, GFP_KERNEL);
+	tag = percpu_ida_alloc(&se_sess->sess_tag_pool, GFP_ATOMIC);
+	if (tag < 0) {
+		pr_err("Unable to obtain tag for tcm_vhost_cmd\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
 	cmd = &((struct tcm_vhost_cmd *)se_sess->sess_cmd_map)[tag];
 	sg = cmd->tvc_sgl;
 	pages = cmd->tvc_upages;
+6
drivers/w1/w1.c
···
 	sl = dev_to_w1_slave(dev);
 	fops = sl->family->fops;
 
+	if (!fops)
+		return 0;
+
 	switch (action) {
 	case BUS_NOTIFY_ADD_DEVICE:
 		/* if the family driver needs to initialize something... */
···
 	atomic_set(&sl->refcnt, 0);
 	init_completion(&sl->released);
 
+	/* slave modules need to be loaded in a context with unlocked mutex */
+	mutex_unlock(&dev->mutex);
 	request_module("w1-family-0x%0x", rn->family);
+	mutex_lock(&dev->mutex);
 
 	spin_lock(&w1_flock);
 	f = w1_family_registered(rn->family);
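The w1 hunk above shows a common locking pattern: a blocking call that may sleep or re-enter the subsystem (here `request_module()`) must not run under the bus mutex, so the lock is dropped around it and retaken before touching protected state again. A hedged userspace analogue using pthreads (the function and variable names are invented for illustration):

```c
#include <pthread.h>

/* Illustrative stand-ins: dev_mutex plays the role of the w1 bus mutex,
 * load_family_module() the role of the blocking request_module() call. */
static pthread_mutex_t dev_mutex = PTHREAD_MUTEX_INITIALIZER;
static int loaded_family;

static void load_family_module(int family)
{
	loaded_family = family;	/* a real loader would block here */
}

static int attach_slave(int family)
{
	int result;

	pthread_mutex_lock(&dev_mutex);
	/* ... set up slave state under the lock ... */
	pthread_mutex_unlock(&dev_mutex);	/* blocking call: drop the lock */
	load_family_module(family);
	pthread_mutex_lock(&dev_mutex);		/* reacquire before continuing */
	result = loaded_family;
	pthread_mutex_unlock(&dev_mutex);
	return result;
}
```

The cost of this pattern is that state can change while the lock is dropped, which is why the kernel code re-checks the family registration afterwards under `w1_flock`.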
+6
drivers/watchdog/hpwdt.c
···
 		return -ENODEV;
 	}
 
+	/*
+	 * Ignore all auxilary iLO devices with the following PCI ID
+	 */
+	if (dev->subsystem_device == 0x1979)
+		return -ENODEV;
+
 	if (pci_enable_device(dev)) {
 		dev_warn(&dev->dev,
 			"Not possible to enable PCI Device: 0x%x:0x%x.\n",
···
 	.set_timeout	= sunxi_wdt_set_timeout,
 };
 
-static int __init sunxi_wdt_probe(struct platform_device *pdev)
+static int sunxi_wdt_probe(struct platform_device *pdev)
 {
 	struct sunxi_wdt_dev *sunxi_wdt;
 	struct resource *res;
···
 	return 0;
 }
 
-static int __exit sunxi_wdt_remove(struct platform_device *pdev)
+static int sunxi_wdt_remove(struct platform_device *pdev)
 {
 	struct sunxi_wdt_dev *sunxi_wdt = platform_get_drvdata(pdev);
+2-1
drivers/watchdog/ts72xx_wdt.c
···
 
 	case WDIOC_GETSTATUS:
 	case WDIOC_GETBOOTSTATUS:
-		return put_user(0, p);
+		error = put_user(0, p);
+		break;
 
 	case WDIOC_KEEPALIVE:
 		ts72xx_wdt_kick(wdt);
+37-15
fs/aio.c
···
 }
 __initcall(aio_setup);
 
+static void put_aio_ring_file(struct kioctx *ctx)
+{
+	struct file *aio_ring_file = ctx->aio_ring_file;
+	if (aio_ring_file) {
+		truncate_setsize(aio_ring_file->f_inode, 0);
+
+		/* Prevent further access to the kioctx from migratepages */
+		spin_lock(&aio_ring_file->f_inode->i_mapping->private_lock);
+		aio_ring_file->f_inode->i_mapping->private_data = NULL;
+		ctx->aio_ring_file = NULL;
+		spin_unlock(&aio_ring_file->f_inode->i_mapping->private_lock);
+
+		fput(aio_ring_file);
+	}
+}
+
 static void aio_free_ring(struct kioctx *ctx)
 {
 	int i;
-	struct file *aio_ring_file = ctx->aio_ring_file;
 
 	for (i = 0; i < ctx->nr_pages; i++) {
 		pr_debug("pid(%d) [%d] page->count=%d\n", current->pid, i,
···
 		put_page(ctx->ring_pages[i]);
 	}
 
+	put_aio_ring_file(ctx);
+
 	if (ctx->ring_pages && ctx->ring_pages != ctx->internal_pages)
 		kfree(ctx->ring_pages);
-
-	if (aio_ring_file) {
-		truncate_setsize(aio_ring_file->f_inode, 0);
-		fput(aio_ring_file);
-		ctx->aio_ring_file = NULL;
-	}
 }
 
 static int aio_ring_mmap(struct file *file, struct vm_area_struct *vma)
···
 static int aio_migratepage(struct address_space *mapping, struct page *new,
 			struct page *old, enum migrate_mode mode)
 {
-	struct kioctx *ctx = mapping->private_data;
+	struct kioctx *ctx;
 	unsigned long flags;
-	unsigned idx = old->index;
 	int rc;
 
 	/* Writeback must be complete */
···
 
 	get_page(new);
 
-	spin_lock_irqsave(&ctx->completion_lock, flags);
-	migrate_page_copy(new, old);
-	ctx->ring_pages[idx] = new;
-	spin_unlock_irqrestore(&ctx->completion_lock, flags);
+	/* We can potentially race against kioctx teardown here.  Use the
+	 * address_space's private data lock to protect the mapping's
+	 * private_data.
+	 */
+	spin_lock(&mapping->private_lock);
+	ctx = mapping->private_data;
+	if (ctx) {
+		pgoff_t idx;
+		spin_lock_irqsave(&ctx->completion_lock, flags);
+		migrate_page_copy(new, old);
+		idx = old->index;
+		if (idx < (pgoff_t)ctx->nr_pages)
+			ctx->ring_pages[idx] = new;
+		spin_unlock_irqrestore(&ctx->completion_lock, flags);
+	} else
+		rc = -EBUSY;
+	spin_unlock(&mapping->private_lock);
 
 	return rc;
 }
···
 out_freeref:
 	free_percpu(ctx->users.pcpu_count);
 out_freectx:
-	if (ctx->aio_ring_file)
-		fput(ctx->aio_ring_file);
+	put_aio_ring_file(ctx);
 	kmem_cache_free(kioctx_cachep, ctx);
 	pr_debug("error allocating ioctx %d\n", err);
 	return ERR_PTR(err);
+19-6
fs/btrfs/async-thread.c
···
 	worker->idle = 1;
 
 	/* the list may be empty if the worker is just starting */
-	if (!list_empty(&worker->worker_list)) {
+	if (!list_empty(&worker->worker_list) &&
+	    !worker->workers->stopping) {
 		list_move(&worker->worker_list,
 			 &worker->workers->idle_list);
 	}
···
 	spin_lock_irqsave(&worker->workers->lock, flags);
 	worker->idle = 0;
 
-	if (!list_empty(&worker->worker_list)) {
+	if (!list_empty(&worker->worker_list) &&
+	    !worker->workers->stopping) {
 		list_move_tail(&worker->worker_list,
 			      &worker->workers->worker_list);
 	}
···
 	int can_stop;
 
 	spin_lock_irq(&workers->lock);
+	workers->stopping = 1;
 	list_splice_init(&workers->idle_list, &workers->worker_list);
 	while (!list_empty(&workers->worker_list)) {
 		cur = workers->worker_list.next;
···
 	workers->ordered = 0;
 	workers->atomic_start_pending = 0;
 	workers->atomic_worker_start = async_helper;
+	workers->stopping = 0;
 }
 
 /*
···
 	atomic_set(&worker->num_pending, 0);
 	atomic_set(&worker->refs, 1);
 	worker->workers = workers;
-	worker->task = kthread_run(worker_loop, worker,
-				   "btrfs-%s-%d", workers->name,
-				   workers->num_workers + 1);
+	worker->task = kthread_create(worker_loop, worker,
+				      "btrfs-%s-%d", workers->name,
+				      workers->num_workers + 1);
 	if (IS_ERR(worker->task)) {
 		ret = PTR_ERR(worker->task);
-		kfree(worker);
 		goto fail;
 	}
+
 	spin_lock_irq(&workers->lock);
+	if (workers->stopping) {
+		spin_unlock_irq(&workers->lock);
+		goto fail_kthread;
+	}
 	list_add_tail(&worker->worker_list, &workers->idle_list);
 	worker->idle = 1;
 	workers->num_workers++;
···
 	WARN_ON(workers->num_workers_starting < 0);
 	spin_unlock_irq(&workers->lock);
 
+	wake_up_process(worker->task);
 	return 0;
+
+fail_kthread:
+	kthread_stop(worker->task);
 fail:
+	kfree(worker);
 	spin_lock_irq(&workers->lock);
 	workers->num_workers_starting--;
 	spin_unlock_irq(&workers->lock);
+2
fs/btrfs/async-thread.h
···
 
 	/* extra name for this worker, used for current->name */
 	char *name;
+
+	int stopping;
 };
 
 void btrfs_queue_worker(struct btrfs_workers *workers, struct btrfs_work *work);
+1-4
fs/btrfs/dev-replace.c
···
 	list_add(&tgt_device->dev_alloc_list, &fs_info->fs_devices->alloc_list);
 
 	btrfs_rm_dev_replace_srcdev(fs_info, src_device);
-	if (src_device->bdev) {
-		/* zero out the old super */
-		btrfs_scratch_superblock(src_device);
-	}
+
 	/*
 	 * this is again a consistent state where no dev_replace procedure
 	 * is running, the target device is part of the filesystem, the
+5-4
fs/btrfs/disk-io.c
···
 	return ret;
 }
 
-struct btrfs_root *btrfs_read_fs_root_no_name(struct btrfs_fs_info *fs_info,
-					      struct btrfs_key *location)
+struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
+				     struct btrfs_key *location,
+				     bool check_ref)
 {
 	struct btrfs_root *root;
 	int ret;
···
 again:
 	root = btrfs_lookup_fs_root(fs_info, location->objectid);
 	if (root) {
-		if (btrfs_root_refs(&root->root_item) == 0)
+		if (check_ref && btrfs_root_refs(&root->root_item) == 0)
 			return ERR_PTR(-ENOENT);
 		return root;
 	}
···
 	if (IS_ERR(root))
 		return root;
 
-	if (btrfs_root_refs(&root->root_item) == 0) {
+	if (check_ref && btrfs_root_refs(&root->root_item) == 0) {
 		ret = -ENOENT;
 		goto fail;
 	}
···
 			offsetof(struct btrfs_io_bio, bio));
 	if (!btrfs_bioset)
 		goto free_buffer_cache;
+
+	if (bioset_integrity_create(btrfs_bioset, BIO_POOL_SIZE))
+		goto free_bioset;
+
 	return 0;
+
+free_bioset:
+	bioset_free(btrfs_bioset);
+	btrfs_bioset = NULL;
 
 free_buffer_cache:
 	kmem_cache_destroy(extent_buffer_cache);
···
 		cur_start = state->end + 1;
 		node = rb_next(node);
 		total_bytes += state->end - state->start + 1;
-		if (total_bytes >= max_bytes) {
-			*end = *start + max_bytes - 1;
+		if (total_bytes >= max_bytes)
 			break;
-		}
 		if (!node)
 			break;
 	}
···
 		*start = delalloc_start;
 		*end = delalloc_end;
 		free_extent_state(cached_state);
-		return found;
+		return 0;
 	}
 
 	/*
···
 
 	/*
 	 * make sure to limit the number of pages we try to lock down
-	 * if we're looping.
 	 */
-	if (delalloc_end + 1 - delalloc_start > max_bytes && loops)
-		delalloc_end = delalloc_start + PAGE_CACHE_SIZE - 1;
+	if (delalloc_end + 1 - delalloc_start > max_bytes)
+		delalloc_end = delalloc_start + max_bytes - 1;
 
 	/* step two, lock all the pages after the page that has start */
 	ret = lock_delalloc_pages(inode, locked_page,
···
 		 */
 		free_extent_state(cached_state);
 		if (!loops) {
-			unsigned long offset = (*start) & (PAGE_CACHE_SIZE - 1);
-			max_bytes = PAGE_CACHE_SIZE - offset;
+			max_bytes = PAGE_CACHE_SIZE;
 			loops = 1;
 			goto again;
 		} else {
+2-1
fs/btrfs/inode.c
···
 
 	if (btrfs_extent_readonly(root, disk_bytenr))
 		goto out;
+	btrfs_release_path(path);
 
 	/*
 	 * look for other files referencing this extent, if we
···
 
 	/* check for collisions, even if the name isn't there */
-	ret = btrfs_check_dir_item_collision(root, new_dir->i_ino,
+	ret = btrfs_check_dir_item_collision(dest, new_dir->i_ino,
 				     new_dentry->d_name.name,
 				     new_dentry->d_name.len);
 
···
 				 struct btrfs_device *srcdev)
 {
 	WARN_ON(!mutex_is_locked(&fs_info->fs_devices->device_list_mutex));
+
 	list_del_rcu(&srcdev->dev_list);
 	list_del_rcu(&srcdev->dev_alloc_list);
 	fs_info->fs_devices->num_devices--;
···
 	}
 	if (srcdev->can_discard)
 		fs_info->fs_devices->num_can_discard--;
-	if (srcdev->bdev)
+	if (srcdev->bdev) {
 		fs_info->fs_devices->open_devices--;
+
+		/* zero out the old super */
+		btrfs_scratch_superblock(srcdev);
+	}
 
 	call_rcu(&srcdev->rcu, free_device);
 }
+12-2
fs/buffer.c
···
 	struct buffer_head *bh;
 	sector_t end_block;
 	int ret = 0;		/* Will call free_more_memory() */
+	gfp_t gfp_mask;
 
-	page = find_or_create_page(inode->i_mapping, index,
-		(mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE);
+	gfp_mask = mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS;
+	gfp_mask |= __GFP_MOVABLE;
+	/*
+	 * XXX: __getblk_slow() can not really deal with failure and
+	 * will endlessly loop on improvised global reclaim.  Prefer
+	 * looping in the allocator rather than here, at least that
+	 * code knows what it's doing.
+	 */
+	gfp_mask |= __GFP_NOFAIL;
+
+	page = find_or_create_page(inode->i_mapping, index, gfp_mask);
 	if (!page)
 		return ret;
+4-2
fs/cifs/cifsfs.c
···
 {
 	struct inode *inode;
 	struct cifs_sb_info *cifs_sb;
+	struct cifs_tcon *tcon;
 	int rc = 0;
 
 	cifs_sb = CIFS_SB(sb);
+	tcon = cifs_sb_master_tcon(cifs_sb);
 
 	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIXACL)
 		sb->s_flags |= MS_POSIXACL;
 
-	if (cifs_sb_master_tcon(cifs_sb)->ses->capabilities & CAP_LARGE_FILES)
+	if (tcon->ses->capabilities & tcon->ses->server->vals->cap_large_files)
 		sb->s_maxbytes = MAX_LFS_FILESIZE;
 	else
 		sb->s_maxbytes = MAX_NON_LFS;
···
 		goto out_no_root;
 	}
 
-	if (cifs_sb_master_tcon(cifs_sb)->nocase)
+	if (tcon->nocase)
 		sb->s_d_op = &cifs_ci_dentry_ops;
 	else
 		sb->s_d_op = &cifs_dentry_ops;
···
 	unsigned int max_rw;	/* maxRw specifies the maximum */
 	/* message size the server can send or receive for */
 	/* SMB_COM_WRITE_RAW or SMB_COM_READ_RAW. */
-	unsigned int max_vcs;	/* maximum number of smb sessions, at least
-				   those that can be specified uniquely with
-				   vcnumbers */
 	unsigned int capabilities; /* selective disabling of caps by smb sess */
 	int timeAdj;  /* Adjust for difference in server time zone in sec */
 	__u64 CurrentMid;         /* multiplex id - rotating counter */
···
 	enum statusEnum status;
 	unsigned overrideSecFlg;  /* if non-zero override global sec flags */
 	__u16 ipc_tid;		/* special tid for connection to IPC share */
-	__u16 vcnum;
 	char *serverOS;		/* name of operating system underlying server */
 	char *serverNOS;	/* name of network operating system of server */
 	char *serverDomain;	/* security realm of server */
···
 #define CIFS_FATTR_DELETE_PENDING	0x2
 #define CIFS_FATTR_NEED_REVAL		0x4
 #define CIFS_FATTR_INO_COLLISION	0x8
+#define CIFS_FATTR_UNKNOWN_NLINK	0x10
 
 struct cifs_fattr {
 	u32 cf_flags;
···
 			cifs_max_pending);
 	set_credits(server, server->maxReq);
 	server->maxBuf = le16_to_cpu(rsp->MaxBufSize);
-	server->max_vcs = le16_to_cpu(rsp->MaxNumberVcs);
 	/* even though we do not use raw we might as well set this
 	accurately, in case we ever find a need for it */
 	if ((le16_to_cpu(rsp->RawMode) & RAW_ENABLE) == RAW_ENABLE) {
···
 	bool is_unicode;
 	unsigned int sub_len;
 	char *sub_start;
-	struct reparse_data *reparse_buf;
+	struct reparse_symlink_data *reparse_buf;
+	struct reparse_posix_data *posix_buf;
 	__u32 data_offset, data_count;
 	char *end_of_smb;
···
 		goto qreparse_out;
 	}
 	end_of_smb = 2 + get_bcc(&pSMBr->hdr) + (char *)&pSMBr->ByteCount;
-	reparse_buf = (struct reparse_data *)
+	reparse_buf = (struct reparse_symlink_data *)
 				((char *)&pSMBr->hdr.Protocol + data_offset);
 	if ((char *)reparse_buf >= end_of_smb) {
 		rc = -EIO;
 		goto qreparse_out;
 	}
-	if ((reparse_buf->PathBuffer + reparse_buf->PrintNameOffset +
-				reparse_buf->PrintNameLength) > end_of_smb) {
+	if (reparse_buf->ReparseTag == cpu_to_le32(IO_REPARSE_TAG_NFS)) {
+		cifs_dbg(FYI, "NFS style reparse tag\n");
+		posix_buf = (struct reparse_posix_data *)reparse_buf;
+
+		if (posix_buf->InodeType != cpu_to_le64(NFS_SPECFILE_LNK)) {
+			cifs_dbg(FYI, "unsupported file type 0x%llx\n",
+				 le64_to_cpu(posix_buf->InodeType));
+			rc = -EOPNOTSUPP;
+			goto qreparse_out;
+		}
+		is_unicode = true;
+		sub_len = le16_to_cpu(reparse_buf->ReparseDataLength);
+		if (posix_buf->PathBuffer + sub_len > end_of_smb) {
+			cifs_dbg(FYI, "reparse buf beyond SMB\n");
+			rc = -EIO;
+			goto qreparse_out;
+		}
+		*symlinkinfo = cifs_strndup_from_utf16(posix_buf->PathBuffer,
+				sub_len, is_unicode, nls_codepage);
+		goto qreparse_out;
+	} else if (reparse_buf->ReparseTag !=
+			cpu_to_le32(IO_REPARSE_TAG_SYMLINK)) {
+		rc = -EOPNOTSUPP;
+		goto qreparse_out;
+	}
+
+	/* Reparse tag is NTFS symlink */
+	sub_start = le16_to_cpu(reparse_buf->SubstituteNameOffset) +
+				reparse_buf->PathBuffer;
+	sub_len = le16_to_cpu(reparse_buf->SubstituteNameLength);
+	if (sub_start + sub_len > end_of_smb) {
 		cifs_dbg(FYI, "reparse buf beyond SMB\n");
 		rc = -EIO;
 		goto qreparse_out;
 	}
-	sub_start = reparse_buf->SubstituteNameOffset + reparse_buf->PathBuffer;
-	sub_len = reparse_buf->SubstituteNameLength;
 	if (pSMBr->hdr.Flags2 & SMBFLG2_UNICODE)
 		is_unicode = true;
 	else
+8
fs/cifs/file.c
···
 	/*
 	 * Reads as many pages as possible from fscache. Returns -ENOBUFS
 	 * immediately if the cookie is negative
+	 *
+	 * After this point, every page in the list might have PG_fscache set,
+	 * so we will need to clean that up off of every page we don't use.
 	 */
 	rc = cifs_readpages_from_fscache(mapping->host, mapping, page_list,
 					 &num_pages);
···
 		kref_put(&rdata->refcount, cifs_readdata_release);
 	}
 
+	/* Any pages that have been shown to fscache but didn't get added to
+	 * the pagecache must be uncached before they get returned to the
+	 * allocator.
+	 */
+	cifs_fscache_readpages_cancel(mapping->host, page_list);
 	return rc;
 }
···
 	cifs_i->invalid_mapping = true;
 }
 
+/*
+ * copy nlink to the inode, unless it wasn't provided.  Provide
+ * sane values if we don't have an existing one and none was provided
+ */
+static void
+cifs_nlink_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr)
+{
+	/*
+	 * if we're in a situation where we can't trust what we
+	 * got from the server (readdir, some non-unix cases)
+	 * fake reasonable values
+	 */
+	if (fattr->cf_flags & CIFS_FATTR_UNKNOWN_NLINK) {
+		/* only provide fake values on a new inode */
+		if (inode->i_state & I_NEW) {
+			if (fattr->cf_cifsattrs & ATTR_DIRECTORY)
+				set_nlink(inode, 2);
+			else
+				set_nlink(inode, 1);
+		}
+		return;
+	}
+
+	/* we trust the server, so update it */
+	set_nlink(inode, fattr->cf_nlink);
+}
+
 /* populate an inode with info from a cifs_fattr struct */
 void
 cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr)
···
 	inode->i_mtime = fattr->cf_mtime;
 	inode->i_ctime = fattr->cf_ctime;
 	inode->i_rdev = fattr->cf_rdev;
-	set_nlink(inode, fattr->cf_nlink);
+	cifs_nlink_fattr_to_inode(inode, fattr);
 	inode->i_uid = fattr->cf_uid;
 	inode->i_gid = fattr->cf_gid;
···
 	fattr->cf_bytes = le64_to_cpu(info->AllocationSize);
 	fattr->cf_createtime = le64_to_cpu(info->CreationTime);
 
+	fattr->cf_nlink = le32_to_cpu(info->NumberOfLinks);
 	if (fattr->cf_cifsattrs & ATTR_DIRECTORY) {
 		fattr->cf_mode = S_IFDIR | cifs_sb->mnt_dir_mode;
 		fattr->cf_dtype = DT_DIR;
 		/*
 		 * Server can return wrong NumberOfLinks value for directories
 		 * when Unix extensions are disabled - fake it.
 		 */
-		fattr->cf_nlink = 2;
+		if (!tcon->unix_ext)
+			fattr->cf_flags |= CIFS_FATTR_UNKNOWN_NLINK;
 	} else if (fattr->cf_cifsattrs & ATTR_REPARSE) {
 		fattr->cf_mode = S_IFLNK;
 		fattr->cf_dtype = DT_LNK;
···
 		if (fattr->cf_cifsattrs & ATTR_READONLY)
 			fattr->cf_mode &= ~(S_IWUGO);
 
-		fattr->cf_nlink = le32_to_cpu(info->NumberOfLinks);
-		if (fattr->cf_nlink < 1) {
-			cifs_dbg(1, "replacing bogus file nlink value %u\n",
+		/*
+		 * Don't accept zero nlink from non-unix servers unless
+		 * delete is pending.  Instead mark it as unknown.
+		 */
+		if ((fattr->cf_nlink < 1) && !tcon->unix_ext &&
+		    !info->DeletePending) {
+			cifs_dbg(1, "bogus file nlink value %u\n",
 				 fattr->cf_nlink);
-			fattr->cf_nlink = 1;
+			fattr->cf_flags |= CIFS_FATTR_UNKNOWN_NLINK;
 		}
 	}
+3-1
fs/cifs/netmisc.c
···
 	ERRDOS, ERRnoaccess, 0xc0000290}, {
 	ERRDOS, ERRbadfunc, 0xc000029c}, {
 	ERRDOS, ERRsymlink, NT_STATUS_STOPPED_ON_SYMLINK}, {
-	ERRDOS, ERRinvlevel, 0x007c0001}, };
+	ERRDOS, ERRinvlevel, 0x007c0001}, {
+	0, 0, 0 }
+};
 
 /*****************************************************************************
  Print an error message from the status code
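The netmisc.c hunk above appends an all-zero entry to the error-mapping table. The point of such a sentinel is that lookup loops commonly walk the table until they hit a terminator rather than carrying a separate length; without one, a scan for an unknown status runs off the end of the array. A hedged sketch of that lookup pattern (structure and names here are illustrative, not the CIFS code's):

```c
#include <stddef.h>

/* Illustrative mapping entry: an NT status mapped to a DOS error pair. */
struct err_map {
	int dos_class;
	int dos_code;
	unsigned int nt_status;
};

static const struct err_map map[] = {
	{ 1, 5,  0xc0000290 },
	{ 1, 31, 0x007c0001 },
	{ 0, 0, 0 }		/* sentinel terminates the scan */
};

/* Walk the table until the all-zero sentinel; return -1 if not found. */
static int nt_to_dos_code(unsigned int nt_status)
{
	const struct err_map *m;

	for (m = map; m->dos_class || m->dos_code || m->nt_status; m++)
		if (m->nt_status == nt_status)
			return m->dos_code;
	return -1;		/* stopped safely at the sentinel */
}
```

The sentinel costs one entry but lets the table grow without updating a length constant anywhere else.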
···
 #include <linux/slab.h>
 #include "cifs_spnego.h"
 
-/*
- * Checks if this is the first smb session to be reconnected after
- * the socket has been reestablished (so we know whether to use vc 0).
- * Called while holding the cifs_tcp_ses_lock, so do not block
- */
-static bool is_first_ses_reconnect(struct cifs_ses *ses)
-{
-	struct list_head *tmp;
-	struct cifs_ses *tmp_ses;
-
-	list_for_each(tmp, &ses->server->smb_ses_list) {
-		tmp_ses = list_entry(tmp, struct cifs_ses,
-				     smb_ses_list);
-		if (tmp_ses->need_reconnect == false)
-			return false;
-	}
-	/* could not find a session that was already connected,
-	   this must be the first one we are reconnecting */
-	return true;
-}
-
-/*
- * vc number 0 is treated specially by some servers, and should be the
- * first one we request.  After that we can use vcnumbers up to maxvcs,
- * one for each smb session (some Windows versions set maxvcs incorrectly
- * so maxvc=1 can be ignored).  If we have too many vcs, we can reuse
- * any vc but zero (some servers reset the connection on vcnum zero)
- *
- */
-static __le16 get_next_vcnum(struct cifs_ses *ses)
-{
-	__u16 vcnum = 0;
-	struct list_head *tmp;
-	struct cifs_ses *tmp_ses;
-	__u16 max_vcs = ses->server->max_vcs;
-	__u16 i;
-	int free_vc_found = 0;
-
-	/* Quoting the MS-SMB specification: "Windows-based SMB servers set this
-	field to one but do not enforce this limit, which allows an SMB client
-	to establish more virtual circuits than allowed by this value ... but
-	other server implementations can enforce this limit." */
-	if (max_vcs < 2)
-		max_vcs = 0xFFFF;
-
-	spin_lock(&cifs_tcp_ses_lock);
-	if ((ses->need_reconnect) && is_first_ses_reconnect(ses))
-		goto get_vc_num_exit;  /* vcnum will be zero */
-	for (i = ses->server->srv_count - 1; i < max_vcs; i++) {
-		if (i == 0) /* this is the only connection, use vc 0 */
-			break;
-
-		free_vc_found = 1;
-
-		list_for_each(tmp, &ses->server->smb_ses_list) {
-			tmp_ses = list_entry(tmp, struct cifs_ses,
-					     smb_ses_list);
-			if (tmp_ses->vcnum == i) {
-				free_vc_found = 0;
-				break; /* found duplicate, try next vcnum */
-			}
-		}
-		if (free_vc_found)
-			break; /* we found a vcnumber that will work - use it */
-	}
-
-	if (i == 0)
-		vcnum = 0; /* for most common case, ie if one smb session, use
-			      vc zero.  Also for case when no free vcnum, zero
-			      is safest to send (some clients only send zero) */
-	else if (free_vc_found == 0)
-		vcnum = 1;  /* we can not reuse vc=0 safely, since some servers
-				reset all uids on that, but 1 is ok. */
-	else
-		vcnum = i;
-	ses->vcnum = vcnum;
-get_vc_num_exit:
-	spin_unlock(&cifs_tcp_ses_lock);
-
-	return cpu_to_le16(vcnum);
-}
-
 static __u32 cifs_ssetup_hdr(struct cifs_ses *ses, SESSION_SETUP_ANDX *pSMB)
 {
 	__u32 capabilities = 0;
···
 			CIFSMaxBufSize + MAX_CIFS_HDR_SIZE - 4,
 			USHRT_MAX));
 	pSMB->req.MaxMpxCount = cpu_to_le16(ses->server->maxReq);
-	pSMB->req.VcNumber = get_next_vcnum(ses);
+	pSMB->req.VcNumber = __constant_cpu_to_le16(1);
 
 	/* Now no need to set SMBFLG_CASELESS or obsolete CANONICAL PATH */
···
 			return NTLMv2;
 		if (global_secflags & CIFSSEC_MAY_NTLM)
 			return NTLM;
-		/* Fallthrough */
 	default:
-		return Unspecified;
+		/* Fallthrough to attempt LANMAN authentication next */
+		break;
 	}
 	case CIFS_NEGFLAVOR_LANMAN:
 		switch (requested) {
+6
fs/cifs/smb2pdu.c
···
 	else
 		return -EIO;
 
+	/* no need to send SMB logoff if uid already closed due to reconnect */
+	if (ses->need_reconnect)
+		goto smb2_session_already_dead;
+
 	rc = small_smb2_init(SMB2_LOGOFF, NULL, (void **) &req);
 	if (rc)
 		return rc;
···
 	 * No tcon so can't do
 	 * cifs_stats_inc(&tcon->stats.smb2_stats.smb2_com_fail[SMB2...]);
 	 */
+
+smb2_session_already_dead:
 	return rc;
 }
+14
fs/cifs/smbfsctl.h
···
 #define FSCTL_QUERY_NETWORK_INTERFACE_INFO 0x001401FC /* BB add struct */
 #define FSCTL_SRV_READ_HASH          0x001441BB /* BB add struct */
 
+/* See FSCC 2.1.2.5 */
 #define IO_REPARSE_TAG_MOUNT_POINT   0xA0000003
 #define IO_REPARSE_TAG_HSM           0xC0000004
 #define IO_REPARSE_TAG_SIS           0x80000007
+#define IO_REPARSE_TAG_HSM2          0x80000006
+#define IO_REPARSE_TAG_DRIVER_EXTENDER 0x80000005
+/* Used by the DFS filter. See MS-DFSC */
+#define IO_REPARSE_TAG_DFS           0x8000000A
+/* Used by the DFS filter See MS-DFSC */
+#define IO_REPARSE_TAG_DFSR          0x80000012
+#define IO_REPARSE_TAG_FILTER_MANAGER 0x8000000B
+/* See section MS-FSCC 2.1.2.4 */
+#define IO_REPARSE_TAG_SYMLINK       0xA000000C
+#define IO_REPARSE_TAG_DEDUP         0x80000013
+#define IO_REPARSE_APPXSTREAM        0xC0000014
+/* NFS symlinks, Win 8/SMB3 and later */
+#define IO_REPARSE_TAG_NFS           0x80000014
 
 /* fsctl flags */
 /* If Flags is set to this value, the request is an FSCTL not ioctl request */
+7-2
fs/cifs/transport.c
···
 wait_for_free_request(struct TCP_Server_Info *server, const int timeout,
 		      const int optype)
 {
-	return wait_for_free_credits(server, timeout,
-				server->ops->get_credits_field(server, optype));
+	int *val;
+
+	val = server->ops->get_credits_field(server, optype);
+	/* Since an echo is already inflight, no need to wait to send another */
+	if (*val <= 0 && optype == CIFS_ECHO_OP)
+		return -EAGAIN;
+	return wait_for_free_credits(server, timeout, val);
 }
 
 static int allocate_mid(struct cifs_ses *ses, struct smb_hdr *in_buf,
+7-8
fs/dcache.c
···
  * list is non-empty and continue searching.
  */
 
-/**
- * have_submounts - check for mounts over a dentry
- * @parent: dentry to check.
- *
- * Return true if the parent or its subdirectories contain
- * a mount point
- */
-
 static enum d_walk_ret check_mount(void *data, struct dentry *dentry)
 {
 	int *ret = data;
···
 	return D_WALK_CONTINUE;
 }
 
+/**
+ * have_submounts - check for mounts over a dentry
+ * @parent: dentry to check.
+ *
+ * Return true if the parent or its subdirectories contain
+ * a mount point
+ */
 int have_submounts(struct dentry *parent)
 {
 	int ret = 0;
fs/namei.c
···
  * path_mountpoint - look up a path to be umounted
  * @dfd:	directory file descriptor to start walk from
  * @name:	full pathname to walk
+ * @path:	pointer to container for result
  * @flags:	lookup flags
  *
  * Look up the given name, but don't attempt to revalidate the last component.
- * Returns 0 and "path" will be valid on success; Retuns error otherwise.
+ * Returns 0 and "path" will be valid on success; Returns error otherwise.
  */
 static int
 path_mountpoint(int dfd, const char *name, struct path *path, unsigned int flags)
+7-3
fs/proc/inode.c
···
 static unsigned long proc_reg_get_unmapped_area(struct file *file, unsigned long orig_addr, unsigned long len, unsigned long pgoff, unsigned long flags)
 {
 	struct proc_dir_entry *pde = PDE(file_inode(file));
-	int rv = -EIO;
-	unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
+	unsigned long rv = -EIO;
+	unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long) = NULL;
 	if (use_pde(pde)) {
-		get_unmapped_area = pde->proc_fops->get_unmapped_area;
+#ifdef CONFIG_MMU
+		get_unmapped_area = current->mm->get_unmapped_area;
+#endif
+		if (pde->proc_fops->get_unmapped_area)
+			get_unmapped_area = pde->proc_fops->get_unmapped_area;
 		if (get_unmapped_area)
 			rv = get_unmapped_area(file, orig_addr, len, pgoff, flags);
 		unuse_pde(pde);
+3-1
fs/proc/task_mmu.c
···
 		frame = pte_pfn(pte);
 		flags = PM_PRESENT;
 		page = vm_normal_page(vma, addr, pte);
+		if (pte_soft_dirty(pte))
+			flags2 |= __PM_SOFT_DIRTY;
 	} else if (is_swap_pte(pte)) {
 		swp_entry_t entry;
 		if (pte_swp_soft_dirty(pte))
···
 
 	if (page && !PageAnon(page))
 		flags |= PM_FILE;
-	if ((vma->vm_flags & VM_SOFTDIRTY) || pte_soft_dirty(pte))
+	if ((vma->vm_flags & VM_SOFTDIRTY))
 		flags2 |= __PM_SOFT_DIRTY;
 
 	*pme = make_pme(PM_PFRAME(frame) | PM_STATUS2(pm->v2, flags2) | flags);
+1-1
fs/statfs.c
···
 
 int fd_statfs(int fd, struct kstatfs *st)
 {
-	struct fd f = fdget(fd);
+	struct fd f = fdget_raw(fd);
 	int error = -EBADF;
 	if (f.file) {
 		error = vfs_statfs(&f.file->f_path, st);
+3-3
fs/xfs/xfs_dir2_block.c
···
 	/*
 	 * Create entry for .
 	 */
-	dep = xfs_dir3_data_dot_entry_p(hdr);
+	dep = xfs_dir3_data_dot_entry_p(mp, hdr);
 	dep->inumber = cpu_to_be64(dp->i_ino);
 	dep->namelen = 1;
 	dep->name[0] = '.';
···
 	/*
 	 * Create entry for ..
 	 */
-	dep = xfs_dir3_data_dotdot_entry_p(hdr);
+	dep = xfs_dir3_data_dotdot_entry_p(mp, hdr);
 	dep->inumber = cpu_to_be64(xfs_dir2_sf_get_parent_ino(sfp));
 	dep->namelen = 2;
 	dep->name[0] = dep->name[1] = '.';
···
 	blp[1].hashval = cpu_to_be32(xfs_dir_hash_dotdot);
 	blp[1].address = cpu_to_be32(xfs_dir2_byte_to_dataptr(mp,
 						(char *)dep - (char *)hdr));
-	offset = xfs_dir3_data_first_offset(hdr);
+	offset = xfs_dir3_data_first_offset(mp);
 	/*
 	 * Loop over existing entries, stuff them in.
 	 */
+20-31
fs/xfs/xfs_dir2_format.h
···
 /*
  * Offsets of . and .. in data space (always block 0)
  *
- * The macros are used for shortform directories as they have no headers to read
- * the magic number out of. Shortform directories need to know the size of the
- * data block header because the sfe embeds the block offset of the entry into
- * it so that it doesn't change when format conversion occurs. Bad Things Happen
- * if we don't follow this rule.
- *
  * XXX: there is scope for significant optimisation of the logic here. Right
  * now we are checking for "dir3 format" over and over again. Ideally we should
  * only do it once for each operation.
  */
-#define	XFS_DIR3_DATA_DOT_OFFSET(mp)	\
-	xfs_dir3_data_hdr_size(xfs_sb_version_hascrc(&(mp)->m_sb))
-#define	XFS_DIR3_DATA_DOTDOT_OFFSET(mp)	\
-	(XFS_DIR3_DATA_DOT_OFFSET(mp) + xfs_dir3_data_entsize(mp, 1))
-#define	XFS_DIR3_DATA_FIRST_OFFSET(mp)	\
-	(XFS_DIR3_DATA_DOTDOT_OFFSET(mp) + xfs_dir3_data_entsize(mp, 2))
-
 static inline xfs_dir2_data_aoff_t
-xfs_dir3_data_dot_offset(struct xfs_dir2_data_hdr *hdr)
+xfs_dir3_data_dot_offset(struct xfs_mount *mp)
 {
-	return xfs_dir3_data_entry_offset(hdr);
+	return xfs_dir3_data_hdr_size(xfs_sb_version_hascrc(&mp->m_sb));
 }
 
 static inline xfs_dir2_data_aoff_t
-xfs_dir3_data_dotdot_offset(struct xfs_dir2_data_hdr *hdr)
+xfs_dir3_data_dotdot_offset(struct xfs_mount *mp)
 {
-	bool dir3 = hdr->magic == cpu_to_be32(XFS_DIR3_DATA_MAGIC) ||
-		    hdr->magic == cpu_to_be32(XFS_DIR3_BLOCK_MAGIC);
-	return xfs_dir3_data_dot_offset(hdr) +
-		__xfs_dir3_data_entsize(dir3, 1);
+	return xfs_dir3_data_dot_offset(mp) +
+		xfs_dir3_data_entsize(mp, 1);
 }
 
 static inline xfs_dir2_data_aoff_t
-xfs_dir3_data_first_offset(struct xfs_dir2_data_hdr *hdr)
+xfs_dir3_data_first_offset(struct xfs_mount *mp)
 {
-	bool dir3 = hdr->magic == cpu_to_be32(XFS_DIR3_DATA_MAGIC) ||
-		    hdr->magic == cpu_to_be32(XFS_DIR3_BLOCK_MAGIC);
-	return xfs_dir3_data_dotdot_offset(hdr) +
-		__xfs_dir3_data_entsize(dir3, 2);
+	return xfs_dir3_data_dotdot_offset(mp) +
+		xfs_dir3_data_entsize(mp, 2);
 }
 
 /*
  * location of . and .. in data space (always block 0)
  */
 static inline struct xfs_dir2_data_entry *
-xfs_dir3_data_dot_entry_p(struct xfs_dir2_data_hdr *hdr)
+xfs_dir3_data_dot_entry_p(
+	struct xfs_mount	*mp,
+	struct xfs_dir2_data_hdr *hdr)
 {
 	return (struct xfs_dir2_data_entry *)
-		((char *)hdr + xfs_dir3_data_dot_offset(hdr));
+		((char *)hdr + xfs_dir3_data_dot_offset(mp));
 }
 
 static inline struct xfs_dir2_data_entry *
-xfs_dir3_data_dotdot_entry_p(struct xfs_dir2_data_hdr *hdr)
+xfs_dir3_data_dotdot_entry_p(
+	struct xfs_mount	*mp,
+	struct xfs_dir2_data_hdr *hdr)
 {
 	return (struct xfs_dir2_data_entry *)
-		((char *)hdr + xfs_dir3_data_dotdot_offset(hdr));
+		((char *)hdr + xfs_dir3_data_dotdot_offset(mp));
 }
 
 static inline struct xfs_dir2_data_entry *
-xfs_dir3_data_first_entry_p(struct xfs_dir2_data_hdr *hdr)
+xfs_dir3_data_first_entry_p(
+	struct xfs_mount	*mp,
+	struct xfs_dir2_data_hdr *hdr)
 {
 	return (struct xfs_dir2_data_entry *)
-		((char *)hdr + xfs_dir3_data_first_offset(hdr));
+		((char *)hdr + xfs_dir3_data_first_offset(mp));
 }
 
 /*
+2-2
fs/xfs/xfs_dir2_readdir.c
···
 	 * mp->m_dirdatablk.
 	 */
 	dot_offset = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk,
-						XFS_DIR3_DATA_DOT_OFFSET(mp));
+						xfs_dir3_data_dot_offset(mp));
 	dotdot_offset = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk,
-						XFS_DIR3_DATA_DOTDOT_OFFSET(mp));
+						xfs_dir3_data_dotdot_offset(mp));
 
 	/*
 	 * Put . entry unless we're starting past it.
+3-3
fs/xfs/xfs_dir2_sf.c
···
 	 * to insert the new entry.
 	 * If it's going to end up at the end then oldsfep will point there.
 	 */
-	for (offset = XFS_DIR3_DATA_FIRST_OFFSET(mp),
+	for (offset = xfs_dir3_data_first_offset(mp),
 	     oldsfep = xfs_dir2_sf_firstentry(oldsfp),
 	     add_datasize = xfs_dir3_data_entsize(mp, args->namelen),
 	     eof = (char *)oldsfep == &buf[old_isize];
···
 
 	sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data;
 	size = xfs_dir3_data_entsize(mp, args->namelen);
-	offset = XFS_DIR3_DATA_FIRST_OFFSET(mp);
+	offset = xfs_dir3_data_first_offset(mp);
 	sfep = xfs_dir2_sf_firstentry(sfp);
 	holefit = 0;
 	/*
···
 	mp = dp->i_mount;
 
 	sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data;
-	offset = XFS_DIR3_DATA_FIRST_OFFSET(mp);
+	offset = xfs_dir3_data_first_offset(mp);
 	ino = xfs_dir2_sf_get_parent_ino(sfp);
 	i8count = ino > XFS_DIR2_MAX_SHORT_INUM;
 
+16-3
fs/xfs/xfs_dquot.c
···
 struct kmem_zone		*xfs_qm_dqtrxzone;
 static struct kmem_zone		*xfs_qm_dqzone;
 
-static struct lock_class_key xfs_dquot_other_class;
+static struct lock_class_key xfs_dquot_group_class;
+static struct lock_class_key xfs_dquot_project_class;
 
 /*
  * This is called to free all the memory associated with a dquot
···
 	 * Make sure group quotas have a different lock class than user
 	 * quotas.
 	 */
-	if (!(type & XFS_DQ_USER))
-		lockdep_set_class(&dqp->q_qlock, &xfs_dquot_other_class);
+	switch (type) {
+	case XFS_DQ_USER:
+		/* uses the default lock class */
+		break;
+	case XFS_DQ_GROUP:
+		lockdep_set_class(&dqp->q_qlock, &xfs_dquot_group_class);
+		break;
+	case XFS_DQ_PROJ:
+		lockdep_set_class(&dqp->q_qlock, &xfs_dquot_project_class);
+		break;
+	default:
+		ASSERT(0);
+		break;
+	}
 
 	XFS_STATS_INC(xs_qm_dquot);
include/asm-generic/hugetlb.h
···
 	return mk_pte(page, pgprot);
 }
 
-static inline int huge_pte_write(pte_t pte)
+static inline unsigned long huge_pte_write(pte_t pte)
 {
 	return pte_write(pte);
 }
 
-static inline int huge_pte_dirty(pte_t pte)
+static inline unsigned long huge_pte_dirty(pte_t pte)
 {
 	return pte_dirty(pte);
 }
+1-3
include/dt-bindings/pinctrl/omap.h
···
 #define PULL_UP			(1 << 4)
 #define ALTELECTRICALSEL	(1 << 5)
 
-/* 34xx specific mux bit defines */
+/* omap3/4/5 specific mux bit defines */
 #define INPUT_EN		(1 << 8)
 #define OFF_EN			(1 << 9)
 #define OFFOUT_EN		(1 << 10)
···
 #define OFF_PULL_EN		(1 << 12)
 #define OFF_PULL_UP		(1 << 13)
 #define WAKEUP_EN		(1 << 14)
-
-/* 44xx specific mux bit defines */
 #define WAKEUP_EVENT		(1 << 15)
 
 /* Active pin states */
+15
include/linux/compiler-gcc4.h
···
 #define __visible __attribute__((externally_visible))
 #endif
 
+/*
+ * GCC 'asm goto' miscompiles certain code sequences:
+ *
+ *   http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670
+ *
+ * Work it around via a compiler barrier quirk suggested by Jakub Jelinek.
+ * Fixed in GCC 4.8.2 and later versions.
+ *
+ * (asm goto is automatically volatile - the naming reflects this.)
+ */
+#if GCC_VERSION <= 40801
+# define asm_volatile_goto(x...)	do { asm goto(x); asm (""); } while (0)
+#else
+# define asm_volatile_goto(x...)	do { asm goto(x); } while (0)
+#endif
 
 #ifdef CONFIG_ARCH_USE_BUILTIN_BSWAP
 #if GCC_VERSION >= 40400
include/linux/perf_event.h
···
  */
 struct perf_event {
 #ifdef CONFIG_PERF_EVENTS
-	struct list_head		group_entry;
+	/*
+	 * entry onto perf_event_context::event_list;
+	 *   modifications require ctx->lock
+	 *   RCU safe iterations.
+	 */
 	struct list_head		event_entry;
+
+	/*
+	 * XXX: group_entry and sibling_list should be mutually exclusive;
+	 * either you're a sibling on a group, or you're the group leader.
+	 * Rework the code to always use the same list element.
+	 *
+	 * Locked for modification by both ctx->mutex and ctx->lock; holding
+	 * either suffices for read.
+	 */
+	struct list_head		group_entry;
 	struct list_head		sibling_list;
+
+	/*
+	 * We need storage to track the entries in perf_pmu_migrate_context; we
+	 * cannot use the event_entry because of RCU and we want to keep the
+	 * group intact which avoids us using the other two entries.
+	 */
+	struct list_head		migrate_entry;
+
 	struct hlist_node		hlist_entry;
 	int				nr_siblings;
 	int				group_flags;
+1
include/linux/random.h
···
 extern void get_random_bytes(void *buf, int nbytes);
 extern void get_random_bytes_arch(void *buf, int nbytes);
 void generate_random_uuid(unsigned char uuid_out[16]);
+extern int random_int_secret_init(void);
 
 #ifndef MODULE
 extern const struct file_operations random_fops, urandom_fops;
+3-4
include/linux/sched.h
···
 	} memcg_batch;
 	unsigned int memcg_kmem_skip_account;
 	struct memcg_oom_info {
+		struct mem_cgroup *memcg;
+		gfp_t gfp_mask;
+		int order;
 		unsigned int may_oom:1;
-		unsigned int in_memcg_oom:1;
-		unsigned int oom_locked:1;
-		int wakeups;
-		struct mem_cgroup *wait_on_memcg;
 	} memcg_oom;
 #endif
 #ifdef CONFIG_UPROBES
+14
include/linux/timex.h
···
 
 #include <asm/timex.h>
 
+#ifndef random_get_entropy
+/*
+ * The random_get_entropy() function is used by the /dev/random driver
+ * in order to extract entropy via the relative unpredictability of
+ * when an interrupt takes place versus a high speed, fine-grained
+ * timing source or cycle counter.  Since it will occur on every
+ * single interrupt, it must have a very low cost/overhead.
+ *
+ * By default we use get_cycles() for this purpose, but individual
+ * architectures may override this in their asm/timex.h header file.
+ */
+#define random_get_entropy()	get_cycles()
+#endif
+
 /*
  * SHIFT_PLL is used as a dampening factor to define how much we
  * adjust the frequency correction for a given offset in PLL mode.
+1-1
include/linux/usb/usb_phy_gen_xceiv.h
···
 	unsigned int needs_reset:1;
 };
 
-#if IS_ENABLED(CONFIG_NOP_USB_XCEIV)
+#if defined(CONFIG_NOP_USB_XCEIV) || (defined(CONFIG_NOP_USB_XCEIV_MODULE) && defined(MODULE))
 /* sometimes transceivers are accessed only through e.g. ULPI */
 extern void usb_nop_xceiv_register(void);
 extern void usb_nop_xceiv_unregister(void);
+3-1
include/linux/usb_usual.h
···
 	US_FLAG(INITIAL_READ10,	0x00100000)			\
 		/* Initial READ(10) (and others) must be retried */	\
 	US_FLAG(WRITE_CACHE,	0x00200000)			\
-		/* Write Cache status is not available */
+		/* Write Cache status is not available */	\
+	US_FLAG(NEEDS_CAP16,	0x00400000)
+		/* cannot handle READ_CAPACITY_10 */
 
 #define US_FLAG(name, value)	US_FL_##name = value ,
 enum { US_DO_ALL_FLAGS };
-7
include/linux/vgaarb.h
···
  *     out of the arbitration process (and can be safe to take
  *     interrupts at any time.
  */
-#if defined(CONFIG_VGA_ARB)
 extern void vga_set_legacy_decoding(struct pci_dev *pdev,
 				    unsigned int decodes);
-#else
-static inline void vga_set_legacy_decoding(struct pci_dev *pdev,
-					   unsigned int decodes)
-{
-}
-#endif
 
 /**
  *     vga_get         - acquire & locks VGA resources
+1-1
include/linux/yam.h
···
 
 struct yamdrv_ioctl_mcs {
 	int cmd;
-	int bitrate;
+	unsigned int bitrate;
 	unsigned char bits[YAM_FPGA_SIZE];
 };
ipc/sem.c
···
 
 	sem_lock(sma, NULL, -1);
 
+	if (sma->sem_perm.deleted) {
+		sem_unlock(sma, -1);
+		rcu_read_unlock();
+		return -EIDRM;
+	}
+
 	curr = &sma->sem_base[semnum];
 
 	ipc_assert_locked_object(&sma->sem_perm);
···
 		int i;
 
 		sem_lock(sma, NULL, -1);
+		if (sma->sem_perm.deleted) {
+			err = -EIDRM;
+			goto out_unlock;
+		}
 		if(nsems > SEMMSL_FAST) {
 			if (!ipc_rcu_getref(sma)) {
-				sem_unlock(sma, -1);
-				rcu_read_unlock();
 				err = -EIDRM;
-				goto out_free;
+				goto out_unlock;
 			}
 			sem_unlock(sma, -1);
 			rcu_read_unlock();
···
 			rcu_read_lock();
 			sem_lock_and_putref(sma);
 			if (sma->sem_perm.deleted) {
-				sem_unlock(sma, -1);
-				rcu_read_unlock();
 				err = -EIDRM;
-				goto out_free;
+				goto out_unlock;
 			}
 		}
 		for (i = 0; i < sma->sem_nsems; i++)
···
 		struct sem_undo *un;
 
 		if (!ipc_rcu_getref(sma)) {
-			rcu_read_unlock();
-			return -EIDRM;
+			err = -EIDRM;
+			goto out_rcu_wakeup;
 		}
 		rcu_read_unlock();
···
 		rcu_read_lock();
 		sem_lock_and_putref(sma);
 		if (sma->sem_perm.deleted) {
-			sem_unlock(sma, -1);
-			rcu_read_unlock();
 			err = -EIDRM;
-			goto out_free;
+			goto out_unlock;
 		}
 
 		for (i = 0; i < nsems; i++)
···
 		goto out_rcu_wakeup;
 
 	sem_lock(sma, NULL, -1);
+	if (sma->sem_perm.deleted) {
+		err = -EIDRM;
+		goto out_unlock;
+	}
 	curr = &sma->sem_base[semnum];
 
 	switch (cmd) {
···
 	if (error)
 		goto out_rcu_wakeup;
 
+	error = -EIDRM;
+	locknum = sem_lock(sma, sops, nsops);
+	if (sma->sem_perm.deleted)
+		goto out_unlock_free;
 	/*
 	 * semid identifiers are not unique - find_alloc_undo may have
 	 * allocated an undo structure, it was invalidated by an RMID
···
 	 * This case can be detected checking un->semid. The existence of
 	 * "un" itself is guaranteed by rcu.
 	 */
-	error = -EIDRM;
-	locknum = sem_lock(sma, sops, nsops);
 	if (un && un->semid == -1)
 		goto out_unlock_free;
 
···
 		}
 
 		sem_lock(sma, NULL, -1);
+		/* exit_sem raced with IPC_RMID, nothing to do */
+		if (sma->sem_perm.deleted) {
+			sem_unlock(sma, -1);
+			rcu_read_unlock();
+			continue;
+		}
 		un = __lookup_undo(ulp, semid);
 		if (un == NULL) {
 			/* exit_sem raced with IPC_RMID+semget() that created
+21-6
ipc/util.c
···
  * Pavel Emelianov <xemul@openvz.org>
  *
  * General sysv ipc locking scheme:
- *	when doing ipc id lookups, take the ids->rwsem
- *	rcu_read_lock()
- *	obtain the ipc object (kern_ipc_perm)
- *	perform security, capabilities, auditing and permission checks, etc.
- *	acquire the ipc lock (kern_ipc_perm.lock) throught ipc_lock_object()
- *	perform data updates (ie: SET, RMID, LOCK/UNLOCK commands)
+ *	rcu_read_lock()
+ *	  obtain the ipc object (kern_ipc_perm) by looking up the id in an idr
+ *	  tree.
+ *	  - perform initial checks (capabilities, auditing and permission,
+ *	    etc).
+ *	  - perform read-only operations, such as STAT, INFO commands.
+ *	    acquire the ipc lock (kern_ipc_perm.lock) through
+ *	      ipc_lock_object()
+ *	      - perform data updates, such as SET, RMID commands and
+ *	        mechanism-specific operations (semop/semtimedop,
+ *	        msgsnd/msgrcv, shmat/shmdt).
+ *	    drop the ipc lock, through ipc_unlock_object().
+ *	rcu_read_unlock()
+ *
+ *  The ids->rwsem must be taken when:
+ *	- creating, removing and iterating the existing entries in ipc
+ *	  identifier sets.
+ *	- iterating through files under /proc/sysvipc/
+ *
+ *  Note that sems have a special fast path that avoids kern_ipc_perm.lock -
+ *  see sem_lock().
  */
 
 #include <linux/mm.h>
+6-8
kernel/cgroup.c
···
 
 		/* @tsk either already exited or can't exit until the end */
 		if (tsk->flags & PF_EXITING)
-			continue;
+			goto next;
 
 		/* as per above, nr_threads may decrease, but not increase. */
 		BUG_ON(i >= group_size);
···
 		ent.cgrp = task_cgroup_from_root(tsk, root);
 		/* nothing to do if this task is already in the cgroup */
 		if (ent.cgrp == cgrp)
-			continue;
+			goto next;
 		/*
 		 * saying GFP_ATOMIC has no effect here because we did prealloc
 		 * earlier, but it's good form to communicate our expectations.
···
 		retval = flex_array_put(group, i, &ent, GFP_ATOMIC);
 		BUG_ON(retval != 0);
 		i++;
-
+	next:
 		if (!threadgroup)
 			break;
 	} while_each_thread(leader, tsk);
···
 
 	WARN_ON_ONCE(!rcu_read_lock_held());
 
-	/* if first iteration, visit the leftmost descendant */
-	if (!pos) {
-		next = css_leftmost_descendant(root);
-		return next != root ? next : NULL;
-	}
+	/* if first iteration, visit leftmost descendant which may be @root */
+	if (!pos)
+		return css_leftmost_descendant(root);
 
 	/* if we visited @root, we're done */
 	if (pos == root)
kernel/softirq.c
···
 
 static inline void invoke_softirq(void)
 {
-	if (!force_irqthreads)
-		__do_softirq();
-	else
+	if (!force_irqthreads) {
+		/*
+		 * We can safely execute softirq on the current stack if
+		 * it is the irq stack, because it should be near empty
+		 * at this stage. But we have no way to know if the arch
+		 * calls irq_exit() on the irq stack. So call softirq
+		 * in its own stack to prevent from any overrun on top
+		 * of a potentially deep task stack.
+		 */
+		do_softirq();
+	} else {
 		wakeup_softirqd();
+	}
 }
 
 static inline void tick_irq_exit(void)
mm/Kconfig
···
 config MEMORY_HOTREMOVE
 	bool "Allow for memory hot remove"
 	select MEMORY_ISOLATION
-	select HAVE_BOOTMEM_INFO_NODE if X86_64
+	select HAVE_BOOTMEM_INFO_NODE if (X86_64 || PPC64)
 	depends on MEMORY_HOTPLUG && ARCH_ENABLE_MEMORY_HOTREMOVE
 	depends on MIGRATION
+1-10
mm/filemap.c
···
 	struct inode *inode = mapping->host;
 	pgoff_t offset = vmf->pgoff;
 	struct page *page;
-	bool memcg_oom;
 	pgoff_t size;
 	int ret = 0;
···
 		return VM_FAULT_SIGBUS;
 
 	/*
-	 * Do we have something in the page cache already? Either
-	 * way, try readahead, but disable the memcg OOM killer for it
-	 * as readahead is optional and no errors are propagated up
-	 * the fault stack. The OOM killer is enabled while trying to
-	 * instantiate the faulting page individually below.
+	 * Do we have something in the page cache already?
 	 */
 	page = find_get_page(mapping, offset);
 	if (likely(page) && !(vmf->flags & FAULT_FLAG_TRIED)) {
···
 		 * We found the page, so try async readahead before
 		 * waiting for the lock.
 		 */
-		memcg_oom = mem_cgroup_toggle_oom(false);
 		do_async_mmap_readahead(vma, ra, file, page, offset);
-		mem_cgroup_toggle_oom(memcg_oom);
 	} else if (!page) {
 		/* No page in the page cache at all */
-		memcg_oom = mem_cgroup_toggle_oom(false);
 		do_sync_mmap_readahead(vma, ra, file, offset);
-		mem_cgroup_toggle_oom(memcg_oom);
 		count_vm_event(PGMAJFAULT);
 		mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
 		ret = VM_FAULT_MAJOR;
+9-1
mm/huge_memory.c
···
 
 	mmun_start = haddr;
 	mmun_end   = haddr + HPAGE_PMD_SIZE;
+again:
 	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
 	spin_lock(&mm->page_table_lock);
 	if (unlikely(!pmd_trans_huge(*pmd))) {
···
 	split_huge_page(page);
 
 	put_page(page);
-	BUG_ON(pmd_trans_huge(*pmd));
+
+	/*
+	 * We don't always have down_write of mmap_sem here: a racing
+	 * do_huge_pmd_wp_page() might have copied-on-write to another
+	 * huge page before our split_huge_page() got the anon_vma lock.
+	 */
+	if (unlikely(pmd_trans_huge(*pmd)))
+		goto again;
 }
 
 void split_huge_page_pmd_mm(struct mm_struct *mm, unsigned long address,
+16-1
mm/hugetlb.c
···
 	BUG_ON(page_count(page));
 	BUG_ON(page_mapcount(page));
 	restore_reserve = PagePrivate(page);
+	ClearPagePrivate(page);
 
 	spin_lock(&hugetlb_lock);
 	hugetlb_cgroup_uncharge_page(hstate_index(h),
···
 	/* we rely on prep_new_huge_page to set the destructor */
 	set_compound_order(page, order);
 	__SetPageHead(page);
+	__ClearPageReserved(page);
 	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
 		__SetPageTail(p);
+		/*
+		 * For gigantic hugepages allocated through bootmem at
+		 * boot, it's safer to be consistent with the not-gigantic
+		 * hugepages and clear the PG_reserved bit from all tail pages
+		 * too.  Otherwise drivers using get_user_pages() to access tail
+		 * pages may get the reference counting wrong if they see
+		 * PG_reserved set on a tail page (despite the head page not
+		 * having PG_reserved set).  Enforcing this consistency between
+		 * head and tail pages allows drivers to optimize away a check
+		 * on the head page when they need to know if put_page() is
+		 * needed after get_user_pages().
+		 */
+		__ClearPageReserved(p);
 		set_page_count(p, 0);
 		p->first_page = page;
 	}
···
 #else
 		page = virt_to_page(m);
 #endif
-		__ClearPageReserved(page);
 		WARN_ON(page_count(page) != 1);
 		prep_compound_huge_page(page, h->order);
+		WARN_ON(PageReserved(page));
 		prep_new_huge_page(h, page, page_to_nid(page));
 		/*
 		 * If we had gigantic hugepages allocated at boot time, we need
+72-105
mm/memcontrol.c
···
 	unsigned long val = 0;
 	int cpu;
 
+	get_online_cpus();
 	for_each_online_cpu(cpu)
 		val += per_cpu(memcg->stat->events[idx], cpu);
 #ifdef CONFIG_HOTPLUG_CPU
···
 	val += memcg->nocpu_base.events[idx];
 	spin_unlock(&memcg->pcp_counter_lock);
 #endif
+	put_online_cpus();
 	return val;
 }
···
 	memcg_wakeup_oom(memcg);
 }
 
-/*
- * try to call OOM killer
- */
 static void mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int order)
 {
-	bool locked;
-	int wakeups;
-
 	if (!current->memcg_oom.may_oom)
 		return;
-
-	current->memcg_oom.in_memcg_oom = 1;
-
 	/*
-	 * As with any blocking lock, a contender needs to start
-	 * listening for wakeups before attempting the trylock,
-	 * otherwise it can miss the wakeup from the unlock and sleep
-	 * indefinitely.  This is just open-coded because our locking
-	 * is so particular to memcg hierarchies.
+	 * We are in the middle of the charge context here, so we
+	 * don't want to block when potentially sitting on a callstack
+	 * that holds all kinds of filesystem and mm locks.
+	 *
+	 * Also, the caller may handle a failed allocation gracefully
+	 * (like optional page cache readahead) and so an OOM killer
+	 * invocation might not even be necessary.
+	 *
+	 * That's why we don't do anything here except remember the
+	 * OOM context and then deal with it at the end of the page
+	 * fault when the stack is unwound, the locks are released,
+	 * and when we know whether the fault was overall successful.
 	 */
-	wakeups = atomic_read(&memcg->oom_wakeups);
+	css_get(&memcg->css);
+	current->memcg_oom.memcg = memcg;
+	current->memcg_oom.gfp_mask = mask;
+	current->memcg_oom.order = order;
+}
+
+/**
+ * mem_cgroup_oom_synchronize - complete memcg OOM handling
+ * @handle: actually kill/wait or just clean up the OOM state
+ *
+ * This has to be called at the end of a page fault if the memcg OOM
+ * handler was enabled.
+ *
+ * Memcg supports userspace OOM handling where failed allocations must
+ * sleep on a waitqueue until the userspace task resolves the
+ * situation.  Sleeping directly in the charge context with all kinds
+ * of locks held is not a good idea, instead we remember an OOM state
+ * in the task and mem_cgroup_oom_synchronize() has to be called at
+ * the end of the page fault to complete the OOM handling.
+ *
+ * Returns %true if an ongoing memcg OOM situation was detected and
+ * completed, %false otherwise.
+ */
+bool mem_cgroup_oom_synchronize(bool handle)
+{
+	struct mem_cgroup *memcg = current->memcg_oom.memcg;
+	struct oom_wait_info owait;
+	bool locked;
+
+	/* OOM is global, do not handle */
+	if (!memcg)
+		return false;
+
+	if (!handle)
+		goto cleanup;
+
+	owait.memcg = memcg;
+	owait.wait.flags = 0;
+	owait.wait.func = memcg_oom_wake_function;
+	owait.wait.private = current;
+	INIT_LIST_HEAD(&owait.wait.task_list);
+
+	prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
 	mem_cgroup_mark_under_oom(memcg);
 
 	locked = mem_cgroup_oom_trylock(memcg);
···
 
 	if (locked && !memcg->oom_kill_disable) {
 		mem_cgroup_unmark_under_oom(memcg);
-		mem_cgroup_out_of_memory(memcg, mask, order);
-		mem_cgroup_oom_unlock(memcg);
-		/*
-		 * There is no guarantee that an OOM-lock contender
-		 * sees the wakeups triggered by the OOM kill
-		 * uncharges.  Wake any sleepers explicitely.
-		 */
-		memcg_oom_recover(memcg);
+		finish_wait(&memcg_oom_waitq, &owait.wait);
+		mem_cgroup_out_of_memory(memcg, current->memcg_oom.gfp_mask,
+					 current->memcg_oom.order);
 	} else {
-		/*
-		 * A system call can just return -ENOMEM, but if this
-		 * is a page fault and somebody else is handling the
-		 * OOM already, we need to sleep on the OOM waitqueue
-		 * for this memcg until the situation is resolved.
-		 * Which can take some time because it might be
-		 * handled by a userspace task.
-		 *
-		 * However, this is the charge context, which means
-		 * that we may sit on a large call stack and hold
-		 * various filesystem locks, the mmap_sem etc. and we
-		 * don't want the OOM handler to deadlock on them
-		 * while we sit here and wait.  Store the current OOM
-		 * context in the task_struct, then return -ENOMEM.
-		 * At the end of the page fault handler, with the
-		 * stack unwound, pagefault_out_of_memory() will check
-		 * back with us by calling
-		 * mem_cgroup_oom_synchronize(), possibly putting the
-		 * task to sleep.
-		 */
-		current->memcg_oom.oom_locked = locked;
-		current->memcg_oom.wakeups = wakeups;
-		css_get(&memcg->css);
-		current->memcg_oom.wait_on_memcg = memcg;
-	}
-}
-
-/**
- * mem_cgroup_oom_synchronize - complete memcg OOM handling
- *
- * This has to be called at the end of a page fault if the the memcg
- * OOM handler was enabled and the fault is returning %VM_FAULT_OOM.
- *
- * Memcg supports userspace OOM handling, so failed allocations must
- * sleep on a waitqueue until the userspace task resolves the
- * situation.  Sleeping directly in the charge context with all kinds
- * of locks held is not a good idea, instead we remember an OOM state
- * in the task and mem_cgroup_oom_synchronize() has to be called at
- * the end of the page fault to put the task to sleep and clean up the
- * OOM state.
- *
- * Returns %true if an ongoing memcg OOM situation was detected and
- * finalized, %false otherwise.
- */
-bool mem_cgroup_oom_synchronize(void)
-{
-	struct oom_wait_info owait;
-	struct mem_cgroup *memcg;
-
-	/* OOM is global, do not handle */
-	if (!current->memcg_oom.in_memcg_oom)
-		return false;
-
-	/*
-	 * We invoked the OOM killer but there is a chance that a kill
-	 * did not free up any charges.  Everybody else might already
-	 * be sleeping, so restart the fault and keep the rampage
-	 * going until some charges are released.
-	 */
-	memcg = current->memcg_oom.wait_on_memcg;
-	if (!memcg)
-		goto out;
-
-	if (test_thread_flag(TIF_MEMDIE) || fatal_signal_pending(current))
-		goto out_memcg;
-
-	owait.memcg = memcg;
-	owait.wait.flags = 0;
-	owait.wait.func = memcg_oom_wake_function;
-	owait.wait.private = current;
-	INIT_LIST_HEAD(&owait.wait.task_list);
-
-	prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
-	/* Only sleep if we didn't miss any wakeups since OOM */
-	if (atomic_read(&memcg->oom_wakeups) == current->memcg_oom.wakeups)
 		schedule();
-	finish_wait(&memcg_oom_waitq, &owait.wait);
-out_memcg:
-	mem_cgroup_unmark_under_oom(memcg);
-	if (current->memcg_oom.oom_locked) {
+		mem_cgroup_unmark_under_oom(memcg);
+		finish_wait(&memcg_oom_waitq, &owait.wait);
+	}
+
+	if (locked) {
 		mem_cgroup_oom_unlock(memcg);
 		/*
 		 * There is no guarantee that an OOM-lock contender
···
 		 */
 		memcg_oom_recover(memcg);
 	}
+cleanup:
+	current->memcg_oom.memcg = NULL;
 	css_put(&memcg->css);
-	current->memcg_oom.wait_on_memcg = NULL;
-out:
-	current->memcg_oom.in_memcg_oom = 0;
 	return true;
 }
···
 		     || fatal_signal_pending(current)))
 		goto bypass;
 
+	if (unlikely(task_in_memcg_oom(current)))
+		goto bypass;
+
 	/*
 	 * We always charge the cgroup the mm_struct belongs to.
 	 * The mm_struct's mem_cgroup changes on task migration if the
···
 	return 0;
 nomem:
 	*ptr = NULL;
+	if (gfp_mask & __GFP_NOFAIL)
+		return 0;
 	return -ENOMEM;
 bypass:
 	*ptr = root_mem_cgroup;
mm/memory.c | +14 -6
···
     */
     make_migration_entry_read(&entry);
     pte = swp_entry_to_pte(entry);
+    if (pte_swp_soft_dirty(*src_pte))
+        pte = pte_swp_mksoft_dirty(pte);
     set_pte_at(src_mm, addr, src_pte, pte);
   }
 }
···
     * space. Kernel faults are handled more gracefully.
     */
     if (flags & FAULT_FLAG_USER)
-        mem_cgroup_enable_oom();
+        mem_cgroup_oom_enable();
 
     ret = __handle_mm_fault(mm, vma, address, flags);
 
-    if (flags & FAULT_FLAG_USER)
-        mem_cgroup_disable_oom();
-
-    if (WARN_ON(task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM)))
-        mem_cgroup_oom_synchronize();
+    if (flags & FAULT_FLAG_USER) {
+        mem_cgroup_oom_disable();
+        /*
+         * The task may have entered a memcg OOM situation but
+         * if the allocation error was handled gracefully (no
+         * VM_FAULT_OOM), there is no need to kill anything.
+         * Just clean up the OOM state peacefully.
+         */
+        if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
+            mem_cgroup_oom_synchronize(false);
+    }
 
     return ret;
 }
mm/migrate.c | +2
···
 
     get_page(new);
     pte = pte_mkold(mk_pte(new, vma->vm_page_prot));
+    if (pte_swp_soft_dirty(*ptep))
+        pte = pte_mksoft_dirty(pte);
     if (is_write_migration_entry(entry))
         pte = pte_mkwrite(pte);
 #ifdef CONFIG_HUGETLB_PAGE
mm/mprotect.c | +5 -2
···
             swp_entry_t entry = pte_to_swp_entry(oldpte);
 
             if (is_write_migration_entry(entry)) {
+                pte_t newpte;
                 /*
                 * A protection check is difficult so
                 * just be safe and disable write
                 */
                 make_migration_entry_read(&entry);
-                set_pte_at(mm, addr, pte,
-                           swp_entry_to_pte(entry));
+                newpte = swp_entry_to_pte(entry);
+                if (pte_swp_soft_dirty(oldpte))
+                    newpte = pte_swp_mksoft_dirty(newpte);
+                set_pte_at(mm, addr, pte, newpte);
             }
             pages++;
         }
···
 {
     struct zonelist *zonelist;
 
-    if (mem_cgroup_oom_synchronize())
+    if (mem_cgroup_oom_synchronize(true))
         return;
 
     zonelist = node_zonelist(first_online_node, GFP_KERNEL);
mm/page-writeback.c | +5 -5
···
         return 1;
 }
 
-static long bdi_max_pause(struct backing_dev_info *bdi,
-                          unsigned long bdi_dirty)
+static unsigned long bdi_max_pause(struct backing_dev_info *bdi,
+                                   unsigned long bdi_dirty)
 {
-    long bw = bdi->avg_write_bandwidth;
-    long t;
+    unsigned long bw = bdi->avg_write_bandwidth;
+    unsigned long t;
 
     /*
     * Limit pause time for small memory systems. If sleeping for too long
···
     t = bdi_dirty / (1 + bw / roundup_pow_of_two(1 + HZ / 8));
     t++;
 
-    return min_t(long, t, MAX_PAUSE);
+    return min_t(unsigned long, t, MAX_PAUSE);
 }
 
 static long bdi_min_pause(struct backing_dev_info *bdi,
mm/slab_common.c | +2
···
             continue;
         }
 
+#if !defined(CONFIG_SLUB) || !defined(CONFIG_SLUB_DEBUG_ON)
         /*
         * For simplicity, we won't check this in the list of memcg
         * caches. We have control over memcg naming, and if there
···
             s = NULL;
             return -EINVAL;
         }
+#endif
     }
 
     WARN_ON(strchr(name, ' '));    /* It confuses parsers */
mm/swapfile.c | +3 -1
···
     struct filename *pathname;
     int i, type, prev;
     int err;
+    unsigned int old_block_size;
 
     if (!capable(CAP_SYS_ADMIN))
         return -EPERM;
···
     }
 
     swap_file = p->swap_file;
+    old_block_size = p->old_block_size;
     p->swap_file = NULL;
     p->max = 0;
     swap_map = p->swap_map;
···
     inode = mapping->host;
     if (S_ISBLK(inode->i_mode)) {
         struct block_device *bdev = I_BDEV(inode);
-        set_blocksize(bdev, p->old_block_size);
+        set_blocksize(bdev, old_block_size);
         blkdev_put(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL);
     } else {
         mutex_lock(&inode->i_mutex);
···
 
     vid = nla_get_u16(tb[NDA_VLAN]);
 
-    if (vid >= VLAN_N_VID) {
+    if (!vid || vid >= VLAN_VID_MASK) {
         pr_info("bridge: RTM_NEWNEIGH with invalid vlan id %d\n",
             vid);
         return -EINVAL;
···
 
     vid = nla_get_u16(tb[NDA_VLAN]);
 
-    if (vid >= VLAN_N_VID) {
+    if (!vid || vid >= VLAN_VID_MASK) {
         pr_info("bridge: RTM_NEWNEIGH with invalid vlan id %d\n",
             vid);
         return -EINVAL;
···
 
     if (br->bridge_forward_delay < BR_MIN_FORWARD_DELAY)
         __br_set_forward_delay(br, BR_MIN_FORWARD_DELAY);
-    else if (br->bridge_forward_delay < BR_MAX_FORWARD_DELAY)
+    else if (br->bridge_forward_delay > BR_MAX_FORWARD_DELAY)
         __br_set_forward_delay(br, BR_MAX_FORWARD_DELAY);
 
     if (r == 0) {
net/bridge/br_vlan.c | +66 -57
···
         return 0;
     }
 
-    if (vid) {
-        if (v->port_idx) {
-            p = v->parent.port;
-            br = p->br;
-            dev = p->dev;
-        } else {
-            br = v->parent.br;
-            dev = br->dev;
-        }
-        ops = dev->netdev_ops;
+    if (v->port_idx) {
+        p = v->parent.port;
+        br = p->br;
+        dev = p->dev;
+    } else {
+        br = v->parent.br;
+        dev = br->dev;
+    }
+    ops = dev->netdev_ops;
 
-        if (p && (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) {
-            /* Add VLAN to the device filter if it is supported.
-             * Stricly speaking, this is not necessary now, since
-             * devices are made promiscuous by the bridge, but if
-             * that ever changes this code will allow tagged
-             * traffic to enter the bridge.
-             */
-            err = ops->ndo_vlan_rx_add_vid(dev, htons(ETH_P_8021Q),
-                                           vid);
-            if (err)
-                return err;
-        }
+    if (p && (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) {
+        /* Add VLAN to the device filter if it is supported.
+         * Stricly speaking, this is not necessary now, since
+         * devices are made promiscuous by the bridge, but if
+         * that ever changes this code will allow tagged
+         * traffic to enter the bridge.
+         */
+        err = ops->ndo_vlan_rx_add_vid(dev, htons(ETH_P_8021Q),
+                                       vid);
+        if (err)
+            return err;
+    }
 
-        err = br_fdb_insert(br, p, dev->dev_addr, vid);
-        if (err) {
-            br_err(br, "failed insert local address into bridge "
-                   "forwarding table\n");
-            goto out_filt;
-        }
-
+    err = br_fdb_insert(br, p, dev->dev_addr, vid);
+    if (err) {
+        br_err(br, "failed insert local address into bridge "
+               "forwarding table\n");
+        goto out_filt;
     }
 
     set_bit(vid, v->vlan_bitmap);
···
     __vlan_delete_pvid(v, vid);
     clear_bit(vid, v->untagged_bitmap);
 
-    if (v->port_idx && vid) {
+    if (v->port_idx) {
         struct net_device *dev = v->parent.port->dev;
         const struct net_device_ops *ops = dev->netdev_ops;
 
···
 bool br_allowed_ingress(struct net_bridge *br, struct net_port_vlans *v,
                         struct sk_buff *skb, u16 *vid)
 {
+    int err;
+
     /* If VLAN filtering is disabled on the bridge, all packets are
     * permitted.
     */
···
     if (!v)
         return false;
 
-    if (br_vlan_get_tag(skb, vid)) {
+    err = br_vlan_get_tag(skb, vid);
+    if (!*vid) {
         u16 pvid = br_get_pvid(v);
 
-        /* Frame did not have a tag. See if pvid is set
-         * on this port. That tells us which vlan untagged
-         * traffic belongs to.
+        /* Frame had a tag with VID 0 or did not have a tag.
+         * See if pvid is set on this port. That tells us which
+         * vlan untagged or priority-tagged traffic belongs to.
         */
         if (pvid == VLAN_N_VID)
             return false;
 
-        /* PVID is set on this port. Any untagged ingress
-         * frame is considered to belong to this vlan.
+        /* PVID is set on this port. Any untagged or priority-tagged
+         * ingress frame is considered to belong to this vlan.
         */
-        __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), pvid);
+        *vid = pvid;
+        if (likely(err))
+            /* Untagged Frame. */
+            __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), pvid);
+        else
+            /* Priority-tagged Frame.
+             * At this point, We know that skb->vlan_tci had
+             * VLAN_TAG_PRESENT bit and its VID field was 0x000.
+             * We update only VID field and preserve PCP field.
+             */
+            skb->vlan_tci |= pvid;
+
         return true;
     }
···
         return false;
 }
 
-/* Must be protected by RTNL */
+/* Must be protected by RTNL.
+ * Must be called with vid in range from 1 to 4094 inclusive.
+ */
 int br_vlan_add(struct net_bridge *br, u16 vid, u16 flags)
 {
     struct net_port_vlans *pv = NULL;
···
     return err;
 }
 
-/* Must be protected by RTNL */
+/* Must be protected by RTNL.
+ * Must be called with vid in range from 1 to 4094 inclusive.
+ */
 int br_vlan_delete(struct net_bridge *br, u16 vid)
 {
     struct net_port_vlans *pv;
···
     if (!pv)
         return -EINVAL;
 
-    if (vid) {
-        /* If the VID !=0 remove fdb for this vid. VID 0 is special
-         * in that it's the default and is always there in the fdb.
-         */
-        spin_lock_bh(&br->hash_lock);
-        fdb_delete_by_addr(br, br->dev->dev_addr, vid);
-        spin_unlock_bh(&br->hash_lock);
-    }
+    spin_lock_bh(&br->hash_lock);
+    fdb_delete_by_addr(br, br->dev->dev_addr, vid);
+    spin_unlock_bh(&br->hash_lock);
 
     __vlan_del(pv, vid);
     return 0;
···
     return 0;
 }
 
-/* Must be protected by RTNL */
+/* Must be protected by RTNL.
+ * Must be called with vid in range from 1 to 4094 inclusive.
+ */
 int nbp_vlan_add(struct net_bridge_port *port, u16 vid, u16 flags)
 {
     struct net_port_vlans *pv = NULL;
···
     return err;
 }
 
-/* Must be protected by RTNL */
+/* Must be protected by RTNL.
+ * Must be called with vid in range from 1 to 4094 inclusive.
+ */
 int nbp_vlan_delete(struct net_bridge_port *port, u16 vid)
 {
     struct net_port_vlans *pv;
···
     if (!pv)
         return -EINVAL;
 
-    if (vid) {
-        /* If the VID !=0 remove fdb for this vid. VID 0 is special
-         * in that it's the default and is always there in the fdb.
-         */
-        spin_lock_bh(&port->br->hash_lock);
-        fdb_delete_by_addr(port->br, port->dev->dev_addr, vid);
-        spin_unlock_bh(&port->br->hash_lock);
-    }
+    spin_lock_bh(&port->br->hash_lock);
+    fdb_delete_by_addr(port->br, port->dev->dev_addr, vid);
+    spin_unlock_bh(&port->br->hash_lock);
 
     return __vlan_del(pv, vid);
 }
···
     /* initialize protocol header pointer */
     skb->transport_header = skb->network_header + fragheaderlen;
 
-        skb->ip_summed = CHECKSUM_PARTIAL;
         skb->csum = 0;
 
-        /* specify the length of each IP datagram fragment */
-        skb_shinfo(skb)->gso_size = maxfraglen - fragheaderlen;
-        skb_shinfo(skb)->gso_type = SKB_GSO_UDP;
+
         __skb_queue_tail(queue, skb);
+    } else if (skb_is_gso(skb)) {
+        goto append;
     }
 
+    skb->ip_summed = CHECKSUM_PARTIAL;
+    /* specify the length of each IP datagram fragment */
+    skb_shinfo(skb)->gso_size = maxfraglen - fragheaderlen;
+    skb_shinfo(skb)->gso_type = SKB_GSO_UDP;
+
+append:
     return skb_append_datato_frags(sk, skb, getfrag, from,
                                    (length - transhdrlen));
 }
net/ipv4/ip_vti.c | +11 -3
···
                               iph->saddr, iph->daddr, 0);
     if (tunnel != NULL) {
         struct pcpu_tstats *tstats;
+        u32 oldmark = skb->mark;
+        int ret;
 
-        if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
+
+        /* temporarily mark the skb with the tunnel o_key, to
+         * only match policies with this mark.
+         */
+        skb->mark = be32_to_cpu(tunnel->parms.o_key);
+        ret = xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb);
+        skb->mark = oldmark;
+        if (!ret)
             return -1;
 
         tstats = this_cpu_ptr(tunnel->dev->tstats);
···
         tstats->rx_bytes += skb->len;
         u64_stats_update_end(&tstats->syncp);
 
-        skb->mark = 0;
         secpath_reset(skb);
         skb->dev = tunnel->dev;
         return 1;
···
 
     memset(&fl4, 0, sizeof(fl4));
     flowi4_init_output(&fl4, tunnel->parms.link,
-                       be32_to_cpu(tunnel->parms.i_key), RT_TOS(tos),
+                       be32_to_cpu(tunnel->parms.o_key), RT_TOS(tos),
                        RT_SCOPE_UNIVERSE,
                        IPPROTO_IPIP, 0,
                        dst, tiph->saddr, 0, 0);
net/ipv4/tcp_input.c | +3 -1
···
         tcp_init_cwnd_reduction(sk, true);
         tcp_set_ca_state(sk, TCP_CA_CWR);
         tcp_end_cwnd_reduction(sk);
-        tcp_set_ca_state(sk, TCP_CA_Open);
+        tcp_try_keep_open(sk);
         NET_INC_STATS_BH(sock_net(sk),
                          LINUX_MIB_TCPLOSSPROBERECOVERY);
     }
···
         tcp_rearm_rto(sk);
     } else
         tcp_init_metrics(sk);
+
+    tcp_update_pacing_rate(sk);
 
     /* Prevent spurious tcp_cwnd_restart() on first data packet */
     tp->lsndtime = tcp_time_stamp;
net/ipv4/tcp_output.c | +7 -5
···
 static void tcp_set_skb_tso_segs(const struct sock *sk, struct sk_buff *skb,
                                  unsigned int mss_now)
 {
-    if (skb->len <= mss_now || !sk_can_gso(sk) ||
-        skb->ip_summed == CHECKSUM_NONE) {
+    /* Make sure we own this skb before messing gso_size/gso_segs */
+    WARN_ON_ONCE(skb_cloned(skb));
+
+    if (skb->len <= mss_now || skb->ip_summed == CHECKSUM_NONE) {
         /* Avoid the costly divide in the normal
         * non-TSO case.
         */
···
     if (nsize < 0)
         nsize = 0;
 
-    if (skb_cloned(skb) &&
-        skb_is_nonlinear(skb) &&
-        pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
+    if (skb_unclone(skb, GFP_ATOMIC))
         return -ENOMEM;
 
     /* Get a new skb... force flag on. */
···
         int oldpcount = tcp_skb_pcount(skb);
 
         if (unlikely(oldpcount > 1)) {
+            if (skb_unclone(skb, GFP_ATOMIC))
+                return -ENOMEM;
             tcp_init_tso_segs(sk, skb, cur_mss);
             tcp_adjust_pcount(sk, skb, oldpcount - tcp_skb_pcount(skb));
         }
···
         return -EINVAL;
     }
     band = chanctx_conf->def.chan->band;
-    sta = sta_info_get(sdata, peer);
+    sta = sta_info_get_bss(sdata, peer);
     if (sta) {
         qos = test_sta_flag(sta, WLAN_STA_WME);
     } else {
net/mac80211/ieee80211_i.h | +3
···
  *    that the scan completed.
  * @SCAN_ABORTED: Set for our scan work function when the driver reported
  *    a scan complete for an aborted scan.
+ * @SCAN_HW_CANCELLED: Set for our scan work function when the scan is being
+ *    cancelled.
  */
 enum {
     SCAN_SW_SCANNING,
···
     SCAN_ONCHANNEL_SCANNING,
     SCAN_COMPLETED,
     SCAN_ABORTED,
+    SCAN_HW_CANCELLED,
 };
 
 /**
net/mac80211/offchannel.c | +2
···
 
     if (started)
         ieee80211_start_next_roc(local);
+    else if (list_empty(&local->roc_list))
+        ieee80211_run_deferred_scan(local);
     }
 
  out_unlock:
net/mac80211/scan.c | +19
···
     enum ieee80211_band band;
     int i, ielen, n_chans;
 
+    if (test_bit(SCAN_HW_CANCELLED, &local->scanning))
+        return false;
+
     do {
         if (local->hw_scan_band == IEEE80211_NUM_BANDS)
             return false;
···
     if (!local->scan_req)
         goto out;
 
+    /*
+     * We have a scan running and the driver already reported completion,
+     * but the worker hasn't run yet or is stuck on the mutex - mark it as
+     * cancelled.
+     */
+    if (test_bit(SCAN_HW_SCANNING, &local->scanning) &&
+        test_bit(SCAN_COMPLETED, &local->scanning)) {
+        set_bit(SCAN_HW_CANCELLED, &local->scanning);
+        goto out;
+    }
+
     if (test_bit(SCAN_HW_SCANNING, &local->scanning)) {
+        /*
+         * Make sure that __ieee80211_scan_completed doesn't trigger a
+         * scan on another band.
+         */
+        set_bit(SCAN_HW_CANCELLED, &local->scanning);
         if (local->ops->cancel_hw_scan)
             drv_cancel_hw_scan(local,
                 rcu_dereference_protected(local->scan_sdata,
···
     * by CRC32-C as described in <draft-ietf-tsvwg-sctpcsum-02.txt>.
     */
     if (!sctp_checksum_disable) {
-        if (!(dst->dev->features & NETIF_F_SCTP_CSUM)) {
+        if (!(dst->dev->features & NETIF_F_SCTP_CSUM) ||
+            (dst_xfrm(dst) != NULL) || packet->ipfragok) {
             __u32 crc32 = sctp_start_cksum((__u8 *)sh, cksum_buf_len);
 
             /* 3) Put the resultant value into the checksum field in the
···
     case NETDEV_PRE_UP:
         if (!(wdev->wiphy->interface_modes & BIT(wdev->iftype)))
             return notifier_from_errno(-EOPNOTSUPP);
-        if (rfkill_blocked(rdev->rfkill))
-            return notifier_from_errno(-ERFKILL);
         ret = cfg80211_can_add_interface(rdev, wdev->iftype);
         if (ret)
             return notifier_from_errno(ret);
···
     case SNDRV_PCM_FORMAT_S8:
         param.spctl |= 0x70;
         sport->wdsize = 1;
+        break;
     case SNDRV_PCM_FORMAT_S16_LE:
         param.spctl |= 0xf0;
         sport->wdsize = 2;
sound/soc/codecs/88pm860x-codec.c | +3
···
     val = ucontrol->value.integer.value[0];
     val2 = ucontrol->value.integer.value[1];
 
+    if (val >= ARRAY_SIZE(st_table) || val2 >= ARRAY_SIZE(st_table))
+        return -EINVAL;
+
     err = snd_soc_update_bits(codec, reg, 0x3f, st_table[val].m);
     if (err < 0)
         return err;
sound/soc/codecs/ab8500-codec.c | +6 -1
···
     struct ab8500_codec_drvdata *drvdata = dev_get_drvdata(codec->dev);
     struct device *dev = codec->dev;
     bool apply_fir, apply_iir;
-    int req, status;
+    unsigned int req;
+    int status;
 
     dev_dbg(dev, "%s: Enter.\n", __func__);
 
     mutex_lock(&drvdata->anc_lock);
 
     req = ucontrol->value.integer.value[0];
+    if (req >= ARRAY_SIZE(enum_anc_state)) {
+        status = -EINVAL;
+        goto cleanup;
+    }
     if (req != ANC_APPLY_FIR_IIR && req != ANC_APPLY_FIR &&
         req != ANC_APPLY_IIR) {
         dev_err(dev, "%s: ERROR: Unsupported status to set '%s'!\n",
sound/soc/codecs/max98095.c | +2 -2
···
     struct max98095_pdata *pdata = max98095->pdata;
     int channel = max98095_get_eq_channel(kcontrol->id.name);
     struct max98095_cdata *cdata;
-    int sel = ucontrol->value.integer.value[0];
+    unsigned int sel = ucontrol->value.integer.value[0];
     struct max98095_eq_cfg *coef_set;
     int fs, best, best_val, i;
     int regmask, regsave;
···
     struct max98095_pdata *pdata = max98095->pdata;
     int channel = max98095_get_bq_channel(codec, kcontrol->id.name);
     struct max98095_cdata *cdata;
-    int sel = ucontrol->value.integer.value[0];
+    unsigned int sel = ucontrol->value.integer.value[0];
     struct max98095_biquad_cfg *coef_set;
     int fs, best, best_val, i;
     int regmask, regsave;
···
     ssi_private->ssi_phys = res.start;
 
     ssi_private->irq = irq_of_parse_and_map(np, 0);
-    if (ssi_private->irq == NO_IRQ) {
+    if (ssi_private->irq == 0) {
         dev_err(&pdev->dev, "no irq for node %s\n", np->full_name);
         return -ENXIO;
     }
sound/soc/fsl/imx-mc13783.c | +1 -1
···
         return ret;
     }
 
-    if (machine_is_mx31_3ds()) {
+    if (machine_is_mx31_3ds() || machine_is_mx31moboard()) {
         imx_audmux_v2_configure_port(MX31_AUDMUX_PORT4_SSI_PINS_4,
             IMX_AUDMUX_V2_PTCR_SYN,
             IMX_AUDMUX_V2_PDCR_RXDSEL(MX31_AUDMUX_PORT1_SSI0) |
sound/soc/fsl/imx-sgtl5000.c | +5 -2
···
     struct device_node *ssi_np, *codec_np;
     struct platform_device *ssi_pdev;
     struct i2c_client *codec_dev;
-    struct imx_sgtl5000_data *data;
+    struct imx_sgtl5000_data *data = NULL;
     int int_port, ext_port;
     int ret;
 
···
         goto fail;
     }
 
-    data->codec_clk = devm_clk_get(&codec_dev->dev, NULL);
+    data->codec_clk = clk_get(&codec_dev->dev, NULL);
     if (IS_ERR(data->codec_clk)) {
         ret = PTR_ERR(data->codec_clk);
         goto fail;
···
     return 0;
 
 fail:
+    if (data && !IS_ERR(data->codec_clk))
+        clk_put(data->codec_clk);
     if (ssi_np)
         of_node_put(ssi_np);
     if (codec_np)
···
     struct imx_sgtl5000_data *data = platform_get_drvdata(pdev);
 
     snd_soc_unregister_card(&data->card);
+    clk_put(data->codec_clk);
 
     return 0;
 }
sound/soc/fsl/imx-ssi.c | +12 -11
···
     ssi->fiq_params.dma_params_rx = &ssi->dma_params_rx;
     ssi->fiq_params.dma_params_tx = &ssi->dma_params_tx;
 
-    ret = imx_pcm_fiq_init(pdev, &ssi->fiq_params);
-    if (ret)
-        goto failed_pcm_fiq;
+    ssi->fiq_init = imx_pcm_fiq_init(pdev, &ssi->fiq_params);
+    ssi->dma_init = imx_pcm_dma_init(pdev);
 
-    ret = imx_pcm_dma_init(pdev);
-    if (ret)
-        goto failed_pcm_dma;
+    if (ssi->fiq_init && ssi->dma_init) {
+        ret = ssi->fiq_init;
+        goto failed_pcm;
+    }
 
     return 0;
 
-failed_pcm_dma:
-    imx_pcm_fiq_exit(pdev);
-failed_pcm_fiq:
+failed_pcm:
     snd_soc_unregister_component(&pdev->dev);
 failed_register:
     release_mem_region(res->start, resource_size(res));
···
     struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
     struct imx_ssi *ssi = platform_get_drvdata(pdev);
 
-    imx_pcm_dma_exit(pdev);
-    imx_pcm_fiq_exit(pdev);
+    if (!ssi->dma_init)
+        imx_pcm_dma_exit(pdev);
+
+    if (!ssi->fiq_init)
+        imx_pcm_fiq_exit(pdev);
 
     snd_soc_unregister_component(&pdev->dev);
 
sound/soc/fsl/imx-ssi.h | +2
···
     struct imx_dma_data filter_data_rx;
     struct imx_pcm_fiq_params fiq_params;
 
+    int fiq_init;
+    int dma_init;
     int enabled;
 };
 
sound/soc/omap/Kconfig | +2 -2
···
 config SND_OMAP_SOC
     tristate "SoC Audio for the Texas Instruments OMAP chips"
-    depends on (ARCH_OMAP && DMA_OMAP) || (ARCH_ARM && COMPILE_TEST)
+    depends on (ARCH_OMAP && DMA_OMAP) || (ARM && COMPILE_TEST)
     select SND_DMAENGINE_PCM
 
 config SND_OMAP_SOC_DMIC
···
 
 config SND_OMAP_SOC_RX51
     tristate "SoC Audio support for Nokia RX-51"
-    depends on SND_OMAP_SOC && ARCH_ARM && (MACH_NOKIA_RX51 || COMPILE_TEST)
+    depends on SND_OMAP_SOC && ARM && (MACH_NOKIA_RX51 || COMPILE_TEST)
     select SND_OMAP_SOC_MCBSP
     select SND_SOC_TLV320AIC3X
     select SND_SOC_TPA6130A2
···
  * @die_mem: a buffer for result DIE
  *
  * Search a non-inlined function DIE which includes @addr. Stores the
- * DIE to @die_mem and returns it if found. Returns NULl if failed.
+ * DIE to @die_mem and returns it if found. Returns NULL if failed.
  */
 Dwarf_Die *die_find_realfunc(Dwarf_Die *cu_die, Dwarf_Addr addr,
                              Dwarf_Die *die_mem)
···
 }
 
 /**
- * die_find_inlinefunc - Search an inlined function at given address
- * @cu_die: a CU DIE which including @addr
+ * die_find_top_inlinefunc - Search the top inlined function at given address
+ * @sp_die: a subprogram DIE which including @addr
  * @addr: target address
  * @die_mem: a buffer for result DIE
  *
  * Search an inlined function DIE which includes @addr. Stores the
- * DIE to @die_mem and returns it if found. Returns NULl if failed.
+ * DIE to @die_mem and returns it if found. Returns NULL if failed.
+ * Even if several inlined functions are expanded recursively, this
+ * doesn't trace it down, and returns the topmost one.
+ */
+Dwarf_Die *die_find_top_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr,
+                                   Dwarf_Die *die_mem)
+{
+    return die_find_child(sp_die, __die_find_inline_cb, &addr, die_mem);
+}
+
+/**
+ * die_find_inlinefunc - Search an inlined function at given address
+ * @sp_die: a subprogram DIE which including @addr
+ * @addr: target address
+ * @die_mem: a buffer for result DIE
+ *
+ * Search an inlined function DIE which includes @addr. Stores the
+ * DIE to @die_mem and returns it if found. Returns NULL if failed.
  * If several inlined functions are expanded recursively, this trace
- * it and returns deepest one.
+ * it down and returns deepest one.
  */
 Dwarf_Die *die_find_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr,
                                Dwarf_Die *die_mem)
tools/perf/util/dwarf-aux.h | +5 -1
···
 extern Dwarf_Die *die_find_realfunc(Dwarf_Die *cu_die, Dwarf_Addr addr,
                                     Dwarf_Die *die_mem);
 
-/* Search an inlined function including given address */
+/* Search the top inlined function including given address */
+extern Dwarf_Die *die_find_top_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr,
+                                          Dwarf_Die *die_mem);
+
+/* Search the deepest inlined function including given address */
 extern Dwarf_Die *die_find_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr,
                                       Dwarf_Die *die_mem);
 
tools/perf/util/header.c | +12
···
     if (perf_file_header__read(&f_header, header, fd) < 0)
         return -EINVAL;
 
+    /*
+     * Sanity check that perf.data was written cleanly; data size is
+     * initialized to 0 and updated only if the on_exit function is run.
+     * If data size is still 0 then the file contains only partial
+     * information. Just warn user and process it as much as it can.
+     */
+    if (f_header.data.size == 0) {
+        pr_warning("WARNING: The %s file's data size field is 0 which is unexpected.\n"
+                   "Was the 'perf record' command properly terminated?\n",
+                   session->filename);
+    }
+
     nr_attrs = f_header.attrs.size / f_header.attr_size;
     lseek(fd, f_header.attrs.offset, SEEK_SET);
 
tools/perf/util/probe-finder.c | +33 -16
···
                    struct perf_probe_point *ppt)
 {
     Dwarf_Die cudie, spdie, indie;
-    Dwarf_Addr _addr, baseaddr;
-    const char *fname = NULL, *func = NULL, *tmp;
+    Dwarf_Addr _addr = 0, baseaddr = 0;
+    const char *fname = NULL, *func = NULL, *basefunc = NULL, *tmp;
     int baseline = 0, lineno = 0, ret = 0;
 
     /* Adjust address with bias */
···
     /* Find a corresponding function (name, baseline and baseaddr) */
     if (die_find_realfunc(&cudie, (Dwarf_Addr)addr, &spdie)) {
         /* Get function entry information */
-        tmp = dwarf_diename(&spdie);
-        if (!tmp ||
+        func = basefunc = dwarf_diename(&spdie);
+        if (!func ||
             dwarf_entrypc(&spdie, &baseaddr) != 0 ||
-            dwarf_decl_line(&spdie, &baseline) != 0)
+            dwarf_decl_line(&spdie, &baseline) != 0) {
+            lineno = 0;
             goto post;
-        func = tmp;
+        }
 
-        if (addr == (unsigned long)baseaddr)
+        if (addr == (unsigned long)baseaddr) {
             /* Function entry - Relative line number is 0 */
             lineno = baseline;
-        else if (die_find_inlinefunc(&spdie, (Dwarf_Addr)addr,
-                                     &indie)) {
+            fname = dwarf_decl_file(&spdie);
+            goto post;
+        }
+
+        /* Track down the inline functions step by step */
+        while (die_find_top_inlinefunc(&spdie, (Dwarf_Addr)addr,
+                                       &indie)) {
+            /* There is an inline function */
             if (dwarf_entrypc(&indie, &_addr) == 0 &&
-                _addr == addr)
+                _addr == addr) {
                 /*
                 * addr is at an inline function entry.
                 * In this case, lineno should be the call-site
-                * line number.
+                * line number. (overwrite lineinfo)
                 */
                 lineno = die_get_call_lineno(&indie);
-            else {
+                fname = die_get_call_file(&indie);
+                break;
+            } else {
                 /*
                 * addr is in an inline function body.
                 * Since lineno points one of the lines
···
                 * be the entry line of the inline function.
                 */
                 tmp = dwarf_diename(&indie);
-                if (tmp &&
-                    dwarf_decl_line(&spdie, &baseline) == 0)
-                    func = tmp;
+                if (!tmp ||
+                    dwarf_decl_line(&indie, &baseline) != 0)
+                    break;
+                func = tmp;
+                spdie = indie;
             }
         }
+        /* Verify the lineno and baseline are in a same file */
+        tmp = dwarf_decl_file(&spdie);
+        if (!tmp || strcmp(tmp, fname) != 0)
+            lineno = 0;
     }
 
 post:
     /* Make a relative line number or an offset */
     if (lineno)
         ppt->line = lineno - baseline;
-    else if (func)
+    else if (basefunc) {
         ppt->offset = addr - (unsigned long)baseaddr;
+        func = basefunc;
+    }
 
     /* Duplicate strings */
     if (func) {
tools/perf/util/session.c | +3 -1
···
         tool->sample = process_event_sample_stub;
     if (tool->mmap == NULL)
         tool->mmap = process_event_stub;
+    if (tool->mmap2 == NULL)
+        tool->mmap2 = process_event_stub;
     if (tool->comm == NULL)
         tool->comm = process_event_stub;
     if (tool->fork == NULL)
···
     file_offset = page_offset;
     head = data_offset - page_offset;
 
-    if (data_offset + data_size < file_size)
+    if (data_size && (data_offset + data_size < file_size))
         file_size = data_offset + data_size;
 
     progress_next = file_size / 16;