···
+Code of Conflict
+----------------
+
+The Linux kernel development effort is a very personal process compared
+to "traditional" ways of developing software. Your code and ideas
+behind it will be carefully reviewed, often resulting in critique and
+criticism. The review will almost always require improvements to the
+code before it can be included in the kernel. Know that this happens
+because everyone involved wants to see the best possible solution for
+the overall success of Linux. This development process has been proven
+to create the most robust operating system kernel ever, and we do not
+want to do anything to cause the quality of submission and eventual
+result to ever decrease.
+
+If however, anyone feels personally abused, threatened, or otherwise
+uncomfortable due to this process, that is not acceptable. If so,
+please contact the Linux Foundation's Technical Advisory Board at
+<tab@lists.linux-foundation.org>, or the individual members, and they
+will work to resolve the issue to the best of their ability. For more
+information on who is on the Technical Advisory Board and what their
+role is, please see:
+	http://www.linuxfoundation.org/programs/advisory-councils/tab
+
+As a reviewer of code, please strive to keep things civil and focused on
+the technical issues involved. We are all humans, and frustrations can
+be high on both sides of the process. Try to keep in mind the immortal
+words of Bill and Ted, "Be excellent to each other."
···
 - pclkN, clkN: Pairs of parent of input clock and input clock to the
	devices in this power domain. Maximum of 4 pairs (N = 0 to 3)
	are supported currently.
+- power-domains: phandle pointing to the parent power domain, for more details
+  see Documentation/devicetree/bindings/power/power_domain.txt

 Node of a device using power domains must have a power-domains property
 defined with a phandle to respective power domain.
+4
Documentation/devicetree/bindings/arm/sti.txt
···
 Required root node property:
 compatible = "st,stih407";

+Boards with the ST STiH410 SoC shall have the following properties:
+Required root node property:
+compatible = "st,stih410";
+
 Boards with the ST STiH418 SoC shall have the following properties:
 Required root node property:
 compatible = "st,stih418";
+1
Documentation/devicetree/bindings/i2c/i2c-imx.txt
···
 - "fsl,vf610-i2c" for I2C compatible with the one integrated on Vybrid vf610 SoC
 - reg : Should contain I2C/HS-I2C registers location and length
 - interrupts : Should contain I2C/HS-I2C interrupt
+- clocks : Should contain the I2C/HS-I2C clock specifier

 Optional properties:
 - clock-frequency : Contains desired I2C/HS-I2C bus clock frequency in Hz.
···
 APM X-Gene SoC.

 Required properties for all the ethernet interfaces:
-- compatible: Should be "apm,xgene-enet"
+- compatible: Should state binding information from the following list,
+  - "apm,xgene-enet": RGMII based 1G interface
+  - "apm,xgene1-sgenet": SGMII based 1G interface
+  - "apm,xgene1-xgenet": XFI based 10G interface
 - reg: Address and length of the register set for the device. It contains the
   information of registers in the same order as described by reg-names
 - reg-names: Should contain the register set names
+3-1
Documentation/devicetree/bindings/net/dsa/dsa.txt
···
 (DSA_MAX_SWITCHES).
 Each of these switch child nodes should have the following required properties:

-- reg			: Describes the switch address on the MII bus
+- reg			: Contains two fields. The first one describes the
+			  address on the MII bus. The second is the switch
+			  number that must be unique in cascaded configurations
 - #address-cells	: Must be 1
 - #size-cells		: Must be 0

···
 providing multiple PM domains (e.g. power controllers), but can be any value
 as specified by device tree binding documentation of particular provider.

+Optional properties:
+ - power-domains : A phandle and PM domain specifier as defined by bindings of
+                   the power controller specified by phandle.
+   Some power domains might be powered from another power domain (or have
+   other hardware specific dependencies). For representing such dependency
+   a standard PM domain consumer binding is used. When provided, all domains
+   created by the given provider should be subdomains of the domain
+   specified by this binding. More details about power domain specifier are
+   available in the next section.
+
 Example:

	power: power-controller@12340000 {
···
 The node above defines a power controller that is a PM domain provider and
 expects one cell as its phandle argument.
+
+Example 2:
+
+	parent: power-controller@12340000 {
+		compatible = "foo,power-controller";
+		reg = <0x12340000 0x1000>;
+		#power-domain-cells = <1>;
+	};
+
+	child: power-controller@12341000 {
+		compatible = "foo,power-controller";
+		reg = <0x12341000 0x1000>;
+		power-domains = <&parent 0>;
+		#power-domain-cells = <1>;
+	};
+
+The nodes above define two power controllers: 'parent' and 'child'.
+Domains created by the 'child' power controller are subdomains of '0' power
+domain provided by the 'parent' power controller.

 ==PM domain consumers==
···
+ETRAX FS UART
+
+Required properties:
+- compatible : "axis,etraxfs-uart"
+- reg: offset and length of the register set for the device.
+- interrupts: device interrupt
+
+Optional properties:
+- {dtr,dsr,ri,cd}-gpios: specify a GPIO for DTR/DSR/RI/CD
+  line respectively.
+
+Example:
+
+serial@b0026000 {
+	compatible = "axis,etraxfs-uart";
+	reg = <0xb0026000 0x1000>;
+	interrupts = <68>;
+	status = "disabled";
+};
···
 - reg-io-width : the size (in bytes) of the IO accesses that should be
   performed on the device. If this property is not present then single byte
   accesses are used.
+- dcd-override : Override the DCD modem status signal. This signal will always
+  be reported as active instead of being obtained from the modem status
+  register. Define this if your serial port does not use this pin.
+- dsr-override : Override the DSR modem status signal. This signal will always
+  be reported as active instead of being obtained from the modem status
+  register. Define this if your serial port does not use this pin.
+- cts-override : Override the CTS modem status signal. This signal will always
+  be reported as active instead of being obtained from the modem status
+  register. Define this if your serial port does not use this pin.
+- ri-override : Override the RI modem status signal. This signal will always be
+  reported as inactive instead of being obtained from the modem status register.
+  Define this if your serial port does not use this pin.

 Example:
···
	interrupts = <10>;
	reg-shift = <2>;
	reg-io-width = <4>;
+	dcd-override;
+	dsr-override;
+	cts-override;
+	ri-override;
 };

 Example with one clock:
···

       devicetree@vger.kernel.org

+   and Cc: the DT maintainers. Use scripts/get_maintainer.pl to identify
+   all of the DT maintainers.
+
 3) The Documentation/ portion of the patch should come in the series before
    the code implementing the binding.

···
 - atmel,disable : Should be present if you want to disable the watchdog.
 - atmel,idle-halt : Should be present if you want to stop the watchdog when
   entering idle state.
+  CAUTION: This property should be used with care: it makes the watchdog
+  stop counting when the CPU is in idle state, so the watchdog reset time
+  depends on mean CPU usage, and the watchdog will not reset the system at
+  all if the CPU stops working while it is in idle state, which is
+  probably not what you want.
 - atmel,dbg-halt : Should be present if you want to stop the watchdog when
   entering debug state.

+8
Documentation/input/alps.txt
···
  byte 4:    0   y6   y5   y4   y3   y2   y1   y0
  byte 5:    0   z6   z5   z4   z3   z2   z1   z0

+Protocol Version 2 DualPoint devices send standard PS/2 mouse packets for
+the DualPoint Stick.
+
 Dualpoint device -- interleaved packet format
 ---------------------------------------------
···
  byte 6:    0   y9   y8   y7    1    m    r    l
  byte 7:    0   y6   y5   y4   y3   y2   y1   y0
  byte 8:    0   z6   z5   z4   z3   z2   z1   z0
+
+Devices which use the interleaving format normally send standard PS/2 mouse
+packets for the DualPoint Stick + ALPS Absolute Mode packets for the
+touchpad, switching to the interleaved packet format when both the stick and
+the touchpad are used at the same time.

 ALPS Absolute Mode - Protocol Version 3
 ---------------------------------------
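The bit tables above can be exercised in plain C. The following is an illustrative, standalone decoder for the stick fields of the interleaved packet (bytes 6-8), written directly from the layout shown; the function names are mine, not alps.c's:

```c
#include <assert.h>

/* Illustrative decoder for bytes 6-8 of the interleaved packet format:
 *   byte 6:    0   y9   y8   y7    1    m    r    l
 *   byte 7:    0   y6   y5   y4   y3   y2   y1   y0
 *   byte 8:    0   z6   z5   z4   z3   z2   z1   z0
 */
static unsigned int stick_y(unsigned char b6, unsigned char b7)
{
	/* y9..y7 live in bits 6..4 of byte 6, y6..y0 in byte 7 */
	return ((unsigned int)((b6 >> 4) & 0x7) << 7) | (b7 & 0x7f);
}

static unsigned int stick_z(unsigned char b8)
{
	return b8 & 0x7f;
}

static unsigned int stick_buttons(unsigned char b6)
{
	return b6 & 0x7;	/* bit 2 = m, bit 1 = r, bit 0 = l */
}
```

Mapping the decoded button bits to input events is left to the driver; this sketch only shows the field extraction implied by the tables.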
+6
Documentation/input/event-codes.txt
···
 The kernel does not provide button emulation for such devices but treats
 them as any other INPUT_PROP_BUTTONPAD device.

+INPUT_PROP_ACCELEROMETER
+-------------------------
+Directional axes on this device (absolute and/or relative x, y, z) represent
+accelerometer data. All other axes retain their meaning. A device must not mix
+regular directional axes and accelerometer axes on the same event node.
+
 Guidelines:
 ==========
 The guidelines below ensure proper single-touch and multi-finger functionality.
+6-3
Documentation/input/multi-touch-protocol.txt
···

 The type of approaching tool. A lot of kernel drivers cannot distinguish
 between different tool types, such as a finger or a pen. In such cases, the
-event should be omitted. The protocol currently supports MT_TOOL_FINGER and
-MT_TOOL_PEN [2]. For type B devices, this event is handled by input core;
-drivers should instead use input_mt_report_slot_state().
+event should be omitted. The protocol currently supports MT_TOOL_FINGER,
+MT_TOOL_PEN, and MT_TOOL_PALM [2]. For type B devices, this event is handled
+by input core; drivers should instead use input_mt_report_slot_state().
+A contact's ABS_MT_TOOL_TYPE may change over time while still touching the
+device, because the firmware may not be able to determine which tool is being
+used when it first appears.

 ABS_MT_BLOB_ID

+17-5
Documentation/power/suspend-and-interrupts.txt
···

 The IRQF_NO_SUSPEND flag is used to indicate that to the IRQ subsystem when
 requesting a special-purpose interrupt. It causes suspend_device_irqs() to
-leave the corresponding IRQ enabled so as to allow the interrupt to work all
-the time as expected.
+leave the corresponding IRQ enabled so as to allow the interrupt to work as
+expected during the suspend-resume cycle, but does not guarantee that the
+interrupt will wake the system from a suspended state -- for such cases it is
+necessary to use enable_irq_wake().

 Note that the IRQF_NO_SUSPEND flag affects the entire IRQ and not just one
 user of it. Thus, if the IRQ is shared, all of the interrupt handlers installed
···
 IRQF_NO_SUSPEND and enable_irq_wake()
 -------------------------------------

-There are no valid reasons to use both enable_irq_wake() and the IRQF_NO_SUSPEND
-flag on the same IRQ.
+There are very few valid reasons to use both enable_irq_wake() and the
+IRQF_NO_SUSPEND flag on the same IRQ, and it is never valid to use both for the
+same device.

 First of all, if the IRQ is not shared, the rules for handling IRQF_NO_SUSPEND
 interrupts (interrupt handlers are invoked after suspend_device_irqs()) are
···
 Second, both enable_irq_wake() and IRQF_NO_SUSPEND apply to entire IRQs and not
 to individual interrupt handlers, so sharing an IRQ between a system wakeup
-interrupt source and an IRQF_NO_SUSPEND interrupt source does not make sense.
+interrupt source and an IRQF_NO_SUSPEND interrupt source does not generally
+make sense.
+
+In rare cases an IRQ can be shared between a wakeup device driver and an
+IRQF_NO_SUSPEND user. In order for this to be safe, the wakeup device driver
+must be able to discern spurious IRQs from genuine wakeup events (signalling
+the latter to the core with pm_system_wakeup()), must use enable_irq_wake() to
+ensure that the IRQ will function as a wakeup source, and must request the IRQ
+with IRQF_COND_SUSPEND to tell the core that it meets these requirements. If
+these requirements are not met, it is not valid to use IRQF_COND_SUSPEND.
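The sharing rules above can be summarized as a small predicate. This is a hedged, standalone model only: the F_* constants are stand-ins of my own, not the kernel's real IRQF_* values, and the function is not part of the IRQ core:

```c
#include <assert.h>

/* Stand-in flag values for this illustration (NOT the kernel's IRQF_*). */
#define F_NO_SUSPEND	0x1	/* user requested IRQ with IRQF_NO_SUSPEND */
#define F_COND_SUSPEND	0x2	/* user requested IRQ with IRQF_COND_SUSPEND */
#define F_WAKEUP	0x4	/* user's driver called enable_irq_wake() */

/* Returns 1 when two users sharing one IRQ follow the rules above:
 * a wakeup user may share with an IRQF_NO_SUSPEND user only if the
 * wakeup user also passed IRQF_COND_SUSPEND (i.e. it can discern
 * spurious IRQs from genuine wakeup events). */
static int shared_irq_valid(unsigned int a, unsigned int b)
{
	unsigned int wake = (a & F_WAKEUP) ? a : (b & F_WAKEUP) ? b : 0;
	unsigned int nosusp = (a & F_NO_SUSPEND) ? a
			     : (b & F_NO_SUSPEND) ? b : 0;

	if (!wake || !nosusp)
		return 1;	/* no wakeup/NO_SUSPEND mix to check */
	return (wake & F_COND_SUSPEND) != 0;
}
```

The real kernel enforces this at request_irq() time; the sketch only captures the compatibility matrix described in the text.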
+56-24
MAINTAINERS
···
 F:	include/uapi/linux/kfd_ioctl.h

 AMD MICROCODE UPDATE SUPPORT
-M:	Andreas Herrmann <herrmann.der.user@googlemail.com>
-L:	amd64-microcode@amd64.org
+M:	Borislav Petkov <bp@alien8.de>
 S:	Maintained
 F:	arch/x86/kernel/cpu/microcode/amd*
···
 F:	arch/arm/boot/dts/imx*
 F:	arch/arm/configs/imx*_defconfig

+ARM/FREESCALE VYBRID ARM ARCHITECTURE
+M:	Shawn Guo <shawn.guo@linaro.org>
+M:	Sascha Hauer <kernel@pengutronix.de>
+R:	Stefan Agner <stefan@agner.ch>
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S:	Maintained
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
+F:	arch/arm/mach-imx/*vf610*
+F:	arch/arm/boot/dts/vf*
+
 ARM/GLOMATION GESBC9312SX MACHINE SUPPORT
 M:	Lennert Buytenhek <kernel@wantstofly.org>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	arch/arm/mach-mvebu/
-F:	drivers/rtc/armada38x-rtc
+F:	drivers/rtc/rtc-armada38x.c

 ARM/Marvell Berlin SoC support
 M:	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
···
 M:	Jason Cooper <jason@lakedaemon.net>
 M:	Andrew Lunn <andrew@lunn.ch>
 M:	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
+M:	Gregory Clement <gregory.clement@free-electrons.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	arch/arm/mach-dove/
···
 F:	drivers/*/*rockchip*
 F:	drivers/*/*/*rockchip*
 F:	sound/soc/rockchip/
+N:	rockchip

 ARM/SAMSUNG EXYNOS ARM ARCHITECTURES
 M:	Kukjin Kim <kgene@kernel.org>
···
 F:	include/linux/platform_data/at24.h

 ATA OVER ETHERNET (AOE) DRIVER
-M:	"Ed L. Cashin" <ecashin@coraid.com>
-W:	http://support.coraid.com/support/linux
+M:	"Ed L. Cashin" <ed.cashin@acm.org>
+W:	http://www.openaoe.org/
 S:	Supported
 F:	Documentation/aoe/
 F:	drivers/block/aoe/
···
 F:	drivers/net/ethernet/atheros/

 ATM
-M:	Chas Williams <chas@cmf.nrl.navy.mil>
+M:	Chas Williams <3chas3@gmail.com>
 L:	linux-atm-general@lists.sourceforge.net (moderated for non-subscribers)
 L:	netdev@vger.kernel.org
 W:	http://linux-atm.sourceforge.net
···
 BONDING DRIVER
 M:	Jay Vosburgh <j.vosburgh@gmail.com>
 M:	Veaceslav Falico <vfalico@gmail.com>
-M:	Andy Gospodarek <andy@greyhouse.net>
+M:	Andy Gospodarek <gospo@cumulusnetworks.com>
 L:	netdev@vger.kernel.org
 W:	http://sourceforge.net/projects/bonding/
 S:	Supported
···

 BROADCOM BCM281XX/BCM11XXX/BCM216XX ARM ARCHITECTURE
 M:	Christian Daudt <bcm@fixthebug.org>
-M:	Matt Porter <mporter@linaro.org>
 M:	Florian Fainelli <f.fainelli@gmail.com>
 L:	bcm-kernel-feedback-list@broadcom.com
 T:	git git://github.com/broadcom/mach-bcm
···

 CAN NETWORK LAYER
 M:	Oliver Hartkopp <socketcan@hartkopp.net>
+M:	Marc Kleine-Budde <mkl@pengutronix.de>
 L:	linux-can@vger.kernel.org
-W:	http://gitorious.org/linux-can
+W:	https://github.com/linux-can
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next.git
 S:	Maintained
···
 M:	Wolfgang Grandegger <wg@grandegger.com>
 M:	Marc Kleine-Budde <mkl@pengutronix.de>
 L:	linux-can@vger.kernel.org
-W:	http://gitorious.org/linux-can
+W:	https://github.com/linux-can
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next.git
 S:	Maintained
···
 S:	Maintained
 F:	Documentation/hwmon/dme1737
 F:	drivers/hwmon/dme1737.c
+
+DMI/SMBIOS SUPPORT
+M:	Jean Delvare <jdelvare@suse.de>
+S:	Maintained
+F:	drivers/firmware/dmi-id.c
+F:	drivers/firmware/dmi_scan.c
+F:	include/linux/dmi.h

 DOCKING STATION DRIVER
 M:	Shaohua Li <shaohua.li@intel.com>
···
 F:	drivers/platform/x86/intel_menlow.c

 INTEL IA32 MICROCODE UPDATE SUPPORT
-M:	Tigran Aivazian <tigran@aivazian.fsnet.co.uk>
+M:	Borislav Petkov <bp@alien8.de>
 S:	Maintained
 F:	arch/x86/kernel/cpu/microcode/core*
 F:	arch/x86/kernel/cpu/microcode/intel*
···
 S:	Maintained
 F:	drivers/char/hw_random/ixp4xx-rng.c

-INTEL ETHERNET DRIVERS (e100/e1000/e1000e/fm10k/igb/igbvf/ixgb/ixgbe/ixgbevf/i40e/i40evf)
+INTEL ETHERNET DRIVERS
 M:	Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-M:	Jesse Brandeburg <jesse.brandeburg@intel.com>
-M:	Bruce Allan <bruce.w.allan@intel.com>
-M:	Carolyn Wyborny <carolyn.wyborny@intel.com>
-M:	Don Skidmore <donald.c.skidmore@intel.com>
-M:	Greg Rose <gregory.v.rose@intel.com>
-M:	Matthew Vick <matthew.vick@intel.com>
-M:	John Ronciak <john.ronciak@intel.com>
-M:	Mitch Williams <mitch.a.williams@intel.com>
-M:	Linux NICS <linux.nics@intel.com>
-L:	e1000-devel@lists.sourceforge.net
+R:	Jesse Brandeburg <jesse.brandeburg@intel.com>
+R:	Shannon Nelson <shannon.nelson@intel.com>
+R:	Carolyn Wyborny <carolyn.wyborny@intel.com>
+R:	Don Skidmore <donald.c.skidmore@intel.com>
+R:	Matthew Vick <matthew.vick@intel.com>
+R:	John Ronciak <john.ronciak@intel.com>
+R:	Mitch Williams <mitch.a.williams@intel.com>
+L:	intel-wired-lan@lists.osuosl.org
 W:	http://www.intel.com/support/feedback.htm
 W:	http://e1000.sourceforge.net/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net.git
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next.git
+Q:	http://patchwork.ozlabs.org/project/intel-wired-lan/list/
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-queue.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue.git
 S:	Supported
 F:	Documentation/networking/e100.txt
 F:	Documentation/networking/e1000.txt
···
 L:	netdev@vger.kernel.org
 F:	drivers/net/ethernet/samsung/sxgbe/

+SAMSUNG THERMAL DRIVER
+M:	Lukasz Majewski <l.majewski@samsung.com>
+L:	linux-pm@vger.kernel.org
+L:	linux-samsung-soc@vger.kernel.org
+S:	Supported
+T:	https://github.com/lmajewski/linux-samsung-thermal.git
+F:	drivers/thermal/samsung/
+
 SAMSUNG USB2 PHY DRIVER
 M:	Kamil Debski <k.debski@samsung.com>
 L:	linux-kernel@vger.kernel.org
···
 S:	Maintained
 F:	Documentation/usb/ohci.txt
 F:	drivers/usb/host/ohci*
+
+USB OTG FSM (Finite State Machine)
+M:	Peter Chen <Peter.Chen@freescale.com>
+T:	git git://github.com/hzpeterchen/linux-usb.git
+L:	linux-usb@vger.kernel.org
+S:	Maintained
+F:	drivers/usb/common/usb-otg-fsm.c

 USB OVER IP DRIVER
 M:	Valentina Manea <valentina.manea.m@gmail.com>
···
 /* Forward declaration, a strange C thing */
 struct task_struct;

-/* Return saved PC of a blocked thread */
-unsigned long thread_saved_pc(struct task_struct *t);
-
 #define task_pt_regs(p) \
	((struct pt_regs *)(THREAD_SIZE + (void *)task_stack_page(p)) - 1)

···
 #define release_segments(mm)        do { } while (0)

 #define KSTK_EIP(tsk)   (task_pt_regs(tsk)->ret)
+#define KSTK_ESP(tsk)   (task_pt_regs(tsk)->sp)

 /*
  * Whereabouts of Task's sp, fp, blink when it was last seen in kernel mode.
  * Look in process.c for details of kernel stack layout
  */
-#define KSTK_ESP(tsk)   (tsk->thread.ksp)
+#define TSK_K_ESP(tsk)		(tsk->thread.ksp)

-#define KSTK_REG(tsk, off)	(*((unsigned int *)(KSTK_ESP(tsk) + \
+#define TSK_K_REG(tsk, off)	(*((unsigned int *)(TSK_K_ESP(tsk) + \
					sizeof(struct callee_regs) + off)))

-#define KSTK_BLINK(tsk)   KSTK_REG(tsk, 4)
-#define KSTK_FP(tsk)      KSTK_REG(tsk, 0)
+#define TSK_K_BLINK(tsk)	TSK_K_REG(tsk, 4)
+#define TSK_K_FP(tsk)		TSK_K_REG(tsk, 0)
+
+#define thread_saved_pc(tsk)	TSK_K_BLINK(tsk)

 extern void start_thread(struct pt_regs * regs, unsigned long pc,
		         unsigned long usp);
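The TSK_K_* macros above read values saved just past a struct callee_regs at the task's kernel stack pointer. A standalone toy model of that offset arithmetic (struct layout and names here are stand-ins, not the kernel's):

```c
#include <assert.h>

/* Stand-in for the kernel's struct callee_regs; only its size matters
 * for the offset arithmetic being modelled. */
struct callee_regs { unsigned int r[13]; };

/* Model of TSK_K_REG: dereference (ksp + sizeof(callee_regs) + off). */
#define TSK_K_REG(ksp, off) \
	(*(unsigned int *)((char *)(ksp) + sizeof(struct callee_regs) + (off)))
#define TSK_K_FP(ksp)		TSK_K_REG(ksp, 0)
#define TSK_K_BLINK(ksp)	TSK_K_REG(ksp, 4)

/* Build a fake kernel stack and check that the macros find the values
 * placed at the modelled offsets (FP at +0, BLINK at +4). */
static int tsk_k_demo(void)
{
	unsigned char stack[sizeof(struct callee_regs) + 8];

	*(unsigned int *)(stack + sizeof(struct callee_regs) + 0) = 0x1111;
	*(unsigned int *)(stack + sizeof(struct callee_regs) + 4) = 0x2222;
	return TSK_K_FP(stack) == 0x1111 && TSK_K_BLINK(stack) == 0x2222;
}
```

This also makes the rename's point visible: the values live relative to the task's saved ksp, not to its pt_regs, so the old KSTK_ names were misleading.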
+37
arch/arc/include/asm/stacktrace.h
···
+/*
+ * Copyright (C) 2014-15 Synopsys, Inc. (www.synopsys.com)
+ * Copyright (C) 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __ASM_STACKTRACE_H
+#define __ASM_STACKTRACE_H
+
+#include <linux/sched.h>
+
+/**
+ * arc_unwind_core - Unwind the kernel mode stack for an execution context
+ * @tsk:		NULL for current task, specific task otherwise
+ * @regs:		pt_regs used to seed the unwinder {SP, FP, BLINK, PC}
+ *			If NULL, use pt_regs of @tsk (if !NULL) otherwise
+ *			use the current values of {SP, FP, BLINK, PC}
+ * @consumer_fn:	Callback invoked for each frame unwound
+ *			Returns 0 to continue unwinding, -1 to stop
+ * @arg:		Arg to callback
+ *
+ * Returns the address of first function in stack
+ *
+ * Semantics:
+ *  - synchronous unwinding (e.g. dump_stack): @tsk  NULL, @regs NULL
+ *  - Asynchronous unwinding of sleeping task: @tsk !NULL, @regs NULL
+ *  - Asynchronous unwinding of intr/excp etc: @tsk !NULL, @regs !NULL
+ */
+notrace noinline unsigned int arc_unwind_core(
+	struct task_struct *tsk, struct pt_regs *regs,
+	int (*consumer_fn) (unsigned int, void *),
+	void *arg);
+
+#endif /* __ASM_STACKTRACE_H */
-23
arch/arc/kernel/process.c
···
	return 0;
 }

-/*
- * API: expected by schedular Code: If thread is sleeping where is that.
- * What is this good for? it will be always the scheduler or ret_from_fork.
- * So we hard code that anyways.
- */
-unsigned long thread_saved_pc(struct task_struct *t)
-{
-	struct pt_regs *regs = task_pt_regs(t);
-	unsigned long blink = 0;
-
-	/*
-	 * If the thread being queried for in not itself calling this, then it
-	 * implies it is not executing, which in turn implies it is sleeping,
-	 * which in turn implies it got switched OUT by the schedular.
-	 * In that case, it's kernel mode blink can reliably retrieved as per
-	 * the picture above (right above pt_regs).
-	 */
-	if (t != current && t->state != TASK_RUNNING)
-		blink = *((unsigned int *)regs - 1);
-
-	return blink;
-}
-
 int elf_check_arch(const struct elf32_hdr *x)
 {
	unsigned int eflags;
+18-6
arch/arc/kernel/signal.c
···
			     sigset_t *set)
 {
	int err;
-	err = __copy_to_user(&(sf->uc.uc_mcontext.regs), regs,
+	err = __copy_to_user(&(sf->uc.uc_mcontext.regs.scratch), regs,
			     sizeof(sf->uc.uc_mcontext.regs.scratch));
	err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(sigset_t));

···
	if (!err)
		set_current_blocked(&set);

-	err |= __copy_from_user(regs, &(sf->uc.uc_mcontext.regs),
+	err |= __copy_from_user(regs, &(sf->uc.uc_mcontext.regs.scratch),
			      sizeof(sf->uc.uc_mcontext.regs.scratch));

	return err;
···

	/* Don't restart from sigreturn */
	syscall_wont_restart(regs);
+
+	/*
+	 * Ensure that sigreturn always returns to user mode (in case the
+	 * regs saved on user stack got fudged between save and sigreturn)
+	 * Otherwise it is easy to panic the kernel with a custom
+	 * signal handler and/or restorer which clobbers the status32/ret
+	 * to return to a bogus location in kernel mode.
+	 */
+	regs->status32 |= STATUS_U_MASK;

	return regs->r0;
···

	/*
	 * handler returns using sigreturn stub provided already by userspace
+	 * If not, nuke the process right away
	 */
-	BUG_ON(!(ksig->ka.sa.sa_flags & SA_RESTORER));
+	if (!(ksig->ka.sa.sa_flags & SA_RESTORER))
+		return 1;
+
	regs->blink = (unsigned long)ksig->ka.sa.sa_restorer;

	/* User Stack for signal handler will be above the frame just carved */
···
 handle_signal(struct ksignal *ksig, struct pt_regs *regs)
 {
	sigset_t *oldset = sigmask_to_save();
-	int ret;
+	int failed;

	/* Set up the stack frame */
-	ret = setup_rt_frame(ksig, oldset, regs);
+	failed = setup_rt_frame(ksig, oldset, regs);

-	signal_setup_done(ret, ksig, 0);
+	signal_setup_done(failed, ksig, 0);
 }

 void do_signal(struct pt_regs *regs)
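The sigreturn hardening above is a one-line invariant: whatever status word userspace smuggled into the saved frame, OR-ing the user-mode bit back in before resuming guarantees execution returns in user mode. A toy restatement (the STATUS_U_MASK value here is a stand-in, not ARC's real bit position):

```c
#include <assert.h>

/* Stand-in for the ARC STATUS32 user-mode bit; the real mask lives in
 * arch/arc headers and may differ. */
#define STATUS_U_MASK	(1u << 7)

/* Whatever 'saved' contains after copying the frame back from the user
 * stack, force the user-mode bit so sigreturn cannot resume in kernel
 * mode at an attacker-chosen address. Other bits are preserved. */
static unsigned int sanitize_status32(unsigned int saved)
{
	return saved | STATUS_U_MASK;
}
```

Note the design choice: the kernel does not reject a tampered frame, it simply clamps the one bit whose corruption would be a privilege escalation.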
+17-4
arch/arc/kernel/stacktrace.c
···
			       struct pt_regs *regs,
			       struct unwind_frame_info *frame_info)
 {
+	/*
+	 * synchronous unwinding (e.g. dump_stack)
+	 *  - uses current values of SP and friends
+	 */
	if (tsk == NULL && regs == NULL) {
		unsigned long fp, sp, blink, ret;
		frame_info->task = current;
···
		frame_info->regs.r63 = ret;
		frame_info->call_frame = 0;
	} else if (regs == NULL) {
+		/*
+		 * Asynchronous unwinding of sleeping task
+		 *  - Gets SP etc from task's pt_regs (saved bottom of kernel
+		 *    mode stack of task)
+		 */

		frame_info->task = tsk;

-		frame_info->regs.r27 = KSTK_FP(tsk);
-		frame_info->regs.r28 = KSTK_ESP(tsk);
-		frame_info->regs.r31 = KSTK_BLINK(tsk);
+		frame_info->regs.r27 = TSK_K_FP(tsk);
+		frame_info->regs.r28 = TSK_K_ESP(tsk);
+		frame_info->regs.r31 = TSK_K_BLINK(tsk);
		frame_info->regs.r63 = (unsigned int)__switch_to;

		/* In the prologue of __switch_to, first FP is saved on stack
···
		frame_info->call_frame = 0;

	} else {
+		/*
+		 * Asynchronous unwinding of intr/exception
+		 *  - Just uses the pt_regs passed
+		 */
		frame_info->task = tsk;

		frame_info->regs.r27 = regs->fp;
···
 #endif

-static noinline unsigned int
+notrace noinline unsigned int
 arc_unwind_core(struct task_struct *tsk, struct pt_regs *regs,
		int (*consumer_fn) (unsigned int, void *), void *arg)
 {
···
+/*
+ * Device tree sources for Exynos4412 TMU sensor configuration
+ *
+ * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <dt-bindings/thermal/thermal_exynos.h>
+
+#thermal-sensor-cells = <0>;
+samsung,tmu_gain = <8>;
+samsung,tmu_reference_voltage = <16>;
+samsung,tmu_noise_cancel_mode = <4>;
+samsung,tmu_efuse_value = <55>;
+samsung,tmu_min_efuse_value = <40>;
+samsung,tmu_max_efuse_value = <100>;
+samsung,tmu_first_point_trim = <25>;
+samsung,tmu_second_point_trim = <85>;
+samsung,tmu_default_temp_offset = <50>;
+samsung,tmu_cal_type = <TYPE_ONE_POINT_TRIMMING>;
···
+/*
+ * Device tree sources for default Exynos5420 thermal zone definition
+ *
+ * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+polling-delay-passive = <0>;
+polling-delay = <0>;
+trips {
+	cpu-alert-0 {
+		temperature = <85000>; /* millicelsius */
+		hysteresis = <10000>; /* millicelsius */
+		type = "active";
+	};
+	cpu-alert-1 {
+		temperature = <103000>; /* millicelsius */
+		hysteresis = <10000>; /* millicelsius */
+		type = "active";
+	};
+	cpu-alert-2 {
+		temperature = <110000>; /* millicelsius */
+		hysteresis = <10000>; /* millicelsius */
+		type = "active";
+	};
+	cpu-crit-0 {
+		temperature = <120000>; /* millicelsius */
+		hysteresis = <0>; /* millicelsius */
+		type = "critical";
+	};
+};
···
+/*
+ * Device tree sources for Exynos5440 TMU sensor configuration
+ *
+ * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <dt-bindings/thermal/thermal_exynos.h>
+
+#thermal-sensor-cells = <0>;
+samsung,tmu_gain = <5>;
+samsung,tmu_reference_voltage = <16>;
+samsung,tmu_noise_cancel_mode = <4>;
+samsung,tmu_efuse_value = <0x5d2d>;
+samsung,tmu_min_efuse_value = <16>;
+samsung,tmu_max_efuse_value = <76>;
+samsung,tmu_first_point_trim = <25>;
+samsung,tmu_second_point_trim = <70>;
+samsung,tmu_default_temp_offset = <25>;
+samsung,tmu_cal_type = <TYPE_ONE_POINT_TRIMMING>;
+25
arch/arm/boot/dts/exynos5440-trip-points.dtsi
···
+/*
+ * Device tree sources for default Exynos5440 thermal zone definition
+ *
+ * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+polling-delay-passive = <0>;
+polling-delay = <0>;
+trips {
+	cpu-alert-0 {
+		temperature = <100000>; /* millicelsius */
+		hysteresis = <0>; /* millicelsius */
+		type = "active";
+	};
+	cpu-crit-0 {
+		temperature = <105000>; /* millicelsius */
+		hysteresis = <0>; /* millicelsius */
+		type = "critical";
+	};
+};
···
 CONFIG_BLK_DEV_SD=y
 # CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_NETDEVICES=y
+CONFIG_ARM_AT91_ETHER=y
 CONFIG_MACB=y
 # CONFIG_NET_VENDOR_BROADCOM is not set
 CONFIG_DM9000=y
···
 CONFIG_PWM_TWL_LED=m
 CONFIG_OMAP_USB2=m
 CONFIG_TI_PIPE3=y
+CONFIG_TWL4030_USB=m
 CONFIG_EXT2_FS=y
 CONFIG_EXT3_FS=y
 # CONFIG_EXT3_FS_XATTR is not set
···
		if (cpu_arch)
			cpu_arch += CPU_ARCH_ARMv3;
	} else if ((read_cpuid_id() & 0x000f0000) == 0x000f0000) {
-		unsigned int mmfr0;
-
		/* Revised CPUID format. Read the Memory Model Feature
		 * Register 0 and check for VMSAv7 or PMSAv7 */
-		asm("mrc	p15, 0, %0, c0, c1, 4"
-		    : "=r" (mmfr0));
+		unsigned int mmfr0 = read_cpuid_ext(CPUID_EXT_MMFR0);
		if ((mmfr0 & 0x0000000f) >= 0x00000003 ||
		    (mmfr0 & 0x000000f0) >= 0x00000030)
			cpu_arch = CPU_ARCH_ARMv7;
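The predicate the hunk above preserves can be stated as a standalone function: ID_MMFR0 bits [3:0] encode VMSA support and bits [7:4] PMSA support, and a value of 3 in either field indicates the v7 memory model. A self-contained restatement for illustration:

```c
#include <assert.h>

/* ID_MMFR0 check from the hunk above: low nibble = VMSA support,
 * bits [7:4] = PMSA support; >= 3 in either means ARMv7. */
static int is_armv7_mmfr0(unsigned int mmfr0)
{
	return (mmfr0 & 0x0000000f) >= 0x00000003 ||
	       (mmfr0 & 0x000000f0) >= 0x00000030;
}
```

The patch itself only changes how mmfr0 is obtained (the read_cpuid_ext() accessor instead of open-coded inline asm); the decision logic is untouched.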
+1-1
arch/arm/kvm/arm.c
···

		vcpu->mode = OUTSIDE_GUEST_MODE;
		kvm_guest_exit();
-		trace_kvm_exit(*vcpu_pc(vcpu));
+		trace_kvm_exit(kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
		/*
		 * We may have taken a host interrupt in HYP mode (ie
		 * while executing the guest). This interrupt is still
+53-22
arch/arm/kvm/mmu.c
···
 	phys_addr_t addr = start, end = start + size;
 	phys_addr_t next;
 
-	pgd = pgdp + pgd_index(addr);
+	pgd = pgdp + kvm_pgd_index(addr);
 	do {
 		next = kvm_pgd_addr_end(addr, end);
 		if (!pgd_none(*pgd))
···
 	phys_addr_t next;
 	pgd_t *pgd;
 
-	pgd = kvm->arch.pgd + pgd_index(addr);
+	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
 	do {
 		next = kvm_pgd_addr_end(addr, end);
 		stage2_flush_puds(kvm, pgd, addr, next);
···
 		 __phys_to_pfn(phys_addr), PAGE_HYP_DEVICE);
 }
 
+/* Free the HW pgd, one page at a time */
+static void kvm_free_hwpgd(void *hwpgd)
+{
+	free_pages_exact(hwpgd, kvm_get_hwpgd_size());
+}
+
+/* Allocate the HW PGD, making sure that each page gets its own refcount */
+static void *kvm_alloc_hwpgd(void)
+{
+	unsigned int size = kvm_get_hwpgd_size();
+
+	return alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
+}
+
 /**
  * kvm_alloc_stage2_pgd - allocate level-1 table for stage-2 translation.
  * @kvm:	The KVM struct pointer for the VM.
···
  */
 int kvm_alloc_stage2_pgd(struct kvm *kvm)
 {
-	int ret;
 	pgd_t *pgd;
+	void *hwpgd;
 
 	if (kvm->arch.pgd != NULL) {
 		kvm_err("kvm_arch already initialized?\n");
 		return -EINVAL;
 	}
 
+	hwpgd = kvm_alloc_hwpgd();
+	if (!hwpgd)
+		return -ENOMEM;
+
+	/* When the kernel uses more levels of page tables than the
+	 * guest, we allocate a fake PGD and pre-populate it to point
+	 * to the next-level page table, which will be the real
+	 * initial page table pointed to by the VTTBR.
+	 *
+	 * When KVM_PREALLOC_LEVEL==2, we allocate a single page for
+	 * the PMD and the kernel will use folded pud.
+	 * When KVM_PREALLOC_LEVEL==1, we allocate 2 consecutive PUD
+	 * pages.
+	 */
 	if (KVM_PREALLOC_LEVEL > 0) {
+		int i;
+
 		/*
 		 * Allocate fake pgd for the page table manipulation macros to
 		 * work.  This is not used by the hardware and we have no
···
 		 */
 		pgd = (pgd_t *)kmalloc(PTRS_PER_S2_PGD * sizeof(pgd_t),
 				       GFP_KERNEL | __GFP_ZERO);
+
+		if (!pgd) {
+			kvm_free_hwpgd(hwpgd);
+			return -ENOMEM;
+		}
+
+		/* Plug the HW PGD into the fake one. */
+		for (i = 0; i < PTRS_PER_S2_PGD; i++) {
+			if (KVM_PREALLOC_LEVEL == 1)
+				pgd_populate(NULL, pgd + i,
+					     (pud_t *)hwpgd + i * PTRS_PER_PUD);
+			else if (KVM_PREALLOC_LEVEL == 2)
+				pud_populate(NULL, pud_offset(pgd, 0) + i,
+					     (pmd_t *)hwpgd + i * PTRS_PER_PMD);
+		}
 	} else {
 		/*
 		 * Allocate actual first-level Stage-2 page table used by the
 		 * hardware for Stage-2 page table walks.
 		 */
-		pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, S2_PGD_ORDER);
+		pgd = (pgd_t *)hwpgd;
 	}
-
-	if (!pgd)
-		return -ENOMEM;
-
-	ret = kvm_prealloc_hwpgd(kvm, pgd);
-	if (ret)
-		goto out_err;
 
 	kvm_clean_pgd(pgd);
 	kvm->arch.pgd = pgd;
 	return 0;
-out_err:
-	if (KVM_PREALLOC_LEVEL > 0)
-		kfree(pgd);
-	else
-		free_pages((unsigned long)pgd, S2_PGD_ORDER);
-	return ret;
 }
 
 /**
···
 		return;
 
 	unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
-	kvm_free_hwpgd(kvm);
+	kvm_free_hwpgd(kvm_get_hwpgd(kvm));
 	if (KVM_PREALLOC_LEVEL > 0)
 		kfree(kvm->arch.pgd);
-	else
-		free_pages((unsigned long)kvm->arch.pgd, S2_PGD_ORDER);
+
 	kvm->arch.pgd = NULL;
 }
···
 	pgd_t *pgd;
 	pud_t *pud;
 
-	pgd = kvm->arch.pgd + pgd_index(addr);
+	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
 	if (WARN_ON(pgd_none(*pgd))) {
 		if (!cache)
 			return NULL;
···
 	pgd_t *pgd;
 	phys_addr_t next;
 
-	pgd = kvm->arch.pgd + pgd_index(addr);
+	pgd = kvm->arch.pgd + kvm_pgd_index(addr);
 	do {
 		/*
 		 * Release kvm_mmu_lock periodically if the memory region is
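The stage-2 rework above centralizes allocation in kvm_alloc_hwpgd()/kvm_free_hwpgd() and, when the kernel has more page-table levels than the guest, plugs a contiguous hardware table into a software-only "fake" PGD so the generic table-walking macros keep working. A minimal userspace sketch of that plugging step (all names and sizes here are illustrative stand-ins, not the kernel's):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-ins for PTRS_PER_S2_PGD and PTRS_PER_PUD. */
#define FAKE_PGD_ENTRIES	16
#define NEXT_LEVEL_ENTRIES	4

/* One flat block, as alloc_pages_exact() would hand back for the HW PGD. */
static long *alloc_hw_block(void)
{
	return calloc(FAKE_PGD_ENTRIES * NEXT_LEVEL_ENTRIES, sizeof(long));
}

/* Mirror of the "plug the HW PGD into the fake one" loop: each fake
 * top-level slot points at its own slice of the contiguous block. */
static void plug_hw_block(long **fake_pgd, long *hw)
{
	for (int i = 0; i < FAKE_PGD_ENTRIES; i++)
		fake_pgd[i] = hw + i * NEXT_LEVEL_ENTRIES;
}
```

Software then walks through the fake top level, while every write lands in the one flat block that the hardware walker actually sees.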
···
  */
 void exynos_cpu_power_down(int cpu)
 {
-	if (cpu == 0 && (of_machine_is_compatible("samsung,exynos5420") ||
-		of_machine_is_compatible("samsung,exynos5800"))) {
+	if (cpu == 0 && (soc_is_exynos5420() || soc_is_exynos5800())) {
 		/*
 		 * Bypass power down for CPU0 during suspend. Check for
 		 * the SYS_PWR_REG value to decide if we are suspending
+28
arch/arm/mach-exynos/pm_domains.c
···
 		of_genpd_add_provider_simple(np, &pd->pd);
 	}
 
+	/* Assign the child power domains to their parents */
+	for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") {
+		struct generic_pm_domain *child_domain, *parent_domain;
+		struct of_phandle_args args;
+
+		args.np = np;
+		args.args_count = 0;
+		child_domain = of_genpd_get_from_provider(&args);
+		if (!child_domain)
+			continue;
+
+		if (of_parse_phandle_with_args(np, "power-domains",
+					       "#power-domain-cells", 0, &args) != 0)
+			continue;
+
+		parent_domain = of_genpd_get_from_provider(&args);
+		if (!parent_domain)
+			continue;
+
+		if (pm_genpd_add_subdomain(parent_domain, child_domain))
+			pr_warn("%s failed to add subdomain: %s\n",
+				parent_domain->name, child_domain->name);
+		else
+			pr_info("%s has as child subdomain: %s.\n",
+				parent_domain->name, child_domain->name);
+		of_node_put(np);
+	}
+
 	return 0;
 }
 arch_initcall(exynos4_pm_init_power_domain);
···
 	 * set bit IOMUXC_GPR1[21]. Or the PTP clock must be from pad
 	 * (external OSC), and we need to clear the bit.
 	 */
-	clksel = ptp_clk == enet_ref ? IMX6Q_GPR1_ENET_CLK_SEL_ANATOP :
-				       IMX6Q_GPR1_ENET_CLK_SEL_PAD;
+	clksel = clk_is_match(ptp_clk, enet_ref) ?
+		 IMX6Q_GPR1_ENET_CLK_SEL_ANATOP :
+		 IMX6Q_GPR1_ENET_CLK_SEL_PAD;
 	gpr = syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr");
 	if (!IS_ERR(gpr))
 		regmap_update_bits(gpr, IOMUXC_GPR1,
···
 		return kasprintf(GFP_KERNEL, "OMAP4");
 	else if (soc_is_omap54xx())
 		return kasprintf(GFP_KERNEL, "OMAP5");
+	else if (soc_is_am33xx() || soc_is_am335x())
+		return kasprintf(GFP_KERNEL, "AM33xx");
 	else if (soc_is_am43xx())
 		return kasprintf(GFP_KERNEL, "AM43xx");
 	else if (soc_is_dra7xx())
+5-5
arch/arm/mach-omap2/omap_hwmod.c
···
 	if (ret == -EBUSY)
 		pr_warn("omap_hwmod: %s: failed to hardreset\n", oh->name);
 
-	if (!ret) {
+	if (oh->clkdm) {
 		/*
 		 * Set the clockdomain to HW_AUTO, assuming that the
 		 * previous state was HW_AUTO.
 		 */
-		if (oh->clkdm && hwsup)
+		if (hwsup)
 			clkdm_allow_idle(oh->clkdm);
-	} else {
-		if (oh->clkdm)
-			clkdm_hwmod_disable(oh->clkdm, oh);
+
+		clkdm_hwmod_disable(oh->clkdm, oh);
 	}
 
 	return ret;
···
 	INIT_LIST_HEAD(&oh->master_ports);
 	INIT_LIST_HEAD(&oh->slave_ports);
 	spin_lock_init(&oh->_lock);
+	lockdep_set_class(&oh->_lock, &oh->hwmod_key);
 
 	oh->_state = _HWMOD_STATE_REGISTERED;
 
···
 
 extern unsigned long socfpga_cpu1start_addr;
 
-#define SOCFPGA_SCU_VIRT_BASE	0xfffec000
+#define SOCFPGA_SCU_VIRT_BASE	0xfee00000
 
 #endif
+5
arch/arm/mach-socfpga/socfpga.c
···
 #include <asm/hardware/cache-l2x0.h>
 #include <asm/mach/arch.h>
 #include <asm/mach/map.h>
+#include <asm/cacheflush.h>
 
 #include "core.h"
 
···
 	if (of_property_read_u32(np, "cpu1-start-addr",
 			(u32 *) &socfpga_cpu1start_addr))
 		pr_err("SMP: Need cpu1-start-addr in device tree.\n");
+
+	/* Ensure that socfpga_cpu1start_addr is visible to other CPUs */
+	smp_wmb();
+	sync_cache_w(&socfpga_cpu1start_addr);
 
 	sys_manager_base_addr = of_iomap(np, 0);
 
···
 	__ret;								\
 })
 
-#define this_cpu_cmpxchg_1(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
-#define this_cpu_cmpxchg_2(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
-#define this_cpu_cmpxchg_4(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
-#define this_cpu_cmpxchg_8(ptr, o, n) cmpxchg_local(raw_cpu_ptr(&(ptr)), o, n)
+#define _protect_cmpxchg_local(pcp, o, n)			\
+({								\
+	typeof(*raw_cpu_ptr(&(pcp))) __ret;			\
+	preempt_disable();					\
+	__ret = cmpxchg_local(raw_cpu_ptr(&(pcp)), o, n);	\
+	preempt_enable();					\
+	__ret;							\
+})
 
-#define this_cpu_cmpxchg_double_8(ptr1, ptr2, o1, o2, n1, n2)		\
-	cmpxchg_double_local(raw_cpu_ptr(&(ptr1)), raw_cpu_ptr(&(ptr2)), \
-				o1, o2, n1, n2)
+#define this_cpu_cmpxchg_1(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
+#define this_cpu_cmpxchg_2(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
+#define this_cpu_cmpxchg_4(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
+#define this_cpu_cmpxchg_8(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
+
+#define this_cpu_cmpxchg_double_8(ptr1, ptr2, o1, o2, n1, n2)		\
+({									\
+	int __ret;							\
+	preempt_disable();						\
+	__ret = cmpxchg_double_local(	raw_cpu_ptr(&(ptr1)),		\
+					raw_cpu_ptr(&(ptr2)),		\
+					o1, o2, n1, n2);		\
+	preempt_enable();						\
+	__ret;								\
+})
 
 #define cmpxchg64(ptr,o,n)		cmpxchg((ptr),(o),(n))
 #define cmpxchg64_local(ptr,o,n)	cmpxchg_local((ptr),(o),(n))
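The arm64 change above closes a preemption race: this_cpu_cmpxchg*() must evaluate raw_cpu_ptr() and perform the exchange on the same CPU, so both steps are wrapped in a single preempt_disable()/preempt_enable() region. A minimal userspace sketch of that wrapper pattern, with the preemption calls mocked out and GCC's __sync builtin standing in for cmpxchg_local() (everything here is an illustrative stand-in, not the kernel API):

```c
#include <assert.h>

/* Userspace stand-ins: in the kernel these really disable and re-enable
 * preemption around the per-CPU access. */
static int preempt_depth;
static void preempt_disable(void) { preempt_depth++; }
static void preempt_enable(void)  { preempt_depth--; }

/* Sketch of the fixed pattern: take the slot address and perform the
 * compare-and-exchange inside one non-preemptible region. */
#define protect_cmpxchg_local(ptr, o, n)			\
({								\
	__typeof__(*(ptr)) __ret;				\
	preempt_disable();					\
	__ret = __sync_val_compare_and_swap((ptr), (o), (n));	\
	preempt_enable();					\
	__ret;							\
})
```

The macro returns the value observed before the exchange, so callers can tell whether their swap won, exactly as cmpxchg_local() does.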
+3-2
arch/arm64/include/asm/kvm_arm.h
···
  * 40 bits wide (T0SZ = 24).  Systems with a PARange smaller than 40 bits are
  * not known to exist and will break with this configuration.
  *
+ * VTCR_EL2.PS is extracted from ID_AA64MMFR0_EL1.PARange at boot time
+ * (see hyp-init.S).
+ *
  * Note that when using 4K pages, we concatenate two first level page tables
  * together.
  *
···
 #ifdef CONFIG_ARM64_64K_PAGES
 /*
  * Stage2 translation configuration:
- * 40bits output (PS = 2)
  * 40bits input  (T0SZ = 24)
  * 64kB pages (TG0 = 1)
  * 2 level page tables (SL = 1)
···
 #else
 /*
  * Stage2 translation configuration:
- * 40bits output (PS = 2)
  * 40bits input  (T0SZ = 24)
  * 4kB pages (TG0 = 0)
  * 3 level page tables (SL = 1)
+6-42
arch/arm64/include/asm/kvm_mmu.h
···
 #define PTRS_PER_S2_PGD		(1 << PTRS_PER_S2_PGD_SHIFT)
 #define S2_PGD_ORDER		get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
 
+#define kvm_pgd_index(addr)	(((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))
+
 /*
  * If we are concatenating first level stage-2 page tables, we would have less
  * than or equal to 16 pointers in the fake PGD, because that's what the
···
 #else
 #define KVM_PREALLOC_LEVEL	(0)
 #endif
-
-/**
- * kvm_prealloc_hwpgd - allocate inital table for VTTBR
- * @kvm:	The KVM struct pointer for the VM.
- * @pgd:	The kernel pseudo pgd
- *
- * When the kernel uses more levels of page tables than the guest, we allocate
- * a fake PGD and pre-populate it to point to the next-level page table, which
- * will be the real initial page table pointed to by the VTTBR.
- *
- * When KVM_PREALLOC_LEVEL==2, we allocate a single page for the PMD and
- * the kernel will use folded pud.  When KVM_PREALLOC_LEVEL==1, we
- * allocate 2 consecutive PUD pages.
- */
-static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd)
-{
-	unsigned int i;
-	unsigned long hwpgd;
-
-	if (KVM_PREALLOC_LEVEL == 0)
-		return 0;
-
-	hwpgd = __get_free_pages(GFP_KERNEL | __GFP_ZERO, PTRS_PER_S2_PGD_SHIFT);
-	if (!hwpgd)
-		return -ENOMEM;
-
-	for (i = 0; i < PTRS_PER_S2_PGD; i++) {
-		if (KVM_PREALLOC_LEVEL == 1)
-			pgd_populate(NULL, pgd + i,
-				     (pud_t *)hwpgd + i * PTRS_PER_PUD);
-		else if (KVM_PREALLOC_LEVEL == 2)
-			pud_populate(NULL, pud_offset(pgd, 0) + i,
-				     (pmd_t *)hwpgd + i * PTRS_PER_PMD);
-	}
-
-	return 0;
-}
 
 static inline void *kvm_get_hwpgd(struct kvm *kvm)
 {
···
 	return pmd_offset(pud, 0);
 }
 
-static inline void kvm_free_hwpgd(struct kvm *kvm)
+static inline unsigned int kvm_get_hwpgd_size(void)
 {
-	if (KVM_PREALLOC_LEVEL > 0) {
-		unsigned long hwpgd = (unsigned long)kvm_get_hwpgd(kvm);
-		free_pages(hwpgd, PTRS_PER_S2_PGD_SHIFT);
-	}
+	if (KVM_PREALLOC_LEVEL > 0)
+		return PTRS_PER_S2_PGD * PAGE_SIZE;
+	return PTRS_PER_S2_PGD * sizeof(pgd_t);
 }
 
 static inline bool kvm_page_empty(void *ptr)
+9
arch/arm64/include/asm/mmu_context.h
···
 {
 	unsigned int cpu = smp_processor_id();
 
+	/*
+	 * init_mm.pgd does not contain any user mappings and it is always
+	 * active for kernel addresses in TTBR1. Just set the reserved TTBR0.
+	 */
+	if (next == &init_mm) {
+		cpu_set_reserved_ttbr0();
+		return;
+	}
+
 	if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next)
 		check_and_switch_context(next, tsk);
 }
···
 }
 
 /*
+ * Used to invalidate the TLB (walk caches) corresponding to intermediate page
+ * table levels (pgd/pud/pmd).
+ */
+static inline void __flush_tlb_pgtable(struct mm_struct *mm,
+				       unsigned long uaddr)
+{
+	unsigned long addr = uaddr >> 12 | ((unsigned long)ASID(mm) << 48);
+
+	dsb(ishst);
+	asm("tlbi	vae1is, %0" : : "r" (addr));
+	dsb(ish);
+}
+/*
  * On AArch64, the cache coherency is handled via the set_pte_at() function.
  */
 static inline void update_mmu_cache(struct vm_area_struct *vma,
+14-1
arch/arm64/kernel/efi.c
···
 
 static void efi_set_pgd(struct mm_struct *mm)
 {
-	cpu_switch_mm(mm->pgd, mm);
+	if (mm == &init_mm)
+		cpu_set_reserved_ttbr0();
+	else
+		cpu_switch_mm(mm->pgd, mm);
+
 	flush_tlb_all();
 	if (icache_is_aivivt())
 		__flush_icache_all();
···
 {
 	efi_set_pgd(current->active_mm);
 	preempt_enable();
+}
+
+/*
+ * UpdateCapsule() depends on the system being shutdown via
+ * ResetSystem().
+ */
+bool efi_poweroff_required(void)
+{
+	return efi_enabled(EFI_RUNTIME_SERVICES);
 }
+1-1
arch/arm64/kernel/head.S
···
  * zeroing of .bss would clobber it.
  */
 	.pushsection	.data..cacheline_aligned
-ENTRY(__boot_cpu_mode)
 	.align	L1_CACHE_SHIFT
+ENTRY(__boot_cpu_mode)
 	.long	BOOT_CPU_MODE_EL2
 	.long	0
 	.popsection
+8
arch/arm64/kernel/process.c
···
 #include <stdarg.h>
 
 #include <linux/compat.h>
+#include <linux/efi.h>
 #include <linux/export.h>
 #include <linux/sched.h>
 #include <linux/kernel.h>
···
 	/* Disable interrupts first */
 	local_irq_disable();
 	smp_send_stop();
+
+	/*
+	 * UpdateCapsule() depends on the system being reset via
+	 * ResetSystem().
+	 */
+	if (efi_enabled(EFI_RUNTIME_SERVICES))
+		efi_reboot(reboot_mode, NULL);
 
 	/* Now call the architecture specific reboot code. */
 	if (arm_pm_restart)
···
 		WARN_ON_ONCE(1);
 	}
 
-	if (!is_module_address(start) || !is_module_address(end - 1))
+	if (start < MODULES_VADDR || start >= MODULES_END)
+		return -EINVAL;
+
+	if (end < MODULES_VADDR || end >= MODULES_END)
 		return -EINVAL;
 
 	data.set_mask = set_mask;
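The pageattr fix above stops relying on is_module_address(), which only knows about currently loaded modules, and instead bounds-checks both ends of the range against the module area independently. A sketch of the corrected check, with made-up bounds standing in for MODULES_VADDR/MODULES_END:

```c
#include <stdint.h>

/* Hypothetical module-area bounds for illustration only. */
#define MODULES_VADDR	0x1000u
#define MODULES_END	0x2000u

/* Mirror of the fixed check: both the start and the end of the range
 * must each lie inside [MODULES_VADDR, MODULES_END). */
static int range_in_modules(uintptr_t start, uintptr_t end)
{
	if (start < MODULES_VADDR || start >= MODULES_END)
		return 0;
	if (end < MODULES_VADDR || end >= MODULES_END)
		return 0;
	return 1;
}
```

Checking each end separately rejects ranges that merely straddle the module area, which a single membership test on one address would miss.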
+5
arch/c6x/include/asm/pgtable.h
···
  */
 #define pgtable_cache_init()	do { } while (0)
 
+/*
+ * c6x is !MMU, so define the simpliest implementation
+ */
+#define pgprot_writecombine pgprot_noncached
+
 #include <asm-generic/pgtable.h>
 
 #endif /* _ASM_C6X_PGTABLE_H */
···
+/*
+ * Meta page table definitions.
+ */
+
+#ifndef _METAG_PGTABLE_BITS_H
+#define _METAG_PGTABLE_BITS_H
+
+#include <asm/metag_mem.h>
+
+/*
+ * Definitions for MMU descriptors
+ *
+ * These are the hardware bits in the MMCU pte entries.
+ * Derived from the Meta toolkit headers.
+ */
+#define _PAGE_PRESENT		MMCU_ENTRY_VAL_BIT
+#define _PAGE_WRITE		MMCU_ENTRY_WR_BIT
+#define _PAGE_PRIV		MMCU_ENTRY_PRIV_BIT
+/* Write combine bit - this can cause writes to occur out of order */
+#define _PAGE_WR_COMBINE	MMCU_ENTRY_WRC_BIT
+/* Sys coherent bit - this bit is never used by Linux */
+#define _PAGE_SYS_COHERENT	MMCU_ENTRY_SYS_BIT
+#define _PAGE_ALWAYS_ZERO_1	0x020
+#define _PAGE_CACHE_CTRL0	0x040
+#define _PAGE_CACHE_CTRL1	0x080
+#define _PAGE_ALWAYS_ZERO_2	0x100
+#define _PAGE_ALWAYS_ZERO_3	0x200
+#define _PAGE_ALWAYS_ZERO_4	0x400
+#define _PAGE_ALWAYS_ZERO_5	0x800
+
+/* These are software bits that we stuff into the gaps in the hardware
+ * pte entries that are not used.  Note, these DO get stored in the actual
+ * hardware, but the hardware just does not use them.
+ */
+#define _PAGE_ACCESSED		_PAGE_ALWAYS_ZERO_1
+#define _PAGE_DIRTY		_PAGE_ALWAYS_ZERO_2
+
+/* Pages owned, and protected by, the kernel. */
+#define _PAGE_KERNEL		_PAGE_PRIV
+
+/* No cacheing of this page */
+#define _PAGE_CACHE_WIN0	(MMCU_CWIN_UNCACHED << MMCU_ENTRY_CWIN_S)
+/* burst cacheing - good for data streaming */
+#define _PAGE_CACHE_WIN1	(MMCU_CWIN_BURST << MMCU_ENTRY_CWIN_S)
+/* One cache way per thread */
+#define _PAGE_CACHE_WIN2	(MMCU_CWIN_C1SET << MMCU_ENTRY_CWIN_S)
+/* Full on cacheing */
+#define _PAGE_CACHE_WIN3	(MMCU_CWIN_CACHED << MMCU_ENTRY_CWIN_S)
+
+#define _PAGE_CACHEABLE		(_PAGE_CACHE_WIN3 | _PAGE_WR_COMBINE)
+
+/* which bits are used for cache control ... */
+#define _PAGE_CACHE_MASK	(_PAGE_CACHE_CTRL0 | _PAGE_CACHE_CTRL1 | \
+				 _PAGE_WR_COMBINE)
+
+/* This is a mask of the bits that pte_modify is allowed to change. */
+#define _PAGE_CHG_MASK		(PAGE_MASK)
+
+#define _PAGE_SZ_SHIFT		1
+#define _PAGE_SZ_4K		(0x0)
+#define _PAGE_SZ_8K		(0x1 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_16K		(0x2 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_32K		(0x3 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_64K		(0x4 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_128K		(0x5 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_256K		(0x6 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_512K		(0x7 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_1M		(0x8 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_2M		(0x9 << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_4M		(0xa << _PAGE_SZ_SHIFT)
+#define _PAGE_SZ_MASK		(0xf << _PAGE_SZ_SHIFT)
+
+#if defined(CONFIG_PAGE_SIZE_4K)
+#define _PAGE_SZ		(_PAGE_SZ_4K)
+#elif defined(CONFIG_PAGE_SIZE_8K)
+#define _PAGE_SZ		(_PAGE_SZ_8K)
+#elif defined(CONFIG_PAGE_SIZE_16K)
+#define _PAGE_SZ		(_PAGE_SZ_16K)
+#endif
+#define _PAGE_TABLE		(_PAGE_SZ | _PAGE_PRESENT)
+
+#if defined(CONFIG_HUGETLB_PAGE_SIZE_8K)
+# define _PAGE_SZHUGE		(_PAGE_SZ_8K)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_16K)
+# define _PAGE_SZHUGE		(_PAGE_SZ_16K)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_32K)
+# define _PAGE_SZHUGE		(_PAGE_SZ_32K)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
+# define _PAGE_SZHUGE		(_PAGE_SZ_64K)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_128K)
+# define _PAGE_SZHUGE		(_PAGE_SZ_128K)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K)
+# define _PAGE_SZHUGE		(_PAGE_SZ_256K)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512K)
+# define _PAGE_SZHUGE		(_PAGE_SZ_512K)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_1M)
+# define _PAGE_SZHUGE		(_PAGE_SZ_1M)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_2M)
+# define _PAGE_SZHUGE		(_PAGE_SZ_2M)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_4M)
+# define _PAGE_SZHUGE		(_PAGE_SZ_4M)
+#endif
+
+#endif /* _METAG_PGTABLE_BITS_H */
+1-94
arch/metag/include/asm/pgtable.h
···
 #ifndef _METAG_PGTABLE_H
 #define _METAG_PGTABLE_H
 
+#include <asm/pgtable-bits.h>
 #include <asm-generic/pgtable-nopmd.h>
 
 /* Invalid regions on Meta: 0x00000000-0x001FFFFF and 0xFFFF0000-0xFFFFFFFF */
···
 #define CONSISTENT_END		0x773FFFFF
 #define VMALLOC_START		0x78000000
 #define VMALLOC_END		0x7FFFFFFF
-#endif
-
-/*
- * Definitions for MMU descriptors
- *
- * These are the hardware bits in the MMCU pte entries.
- * Derived from the Meta toolkit headers.
- */
-#define _PAGE_PRESENT		MMCU_ENTRY_VAL_BIT
-#define _PAGE_WRITE		MMCU_ENTRY_WR_BIT
-#define _PAGE_PRIV		MMCU_ENTRY_PRIV_BIT
-/* Write combine bit - this can cause writes to occur out of order */
-#define _PAGE_WR_COMBINE	MMCU_ENTRY_WRC_BIT
-/* Sys coherent bit - this bit is never used by Linux */
-#define _PAGE_SYS_COHERENT	MMCU_ENTRY_SYS_BIT
-#define _PAGE_ALWAYS_ZERO_1	0x020
-#define _PAGE_CACHE_CTRL0	0x040
-#define _PAGE_CACHE_CTRL1	0x080
-#define _PAGE_ALWAYS_ZERO_2	0x100
-#define _PAGE_ALWAYS_ZERO_3	0x200
-#define _PAGE_ALWAYS_ZERO_4	0x400
-#define _PAGE_ALWAYS_ZERO_5	0x800
-
-/* These are software bits that we stuff into the gaps in the hardware
- * pte entries that are not used.  Note, these DO get stored in the actual
- * hardware, but the hardware just does not use them.
- */
-#define _PAGE_ACCESSED		_PAGE_ALWAYS_ZERO_1
-#define _PAGE_DIRTY		_PAGE_ALWAYS_ZERO_2
-
-/* Pages owned, and protected by, the kernel. */
-#define _PAGE_KERNEL		_PAGE_PRIV
-
-/* No cacheing of this page */
-#define _PAGE_CACHE_WIN0	(MMCU_CWIN_UNCACHED << MMCU_ENTRY_CWIN_S)
-/* burst cacheing - good for data streaming */
-#define _PAGE_CACHE_WIN1	(MMCU_CWIN_BURST << MMCU_ENTRY_CWIN_S)
-/* One cache way per thread */
-#define _PAGE_CACHE_WIN2	(MMCU_CWIN_C1SET << MMCU_ENTRY_CWIN_S)
-/* Full on cacheing */
-#define _PAGE_CACHE_WIN3	(MMCU_CWIN_CACHED << MMCU_ENTRY_CWIN_S)
-
-#define _PAGE_CACHEABLE		(_PAGE_CACHE_WIN3 | _PAGE_WR_COMBINE)
-
-/* which bits are used for cache control ... */
-#define _PAGE_CACHE_MASK	(_PAGE_CACHE_CTRL0 | _PAGE_CACHE_CTRL1 | \
-				 _PAGE_WR_COMBINE)
-
-/* This is a mask of the bits that pte_modify is allowed to change. */
-#define _PAGE_CHG_MASK		(PAGE_MASK)
-
-#define _PAGE_SZ_SHIFT		1
-#define _PAGE_SZ_4K		(0x0)
-#define _PAGE_SZ_8K		(0x1 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_16K		(0x2 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_32K		(0x3 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_64K		(0x4 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_128K		(0x5 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_256K		(0x6 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_512K		(0x7 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_1M		(0x8 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_2M		(0x9 << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_4M		(0xa << _PAGE_SZ_SHIFT)
-#define _PAGE_SZ_MASK		(0xf << _PAGE_SZ_SHIFT)
-
-#if defined(CONFIG_PAGE_SIZE_4K)
-#define _PAGE_SZ		(_PAGE_SZ_4K)
-#elif defined(CONFIG_PAGE_SIZE_8K)
-#define _PAGE_SZ		(_PAGE_SZ_8K)
-#elif defined(CONFIG_PAGE_SIZE_16K)
-#define _PAGE_SZ		(_PAGE_SZ_16K)
-#endif
-#define _PAGE_TABLE		(_PAGE_SZ | _PAGE_PRESENT)
-
-#if defined(CONFIG_HUGETLB_PAGE_SIZE_8K)
-# define _PAGE_SZHUGE		(_PAGE_SZ_8K)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_16K)
-# define _PAGE_SZHUGE		(_PAGE_SZ_16K)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_32K)
-# define _PAGE_SZHUGE		(_PAGE_SZ_32K)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
-# define _PAGE_SZHUGE		(_PAGE_SZ_64K)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_128K)
-# define _PAGE_SZHUGE		(_PAGE_SZ_128K)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_256K)
-# define _PAGE_SZHUGE		(_PAGE_SZ_256K)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512K)
-# define _PAGE_SZHUGE		(_PAGE_SZ_512K)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_1M)
-# define _PAGE_SZHUGE		(_PAGE_SZ_1M)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_2M)
-# define _PAGE_SZHUGE		(_PAGE_SZ_2M)
-#elif defined(CONFIG_HUGETLB_PAGE_SIZE_4M)
-# define _PAGE_SZHUGE		(_PAGE_SZ_4M)
 #endif
 
 /*
+4-3
arch/microblaze/kernel/entry.S
···
  * The LP register should point to the location where the called function
  * should return.  [note that MAKE_SYS_CALL uses label 1] */
 	/* See if the system call number is valid */
+	blti	r12, 5f
 	addi	r11, r12, -__NR_syscalls;
-	bgei	r11,5f;
+	bgei	r11, 5f;
 	/* Figure out which function to use for this system call. */
 	/* Note Microblaze barrel shift is optional, so don't rely on it */
 	add	r12, r12, r12;			/* convert num -> ptr */
···
 
 	/* The syscall number is invalid, return an error.  */
 5:
-	rtsd	r15, 8;		/* looks like a normal subroutine return */
+	braid	ret_from_trap
 	addi	r3, r0, -ENOSYS;
 
 /* Entry point used to return from a syscall/trap */
···
 	bri	1b
 
 	/* Maybe handle a signal */
-5:	
+5:
 	andi	r11, r19, _TIF_SIGPENDING | _TIF_NOTIFY_RESUME;
 	beqi	r11, 4f;		/* Signals to handle, handle them */
···
 
 #include <uapi/asm/ptrace.h>
 
+/* This struct defines the way the registers are stored on the
+   stack during a system call. */
+
 #ifndef __ASSEMBLY__
+struct pt_regs {
+	unsigned long  r8;	/* r8-r15 Caller-saved GP registers */
+	unsigned long  r9;
+	unsigned long  r10;
+	unsigned long  r11;
+	unsigned long  r12;
+	unsigned long  r13;
+	unsigned long  r14;
+	unsigned long  r15;
+	unsigned long  r1;	/* Assembler temporary */
+	unsigned long  r2;	/* Retval LS 32bits */
+	unsigned long  r3;	/* Retval MS 32bits */
+	unsigned long  r4;	/* r4-r7 Register arguments */
+	unsigned long  r5;
+	unsigned long  r6;
+	unsigned long  r7;
+	unsigned long  orig_r2;	/* Copy of r2 ?? */
+	unsigned long  ra;	/* Return address */
+	unsigned long  fp;	/* Frame pointer */
+	unsigned long  sp;	/* Stack pointer */
+	unsigned long  gp;	/* Global pointer */
+	unsigned long  estatus;
+	unsigned long  ea;	/* Exception return address (pc) */
+	unsigned long  orig_r7;
+};
+
+/*
+ * This is the extended stack used by signal handlers and the context
+ * switcher: it's pushed after the normal "struct pt_regs".
+ */
+struct switch_stack {
+	unsigned long  r16;	/* r16-r23 Callee-saved GP registers */
+	unsigned long  r17;
+	unsigned long  r18;
+	unsigned long  r19;
+	unsigned long  r20;
+	unsigned long  r21;
+	unsigned long  r22;
+	unsigned long  r23;
+	unsigned long  fp;
+	unsigned long  gp;
+	unsigned long  ra;
+};
+
 #define user_mode(regs)	(((regs)->estatus & ESTATUS_EU))
 
 #define instruction_pointer(regs)	((regs)->ra)
-32
arch/nios2/include/asm/ucontext.h
···
-/*
- * Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch>
- * Copyright (C) 2004 Microtronix Datacom Ltd
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
- * for more details.
- */
-
-#ifndef _ASM_NIOS2_UCONTEXT_H
-#define _ASM_NIOS2_UCONTEXT_H
-
-typedef int greg_t;
-#define NGREG 32
-typedef greg_t gregset_t[NGREG];
-
-struct mcontext {
-	int version;
-	gregset_t gregs;
-};
-
-#define MCONTEXT_VERSION 2
-
-struct ucontext {
-	unsigned long	  uc_flags;
-	struct ucontext  *uc_link;
-	stack_t		  uc_stack;
-	struct mcontext	  uc_mcontext;
-	sigset_t	  uc_sigmask;	/* mask last for extensibility */
-};
-
-#endif
···
 
 #define NUM_PTRACE_REG (PTR_TLBMISC + 1)
 
-/* this struct defines the way the registers are stored on the
-   stack during a system call.
-
-   There is a fake_regs in setup.c that has to match pt_regs.*/
-
-struct pt_regs {
-	unsigned long  r8;	/* r8-r15 Caller-saved GP registers */
-	unsigned long  r9;
-	unsigned long  r10;
-	unsigned long  r11;
-	unsigned long  r12;
-	unsigned long  r13;
-	unsigned long  r14;
-	unsigned long  r15;
-	unsigned long  r1;	/* Assembler temporary */
-	unsigned long  r2;	/* Retval LS 32bits */
-	unsigned long  r3;	/* Retval MS 32bits */
-	unsigned long  r4;	/* r4-r7 Register arguments */
-	unsigned long  r5;
-	unsigned long  r6;
-	unsigned long  r7;
-	unsigned long  orig_r2;	/* Copy of r2 ?? */
-	unsigned long  ra;	/* Return address */
-	unsigned long  fp;	/* Frame pointer */
-	unsigned long  sp;	/* Stack pointer */
-	unsigned long  gp;	/* Global pointer */
-	unsigned long  estatus;
-	unsigned long  ea;	/* Exception return address (pc) */
-	unsigned long  orig_r7;
-};
-
-/*
- * This is the extended stack used by signal handlers and the context
- * switcher: it's pushed after the normal "struct pt_regs".
- */
-struct switch_stack {
-	unsigned long  r16;	/* r16-r23 Callee-saved GP registers */
-	unsigned long  r17;
-	unsigned long  r18;
-	unsigned long  r19;
-	unsigned long  r20;
-	unsigned long  r21;
-	unsigned long  r22;
-	unsigned long  r23;
-	unsigned long  fp;
-	unsigned long  gp;
-	unsigned long  ra;
+/* User structures for general purpose registers. */
+struct user_pt_regs {
+	__u32		regs[49];
 };
 
 #endif /* __ASSEMBLY__ */
+7-5
arch/nios2/include/uapi/asm/sigcontext.h
···
  * details.
  */
 
-#ifndef _ASM_NIOS2_SIGCONTEXT_H
-#define _ASM_NIOS2_SIGCONTEXT_H
+#ifndef _UAPI__ASM_SIGCONTEXT_H
+#define _UAPI__ASM_SIGCONTEXT_H
 
-#include <asm/ptrace.h>
+#include <linux/types.h>
+
+#define MCONTEXT_VERSION 2
 
 struct sigcontext {
-	struct pt_regs regs;
-	unsigned long  sc_mask;	/* old sigmask */
+	int version;
+	unsigned long gregs[32];
 };
 
 #endif
+2-2
arch/nios2/kernel/signal.c
···
 			  struct ucontext *uc, int *pr2)
 {
 	int temp;
-	greg_t *gregs = uc->uc_mcontext.gregs;
+	unsigned long *gregs = uc->uc_mcontext.gregs;
 	int err;
 
 	/* Always make any pending restarted system calls return -EINTR */
···
 static inline int rt_setup_ucontext(struct ucontext *uc, struct pt_regs *regs)
 {
 	struct switch_stack *sw = (struct switch_stack *)regs - 1;
-	greg_t *gregs = uc->uc_mcontext.gregs;
+	unsigned long *gregs = uc->uc_mcontext.gregs;
 	int err = 0;
 
 	err |= __put_user(MCONTEXT_VERSION, &uc->uc_mcontext.version);
-6
arch/nios2/mm/fault.c
···
 		break;
 	}
 
-survive:
 	/*
 	 * If for any reason at all we couldn't handle the fault,
 	 * make sure we exit gracefully rather than endlessly redo
···
  */
 out_of_memory:
 	up_read(&mm->mmap_sem);
-	if (is_global_init(tsk)) {
-		yield();
-		down_read(&mm->mmap_sem);
-		goto survive;
-	}
 	if (!user_mode(regs))
 		goto no_context;
 	pagefault_out_of_memory();
+10-7
arch/parisc/include/asm/pgalloc.h
···
 
 	if (likely(pgd != NULL)) {
 		memset(pgd, 0, PAGE_SIZE<<PGD_ALLOC_ORDER);
-#ifdef CONFIG_64BIT
+#if PT_NLEVELS == 3
 		actual_pgd += PTRS_PER_PGD;
 		/* Populate first pmd with allocated memory.  We mark it
 		 * with PxD_FLAG_ATTACHED as a signal to the system that this
···
 
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-#ifdef CONFIG_64BIT
+#if PT_NLEVELS == 3
 	pgd -= PTRS_PER_PGD;
 #endif
 	free_pages((unsigned long)pgd, PGD_ALLOC_ORDER);
···
 
 static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 {
-#ifdef CONFIG_64BIT
 	if(pmd_flag(*pmd) & PxD_FLAG_ATTACHED)
-		/* This is the permanent pmd attached to the pgd;
-		 * cannot free it */
+		/*
+		 * This is the permanent pmd attached to the pgd;
+		 * cannot free it.
+		 * Increment the counter to compensate for the decrement
+		 * done by generic mm code.
+		 */
+		mm_inc_nr_pmds(mm);
 		return;
-#endif
 	free_pages((unsigned long)pmd, PMD_ORDER);
 }
···
 static inline void
 pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte)
 {
-#ifdef CONFIG_64BIT
+#if PT_NLEVELS == 3
 	/* preserve the gateway marker if this is the beginning of
 	 * the permanent pmd */
 	if(pmd_flag(*pmd) & PxD_FLAG_ATTACHED)
+6-3
arch/parisc/kernel/syscall_table.S
···5555#define ENTRY_COMP(_name_) .word sys_##_name_
5656#endif
5757
5858-	ENTRY_SAME(restart_syscall)	/* 0 */
5959-	ENTRY_SAME(exit)
5858+90:	ENTRY_SAME(restart_syscall)	/* 0 */
5959+91:	ENTRY_SAME(exit)
6060	ENTRY_SAME(fork_wrapper)
6161	ENTRY_SAME(read)
6262	ENTRY_SAME(write)
···439439	ENTRY_SAME(bpf)
440440	ENTRY_COMP(execveat)
441441
442442-	/* Nothing yet */
442442+
443443+.ifne (. - 90b) - (__NR_Linux_syscalls * (91b - 90b))
444444+.error "size of syscall table does not fit value of __NR_Linux_syscalls"
445445+.endif
443446
444447#undef ENTRY_SAME
445448#undef ENTRY_DIFF
···14081408	bne	9f			/* continue in V mode if we are. */
14091409
141014105:
14111411-#ifdef CONFIG_KVM_BOOK3S_64_HV
14111411+#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
14121412	/*
14131413	 * We are coming from kernel context. Check if we are coming from
14141414	 * guest. if yes, then we can continue. We will fall through
+26
arch/powerpc/kernel/iommu.c
···11751175}
11761176EXPORT_SYMBOL_GPL(iommu_del_device);
11771177
11781178+static int tce_iommu_bus_notifier(struct notifier_block *nb,
11791179+		unsigned long action, void *data)
11801180+{
11811181+	struct device *dev = data;
11821182+
11831183+	switch (action) {
11841184+	case BUS_NOTIFY_ADD_DEVICE:
11851185+		return iommu_add_device(dev);
11861186+	case BUS_NOTIFY_DEL_DEVICE:
11871187+		if (dev->iommu_group)
11881188+			iommu_del_device(dev);
11891189+		return 0;
11901190+	default:
11911191+		return 0;
11921192+	}
11931193+}
11941194+
11951195+static struct notifier_block tce_iommu_bus_nb = {
11961196+	.notifier_call = tce_iommu_bus_notifier,
11971197+};
11981198+
11991199+int __init tce_iommu_bus_notifier_init(void)
12001200+{
12011201+	bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb);
12021202+	return 0;
12031203+}
11781204#endif /* CONFIG_IOMMU_API */
+2-2
arch/powerpc/kernel/smp.c
···541541	if (smp_ops->give_timebase)
542542		smp_ops->give_timebase();
543543
544544-	/* Wait until cpu puts itself in the online map */
545545-	while (!cpu_online(cpu))
544544+	/* Wait until cpu puts itself in the online & active maps */
545545+	while (!cpu_online(cpu) || !cpu_active(cpu))
546546		cpu_relax();
547547
548548	return 0;
+4-4
arch/powerpc/kvm/book3s_hv.c
···636636	spin_lock(&vcpu->arch.vpa_update_lock);
637637	lppaca = (struct lppaca *)vcpu->arch.vpa.pinned_addr;
638638	if (lppaca)
639639-		yield_count = lppaca->yield_count;
639639+		yield_count = be32_to_cpu(lppaca->yield_count);
640640	spin_unlock(&vcpu->arch.vpa_update_lock);
641641	return yield_count;
642642}
···942942static void kvmppc_set_lpcr(struct kvm_vcpu *vcpu, u64 new_lpcr,
943943		bool preserve_top32)
944944{
945945+	struct kvm *kvm = vcpu->kvm;
945946	struct kvmppc_vcore *vc = vcpu->arch.vcore;
946947	u64 mask;
947948
949949+	mutex_lock(&kvm->lock);
948950	spin_lock(&vc->lock);
949951	/*
950952	 * If ILE (interrupt little-endian) has changed, update the
951953	 * MSR_LE bit in the intr_msr for each vcpu in this vcore.
952954	 */
953955	if ((new_lpcr & LPCR_ILE) != (vc->lpcr & LPCR_ILE)) {
954954-		struct kvm *kvm = vcpu->kvm;
955956		struct kvm_vcpu *vcpu;
956957		int i;
957958
958958-		mutex_lock(&kvm->lock);
959959		kvm_for_each_vcpu(i, vcpu, kvm) {
960960			if (vcpu->arch.vcore != vc)
961961				continue;
···964964			else
965965				vcpu->arch.intr_msr &= ~MSR_LE;
966966		}
967967-		mutex_unlock(&kvm->lock);
968967	}
969968
970969	/*
···980981		mask &= 0xFFFFFFFF;
981982	vc->lpcr = (vc->lpcr & ~mask) | (new_lpcr & mask);
982983	spin_unlock(&vc->lock);
984984+	mutex_unlock(&kvm->lock);
983985}
984986
985987static int kvmppc_get_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
+1
arch/powerpc/kvm/book3s_hv_rmhandlers.S
···10051005	/* Save HEIR (HV emulation assist reg) in emul_inst
10061006	   if this is an HEI (HV emulation interrupt, e40) */
10071007	li	r3,KVM_INST_FETCH_FAILED
10081008+	stw	r3,VCPU_LAST_INST(r9)
10081009	cmpwi	r12,BOOK3S_INTERRUPT_H_EMUL_ASSIST
10091010	bne	11f
10101011	mfspr	r3,SPRN_HEIR
-26
arch/powerpc/platforms/powernv/pci.c
···836836#endif
837837}
838838
839839-static int tce_iommu_bus_notifier(struct notifier_block *nb,
840840-		unsigned long action, void *data)
841841-{
842842-	struct device *dev = data;
843843-
844844-	switch (action) {
845845-	case BUS_NOTIFY_ADD_DEVICE:
846846-		return iommu_add_device(dev);
847847-	case BUS_NOTIFY_DEL_DEVICE:
848848-		if (dev->iommu_group)
849849-			iommu_del_device(dev);
850850-		return 0;
851851-	default:
852852-		return 0;
853853-	}
854854-}
855855-
856856-static struct notifier_block tce_iommu_bus_nb = {
857857-	.notifier_call = tce_iommu_bus_notifier,
858858-};
859859-
860860-static int __init tce_iommu_bus_notifier_init(void)
861861-{
862862-	bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb);
863863-	return 0;
864864-}
865839machine_subsys_initcall_sync(powernv, tce_iommu_bus_notifier_init);
+12-2
arch/powerpc/platforms/powernv/smp.c
···3333#include <asm/runlatch.h>
3434#include <asm/code-patching.h>
3535#include <asm/dbell.h>
3636+#include <asm/kvm_ppc.h>
3737+#include <asm/ppc-opcode.h>
3638
3739#include "powernv.h"
3840
···151149static void pnv_smp_cpu_kill_self(void)
152150{
153151	unsigned int cpu;
154154-	unsigned long srr1;
152152+	unsigned long srr1, wmask;
155153	u32 idle_states;
156154
157155	/* Standard hot unplug procedure */
···162160	DBG("CPU%d offline\n", cpu);
163161	generic_set_cpu_dead(cpu);
164162	smp_wmb();
163163+
164164+	wmask = SRR1_WAKEMASK;
165165+	if (cpu_has_feature(CPU_FTR_ARCH_207S))
166166+		wmask = SRR1_WAKEMASK_P8;
165167
166168	idle_states = pnv_get_supported_cpuidle_states();
167169	/* We don't want to take decrementer interrupts while we are offline,
···197191		 * having finished executing in a KVM guest, then srr1
198192		 * contains 0.
199193		 */
200200-		if ((srr1 & SRR1_WAKEMASK) == SRR1_WAKEEE) {
194194+		if ((srr1 & wmask) == SRR1_WAKEEE) {
201195			icp_native_flush_interrupt();
202196			local_paca->irq_happened &= PACA_IRQ_HARD_DIS;
203197			smp_mb();
198198+		} else if ((srr1 & wmask) == SRR1_WAKEHDBELL) {
199199+			unsigned long msg = PPC_DBELL_TYPE(PPC_DBELL_SERVER);
200200+			asm volatile(PPC_MSGCLR(%0) : : "r" (msg));
201201+			kvmppc_set_host_ipi(cpu, 0);
204202		}
205203
206204		if (cpu_core_split_required())
···515515#define S390_ARCH_FAC_MASK_SIZE_U64 \
516516	(S390_ARCH_FAC_MASK_SIZE_BYTE / sizeof(u64))
517517
518518-struct s390_model_fac {
519519-	/* facilities used in SIE context */
520520-	__u64 sie[S390_ARCH_FAC_LIST_SIZE_U64];
521521-	/* subset enabled by kvm */
522522-	__u64 kvm[S390_ARCH_FAC_LIST_SIZE_U64];
518518+struct kvm_s390_fac {
519519+	/* facility list requested by guest */
520520+	__u64 list[S390_ARCH_FAC_LIST_SIZE_U64];
521521+	/* facility mask supported by kvm & hosting machine */
522522+	__u64 mask[S390_ARCH_FAC_LIST_SIZE_U64];
523523};
524524
525525struct kvm_s390_cpu_model {
526526-	struct s390_model_fac *fac;
526526+	struct kvm_s390_fac *fac;
527527	struct cpuid cpu_id;
528528	unsigned short ibc;
529529};
+1-1
arch/s390/include/asm/mmu_context.h
···6262{
6363	int cpu = smp_processor_id();
6464
6565+	S390_lowcore.user_asce = next->context.asce_bits | __pa(next->pgd);
6566	if (prev == next)
6667		return;
6768	if (MACHINE_HAS_TLB_LC)
···7473	atomic_dec(&prev->context.attach_count);
7574	if (MACHINE_HAS_TLB_LC)
7675		cpumask_clear_cpu(cpu, &prev->context.cpu_attach_mask);
7777-	S390_lowcore.user_asce = next->context.asce_bits | __pa(next->pgd);
7876}
7977
8078#define finish_arch_post_lock_switch finish_arch_post_lock_switch
+1-10
arch/s390/include/asm/page.h
···3737#endif
3838}
3939
4040-static inline void clear_page(void *page)
4141-{
4242-	register unsigned long reg1 asm ("1") = 0;
4343-	register void *reg2 asm ("2") = page;
4444-	register unsigned long reg3 asm ("3") = 4096;
4545-	asm volatile(
4646-		"	mvcl	2,0"
4747-		: "+d" (reg2), "+d" (reg3) : "d" (reg1)
4848-		: "memory", "cc");
4949-}
4040+#define clear_page(page)	memset((page), 0, PAGE_SIZE)
5041
5142/*
5243 * copy_page uses the mvcl instruction with 0xb0 padding byte in order to
+45-16
arch/s390/kernel/ftrace.c
···5757
5858unsigned long ftrace_plt;
5959
6060+static inline void ftrace_generate_orig_insn(struct ftrace_insn *insn)
6161+{
6262+#ifdef CC_USING_HOTPATCH
6363+	/* brcl 0,0 */
6464+	insn->opc = 0xc004;
6565+	insn->disp = 0;
6666+#else
6767+	/* stg r14,8(r15) */
6868+	insn->opc = 0xe3e0;
6969+	insn->disp = 0xf0080024;
7070+#endif
7171+}
7272+
7373+static inline int is_kprobe_on_ftrace(struct ftrace_insn *insn)
7474+{
7575+#ifdef CONFIG_KPROBES
7676+	if (insn->opc == BREAKPOINT_INSTRUCTION)
7777+		return 1;
7878+#endif
7979+	return 0;
8080+}
8181+
8282+static inline void ftrace_generate_kprobe_nop_insn(struct ftrace_insn *insn)
8383+{
8484+#ifdef CONFIG_KPROBES
8585+	insn->opc = BREAKPOINT_INSTRUCTION;
8686+	insn->disp = KPROBE_ON_FTRACE_NOP;
8787+#endif
8888+}
8989+
9090+static inline void ftrace_generate_kprobe_call_insn(struct ftrace_insn *insn)
9191+{
9292+#ifdef CONFIG_KPROBES
9393+	insn->opc = BREAKPOINT_INSTRUCTION;
9494+	insn->disp = KPROBE_ON_FTRACE_CALL;
9595+#endif
9696+}
9797+
6098int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
6199		       unsigned long addr)
62100{
···11072		return -EFAULT;
11173	if (addr == MCOUNT_ADDR) {
11274		/* Initial code replacement */
113113-#ifdef CC_USING_HOTPATCH
114114-		/* We expect to see brcl 0,0 */
115115-		ftrace_generate_nop_insn(&orig);
116116-#else
117117-		/* We expect to see stg r14,8(r15) */
118118-		orig.opc = 0xe3e0;
119119-		orig.disp = 0xf0080024;
120120-#endif
7575+		ftrace_generate_orig_insn(&orig);
12176		ftrace_generate_nop_insn(&new);
122122-	} else if (old.opc == BREAKPOINT_INSTRUCTION) {
7777+	} else if (is_kprobe_on_ftrace(&old)) {
12378		/*
12479		 * If we find a breakpoint instruction, a kprobe has been
12580		 * placed at the beginning of the function. We write the
···12089		 * bytes of the original instruction so that the kprobes
12190		 * handler can execute a nop, if it reaches this breakpoint.
12291		 */
123123-		new.opc = orig.opc = BREAKPOINT_INSTRUCTION;
124124-		orig.disp = KPROBE_ON_FTRACE_CALL;
125125-		new.disp = KPROBE_ON_FTRACE_NOP;
9292+		ftrace_generate_kprobe_call_insn(&orig);
9393+		ftrace_generate_kprobe_nop_insn(&new);
12694	} else {
12795		/* Replace ftrace call with a nop. */
12896		ftrace_generate_call_insn(&orig, rec->ip);
···141111
142112	if (probe_kernel_read(&old, (void *) rec->ip, sizeof(old)))
143113		return -EFAULT;
144144-	if (old.opc == BREAKPOINT_INSTRUCTION) {
114114+	if (is_kprobe_on_ftrace(&old)) {
145115		/*
146116		 * If we find a breakpoint instruction, a kprobe has been
147117		 * placed at the beginning of the function. We write the
···149119		 * bytes of the original instruction so that the kprobes
150120		 * handler can execute a brasl if it reaches this breakpoint.
151121		 */
152152-		new.opc = orig.opc = BREAKPOINT_INSTRUCTION;
153153-		orig.disp = KPROBE_ON_FTRACE_NOP;
154154-		new.disp = KPROBE_ON_FTRACE_CALL;
122122+		ftrace_generate_kprobe_nop_insn(&orig);
123123+		ftrace_generate_kprobe_call_insn(&new);
155124	} else {
156125		/* Replace nop with an ftrace call. */
157126		ftrace_generate_nop_insn(&orig);
···165165	case KVM_CAP_ONE_REG:
166166	case KVM_CAP_ENABLE_CAP:
167167	case KVM_CAP_S390_CSS_SUPPORT:
168168-	case KVM_CAP_IRQFD:
169168	case KVM_CAP_IOEVENTFD:
170169	case KVM_CAP_DEVICE_CTRL:
171170	case KVM_CAP_ENABLE_CAP_VM:
···521522		memcpy(&kvm->arch.model.cpu_id, &proc->cpuid,
522523		       sizeof(struct cpuid));
523524		kvm->arch.model.ibc = proc->ibc;
524524-		memcpy(kvm->arch.model.fac->kvm, proc->fac_list,
525525+		memcpy(kvm->arch.model.fac->list, proc->fac_list,
525526		       S390_ARCH_FAC_LIST_SIZE_BYTE);
526527	} else
527528		ret = -EFAULT;
···555556	}
556557	memcpy(&proc->cpuid, &kvm->arch.model.cpu_id, sizeof(struct cpuid));
557558	proc->ibc = kvm->arch.model.ibc;
558558-	memcpy(&proc->fac_list, kvm->arch.model.fac->kvm, S390_ARCH_FAC_LIST_SIZE_BYTE);
559559+	memcpy(&proc->fac_list, kvm->arch.model.fac->list, S390_ARCH_FAC_LIST_SIZE_BYTE);
559560	if (copy_to_user((void __user *)attr->addr, proc, sizeof(*proc)))
560561		ret = -EFAULT;
561562	kfree(proc);
···575576	}
576577	get_cpu_id((struct cpuid *) &mach->cpuid);
577578	mach->ibc = sclp_get_ibc();
578578-	memcpy(&mach->fac_mask, kvm_s390_fac_list_mask,
579579-	       kvm_s390_fac_list_mask_size() * sizeof(u64));
579579+	memcpy(&mach->fac_mask, kvm->arch.model.fac->mask,
580580+	       S390_ARCH_FAC_LIST_SIZE_BYTE);
580581	memcpy((unsigned long *)&mach->fac_list, S390_lowcore.stfle_fac_list,
581581-	       S390_ARCH_FAC_LIST_SIZE_U64);
582582+	       S390_ARCH_FAC_LIST_SIZE_BYTE);
582583	if (copy_to_user((void __user *)attr->addr, mach, sizeof(*mach)))
583584		ret = -EFAULT;
584585	kfree(mach);
···777778static int kvm_s390_query_ap_config(u8 *config)
778779{
779780	u32 fcn_code = 0x04000000UL;
780780-	u32 cc;
781781+	u32 cc = 0;
781782
783783+	memset(config, 0, 128);
782784	asm volatile(
783785		"lgr 0,%1\n"
784786		"lgr 2,%2\n"
785787		".long 0xb2af0000\n"		/* PQAP(QCI) */
786786-		"ipm %0\n"
788788+		"0: ipm %0\n"
787789		"srl %0,28\n"
788788-		: "=r" (cc)
790790+		"1:\n"
791791+		EX_TABLE(0b, 1b)
792792+		: "+r" (cc)
789793		: "r" (fcn_code), "r" (config)
790794		: "cc", "0", "2", "memory"
791795	);
···841839
842840	kvm_s390_set_crycb_format(kvm);
843841
844844-	/* Disable AES/DEA protected key functions by default */
845845-	kvm->arch.crypto.aes_kw = 0;
846846-	kvm->arch.crypto.dea_kw = 0;
842842+	/* Enable AES/DEA protected key functions by default */
843843+	kvm->arch.crypto.aes_kw = 1;
844844+	kvm->arch.crypto.dea_kw = 1;
845845+	get_random_bytes(kvm->arch.crypto.crycb->aes_wrapping_key_mask,
846846+			 sizeof(kvm->arch.crypto.crycb->aes_wrapping_key_mask));
847847+	get_random_bytes(kvm->arch.crypto.crycb->dea_wrapping_key_mask,
848848+			 sizeof(kvm->arch.crypto.crycb->dea_wrapping_key_mask));
847849
848850	return 0;
849851}
···892886	/*
893887	 * The architectural maximum amount of facilities is 16 kbit. To store
894888	 * this amount, 2 kbyte of memory is required. Thus we need a full
895895-	 * page to hold the active copy (arch.model.fac->sie) and the current
896896-	 * facilities set (arch.model.fac->kvm). Its address size has to be
889889+	 * page to hold the guest facility list (arch.model.fac->list) and the
890890+	 * facility mask (arch.model.fac->mask). Its address size has to be
897891	 * 31 bits and word aligned.
898892	 */
899893	kvm->arch.model.fac =
900900-		(struct s390_model_fac *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
894894+		(struct kvm_s390_fac *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
901895	if (!kvm->arch.model.fac)
902896		goto out_nofac;
903897
904904-	memcpy(kvm->arch.model.fac->kvm, S390_lowcore.stfle_fac_list,
905905-	       S390_ARCH_FAC_LIST_SIZE_U64);
906906-
907907-	/*
908908-	 * If this KVM host runs *not* in a LPAR, relax the facility bits
909909-	 * of the kvm facility mask by all missing facilities. This will allow
910910-	 * to determine the right CPU model by means of the remaining facilities.
911911-	 * Live guest migration must prohibit the migration of KVMs running in
912912-	 * a LPAR to non LPAR hosts.
913913-	 */
914914-	if (!MACHINE_IS_LPAR)
915915-		for (i = 0; i < kvm_s390_fac_list_mask_size(); i++)
916916-			kvm_s390_fac_list_mask[i] &= kvm->arch.model.fac->kvm[i];
917917-
918918-	/*
919919-	 * Apply the kvm facility mask to limit the kvm supported/tolerated
920920-	 * facility list.
921921-	 */
898898+	/* Populate the facility mask initially. */
899899+	memcpy(kvm->arch.model.fac->mask, S390_lowcore.stfle_fac_list,
900900+	       S390_ARCH_FAC_LIST_SIZE_BYTE);
922901	for (i = 0; i < S390_ARCH_FAC_LIST_SIZE_U64; i++) {
923902		if (i < kvm_s390_fac_list_mask_size())
924924-			kvm->arch.model.fac->kvm[i] &= kvm_s390_fac_list_mask[i];
903903+			kvm->arch.model.fac->mask[i] &= kvm_s390_fac_list_mask[i];
925904		else
926926-			kvm->arch.model.fac->kvm[i] = 0UL;
905905+			kvm->arch.model.fac->mask[i] = 0UL;
927906	}
907907+
908908+	/* Populate the facility list initially. */
909909+	memcpy(kvm->arch.model.fac->list, kvm->arch.model.fac->mask,
910910+	       S390_ARCH_FAC_LIST_SIZE_BYTE);
928911
929912	kvm_s390_get_cpu_id(&kvm->arch.model.cpu_id);
930913	kvm->arch.model.ibc = sclp_get_ibc() & 0x0fff;
···11601165
11611166	mutex_lock(&vcpu->kvm->lock);
11621167	vcpu->arch.cpu_id = vcpu->kvm->arch.model.cpu_id;
11631163-	memcpy(vcpu->kvm->arch.model.fac->sie, vcpu->kvm->arch.model.fac->kvm,
11641164-	       S390_ARCH_FAC_LIST_SIZE_BYTE);
11651168	vcpu->arch.sie_block->ibc = vcpu->kvm->arch.model.ibc;
11661169	mutex_unlock(&vcpu->kvm->lock);
11671170
···12051212		vcpu->arch.sie_block->scaol = (__u32)(__u64)kvm->arch.sca;
12061213		set_bit(63 - id, (unsigned long *) &kvm->arch.sca->mcn);
12071214	}
12081208-	vcpu->arch.sie_block->fac = (int) (long) kvm->arch.model.fac->sie;
12151215+	vcpu->arch.sie_block->fac = (int) (long) kvm->arch.model.fac->list;
12091216
12101217	spin_lock_init(&vcpu->arch.local_int.lock);
12111218	vcpu->arch.local_int.float_int = &kvm->arch.float_int;
+2-1
arch/s390/kvm/kvm-s390.h
···128128/* test availability of facility in a kvm intance */
129129static inline int test_kvm_facility(struct kvm *kvm, unsigned long nr)
130130{
131131-	return __test_facility(nr, kvm->arch.model.fac->kvm);
131131+	return __test_facility(nr, kvm->arch.model.fac->mask) &&
132132+		__test_facility(nr, kvm->arch.model.fac->list);
132133}
133134
134135/* are cpu states controlled by user space */
+1-1
arch/s390/kvm/priv.c
···348348	 * We need to shift the lower 32 facility bits (bit 0-31) from a u64
349349	 * into a u32 memory representation. They will remain bits 0-31.
350350	 */
351351-	fac = *vcpu->kvm->arch.model.fac->sie >> 32;
351351+	fac = *vcpu->kvm->arch.model.fac->list >> 32;
352352	rc = write_guest_lc(vcpu, offsetof(struct _lowcore, stfl_fac_list),
353353			    &fac, sizeof(fac));
354354	if (rc)
+16-12
arch/s390/pci/pci.c
···287287	addr = ZPCI_IOMAP_ADDR_BASE | ((u64) idx << 48);
288288	return (void __iomem *) addr + offset;
289289}
290290-EXPORT_SYMBOL_GPL(pci_iomap_range);
290290+EXPORT_SYMBOL(pci_iomap_range);
291291
292292void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
293293{
···309309	}
310310	spin_unlock(&zpci_iomap_lock);
311311}
312312-EXPORT_SYMBOL_GPL(pci_iounmap);
312312+EXPORT_SYMBOL(pci_iounmap);
313313
314314static int pci_read(struct pci_bus *bus, unsigned int devfn, int where,
315315		    int size, u32 *val)
···483483	airq_iv_free_bit(zpci_aisb_iv, zdev->aisb);
484484}
485485
486486-static void zpci_map_resources(struct zpci_dev *zdev)
486486+static void zpci_map_resources(struct pci_dev *pdev)
487487{
488488-	struct pci_dev *pdev = zdev->pdev;
489488	resource_size_t len;
490489	int i;
491490
···498499	}
499500}
500501
501501-static void zpci_unmap_resources(struct zpci_dev *zdev)
502502+static void zpci_unmap_resources(struct pci_dev *pdev)
502503{
503503-	struct pci_dev *pdev = zdev->pdev;
504504	resource_size_t len;
505505	int i;
506506
···649651
650652	zdev->pdev = pdev;
651653	pdev->dev.groups = zpci_attr_groups;
652652-	zpci_map_resources(zdev);
654654+	zpci_map_resources(pdev);
653655
654656	for (i = 0; i < PCI_BAR_COUNT; i++) {
655657		res = &pdev->resource[i];
···661663	return 0;
662664}
663665
666666+void pcibios_release_device(struct pci_dev *pdev)
667667+{
668668+	zpci_unmap_resources(pdev);
669669+}
670670+
664671int pcibios_enable_device(struct pci_dev *pdev, int mask)
665672{
666673	struct zpci_dev *zdev = get_zdev(pdev);
···673670	zdev->pdev = pdev;
674671	zpci_debug_init_device(zdev);
675672	zpci_fmb_enable_device(zdev);
676676-	zpci_map_resources(zdev);
677673
678674	return pci_enable_resources(pdev, mask);
679675}
···681679{
682680	struct zpci_dev *zdev = get_zdev(pdev);
683681
684684-	zpci_unmap_resources(zdev);
685682	zpci_fmb_disable_device(zdev);
686683	zpci_debug_exit_device(zdev);
687684	zdev->pdev = NULL;
···689688#ifdef CONFIG_HIBERNATE_CALLBACKS
690689static int zpci_restore(struct device *dev)
691690{
692692-	struct zpci_dev *zdev = get_zdev(to_pci_dev(dev));
691691+	struct pci_dev *pdev = to_pci_dev(dev);
692692+	struct zpci_dev *zdev = get_zdev(pdev);
693693	int ret = 0;
694694
695695	if (zdev->state != ZPCI_FN_STATE_ONLINE)
···700698	if (ret)
701699		goto out;
702700
703703-	zpci_map_resources(zdev);
701701+	zpci_map_resources(pdev);
704702	zpci_register_ioat(zdev, 0, zdev->start_dma + PAGE_OFFSET,
705703			   zdev->start_dma + zdev->iommu_size - 1,
706704			   (u64) zdev->dma_table);
···711709
712710static int zpci_freeze(struct device *dev)
713711{
714714-	struct zpci_dev *zdev = get_zdev(to_pci_dev(dev));
712712+	struct pci_dev *pdev = to_pci_dev(dev);
713713+	struct zpci_dev *zdev = get_zdev(pdev);
715714
716715	if (zdev->state != ZPCI_FN_STATE_ONLINE)
717716		return 0;
718717
719718	zpci_unregister_ioat(zdev, 0);
719719+	zpci_unmap_resources(pdev);
720720	return clp_disable_fh(zdev);
721721}
722722
+8-9
arch/s390/pci/pci_mmio.c
···6464	if (copy_from_user(buf, user_buffer, length))
6565		goto out;
6666
6767-	memcpy_toio(io_addr, buf, length);
6868-	ret = 0;
6767+	ret = zpci_memcpy_toio(io_addr, buf, length);
6968out:
7069	if (buf != local_buf)
7170		kfree(buf);
···9798		goto out;
9899	io_addr = (void __iomem *)((pfn << PAGE_SHIFT) | (mmio_addr & ~PAGE_MASK));
99100
100100-	ret = -EFAULT;
101101-	if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE)
101101+	if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE) {
102102+		ret = -EFAULT;
102103		goto out;
103103-
104104-	memcpy_fromio(buf, io_addr, length);
105105-
104104+	}
105105+	ret = zpci_memcpy_fromio(buf, io_addr, length);
106106+	if (ret)
107107+		goto out;
106108	if (copy_to_user(user_buffer, buf, length))
107107-		goto out;
109109+		ret = -EFAULT;
108110
109109-	ret = 0;
110111out:
111112	if (buf != local_buf)
112113		kfree(buf);
+3
arch/sparc/Kconfig
···8686	default "arch/sparc/configs/sparc32_defconfig" if SPARC32
8787	default "arch/sparc/configs/sparc64_defconfig" if SPARC64
8888
8989+config ARCH_PROC_KCORE_TEXT
9090+	def_bool y
9191+
8992config IOMMU_HELPER
9093	bool
9194	default y if SPARC64
+12
arch/sparc/include/asm/hypervisor.h
···29572957				   unsigned long reg_val);
29582958#endif
29592959
29602960+
29612961+#define HV_FAST_M7_GET_PERFREG	0x43
29622962+#define HV_FAST_M7_SET_PERFREG	0x44
29632963+
29642964+#ifndef	__ASSEMBLY__
29652965+unsigned long sun4v_m7_get_perfreg(unsigned long reg_num,
29662966+				   unsigned long *reg_val);
29672967+unsigned long sun4v_m7_set_perfreg(unsigned long reg_num,
29682968+				   unsigned long reg_val);
29692969+#endif
29702970+
29602971/* Function numbers for HV_CORE_TRAP.  */
29612972#define HV_CORE_SET_VER		0x00
29622973#define HV_CORE_PUTCHAR		0x01
···29922981#define HV_GRP_SDIO		0x0108
29932982#define HV_GRP_SDIO_ERR		0x0109
29942983#define HV_GRP_REBOOT_DATA	0x0110
29842984+#define HV_GRP_M7_PERF		0x0114
29952985#define HV_GRP_NIAG_PERF	0x0200
29962986#define HV_GRP_FIRE_PERF	0x0201
29972987#define HV_GRP_N2_CPU		0x0202
+10-10
arch/sparc/include/asm/io_64.h
···407407{
408408}
409409
410410-#define ioread8(X)		readb(X)
411411-#define ioread16(X)		readw(X)
412412-#define ioread16be(X)		__raw_readw(X)
413413-#define ioread32(X)		readl(X)
414414-#define ioread32be(X)		__raw_readl(X)
415415-#define iowrite8(val,X)		writeb(val,X)
416416-#define iowrite16(val,X)	writew(val,X)
417417-#define iowrite16be(val,X)	__raw_writew(val,X)
418418-#define iowrite32(val,X)	writel(val,X)
419419-#define iowrite32be(val,X)	__raw_writel(val,X)
410410+#define ioread8			readb
411411+#define ioread16		readw
412412+#define ioread16be		__raw_readw
413413+#define ioread32		readl
414414+#define ioread32be		__raw_readl
415415+#define iowrite8		writeb
416416+#define iowrite16		writew
417417+#define iowrite16be		__raw_writew
418418+#define iowrite32		writel
419419+#define iowrite32be		__raw_writel
420420
421421/* Create a virtual mapping cookie for an IO port range */
422422void __iomem *ioport_map(unsigned long port, unsigned int nr);
-1
arch/sparc/include/asm/starfire.h
···1212extern int this_is_starfire;
1313
1414void check_if_starfire(void);
1515-int starfire_hard_smp_processor_id(void);
1615void starfire_hookup(int);
1716unsigned int starfire_translate(unsigned long imap, unsigned int upaid);
1817
···14061406	scheduler_ipi();
14071407}
14081408
14091409-/* This is a nop because we capture all other cpus
14101410- * anyways when making the PROM active.
14111411- */
14091409+static void stop_this_cpu(void *dummy)
14101410+{
14111411+	prom_stopself();
14121412+}
14131413+
14121414void smp_send_stop(void)
14131415{
14161416+	int cpu;
14171417+
14181418+	if (tlb_type == hypervisor) {
14191419+		for_each_online_cpu(cpu) {
14201420+			if (cpu == smp_processor_id())
14211421+				continue;
14221422+#ifdef CONFIG_SUN_LDOMS
14231423+			if (ldom_domaining_enabled) {
14241424+				unsigned long hv_err;
14251425+				hv_err = sun4v_cpu_stop(cpu);
14261426+				if (hv_err)
14271427+					printk(KERN_ERR "sun4v_cpu_stop() "
14281428+					       "failed err=%lu\n", hv_err);
14291429+			} else
14301430+#endif
14311431+				prom_stopcpu_cpuid(cpu);
14321432+		}
14331433+	} else
14341434+		smp_call_function(stop_this_cpu, NULL, 0);
14141435}
14151436
14161437/**
-5
arch/sparc/kernel/starfire.c
···2828	this_is_starfire = 1;
2929}
3030
3131-int starfire_hard_smp_processor_id(void)
3232-{
3333-	return upa_readl(0x1fff40000d0UL);
3434-}
3535-
3631/*
3732 * Each Starfire board has 32 registers which perform translation
3833 * and delivery of traditional interrupt packets into the extended
+1-1
arch/sparc/kernel/sys_sparc_64.c
···333333	long err;
334334
335335	/* No need for backward compatibility. We can start fresh... */
336336-	if (call <= SEMCTL) {
336336+	if (call <= SEMTIMEDOP) {
337337		switch (call) {
338338		case SEMOP:
339339			err = sys_semtimedop(first, ptr,
···499499	depends on X86_IO_APIC
500500	select IOSF_MBI
501501	select INTEL_IMR
502502+	select COMMON_CLK
502503	---help---
503504	  Select to include support for Quark X1000 SoC.
504505	  Say Y here if you have a Quark based system such as the Arduino
···5151extern unsigned long max_low_pfn_mapped;
5252extern unsigned long max_pfn_mapped;
5353
5454-extern bool kaslr_enabled;
5555-
5654static inline phys_addr_t get_max_mapped(void)
5755{
5856	return (phys_addr_t)max_pfn_mapped << PAGE_SHIFT;
+2
arch/x86/include/asm/pci_x86.h
···9393extern int (*pcibios_enable_irq)(struct pci_dev *dev);
9494extern void (*pcibios_disable_irq)(struct pci_dev *dev);
9595
9696+extern bool mp_should_keep_irq(struct device *dev);
9797+
9698struct pci_raw_ops {
9799	int (*read)(unsigned int domain, unsigned int bus, unsigned int devfn,
98100			int reg, int len, u32 *val);
···13381338}
13391339
13401340/*
13411341+ * ACPI offers an alternative platform interface model that removes
13421342+ * ACPI hardware requirements for platforms that do not implement
13431343+ * the PC Architecture.
13441344+ *
13451345+ * We initialize the Hardware-reduced ACPI model here:
13461346+ */
13471347+static void __init acpi_reduced_hw_init(void)
13481348+{
13491349+	if (acpi_gbl_reduced_hardware) {
13501350+		/*
13511351+		 * Override x86_init functions and bypass legacy pic
13521352+		 * in Hardware-reduced ACPI mode
13531353+		 */
13541354+		x86_init.timers.timer_init	= x86_init_noop;
13551355+		x86_init.irqs.pre_vector_init	= x86_init_noop;
13561356+		legacy_pic			= &null_legacy_pic;
13571357+	}
13581358+}
13591359+
13601360+/*
13411361 * If your system is blacklisted here, but you find that acpi=force
13421362 * works for you, please contact linux-acpi@vger.kernel.org
13431363 */
···15551535	 * Process the Multiple APIC Description Table (MADT), if present
15561536	 */
15571537	early_acpi_process_madt();
15381538+
15391539+	/*
15401540+	 * Hardware-reduced ACPI mode initialization:
15411541+	 */
15421542+	acpi_reduced_hw_init();
15581543
15591544	return 0;
15601545}
+16-6
arch/x86/kernel/apic/apic_numachip.c
···3737static unsigned int get_apic_id(unsigned long x)
3838{
3939	unsigned long value;
4040-	unsigned int id;
4040+	unsigned int id = (x >> 24) & 0xff;
4141
4242-	rdmsrl(MSR_FAM10H_NODE_ID, value);
4343-	id = ((x >> 24) & 0xffU) | ((value << 2) & 0xff00U);
4242+	if (static_cpu_has_safe(X86_FEATURE_NODEID_MSR)) {
4343+		rdmsrl(MSR_FAM10H_NODE_ID, value);
4444+		id |= (value << 2) & 0xff00;
4545+	}
4446
4547	return id;
4648}
···157155
158156static void fixup_cpu_id(struct cpuinfo_x86 *c, int node)
159157{
160160-	if (c->phys_proc_id != node) {
161161-		c->phys_proc_id = node;
162162-		per_cpu(cpu_llc_id, smp_processor_id()) = node;
158158+	u64 val;
159159+	u32 nodes = 1;
160160+
161161+	this_cpu_write(cpu_llc_id, node);
162162+
163163+	/* Account for nodes per socket in multi-core-module processors */
164164+	if (static_cpu_has_safe(X86_FEATURE_NODEID_MSR)) {
165165+		rdmsrl(MSR_FAM10H_NODE_ID, val);
166166+		nodes = ((val >> 3) & 7) + 1;
163167	}
168168+
169169+	c->phys_proc_id = node / nodes;
164170}
165171
166172static int __init numachip_system_init(void)
+5-5
arch/x86/kernel/cpu/perf_event_intel.c
···212212	INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PREC_DIST */
213213	INTEL_EVENT_CONSTRAINT(0xcd, 0x8), /* MEM_TRANS_RETIRED.LOAD_LATENCY */
214214	/* CYCLE_ACTIVITY.CYCLES_L1D_PENDING */
215215-	INTEL_EVENT_CONSTRAINT(0x08a3, 0x4),
215215+	INTEL_UEVENT_CONSTRAINT(0x08a3, 0x4),
216216	/* CYCLE_ACTIVITY.STALLS_L1D_PENDING */
217217-	INTEL_EVENT_CONSTRAINT(0x0ca3, 0x4),
217217+	INTEL_UEVENT_CONSTRAINT(0x0ca3, 0x4),
218218	/* CYCLE_ACTIVITY.CYCLES_NO_EXECUTE */
219219-	INTEL_EVENT_CONSTRAINT(0x04a3, 0xf),
219219+	INTEL_UEVENT_CONSTRAINT(0x04a3, 0xf),
220220	EVENT_CONSTRAINT_END
221221};
222222
···16491649	if (c)
16501650		return c;
16511651
16521652-	c = intel_pebs_constraints(event);
16521652+	c = intel_shared_regs_constraints(cpuc, event);
16531653	if (c)
16541654		return c;
16551655
16561656-	c = intel_shared_regs_constraints(cpuc, event);
16561656+	c = intel_pebs_constraints(event);
16571657	if (c)
16581658		return c;
16591659
+37-10
arch/x86/kernel/entry_64.S
···269269	testl $3, CS-ARGOFFSET(%rsp)		# from kernel_thread?
270270	jz   1f
271271
272272-	testl $_TIF_IA32, TI_flags(%rcx)	# 32-bit compat task needs IRET
273273-	jnz  int_ret_from_sys_call
274274-
275275-	RESTORE_TOP_OF_STACK %rdi, -ARGOFFSET
276276-	jmp ret_from_sys_call			# go to the SYSRET fastpath
272272+	/*
273273+	 * By the time we get here, we have no idea whether our pt_regs,
274274+	 * ti flags, and ti status came from the 64-bit SYSCALL fast path,
275275+	 * the slow path, or one of the ia32entry paths.
276276+	 * Use int_ret_from_sys_call to return, since it can safely handle
277277+	 * all of the above.
278278+	 */
279279+	jmp  int_ret_from_sys_call
277280
2782811:
279282	subq $REST_SKIP, %rsp	# leave space for volatiles
···364361 * Has incomplete stack frame and undefined top of stack.
365362 */
366363ret_from_sys_call:
367367-	testl $_TIF_ALLWORK_MASK,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
368368-	jnz int_ret_from_sys_call_fixup	/* Go the the slow path */
369369-
370364	LOCKDEP_SYS_EXIT
371365	DISABLE_INTERRUPTS(CLBR_NONE)
372366	TRACE_IRQS_OFF
367367+
368368+	/*
369369+	 * We must check ti flags with interrupts (or at least preemption)
370370+	 * off because we must *never* return to userspace without
371371+	 * processing exit work that is enqueued if we're preempted here.
372372+	 * In particular, returning to userspace with any of the one-shot
373373+	 * flags (TIF_NOTIFY_RESUME, TIF_USER_RETURN_NOTIFY, etc) set is
374374+	 * very bad.
375375+	 */
376376+	testl $_TIF_ALLWORK_MASK,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
377377+	jnz int_ret_from_sys_call_fixup	/* Go the the slow path */
378378+
373379	CFI_REMEMBER_STATE
374380	/*
375381	 * sysretq will re-enable interrupts:
···395383
396384int_ret_from_sys_call_fixup:
397385	FIXUP_TOP_OF_STACK %r11, -ARGOFFSET
398398-	jmp int_ret_from_sys_call
386386+	jmp int_ret_from_sys_call_irqs_off
399387
400388	/* Do syscall tracing */
401389tracesys:
···441429GLOBAL(int_ret_from_sys_call)
442430	DISABLE_INTERRUPTS(CLBR_NONE)
443431	TRACE_IRQS_OFF
432432+int_ret_from_sys_call_irqs_off:
444433	movl $_TIF_ALLWORK_MASK,%edi
445434	/* edi: mask to check */
446435GLOBAL(int_with_check)
···799786	cmpq %r11,(EFLAGS-ARGOFFSET)(%rsp)	/* R11 == RFLAGS */
800787	jne opportunistic_sysret_failed
801788
802802-	testq $X86_EFLAGS_RF,%r11		/* sysret can't restore RF */
789789+	/*
790790+	 * SYSRET can't restore RF.  SYSRET can restore TF, but unlike IRET,
791791+	 * restoring TF results in a trap from userspace immediately after
792792+	 * SYSRET.  This would cause an infinite loop whenever #DB happens
793793+	 * with register state that satisfies the opportunistic SYSRET
794794+	 * conditions.  For example, single-stepping this user code:
795795+	 *
796796+	 *           movq $stuck_here,%rcx
797797+	 *           pushfq
798798+	 *           popq %r11
799799+	 *   stuck_here:
800800+	 *
801801+	 * would never get past 'stuck_here'.
802802+	 */
803803+	testq $(X86_EFLAGS_RF|X86_EFLAGS_TF), %r11
803804	jnz opportunistic_sysret_failed
804805
805806	/* nothing to check for RSP */
···4747
4848#ifdef CONFIG_RANDOMIZE_BASE
4949static unsigned long module_load_offset;
5050+static int randomize_modules = 1;
5051
5152/* Mutex protects the module_load_offset. */
5253static DEFINE_MUTEX(module_kaslr_mutex);
5354
5555+static int __init parse_nokaslr(char *p)
5656+{
5757+	randomize_modules = 0;
5858+	return 0;
5959+}
6060+early_param("nokaslr", parse_nokaslr);
6161+
5462static unsigned long int get_module_load_offset(void)
5563{
5656-	if (kaslr_enabled) {
6464+	if (randomize_modules) {
5765		mutex_lock(&module_kaslr_mutex);
5866		/*
5967		 * Calculate the module_load_offset the first time this
arch/x86/kernel/reboot.c
@@ -183 @@
 	},
 	},

+	/* ASRock */
+	{	/* Handle problems with rebooting on ASRock Q1900DC-ITX */
+		.callback = set_pci_reboot,
+		.ident = "ASRock Q1900DC-ITX",
+		.matches = {
+			DMI_MATCH(DMI_BOARD_VENDOR, "ASRock"),
+			DMI_MATCH(DMI_BOARD_NAME, "Q1900DC-ITX"),
+		},
+	},
+
 	/* ASUS */
 	{	/* Handle problems with rebooting on ASUS P4S800 */
 		.callback = set_bios_reboot,
@@ -384 @@
 		goto exit;
 	conditional_sti(regs);

-	if (!user_mode(regs))
+	if (!user_mode_vm(regs))
 		die("bounds", regs, error_code);

 	if (!cpu_feature_enabled(X86_FEATURE_MPX)) {
@@ -637 @@
 	 * then it's very likely the result of an icebp/int01 trap.
 	 * User wants a sigtrap for that.
 	 */
-	if (!dr6 && user_mode(regs))
+	if (!dr6 && user_mode_vm(regs))
 		user_icebp = 1;

 	/* Catch kmemcheck conditions first of all! */
arch/x86/kernel/xsave.c
@@ -379 @@
 	 * thread's fpu state, reconstruct fxstate from the fsave
 	 * header. Sanitize the copied state etc.
 	 */
-	struct xsave_struct *xsave = &tsk->thread.fpu.state->xsave;
+	struct fpu *fpu = &tsk->thread.fpu;
 	struct user_i387_ia32_struct env;
 	int err = 0;

@@ -393 @@
 	 */
 	drop_fpu(tsk);

-	if (__copy_from_user(xsave, buf_fx, state_size) ||
+	if (__copy_from_user(&fpu->state->xsave, buf_fx, state_size) ||
 	    __copy_from_user(&env, buf, sizeof(env))) {
+		fpu_finit(fpu);
 		err = -1;
 	} else {
 		sanitize_restored_xstate(tsk, &env, xstate_bv, fx_only);
-		set_used_math();
 	}

+	set_used_math();
 	if (use_eager_fpu()) {
 		preempt_disable();
 		math_state_restore();
···21682168{21692169 unsigned long *msr_bitmap;2170217021712171- if (irqchip_in_kernel(vcpu->kvm) && apic_x2apic_mode(vcpu->arch.apic)) {21712171+ if (is_guest_mode(vcpu))21722172+ msr_bitmap = vmx_msr_bitmap_nested;21732173+ else if (irqchip_in_kernel(vcpu->kvm) &&21742174+ apic_x2apic_mode(vcpu->arch.apic)) {21722175 if (is_long_mode(vcpu))21732176 msr_bitmap = vmx_msr_bitmap_longmode_x2apic;21742177 else···24792476 if (enable_ept) {24802477 /* nested EPT: emulate EPT also to L1 */24812478 vmx->nested.nested_vmx_secondary_ctls_high |=24822482- SECONDARY_EXEC_ENABLE_EPT |24832483- SECONDARY_EXEC_UNRESTRICTED_GUEST;24792479+ SECONDARY_EXEC_ENABLE_EPT;24842480 vmx->nested.nested_vmx_ept_caps = VMX_EPT_PAGE_WALK_4_BIT |24852481 VMX_EPTP_WB_BIT | VMX_EPT_2MB_PAGE_BIT |24862482 VMX_EPT_INVEPT_BIT;···24922490 vmx->nested.nested_vmx_ept_caps |= VMX_EPT_EXTENT_GLOBAL_BIT;24932491 } else24942492 vmx->nested.nested_vmx_ept_caps = 0;24932493+24942494+ if (enable_unrestricted_guest)24952495+ vmx->nested.nested_vmx_secondary_ctls_high |=24962496+ SECONDARY_EXEC_UNRESTRICTED_GUEST;2495249724962498 /* miscellaneous data */24972499 rdmsr(MSR_IA32_VMX_MISC,···43734367 return 0;43744368}4375436943704370+static inline bool kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu)43714371+{43724372+#ifdef CONFIG_SMP43734373+ if (vcpu->mode == IN_GUEST_MODE) {43744374+ apic->send_IPI_mask(get_cpu_mask(vcpu->cpu),43754375+ POSTED_INTR_VECTOR);43764376+ return true;43774377+ }43784378+#endif43794379+ return false;43804380+}43814381+43764382static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,43774383 int vector)43784384{···43934375 if (is_guest_mode(vcpu) &&43944376 vector == vmx->nested.posted_intr_nv) {43954377 /* the PIR and ON have been set by L1. 
*/43964396- if (vcpu->mode == IN_GUEST_MODE)43974397- apic->send_IPI_mask(get_cpu_mask(vcpu->cpu),43984398- POSTED_INTR_VECTOR);43784378+ kvm_vcpu_trigger_posted_interrupt(vcpu);43994379 /*44004380 * If a posted intr is not recognized by hardware,44014381 * we will accomplish it in the next vmentry.···4425440944264410 r = pi_test_and_set_on(&vmx->pi_desc);44274411 kvm_make_request(KVM_REQ_EVENT, vcpu);44284428-#ifdef CONFIG_SMP44294429- if (!r && (vcpu->mode == IN_GUEST_MODE))44304430- apic->send_IPI_mask(get_cpu_mask(vcpu->cpu),44314431- POSTED_INTR_VECTOR);44324432- else44334433-#endif44124412+ if (r || !kvm_vcpu_trigger_posted_interrupt(vcpu))44344413 kvm_vcpu_kick(vcpu);44354414}44364415···92249213 }9225921492269215 if (cpu_has_vmx_msr_bitmap() &&92279227- exec_control & CPU_BASED_USE_MSR_BITMAPS &&92289228- nested_vmx_merge_msr_bitmap(vcpu, vmcs12)) {92299229- vmcs_write64(MSR_BITMAP, __pa(vmx_msr_bitmap_nested));92169216+ exec_control & CPU_BASED_USE_MSR_BITMAPS) {92179217+ nested_vmx_merge_msr_bitmap(vcpu, vmcs12);92189218+ /* MSR_BITMAP will be set by following vmx_set_efer. */92309219 } else92319220 exec_control &= ~CPU_BASED_USE_MSR_BITMAPS;92329221
arch/x86/kvm/x86.c
@@ -2744 @@
 	case KVM_CAP_USER_NMI:
 	case KVM_CAP_REINJECT_CONTROL:
 	case KVM_CAP_IRQ_INJECT_STATUS:
-	case KVM_CAP_IRQFD:
 	case KVM_CAP_IOEVENTFD:
 	case KVM_CAP_IOEVENTFD_NO_LENGTH:
 	case KVM_CAP_PIT2:
@@ -17 @@
 	.text
 	.globl __kernel_sigreturn
 	.type __kernel_sigreturn,@function
+	nop /* this guy is needed for .LSTARTFDEDLSI1 below (watch for HACK) */
 	ALIGN
 __kernel_sigreturn:
 .LSTART_sigreturn:
@@ -278 @@
 		/*
 		 * We're out of tags on this hardware queue, kick any
 		 * pending IO submits before going to sleep waiting for
-		 * some to complete.
+		 * some to complete. Note that hctx can be NULL here for
+		 * reserved tag allocation.
 		 */
-		blk_mq_run_hw_queue(hctx, false);
+		if (hctx)
+			blk_mq_run_hw_queue(hctx, false);

 		/*
 		 * Retry tag allocation after running the hardware queue,
@@ -485 @@
 	if (!pin || !dev->irq_managed || dev->irq <= 0)
 		return;

+	/* Keep IOAPIC pin configuration when suspending */
+	if (dev->dev.power.is_prepared)
+		return;
+#ifdef CONFIG_PM
+	if (dev->dev.power.runtime_status == RPM_SUSPENDING)
+		return;
+#endif
+
 	entry = acpi_pci_irq_lookup(dev, pin);
 	if (!entry)
 		return;
@@ -505 @@
 	if (gsi >= 0) {
 		acpi_unregister_gsi(gsi);
 		dev->irq_managed = 0;
-		dev->irq = 0;
 	}
 }
drivers/acpi/resource.c
@@ -42 @@
 	 * CHECKME: len might be required to check versus a minimum
 	 * length as well. 1 for io is fine, but for memory it does
 	 * not make any sense at all.
+	 * Note: some BIOSes report incorrect length for ACPI address space
+	 * descriptor, so remove check of 'reslen == len' to avoid regression.
 	 */
-	if (len && reslen && reslen == len && start <= end)
+	if (len && reslen && start <= end)
 		return true;

 	pr_debug("ACPI: invalid or unassigned resource %s [%016llx - %016llx] length [%016llx]\n",
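The relaxed predicate above is small enough to model stand-alone. A minimal userspace sketch, with the function name, bool return, and fixed-width types being my own assumptions rather than ACPI core's:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Stand-alone model of the relaxed check: accept a resource when both
 * the BIOS-reported length and the window length are non-zero and the
 * window is ordered, even if the two lengths disagree.  'reslen' plays
 * the role of what the kernel derives from end - start + 1.
 */
static bool resource_is_valid(uint64_t start, uint64_t end,
			      uint64_t len, uint64_t reslen)
{
	/* the dropped 'reslen == len' comparison is intentionally absent */
	return len && reslen && start <= end;
}
```

A descriptor whose reported length disagrees with its window is now accepted, while zero-length or inverted windows are still rejected.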
drivers/acpi/video.c
@@ -2110 @@

 int acpi_video_register(void)
 {
-	int result = 0;
+	int ret;
+
 	if (register_count) {
 		/*
 		 * if the function of acpi_video_register is already called,
@@ -2123 @@
 	mutex_init(&video_list_lock);
 	INIT_LIST_HEAD(&video_bus_head);

-	result = acpi_bus_register_driver(&acpi_video_bus);
-	if (result < 0)
-		return -ENODEV;
+	ret = acpi_bus_register_driver(&acpi_video_bus);
+	if (ret)
+		return ret;

 	/*
 	 * When the acpi_video_bus is loaded successfully, increase
@@ -2178 @@

 static int __init acpi_video_init(void)
 {
+	/*
+	 * Let the module load even if ACPI is disabled (e.g. due to
+	 * a broken BIOS) so that i915.ko can still be loaded on such
+	 * old systems without an AcpiOpRegion.
+	 *
+	 * acpi_video_register() will report -ENODEV later as well due
+	 * to acpi_disabled when i915.ko tries to register itself afterwards.
+	 */
+	if (acpi_disabled)
+		return 0;
+
 	dmi_check_system(video_dmi_table);

 	if (intel_opregion_present())
drivers/android/binder.c
@@ -551 @@
 {
 	void *page_addr;
 	unsigned long user_page_addr;
-	struct vm_struct tmp_area;
 	struct page **page;
 	struct mm_struct *mm;
@@ -600 @@
 			       proc->pid, page_addr);
 			goto err_alloc_page_failed;
 		}
-		tmp_area.addr = page_addr;
-		tmp_area.size = PAGE_SIZE + PAGE_SIZE /* guard page? */;
-		ret = map_vm_area(&tmp_area, PAGE_KERNEL, page);
-		if (ret) {
+		ret = map_kernel_range_noflush((unsigned long)page_addr,
+					PAGE_SIZE, PAGE_KERNEL, page);
+		flush_cache_vmap((unsigned long)page_addr,
+				(unsigned long)page_addr + PAGE_SIZE);
+		if (ret != 1) {
 			pr_err("%d: binder_alloc_buf failed to map page at %p in kernel\n",
 			       proc->pid, page_addr);
 			goto err_map_kernel_failed;
@@ -869 @@
 	 */
 	ata_msleep(ap, 1);

+	sata_set_spd(link);
+
 	/*
 	 * Now, bring the host controller online again, this can take time
 	 * as PHY reset and communication establishment, 1st D2H FIS and
drivers/base/power/domain.c
@@ -2242 @@
 }

 static int pm_genpd_summary_one(struct seq_file *s,
-		struct generic_pm_domain *gpd)
+		struct generic_pm_domain *genpd)
 {
 	static const char * const status_lookup[] = {
 		[GPD_STATE_ACTIVE] = "on",
@@ -2256 @@
 	struct gpd_link *link;
 	int ret;

-	ret = mutex_lock_interruptible(&gpd->lock);
+	ret = mutex_lock_interruptible(&genpd->lock);
 	if (ret)
 		return -ERESTARTSYS;

-	if (WARN_ON(gpd->status >= ARRAY_SIZE(status_lookup)))
+	if (WARN_ON(genpd->status >= ARRAY_SIZE(status_lookup)))
 		goto exit;
-	seq_printf(s, "%-30s %-15s ", gpd->name, status_lookup[gpd->status]);
+	seq_printf(s, "%-30s %-15s ", genpd->name, status_lookup[genpd->status]);

 	/*
 	 * Modifications on the list require holding locks on both
 	 * master and slave, so we are safe.
-	 * Also gpd->name is immutable.
+	 * Also genpd->name is immutable.
 	 */
-	list_for_each_entry(link, &gpd->master_links, master_node) {
+	list_for_each_entry(link, &genpd->master_links, master_node) {
 		seq_printf(s, "%s", link->slave->name);
-		if (!list_is_last(&link->master_node, &gpd->master_links))
+		if (!list_is_last(&link->master_node, &genpd->master_links))
 			seq_puts(s, ", ");
 	}

-	list_for_each_entry(pm_data, &gpd->dev_list, list_node) {
+	list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
 		kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL);
 		if (kobj_path == NULL)
 			continue;
@@ -2287 @@

 	seq_puts(s, "\n");
 exit:
-	mutex_unlock(&gpd->lock);
+	mutex_unlock(&genpd->lock);

 	return 0;
 }

 static int pm_genpd_summary_show(struct seq_file *s, void *data)
 {
-	struct generic_pm_domain *gpd;
+	struct generic_pm_domain *genpd;
 	int ret = 0;

 	seq_puts(s, " domain status slaves\n");
@@ -2305 @@
 	if (ret)
 		return -ERESTARTSYS;

-	list_for_each_entry(gpd, &gpd_list, gpd_list_node) {
-		ret = pm_genpd_summary_one(s, gpd);
+	list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
+		ret = pm_genpd_summary_one(s, genpd);
 		if (ret)
 			break;
 	}
@@ -140 @@
 {
 	int rc;

-	rc = device_add(&chip->dev);
-	if (rc) {
-		dev_err(&chip->dev,
-			"unable to device_register() %s, major %d, minor %d, err=%d\n",
-			chip->devname, MAJOR(chip->dev.devt),
-			MINOR(chip->dev.devt), rc);
-
-		return rc;
-	}
-
 	rc = cdev_add(&chip->cdev, chip->dev.devt, 1);
 	if (rc) {
 		dev_err(&chip->dev,
@@ -158 @@
 			MINOR(chip->dev.devt), rc);

 		device_unregister(&chip->dev);
+		return rc;
+	}
+
+	rc = device_add(&chip->dev);
+	if (rc) {
+		dev_err(&chip->dev,
+			"unable to device_register() %s, major %d, minor %d, err=%d\n",
+			chip->devname, MAJOR(chip->dev.devt),
+			MINOR(chip->dev.devt), rc);
+
 		return rc;
 	}

@@ -174 @@
  * tpm_chip_register() - create a character device for the TPM chip
  * @chip: TPM chip to use.
  *
- * Creates a character device for the TPM chip and adds sysfs interfaces for
- * the device, PPI and TCPA. As the last step this function adds the
- * chip to the list of TPM chips available for use.
+ * Creates a character device for the TPM chip and adds sysfs attributes for
+ * the device. As the last step this function adds the chip to the list of TPM
+ * chips available for in-kernel use.
  *
- * NOTE: This function should be only called after the chip initialization
- * is complete.
- *
- * Called from tpm_<specific>.c probe function only for devices
- * the driver has determined it should claim.  Prior to calling
- * this function the specific probe function has called pci_enable_device
- * upon errant exit from this function specific probe function should call
- * pci_disable_device
+ * This function should be only called after the chip initialization is
+ * complete.
  */
 int tpm_chip_register(struct tpm_chip *chip)
 {
 	int rc;
-
-	rc = tpm_dev_add_device(chip);
-	if (rc)
-		return rc;

 	/* Populate sysfs for TPM1 devices. */
 	if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
@@ -207 @@

 		chip->bios_dir = tpm_bios_log_setup(chip->devname);
 	}
+
+	rc = tpm_dev_add_device(chip);
+	if (rc)
+		return rc;

 	/* Make the chip available. */
 	spin_lock(&driver_lock);
···144144 divider->flags);145145}146146147147-/*148148- * The reverse of DIV_ROUND_UP: The maximum number which149149- * divided by m is r150150- */151151-#define MULT_ROUND_UP(r, m) ((r) * (m) + (m) - 1)152152-153147static bool _is_valid_table_div(const struct clk_div_table *table,154148 unsigned int div)155149{···219225 unsigned long parent_rate, unsigned long rate,220226 unsigned long flags)221227{222222- int up, down, div;228228+ int up, down;229229+ unsigned long up_rate, down_rate;223230224224- up = down = div = DIV_ROUND_CLOSEST(parent_rate, rate);231231+ up = DIV_ROUND_UP(parent_rate, rate);232232+ down = parent_rate / rate;225233226234 if (flags & CLK_DIVIDER_POWER_OF_TWO) {227227- up = __roundup_pow_of_two(div);228228- down = __rounddown_pow_of_two(div);235235+ up = __roundup_pow_of_two(up);236236+ down = __rounddown_pow_of_two(down);229237 } else if (table) {230230- up = _round_up_table(table, div);231231- down = _round_down_table(table, div);238238+ up = _round_up_table(table, up);239239+ down = _round_down_table(table, down);232240 }233241234234- return (up - div) <= (div - down) ? up : down;242242+ up_rate = DIV_ROUND_UP(parent_rate, up);243243+ down_rate = DIV_ROUND_UP(parent_rate, down);244244+245245+ return (rate - up_rate) <= (down_rate - rate) ? up : down;235246}236247237248static int _div_round(const struct clk_div_table *table,···312313 return i;313314 }314315 parent_rate = __clk_round_rate(__clk_get_parent(hw->clk),315315- MULT_ROUND_UP(rate, i));316316+ rate * i);316317 now = DIV_ROUND_UP(parent_rate, i);317318 if (_is_best_div(rate, now, best, flags)) {318319 bestdiv = i;···352353 bestdiv = readl(divider->reg) >> divider->shift;353354 bestdiv &= div_mask(divider->width);354355 bestdiv = _get_div(divider->table, bestdiv, divider->flags);355355- return bestdiv;356356+ return DIV_ROUND_UP(*prate, bestdiv);356357 }357358358359 return divider_round_rate(hw, rate, prate, divider->table,
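The reworked closest-divider selection above compares the rates produced by the rounded-up and rounded-down candidates rather than the candidate dividers themselves. A stand-alone sketch of that arithmetic, with an illustrative function name and the table/power-of-two special cases omitted (`DIV_ROUND_UP` matches the kernel macro):

```c
#include <assert.h>

/* same definition as the kernel's DIV_ROUND_UP */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/*
 * Round the divider both ways, then compare the rates each candidate
 * yields, not the dividers themselves -- comparing dividers is what
 * let the old DIV_ROUND_CLOSEST-based code pick a divider whose
 * output rate overshoots the request.
 */
static unsigned long closest_div(unsigned long parent_rate,
				 unsigned long rate)
{
	unsigned long up = DIV_ROUND_UP(parent_rate, rate);
	unsigned long down = parent_rate / rate;
	unsigned long up_rate = DIV_ROUND_UP(parent_rate, up);
	unsigned long down_rate = DIV_ROUND_UP(parent_rate, down);

	/* pick the divider whose resulting rate is nearer the target */
	return (rate - up_rate) <= (down_rate - rate) ? up : down;
}
```

For example, with parent_rate = 1000 and rate = 300, down = 3 yields 334 (34 off) while up = 4 yields 250 (50 off), so the sketch picks 3.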
drivers/clk/clk.c
···1350135013511351 return rate;13521352}13531353-EXPORT_SYMBOL_GPL(clk_core_get_rate);1354135313551354/**13561355 * clk_get_rate - return the rate of clk···2168216921692170 return clk_core_get_phase(clk->core);21702171}21722172+21732173+/**21742174+ * clk_is_match - check if two clk's point to the same hardware clock21752175+ * @p: clk compared against q21762176+ * @q: clk compared against p21772177+ *21782178+ * Returns true if the two struct clk pointers both point to the same hardware21792179+ * clock node. Put differently, returns true if struct clk *p and struct clk *q21802180+ * share the same struct clk_core object.21812181+ *21822182+ * Returns false otherwise. Note that two NULL clks are treated as matching.21832183+ */21842184+bool clk_is_match(const struct clk *p, const struct clk *q)21852185+{21862186+ /* trivial case: identical struct clk's or both NULL */21872187+ if (p == q)21882188+ return true;21892189+21902190+ /* true if clk->core pointers match. Avoid derefing garbage */21912191+ if (!IS_ERR_OR_NULL(p) && !IS_ERR_OR_NULL(q))21922192+ if (p->core == q->core)21932193+ return true;21942194+21952195+ return false;21962196+}21972197+EXPORT_SYMBOL_GPL(clk_is_match);2171219821722199/**21732200 * __clk_init - initialize the data structures in a struct clk
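The semantics documented for clk_is_match() can be shown with a toy model: every consumer gets a distinct struct clk handle, so only the shared core pointer identifies the hardware clock. The struct layouts and the `toy_` prefix below are illustrative assumptions, not the kernel's definitions:

```c
#include <assert.h>
#include <stddef.h>

/* illustrative layouts only -- not the kernel's struct definitions */
struct clk_core { const char *name; };
struct clk { struct clk_core *core; };

/*
 * Mirror of the helper's logic: two handles match when they are the
 * same pointer (including both NULL) or when they share a clk_core.
 */
static int toy_clk_is_match(const struct clk *p, const struct clk *q)
{
	if (p == q)
		return 1;
	if (p && q && p->core == q->core)
		return 1;
	return 0;
}
```

Two handles obtained for the same clock compare equal through their core even though the handle pointers differ.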
@@ -417 @@
 		.mnctr_en_bit = 8,
 		.mnctr_reset_bit = 7,
 		.mnctr_mode_shift = 5,
-		.n_val_shift = 16,
-		.m_val_shift = 16,
+		.n_val_shift = 24,
+		.m_val_shift = 8,
 		.width = 8,
 	},
 	.p = {
@@ -547 @@
 		return PTR_ERR(regmap);

 	/* Use the correct frequency plan depending on speed of PLL4 */
-	val = regmap_read(regmap, 0x4, &val);
+	regmap_read(regmap, 0x4, &val);
 	if (val == 0x12) {
 		slimbus_src.freq_tbl = clk_tbl_aif_osr_492;
 		mi2s_osr_src.freq_tbl = clk_tbl_aif_osr_492;
@@ -574 @@
 	.remove = lcc_msm8960_remove,
 	.driver = {
 		.name = "lcc-msm8960",
-		.owner = THIS_MODULE,
 		.of_match_table = lcc_msm8960_match_table,
 	},
 };
drivers/clk/ti/fapll.c
@@ -84 @@
 	struct fapll_data *fd = to_fapll(hw);
 	u32 v = readl_relaxed(fd->base);

-	v |= (1 << FAPLL_MAIN_PLLEN);
+	v |= FAPLL_MAIN_PLLEN;
 	writel_relaxed(v, fd->base);

 	return 0;
@@ -95 @@
 	struct fapll_data *fd = to_fapll(hw);
 	u32 v = readl_relaxed(fd->base);

-	v &= ~(1 << FAPLL_MAIN_PLLEN);
+	v &= ~FAPLL_MAIN_PLLEN;
 	writel_relaxed(v, fd->base);
 }
@@ -104 @@
 	struct fapll_data *fd = to_fapll(hw);
 	u32 v = readl_relaxed(fd->base);

-	return v & (1 << FAPLL_MAIN_PLLEN);
+	return v & FAPLL_MAIN_PLLEN;
 }

 static unsigned long ti_fapll_recalc_rate(struct clk_hw *hw,
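The fix above is the classic mask-versus-bit-index confusion: FAPLL_MAIN_PLLEN is already a mask, so wrapping it in `(1 << ...)` shifts by the mask's value instead of setting the intended bit. A minimal illustration with a made-up mask value:

```c
#include <assert.h>
#include <stdint.h>

#define PLL_EN_MASK	(1u << 15)	/* hypothetical enable-bit mask */

/*
 * Correct usage: the macro is already a mask, so OR it in (or clear it)
 * directly.  The buggy pattern, v |= (1 << PLL_EN_MASK), would shift
 * by the mask's *value* (32768 here) -- undefined behaviour on u32.
 */
static uint32_t pll_enable(uint32_t v)
{
	return v | PLL_EN_MASK;
}

static uint32_t pll_disable(uint32_t v)
{
	return v & ~PLL_EN_MASK;
}
```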
drivers/clocksource/Kconfig
@@ -192 @@
 config SH_TIMER_CMT
 	bool "Renesas CMT timer driver" if COMPILE_TEST
 	depends on GENERIC_CLOCKEVENTS
+	depends on HAS_IOMEM
 	default SYS_SUPPORTS_SH_CMT
 	help
 	  This enables build of a clocksource and clockevent driver for
@@ -201 @@
 config SH_TIMER_MTU2
 	bool "Renesas MTU2 timer driver" if COMPILE_TEST
 	depends on GENERIC_CLOCKEVENTS
+	depends on HAS_IOMEM
 	default SYS_SUPPORTS_SH_MTU2
 	help
 	  This enables build of a clockevent driver for the Multi-Function
@@ -210 @@
 config SH_TIMER_TMU
 	bool "Renesas TMU timer driver" if COMPILE_TEST
 	depends on GENERIC_CLOCKEVENTS
+	depends on HAS_IOMEM
 	default SYS_SUPPORTS_SH_TMU
 	help
 	  This enables build of a clocksource and clockevent driver for
@@ -159 @@

 static int exynos_cpufreq_probe(struct platform_device *pdev)
 {
-	struct device_node *cpus, *np;
+	struct device_node *cpu0;
 	int ret = -EINVAL;

 	exynos_info = kzalloc(sizeof(*exynos_info), GFP_KERNEL);
@@ -206 @@
 	if (ret)
 		goto err_cpufreq_reg;

-	cpus = of_find_node_by_path("/cpus");
-	if (!cpus) {
-		pr_err("failed to find cpus node\n");
+	cpu0 = of_get_cpu_node(0, NULL);
+	if (!cpu0) {
+		pr_err("failed to find cpu0 node\n");
 		return 0;
 	}

-	np = of_get_next_child(cpus, NULL);
-	if (!np) {
-		pr_err("failed to find cpus child node\n");
-		of_node_put(cpus);
-		return 0;
-	}
-
-	if (of_find_property(np, "#cooling-cells", NULL)) {
-		cdev = of_cpufreq_cooling_register(np,
+	if (of_find_property(cpu0, "#cooling-cells", NULL)) {
+		cdev = of_cpufreq_cooling_register(cpu0,
 					cpu_present_mask);
 		if (IS_ERR(cdev))
 			pr_err("running cpufreq without cooling device: %ld\n",
 				PTR_ERR(cdev));
 	}
-	of_node_put(np);
-	of_node_put(cpus);

 	return 0;

drivers/cpufreq/ppc-corenet-cpufreq.c
@@ -22 @@
 #include <linux/smp.h>
 #include <sysdev/fsl_soc.h>

+#include <asm/smp.h>	/* for get_hard_smp_processor_id() in UP configs */
+
 /**
  * struct cpu_data - per CPU data struct
  * @parent: the parent node of cpu clock
···4444 off = 1;4545}46464747+bool cpuidle_not_available(struct cpuidle_driver *drv,4848+ struct cpuidle_device *dev)4949+{5050+ return off || !initialized || !drv || !dev || !dev->enabled;5151+}5252+4753/**4854 * cpuidle_play_dead - cpu off-lining4955 *···7266 return -ENODEV;7367}74687575-/**7676- * cpuidle_find_deepest_state - Find deepest state meeting specific conditions.7777- * @drv: cpuidle driver for the given CPU.7878- * @dev: cpuidle device for the given CPU.7979- * @freeze: Whether or not the state should be suitable for suspend-to-idle.8080- */8181-static int cpuidle_find_deepest_state(struct cpuidle_driver *drv,8282- struct cpuidle_device *dev, bool freeze)6969+static int find_deepest_state(struct cpuidle_driver *drv,7070+ struct cpuidle_device *dev, bool freeze)8371{8472 unsigned int latency_req = 0;8573 int i, ret = freeze ? -1 : CPUIDLE_DRIVER_STATE_START - 1;···9090 ret = i;9191 }9292 return ret;9393+}9494+9595+/**9696+ * cpuidle_find_deepest_state - Find the deepest available idle state.9797+ * @drv: cpuidle driver for the given CPU.9898+ * @dev: cpuidle device for the given CPU.9999+ */100100+int cpuidle_find_deepest_state(struct cpuidle_driver *drv,101101+ struct cpuidle_device *dev)102102+{103103+ return find_deepest_state(drv, dev, false);93104}9410595106static void enter_freeze_proper(struct cpuidle_driver *drv,···124113125114/**126115 * cpuidle_enter_freeze - Enter an idle state suitable for suspend-to-idle.116116+ * @drv: cpuidle driver for the given CPU.117117+ * @dev: cpuidle device for the given CPU.127118 *128119 * If there are states with the ->enter_freeze callback, find the deepest of129129- * them and enter it with frozen tick. 
Otherwise, find the deepest state130130- * available and enter it normally.120120+ * them and enter it with frozen tick.131121 */132132-void cpuidle_enter_freeze(void)122122+int cpuidle_enter_freeze(struct cpuidle_driver *drv, struct cpuidle_device *dev)133123{134134- struct cpuidle_device *dev = __this_cpu_read(cpuidle_devices);135135- struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);136124 int index;137125138126 /*···139129 * that interrupts won't be enabled when it exits and allows the tick to140130 * be frozen safely.141131 */142142- index = cpuidle_find_deepest_state(drv, dev, true);143143- if (index >= 0) {144144- enter_freeze_proper(drv, dev, index);145145- return;146146- }147147-148148- /*149149- * It is not safe to freeze the tick, find the deepest state available150150- * at all and try to enter it normally.151151- */152152- index = cpuidle_find_deepest_state(drv, dev, false);132132+ index = find_deepest_state(drv, dev, true);153133 if (index >= 0)154154- cpuidle_enter(drv, dev, index);155155- else156156- arch_cpu_idle();134134+ enter_freeze_proper(drv, dev, index);157135158158- /* Interrupts are enabled again here. */159159- local_irq_disable();136136+ return index;160137}161138162139/**···202205 */203206int cpuidle_select(struct cpuidle_driver *drv, struct cpuidle_device *dev)204207{205205- if (off || !initialized)206206- return -ENODEV;207207-208208- if (!drv || !dev || !dev->enabled)209209- return -EBUSY;210210-211208 return cpuidle_curr_governor->select(drv, dev);212209}213210
drivers/dma-buf/fence.c
@@ -159 @@
 	if (WARN_ON(timeout < 0))
 		return -EINVAL;

+	if (timeout == 0)
+		return fence_is_signaled(fence);
+
 	trace_fence_wait_start(fence);
 	ret = fence->ops->wait(fence, intr, timeout);
 	trace_fence_wait_end(fence);
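The new zero-timeout behaviour, returning the current signaled state instead of blocking, can be modelled in isolation. `struct toy_fence` and the hard-coded -22 (standing in for -EINVAL) are sketch assumptions, not the dma-buf API:

```c
#include <assert.h>
#include <stdbool.h>

/* toy stand-in for struct fence; -22 plays the role of -EINVAL */
struct toy_fence { bool signaled; };

static long toy_fence_wait_timeout(struct toy_fence *f, long timeout)
{
	if (timeout < 0)
		return -22;
	if (timeout == 0)		/* new fast path: poll, don't wait */
		return f->signaled ? 1 : 0;
	/* a real implementation would block here; elided in the sketch */
	return timeout;
}
```

Callers can thus use a zero timeout as a non-blocking poll of the fence state.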
drivers/dma-buf/reservation.c
@@ -327 @@
 	unsigned seq, shared_count, i = 0;
 	long ret = timeout;

+	if (!timeout)
+		return reservation_object_test_signaled_rcu(obj, wait_all);
+
 retry:
 	fence = NULL;
 	shared_count = 0;
@@ -402 @@
 	int ret = 1;

 	if (!test_bit(FENCE_FLAG_SIGNALED_BIT, &lfence->flags)) {
-		int ret;
-
 		fence = fence_get_rcu(lfence);
 		if (!fence)
 			return -1;
···238238}239239240240/*241241- * atc_get_current_descriptors -242242- * locate the descriptor which equal to physical address in DSCR243243- * @atchan: the channel we want to start244244- * @dscr_addr: physical descriptor address in DSCR241241+ * atc_get_desc_by_cookie - get the descriptor of a cookie242242+ * @atchan: the DMA channel243243+ * @cookie: the cookie to get the descriptor for245244 */246246-static struct at_desc *atc_get_current_descriptors(struct at_dma_chan *atchan,247247- u32 dscr_addr)245245+static struct at_desc *atc_get_desc_by_cookie(struct at_dma_chan *atchan,246246+ dma_cookie_t cookie)248247{249249- struct at_desc *desc, *_desc, *child, *desc_cur = NULL;248248+ struct at_desc *desc, *_desc;249249+250250+ list_for_each_entry_safe(desc, _desc, &atchan->queue, desc_node) {251251+ if (desc->txd.cookie == cookie)252252+ return desc;253253+ }250254251255 list_for_each_entry_safe(desc, _desc, &atchan->active_list, desc_node) {252252- if (desc->lli.dscr == dscr_addr) {253253- desc_cur = desc;254254- break;255255- }256256-257257- list_for_each_entry(child, &desc->tx_list, desc_node) {258258- if (child->lli.dscr == dscr_addr) {259259- desc_cur = child;260260- break;261261- }262262- }256256+ if (desc->txd.cookie == cookie)257257+ return desc;263258 }264259265265- return desc_cur;260260+ return NULL;266261}267262268268-/*269269- * atc_get_bytes_left -270270- * Get the number of bytes residue in dma buffer,271271- * @chan: the channel we want to start263263+/**264264+ * atc_calc_bytes_left - calculates the number of bytes left according to the265265+ * value read from CTRLA.266266+ *267267+ * @current_len: the number of bytes left before reading CTRLA268268+ * @ctrla: the value of CTRLA269269+ * @desc: the descriptor containing the transfer width272270 */273273-static int atc_get_bytes_left(struct dma_chan *chan)271271+static inline int atc_calc_bytes_left(int current_len, u32 ctrla,272272+ struct at_desc *desc)273273+{274274+ return current_len - 
((ctrla & ATC_BTSIZE_MAX) << desc->tx_width);275275+}276276+277277+/**278278+ * atc_calc_bytes_left_from_reg - calculates the number of bytes left according279279+ * to the current value of CTRLA.280280+ *281281+ * @current_len: the number of bytes left before reading CTRLA282282+ * @atchan: the channel to read CTRLA for283283+ * @desc: the descriptor containing the transfer width284284+ */285285+static inline int atc_calc_bytes_left_from_reg(int current_len,286286+ struct at_dma_chan *atchan, struct at_desc *desc)287287+{288288+ u32 ctrla = channel_readl(atchan, CTRLA);289289+290290+ return atc_calc_bytes_left(current_len, ctrla, desc);291291+}292292+293293+/**294294+ * atc_get_bytes_left - get the number of bytes residue for a cookie295295+ * @chan: DMA channel296296+ * @cookie: transaction identifier to check status of297297+ */298298+static int atc_get_bytes_left(struct dma_chan *chan, dma_cookie_t cookie)274299{275300 struct at_dma_chan *atchan = to_at_dma_chan(chan);276276- struct at_dma *atdma = to_at_dma(chan->device);277277- int chan_id = atchan->chan_common.chan_id;278301 struct at_desc *desc_first = atc_first_active(atchan);279279- struct at_desc *desc_cur;280280- int ret = 0, count = 0;302302+ struct at_desc *desc;303303+ int ret;304304+ u32 ctrla, dscr;281305282306 /*283283- * Initialize necessary values in the first time.284284- * remain_desc record remain desc length.307307+ * If the cookie doesn't match to the currently running transfer then308308+ * we can return the total length of the associated DMA transfer,309309+ * because it is still queued.285310 */286286- if (atchan->remain_desc == 0)287287- /* First descriptor embedds the transaction length */288288- atchan->remain_desc = desc_first->len;311311+ desc = atc_get_desc_by_cookie(atchan, cookie);312312+ if (desc == NULL)313313+ return -EINVAL;314314+ else if (desc != desc_first)315315+ return desc->total_len;289316290290- /*291291- * This happens when current descriptor transfer 
···complete.
-	 * The residual buffer size should reduce current descriptor length.
-	 */
-	if (unlikely(test_bit(ATC_IS_BTC, &atchan->status))) {
-		clear_bit(ATC_IS_BTC, &atchan->status);
-		desc_cur = atc_get_current_descriptors(atchan,
-					channel_readl(atchan, DSCR));
-		if (!desc_cur) {
-			ret = -EINVAL;
-			goto out;
-		}
-
-		count = (desc_cur->lli.ctrla & ATC_BTSIZE_MAX)
-			<< desc_first->tx_width;
-		if (atchan->remain_desc < count) {
-			ret = -EINVAL;
-			goto out;
-		}
-
-		atchan->remain_desc -= count;
-		ret = atchan->remain_desc;
-	} else {
-		/*
-		 * Get residual bytes when current
-		 * descriptor transfer in progress.
-		 */
-		count = (channel_readl(atchan, CTRLA) & ATC_BTSIZE_MAX)
-			<< (desc_first->tx_width);
-		ret = atchan->remain_desc - count;
-	}
-	/*
-	 * Check fifo empty.
-	 */
-	if (!(dma_readl(atdma, CHSR) & AT_DMA_EMPT(chan_id)))
-		atc_issue_pending(chan);
-
-out:
+	/* cookie matches to the currently running transfer */
+	ret = desc_first->total_len;
+
+	if (desc_first->lli.dscr) {
+		/* hardware linked list transfer */
+
+		/*
+		 * Calculate the residue by removing the length of the child
+		 * descriptors already transferred from the total length.
+		 * To get the current child descriptor we can use the value of
+		 * the channel's DSCR register and compare it against the value
+		 * of the hardware linked list structure of each child
+		 * descriptor.
+		 */
+		ctrla = channel_readl(atchan, CTRLA);
+		rmb(); /* ensure CTRLA is read before DSCR */
+		dscr = channel_readl(atchan, DSCR);
+
+		/* for the first descriptor we can be more accurate */
+		if (desc_first->lli.dscr == dscr)
+			return atc_calc_bytes_left(ret, ctrla, desc_first);
+
+		ret -= desc_first->len;
+		list_for_each_entry(desc, &desc_first->tx_list, desc_node) {
+			if (desc->lli.dscr == dscr)
+				break;
+
+			ret -= desc->len;
+		}
+
+		/*
+		 * For the last descriptor in the chain we can calculate
+		 * the remaining bytes using the channel's register.
+		 * Note that the transfer width of the first and last
+		 * descriptor may differ.
+		 */
+		if (!desc->lli.dscr)
+			ret = atc_calc_bytes_left_from_reg(ret, atchan, desc);
+	} else {
+		/* single transfer */
+		ret = atc_calc_bytes_left_from_reg(ret, atchan, desc_first);
+	}
+
 	return ret;
 }
···
 		/* Give information to tasklet */
 		set_bit(ATC_IS_ERROR, &atchan->status);
 	}
-	if (pending & AT_DMA_BTC(i))
-		set_bit(ATC_IS_BTC, &atchan->status);
 	tasklet_schedule(&atchan->tasklet);
 	ret = IRQ_HANDLED;
 }
···
 		desc->lli.ctrlb = ctrlb;

 		desc->txd.cookie = 0;
+		desc->len = xfer_count << src_width;

 		atc_desc_chain(&first, &prev, desc);
 	}

 	/* First descriptor of the chain embedds additional information */
 	first->txd.cookie = -EBUSY;
-	first->len = len;
+	first->total_len = len;
+
+	/* set transfer width for the calculation of the residue */
 	first->tx_width = src_width;
+	prev->tx_width = src_width;

 	/* set end-of-link to the last link descriptor of list*/
 	set_desc_eol(desc);
···
 				| ATC_SRC_WIDTH(mem_width)
 				| len >> mem_width;
 			desc->lli.ctrlb = ctrlb;
+			desc->len = len;

 			atc_desc_chain(&first, &prev, desc);
 			total_len += len;
···
 				| ATC_DST_WIDTH(mem_width)
 				| len >> reg_width;
 			desc->lli.ctrlb = ctrlb;
+			desc->len = len;

 			atc_desc_chain(&first, &prev, desc);
 			total_len += len;
···
 	/* First descriptor of the chain embedds additional information */
 	first->txd.cookie = -EBUSY;
-	first->len = total_len;
+	first->total_len = total_len;
+
+	/* set transfer width for the calculation of the residue */
 	first->tx_width = reg_width;
+	prev->tx_width = reg_width;

 	/* first link descriptor of list is responsible of flags */
 	first->txd.flags = flags; /* client is in control of this ack */
···
 			| ATC_FC_MEM2PER
 			| ATC_SIF(atchan->mem_if)
 			| ATC_DIF(atchan->per_if);
+		desc->len = period_len;
 		break;

 	case DMA_DEV_TO_MEM:
···
 			| ATC_FC_PER2MEM
 			| ATC_SIF(atchan->per_if)
 			| ATC_DIF(atchan->mem_if);
+		desc->len = period_len;
 		break;

 	default:
···
 	/* First descriptor of the chain embedds additional information */
 	first->txd.cookie = -EBUSY;
-	first->len = buf_len;
+	first->total_len = buf_len;
 	first->tx_width = reg_width;

 	return &first->txd;
···
 	spin_lock_irqsave(&atchan->lock, flags);

 	/* Get number of bytes left in the active transactions */
-	bytes = atc_get_bytes_left(chan);
+	bytes = atc_get_bytes_left(chan, cookie);

 	spin_unlock_irqrestore(&atchan->lock, flags);
···
 	spin_lock_irqsave(&atchan->lock, flags);
 	atchan->descs_allocated = i;
-	atchan->remain_desc = 0;
 	list_splice(&tmp_list, &atchan->free_list);
 	dma_cookie_init(chan);
 	spin_unlock_irqrestore(&atchan->lock, flags);
···
 	list_splice_init(&atchan->free_list, &list);
 	atchan->descs_allocated = 0;
 	atchan->status = 0;
-	atchan->remain_desc = 0;

 	dev_vdbg(chan2dev(chan), "free_chan_resources: done\n");
 }
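The residue walk above can be illustrated with a small stand-alone model: subtract the length of every child descriptor the controller has already passed, identified by comparing the channel's DSCR value against each descriptor's hardware-link field, from the total transfer length. The structs and the `residue_model()` helper below are illustrative stand-ins, not the driver's types.

```c
#include <assert.h>
#include <stddef.h>

/* toy stand-in for the hardware linked-list word of a descriptor */
struct lli_model {
	unsigned int dscr;	/* identifies this descriptor to the hardware */
};

/* toy stand-in for a DMA descriptor */
struct desc_model {
	struct lli_model lli;
	size_t len;		/* bytes covered by this descriptor */
};

/*
 * Walk the chain: every descriptor before the one matching the current
 * DSCR value has completed, so its length is subtracted from total_len.
 */
static size_t residue_model(const struct desc_model *chain, int nr_desc,
			    size_t total_len, unsigned int cur_dscr)
{
	size_t ret = total_len;
	int i;

	for (i = 0; i < nr_desc; i++) {
		if (chain[i].lli.dscr == cur_dscr)
			break;		/* this descriptor is still running */
		ret -= chain[i].len;	/* already transferred */
	}
	return ret;
}
```

In the real driver the first and last descriptors additionally refine the result from the CTRLA register; the model only shows the per-descriptor subtraction.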
+3 -4
drivers/dma/at_hdmac_regs.h
···
  * @at_lli: hardware lli structure
  * @txd: support for the async_tx api
  * @desc_node: node on the channed descriptors list
- * @len: total transaction bytecount
+ * @len: descriptor byte count
  * @tx_width: transfer width
+ * @total_len: total transaction byte count
  */
 struct at_desc {
 	/* FIRST values the hardware uses */
···
 	struct list_head	desc_node;
 	size_t			len;
 	u32			tx_width;
+	size_t			total_len;
 };

 static inline struct at_desc *
···
 enum atc_status {
 	ATC_IS_ERROR = 0,
 	ATC_IS_PAUSED = 1,
-	ATC_IS_BTC = 2,
 	ATC_IS_CYCLIC = 24,
 };
···
 * @save_cfg: configuration register that is saved on suspend/resume cycle
 * @save_dscr: for cyclic operations, preserve next descriptor address in
 *             the cyclic list on suspend/resume cycle
- * @remain_desc: to save remain desc length
 * @dma_sconfig: configuration for slave transfers, passed via
 * .device_config
 * @lock: serializes enqueue/dequeue operations to descriptors lists
···
 	struct tasklet_struct	tasklet;
 	u32			save_cfg;
 	u32			save_dscr;
-	u32			remain_desc;
 	struct dma_slave_config	dma_sconfig;

 	spinlock_t		lock;
···
 	dev_vdbg(dw->dma.dev, "%s: status=0x%x\n", __func__, status);

 	/* Check if we have any interrupt from the DMAC */
-	if (!status)
+	if (!status || !dw->in_use)
 		return IRQ_NONE;

 	/*
···
 	 */
 	if (echan->edesc) {
 		int cyclic = echan->edesc->cyclic;
+
+		/*
+		 * free the running request descriptor
+		 * since it is not in any of the vdesc lists
+		 */
+		edma_desc_free(&echan->edesc->vdesc);
+
 		echan->edesc = NULL;
 		edma_stop(echan->ch_num);
 		/* Move the cyclic channel back to default queue */
+4 -3
drivers/dma/imx-sdma.c
···
 		dev_err(sdma->dev, "Timeout waiting for CH0 ready\n");
 	}

+	/* Set bits of CONFIG register with dynamic context switching */
+	if (readl(sdma->regs + SDMA_H_CONFIG) == 0)
+		writel_relaxed(SDMA_H_CONFIG_CSM, sdma->regs + SDMA_H_CONFIG);
+
 	return ret ? 0 : -ETIMEDOUT;
 }
···
 	writel_relaxed(0, sdma->regs + SDMA_H_CONFIG);

 	writel_relaxed(ccb_phys, sdma->regs + SDMA_H_C0PTR);
-
-	/* Set bits of CONFIG register with given context switching mode */
-	writel_relaxed(SDMA_H_CONFIG_CSM, sdma->regs + SDMA_H_CONFIG);

 	/* Initializes channel's priorities */
 	sdma_set_channel_priority(&sdma->channel[0], 7);
+4
drivers/dma/ioat/dma_v3.c
···
 	switch (pdev->device) {
 	case PCI_DEVICE_ID_INTEL_IOAT_BWD2:
 	case PCI_DEVICE_ID_INTEL_IOAT_BWD3:
+	case PCI_DEVICE_ID_INTEL_IOAT_BDXDE0:
+	case PCI_DEVICE_ID_INTEL_IOAT_BDXDE1:
+	case PCI_DEVICE_ID_INTEL_IOAT_BDXDE2:
+	case PCI_DEVICE_ID_INTEL_IOAT_BDXDE3:
 		return true;
 	default:
 		return false;
+10
drivers/dma/mmp_pdma.c
···
 	while (dint) {
 		i = __ffs(dint);
+		/* only handle interrupts belonging to pdma driver */
+		if (i >= pdev->dma_channels)
+			break;
 		dint &= (dint - 1);
 		phy = &pdev->phy[i];
 		ret = mmp_pdma_chan_handler(irq, phy);
···
 	struct resource *iores;
 	int i, ret, irq = 0;
 	int dma_channels = 0, irq_num = 0;
+	const enum dma_slave_buswidth widths =
+		DMA_SLAVE_BUSWIDTH_1_BYTE | DMA_SLAVE_BUSWIDTH_2_BYTES |
+		DMA_SLAVE_BUSWIDTH_4_BYTES;

 	pdev = devm_kzalloc(&op->dev, sizeof(*pdev), GFP_KERNEL);
 	if (!pdev)
···
 	pdev->device.device_config = mmp_pdma_config;
 	pdev->device.device_terminate_all = mmp_pdma_terminate_all;
 	pdev->device.copy_align = PDMA_ALIGNMENT;
+	pdev->device.src_addr_widths = widths;
+	pdev->device.dst_addr_widths = widths;
+	pdev->device.directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);
+	pdev->device.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;

 	if (pdev->dev->coherent_dma_mask)
 		dma_set_mask(pdev->dev, pdev->dev->coherent_dma_mask);
···
 	 * c->desc is NULL and exit.)
 	 */
 	if (c->desc) {
+		omap_dma_desc_free(&c->desc->vd);
 		c->desc = NULL;
 		/* Avoid stopping the dma twice */
 		if (!c->paused)
···
 /*
  *	We have to be cautious here. We have seen BIOSes with DMI pointers
  *	pointing to completely the wrong place for example
  */
-static void dmi_table(u8 *buf, int len, int num,
+static void dmi_table(u8 *buf, u32 len, int num,
 		      void (*decode)(const struct dmi_header *, void *),
 		      void *private_data)
 {
···
 	int i = 0;

 	/*
-	 *	Stop when we see all the items the table claimed to have
-	 *	OR we run off the end of the table (also happens)
+	 * Stop when we have seen all the items the table claimed to have
+	 * (SMBIOS < 3.0 only) OR we reach an end-of-table marker OR we run
+	 * off the end of the table (should never happen but sometimes does
+	 * on bogus implementations.)
 	 */
-	while ((i < num) && (data - buf + sizeof(struct dmi_header)) <= len) {
+	while ((!num || i < num) &&
+	       (data - buf + sizeof(struct dmi_header)) <= len) {
 		const struct dmi_header *dm = (const struct dmi_header *)data;
-
-		/*
-		 *  7.45 End-of-Table (Type 127) [SMBIOS reference spec v3.0.0]
-		 */
-		if (dm->type == DMI_ENTRY_END_OF_TABLE)
-			break;

 		/*
 		 *  We want to know the total length (formatted area and
···
 			data++;
 		if (data - buf < len - 1)
 			decode(dm, private_data);
+
+		/*
+		 *  7.45 End-of-Table (Type 127) [SMBIOS reference spec v3.0.0]
+		 */
+		if (dm->type == DMI_ENTRY_END_OF_TABLE)
+			break;
+
 		data += 2;
 		i++;
 	}
 }

 static phys_addr_t dmi_base;
-static u16 dmi_len;
+static u32 dmi_len;
 static u16 dmi_num;

 static int __init dmi_walk_early(void (*decode)(const struct dmi_header *,
···
 	if (memcmp(buf, "_SM3_", 5) == 0 &&
 	    buf[6] < 32 && dmi_checksum(buf, buf[6])) {
 		dmi_ver = get_unaligned_be16(buf + 7);
+		dmi_num = 0;			/* No longer specified */
 		dmi_len = get_unaligned_le32(buf + 12);
 		dmi_base = get_unaligned_le64(buf + 16);
-
-		/*
-		 * The 64-bit SMBIOS 3.0 entry point no longer has a field
-		 * containing the number of structures present in the table.
-		 * Instead, it defines the table size as a maximum size, and
-		 * relies on the end-of-table structure type (#127) to be used
-		 * to signal the end of the table.
-		 * So let's define dmi_num as an upper bound as well: each
-		 * structure has a 4 byte header, so dmi_len / 4 is an upper
-		 * bound for the number of structures in the table.
-		 */
-		dmi_num = dmi_len / 4;

 		if (dmi_walk_early(dmi_decode) == 0) {
 			pr_info("SMBIOS %d.%d present.\n",
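The reworked termination rules above (count-based only when a count is given, end-of-table marker, and length bound, with the entry decoded before the Type 127 check) can be sketched as a stand-alone model. `walk_model()` and its fixed-size entries are illustrative simplifications; the real table has variable-length records and string areas.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define END_OF_TABLE 127	/* SMBIOS Type 127 end-of-table marker */

/* toy fixed-size stand-in for a DMI structure header */
struct header_model {
	uint8_t type;
	uint8_t length;
};

/*
 * Walk a toy table and return how many entries were "decoded".
 * Stop when: the claimed count is exhausted (num != 0 only, i.e.
 * SMBIOS < 3.0), an end-of-table marker was just decoded, or the
 * next header would run past 'len'.
 */
static int walk_model(const struct header_model *tbl, uint32_t len, int num)
{
	uint32_t pos = 0;
	int i = 0;

	while ((!num || i < num) &&
	       pos + sizeof(struct header_model) <= len) {
		const struct header_model *h = &tbl[pos / sizeof(*h)];

		i++;				/* entry decoded first ... */
		if (h->type == END_OF_TABLE)
			break;			/* ... then the marker stops us */
		pos += sizeof(*h);
	}
	return i;
}
```

Note that the Type 127 entry itself is still decoded, matching the reordering of the check in the hunk above.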
+4 -4
drivers/firmware/efi/libstub/efi-stub-helper.c
···
 		start = desc->phys_addr;
 		end = start + desc->num_pages * (1UL << EFI_PAGE_SHIFT);

-		if ((start + size) > end || (start + size) > max)
-			continue;
-
-		if (end - size > max)
+		if (end > max)
 			end = max;
+
+		if ((start + size) > end)
+			continue;

 		if (round_down(end - size, align) < start)
 			continue;
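The corrected fit test above clamps the region's end to the ceiling first and only then checks whether the allocation still fits. A minimal sketch of that predicate, with a hypothetical `region_fits()` helper standing in for the loop body:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Return 1 if 'size' bytes can be carved out of [start, end) without
 * crossing 'max': clamp end to max first, then test the remaining span.
 */
static int region_fits(uint64_t start, uint64_t end, uint64_t size,
		       uint64_t max)
{
	if (end > max)
		end = max;		/* never allocate above the ceiling */

	if (start + size > end)
		return 0;		/* clamped region too small */

	return 1;
}
```

The ordering matters: testing `start + size` against the unclamped `end` (as the removed code effectively did) can accept spans that straddle `max`, or reject regions whose lower part is perfectly usable.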
···
 		ret = of_property_read_u32_index(np, "gpio,syscon-dev", 2,
 						 &priv->dir_reg_offset);
 		if (ret)
-			dev_err(dev, "can't read the dir register offset!\n");
+			dev_dbg(dev, "can't read the dir register offset!\n");

 		priv->dir_reg_offset <<= 3;
 	}
+10
drivers/gpio/gpiolib-acpi.c
···
 	if (!handler)
 		return AE_BAD_PARAMETER;

+	pin = acpi_gpiochip_pin_to_gpio_offset(chip, pin);
+	if (pin < 0)
+		return AE_BAD_PARAMETER;
+
 	desc = gpiochip_request_own_desc(chip, pin, "ACPI:Event");
 	if (IS_ERR(desc)) {
 		dev_err(chip->dev, "Failed to request GPIO\n");
···
 		struct acpi_gpio_connection *conn;
 		struct gpio_desc *desc;
 		bool found;
+
+		pin = acpi_gpiochip_pin_to_gpio_offset(chip, pin);
+		if (pin < 0) {
+			status = AE_BAD_PARAMETER;
+			goto out;
+		}

 		mutex_lock(&achip->conn_lock);
···
 			  struct drm_dp_sideband_msg_tx *txmsg)
 {
 	bool ret;
-	mutex_lock(&mgr->qlock);
+
+	/*
+	 * All updates to txmsg->state are protected by mgr->qlock, and the two
+	 * cases we check here are terminal states. For those the barriers
+	 * provided by the wake_up/wait_event pair are enough.
+	 */
 	ret = (txmsg->state == DRM_DP_SIDEBAND_TX_RX ||
 	       txmsg->state == DRM_DP_SIDEBAND_TX_TIMEOUT);
-	mutex_unlock(&mgr->qlock);
 	return ret;
 }
···
 	return 0;
 }

-/* must be called holding qlock */
 static void process_single_down_tx_qlock(struct drm_dp_mst_topology_mgr *mgr)
 {
 	struct drm_dp_sideband_msg_tx *txmsg;
 	int ret;
+
+	WARN_ON(!mutex_is_locked(&mgr->qlock));

 	/* construct a chunk from the first msg in the tx_msg queue */
 	if (list_empty(&mgr->tx_msg_downq)) {
···
 	of_node_put(i80_if_timings);

 	ctx->regs = of_iomap(dev->of_node, 0);
-	if (IS_ERR(ctx->regs)) {
-		ret = PTR_ERR(ctx->regs);
+	if (!ctx->regs) {
+		ret = -ENOMEM;
 		goto err_del_component;
 	}
-245
drivers/gpu/drm/exynos/exynos_drm_connector.c (file deleted)
···
-/*
- * Copyright (c) 2011 Samsung Electronics Co., Ltd.
- * Authors:
- *	Inki Dae <inki.dae@samsung.com>
- *	Joonyoung Shim <jy0922.shim@samsung.com>
- *	Seung-Woo Kim <sw0312.kim@samsung.com>
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the
- * Free Software Foundation; either version 2 of the License, or (at your
- * option) any later version.
- */
-
-#include <drm/drmP.h>
-#include <drm/drm_crtc_helper.h>
-
-#include <drm/exynos_drm.h>
-#include "exynos_drm_drv.h"
-#include "exynos_drm_encoder.h"
-#include "exynos_drm_connector.h"
-
-#define to_exynos_connector(x)	container_of(x, struct exynos_drm_connector,\
-				drm_connector)
-
-struct exynos_drm_connector {
-	struct drm_connector	drm_connector;
-	uint32_t		encoder_id;
-	struct exynos_drm_display *display;
-};
-
-static int exynos_drm_connector_get_modes(struct drm_connector *connector)
-{
-	struct exynos_drm_connector *exynos_connector =
-					to_exynos_connector(connector);
-	struct exynos_drm_display *display = exynos_connector->display;
-	struct edid *edid = NULL;
-	unsigned int count = 0;
-	int ret;
-
-	/*
-	 * if get_edid() exists then get_edid() callback of hdmi side
-	 * is called to get edid data through i2c interface else
-	 * get timing from the FIMD driver(display controller).
-	 *
-	 * P.S. in case of lcd panel, count is always 1 if success
-	 * because lcd panel has only one mode.
-	 */
-	if (display->ops->get_edid) {
-		edid = display->ops->get_edid(display, connector);
-		if (IS_ERR_OR_NULL(edid)) {
-			ret = PTR_ERR(edid);
-			edid = NULL;
-			DRM_ERROR("Panel operation get_edid failed %d\n", ret);
-			goto out;
-		}
-
-		count = drm_add_edid_modes(connector, edid);
-		if (!count) {
-			DRM_ERROR("Add edid modes failed %d\n", count);
-			goto out;
-		}
-
-		drm_mode_connector_update_edid_property(connector, edid);
-	} else {
-		struct exynos_drm_panel_info *panel;
-		struct drm_display_mode *mode = drm_mode_create(connector->dev);
-		if (!mode) {
-			DRM_ERROR("failed to create a new display mode.\n");
-			return 0;
-		}
-
-		if (display->ops->get_panel)
-			panel = display->ops->get_panel(display);
-		else {
-			drm_mode_destroy(connector->dev, mode);
-			return 0;
-		}
-
-		drm_display_mode_from_videomode(&panel->vm, mode);
-		mode->width_mm = panel->width_mm;
-		mode->height_mm = panel->height_mm;
-		connector->display_info.width_mm = mode->width_mm;
-		connector->display_info.height_mm = mode->height_mm;
-
-		mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
-		drm_mode_set_name(mode);
-		drm_mode_probed_add(connector, mode);
-
-		count = 1;
-	}
-
-out:
-	kfree(edid);
-	return count;
-}
-
-static int exynos_drm_connector_mode_valid(struct drm_connector *connector,
-					    struct drm_display_mode *mode)
-{
-	struct exynos_drm_connector *exynos_connector =
-					to_exynos_connector(connector);
-	struct exynos_drm_display *display = exynos_connector->display;
-	int ret = MODE_BAD;
-
-	DRM_DEBUG_KMS("%s\n", __FILE__);
-
-	if (display->ops->check_mode)
-		if (!display->ops->check_mode(display, mode))
-			ret = MODE_OK;
-
-	return ret;
-}
-
-static struct drm_encoder *exynos_drm_best_encoder(
-		struct drm_connector *connector)
-{
-	struct drm_device *dev = connector->dev;
-	struct exynos_drm_connector *exynos_connector =
-					to_exynos_connector(connector);
-	return drm_encoder_find(dev, exynos_connector->encoder_id);
-}
-
-static struct drm_connector_helper_funcs exynos_connector_helper_funcs = {
-	.get_modes	= exynos_drm_connector_get_modes,
-	.mode_valid	= exynos_drm_connector_mode_valid,
-	.best_encoder	= exynos_drm_best_encoder,
-};
-
-static int exynos_drm_connector_fill_modes(struct drm_connector *connector,
-				unsigned int max_width, unsigned int max_height)
-{
-	struct exynos_drm_connector *exynos_connector =
-					to_exynos_connector(connector);
-	struct exynos_drm_display *display = exynos_connector->display;
-	unsigned int width, height;
-
-	width = max_width;
-	height = max_height;
-
-	/*
-	 * if specific driver want to find desired_mode using maxmum
-	 * resolution then get max width and height from that driver.
-	 */
-	if (display->ops->get_max_resol)
-		display->ops->get_max_resol(display, &width, &height);
-
-	return drm_helper_probe_single_connector_modes(connector, width,
-							height);
-}
-
-/* get detection status of display device. */
-static enum drm_connector_status
-exynos_drm_connector_detect(struct drm_connector *connector, bool force)
-{
-	struct exynos_drm_connector *exynos_connector =
-					to_exynos_connector(connector);
-	struct exynos_drm_display *display = exynos_connector->display;
-	enum drm_connector_status status = connector_status_disconnected;
-
-	if (display->ops->is_connected) {
-		if (display->ops->is_connected(display))
-			status = connector_status_connected;
-		else
-			status = connector_status_disconnected;
-	}
-
-	return status;
-}
-
-static void exynos_drm_connector_destroy(struct drm_connector *connector)
-{
-	struct exynos_drm_connector *exynos_connector =
-		to_exynos_connector(connector);
-
-	drm_connector_unregister(connector);
-	drm_connector_cleanup(connector);
-	kfree(exynos_connector);
-}
-
-static struct drm_connector_funcs exynos_connector_funcs = {
-	.dpms		= drm_helper_connector_dpms,
-	.fill_modes	= exynos_drm_connector_fill_modes,
-	.detect		= exynos_drm_connector_detect,
-	.destroy	= exynos_drm_connector_destroy,
-};
-
-struct drm_connector *exynos_drm_connector_create(struct drm_device *dev,
-						   struct drm_encoder *encoder)
-{
-	struct exynos_drm_connector *exynos_connector;
-	struct exynos_drm_display *display = exynos_drm_get_display(encoder);
-	struct drm_connector *connector;
-	int type;
-	int err;
-
-	exynos_connector = kzalloc(sizeof(*exynos_connector), GFP_KERNEL);
-	if (!exynos_connector)
-		return NULL;
-
-	connector = &exynos_connector->drm_connector;
-
-	switch (display->type) {
-	case EXYNOS_DISPLAY_TYPE_HDMI:
-		type = DRM_MODE_CONNECTOR_HDMIA;
-		connector->interlace_allowed = true;
-		connector->polled = DRM_CONNECTOR_POLL_HPD;
-		break;
-	case EXYNOS_DISPLAY_TYPE_VIDI:
-		type = DRM_MODE_CONNECTOR_VIRTUAL;
-		connector->polled = DRM_CONNECTOR_POLL_HPD;
-		break;
-	default:
-		type = DRM_MODE_CONNECTOR_Unknown;
-		break;
-	}
-
-	drm_connector_init(dev, connector, &exynos_connector_funcs, type);
-	drm_connector_helper_add(connector, &exynos_connector_helper_funcs);
-
-	err = drm_connector_register(connector);
-	if (err)
-		goto err_connector;
-
-	exynos_connector->encoder_id = encoder->base.id;
-	exynos_connector->display = display;
-	connector->dpms = DRM_MODE_DPMS_OFF;
-	connector->encoder = encoder;
-
-	err = drm_mode_connector_attach_encoder(connector, encoder);
-	if (err) {
-		DRM_ERROR("failed to attach a connector to a encoder\n");
-		goto err_sysfs;
-	}
-
-	DRM_DEBUG_KMS("connector has been created\n");
-
-	return connector;
-
-err_sysfs:
-	drm_connector_unregister(connector);
-err_connector:
-	drm_connector_cleanup(connector);
-	kfree(exynos_connector);
-	return NULL;
-}
-20
drivers/gpu/drm/exynos/exynos_drm_connector.h (file deleted)
···
-/*
- * Copyright (c) 2011 Samsung Electronics Co., Ltd.
- * Authors:
- *	Inki Dae <inki.dae@samsung.com>
- *	Joonyoung Shim <jy0922.shim@samsung.com>
- *	Seung-Woo Kim <sw0312.kim@samsung.com>
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the
- * Free Software Foundation; either version 2 of the License, or (at your
- * option) any later version.
- */
-
-#ifndef _EXYNOS_DRM_CONNECTOR_H_
-#define _EXYNOS_DRM_CONNECTOR_H_
-
-struct drm_connector *exynos_drm_connector_create(struct drm_device *dev,
-						  struct drm_encoder *encoder);
-
-#endif
+16 -21
drivers/gpu/drm/exynos/exynos_drm_fimd.c
···
 	unsigned int		ovl_height;
 	unsigned int		fb_width;
 	unsigned int		fb_height;
+	unsigned int		fb_pitch;
 	unsigned int		bpp;
 	unsigned int		pixel_format;
 	dma_addr_t		dma_addr;
···
 	}
 }

-static int fimd_ctx_initialize(struct fimd_context *ctx,
+static int fimd_iommu_attach_devices(struct fimd_context *ctx,
 			struct drm_device *drm_dev)
 {
-	struct exynos_drm_private *priv;
-	priv = drm_dev->dev_private;
-
-	ctx->drm_dev = drm_dev;
-	ctx->pipe = priv->pipe++;

 	/* attach this sub driver to iommu mapping if supported. */
 	if (is_drm_iommu_supported(ctx->drm_dev)) {
···
 	return 0;
 }

-static void fimd_ctx_remove(struct fimd_context *ctx)
+static void fimd_iommu_detach_devices(struct fimd_context *ctx)
 {
 	/* detach this sub driver from iommu mapping if supported. */
 	if (is_drm_iommu_supported(ctx->drm_dev))
···
 	win_data->offset_y = plane->crtc_y;
 	win_data->ovl_width = plane->crtc_width;
 	win_data->ovl_height = plane->crtc_height;
+	win_data->fb_pitch = plane->pitch;
 	win_data->fb_width = plane->fb_width;
 	win_data->fb_height = plane->fb_height;
 	win_data->dma_addr = plane->dma_addr[0] + offset;
 	win_data->bpp = plane->bpp;
 	win_data->pixel_format = plane->pixel_format;
-	win_data->buf_offsize = (plane->fb_width - plane->crtc_width) *
-				(plane->bpp >> 3);
+	win_data->buf_offsize =
+		plane->pitch - (plane->crtc_width * (plane->bpp >> 3));
 	win_data->line_size = plane->crtc_width * (plane->bpp >> 3);

 	DRM_DEBUG_KMS("offset_x = %d, offset_y = %d\n",
···
 	writel(val, ctx->regs + VIDWx_BUF_START(win, 0));

 	/* buffer end address */
-	size = win_data->fb_width * win_data->ovl_height * (win_data->bpp >> 3);
+	size = win_data->fb_pitch * win_data->ovl_height * (win_data->bpp >> 3);
 	val = (unsigned long)(win_data->dma_addr + size);
 	writel(val, ctx->regs + VIDWx_BUF_END(win, 0));
···
 {
 	struct fimd_context *ctx = dev_get_drvdata(dev);
 	struct drm_device *drm_dev = data;
+	struct exynos_drm_private *priv = drm_dev->dev_private;
 	int ret;

-	ret = fimd_ctx_initialize(ctx, drm_dev);
-	if (ret) {
-		DRM_ERROR("fimd_ctx_initialize failed.\n");
-		return ret;
-	}
+	ctx->drm_dev = drm_dev;
+	ctx->pipe = priv->pipe++;

 	ctx->crtc = exynos_drm_crtc_create(drm_dev, ctx->pipe,
 					   EXYNOS_DISPLAY_TYPE_LCD,
 					   &fimd_crtc_ops, ctx);
-	if (IS_ERR(ctx->crtc)) {
-		fimd_ctx_remove(ctx);
-		return PTR_ERR(ctx->crtc);
-	}

 	if (ctx->display)
 		exynos_drm_create_enc_conn(drm_dev, ctx->display);
+
+	ret = fimd_iommu_attach_devices(ctx, drm_dev);
+	if (ret)
+		return ret;

 	return 0;
···
 	fimd_dpms(ctx->crtc, DRM_MODE_DPMS_OFF);

+	fimd_iommu_detach_devices(ctx);
+
 	if (ctx->display)
 		exynos_dpi_remove(ctx->display);
-
-	fimd_ctx_remove(ctx);
 }

 static const struct component_ops fimd_component_ops = {
+1 -1
drivers/gpu/drm/exynos/exynos_drm_plane.c
···
 	struct exynos_drm_plane *exynos_plane = to_exynos_plane(plane);
 	struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(plane->crtc);

-	if (exynos_crtc->ops->win_disable)
+	if (exynos_crtc && exynos_crtc->ops->win_disable)
 		exynos_crtc->ops->win_disable(exynos_crtc,
 					      exynos_plane->zpos);
···
 	return 0;
 }

-static int i915_drm_suspend_late(struct drm_device *drm_dev)
+static int i915_drm_suspend_late(struct drm_device *drm_dev, bool hibernation)
 {
 	struct drm_i915_private *dev_priv = drm_dev->dev_private;
 	int ret;
···
 	}

 	pci_disable_device(drm_dev->pdev);
-	pci_set_power_state(drm_dev->pdev, PCI_D3hot);
+	/*
+	 * During hibernation on some GEN4 platforms the BIOS may try to access
+	 * the device even though it's already in D3 and hang the machine. So
+	 * leave the device in D0 on those platforms and hope the BIOS will
+	 * power down the device properly. Platforms where this was seen:
+	 * Lenovo Thinkpad X301, X61s
+	 */
+	if (!(hibernation &&
+	      drm_dev->pdev->subsystem_vendor == PCI_VENDOR_ID_LENOVO &&
+	      INTEL_INFO(dev_priv)->gen == 4))
+		pci_set_power_state(drm_dev->pdev, PCI_D3hot);

 	return 0;
 }
···
 	if (error)
 		return error;

-	return i915_drm_suspend_late(dev);
+	return i915_drm_suspend_late(dev, false);
 }

 static int i915_drm_resume(struct drm_device *dev)
···
 	if (drm_dev->switch_power_state == DRM_SWITCH_POWER_OFF)
 		return 0;

-	return i915_drm_suspend_late(drm_dev);
+	return i915_drm_suspend_late(drm_dev, false);
+}
+
+static int i915_pm_poweroff_late(struct device *dev)
+{
+	struct drm_device *drm_dev = dev_to_i915(dev)->dev;
+
+	if (drm_dev->switch_power_state == DRM_SWITCH_POWER_OFF)
+		return 0;
+
+	return i915_drm_suspend_late(drm_dev, true);
 }

 static int i915_pm_resume_early(struct device *dev)
···
 	.thaw_early = i915_pm_resume_early,
 	.thaw = i915_pm_resume,
 	.poweroff = i915_pm_suspend,
-	.poweroff_late = i915_pm_suspend_late,
+	.poweroff_late = i915_pm_poweroff_late,
 	.restore_early = i915_pm_resume_early,
 	.restore = i915_pm_resume,
+41 -22
drivers/gpu/drm/i915/i915_gem.c
···
 	WARN_ON(i915_verify_lists(ring->dev));

-	/* Move any buffers on the active list that are no longer referenced
-	 * by the ringbuffer to the flushing/inactive lists as appropriate,
-	 * before we free the context associated with the requests.
+	/* Retire requests first as we use it above for the early return.
+	 * If we retire requests last, we may use a later seqno and so clear
+	 * the requests lists without clearing the active list, leading to
+	 * confusion.
 	 */
-	while (!list_empty(&ring->active_list)) {
-		struct drm_i915_gem_object *obj;
-
-		obj = list_first_entry(&ring->active_list,
-				       struct drm_i915_gem_object,
-				       ring_list);
-
-		if (!i915_gem_request_completed(obj->last_read_req, true))
-			break;
-
-		i915_gem_object_move_to_inactive(obj);
-	}
-
-
 	while (!list_empty(&ring->request_list)) {
 		struct drm_i915_gem_request *request;
 		struct intel_ringbuffer *ringbuf;
···
 		ringbuf->last_retired_head = request->postfix;

 		i915_gem_free_request(request);
+	}
+
+	/* Move any buffers on the active list that are no longer referenced
+	 * by the ringbuffer to the flushing/inactive lists as appropriate,
+	 * before we free the context associated with the requests.
+	 */
+	while (!list_empty(&ring->active_list)) {
+		struct drm_i915_gem_object *obj;
+
+		obj = list_first_entry(&ring->active_list,
+				       struct drm_i915_gem_object,
+				       ring_list);
+
+		if (!i915_gem_request_completed(obj->last_read_req, true))
+			break;
+
+		i915_gem_object_move_to_inactive(obj);
 	}

 	if (unlikely(ring->trace_irq_req &&
···
 	req = obj->last_read_req;

 	/* Do this after OLR check to make sure we make forward progress polling
-	 * on this IOCTL with a timeout <=0 (like busy ioctl)
+	 * on this IOCTL with a timeout == 0 (like busy ioctl)
 	 */
-	if (args->timeout_ns <= 0) {
+	if (args->timeout_ns == 0) {
 		ret = -ETIME;
 		goto out;
 	}
···
 	i915_gem_request_reference(req);
 	mutex_unlock(&dev->struct_mutex);

-	ret = __i915_wait_request(req, reset_counter, true, &args->timeout_ns,
+	ret = __i915_wait_request(req, reset_counter, true,
+				  args->timeout_ns > 0 ? &args->timeout_ns : NULL,
 				  file->driver_priv);
 	mutex_lock(&dev->struct_mutex);
 	i915_gem_request_unreference(req);
···
 	if (INTEL_INFO(dev)->gen < 6 && !intel_enable_gtt())
 		return -EIO;

+	/* Double layer security blanket, see i915_gem_init() */
+	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
+
 	if (dev_priv->ellc_size)
 		I915_WRITE(HSW_IDICR, I915_READ(HSW_IDICR) | IDIHASHMSK(0xf));
···
 	for_each_ring(ring, dev_priv, i) {
 		ret = ring->init_hw(ring);
 		if (ret)
-			return ret;
+			goto out;
 	}

 	for (i = 0; i < NUM_L3_SLICES(dev); i++)
···
 			DRM_ERROR("Context enable failed %d\n", ret);
 			i915_gem_cleanup_ringbuffer(dev);

-			return ret;
+			goto out;
 		}

+out:
+	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
 	return ret;
 }
···
 		dev_priv->gt.stop_ring = intel_logical_ring_stop;
 	}

+	/* This is just a security blanket to placate dragons.
+	 * On some systems, we very sporadically observe that the first TLBs
+	 * used by the CS may be stale, despite us poking the TLB reset.  If
+	 * we hold the forcewake during initialisation these problems
+	 * just magically go away.
+	 */
+	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
+
 	ret = i915_gem_init_userptr(dev);
 	if (ret)
 		goto out_unlock;
···
 	}

 out_unlock:
+	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
 	mutex_unlock(&dev->struct_mutex);

 	return ret;
+1 -1
drivers/gpu/drm/i915/i915_gem_execbuffer.c
···
 		goto err;
 	}

-	if (i915_needs_cmd_parser(ring)) {
+	if (i915_needs_cmd_parser(ring) && args->batch_len) {
 		batch_obj = i915_gem_execbuffer_parse(ring,
 						      &shadow_exec_entry,
 						      eb,
+3 -3
drivers/gpu/drm/i915/i915_gem_gtt.c
···
 	ppgtt->base.clear_range(&ppgtt->base, 0, ppgtt->base.total, true);

-	DRM_DEBUG_DRIVER("Allocated pde space (%ldM) at GTT entry: %lx\n",
+	DRM_DEBUG_DRIVER("Allocated pde space (%lldM) at GTT entry: %llx\n",
 			 ppgtt->node.size >> 20,
 			 ppgtt->node.start / PAGE_SIZE);
···
 static void i915_gtt_color_adjust(struct drm_mm_node *node,
 				  unsigned long color,
-				  unsigned long *start,
-				  unsigned long *end)
+				  u64 *start,
+				  u64 *end)
 {
 	if (node->color != color)
 		*start += 4096;
+39 -7
drivers/gpu/drm/i915/intel_display.c
···
 #include <drm/i915_drm.h>
 #include "i915_drv.h"
 #include "i915_trace.h"
+#include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_dp_helper.h>
 #include <drm/drm_crtc_helper.h>
···
 	return false;
 }

+/* Update plane->state->fb to match plane->fb after driver-internal updates */
+static void
+update_state_fb(struct drm_plane *plane)
+{
+	if (plane->fb != plane->state->fb)
+		drm_atomic_set_fb_for_plane(plane->state, plane->fb);
+}
+
 static void
 intel_find_plane_obj(struct intel_crtc *intel_crtc,
 		     struct intel_initial_plane_config *plane_config)
···
 	if (!intel_crtc->base.primary->fb)
 		return;

-	if (intel_alloc_plane_obj(intel_crtc, plane_config))
+	if (intel_alloc_plane_obj(intel_crtc, plane_config)) {
+		struct drm_plane *primary = intel_crtc->base.primary;
+
+		primary->state->crtc = &intel_crtc->base;
+		primary->crtc = &intel_crtc->base;
+		update_state_fb(primary);
+
 		return;
+	}

 	kfree(intel_crtc->base.primary->fb);
 	intel_crtc->base.primary->fb = NULL;
···
 			continue;

 		if (i915_gem_obj_ggtt_offset(obj) == plane_config->base) {
+			struct drm_plane *primary = intel_crtc->base.primary;
+
 			if (obj->tiling_mode != I915_TILING_NONE)
 				dev_priv->preserve_bios_swizzle = true;

 			drm_framebuffer_reference(c->primary->fb);
-			intel_crtc->base.primary->fb = c->primary->fb;
+			primary->fb = c->primary->fb;
+			primary->state->crtc = &intel_crtc->base;
+			primary->crtc = &intel_crtc->base;
 			obj->frontbuffer_bits |= INTEL_FRONTBUFFER_PRIMARY(intel_crtc->pipe);
 			break;
 		}
 	}
+
+	update_state_fb(intel_crtc->base.primary);
 }

 static void i9xx_update_primary_plane(struct drm_crtc *crtc,
···
 	struct drm_framebuffer *fb;
 	struct intel_framebuffer *intel_fb;

+	val = I915_READ(DSPCNTR(plane));
+	if (!(val & DISPLAY_PLANE_ENABLE))
+		return;
+
 	intel_fb = kzalloc(sizeof(*intel_fb), GFP_KERNEL);
 	if (!intel_fb) {
 		DRM_DEBUG_KMS("failed to alloc fb\n");
···
 	}

 	fb = &intel_fb->base;
-
-	val = I915_READ(DSPCNTR(plane));

 	if (INTEL_INFO(dev)->gen >= 4)
 		if (val & DISPPLANE_TILED)
···
 	fb = &intel_fb->base;

 	val = I915_READ(PLANE_CTL(pipe, 0));
+	if (!(val & PLANE_CTL_ENABLE))
+		goto error;
+
 	if (val & PLANE_CTL_TILED_MASK)
 		plane_config->tiling = I915_TILING_X;
···
 	struct drm_framebuffer *fb;
 	struct intel_framebuffer *intel_fb;

+	val = I915_READ(DSPCNTR(pipe));
+	if (!(val & DISPLAY_PLANE_ENABLE))
+		return;
+
 	intel_fb = kzalloc(sizeof(*intel_fb), GFP_KERNEL);
 	if (!intel_fb) {
 		DRM_DEBUG_KMS("failed to alloc fb\n");
···
 	}

 	fb = &intel_fb->base;
-
-	val = I915_READ(DSPCNTR(pipe));

 	if (INTEL_INFO(dev)->gen >= 4)
 		if (val & DISPPLANE_TILED)
···
 	struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);

-	WARN_ON(!in_irq());
+	WARN_ON(!in_interrupt());

 	if (crtc == NULL)
 		return;
···
 	drm_gem_object_reference(&obj->base);

 	crtc->primary->fb = fb;
+	update_state_fb(crtc->primary);

 	work->pending_flip_obj = obj;
···
 cleanup_pending:
 	atomic_dec(&intel_crtc->unpin_work_count);
 	crtc->primary->fb = old_fb;
+	update_state_fb(crtc->primary);
 	drm_gem_object_unreference(&work->old_fb_obj->base);
 	drm_gem_object_unreference(&obj->base);
 	mutex_unlock(&dev->struct_mutex);
···
 				      to_intel_crtc(c)->pipe);
 			drm_framebuffer_unreference(c->primary->fb);
 			c->primary->fb = NULL;
+			update_state_fb(c->primary);
 		}
 	}
 	mutex_unlock(&dev->struct_mutex);
+7-11
drivers/gpu/drm/i915/intel_fifo_underrun.c
···
 	return ret;
 }
 
-static bool
-__cpu_fifo_underrun_reporting_enabled(struct drm_i915_private *dev_priv,
-				      enum pipe pipe)
-{
-	struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
-	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-
-	return !intel_crtc->cpu_fifo_underrun_disabled;
-}
-
 /**
  * intel_set_pch_fifo_underrun_reporting - set PCH fifo underrun reporting state
  * @dev_priv: i915 device instance
···
 void intel_cpu_fifo_underrun_irq_handler(struct drm_i915_private *dev_priv,
 					 enum pipe pipe)
 {
+	struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe];
+
+	/* We may be called too early in init, thanks BIOS! */
+	if (crtc == NULL)
+		return;
+
 	/* GMCH can't disable fifo underruns, filter them. */
 	if (HAS_GMCH_DISPLAY(dev_priv->dev) &&
-	    !__cpu_fifo_underrun_reporting_enabled(dev_priv, pipe))
+	    to_intel_crtc(crtc)->cpu_fifo_underrun_disabled)
 		return;
 
 	if (intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, false))
+2-2
drivers/gpu/drm/i915/intel_sprite.c
···
 	drm_modeset_lock_all(dev);
 
 	plane = drm_plane_find(dev, set->plane_id);
-	if (!plane) {
+	if (!plane || plane->type != DRM_PLANE_TYPE_OVERLAY) {
 		ret = -ENOENT;
 		goto out_unlock;
 	}
···
 	drm_modeset_lock_all(dev);
 
 	plane = drm_plane_find(dev, get->plane_id);
-	if (!plane) {
+	if (!plane || plane->type != DRM_PLANE_TYPE_OVERLAY) {
 		ret = -ENOENT;
 		goto out_unlock;
 	}
+7-1
drivers/gpu/drm/i915/intel_uncore.c
···
 
 	/* We need to init first for ECOBUS access and then
 	 * determine later if we want to reinit, in case of MT access is
-	 * not working
+	 * not working. In this stage we don't know which flavour this
+	 * ivb is, so it is better to reset also the gen6 fw registers
+	 * before the ecobus check.
 	 */
+
+	__raw_i915_write32(dev_priv, FORCEWAKE, 0);
+	__raw_posting_read(dev_priv, ECOBUS);
+
 	fw_domain_init(dev_priv, FW_DOMAIN_ID_RENDER,
 		       FORCEWAKE_MT, FORCEWAKE_MT_ACK);
 
···
 git clone https://github.com/freedreno/envytools.git
 
 The rules-ng-ng source files this header was generated from are:
-- /home/robclark/src/freedreno/envytools/rnndb/msm.xml ( 676 bytes, from 2014-12-05 15:34:49)
-- /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2013-03-31 16:51:27)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20908 bytes, from 2014-12-08 16:13:00)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 2357 bytes, from 2014-12-08 16:13:00)
-- /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 27208 bytes, from 2015-01-13 23:56:11)
-- /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 11712 bytes, from 2013-08-17 17:13:43)
-- /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 344 bytes, from 2013-08-11 19:26:32)
-- /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2014-10-31 16:48:57)
-- /home/robclark/src/freedreno/envytools/rnndb/hdmi/qfprom.xml ( 600 bytes, from 2013-07-05 19:21:12)
-- /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 26848 bytes, from 2015-01-13 23:55:57)
-- /home/robclark/src/freedreno/envytools/rnndb/edp/edp.xml ( 8253 bytes, from 2014-12-08 16:13:00)
+- /local/mnt2/workspace2/sviau/envytools/rnndb/mdp/mdp5.xml ( 27229 bytes, from 2015-02-10 17:00:41)
+- /local/mnt2/workspace2/sviau/envytools/rnndb/freedreno_copyright.xml ( 1453 bytes, from 2014-06-02 18:31:15)
+- /local/mnt2/workspace2/sviau/envytools/rnndb/mdp/mdp_common.xml ( 2357 bytes, from 2015-01-23 16:20:19)
 
 Copyright (C) 2013-2015 by the following authors:
 - Rob Clark <robdclark@gmail.com> (robclark)
···
 		case 2: return (mdp5_cfg->lm.base[2]);
 		case 3: return (mdp5_cfg->lm.base[3]);
 		case 4: return (mdp5_cfg->lm.base[4]);
+		case 5: return (mdp5_cfg->lm.base[5]);
 		default: return INVALID_IDX(idx);
 	}
 }
+61-38
drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
···
 
 	/* current cursor being scanned out: */
 	struct drm_gem_object *scanout_bo;
-	uint32_t width;
-	uint32_t height;
+	uint32_t width, height;
+	uint32_t x, y;
 	} cursor;
 };
 #define to_mdp5_crtc(x) container_of(x, struct mdp5_crtc, base)
···
 	struct drm_plane *plane;
 	uint32_t flush_mask = 0;
 
-	/* we could have already released CTL in the disable path: */
-	if (!mdp5_crtc->ctl)
+	/* this should not happen: */
+	if (WARN_ON(!mdp5_crtc->ctl))
 		return;
 
 	drm_atomic_crtc_for_each_plane(plane, crtc) {
···
 
 	drm_atomic_crtc_for_each_plane(plane, crtc) {
 		mdp5_plane_complete_flip(plane);
+	}
+
+	if (mdp5_crtc->ctl && !crtc->state->enable) {
+		mdp5_ctl_release(mdp5_crtc->ctl);
+		mdp5_crtc->ctl = NULL;
 	}
 }
···
 	mdp5_crtc->event = crtc->state->event;
 	spin_unlock_irqrestore(&dev->event_lock, flags);
 
+	/*
+	 * If no CTL has been allocated in mdp5_crtc_atomic_check(),
+	 * it means we are trying to flush a CRTC whose state is disabled:
+	 * nothing else needs to be done.
+	 */
+	if (unlikely(!mdp5_crtc->ctl))
+		return;
+
 	blend_setup(crtc);
 	crtc_flush_all(crtc);
 	request_pending(crtc, PENDING_FLIP);
-
-	if (mdp5_crtc->ctl && !crtc->state->enable) {
-		mdp5_ctl_release(mdp5_crtc->ctl);
-		mdp5_crtc->ctl = NULL;
-	}
 }
 
 static int mdp5_crtc_set_property(struct drm_crtc *crtc,
···
 {
 	// XXX
 	return -EINVAL;
 }
+
+static void get_roi(struct drm_crtc *crtc, uint32_t *roi_w, uint32_t *roi_h)
+{
+	struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
+	uint32_t xres = crtc->mode.hdisplay;
+	uint32_t yres = crtc->mode.vdisplay;
+
+	/*
+	 * Cursor Region Of Interest (ROI) is a plane read from cursor
+	 * buffer to render. The ROI region is determined by the visibility of
+	 * the cursor point. In the default Cursor image the cursor point will
+	 * be at the top left of the cursor image, unless it is specified
+	 * otherwise using hotspot feature.
+	 *
+	 * If the cursor point reaches the right (xres - x < cursor.width) or
+	 * bottom (yres - y < cursor.height) boundary of the screen, then ROI
+	 * width and ROI height need to be evaluated to crop the cursor image
+	 * accordingly.
+	 * (xres-x) will be new cursor width when x > (xres - cursor.width)
+	 * (yres-y) will be new cursor height when y > (yres - cursor.height)
+	 */
+	*roi_w = min(mdp5_crtc->cursor.width, xres -
+			mdp5_crtc->cursor.x);
+	*roi_h = min(mdp5_crtc->cursor.height, yres -
+			mdp5_crtc->cursor.y);
+}
 
 static int mdp5_crtc_cursor_set(struct drm_crtc *crtc,
···
 	unsigned int depth;
 	enum mdp5_cursor_alpha cur_alpha = CURSOR_ALPHA_PER_PIXEL;
 	uint32_t flush_mask = mdp_ctl_flush_mask_cursor(0);
+	uint32_t roi_w, roi_h;
 	unsigned long flags;
 
 	if ((width > CURSOR_WIDTH) || (height > CURSOR_HEIGHT)) {
···
 	spin_lock_irqsave(&mdp5_crtc->cursor.lock, flags);
 	old_bo = mdp5_crtc->cursor.scanout_bo;
 
+	mdp5_crtc->cursor.scanout_bo = cursor_bo;
+	mdp5_crtc->cursor.width = width;
+	mdp5_crtc->cursor.height = height;
+
+	get_roi(crtc, &roi_w, &roi_h);
+
 	mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_STRIDE(lm), stride);
 	mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_FORMAT(lm),
 			MDP5_LM_CURSOR_FORMAT_FORMAT(CURSOR_FMT_ARGB8888));
···
 			MDP5_LM_CURSOR_IMG_SIZE_SRC_H(height) |
 			MDP5_LM_CURSOR_IMG_SIZE_SRC_W(width));
 	mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_SIZE(lm),
-			MDP5_LM_CURSOR_SIZE_ROI_H(height) |
-			MDP5_LM_CURSOR_SIZE_ROI_W(width));
+			MDP5_LM_CURSOR_SIZE_ROI_H(roi_h) |
+			MDP5_LM_CURSOR_SIZE_ROI_W(roi_w));
 	mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_BASE_ADDR(lm), cursor_addr);
 
-
 	blendcfg = MDP5_LM_CURSOR_BLEND_CONFIG_BLEND_EN;
-	blendcfg |= MDP5_LM_CURSOR_BLEND_CONFIG_BLEND_TRANSP_EN;
 	blendcfg |= MDP5_LM_CURSOR_BLEND_CONFIG_BLEND_ALPHA_SEL(cur_alpha);
 	mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_BLEND_CONFIG(lm), blendcfg);
 
-	mdp5_crtc->cursor.scanout_bo = cursor_bo;
-	mdp5_crtc->cursor.width = width;
-	mdp5_crtc->cursor.height = height;
 	spin_unlock_irqrestore(&mdp5_crtc->cursor.lock, flags);
 
 	ret = mdp5_ctl_set_cursor(mdp5_crtc->ctl, true);
···
 	struct mdp5_kms *mdp5_kms = get_kms(crtc);
 	struct mdp5_crtc *mdp5_crtc = to_mdp5_crtc(crtc);
 	uint32_t flush_mask = mdp_ctl_flush_mask_cursor(0);
-	uint32_t xres = crtc->mode.hdisplay;
-	uint32_t yres = crtc->mode.vdisplay;
 	uint32_t roi_w;
 	uint32_t roi_h;
 	unsigned long flags;
 
-	x = (x > 0) ? x : 0;
-	y = (y > 0) ? y : 0;
+	/* In case the CRTC is disabled, just drop the cursor update */
+	if (unlikely(!crtc->state->enable))
+		return 0;
 
-	/*
-	 * Cursor Region Of Interest (ROI) is a plane read from cursor
-	 * buffer to render. The ROI region is determined by the visiblity of
-	 * the cursor point. In the default Cursor image the cursor point will
-	 * be at the top left of the cursor image, unless it is specified
-	 * otherwise using hotspot feature.
-	 *
-	 * If the cursor point reaches the right (xres - x < cursor.width) or
-	 * bottom (yres - y < cursor.height) boundary of the screen, then ROI
-	 * width and ROI height need to be evaluated to crop the cursor image
-	 * accordingly.
-	 * (xres-x) will be new cursor width when x > (xres - cursor.width)
-	 * (yres-y) will be new cursor height when y > (yres - cursor.height)
-	 */
-	roi_w = min(mdp5_crtc->cursor.width, xres - x);
-	roi_h = min(mdp5_crtc->cursor.height, yres - y);
+	mdp5_crtc->cursor.x = x = max(x, 0);
+	mdp5_crtc->cursor.y = y = max(y, 0);
+
+	get_roi(crtc, &roi_w, &roi_h);
 
 	spin_lock_irqsave(&mdp5_crtc->cursor.lock, flags);
 	mdp5_write(mdp5_kms, REG_MDP5_LM_CURSOR_SIZE(mdp5_crtc->lm),
···
 static const struct drm_crtc_helper_funcs mdp5_crtc_helper_funcs = {
 	.mode_fixup = mdp5_crtc_mode_fixup,
 	.mode_set_nofb = mdp5_crtc_mode_set_nofb,
-	.prepare = mdp5_crtc_disable,
-	.commit = mdp5_crtc_enable,
+	.disable = mdp5_crtc_disable,
+	.enable = mdp5_crtc_enable,
 	.atomic_check = mdp5_crtc_atomic_check,
 	.atomic_begin = mdp5_crtc_atomic_begin,
 	.atomic_flush = mdp5_crtc_atomic_flush,
···
 	 * mark our set of crtc's as busy:
 	 */
 	ret = start_atomic(dev->dev_private, c->crtc_mask);
-	if (ret)
+	if (ret) {
+		kfree(c);
 		return ret;
+	}
 
 	/*
 	 * This is the point of no return - everything below never fails except
+1-1
drivers/gpu/drm/nouveau/nouveau_fbcon.c
···
 	nouveau_fbcon_zfill(dev, fbcon);
 
 	/* To allow resizeing without swapping buffers */
-	NV_INFO(drm, "allocated %dx%d fb: 0x%lx, bo %p\n",
+	NV_INFO(drm, "allocated %dx%d fb: 0x%llx, bo %p\n",
 		nouveau_fb->base.width, nouveau_fb->base.height,
 		nvbo->bo.offset, nvbo);
 
+4-2
drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
···
 
 	/* switch mmio to cpu's native endianness */
 #ifndef __BIG_ENDIAN
-	if (ioread32_native(map + 0x000004) != 0x00000000)
+	if (ioread32_native(map + 0x000004) != 0x00000000) {
 #else
-	if (ioread32_native(map + 0x000004) == 0x00000000)
+	if (ioread32_native(map + 0x000004) == 0x00000000) {
 #endif
 		iowrite32_native(0x01000001, map + 0x000004);
+		ioread32_native(map);
+	}
 
 	/* read boot0 and strapping information */
 	boot0 = ioread32_native(map + 0x000000);
···
 }
 
 void dce4_dp_audio_set_dto(struct radeon_device *rdev,
-	struct radeon_crtc *crtc, unsigned int clock)
+			   struct radeon_crtc *crtc, unsigned int clock)
 {
 	u32 value;
 
···
 	 * is the numerator, DCCG_AUDIO_DTOx_MODULE is the denominator
 	 */
 	WREG32(DCCG_AUDIO_DTO1_PHASE, 24000);
-	WREG32(DCCG_AUDIO_DTO1_MODULE, rdev->clock.max_pixel_clock * 10);
+	WREG32(DCCG_AUDIO_DTO1_MODULE, clock);
 }
 
 void dce4_set_vbi_packet(struct drm_encoder *encoder, u32 offset)
···
 	struct drm_device *dev = encoder->dev;
 	struct radeon_device *rdev = dev->dev_private;
 
-	WREG32(HDMI_INFOFRAME_CONTROL0 + offset,
-		HDMI_AUDIO_INFO_SEND | /* enable audio info frames (frames won't be set until audio is enabled) */
-		HDMI_AUDIO_INFO_CONT); /* required for audio info values to be updated */
-
 	WREG32(AFMT_INFOFRAME_CONTROL0 + offset,
 		AFMT_AUDIO_INFO_UPDATE); /* required for audio info values to be updated */
-
-	WREG32(HDMI_INFOFRAME_CONTROL1 + offset,
-		HDMI_AUDIO_INFO_LINE(2)); /* anything other than 0 */
-
-	WREG32(HDMI_AUDIO_PACKET_CONTROL + offset,
-		HDMI_AUDIO_DELAY_EN(1) | /* set the default audio delay */
-		HDMI_AUDIO_PACKETS_PER_LINE(3)); /* should be suffient for all audio modes and small enough for all hblanks */
 
 	WREG32(AFMT_60958_0 + offset,
 		AFMT_60958_CS_CHANNEL_NUMBER_L(1));
···
 	if (!dig || !dig->afmt)
 		return;
 
-	/* Silent, r600_hdmi_enable will raise WARN for us */
-	if (enable && dig->afmt->enabled)
-		return;
-	if (!enable && !dig->afmt->enabled)
-		return;
+	if (enable) {
+		WREG32(HDMI_INFOFRAME_CONTROL1 + dig->afmt->offset,
+			HDMI_AUDIO_INFO_LINE(2)); /* anything other than 0 */
 
-	if (!enable && dig->afmt->pin) {
-		radeon_audio_enable(rdev, dig->afmt->pin, 0);
-		dig->afmt->pin = NULL;
+		WREG32(HDMI_AUDIO_PACKET_CONTROL + dig->afmt->offset,
+			HDMI_AUDIO_DELAY_EN(1) | /* set the default audio delay */
+			HDMI_AUDIO_PACKETS_PER_LINE(3)); /* should be suffient for all audio modes and small enough for all hblanks */
+
+		WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset,
+			HDMI_AUDIO_INFO_SEND | /* enable audio info frames (frames won't be set until audio is enabled) */
+			HDMI_AUDIO_INFO_CONT); /* required for audio info values to be updated */
+	} else {
+		WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset, 0);
 	}
 
 	dig->afmt->enabled = enable;
···
 		enable ? "En" : "Dis", dig->afmt->offset, radeon_encoder->encoder_id);
 }
 
-void evergreen_enable_dp_audio_packets(struct drm_encoder *encoder, bool enable)
+void evergreen_dp_enable(struct drm_encoder *encoder, bool enable)
 {
 	struct drm_device *dev = encoder->dev;
 	struct radeon_device *rdev = dev->dev_private;
 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
 	struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
-	uint32_t offset;
 
 	if (!dig || !dig->afmt)
 		return;
-
-	offset = dig->afmt->offset;
 
 	if (enable) {
 		struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
···
 		struct radeon_connector_atom_dig *dig_connector;
 		uint32_t val;
 
-		if (dig->afmt->enabled)
-			return;
-
-		WREG32(EVERGREEN_DP_SEC_TIMESTAMP + offset, EVERGREEN_DP_SEC_TIMESTAMP_MODE(1));
+		WREG32(EVERGREEN_DP_SEC_TIMESTAMP + dig->afmt->offset,
+			EVERGREEN_DP_SEC_TIMESTAMP_MODE(1));
 
 		if (radeon_connector->con_priv) {
 			dig_connector = radeon_connector->con_priv;
-			val = RREG32(EVERGREEN_DP_SEC_AUD_N + offset);
+			val = RREG32(EVERGREEN_DP_SEC_AUD_N + dig->afmt->offset);
 			val &= ~EVERGREEN_DP_SEC_N_BASE_MULTIPLE(0xf);
 
 			if (dig_connector->dp_clock == 162000)
···
 			else
 				val |= EVERGREEN_DP_SEC_N_BASE_MULTIPLE(5);
 
-			WREG32(EVERGREEN_DP_SEC_AUD_N + offset, val);
+			WREG32(EVERGREEN_DP_SEC_AUD_N + dig->afmt->offset, val);
 		}
 
-		WREG32(EVERGREEN_DP_SEC_CNTL + offset,
+		WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset,
 			EVERGREEN_DP_SEC_ASP_ENABLE |	/* Audio packet transmission */
 			EVERGREEN_DP_SEC_ATP_ENABLE |	/* Audio timestamp packet transmission */
 			EVERGREEN_DP_SEC_AIP_ENABLE |	/* Audio infoframe packet transmission */
 			EVERGREEN_DP_SEC_STREAM_ENABLE);	/* Master enable for secondary stream engine */
-		radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
 	} else {
-		if (!dig->afmt->enabled)
-			return;
-
-		WREG32(EVERGREEN_DP_SEC_CNTL + offset, 0);
-		radeon_audio_enable(rdev, dig->afmt->pin, 0);
+		WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 0);
 	}
 
 	dig->afmt->enabled = enable;
+4
drivers/gpu/drm/radeon/r100.c
···
 		tmp |= RADEON_FP2_DETECT_MASK;
 	}
 	WREG32(RADEON_GEN_INT_CNTL, tmp);
+
+	/* read back to post the write */
+	RREG32(RADEON_GEN_INT_CNTL);
+
 	return 0;
 }
 
···
 	u32 ring = RADEON_CS_RING_GFX;
 	s32 priority = 0;
 
+	INIT_LIST_HEAD(&p->validated);
+
 	if (!cs->num_chunks) {
 		return 0;
 	}
+
 	/* get chunks */
-	INIT_LIST_HEAD(&p->validated);
 	p->idx = 0;
 	p->ib.sa_bo = NULL;
 	p->const_ib.sa_bo = NULL;
+44-22
drivers/gpu/drm/radeon/radeon_fence.c
···
 	return test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->base.flags);
 }
 
+struct radeon_wait_cb {
+	struct fence_cb base;
+	struct task_struct *task;
+};
+
+static void
+radeon_fence_wait_cb(struct fence *fence, struct fence_cb *cb)
+{
+	struct radeon_wait_cb *wait =
+		container_of(cb, struct radeon_wait_cb, base);
+
+	wake_up_process(wait->task);
+}
+
 static signed long radeon_fence_default_wait(struct fence *f, bool intr,
 					     signed long t)
 {
 	struct radeon_fence *fence = to_radeon_fence(f);
 	struct radeon_device *rdev = fence->rdev;
-	bool signaled;
+	struct radeon_wait_cb cb;
 
-	fence_enable_sw_signaling(&fence->base);
+	cb.task = current;
 
-	/*
-	 * This function has to return -EDEADLK, but cannot hold
-	 * exclusive_lock during the wait because some callers
-	 * may already hold it. This means checking needs_reset without
-	 * lock, and not fiddling with any gpu internals.
-	 *
-	 * The callback installed with fence_enable_sw_signaling will
-	 * run before our wait_event_*timeout call, so we will see
-	 * both the signaled fence and the changes to needs_reset.
-	 */
+	if (fence_add_callback(f, &cb.base, radeon_fence_wait_cb))
+		return t;
 
-	if (intr)
-		t = wait_event_interruptible_timeout(rdev->fence_queue,
-			((signaled = radeon_test_signaled(fence)) ||
-			 rdev->needs_reset), t);
-	else
-		t = wait_event_timeout(rdev->fence_queue,
-			((signaled = radeon_test_signaled(fence)) ||
-			 rdev->needs_reset), t);
+	while (t > 0) {
+		if (intr)
+			set_current_state(TASK_INTERRUPTIBLE);
+		else
+			set_current_state(TASK_UNINTERRUPTIBLE);
+
+		/*
+		 * radeon_test_signaled must be called after
+		 * set_current_state to prevent a race with wake_up_process
+		 */
+		if (radeon_test_signaled(fence))
+			break;
+
+		if (rdev->needs_reset) {
+			t = -EDEADLK;
+			break;
+		}
+
+		t = schedule_timeout(t);
+
+		if (t > 0 && intr && signal_pending(current))
+			t = -ERESTARTSYS;
+	}
+
+	__set_current_state(TASK_RUNNING);
+	fence_remove_callback(f, &cb.base);
 
-	if (t > 0 && !signaled)
-		return -EDEADLK;
 	return t;
 }
 
···
 	it = interval_tree_iter_first(&rmn->objects, start, end);
 	while (it) {
 		struct radeon_bo *bo;
-		struct fence *fence;
 		int r;
 
 		bo = container_of(it, struct radeon_bo, mn_it);
···
 			continue;
 		}
 
-		fence = reservation_object_get_excl(bo->tbo.resv);
-		if (fence) {
-			r = radeon_fence_wait((struct radeon_fence *)fence, false);
-			if (r)
-				DRM_ERROR("(%d) failed to wait for user bo\n", r);
-		}
+		r = reservation_object_wait_timeout_rcu(bo->tbo.resv, true,
+			false, MAX_SCHEDULE_TIMEOUT);
+		if (r)
+			DRM_ERROR("(%d) failed to wait for user bo\n", r);
 
 		radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU);
 		r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
-11
drivers/gpu/drm/radeon/radeon_object.c
···
 		else
 			rbo->placements[i].lpfn = 0;
 	}
-
-	/*
-	 * Use two-ended allocation depending on the buffer size to
-	 * improve fragmentation quality.
-	 * 512kb was measured as the most optimal number.
-	 */
-	if (rbo->tbo.mem.size > 512 * 1024) {
-		for (i = 0; i < c; i++) {
-			rbo->placements[i].flags |= TTM_PL_FLAG_TOPDOWN;
-		}
-	}
 }
 
 int radeon_bo_create(struct radeon_device *rdev,
+17-5
drivers/gpu/drm/radeon/radeon_pm.c
···
 	radeon_pm_compute_clocks(rdev);
 }
 
-static struct radeon_ps *radeon_dpm_pick_power_state(struct radeon_device *rdev,
-						     enum radeon_pm_state_type dpm_state)
+static bool radeon_dpm_single_display(struct radeon_device *rdev)
 {
-	int i;
-	struct radeon_ps *ps;
-	u32 ui_class;
 	bool single_display = (rdev->pm.dpm.new_active_crtc_count < 2) ?
 		true : false;
 
···
 	 */
 	if (single_display && (r600_dpm_get_vrefresh(rdev) >= 120))
 		single_display = false;
+
+	return single_display;
+}
+
+static struct radeon_ps *radeon_dpm_pick_power_state(struct radeon_device *rdev,
+						     enum radeon_pm_state_type dpm_state)
+{
+	int i;
+	struct radeon_ps *ps;
+	u32 ui_class;
+	bool single_display = radeon_dpm_single_display(rdev);
 
 	/* certain older asics have a separare 3D performance state,
 	 * so try that first if the user selected performance
···
 	struct radeon_ps *ps;
 	enum radeon_pm_state_type dpm_state;
 	int ret;
+	bool single_display = radeon_dpm_single_display(rdev);
 
 	/* if dpm init failed */
 	if (!rdev->pm.dpm_enabled)
···
 	if (rdev->pm.dpm.current_ps == rdev->pm.dpm.requested_ps) {
 		/* vce just modifies an existing state so force a change */
 		if (ps->vce_active != rdev->pm.dpm.vce_active)
+			goto force;
+		/* user has made a display change (such as timing) */
+		if (rdev->pm.dpm.single_display != single_display)
 			goto force;
 		if ((rdev->family < CHIP_BARTS) || (rdev->flags & RADEON_IS_IGP)) {
 			/* for pre-BTC and APUs if the num crtcs changed but state is the same,
···
 
 	rdev->pm.dpm.current_active_crtcs = rdev->pm.dpm.new_active_crtcs;
 	rdev->pm.dpm.current_active_crtc_count = rdev->pm.dpm.new_active_crtc_count;
+	rdev->pm.dpm.single_display = single_display;
 
 	/* wait for the rings to drain */
 	for (i = 0; i < RADEON_NUM_RINGS; i++) {
+1-1
drivers/gpu/drm/radeon/radeon_ring.c
···
 	seq_printf(m, "%u free dwords in ring\n", ring->ring_free_dw);
 	seq_printf(m, "%u dwords in ring\n", count);
 
-	if (!ring->ready)
+	if (!ring->ring)
 		return 0;
 
 	/* print 8 dw before current rptr as often it's the last executed
+4
drivers/gpu/drm/radeon/radeon_ttm.c
···
 	enum dma_data_direction direction = write ?
 		DMA_BIDIRECTIONAL : DMA_TO_DEVICE;
 
+	/* double check that we don't free the table twice */
+	if (!ttm->sg->sgl)
+		return;
+
 	/* free the sg table and pages again */
 	dma_unmap_sg(rdev->dev, ttm->sg->sgl, ttm->sg->nents, direction);
 
···
 	if (iadc->poll_eoc) {
 		ret = iadc_poll_wait_eoc(iadc, wait);
 	} else {
-		ret = wait_for_completion_timeout(&iadc->complete, wait);
+		ret = wait_for_completion_timeout(&iadc->complete,
+						  usecs_to_jiffies(wait));
 		if (!ret)
 			ret = -ETIMEDOUT;
 		else
···
 	iio_trigger_set_drvdata(adis->trig, adis);
 	ret = iio_trigger_register(adis->trig);
 
-	indio_dev->trig = adis->trig;
+	indio_dev->trig = iio_trigger_get(adis->trig);
 	if (ret)
 		goto error_free_irq;
 
+35-27
drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
···
 	}
 }
 
-static int inv_mpu6050_write_fsr(struct inv_mpu6050_state *st, int fsr)
+static int inv_mpu6050_write_gyro_scale(struct inv_mpu6050_state *st, int val)
 {
-	int result;
+	int result, i;
 	u8 d;
 
-	if (fsr < 0 || fsr > INV_MPU6050_MAX_GYRO_FS_PARAM)
-		return -EINVAL;
-	if (fsr == st->chip_config.fsr)
-		return 0;
+	for (i = 0; i < ARRAY_SIZE(gyro_scale_6050); ++i) {
+		if (gyro_scale_6050[i] == val) {
+			d = (i << INV_MPU6050_GYRO_CONFIG_FSR_SHIFT);
+			result = inv_mpu6050_write_reg(st,
+					st->reg->gyro_config, d);
+			if (result)
+				return result;
 
-	d = (fsr << INV_MPU6050_GYRO_CONFIG_FSR_SHIFT);
-	result = inv_mpu6050_write_reg(st, st->reg->gyro_config, d);
-	if (result)
-		return result;
-	st->chip_config.fsr = fsr;
+			st->chip_config.fsr = i;
+			return 0;
+		}
+	}
 
-	return 0;
+	return -EINVAL;
 }
 
-static int inv_mpu6050_write_accel_fs(struct inv_mpu6050_state *st, int fs)
+static int inv_mpu6050_write_accel_scale(struct inv_mpu6050_state *st, int val)
 {
-	int result;
+	int result, i;
 	u8 d;
 
-	if (fs < 0 || fs > INV_MPU6050_MAX_ACCL_FS_PARAM)
-		return -EINVAL;
-	if (fs == st->chip_config.accl_fs)
-		return 0;
+	for (i = 0; i < ARRAY_SIZE(accel_scale); ++i) {
+		if (accel_scale[i] == val) {
+			d = (i << INV_MPU6050_ACCL_CONFIG_FSR_SHIFT);
+			result = inv_mpu6050_write_reg(st,
+					st->reg->accl_config, d);
+			if (result)
+				return result;
 
-	d = (fs << INV_MPU6050_ACCL_CONFIG_FSR_SHIFT);
-	result = inv_mpu6050_write_reg(st, st->reg->accl_config, d);
-	if (result)
-		return result;
-	st->chip_config.accl_fs = fs;
+			st->chip_config.accl_fs = i;
+			return 0;
+		}
+	}
 
-	return 0;
+	return -EINVAL;
 }
 
 static int inv_mpu6050_write_raw(struct iio_dev *indio_dev,
···
 	case IIO_CHAN_INFO_SCALE:
 		switch (chan->type) {
 		case IIO_ANGL_VEL:
-			result = inv_mpu6050_write_fsr(st, val);
+			result = inv_mpu6050_write_gyro_scale(st, val2);
 			break;
 		case IIO_ACCEL:
-			result = inv_mpu6050_write_accel_fs(st, val);
+			result = inv_mpu6050_write_accel_scale(st, val2);
 			break;
 		default:
 			result = -EINVAL;
···
 
 	i2c_set_clientdata(client, indio_dev);
 	indio_dev->dev.parent = &client->dev;
-	indio_dev->name = id->name;
+	/* id will be NULL when enumerated via ACPI */
+	if (id)
+		indio_dev->name = (char *)id->name;
+	else
+		indio_dev->name = (char *)dev_name(&client->dev);
 	indio_dev->channels = inv_mpu_channels;
 	indio_dev->num_channels = ARRAY_SIZE(inv_mpu_channels);
 
+14-11
drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
···
 #include <linux/poll.h>
 #include "inv_mpu_iio.h"
 
+static void inv_clear_kfifo(struct inv_mpu6050_state *st)
+{
+	unsigned long flags;
+
+	/* take the spin lock sem to avoid interrupt kick in */
+	spin_lock_irqsave(&st->time_stamp_lock, flags);
+	kfifo_reset(&st->timestamps);
+	spin_unlock_irqrestore(&st->time_stamp_lock, flags);
+}
+
 int inv_reset_fifo(struct iio_dev *indio_dev)
 {
 	int result;
···
 			INV_MPU6050_BIT_FIFO_RST);
 	if (result)
 		goto reset_fifo_fail;
+
+	/* clear timestamps fifo */
+	inv_clear_kfifo(st);
+
 	/* enable interrupt */
 	if (st->chip_config.accl_fifo_enable ||
 	    st->chip_config.gyro_fifo_enable) {
···
 			INV_MPU6050_BIT_DATA_RDY_EN);
 
 	return result;
-}
-
-static void inv_clear_kfifo(struct inv_mpu6050_state *st)
-{
-	unsigned long flags;
-
-	/* take the spin lock sem to avoid interrupt kick in */
-	spin_lock_irqsave(&st->time_stamp_lock, flags);
-	kfifo_reset(&st->timestamps);
-	spin_unlock_irqrestore(&st->time_stamp_lock, flags);
 }
 
 /**
···
 flush_fifo:
 	/* Flush HW and SW FIFOs. */
 	inv_reset_fifo(indio_dev);
-	inv_clear_kfifo(st);
 	mutex_unlock(&indio_dev->mlock);
 	iio_trigger_notify_done(indio_dev->trig);
 
+1-1
drivers/iio/imu/kmx61.c
···
 		base = KMX61_MAG_XOUT_L;
 
 	mutex_lock(&data->lock);
-	for_each_set_bit(bit, indio_dev->buffer->scan_mask,
+	for_each_set_bit(bit, indio_dev->active_scan_mask,
 			 indio_dev->masklength) {
 		ret = kmx61_read_measurement(data, base, bit);
 		if (ret < 0) {
+3-2
drivers/iio/industrialio-core.c
···
  * @attr_list: List of IIO device attributes
  *
  * This function frees the memory allocated for each of the IIO device
- * attributes in the list. Note: if you want to reuse the list after calling
- * this function you have to reinitialize it using INIT_LIST_HEAD().
+ * attributes in the list.
  */
 void iio_free_chan_devattr_list(struct list_head *attr_list)
 {
···
 
 	list_for_each_entry_safe(p, n, attr_list, l) {
 		kfree(p->dev_attr.attr.name);
+		list_del(&p->l);
 		kfree(p);
 	}
 }
···
 
 	iio_free_chan_devattr_list(&indio_dev->channel_attr_list);
 	kfree(indio_dev->chan_attr_group.attrs);
+	indio_dev->chan_attr_group.attrs = NULL;
 }
 
 static void iio_dev_release(struct device *device)
drivers/iio/light/Kconfig
···
 config GP2AP020A00F
 	tristate "Sharp GP2AP020A00F Proximity/ALS sensor"
 	depends on I2C
+	select REGMAP_I2C
 	select IIO_BUFFER
 	select IIO_TRIGGERED_BUFFER
 	select IRQ_WORK
···
 config JSA1212
 	tristate "JSA1212 ALS and proximity sensor driver"
 	depends on I2C
+	select REGMAP_I2C
 	help
 	  Say Y here if you want to build a IIO driver for JSA1212
 	  proximity & ALS sensor device.
+2
drivers/iio/magnetometer/Kconfig
···
 
 config AK09911
 	tristate "Asahi Kasei AK09911 3-axis Compass"
+	depends on I2C
+	depends on GPIOLIB
 	select AK8975
 	help
 	  Deprecated: AK09911 is now supported by AK8975 driver.
drivers/infiniband/core/umem.c
···
 	if (dmasync)
 		dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs);
 
+	/*
+	 * If the combination of the addr and size requested for this memory
+	 * region causes an integer overflow, return error.
+	 */
+	if ((PAGE_ALIGN(addr + size) <= size) ||
+	    (PAGE_ALIGN(addr + size) <= addr))
+		return ERR_PTR(-EINVAL);
+
 	if (!can_do_mlock())
 		return ERR_PTR(-EPERM);
 
+16-4
drivers/infiniband/hw/mlx4/mad.c
···
 #define GUID_TBL_BLK_NUM_ENTRIES 8
 #define GUID_TBL_BLK_SIZE (GUID_TBL_ENTRY_SIZE * GUID_TBL_BLK_NUM_ENTRIES)
 
+/* Counters should saturate once they reach their maximum value */
+#define ASSIGN_32BIT_COUNTER(counter, value) do {	\
+	if ((value) > U32_MAX)				\
+		counter = cpu_to_be32(U32_MAX);		\
+	else						\
+		counter = cpu_to_be32(value);		\
+} while (0)
+
 struct mlx4_mad_rcv_buf {
 	struct ib_grh grh;
 	u8 payload[256];
···
 static void edit_counter(struct mlx4_counter *cnt,
 			 struct ib_pma_portcounters *pma_cnt)
 {
-	pma_cnt->port_xmit_data = cpu_to_be32((be64_to_cpu(cnt->tx_bytes) >> 2));
-	pma_cnt->port_rcv_data = cpu_to_be32((be64_to_cpu(cnt->rx_bytes) >> 2));
-	pma_cnt->port_xmit_packets = cpu_to_be32(be64_to_cpu(cnt->tx_frames));
-	pma_cnt->port_rcv_packets = cpu_to_be32(be64_to_cpu(cnt->rx_frames));
+	ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_data,
+			     (be64_to_cpu(cnt->tx_bytes) >> 2));
+	ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_data,
+			     (be64_to_cpu(cnt->rx_bytes) >> 2));
+	ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_packets,
+			     be64_to_cpu(cnt->tx_frames));
+	ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_packets,
+			     be64_to_cpu(cnt->rx_frames));
 }
 
 static int iboe_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
+5-1
drivers/infiniband/hw/mlx4/main.c
···
 	spin_lock_bh(&ibdev->iboe.lock);
 	for (i = 0; i < MLX4_MAX_PORTS; ++i) {
 		struct net_device *curr_netdev = ibdev->iboe.netdevs[i];
+		enum ib_port_state curr_port_state;
 
-		enum ib_port_state curr_port_state =
+		if (!curr_netdev)
+			continue;
+
+		curr_port_state =
 			(netif_running(curr_netdev) &&
 			 netif_carrier_ok(curr_netdev)) ?
 			IB_PORT_ACTIVE : IB_PORT_DOWN;
drivers/input/mouse/synaptics.c
···
 #define X_MAX_POSITIVE 8176
 #define Y_MAX_POSITIVE 8176
 
-/* maximum ABS_MT_POSITION displacement (in mm) */
-#define DMAX 10
-
 /*****************************************************************************
  *	Stuff we need even when we do not want native Synaptics support
  ****************************************************************************/
···
 
 static bool cr48_profile_sensor;
 
+#define ANY_BOARD_ID 0
 struct min_max_quirk {
 	const char * const *pnp_ids;
+	struct {
+		unsigned long int min, max;
+	} board_id;
 	int x_min, x_max, y_min, y_max;
 };
 
 static const struct min_max_quirk min_max_pnpid_table[] = {
 	{
 		(const char * const []){"LEN0033", NULL},
+		{ANY_BOARD_ID, ANY_BOARD_ID},
 		1024, 5052, 2258, 4832
 	},
 	{
-		(const char * const []){"LEN0035", "LEN0042", NULL},
+		(const char * const []){"LEN0042", NULL},
+		{ANY_BOARD_ID, ANY_BOARD_ID},
 		1232, 5710, 1156, 4696
 	},
 	{
 		(const char * const []){"LEN0034", "LEN0036", "LEN0037",
 					"LEN0039", "LEN2002", "LEN2004",
 					NULL},
+		{ANY_BOARD_ID, 2961},
 		1024, 5112, 2024, 4832
 	},
 	{
 		(const char * const []){"LEN2001", NULL},
+		{ANY_BOARD_ID, ANY_BOARD_ID},
 		1024, 5022, 2508, 4832
 	},
 	{
 		(const char * const []){"LEN2006", NULL},
+		{2691, 2691},
+		1024, 5045, 2457, 4832
+	},
+	{
+		(const char * const []){"LEN2006", NULL},
+		{ANY_BOARD_ID, ANY_BOARD_ID},
 		1264, 5675, 1171, 4688
 	},
 	{ }
···
 	"LEN0041",
 	"LEN0042", /* Yoga */
 	"LEN0045",
-	"LEN0046",
 	"LEN0047",
-	"LEN0048",
 	"LEN0049",
 	"LEN2000",
 	"LEN2001", /* Edge E431 */
···
 	"LEN2003",
 	"LEN2004", /* L440 */
 	"LEN2005",
-	"LEN2006",
+	"LEN2006", /* Edge E440/E540 */
 	"LEN2007",
 	"LEN2008",
 	"LEN2009",
···
 	return 0;
 }
 
+static int synaptics_more_extended_queries(struct psmouse *psmouse)
+{
+	struct synaptics_data *priv = psmouse->private;
+	unsigned char buf[3];
+
+	if (synaptics_send_cmd(psmouse, SYN_QUE_MEXT_CAPAB_10, buf))
+		return -1;
+
+	priv->ext_cap_10 = (buf[0] << 16) | (buf[1] << 8) | buf[2];
+
+	return 0;
+}
+
 /*
- * Read the board id from the touchpad
+ * Read the board id and the "More Extended Queries" from the touchpad
  * The board id is encoded in the "QUERY MODES" response
  */
-static int synaptics_board_id(struct psmouse *psmouse)
+static int synaptics_query_modes(struct psmouse *psmouse)
 {
 	struct synaptics_data *priv = psmouse->private;
 	unsigned char bid[3];
 
+	/* firmwares prior to 7.5 have no board_id encoded */
+	if (SYN_ID_FULL(priv->identity) < 0x705)
+		return 0;
+
 	if (synaptics_send_cmd(psmouse, SYN_QUE_MODES, bid))
 		return -1;
 	priv->board_id = ((bid[0] & 0xfc) << 6) | bid[1];
+
+	if (SYN_MEXT_CAP_BIT(bid[0]))
+		return synaptics_more_extended_queries(psmouse);
+
 	return 0;
 }
···
 {
 	struct synaptics_data *priv = psmouse->private;
 	unsigned char resp[3];
-	int i;
 
 	if (SYN_ID_MAJOR(priv->identity) < 4)
 		return 0;
···
 		}
 	}
 
-	for (i = 0; min_max_pnpid_table[i].pnp_ids; i++) {
-		if (psmouse_matches_pnp_id(psmouse,
-					   min_max_pnpid_table[i].pnp_ids)) {
-			priv->x_min = min_max_pnpid_table[i].x_min;
-			priv->x_max = min_max_pnpid_table[i].x_max;
-			priv->y_min = min_max_pnpid_table[i].y_min;
-			priv->y_max = min_max_pnpid_table[i].y_max;
-			return 0;
-		}
-	}
-
 	if (SYN_EXT_CAP_REQUESTS(priv->capabilities) >= 5 &&
 	    SYN_CAP_MAX_DIMENSIONS(priv->ext_cap_0c)) {
 		if (synaptics_send_cmd(psmouse, SYN_QUE_EXT_MAX_COORDS, resp)) {
···
 		} else {
 			priv->x_max = (resp[0] << 5) | ((resp[1] & 0x0f) << 1);
 			priv->y_max = (resp[2] << 5) | ((resp[1] & 0xf0) >> 3);
+			psmouse_info(psmouse,
+				     "queried max coordinates: x [..%d], y [..%d]\n",
+				     priv->x_max, priv->y_max);
 		}
 	}
 
-	if (SYN_EXT_CAP_REQUESTS(priv->capabilities) >= 7 &&
-	    SYN_CAP_MIN_DIMENSIONS(priv->ext_cap_0c)) {
+	if (SYN_CAP_MIN_DIMENSIONS(priv->ext_cap_0c) &&
+	    (SYN_EXT_CAP_REQUESTS(priv->capabilities) >= 7 ||
+	     /*
+	      * Firmware v8.1 does not report proper number of extended
+	      * capabilities, but has been proven to report correct min
+	      * coordinates.
+	      */
+	     SYN_ID_FULL(priv->identity) == 0x801)) {
 		if (synaptics_send_cmd(psmouse, SYN_QUE_EXT_MIN_COORDS, resp)) {
 			psmouse_warn(psmouse,
 				     "device claims to have min coordinates query, but I'm not able to read it.\n");
 		} else {
 			priv->x_min = (resp[0] << 5) | ((resp[1] & 0x0f) << 1);
 			priv->y_min = (resp[2] << 5) | ((resp[1] & 0xf0) >> 3);
+			psmouse_info(psmouse,
+				     "queried min coordinates: x [%d..], y [%d..]\n",
+				     priv->x_min, priv->y_min);
 		}
 	}
 
 	return 0;
+}
+
+/*
+ * Apply quirk(s) if the hardware matches
+ */
+
+static void synaptics_apply_quirks(struct psmouse *psmouse)
+{
+	struct synaptics_data *priv = psmouse->private;
+	int i;
+
+	for (i = 0; min_max_pnpid_table[i].pnp_ids; i++) {
+		if (!psmouse_matches_pnp_id(psmouse,
+					    min_max_pnpid_table[i].pnp_ids))
+			continue;
+
+		if (min_max_pnpid_table[i].board_id.min != ANY_BOARD_ID &&
+		    priv->board_id < min_max_pnpid_table[i].board_id.min)
+			continue;
+
+		if (min_max_pnpid_table[i].board_id.max != ANY_BOARD_ID &&
+		    priv->board_id > min_max_pnpid_table[i].board_id.max)
+			continue;
+
+		priv->x_min = min_max_pnpid_table[i].x_min;
+		priv->x_max = min_max_pnpid_table[i].x_max;
+		priv->y_min = min_max_pnpid_table[i].y_min;
+		priv->y_max = min_max_pnpid_table[i].y_max;
+		psmouse_info(psmouse,
+			     "quirked min/max coordinates: x [%d..%d], y [%d..%d]\n",
+			     priv->x_min, priv->x_max,
+			     priv->y_min, priv->y_max);
+		break;
+	}
 }
 
 static int synaptics_query_hardware(struct psmouse *psmouse)
···
 		return -1;
 	if (synaptics_firmware_id(psmouse))
 		return -1;
-	if (synaptics_board_id(psmouse))
+	if (synaptics_query_modes(psmouse))
 		return -1;
 	if (synaptics_capability(psmouse))
 		return -1;
 	if (synaptics_resolution(psmouse))
 		return -1;
+
+	synaptics_apply_quirks(psmouse);
 
 	return 0;
 }
···
 	return (buf[0] & 0xFC) == 0x84 && (buf[3] & 0xCC) == 0xC4;
 }
 
-static void synaptics_pass_pt_packet(struct serio *ptport, unsigned char *packet)
+static void synaptics_pass_pt_packet(struct psmouse *psmouse,
+				     struct serio *ptport,
+				     unsigned char *packet)
 {
+	struct synaptics_data *priv = psmouse->private;
 	struct psmouse *child = serio_get_drvdata(ptport);
 
 	if (child && child->state == PSMOUSE_ACTIVATED) {
-		serio_interrupt(ptport, packet[1], 0);
+		serio_interrupt(ptport, packet[1] | priv->pt_buttons, 0);
 		serio_interrupt(ptport, packet[4], 0);
 		serio_interrupt(ptport, packet[5], 0);
 		if (child->pktsize == 4)
 			serio_interrupt(ptport, packet[2], 0);
-	} else
+	} else {
 		serio_interrupt(ptport, packet[1], 0);
+	}
 }
 
 static void synaptics_pt_activate(struct psmouse *psmouse)
···
 	default:
 		break;
 	}
+}
+
+static void synaptics_parse_ext_buttons(const unsigned char buf[],
+					struct synaptics_data *priv,
+					struct synaptics_hw_state *hw)
+{
+	unsigned int ext_bits =
+		(SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) + 1) >> 1;
+	unsigned int ext_mask = GENMASK(ext_bits - 1, 0);
+
+	hw->ext_buttons = buf[4] & ext_mask;
+	hw->ext_buttons |= (buf[5] & ext_mask) << ext_bits;
 }
 
 static bool is_forcepad;
···
 			hw->down = ((buf[0] ^ buf[3]) & 0x02) ? 1 : 0;
 		}
 
-		if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) &&
+		if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) > 0 &&
 		    ((buf[0] ^ buf[3]) & 0x02)) {
-			switch (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) & ~0x01) {
-			default:
-				/*
-				 * if nExtBtn is greater than 8 it should be
-				 * considered invalid and treated as 0
-				 */
-				break;
-			case 8:
-				hw->ext_buttons |= ((buf[5] & 0x08)) ? 0x80 : 0;
-				hw->ext_buttons |= ((buf[4] & 0x08)) ? 0x40 : 0;
-			case 6:
-				hw->ext_buttons |= ((buf[5] & 0x04)) ? 0x20 : 0;
-				hw->ext_buttons |= ((buf[4] & 0x04)) ? 0x10 : 0;
-			case 4:
-				hw->ext_buttons |= ((buf[5] & 0x02)) ? 0x08 : 0;
-				hw->ext_buttons |= ((buf[4] & 0x02)) ? 0x04 : 0;
-			case 2:
-				hw->ext_buttons |= ((buf[5] & 0x01)) ? 0x02 : 0;
-				hw->ext_buttons |= ((buf[4] & 0x01)) ? 0x01 : 0;
-			}
+			synaptics_parse_ext_buttons(buf, priv, hw);
 		}
 	} else {
 		hw->x = (((buf[1] & 0x1f) << 8) | buf[2]);
···
 	}
 }
 
+static void synaptics_report_ext_buttons(struct psmouse *psmouse,
+					 const struct synaptics_hw_state *hw)
+{
+	struct input_dev *dev = psmouse->dev;
+	struct synaptics_data *priv = psmouse->private;
+	int ext_bits = (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) + 1) >> 1;
+	char buf[6] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
+	int i;
+
+	if (!SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap))
+		return;
+
+	/* Bug in FW 8.1, buttons are reported only when ExtBit is 1 */
+	if (SYN_ID_FULL(priv->identity) == 0x801 &&
+	    !((psmouse->packet[0] ^ psmouse->packet[3]) & 0x02))
+		return;
+
+	if (!SYN_CAP_EXT_BUTTONS_STICK(priv->ext_cap_10)) {
+		for (i = 0; i < ext_bits; i++) {
+			input_report_key(dev, BTN_0 + 2 * i,
+					 hw->ext_buttons & (1 << i));
+			input_report_key(dev, BTN_1 + 2 * i,
+					 hw->ext_buttons & (1 << (i + ext_bits)));
+		}
+		return;
+	}
+
+	/*
+	 * This generation of touchpads has the trackstick buttons
+	 * physically wired to the touchpad. Re-route them through
+	 * the pass-through interface.
+	 */
+	if (!priv->pt_port)
+		return;
+
+	/* The trackstick expects at most 3 buttons */
+	priv->pt_buttons = SYN_CAP_EXT_BUTTON_STICK_L(hw->ext_buttons)      |
+			   SYN_CAP_EXT_BUTTON_STICK_R(hw->ext_buttons) << 1 |
+			   SYN_CAP_EXT_BUTTON_STICK_M(hw->ext_buttons) << 2;
+
+	synaptics_pass_pt_packet(psmouse, priv->pt_port, buf);
+}
+
 static void synaptics_report_buttons(struct psmouse *psmouse,
 				     const struct synaptics_hw_state *hw)
 {
 	struct input_dev *dev = psmouse->dev;
 	struct synaptics_data *priv = psmouse->private;
-	int i;
 
 	input_report_key(dev, BTN_LEFT, hw->left);
 	input_report_key(dev, BTN_RIGHT, hw->right);
···
 		input_report_key(dev, BTN_BACK, hw->down);
 	}
 
-	for (i = 0; i < SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap); i++)
-		input_report_key(dev, BTN_0 + i, hw->ext_buttons & (1 << i));
+	synaptics_report_ext_buttons(psmouse, hw);
 }
 
 static void synaptics_report_mt_data(struct psmouse *psmouse,
···
 		pos[i].y = synaptics_invert_y(hw[i]->y);
 	}
 
-	input_mt_assign_slots(dev, slot, pos, nsemi, DMAX * priv->x_res);
+	input_mt_assign_slots(dev, slot, pos, nsemi, 0);
 
 	for (i = 0; i < nsemi; i++) {
 		input_mt_slot(dev, slot[i]);
···
 	if (SYN_CAP_PASS_THROUGH(priv->capabilities) &&
 	    synaptics_is_pt_packet(psmouse->packet)) {
 		if (priv->pt_port)
-			synaptics_pass_pt_packet(priv->pt_port, psmouse->packet);
+			synaptics_pass_pt_packet(psmouse, priv->pt_port,
+						 psmouse->packet);
 	} else
 		synaptics_process_packet(psmouse);
 
···
 		__set_bit(BTN_BACK, dev->keybit);
 	}
 
-	for (i = 0; i < SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap); i++)
-		__set_bit(BTN_0 + i, dev->keybit);
+	if (!SYN_CAP_EXT_BUTTONS_STICK(priv->ext_cap_10))
+		for (i = 0; i < SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap); i++)
+			__set_bit(BTN_0 + i, dev->keybit);
 
 	__clear_bit(EV_REL, dev->evbit);
 	__clear_bit(REL_X, dev->relbit);
···
 
 	if (SYN_CAP_CLICKPAD(priv->ext_cap_0c)) {
 		__set_bit(INPUT_PROP_BUTTONPAD, dev->propbit);
-		if (psmouse_matches_pnp_id(psmouse, topbuttonpad_pnp_ids))
+		if (psmouse_matches_pnp_id(psmouse, topbuttonpad_pnp_ids) &&
+		    !SYN_CAP_EXT_BUTTONS_STICK(priv->ext_cap_10))
 			__set_bit(INPUT_PROP_TOPBUTTONPAD, dev->propbit);
 		/* Clickpads report only left button */
 		__clear_bit(BTN_RIGHT, dev->keybit);
+28
drivers/input/mouse/synaptics.h
···
 #define SYN_QUE_EXT_CAPAB_0C		0x0c
 #define SYN_QUE_EXT_MAX_COORDS		0x0d
 #define SYN_QUE_EXT_MIN_COORDS		0x0f
+#define SYN_QUE_MEXT_CAPAB_10		0x10
 
 /* synaptics modes */
 #define SYN_BIT_ABSOLUTE_MODE		(1 << 7)
···
 #define SYN_EXT_CAP_REQUESTS(c)		(((c) & 0x700000) >> 20)
 #define SYN_CAP_MULTI_BUTTON_NO(ec)	(((ec) & 0x00f000) >> 12)
 #define SYN_CAP_PRODUCT_ID(ec)		(((ec) & 0xff0000) >> 16)
+#define SYN_MEXT_CAP_BIT(m)		((m) & (1 << 1))
 
 /*
  * The following describes response for the 0x0c query.
···
 #define SYN_CAP_ADV_GESTURE(ex0c)	((ex0c) & 0x080000)
 #define SYN_CAP_REDUCED_FILTERING(ex0c)	((ex0c) & 0x000400)
 #define SYN_CAP_IMAGE_SENSOR(ex0c)	((ex0c) & 0x000800)
+
+/*
+ * The following describes response for the 0x10 query.
+ *
+ * byte	mask	name			meaning
+ * ----	----	-------			------------
+ * 1	0x01	ext buttons are stick	buttons exported in the extended
+ *					capability are actually meant to be used
+ *					by the trackstick (pass-through).
+ * 1	0x02	SecurePad		the touchpad is a SecurePad, so it
+ *					contains a built-in fingerprint reader.
+ * 1	0xe0	more ext count		how many more extended queries are
+ *					available after this one.
+ * 2	0xff	SecurePad width		the width of the SecurePad fingerprint
+ *					reader.
+ * 3	0xff	SecurePad height	the height of the SecurePad fingerprint
+ *					reader.
+ */
+#define SYN_CAP_EXT_BUTTONS_STICK(ex10)	((ex10) & 0x010000)
+#define SYN_CAP_SECUREPAD(ex10)		((ex10) & 0x020000)
+
+#define SYN_CAP_EXT_BUTTON_STICK_L(eb)	(!!((eb) & 0x01))
+#define SYN_CAP_EXT_BUTTON_STICK_M(eb)	(!!((eb) & 0x02))
+#define SYN_CAP_EXT_BUTTON_STICK_R(eb)	(!!((eb) & 0x04))
 
 /* synaptics modes query bits */
 #define SYN_MODE_ABSOLUTE(m)		((m) & (1 << 7))
···
 	unsigned long int capabilities;		/* Capabilities */
 	unsigned long int ext_cap;		/* Extended Capabilities */
 	unsigned long int ext_cap_0c;		/* Ext Caps from 0x0c query */
+	unsigned long int ext_cap_10;		/* Ext Caps from 0x10 query */
 	unsigned long int identity;		/* Identification */
 	unsigned int x_res, y_res;		/* X/Y resolution in units/mm */
 	unsigned int x_max, y_max;		/* Max coordinates (from FW) */
···
 	bool disable_gesture;			/* disable gestures */
 
 	struct serio *pt_port;			/* Pass-through serio port */
+	unsigned char pt_buttons;		/* Pass-through buttons */
 
 	/*
 	 * Last received Advanced Gesture Mode (AGM) packet. An AGM packet
+1
drivers/input/touchscreen/Kconfig
···
 	tristate "Allwinner sun4i resistive touchscreen controller support"
 	depends on ARCH_SUNXI || COMPILE_TEST
 	depends on HWMON
+	depends on THERMAL || !THERMAL_OF
 	help
 	  This selects support for the resistive touchscreen controller
 	  found on Allwinner sunxi SoCs.
+2
drivers/iommu/Kconfig
···
 config IOMMU_IO_PGTABLE_LPAE
 	bool "ARMv7/v8 Long Descriptor Format"
 	select IOMMU_IO_PGTABLE
+	depends on ARM || ARM64 || COMPILE_TEST
 	help
 	  Enable support for the ARM long descriptor pagetable format.
 	  This allocator supports 4K/2M/1G, 16K/32M and 64K/512M page
···
 	bool "MSM IOMMU Support"
 	depends on ARM
 	depends on ARCH_MSM8X60 || ARCH_MSM8960 || COMPILE_TEST
+	depends on BROKEN
 	select IOMMU_API
 	help
 	  Support for the IOMMUs found on certain Qualcomm SOCs.
+6-3
drivers/iommu/arm-smmu.c
···
 		return 0;
 
 	spin_lock_irqsave(&smmu_domain->pgtbl_lock, flags);
-	if (smmu_domain->smmu->features & ARM_SMMU_FEAT_TRANS_OPS)
+	if (smmu_domain->smmu->features & ARM_SMMU_FEAT_TRANS_OPS &&
+			smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		ret = arm_smmu_iova_to_phys_hard(domain, iova);
-	else
+	} else {
 		ret = ops->iova_to_phys(ops, iova);
+	}
+
 	spin_unlock_irqrestore(&smmu_domain->pgtbl_lock, flags);
 
 	return ret;
···
 		return -ENODEV;
 	}
 
-	if (smmu->version == 1 || (!(id & ID0_ATOSNS) && (id & ID0_S1TS))) {
+	if ((id & ID0_S1TS) && ((smmu->version == 1) || (id & ID0_ATOSNS))) {
 		smmu->features |= ARM_SMMU_FEAT_TRANS_OPS;
 		dev_notice(smmu->dev, "\taddress translation ops\n");
 	}
+7
drivers/iommu/exynos-iommu.c
···
 
 static int __init exynos_iommu_init(void)
 {
+	struct device_node *np;
 	int ret;
+
+	np = of_find_matching_node(NULL, sysmmu_of_match);
+	if (!np)
+		return 0;
+
+	of_node_put(np);
 
 	lv2table_kmem_cache = kmem_cache_create("exynos-iommu-lv2table",
 				LV2TABLE_SIZE, LV2TABLE_SIZE, 0, NULL);
+3-4
drivers/iommu/intel-iommu.c
···
 
 static void domain_exit(struct dmar_domain *domain)
 {
-	struct dmar_drhd_unit *drhd;
-	struct intel_iommu *iommu;
 	struct page *freelist = NULL;
+	int i;
 
 	/* Domain 0 is reserved, so dont process it */
 	if (!domain)
···
 
 	/* clear attached or cached domains */
 	rcu_read_lock();
-	for_each_active_iommu(iommu, drhd)
-		iommu_detach_domain(domain, iommu);
+	for_each_set_bit(i, domain->iommu_bmp, g_num_of_iommus)
+		iommu_detach_domain(domain, g_iommus[i]);
 	rcu_read_unlock();
 
 	dma_free_pagelist(freelist);
+3-2
drivers/iommu/io-pgtable-arm.c
···
 	((((d)->levels - ((l) - ARM_LPAE_START_LVL(d) + 1))		\
 	  * (d)->bits_per_level) + (d)->pg_shift)
 
-#define ARM_LPAE_PAGES_PER_PGD(d)	((d)->pgd_size >> (d)->pg_shift)
+#define ARM_LPAE_PAGES_PER_PGD(d)					\
+	DIV_ROUND_UP((d)->pgd_size, 1UL << (d)->pg_shift)
 
 /*
  * Calculate the index at level l used to map virtual address a using the
···
 	((l) == ARM_LPAE_START_LVL(d) ? ilog2(ARM_LPAE_PAGES_PER_PGD(d)) : 0)
 
 #define ARM_LPAE_LVL_IDX(a,l,d)						\
-	(((a) >> ARM_LPAE_LVL_SHIFT(l,d)) &				\
+	(((u64)(a) >> ARM_LPAE_LVL_SHIFT(l,d)) &			\
 	 ((1 << ((d)->bits_per_level + ARM_LPAE_PGD_IDX(l,d))) - 1))
 
 /* Calculate the block/page mapping size at level l for pagetable in d. */
drivers/iommu/rockchip-iommu.c
···
 
 static int __init rk_iommu_init(void)
 {
+	struct device_node *np;
 	int ret;
+
+	np = of_find_matching_node(NULL, rk_iommu_dt_ids);
+	if (!np)
+		return 0;
+
+	of_node_put(np);
 
 	ret = bus_set_iommu(&platform_bus_type, &rk_iommu_ops);
 	if (ret)
drivers/lguest/Kconfig
···
 config LGUEST
 	tristate "Linux hypervisor example code"
-	depends on X86_32 && EVENTFD && TTY
+	depends on X86_32 && EVENTFD && TTY && PCI_DIRECT
 	select HVC_DRIVER
 	---help---
 	  This is a very simple module which allows you to run
drivers/md/dm-snap.c
···
 #include <linux/log2.h>
 #include <linux/dm-kcopyd.h>
 
+#include "dm.h"
+
 #include "dm-exception-store.h"
 
 #define DM_MSG_PREFIX "snapshots"
···
 };
 
 /*
+ * This structure is allocated for each origin target
+ */
+struct dm_origin {
+	struct dm_dev *dev;
+	struct dm_target *ti;
+	unsigned split_boundary;
+	struct list_head hash_list;
+};
+
+/*
  * Size of the hash table for origin volumes. If we make this
  * the size of the minors list then it should be nearly perfect
  */
 #define ORIGIN_HASH_SIZE 256
 #define ORIGIN_MASK      0xFF
 static struct list_head *_origins;
+static struct list_head *_dm_origins;
 static struct rw_semaphore _origins_lock;
 
 static DECLARE_WAIT_QUEUE_HEAD(_pending_exceptions_done);
···
 	_origins = kmalloc(ORIGIN_HASH_SIZE * sizeof(struct list_head),
 			   GFP_KERNEL);
 	if (!_origins) {
-		DMERR("unable to allocate memory");
+		DMERR("unable to allocate memory for _origins");
 		return -ENOMEM;
 	}
-
 	for (i = 0; i < ORIGIN_HASH_SIZE; i++)
 		INIT_LIST_HEAD(_origins + i);
+
+	_dm_origins = kmalloc(ORIGIN_HASH_SIZE * sizeof(struct list_head),
+			      GFP_KERNEL);
+	if (!_dm_origins) {
+		DMERR("unable to allocate memory for _dm_origins");
+		kfree(_origins);
+		return -ENOMEM;
+	}
+	for (i = 0; i < ORIGIN_HASH_SIZE; i++)
+		INIT_LIST_HEAD(_dm_origins + i);
+
 	init_rwsem(&_origins_lock);
 
 	return 0;
···
 static void exit_origin_hash(void)
 {
 	kfree(_origins);
+	kfree(_dm_origins);
 }
 
 static unsigned origin_hash(struct block_device *bdev)
···
 {
 	struct list_head *sl = &_origins[origin_hash(o->bdev)];
 	list_add_tail(&o->hash_list, sl);
+}
+
+static struct dm_origin *__lookup_dm_origin(struct block_device *origin)
+{
+	struct list_head *ol;
+	struct dm_origin *o;
+
+	ol = &_dm_origins[origin_hash(origin)];
+	list_for_each_entry (o, ol, hash_list)
+		if (bdev_equal(o->dev->bdev, origin))
+			return o;
+
+	return NULL;
+}
+
+static void __insert_dm_origin(struct dm_origin *o)
+{
+	struct list_head *sl = &_dm_origins[origin_hash(o->dev->bdev)];
+	list_add_tail(&o->hash_list, sl);
+}
+
+static void __remove_dm_origin(struct dm_origin *o)
+{
+	list_del(&o->hash_list);
 }
 
 /*
···
 static void snapshot_resume(struct dm_target *ti)
 {
 	struct dm_snapshot *s = ti->private;
-	struct dm_snapshot *snap_src = NULL, *snap_dest = NULL;
+	struct dm_snapshot *snap_src = NULL, *snap_dest = NULL, *snap_merging = NULL;
+	struct dm_origin *o;
+	struct mapped_device *origin_md = NULL;
+	bool must_restart_merging = false;
 
 	down_read(&_origins_lock);
+
+	o = __lookup_dm_origin(s->origin->bdev);
+	if (o)
+		origin_md = dm_table_get_md(o->ti->table);
+	if (!origin_md) {
+		(void) __find_snapshots_sharing_cow(s, NULL, NULL, &snap_merging);
+		if (snap_merging)
+			origin_md = dm_table_get_md(snap_merging->ti->table);
+	}
+	if (origin_md == dm_table_get_md(ti->table))
+		origin_md = NULL;
+	if (origin_md) {
+		if (dm_hold(origin_md))
+			origin_md = NULL;
+	}
+
+	up_read(&_origins_lock);
+
+	if (origin_md) {
+		dm_internal_suspend_fast(origin_md);
+		if (snap_merging && test_bit(RUNNING_MERGE, &snap_merging->state_bits)) {
+			must_restart_merging = true;
+			stop_merge(snap_merging);
+		}
+	}
+
+	down_read(&_origins_lock);
+
 	(void) __find_snapshots_sharing_cow(s, &snap_src, &snap_dest, NULL);
 	if (snap_src && snap_dest) {
 		down_write(&snap_src->lock);
···
 		up_write(&snap_dest->lock);
 		up_write(&snap_src->lock);
 	}
+
 	up_read(&_origins_lock);
+
+	if (origin_md) {
+		if (must_restart_merging)
+			start_merge(snap_merging);
+		dm_internal_resume_fast(origin_md);
+		dm_put(origin_md);
+	}
 
 	/* Now we have correct chunk size, reregister */
 	reregister_snapshot(s);
···
  * Origin: maps a linear range of a device, with hooks for snapshotting.
  */
 
-struct dm_origin {
-	struct dm_dev *dev;
-	unsigned split_boundary;
-};
-
 /*
  * Construct an origin mapping: <dev_path>
  * The context for an origin is merely a 'struct dm_dev *'
···
 		goto bad_open;
 	}
 
+	o->ti = ti;
 	ti->private = o;
 	ti->num_flush_bios = 1;
 
···
 static void origin_dtr(struct dm_target *ti)
 {
 	struct dm_origin *o = ti->private;
+
 	dm_put_device(ti, o->dev);
 	kfree(o);
 }
···
 	struct dm_origin *o = ti->private;
 
 	o->split_boundary = get_origin_minimum_chunksize(o->dev->bdev);
+
+	down_write(&_origins_lock);
+	__insert_dm_origin(o);
+	up_write(&_origins_lock);
+}
+
+static void origin_postsuspend(struct dm_target *ti)
+{
+	struct dm_origin *o = ti->private;
+
+	down_write(&_origins_lock);
+	__remove_dm_origin(o);
+	up_write(&_origins_lock);
 }
 
 static void origin_status(struct dm_target *ti, status_type_t type,
···
 
 static struct target_type origin_target = {
 	.name    = "snapshot-origin",
-	.version = {1, 8, 1},
+	.version = {1, 9, 0},
 	.module  = THIS_MODULE,
 	.ctr     = origin_ctr,
 	.dtr     = origin_dtr,
 	.map     = origin_map,
 	.resume  = origin_resume,
+	.postsuspend = origin_postsuspend,
 	.status  = origin_status,
 	.merge	 = origin_merge,
 	.iterate_devices = origin_iterate_devices,
···
 
 static struct target_type snapshot_target = {
 	.name    = "snapshot",
-	.version = {1, 12, 0},
+	.version = {1, 13, 0},
 	.module  = THIS_MODULE,
 	.ctr     = snapshot_ctr,
 	.dtr     = snapshot_dtr,
···
 
 static struct target_type merge_target = {
 	.name    = dm_snapshot_merge_target_name,
-	.version = {1, 2, 0},
+	.version = {1, 3, 0},
 	.module  = THIS_MODULE,
 	.ctr     = snapshot_ctr,
 	.dtr     = snapshot_dtr,
-11
drivers/md/dm-thin.c
···
 		return DM_MAPIO_REMAPPED;
 
 	case -ENODATA:
-		if (get_pool_mode(tc->pool) == PM_READ_ONLY) {
-			/*
-			 * This block isn't provisioned, and we have no way
-			 * of doing so.
-			 */
-			handle_unserviceable_bio(tc->pool, bio);
-			cell_defer_no_holder(tc, virt_cell);
-			return DM_MAPIO_SUBMITTED;
-		}
-		/* fall through */
-
 	case -EWOULDBLOCK:
 		thin_defer_cell(tc, virt_cell);
 		return DM_MAPIO_SUBMITTED;
+37-10
drivers/md/dm.c
···
 
 	dm_get(md);
 	atomic_inc(&md->open_count);
-
 out:
 	spin_unlock(&_minor_lock);
 
···
 
 static void dm_blk_close(struct gendisk *disk, fmode_t mode)
 {
-	struct mapped_device *md = disk->private_data;
+	struct mapped_device *md;
 
 	spin_lock(&_minor_lock);
+
+	md = disk->private_data;
+	if (WARN_ON(!md))
+		goto out;
 
 	if (atomic_dec_and_test(&md->open_count) &&
 	    (test_bit(DMF_DEFERRED_REMOVE, &md->flags)))
 		queue_work(deferred_remove_workqueue, &deferred_remove_work);
 
 	dm_put(md);
-
+out:
 	spin_unlock(&_minor_lock);
 }
 
···
 	int minor = MINOR(disk_devt(md->disk));
 
 	unlock_fs(md);
-	bdput(md->bdev);
 	destroy_workqueue(md->wq);
 
 	if (md->kworker_task)
···
 		mempool_destroy(md->rq_pool);
 	if (md->bs)
 		bioset_free(md->bs);
-	blk_integrity_unregister(md->disk);
-	del_gendisk(md->disk);
+
 	cleanup_srcu_struct(&md->io_barrier);
 	free_table_devices(&md->table_devices);
-	free_minor(minor);
+	dm_stats_cleanup(&md->stats);
 
 	spin_lock(&_minor_lock);
 	md->disk->private_data = NULL;
 	spin_unlock(&_minor_lock);
-
+	if (blk_get_integrity(md->disk))
+		blk_integrity_unregister(md->disk);
+	del_gendisk(md->disk);
 	put_disk(md->disk);
 	blk_cleanup_queue(md->queue);
-	dm_stats_cleanup(&md->stats);
+	bdput(md->bdev);
+	free_minor(minor);
+
 	module_put(THIS_MODULE);
 	kfree(md);
 }
···
 	BUG_ON(test_bit(DMF_FREEING, &md->flags));
 }
 
+int dm_hold(struct mapped_device *md)
+{
+	spin_lock(&_minor_lock);
+	if (test_bit(DMF_FREEING, &md->flags)) {
+		spin_unlock(&_minor_lock);
+		return -EBUSY;
+	}
+	dm_get(md);
+	spin_unlock(&_minor_lock);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dm_hold);
+
 const char *dm_device_name(struct mapped_device *md)
 {
 	return md->name;
···
 
 	might_sleep();
 
-	spin_lock(&_minor_lock);
 	map = dm_get_live_table(md, &srcu_idx);
+
+	spin_lock(&_minor_lock);
 	idr_replace(&_minor_idr, MINOR_ALLOCED, MINOR(disk_devt(dm_disk(md))));
 	set_bit(DMF_FREEING, &md->flags);
 	spin_unlock(&_minor_lock);
···
 	if (dm_request_based(md))
 		flush_kthread_worker(&md->kworker);
 
+	/*
+	 * Take suspend_lock so that presuspend and postsuspend methods
+	 * do not race with internal suspend.
+	 */
+	mutex_lock(&md->suspend_lock);
 	if (!dm_suspended_md(md)) {
 		dm_table_presuspend_targets(map);
 		dm_table_postsuspend_targets(map);
 	}
+	mutex_unlock(&md->suspend_lock);
 
 	/* dm_put_live_table must be before msleep, otherwise deadlock is possible */
 	dm_put_live_table(md, srcu_idx);
···
 	flush_workqueue(md->wq);
 	dm_wait_for_completion(md, TASK_UNINTERRUPTIBLE);
 }
+EXPORT_SYMBOL_GPL(dm_internal_suspend_fast);
 
 void dm_internal_resume_fast(struct mapped_device *md)
 {
···
 done:
 	mutex_unlock(&md->suspend_lock);
 }
+EXPORT_SYMBOL_GPL(dm_internal_resume_fast);
 
 /*-----------------------------------------------------------------
  * Event notification.
+2-1
drivers/md/md.c
···
 	}
 	if (err) {
 		mddev_detach(mddev);
-		pers->free(mddev, mddev->private);
+		if (mddev->private)
+			pers->free(mddev, mddev->private);
 		module_put(pers->owner);
 		bitmap_destroy(mddev);
 		return err;
···
 	for (id = kempld_dmi_table;
 	     id->matches[0].slot != DMI_NONE; id++)
 		if (strstr(id->ident, force_device_id))
-			if (id->callback && id->callback(id))
+			if (id->callback && !id->callback(id))
 				break;
 	if (id->matches[0].slot == DMI_NONE)
 		return -ENODEV;
+24-6
drivers/mfd/rtsx_usb.c
···
 int rtsx_usb_ep0_read_register(struct rtsx_ucr *ucr, u16 addr, u8 *data)
 {
 	u16 value;
+	u8 *buf;
+	int ret;
 
 	if (!data)
 		return -EINVAL;
-	*data = 0;
+
+	buf = kzalloc(sizeof(u8), GFP_KERNEL);
+	if (!buf)
+		return -ENOMEM;
 
 	addr |= EP0_READ_REG_CMD << EP0_OP_SHIFT;
 	value = swab16(addr);
 
-	return usb_control_msg(ucr->pusb_dev,
+	ret = usb_control_msg(ucr->pusb_dev,
 			usb_rcvctrlpipe(ucr->pusb_dev, 0), RTSX_USB_REQ_REG_OP,
 			USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
-			value, 0, data, 1, 100);
+			value, 0, buf, 1, 100);
+	*data = *buf;
+
+	kfree(buf);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(rtsx_usb_ep0_read_register);
 
···
 int rtsx_usb_get_card_status(struct rtsx_ucr *ucr, u16 *status)
 {
 	int ret;
+	u16 *buf;
 
 	if (!status)
 		return -EINVAL;
 
-	if (polling_pipe == 0)
+	if (polling_pipe == 0) {
+		buf = kzalloc(sizeof(u16), GFP_KERNEL);
+		if (!buf)
+			return -ENOMEM;
+
 		ret = usb_control_msg(ucr->pusb_dev,
 				usb_rcvctrlpipe(ucr->pusb_dev, 0),
 				RTSX_USB_REQ_POLL,
 				USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
-				0, 0, status, 2, 100);
-	else
+				0, 0, buf, 2, 100);
+		*status = *buf;
+
+		kfree(buf);
+	} else {
 		ret = rtsx_usb_get_status_with_bulk(ucr, status);
+	}
 
 	/* usb_control_msg may return positive when success */
 	if (ret < 0)
+2
drivers/misc/mei/init.c
···
 
 	dev->dev_state = MEI_DEV_POWER_DOWN;
 	mei_reset(dev);
+	/* move device to disabled state unconditionally */
+	dev->dev_state = MEI_DEV_DISABLED;
 
 	mutex_unlock(&dev->device_lock);
 
+1-1
drivers/mmc/core/pwrseq_simple.c
···
 		    PTR_ERR(pwrseq->reset_gpios[i]) != -ENOSYS) {
 			ret = PTR_ERR(pwrseq->reset_gpios[i]);
 
-			while (--i)
+			while (i--)
 				gpiod_put(pwrseq->reset_gpios[i]);
 
 			goto clk_put;
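The one-character change in the unwind loop above is easy to misread. A minimal standalone sketch (with a hypothetical `put()` counter standing in for `gpiod_put()`, not the driver code itself) shows why `while (--i)` leaks slot 0 while `while (i--)` releases every slot that was acquired:

```c
#include <assert.h>
#include <stdbool.h>

#define N_GPIOS 4

static bool released[N_GPIOS];

/* Hypothetical stand-in for gpiod_put(): record that slot idx was released. */
static void put(int idx)
{
	released[idx] = true;
}

/* Unwind slots 0..i-1 after acquiring slot i failed: the buggy variant.
 * Pre-decrement stops the loop as soon as i reaches 0, so slot 0 is never
 * released. */
static int unwind_pre_decrement(int i)
{
	int n = 0;

	while (--i) {
		put(i);
		n++;
	}
	return n;
}

/* The fixed variant from the patch above: releases slots i-1 down to 0. */
static int unwind_post_decrement(int i)
{
	int n = 0;

	while (i--) {
		put(i);
		n++;
	}
	return n;
}
```

With three slots acquired before a failure at index 3, the buggy loop performs only two releases and the fixed loop performs three.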
+1
drivers/mtd/nand/Kconfig
···
 
 config MTD_NAND_HISI504
 	tristate "Support for NAND controller on Hisilicon SoC Hip04"
+	depends on HAS_DMA
 	help
 	  Enables support for NAND controller on Hisilicon SoC Hip04.
 
+44-6
drivers/mtd/nand/pxa3xx_nand.c
···
 	nand_writel(info, NDCR, ndcr | int_mask);
 }
 
+static void drain_fifo(struct pxa3xx_nand_info *info, void *data, int len)
+{
+	if (info->ecc_bch) {
+		int timeout;
+
+		/*
+		 * According to the datasheet, when reading from NDDB
+		 * with BCH enabled, after each 32 bytes reads, we
+		 * have to make sure that the NDSR.RDDREQ bit is set.
+		 *
+		 * Drain the FIFO 8 32 bits reads at a time, and skip
+		 * the polling on the last read.
+		 */
+		while (len > 8) {
+			__raw_readsl(info->mmio_base + NDDB, data, 8);
+
+			for (timeout = 0;
+			     !(nand_readl(info, NDSR) & NDSR_RDDREQ);
+			     timeout++) {
+				if (timeout >= 5) {
+					dev_err(&info->pdev->dev,
+						"Timeout on RDDREQ while draining the FIFO\n");
+					return;
+				}
+
+				mdelay(1);
+			}
+
+			data += 32;
+			len -= 8;
+		}
+	}
+
+	__raw_readsl(info->mmio_base + NDDB, data, len);
+}
+
 static void handle_data_pio(struct pxa3xx_nand_info *info)
 {
 	unsigned int do_bytes = min(info->data_size, info->chunk_size);
···
 			      DIV_ROUND_UP(info->oob_size, 4));
 		break;
 	case STATE_PIO_READING:
-		__raw_readsl(info->mmio_base + NDDB,
-			     info->data_buff + info->data_buff_pos,
-			     DIV_ROUND_UP(do_bytes, 4));
+		drain_fifo(info,
+			   info->data_buff + info->data_buff_pos,
+			   DIV_ROUND_UP(do_bytes, 4));
 
 		if (info->oob_size > 0)
-			__raw_readsl(info->mmio_base + NDDB,
-				     info->oob_buff + info->oob_buff_pos,
-				     DIV_ROUND_UP(info->oob_size, 4));
+			drain_fifo(info,
+				   info->oob_buff + info->oob_buff_pos,
+				   DIV_ROUND_UP(info->oob_size, 4));
 		break;
 	default:
 		dev_err(&info->pdev->dev, "%s: invalid state %d\n", __func__,
···
 	int ret, irq, cs;
 
 	pdata = dev_get_platdata(&pdev->dev);
+	if (pdata->num_cs <= 0)
+		return -ENODEV;
 	info = devm_kzalloc(&pdev->dev, sizeof(*info) + (sizeof(*mtd) +
 			    sizeof(*host)) * pdata->num_cs, GFP_KERNEL);
 	if (!info)
···
 	making it transparent to the connected L2 switch.
 
 Ipvlan devices can be added using the "ip" command from the
-iproute2 package starting with the iproute2-X.Y.ZZ release:
+iproute2 package starting with the iproute2-3.19 release:
 
 	"ip link add link <main-dev> [ NAME ] type ipvlan"
 
+1-1
drivers/net/appletalk/Kconfig
···
 
 config LTPC
 	tristate "Apple/Farallon LocalTalk PC support"
-	depends on DEV_APPLETALK && (ISA || EISA) && ISA_DMA_API
+	depends on DEV_APPLETALK && (ISA || EISA) && ISA_DMA_API && VIRT_TO_BUS
 	help
 	  This allows you to use the AppleTalk PC card to connect to LocalTalk
 	  networks. The card is also known as the Farallon PhoneNet PC card.
+2-1
drivers/net/bonding/bond_main.c
···
 	/* Find out if any slaves have the same mapping as this skb. */
 	bond_for_each_slave_rcu(bond, slave, iter) {
 		if (slave->queue_id == skb->queue_mapping) {
-			if (bond_slave_can_tx(slave)) {
+			if (bond_slave_is_up(slave) &&
+			    slave->link == BOND_LINK_UP) {
 				bond_dev_queue_xmit(bond, skb, slave->dev);
 				return 0;
 			}
+1-1
drivers/net/can/Kconfig
···
 
 config CAN_XILINXCAN
 	tristate "Xilinx CAN"
-	depends on ARCH_ZYNQ || MICROBLAZE || COMPILE_TEST
+	depends on ARCH_ZYNQ || ARM64 || MICROBLAZE || COMPILE_TEST
 	depends on COMMON_CLK && HAS_IOMEM
 	---help---
 	  Xilinx CAN driver. This driver supports both soft AXI CAN IP and
···
 	u16 pktlength;
 	u16 pktstatus;
 
-	while ((rxstatus = priv->dmaops->get_rx_status(priv)) != 0) {
+	while (((rxstatus = priv->dmaops->get_rx_status(priv)) != 0) &&
+	       (count < limit)) {
 		pktstatus = rxstatus >> 16;
 		pktlength = rxstatus & 0xffff;
 
···
 	struct altera_tse_private *priv =
 			container_of(napi, struct altera_tse_private, napi);
 	int rxcomplete = 0;
-	int txcomplete = 0;
 	unsigned long int flags;
 
-	txcomplete = tse_tx_complete(priv);
+	tse_tx_complete(priv);
 
 	rxcomplete = tse_rx(priv, budget);
 
-	if (rxcomplete >= budget || txcomplete > 0)
-		return rxcomplete;
+	if (rxcomplete < budget) {
 
-	napi_gro_flush(napi, false);
-	__napi_complete(napi);
+		napi_gro_flush(napi, false);
+		__napi_complete(napi);
 
-	netdev_dbg(priv->dev,
-		   "NAPI Complete, did %d packets with budget %d\n",
-		   txcomplete+rxcomplete, budget);
+		netdev_dbg(priv->dev,
+			   "NAPI Complete, did %d packets with budget %d\n",
+			   rxcomplete, budget);
 
-	spin_lock_irqsave(&priv->rxdma_irq_lock, flags);
-	priv->dmaops->enable_rxirq(priv);
-	priv->dmaops->enable_txirq(priv);
-	spin_unlock_irqrestore(&priv->rxdma_irq_lock, flags);
-	return rxcomplete + txcomplete;
+		spin_lock_irqsave(&priv->rxdma_irq_lock, flags);
+		priv->dmaops->enable_rxirq(priv);
+		priv->dmaops->enable_txirq(priv);
+		spin_unlock_irqrestore(&priv->rxdma_irq_lock, flags);
+	}
+	return rxcomplete;
 }
 
 /* DMA TX & RX FIFO interrupt routing
···
 {
 	struct net_device *dev = dev_id;
 	struct altera_tse_private *priv;
-	unsigned long int flags;
 
 	if (unlikely(!dev)) {
 		pr_err("%s: invalid dev pointer\n", __func__);
···
 	}
 	priv = netdev_priv(dev);
 
-	/* turn off desc irqs and enable napi rx */
-	spin_lock_irqsave(&priv->rxdma_irq_lock, flags);
-
-	if (likely(napi_schedule_prep(&priv->napi))) {
-		priv->dmaops->disable_rxirq(priv);
-		priv->dmaops->disable_txirq(priv);
-		__napi_schedule(&priv->napi);
-	}
-
+	spin_lock(&priv->rxdma_irq_lock);
 	/* reset IRQs */
 	priv->dmaops->clear_rxirq(priv);
 	priv->dmaops->clear_txirq(priv);
+	spin_unlock(&priv->rxdma_irq_lock);
 
-	spin_unlock_irqrestore(&priv->rxdma_irq_lock, flags);
+	if (likely(napi_schedule_prep(&priv->napi))) {
+		spin_lock(&priv->rxdma_irq_lock);
+		priv->dmaops->disable_rxirq(priv);
+		priv->dmaops->disable_txirq(priv);
+		spin_unlock(&priv->rxdma_irq_lock);
+		__napi_schedule(&priv->napi);
+	}
+
 
 	return IRQ_HANDLED;
 }
···
 	}
 
 	if (of_property_read_u32(pdev->dev.of_node, "tx-fifo-depth",
-				 &priv->rx_fifo_depth)) {
+				 &priv->tx_fifo_depth)) {
 		dev_err(&pdev->dev, "cannot obtain tx-fifo-depth\n");
 		ret = -ENXIO;
 		goto err_free_netdev;
+29-2
drivers/net/ethernet/amd/pcnet32.c
···
 {
 	struct pcnet32_private *lp;
 	int i, media;
-	int fdx, mii, fset, dxsuflo;
+	int fdx, mii, fset, dxsuflo, sram;
 	int chip_version;
 	char *chipname;
 	struct net_device *dev;
···
 	}
 
 	/* initialize variables */
-	fdx = mii = fset = dxsuflo = 0;
+	fdx = mii = fset = dxsuflo = sram = 0;
 	chip_version = (chip_version >> 12) & 0xffff;
 
 	switch (chip_version) {
···
 		chipname = "PCnet/FAST III 79C973";	/* PCI */
 		fdx = 1;
 		mii = 1;
+		sram = 1;
 		break;
 	case 0x2626:
 		chipname = "PCnet/Home 79C978";	/* PCI */
···
 		chipname = "PCnet/FAST III 79C975";	/* PCI */
 		fdx = 1;
 		mii = 1;
+		sram = 1;
 		break;
 	case 0x2628:
 		chipname = "PCnet/PRO 79C976";
···
 		a->write_csr(ioaddr, 80,
 			     (a->read_csr(ioaddr, 80) & 0x0C00) | 0x0c00);
 		dxsuflo = 1;
+	}
+
+	/*
+	 * The Am79C973/Am79C975 controllers come with 12K of SRAM
+	 * which we can use for the Tx/Rx buffers but most importantly,
+	 * the use of SRAM allow us to use the BCR18:NOUFLO bit to avoid
+	 * Tx fifo underflows.
+	 */
+	if (sram) {
+		/*
+		 * The SRAM is being configured in two steps. First we
+		 * set the SRAM size in the BCR25:SRAM_SIZE bits. According
+		 * to the datasheet, each bit corresponds to a 512-byte
+		 * page so we can have at most 24 pages. The SRAM_SIZE
+		 * holds the value of the upper 8 bits of the 16-bit SRAM size.
+		 * The low 8-bits start at 0x00 and end at 0xff. So the
+		 * address range is from 0x0000 up to 0x17ff. Therefore,
+		 * the SRAM_SIZE is set to 0x17. The next step is to set
+		 * the BCR26:SRAM_BND midway through so the Tx and Rx
+		 * buffers can share the SRAM equally.
+		 */
+		a->write_bcr(ioaddr, 25, 0x17);
+		a->write_bcr(ioaddr, 26, 0xc);
+		/* And finally enable the NOUFLO bit */
+		a->write_bcr(ioaddr, 18, a->read_bcr(ioaddr, 18) | (1 << 11));
 	}
 
 	dev = alloc_etherdev(sizeof(*lp));
+93-82
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
···
 	}
 }
 
+static int xgbe_request_irqs(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	struct net_device *netdev = pdata->netdev;
+	unsigned int i;
+	int ret;
+
+	ret = devm_request_irq(pdata->dev, pdata->dev_irq, xgbe_isr, 0,
+			       netdev->name, pdata);
+	if (ret) {
+		netdev_alert(netdev, "error requesting irq %d\n",
+			     pdata->dev_irq);
+		return ret;
+	}
+
+	if (!pdata->per_channel_irq)
+		return 0;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++) {
+		snprintf(channel->dma_irq_name,
+			 sizeof(channel->dma_irq_name) - 1,
+			 "%s-TxRx-%u", netdev_name(netdev),
+			 channel->queue_index);
+
+		ret = devm_request_irq(pdata->dev, channel->dma_irq,
+				       xgbe_dma_isr, 0,
+				       channel->dma_irq_name, channel);
+		if (ret) {
+			netdev_alert(netdev, "error requesting irq %d\n",
+				     channel->dma_irq);
+			goto err_irq;
+		}
+	}
+
+	return 0;
+
+err_irq:
+	/* Using an unsigned int, 'i' will go to UINT_MAX and exit */
+	for (i--, channel--; i < pdata->channel_count; i--, channel--)
+		devm_free_irq(pdata->dev, channel->dma_irq, channel);
+
+	devm_free_irq(pdata->dev, pdata->dev_irq, pdata);
+
+	return ret;
+}
+
+static void xgbe_free_irqs(struct xgbe_prv_data *pdata)
+{
+	struct xgbe_channel *channel;
+	unsigned int i;
+
+	devm_free_irq(pdata->dev, pdata->dev_irq, pdata);
+
+	if (!pdata->per_channel_irq)
+		return;
+
+	channel = pdata->channel;
+	for (i = 0; i < pdata->channel_count; i++, channel++)
+		devm_free_irq(pdata->dev, channel->dma_irq, channel);
+}
+
 void xgbe_init_tx_coalesce(struct xgbe_prv_data *pdata)
 {
 	struct xgbe_hw_if *hw_if = &pdata->hw_if;
···
 		return -EINVAL;
 	}
 
-	phy_stop(pdata->phydev);
-
 	spin_lock_irqsave(&pdata->lock, flags);
 
 	if (caller == XGMAC_DRIVER_CONTEXT)
 		netif_device_detach(netdev);
 
 	netif_tx_stop_all_queues(netdev);
-	xgbe_napi_disable(pdata, 0);
 
-	/* Powerdown Tx/Rx */
 	hw_if->powerdown_tx(pdata);
 	hw_if->powerdown_rx(pdata);
+
+	xgbe_napi_disable(pdata, 0);
+
+	phy_stop(pdata->phydev);
 
 	pdata->power_down = 1;
 
···
 
 	phy_start(pdata->phydev);
 
-	/* Enable Tx/Rx */
+	xgbe_napi_enable(pdata, 0);
+
 	hw_if->powerup_tx(pdata);
 	hw_if->powerup_rx(pdata);
 
 	if (caller == XGMAC_DRIVER_CONTEXT)
 		netif_device_attach(netdev);
 
-	xgbe_napi_enable(pdata, 0);
 	netif_tx_start_all_queues(netdev);
 
 	spin_unlock_irqrestore(&pdata->lock, flags);
···
 {
 	struct xgbe_hw_if *hw_if = &pdata->hw_if;
 	struct net_device *netdev = pdata->netdev;
+	int ret;
 
 	DBGPR("-->xgbe_start\n");
 
···
 
 	phy_start(pdata->phydev);
 
+	xgbe_napi_enable(pdata, 1);
+
+	ret = xgbe_request_irqs(pdata);
+	if (ret)
+		goto err_napi;
+
 	hw_if->enable_tx(pdata);
 	hw_if->enable_rx(pdata);
 
 	xgbe_init_tx_timers(pdata);
 
-	xgbe_napi_enable(pdata, 1);
 	netif_tx_start_all_queues(netdev);
 
 	DBGPR("<--xgbe_start\n");
 
 	return 0;
+
+err_napi:
+	xgbe_napi_disable(pdata, 1);
+
+	phy_stop(pdata->phydev);
+
+	hw_if->exit(pdata);
+
+	return ret;
 }
 
 static void xgbe_stop(struct xgbe_prv_data *pdata)
···
 
 	DBGPR("-->xgbe_stop\n");
 
-	phy_stop(pdata->phydev);
-
 	netif_tx_stop_all_queues(netdev);
-	xgbe_napi_disable(pdata, 1);
 
 	xgbe_stop_tx_timers(pdata);
 
 	hw_if->disable_tx(pdata);
 	hw_if->disable_rx(pdata);
+
+	xgbe_free_irqs(pdata);
+
+	xgbe_napi_disable(pdata, 1);
+
+	phy_stop(pdata->phydev);
+
+	hw_if->exit(pdata);
 
 	channel = pdata->channel;
 	for (i = 0; i < pdata->channel_count; i++, channel++) {
···
 
 static void xgbe_restart_dev(struct xgbe_prv_data *pdata)
 {
-	struct xgbe_channel *channel;
-	struct xgbe_hw_if *hw_if = &pdata->hw_if;
-	unsigned int i;
-
 	DBGPR("-->xgbe_restart_dev\n");
 
 	/* If not running, "restart" will happen on open */
···
 		return;
 
 	xgbe_stop(pdata);
-	synchronize_irq(pdata->dev_irq);
-	if (pdata->per_channel_irq) {
-		channel = pdata->channel;
-		for (i = 0; i < pdata->channel_count; i++, channel++)
-			synchronize_irq(channel->dma_irq);
-	}
 
 	xgbe_free_tx_data(pdata);
 	xgbe_free_rx_data(pdata);
-
-	/* Issue software reset to device */
-	hw_if->exit(pdata);
 
 	xgbe_start(pdata);
 
···
 static int xgbe_open(struct net_device *netdev)
 {
 	struct xgbe_prv_data *pdata = netdev_priv(netdev);
-	struct xgbe_hw_if *hw_if = &pdata->hw_if;
 	struct xgbe_desc_if *desc_if = &pdata->desc_if;
-	struct xgbe_channel *channel = NULL;
-	unsigned int i = 0;
 	int ret;
 
 	DBGPR("-->xgbe_open\n");
···
 	INIT_WORK(&pdata->restart_work, xgbe_restart);
 	INIT_WORK(&pdata->tx_tstamp_work, xgbe_tx_tstamp);
 
-	/* Request interrupts */
-	ret = devm_request_irq(pdata->dev, pdata->dev_irq, xgbe_isr, 0,
-			       netdev->name, pdata);
-	if (ret) {
-		netdev_alert(netdev, "error requesting irq %d\n",
-			     pdata->dev_irq);
-		goto err_rings;
-	}
-
-	if (pdata->per_channel_irq) {
-		channel = pdata->channel;
-		for (i = 0; i < pdata->channel_count; i++, channel++) {
-			snprintf(channel->dma_irq_name,
-				 sizeof(channel->dma_irq_name) - 1,
-				 "%s-TxRx-%u", netdev_name(netdev),
-				 channel->queue_index);
-
-			ret = devm_request_irq(pdata->dev, channel->dma_irq,
-					       xgbe_dma_isr, 0,
-					       channel->dma_irq_name, channel);
-			if (ret) {
-				netdev_alert(netdev,
-					     "error requesting irq %d\n",
-					     channel->dma_irq);
-				goto err_irq;
-			}
-		}
-	}
-
 	ret = xgbe_start(pdata);
 	if (ret)
-		goto err_start;
+		goto err_rings;
 
 	DBGPR("<--xgbe_open\n");
 
 	return 0;
-
-err_start:
-	hw_if->exit(pdata);
-
-err_irq:
-	if (pdata->per_channel_irq) {
-		/* Using an unsigned int, 'i' will go to UINT_MAX and exit */
-		for (i--, channel--; i < pdata->channel_count; i--, channel--)
-			devm_free_irq(pdata->dev, channel->dma_irq, channel);
-	}
-
-	devm_free_irq(pdata->dev, pdata->dev_irq, pdata);
 
 err_rings:
 	desc_if->free_ring_resources(pdata);
···
 static int xgbe_close(struct net_device *netdev)
 {
 	struct xgbe_prv_data *pdata = netdev_priv(netdev);
-	struct xgbe_hw_if *hw_if = &pdata->hw_if;
 	struct xgbe_desc_if *desc_if = &pdata->desc_if;
-	struct xgbe_channel *channel;
-	unsigned int i;
 
 	DBGPR("-->xgbe_close\n");
 
 	/* Stop the device */
 	xgbe_stop(pdata);
 
-	/* Issue software reset to device */
-	hw_if->exit(pdata);
-
 	/* Free the ring descriptors and buffers */
 	desc_if->free_ring_resources(pdata);
-
-	/* Release the interrupts */
-	devm_free_irq(pdata->dev, pdata->dev_irq, pdata);
-	if (pdata->per_channel_irq) {
-		channel = pdata->channel;
-		for (i = 0; i < pdata->channel_count; i++, channel++)
-			devm_free_irq(pdata->dev, channel->dma_irq, channel);
-	}
 
 	/* Free the channel and ring structures */
 	xgbe_free_channels(pdata);
+1-1
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c
···
 	if (!xgene_ring_mgr_init(pdata))
 		return -ENODEV;
 
-	if (!efi_enabled(EFI_BOOT)) {
+	if (pdata->clk) {
 		clk_prepare_enable(pdata->clk);
 		clk_disable_unprepare(pdata->clk);
 		clk_prepare_enable(pdata->clk);
···
 {
 	struct bcm_enet_priv *priv;
 	struct net_device *dev;
-	int tx_work_done, rx_work_done;
+	int rx_work_done;
 
 	priv = container_of(napi, struct bcm_enet_priv, napi);
 	dev = priv->net_dev;
···
 			 ENETDMAC_IR, priv->tx_chan);
 
 	/* reclaim sent skb */
-	tx_work_done = bcm_enet_tx_reclaim(dev, 0);
+	bcm_enet_tx_reclaim(dev, 0);
 
 	spin_lock(&priv->rx_lock);
 	rx_work_done = bcm_enet_receive_queue(dev, budget);
 	spin_unlock(&priv->rx_lock);
 
-	if (rx_work_done >= budget || tx_work_done > 0) {
-		/* rx/tx queue is not yet empty/clean */
+	if (rx_work_done >= budget) {
+		/* rx queue is not yet empty/clean */
 		return rx_work_done;
 	}
 
+4-3
drivers/net/ethernet/broadcom/bcmsysport.c
···
 	/* RBUF misc statistics */
 	STAT_RBUF("rbuf_ovflow_cnt", mib.rbuf_ovflow_cnt, RBUF_OVFL_DISC_CNTR),
 	STAT_RBUF("rbuf_err_cnt", mib.rbuf_err_cnt, RBUF_ERR_PKT_CNTR),
-	STAT_MIB_RX("alloc_rx_buff_failed", mib.alloc_rx_buff_failed),
-	STAT_MIB_RX("rx_dma_failed", mib.rx_dma_failed),
-	STAT_MIB_TX("tx_dma_failed", mib.tx_dma_failed),
+	STAT_MIB_SOFT("alloc_rx_buff_failed", mib.alloc_rx_buff_failed),
+	STAT_MIB_SOFT("rx_dma_failed", mib.rx_dma_failed),
+	STAT_MIB_SOFT("tx_dma_failed", mib.tx_dma_failed),
 };
 
 #define BCM_SYSPORT_STATS_LEN	ARRAY_SIZE(bcm_sysport_gstrings_stats)
···
 		s = &bcm_sysport_gstrings_stats[i];
 		switch (s->type) {
 		case BCM_SYSPORT_STAT_NETDEV:
+		case BCM_SYSPORT_STAT_SOFT:
 			continue;
 		case BCM_SYSPORT_STAT_MIB_RX:
 		case BCM_SYSPORT_STAT_MIB_TX:
+2
drivers/net/ethernet/broadcom/bcmsysport.h
···
 	BCM_SYSPORT_STAT_RUNT,
 	BCM_SYSPORT_STAT_RXCHK,
 	BCM_SYSPORT_STAT_RBUF,
+	BCM_SYSPORT_STAT_SOFT,
 };
 
 /* Macros to help define ethtool statistics */
···
 #define STAT_MIB_RX(str, m) STAT_MIB(str, m, BCM_SYSPORT_STAT_MIB_RX)
 #define STAT_MIB_TX(str, m) STAT_MIB(str, m, BCM_SYSPORT_STAT_MIB_TX)
 #define STAT_RUNT(str, m) STAT_MIB(str, m, BCM_SYSPORT_STAT_RUNT)
+#define STAT_MIB_SOFT(str, m) STAT_MIB(str, m, BCM_SYSPORT_STAT_SOFT)
 
 #define STAT_RXCHK(str, m, ofs) { \
 	.stat_string = str, \
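The bcmsysport change above tags software-maintained counters with a new type so the hardware-readback loop skips them. The pattern can be sketched in miniature as a standalone C program (the names below are illustrative, not the driver's):

```c
#include <assert.h>
#include <stddef.h>

/* Each statistic descriptor carries a type tag. */
enum stat_type {
	STAT_TYPE_NETDEV,
	STAT_TYPE_MIB_RX,
	STAT_TYPE_MIB_TX,
	STAT_TYPE_SOFT,		/* maintained in software only, no HW register */
};

struct stat_desc {
	const char *name;
	enum stat_type type;
};

static const struct stat_desc stats[] = {
	{ "rx_packets",           STAT_TYPE_MIB_RX },
	{ "tx_packets",           STAT_TYPE_MIB_TX },
	{ "alloc_rx_buff_failed", STAT_TYPE_SOFT },
	{ "rx_dma_failed",        STAT_TYPE_SOFT },
};

/* Count descriptors that would actually trigger a hardware register read;
 * software counters are skipped, mirroring the 'continue' in the driver loop. */
static int hw_backed_stats(void)
{
	int n = 0;
	size_t i;

	for (i = 0; i < sizeof(stats) / sizeof(stats[0]); i++) {
		switch (stats[i].type) {
		case STAT_TYPE_NETDEV:
		case STAT_TYPE_SOFT:
			continue;	/* no register behind these counters */
		default:
			n++;
		}
	}
	return n;
}
```

Note that `continue` inside the `switch` applies to the enclosing `for` loop, which is exactly how the driver skips the soft entries during MIB readback.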
-7
drivers/net/ethernet/broadcom/bgmac.c
···
 	slot->skb = skb;
 	slot->dma_addr = dma_addr;
 
-	if (slot->dma_addr & 0xC0000000)
-		bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n");
-
 	return 0;
 }
 
···
 			  ring->mmio_base);
 		goto err_dma_free;
 	}
-	if (ring->dma_base & 0xC0000000)
-		bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n");
 
 	ring->unaligned = bgmac_dma_unaligned(bgmac, ring,
 					      BGMAC_DMA_RING_TX);
···
 		err = -ENOMEM;
 		goto err_dma_free;
 	}
-	if (ring->dma_base & 0xC0000000)
-		bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n");
 
 	ring->unaligned = bgmac_dma_unaligned(bgmac, ring,
 					      BGMAC_DMA_RING_RX);
+1-3
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
···
 	int			stats_state;
 
 	/* used for synchronization of concurrent threads statistics handling */
-	spinlock_t		stats_lock;
+	struct mutex		stats_lock;
 
 	/* used by dmae command loader */
 	struct dmae_command	stats_dmae;
···
 
 	int fp_array_size;
 	u32 dump_preset_idx;
-	bool stats_started;
-	struct semaphore stats_sema;
 
 	u8 phys_port_id[ETH_ALEN];
 
+58-46
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
···
 	u32 xmac_val;
 	u32 emac_addr;
 	u32 emac_val;
-	u32 umac_addr;
-	u32 umac_val;
+	u32 umac_addr[2];
+	u32 umac_val[2];
 	u32 bmac_addr;
 	u32 bmac_val[2];
 };
···
 	return 0;
 }
 
+/* previous driver DMAE transaction may have occurred when pre-boot stage ended
+ * and boot began, or when kdump kernel was loaded. Either case would invalidate
+ * the addresses of the transaction, resulting in was-error bit set in the pci
+ * causing all hw-to-host pcie transactions to timeout. If this happened we want
+ * to clear the interrupt which detected this from the pglueb and the was done
+ * bit
+ */
+static void bnx2x_clean_pglue_errors(struct bnx2x *bp)
+{
+	if (!CHIP_IS_E1x(bp))
+		REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR,
+		       1 << BP_ABS_FUNC(bp));
+}
+
 static int bnx2x_init_hw_func(struct bnx2x *bp)
 {
 	int port = BP_PORT(bp);
···
 
 	bnx2x_init_block(bp, BLOCK_PGLUE_B, init_phase);
 
-	if (!CHIP_IS_E1x(bp))
-		REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR, func);
+	bnx2x_clean_pglue_errors(bp);
 
 	bnx2x_init_block(bp, BLOCK_ATC, init_phase);
 	bnx2x_init_block(bp, BLOCK_DMAE, init_phase);
···
 	return base + (BP_ABS_FUNC(bp)) * stride;
 }
 
+static bool bnx2x_prev_unload_close_umac(struct bnx2x *bp,
+					 u8 port, u32 reset_reg,
+					 struct bnx2x_mac_vals *vals)
+{
+	u32 mask = MISC_REGISTERS_RESET_REG_2_UMAC0 << port;
+	u32 base_addr;
+
+	if (!(mask & reset_reg))
+		return false;
+
+	BNX2X_DEV_INFO("Disable umac Rx %02x\n", port);
+	base_addr = port ? GRCBASE_UMAC1 : GRCBASE_UMAC0;
+	vals->umac_addr[port] = base_addr + UMAC_REG_COMMAND_CONFIG;
+	vals->umac_val[port] = REG_RD(bp, vals->umac_addr[port]);
+	REG_WR(bp, vals->umac_addr[port], 0);
+
+	return true;
+}
+
 static void bnx2x_prev_unload_close_mac(struct bnx2x *bp,
 					struct bnx2x_mac_vals *vals)
 {
···
 	u8 port = BP_PORT(bp);
 
 	/* reset addresses as they also mark which values were changed */
-	vals->bmac_addr = 0;
-	vals->umac_addr = 0;
-	vals->xmac_addr = 0;
-	vals->emac_addr = 0;
+	memset(vals, 0, sizeof(*vals));
 
 	reset_reg = REG_RD(bp, MISC_REG_RESET_REG_2);
 
···
 		REG_WR(bp, vals->xmac_addr, 0);
 		mac_stopped = true;
 	}
-	mask = MISC_REGISTERS_RESET_REG_2_UMAC0 << port;
-	if (mask & reset_reg) {
-		BNX2X_DEV_INFO("Disable umac Rx\n");
-		base_addr = BP_PORT(bp) ? GRCBASE_UMAC1 : GRCBASE_UMAC0;
-		vals->umac_addr = base_addr + UMAC_REG_COMMAND_CONFIG;
-		vals->umac_val = REG_RD(bp, vals->umac_addr);
-		REG_WR(bp, vals->umac_addr, 0);
-		mac_stopped = true;
-	}
+
+	mac_stopped |= bnx2x_prev_unload_close_umac(bp, 0,
+						    reset_reg, vals);
+	mac_stopped |= bnx2x_prev_unload_close_umac(bp, 1,
+						    reset_reg, vals);
 	}
 
 	if (mac_stopped)
···
 	/* Close the MAC Rx to prevent BRB from filling up */
 	bnx2x_prev_unload_close_mac(bp, &mac_vals);
 
-	/* close LLH filters towards the BRB */
+	/* close LLH filters for both ports towards the BRB */
 	bnx2x_set_rx_filter(&bp->link_params, 0);
+	bp->link_params.port ^= 1;
+	bnx2x_set_rx_filter(&bp->link_params, 0);
+	bp->link_params.port ^= 1;
 
 	/* Check if the UNDI driver was previously loaded */
 	if (bnx2x_prev_is_after_undi(bp)) {
···
 
 	if (mac_vals.xmac_addr)
 		REG_WR(bp, mac_vals.xmac_addr, mac_vals.xmac_val);
-	if (mac_vals.umac_addr)
-		REG_WR(bp, mac_vals.umac_addr, mac_vals.umac_val);
+	if (mac_vals.umac_addr[0])
+		REG_WR(bp, mac_vals.umac_addr[0], mac_vals.umac_val[0]);
+	if (mac_vals.umac_addr[1])
+		REG_WR(bp, mac_vals.umac_addr[1], mac_vals.umac_val[1]);
 	if (mac_vals.emac_addr)
 		REG_WR(bp, mac_vals.emac_addr, mac_vals.emac_val);
 	if (mac_vals.bmac_addr) {
···
 	return bnx2x_prev_mcp_done(bp);
 }
 
-/* previous driver DMAE transaction may have occurred when pre-boot stage ended
- * and boot began, or when kdump kernel was loaded. Either case would invalidate
- * the addresses of the transaction, resulting in was-error bit set in the pci
- * causing all hw-to-host pcie transactions to timeout. If this happened we want
- * to clear the interrupt which detected this from the pglueb and the was done
- * bit
- */
-static void bnx2x_prev_interrupted_dmae(struct bnx2x *bp)
-{
-	if (!CHIP_IS_E1x(bp)) {
-		u32 val = REG_RD(bp, PGLUE_B_REG_PGLUE_B_INT_STS);
-		if (val & PGLUE_B_PGLUE_B_INT_STS_REG_WAS_ERROR_ATTN) {
-			DP(BNX2X_MSG_SP,
-			   "'was error' bit was found to be set in pglueb upon startup. Clearing\n");
-			REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR,
-			       1 << BP_FUNC(bp));
-		}
-	}
-}
-
 static int bnx2x_prev_unload(struct bnx2x *bp)
 {
 	int time_counter = 10;
···
 	/* clear hw from errors which may have resulted from an interrupted
 	 * dmae transaction.
 	 */
-	bnx2x_prev_interrupted_dmae(bp);
+	bnx2x_clean_pglue_errors(bp);
 
 	/* Release previously held locks */
 	hw_lock_reg = (BP_FUNC(bp) <= 5) ?
···
 	mutex_init(&bp->port.phy_mutex);
 	mutex_init(&bp->fw_mb_mutex);
 	mutex_init(&bp->drv_info_mutex);
+	mutex_init(&bp->stats_lock);
 	bp->drv_info_mng_owner = false;
-	spin_lock_init(&bp->stats_lock);
-	sema_init(&bp->stats_sema, 1);
 
 	INIT_DELAYED_WORK(&bp->sp_task, bnx2x_sp_task);
 	INIT_DELAYED_WORK(&bp->sp_rtnl_task, bnx2x_sp_rtnl_task);
···
 	pci_write_config_dword(bp->pdev, PCICFG_GRC_ADDRESS,
 			       PCICFG_VENDOR_ID_OFFSET);
 
+	/* Set PCIe reset type to fundamental for EEH recovery */
+	pdev->needs_freset = 1;
+
 	/* AER (Advanced Error reporting) configuration */
 	rc = pci_enable_pcie_error_reporting(pdev);
 	if (!rc)
···
 		NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 |
 		NETIF_F_RXCSUM | NETIF_F_LRO | NETIF_F_GRO |
 		NETIF_F_RXHASH | NETIF_F_HW_VLAN_CTAG_TX;
-	if (!CHIP_IS_E1x(bp)) {
+	if (!chip_is_e1x) {
 		dev->hw_features |= NETIF_F_GSO_GRE | NETIF_F_GSO_UDP_TUNNEL |
 			NETIF_F_GSO_IPIP | NETIF_F_GSO_SIT;
 		dev->hw_enc_features =
···
 	cancel_delayed_work_sync(&bp->sp_task);
 	cancel_delayed_work_sync(&bp->period_task);
 
-	spin_lock_bh(&bp->stats_lock);
+	mutex_lock(&bp->stats_lock);
 	bp->stats_state = STATS_STATE_DISABLED;
-	spin_unlock_bh(&bp->stats_lock);
+	mutex_unlock(&bp->stats_lock);
 
 	bnx2x_save_statistics(bp);
 
+3-1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
···
 
 		cookie.vf = vf;
 		cookie.state = VF_ACQUIRED;
-		bnx2x_stats_safe_exec(bp, bnx2x_set_vf_state, &cookie);
+		rc = bnx2x_stats_safe_exec(bp, bnx2x_set_vf_state, &cookie);
+		if (rc)
+			goto op_err;
 	}
 
 	DP(BNX2X_MSG_IOV, "set state to acquired\n");
+74-90
drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
···
  */
 static void bnx2x_storm_stats_post(struct bnx2x *bp)
 {
-	if (!bp->stats_pending) {
-		int rc;
+	int rc;
 
-		spin_lock_bh(&bp->stats_lock);
+	if (bp->stats_pending)
+		return;
 
-		if (bp->stats_pending) {
-			spin_unlock_bh(&bp->stats_lock);
-			return;
-		}
+	bp->fw_stats_req->hdr.drv_stats_counter =
+		cpu_to_le16(bp->stats_counter++);
 
-		bp->fw_stats_req->hdr.drv_stats_counter =
-			cpu_to_le16(bp->stats_counter++);
+	DP(BNX2X_MSG_STATS, "Sending statistics ramrod %d\n",
+	   le16_to_cpu(bp->fw_stats_req->hdr.drv_stats_counter));
 
-		DP(BNX2X_MSG_STATS, "Sending statistics ramrod %d\n",
-		   le16_to_cpu(bp->fw_stats_req->hdr.drv_stats_counter));
+	/* adjust the ramrod to include VF queues statistics */
+	bnx2x_iov_adjust_stats_req(bp);
+	bnx2x_dp_stats(bp);
 
-		/* adjust the ramrod to include VF queues statistics */
-		bnx2x_iov_adjust_stats_req(bp);
-		bnx2x_dp_stats(bp);
-
-		/* send FW stats ramrod */
-		rc = bnx2x_sp_post(bp, RAMROD_CMD_ID_COMMON_STAT_QUERY, 0,
-				   U64_HI(bp->fw_stats_req_mapping),
-				   U64_LO(bp->fw_stats_req_mapping),
-				   NONE_CONNECTION_TYPE);
-		if (rc == 0)
-			bp->stats_pending = 1;
-
-		spin_unlock_bh(&bp->stats_lock);
-	}
+	/* send FW stats ramrod */
+	rc = bnx2x_sp_post(bp, RAMROD_CMD_ID_COMMON_STAT_QUERY, 0,
+			   U64_HI(bp->fw_stats_req_mapping),
+			   U64_LO(bp->fw_stats_req_mapping),
+			   NONE_CONNECTION_TYPE);
+	if (rc == 0)
+		bp->stats_pending = 1;
 }
 
 static void bnx2x_hw_stats_post(struct bnx2x *bp)
···
  */
 
 /* should be called under stats_sema */
-static void __bnx2x_stats_pmf_update(struct bnx2x *bp)
+static void bnx2x_stats_pmf_update(struct bnx2x *bp)
 {
 	struct dmae_command *dmae;
 	u32 opcode;
···
 }
 
 /* should be called under stats_sema */
-static void __bnx2x_stats_start(struct bnx2x *bp)
+static void bnx2x_stats_start(struct bnx2x *bp)
 {
 	if (IS_PF(bp)) {
 		if (bp->port.pmf)
···
 		bnx2x_hw_stats_post(bp);
 		bnx2x_storm_stats_post(bp);
 	}
-
-	bp->stats_started = true;
-}
-
-static void bnx2x_stats_start(struct bnx2x *bp)
-{
-	if (down_timeout(&bp->stats_sema, HZ/10))
-		BNX2X_ERR("Unable to acquire stats lock\n");
-	__bnx2x_stats_start(bp);
-	up(&bp->stats_sema);
 }
 
 static void bnx2x_stats_pmf_start(struct bnx2x *bp)
 {
-	if (down_timeout(&bp->stats_sema, HZ/10))
-		BNX2X_ERR("Unable to acquire stats lock\n");
 	bnx2x_stats_comp(bp);
-	__bnx2x_stats_pmf_update(bp);
-	__bnx2x_stats_start(bp);
-	up(&bp->stats_sema);
-}
-
-static void bnx2x_stats_pmf_update(struct bnx2x *bp)
-{
-	if (down_timeout(&bp->stats_sema, HZ/10))
-		BNX2X_ERR("Unable to acquire stats lock\n");
-	__bnx2x_stats_pmf_update(bp);
-	up(&bp->stats_sema);
+	bnx2x_stats_pmf_update(bp);
+	bnx2x_stats_start(bp);
 }
 
 static void bnx2x_stats_restart(struct bnx2x *bp)
···
 	 */
 	if (IS_VF(bp))
 		return;
-	if (down_timeout(&bp->stats_sema, HZ/10))
-		BNX2X_ERR("Unable to acquire stats lock\n");
+
 	bnx2x_stats_comp(bp);
-	__bnx2x_stats_start(bp);
-	up(&bp->stats_sema);
+	bnx2x_stats_start(bp);
 }
 
 static void bnx2x_bmac_stats_update(struct bnx2x *bp)
···
 {
 	u32 *stats_comp = bnx2x_sp(bp, stats_comp);
 
-	/* we run update from timer context, so give up
-	 * if somebody is in the middle of transition
-	 */
-	if (down_trylock(&bp->stats_sema))
+	if (bnx2x_edebug_stats_stopped(bp))
 		return;
-
-	if (bnx2x_edebug_stats_stopped(bp) || !bp->stats_started)
-		goto out;
 
 	if (IS_PF(bp)) {
 		if (*stats_comp != DMAE_COMP_VAL)
-			goto out;
+			return;
 
 		if (bp->port.pmf)
 			bnx2x_hw_stats_update(bp);
···
 				BNX2X_ERR("storm stats were not updated for 3 times\n");
 				bnx2x_panic();
 			}
-			goto out;
+			return;
 		}
 	} else {
 		/* vf doesn't collect HW statistics, and doesn't get completions
···
 
 	/* vf is done */
 	if (IS_VF(bp))
-		goto out;
+		return;
 
 	if (netif_msg_timer(bp)) {
 		struct bnx2x_eth_stats *estats = &bp->eth_stats;
···
 
 	bnx2x_hw_stats_post(bp);
 	bnx2x_storm_stats_post(bp);
-
-out:
-	up(&bp->stats_sema);
 }
 
 static void bnx2x_port_stats_stop(struct bnx2x *bp)
···
 
 static void bnx2x_stats_stop(struct bnx2x *bp)
 {
-	int update = 0;
-
-	if (down_timeout(&bp->stats_sema, HZ/10))
-		BNX2X_ERR("Unable to acquire stats lock\n");
-
-	bp->stats_started = false;
+	bool update = false;
 
 	bnx2x_stats_comp(bp);
 
···
 		bnx2x_hw_stats_post(bp);
 		bnx2x_stats_comp(bp);
 	}
-
-	up(&bp->stats_sema);
 }
 
 static void bnx2x_stats_do_nothing(struct bnx2x *bp)
···
 
 void bnx2x_stats_handle(struct bnx2x *bp, enum bnx2x_stats_event event)
 {
-	enum bnx2x_stats_state state;
-	void (*action)(struct bnx2x *bp);
+	enum bnx2x_stats_state state = bp->stats_state;
+
 	if (unlikely(bp->panic))
 		return;
 
-	spin_lock_bh(&bp->stats_lock);
-	state = bp->stats_state;
-	bp->stats_state = bnx2x_stats_stm[state][event].next_state;
-	action = bnx2x_stats_stm[state][event].action;
-	spin_unlock_bh(&bp->stats_lock);
+	/* Statistics update run from timer context, and we don't want to stop
+	 * that context in case someone is in the middle of a
transition.14201420+ * For other events, wait a bit until lock is taken.14211421+ */14221422+ if (!mutex_trylock(&bp->stats_lock)) {14231423+ if (event == STATS_EVENT_UPDATE)14241424+ return;1376142513771377- action(bp);14261426+ DP(BNX2X_MSG_STATS,14271427+ "Unlikely stats' lock contention [event %d]\n", event);14281428+ mutex_lock(&bp->stats_lock);14291429+ }14301430+14311431+ bnx2x_stats_stm[state][event].action(bp);14321432+ bp->stats_state = bnx2x_stats_stm[state][event].next_state;14331433+14341434+ mutex_unlock(&bp->stats_lock);1378143513791436 if ((event != STATS_EVENT_UPDATE) || netif_msg_timer(bp))13801437 DP(BNX2X_MSG_STATS, "state %d -> event %d -> state %d\n",···19611998 }19621999}1963200019641964-void bnx2x_stats_safe_exec(struct bnx2x *bp,19651965- void (func_to_exec)(void *cookie),19661966- void *cookie){19671967- if (down_timeout(&bp->stats_sema, HZ/10))19681968- BNX2X_ERR("Unable to acquire stats lock\n");20012001+int bnx2x_stats_safe_exec(struct bnx2x *bp,20022002+ void (func_to_exec)(void *cookie),20032003+ void *cookie)20042004+{20052005+ int cnt = 10, rc = 0;20062006+20072007+ /* Wait for statistics to end [while blocking further requests],20082008+ * then run supplied function 'safely'.20092009+ */20102010+ mutex_lock(&bp->stats_lock);20112011+19692012 bnx2x_stats_comp(bp);20132013+ while (bp->stats_pending && cnt--)20142014+ if (bnx2x_storm_stats_update(bp))20152015+ usleep_range(1000, 2000);20162016+ if (bp->stats_pending) {20172017+ BNX2X_ERR("Failed to wait for stats pending to clear [possibly FW is stuck]\n");20182018+ rc = -EBUSY;20192019+ goto out;20202020+ }20212021+19702022 func_to_exec(cookie);19711971- __bnx2x_stats_start(bp);19721972- up(&bp->stats_sema);20232023+20242024+out:20252025+ /* No need to restart statistics - if they're enabled, the timer20262026+ * will restart the statistics.20272027+ */20282028+ mutex_unlock(&bp->stats_lock);20292029+20302030+ return rc;19732031}
···920920{921921 int i;922922923923- for (i = 0; i < ARRAY_SIZE(adap->sge.ingr_map); i++) {923923+ for (i = 0; i < adap->sge.ingr_sz; i++) {924924 struct sge_rspq *q = adap->sge.ingr_map[i];925925926926 if (q && q->handler) {···934934 }935935}936936937937+/* Disable interrupt and napi handler */938938+static void disable_interrupts(struct adapter *adap)939939+{940940+ if (adap->flags & FULL_INIT_DONE) {941941+ t4_intr_disable(adap);942942+ if (adap->flags & USING_MSIX) {943943+ free_msix_queue_irqs(adap);944944+ free_irq(adap->msix_info[0].vec, adap);945945+ } else {946946+ free_irq(adap->pdev->irq, adap);947947+ }948948+ quiesce_rx(adap);949949+ }950950+}951951+937952/*938953 * Enable NAPI scheduling and interrupt generation for all Rx queues.939954 */···956941{957942 int i;958943959959- for (i = 0; i < ARRAY_SIZE(adap->sge.ingr_map); i++) {944944+ for (i = 0; i < adap->sge.ingr_sz; i++) {960945 struct sge_rspq *q = adap->sge.ingr_map[i];961946962947 if (!q)···985970 int err, msi_idx, i, j;986971 struct sge *s = &adap->sge;987972988988- bitmap_zero(s->starving_fl, MAX_EGRQ);989989- bitmap_zero(s->txq_maperr, MAX_EGRQ);973973+ bitmap_zero(s->starving_fl, s->egr_sz);974974+ bitmap_zero(s->txq_maperr, s->egr_sz);990975991976 if (adap->flags & USING_MSIX)992977 msi_idx = 1; /* vector 0 is for non-queue interrupts */···998983 msi_idx = -((int)s->intrq.abs_id + 1);999984 }1000985986986+ /* NOTE: If you add/delete any Ingress/Egress Queue allocations in here,987987+ * don't forget to update the following which need to be988988+ * synchronized to and changes here.989989+ *990990+ * 1. The calculations of MAX_INGQ in cxgb4.h.991991+ *992992+ * 2. Update enable_msix/name_msix_vecs/request_msix_queue_irqs993993+ * to accommodate any new/deleted Ingress Queues994994+ * which need MSI-X Vectors.995995+ *996996+ * 3. 
Update sge_qinfo_show() to include information on the997997+ * new/deleted queues.998998+ */1001999 err = t4_sge_alloc_rxq(adap, &s->fw_evtq, true, adap->port[0],10021000 msi_idx, NULL, fwevtq_handler);10031001 if (err) {···4272424442734245static void cxgb_down(struct adapter *adapter)42744246{42754275- t4_intr_disable(adapter);42764247 cancel_work_sync(&adapter->tid_release_task);42774248 cancel_work_sync(&adapter->db_full_task);42784249 cancel_work_sync(&adapter->db_drop_task);42794250 adapter->tid_release_task_busy = false;42804251 adapter->tid_release_head = NULL;4281425242824282- if (adapter->flags & USING_MSIX) {42834283- free_msix_queue_irqs(adapter);42844284- free_irq(adapter->msix_info[0].vec, adapter);42854285- } else42864286- free_irq(adapter->pdev->irq, adapter);42874287- quiesce_rx(adapter);42884253 t4_sge_stop(adapter);42894254 t4_free_sge_resources(adapter);42904255 adapter->flags &= ~FULL_INIT_DONE;···47544733 if (ret < 0)47554734 return ret;4756473547574757- ret = t4_cfg_pfvf(adap, adap->fn, adap->fn, 0, MAX_EGRQ, 64, MAX_INGQ,47584758- 0, 0, 4, 0xf, 0xf, 16, FW_CMD_CAP_PF, FW_CMD_CAP_PF);47364736+ ret = t4_cfg_pfvf(adap, adap->fn, adap->fn, 0, adap->sge.egr_sz, 64,47374737+ MAX_INGQ, 0, 0, 4, 0xf, 0xf, 16, FW_CMD_CAP_PF,47384738+ FW_CMD_CAP_PF);47594739 if (ret < 0)47604740 return ret;47614741···51105088 enum dev_state state;51115089 u32 params[7], val[7];51125090 struct fw_caps_config_cmd caps_cmd;51135113- struct fw_devlog_cmd devlog_cmd;51145114- u32 devlog_meminfo;51155091 int reset = 1;50925092+50935093+ /* Grab Firmware Device Log parameters as early as possible so we have50945094+ * access to it for debugging, etc.50955095+ */50965096+ ret = t4_init_devlog_params(adap);50975097+ if (ret < 0)50985098+ return ret;5116509951175100 /* Contact FW, advertising Master capability */51185101 ret = t4_fw_hello(adap, adap->mbox, adap->mbox, MASTER_MAY, &state);···51955168 ret = get_vpd_params(adap, &adap->params.vpd);51965169 if (ret < 0)51975170 goto 
bye;51985198-51995199- /* Read firmware device log parameters. We really need to find a way52005200- * to get these parameters initialized with some default values (which52015201- * are likely to be correct) for the case where we either don't52025202- * attache to the firmware or it's crashed when we probe the adapter.52035203- * That way we'll still be able to perform early firmware startup52045204- * debugging ... If the request to get the Firmware's Device Log52055205- * parameters fails, we'll live so we don't make that a fatal error.52065206- */52075207- memset(&devlog_cmd, 0, sizeof(devlog_cmd));52085208- devlog_cmd.op_to_write = htonl(FW_CMD_OP_V(FW_DEVLOG_CMD) |52095209- FW_CMD_REQUEST_F | FW_CMD_READ_F);52105210- devlog_cmd.retval_len16 = htonl(FW_LEN16(devlog_cmd));52115211- ret = t4_wr_mbox(adap, adap->mbox, &devlog_cmd, sizeof(devlog_cmd),52125212- &devlog_cmd);52135213- if (ret == 0) {52145214- devlog_meminfo =52155215- ntohl(devlog_cmd.memtype_devlog_memaddr16_devlog);52165216- adap->params.devlog.memtype =52175217- FW_DEVLOG_CMD_MEMTYPE_DEVLOG_G(devlog_meminfo);52185218- adap->params.devlog.start =52195219- FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(devlog_meminfo) << 4;52205220- adap->params.devlog.size = ntohl(devlog_cmd.memsize_devlog);52215221- }5222517152235172 /*52245173 * Find out what ports are available to us. Note that we need to do···52955292 adap->tids.ftid_base = val[3];52965293 adap->tids.nftids = val[4] - val[3] + 1;52975294 adap->sge.ingr_start = val[5];52955295+52965296+ /* qids (ingress/egress) returned from firmware can be anywhere52975297+ * in the range from EQ(IQFLINT)_START to EQ(IQFLINT)_END.52985298+ * Hence driver needs to allocate memory for this range to52995299+ * store the queue info. 
Get the highest IQFLINT/EQ index returned53005300+ * in FW_EQ_*_CMD.alloc command.53015301+ */53025302+ params[0] = FW_PARAM_PFVF(EQ_END);53035303+ params[1] = FW_PARAM_PFVF(IQFLINT_END);53045304+ ret = t4_query_params(adap, adap->mbox, adap->fn, 0, 2, params, val);53055305+ if (ret < 0)53065306+ goto bye;53075307+ adap->sge.egr_sz = val[0] - adap->sge.egr_start + 1;53085308+ adap->sge.ingr_sz = val[1] - adap->sge.ingr_start + 1;53095309+53105310+ adap->sge.egr_map = kcalloc(adap->sge.egr_sz,53115311+ sizeof(*adap->sge.egr_map), GFP_KERNEL);53125312+ if (!adap->sge.egr_map) {53135313+ ret = -ENOMEM;53145314+ goto bye;53155315+ }53165316+53175317+ adap->sge.ingr_map = kcalloc(adap->sge.ingr_sz,53185318+ sizeof(*adap->sge.ingr_map), GFP_KERNEL);53195319+ if (!adap->sge.ingr_map) {53205320+ ret = -ENOMEM;53215321+ goto bye;53225322+ }53235323+53245324+ /* Allocate the memory for the vaious egress queue bitmaps53255325+ * ie starving_fl and txq_maperr.53265326+ */53275327+ adap->sge.starving_fl = kcalloc(BITS_TO_LONGS(adap->sge.egr_sz),53285328+ sizeof(long), GFP_KERNEL);53295329+ if (!adap->sge.starving_fl) {53305330+ ret = -ENOMEM;53315331+ goto bye;53325332+ }53335333+53345334+ adap->sge.txq_maperr = kcalloc(BITS_TO_LONGS(adap->sge.egr_sz),53355335+ sizeof(long), GFP_KERNEL);53365336+ if (!adap->sge.txq_maperr) {53375337+ ret = -ENOMEM;53385338+ goto bye;53395339+ }5298534052995341 params[0] = FW_PARAM_PFVF(CLIP_START);53005342 params[1] = FW_PARAM_PFVF(CLIP_END);···55495501 * happened to HW/FW, stop issuing commands.55505502 */55515503bye:55045504+ kfree(adap->sge.egr_map);55055505+ kfree(adap->sge.ingr_map);55065506+ kfree(adap->sge.starving_fl);55075507+ kfree(adap->sge.txq_maperr);55525508 if (ret != -ETIMEDOUT && ret != -EIO)55535509 t4_fw_bye(adap, adap->mbox);55545510 return ret;···55805528 netif_carrier_off(dev);55815529 }55825530 spin_unlock(&adap->stats_lock);55315531+ disable_interrupts(adap);55835532 if (adap->flags & FULL_INIT_DONE)55845533 
cxgb_down(adap);55855534 rtnl_unlock();···5965591259665913 t4_free_mem(adapter->l2t);59675914 t4_free_mem(adapter->tids.tid_tab);59155915+ kfree(adapter->sge.egr_map);59165916+ kfree(adapter->sge.ingr_map);59175917+ kfree(adapter->sge.starving_fl);59185918+ kfree(adapter->sge.txq_maperr);59685919 disable_msi(adapter);5969592059705921 for_each_port(adapter, i)···6293623662946237 if (is_offload(adapter))62956238 detach_ulds(adapter);62396239+62406240+ disable_interrupts(adapter);6296624162976242 for_each_port(adapter, i)62986243 if (adapter->port[i]->reg_state == NETREG_REGISTERED)
+4-3
drivers/net/ethernet/chelsio/cxgb4/sge.c
···
 	struct adapter *adap = (struct adapter *)data;
 	struct sge *s = &adap->sge;
 
-	for (i = 0; i < ARRAY_SIZE(s->starving_fl); i++)
+	for (i = 0; i < BITS_TO_LONGS(s->egr_sz); i++)
 		for (m = s->starving_fl[i]; m; m &= m - 1) {
 			struct sge_eth_rxq *rxq;
 			unsigned int id = __ffs(m) + i * BITS_PER_LONG;
···
 	struct adapter *adap = (struct adapter *)data;
 	struct sge *s = &adap->sge;
 
-	for (i = 0; i < ARRAY_SIZE(s->txq_maperr); i++)
+	for (i = 0; i < BITS_TO_LONGS(s->egr_sz); i++)
 		for (m = s->txq_maperr[i]; m; m &= m - 1) {
 			unsigned long id = __ffs(m) + i * BITS_PER_LONG;
 			struct sge_ofld_txq *txq = s->egr_map[id];
···
 	free_rspq_fl(adap, &adap->sge.intrq, NULL);
 
 	/* clear the reverse egress queue map */
-	memset(adap->sge.egr_map, 0, sizeof(adap->sge.egr_map));
+	memset(adap->sge.egr_map, 0,
+	       adap->sge.egr_sz * sizeof(*adap->sge.egr_map));
 }
 
 void t4_sge_start(struct adapter *adap)
+98-11
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
···449449 * @mtype: memory type: MEM_EDC0, MEM_EDC1 or MEM_MC450450 * @addr: address within indicated memory type451451 * @len: amount of memory to transfer452452- * @buf: host memory buffer452452+ * @hbuf: host memory buffer453453 * @dir: direction of transfer T4_MEMORY_READ (1) or T4_MEMORY_WRITE (0)454454 *455455 * Reads/writes an [almost] arbitrary memory region in the firmware: the···460460 * caller's responsibility to perform appropriate byte order conversions.461461 */462462int t4_memory_rw(struct adapter *adap, int win, int mtype, u32 addr,463463- u32 len, __be32 *buf, int dir)463463+ u32 len, void *hbuf, int dir)464464{465465 u32 pos, offset, resid, memoffset;466466 u32 edc_size, mc_size, win_pf, mem_reg, mem_aperture, mem_base;467467+ u32 *buf;467468468469 /* Argument sanity checks ...469470 */470470- if (addr & 0x3)471471+ if (addr & 0x3 || (uintptr_t)hbuf & 0x3)471472 return -EINVAL;473473+ buf = (u32 *)hbuf;472474473475 /* It's convenient to be able to handle lengths which aren't a474476 * multiple of 32-bits because we often end up transferring files to···534532535533 /* Transfer data to/from the adapter as long as there's an integral536534 * number of 32-bit transfers to complete.535535+ *536536+ * A note on Endianness issues:537537+ *538538+ * The "register" reads and writes below from/to the PCI-E Memory539539+ * Window invoke the standard adapter Big-Endian to PCI-E Link540540+ * Little-Endian "swizzel." As a result, if we have the following541541+ * data in adapter memory:542542+ *543543+ * Memory: ... | b0 | b1 | b2 | b3 | ...544544+ * Address: i+0 i+1 i+2 i+3545545+ *546546+ * Then a read of the adapter memory via the PCI-E Memory Window547547+ * will yield:548548+ *549549+ * x = readl(i)550550+ * 31 0551551+ * [ b3 | b2 | b1 | b0 ]552552+ *553553+ * If this value is stored into local memory on a Little-Endian system554554+ * it will show up correctly in local memory as:555555+ *556556+ * ( ..., b0, b1, b2, b3, ... 
)557557+ *558558+ * But on a Big-Endian system, the store will show up in memory559559+ * incorrectly swizzled as:560560+ *561561+ * ( ..., b3, b2, b1, b0, ... )562562+ *563563+ * So we need to account for this in the reads and writes to the564564+ * PCI-E Memory Window below by undoing the register read/write565565+ * swizzels.537566 */538567 while (len > 0) {539568 if (dir == T4_MEMORY_READ)540540- *buf++ = (__force __be32) t4_read_reg(adap,541541- mem_base + offset);569569+ *buf++ = le32_to_cpu((__force __le32)t4_read_reg(adap,570570+ mem_base + offset));542571 else543572 t4_write_reg(adap, mem_base + offset,544544- (__force u32) *buf++);573573+ (__force u32)cpu_to_le32(*buf++));545574 offset += sizeof(__be32);546575 len -= sizeof(__be32);547576···601568 */602569 if (resid) {603570 union {604604- __be32 word;571571+ u32 word;605572 char byte[4];606573 } last;607574 unsigned char *bp;608575 int i;609576610577 if (dir == T4_MEMORY_READ) {611611- last.word = (__force __be32) t4_read_reg(adap,612612- mem_base + offset);578578+ last.word = le32_to_cpu(579579+ (__force __le32)t4_read_reg(adap,580580+ mem_base + offset));613581 for (bp = (unsigned char *)buf, i = resid; i < 4; i++)614582 bp[i] = last.byte[i];615583 } else {···618584 for (i = resid; i < 4; i++)619585 last.byte[i] = 0;620586 t4_write_reg(adap, mem_base + offset,621621- (__force u32) last.word);587587+ (__force u32)cpu_to_le32(last.word));622588 }623589 }624590···11201086 }1121108711221088 /* Installed successfully, update the cached header too. 
*/11231123- memcpy(card_fw, fs_fw, sizeof(*card_fw));10891089+ *card_fw = *fs_fw;11241090 card_fw_usable = 1;11251091 *reset = 0; /* already reset as part of load_fw */11261092 }···4455442144564422 *pbar2_qoffset = bar2_qoffset;44574423 *pbar2_qid = bar2_qid;44244424+ return 0;44254425+}44264426+44274427+/**44284428+ * t4_init_devlog_params - initialize adapter->params.devlog44294429+ * @adap: the adapter44304430+ *44314431+ * Initialize various fields of the adapter's Firmware Device Log44324432+ * Parameters structure.44334433+ */44344434+int t4_init_devlog_params(struct adapter *adap)44354435+{44364436+ struct devlog_params *dparams = &adap->params.devlog;44374437+ u32 pf_dparams;44384438+ unsigned int devlog_meminfo;44394439+ struct fw_devlog_cmd devlog_cmd;44404440+ int ret;44414441+44424442+ /* If we're dealing with newer firmware, the Device Log Paramerters44434443+ * are stored in a designated register which allows us to access the44444444+ * Device Log even if we can't talk to the firmware.44454445+ */44464446+ pf_dparams =44474447+ t4_read_reg(adap, PCIE_FW_REG(PCIE_FW_PF_A, PCIE_FW_PF_DEVLOG));44484448+ if (pf_dparams) {44494449+ unsigned int nentries, nentries128;44504450+44514451+ dparams->memtype = PCIE_FW_PF_DEVLOG_MEMTYPE_G(pf_dparams);44524452+ dparams->start = PCIE_FW_PF_DEVLOG_ADDR16_G(pf_dparams) << 4;44534453+44544454+ nentries128 = PCIE_FW_PF_DEVLOG_NENTRIES128_G(pf_dparams);44554455+ nentries = (nentries128 + 1) * 128;44564456+ dparams->size = nentries * sizeof(struct fw_devlog_e);44574457+44584458+ return 0;44594459+ }44604460+44614461+ /* Otherwise, ask the firmware for it's Device Log Parameters.44624462+ */44634463+ memset(&devlog_cmd, 0, sizeof(devlog_cmd));44644464+ devlog_cmd.op_to_write = htonl(FW_CMD_OP_V(FW_DEVLOG_CMD) |44654465+ FW_CMD_REQUEST_F | FW_CMD_READ_F);44664466+ devlog_cmd.retval_len16 = htonl(FW_LEN16(devlog_cmd));44674467+ ret = t4_wr_mbox(adap, adap->mbox, &devlog_cmd, sizeof(devlog_cmd),44684468+ 
&devlog_cmd);44694469+ if (ret)44704470+ return ret;44714471+44724472+ devlog_meminfo = ntohl(devlog_cmd.memtype_devlog_memaddr16_devlog);44734473+ dparams->memtype = FW_DEVLOG_CMD_MEMTYPE_DEVLOG_G(devlog_meminfo);44744474+ dparams->start = FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(devlog_meminfo) << 4;44754475+ dparams->size = ntohl(devlog_cmd.memsize_devlog);44764476+44584477 return 0;44594478}44604479
···
 	FW_RI_BIND_MW_WR = 0x18,
 	FW_RI_FR_NSMR_WR = 0x19,
 	FW_RI_INV_LSTAG_WR = 0x1a,
-	FW_LASTC2E_WR = 0x40
+	FW_LASTC2E_WR = 0x70
 };
 
 struct fw_wr_hdr {
···
 	FW_MEMTYPE_CF_EXTMEM = 0x2,
 	FW_MEMTYPE_CF_FLASH = 0x4,
 	FW_MEMTYPE_CF_INTERNAL = 0x5,
+	FW_MEMTYPE_CF_EXTMEM1 = 0x6,
 };
 
 struct fw_caps_config_cmd {
···
 	FW_PARAMS_MNEM_PFVF = 2,	/* function params */
 	FW_PARAMS_MNEM_REG = 3,		/* limited register access */
 	FW_PARAMS_MNEM_DMAQ = 4,	/* dma queue params */
+	FW_PARAMS_MNEM_CHNET = 5,	/* chnet params */
 	FW_PARAMS_MNEM_LAST
 };
···
 	FW_DEVLOG_FACILITY_FCOE = 0x2E,
 	FW_DEVLOG_FACILITY_FOISCSI = 0x30,
 	FW_DEVLOG_FACILITY_FOFCOE = 0x32,
-	FW_DEVLOG_FACILITY_MAX = 0x32,
+	FW_DEVLOG_FACILITY_CHNET = 0x34,
+	FW_DEVLOG_FACILITY_MAX = 0x34,
 };
 
 /* log message format */
···
 #define FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(x) \
 	(((x) >> FW_DEVLOG_CMD_MEMADDR16_DEVLOG_S) & \
 	 FW_DEVLOG_CMD_MEMADDR16_DEVLOG_M)
+
+/* P C I E   F W   P F 7   R E G I S T E R */
+
+/* PF7 stores the Firmware Device Log parameters which allow Host Drivers to
+ * access the "devlog" without needing to contact firmware.  The encoding is
+ * mostly the same as that returned by the DEVLOG command except for the size
+ * which is encoded as the number of entries in multiples-1 of 128 here rather
+ * than the memory size as is done in the DEVLOG command.  Thus, 0 means 128
+ * and 15 means 2048.  This of course in turn constrains the allowed values
+ * for the devlog size ...
+ */
+#define PCIE_FW_PF_DEVLOG		7
+
+#define PCIE_FW_PF_DEVLOG_NENTRIES128_S	28
+#define PCIE_FW_PF_DEVLOG_NENTRIES128_M	0xf
+#define PCIE_FW_PF_DEVLOG_NENTRIES128_V(x) \
+	((x) << PCIE_FW_PF_DEVLOG_NENTRIES128_S)
+#define PCIE_FW_PF_DEVLOG_NENTRIES128_G(x) \
+	(((x) >> PCIE_FW_PF_DEVLOG_NENTRIES128_S) & \
+	 PCIE_FW_PF_DEVLOG_NENTRIES128_M)
+
+#define PCIE_FW_PF_DEVLOG_ADDR16_S	4
+#define PCIE_FW_PF_DEVLOG_ADDR16_M	0xffffff
+#define PCIE_FW_PF_DEVLOG_ADDR16_V(x)	((x) << PCIE_FW_PF_DEVLOG_ADDR16_S)
+#define PCIE_FW_PF_DEVLOG_ADDR16_G(x) \
+	(((x) >> PCIE_FW_PF_DEVLOG_ADDR16_S) & PCIE_FW_PF_DEVLOG_ADDR16_M)
+
+#define PCIE_FW_PF_DEVLOG_MEMTYPE_S	0
+#define PCIE_FW_PF_DEVLOG_MEMTYPE_M	0xf
+#define PCIE_FW_PF_DEVLOG_MEMTYPE_V(x)	((x) << PCIE_FW_PF_DEVLOG_MEMTYPE_S)
+#define PCIE_FW_PF_DEVLOG_MEMTYPE_G(x) \
+	(((x) >> PCIE_FW_PF_DEVLOG_MEMTYPE_S) & PCIE_FW_PF_DEVLOG_MEMTYPE_M)
 
 #endif /* _T4FW_INTERFACE_H_ */
···
 	u16 vlan_tag;
 	u32 tx_rate;
 	u32 plink_tracking;
+	u32 privileges;
 };
 
 enum vf_state {
···
 
 	u8 __iomem *csr;	/* CSR BAR used only for BE2/3 */
 	u8 __iomem *db;		/* Door Bell */
+	u8 __iomem *pcicfg;	/* On SH,BEx only. Shadow of PCI config space */
 
 	struct mutex mbox_lock; /* For serializing mbox cmds to BE card */
 	struct be_dma_mem mbox_mem;
+7-10
drivers/net/ethernet/emulex/benet/be_cmds.c
···
 {
 	int num_eqs, i = 0;
 
-	if (lancer_chip(adapter) && num > 8) {
-		while (num) {
-			num_eqs = min(num, 8);
-			__be_cmd_modify_eqd(adapter, &set_eqd[i], num_eqs);
-			i += num_eqs;
-			num -= num_eqs;
-		}
-	} else {
-		__be_cmd_modify_eqd(adapter, set_eqd, num);
+	while (num) {
+		num_eqs = min(num, 8);
+		__be_cmd_modify_eqd(adapter, &set_eqd[i], num_eqs);
+		i += num_eqs;
+		num -= num_eqs;
 	}
 
 	return 0;
···
 /* Uses synchronous mcc */
 int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array,
-		       u32 num)
+		       u32 num, u32 domain)
 {
 	struct be_mcc_wrb *wrb;
 	struct be_cmd_req_vlan_config *req;
···
 	be_wrb_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON,
 			       OPCODE_COMMON_NTWK_VLAN_CONFIG, sizeof(*req),
 			       wrb, NULL);
+	req->hdr.domain = domain;
 
 	req->interface_id = if_id;
 	req->untagged = BE_IF_FLAGS_UNTAGGED & be_if_cap_flags(adapter) ? 1 : 0;
···
 	if (!cmd_buf)
 		return count;
 	bytes_not_copied = copy_from_user(cmd_buf, buffer, count);
-	if (bytes_not_copied < 0)
+	if (bytes_not_copied < 0) {
+		kfree(cmd_buf);
 		return bytes_not_copied;
+	}
 	if (bytes_not_copied > 0)
 		count -= bytes_not_copied;
 	cmd_buf[count] = '\0';
+33-11
drivers/net/ethernet/intel/i40e/i40e_main.c
···15121512 vsi->tc_config.numtc = numtc;15131513 vsi->tc_config.enabled_tc = enabled_tc ? enabled_tc : 1;15141514 /* Number of queues per enabled TC */15151515- num_tc_qps = vsi->alloc_queue_pairs/numtc;15151515+ /* In MFP case we can have a much lower count of MSIx15161516+ * vectors available and so we need to lower the used15171517+ * q count.15181518+ */15191519+ qcount = min_t(int, vsi->alloc_queue_pairs, pf->num_lan_msix);15201520+ num_tc_qps = qcount / numtc;15161521 num_tc_qps = min_t(int, num_tc_qps, I40E_MAX_QUEUES_PER_TC);1517152215181523 /* Setup queue offset/count for all TCs for given VSI */···26892684 u16 qoffset, qcount;26902685 int i, n;2691268626922692- if (!(vsi->back->flags & I40E_FLAG_DCB_ENABLED))26932693- return;26872687+ if (!(vsi->back->flags & I40E_FLAG_DCB_ENABLED)) {26882688+ /* Reset the TC information */26892689+ for (i = 0; i < vsi->num_queue_pairs; i++) {26902690+ rx_ring = vsi->rx_rings[i];26912691+ tx_ring = vsi->tx_rings[i];26922692+ rx_ring->dcb_tc = 0;26932693+ tx_ring->dcb_tc = 0;26942694+ }26952695+ }2694269626952697 for (n = 0; n < I40E_MAX_TRAFFIC_CLASS; n++) {26962698 if (!(vsi->tc_config.enabled_tc & (1 << n)))···38413829static void i40e_clear_interrupt_scheme(struct i40e_pf *pf)38423830{38433831 int i;38323832+38333833+ i40e_stop_misc_vector(pf);38343834+ if (pf->flags & I40E_FLAG_MSIX_ENABLED) {38353835+ synchronize_irq(pf->msix_entries[0].vector);38363836+ free_irq(pf->msix_entries[0].vector, pf);38373837+ }3844383838453839 i40e_put_lump(pf->irq_pile, 0, I40E_PILE_VALID_BIT-1);38463840 for (i = 0; i < pf->num_alloc_vsi; i++)···5272525452735255 /* Wait for the PF's Tx queues to be disabled */52745256 ret = i40e_pf_wait_txq_disabled(pf);52755275- if (!ret)52575257+ if (ret) {52585258+ /* Schedule PF reset to recover */52595259+ set_bit(__I40E_PF_RESET_REQUESTED, &pf->state);52605260+ i40e_service_event_schedule(pf);52615261+ } else {52765262 i40e_pf_unquiesce_all_vsi(pf);52635263+ }52645264+52775265exit:52785266 return 
ret;52795267}···56115587 int i, v;5612558856135589 /* If we're down or resetting, just bail */56145614- if (test_bit(__I40E_CONFIG_BUSY, &pf->state))55905590+ if (test_bit(__I40E_DOWN, &pf->state) ||55915591+ test_bit(__I40E_CONFIG_BUSY, &pf->state))56155592 return;5616559356175594 /* for each VSI/netdev···95589533 set_bit(__I40E_DOWN, &pf->state);95599534 del_timer_sync(&pf->service_timer);95609535 cancel_work_sync(&pf->service_task);95369536+ i40e_fdir_teardown(pf);9561953795629538 if (pf->flags & I40E_FLAG_SRIOV_ENABLED) {95639539 i40e_free_vfs(pf);···95849558 */95859559 if (pf->vsi[pf->lan_vsi])95869560 i40e_vsi_release(pf->vsi[pf->lan_vsi]);95879587-95889588- i40e_stop_misc_vector(pf);95899589- if (pf->flags & I40E_FLAG_MSIX_ENABLED) {95909590- synchronize_irq(pf->msix_entries[0].vector);95919591- free_irq(pf->msix_entries[0].vector, pf);95929592- }9593956195949562 /* shutdown and destroy the HMC */95959563 if (pf->hw.hmc.hmc_obj) {···9737971797389718 wr32(hw, I40E_PFPM_APM, (pf->wol_en ? I40E_PFPM_APM_APME_MASK : 0));97399719 wr32(hw, I40E_PFPM_WUFC, (pf->wol_en ? I40E_PFPM_WUFC_MAG_MASK : 0));97209720+97219721+ i40e_clear_interrupt_scheme(pf);9740972297419723 if (system_state == SYSTEM_POWER_OFF) {97429724 pci_wake_from_d3(pdev, pf->wol_en);
+35
drivers/net/ethernet/intel/i40e/i40e_nvm.c
···
 {
 	i40e_status status;
 	enum i40e_nvmupd_cmd upd_cmd;
+	bool retry_attempt = false;
 
 	upd_cmd = i40e_nvmupd_validate_command(hw, cmd, errno);
 
+retry:
 	switch (upd_cmd) {
 	case I40E_NVMUPD_WRITE_CON:
 		status = i40e_nvmupd_nvm_write(hw, cmd, bytes, errno);
···
 		*errno = -ESRCH;
 		break;
 	}
+
+	/* In some circumstances, a multi-write transaction takes longer
+	 * than the default 3 minute timeout on the write semaphore.  If
+	 * the write failed with an EBUSY status, this is likely the problem,
+	 * so here we try to reacquire the semaphore then retry the write.
+	 * We only do one retry, then give up.
+	 */
+	if (status && (hw->aq.asq_last_status == I40E_AQ_RC_EBUSY) &&
+	    !retry_attempt) {
+		i40e_status old_status = status;
+		u32 old_asq_status = hw->aq.asq_last_status;
+		u32 gtime;
+
+		gtime = rd32(hw, I40E_GLVFGEN_TIMER);
+		if (gtime >= hw->nvm.hw_semaphore_timeout) {
+			i40e_debug(hw, I40E_DEBUG_ALL,
+				   "NVMUPD: write semaphore expired (%d >= %lld), retrying\n",
+				   gtime, hw->nvm.hw_semaphore_timeout);
+			i40e_release_nvm(hw);
+			status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE);
+			if (status) {
+				i40e_debug(hw, I40E_DEBUG_ALL,
+					   "NVMUPD: write semaphore reacquire failed aq_err = %d\n",
+					   hw->aq.asq_last_status);
+				status = old_status;
+				hw->aq.asq_last_status = old_asq_status;
+			} else {
+				retry_attempt = true;
+				goto retry;
+			}
+		}
+	}
+
 	return status;
 }
+95-24
drivers/net/ethernet/intel/i40e/i40e_txrx.c
···586586}587587588588/**589589+ * i40e_get_head - Retrieve head from head writeback590590+ * @tx_ring: tx ring to fetch head of591591+ *592592+ * Returns value of Tx ring head based on value stored593593+ * in head write-back location594594+ **/595595+static inline u32 i40e_get_head(struct i40e_ring *tx_ring)596596+{597597+ void *head = (struct i40e_tx_desc *)tx_ring->desc + tx_ring->count;598598+599599+ return le32_to_cpu(*(volatile __le32 *)head);600600+}601601+602602+/**589603 * i40e_get_tx_pending - how many tx descriptors not processed590604 * @tx_ring: the ring of descriptors591605 *···608594 **/609595static u32 i40e_get_tx_pending(struct i40e_ring *ring)610596{611611- u32 ntu = ((ring->next_to_clean <= ring->next_to_use)612612- ? ring->next_to_use613613- : ring->next_to_use + ring->count);614614- return ntu - ring->next_to_clean;597597+ u32 head, tail;598598+599599+ head = i40e_get_head(ring);600600+ tail = readl(ring->tail);601601+602602+ if (head != tail)603603+ return (head < tail) ?604604+ tail - head : (tail + ring->count - head);605605+606606+ return 0;615607}616608617609/**···626606 **/627607static bool i40e_check_tx_hang(struct i40e_ring *tx_ring)628608{609609+ u32 tx_done = tx_ring->stats.packets;610610+ u32 tx_done_old = tx_ring->tx_stats.tx_done_old;629611 u32 tx_pending = i40e_get_tx_pending(tx_ring);630612 struct i40e_pf *pf = tx_ring->vsi->back;631613 bool ret = false;···645623 * run the check_tx_hang logic with a transmit completion646624 * pending but without time to complete it yet.647625 */648648- if ((tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) &&649649- (tx_pending >= I40E_MIN_DESC_PENDING)) {626626+ if ((tx_done_old == tx_done) && tx_pending) {650627 /* make sure it is true for two checks in a row */651628 ret = test_and_set_bit(__I40E_HANG_CHECK_ARMED,652629 &tx_ring->state);653653- } else if ((tx_ring->tx_stats.tx_done_old == tx_ring->stats.packets) &&654654- (tx_pending < I40E_MIN_DESC_PENDING) &&655655- (tx_pending > 
0)) {630630+ } else if (tx_done_old == tx_done &&631631+ (tx_pending < I40E_MIN_DESC_PENDING) && (tx_pending > 0)) {656632 if (I40E_DEBUG_FLOW & pf->hw.debug_mask)657633 dev_info(tx_ring->dev, "HW needs some more descs to do a cacheline flush. tx_pending %d, queue %d",658634 tx_pending, tx_ring->queue_index);659635 pf->tx_sluggish_count++;660636 } else {661637 /* update completed stats and disarm the hang check */662662- tx_ring->tx_stats.tx_done_old = tx_ring->stats.packets;638638+ tx_ring->tx_stats.tx_done_old = tx_done;663639 clear_bit(__I40E_HANG_CHECK_ARMED, &tx_ring->state);664640 }665641666642 return ret;667667-}668668-669669-/**670670- * i40e_get_head - Retrieve head from head writeback671671- * @tx_ring: tx ring to fetch head of672672- *673673- * Returns value of Tx ring head based on value stored674674- * in head write-back location675675- **/676676-static inline u32 i40e_get_head(struct i40e_ring *tx_ring)677677-{678678- void *head = (struct i40e_tx_desc *)tx_ring->desc + tx_ring->count;679679-680680- return le32_to_cpu(*(volatile __le32 *)head);681643}682644683645#define WB_STRIDE 0x3···21462140}2147214121482142/**21432143+ * i40e_chk_linearize - Check if there are more than 8 fragments per packet21442144+ * @skb: send buffer21452145+ * @tx_flags: collected send information21462146+ * @hdr_len: size of the packet header21472147+ *21482148+ * Note: Our HW can't scatter-gather more than 8 fragments to build21492149+ * a packet on the wire and so we need to figure out the cases where we21502150+ * need to linearize the skb.21512151+ **/21522152+static bool i40e_chk_linearize(struct sk_buff *skb, u32 tx_flags,21532153+ const u8 hdr_len)21542154+{21552155+ struct skb_frag_struct *frag;21562156+ bool linearize = false;21572157+ unsigned int size = 0;21582158+ u16 num_frags;21592159+ u16 gso_segs;21602160+21612161+ num_frags = skb_shinfo(skb)->nr_frags;21622162+ gso_segs = skb_shinfo(skb)->gso_segs;21632163+21642164+ if (tx_flags & (I40E_TX_FLAGS_TSO | 
I40E_TX_FLAGS_FSO)) {21652165+ u16 j = 1;21662166+21672167+ if (num_frags < (I40E_MAX_BUFFER_TXD))21682168+ goto linearize_chk_done;21692169+ /* try the simple math, if we have too many frags per segment */21702170+ if (DIV_ROUND_UP((num_frags + gso_segs), gso_segs) >21712171+ I40E_MAX_BUFFER_TXD) {21722172+ linearize = true;21732173+ goto linearize_chk_done;21742174+ }21752175+ frag = &skb_shinfo(skb)->frags[0];21762176+ size = hdr_len;21772177+ /* we might still have more fragments per segment */21782178+ do {21792179+ size += skb_frag_size(frag);21802180+ frag++; j++;21812181+ if (j == I40E_MAX_BUFFER_TXD) {21822182+ if (size < skb_shinfo(skb)->gso_size) {21832183+ linearize = true;21842184+ break;21852185+ }21862186+ j = 1;21872187+ size -= skb_shinfo(skb)->gso_size;21882188+ if (size)21892189+ j++;21902190+ size += hdr_len;21912191+ }21922192+ num_frags--;21932193+ } while (num_frags);21942194+ } else {21952195+ if (num_frags >= I40E_MAX_BUFFER_TXD)21962196+ linearize = true;21972197+ }21982198+21992199+linearize_chk_done:22002200+ return linearize;22012201+}22022202+22032203+/**21492204 * i40e_tx_map - Build the Tx descriptor21502205 * @tx_ring: ring to send buffer on21512206 * @skb: send buffer···2462239524632396 if (tsyn)24642397 tx_flags |= I40E_TX_FLAGS_TSYN;23982398+23992399+ if (i40e_chk_linearize(skb, tx_flags, hdr_len))24002400+ if (skb_linearize(skb))24012401+ goto out_drop;2465240224662403 skb_tx_timestamp(skb);24672404
···153153154154 /* All active slaves need to receive the event */155155 if (slave == ALL_SLAVES) {156156- for (i = 0; i < dev->num_slaves; i++) {157157- if (i != dev->caps.function &&158158- master->slave_state[i].active)159159- if (mlx4_GEN_EQE(dev, i, eqe))160160- mlx4_warn(dev, "Failed to generate event for slave %d\n",161161- i);156156+ for (i = 0; i <= dev->persist->num_vfs; i++) {157157+ if (mlx4_GEN_EQE(dev, i, eqe))158158+ mlx4_warn(dev, "Failed to generate event for slave %d\n",159159+ i);162160 }163161 } else {164162 if (mlx4_GEN_EQE(dev, slave, eqe))···201203 struct mlx4_eqe *eqe)202204{203205 struct mlx4_priv *priv = mlx4_priv(dev);204204- struct mlx4_slave_state *s_slave =205205- &priv->mfunc.master.slave_state[slave];206206207207- if (!s_slave->active) {208208- /*mlx4_warn(dev, "Trying to pass event to inactive slave\n");*/207207+ if (slave < 0 || slave > dev->persist->num_vfs ||208208+ slave == dev->caps.function ||209209+ !priv->mfunc.master.slave_state[slave].active)209210 return;210210- }211211212212 slave_event(dev, slave, eqe);213213}
+1-1
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
···453453 unsigned long rx_chksum_none;454454 unsigned long rx_chksum_complete;455455 unsigned long tx_chksum_offload;456456-#define NUM_PORT_STATS 9456456+#define NUM_PORT_STATS 10457457};458458459459struct mlx4_en_perf_stats {
···25612561 int rc = -EINVAL;2562256225632563 if (!rtl_fw_format_ok(tp, rtl_fw)) {25642564- netif_err(tp, ifup, dev, "invalid firwmare\n");25642564+ netif_err(tp, ifup, dev, "invalid firmware\n");25652565 goto out;25662566 }25672567···50675067 RTL_W8(ChipCmd, CmdReset);5068506850695069 rtl_udelay_loop_wait_low(tp, &rtl_chipcmd_cond, 100, 100);50705070-50715071- netdev_reset_queue(tp->dev);50725070}5073507150745072static void rtl_request_uncached_firmware(struct rtl8169_private *tp)···70477049 u32 status, len;70487050 u32 opts[2];70497051 int frags;70507050- bool stop_queue;7051705270527053 if (unlikely(!TX_FRAGS_READY_FOR(tp, skb_shinfo(skb)->nr_frags))) {70537054 netif_err(tp, drv, dev, "BUG! Tx Ring full when queue awake!\n");···7087709070887091 txd->opts2 = cpu_to_le32(opts[1]);7089709270907090- netdev_sent_queue(dev, skb->len);70917091-70927093 skb_tx_timestamp(skb);7093709470947095 /* Force memory writes to complete before releasing descriptor */···7101710671027107 tp->cur_tx += frags + 1;7103710871047104- stop_queue = !TX_FRAGS_READY_FOR(tp, MAX_SKB_FRAGS);71097109+ RTL_W8(TxPoll, NPQ);7105711071067106- if (!skb->xmit_more || stop_queue ||71077107- netif_xmit_stopped(netdev_get_tx_queue(dev, 0))) {71087108- RTL_W8(TxPoll, NPQ);71117111+ mmiowb();7109711271107110- mmiowb();71117111- }71127112-71137113- if (stop_queue) {71137113+ if (!TX_FRAGS_READY_FOR(tp, MAX_SKB_FRAGS)) {71147114 /* Avoid wrongly optimistic queue wake-up: rtl_tx thread must71157115 * not miss a ring update when it notices a stopped queue.71167116 */···71887198static void rtl_tx(struct net_device *dev, struct rtl8169_private *tp)71897199{71907200 unsigned int dirty_tx, tx_left;71917191- unsigned int bytes_compl = 0, pkts_compl = 0;7192720171937202 dirty_tx = tp->dirty_tx;71947203 smp_rmb();···72117222 rtl8169_unmap_tx_skb(&tp->pci_dev->dev, tx_skb,72127223 tp->TxDescArray + entry);72137224 if (status & LastFrag) {72147214- pkts_compl++;72157215- bytes_compl += tx_skb->skb->len;72257225+ 
u64_stats_update_begin(&tp->tx_stats.syncp);72267226+ tp->tx_stats.packets++;72277227+ tp->tx_stats.bytes += tx_skb->skb->len;72287228+ u64_stats_update_end(&tp->tx_stats.syncp);72167229 dev_kfree_skb_any(tx_skb->skb);72177230 tx_skb->skb = NULL;72187231 }···72237232 }7224723372257234 if (tp->dirty_tx != dirty_tx) {72267226- netdev_completed_queue(tp->dev, pkts_compl, bytes_compl);72277227-72287228- u64_stats_update_begin(&tp->tx_stats.syncp);72297229- tp->tx_stats.packets += pkts_compl;72307230- tp->tx_stats.bytes += bytes_compl;72317231- u64_stats_update_end(&tp->tx_stats.syncp);72327232-72337235 tp->dirty_tx = dirty_tx;72347236 /* Sync with rtl8169_start_xmit:72357237 * - publish dirty_tx ring index (write barrier)
+13-5
drivers/net/ethernet/renesas/sh_eth.c
···508508 .tpauser = 1,509509 .hw_swap = 1,510510 .rmiimode = 1,511511- .shift_rd0 = 1,512511};513512514513static void sh_eth_set_rate_sh7724(struct net_device *ndev)···13911392 msleep(2); /* max frame time at 10 Mbps < 1250 us */13921393 sh_eth_get_stats(ndev);13931394 sh_eth_reset(ndev);13951395+13961396+ /* Set MAC address again */13971397+ update_mac_address(ndev);13941398}1395139913961400/* free Tx skb function */···14091407 txdesc = &mdp->tx_ring[entry];14101408 if (txdesc->status & cpu_to_edmac(mdp, TD_TACT))14111409 break;14101410+ /* TACT bit must be checked before all the following reads */14111411+ rmb();14121412 /* Free the original skb. */14131413 if (mdp->tx_skbuff[entry]) {14141414 dma_unmap_single(&ndev->dev, txdesc->addr,···14481444 limit = boguscnt;14491445 rxdesc = &mdp->rx_ring[entry];14501446 while (!(rxdesc->status & cpu_to_edmac(mdp, RD_RACT))) {14471447+ /* RACT bit must be checked before all the following reads */14481448+ rmb();14511449 desc_status = edmac_to_cpu(mdp, rxdesc->status);14521450 pkt_len = rxdesc->frame_length;14531451···1461145514621456 /* In case of almost all GETHER/ETHERs, the Receive Frame State14631457 * (RFS) bits in the Receive Descriptor 0 are from bit 9 to14641464- * bit 0. However, in case of the R8A7740, R8A779x, and14651465- * R7S72100 the RFS bits are from bit 25 to bit 16. So, the14581458+ * bit 0. However, in case of the R8A7740 and R7S7210014591459+ * the RFS bits are from bit 25 to bit 16. So, the14661460 * driver needs right shifting by 16.14671461 */14681462 if (mdp->cd->shift_rd0)···15291523 skb_checksum_none_assert(skb);15301524 rxdesc->addr = dma_addr;15311525 }15261526+ wmb(); /* RACT bit must be set after all the above writes */15321527 if (entry >= mdp->num_rx_ring - 1)15331528 rxdesc->status |=15341529 cpu_to_edmac(mdp, RD_RACT | RD_RFP | RD_RDEL);···15421535 /* If we don't need to check status, don't. 
-KDU */15431536 if (!(sh_eth_read(ndev, EDRRR) & EDRRR_R)) {15441537 /* fix the values for the next receiving if RDE is set */15451545- if (intr_status & EESR_RDE) {15381538+ if (intr_status & EESR_RDE && mdp->reg_offset[RDFAR] != 0) {15461539 u32 count = (sh_eth_read(ndev, RDFAR) -15471540 sh_eth_read(ndev, RDLAR)) >> 4;15481541···21812174 }21822175 spin_unlock_irqrestore(&mdp->lock, flags);2183217621842184- if (skb_padto(skb, ETH_ZLEN))21772177+ if (skb_put_padto(skb, ETH_ZLEN))21852178 return NETDEV_TX_OK;2186217921872180 entry = mdp->cur_tx % mdp->num_tx_ring;···21992192 }22002193 txdesc->buffer_length = skb->len;2201219421952195+ wmb(); /* TACT bit must be set after all the above writes */22022196 if (entry >= mdp->num_tx_ring - 1)22032197 txdesc->status |= cpu_to_edmac(mdp, TD_TACT | TD_TDLE);22042198 else
+11-3
drivers/net/ethernet/rocker/rocker.c
···12571257 u64 val = rocker_read64(rocker_port->rocker, PORT_PHYS_ENABLE);1258125812591259 if (enable)12601260- val |= 1 << rocker_port->lport;12601260+ val |= 1ULL << rocker_port->lport;12611261 else12621262- val &= ~(1 << rocker_port->lport);12621262+ val &= ~(1ULL << rocker_port->lport);12631263 rocker_write64(rocker_port->rocker, PORT_PHYS_ENABLE, val);12641264}12651265···4201420142024202 alloc_size = sizeof(struct rocker_port *) * rocker->port_count;42034203 rocker->ports = kmalloc(alloc_size, GFP_KERNEL);42044204+ if (!rocker->ports)42054205+ return -ENOMEM;42044206 for (i = 0; i < rocker->port_count; i++) {42054207 err = rocker_probe_port(rocker, i);42064208 if (err)···44684466 struct net_device *master = netdev_master_upper_dev_get(dev);44694467 int err = 0;4470446844694469+ /* There are currently three cases handled here:44704470+ * 1. Joining a bridge44714471+ * 2. Leaving a previously joined bridge44724472+ * 3. Other, e.g. being added to or removed from a bond or openvswitch,44734473+ * in which case nothing is done44744474+ */44714475 if (master && master->rtnl_link_ops &&44724476 !strcmp(master->rtnl_link_ops->kind, "bridge"))44734477 err = rocker_port_bridge_join(rocker_port, master);44744474- else44784478+ else if (rocker_port_is_bridged(rocker_port))44754479 err = rocker_port_bridge_leave(rocker_port);4476448044774481 return err;
···272272 struct stmmac_priv *priv = NULL;273273 struct plat_stmmacenet_data *plat_dat = NULL;274274 const char *mac = NULL;275275+ int irq, wol_irq, lpi_irq;276276+277277+ /* Get IRQ information early to have an ability to ask for deferred278278+ * probe if needed before we went too far with resource allocation.279279+ */280280+ irq = platform_get_irq_byname(pdev, "macirq");281281+ if (irq < 0) {282282+ if (irq != -EPROBE_DEFER) {283283+ dev_err(dev,284284+ "MAC IRQ configuration information not found\n");285285+ }286286+ return irq;287287+ }288288+289289+ /* On some platforms e.g. SPEAr the wake up irq differs from the mac irq290290+ * The external wake up irq can be passed through the platform code291291+ * named as "eth_wake_irq"292292+ *293293+ * In case the wake up interrupt is not passed from the platform294294+ * so the driver will continue to use the mac irq (ndev->irq)295295+ */296296+ wol_irq = platform_get_irq_byname(pdev, "eth_wake_irq");297297+ if (wol_irq < 0) {298298+ if (wol_irq == -EPROBE_DEFER)299299+ return -EPROBE_DEFER;300300+ wol_irq = irq;301301+ }302302+303303+ lpi_irq = platform_get_irq_byname(pdev, "eth_lpi");304304+ if (lpi_irq == -EPROBE_DEFER)305305+ return -EPROBE_DEFER;275306276307 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);277308 addr = devm_ioremap_resource(dev, res);···354323 return PTR_ERR(priv);355324 }356325326326+ /* Copy IRQ values to priv structure which is now available */327327+ priv->dev->irq = irq;328328+ priv->wol_irq = wol_irq;329329+ priv->lpi_irq = lpi_irq;330330+357331 /* Get MAC address if available (DT) */358332 if (mac)359333 memcpy(priv->dev->dev_addr, mac, ETH_ALEN);360360-361361- /* Get the MAC information */362362- priv->dev->irq = platform_get_irq_byname(pdev, "macirq");363363- if (priv->dev->irq < 0) {364364- if (priv->dev->irq != -EPROBE_DEFER) {365365- netdev_err(priv->dev,366366- "MAC IRQ configuration information not found\n");367367- }368368- return priv->dev->irq;369369- }370370-371371- 
/*372372- * On some platforms e.g. SPEAr the wake up irq differs from the mac irq373373- * The external wake up irq can be passed through the platform code374374- * named as "eth_wake_irq"375375- *376376- * In case the wake up interrupt is not passed from the platform377377- * so the driver will continue to use the mac irq (ndev->irq)378378- */379379- priv->wol_irq = platform_get_irq_byname(pdev, "eth_wake_irq");380380- if (priv->wol_irq < 0) {381381- if (priv->wol_irq == -EPROBE_DEFER)382382- return -EPROBE_DEFER;383383- priv->wol_irq = priv->dev->irq;384384- }385385-386386- priv->lpi_irq = platform_get_irq_byname(pdev, "eth_lpi");387387- if (priv->lpi_irq == -EPROBE_DEFER)388388- return -EPROBE_DEFER;389334390335 platform_set_drvdata(pdev, priv->dev);391336
···236236}237237238238/**239239+ * phy_check_valid - check if there is a valid PHY setting which matches240240+ * speed, duplex, and feature mask241241+ * @speed: speed to match242242+ * @duplex: duplex to match243243+ * @features: A mask of the valid settings244244+ *245245+ * Description: Returns true if there is a valid setting, false otherwise.246246+ */247247+static inline bool phy_check_valid(int speed, int duplex, u32 features)248248+{249249+ unsigned int idx;250250+251251+ idx = phy_find_valid(phy_find_setting(speed, duplex), features);252252+253253+ return settings[idx].speed == speed && settings[idx].duplex == duplex &&254254+ (settings[idx].setting & features);255255+}256256+257257+/**239258 * phy_sanitize_settings - make sure the PHY is set to supported speed and duplex240259 * @phydev: the target phy_device struct241260 *···10641045 int eee_lp, eee_cap, eee_adv;10651046 u32 lp, cap, adv;10661047 int status;10671067- unsigned int idx;1068104810691049 /* Read phy status to properly get the right settings */10701050 status = phy_read_status(phydev);···1095107710961078 adv = mmd_eee_adv_to_ethtool_adv_t(eee_adv);10971079 lp = mmd_eee_adv_to_ethtool_adv_t(eee_lp);10981098- idx = phy_find_setting(phydev->speed, phydev->duplex);10991099- if (!(lp & adv & settings[idx].setting))10801080+ if (!phy_check_valid(phydev->speed, phydev->duplex, lp & adv))11001081 goto eee_exit_err;1101108211021083 if (clk_stop_enable) {
···1172117211731173 /* return skb */11741174 ctx->tx_curr_skb = NULL;11751175- dev->net->stats.tx_packets += ctx->tx_curr_frame_num;1176117511771176 /* keep private stats: framing overhead and number of NTBs */11781177 ctx->tx_overhead += skb_out->len - ctx->tx_curr_frame_payload;11791178 ctx->tx_ntbs++;1180117911811181- /* usbnet has already counted all the framing overhead.11801180+ /* usbnet will count all the framing overhead by default.11821181 * Adjust the stats so that the tx_bytes counter show real11831182 * payload data instead.11841183 */11851185- dev->net->stats.tx_bytes -= skb_out->len - ctx->tx_curr_frame_payload;11841184+ usbnet_set_skb_tx_stats(skb_out, n,11851185+ ctx->tx_curr_frame_payload - skb_out->len);1186118611871187 return skb_out;11881188
+34-7
drivers/net/usb/cx82310_eth.c
···4646};47474848#define CMD_PACKET_SIZE 644949-/* first command after power on can take around 8 seconds */5050-#define CMD_TIMEOUT 150004949+#define CMD_TIMEOUT 1005150#define CMD_REPLY_RETRY 552515352#define CX82310_MTU 1514···7778 ret = usb_bulk_msg(udev, usb_sndbulkpipe(udev, CMD_EP), buf,7879 CMD_PACKET_SIZE, &actual_len, CMD_TIMEOUT);7980 if (ret < 0) {8080- dev_err(&dev->udev->dev, "send command %#x: error %d\n",8181- cmd, ret);8181+ if (cmd != CMD_GET_LINK_STATUS)8282+ dev_err(&dev->udev->dev, "send command %#x: error %d\n",8383+ cmd, ret);8284 goto end;8385 }8486···9090 buf, CMD_PACKET_SIZE, &actual_len,9191 CMD_TIMEOUT);9292 if (ret < 0) {9393- dev_err(&dev->udev->dev,9494- "reply receive error %d\n", ret);9393+ if (cmd != CMD_GET_LINK_STATUS)9494+ dev_err(&dev->udev->dev,9595+ "reply receive error %d\n",9696+ ret);9597 goto end;9698 }9799 if (actual_len > 0)···136134 int ret;137135 char buf[15];138136 struct usb_device *udev = dev->udev;137137+ u8 link[3];138138+ int timeout = 50;139139140140 /* avoid ADSL modems - continue only if iProduct is "USB NET CARD" */141141 if (usb_string(udev, udev->descriptor.iProduct, buf, sizeof(buf)) > 0···163159 dev->partial_data = (unsigned long) kmalloc(dev->hard_mtu, GFP_KERNEL);164160 if (!dev->partial_data)165161 return -ENOMEM;162162+163163+ /* wait for firmware to become ready (indicated by the link being up) */164164+ while (--timeout) {165165+ ret = cx82310_cmd(dev, CMD_GET_LINK_STATUS, true, NULL, 0,166166+ link, sizeof(link));167167+ /* the command can time out during boot - it's not an error */168168+ if (!ret && link[0] == 1 && link[2] == 1)169169+ break;170170+ msleep(500);171171+ };172172+ if (!timeout) {173173+ dev_err(&udev->dev, "firmware not ready in time\n");174174+ return -ETIMEDOUT;175175+ }166176167177 /* enable ethernet mode (?) 
*/168178 ret = cx82310_cmd(dev, CMD_ETHERNET_MODE, true, "\x01", 1, NULL, 0);···318300 .tx_fixup = cx82310_tx_fixup,319301};320302303303+#define USB_DEVICE_CLASS(vend, prod, cl, sc, pr) \304304+ .match_flags = USB_DEVICE_ID_MATCH_DEVICE | \305305+ USB_DEVICE_ID_MATCH_DEV_INFO, \306306+ .idVendor = (vend), \307307+ .idProduct = (prod), \308308+ .bDeviceClass = (cl), \309309+ .bDeviceSubClass = (sc), \310310+ .bDeviceProtocol = (pr)311311+321312static const struct usb_device_id products[] = {322313 {323323- USB_DEVICE_AND_INTERFACE_INFO(0x0572, 0xcb01, 0xff, 0, 0),314314+ USB_DEVICE_CLASS(0x0572, 0xcb01, 0xff, 0, 0),324315 .driver_info = (unsigned long) &cx82310_info325316 },326317 { },
···11881188 struct usbnet *dev = entry->dev;1189118911901190 if (urb->status == 0) {11911191- if (!(dev->driver_info->flags & FLAG_MULTI_PACKET))11921192- dev->net->stats.tx_packets++;11911191+ dev->net->stats.tx_packets += entry->packets;11931192 dev->net->stats.tx_bytes += entry->length;11941193 } else {11951194 dev->net->stats.tx_errors++;···13461347 } else13471348 urb->transfer_flags |= URB_ZERO_PACKET;13481349 }13491349- entry->length = urb->transfer_buffer_length = length;13501350+ urb->transfer_buffer_length = length;13511351+13521352+ if (info->flags & FLAG_MULTI_PACKET) {13531353+ /* Driver has set number of packets and a length delta.13541354+ * Calculate the complete length and ensure that it's13551355+ * positive.13561356+ */13571357+ entry->length += length;13581358+ if (WARN_ON_ONCE(entry->length <= 0))13591359+ entry->length = length;13601360+ } else {13611361+ usbnet_set_skb_tx_stats(skb, 1, length);13621362+ }1350136313511364 spin_lock_irqsave(&dev->txq.lock, flags);13521365 retval = usb_autopm_get_interface_async(dev->intf);
+4-5
drivers/net/virtio_net.c
···14481448{14491449 int i;1450145014511451- for (i = 0; i < vi->max_queue_pairs; i++)14511451+ for (i = 0; i < vi->max_queue_pairs; i++) {14521452+ napi_hash_del(&vi->rq[i].napi);14521453 netif_napi_del(&vi->rq[i].napi);14541454+ }1453145514541456 kfree(vi->rq);14551457 kfree(vi->sq);···19501948 cancel_delayed_work_sync(&vi->refill);1951194919521950 if (netif_running(vi->dev)) {19531953- for (i = 0; i < vi->max_queue_pairs; i++) {19511951+ for (i = 0; i < vi->max_queue_pairs; i++)19541952 napi_disable(&vi->rq[i].napi);19551955- napi_hash_del(&vi->rq[i].napi);19561956- netif_napi_del(&vi->rq[i].napi);19571957- }19581953 }1959195419601955 remove_vq_common(vi);
+2-2
drivers/net/vxlan.c
···12181218 goto drop;1219121912201220 flags &= ~VXLAN_HF_RCO;12211221- vni &= VXLAN_VID_MASK;12211221+ vni &= VXLAN_VNI_MASK;12221222 }1223122312241224 /* For backwards compatibility, only allow reserved fields to be···12391239 flags &= ~VXLAN_GBP_USED_BITS;12401240 }1241124112421242- if (flags || (vni & ~VXLAN_VID_MASK)) {12421242+ if (flags || vni & ~VXLAN_VNI_MASK) {12431243 /* If there are any unprocessed flags remaining treat12441244 * this as a malformed packet. This behavior diverges from12451245 * VXLAN RFC (RFC7348) which stipulates that bits in reserved
···53705370 case 0x432a: /* BCM4321 */53715371 case 0x432d: /* BCM4322 */53725372 case 0x4352: /* BCM43222 */53735373+ case 0x435a: /* BCM43228 */53735374 case 0x4333: /* BCM4331 */53745375 case 0x43a2: /* BCM4360 */53755376 case 0x43b3: /* BCM4352 */
+2-1
drivers/net/wireless/brcm80211/brcmfmac/feature.c
···126126 brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_MCHAN, "mchan");127127 if (drvr->bus_if->wowl_supported)128128 brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_WOWL, "wowl");129129- brcmf_feat_iovar_int_set(ifp, BRCMF_FEAT_MBSS, "mbss", 0);129129+ if (drvr->bus_if->chip != BRCM_CC_43362_CHIP_ID)130130+ brcmf_feat_iovar_int_set(ifp, BRCMF_FEAT_MBSS, "mbss", 0);130131131132 /* set chip related quirks */132133 switch (drvr->bus_if->chip) {
+12-3
drivers/net/wireless/brcm80211/brcmfmac/vendor.c
···3939 void *dcmd_buf = NULL, *wr_pointer;4040 u16 msglen, maxmsglen = PAGE_SIZE - 0x100;41414242- brcmf_dbg(TRACE, "cmd %x set %d len %d\n", cmdhdr->cmd, cmdhdr->set,4343- cmdhdr->len);4242+ if (len < sizeof(*cmdhdr)) {4343+ brcmf_err("vendor command too short: %d\n", len);4444+ return -EINVAL;4545+ }44464547 vif = container_of(wdev, struct brcmf_cfg80211_vif, wdev);4648 ifp = vif->ifp;47494848- len -= sizeof(struct brcmf_vndr_dcmd_hdr);5050+ brcmf_dbg(TRACE, "ifidx=%d, cmd=%d\n", ifp->ifidx, cmdhdr->cmd);5151+5252+ if (cmdhdr->offset > len) {5353+ brcmf_err("bad buffer offset %d > %d\n", cmdhdr->offset, len);5454+ return -EINVAL;5555+ }5656+5757+ len -= cmdhdr->offset;4958 ret_len = cmdhdr->len;5059 if (ret_len > 0 || len > 0) {5160 if (len > BRCMF_DCMD_MAXLEN) {
-1
drivers/net/wireless/iwlwifi/dvm/dev.h
···708708 unsigned long reload_jiffies;709709 int reload_count;710710 bool ucode_loaded;711711- bool init_ucode_run; /* Don't run init uCode again */712711713712 u8 plcp_delta_threshold;714713
···793793 if (!vif->bss_conf.assoc)794794 smps_mode = IEEE80211_SMPS_AUTOMATIC;795795796796- if (IWL_COEX_IS_RRC_ON(mvm->last_bt_notif.ttc_rrc_status,796796+ if (mvmvif->phy_ctxt &&797797+ IWL_COEX_IS_RRC_ON(mvm->last_bt_notif.ttc_rrc_status,797798 mvmvif->phy_ctxt->id))798799 smps_mode = IEEE80211_SMPS_AUTOMATIC;799800
+2-1
drivers/net/wireless/iwlwifi/mvm/coex_legacy.c
···832832 if (!vif->bss_conf.assoc)833833 smps_mode = IEEE80211_SMPS_AUTOMATIC;834834835835- if (data->notif->rrc_enabled & BIT(mvmvif->phy_ctxt->id))835835+ if (mvmvif->phy_ctxt &&836836+ data->notif->rrc_enabled & BIT(mvmvif->phy_ctxt->id))836837 smps_mode = IEEE80211_SMPS_AUTOMATIC;837838838839 IWL_DEBUG_COEX(data->mvm,
+35-3
drivers/net/wireless/iwlwifi/mvm/mac80211.c
···405405 hw->wiphy->bands[IEEE80211_BAND_5GHZ] =406406 &mvm->nvm_data->bands[IEEE80211_BAND_5GHZ];407407408408- if (mvm->fw->ucode_capa.capa[0] & IWL_UCODE_TLV_CAPA_BEAMFORMER)408408+ if ((mvm->fw->ucode_capa.capa[0] &409409+ IWL_UCODE_TLV_CAPA_BEAMFORMER) &&410410+ (mvm->fw->ucode_capa.api[0] &411411+ IWL_UCODE_TLV_API_LQ_SS_PARAMS))409412 hw->wiphy->bands[IEEE80211_BAND_5GHZ]->vht_cap.cap |=410413 IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE;411414 }···2218221522192216 mutex_lock(&mvm->mutex);2220221722212221- iwl_mvm_cancel_scan(mvm);22182218+ /* Due to a race condition, it's possible that mac80211 asks22192219+ * us to stop a hw_scan when it's already stopped. This can22202220+ * happen, for instance, if we stopped the scan ourselves,22212221+ * called ieee80211_scan_completed() and the userspace called22222222+ * cancel scan scan before ieee80211_scan_work() could run.22232223+ * To handle that, simply return if the scan is not running.22242224+ */22252225+ /* FIXME: for now, we ignore this race for UMAC scans, since22262226+ * they don't set the scan_status.22272227+ */22282228+ if ((mvm->scan_status == IWL_MVM_SCAN_OS) ||22292229+ (mvm->fw->ucode_capa.capa[0] & IWL_UCODE_TLV_CAPA_UMAC_SCAN))22302230+ iwl_mvm_cancel_scan(mvm);2222223122232232 mutex_unlock(&mvm->mutex);22242233}···25742559 int ret;2575256025762561 mutex_lock(&mvm->mutex);25622562+25632563+ /* Due to a race condition, it's possible that mac80211 asks25642564+ * us to stop a sched_scan when it's already stopped. This25652565+ * can happen, for instance, if we stopped the scan ourselves,25662566+ * called ieee80211_sched_scan_stopped() and the userspace called25672567+ * stop sched scan scan before ieee80211_sched_scan_stopped_work()25682568+ * could run. 
To handle this, simply return if the scan is25692569+ * not running.25702570+ */25712571+ /* FIXME: for now, we ignore this race for UMAC scans, since25722572+ * they don't set the scan_status.25732573+ */25742574+ if (mvm->scan_status != IWL_MVM_SCAN_SCHED &&25752575+ !(mvm->fw->ucode_capa.capa[0] & IWL_UCODE_TLV_CAPA_UMAC_SCAN)) {25762576+ mutex_unlock(&mvm->mutex);25772577+ return 0;25782578+ }25792579+25772580 ret = iwl_mvm_scan_offload_stop(mvm, false);25782581 mutex_unlock(&mvm->mutex);25792582 iwl_mvm_wait_for_async_handlers(mvm);2580258325812584 return ret;25822582-25832585}2584258625852587static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
···13861386 }1387138713881388 return true;13891389- } else if (0x86DD == ether_type) {13901390- return true;13891389+ } else if (ETH_P_IPV6 == ether_type) {13901390+ /* TODO: Handle any IPv6 cases that need special handling.13911391+ * For now, always return false13921392+ */13931393+ goto end;13911394 }1392139513931396end:
+11-1
drivers/net/wireless/rtlwifi/pci.c
···11241124 /*This is for new trx flow*/11251125 struct rtl_tx_buffer_desc *pbuffer_desc = NULL;11261126 u8 temp_one = 1;11271127+ u8 *entry;1127112811281129 memset(&tcb_desc, 0, sizeof(struct rtl_tcb_desc));11291130 ring = &rtlpci->tx_ring[BEACON_QUEUE];11301131 pskb = __skb_dequeue(&ring->queue);11311131- if (pskb)11321132+ if (rtlpriv->use_new_trx_flow)11331133+ entry = (u8 *)(&ring->buffer_desc[ring->idx]);11341134+ else11351135+ entry = (u8 *)(&ring->desc[ring->idx]);11361136+ if (pskb) {11371137+ pci_unmap_single(rtlpci->pdev,11381138+ rtlpriv->cfg->ops->get_desc(11391139+ (u8 *)entry, true, HW_DESC_TXBUFF_ADDR),11401140+ pskb->len, PCI_DMA_TODEVICE);11321141 kfree_skb(pskb);11421142+ }1133114311341144 /*NB: the beacon data buffer must be 32-bit aligned. */11351145 pskb = ieee80211_beacon_get(hw, mac->vif);
+1-2
drivers/net/xen-netback/interface.c
···340340 unsigned int num_queues = vif->num_queues;341341 int i;342342 unsigned int queue_index;343343- struct xenvif_stats *vif_stats;344343345344 for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++) {346345 unsigned long accum = 0;347346 for (queue_index = 0; queue_index < num_queues; ++queue_index) {348348- vif_stats = &vif->queues[queue_index].stats;347347+ void *vif_stats = &vif->queues[queue_index].stats;349348 accum += *(unsigned long *)(vif_stats + xenvif_stats[i].offset);350349 }351350 data[i] = accum;
+31-15
drivers/net/xen-netback/netback.c
···9696static void make_tx_response(struct xenvif_queue *queue,9797 struct xen_netif_tx_request *txp,9898 s8 st);9999+static void push_tx_responses(struct xenvif_queue *queue);99100100101static inline int tx_work_todo(struct xenvif_queue *queue);101102···658657 do {659658 spin_lock_irqsave(&queue->response_lock, flags);660659 make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);660660+ push_tx_responses(queue);661661 spin_unlock_irqrestore(&queue->response_lock, flags);662662 if (cons == end)663663 break;···13451343{13461344 unsigned int offset = skb_headlen(skb);13471345 skb_frag_t frags[MAX_SKB_FRAGS];13481348- int i;13461346+ int i, f;13491347 struct ubuf_info *uarg;13501348 struct sk_buff *nskb = skb_shinfo(skb)->frag_list;13511349···13851383 frags[i].page_offset = 0;13861384 skb_frag_size_set(&frags[i], len);13871385 }13881388- /* swap out with old one */13891389- memcpy(skb_shinfo(skb)->frags,13901390- frags,13911391- i * sizeof(skb_frag_t));13921392- skb_shinfo(skb)->nr_frags = i;13931393- skb->truesize += i * PAGE_SIZE;1394138613951395- /* remove traces of mapped pages and frag_list */13871387+ /* Copied all the bits from the frag list -- free it. */13961388 skb_frag_list_init(skb);13891389+ xenvif_skb_zerocopy_prepare(queue, nskb);13901390+ kfree_skb(nskb);13911391+13921392+ /* Release all the original (foreign) frags. */13931393+ for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)13941394+ skb_frag_unref(skb, f);13971395 uarg = skb_shinfo(skb)->destructor_arg;13981396 /* increase inflight counter to offset decrement in callback */13991397 atomic_inc(&queue->inflight_packets);14001398 uarg->callback(uarg, true);14011399 skb_shinfo(skb)->destructor_arg = NULL;1402140014031403- xenvif_skb_zerocopy_prepare(queue, nskb);14041404- kfree_skb(nskb);14011401+ /* Fill the skb with the new (local) frags. 
*/14021402+ memcpy(skb_shinfo(skb)->frags, frags, i * sizeof(skb_frag_t));14031403+ skb_shinfo(skb)->nr_frags = i;14041404+ skb->truesize += i * PAGE_SIZE;1405140514061406 return 0;14071407}···16561652 unsigned long flags;1657165316581654 pending_tx_info = &queue->pending_tx_info[pending_idx];16551655+16591656 spin_lock_irqsave(&queue->response_lock, flags);16571657+16601658 make_tx_response(queue, &pending_tx_info->req, status);16611661- index = pending_index(queue->pending_prod);16591659+16601660+ /* Release the pending index before pushing the Tx response so16611661+ * it's available before a new Tx request is pushed by the16621662+ * frontend.16631663+ */16641664+ index = pending_index(queue->pending_prod++);16621665 queue->pending_ring[index] = pending_idx;16631663- /* TX shouldn't use the index before we give it back here */16641664- mb();16651665- queue->pending_prod++;16661666+16671667+ push_tx_responses(queue);16681668+16661669 spin_unlock_irqrestore(&queue->response_lock, flags);16671670}16681671···16801669{16811670 RING_IDX i = queue->tx.rsp_prod_pvt;16821671 struct xen_netif_tx_response *resp;16831683- int notify;1684167216851673 resp = RING_GET_RESPONSE(&queue->tx, i);16861674 resp->id = txp->id;···16891679 RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;1690168016911681 queue->tx.rsp_prod_pvt = ++i;16821682+}16831683+16841684+static void push_tx_responses(struct xenvif_queue *queue)16851685+{16861686+ int notify;16871687+16921688 RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);16931689 if (notify)16941690 notify_remote_via_irq(queue->tx_irq);
+1-4
drivers/net/xen-netfront.c
···1008100810091009static int xennet_change_mtu(struct net_device *dev, int mtu)10101010{10111011- int max = xennet_can_sg(dev) ?10121012- XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER : ETH_DATA_LEN;10111011+ int max = xennet_can_sg(dev) ? XEN_NETIF_MAX_TX_SIZE : ETH_DATA_LEN;1013101210141013 if (mtu > max)10151014 return -EINVAL;···1277127812781279 netdev->ethtool_ops = &xennet_ethtool_ops;12791280 SET_NETDEV_DEV(netdev, &dev->dev);12801280-12811281- netif_set_gso_max_size(netdev, XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER);1282128112831282 np->netdev = netdev;12841283
drivers/of/Kconfig | +1 -2
···
 	bool

 config OF_OVERLAY
-	bool
-	depends on OF
+	bool "Device Tree overlays"
 	select OF_DYNAMIC
 	select OF_RESOLVE
drivers/of/address.c | +8 -3
···
 		return NULL;
 }

-static int of_empty_ranges_quirk(void)
+static int of_empty_ranges_quirk(struct device_node *np)
 {
 	if (IS_ENABLED(CONFIG_PPC)) {
-		/* To save cycles, we cache the result */
+		/* To save cycles, we cache the result for global "Mac" setting */
 		static int quirk_state = -1;

+		/* PA-SEMI sdc DT bug */
+		if (of_device_is_compatible(np, "1682m-sdc"))
+			return true;
+
+		/* Make quirk cached */
 		if (quirk_state < 0)
 			quirk_state =
 				of_machine_is_compatible("Power Macintosh") ||
···
 	 * This code is only enabled on powerpc. --gcl
 	 */
 	ranges = of_get_property(parent, rprop, &rlen);
-	if (ranges == NULL && !of_empty_ranges_quirk()) {
+	if (ranges == NULL && !of_empty_ranges_quirk(parent)) {
 		pr_debug("OF: no ranges; cannot translate\n");
 		return 1;
 	}
drivers/of/base.c | +10 -8
···
 		const char *path)
 {
 	struct device_node *child;
-	int len = strchrnul(path, '/') - path;
-	int term;
+	int len;

+	len = strcspn(path, "/:");
 	if (!len)
 		return NULL;
-
-	term = strchrnul(path, ':') - path;
-	if (term < len)
-		len = term;

 	__for_each_child_of_node(parent, child) {
 		const char *name = strrchr(child->full_name, '/');
···
 	/* The path could begin with an alias */
 	if (*path != '/') {
-		char *p = strchrnul(path, '/');
-		int len = separator ? separator - path : p - path;
+		int len;
+		const char *p = separator;
+
+		if (!p)
+			p = strchrnul(path, '/');
+		len = p - path;

 		/* of_aliases must not be NULL */
 		if (!of_aliases)
···
 		path++; /* Increment past '/' delimiter */
 		np = __of_find_node_by_path(np, path);
 		path = strchrnul(path, '/');
+		if (separator && separator < path)
+			break;
 	}
 	raw_spin_unlock_irqrestore(&devtree_lock, flags);
 	return np;
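The of/base.c hunk above replaces two `strchrnul()` scans with a single `strcspn()` call to find the length of a path component up to either a `'/'` or a `':'` delimiter. As a minimal userspace sketch (the `component_len` helper is hypothetical, not part of the patch), the equivalence looks like this:

```c
#include <string.h>

/* Length of the first device-tree path component, stopping at '/'
 * (path separator) or ':' (alias argument separator), as the
 * strcspn() form in the patch computes it in one pass. */
static size_t component_len(const char *path)
{
	return strcspn(path, "/:");
}
```

`strcspn()` returns the length of the initial segment containing none of the reject characters, so it subsumes the old two-scan-then-take-minimum logic.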
drivers/of/irq.c | +7 -3
···
 	struct device_node *p;
 	const __be32 *intspec, *tmp, *addr;
 	u32 intsize, intlen;
-	int i, res = -EINVAL;
+	int i, res;

 	pr_debug("of_irq_parse_one: dev=%s, index=%d\n", of_node_full_name(device), index);
···
 	/* Get size of interrupt specifier */
 	tmp = of_get_property(p, "#interrupt-cells", NULL);
-	if (tmp == NULL)
+	if (tmp == NULL) {
+		res = -EINVAL;
 		goto out;
+	}
 	intsize = be32_to_cpu(*tmp);

 	pr_debug(" intsize=%d intlen=%d\n", intsize, intlen);

 	/* Check index */
-	if ((index + 1) * intsize > intlen)
+	if ((index + 1) * intsize > intlen) {
+		res = -EINVAL;
 		goto out;
+	}

 	/* Copy intspec into irq structure */
 	intspec += index * intsize;
···
 	return false;
 }

-static int xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
+static void __iomem *xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
 				  int offset)
 {
 	struct xgene_pcie_port *port = bus->sysdata;
···
 		return NULL;

 	xgene_pcie_set_rtdid_reg(bus, devfn);
-	return xgene_pcie_get_cfg_base(bus);
+	return xgene_pcie_get_cfg_base(bus) + offset;
 }

 static struct pci_ops xgene_pcie_ops = {
drivers/pci/pci-sysfs.c | +3 -2
···
 	struct pci_dev *pdev = to_pci_dev(dev);
 	char *driver_override, *old = pdev->driver_override, *cp;

-	if (count > PATH_MAX)
+	/* We need to keep extra room for a newline */
+	if (count >= (PAGE_SIZE - 1))
 		return -EINVAL;

 	driver_override = kstrndup(buf, count, GFP_KERNEL);
···
 {
 	struct pci_dev *pdev = to_pci_dev(dev);

-	return sprintf(buf, "%s\n", pdev->driver_override);
+	return snprintf(buf, PAGE_SIZE, "%s\n", pdev->driver_override);
 }
 static DEVICE_ATTR_RW(driver_override);
drivers/pcmcia/Kconfig | +3 -9
···
 	tristate "CardBus yenta-compatible bridge support"
 	depends on PCI
 	select CARDBUS if !EXPERT
-	select PCCARD_NONSTATIC if PCMCIA != n && ISA
-	select PCCARD_PCI if PCMCIA !=n && !ISA
+	select PCCARD_NONSTATIC if PCMCIA != n
 	---help---
 	  This option enables support for CardBus host bridges.  Virtually
 	  all modern PCMCIA bridges are CardBus compatible.  A "bridge" is
···
 config PD6729
 	tristate "Cirrus PD6729 compatible bridge support"
 	depends on PCMCIA && PCI
-	select PCCARD_NONSTATIC if PCMCIA != n && ISA
-	select PCCARD_PCI if PCMCIA !=n && !ISA
+	select PCCARD_NONSTATIC
 	help
 	  This provides support for the Cirrus PD6729 PCI-to-PCMCIA bridge
 	  device, found in some older laptops and PCMCIA card readers.
···
 config I82092
 	tristate "i82092 compatible bridge support"
 	depends on PCMCIA && PCI
-	select PCCARD_NONSTATIC if PCMCIA != n && ISA
-	select PCCARD_PCI if PCMCIA !=n && !ISA
+	select PCCARD_NONSTATIC
 	help
 	  This provides support for the Intel I82092AA PCI-to-PCMCIA bridge device,
 	  found in some older laptops and more commonly in evaluation boards for the
···
 	help
 	  Say Y here to support the CompactFlash controller on the
 	  PA Semi Electra eval board.
-
-config PCCARD_PCI
-	bool

 config PCCARD_NONSTATIC
 	bool
···
 		return ret;

 	clk_disable_unprepare(phy->clk);
-	if (ret)
-		return ret;

 	return 0;
 }
···
 	/* Power up usb phy analog blocks by set siddq 0 */
 	ret = rockchip_usb_phy_power(phy, 0);
-	if (ret)
+	if (ret) {
+		clk_disable_unprepare(phy->clk);
 		return ret;
+	}

 	return 0;
 }
drivers/phy/phy-ti-pipe3.c | +4 -8
···
 		cpu_relax();
 		val = ti_pipe3_readl(phy->pll_ctrl_base, PLL_STATUS);
 		if (val & PLL_LOCK)
-			break;
+			return 0;
 	} while (!time_after(jiffies, timeout));

-	if (!(val & PLL_LOCK)) {
-		dev_err(phy->dev, "DPLL failed to lock\n");
-		return -EBUSY;
-	}
-
-	return 0;
+	dev_err(phy->dev, "DPLL failed to lock\n");
+	return -EBUSY;
 }

 static int ti_pipe3_dpll_program(struct ti_pipe3 *phy)
···
 module_platform_driver(ti_pipe3_driver);

-MODULE_ALIAS("platform: ti_pipe3");
+MODULE_ALIAS("platform:ti_pipe3");
 MODULE_AUTHOR("Texas Instruments Inc.");
 MODULE_DESCRIPTION("TI PIPE3 phy driver");
 MODULE_LICENSE("GPL v2");
···
 #define TIME_WINDOW_MAX_MSEC 40000
 #define TIME_WINDOW_MIN_MSEC 250
-
+#define ENERGY_UNIT_SCALE    1000 /* scale from driver unit to powercap unit */
 enum unit_type {
 	ARBITRARY_UNIT, /* no translation */
 	POWER_UNIT,
···
 	struct rapl_power_limit rpl[NR_POWER_LIMITS];
 	u64 attr_map; /* track capabilities */
 	unsigned int state;
+	unsigned int domain_energy_unit;
 	int package_id;
 };
 #define power_zone_to_rapl_domain(_zone) \
···
 	void (*set_floor_freq)(struct rapl_domain *rd, bool mode);
 	u64 (*compute_time_window)(struct rapl_package *rp, u64 val,
 				bool to_raw);
+	unsigned int dram_domain_energy_unit;
 };
 static struct rapl_defaults *rapl_defaults;
···
 static int rapl_write_data_raw(struct rapl_domain *rd,
 			enum rapl_primitives prim,
 			unsigned long long value);
-static u64 rapl_unit_xlate(int package, enum unit_type type, u64 value,
+static u64 rapl_unit_xlate(struct rapl_domain *rd, int package,
+			enum unit_type type, u64 value,
 			int to_raw);
 static void package_power_limit_irq_save(int package_id);
···
 static int get_max_energy_counter(struct powercap_zone *pcd_dev, u64 *energy)
 {
-	*energy = rapl_unit_xlate(0, ENERGY_UNIT, ENERGY_STATUS_MASK, 0);
+	struct rapl_domain *rd = power_zone_to_rapl_domain(pcd_dev);
+
+	*energy = rapl_unit_xlate(rd, 0, ENERGY_UNIT, ENERGY_STATUS_MASK, 0);
 	return 0;
 }
···
 		rd->msrs[4] = MSR_DRAM_POWER_INFO;
 		rd->rpl[0].prim_id = PL1_ENABLE;
 		rd->rpl[0].name = pl1_name;
+		rd->domain_energy_unit =
+			rapl_defaults->dram_domain_energy_unit;
+		if (rd->domain_energy_unit)
+			pr_info("DRAM domain energy unit %dpj\n",
+				rd->domain_energy_unit);
 		break;
 	}
 	if (mask) {
···
 	}
 }

-static u64 rapl_unit_xlate(int package, enum unit_type type, u64 value,
+static u64 rapl_unit_xlate(struct rapl_domain *rd, int package,
+			enum unit_type type, u64 value,
 			int to_raw)
 {
 	u64 units = 1;
 	struct rapl_package *rp;
+	u64 scale = 1;

 	rp = find_package_by_id(package);
 	if (!rp)
···
 		units = rp->power_unit;
 		break;
 	case ENERGY_UNIT:
-		units = rp->energy_unit;
+		scale = ENERGY_UNIT_SCALE;
+		/* per domain unit takes precedence */
+		if (rd && rd->domain_energy_unit)
+			units = rd->domain_energy_unit;
+		else
+			units = rp->energy_unit;
 		break;
 	case TIME_UNIT:
 		return rapl_defaults->compute_time_window(rp, value, to_raw);
···
 	};

 	if (to_raw)
-		return div64_u64(value, units);
+		return div64_u64(value, units) * scale;

 	value *= units;

-	return value;
+	return div64_u64(value, scale);
 }

 /* in the order of enum rapl_primitives */
···
 	final = value & rp->mask;
 	final = final >> rp->shift;
 	if (xlate)
-		*data = rapl_unit_xlate(rd->package_id, rp->unit, final, 0);
+		*data = rapl_unit_xlate(rd, rd->package_id, rp->unit, final, 0);
 	else
 		*data = final;
···
 			"failed to read msr 0x%x on cpu %d\n", msr, cpu);
 		return -EIO;
 	}
-	value = rapl_unit_xlate(rd->package_id, rp->unit, value, 1);
+	value = rapl_unit_xlate(rd, rd->package_id, rp->unit, value, 1);
 	msr_val &= ~rp->mask;
 	msr_val |= value << rp->shift;
 	if (wrmsrl_safe_on_cpu(cpu, msr, msr_val)) {
···
 * calculate units differ on different CPUs.
 * We convert the units to below format based on CPUs.
 * i.e.
- * energy unit: microJoules : Represented in microJoules by default
+ * energy unit: picoJoules  : Represented in picoJoules by default
 * power unit : microWatts  : Represented in milliWatts by default
 * time unit  : microseconds: Represented in seconds by default
 */
···
 	}

 	value = (msr_val & ENERGY_UNIT_MASK) >> ENERGY_UNIT_OFFSET;
-	rp->energy_unit = 1000000 / (1 << value);
+	rp->energy_unit = ENERGY_UNIT_SCALE * 1000000 / (1 << value);

 	value = (msr_val & POWER_UNIT_MASK) >> POWER_UNIT_OFFSET;
 	rp->power_unit = 1000000 / (1 << value);
···
 	value = (msr_val & TIME_UNIT_MASK) >> TIME_UNIT_OFFSET;
 	rp->time_unit = 1000000 / (1 << value);

-	pr_debug("Core CPU package %d energy=%duJ, time=%dus, power=%duW\n",
+	pr_debug("Core CPU package %d energy=%dpJ, time=%dus, power=%duW\n",
 		rp->id, rp->energy_unit, rp->time_unit, rp->power_unit);

 	return 0;
···
 		return -ENODEV;
 	}
 	value = (msr_val & ENERGY_UNIT_MASK) >> ENERGY_UNIT_OFFSET;
-	rp->energy_unit = 1 << value;
+	rp->energy_unit = ENERGY_UNIT_SCALE * 1 << value;

 	value = (msr_val & POWER_UNIT_MASK) >> POWER_UNIT_OFFSET;
 	rp->power_unit = (1 << value) * 1000;
···
 	value = (msr_val & TIME_UNIT_MASK) >> TIME_UNIT_OFFSET;
 	rp->time_unit = 1000000 / (1 << value);

-	pr_debug("Atom package %d energy=%duJ, time=%dus, power=%duW\n",
+	pr_debug("Atom package %d energy=%dpJ, time=%dus, power=%duW\n",
 		rp->id, rp->energy_unit, rp->time_unit, rp->power_unit);

 	return 0;
···
 	.compute_time_window = rapl_compute_time_window_core,
 };

+static const struct rapl_defaults rapl_defaults_hsw_server = {
+	.check_unit = rapl_check_unit_core,
+	.set_floor_freq = set_floor_freq_default,
+	.compute_time_window = rapl_compute_time_window_core,
+	.dram_domain_energy_unit = 15300,
+};
+
 static const struct rapl_defaults rapl_defaults_atom = {
 	.check_unit = rapl_check_unit_atom,
 	.set_floor_freq = set_floor_freq_atom,
···
 	RAPL_CPU(0x3a, rapl_defaults_core),/* Ivy Bridge */
 	RAPL_CPU(0x3c, rapl_defaults_core),/* Haswell */
 	RAPL_CPU(0x3d, rapl_defaults_core),/* Broadwell */
-	RAPL_CPU(0x3f, rapl_defaults_core),/* Haswell */
+	RAPL_CPU(0x3f, rapl_defaults_hsw_server),/* Haswell servers */
 	RAPL_CPU(0x45, rapl_defaults_core),/* Haswell ULT */
 	RAPL_CPU(0x4C, rapl_defaults_atom),/* Braswell */
 	RAPL_CPU(0x4A, rapl_defaults_atom),/* Tangier */
drivers/regulator/core.c | +17 -24
···
 	}

 	if (rdev->ena_pin) {
-		ret = regulator_ena_gpio_ctrl(rdev, true);
-		if (ret < 0)
-			return ret;
-		rdev->ena_gpio_state = 1;
+		if (!rdev->ena_gpio_state) {
+			ret = regulator_ena_gpio_ctrl(rdev, true);
+			if (ret < 0)
+				return ret;
+			rdev->ena_gpio_state = 1;
+		}
 	} else if (rdev->desc->ops->enable) {
 		ret = rdev->desc->ops->enable(rdev);
 		if (ret < 0)
···
 	trace_regulator_disable(rdev_get_name(rdev));

 	if (rdev->ena_pin) {
-		ret = regulator_ena_gpio_ctrl(rdev, false);
-		if (ret < 0)
-			return ret;
-		rdev->ena_gpio_state = 0;
+		if (rdev->ena_gpio_state) {
+			ret = regulator_ena_gpio_ctrl(rdev, false);
+			if (ret < 0)
+				return ret;
+			rdev->ena_gpio_state = 0;
+		}

 	} else if (rdev->desc->ops->disable) {
 		ret = rdev->desc->ops->disable(rdev);
···
 	if (attr == &dev_attr_requested_microamps.attr)
 		return rdev->desc->type == REGULATOR_CURRENT ? mode : 0;

-	/* all the other attributes exist to support constraints;
-	 * don't show them if there are no constraints, or if the
-	 * relevant supporting methods are missing.
-	 */
-	if (!rdev->constraints)
-		return 0;
-
 	/* constraints need specific supporting methods */
 	if (attr == &dev_attr_min_microvolts.attr ||
 	    attr == &dev_attr_max_microvolts.attr)
···
 			config->ena_gpio, ret);
 		goto wash;
 		}
-
-		if (config->ena_gpio_flags & GPIOF_OUT_INIT_HIGH)
-			rdev->ena_gpio_state = 1;
-
-		if (config->ena_gpio_invert)
-			rdev->ena_gpio_state = !rdev->ena_gpio_state;
 	}

 	/* set regulator constraints */
···
 	list_for_each_entry(rdev, &regulator_list, list) {
 		mutex_lock(&rdev->mutex);
 		if (rdev->use_count > 0 || rdev->constraints->always_on) {
-			error = _regulator_do_enable(rdev);
-			if (error)
-				ret = error;
+			if (!_regulator_is_enabled(rdev)) {
+				error = _regulator_do_enable(rdev);
+				if (error)
+					ret = error;
+			}
 		} else {
 			if (!have_full_constraints())
 				goto unlock;
drivers/regulator/da9210-regulator.c | +9
···
 	config.regmap = chip->regmap;
 	config.of_node = dev->of_node;

+	/* Mask all interrupt sources to deassert interrupt line */
+	error = regmap_write(chip->regmap, DA9210_REG_MASK_A, ~0);
+	if (!error)
+		error = regmap_write(chip->regmap, DA9210_REG_MASK_B, ~0);
+	if (error) {
+		dev_err(&i2c->dev, "Failed to write to mask reg: %d\n", error);
+		return error;
+	}
+
 	rdev = devm_regulator_register(&i2c->dev, &da9210_reg, &config);
 	if (IS_ERR(rdev)) {
 		dev_err(&i2c->dev, "Failed to register DA9210 regulator\n");
drivers/regulator/palmas-regulator.c | +4
···
 	if (!pmic)
 		return -ENOMEM;

+	if (of_device_is_compatible(node, "ti,tps659038-pmic"))
+		palmas_generic_regs_info[PALMAS_REG_REGEN2].ctrl_addr =
+			TPS659038_REGEN2_CTRL;
+
 	pmic->dev = &pdev->dev;
 	pmic->palmas = palmas;
 	palmas->pmic = pmic;
···
 	void *bufs_va;
 	int err = 0, i;
 	size_t total_buf_space;
+	bool notify;

 	vrp = kzalloc(sizeof(*vrp), GFP_KERNEL);
 	if (!vrp)
···
 		}
 	}

+	/*
+	 * Prepare to kick but don't notify yet - we can't do this before
+	 * device is ready.
+	 */
+	notify = virtqueue_kick_prepare(vrp->rvq);
+
+	/* From this point on, we can notify and get callbacks. */
+	virtio_device_ready(vdev);
+
 	/* tell the remote processor it can start sending messages */
-	virtqueue_kick(vrp->rvq);
+	/*
+	 * this might be concurrent with callbacks, but we are only
+	 * doing notify, not a full kick here, so that's ok.
+	 */
+	if (notify)
+		virtqueue_notify(vrp->rvq);

 	dev_info(&vdev->dev, "rpmsg host is online\n");
drivers/rtc/rtc-at91rm9200.c | +48 -14
···
 #include <linux/io.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
+#include <linux/suspend.h>
 #include <linux/uaccess.h>

 #include "rtc-at91rm9200.h"
···
 static int irq;
 static DEFINE_SPINLOCK(at91_rtc_lock);
 static u32 at91_rtc_shadow_imr;
+static bool suspended;
+static DEFINE_SPINLOCK(suspended_lock);
+static unsigned long cached_events;
+static u32 at91_rtc_imr;

 static void at91_rtc_write_ier(u32 mask)
 {
···
 	struct rtc_device *rtc = platform_get_drvdata(pdev);
 	unsigned int rtsr;
 	unsigned long events = 0;
+	int ret = IRQ_NONE;

+	spin_lock(&suspended_lock);
 	rtsr = at91_rtc_read(AT91_RTC_SR) & at91_rtc_read_imr();
 	if (rtsr) {		/* this interrupt is shared!  Is it ours? */
 		if (rtsr & AT91_RTC_ALARM)
···
 		at91_rtc_write(AT91_RTC_SCCR, rtsr);	/* clear status reg */

-		rtc_update_irq(rtc, 1, events);
+		if (!suspended) {
+			rtc_update_irq(rtc, 1, events);

-		dev_dbg(&pdev->dev, "%s(): num=%ld, events=0x%02lx\n", __func__,
-			events >> 8, events & 0x000000FF);
+			dev_dbg(&pdev->dev, "%s(): num=%ld, events=0x%02lx\n",
+				__func__, events >> 8, events & 0x000000FF);
+		} else {
+			cached_events |= events;
+			at91_rtc_write_idr(at91_rtc_imr);
+			pm_system_wakeup();
+		}

-		return IRQ_HANDLED;
+		ret = IRQ_HANDLED;
 	}
-	return IRQ_NONE;		/* not handled */
+	spin_unlock(&suspended_lock);
+
+	return ret;
 }

 static const struct at91_rtc_config at91rm9200_config = {
···
 					AT91_RTC_CALEV);

 	ret = devm_request_irq(&pdev->dev, irq, at91_rtc_interrupt,
-			       IRQF_SHARED,
-			       "at91_rtc", pdev);
+			       IRQF_SHARED | IRQF_COND_SUSPEND,
+			       "at91_rtc", pdev);
 	if (ret) {
 		dev_err(&pdev->dev, "IRQ %d already in use.\n", irq);
 		return ret;
···
 /* AT91RM9200 RTC Power management control */

-static u32 at91_rtc_imr;
-
 static int at91_rtc_suspend(struct device *dev)
 {
 	/* this IRQ is shared with DBGU and other hardware which isn't
···
 	at91_rtc_imr = at91_rtc_read_imr()
 			& (AT91_RTC_ALARM|AT91_RTC_SECEV);
 	if (at91_rtc_imr) {
-		if (device_may_wakeup(dev))
+		if (device_may_wakeup(dev)) {
+			unsigned long flags;
+
 			enable_irq_wake(irq);
-		else
+
+			spin_lock_irqsave(&suspended_lock, flags);
+			suspended = true;
+			spin_unlock_irqrestore(&suspended_lock, flags);
+		} else {
 			at91_rtc_write_idr(at91_rtc_imr);
+		}
 	}
 	return 0;
 }

 static int at91_rtc_resume(struct device *dev)
 {
+	struct rtc_device *rtc = dev_get_drvdata(dev);
+
 	if (at91_rtc_imr) {
-		if (device_may_wakeup(dev))
+		if (device_may_wakeup(dev)) {
+			unsigned long flags;
+
+			spin_lock_irqsave(&suspended_lock, flags);
+
+			if (cached_events) {
+				rtc_update_irq(rtc, 1, cached_events);
+				cached_events = 0;
+			}
+
+			suspended = false;
+			spin_unlock_irqrestore(&suspended_lock, flags);
+
 			disable_irq_wake(irq);
-		else
-			at91_rtc_write_ier(at91_rtc_imr);
+		}
+		at91_rtc_write_ier(at91_rtc_imr);
 	}
 	return 0;
 }
drivers/rtc/rtc-at91sam9.c | +63 -14
···
 #include <linux/io.h>
 #include <linux/mfd/syscon.h>
 #include <linux/regmap.h>
+#include <linux/suspend.h>
 #include <linux/clk.h>

 /*
···
 	unsigned int		gpbr_offset;
 	int			irq;
 	struct clk		*sclk;
+	bool			suspended;
+	unsigned long		events;
+	spinlock_t		lock;
 };

 #define rtt_readl(rtc, field) \
···
 	return 0;
 }

-/*
- * IRQ handler for the RTC
- */
-static irqreturn_t at91_rtc_interrupt(int irq, void *_rtc)
+static irqreturn_t at91_rtc_cache_events(struct sam9_rtc *rtc)
 {
-	struct sam9_rtc *rtc = _rtc;
 	u32 sr, mr;
-	unsigned long events = 0;

 	/* Shared interrupt may be for another device.  Note: reading
 	 * SR clears it, so we must only read it in this irq handler!
···
 	/* alarm status */
 	if (sr & AT91_RTT_ALMS)
-		events |= (RTC_AF | RTC_IRQF);
+		rtc->events |= (RTC_AF | RTC_IRQF);

 	/* timer update/increment */
 	if (sr & AT91_RTT_RTTINC)
-		events |= (RTC_UF | RTC_IRQF);
-
-	rtc_update_irq(rtc->rtcdev, 1, events);
-
-	pr_debug("%s: num=%ld, events=0x%02lx\n", __func__,
-		events >> 8, events & 0x000000FF);
+		rtc->events |= (RTC_UF | RTC_IRQF);

 	return IRQ_HANDLED;
+}
+
+static void at91_rtc_flush_events(struct sam9_rtc *rtc)
+{
+	if (!rtc->events)
+		return;
+
+	rtc_update_irq(rtc->rtcdev, 1, rtc->events);
+	rtc->events = 0;
+
+	pr_debug("%s: num=%ld, events=0x%02lx\n", __func__,
+		rtc->events >> 8, rtc->events & 0x000000FF);
+}
+
+/*
+ * IRQ handler for the RTC
+ */
+static irqreturn_t at91_rtc_interrupt(int irq, void *_rtc)
+{
+	struct sam9_rtc *rtc = _rtc;
+	int ret;
+
+	spin_lock(&rtc->lock);
+
+	ret = at91_rtc_cache_events(rtc);
+
+	/* We're called in suspended state */
+	if (rtc->suspended) {
+		/* Mask irqs coming from this peripheral */
+		rtt_writel(rtc, MR,
+			   rtt_readl(rtc, MR) &
+			   ~(AT91_RTT_ALMIEN | AT91_RTT_RTTINCIEN));
+		/* Trigger a system wakeup */
+		pm_system_wakeup();
+	} else {
+		at91_rtc_flush_events(rtc);
+	}
+
+	spin_unlock(&rtc->lock);
+
+	return ret;
 }

 static const struct rtc_class_ops at91_rtc_ops = {
···
 	/* register irq handler after we know what name we'll use */
 	ret = devm_request_irq(&pdev->dev, rtc->irq, at91_rtc_interrupt,
-			       IRQF_SHARED, dev_name(&rtc->rtcdev->dev), rtc);
+			       IRQF_SHARED | IRQF_COND_SUSPEND,
+			       dev_name(&rtc->rtcdev->dev), rtc);
 	if (ret) {
 		dev_dbg(&pdev->dev, "can't share IRQ %d?\n", rtc->irq);
 		return ret;
···
 	rtc->imr = mr & (AT91_RTT_ALMIEN | AT91_RTT_RTTINCIEN);
 	if (rtc->imr) {
 		if (device_may_wakeup(dev) && (mr & AT91_RTT_ALMIEN)) {
+			unsigned long flags;
+
 			enable_irq_wake(rtc->irq);
+			spin_lock_irqsave(&rtc->lock, flags);
+			rtc->suspended = true;
+			spin_unlock_irqrestore(&rtc->lock, flags);
 			/* don't let RTTINC cause wakeups */
 			if (mr & AT91_RTT_RTTINCIEN)
 				rtt_writel(rtc, MR, mr & ~AT91_RTT_RTTINCIEN);
···
 	u32 mr;

 	if (rtc->imr) {
+		unsigned long flags;
+
 		if (device_may_wakeup(dev))
 			disable_irq_wake(rtc->irq);
 		mr = rtt_readl(rtc, MR);
 		rtt_writel(rtc, MR, mr | rtc->imr);
+
+		spin_lock_irqsave(&rtc->lock, flags);
+		rtc->suspended = false;
+		at91_rtc_cache_events(rtc);
+		at91_rtc_flush_events(rtc);
+		spin_unlock_irqrestore(&rtc->lock, flags);
 	}

 	return 0;
···
 	struct sas_discovery_event *ev = to_sas_discovery_event(work);
 	struct asd_sas_port *port = ev->port;
 	struct sas_ha_struct *ha = port->ha;
+	struct domain_device *ddev = port->port_dev;

 	/* prevent revalidation from finding sata links in recovery */
 	mutex_lock(&ha->disco_mutex);
···
 	SAS_DPRINTK("REVALIDATING DOMAIN on port %d, pid:%d\n", port->id,
 		    task_pid_nr(current));

-	if (port->port_dev)
-		res = sas_ex_revalidate_domain(port->port_dev);
+	if (ddev && (ddev->dev_type == SAS_FANOUT_EXPANDER_DEVICE ||
+		     ddev->dev_type == SAS_EDGE_EXPANDER_DEVICE))
+		res = sas_ex_revalidate_domain(ddev);

 	SAS_DPRINTK("done REVALIDATING DOMAIN on port %d, pid:%d, res 0x%x\n",
 		    port->id, task_pid_nr(current), res);
drivers/scsi/qla2xxx/tcm_qla2xxx.c | +1 -1
···
 	/*
 	 * Finally register the new FC Nexus with TCM
 	 */
-	__transport_register_session(se_nacl->se_tpg, se_nacl, se_sess, sess);
+	transport_register_session(se_nacl->se_tpg, se_nacl, se_sess, sess);

 	return 0;
 }
drivers/spi/spi-atmel.c | +6 -6
···
 			(unsigned long long)xfer->rx_dma);
 	}

-	/* REVISIT: We're waiting for ENDRX before we start the next
+	/* REVISIT: We're waiting for RXBUFF before we start the next
 	 * transfer because we need to handle some difficult timing
-	 * issues otherwise. If we wait for ENDTX in one transfer and
-	 * then starts waiting for ENDRX in the next, it's difficult
-	 * to tell the difference between the ENDRX interrupt we're
-	 * actually waiting for and the ENDRX interrupt of the
+	 * issues otherwise. If we wait for TXBUFE in one transfer and
+	 * then start waiting for RXBUFF in the next, it's difficult
+	 * to tell the difference between the RXBUFF interrupt we're
+	 * actually waiting for and the RXBUFF interrupt of the
 	 * previous transfer.
 	 *
 	 * It should be doable, though. Just not now...
 	 */
-	spi_writel(as, IER, SPI_BIT(ENDRX) | SPI_BIT(OVRES));
+	spi_writel(as, IER, SPI_BIT(RXBUFF) | SPI_BIT(OVRES));
 	spi_writel(as, PTCR, SPI_BIT(TXTEN) | SPI_BIT(RXTEN));
 }
···
 	unsigned long flags;
 	int ret;

+	if (xfer->len > SPFI_TRANSACTION_TSIZE_MASK) {
+		dev_err(spfi->dev,
+			"Transfer length (%d) is greater than the max supported (%d)",
+			xfer->len, SPFI_TRANSACTION_TSIZE_MASK);
+		return -EINVAL;
+	}
+
 	/*
 	 * Stop all DMA and reset the controller if the previous transaction
 	 * timed-out and never completed its DMA.
···
 			  unsigned int *data)
 {
 	struct pci1710_private *devpriv = dev->private;
-	unsigned int chan = CR_CHAN(insn->chanspec);
 	int ret = 0;
 	int i;
···
 		if (ret)
 			break;

-		ret = pci171x_ai_read_sample(dev, s, chan, &val);
+		ret = pci171x_ai_read_sample(dev, s, 0, &val);
 		if (ret)
 			break;
···
 #include <linux/delay.h>
 #include <linux/gpio.h>
 #include <linux/module.h>
+#include <linux/bitops.h>

 #include <linux/iio/iio.h>
 #include <linux/iio/sysfs.h>
···
 		break;
 	case IIO_ANGL_VEL:
 		vel = (((s16)(st->rx[0])) << 4) | ((st->rx[1] & 0xF0) >> 4);
-		vel = (vel << 4) >> 4;
+		vel = sign_extend32(vel, 11);
 		*val = vel;
 		break;
 	default:
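The hunk above replaces an ad-hoc shift pair with the kernel's `sign_extend32(value, 11)`, which treats bit 11 as the sign bit of a 12-bit two's-complement field. A portable userspace sketch of the same operation (the `sign_extend_32` helper here is illustrative, not the kernel implementation):

```c
#include <stdint.h>

/* Interpret the low (index + 1) bits of `value` as a two's-complement
 * number, mirroring the semantics of the kernel's sign_extend32().
 * Shifting is done on the unsigned value, then the arithmetic right
 * shift of the signed cast propagates the sign bit. */
static int32_t sign_extend_32(uint32_t value, unsigned int index)
{
	unsigned int shift = 31 - index;

	return (int32_t)(value << shift) >> shift;
}
```

For the 12-bit angular-velocity field, `sign_extend_32(0xFFF, 11)` yields -1 and `sign_extend_32(0x7FF, 11)` yields 2047.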
drivers/staging/vt6655/device_main.c | +15 -17
···
 	/* zonetype initial */
 	pDevice->byOriginalZonetype = pDevice->abyEEPROM[EEP_OFS_ZONETYPE];

-	/* Get RFType */
-	pDevice->byRFType = SROMbyReadEmbedded(pDevice->PortOffset, EEP_OFS_RFTYPE);
-
-	/* force change RevID for VT3253 emu */
-	if ((pDevice->byRFType & RF_EMU) != 0)
-		pDevice->byRevId = 0x80;
-
-	pDevice->byRFType &= RF_MASK;
-	pr_debug("pDevice->byRFType = %x\n", pDevice->byRFType);
-
 	if (!pDevice->bZoneRegExist)
 		pDevice->byZoneType = pDevice->abyEEPROM[EEP_OFS_ZONETYPE];
···
 {
 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
 	PSTxDesc head_td;
-	u32 dma_idx = TYPE_AC0DMA;
+	u32 dma_idx;
 	unsigned long flags;

 	spin_lock_irqsave(&priv->lock, flags);

-	if (!ieee80211_is_data(hdr->frame_control))
+	if (ieee80211_is_data(hdr->frame_control))
+		dma_idx = TYPE_AC0DMA;
+	else
 		dma_idx = TYPE_TXDMA0;

 	if (AVAIL_TD(priv, dma_idx) < 1) {
···
 	head_td->m_td1TD1.byTCR = 0;

 	head_td->pTDInfo->skb = skb;
+
+	if (dma_idx == TYPE_AC0DMA)
+		head_td->pTDInfo->byFlags = TD_FLAGS_NETIF_SKB;

 	priv->iTDUsed[dma_idx]++;
···
 	head_td->buff_addr = cpu_to_le32(head_td->pTDInfo->skb_dma);

-	if (dma_idx == TYPE_AC0DMA) {
-		head_td->pTDInfo->byFlags = TD_FLAGS_NETIF_SKB;
-
+	if (head_td->pTDInfo->byFlags & TD_FLAGS_NETIF_SKB)
 		MACvTransmitAC0(priv->PortOffset);
-	} else {
+	else
 		MACvTransmit0(priv->PortOffset);
-	}

 	spin_unlock_irqrestore(&priv->lock, flags);
···
 	/* initial to reload eeprom */
 	MACvInitialize(priv->PortOffset);
 	MACvReadEtherAddress(priv->PortOffset, priv->abyCurrentNetAddr);
+
+	/* Get RFType */
+	priv->byRFType = SROMbyReadEmbedded(priv->PortOffset, EEP_OFS_RFTYPE);
+	priv->byRFType &= RF_MASK;
+
+	dev_dbg(&pcid->dev, "RF Type = %x\n", priv->byRFType);

 	device_get_options(priv);
 	device_set_options(priv);
drivers/staging/vt6655/rf.c | +1
···
 		break;
 	case RATE_6M:
 	case RATE_9M:
+	case RATE_12M:
 	case RATE_18M:
 		byPwr = priv->abyOFDMPwrTbl[uCH];
 		if (priv->byRFType == RF_UW2452)
···
 	pr_debug("Closing iSCSI connection CID %hu on SID:"
 		" %u\n", conn->cid, sess->sid);
 	/*
-	 * Always up conn_logout_comp just in case the RX Thread is sleeping
-	 * and the logout response never got sent because the connection
-	 * failed.
+	 * Always up conn_logout_comp for the traditional TCP case just in case
+	 * the RX Thread in iscsi_target_rx_opcode() is sleeping and the logout
+	 * response never got sent because the connection failed.
+	 *
+	 * However for iser-target, isert_wait4logout() is using conn_logout_comp
+	 * to signal logout response TX interrupt completion.  Go ahead and skip
+	 * this for iser since isert_rx_opcode() does not wait on logout failure,
+	 * and to avoid iscsi_conn pointer dereference in iser-target code.
 	 */
-	complete(&conn->conn_logout_comp);
+	if (conn->conn_transport->transport_type == ISCSI_TCP)
+		complete(&conn->conn_logout_comp);

 	iscsi_release_thread_set(conn);
···
 		transport_free_session(tl_nexus->se_sess);
 		goto out;
 	}
-	/*
-	 * Now, register the SAS I_T Nexus as active with the call to
-	 * transport_register_session()
-	 */
-	__transport_register_session(se_tpg, tl_nexus->se_sess->se_node_acl,
+	/* Now, register the SAS I_T Nexus as active. */
+	transport_register_session(se_tpg, tl_nexus->se_sess->se_node_acl,
 			tl_nexus->se_sess, tl_nexus);
 	tl_tpg->tl_nexus = tl_nexus;
 	pr_debug("TCM_Loop_ConfigFS: Established I_T Nexus to emulated"
drivers/target/target_core_device.c | +29 -3
···
 	return aligned_max_sectors;
 }

+bool se_dev_check_wce(struct se_device *dev)
+{
+	bool wce = false;
+
+	if (dev->transport->get_write_cache)
+		wce = dev->transport->get_write_cache(dev);
+	else if (dev->dev_attrib.emulate_write_cache > 0)
+		wce = true;
+
+	return wce;
+}
+
 int se_dev_set_max_unmap_lba_count(
 	struct se_device *dev,
 	u32 max_unmap_lba_count)
···
 		pr_err("Illegal value %d\n", flag);
 		return -EINVAL;
 	}
+	if (flag &&
+	    dev->transport->get_write_cache) {
+		pr_err("emulate_fua_write not supported for this device\n");
+		return -EINVAL;
+	}
+	if (dev->export_count) {
+		pr_err("emulate_fua_write cannot be changed with active"
+		       " exports: %d\n", dev->export_count);
+		return -EINVAL;
+	}
 	dev->dev_attrib.emulate_fua_write = flag;
 	pr_debug("dev[%p]: SE Device Forced Unit Access WRITEs: %d\n",
 			dev, dev->dev_attrib.emulate_fua_write);
···
 		pr_err("emulate_write_cache not supported for this device\n");
 		return -EINVAL;
 	}
-
+	if (dev->export_count) {
+		pr_err("emulate_write_cache cannot be changed with active"
+		       " exports: %d\n", dev->export_count);
+		return -EINVAL;
+	}
 	dev->dev_attrib.emulate_write_cache = flag;
 	pr_debug("dev[%p]: SE Device WRITE_CACHE_EMULATION flag: %d\n",
 			dev, dev->dev_attrib.emulate_write_cache);
···
 	ret = dev->transport->configure_device(dev);
 	if (ret)
 		goto out;
-	dev->dev_flags |= DF_CONFIGURED;
-
 	/*
 	 * XXX: there is not much point to have two different values here..
 	 */
···
 	mutex_lock(&g_device_mutex);
 	list_add_tail(&dev->g_dev_node, &g_device_list);
 	mutex_unlock(&g_device_mutex);
+
+	dev->dev_flags |= DF_CONFIGURED;

 	return 0;
···
 		}
 	}
 	if (cdb[1] & 0x8) {
-		if (!dev->dev_attrib.emulate_fua_write ||
-		    !dev->dev_attrib.emulate_write_cache) {
+		if (!dev->dev_attrib.emulate_fua_write || !se_dev_check_wce(dev)) {
 			pr_err("Got CDB: 0x%02x with FUA bit set, but device"
 			       " does not advertise support for FUA write\n",
 			       cdb[0]);
+3-16
drivers/target/target_core_spc.c
···
 }
 EXPORT_SYMBOL(spc_emulate_evpd_83);
 
-static bool
-spc_check_dev_wce(struct se_device *dev)
-{
-	bool wce = false;
-
-	if (dev->transport->get_write_cache)
-		wce = dev->transport->get_write_cache(dev);
-	else if (dev->dev_attrib.emulate_write_cache > 0)
-		wce = true;
-
-	return wce;
-}
-
 /* Extended INQUIRY Data VPD Page */
 static sense_reason_t
 spc_emulate_evpd_86(struct se_cmd *cmd, unsigned char *buf)
···
 	buf[5] = 0x07;
 
 	/* If WriteCache emulation is enabled, set V_SUP */
-	if (spc_check_dev_wce(dev))
+	if (se_dev_check_wce(dev))
 		buf[6] = 0x01;
 	/* If an LBA map is present set R_SUP */
 	spin_lock(&cmd->se_dev->t10_alua.lba_map_lock);
···
 	if (pc == 1)
 		goto out;
 
-	if (spc_check_dev_wce(dev))
+	if (se_dev_check_wce(dev))
 		p[2] = 0x04; /* Write Cache Enable */
 	p[12] = 0x20; /* Disabled Read Ahead */
···
 	    (cmd->se_deve->lun_flags & TRANSPORT_LUNFLAGS_READ_ONLY)))
 		spc_modesense_write_protect(&buf[length], type);
 
-	if ((spc_check_dev_wce(dev)) &&
+	if ((se_dev_check_wce(dev)) &&
 	    (dev->dev_attrib.emulate_fua_write > 0))
 		spc_modesense_dpofua(&buf[length], type);
 
···
 /* We limit tty time update visibility to every 8 seconds or so. */
 static void tty_update_time(struct timespec *time)
 {
-	unsigned long sec = get_seconds() & ~7;
-	if ((long)(sec - time->tv_sec) > 0)
+	unsigned long sec = get_seconds();
+	if (abs(sec - time->tv_sec) & ~7)
 		time->tv_sec = sec;
 }
 
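The patched test replaces the rounded-down, forward-only comparison with an absolute-difference check, so the cached timestamp also follows a clock that was stepped backwards while keeping the roughly 8-second update granularity. A userspace model of just that predicate (hypothetical helper name, not kernel code):

```c
#include <stdbool.h>
#include <stdlib.h>

/* Model of the patched tty_update_time() condition: update the cached
 * tv_sec only when it differs from the current time by 8 seconds or
 * more, in either direction.  |delta| & ~7 is nonzero iff |delta| >= 8. */
static bool should_update(long now, long cached)
{
	return (labs(now - cached) & ~7L) != 0;
}
```

The old code, `(long)(sec - time->tv_sec) > 0` with `sec` rounded down to a multiple of 8, never moved the timestamp backwards; the new form handles both directions with the same masking trick.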
+11-5
drivers/tty/tty_ioctl.c
···
 #endif
 	if (!timeout)
 		timeout = MAX_SCHEDULE_TIMEOUT;
-	if (wait_event_interruptible_timeout(tty->write_wait,
-			!tty_chars_in_buffer(tty), timeout) >= 0) {
-		if (tty->ops->wait_until_sent)
-			tty->ops->wait_until_sent(tty, timeout);
-	}
+
+	timeout = wait_event_interruptible_timeout(tty->write_wait,
+			!tty_chars_in_buffer(tty), timeout);
+	if (timeout <= 0)
+		return;
+
+	if (timeout == MAX_SCHEDULE_TIMEOUT)
+		timeout = 0;
+
+	if (tty->ops->wait_until_sent)
+		tty->ops->wait_until_sent(tty, timeout);
 }
 EXPORT_SYMBOL(tty_wait_until_sent);
 
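The rewritten `tty_wait_until_sent()` now hands the driver's `wait_until_sent()` only the time that is actually left, and maps the `MAX_SCHEDULE_TIMEOUT` sentinel back to `0` ("no timeout"). A sketch of that return-value mapping with a hypothetical helper and userspace constants; `wait_event_interruptible_timeout()` returns a negative value on a signal, `0` on expiry, and otherwise the remaining jiffies:

```c
#include <limits.h>

#define MAX_SCHEDULE_TIMEOUT	LONG_MAX	/* kernel sentinel, modeled here */

/* Map the wait's return value to the timeout the driver hook should
 * receive; -1 means "bail out without calling the hook at all". */
static long driver_timeout(long wait_ret)
{
	if (wait_ret <= 0)
		return -1;			/* interrupted or expired */
	if (wait_ret == MAX_SCHEDULE_TIMEOUT)
		return 0;			/* 0 == wait indefinitely */
	return wait_ret;			/* only the time that is left */
}
```

The old code passed the *original* timeout down even after part of it had already been consumed waiting for the write buffer to drain, so the total wait could be up to twice as long as requested.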
+11
drivers/usb/chipidea/udc.c
···
 	return retval;
 }
 
+static int otg_a_alt_hnp_support(struct ci_hdrc *ci)
+{
+	dev_warn(&ci->gadget.dev,
+		"connect the device to an alternate port if you want HNP\n");
+	return isr_setup_status_phase(ci);
+}
+
 /**
  * isr_setup_packet_handler: setup packet handler
  * @ci: UDC descriptor
···
 				err = isr_setup_status_phase(
 						ci);
 			}
+			break;
+		case USB_DEVICE_A_ALT_HNP_SUPPORT:
+			if (ci_otg_is_fsm_mode(ci))
+				err = otg_a_alt_hnp_support(ci);
 			break;
 		default:
 			goto delegate;
+2
drivers/usb/class/cdc-acm.c
···
 
 static const struct usb_device_id acm_ids[] = {
 	/* quirky and broken devices */
+	{ USB_DEVICE(0x076d, 0x0006), /* Denso Cradle CU-321 */
+	.driver_info = NO_UNION_NORMAL, },/* has no union descriptor */
 	{ USB_DEVICE(0x17ef, 0x7000), /* Lenovo USB modem */
 	.driver_info = NO_UNION_NORMAL, },/* has no union descriptor */
 	{ USB_DEVICE(0x0870, 0x0001), /* Metricom GS Modem */
+2-2
drivers/usb/common/usb-otg-fsm.c
···
 		break;
 	case OTG_STATE_B_PERIPHERAL:
 		otg_chrg_vbus(fsm, 0);
-		otg_loc_conn(fsm, 1);
 		otg_loc_sof(fsm, 0);
 		otg_set_protocol(fsm, PROTO_GADGET);
+		otg_loc_conn(fsm, 1);
 		break;
 	case OTG_STATE_B_WAIT_ACON:
 		otg_chrg_vbus(fsm, 0);
···
 
 		break;
 	case OTG_STATE_A_PERIPHERAL:
-		otg_loc_conn(fsm, 1);
 		otg_loc_sof(fsm, 0);
 		otg_set_protocol(fsm, PROTO_GADGET);
 		otg_drv_vbus(fsm, 1);
+		otg_loc_conn(fsm, 1);
 		otg_add_timer(fsm, A_BIDL_ADIS);
 		break;
 	case OTG_STATE_A_WAIT_VFALL:
···
 MODULE_AUTHOR ("David Brownell");
 MODULE_LICENSE ("GPL");
 
+static int ep_open(struct inode *, struct file *);
+
 
 /*----------------------------------------------------------------------*/
 
···
  * still need dev->lock to use epdata->ep.
  */
 static int
-get_ready_ep (unsigned f_flags, struct ep_data *epdata)
+get_ready_ep (unsigned f_flags, struct ep_data *epdata, bool is_write)
 {
 	int	val;
 
 	if (f_flags & O_NONBLOCK) {
 		if (!mutex_trylock(&epdata->lock))
 			goto nonblock;
-		if (epdata->state != STATE_EP_ENABLED) {
+		if (epdata->state != STATE_EP_ENABLED &&
+		    (!is_write || epdata->state != STATE_EP_READY)) {
 			mutex_unlock(&epdata->lock);
 nonblock:
 			val = -EAGAIN;
···
 
 	switch (epdata->state) {
 	case STATE_EP_ENABLED:
+		return 0;
+	case STATE_EP_READY:			/* not configured yet */
+		if (is_write)
+			return 0;
+		// FALLTHRU
+	case STATE_EP_UNBOUND:			/* clean disconnect */
 		break;
 	// case STATE_EP_DISABLED:		/* "can't happen" */
-	// case STATE_EP_READY:			/* "can't happen" */
 	default:				/* error! */
 		pr_debug ("%s: ep %p not available, state %d\n",
 				shortname, epdata, epdata->state);
-		// FALLTHROUGH
-	case STATE_EP_UNBOUND:			/* clean disconnect */
-		val = -ENODEV;
-		mutex_unlock(&epdata->lock);
 	}
-	return val;
+	mutex_unlock(&epdata->lock);
+	return -ENODEV;
 }
···
 	return value;
 }
 
-
-/* handle a synchronous OUT bulk/intr/iso transfer */
-static ssize_t
-ep_read (struct file *fd, char __user *buf, size_t len, loff_t *ptr)
-{
-	struct ep_data		*data = fd->private_data;
-	void			*kbuf;
-	ssize_t			value;
-
-	if ((value = get_ready_ep (fd->f_flags, data)) < 0)
-		return value;
-
-	/* halt any endpoint by doing a "wrong direction" i/o call */
-	if (usb_endpoint_dir_in(&data->desc)) {
-		if (usb_endpoint_xfer_isoc(&data->desc)) {
-			mutex_unlock(&data->lock);
-			return -EINVAL;
-		}
-		DBG (data->dev, "%s halt\n", data->name);
-		spin_lock_irq (&data->dev->lock);
-		if (likely (data->ep != NULL))
-			usb_ep_set_halt (data->ep);
-		spin_unlock_irq (&data->dev->lock);
-		mutex_unlock(&data->lock);
-		return -EBADMSG;
-	}
-
-	/* FIXME readahead for O_NONBLOCK and poll(); careful with ZLPs */
-
-	value = -ENOMEM;
-	kbuf = kmalloc (len, GFP_KERNEL);
-	if (unlikely (!kbuf))
-		goto free1;
-
-	value = ep_io (data, kbuf, len);
-	VDEBUG (data->dev, "%s read %zu OUT, status %d\n",
-		data->name, len, (int) value);
-	if (value >= 0 && copy_to_user (buf, kbuf, value))
-		value = -EFAULT;
-
-free1:
-	mutex_unlock(&data->lock);
-	kfree (kbuf);
-	return value;
-}
-
-/* handle a synchronous IN bulk/intr/iso transfer */
-static ssize_t
-ep_write (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
-{
-	struct ep_data		*data = fd->private_data;
-	void			*kbuf;
-	ssize_t			value;
-
-	if ((value = get_ready_ep (fd->f_flags, data)) < 0)
-		return value;
-
-	/* halt any endpoint by doing a "wrong direction" i/o call */
-	if (!usb_endpoint_dir_in(&data->desc)) {
-		if (usb_endpoint_xfer_isoc(&data->desc)) {
-			mutex_unlock(&data->lock);
-			return -EINVAL;
-		}
-		DBG (data->dev, "%s halt\n", data->name);
-		spin_lock_irq (&data->dev->lock);
-		if (likely (data->ep != NULL))
-			usb_ep_set_halt (data->ep);
-		spin_unlock_irq (&data->dev->lock);
-		mutex_unlock(&data->lock);
-		return -EBADMSG;
-	}
-
-	/* FIXME writebehind for O_NONBLOCK and poll(), qlen = 1 */
-
-	value = -ENOMEM;
-	kbuf = memdup_user(buf, len);
-	if (IS_ERR(kbuf)) {
-		value = PTR_ERR(kbuf);
-		kbuf = NULL;
-		goto free1;
-	}
-
-	value = ep_io (data, kbuf, len);
-	VDEBUG (data->dev, "%s write %zu IN, status %d\n",
-		data->name, len, (int) value);
-free1:
-	mutex_unlock(&data->lock);
-	kfree (kbuf);
-	return value;
-}
-
 static int
 ep_release (struct inode *inode, struct file *fd)
 {
···
 	struct ep_data		*data = fd->private_data;
 	int			status;
 
-	if ((status = get_ready_ep (fd->f_flags, data)) < 0)
+	if ((status = get_ready_ep (fd->f_flags, data, false)) < 0)
 		return status;
 
 	spin_lock_irq (&data->dev->lock);
···
 	struct mm_struct	*mm;
 	struct work_struct	work;
 	void			*buf;
-	const struct iovec	*iv;
-	unsigned long		nr_segs;
+	struct iov_iter		to;
+	const void		*to_free;
 	unsigned		actual;
 };
···
 	return value;
 }
 
-static ssize_t ep_copy_to_user(struct kiocb_priv *priv)
-{
-	ssize_t			len, total;
-	void			*to_copy;
-	int			i;
-
-	/* copy stuff into user buffers */
-	total = priv->actual;
-	len = 0;
-	to_copy = priv->buf;
-	for (i=0; i < priv->nr_segs; i++) {
-		ssize_t this = min((ssize_t)(priv->iv[i].iov_len), total);
-
-		if (copy_to_user(priv->iv[i].iov_base, to_copy, this)) {
-			if (len == 0)
-				len = -EFAULT;
-			break;
-		}
-
-		total -= this;
-		len += this;
-		to_copy += this;
-		if (total == 0)
-			break;
-	}
-
-	return len;
-}
-
 static void ep_user_copy_worker(struct work_struct *work)
 {
 	struct kiocb_priv *priv = container_of(work, struct kiocb_priv, work);
···
 	size_t ret;
 
 	use_mm(mm);
-	ret = ep_copy_to_user(priv);
+	ret = copy_to_iter(priv->buf, priv->actual, &priv->to);
 	unuse_mm(mm);
+	if (!ret)
+		ret = -EFAULT;
 
 	/* completing the iocb can drop the ctx and mm, don't touch mm after */
 	aio_complete(iocb, ret, ret);
 
 	kfree(priv->buf);
+	kfree(priv->to_free);
 	kfree(priv);
 }
···
 	 * don't need to copy anything to userspace, so we can
 	 * complete the aio request immediately.
 	 */
-	if (priv->iv == NULL || unlikely(req->actual == 0)) {
+	if (priv->to_free == NULL || unlikely(req->actual == 0)) {
 		kfree(req->buf);
+		kfree(priv->to_free);
 		kfree(priv);
 		iocb->private = NULL;
 		/* aio_complete() reports bytes-transferred _and_ faults */
···
 
 		priv->buf = req->buf;
 		priv->actual = req->actual;
+		INIT_WORK(&priv->work, ep_user_copy_worker);
 		schedule_work(&priv->work);
 	}
 	spin_unlock(&epdata->dev->lock);
···
 	put_ep(epdata);
 }
 
-static ssize_t
-ep_aio_rwtail(
-	struct kiocb *iocb,
-	char *buf,
-	size_t len,
-	struct ep_data *epdata,
-	const struct iovec *iv,
-	unsigned long nr_segs
-)
+static ssize_t ep_aio(struct kiocb *iocb,
+		      struct kiocb_priv *priv,
+		      struct ep_data *epdata,
+		      char *buf,
+		      size_t len)
 {
-	struct kiocb_priv	*priv;
-	struct usb_request	*req;
-	ssize_t			value;
+	struct usb_request *req;
+	ssize_t value;
 
-	priv = kmalloc(sizeof *priv, GFP_KERNEL);
-	if (!priv) {
-		value = -ENOMEM;
-fail:
-		kfree(buf);
-		return value;
-	}
 	iocb->private = priv;
 	priv->iocb = iocb;
-	priv->iv = iv;
-	priv->nr_segs = nr_segs;
-	INIT_WORK(&priv->work, ep_user_copy_worker);
-
-	value = get_ready_ep(iocb->ki_filp->f_flags, epdata);
-	if (unlikely(value < 0)) {
-		kfree(priv);
-		goto fail;
-	}
 
 	kiocb_set_cancel_fn(iocb, ep_aio_cancel);
 	get_ep(epdata);
···
 	 * allocate or submit those if the host disconnected.
 	 */
 	spin_lock_irq(&epdata->dev->lock);
-	if (likely(epdata->ep)) {
-		req = usb_ep_alloc_request(epdata->ep, GFP_ATOMIC);
-		if (likely(req)) {
-			priv->req = req;
-			req->buf = buf;
-			req->length = len;
-			req->complete = ep_aio_complete;
-			req->context = iocb;
-			value = usb_ep_queue(epdata->ep, req, GFP_ATOMIC);
-			if (unlikely(0 != value))
-				usb_ep_free_request(epdata->ep, req);
-		} else
-			value = -EAGAIN;
-	} else
-		value = -ENODEV;
+	value = -ENODEV;
+	if (unlikely(epdata->ep))
+		goto fail;
+
+	req = usb_ep_alloc_request(epdata->ep, GFP_ATOMIC);
+	value = -ENOMEM;
+	if (unlikely(!req))
+		goto fail;
+
+	priv->req = req;
+	req->buf = buf;
+	req->length = len;
+	req->complete = ep_aio_complete;
+	req->context = iocb;
+	value = usb_ep_queue(epdata->ep, req, GFP_ATOMIC);
+	if (unlikely(0 != value)) {
+		usb_ep_free_request(epdata->ep, req);
+		goto fail;
+	}
 	spin_unlock_irq(&epdata->dev->lock);
+	return -EIOCBQUEUED;
 
-	mutex_unlock(&epdata->lock);
-
-	if (unlikely(value)) {
-		kfree(priv);
-		put_ep(epdata);
-	} else
-		value = -EIOCBQUEUED;
+fail:
+	spin_unlock_irq(&epdata->dev->lock);
+	kfree(priv->to_free);
+	kfree(priv);
+	put_ep(epdata);
 	return value;
 }
 
 static ssize_t
-ep_aio_read(struct kiocb *iocb, const struct iovec *iov,
-		unsigned long nr_segs, loff_t o)
+ep_read_iter(struct kiocb *iocb, struct iov_iter *to)
 {
-	struct ep_data *epdata = iocb->ki_filp->private_data;
-	char *buf;
+	struct file *file = iocb->ki_filp;
+	struct ep_data *epdata = file->private_data;
+	size_t len = iov_iter_count(to);
+	ssize_t value;
+	char *buf;
 
-	if (unlikely(usb_endpoint_dir_in(&epdata->desc)))
-		return -EINVAL;
+	if ((value = get_ready_ep(file->f_flags, epdata, false)) < 0)
+		return value;
 
-	buf = kmalloc(iocb->ki_nbytes, GFP_KERNEL);
-	if (unlikely(!buf))
+	/* halt any endpoint by doing a "wrong direction" i/o call */
+	if (usb_endpoint_dir_in(&epdata->desc)) {
+		if (usb_endpoint_xfer_isoc(&epdata->desc) ||
+		    !is_sync_kiocb(iocb)) {
+			mutex_unlock(&epdata->lock);
+			return -EINVAL;
+		}
+		DBG (epdata->dev, "%s halt\n", epdata->name);
+		spin_lock_irq(&epdata->dev->lock);
+		if (likely(epdata->ep != NULL))
+			usb_ep_set_halt(epdata->ep);
+		spin_unlock_irq(&epdata->dev->lock);
+		mutex_unlock(&epdata->lock);
+		return -EBADMSG;
+	}
+
+	buf = kmalloc(len, GFP_KERNEL);
+	if (unlikely(!buf)) {
+		mutex_unlock(&epdata->lock);
 		return -ENOMEM;
-
-	return ep_aio_rwtail(iocb, buf, iocb->ki_nbytes, epdata, iov, nr_segs);
+	}
+	if (is_sync_kiocb(iocb)) {
+		value = ep_io(epdata, buf, len);
+		if (value >= 0 && copy_to_iter(buf, value, to))
+			value = -EFAULT;
+	} else {
+		struct kiocb_priv *priv = kzalloc(sizeof *priv, GFP_KERNEL);
+		value = -ENOMEM;
+		if (!priv)
+			goto fail;
+		priv->to_free = dup_iter(&priv->to, to, GFP_KERNEL);
+		if (!priv->to_free) {
+			kfree(priv);
+			goto fail;
+		}
+		value = ep_aio(iocb, priv, epdata, buf, len);
+		if (value == -EIOCBQUEUED)
+			buf = NULL;
+	}
+fail:
+	kfree(buf);
+	mutex_unlock(&epdata->lock);
+	return value;
 }
 
+static ssize_t ep_config(struct ep_data *, const char *, size_t);
+
 static ssize_t
-ep_aio_write(struct kiocb *iocb, const struct iovec *iov,
-		unsigned long nr_segs, loff_t o)
+ep_write_iter(struct kiocb *iocb, struct iov_iter *from)
 {
-	struct ep_data *epdata = iocb->ki_filp->private_data;
-	char *buf;
-	size_t len = 0;
-	int i = 0;
+	struct file *file = iocb->ki_filp;
+	struct ep_data *epdata = file->private_data;
+	size_t len = iov_iter_count(from);
+	bool configured;
+	ssize_t value;
+	char *buf;
 
-	if (unlikely(!usb_endpoint_dir_in(&epdata->desc)))
-		return -EINVAL;
+	if ((value = get_ready_ep(file->f_flags, epdata, true)) < 0)
+		return value;
 
-	buf = kmalloc(iocb->ki_nbytes, GFP_KERNEL);
-	if (unlikely(!buf))
-		return -ENOMEM;
+	configured = epdata->state == STATE_EP_ENABLED;
 
-	for (i=0; i < nr_segs; i++) {
-		if (unlikely(copy_from_user(&buf[len], iov[i].iov_base,
-				iov[i].iov_len) != 0)) {
-			kfree(buf);
-			return -EFAULT;
+	/* halt any endpoint by doing a "wrong direction" i/o call */
+	if (configured && !usb_endpoint_dir_in(&epdata->desc)) {
+		if (usb_endpoint_xfer_isoc(&epdata->desc) ||
+		    !is_sync_kiocb(iocb)) {
+			mutex_unlock(&epdata->lock);
+			return -EINVAL;
 		}
-		len += iov[i].iov_len;
+		DBG (epdata->dev, "%s halt\n", epdata->name);
+		spin_lock_irq(&epdata->dev->lock);
+		if (likely(epdata->ep != NULL))
+			usb_ep_set_halt(epdata->ep);
+		spin_unlock_irq(&epdata->dev->lock);
+		mutex_unlock(&epdata->lock);
+		return -EBADMSG;
 	}
-	return ep_aio_rwtail(iocb, buf, len, epdata, NULL, 0);
+
+	buf = kmalloc(len, GFP_KERNEL);
+	if (unlikely(!buf)) {
+		mutex_unlock(&epdata->lock);
+		return -ENOMEM;
+	}
+
+	if (unlikely(copy_from_iter(buf, len, from) != len)) {
+		value = -EFAULT;
+		goto out;
+	}
+
+	if (unlikely(!configured)) {
+		value = ep_config(epdata, buf, len);
+	} else if (is_sync_kiocb(iocb)) {
+		value = ep_io(epdata, buf, len);
+	} else {
+		struct kiocb_priv *priv = kzalloc(sizeof *priv, GFP_KERNEL);
+		value = -ENOMEM;
+		if (priv) {
+			value = ep_aio(iocb, priv, epdata, buf, len);
+			if (value == -EIOCBQUEUED)
+				buf = NULL;
+		}
+	}
+out:
+	kfree(buf);
+	mutex_unlock(&epdata->lock);
+	return value;
 }
 
 /*----------------------------------------------------------------------*/
···
 /* used after endpoint configuration */
 static const struct file_operations ep_io_operations = {
 	.owner =	THIS_MODULE,
-	.llseek =	no_llseek,
 
-	.read =		ep_read,
-	.write =	ep_write,
-	.unlocked_ioctl = ep_ioctl,
+	.open =		ep_open,
 	.release =	ep_release,
-
-	.aio_read =	ep_aio_read,
-	.aio_write =	ep_aio_write,
+	.llseek =	no_llseek,
+	.read =		new_sync_read,
+	.write =	new_sync_write,
+	.unlocked_ioctl = ep_ioctl,
+	.read_iter =	ep_read_iter,
+	.write_iter =	ep_write_iter,
 };
 
 /* ENDPOINT INITIALIZATION
···
  * speed descriptor, then optional high speed descriptor.
  */
 static ssize_t
-ep_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ep_config (struct ep_data *data, const char *buf, size_t len)
 {
-	struct ep_data		*data = fd->private_data;
 	struct usb_ep		*ep;
 	u32			tag;
 	int			value, length = len;
-
-	value = mutex_lock_interruptible(&data->lock);
-	if (value < 0)
-		return value;
 
 	if (data->state != STATE_EP_READY) {
 		value = -EL2HLT;
···
 		goto fail0;
 
 	/* we might need to change message format someday */
-	if (copy_from_user (&tag, buf, 4)) {
-		goto fail1;
-	}
+	memcpy(&tag, buf, 4);
 	if (tag != 1) {
 		DBG(data->dev, "config %s, bad tag %d\n", data->name, tag);
 		goto fail0;
···
 	 */
 
 	/* full/low speed descriptor, then high speed */
-	if (copy_from_user (&data->desc, buf, USB_DT_ENDPOINT_SIZE)) {
-		goto fail1;
-	}
+	memcpy(&data->desc, buf, USB_DT_ENDPOINT_SIZE);
 	if (data->desc.bLength != USB_DT_ENDPOINT_SIZE
 			|| data->desc.bDescriptorType != USB_DT_ENDPOINT)
 		goto fail0;
 	if (len != USB_DT_ENDPOINT_SIZE) {
 		if (len != 2 * USB_DT_ENDPOINT_SIZE)
 			goto fail0;
-		if (copy_from_user (&data->hs_desc, buf + USB_DT_ENDPOINT_SIZE,
-					USB_DT_ENDPOINT_SIZE)) {
-			goto fail1;
-		}
+		memcpy(&data->hs_desc, buf + USB_DT_ENDPOINT_SIZE,
+			USB_DT_ENDPOINT_SIZE);
 		if (data->hs_desc.bLength != USB_DT_ENDPOINT_SIZE
 				|| data->hs_desc.bDescriptorType
 					!= USB_DT_ENDPOINT) {
···
 	case USB_SPEED_LOW:
 	case USB_SPEED_FULL:
 		ep->desc = &data->desc;
-		value = usb_ep_enable(ep);
-		if (value == 0)
-			data->state = STATE_EP_ENABLED;
 		break;
 	case USB_SPEED_HIGH:
 		/* fails if caller didn't provide that descriptor... */
 		ep->desc = &data->hs_desc;
-		value = usb_ep_enable(ep);
-		if (value == 0)
-			data->state = STATE_EP_ENABLED;
 		break;
 	default:
 		DBG(data->dev, "unconnected, %s init abandoned\n",
 				data->name);
 		value = -EINVAL;
+		goto gone;
 	}
+	value = usb_ep_enable(ep);
 	if (value == 0) {
-		fd->f_op = &ep_io_operations;
+		data->state = STATE_EP_ENABLED;
 		value = length;
 	}
 gone:
···
 		data->desc.bDescriptorType = 0;
 		data->hs_desc.bDescriptorType = 0;
 	}
-	mutex_unlock(&data->lock);
 	return value;
 fail0:
 	value = -EINVAL;
-	goto fail;
-fail1:
-	value = -EFAULT;
 	goto fail;
 }
···
 	mutex_unlock(&data->lock);
 	return value;
 }
-
-/* used before endpoint configuration */
-static const struct file_operations ep_config_operations = {
-	.llseek =	no_llseek,
-
-	.open =		ep_open,
-	.write =	ep_config,
-	.release =	ep_release,
-};
 
 /*----------------------------------------------------------------------*/
 
···
 	enum ep0_state			state;
 
 	spin_lock_irq (&dev->lock);
+	if (dev->state <= STATE_DEV_OPENED) {
+		retval = -EINVAL;
+		goto done;
+	}
 
 	/* report fd mode change before acting on it */
 	if (dev->setup_abort) {
···
 	struct dev_data		*dev = fd->private_data;
 	ssize_t			retval = -ESRCH;
 
-	spin_lock_irq (&dev->lock);
-
 	/* report fd mode change before acting on it */
 	if (dev->setup_abort) {
 		dev->setup_abort = 0;
···
 	} else
 		DBG (dev, "fail %s, state %d\n", __func__, dev->state);
 
-	spin_unlock_irq (&dev->lock);
 	return retval;
 }
···
 	struct dev_data         *dev = fd->private_data;
 	int                     mask = 0;
 
+	if (dev->state <= STATE_DEV_OPENED)
+		return DEFAULT_POLLMASK;
+
 	poll_wait(fd, &dev->wait, wait);
 
 	spin_lock_irq (&dev->lock);
···
 
 	return ret;
 }
-
-/* used after device configuration */
-static const struct file_operations ep0_io_operations = {
-	.owner =	THIS_MODULE,
-	.llseek =	no_llseek,
-
-	.read =		ep0_read,
-	.write =	ep0_write,
-	.fasync =	ep0_fasync,
-	.poll =		ep0_poll,
-	.unlocked_ioctl = dev_ioctl,
-	.release =	dev_release,
-};
 
 /*----------------------------------------------------------------------*/
 
···
 		goto enomem1;
 
 	data->dentry = gadgetfs_create_file (dev->sb, data->name,
-			data, &ep_config_operations);
+			data, &ep_io_operations);
 	if (!data->dentry)
 		goto enomem2;
 	list_add_tail (&data->epfiles, &dev->epfiles);
···
 	u32			tag;
 	char			*kbuf;
 
+	spin_lock_irq(&dev->lock);
+	if (dev->state > STATE_DEV_OPENED) {
+		value = ep0_write(fd, buf, len, ptr);
+		spin_unlock_irq(&dev->lock);
+		return value;
+	}
+	spin_unlock_irq(&dev->lock);
+
 	if (len < (USB_DT_CONFIG_SIZE + USB_DT_DEVICE_SIZE + 4))
 		return -EINVAL;
···
 	 * on, they can work ... except in cleanup paths that
 	 * kick in after the ep0 descriptor is closed.
 	 */
-		fd->f_op = &ep0_io_operations;
 		value = len;
 	}
 	return value;
···
 	return value;
 }
 
-static const struct file_operations dev_init_operations = {
+static const struct file_operations ep0_operations = {
 	.llseek =	no_llseek,
 
 	.open =		dev_open,
+	.read =		ep0_read,
 	.write =	dev_config,
 	.fasync =	ep0_fasync,
+	.poll =		ep0_poll,
 	.unlocked_ioctl = dev_ioctl,
 	.release =	dev_release,
 };
···
 		goto Enomem;
 
 	dev->sb = sb;
-	dev->dentry = gadgetfs_create_file(sb, CHIP, dev, &dev_init_operations);
+	dev->dentry = gadgetfs_create_file(sb, CHIP, dev, &ep0_operations);
 	if (!dev->dentry) {
 		put_dev(dev);
 		goto Enomem;
+2-3
drivers/usb/gadget/legacy/tcm_usb_gadget.c
···
 		goto err_session;
 	}
 	/*
-	 * Now register the TCM vHost virtual I_T Nexus as active with the
-	 * call to __transport_register_session()
+	 * Now register the TCM vHost virtual I_T Nexus as active.
 	 */
-	__transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl,
+	transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl,
 			tv_nexus->tvn_se_sess, tv_nexus);
 	tpg->tpg_nexus = tv_nexus;
 	mutex_unlock(&tpg->tpg_mutex);
···
 
 struct atmel_ehci_priv {
 	struct clk *iclk;
-	struct clk *fclk;
 	struct clk *uclk;
 	bool clocked;
 };
···
 {
 	if (atmel_ehci->clocked)
 		return;
-	if (IS_ENABLED(CONFIG_COMMON_CLK)) {
-		clk_set_rate(atmel_ehci->uclk, 48000000);
-		clk_prepare_enable(atmel_ehci->uclk);
-	}
+
+	clk_prepare_enable(atmel_ehci->uclk);
 	clk_prepare_enable(atmel_ehci->iclk);
-	clk_prepare_enable(atmel_ehci->fclk);
 	atmel_ehci->clocked = true;
 }
···
 {
 	if (!atmel_ehci->clocked)
 		return;
-	clk_disable_unprepare(atmel_ehci->fclk);
+
 	clk_disable_unprepare(atmel_ehci->iclk);
-	if (IS_ENABLED(CONFIG_COMMON_CLK))
-		clk_disable_unprepare(atmel_ehci->uclk);
+	clk_disable_unprepare(atmel_ehci->uclk);
 	atmel_ehci->clocked = false;
 }
···
 		retval = -ENOENT;
 		goto fail_request_resource;
 	}
-	atmel_ehci->fclk = devm_clk_get(&pdev->dev, "uhpck");
-	if (IS_ERR(atmel_ehci->fclk)) {
-		dev_err(&pdev->dev, "Error getting function clock\n");
-		retval = -ENOENT;
+
+	atmel_ehci->uclk = devm_clk_get(&pdev->dev, "usb_clk");
+	if (IS_ERR(atmel_ehci->uclk)) {
+		dev_err(&pdev->dev, "failed to get uclk\n");
+		retval = PTR_ERR(atmel_ehci->uclk);
 		goto fail_request_resource;
-	}
-	if (IS_ENABLED(CONFIG_COMMON_CLK)) {
-		atmel_ehci->uclk = devm_clk_get(&pdev->dev, "usb_clk");
-		if (IS_ERR(atmel_ehci->uclk)) {
-			dev_err(&pdev->dev, "failed to get uclk\n");
-			retval = PTR_ERR(atmel_ehci->uclk);
-			goto fail_request_resource;
-		}
 	}
 
 	ehci = hcd_to_ehci(hcd);
+8-1
drivers/usb/host/xhci-hub.c
···
 		status = PORT_PLC;
 		port_change_bit = "link state";
 		break;
+	case USB_PORT_FEAT_C_PORT_CONFIG_ERROR:
+		status = PORT_CEC;
+		port_change_bit = "config error";
+		break;
 	default:
 		/* Should never happen */
 		return;
···
 			status |= USB_PORT_STAT_C_LINK_STATE << 16;
 		if ((raw_port_status & PORT_WRC))
 			status |= USB_PORT_STAT_C_BH_RESET << 16;
+		if ((raw_port_status & PORT_CEC))
+			status |= USB_PORT_STAT_C_CONFIG_ERROR << 16;
 	}
 
 	if (hcd->speed != HCD_USB3) {
···
 		case USB_PORT_FEAT_C_OVER_CURRENT:
 		case USB_PORT_FEAT_C_ENABLE:
 		case USB_PORT_FEAT_C_PORT_LINK_STATE:
+		case USB_PORT_FEAT_C_PORT_CONFIG_ERROR:
 			xhci_clear_port_change_bit(xhci, wValue, wIndex,
 					port_array[wIndex], temp);
 			break;
···
 	 */
 	status = bus_state->resuming_ports;
 
-	mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC;
+	mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC | PORT_CEC;
 
 	spin_lock_irqsave(&xhci->lock, flags);
 	/* For each port, did anything change? If so, set that bit in buf. */
+31-1
drivers/usb/host/xhci-pci.c
···
 
 #define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI	0x8c31
 #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI	0x9c31
+#define PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI		0x22b5
+#define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI		0xa12f
+#define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI	0x9d2f
 
 static const char hcd_name[] = "xhci_hcd";
···
 	if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
 		xhci->quirks |= XHCI_LPM_SUPPORT;
 		xhci->quirks |= XHCI_INTEL_HOST;
+		xhci->quirks |= XHCI_AVOID_BEI;
 	}
 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
 			pdev->device == PCI_DEVICE_ID_INTEL_PANTHERPOINT_XHCI) {
···
 		 * PPT chipsets.
 		 */
 		xhci->quirks |= XHCI_SPURIOUS_REBOOT;
-		xhci->quirks |= XHCI_AVOID_BEI;
 	}
 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
 		pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI) {
 		xhci->quirks |= XHCI_SPURIOUS_REBOOT;
+	}
+	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+		(pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI ||
+		 pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI ||
+		 pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI)) {
+		xhci->quirks |= XHCI_PME_STUCK_QUIRK;
 	}
 	if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
 			pdev->device == PCI_DEVICE_ID_EJ168) {
···
 	if (xhci->quirks & XHCI_RESET_ON_RESUME)
 		xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
 				"QUIRK: Resetting on resume");
+}
+
+/*
+ * Make sure PME works on some Intel xHCI controllers by writing 1 to clear
+ * the Internal PME flag bit in vendor specific PMCTRL register at offset 0x80a4
+ */
+static void xhci_pme_quirk(struct xhci_hcd *xhci)
+{
+	u32 val;
+	void __iomem *reg;
+
+	reg = (void __iomem *) xhci->cap_regs + 0x80a4;
+	val = readl(reg);
+	writel(val | BIT(28), reg);
+	readl(reg);
 }
 
 /* called during probe() after chip reset completes */
···
 	if (xhci->quirks & XHCI_COMP_MODE_QUIRK)
 		pdev->no_d3cold = true;
 
+	if (xhci->quirks & XHCI_PME_STUCK_QUIRK)
+		xhci_pme_quirk(xhci);
+
 	return xhci_suspend(xhci, do_wakeup);
 }
···
 
 	if (pdev->vendor == PCI_VENDOR_ID_INTEL)
 		usb_enable_intel_xhci_ports(pdev);
+
+	if (xhci->quirks & XHCI_PME_STUCK_QUIRK)
+		xhci_pme_quirk(xhci);
 
 	retval = xhci_resume(xhci, hibernated);
 	return retval;
+9-10
drivers/usb/host/xhci-plat.c
···
 	if (irq < 0)
 		return -ENODEV;
 
-
-	if (of_device_is_compatible(pdev->dev.of_node,
-				    "marvell,armada-375-xhci") ||
-	    of_device_is_compatible(pdev->dev.of_node,
-				    "marvell,armada-380-xhci")) {
-		ret = xhci_mvebu_mbus_init_quirk(pdev);
-		if (ret)
-			return ret;
-	}
-
 	/* Initialize dma_mask and coherent_dma_mask to 32-bits */
 	ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
 	if (ret)
···
 		ret = clk_prepare_enable(clk);
 		if (ret)
 			goto put_hcd;
+	}
+
+	if (of_device_is_compatible(pdev->dev.of_node,
+				    "marvell,armada-375-xhci") ||
+	    of_device_is_compatible(pdev->dev.of_node,
+				    "marvell,armada-380-xhci")) {
+		ret = xhci_mvebu_mbus_init_quirk(pdev);
+		if (ret)
+			goto disable_clk;
 	}
 
 	ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
+8-2
drivers/usb/host/xhci-ring.c
···
 	if (event_trb != ep_ring->dequeue) {
 		/* The event was for the status stage */
 		if (event_trb == td->last_trb) {
-			if (td->urb->actual_length != 0) {
+			if (td->urb_length_set) {
 				/* Don't overwrite a previously set error code
 				 */
 				if ((*status == -EINPROGRESS || *status == 0) &&
···
 					td->urb->transfer_buffer_length;
 			}
 		} else {
-			/* Maybe the event was for the data stage? */
+			/*
+			 * Maybe the event was for the data stage? If so, update
+			 * already the actual_length of the URB and flag it as
+			 * set, so that it is not overwritten in the event for
+			 * the last TRB.
+			 */
+			td->urb_length_set = true;
 			td->urb->actual_length =
 				td->urb->transfer_buffer_length -
 				EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
+7-2
drivers/usb/host/xhci.h
···
+
 /*
  * xHCI host controller driver
  *
···
 #define HCS_IST(p)		(((p) >> 0) & 0xf)
 /* bits 4:7, max number of Event Ring segments */
 #define HCS_ERST_MAX(p)		(((p) >> 4) & 0xf)
+/* bits 21:25 Hi 5 bits of Scratchpad buffers SW must allocate for the HW */
 /* bit 26 Scratchpad restore - for save/restore HW state - not used yet */
-/* bits 27:31 number of Scratchpad buffers SW must allocate for the HW */
-#define HCS_MAX_SCRATCHPAD(p)   (((p) >> 27) & 0x1f)
+/* bits 27:31 Lo 5 bits of Scratchpad buffers SW must allocate for the HW */
+#define HCS_MAX_SCRATCHPAD(p)   ((((p) >> 16) & 0x3e0) | (((p) >> 27) & 0x1f))
 
 /* HCSPARAMS3 - hcs_params3 - bitmasks */
 /* bits 0:7, Max U1 to U0 latency for the roothub ports */
···
 	struct xhci_segment	*start_seg;
 	union xhci_trb		*first_trb;
 	union xhci_trb		*last_trb;
+	/* actual_length of the URB has already been set */
+	bool			urb_length_set;
 };
 
 /* xHCI command default timeout value */
···
 #define XHCI_SPURIOUS_WAKEUP	(1 << 18)
 /* For controllers with a broken beyond repair streams implementation */
 #define XHCI_BROKEN_STREAMS	(1 << 19)
+#define XHCI_PME_STUCK_QUIRK	(1 << 20)
 	unsigned int		num_active_eps;
 	unsigned int		limit_active_eps;
 	/* There are two roothubs to keep track of bus suspend info for */
+1-1
drivers/usb/isp1760/isp1760-core.c
···
 	}
 
 	if (IS_ENABLED(CONFIG_USB_ISP1761_UDC) && !udc_disabled) {
-		ret = isp1760_udc_register(isp, irq, irqflags | IRQF_SHARED);
+		ret = isp1760_udc_register(isp, irq, irqflags);
 		if (ret < 0) {
 			isp1760_hcd_unregister(&isp->hcd);
 			return ret;
drivers/usb/musb/musb_dsps.c
···
 	if (IS_ERR(musb->xceiv))
 		return PTR_ERR(musb->xceiv);
 
+	musb->phy = devm_phy_get(dev->parent, "usb2-phy");
+
 	/* Returns zero if e.g. not clocked */
 	rev = dsps_readl(reg_base, wrp->revision);
 	if (!rev)
 		return -ENODEV;
 
 	usb_phy_init(musb->xceiv);
+	if (IS_ERR(musb->phy)) {
+		musb->phy = NULL;
+	} else {
+		ret = phy_init(musb->phy);
+		if (ret < 0)
+			return ret;
+		ret = phy_power_on(musb->phy);
+		if (ret) {
+			phy_exit(musb->phy);
+			return ret;
+		}
+	}
+
 	setup_timer(&glue->timer, otg_timer, (unsigned long) musb);
 
 	/* Reset the musb */
···
 
 	del_timer_sync(&glue->timer);
 	usb_phy_shutdown(musb->xceiv);
+	phy_power_off(musb->phy);
+	phy_exit(musb->phy);
 	debugfs_remove_recursive(glue->dbgfs_root);
 
 	return 0;
···
 	struct device *dev = musb->controller;
 	struct dsps_glue *glue = dev_get_drvdata(dev->parent);
 	const struct dsps_musb_wrapper *wrp = glue->wrp;
-	int session_restart = 0;
+	int session_restart = 0, error;
 
 	if (glue->sw_babble_enabled)
 		session_restart = sw_babble_control(musb);
···
 	dsps_writel(musb->ctrl_base, wrp->control, (1 << wrp->reset));
 	usleep_range(100, 200);
 	usb_phy_shutdown(musb->xceiv);
+	error = phy_power_off(musb->phy);
+	if (error)
+		dev_err(dev, "phy shutdown failed: %i\n", error);
 	usleep_range(100, 200);
 	usb_phy_init(musb->xceiv);
+	error = phy_power_on(musb->phy);
+	if (error)
+		dev_err(dev, "phy powerup failed: %i\n", error);
 	session_restart = 1;
 	}
 
···
 	struct musb_hdrc_config *config;
 	struct platform_device *musb;
 	struct device_node *dn = parent->dev.of_node;
-	int ret;
+	int ret, val;
 
 	memset(resources, 0, sizeof(resources));
 	res = platform_get_resource_byname(parent, IORESOURCE_MEM, "mc");
···
 	pdata.mode = get_musb_port_mode(dev);
 	/* DT keeps this entry in mA, musb expects it as per USB spec */
 	pdata.power = get_int_prop(dn, "mentor,power") / 2;
-	config->multipoint = of_property_read_bool(dn, "mentor,multipoint");
+
+	ret = of_property_read_u32(dn, "mentor,multipoint", &val);
+	if (!ret && val)
+		config->multipoint = true;
 
 	ret = platform_device_add_data(musb, &pdata, sizeof(pdata));
 	if (ret) {
+1-1
drivers/usb/musb/musb_host.c
···
 	.description		= "musb-hcd",
 	.product_desc		= "MUSB HDRC host driver",
 	.hcd_priv_size		= sizeof(struct musb *),
-	.flags			= HCD_USB2 | HCD_MEMORY,
+	.flags			= HCD_USB2 | HCD_MEMORY | HCD_BH,
 
 	/* not using irq handler or reset hooks from usbcore, since
 	 * those must be shared with peripheral code for OTG configs
+5-2
drivers/usb/musb/omap2430.c
···
 	struct omap2430_glue *glue;
 	struct device_node *np = pdev->dev.of_node;
 	struct musb_hdrc_config *config;
-	int ret = -ENOMEM;
+	int ret = -ENOMEM, val;
 
 	glue = devm_kzalloc(&pdev->dev, sizeof(*glue), GFP_KERNEL);
 	if (!glue)
···
 		of_property_read_u32(np, "num-eps", (u32 *)&config->num_eps);
 		of_property_read_u32(np, "ram-bits", (u32 *)&config->ram_bits);
 		of_property_read_u32(np, "power", (u32 *)&pdata->power);
-		config->multipoint = of_property_read_bool(np, "multipoint");
+
+		ret = of_property_read_u32(np, "multipoint", &val);
+		if (!ret && val)
+			config->multipoint = true;
 
 		pdata->board_data = data;
 		pdata->config = config;
+3
drivers/usb/phy/phy-am335x-control.c
···
 		return NULL;
 
 	dev = bus_find_device(&platform_bus_type, NULL, node, match);
+	if (!dev)
+		return NULL;
+
 	ctrl_usb = dev_get_drvdata(dev);
 	if (!ctrl_usb)
 		return NULL;
+1
drivers/usb/renesas_usbhs/Kconfig
···
 	tristate 'Renesas USBHS controller'
 	depends on USB_GADGET
 	depends on ARCH_SHMOBILE || SUPERH || COMPILE_TEST
+	depends on EXTCON || !EXTCON # if EXTCON=m, USBHS cannot be built-in
 	default n
 	help
 	  Renesas USBHS is a discrete USB host and peripheral controller chip
+20-27
drivers/usb/serial/bus.c
···
 	return 0;
 }
 
-static ssize_t port_number_show(struct device *dev,
-				struct device_attribute *attr, char *buf)
-{
-	struct usb_serial_port *port = to_usb_serial_port(dev);
-
-	return sprintf(buf, "%d\n", port->port_number);
-}
-static DEVICE_ATTR_RO(port_number);
-
 static int usb_serial_device_probe(struct device *dev)
 {
 	struct usb_serial_driver *driver;
 	struct usb_serial_port *port;
+	struct device *tty_dev;
 	int retval = 0;
 	int minor;
 
 	port = to_usb_serial_port(dev);
-	if (!port) {
-		retval = -ENODEV;
-		goto exit;
-	}
+	if (!port)
+		return -ENODEV;
 
 	/* make sure suspend/resume doesn't race against port_probe */
 	retval = usb_autopm_get_interface(port->serial->interface);
 	if (retval)
-		goto exit;
+		return retval;
 
 	driver = port->serial->type;
 	if (driver->port_probe) {
 		retval = driver->port_probe(port);
 		if (retval)
-			goto exit_with_autopm;
-	}
-
-	retval = device_create_file(dev, &dev_attr_port_number);
-	if (retval) {
-		if (driver->port_remove)
-			retval = driver->port_remove(port);
-		goto exit_with_autopm;
+			goto err_autopm_put;
 	}
 
 	minor = port->minor;
-	tty_register_device(usb_serial_tty_driver, minor, dev);
+	tty_dev = tty_register_device(usb_serial_tty_driver, minor, dev);
+	if (IS_ERR(tty_dev)) {
+		retval = PTR_ERR(tty_dev);
+		goto err_port_remove;
+	}
+
+	usb_autopm_put_interface(port->serial->interface);
+
 	dev_info(&port->serial->dev->dev,
 		 "%s converter now attached to ttyUSB%d\n",
 		 driver->description, minor);
 
-exit_with_autopm:
+	return 0;
+
+err_port_remove:
+	if (driver->port_remove)
+		driver->port_remove(port);
+err_autopm_put:
 	usb_autopm_put_interface(port->serial->interface);
-exit:
+
 	return retval;
 }
···
 
 	minor = port->minor;
 	tty_unregister_device(usb_serial_tty_driver, minor);
-
-	device_remove_file(&port->dev, &dev_attr_port_number);
 
 	driver = port->serial->type;
 	if (driver->port_remove)
+6-9
drivers/usb/serial/ch341.c
···
 	u8 line_status; /* active status of modem control inputs */
 };
 
+static void ch341_set_termios(struct tty_struct *tty,
+			      struct usb_serial_port *port,
+			      struct ktermios *old_termios);
+
 static int ch341_control_out(struct usb_device *dev, u8 request,
 			     u16 value, u16 index)
 {
···
 	struct ch341_private *priv = usb_get_serial_port_data(port);
 	int r;
 
-	priv->baud_rate = DEFAULT_BAUD_RATE;
-
 	r = ch341_configure(serial->dev, priv);
 	if (r)
 		goto out;
 
-	r = ch341_set_handshake(serial->dev, priv->line_control);
-	if (r)
-		goto out;
-
-	r = ch341_set_baudrate(serial->dev, priv);
-	if (r)
-		goto out;
+	if (tty)
+		ch341_set_termios(tty, port, NULL);
 
 	dev_dbg(&port->dev, "%s - submitting interrupt urb\n", __func__);
 	r = usb_submit_urb(port->interrupt_in_urb, GFP_KERNEL);
drivers/usb/serial/generic.c
···
 	 * character or at least one jiffy.
 	 */
 	period = max_t(unsigned long, (10 * HZ / bps), 1);
-	period = min_t(unsigned long, period, timeout);
+	if (timeout)
+		period = min_t(unsigned long, period, timeout);
 
 	dev_dbg(&port->dev, "%s - timeout = %u ms, period = %u ms\n",
 		__func__, jiffies_to_msecs(timeout),
···
 		schedule_timeout_interruptible(period);
 		if (signal_pending(current))
 			break;
-		if (time_after(jiffies, expire))
+		if (timeout && time_after(jiffies, expire))
 			break;
 	}
 }
+3
drivers/usb/serial/keyspan_pda.c
···
 /* For Xircom PGSDB9 and older Entrega version of the same device */
 #define XIRCOM_VENDOR_ID		0x085a
 #define XIRCOM_FAKE_ID			0x8027
+#define XIRCOM_FAKE_ID_2		0x8025 /* "PGMFHUB" serial */
 #define ENTREGA_VENDOR_ID		0x1645
 #define ENTREGA_FAKE_ID			0x8093
 
···
 #endif
 #ifdef XIRCOM
 	{ USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID) },
+	{ USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID_2) },
 	{ USB_DEVICE(ENTREGA_VENDOR_ID, ENTREGA_FAKE_ID) },
 #endif
 	{ USB_DEVICE(KEYSPAN_VENDOR_ID, KEYSPAN_PDA_ID) },
···
 #ifdef XIRCOM
 static const struct usb_device_id id_table_fake_xircom[] = {
 	{ USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID) },
+	{ USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID_2) },
 	{ USB_DEVICE(ENTREGA_VENDOR_ID, ENTREGA_FAKE_ID) },
 	{ }
 };
+2-1
drivers/usb/serial/mxuport.c
···
 	}
 
 	/* Initial port termios */
-	mxuport_set_termios(tty, port, NULL);
+	if (tty)
+		mxuport_set_termios(tty, port, NULL);
 
 	/*
 	 * TODO: use RQ_VENDOR_GET_MSR, once we know what it
+13-5
drivers/usb/serial/pl2303.c
···
 #define UART_OVERRUN_ERROR	0x40
 #define UART_CTS		0x80
 
+static void pl2303_set_break(struct usb_serial_port *port, bool enable);
 
 enum pl2303_type {
 	TYPE_01,	/* Type 0 and 1 (difference unknown) */
···
 {
 	usb_serial_generic_close(port);
 	usb_kill_urb(port->interrupt_in_urb);
+	pl2303_set_break(port, false);
 }
 
 static int pl2303_open(struct tty_struct *tty, struct usb_serial_port *port)
···
 		return -ENOIOCTLCMD;
 }
 
-static void pl2303_break_ctl(struct tty_struct *tty, int break_state)
+static void pl2303_set_break(struct usb_serial_port *port, bool enable)
 {
-	struct usb_serial_port *port = tty->driver_data;
 	struct usb_serial *serial = port->serial;
 	u16 state;
 	int result;
 
-	if (break_state == 0)
-		state = BREAK_OFF;
-	else
+	if (enable)
 		state = BREAK_ON;
+	else
+		state = BREAK_OFF;
 
 	dev_dbg(&port->dev, "%s - turning break %s\n", __func__,
 			state == BREAK_OFF ? "off" : "on");
···
 			0, NULL, 0, 100);
 	if (result)
 		dev_err(&port->dev, "error sending break = %d\n", result);
+}
+
+static void pl2303_break_ctl(struct tty_struct *tty, int state)
+{
+	struct usb_serial_port *port = tty->driver_data;
+
+	pl2303_set_break(port, state);
 }
 
 static void pl2303_update_line_status(struct usb_serial_port *port,
drivers/usb/storage/usb.c
···
 	    !(us->fflags & US_FL_SCM_MULT_TARG)) {
 		mutex_lock(&us->dev_mutex);
 		us->max_lun = usb_stor_Bulk_max_lun(us);
+		/*
+		 * Allow proper scanning of devices that present more than 8 LUNs
+		 * While not affecting other devices that may need the previous behavior
+		 */
+		if (us->max_lun >= 8)
+			us_to_host(us)->max_lun = us->max_lun+1;
 		mutex_unlock(&us->dev_mutex);
 	}
 	scsi_scan_host(us_to_host(us));
+2
drivers/vfio/pci/vfio_pci_intrs.c
···
 			func = vfio_pci_set_err_trigger;
 			break;
 		}
+		break;
 	case VFIO_PCI_REQ_IRQ_INDEX:
 		switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
 		case VFIO_IRQ_SET_ACTION_TRIGGER:
 			func = vfio_pci_set_req_trigger;
 			break;
 		}
+		break;
 	}
 
 	if (!func)
+14-11
drivers/vhost/net.c
···
 		 * TODO: support TSO.
 		 */
 		iov_iter_advance(&msg.msg_iter, vhost_hlen);
-	} else {
-		/* It'll come from socket; we'll need to patch
-		 * ->num_buffers over if VIRTIO_NET_F_MRG_RXBUF
-		 */
-		iov_iter_advance(&fixup, sizeof(hdr));
 	}
 	err = sock->ops->recvmsg(NULL, sock, &msg,
 				 sock_len, MSG_DONTWAIT | MSG_TRUNC);
···
 			continue;
 		}
 		/* Supply virtio_net_hdr if VHOST_NET_F_VIRTIO_NET_HDR */
-		if (unlikely(vhost_hlen) &&
-		    copy_to_iter(&hdr, sizeof(hdr), &fixup) != sizeof(hdr)) {
-			vq_err(vq, "Unable to write vnet_hdr at addr %p\n",
-			       vq->iov->iov_base);
-			break;
+		if (unlikely(vhost_hlen)) {
+			if (copy_to_iter(&hdr, sizeof(hdr),
+					 &fixup) != sizeof(hdr)) {
+				vq_err(vq, "Unable to write vnet_hdr "
+				       "at addr %p\n", vq->iov->iov_base);
+				break;
+			}
+		} else {
+			/* Header came from socket; we'll need to patch
+			 * ->num_buffers over if VIRTIO_NET_F_MRG_RXBUF
+			 */
+			iov_iter_advance(&fixup, sizeof(hdr));
 		}
 		/* TODO: Should check and handle checksum. */
 
 		num_buffers = cpu_to_vhost16(vq, headcount);
 		if (likely(mergeable) &&
-		    copy_to_iter(&num_buffers, 2, &fixup) != 2) {
+		    copy_to_iter(&num_buffers, sizeof num_buffers,
+				 &fixup) != sizeof num_buffers) {
 			vq_err(vq, "Failed num_buffers write");
 			vhost_discard_vq_desc(vq, headcount);
 			break;
+2-3
drivers/vhost/scsi.c
···
 		goto out;
 	}
 	/*
-	 * Now register the TCM vhost virtual I_T Nexus as active with the
-	 * call to __transport_register_session()
+	 * Now register the TCM vhost virtual I_T Nexus as active.
 	 */
-	__transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl,
+	transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl,
 			tv_nexus->tvn_se_sess, tv_nexus);
 	tpg->tpg_nexus = tv_nexus;
 
+3
drivers/video/fbdev/amba-clcd.c
···
 
 	len = clcdfb_snprintf_mode(NULL, 0, mode);
 	name = devm_kzalloc(dev, len + 1, GFP_KERNEL);
+	if (!name)
+		return -ENOMEM;
+
 	clcdfb_snprintf_mode(name, len + 1, mode);
 	mode->name = name;
 
+3-3
drivers/video/fbdev/core/fbmon.c
···
 	int num = 0, i, first = 1;
 	int ver, rev;
 
-	ver = edid[EDID_STRUCT_VERSION];
-	rev = edid[EDID_STRUCT_REVISION];
-
 	mode = kzalloc(50 * sizeof(struct fb_videomode), GFP_KERNEL);
 	if (mode == NULL)
 		return NULL;
···
 		kfree(mode);
 		return NULL;
 	}
+
+	ver = edid[EDID_STRUCT_VERSION];
+	rev = edid[EDID_STRUCT_REVISION];
 
 	*dbsize = 0;
 
drivers/watchdog/imgpdc_wdt.c
···
 #define PDC_WDT_MIN_TIMEOUT		1
 #define PDC_WDT_DEF_TIMEOUT		64
 
-static int heartbeat;
+static int heartbeat = PDC_WDT_DEF_TIMEOUT;
 module_param(heartbeat, int, 0);
-MODULE_PARM_DESC(heartbeat, "Watchdog heartbeats in seconds. "
-	"(default = " __MODULE_STRING(PDC_WDT_DEF_TIMEOUT) ")");
+MODULE_PARM_DESC(heartbeat, "Watchdog heartbeats in seconds "
+	"(default=" __MODULE_STRING(PDC_WDT_DEF_TIMEOUT) ")");
 
 static bool nowayout = WATCHDOG_NOWAYOUT;
 module_param(nowayout, bool, 0);
···
 	pdc_wdt->wdt_dev.ops = &pdc_wdt_ops;
 	pdc_wdt->wdt_dev.max_timeout = 1 << PDC_WDT_CONFIG_DELAY_MASK;
 	pdc_wdt->wdt_dev.parent = &pdev->dev;
+	watchdog_set_drvdata(&pdc_wdt->wdt_dev, pdc_wdt);
 
 	ret = watchdog_init_timeout(&pdc_wdt->wdt_dev, heartbeat, &pdev->dev);
 	if (ret < 0) {
···
 	watchdog_set_nowayout(&pdc_wdt->wdt_dev, nowayout);
 
 	platform_set_drvdata(pdev, pdc_wdt);
-	watchdog_set_drvdata(&pdc_wdt->wdt_dev, pdc_wdt);
 
 	ret = watchdog_register_device(&pdc_wdt->wdt_dev);
 	if (ret)
+1-1
drivers/watchdog/mtk_wdt.c
···
 	u32 reg;
 	struct mtk_wdt_dev *mtk_wdt = watchdog_get_drvdata(wdt_dev);
 	void __iomem *wdt_base = mtk_wdt->wdt_base;
-	u32 ret;
+	int ret;
 
 	ret = mtk_wdt_set_timeout(wdt_dev, wdt_dev->timeout);
 	if (ret < 0)
+17
drivers/xen/Kconfig
···
 
 	  In that case step 3 should be omitted.
 
+config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
+	int "Hotplugged memory limit (in GiB) for a PV guest"
+	default 512 if X86_64
+	default 4 if X86_32
+	range 0 64 if X86_32
+	depends on XEN_HAVE_PVMMU
+	depends on XEN_BALLOON_MEMORY_HOTPLUG
+	help
+	  Maximum amount of memory (in GiB) that a PV guest can be
+	  expanded to when using memory hotplug.
+
+	  A PV guest can have more memory than this limit if it is
+	  started with a larger maximum.
+
+	  This value is used to allocate enough space in internal
+	  tables needed for physical memory administration.
+
 config XEN_SCRUB_PAGES
 	bool "Scrub pages before returning them to system"
 	depends on XEN_BALLOON
+23
drivers/xen/balloon.c
···
 	balloon_hotplug = round_up(balloon_hotplug, PAGES_PER_SECTION);
 	nid = memory_add_physaddr_to_nid(hotplug_start_paddr);
 
+#ifdef CONFIG_XEN_HAVE_PVMMU
+	/*
+	 * add_memory() will build page tables for the new memory so
+	 * the p2m must contain invalid entries so the correct
+	 * non-present PTEs will be written.
+	 *
+	 * If a failure occurs, the original (identity) p2m entries
+	 * are not restored since this region is now known not to
+	 * conflict with any devices.
+	 */
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		unsigned long pfn, i;
+
+		pfn = PFN_DOWN(hotplug_start_paddr);
+		for (i = 0; i < balloon_hotplug; i++) {
+			if (!set_phys_to_machine(pfn + i, INVALID_P2M_ENTRY)) {
+				pr_warn("set_phys_to_machine() failed, no memory added\n");
+				return BP_ECANCELED;
+			}
+		}
+	}
+#endif
+
 	rc = add_memory(nid, hotplug_start_paddr, balloon_hotplug << PAGE_SHIFT);
 
 	if (rc) {
+12-6
drivers/xen/events/events_base.c
···
 	pirq_query_unmask(irq);
 
 	rc = set_evtchn_to_irq(evtchn, irq);
-	if (rc != 0) {
-		pr_err("irq%d: Failed to set port to irq mapping (%d)\n",
-		       irq, rc);
-		xen_evtchn_close(evtchn);
-		return 0;
-	}
+	if (rc)
+		goto err;
+
 	bind_evtchn_to_cpu(evtchn, 0);
 	info->evtchn = evtchn;
+
+	rc = xen_evtchn_port_setup(info);
+	if (rc)
+		goto err;
 
 out:
 	unmask_evtchn(evtchn);
 	eoi_pirq(irq_get_irq_data(irq));
 
+	return 0;
+
+err:
+	pr_err("irq%d: Failed to set port to irq mapping (%d)\n", irq, rc);
+	xen_evtchn_close(evtchn);
 	return 0;
 }
+1-1
drivers/xen/xen-pciback/conf_space.c
···
 #include "conf_space.h"
 #include "conf_space_quirks.h"
 
-static bool permissive;
+bool permissive;
 module_param(permissive, bool, 0644);
 
 /* This is where xen_pcibk_read_config_byte, xen_pcibk_read_config_word,
+2
drivers/xen/xen-pciback/conf_space.h
···
 	void *data;
 };
 
+extern bool permissive;
+
 #define OFFSET(cfg_entry) ((cfg_entry)->base_offset+(cfg_entry)->field->offset)
 
 /* Add fields to a device - the add_fields macro expects to get a pointer to
+47-12
drivers/xen/xen-pciback/conf_space_header.c
···
 #include "pciback.h"
 #include "conf_space.h"
 
+struct pci_cmd_info {
+	u16 val;
+};
+
 struct pci_bar_info {
 	u32 val;
 	u32 len_val;
···
 #define is_enable_cmd(value) ((value)&(PCI_COMMAND_MEMORY|PCI_COMMAND_IO))
 #define is_master_cmd(value) ((value)&PCI_COMMAND_MASTER)
 
+/* Bits guests are allowed to control in permissive mode. */
+#define PCI_COMMAND_GUEST (PCI_COMMAND_MASTER|PCI_COMMAND_SPECIAL| \
+			   PCI_COMMAND_INVALIDATE|PCI_COMMAND_VGA_PALETTE| \
+			   PCI_COMMAND_WAIT|PCI_COMMAND_FAST_BACK)
+
+static void *command_init(struct pci_dev *dev, int offset)
+{
+	struct pci_cmd_info *cmd = kmalloc(sizeof(*cmd), GFP_KERNEL);
+	int err;
+
+	if (!cmd)
+		return ERR_PTR(-ENOMEM);
+
+	err = pci_read_config_word(dev, PCI_COMMAND, &cmd->val);
+	if (err) {
+		kfree(cmd);
+		return ERR_PTR(err);
+	}
+
+	return cmd;
+}
+
 static int command_read(struct pci_dev *dev, int offset, u16 *value, void *data)
 {
-	int i;
-	int ret;
+	int ret = pci_read_config_word(dev, offset, value);
+	const struct pci_cmd_info *cmd = data;
 
-	ret = xen_pcibk_read_config_word(dev, offset, value, data);
-	if (!pci_is_enabled(dev))
-		return ret;
-
-	for (i = 0; i < PCI_ROM_RESOURCE; i++) {
-		if (dev->resource[i].flags & IORESOURCE_IO)
-			*value |= PCI_COMMAND_IO;
-		if (dev->resource[i].flags & IORESOURCE_MEM)
-			*value |= PCI_COMMAND_MEMORY;
-	}
+	*value &= PCI_COMMAND_GUEST;
+	*value |= cmd->val & ~PCI_COMMAND_GUEST;
 
 	return ret;
 }
···
 {
 	struct xen_pcibk_dev_data *dev_data;
 	int err;
+	u16 val;
+	struct pci_cmd_info *cmd = data;
 
 	dev_data = pci_get_drvdata(dev);
 	if (!pci_is_enabled(dev) && is_enable_cmd(value)) {
···
 			value &= ~PCI_COMMAND_INVALIDATE;
 		}
 	}
+
+	cmd->val = value;
+
+	if (!permissive && (!dev_data || !dev_data->permissive))
+		return 0;
+
+	/* Only allow the guest to control certain bits. */
+	err = pci_read_config_word(dev, offset, &val);
+	if (err || val == value)
+		return err;
+
+	value &= PCI_COMMAND_GUEST;
+	value |= val & ~PCI_COMMAND_GUEST;
 
 	return pci_write_config_word(dev, offset, value);
 }
···
 	{
 	 .offset    = PCI_COMMAND,
 	 .size      = 2,
+	 .init      = command_init,
+	 .release   = bar_release,
 	 .u.w.read  = command_read,
 	 .u.w.write = command_write,
 	},
+2-5
drivers/xen/xen-scsiback.c
···
 				 name);
 		goto out;
 	}
-	/*
-	 * Now register the TCM pvscsi virtual I_T Nexus as active with the
-	 * call to __transport_register_session()
-	 */
-	__transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl,
+	/* Now register the TCM pvscsi virtual I_T Nexus as active. */
+	transport_register_session(se_tpg, tv_nexus->tvn_se_sess->se_node_acl,
 			tv_nexus->tvn_se_sess, tv_nexus);
 	tpg->tpg_nexus = tv_nexus;
 
+12-7
fs/affs/file.c
···
 	boff = tmp % bsize;
 	if (boff) {
 		bh = affs_bread_ino(inode, bidx, 0);
-		if (IS_ERR(bh))
-			return PTR_ERR(bh);
+		if (IS_ERR(bh)) {
+			written = PTR_ERR(bh);
+			goto err_first_bh;
+		}
 		tmp = min(bsize - boff, to - from);
 		BUG_ON(boff + tmp > bsize || tmp > bsize);
 		memcpy(AFFS_DATA(bh) + boff, data + from, tmp);
···
 		bidx++;
 	} else if (bidx) {
 		bh = affs_bread_ino(inode, bidx - 1, 0);
-		if (IS_ERR(bh))
-			return PTR_ERR(bh);
+		if (IS_ERR(bh)) {
+			written = PTR_ERR(bh);
+			goto err_first_bh;
+		}
 	}
 	while (from + bsize <= to) {
 		prev_bh = bh;
 		bh = affs_getemptyblk_ino(inode, bidx);
 		if (IS_ERR(bh))
-			goto out;
+			goto err_bh;
 		memcpy(AFFS_DATA(bh), data + from, bsize);
 		if (buffer_new(bh)) {
 			AFFS_DATA_HEAD(bh)->ptype = cpu_to_be32(T_DATA);
···
 		prev_bh = bh;
 		bh = affs_bread_ino(inode, bidx, 1);
 		if (IS_ERR(bh))
-			goto out;
+			goto err_bh;
 		tmp = min(bsize, to - from);
 		BUG_ON(tmp > bsize);
 		memcpy(AFFS_DATA(bh), data + from, tmp);
···
 	if (tmp > inode->i_size)
 		inode->i_size = AFFS_I(inode)->mmu_private = tmp;
 
+err_first_bh:
 	unlock_page(page);
 	page_cache_release(page);
 
 	return written;
 
-out:
+err_bh:
 	bh = prev_bh;
 	if (!written)
 		written = PTR_ERR(bh);
+4-4
fs/btrfs/ctree.c
···
 
 	parent_nritems = btrfs_header_nritems(parent);
 	blocksize = root->nodesize;
-	end_slot = parent_nritems;
+	end_slot = parent_nritems - 1;
 
-	if (parent_nritems == 1)
+	if (parent_nritems <= 1)
 		return 0;
 
 	btrfs_set_lock_blocking(parent);
 
-	for (i = start_slot; i < end_slot; i++) {
+	for (i = start_slot; i <= end_slot; i++) {
 		int close = 1;
 
 		btrfs_node_key(parent, &disk_key, i);
···
 		other = btrfs_node_blockptr(parent, i - 1);
 		close = close_blocks(blocknr, other, blocksize);
 	}
-	if (!close && i < end_slot - 2) {
+	if (!close && i < end_slot) {
 		other = btrfs_node_blockptr(parent, i + 1);
 		close = close_blocks(blocknr, other, blocksize);
 	}
fs/btrfs/disk-io.c
···
 	}
 	if (btrfs_super_sys_array_size(sb) < sizeof(struct btrfs_disk_key)
 			+ sizeof(struct btrfs_chunk)) {
-		printk(KERN_ERR "BTRFS: system chunk array too small %u < %lu\n",
+		printk(KERN_ERR "BTRFS: system chunk array too small %u < %zu\n",
 				btrfs_super_sys_array_size(sb),
 				sizeof(struct btrfs_disk_key)
 				+ sizeof(struct btrfs_chunk));
+50-1
fs/btrfs/extent-tree.c
···
 		return 0;
 	}
 
+	if (trans->aborted)
+		return 0;
 again:
 	inode = lookup_free_space_inode(root, block_group, path);
 	if (IS_ERR(inode) && PTR_ERR(inode) != -ENOENT) {
···
 	 */
 	BTRFS_I(inode)->generation = 0;
 	ret = btrfs_update_inode(trans, root, inode);
+	if (ret) {
+		/*
+		 * So theoretically we could recover from this, simply set the
+		 * super cache generation to 0 so we know to invalidate the
+		 * cache, but then we'd have to keep track of the block groups
+		 * that fail this way so we know we _have_ to reset this cache
+		 * before the next commit or risk reading stale cache. So to
+		 * limit our exposure to horrible edge cases lets just abort the
+		 * transaction, this only happens in really bad situations
+		 * anyway.
+		 */
+		btrfs_abort_transaction(trans, root, ret);
+		goto out_put;
+	}
 	WARN_ON(ret);
 
 	if (i_size_read(inode) > 0) {
···
 	spin_unlock(&block_group->lock);
 
 	return ret;
+}
+
+int btrfs_setup_space_cache(struct btrfs_trans_handle *trans,
+			    struct btrfs_root *root)
+{
+	struct btrfs_block_group_cache *cache, *tmp;
+	struct btrfs_transaction *cur_trans = trans->transaction;
+	struct btrfs_path *path;
+
+	if (list_empty(&cur_trans->dirty_bgs) ||
+	    !btrfs_test_opt(root, SPACE_CACHE))
+		return 0;
+
+	path = btrfs_alloc_path();
+	if (!path)
+		return -ENOMEM;
+
+	/* Could add new block groups, use _safe just in case */
+	list_for_each_entry_safe(cache, tmp, &cur_trans->dirty_bgs,
+				 dirty_list) {
+		if (cache->disk_cache_state == BTRFS_DC_CLEAR)
+			cache_save_setup(cache, trans, path);
+	}
+
+	btrfs_free_path(path);
+	return 0;
 }
 
 int btrfs_write_dirty_block_groups(struct btrfs_trans_handle *trans,
···
 	num_bytes = ALIGN(num_bytes, root->sectorsize);
 
 	spin_lock(&BTRFS_I(inode)->lock);
-	BTRFS_I(inode)->outstanding_extents++;
+	nr_extents = (unsigned)div64_u64(num_bytes +
+					 BTRFS_MAX_EXTENT_SIZE - 1,
+					 BTRFS_MAX_EXTENT_SIZE);
+	BTRFS_I(inode)->outstanding_extents += nr_extents;
+	nr_extents = 0;
 
 	if (BTRFS_I(inode)->outstanding_extents >
 	    BTRFS_I(inode)->reserved_extents)
···
 	spin_unlock(&BTRFS_I(inode)->lock);
 	if (dropped > 0)
 		to_free += btrfs_calc_trans_metadata_size(root, dropped);
+
+	if (btrfs_test_is_dummy_root(root))
+		return;
 
 	trace_btrfs_space_reservation(root->fs_info, "delalloc",
 				      btrfs_ino(inode), to_free, 0);
+6
fs/btrfs/extent_io.c
···
 
 		/* Should be safe to release our pages at this point */
 		btrfs_release_extent_buffer_page(eb);
+#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+		if (unlikely(test_bit(EXTENT_BUFFER_DUMMY, &eb->bflags))) {
+			__free_extent_buffer(eb);
+			return 1;
+		}
+#endif
 		call_rcu(&eb->rcu_head, btrfs_release_extent_buffer_rcu);
 		return 1;
 	}
+56-31
fs/btrfs/file.c
···
 	mutex_unlock(&inode->i_mutex);
 
 	/*
-	 * we want to make sure fsync finds this change
-	 * but we haven't joined a transaction running right now.
-	 *
-	 * Later on, someone is sure to update the inode and get the
-	 * real transid recorded.
-	 *
-	 * We set last_trans now to the fs_info generation + 1,
-	 * this will either be one more than the running transaction
-	 * or the generation used for the next transaction if there isn't
-	 * one running right now.
-	 *
 	 * We also have to set last_sub_trans to the current log transid,
 	 * otherwise subsequent syncs to a file that's been synced in this
 	 * transaction will appear to have already occured.
 	 */
-	BTRFS_I(inode)->last_trans = root->fs_info->generation + 1;
 	BTRFS_I(inode)->last_sub_trans = root->log_transid;
 	if (num_written > 0) {
 		err = generic_write_sync(file, pos, num_written);
···
 	atomic_inc(&root->log_batch);
 
 	/*
-	 * check the transaction that last modified this inode
-	 * and see if its already been committed
-	 */
-	if (!BTRFS_I(inode)->last_trans) {
-		mutex_unlock(&inode->i_mutex);
-		goto out;
-	}
-
-	/*
-	 * if the last transaction that changed this file was before
-	 * the current transaction, we can bail out now without any
-	 * syncing
+	 * If the last transaction that changed this file was before the current
+	 * transaction and we have the full sync flag set in our inode, we can
+	 * bail out now without any syncing.
+	 *
+	 * Note that we can't bail out if the full sync flag isn't set. This is
+	 * because when the full sync flag is set we start all ordered extents
+	 * and wait for them to fully complete - when they complete they update
+	 * the inode's last_trans field through:
+	 *
+	 *     btrfs_finish_ordered_io() ->
+	 *         btrfs_update_inode_fallback() ->
+	 *             btrfs_update_inode() ->
+	 *                 btrfs_set_inode_last_trans()
+	 *
+	 * So we are sure that last_trans is up to date and can do this check to
+	 * bail out safely. For the fast path, when the full sync flag is not
+	 * set in our inode, we can not do it because we start only our ordered
+	 * extents and don't wait for them to complete (that is when
+	 * btrfs_finish_ordered_io runs), so here at this point their last_trans
+	 * value might be less than or equals to fs_info->last_trans_committed,
+	 * and setting a speculative last_trans for an inode when a buffered
+	 * write is made (such as fs_info->generation + 1 for example) would not
+	 * be reliable since after setting the value and before fsync is called
+	 * any number of transactions can start and commit (transaction kthread
+	 * commits the current transaction periodically), and a transaction
+	 * commit does not start nor waits for ordered extents to complete.
 	 */
 	smp_mb();
 	if (btrfs_inode_in_log(inode, root->fs_info->generation) ||
-	    BTRFS_I(inode)->last_trans <=
-	    root->fs_info->last_trans_committed) {
-		BTRFS_I(inode)->last_trans = 0;
-
+	    (full_sync && BTRFS_I(inode)->last_trans <=
+	     root->fs_info->last_trans_committed)) {
 		/*
 		 * We'v had everything committed since the last time we were
 		 * modified so clear this flag in case it was set for whatever
···
 	bool same_page;
 	bool no_holes = btrfs_fs_incompat(root->fs_info, NO_HOLES);
 	u64 ino_size;
+	bool truncated_page = false;
+	bool updated_inode = false;
 
 	ret = btrfs_wait_ordered_range(inode, offset, len);
 	if (ret)
···
 	 * entire page.
 	 */
 	if (same_page && len < PAGE_CACHE_SIZE) {
-		if (offset < ino_size)
+		if (offset < ino_size) {
+			truncated_page = true;
 			ret = btrfs_truncate_page(inode, offset, len, 0);
+		} else {
+			ret = 0;
+		}
 		goto out_only_mutex;
 	}
 
 	/* zero back part of the first page */
 	if (offset < ino_size) {
+		truncated_page = true;
 		ret = btrfs_truncate_page(inode, offset, 0, 0);
 		if (ret) {
 			mutex_unlock(&inode->i_mutex);
···
 		if (!ret) {
 			/* zero the front end of the last page */
 			if (tail_start + tail_len < ino_size) {
+				truncated_page = true;
 				ret = btrfs_truncate_page(inode,
 						tail_start + tail_len, 0, 1);
 				if (ret)
···
 	}
 
 	if (lockend < lockstart) {
-		mutex_unlock(&inode->i_mutex);
-		return 0;
+		ret = 0;
+		goto out_only_mutex;
 	}
 
 	while (1) {
···
 
 	trans->block_rsv = &root->fs_info->trans_block_rsv;
 	ret = btrfs_update_inode(trans, root, inode);
+	updated_inode = true;
 	btrfs_end_transaction(trans, root);
 	btrfs_btree_balance_dirty(root);
out_free:
···
 	unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend,
 			     &cached_state, GFP_NOFS);
out_only_mutex:
+	if (!updated_inode && truncated_page && !ret && !err) {
+		/*
+		 * If we only end up zeroing part of a page, we still need to
+		 * update the inode item, so that all the time fields are
+		 * updated as well as the necessary btrfs inode in memory fields
+		 * for detecting, at fsync time, if the inode isn't yet in the
+		 * log tree or it's there but not up to date.
+		 */
+		trans = btrfs_start_transaction(root, 1);
+		if (IS_ERR(trans)) {
+			err = PTR_ERR(trans);
+		} else {
+			err = btrfs_update_inode(trans, root, inode);
+			ret = btrfs_end_transaction(trans, root);
+		}
+	}
 	mutex_unlock(&inode->i_mutex);
 	if (ret && !err)
 		err = ret;
+84 -29
fs/btrfs/inode.c
···
 
 static int btrfs_dirty_inode(struct inode *inode);
 
+#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+void btrfs_test_inode_set_ops(struct inode *inode)
+{
+	BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops;
+}
+#endif
+
 static int btrfs_init_inode_security(struct btrfs_trans_handle *trans,
 				     struct inode *inode, struct inode *dir,
 				     const struct qstr *qstr)
···
 		u64 new_size;
 
 		/*
-		 * We need the largest size of the remaining extent to see if we
-		 * need to add a new outstanding extent. Think of the following
-		 * case
-		 *
-		 * [MEAX_EXTENT_SIZEx2 - 4k][4k]
-		 *
-		 * The new_size would just be 4k and we'd think we had enough
-		 * outstanding extents for this if we only took one side of the
-		 * split, same goes for the other direction. We need to see if
-		 * the larger size still is the same amount of extents as the
-		 * original size, because if it is we need to add a new
-		 * outstanding extent. But if we split up and the larger size
-		 * is less than the original then we are good to go since we've
-		 * already accounted for the extra extent in our original
-		 * accounting.
+		 * See the explanation in btrfs_merge_extent_hook, the same
+		 * applies here, just in reverse.
 		 */
 		new_size = orig->end - split + 1;
-		if ((split - orig->start) > new_size)
-			new_size = split - orig->start;
-
-		num_extents = div64_u64(size + BTRFS_MAX_EXTENT_SIZE - 1,
+		num_extents = div64_u64(new_size + BTRFS_MAX_EXTENT_SIZE - 1,
 					BTRFS_MAX_EXTENT_SIZE);
-		if (div64_u64(new_size + BTRFS_MAX_EXTENT_SIZE - 1,
-			      BTRFS_MAX_EXTENT_SIZE) < num_extents)
+		new_size = split - orig->start;
+		num_extents += div64_u64(new_size + BTRFS_MAX_EXTENT_SIZE - 1,
+					 BTRFS_MAX_EXTENT_SIZE);
+		if (div64_u64(size + BTRFS_MAX_EXTENT_SIZE - 1,
+			      BTRFS_MAX_EXTENT_SIZE) >= num_extents)
 			return;
 	}
···
 	if (!(other->state & EXTENT_DELALLOC))
 		return;
 
-	old_size = other->end - other->start + 1;
-	new_size = old_size + (new->end - new->start + 1);
+	if (new->start > other->start)
+		new_size = new->end - other->start + 1;
+	else
+		new_size = other->end - new->start + 1;
 
 	/* we're not bigger than the max, unreserve the space and go */
 	if (new_size <= BTRFS_MAX_EXTENT_SIZE) {
···
 	}
 
 	/*
-	 * If we grew by another max_extent, just return, we want to keep that
-	 * reserved amount.
+	 * We have to add up either side to figure out how many extents were
+	 * accounted for before we merged into one big extent.  If the number of
+	 * extents we accounted for is <= the amount we need for the new range
+	 * then we can return, otherwise drop.  Think of it like this
+	 *
+	 * [ 4k][MAX_SIZE]
+	 *
+	 * So we've grown the extent by a MAX_SIZE extent, this would mean we
+	 * need 2 outstanding extents, on one side we have 1 and the other side
+	 * we have 1 so they are == and we can return.  But in this case
+	 *
+	 * [MAX_SIZE+4k][MAX_SIZE+4k]
+	 *
+	 * Each range on their own accounts for 2 extents, but merged together
+	 * they are only 3 extents worth of accounting, so we need to drop in
+	 * this case.
 	 */
+	old_size = other->end - other->start + 1;
 	num_extents = div64_u64(old_size + BTRFS_MAX_EXTENT_SIZE - 1,
 				BTRFS_MAX_EXTENT_SIZE);
+	old_size = new->end - new->start + 1;
+	num_extents += div64_u64(old_size + BTRFS_MAX_EXTENT_SIZE - 1,
+				 BTRFS_MAX_EXTENT_SIZE);
+
 	if (div64_u64(new_size + BTRFS_MAX_EXTENT_SIZE - 1,
-		      BTRFS_MAX_EXTENT_SIZE) > num_extents)
+		      BTRFS_MAX_EXTENT_SIZE) >= num_extents)
 		return;
 
 	spin_lock(&BTRFS_I(inode)->lock);
···
 		spin_unlock(&BTRFS_I(inode)->lock);
 	}
 
+	/* For sanity tests */
+	if (btrfs_test_is_dummy_root(root))
+		return;
+
 	__percpu_counter_add(&root->fs_info->delalloc_bytes, len,
 			     root->fs_info->delalloc_batch);
 	spin_lock(&BTRFS_I(inode)->lock);
···
 		if (*bits & EXTENT_DO_ACCOUNTING &&
 		    root != root->fs_info->tree_root)
 			btrfs_delalloc_release_metadata(inode, len);
+
+		/* For sanity tests. */
+		if (btrfs_test_is_dummy_root(root))
+			return;
 
 		if (root->root_key.objectid != BTRFS_DATA_RELOC_TREE_OBJECTID
 		    && do_list && !(state->state & EXTENT_NORESERVE))
···
 	u64 start = iblock << inode->i_blkbits;
 	u64 lockstart, lockend;
 	u64 len = bh_result->b_size;
-	u64 orig_len = len;
+	u64 *outstanding_extents = NULL;
 	int unlock_bits = EXTENT_LOCKED;
 	int ret = 0;
···
 
 	lockstart = start;
 	lockend = start + len - 1;
+
+	if (current->journal_info) {
+		/*
+		 * Need to pull our outstanding extents and set journal_info to NULL so
+		 * that anything that needs to check if there's a transction doesn't get
+		 * confused.
+		 */
+		outstanding_extents = current->journal_info;
+		current->journal_info = NULL;
+	}
 
 	/*
 	 * If this errors out it's because we couldn't invalidate pagecache for
···
 	    ((BTRFS_I(inode)->flags & BTRFS_INODE_NODATACOW) &&
 	     em->block_start != EXTENT_MAP_HOLE)) {
 		int type;
-		int ret;
 		u64 block_start, orig_start, orig_block_len, ram_bytes;
 
 		if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
···
 		if (start + len > i_size_read(inode))
 			i_size_write(inode, start + len);
 
-		if (len < orig_len) {
+		/*
+		 * If we have an outstanding_extents count still set then we're
+		 * within our reservation, otherwise we need to adjust our inode
+		 * counter appropriately.
+		 */
+		if (*outstanding_extents) {
+			(*outstanding_extents)--;
+		} else {
 			spin_lock(&BTRFS_I(inode)->lock);
 			BTRFS_I(inode)->outstanding_extents++;
 			spin_unlock(&BTRFS_I(inode)->lock);
 		}
+
+		current->journal_info = outstanding_extents;
 		btrfs_free_reserved_data_space(inode, len);
 	}
···
unlock_err:
 	clear_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, lockend,
 			 unlock_bits, 1, 0, &cached_state, GFP_NOFS);
+	if (outstanding_extents)
+		current->journal_info = outstanding_extents;
 	return ret;
 }
···
 {
 	struct file *file = iocb->ki_filp;
 	struct inode *inode = file->f_mapping->host;
+	u64 outstanding_extents = 0;
 	size_t count = 0;
 	int flags = 0;
 	bool wakeup = true;
···
 		ret = btrfs_delalloc_reserve_space(inode, count);
 		if (ret)
 			goto out;
+		outstanding_extents = div64_u64(count +
+						BTRFS_MAX_EXTENT_SIZE - 1,
+						BTRFS_MAX_EXTENT_SIZE);
+
+		/*
+		 * We need to know how many extents we reserved so that we can
+		 * do the accounting properly if we go over the number we
+		 * originally calculated.  Abuse current->journal_info for this.
+		 */
+		current->journal_info = &outstanding_extents;
 	} else if (test_bit(BTRFS_INODE_READDIO_NEED_LOCK,
 			    &BTRFS_I(inode)->runtime_flags)) {
 		inode_dio_done(inode);
···
 			iter, offset, btrfs_get_blocks_direct, NULL,
 			btrfs_submit_direct, flags);
 	if (rw & WRITE) {
+		current->journal_info = NULL;
 		if (ret < 0 && ret != -EIOCBQUEUED)
 			btrfs_delalloc_release_space(inode, count);
 		else if (ret >= 0 && (size_t)ret < count)
+2 -5
fs/btrfs/ordered-data.c
···
 			continue;
 		if (entry_end(ordered) <= start)
 			break;
-		if (!list_empty(&ordered->log_list))
-			continue;
-		if (test_bit(BTRFS_ORDERED_LOGGED, &ordered->flags))
+		if (test_and_set_bit(BTRFS_ORDERED_LOGGED, &ordered->flags))
 			continue;
 		list_add(&ordered->log_list, logged_list);
 		atomic_inc(&ordered->refs);
···
 		wait_event(ordered->wait, test_bit(BTRFS_ORDERED_IO_DONE,
 						   &ordered->flags));
 
-		if (!test_and_set_bit(BTRFS_ORDERED_LOGGED, &ordered->flags))
-			list_add_tail(&ordered->trans_list, &trans->ordered);
+		list_add_tail(&ordered->trans_list, &trans->ordered);
 		spin_lock_irq(&log->log_extents_lock[index]);
 	}
 	spin_unlock_irq(&log->log_extents_lock[index]);
+1 -1
fs/btrfs/qgroup.c
···
 	if (oper1->seq < oper2->seq)
 		return -1;
 	if (oper1->seq > oper2->seq)
-		return -1;
+		return 1;
 	if (oper1->ref_root < oper2->ref_root)
 		return -1;
 	if (oper1->ref_root > oper2->ref_root)
+156 -15
fs/btrfs/send.c
···
 	u64 parent_ino;
 	u64 ino;
 	u64 gen;
+	bool is_orphan;
 	struct list_head update_refs;
 };
···
 				u64 ino_gen,
 				u64 parent_ino,
 				struct list_head *new_refs,
-				struct list_head *deleted_refs)
+				struct list_head *deleted_refs,
+				const bool is_orphan)
 {
 	struct rb_node **p = &sctx->pending_dir_moves.rb_node;
 	struct rb_node *parent = NULL;
···
 	pm->parent_ino = parent_ino;
 	pm->ino = ino;
 	pm->gen = ino_gen;
+	pm->is_orphan = is_orphan;
 	INIT_LIST_HEAD(&pm->list);
 	INIT_LIST_HEAD(&pm->update_refs);
 	RB_CLEAR_NODE(&pm->node);
···
 	rmdir_ino = dm->rmdir_ino;
 	free_waiting_dir_move(sctx, dm);
 
-	ret = get_first_ref(sctx->parent_root, pm->ino,
-			    &parent_ino, &parent_gen, name);
-	if (ret < 0)
-		goto out;
-
-	ret = get_cur_path(sctx, parent_ino, parent_gen,
-			   from_path);
-	if (ret < 0)
-		goto out;
-	ret = fs_path_add_path(from_path, name);
+	if (pm->is_orphan) {
+		ret = gen_unique_name(sctx, pm->ino,
+				      pm->gen, from_path);
+	} else {
+		ret = get_first_ref(sctx->parent_root, pm->ino,
+				    &parent_ino, &parent_gen, name);
+		if (ret < 0)
+			goto out;
+		ret = get_cur_path(sctx, parent_ino, parent_gen,
+				   from_path);
+		if (ret < 0)
+			goto out;
+		ret = fs_path_add_path(from_path, name);
+	}
 	if (ret < 0)
 		goto out;
···
 		LIST_HEAD(deleted_refs);
 		ASSERT(ancestor > BTRFS_FIRST_FREE_OBJECTID);
 		ret = add_pending_dir_move(sctx, pm->ino, pm->gen, ancestor,
-					   &pm->update_refs, &deleted_refs);
+					   &pm->update_refs, &deleted_refs,
+					   pm->is_orphan);
 		if (ret < 0)
 			goto out;
 		if (rmdir_ino) {
···
 	return ret;
 }
 
+/*
+ * We might need to delay a directory rename even when no ancestor directory
+ * (in the send root) with a higher inode number than ours (sctx->cur_ino) was
+ * renamed. This happens when we rename a directory to the old name (the name
+ * in the parent root) of some other unrelated directory that got its rename
+ * delayed due to some ancestor with higher number that got renamed.
+ *
+ * Example:
+ *
+ * Parent snapshot:
+ * .                          (ino 256)
+ * |---- a/                   (ino 257)
+ * |     |---- file           (ino 260)
+ * |
+ * |---- b/                   (ino 258)
+ * |---- c/                   (ino 259)
+ *
+ * Send snapshot:
+ * .                          (ino 256)
+ * |---- a/                   (ino 258)
+ * |---- x/                   (ino 259)
+ *       |---- y/             (ino 257)
+ *             |----- file    (ino 260)
+ *
+ * Here we can not rename 258 from 'b' to 'a' without the rename of inode 257
+ * from 'a' to 'x/y' happening first, which in turn depends on the rename of
+ * inode 259 from 'c' to 'x'. So the order of rename commands the send stream
+ * must issue is:
+ *
+ * 1 - rename 259 from 'c' to 'x'
+ * 2 - rename 257 from 'a' to 'x/y'
+ * 3 - rename 258 from 'b' to 'a'
+ *
+ * Returns 1 if the rename of sctx->cur_ino needs to be delayed, 0 if it can
+ * be done right away and < 0 on error.
+ */
+static int wait_for_dest_dir_move(struct send_ctx *sctx,
+				  struct recorded_ref *parent_ref,
+				  const bool is_orphan)
+{
+	struct btrfs_path *path;
+	struct btrfs_key key;
+	struct btrfs_key di_key;
+	struct btrfs_dir_item *di;
+	u64 left_gen;
+	u64 right_gen;
+	int ret = 0;
+
+	if (RB_EMPTY_ROOT(&sctx->waiting_dir_moves))
+		return 0;
+
+	path = alloc_path_for_send();
+	if (!path)
+		return -ENOMEM;
+
+	key.objectid = parent_ref->dir;
+	key.type = BTRFS_DIR_ITEM_KEY;
+	key.offset = btrfs_name_hash(parent_ref->name, parent_ref->name_len);
+
+	ret = btrfs_search_slot(NULL, sctx->parent_root, &key, path, 0, 0);
+	if (ret < 0) {
+		goto out;
+	} else if (ret > 0) {
+		ret = 0;
+		goto out;
+	}
+
+	di = btrfs_match_dir_item_name(sctx->parent_root, path,
+				       parent_ref->name, parent_ref->name_len);
+	if (!di) {
+		ret = 0;
+		goto out;
+	}
+	/*
+	 * di_key.objectid has the number of the inode that has a dentry in the
+	 * parent directory with the same name that sctx->cur_ino is being
+	 * renamed to. We need to check if that inode is in the send root as
+	 * well and if it is currently marked as an inode with a pending rename,
+	 * if it is, we need to delay the rename of sctx->cur_ino as well, so
+	 * that it happens after that other inode is renamed.
+	 */
+	btrfs_dir_item_key_to_cpu(path->nodes[0], di, &di_key);
+	if (di_key.type != BTRFS_INODE_ITEM_KEY) {
+		ret = 0;
+		goto out;
+	}
+
+	ret = get_inode_info(sctx->parent_root, di_key.objectid, NULL,
+			     &left_gen, NULL, NULL, NULL, NULL);
+	if (ret < 0)
+		goto out;
+	ret = get_inode_info(sctx->send_root, di_key.objectid, NULL,
+			     &right_gen, NULL, NULL, NULL, NULL);
+	if (ret < 0) {
+		if (ret == -ENOENT)
+			ret = 0;
+		goto out;
+	}
+
+	/* Different inode, no need to delay the rename of sctx->cur_ino */
+	if (right_gen != left_gen) {
+		ret = 0;
+		goto out;
+	}
+
+	if (is_waiting_for_move(sctx, di_key.objectid)) {
+		ret = add_pending_dir_move(sctx,
+					   sctx->cur_ino,
+					   sctx->cur_inode_gen,
+					   di_key.objectid,
+					   &sctx->new_refs,
+					   &sctx->deleted_refs,
+					   is_orphan);
+		if (!ret)
+			ret = 1;
+	}
+out:
+	btrfs_free_path(path);
+	return ret;
+}
+
 static int wait_for_parent_move(struct send_ctx *sctx,
 				struct recorded_ref *parent_ref)
 {
···
 					   sctx->cur_inode_gen,
 					   ino,
 					   &sctx->new_refs,
-					   &sctx->deleted_refs);
+					   &sctx->deleted_refs,
+					   false);
 		if (!ret)
 			ret = 1;
 	}
···
 	int did_overwrite = 0;
 	int is_orphan = 0;
 	u64 last_dir_ino_rm = 0;
+	bool can_rename = true;
 
 verbose_printk("btrfs: process_recorded_refs %llu\n", sctx->cur_ino);
 
···
 		}
 	}
 
+	if (S_ISDIR(sctx->cur_inode_mode) && sctx->parent_root) {
+		ret = wait_for_dest_dir_move(sctx, cur, is_orphan);
+		if (ret < 0)
+			goto out;
+		if (ret == 1) {
+			can_rename = false;
+			*pending_move = 1;
+		}
+	}
+
 	/*
 	 * link/move the ref to the new place. If we have an orphan
 	 * inode, move it and update valid_path. If not, link or move
 	 * it depending on the inode mode.
 	 */
-	if (is_orphan) {
+	if (is_orphan && can_rename) {
 		ret = send_rename(sctx, valid_path, cur->full_path);
 		if (ret < 0)
 			goto out;
···
 		ret = fs_path_copy(valid_path, cur->full_path);
 		if (ret < 0)
 			goto out;
-	} else {
+	} else if (can_rename) {
 		if (S_ISDIR(sctx->cur_inode_mode)) {
 			/*
 			 * Dirs can't be linked, so move it. For moved
+196 -1
fs/btrfs/tests/inode-tests.c
···
 	return ret;
 }
 
+static int test_extent_accounting(void)
+{
+	struct inode *inode = NULL;
+	struct btrfs_root *root = NULL;
+	int ret = -ENOMEM;
+
+	inode = btrfs_new_test_inode();
+	if (!inode) {
+		test_msg("Couldn't allocate inode\n");
+		return ret;
+	}
+
+	root = btrfs_alloc_dummy_root();
+	if (IS_ERR(root)) {
+		test_msg("Couldn't allocate root\n");
+		goto out;
+	}
+
+	root->fs_info = btrfs_alloc_dummy_fs_info();
+	if (!root->fs_info) {
+		test_msg("Couldn't allocate dummy fs info\n");
+		goto out;
+	}
+
+	BTRFS_I(inode)->root = root;
+	btrfs_test_inode_set_ops(inode);
+
+	/* [BTRFS_MAX_EXTENT_SIZE] */
+	BTRFS_I(inode)->outstanding_extents++;
+	ret = btrfs_set_extent_delalloc(inode, 0, BTRFS_MAX_EXTENT_SIZE - 1,
+					NULL);
+	if (ret) {
+		test_msg("btrfs_set_extent_delalloc returned %d\n", ret);
+		goto out;
+	}
+	if (BTRFS_I(inode)->outstanding_extents != 1) {
+		ret = -EINVAL;
+		test_msg("Miscount, wanted 1, got %u\n",
+			 BTRFS_I(inode)->outstanding_extents);
+		goto out;
+	}
+
+	/* [BTRFS_MAX_EXTENT_SIZE][4k] */
+	BTRFS_I(inode)->outstanding_extents++;
+	ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE,
+					BTRFS_MAX_EXTENT_SIZE + 4095, NULL);
+	if (ret) {
+		test_msg("btrfs_set_extent_delalloc returned %d\n", ret);
+		goto out;
+	}
+	if (BTRFS_I(inode)->outstanding_extents != 2) {
+		ret = -EINVAL;
+		test_msg("Miscount, wanted 2, got %u\n",
+			 BTRFS_I(inode)->outstanding_extents);
+		goto out;
+	}
+
+	/* [BTRFS_MAX_EXTENT_SIZE/2][4K HOLE][the rest] */
+	ret = clear_extent_bit(&BTRFS_I(inode)->io_tree,
+			       BTRFS_MAX_EXTENT_SIZE >> 1,
+			       (BTRFS_MAX_EXTENT_SIZE >> 1) + 4095,
+			       EXTENT_DELALLOC | EXTENT_DIRTY |
+			       EXTENT_UPTODATE | EXTENT_DO_ACCOUNTING, 0, 0,
+			       NULL, GFP_NOFS);
+	if (ret) {
+		test_msg("clear_extent_bit returned %d\n", ret);
+		goto out;
+	}
+	if (BTRFS_I(inode)->outstanding_extents != 2) {
+		ret = -EINVAL;
+		test_msg("Miscount, wanted 2, got %u\n",
+			 BTRFS_I(inode)->outstanding_extents);
+		goto out;
+	}
+
+	/* [BTRFS_MAX_EXTENT_SIZE][4K] */
+	BTRFS_I(inode)->outstanding_extents++;
+	ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE >> 1,
+					(BTRFS_MAX_EXTENT_SIZE >> 1) + 4095,
+					NULL);
+	if (ret) {
+		test_msg("btrfs_set_extent_delalloc returned %d\n", ret);
+		goto out;
+	}
+	if (BTRFS_I(inode)->outstanding_extents != 2) {
+		ret = -EINVAL;
+		test_msg("Miscount, wanted 2, got %u\n",
+			 BTRFS_I(inode)->outstanding_extents);
+		goto out;
+	}
+
+	/*
+	 * [BTRFS_MAX_EXTENT_SIZE+4K][4K HOLE][BTRFS_MAX_EXTENT_SIZE+4K]
+	 *
+	 * I'm artificially adding 2 to outstanding_extents because in the
+	 * buffered IO case we'd add things up as we go, but I don't feel like
+	 * doing that here, this isn't the interesting case we want to test.
+	 */
+	BTRFS_I(inode)->outstanding_extents += 2;
+	ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE + 8192,
+					(BTRFS_MAX_EXTENT_SIZE << 1) + 12287,
+					NULL);
+	if (ret) {
+		test_msg("btrfs_set_extent_delalloc returned %d\n", ret);
+		goto out;
+	}
+	if (BTRFS_I(inode)->outstanding_extents != 4) {
+		ret = -EINVAL;
+		test_msg("Miscount, wanted 4, got %u\n",
+			 BTRFS_I(inode)->outstanding_extents);
+		goto out;
+	}
+
+	/* [BTRFS_MAX_EXTENT_SIZE+4k][4k][BTRFS_MAX_EXTENT_SIZE+4k] */
+	BTRFS_I(inode)->outstanding_extents++;
+	ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE+4096,
+					BTRFS_MAX_EXTENT_SIZE+8191, NULL);
+	if (ret) {
+		test_msg("btrfs_set_extent_delalloc returned %d\n", ret);
+		goto out;
+	}
+	if (BTRFS_I(inode)->outstanding_extents != 3) {
+		ret = -EINVAL;
+		test_msg("Miscount, wanted 3, got %u\n",
+			 BTRFS_I(inode)->outstanding_extents);
+		goto out;
+	}
+
+	/* [BTRFS_MAX_EXTENT_SIZE+4k][4K HOLE][BTRFS_MAX_EXTENT_SIZE+4k] */
+	ret = clear_extent_bit(&BTRFS_I(inode)->io_tree,
+			       BTRFS_MAX_EXTENT_SIZE+4096,
+			       BTRFS_MAX_EXTENT_SIZE+8191,
+			       EXTENT_DIRTY | EXTENT_DELALLOC |
+			       EXTENT_DO_ACCOUNTING | EXTENT_UPTODATE, 0, 0,
+			       NULL, GFP_NOFS);
+	if (ret) {
+		test_msg("clear_extent_bit returned %d\n", ret);
+		goto out;
+	}
+	if (BTRFS_I(inode)->outstanding_extents != 4) {
+		ret = -EINVAL;
+		test_msg("Miscount, wanted 4, got %u\n",
+			 BTRFS_I(inode)->outstanding_extents);
+		goto out;
+	}
+
+	/*
+	 * Refill the hole again just for good measure, because I thought it
+	 * might fail and I'd rather satisfy my paranoia at this point.
+	 */
+	BTRFS_I(inode)->outstanding_extents++;
+	ret = btrfs_set_extent_delalloc(inode, BTRFS_MAX_EXTENT_SIZE+4096,
+					BTRFS_MAX_EXTENT_SIZE+8191, NULL);
+	if (ret) {
+		test_msg("btrfs_set_extent_delalloc returned %d\n", ret);
+		goto out;
+	}
+	if (BTRFS_I(inode)->outstanding_extents != 3) {
+		ret = -EINVAL;
+		test_msg("Miscount, wanted 3, got %u\n",
+			 BTRFS_I(inode)->outstanding_extents);
+		goto out;
+	}
+
+	/* Empty */
+	ret = clear_extent_bit(&BTRFS_I(inode)->io_tree, 0, (u64)-1,
+			       EXTENT_DIRTY | EXTENT_DELALLOC |
+			       EXTENT_DO_ACCOUNTING | EXTENT_UPTODATE, 0, 0,
+			       NULL, GFP_NOFS);
+	if (ret) {
+		test_msg("clear_extent_bit returned %d\n", ret);
+		goto out;
+	}
+	if (BTRFS_I(inode)->outstanding_extents) {
+		ret = -EINVAL;
+		test_msg("Miscount, wanted 0, got %u\n",
+			 BTRFS_I(inode)->outstanding_extents);
+		goto out;
+	}
+	ret = 0;
+out:
+	if (ret)
+		clear_extent_bit(&BTRFS_I(inode)->io_tree, 0, (u64)-1,
+				 EXTENT_DIRTY | EXTENT_DELALLOC |
+				 EXTENT_DO_ACCOUNTING | EXTENT_UPTODATE, 0, 0,
+				 NULL, GFP_NOFS);
+	iput(inode);
+	btrfs_free_dummy_root(root);
+	return ret;
+}
+
 int btrfs_test_inodes(void)
 {
 	int ret;
···
 	if (ret)
 		return ret;
 	test_msg("Running hole first btrfs_get_extent test\n");
-	return test_hole_first();
+	ret = test_hole_first();
+	if (ret)
+		return ret;
+	test_msg("Running outstanding_extents tests\n");
+	return test_extent_accounting();
 }
+25 -17
fs/btrfs/transaction.c
···
 	u64 old_root_bytenr;
 	u64 old_root_used;
 	struct btrfs_root *tree_root = root->fs_info->tree_root;
-	bool extent_root = (root->objectid == BTRFS_EXTENT_TREE_OBJECTID);
 
 	old_root_used = btrfs_root_used(&root->root_item);
-	btrfs_write_dirty_block_groups(trans, root);
 
 	while (1) {
 		old_root_bytenr = btrfs_root_bytenr(&root->root_item);
 		if (old_root_bytenr == root->node->start &&
-		    old_root_used == btrfs_root_used(&root->root_item) &&
-		    (!extent_root ||
-		     list_empty(&trans->transaction->dirty_bgs)))
+		    old_root_used == btrfs_root_used(&root->root_item))
 			break;
 
 		btrfs_set_root_node(&root->root_item, root->node);
···
 			return ret;
 
 		old_root_used = btrfs_root_used(&root->root_item);
-		if (extent_root) {
-			ret = btrfs_write_dirty_block_groups(trans, root);
-			if (ret)
-				return ret;
-		}
-		ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
-		if (ret)
-			return ret;
-		ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
-		if (ret)
-			return ret;
 	}
 
 	return 0;
···
 				 struct btrfs_root *root)
 {
 	struct btrfs_fs_info *fs_info = root->fs_info;
+	struct list_head *dirty_bgs = &trans->transaction->dirty_bgs;
 	struct list_head *next;
 	struct extent_buffer *eb;
 	int ret;
···
 	if (ret)
 		return ret;
 
+	ret = btrfs_setup_space_cache(trans, root);
+	if (ret)
+		return ret;
+
 	/* run_qgroups might have added some more refs */
 	ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
 	if (ret)
 		return ret;
-
+again:
 	while (!list_empty(&fs_info->dirty_cowonly_roots)) {
 		next = fs_info->dirty_cowonly_roots.next;
 		list_del_init(next);
···
 		ret = update_cowonly_root(trans, root);
 		if (ret)
 			return ret;
+		ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
+		if (ret)
+			return ret;
 	}
+
+	while (!list_empty(dirty_bgs)) {
+		ret = btrfs_write_dirty_block_groups(trans, root);
+		if (ret)
+			return ret;
+		ret = btrfs_run_delayed_refs(trans, root, (unsigned long)-1);
+		if (ret)
+			return ret;
+	}
+
+	if (!list_empty(&fs_info->dirty_cowonly_roots))
+		goto again;
 
 	list_add_tail(&fs_info->extent_root->dirty_list,
 		      &trans->transaction->switch_commits);
···
 	ret = btrfs_end_transaction(trans, root);
 
 	wait_for_commit(root, cur_trans);
+
+	if (unlikely(cur_trans->aborted))
+		ret = cur_trans->aborted;
 
 	btrfs_put_transaction(cur_trans);
 
fs/btrfs/xattr.c
···
 				name, name_len, -1);
 		if (!di && (flags & XATTR_REPLACE))
 			ret = -ENODATA;
+		else if (IS_ERR(di))
+			ret = PTR_ERR(di);
 		else if (di)
 			ret = btrfs_delete_one_dir_name(trans, root, path, di);
 		goto out;
···
 		ASSERT(mutex_is_locked(&inode->i_mutex));
 		di = btrfs_lookup_xattr(NULL, root, path, btrfs_ino(inode),
 					name, name_len, 0);
-		if (!di) {
+		if (!di)
 			ret = -ENODATA;
+		else if (IS_ERR(di))
+			ret = PTR_ERR(di);
+		if (ret)
 			goto out;
-		}
 		btrfs_release_path(path);
 		di = NULL;
 	}
+5 -1
fs/cifs/cifsencrypt.c
···
 /*
  *   fs/cifs/cifsencrypt.c
  *
+ *   Encryption and hashing operations relating to NTLM, NTLMv2.  See MS-NLMP
+ *   for more detailed information
+ *
  *   Copyright (C) International Business Machines  Corp., 2005,2013
  *   Author(s): Steve French (sfrench@us.ibm.com)
  *
···
 				 __func__);
 			return rc;
 		}
-	} else if (ses->serverName) {
+	} else {
+		/* We use ses->serverName if no domain name available */
 		len = strlen(ses->serverName);
 
 		server = kmalloc(2 + (len * 2), GFP_KERNEL);
+11 -2
fs/cifs/connect.c
···
 			pr_warn("CIFS: username too long\n");
 			goto cifs_parse_mount_err;
 		}
+
+		kfree(vol->username);
 		vol->username = kstrdup(string, GFP_KERNEL);
 		if (!vol->username)
 			goto cifs_parse_mount_err;
···
 				goto cifs_parse_mount_err;
 			}
 
+			kfree(vol->domainname);
 			vol->domainname = kstrdup(string, GFP_KERNEL);
 			if (!vol->domainname) {
 				pr_warn("CIFS: no memory for domainname\n");
···
 			}
 
 			if (strncasecmp(string, "default", 7) != 0) {
+				kfree(vol->iocharset);
 				vol->iocharset = kstrdup(string,
 							 GFP_KERNEL);
 				if (!vol->iocharset) {
···
 		 * calling name ends in null (byte 16) from old smb
 		 * convention.
 		 */
-		if (server->workstation_RFC1001_name &&
-		    server->workstation_RFC1001_name[0] != 0)
+		if (server->workstation_RFC1001_name[0] != 0)
 			rfc1002mangle(ses_init_buf->trailer.
 				      session_req.calling_name,
 				      server->workstation_RFC1001_name,
···
 #endif /* CIFS_WEAK_PW_HASH */
 		rc = SMBNTencrypt(tcon->password, ses->server->cryptkey,
 				  bcc_ptr, nls_codepage);
+		if (rc) {
+			cifs_dbg(FYI, "%s Can't generate NTLM rsp. Error: %d\n",
+				 __func__, rc);
+			cifs_buf_release(smb_buffer);
+			return rc;
+		}
 
 		bcc_ptr += CIFS_AUTH_RESP_SIZE;
 		if (ses->capabilities & CAP_UNICODE) {
···

	/* return pointer to beginning of data area, ie offset from SMB start */
	if ((*off != 0) && (*len != 0))
-		return hdr->ProtocolId + *off;
+		return (char *)(&hdr->ProtocolId[0]) + *off;
	else
		return NULL;
}
fs/cifs/smb2ops.c | +2 -1

···

		/* No need to change MaxChunks since already set to 1 */
		chunk_sizes_updated = true;
-	}
+	} else
+		goto cchunk_out;
	}

cchunk_out:
fs/cifs/smb2pdu.c | +10 -7

···
	struct smb2_ioctl_req *req;
	struct smb2_ioctl_rsp *rsp;
	struct TCP_Server_Info *server;
-	struct cifs_ses *ses = tcon->ses;
+	struct cifs_ses *ses;
	struct kvec iov[2];
	int resp_buftype;
	int num_iovecs;
···
	/* zero out returned data len, in case of error */
	if (plen)
		*plen = 0;
+
+	if (tcon)
+		ses = tcon->ses;
+	else
+		return -EIO;

	if (ses && (ses->server))
		server = ses->server;
···
	rsp = (struct smb2_ioctl_rsp *)iov[0].iov_base;

	if ((rc != 0) && (rc != -EINVAL)) {
-		if (tcon)
-			cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE);
+		cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE);
		goto ioctl_exit;
	} else if (rc == -EINVAL) {
		if ((opcode != FSCTL_SRV_COPYCHUNK_WRITE) &&
		    (opcode != FSCTL_SRV_COPYCHUNK)) {
-			if (tcon)
-				cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE);
+			cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE);
			goto ioctl_exit;
		}
	}
···

	rc = SendReceive2(xid, ses, iov, 1, &resp_buftype, 0);

-	if ((rc != 0) && tcon)
+	if (rc != 0)
		cifs_stats_fail_inc(tcon, SMB2_FLUSH_HE);

	free_rsp_buf(resp_buftype, iov[0].iov_base);
···
	struct kvec iov[2];
	int rc = 0;
	int len;
-	int resp_buftype;
+	int resp_buftype = CIFS_NO_BUFFER;
	unsigned char *bufptr;
	struct TCP_Server_Info *server;
	struct cifs_ses *ses = tcon->ses;
···
	if (!cipher_name_set) {
		int cipher_name_len = strlen(ECRYPTFS_DEFAULT_CIPHER);

-		BUG_ON(cipher_name_len >= ECRYPTFS_MAX_CIPHER_NAME_SIZE);
+		BUG_ON(cipher_name_len > ECRYPTFS_MAX_CIPHER_NAME_SIZE);
		strcpy(mount_crypt_stat->global_default_cipher_name,
		       ECRYPTFS_DEFAULT_CIPHER);
	}
fs/fs-writeback.c | +83 -10

···
	struct completion *done;	/* set if the caller waits */
};

+/*
+ * If an inode is constantly having its pages dirtied, but then the
+ * updates stop dirtytime_expire_interval seconds in the past, it's
+ * possible for the worst case time between when an inode has its
+ * timestamps updated and when they finally get written out to be two
+ * dirtytime_expire_intervals.  We set the default to 12 hours (in
+ * seconds), which means most of the time inodes will have their
+ * timestamps written to disk after 12 hours, but in the worst case a
+ * few inodes might not have their timestamps updated for 24 hours.
+ */
+unsigned int dirtytime_expire_interval = 12 * 60 * 60;
+
/**
 * writeback_in_progress - determine whether there is writeback in progress
 * @bdi: the device's backing_dev_info structure.
···

	if ((flags & EXPIRE_DIRTY_ATIME) == 0)
		older_than_this = work->older_than_this;
-	else if ((work->reason == WB_REASON_SYNC) == 0) {
-		expire_time = jiffies - (HZ * 86400);
+	else if (!work->for_sync) {
+		expire_time = jiffies - (dirtytime_expire_interval * HZ);
		older_than_this = &expire_time;
	}
	while (!list_empty(delaying_queue)) {
···
		 */
		redirty_tail(inode, wb);
	} else if (inode->i_state & I_DIRTY_TIME) {
+		inode->dirtied_when = jiffies;
		list_move(&inode->i_wb_list, &wb->b_dirty_time);
	} else {
		/* The inode is clean. Remove from writeback lists. */
···
	spin_lock(&inode->i_lock);

	dirty = inode->i_state & I_DIRTY;
-	if (((dirty & (I_DIRTY_SYNC | I_DIRTY_DATASYNC)) &&
-	     (inode->i_state & I_DIRTY_TIME)) ||
-	    (inode->i_state & I_DIRTY_TIME_EXPIRED)) {
-		dirty |= I_DIRTY_TIME | I_DIRTY_TIME_EXPIRED;
-		trace_writeback_lazytime(inode);
-	}
+	if (inode->i_state & I_DIRTY_TIME) {
+		if ((dirty & (I_DIRTY_SYNC | I_DIRTY_DATASYNC)) ||
+		    unlikely(inode->i_state & I_DIRTY_TIME_EXPIRED) ||
+		    unlikely(time_after(jiffies,
+					(inode->dirtied_time_when +
+					 dirtytime_expire_interval * HZ)))) {
+			dirty |= I_DIRTY_TIME | I_DIRTY_TIME_EXPIRED;
+			trace_writeback_lazytime(inode);
+		}
+	} else
+		inode->i_state &= ~I_DIRTY_TIME_EXPIRED;
	inode->i_state &= ~dirty;

	/*
···
	rcu_read_unlock();
}

+/*
+ * Wake up bdi's periodically to make sure dirtytime inodes gets
+ * written back periodically.  We deliberately do *not* check the
+ * b_dirtytime list in wb_has_dirty_io(), since this would cause the
+ * kernel to be constantly waking up once there are any dirtytime
+ * inodes on the system.  So instead we define a separate delayed work
+ * function which gets called much more rarely.  (By default, only
+ * once every 12 hours.)
+ *
+ * If there is any other write activity going on in the file system,
+ * this function won't be necessary.  But if the only thing that has
+ * happened on the file system is a dirtytime inode caused by an atime
+ * update, we need this infrastructure below to make sure that inode
+ * eventually gets pushed out to disk.
+ */
+static void wakeup_dirtytime_writeback(struct work_struct *w);
+static DECLARE_DELAYED_WORK(dirtytime_work, wakeup_dirtytime_writeback);
+
+static void wakeup_dirtytime_writeback(struct work_struct *w)
+{
+	struct backing_dev_info *bdi;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {
+		if (list_empty(&bdi->wb.b_dirty_time))
+			continue;
+		bdi_wakeup_thread(bdi);
+	}
+	rcu_read_unlock();
+	schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ);
+}
+
+static int __init start_dirtytime_writeback(void)
+{
+	schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ);
+	return 0;
+}
+__initcall(start_dirtytime_writeback);
+
+int dirtytime_interval_handler(struct ctl_table *table, int write,
+			       void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+	int ret;
+
+	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+	if (ret == 0 && write)
+		mod_delayed_work(system_wq, &dirtytime_work, 0);
+	return ret;
+}
+
static noinline void block_dump___mark_inode_dirty(struct inode *inode)
{
	if (inode->i_ino || strcmp(inode->i_sb->s_id, "bdev")) {
···
	}

	inode->dirtied_when = jiffies;
-	list_move(&inode->i_wb_list, dirtytime ?
-		  &bdi->wb.b_dirty_time : &bdi->wb.b_dirty);
+	if (dirtytime)
+		inode->dirtied_time_when = jiffies;
+	if (inode->i_state & (I_DIRTY_INODE | I_DIRTY_PAGES))
+		list_move(&inode->i_wb_list, &bdi->wb.b_dirty);
+	else
+		list_move(&inode->i_wb_list,
+			  &bdi->wb.b_dirty_time);
	spin_unlock(&bdi->wb.list_lock);
	trace_writeback_dirty_inode_enqueue(inode);

fs/fuse/dev.c | +17 -2

···

	newpage = buf->page;

-	if (WARN_ON(!PageUptodate(newpage)))
-		return -EIO;
+	if (!PageUptodate(newpage))
+		SetPageUptodate(newpage);

	ClearPageMappedToDisk(newpage);
···
	return err;
}

+static int fuse_dev_open(struct inode *inode, struct file *file)
+{
+	/*
+	 * The fuse device's file's private_data is used to hold
+	 * the fuse_conn(ection) when it is mounted, and is used to
+	 * keep track of whether the file has been mounted already.
+	 */
+	file->private_data = NULL;
+	return 0;
+}
+
static ssize_t fuse_dev_read(struct kiocb *iocb, const struct iovec *iov,
			     unsigned long nr_segs, loff_t pos)
{
···
static int fuse_notify(struct fuse_conn *fc, enum fuse_notify_code code,
		       unsigned int size, struct fuse_copy_state *cs)
{
+	/* Don't try to move pages (yet) */
+	cs->move_pages = 0;
+
	switch (code) {
	case FUSE_NOTIFY_POLL:
		return fuse_notify_poll(fc, size, cs);
···

const struct file_operations fuse_dev_operations = {
	.owner		= THIS_MODULE,
+	.open		= fuse_dev_open,
	.llseek		= no_llseek,
	.read		= do_sync_read,
	.aio_read	= fuse_dev_read,
fs/hfsplus/brec.c | +11 -9

···
	hfs_bnode_write(node, entry, data_off + key_len, entry_len);
	hfs_bnode_dump(node);

-	if (new_node) {
-		/* update parent key if we inserted a key
-		 * at the start of the first node
-		 */
-		if (!rec && new_node != node)
-			hfs_brec_update_parent(fd);
+	/*
+	 * update parent key if we inserted a key
+	 * at the start of the node and it is not the new node
+	 */
+	if (!rec && new_node != node) {
+		hfs_bnode_read_key(node, fd->search_key, data_off + size);
+		hfs_brec_update_parent(fd);
+	}

+	if (new_node) {
		hfs_bnode_put(fd->bnode);
		if (!new_node->parent) {
			hfs_btree_inc_height(tree);
···
		}
		goto again;
	}
-
-	if (!rec)
-		hfs_brec_update_parent(fd);

	return 0;
}
···
	if (IS_ERR(parent))
		return PTR_ERR(parent);
	__hfs_brec_find(parent, fd, hfs_find_rec_by_key);
+	if (fd->record < 0)
+		return -ENOENT;
	hfs_bnode_dump(parent);
	rec = fd->record;
fs/kernfs/file.c | +1

···
		goto out_free;
	}

+	of->event = atomic_read(&of->kn->attr.open->event);
	ops = kernfs_ops(of->kn);
	if (ops->read)
		len = ops->read(of, buf, len, *ppos);
fs/locks.c | +5 -5

···
int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
{
	int error = 0;
-	struct file_lock *new_fl;
	struct file_lock_context *ctx = inode->i_flctx;
-	struct file_lock *fl;
+	struct file_lock *new_fl, *fl, *tmp;
	unsigned long break_time;
	int want_write = (mode & O_ACCMODE) != O_RDONLY;
	LIST_HEAD(dispose);
···
			break_time++;	/* so that 0 means no break time */
	}

-	list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
+	list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list) {
		if (!leases_conflict(fl, new_fl))
			continue;
		if (want_write) {
···
	}

	if (my_fl != NULL) {
-		error = lease->fl_lmops->lm_change(my_fl, arg, &dispose);
+		lease = my_fl;
+		error = lease->fl_lmops->lm_change(lease, arg, &dispose);
		if (error)
			goto out;
		goto out_setup;
···
			break;
		}
	}
-	trace_generic_delete_lease(inode, fl);
+	trace_generic_delete_lease(inode, victim);
	if (victim)
		error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose);
	spin_unlock(&ctx->flc_lock);
···
			clear_bit(NFS_DELEGATION_NEED_RECLAIM,
				  &delegation->flags);
			spin_unlock(&delegation->lock);
-			put_rpccred(oldcred);
			rcu_read_unlock();
+			put_rpccred(oldcred);
			trace_nfs4_reclaim_delegation(inode, res->delegation_type);
		} else {
			/* We appear to have raced with a delegation return. */
···
			delegation = NULL;
			goto out;
		}
-		freeme = nfs_detach_delegation_locked(nfsi, 
+		if (test_and_set_bit(NFS_DELEGATION_RETURNING,
+					&old_delegation->flags))
+			goto out;
+		freeme = nfs_detach_delegation_locked(nfsi,
					old_delegation, clp);
		if (freeme == NULL)
			goto out;
···
{
	bool ret = false;

+	if (test_bit(NFS_DELEGATION_RETURNING, &delegation->flags))
+		goto out;
	if (test_and_clear_bit(NFS_DELEGATION_RETURN, &delegation->flags))
		ret = true;
	if (test_and_clear_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags) && !ret) {
···
		ret = true;
		spin_unlock(&delegation->lock);
	}
+out:
	return ret;
}
···
			super_list) {
		if (!nfs_delegation_need_return(delegation))
			continue;
-		inode = nfs_delegation_grab_inode(delegation);
-		if (inode == NULL)
+		if (!nfs_sb_active(server->super))
			continue;
+		inode = nfs_delegation_grab_inode(delegation);
+		if (inode == NULL) {
+			rcu_read_unlock();
+			nfs_sb_deactive(server->super);
+			goto restart;
+		}
		delegation = nfs_start_delegation_return_locked(NFS_I(inode));
		rcu_read_unlock();

		err = nfs_end_delegation_return(inode, delegation, 0);
		iput(inode);
+		nfs_sb_deactive(server->super);
		if (!err)
			goto restart;
		set_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state);
···
	list_for_each_entry_rcu(server, &clp->cl_superblocks, client_link) {
		list_for_each_entry_rcu(delegation, &server->delegations,
				super_list) {
+			if (test_bit(NFS_DELEGATION_RETURNING,
+						&delegation->flags))
+				continue;
			if (test_bit(NFS_DELEGATION_NEED_RECLAIM,
						&delegation->flags) == 0)
				continue;
-			inode = nfs_delegation_grab_inode(delegation);
-			if (inode == NULL)
+			if (!nfs_sb_active(server->super))
				continue;
-			delegation = nfs_detach_delegation(NFS_I(inode),
-							delegation, server);
+			inode = nfs_delegation_grab_inode(delegation);
+			if (inode == NULL) {
+				rcu_read_unlock();
+				nfs_sb_deactive(server->super);
+				goto restart;
+			}
+			delegation = nfs_start_delegation_return_locked(NFS_I(inode));
			rcu_read_unlock();
-
-			if (delegation != NULL)
-				nfs_free_delegation(delegation);
+			if (delegation != NULL) {
+				delegation = nfs_detach_delegation(NFS_I(inode),
+						delegation, server);
+				if (delegation != NULL)
+					nfs_free_delegation(delegation);
+			}
			iput(inode);
+			nfs_sb_deactive(server->super);
			goto restart;
		}
	}
fs/nfs/dir.c | +19 -3

···
	return 0;
}

+/* Match file and dirent using either filehandle or fileid
+ * Note: caller is responsible for checking the fsid
+ */
static
int nfs_same_file(struct dentry *dentry, struct nfs_entry *entry)
{
+	struct nfs_inode *nfsi;
+
	if (dentry->d_inode == NULL)
		goto different;
-	if (nfs_compare_fh(entry->fh, NFS_FH(dentry->d_inode)) != 0)
-		goto different;
-	return 1;
+
+	nfsi = NFS_I(dentry->d_inode);
+	if (entry->fattr->fileid == nfsi->fileid)
+		return 1;
+	if (nfs_compare_fh(entry->fh, &nfsi->fh) == 0)
+		return 1;
different:
	return 0;
}
···
	struct inode *inode;
	int status;

+	if (!(entry->fattr->valid & NFS_ATTR_FATTR_FILEID))
+		return;
+	if (!(entry->fattr->valid & NFS_ATTR_FATTR_FSID))
+		return;
	if (filename.name[0] == '.') {
		if (filename.len == 1)
			return;
···

	dentry = d_lookup(parent, &filename);
	if (dentry != NULL) {
+		/* Is there a mountpoint here? If so, just exit */
+		if (!nfs_fsid_equal(&NFS_SB(dentry->d_sb)->fsid,
+					&entry->fattr->fsid))
+			goto out;
		if (nfs_same_file(dentry, entry)) {
			nfs_set_verifier(dentry, nfs_save_change_attribute(dir));
			status = nfs_refresh_inode(dentry->d_inode, entry->fattr);
fs/nfs/file.c | +9 -2

···
		iocb->ki_filp,
		iov_iter_count(to), (unsigned long) iocb->ki_pos);

-	result = nfs_revalidate_mapping(inode, iocb->ki_filp->f_mapping);
+	result = nfs_revalidate_mapping_protected(inode, iocb->ki_filp->f_mapping);
	if (!result) {
		result = generic_file_read_iter(iocb, to);
		if (result > 0)
···
	dprintk("NFS: splice_read(%pD2, %lu@%Lu)\n",
		filp, (unsigned long) count, (unsigned long long) *ppos);

-	res = nfs_revalidate_mapping(inode, filp->f_mapping);
+	res = nfs_revalidate_mapping_protected(inode, filp->f_mapping);
	if (!res) {
		res = generic_file_splice_read(filp, ppos, pipe, count, flags);
		if (res > 0)
···
				 nfs_wait_bit_killable, TASK_KILLABLE);
	if (ret)
		return ret;
+	/*
+	 * Wait for O_DIRECT to complete
+	 */
+	nfs_inode_dio_wait(mapping->host);

	page = grab_cache_page_write_begin(mapping, index, flags);
	if (!page)
···

	/* make sure the cache has finished storing the page */
	nfs_fscache_wait_on_page_write(NFS_I(inode), page);
+
+	wait_on_bit_action(&NFS_I(inode)->flags, NFS_INO_INVALIDATING,
+			nfs_wait_bit_killable, TASK_KILLABLE);

	lock_page(page);
	mapping = page_file_mapping(page);
fs/nfs/inode.c | +92 -19

···
 * This is a copy of the common vmtruncate, but with the locking
 * corrected to take into account the fact that NFS requires
 * inode->i_size to be updated under the inode->i_lock.
+ * Note: must be called with inode->i_lock held!
 */
static int nfs_vmtruncate(struct inode * inode, loff_t offset)
{
···
	if (err)
		goto out;

-	spin_lock(&inode->i_lock);
	i_size_write(inode, offset);
	/* Optimisation */
	if (offset == 0)
		NFS_I(inode)->cache_validity &= ~NFS_INO_INVALID_DATA;
-	spin_unlock(&inode->i_lock);

+	spin_unlock(&inode->i_lock);
	truncate_pagecache(inode, offset);
+	spin_lock(&inode->i_lock);
out:
	return err;
}
···
 * Note: we do this in the *proc.c in order to ensure that
 *       it works for things like exclusive creates too.
 */
-void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr)
+void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr,
+		struct nfs_fattr *fattr)
{
+	/* Barrier: bump the attribute generation count. */
+	nfs_fattr_set_barrier(fattr);
+
+	spin_lock(&inode->i_lock);
+	NFS_I(inode)->attr_gencount = fattr->gencount;
	if ((attr->ia_valid & (ATTR_MODE|ATTR_UID|ATTR_GID)) != 0) {
-		spin_lock(&inode->i_lock);
		if ((attr->ia_valid & ATTR_MODE) != 0) {
			int mode = attr->ia_mode & S_IALLUGO;
			mode |= inode->i_mode & ~S_IALLUGO;
···
			inode->i_gid = attr->ia_gid;
		nfs_set_cache_invalid(inode, NFS_INO_INVALID_ACCESS
				| NFS_INO_INVALID_ACL);
-		spin_unlock(&inode->i_lock);
	}
	if ((attr->ia_valid & ATTR_SIZE) != 0) {
		nfs_inc_stats(inode, NFSIOS_SETATTRTRUNC);
		nfs_vmtruncate(inode, attr->ia_size);
	}
+	nfs_update_inode(inode, fattr);
+	spin_unlock(&inode->i_lock);
}
EXPORT_SYMBOL_GPL(nfs_setattr_update_inode);
···

	if (mapping->nrpages != 0) {
		if (S_ISREG(inode->i_mode)) {
+			unmap_mapping_range(mapping, 0, 0, 0);
			ret = nfs_sync_mapping(mapping);
			if (ret < 0)
				return ret;
···
}

/**
- * nfs_revalidate_mapping - Revalidate the pagecache
+ * __nfs_revalidate_mapping - Revalidate the pagecache
 * @inode - pointer to host inode
 * @mapping - pointer to mapping
+ * @may_lock - take inode->i_mutex?
 */
-int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping)
+static int __nfs_revalidate_mapping(struct inode *inode,
+		struct address_space *mapping,
+		bool may_lock)
{
	struct nfs_inode *nfsi = NFS_I(inode);
	unsigned long *bitlock = &nfsi->flags;
···
	nfsi->cache_validity &= ~NFS_INO_INVALID_DATA;
	spin_unlock(&inode->i_lock);
	trace_nfs_invalidate_mapping_enter(inode);
-	ret = nfs_invalidate_mapping(inode, mapping);
+	if (may_lock) {
+		mutex_lock(&inode->i_mutex);
+		ret = nfs_invalidate_mapping(inode, mapping);
+		mutex_unlock(&inode->i_mutex);
+	} else
+		ret = nfs_invalidate_mapping(inode, mapping);
	trace_nfs_invalidate_mapping_exit(inode, ret);

	clear_bit_unlock(NFS_INO_INVALIDATING, bitlock);
···
	wake_up_bit(bitlock, NFS_INO_INVALIDATING);
out:
	return ret;
+}
+
+/**
+ * nfs_revalidate_mapping - Revalidate the pagecache
+ * @inode - pointer to host inode
+ * @mapping - pointer to mapping
+ */
+int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping)
+{
+	return __nfs_revalidate_mapping(inode, mapping, false);
+}
+
+/**
+ * nfs_revalidate_mapping_protected - Revalidate the pagecache
+ * @inode - pointer to host inode
+ * @mapping - pointer to mapping
+ *
+ * Differs from nfs_revalidate_mapping() in that it grabs the inode->i_mutex
+ * while invalidating the mapping.
+ */
+int nfs_revalidate_mapping_protected(struct inode *inode, struct address_space *mapping)
+{
+	return __nfs_revalidate_mapping(inode, mapping, true);
}

static unsigned long nfs_wcc_update_inode(struct inode *inode, struct nfs_fattr *fattr)
···
	return timespec_compare(&fattr->ctime, &inode->i_ctime) > 0;
}

-static int nfs_size_need_update(const struct inode *inode, const struct nfs_fattr *fattr)
-{
-	if (!(fattr->valid & NFS_ATTR_FATTR_SIZE))
-		return 0;
-	return nfs_size_to_loff_t(fattr->size) > i_size_read(inode);
-}
-
static atomic_long_t nfs_attr_generation_counter;

static unsigned long nfs_read_attr_generation_counter(void)
···
{
	return atomic_long_inc_return(&nfs_attr_generation_counter);
}
+EXPORT_SYMBOL_GPL(nfs_inc_attr_generation_counter);

void nfs_fattr_init(struct nfs_fattr *fattr)
{
···
	fattr->group_name = NULL;
}
EXPORT_SYMBOL_GPL(nfs_fattr_init);
+
+/**
+ * nfs_fattr_set_barrier
+ * @fattr: attributes
+ *
+ * Used to set a barrier after an attribute was updated. This
+ * barrier ensures that older attributes from RPC calls that may
+ * have raced with our update cannot clobber these new values.
+ * Note that you are still responsible for ensuring that other
+ * operations which change the attribute on the server do not
+ * collide.
+ */
+void nfs_fattr_set_barrier(struct nfs_fattr *fattr)
+{
+	fattr->gencount = nfs_inc_attr_generation_counter();
+}

struct nfs_fattr *nfs_alloc_fattr(void)
{
···

	return ((long)fattr->gencount - (long)nfsi->attr_gencount) > 0 ||
		nfs_ctime_need_update(inode, fattr) ||
-		nfs_size_need_update(inode, fattr) ||
		((long)nfsi->attr_gencount - (long)nfs_read_attr_generation_counter() > 0);
}
···
	int status;

	spin_lock(&inode->i_lock);
+	nfs_fattr_set_barrier(fattr);
	status = nfs_post_op_update_inode_locked(inode, fattr);
	spin_unlock(&inode->i_lock);
···
EXPORT_SYMBOL_GPL(nfs_post_op_update_inode);

/**
- * nfs_post_op_update_inode_force_wcc - try to update the inode attribute cache
+ * nfs_post_op_update_inode_force_wcc_locked - update the inode attribute cache
 * @inode - pointer to inode
 * @fattr - updated attributes
 *
···
 *
 * This function is mainly designed to be used by the ->write_done() functions.
 */
-int nfs_post_op_update_inode_force_wcc(struct inode *inode, struct nfs_fattr *fattr)
+int nfs_post_op_update_inode_force_wcc_locked(struct inode *inode, struct nfs_fattr *fattr)
{
	int status;

-	spin_lock(&inode->i_lock);
	/* Don't do a WCC update if these attributes are already stale */
	if ((fattr->valid & NFS_ATTR_FATTR) == 0 ||
	    !nfs_inode_attrs_need_update(inode, fattr)) {
···
	}
out_noforce:
	status = nfs_post_op_update_inode_locked(inode, fattr);
+	return status;
+}
+
+/**
+ * nfs_post_op_update_inode_force_wcc - try to update the inode attribute cache
+ * @inode - pointer to inode
+ * @fattr - updated attributes
+ *
+ * After an operation that has changed the inode metadata, mark the
+ * attribute cache as being invalid, then try to update it. Fake up
+ * weak cache consistency data, if none exist.
+ *
+ * This function is mainly designed to be used by the ->write_done() functions.
+ */
+int nfs_post_op_update_inode_force_wcc(struct inode *inode, struct nfs_fattr *fattr)
+{
+	int status;
+
+	spin_lock(&inode->i_lock);
+	nfs_fattr_set_barrier(fattr);
+	status = nfs_post_op_update_inode_force_wcc_locked(inode, fattr);
	spin_unlock(&inode->i_lock);
	return status;
}
···
			nfs_inc_stats(inode, NFSIOS_ATTRINVALIDATE);
			nfsi->attrtimeo = NFS_MINATTRTIMEO(inode);
			nfsi->attrtimeo_timestamp = now;
+			/* Set barrier to be more recent than all outstanding updates */
			nfsi->attr_gencount = nfs_inc_attr_generation_counter();
		} else {
			if (!time_in_range_open(now, nfsi->attrtimeo_timestamp, nfsi->attrtimeo_timestamp + nfsi->attrtimeo)) {
···
				nfsi->attrtimeo = NFS_MAXATTRTIMEO(inode);
				nfsi->attrtimeo_timestamp = now;
			}
+			/* Set the barrier to be more recent than this fattr */
+			if ((long)fattr->gencount - (long)nfsi->attr_gencount > 0)
+				nfsi->attr_gencount = fattr->gencount;
		}
		invalid &= ~NFS_INO_INVALID_ATTR;
		/* Don't invalidate the data if we were to blame */
···
	nfs_fattr_init(fattr);
	status = rpc_call_sync(NFS_CLIENT(inode), &msg, 0);
	if (status == 0)
-		nfs_setattr_update_inode(inode, sattr);
+		nfs_setattr_update_inode(inode, sattr, fattr);
	dprintk("NFS reply setattr: %d\n", status);
	return status;
}
···
	if (nfs3_async_handle_jukebox(task, inode))
		return -EAGAIN;
	if (task->tk_status >= 0)
-		nfs_post_op_update_inode_force_wcc(inode, hdr->res.fattr);
+		nfs_writeback_update_inode(hdr);
	return 0;
}
fs/nfs/nfs3xdr.c | +5

···
	if (entry->fattr->valid & NFS_ATTR_FATTR_V3)
		entry->d_type = nfs_umode_to_dtype(entry->fattr->mode);

+	if (entry->fattr->fileid != entry->ino) {
+		entry->fattr->mounted_on_fileid = entry->ino;
+		entry->fattr->valid |= NFS_ATTR_FATTR_MOUNTED_ON_FILEID;
+	}
+
	/* In fact, a post_op_fh3: */
	p = xdr_inline_decode(xdr, 4);
	if (unlikely(p == NULL))
fs/nfs/nfs4client.c | +4 -5

···
	spin_lock(&nn->nfs_client_lock);
	list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) {

+		if (pos == new)
+			goto found;
+
		if (pos->rpc_ops != new->rpc_ops)
			continue;
···
		prev = pos;

		status = nfs_wait_client_init_complete(pos);
-		if (pos->cl_cons_state == NFS_CS_SESSION_INITING) {
-			nfs4_schedule_lease_recovery(pos);
-			status = nfs4_wait_clnt_recover(pos);
-		}
		spin_lock(&nn->nfs_client_lock);
		if (status < 0)
			break;
···
		 */
		if (!nfs4_match_client_owner_id(pos, new))
			continue;
-
+found:
		atomic_inc(&pos->cl_count);
		*result = pos;
		status = 0;
fs/nfs/nfs4proc.c | +21 -10

···
	if (!cinfo->atomic || cinfo->before != dir->i_version)
		nfs_force_lookup_revalidate(dir);
	dir->i_version = cinfo->after;
+	nfsi->attr_gencount = nfs_inc_attr_generation_counter();
	nfs_fscache_invalidate(dir);
	spin_unlock(&dir->i_lock);
}
···

	opendata->o_arg.open_flags = 0;
	opendata->o_arg.fmode = fmode;
+	opendata->o_arg.share_access = nfs4_map_atomic_open_share(
+			NFS_SB(opendata->dentry->d_sb),
+			fmode, 0);
	memset(&opendata->o_res, 0, sizeof(opendata->o_res));
	memset(&opendata->c_res, 0, sizeof(opendata->c_res));
	nfs4_init_opendata_res(opendata);
···
				opendata->o_res.f_attr, sattr,
				state, label, olabel);
		if (status == 0) {
-			nfs_setattr_update_inode(state->inode, sattr);
-			nfs_post_op_update_inode(state->inode, opendata->o_res.f_attr);
+			nfs_setattr_update_inode(state->inode, sattr,
+					opendata->o_res.f_attr);
			nfs_setsecurity(state->inode, opendata->o_res.f_attr, olabel);
		}
	}
···
	case -NFS4ERR_BAD_STATEID:
	case -NFS4ERR_EXPIRED:
		if (!nfs4_stateid_match(&calldata->arg.stateid,
-					&state->stateid)) {
+					&state->open_stateid)) {
			rpc_restart_call_prepare(task);
			goto out_release;
		}
···
	is_rdwr = test_bit(NFS_O_RDWR_STATE, &state->flags);
	is_rdonly = test_bit(NFS_O_RDONLY_STATE, &state->flags);
	is_wronly = test_bit(NFS_O_WRONLY_STATE, &state->flags);
-	nfs4_stateid_copy(&calldata->arg.stateid, &state->stateid);
+	nfs4_stateid_copy(&calldata->arg.stateid, &state->open_stateid);
	/* Calculate the change in open mode */
	calldata->arg.fmode = 0;
	if (state->n_rdwr == 0) {
···

	status = nfs4_do_setattr(inode, cred, fattr, sattr, state, NULL, label);
	if (status == 0) {
-		nfs_setattr_update_inode(inode, sattr);
+		nfs_setattr_update_inode(inode, sattr, fattr);
		nfs_setsecurity(inode, fattr, label);
	}
	nfs4_label_free(label);
···
	}
	if (task->tk_status >= 0) {
		renew_lease(NFS_SERVER(inode), hdr->timestamp);
-		nfs_post_op_update_inode_force_wcc(inode, &hdr->fattr);
+		nfs_writeback_update_inode(hdr);
	}
	return 0;
}
···

	if (status == 0) {
		clp->cl_clientid = res.clientid;
-		clp->cl_exchange_flags = (res.flags & ~EXCHGID4_FLAG_CONFIRMED_R);
-		if (!(res.flags & EXCHGID4_FLAG_CONFIRMED_R))
+		clp->cl_exchange_flags = res.flags;
+		/* Client ID is not confirmed */
+		if (!(res.flags & EXCHGID4_FLAG_CONFIRMED_R)) {
+			clear_bit(NFS4_SESSION_ESTABLISHED,
+					&clp->cl_session->session_state);
			clp->cl_seqid = res.seqid;
+		}

		kfree(clp->cl_serverowner);
		clp->cl_serverowner = res.server_owner;
···
		struct nfs41_create_session_res *res)
{
	nfs4_copy_sessionid(&session->sess_id, &res->sessionid);
+	/* Mark client id and session as being confirmed */
+	session->clp->cl_exchange_flags |= EXCHGID4_FLAG_CONFIRMED_R;
+	set_bit(NFS4_SESSION_ESTABLISHED, &session->session_state);
	session->flags = res->flags;
	memcpy(&session->fc_attrs, &res->fc_attrs, sizeof(session->fc_attrs));
	if (res->flags & SESSION4_BACK_CHAN)
···
	dprintk("--> nfs4_proc_destroy_session\n");

	/* session is still being setup */
-	if (session->clp->cl_cons_state != NFS_CS_READY)
-		return status;
+	if (!test_and_clear_bit(NFS4_SESSION_ESTABLISHED, &session->session_state))
+		return 0;

	status = rpc_call_sync(session->clp->cl_rpcclient, &msg, RPC_TASK_TIMEOUT);
	trace_nfs4_destroy_session(session->clp, status);
fs/nfs/nfs4session.h | +1

···

enum nfs4_session_state {
	NFS4_SESSION_INITING,
+	NFS4_SESSION_ESTABLISHED,
};

extern int nfs4_setup_slot_table(struct nfs4_slot_table *tbl,
+16-2
fs/nfs/nfs4state.c
···
	status = nfs4_proc_exchange_id(clp, cred);
	if (status != NFS4_OK)
		return status;
-	set_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);

-	return nfs41_walk_client_list(clp, result, cred);
+	status = nfs41_walk_client_list(clp, result, cred);
+	if (status < 0)
+		return status;
+	if (clp != *result)
+		return 0;
+
+	/* Purge state if the client id was established in a prior instance */
+	if (clp->cl_exchange_flags & EXCHGID4_FLAG_CONFIRMED_R)
+		set_bit(NFS4CLNT_PURGE_STATE, &clp->cl_state);
+	else
+		set_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
+	nfs4_schedule_state_manager(clp);
+	status = nfs_wait_client_init_complete(clp);
+	if (status < 0)
+		nfs_put_client(clp);
+	return status;
}

#endif /* CONFIG_NFS_V4_1 */
···
	p = xdr_decode_hyper(p, &lgp->lg_seg.offset);
	p = xdr_decode_hyper(p, &lgp->lg_seg.length);
	p = xdr_decode_hyper(p, &lgp->lg_minlength);
-	nfsd4_decode_stateid(argp, &lgp->lg_sid);
+
+	status = nfsd4_decode_stateid(argp, &lgp->lg_sid);
+	if (status)
+		return status;
+
	READ_BUF(4);
	lgp->lg_maxcount = be32_to_cpup(p++);
···
	p = xdr_decode_hyper(p, &lcp->lc_seg.offset);
	p = xdr_decode_hyper(p, &lcp->lc_seg.length);
	lcp->lc_reclaim = be32_to_cpup(p++);
-	nfsd4_decode_stateid(argp, &lcp->lc_sid);
+
+	status = nfsd4_decode_stateid(argp, &lcp->lc_sid);
+	if (status)
+		return status;
+
	READ_BUF(4);
	lcp->lc_newoffset = be32_to_cpup(p++);
	if (lcp->lc_newoffset) {
···
	READ_BUF(16);
	p = xdr_decode_hyper(p, &lrp->lr_seg.offset);
	p = xdr_decode_hyper(p, &lrp->lr_seg.length);
-	nfsd4_decode_stateid(argp, &lrp->lr_sid);
+
+	status = nfsd4_decode_stateid(argp, &lrp->lr_sid);
+	if (status)
+		return status;
+
	READ_BUF(4);
	lrp->lrf_body_len = be32_to_cpup(p++);
	if (lrp->lrf_body_len > 0) {
···
		return nfserr_resource;
	*p++ = cpu_to_be32(lrp->lrs_present);
	if (lrp->lrs_present)
-		nfsd4_encode_stateid(xdr, &lrp->lr_sid);
+		return nfsd4_encode_stateid(xdr, &lrp->lr_sid);
	return nfs_ok;
}
#endif /* CONFIG_NFSD_PNFS */
+5-1
fs/nfsd/nfscache.c
···
{
	unsigned int hashsize;
	unsigned int i;
+	int status = 0;

	max_drc_entries = nfsd_cache_size_limit();
	atomic_set(&num_drc_entries, 0);
	hashsize = nfsd_hashsize(max_drc_entries);
	maskbits = ilog2(hashsize);

-	register_shrinker(&nfsd_reply_cache_shrinker);
+	status = register_shrinker(&nfsd_reply_cache_shrinker);
+	if (status)
+		return status;
+
	drc_slab = kmem_cache_create("nfsd_drc", sizeof(struct svc_cacherep),
				     0, 0, NULL);
	if (!drc_slab)
+4-3
fs/nilfs2/segment.c
···
				     struct the_nilfs *nilfs)
{
	struct nilfs_inode_info *ii, *n;
+	int during_mount = !(sci->sc_super->s_flags & MS_ACTIVE);
	int defer_iput = false;

	spin_lock(&nilfs->ns_inode_lock);
···
		brelse(ii->i_bh);
		ii->i_bh = NULL;
		list_del_init(&ii->i_dirty);
-		if (!ii->vfs_inode.i_nlink) {
+		if (!ii->vfs_inode.i_nlink || during_mount) {
			/*
-			 * Defer calling iput() to avoid a deadlock
-			 * over I_SYNC flag for inodes with i_nlink == 0
+			 * Defer calling iput() to avoid deadlocks if
+			 * i_nlink == 0 or mount is not yet finished.
			 */
			list_add_tail(&ii->i_dirty, &sci->sc_iput_queue);
			defer_iput = true;
···

static inline int ocfs2_supports_append_dio(struct ocfs2_super *osb)
{
-	if (osb->s_feature_ro_compat & OCFS2_FEATURE_RO_COMPAT_APPEND_DIO)
+	if (osb->s_feature_incompat & OCFS2_FEATURE_INCOMPAT_APPEND_DIO)
		return 1;
	return 0;
}
+8-7
fs/ocfs2/ocfs2_fs.h
···
					 | OCFS2_FEATURE_INCOMPAT_INDEXED_DIRS \
					 | OCFS2_FEATURE_INCOMPAT_REFCOUNT_TREE \
					 | OCFS2_FEATURE_INCOMPAT_DISCONTIG_BG \
-					 | OCFS2_FEATURE_INCOMPAT_CLUSTERINFO)
+					 | OCFS2_FEATURE_INCOMPAT_CLUSTERINFO \
+					 | OCFS2_FEATURE_INCOMPAT_APPEND_DIO)
#define OCFS2_FEATURE_RO_COMPAT_SUPP	(OCFS2_FEATURE_RO_COMPAT_UNWRITTEN \
					 | OCFS2_FEATURE_RO_COMPAT_USRQUOTA \
-					 | OCFS2_FEATURE_RO_COMPAT_GRPQUOTA \
-					 | OCFS2_FEATURE_RO_COMPAT_APPEND_DIO)
+					 | OCFS2_FEATURE_RO_COMPAT_GRPQUOTA)

/*
 * Heartbeat-only devices are missing journals and other files. The
···
#define OCFS2_FEATURE_INCOMPAT_CLUSTERINFO	0x4000

/*
+ * Append Direct IO support
+ */
+#define OCFS2_FEATURE_INCOMPAT_APPEND_DIO	0x8000
+
+/*
 * backup superblock flag is used to indicate that this volume
 * has backup superblocks.
 */
···
#define OCFS2_FEATURE_RO_COMPAT_USRQUOTA	0x0002
#define OCFS2_FEATURE_RO_COMPAT_GRPQUOTA	0x0004

-/*
- * Append Direct IO support
- */
-#define OCFS2_FEATURE_RO_COMPAT_APPEND_DIO	0x0008

/* The byte offset of the first backup block will be 1G.
 * The following will be 4G, 16G, 64G, 256G and 1T.
+27-6
fs/overlayfs/super.c
···
{
	struct ovl_fs *ufs = sb->s_fs_info;

-	if (!(*flags & MS_RDONLY) &&
-	    (!ufs->upper_mnt || (ufs->upper_mnt->mnt_sb->s_flags & MS_RDONLY)))
+	if (!(*flags & MS_RDONLY) && !ufs->upper_mnt)
		return -EROFS;

	return 0;
···
			break;

		default:
+			pr_err("overlayfs: unrecognized mount option \"%s\" or missing value\n", p);
			return -EINVAL;
		}
	}
+
+	/* Workdir is useless in non-upper mount */
+	if (!config->upperdir && config->workdir) {
+		pr_info("overlayfs: option \"workdir=%s\" is useless in a non-upper mount, ignore\n",
+			config->workdir);
+		kfree(config->workdir);
+		config->workdir = NULL;
+	}
+
	return 0;
}
···

	sb->s_stack_depth = 0;
	if (ufs->config.upperdir) {
-		/* FIXME: workdir is not needed for a R/O mount */
		if (!ufs->config.workdir) {
			pr_err("overlayfs: missing 'workdir'\n");
			goto out_free_config;
···
		err = ovl_mount_dir(ufs->config.upperdir, &upperpath);
		if (err)
			goto out_free_config;
+
+		/* Upper fs should not be r/o */
+		if (upperpath.mnt->mnt_sb->s_flags & MS_RDONLY) {
+			pr_err("overlayfs: upper fs is r/o, try multi-lower layers mount\n");
+			err = -EINVAL;
+			goto out_put_upperpath;
+		}

		err = ovl_mount_dir(ufs->config.workdir, &workpath);
		if (err)
···

	err = -EINVAL;
	stacklen = ovl_split_lowerdirs(lowertmp);
-	if (stacklen > OVL_MAX_STACK)
+	if (stacklen > OVL_MAX_STACK) {
+		pr_err("overlayfs: too many lower directries, limit is %d\n",
+		       OVL_MAX_STACK);
		goto out_free_lowertmp;
+	} else if (!ufs->config.upperdir && stacklen == 1) {
+		pr_err("overlayfs: at least 2 lowerdir are needed while upperdir nonexistent\n");
+		goto out_free_lowertmp;
+	}

	stack = kcalloc(stacklen, sizeof(struct path), GFP_KERNEL);
	if (!stack)
···
		ufs->numlower++;
	}

-	/* If the upper fs is r/o or nonexistent, we mark overlayfs r/o too */
-	if (!ufs->upper_mnt || (ufs->upper_mnt->mnt_sb->s_flags & MS_RDONLY))
+	/* If the upper fs is nonexistent, we mark overlayfs r/o too */
+	if (!ufs->upper_mnt)
		sb->s_flags |= MS_RDONLY;

	sb->s_d_op = &ovl_dentry_operations;
+3
fs/proc/task_mmu.c
···

static int pagemap_open(struct inode *inode, struct file *file)
{
+	/* do not disclose physical addresses: attack vector */
+	if (!capable(CAP_SYS_ADMIN))
+		return -EPERM;
	pr_warn_once("Bits 55-60 of /proc/PID/pagemap entries are about "
			"to stop being page-shift some time soon. See the "
			"linux/Documentation/vm/pagemap.txt for details.\n");
+26-26
include/drm/drm_mm.h
···
	unsigned scanned_preceeds_hole : 1;
	unsigned allocated : 1;
	unsigned long color;
-	unsigned long start;
-	unsigned long size;
+	u64 start;
+	u64 size;
	struct drm_mm *mm;
};

···
	unsigned int scan_check_range : 1;
	unsigned scan_alignment;
	unsigned long scan_color;
-	unsigned long scan_size;
-	unsigned long scan_hit_start;
-	unsigned long scan_hit_end;
+	u64 scan_size;
+	u64 scan_hit_start;
+	u64 scan_hit_end;
	unsigned scanned_blocks;
-	unsigned long scan_start;
-	unsigned long scan_end;
+	u64 scan_start;
+	u64 scan_end;
	struct drm_mm_node *prev_scanned_node;

	void (*color_adjust)(struct drm_mm_node *node, unsigned long color,
-			     unsigned long *start, unsigned long *end);
+			     u64 *start, u64 *end);
};

/**
···
	return mm->hole_stack.next;
}

-static inline unsigned long __drm_mm_hole_node_start(struct drm_mm_node *hole_node)
+static inline u64 __drm_mm_hole_node_start(struct drm_mm_node *hole_node)
{
	return hole_node->start + hole_node->size;
}
···
 * Returns:
 * Start of the subsequent hole.
 */
-static inline unsigned long drm_mm_hole_node_start(struct drm_mm_node *hole_node)
+static inline u64 drm_mm_hole_node_start(struct drm_mm_node *hole_node)
{
	BUG_ON(!hole_node->hole_follows);
	return __drm_mm_hole_node_start(hole_node);
}

-static inline unsigned long __drm_mm_hole_node_end(struct drm_mm_node *hole_node)
+static inline u64 __drm_mm_hole_node_end(struct drm_mm_node *hole_node)
{
	return list_entry(hole_node->node_list.next,
			  struct drm_mm_node, node_list)->start;
···
 * Returns:
 * End of the subsequent hole.
 */
-static inline unsigned long drm_mm_hole_node_end(struct drm_mm_node *hole_node)
+static inline u64 drm_mm_hole_node_end(struct drm_mm_node *hole_node)
{
	return __drm_mm_hole_node_end(hole_node);
}
···

int drm_mm_insert_node_generic(struct drm_mm *mm,
			       struct drm_mm_node *node,
-			       unsigned long size,
+			       u64 size,
			       unsigned alignment,
			       unsigned long color,
			       enum drm_mm_search_flags sflags,
···
 */
static inline int drm_mm_insert_node(struct drm_mm *mm,
				     struct drm_mm_node *node,
-				     unsigned long size,
+				     u64 size,
				     unsigned alignment,
				     enum drm_mm_search_flags flags)
{
···

int drm_mm_insert_node_in_range_generic(struct drm_mm *mm,
					struct drm_mm_node *node,
-					unsigned long size,
+					u64 size,
					unsigned alignment,
					unsigned long color,
-					unsigned long start,
-					unsigned long end,
+					u64 start,
+					u64 end,
					enum drm_mm_search_flags sflags,
					enum drm_mm_allocator_flags aflags);
/**
···
 */
static inline int drm_mm_insert_node_in_range(struct drm_mm *mm,
					      struct drm_mm_node *node,
-					      unsigned long size,
+					      u64 size,
					      unsigned alignment,
-					      unsigned long start,
-					      unsigned long end,
+					      u64 start,
+					      u64 end,
					      enum drm_mm_search_flags flags)
{
	return drm_mm_insert_node_in_range_generic(mm, node, size, alignment,
···
void drm_mm_remove_node(struct drm_mm_node *node);
void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new);
void drm_mm_init(struct drm_mm *mm,
-		 unsigned long start,
-		 unsigned long size);
+		 u64 start,
+		 u64 size);
void drm_mm_takedown(struct drm_mm *mm);
bool drm_mm_clean(struct drm_mm *mm);

void drm_mm_init_scan(struct drm_mm *mm,
-		      unsigned long size,
+		      u64 size,
		      unsigned alignment,
		      unsigned long color);
void drm_mm_init_scan_with_range(struct drm_mm *mm,
-				 unsigned long size,
+				 u64 size,
				 unsigned alignment,
				 unsigned long color,
-				 unsigned long start,
-				 unsigned long end);
+				 u64 start,
+				 u64 end);
bool drm_mm_scan_add_block(struct drm_mm_node *node);
bool drm_mm_scan_remove_block(struct drm_mm_node *node);
+1-1
include/drm/ttm/ttm_bo_api.h
···
	 * either of these locks held.
	 */

-	unsigned long offset;
+	uint64_t offset; /* GPU address space is independent of CPU word size */
	uint32_t cur_placement;

	struct sg_table *sg;
+1-1
include/drm/ttm/ttm_bo_driver.h
···
	bool has_type;
	bool use_type;
	uint32_t flags;
-	unsigned long gpu_offset;
+	uint64_t gpu_offset; /* GPU address space is independent of CPU word size */
	uint64_t size;
	uint32_t available_caching;
	uint32_t default_caching;
···
 */
int clk_get_phase(struct clk *clk);

+/**
+ * clk_is_match - check if two clk's point to the same hardware clock
+ * @p: clk compared against q
+ * @q: clk compared against p
+ *
+ * Returns true if the two struct clk pointers both point to the same hardware
+ * clock node. Put differently, returns true if struct clk *p and struct clk *q
+ * share the same struct clk_core object.
+ *
+ * Returns false otherwise. Note that two NULL clks are treated as matching.
+ */
+bool clk_is_match(const struct clk *p, const struct clk *q);
+
#else

static inline long clk_get_accuracy(struct clk *clk)
···
static inline long clk_get_phase(struct clk *clk)
{
	return -ENOTSUPP;
+}
+
+static inline bool clk_is_match(const struct clk *p, const struct clk *q)
+{
+	return p == q;
}

#endif
···
	struct mutex		i_mutex;

	unsigned long		dirtied_when;	/* jiffies of first dirtying */
+	unsigned long		dirtied_time_when;

	struct hlist_node	i_hash;
	struct list_head	i_wb_list;	/* backing dev IO list */
+8-1
include/linux/interrupt.h
···
 * IRQF_ONESHOT - Interrupt is not reenabled after the hardirq handler finished.
 *                Used by threaded interrupts which need to keep the
 *                irq line disabled until the threaded handler has been run.
- * IRQF_NO_SUSPEND - Do not disable this IRQ during suspend
+ * IRQF_NO_SUSPEND - Do not disable this IRQ during suspend. Does not guarantee
+ *                   that this interrupt will wake the system from a suspended
+ *                   state. See Documentation/power/suspend-and-interrupts.txt
 * IRQF_FORCE_RESUME - Force enable it on resume even if IRQF_NO_SUSPEND is set
 * IRQF_NO_THREAD - Interrupt cannot be threaded
 * IRQF_EARLY_RESUME - Resume IRQ early during syscore instead of at device
 *                resume time.
+ * IRQF_COND_SUSPEND - If the IRQ is shared with a NO_SUSPEND user, execute this
+ *                interrupt handler after suspending interrupts. For system
+ *                wakeup devices users need to implement wakeup detection in
+ *                their interrupt handlers.
 */
#define IRQF_SHARED		0x00000080
#define IRQF_PROBE_SHARED	0x00000100
···
#define IRQF_FORCE_RESUME	0x00008000
#define IRQF_NO_THREAD		0x00010000
#define IRQF_EARLY_RESUME	0x00020000
+#define IRQF_COND_SUSPEND	0x00040000

#define IRQF_TIMER		(__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD)
···
#ifdef CONFIG_PM_SLEEP
	unsigned int		nr_actions;
	unsigned int		no_suspend_depth;
+	unsigned int		cond_suspend_depth;
	unsigned int		force_resume_depth;
#endif
#ifdef CONFIG_PROC_FS
···
#include <linux/compiler.h>

unsigned long lcm(unsigned long a, unsigned long b) __attribute_const__;
+unsigned long lcm_not_zero(unsigned long a, unsigned long b) __attribute_const__;

#endif /* _LCM_H */
+1
include/linux/libata.h
···
					      * led */
	ATA_FLAG_NO_DIPM	= (1 << 23), /* host not happy with DIPM */
	ATA_FLAG_LOWTAG		= (1 << 24), /* host wants lowest available tag */
+	ATA_FLAG_SAS_HOST	= (1 << 25), /* SAS host */

	/* bits 24:31 of ap->flags are reserved for LLD specific flags */
+3
include/linux/mfd/palmas.h
···
#define PALMAS_GPADC_TRIM15					0x0E
#define PALMAS_GPADC_TRIM16					0x0F

+/* TPS659038 regen2_ctrl offset iss different from palmas */
+#define TPS659038_REGEN2_CTRL					0x12
+
/* TPS65917 Interrupt registers */

/* Registers for function INTERRUPT */
···
 * @driver_data: private regulator data
 * @of_node: OpenFirmware node to parse for device tree bindings (may be
 *           NULL).
- * @regmap: regmap to use for core regmap helpers if dev_get_regulator() is
+ * @regmap: regmap to use for core regmap helpers if dev_get_regmap() is
 *          insufficient.
 * @ena_gpio_initialized: GPIO controlling regulator enable was properly
 *                        initialized, meaning that >= 0 is a valid gpio
+5-17
include/linux/rhashtable.h
···
 * @buckets: size * hash buckets
 */
struct bucket_table {
-	size_t			size;
-	unsigned int		locks_mask;
-	spinlock_t		*locks;
-	struct rhash_head __rcu	*buckets[];
+	size_t			size;
+	unsigned int		locks_mask;
+	spinlock_t		*locks;
+
+	struct rhash_head __rcu	*buckets[] ____cacheline_aligned_in_smp;
};

typedef u32 (*rht_hashfn_t)(const void *data, u32 len, u32 seed);
···
 * @locks_mul: Number of bucket locks to allocate per cpu (default: 128)
 * @hashfn: Function to hash key
 * @obj_hashfn: Function to hash object
- * @grow_decision: If defined, may return true if table should expand
- * @shrink_decision: If defined, may return true if table should shrink
- *
- * Note: when implementing the grow and shrink decision function, min/max
- * shift must be enforced, otherwise, resizing watermarks they set may be
- * useless.
 */
struct rhashtable_params {
	size_t			nelem_hint;
···
	size_t			locks_mul;
	rht_hashfn_t		hashfn;
	rht_obj_hashfn_t	obj_hashfn;
-	bool			(*grow_decision)(const struct rhashtable *ht,
-						 size_t new_size);
-	bool			(*shrink_decision)(const struct rhashtable *ht,
-						   size_t new_size);
};

/**
···

void rhashtable_insert(struct rhashtable *ht, struct rhash_head *node);
bool rhashtable_remove(struct rhashtable *ht, struct rhash_head *node);
-
-bool rht_grow_above_75(const struct rhashtable *ht, size_t new_size);
-bool rht_shrink_below_30(const struct rhashtable *ht, size_t new_size);

int rhashtable_expand(struct rhashtable *ht);
int rhashtable_shrink(struct rhashtable *ht);
+5-4
include/linux/sched.h
···

	/*
	 * numa_faults_locality tracks if faults recorded during the last
-	 * scan window were remote/local. The task scan period is adapted
-	 * based on the locality of the faults with different weights
-	 * depending on whether they were shared or private faults
+	 * scan window were remote/local or failed to migrate. The task scan
+	 * period is adapted based on the locality of the faults with different
+	 * weights depending on whether they were shared or private faults
	 */
-	unsigned long numa_faults_locality[2];
+	unsigned long numa_faults_locality[3];

	unsigned long numa_pages_migrated;
#endif /* CONFIG_NUMA_BALANCING */
···
#define TNF_NO_GROUP	0x02
#define TNF_SHARED	0x04
#define TNF_FAULT_LOCAL	0x08
+#define TNF_MIGRATE_FAIL 0x10

#ifdef CONFIG_NUMA_BALANCING
extern void task_numa_fault(int last_node, int node, int pages, int flags);
+7-7
include/linux/serial_core.h
···
	unsigned char		iotype;			/* io access style */
	unsigned char		unused1;

-#define UPIO_PORT		(0)			/* 8b I/O port access */
-#define UPIO_HUB6		(1)			/* Hub6 ISA card */
-#define UPIO_MEM		(2)			/* 8b MMIO access */
-#define UPIO_MEM32		(3)			/* 32b little endian */
-#define UPIO_MEM32BE		(4)			/* 32b big endian */
-#define UPIO_AU			(5)			/* Au1x00 and RT288x type IO */
-#define UPIO_TSI		(6)			/* Tsi108/109 type IO */
+#define UPIO_PORT		(SERIAL_IO_PORT)	/* 8b I/O port access */
+#define UPIO_HUB6		(SERIAL_IO_HUB6)	/* Hub6 ISA card */
+#define UPIO_MEM		(SERIAL_IO_MEM)		/* 8b MMIO access */
+#define UPIO_MEM32		(SERIAL_IO_MEM32)	/* 32b little endian */
+#define UPIO_AU			(SERIAL_IO_AU)		/* Au1x00 and RT288x type IO */
+#define UPIO_TSI		(SERIAL_IO_TSI)		/* Tsi108/109 type IO */
+#define UPIO_MEM32BE		(SERIAL_IO_MEM32BE)	/* 32b big endian */

	unsigned int		read_status_mask;	/* driver specific */
	unsigned int		ignore_status_mask;	/* driver specific */
···
 * sequence completes.  On some systems, many such sequences can execute as
 * as single programmed DMA transfer.  On all systems, these messages are
 * queued, and might complete after transactions to other devices.  Messages
- * sent to a given spi_device are alway executed in FIFO order.
+ * sent to a given spi_device are always executed in FIFO order.
 *
 * The code that submits an spi_message (and its spi_transfers)
 * to the lower layers is responsible for managing its memory.
···
 * @num_ports: the number of different ports this device will have.
 * @bulk_in_size: minimum number of bytes to allocate for bulk-in buffer
 *	(0 = end-point size)
- * @bulk_out_size: minimum number of bytes to allocate for bulk-out buffer
- *	(0 = end-point size)
+ * @bulk_out_size: bytes to allocate for bulk-out buffer (0 = end-point size)
 * @calc_num_ports: pointer to a function to determine how many ports this
 *	device has dynamically. It will be called after the probe()
 *	callback is called, but before attach()
+15-1
include/linux/usb/usbnet.h
···
	struct urb		*urb;
	struct usbnet		*dev;
	enum skb_state		state;
-	size_t			length;
+	long			length;
+	unsigned long		packets;
};

+/* Drivers that set FLAG_MULTI_PACKET must call this in their
+ * tx_fixup method before returning an skb.
+ */
+static inline void
+usbnet_set_skb_tx_stats(struct sk_buff *skb,
+			unsigned long packets, long bytes_delta)
+{
+	struct skb_data *entry = (struct skb_data *) skb->cb;
+
+	entry->packets = packets;
+	entry->length = bytes_delta;
+}

extern int usbnet_open(struct net_device *net);
extern int usbnet_stop(struct net_device *net);
+1
include/linux/vmalloc.h
···
#define VM_VPAGES		0x00000010	/* buffer for pages was vmalloc'ed */
#define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
#define VM_NO_GUARD		0x00000040	/* don't add guard page */
+#define VM_KASAN		0x00000080	/* has allocated kasan shadow memory */
/* bits [20..32] reserved for arch specific ioremap internals */

/*
+2-1
include/linux/workqueue.h
···
	/* data contains off-queue information when !WORK_STRUCT_PWQ */
	WORK_OFFQ_FLAG_BASE	= WORK_STRUCT_COLOR_SHIFT,

-	WORK_OFFQ_CANCELING	= (1 << WORK_OFFQ_FLAG_BASE),
+	__WORK_OFFQ_CANCELING	= WORK_OFFQ_FLAG_BASE,
+	WORK_OFFQ_CANCELING	= (1 << __WORK_OFFQ_CANCELING),

	/*
	 * When a work item is off queue, its high bits point to the last
+3
include/linux/writeback.h
···
extern unsigned long vm_dirty_bytes;
extern unsigned int dirty_writeback_interval;
extern unsigned int dirty_expire_interval;
+extern unsigned int dirtytime_expire_interval;
extern int vm_highmem_is_dirtyable;
extern int block_dump;
extern int laptop_mode;
···
extern int dirty_bytes_handler(struct ctl_table *table, int write,
			       void __user *buffer, size_t *lenp,
			       loff_t *ppos);
+int dirtytime_interval_handler(struct ctl_table *table, int write,
+			       void __user *buffer, size_t *lenp, loff_t *ppos);

struct ctl_table;
int dirty_writeback_centisecs_handler(struct ctl_table *, int,
···
				 const struct nft_data *data,
				 enum nft_data_types type);

+
+/**
+ *	struct nft_userdata - user defined data associated with an object
+ *
+ *	@len: length of the data
+ *	@data: content
+ *
+ *	The presence of user data is indicated in an object specific fashion,
+ *	so a length of zero can't occur and the value "len" indicates data
+ *	of length len + 1.
+ */
+struct nft_userdata {
+	u8			len;
+	unsigned char		data[0];
+};
+
/**
 *	struct nft_set_elem - generic representation of set elements
 *
···
 *	@handle: rule handle
 *	@genmask: generation mask
 *	@dlen: length of expression data
- *	@ulen: length of user data (used for comments)
+ *	@udata: user data is appended to the rule
 *	@data: expression data
 */
struct nft_rule {
···
	u64				handle:42,
					genmask:2,
					dlen:12,
-					ulen:8;
+					udata:1;
	unsigned char			data[]
		__attribute__((aligned(__alignof__(struct nft_expr))));
};
···
	return (struct nft_expr *)&rule->data[rule->dlen];
}

-static inline void *nft_userdata(const struct nft_rule *rule)
+static inline struct nft_userdata *nft_userdata(const struct nft_rule *rule)
{
	return (void *)&rule->data[rule->dlen];
}
···
 */
#define MT_TOOL_FINGER		0
#define MT_TOOL_PEN		1
-#define MT_TOOL_MAX		1
+#define MT_TOOL_PALM		2
+#define MT_TOOL_MAX		2

/*
 * Values describing the status of a force-feedback effect
+1-1
include/uapi/linux/nfsd/export.h
···
 * exported filesystem.
 */
#define	NFSEXP_V4ROOT		0x10000
-#define NFSEXP_NOPNFS		0x20000
+#define NFSEXP_PNFS		0x20000

/* All flags that we claim to support.  (Note we don't support NOACL.) */
#define NFSEXP_ALLFLAGS		0x3FE7F
···
	__u32 size_max;
	/* The maximum number of segments (if VIRTIO_BLK_F_SEG_MAX) */
	__u32 seg_max;
-	/* geometry the device (if VIRTIO_BLK_F_GEOMETRY) */
+	/* geometry of the device (if VIRTIO_BLK_F_GEOMETRY) */
	struct virtio_blk_geometry {
		__u16 cylinders;
		__u8 heads;
···
#define VIRTIO_BLK_T_BARRIER	0x80000000
#endif /* !VIRTIO_BLK_NO_LEGACY */

-/* This is the first element of the read scatter-gather list. */
+/*
+ * This comes first in the read scatter-gather list.
+ * For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated,
+ * this is the first element of the read scatter-gather list.
+ */
struct virtio_blk_outhdr {
	/* VIRTIO_BLK_T* */
	__virtio32 type;
+10-2
include/uapi/linux/virtio_scsi.h
···

#include <linux/virtio_types.h>

-#define VIRTIO_SCSI_CDB_SIZE   32
-#define VIRTIO_SCSI_SENSE_SIZE 96
+/* Default values of the CDB and sense data size configuration fields */
+#define VIRTIO_SCSI_CDB_DEFAULT_SIZE   32
+#define VIRTIO_SCSI_SENSE_DEFAULT_SIZE 96
+
+#ifndef VIRTIO_SCSI_CDB_SIZE
+#define VIRTIO_SCSI_CDB_SIZE VIRTIO_SCSI_CDB_DEFAULT_SIZE
+#endif
+#ifndef VIRTIO_SCSI_SENSE_SIZE
+#define VIRTIO_SCSI_SENSE_SIZE VIRTIO_SCSI_SENSE_DEFAULT_SIZE
+#endif

/* SCSI command request, followed by data-out */
struct virtio_scsi_cmd_req {
···

	rcu_read_lock();
	cpuset_for_each_descendant_pre(cp, pos_css, root_cs) {
-		if (cp == root_cs)
-			continue;
-
		/* skip the whole subtree if @cp doesn't have any CPU */
		if (cpumask_empty(cp->cpus_allowed)) {
			pos_css = css_rightmost_descendant(pos_css);
···
		 * If it becomes empty, inherit the effective mask of the
		 * parent, which is guaranteed to have some CPUs.
		 */
-		if (cpumask_empty(new_cpus))
+		if (cgroup_on_dfl(cp->css.cgroup) && cpumask_empty(new_cpus))
			cpumask_copy(new_cpus, parent->effective_cpus);

		/* Skip the whole subtree if the cpumask remains the same. */
···
		 * If it becomes empty, inherit the effective mask of the
		 * parent, which is guaranteed to have some MEMs.
		 */
-		if (nodes_empty(*new_mems))
+		if (cgroup_on_dfl(cp->css.cgroup) && nodes_empty(*new_mems))
			*new_mems = parent->effective_mems;

		/* Skip the whole subtree if the nodemask remains the same. */
···

	spin_lock_irq(&callback_lock);
	cs->mems_allowed = parent->mems_allowed;
+	cs->effective_mems = parent->mems_allowed;
	cpumask_copy(cs->cpus_allowed, parent->cpus_allowed);
+	cpumask_copy(cs->effective_cpus, parent->cpus_allowed);
	spin_unlock_irq(&callback_lock);
out_unlock:
	mutex_unlock(&cpuset_mutex);
+11-1
kernel/events/core.c
···
	ctx = perf_event_ctx_lock_nested(event, SINGLE_DEPTH_NESTING);
	WARN_ON_ONCE(ctx->parent_ctx);
	perf_remove_from_context(event, true);
-	mutex_unlock(&ctx->mutex);
+	perf_event_ctx_unlock(event, ctx);

	_free_event(event);
}
···
{
	struct perf_event *event = container_of(entry,
			struct perf_event, pending);
+	int rctx;
+
+	rctx = perf_swevent_get_recursion_context();
+	/*
+	 * If we 'fail' here, that's OK, it means recursion is already disabled
+	 * and we won't recurse 'further'.
+	 */

	if (event->pending_disable) {
		event->pending_disable = 0;
···
		event->pending_wakeup = 0;
		perf_event_wakeup(event);
	}
+
+	if (rctx >= 0)
+		perf_swevent_put_recursion_context(rctx);
}

/*
+6-1
kernel/irq/manage.c
···
	 * otherwise we'll have trouble later trying to figure out
	 * which interrupt is which (messes up the interrupt freeing
	 * logic etc).
+	 *
+	 * Also IRQF_COND_SUSPEND only makes sense for shared interrupts and
+	 * it cannot be set along with IRQF_NO_SUSPEND.
	 */
-	if ((irqflags & IRQF_SHARED) && !dev_id)
+	if (((irqflags & IRQF_SHARED) && !dev_id) ||
+	    (!(irqflags & IRQF_SHARED) && (irqflags & IRQF_COND_SUSPEND)) ||
+	    ((irqflags & IRQF_NO_SUSPEND) && (irqflags & IRQF_COND_SUSPEND)))
		return -EINVAL;

	desc = irq_to_desc(irq);
+6-1
kernel/irq/pm.c
···

	if (action->flags & IRQF_NO_SUSPEND)
		desc->no_suspend_depth++;
+	else if (action->flags & IRQF_COND_SUSPEND)
+		desc->cond_suspend_depth++;

	WARN_ON_ONCE(desc->no_suspend_depth &&
-		     desc->no_suspend_depth != desc->nr_actions);
+		     (desc->no_suspend_depth +
+		      desc->cond_suspend_depth) != desc->nr_actions);
}

/*
···

	if (action->flags & IRQF_NO_SUSPEND)
		desc->no_suspend_depth--;
+	else if (action->flags & IRQF_COND_SUSPEND)
+		desc->cond_suspend_depth--;
}

static bool suspend_device_irq(struct irq_desc *desc, int irq)
+28-5
kernel/livepatch/core.c
···
/* sets obj->mod if object is not vmlinux and module is found */
static void klp_find_object_module(struct klp_object *obj)
{
+	struct module *mod;
+
	if (!klp_is_module(obj))
		return;

	mutex_lock(&module_mutex);
	/*
-	 * We don't need to take a reference on the module here because we have
-	 * the klp_mutex, which is also taken by the module notifier. This
-	 * prevents any module from unloading until we release the klp_mutex.
+	 * We do not want to block removal of patched modules and therefore
+	 * we do not take a reference here. The patches are removed by
+	 * a going module handler instead.
	 */
-	obj->mod = find_module(obj->name);
+	mod = find_module(obj->name);
+	/*
+	 * Do not mess work of the module coming and going notifiers.
+	 * Note that the patch might still be needed before the going handler
+	 * is called. Module functions can be called even in the GOING state
+	 * until mod->exit() finishes. This is especially important for
+	 * patches that modify semantic of the functions.
+	 */
+	if (mod && mod->klp_alive)
+		obj->mod = mod;
+
	mutex_unlock(&module_mutex);
}
···
	/* first, check if it's an exported symbol */
	preempt_disable();
	sym = find_symbol(name, NULL, NULL, true, true);
-	preempt_enable();
	if (sym) {
		*addr = sym->value;
+		preempt_enable();
		return 0;
	}
+	preempt_enable();

	/* otherwise check if it's in another .o within the patch module */
	return klp_find_object_symbol(pmod->name, name, addr);
···
		return -EINVAL;

	obj->state = KLP_DISABLED;
+	obj->mod = NULL;

	klp_find_object_module(obj);
···
		return 0;

	mutex_lock(&klp_mutex);
+
+	/*
+	 * Each module has to know that the notifier has been called.
+	 * We never know what module will get patched by a new patch.
+	 */
+	if (action == MODULE_STATE_COMING)
+		mod->klp_alive = true;
+	else /* MODULE_STATE_GOING */
+		mod->klp_alive = false;

	list_for_each_entry(patch, &klp_patches, list) {
		for (obj = patch->objs; obj->funcs; obj++) {
+55-26
kernel/locking/lockdep.c
···
 	if (!new_class->name)
 		return 0;

-	list_for_each_entry(class, &all_lock_classes, lock_entry) {
+	list_for_each_entry_rcu(class, &all_lock_classes, lock_entry) {
 		if (new_class->key - new_class->subclass == class->key)
 			return class->name_version;
 		if (class->name && !strcmp(class->name, new_class->name))
···
 	hash_head = classhashentry(key);

 	/*
-	 * We can walk the hash lockfree, because the hash only
-	 * grows, and we are careful when adding entries to the end:
+	 * We do an RCU walk of the hash, see lockdep_free_key_range().
 	 */
-	list_for_each_entry(class, hash_head, hash_entry) {
+	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+		return NULL;
+
+	list_for_each_entry_rcu(class, hash_head, hash_entry) {
 		if (class->key == key) {
 			/*
 			 * Huh! same key, different name? Did someone trample
···
 	struct lockdep_subclass_key *key;
 	struct list_head *hash_head;
 	struct lock_class *class;
-	unsigned long flags;
+
+	DEBUG_LOCKS_WARN_ON(!irqs_disabled());

 	class = look_up_lock_class(lock, subclass);
 	if (likely(class))
···
 	key = lock->key->subkeys + subclass;
 	hash_head = classhashentry(key);

-	raw_local_irq_save(flags);
 	if (!graph_lock()) {
-		raw_local_irq_restore(flags);
 		return NULL;
 	}
 	/*
 	 * We have to do the hash-walk again, to avoid races
 	 * with another CPU:
 	 */
-	list_for_each_entry(class, hash_head, hash_entry)
+	list_for_each_entry_rcu(class, hash_head, hash_entry) {
 		if (class->key == key)
 			goto out_unlock_set;
+	}
+
 	/*
 	 * Allocate a new key from the static array, and add it to
 	 * the hash:
 	 */
 	if (nr_lock_classes >= MAX_LOCKDEP_KEYS) {
 		if (!debug_locks_off_graph_unlock()) {
-			raw_local_irq_restore(flags);
 			return NULL;
 		}
-		raw_local_irq_restore(flags);

 		print_lockdep_off("BUG: MAX_LOCKDEP_KEYS too low!");
 		dump_stack();
···

 	if (verbose(class)) {
 		graph_unlock();
-		raw_local_irq_restore(flags);

 		printk("\nnew class %p: %s", class->key, class->name);
 		if (class->name_version > 1)
···
 		printk("\n");
 		dump_stack();

-		raw_local_irq_save(flags);
 		if (!graph_lock()) {
-			raw_local_irq_restore(flags);
 			return NULL;
 		}
 	}
 out_unlock_set:
 	graph_unlock();
-	raw_local_irq_restore(flags);

 out_set_class_cache:
 	if (!subclass || force)
···
 	entry->distance = distance;
 	entry->trace = *trace;
 	/*
-	 * Since we never remove from the dependency list, the list can
-	 * be walked lockless by other CPUs, it's only allocation
-	 * that must be protected by the spinlock. But this also means
-	 * we must make new entries visible only once writes to the
-	 * entry become visible - hence the RCU op:
+	 * Both allocation and removal are done under the graph lock; but
+	 * iteration is under RCU-sched; see look_up_lock_class() and
+	 * lockdep_free_key_range().
 	 */
 	list_add_tail_rcu(&entry->entry, head);
···
 	else
 		head = &lock->class->locks_before;

-	list_for_each_entry(entry, head, entry) {
+	DEBUG_LOCKS_WARN_ON(!irqs_disabled());
+
+	list_for_each_entry_rcu(entry, head, entry) {
 		if (!lock_accessed(entry)) {
 			unsigned int cq_depth;
 			mark_lock_accessed(entry, lock);
···
 	 * We can walk it lock-free, because entries only get added
 	 * to the hash:
 	 */
-	list_for_each_entry(chain, hash_head, entry) {
+	list_for_each_entry_rcu(chain, hash_head, entry) {
 		if (chain->chain_key == chain_key) {
 cache_hit:
 			debug_atomic_inc(chain_lookup_hits);
···
 	if (unlikely(!debug_locks))
 		return;

-	if (subclass)
+	if (subclass) {
+		unsigned long flags;
+
+		if (DEBUG_LOCKS_WARN_ON(current->lockdep_recursion))
+			return;
+
+		raw_local_irq_save(flags);
+		current->lockdep_recursion = 1;
 		register_lock_class(lock, subclass, 1);
+		current->lockdep_recursion = 0;
+		raw_local_irq_restore(flags);
+	}
 }
 EXPORT_SYMBOL_GPL(lockdep_init_map);
···
 	return addr >= start && addr < start + size;
 }

+/*
+ * Used in module.c to remove lock classes from memory that is going to be
+ * freed; and possibly re-used by other modules.
+ *
+ * We will have had one sync_sched() before getting here, so we're guaranteed
+ * nobody will look up these exact classes -- they're properly dead but still
+ * allocated.
+ */
 void lockdep_free_key_range(void *start, unsigned long size)
 {
-	struct lock_class *class, *next;
+	struct lock_class *class;
 	struct list_head *head;
 	unsigned long flags;
 	int i;
···
 		head = classhash_table + i;
 		if (list_empty(head))
 			continue;
-		list_for_each_entry_safe(class, next, head, hash_entry) {
+		list_for_each_entry_rcu(class, head, hash_entry) {
 			if (within(class->key, start, size))
 				zap_class(class);
 			else if (within(class->name, start, size))
···
 	if (locked)
 		graph_unlock();
 	raw_local_irq_restore(flags);
+
+	/*
+	 * Wait for any possible iterators from look_up_lock_class() to pass
+	 * before continuing to free the memory they refer to.
+	 *
+	 * sync_sched() is sufficient because the read-side is IRQ disable.
+	 */
+	synchronize_sched();
+
+	/*
+	 * XXX at this point we could return the resources to the pool;
+	 * instead we leak them. We would need to change to bitmap allocators
+	 * instead of the linear allocators we have now.
+	 */
 }

 void lockdep_reset_lock(struct lockdep_map *lock)
 {
-	struct lock_class *class, *next;
+	struct lock_class *class;
 	struct list_head *head;
 	unsigned long flags;
 	int i, j;
···
 		head = classhash_table + i;
 		if (list_empty(head))
 			continue;
-		list_for_each_entry_safe(class, next, head, hash_entry) {
+		list_for_each_entry_rcu(class, head, hash_entry) {
 			int match = 0;

 			for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
+6-6
kernel/module.c
···
 #include <linux/async.h>
 #include <linux/percpu.h>
 #include <linux/kmemleak.h>
-#include <linux/kasan.h>
 #include <linux/jump_label.h>
 #include <linux/pfn.h>
 #include <linux/bsearch.h>
···
 void __weak module_memfree(void *module_region)
 {
 	vfree(module_region);
-	kasan_module_free(module_region);
 }

 void __weak module_arch_cleanup(struct module *mod)
···
 	kfree(mod->args);
 	percpu_modfree(mod);

-	/* Free lock-classes: */
+	/* Free lock-classes; relies on the preceding sync_rcu(). */
 	lockdep_free_key_range(mod->module_core, mod->core_size);

 	/* Finally, free the core (containing the module structure) */
···
 	info->symoffs = ALIGN(mod->core_size, symsect->sh_addralign ?: 1);
 	info->stroffs = mod->core_size = info->symoffs + ndst * sizeof(Elf_Sym);
 	mod->core_size += strtab_size;
+	mod->core_size = debug_align(mod->core_size);

 	/* Put string table section at end of init part of module. */
 	strsect->sh_flags |= SHF_ALLOC;
 	strsect->sh_entsize = get_offset(mod, &mod->init_size, strsect,
 					 info->index.str) | INIT_OFFSET_MASK;
+	mod->init_size = debug_align(mod->init_size);
 	pr_debug("\t%s\n", info->secstrings + strsect->sh_name);
 }
···
 	module_bug_cleanup(mod);
 	mutex_unlock(&module_mutex);

-	/* Free lock-classes: */
-	lockdep_free_key_range(mod->module_core, mod->core_size);
-
 	/* we can't deallocate the module until we clear memory protection */
 	unset_module_init_ro_nx(mod);
 	unset_module_core_ro_nx(mod);
···
 	synchronize_rcu();
 	mutex_unlock(&module_mutex);
 free_module:
+	/* Free lock-classes; relies on the preceding sync_rcu() */
+	lockdep_free_key_range(mod->module_core, mod->core_size);
+
 	module_deallocate(mod, info);
 free_copy:
 	free_copy(info);
+1-1
kernel/printk/console_cmdline.h
···

 struct console_cmdline
 {
-	char	name[8];	/* Name of the driver       */
+	char	name[16];	/* Name of the driver       */
 	int	index;		/* Minor dev. to use        */
 	char	*options;	/* Options for the driver   */
 #ifdef CONFIG_A11Y_BRAILLE_CONSOLE
+1
kernel/printk/printk.c
···
 	for (i = 0, c = console_cmdline;
 	     i < MAX_CMDLINECONSOLES && c->name[0];
 	     i++, c++) {
+		BUILD_BUG_ON(sizeof(c->name) != sizeof(newcon->name));
 		if (strcmp(c->name, newcon->name) != 0)
 			continue;
 		if (newcon->index >= 0 &&
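The `name[16]` bump in console_cmdline.h is paired with a `BUILD_BUG_ON` so the two `name` buffers compared by `strcmp()` can never silently diverge in size again. The same compile-time guard can be sketched in userspace C11 with `static_assert`; the struct names below are hypothetical stand-ins, not the kernel's:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-ins for struct console_cmdline and struct console. */
struct cmdline_entry { char name[16]; int index; };
struct drv_console   { char name[16]; int index; };

/* Fails at compile time if the two name buffers ever diverge in size,
 * which is what the BUILD_BUG_ON added to register_console() enforces. */
static_assert(sizeof(((struct cmdline_entry *)0)->name) ==
	      sizeof(((struct drv_console *)0)->name),
	      "console name buffers must match");

/* With equal sizes, a bounded compare over the full buffer cannot
 * over-read either side. */
static int names_match(const struct cmdline_entry *c,
		       const struct drv_console *d)
{
	return strncmp(c->name, d->name, sizeof(c->name)) == 0;
}
```

The `sizeof` on a null pointer's member is never evaluated, so it is a legal compile-time size probe, mirroring how `BUILD_BUG_ON` works on constant expressions.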
···
 	/*
 	 * If there were no record hinting faults then either the task is
 	 * completely idle or all activity is areas that are not of interest
-	 * to automatic numa balancing. Scan slower
+	 * to automatic numa balancing. Related to that, if there were failed
+	 * migration then it implies we are migrating too quickly or the local
+	 * node is overloaded. In either case, scan slower
 	 */
-	if (local + shared == 0) {
+	if (local + shared == 0 || p->numa_faults_locality[2]) {
 		p->numa_scan_period = min(p->numa_scan_period_max,
 			p->numa_scan_period << 1);
···

 	if (migrated)
 		p->numa_pages_migrated += pages;
+	if (flags & TNF_MIGRATE_FAIL)
+		p->numa_faults_locality[2] += pages;

 	p->numa_faults[task_faults_idx(NUMA_MEMBUF, mem_node, priv)] += pages;
 	p->numa_faults[task_faults_idx(NUMA_CPUBUF, cpu_node, priv)] += pages;
+34-22
kernel/sched/idle.c
···
 	struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev);
 	int next_state, entered_state;
 	unsigned int broadcast;
+	bool reflect;

 	/*
 	 * Check if the idle task must be rescheduled. If it is the
···
 	 */
 	rcu_idle_enter();

+	if (cpuidle_not_available(drv, dev))
+		goto use_default;
+
 	/*
 	 * Suspend-to-idle ("freeze") is a system state in which all user space
 	 * has been frozen, all I/O devices have been suspended and the only
···
 	 * until a proper wakeup interrupt happens.
 	 */
 	if (idle_should_freeze()) {
-		cpuidle_enter_freeze();
-		local_irq_enable();
-		goto exit_idle;
-	}
-
-	/*
-	 * Ask the cpuidle framework to choose a convenient idle state.
-	 * Fall back to the default arch idle method on errors.
-	 */
-	next_state = cpuidle_select(drv, dev);
-	if (next_state < 0) {
-use_default:
-		/*
-		 * We can't use the cpuidle framework, let's use the default
-		 * idle routine.
-		 */
-		if (current_clr_polling_and_test())
+		entered_state = cpuidle_enter_freeze(drv, dev);
+		if (entered_state >= 0) {
 			local_irq_enable();
-		else
-			arch_cpu_idle();
+			goto exit_idle;
+		}

-		goto exit_idle;
+		reflect = false;
+		next_state = cpuidle_find_deepest_state(drv, dev);
+	} else {
+		reflect = true;
+		/*
+		 * Ask the cpuidle framework to choose a convenient idle state.
+		 */
+		next_state = cpuidle_select(drv, dev);
 	}
-
+	/* Fall back to the default arch idle method on errors. */
+	if (next_state < 0)
+		goto use_default;

 	/*
 	 * The idle task must be scheduled, it is pointless to
···
 	/*
 	 * Give the governor an opportunity to reflect on the outcome
 	 */
-	cpuidle_reflect(dev, entered_state);
+	if (reflect)
+		cpuidle_reflect(dev, entered_state);

 exit_idle:
 	__current_set_polling();
···

 	rcu_idle_exit();
 	start_critical_timings();
+	return;
+
+use_default:
+	/*
+	 * We can't use the cpuidle framework, let's use the default
+	 * idle routine.
+	 */
+	if (current_clr_polling_and_test())
+		local_irq_enable();
+	else
+		arch_cpu_idle();
+
+	goto exit_idle;
 }

 /*
···
  */
 static int bc_set_next(ktime_t expires, struct clock_event_device *bc)
 {
+	int bc_moved;
 	/*
 	 * We try to cancel the timer first. If the callback is on
 	 * flight on some other cpu then we let it handle it. If we
···
 	 * restart the timer because we are in the callback, but we
 	 * can set the expiry time and let the callback return
 	 * HRTIMER_RESTART.
+	 *
+	 * Since we are in the idle loop at this point and because
+	 * hrtimer_{start/cancel} functions call into tracing,
+	 * calls to these functions must be bound within RCU_NONIDLE.
 	 */
-	if (hrtimer_try_to_cancel(&bctimer) >= 0) {
-		hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED);
+	RCU_NONIDLE(bc_moved = (hrtimer_try_to_cancel(&bctimer) >= 0) ?
+		!hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED) :
+			0);
+	if (bc_moved) {
 		/* Bind the "device" to the cpu */
 		bc->bound_on = smp_processor_id();
 	} else if (bc->bound_on == smp_processor_id()) {
+30-10
kernel/trace/ftrace.c
···

 static struct pid * const ftrace_swapper_pid = &init_struct_pid;

+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+static int ftrace_graph_active;
+#else
+# define ftrace_graph_active 0
+#endif
+
 #ifdef CONFIG_DYNAMIC_FTRACE

 static struct ftrace_ops *removed_ops;
···
 		if (!ftrace_rec_count(rec))
 			rec->flags = 0;
 		else
-			/* Just disable the record (keep REGS state) */
-			rec->flags &= ~FTRACE_FL_ENABLED;
+			/*
+			 * Just disable the record, but keep the ops TRAMP
+			 * and REGS states. The _EN flags must be disabled though.
+			 */
+			rec->flags &= ~(FTRACE_FL_ENABLED | FTRACE_FL_TRAMP_EN |
+					FTRACE_FL_REGS_EN);
 	}

 	return FTRACE_UPDATE_MAKE_NOP;
···

 static void ftrace_startup_sysctl(void)
 {
+	int command;
+
 	if (unlikely(ftrace_disabled))
 		return;

 	/* Force update next time */
 	saved_ftrace_func = NULL;
 	/* ftrace_start_up is true if we want ftrace running */
-	if (ftrace_start_up)
-		ftrace_run_update_code(FTRACE_UPDATE_CALLS);
+	if (ftrace_start_up) {
+		command = FTRACE_UPDATE_CALLS;
+		if (ftrace_graph_active)
+			command |= FTRACE_START_FUNC_RET;
+		ftrace_startup_enable(command);
+	}
 }

 static void ftrace_shutdown_sysctl(void)
 {
+	int command;
+
 	if (unlikely(ftrace_disabled))
 		return;

 	/* ftrace_start_up is true if ftrace is running */
-	if (ftrace_start_up)
-		ftrace_run_update_code(FTRACE_DISABLE_CALLS);
+	if (ftrace_start_up) {
+		command = FTRACE_DISABLE_CALLS;
+		if (ftrace_graph_active)
+			command |= FTRACE_STOP_FUNC_RET;
+		ftrace_run_update_code(command);
+	}
 }

 static cycle_t		ftrace_update_time;
···

 	if (ftrace_enabled) {

-		ftrace_startup_sysctl();
-
 		/* we are starting ftrace again */
 		if (ftrace_ops_list != &ftrace_list_end)
 			update_ftrace_function();
+
+		ftrace_startup_sysctl();

 	} else {
 		/* stopping ftrace calls (just send to ftrace_stub) */
···
 #endif
 	ASSIGN_OPS_HASH(graph_ops, &global_ops.local_hash)
 };
-
-static int ftrace_graph_active;

 int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace)
 {
+52-4
kernel/workqueue.c
···
 }
 EXPORT_SYMBOL_GPL(flush_work);

+struct cwt_wait {
+	wait_queue_t		wait;
+	struct work_struct	*work;
+};
+
+static int cwt_wakefn(wait_queue_t *wait, unsigned mode, int sync, void *key)
+{
+	struct cwt_wait *cwait = container_of(wait, struct cwt_wait, wait);
+
+	if (cwait->work != key)
+		return 0;
+	return autoremove_wake_function(wait, mode, sync, key);
+}
+
 static bool __cancel_work_timer(struct work_struct *work, bool is_dwork)
 {
+	static DECLARE_WAIT_QUEUE_HEAD(cancel_waitq);
 	unsigned long flags;
 	int ret;

 	do {
 		ret = try_to_grab_pending(work, is_dwork, &flags);
 		/*
-		 * If someone else is canceling, wait for the same event it
-		 * would be waiting for before retrying.
+		 * If someone else is already canceling, wait for it to
+		 * finish.  flush_work() doesn't work for PREEMPT_NONE
+		 * because we may get scheduled between @work's completion
+		 * and the other canceling task resuming and clearing
+		 * CANCELING - flush_work() will return false immediately
+		 * as @work is no longer busy, try_to_grab_pending() will
+		 * return -ENOENT as @work is still being canceled and the
+		 * other canceling task won't be able to clear CANCELING as
+		 * we're hogging the CPU.
+		 *
+		 * Let's wait for completion using a waitqueue.  As this
+		 * may lead to the thundering herd problem, use a custom
+		 * wake function which matches @work along with exclusive
+		 * wait and wakeup.
 		 */
-		if (unlikely(ret == -ENOENT))
-			flush_work(work);
+		if (unlikely(ret == -ENOENT)) {
+			struct cwt_wait cwait;
+
+			init_wait(&cwait.wait);
+			cwait.wait.func = cwt_wakefn;
+			cwait.work = work;
+
+			prepare_to_wait_exclusive(&cancel_waitq, &cwait.wait,
+						  TASK_UNINTERRUPTIBLE);
+			if (work_is_canceling(work))
+				schedule();
+			finish_wait(&cancel_waitq, &cwait.wait);
+		}
 	} while (unlikely(ret < 0));

 	/* tell other tasks trying to grab @work to back off */
···
 	flush_work(work);
 	clear_work_data(work);
+
+	/*
+	 * Paired with prepare_to_wait() above so that either
+	 * waitqueue_active() is visible here or !work_is_canceling() is
+	 * visible there.
+	 */
+	smp_mb();
+	if (waitqueue_active(&cancel_waitq))
+		__wake_up(&cancel_waitq, TASK_NORMAL, 1, work);
+
 	return ret;
 }
···
 	return 0;
 }
 EXPORT_SYMBOL_GPL(lcm);
+
+unsigned long lcm_not_zero(unsigned long a, unsigned long b)
+{
+	unsigned long l = lcm(a, b);
+
+	if (l)
+		return l;
+
+	return (b ? : a);
+}
+EXPORT_SYMBOL_GPL(lcm_not_zero);
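`lcm()` returns 0 when either input is 0, which callers doing alignment math cannot use as a divisor; `lcm_not_zero()` falls back to whichever argument is non-zero. A userspace sketch of the pair (the `_ul` names are made up to avoid clashing with the kernel symbols):

```c
#include <assert.h>

/* Userspace sketch of lib/lcm.c's helpers. */
static unsigned long gcd_ul(unsigned long a, unsigned long b)
{
	while (b) {
		unsigned long t = a % b;
		a = b;
		b = t;
	}
	return a;
}

static unsigned long lcm_ul(unsigned long a, unsigned long b)
{
	if (a && b)
		return (a / gcd_ul(a, b)) * b;
	return 0;
}

/* Like lcm_ul(), but never returns 0 when one input is non-zero:
 * fall back to the non-zero argument (0 only if both are 0). */
static unsigned long lcm_not_zero_ul(unsigned long a, unsigned long b)
{
	unsigned long l = lcm_ul(a, b);

	if (l)
		return l;
	return b ? b : a;	/* the kernel spells this (b ? : a) */
}
```

Dividing by the gcd before multiplying also keeps intermediate values small, reducing (though not eliminating) overflow risk.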
···
 	return (1UL << (align_order - cma->order_per_bit)) - 1;
 }

+/*
+ * Find a PFN aligned to the specified order and return an offset represented in
+ * order_per_bits.
+ */
 static unsigned long cma_bitmap_aligned_offset(struct cma *cma, int align_order)
 {
-	unsigned int alignment;
-
 	if (align_order <= cma->order_per_bit)
 		return 0;
-	alignment = 1UL << (align_order - cma->order_per_bit);
-	return ALIGN(cma->base_pfn, alignment) -
-		(cma->base_pfn >> cma->order_per_bit);
+
+	return (ALIGN(cma->base_pfn, (1UL << align_order))
+		- cma->base_pfn) >> cma->order_per_bit;
}

 static unsigned long cma_bitmap_maxno(struct cma *cma)
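The old code mixed units: it subtracted a bitmap position (`base_pfn >> order_per_bit`) from a PFN (the `ALIGN` result). The fix computes the whole distance in PFNs first, then converts once to bitmap slots. A userspace sketch of the corrected arithmetic, with `base_pfn` and the orders as plain parameters instead of `struct cma` fields:

```c
#include <assert.h>

/* Round x up to a multiple of power-of-two a (like the kernel's ALIGN). */
#define ALIGN_UP(x, a) (((x) + ((a) - 1)) & ~((unsigned long)(a) - 1))

static unsigned long aligned_offset(unsigned long base_pfn,
				    int order_per_bit, int align_order)
{
	if (align_order <= order_per_bit)
		return 0;

	/* PFN distance from base to the next align_order boundary,
	 * converted into bitmap slots of 2^order_per_bit pages each. */
	return (ALIGN_UP(base_pfn, 1UL << align_order) - base_pfn)
		>> order_per_bit;
}
```

For example, a region starting at PFN 1 with 1-page bitmap granularity needs 3 slots of padding to reach the next order-2 (4-page) boundary.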
+15-10
mm/huge_memory.c
···
 	int target_nid, last_cpupid = -1;
 	bool page_locked;
 	bool migrated = false;
+	bool was_writable;
 	int flags = 0;

 	/* A PROT_NONE fault should not end up here */
···
 		flags |= TNF_FAULT_LOCAL;
 	}

-	/*
-	 * Avoid grouping on DSO/COW pages in specific and RO pages
-	 * in general, RO pages shouldn't hurt as much anyway since
-	 * they can be in shared cache state.
-	 */
-	if (!pmd_write(pmd))
+	/* See similar comment in do_numa_page for explanation */
+	if (!(vma->vm_flags & VM_WRITE))
 		flags |= TNF_NO_GROUP;

 	/*
···
 	if (migrated) {
 		flags |= TNF_MIGRATED;
 		page_nid = target_nid;
-	}
+	} else
+		flags |= TNF_MIGRATE_FAIL;

 	goto out;
 clear_pmdnuma:
 	BUG_ON(!PageLocked(page));
+	was_writable = pmd_write(pmd);
 	pmd = pmd_modify(pmd, vma->vm_page_prot);
+	pmd = pmd_mkyoung(pmd);
+	if (was_writable)
+		pmd = pmd_mkwrite(pmd);
 	set_pmd_at(mm, haddr, pmdp, pmd);
 	update_mmu_cache_pmd(vma, addr, pmdp);
 	unlock_page(page);
···

 	if (__pmd_trans_huge_lock(pmd, vma, &ptl) == 1) {
 		pmd_t entry;
+		bool preserve_write = prot_numa && pmd_write(*pmd);
+		ret = 1;

 		/*
 		 * Avoid trapping faults against the zero page. The read-only
···
 		 */
 		if (prot_numa && is_huge_zero_pmd(*pmd)) {
 			spin_unlock(ptl);
-			return 0;
+			return ret;
 		}

 		if (!prot_numa || !pmd_protnone(*pmd)) {
-			ret = 1;
 			entry = pmdp_get_and_clear_notify(mm, addr, pmd);
 			entry = pmd_modify(entry, newprot);
+			if (preserve_write)
+				entry = pmd_mkwrite(entry);
 			ret = HPAGE_PMD_NR;
 			set_pmd_at(mm, addr, pmd, entry);
-			BUG_ON(pmd_write(entry));
+			BUG_ON(!preserve_write && pmd_write(entry));
 		}
 		spin_unlock(ptl);
 	}
+3-1
mm/hugetlb.c
···
 	__SetPageHead(page);
 	__ClearPageReserved(page);
 	for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
-		__SetPageTail(p);
 		/*
 		 * For gigantic hugepages allocated through bootmem at
 		 * boot, it's safer to be consistent with the not-gigantic
···
 		__ClearPageReserved(p);
 		set_page_count(p, 0);
 		p->first_page = page;
+		/* Make sure p->first_page is always valid for PageTail() */
+		smp_wmb();
+		__SetPageTail(p);
 	}
 }
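The hunk moves `__SetPageTail(p)` after the `p->first_page = page` store with an `smp_wmb()` in between, so a racing `PageTail()` reader can never observe the tail flag while `first_page` is still stale. The same publish pattern can be sketched in userspace C11, where a release store stands in for the `smp_wmb()` plus flag store and an acquire load stands in for the reader side; the struct and field names here are made up, not the kernel's `struct page`:

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

/* Initialize first_page, then set the tail flag with release semantics,
 * so a reader that observes the flag (acquire) also observes first_page. */
struct tail_page {
	void *first_page;	/* written before the flag */
	atomic_int tail;	/* PageTail() stand-in */
};

static struct tail_page tpage;
static int head_page;		/* dummy head page */
static void *observed;

static void *reader_fn(void *arg)
{
	(void)arg;
	/* Spin until the tail flag becomes visible... */
	while (!atomic_load_explicit(&tpage.tail, memory_order_acquire))
		;
	/* ...then first_page is guaranteed to be valid. */
	observed = tpage.first_page;
	return NULL;
}

static void *writer_fn(void *arg)
{
	(void)arg;
	tpage.first_page = &head_page;
	/* smp_wmb() + __SetPageTail() collapse into one release store. */
	atomic_store_explicit(&tpage.tail, 1, memory_order_release);
	return NULL;
}

static void *run_publish_race(void)
{
	pthread_t r, w;

	pthread_create(&r, NULL, reader_fn, NULL);
	pthread_create(&w, NULL, writer_fn, NULL);
	pthread_join(r, NULL);
	pthread_join(w, NULL);
	return observed;
}
```

With the stores in the old order (flag first, pointer second) the reader could legally see `tail == 1` with `first_page` still NULL, which is exactly the race the kernel fix closes.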
···
 	 * on for the root memcg is enough.
 	 */
 	if (cgroup_on_dfl(root_css->cgroup))
-		mem_cgroup_from_css(root_css)->use_hierarchy = true;
+		root_mem_cgroup->use_hierarchy = true;
+	else
+		root_mem_cgroup->use_hierarchy = false;
 }

 static u64 memory_current_read(struct cgroup_subsys_state *css,
+12-5
mm/memory.c
···
 	int last_cpupid;
 	int target_nid;
 	bool migrated = false;
+	bool was_writable = pte_write(pte);
 	int flags = 0;

 	/* A PROT_NONE fault should not end up here */
···
 	/* Make it present again */
 	pte = pte_modify(pte, vma->vm_page_prot);
 	pte = pte_mkyoung(pte);
+	if (was_writable)
+		pte = pte_mkwrite(pte);
 	set_pte_at(mm, addr, ptep, pte);
 	update_mmu_cache(vma, addr, ptep);
···
 	}

 	/*
-	 * Avoid grouping on DSO/COW pages in specific and RO pages
-	 * in general, RO pages shouldn't hurt as much anyway since
-	 * they can be in shared cache state.
+	 * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
+	 * much anyway since they can be in shared cache state. This misses
+	 * the case where a mapping is writable but the process never writes
+	 * to it but pte_write gets cleared during protection updates and
+	 * pte_dirty has unpredictable behaviour between PTE scan updates,
+	 * background writeback, dirty balancing and application behaviour.
 	 */
-	if (!pte_write(pte))
+	if (!(vma->vm_flags & VM_WRITE))
 		flags |= TNF_NO_GROUP;

 	/*
···
 	if (migrated) {
 		page_nid = target_nid;
 		flags |= TNF_MIGRATED;
-	}
+	} else
+		flags |= TNF_MIGRATE_FAIL;

 out:
 	if (page_nid != -1)
+4-9
mm/memory_hotplug.c
···
 		return NULL;

 		arch_refresh_nodedata(nid, pgdat);
+	} else {
+		/* Reset the nr_zones and classzone_idx to 0 before reuse */
+		pgdat->nr_zones = 0;
+		pgdat->classzone_idx = 0;
 	}

 	/* we can use NODE_DATA(nid) from here */
···
 		if (is_vmalloc_addr(zone->wait_table))
 			vfree(zone->wait_table);
 	}
-
-	/*
-	 * Since there is no way to guarentee the address of pgdat/zone is not
-	 * on stack of any kernel threads or used by other kernel objects
-	 * without reference counting or other symchronizing method, do not
-	 * reset node_data and free pgdat here. Just reset it to 0 and reuse
-	 * the memory when the node is online again.
-	 */
-	memset(pgdat, 0, sizeof(*pgdat));
 }
 EXPORT_SYMBOL(try_offline_node);
+2-2
mm/mlock.c
···

 int can_do_mlock(void)
 {
-	if (capable(CAP_IPC_LOCK))
-		return 1;
 	if (rlimit(RLIMIT_MEMLOCK) != 0)
+		return 1;
+	if (capable(CAP_IPC_LOCK))
 		return 1;
 	return 0;
 }
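The reorder looks cosmetic but is not: `capable()` has side effects (it sets PF_SUPERPRIV and can emit an audit record), so the cheap rlimit check should short-circuit first. The result is unchanged; only the order of evaluation matters. A sketch with stub predicates (the stubs and counter are made up, standing in for `rlimit(RLIMIT_MEMLOCK)` and `capable(CAP_IPC_LOCK)`):

```c
#include <assert.h>

/* Stubbed state; the counter records when the capability check runs. */
static unsigned long memlock_rlimit;
static int has_cap_ipc_lock;
static int capable_calls;

static int stub_capable(void)
{
	capable_calls++;	/* in the kernel: audit + PF_SUPERPRIV */
	return has_cap_ipc_lock;
}

static int can_do_mlock_sketch(void)
{
	if (memlock_rlimit != 0)	/* cheap check first */
		return 1;
	if (stub_capable())		/* only consulted when rlimit is 0 */
		return 1;
	return 0;
}
```

With a non-zero rlimit the capability check is never reached, so unprivileged `mlock()` within the limit no longer trips the privileged-use accounting.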
···
 		vma = vma->vm_next;

 		err = walk_page_test(start, next, walk);
-		if (err > 0)
+		if (err > 0) {
+			/*
+			 * positive return values are purely for
+			 * controlling the pagewalk, so should never
+			 * be passed to the callers.
+			 */
+			err = 0;
 			continue;
+		}
 		if (err < 0)
 			break;
 	}
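Positive returns from `walk_page_test()` mean "skip this range"; they are a control signal internal to the walk and must be clamped to 0 so they cannot leak out as the function's return value when the skip happens on the last iteration. A minimal sketch of that contract (the walker, `test_fn` callback, and range count are invented for illustration):

```c
#include <assert.h>

/* A walker whose per-range test may return a positive "skip" code.
 * The loop normalizes it so callers only ever see 0 or a negative errno. */
static int walk_ranges(int n, int (*test_fn)(int idx), int *visited)
{
	int err = 0;

	for (int i = 0; i < n; i++) {
		err = test_fn(i);
		if (err > 0) {
			/* positive values only steer the walk; never
			 * report them to the caller */
			err = 0;
			continue;
		}
		if (err < 0)
			break;
		visited[i] = 1;
	}
	return err;
}

/* Sample callbacks for the two interesting behaviours. */
static int skip_second(int idx) { return idx == 1 ? 1 : 0; }
static int fail_third(int idx)  { return idx == 2 ? -5 : 0; }
```

Without the `err = 0` reset, a walk whose final range was skipped would report the skip code as if it were an error.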
+7
mm/rmap.c
···
 	return 0;

 enomem_failure:
+	/*
+	 * dst->anon_vma is dropped here otherwise its degree can be incorrectly
+	 * decremented in unlink_anon_vmas().
+	 * We can safely do this because callers of anon_vma_clone() don't care
+	 * about dst->anon_vma if anon_vma_clone() failed.
+	 */
+	dst->anon_vma = NULL;
 	unlink_anon_vmas(dst);
 	return -ENOMEM;
 }
+4-2
mm/slub.c
···
 	do {
 		tid = this_cpu_read(s->cpu_slab->tid);
 		c = raw_cpu_ptr(s->cpu_slab);
-	} while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid));
+	} while (IS_ENABLED(CONFIG_PREEMPT) &&
+		 unlikely(tid != READ_ONCE(c->tid)));

 	/*
 	 * Irqless object alloc/free algorithm used here depends on sequence
···
 	do {
 		tid = this_cpu_read(s->cpu_slab->tid);
 		c = raw_cpu_ptr(s->cpu_slab);
-	} while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid));
+	} while (IS_ENABLED(CONFIG_PREEMPT) &&
+		 unlikely(tid != READ_ONCE(c->tid)));

 	/* Same with comment on barrier() in slab_alloc_node() */
 	barrier();
···
  */
 int peernet2id(struct net *net, struct net *peer)
 {
-	int id = __peernet2id(net, peer, true);
+	bool alloc = atomic_read(&peer->count) == 0 ? false : true;
+	int id;

+	id = __peernet2id(net, peer, alloc);
 	return id >= 0 ? id : NETNSA_NSID_NOT_ASSIGNED;
 }
 EXPORT_SYMBOL(peernet2id);
···
 {
 	struct sk_buff_head *q = &sk->sk_error_queue;
 	struct sk_buff *skb, *skb_next;
+	unsigned long flags;
 	int err = 0;

-	spin_lock_bh(&q->lock);
+	spin_lock_irqsave(&q->lock, flags);
 	skb = __skb_dequeue(q);
 	if (skb && (skb_next = skb_peek(q)))
 		err = SKB_EXT_ERR(skb_next)->ee.ee_errno;
-	spin_unlock_bh(&q->lock);
+	spin_unlock_irqrestore(&q->lock, flags);

 	sk->sk_err = err;
 	if (err)
···
 			   struct sock *sk, int tstype)
 {
 	struct sk_buff *skb;
-	bool tsonly = sk->sk_tsflags & SOF_TIMESTAMPING_OPT_TSONLY;
+	bool tsonly;

-	if (!sk || !skb_may_tx_timestamp(sk, tsonly))
+	if (!sk)
+		return;
+
+	tsonly = sk->sk_tsflags & SOF_TIMESTAMPING_OPT_TSONLY;
+	if (!skb_may_tx_timestamp(sk, tsonly))
 		return;

 	if (tsonly)
···
 	skb->ignore_df = 0;
 	skb_dst_drop(skb);
 	skb->mark = 0;
-	skb->sender_cpu = 0;
+	skb_sender_cpu_clear(skb);
 	skb_init_secmark(skb);
 	secpath_reset(skb);
 	nf_reset(skb);
+23
net/core/sock.c
···
 	sock_reset_flag(sk, bit);
 }

+bool sk_mc_loop(struct sock *sk)
+{
+	if (dev_recursion_level())
+		return false;
+	if (!sk)
+		return true;
+	switch (sk->sk_family) {
+	case AF_INET:
+		return inet_sk(sk)->mc_loop;
+#if IS_ENABLED(CONFIG_IPV6)
+	case AF_INET6:
+		return inet6_sk(sk)->mc_loop;
+#endif
+	}
+	WARN_ON(1);
+	return true;
+}
+EXPORT_SYMBOL(sk_mc_loop);
+
 /*
  *	This is meant for all protocols to use and covers goings on
  *	at the socket level. Everything here is generic.
···
 }
 EXPORT_SYMBOL(sock_rfree);

+/*
+ * Buffer destructor for skbs that are not used directly in read or write
+ * path, e.g. for error handler skbs. Automatically called from kfree_skb.
+ */
 void sock_efree(struct sk_buff *skb)
 {
 	sock_put(skb->sk);
+6-4
net/core/sysctl_net_core.c
···
 static int zero = 0;
 static int one = 1;
 static int ushort_max = USHRT_MAX;
+static int min_sndbuf = SOCK_MIN_SNDBUF;
+static int min_rcvbuf = SOCK_MIN_RCVBUF;

 static int net_msg_warn;	/* Unused, but still a sysctl */
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= &min_sndbuf,
 	},
 	{
 		.procname	= "rmem_max",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= &min_rcvbuf,
 	},
 	{
 		.procname	= "wmem_default",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= &min_sndbuf,
 	},
 	{
 		.procname	= "rmem_default",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= &min_rcvbuf,
 	},
 	{
 		.procname	= "dev_weight",
···
 #ifdef CONFIG_OF
 static int dsa_of_setup_routing_table(struct dsa_platform_data *pd,
 					struct dsa_chip_data *cd,
-					int chip_index,
+					int chip_index, int port_index,
 					struct device_node *link)
 {
-	int ret;
 	const __be32 *reg;
-	int link_port_addr;
 	int link_sw_addr;
 	struct device_node *parent_sw;
 	int len;
···
 	if (!reg || (len != sizeof(*reg) * 2))
 		return -EINVAL;

+	/*
+	 * Get the destination switch number from the second field of its 'reg'
+	 * property, i.e. for "reg = <0x19 1>" sw_addr is '1'.
+	 */
 	link_sw_addr = be32_to_cpup(reg + 1);

 	if (link_sw_addr >= pd->nr_chips)
···
 		memset(cd->rtable, -1, pd->nr_chips * sizeof(s8));
 	}

-	reg = of_get_property(link, "reg", NULL);
-	if (!reg) {
-		ret = -EINVAL;
-		goto out;
-	}
-
-	link_port_addr = be32_to_cpup(reg);
-
-	cd->rtable[link_sw_addr] = link_port_addr;
+	cd->rtable[link_sw_addr] = port_index;

 	return 0;
-out:
-	kfree(cd->rtable);
-	return ret;
 }

 static void dsa_of_free_platform_data(struct dsa_platform_data *pd)
···
 			if (!strcmp(port_name, "dsa") && link &&
 					pd->nr_chips > 1) {
 				ret = dsa_of_setup_routing_table(pd, cd,
-						chip_index, link);
+						chip_index, port_index, link);
 				if (ret)
 					goto out_free_chip;
 			}
···
 	kfree_skb(skb);
 }

-static bool ipv4_pktinfo_prepare_errqueue(const struct sock *sk,
-					  const struct sk_buff *skb,
-					  int ee_origin)
+/* IPv4 supports cmsg on all imcp errors and some timestamps
+ *
+ * Timestamp code paths do not initialize the fields expected by cmsg:
+ * the PKTINFO fields in skb->cb[]. Fill those in here.
+ */
+static bool ipv4_datagram_support_cmsg(const struct sock *sk,
+				       struct sk_buff *skb,
+				       int ee_origin)
 {
-	struct in_pktinfo *info = PKTINFO_SKB_CB(skb);
+	struct in_pktinfo *info;

-	if ((ee_origin != SO_EE_ORIGIN_TIMESTAMPING) ||
-	    (!(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_CMSG)) ||
+	if (ee_origin == SO_EE_ORIGIN_ICMP)
+		return true;
+
+	if (ee_origin == SO_EE_ORIGIN_LOCAL)
+		return false;
+
+	/* Support IP_PKTINFO on tstamp packets if requested, to correlate
+	 * timestamp with egress dev. Not possible for packets without dev
+	 * or without payload (SOF_TIMESTAMPING_OPT_TSONLY).
+	 */
+	if ((!(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_CMSG)) ||
 	    (!skb->dev))
 		return false;

+	info = PKTINFO_SKB_CB(skb);
 	info->ipi_spec_dst.s_addr = ip_hdr(skb)->saddr;
 	info->ipi_ifindex = skb->dev->ifindex;
 	return true;
···

 	serr = SKB_EXT_ERR(skb);

-	if (sin && skb->len) {
+	if (sin && serr->port) {
 		sin->sin_family = AF_INET;
 		sin->sin_addr.s_addr = *(__be32 *)(skb_network_header(skb) +
 						   serr->addr_offset);
···
 	sin = &errhdr.offender;
 	memset(sin, 0, sizeof(*sin));

-	if (skb->len &&
-	    (serr->ee.ee_origin == SO_EE_ORIGIN_ICMP ||
-	     ipv4_pktinfo_prepare_errqueue(sk, skb, serr->ee.ee_origin))) {
+	if (ipv4_datagram_support_cmsg(sk, skb, serr->ee.ee_origin)) {
 		sin->sin_family = AF_INET;
 		sin->sin_addr.s_addr = ip_hdr(skb)->saddr;
 		if (inet_sk(sk)->cmsg_flags)
···259259 kgid_t low, high;260260 int ret = 0;261261262262+ if (sk->sk_family == AF_INET6)263263+ sk->sk_ipv6only = 1;264264+262265 inet_get_ping_group_range_net(net, &low, &high);263266 if (gid_lte(low, group) && gid_lte(group, high))264267 return 0;···308305 if (addr_len < sizeof(*addr))309306 return -EINVAL;310307308308+ if (addr->sin_family != AF_INET &&309309+ !(addr->sin_family == AF_UNSPEC &&310310+ addr->sin_addr.s_addr == htonl(INADDR_ANY)))311311+ return -EAFNOSUPPORT;312312+311313 pr_debug("ping_check_bind_addr(sk=%p,addr=%pI4,port=%d)\n",312314 sk, &addr->sin_addr.s_addr, ntohs(addr->sin_port));313315···338330 return -EINVAL;339331340332 if (addr->sin6_family != AF_INET6)341341- return -EINVAL;333333+ return -EAFNOSUPPORT;342334343335 pr_debug("ping_check_bind_addr(sk=%p,addr=%pI6c,port=%d)\n",344336 sk, addr->sin6_addr.s6_addr, ntohs(addr->sin6_port));···724716 if (msg->msg_namelen < sizeof(*usin))725717 return -EINVAL;726718 if (usin->sin_family != AF_INET)727727- return -EINVAL;719719+ return -EAFNOSUPPORT;728720 daddr = usin->sin_addr.s_addr;729721 /* no remote port */730722 } else {
+3-7
net/ipv4/tcp.c
···835835 int large_allowed)836836{837837 struct tcp_sock *tp = tcp_sk(sk);838838- u32 new_size_goal, size_goal, hlen;838838+ u32 new_size_goal, size_goal;839839840840 if (!large_allowed || !sk_can_gso(sk))841841 return mss_now;842842843843- /* Maybe we should/could use sk->sk_prot->max_header here ? */844844- hlen = inet_csk(sk)->icsk_af_ops->net_header_len +845845- inet_csk(sk)->icsk_ext_hdr_len +846846- tp->tcp_header_len;847847-848848- new_size_goal = sk->sk_gso_max_size - 1 - hlen;843843+ /* Note : tcp_tso_autosize() will eventually split this later */844844+ new_size_goal = sk->sk_gso_max_size - 1 - MAX_TCP_HEADER;849845 new_size_goal = tcp_bound_to_half_wnd(tp, new_size_goal);850846851847 /* We try hard to avoid divides here */
+6
net/ipv4/tcp_cong.c
···378378 */379379void tcp_cong_avoid_ai(struct tcp_sock *tp, u32 w, u32 acked)380380{381381+ /* If credits accumulated at a higher w, apply them gently now. */382382+ if (tp->snd_cwnd_cnt >= w) {383383+ tp->snd_cwnd_cnt = 0;384384+ tp->snd_cwnd++;385385+ }386386+381387 tp->snd_cwnd_cnt += acked;382388 if (tp->snd_cwnd_cnt >= w) {383389 u32 delta = tp->snd_cwnd_cnt / w;
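The effect of the hunk above can be seen in a stand-alone model of tcp_cong_avoid_ai() (a sketch that mirrors only the lines shown, not the whole kernel function): credits left over from when w was smaller are released as a single segment before the new ACKs are counted, instead of being converted into a burst.

```c
#include <assert.h>

/* Hypothetical model of the patched additive-increase helper: grow cwnd
 * by one segment per w ACKed segments; stale credits accumulated at a
 * smaller w are applied gently (one segment) rather than all at once.
 */
struct tcp_model { unsigned snd_cwnd, snd_cwnd_cnt; };

static void cong_avoid_ai(struct tcp_model *tp, unsigned w, unsigned acked)
{
	if (tp->snd_cwnd_cnt >= w) {	/* the added hunk */
		tp->snd_cwnd_cnt = 0;
		tp->snd_cwnd++;
	}
	tp->snd_cwnd_cnt += acked;
	if (tp->snd_cwnd_cnt >= w) {
		unsigned delta = tp->snd_cwnd_cnt / w;

		tp->snd_cwnd_cnt -= delta * w;
		tp->snd_cwnd += delta;
	}
}
```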
+4-2
net/ipv4/tcp_cubic.c
···306306 }307307 }308308309309- if (ca->cnt == 0) /* cannot be zero */310310- ca->cnt = 1;309309+ /* The maximum rate of cwnd increase CUBIC allows is 1 packet per310310+ * 2 packets ACKed, meaning cwnd grows at 1.5x per RTT.311311+ */312312+ ca->cnt = max(ca->cnt, 2U);311313}312314313315static void bictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
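The comment's 1.5x-per-RTT ceiling can be checked with a tiny simulation (an illustrative model, not CUBIC itself): with ca->cnt clamped to at least 2, one RTT's worth of ACKs can grow cwnd by at most cwnd/2, whereas cnt == 1 would double it.

```c
#include <assert.h>

/* Model one RTT of per-ACK increase: cwnd grows by one segment for
 * every 'cnt' ACKs. The clamp cnt = max(cnt, 2) caps growth at 1.5x
 * per RTT, as the new comment in bictcp_update() states.
 */
static unsigned grow_one_rtt(unsigned cwnd, unsigned cnt)
{
	unsigned acks = cwnd, cwnd_cnt = 0;

	if (cnt < 2)	/* the fix: ca->cnt = max(ca->cnt, 2U) */
		cnt = 2;
	while (acks--) {
		if (++cwnd_cnt >= cnt) {
			cwnd_cnt = 0;
			cwnd++;
		}
	}
	return cwnd;
}
```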
+5-4
net/ipv4/tcp_input.c
···31053105 if (!first_ackt.v64)31063106 first_ackt = last_ackt;3107310731083108- if (!(sacked & TCPCB_SACKED_ACKED))31083108+ if (!(sacked & TCPCB_SACKED_ACKED)) {31093109 reord = min(pkts_acked, reord);31103110- if (!after(scb->end_seq, tp->high_seq))31113111- flag |= FLAG_ORIG_SACK_ACKED;31103110+ if (!after(scb->end_seq, tp->high_seq))31113111+ flag |= FLAG_ORIG_SACK_ACKED;31123112+ }31123113 }3113311431143115 if (sacked & TCPCB_SACKED_ACKED)···47714770 return false;4772477147734772 /* If we filled the congestion window, do not expand. */47744774- if (tp->packets_out >= tp->snd_cwnd)47734773+ if (tcp_packets_in_flight(tp) >= tp->snd_cwnd)47754774 return false;4776477547774776 return true;
···27732773 } else {27742774 /* Socket is locked, keep trying until memory is available. */27752775 for (;;) {27762776- skb = alloc_skb_fclone(MAX_TCP_HEADER,27772777- sk->sk_allocation);27762776+ skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation);27782777 if (skb)27792778 break;27802779 yield();27812780 }27822782-27832783- /* Reserve space for headers and prepare control bits. */27842784- skb_reserve(skb, MAX_TCP_HEADER);27852781 /* FIN eats a sequence byte, write_seq advanced by tcp_queue_skb(). */27862782 tcp_init_nondata_skb(skb, tp->write_seq,27872783 TCPHDR_ACK | TCPHDR_FIN);
···325325 kfree_skb(skb);326326}327327328328-static void ip6_datagram_prepare_pktinfo_errqueue(struct sk_buff *skb)328328+/* IPv6 supports cmsg on all origins aside from SO_EE_ORIGIN_LOCAL.329329+ *330330+ * At one point, excluding local errors was a quick test to identify icmp/icmp6331331+ * errors. This is no longer true, but the test remained, so the v6 stack,332332+ * unlike v4, also honors cmsg requests on all wifi and timestamp errors.333333+ *334334+ * Timestamp code paths do not initialize the fields expected by cmsg:335335+ * the PKTINFO fields in skb->cb[]. Fill those in here.336336+ */337337+static bool ip6_datagram_support_cmsg(struct sk_buff *skb,338338+ struct sock_exterr_skb *serr)329339{330330- int ifindex = skb->dev ? skb->dev->ifindex : -1;340340+ if (serr->ee.ee_origin == SO_EE_ORIGIN_ICMP ||341341+ serr->ee.ee_origin == SO_EE_ORIGIN_ICMP6)342342+ return true;343343+344344+ if (serr->ee.ee_origin == SO_EE_ORIGIN_LOCAL)345345+ return false;346346+347347+ if (!skb->dev)348348+ return false;331349332350 if (skb->protocol == htons(ETH_P_IPV6))333333- IP6CB(skb)->iif = ifindex;351351+ IP6CB(skb)->iif = skb->dev->ifindex;334352 else335335- PKTINFO_SKB_CB(skb)->ipi_ifindex = ifindex;353353+ PKTINFO_SKB_CB(skb)->ipi_ifindex = skb->dev->ifindex;354354+355355+ return true;336356}337357338358/*···389369390370 serr = SKB_EXT_ERR(skb);391371392392- if (sin && skb->len) {372372+ if (sin && serr->port) {393373 const unsigned char *nh = skb_network_header(skb);394374 sin->sin6_family = AF_INET6;395375 sin->sin6_flowinfo = 0;···414394 memcpy(&errhdr.ee, &serr->ee, sizeof(struct sock_extended_err));415395 sin = &errhdr.offender;416396 memset(sin, 0, sizeof(*sin));417417- if (serr->ee.ee_origin != SO_EE_ORIGIN_LOCAL && skb->len) {397397+398398+ if (ip6_datagram_support_cmsg(skb, serr)) {418399 sin->sin6_family = AF_INET6;419419- if (np->rxopt.all) {420420- if (serr->ee.ee_origin != SO_EE_ORIGIN_ICMP &&421421- serr->ee.ee_origin != SO_EE_ORIGIN_ICMP6)422422- ip6_datagram_prepare_pktinfo_errqueue(skb);400400+ if (np->rxopt.all)423401 ip6_datagram_recv_common_ctl(sk, msg, skb);424424- }425402 if (skb->protocol == htons(ETH_P_IPV6)) {426403 sin->sin6_addr = ipv6_hdr(skb)->saddr;427404 if (np->rxopt.all)
···12181218 if (rt)12191219 rt6_set_expires(rt, jiffies + (HZ * lifetime));12201220 if (ra_msg->icmph.icmp6_hop_limit) {12211221- in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit;12211221+ /* Only set hop_limit on the interface if it is higher than12221222+ * the current hop_limit.12231223+ */12241224+ if (in6_dev->cnf.hop_limit < ra_msg->icmph.icmp6_hop_limit) {12251225+ in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit;12261226+ } else {12271227+ ND_PRINTK(2, warn, "RA: Got route advertisement with lower hop_limit than current\n");12281228+ }12221229 if (rt)12231230 dst_metric_set(&rt->dst, RTAX_HOPLIMIT,12241231 ra_msg->icmph.icmp6_hop_limit);
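The new rule reduces to "only raise, never lower": a sketch of the decision, with plain ints standing in for the RA and interface fields.

```c
#include <assert.h>

/* Model of the hop-limit handling above: an advertised hop limit of 0
 * is ignored, and a non-zero one is applied only if it raises the
 * interface's current value.
 */
static int apply_ra_hop_limit(int cur, int ra_hop_limit)
{
	if (ra_hop_limit && cur < ra_hop_limit)
		return ra_hop_limit;
	return cur;	/* keep current (and warn, in the kernel) */
}
```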
···102102103103 if (msg->msg_name) {104104 DECLARE_SOCKADDR(struct sockaddr_in6 *, u, msg->msg_name);105105- if (msg->msg_namelen < sizeof(struct sockaddr_in6) ||106106- u->sin6_family != AF_INET6) {105105+ if (msg->msg_namelen < sizeof(*u))107106 return -EINVAL;107107+ if (u->sin6_family != AF_INET6) {108108+ return -EAFNOSUPPORT;108109 }109110 if (sk->sk_bound_dev_if &&110111 sk->sk_bound_dev_if != u->sin6_scope_id) {
+12-1
net/ipv6/tcp_ipv6.c
···14111411 TCP_SKB_CB(skb)->sacked = 0;14121412}1413141314141414+static void tcp_v6_restore_cb(struct sk_buff *skb)14151415+{14161416+ /* We need to move header back to the beginning if xfrm6_policy_check()14171417+ * and tcp_v6_fill_cb() are going to be called again.14181418+ */14191419+ memmove(IP6CB(skb), &TCP_SKB_CB(skb)->header.h6,14201420+ sizeof(struct inet6_skb_parm));14211421+}14221422+14141423static int tcp_v6_rcv(struct sk_buff *skb)14151424{14161425 const struct tcphdr *th;···15521543 inet_twsk_deschedule(tw, &tcp_death_row);15531544 inet_twsk_put(tw);15541545 sk = sk2;15461546+ tcp_v6_restore_cb(skb);15551547 goto process;15561548 }15571549 /* Fall through to ACK */···15611551 tcp_v6_timewait_ack(sk, skb);15621552 break;15631553 case TCP_TW_RST:15541554+ tcp_v6_restore_cb(skb);15641555 goto no_tcp_socket;15651556 case TCP_TW_SUCCESS:15661557 ;···15961585 skb->sk = sk;15971586 skb->destructor = sock_edemux;15981587 if (sk->sk_state != TCP_TIME_WAIT) {15991599- struct dst_entry *dst = sk->sk_rx_dst;15881588+ struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst);1600158916011590 if (dst)16021591 dst = dst_check(dst, inet6_sk(sk)->rx_dst_cookie);
+3-5
net/ipv6/udp_offload.c
···112112 fptr = (struct frag_hdr *)(skb_network_header(skb) + unfrag_ip6hlen);113113 fptr->nexthdr = nexthdr;114114 fptr->reserved = 0;115115- if (skb_shinfo(skb)->ip6_frag_id)116116- fptr->identification = skb_shinfo(skb)->ip6_frag_id;117117- else118118- ipv6_select_ident(fptr,119119- (struct rt6_info *)skb_dst(skb));115115+ if (!skb_shinfo(skb)->ip6_frag_id)116116+ ipv6_proxy_select_ident(skb);117117+ fptr->identification = skb_shinfo(skb)->ip6_frag_id;120118121119 /* Fragment the skb. ipv6 header and the remaining fields of the122120 * fragment header are updated in ipv6_gso_segment()
···798798 orig_jiffies = jiffies;799799800800 /* Set poll time to 200 ms */801801- poll_time = IRDA_MIN(timeout, msecs_to_jiffies(200));801801+ poll_time = msecs_to_jiffies(200);802802+ if (timeout)803803+ poll_time = min_t(unsigned long, timeout, poll_time);802804803805 spin_lock_irqsave(&self->spinlock, flags);804806 while (self->tx_skb && self->tx_skb->len) {···813811 break;814812 }815813 spin_unlock_irqrestore(&self->spinlock, flags);816816- current->state = TASK_RUNNING;814814+ __set_current_state(TASK_RUNNING);817815}818816819817/*
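The poll-time change is easiest to see in isolation (a sketch with plain numbers standing in for jiffies): the old `IRDA_MIN(timeout, 200ms)` collapsed to 0 when the caller passed no timeout, while the new code treats 0 as "no bound" and polls at the full 200 ms.

```c
#include <assert.h>

/* Model of the fixed computation: cap a non-zero timeout at the 200 ms
 * poll interval; a timeout of 0 means "no timeout" and uses the cap.
 */
static unsigned long wait_until_sent_poll_time(unsigned long timeout,
					       unsigned long cap)
{
	unsigned long poll_time = cap;

	if (timeout)
		poll_time = timeout < cap ? timeout : cap;
	return poll_time;
}
```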
+2-2
net/irda/irnet/irnet_ppp.c
···305305306306 /* Put ourselves on the wait queue to be woken up */307307 add_wait_queue(&irnet_events.rwait, &wait);308308- current->state = TASK_INTERRUPTIBLE;308308+ set_current_state(TASK_INTERRUPTIBLE);309309 for(;;)310310 {311311 /* If there is unread events */···321321 /* Yield and wait to be woken up */322322 schedule();323323 }324324- current->state = TASK_RUNNING;324324+ __set_current_state(TASK_RUNNING);325325 remove_wait_queue(&irnet_events.rwait, &wait);326326327327 /* Did we got it ? */
+1-3
net/iucv/af_iucv.c
···11141114 noblock, &err);11151115 else11161116 skb = sock_alloc_send_skb(sk, len, noblock, &err);11171117- if (!skb) {11181118- err = -ENOMEM;11171117+ if (!skb)11191118 goto out;11201120- }11211119 if (iucv->transport == AF_IUCV_TRANS_HIPER)11221120 skb_reserve(skb, sizeof(struct af_iucv_trans_hdr) + ETH_HLEN);11231121 if (memcpy_from_msg(skb_put(skb, len), msg, len)) {
···4949 container_of(h, struct tid_ampdu_rx, rcu_head);5050 int i;51515252- del_timer_sync(&tid_rx->reorder_timer);5353-5452 for (i = 0; i < tid_rx->buf_size; i++)5553 __skb_queue_purge(&tid_rx->reorder_buf[i]);5654 kfree(tid_rx->reorder_buf);···9092 tid, WLAN_BACK_RECIPIENT, reason);91939294 del_timer_sync(&tid_rx->session_timer);9595+9696+ /* make sure ieee80211_sta_reorder_release() doesn't re-arm the timer */9797+ spin_lock_bh(&tid_rx->reorder_lock);9898+ tid_rx->removed = true;9999+ spin_unlock_bh(&tid_rx->reorder_lock);100100+ del_timer_sync(&tid_rx->reorder_timer);9310194102 call_rcu(&tid_rx->rcu_head, ieee80211_free_tid_rx);95103}
+5
net/mac80211/chan.c
···15081508 if (ieee80211_chanctx_refcount(local, ctx) == 0)15091509 ieee80211_free_chanctx(local, ctx);1510151015111511+ sdata->radar_required = false;15121512+15111513 /* Unreserving may ready an in-place reservation. */15121514 if (use_reserved_switch)15131515 ieee80211_vif_use_reserved_switch(local);···15681566 ieee80211_recalc_smps_chanctx(local, ctx);15691567 ieee80211_recalc_radar_chanctx(local, ctx);15701568 out:15691569+ if (ret)15701570+ sdata->radar_required = false;15711571+15711572 mutex_unlock(&local->chanctx_mtx);15721573 return ret;15731574}
+18-6
net/mac80211/ieee80211_i.h
···5858#define IEEE80211_UNSET_POWER_LEVEL INT_MIN59596060/*6161- * Some APs experience problems when working with U-APSD. Decrease the6262- * probability of that happening by using legacy mode for all ACs but VO.6363- * The AP that caused us trouble was a Cisco 4410N. It ignores our6464- * setting, and always treats non-VO ACs as legacy.6161+ * Some APs experience problems when working with U-APSD. Decreasing the6262+ * probability of that happening by using legacy mode for all ACs but VO isn't6363+ * enough.6464+ *6565+ * Cisco 4410N originally forced us to enable VO by default only because it6666+ * treated non-VO ACs as legacy.6767+ *6868+ * However some APs (notably Netgear R7000) silently reclassify packets to6969+ * different ACs. Since u-APSD ACs require trigger frames for frame retrieval7070+ * clients would never see some frames (e.g. ARP responses) or would fetch them7171+ * accidentally after a long time.7272+ *7373+ * It makes little sense to enable u-APSD queues by default because it needs7474+ * userspace applications to be aware of it to actually take advantage of the7575+ * possible additional powersavings. Implicitly depending on driver autotrigger7676+ * frame support doesn't make much sense.6577 */6666-#define IEEE80211_DEFAULT_UAPSD_QUEUES \6767- IEEE80211_WMM_IE_STA_QOSINFO_AC_VO7878+#define IEEE80211_DEFAULT_UAPSD_QUEUES 068796980#define IEEE80211_DEFAULT_MAX_SP_LEN \7081 IEEE80211_WMM_IE_STA_QOSINFO_SP_ALL···464453 unsigned int flags;465454466455 bool csa_waiting_bcn;456456+ bool csa_ignored_same_chan;467457468458 bool beacon_crc_valid;469459 u32 beacon_crc;
···175175 * @reorder_lock: serializes access to reorder buffer, see below.176176 * @auto_seq: used for offloaded BA sessions to automatically pick head_seq_num177177 * and ssn.178178+ * @removed: this session is removed (but might have been found due to RCU)178179 *179180 * This structure's lifetime is managed by RCU, assignments to180181 * the array holding it must hold the aggregation mutex.···200199 u16 timeout;201200 u8 dialog_token;202201 bool auto_seq;202202+ bool removed;203203};204204205205/**
···34023402 if (udest.af == 0)34033403 udest.af = svc->af;3404340434053405- if (udest.af != svc->af) {34053405+ if (udest.af != svc->af && cmd != IPVS_CMD_DEL_DEST) {34063406 /* The synchronization protocol is incompatible34073407 * with mixed family services34083408 */
···7777 if (!tb[NFCTH_TUPLE_L3PROTONUM] || !tb[NFCTH_TUPLE_L4PROTONUM])7878 return -EINVAL;79798080+ /* Not all fields are initialized so first zero the tuple */8181+ memset(tuple, 0, sizeof(struct nf_conntrack_tuple));8282+8083 tuple->src.l3num = ntohs(nla_get_be16(tb[NFCTH_TUPLE_L3PROTONUM]));8184 tuple->dst.protonum = nla_get_u8(tb[NFCTH_TUPLE_L4PROTONUM]);8285
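The pattern the memset enforces generalizes: when only some fields of an object are filled from parsed attributes, zero the whole object first so the rest is not uninitialized memory. A minimal illustration (struct and field names are stand-ins, not the kernel's nf_conntrack_tuple):

```c
#include <assert.h>
#include <string.h>

/* Illustrative tuple with deliberately unparsed trailing bytes. */
struct tuple_model {
	unsigned short l3num;
	unsigned char protonum;
	unsigned char unparsed[13];
};

static void parse_tuple(struct tuple_model *t, unsigned short l3,
			unsigned char proto)
{
	/* Not all fields are initialized, so first zero the tuple. */
	memset(t, 0, sizeof(*t));
	t->l3num = l3;
	t->protonum = proto;
}
```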
···103103 * @ops: Class structure.104104 * @percpu_stats: Points to per-CPU statistics used and maintained by vport105105 * @err_stats: Points to error statistics used and maintained by vport106106+ * @detach_list: list used for detaching vport in net-exit call.106107 */107108struct vport {108109 struct rcu_head rcu;···118117 struct pcpu_sw_netstats __percpu *percpu_stats;119118120119 struct vport_err_stats err_stats;120120+ struct list_head detach_list;121121};122122123123/**
+28-14
net/packet/af_packet.c
···698698699699 if (pkc->last_kactive_blk_num == pkc->kactive_blk_num) {700700 if (!frozen) {701701+ if (!BLOCK_NUM_PKTS(pbd)) {702702+ /* An empty block. Just refresh the timer. */703703+ goto refresh_timer;704704+ }701705 prb_retire_current_block(pkc, po, TP_STATUS_BLK_TMO);702706 if (!prb_dispatch_next_block(pkc, po))703707 goto refresh_timer;···802798 h1->ts_last_pkt.ts_sec = last_pkt->tp_sec;803799 h1->ts_last_pkt.ts_nsec = last_pkt->tp_nsec;804800 } else {805805- /* Ok, we tmo'd - so get the current time */801801+ /* Ok, we tmo'd - so get the current time.802802+ *803803+ * It shouldn't really happen as we don't close empty804804+ * blocks. See prb_retire_rx_blk_timer_expired().805805+ */806806 struct timespec ts;807807 getnstimeofday(&ts);808808 h1->ts_last_pkt.ts_sec = ts.tv_sec;···13571349 return 0;13581350 }1359135113521352+ if (fanout_has_flag(f, PACKET_FANOUT_FLAG_DEFRAG)) {13531353+ skb = ip_check_defrag(skb, IP_DEFRAG_AF_PACKET);13541354+ if (!skb)13551355+ return 0;13561356+ }13601357 switch (f->type) {13611358 case PACKET_FANOUT_HASH:13621359 default:13631363- if (fanout_has_flag(f, PACKET_FANOUT_FLAG_DEFRAG)) {13641364- skb = ip_check_defrag(skb, IP_DEFRAG_AF_PACKET);13651365- if (!skb)13661366- return 0;13671367- }13681360 idx = fanout_demux_hash(f, skb, num);13691361 break;13701362 case PACKET_FANOUT_LB:···31233115 return 0;31243116}3125311731263126-static void packet_dev_mclist(struct net_device *dev, struct packet_mclist *i, int what)31183118+static void packet_dev_mclist_delete(struct net_device *dev,31193119+ struct packet_mclist **mlp)31273120{31283128- for ( ; i; i = i->next) {31293129- if (i->ifindex == dev->ifindex)31303130- packet_dev_mc(dev, i, what);31213121+ struct packet_mclist *ml;31223122+31233123+ while ((ml = *mlp) != NULL) {31243124+ if (ml->ifindex == dev->ifindex) {31253125+ packet_dev_mc(dev, ml, -1);31263126+ *mlp = ml->next;31273127+ kfree(ml);31283128+ } else31293129+ mlp = &ml->next;31313130 }31323131}31333132···32113196 packet_dev_mc(dev, ml, -1);32123197 kfree(ml);32133198 }32143214- rtnl_unlock();32153215- return 0;31993199+ break;32163200 }32173201 }32183202 rtnl_unlock();32193219- return -EADDRNOTAVAIL;32033203+ return 0;32203204}3221320532223206static void packet_flush_mclist(struct sock *sk)···35653551 switch (msg) {35663552 case NETDEV_UNREGISTER:35673553 if (po->mclist)35683568- packet_dev_mclist(dev, po->mclist, -1);35543554+ packet_dev_mclist_delete(dev, &po->mclist);35693555 /* fallthrough */3570355635713557 case NETDEV_DOWN:
+22-18
net/rds/iw_rdma.c
···8888 int *unpinned);8989static void rds_iw_destroy_fastreg(struct rds_iw_mr_pool *pool, struct rds_iw_mr *ibmr);90909191-static int rds_iw_get_device(struct rds_sock *rs, struct rds_iw_device **rds_iwdev, struct rdma_cm_id **cm_id)9191+static int rds_iw_get_device(struct sockaddr_in *src, struct sockaddr_in *dst,9292+ struct rds_iw_device **rds_iwdev,9393+ struct rdma_cm_id **cm_id)9294{9395 struct rds_iw_device *iwdev;9496 struct rds_iw_cm_id *i_cm_id;···114112 src_addr->sin_port,115113 dst_addr->sin_addr.s_addr,116114 dst_addr->sin_port,117117- rs->rs_bound_addr,118118- rs->rs_bound_port,119119- rs->rs_conn_addr,120120- rs->rs_conn_port);115115+ src->sin_addr.s_addr,116116+ src->sin_port,117117+ dst->sin_addr.s_addr,118118+ dst->sin_port);121119#ifdef WORKING_TUPLE_DETECTION122122- if (src_addr->sin_addr.s_addr == rs->rs_bound_addr &&123123- src_addr->sin_port == rs->rs_bound_port &&124124- dst_addr->sin_addr.s_addr == rs->rs_conn_addr &&125125- dst_addr->sin_port == rs->rs_conn_port) {120120+ if (src_addr->sin_addr.s_addr == src->sin_addr.s_addr &&121121+ src_addr->sin_port == src->sin_port &&122122+ dst_addr->sin_addr.s_addr == dst->sin_addr.s_addr &&123123+ dst_addr->sin_port == dst->sin_port) {126124#else127125 /* FIXME - needs to compare the local and remote128126 * ipaddr/port tuple, but the ipaddr is the only···130128 * zero'ed. It doesn't appear to be properly populated131129 * during connection setup...132130 */133133- if (src_addr->sin_addr.s_addr == rs->rs_bound_addr) {131131+ if (src_addr->sin_addr.s_addr == src->sin_addr.s_addr) {134132#endif135133 spin_unlock_irq(&iwdev->spinlock);136134 *rds_iwdev = iwdev;···182180{183181 struct sockaddr_in *src_addr, *dst_addr;184182 struct rds_iw_device *rds_iwdev_old;185185- struct rds_sock rs;186183 struct rdma_cm_id *pcm_id;187184 int rc;188185189186 src_addr = (struct sockaddr_in *)&cm_id->route.addr.src_addr;190187 dst_addr = (struct sockaddr_in *)&cm_id->route.addr.dst_addr;191188192192- rs.rs_bound_addr = src_addr->sin_addr.s_addr;193193- rs.rs_bound_port = src_addr->sin_port;194194- rs.rs_conn_addr = dst_addr->sin_addr.s_addr;195195- rs.rs_conn_port = dst_addr->sin_port;196196-197197- rc = rds_iw_get_device(&rs, &rds_iwdev_old, &pcm_id);189189+ rc = rds_iw_get_device(src_addr, dst_addr, &rds_iwdev_old, &pcm_id);198190 if (rc)199191 rds_iw_remove_cm_id(rds_iwdev, cm_id);200192···594598 struct rds_iw_device *rds_iwdev;595599 struct rds_iw_mr *ibmr = NULL;596600 struct rdma_cm_id *cm_id;601601+ struct sockaddr_in src = {602602+ .sin_addr.s_addr = rs->rs_bound_addr,603603+ .sin_port = rs->rs_bound_port,604604+ };605605+ struct sockaddr_in dst = {606606+ .sin_addr.s_addr = rs->rs_conn_addr,607607+ .sin_port = rs->rs_conn_port,608608+ };597609 int ret;598610599599- ret = rds_iw_get_device(rs, &rds_iwdev, &cm_id);611611+ ret = rds_iw_get_device(&src, &dst, &rds_iwdev, &cm_id);600612 if (ret || !cm_id) {601613 ret = -ENODEV;602614 goto out;
···8787 if (!skb) {8888 /* nothing remains on the queue */8989 if (copied &&9090- (msg->msg_flags & MSG_PEEK || timeo == 0))9090+ (flags & MSG_PEEK || timeo == 0))9191 goto out;92929393 /* wait for a message to turn up */
+28-8
net/sched/act_bpf.c
···2525 struct tcf_result *res)2626{2727 struct tcf_bpf *b = a->priv;2828- int action;2929- int filter_res;2828+ int action, filter_res;30293130 spin_lock(&b->tcf_lock);3131+3232 b->tcf_tm.lastuse = jiffies;3333 bstats_update(&b->tcf_bstats, skb);3434- action = b->tcf_action;35343635 filter_res = BPF_PROG_RUN(b->filter, skb);3737- if (filter_res == 0) {3838- /* Return code 0 from the BPF program3939- * is being interpreted as a drop here.4040- */4141- action = TC_ACT_SHOT;3636+3737+ /* A BPF program may overwrite the default action opcode.3838+ * As in cls_bpf, if filter_res == -1 we use the3939+ * default action specified from tc.4040+ *4141+ * In case a different well-known TC_ACT opcode has been4242+ * returned, it will overwrite the default one.4343+ *4444+ * For everything else that is unknown, TC_ACT_UNSPEC is4545+ * returned.4646+ */4747+ switch (filter_res) {4848+ case TC_ACT_PIPE:4949+ case TC_ACT_RECLASSIFY:5050+ case TC_ACT_OK:5151+ action = filter_res;5252+ break;5353+ case TC_ACT_SHOT:5454+ action = filter_res;4255 b->tcf_qstats.drops++;5656+ break;5757+ case TC_ACT_UNSPEC:5858+ action = b->tcf_action;5959+ break;6060+ default:6161+ action = TC_ACT_UNSPEC;6262+ break;4363 }44644565 spin_unlock(&b->tcf_lock);
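The switch introduced above is a pure mapping from BPF return code to TC action, so it can be lifted out and exercised on its own. A sketch of that mapping; the TC_ACT_* values match include/uapi/linux/pkt_cls.h, but the helper name is illustrative.

```c
#include <assert.h>

/* Action opcodes as defined in the TC UAPI. */
enum {
	TC_ACT_UNSPEC     = -1,
	TC_ACT_OK         = 0,
	TC_ACT_RECLASSIFY = 1,
	TC_ACT_SHOT       = 2,
	TC_ACT_PIPE       = 3,
};

/* Model of the new act_bpf mapping: well-known opcodes pass through,
 * -1 falls back to the default action from tc, anything else becomes
 * TC_ACT_UNSPEC.
 */
static int bpf_res_to_action(int filter_res, int default_action)
{
	switch (filter_res) {
	case TC_ACT_PIPE:
	case TC_ACT_RECLASSIFY:
	case TC_ACT_OK:
	case TC_ACT_SHOT:
		return filter_res;
	case TC_ACT_UNSPEC:
		return default_action;
	default:
		return TC_ACT_UNSPEC;
	}
}
```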
+4-1
net/sched/cls_u32.c
···7878 struct tc_u_common *tp_c;7979 int refcnt;8080 unsigned int divisor;8181- struct tc_u_knode __rcu *ht[1];8281 struct rcu_head rcu;8282+ /* The 'ht' field MUST be the last field in structure to allow for8383+ * more entries allocated at end of structure.8484+ */8585+ struct tc_u_knode __rcu *ht[1];8386};84878588struct tc_u_common {
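The comment added above deserves a demonstration of why member order matters: the hash table is allocated with extra slots past the declared end of the struct, old-style flexible array, so any field placed after `ht` would be clobbered by slots beyond index 0. A sketch with illustrative names, not the cls_u32 types themselves:

```c
#include <assert.h>
#include <stdlib.h>

struct ht_model {
	unsigned int divisor;
	void *ht[1];	/* MUST be the last member; extra slots follow */
};

/* Allocate 1 declared slot plus 'divisor' extra ones, as cls_u32 does. */
static struct ht_model *ht_alloc(unsigned int divisor)
{
	return calloc(1, sizeof(struct ht_model) + divisor * sizeof(void *));
}
```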
···1702170217031703 if (len > INT_MAX)17041704 len = INT_MAX;17051705+ if (unlikely(!access_ok(VERIFY_READ, buff, len)))17061706+ return -EFAULT;17051707 sock = sockfd_lookup_light(fd, &err, &fput_needed);17061708 if (!sock)17071709 goto out;···1762176017631761 if (size > INT_MAX)17641762 size = INT_MAX;17631763+ if (unlikely(!access_ok(VERIFY_WRITE, ubuf, size)))17641764+ return -EFAULT;17651765 sock = sockfd_lookup_light(fd, &err, &fput_needed);17661766 if (!sock)17671767 goto out;
+2
net/sunrpc/auth_gss/gss_rpc_upcall.c
···217217218218 for (i = 0; i < arg->npages && arg->pages[i]; i++)219219 __free_page(arg->pages[i]);220220+221221+ kfree(arg->pages);220222}221223222224static int gssp_alloc_receive_pages(struct gssx_arg_accept_sec_context *arg)
+2
net/sunrpc/auth_gss/svcauth_gss.c
···463463 /* number of additional gid's */464464 if (get_int(&mesg, &N))465465 goto out;466466+ if (N < 0 || N > NGROUPS_MAX)467467+ goto out;466468 status = -ENOMEM;467469 rsci.cred.cr_group_info = groups_alloc(N);468470 if (rsci.cred.cr_group_info == NULL)
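The bounds check added above is the standard guard for a count read from userspace before it is used as an allocation size. A stand-alone model (the kernel folds this into the parsing path; NGROUPS_MAX is 65536 on Linux):

```c
#include <assert.h>
#include <stdbool.h>

#define NGROUPS_MAX 65536

/* A supplementary-group count from an rsc downcall must be
 * non-negative and at most NGROUPS_MAX before groups_alloc().
 */
static bool group_count_valid(int n)
{
	return n >= 0 && n <= NGROUPS_MAX;
}
```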
···303303 struct super_block *pipefs_sb;304304 int err;305305306306- err = rpc_clnt_debugfs_register(clnt);307307- if (err)308308- return err;306306+ rpc_clnt_debugfs_register(clnt);309307310308 pipefs_sb = rpc_get_sb_net(net);311309 if (pipefs_sb) {
+29-23
net/sunrpc/debugfs.c
···129129 .release = tasks_release,130130};131131132132-int132132+void133133rpc_clnt_debugfs_register(struct rpc_clnt *clnt)134134{135135- int len, err;135135+ int len;136136 char name[24]; /* enough for "../../rpc_xprt/ + 8 hex digits + NULL */137137+ struct rpc_xprt *xprt;137138138139 /* Already registered? */139139- if (clnt->cl_debugfs)140140- return 0;140140+ if (clnt->cl_debugfs || !rpc_clnt_dir)141141+ return;141142142143 len = snprintf(name, sizeof(name), "%x", clnt->cl_clid);143144 if (len >= sizeof(name))144144- return -EINVAL;145145+ return;145146146147 /* make the per-client dir */147148 clnt->cl_debugfs = debugfs_create_dir(name, rpc_clnt_dir);148149 if (!clnt->cl_debugfs)149149- return -ENOMEM;150150+ return;150151151152 /* make tasks file */152152- err = -ENOMEM;153153 if (!debugfs_create_file("tasks", S_IFREG | S_IRUSR, clnt->cl_debugfs,154154 clnt, &tasks_fops))155155 goto out_err;156156157157- err = -EINVAL;158157 rcu_read_lock();158158+ xprt = rcu_dereference(clnt->cl_xprt);159159+ /* no "debugfs" dentry? Don't bother with the symlink. */160160+ if (!xprt->debugfs) {161161+ rcu_read_unlock();162162+ return;163163+ }159164 len = snprintf(name, sizeof(name), "../../rpc_xprt/%s",160160- rcu_dereference(clnt->cl_xprt)->debugfs->d_name.name);165165+ xprt->debugfs->d_name.name);161166 rcu_read_unlock();167167+162168 if (len >= sizeof(name))163169 goto out_err;164170165165- err = -ENOMEM;166171 if (!debugfs_create_symlink("xprt", clnt->cl_debugfs, name))167172 goto out_err;168173169169- return 0;174174+ return;170175out_err:171176 debugfs_remove_recursive(clnt->cl_debugfs);172177 clnt->cl_debugfs = NULL;173173- return err;174178}175179176180void···230226 .release = xprt_info_release,231227};232228233233-int229229+void234230rpc_xprt_debugfs_register(struct rpc_xprt *xprt)235231{236232 int len, id;237233 static atomic_t cur_id;238234 char name[9]; /* 8 hex digits + NULL term */239235236236+ if (!rpc_xprt_dir)237237+ return;238238+240239 id = (unsigned int)atomic_inc_return(&cur_id);241240242241 len = snprintf(name, sizeof(name), "%x", id);243242 if (len >= sizeof(name))244244- return -EINVAL;243243+ return;245244246245 /* make the per-xprt dir */247246 xprt->debugfs = debugfs_create_dir(name, rpc_xprt_dir);248247 if (!xprt->debugfs)249249- return -ENOMEM;248248+ return;250249251250 /* make info file */252251 if (!debugfs_create_file("info", S_IFREG | S_IRUSR, xprt->debugfs,253252 xprt, &xprt_info_fops)) {254253 debugfs_remove_recursive(xprt->debugfs);255254 xprt->debugfs = NULL;256256- return -ENOMEM;257255 }258258-259259- return 0;260256}261257262258void···270266sunrpc_debugfs_exit(void)271267{272268 debugfs_remove_recursive(topdir);269269+ topdir = NULL;270270+ rpc_clnt_dir = NULL;271271+ rpc_xprt_dir = NULL;273272}274273275275-int __init274274+void __init276275sunrpc_debugfs_init(void)277276{278277 topdir = debugfs_create_dir("sunrpc", NULL);279278 if (!topdir)280280- goto out;279279+ return;281280282281 rpc_clnt_dir = debugfs_create_dir("rpc_clnt", topdir);283282 if (!rpc_clnt_dir)···290283 if (!rpc_xprt_dir)291284 goto out_remove;292285293293- return 0;286286+ return;294287out_remove:295288 debugfs_remove_recursive(topdir);296289 topdir = NULL;297297-out:298298- return -ENOMEM;290290+ rpc_clnt_dir = NULL;299291}
+1-6
net/sunrpc/sunrpc_syms.c
···9898 if (err)9999 goto out4;100100101101- err = sunrpc_debugfs_init();102102- if (err)103103- goto out5;104104-101101+ sunrpc_debugfs_init();105102#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)106103 rpc_register_sysctl();107104#endif···106109 init_socket_xprt(); /* clnt sock transport */107110 return 0;108111109109-out5:110110- unregister_rpc_pipefs();111112out4:112113 unregister_pernet_subsys(&sunrpc_net_ops);113114out3:
+1-6
net/sunrpc/xprt.c
···13311331 */13321332struct rpc_xprt *xprt_create_transport(struct xprt_create *args)13331333{13341334- int err;13351334 struct rpc_xprt *xprt;13361335 struct xprt_class *t;13371336···13711372 return ERR_PTR(-ENOMEM);13721373 }1373137413741374- err = rpc_xprt_debugfs_register(xprt);13751375- if (err) {13761376- xprt_destroy(xprt);13771377- return ERR_PTR(err);13781378- }13751375+ rpc_xprt_debugfs_register(xprt);1379137613801377 dprintk("RPC: created transport %p with %u slots\n", xprt,13811378 xprt->max_reqs);
+2-1
net/sunrpc/xprtrdma/rpc_rdma.c
···738738 struct rpc_xprt *xprt = rep->rr_xprt;739739 struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(xprt);740740 __be32 *iptr;741741- int credits, rdmalen, status;741741+ int rdmalen, status;742742 unsigned long cwnd;743743+ u32 credits;743744744745 /* Check status. If bad, signal disconnect and return rep to pool */745746 if (rep->rr_len == ~0U) {
+1-1
net/sunrpc/xprtrdma/xprt_rdma.h
···285285 */286286struct rpcrdma_buffer {287287 spinlock_t rb_lock; /* protects indexes */288288- int rb_max_requests;/* client max requests */288288+ u32 rb_max_requests;/* client max requests */289289 struct list_head rb_mws; /* optional memory windows/fmrs/frmrs */290290 struct list_head rb_all;291291 int rb_send_index;
···26542654 return err;26552655 }2656265626572657- msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);26582658- if (!msg)26592659- return -ENOMEM;26602660-26612657 err = parse_monitor_flags(type == NL80211_IFTYPE_MONITOR ?26622658 info->attrs[NL80211_ATTR_MNTR_FLAGS] : NULL,26632659 &flags);···26612665 if (!err && (flags & MONITOR_FLAG_ACTIVE) &&26622666 !(rdev->wiphy.features & NL80211_FEATURE_ACTIVE_MONITOR))26632667 return -EOPNOTSUPP;26682668+26692669+ msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);26702670+ if (!msg)26712671+ return -ENOMEM;2664267226652673 wdev = rdev_add_virtual_intf(rdev,26662674 nla_data(info->attrs[NL80211_ATTR_IFNAME]),···4399439944004400 if (parse_station_flags(info, dev->ieee80211_ptr->iftype, ¶ms))44014401 return -EINVAL;44024402+44034403+ /* HT/VHT requires QoS, but if we don't have that just ignore HT/VHT44044404+ * as userspace might just pass through the capabilities from the IEs44054405+ * directly, rather than enforcing this restriction and returning an44064406+ * error in this case.44074407+ */44084408+ if (!(params.sta_flags_set & BIT(NL80211_STA_FLAG_WME))) {44094409+ params.ht_capa = NULL;44104410+ params.vht_capa = NULL;44114411+ }4402441244034413 /* When you run into this, adjust the code below for the new flag */44044414 BUILD_BUG_ON(NL80211_STA_FLAG_MAX != 7);···1253812528 }12539125291254012530 for (j = 0; j < match->n_channels; j++) {1254112541- if (nla_put_u32(msg,1254212542- NL80211_ATTR_WIPHY_FREQ,1254312543- match->channels[j])) {1253112531+ if (nla_put_u32(msg, j, match->channels[j])) {1254412532 nla_nest_cancel(msg, nl_freqs);1254512533 nla_nest_cancel(msg, nl_match);1254612534 goto out;
+1-1
net/wireless/reg.c
···228228229229/* We keep a static world regulatory domain in case of the absence of CRDA */230230static const struct ieee80211_regdomain world_regdom = {231231- .n_reg_rules = 6,231231+ .n_reg_rules = 8,232232 .alpha2 = "00",233233 .reg_rules = {234234 /* IEEE 802.11b/g, channels 1..11 */
+6-6
net/xfrm/xfrm_policy.c
···22692269 * have the xfrm_state's. We need to wait for KM to22702270 * negotiate new SA's or bail out with error.*/22712271 if (net->xfrm.sysctl_larval_drop) {22722272- dst_release(dst);22732273- xfrm_pols_put(pols, drop_pols);22742272 XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTNOSTATES);22752275-22762276- return ERR_PTR(-EREMOTE);22732273+ err = -EREMOTE;22742274+ goto error;22772275 }2278227622792277 err = -EAGAIN;···23222324error:23232325 dst_release(dst);23242326dropdst:23252325- dst_release(dst_orig);23272327+ if (!(flags & XFRM_LOOKUP_KEEP_DST_REF))23282328+ dst_release(dst_orig);23262329 xfrm_pols_put(pols, drop_pols);23272330 return ERR_PTR(err);23282331}···23372338 struct sock *sk, int flags)23382339{23392340 struct dst_entry *dst = xfrm_lookup(net, dst_orig, fl, sk,23402340- flags | XFRM_LOOKUP_QUEUE);23412341+ flags | XFRM_LOOKUP_QUEUE |23422342+ XFRM_LOOKUP_KEEP_DST_REF);2341234323422344 if (IS_ERR(dst) && PTR_ERR(dst) == -EREMOTE)23432345 return make_blackhole(net, dst_orig->ops->family, dst_orig);
···4646#include <sound/pcm_params.h>4747#include <sound/soc.h>48484949-#include <asm/mach-types.h>5050-5149#include "../codecs/wm8731.h"5250#include "atmel-pcm.h"5351#include "atmel_ssc_dai.h"···169171 int ret;170172171173 if (!np) {172172- if (!(machine_is_at91sam9g20ek() ||173173- machine_is_at91sam9g20ek_2mmc()))174174- return -ENODEV;174174+ return -ENODEV;175175 }176176177177 ret = atmel_ssc_set_audio(0);···206210 card->dev = &pdev->dev;207211208212 /* Parse device node info */209209- if (np) {210210- ret = snd_soc_of_parse_card_name(card, "atmel,model");211211- if (ret)212212- goto err;213213+ ret = snd_soc_of_parse_card_name(card, "atmel,model");214214+ if (ret)215215+ goto err;213216214214- ret = snd_soc_of_parse_audio_routing(card,215215- "atmel,audio-routing");216216- if (ret)217217- goto err;217217+ ret = snd_soc_of_parse_audio_routing(card,218218+ "atmel,audio-routing");219219+ if (ret)220220+ goto err;218221219219- /* Parse codec info */220220- at91sam9g20ek_dai.codec_name = NULL;221221- codec_np = of_parse_phandle(np, "atmel,audio-codec", 0);222222- if (!codec_np) {223223- dev_err(&pdev->dev, "codec info missing\n");224224- return -EINVAL;225225- }226226- at91sam9g20ek_dai.codec_of_node = codec_np;227227-228228- /* Parse dai and platform info */229229- at91sam9g20ek_dai.cpu_dai_name = NULL;230230- at91sam9g20ek_dai.platform_name = NULL;231231- cpu_np = of_parse_phandle(np, "atmel,ssc-controller", 0);232232- if (!cpu_np) {233233- dev_err(&pdev->dev, "dai and pcm info missing\n");234234- return -EINVAL;235235- }236236- at91sam9g20ek_dai.cpu_of_node = cpu_np;237237- at91sam9g20ek_dai.platform_of_node = cpu_np;238238-239239- of_node_put(codec_np);240240- of_node_put(cpu_np);222222+ /* Parse codec info */223223+ at91sam9g20ek_dai.codec_name = NULL;224224+ codec_np = of_parse_phandle(np, "atmel,audio-codec", 0);225225+ if (!codec_np) {226226+ dev_err(&pdev->dev, "codec info missing\n");227227+ return -EINVAL;241228 }229229+ at91sam9g20ek_dai.codec_of_node = codec_np;230230+231231+ /* Parse dai and platform info */232232+ at91sam9g20ek_dai.cpu_dai_name = NULL;233233+ at91sam9g20ek_dai.platform_name = NULL;234234+ cpu_np = of_parse_phandle(np, "atmel,ssc-controller", 0);235235+ if (!cpu_np) {236236+ dev_err(&pdev->dev, "dai and pcm info missing\n");237237+ return -EINVAL;238238+ }239239+ at91sam9g20ek_dai.cpu_of_node = cpu_np;240240+ at91sam9g20ek_dai.platform_of_node = cpu_np;242242+243243+ of_node_put(codec_np);244244+ of_node_put(cpu_np);242244243245 ret = snd_soc_register_card(card);244246 if (ret) {
sound/soc/cirrus/Kconfig (+1, -1)
···
 
 config SND_EP93XX_SOC_SNAPPERCL15
 	tristate "SoC Audio support for Bluewater Systems Snapper CL15 module"
-	depends on SND_EP93XX_SOC && MACH_SNAPPER_CL15
+	depends on SND_EP93XX_SOC && MACH_SNAPPER_CL15 && I2C
 	select SND_EP93XX_SOC_I2S
 	select SND_SOC_TLV320AIC23_I2C
 	help
sound/soc/codecs/Kconfig (+1, -1)
···
 	select SND_SOC_MAX98088 if I2C
 	select SND_SOC_MAX98090 if I2C
 	select SND_SOC_MAX98095 if I2C
-	select SND_SOC_MAX98357A
+	select SND_SOC_MAX98357A if GPIOLIB
 	select SND_SOC_MAX9850 if I2C
 	select SND_SOC_MAX9768 if I2C
 	select SND_SOC_MAX9877 if I2C
sound/soc/codecs/rt5670.c

···
 	case RT5670_ADC_EQ_CTRL1:
 	case RT5670_EQ_CTRL1:
 	case RT5670_ALC_CTRL_1:
-	case RT5670_IRQ_CTRL1:
 	case RT5670_IRQ_CTRL2:
 	case RT5670_INT_IRQ_ST:
 	case RT5670_IL_CMD:
···
 	msleep(100);
 
 	regmap_write(rt5670->regmap, RT5670_RESET, 0);
+
+	regmap_read(rt5670->regmap, RT5670_VENDOR_ID, &val);
+	if (val >= 4)
+		regmap_write(rt5670->regmap, RT5670_GPIO_CTRL3, 0x0980);
+	else
+		regmap_write(rt5670->regmap, RT5670_GPIO_CTRL3, 0x0d00);
 
 	ret = regmap_register_patch(rt5670->regmap, init_list,
 				    ARRAY_SIZE(init_list));
sound/soc/omap/omap-mcbsp.c

···
 
 	case OMAP_MCBSP_SYSCLK_CLKX_EXT:
 		regs->srgr2	|= CLKSM;
+		regs->pcr0	|= SCLKME;
+		/*
+		 * If McBSP is master but yet the CLKX/CLKR pin drives the SRG,
+		 * disable output on those pins. This enables to inject the
+		 * reference clock through CLKX/CLKR. For this to work
+		 * set_dai_sysclk() _needs_ to be called after set_dai_fmt().
+		 */
+		regs->pcr0	&= ~CLKXM;
+		break;
 	case OMAP_MCBSP_SYSCLK_CLKR_EXT:
 		regs->pcr0	|= SCLKME;
+		/* Disable ouput on CLKR pin in master mode */
+		regs->pcr0	&= ~CLKRM;
 		break;
 	default:
 		err = -ENODEV;
sound/soc/omap/omap-pcm.c (+1, -1)
···
 	struct snd_pcm *pcm = rtd->pcm;
 	int ret;
 
-	ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(64));
+	ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32));
 	if (ret)
 		return ret;
 
sound/soc/samsung/Kconfig (+5, -5)
···
 
 config SND_SOC_SPEYSIDE
 	tristate "Audio support for Wolfson Speyside"
-	depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410
+	depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 && I2C && SPI_MASTER
 	select SND_SAMSUNG_I2S
 	select SND_SOC_WM8996
 	select SND_SOC_WM9081
···
 
 config SND_SOC_BELLS
 	tristate "Audio support for Wolfson Bells"
-	depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 && MFD_ARIZONA
+	depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 && MFD_ARIZONA && I2C && SPI_MASTER
 	select SND_SAMSUNG_I2S
 	select SND_SOC_WM5102
 	select SND_SOC_WM5110
···
 
 config SND_SOC_LITTLEMILL
 	tristate "Audio support for Wolfson Littlemill"
-	depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410
+	depends on SND_SOC_SAMSUNG && MACH_WLF_CRAGG_6410 && I2C
 	select SND_SAMSUNG_I2S
 	select MFD_WM8994
 	select SND_SOC_WM8994
···
 
 config SND_SOC_ODROIDX2
 	tristate "Audio support for Odroid-X2 and Odroid-U3"
-	depends on SND_SOC_SAMSUNG
+	depends on SND_SOC_SAMSUNG && I2C
 	select SND_SOC_MAX98090
 	select SND_SAMSUNG_I2S
 	help
···
 
 config SND_SOC_ARNDALE_RT5631_ALC5631
 	tristate "Audio support for RT5631(ALC5631) on Arndale Board"
-	depends on SND_SOC_SAMSUNG
+	depends on SND_SOC_SAMSUNG && I2C
 	select SND_SAMSUNG_I2S
 	select SND_SOC_RT5631
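All five hunks follow the same pattern: each machine-driver option selects a codec driver, and Kconfig's `select` forces the selected symbol on without checking that symbol's own dependencies, so the bus dependency (I2C, SPI) must be repeated on the machine driver itself or the codec fails to build. A minimal sketch with hypothetical symbols:

```
config SND_SOC_SOMEBOARD
	tristate "Audio support for some example board"
	# 'select' below would enable SND_SOC_SOMECODEC even with I2C
	# disabled, so the bus dependency has to be stated here too.
	depends on SND_SOC_SAMSUNG && I2C
	select SND_SOC_SOMECODEC

config SND_SOC_SOMECODEC
	tristate
	depends on I2C
```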
tools/testing/selftests/Makefile

···
 TARGETS_HOTPLUG = cpu-hotplug
 TARGETS_HOTPLUG += memory-hotplug
 
+# Clear LDFLAGS and MAKEFLAGS if called from main
+# Makefile to avoid test build failures when test
+# Makefile doesn't have explicit build rules.
+ifeq (1,$(MAKELEVEL))
+undefine LDFLAGS
+override MAKEFLAGS =
+endif
+
 all:
 	for TARGET in $(TARGETS); do \
 		make -C $$TARGET; \
tools/testing/selftests/exec/execveat.c (+9, -1)
···
 #ifdef __NR_execveat
 	return syscall(__NR_execveat, fd, path, argv, envp, flags);
 #else
-	errno = -ENOSYS;
+	errno = ENOSYS;
 	return -1;
 #endif
 }
···
 	int fd_script_ephemeral = open_or_die("script.ephemeral", O_RDONLY);
 	int fd_cloexec = open_or_die("execveat", O_RDONLY|O_CLOEXEC);
 	int fd_script_cloexec = open_or_die("script", O_RDONLY|O_CLOEXEC);
+
+	/* Check if we have execveat at all, and bail early if not */
+	errno = 0;
+	execveat_(-1, NULL, NULL, NULL, 0);
+	if (errno == ENOSYS) {
+		printf("[FAIL] ENOSYS calling execveat - no kernel support?\n");
+		return 1;
+	}
 
 	/* Change file position to confirm it doesn't affect anything */
 	lseek(fd, 10, SEEK_SET);
virt/kvm/arm/vgic.c

···
 	return vgic_ops->get_eisr(vcpu);
 }
 
+static inline void vgic_clear_eisr(struct kvm_vcpu *vcpu)
+{
+	vgic_ops->clear_eisr(vcpu);
+}
+
 static inline u32 vgic_get_interrupt_status(struct kvm_vcpu *vcpu)
 {
 	return vgic_ops->get_interrupt_status(vcpu);
···
 	vgic_set_lr(vcpu, lr_nr, vlr);
 	clear_bit(lr_nr, vgic_cpu->lr_used);
 	vgic_cpu->vgic_irq_lr_map[irq] = LR_EMPTY;
+	vgic_sync_lr_elrsr(vcpu, lr_nr, vlr);
 }
 
 /*
···
 			BUG_ON(!test_bit(lr, vgic_cpu->lr_used));
 			vlr.state |= LR_STATE_PENDING;
 			vgic_set_lr(vcpu, lr, vlr);
+			vgic_sync_lr_elrsr(vcpu, lr, vlr);
 			return true;
 		}
 	}
···
 		vlr.state |= LR_EOI_INT;
 
 	vgic_set_lr(vcpu, lr, vlr);
+	vgic_sync_lr_elrsr(vcpu, lr, vlr);
 
 	return true;
 }
···
 
 	if (status & INT_STATUS_UNDERFLOW)
 		vgic_disable_underflow(vcpu);
+
+	/*
+	 * In the next iterations of the vcpu loop, if we sync the vgic state
+	 * after flushing it, but before entering the guest (this happens for
+	 * pending signals and vmid rollovers), then make sure we don't pick
+	 * up any old maintenance interrupts here.
+	 */
+	vgic_clear_eisr(vcpu);
 
 	return level_pending;
 }
···
 	 * emulation. So check this here again. KVM_CREATE_DEVICE does
 	 * the proper checks already.
 	 */
-	if (type == KVM_DEV_TYPE_ARM_VGIC_V2 && !vgic->can_emulate_gicv2)
-		return -ENODEV;
+	if (type == KVM_DEV_TYPE_ARM_VGIC_V2 && !vgic->can_emulate_gicv2) {
+		ret = -ENODEV;
+		goto out;
+	}
 
 	/*
 	 * Any time a vcpu is run, vcpu_load is called which tries to grab the
virt/kvm/kvm_main.c (+8, -7)
···
 	BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
 
 	r = -ENOMEM;
-	kvm->memslots = kzalloc(sizeof(struct kvm_memslots), GFP_KERNEL);
+	kvm->memslots = kvm_kvzalloc(sizeof(struct kvm_memslots));
 	if (!kvm->memslots)
 		goto out_err_no_srcu;
 
···
 out_err_no_disable:
 	for (i = 0; i < KVM_NR_BUSES; i++)
 		kfree(kvm->buses[i]);
-	kfree(kvm->memslots);
+	kvfree(kvm->memslots);
 	kvm_arch_free_vm(kvm);
 	return ERR_PTR(r);
 }
···
 	kvm_for_each_memslot(memslot, slots)
 		kvm_free_physmem_slot(kvm, memslot, NULL);
 
-	kfree(kvm->memslots);
+	kvfree(kvm->memslots);
 }
 
 static void kvm_destroy_devices(struct kvm *kvm)
···
 		goto out_free;
 	}
 
-	slots = kmemdup(kvm->memslots, sizeof(struct kvm_memslots),
-			GFP_KERNEL);
+	slots = kvm_kvzalloc(sizeof(struct kvm_memslots));
 	if (!slots)
 		goto out_free;
+	memcpy(slots, kvm->memslots, sizeof(struct kvm_memslots));
 
 	if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE)) {
 		slot = id_to_memslot(slots, mem->slot);
···
 	kvm_arch_commit_memory_region(kvm, mem, &old, change);
 
 	kvm_free_physmem_slot(kvm, &old, &new);
-	kfree(old_memslots);
+	kvfree(old_memslots);
 
 	/*
 	 * IOMMU mapping: New slots need to be mapped. Old slots need to be
···
 	return 0;
 
 out_slots:
-	kfree(slots);
+	kvfree(slots);
 out_free:
 	kvm_free_physmem_slot(kvm, &new, &old);
 out:
···
 	case KVM_CAP_SIGNAL_MSI:
 #endif
 #ifdef CONFIG_HAVE_KVM_IRQFD
+	case KVM_CAP_IRQFD:
 	case KVM_CAP_IRQFD_RESAMPLE:
 #endif
 	case KVM_CAP_CHECK_EXTENSION_VM: