Documentation/admin-guide/kernel-parameters.txt
···
 	dscc4.setup=	[NET]
 
+	dt_cpu_ftrs=	[PPC]
+			Format: {"off" | "known"}
+			Control how the dt_cpu_ftrs device-tree binding is
+			used for CPU feature discovery and setup (if it
+			exists).
+			off: Do not use it, fall back to legacy cpu table.
+			known: Do not pass through unknown features to guests
+			or userspace, only those that the kernel is aware of.
+
 	dump_apple_properties	[X86]
 			Dump name and content of EFI device properties on
 			x86 Macs.  Useful for driver authors to determine
···
 			grace period will be considered for automatic
 			expediting.  Set to zero to disable automatic
 			expediting.
+
+	stack_guard_gap=	[MM]
+			override the default stack gap protection. The value
+			is in page units and it defines how many pages prior
+			to (for stacks growing down) resp. after (for stacks
+			growing up) the main stack are reserved for no other
+			mapping. Default value is 256 pages.
 
 	stacktrace	[FTRACE]
 			Enabled the stack tracer on boot up.
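As a concrete illustration (not part of the patch; the values are examples), both of the new options are ordinary kernel command line parameters, e.g.:

	dt_cpu_ftrs=known stack_guard_gap=512

which would restrict dt_cpu_ftrs to features the kernel knows about and widen the stack gap to 512 pages.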
Documentation/devicetree/bindings/clock/sunxi-ccu.txt
···
 - #clock-cells : must contain 1
 - #reset-cells : must contain 1
 
-For the PRCM CCUs on H3/A64, one more clock is needed:
+For the PRCM CCUs on H3/A64, two more clocks are needed:
+- "pll-periph": the SoC's peripheral PLL from the main CCU
 - "iosc": the SoC's internal frequency oscillator
 
Example for generic CCU:
···
 r_ccu: clock@01f01400 {
 	compatible = "allwinner,sun50i-a64-r-ccu";
 	reg = <0x01f01400 0x100>;
-	clocks = <&osc24M>, <&osc32k>, <&iosc>;
-	clock-names = "hosc", "losc", "iosc";
+	clocks = <&osc24M>, <&osc32k>, <&iosc>, <&ccu CLK_PLL_PERIPH0>;
+	clock-names = "hosc", "losc", "iosc", "pll-periph";
 	#clock-cells = <1>;
 	#reset-cells = <1>;
 };
Documentation/devicetree/bindings/gpio/gpio-mvebu.txt
···
 Optional properties:
 
 In order to use the GPIO lines in PWM mode, some additional optional
-properties are required. Only Armada 370 and XP support these properties.
+properties are required.
 
-- compatible: Must contain "marvell,armada-370-xp-gpio"
+- compatible: Must contain "marvell,armada-370-gpio"
 
 - reg: an additional register set is needed, for the GPIO Blink
   Counter on/off registers.
···
 };
 
 gpio1: gpio@18140 {
-	compatible = "marvell,armada-370-xp-gpio";
+	compatible = "marvell,armada-370-gpio";
 	reg = <0x18140 0x40>, <0x181c8 0x08>;
 	reg-names = "gpio", "pwm";
 	ngpios = <17>;
Documentation/devicetree/bindings/input/atmel,maxtouch.txt
···
   experiment to determine which bit corresponds to which input. Use
   KEY_RESERVED for unused padding values.
 
+- reset-gpios: GPIO specifier for the touchscreen's reset pin (active low)
+
 Example:
 
 	touch@4b {
···3434 "brcm,bcm6328-switch"3535 "brcm,bcm6368-switch" and the mandatory "brcm,bcm63xx-switch"36363737-See Documentation/devicetree/bindings/dsa/dsa.txt for a list of additional3737+See Documentation/devicetree/bindings/net/dsa/dsa.txt for a list of additional3838required and optional properties.39394040Examples:
Documentation/devicetree/bindings/net/dsa/marvell.txt
···
 - interrupt-controller : Indicates the switch is itself an interrupt
 			 controller. This is used for the PHY interrupts.
 #interrupt-cells = <2> : Controller uses two cells, number and flag
+- eeprom-length	 : Set to the length of an EEPROM connected to the
+			 switch. Must be set if the switch can not detect
+			 the presence and/or size of a connected EEPROM,
+			 otherwise optional.
 - mdio			 : Container of PHY and devices on the switches MDIO
 			 bus.
 - mdio?		 : Container of PHYs and devices on the external MDIO
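As an illustrative sketch (node layout and the 512-byte size are examples, not from the patch), the new property sits directly in the switch node:

	switch0: switch@0 {
		compatible = "marvell,mv88e6085";
		reg = <0>;
		eeprom-length = <512>;	/* EEPROM size in bytes; example value */
	};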
Documentation/devicetree/bindings/net/smsc911x.txt
···
   of the device. On many systems this is wired high so the device goes
   out of reset at power-on, but if it is under program control, this
   optional GPIO can wake up in response to it.
+- vdd33a-supply, vddvario-supply : 3.3V analog and IO logic power supplies
 
 Examples:
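A hedged example of wiring the new supplies (the regulator phandles are hypothetical):

	lan9220@38000000 {
		compatible = "smsc,lan9220", "smsc,lan9115";
		reg = <0x38000000 0x1000>;
		vdd33a-supply = <&reg_3v3>;	/* hypothetical 3.3V analog supply */
		vddvario-supply = <&reg_1v8>;	/* hypothetical IO logic supply */
	};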
Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt
···
 bias-pull-up		- pull up the pin
 bias-pull-down		- pull down the pin
 bias-pull-pin-default	- use pin-default pull state
-bi-directional		- pin supports simultaneous input/output operations
 drive-push-pull		- drive actively high and low
 drive-open-drain	- drive with open drain
 drive-open-source	- drive with open source
···
 power-source		- select between different power supplies
 low-power-enable	- enable low power mode
 low-power-disable	- disable low power mode
-output-enable		- enable output on pin regardless of output value
 output-low		- set the pin to output mode with low level
 output-high		- set the pin to output mode with high level
 slew-rate		- set the slew rate
Documentation/devicetree/bindings/serio/ps2-gpio.txt
+Device-Tree binding for ps/2 gpio device
+
+Required properties:
+	- compatible = "ps2-gpio"
+	- data-gpios: the data pin
+	- clk-gpios: the clock pin
+	- interrupts: Should trigger on the falling edge of the clock line.
+
+Optional properties:
+	- write-enable: Indicates whether write function is provided
+	  to serio device. Possibly providing the write fn will not work, because
+	  of the tough timing requirements.
+
+Example nodes:
+
+ps2@0 {
+	compatible = "ps2-gpio";
+	interrupt-parent = <&gpio>;
+	interrupts = <23 IRQ_TYPE_EDGE_FALLING>;
+	data-gpios = <&gpio 24 GPIO_ACTIVE_HIGH>;
+	clk-gpios = <&gpio 23 GPIO_ACTIVE_HIGH>;
+	write-enable;
+};
Documentation/devicetree/bindings/usb/dwc2.txt
···1010 - "rockchip,rk3288-usb", "rockchip,rk3066-usb", "snps,dwc2": for rk3288 Soc;1111 - "lantiq,arx100-usb": The DWC2 USB controller instance in Lantiq ARX SoCs;1212 - "lantiq,xrx200-usb": The DWC2 USB controller instance in Lantiq XRX SoCs;1313+ - "amlogic,meson8-usb": The DWC2 USB controller instance in Amlogic Meson8 SoCs;1314 - "amlogic,meson8b-usb": The DWC2 USB controller instance in Amlogic Meson8b SoCs;1415 - "amlogic,meson-gxbb-usb": The DWC2 USB controller instance in Amlogic S905 SoCs;1516 - "amcc,dwc-otg": The DWC2 USB controller instance in AMCC Canyonlands 460EX SoCs;
Documentation/gpio/drivers-on-gpio.txt
···
   NAND flash MTD subsystem and provides chip access and partition parsing like
   any other NAND driving hardware.
 
+- ps2-gpio: drivers/input/serio/ps2-gpio.c is used to drive a PS/2 (IBM) serio
+  bus, data and clock line, by bit banging two GPIO lines. It will appear as
+  any other serio bus to the system and makes it possible to connect drivers
+  for e.g. keyboards and other PS/2 protocol based devices.
+
 Apart from this there are special GPIO drivers in subsystems like MMC/SD to
 read card detect and write protect GPIO lines, and in the TTY serial subsystem
 to emulate MCTRL (modem control) signals CTS/RTS by using two GPIO lines. The
Documentation/networking/dpaa.txt
+The QorIQ DPAA Ethernet Driver
+==============================
+
+Authors:
+Madalin Bucur <madalin.bucur@nxp.com>
+Camelia Groza <camelia.groza@nxp.com>
+
+Contents
+========
+
+	- DPAA Ethernet Overview
+	- DPAA Ethernet Supported SoCs
+	- Configuring DPAA Ethernet in your kernel
+	- DPAA Ethernet Frame Processing
+	- DPAA Ethernet Features
+	- Debugging
+
+DPAA Ethernet Overview
+======================
+
+DPAA stands for Data Path Acceleration Architecture and it is a
+set of networking acceleration IPs that are available on several
+generations of SoCs, both on PowerPC and ARM64.
+
+The Freescale DPAA architecture consists of a series of hardware blocks
+that support Ethernet connectivity. The Ethernet driver depends upon the
+following drivers in the Linux kernel:
+
+ - Peripheral Access Memory Unit (PAMU) (* needed only for PPC platforms)
+    drivers/iommu/fsl_*
+ - Frame Manager (FMan)
+    drivers/net/ethernet/freescale/fman
+ - Queue Manager (QMan), Buffer Manager (BMan)
+    drivers/soc/fsl/qbman
+
+A simplified view of the dpaa_eth interfaces mapped to FMan MACs:
+
+  dpaa_eth       /eth0\     ...       /ethN\
+  driver        |      |             |      |
+  -------------   ----   -----------   ----   -------------
+       -Ports  / Tx  Rx \    ...    / Tx  Rx \
+  FMan        |          |         |          |
+       -MACs  |   MAC0   |         |   MACN   |
+             /   dtsec0   \  ...  /   dtsecN   \  (or tgec)
+            /              \     /              \(or memac)
+  ---------  --------------  ---  --------------  ---------
+      FMan, FMan Port, FMan SP, FMan MURAM drivers
+  ---------------------------------------------------------
+      FMan HW blocks: MURAM, MACs, Ports, SP
+  ---------------------------------------------------------
+
+The dpaa_eth relation to the QMan, BMan and FMan:
+              ________________________________
+  dpaa_eth   /            eth0                \
+  driver    /                                  \
+  ---------    -^-   -^-   -^-   ---    ---------
+  QMan driver / \   / \   / \   \   /  | BMan    |
+             |Rx | |Rx | |Tx | |Tx |   | driver  |
+  ---------  |Dfl| |Err| |Cnf| |FQs|   |         |
+  QMan HW    |FQ | |FQ | |FQs| |   |   |         |
+             / \   / \   / \   \   /   |         |
+  ---------    ---   ---   ---   -v-   ---------
+            |        FMan QMI         |         |
+            | FMan HW       FMan BMI  | BMan HW |
+            -----------------------    --------
+
+where the acronyms used above (and in the code) are:
+DPAA = Data Path Acceleration Architecture
+FMan = DPAA Frame Manager
+QMan = DPAA Queue Manager
+BMan = DPAA Buffers Manager
+QMI = QMan interface in FMan
+BMI = BMan interface in FMan
+FMan SP = FMan Storage Profiles
+MURAM = Multi-user RAM in FMan
+FQ = QMan Frame Queue
+Rx Dfl FQ = default reception FQ
+Rx Err FQ = Rx error frames FQ
+Tx Cnf FQ = Tx confirmation FQs
+Tx FQs = transmission frame queues
+dtsec = datapath three speed Ethernet controller (10/100/1000 Mbps)
+tgec = ten gigabit Ethernet controller (10 Gbps)
+memac = multirate Ethernet MAC (10/100/1000/10000)
+
+DPAA Ethernet Supported SoCs
+============================
+
+The DPAA drivers enable the Ethernet controllers present on the following SoCs:
+
+# PPC
+P1023
+P2041
+P3041
+P4080
+P5020
+P5040
+T1023
+T1024
+T1040
+T1042
+T2080
+T4240
+B4860
+
+# ARM
+LS1043A
+LS1046A
+
+Configuring DPAA Ethernet in your kernel
+========================================
+
+To enable the DPAA Ethernet driver, the following Kconfig options are required:
+
+# common for arch/arm64 and arch/powerpc platforms
+CONFIG_FSL_DPAA=y
+CONFIG_FSL_FMAN=y
+CONFIG_FSL_DPAA_ETH=y
+CONFIG_FSL_XGMAC_MDIO=y
+
+# for arch/powerpc only
+CONFIG_FSL_PAMU=y
+
+# common options needed for the PHYs used on the RDBs
+CONFIG_VITESSE_PHY=y
+CONFIG_REALTEK_PHY=y
+CONFIG_AQUANTIA_PHY=y
+
+DPAA Ethernet Frame Processing
+==============================
+
+On Rx, buffers for the incoming frames are retrieved from one of the three
+existing buffer pools. The driver initializes and seeds these, each with
+buffers of different sizes: 1KB, 2KB and 4KB.
+
+On Tx, all transmitted frames are returned to the driver through Tx
+confirmation frame queues. The driver is then responsible for freeing the
+buffers. In order to do this properly, a backpointer is added to the buffer
+before transmission that points to the skb. When the buffer returns to the
+driver on a confirmation FQ, the skb can be correctly consumed.
+
+DPAA Ethernet Features
+======================
+
+Currently the DPAA Ethernet driver enables the basic features required for
+a Linux Ethernet driver. The support for advanced features will be added
+gradually.
+
+The driver has Rx and Tx checksum offloading for UDP and TCP. Currently the Rx
+checksum offload feature is enabled by default and cannot be controlled through
+ethtool.
+
+The driver has support for multiple prioritized Tx traffic classes. Priorities
+range from 0 (lowest) to 3 (highest). These are mapped to HW workqueues with
+strict priority levels. Each traffic class contains NR_CPU TX queues. By
+default, only one traffic class is enabled and the lowest priority Tx queues
+are used. Higher priority traffic classes can be enabled with the mqprio
+qdisc; for example, all four traffic classes are enabled on an interface with
+the command shown after the list below. skb priority levels are mapped to
+traffic classes as follows:
+
+	* priorities 0 to 3 - traffic class 0 (low priority)
+	* priorities 4 to 7 - traffic class 1 (medium-low priority)
+	* priorities 8 to 11 - traffic class 2 (medium-high priority)
+	* priorities 12 to 15 - traffic class 3 (high priority)
+
+tc qdisc add dev <int> root handle 1: \
+	 mqprio num_tc 4 map 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 hw 1
+
+Debugging
+=========
+
+The following statistics are exported for each interface through ethtool:
+
+	- interrupt count per CPU
+	- Rx packets count per CPU
+	- Tx packets count per CPU
+	- Tx confirmed packets count per CPU
+	- Tx S/G frames count per CPU
+	- Tx error count per CPU
+	- Rx error count per CPU
+	- Rx error count per type
+	- congestion related statistics:
+		- congestion status
+		- time spent in congestion
+		- number of times the device entered congestion
+		- dropped packets count per cause
+
+The driver also exports the following information in sysfs:
+
+	- the FQ IDs for each FQ type
+	/sys/devices/platform/dpaa-ethernet.0/net/<int>/fqids
+
+	- the IDs of the buffer pools in use
+	/sys/devices/platform/dpaa-ethernet.0/net/<int>/bpids
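(All of the per-CPU and congestion counters above are read through ethtool's standard statistics interface; as elsewhere in this document, <int> stands for the interface name:

	ethtool -S <int>
)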
Documentation/networking/scaling.txt
···
 or will be computed in the stack. Capable hardware can pass the hash in
 the receive descriptor for the packet; this would usually be the same
 hash used for RSS (e.g. computed Toeplitz hash). The hash is saved in
-skb->rx_hash and can be used elsewhere in the stack as a hash of the
+skb->hash and can be used elsewhere in the stack as a hash of the
 packet’s flow.
 
 Each receive hardware queue has an associated list of CPUs to which
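As a minimal sketch of a consumer of this field (the helper and its nbuckets parameter are hypothetical, not kernel API), skb_get_hash() returns the hardware-provided value when the NIC set skb->hash and falls back to computing a software flow hash otherwise:

	#include <linux/kernel.h>
	#include <linux/skbuff.h>

	/* Hypothetical helper: spread flows over nbuckets using the flow hash. */
	static u32 example_flow_bucket(struct sk_buff *skb, u32 nbuckets)
	{
		u32 hash = skb_get_hash(skb);	/* HW hash if available */

		return reciprocal_scale(hash, nbuckets);	/* map hash to a bucket */
	}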
Documentation/networking/tcp.txt
···
 TCP protocol
 ============
 
-Last updated: 9 February 2008
+Last updated: 3 June 2017
 
 Contents
 ========
···
 A congestion control mechanism can be registered through functions in
 tcp_cong.c. The functions used by the congestion control mechanism are
 registered via passing a tcp_congestion_ops struct to
-tcp_register_congestion_control. As a minimum name, ssthresh,
-cong_avoid must be valid.
+tcp_register_congestion_control. As a minimum, the congestion control
+mechanism must provide a valid name and must implement either ssthresh,
+cong_avoid and undo_cwnd hooks or the "omnipotent" cong_control hook.
 
 Private data for a congestion control mechanism is stored in tp->ca_priv.
 tcp_ca(tp) returns a pointer to this space. This is preallocated space - it
 is important to check the size of your private data will fit this space, or
-alternatively space could be allocated elsewhere and a pointer to it could
+alternatively, space could be allocated elsewhere and a pointer to it could
 be stored here.
 
 There are three kinds of congestion control algorithms currently: The
 simplest ones are derived from TCP reno (highspeed, scalable) and just
-provide an alternative the congestion window calculation. More complex
+provide an alternative congestion window calculation. More complex
 ones like BIC try to look at other events to provide better
 heuristics. There are also round trip time based algorithms like
 Vegas and Westwood+.
···
 needs to maintain fairness and performance. Please review current
 research and RFC's before developing new modules.
 
-The method that is used to determine which congestion control mechanism is
-determined by the setting of the sysctl net.ipv4.tcp_congestion_control.
-The default congestion control will be the last one registered (LIFO);
-so if you built everything as modules, the default will be reno. If you
-build with the defaults from Kconfig, then CUBIC will be builtin (not a
-module) and it will end up the default.
+The default congestion control mechanism is chosen based on the
+DEFAULT_TCP_CONG Kconfig parameter. If you really want a particular default
+value then you can set it using sysctl net.ipv4.tcp_congestion_control. The
+module will be autoloaded if needed and you will get the expected protocol. If
+you ask for an unknown congestion method, then the sysctl attempt will fail.
 
-If you really want a particular default value then you will need
-to set it with the sysctl. If you use a sysctl, the module will be autoloaded
-if needed and you will get the expected protocol. If you ask for an
-unknown congestion method, then the sysctl attempt will fail.
-
-If you remove a tcp congestion control module, then you will get the next
+If you remove a TCP congestion control module, then you will get the next
 available one. Since reno cannot be built as a module, and cannot be
-deleted, it will always be available.
+removed, it will always be available.
 
 How the new TCP output machine [nyi] works.
 ===========================================
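For illustration, a minimal module satisfying the registration rules above might look as follows; this sketch simply reuses the kernel's Reno helpers, and the name "example" is made up:

	#include <linux/module.h>
	#include <net/tcp.h>

	static struct tcp_congestion_ops tcp_example __read_mostly = {
		.name		= "example",
		.owner		= THIS_MODULE,
		.ssthresh	= tcp_reno_ssthresh,	/* slow start threshold on loss */
		.cong_avoid	= tcp_reno_cong_avoid,	/* per-ACK cwnd growth */
		.undo_cwnd	= tcp_reno_undo_cwnd,	/* revert cwnd after spurious loss */
	};

	static int __init tcp_example_init(void)
	{
		return tcp_register_congestion_control(&tcp_example);
	}

	static void __exit tcp_example_exit(void)
	{
		tcp_unregister_congestion_control(&tcp_example);
	}

	module_init(tcp_example_init);
	module_exit(tcp_example_exit);
	MODULE_LICENSE("GPL");

Once loaded, it could then be selected with sysctl net.ipv4.tcp_congestion_control=example.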
MAINTAINERS
···
 
 ARM/CIRRUS LOGIC EP93XX ARM ARCHITECTURE
 M:	Hartley Sweeten <hsweeten@visionengravers.com>
-M:	Ryan Mallon <rmallon@gmail.com>
+M:	Alexander Sverdlin <alexander.sverdlin@gmail.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	arch/arm/mach-ep93xx/
···
 M:	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
-F:	arch/arm/mach-mvebu/
-F:	drivers/rtc/rtc-armada38x.c
 F:	arch/arm/boot/dts/armada*
 F:	arch/arm/boot/dts/kirkwood*
+F:	arch/arm/configs/mvebu_*_defconfig
+F:	arch/arm/mach-mvebu/
 F:	arch/arm64/boot/dts/marvell/armada*
 F:	drivers/cpufreq/mvebu-cpufreq.c
-F:	arch/arm/configs/mvebu_*_defconfig
+F:	drivers/irqchip/irq-armada-370-xp.c
+F:	drivers/irqchip/irq-mvebu-*
+F:	drivers/rtc/rtc-armada38x.c
 
 ARM/Marvell Berlin SoC support
 M:	Jisheng Zhang <jszhang@marvell.com>
···
 ARM/SAMSUNG EXYNOS ARM ARCHITECTURES
 M:	Kukjin Kim <kgene@kernel.org>
 M:	Krzysztof Kozlowski <krzk@kernel.org>
-R:	Javier Martinez Canillas <javier@osg.samsung.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
 Q:	https://patchwork.kernel.org/project/linux-samsung-soc/list/
···
 ARM/STI ARCHITECTURE
 M:	Patrice Chotard <patrice.chotard@st.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-L:	kernel@stlinux.com
 W:	http://www.stlinux.com
 S:	Maintained
 F:	arch/arm/mach-sti/
···
 
 C6X ARCHITECTURE
 M:	Mark Salter <msalter@redhat.com>
-M:	Aurelien Jacquiot <a-jacquiot@ti.com>
+M:	Aurelien Jacquiot <jacquiot.aurelien@gmail.com>
 L:	linux-c6x-dev@linux-c6x.org
 W:	http://www.linux-c6x.org/wiki/index.php/Main_Page
 S:	Maintained
···
 
 GENWQE (IBM Generic Workqueue Card)
 M:	Frank Haverkamp <haver@linux.vnet.ibm.com>
-M:	Gabriel Krisman Bertazi <krisman@linux.vnet.ibm.com>
+M:	Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
 S:	Supported
 F:	drivers/misc/genwqe/
 
···
 
 GPIO SUBSYSTEM
 M:	Linus Walleij <linus.walleij@linaro.org>
-M:	Alexandre Courbot <gnurou@gmail.com>
 L:	linux-gpio@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git
 S:	Maintained
···
 
 LIVE PATCHING
 M:	Josh Poimboeuf <jpoimboe@redhat.com>
-M:	Jessica Yu <jeyu@redhat.com>
+M:	Jessica Yu <jeyu@kernel.org>
 M:	Jiri Kosina <jikos@kernel.org>
 M:	Miroslav Benes <mbenes@suse.cz>
 R:	Petr Mladek <pmladek@suse.com>
···
 F:	drivers/media/radio/radio-miropcm20*
 
 MELLANOX MLX4 core VPI driver
-M:	Yishai Hadas <yishaih@mellanox.com>
+M:	Tariq Toukan <tariqt@mellanox.com>
 L:	netdev@vger.kernel.org
 L:	linux-rdma@vger.kernel.org
 W:	http://www.mellanox.com
···
 S:	Supported
 F:	drivers/net/ethernet/mellanox/mlx4/
 F:	include/linux/mlx4/
-F:	include/uapi/rdma/mlx4-abi.h
 
 MELLANOX MLX4 IB driver
 M:	Yishai Hadas <yishaih@mellanox.com>
···
 S:	Supported
 F:	drivers/infiniband/hw/mlx4/
 F:	include/linux/mlx4/
+F:	include/uapi/rdma/mlx4-abi.h
 
 MELLANOX MLX5 core VPI driver
 M:	Saeed Mahameed <saeedm@mellanox.com>
···
 S:	Supported
 F:	drivers/net/ethernet/mellanox/mlx5/core/
 F:	include/linux/mlx5/
-F:	include/uapi/rdma/mlx5-abi.h
 
 MELLANOX MLX5 IB driver
 M:	Matan Barak <matanb@mellanox.com>
···
 S:	Supported
 F:	drivers/infiniband/hw/mlx5/
 F:	include/linux/mlx5/
+F:	include/uapi/rdma/mlx5-abi.h
 
 MELEXIS MLX90614 DRIVER
 M:	Crt Mori <cmo@melexis.com>
···
 F:	drivers/media/dvb-frontends/mn88473*
 
 MODULE SUPPORT
-M:	Jessica Yu <jeyu@redhat.com>
+M:	Jessica Yu <jeyu@kernel.org>
 M:	Rusty Russell <rusty@rustcorp.com.au>
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jeyu/linux.git modules-next
 S:	Maintained
···
 
 PXA RTC DRIVER
 M:	Robert Jarzmik <robert.jarzmik@free.fr>
-L:	rtc-linux@googlegroups.com
+L:	linux-rtc@vger.kernel.org
 S:	Maintained
 
 QAT DRIVER
···
 REAL TIME CLOCK (RTC) SUBSYSTEM
 M:	Alessandro Zummo <a.zummo@towertech.it>
 M:	Alexandre Belloni <alexandre.belloni@free-electrons.com>
-L:	rtc-linux@googlegroups.com
+L:	linux-rtc@vger.kernel.org
 Q:	http://patchwork.ozlabs.org/project/rtc-linux/list/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/abelloni/linux.git
 S:	Maintained
···
 
 STI CEC DRIVER
 M:	Benjamin Gaignard <benjamin.gaignard@linaro.org>
-L:	kernel@stlinux.com
 S:	Maintained
 F:	drivers/staging/media/st-cec/
 F:	Documentation/devicetree/bindings/media/stih-cec.txt
···
 S:	Supported
 F:	arch/arm/mach-davinci/
 F:	drivers/i2c/busses/i2c-davinci.c
+F:	arch/arm/boot/dts/da850*
 
 TI DAVINCI SERIES MEDIA DRIVER
 M:	"Lad, Prabhakar" <prabhakar.csengg@gmail.com>
···
 F:	drivers/net/wireless/wl3501*
 
 WOLFSON MICROELECTRONICS DRIVERS
-L:	patches@opensource.wolfsonmicro.com
+L:	patches@opensource.cirrus.com
 T:	git https://github.com/CirrusLogic/linux-drivers.git
 W:	https://github.com/CirrusLogic/linux-drivers/wiki
 S:	Supported
Makefile
···
 VERSION = 4
 PATCHLEVEL = 12
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION =
 NAME = Fearless Coyote
 
 # *DOCUMENTATION*
···
 	@echo  '  make V=0|1 [targets] 0 => quiet build (default), 1 => verbose build'
 	@echo  '  make V=2   [targets] 2 => give reason for rebuild of target'
 	@echo  '  make O=dir [targets] Locate all output files in "dir", including .config'
-	@echo  '  make C=1   [targets] Check all c source with $$CHECK (sparse by default)'
+	@echo  '  make C=1   [targets] Check re-compiled c source with $$CHECK (sparse by default)'
 	@echo  '  make C=2   [targets] Force check of all c source with $$CHECK'
 	@echo  '  make RECORDMCOUNT_WARN=1 [targets] Warn about ignored mcount sections'
 	@echo  '  make W=n   [targets] Enable extra gcc checks, n=1,2,3 where'
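(An illustrative invocation of the reworded C=1 mode: after touching a file, `make C=1` re-runs sparse only on the sources that actually get rebuilt, while `make C=2` checks everything:

	touch drivers/gpio/gpiolib.c
	make C=1 drivers/gpio/
)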
arch/arc/include/asm/processor.h
···
 #define TSK_K_BLINK(tsk)    TSK_K_REG(tsk, 4)
 #define TSK_K_FP(tsk)       TSK_K_REG(tsk, 0)
 
-#define thread_saved_pc(tsk)	TSK_K_BLINK(tsk)
-
 extern void start_thread(struct pt_regs * regs, unsigned long pc,
 			 unsigned long usp);
 
arch/arc/mm/mmap.c
···
 
 		vma = find_vma(mm, addr);
 		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vma->vm_start))
+		    (!vma || addr + len <= vm_start_gap(vma)))
 			return addr;
 	}
 
arch/arm/boot/dts/bcm283x.dtsi
···
 #include <dt-bindings/clock/bcm2835-aux.h>
 #include <dt-bindings/gpio/gpio.h>
 
+/* firmware-provided startup stubs live here, where the secondary CPUs are
+ * spinning.
+ */
+/memreserve/ 0x00000000 0x00001000;
+
 /* This include file covers the common peripherals and configuration between
  * bcm2835 and bcm2836 implementations, leaving the CPU configuration to
  * bcm2835.dtsi and bcm2836.dtsi.
arch/arm/mm/dma-mapping.c
···
 }
 EXPORT_SYMBOL_GPL(arm_iommu_attach_device);
 
-static void __arm_iommu_detach_device(struct device *dev)
+/**
+ * arm_iommu_detach_device
+ * @dev: valid struct device pointer
+ *
+ * Detaches the provided device from a previously attached map.
+ * This voids the dma operations (dma_map_ops pointer)
+ */
+void arm_iommu_detach_device(struct device *dev)
 {
 	struct dma_iommu_mapping *mapping;
 
···
 	iommu_detach_device(mapping->domain, dev);
 	kref_put(&mapping->kref, release_iommu_mapping);
 	to_dma_iommu_mapping(dev) = NULL;
+	set_dma_ops(dev, NULL);
 
 	pr_debug("Detached IOMMU controller from %s device.\n", dev_name(dev));
-}
-
-/**
- * arm_iommu_detach_device
- * @dev: valid struct device pointer
- *
- * Detaches the provided device from a previously attached map.
- * This voids the dma operations (dma_map_ops pointer)
- */
-void arm_iommu_detach_device(struct device *dev)
-{
-	__arm_iommu_detach_device(dev);
-	set_dma_ops(dev, NULL);
 }
 EXPORT_SYMBOL_GPL(arm_iommu_detach_device);
 
···
 	if (!mapping)
 		return;
 
-	__arm_iommu_detach_device(dev);
+	arm_iommu_detach_device(dev);
 	arm_iommu_release_mapping(mapping);
 }
 
···
 		dev->dma_ops = xen_dma_ops;
 	}
 #endif
+	dev->archdata.dma_ops_setup = true;
 }
 
 void arch_teardown_dma_ops(struct device *dev)
 {
+	if (!dev->archdata.dma_ops_setup)
+		return;
+
 	arm_teardown_iommu_dma_ops(dev);
 }
arch/arm/mm/mmap.c
···
 
 	vma = find_vma(mm, addr);
 	if (TASK_SIZE - len >= addr &&
-	    (!vma || addr + len <= vma->vm_start))
+	    (!vma || addr + len <= vm_start_gap(vma)))
 		return addr;
 	}
 
···
 		addr = PAGE_ALIGN(addr);
 		vma = find_vma(mm, addr);
 		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vma->vm_start))
+		    (!vma || addr + len <= vm_start_gap(vma)))
 			return addr;
 	}
 
arch/arm/mm/mmu.c
···
 
 	high_memory = __va(arm_lowmem_limit - 1) + 1;
 
+	if (!memblock_limit)
+		memblock_limit = arm_lowmem_limit;
+
 	/*
 	 * Round the memblock limit down to a pmd size.  This
 	 * helps to ensure that we will allocate memory from the
 	 * last full pmd, which should be mapped.
 	 */
-	if (memblock_limit)
-		memblock_limit = round_down(memblock_limit, PMD_SIZE);
-	if (!memblock_limit)
-		memblock_limit = arm_lowmem_limit;
+	memblock_limit = round_down(memblock_limit, PMD_SIZE);
 
 	if (!IS_ENABLED(CONFIG_HIGHMEM) || cache_is_vipt_aliasing()) {
 		if (memblock_end_of_DRAM() > arm_lowmem_limit) {
arch/arm64/kernel/vdso/gettimeofday.S
···
 	seqcnt_check fail=monotonic_raw
 
 	/* All computations are done with left-shifted nsecs. */
-	lsl	x14, x14, x12
 	get_nsec_per_sec res=x9
 	lsl	x9, x9, x12
 
arch/arm64/kvm/hyp-init.S
···
 	tlbi	alle2
 	dsb	sy
 
-	mrs	x4, sctlr_el2
-	and	x4, x4, #SCTLR_ELx_EE	// preserve endianness of EL2
-	ldr	x5, =SCTLR_ELx_FLAGS
-	orr	x4, x4, x5
+	/*
+	 * Preserve all the RES1 bits while setting the default flags,
+	 * as well as the EE bit on BE. Drop the A flag since the compiler
+	 * is allowed to generate unaligned accesses.
+	 */
+	ldr	x4, =(SCTLR_EL2_RES1 | (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A))
+CPU_BE(	orr	x4, x4, #SCTLR_ELx_EE)
 	msr	sctlr_el2, x4
 	isb
 
arch/arm64/kvm/vgic-sys-reg-v3.c
···
 	 * Here set VMCR.CTLR in ICC_CTLR_EL1 layout.
 	 * The vgic_set_vmcr() will convert to ICH_VMCR layout.
 	 */
-	vmcr.ctlr = val & ICC_CTLR_EL1_CBPR_MASK;
-	vmcr.ctlr |= val & ICC_CTLR_EL1_EOImode_MASK;
+	vmcr.cbpr = (val & ICC_CTLR_EL1_CBPR_MASK) >> ICC_CTLR_EL1_CBPR_SHIFT;
+	vmcr.eoim = (val & ICC_CTLR_EL1_EOImode_MASK) >> ICC_CTLR_EL1_EOImode_SHIFT;
 	vgic_set_vmcr(vcpu, &vmcr);
 } else {
 	val = 0;
···
 	 * The VMCR.CTLR value is in ICC_CTLR_EL1 layout.
 	 * Extract it directly using ICC_CTLR_EL1 reg definitions.
 	 */
-	val |= vmcr.ctlr & ICC_CTLR_EL1_CBPR_MASK;
-	val |= vmcr.ctlr & ICC_CTLR_EL1_EOImode_MASK;
+	val |= (vmcr.cbpr << ICC_CTLR_EL1_CBPR_SHIFT) & ICC_CTLR_EL1_CBPR_MASK;
+	val |= (vmcr.eoim << ICC_CTLR_EL1_EOImode_SHIFT) & ICC_CTLR_EL1_EOImode_MASK;
 
 	p->regval = val;
 }
···
 	p->regval = 0;
 
 	vgic_get_vmcr(vcpu, &vmcr);
-	if (!((vmcr.ctlr & ICH_VMCR_CBPR_MASK) >> ICH_VMCR_CBPR_SHIFT)) {
+	if (!vmcr.cbpr) {
 		if (p->is_write) {
 			vmcr.abpr = (p->regval & ICC_BPR1_EL1_MASK) >>
 				ICC_BPR1_EL1_SHIFT;
arch/blackfin/include/asm/processor.h
···
 {
 }
 
-/*
- * Return saved PC of a blocked thread.
- */
-#define thread_saved_pc(tsk)	(tsk->thread.pc)
-
 unsigned long get_wchan(struct task_struct *p);
 
 #define	KSTK_EIP(tsk)	\
arch/c6x/include/asm/processor.h
···
 #define release_segments(mm)		do { } while (0)
 
 /*
- * saved PC of a blocked thread.
- */
-#define thread_saved_pc(tsk) (task_pt_regs(tsk)->pc)
-
-/*
  * saved kernel SP and DP of a blocked thread.
  */
 #ifdef _BIG_ENDIAN
arch/cris/arch-v10/kernel/process.c
···
 	while(1) /* waiting for RETRIBUTION! */ ;
 }
 
-/*
- * Return saved PC of a blocked thread.
- */
-unsigned long thread_saved_pc(struct task_struct *t)
-{
-	return task_pt_regs(t)->irp;
-}
-
 /* setup the child's kernel stack with a pt_regs and switch_stack on it.
  * it will be un-nested during _resume and _ret_from_sys_call when the
  * new thread is scheduled.
arch/cris/arch-v32/kernel/process.c
···
 }
 
 /*
- * Return saved PC of a blocked thread.
- */
-unsigned long thread_saved_pc(struct task_struct *t)
-{
-	return task_pt_regs(t)->erp;
-}
-
-/*
  * Setup the child's kernel stack with a pt_regs and call switch_stack() on it.
  * It will be unnested during _resume and _ret_from_sys_call when the new thread
  * is scheduled.
arch/cris/include/asm/processor.h
···
 
 #define KSTK_ESP(tsk)	((tsk) == current ? rdusp() : (tsk)->thread.usp)
 
-extern unsigned long thread_saved_pc(struct task_struct *tsk);
-
 /* Free all resources held by a thread. */
 static inline void release_thread(struct task_struct *dead_task)
 {
arch/frv/include/asm/processor.h
···
 #define release_segments(mm)		do { } while (0)
 #define forget_segments()		do { } while (0)
 
-/*
- * Return saved PC of a blocked thread.
- */
-extern unsigned long thread_saved_pc(struct task_struct *tsk);
-
 unsigned long get_wchan(struct task_struct *p);
 
 #define	KSTK_EIP(tsk)	((tsk)->thread.frame0->pc)
arch/frv/include/asm/timex.h
···
 #define vxtime_lock()		do {} while (0)
 #define vxtime_unlock()		do {} while (0)
 
+/* This attribute is used in include/linux/jiffies.h alongside with
+ * __cacheline_aligned_in_smp. It is assumed that __cacheline_aligned_in_smp
+ * for frv does not contain another section specification.
+ */
+#define __jiffy_arch_data	__attribute__((__section__(".data")))
+
 #endif
 
arch/frv/kernel/process.c
···
 	return 0;
 }
 
-unsigned long thread_saved_pc(struct task_struct *tsk)
-{
-	/* Check whether the thread is blocked in resume() */
-	if (in_sched_functions(tsk->thread.pc))
-		return ((unsigned long *)tsk->thread.fp)[2];
-	else
-		return tsk->thread.pc;
-}
-
 int elf_check_arch(const struct elf32_hdr *hdr)
 {
 	unsigned long hsr0 = __get_HSR(0);
arch/frv/mm/elf-fdpic.c
···
 		addr = PAGE_ALIGN(addr);
 		vma = find_vma(current->mm, addr);
 		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vma->vm_start))
+		    (!vma || addr + len <= vm_start_gap(vma)))
 			goto success;
 	}
 
arch/h8300/include/asm/processor.h
···
 {
 }
 
-/*
- * Return saved PC of a blocked thread.
- */
-unsigned long thread_saved_pc(struct task_struct *tsk);
 unsigned long get_wchan(struct task_struct *p);
 
 #define	KSTK_EIP(tsk)	\
arch/h8300/kernel/process.c
···
 	return 0;
 }
 
-unsigned long thread_saved_pc(struct task_struct *tsk)
-{
-	return ((struct pt_regs *)tsk->thread.esp0)->pc;
-}
-
 unsigned long get_wchan(struct task_struct *p)
 {
 	unsigned long fp, pc;
arch/hexagon/include/asm/processor.h
···
 /*  task_struct, defined elsewhere, is the "process descriptor" */
 struct task_struct;
 
-/*  this is defined in arch/process.c  */
-extern unsigned long thread_saved_pc(struct task_struct *tsk);
-
 extern void start_thread(struct pt_regs *, unsigned long, unsigned long);
 
 /*
arch/hexagon/kernel/process.c
···
 }
 
 /*
- * Return saved PC of a blocked thread
- */
-unsigned long thread_saved_pc(struct task_struct *tsk)
-{
-	return 0;
-}
-
-/*
  * Copy architecture-specific thread state
  */
 int copy_thread(unsigned long clone_flags, unsigned long usp,
arch/hexagon/mm/uaccess.c
···
 	long uncleared;
 
 	while (count > PAGE_SIZE) {
-		uncleared = __copy_to_user_hexagon(dest, &empty_zero_page,
-						PAGE_SIZE);
+		uncleared = raw_copy_to_user(dest, &empty_zero_page, PAGE_SIZE);
 		if (uncleared)
 			return count - (PAGE_SIZE - uncleared);
 		count -= PAGE_SIZE;
 		dest += PAGE_SIZE;
 	}
 	if (count)
-		count = __copy_to_user_hexagon(dest, &empty_zero_page, count);
+		count = raw_copy_to_user(dest, &empty_zero_page, count);
 
 	return count;
 }
arch/ia64/include/asm/processor.h
···
 }
 
 /*
- * Return saved PC of a blocked thread.
- * Note that the only way T can block is through a call to schedule() -> switch_to().
- */
-static inline unsigned long
-thread_saved_pc (struct task_struct *t)
-{
-	struct unw_frame_info info;
-	unsigned long ip;
-
-	unw_init_from_blocked_task(&info, t);
-	if (unw_unwind(&info) < 0)
-		return 0;
-	unw_get_ip(&info, &ip);
-	return ip;
-}
-
-/*
  * Get the current instruction/program counter value.
  */
 #define current_text_addr() \
arch/m32r/include/asm/processor.h
···
 extern void copy_segments(struct task_struct *p, struct mm_struct * mm);
 extern void release_segments(struct mm_struct * mm);
 
-extern unsigned long thread_saved_pc(struct task_struct *);
-
 /* Copy and release all segment info associated with a VM */
 #define copy_segments(p, mm)  do { } while (0)
 #define release_segments(mm)  do { } while (0)
arch/m32r/kernel/process.c
···
 
 #include <linux/err.h>
 
-/*
- * Return saved PC of a blocked thread.
- */
-unsigned long thread_saved_pc(struct task_struct *tsk)
-{
-	return tsk->thread.lr;
-}
-
 void (*pm_power_off)(void) = NULL;
 EXPORT_SYMBOL(pm_power_off);
 
arch/m68k/include/asm/processor.h
···
 {
 }
 
-extern unsigned long thread_saved_pc(struct task_struct *tsk);
-
 unsigned long get_wchan(struct task_struct *p);
 
 #define	KSTK_EIP(tsk)	\
arch/m68k/kernel/process.c
···
 asmlinkage void ret_from_fork(void);
 asmlinkage void ret_from_kernel_thread(void);
 
-
-/*
- * Return saved PC from a blocked thread
- */
-unsigned long thread_saved_pc(struct task_struct *tsk)
-{
-	struct switch_stack *sw = (struct switch_stack *)tsk->thread.ksp;
-	/* Check whether the thread is blocked in resume() */
-	if (in_sched_functions(sw->retpc))
-		return ((unsigned long *)sw->a6)[1];
-	else
-		return sw->retpc;
-}
-
 void arch_cpu_idle(void)
 {
 #if defined(MACH_ATARI_ONLY)
arch/microblaze/include/asm/processor.h
···
 {
 }
 
-extern unsigned long thread_saved_pc(struct task_struct *t);
-
 extern unsigned long get_wchan(struct task_struct *p);
 
 # define KSTK_EIP(tsk)	(0)
···
 static inline void release_thread(struct task_struct *dead_task)
 {
 }
-
-/* Return saved (kernel) PC of a blocked thread.  */
-#  define thread_saved_pc(tsk)	\
-	((tsk)->thread.regs ? (tsk)->thread.regs->r15 : 0)
 
 unsigned long get_wchan(struct task_struct *p);
 
arch/microblaze/kernel/process.c
···
 	return 0;
 }
 
-#ifndef CONFIG_MMU
-/*
- * Return saved PC of a blocked thread.
- */
-unsigned long thread_saved_pc(struct task_struct *tsk)
-{
-	struct cpu_context *ctx =
-		&(((struct thread_info *)(tsk->stack))->cpu_context);
-
-	/* Check whether the thread is blocked in resume() */
-	if (in_sched_functions(ctx->r15))
-		return (unsigned long)ctx->r15;
-	else
-		return ctx->r14;
-}
-#endif
-
 unsigned long get_wchan(struct task_struct *p)
 {
 /* TBD (used by procfs) */
arch/mips/kernel/branch.c
···
 			break;
 		}
 		/* Compact branch: BNEZC || JIALC */
-		if (insn.i_format.rs)
+		if (!insn.i_format.rs) {
+			/* JIALC: set $31/ra */
 			regs->regs[31] = epc + 4;
+		}
 		regs->cp0_epc += 8;
 		break;
 #endif
arch/mips/kernel/entry.S
···
 #include <asm/asm.h>
 #include <asm/asmmacro.h>
 #include <asm/compiler.h>
+#include <asm/irqflags.h>
 #include <asm/regdef.h>
 #include <asm/mipsregs.h>
 #include <asm/stackframe.h>
···
 	andi	t0, a2, _TIF_NEED_RESCHED # a2 is preloaded with TI_FLAGS
 	beqz	t0, work_notifysig
 work_resched:
+	TRACE_IRQS_OFF
 	jal	schedule
 
 	local_irq_disable		# make sure need_resched and
···
 	beqz	t0, work_pending	# trace bit set?
 	local_irq_enable		# could let syscall_trace_leave()
 					# call schedule() instead
+	TRACE_IRQS_ON
 	move	a0, sp
 	jal	syscall_trace_leave
 	b	resume_userspace
arch/mips/kernel/ftrace.c
···
 
 #endif
 
-/*
- * Check if the address is in kernel space
- *
- * Clone core_kernel_text() from kernel/extable.c, but doesn't call
- * init_kernel_text() for Ftrace doesn't trace functions in init sections.
- */
-static inline int in_kernel_space(unsigned long ip)
-{
-	if (ip >= (unsigned long)_stext &&
-	    ip <= (unsigned long)_etext)
-		return 1;
-	return 0;
-}
-
 #ifdef CONFIG_DYNAMIC_FTRACE
 
 #define JAL 0x0c000000		/* jump & link: ip --> ra, jump to target */
···
 	 * If ip is in kernel space, no long call, otherwise, long call is
 	 * needed.
 	 */
-	new = in_kernel_space(ip) ? INSN_NOP : INSN_B_1F;
+	new = core_kernel_text(ip) ? INSN_NOP : INSN_B_1F;
 #ifdef CONFIG_64BIT
 	return ftrace_modify_code(ip, new);
 #else
···
 	unsigned int new;
 	unsigned long ip = rec->ip;
 
-	new = in_kernel_space(ip) ? insn_jal_ftrace_caller : insn_la_mcount[0];
+	new = core_kernel_text(ip) ? insn_jal_ftrace_caller : insn_la_mcount[0];
 
 #ifdef CONFIG_64BIT
 	return ftrace_modify_code(ip, new);
 #else
-	return ftrace_modify_code_2r(ip, new, in_kernel_space(ip) ?
+	return ftrace_modify_code_2r(ip, new, core_kernel_text(ip) ?
 						INSN_NOP : insn_la_mcount[1]);
 #endif
 }
···
 	 * instruction "lui v1, hi_16bit_of_mcount"(offset is 24), but for
 	 * kernel, move after the instruction "move ra, at"(offset is 16)
 	 */
-	ip = self_ra - (in_kernel_space(self_ra) ? 16 : 24);
+	ip = self_ra - (core_kernel_text(self_ra) ? 16 : 24);
 
 	/*
 	 * search the text until finding the non-store instruction or "s{d,w}
···
 	 * entries configured through the tracing/set_graph_function interface.
 	 */
 
-	insns = in_kernel_space(self_ra) ? 2 : MCOUNT_OFFSET_INSNS + 1;
+	insns = core_kernel_text(self_ra) ? 2 : MCOUNT_OFFSET_INSNS + 1;
 	trace.func = self_ra - (MCOUNT_INSN_SIZE * insns);
 
 	/* Only trace if the calling function expects to */
arch/mips/kvm/tlb.c
···
 int kvm_mips_host_tlb_inv(struct kvm_vcpu *vcpu, unsigned long va,
 			  bool user, bool kernel)
 {
-	int idx_user, idx_kernel;
+	/*
+	 * Initialize idx_user and idx_kernel to workaround bogus
+	 * maybe-initialized warning when using GCC 6.
+	 */
+	int idx_user = 0, idx_kernel = 0;
 	unsigned long flags, old_entryhi;
 
 	local_irq_save(flags);
arch/mips/math-emu/dp_maddf.c
···
 		return ieee754dp_nanxcpt(z);
 	case IEEE754_CLASS_DNORM:
 		DPDNORMZ;
-	/* QNAN is handled separately below */
+	/* QNAN and ZERO cases are handled separately below */
 	}
 
 	switch (CLPAIR(xc, yc)) {
···
 			((rm << (DP_FBITS + 1 + 3 + 1)) != 0);
 	}
 	assert(rm & (DP_HIDDEN_BIT << 3));
+
+	if (zc == IEEE754_CLASS_ZERO)
+		return ieee754dp_format(rs, re, rm);
 
 	/* And now the addition */
 	assert(zm & DP_HIDDEN_BIT);
arch/mips/math-emu/sp_maddf.c
···
 		return ieee754sp_nanxcpt(z);
 	case IEEE754_CLASS_DNORM:
 		SPDNORMZ;
-	/* QNAN is handled separately below */
+	/* QNAN and ZERO cases are handled separately below */
 	}
 
 	switch (CLPAIR(xc, yc)) {
···
 			((rm << (SP_FBITS + 1 + 3 + 1)) != 0);
 	}
 	assert(rm & (SP_HIDDEN_BIT << 3));
+
+	if (zc == IEEE754_CLASS_ZERO)
+		return ieee754sp_format(rs, re, rm);
 
 	/* And now the addition */
 
arch/mips/mm/dma-default.c
···
  * systems and only the R10000 and R12000 are used in such systems, the
  * SGI IP28 Indigo² rsp. SGI IP32 aka O2.
  */
-static inline int cpu_needs_post_dma_flush(struct device *dev)
+static inline bool cpu_needs_post_dma_flush(struct device *dev)
 {
-	return !plat_device_is_coherent(dev) &&
-	       (boot_cpu_type() == CPU_R10000 ||
-		boot_cpu_type() == CPU_R12000 ||
-		boot_cpu_type() == CPU_BMIPS5000);
+	if (plat_device_is_coherent(dev))
+		return false;
+
+	switch (boot_cpu_type()) {
+	case CPU_R10000:
+	case CPU_R12000:
+	case CPU_BMIPS5000:
+		return true;
+
+	default:
+		/*
+		 * Presence of MAARs suggests that the CPU supports
+		 * speculatively prefetching data, and therefore requires
+		 * the post-DMA flush/invalidate.
+		 */
+		return cpu_has_maar;
+	}
 }
 
 static gfp_t massage_gfp_flags(const struct device *dev, gfp_t gfp)
arch/mips/mm/mmap.c
···
 
 	vma = find_vma(mm, addr);
 	if (TASK_SIZE - len >= addr &&
-	    (!vma || addr + len <= vma->vm_start))
+	    (!vma || addr + len <= vm_start_gap(vma)))
 		return addr;
 	}
 
arch/mn10300/include/asm/processor.h
···
 /* Free all resources held by a thread. */
 extern void release_thread(struct task_struct *);
 
-/*
- * Return saved PC of a blocked thread.
- */
-extern unsigned long thread_saved_pc(struct task_struct *tsk);
-
 unsigned long get_wchan(struct task_struct *p);
 
 #define task_pt_regs(task) ((task)->thread.uregs)
arch/mn10300/kernel/process.c
···
 #include "internal.h"
 
 /*
- * return saved PC of a blocked thread.
- */
-unsigned long thread_saved_pc(struct task_struct *tsk)
-{
-	return ((unsigned long *) tsk->thread.sp)[3];
-}
-
-/*
  * power off function, if any
  */
 void (*pm_power_off)(void);
arch/nios2/include/asm/processor.h
···
 {
 }
 
-/* Return saved PC of a blocked thread. */
-#define thread_saved_pc(tsk)	((tsk)->thread.kregs->ea)
-
 extern unsigned long get_wchan(struct task_struct *p);
 
 #define task_pt_regs(p) \
arch/openrisc/include/asm/processor.h
···
 void release_thread(struct task_struct *);
 unsigned long get_wchan(struct task_struct *p);
 
-/*
- * Return saved PC of a blocked thread. For now, this is the "user" PC
- */
-extern unsigned long thread_saved_pc(struct task_struct *t);
-
 #define init_stack      (init_thread_union.stack)
 
 #define cpu_relax()     barrier()
arch/parisc/include/asm/processor.h
···
 	.flags		= 0 \
 	}
 
-/*
- * Return saved PC of a blocked thread.  This is used by ps mostly.
- */
-
 struct task_struct;
-unsigned long thread_saved_pc(struct task_struct *t);
 void show_trace(struct task_struct *task, unsigned long *stack);
 
 /*
arch/parisc/kernel/sys_parisc.c
···
 		unsigned long len, unsigned long pgoff, unsigned long flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct vm_area_struct *vma, *prev;
 	unsigned long task_size = TASK_SIZE;
 	int do_color_align, last_mmap;
 	struct vm_unmapped_area_info info;
···
 		else
 			addr = PAGE_ALIGN(addr);
 
-		vma = find_vma(mm, addr);
+		vma = find_vma_prev(mm, addr, &prev);
 		if (task_size - len >= addr &&
-		    (!vma || addr + len <= vma->vm_start))
+		    (!vma || addr + len <= vm_start_gap(vma)) &&
+		    (!prev || addr >= vm_end_gap(prev)))
 			goto found_addr;
 	}
 
···
 			  const unsigned long len, const unsigned long pgoff,
 			  const unsigned long flags)
 {
-	struct vm_area_struct *vma;
+	struct vm_area_struct *vma, *prev;
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
 	int do_color_align, last_mmap;
···
 			addr = COLOR_ALIGN(addr, last_mmap, pgoff);
 		else
 			addr = PAGE_ALIGN(addr);
-		vma = find_vma(mm, addr);
+
+		vma = find_vma_prev(mm, addr, &prev);
 		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vma->vm_start))
+		    (!vma || addr + len <= vm_start_gap(vma)) &&
+		    (!prev || addr >= vm_end_gap(prev)))
 			goto found_addr;
 	}
 
arch/powerpc/Kconfig
···
 
 menu "Kernel options"
 
-config PPC_DT_CPU_FTRS
-	bool "Device-tree based CPU feature discovery & setup"
-	depends on PPC_BOOK3S_64
-	default n
-	help
-	  This enables code to use a new device tree binding for describing CPU
-	  compatibility and features. Saying Y here will attempt to use the new
-	  binding if the firmware provides it. Currently only the skiboot
-	  firmware provides this binding.
-	  If you're not sure say Y.
-
-config PPC_CPUFEATURES_ENABLE_UNKNOWN
-	bool "cpufeatures pass through unknown features to guest/userspace"
-	depends on PPC_DT_CPU_FTRS
-	default y
-
 config HIGHMEM
 	bool "High memory support"
 	depends on PPC32
···
 source "arch/powerpc/Kconfig.debug"
 
 source "security/Kconfig"
-
-config KEYS_COMPAT
-	bool
-	depends on COMPAT && KEYS
-	default y
 
 source "crypto/Kconfig"
 
arch/powerpc/include/asm/kprobes.h
···
 extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
 extern int kprobe_handler(struct pt_regs *regs);
 extern int kprobe_post_handler(struct pt_regs *regs);
+extern int is_current_kprobe_addr(unsigned long addr);
 #ifdef CONFIG_KPROBES_ON_FTRACE
 extern int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
 			   struct kprobe_ctlblk *kcb);
arch/powerpc/include/asm/processor.h
···
 #define TASK_SIZE_128TB (0x0000800000000000UL)
 #define TASK_SIZE_512TB (0x0002000000000000UL)
 
-#ifdef CONFIG_PPC_BOOK3S_64
+/*
+ * For now 512TB is only supported with book3s and 64K linux page size.
+ */
+#if defined(CONFIG_PPC_BOOK3S_64) && defined(CONFIG_PPC_64K_PAGES)
 /*
  * Max value currently used:
  */
-#define TASK_SIZE_USER64	TASK_SIZE_512TB
+#define TASK_SIZE_USER64		TASK_SIZE_512TB
+#define DEFAULT_MAP_WINDOW_USER64	TASK_SIZE_128TB
 #else
-#define TASK_SIZE_USER64	TASK_SIZE_64TB
+#define TASK_SIZE_USER64		TASK_SIZE_64TB
+#define DEFAULT_MAP_WINDOW_USER64	TASK_SIZE_64TB
 #endif
 
 /*
···
  * space during mmap's.
  */
 #define TASK_UNMAPPED_BASE_USER32 (PAGE_ALIGN(TASK_SIZE_USER32 / 4))
-#define TASK_UNMAPPED_BASE_USER64 (PAGE_ALIGN(TASK_SIZE_128TB / 4))
+#define TASK_UNMAPPED_BASE_USER64 (PAGE_ALIGN(DEFAULT_MAP_WINDOW_USER64 / 4))
 
 #define TASK_UNMAPPED_BASE ((is_32bit_task()) ? \
 		TASK_UNMAPPED_BASE_USER32 : TASK_UNMAPPED_BASE_USER64 )
···
  * with 128TB and conditionally enable upto 512TB
  */
 #ifdef CONFIG_PPC_BOOK3S_64
-#define DEFAULT_MAP_WINDOW	((is_32bit_task()) ? \
-				 TASK_SIZE_USER32 : TASK_SIZE_128TB)
+#define DEFAULT_MAP_WINDOW	((is_32bit_task()) ? \
+				 TASK_SIZE_USER32 : DEFAULT_MAP_WINDOW_USER64)
 #else
 #define DEFAULT_MAP_WINDOW	TASK_SIZE
 #endif
 
 #ifdef __powerpc64__
 
-#ifdef CONFIG_PPC_BOOK3S_64
-/* Limit stack to 128TB */
-#define STACK_TOP_USER64 TASK_SIZE_128TB
-#else
-#define STACK_TOP_USER64 TASK_SIZE_USER64
-#endif
-
+#define STACK_TOP_USER64 DEFAULT_MAP_WINDOW_USER64
 #define STACK_TOP_USER32 TASK_SIZE_USER32
 
 #define STACK_TOP (is_32bit_task() ? \
···
 	.fscr = FSCR_TAR | FSCR_EBB \
 }
 #endif
-
-/*
- * Return saved PC of a blocked thread. For now, this is the "user" PC
- */
-#define thread_saved_pc(tsk)    \
-	((tsk)->thread.regs? (tsk)->thread.regs->nip: 0)
 
 #define task_pt_regs(tsk)	((tsk)->thread.regs)
 
arch/powerpc/include/asm/topology.h
···
 extern int sysfs_add_device_to_node(struct device *dev, int nid);
 extern void sysfs_remove_device_from_node(struct device *dev, int nid);
 
+static inline int early_cpu_to_node(int cpu)
+{
+	int nid;
+
+	nid = numa_cpu_lookup_table[cpu];
+
+	/*
+	 * Fall back to node 0 if nid is unset (it should be, except bugs).
+	 * This allows callers to safely do NODE_DATA(early_cpu_to_node(cpu)).
+	 */
+	return (nid < 0) ? 0 : nid;
+}
 #else
+
+static inline int early_cpu_to_node(int cpu) { return 0; }
 
 static inline void dump_numa_cpu_topology(void) {}
 
arch/powerpc/include/asm/xive.h
···
  * store at 0 and some ESBs support doing a trigger via a
  * separate trigger page.
  */
-#define XIVE_ESB_GET		0x800
-#define XIVE_ESB_SET_PQ_00	0xc00
-#define XIVE_ESB_SET_PQ_01	0xd00
-#define XIVE_ESB_SET_PQ_10	0xe00
-#define XIVE_ESB_SET_PQ_11	0xf00
+#define XIVE_ESB_STORE_EOI	0x400 /* Store */
+#define XIVE_ESB_LOAD_EOI	0x000 /* Load */
+#define XIVE_ESB_GET		0x800 /* Load */
+#define XIVE_ESB_SET_PQ_00	0xc00 /* Load */
+#define XIVE_ESB_SET_PQ_01	0xd00 /* Load */
+#define XIVE_ESB_SET_PQ_10	0xe00 /* Load */
+#define XIVE_ESB_SET_PQ_11	0xf00 /* Load */
 
 #define XIVE_ESB_VAL_P		0x2
 #define XIVE_ESB_VAL_Q		0x1
arch/powerpc/kernel/dt_cpu_ftrs.c
···
 #include <linux/export.h>
 #include <linux/init.h>
 #include <linux/jump_label.h>
+#include <linux/libfdt.h>
 #include <linux/memblock.h>
 #include <linux/printk.h>
 #include <linux/sched.h>
···
 	{"processor-control-facility", feat_enable_dbell, CPU_FTR_DBELL},
 	{"processor-control-facility-v3", feat_enable_dbell, CPU_FTR_DBELL},
 	{"processor-utilization-of-resources-register", feat_enable_purr, 0},
-	{"subcore", feat_enable, CPU_FTR_SUBCORE},
 	{"no-execute", feat_enable, 0},
 	{"strong-access-ordering", feat_enable, CPU_FTR_SAO},
 	{"cache-inhibited-large-page", feat_enable_large_ci, 0},
···
 	{"wait-v3", feat_enable, 0},
 };
 
-/* XXX: how to configure this? Default + boot time? */
-#ifdef CONFIG_PPC_CPUFEATURES_ENABLE_UNKNOWN
-#define CPU_FEATURE_ENABLE_UNKNOWN 1
-#else
-#define CPU_FEATURE_ENABLE_UNKNOWN 0
-#endif
+static bool __initdata using_dt_cpu_ftrs;
+static bool __initdata enable_unknown = true;
+
+static int __init dt_cpu_ftrs_parse(char *str)
+{
+	if (!str)
+		return 0;
+
+	if (!strcmp(str, "off"))
+		using_dt_cpu_ftrs = false;
+	else if (!strcmp(str, "known"))
+		enable_unknown = false;
+	else
+		return 1;
+
+	return 0;
+}
+early_param("dt_cpu_ftrs", dt_cpu_ftrs_parse);
 
 static void __init cpufeatures_setup_start(u32 isa)
 {
···
 		}
 	}
 
-	if (!known && CPU_FEATURE_ENABLE_UNKNOWN) {
+	if (!known && enable_unknown) {
 		if (!feat_try_enable_unknown(f)) {
 			pr_info("not enabling: %s (unknown and unsupported by kernel)\n",
 				f->name);
···
 		cur_cpu_spec->cpu_features, cur_cpu_spec->mmu_features);
 }
 
+static int __init disabled_on_cmdline(void)
+{
+	unsigned long root, chosen;
+	const char *p;
+
+	root = of_get_flat_dt_root();
+	chosen = of_get_flat_dt_subnode_by_name(root, "chosen");
+	if (chosen == -FDT_ERR_NOTFOUND)
+		return false;
+
+	p = of_get_flat_dt_prop(chosen, "bootargs", NULL);
+	if (!p)
+		return false;
+
+	if (strstr(p, "dt_cpu_ftrs=off"))
+		return true;
+
+	return false;
+}
+
 static int __init fdt_find_cpu_features(unsigned long node, const char *uname,
 					int depth, void *data)
 {
···
 	return 0;
 }
 
-static bool __initdata using_dt_cpu_ftrs = false;
-
 bool __init dt_cpu_ftrs_in_use(void)
 {
 	return using_dt_cpu_ftrs;
···
 
 bool __init dt_cpu_ftrs_init(void *fdt)
 {
+	using_dt_cpu_ftrs = false;
+
 	/* Setup and verify the FDT, if it fails we just bail */
 	if (!early_init_dt_verify(fdt))
 		return false;
 
 	if (!of_scan_flat_dt(fdt_find_cpu_features, NULL))
+		return false;
+
+	if (disabled_on_cmdline())
 		return false;
 
 	cpufeatures_setup_cpu();
···
 
 void __init dt_cpu_ftrs_scan(void)
 {
+	if (!using_dt_cpu_ftrs)
+		return;
+
 	of_scan_flat_dt(dt_cpu_ftrs_scan_callback, NULL);
 }
arch/powerpc/kernel/exceptions-64s.S
···
 	.balign	IFETCH_ALIGN_BYTES
 do_hash_page:
 #ifdef CONFIG_PPC_STD_MMU_64
-	andis.	r0,r4,0xa410		/* weird error? */
+	andis.	r0,r4,0xa450		/* weird error? */
 	bne-	handle_page_fault	/* if not, try to insert a HPTE */
-	andis.	r0,r4,DSISR_DABRMATCH@h
-	bne-	handle_dabr_fault
 	CURRENT_THREAD_INFO(r11, r1)
 	lwz	r0,TI_PREEMPT(r11)	/* If we're in an "NMI" */
 	andis.	r0,r0,NMI_MASK@h	/* (i.e. an irq when soft-disabled) */
···
 
 	/* Error */
 	blt-	13f
+
+	/* Reload DSISR into r4 for the DABR check below */
+	ld	r4,_DSISR(r1)
 #endif /* CONFIG_PPC_STD_MMU_64 */
 
 /* Here we have a page fault that hash_page can't handle. */
 handle_page_fault:
-11:	ld	r4,_DAR(r1)
+11:	andis.	r0,r4,DSISR_DABRMATCH@h
+	bne-	handle_dabr_fault
+	ld	r4,_DAR(r1)
 	ld	r5,_DSISR(r1)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	do_page_fault
arch/powerpc/kernel/kprobes.c
···
 
 struct kretprobe_blackpoint kretprobe_blacklist[] = {{NULL, NULL}};
 
+int is_current_kprobe_addr(unsigned long addr)
+{
+	struct kprobe *p = kprobe_running();
+	return (p && (unsigned long)p->addr == addr) ? 1 : 0;
+}
+
 bool arch_within_kprobe_blacklist(unsigned long addr)
 {
 	return  (addr >= (unsigned long)__kprobes_text_start &&
···
 	regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc);
 #endif
 
+	/*
+	 * jprobes use jprobe_return() which skips the normal return
+	 * path of the function, and this messes up the accounting of the
+	 * function graph tracer.
+	 *
+	 * Pause function graph tracing while performing the jprobe function.
+	 */
+	pause_graph_tracing();
+
 	return 1;
 }
 NOKPROBE_SYMBOL(setjmp_pre_handler);
···
 	 * saved regs...
 	 */
 	memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs));
+	/* It's OK to start function graph tracing again */
+	unpause_graph_tracing();
 	preempt_enable_no_resched();
 	return 1;
 }
···616616#endif617617618618/*619619+ * Emergency stacks are used for a range of things, from asynchronous620620+ * NMIs (system reset, machine check) to synchronous, process context.621621+ * We set preempt_count to zero, even though that isn't necessarily correct. To622622+ * get the right value we'd need to copy it from the previous thread_info, but623623+ * doing that might fault causing more problems.624624+ * TODO: what to do with accounting?625625+ */626626+static void emerg_stack_init_thread_info(struct thread_info *ti, int cpu)627627+{628628+ ti->task = NULL;629629+ ti->cpu = cpu;630630+ ti->preempt_count = 0;631631+ ti->local_flags = 0;632632+ ti->flags = 0;633633+ klp_init_thread_info(ti);634634+}635635+636636+/*619637 * Stack space used when we detect a bad kernel stack pointer, and620638 * early in SMP boots before relocation is enabled. Exclusive emergency621639 * stack for machine checks.···651633 * Since we use these as temporary stacks during secondary CPU652634 * bringup, we need to get at them in real mode. This means they653635 * must also be within the RMO region.636636+ *637637+ * The IRQ stacks allocated elsewhere in this file are zeroed and638638+ * initialized in kernel/irq.c. These are initialized here in order639639+ * to have emergency stacks available as early as possible.654640 */655641 limit = min(safe_stack_limit(), ppc64_rma_size);656642657643 for_each_possible_cpu(i) {658644 struct thread_info *ti;659645 ti = __va(memblock_alloc_base(THREAD_SIZE, THREAD_SIZE, limit));660660- klp_init_thread_info(ti);646646+ memset(ti, 0, THREAD_SIZE);647647+ emerg_stack_init_thread_info(ti, i);661648 paca[i].emergency_sp = (void *)ti + THREAD_SIZE;662649663650#ifdef CONFIG_PPC_BOOK3S_64664651 /* emergency stack for NMI exception handling. */665652 ti = __va(memblock_alloc_base(THREAD_SIZE, THREAD_SIZE, limit));666666- klp_init_thread_info(ti);653653+ memset(ti, 0, THREAD_SIZE);654654+ emerg_stack_init_thread_info(ti, i);667655 paca[i].nmi_emergency_sp = (void *)ti + THREAD_SIZE;668656669657 /* emergency stack for machine check exception handling. */670658 ti = __va(memblock_alloc_base(THREAD_SIZE, THREAD_SIZE, limit));671671- klp_init_thread_info(ti);659659+ memset(ti, 0, THREAD_SIZE);660660+ emerg_stack_init_thread_info(ti, i);672661 paca[i].mc_emergency_sp = (void *)ti + THREAD_SIZE;673662#endif674663 }···686661687662static void * __init pcpu_fc_alloc(unsigned int cpu, size_t size, size_t align)688663{689689- return __alloc_bootmem_node(NODE_DATA(cpu_to_node(cpu)), size, align,664664+ return __alloc_bootmem_node(NODE_DATA(early_cpu_to_node(cpu)), size, align,690665 __pa(MAX_DMA_ADDRESS));691666}692667···697672698673static int pcpu_cpu_distance(unsigned int from, unsigned int to)699674{700700- if (cpu_to_node(from) == cpu_to_node(to))675675+ if (early_cpu_to_node(from) == early_cpu_to_node(to))701676 return LOCAL_DISTANCE;702677 else703678 return REMOTE_DISTANCE;
+46-13
arch/powerpc/kernel/trace/ftrace_64_mprofile.S
···4545 stdu r1,-SWITCH_FRAME_SIZE(r1)46464747 /* Save all gprs to pt_regs */4848- SAVE_8GPRS(0,r1)4949- SAVE_8GPRS(8,r1)5050- SAVE_8GPRS(16,r1)5151- SAVE_8GPRS(24,r1)4848+ SAVE_GPR(0, r1)4949+ SAVE_10GPRS(2, r1)5050+ SAVE_10GPRS(12, r1)5151+ SAVE_10GPRS(22, r1)5252+5353+ /* Save previous stack pointer (r1) */5454+ addi r8, r1, SWITCH_FRAME_SIZE5555+ std r8, GPR1(r1)52565357 /* Load special regs for save below */5458 mfmsr r8···9995 bl ftrace_stub10096 nop10197102102- /* Load ctr with the possibly modified NIP */103103- ld r3, _NIP(r1)104104- mtctr r39898+ /* Load the possibly modified NIP */9999+ ld r15, _NIP(r1)100100+105101#ifdef CONFIG_LIVEPATCH106106- cmpd r14,r3 /* has NIP been altered? */102102+ cmpd r14, r15 /* has NIP been altered? */107103#endif108104105105+#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE)106106+ /* NIP has not been altered, skip over further checks */107107+ beq 1f108108+109109+ /* Check if there is an active kprobe on us */110110+ subi r3, r14, 4111111+ bl is_current_kprobe_addr112112+ nop113113+114114+ /*115115+ * If r3 == 1, then this is a kprobe/jprobe.116116+ * else, this is livepatched function.117117+ *118118+ * The conditional branch for livepatch_handler below will use the119119+ * result of this comparison. For kprobe/jprobe, we just need to branch to120120+ * the new NIP, not call livepatch_handler. The branch below is bne, so we121121+ * want CR0[EQ] to be true if this is a kprobe/jprobe. Which means we want122122+ * CR0[EQ] = (r3 == 1).123123+ */124124+ cmpdi r3, 1125125+1:126126+#endif127127+128128+ /* Load CTR with the possibly modified NIP */129129+ mtctr r15130130+109131 /* Restore gprs */110110- REST_8GPRS(0,r1)111111- REST_8GPRS(8,r1)112112- REST_8GPRS(16,r1)113113- REST_8GPRS(24,r1)132132+ REST_GPR(0,r1)133133+ REST_10GPRS(2,r1)134134+ REST_10GPRS(12,r1)135135+ REST_10GPRS(22,r1)114136115137 /* Restore possibly modified LR */116138 ld r0, _LINK(r1)···149119 addi r1, r1, SWITCH_FRAME_SIZE150120151121#ifdef CONFIG_LIVEPATCH152152- /* Based on the cmpd above, if the NIP was altered handle livepatch */122122+ /*123123+ * Based on the cmpd or cmpdi above, if the NIP was altered and we're124124+ * not on a kprobe/jprobe, then handle livepatch.125125+ */153126 bne- livepatch_handler154127#endif155128
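The CR0 juggling above reduces to a three-way decision on exit from ftrace_caller. A C restatement (is_current_kprobe_addr() is the helper added in kprobes.c above; saved_nip is the entry-time copy kept in r14, new_nip the value the handlers left in pt_regs):

#include <stdbool.h>

extern int is_current_kprobe_addr(unsigned long addr);

/* Divert to livepatch_handler only if a handler rewrote NIP and the
 * rewrite was not a kprobe firing on the traced location, which sits
 * 4 bytes below the saved NIP (the subi r3, r14, 4 above). */
static bool take_livepatch_path(unsigned long saved_nip,
				unsigned long new_nip)
{
	if (new_nip == saved_nip)
		return false;		/* NIP untouched: plain return */
	if (is_current_kprobe_addr(saved_nip - 4))
		return false;		/* kprobe/jprobe: just branch to new NIP */
	return true;			/* livepatched function */
}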
+51
arch/powerpc/kvm/book3s_hv.c
···14861486 r = set_vpa(vcpu, &vcpu->arch.dtl, addr, len);14871487 break;14881488 case KVM_REG_PPC_TB_OFFSET:14891489+ /*14901490+ * POWER9 DD1 has an erratum where writing TBU40 causes14911491+ * the timebase to lose ticks. So we don't let the14921492+ * timebase offset be changed on P9 DD1. (It is14931493+ * initialized to zero.)14941494+ */14951495+ if (cpu_has_feature(CPU_FTR_POWER9_DD1))14961496+ break;14891497 /* round up to multiple of 2^24 */14901498 vcpu->arch.vcore->tb_offset =14911499 ALIGN(set_reg_val(id, *val), 1UL << 24);···29152907{29162908 int r;29172909 int srcu_idx;29102910+ unsigned long ebb_regs[3] = {}; /* shut up GCC */29112911+ unsigned long user_tar = 0;29122912+ unsigned int user_vrsave;2918291329192914 if (!vcpu->arch.sane) {29202915 run->exit_reason = KVM_EXIT_INTERNAL_ERROR;29212916 return -EINVAL;29222917 }29182918+29192919+ /*29202920+ * Don't allow entry with a suspended transaction, because29212921+ * the guest entry/exit code will lose it.29222922+ * If the guest has TM enabled, save away their TM-related SPRs29232923+ * (they will get restored by the TM unavailable interrupt).29242924+ */29252925+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM29262926+ if (cpu_has_feature(CPU_FTR_TM) && current->thread.regs &&29272927+ (current->thread.regs->msr & MSR_TM)) {29282928+ if (MSR_TM_ACTIVE(current->thread.regs->msr)) {29292929+ run->exit_reason = KVM_EXIT_FAIL_ENTRY;29302930+ run->fail_entry.hardware_entry_failure_reason = 0;29312931+ return -EINVAL;29322932+ }29332933+ current->thread.tm_tfhar = mfspr(SPRN_TFHAR);29342934+ current->thread.tm_tfiar = mfspr(SPRN_TFIAR);29352935+ current->thread.tm_texasr = mfspr(SPRN_TEXASR);29362936+ current->thread.regs->msr &= ~MSR_TM;29372937+ }29382938+#endif2923293929242940 kvmppc_core_prepare_to_enter(vcpu);29252941···29652933 }2966293429672935 flush_all_to_thread(current);29362936+29372937+ /* Save userspace EBB and other register values */29382938+ if (cpu_has_feature(CPU_FTR_ARCH_207S)) {29392939+ ebb_regs[0] = mfspr(SPRN_EBBHR);29402940+ ebb_regs[1] = mfspr(SPRN_EBBRR);29412941+ ebb_regs[2] = mfspr(SPRN_BESCR);29422942+ user_tar = mfspr(SPRN_TAR);29432943+ }29442944+ user_vrsave = mfspr(SPRN_VRSAVE);2968294529692946 vcpu->arch.wqp = &vcpu->arch.vcore->wq;29702947 vcpu->arch.pgdir = current->mm->pgd;···30002959 r = kvmppc_xics_rm_complete(vcpu, 0);30012960 }30022961 } while (is_kvmppc_resume_guest(r));29622962+29632963+ /* Restore userspace EBB and other register values */29642964+ if (cpu_has_feature(CPU_FTR_ARCH_207S)) {29652965+ mtspr(SPRN_EBBHR, ebb_regs[0]);29662966+ mtspr(SPRN_EBBRR, ebb_regs[1]);29672967+ mtspr(SPRN_BESCR, ebb_regs[2]);29682968+ mtspr(SPRN_TAR, user_tar);29692969+ mtspr(SPRN_FSCR, current->thread.fscr);29702970+ }29712971+ mtspr(SPRN_VRSAVE, user_vrsave);3003297230042973 out:30052974 vcpu->arch.state = KVMPPC_VCPU_NOTREADY;
+11-1
arch/powerpc/kvm/book3s_hv_interrupts.S
···121121 * Put whatever is in the decrementer into the122122 * hypervisor decrementer.123123 */124124+BEGIN_FTR_SECTION125125+ ld r5, HSTATE_KVM_VCORE(r13)126126+ ld r6, VCORE_KVM(r5)127127+ ld r9, KVM_HOST_LPCR(r6)128128+ andis. r9, r9, LPCR_LD@h129129+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)124130 mfspr r8,SPRN_DEC125131 mftb r7126126- mtspr SPRN_HDEC,r8132132+BEGIN_FTR_SECTION133133+ /* On POWER9, don't sign-extend if host LPCR[LD] bit is set */134134+ bne 32f135135+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)127136 extsw r8,r8137137+32: mtspr SPRN_HDEC,r8128138 add r8,r8,r7129139 std r8,HSTATE_DECEXP(r13)130140
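With large-decrementer mode enabled (LPCR[LD]) a POWER9 HDEC is wider than 32 bits, so the unconditional extsw would corrupt it; the hunk keeps the sign-extension only when LD is clear. A C model of the widening (the LPCR_LD value is an assumption matching the andis. LPCR_LD@h test above):

#include <stdint.h>

#define LPCR_LD	(1UL << 17)	/* assumed: ISA bit 46 of the LPCR image */

/* How a raw SPRN_HDEC read should be widened to 64 bits. */
static int64_t hdec_to_s64(uint64_t raw, uint64_t host_lpcr, int is_p9)
{
	if (is_p9 && (host_lpcr & LPCR_LD))
		return (int64_t)raw;		/* large decrementer: use as-is */
	return (int64_t)(int32_t)raw;		/* classic 32-bit HDEC: sign-extend */
}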
+56-19
arch/powerpc/kvm/book3s_hv_rmhandlers.S
···3232#include <asm/opal.h>3333#include <asm/xive-regs.h>34343535+/* Sign-extend HDEC if not on POWER9 */3636+#define EXTEND_HDEC(reg) \3737+BEGIN_FTR_SECTION; \3838+ extsw reg, reg; \3939+END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)4040+3541#define VCPU_GPRS_TM(reg) (((reg) * ULONG_SIZE) + VCPU_GPR_TM)36423743/* Values in HSTATE_NAPPING(r13) */3844#define NAPPING_CEDE 13945#define NAPPING_NOVCPU 24646+4747+/* Stack frame offsets for kvmppc_hv_entry */4848+#define SFS 1444949+#define STACK_SLOT_TRAP (SFS-4)5050+#define STACK_SLOT_TID (SFS-16)5151+#define STACK_SLOT_PSSCR (SFS-24)5252+#define STACK_SLOT_PID (SFS-32)5353+#define STACK_SLOT_IAMR (SFS-40)5454+#define STACK_SLOT_CIABR (SFS-48)5555+#define STACK_SLOT_DAWR (SFS-56)5656+#define STACK_SLOT_DAWRX (SFS-64)40574158/*4259 * Call kvmppc_hv_entry in real mode.···231214kvmppc_primary_no_guest:232215 /* We handle this much like a ceded vcpu */233216 /* put the HDEC into the DEC, since HDEC interrupts don't wake us */217217+ /* HDEC may be larger than DEC for arch >= v3.00, but since the */218218+ /* HDEC value came from DEC in the first place, it will fit */234219 mfspr r3, SPRN_HDEC235220 mtspr SPRN_DEC, r3236221 /*···314295315296 /* See if our timeslice has expired (HDEC is negative) */316297 mfspr r0, SPRN_HDEC298298+ EXTEND_HDEC(r0)317299 li r12, BOOK3S_INTERRUPT_HV_DECREMENTER318318- cmpwi r0, 0300300+ cmpdi r0, 0319301 blt kvm_novcpu_exit320302321303 /* Got an IPI but other vcpus aren't yet exiting, must be a latecomer */···339319 bl kvmhv_accumulate_time340320#endif34132113: mr r3, r12342342- stw r12, 112-4(r1)322322+ stw r12, STACK_SLOT_TRAP(r1)343323 bl kvmhv_commence_exit344324 nop345345- lwz r12, 112-4(r1)325325+ lwz r12, STACK_SLOT_TRAP(r1)346326 b kvmhv_switch_to_host347327348328/*···410390 lbz r4, HSTATE_PTID(r13)411391 cmpwi r4, 0412392 bne 63f413413- lis r6, 0x7fff414414- ori r6, r6, 0xffff393393+ LOAD_REG_ADDR(r6, decrementer_max)394394+ ld r6, 0(r6)415395 mtspr SPRN_HDEC, r6416396 /* and set per-LPAR registers, if doing dynamic micro-threading */417397 ld r6, HSTATE_SPLIT_MODE(r13)···565545 * *566546 *****************************************************************************/567547568568-/* Stack frame offsets */569569-#define STACK_SLOT_TID (112-16)570570-#define STACK_SLOT_PSSCR (112-24)571571-#define STACK_SLOT_PID (112-32)572572-573548.global kvmppc_hv_entry574549kvmppc_hv_entry:575550···580565 */581566 mflr r0582567 std r0, PPC_LR_STKOFF(r1)583583- stdu r1, -112(r1)568568+ stdu r1, -SFS(r1)584569585570 /* Save R1 in the PACA */586571 std r1, HSTATE_HOST_R1(r13)···764749 mfspr r5, SPRN_TIDR765750 mfspr r6, SPRN_PSSCR766751 mfspr r7, SPRN_PID752752+ mfspr r8, SPRN_IAMR767753 std r5, STACK_SLOT_TID(r1)768754 std r6, STACK_SLOT_PSSCR(r1)769755 std r7, STACK_SLOT_PID(r1)756756+ std r8, STACK_SLOT_IAMR(r1)770757END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)758758+BEGIN_FTR_SECTION759759+ mfspr r5, SPRN_CIABR760760+ mfspr r6, SPRN_DAWR761761+ mfspr r7, SPRN_DAWRX762762+ std r5, STACK_SLOT_CIABR(r1)763763+ std r6, STACK_SLOT_DAWR(r1)764764+ std r7, STACK_SLOT_DAWRX(r1)765765+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)771766772767BEGIN_FTR_SECTION773768 /* Set partition DABR */···993968994969 /* Check if HDEC expires soon */995970 mfspr r3, SPRN_HDEC996996- cmpwi r3, 512 /* 1 microsecond */971971+ EXTEND_HDEC(r3)972972+ cmpdi r3, 512 /* 1 microsecond */997973 blt hdec_soon998974999975#ifdef CONFIG_KVM_XICS···15311505 * set by the guest could disrupt the host.15321506 */15331507 li r0, 015341534- mtspr SPRN_IAMR, r015351535- mtspr 
SPRN_CIABR, r015361536- mtspr SPRN_DAWRX, r015081508+ mtspr SPRN_PSPB, r015371509 mtspr SPRN_WORT, r015381510BEGIN_FTR_SECTION15111511+ mtspr SPRN_IAMR, r015391512 mtspr SPRN_TCSCR, r015401513 /* Set MMCRS to 1<<31 to freeze and disable the SPMC counters */15411514 li r0, 1···15501525 std r6,VCPU_UAMOR(r9)15511526 li r6,015521527 mtspr SPRN_AMR,r615281528+ mtspr SPRN_UAMOR, r61553152915541530 /* Switch DSCR back to host value */15551531 mfspr r8, SPRN_DSCR···1696167016971671 /* Restore host values of some registers */16981672BEGIN_FTR_SECTION16731673+ ld r5, STACK_SLOT_CIABR(r1)16741674+ ld r6, STACK_SLOT_DAWR(r1)16751675+ ld r7, STACK_SLOT_DAWRX(r1)16761676+ mtspr SPRN_CIABR, r516771677+ mtspr SPRN_DAWR, r616781678+ mtspr SPRN_DAWRX, r716791679+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)16801680+BEGIN_FTR_SECTION16991681 ld r5, STACK_SLOT_TID(r1)17001682 ld r6, STACK_SLOT_PSSCR(r1)17011683 ld r7, STACK_SLOT_PID(r1)16841684+ ld r8, STACK_SLOT_IAMR(r1)17021685 mtspr SPRN_TIDR, r517031686 mtspr SPRN_PSSCR, r617041687 mtspr SPRN_PID, r716881688+ mtspr SPRN_IAMR, r817051689END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)17061690BEGIN_FTR_SECTION17071691 PPC_INVALIDATE_ERAT···18551819 li r0, KVM_GUEST_MODE_NONE18561820 stb r0, HSTATE_IN_GUEST(r13)1857182118581858- ld r0, 112+PPC_LR_STKOFF(r1)18591859- addi r1, r1, 11218221822+ ld r0, SFS+PPC_LR_STKOFF(r1)18231823+ addi r1, r1, SFS18601824 mtlr r018611825 blr18621826···24022366 mfspr r3, SPRN_DEC24032367 mfspr r4, SPRN_HDEC24042368 mftb r524052405- cmpw r3, r423692369+ extsw r3, r323702370+ EXTEND_HDEC(r4)23712371+ cmpd r3, r424062372 ble 67f24072373 mtspr SPRN_DEC, r42408237467:24092375 /* save expiry time of guest decrementer */24102410- extsw r3, r324112376 add r3, r3, r524122377 ld r4, HSTATE_KVM_VCPU(r13)24132378 ld r5, HSTATE_KVM_VCORE(r13)
+2-2
arch/powerpc/kvm/book3s_xive_template.c
···6969{7070 /* If the XIVE supports the new "store EOI facility, use it */7171 if (xd->flags & XIVE_IRQ_FLAG_STORE_EOI)7272- __x_writeq(0, __x_eoi_page(xd));7272+ __x_writeq(0, __x_eoi_page(xd) + XIVE_ESB_STORE_EOI);7373 else if (hw_irq && xd->flags & XIVE_IRQ_FLAG_EOI_FW) {7474 opal_int_eoi(hw_irq);7575 } else {···8989 * properly.9090 */9191 if (xd->flags & XIVE_IRQ_FLAG_LSI)9292- __x_readq(__x_eoi_page(xd));9292+ __x_readq(__x_eoi_page(xd) + XIVE_ESB_LOAD_EOI);9393 else {9494 eoi_val = GLUE(X_PFX,esb_load)(xd, XIVE_ESB_SET_PQ_00);9595
+1-1
arch/powerpc/mm/hugetlbpage-radix.c
···6868 addr = ALIGN(addr, huge_page_size(h));6969 vma = find_vma(mm, addr);7070 if (mm->task_size - len >= addr &&7171- (!vma || addr + len <= vma->vm_start))7171+ (!vma || addr + len <= vm_start_gap(vma)))7272 return addr;7373 }7474 /*
+2-2
arch/powerpc/mm/mmap.c
···112112 addr = PAGE_ALIGN(addr);113113 vma = find_vma(mm, addr);114114 if (mm->task_size - len >= addr && addr >= mmap_min_addr &&115115- (!vma || addr + len <= vma->vm_start))115115+ (!vma || addr + len <= vm_start_gap(vma)))116116 return addr;117117 }118118···157157 addr = PAGE_ALIGN(addr);158158 vma = find_vma(mm, addr);159159 if (mm->task_size - len >= addr && addr >= mmap_min_addr &&160160- (!vma || addr + len <= vma->vm_start))160160+ (!vma || addr + len <= vm_start_gap(vma)))161161 return addr;162162 }163163
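Every vma->vm_start to vm_start_gap(vma) conversion in this series (powerpc, s390, sh, sparc, tile and the hugetlb variants) makes the arch get_unmapped_area() paths honour the enlarged stack guard gap controlled by the stack_guard_gap= parameter documented earlier. The helper the series adds to include/linux/mm.h looks roughly like this:

extern unsigned long stack_guard_gap;	/* defaults to 256UL << PAGE_SHIFT */

/* Lowest address a neighbouring mapping may occupy: vm_start minus
 * the guard gap when the VMA is a downward-growing stack. */
static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
{
	unsigned long vm_start = vma->vm_start;

	if (vma->vm_flags & VM_GROWSDOWN) {
		vm_start -= stack_guard_gap;
		if (vm_start > vma->vm_start)	/* clamp on underflow */
			vm_start = 0;
	}
	return vm_start;
}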
+1-1
arch/powerpc/mm/mmu_context_book3s64.c
···9999 * mm->context.addr_limit. Default to max task size so that we copy the100100 * default values to paca which will help us to handle slb miss early.101101 */102102- mm->context.addr_limit = TASK_SIZE_128TB;102102+ mm->context.addr_limit = DEFAULT_MAP_WINDOW_USER64;103103104104 /*105105 * The old code would re-promote on fork, we don't do that when using
···59596060 In case of doubt, say Y61616262+config PPC_DT_CPU_FTRS6363+ bool "Device-tree based CPU feature discovery & setup"6464+ depends on PPC_BOOK3S_646565+ default y6666+ help6767+ This enables code to use a new device tree binding for describing CPU6868+ compatibility and features. Saying Y here will attempt to use the new6969+ binding if the firmware provides it. Currently only the skiboot7070+ firmware provides this binding.7171+ If you're not sure say Y.7272+6273config UDBG_RTAS_CONSOLE6374 bool "RTAS based debug console"6475 depends on PPC_RTAS
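arch/powerpc/platforms/powernv/npu-dma.c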
···7575 if (WARN_ON(!gpdev))7676 return NULL;77777878- if (WARN_ON(!gpdev->dev.of_node))7878+ /* Not all PCI devices have device-tree nodes */7979+ if (!gpdev->dev.of_node)7980 return NULL;80818182 /* Get assoicated PCI device */···449448 return mmio_atsd_reg;450449}451450452452-static int mmio_invalidate_pid(struct npu *npu, unsigned long pid)451451+static int mmio_invalidate_pid(struct npu *npu, unsigned long pid, bool flush)453452{454453 unsigned long launch;455454···465464 /* PID */466465 launch |= pid << PPC_BITLSHIFT(38);467466467467+ /* No flush */468468+ launch |= !flush << PPC_BITLSHIFT(39);469469+468470 /* Invalidating the entire process doesn't use a va */469471 return mmio_launch_invalidate(npu, launch, 0);470472}471473472474static int mmio_invalidate_va(struct npu *npu, unsigned long va,473473- unsigned long pid)475475+ unsigned long pid, bool flush)474476{475477 unsigned long launch;476478···489485 /* PID */490486 launch |= pid << PPC_BITLSHIFT(38);491487488488+ /* No flush */489489+ launch |= !flush << PPC_BITLSHIFT(39);490490+492491 return mmio_launch_invalidate(npu, launch, va);493492}494493495494#define mn_to_npu_context(x) container_of(x, struct npu_context, mn)495495+496496+struct mmio_atsd_reg {497497+ struct npu *npu;498498+ int reg;499499+};500500+501501+static void mmio_invalidate_wait(502502+ struct mmio_atsd_reg mmio_atsd_reg[NV_MAX_NPUS], bool flush)503503+{504504+ struct npu *npu;505505+ int i, reg;506506+507507+ /* Wait for all invalidations to complete */508508+ for (i = 0; i <= max_npu2_index; i++) {509509+ if (mmio_atsd_reg[i].reg < 0)510510+ continue;511511+512512+ /* Wait for completion */513513+ npu = mmio_atsd_reg[i].npu;514514+ reg = mmio_atsd_reg[i].reg;515515+ while (__raw_readq(npu->mmio_atsd_regs[reg] + XTS_ATSD_STAT))516516+ cpu_relax();517517+518518+ put_mmio_atsd_reg(npu, reg);519519+520520+ /*521521+ * The GPU requires two flush ATSDs to ensure all entries have522522+ * been flushed. 
We use PID 0 as it will never be used for a523523+ * process on the GPU.524524+ */525525+ if (flush)526526+ mmio_invalidate_pid(npu, 0, true);527527+ }528528+}496529497530/*498531 * Invalidate either a single address or an entire PID depending on499532 * the value of va.500533 */501534static void mmio_invalidate(struct npu_context *npu_context, int va,502502- unsigned long address)535535+ unsigned long address, bool flush)503536{504504- int i, j, reg;537537+ int i, j;505538 struct npu *npu;506539 struct pnv_phb *nphb;507540 struct pci_dev *npdev;508508- struct {509509- struct npu *npu;510510- int reg;511511- } mmio_atsd_reg[NV_MAX_NPUS];541541+ struct mmio_atsd_reg mmio_atsd_reg[NV_MAX_NPUS];512542 unsigned long pid = npu_context->mm->context.id;513543514544 /*···562524563525 if (va)564526 mmio_atsd_reg[i].reg =565565- mmio_invalidate_va(npu, address, pid);527527+ mmio_invalidate_va(npu, address, pid,528528+ flush);566529 else567530 mmio_atsd_reg[i].reg =568568- mmio_invalidate_pid(npu, pid);531531+ mmio_invalidate_pid(npu, pid, flush);569532570533 /*571534 * The NPU hardware forwards the shootdown to all GPUs···582543 */583544 flush_tlb_mm(npu_context->mm);584545585585- /* Wait for all invalidations to complete */586586- for (i = 0; i <= max_npu2_index; i++) {587587- if (mmio_atsd_reg[i].reg < 0)588588- continue;589589-590590- /* Wait for completion */591591- npu = mmio_atsd_reg[i].npu;592592- reg = mmio_atsd_reg[i].reg;593593- while (__raw_readq(npu->mmio_atsd_regs[reg] + XTS_ATSD_STAT))594594- cpu_relax();595595- put_mmio_atsd_reg(npu, reg);596596- }546546+ mmio_invalidate_wait(mmio_atsd_reg, flush);547547+ if (flush)548548+ /* Wait for the flush to complete */549549+ mmio_invalidate_wait(mmio_atsd_reg, false);597550}598551599552static void pnv_npu2_mn_release(struct mmu_notifier *mn,···601570 * There should be no more translation requests for this PID, but we602571 * need to ensure any entries for it are removed from the TLB.603572 */604604- mmio_invalidate(npu_context, 0, 0);573573+ mmio_invalidate(npu_context, 0, 0, true);605574}606575607576static void pnv_npu2_mn_change_pte(struct mmu_notifier *mn,···611580{612581 struct npu_context *npu_context = mn_to_npu_context(mn);613582614614- mmio_invalidate(npu_context, 1, address);583583+ mmio_invalidate(npu_context, 1, address, true);615584}616585617586static void pnv_npu2_mn_invalidate_page(struct mmu_notifier *mn,···620589{621590 struct npu_context *npu_context = mn_to_npu_context(mn);622591623623- mmio_invalidate(npu_context, 1, address);592592+ mmio_invalidate(npu_context, 1, address, true);624593}625594626595static void pnv_npu2_mn_invalidate_range(struct mmu_notifier *mn,···630599 struct npu_context *npu_context = mn_to_npu_context(mn);631600 unsigned long address;632601633633- for (address = start; address <= end; address += PAGE_SIZE)634634- mmio_invalidate(npu_context, 1, address);602602+ for (address = start; address < end; address += PAGE_SIZE)603603+ mmio_invalidate(npu_context, 1, address, false);604604+605605+ /* Do the flush only on the final addess == end */606606+ mmio_invalidate(npu_context, 1, address, true);635607}636608637609static const struct mmu_notifier_ops nv_nmmu_notifier_ops = {···684650 /* No nvlink associated with this GPU device */685651 return ERR_PTR(-ENODEV);686652687687- if (!mm) {688688- /* kernel thread contexts are not supported */653653+ if (!mm || mm->context.id == 0) {654654+ /*655655+ * Kernel thread contexts are not supported and context id 0 is656656+ * reserved on the GPU.657657+ */689658 return 
ERR_PTR(-EINVAL);690659 }691660
+7-1
arch/powerpc/platforms/powernv/subcore.c
···407407408408static int subcore_init(void)409409{410410- if (!cpu_has_feature(CPU_FTR_SUBCORE))410410+ unsigned pvr_ver;411411+412412+ pvr_ver = PVR_VER(mfspr(SPRN_PVR));413413+414414+ if (pvr_ver != PVR_POWER8 &&415415+ pvr_ver != PVR_POWER8E &&416416+ pvr_ver != PVR_POWER8NVL)411417 return 0;412418413419 /*
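With the "subcore" entry gone from the dt-cpu-ftrs table above there is no CPU_FTR_SUBCORE bit left to test, so subcore_init() now keys off the processor family directly. The gate is equivalent to this sketch (the PVR version values are assumptions mirroring asm/reg.h):

#include <stdbool.h>

/* Assumed PVR version fields: POWER8E 0x004b, POWER8NVL 0x004c,
 * POWER8 0x004d. */
#define PVR_POWER8E	0x004b
#define PVR_POWER8NVL	0x004c
#define PVR_POWER8	0x004d

/* Split-core is a POWER8-family facility. */
static bool pvr_supports_subcores(unsigned int pvr_ver)
{
	return pvr_ver == PVR_POWER8 ||
	       pvr_ver == PVR_POWER8E ||
	       pvr_ver == PVR_POWER8NVL;
}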
+2
arch/powerpc/platforms/pseries/hotplug-memory.c
···124124 for (i = 0; i < num_lmbs; i++) {125125 lmbs[i].base_addr = be64_to_cpu(lmbs[i].base_addr);126126 lmbs[i].drc_index = be32_to_cpu(lmbs[i].drc_index);127127+ lmbs[i].aa_index = be32_to_cpu(lmbs[i].aa_index);127128 lmbs[i].flags = be32_to_cpu(lmbs[i].flags);128129 }129130···148147 for (i = 0; i < num_lmbs; i++) {149148 lmbs[i].base_addr = cpu_to_be64(lmbs[i].base_addr);150149 lmbs[i].drc_index = cpu_to_be32(lmbs[i].drc_index);150150+ lmbs[i].aa_index = cpu_to_be32(lmbs[i].aa_index);151151 lmbs[i].flags = cpu_to_be32(lmbs[i].flags);152152 }153153
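arch/powerpc/sysdev/xive/common.c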
···297297{298298 /* If the XIVE supports the new "store EOI facility, use it */299299 if (xd->flags & XIVE_IRQ_FLAG_STORE_EOI)300300- out_be64(xd->eoi_mmio, 0);300300+ out_be64(xd->eoi_mmio + XIVE_ESB_STORE_EOI, 0);301301 else if (hw_irq && xd->flags & XIVE_IRQ_FLAG_EOI_FW) {302302 /*303303 * The FW told us to call it. This happens for some
-3
arch/s390/Kconfig
···363363config SYSVIPC_COMPAT364364 def_bool y if COMPAT && SYSVIPC365365366366-config KEYS_COMPAT367367- def_bool y if COMPAT && KEYS368368-369366config SMP370367 def_bool y371368 prompt "Symmetric multi-processing support"
+33-6
arch/s390/configs/default_defconfig
···3030CONFIG_SCHED_AUTOGROUP=y3131CONFIG_BLK_DEV_INITRD=y3232CONFIG_EXPERT=y3333+# CONFIG_SYSFS_SYSCALL is not set3334CONFIG_BPF_SYSCALL=y3435CONFIG_USERFAULTFD=y3536# CONFIG_COMPAT_BRK is not set···4544CONFIG_MODULE_FORCE_UNLOAD=y4645CONFIG_MODVERSIONS=y4746CONFIG_MODULE_SRCVERSION_ALL=y4747+CONFIG_BLK_DEV_INTEGRITY=y4848CONFIG_BLK_DEV_THROTTLING=y4949+CONFIG_BLK_WBT=y5050+CONFIG_BLK_WBT_SQ=y4951CONFIG_PARTITION_ADVANCED=y5052CONFIG_IBM_PARTITION=y5153CONFIG_BSD_DISKLABEL=y···9490CONFIG_UNIX_DIAG=m9591CONFIG_XFRM_USER=m9692CONFIG_NET_KEY=m9393+CONFIG_SMC=m9494+CONFIG_SMC_DIAG=m9795CONFIG_INET=y9896CONFIG_IP_MULTICAST=y9997CONFIG_IP_ADVANCED_ROUTER=y···365359CONFIG_NET_ACT_SKBEDIT=m366360CONFIG_NET_ACT_CSUM=m367361CONFIG_DNS_RESOLVER=y362362+CONFIG_NETLINK_DIAG=m368363CONFIG_CGROUP_NET_PRIO=y369364CONFIG_BPF_JIT=y370365CONFIG_NET_PKTGEN=m···374367CONFIG_DMA_CMA=y375368CONFIG_CMA_SIZE_MBYTES=0376369CONFIG_CONNECTOR=y370370+CONFIG_ZRAM=m377371CONFIG_BLK_DEV_LOOP=m378372CONFIG_BLK_DEV_CRYPTOLOOP=m373373+CONFIG_BLK_DEV_DRBD=m379374CONFIG_BLK_DEV_NBD=m380375CONFIG_BLK_DEV_OSD=m381376CONFIG_BLK_DEV_RAM=y382377CONFIG_BLK_DEV_RAM_SIZE=32768383383-CONFIG_CDROM_PKTCDVD=m384384-CONFIG_ATA_OVER_ETH=m378378+CONFIG_BLK_DEV_RAM_DAX=y385379CONFIG_VIRTIO_BLK=y380380+CONFIG_BLK_DEV_RBD=m386381CONFIG_ENCLOSURE_SERVICES=m382382+CONFIG_GENWQE=m387383CONFIG_RAID_ATTRS=m388384CONFIG_SCSI=y389385CONFIG_BLK_DEV_SD=y···452442# CONFIG_NET_VENDOR_INTEL is not set453443# CONFIG_NET_VENDOR_MARVELL is not set454444CONFIG_MLX4_EN=m445445+CONFIG_MLX5_CORE=m446446+CONFIG_MLX5_CORE_EN=y455447# CONFIG_NET_VENDOR_NATSEMI is not set456448CONFIG_PPP=m457449CONFIG_PPP_BSDCOMP=m···464452CONFIG_PPPOL2TP=m465453CONFIG_PPP_ASYNC=m466454CONFIG_PPP_SYNC_TTY=m467467-# CONFIG_INPUT_MOUSEDEV_PSAUX is not set468455# CONFIG_INPUT_KEYBOARD is not set469456# CONFIG_INPUT_MOUSE is not set470457# CONFIG_SERIO is not 
set···482471CONFIG_INFINIBAND=m483472CONFIG_INFINIBAND_USER_ACCESS=m484473CONFIG_MLX4_INFINIBAND=m474474+CONFIG_MLX5_INFINIBAND=m485475CONFIG_VIRTIO_BALLOON=m486476CONFIG_EXT4_FS=y487477CONFIG_EXT4_FS_POSIX_ACL=y···499487CONFIG_XFS_RT=y500488CONFIG_XFS_DEBUG=y501489CONFIG_GFS2_FS=m490490+CONFIG_GFS2_FS_LOCKING_DLM=y502491CONFIG_OCFS2_FS=m503492CONFIG_BTRFS_FS=y504493CONFIG_BTRFS_FS_POSIX_ACL=y494494+CONFIG_BTRFS_DEBUG=y505495CONFIG_NILFS2_FS=m496496+CONFIG_FS_DAX=y497497+CONFIG_EXPORTFS_BLOCK_OPS=y506498CONFIG_FANOTIFY=y499499+CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y507500CONFIG_QUOTA_NETLINK_INTERFACE=y501501+CONFIG_QUOTA_DEBUG=y508502CONFIG_QFMT_V1=m509503CONFIG_QFMT_V2=m510504CONFIG_AUTOFS4_FS=m···576558CONFIG_DEBUG_SECTION_MISMATCH=y577559CONFIG_MAGIC_SYSRQ=y578560CONFIG_DEBUG_PAGEALLOC=y561561+CONFIG_DEBUG_RODATA_TEST=y579562CONFIG_DEBUG_OBJECTS=y580563CONFIG_DEBUG_OBJECTS_SELFTEST=y581564CONFIG_DEBUG_OBJECTS_FREE=y···599580CONFIG_WQ_WATCHDOG=y600581CONFIG_PANIC_ON_OOPS=y601582CONFIG_DEBUG_TIMEKEEPING=y602602-CONFIG_TIMER_STATS=y603583CONFIG_DEBUG_RT_MUTEXES=y604584CONFIG_DEBUG_WW_MUTEX_SLOWPATH=y605585CONFIG_PROVE_LOCKING=y···613595CONFIG_RCU_CPU_STALL_TIMEOUT=300614596CONFIG_NOTIFIER_ERROR_INJECTION=m615597CONFIG_PM_NOTIFIER_ERROR_INJECT=m598598+CONFIG_NETDEV_NOTIFIER_ERROR_INJECT=m616599CONFIG_FAULT_INJECTION=y617600CONFIG_FAILSLAB=y618601CONFIG_FAIL_PAGE_ALLOC=y···635616CONFIG_TRACE_ENUM_MAP_FILE=y636617CONFIG_LKDTM=m637618CONFIG_TEST_LIST_SORT=y619619+CONFIG_TEST_SORT=y638620CONFIG_KPROBES_SANITY_TEST=y639621CONFIG_RBTREE_TEST=y640622CONFIG_INTERVAL_TREE_TEST=m641623CONFIG_PERCPU_TEST=m642624CONFIG_ATOMIC64_SELFTEST=y643643-CONFIG_TEST_STRING_HELPERS=y644644-CONFIG_TEST_KSTRTOX=y645625CONFIG_DMA_API_DEBUG=y646626CONFIG_TEST_BPF=m647627CONFIG_BUG_ON_DATA_CORRUPTION=y···648630CONFIG_ENCRYPTED_KEYS=m649631CONFIG_SECURITY=y650632CONFIG_SECURITY_NETWORK=y633633+CONFIG_HARDENED_USERCOPY=y651634CONFIG_SECURITY_SELINUX=y652635CONFIG_SECURITY_SELINUX_BOOTPARAM=y653636CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=0···659640CONFIG_CRYPTO_DH=m660641CONFIG_CRYPTO_ECDH=m661642CONFIG_CRYPTO_USER=m643643+CONFIG_CRYPTO_PCRYPT=m662644CONFIG_CRYPTO_CRYPTD=m645645+CONFIG_CRYPTO_MCRYPTD=m663646CONFIG_CRYPTO_TEST=m664647CONFIG_CRYPTO_CCM=m665648CONFIG_CRYPTO_GCM=m···669648CONFIG_CRYPTO_LRW=m670649CONFIG_CRYPTO_PCBC=m671650CONFIG_CRYPTO_KEYWRAP=m651651+CONFIG_CRYPTO_CMAC=m672652CONFIG_CRYPTO_XCBC=m673653CONFIG_CRYPTO_VMAC=m674654CONFIG_CRYPTO_CRC32=m···679657CONFIG_CRYPTO_RMD256=m680658CONFIG_CRYPTO_RMD320=m681659CONFIG_CRYPTO_SHA512=m660660+CONFIG_CRYPTO_SHA3=m682661CONFIG_CRYPTO_TGR192=m683662CONFIG_CRYPTO_WP512=m663663+CONFIG_CRYPTO_AES_TI=m684664CONFIG_CRYPTO_ANUBIS=m685665CONFIG_CRYPTO_BLOWFISH=m686666CONFIG_CRYPTO_CAMELLIA=m···698674CONFIG_CRYPTO_842=m699675CONFIG_CRYPTO_LZ4=m700676CONFIG_CRYPTO_LZ4HC=m677677+CONFIG_CRYPTO_ANSI_CPRNG=m701678CONFIG_CRYPTO_USER_API_HASH=m702679CONFIG_CRYPTO_USER_API_SKCIPHER=m703680CONFIG_CRYPTO_USER_API_RNG=m···710685CONFIG_CRYPTO_SHA512_S390=m711686CONFIG_CRYPTO_DES_S390=m712687CONFIG_CRYPTO_AES_S390=m688688+CONFIG_CRYPTO_PAES_S390=m713689CONFIG_CRYPTO_GHASH_S390=m714690CONFIG_CRYPTO_CRC32_S390=y715691CONFIG_ASYMMETRIC_KEY_TYPE=y···718692CONFIG_X509_CERTIFICATE_PARSER=m719693CONFIG_CRC7=m720694CONFIG_CRC8=m695695+CONFIG_RANDOM32_SELFTEST=y721696CONFIG_CORDIC=m722697CONFIG_CMM=m723698CONFIG_APPLDATA_BASE=y
+24-4
arch/s390/configs/gcov_defconfig
···3131CONFIG_SCHED_AUTOGROUP=y3232CONFIG_BLK_DEV_INITRD=y3333CONFIG_EXPERT=y3434+# CONFIG_SYSFS_SYSCALL is not set3435CONFIG_BPF_SYSCALL=y3536CONFIG_USERFAULTFD=y3637# CONFIG_COMPAT_BRK is not set···4746CONFIG_MODULE_FORCE_UNLOAD=y4847CONFIG_MODVERSIONS=y4948CONFIG_MODULE_SRCVERSION_ALL=y4949+CONFIG_BLK_DEV_INTEGRITY=y5050CONFIG_BLK_DEV_THROTTLING=y5151+CONFIG_BLK_WBT=y5252+CONFIG_BLK_WBT_SQ=y5153CONFIG_PARTITION_ADVANCED=y5254CONFIG_IBM_PARTITION=y5355CONFIG_BSD_DISKLABEL=y···9288CONFIG_UNIX_DIAG=m9389CONFIG_XFRM_USER=m9490CONFIG_NET_KEY=m9191+CONFIG_SMC=m9292+CONFIG_SMC_DIAG=m9593CONFIG_INET=y9694CONFIG_IP_MULTICAST=y9795CONFIG_IP_ADVANCED_ROUTER=y···362356CONFIG_NET_ACT_SKBEDIT=m363357CONFIG_NET_ACT_CSUM=m364358CONFIG_DNS_RESOLVER=y359359+CONFIG_NETLINK_DIAG=m365360CONFIG_CGROUP_NET_PRIO=y366361CONFIG_BPF_JIT=y367362CONFIG_NET_PKTGEN=m···371364CONFIG_DMA_CMA=y372365CONFIG_CMA_SIZE_MBYTES=0373366CONFIG_CONNECTOR=y367367+CONFIG_ZRAM=m374368CONFIG_BLK_DEV_LOOP=m375369CONFIG_BLK_DEV_CRYPTOLOOP=m370370+CONFIG_BLK_DEV_DRBD=m376371CONFIG_BLK_DEV_NBD=m377372CONFIG_BLK_DEV_OSD=m378373CONFIG_BLK_DEV_RAM=y379374CONFIG_BLK_DEV_RAM_SIZE=32768380380-CONFIG_CDROM_PKTCDVD=m381381-CONFIG_ATA_OVER_ETH=m375375+CONFIG_BLK_DEV_RAM_DAX=y382376CONFIG_VIRTIO_BLK=y383377CONFIG_ENCLOSURE_SERVICES=m378378+CONFIG_GENWQE=m384379CONFIG_RAID_ATTRS=m385380CONFIG_SCSI=y386381CONFIG_BLK_DEV_SD=y···448439# CONFIG_NET_VENDOR_INTEL is not set449440# CONFIG_NET_VENDOR_MARVELL is not set450441CONFIG_MLX4_EN=m442442+CONFIG_MLX5_CORE=m443443+CONFIG_MLX5_CORE_EN=y451444# CONFIG_NET_VENDOR_NATSEMI is not set452445CONFIG_PPP=m453446CONFIG_PPP_BSDCOMP=m···460449CONFIG_PPPOL2TP=m461450CONFIG_PPP_ASYNC=m462451CONFIG_PPP_SYNC_TTY=m463463-# CONFIG_INPUT_MOUSEDEV_PSAUX is not set464452# CONFIG_INPUT_KEYBOARD is not set465453# CONFIG_INPUT_MOUSE is not set466454# CONFIG_SERIO is not 
set···478468CONFIG_INFINIBAND=m479469CONFIG_INFINIBAND_USER_ACCESS=m480470CONFIG_MLX4_INFINIBAND=m471471+CONFIG_MLX5_INFINIBAND=m481472CONFIG_VIRTIO_BALLOON=m482473CONFIG_EXT4_FS=y483474CONFIG_EXT4_FS_POSIX_ACL=y···494483CONFIG_XFS_POSIX_ACL=y495484CONFIG_XFS_RT=y496485CONFIG_GFS2_FS=m486486+CONFIG_GFS2_FS_LOCKING_DLM=y497487CONFIG_OCFS2_FS=m498488CONFIG_BTRFS_FS=y499489CONFIG_BTRFS_FS_POSIX_ACL=y500490CONFIG_NILFS2_FS=m491491+CONFIG_FS_DAX=y492492+CONFIG_EXPORTFS_BLOCK_OPS=y501493CONFIG_FANOTIFY=y494494+CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y502495CONFIG_QUOTA_NETLINK_INTERFACE=y503496CONFIG_QFMT_V1=m504497CONFIG_QFMT_V2=m···568553CONFIG_MAGIC_SYSRQ=y569554CONFIG_DEBUG_MEMORY_INIT=y570555CONFIG_PANIC_ON_OOPS=y571571-CONFIG_TIMER_STATS=y572556CONFIG_RCU_TORTURE_TEST=m573557CONFIG_RCU_CPU_STALL_TIMEOUT=60574558CONFIG_LATENCYTOP=y···590576CONFIG_ENCRYPTED_KEYS=m591577CONFIG_SECURITY=y592578CONFIG_SECURITY_NETWORK=y579579+CONFIG_HARDENED_USERCOPY=y593580CONFIG_SECURITY_SELINUX=y594581CONFIG_SECURITY_SELINUX_BOOTPARAM=y595582CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=0···614599CONFIG_CRYPTO_LRW=m615600CONFIG_CRYPTO_PCBC=m616601CONFIG_CRYPTO_KEYWRAP=m602602+CONFIG_CRYPTO_CMAC=m617603CONFIG_CRYPTO_XCBC=m618604CONFIG_CRYPTO_VMAC=m619605CONFIG_CRYPTO_CRC32=m···627611CONFIG_CRYPTO_SHA3=m628612CONFIG_CRYPTO_TGR192=m629613CONFIG_CRYPTO_WP512=m614614+CONFIG_CRYPTO_AES_TI=m630615CONFIG_CRYPTO_ANUBIS=m631616CONFIG_CRYPTO_BLOWFISH=m632617CONFIG_CRYPTO_CAMELLIA=m···643626CONFIG_CRYPTO_842=m644627CONFIG_CRYPTO_LZ4=m645628CONFIG_CRYPTO_LZ4HC=m629629+CONFIG_CRYPTO_ANSI_CPRNG=m646630CONFIG_CRYPTO_USER_API_HASH=m647631CONFIG_CRYPTO_USER_API_SKCIPHER=m648632CONFIG_CRYPTO_USER_API_RNG=m649633CONFIG_CRYPTO_USER_API_AEAD=m650634CONFIG_ZCRYPT=m635635+CONFIG_PKEY=m651636CONFIG_CRYPTO_SHA1_S390=m652637CONFIG_CRYPTO_SHA256_S390=m653638CONFIG_CRYPTO_SHA512_S390=m654639CONFIG_CRYPTO_DES_S390=m655640CONFIG_CRYPTO_AES_S390=m641641+CONFIG_CRYPTO_PAES_S390=m656642CONFIG_CRYPTO_GHASH_S390=m657643CONFIG_CRYPTO_CRC32_S390=y658644CONFIG_CRC7=m
+23-4
arch/s390/configs/performance_defconfig
···3131CONFIG_SCHED_AUTOGROUP=y3232CONFIG_BLK_DEV_INITRD=y3333CONFIG_EXPERT=y3434+# CONFIG_SYSFS_SYSCALL is not set3435CONFIG_BPF_SYSCALL=y3536CONFIG_USERFAULTFD=y3637# CONFIG_COMPAT_BRK is not set···4544CONFIG_MODULE_FORCE_UNLOAD=y4645CONFIG_MODVERSIONS=y4746CONFIG_MODULE_SRCVERSION_ALL=y4747+CONFIG_BLK_DEV_INTEGRITY=y4848CONFIG_BLK_DEV_THROTTLING=y4949+CONFIG_BLK_WBT=y5050+CONFIG_BLK_WBT_SQ=y4951CONFIG_PARTITION_ADVANCED=y5052CONFIG_IBM_PARTITION=y5153CONFIG_BSD_DISKLABEL=y···9086CONFIG_UNIX_DIAG=m9187CONFIG_XFRM_USER=m9288CONFIG_NET_KEY=m8989+CONFIG_SMC=m9090+CONFIG_SMC_DIAG=m9391CONFIG_INET=y9492CONFIG_IP_MULTICAST=y9593CONFIG_IP_ADVANCED_ROUTER=y···360354CONFIG_NET_ACT_SKBEDIT=m361355CONFIG_NET_ACT_CSUM=m362356CONFIG_DNS_RESOLVER=y357357+CONFIG_NETLINK_DIAG=m363358CONFIG_CGROUP_NET_PRIO=y364359CONFIG_BPF_JIT=y365360CONFIG_NET_PKTGEN=m···369362CONFIG_DMA_CMA=y370363CONFIG_CMA_SIZE_MBYTES=0371364CONFIG_CONNECTOR=y365365+CONFIG_ZRAM=m372366CONFIG_BLK_DEV_LOOP=m373367CONFIG_BLK_DEV_CRYPTOLOOP=m368368+CONFIG_BLK_DEV_DRBD=m374369CONFIG_BLK_DEV_NBD=m375370CONFIG_BLK_DEV_OSD=m376371CONFIG_BLK_DEV_RAM=y377372CONFIG_BLK_DEV_RAM_SIZE=32768378378-CONFIG_CDROM_PKTCDVD=m379379-CONFIG_ATA_OVER_ETH=m373373+CONFIG_BLK_DEV_RAM_DAX=y380374CONFIG_VIRTIO_BLK=y381375CONFIG_ENCLOSURE_SERVICES=m376376+CONFIG_GENWQE=m382377CONFIG_RAID_ATTRS=m383378CONFIG_SCSI=y384379CONFIG_BLK_DEV_SD=y···446437# CONFIG_NET_VENDOR_INTEL is not set447438# CONFIG_NET_VENDOR_MARVELL is not set448439CONFIG_MLX4_EN=m440440+CONFIG_MLX5_CORE=m441441+CONFIG_MLX5_CORE_EN=y449442# CONFIG_NET_VENDOR_NATSEMI is not set450443CONFIG_PPP=m451444CONFIG_PPP_BSDCOMP=m···458447CONFIG_PPPOL2TP=m459448CONFIG_PPP_ASYNC=m460449CONFIG_PPP_SYNC_TTY=m461461-# CONFIG_INPUT_MOUSEDEV_PSAUX is not set462450# CONFIG_INPUT_KEYBOARD is not set463451# CONFIG_INPUT_MOUSE is not set464452# CONFIG_SERIO is not 
set···476466CONFIG_INFINIBAND=m477467CONFIG_INFINIBAND_USER_ACCESS=m478468CONFIG_MLX4_INFINIBAND=m469469+CONFIG_MLX5_INFINIBAND=m479470CONFIG_VIRTIO_BALLOON=m480471CONFIG_EXT4_FS=y481472CONFIG_EXT4_FS_POSIX_ACL=y···492481CONFIG_XFS_POSIX_ACL=y493482CONFIG_XFS_RT=y494483CONFIG_GFS2_FS=m484484+CONFIG_GFS2_FS_LOCKING_DLM=y495485CONFIG_OCFS2_FS=m496486CONFIG_BTRFS_FS=y497487CONFIG_BTRFS_FS_POSIX_ACL=y498488CONFIG_NILFS2_FS=m489489+CONFIG_FS_DAX=y490490+CONFIG_EXPORTFS_BLOCK_OPS=y499491CONFIG_FANOTIFY=y492492+CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y500493CONFIG_QUOTA_NETLINK_INTERFACE=y501494CONFIG_QFMT_V1=m502495CONFIG_QFMT_V2=m···566551CONFIG_MAGIC_SYSRQ=y567552CONFIG_DEBUG_MEMORY_INIT=y568553CONFIG_PANIC_ON_OOPS=y569569-CONFIG_TIMER_STATS=y570554CONFIG_RCU_TORTURE_TEST=m571555CONFIG_RCU_CPU_STALL_TIMEOUT=60572556CONFIG_LATENCYTOP=y···588574CONFIG_ENCRYPTED_KEYS=m589575CONFIG_SECURITY=y590576CONFIG_SECURITY_NETWORK=y577577+CONFIG_HARDENED_USERCOPY=y591578CONFIG_SECURITY_SELINUX=y592579CONFIG_SECURITY_SELINUX_BOOTPARAM=y593580CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=0···612597CONFIG_CRYPTO_LRW=m613598CONFIG_CRYPTO_PCBC=m614599CONFIG_CRYPTO_KEYWRAP=m600600+CONFIG_CRYPTO_CMAC=m615601CONFIG_CRYPTO_XCBC=m616602CONFIG_CRYPTO_VMAC=m617603CONFIG_CRYPTO_CRC32=m···625609CONFIG_CRYPTO_SHA3=m626610CONFIG_CRYPTO_TGR192=m627611CONFIG_CRYPTO_WP512=m612612+CONFIG_CRYPTO_AES_TI=m628613CONFIG_CRYPTO_ANUBIS=m629614CONFIG_CRYPTO_BLOWFISH=m630615CONFIG_CRYPTO_CAMELLIA=m···641624CONFIG_CRYPTO_842=m642625CONFIG_CRYPTO_LZ4=m643626CONFIG_CRYPTO_LZ4HC=m627627+CONFIG_CRYPTO_ANSI_CPRNG=m644628CONFIG_CRYPTO_USER_API_HASH=m645629CONFIG_CRYPTO_USER_API_SKCIPHER=m646630CONFIG_CRYPTO_USER_API_RNG=m···653635CONFIG_CRYPTO_SHA512_S390=m654636CONFIG_CRYPTO_DES_S390=m655637CONFIG_CRYPTO_AES_S390=m638638+CONFIG_CRYPTO_PAES_S390=m656639CONFIG_CRYPTO_GHASH_S390=m657640CONFIG_CRYPTO_CRC32_S390=y658641CONFIG_CRC7=m
+4-2
arch/s390/configs/zfcpdump_defconfig
···1212CONFIG_NR_CPUS=21313# CONFIG_HOTPLUG_CPU is not set1414CONFIG_HZ_100=y1515+# CONFIG_ARCH_RANDOM is not set1516# CONFIG_COMPACTION is not set1617# CONFIG_MIGRATION is not set1818+# CONFIG_BOUNCE is not set1719# CONFIG_CHECK_STACK is not set1820# CONFIG_CHSC_SCH is not set1921# CONFIG_SCM_BUS is not set···3836CONFIG_SCSI_LOGGING=y3937CONFIG_SCSI_FC_ATTRS=y4038CONFIG_ZFCP=y4141-# CONFIG_INPUT_MOUSEDEV_PSAUX is not set4239# CONFIG_INPUT_KEYBOARD is not set4340# CONFIG_INPUT_MOUSE is not set4441# CONFIG_SERIO is not set4542# CONFIG_HVC_IUCV is not set4343+# CONFIG_HW_RANDOM_S390 is not set4644CONFIG_RAW_DRIVER=y4745# CONFIG_SCLP_ASYNC is not set4846# CONFIG_HMC_DRV is not set···5654# CONFIG_INOTIFY_USER is not set5755CONFIG_CONFIGFS_FS=y5856# CONFIG_MISC_FILESYSTEMS is not set5757+# CONFIG_NETWORK_FILESYSTEMS is not set5958CONFIG_PRINTK_TIME=y6059CONFIG_DEBUG_INFO=y6161-CONFIG_DEBUG_FS=y6260CONFIG_DEBUG_KERNEL=y6361CONFIG_PANIC_ON_OOPS=y6462# CONFIG_SCHED_DEBUG is not set
+3-5
arch/s390/defconfig
···2828CONFIG_USER_NS=y2929CONFIG_BLK_DEV_INITRD=y3030CONFIG_EXPERT=y3131+# CONFIG_SYSFS_SYSCALL is not set3132CONFIG_BPF_SYSCALL=y3233CONFIG_USERFAULTFD=y3334# CONFIG_COMPAT_BRK is not set···109108CONFIG_SCSI_VIRTIO=y110109CONFIG_MD=y111110CONFIG_MD_LINEAR=m112112-CONFIG_MD_RAID0=m113111CONFIG_MD_MULTIPATH=m114112CONFIG_BLK_DEV_DM=y115113CONFIG_DM_CRYPT=m···131131CONFIG_VIRTIO_NET=y132132# CONFIG_NET_VENDOR_ALACRITECH is not set133133# CONFIG_NET_VENDOR_SOLARFLARE is not set134134+# CONFIG_NET_VENDOR_SYNOPSYS is not set134135# CONFIG_INPUT is not set135136# CONFIG_SERIO is not set136137CONFIG_DEVKMEM=y···163162CONFIG_DEBUG_PAGEALLOC=y164163CONFIG_DETECT_HUNG_TASK=y165164CONFIG_PANIC_ON_OOPS=y166166-CONFIG_TIMER_STATS=y167165CONFIG_DEBUG_RT_MUTEXES=y168166CONFIG_PROVE_LOCKING=y169167CONFIG_LOCK_STAT=y···172172CONFIG_DEBUG_SG=y173173CONFIG_DEBUG_NOTIFIERS=y174174CONFIG_RCU_CPU_STALL_TIMEOUT=60175175-CONFIG_RCU_TRACE=y176175CONFIG_LATENCYTOP=y177176CONFIG_SCHED_TRACER=y178177CONFIG_FTRACE_SYSCALLS=y179178CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y180179CONFIG_STACK_TRACER=y181180CONFIG_BLK_DEV_IO_TRACE=y182182-CONFIG_UPROBE_EVENTS=y183181CONFIG_FUNCTION_PROFILER=y184182CONFIG_TRACE_ENUM_MAP_FILE=y185183CONFIG_KPROBES_SANITY_TEST=y···188190CONFIG_CRYPTO_GCM=m189191CONFIG_CRYPTO_CBC=y190192CONFIG_CRYPTO_CTS=m191191-CONFIG_CRYPTO_ECB=m192193CONFIG_CRYPTO_LRW=m193194CONFIG_CRYPTO_PCBC=m194195CONFIG_CRYPTO_XTS=m···227230CONFIG_CRYPTO_USER_API_RNG=m228231CONFIG_ZCRYPT=m229232CONFIG_PKEY=m233233+CONFIG_CRYPTO_PAES_S390=m230234CONFIG_CRYPTO_SHA1_S390=m231235CONFIG_CRYPTO_SHA256_S390=m232236CONFIG_CRYPTO_SHA512_S390=m
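arch/s390/include/asm/processor.h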
···221221/* Free guarded storage control block for current */222222void exit_thread_gs(void);223223224224-/*225225- * Return saved PC of a blocked thread.226226- */227227-extern unsigned long thread_saved_pc(struct task_struct *t);228228-229224unsigned long get_wchan(struct task_struct *p);230225#define task_pt_regs(tsk) ((struct pt_regs *) \231226 (task_stack_page(tsk) + THREAD_SIZE) - 1)
+13-6
arch/s390/kernel/entry.S
···231231 lctlg %c1,%c1,__LC_USER_ASCE # load primary asce232232.Lsie_done:233233# some program checks are suppressing. C code (e.g. do_protection_exception)234234-# will rewind the PSW by the ILC, which is 4 bytes in case of SIE. Other235235-# instructions between sie64a and .Lsie_done should not cause program236236-# interrupts. So lets use a nop (47 00 00 00) as a landing pad.234234+# will rewind the PSW by the ILC, which is often 4 bytes in case of SIE. There235235+# are some corner cases (e.g. runtime instrumentation) where ILC is unpredictable.236236+# Other instructions between sie64a and .Lsie_done should not cause program237237+# interrupts. So lets use 3 nops as a landing pad for all possible rewinds.237238# See also .Lcleanup_sie238238-.Lrewind_pad:239239- nop 0239239+.Lrewind_pad6:240240+ nopr 7241241+.Lrewind_pad4:242242+ nopr 7243243+.Lrewind_pad2:244244+ nopr 7240245 .globl sie_exit241246sie_exit:242247 lg %r14,__SF_EMPTY+8(%r15) # load guest register save area···254249 stg %r14,__SF_EMPTY+16(%r15) # set exit reason code255250 j sie_exit256251257257- EX_TABLE(.Lrewind_pad,.Lsie_fault)252252+ EX_TABLE(.Lrewind_pad6,.Lsie_fault)253253+ EX_TABLE(.Lrewind_pad4,.Lsie_fault)254254+ EX_TABLE(.Lrewind_pad2,.Lsie_fault)258255 EX_TABLE(sie_exit,.Lsie_fault)259256EXPORT_SYMBOL(sie64a)260257EXPORT_SYMBOL(sie_exit)
+1-6
arch/s390/kernel/ipl.c
···564564565565static void __ipl_run(void *unused)566566{567567- if (MACHINE_IS_LPAR && ipl_info.type == IPL_TYPE_CCW)568568- diag308(DIAG308_LOAD_NORMAL_DUMP, NULL);569567 diag308(DIAG308_LOAD_CLEAR, NULL);570568 if (MACHINE_IS_VM)571569 __cpcmd("IPL", NULL, 0, NULL);···10861088 break;10871089 case REIPL_METHOD_CCW_DIAG:10881090 diag308(DIAG308_SET, reipl_block_ccw);10891089- if (MACHINE_IS_LPAR)10901090- diag308(DIAG308_LOAD_NORMAL_DUMP, NULL);10911091- else10921092- diag308(DIAG308_LOAD_CLEAR, NULL);10911091+ diag308(DIAG308_LOAD_CLEAR, NULL);10931092 break;10941093 case REIPL_METHOD_FCP_RW_DIAG:10951094 diag308(DIAG308_SET, reipl_block_fcp);
-25
arch/s390/kernel/process.c
···41414242asmlinkage void ret_from_fork(void) asm ("ret_from_fork");43434444-/*4545- * Return saved PC of a blocked thread. used in kernel/sched.4646- * resume in entry.S does not create a new stack frame, it4747- * just stores the registers %r6-%r15 to the frame given by4848- * schedule. We want to return the address of the caller of4949- * schedule, so we have to walk the backchain one time to5050- * find the frame schedule() store its return address.5151- */5252-unsigned long thread_saved_pc(struct task_struct *tsk)5353-{5454- struct stack_frame *sf, *low, *high;5555-5656- if (!tsk || !task_stack_page(tsk))5757- return 0;5858- low = task_stack_page(tsk);5959- high = (struct stack_frame *) task_pt_regs(tsk);6060- sf = (struct stack_frame *) tsk->thread.ksp;6161- if (sf <= low || sf > high)6262- return 0;6363- sf = (struct stack_frame *) sf->back_chain;6464- if (sf <= low || sf > high)6565- return 0;6666- return sf->gprs[8];6767-}6868-6944extern void kernel_thread_starter(void);70457146/*
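arch/s390/kvm/interrupt.c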
···21602160 struct kvm_s390_ais_req req;21612161 int ret = 0;2162216221632163- if (!fi->ais_enabled)21632163+ if (!test_kvm_facility(kvm, 72))21642164 return -ENOTSUPP;2165216521662166 if (copy_from_user(&req, (void __user *)attr->addr, sizeof(req)))···22042204 };22052205 int ret = 0;2206220622072207- if (!fi->ais_enabled || !adapter->suppressible)22072207+ if (!test_kvm_facility(kvm, 72) || !adapter->suppressible)22082208 return kvm_s390_inject_vm(kvm, &s390int);2209220922102210 mutex_lock(&fi->ais_lock);
-2
arch/s390/kvm/kvm-s390.c
···558558 } else {559559 set_kvm_facility(kvm->arch.model.fac_mask, 72);560560 set_kvm_facility(kvm->arch.model.fac_list, 72);561561- kvm->arch.float_int.ais_enabled = 1;562561 r = 0;563562 }564563 mutex_unlock(&kvm->lock);···15321533 mutex_init(&kvm->arch.float_int.ais_lock);15331534 kvm->arch.float_int.simm = 0;15341535 kvm->arch.float_int.nimm = 0;15351535- kvm->arch.float_int.ais_enabled = 0;15361536 spin_lock_init(&kvm->arch.float_int.lock);15371537 for (i = 0; i < FIRQ_LIST_COUNT; i++)15381538 INIT_LIST_HEAD(&kvm->arch.float_int.lists[i]);
+2-2
arch/s390/mm/mmap.c
···101101 addr = PAGE_ALIGN(addr);102102 vma = find_vma(mm, addr);103103 if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&104104- (!vma || addr + len <= vma->vm_start))104104+ (!vma || addr + len <= vm_start_gap(vma)))105105 goto check_asce_limit;106106 }107107···151151 addr = PAGE_ALIGN(addr);152152 vma = find_vma(mm, addr);153153 if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&154154- (!vma || addr + len <= vma->vm_start))154154+ (!vma || addr + len <= vm_start_gap(vma)))155155 goto check_asce_limit;156156 }157157
-1
arch/score/include/asm/processor.h
···1313 */1414extern void (*cpu_wait)(void);15151616-extern unsigned long thread_saved_pc(struct task_struct *tsk);1716extern void start_thread(struct pt_regs *regs,1817 unsigned long pc, unsigned long sp);1918extern unsigned long get_wchan(struct task_struct *p);
-5
arch/score/kernel/process.c
···101101 return 1;102102}103103104104-unsigned long thread_saved_pc(struct task_struct *tsk)105105-{106106- return task_pt_regs(tsk)->cp0_epc;107107-}108108-109104unsigned long get_wchan(struct task_struct *task)110105{111106 if (!task || task == current || task->state == TASK_RUNNING)
+2-2
arch/sh/mm/mmap.c
···64646565 vma = find_vma(mm, addr);6666 if (TASK_SIZE - len >= addr &&6767- (!vma || addr + len <= vma->vm_start))6767+ (!vma || addr + len <= vm_start_gap(vma)))6868 return addr;6969 }7070···114114115115 vma = find_vma(mm, addr);116116 if (TASK_SIZE - len >= addr &&117117- (!vma || addr + len <= vma->vm_start))117117+ (!vma || addr + len <= vm_start_gap(vma)))118118 return addr;119119 }120120
+8-7
arch/sparc/Kconfig
···192192 int "Maximum number of CPUs"193193 depends on SMP194194 range 2 32 if SPARC32195195- range 2 1024 if SPARC64195195+ range 2 4096 if SPARC64196196 default 32 if SPARC32197197- default 64 if SPARC64197197+ default 4096 if SPARC64198198199199source kernel/Kconfig.hz200200···295295 depends on SPARC64 && SMP296296297297config NODES_SHIFT298298- int299299- default "4"298298+ int "Maximum NUMA Nodes (as a power of 2)"299299+ range 4 5 if SPARC64300300+ default "5"300301 depends on NEED_MULTIPLE_NODES302302+ help303303+ Specify the maximum number of NUMA Nodes available on the target304304+ system. Increases memory reserved to accommodate various tables.301305302306# Some NUMA nodes have memory ranges that span303307# other nodes. Even though a pfn is valid and···576572 bool577573 depends on COMPAT && SYSVIPC578574 default y579579-580580-config KEYS_COMPAT581581- def_bool y if COMPAT && KEYS582575583576endmenu584577
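arch/sparc/include/asm/mmu_context_64.h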
···1919extern unsigned long tlb_context_cache;2020extern unsigned long mmu_context_bmap[];21212222+DECLARE_PER_CPU(struct mm_struct *, per_cpu_secondary_mm);2223void get_new_mmu_context(struct mm_struct *mm);2323-#ifdef CONFIG_SMP2424-void smp_new_mmu_context_version(void);2525-#else2626-#define smp_new_mmu_context_version() do { } while (0)2727-#endif2828-2924int init_new_context(struct task_struct *tsk, struct mm_struct *mm);3025void destroy_context(struct mm_struct *mm);3126···7176static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, struct task_struct *tsk)7277{7378 unsigned long ctx_valid, flags;7474- int cpu;7979+ int cpu = smp_processor_id();75808181+ per_cpu(per_cpu_secondary_mm, cpu) = mm;7682 if (unlikely(mm == &init_mm))7783 return;7884···119123 * for the first time, we must flush that context out of the120124 * local TLB.121125 */122122- cpu = smp_processor_id();123126 if (!ctx_valid || !cpumask_test_cpu(cpu, mm_cpumask(mm))) {124127 cpumask_set_cpu(cpu, mm_cpumask(mm));125128 __flush_tlb_mm(CTX_HWBITS(mm->context),···128133}129134130135#define deactivate_mm(tsk,mm) do { } while (0)131131-132132-/* Activate a new MM instance for the current task. */133133-static inline void activate_mm(struct mm_struct *active_mm, struct mm_struct *mm)134134-{135135- unsigned long flags;136136- int cpu;137137-138138- spin_lock_irqsave(&mm->context.lock, flags);139139- if (!CTX_VALID(mm->context))140140- get_new_mmu_context(mm);141141- cpu = smp_processor_id();142142- if (!cpumask_test_cpu(cpu, mm_cpumask(mm)))143143- cpumask_set_cpu(cpu, mm_cpumask(mm));144144-145145- load_secondary_context(mm);146146- __flush_tlb_mm(CTX_HWBITS(mm->context), SECONDARY_CONTEXT);147147- tsb_context_switch(mm);148148- spin_unlock_irqrestore(&mm->context.lock, flags);149149-}150150-136136+#define activate_mm(active_mm, mm) switch_mm(active_mm, mm, NULL)151137#endif /* !(__ASSEMBLY__) */152138153139#endif /* !(__SPARC64_MMU_CONTEXT_H) */
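arch/sparc/include/asm/processor_32.h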
···6767 .current_ds = KERNEL_DS, \6868}69697070-/* Return saved PC of a blocked thread. */7171-unsigned long thread_saved_pc(struct task_struct *t);7272-7370/* Do necessary setup to start up a newly executed thread. */7471static inline void start_thread(struct pt_regs * regs, unsigned long pc,7572 unsigned long sp)
-2
arch/sparc/include/asm/processor_64.h
···8989#include <linux/types.h>9090#include <asm/fpumacro.h>91919292-/* Return saved PC of a blocked thread. */9392struct task_struct;9494-unsigned long thread_saved_pc(struct task_struct *);95939694/* On Uniprocessor, even in RMO processes see TSO semantics */9795#ifdef CONFIG_SMP
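arch/sparc/kernel/process_32.c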
···177177}178178179179/*180180- * Note: sparc64 has a pretty intricated thread_saved_pc, check it out.181181- */182182-unsigned long thread_saved_pc(struct task_struct *tsk)183183-{184184- return task_thread_info(tsk)->kpc;185185-}186186-187187-/*188180 * Free current thread data structures etc..189181 */190182void exit_thread(struct task_struct *tsk)
-19
arch/sparc/kernel/process_64.c
···400400401401#endif402402403403-unsigned long thread_saved_pc(struct task_struct *tsk)404404-{405405- struct thread_info *ti = task_thread_info(tsk);406406- unsigned long ret = 0xdeadbeefUL;407407-408408- if (ti && ti->ksp) {409409- unsigned long *sp;410410- sp = (unsigned long *)(ti->ksp + STACK_BIAS);411411- if (((unsigned long)sp & (sizeof(long) - 1)) == 0UL &&412412- sp[14]) {413413- unsigned long *fp;414414- fp = (unsigned long *)(sp[14] + STACK_BIAS);415415- if (((unsigned long)fp & (sizeof(long) - 1)) == 0UL)416416- ret = fp[15];417417- }418418- }419419- return ret;420420-}421421-422403/* Free current thread data structures etc.. */423404void exit_thread(struct task_struct *tsk)424405{
-31
arch/sparc/kernel/smp_64.c
···964964 preempt_enable();965965}966966967967-void __irq_entry smp_new_mmu_context_version_client(int irq, struct pt_regs *regs)968968-{969969- struct mm_struct *mm;970970- unsigned long flags;971971-972972- clear_softint(1 << irq);973973-974974- /* See if we need to allocate a new TLB context because975975- * the version of the one we are using is now out of date.976976- */977977- mm = current->active_mm;978978- if (unlikely(!mm || (mm == &init_mm)))979979- return;980980-981981- spin_lock_irqsave(&mm->context.lock, flags);982982-983983- if (unlikely(!CTX_VALID(mm->context)))984984- get_new_mmu_context(mm);985985-986986- spin_unlock_irqrestore(&mm->context.lock, flags);987987-988988- load_secondary_context(mm);989989- __flush_tlb_mm(CTX_HWBITS(mm->context),990990- SECONDARY_CONTEXT);991991-}992992-993993-void smp_new_mmu_context_version(void)994994-{995995- smp_cross_call(&xcall_new_mmu_context_version, 0, 0, 0);996996-}997997-998967#ifdef CONFIG_KGDB999968void kgdb_roundup_cpus(unsigned long flags)1000969{
+2-2
arch/sparc/kernel/sys_sparc_64.c
···120120121121 vma = find_vma(mm, addr);122122 if (task_size - len >= addr &&123123- (!vma || addr + len <= vma->vm_start))123123+ (!vma || addr + len <= vm_start_gap(vma)))124124 return addr;125125 }126126···183183184184 vma = find_vma(mm, addr);185185 if (task_size - len >= addr &&186186- (!vma || addr + len <= vma->vm_start))186186+ (!vma || addr + len <= vm_start_gap(vma)))187187 return addr;188188 }189189
+7-4
arch/sparc/kernel/tsb.S
···455455 .type copy_tsb,#function456456copy_tsb: /* %o0=old_tsb_base, %o1=old_tsb_size457457 * %o2=new_tsb_base, %o3=new_tsb_size458458+ * %o4=page_size_shift458459 */459460 sethi %uhi(TSB_PASS_BITS), %g7460461 srlx %o3, 4, %o3461461- add %o0, %o1, %g1 /* end of old tsb */462462+ add %o0, %o1, %o1 /* end of old tsb */462463 sllx %g7, 32, %g7463464 sub %o3, 1, %o3 /* %o3 == new tsb hash mask */465465+466466+ mov %o4, %g1 /* page_size_shift */464467465468661: prefetcha [%o0] ASI_N, #one_read466469 .section .tsb_phys_patch, "ax"···489486 /* This can definitely be computed faster... */490487 srlx %o0, 4, %o5 /* Build index */491488 and %o5, 511, %o5 /* Mask index */492492- sllx %o5, PAGE_SHIFT, %o5 /* Put into vaddr position */489489+ sllx %o5, %g1, %o5 /* Put into vaddr position */493490 or %o4, %o5, %o4 /* Full VADDR. */494494- srlx %o4, PAGE_SHIFT, %o4 /* Shift down to create index */491491+ srlx %o4, %g1, %o4 /* Shift down to create index */495492 and %o4, %o3, %o4 /* Mask with new_tsb_nents-1 */496493 sllx %o4, 4, %o4 /* Shift back up into tsb ent offset */497494 TSB_STORE(%o2 + %o4, %g2) /* Store TAG */···499496 TSB_STORE(%o2 + %o4, %g3) /* Store TTE */50049750149880: add %o0, 16, %o0502502- cmp %o0, %g1499499+ cmp %o0, %o1503500 bne,pt %xcc, 90b504501 nop505502
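arch/sparc/mm/hugetlbpage.c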
···120120 addr = ALIGN(addr, huge_page_size(h));121121 vma = find_vma(mm, addr);122122 if (task_size - len >= addr &&123123- (!vma || addr + len <= vma->vm_start))123123+ (!vma || addr + len <= vm_start_gap(vma)))124124 return addr;125125 }126126 if (mm->get_unmapped_area == arch_get_unmapped_area)
+60-29
arch/sparc/mm/init_64.c
···
     }
 
     if ((hv_pgsz_mask & cpu_pgsz_mask) == 0U) {
-        pr_warn("hugepagesz=%llu not supported by MMU.\n",
+        hugetlb_bad_size();
+        pr_err("hugepagesz=%llu not supported by MMU.\n",
             hugepage_size);
         goto out;
     }
···
 
 /* get_new_mmu_context() uses "cache + 1". */
 DEFINE_SPINLOCK(ctx_alloc_lock);
-unsigned long tlb_context_cache = CTX_FIRST_VERSION - 1;
+unsigned long tlb_context_cache = CTX_FIRST_VERSION;
 #define MAX_CTX_NR (1UL << CTX_NR_BITS)
 #define CTX_BMAP_SLOTS BITS_TO_LONGS(MAX_CTX_NR)
 DECLARE_BITMAP(mmu_context_bmap, MAX_CTX_NR);
+DEFINE_PER_CPU(struct mm_struct *, per_cpu_secondary_mm) = {0};
+
+static void mmu_context_wrap(void)
+{
+    unsigned long old_ver = tlb_context_cache & CTX_VERSION_MASK;
+    unsigned long new_ver, new_ctx, old_ctx;
+    struct mm_struct *mm;
+    int cpu;
+
+    bitmap_zero(mmu_context_bmap, 1 << CTX_NR_BITS);
+
+    /* Reserve kernel context */
+    set_bit(0, mmu_context_bmap);
+
+    new_ver = (tlb_context_cache & CTX_VERSION_MASK) + CTX_FIRST_VERSION;
+    if (unlikely(new_ver == 0))
+        new_ver = CTX_FIRST_VERSION;
+    tlb_context_cache = new_ver;
+
+    /*
+     * Make sure that any new mm that are added into per_cpu_secondary_mm,
+     * are going to go through get_new_mmu_context() path.
+     */
+    mb();
+
+    /*
+     * Updated versions to current on those CPUs that had valid secondary
+     * contexts
+     */
+    for_each_online_cpu(cpu) {
+        /*
+         * If a new mm is stored after we took this mm from the array,
+         * it will go into get_new_mmu_context() path, because we
+         * already bumped the version in tlb_context_cache.
+         */
+        mm = per_cpu(per_cpu_secondary_mm, cpu);
+
+        if (unlikely(!mm || mm == &init_mm))
+            continue;
+
+        old_ctx = mm->context.sparc64_ctx_val;
+        if (likely((old_ctx & CTX_VERSION_MASK) == old_ver)) {
+            new_ctx = (old_ctx & ~CTX_VERSION_MASK) | new_ver;
+            set_bit(new_ctx & CTX_NR_MASK, mmu_context_bmap);
+            mm->context.sparc64_ctx_val = new_ctx;
+        }
+    }
+}
 
 /* Caller does TLB context flushing on local CPU if necessary.
  * The caller also ensures that CTX_VALID(mm->context) is false.
···
 {
     unsigned long ctx, new_ctx;
     unsigned long orig_pgsz_bits;
-    int new_version;
 
     spin_lock(&ctx_alloc_lock);
+retry:
+    /* wrap might have happened, test again if our context became valid */
+    if (unlikely(CTX_VALID(mm->context)))
+        goto out;
     orig_pgsz_bits = (mm->context.sparc64_ctx_val & CTX_PGSZ_MASK);
     ctx = (tlb_context_cache + 1) & CTX_NR_MASK;
     new_ctx = find_next_zero_bit(mmu_context_bmap, 1 << CTX_NR_BITS, ctx);
-    new_version = 0;
     if (new_ctx >= (1 << CTX_NR_BITS)) {
         new_ctx = find_next_zero_bit(mmu_context_bmap, ctx, 1);
         if (new_ctx >= ctx) {
-            int i;
-            new_ctx = (tlb_context_cache & CTX_VERSION_MASK) +
-                CTX_FIRST_VERSION;
-            if (new_ctx == 1)
-                new_ctx = CTX_FIRST_VERSION;
-
-            /* Don't call memset, for 16 entries that's just
-             * plain silly...
-             */
-            mmu_context_bmap[0] = 3;
-            mmu_context_bmap[1] = 0;
-            mmu_context_bmap[2] = 0;
-            mmu_context_bmap[3] = 0;
-            for (i = 4; i < CTX_BMAP_SLOTS; i += 4) {
-                mmu_context_bmap[i + 0] = 0;
-                mmu_context_bmap[i + 1] = 0;
-                mmu_context_bmap[i + 2] = 0;
-                mmu_context_bmap[i + 3] = 0;
-            }
-            new_version = 1;
-            goto out;
+            mmu_context_wrap();
+            goto retry;
         }
     }
+    if (mm->context.sparc64_ctx_val)
+        cpumask_clear(mm_cpumask(mm));
     mmu_context_bmap[new_ctx>>6] |= (1UL << (new_ctx & 63));
     new_ctx |= (tlb_context_cache & CTX_VERSION_MASK);
-out:
     tlb_context_cache = new_ctx;
     mm->context.sparc64_ctx_val = new_ctx | orig_pgsz_bits;
+out:
     spin_unlock(&ctx_alloc_lock);
-
-    if (unlikely(new_version))
-        smp_new_mmu_context_version();
 }
 
 static int numa_enabled = 1;
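The new mmu_context_wrap() above bumps the version field on exhaustion, reserves the kernel context, and re-stamps contexts still live on other CPUs. A compile-and-run userspace sketch of the underlying version+index idea (field widths and names invented here; the real CTX_* layout and the per-CPU re-stamping are sparc64-specific and omitted):

#include <stdint.h>
#include <stdio.h>

#define NR_BITS   6                     /* 64 contexts per version */
#define NR_MASK   ((1UL << NR_BITS) - 1)
#define VER_FIRST (1UL << NR_BITS)      /* version field sits above the index */
#define VER_MASK  (~NR_MASK)

static unsigned long cache = VER_FIRST; /* last context handed out */
static uint64_t bitmap = 1;             /* index 0 reserved, like the kernel context */

static unsigned long get_new_ctx(void)
{
    unsigned long idx = (cache + 1) & NR_MASK;

    while (bitmap & ((uint64_t)1 << idx)) {
        if (idx == (cache & NR_MASK)) { /* index space exhausted: wrap */
            bitmap = 1;                 /* keep index 0 reserved */
            cache = (cache & VER_MASK) + VER_FIRST;
            if (!(cache & VER_MASK))    /* version counter overflowed */
                cache = VER_FIRST;
            idx = 1;
            break;
        }
        idx = (idx + 1) & NR_MASK;
    }
    bitmap |= (uint64_t)1 << idx;
    cache = (cache & VER_MASK) | idx;
    return cache;
}

int main(void)
{
    for (int i = 0; i < 66; i++) {      /* enough allocations to force one wrap */
        unsigned long ctx = get_new_ctx();
        printf("ctx %#lx (ver %lu, idx %lu)\n",
               ctx, ctx >> NR_BITS, ctx & NR_MASK);
    }
    return 0;
}

After a wrap, every context minted under the old version fails a CTX_VALID()-style version compare, which is why the patched allocator can simply "goto retry" instead of carrying a new_version flag out of the locked region.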
+5-2
arch/sparc/mm/tsb.c
···
     extern void copy_tsb(unsigned long old_tsb_base,
                  unsigned long old_tsb_size,
                  unsigned long new_tsb_base,
-                 unsigned long new_tsb_size);
+                 unsigned long new_tsb_size,
+                 unsigned long page_size_shift);
     unsigned long old_tsb_base = (unsigned long) old_tsb;
     unsigned long new_tsb_base = (unsigned long) new_tsb;
···
         old_tsb_base = __pa(old_tsb_base);
         new_tsb_base = __pa(new_tsb_base);
     }
-    copy_tsb(old_tsb_base, old_size, new_tsb_base, new_size);
+    copy_tsb(old_tsb_base, old_size, new_tsb_base, new_size,
+         tsb_index == MM_TSB_BASE ?
+         PAGE_SHIFT : REAL_HPAGE_SHIFT);
 }
 
 mm->context.tsb_block[tsb_index].tsb = new_tsb;
···
 
 extern void prepare_exit_to_usermode(struct pt_regs *regs, u32 flags);
 
-
-/*
- * Return saved (kernel) PC of a blocked thread.
- * Only used in a printk() in kernel/sched/core.c, so don't work too hard.
- */
-#define thread_saved_pc(t) ((t)->thread.pc)
-
 unsigned long get_wchan(struct task_struct *p);
 
 /* Return initial ksp value for given task. */
+1-1
arch/tile/mm/hugetlbpage.c
···
         addr = ALIGN(addr, huge_page_size(h));
         vma = find_vma(mm, addr);
         if (TASK_SIZE - len >= addr &&
-            (!vma || addr + len <= vma->vm_start))
+            (!vma || addr + len <= vm_start_gap(vma)))
             return addr;
     }
     if (current->mm->get_unmapped_area == arch_get_unmapped_area)
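This and the matching vm_start_gap() call sites further down (the x86 and xtensa hunks) all make free-hole checks account for the enlarged stack guard gap. A standalone model of the helper they call, following the shape of the vm_start_gap() added to include/linux/mm.h by the same series (constants and addresses illustrative):

#include <stdio.h>

#define PAGE_SIZE    4096UL
#define VM_GROWSDOWN 0x1UL

struct vma {
    unsigned long vm_start;
    unsigned long vm_end;
    unsigned long vm_flags;
};

static unsigned long stack_guard_gap = 256UL * PAGE_SIZE;

static unsigned long vm_start_gap(const struct vma *vma)
{
    unsigned long vm_start = vma->vm_start;

    if (vma->vm_flags & VM_GROWSDOWN) {
        vm_start -= stack_guard_gap;
        if (vm_start > vma->vm_start)   /* clamp underflow to 0 */
            vm_start = 0;
    }
    return vm_start;
}

int main(void)
{
    struct vma stack = { 0x70000000UL, 0x70021000UL, VM_GROWSDOWN };
    unsigned long addr = 0x6ff00000UL, len = 0x100000UL;

    /* old check: a mapping flush against the stack was accepted */
    printf("old: %d\n", addr + len <= stack.vm_start);       /* 1 */
    /* new check: the guard gap below the stack stays unmapped */
    printf("new: %d\n", addr + len <= vm_start_gap(&stack)); /* 0 */
    return 0;
}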
···
     __attribute__((__section__(".data..init_irqstack"))) =
         { INIT_THREAD_INFO(init_task) };
 
-unsigned long thread_saved_pc(struct task_struct *task)
-{
-    /* FIXME: Need to look up userspace_pid by cpu */
-    return os_process_pc(userspace_pid[0]);
-}
-
 /* Changed in setup_arch, which is called in early boot */
 static char host_info[(__NEW_UTS_LEN + 1) * 5];
 
···
 {
     unsigned long random_addr, min_addr;
 
-    /* By default, keep output position unchanged. */
-    *virt_addr = *output;
-
     if (cmdline_find_option_bool("nokaslr")) {
         warn("KASLR disabled: 'nokaslr' on cmdline.");
         return;
+4-2
arch/x86/boot/compressed/misc.c
···
                   unsigned long output_len)
 {
     const unsigned long kernel_total_size = VO__end - VO__text;
-    unsigned long virt_addr = (unsigned long)output;
+    unsigned long virt_addr = LOAD_PHYSICAL_ADDR;
 
     /* Retain x86 boot parameters pointer passed from startup_32/64. */
     boot_params = rmode;
···
 #ifdef CONFIG_X86_64
     if (heap > 0x3fffffffffffUL)
         error("Destination address too large");
+    if (virt_addr + max(output_len, kernel_total_size) > KERNEL_IMAGE_SIZE)
+        error("Destination virtual address is beyond the kernel mapping area");
 #else
     if (heap > ((-__PAGE_OFFSET-(128<<20)-1) & 0x7fffffff))
         error("Destination address too large");
···
 #ifndef CONFIG_RELOCATABLE
     if ((unsigned long)output != LOAD_PHYSICAL_ADDR)
         error("Destination address does not match LOAD_PHYSICAL_ADDR");
-    if ((unsigned long)output != virt_addr)
+    if (virt_addr != LOAD_PHYSICAL_ADDR)
         error("Destination virtual address changed when not relocatable");
 #endif
 
-2
arch/x86/boot/compressed/misc.h
···
                        unsigned long output_size,
                        unsigned long *virt_addr)
 {
-    /* No change from existing output location. */
-    *virt_addr = *output;
 }
 #endif
 
···
         pmu = type->pmus;
         for (i = 0; i < type->num_boxes; i++, pmu++) {
             box = pmu->boxes[pkg];
-            if (!box && atomic_inc_return(&box->refcnt) == 1)
+            if (box && atomic_inc_return(&box->refcnt) == 1)
                 uncore_box_init(box);
         }
     }
+1
arch/x86/include/asm/extable.h
···
 } while (0)
 
 extern int fixup_exception(struct pt_regs *regs, int trapnr);
+extern int fixup_bug(struct pt_regs *regs, int trapnr);
 extern bool ex_has_fault_handler(unsigned long ip);
 extern void early_fixup_exception(struct pt_regs *regs, int trapnr);
 
+1
arch/x86/include/asm/kvm_emulate.h
···
 
     bool perm_ok; /* do not check permissions if true */
     bool ud; /* inject an #UD if host doesn't support insn */
+    bool tf; /* TF value before instruction (after for syscall/sysret) */
 
     bool have_exception;
     struct x86_exception exception;
···
 
 #endif /* CONFIG_X86_64 */
 
-extern unsigned long thread_saved_pc(struct task_struct *tsk);
-
 extern void start_thread(struct pt_regs *regs, unsigned long new_ip,
                          unsigned long new_sp);
 
+1
arch/x86/kernel/cpu/cyrix.c
···
         break;
 
     case 4: /* MediaGX/GXm or Geode GXM/GXLV/GX1 */
+    case 11: /* GX1 with inverted Device ID */
 #ifdef CONFIG_PCI
     {
         u32 vendor, device;
···
 }
 
 /*
- * Return saved PC of a blocked thread.
- * What is this good for? it will be always the scheduler or ret_from_fork.
- */
-unsigned long thread_saved_pc(struct task_struct *tsk)
-{
-    struct inactive_task_frame *frame =
-        (struct inactive_task_frame *) READ_ONCE(tsk->thread.sp);
-    return READ_ONCE_NOCHECK(frame->ret_addr);
-}
-
-/*
  * Called from fs/proc with a reference on @p to find the function
  * which called into schedule(). This needs to be done carefully
  * because the task might wake up and we might look at a stack
···
         addr = PAGE_ALIGN(addr);
         vma = find_vma(mm, addr);
         if (end - len >= addr &&
-            (!vma || addr + len <= vma->vm_start))
+            (!vma || addr + len <= vm_start_gap(vma)))
             return addr;
     }
 
···
         addr = PAGE_ALIGN(addr);
         vma = find_vma(mm, addr);
         if (TASK_SIZE - len >= addr &&
-            (!vma || addr + len <= vma->vm_start))
+            (!vma || addr + len <= vm_start_gap(vma)))
             return addr;
     }
 
+1-1
arch/x86/kernel/tboot.c
···
     if (!tboot_enabled())
         return 0;
 
-    if (!intel_iommu_tboot_noforce)
+    if (intel_iommu_tboot_noforce)
         return 1;
 
     if (no_iommu || swiotlb || dmar_disabled)
+1-1
arch/x86/kernel/traps.c
···
     return ud == INSN_UD0 || ud == INSN_UD2;
 }
 
-static int fixup_bug(struct pt_regs *regs, int trapnr)
+int fixup_bug(struct pt_regs *regs, int trapnr)
 {
     if (trapnr != X86_TRAP_UD)
         return 0;
+11-9
arch/x86/kvm/cpuid.c
···
 static int move_to_next_stateful_cpuid_entry(struct kvm_vcpu *vcpu, int i)
 {
     struct kvm_cpuid_entry2 *e = &vcpu->arch.cpuid_entries[i];
-    int j, nent = vcpu->arch.cpuid_nent;
+    struct kvm_cpuid_entry2 *ej;
+    int j = i;
+    int nent = vcpu->arch.cpuid_nent;
 
     e->flags &= ~KVM_CPUID_FLAG_STATE_READ_NEXT;
     /* when no next entry is found, the current entry[i] is reselected */
-    for (j = i + 1; ; j = (j + 1) % nent) {
-        struct kvm_cpuid_entry2 *ej = &vcpu->arch.cpuid_entries[j];
-        if (ej->function == e->function) {
-            ej->flags |= KVM_CPUID_FLAG_STATE_READ_NEXT;
-            return j;
-        }
-    }
-    return 0; /* silence gcc, even though control never reaches here */
+    do {
+        j = (j + 1) % nent;
+        ej = &vcpu->arch.cpuid_entries[j];
+    } while (ej->function != e->function);
+
+    ej->flags |= KVM_CPUID_FLAG_STATE_READ_NEXT;
+
+    return j;
 }
 
 /* find an entry with matching function, matching index (if needed), and that
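The deleted loop started at j = i + 1 but only reduced modulo nent on the following step, so with i == nent - 1 the very first dereference read one past the end of the array. A runnable miniature of the corrected do/while search (invented data, same control flow):

#include <stdio.h>

struct entry { int function; };

static int next_same_function(struct entry *e, int nent, int i)
{
    int j = i;

    do {
        j = (j + 1) % nent;     /* index wraps before the access */
    } while (e[j].function != e[i].function);

    return j;                   /* may be i itself: "reselected" */
}

int main(void)
{
    struct entry ent[] = { {4}, {4}, {11}, {4} };

    printf("%d\n", next_same_function(ent, 4, 3)); /* 0: wrapped safely */
    printf("%d\n", next_same_function(ent, 4, 2)); /* 2: only one entry */
    return 0;
}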
···
      * AMD's VMCB does not have an explicit unusable field, so emulate it
      * for cross vendor migration purposes by "not present"
      */
-    var->unusable = !var->present || (var->type == 0);
+    var->unusable = !var->present;
 
     switch (seg) {
     case VCPU_SREG_TR:
···
          */
         if (var->unusable)
             var->db = 0;
+        /* This is symmetric with svm_set_segment() */
         var->dpl = to_svm(vcpu)->vmcb->save.cpl;
         break;
     }
···
     s->base = var->base;
     s->limit = var->limit;
     s->selector = var->selector;
-    if (var->unusable)
-        s->attrib = 0;
-    else {
-        s->attrib = (var->type & SVM_SELECTOR_TYPE_MASK);
-        s->attrib |= (var->s & 1) << SVM_SELECTOR_S_SHIFT;
-        s->attrib |= (var->dpl & 3) << SVM_SELECTOR_DPL_SHIFT;
-        s->attrib |= (var->present & 1) << SVM_SELECTOR_P_SHIFT;
-        s->attrib |= (var->avl & 1) << SVM_SELECTOR_AVL_SHIFT;
-        s->attrib |= (var->l & 1) << SVM_SELECTOR_L_SHIFT;
-        s->attrib |= (var->db & 1) << SVM_SELECTOR_DB_SHIFT;
-        s->attrib |= (var->g & 1) << SVM_SELECTOR_G_SHIFT;
-    }
+    s->attrib = (var->type & SVM_SELECTOR_TYPE_MASK);
+    s->attrib |= (var->s & 1) << SVM_SELECTOR_S_SHIFT;
+    s->attrib |= (var->dpl & 3) << SVM_SELECTOR_DPL_SHIFT;
+    s->attrib |= ((var->present & 1) && !var->unusable) << SVM_SELECTOR_P_SHIFT;
+    s->attrib |= (var->avl & 1) << SVM_SELECTOR_AVL_SHIFT;
+    s->attrib |= (var->l & 1) << SVM_SELECTOR_L_SHIFT;
+    s->attrib |= (var->db & 1) << SVM_SELECTOR_DB_SHIFT;
+    s->attrib |= (var->g & 1) << SVM_SELECTOR_G_SHIFT;
 
     /*
      * This is always accurate, except if SYSRET returned to a segment
···
      * would entail passing the CPL to userspace and back.
      */
     if (seg == VCPU_SREG_SS)
-        svm->vmcb->save.cpl = (s->attrib >> SVM_SELECTOR_DPL_SHIFT) & 3;
+        /* This is symmetric with svm_get_segment() */
+        svm->vmcb->save.cpl = (var->dpl & 3);
 
     mark_dirty(svm->vmcb, VMCB_SEG);
 }
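The point of this change is that packing (svm_set_segment) and unpacking (svm_get_segment) are now symmetric: "unusable" is encoded purely as a cleared present bit, so the other attribute bits survive a save/restore round trip. A toy demonstration of the property (the bit positions mimic the SVM attrib word, but this is not KVM code):

#include <assert.h>
#include <stdio.h>

struct seg { unsigned type:4, present:1, dpl:2, unusable:1; };

static unsigned pack(const struct seg *s)
{
    unsigned a = s->type & 0xf;

    a |= (s->dpl & 3) << 5;
    /* present only if usable: the one place "unusable" is encoded */
    a |= ((s->present & 1) && !s->unusable) << 7;
    return a;
}

static void unpack(unsigned a, struct seg *s)
{
    s->type = a & 0xf;
    s->dpl = (a >> 5) & 3;
    s->present = (a >> 7) & 1;
    s->unusable = !s->present;  /* symmetric with pack() */
}

int main(void)
{
    struct seg in = { .type = 11, .present = 1, .dpl = 3 }, out;

    unpack(pack(&in), &out);
    assert(out.type == in.type && out.dpl == in.dpl);
    puts("round trip ok");
    return 0;
}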
+63-86
arch/x86/kvm/vmx.c
···24252425 if (!(vmcs12->exception_bitmap & (1u << nr)))24262426 return 0;2427242724282428- nested_vmx_vmexit(vcpu, to_vmx(vcpu)->exit_reason,24282428+ nested_vmx_vmexit(vcpu, EXIT_REASON_EXCEPTION_NMI,24292429 vmcs_read32(VM_EXIT_INTR_INFO),24302430 vmcs_readl(EXIT_QUALIFICATION));24312431 return 1;···69146914 return 0;69156915}6916691669176917-/*69186918- * This function performs the various checks including69196919- * - if it's 4KB aligned69206920- * - No bits beyond the physical address width are set69216921- * - Returns 0 on success or else 169226922- * (Intel SDM Section 30.3)69236923- */69246924-static int nested_vmx_check_vmptr(struct kvm_vcpu *vcpu, int exit_reason,69256925- gpa_t *vmpointer)69176917+static int nested_vmx_get_vmptr(struct kvm_vcpu *vcpu, gpa_t *vmpointer)69266918{69276919 gva_t gva;69286928- gpa_t vmptr;69296920 struct x86_exception e;69306930- struct page *page;69316931- struct vcpu_vmx *vmx = to_vmx(vcpu);69326932- int maxphyaddr = cpuid_maxphyaddr(vcpu);6933692169346922 if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION),69356923 vmcs_read32(VMX_INSTRUCTION_INFO), false, &gva))69366924 return 1;6937692569386938- if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &vmptr,69396939- sizeof(vmptr), &e)) {69266926+ if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, vmpointer,69276927+ sizeof(*vmpointer), &e)) {69406928 kvm_inject_page_fault(vcpu, &e);69416929 return 1;69426930 }6943693169446944- switch (exit_reason) {69456945- case EXIT_REASON_VMON:69466946- /*69476947- * SDM 3: 24.11.569486948- * The first 4 bytes of VMXON region contain the supported69496949- * VMCS revision identifier69506950- *69516951- * Note - IA32_VMX_BASIC[48] will never be 169526952- * for the nested case;69536953- * which replaces physical address width with 3269546954- *69556955- */69566956- if (!PAGE_ALIGNED(vmptr) || (vmptr >> maxphyaddr)) {69576957- nested_vmx_failInvalid(vcpu);69586958- return kvm_skip_emulated_instruction(vcpu);69596959- }69606960-69616961- page = nested_get_page(vcpu, vmptr);69626962- if (page == NULL) {69636963- nested_vmx_failInvalid(vcpu);69646964- return kvm_skip_emulated_instruction(vcpu);69656965- }69666966- if (*(u32 *)kmap(page) != VMCS12_REVISION) {69676967- kunmap(page);69686968- nested_release_page_clean(page);69696969- nested_vmx_failInvalid(vcpu);69706970- return kvm_skip_emulated_instruction(vcpu);69716971- }69726972- kunmap(page);69736973- nested_release_page_clean(page);69746974- vmx->nested.vmxon_ptr = vmptr;69756975- break;69766976- case EXIT_REASON_VMCLEAR:69776977- if (!PAGE_ALIGNED(vmptr) || (vmptr >> maxphyaddr)) {69786978- nested_vmx_failValid(vcpu,69796979- VMXERR_VMCLEAR_INVALID_ADDRESS);69806980- return kvm_skip_emulated_instruction(vcpu);69816981- }69826982-69836983- if (vmptr == vmx->nested.vmxon_ptr) {69846984- nested_vmx_failValid(vcpu,69856985- VMXERR_VMCLEAR_VMXON_POINTER);69866986- return kvm_skip_emulated_instruction(vcpu);69876987- }69886988- break;69896989- case EXIT_REASON_VMPTRLD:69906990- if (!PAGE_ALIGNED(vmptr) || (vmptr >> maxphyaddr)) {69916991- nested_vmx_failValid(vcpu,69926992- VMXERR_VMPTRLD_INVALID_ADDRESS);69936993- return kvm_skip_emulated_instruction(vcpu);69946994- }69956995-69966996- if (vmptr == vmx->nested.vmxon_ptr) {69976997- nested_vmx_failValid(vcpu,69986998- VMXERR_VMPTRLD_VMXON_POINTER);69996999- return kvm_skip_emulated_instruction(vcpu);70007000- }70017001- break;70027002- default:70037003- return 1; /* shouldn't happen */70047004- }70057005-70067006- if (vmpointer)70077007- *vmpointer = 
vmptr;70086932 return 0;70096933}70106934···69907066static int handle_vmon(struct kvm_vcpu *vcpu)69917067{69927068 int ret;70697069+ gpa_t vmptr;70707070+ struct page *page;69937071 struct vcpu_vmx *vmx = to_vmx(vcpu);69947072 const u64 VMXON_NEEDED_FEATURES = FEATURE_CONTROL_LOCKED69957073 | FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX;···70217095 return 1;70227096 }7023709770247024- if (nested_vmx_check_vmptr(vcpu, EXIT_REASON_VMON, NULL))70987098+ if (nested_vmx_get_vmptr(vcpu, &vmptr))70257099 return 1;70267026-71007100+71017101+ /*71027102+ * SDM 3: 24.11.571037103+ * The first 4 bytes of VMXON region contain the supported71047104+ * VMCS revision identifier71057105+ *71067106+ * Note - IA32_VMX_BASIC[48] will never be 1 for the nested case;71077107+ * which replaces physical address width with 3271087108+ */71097109+ if (!PAGE_ALIGNED(vmptr) || (vmptr >> cpuid_maxphyaddr(vcpu))) {71107110+ nested_vmx_failInvalid(vcpu);71117111+ return kvm_skip_emulated_instruction(vcpu);71127112+ }71137113+71147114+ page = nested_get_page(vcpu, vmptr);71157115+ if (page == NULL) {71167116+ nested_vmx_failInvalid(vcpu);71177117+ return kvm_skip_emulated_instruction(vcpu);71187118+ }71197119+ if (*(u32 *)kmap(page) != VMCS12_REVISION) {71207120+ kunmap(page);71217121+ nested_release_page_clean(page);71227122+ nested_vmx_failInvalid(vcpu);71237123+ return kvm_skip_emulated_instruction(vcpu);71247124+ }71257125+ kunmap(page);71267126+ nested_release_page_clean(page);71277127+71287128+ vmx->nested.vmxon_ptr = vmptr;70277129 ret = enter_vmx_operation(vcpu);70287130 if (ret)70297131 return ret;···71677213 if (!nested_vmx_check_permission(vcpu))71687214 return 1;7169721571707170- if (nested_vmx_check_vmptr(vcpu, EXIT_REASON_VMCLEAR, &vmptr))72167216+ if (nested_vmx_get_vmptr(vcpu, &vmptr))71717217 return 1;72187218+72197219+ if (!PAGE_ALIGNED(vmptr) || (vmptr >> cpuid_maxphyaddr(vcpu))) {72207220+ nested_vmx_failValid(vcpu, VMXERR_VMCLEAR_INVALID_ADDRESS);72217221+ return kvm_skip_emulated_instruction(vcpu);72227222+ }72237223+72247224+ if (vmptr == vmx->nested.vmxon_ptr) {72257225+ nested_vmx_failValid(vcpu, VMXERR_VMCLEAR_VMXON_POINTER);72267226+ return kvm_skip_emulated_instruction(vcpu);72277227+ }7172722871737229 if (vmptr == vmx->nested.current_vmptr)71747230 nested_release_vmcs12(vmx);···75097545 if (!nested_vmx_check_permission(vcpu))75107546 return 1;7511754775127512- if (nested_vmx_check_vmptr(vcpu, EXIT_REASON_VMPTRLD, &vmptr))75487548+ if (nested_vmx_get_vmptr(vcpu, &vmptr))75137549 return 1;75507550+75517551+ if (!PAGE_ALIGNED(vmptr) || (vmptr >> cpuid_maxphyaddr(vcpu))) {75527552+ nested_vmx_failValid(vcpu, VMXERR_VMPTRLD_INVALID_ADDRESS);75537553+ return kvm_skip_emulated_instruction(vcpu);75547554+ }75557555+75567556+ if (vmptr == vmx->nested.vmxon_ptr) {75577557+ nested_vmx_failValid(vcpu, VMXERR_VMPTRLD_VMXON_POINTER);75587558+ return kvm_skip_emulated_instruction(vcpu);75597559+ }7514756075157561 if (vmx->nested.current_vmptr != vmptr) {75167562 struct vmcs12 *new_vmcs12;···78877913{78887914 unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);78897915 int cr = exit_qualification & 15;78907890- int reg = (exit_qualification >> 8) & 15;78917891- unsigned long val = kvm_register_readl(vcpu, reg);79167916+ int reg;79177917+ unsigned long val;7892791878937919 switch ((exit_qualification >> 4) & 3) {78947920 case 0: /* mov to cr */79217921+ reg = (exit_qualification >> 8) & 15;79227922+ val = kvm_register_readl(vcpu, reg);78957923 switch (cr) {78967924 case 0:78977925 if 
(vmcs12->cr0_guest_host_mask &···79487972 * lmsw can change bits 1..3 of cr0, and only set bit 0 of79497973 * cr0. Other attempted changes are ignored, with no exit.79507974 */79757975+ val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f;79517976 if (vmcs12->cr0_guest_host_mask & 0xe &79527977 (val ^ vmcs12->cr0_read_shadow))79537978 return true;
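With nested_vmx_check_vmptr() gone, each handler above repeats the same two-part validity test on the guest-supplied pointer: page alignment plus the physical-address-width bound. A standalone sketch of that test (the maxphyaddr value here is a stand-in; the real code derives it per-guest via cpuid_maxphyaddr()):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_ALIGNED(x) (((x) & 0xfffULL) == 0)

static bool vmptr_ok(uint64_t vmptr, int maxphyaddr)
{
    return PAGE_ALIGNED(vmptr) && (vmptr >> maxphyaddr) == 0;
}

int main(void)
{
    printf("%d\n", vmptr_ok(0x123000ULL, 36)); /* 1: aligned, in range */
    printf("%d\n", vmptr_ok(0x123800ULL, 36)); /* 0: not 4K aligned */
    printf("%d\n", vmptr_ok(1ULL << 40, 36));  /* 0: beyond maxphyaddr */
    return 0;
}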
+38-34
arch/x86/kvm/x86.c
···
     kvm_x86_ops->get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
 
     ctxt->eflags = kvm_get_rflags(vcpu);
+    ctxt->tf = (ctxt->eflags & X86_EFLAGS_TF) != 0;
+
     ctxt->eip = kvm_rip_read(vcpu);
     ctxt->mode = (!is_protmode(vcpu)) ? X86EMUL_MODE_REAL :
              (ctxt->eflags & X86_EFLAGS_VM) ? X86EMUL_MODE_VM86 :
···
     return dr6;
 }
 
-static void kvm_vcpu_check_singlestep(struct kvm_vcpu *vcpu, unsigned long rflags, int *r)
+static void kvm_vcpu_do_singlestep(struct kvm_vcpu *vcpu, int *r)
 {
     struct kvm_run *kvm_run = vcpu->run;
 
-    /*
-     * rflags is the old, "raw" value of the flags. The new value has
-     * not been saved yet.
-     *
-     * This is correct even for TF set by the guest, because "the
-     * processor will not generate this exception after the instruction
-     * that sets the TF flag".
-     */
-    if (unlikely(rflags & X86_EFLAGS_TF)) {
-        if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
-            kvm_run->debug.arch.dr6 = DR6_BS | DR6_FIXED_1 |
-                          DR6_RTM;
-            kvm_run->debug.arch.pc = vcpu->arch.singlestep_rip;
-            kvm_run->debug.arch.exception = DB_VECTOR;
-            kvm_run->exit_reason = KVM_EXIT_DEBUG;
-            *r = EMULATE_USER_EXIT;
-        } else {
-            /*
-             * "Certain debug exceptions may clear bit 0-3. The
-             * remaining contents of the DR6 register are never
-             * cleared by the processor".
-             */
-            vcpu->arch.dr6 &= ~15;
-            vcpu->arch.dr6 |= DR6_BS | DR6_RTM;
-            kvm_queue_exception(vcpu, DB_VECTOR);
-        }
+    if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
+        kvm_run->debug.arch.dr6 = DR6_BS | DR6_FIXED_1 | DR6_RTM;
+        kvm_run->debug.arch.pc = vcpu->arch.singlestep_rip;
+        kvm_run->debug.arch.exception = DB_VECTOR;
+        kvm_run->exit_reason = KVM_EXIT_DEBUG;
+        *r = EMULATE_USER_EXIT;
+    } else {
+        /*
+         * "Certain debug exceptions may clear bit 0-3. The
+         * remaining contents of the DR6 register are never
+         * cleared by the processor".
+         */
+        vcpu->arch.dr6 &= ~15;
+        vcpu->arch.dr6 |= DR6_BS | DR6_RTM;
+        kvm_queue_exception(vcpu, DB_VECTOR);
     }
 }
 
···
     int r = EMULATE_DONE;
 
     kvm_x86_ops->skip_emulated_instruction(vcpu);
-    kvm_vcpu_check_singlestep(vcpu, rflags, &r);
+
+    /*
+     * rflags is the old, "raw" value of the flags. The new value has
+     * not been saved yet.
+     *
+     * This is correct even for TF set by the guest, because "the
+     * processor will not generate this exception after the instruction
+     * that sets the TF flag".
+     */
+    if (unlikely(rflags & X86_EFLAGS_TF))
+        kvm_vcpu_do_singlestep(vcpu, &r);
     return r == EMULATE_DONE;
 }
 EXPORT_SYMBOL_GPL(kvm_skip_emulated_instruction);
···
     toggle_interruptibility(vcpu, ctxt->interruptibility);
     vcpu->arch.emulate_regs_need_sync_to_vcpu = false;
     kvm_rip_write(vcpu, ctxt->eip);
-    if (r == EMULATE_DONE)
-        kvm_vcpu_check_singlestep(vcpu, rflags, &r);
+    if (r == EMULATE_DONE &&
+        (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)))
+        kvm_vcpu_do_singlestep(vcpu, &r);
     if (!ctxt->have_exception ||
         exception_type(ctxt->exception.vector) == EXCPT_TRAP)
         __kvm_set_rflags(vcpu, ctxt->eflags);
···
     if (vcpu->arch.pv.pv_unhalted)
         return true;
 
-    if (atomic_read(&vcpu->arch.nmi_queued))
+    if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
+        (vcpu->arch.nmi_pending &&
+         kvm_x86_ops->nmi_allowed(vcpu)))
         return true;
 
-    if (kvm_test_request(KVM_REQ_SMI, vcpu))
+    if (kvm_test_request(KVM_REQ_SMI, vcpu) ||
+        (vcpu->arch.smi_pending && !is_smm(vcpu)))
         return true;
 
     if (kvm_arch_interrupt_allowed(vcpu) &&
···
     if (!(vcpu->arch.apf.msr_val & KVM_ASYNC_PF_ENABLED))
         return true;
     else
-        return !kvm_event_needs_reinjection(vcpu) &&
-            kvm_x86_ops->interrupt_allowed(vcpu);
+        return kvm_can_do_async_pf(vcpu);
 }
 
 void kvm_arch_start_assignment(struct kvm *kvm)
+3
arch/x86/mm/extable.c
···
     if (fixup_exception(regs, trapnr))
         return;
 
+    if (fixup_bug(regs, trapnr))
+        return;
+
 fail:
     early_printk("PANIC: early exception 0x%02x IP %lx:%lx error %lx cr2 0x%lx\n",
              (unsigned)trapnr, (unsigned long)regs->cs, regs->ip,
+1-1
arch/x86/mm/hugetlbpage.c
···
         addr = ALIGN(addr, huge_page_size(h));
         vma = find_vma(mm, addr);
         if (TASK_SIZE - len >= addr &&
-            (!vma || addr + len <= vma->vm_start))
+            (!vma || addr + len <= vm_start_gap(vma)))
             return addr;
     }
     if (mm->get_unmapped_area == arch_get_unmapped_area)
+3-3
arch/x86/mm/init.c
···
 
 static void __init probe_page_size_mask(void)
 {
-#if !defined(CONFIG_KMEMCHECK)
     /*
      * For CONFIG_KMEMCHECK or pagealloc debugging, identity mapping will
      * use small pages.
      * This will simplify cpa(), which otherwise needs to support splitting
      * large pages into small in interrupt context, etc.
      */
-    if (boot_cpu_has(X86_FEATURE_PSE) && !debug_pagealloc_enabled())
+    if (boot_cpu_has(X86_FEATURE_PSE) && !debug_pagealloc_enabled() && !IS_ENABLED(CONFIG_KMEMCHECK))
         page_size_mask |= 1 << PG_LEVEL_2M;
-#endif
+    else
+        direct_gbpages = 0;
 
     /* Enable PSE if available */
     if (boot_cpu_has(X86_FEATURE_PSE))
+7-1
arch/x86/mm/init_64.c
···
 
         pud_base = pud_offset(p4d, 0);
         remove_pud_table(pud_base, addr, next, direct);
-        free_pud_table(pud_base, p4d);
+        /*
+         * For 4-level page tables we do not want to free PUDs, but in the
+         * 5-level case we should free them. This code will have to change
+         * to adapt for boot-time switching between 4 and 5 level page tables.
+         */
+        if (CONFIG_PGTABLE_LEVELS == 5)
+            free_pud_table(pud_base, p4d);
     }
 
     if (direct)
+3-6
arch/x86/mm/pat.c
···
 }
 early_param("nopat", nopat);
 
-static bool __read_mostly __pat_initialized = false;
-
 bool pat_enabled(void)
 {
-    return __pat_initialized;
+    return !!__pat_enabled;
 }
 EXPORT_SYMBOL_GPL(pat_enabled);
 
···
     }
 
     wrmsrl(MSR_IA32_CR_PAT, pat);
-    __pat_initialized = true;
 
     __init_cache_modes(pat);
 }
 
 static void pat_ap_init(u64 pat)
 {
-    if (!this_cpu_has(X86_FEATURE_PAT)) {
+    if (!boot_cpu_has(X86_FEATURE_PAT)) {
         /*
          * If this happens we are on a secondary CPU, but switched to
          * PAT on the boot CPU. We have no way to undo PAT.
···
     u64 pat;
     struct cpuinfo_x86 *c = &boot_cpu_data;
 
-    if (!__pat_enabled) {
+    if (!pat_enabled()) {
         init_cache_modes();
         return;
     }
+4-2
arch/x86/platform/efi/efi.c
···
 
     /*
      * We don't do virtual mode, since we don't do runtime services, on
-     * non-native EFI
+     * non-native EFI. With efi=old_map, we don't do runtime services in
+     * kexec kernel because in the initial boot something else might
+     * have been mapped at these virtual addresses.
      */
-    if (!efi_is_native()) {
+    if (!efi_is_native() || efi_enabled(EFI_OLD_MEMMAP)) {
         efi_memmap_unmap();
         clear_bit(EFI_RUNTIME_SERVICES, &efi.flags);
         return;
+71-8
arch/x86/platform/efi/efi_64.c
···71717272pgd_t * __init efi_call_phys_prolog(void)7373{7474- unsigned long vaddress;7575- pgd_t *save_pgd;7474+ unsigned long vaddr, addr_pgd, addr_p4d, addr_pud;7575+ pgd_t *save_pgd, *pgd_k, *pgd_efi;7676+ p4d_t *p4d, *p4d_k, *p4d_efi;7777+ pud_t *pud;76787779 int pgd;7878- int n_pgds;8080+ int n_pgds, i, j;79818082 if (!efi_enabled(EFI_OLD_MEMMAP)) {8183 save_pgd = (pgd_t *)read_cr3();···9088 n_pgds = DIV_ROUND_UP((max_pfn << PAGE_SHIFT), PGDIR_SIZE);9189 save_pgd = kmalloc_array(n_pgds, sizeof(*save_pgd), GFP_KERNEL);92909191+ /*9292+ * Build 1:1 identity mapping for efi=old_map usage. Note that9393+ * PAGE_OFFSET is PGDIR_SIZE aligned when KASLR is disabled, while9494+ * it is PUD_SIZE ALIGNED with KASLR enabled. So for a given physical9595+ * address X, the pud_index(X) != pud_index(__va(X)), we can only copy9696+ * PUD entry of __va(X) to fill in pud entry of X to build 1:1 mapping.9797+ * This means here we can only reuse the PMD tables of the direct mapping.9898+ */9399 for (pgd = 0; pgd < n_pgds; pgd++) {9494- save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE);9595- vaddress = (unsigned long)__va(pgd * PGDIR_SIZE);9696- set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress));100100+ addr_pgd = (unsigned long)(pgd * PGDIR_SIZE);101101+ vaddr = (unsigned long)__va(pgd * PGDIR_SIZE);102102+ pgd_efi = pgd_offset_k(addr_pgd);103103+ save_pgd[pgd] = *pgd_efi;104104+105105+ p4d = p4d_alloc(&init_mm, pgd_efi, addr_pgd);106106+ if (!p4d) {107107+ pr_err("Failed to allocate p4d table!\n");108108+ goto out;109109+ }110110+111111+ for (i = 0; i < PTRS_PER_P4D; i++) {112112+ addr_p4d = addr_pgd + i * P4D_SIZE;113113+ p4d_efi = p4d + p4d_index(addr_p4d);114114+115115+ pud = pud_alloc(&init_mm, p4d_efi, addr_p4d);116116+ if (!pud) {117117+ pr_err("Failed to allocate pud table!\n");118118+ goto out;119119+ }120120+121121+ for (j = 0; j < PTRS_PER_PUD; j++) {122122+ addr_pud = addr_p4d + j * PUD_SIZE;123123+124124+ if (addr_pud > (max_pfn << PAGE_SHIFT))125125+ break;126126+127127+ vaddr = (unsigned long)__va(addr_pud);128128+129129+ pgd_k = pgd_offset_k(vaddr);130130+ p4d_k = p4d_offset(pgd_k, vaddr);131131+ pud[j] = *pud_offset(p4d_k, vaddr);132132+ }133133+ }97134 }98135out:99136 __flush_tlb_all();···145104 /*146105 * After the lock is released, the original page table is restored.147106 */148148- int pgd_idx;107107+ int pgd_idx, i;149108 int nr_pgds;109109+ pgd_t *pgd;110110+ p4d_t *p4d;111111+ pud_t *pud;150112151113 if (!efi_enabled(EFI_OLD_MEMMAP)) {152114 write_cr3((unsigned long)save_pgd);···159115160116 nr_pgds = DIV_ROUND_UP((max_pfn << PAGE_SHIFT) , PGDIR_SIZE);161117162162- for (pgd_idx = 0; pgd_idx < nr_pgds; pgd_idx++)118118+ for (pgd_idx = 0; pgd_idx < nr_pgds; pgd_idx++) {119119+ pgd = pgd_offset_k(pgd_idx * PGDIR_SIZE);163120 set_pgd(pgd_offset_k(pgd_idx * PGDIR_SIZE), save_pgd[pgd_idx]);121121+122122+ if (!(pgd_val(*pgd) & _PAGE_PRESENT))123123+ continue;124124+125125+ for (i = 0; i < PTRS_PER_P4D; i++) {126126+ p4d = p4d_offset(pgd,127127+ pgd_idx * PGDIR_SIZE + i * P4D_SIZE);128128+129129+ if (!(p4d_val(*p4d) & _PAGE_PRESENT))130130+ continue;131131+132132+ pud = (pud_t *)p4d_page_vaddr(*p4d);133133+ pud_free(&init_mm, pud);134134+ }135135+136136+ p4d = (p4d_t *)pgd_page_vaddr(*pgd);137137+ p4d_free(&init_mm, p4d);138138+ }164139165140 kfree(save_pgd);166141
+3
arch/x86/platform/efi/quirks.c
···
         free_bootmem_late(start, size);
     }
 
+    if (!num_entries)
+        return;
+
     new_size = efi.memmap.desc_size * num_entries;
     new_phys = efi_memmap_alloc(num_entries);
     if (!new_phys) {
···
 #define release_segments(mm) do { } while(0)
 #define forget_segments() do { } while (0)
 
-#define thread_saved_pc(tsk) (task_pt_regs(tsk)->pc)
-
 extern unsigned long get_wchan(struct task_struct *p);
 
 #define KSTK_EIP(tsk) (task_pt_regs(tsk)->pc)
-5
arch/xtensa/kernel/irq.c
···
 {
     int irq = irq_find_mapping(NULL, hwirq);
 
-    if (hwirq >= NR_IRQS) {
-        printk(KERN_EMERG "%s: cannot handle IRQ %d\n",
-                __func__, hwirq);
-    }
-
 #ifdef CONFIG_DEBUG_STACKOVERFLOW
     /* Debugging check for stack overflow: is there less than 1KB free? */
     {
···
         /* At this point: (!vmm || addr < vmm->vm_end). */
         if (TASK_SIZE - len < addr)
             return -ENOMEM;
-        if (!vmm || addr + len <= vmm->vm_start)
+        if (!vmm || addr + len <= vm_start_gap(vmm))
             return addr;
         addr = vmm->vm_end;
         if (flags & MAP_SHARED)
···5252BFQG_FLAG_FNS(empty)5353#undef BFQG_FLAG_FNS54545555-/* This should be called with the queue_lock held. */5555+/* This should be called with the scheduler lock held. */5656static void bfqg_stats_update_group_wait_time(struct bfqg_stats *stats)5757{5858 unsigned long long now;···6767 bfqg_stats_clear_waiting(stats);6868}69697070-/* This should be called with the queue_lock held. */7070+/* This should be called with the scheduler lock held. */7171static void bfqg_stats_set_start_group_wait_time(struct bfq_group *bfqg,7272 struct bfq_group *curr_bfqg)7373{···8181 bfqg_stats_mark_waiting(stats);8282}83838484-/* This should be called with the queue_lock held. */8484+/* This should be called with the scheduler lock held. */8585static void bfqg_stats_end_empty_time(struct bfqg_stats *stats)8686{8787 unsigned long long now;···203203204204static void bfqg_get(struct bfq_group *bfqg)205205{206206- return blkg_get(bfqg_to_blkg(bfqg));206206+ bfqg->ref++;207207}208208209209void bfqg_put(struct bfq_group *bfqg)210210{211211- return blkg_put(bfqg_to_blkg(bfqg));211211+ bfqg->ref--;212212+213213+ if (bfqg->ref == 0)214214+ kfree(bfqg);215215+}216216+217217+static void bfqg_and_blkg_get(struct bfq_group *bfqg)218218+{219219+ /* see comments in bfq_bic_update_cgroup for why refcounting bfqg */220220+ bfqg_get(bfqg);221221+222222+ blkg_get(bfqg_to_blkg(bfqg));223223+}224224+225225+void bfqg_and_blkg_put(struct bfq_group *bfqg)226226+{227227+ bfqg_put(bfqg);228228+229229+ blkg_put(bfqg_to_blkg(bfqg));212230}213231214232void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,···330312 if (bfqq) {331313 bfqq->ioprio = bfqq->new_ioprio;332314 bfqq->ioprio_class = bfqq->new_ioprio_class;333333- bfqg_get(bfqg);315315+ /*316316+ * Make sure that bfqg and its associated blkg do not317317+ * disappear before entity.318318+ */319319+ bfqg_and_blkg_get(bfqg);334320 }335321 entity->parent = bfqg->my_entity; /* NULL for root group */336322 entity->sched_data = &bfqg->sched_data;···421399 return NULL;422400 }423401402402+ /* see comments in bfq_bic_update_cgroup for why refcounting */403403+ bfqg_get(bfqg);424404 return &bfqg->pd;425405}426406···450426 struct bfq_group *bfqg = pd_to_bfqg(pd);451427452428 bfqg_stats_exit(&bfqg->stats);453453- return kfree(bfqg);429429+ bfqg_put(bfqg);454430}455431456432void bfq_pd_reset_stats(struct blkg_policy_data *pd)···520496 * Move @bfqq to @bfqg, deactivating it from its old group and reactivating521497 * it on the new one. Avoid putting the entity on the old group idle tree.522498 *523523- * Must be called under the queue lock; the cgroup owning @bfqg must524524- * not disappear (by now this just means that we are called under525525- * rcu_read_lock()).499499+ * Must be called under the scheduler lock, to make sure that the blkg500500+ * owning @bfqg does not disappear (see comments in501501+ * bfq_bic_update_cgroup on guaranteeing the consistency of blkg502502+ * objects).526503 */527504void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,528505 struct bfq_group *bfqg)···544519 bfq_deactivate_bfqq(bfqd, bfqq, false, false);545520 else if (entity->on_st)546521 bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);547547- bfqg_put(bfqq_group(bfqq));522522+ bfqg_and_blkg_put(bfqq_group(bfqq));548523549549- /*550550- * Here we use a reference to bfqg. 
We don't need a refcounter551551- * as the cgroup reference will not be dropped, so that its552552- * destroy() callback will not be invoked.553553- */554524 entity->parent = bfqg->my_entity;555525 entity->sched_data = &bfqg->sched_data;556556- bfqg_get(bfqg);526526+ /* pin down bfqg and its associated blkg */527527+ bfqg_and_blkg_get(bfqg);557528558529 if (bfq_bfqq_busy(bfqq)) {559530 bfq_pos_tree_add_move(bfqd, bfqq);···566545 * @bic: the bic to move.567546 * @blkcg: the blk-cgroup to move to.568547 *569569- * Move bic to blkcg, assuming that bfqd->queue is locked; the caller570570- * has to make sure that the reference to cgroup is valid across the call.548548+ * Move bic to blkcg, assuming that bfqd->lock is held; which makes549549+ * sure that the reference to cgroup is valid across the call (see550550+ * comments in bfq_bic_update_cgroup on this issue)571551 *572552 * NOTE: an alternative approach might have been to store the current573553 * cgroup in bfqq and getting a reference to it, reducing the lookup···626604 goto out;627605628606 bfqg = __bfq_bic_change_cgroup(bfqd, bic, bio_blkcg(bio));607607+ /*608608+ * Update blkg_path for bfq_log_* functions. We cache this609609+ * path, and update it here, for the following610610+ * reasons. Operations on blkg objects in blk-cgroup are611611+ * protected with the request_queue lock, and not with the612612+ * lock that protects the instances of this scheduler613613+ * (bfqd->lock). This exposes BFQ to the following sort of614614+ * race.615615+ *616616+ * The blkg_lookup performed in bfq_get_queue, protected617617+ * through rcu, may happen to return the address of a copy of618618+ * the original blkg. If this is the case, then the619619+ * bfqg_and_blkg_get performed in bfq_get_queue, to pin down620620+ * the blkg, is useless: it does not prevent blk-cgroup code621621+ * from destroying both the original blkg and all objects622622+ * directly or indirectly referred by the copy of the623623+ * blkg.624624+ *625625+ * On the bright side, destroy operations on a blkg invoke, as626626+ * a first step, hooks of the scheduler associated with the627627+ * blkg. And these hooks are executed with bfqd->lock held for628628+ * BFQ. As a consequence, for any blkg associated with the629629+ * request queue this instance of the scheduler is attached630630+ * to, we are guaranteed that such a blkg is not destroyed, and631631+ * that all the pointers it contains are consistent, while we632632+ * are holding bfqd->lock. A blkg_lookup performed with633633+ * bfqd->lock held then returns a fully consistent blkg, which634634+ * remains consistent until this lock is held.635635+ *636636+ * Thanks to the last fact, and to the fact that: (1) bfqg has637637+ * been obtained through a blkg_lookup in the above638638+ * assignment, and (2) bfqd->lock is being held, here we can639639+ * safely use the policy data for the involved blkg (i.e., the640640+ * field bfqg->pd) to get to the blkg associated with bfqg,641641+ * and then we can safely use any field of blkg. After we642642+ * release bfqd->lock, even just getting blkg through this643643+ * bfqg may cause dangling references to be traversed, as644644+ * bfqg->pd may not exist any more.645645+ *646646+ * In view of the above facts, here we cache, in the bfqg, any647647+ * blkg data we may need for this bic, and for its associated648648+ * bfq_queue. 
As of now, we need to cache only the path of the649649+ * blkg, which is used in the bfq_log_* functions.650650+ *651651+ * Finally, note that bfqg itself needs to be protected from652652+ * destruction on the blkg_free of the original blkg (which653653+ * invokes bfq_pd_free). We use an additional private654654+ * refcounter for bfqg, to let it disappear only after no655655+ * bfq_queue refers to it any longer.656656+ */657657+ blkg_path(bfqg_to_blkg(bfqg), bfqg->blkg_path, sizeof(bfqg->blkg_path));629658 bic->blkcg_serial_nr = serial_nr;630659out:631660 rcu_read_unlock();···713640 * @bfqd: the device data structure with the root group.714641 * @bfqg: the group to move from.715642 * @st: the service tree with the entities.716716- *717717- * Needs queue_lock to be taken and reference to be valid over the call.718643 */719644static void bfq_reparent_active_entities(struct bfq_data *bfqd,720645 struct bfq_group *bfqg,···763692 /*764693 * The idle tree may still contain bfq_queues belonging765694 * to exited task because they never migrated to a different766766- * cgroup from the one being destroyed now. No one else767767- * can access them so it's safe to act without any lock.695695+ * cgroup from the one being destroyed now.768696 */769697 bfq_flush_idle_tree(st);770698
···
     /* must be the first member */
     struct blkg_policy_data pd;
 
+    /* cached path for this blkg (see comments in bfq_bic_update_cgroup) */
+    char blkg_path[128];
+
+    /* reference counter (see comments in bfq_bic_update_cgroup) */
+    int ref;
+
     struct bfq_entity entity;
     struct bfq_sched_data sched_data;
 
···
 struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg);
 struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
 struct bfq_group *bfq_create_group_hierarchy(struct bfq_data *bfqd, int node);
-void bfqg_put(struct bfq_group *bfqg);
+void bfqg_and_blkg_put(struct bfq_group *bfqg);
 
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
 extern struct cftype bfq_blkcg_legacy_files[];
···
 struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
 
 #define bfq_log_bfqq(bfqd, bfqq, fmt, args...) do { \
-    char __pbuf[128]; \
-    \
-    blkg_path(bfqg_to_blkg(bfqq_group(bfqq)), __pbuf, sizeof(__pbuf)); \
-    blk_add_trace_msg((bfqd)->queue, "bfq%d%c %s " fmt, (bfqq)->pid, \
+    blk_add_trace_msg((bfqd)->queue, "bfq%d%c %s " fmt, (bfqq)->pid,\
     bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \
-    __pbuf, ##args); \
+    bfqq_group(bfqq)->blkg_path, ##args); \
 } while (0)
 
-#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) do { \
-    char __pbuf[128]; \
-    \
-    blkg_path(bfqg_to_blkg(bfqg), __pbuf, sizeof(__pbuf)); \
-    blk_add_trace_msg((bfqd)->queue, "%s " fmt, __pbuf, ##args); \
-} while (0)
+#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) \
+    blk_add_trace_msg((bfqd)->queue, "%s " fmt, (bfqg)->blkg_path, ##args)
 
 #else /* CONFIG_BFQ_GROUP_IOSCHED */
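The new bfqg->ref is a plain int rather than an atomic because, per the comments in the bfq-cgroup hunk above, every get and put runs under the scheduler lock (bfqd->lock). The same pattern in miniature (userspace sketch, with a pthread mutex standing in for the scheduler lock; names invented):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct group {
    int ref;            /* plain int: only touched under the lock */
    char path[128];     /* cached, like bfqg->blkg_path */
};

static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;

static void group_get(struct group *g)
{
    g->ref++;           /* caller holds sched_lock */
}

static void group_put(struct group *g)
{
    if (--g->ref == 0)  /* caller holds sched_lock */
        free(g);
}

int main(void)
{
    struct group *g = calloc(1, sizeof(*g));

    g->ref = 1;         /* creation reference */
    snprintf(g->path, sizeof(g->path), "testg");

    pthread_mutex_lock(&sched_lock);
    group_get(g);       /* a queue pins the group */
    group_put(g);       /* ...and lets go */
    group_put(g);       /* creation ref dropped: group is freed */
    pthread_mutex_unlock(&sched_lock);
    return 0;
}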
+3
block/bio-integrity.c
···
     if (bio_op(bio) != REQ_OP_READ && bio_op(bio) != REQ_OP_WRITE)
         return false;
 
+    if (!bio_sectors(bio))
+        return false;
+
     /* Already protected? */
     if (bio_integrity(bio))
         return false;
+9-3
block/bio.c
···
     return bvl;
 }
 
-static void __bio_free(struct bio *bio)
+void bio_uninit(struct bio *bio)
 {
     bio_disassociate_task(bio);
 
     if (bio_integrity(bio))
         bio_integrity_free(bio);
 }
+EXPORT_SYMBOL(bio_uninit);
 
 static void bio_free(struct bio *bio)
 {
     struct bio_set *bs = bio->bi_pool;
     void *p;
 
-    __bio_free(bio);
+    bio_uninit(bio);
 
     if (bs) {
         bvec_free(bs->bvec_pool, bio->bi_io_vec, BVEC_POOL_IDX(bio));
···
     }
 }
 
+/*
+ * Users of this function have their own bio allocation. Subsequently,
+ * they must remember to pair any call to bio_init() with bio_uninit()
+ * when IO has completed, or when the bio is released.
+ */
 void bio_init(struct bio *bio, struct bio_vec *table,
           unsigned short max_vecs)
 {
···
 {
     unsigned long flags = bio->bi_flags & (~0UL << BIO_RESET_BITS);
 
-    __bio_free(bio);
+    bio_uninit(bio);
 
     memset(bio, 0, BIO_RESET_BYTES);
     bio->bi_flags = flags;
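The new comment above spells out the contract that the bio_uninit() export creates: code that embeds a bio and calls bio_init() itself must pair it with bio_uninit() once the I/O is done. The same shape reduced to a self-contained example (names invented; nothing here is block-layer code):

#include <stdlib.h>
#include <string.h>

struct bio_like {
    void *private;      /* resources acquired at init time */
};

static void bio_like_init(struct bio_like *b)
{
    memset(b, 0, sizeof(*b));
    b->private = malloc(64);
}

static void bio_like_uninit(struct bio_like *b)
{
    free(b->private);   /* must run even though *b itself is embedded */
    b->private = NULL;
}

int main(void)
{
    struct bio_like b;  /* embedded/on-stack, never freed as an object */

    bio_like_init(&b);
    /* ... submit I/O, wait for completion ... */
    bio_like_uninit(&b);
    return 0;
}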
···
     if (!rl->rq_pool)
         return -ENOMEM;
 
+    if (rl != &q->root_rl)
+        WARN_ON_ONCE(!blk_get_queue(q));
+
     return 0;
 }
 
-void blk_exit_rl(struct request_list *rl)
+void blk_exit_rl(struct request_queue *q, struct request_list *rl)
 {
-    if (rl->rq_pool)
+    if (rl->rq_pool) {
         mempool_destroy(rl->rq_pool);
+        if (rl != &q->root_rl)
+            blk_put_queue(q);
+    }
 }
 
 struct request_queue *blk_alloc_queue(gfp_t gfp_mask)
+46-12
block/blk-mq-sched.c
···
         __blk_mq_sched_assign_ioc(q, rq, bio, ioc);
 }
 
+/*
+ * Mark a hardware queue as needing a restart. For shared queues, maintain
+ * a count of how many hardware queues are marked for restart.
+ */
+static void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx)
+{
+    if (test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
+        return;
+
+    if (hctx->flags & BLK_MQ_F_TAG_SHARED) {
+        struct request_queue *q = hctx->queue;
+
+        if (!test_and_set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
+            atomic_inc(&q->shared_hctx_restart);
+    } else
+        set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
+}
+
+static bool blk_mq_sched_restart_hctx(struct blk_mq_hw_ctx *hctx)
+{
+    if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
+        return false;
+
+    if (hctx->flags & BLK_MQ_F_TAG_SHARED) {
+        struct request_queue *q = hctx->queue;
+
+        if (test_and_clear_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
+            atomic_dec(&q->shared_hctx_restart);
+    } else
+        clear_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
+
+    if (blk_mq_hctx_has_pending(hctx)) {
+        blk_mq_run_hw_queue(hctx, true);
+        return true;
+    }
+
+    return false;
+}
+
 struct request *blk_mq_sched_get_request(struct request_queue *q,
                      struct bio *bio,
                      unsigned int op,
···
     return true;
 }
 
-static bool blk_mq_sched_restart_hctx(struct blk_mq_hw_ctx *hctx)
-{
-    if (test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state)) {
-        clear_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
-        if (blk_mq_hctx_has_pending(hctx)) {
-            blk_mq_run_hw_queue(hctx, true);
-            return true;
-        }
-    }
-    return false;
-}
-
 /**
  * list_for_each_entry_rcu_rr - iterate in a round-robin fashion over rcu list
  * @pos: loop cursor.
···
     unsigned int i, j;
 
     if (set->flags & BLK_MQ_F_TAG_SHARED) {
+        /*
+         * If this is 0, then we know that no hardware queues
+         * have RESTART marked. We're done.
+         */
+        if (!atomic_read(&queue->shared_hctx_restart))
+            return;
+
         rcu_read_lock();
         list_for_each_entry_rcu_rr(q, queue, &set->tag_list,
                        tag_set_list) {
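The atomic shared_hctx_restart counter lets the restart path return before the expensive round-robin scan whenever no shared hardware queue is marked. A runnable miniature of the mark/clear bookkeeping, with C11 atomics standing in for test_and_set_bit()/atomic_inc() (names invented):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct hctx { atomic_bool restart; };
struct queue { atomic_int shared_restart; struct hctx hctx[4]; };

static void mark_restart(struct queue *q, struct hctx *h)
{
    /* only the 0 -> 1 transition bumps the queue-wide count */
    if (!atomic_exchange(&h->restart, true))
        atomic_fetch_add(&q->shared_restart, 1);
}

static bool clear_restart(struct queue *q, struct hctx *h)
{
    if (!atomic_exchange(&h->restart, false))
        return false;
    atomic_fetch_sub(&q->shared_restart, 1);
    return true;
}

int main(void)
{
    struct queue q = { 0 };

    if (atomic_load(&q.shared_restart) == 0)
        puts("no queue marked: scan skipped");   /* the new fast path */

    mark_restart(&q, &q.hctx[1]);
    mark_restart(&q, &q.hctx[1]);                /* idempotent: count stays 1 */
    printf("marked: %d\n", atomic_load(&q.shared_restart)); /* 1 */

    clear_restart(&q, &q.hctx[1]);
    printf("marked: %d\n", atomic_load(&q.shared_restart)); /* 0 */
    return 0;
}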
-9
block/blk-mq-sched.h
···
     return false;
 }
 
-/*
- * Mark a hardware queue as needing a restart.
- */
-static inline void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx)
-{
-    if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
-        set_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
-}
-
 static inline bool blk_mq_sched_needs_restart(struct blk_mq_hw_ctx *hctx)
 {
     return test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state);
···
 }
 
 /**
- * blk_release_queue: - release a &struct request_queue when it is no longer needed
- * @kobj: the kobj belonging to the request queue to be released
+ * __blk_release_queue - release a request queue when it is no longer needed
+ * @work: pointer to the release_work member of the request queue to be released
  *
  * Description:
- *     blk_release_queue is the pair to blk_init_queue() or
- *     blk_queue_make_request(). It should be called when a request queue is
- *     being released; typically when a block device is being de-registered.
- *     Currently, its primary task it to free all the &struct request
- *     structures that were allocated to the queue and the queue itself.
+ *     blk_release_queue is the counterpart of blk_init_queue(). It should be
+ *     called when a request queue is being released; typically when a block
+ *     device is being de-registered. Its primary task it to free the queue
+ *     itself.
  *
- * Note:
+ * Notes:
  *     The low level driver must have finished any outstanding requests first
  *     via blk_cleanup_queue().
- **/
-static void blk_release_queue(struct kobject *kobj)
+ *
+ *     Although blk_release_queue() may be called with preemption disabled,
+ *     __blk_release_queue() may sleep.
+ */
+static void __blk_release_queue(struct work_struct *work)
 {
-    struct request_queue *q =
-        container_of(kobj, struct request_queue, kobj);
+    struct request_queue *q = container_of(work, typeof(*q), release_work);
 
     if (test_bit(QUEUE_FLAG_POLL_STATS, &q->queue_flags))
         blk_stat_remove_callback(q, q->poll_cb);
···
 
     blk_free_queue_stats(q->stats);
 
-    blk_exit_rl(&q->root_rl);
+    blk_exit_rl(q, &q->root_rl);
 
     if (q->queue_tags)
         __blk_queue_free_tags(q);
···
 
     ida_simple_remove(&blk_queue_ida, q->id);
     call_rcu(&q->rcu_head, blk_free_queue_rcu);
+}
+
+static void blk_release_queue(struct kobject *kobj)
+{
+    struct request_queue *q =
+        container_of(kobj, struct request_queue, kobj);
+
+    INIT_WORK(&q->release_work, __blk_release_queue);
+    schedule_work(&q->release_work);
 }
 
 static const struct sysfs_ops queue_sysfs_ops = {
+18-4
block/blk-throttle.c
···
 #define MIN_THROTL_IOPS (10)
 #define DFL_LATENCY_TARGET (-1L)
 #define DFL_IDLE_THRESHOLD (0)
+#define DFL_HD_BASELINE_LATENCY (4000L) /* 4ms */
+#define LATENCY_FILTERED_SSD (0)
+/*
+ * For HD, very small latency comes from sequential IO. Such IO is helpless to
+ * help determine if its IO is impacted by others, hence we ignore the IO
+ */
+#define LATENCY_FILTERED_HD (1000L) /* 1ms */
 
 #define SKIP_LATENCY (((u64)1) << BLK_STAT_RES_SHIFT)
 
···
     struct avg_latency_bucket avg_buckets[LATENCY_BUCKET_SIZE];
     struct latency_bucket __percpu *latency_buckets;
     unsigned long last_calculate_time;
+    unsigned long filtered_latency;
 
     bool track_bio_latency;
 };
···
 static void throtl_schedule_pending_timer(struct throtl_service_queue *sq,
                       unsigned long expires)
 {
-    unsigned long max_expire = jiffies + 8 * sq_to_tg(sq)->td->throtl_slice;
+    unsigned long max_expire = jiffies + 8 * sq_to_td(sq)->throtl_slice;
 
     /*
      * Since we are adjusting the throttle limit dynamically, the sleep
···
     throtl_track_latency(tg->td, blk_stat_size(&bio->bi_issue_stat),
                  bio_op(bio), lat);
 
-    if (tg->latency_target) {
+    if (tg->latency_target && lat >= tg->td->filtered_latency) {
         int bucket;
         unsigned int threshold;
 
···
 void blk_throtl_register_queue(struct request_queue *q)
 {
     struct throtl_data *td;
+    int i;
 
     td = q->td;
     BUG_ON(!td);
 
-    if (blk_queue_nonrot(q))
+    if (blk_queue_nonrot(q)) {
         td->throtl_slice = DFL_THROTL_SLICE_SSD;
-    else
+        td->filtered_latency = LATENCY_FILTERED_SSD;
+    } else {
         td->throtl_slice = DFL_THROTL_SLICE_HD;
+        td->filtered_latency = LATENCY_FILTERED_HD;
+        for (i = 0; i < LATENCY_BUCKET_SIZE; i++)
+            td->avg_buckets[i].latency = DFL_HD_BASELINE_LATENCY;
+    }
 #ifndef CONFIG_BLK_DEV_THROTTLING_LOW
     /* if no low limit, use previous default */
     td->throtl_slice = DFL_THROTL_SLICE_HD;
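The filter drops sub-threshold samples before they reach the latency buckets: on rotating disks, sequential I/O completes in well under a millisecond and says nothing about contention. Reduced to a sketch (thresholds mirror the LATENCY_FILTERED_* defaults above; the real accounting is bucketed per I/O size, not a single running mean):

#include <stdio.h>

#define LATENCY_FILTERED_SSD 0L     /* usec: keep everything */
#define LATENCY_FILTERED_HD  1000L  /* usec: ignore sub-1ms sequential I/O */

static long filtered_latency;       /* chosen per queue type at register time */
static long sum, nr;

static void track_latency(long lat_usec)
{
    if (lat_usec < filtered_latency)
        return;                     /* says nothing about being impacted */
    sum += lat_usec;
    nr++;
}

int main(void)
{
    long samples[] = { 120, 90, 4500, 80, 6100 };

    filtered_latency = LATENCY_FILTERED_HD;     /* rotational queue */
    for (int i = 0; i < 5; i++)
        track_latency(samples[i]);
    printf("avg over %ld tracked samples: %ld us\n", nr, sum / nr);
    return 0;
}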
···
 static const int cfq_hist_divisor = 4;
 
 /*
- * offset from end of service tree
+ * offset from end of queue service tree for idle class
  */
 #define CFQ_IDLE_DELAY (NSEC_PER_SEC / 5)
+/* offset from end of group service tree under time slice mode */
+#define CFQ_SLICE_MODE_GROUP_DELAY (NSEC_PER_SEC / 5)
+/* offset from end of group service under IOPS mode */
+#define CFQ_IOPS_MODE_GROUP_DELAY (HZ / 5)
 
 /*
  * below this threshold, we consider thinktime immediate
···
     cfqg->vfraction = max_t(unsigned, vfr, 1);
 }
 
+static inline u64 cfq_get_cfqg_vdisktime_delay(struct cfq_data *cfqd)
+{
+    if (!iops_mode(cfqd))
+        return CFQ_SLICE_MODE_GROUP_DELAY;
+    else
+        return CFQ_IOPS_MODE_GROUP_DELAY;
+}
+
 static void
 cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
 {
···
     n = rb_last(&st->rb);
     if (n) {
         __cfqg = rb_entry_cfqg(n);
-        cfqg->vdisktime = __cfqg->vdisktime + CFQ_IDLE_DELAY;
+        cfqg->vdisktime = __cfqg->vdisktime +
+            cfq_get_cfqg_vdisktime_delay(cfqd);
     } else
         cfqg->vdisktime = st->min_vdisktime;
     cfq_group_service_tree_add(st, cfqg);
+1-1
crypto/asymmetric_keys/public_key.c
···
      * signature and returns that to us.
      */
     ret = crypto_akcipher_verify(req);
-    if (ret == -EINPROGRESS) {
+    if ((ret == -EINPROGRESS) || (ret == -EBUSY)) {
         wait_for_completion(&compl.completion);
         ret = compl.err;
     }
···
         }
     }
 
+    ret = -ENOMEM;
     cert->pub->key = kmemdup(ctx->key, ctx->key_size, GFP_KERNEL);
     if (!cert->pub->key)
         goto error_decode;
+2-3
crypto/drbg.c
···
             break;
         case -EINPROGRESS:
         case -EBUSY:
-            ret = wait_for_completion_interruptible(
-                &drbg->ctr_completion);
-            if (!ret && !drbg->ctr_async_err) {
+            wait_for_completion(&drbg->ctr_completion);
+            if (!drbg->ctr_async_err) {
                 reinit_completion(&drbg->ctr_completion);
                 break;
             }
+2-4
crypto/gcm.c
···
 
     err = crypto_skcipher_encrypt(&data->req);
     if (err == -EINPROGRESS || err == -EBUSY) {
-        err = wait_for_completion_interruptible(
-            &data->result.completion);
-        if (!err)
-            err = data->result.err;
+        wait_for_completion(&data->result.completion);
+        err = data->result.err;
     }
 
     if (err)
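This hunk, and the drbg and public_key hunks above, converge on one pattern: an async crypto request may complete later whether the submit returned -EINPROGRESS or -EBUSY, so the waiter must block uninterruptibly and take the result from the completion, not from the submit call. A userspace model of that pattern (a pthread condition variable standing in for the kernel's struct completion; names invented):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

struct completion {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    int done;
    int err;            /* result delivered by the async callback */
};

static void complete(struct completion *c, int err)
{
    pthread_mutex_lock(&c->lock);
    c->err = err;
    c->done = 1;
    pthread_cond_signal(&c->cond);
    pthread_mutex_unlock(&c->lock);
}

static int wait_for_completion(struct completion *c)
{
    pthread_mutex_lock(&c->lock);
    while (!c->done)
        pthread_cond_wait(&c->cond, &c->lock);
    pthread_mutex_unlock(&c->lock);
    return c->err;
}

int main(void)
{
    struct completion c = { PTHREAD_MUTEX_INITIALIZER,
                            PTHREAD_COND_INITIALIZER, 0, 0 };
    int ret = -EINPROGRESS;     /* pretend the submit went async */

    complete(&c, 0);            /* pretend the callback already fired */

    if (ret == -EINPROGRESS || ret == -EBUSY)
        ret = wait_for_completion(&c);  /* uninterruptible by design */
    printf("final result: %d\n", ret);
    return 0;
}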
+25-13
drivers/acpi/acpica/tbutils.c
···
         }
     }
 
-    table_desc->validation_count++;
-    if (table_desc->validation_count == 0) {
-        ACPI_ERROR((AE_INFO,
-            "Table %p, Validation count is zero after increment\n",
-            table_desc));
-        table_desc->validation_count--;
-        return_ACPI_STATUS(AE_LIMIT);
+    if (table_desc->validation_count < ACPI_MAX_TABLE_VALIDATIONS) {
+        table_desc->validation_count++;
+
+        /*
+         * Detect validation_count overflows to ensure that the warning
+         * message will only be printed once.
+         */
+        if (table_desc->validation_count >= ACPI_MAX_TABLE_VALIDATIONS) {
+            ACPI_WARNING((AE_INFO,
+                "Table %p, Validation count overflows\n",
+                table_desc));
+        }
     }
 
     *out_table = table_desc->pointer;
···
 
     ACPI_FUNCTION_TRACE(acpi_tb_put_table);
 
-    if (table_desc->validation_count == 0) {
-        ACPI_WARNING((AE_INFO,
-                  "Table %p, Validation count is zero before decrement\n",
-                  table_desc));
-        return_VOID;
+    if (table_desc->validation_count < ACPI_MAX_TABLE_VALIDATIONS) {
+        table_desc->validation_count--;
+
+        /*
+         * Detect validation_count underflows to ensure that the warning
+         * message will only be printed once.
+         */
+        if (table_desc->validation_count >= ACPI_MAX_TABLE_VALIDATIONS) {
+            ACPI_WARNING((AE_INFO,
+                "Table %p, Validation count underflows\n",
+                table_desc));
+            return_VOID;
+        }
     }
-    table_desc->validation_count--;
 
     if (table_desc->validation_count == 0) {
 
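The validation count becomes a saturating counter: once it reaches ACPI_MAX_TABLE_VALIDATIONS it sticks there, warns once, and later puts become no-ops because the count is no longer exact. The same behavior in a standalone sketch (the limit value here is invented):

#include <stdio.h>

#define MAX_VALIDATIONS 0xffu       /* illustrative limit */

static unsigned int count;

static void get_table(void)
{
    if (count < MAX_VALIDATIONS) {
        count++;
        if (count >= MAX_VALIDATIONS)       /* warn exactly once */
            fprintf(stderr, "validation count saturated\n");
    }
}

static void put_table(void)
{
    if (count < MAX_VALIDATIONS) {
        count--;
        /* an unbalanced put wraps past the limit: warn once, then stick */
        if (count >= MAX_VALIDATIONS)
            fprintf(stderr, "validation count underflow\n");
    }
}

int main(void)
{
    for (int i = 0; i < 300; i++)
        get_table();
    printf("count after 300 gets: %u\n", count);    /* 255, not 300 % 256 */
    put_table();                                    /* no-op: saturated */
    printf("count after put: %u\n", count);         /* still 255 */
    return 0;
}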
-9
drivers/acpi/acpica/utresrc.c
···
         return_ACPI_STATUS(AE_AML_NO_RESOURCE_END_TAG);
     }
 
-    /*
-     * The end_tag opcode must be followed by a zero byte.
-     * Although this byte is technically defined to be a checksum,
-     * in practice, all ASL compilers set this byte to zero.
-     */
-    if (*(aml + 1) != 0) {
-        return_ACPI_STATUS(AE_AML_NO_RESOURCE_END_TAG);
-    }
-
     /* Return the pointer to the end_tag if requested */
 
     if (!user_function) {
+14-8
drivers/acpi/arm64/iort.c
···
     int ret = -ENODEV;
     struct fwnode_handle *iort_fwnode;
 
-    /*
-     * If we already translated the fwspec there
-     * is nothing left to do, return the iommu_ops.
-     */
-    ops = iort_fwspec_iommu_ops(dev->iommu_fwspec);
-    if (ops)
-        return ops;
-
     if (node) {
         iort_fwnode = iort_get_fwnode(node);
         if (!iort_fwnode)
···
     u32 streamid = 0;
     int err;
 
+    /*
+     * If we already translated the fwspec there
+     * is nothing left to do, return the iommu_ops.
+     */
+    ops = iort_fwspec_iommu_ops(dev->iommu_fwspec);
+    if (ops)
+        return ops;
+
     if (dev_is_pci(dev)) {
         struct pci_bus *bus = to_pci_dev(dev)->bus;
         u32 rid;
···
     err = iort_add_device_replay(ops, dev);
     if (err)
         ops = ERR_PTR(err);
+
+    /* Ignore all other errors apart from EPROBE_DEFER */
+    if (IS_ERR(ops) && (PTR_ERR(ops) != -EPROBE_DEFER)) {
+        dev_dbg(dev, "Adding to IOMMU failed: %ld\n", PTR_ERR(ops));
+        ops = NULL;
+    }
 
     return ops;
 }
···
 #include <linux/pm_qos.h>
 #include <linux/pm_domain.h>
 #include <linux/pm_runtime.h>
-#include <linux/suspend.h>
 
 #include "internal.h"
 
···
     mutex_lock(&acpi_pm_notifier_lock);
 
     if (adev->wakeup.flags.notifier_present) {
-        pm_wakeup_ws_event(adev->wakeup.ws, 0, true);
+        __pm_wakeup_event(adev->wakeup.ws, 0);
         if (adev->wakeup.context.work.func)
             queue_pm_work(&adev->wakeup.context.work);
     }
+39-32
drivers/acpi/scan.c
···13711371 iort_set_dma_mask(dev);1372137213731373 iommu = iort_iommu_configure(dev);13741374- if (IS_ERR(iommu))13751375- return PTR_ERR(iommu);13741374+ if (IS_ERR(iommu) && PTR_ERR(iommu) == -EPROBE_DEFER)13751375+ return -EPROBE_DEFER;1376137613771377 size = max(dev->coherent_dma_mask, dev->coherent_dma_mask + 1);13781378 /*···14281428 adev->flags.coherent_dma = cca;14291429}1430143014311431+static int acpi_check_spi_i2c_slave(struct acpi_resource *ares, void *data)14321432+{14331433+ bool *is_spi_i2c_slave_p = data;14341434+14351435+ if (ares->type != ACPI_RESOURCE_TYPE_SERIAL_BUS)14361436+ return 1;14371437+14381438+ /*14391439+ * devices that are connected to UART still need to be enumerated to14401440+ * platform bus14411441+ */14421442+ if (ares->data.common_serial_bus.type != ACPI_RESOURCE_SERIAL_TYPE_UART)14431443+ *is_spi_i2c_slave_p = true;14441444+14451445+ /* no need to do more checking */14461446+ return -1;14471447+}14481448+14491449+static bool acpi_is_spi_i2c_slave(struct acpi_device *device)14501450+{14511451+ struct list_head resource_list;14521452+ bool is_spi_i2c_slave = false;14531453+14541454+ INIT_LIST_HEAD(&resource_list);14551455+ acpi_dev_get_resources(device, &resource_list, acpi_check_spi_i2c_slave,14561456+ &is_spi_i2c_slave);14571457+ acpi_dev_free_resource_list(&resource_list);14581458+14591459+ return is_spi_i2c_slave;14601460+}14611461+14311462void acpi_init_device_object(struct acpi_device *device, acpi_handle handle,14321463 int type, unsigned long long sta)14331464{···14741443 acpi_bus_get_flags(device);14751444 device->flags.match_driver = false;14761445 device->flags.initialized = true;14461446+ device->flags.spi_i2c_slave = acpi_is_spi_i2c_slave(device);14771447 acpi_device_clear_enumerated(device);14781448 device_initialize(&device->dev);14791449 dev_set_uevent_suppress(&device->dev, true);···17591727 return AE_OK;17601728}1761172917621762-static int acpi_check_spi_i2c_slave(struct acpi_resource *ares, void *data)17631763-{17641764- bool *is_spi_i2c_slave_p = data;17651765-17661766- if (ares->type != ACPI_RESOURCE_TYPE_SERIAL_BUS)17671767- return 1;17681768-17691769- /*17701770- * devices that are connected to UART still need to be enumerated to17711771- * platform bus17721772- */17731773- if (ares->data.common_serial_bus.type != ACPI_RESOURCE_SERIAL_TYPE_UART)17741774- *is_spi_i2c_slave_p = true;17751775-17761776- /* no need to do more checking */17771777- return -1;17781778-}17791779-17801730static void acpi_default_enumeration(struct acpi_device *device)17811731{17821782- struct list_head resource_list;17831783- bool is_spi_i2c_slave = false;17841784-17851732 /*17861733 * Do not enumerate SPI/I2C slaves as they will be enumerated by their17871734 * respective parents.17881735 */17891789- INIT_LIST_HEAD(&resource_list);17901790- acpi_dev_get_resources(device, &resource_list, acpi_check_spi_i2c_slave,17911791- &is_spi_i2c_slave);17921792- acpi_dev_free_resource_list(&resource_list);17931793- if (!is_spi_i2c_slave) {17361736+ if (!device->flags.spi_i2c_slave) {17941737 acpi_create_platform_device(device, NULL);17951738 acpi_device_set_enumerated(device);17961739 } else {···18611854 return;1862185518631856 device->flags.match_driver = true;18641864- if (ret > 0) {18571857+ if (ret > 0 && !device->flags.spi_i2c_slave) {18651858 acpi_device_set_enumerated(device);18661859 goto ok;18671860 }···18701863 if (ret < 0)18711864 return;1872186518731873- if (device->pnp.type.platform_id)18741874- acpi_default_enumeration(device);18751875- else18661866+ if 
(!device->pnp.type.platform_id && !device->flags.spi_i2c_slave)18761867 acpi_device_set_enumerated(device);18681868+ else18691869+ acpi_default_enumeration(device);1877187018781871 ok:18791872 list_for_each_entry(child, &device->children, node)
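The SPI/I2C classification above is a textbook use of the ACPI resource walker: acpi_dev_get_resources() invokes the preproc callback once per _CRS entry, a positive return keeps the walk going, and a negative return stops it early. A minimal sketch of the same pattern, counting GPIO resources instead (the function names and counter here are hypothetical, not part of the patch):

#include <linux/acpi.h>

static int count_gpio_resource(struct acpi_resource *ares, void *data)
{
        unsigned int *count = data;

        if (ares->type == ACPI_RESOURCE_TYPE_GPIO)
                (*count)++;

        return 1; /* positive: keep walking the _CRS entries */
}

static unsigned int acpi_count_gpio_resources(struct acpi_device *adev)
{
        struct list_head resource_list;
        unsigned int count = 0;

        INIT_LIST_HEAD(&resource_list);
        acpi_dev_get_resources(adev, &resource_list,
                               count_gpio_resource, &count);
        acpi_dev_free_resource_list(&resource_list);

        return count;
}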
-28
drivers/acpi/sleep.c
···663663 acpi_os_wait_events_complete();664664 if (acpi_sci_irq_valid())665665 enable_irq_wake(acpi_sci_irq);666666-667666 return 0;668668-}669669-670670-static void acpi_freeze_wake(void)671671-{672672- /*673673- * If IRQD_WAKEUP_ARMED is not set for the SCI at this point, it means674674- * that the SCI has triggered while suspended, so cancel the wakeup in675675- * case it has not been a wakeup event (the GPEs will be checked later).676676- */677677- if (acpi_sci_irq_valid() &&678678- !irqd_is_wakeup_armed(irq_get_irq_data(acpi_sci_irq)))679679- pm_system_cancel_wakeup();680680-}681681-682682-static void acpi_freeze_sync(void)683683-{684684- /*685685- * Process all pending events in case there are any wakeup ones.686686- *687687- * The EC driver uses the system workqueue, so that one needs to be688688- * flushed too.689689- */690690- acpi_os_wait_events_complete();691691- flush_scheduled_work();692667}693668694669static void acpi_freeze_restore(void)···671696 acpi_disable_wakeup_devices(ACPI_STATE_S0);672697 if (acpi_sci_irq_valid())673698 disable_irq_wake(acpi_sci_irq);674674-675699 acpi_enable_all_runtime_gpes();676700}677701···682708static const struct platform_freeze_ops acpi_freeze_ops = {683709 .begin = acpi_freeze_begin,684710 .prepare = acpi_freeze_prepare,685685- .wake = acpi_freeze_wake,686686- .sync = acpi_freeze_sync,687711 .restore = acpi_freeze_restore,688712 .end = acpi_freeze_end,689713};
drivers/ata/ahci.c
···13641364{}13651365#endif1366136613671367+/*13681368+ * On the Acer Aspire Switch Alpha 12, sometimes all SATA ports are detected13691369+ * as DUMMY, or detected but eventually get a "link down" and never get up13701370+ * again. When this happens, CAP.NP may hold a value of 0x00 or 0x01, and the13711371+ * port_map may hold a value of 0x00.13721372+ *13731373+ * Overriding CAP.NP to 0x02 and the port_map to 0x7 will reveal all 3 ports13741374+ * and can significantly reduce the occurrence of the problem.13751375+ *13761376+ * https://bugzilla.kernel.org/show_bug.cgi?id=18947113771377+ */13781378+static void acer_sa5_271_workaround(struct ahci_host_priv *hpriv,13791379+ struct pci_dev *pdev)13801380+{13811381+ static const struct dmi_system_id sysids[] = {13821382+ {13831383+ .ident = "Acer Switch Alpha 12",13841384+ .matches = {13851385+ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),13861386+ DMI_MATCH(DMI_PRODUCT_NAME, "Switch SA5-271")13871387+ },13881388+ },13891389+ { }13901390+ };13911391+13921392+ if (dmi_check_system(sysids)) {13931393+ dev_info(&pdev->dev, "enabling Acer Switch Alpha 12 workaround\n");13941394+ if ((hpriv->saved_cap & 0xC734FF00) == 0xC734FF00) {13951395+ hpriv->port_map = 0x7;13961396+ hpriv->cap = 0xC734FF02;13971397+ }13981398+ }13991399+}14001400+13671401#ifdef CONFIG_ARM6413681402/*13691403 * Due to ERRATA#22536, ThunderX needs to handle HOST_IRQ_STAT differently.···16691635 dev_info(&pdev->dev,16701636 "online status unreliable, applying workaround\n");16711637 }16381638+16391639+16401640+ /* Acer SA5-271 workaround modifies private_data */16411641+ acer_sa5_271_workaround(hpriv, pdev);1672164216731643 /* CAP.NP sometimes indicate the index of the last enabled16741644 * port, at other times, that of the last possible port, so
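Scoping a quirk like this to one machine with dmi_check_system() against a sentinel-terminated dmi_system_id table is the standard shape. A generic sketch of the pattern (the vendor and product strings are placeholders):

#include <linux/dmi.h>

static const struct dmi_system_id quirk_sysids[] = {
        {
                .ident = "Example Vendor Example Board",
                .matches = {
                        DMI_MATCH(DMI_SYS_VENDOR, "Example Vendor"),
                        DMI_MATCH(DMI_PRODUCT_NAME, "Example Board"),
                },
        },
        { } /* terminating entry is required */
};

static bool need_quirk(void)
{
        /* dmi_check_system() returns the number of matching entries. */
        return dmi_check_system(quirk_sysids) > 0;
}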
+3-2
drivers/ata/libahci_platform.c
···514514515515 irq = platform_get_irq(pdev, 0);516516 if (irq <= 0) {517517- dev_err(dev, "no irq\n");518518- return -EINVAL;517517+ if (irq != -EPROBE_DEFER)518518+ dev_err(dev, "no irq\n");519519+ return irq;519520 }520521521522 hpriv->irq = irq;
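The change above follows the usual probe-deferral idiom: propagate the error code from platform_get_irq() and stay quiet when it is -EPROBE_DEFER, since the driver core will simply retry the probe once the interrupt provider shows up. The idiom in a hypothetical driver probe:

#include <linux/errno.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
        int irq;

        irq = platform_get_irq(pdev, 0);
        if (irq <= 0) {
                /* -EPROBE_DEFER is routine, not worth an error message. */
                if (irq != -EPROBE_DEFER)
                        dev_err(&pdev->dev, "no irq\n");
                return irq ? irq : -ENXIO;
        }

        /* ... devm_request_irq(&pdev->dev, irq, ...) ... */
        return 0;
}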
drivers/base/core.c
···10261026{10271027 return sysfs_create_groups(&dev->kobj, groups);10281028}10291029+EXPORT_SYMBOL_GPL(device_add_groups);1029103010301031void device_remove_groups(struct device *dev,10311032 const struct attribute_group **groups)10321033{10331034 sysfs_remove_groups(&dev->kobj, groups);10341035}10361036+EXPORT_SYMBOL_GPL(device_remove_groups);10371037+10381038+union device_attr_group_devres {10391039+ const struct attribute_group *group;10401040+ const struct attribute_group **groups;10411041+};10421042+10431043+static int devm_attr_group_match(struct device *dev, void *res, void *data)10441044+{10451045+ return ((union device_attr_group_devres *)res)->group == data;10461046+}10471047+10481048+static void devm_attr_group_remove(struct device *dev, void *res)10491049+{10501050+ union device_attr_group_devres *devres = res;10511051+ const struct attribute_group *group = devres->group;10521052+10531053+ dev_dbg(dev, "%s: removing group %p\n", __func__, group);10541054+ sysfs_remove_group(&dev->kobj, group);10551055+}10561056+10571057+static void devm_attr_groups_remove(struct device *dev, void *res)10581058+{10591059+ union device_attr_group_devres *devres = res;10601060+ const struct attribute_group **groups = devres->groups;10611061+10621062+ dev_dbg(dev, "%s: removing groups %p\n", __func__, groups);10631063+ sysfs_remove_groups(&dev->kobj, groups);10641064+}10651065+10661066+/**10671067+ * devm_device_add_group - given a device, create a managed attribute group10681068+ * @dev: The device to create the group for10691069+ * @grp: The attribute group to create10701070+ *10711071+ * This function creates a group for the first time. It will explicitly10721072+ * warn and error if any of the attribute files being created already exist.10731073+ *10741074+ * Returns 0 on success or error code on failure.10751075+ */10761076+int devm_device_add_group(struct device *dev, const struct attribute_group *grp)10771077+{10781078+ union device_attr_group_devres *devres;10791079+ int error;10801080+10811081+ devres = devres_alloc(devm_attr_group_remove,10821082+ sizeof(*devres), GFP_KERNEL);10831083+ if (!devres)10841084+ return -ENOMEM;10851085+10861086+ error = sysfs_create_group(&dev->kobj, grp);10871087+ if (error) {10881088+ devres_free(devres);10891089+ return error;10901090+ }10911091+10921092+ devres->group = grp;10931093+ devres_add(dev, devres);10941094+ return 0;10951095+}10961096+EXPORT_SYMBOL_GPL(devm_device_add_group);10971097+10981098+/**10991099+ * devm_device_remove_group: remove a managed group from a device11001100+ * @dev: device to remove the group from11011101+ * @grp: group to remove11021102+ *11031103+ * This function removes a group of attributes from a device. The attributes11041104+ * previously have to have been created for this group, otherwise it will fail.11051105+ */11061106+void devm_device_remove_group(struct device *dev,11071107+ const struct attribute_group *grp)11081108+{11091109+ WARN_ON(devres_release(dev, devm_attr_group_remove,11101110+ devm_attr_group_match,11111111+ /* cast away const */ (void *)grp));11121112+}11131113+EXPORT_SYMBOL_GPL(devm_device_remove_group);11141114+11151115+/**11161116+ * devm_device_add_groups - create a bunch of managed attribute groups11171117+ * @dev: The device to create the group for11181118+ * @groups: The attribute groups to create, NULL terminated11191119+ *11201120+ * This function creates a bunch of managed attribute groups. 
If an error11211121+ * occurs when creating a group, all previously created groups will be11221122+ * removed, unwinding everything back to the original state when this11231123+ * function was called. It will explicitly warn and error if any of the11241124+ * attribute files being created already exist.11251125+ *11261126+ * Returns 0 on success or error code from sysfs_create_group on failure.11271127+ */11281128+int devm_device_add_groups(struct device *dev,11291129+ const struct attribute_group **groups)11301130+{11311131+ union device_attr_group_devres *devres;11321132+ int error;11331133+11341134+ devres = devres_alloc(devm_attr_groups_remove,11351135+ sizeof(*devres), GFP_KERNEL);11361136+ if (!devres)11371137+ return -ENOMEM;11381138+11391139+ error = sysfs_create_groups(&dev->kobj, groups);11401140+ if (error) {11411141+ devres_free(devres);11421142+ return error;11431143+ }11441144+11451145+ devres->groups = groups;11461146+ devres_add(dev, devres);11471147+ return 0;11481148+}11491149+EXPORT_SYMBOL_GPL(devm_device_add_groups);11501150+11511151+/**11521152+ * devm_device_remove_groups - remove a list of managed groups11531153+ *11541154+ * @dev: The device for the groups to be removed from11551155+ * @groups: NULL terminated list of groups to be removed11561156+ *11571157+ * If groups is not NULL, remove the specified groups from the device.11581158+ */11591159+void devm_device_remove_groups(struct device *dev,11601160+ const struct attribute_group **groups)11611161+{11621162+ WARN_ON(devres_release(dev, devm_attr_groups_remove,11631163+ devm_attr_group_match,11641164+ /* cast away const */ (void *)groups));11651165+}11661166+EXPORT_SYMBOL_GPL(devm_device_remove_groups);1035116710361168static int device_add_attrs(struct device *dev)10371169{
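For driver authors the payoff of the new devm variants is that a sysfs group created in probe no longer needs a matching removal in the error and remove paths. A minimal consumer sketch (the attribute and function names are illustrative):

#include <linux/device.h>
#include <linux/sysfs.h>

static ssize_t example_show(struct device *dev,
                            struct device_attribute *attr, char *buf)
{
        return sprintf(buf, "hello\n");
}
static DEVICE_ATTR_RO(example);

static struct attribute *example_attrs[] = {
        &dev_attr_example.attr,
        NULL,
};

static const struct attribute_group example_group = {
        .attrs = example_attrs,
};

static int example_add_attrs(struct device *dev)
{
        /* devres tears the group down when the device is unbound. */
        return devm_device_add_group(dev, &example_group);
}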
+4
drivers/base/dd.c
···259259 if (dev->bus)260260 blocking_notifier_call_chain(&dev->bus->p->bus_notifier,261261 BUS_NOTIFY_BOUND_DRIVER, dev);262262+263263+ kobject_uevent(&dev->kobj, KOBJ_BIND);262264}263265264266static int driver_sysfs_add(struct device *dev)···850848 blocking_notifier_call_chain(&dev->bus->p->bus_notifier,851849 BUS_NOTIFY_UNBOUND_DRIVER,852850 dev);851851+852852+ kobject_uevent(&dev->kobj, KOBJ_UNBIND);853853 }854854}855855
drivers/base/power/wakeup.c
···2828/* First wakeup IRQ seen by the kernel in the last cycle. */2929unsigned int pm_wakeup_irq __read_mostly;30303131-/* If greater than 0 and the system is suspending, terminate the suspend. */3232-static atomic_t pm_abort_suspend __read_mostly;3131+/* If set and the system is suspending, terminate the suspend. */3232+static bool pm_abort_suspend __read_mostly;33333434/*3535 * Combined counters of registered wakeup events and wakeup events in progress.···855855 pm_print_active_wakeup_sources();856856 }857857858858- return ret || atomic_read(&pm_abort_suspend) > 0;858858+ return ret || pm_abort_suspend;859859}860860861861void pm_system_wakeup(void)862862{863863- atomic_inc(&pm_abort_suspend);863863+ pm_abort_suspend = true;864864 freeze_wake();865865}866866EXPORT_SYMBOL_GPL(pm_system_wakeup);867867868868-void pm_system_cancel_wakeup(void)868868+void pm_wakeup_clear(void)869869{870870- atomic_dec(&pm_abort_suspend);871871-}872872-873873-void pm_wakeup_clear(bool reset)874874-{870870+ pm_abort_suspend = false;875871 pm_wakeup_irq = 0;876876- if (reset)877877- atomic_set(&pm_abort_suspend, 0);878872}879873880874void pm_system_irq_wakeup(unsigned int irq_number)
+3
drivers/block/loop.c
···608608 */609609static int loop_flush(struct loop_device *lo)610610{611611+ /* loop not yet configured, no running thread, nothing to flush */612612+ if (lo->lo_state != Lo_bound)613613+ return 0;611614 return loop_switch(lo, NULL);612615}613616
drivers/block/xen-blkback/xenbus.c
···159159 init_waitqueue_head(&ring->shutdown_wq);160160 ring->blkif = blkif;161161 ring->st_print = jiffies;162162- xen_blkif_get(blkif);162162+ ring->active = true;163163 }164164165165 return 0;···249249 struct xen_blkif_ring *ring = &blkif->rings[r];250250 unsigned int i = 0;251251252252+ if (!ring->active)253253+ continue;254254+252255 if (ring->xenblkd) {253256 kthread_stop(ring->xenblkd);254257 wake_up(&ring->shutdown_wq);255255- ring->xenblkd = NULL;256258 }257259258260 /* The above kthread_stop() guarantees that at this point we···298296 BUG_ON(ring->free_pages_num != 0);299297 BUG_ON(ring->persistent_gnt_c != 0);300298 WARN_ON(i != (XEN_BLKIF_REQS_PER_PAGE * blkif->nr_ring_pages));301301- xen_blkif_put(blkif);299299+ ring->active = false;302300 }303301 blkif->nr_ring_pages = 0;304302 /*···314312315313static void xen_blkif_free(struct xen_blkif *blkif)316314{317317-318318- xen_blkif_disconnect(blkif);315315+ WARN_ON(xen_blkif_disconnect(blkif));319316 xen_vbd_free(&blkif->vbd);317317+ kfree(blkif->be->mode);318318+ kfree(blkif->be);320319321320 /* Make sure everything is drained before shutting down */322321 kmem_cache_free(xen_blkif_cachep, blkif);···514511 xen_blkif_put(be->blkif);515512 }516513517517- kfree(be->mode);518518- kfree(be);519514 return 0;520515}521516
+1-1
drivers/char/mem.c
···343343 phys_addr_t offset = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;344344345345 /* It's illegal to wrap around the end of the physical address space. */346346- if (offset + (phys_addr_t)size < offset)346346+ if (offset + (phys_addr_t)size - 1 < offset)347347 return -EINVAL;348348349349 if (!valid_mmap_phys_addr_range(vma->vm_pgoff, size))
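The off-by-one matters at the very top of the physical address space: for a mapping of the last page, offset + size wraps to exactly 0, so the old test offset + size < offset wrongly rejected a legal mapping, while a range that genuinely wraps past the top still satisfies offset + size - 1 < offset (mmap() guarantees size > 0). A userspace demonstration of the predicate:

#include <stdint.h>
#include <stdio.h>

/* True iff [off, off + size - 1] wraps past the top; assumes size > 0. */
static int range_wraps(uint64_t off, uint64_t size)
{
        return off + size - 1 < off;
}

int main(void)
{
        uint64_t last_page = 0xFFFFFFFFFFFFF000ULL;

        printf("%d\n", range_wraps(last_page, 0x1000)); /* 0: ends at top, legal */
        printf("%d\n", range_wraps(last_page, 0x2000)); /* 1: wraps, illegal */
        return 0;
}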
drivers/char/random.c
···11/*22 * random.c -- A strong random number generator33 *44+ * Copyright (C) 2017 Jason A. Donenfeld <Jason@zx2c4.com>. All55+ * Rights Reserved.66+ *47 * Copyright Matt Mackall <mpm@selenic.com>, 2003, 2004, 200558 *69 * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All···765762static struct crng_state **crng_node_pool __read_mostly;766763#endif767764765765+static void invalidate_batched_entropy(void);766766+768767static void crng_initialize(struct crng_state *crng)769768{770769 int i;···803798 p[crng_init_cnt % CHACHA20_KEY_SIZE] ^= *cp;804799 cp++; crng_init_cnt++; len--;805800 }801801+ spin_unlock_irqrestore(&primary_crng.lock, flags);806802 if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {803803+ invalidate_batched_entropy();807804 crng_init = 1;808805 wake_up_interruptible(&crng_init_wait);809806 pr_notice("random: fast init done\n");810807 }811811- spin_unlock_irqrestore(&primary_crng.lock, flags);812808 return 1;813809}814810···841835 }842836 memzero_explicit(&buf, sizeof(buf));843837 crng->init_time = jiffies;838838+ spin_unlock_irqrestore(&primary_crng.lock, flags);844839 if (crng == &primary_crng && crng_init < 2) {840840+ invalidate_batched_entropy();845841 crng_init = 2;846842 process_random_ready_list();847843 wake_up_interruptible(&crng_init_wait);848844 pr_notice("random: crng init done\n");849845 }850850- spin_unlock_irqrestore(&primary_crng.lock, flags);851846}852847853848static inline void crng_wait_ready(void)···11041097static __u32 get_reg(struct fast_pool *f, struct pt_regs *regs)11051098{11061099 __u32 *ptr = (__u32 *) regs;11001100+ unsigned int idx;1107110111081102 if (regs == NULL)11091103 return 0;11101110- if (f->reg_idx >= sizeof(struct pt_regs) / sizeof(__u32))11111111- f->reg_idx = 0;11121112- return *(ptr + f->reg_idx++);11041104+ idx = READ_ONCE(f->reg_idx);11051105+ if (idx >= sizeof(struct pt_regs) / sizeof(__u32))11061106+ idx = 0;11071107+ ptr += idx++;11081108+ WRITE_ONCE(f->reg_idx, idx);11091109+ return *ptr;11131110}1114111111151112void add_interrupt_randomness(int irq, int irq_flags)···20302019 };20312020 unsigned int position;20322021};20222022+static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_reset_lock);2033202320342024/*20352025 * Get a random word for internal kernel use only. 
The quality of the random···20412029u64 get_random_u64(void)20422030{20432031 u64 ret;20322032+ bool use_lock = READ_ONCE(crng_init) < 2;20332033+ unsigned long flags = 0;20442034 struct batched_entropy *batch;2045203520462036#if BITS_PER_LONG == 64···20552041#endif2056204220572043 batch = &get_cpu_var(batched_entropy_u64);20442044+ if (use_lock)20452045+ read_lock_irqsave(&batched_entropy_reset_lock, flags);20582046 if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {20592047 extract_crng((u8 *)batch->entropy_u64);20602048 batch->position = 0;20612049 }20622050 ret = batch->entropy_u64[batch->position++];20512051+ if (use_lock)20522052+ read_unlock_irqrestore(&batched_entropy_reset_lock, flags);20632053 put_cpu_var(batched_entropy_u64);20642054 return ret;20652055}···20732055u32 get_random_u32(void)20742056{20752057 u32 ret;20582058+ bool use_lock = READ_ONCE(crng_init) < 2;20592059+ unsigned long flags = 0;20762060 struct batched_entropy *batch;2077206120782062 if (arch_get_random_int(&ret))20792063 return ret;2080206420812065 batch = &get_cpu_var(batched_entropy_u32);20662066+ if (use_lock)20672067+ read_lock_irqsave(&batched_entropy_reset_lock, flags);20822068 if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {20832069 extract_crng((u8 *)batch->entropy_u32);20842070 batch->position = 0;20852071 }20862072 ret = batch->entropy_u32[batch->position++];20732073+ if (use_lock)20742074+ read_unlock_irqrestore(&batched_entropy_reset_lock, flags);20872075 put_cpu_var(batched_entropy_u32);20882076 return ret;20892077}20902078EXPORT_SYMBOL(get_random_u32);20792079+20802080+/* It's important to invalidate all potential batched entropy that might20812081+ * be stored before the crng is initialized, which we can do lazily by20822082+ * simply resetting the counter to zero so that it's re-extracted on the20832083+ * next usage. */20842084+static void invalidate_batched_entropy(void)20852085+{20862086+ int cpu;20872087+ unsigned long flags;20882088+20892089+ write_lock_irqsave(&batched_entropy_reset_lock, flags);20902090+ for_each_possible_cpu (cpu) {20912091+ per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;20922092+ per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;20932093+ }20942094+ write_unlock_irqrestore(&batched_entropy_reset_lock, flags);20952095+}2091209620922097/**20932098 * randomize_page - Generate a random, page aligned address
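Two patterns in the random.c changes are worth noting. The register sampling uses READ_ONCE()/WRITE_ONCE() so a torn reg_idx can never index out of bounds, and the batched-entropy fix takes its read lock only while crng_init < 2, so the steady-state fast path costs a single READ_ONCE(). A generic sketch of that second pattern (all names here are illustrative):

#include <linux/compiler.h>
#include <linux/spinlock.h>
#include <linux/types.h>

static DEFINE_RWLOCK(reset_lock);
static bool init_done;          /* set once, never cleared */
static u64 cached_value;

static u64 fast_path_read(void)
{
        bool use_lock = !READ_ONCE(init_done);
        unsigned long flags = 0;
        u64 val;

        /* Readers pay for the lock only during early boot; a writer
         * takes write_lock_irqsave() to invalidate every cached value. */
        if (use_lock)
                read_lock_irqsave(&reset_lock, flags);
        val = cached_value;
        if (use_lock)
                read_unlock_irqrestore(&reset_lock, flags);

        return val;
}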
+1
drivers/clk/meson/Kconfig
···1414config COMMON_CLK_GXBB1515 bool1616 depends on COMMON_CLK_AMLOGIC1717+ select RESET_CONTROLLER1718 help1819 Support for the clock controller on AmLogic S905 devices, aka gxbb.1920 Say Y if you want peripherals and CPU frequency scaling to work.
drivers/cpufreq/cpufreq.c
···24682468 if (!(cpufreq_driver->flags & CPUFREQ_STICKY) &&24692469 list_empty(&cpufreq_policy_list)) {24702470 /* if all ->init() calls failed, unregister */24712471+ ret = -ENODEV;24712472 pr_debug("%s: No CPU initialized for driver %s\n", __func__,24722473 driver_data->name);24732474 goto err_if_unreg;
+2-2
drivers/cpufreq/cpufreq_conservative.c
···185185 int ret;186186 ret = sscanf(buf, "%u", &input);187187188188- /* cannot be lower than 11 otherwise freq will not fall */189189- if (ret != 1 || input < 11 || input > 100 ||188188+ /* cannot be lower than 1 otherwise freq will not fall */189189+ if (ret != 1 || input < 1 || input > 100 ||190190 input >= dbs_data->up_threshold)191191 return -EINVAL;192192
drivers/cpuidle/dt_idle_states.c
···180180 if (!state_node)181181 break;182182183183- if (!of_device_is_available(state_node))183183+ if (!of_device_is_available(state_node)) {184184+ of_node_put(state_node);184185 continue;186186+ }185187186188 if (!idle_state_valid(state_node, i, cpumask)) {187189 pr_warn("%s idle state not valid, bailing out\n",
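The leak fixed here is the classic one: of_parse_phandle() returns a node with its refcount raised, so every early continue or break must drop it. A sketch of the correct shape for such a loop (the function and property name mirror the fixed code but are illustrative):

#include <linux/of.h>

static int scan_states(struct device_node *np)
{
        struct device_node *state;
        int i;

        for (i = 0; ; i++) {
                state = of_parse_phandle(np, "cpu-idle-states", i);
                if (!state)
                        break;

                if (!of_device_is_available(state)) {
                        of_node_put(state); /* drop the ref before skipping */
                        continue;
                }

                /* ... inspect the node ... */
                of_node_put(state);
        }

        return i;
}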
drivers/gpio/gpio-aspeed.c
···646646 int rc;647647 int i;648648649649+ if (!gpio->clk)650650+ return -EINVAL;651651+649652 rc = usecs_to_cycles(gpio, usecs, &requested_cycles);650653 if (rc < 0) {651654 dev_warn(chip->parent, "Failed to convert %luus to cycles at %luHz: %d\n",
+36-18
drivers/gpio/gpio-crystalcove.c
···9090{9191 int reg;92929393- if (gpio == 94)9494- return GPIOPANELCTL;9393+ if (gpio >= CRYSTALCOVE_GPIO_NUM) {9494+ /*9595+ * Virtual GPIO called from ACPI, for now we only support9696+ * the panel ctl.9797+ */9898+ switch (gpio) {9999+ case 0x5e:100100+ return GPIOPANELCTL;101101+ default:102102+ return -EOPNOTSUPP;103103+ }104104+ }9510596106 if (reg_type == CTRL_IN) {97107 if (gpio < 8)···140130static int crystalcove_gpio_dir_in(struct gpio_chip *chip, unsigned gpio)141131{142132 struct crystalcove_gpio *cg = gpiochip_get_data(chip);133133+ int reg = to_reg(gpio, CTRL_OUT);143134144144- if (gpio > CRYSTALCOVE_VGPIO_NUM)135135+ if (reg < 0)145136 return 0;146137147147- return regmap_write(cg->regmap, to_reg(gpio, CTRL_OUT),148148- CTLO_INPUT_SET);138138+ return regmap_write(cg->regmap, reg, CTLO_INPUT_SET);149139}150140151141static int crystalcove_gpio_dir_out(struct gpio_chip *chip, unsigned gpio,152142 int value)153143{154144 struct crystalcove_gpio *cg = gpiochip_get_data(chip);145145+ int reg = to_reg(gpio, CTRL_OUT);155146156156- if (gpio > CRYSTALCOVE_VGPIO_NUM)147147+ if (reg < 0)157148 return 0;158149159159- return regmap_write(cg->regmap, to_reg(gpio, CTRL_OUT),160160- CTLO_OUTPUT_SET | value);150150+ return regmap_write(cg->regmap, reg, CTLO_OUTPUT_SET | value);161151}162152163153static int crystalcove_gpio_get(struct gpio_chip *chip, unsigned gpio)164154{165155 struct crystalcove_gpio *cg = gpiochip_get_data(chip);166166- int ret;167156 unsigned int val;157157+ int ret, reg = to_reg(gpio, CTRL_IN);168158169169- if (gpio > CRYSTALCOVE_VGPIO_NUM)159159+ if (reg < 0)170160 return 0;171161172172- ret = regmap_read(cg->regmap, to_reg(gpio, CTRL_IN), &val);162162+ ret = regmap_read(cg->regmap, reg, &val);173163 if (ret)174164 return ret;175165···180170 unsigned gpio, int value)181171{182172 struct crystalcove_gpio *cg = gpiochip_get_data(chip);173173+ int reg = to_reg(gpio, CTRL_OUT);183174184184- if (gpio > CRYSTALCOVE_VGPIO_NUM)175175+ if (reg < 0)185176 return;186177187178 if (value)188188- regmap_update_bits(cg->regmap, to_reg(gpio, CTRL_OUT), 1, 1);179179+ regmap_update_bits(cg->regmap, reg, 1, 1);189180 else190190- regmap_update_bits(cg->regmap, to_reg(gpio, CTRL_OUT), 1, 0);181181+ regmap_update_bits(cg->regmap, reg, 1, 0);191182}192183193184static int crystalcove_irq_type(struct irq_data *data, unsigned type)194185{195186 struct crystalcove_gpio *cg =196187 gpiochip_get_data(irq_data_get_irq_chip_data(data));188188+189189+ if (data->hwirq >= CRYSTALCOVE_GPIO_NUM)190190+ return 0;197191198192 switch (type) {199193 case IRQ_TYPE_NONE:···249235 struct crystalcove_gpio *cg =250236 gpiochip_get_data(irq_data_get_irq_chip_data(data));251237252252- cg->set_irq_mask = false;253253- cg->update |= UPDATE_IRQ_MASK;238238+ if (data->hwirq < CRYSTALCOVE_GPIO_NUM) {239239+ cg->set_irq_mask = false;240240+ cg->update |= UPDATE_IRQ_MASK;241241+ }254242}255243256244static void crystalcove_irq_mask(struct irq_data *data)···260244 struct crystalcove_gpio *cg =261245 gpiochip_get_data(irq_data_get_irq_chip_data(data));262246263263- cg->set_irq_mask = true;264264- cg->update |= UPDATE_IRQ_MASK;247247+ if (data->hwirq < CRYSTALCOVE_GPIO_NUM) {248248+ cg->set_irq_mask = true;249249+ cg->update |= UPDATE_IRQ_MASK;250250+ }265251}266252267253static struct irq_chip crystalcove_irqchip = {
+11-4
drivers/gpio/gpio-mvebu.c
···721721 u32 set;722722723723 if (!of_device_is_compatible(mvchip->chip.of_node,724724- "marvell,armada-370-xp-gpio"))724724+ "marvell,armada-370-gpio"))725725 return 0;726726727727 if (IS_ERR(mvchip->clk))···747747 set = U32_MAX;748748 else749749 return -EINVAL;750750- writel_relaxed(0, mvebu_gpioreg_blink_counter_select(mvchip));750750+ writel_relaxed(set, mvebu_gpioreg_blink_counter_select(mvchip));751751752752 mvpwm = devm_kzalloc(dev, sizeof(struct mvebu_pwm), GFP_KERNEL);753753 if (!mvpwm)···768768 mvpwm->chip.dev = dev;769769 mvpwm->chip.ops = &mvebu_pwm_ops;770770 mvpwm->chip.npwm = mvchip->chip.ngpio;771771+ /*772772+ * There may already be some PWM allocated, so we can't force773773+ * mvpwm->chip.base to a fixed point like mvchip->chip.base.774774+ * So, we let pwmchip_add() do the numbering and take the next free775775+ * region.776776+ */777777+ mvpwm->chip.base = -1;771778772779 spin_lock_init(&mvpwm->lock);773780···852845 .data = (void *) MVEBU_GPIO_SOC_VARIANT_ARMADAXP,853846 },854847 {855855- .compatible = "marvell,armada-370-xp-gpio",848848+ .compatible = "marvell,armada-370-gpio",856849 .data = (void *) MVEBU_GPIO_SOC_VARIANT_ORION,857850 },858851 {···11281121 mvchip);11291122 }1130112311311131- /* Armada 370/XP has simple PWM support for GPIO lines */11241124+ /* Some MVEBU SoCs have simple PWM support for GPIO lines */11321125 if (IS_ENABLED(CONFIG_PWM))11331126 return mvebu_pwm_probe(pdev, mvchip, id);11341127
+1-1
drivers/gpio/gpiolib-acpi.c
···201201 handler = acpi_gpio_irq_handler_evt;202202 }203203 if (!handler)204204- return AE_BAD_PARAMETER;204204+ return AE_OK;205205206206 pin = acpi_gpiochip_pin_to_gpio_offset(chip->gpiodev, pin);207207 if (pin < 0)
+2-1
drivers/gpio/gpiolib.c
···708708709709 ge.timestamp = ktime_get_real_ns();710710711711- if (le->eflags & GPIOEVENT_REQUEST_BOTH_EDGES) {711711+ if (le->eflags & GPIOEVENT_REQUEST_RISING_EDGE712712+ && le->eflags & GPIOEVENT_REQUEST_FALLING_EDGE) {712713 int level = gpiod_get_value_cansleep(le->desc);713714714715 if (level)
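GPIOEVENT_REQUEST_BOTH_EDGES is defined in the uapi as the OR of the rising and falling flags, which is why the kernel must test both bits rather than compare for equality. From userspace the flags are used through the gpiochip character device; a minimal consumer (the device path and line offset are placeholders):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/gpio.h>

int main(void)
{
        struct gpioevent_request req;
        struct gpioevent_data ev;
        int fd = open("/dev/gpiochip0", O_RDONLY);

        if (fd < 0)
                return 1;

        memset(&req, 0, sizeof(req));
        req.lineoffset = 4;                     /* placeholder line */
        req.handleflags = GPIOHANDLE_REQUEST_INPUT;
        req.eventflags = GPIOEVENT_REQUEST_BOTH_EDGES;
        strcpy(req.consumer_label, "edge-demo");
        if (ioctl(fd, GPIO_GET_LINEEVENT_IOCTL, &req) < 0)
                return 1;

        /* Each read yields one event with the edge resolved by the kernel. */
        while (read(req.fd, &ev, sizeof(ev)) == sizeof(ev))
                printf("%s edge at %llu ns\n",
                       ev.id == GPIOEVENT_EVENT_RISING_EDGE ?
                       "rising" : "falling",
                       (unsigned long long)ev.timestamp);
        return 0;
}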
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
···693693 DRM_INFO("Changing default dispclk from %dMhz to 600Mhz\n",694694 adev->clock.default_dispclk / 100);695695 adev->clock.default_dispclk = 60000;696696+ } else if (adev->clock.default_dispclk <= 60000) {697697+ DRM_INFO("Changing default dispclk from %dMhz to 625Mhz\n",698698+ adev->clock.default_dispclk / 100);699699+ adev->clock.default_dispclk = 62500;696700 }697701 adev->clock.dp_extclk =698702 le16_to_cpu(firmware_info->info_21.usUniphyDPModeExtClkFreq);
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
···709709710710static struct phm_master_table_item711711vega10_thermal_start_thermal_controller_master_list[] = {712712- {NULL, tf_vega10_thermal_initialize},713713- {NULL, tf_vega10_thermal_set_temperature_range},714714- {NULL, tf_vega10_thermal_enable_alert},712712+ { .tableFunction = tf_vega10_thermal_initialize },713713+ { .tableFunction = tf_vega10_thermal_set_temperature_range },714714+ { .tableFunction = tf_vega10_thermal_enable_alert },715715/* We should restrict performance levels to low before we halt the SMC.716716 * On the other hand we are still in boot state when we do this717717 * so it would be pointless.718718 * If this assumption changes we have to revisit this table.719719 */720720- {NULL, tf_vega10_thermal_setup_fan_table},721721- {NULL, tf_vega10_thermal_start_smc_fan_control},722722- {NULL, NULL}720720+ { .tableFunction = tf_vega10_thermal_setup_fan_table },721721+ { .tableFunction = tf_vega10_thermal_start_smc_fan_control },722722+ { }723723};724724725725static struct phm_master_table_header···731731732732static struct phm_master_table_item733733vega10_thermal_set_temperature_range_master_list[] = {734734- {NULL, tf_vega10_thermal_disable_alert},735735- {NULL, tf_vega10_thermal_set_temperature_range},736736- {NULL, tf_vega10_thermal_enable_alert},737737- {NULL, NULL}734734+ { .tableFunction = tf_vega10_thermal_disable_alert },735735+ { .tableFunction = tf_vega10_thermal_set_temperature_range },736736+ { .tableFunction = tf_vega10_thermal_enable_alert },737737+ { }738738};739739740740struct phm_master_table_header
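Switching these tables to designated initializers is more than style: the entries no longer depend on the field order of phm_master_table_item, unnamed fields are zero-filled, and the sentinel shrinks to { }. The same idiom in miniature (the types and names here are invented for illustration):

struct table_item {
        int (*setup)(void *ctx);
        int (*run)(void *ctx);
};

static int init_hw(void *ctx) { return 0; }

/* Order-independent: still correct if 'setup' and 'run' are ever
 * reordered in the struct; '{ }' zero-fills the terminating entry. */
static const struct table_item master_list[] = {
        { .run = init_hw },
        { }
};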
drivers/gpu/drm/exynos/exynos_drm_drv.h
···160160 * drm framework doesn't support multiple irq yet.161161 * we can refer to the crtc to current hardware interrupt occurred through162162 * this pipe value.163163- * @enabled: if the crtc is enabled or not164164- * @event: vblank event that is currently queued for flip165165- * @wait_update: wait all pending planes updates to finish166166- * @pending_update: number of pending plane updates in this crtc167163 * @ops: pointer to callbacks for exynos drm specific functionality168164 * @ctx: A pointer to the crtc's implementation specific context165165+ * @pipe_clk: A pointer to the crtc's pipeline clock.169166 */170167struct exynos_drm_crtc {171168 struct drm_crtc base;
+9-17
drivers/gpu/drm/exynos/exynos_drm_dsi.c
···16331633{16341634 struct device *dev = dsi->dev;16351635 struct device_node *node = dev->of_node;16361636- struct device_node *ep;16371636 int ret;1638163716391638 ret = exynos_dsi_of_read_u32(node, "samsung,pll-clock-frequency",···16401641 if (ret < 0)16411642 return ret;1642164316431643- ep = of_graph_get_endpoint_by_regs(node, DSI_PORT_OUT, 0);16441644- if (!ep) {16451645- dev_err(dev, "no output port with endpoint specified\n");16461646- return -EINVAL;16471647- }16481648-16491649- ret = exynos_dsi_of_read_u32(ep, "samsung,burst-clock-frequency",16441644+ ret = exynos_dsi_of_read_u32(node, "samsung,burst-clock-frequency",16501645 &dsi->burst_clk_rate);16511646 if (ret < 0)16521652- goto end;16471647+ return ret;1653164816541654- ret = exynos_dsi_of_read_u32(ep, "samsung,esc-clock-frequency",16491649+ ret = exynos_dsi_of_read_u32(node, "samsung,esc-clock-frequency",16551650 &dsi->esc_clk_rate);16561651 if (ret < 0)16571657- goto end;16581658-16591659- of_node_put(ep);16521652+ return ret;1660165316611654 dsi->bridge_node = of_graph_get_remote_node(node, DSI_PORT_OUT, 0);16621655 if (!dsi->bridge_node)16631656 return -EINVAL;1664165716651665-end:16661666- of_node_put(ep);16671667-16681668- return ret;16581658+ return 0;16691659}1670166016711661static int exynos_dsi_bind(struct device *dev, struct device *master,···1805181718061818static int exynos_dsi_remove(struct platform_device *pdev)18071819{18201820+ struct exynos_dsi *dsi = platform_get_drvdata(pdev);18211821+18221822+ of_node_put(dsi->bridge_node);18231823+18081824 pm_runtime_disable(&pdev->dev);1809182518101826 component_del(&pdev->dev, &exynos_dsi_component_ops);
+1-1
drivers/gpu/drm/hisilicon/kirin/dw_drm_dsi.c
···760760 * Get the endpoint node. In our case, dsi has one output port1761761 * to which the external HDMI bridge is connected.762762 */763763- ret = drm_of_find_panel_or_bridge(np, 0, 0, NULL, &dsi->bridge);763763+ ret = drm_of_find_panel_or_bridge(np, 1, 0, NULL, &dsi->bridge);764764 if (ret)765765 return ret;766766
drivers/gpu/drm/i915/i915_debugfs.c
···292292 struct file_stats *stats = data;293293 struct i915_vma *vma;294294295295+ lockdep_assert_held(&obj->base.dev->struct_mutex);296296+295297 stats->count++;296298 stats->total += obj->base.size;297299 if (!obj->bind_count)···478476 struct drm_i915_gem_request *request;479477 struct task_struct *task;480478479479+ mutex_lock(&dev->struct_mutex);480480+481481 memset(&stats, 0, sizeof(stats));482482 stats.file_priv = file->driver_priv;483483 spin_lock(&file->table_lock);···491487 * still alive (e.g. get_pid(current) => fork() => exit()).492488 * Therefore, we need to protect this ->comm access using RCU.493489 */494494- mutex_lock(&dev->struct_mutex);495490 request = list_first_entry_or_null(&file_priv->mm.request_list,496491 struct drm_i915_gem_request,497492 client_link);···500497 PIDTYPE_PID);501498 print_file_stats(m, task ? task->comm : "<unknown>", stats);502499 rcu_read_unlock();500500+503501 mutex_unlock(&dev->struct_mutex);504502 }505503 mutex_unlock(&dev->filelist_mutex);
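The added lockdep_assert_held() documents the locking contract in code: with CONFIG_PROVE_LOCKING enabled it warns immediately if a caller reaches the function without struct_mutex held, and it compiles away otherwise. The idiom in isolation (names are illustrative):

#include <linux/lockdep.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(stats_lock);
static unsigned long stats_total;

static void stats_account(unsigned long size)
{
        /* Free when lockdep is off; a loud warning when a caller
         * forgets to take stats_lock first. */
        lockdep_assert_held(&stats_lock);

        stats_total += size;
}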
+9-4
drivers/gpu/drm/i915/i915_drv.c
···12351235 goto out_fini;1236123612371237 pci_set_drvdata(pdev, &dev_priv->drm);12381238+ /*12391239+ * Disable the system suspend direct complete optimization, which can12401240+ * leave the device suspended skipping the driver's suspend handlers12411241+ * if the device was already runtime suspended. This is needed due to12421242+ * the difference in our runtime and system suspend sequence and12431243+ * because the HDA driver may require us to enable the audio power12441244+ * domain during system suspend.12451245+ */12461246+ pdev->dev_flags |= PCI_DEV_FLAGS_NEEDS_RESUME;1238124712391248 ret = i915_driver_init_early(dev_priv, ent);12401249 if (ret < 0)···1281127212821273 dev_priv->ipc_enabled = false;1283127412841284- /* Everything is in place, we can now relax! */12851285- DRM_INFO("Initialized %s %d.%d.%d %s for %s on minor %d\n",12861286- driver.name, driver.major, driver.minor, driver.patchlevel,12871287- driver.date, pci_name(pdev), dev_priv->drm.primary->index);12881275 if (IS_ENABLED(CONFIG_DRM_I915_DEBUG))12891276 DRM_INFO("DRM_I915_DEBUG enabled\n");12901277 if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
+12-1
drivers/gpu/drm/i915/i915_drv.h
···562562563563void intel_link_compute_m_n(int bpp, int nlanes,564564 int pixel_clock, int link_clock,565565- struct intel_link_m_n *m_n);565565+ struct intel_link_m_n *m_n,566566+ bool reduce_m_n);566567567568/* Interface history:568569 *···29862985{29872986#ifdef CONFIG_INTEL_IOMMU29882987 if (INTEL_GEN(dev_priv) >= 6 && intel_iommu_gfx_mapped)29882988+ return true;29892989+#endif29902990+ return false;29912991+}29922992+29932993+static inline bool29942994+intel_ggtt_update_needs_vtd_wa(struct drm_i915_private *dev_priv)29952995+{29962996+#ifdef CONFIG_INTEL_IOMMU29972997+ if (IS_BROXTON(dev_priv) && intel_iommu_gfx_mapped)29892998 return true;29902999#endif29913000 return false;
+45-20
drivers/gpu/drm/i915/i915_gem.c
···22852285 struct page *page;22862286 unsigned long last_pfn = 0; /* suppress gcc warning */22872287 unsigned int max_segment;22882288+ gfp_t noreclaim;22882289 int ret;22892289- gfp_t gfp;2290229022912291 /* Assert that the object is not currently in any GPU domain. As it22922292 * wasn't in the GTT, there shouldn't be any way it could have been in···23152315 * Fail silently without starting the shrinker23162316 */23172317 mapping = obj->base.filp->f_mapping;23182318- gfp = mapping_gfp_constraint(mapping, ~(__GFP_IO | __GFP_RECLAIM));23192319- gfp |= __GFP_NORETRY | __GFP_NOWARN;23182318+ noreclaim = mapping_gfp_constraint(mapping,23192319+ ~(__GFP_IO | __GFP_RECLAIM));23202320+ noreclaim |= __GFP_NORETRY | __GFP_NOWARN;23212321+23202322 sg = st->sgl;23212323 st->nents = 0;23222324 for (i = 0; i < page_count; i++) {23232323- page = shmem_read_mapping_page_gfp(mapping, i, gfp);23242324- if (unlikely(IS_ERR(page))) {23252325- i915_gem_shrink(dev_priv,23262326- page_count,23272327- I915_SHRINK_BOUND |23282328- I915_SHRINK_UNBOUND |23292329- I915_SHRINK_PURGEABLE);23252325+ const unsigned int shrink[] = {23262326+ I915_SHRINK_BOUND | I915_SHRINK_UNBOUND | I915_SHRINK_PURGEABLE,23272327+ 0,23282328+ }, *s = shrink;23292329+ gfp_t gfp = noreclaim;23302330+23312331+ do {23302332 page = shmem_read_mapping_page_gfp(mapping, i, gfp);23312331- }23322332- if (unlikely(IS_ERR(page))) {23332333- gfp_t reclaim;23332333+ if (likely(!IS_ERR(page)))23342334+ break;23352335+23362336+ if (!*s) {23372337+ ret = PTR_ERR(page);23382338+ goto err_sg;23392339+ }23402340+23412341+ i915_gem_shrink(dev_priv, 2 * page_count, *s++);23422342+ cond_resched();2334234323352344 /* We've tried hard to allocate the memory by reaping23362345 * our own buffer, now let the real VM do its job and···23492340 * defer the oom here by reporting the ENOMEM back23502341 * to userspace.23512342 */23522352- reclaim = mapping_gfp_mask(mapping);23532353- reclaim |= __GFP_NORETRY; /* reclaim, but no oom */23432343+ if (!*s) {23442344+ /* reclaim and warn, but no oom */23452345+ gfp = mapping_gfp_mask(mapping);2354234623552355- page = shmem_read_mapping_page_gfp(mapping, i, reclaim);23562356- if (IS_ERR(page)) {23572357- ret = PTR_ERR(page);23582358- goto err_sg;23472347+ /* Our bo are always dirty and so we require23482348+ * kswapd to reclaim our pages (direct reclaim23492349+ * does not effectively begin pageout of our23502350+ * buffers on its own). However, direct reclaim23512351+ * only waits for kswapd when under allocation23522352+ * congestion. So as a result __GFP_RECLAIM is23532353+ * unreliable and fails to actually reclaim our23542354+ * dirty pages -- unless you try over and over23552355+ * again with !__GFP_NORETRY. 
However, we still23562356+ * want to fail this allocation rather than23572357+ * trigger the out-of-memory killer and for23582358+ * this we want the future __GFP_MAYFAIL.23592359+ */23592360 }23602360- }23612361+ } while (1);23622362+23612363 if (!i ||23622364 sg->length >= max_segment ||23632365 page_to_pfn(page) != last_pfn + 1) {···33183298{33193299 int ret;3320330033013301+ /* If the device is asleep, we have no requests outstanding */33023302+ if (!READ_ONCE(i915->gt.awake))33033303+ return 0;33043304+33213305 if (flags & I915_WAIT_LOCKED) {33223306 struct i915_gem_timeline *tl;33233307···4242421842434219 mapping = obj->base.filp->f_mapping;42444220 mapping_set_gfp_mask(mapping, mask);42214221+ GEM_BUG_ON(!(mapping_gfp_mask(mapping) & __GFP_RECLAIM));4245422242464223 i915_gem_object_init(obj, &i915_gem_object_ops);42474224
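The shmem change above is a staged-allocation loop: try cheaply with __GFP_NORETRY | __GFP_NOWARN, reap the driver's own caches and retry, and only then fall back to a gfp mask that allows full direct reclaim. The same shape reduced to one page (shrink_my_caches() is a stand-in for a driver-local shrinker):

#include <linux/gfp.h>
#include <linux/sched.h>

static void shrink_my_caches(void)
{
        /* Placeholder: reap driver-private buffers to free pages. */
}

static struct page *alloc_page_staged(void)
{
        struct page *page;

        /* Cheap attempt: fail fast instead of entering direct reclaim. */
        page = alloc_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
        if (page)
                return page;

        shrink_my_caches();
        cond_resched();

        /* Last resort: let the VM reclaim (and possibly stall). */
        return alloc_page(GFP_KERNEL);
}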
+14-3
drivers/gpu/drm/i915/i915_gem_execbuffer.c
···546546}547547548548static int549549-i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,549549+i915_gem_execbuffer_relocate_entry(struct i915_vma *vma,550550 struct eb_vmas *eb,551551 struct drm_i915_gem_relocation_entry *reloc,552552 struct reloc_cache *cache)553553{554554+ struct drm_i915_gem_object *obj = vma->obj;554555 struct drm_i915_private *dev_priv = to_i915(obj->base.dev);555556 struct drm_gem_object *target_obj;556557 struct drm_i915_gem_object *target_i915_obj;···629628 return -EINVAL;630629 }631630631631+ /*632632+ * If we write into the object, we need to force the synchronisation633633+ * barrier, either with an asynchronous clflush or if we executed the634634+ * patching using the GPU (though that should be serialised by the635635+ * timeline). To be completely sure, and since we are required to636636+ * do relocations we are already stalling, disable the user's opt637637+ * out of our synchronisation.638638+ */639639+ vma->exec_entry->flags &= ~EXEC_OBJECT_ASYNC;640640+632641 ret = relocate_entry(obj, reloc, cache, target_offset);633642 if (ret)634643 return ret;···689678 do {690679 u64 offset = r->presumed_offset;691680692692- ret = i915_gem_execbuffer_relocate_entry(vma->obj, eb, r, &cache);681681+ ret = i915_gem_execbuffer_relocate_entry(vma, eb, r, &cache);693682 if (ret)694683 goto out;695684···737726738727 reloc_cache_init(&cache, eb->i915);739728 for (i = 0; i < entry->relocation_count; i++) {740740- ret = i915_gem_execbuffer_relocate_entry(vma->obj, eb, &relocs[i], &cache);729729+ ret = i915_gem_execbuffer_relocate_entry(vma, eb, &relocs[i], &cache);741730 if (ret)742731 break;743732 }
+106-2
drivers/gpu/drm/i915/i915_gem_gtt.c
···21912191 gen8_set_pte(>t_base[i], scratch_pte);21922192}2193219321942194+static void bxt_vtd_ggtt_wa(struct i915_address_space *vm)21952195+{21962196+ struct drm_i915_private *dev_priv = vm->i915;21972197+21982198+ /*21992199+ * Make sure the internal GAM fifo has been cleared of all GTT22002200+ * writes before exiting stop_machine(). This guarantees that22012201+ * any aperture accesses waiting to start in another process22022202+ * cannot back up behind the GTT writes causing a hang.22032203+ * The register can be any arbitrary GAM register.22042204+ */22052205+ POSTING_READ(GFX_FLSH_CNTL_GEN6);22062206+}22072207+22082208+struct insert_page {22092209+ struct i915_address_space *vm;22102210+ dma_addr_t addr;22112211+ u64 offset;22122212+ enum i915_cache_level level;22132213+};22142214+22152215+static int bxt_vtd_ggtt_insert_page__cb(void *_arg)22162216+{22172217+ struct insert_page *arg = _arg;22182218+22192219+ gen8_ggtt_insert_page(arg->vm, arg->addr, arg->offset, arg->level, 0);22202220+ bxt_vtd_ggtt_wa(arg->vm);22212221+22222222+ return 0;22232223+}22242224+22252225+static void bxt_vtd_ggtt_insert_page__BKL(struct i915_address_space *vm,22262226+ dma_addr_t addr,22272227+ u64 offset,22282228+ enum i915_cache_level level,22292229+ u32 unused)22302230+{22312231+ struct insert_page arg = { vm, addr, offset, level };22322232+22332233+ stop_machine(bxt_vtd_ggtt_insert_page__cb, &arg, NULL);22342234+}22352235+22362236+struct insert_entries {22372237+ struct i915_address_space *vm;22382238+ struct sg_table *st;22392239+ u64 start;22402240+ enum i915_cache_level level;22412241+};22422242+22432243+static int bxt_vtd_ggtt_insert_entries__cb(void *_arg)22442244+{22452245+ struct insert_entries *arg = _arg;22462246+22472247+ gen8_ggtt_insert_entries(arg->vm, arg->st, arg->start, arg->level, 0);22482248+ bxt_vtd_ggtt_wa(arg->vm);22492249+22502250+ return 0;22512251+}22522252+22532253+static void bxt_vtd_ggtt_insert_entries__BKL(struct i915_address_space *vm,22542254+ struct sg_table *st,22552255+ u64 start,22562256+ enum i915_cache_level level,22572257+ u32 unused)22582258+{22592259+ struct insert_entries arg = { vm, st, start, level };22602260+22612261+ stop_machine(bxt_vtd_ggtt_insert_entries__cb, &arg, NULL);22622262+}22632263+22642264+struct clear_range {22652265+ struct i915_address_space *vm;22662266+ u64 start;22672267+ u64 length;22682268+};22692269+22702270+static int bxt_vtd_ggtt_clear_range__cb(void *_arg)22712271+{22722272+ struct clear_range *arg = _arg;22732273+22742274+ gen8_ggtt_clear_range(arg->vm, arg->start, arg->length);22752275+ bxt_vtd_ggtt_wa(arg->vm);22762276+22772277+ return 0;22782278+}22792279+22802280+static void bxt_vtd_ggtt_clear_range__BKL(struct i915_address_space *vm,22812281+ u64 start,22822282+ u64 length)22832283+{22842284+ struct clear_range arg = { vm, start, length };22852285+22862286+ stop_machine(bxt_vtd_ggtt_clear_range__cb, &arg, NULL);22872287+}22882288+21942289static void gen6_ggtt_clear_range(struct i915_address_space *vm,21952290 u64 start, u64 length)21962291{···24082313 appgtt->base.allocate_va_range) {24092314 ret = appgtt->base.allocate_va_range(&appgtt->base,24102315 vma->node.start,24112411- vma->node.size);23162316+ vma->size);24122317 if (ret)24132318 goto err_pages;24142319 }···2880278528812786 ggtt->base.insert_entries = gen8_ggtt_insert_entries;2882278727882788+ /* Serialize GTT updates with aperture access on BXT if VT-d is on. 
*/27892789+ if (intel_ggtt_update_needs_vtd_wa(dev_priv)) {27902790+ ggtt->base.insert_entries = bxt_vtd_ggtt_insert_entries__BKL;27912791+ ggtt->base.insert_page = bxt_vtd_ggtt_insert_page__BKL;27922792+ if (ggtt->base.clear_range != nop_clear_range)27932793+ ggtt->base.clear_range = bxt_vtd_ggtt_clear_range__BKL;27942794+ }27952795+28832796 ggtt->invalidate = gen6_ggtt_invalidate;2884279728852798 return ggtt_probe_common(ggtt, size);···3100299731012998void i915_ggtt_disable_guc(struct drm_i915_private *i915)31022999{31033103- i915->ggtt.invalidate = gen6_ggtt_invalidate;30003000+ if (i915->ggtt.invalidate == guc_ggtt_invalidate)30013001+ i915->ggtt.invalidate = gen6_ggtt_invalidate;31043002}3105300331063004void i915_gem_restore_gtt_mappings(struct drm_i915_private *dev_priv)
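The BXT workaround wraps every GGTT update in stop_machine(), which runs the callback while every other CPU spins with interrupts disabled, so no concurrent aperture access can observe a half-done update. It is a heavy hammer, which is why it is installed only when VT-d is active. The bare pattern (names are illustrative):

#include <linux/stop_machine.h>

struct update_arg {
        unsigned long *slot;
        unsigned long val;
};

static int do_update(void *data)
{
        struct update_arg *arg = data;

        /* All other CPUs are parked here, interrupts off. */
        *arg->slot = arg->val;
        return 0;
}

static void update_stopped(unsigned long *slot, unsigned long val)
{
        struct update_arg arg = { slot, val };

        /* NULL cpumask: the callback may run on any one CPU. */
        stop_machine(do_update, &arg, NULL);
}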
+1-1
drivers/gpu/drm/i915/i915_gem_request.c
···623623 * GPU processing the request, we never over-estimate the624624 * position of the head.625625 */626626- req->head = req->ring->tail;626626+ req->head = req->ring->emit;627627628628 /* Check that we didn't interrupt ourselves with a new request */629629 GEM_BUG_ON(req->timeline->seqno != req->fence.seqno);
-5
drivers/gpu/drm/i915/i915_gem_shrinker.c
···5959 return;60606161 mutex_unlock(&dev->struct_mutex);6262-6363- /* expedite the RCU grace period to free some request slabs */6464- synchronize_rcu_expedited();6562}66636764static bool any_vma_pinned(struct drm_i915_gem_object *obj)···270273 I915_SHRINK_UNBOUND |271274 I915_SHRINK_ACTIVE);272275 intel_runtime_pm_put(dev_priv);273273-274274- synchronize_rcu(); /* wait for our earlier RCU delayed slab frees */275276276277 return freed;277278}
drivers/gpu/drm/i915/i915_guc_submission.c
···480480 GEM_BUG_ON(freespace < wqi_size);481481482482 /* The GuC firmware wants the tail index in QWords, not bytes */483483- tail = rq->tail;484484- assert_ring_tail_valid(rq->ring, rq->tail);485485- tail >>= 3;483483+ tail = intel_ring_set_tail(rq->ring, rq->tail) >> 3;486484 GEM_BUG_ON(tail > WQ_RING_TAIL_MAX);487485488486 /* For now workqueue item is 4 DWs; workqueue buffer is 2 pages. So we
drivers/gpu/drm/i915/i915_pvinfo.h
···3636#define VGT_VERSION_MAJOR 13737#define VGT_VERSION_MINOR 038383939-#define INTEL_VGT_IF_VERSION_ENCODE(major, minor) ((major) << 16 | (minor))4040-#define INTEL_VGT_IF_VERSION \4141- INTEL_VGT_IF_VERSION_ENCODE(VGT_VERSION_MAJOR, VGT_VERSION_MINOR)4242-4339/*4440 * notifications from guest to vgpu device model4541 */···51555256struct vgt_if {5357 u64 magic; /* VGT_MAGIC */5454- uint16_t version_major;5555- uint16_t version_minor;5858+ u16 version_major;5959+ u16 version_minor;5660 u32 vgt_id; /* ID of vGT instance */5761 u32 rsv1[12]; /* pad to offset 0x40 */5862 /*
+1-1
drivers/gpu/drm/i915/i915_reg.h
···8280828082818281/* MIPI DSI registers */8282828282838283-#define _MIPI_PORT(port, a, c) ((port) ? c : a) /* ports A and C only */82838283+#define _MIPI_PORT(port, a, c) (((port) == PORT_A) ? a : c) /* ports A and C only */82848284#define _MMIO_MIPI(port, a, c) _MMIO(_MIPI_PORT(port, a, c))8285828582868286#define MIPIO_TXESC_CLK_DIV1 _MMIO(0x160004)
+4-6
drivers/gpu/drm/i915/i915_vgpu.c
···6060 */6161void i915_check_vgpu(struct drm_i915_private *dev_priv)6262{6363- uint64_t magic;6464- uint32_t version;6363+ u64 magic;6464+ u16 version_major;65656666 BUILD_BUG_ON(sizeof(struct vgt_if) != VGT_PVINFO_SIZE);6767···6969 if (magic != VGT_MAGIC)7070 return;71717272- version = INTEL_VGT_IF_VERSION_ENCODE(7373- __raw_i915_read16(dev_priv, vgtif_reg(version_major)),7474- __raw_i915_read16(dev_priv, vgtif_reg(version_minor)));7575- if (version != INTEL_VGT_IF_VERSION) {7272+ version_major = __raw_i915_read16(dev_priv, vgtif_reg(version_major));7373+ if (version_major < VGT_VERSION_MAJOR) {7674 DRM_INFO("VGT interface version mismatch!\n");7775 return;7876 }
+5
drivers/gpu/drm/i915/i915_vma.c
···650650 break;651651 }652652653653+ if (!ret) {654654+ ret = i915_gem_active_retire(&vma->last_fence,655655+ &vma->vm->i915->drm.struct_mutex);656656+ }657657+653658 __i915_vma_unpin(vma);654659 if (ret)655660 return ret;
+48-27
drivers/gpu/drm/i915/intel_display.c
···120120static void skylake_pfit_enable(struct intel_crtc *crtc);121121static void ironlake_pfit_disable(struct intel_crtc *crtc, bool force);122122static void ironlake_pfit_enable(struct intel_crtc *crtc);123123-static void intel_modeset_setup_hw_state(struct drm_device *dev);123123+static void intel_modeset_setup_hw_state(struct drm_device *dev,124124+ struct drm_modeset_acquire_ctx *ctx);124125static void intel_pre_disable_primary_noatomic(struct drm_crtc *crtc);125126126127struct intel_limit {···34503449 struct drm_crtc *crtc;34513450 int i, ret;3452345134533453- intel_modeset_setup_hw_state(dev);34523452+ intel_modeset_setup_hw_state(dev, ctx);34543453 i915_redisable_vga(to_i915(dev));3455345434563455 if (!state)···4599459846004599static int46014600skl_update_scaler(struct intel_crtc_state *crtc_state, bool force_detach,46024602- unsigned scaler_user, int *scaler_id, unsigned int rotation,46014601+ unsigned int scaler_user, int *scaler_id,46034602 int src_w, int src_h, int dst_w, int dst_h)46044603{46054604 struct intel_crtc_scaler_state *scaler_state =···46084607 to_intel_crtc(crtc_state->base.crtc);46094608 int need_scaling;4610460946114611- need_scaling = drm_rotation_90_or_270(rotation) ?46124612- (src_h != dst_w || src_w != dst_h):46134613- (src_w != dst_w || src_h != dst_h);46104610+ /*46114611+ * Src coordinates are already rotated by 270 degrees for46124612+ * the 90/270 degree plane rotation cases (to match the46134613+ * GTT mapping), hence no need to account for rotation here.46144614+ */46154615+ need_scaling = src_w != dst_w || src_h != dst_h;4614461646154617 /*46164618 * if plane is being disabled or scaler is no more required or force detach···46754671 const struct drm_display_mode *adjusted_mode = &state->base.adjusted_mode;4676467246774673 return skl_update_scaler(state, !state->base.active, SKL_CRTC_INDEX,46784678- &state->scaler_state.scaler_id, DRM_ROTATE_0,46744674+ &state->scaler_state.scaler_id,46794675 state->pipe_src_w, state->pipe_src_h,46804676 adjusted_mode->crtc_hdisplay, adjusted_mode->crtc_vdisplay);46814677}···47044700 ret = skl_update_scaler(crtc_state, force_detach,47054701 drm_plane_index(&intel_plane->base),47064702 &plane_state->scaler_id,47074707- plane_state->base.rotation,47084703 drm_rect_width(&plane_state->base.src) >> 16,47094704 drm_rect_height(&plane_state->base.src) >> 16,47104705 drm_rect_width(&plane_state->base.dst),···58265823 intel_update_watermarks(intel_crtc);58275824}5828582558295829-static void intel_crtc_disable_noatomic(struct drm_crtc *crtc)58265826+static void intel_crtc_disable_noatomic(struct drm_crtc *crtc,58275827+ struct drm_modeset_acquire_ctx *ctx)58305828{58315829 struct intel_encoder *encoder;58325830 struct intel_crtc *intel_crtc = to_intel_crtc(crtc);···58575853 return;58585854 }5859585558605860- state->acquire_ctx = crtc->dev->mode_config.acquire_ctx;58565856+ state->acquire_ctx = ctx;5861585758625858 /* Everything's already locked, -EDEADLK can't happen. 
*/58635859 crtc_state = intel_atomic_get_crtc_state(state, intel_crtc);···61056101 pipe_config->fdi_lanes = lane;6106610261076103 intel_link_compute_m_n(pipe_config->pipe_bpp, lane, fdi_dotclock,61086108- link_bw, &pipe_config->fdi_m_n);61046104+ link_bw, &pipe_config->fdi_m_n, false);6109610561106106 ret = ironlake_check_fdi_lanes(dev, intel_crtc->pipe, pipe_config);61116107 if (ret == -EINVAL && pipe_config->pipe_bpp > 6*3) {···62816277}6282627862836279static void compute_m_n(unsigned int m, unsigned int n,62846284- uint32_t *ret_m, uint32_t *ret_n)62806280+ uint32_t *ret_m, uint32_t *ret_n,62816281+ bool reduce_m_n)62856282{62866283 /*62876284 * Reduce M/N as much as possible without loss in precision. Several DP···62906285 * values. The passed in values are more likely to have the least62916286 * significant bits zero than M after rounding below, so do this first.62926287 */62936293- while ((m & 1) == 0 && (n & 1) == 0) {62946294- m >>= 1;62956295- n >>= 1;62886288+ if (reduce_m_n) {62896289+ while ((m & 1) == 0 && (n & 1) == 0) {62906290+ m >>= 1;62916291+ n >>= 1;62926292+ }62966293 }6297629462986295 *ret_n = min_t(unsigned int, roundup_pow_of_two(n), DATA_LINK_N_MAX);···63056298void63066299intel_link_compute_m_n(int bits_per_pixel, int nlanes,63076300 int pixel_clock, int link_clock,63086308- struct intel_link_m_n *m_n)63016301+ struct intel_link_m_n *m_n,63026302+ bool reduce_m_n)63096303{63106304 m_n->tu = 64;6311630563126306 compute_m_n(bits_per_pixel * pixel_clock,63136307 link_clock * nlanes * 8,63146314- &m_n->gmch_m, &m_n->gmch_n);63086308+ &m_n->gmch_m, &m_n->gmch_n,63096309+ reduce_m_n);6315631063166311 compute_m_n(pixel_clock, link_clock,63176317- &m_n->link_m, &m_n->link_n);63126312+ &m_n->link_m, &m_n->link_n,63136313+ reduce_m_n);63186314}6319631563206316static inline bool intel_panel_use_ssc(struct drm_i915_private *dev_priv)···1220712197 * type. For DP ports it behaves like most other platforms, but on HDMI1220812198 * there's an extra 1 line difference. So we need to add two instead of1220912199 * one to the value.1220012200+ *1220112201+ * On VLV/CHV DSI the scanline counter would appear to increment1220212202+ * approx. 1/3 of a scanline before start of vblank. Unfortunately1220312203+ * that means we can't tell whether we're in vblank or not while1220412204+ * we're on that particular line. 
We must still set scanline_offset1220512205+ * to 1 so that the vblank timestamps come out correct when we query1220612206+ * the scanline counter from within the vblank interrupt handler.1220712207+ * However if queried just before the start of vblank we'll get an1220812208+ * answer that's slightly in the future.1221012209 */1221112210 if (IS_GEN2(dev_priv)) {1221212211 const struct drm_display_mode *adjusted_mode = &crtc->config->base.adjusted_mode;···1503215013 intel_setup_outputs(dev_priv);15033150141503415015 drm_modeset_lock_all(dev);1503515035- intel_modeset_setup_hw_state(dev);1501615016+ intel_modeset_setup_hw_state(dev, dev->mode_config.acquire_ctx);1503615017 drm_modeset_unlock_all(dev);15037150181503815019 for_each_intel_crtc(dev, crtc) {···1506915050 return 0;1507015051}15071150521507215072-static void intel_enable_pipe_a(struct drm_device *dev)1505315053+static void intel_enable_pipe_a(struct drm_device *dev,1505415054+ struct drm_modeset_acquire_ctx *ctx)1507315055{1507415056 struct intel_connector *connector;1507515057 struct drm_connector_list_iter conn_iter;1507615058 struct drm_connector *crt = NULL;1507715059 struct intel_load_detect_pipe load_detect_temp;1507815078- struct drm_modeset_acquire_ctx *ctx = dev->mode_config.acquire_ctx;1507915060 int ret;15080150611508115062 /* We can't just switch on the pipe A, we need to set things up with a···1514715128 (HAS_PCH_LPT_H(dev_priv) && pch_transcoder == TRANSCODER_A);1514815129}15149151301515015150-static void intel_sanitize_crtc(struct intel_crtc *crtc)1513115131+static void intel_sanitize_crtc(struct intel_crtc *crtc,1513215132+ struct drm_modeset_acquire_ctx *ctx)1515115133{1515215134 struct drm_device *dev = crtc->base.dev;1515315135 struct drm_i915_private *dev_priv = to_i915(dev);···1519415174 plane = crtc->plane;1519515175 crtc->base.primary->state->visible = true;1519615176 crtc->plane = !plane;1519715197- intel_crtc_disable_noatomic(&crtc->base);1517715177+ intel_crtc_disable_noatomic(&crtc->base, ctx);1519815178 crtc->plane = plane;1519915179 }1520015180···1520415184 * resume. Force-enable the pipe to fix this, the update_dpms1520515185 * call below we restore the pipe to the right state, but leave1520615186 * the required bits on. */1520715207- intel_enable_pipe_a(dev);1518715187+ intel_enable_pipe_a(dev, ctx);1520815188 }15209151891521015190 /* Adjust the state of the output pipe according to whether we1521115191 * have active connectors/encoders. */1521215192 if (crtc->active && !intel_crtc_has_encoders(crtc))1521315213- intel_crtc_disable_noatomic(&crtc->base);1519315193+ intel_crtc_disable_noatomic(&crtc->base, ctx);15214151941521515195 if (crtc->active || HAS_GMCH_DISPLAY(dev_priv)) {1521615196 /*···1550815488 * and sanitizes it to the current state1550915489 */1551015490static void1551115511-intel_modeset_setup_hw_state(struct drm_device *dev)1549115491+intel_modeset_setup_hw_state(struct drm_device *dev,1549215492+ struct drm_modeset_acquire_ctx *ctx)1551215493{1551315494 struct drm_i915_private *dev_priv = to_i915(dev);1551415495 enum pipe pipe;···1552915508 for_each_pipe(dev_priv, pipe) {1553015509 crtc = intel_get_crtc_for_pipe(dev_priv, pipe);15531155101553215532- intel_sanitize_crtc(crtc);1551115511+ intel_sanitize_crtc(crtc, ctx);1553315512 intel_dump_pipe_config(crtc, crtc->config,1553415513 "[setup_hw_state]");1553515514 }
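The compute_m_n() change makes the halving step opt-in, since at least some configurations reportedly need the exact, unreduced M/N values. The reduction itself just strips common factors of two before N is clamped to a power of two; in isolation:

#include <stdio.h>

/* Strip common factors of two, e.g. 24/16 reduces to 3/2. */
static void reduce_m_n(unsigned int *m, unsigned int *n)
{
        while ((*m & 1) == 0 && (*n & 1) == 0) {
                *m >>= 1;
                *n >>= 1;
        }
}

int main(void)
{
        unsigned int m = 24, n = 16;

        reduce_m_n(&m, &n);
        printf("%u/%u\n", m, n); /* prints 3/2 */
        return 0;
}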
drivers/gpu/drm/i915/intel_pm.c
···3373337333743374 /* n.b., src is 16.16 fixed point, dst is whole integer */33753375 if (plane->id == PLANE_CURSOR) {33763376+ /*33773377+ * Cursors only support 0/180 degree rotation,33783378+ * hence no need to account for rotation here.33793379+ */33763380 src_w = pstate->base.src_w;33773381 src_h = pstate->base.src_h;33783382 dst_w = pstate->base.crtc_w;33793383 dst_h = pstate->base.crtc_h;33803384 } else {33853385+ /*33863386+ * Src coordinates are already rotated by 270 degrees for33873387+ * the 90/270 degree plane rotation cases (to match the33883388+ * GTT mapping), hence no need to account for rotation here.33893389+ */33813390 src_w = drm_rect_width(&pstate->base.src);33823391 src_h = drm_rect_height(&pstate->base.src);33833392 dst_w = drm_rect_width(&pstate->base.dst);33843393 dst_h = drm_rect_height(&pstate->base.dst);33853394 }33863386-33873387- if (drm_rotation_90_or_270(pstate->base.rotation))33883388- swap(dst_w, dst_h);3389339533903396 downscale_h = max(src_h / dst_h, (uint32_t)DRM_PLANE_HELPER_NO_SCALING);33913397 downscale_w = max(src_w / dst_w, (uint32_t)DRM_PLANE_HELPER_NO_SCALING);···34233417 if (y && format != DRM_FORMAT_NV12)34243418 return 0;3425341934203420+ /*34213421+ * Src coordinates are already rotated by 270 degrees for34223422+ * the 90/270 degree plane rotation cases (to match the34233423+ * GTT mapping), hence no need to account for rotation here.34243424+ */34263425 width = drm_rect_width(&intel_pstate->base.src) >> 16;34273426 height = drm_rect_height(&intel_pstate->base.src) >> 16;34283428-34293429- if (drm_rotation_90_or_270(pstate->rotation))34303430- swap(width, height);3431342734323428 /* for planar format */34333429 if (format == DRM_FORMAT_NV12) {···35133505 fb->modifier != I915_FORMAT_MOD_Yf_TILED)35143506 return 8;3515350735083508+ /*35093509+ * Src coordinates are already rotated by 270 degrees for35103510+ * the 90/270 degree plane rotation cases (to match the35113511+ * GTT mapping), hence no need to account for rotation here.35123512+ */35163513 src_w = drm_rect_width(&intel_pstate->base.src) >> 16;35173514 src_h = drm_rect_height(&intel_pstate->base.src) >> 16;35183518-35193519- if (drm_rotation_90_or_270(pstate->rotation))35203520- swap(src_w, src_h);3521351535223516 /* Halve UV plane width and height for NV12 */35233517 if (fb->format->format == DRM_FORMAT_NV12 && !y) {···38043794 width = intel_pstate->base.crtc_w;38053795 height = intel_pstate->base.crtc_h;38063796 } else {37973797+ /*37983798+ * Src coordinates are already rotated by 270 degrees for37993799+ * the 90/270 degree plane rotation cases (to match the38003800+ * GTT mapping), hence no need to account for rotation here.38013801+ */38073802 width = drm_rect_width(&intel_pstate->base.src) >> 16;38083803 height = drm_rect_height(&intel_pstate->base.src) >> 16;38093804 }38103810-38113811- if (drm_rotation_90_or_270(pstate->rotation))38123812- swap(width, height);3813380538143806 cpp = fb->format->cpp[0];38153807 plane_pixel_rate = skl_adjusted_plane_pixel_rate(cstate, intel_pstate);···43474335 struct drm_crtc_state *cstate;43484336 struct intel_atomic_state *intel_state = to_intel_atomic_state(state);43494337 struct skl_wm_values *results = &intel_state->wm_results;43384338+ struct drm_device *dev = state->dev;43504339 struct skl_pipe_wm *pipe_wm;43514340 bool changed = false;43524341 int ret, i;43424342+43434343+ /*43444344+ * When we distrust bios wm we always need to recompute to set the43454345+ * expected DDB allocations for each CRTC.43464346+ */43474347+ if 
(to_i915(dev)->wm.distrust_bios_wm)43484348+ changed = true;4353434943544350 /*43554351 * If this transaction isn't actually touching any CRTC's, don't···43694349 */43704350 for_each_new_crtc_in_state(state, crtc, cstate, i)43714351 changed = true;43524352+43724353 if (!changed)43734354 return 0;43744355
+3-2
drivers/gpu/drm/i915/intel_psr.c
···435435 }436436437437 /* PSR2 is restricted to work with panel resolutions upto 3200x2000 */438438- if (intel_crtc->config->pipe_src_w > 3200 ||439439- intel_crtc->config->pipe_src_h > 2000) {438438+ if (dev_priv->psr.psr2_support &&439439+ (intel_crtc->config->pipe_src_w > 3200 ||440440+ intel_crtc->config->pipe_src_h > 2000)) {440441 dev_priv->psr.psr2_support = false;441442 return false;442443 }
+27-14
drivers/gpu/drm/i915/intel_ringbuffer.c
···49495050void intel_ring_update_space(struct intel_ring *ring)5151{5252- ring->space = __intel_ring_space(ring->head, ring->tail, ring->size);5252+ ring->space = __intel_ring_space(ring->head, ring->emit, ring->size);5353}54545555static int···774774775775 i915_gem_request_submit(request);776776777777- assert_ring_tail_valid(request->ring, request->tail);778778- I915_WRITE_TAIL(request->engine, request->tail);777777+ I915_WRITE_TAIL(request->engine,778778+ intel_ring_set_tail(request->ring, request->tail));779779}780780781781static void i9xx_emit_breadcrumb(struct drm_i915_gem_request *req, u32 *cs)···13161316 return PTR_ERR(addr);13171317}1318131813191319+void intel_ring_reset(struct intel_ring *ring, u32 tail)13201320+{13211321+ GEM_BUG_ON(!list_empty(&ring->request_list));13221322+ ring->tail = tail;13231323+ ring->head = tail;13241324+ ring->emit = tail;13251325+ intel_ring_update_space(ring);13261326+}13271327+13191328void intel_ring_unpin(struct intel_ring *ring)13201329{13211330 GEM_BUG_ON(!ring->vma);13221331 GEM_BUG_ON(!ring->vaddr);13321332+13331333+ /* Discard any unused bytes beyond that submitted to hw. */13341334+ intel_ring_reset(ring, ring->tail);1323133513241336 if (i915_vma_is_map_and_fenceable(ring->vma))13251337 i915_vma_unpin_iomap(ring->vma);···15741562 struct intel_engine_cs *engine;15751563 enum intel_engine_id id;1576156415651565+ /* Restart from the beginning of the rings for convenience */15771566 for_each_engine(engine, dev_priv, id)15781578- engine->buffer->head = engine->buffer->tail;15671567+ intel_ring_reset(engine->buffer, 0);15791568}1580156915811570static int ring_request_alloc(struct drm_i915_gem_request *request)···16291616 unsigned space;1630161716311618 /* Would completion of this request free enough space? */16321632- space = __intel_ring_space(target->postfix, ring->tail,16191619+ space = __intel_ring_space(target->postfix, ring->emit,16331620 ring->size);16341621 if (space >= bytes)16351622 break;···16541641u32 *intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)16551642{16561643 struct intel_ring *ring = req->ring;16571657- int remain_actual = ring->size - ring->tail;16581658- int remain_usable = ring->effective_size - ring->tail;16441644+ int remain_actual = ring->size - ring->emit;16451645+ int remain_usable = ring->effective_size - ring->emit;16591646 int bytes = num_dwords * sizeof(u32);16601647 int total_bytes, wait_bytes;16611648 bool need_wrap = false;···1691167816921679 if (unlikely(need_wrap)) {16931680 GEM_BUG_ON(remain_actual > ring->space);16941694- GEM_BUG_ON(ring->tail + remain_actual > ring->size);16811681+ GEM_BUG_ON(ring->emit + remain_actual > ring->size);1695168216961683 /* Fill the tail with MI_NOOP */16971697- memset(ring->vaddr + ring->tail, 0, remain_actual);16981698- ring->tail = 0;16841684+ memset(ring->vaddr + ring->emit, 0, remain_actual);16851685+ ring->emit = 0;16991686 ring->space -= remain_actual;17001687 }1701168817021702- GEM_BUG_ON(ring->tail > ring->size - bytes);17031703- cs = ring->vaddr + ring->tail;17041704- ring->tail += bytes;16891689+ GEM_BUG_ON(ring->emit > ring->size - bytes);16901690+ cs = ring->vaddr + ring->emit;16911691+ ring->emit += bytes;17051692 ring->space -= bytes;17061693 GEM_BUG_ON(ring->space < 0);17071694···17121699int intel_ring_cacheline_align(struct drm_i915_gem_request *req)17131700{17141701 int num_dwords =17151715- (req->ring->tail & (CACHELINE_BYTES - 1)) / sizeof(uint32_t);17021702+ (req->ring->emit & (CACHELINE_BYTES - 1)) / sizeof(uint32_t);17161703 u32 
*cs;1717170417181705 if (num_dwords == 0)
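The intel_ringbuffer.c change above splits the ring's write cursor in two: ring->emit tracks how far the CPU has written, while ring->tail only advances when a request is actually submitted via intel_ring_set_tail(), which is what lets intel_ring_unpin() discard unsubmitted bytes. A minimal userspace sketch of that emit/tail split -- a toy model under invented names, not the i915 code:

    #include <assert.h>
    #include <stdio.h>

    struct ring {
            unsigned int head, tail, emit, size; /* size is a power of two */
    };

    /* free space between the consumer (head) and the producer (emit) */
    static unsigned int ring_space(const struct ring *r)
    {
            return (r->head - r->emit - 1) & (r->size - 1);
    }

    static void ring_emit(struct ring *r, unsigned int bytes)
    {
            assert(bytes <= ring_space(r));
            r->emit = (r->emit + bytes) & (r->size - 1);
    }

    /* only on submission does the hardware-visible tail move */
    static void ring_submit(struct ring *r)
    {
            r->tail = r->emit;
    }

    int main(void)
    {
            struct ring r = { .size = 4096 };

            ring_emit(&r, 64);  /* written, but not yet visible to "HW" */
            printf("tail=%u emit=%u\n", r.tail, r.emit); /* 0 64 */
            ring_submit(&r);
            printf("tail=%u emit=%u\n", r.tail, r.emit); /* 64 64 */
            return 0;
    }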
+17-2
drivers/gpu/drm/i915/intel_ringbuffer.h
···145145146146 u32 head;147147 u32 tail;148148+ u32 emit;148149149150 int space;150151 int size;···489488struct intel_ring *490489intel_engine_create_ring(struct intel_engine_cs *engine, int size);491490int intel_ring_pin(struct intel_ring *ring, unsigned int offset_bias);491491+void intel_ring_reset(struct intel_ring *ring, u32 tail);492492+void intel_ring_update_space(struct intel_ring *ring);492493void intel_ring_unpin(struct intel_ring *ring);493494void intel_ring_free(struct intel_ring *ring);494495···514511 * reserved for the command packet (i.e. the value passed to515512 * intel_ring_begin()).516513 */517517- GEM_BUG_ON((req->ring->vaddr + req->ring->tail) != cs);514514+ GEM_BUG_ON((req->ring->vaddr + req->ring->emit) != cs);518515}519516520517static inline u32···543540 GEM_BUG_ON(tail >= ring->size);544541}545542546546-void intel_ring_update_space(struct intel_ring *ring);543543+static inline unsigned int544544+intel_ring_set_tail(struct intel_ring *ring, unsigned int tail)545545+{546546+ /* Whilst writes to the tail are strictly ordered, there is no547547+ * serialisation between readers and the writers. The tail may be548548+ * read by i915_gem_request_retire() just as it is being updated549549+ * by execlists, as although the breadcrumb is complete, the context550550+ * switch hasn't been seen.551551+ */552552+ assert_ring_tail_valid(ring, tail);553553+ ring->tail = tail;554554+ return tail;555555+}547556548557void intel_engine_init_global_seqno(struct intel_engine_cs *engine, u32 seqno);549558
+21
drivers/gpu/drm/i915/intel_sprite.c
···8383 */8484void intel_pipe_update_start(struct intel_crtc *crtc)8585{8686+ struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);8687 const struct drm_display_mode *adjusted_mode = &crtc->config->base.adjusted_mode;8788 long timeout = msecs_to_jiffies_timeout(1);8889 int scanline, min, max, vblank_start;8990 wait_queue_head_t *wq = drm_crtc_vblank_waitqueue(&crtc->base);9191+ bool need_vlv_dsi_wa = (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) &&9292+ intel_crtc_has_type(crtc->config, INTEL_OUTPUT_DSI);9093 DEFINE_WAIT(wait);91949295 vblank_start = adjusted_mode->crtc_vblank_start;···141138 finish_wait(wq, &wait);142139143140 drm_crtc_vblank_put(&crtc->base);141141+142142+ /*143143+ * On VLV/CHV DSI the scanline counter would appear to144144+ * increment approx. 1/3 of a scanline before start of vblank.145145+ * The registers still get latched at start of vblank however.146146+ * This means we must not write any registers on the first147147+ * line of vblank (since not the whole line is actually in148148+ * vblank). And unfortunately we can't use the interrupt to149149+ * wait here since it will fire too soon. We could use the150150+ * frame start interrupt instead since it will fire after the151151+ * critical scanline, but that would require more changes152152+ * in the interrupt code. So for now we'll just do the nasty153153+ * thing and poll for the bad scanline to pass us by.154154+ *155155+ * FIXME figure out if BXT+ DSI suffers from this as well156156+ */157157+ while (need_vlv_dsi_wa && scanline == vblank_start)158158+ scanline = intel_get_crtc_scanline(crtc);144159145160 crtc->debug.scanline_start = scanline;146161 crtc->debug.start_vbl_time = ktime_get();
-2
drivers/gpu/drm/i915/intel_uc.h
···5959 * available in the work queue (note, the queue is shared,6060 * not per-engine). It is OK for this to be nonzero, but6161 * it should not be huge!6262- * q_fail: failed to enqueue a work item. This should never happen,6363- * because we check for space beforehand.6462 * b_fail: failed to ring the doorbell. This should never happen, unless6563 * somehow the hardware misbehaves, or maybe if the GuC firmware6664 * crashes? We probably need to reset the GPU to recover.
···673673 ret = drm_of_find_panel_or_bridge(child,674674 imx_ldb->lvds_mux ? 4 : 2, 0,675675 &channel->panel, &channel->bridge);676676- if (ret)676676+ if (ret && ret != -ENODEV)677677 return ret;678678679679 /* panel ddc only if there is no bridge */
+6-9
drivers/gpu/drm/mediatek/mtk_dsi.c
···1919#include <drm/drm_of.h>2020#include <linux/clk.h>2121#include <linux/component.h>2222+#include <linux/iopoll.h>2223#include <linux/irq.h>2324#include <linux/of.h>2425#include <linux/of_platform.h>···901900902901static void mtk_dsi_wait_for_idle(struct mtk_dsi *dsi)903902{904904- u32 timeout_ms = 500000; /* total 1s ~ 2s timeout */903903+ int ret;904904+ u32 val;905905906906- while (timeout_ms--) {907907- if (!(readl(dsi->regs + DSI_INTSTA) & DSI_BUSY))908908- break;909909-910910- usleep_range(2, 4);911911- }912912-913913- if (timeout_ms == 0) {906906+ ret = readl_poll_timeout(dsi->regs + DSI_INTSTA, val, !(val & DSI_BUSY),907907+ 4, 2000000);908908+ if (ret) {914909 DRM_WARN("polling dsi wait not busy timeout!\n");915910916911 mtk_dsi_enable(dsi);
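The mtk_dsi.c hunk replaces a hand-rolled busy/timeout loop with readl_poll_timeout() from <linux/iopoll.h>, which takes the register, a scratch variable, the exit condition, a sleep interval and a timeout in microseconds. A rough userspace model of what that helper does -- illustrative only, and unlike the real macro it spins instead of sleeping between reads:

    #include <errno.h>
    #include <stdint.h>
    #include <time.h>

    static uint64_t now_us(void)
    {
            struct timespec ts;

            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec * 1000000ull + ts.tv_nsec / 1000;
    }

    /* poll *reg until (*reg & busy_mask) clears, or timeout_us elapses */
    static int poll_not_busy(const volatile uint32_t *reg, uint32_t busy_mask,
                             uint64_t timeout_us)
    {
            uint64_t deadline = now_us() + timeout_us;

            for (;;) {
                    if (!(*reg & busy_mask))
                            return 0;
                    if (now_us() > deadline)
                            return -ETIMEDOUT;
            }
    }

    int main(void)
    {
            volatile uint32_t fake_reg = 0x1; /* a register stuck busy */

            return poll_not_busy(&fake_reg, 0x1, 1000) == -ETIMEDOUT ? 0 : 1;
    }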
+1-1
drivers/gpu/drm/mediatek/mtk_hdmi.c
···10621062 }1063106310641064 err = hdmi_vendor_infoframe_pack(&frame, buffer, sizeof(buffer));10651065- if (err) {10651065+ if (err < 0) {10661066 dev_err(hdmi->dev, "Failed to pack vendor infoframe: %zd\n",10671067 err);10681068 return err;
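The mtk_hdmi.c fix relies on the convention that hdmi_vendor_infoframe_pack() returns the number of bytes packed on success and a negative errno on failure, so the old "if (err)" treated every successful pack as a failure. A sketch of the convention with a stand-in helper (pack_frame() is invented for illustration, not a real API):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    /* kernel-style pack helper: returns length on success or -errno */
    static ssize_t pack_frame(char *buf, size_t size)
    {
            const char payload[] = "frame";

            if (size < sizeof(payload))
                    return -22; /* -EINVAL */
            memcpy(buf, payload, sizeof(payload));
            return sizeof(payload);
    }

    int main(void)
    {
            char buf[16];
            ssize_t len = pack_frame(buf, sizeof(buf));

            if (len < 0) {  /* only negative values are errors... */
                    fprintf(stderr, "pack failed: %zd\n", len);
                    return 1;
            }
            printf("packed %zd bytes\n", len); /* ...positive len is success */
            return 0;
    }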
+15-5
drivers/gpu/drm/meson/meson_drv.c
···152152 .max_register = 0x1000,153153};154154155155-static int meson_drv_bind(struct device *dev)155155+static int meson_drv_bind_master(struct device *dev, bool has_components)156156{157157 struct platform_device *pdev = to_platform_device(dev);158158 struct meson_drm *priv;···233233 if (ret)234234 goto free_drm;235235236236- ret = component_bind_all(drm->dev, drm);237237- if (ret) {238238- dev_err(drm->dev, "Couldn't bind all components\n");239239- goto free_drm;236236+ if (has_components) {237237+ ret = component_bind_all(drm->dev, drm);238238+ if (ret) {239239+ dev_err(drm->dev, "Couldn't bind all components\n");240240+ goto free_drm;241241+ }240242 }241243242244 ret = meson_plane_create(priv);···276274 drm_dev_unref(drm);277275278276 return ret;277277+}278278+279279+static int meson_drv_bind(struct device *dev)280280+{281281+ return meson_drv_bind_master(dev, true);279282}280283281284static void meson_drv_unbind(struct device *dev)···363356364357 count += meson_probe_remote(pdev, &match, np, remote);365358 }359359+360360+ if (count && !match)361361+ return meson_drv_bind_master(&pdev->dev, false);366362367363 /* If some endpoints were found, initialize the nodes */368364 if (count) {
+8-1
drivers/gpu/drm/mgag200/mgag200_mode.c
···117311731174117411751175 if (IS_G200_SE(mdev)) {11761176- if (mdev->unique_rev_id >= 0x02) {11761176+ if (mdev->unique_rev_id >= 0x04) {11771177+ WREG8(MGAREG_CRTCEXT_INDEX, 0x06);11781178+ WREG8(MGAREG_CRTCEXT_DATA, 0);11791179+ } else if (mdev->unique_rev_id >= 0x02) {11771180 u8 hi_pri_lvl;11781181 u32 bpp;11791182 u32 mb;···16411638 return MODE_VIRTUAL_Y;16421639 if (mga_vga_calculate_mode_bandwidth(mode, bpp)16431640 > (30100 * 1024))16411641+ return MODE_BANDWIDTH;16421642+ } else {16431643+ if (mga_vga_calculate_mode_bandwidth(mode, bpp)16441644+ > (55000 * 1024))16441645 return MODE_BANDWIDTH;16451646 }16461647 } else if (mdev->type == G200_WB) {
+1
drivers/gpu/drm/msm/Kconfig
···1313 select QCOM_SCM1414 select SND_SOC_HDMI_CODEC if SND_SOC1515 select SYNC_FILE1616+ select PM_OPP1617 default y1718 help1819 DRM/KMS driver for MSM/snapdragon.
···758758 struct msm_gem_object *msm_obj;759759 bool use_vram = false;760760761761+ WARN_ON(!mutex_is_locked(&dev->struct_mutex));762762+761763 switch (flags & MSM_BO_CACHE_MASK) {762764 case MSM_BO_UNCACHED:763765 case MSM_BO_CACHED:···855853856854 size = PAGE_ALIGN(dmabuf->size);857855856856+ /* Take mutex so we can modify the inactive list in msm_gem_new_impl */857857+ mutex_lock(&dev->struct_mutex);858858 ret = msm_gem_new_impl(dev, size, MSM_BO_WC, dmabuf->resv, &obj);859859+ mutex_unlock(&dev->struct_mutex);860860+859861 if (ret)860862 goto fail;861863
···410410 if (!in_fence)411411 return -EINVAL;412412413413- /* TODO if we get an array-fence due to userspace merging multiple414414- * fences, we need a way to determine if all the backing fences415415- * are from our own context..413413+ /*414414+ * Wait if the fence is from a foreign context, or if the fence415415+ * array contains any fence from a foreign context.416416 */417417-418418- if (in_fence->context != gpu->fctx->context) {417417+ if (!dma_fence_match_context(in_fence, gpu->fctx->context)) {419418 ret = dma_fence_wait(in_fence, true);420419 if (ret)421420 return ret;···495496 goto out;496497 }497498498498- if ((submit_cmd.size + submit_cmd.submit_offset) >=499499- msm_obj->base.size) {499499+ if (!submit_cmd.size ||500500+ ((submit_cmd.size + submit_cmd.submit_offset) >501501+ msm_obj->base.size)) {500502 DRM_ERROR("invalid cmdstream size: %u\n", submit_cmd.size);501503 ret = -EINVAL;502504 goto out;
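Besides switching to dma_fence_match_context(), the msm_gem_submit.c hunk tightens the user-supplied cmdstream check: a zero-size command is now rejected, and the offset/size pair must stay inside the backing object. A hedged sketch of that kind of bounds validation, written as a variant where the comparison cannot wrap (userspace illustration, not the driver code):

    #include <stdbool.h>
    #include <stdint.h>

    static bool cmd_in_bounds(uint32_t offset, uint32_t size, uint32_t obj_size)
    {
            if (!size)
                    return false;          /* empty cmdstream is invalid */
            if (offset > obj_size)
                    return false;
            return size <= obj_size - offset; /* no overflow possible */
    }

    int main(void)
    {
            return cmd_in_bounds(16, 32, 4096) ? 0 : 1;
    }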
+2-2
drivers/gpu/drm/msm/msm_gpu.c
···549549 gpu->grp_clks[i] = get_clock(dev, name);550550551551 /* Remember the key clocks that we need to control later */552552- if (!strcmp(name, "core"))552552+ if (!strcmp(name, "core") || !strcmp(name, "core_clk"))553553 gpu->core_clk = gpu->grp_clks[i];554554- else if (!strcmp(name, "rbbmtimer"))554554+ else if (!strcmp(name, "rbbmtimer") || !strcmp(name, "rbbmtimer_clk"))555555 gpu->rbbmtimer_clk = gpu->grp_clks[i];556556557557 ++i;
+42
drivers/gpu/drm/mxsfb/mxsfb_crtc.c
···3535#include "mxsfb_drv.h"3636#include "mxsfb_regs.h"37373838+#define MXS_SET_ADDR 0x43939+#define MXS_CLR_ADDR 0x84040+#define MODULE_CLKGATE BIT(30)4141+#define MODULE_SFTRST BIT(31)4242+/* 1 second delay should be plenty of time for block reset */4343+#define RESET_TIMEOUT 10000004444+3845static u32 set_hsync_pulse_width(struct mxsfb_drm_private *mxsfb, u32 val)3946{4047 return (val & mxsfb->devdata->hs_wdth_mask) <<···166159 clk_disable_unprepare(mxsfb->clk_disp_axi);167160}168161162162+/*163163+ * Clear the bit and poll it cleared. This is usually called with164164+ * a reset address and mask being either SFTRST(bit 31) or CLKGATE165165+ * (bit 30).166166+ */167167+static int clear_poll_bit(void __iomem *addr, u32 mask)168168+{169169+ u32 reg;170170+171171+ writel(mask, addr + MXS_CLR_ADDR);172172+ return readl_poll_timeout(addr, reg, !(reg & mask), 0, RESET_TIMEOUT);173173+}174174+175175+static int mxsfb_reset_block(void __iomem *reset_addr)176176+{177177+ int ret;178178+179179+ ret = clear_poll_bit(reset_addr, MODULE_SFTRST);180180+ if (ret)181181+ return ret;182182+183183+ writel(MODULE_CLKGATE, reset_addr + MXS_CLR_ADDR);184184+185185+ ret = clear_poll_bit(reset_addr, MODULE_SFTRST);186186+ if (ret)187187+ return ret;188188+189189+ return clear_poll_bit(reset_addr, MODULE_CLKGATE);190190+}191191+169192static void mxsfb_crtc_mode_set_nofb(struct mxsfb_drm_private *mxsfb)170193{171194 struct drm_display_mode *m = &mxsfb->pipe.crtc.state->adjusted_mode;···209172 * first stop the controller and drain its FIFOs.210173 */211174 mxsfb_enable_axi_clk(mxsfb);175175+176176+ /* Mandatory eLCDIF reset as per the Reference Manual */177177+ err = mxsfb_reset_block(mxsfb->base);178178+ if (err)179179+ return;212180213181 /* Clear the FIFOs */214182 writel(CTRL1_FIFO_CLEAR, mxsfb->base + LCDC_CTRL1 + REG_SET);
···245245 struct drm_connector_state *conn_state)246246{247247 struct rockchip_crtc_state *s = to_rockchip_crtc_state(crtc_state);248248- struct rockchip_dp_device *dp = to_dp(encoder);249249- int ret;250248251249 /*252250 * The hardware IC designed that VOP must output the RGB10 video···256258257259 s->output_mode = ROCKCHIP_OUT_MODE_AAAA;258260 s->output_type = DRM_MODE_CONNECTOR_eDP;259259- if (dp->data->chip_type == RK3399_EDP) {260260- /*261261- * For RK3399, VOP Lit must code the out mode to RGB888,262262- * VOP Big must code the out mode to RGB10.263263- */264264- ret = drm_of_encoder_active_endpoint_id(dp->dev->of_node,265265- encoder);266266- if (ret > 0)267267- s->output_mode = ROCKCHIP_OUT_MODE_P888;268268- }269261270262 return 0;271263}
+2-7
drivers/gpu/drm/rockchip/cdn-dp-core.c
···615615{616616 struct cdn_dp_device *dp = encoder_to_dp(encoder);617617 int ret, val;618618- struct rockchip_crtc_state *state;619618620619 ret = drm_of_encoder_active_endpoint_id(dp->dev->of_node, encoder);621620 if (ret < 0) {···624625625626 DRM_DEV_DEBUG_KMS(dp->dev, "vop %s output to cdn-dp\n",626627 (ret) ? "LIT" : "BIG");627627- state = to_rockchip_crtc_state(encoder->crtc->state);628628- if (ret) {628628+ if (ret)629629 val = DP_SEL_VOP_LIT | (DP_SEL_VOP_LIT << 16);630630- state->output_mode = ROCKCHIP_OUT_MODE_P888;631631- } else {630630+ else632631 val = DP_SEL_VOP_LIT << 16;633633- state->output_mode = ROCKCHIP_OUT_MODE_AAAA;634634- }635632636633 ret = cdn_dp_grf_write(dp, GRF_SOC_CON9, val);637634 if (ret)
+8
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
···875875static void vop_crtc_enable(struct drm_crtc *crtc)876876{877877 struct vop *vop = to_vop(crtc);878878+ const struct vop_data *vop_data = vop->data;878879 struct rockchip_crtc_state *s = to_rockchip_crtc_state(crtc->state);879880 struct drm_display_mode *adjusted_mode = &crtc->state->adjusted_mode;880881 u16 hsync_len = adjusted_mode->hsync_end - adjusted_mode->hsync_start;···968967 DRM_DEV_ERROR(vop->dev, "unsupported connector_type [%d]\n",969968 s->output_type);970969 }970970+971971+ /*972972+ * If the VOP does not support RGB10 output, force RGB10 down to RGB888.973973+ */974974+ if (s->output_mode == ROCKCHIP_OUT_MODE_AAAA &&975975+ !(vop_data->feature & VOP_FEATURE_OUTPUT_RGB10))976976+ s->output_mode = ROCKCHIP_OUT_MODE_P888;971977 VOP_CTRL_SET(vop, out_mode, s->output_mode);972978973979 VOP_CTRL_SET(vop, htotal_pw, (htotal << 16) | hsync_len);
···5656 * @right: Right side of bounding box.5757 * @top: Top side of bounding box.5858 * @bottom: Bottom side of bounding box.5959+ * @fb_left: Left side of the framebuffer/content bounding box6060+ * @fb_top: Top of the framebuffer/content bounding box5961 * @buf: DMA buffer when DMA-ing between buffer and screen targets.6062 * @sid: Surface ID when copying between surface and screen targets.6163 */···6563 struct vmw_kms_dirty base;6664 SVGA3dTransferType transfer;6765 s32 left, right, top, bottom;6666+ s32 fb_left, fb_top;6867 u32 pitch;6968 union {7069 struct vmw_dma_buffer *buf;···650647 *651648 * @dirty: The closure structure.652649 *653653- * This function calculates the bounding box for all the incoming clips650650+ * This function calculates the bounding box for all the incoming clips.654651 */655652static void vmw_stdu_dmabuf_cpu_clip(struct vmw_kms_dirty *dirty)656653{···659656660657 dirty->num_hits = 1;661658662662- /* Calculate bounding box */659659+ /* Calculate destination bounding box */663660 ddirty->left = min_t(s32, ddirty->left, dirty->unit_x1);664661 ddirty->top = min_t(s32, ddirty->top, dirty->unit_y1);665662 ddirty->right = max_t(s32, ddirty->right, dirty->unit_x2);666663 ddirty->bottom = max_t(s32, ddirty->bottom, dirty->unit_y2);664664+665665+ /*666666+ * Calculate content bounding box. We only need the top-left667667+ * coordinate because width and height will be the same as the668668+ * destination bounding box above669669+ */670670+ ddirty->fb_left = min_t(s32, ddirty->fb_left, dirty->fb_x);671671+ ddirty->fb_top = min_t(s32, ddirty->fb_top, dirty->fb_y);667672}668673669674···708697 /* Assume we are blitting from Host (display_srf) to Guest (dmabuf) */709698 src_pitch = stdu->display_srf->base_size.width * stdu->cpp;710699 src = ttm_kmap_obj_virtual(&stdu->host_map, ¬_used);711711- src += dirty->unit_y1 * src_pitch + dirty->unit_x1 * stdu->cpp;700700+ src += ddirty->top * src_pitch + ddirty->left * stdu->cpp;712701713702 dst_pitch = ddirty->pitch;714703 dst = ttm_kmap_obj_virtual(&stdu->guest_map, ¬_used);715715- dst += dirty->fb_y * dst_pitch + dirty->fb_x * stdu->cpp;704704+ dst += ddirty->fb_top * dst_pitch + ddirty->fb_left * stdu->cpp;716705717706718707 /* Figure out the real direction */···771760 }772761773762out_cleanup:774774- ddirty->left = ddirty->top = S32_MAX;763763+ ddirty->left = ddirty->top = ddirty->fb_left = ddirty->fb_top = S32_MAX;775764 ddirty->right = ddirty->bottom = S32_MIN;776765}777766···823812 SVGA3D_READ_HOST_VRAM;824813 ddirty.left = ddirty.top = S32_MAX;825814 ddirty.right = ddirty.bottom = S32_MIN;815815+ ddirty.fb_left = ddirty.fb_top = S32_MAX;826816 ddirty.pitch = vfb->base.pitches[0];827817 ddirty.buf = buf;828818 ddirty.base.fifo_commit = vmw_stdu_dmabuf_fifo_commit;···13671355 DRM_ERROR("Failed to bind surface to STDU.\n");13681356 else13691357 crtc->primary->fb = plane->state->fb;13581358+13591359+ ret = vmw_stdu_update_st(dev_priv, stdu);13601360+13611361+ if (ret)13621362+ DRM_ERROR("Failed to update STDU.\n");13701363}1371136413721365
+15-8
drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
···12741274 struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;12751275 int ret;12761276 uint32_t size;12771277- uint32_t backup_handle;12771277+ uint32_t backup_handle = 0;1278127812791279 if (req->multisample_count != 0)12801280+ return -EINVAL;12811281+12821282+ if (req->mip_levels > DRM_VMW_MAX_MIP_LEVELS)12801283 return -EINVAL;1281128412821285 if (unlikely(vmw_user_surface_size == 0))···13171314 ret = vmw_user_dmabuf_lookup(tfile, req->buffer_handle,13181315 &res->backup,13191316 &user_srf->backup_base);13201320- if (ret == 0 && res->backup->base.num_pages * PAGE_SIZE <13211321- res->backup_size) {13221322- DRM_ERROR("Surface backup buffer is too small.\n");13231323- vmw_dmabuf_unreference(&res->backup);13241324- ret = -EINVAL;13251325- goto out_unlock;13171317+ if (ret == 0) {13181318+ if (res->backup->base.num_pages * PAGE_SIZE <13191319+ res->backup_size) {13201320+ DRM_ERROR("Surface backup buffer is too small.\n");13211321+ vmw_dmabuf_unreference(&res->backup);13221322+ ret = -EINVAL;13231323+ goto out_unlock;13241324+ } else {13251325+ backup_handle = req->buffer_handle;13261326+ }13261327 }13271328 } else if (req->drm_surface_flags & drm_vmw_surface_flag_create_buffer)13281329 ret = vmw_user_dmabuf_alloc(dev_priv, tfile,···14981491 dev_priv->stdu_max_height);1499149215001493 if (size.width > max_width || size.height > max_height) {15011501- DRM_ERROR("%ux%u\n, exeeds max surface size %ux%u",14941494+ DRM_ERROR("%ux%u\n, exceeds max surface size %ux%u",15021495 size.width, size.height,15031496 max_width, max_height);15041497 return -EINVAL;
+1-1
drivers/gpu/host1x/dev.c
···172172173173 host->rst = devm_reset_control_get(&pdev->dev, "host1x");174174 if (IS_ERR(host->rst)) {175175- err = PTR_ERR(host->clk);175175+ err = PTR_ERR(host->rst);176176 dev_err(&pdev->dev, "failed to get reset: %d\n", err);177177 return err;178178 }
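The host1x one-liner is a classic copy/paste slip: the reset control was tested with IS_ERR() but the errno was extracted from the clock pointer. For reference, a userspace model of the ERR_PTR encoding behind that idiom (the kernel's real version lives in <linux/err.h>):

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_ERRNO 4095

    static inline void *ERR_PTR(long error) { return (void *)error; }
    static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
    static inline int IS_ERR(const void *ptr)
    {
            return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
    }

    int main(void)
    {
            void *rst = ERR_PTR(-2); /* pretend the reset lookup failed */

            if (IS_ERR(rst))
                    /* take the errno from the SAME pointer that was tested */
                    printf("failed to get reset: %ld\n", PTR_ERR(rst));
            return 0;
    }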
+8-7
drivers/gpu/ipu-v3/ipu-common.c
···725725 spin_lock_irqsave(&ipu->lock, flags);726726727727 val = ipu_cm_read(ipu, IPU_CONF);728728- if (vdi) {728728+ if (vdi)729729 val |= IPU_CONF_IC_INPUT;730730- } else {730730+ else731731 val &= ~IPU_CONF_IC_INPUT;732732- if (csi_id == 1)733733- val |= IPU_CONF_CSI_SEL;734734- else735735- val &= ~IPU_CONF_CSI_SEL;736736- }732732+733733+ if (csi_id == 1)734734+ val |= IPU_CONF_CSI_SEL;735735+ else736736+ val &= ~IPU_CONF_CSI_SEL;737737+737738 ipu_cm_write(ipu, val, IPU_CONF);738739739740 spin_unlock_irqrestore(&ipu->lock, flags);
+5-8
drivers/gpu/ipu-v3/ipu-pre.c
···131131 if (pre->in_use)132132 return -EBUSY;133133134134- clk_prepare_enable(pre->clk_axi);135135-136134 /* first get the engine out of reset and remove clock gating */137135 writel(0, pre->regs + IPU_PRE_CTRL);138136···147149148150void ipu_pre_put(struct ipu_pre *pre)149151{150150- u32 val;151151-152152- val = IPU_PRE_CTRL_SFTRST | IPU_PRE_CTRL_CLKGATE;153153- writel(val, pre->regs + IPU_PRE_CTRL);154154-155155- clk_disable_unprepare(pre->clk_axi);152152+ writel(IPU_PRE_CTRL_SFTRST, pre->regs + IPU_PRE_CTRL);156153157154 pre->in_use = false;158155}···242249 if (!pre->buffer_virt)243250 return -ENOMEM;244251252252+ clk_prepare_enable(pre->clk_axi);253253+245254 pre->dev = dev;246255 platform_set_drvdata(pdev, pre);247256 mutex_lock(&ipu_pre_list_mutex);···262267 list_del(&pre->list);263268 available_pres--;264269 mutex_unlock(&ipu_pre_list_mutex);270270+271271+ clk_disable_unprepare(pre->clk_axi);265272266273 if (pre->buffer_virt)267274 gen_pool_free(pre->iram, (unsigned long)pre->buffer_virt,
+4-2
drivers/hid/Kconfig
···275275 - Trio Linker Plus II276276277277config HID_ELECOM278278- tristate "ELECOM BM084 bluetooth mouse"278278+ tristate "ELECOM HID devices"279279 depends on HID280280 ---help---281281- Support for the ELECOM BM084 (bluetooth mouse).281281+ Support for ELECOM devices:282282+ - BM084 Bluetooth Mouse283283+ - DEFT Trackball (Wired and wireless)282284283285config HID_ELO284286 tristate "ELO USB 4000/4500 touchscreen"
···15711571{15721572 unsigned char *data = wacom->data;1573157315741574- if (wacom->pen_input)15741574+ if (wacom->pen_input) {15751575 dev_dbg(wacom->pen_input->dev.parent,15761576 "%s: received report #%d\n", __func__, data[0]);15771577- else if (wacom->touch_input)15771577+15781578+ if (len == WACOM_PKGLEN_PENABLED ||15791579+ data[0] == WACOM_REPORT_PENABLED)15801580+ return wacom_tpc_pen(wacom);15811581+ }15821582+ else if (wacom->touch_input) {15781583 dev_dbg(wacom->touch_input->dev.parent,15791584 "%s: received report #%d\n", __func__, data[0]);1580158515811581- switch (len) {15821582- case WACOM_PKGLEN_TPC1FG:15831583- return wacom_tpc_single_touch(wacom, len);15841584-15851585- case WACOM_PKGLEN_TPC2FG:15861586- return wacom_tpc_mt_touch(wacom);15871587-15881588- case WACOM_PKGLEN_PENABLED:15891589- return wacom_tpc_pen(wacom);15901590-15911591- default:15921592- switch (data[0]) {15931593- case WACOM_REPORT_TPC1FG:15941594- case WACOM_REPORT_TPCHID:15951595- case WACOM_REPORT_TPCST:15961596- case WACOM_REPORT_TPC1FGE:15861586+ switch (len) {15871587+ case WACOM_PKGLEN_TPC1FG:15971588 return wacom_tpc_single_touch(wacom, len);1598158915991599- case WACOM_REPORT_TPCMT:16001600- case WACOM_REPORT_TPCMT2:16011601- return wacom_mt_touch(wacom);15901590+ case WACOM_PKGLEN_TPC2FG:15911591+ return wacom_tpc_mt_touch(wacom);1602159216031603- case WACOM_REPORT_PENABLED:16041604- return wacom_tpc_pen(wacom);15931593+ default:15941594+ switch (data[0]) {15951595+ case WACOM_REPORT_TPC1FG:15961596+ case WACOM_REPORT_TPCHID:15971597+ case WACOM_REPORT_TPCST:15981598+ case WACOM_REPORT_TPC1FGE:15991599+ return wacom_tpc_single_touch(wacom, len);16001600+16011601+ case WACOM_REPORT_TPCMT:16021602+ case WACOM_REPORT_TPCMT2:16031603+ return wacom_mt_touch(wacom);16041604+16051605+ }16051606 }16061607 }16071608
···343343344344config SENSORS_ASPEED345345 tristate "ASPEED AST2400/AST2500 PWM and Fan tach driver"346346+ select REGMAP346347 help347348 This driver provides support for ASPEED AST2400/AST2500 PWM348349 and Fan Tacho controllers.
···4141static const struct inv_mpu6050_reg_map reg_set_6500 = {4242 .sample_rate_div = INV_MPU6050_REG_SAMPLE_RATE_DIV,4343 .lpf = INV_MPU6050_REG_CONFIG,4444+ .accel_lpf = INV_MPU6500_REG_ACCEL_CONFIG_2,4445 .user_ctrl = INV_MPU6050_REG_USER_CTRL,4546 .fifo_en = INV_MPU6050_REG_FIFO_EN,4647 .gyro_config = INV_MPU6050_REG_GYRO_CONFIG,···212211EXPORT_SYMBOL_GPL(inv_mpu6050_set_power_itg);213212214213/**214214+ * inv_mpu6050_set_lpf_regs() - set low pass filter registers, chip dependent215215+ *216216+ * MPU60xx/MPU9150 use only 1 register for accelerometer + gyroscope217217+ * MPU6500 and above have a dedicated register for accelerometer218218+ */219219+static int inv_mpu6050_set_lpf_regs(struct inv_mpu6050_state *st,220220+ enum inv_mpu6050_filter_e val)221221+{222222+ int result;223223+224224+ result = regmap_write(st->map, st->reg->lpf, val);225225+ if (result)226226+ return result;227227+228228+ switch (st->chip_type) {229229+ case INV_MPU6050:230230+ case INV_MPU6000:231231+ case INV_MPU9150:232232+ /* old chips, nothing to do */233233+ result = 0;234234+ break;235235+ default:236236+ /* set accel lpf */237237+ result = regmap_write(st->map, st->reg->accel_lpf, val);238238+ break;239239+ }240240+241241+ return result;242242+}243243+244244+/**215245 * inv_mpu6050_init_config() - Initialize hardware, disable FIFO.216246 *217247 * Initial configuration:···265233 if (result)266234 return result;267235268268- d = INV_MPU6050_FILTER_20HZ;269269- result = regmap_write(st->map, st->reg->lpf, d);236236+ result = inv_mpu6050_set_lpf_regs(st, INV_MPU6050_FILTER_20HZ);270237 if (result)271238 return result;272239···568537 * would be alising. This function basically search for the569538 * correct low pass parameters based on the fifo rate, e.g,570539 * sampling frequency.540540+ *541541+ * lpf is set automatically when setting sampling rate to avoid any aliases.571542 */572543static int inv_mpu6050_set_lpf(struct inv_mpu6050_state *st, int rate)573544{···585552 while ((h < hz[i]) && (i < ARRAY_SIZE(d) - 1))586553 i++;587554 data = d[i];588588- result = regmap_write(st->map, st->reg->lpf, data);555555+ result = inv_mpu6050_set_lpf_regs(st, data);589556 if (result)590557 return result;591558 st->chip_config.lpf = data;
+3
drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
···2828 * struct inv_mpu6050_reg_map - Notable registers.2929 * @sample_rate_div: Divider applied to gyro output rate.3030 * @lpf: Configures internal low pass filter.3131+ * @accel_lpf: Configures accelerometer low pass filter.3132 * @user_ctrl: Enables/resets the FIFO.3233 * @fifo_en: Determines which data will appear in FIFO.3334 * @gyro_config: gyro config register.···4847struct inv_mpu6050_reg_map {4948 u8 sample_rate_div;5049 u8 lpf;5050+ u8 accel_lpf;5151 u8 user_ctrl;5252 u8 fifo_en;5353 u8 gyro_config;···190188#define INV_MPU6050_FIFO_THRESHOLD 500191189192190/* mpu6500 registers */191191+#define INV_MPU6500_REG_ACCEL_CONFIG_2 0x1D193192#define INV_MPU6500_REG_ACCEL_OFFSET 0x77194193195194/* delay time in milliseconds */
···169169int ib_sa_init(void);170170void ib_sa_cleanup(void);171171172172+int ibnl_init(void);173173+void ibnl_cleanup(void);174174+175175+/**176176+ * Check if there are any listeners to the netlink group177177+ * @group: the netlink group ID178178+ * Returns 0 on success or a negative for no listeners.179179+ */180180+int ibnl_chk_listeners(unsigned int group);181181+172182int ib_nl_handle_resolve_resp(struct sk_buff *skb,173183 struct netlink_callback *cb);174184int ib_nl_handle_set_timeout(struct sk_buff *skb,
···63126312 }63136313}6314631463156315-static void write_global_credit(struct hfi1_devdata *dd,63166316- u8 vau, u16 total, u16 shared)63156315+/*63166316+ * Set up allocation unit value.63176317+ */63186318+void set_up_vau(struct hfi1_devdata *dd, u8 vau)63176319{63186318- write_csr(dd, SEND_CM_GLOBAL_CREDIT,63196319- ((u64)total <<63206320- SEND_CM_GLOBAL_CREDIT_TOTAL_CREDIT_LIMIT_SHIFT) |63216321- ((u64)shared <<63226322- SEND_CM_GLOBAL_CREDIT_SHARED_LIMIT_SHIFT) |63236323- ((u64)vau << SEND_CM_GLOBAL_CREDIT_AU_SHIFT));63206320+ u64 reg = read_csr(dd, SEND_CM_GLOBAL_CREDIT);63216321+63226322+ /* do not modify other values in the register */63236323+ reg &= ~SEND_CM_GLOBAL_CREDIT_AU_SMASK;63246324+ reg |= (u64)vau << SEND_CM_GLOBAL_CREDIT_AU_SHIFT;63256325+ write_csr(dd, SEND_CM_GLOBAL_CREDIT, reg);63246326}6325632763266328/*63276329 * Set up initial VL15 credits of the remote. Assumes the rest of63286328- * the CM credit registers are zero from a previous global or credit reset .63306330+ * the CM credit registers are zero from a previous global or credit reset.63316331+ * Shared limit for VL15 will always be 0.63296332 */63306330-void set_up_vl15(struct hfi1_devdata *dd, u8 vau, u16 vl15buf)63336333+void set_up_vl15(struct hfi1_devdata *dd, u16 vl15buf)63316334{63326332- /* leave shared count at zero for both global and VL15 */63336333- write_global_credit(dd, vau, vl15buf, 0);63356335+ u64 reg = read_csr(dd, SEND_CM_GLOBAL_CREDIT);63366336+63376337+ /* set initial values for total and shared credit limit */63386338+ reg &= ~(SEND_CM_GLOBAL_CREDIT_TOTAL_CREDIT_LIMIT_SMASK |63396339+ SEND_CM_GLOBAL_CREDIT_SHARED_LIMIT_SMASK);63406340+63416341+ /*63426342+ * Set total limit to be equal to VL15 credits.63436343+ * Leave shared limit at 0.63446344+ */63456345+ reg |= (u64)vl15buf << SEND_CM_GLOBAL_CREDIT_TOTAL_CREDIT_LIMIT_SHIFT;63466346+ write_csr(dd, SEND_CM_GLOBAL_CREDIT, reg);6334634763356348 write_csr(dd, SEND_CM_CREDIT_VL15, (u64)vl15buf63366349 << SEND_CM_CREDIT_VL15_DEDICATED_LIMIT_VL_SHIFT);···63616348 for (i = 0; i < TXE_NUM_DATA_VL; i++)63626349 write_csr(dd, SEND_CM_CREDIT_VL + (8 * i), 0);63636350 write_csr(dd, SEND_CM_CREDIT_VL15, 0);63646364- write_global_credit(dd, 0, 0, 0);63516351+ write_csr(dd, SEND_CM_GLOBAL_CREDIT, 0);63656352 /* reset the CM block */63666353 pio_send_control(dd, PSC_CM_RESET);63546354+ /* reset cached value */63556355+ dd->vl15buf_cached = 0;63676356}6368635763696358/* convert a vCU to a CU */···68546839{68556840 struct hfi1_pportdata *ppd = container_of(work, struct hfi1_pportdata,68566841 link_up_work);68426842+ struct hfi1_devdata *dd = ppd->dd;68436843+68576844 set_link_state(ppd, HLS_UP_INIT);6858684568596846 /* cache the read of DC_LCB_STS_ROUND_TRIP_LTP_CNT */68606860- read_ltp_rtt(ppd->dd);68476847+ read_ltp_rtt(dd);68616848 /*68626849 * OPA specifies that certain counters are cleared on a transition68636850 * to link up, so do that.68646851 */68656865- clear_linkup_counters(ppd->dd);68526852+ clear_linkup_counters(dd);68666853 /*68676854 * And (re)set link up default values.68686855 */68696856 set_linkup_defaults(ppd);6870685768586858+ /*68596859+ * Set VL15 credits. Use cached value from verify cap interrupt.68606860+ * In case of quick linkup or simulator, vl15 value will be set by68616861+ * handle_linkup_change. 
VerifyCap interrupt handler will not be68626862+ * called in those scenarios.68636863+ */68646864+ if (!(quick_linkup || dd->icode == ICODE_FUNCTIONAL_SIMULATOR))68656865+ set_up_vl15(dd, dd->vl15buf_cached);68666866+68716867 /* enforce link speed enabled */68726868 if ((ppd->link_speed_active & ppd->link_speed_enabled) == 0) {68736869 /* oops - current speed is not enabled, bounce */68746874- dd_dev_err(ppd->dd,68706870+ dd_dev_err(dd,68756871 "Link speed active 0x%x is outside enabled 0x%x, downing link\n",68766872 ppd->link_speed_active, ppd->link_speed_enabled);68776873 set_link_down_reason(ppd, OPA_LINKDOWN_REASON_SPEED_POLICY, 0,···73837357 */73847358 if (vau == 0)73857359 vau = 1;73867386- set_up_vl15(dd, vau, vl15buf);73607360+ set_up_vau(dd, vau);73617361+73627362+ /*73637363+ * Set VL15 credits to 0 in global credit register. Cache remote VL1573647364+ * credits value and wait for link-up interrupt to set it.73657365+ */73667366+ set_up_vl15(dd, 0);73677367+ dd->vl15buf_cached = vl15buf;7387736873887369 /* set up the LCB CRC mode */73897370 crc_mask = ppd->port_crc_mode_enabled & partner_supported_crc;
···10451045 /* initial vl15 credits to use */10461046 u16 vl15_init;1047104710481048+ /*10491049+ * Cached value for vl15buf, read during verify cap interrupt. VL1510501050+ * credits are to be kept at 0 and set when handling the link-up10511051+ * interrupt. This removes the possibility of receiving VL15 MAD10521052+ * packets before this HFI is ready.10531053+ */10541054+ u16 vl15buf_cached;10551055+10481056 /* Misc small ints */10491057 u8 n_krcv_queues;10501058 u8 qos_shift;···16061598int fm_get_table(struct hfi1_pportdata *ppd, int which, void *t);16071599int fm_set_table(struct hfi1_pportdata *ppd, int which, void *t);1608160016091609-void set_up_vl15(struct hfi1_devdata *dd, u8 vau, u16 vl15buf);16011601+void set_up_vau(struct hfi1_devdata *dd, u8 vau);16021602+void set_up_vl15(struct hfi1_devdata *dd, u16 vl15buf);16101603void reset_link_credits(struct hfi1_devdata *dd);16111604void assign_remote_cm_au_table(struct hfi1_devdata *dd, u8 vcu);16121605
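Taken together, the hfi1 chip.c and hfi1.h hunks above stop set_up_vau()/set_up_vl15() from clobbering unrelated fields of SEND_CM_GLOBAL_CREDIT by switching to per-field read-modify-write updates. The generic shape of that pattern, assuming an invented field layout (FIELD_SHIFT/FIELD_MASK are not the real CSR definitions):

    #include <stdint.h>
    #include <stdio.h>

    #define FIELD_SHIFT 16
    #define FIELD_MASK  (0xffffULL << FIELD_SHIFT)

    static uint64_t csr_set_field(uint64_t reg, uint64_t val)
    {
            reg &= ~FIELD_MASK;                       /* clear only this field */
            reg |= (val << FIELD_SHIFT) & FIELD_MASK; /* install the new value */
            return reg;                               /* other fields untouched */
    }

    int main(void)
    {
            uint64_t reg = 0xdeadbeef00c0ffeeULL;

            reg = csr_set_field(reg, 0x1234);
            printf("%#llx\n", (unsigned long long)reg);
            return 0;
    }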
+2-1
drivers/infiniband/hw/hfi1/intr.c
···130130 * the remote values. Both sides must be using the values.131131 */132132 if (quick_linkup || dd->icode == ICODE_FUNCTIONAL_SIMULATOR) {133133- set_up_vl15(dd, dd->vau, dd->vl15_init);133133+ set_up_vau(dd, dd->vau);134134+ set_up_vl15(dd, dd->vl15_init);134135 assign_remote_cm_au_table(dd, dd->vcu);135136 }136137
+2-2
drivers/infiniband/hw/hfi1/pcie.c
···207207 /*208208 * Save BARs and command to rewrite after device reset.209209 */210210- dd->pcibar0 = addr;211211- dd->pcibar1 = addr >> 32;210210+ pci_read_config_dword(dd->pcidev, PCI_BASE_ADDRESS_0, &dd->pcibar0);211211+ pci_read_config_dword(dd->pcidev, PCI_BASE_ADDRESS_1, &dd->pcibar1);212212 pci_read_config_dword(dd->pcidev, PCI_ROM_ADDRESS, &dd->pci_rom);213213 pci_read_config_word(dd->pcidev, PCI_COMMAND, &dd->pci_command);214214 pcie_capability_read_word(dd->pcidev, PCI_EXP_DEVCTL, &dd->pcie_devctl);
+4-1
drivers/infiniband/hw/hfi1/rc.c
···21592159 ret = hfi1_rvt_get_rwqe(qp, 1);21602160 if (ret < 0)21612161 goto nack_op_err;21622162- if (!ret)21622162+ if (!ret) {21632163+ /* peer will send again */21642164+ rvt_put_ss(&qp->r_sge);21632165 goto rnr_nak;21662166+ }21642167 wc.ex.imm_data = ohdr->u.rc.imm_data;21652168 wc.wc_flags = IB_WC_WITH_IMM;21662169 goto send_last;
···644644static int pxa27x_keypad_open(struct input_dev *dev)645645{646646 struct pxa27x_keypad *keypad = input_get_drvdata(dev);647647-647647+ int ret;648648 /* Enable unit clock */649649- clk_prepare_enable(keypad->clk);649649+ ret = clk_prepare_enable(keypad->clk);650650+ if (ret)651651+ return ret;652652+650653 pxa27x_keypad_config(keypad);651654652655 return 0;···686683 struct platform_device *pdev = to_platform_device(dev);687684 struct pxa27x_keypad *keypad = platform_get_drvdata(pdev);688685 struct input_dev *input_dev = keypad->input_dev;686686+ int ret = 0;689687690688 /*691689 * If the keypad is used as wake up source, the clock is not turned···699695700696 if (input_dev->users) {701697 /* Enable unit clock */702702- clk_prepare_enable(keypad->clk);703703- pxa27x_keypad_config(keypad);698698+ ret = clk_prepare_enable(keypad->clk);699699+ if (!ret)700700+ pxa27x_keypad_config(keypad);704701 }705702706703 mutex_unlock(&input_dev->mutex);707704 }708705709709- return 0;706706+ return ret;710707}711708#endif712709
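The pxa27x_keypad hunk treats clk_prepare_enable() as fallible in both open() and resume(), touching the hardware only once the clock is actually on. The same shape reduced to a userspace sketch (enable_clock()/configure_hw() are placeholders, not kernel APIs):

    #include <stdio.h>

    static int enable_clock(void)  { return 0; }  /* 0 on success, -errno */
    static void configure_hw(void) { puts("configured"); }

    static int device_open(void)
    {
            int ret = enable_clock();

            if (ret)
                    return ret;  /* don't touch the hardware unclocked */

            configure_hw();
            return 0;
    }

    int main(void)
    {
            return device_open();
    }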
···370370{371371 unsigned int debounce_cnt;372372 u32 val = 0;373373+ int ret;373374374374- clk_prepare_enable(kbc->clk);375375+ ret = clk_prepare_enable(kbc->clk);376376+ if (ret)377377+ return ret;375378376379 /* Reset the KBC controller to clear all previous status.*/377380 reset_control_assert(kbc->rst);
···581581 To compile this driver as a module, choose M here: the module will be582582 called pwm-beeper.583583584584+config INPUT_RK805_PWRKEY585585+ tristate "Rockchip RK805 PMIC power key support"586586+ depends on MFD_RK808587587+ help588588+ Select this option to enable power key driver for RK805.589589+590590+ If unsure, say N.591591+592592+ To compile this driver as a module, choose M here: the module will be593593+ called rk805_pwrkey.594594+584595config INPUT_GPIO_ROTARY_ENCODER585596 tristate "Rotary encoders connected to GPIO pins"586597 depends on GPIOLIB || COMPILE_TEST
···8585};86868787/* table of devices that work with this driver */8888-static struct usb_device_id keyspan_table[] = {8888+static const struct usb_device_id keyspan_table[] = {8989 { USB_DEVICE(USB_KEYSPAN_VENDOR_ID, USB_KEYSPAN_PRODUCT_UIA11) },9090 { } /* Terminating entry */9191};
+11-6
drivers/input/misc/pcspkr.c
···1818#include <linux/input.h>1919#include <linux/platform_device.h>2020#include <linux/timex.h>2121-#include <asm/io.h>2121+#include <linux/io.h>22222323MODULE_AUTHOR("Vojtech Pavlik <vojtech@ucw.cz>");2424MODULE_DESCRIPTION("PC Speaker beeper driver");2525MODULE_LICENSE("GPL");2626MODULE_ALIAS("platform:pcspkr");27272828-static int pcspkr_event(struct input_dev *dev, unsigned int type, unsigned int code, int value)2828+static int pcspkr_event(struct input_dev *dev, unsigned int type,2929+ unsigned int code, int value)2930{3031 unsigned int count = 0;3132 unsigned long flags;32333334 if (type != EV_SND)3434- return -1;3535+ return -EINVAL;35363637 switch (code) {3737- case SND_BELL: if (value) value = 1000;3838- case SND_TONE: break;3939- default: return -1;3838+ case SND_BELL:3939+ if (value)4040+ value = 1000;4141+ case SND_TONE:4242+ break;4343+ default:4444+ return -EINVAL;4045 }41464247 if (value > 20 && value < 32767)
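One thing the restyled pcspkr switch preserves is the deliberate fall-through from SND_BELL into SND_TONE: a bell is handled as a fixed 1000 Hz tone. A simplified sketch with the fall-through spelled out in a comment, which is how later kernels annotate such cases (illustrative, not the driver code):

    #include <stdio.h>

    enum { SND_BELL, SND_TONE };

    static int sound_event(int code, int value)
    {
            switch (code) {
            case SND_BELL:
                    if (value)
                            value = 1000;   /* bell == fixed 1 kHz tone */
                    /* fall through */
            case SND_TONE:
                    break;
            default:
                    return -1;
            }
            printf("beep at %d Hz\n", value);
            return 0;
    }

    int main(void)
    {
            return sound_event(SND_BELL, 1);
    }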
···125125 * According to Info.plist Geyser IV is the same as Geyser III.)126126 */127127128128-static struct usb_device_id atp_table[] = {128128+static const struct usb_device_id atp_table[] = {129129 /* PowerBooks Feb 2005, iBooks G4 */130130 ATP_DEVICE(0x020e, fountain_info), /* FOUNTAIN ANSI */131131 ATP_DEVICE(0x020f, fountain_info), /* FOUNTAIN ISO */
···292292 To compile this driver as a module, choose M here: the293293 module will be called sun4i-ps2.294294295295+config SERIO_GPIO_PS2296296+ tristate "GPIO PS/2 bit banging driver"297297+ depends on GPIOLIB298298+ help299299+ Say Y here if you want PS/2 bit banging support via GPIO.300300+301301+ To compile this driver as a module, choose M here: the302302+ module will be called ps2-gpio.303303+304304+ If you are unsure, say N.305305+295306config USERIO296307 tristate "User space serio port driver support"297308 help
···4545#define XPS2_STATUS_RX_FULL 0x00000001 /* Receive Full */4646#define XPS2_STATUS_TX_FULL 0x00000002 /* Transmit Full */47474848-/* Bit definitions for ISR/IER registers. Both the registers have the same bit4949- * definitions and are only defined once. */4848+/*4949+ * Bit definitions for ISR/IER registers. Both the registers have the same bit5050+ * definitions and are only defined once.5151+ */5052#define XPS2_IPIXR_WDT_TOUT 0x00000001 /* Watchdog Timeout Interrupt */5153#define XPS2_IPIXR_TX_NOACK 0x00000002 /* Transmit No ACK Interrupt */5254#define XPS2_IPIXR_TX_ACK 0x00000004 /* Transmit ACK (Data) Interrupt */···294292 /* Disable all the interrupts, just in case */295293 out_be32(drvdata->base_address + XPS2_IPIER_OFFSET, 0);296294297297- /* Reset the PS2 device and abort any current transaction, to make sure298298- * we have the PS2 in a good state */295295+ /*296296+ * Reset the PS2 device and abort any current transaction,297297+ * to make sure we have the PS2 in a good state.298298+ */299299 out_be32(drvdata->base_address + XPS2_SRST_OFFSET, XPS2_SRST_RESET);300300301301 dev_info(dev, "Xilinx PS2 at 0x%08llX mapped to 0x%p, irq=%d\n",
···783783 for (i = 0; i < commit_sections; i++)784784 rw_section_mac(ic, commit_start + i, true);785785 }786786- rw_journal(ic, REQ_OP_WRITE, REQ_FUA, commit_start, commit_sections, &io_comp);786786+ rw_journal(ic, REQ_OP_WRITE, REQ_FUA | REQ_SYNC, commit_start,787787+ commit_sections, &io_comp);787788 } else {788789 unsigned to_end;789790 io_comp.in_flight = (atomic_t)ATOMIC_INIT(2);···11051104static void submit_flush_bio(struct dm_integrity_c *ic, struct dm_integrity_io *dio)11061105{11071106 struct bio *bio;11081108- spin_lock_irq(&ic->endio_wait.lock);11071107+ unsigned long flags;11081108+11091109+ spin_lock_irqsave(&ic->endio_wait.lock, flags);11091110 bio = dm_bio_from_per_bio_data(dio, sizeof(struct dm_integrity_io));11101111 bio_list_add(&ic->flush_bio_list, bio);11111111- spin_unlock_irq(&ic->endio_wait.lock);11121112+ spin_unlock_irqrestore(&ic->endio_wait.lock, flags);11131113+11121114 queue_work(ic->commit_wq, &ic->commit_work);11131115}11141116···23782374 blk_queue_max_integrity_segments(disk->queue, UINT_MAX);23792375}2380237623812381-/* FIXME: use new kvmalloc */23822382-static void *dm_integrity_kvmalloc(size_t size, gfp_t gfp)23832383-{23842384- void *ptr = NULL;23852385-23862386- if (size <= PAGE_SIZE)23872387- ptr = kmalloc(size, GFP_KERNEL | gfp);23882388- if (!ptr && size <= KMALLOC_MAX_SIZE)23892389- ptr = kmalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY | gfp);23902390- if (!ptr)23912391- ptr = __vmalloc(size, GFP_KERNEL | gfp, PAGE_KERNEL);23922392-23932393- return ptr;23942394-}23952395-23962377static void dm_integrity_free_page_list(struct dm_integrity_c *ic, struct page_list *pl)23972378{23982379 unsigned i;···23962407 struct page_list *pl;23972408 unsigned i;2398240923992399- pl = dm_integrity_kvmalloc(page_list_desc_size, __GFP_ZERO);24102410+ pl = kvmalloc(page_list_desc_size, GFP_KERNEL | __GFP_ZERO);24002411 if (!pl)24012412 return NULL;24022413···24262437 struct scatterlist **sl;24272438 unsigned i;2428243924292429- sl = dm_integrity_kvmalloc(ic->journal_sections * sizeof(struct scatterlist *), __GFP_ZERO);24402440+ sl = kvmalloc(ic->journal_sections * sizeof(struct scatterlist *), GFP_KERNEL | __GFP_ZERO);24302441 if (!sl)24312442 return NULL;24322443···2442245324432454 n_pages = (end_index - start_index + 1);2444245524452445- s = dm_integrity_kvmalloc(n_pages * sizeof(struct scatterlist), 0);24562456+ s = kvmalloc(n_pages * sizeof(struct scatterlist), GFP_KERNEL);24462457 if (!s) {24472458 dm_integrity_free_journal_scatterlist(ic, sl);24482459 return NULL;···26062617 goto bad;26072618 }2608261926092609- sg = dm_integrity_kvmalloc((ic->journal_pages + 1) * sizeof(struct scatterlist), 0);26202620+ sg = kvmalloc((ic->journal_pages + 1) * sizeof(struct scatterlist), GFP_KERNEL);26102621 if (!sg) {26112622 *error = "Unable to allocate sg list";26122623 r = -ENOMEM;···26622673 r = -ENOMEM;26632674 goto bad;26642675 }26652665- ic->sk_requests = dm_integrity_kvmalloc(ic->journal_sections * sizeof(struct skcipher_request *), __GFP_ZERO);26762676+ ic->sk_requests = kvmalloc(ic->journal_sections * sizeof(struct skcipher_request *), GFP_KERNEL | __GFP_ZERO);26662677 if (!ic->sk_requests) {26672678 *error = "Unable to allocate sk requests";26682679 r = -ENOMEM;···27292740 r = -ENOMEM;27302741 goto bad;27312742 }27322732- ic->journal_tree = dm_integrity_kvmalloc(journal_tree_size, 0);27432743+ ic->journal_tree = kvmalloc(journal_tree_size, GFP_KERNEL);27332744 if (!ic->journal_tree) {27342745 *error = "Could not allocate memory for journal tree";27352746 r = 
-ENOMEM;···30413052 r = calculate_device_limits(ic);30423053 if (r) {30433054 ti->error = "The device is too small";30553055+ goto bad;30563056+ }30573057+ if (ti->len > ic->provided_data_sectors) {30583058+ r = -EINVAL;30593059+ ti->error = "Not enough provided sectors for requested mapping size";30443060 goto bad;30453061 }30463062
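The dm-integrity conversion drops the open-coded dm_integrity_kvmalloc() in favour of the generic kvmalloc(), whose policy is: try the cheap physically-contiguous allocator first, then fall back to a virtually-contiguous allocation for sizes it cannot serve. A userspace caricature of that policy (malloc stands in for both allocators, and the threshold is invented):

    #include <stdlib.h>

    #define SMALL_LIMIT (128 * 1024)   /* pretend "kmalloc" ceiling */

    static void *kvmalloc_like(size_t size)
    {
            if (size <= SMALL_LIMIT) {
                    void *p = malloc(size);  /* "kmalloc": fast, contiguous */
                    if (p)
                            return p;
            }
            return malloc(size);             /* "vmalloc": fallback path */
    }

    int main(void)
    {
            void *p = kvmalloc_like(1 << 20);

            free(p);
            return p ? 0 : 1;
    }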
+2-2
drivers/md/dm-io.c
···317317 else if (op == REQ_OP_WRITE_SAME)318318 special_cmd_max_sectors = q->limits.max_write_same_sectors;319319 if ((op == REQ_OP_DISCARD || op == REQ_OP_WRITE_ZEROES ||320320- op == REQ_OP_WRITE_SAME) &&321321- special_cmd_max_sectors == 0) {320320+ op == REQ_OP_WRITE_SAME) && special_cmd_max_sectors == 0) {321321+ atomic_inc(&io->count);322322 dec_count(io, region, -EOPNOTSUPP);323323 return;324324 }
+3-2
drivers/md/dm-ioctl.c
···17101710 }1711171117121712 /*17131713- * Try to avoid low memory issues when a device is suspended.17131713+ * Use __GFP_HIGH to avoid low memory issues when a device is17141714+ * suspended and the ioctl is needed to resume it.17141715 * Use kmalloc() rather than vmalloc() when we can.17151716 */17161717 dmi = NULL;17171718 noio_flag = memalloc_noio_save();17181718- dmi = kvmalloc(param_kernel->data_size, GFP_KERNEL);17191719+ dmi = kvmalloc(param_kernel->data_size, GFP_KERNEL | __GFP_HIGH);17191720 memalloc_noio_restore(noio_flag);1720172117211722 if (!dmi) {
+14-3
drivers/md/dm-raid.c
···19271927 /********************************************************************19281928 * BELOW FOLLOW V1.9.0 EXTENSIONS TO THE PRISTINE SUPERBLOCK FORMAT!!!19291929 *19301930- * FEATURE_FLAG_SUPPORTS_V190 in the features member indicates that those exist19301930+ * FEATURE_FLAG_SUPPORTS_V190 in the compat_features member indicates that those exist19311931 */1932193219331933 __le32 flags; /* Flags defining array states for reshaping */···20922092 sb->layout = cpu_to_le32(mddev->layout);20932093 sb->stripe_sectors = cpu_to_le32(mddev->chunk_sectors);2094209420952095+ /********************************************************************20962096+ * BELOW FOLLOW V1.9.0 EXTENSIONS TO THE PRISTINE SUPERBLOCK FORMAT!!!20972097+ *20982098+ * FEATURE_FLAG_SUPPORTS_V190 in the compat_features member indicates that those exist20992099+ */20952100 sb->new_level = cpu_to_le32(mddev->new_level);20962101 sb->new_layout = cpu_to_le32(mddev->new_layout);20972102 sb->new_stripe_sectors = cpu_to_le32(mddev->new_chunk_sectors);···24432438 mddev->bitmap_info.default_offset = mddev->bitmap_info.offset;2444243924452440 if (!test_and_clear_bit(FirstUse, &rdev->flags)) {24462446- /* Retrieve device size stored in superblock to be prepared for shrink */24472447- rdev->sectors = le64_to_cpu(sb->sectors);24412441+ /*24422442+ * Retrieve rdev size stored in superblock to be prepared for shrink.24432443+ * Check extended superblock members are present otherwise the size24442444+ * will not be set!24452445+ */24462446+ if (le32_to_cpu(sb->compat_features) & FEATURE_FLAG_SUPPORTS_V190)24472447+ rdev->sectors = le64_to_cpu(sb->sectors);24482448+24482449 rdev->recovery_offset = le64_to_cpu(sb->disk_recovery_offset);24492450 if (rdev->recovery_offset == MaxSector)24502451 set_bit(In_sync, &rdev->flags);
+20-3
drivers/md/dm-raid1.c
···145145146146struct dm_raid1_bio_record {147147 struct mirror *m;148148+ /* if details->bi_bdev == NULL, details were not saved */148149 struct dm_bio_details details;149150 region_t write_region;150151};···261260 struct mirror *m;262261 struct dm_io_request io_req = {263262 .bi_op = REQ_OP_WRITE,264264- .bi_op_flags = REQ_PREFLUSH,263263+ .bi_op_flags = REQ_PREFLUSH | REQ_SYNC,265264 .mem.type = DM_IO_KMEM,266265 .mem.ptr.addr = NULL,267266 .client = ms->io_client,···11991198 struct dm_raid1_bio_record *bio_record =12001199 dm_per_bio_data(bio, sizeof(struct dm_raid1_bio_record));1201120012011201+ bio_record->details.bi_bdev = NULL;12021202+12021203 if (rw == WRITE) {12031204 /* Save region for mirror_end_io() handler */12041205 bio_record->write_region = dm_rh_bio_to_region(ms->rh, bio);···12591256 }1260125712611258 if (error == -EOPNOTSUPP)12621262- return error;12591259+ goto out;1263126012641261 if ((error == -EWOULDBLOCK) && (bio->bi_opf & REQ_RAHEAD))12651265- return error;12621262+ goto out;1266126312671264 if (unlikely(error)) {12651265+ if (!bio_record->details.bi_bdev) {12661266+ /*12671267+ * There wasn't enough memory to record necessary12681268+ * information for a retry or there was no other12691269+ * mirror in-sync.12701270+ */12711271+ DMERR_LIMIT("Mirror read failed.");12721272+ return -EIO;12731273+ }12741274+12681275 m = bio_record->m;1269127612701277 DMERR("Mirror read failed from %s. Trying alternative device.",···12901277 bd = &bio_record->details;1291127812921279 dm_bio_restore(bd, bio);12801280+ bio_record->details.bi_bdev = NULL;12931281 bio->bi_error = 0;1294128212951283 queue_bio(ms, bio, rw);···12981284 }12991285 DMERR("All replicated volumes dead, failing I/O");13001286 }12871287+12881288+out:12891289+ bio_record->details.bi_bdev = NULL;1301129013021291 return error;13031292}
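The dm-raid1 fix uses a NULL details->bi_bdev as a sentinel for "no details were recorded, a retry is impossible", and is careful to reset it on every exit path. The pattern in miniature (the record/end_io names here are illustrative, not the dm code):

    #include <stddef.h>
    #include <stdio.h>

    struct bio_record {
            void *saved_dev;    /* NULL => no saved state, cannot retry */
    };

    static int end_io(struct bio_record *rec, int error)
    {
            if (!error)
                    goto out;

            if (!rec->saved_dev) {
                    fprintf(stderr, "read failed, no retry state\n");
                    return -5;  /* -EIO: fail hard */
            }
            /* ...otherwise retry from the recorded details... */
    out:
            rec->saved_dev = NULL;  /* always reset the sentinel */
            return error;
    }

    int main(void)
    {
            struct bio_record rec = { .saved_dev = NULL };

            return end_io(&rec, 0);
    }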
+2-1
drivers/md/dm-snap-persistent.c
···741741 /*742742 * Commit exceptions to disk.743743 */744744- if (ps->valid && area_io(ps, REQ_OP_WRITE, REQ_PREFLUSH | REQ_FUA))744744+ if (ps->valid && area_io(ps, REQ_OP_WRITE,745745+ REQ_PREFLUSH | REQ_FUA | REQ_SYNC))745746 ps->valid = 0;746747747748 /*
+13-13
drivers/md/dm-thin.c
···10941094 return;10951095 }1096109610971097+ /*10981098+ * Increment the unmapped blocks. This prevents a race between the10991099+ * passdown io and reallocation of freed blocks.11001100+ */11011101+ r = dm_pool_inc_data_range(pool->pmd, m->data_block, data_end);11021102+ if (r) {11031103+ metadata_operation_failed(pool, "dm_pool_inc_data_range", r);11041104+ bio_io_error(m->bio);11051105+ cell_defer_no_holder(tc, m->cell);11061106+ mempool_free(m, pool->mapping_pool);11071107+ return;11081108+ }11091109+10971110 discard_parent = bio_alloc(GFP_NOIO, 1);10981111 if (!discard_parent) {10991112 DMWARN("%s: unable to allocate top level discard bio for passdown. Skipping passdown.",···11261113 r = issue_discard(&op, m->data_block, data_end);11271114 end_discard(&op, r);11281115 }11291129- }11301130-11311131- /*11321132- * Increment the unmapped blocks. This prevents a race between the11331133- * passdown io and reallocation of freed blocks.11341134- */11351135- r = dm_pool_inc_data_range(pool->pmd, m->data_block, data_end);11361136- if (r) {11371137- metadata_operation_failed(pool, "dm_pool_inc_data_range", r);11381138- bio_io_error(m->bio);11391139- cell_defer_no_holder(tc, m->cell);11401140- mempool_free(m, pool->mapping_pool);11411141- return;11421116 }11431117}11441118
+2-2
drivers/md/dm-verity-target.c
···166166 return r;167167 }168168169169- if (likely(v->version >= 1))169169+ if (likely(v->salt_size && (v->version >= 1)))170170 r = verity_hash_update(v, req, v->salt, v->salt_size, res);171171172172 return r;···177177{178178 int r;179179180180- if (unlikely(!v->version)) {180180+ if (unlikely(v->salt_size && (!v->version))) {181181 r = verity_hash_update(v, req, v->salt, v->salt_size, res);182182183183 if (r < 0) {
···30633063 mdname(mddev));30643064 return -EIO;30653065 }30663066+ if (mddev_init_writes_pending(mddev) < 0)30673067+ return -ENOMEM;30663068 /*30673069 * copy the already verified devices into our private RAID130683070 * bookkeeping area. [whatever we allocate in run(),
+3
drivers/md/raid10.c
···36113611 int first = 1;36123612 bool discard_supported = false;3613361336143614+ if (mddev_init_writes_pending(mddev) < 0)36153615+ return -ENOMEM;36163616+36143617 if (mddev->private == NULL) {36153618 conf = setup_conf(mddev);36163619 if (IS_ERR(conf))
···4455media-objs := media-device.o media-devnode.o media-entity.o6677-obj-$(CONFIG_CEC_CORE) += cec/88-97#108# I2C drivers should come before other drivers, otherwise they'll fail119# when compiled as builtin drivers···23252426# There are both core and drivers at RC subtree - merge before drivers2527obj-y += rc/2828+2929+obj-$(CONFIG_CEC_CORE) += cec/26302731#2832# Finally, merge the drivers that require the core
+1-14
drivers/media/cec/Kconfig
···11-config CEC_CORE22- tristate33- depends on MEDIA_CEC_SUPPORT44- default y55-66-config MEDIA_CEC_NOTIFIER77- bool88-91config MEDIA_CEC_RC102 bool "HDMI CEC RC integration"113 depends on CEC_CORE && RC_CORE44+ depends on CEC_CORE=m || RC_CORE=y125 ---help---136 Pass on CEC remote control messages to the RC framework.1414-1515-config MEDIA_CEC_DEBUG1616- bool "HDMI CEC debugfs interface"1717- depends on CEC_CORE && DEBUG_FS1818- ---help---1919- Turns on the DebugFS interface for CEC devices.
···18641864 WARN_ON(call_op(adap, adap_monitor_all_enable, 0));18651865}1866186618671867-#ifdef CONFIG_MEDIA_CEC_DEBUG18671867+#ifdef CONFIG_DEBUG_FS18681868/*18691869 * Log the current state of the CEC adapter.18701870 * Very useful for debugging.
+1-7
drivers/media/cec/cec-api.c
···271271 bool block, struct cec_msg __user *parg)272272{273273 struct cec_msg msg = {};274274- long err = 0;274274+ long err;275275276276 if (copy_from_user(&msg, parg, sizeof(msg)))277277 return -EFAULT;278278- mutex_lock(&adap->lock);279279- if (!adap->is_configured && fh->mode_follower < CEC_MODE_MONITOR)280280- err = -ENONET;281281- mutex_unlock(&adap->lock);282282- if (err)283283- return err;284278285279 err = cec_receive_msg(fh, &msg, block);286280 if (err)
···220220221221config VIDEO_ADV7604_CEC222222 bool "Enable Analog Devices ADV7604 CEC support"223223- depends on VIDEO_ADV7604 && CEC_CORE223223+ depends on VIDEO_ADV7604224224+ select CEC_CORE224225 ---help---225226 When selected the adv7604 will support the optional226227 HDMI CEC feature.···241240242241config VIDEO_ADV7842_CEC243242 bool "Enable Analog Devices ADV7842 CEC support"244244- depends on VIDEO_ADV7842 && CEC_CORE243243+ depends on VIDEO_ADV7842244244+ select CEC_CORE245245 ---help---246246 When selected the adv7842 will support the optional247247 HDMI CEC feature.···480478481479config VIDEO_ADV7511_CEC482480 bool "Enable Analog Devices ADV7511 CEC support"483483- depends on VIDEO_ADV7511 && CEC_CORE481481+ depends on VIDEO_ADV7511482482+ select CEC_CORE484483 ---help---485484 When selected the adv7511 will support the optional486485 HDMI CEC feature.
···26262727config VIDEO_VIVID_CEC2828 bool "Enable CEC emulation support"2929- depends on VIDEO_VIVID && CEC_CORE2929+ depends on VIDEO_VIVID3030+ select CEC_CORE3031 ---help---3132 When selected the vivid module will emulate the optional3233 HDMI CEC feature.
+8-5
drivers/media/rc/rc-ir-raw.c
···211211 */212212void ir_raw_event_handle(struct rc_dev *dev)213213{214214- if (!dev->raw)214214+ if (!dev->raw || !dev->raw->thread)215215 return;216216217217 wake_up_process(dev->raw->thread);···490490{491491 int rc;492492 struct ir_raw_handler *handler;493493+ struct task_struct *thread;493494494495 if (!dev)495496 return -EINVAL;···508507 * because the event is coming from userspace509508 */510509 if (dev->driver_type != RC_DRIVER_IR_RAW_TX) {511511- dev->raw->thread = kthread_run(ir_raw_event_thread, dev->raw,512512- "rc%u", dev->minor);510510+ thread = kthread_run(ir_raw_event_thread, dev->raw, "rc%u",511511+ dev->minor);513512514514- if (IS_ERR(dev->raw->thread)) {515515- rc = PTR_ERR(dev->raw->thread);513513+ if (IS_ERR(thread)) {514514+ rc = PTR_ERR(thread);516515 goto out;517516 }517517+518518+ dev->raw->thread = thread;518519 }519520520521 mutex_lock(&ir_raw_handler_lock);
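The rc-ir-raw change builds the kthread in a local variable and only publishes it to dev->raw->thread after the IS_ERR() check, so ir_raw_event_handle() -- which now also checks for a NULL thread -- can never observe an ERR_PTR value. The publish-after-validate shape, abstracted (spawn() stands in for kthread_run()):

    #include <stddef.h>
    #include <stdlib.h>

    struct raw_dev { void *thread; /* read from other contexts */ };

    static void *spawn(void) { return malloc(1); } /* kthread_run stand-in */

    static int start_thread(struct raw_dev *raw)
    {
            void *thread = spawn();   /* land in a local first */

            if (!thread)
                    return -1;        /* raw->thread never saw the failure */

            raw->thread = thread;     /* publish only a valid pointer */
            return 0;
    }

    int main(void)
    {
            struct raw_dev raw = { .thread = NULL };
            int ret = start_thread(&raw);

            free(raw.thread);
            return ret;
    }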
+6
drivers/media/rc/sir_ir.c
···
     static unsigned long delt;
     unsigned long deltintr;
     unsigned long flags;
+    int counter = 0;
     int iir, lsr;
 
     while ((iir = inb(io + UART_IIR) & UART_IIR_ID)) {
+        if (++counter > 256) {
+            dev_err(&sir_ir_dev->dev, "Trapped in interrupt");
+            break;
+        }
+
         switch (iir & UART_IIR_ID) { /* FIXME: this needs to be thinned out */
         case UART_IIR_MSI:
             (void)inb(io + UART_MSR);
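The guard added to sir_ir.c bounds a status-register-driven loop so a stuck interrupt source cannot wedge the handler forever. A small userspace sketch of the same escape valve; read_status() is a hypothetical stand-in for inb(io + UART_IIR):

#include <stdio.h>

static int read_status(void)
{
    static int remaining = 5;    /* pretend 5 pending events */
    return remaining-- > 0;
}

static void handle_events(void)
{
    int counter = 0;

    while (read_status()) {
        if (++counter > 256) {    /* same escape valve as the patch */
            fprintf(stderr, "trapped in interrupt\n");
            break;
        }
        /* service one event here */
    }
}

int main(void)
{
    handle_events();
    return 0;
}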
···
     int ret;
 
     ret = regmap_read_poll_timeout(arizona->regmap,
-                       ARIZONA_INTERRUPT_RAW_STATUS_5, val,
-                       ((val & mask) == target),
+                       reg, val, ((val & mask) == target),
                        ARIZONA_REG_POLL_DELAY_US,
                        timeout_ms * 1000);
     if (ret)
+3-3
drivers/misc/cxl/context.c
···
     mutex_init(&ctx->mapping_lock);
     ctx->mapping = NULL;
 
-    if (cxl_is_psl8(afu)) {
+    if (cxl_is_power8()) {
         spin_lock_init(&ctx->sste_lock);
 
         /*
···
     if (start + len > ctx->afu->adapter->ps_size)
         return -EINVAL;
 
-    if (cxl_is_psl9(ctx->afu)) {
+    if (cxl_is_power9()) {
         /*
          * Make sure there is a valid problem state
          * area space for this AFU.
···
 {
     struct cxl_context *ctx = container_of(rcu, struct cxl_context, rcu);
 
-    if (cxl_is_psl8(ctx->afu))
+    if (cxl_is_power8())
         free_page((u64)ctx->sstp);
     if (ctx->ff_page)
         __free_page(ctx->ff_page);
+5-13
drivers/misc/cxl/cxl.h
···
 #define CXL_PSL9_DSISR_An_PF_RGP  0x0000000000000090ULL /* PTE not found (Radix Guest (parent)) 0b10010000 */
 #define CXL_PSL9_DSISR_An_PF_HRH  0x0000000000000094ULL /* PTE not found (HPT/Radix Host) 0b10010100 */
 #define CXL_PSL9_DSISR_An_PF_STEG 0x000000000000009CULL /* PTE not found (STEG VA) 0b10011100 */
+#define CXL_PSL9_DSISR_An_URTCH   0x00000000000000B4ULL /* Unsupported Radix Tree Configuration 0b10110100 */
 
 /****** CXL_PSL_TFC_An ******************************************************/
 #define CXL_PSL_TFC_An_A  (1ull << (63-28)) /* Acknowledge non-translation fault */
···
 
 static inline bool cxl_is_power9(void)
 {
-    /* intermediate solution */
-    if (!cxl_is_power8() &&
-       (cpu_has_feature(CPU_FTRS_POWER9) ||
-        cpu_has_feature(CPU_FTR_POWER9_DD1)))
+    if (pvr_version_is(PVR_POWER9))
         return true;
     return false;
 }
 
-static inline bool cxl_is_psl8(struct cxl_afu *afu)
+static inline bool cxl_is_power9_dd1(void)
 {
-    if (afu->adapter->caia_major == 1)
-        return true;
-    return false;
-}
-
-static inline bool cxl_is_psl9(struct cxl_afu *afu)
-{
-    if (afu->adapter->caia_major == 2)
+    if ((pvr_version_is(PVR_POWER9)) &&
+        cpu_has_feature(CPU_FTR_POWER9_DD1))
         return true;
     return false;
 }
···
 
     /* Do this outside the status_mutex to avoid a circular dependency with
      * the locking in cxl_mmap_fault() */
-    if (copy_from_user(&work, uwork,
-               sizeof(struct cxl_ioctl_start_work))) {
-        rc = -EFAULT;
-        goto out;
-    }
+    if (copy_from_user(&work, uwork, sizeof(work)))
+        return -EFAULT;
 
     mutex_lock(&ctx->status_mutex);
     if (ctx->status != OPENED) {
+13-4
drivers/misc/cxl/main.c
···
 
     cxl_debugfs_init();
 
-    if ((rc = register_cxl_calls(&cxl_calls)))
-        goto err;
+    /*
+     * we don't register the callback on P9. slb callback is only
+     * used for the PSL8 MMU and CX4.
+     */
+    if (cxl_is_power8()) {
+        rc = register_cxl_calls(&cxl_calls);
+        if (rc)
+            goto err;
+    }
 
     if (cpu_has_feature(CPU_FTR_HVMODE)) {
         cxl_ops = &cxl_native_ops;
···
 
     return 0;
 err1:
-    unregister_cxl_calls(&cxl_calls);
+    if (cxl_is_power8())
+        unregister_cxl_calls(&cxl_calls);
 err:
     cxl_debugfs_exit();
     cxl_file_exit();
···
 
     cxl_debugfs_exit();
     cxl_file_exit();
-    unregister_cxl_calls(&cxl_calls);
+    if (cxl_is_power8())
+        unregister_cxl_calls(&cxl_calls);
     idr_destroy(&cxl_adapter_idr);
 }
+28-15
drivers/misc/cxl/native.c
···
                CXL_AFU_Cntl_An_RS_MASK | CXL_AFU_Cntl_An_ES_MASK,
                false);
 
-    /* Re-enable any masked interrupts */
-    serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
-    serr &= ~CXL_PSL_SERR_An_IRQ_MASKS;
-    cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
-
+    /*
+     * Re-enable any masked interrupts when the AFU is not
+     * activated to avoid side effects after attaching a process
+     * in dedicated mode.
+     */
+    if (afu->current_mode == 0) {
+        serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
+        serr &= ~CXL_PSL_SERR_An_IRQ_MASKS;
+        cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
+    }
 
     return rc;
 }
···
 
     pr_devel("PSL purge request\n");
 
-    if (cxl_is_psl8(afu))
+    if (cxl_is_power8())
         trans_fault = CXL_PSL_DSISR_TRANS;
-    if (cxl_is_psl9(afu))
+    if (cxl_is_power9())
         trans_fault = CXL_PSL9_DSISR_An_TF;
 
     if (!cxl_ops->link_ok(afu->adapter, afu)) {
···
         if (!test_tsk_thread_flag(current, TIF_32BIT))
             sr |= CXL_PSL_SR_An_SF;
     }
-    if (cxl_is_psl9(ctx->afu)) {
+    if (cxl_is_power9()) {
         if (radix_enabled())
             sr |= CXL_PSL_SR_An_XLAT_ror;
         else
···
 
 static bool cxl_is_translation_fault(struct cxl_afu *afu, u64 dsisr)
 {
-    if ((cxl_is_psl8(afu)) && (dsisr & CXL_PSL_DSISR_TRANS))
+    if ((cxl_is_power8()) && (dsisr & CXL_PSL_DSISR_TRANS))
         return true;
 
-    if ((cxl_is_psl9(afu)) && (dsisr & CXL_PSL9_DSISR_An_TF))
+    if ((cxl_is_power9()) && (dsisr & CXL_PSL9_DSISR_An_TF))
         return true;
 
     return false;
···
     if (ph != ctx->pe)
         return;
     dsisr = cxl_p2n_read(ctx->afu, CXL_PSL_DSISR_An);
-    if (cxl_is_psl8(ctx->afu) &&
+    if (cxl_is_power8() &&
         ((dsisr & CXL_PSL_DSISR_PENDING) == 0))
         return;
-    if (cxl_is_psl9(ctx->afu) &&
+    if (cxl_is_power9() &&
         ((dsisr & CXL_PSL9_DSISR_PENDING) == 0))
         return;
     /*
···
 
 void cxl_native_release_psl_err_irq(struct cxl *adapter)
 {
-    if (adapter->native->err_virq != irq_find_mapping(NULL, adapter->native->err_hwirq))
+    if (adapter->native->err_virq == 0 ||
+        adapter->native->err_virq !=
+        irq_find_mapping(NULL, adapter->native->err_hwirq))
         return;
 
     cxl_p1_write(adapter, CXL_PSL_ErrIVTE, 0x0000000000000000);
     cxl_unmap_irq(adapter->native->err_virq, adapter);
     cxl_ops->release_one_irq(adapter, adapter->native->err_hwirq);
     kfree(adapter->irq_name);
+    adapter->native->err_virq = 0;
 }
 
 int cxl_native_register_serr_irq(struct cxl_afu *afu)
···
 
 void cxl_native_release_serr_irq(struct cxl_afu *afu)
 {
-    if (afu->serr_virq != irq_find_mapping(NULL, afu->serr_hwirq))
+    if (afu->serr_virq == 0 ||
+        afu->serr_virq != irq_find_mapping(NULL, afu->serr_hwirq))
         return;
 
     cxl_p1n_write(afu, CXL_PSL_SERR_An, 0x0000000000000000);
     cxl_unmap_irq(afu->serr_virq, afu);
     cxl_ops->release_one_irq(afu->adapter, afu->serr_hwirq);
     kfree(afu->err_irq_name);
+    afu->serr_virq = 0;
 }
 
 int cxl_native_register_psl_irq(struct cxl_afu *afu)
···
 
 void cxl_native_release_psl_irq(struct cxl_afu *afu)
 {
-    if (afu->native->psl_virq != irq_find_mapping(NULL, afu->native->psl_hwirq))
+    if (afu->native->psl_virq == 0 ||
+        afu->native->psl_virq !=
+        irq_find_mapping(NULL, afu->native->psl_hwirq))
         return;
 
     cxl_unmap_irq(afu->native->psl_virq, afu);
     cxl_ops->release_one_irq(afu->adapter, afu->native->psl_hwirq);
     kfree(afu->psl_irq_name);
+    afu->native->psl_virq = 0;
 }
 
 static void recover_psl_err(struct cxl_afu *afu, u64 errstat)
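The three release functions above now follow the same idempotent-teardown pattern: record an "unmapped" sentinel (0) after tearing down and bail out early when the handle is already 0, so a repeated release is a harmless no-op rather than a double free. A minimal sketch of the pattern, with irq_res as a hypothetical userspace stand-in:

#include <stdlib.h>

struct irq_res {
    unsigned int virq;    /* 0 means "not mapped", as in the patch */
    char *name;
};

static void irq_res_release(struct irq_res *res)
{
    if (res->virq == 0)    /* already released: nothing to do */
        return;

    /* unmap/free the real resources here */
    free(res->name);
    res->name = NULL;
    res->virq = 0;         /* make a repeated call safe */
}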
+4-7
drivers/misc/cxl/pci.c
···
     /* nMMU_ID Defaults to: b'000001001' */
     xsl_dsnctl |= ((u64)0x09 << (63-28));
 
-    if (cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1)) {
+    if (!(cxl_is_power9_dd1())) {
         /*
          * Used to identify CAPI packets which should be sorted into
          * the Non-Blocking queues by the PHB. This field should match
···
     cxl_p1_write(adapter, CXL_PSL9_APCDEDTYPE, 0x40000003FFFF0000ULL);
 
     /* Disable vc dd1 fix */
-    if ((cxl_is_power9() && cpu_has_feature(CPU_FTR_POWER9_DD1)))
+    if (cxl_is_power9_dd1())
         cxl_p1_write(adapter, CXL_PSL9_GP_CT, 0x0400000000000001ULL);
 
     return 0;
···
      * The adapter is about to be reset, so ignore errors.
      * Not supported on P9 DD1
      */
-    if ((cxl_is_power8()) ||
-        ((cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1))))
+    if ((cxl_is_power8()) || (!(cxl_is_power9_dd1())))
         cxl_data_cache_flush(adapter);
 
     /* pcie_warm_reset requests a fundamental pci reset which includes a
···
     .debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl9,
     .debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl9,
     .psl_irq_dump_registers = cxl_native_irq_dump_regs_psl9,
-    .err_irq_dump_registers = cxl_native_err_irq_dump_regs,
     .debugfs_stop_trace = cxl_stop_trace_psl9,
     .write_timebase_ctrl = write_timebase_ctrl_psl9,
     .timebase_read = timebase_read_psl9,
···
      * Flush adapter datacache as its about to be removed.
      * Not supported on P9 DD1.
      */
-    if ((cxl_is_power8()) ||
-        ((cxl_is_power9() && !cpu_has_feature(CPU_FTR_POWER9_DD1))))
+    if ((cxl_is_power8()) || (!(cxl_is_power9_dd1())))
         cxl_data_cache_flush(adapter);
 
     cxl_deconfigure_adapter(adapter);
···
     int i;
     bool use_desc_chain_mode = true;
 
+    /*
+     * Broken SDIO with AP6255-based WiFi on Khadas VIM Pro has been
+     * reported. For some strange reason this occurs in descriptor
+     * chain mode only. So let's fall back to bounce buffer mode
+     * for command SD_IO_RW_EXTENDED.
+     */
+    if (mrq->cmd->opcode == SD_IO_RW_EXTENDED)
+        return;
+
     for_each_sg(data->sg, sg, data->sg_len, i)
         /* check for 8 byte alignment */
         if (sg->offset & 7) {
+35-11
drivers/mtd/nand/nand_base.c
···
     return 0;
 }
 
-const struct mtd_ooblayout_ops nand_ooblayout_lp_hamming_ops = {
+static const struct mtd_ooblayout_ops nand_ooblayout_lp_hamming_ops = {
     .ecc = nand_ooblayout_ecc_lp_hamming,
     .free = nand_ooblayout_free_lp_hamming,
 };
···
     /* Initialize the ->data_interface field. */
     ret = nand_init_data_interface(chip);
     if (ret)
-        return ret;
+        goto err_nand_init;
 
     /*
      * Setup the data interface correctly on the chip and controller side.
···
      */
     ret = nand_setup_data_interface(chip);
     if (ret)
-        return ret;
+        goto err_nand_init;
 
     nand_maf_id = chip->id.data[0];
     nand_dev_id = chip->id.data[1];
···
     mtd->size = i * chip->chipsize;
 
     return 0;
+
+err_nand_init:
+    /* Free manufacturer priv data. */
+    nand_manufacturer_cleanup(chip);
+
+    return ret;
 }
 EXPORT_SYMBOL(nand_scan_ident);
···
 
     /* New bad blocks should be marked in OOB, flash-based BBT, or both */
     if (WARN_ON((chip->bbt_options & NAND_BBT_NO_OOB_BBM) &&
-           !(chip->bbt_options & NAND_BBT_USE_FLASH)))
-        return -EINVAL;
+           !(chip->bbt_options & NAND_BBT_USE_FLASH))) {
+        ret = -EINVAL;
+        goto err_ident;
+    }
 
     if (invalid_ecc_page_accessors(chip)) {
         pr_err("Invalid ECC page accessors setup\n");
-        return -EINVAL;
+        ret = -EINVAL;
+        goto err_ident;
     }
 
     if (!(chip->options & NAND_OWN_BUFFERS)) {
         nbuf = kzalloc(sizeof(*nbuf), GFP_KERNEL);
-        if (!nbuf)
-            return -ENOMEM;
+        if (!nbuf) {
+            ret = -ENOMEM;
+            goto err_ident;
+        }
 
         nbuf->ecccalc = kmalloc(mtd->oobsize, GFP_KERNEL);
         if (!nbuf->ecccalc) {
···
 
         chip->buffers = nbuf;
     } else {
-        if (!chip->buffers)
-            return -ENOMEM;
+        if (!chip->buffers) {
+            ret = -ENOMEM;
+            goto err_ident;
+        }
     }
 
     /* Set the internal oob buffer location, just after the page data */
···
         return 0;
 
     /* Build bad block table */
-    return chip->scan_bbt(mtd);
+    ret = chip->scan_bbt(mtd);
+    if (ret)
+        goto err_free;
+    return 0;
+
 err_free:
     if (nbuf) {
         kfree(nbuf->databuf);
···
         kfree(nbuf->ecccalc);
         kfree(nbuf);
     }
+
+err_ident:
+    /* Clean up nand_scan_ident(). */
+
+    /* Free manufacturer priv data. */
+    nand_manufacturer_cleanup(chip);
+
     return ret;
 }
 EXPORT_SYMBOL(nand_scan_tail);
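The nand_base.c changes converge all early returns onto goto labels that unwind in reverse order of acquisition, so every failure after an allocation frees it exactly once. A compact sketch of that cleanup-ladder style under assumed names (buffers and setup() are illustrative, not the MTD API):

#include <stdlib.h>

struct buffers { char *calc; };

static int setup(struct buffers **out)
{
    struct buffers *nbuf;
    int ret;

    nbuf = calloc(1, sizeof(*nbuf));
    if (!nbuf) {
        ret = -1;
        goto err_ident;    /* nothing else to undo yet */
    }

    nbuf->calc = malloc(64);
    if (!nbuf->calc) {
        ret = -1;
        goto err_free;     /* undo the nbuf allocation */
    }

    *out = nbuf;
    return 0;

err_free:
    free(nbuf->calc);      /* free(NULL) is safe */
    free(nbuf);
err_ident:
    /* mirror of the "clean up nand_scan_ident()" teardown */
    return ret;
}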
-1
drivers/mtd/nand/nand_ids.c
···
  * published by the Free Software Foundation.
  *
  */
-#include <linux/module.h>
 #include <linux/mtd/nand.h>
 #include <linux/sizes.h>
···
     can_update_state_error_stats(dev, new_state);
     priv->state = new_state;
 
+    if (!cf)
+        return;
+
     if (unlikely(new_state == CAN_STATE_BUS_OFF)) {
         cf->can_id |= CAN_ERR_BUSOFF;
         return;
+1-1
drivers/net/can/peak_canfd/peak_canfd.c
···
                   struct pucan_rx_msg *msg_list, int msg_count)
 {
     void *msg_ptr = msg_list;
-    int i, msg_size;
+    int i, msg_size = 0;
 
     for (i = 0; i < msg_count; i++) {
         msg_size = peak_canfd_handle_msg(priv, msg_ptr);
+3-4
drivers/net/can/slcan.c
···
 static void slc_free_netdev(struct net_device *dev)
 {
     int i = dev->base_addr;
-    free_netdev(dev);
+
     slcan_devs[i] = NULL;
 }
···
 static void slc_setup(struct net_device *dev)
 {
     dev->netdev_ops = &slc_netdev_ops;
-    dev->destructor = slc_free_netdev;
+    dev->needs_free_netdev = true;
+    dev->priv_destructor = slc_free_netdev;
 
     dev->hard_header_len = 0;
     dev->addr_len = 0;
···
     if (sl->tty) {
         printk(KERN_ERR "%s: tty discipline still running\n",
                dev->name);
-        /* Intentionally leak the control block. */
-        dev->destructor = NULL;
     }
 
     unregister_netdev(dev);
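The slcan change follows the destructor split introduced across the tree here: the core frees the object itself (needs_free_netdev), while the driver-supplied callback only clears driver bookkeeping and must never free the object. A hypothetical userspace sketch of that ownership split (dev, devs and the function names are illustrative):

#include <stdlib.h>

struct dev {
    int slot;
    void (*priv_destructor)(struct dev *);
};

static struct dev *devs[8];

/* bookkeeping only; no free(d) here */
static void slc_priv_destroy(struct dev *d)
{
    devs[d->slot] = NULL;
}

/* the core owns the final free */
static void core_unregister(struct dev *d)
{
    if (d->priv_destructor)
        d->priv_destructor(d);
    free(d);
}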
···
     const struct peak_usb_adapter *peak_usb_adapter = NULL;
     int i, err = -ENOMEM;
 
-    usb_dev = interface_to_usbdev(intf);
-
     /* get corresponding PCAN-USB adapter */
     for (i = 0; i < ARRAY_SIZE(peak_usb_adapters_list); i++)
         if (peak_usb_adapters_list[i]->device_id == usb_id_product) {
···
     if (!peak_usb_adapter) {
         /* should never come except device_id bad usage in this file */
         pr_err("%s: didn't find device id. 0x%x in devices list\n",
-               PCAN_USB_DRIVER_NAME, usb_dev->descriptor.idProduct);
+               PCAN_USB_DRIVER_NAME, usb_id_product);
         return -ENODEV;
     }
···
         rxr->sgl_size = adapter->max_rx_sgl_size;
         rxr->smoothed_interval =
             ena_com_get_nonadaptive_moderation_interval_rx(ena_dev);
+        rxr->empty_rx_queue = 0;
     }
 }
 
···
     rx_ring->per_napi_bytes = 0;
 }
 
+static inline void ena_unmask_interrupt(struct ena_ring *tx_ring,
+                    struct ena_ring *rx_ring)
+{
+    struct ena_eth_io_intr_reg intr_reg;
+
+    /* Update intr register: rx intr delay,
+     * tx intr delay and interrupt unmask
+     */
+    ena_com_update_intr_reg(&intr_reg,
+                rx_ring->smoothed_interval,
+                tx_ring->smoothed_interval,
+                true);
+
+    /* It is a shared MSI-X.
+     * Tx and Rx CQ have pointer to it.
+     * So we use one of them to reach the intr reg
+     */
+    ena_com_unmask_intr(rx_ring->ena_com_io_cq, &intr_reg);
+}
+
 static inline void ena_update_ring_numa_node(struct ena_ring *tx_ring,
                          struct ena_ring *rx_ring)
 {
···
 {
     struct ena_napi *ena_napi = container_of(napi, struct ena_napi, napi);
     struct ena_ring *tx_ring, *rx_ring;
-    struct ena_eth_io_intr_reg intr_reg;
 
     u32 tx_work_done;
     u32 rx_work_done;
···
         if (ena_com_get_adaptive_moderation_enabled(rx_ring->ena_dev))
             ena_adjust_intr_moderation(rx_ring, tx_ring);
 
-        /* Update intr register: rx intr delay,
-         * tx intr delay and interrupt unmask
-         */
-        ena_com_update_intr_reg(&intr_reg,
-                    rx_ring->smoothed_interval,
-                    tx_ring->smoothed_interval,
-                    true);
-
-        /* It is a shared MSI-X.
-         * Tx and Rx CQ have pointer to it.
-         * So we use one of them to reach the intr reg
-         */
-        ena_com_unmask_intr(rx_ring->ena_com_io_cq, &intr_reg);
+        ena_unmask_interrupt(tx_ring, rx_ring);
     }
-
 
     ena_update_ring_numa_node(tx_ring, rx_ring);
···
 
     ena_napi_enable_all(adapter);
 
+    /* Enable completion queues interrupt */
+    for (i = 0; i < adapter->num_queues; i++)
+        ena_unmask_interrupt(&adapter->tx_ring[i],
+                     &adapter->rx_ring[i]);
+
     /* schedule napi in case we had pending packets
      * from the last time we disable napi
      */
···
               "Failed to get TX queue handlers. TX queue num %d rc: %d\n",
               qid, rc);
         ena_com_destroy_io_queue(ena_dev, ena_qid);
+        return rc;
     }
 
     ena_com_update_numa_node(tx_ring->ena_com_io_cq, ctx.numa_node);
···
               "Failed to get RX queue handlers. RX queue num %d rc: %d\n",
               qid, rc);
         ena_com_destroy_io_queue(ena_dev, ena_qid);
+        return rc;
     }
 
     ena_com_update_numa_node(rx_ring->ena_com_io_cq, ctx.numa_node);
···
 
     tx_info->tx_descs = nb_hw_desc;
     tx_info->last_jiffies = jiffies;
+    tx_info->print_once = 0;
 
     tx_ring->next_to_use = ENA_TX_RING_IDX_NEXT(next_to_use,
                             tx_ring->ring_size);
···
             "Reset attempt failed. Can not reset the device\n");
 }
 
-static void check_for_missing_tx_completions(struct ena_adapter *adapter)
+static int check_missing_comp_in_queue(struct ena_adapter *adapter,
+                       struct ena_ring *tx_ring)
 {
     struct ena_tx_buffer *tx_buf;
     unsigned long last_jiffies;
+    u32 missed_tx = 0;
+    int i;
+
+    for (i = 0; i < tx_ring->ring_size; i++) {
+        tx_buf = &tx_ring->tx_buffer_info[i];
+        last_jiffies = tx_buf->last_jiffies;
+        if (unlikely(last_jiffies &&
+                 time_is_before_jiffies(last_jiffies + TX_TIMEOUT))) {
+            if (!tx_buf->print_once)
+                netif_notice(adapter, tx_err, adapter->netdev,
+                         "Found a Tx that wasn't completed on time, qid %d, index %d.\n",
+                         tx_ring->qid, i);
+
+            tx_buf->print_once = 1;
+            missed_tx++;
+
+            if (unlikely(missed_tx > MAX_NUM_OF_TIMEOUTED_PACKETS)) {
+                netif_err(adapter, tx_err, adapter->netdev,
+                      "The number of lost tx completions is above the threshold (%d > %d). Reset the device\n",
+                      missed_tx, MAX_NUM_OF_TIMEOUTED_PACKETS);
+                set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
+                return -EIO;
+            }
+        }
+    }
+
+    return 0;
+}
+
+static void check_for_missing_tx_completions(struct ena_adapter *adapter)
+{
     struct ena_ring *tx_ring;
-    int i, j, budget;
-    u32 missed_tx;
+    int i, budget, rc;
 
     /* Make sure the driver doesn't turn the device in other process */
     smp_rmb();
···
     for (i = adapter->last_monitored_tx_qid; i < adapter->num_queues; i++) {
         tx_ring = &adapter->tx_ring[i];
 
-        for (j = 0; j < tx_ring->ring_size; j++) {
-            tx_buf = &tx_ring->tx_buffer_info[j];
-            last_jiffies = tx_buf->last_jiffies;
-            if (unlikely(last_jiffies && time_is_before_jiffies(last_jiffies + TX_TIMEOUT))) {
-                netif_notice(adapter, tx_err, adapter->netdev,
-                         "Found a Tx that wasn't completed on time, qid %d, index %d.\n",
-                         tx_ring->qid, j);
-
-                u64_stats_update_begin(&tx_ring->syncp);
-                missed_tx = tx_ring->tx_stats.missing_tx_comp++;
-                u64_stats_update_end(&tx_ring->syncp);
-
-                /* Clear last jiffies so the lost buffer won't
-                 * be counted twice.
-                 */
-                tx_buf->last_jiffies = 0;
-
-                if (unlikely(missed_tx > MAX_NUM_OF_TIMEOUTED_PACKETS)) {
-                    netif_err(adapter, tx_err, adapter->netdev,
-                          "The number of lost tx completion is above the threshold (%d > %d). Reset the device\n",
-                          missed_tx, MAX_NUM_OF_TIMEOUTED_PACKETS);
-                    set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags);
-                }
-            }
-        }
+        rc = check_missing_comp_in_queue(adapter, tx_ring);
+        if (unlikely(rc))
+            return;
 
         budget--;
         if (!budget)
···
     }
 
     adapter->last_monitored_tx_qid = i % adapter->num_queues;
+}
+
+/* trigger napi schedule after 2 consecutive detections */
+#define EMPTY_RX_REFILL 2
+/* For the rare case where the device runs out of Rx descriptors and the
+ * napi handler failed to refill new Rx descriptors (due to a lack of memory
+ * for example).
+ * This case will lead to a deadlock:
+ * The device won't send interrupts since all the new Rx packets will be dropped
+ * The napi handler won't allocate new Rx descriptors so the device won't be
+ * able to send new packets.
+ *
+ * This scenario can happen when the kernel's vm.min_free_kbytes is too small.
+ * It is recommended to have at least 512MB, with a minimum of 128MB for
+ * constrained environments.
+ *
+ * When such a situation is detected - Reschedule napi
+ */
+static void check_for_empty_rx_ring(struct ena_adapter *adapter)
+{
+    struct ena_ring *rx_ring;
+    int i, refill_required;
+
+    if (!test_bit(ENA_FLAG_DEV_UP, &adapter->flags))
+        return;
+
+    if (test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags))
+        return;
+
+    for (i = 0; i < adapter->num_queues; i++) {
+        rx_ring = &adapter->rx_ring[i];
+
+        refill_required =
+            ena_com_sq_empty_space(rx_ring->ena_com_io_sq);
+        if (unlikely(refill_required == (rx_ring->ring_size - 1))) {
+            rx_ring->empty_rx_queue++;
+
+            if (rx_ring->empty_rx_queue >= EMPTY_RX_REFILL) {
+                u64_stats_update_begin(&rx_ring->syncp);
+                rx_ring->rx_stats.empty_rx_ring++;
+                u64_stats_update_end(&rx_ring->syncp);
+
+                netif_err(adapter, drv, adapter->netdev,
+                      "trigger refill for ring %d\n", i);
+
+                napi_schedule(rx_ring->napi);
+                rx_ring->empty_rx_queue = 0;
+            }
+        } else {
+            rx_ring->empty_rx_queue = 0;
+        }
+    }
 }
 
 /* Check for keep alive expiration */
···
     check_for_admin_com_state(adapter);
 
     check_for_missing_tx_completions(adapter);
+
+    check_for_empty_rx_ring(adapter);
 
     if (debug_area)
         ena_dump_stats_to_buf(adapter, debug_area);
···
 {
     int release_bars;
 
+    if (ena_dev->mem_bar)
+        devm_iounmap(&pdev->dev, ena_dev->mem_bar);
+
+    devm_iounmap(&pdev->dev, ena_dev->reg_bar);
+
     release_bars = pci_select_bars(pdev, IORESOURCE_MEM) & ENA_BAR_MASK;
     pci_release_selected_regions(pdev, release_bars);
 }
···
         goto err_free_ena_dev;
     }
 
-    ena_dev->reg_bar = ioremap(pci_resource_start(pdev, ENA_REG_BAR),
-                   pci_resource_len(pdev, ENA_REG_BAR));
+    ena_dev->reg_bar = devm_ioremap(&pdev->dev,
+                    pci_resource_start(pdev, ENA_REG_BAR),
+                    pci_resource_len(pdev, ENA_REG_BAR));
     if (!ena_dev->reg_bar) {
         dev_err(&pdev->dev, "failed to remap regs bar\n");
         rc = -EFAULT;
···
     ena_set_push_mode(pdev, ena_dev, &get_feat_ctx);
 
     if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) {
-        ena_dev->mem_bar = ioremap_wc(pci_resource_start(pdev, ENA_MEM_BAR),
-                          pci_resource_len(pdev, ENA_MEM_BAR));
+        ena_dev->mem_bar = devm_ioremap_wc(&pdev->dev,
+                           pci_resource_start(pdev, ENA_MEM_BAR),
+                           pci_resource_len(pdev, ENA_MEM_BAR));
         if (!ena_dev->mem_bar) {
             rc = -EFAULT;
             goto err_device_destroy;
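check_for_empty_rx_ring() above uses a two-strike debounce: act only after EMPTY_RX_REFILL consecutive observations of a starved ring, and reset the counter whenever the condition clears, so a transient state never triggers a napi reschedule. A minimal sketch of that detection logic (ring_watch is a hypothetical name, not the driver's struct):

#define EMPTY_RX_REFILL 2

struct ring_watch {
    int empty_count;
};

/* returns 1 when the watchdog should kick (e.g. reschedule napi) */
static int ring_watch_update(struct ring_watch *w, int ring_is_starved)
{
    if (!ring_is_starved) {
        w->empty_count = 0;    /* condition cleared: start over */
        return 0;
    }

    if (++w->empty_count >= EMPTY_RX_REFILL) {
        w->empty_count = 0;    /* fire once, then re-arm */
        return 1;
    }

    return 0;
}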
+15-3
drivers/net/ethernet/amazon/ena/ena_netdev.h
···
 
 #define DRV_MODULE_VER_MAJOR    1
 #define DRV_MODULE_VER_MINOR    1
-#define DRV_MODULE_VER_SUBMINOR 2
+#define DRV_MODULE_VER_SUBMINOR 7
 
 #define DRV_MODULE_NAME         "ena"
 #ifndef DRV_MODULE_VERSION
···
     u32 tx_descs;
     /* num of buffers used by this skb */
     u32 num_of_bufs;
-    /* Save the last jiffies to detect missing tx packets */
+
+    /* Used to detect missing tx packets and limit the number of prints */
+    u32 print_once;
+    /* Save the last jiffies to detect missing tx packets
+     *
+     * set to a non-zero value on ena_start_xmit and set to zero on
+     * napi and timer_service_routine.
+     *
+     * while this value is not protected by lock,
+     * a given packet is not expected to be handled by ena_start_xmit
+     * and by napi/timer_service at the same time.
+     */
     unsigned long last_jiffies;
     struct ena_com_buf bufs[ENA_PKT_MAX_BUFS];
 } ____cacheline_aligned;
···
     u64 napi_comp;
     u64 tx_poll;
     u64 doorbells;
-    u64 missing_tx_comp;
     u64 bad_req_id;
 };
···
     u64 dma_mapping_err;
     u64 bad_desc_num;
     u64 rx_copybreak_pkt;
+    u64 empty_rx_ring;
 };
 
 struct ena_ring {
···
         struct ena_stats_tx tx_stats;
         struct ena_stats_rx rx_stats;
     };
+    int empty_rx_queue;
 } ____cacheline_aligned;
 
 struct ena_stats_dev {
+2-3
drivers/net/ethernet/amd/xgbe/xgbe-desc.c
···
                    struct xgbe_ring *ring,
                    struct xgbe_ring_data *rdata)
 {
-    int order, ret;
+    int ret;
 
     if (!ring->rx_hdr_pa.pages) {
         ret = xgbe_alloc_pages(pdata, &ring->rx_hdr_pa, GFP_ATOMIC, 0);
···
     }
 
     if (!ring->rx_buf_pa.pages) {
-        order = max_t(int, PAGE_ALLOC_COSTLY_ORDER - 1, 0);
         ret = xgbe_alloc_pages(pdata, &ring->rx_buf_pa, GFP_ATOMIC,
-                       order);
+                       PAGE_ALLOC_COSTLY_ORDER);
         if (ret)
             return ret;
     }
···
     }
 
     /* select a non-FCoE queue */
-    return fallback(dev, skb) % BNX2X_NUM_ETH_QUEUES(bp);
+    return fallback(dev, skb) % (BNX2X_NUM_ETH_QUEUES(bp) * bp->max_cos);
 }
 
 void bnx2x_set_num_queues(struct bnx2x *bp)
···
         /* when transmitting in a vf, start bd must hold the ethertype
          * for fw to enforce it
          */
+        u16 vlan_tci = 0;
 #ifndef BNX2X_STOP_ON_ERROR
-        if (IS_VF(bp))
+        if (IS_VF(bp)) {
 #endif
-            tx_start_bd->vlan_or_ethertype =
-                cpu_to_le16(ntohs(eth->h_proto));
+            /* Still need to consider inband vlan for enforced */
+            if (__vlan_get_tag(skb, &vlan_tci)) {
+                tx_start_bd->vlan_or_ethertype =
+                    cpu_to_le16(ntohs(eth->h_proto));
+            } else {
+                tx_start_bd->bd_flags.as_bitfield |=
+                    (X_ETH_INBAND_VLAN <<
+                     ETH_TX_BD_FLAGS_VLAN_MODE_SHIFT);
+                tx_start_bd->vlan_or_ethertype =
+                    cpu_to_le16(vlan_tci);
+            }
 #ifndef BNX2X_STOP_ON_ERROR
-        else
+        } else {
             /* used by FW for packet accounting */
             tx_start_bd->vlan_or_ethertype = cpu_to_le16(pkt_prod);
+        }
 #endif
     }
+1-1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
···
     } else {
         /* If no mc addresses are required, flush the configuration */
         rc = bnx2x_config_mcast(bp, &rparam, BNX2X_MCAST_CMD_DEL);
-        if (rc)
+        if (rc < 0)
             BNX2X_ERR("Failed to clear multicast configuration %d\n",
                   rc);
     }
+13-2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
···
     /* release VF resources */
     bnx2x_vf_free_resc(bp, vf);
 
+    vf->malicious = false;
+
     /* re-open the mailbox */
     bnx2x_vf_enable_mbx(bp, vf->abs_vfid);
     return;
···
            vf->abs_vfid, qidx);
         bnx2x_vf_handle_rss_update_eqe(bp, vf);
     case EVENT_RING_OPCODE_VF_FLR:
-    case EVENT_RING_OPCODE_MALICIOUS_VF:
         /* Do nothing for now */
+        return 0;
+    case EVENT_RING_OPCODE_MALICIOUS_VF:
+        vf->malicious = true;
         return 0;
     }
···
         if (vf->state != VF_ENABLED) {
             DP_AND((BNX2X_MSG_IOV | BNX2X_MSG_STATS),
                    "vf %d not enabled so no stats for it\n",
+                   vf->abs_vfid);
+            continue;
+        }
+
+        if (vf->malicious) {
+            DP_AND((BNX2X_MSG_IOV | BNX2X_MSG_STATS),
+                   "vf %d malicious so no stats for it\n",
                    vf->abs_vfid);
             continue;
         }
···
 {
     BNX2X_PCI_FREE(bp->vf2pf_mbox, bp->vf2pf_mbox_mapping,
                sizeof(struct bnx2x_vf_mbx_msg));
-    BNX2X_PCI_FREE(bp->vf2pf_mbox, bp->pf2vf_bulletin_mapping,
+    BNX2X_PCI_FREE(bp->pf2vf_bulletin, bp->pf2vf_bulletin_mapping,
                sizeof(union pf_vf_bulletin));
 }
+1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.h
···
 #define VF_RESET    3    /* VF FLR'd, pending cleanup */
 
     bool flr_clnup_stage;    /* true during flr cleanup */
+    bool malicious;        /* true if FW indicated so, until FLR */
 
     /* dma */
     dma_addr_t fw_stat_map;
+52-9
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
         cp_cons = NEXT_CMP(cp_cons);
     }
 
-    if (unlikely(agg_bufs > MAX_SKB_FRAGS)) {
+    if (unlikely(agg_bufs > MAX_SKB_FRAGS || TPA_END_ERRORS(tpa_end1))) {
         bnxt_abort_tpa(bp, bnapi, cp_cons, agg_bufs);
-        netdev_warn(bp->dev, "TPA frags %d exceeded MAX_SKB_FRAGS %d\n",
-                agg_bufs, (int)MAX_SKB_FRAGS);
+        if (agg_bufs > MAX_SKB_FRAGS)
+            netdev_warn(bp->dev, "TPA frags %d exceeded MAX_SKB_FRAGS %d\n",
+                    agg_bufs, (int)MAX_SKB_FRAGS);
         return NULL;
     }
···
     return rc;
 }
 
+/* In netpoll mode, if we are using a combined completion ring, we need to
+ * discard the rx packets and recycle the buffers.
+ */
+static int bnxt_force_rx_discard(struct bnxt *bp, struct bnxt_napi *bnapi,
+                 u32 *raw_cons, u8 *event)
+{
+    struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
+    u32 tmp_raw_cons = *raw_cons;
+    struct rx_cmp_ext *rxcmp1;
+    struct rx_cmp *rxcmp;
+    u16 cp_cons;
+    u8 cmp_type;
+
+    cp_cons = RING_CMP(tmp_raw_cons);
+    rxcmp = (struct rx_cmp *)
+            &cpr->cp_desc_ring[CP_RING(cp_cons)][CP_IDX(cp_cons)];
+
+    tmp_raw_cons = NEXT_RAW_CMP(tmp_raw_cons);
+    cp_cons = RING_CMP(tmp_raw_cons);
+    rxcmp1 = (struct rx_cmp_ext *)
+            &cpr->cp_desc_ring[CP_RING(cp_cons)][CP_IDX(cp_cons)];
+
+    if (!RX_CMP_VALID(rxcmp1, tmp_raw_cons))
+        return -EBUSY;
+
+    cmp_type = RX_CMP_TYPE(rxcmp);
+    if (cmp_type == CMP_TYPE_RX_L2_CMP) {
+        rxcmp1->rx_cmp_cfa_code_errors_v2 |=
+            cpu_to_le32(RX_CMPL_ERRORS_CRC_ERROR);
+    } else if (cmp_type == CMP_TYPE_RX_L2_TPA_END_CMP) {
+        struct rx_tpa_end_cmp_ext *tpa_end1;
+
+        tpa_end1 = (struct rx_tpa_end_cmp_ext *)rxcmp1;
+        tpa_end1->rx_tpa_end_cmp_errors_v2 |=
+            cpu_to_le32(RX_TPA_END_CMP_ERRORS);
+    }
+    return bnxt_rx_pkt(bp, bnapi, raw_cons, event);
+}
+
 #define BNXT_GET_EVENT_PORT(data)    \
     ((data) &            \
      ASYNC_EVENT_CMPL_PORT_CONN_NOT_ALLOWED_EVENT_DATA1_PORT_ID_MASK)
···
             if (unlikely(tx_pkts > bp->tx_wake_thresh))
                 rx_pkts = budget;
         } else if ((TX_CMP_TYPE(txcmp) & 0x30) == 0x10) {
-            rc = bnxt_rx_pkt(bp, bnapi, &raw_cons, &event);
+            if (likely(budget))
+                rc = bnxt_rx_pkt(bp, bnapi, &raw_cons, &event);
+            else
+                rc = bnxt_force_rx_discard(bp, bnapi, &raw_cons,
+                               &event);
             if (likely(rc >= 0))
                 rx_pkts += rc;
             else if (rc == -EBUSY)    /* partial completion */
···
     struct bnxt *bp = netdev_priv(dev);
     int i;
 
-    for (i = 0; i < bp->cp_nr_rings; i++) {
-        struct bnxt_irq *irq = &bp->irq_tbl[i];
+    /* Only process tx rings/combined rings in netpoll mode. */
+    for (i = 0; i < bp->tx_nr_rings; i++) {
+        struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
 
-        disable_irq(irq->vector);
-        irq->handler(irq->vector, bp->bnapi[i]);
-        enable_irq(irq->vector);
+        napi_schedule(&txr->bnapi->napi);
     }
 }
 #endif
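The poll-path change above routes rx completions through a discard helper when the budget is zero (netpoll), so TX completions can still be reaped from the shared completion ring without delivering packets. A minimal sketch of that budget-aware dispatch, with pkt and the helper names being hypothetical stand-ins for the driver's structures:

struct pkt { int discard; };

/* normal delivery path (stub) */
static int rx_deliver(struct pkt *p)
{
    (void)p;
    return 1;    /* one packet consumed */
}

/* mark the completion as errored so the ring logic recycles the buffer */
static int rx_discard(struct pkt *p)
{
    p->discard = 1;
    return 1;
}

static int handle_rx_entry(struct pkt *p, int budget)
{
    if (budget)
        return rx_deliver(p);

    /* zero budget (netpoll): still advance the ring, drop the payload */
    return rx_discard(p);
}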
···
     I40E_PRIV_FLAG("LinkPolling", I40E_FLAG_LINK_POLLING_ENABLED, 0),
     I40E_PRIV_FLAG("flow-director-atr", I40E_FLAG_FD_ATR_ENABLED, 0),
     I40E_PRIV_FLAG("veb-stats", I40E_FLAG_VEB_STATS_ENABLED, 0),
-    I40E_PRIV_FLAG("hw-atr-eviction", I40E_FLAG_HW_ATR_EVICT_CAPABLE, 0),
+    I40E_PRIV_FLAG("hw-atr-eviction", I40E_FLAG_HW_ATR_EVICT_ENABLED, 0),
     I40E_PRIV_FLAG("legacy-rx", I40E_FLAG_LEGACY_RX, 0),
 };
···
 
     /* Only allow ATR evict on hardware that is capable of handling it */
     if (pf->flags & I40E_FLAG_HW_ATR_EVICT_CAPABLE)
-        pf->flags &= ~I40E_FLAG_HW_ATR_EVICT_CAPABLE;
+        pf->flags &= ~I40E_FLAG_HW_ATR_EVICT_ENABLED;
 
     if (changed_flags & I40E_FLAG_TRUE_PROMISC_SUPPORT) {
         u16 sw_flags = 0, valid_flags = 0;
+22-21
drivers/net/ethernet/intel/i40e/i40e_main.c
···
  **/
 void i40e_service_event_schedule(struct i40e_pf *pf)
 {
-    if (!test_bit(__I40E_VSI_DOWN, pf->state) &&
+    if (!test_bit(__I40E_DOWN, pf->state) &&
         !test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state))
         queue_work(i40e_wq, &pf->service_task);
 }
···
          * this is not a performance path and napi_schedule()
          * can deal with rescheduling.
          */
-        if (!test_bit(__I40E_VSI_DOWN, pf->state))
+        if (!test_bit(__I40E_DOWN, pf->state))
             napi_schedule_irqoff(&q_vector->napi);
     }
···
 enable_intr:
     /* re-enable interrupt causes */
     wr32(hw, I40E_PFINT_ICR0_ENA, ena_mask);
-    if (!test_bit(__I40E_VSI_DOWN, pf->state)) {
+    if (!test_bit(__I40E_DOWN, pf->state)) {
         i40e_service_event_schedule(pf);
         i40e_irq_dynamic_enable_icr0(pf, false);
     }
···
 {
 
     /* if interface is down do nothing */
-    if (test_bit(__I40E_VSI_DOWN, pf->state))
+    if (test_bit(__I40E_DOWN, pf->state))
         return;
 
     if (test_bit(__I40E_FD_FLUSH_REQUESTED, pf->state))
···
     int i;
 
     /* if interface is down do nothing */
-    if (test_bit(__I40E_VSI_DOWN, pf->state) ||
+    if (test_bit(__I40E_DOWN, pf->state) ||
         test_bit(__I40E_CONFIG_BUSY, pf->state))
         return;
···
         reset_flags |= BIT(__I40E_GLOBAL_RESET_REQUESTED);
         clear_bit(__I40E_GLOBAL_RESET_REQUESTED, pf->state);
     }
-    if (test_bit(__I40E_VSI_DOWN_REQUESTED, pf->state)) {
-        reset_flags |= BIT(__I40E_VSI_DOWN_REQUESTED);
-        clear_bit(__I40E_VSI_DOWN_REQUESTED, pf->state);
+    if (test_bit(__I40E_DOWN_REQUESTED, pf->state)) {
+        reset_flags |= BIT(__I40E_DOWN_REQUESTED);
+        clear_bit(__I40E_DOWN_REQUESTED, pf->state);
     }
 
     /* If there's a recovery already waiting, it takes
···
 
     /* If we're already down or resetting, just bail */
     if (reset_flags &&
-        !test_bit(__I40E_VSI_DOWN, pf->state) &&
+        !test_bit(__I40E_DOWN, pf->state) &&
         !test_bit(__I40E_CONFIG_BUSY, pf->state)) {
         rtnl_lock();
         i40e_do_reset(pf, reset_flags, true);
···
     u32 val;
     int v;
 
-    if (test_bit(__I40E_VSI_DOWN, pf->state))
+    if (test_bit(__I40E_DOWN, pf->state))
         goto clear_recovery;
     dev_dbg(&pf->pdev->dev, "Rebuilding internal switch\n");
···
         (pf->hw.aq.api_min_ver > 4))) {
         /* Supported in FW API version higher than 1.4 */
         pf->flags |= I40E_FLAG_GENEVE_OFFLOAD_CAPABLE;
-        pf->flags = I40E_FLAG_HW_ATR_EVICT_CAPABLE;
-    } else {
-        pf->flags = I40E_FLAG_HW_ATR_EVICT_CAPABLE;
     }
+
+    /* Enable HW ATR eviction if possible */
+    if (pf->flags & I40E_FLAG_HW_ATR_EVICT_CAPABLE)
+        pf->flags |= I40E_FLAG_HW_ATR_EVICT_ENABLED;
 
     pf->eeprom_version = 0xDEAD;
     pf->lan_veb = I40E_NO_VEB;
···
         return -ENODEV;
     }
     if (vsi == pf->vsi[pf->lan_vsi] &&
-        !test_bit(__I40E_VSI_DOWN, pf->state)) {
+        !test_bit(__I40E_DOWN, pf->state)) {
         dev_info(&pf->pdev->dev, "Can't remove PF VSI\n");
         return -ENODEV;
     }
···
     }
     pf->next_vsi = 0;
     pf->pdev = pdev;
-    set_bit(__I40E_VSI_DOWN, pf->state);
+    set_bit(__I40E_DOWN, pf->state);
 
     hw = &pf->hw;
     hw->back = pf;
···
      * before setting up the misc vector or we get a race and the vector
      * ends up disabled forever.
      */
-    clear_bit(__I40E_VSI_DOWN, pf->state);
+    clear_bit(__I40E_DOWN, pf->state);
 
     /* In case of MSIX we are going to setup the misc vector right here
      * to handle admin queue events etc. In case of legacy and MSI
···
 
     /* Unwind what we've done if something failed in the setup */
 err_vsis:
-    set_bit(__I40E_VSI_DOWN, pf->state);
+    set_bit(__I40E_DOWN, pf->state);
     i40e_clear_interrupt_scheme(pf);
     kfree(pf->vsi);
 err_switch_setup:
···
 
     /* no more scheduling of any task */
     set_bit(__I40E_SUSPENDED, pf->state);
-    set_bit(__I40E_VSI_DOWN, pf->state);
+    set_bit(__I40E_DOWN, pf->state);
     if (pf->service_timer.data)
         del_timer_sync(&pf->service_timer);
     if (pf->service_task.func)
···
     struct i40e_hw *hw = &pf->hw;
 
     set_bit(__I40E_SUSPENDED, pf->state);
-    set_bit(__I40E_VSI_DOWN, pf->state);
+    set_bit(__I40E_DOWN, pf->state);
     rtnl_lock();
     i40e_prep_for_reset(pf, true);
     rtnl_unlock();
···
     int retval = 0;
 
     set_bit(__I40E_SUSPENDED, pf->state);
-    set_bit(__I40E_VSI_DOWN, pf->state);
+    set_bit(__I40E_DOWN, pf->state);
 
     if (pf->wol_en && (pf->flags & I40E_FLAG_WOL_MC_MAGIC_PKT_WAKE))
         i40e_enable_mc_magic_wake(pf);
···
 
     /* handling the reset will rebuild the device state */
     if (test_and_clear_bit(__I40E_SUSPENDED, pf->state)) {
-        clear_bit(__I40E_VSI_DOWN, pf->state);
+        clear_bit(__I40E_DOWN, pf->state);
         rtnl_lock();
         i40e_reset_and_rebuild(pf, false, true);
         rtnl_unlock();
+4-3
drivers/net/ethernet/intel/i40e/i40e_txrx.c
···
 #if (PAGE_SIZE < 8192)
     unsigned int truesize = i40e_rx_pg_size(rx_ring) / 2;
 #else
-    unsigned int truesize = SKB_DATA_ALIGN(size);
+    unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) +
+                SKB_DATA_ALIGN(I40E_SKB_PAD + size);
 #endif
     struct sk_buff *skb;
 
···
     /* Due to lack of space, no more new filters can be programmed */
     if (th->syn && (pf->flags & I40E_FLAG_FD_ATR_AUTO_DISABLED))
         return;
-    if (pf->flags & I40E_FLAG_HW_ATR_EVICT_CAPABLE) {
+    if (pf->flags & I40E_FLAG_HW_ATR_EVICT_ENABLED) {
         /* HW ATR eviction will take care of removing filters on FIN
          * and RST packets.
          */
···
          I40E_TXD_FLTR_QW1_CNTINDEX_SHIFT) &
          I40E_TXD_FLTR_QW1_CNTINDEX_MASK;
 
-    if (pf->flags & I40E_FLAG_HW_ATR_EVICT_CAPABLE)
+    if (pf->flags & I40E_FLAG_HW_ATR_EVICT_ENABLED)
         dtype_cmd |= I40E_TXD_FLTR_QW1_ATR_MASK;
 
     fdir_desc->qindex_flex_ptype_vsi = cpu_to_le32(flex_ptype);
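The corrected truesize in the i40e change aligns each component separately (the headroom plus payload, and the shared-info trailer) before summing, rather than aligning the payload alone. A worked sketch of that arithmetic; the constants here are illustrative stand-ins, not the real I40E_SKB_PAD or sizeof(struct skb_shared_info):

#include <stdio.h>

#define ALIGN_UP(x, a)    (((x) + (a) - 1) & ~((a) - 1))
#define CACHELINE    64

int main(void)
{
    unsigned int pad = 256;       /* stand-in for I40E_SKB_PAD */
    unsigned int shinfo = 320;    /* stand-in for the shared-info size */
    unsigned int size = 1500;

    unsigned int truesize = ALIGN_UP(shinfo, CACHELINE) +
                ALIGN_UP(pad + size, CACHELINE);

    printf("truesize = %u\n", truesize);
    return 0;
}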
···
     emac_mac_config(adpt);
     emac_mac_rx_descs_refill(adpt, &adpt->rx_q);
 
-    adpt->phydev->irq = PHY_IGNORE_INTERRUPT;
+    adpt->phydev->irq = PHY_POLL;
     ret = phy_connect_direct(netdev, adpt->phydev, emac_adjust_link,
                  PHY_INTERFACE_MODE_SGMII);
     if (ret) {
+4-71
drivers/net/ethernet/qualcomm/emac/emac-phy.c
···
 /* Qualcomm Technologies, Inc. EMAC PHY Controller driver.
  */
 
-#include <linux/module.h>
-#include <linux/of.h>
-#include <linux/of_net.h>
 #include <linux/of_mdio.h>
 #include <linux/phy.h>
 #include <linux/iopoll.h>
 #include <linux/acpi.h>
 #include "emac.h"
-#include "emac-mac.h"
 
 /* EMAC base register offsets */
 #define EMAC_MDIO_CTRL                                        0x001414
···
 
 #define MDIO_WAIT_TIMES                                           1000
 
-#define EMAC_LINK_SPEED_DEFAULT (\
-        EMAC_LINK_SPEED_10_HALF  |\
-        EMAC_LINK_SPEED_10_FULL  |\
-        EMAC_LINK_SPEED_100_HALF |\
-        EMAC_LINK_SPEED_100_FULL |\
-        EMAC_LINK_SPEED_1GB_FULL)
-
-/**
- * emac_phy_mdio_autopoll_disable() - disable mdio autopoll
- * @adpt: the emac adapter
- *
- * The autopoll feature takes over the MDIO bus. In order for
- * the PHY driver to be able to talk to the PHY over the MDIO
- * bus, we need to temporarily disable the autopoll feature.
- */
-static int emac_phy_mdio_autopoll_disable(struct emac_adapter *adpt)
-{
-    u32 val;
-
-    /* disable autopoll */
-    emac_reg_update32(adpt->base + EMAC_MDIO_CTRL, MDIO_AP_EN, 0);
-
-    /* wait for any mdio polling to complete */
-    if (!readl_poll_timeout(adpt->base + EMAC_MDIO_CTRL, val,
-                !(val & MDIO_BUSY), 100, MDIO_WAIT_TIMES * 100))
-        return 0;
-
-    /* failed to disable; ensure it is enabled before returning */
-    emac_reg_update32(adpt->base + EMAC_MDIO_CTRL, 0, MDIO_AP_EN);
-
-    return -EBUSY;
-}
-
-/**
- * emac_phy_mdio_autopoll_enable() - enable mdio autopoll
- * @adpt: the emac adapter
- *
- * The EMAC has the ability to poll the external PHY on the MDIO
- * bus for link state changes. This eliminates the need for the
- * driver to poll the phy. If the link state does change,
- * the EMAC issues an interrupt on behalf of the PHY.
- */
-static void emac_phy_mdio_autopoll_enable(struct emac_adapter *adpt)
-{
-    emac_reg_update32(adpt->base + EMAC_MDIO_CTRL, 0, MDIO_AP_EN);
-}
-
 static int emac_mdio_read(struct mii_bus *bus, int addr, int regnum)
 {
     struct emac_adapter *adpt = bus->priv;
     u32 reg;
-    int ret;
-
-    ret = emac_phy_mdio_autopoll_disable(adpt);
-    if (ret)
-        return ret;
 
     emac_reg_update32(adpt->base + EMAC_PHY_STS, PHY_ADDR_BMSK,
               (addr << PHY_ADDR_SHFT));
···
     if (readl_poll_timeout(adpt->base + EMAC_MDIO_CTRL, reg,
                    !(reg & (MDIO_START | MDIO_BUSY)),
                    100, MDIO_WAIT_TIMES * 100))
-        ret = -EIO;
-    else
-        ret = (reg >> MDIO_DATA_SHFT) & MDIO_DATA_BMSK;
+        return -EIO;
 
-    emac_phy_mdio_autopoll_enable(adpt);
-
-    return ret;
+    return (reg >> MDIO_DATA_SHFT) & MDIO_DATA_BMSK;
 }
 
 static int emac_mdio_write(struct mii_bus *bus, int addr, int regnum, u16 val)
 {
     struct emac_adapter *adpt = bus->priv;
     u32 reg;
-    int ret;
-
-    ret = emac_phy_mdio_autopoll_disable(adpt);
-    if (ret)
-        return ret;
 
     emac_reg_update32(adpt->base + EMAC_PHY_STS, PHY_ADDR_BMSK,
               (addr << PHY_ADDR_SHFT));
···
     if (readl_poll_timeout(adpt->base + EMAC_MDIO_CTRL, reg,
                    !(reg & (MDIO_START | MDIO_BUSY)), 100,
                    MDIO_WAIT_TIMES * 100))
-        ret = -EIO;
+        return -EIO;
 
-    emac_phy_mdio_autopoll_enable(adpt);
-
-    return ret;
+    return 0;
 }
 
 /* Configure the MDIO bus and connect the external PHY */
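The slimmed-down MDIO accessors above still rely on the readl_poll_timeout() idiom: poll a status word until a condition holds or a deadline passes. A userspace analog of that loop, where read_reg() is a hypothetical stand-in for the MMIO read:

#include <stdint.h>
#include <time.h>

static uint32_t read_reg(void)
{
    return 0;    /* pretend BUSY/START already cleared */
}

/* returns 0 if (reg & mask) clears before timeout_us elapses, -1 otherwise */
static int poll_clear(uint32_t mask, long timeout_us)
{
    struct timespec start, now;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (;;) {
        if (!(read_reg() & mask))
            return 0;
        clock_gettime(CLOCK_MONOTONIC, &now);
        if ((now.tv_sec - start.tv_sec) * 1000000L +
            (now.tv_nsec - start.tv_nsec) / 1000L > timeout_us)
            return -1;
    }
}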
···
 {
     /* Context type from W/B descriptor must be zero */
     if (le32_to_cpu(p->des3) & TDES3_CONTEXT_TYPE)
-        return -EINVAL;
+        return 0;
 
     /* Tx Timestamp Status is 1 so des0 and des1'll have valid values */
     if (le32_to_cpu(p->des3) & TDES3_TIMESTAMP_STATUS)
-        return 0;
+        return 1;
 
-    return 1;
+    return 0;
 }
 
 static inline u64 dwmac4_get_timestamp(void *desc, u32 ats)
···
         }
     }
 exit:
-    return ret;
+    if (likely(ret == 0))
+        return 1;
+
+    return 0;
 }
 
 static void dwmac4_rd_init_rx_desc(struct dma_desc *p, int disable_rx_ic,
+37-15
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···
         return;
 
     /* check tx tstamp status */
-    if (!priv->hw->desc->get_tx_timestamp_status(p)) {
+    if (priv->hw->desc->get_tx_timestamp_status(p)) {
         /* get the valid tstamp */
         ns = priv->hw->desc->get_timestamp(p, priv->adv_ts);
 
         memset(&shhwtstamp, 0, sizeof(struct skb_shared_hwtstamps));
         shhwtstamp.hwtstamp = ns_to_ktime(ns);
 
-        netdev_info(priv->dev, "get valid TX hw timestamp %llu\n", ns);
+        netdev_dbg(priv->dev, "get valid TX hw timestamp %llu\n", ns);
         /* pass tstamp to stack */
         skb_tstamp_tx(skb, &shhwtstamp);
     }
···
         return;
 
     /* Check if timestamp is available */
-    if (!priv->hw->desc->get_rx_timestamp_status(p, priv->adv_ts)) {
+    if (priv->hw->desc->get_rx_timestamp_status(p, priv->adv_ts)) {
         /* For GMAC4, the valid timestamp is from CTX next desc. */
         if (priv->plat->has_gmac4)
             ns = priv->hw->desc->get_timestamp(np, priv->adv_ts);
         else
             ns = priv->hw->desc->get_timestamp(p, priv->adv_ts);
 
-        netdev_info(priv->dev, "get valid RX hw timestamp %llu\n", ns);
+        netdev_dbg(priv->dev, "get valid RX hw timestamp %llu\n", ns);
         shhwtstamp = skb_hwtstamps(skb);
         memset(shhwtstamp, 0, sizeof(struct skb_shared_hwtstamps));
         shhwtstamp->hwtstamp = ns_to_ktime(ns);
     } else {
-        netdev_err(priv->dev, "cannot get RX hw timestamp\n");
+        netdev_dbg(priv->dev, "cannot get RX hw timestamp\n");
     }
 }
···
         /* PTP v1, UDP, any kind of event packet */
         config.rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_EVENT;
         /* take time stamp for all event messages */
-        snap_type_sel = PTP_TCR_SNAPTYPSEL_1;
+        if (priv->plat->has_gmac4)
+            snap_type_sel = PTP_GMAC4_TCR_SNAPTYPSEL_1;
+        else
+            snap_type_sel = PTP_TCR_SNAPTYPSEL_1;
 
         ptp_over_ipv4_udp = PTP_TCR_TSIPV4ENA;
         ptp_over_ipv6_udp = PTP_TCR_TSIPV6ENA;
···
         config.rx_filter = HWTSTAMP_FILTER_PTP_V2_L4_EVENT;
         ptp_v2 = PTP_TCR_TSVER2ENA;
         /* take time stamp for all event messages */
-        snap_type_sel = PTP_TCR_SNAPTYPSEL_1;
+        if (priv->plat->has_gmac4)
+            snap_type_sel = PTP_GMAC4_TCR_SNAPTYPSEL_1;
+        else
+            snap_type_sel = PTP_TCR_SNAPTYPSEL_1;
 
         ptp_over_ipv4_udp = PTP_TCR_TSIPV4ENA;
         ptp_over_ipv6_udp = PTP_TCR_TSIPV6ENA;
···
         config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
         ptp_v2 = PTP_TCR_TSVER2ENA;
         /* take time stamp for all event messages */
-        snap_type_sel = PTP_TCR_SNAPTYPSEL_1;
+        if (priv->plat->has_gmac4)
+            snap_type_sel = PTP_GMAC4_TCR_SNAPTYPSEL_1;
+        else
+            snap_type_sel = PTP_TCR_SNAPTYPSEL_1;
 
         ptp_over_ipv4_udp = PTP_TCR_TSIPV4ENA;
         ptp_over_ipv6_udp = PTP_TCR_TSIPV6ENA;
···
     u32 rx_count = priv->plat->rx_queues_to_use;
     unsigned int bfsize = 0;
     int ret = -ENOMEM;
-    u32 queue;
+    int queue;
     int i;
 
     if (priv->hw->mode->set_16kib_bfsize)
···
 
         priv->hw->desc->prepare_tso_tx_desc(desc, 0, buff_size,
                             0, 1,
-                            (last_segment) && (buff_size < TSO_MAX_BUFF_SIZE),
+                            (last_segment) && (tmp_len <= TSO_MAX_BUFF_SIZE),
                             0, 0);
 
         tmp_len -= TSO_MAX_BUFF_SIZE;
···
 
     tx_q->tx_skbuff_dma[first_entry].buf = des;
     tx_q->tx_skbuff_dma[first_entry].len = skb_headlen(skb);
-    tx_q->tx_skbuff[first_entry] = skb;
 
     first->des0 = cpu_to_le32(des);
 
···
 
     tx_q->tx_skbuff_dma[tx_q->cur_tx].last_segment = true;
 
+    /* Only the last descriptor gets to point to the skb. */
+    tx_q->tx_skbuff[tx_q->cur_tx] = skb;
+
+    /* We've used all descriptors we need for this skb, however,
+     * advance cur_tx so that it references a fresh descriptor.
+     * ndo_start_xmit will fill this descriptor the next time it's
+     * called and stmmac_tx_clean may clean up to this descriptor.
+     */
     tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, DMA_TX_SIZE);
 
     if (unlikely(stmmac_tx_avail(priv, queue) <= (MAX_SKB_FRAGS + 1))) {
···
     int i, csum_insertion = 0, is_jumbo = 0;
     u32 queue = skb_get_queue_mapping(skb);
     int nfrags = skb_shinfo(skb)->nr_frags;
-    unsigned int entry, first_entry;
+    int entry;
+    unsigned int first_entry;
     struct dma_desc *desc, *first;
     struct stmmac_tx_queue *tx_q;
     unsigned int enh_desc;
···
     desc = tx_q->dma_tx + entry;
 
     first = desc;
-
-    tx_q->tx_skbuff[first_entry] = skb;
 
     enh_desc = priv->plat->enh_desc;
     /* To program the descriptors according to the size of the frame */
···
                     skb->len);
     }
 
-    entry = STMMAC_GET_ENTRY(entry, DMA_TX_SIZE);
+    /* Only the last descriptor gets to point to the skb. */
+    tx_q->tx_skbuff[entry] = skb;
 
+    /* We've used all descriptors we need for this skb, however,
+     * advance cur_tx so that it references a fresh descriptor.
+     * ndo_start_xmit will fill this descriptor the next time it's
+     * called and stmmac_tx_clean may clean up to this descriptor.
+     */
+    entry = STMMAC_GET_ENTRY(entry, DMA_TX_SIZE);
     tx_q->cur_tx = entry;
 
     if (netif_msg_pktdata(priv)) {
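Both stmmac transmit paths now enforce the same descriptor-ownership rule: in a multi-descriptor transmit, only the ring slot of the last descriptor records the packet pointer, so the cleanup path releases the packet exactly once, when the final descriptor completes. A small sketch of the rule under assumed names (ring, slot and tx_fill are illustrative):

#include <stddef.h>

#define RING_SIZE 8

struct slot { void *pkt; };

static struct slot ring[RING_SIZE];

static unsigned int tx_fill(unsigned int entry, void *pkt, int ndesc)
{
    int i;

    for (i = 0; i < ndesc; i++) {
        /* only the last descriptor points at the packet */
        ring[entry].pkt = (i == ndesc - 1) ? pkt : NULL;
        entry = (entry + 1) % RING_SIZE;
    }

    /* cur_tx now references a fresh descriptor, as the patch comments say */
    return entry;
}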
+2-1
drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
···
 /* Enable Snapshot for Messages Relevant to Master */
 #define PTP_TCR_TSMSTRENA        BIT(15)
 /* Select PTP packets for Taking Snapshots */
-#define PTP_TCR_SNAPTYPSEL_1        GENMASK(17, 16)
+#define PTP_TCR_SNAPTYPSEL_1        BIT(16)
+#define PTP_GMAC4_TCR_SNAPTYPSEL_1    GENMASK(17, 16)
 /* Enable MAC address for PTP Frame Filtering */
 #define PTP_TCR_TSENMACADDR        BIT(18)
+1-1
drivers/net/ethernet/ti/cpsw-common.c
···
     if (of_device_is_compatible(dev->of_node, "ti,dm816-emac"))
         return cpsw_am33xx_cm_get_macid(dev, 0x30, slave, mac_addr);
 
-    if (of_machine_is_compatible("ti,am4372"))
+    if (of_machine_is_compatible("ti,am43"))
         return cpsw_am33xx_cm_get_macid(dev, 0x630, slave, mac_addr);
 
     if (of_machine_is_compatible("ti,dra7"))
···
 #define MACVLAN_HASH_SIZE    (1<<MACVLAN_HASH_BITS)
 #define MACVLAN_BC_QUEUE_LEN    1000
 
+#define MACVLAN_F_PASSTHRU    1
+#define MACVLAN_F_ADDRCHANGE    2
+
 struct macvlan_port {
     struct net_device    *dev;
     struct hlist_head    vlan_hash[MACVLAN_HASH_SIZE];
     struct list_head    vlans;
     struct sk_buff_head    bc_queue;
     struct work_struct    bc_work;
-    bool            passthru;
+    u32            flags;
     int            count;
     struct hlist_head    vlan_source_hash[MACVLAN_HASH_SIZE];
     DECLARE_BITMAP(mc_filter, MACVLAN_MC_FILTER_SZ);
+    unsigned char        perm_addr[ETH_ALEN];
 };
 
 struct macvlan_source_entry {
···
 #define MACVLAN_SKB_CB(__skb) ((struct macvlan_skb_cb *)&((__skb)->cb[0]))
 
 static void macvlan_port_destroy(struct net_device *dev);
+
+static inline bool macvlan_passthru(const struct macvlan_port *port)
+{
+    return port->flags & MACVLAN_F_PASSTHRU;
+}
+
+static inline void macvlan_set_passthru(struct macvlan_port *port)
+{
+    port->flags |= MACVLAN_F_PASSTHRU;
+}
+
+static inline bool macvlan_addr_change(const struct macvlan_port *port)
+{
+    return port->flags & MACVLAN_F_ADDRCHANGE;
+}
+
+static inline void macvlan_set_addr_change(struct macvlan_port *port)
+{
+    port->flags |= MACVLAN_F_ADDRCHANGE;
+}
+
+static inline void macvlan_clear_addr_change(struct macvlan_port *port)
+{
+    port->flags &= ~MACVLAN_F_ADDRCHANGE;
+}
 
 /* Hash Ethernet address */
 static u32 macvlan_eth_hash(const unsigned char *addr)
···
 static bool macvlan_addr_busy(const struct macvlan_port *port,
                   const unsigned char *addr)
 {
-    /* Test to see if the specified multicast address is
+    /* Test to see if the specified address is
      * currently in use by the underlying device or
      * another macvlan.
      */
-    if (ether_addr_equal_64bits(port->dev->dev_addr, addr))
+    if (!macvlan_passthru(port) && !macvlan_addr_change(port) &&
+        ether_addr_equal_64bits(port->dev->dev_addr, addr))
         return true;
 
     if (macvlan_hash_lookup(port, addr))
···
     }
 
     macvlan_forward_source(skb, port, eth->h_source);
-    if (port->passthru)
+    if (macvlan_passthru(port))
         vlan = list_first_or_null_rcu(&port->vlans,
                           struct macvlan_dev, list);
     else
···
     struct net_device *lowerdev = vlan->lowerdev;
     int err;
 
-    if (vlan->port->passthru) {
+    if (macvlan_passthru(vlan->port)) {
         if (!(vlan->flags & MACVLAN_FLAG_NOPROMISC)) {
             err = dev_set_promiscuity(lowerdev, 1);
             if (err < 0)
···
     dev_uc_unsync(lowerdev, dev);
     dev_mc_unsync(lowerdev, dev);
 
-    if (vlan->port->passthru) {
+    if (macvlan_passthru(vlan->port)) {
         if (!(vlan->flags & MACVLAN_FLAG_NOPROMISC))
             dev_set_promiscuity(lowerdev, -1);
         goto hash_del;
···
 {
     struct macvlan_dev *vlan = netdev_priv(dev);
     struct net_device *lowerdev = vlan->lowerdev;
+    struct macvlan_port *port = vlan->port;
     int err;
 
     if (!(dev->flags & IFF_UP)) {
···
         if (macvlan_addr_busy(vlan->port, addr))
             return -EBUSY;
 
-        if (!vlan->port->passthru) {
+        if (!macvlan_passthru(port)) {
             err = dev_uc_add(lowerdev, addr);
             if (err)
                 return err;
···
 
         macvlan_hash_change_addr(vlan, addr);
     }
+    if (macvlan_passthru(port) && !macvlan_addr_change(port)) {
+        /* Since addr_change isn't set, we are here due to lower
+         * device change.  Save the lower-dev address so we can
+         * restore it later.
+         */
+        ether_addr_copy(vlan->port->perm_addr,
+                lowerdev->dev_addr);
+    }
+    macvlan_clear_addr_change(port);
     return 0;
 }
···
     if (!is_valid_ether_addr(addr->sa_data))
         return -EADDRNOTAVAIL;
 
+    /* If the addresses are the same, this is a no-op */
+    if (ether_addr_equal(dev->dev_addr, addr->sa_data))
+        return 0;
+
     if (vlan->mode == MACVLAN_MODE_PASSTHRU) {
+        macvlan_set_addr_change(vlan->port);
         dev_set_mac_address(vlan->lowerdev, addr);
         return 0;
     }
···
     /* Support unicast filter only on passthru devices.
      * Multicast filter should be allowed on all devices.
      */
-    if (!vlan->port->passthru && is_unicast_ether_addr(addr))
+    if (!macvlan_passthru(vlan->port) && is_unicast_ether_addr(addr))
         return -EOPNOTSUPP;
 
     if (flags & NLM_F_REPLACE)
···
     /* Support unicast filter only on passthru devices.
      * Multicast filter should be allowed on all devices.
      */
-    if (!vlan->port->passthru && is_unicast_ether_addr(addr))
+    if (!macvlan_passthru(vlan->port) && is_unicast_ether_addr(addr))
         return -EOPNOTSUPP;
 
     if (is_unicast_ether_addr(addr))
···
     netif_keep_dst(dev);
     dev->priv_flags        |= IFF_UNICAST_FLT;
     dev->netdev_ops        = &macvlan_netdev_ops;
-    dev->destructor        = free_netdev;
+    dev->needs_free_netdev    = true;
     dev->header_ops        = &macvlan_hard_header_ops;
     dev->ethtool_ops    = &macvlan_ethtool_ops;
 }
···
     if (port == NULL)
         return -ENOMEM;
 
-    port->passthru = false;
     port->dev = dev;
+    ether_addr_copy(port->perm_addr, dev->dev_addr);
     INIT_LIST_HEAD(&port->vlans);
     for (i = 0; i < MACVLAN_HASH_SIZE; i++)
         INIT_HLIST_HEAD(&port->vlan_hash[i]);
···
         dev_put(src->dev);
 
         kfree_skb(skb);
     }
+
+    /* If the lower device address has been changed by passthru
+     * macvlan, put it back.
+     */
+    if (macvlan_passthru(port) &&
+        !ether_addr_equal(port->dev->dev_addr, port->perm_addr)) {
+        struct sockaddr sa;
+
+        sa.sa_family = port->dev->type;
+        memcpy(&sa.sa_data, port->perm_addr, port->dev->addr_len);
+        dev_set_mac_address(port->dev, &sa);
+    }
 
     kfree(port);
···
     port = macvlan_port_get_rtnl(lowerdev);
 
     /* Only 1 macvlan device can be created in passthru mode */
-    if (port->passthru) {
+    if (macvlan_passthru(port)) {
         /* The macvlan port must be not created this time,
          * still goto destroy_macvlan_port for readability.
          */
···
         err = -EINVAL;
         goto destroy_macvlan_port;
     }
-    port->passthru = true;
+    macvlan_set_passthru(port);
     eth_hw_addr_inherit(dev, lowerdev);
 }
···
     if (data && data[IFLA_MACVLAN_FLAGS]) {
         __u16 flags = nla_get_u16(data[IFLA_MACVLAN_FLAGS]);
         bool promisc = (flags ^ vlan->flags) & MACVLAN_FLAG_NOPROMISC;
-        if (vlan->port->passthru && promisc) {
+        if (macvlan_passthru(vlan->port) && promisc) {
             int err;
 
             if (flags & MACVLAN_FLAG_NOPROMISC)
···
         }
         break;
     case NETDEV_CHANGEADDR:
-        if (!port->passthru)
+        if (!macvlan_passthru(port))
             return NOTIFY_DONE;
 
         vlan = list_first_entry_or_null(&port->vlans,
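The macvlan rework replaces a single bool with a flags word plus inline accessors: once a second boolean (addr_change) is needed, a u32 bitmask keeps the struct compact and the call sites readable. A minimal standalone sketch of the same accessor style, with port and the flag names as illustrative stand-ins:

#include <stdint.h>
#include <stdbool.h>

#define PORT_F_PASSTHRU        1u
#define PORT_F_ADDRCHANGE    2u

struct port { uint32_t flags; };

static inline bool port_passthru(const struct port *p)
{
    return p->flags & PORT_F_PASSTHRU;
}

static inline void port_set_passthru(struct port *p)
{
    p->flags |= PORT_F_PASSTHRU;
}

static inline void port_clear_addr_change(struct port *p)
{
    p->flags &= ~PORT_F_ADDRCHANGE;
}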
+1-1
drivers/net/netconsole.c
···358358 if (err)359359 goto out_unlock;360360361361- pr_info("netconsole: network logging started\n");361361+ pr_info("network logging started\n");362362 } else { /* false */363363 /* We need to disable the netconsole before cleaning it up364364 * otherwise we might end up in write_msg() with
···127127 tristate "ThunderX SOCs MDIO buses"128128 depends on 64BIT129129 depends on PCI130130+ depends on !(MDIO_DEVICE=y && PHYLIB=m)130131 select MDIO_CAVIUM131132 help132133 This driver supports the MDIO interfaces found on Cavium
···658658 return 0;659659}660660661661+static int mdio_uevent(struct device *dev, struct kobj_uevent_env *env)662662+{663663+ int rc;664664+665665+ /* Some devices have extra OF data and an OF-style MODALIAS */666666+ rc = of_device_uevent_modalias(dev, env);667667+ if (rc != -ENODEV)668668+ return rc;669669+670670+ return 0;671671+}672672+661673#ifdef CONFIG_PM662674static int mdio_bus_suspend(struct device *dev)663675{···720708struct bus_type mdio_bus_type = {721709 .name = "mdio_bus",722710 .match = mdio_bus_match,711711+ .uevent = mdio_uevent,723712 .pm = MDIO_BUS_PM_OPS,724713};725714EXPORT_SYMBOL(mdio_bus_type);
+30-14
drivers/net/phy/micrel.c
···268268 return ret;269269}270270271271+/* Some config bits need to be set again on resume, handle them here. */272272+static int kszphy_config_reset(struct phy_device *phydev)273273+{274274+ struct kszphy_priv *priv = phydev->priv;275275+ int ret;276276+277277+ if (priv->rmii_ref_clk_sel) {278278+ ret = kszphy_rmii_clk_sel(phydev, priv->rmii_ref_clk_sel_val);279279+ if (ret) {280280+ phydev_err(phydev,281281+ "failed to set rmii reference clock\n");282282+ return ret;283283+ }284284+ }285285+286286+ if (priv->led_mode >= 0)287287+ kszphy_setup_led(phydev, priv->type->led_mode_reg, priv->led_mode);288288+289289+ return 0;290290+}291291+271292static int kszphy_config_init(struct phy_device *phydev)272293{273294 struct kszphy_priv *priv = phydev->priv;274295 const struct kszphy_type *type;275275- int ret;276296277297 if (!priv)278298 return 0;···305285 if (type->has_nand_tree_disable)306286 kszphy_nand_tree_disable(phydev);307287308308- if (priv->rmii_ref_clk_sel) {309309- ret = kszphy_rmii_clk_sel(phydev, priv->rmii_ref_clk_sel_val);310310- if (ret) {311311- phydev_err(phydev,312312- "failed to set rmii reference clock\n");313313- return ret;314314- }315315- }316316-317317- if (priv->led_mode >= 0)318318- kszphy_setup_led(phydev, type->led_mode_reg, priv->led_mode);319319-320320- return 0;288288+ return kszphy_config_reset(phydev);321289}322290323291static int ksz8041_config_init(struct phy_device *phydev)···619611 if ((regval & 0xFF) == 0xFF) {620612 phy_init_hw(phydev);621613 phydev->link = 0;614614+ if (phydev->drv->config_intr && phy_interrupt_is_valid(phydev))615615+ phydev->drv->config_intr(phydev);622616 }623617624618 return 0;···710700711701static int kszphy_resume(struct phy_device *phydev)712702{703703+ int ret;704704+713705 genphy_resume(phydev);706706+707707+ ret = kszphy_config_reset(phydev);708708+ if (ret)709709+ return ret;714710715711 /* Enable PHY Interrupts */716712 if (phy_interrupt_is_valid(phydev)) {
+3-1
drivers/net/phy/phy.c
···5454 return "5Gbps";5555 case SPEED_10000:5656 return "10Gbps";5757+ case SPEED_14000:5858+ return "14Gbps";5759 case SPEED_20000:5860 return "20Gbps";5961 case SPEED_25000:···243241 * phy_lookup_setting - lookup a PHY setting244242 * @speed: speed to match245243 * @duplex: duplex to match246246- * @feature: allowed link modes244244+ * @features: allowed link modes247245 * @exact: an exact match is required248246 *249247 * Search the settings array for a setting that matches the speed and
+3-4
drivers/net/slip/slip.c
···629629static void sl_free_netdev(struct net_device *dev)630630{631631 int i = dev->base_addr;632632- free_netdev(dev);632632+633633 slip_devs[i] = NULL;634634}635635···651651static void sl_setup(struct net_device *dev)652652{653653 dev->netdev_ops = &sl_netdev_ops;654654- dev->destructor = sl_free_netdev;654654+ dev->needs_free_netdev = true;655655+ dev->priv_destructor = sl_free_netdev;655656656657 dev->hard_header_len = 0;657658 dev->addr_len = 0;···13701369 if (sl->tty) {13711370 printk(KERN_ERR "%s: tty discipline still running\n",13721371 dev->name);13731373- /* Intentionally leak the control block. */13741374- dev->destructor = NULL;13751372 }1376137313771374 unregister_netdev(dev);
···1149114911501150 mutex_lock(&mvm->mutex);1151115111521152- /* stop recording */11531152 if (mvm->cfg->device_family == IWL_DEVICE_FAMILY_7000) {11531153+ /* stop recording */11541154 iwl_set_bits_prph(mvm->trans, MON_BUFF_SAMPLE_CTL, 0x100);11551155+11561156+ iwl_mvm_fw_error_dump(mvm);11571157+11581158+ /* start recording again if the firmware is not crashed */11591159+ if (!test_bit(STATUS_FW_ERROR, &mvm->trans->status) &&11601160+ mvm->fw->dbg_dest_tlv)11611161+ iwl_clear_bits_prph(mvm->trans,11621162+ MON_BUFF_SAMPLE_CTL, 0x100);11551163 } else {11641164+ u32 in_sample = iwl_read_prph(mvm->trans, DBGC_IN_SAMPLE);11651165+ u32 out_ctrl = iwl_read_prph(mvm->trans, DBGC_OUT_CTRL);11661166+11671167+ /* stop recording */11561168 iwl_write_prph(mvm->trans, DBGC_IN_SAMPLE, 0);11571157- /* wait before we collect the data till the DBGC stop */11581169 udelay(100);11701170+ iwl_write_prph(mvm->trans, DBGC_OUT_CTRL, 0);11711171+ /* wait before we collect the data till the DBGC stop */11721172+ udelay(500);11731173+11741174+ iwl_mvm_fw_error_dump(mvm);11751175+11761176+ /* start recording again if the firmware is not crashed */11771177+ if (!test_bit(STATUS_FW_ERROR, &mvm->trans->status) &&11781178+ mvm->fw->dbg_dest_tlv) {11791179+ iwl_write_prph(mvm->trans, DBGC_IN_SAMPLE, in_sample);11801180+ iwl_write_prph(mvm->trans, DBGC_OUT_CTRL, out_ctrl);11811181+ }11591182 }11601160-11611161- iwl_mvm_fw_error_dump(mvm);11621162-11631163- /* start recording again if the firmware is not crashed */11641164- WARN_ON_ONCE((!test_bit(STATUS_FW_ERROR, &mvm->trans->status)) &&11651165- mvm->fw->dbg_dest_tlv &&11661166- iwl_mvm_start_fw_dbg_conf(mvm, mvm->fw_dbg_conf));1167118311681184 mutex_unlock(&mvm->mutex);11691185
+11-35
drivers/net/wireless/intel/iwlwifi/mvm/rs.c
···22 *33 * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.44 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH55- * Copyright(c) 2016 Intel Deutschland GmbH55+ * Copyright(c) 2016 - 2017 Intel Deutschland GmbH66 *77 * This program is free software; you can redistribute it and/or modify it88 * under the terms of version 2 of the GNU General Public License as···10831083 rs_get_lower_rate_in_column(lq_sta, rate);10841084}1085108510861086-/* Check if both rates are identical10871087- * allow_ant_mismatch enables matching a SISO rate on ANT_A or ANT_B10881088- * with a rate indicating STBC/BFER and ANT_AB.10891089- */10901090-static inline bool rs_rate_equal(struct rs_rate *a,10911091- struct rs_rate *b,10921092- bool allow_ant_mismatch)10931093-10941094-{10951095- bool ant_match = (a->ant == b->ant) && (a->stbc == b->stbc) &&10961096- (a->bfer == b->bfer);10971097-10981098- if (allow_ant_mismatch) {10991099- if (a->stbc || a->bfer) {11001100- WARN_ONCE(a->ant != ANT_AB, "stbc %d bfer %d ant %d",11011101- a->stbc, a->bfer, a->ant);11021102- ant_match |= (b->ant == ANT_A || b->ant == ANT_B);11031103- } else if (b->stbc || b->bfer) {11041104- WARN_ONCE(b->ant != ANT_AB, "stbc %d bfer %d ant %d",11051105- b->stbc, b->bfer, b->ant);11061106- ant_match |= (a->ant == ANT_A || a->ant == ANT_B);11071107- }11081108- }11091109-11101110- return (a->type == b->type) && (a->bw == b->bw) && (a->sgi == b->sgi) &&11111111- (a->ldpc == b->ldpc) && (a->index == b->index) && ant_match;11121112-}11131113-11141086/* Check if both rates share the same column */11151087static inline bool rs_rate_column_match(struct rs_rate *a,11161088 struct rs_rate *b)···11541182 u32 lq_hwrate;11551183 struct rs_rate lq_rate, tx_resp_rate;11561184 struct iwl_scale_tbl_info *curr_tbl, *other_tbl, *tmp_tbl;11571157- u8 reduced_txp = (uintptr_t)info->status.status_driver_data[0];11851185+ u32 tlc_info = (uintptr_t)info->status.status_driver_data[0];11861186+ u8 reduced_txp = tlc_info & RS_DRV_DATA_TXP_MSK;11871187+ u8 lq_color = RS_DRV_DATA_LQ_COLOR_GET(tlc_info);11581188 u32 tx_resp_hwrate = (uintptr_t)info->status.status_driver_data[1];11591189 struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);11601190 struct iwl_lq_sta *lq_sta = &mvmsta->lq_sta;11611161- bool allow_ant_mismatch = fw_has_api(&mvm->fw->ucode_capa,11621162- IWL_UCODE_TLV_API_LQ_SS_PARAMS);1163119111641192 /* Treat uninitialized rate scaling data same as non-existing. */11651193 if (!lq_sta) {···12341262 rs_rate_from_ucode_rate(lq_hwrate, info->band, &lq_rate);1235126312361264 /* Here we actually compare this rate to the latest LQ command */12371237- if (!rs_rate_equal(&tx_resp_rate, &lq_rate, allow_ant_mismatch)) {12651265+ if (lq_color != LQ_FLAG_COLOR_GET(table->flags)) {12381266 IWL_DEBUG_RATE(mvm,12391239- "initial tx resp rate 0x%x does not match 0x%x\n",12401240- tx_resp_hwrate, lq_hwrate);12671267+ "tx resp color 0x%x does not match 0x%x\n",12681268+ lq_color, LQ_FLAG_COLOR_GET(table->flags));1241126912421270 /*12431271 * Since rates mis-match, the last LQ command may have failed.···32983326 u8 valid_tx_ant = 0;32993327 struct iwl_lq_cmd *lq_cmd = &lq_sta->lq;33003328 bool toggle_ant = false;33293329+ u32 color;3301333033023331 memcpy(&rate, initial_rate, sizeof(rate));33033332···33533380 num_rates, num_retries, valid_tx_ant,33543381 toggle_ant);3355338233833383+ /* update the color of the LQ command (as a counter at bits 1-3) */33843384+ color = LQ_FLAGS_COLOR_INC(LQ_FLAG_COLOR_GET(lq_cmd->flags));33853385+ lq_cmd->flags = LQ_FLAG_COLOR_SET(lq_cmd->flags, color);33563386}3357338733583388struct rs_bfer_active_iter_data {
+15
drivers/net/wireless/intel/iwlwifi/mvm/rs.h
···22 *33 * Copyright(c) 2003 - 2014 Intel Corporation. All rights reserved.44 * Copyright(c) 2015 Intel Mobile Communications GmbH55+ * Copyright(c) 2017 Intel Deutschland GmbH56 *67 * This program is free software; you can redistribute it and/or modify it78 * under the terms of version 2 of the GNU General Public License as···357356 struct iwl_mvm *drv;358357 } pers;359358};359359+360360+/* ieee80211_tx_info's status_driver_data[0] is packed with lq color and txp361361+ * Note, it's iwlmvm <-> mac80211 interface.362362+ * bits 0-7: reduced tx power363363+ * bits 8-10: LQ command's color364364+ */365365+#define RS_DRV_DATA_TXP_MSK 0xff366366+#define RS_DRV_DATA_LQ_COLOR_POS 8367367+#define RS_DRV_DATA_LQ_COLOR_MSK (7 << RS_DRV_DATA_LQ_COLOR_POS)368368+#define RS_DRV_DATA_LQ_COLOR_GET(_f) (((_f) & RS_DRV_DATA_LQ_COLOR_MSK) >>\369369+ RS_DRV_DATA_LQ_COLOR_POS)370370+#define RS_DRV_DATA_PACK(_c, _p) ((void *)(uintptr_t)\371371+ (((uintptr_t)_p) |\372372+ ((_c) << RS_DRV_DATA_LQ_COLOR_POS)))360373361374/* Initialize station's rate scaling information after adding station */362375void iwl_mvm_rs_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
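The RS_DRV_DATA_* macros above squeeze two fields into the single pointer-sized status_driver_data[0] slot that rs.c unpacks in the previous hunk: the reduced tx power in bits 0-7 and the LQ command color in bits 8-10. A standalone round trip of that packing (userspace sketch; the constants mirror the header, the pack() helper name is hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    #define TXP_MSK       0xffu
    #define LQ_COLOR_POS  8
    #define LQ_COLOR_MSK  (7u << LQ_COLOR_POS)

    /* Pack color and reduced txp into one pointer-sized word, the way
     * RS_DRV_DATA_PACK() builds status_driver_data[0]. */
    static void *pack(uintptr_t color, uintptr_t txp)
    {
        return (void *)(txp | (color << LQ_COLOR_POS));
    }

    int main(void)
    {
        uintptr_t v = (uintptr_t)pack(5, 0x2a);

        /* prints: txp=0x2a color=5 */
        printf("txp=0x%x color=%u\n",
               (unsigned)(v & TXP_MSK),
               (unsigned)((v & LQ_COLOR_MSK) >> LQ_COLOR_POS));
        return 0;
    }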
+17-9
drivers/net/wireless/intel/iwlwifi/mvm/sta.c
···21202120 if (!iwl_mvm_is_dqa_supported(mvm))21212121 return 0;2122212221232123- if (WARN_ON(vif->type != NL80211_IFTYPE_AP))21232123+ if (WARN_ON(vif->type != NL80211_IFTYPE_AP &&21242124+ vif->type != NL80211_IFTYPE_ADHOC))21242125 return -ENOTSUPP;2125212621262127 /*···21562155 mvmvif->cab_queue = queue;21572156 } else if (!fw_has_api(&mvm->fw->ucode_capa,21582157 IWL_UCODE_TLV_API_STA_TYPE)) {21582158+ /*21592159+ * In IBSS, ieee80211_check_queues() sets the cab_queue to be21602160+ * invalid, so make sure we use the queue we want.21612161+ * Note that this is done here as we want to avoid making DQA21622162+ * changes in mac80211 layer.21632163+ */21642164+ if (vif->type == NL80211_IFTYPE_ADHOC) {21652165+ vif->cab_queue = IWL_MVM_DQA_GCAST_QUEUE;21662166+ mvmvif->cab_queue = vif->cab_queue;21672167+ }21592168 iwl_mvm_enable_txq(mvm, vif->cab_queue, vif->cab_queue, 0,21602169 &cfg, timeout);21612170 }···3332332133333322 /* Get the station from the mvm local station table */33343323 mvm_sta = iwl_mvm_get_key_sta(mvm, vif, sta);33353335- if (!mvm_sta) {33363336- IWL_ERR(mvm, "Failed to find station\n");33373337- return -EINVAL;33383338- }33393339- sta_id = mvm_sta->sta_id;33243324+ if (mvm_sta)33253325+ sta_id = mvm_sta->sta_id;3340332633413327 IWL_DEBUG_WEP(mvm, "mvm remove dynamic key: idx=%d sta=%d\n",33423328 keyconf->keyidx, sta_id);3343332933443344- if (keyconf->cipher == WLAN_CIPHER_SUITE_AES_CMAC ||33453345- keyconf->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_128 ||33463346- keyconf->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_256)33303330+ if (mvm_sta && (keyconf->cipher == WLAN_CIPHER_SUITE_AES_CMAC ||33313331+ keyconf->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_128 ||33323332+ keyconf->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_256))33473333 return iwl_mvm_send_sta_igtk(mvm, keyconf, sta_id, true);3348333433493335 if (!__test_and_clear_bit(keyconf->hw_key_idx, mvm->fw_key_table)) {
+2
drivers/net/wireless/intel/iwlwifi/mvm/sta.h
···313313 * This is basically (last acked packet++).314314 * @rate_n_flags: Rate at which Tx was attempted. Holds the data between the315315 * Tx response (TX_CMD), and the block ack notification (COMPRESSED_BA).316316+ * @lq_color: the color of the LQ command as it appears in tx response.316317 * @amsdu_in_ampdu_allowed: true if A-MSDU in A-MPDU is allowed.317318 * @state: state of the BA agreement establishment / tear down.318319 * @txq_id: Tx queue used by the BA session / DQA···332331 u16 next_reclaimed;333332 /* The rest is Tx AGG related */334333 u32 rate_n_flags;334334+ u8 lq_color;335335 bool amsdu_in_ampdu_allowed;336336 enum iwl_mvm_agg_state state;337337 u16 txq_id;
+5-3
drivers/net/wireless/intel/iwlwifi/mvm/tt.c
···790790 struct iwl_mvm *mvm = (struct iwl_mvm *)(cdev->devdata);791791 int ret;792792793793- if (!mvm->ucode_loaded || !(mvm->cur_ucode == IWL_UCODE_REGULAR))794794- return -EIO;795795-796793 mutex_lock(&mvm->mutex);794794+795795+ if (!mvm->ucode_loaded || !(mvm->cur_ucode == IWL_UCODE_REGULAR)) {796796+ ret = -EIO;797797+ goto unlock;798798+ }797799798800 if (new_state >= ARRAY_SIZE(iwl_mvm_cdev_budgets)) {799801 ret = -EINVAL;
···106106107107 if (work_done < budget) {108108 napi_complete_done(napi, work_done);109109- xenvif_napi_schedule_or_enable_events(queue);109109+ /* If the queue is rate-limited, it shall be110110+ * rescheduled in the timer callback.111111+ */112112+ if (likely(!queue->rate_limited))113113+ xenvif_napi_schedule_or_enable_events(queue);110114 }111115112116 return work_done;
+5-1
drivers/net/xen-netback/netback.c
···180180 max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */181181182182 queue->remaining_credit = min(max_credit, max_burst);183183+ queue->rate_limited = false;183184}184185185186void xenvif_tx_credit_callback(unsigned long data)···687686 msecs_to_jiffies(queue->credit_usec / 1000);688687689688 /* Timer could already be pending in rare cases. */690690- if (timer_pending(&queue->credit_timeout))689689+ if (timer_pending(&queue->credit_timeout)) {690690+ queue->rate_limited = true;691691 return true;692692+ }692693693694 /* Passed the point where we can replenish credit? */694695 if (time_after_eq64(now, next_credit)) {···705702 mod_timer(&queue->credit_timeout,706703 next_credit);707704 queue->credit_window_start = next_credit;705705+ queue->rate_limited = true;708706709707 return true;710708 }
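Taken together, the two xen-netback hunks form a small token bucket: the credit check flags the queue as rate_limited whenever the replenish timer defers it, and the NAPI poll hunk earlier skips rescheduling so the timer callback alone restarts a rate-limited queue. A simplified userspace model of that flow (all names hypothetical, the timer is modeled as a direct call):

    #include <stdbool.h>
    #include <stdio.h>

    /* If the bucket can't cover the packet, defer it and mark the queue
     * rate-limited so the poll loop won't busy-reschedule; the (modeled)
     * timer refill clears the flag. */
    struct queue {
        unsigned long remaining_credit;
        bool rate_limited;
    };

    static bool tx_over_credit(struct queue *q, unsigned long pkt_size)
    {
        if (pkt_size > q->remaining_credit) {
            q->rate_limited = true;   /* timer callback will reschedule */
            return true;
        }
        q->remaining_credit -= pkt_size;
        return false;
    }

    static void credit_timer_cb(struct queue *q, unsigned long refill)
    {
        q->remaining_credit = refill;
        q->rate_limited = false;      /* queue may be polled again */
    }

    int main(void)
    {
        struct queue q = { .remaining_credit = 1000, .rate_limited = false };

        printf("deferred=%d\n", tx_over_credit(&q, 1500)); /* deferred=1 */
        credit_timer_cb(&q, 1000);
        printf("deferred=%d\n", tx_over_credit(&q, 800));  /* deferred=0 */
        return 0;
    }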
···90909191static unsigned int seg_order = 19; /* 512K */9292module_param(seg_order, uint, 0644);9393-MODULE_PARM_DESC(seg_order, "size order [n^2] of buffer segment for testing");9393+MODULE_PARM_DESC(seg_order, "size order [2^n] of buffer segment for testing");94949595static unsigned int run_order = 32; /* 4G */9696module_param(run_order, uint, 0644);9797-MODULE_PARM_DESC(run_order, "size order [n^2] of total data to transfer");9797+MODULE_PARM_DESC(run_order, "size order [2^n] of total data to transfer");98989999static bool use_dma; /* default to 0 */100100module_param(use_dma, bool, 0644);
+14-7
drivers/nvme/host/core.c
···5656static int nvme_char_major;5757module_param(nvme_char_major, int, 0);58585959-static unsigned long default_ps_max_latency_us = 25000;5959+static unsigned long default_ps_max_latency_us = 100000;6060module_param(default_ps_max_latency_us, ulong, 0644);6161MODULE_PARM_DESC(default_ps_max_latency_us,6262 "max power saving latency for new devices; use PM QOS to change per device");···13421342 * transitioning between power states. Therefore, when running13431343 * in any given state, we will enter the next lower-power13441344 * non-operational state after waiting 50 * (enlat + exlat)13451345- * microseconds, as long as that state's total latency is under13451345+ * microseconds, as long as that state's exit latency is under13461346 * the requested maximum latency.13471347 *13481348 * We will not autonomously enter any non-operational state for···13871387 * lowest-power state, not the number of states.13881388 */13891389 for (state = (int)ctrl->npss; state >= 0; state--) {13901390- u64 total_latency_us, transition_ms;13901390+ u64 total_latency_us, exit_latency_us, transition_ms;1391139113921392 if (target)13931393 table->entries[state] = target;···14081408 NVME_PS_FLAGS_NON_OP_STATE))14091409 continue;1410141014111411- total_latency_us =14121412- (u64)le32_to_cpu(ctrl->psd[state].entry_lat) +14131413- + le32_to_cpu(ctrl->psd[state].exit_lat);14141414- if (total_latency_us > ctrl->ps_max_latency_us)14111411+ exit_latency_us =14121412+ (u64)le32_to_cpu(ctrl->psd[state].exit_lat);14131413+ if (exit_latency_us > ctrl->ps_max_latency_us)14151414 continue;14151415+14161416+ total_latency_us =14171417+ exit_latency_us +14181418+ le32_to_cpu(ctrl->psd[state].entry_lat);1416141914171420 /*14181421 * This state is good. Use it as the APST idle···24412438 struct nvme_ns *ns;2442243924432440 mutex_lock(&ctrl->namespaces_mutex);24412441+24422442+ /* Forcibly start all queues to avoid having stuck requests */24432443+ blk_mq_start_hw_queues(ctrl->admin_q);24442444+24442445 list_for_each_entry(ns, &ctrl->namespaces, list) {24452446 /*24462447 * Revalidating a dead namespace sets capacity to 0. This will
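The APST change above relaxes the admission test: a non-operational power state now qualifies if its exit latency alone fits under ps_max_latency_us, while the idle-timeout heuristic still charges 50 * (enlat + exlat). A worked example of that arithmetic with hypothetical latencies (not taken from any real controller):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical non-operational power state. */
        uint64_t entry_lat_us  = 10000;    /* enlat */
        uint64_t exit_lat_us   = 60000;    /* exlat */
        uint64_t max_latency_us = 100000;  /* default_ps_max_latency_us */

        /* New rule: only the exit latency must stay under the cap... */
        int eligible = exit_lat_us <= max_latency_us;

        /* ...while the idle timer still uses entry + exit latency. */
        uint64_t transition_ms = 50 * (entry_lat_us + exit_lat_us) / 1000;

        /* A state with enlat=50000/exlat=60000 fails the old total-latency
         * rule (110ms > 100ms) yet passes the new exit-only rule. */
        printf("eligible=%d, idle after %llu ms\n",
               eligible, (unsigned long long)transition_ms);
        return 0;
    }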
+18-2
drivers/nvme/host/fc.c
···11391139/* *********************** NVME Ctrl Routines **************************** */1140114011411141static void __nvme_fc_final_op_cleanup(struct request *rq);11421142+static void nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg);1142114311431144static int11441145nvme_fc_reinit_request(void *data, struct request *rq)···12661265 struct nvme_command *sqe = &op->cmd_iu.sqe;12671266 __le16 status = cpu_to_le16(NVME_SC_SUCCESS << 1);12681267 union nvme_result result;12691269- bool complete_rq;12681268+ bool complete_rq, terminate_assoc = true;1270126912711270 /*12721271 * WARNING:···12951294 * fabricate a CQE, the following fields will not be set as they12961295 * are not referenced:12971296 * cqe.sqid, cqe.sqhd, cqe.command_id12971297+ *12981298+ * Failure or error of an individual i/o, in a transport12991299+ * detected fashion unrelated to the nvme completion status,13001300+ * can potentially cause the initiator and target sides to get out13011301+ * of sync on SQ head/tail (aka outstanding io count allowed).13021302+ * Per FC-NVME spec, failure of an individual command requires13031303+ * the connection to be terminated, which in turn requires the13041304+ * association to be terminated.12981305 */1299130613001307 fc_dma_sync_single_for_cpu(ctrl->lport->dev, op->fcp_req.rspdma,···13681359 goto done;13691360 }1370136113621362+ terminate_assoc = false;13631363+13711364done:13721365 if (op->flags & FCOP_FLAGS_AEN) {13731366 nvme_complete_async_event(&queue->ctrl->ctrl, status, &result);···13771366 atomic_set(&op->state, FCPOP_STATE_IDLE);13781367 op->flags = FCOP_FLAGS_AEN; /* clear other flags */13791368 nvme_fc_ctrl_put(ctrl);13801380- return;13691369+ goto check_error;13811370 }1382137113831372 complete_rq = __nvme_fc_fcpop_chk_teardowns(ctrl, op);···13901379 nvme_end_request(rq, status, result);13911380 } else13921381 __nvme_fc_final_op_cleanup(rq);13821382+13831383+check_error:13841384+ if (terminate_assoc)13851385+ nvme_fc_error_recovery(ctrl, "transport detected io error");13931386}1394138713951388static int···28062791 ctrl->ctrl.opts = NULL;28072792 /* initiate nvme ctrl ref counting teardown */28082793 nvme_uninit_ctrl(&ctrl->ctrl);27942794+ nvme_put_ctrl(&ctrl->ctrl);2809279528102796 /* as we're past the point where we transition to the ref28112797 * counting teardown path, if we return a bad pointer here,
+8-8
drivers/nvme/host/pci.c
···13671367 bool nssro = dev->subsystem && (csts & NVME_CSTS_NSSRO);1368136813691369 /* If there is a reset ongoing, we shouldn't reset again. */13701370- if (work_busy(&dev->reset_work))13701370+ if (dev->ctrl.state == NVME_CTRL_RESETTING)13711371 return false;1372137213731373 /* We shouldn't reset unless the controller is on fatal error state···18051805 if (pci_is_enabled(pdev)) {18061806 u32 csts = readl(dev->bar + NVME_REG_CSTS);1807180718081808- if (dev->ctrl.state == NVME_CTRL_LIVE)18081808+ if (dev->ctrl.state == NVME_CTRL_LIVE ||18091809+ dev->ctrl.state == NVME_CTRL_RESETTING)18091810 nvme_start_freeze(&dev->ctrl);18101811 dead = !!((csts & NVME_CSTS_CFS) || !(csts & NVME_CSTS_RDY) ||18111812 pdev->error_state != pci_channel_io_normal);···19041903 bool was_suspend = !!(dev->ctrl.ctrl_config & NVME_CC_SHN_NORMAL);19051904 int result = -ENODEV;1906190519071907- if (WARN_ON(dev->ctrl.state == NVME_CTRL_RESETTING))19061906+ if (WARN_ON(dev->ctrl.state != NVME_CTRL_RESETTING))19081907 goto out;1909190819101909 /*···19131912 */19141913 if (dev->ctrl.ctrl_config & NVME_CC_ENABLE)19151914 nvme_dev_disable(dev, false);19161916-19171917- if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING))19181918- goto out;1919191519201916 result = nvme_pci_enable(dev);19211917 if (result)···20072009{20082010 if (!dev->ctrl.admin_q || blk_queue_dying(dev->ctrl.admin_q))20092011 return -ENODEV;20102010- if (work_busy(&dev->reset_work))20112011- return -ENODEV;20122012+ if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING))20132013+ return -EBUSY;20122014 if (!queue_work(nvme_workq, &dev->reset_work))20132015 return -EBUSY;20142016 return 0;···21342136 if (result)21352137 goto release_pools;2136213821392139+ nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING);21372140 dev_info(dev->ctrl.device, "pci function %s\n", dev_name(&pdev->dev));2138214121392142 queue_work(nvme_workq, &dev->reset_work);···2178217921792180 nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING);2180218121822182+ cancel_work_sync(&dev->reset_work);21812183 pci_set_drvdata(pdev, NULL);2182218421832185 if (!pci_device_is_present(pdev)) {
+29-15
drivers/nvme/host/rdma.c
···753753 if (ret)754754 goto requeue;755755756756- blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);757757-758756 ret = nvmf_connect_admin_queue(&ctrl->ctrl);759757 if (ret)760760- goto stop_admin_q;758758+ goto requeue;761759762760 set_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[0].flags);763761764762 ret = nvme_enable_ctrl(&ctrl->ctrl, ctrl->cap);765763 if (ret)766766- goto stop_admin_q;764764+ goto requeue;767765768766 nvme_start_keep_alive(&ctrl->ctrl);769767770768 if (ctrl->queue_count > 1) {771769 ret = nvme_rdma_init_io_queues(ctrl);772770 if (ret)773773- goto stop_admin_q;771771+ goto requeue;774772775773 ret = nvme_rdma_connect_io_queues(ctrl);776774 if (ret)777777- goto stop_admin_q;775775+ goto requeue;778776 }779777780778 changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);···780782 ctrl->ctrl.opts->nr_reconnects = 0;781783782784 if (ctrl->queue_count > 1) {783783- nvme_start_queues(&ctrl->ctrl);784785 nvme_queue_scan(&ctrl->ctrl);785786 nvme_queue_async_events(&ctrl->ctrl);786787 }···788791789792 return;790793791791-stop_admin_q:792792- blk_mq_stop_hw_queues(ctrl->ctrl.admin_q);793794requeue:794795 dev_info(ctrl->ctrl.device, "Failed reconnect attempt %d\n",795796 ctrl->ctrl.opts->nr_reconnects);···817822 nvme_cancel_request, &ctrl->ctrl);818823 blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,819824 nvme_cancel_request, &ctrl->ctrl);825825+826826+ /*827827+ * queues are not alive anymore, so restart the queues to fail fast828828+ * new IO829829+ */830830+ blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);831831+ nvme_start_queues(&ctrl->ctrl);820832821833 nvme_rdma_reconnect_or_remove(ctrl);822834}···14351433/*14361434 * We cannot accept any other command until the Connect command has completed.14371435 */14381438-static inline bool nvme_rdma_queue_is_ready(struct nvme_rdma_queue *queue,14361436+static inline int nvme_rdma_queue_is_ready(struct nvme_rdma_queue *queue,14391437 struct request *rq)14401438{14411439 if (unlikely(!test_bit(NVME_RDMA_Q_LIVE, &queue->flags))) {···1443144114441442 if (!blk_rq_is_passthrough(rq) ||14451443 cmd->common.opcode != nvme_fabrics_command ||14461446- cmd->fabrics.fctype != nvme_fabrics_type_connect)14471447- return false;14441444+ cmd->fabrics.fctype != nvme_fabrics_type_connect) {14451445+ /*14461446+ * Reconnecting state means transport disruption, which14471447+ * can take a long time and might even fail permanently,14481448+ * so we can't let incoming I/O be requeued forever.14491449+ * Fail it fast to allow upper layers a chance to14501450+ * failover.14511451+ */14521452+ if (queue->ctrl->ctrl.state == NVME_CTRL_RECONNECTING)14531453+ return -EIO;14541454+ else14551455+ return -EAGAIN;14561456+ }14481457 }1449145814501450- return true;14591459+ return 0;14511460}1452146114531462static int nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,···1476146314771464 WARN_ON_ONCE(rq->tag < 0);1478146514791479- if (!nvme_rdma_queue_is_ready(queue, rq))14801480- return BLK_MQ_RQ_QUEUE_BUSY;14661466+ ret = nvme_rdma_queue_is_ready(queue, rq);14671467+ if (unlikely(ret))14681468+ goto err;1481146914821470 dev = queue->device->dev;14831471 ib_dma_sync_single_for_cpu(dev, sqe->dma,
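The nvme_rdma_queue_is_ready() change above replaces the bool with an errno so the submission path can distinguish a transient stall (requeue with -EAGAIN) from a reconnect window where I/O should fail fast (-EIO) and let multipath take over. A condensed userspace sketch of that mapping (simplified state enum and names, Connect-command handling collapsed into one flag):

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified stand-ins for the controller state and queue flag. */
    enum ctrl_state { CTRL_LIVE, CTRL_RECONNECTING };

    struct queue {
        bool live;              /* models NVME_RDMA_Q_LIVE */
        enum ctrl_state state;
    };

    /* 0 = dispatch, -EAGAIN = requeue later, -EIO = fail fast so upper
     * layers can fail over, mirroring the patched logic. */
    static int queue_is_ready(const struct queue *q, bool is_connect_cmd)
    {
        if (q->live || is_connect_cmd)
            return 0;
        return q->state == CTRL_RECONNECTING ? -EIO : -EAGAIN;
    }

    int main(void)
    {
        struct queue q = { .live = false, .state = CTRL_RECONNECTING };

        printf("%d\n", queue_is_ready(&q, false)); /* -5 (-EIO): fail fast */
        q.state = CTRL_LIVE;
        printf("%d\n", queue_is_ready(&q, false)); /* -11 (-EAGAIN): requeue */
        return 0;
    }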
+2-2
drivers/of/device.c
···144144 coherent ? " " : " not ");145145146146 iommu = of_iommu_configure(dev, np);147147- if (IS_ERR(iommu))148148- return PTR_ERR(iommu);147147+ if (IS_ERR(iommu) && PTR_ERR(iommu) == -EPROBE_DEFER)148148+ return -EPROBE_DEFER;149149150150 dev_dbg(dev, "device is%sbehind an iommu\n",151151 iommu ? " " : " not ");
···55config PCI_EPF_TEST66 tristate "PCI Endpoint Test driver"77 depends on PCI_ENDPOINT88+ select CRC3289 help910 Enable this configuration option to enable the test driver1011 for PCI Endpoint.
+11
drivers/perf/arm_pmu_acpi.c
···2929 return -EINVAL;30303131 gsi = gicc->performance_interrupt;3232+3333+ /*3434+ * Per the ACPI spec, the MADT cannot describe a PMU that doesn't3535+ * have an interrupt. QEMU advertises this by using a GSI of zero,3636+ * which is not known to be valid on any hardware despite being3737+ * valid per the spec. Take the pragmatic approach and reject a3838+ * GSI of zero for now.3939+ */4040+ if (!gsi)4141+ return 0;4242+3243 if (gicc->flags & ACPI_MADT_PERFORMANCE_IRQ_MODE)3344 trigger = ACPI_EDGE_SENSITIVE;3445 else
+7-7
drivers/phy/phy-qcom-qmp.c
···844844 int num = qmp->cfg->num_vregs;845845 int i;846846847847- qmp->vregs = devm_kcalloc(dev, num, sizeof(qmp->vregs), GFP_KERNEL);847847+ qmp->vregs = devm_kcalloc(dev, num, sizeof(*qmp->vregs), GFP_KERNEL);848848 if (!qmp->vregs)849849 return -ENOMEM;850850···983983 * Resources are indexed as: tx -> 0; rx -> 1; pcs -> 2.984984 */985985 qphy->tx = of_iomap(np, 0);986986- if (IS_ERR(qphy->tx))987987- return PTR_ERR(qphy->tx);986986+ if (!qphy->tx)987987+ return -ENOMEM;988988989989 qphy->rx = of_iomap(np, 1);990990- if (IS_ERR(qphy->rx))991991- return PTR_ERR(qphy->rx);990990+ if (!qphy->rx)991991+ return -ENOMEM;992992993993 qphy->pcs = of_iomap(np, 2);994994- if (IS_ERR(qphy->pcs))995995- return PTR_ERR(qphy->pcs);994994+ if (!qphy->pcs)995995+ return -ENOMEM;996996997997 /*998998 * Get PHY's Pipe clock, if any. USB3 and PCIe are PIPE3
+3-17
drivers/pinctrl/core.c
···680680 * pinctrl_generic_free_groups() - removes all pin groups681681 * @pctldev: pin controller device682682 *683683- * Note that the caller must take care of locking.683683+ * Note that the caller must take care of locking. The pinctrl groups684684+ * are allocated with devm_kzalloc() so no need to free them here.684685 */685686static void pinctrl_generic_free_groups(struct pinctrl_dev *pctldev)686687{687688 struct radix_tree_iter iter;688688- struct group_desc *group;689689- unsigned long *indices;690689 void **slot;691691- int i = 0;692692-693693- indices = devm_kzalloc(pctldev->dev, sizeof(*indices) *694694- pctldev->num_groups, GFP_KERNEL);695695- if (!indices)696696- return;697690698691 radix_tree_for_each_slot(slot, &pctldev->pin_group_tree, &iter, 0)699699- indices[i++] = iter.index;700700-701701- for (i = 0; i < pctldev->num_groups; i++) {702702- group = radix_tree_lookup(&pctldev->pin_group_tree,703703- indices[i]);704704- radix_tree_delete(&pctldev->pin_group_tree, indices[i]);705705- devm_kfree(pctldev->dev, group);706706- }692692+ radix_tree_delete(&pctldev->pin_group_tree, iter.index);707693708694 pctldev->num_groups = 0;709695}
···495495 .flags = IRQCHIP_SKIP_SET_WAKE,496496};497497498498-static void amd_gpio_irq_handler(struct irq_desc *desc)498498+#define PIN_IRQ_PENDING (BIT(INTERRUPT_STS_OFF) | BIT(WAKE_STS_OFF))499499+500500+static irqreturn_t amd_gpio_irq_handler(int irq, void *dev_id)499501{500500- u32 i;501501- u32 off;502502- u32 reg;503503- u32 pin_reg;504504- u64 reg64;505505- int handled = 0;506506- unsigned int irq;502502+ struct amd_gpio *gpio_dev = dev_id;503503+ struct gpio_chip *gc = &gpio_dev->gc;504504+ irqreturn_t ret = IRQ_NONE;505505+ unsigned int i, irqnr;507506 unsigned long flags;508508- struct irq_chip *chip = irq_desc_get_chip(desc);509509- struct gpio_chip *gc = irq_desc_get_handler_data(desc);510510- struct amd_gpio *gpio_dev = gpiochip_get_data(gc);507507+ u32 *regs, regval;508508+ u64 status, mask;511509512512- chained_irq_enter(chip, desc);513513- /*enable GPIO interrupt again*/510510+ /* Read the wake status */514511 raw_spin_lock_irqsave(&gpio_dev->lock, flags);515515- reg = readl(gpio_dev->base + WAKE_INT_STATUS_REG1);516516- reg64 = reg;517517- reg64 = reg64 << 32;518518-519519- reg = readl(gpio_dev->base + WAKE_INT_STATUS_REG0);520520- reg64 |= reg;512512+ status = readl(gpio_dev->base + WAKE_INT_STATUS_REG1);513513+ status <<= 32;514514+ status |= readl(gpio_dev->base + WAKE_INT_STATUS_REG0);521515 raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);522516523523- /*524524- * first 46 bits indicates interrupt status.525525- * one bit represents four interrupt sources.526526- */527527- for (off = 0; off < 46 ; off++) {528528- if (reg64 & BIT(off)) {529529- for (i = 0; i < 4; i++) {530530- pin_reg = readl(gpio_dev->base +531531- (off * 4 + i) * 4);532532- if ((pin_reg & BIT(INTERRUPT_STS_OFF)) ||533533- (pin_reg & BIT(WAKE_STS_OFF))) {534534- irq = irq_find_mapping(gc->irqdomain,535535- off * 4 + i);536536- generic_handle_irq(irq);537537- writel(pin_reg,538538- gpio_dev->base539539- + (off * 4 + i) * 4);540540- handled++;541541- }542542- }517517+ /* Bits 0-45 contain the relevant status bits */518518+ status &= (1ULL << 46) - 1;519519+ regs = gpio_dev->base;520520+ for (mask = 1, irqnr = 0; status; mask <<= 1, regs += 4, irqnr += 4) {521521+ if (!(status & mask))522522+ continue;523523+ status &= ~mask;524524+525525+ /* Each status bit covers four pins */526526+ for (i = 0; i < 4; i++) {527527+ regval = readl(regs + i);528528+ if (!(regval & PIN_IRQ_PENDING))529529+ continue;530530+ irq = irq_find_mapping(gc->irqdomain, irqnr + i);531531+ generic_handle_irq(irq);532532+ /* Clear interrupt */533533+ writel(regval, regs + i);534534+ ret = IRQ_HANDLED;543535 }544536 }545537546546- if (handled == 0)547547- handle_bad_irq(desc);548548-538538+ /* Signal EOI to the GPIO unit */549539 raw_spin_lock_irqsave(&gpio_dev->lock, flags);550550- reg = readl(gpio_dev->base + WAKE_INT_MASTER_REG);551551- reg |= EOI_MASK;552552- writel(reg, gpio_dev->base + WAKE_INT_MASTER_REG);540540+ regval = readl(gpio_dev->base + WAKE_INT_MASTER_REG);541541+ regval |= EOI_MASK;542542+ writel(regval, gpio_dev->base + WAKE_INT_MASTER_REG);553543 raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);554544555555- chained_irq_exit(chip, desc);545545+ return ret;556546}557547558548static int amd_get_groups_count(struct pinctrl_dev *pctldev)···811821 goto out2;812822 }813823814814- gpiochip_set_chained_irqchip(&gpio_dev->gc,815815- &amd_gpio_irqchip,816816- irq_base,817817- amd_gpio_irq_handler);824824+ ret = devm_request_irq(&pdev->dev, irq_base, amd_gpio_irq_handler, 0,825825+ KBUILD_MODNAME, gpio_dev);826826+ if (ret)827827+ goto out2;828828+818829 platform_set_drvdata(pdev, gpio_dev);819830820831 dev_dbg(&pdev->dev, "amd gpio driver loaded\n");
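The rewritten handler above treats the two WAKE_INT_STATUS words as one 46-bit summary in which bit n covers pin registers 4n..4n+3; it walks the set bits, probes the four pins behind each, and only then signals EOI. The same two-level scan in a runnable userspace form (mock status word and pending check, all values hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    #define STATUS_BITS  46
    #define PINS_PER_BIT 4

    /* Mock: pretend pins 5 and 17 have a pending interrupt. */
    static int pin_pending(unsigned int pin)
    {
        return pin == 5 || pin == 17;
    }

    int main(void)
    {
        /* Status word as the handler would assemble it from the two
         * WAKE_INT_STATUS registers; bit n covers pins 4n..4n+3. */
        uint64_t status = (1ull << 1) | (1ull << 4); /* pins 4-7, 16-19 */
        uint64_t mask;
        unsigned int irqnr, i;

        status &= (1ull << STATUS_BITS) - 1;
        for (mask = 1, irqnr = 0; status; mask <<= 1, irqnr += PINS_PER_BIT) {
            if (!(status & mask))
                continue;
            status &= ~mask;
            for (i = 0; i < PINS_PER_BIT; i++)
                if (pin_pending(irqnr + i))
                    printf("handle pin %u\n", irqnr + i); /* pins 5 and 17 */
        }
        return 0;
    }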
+4-40
drivers/pinctrl/pinctrl-rockchip.c
···143143 * @gpio_chip: gpiolib chip144144 * @grange: gpio range145145 * @slock: spinlock for the gpio bank146146- * @irq_lock: bus lock for irq chip147147- * @new_irqs: newly configured irqs which must be muxed as GPIOs in148148- * irq_bus_sync_unlock()149146 */150147struct rockchip_pin_bank {151148 void __iomem *reg_base;···165168 struct pinctrl_gpio_range grange;166169 raw_spinlock_t slock;167170 u32 toggle_edge_mode;168168- struct mutex irq_lock;169169- u32 new_irqs;170171};171172172173#define PIN_BANK(id, pins, label) \···21292134 int ret;2130213521312136 /* make sure the pin is configured as gpio input */21322132- ret = rockchip_verify_mux(bank, d->hwirq, RK_FUNC_GPIO);21372137+ ret = rockchip_set_mux(bank, d->hwirq, RK_FUNC_GPIO);21332138 if (ret < 0)21342139 return ret;2135214021362136- bank->new_irqs |= mask;21372137-21412141+ clk_enable(bank->clk);21382142 raw_spin_lock_irqsave(&bank->slock, flags);2139214321402144 data = readl_relaxed(bank->reg_base + GPIO_SWPORT_DDR);···21912197 default:21922198 irq_gc_unlock(gc);21932199 raw_spin_unlock_irqrestore(&bank->slock, flags);22002200+ clk_disable(bank->clk);21942201 return -EINVAL;21952202 }21962203···2200220522012206 irq_gc_unlock(gc);22022207 raw_spin_unlock_irqrestore(&bank->slock, flags);22082208+ clk_disable(bank->clk);2203220922042210 return 0;22052211}···22412245 struct rockchip_pin_bank *bank = gc->private;2242224622432247 irq_gc_mask_set_bit(d);22442244- clk_disable(bank->clk);22452245-}22462246-22472247-static void rockchip_irq_bus_lock(struct irq_data *d)22482248-{22492249- struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);22502250- struct rockchip_pin_bank *bank = gc->private;22512251-22522252- clk_enable(bank->clk);22532253- mutex_lock(&bank->irq_lock);22542254-}22552255-22562256-static void rockchip_irq_bus_sync_unlock(struct irq_data *d)22572257-{22582258- struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);22592259- struct rockchip_pin_bank *bank = gc->private;22602260-22612261- while (bank->new_irqs) {22622262- unsigned int irq = __ffs(bank->new_irqs);22632263- int ret;22642264-22652265- ret = rockchip_set_mux(bank, irq, RK_FUNC_GPIO);22662266- WARN_ON(ret < 0);22672267-22682268- bank->new_irqs &= ~BIT(irq);22692269- }22702270-22712271- mutex_unlock(&bank->irq_lock);22722248 clk_disable(bank->clk);22732249}22742250···23102342 gc->chip_types[0].chip.irq_suspend = rockchip_irq_suspend;23112343 gc->chip_types[0].chip.irq_resume = rockchip_irq_resume;23122344 gc->chip_types[0].chip.irq_set_type = rockchip_irq_set_type;23132313- gc->chip_types[0].chip.irq_bus_lock = rockchip_irq_bus_lock;23142314- gc->chip_types[0].chip.irq_bus_sync_unlock =23152315- rockchip_irq_bus_sync_unlock;23162345 gc->wake_enabled = IRQ_MSK(bank->nr_pins);2317234623182347 irq_set_chained_handler_and_data(bank->irq,···24832518 int bank_pins = 0;2484251924852520 raw_spin_lock_init(&bank->slock);24862486- mutex_init(&bank->irq_lock);24872521 bank->drvdata = d;24882522 bank->pin_base = ctrl->nr_pins;24892523 ctrl->nr_pins += bank->nr_pins;
+4-17
drivers/pinctrl/pinmux.c
···826826 * pinmux_generic_free_functions() - removes all functions827827 * @pctldev: pin controller device828828 *829829- * Note that the caller must take care of locking.829829+ * Note that the caller must take care of locking. The pinctrl830830+ * functions are allocated with devm_kzalloc() so no need to free831831+ * them here.830832 */831833void pinmux_generic_free_functions(struct pinctrl_dev *pctldev)832834{833835 struct radix_tree_iter iter;834834- struct function_desc *function;835835- unsigned long *indices;836836 void **slot;837837- int i = 0;838838-839839- indices = devm_kzalloc(pctldev->dev, sizeof(*indices) *840840- pctldev->num_functions, GFP_KERNEL);841841- if (!indices)842842- return;843837844838 radix_tree_for_each_slot(slot, &pctldev->pin_function_tree, &iter, 0)845845- indices[i++] = iter.index;846846-847847- for (i = 0; i < pctldev->num_functions; i++) {848848- function = radix_tree_lookup(&pctldev->pin_function_tree,849849- indices[i]);850850- radix_tree_delete(&pctldev->pin_function_tree, indices[i]);851851- devm_kfree(pctldev->dev, function);852852- }839839+ radix_tree_delete(&pctldev->pin_function_tree, iter.index);853840854841 pctldev->num_functions = 0;855842}
+1-1
drivers/pinctrl/stm32/pinctrl-stm32.c
···798798 break;799799 case PIN_CONFIG_OUTPUT:800800 __stm32_gpio_set(bank, offset, arg);801801- ret = stm32_pmx_gpio_set_direction(pctldev, NULL, pin, false);801801+ ret = stm32_pmx_gpio_set_direction(pctldev, range, pin, false);802802 break;803803 default:804804 ret = -EINVAL;
···187187 CTPF_HAS_ATID, /* reserved atid */188188 CTPF_HAS_TID, /* reserved hw tid */189189 CTPF_OFFLOAD_DOWN, /* offload function off */190190+ CTPF_LOGOUT_RSP_RCVD, /* received logout response */190191};191192192193struct cxgbi_skb_rx_cb {···196195};197196198197struct cxgbi_skb_tx_cb {199199- void *l2t;198198+ void *handle;199199+ void *arp_err_handler;200200 struct sk_buff *wr_next;201201};202202···205203 SKCBF_TX_NEED_HDR, /* packet needs a header */206204 SKCBF_TX_MEM_WRITE, /* memory write */207205 SKCBF_TX_FLAG_COMPL, /* wr completion flag */206206+ SKCBF_TX_DONE, /* skb tx done */208207 SKCBF_RX_COALESCED, /* received whole pdu */209208 SKCBF_RX_HDR, /* received pdu header */210209 SKCBF_RX_DATA, /* received pdu payload */···218215};219216220217struct cxgbi_skb_cb {221221- unsigned char ulp_mode;222222- unsigned long flags;223223- unsigned int seq;224218 union {225219 struct cxgbi_skb_rx_cb rx;226220 struct cxgbi_skb_tx_cb tx;227221 };222222+ unsigned char ulp_mode;223223+ unsigned long flags;224224+ unsigned int seq;228225};229226230227#define CXGBI_SKB_CB(skb) ((struct cxgbi_skb_cb *)&((skb)->cb[0]))···377374 cxgbi_skcb_tx_wr_next(skb) = NULL;378375 /*379376 * We want to take an extra reference since both us and the driver380380- * need to free the packet before it's really freed. We know there's381381- * just one user currently so we use atomic_set rather than skb_get382382- * to avoid the atomic op.377377+ * need to free the packet before it's really freed.383378 */384384- atomic_set(&skb->users, 2);379379+ skb_get(skb);385380386381 if (!csk->wr_pending_head)387382 csk->wr_pending_head = skb;
···11701170 cmd = list_first_entry_or_null(&vscsi->free_cmd,11711171 struct ibmvscsis_cmd, list);11721172 if (cmd) {11731173+ if (cmd->abort_cmd)11741174+ cmd->abort_cmd = NULL;11731175 cmd->flags &= ~(DELAY_SEND);11741176 list_del(&cmd->list);11751177 cmd->iue = iue;···17761774 if (cmd->abort_cmd) {17771775 retry = true;17781776 cmd->abort_cmd->flags &= ~(DELAY_SEND);17771777+ cmd->abort_cmd = NULL;17791778 }1780177917811780 /*···17911788 list_del(&cmd->list);17921789 ibmvscsis_free_cmd_resources(vscsi,17931790 cmd);17911791+ /*17921792+ * With a successfully aborted op17931793+ * through LIO we want to increment17941794+ * the vscsi credit so that when we don't17951795+ * send a rsp to the original scsi abort17961796+ * op (h_send_crq), but the tm rsp to17971797+ * the abort is sent, the credit is17981798+ * correctly sent with the abort tm rsp.17991799+ * We would need 1 for the abort tm rsp18001800+ * and 1 credit for the aborted scsi op.18011801+ * Thus we need to increment here.18021802+ * Also we want to increment the credit18031803+ * here because we want to make sure the18041804+ * cmd is actually released first,18051805+ * otherwise the client will think it18061806+ * can send a new cmd, and we could18071807+ * find ourselves short of cmd elements.18081808+ */18091809+ vscsi->credit += 1;17941810 } else {17951811 iue = cmd->iue;17961812···2984296229852963 rsp->opcode = SRP_RSP;2986296429872987- if (vscsi->credit > 0 && vscsi->state == SRP_PROCESSING)29882988- rsp->req_lim_delta = cpu_to_be32(vscsi->credit);29892989- else29902990- rsp->req_lim_delta = cpu_to_be32(1 + vscsi->credit);29652965+ rsp->req_lim_delta = cpu_to_be32(1 + vscsi->credit);29912966 rsp->tag = cmd->rsp.tag;29922967 rsp->flags = 0;29932968
···206206 * associated with a LPFC_NODELIST entry. This207207 * routine effectively results in a "software abort".208208 */209209-int209209+void210210lpfc_els_abort(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp)211211{212212 LIST_HEAD(abort_list);···214214 struct lpfc_iocbq *iocb, *next_iocb;215215216216 pring = lpfc_phba_elsring(phba);217217+218218+ /* In case of error recovery path, we might have a NULL pring here */219219+ if (!pring)220220+ return;217221218222 /* Abort outstanding I/O on NPort <nlp_DID> */219223 lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_DISCOVERY,···277273 IOSTAT_LOCAL_REJECT, IOERR_SLI_ABORTED);278274279275 lpfc_cancel_retry_delay_tmo(phba->pport, ndlp);280280- return 0;281276}282277283278static int
···11config CRYPTO_DEV_CCREE22 tristate "Support for ARM TrustZone CryptoCell C7XX family of Crypto accelerators"33- depends on CRYPTO_HW && OF && HAS_DMA33+ depends on CRYPTO && CRYPTO_HW && OF && HAS_DMA44 default n55 select CRYPTO_HASH66 select CRYPTO_BLKCIPHER
···160160 oldfs = get_fs(); set_fs(get_ds());161161162162 if (1!=readFile(fp, &buf, 1))163163- ret = PTR_ERR(fp);163163+ ret = -EINVAL;164164165165 set_fs(oldfs);166166 filp_close(fp, NULL);
+44-8
drivers/target/iscsi/iscsi_target.c
···12791279 */12801280 if (dump_payload)12811281 goto after_immediate_data;12821282+ /*12831283+ * Check for underflow case where both EDTL and immediate data payload12841284+ * exceed what is presented by CDB's TRANSFER LENGTH, and what has12851285+ * already been set in target_cmd_size_check() as se_cmd->data_length.12861286+ *12871287+ * For this special case, fail the command and dump the immediate data12881288+ * payload.12891289+ */12901290+ if (cmd->first_burst_len > cmd->se_cmd.data_length) {12911291+ cmd->sense_reason = TCM_INVALID_CDB_FIELD;12921292+ goto after_immediate_data;12931293+ }1282129412831295 immed_ret = iscsit_handle_immediate_data(cmd, hdr,12841296 cmd->first_burst_len);···38023790{38033791 int ret = 0;38043792 struct iscsi_conn *conn = arg;37933793+ bool conn_freed = false;37943794+38053795 /*38063796 * Allow ourselves to be interrupted by SIGINT so that a38073797 * connection recovery / failure event can be triggered externally.···38293815 goto transport_err;3830381638313817 ret = iscsit_handle_response_queue(conn);38323832- if (ret == 1)38183818+ if (ret == 1) {38333819 goto get_immediate;38343834- else if (ret == -ECONNRESET)38203820+ } else if (ret == -ECONNRESET) {38213821+ conn_freed = true;38353822 goto out;38363836- else if (ret < 0)38233823+ } else if (ret < 0) {38373824 goto transport_err;38253825+ }38383826 }3839382738403828transport_err:···38463830 * responsible for cleaning up the early connection failure.38473831 */38483832 if (conn->conn_state != TARG_CONN_STATE_IN_LOGIN)38493849- iscsit_take_action_for_connection_exit(conn);38333833+ iscsit_take_action_for_connection_exit(conn, &conn_freed);38503834out:38353835+ if (!conn_freed) {38363836+ while (!kthread_should_stop()) {38373837+ msleep(100);38383838+ }38393839+ }38513840 return 0;38523841}38533842···40254004{40264005 int rc;40274006 struct iscsi_conn *conn = arg;40074007+ bool conn_freed = false;4028400840294009 /*40304010 * Allow ourselves to be interrupted by SIGINT so that a···40384016 */40394017 rc = wait_for_completion_interruptible(&conn->rx_login_comp);40404018 if (rc < 0 || iscsi_target_check_conn_state(conn))40414041- return 0;40194019+ goto out;4042402040434021 if (!conn->conn_transport->iscsit_get_rx_pdu)40444022 return 0;···4047402540484026 if (!signal_pending(current))40494027 atomic_set(&conn->transport_failed, 1);40504050- iscsit_take_action_for_connection_exit(conn);40284028+ iscsit_take_action_for_connection_exit(conn, &conn_freed);40294029+40304030+out:40314031+ if (!conn_freed) {40324032+ while (!kthread_should_stop()) {40334033+ msleep(100);40344034+ }40354035+ }40364036+40514037 return 0;40524038}40534039···44354405 * always sleep waiting for RX/TX thread shutdown to complete44364406 * within iscsit_close_connection().44374407 */44384438- if (!conn->conn_transport->rdma_shutdown)44084408+ if (!conn->conn_transport->rdma_shutdown) {44394409 sleep = cmpxchg(&conn->tx_thread_active, true, false);44104410+ if (!sleep)44114411+ return;44124412+ }4440441344414414 atomic_set(&conn->conn_logout_remove, 0);44424415 complete(&conn->conn_logout_comp);···44554422{44564423 int sleep = 1;4457442444584458- if (!conn->conn_transport->rdma_shutdown)44254425+ if (!conn->conn_transport->rdma_shutdown) {44594426 sleep = cmpxchg(&conn->tx_thread_active, true, false);44274427+ if (!sleep)44284428+ return;44294429+ }4460443044614431 atomic_set(&conn->conn_logout_remove, 0);44624432 complete(&conn->conn_logout_comp);
···7575 kfree(tmr);7676}77777878-static void core_tmr_handle_tas_abort(struct se_cmd *cmd, int tas)7878+static int core_tmr_handle_tas_abort(struct se_cmd *cmd, int tas)7979{8080 unsigned long flags;8181 bool remove = true, send_tas;···9191 transport_send_task_abort(cmd);9292 }93939494- transport_cmd_finish_abort(cmd, remove);9494+ return transport_cmd_finish_abort(cmd, remove);9595}96969797static int target_check_cdb_and_preempt(struct list_head *list,···184184 cancel_work_sync(&se_cmd->work);185185 transport_wait_for_tasks(se_cmd);186186187187- transport_cmd_finish_abort(se_cmd, true);188188- target_put_sess_cmd(se_cmd);187187+ if (!transport_cmd_finish_abort(se_cmd, true))188188+ target_put_sess_cmd(se_cmd);189189190190 printk("ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for"191191 " ref_tag: %llu\n", ref_tag);···281281 cancel_work_sync(&cmd->work);282282 transport_wait_for_tasks(cmd);283283284284- transport_cmd_finish_abort(cmd, 1);285285- target_put_sess_cmd(cmd);284284+ if (!transport_cmd_finish_abort(cmd, 1))285285+ target_put_sess_cmd(cmd);286286 }287287}288288···380380 cancel_work_sync(&cmd->work);381381 transport_wait_for_tasks(cmd);382382383383- core_tmr_handle_tas_abort(cmd, tas);384384- target_put_sess_cmd(cmd);383383+ if (!core_tmr_handle_tas_abort(cmd, tas))384384+ target_put_sess_cmd(cmd);385385 }386386}387387
+24-8
drivers/target/target_core_transport.c
···651651 percpu_ref_put(&lun->lun_ref);652652}653653654654-void transport_cmd_finish_abort(struct se_cmd *cmd, int remove)654654+int transport_cmd_finish_abort(struct se_cmd *cmd, int remove)655655{656656 bool ack_kref = (cmd->se_cmd_flags & SCF_ACK_KREF);657657+ int ret = 0;657658658659 if (cmd->se_cmd_flags & SCF_SE_LUN_CMD)659660 transport_lun_remove_cmd(cmd);···666665 cmd->se_tfo->aborted_task(cmd);667666668667 if (transport_cmd_check_stop_to_fabric(cmd))669669- return;668668+ return 1;670669 if (remove && ack_kref)671671- transport_put_cmd(cmd);670670+ ret = transport_put_cmd(cmd);671671+672672+ return ret;672673}673674674675static void target_complete_failure_work(struct work_struct *work)···11631160 if (cmd->unknown_data_length) {11641161 cmd->data_length = size;11651162 } else if (size != cmd->data_length) {11661166- pr_warn("TARGET_CORE[%s]: Expected Transfer Length:"11631163+ pr_warn_ratelimited("TARGET_CORE[%s]: Expected Transfer Length:"11671164 " %u does not match SCSI CDB Length: %u for SAM Opcode:"11681165 " 0x%02x\n", cmd->se_tfo->get_fabric_name(),11691166 cmd->data_length, size, cmd->t_task_cdb[0]);1170116711711171- if (cmd->data_direction == DMA_TO_DEVICE &&11721172- cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) {11731173- pr_err("Rejecting underflow/overflow WRITE data\n");11741174- return TCM_INVALID_CDB_FIELD;11681168+ if (cmd->data_direction == DMA_TO_DEVICE) {11691169+ if (cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) {11701170+ pr_err_ratelimited("Rejecting underflow/overflow"11711171+ " for WRITE data CDB\n");11721172+ return TCM_INVALID_CDB_FIELD;11731173+ }11741174+ /*11751175+ * Some fabric drivers like iscsi-target still expect to11761176+ * always reject overflow writes. Reject this case until11771177+ * full fabric driver level support for overflow writes11781178+ * is introduced tree-wide.11791179+ */11801180+ if (size > cmd->data_length) {11811181+ pr_err_ratelimited("Rejecting overflow for"11821182+ " WRITE control CDB\n");11831183+ return TCM_INVALID_CDB_FIELD;11841184+ }11751185 }11761186 /*11771187 * Reject READ_* or WRITE_* with overflow/underflow for
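The target_cmd_size_check() hunk above narrows the old blanket rejection: WRITEs carrying a data CDB still fail on any length mismatch, but control-CDB WRITEs are now refused only when the fabric-side length overflows what the CDB describes. The policy as a small standalone predicate (simplified sketch; no SCSI structures, names hypothetical):

    #include <stdbool.h>
    #include <stdio.h>

    /* Returns true if the WRITE must be rejected, mirroring the patched
     * policy: data CDBs reject any mismatch, control CDBs only overflow. */
    static bool reject_write(bool is_data_cdb, unsigned int fabric_len,
                             unsigned int cdb_len)
    {
        if (fabric_len == cdb_len)
            return false;            /* lengths agree, nothing to do */
        if (is_data_cdb)
            return true;             /* underflow or overflow: reject */
        return fabric_len > cdb_len; /* control CDB: overflow only */
    }

    int main(void)
    {
        printf("%d\n", reject_write(true, 512, 4096));   /* 1: data underflow */
        printf("%d\n", reject_write(false, 512, 4096));  /* 0: control underflow ok */
        printf("%d\n", reject_write(false, 8192, 4096)); /* 1: control overflow */
        return 0;
    }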
···315315 list_del(&f->list);316316 if (f->unbind)317317 f->unbind(c, f);318318+319319+ if (f->bind_deactivated)320320+ usb_function_activate(f);318321}319322EXPORT_SYMBOL_GPL(usb_remove_function);320323···959956960957 f = list_first_entry(&config->functions,961958 struct usb_function, list);962962- list_del(&f->list);963963- if (f->unbind) {964964- DBG(cdev, "unbind function '%s'/%p\n", f->name, f);965965- f->unbind(config, f);966966- /* may free memory for "f" */967967- }959959+960960+ usb_remove_function(config, f);968961 }969962 list_del(&config->list);970963 if (config->unbind) {
+11-2
drivers/usb/gadget/function/f_mass_storage.c
···396396/* Caller must hold fsg->lock */397397static void wakeup_thread(struct fsg_common *common)398398{399399- smp_wmb(); /* ensure the write of bh->state is complete */399399+ /*400400+ * Ensure the reading of thread_wakeup_needed401401+ * and the writing of bh->state are completed402402+ */403403+ smp_mb();400404 /* Tell the main thread that something has happened */401405 common->thread_wakeup_needed = 1;402406 if (common->thread_task)···631627 }632628 __set_current_state(TASK_RUNNING);633629 common->thread_wakeup_needed = 0;634634- smp_rmb(); /* ensure the latest bh->state is visible */630630+631631+ /*632632+ * Ensure the writing of thread_wakeup_needed633633+ * and the reading of bh->state are completed634634+ */635635+ smp_mb();635636 return rc;636637}637638
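The f_mass_storage fix promotes both barriers to smp_mb() so the flag (thread_wakeup_needed) and the payload (bh->state) are ordered against each other on both sides. The pairing can be modeled in portable C11 with seq_cst fences around a relaxed flag; this is a sketch of the ordering discipline, not of the kernel primitives themselves:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static int bh_state;                      /* payload, plain data */
    static _Atomic int thread_wakeup_needed;  /* the wakeup flag */

    static void *waker(void *arg)
    {
        (void)arg;
        bh_state = 1;                                   /* write payload */
        atomic_thread_fence(memory_order_seq_cst);      /* like smp_mb() */
        atomic_store_explicit(&thread_wakeup_needed, 1,
                              memory_order_relaxed);    /* raise flag */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, waker, NULL);
        while (!atomic_load_explicit(&thread_wakeup_needed,
                                     memory_order_relaxed))
            ;                                           /* sleeper spins */
        atomic_thread_fence(memory_order_seq_cst);      /* pairs with waker */
        printf("bh_state=%d\n", bh_state);              /* always 1 */
        pthread_join(t, NULL);
        return 0;
    }

The two fences synchronize through the relaxed flag, which is exactly why a one-sided wmb/rmb pair was insufficient here: each side both writes and reads shared state.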
···1183118311841184 /* closing ep0 === shutdown all */1185118511861186- if (dev->gadget_registered)11861186+ if (dev->gadget_registered) {11871187 usb_gadget_unregister_driver (&gadgetfs_driver);11881188+ dev->gadget_registered = false;11891189+ }1188119011891191 /* at this point "good" hardware has disconnected the11901192 * device from USB; the host won't see it any more.···16791677gadgetfs_suspend (struct usb_gadget *gadget)16801678{16811679 struct dev_data *dev = get_gadget_data (gadget);16801680+ unsigned long flags;1682168116831682 INFO (dev, "suspended from state %d\n", dev->state);16841684- spin_lock (&dev->lock);16831683+ spin_lock_irqsave(&dev->lock, flags);16851684 switch (dev->state) {16861685 case STATE_DEV_SETUP: // VERY odd... host died??16871686 case STATE_DEV_CONNECTED:···16931690 default:16941691 break;16951692 }16961696- spin_unlock (&dev->lock);16931693+ spin_unlock_irqrestore(&dev->lock, flags);16971694}1698169516991696static struct usb_gadget_driver gadgetfs_driver = {
+4-9
drivers/usb/gadget/udc/dummy_hcd.c
···442442 /* Report reset and disconnect events to the driver */443443 if (dum->driver && (disconnect || reset)) {444444 stop_activity(dum);445445- spin_unlock(&dum->lock);446445 if (reset)447446 usb_gadget_udc_reset(&dum->gadget, dum->driver);448447 else449448 dum->driver->disconnect(&dum->gadget);450450- spin_lock(&dum->lock);451449 }452450 } else if (dum_hcd->active != dum_hcd->old_active) {453453- if (dum_hcd->old_active && dum->driver->suspend) {454454- spin_unlock(&dum->lock);451451+ if (dum_hcd->old_active && dum->driver->suspend)455452 dum->driver->suspend(&dum->gadget);456456- spin_lock(&dum->lock);457457- } else if (!dum_hcd->old_active && dum->driver->resume) {458458- spin_unlock(&dum->lock);453453+ else if (!dum_hcd->old_active && dum->driver->resume)459454 dum->driver->resume(&dum->gadget);460460- spin_lock(&dum->lock);461461- }462455 }463456464457 dum_hcd->old_status = dum_hcd->port_status;···976983 struct dummy_hcd *dum_hcd = gadget_to_dummy_hcd(g);977984 struct dummy *dum = dum_hcd->dum;978985986986+ spin_lock_irq(&dum->lock);979987 dum->driver = NULL;988988+ spin_unlock_irq(&dum->lock);980989981990 return 0;982991}
···245245 dsps_mod_timer_optional(glue);246246 break;247247 case OTG_STATE_A_WAIT_BCON:248248+ /* keep VBUS on for host-only mode */249249+ if (musb->port_mode == MUSB_PORT_MODE_HOST) {250250+ dsps_mod_timer_optional(glue);251251+ break;252252+ }248253 musb_writeb(musb->mregs, MUSB_DEVCTL, 0);249254 skip_session = 1;250255 /* fall */
+1-1
drivers/video/fbdev/core/fbmon.c
···1048104810491049 for (i = 0; i < (128 - edid[2]) / DETAILED_TIMING_DESCRIPTION_SIZE;10501050 i++, block += DETAILED_TIMING_DESCRIPTION_SIZE)10511051- if (PIXEL_CLOCK)10511051+ if (PIXEL_CLOCK != 0)10521052 edt[num++] = block - edid;1053105310541054 /* Yikes, EDID data is totally useless */
···468468469469 if (btrfs_dir_name_len(leaf, dir_item) > namelen) {470470 btrfs_crit(fs_info, "invalid dir item name len: %u",471471- (unsigned)btrfs_dir_data_len(leaf, dir_item));471471+ (unsigned)btrfs_dir_name_len(leaf, dir_item));472472 return 1;473473 }474474
+6-4
fs/btrfs/disk-io.c
···34673467 * we fua the first super. The others we allow34683468 * to go down lazy.34693469 */34703470- if (i == 0)34713471- ret = btrfsic_submit_bh(REQ_OP_WRITE, REQ_FUA, bh);34723472- else34703470+ if (i == 0) {34713471+ ret = btrfsic_submit_bh(REQ_OP_WRITE,34723472+ REQ_SYNC | REQ_FUA, bh);34733473+ } else {34733474 ret = btrfsic_submit_bh(REQ_OP_WRITE, REQ_SYNC, bh);34753475+ }34743476 if (ret)34753477 errors++;34763478 }···3537353535383536 bio->bi_end_io = btrfs_end_empty_barrier;35393537 bio->bi_bdev = device->bdev;35403540- bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;35383538+ bio->bi_opf = REQ_OP_WRITE | REQ_SYNC | REQ_PREFLUSH;35413539 init_completion(&device->flush_wait);35423540 bio->bi_private = &device->flush_wait;35433541 device->flush_bio = bio;
+4-3
fs/btrfs/extent-tree.c
···39933993 info->space_info_kobj, "%s",39943994 alloc_name(found->flags));39953995 if (ret) {39963996+ percpu_counter_destroy(&found->total_bytes_pinned);39963997 kfree(found);39973998 return ret;39983999 }···48454844 spin_unlock(&delayed_rsv->lock);4846484548474846commit:48484848- trans = btrfs_join_transaction(fs_info->fs_root);48474847+ trans = btrfs_join_transaction(fs_info->extent_root);48494848 if (IS_ERR(trans))48504849 return -ENOSPC;48514850···48634862 struct btrfs_space_info *space_info, u64 num_bytes,48644863 u64 orig_bytes, int state)48654864{48664866- struct btrfs_root *root = fs_info->fs_root;48654865+ struct btrfs_root *root = fs_info->extent_root;48674866 struct btrfs_trans_handle *trans;48684867 int nr;48694868 int ret = 0;···50635062 int flush_state = FLUSH_DELAYED_ITEMS_NR;5064506350655064 spin_lock(&space_info->lock);50665066- to_reclaim = btrfs_calc_reclaim_metadata_size(fs_info->fs_root,50655065+ to_reclaim = btrfs_calc_reclaim_metadata_size(fs_info->extent_root,50675066 space_info);50685067 if (!to_reclaim) {50695068 spin_unlock(&space_info->lock);
+123-3
fs/btrfs/extent_io.c
···24582458 if (!uptodate) {24592459 ClearPageUptodate(page);24602460 SetPageError(page);24612461- ret = ret < 0 ? ret : -EIO;24612461+ ret = err < 0 ? err : -EIO;24622462 mapping_set_error(page->mapping, ret);24632463 }24642464}···43774377 return NULL;43784378}4379437943804380+/*43814381+ * To cache previous fiemap extent43824382+ *43834383+ * Will be used for merging fiemap extent43844384+ */43854385+struct fiemap_cache {43864386+ u64 offset;43874387+ u64 phys;43884388+ u64 len;43894389+ u32 flags;43904390+ bool cached;43914391+};43924392+43934393+/*43944394+ * Helper to submit fiemap extent.43954395+ *43964396+ * Will try to merge current fiemap extent specified by @offset, @phys,43974397+ * @len and @flags with cached one.43984398+ * And only when we fails to merge, cached one will be submitted as43994399+ * fiemap extent.44004400+ *44014401+ * Return value is the same as fiemap_fill_next_extent().44024402+ */44034403+static int emit_fiemap_extent(struct fiemap_extent_info *fieinfo,44044404+ struct fiemap_cache *cache,44054405+ u64 offset, u64 phys, u64 len, u32 flags)44064406+{44074407+ int ret = 0;44084408+44094409+ if (!cache->cached)44104410+ goto assign;44114411+44124412+ /*44134413+ * Sanity check, extent_fiemap() should have ensured that new44144414+ * fiemap extent won't overlap with cahced one.44154415+ * Not recoverable.44164416+ *44174417+ * NOTE: Physical address can overlap, due to compression44184418+ */44194419+ if (cache->offset + cache->len > offset) {44204420+ WARN_ON(1);44214421+ return -EINVAL;44224422+ }44234423+44244424+ /*44254425+ * Only merges fiemap extents if44264426+ * 1) Their logical addresses are continuous44274427+ *44284428+ * 2) Their physical addresses are continuous44294429+ * So truly compressed (physical size smaller than logical size)44304430+ * extents won't get merged with each other44314431+ *44324432+ * 3) Share same flags except FIEMAP_EXTENT_LAST44334433+ * So regular extent won't get merged with prealloc extent44344434+ */44354435+ if (cache->offset + cache->len == offset &&44364436+ cache->phys + cache->len == phys &&44374437+ (cache->flags & ~FIEMAP_EXTENT_LAST) ==44384438+ (flags & ~FIEMAP_EXTENT_LAST)) {44394439+ cache->len += len;44404440+ cache->flags |= flags;44414441+ goto try_submit_last;44424442+ }44434443+44444444+ /* Not mergeable, need to submit cached one */44454445+ ret = fiemap_fill_next_extent(fieinfo, cache->offset, cache->phys,44464446+ cache->len, cache->flags);44474447+ cache->cached = false;44484448+ if (ret)44494449+ return ret;44504450+assign:44514451+ cache->cached = true;44524452+ cache->offset = offset;44534453+ cache->phys = phys;44544454+ cache->len = len;44554455+ cache->flags = flags;44564456+try_submit_last:44574457+ if (cache->flags & FIEMAP_EXTENT_LAST) {44584458+ ret = fiemap_fill_next_extent(fieinfo, cache->offset,44594459+ cache->phys, cache->len, cache->flags);44604460+ cache->cached = false;44614461+ }44624462+ return ret;44634463+}44644464+44654465+/*44664466+ * Sanity check for fiemap cache44674467+ *44684468+ * All fiemap cache should be submitted by emit_fiemap_extent()44694469+ * Iteration should be terminated either by last fiemap extent or44704470+ * fieinfo->fi_extents_max.44714471+ * So no cached fiemap should exist.44724472+ */44734473+static int check_fiemap_cache(struct btrfs_fs_info *fs_info,44744474+ struct fiemap_extent_info *fieinfo,44754475+ struct fiemap_cache *cache)44764476+{44774477+ int ret;44784478+44794479+ if (!cache->cached)44804480+ return 0;44814481+44824482+ /* Small and 
recoverable problem, only to inform the developer */44834483+#ifdef CONFIG_BTRFS_DEBUG44844484+	WARN_ON(1);44854485+#endif44864486+	btrfs_warn(fs_info,44874487+		   "unhandled fiemap cache detected: offset=%llu phys=%llu len=%llu flags=0x%x",44884488+		   cache->offset, cache->phys, cache->len, cache->flags);44894489+	ret = fiemap_fill_next_extent(fieinfo, cache->offset, cache->phys,44904490+				      cache->len, cache->flags);44914491+	cache->cached = false;44924492+	if (ret > 0)44934493+		ret = 0;44944494+	return ret;44954495+}44964496+43804497int extent_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,43814498		  __u64 start, __u64 len, get_extent_t *get_extent)43824499{···45114394	struct extent_state *cached_state = NULL;45124395	struct btrfs_path *path;45134396	struct btrfs_root *root = BTRFS_I(inode)->root;43974397+	struct fiemap_cache cache = { 0 };45144398	int end = 0;45154399	u64 em_start = 0;45164400	u64 em_len = 0;···46914573			flags |= FIEMAP_EXTENT_LAST;46924574			end = 1;46934575		}46944694-		ret = fiemap_fill_next_extent(fieinfo, em_start, disko,46954695-					      em_len, flags);45764576+		ret = emit_fiemap_extent(fieinfo, &cache, em_start, disko,45774577+					 em_len, flags);46964578		if (ret) {46974579			if (ret == 1)46984580				ret = 0;···47004582		}47014583	}47024584out_free:45854585+	if (!ret)45864586+		ret = check_fiemap_cache(root->fs_info, fieinfo, &cache);47034587	free_extent_map(em);47044588out:47054589	btrfs_free_path(path);
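
The merge test in emit_fiemap_extent() above only coalesces two extents that are contiguous in both the logical and the physical space and that agree on all flags except FIEMAP_EXTENT_LAST. A minimal userspace sketch of that predicate (all hyp_* names are hypothetical stand-ins for the kernel types):

    #include <stdbool.h>
    #include <stdint.h>

    #define HYP_EXTENT_LAST 0x1  /* stand-in for FIEMAP_EXTENT_LAST */

    struct hyp_extent {          /* hypothetical mirror of struct fiemap_cache */
            uint64_t offset;     /* logical offset */
            uint64_t phys;       /* physical offset */
            uint64_t len;
            uint32_t flags;
    };

    /* True when @next can be folded into @cur: contiguous logical addresses,
     * contiguous physical addresses, identical flags apart from the LAST
     * marker -- the three conditions emit_fiemap_extent() checks. */
    static bool can_merge(const struct hyp_extent *cur,
                          const struct hyp_extent *next)
    {
            return cur->offset + cur->len == next->offset &&
                   cur->phys + cur->len == next->phys &&
                   (cur->flags & ~HYP_EXTENT_LAST) ==
                   (next->flags & ~HYP_EXTENT_LAST);
    }

Because a compressed extent's physical size is smaller than its logical size, the physical-contiguity condition keeps compressed extents from being merged, as the comment in the patch notes.

fs/configfs/symlink.c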
···8383 ret = -ENOMEM;8484 sl = kmalloc(sizeof(struct configfs_symlink), GFP_KERNEL);8585 if (sl) {8686- sl->sl_target = config_item_get(item);8786 spin_lock(&configfs_dirent_lock);8887 if (target_sd->s_type & CONFIGFS_USET_DROPPING) {8988 spin_unlock(&configfs_dirent_lock);9090- config_item_put(item);9189 kfree(sl);9290 return -ENOENT;9391 }9292+ sl->sl_target = config_item_get(item);9493 list_add(&sl->sl_list, &target_sd->s_links);9594 spin_unlock(&configfs_dirent_lock);9695 ret = configfs_create_link(sl, parent_item->ci_dentry,
+24
fs/dax.c
···859859 if (ret < 0)860860 goto out;861861 }862862+ start_index = indices[pvec.nr - 1] + 1;862863 }863864out:864865 put_dax(dax_dev);···11561155 }1157115611581157 /*11581158+ * It is possible, particularly with mixed reads & writes to private11591159+ * mappings, that we have raced with a PMD fault that overlaps with11601160+ * the PTE we need to set up. If so just return and the fault will be11611161+ * retried.11621162+ */11631163+ if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) {11641164+ vmf_ret = VM_FAULT_NOPAGE;11651165+ goto unlock_entry;11661166+ }11671167+11681168+ /*11591169 * Note that we don't bother to use iomap_apply here: DAX required11601170 * the file system block size to be equal the page size, which means11611171 * that we never have to deal with more than a single extent here.···14081396 entry = grab_mapping_entry(mapping, pgoff, RADIX_DAX_PMD);14091397 if (IS_ERR(entry))14101398 goto fallback;13991399+14001400+ /*14011401+ * It is possible, particularly with mixed reads & writes to private14021402+ * mappings, that we have raced with a PTE fault that overlaps with14031403+ * the PMD we need to set up. If so just return and the fault will be14041404+ * retried.14051405+ */14061406+ if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd) &&14071407+ !pmd_devmap(*vmf->pmd)) {14081408+ result = 0;14091409+ goto unlock_entry;14101410+ }1411141114121412 /*14131413 * Note that we don't use iomap_apply here. We aren't doing I/O, only
+4-6
fs/dcache.c
···14941494{14951495 struct detach_data *data = _data;1496149614971497- if (!data->mountpoint && !data->select.found)14971497+ if (!data->mountpoint && list_empty(&data->select.dispose))14981498 __d_drop(data->select.start);14991499}15001500···1536153615371537 d_walk(dentry, &data, detach_and_collect, check_and_drop);1538153815391539- if (data.select.found)15391539+ if (!list_empty(&data.select.dispose))15401540 shrink_dentry_list(&data.select.dispose);15411541+ else if (!data.mountpoint)15421542+ return;1541154315421544 if (data.mountpoint) {15431545 detach_mounts(data.mountpoint);15441546 dput(data.mountpoint);15451547 }15461546-15471547- if (!data.mountpoint && !data.select.found)15481548- break;15491549-15501548 cond_resched();15511549 }15521550}
+24-4
fs/exec.c
···220220221221 if (write) {222222 unsigned long size = bprm->vma->vm_end - bprm->vma->vm_start;223223+ unsigned long ptr_size;223224 struct rlimit *rlim;225225+226226+ /*227227+ * Since the stack will hold pointers to the strings, we228228+ * must account for them as well.229229+ *230230+ * The size calculation is the entire vma while each arg page is231231+ * built, so each time we get here it's calculating how far it232232+ * is currently (rather than each call being just the newly233233+ * added size from the arg page). As a result, we need to234234+ * always add the entire size of the pointers, so that on the235235+ * last call to get_arg_page() we'll actually have the entire236236+ * correct size.237237+ */238238+ ptr_size = (bprm->argc + bprm->envc) * sizeof(void *);239239+ if (ptr_size > ULONG_MAX - size)240240+ goto fail;241241+ size += ptr_size;224242225243 acct_arg_size(bprm, size / PAGE_SIZE);226244···257239 * to work from.258240 */259241 rlim = current->signal->rlim;260260- if (size > ACCESS_ONCE(rlim[RLIMIT_STACK].rlim_cur) / 4) {261261- put_page(page);262262- return NULL;263263- }242242+ if (size > READ_ONCE(rlim[RLIMIT_STACK].rlim_cur) / 4)243243+ goto fail;264244 }265245266246 return page;247247+248248+fail:249249+ put_page(page);250250+ return NULL;267251}268252269253static void put_arg_page(struct page *page)
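
The get_arg_page() change above also has to account for the argv/envp pointer array itself, and it guards the addition against unsigned wraparound before comparing the running total with a quarter of RLIMIT_STACK. The overflow idiom in isolation (userspace sketch):

    #include <limits.h>
    #include <stdbool.h>

    /* Add @ptr_size to @size only if the sum cannot wrap: the same
     * "b > MAX - a" test the patch performs before "size += ptr_size". */
    static bool add_checked(unsigned long *size, unsigned long ptr_size)
    {
            if (ptr_size > ULONG_MAX - *size)
                    return false;   /* would overflow; caller bails out */
            *size += ptr_size;
            return true;
    }

fs/ext4/ext4.h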
···25232523 int buf_size,25242524 struct inode *dir,25252525 struct ext4_filename *fname,25262526- const struct qstr *d_name,25272526 unsigned int offset,25282527 struct ext4_dir_entry_2 **res_dir);25292528extern int ext4_generic_delete_entry(handle_t *handle,···30063007 int *has_inline_data);30073008extern struct buffer_head *ext4_find_inline_entry(struct inode *dir,30083009 struct ext4_filename *fname,30093009- const struct qstr *d_name,30103010 struct ext4_dir_entry_2 **res_dir,30113011 int *has_inline_data);30123012extern int ext4_delete_inline_entry(handle_t *handle,
+42-43
fs/ext4/extents.c
···34133413 struct ext4_sb_info *sbi;34143414 struct ext4_extent_header *eh;34153415 struct ext4_map_blocks split_map;34163416- struct ext4_extent zero_ex;34163416+ struct ext4_extent zero_ex1, zero_ex2;34173417 struct ext4_extent *ex, *abut_ex;34183418 ext4_lblk_t ee_block, eof_block;34193419 unsigned int ee_len, depth, map_len = map->m_len;34203420 int allocated = 0, max_zeroout = 0;34213421 int err = 0;34223422- int split_flag = 0;34223422+ int split_flag = EXT4_EXT_DATA_VALID2;3423342334243424 ext_debug("ext4_ext_convert_to_initialized: inode %lu, logical"34253425 "block %llu, max_blocks %u\n", inode->i_ino,···34363436 ex = path[depth].p_ext;34373437 ee_block = le32_to_cpu(ex->ee_block);34383438 ee_len = ext4_ext_get_actual_len(ex);34393439- zero_ex.ee_len = 0;34393439+ zero_ex1.ee_len = 0;34403440+ zero_ex2.ee_len = 0;3440344134413442 trace_ext4_ext_convert_to_initialized_enter(inode, map, ex);34423443···35773576 if (ext4_encrypted_inode(inode))35783577 max_zeroout = 0;3579357835803580- /* If extent is less than s_max_zeroout_kb, zeroout directly */35813581- if (max_zeroout && (ee_len <= max_zeroout)) {35823582- err = ext4_ext_zeroout(inode, ex);35833583- if (err)35843584- goto out;35853585- zero_ex.ee_block = ex->ee_block;35863586- zero_ex.ee_len = cpu_to_le16(ext4_ext_get_actual_len(ex));35873587- ext4_ext_store_pblock(&zero_ex, ext4_ext_pblock(ex));35883588-35893589- err = ext4_ext_get_access(handle, inode, path + depth);35903590- if (err)35913591- goto out;35923592- ext4_ext_mark_initialized(ex);35933593- ext4_ext_try_to_merge(handle, inode, path, ex);35943594- err = ext4_ext_dirty(handle, inode, path + path->p_depth);35953595- goto out;35963596- }35973597-35983579 /*35993599- * four cases:35803580+ * five cases:36003581 * 1. split the extent into three extents.36013601- * 2. split the extent into two extents, zeroout the first half.36023602- * 3. split the extent into two extents, zeroout the second half.35823582+ * 2. split the extent into two extents, zeroout the head of the first35833583+ * extent.35843584+ * 3. split the extent into two extents, zeroout the tail of the second35853585+ * extent.36033586 * 4. split the extent into two extents with out zeroout.35873587+ * 5. 
no splitting needed, just possibly zeroout the head and / or the35883588+ * tail of the extent.36043589 */36053590 split_map.m_lblk = map->m_lblk;36063591 split_map.m_len = map->m_len;3607359236083608- if (max_zeroout && (allocated > map->m_len)) {35933593+ if (max_zeroout && (allocated > split_map.m_len)) {36093594 if (allocated <= max_zeroout) {36103610- /* case 3 */36113611- zero_ex.ee_block =36123612- cpu_to_le32(map->m_lblk);36133613- zero_ex.ee_len = cpu_to_le16(allocated);36143614- ext4_ext_store_pblock(&zero_ex,36153615- ext4_ext_pblock(ex) + map->m_lblk - ee_block);36163616- err = ext4_ext_zeroout(inode, &zero_ex);35953595+ /* case 3 or 5 */35963596+ zero_ex1.ee_block =35973597+ cpu_to_le32(split_map.m_lblk +35983598+ split_map.m_len);35993599+ zero_ex1.ee_len =36003600+ cpu_to_le16(allocated - split_map.m_len);36013601+ ext4_ext_store_pblock(&zero_ex1,36023602+ ext4_ext_pblock(ex) + split_map.m_lblk +36033603+ split_map.m_len - ee_block);36043604+ err = ext4_ext_zeroout(inode, &zero_ex1);36173605 if (err)36183606 goto out;36193619- split_map.m_lblk = map->m_lblk;36203607 split_map.m_len = allocated;36213621- } else if (map->m_lblk - ee_block + map->m_len < max_zeroout) {36223622- /* case 2 */36233623- if (map->m_lblk != ee_block) {36243624- zero_ex.ee_block = ex->ee_block;36253625- zero_ex.ee_len = cpu_to_le16(map->m_lblk -36083608+ }36093609+ if (split_map.m_lblk - ee_block + split_map.m_len <36103610+ max_zeroout) {36113611+ /* case 2 or 5 */36123612+ if (split_map.m_lblk != ee_block) {36133613+ zero_ex2.ee_block = ex->ee_block;36143614+ zero_ex2.ee_len = cpu_to_le16(split_map.m_lblk -36263615 ee_block);36273627- ext4_ext_store_pblock(&zero_ex,36163616+ ext4_ext_store_pblock(&zero_ex2,36283617 ext4_ext_pblock(ex));36293629- err = ext4_ext_zeroout(inode, &zero_ex);36183618+ err = ext4_ext_zeroout(inode, &zero_ex2);36303619 if (err)36313620 goto out;36323621 }3633362236233623+ split_map.m_len += split_map.m_lblk - ee_block;36343624 split_map.m_lblk = ee_block;36353635- split_map.m_len = map->m_lblk - ee_block + map->m_len;36363625 allocated = map->m_len;36373626 }36383627 }···36333642 err = 0;36343643out:36353644 /* If we have gotten a failure, don't zero out status tree */36363636- if (!err)36373637- err = ext4_zeroout_es(inode, &zero_ex);36453645+ if (!err) {36463646+ err = ext4_zeroout_es(inode, &zero_ex1);36473647+ if (!err)36483648+ err = ext4_zeroout_es(inode, &zero_ex2);36493649+ }36383650 return err ? err : allocated;36393651}36403652···4877488348784884 /* Zero out partial block at the edges of the range */48794885 ret = ext4_zero_partial_blocks(handle, inode, offset, len);48864886+ if (ret >= 0)48874887+ ext4_update_inode_fsync_trans(handle, inode, 1);4880488848814889 if (file->f_flags & O_SYNC)48824890 ext4_handle_sync(handle);···55655569 ext4_handle_sync(handle);55665570 inode->i_mtime = inode->i_ctime = current_time(inode);55675571 ext4_mark_inode_dirty(handle, inode);55725572+ ext4_update_inode_fsync_trans(handle, inode, 1);5568557355695574out_stop:55705575 ext4_journal_stop(handle);···57395742 up_write(&EXT4_I(inode)->i_data_sem);57405743 if (IS_SYNC(inode))57415744 ext4_handle_sync(handle);57455745+ if (ret >= 0)57465746+ ext4_update_inode_fsync_trans(handle, inode, 1);5742574757435748out_stop:57445749 ext4_journal_stop(handle);
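
The rework above replaces the single zero_ex with two candidates: zero_ex1 for the tail of the extent past the written range (cases 3 and 5) and zero_ex2 for the head before it (cases 2 and 5). Ignoring the max_zeroout gating, the range arithmetic reduces to the following sketch (hypothetical names):

    #include <stdint.h>

    struct hyp_range { uint64_t start, len; };  /* slice of an extent */

    /* For an extent starting at @ee_block with @allocated initialized
     * blocks available from @lblk onward, and a written range
     * [@lblk, @lblk + @len), compute the tail (zero_ex1) and head
     * (zero_ex2) slices that would be zeroed out.  A zero length means
     * there is nothing to zero on that side. */
    static void zeroout_slices(uint64_t ee_block, uint64_t allocated,
                               uint64_t lblk, uint64_t len,
                               struct hyp_range *tail, struct hyp_range *head)
    {
            tail->start = lblk + len;           /* after the write */
            tail->len = allocated - len;
            head->start = ee_block;             /* before the write */
            head->len = lblk - ee_block;
    }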
+16-38
fs/ext4/file.c
···474474 endoff = (loff_t)end_blk << blkbits;475475476476 index = startoff >> PAGE_SHIFT;477477- end = endoff >> PAGE_SHIFT;477477+ end = (endoff - 1) >> PAGE_SHIFT;478478479479 pagevec_init(&pvec, 0);480480 do {481481 int i, num;482482 unsigned long nr_pages;483483484484- num = min_t(pgoff_t, end - index, PAGEVEC_SIZE);484484+ num = min_t(pgoff_t, end - index, PAGEVEC_SIZE - 1) + 1;485485 nr_pages = pagevec_lookup(&pvec, inode->i_mapping, index,486486 (pgoff_t)num);487487- if (nr_pages == 0) {488488- if (whence == SEEK_DATA)489489- break;490490-491491- BUG_ON(whence != SEEK_HOLE);492492- /*493493- * If this is the first time to go into the loop and494494- * offset is not beyond the end offset, it will be a495495- * hole at this offset496496- */497497- if (lastoff == startoff || lastoff < endoff)498498- found = 1;487487+ if (nr_pages == 0)499488 break;500500- }501501-502502- /*503503- * If this is the first time to go into the loop and504504- * offset is smaller than the first page offset, it will be a505505- * hole at this offset.506506- */507507- if (lastoff == startoff && whence == SEEK_HOLE &&508508- lastoff < page_offset(pvec.pages[0])) {509509- found = 1;510510- break;511511- }512489513490 for (i = 0; i < nr_pages; i++) {514491 struct page *page = pvec.pages[i];515492 struct buffer_head *bh, *head;516493517494 /*518518- * If the current offset is not beyond the end of given519519- * range, it will be a hole.495495+ * If current offset is smaller than the page offset,496496+ * there is a hole at this offset.520497 */521521- if (lastoff < endoff && whence == SEEK_HOLE &&522522- page->index > end) {498498+ if (whence == SEEK_HOLE && lastoff < endoff &&499499+ lastoff < page_offset(pvec.pages[i])) {523500 found = 1;524501 *offset = lastoff;525502 goto out;526503 }504504+505505+ if (page->index > end)506506+ goto out;527507528508 lock_page(page);529509···544564 unlock_page(page);545565 }546566547547- /*548548- * The no. of pages is less than our desired, that would be a549549- * hole in there.550550- */551551- if (nr_pages < num && whence == SEEK_HOLE) {552552- found = 1;553553- *offset = lastoff;567567+ /* The no. of pages is less than our desired, we are done. */568568+ if (nr_pages < num)554569 break;555555- }556570557571 index = pvec.pages[i - 1]->index + 1;558572 pagevec_release(&pvec);559573 } while (index <= end);560574575575+ if (whence == SEEK_HOLE && lastoff < endoff) {576576+ found = 1;577577+ *offset = lastoff;578578+ }561579out:562580 pagevec_release(&pvec);563581 return found;
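
The SEEK_HOLE/SEEK_DATA fix above turns the exclusive end offset into an inclusive last page index: "endoff >> PAGE_SHIFT" claims one page too many whenever endoff is page aligned. The corrected index math (userspace sketch, 4K pages assumed):

    #include <stdint.h>

    #define HYP_PAGE_SHIFT 12   /* assume 4K pages for the sketch */

    /* Last page index covered by the non-empty byte range [0, endoff),
     * mirroring the "end = (endoff - 1) >> PAGE_SHIFT" fix. */
    static uint64_t last_page_index(uint64_t endoff)
    {
            return (endoff - 1) >> HYP_PAGE_SHIFT;
    }

For endoff = 8192 this yields index 1 (two pages, 0 and 1), where the old expression returned 2.

fs/ext4/inode.c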
···21242124static int mpage_submit_page(struct mpage_da_data *mpd, struct page *page)21252125{21262126 int len;21272127- loff_t size = i_size_read(mpd->inode);21272127+ loff_t size;21282128 int err;2129212921302130 BUG_ON(page->index != mpd->first_page);21312131+ clear_page_dirty_for_io(page);21322132+ /*21332133+ * We have to be very careful here! Nothing protects writeback path21342134+ * against i_size changes and the page can be writeably mapped into21352135+ * page tables. So an application can be growing i_size and writing21362136+ * data through mmap while writeback runs. clear_page_dirty_for_io()21372137+ * write-protects our page in page tables and the page cannot get21382138+ * written to again until we release page lock. So only after21392139+ * clear_page_dirty_for_io() we are safe to sample i_size for21402140+ * ext4_bio_write_page() to zero-out tail of the written page. We rely21412141+ * on the barrier provided by TestClearPageDirty in21422142+ * clear_page_dirty_for_io() to make sure i_size is really sampled only21432143+ * after page tables are updated.21442144+ */21452145+ size = i_size_read(mpd->inode);21312146 if (page->index == size >> PAGE_SHIFT)21322147 len = size & ~PAGE_MASK;21332148 else21342149 len = PAGE_SIZE;21352135- clear_page_dirty_for_io(page);21362150 err = ext4_bio_write_page(&mpd->io_submit, page, len, mpd->wbc, false);21372151 if (!err)21382152 mpd->wbc->nr_to_write--;···36433629 get_block_func = ext4_dio_get_block_unwritten_async;36443630 dio_flags = DIO_LOCKING;36453631 }36463646-#ifdef CONFIG_EXT4_FS_ENCRYPTION36473647- BUG_ON(ext4_encrypted_inode(inode) && S_ISREG(inode->i_mode));36483648-#endif36493632 ret = __blockdev_direct_IO(iocb, inode, inode->i_sb->s_bdev, iter,36503633 get_block_func, ext4_end_io_dio, NULL,36513634 dio_flags);···37243713 */37253714 inode_lock_shared(inode);37263715 ret = filemap_write_and_wait_range(mapping, iocb->ki_pos,37273727- iocb->ki_pos + count);37163716+ iocb->ki_pos + count - 1);37283717 if (ret)37293718 goto out_unlock;37303719 ret = __blockdev_direct_IO(iocb, inode, inode->i_sb->s_bdev,···4218420742194208 inode->i_mtime = inode->i_ctime = current_time(inode);42204209 ext4_mark_inode_dirty(handle, inode);42104210+ if (ret >= 0)42114211+ ext4_update_inode_fsync_trans(handle, inode, 1);42214212out_stop:42224213 ext4_journal_stop(handle);42234214out_dio:···56505637 /* No extended attributes present */56515638 if (!ext4_test_inode_state(inode, EXT4_STATE_XATTR) ||56525639 header->h_magic != cpu_to_le32(EXT4_XATTR_MAGIC)) {56535653- memset((void *)raw_inode + EXT4_GOOD_OLD_INODE_SIZE, 0,56545654- new_extra_isize);56405640+ memset((void *)raw_inode + EXT4_GOOD_OLD_INODE_SIZE +56415641+ EXT4_I(inode)->i_extra_isize, 0,56425642+ new_extra_isize - EXT4_I(inode)->i_extra_isize);56555643 EXT4_I(inode)->i_extra_isize = new_extra_isize;56565644 return 0;56575645 }
+14-9
fs/ext4/mballoc.c
···3887388738883888 err = ext4_mb_load_buddy(sb, group, &e4b);38893889 if (err) {38903890- ext4_error(sb, "Error loading buddy information for %u", group);38903890+ ext4_warning(sb, "Error %d loading buddy information for %u",38913891+ err, group);38913892 put_bh(bitmap_bh);38923893 return 0;38933894 }···40454044 BUG_ON(pa->pa_type != MB_INODE_PA);40464045 group = ext4_get_group_number(sb, pa->pa_pstart);4047404640484048- err = ext4_mb_load_buddy(sb, group, &e4b);40474047+ err = ext4_mb_load_buddy_gfp(sb, group, &e4b,40484048+ GFP_NOFS|__GFP_NOFAIL);40494049 if (err) {40504050- ext4_error(sb, "Error loading buddy information for %u",40514051- group);40504050+ ext4_error(sb, "Error %d loading buddy information for %u",40514051+ err, group);40524052 continue;40534053 }40544054···43054303 spin_unlock(&lg->lg_prealloc_lock);4306430443074305 list_for_each_entry_safe(pa, tmp, &discard_list, u.pa_tmp_list) {43064306+ int err;4308430743094308 group = ext4_get_group_number(sb, pa->pa_pstart);43104310- if (ext4_mb_load_buddy(sb, group, &e4b)) {43114311- ext4_error(sb, "Error loading buddy information for %u",43124312- group);43094309+ err = ext4_mb_load_buddy_gfp(sb, group, &e4b,43104310+ GFP_NOFS|__GFP_NOFAIL);43114311+ if (err) {43124312+ ext4_error(sb, "Error %d loading buddy information for %u",43134313+ err, group);43134314 continue;43144315 }43154316 ext4_lock_group(sb, group);···5132512751335128 ret = ext4_mb_load_buddy(sb, group, &e4b);51345129 if (ret) {51355135- ext4_error(sb, "Error in loading buddy "51365136- "information for %u", group);51305130+ ext4_warning(sb, "Error %d loading buddy information for %u",51315131+ ret, group);51375132 return ret;51385133 }51395134 bitmap = e4b.bd_bitmap;
+5-8
fs/ext4/namei.c
···11551155static inline int search_dirblock(struct buffer_head *bh,11561156 struct inode *dir,11571157 struct ext4_filename *fname,11581158- const struct qstr *d_name,11591158 unsigned int offset,11601159 struct ext4_dir_entry_2 **res_dir)11611160{11621161 return ext4_search_dir(bh, bh->b_data, dir->i_sb->s_blocksize, dir,11631163- fname, d_name, offset, res_dir);11621162+ fname, offset, res_dir);11641163}1165116411661165/*···12611262 */12621263int ext4_search_dir(struct buffer_head *bh, char *search_buf, int buf_size,12631264 struct inode *dir, struct ext4_filename *fname,12641264- const struct qstr *d_name,12651265 unsigned int offset, struct ext4_dir_entry_2 **res_dir)12661266{12671267 struct ext4_dir_entry_2 * de;···1353135513541356 if (ext4_has_inline_data(dir)) {13551357 int has_inline_data = 1;13561356- ret = ext4_find_inline_entry(dir, &fname, d_name, res_dir,13581358+ ret = ext4_find_inline_entry(dir, &fname, res_dir,13571359 &has_inline_data);13581360 if (has_inline_data) {13591361 if (inlined)···14451447 goto next;14461448 }14471449 set_buffer_verified(bh);14481448- i = search_dirblock(bh, dir, &fname, d_name,14501450+ i = search_dirblock(bh, dir, &fname,14491451 block << EXT4_BLOCK_SIZE_BITS(sb), res_dir);14501452 if (i == 1) {14511453 EXT4_I(dir)->i_dir_start_lookup = block;···14861488{14871489 struct super_block * sb = dir->i_sb;14881490 struct dx_frame frames[2], *frame;14891489- const struct qstr *d_name = fname->usr_fname;14901491 struct buffer_head *bh;14911492 ext4_lblk_t block;14921493 int retval;···15021505 if (IS_ERR(bh))15031506 goto errout;1504150715051505- retval = search_dirblock(bh, dir, fname, d_name,15081508+ retval = search_dirblock(bh, dir, fname,15061509 block << EXT4_BLOCK_SIZE_BITS(sb),15071510 res_dir);15081511 if (retval == 1)···1527153015281531 bh = NULL;15291532errout:15301530- dxtrace(printk(KERN_DEBUG "%s not found\n", d_name->name));15331533+ dxtrace(printk(KERN_DEBUG "%s not found\n", fname->usr_fname->name));15311534success:15321535 dx_release(frames);15331536 return bh;
+8-9
fs/ext4/super.c
···848848{849849 int type;850850851851- if (ext4_has_feature_quota(sb)) {852852- dquot_disable(sb, -1,853853- DQUOT_USAGE_ENABLED | DQUOT_LIMITS_ENABLED);854854- } else {855855- /* Use our quota_off function to clear inode flags etc. */856856- for (type = 0; type < EXT4_MAXQUOTAS; type++)857857- ext4_quota_off(sb, type);858858- }851851+ /* Use our quota_off function to clear inode flags etc. */852852+ for (type = 0; type < EXT4_MAXQUOTAS; type++)853853+ ext4_quota_off(sb, type);859854}860855#else861856static inline void ext4_quota_off_umount(struct super_block *sb)···11741179 return res;11751180 }1176118111821182+ res = dquot_initialize(inode);11831183+ if (res)11841184+ return res;11771185retry:11781186 handle = ext4_journal_start(inode, EXT4_HT_MISC,11791187 ext4_jbd2_credits_xattr(inode));···54835485 goto out;5484548654855487 err = dquot_quota_off(sb, type);54865486- if (err)54885488+ if (err || ext4_has_feature_quota(sb))54875489 goto out_put;5488549054895491 inode_lock(inode);···55035505out_unlock:55045506 inode_unlock(inode);55055507out_put:55085508+ lockdep_set_quota_inode(inode, I_DATA_SEM_NORMAL);55065509 iput(inode);55075510 return err;55085511out:
+8
fs/ext4/xattr.c
···888888 else {889889 u32 ref;890890891891+ WARN_ON_ONCE(dquot_initialize_needed(inode));892892+891893 /* The old block is released after updating892894 the inode. */893895 error = dquot_alloc_block(inode,···955953 } else {956954 /* We need to allocate a new block */957955 ext4_fsblk_t goal, block;956956+957957+ WARN_ON_ONCE(dquot_initialize_needed(inode));958958959959 goal = ext4_group_first_block_no(sb,960960 EXT4_I(inode)->i_block_group);···11701166 return -EINVAL;11711167 if (strlen(name) > 255)11721168 return -ERANGE;11691169+11731170 ext4_write_lock_xattr(inode, &no_expand);1174117111751172 error = ext4_reserve_inode_write(handle, inode, &is.iloc);···12721267 int error, retries = 0;12731268 int credits = ext4_jbd2_credits_xattr(inode);1274126912701270+ error = dquot_initialize(inode);12711271+ if (error)12721272+ return error;12751273retry:12761274 handle = ext4_journal_start(inode, EXT4_HT_XATTR, credits);12771275 if (IS_ERR(handle)) {
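
fs/hugetlbfs/inode.c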
···200200 addr = ALIGN(addr, huge_page_size(h));201201 vma = find_vma(mm, addr);202202 if (TASK_SIZE - len >= addr &&203203- (!vma || addr + len <= vma->vm_start))203203+ (!vma || addr + len <= vm_start_gap(vma)))204204 return addr;205205 }206206
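
This hugetlbfs hunk belongs to the stack guard gap series: a candidate range must now end before the start of the next VMA's guard gap, not merely before vma->vm_start. A rough userspace model of the gap-aware bound (fixed gap assumed; the kernel default is 256 pages):

    #include <stdbool.h>
    #include <stdint.h>

    #define HYP_GUARD_GAP (256ULL << 12)    /* 256 pages of 4K */

    struct hyp_vma {
            uint64_t vm_start, vm_end;
            bool grows_down;                /* VM_GROWSDOWN */
    };

    /* Rough model of vm_start_gap(): for a downward-growing stack the
     * usable start is pulled back by the guard gap, so "addr + len <=
     * vm_start_gap(vma)" rejects mappings that would end inside the gap. */
    static uint64_t hyp_vm_start_gap(const struct hyp_vma *vma)
    {
            uint64_t start = vma->vm_start;

            if (vma->grows_down && start > HYP_GUARD_GAP)
                    start -= HYP_GUARD_GAP;
            return start;
    }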
+6
fs/jbd2/transaction.c
···680680681681 rwsem_release(&journal->j_trans_commit_map, 1, _THIS_IP_);682682 handle->h_buffer_credits = nblocks;683683+ /*684684+ * Restore the original nofs context because the journal restart685685+ * is basically the same thing as journal stop and start.686686+ * start_this_handle will start a new nofs context.687687+ */688688+ memalloc_nofs_restore(handle->saved_alloc_context);683689 ret = start_this_handle(journal, handle, gfp_mask);684690 return ret;685691}
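
The comment added above spells out the invariant: a journal restart is a stop plus a start, so the NOFS context saved when the handle was started must be restored before start_this_handle() saves a fresh one. Modelled with a thread-local flag (hypothetical names; upstream this is memalloc_nofs_save()/memalloc_nofs_restore()):

    #include <stdbool.h>

    static _Thread_local bool hyp_nofs_scope;

    static bool hyp_nofs_save(void)
    {
            bool old = hyp_nofs_scope;
            hyp_nofs_scope = true;
            return old;
    }

    static void hyp_nofs_restore(bool old)
    {
            hyp_nofs_scope = old;
    }

    struct hyp_handle { bool saved_scope; };

    /* restart = stop + start: restore the scope saved at start time, then
     * let the new start save a fresh one, keeping the pairing balanced. */
    static void hyp_restart(struct hyp_handle *h)
    {
            hyp_nofs_restore(h->saved_scope);   /* what the fix adds */
            h->saved_scope = hyp_nofs_save();   /* done inside start upstream */
    }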
···753753 * A single slot, so highest used slotid is either 0 or -1754754 */755755 nfs4_free_slot(tbl, slot);756756- nfs4_slot_tbl_drain_complete(tbl);757756 spin_unlock(&tbl->slot_tbl_lock);758757}759758
+24-27
fs/nfs/dir.c
···19461946}19471947EXPORT_SYMBOL_GPL(nfs_link);1948194819491949-static void19501950-nfs_complete_rename(struct rpc_task *task, struct nfs_renamedata *data)19511951-{19521952- struct dentry *old_dentry = data->old_dentry;19531953- struct dentry *new_dentry = data->new_dentry;19541954- struct inode *old_inode = d_inode(old_dentry);19551955- struct inode *new_inode = d_inode(new_dentry);19561956-19571957- nfs_mark_for_revalidate(old_inode);19581958-19591959- switch (task->tk_status) {19601960- case 0:19611961- if (new_inode != NULL)19621962- nfs_drop_nlink(new_inode);19631963- d_move(old_dentry, new_dentry);19641964- nfs_set_verifier(new_dentry,19651965- nfs_save_change_attribute(data->new_dir));19661966- break;19671967- case -ENOENT:19681968- nfs_dentry_handle_enoent(old_dentry);19691969- }19701970-}19711971-19721949/*19731950 * RENAME19741951 * FIXME: Some nfsds, like the Linux user space nfsd, may generate a···19761999{19772000 struct inode *old_inode = d_inode(old_dentry);19782001 struct inode *new_inode = d_inode(new_dentry);19791979- struct dentry *dentry = NULL;20022002+ struct dentry *dentry = NULL, *rehash = NULL;19802003 struct rpc_task *task;19812004 int error = -EBUSY;19822005···19992022 * To prevent any new references to the target during the20002023 * rename, we unhash the dentry in advance.20012024 */20022002- if (!d_unhashed(new_dentry))20252025+ if (!d_unhashed(new_dentry)) {20032026 d_drop(new_dentry);20272027+ rehash = new_dentry;20282028+ }2004202920052030 if (d_count(new_dentry) > 2) {20062031 int err;···20192040 goto out;2020204120212042 new_dentry = dentry;20432043+ rehash = NULL;20222044 new_inode = NULL;20232045 }20242046 }···20282048 if (new_inode != NULL)20292049 NFS_PROTO(new_inode)->return_delegation(new_inode);2030205020312031- task = nfs_async_rename(old_dir, new_dir, old_dentry, new_dentry,20322032- nfs_complete_rename);20512051+ task = nfs_async_rename(old_dir, new_dir, old_dentry, new_dentry, NULL);20332052 if (IS_ERR(task)) {20342053 error = PTR_ERR(task);20352054 goto out;···20382059 if (error == 0)20392060 error = task->tk_status;20402061 rpc_put_task(task);20622062+ nfs_mark_for_revalidate(old_inode);20412063out:20642064+ if (rehash)20652065+ d_rehash(rehash);20422066 trace_nfs_rename_exit(old_dir, old_dentry,20432067 new_dir, new_dentry, error);20682068+ if (!error) {20692069+ if (new_inode != NULL)20702070+ nfs_drop_nlink(new_inode);20712071+ /*20722072+ * The d_move() should be here instead of in an async RPC completion20732073+ * handler because we need the proper locks to move the dentry. If20742074+ * we're interrupted by a signal, the async RPC completion handler20752075+ * should mark the directories for revalidation.20762076+ */20772077+ d_move(old_dentry, new_dentry);20782078+ nfs_set_verifier(new_dentry,20792079+ nfs_save_change_attribute(new_dir));20802080+ } else if (error == -ENOENT)20812081+ nfs_dentry_handle_enoent(old_dentry);20822082+20442083 /* new dentry created? */20452084 if (dentry)20462085 dput(dentry);
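
fs/nfs/nfs42proc.c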
···177177 if (status)178178 goto out;179179180180- if (!nfs_write_verifier_cmp(&res->write_res.verifier.verifier,180180+ if (nfs_write_verifier_cmp(&res->write_res.verifier.verifier,181181 &res->commit_res.verf->verifier)) {182182 status = -EAGAIN;183183 goto out;
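
nfs_write_verifier_cmp() follows memcmp() semantics and returns nonzero when the verifiers differ, so the old "!nfs_write_verifier_cmp(...)" requested a retry on a match instead of a mismatch. The corrected polarity in plain C (hypothetical types):

    #include <string.h>

    struct hyp_verifier { unsigned char data[8]; };

    /* memcmp-style: 0 means equal. */
    static int hyp_verifier_cmp(const struct hyp_verifier *a,
                                const struct hyp_verifier *b)
    {
            return memcmp(a->data, b->data, sizeof(a->data));
    }

    /* Retry only when the write and commit verifiers differ, i.e. when
     * the comparator returns nonzero -- the sense the fix restores. */
    static int check_commit(const struct hyp_verifier *write_verf,
                            const struct hyp_verifier *commit_verf)
    {
            if (hyp_verifier_cmp(write_verf, commit_verf))
                    return -1;      /* -EAGAIN upstream */
            return 0;
    }

fs/nfs/pnfs.c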
···20942094}20952095EXPORT_SYMBOL_GPL(pnfs_generic_pg_check_layout);2096209620972097+/*20982098+ * Check for any intersection between the request and the pgio->pg_lseg,20992099+ * and if none, put this pgio->pg_lseg away.21002100+ */21012101+static void21022102+pnfs_generic_pg_check_range(struct nfs_pageio_descriptor *pgio, struct nfs_page *req)21032103+{21042104+ if (pgio->pg_lseg && !pnfs_lseg_request_intersecting(pgio->pg_lseg, req)) {21052105+ pnfs_put_lseg(pgio->pg_lseg);21062106+ pgio->pg_lseg = NULL;21072107+ }21082108+}21092109+20972110void20982111pnfs_generic_pg_init_read(struct nfs_pageio_descriptor *pgio, struct nfs_page *req)20992112{21002113 u64 rd_size = req->wb_bytes;2101211421022115 pnfs_generic_pg_check_layout(pgio);21162116+ pnfs_generic_pg_check_range(pgio, req);21032117 if (pgio->pg_lseg == NULL) {21042118 if (pgio->pg_dreq == NULL)21052119 rd_size = i_size_read(pgio->pg_inode) - req_offset(req);···21452131 struct nfs_page *req, u64 wb_size)21462132{21472133 pnfs_generic_pg_check_layout(pgio);21342134+ pnfs_generic_pg_check_range(pgio, req);21482135 if (pgio->pg_lseg == NULL) {21492136 pgio->pg_lseg = pnfs_update_layout(pgio->pg_inode,21502137 req->wb_context,···22062191 seg_end = pnfs_end_offset(pgio->pg_lseg->pls_range.offset,22072192 pgio->pg_lseg->pls_range.length);22082193 req_start = req_offset(req);22092209- WARN_ON_ONCE(req_start >= seg_end);21942194+22102195 /* start of request is past the last byte of this segment */22112211- if (req_start >= seg_end) {22122212- /* reference the new lseg */22132213- if (pgio->pg_ops->pg_cleanup)22142214- pgio->pg_ops->pg_cleanup(pgio);22152215- if (pgio->pg_ops->pg_init)22162216- pgio->pg_ops->pg_init(pgio, req);21962196+ if (req_start >= seg_end)22172197 return 0;22182218- }2219219822202199 /* adjust 'size' iff there are fewer bytes left in the22212200 * segment than what nfs_generic_pg_test returned */
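
pnfs_generic_pg_check_range() above drops the cached layout segment once a request no longer intersects it. The interval test reduces to the usual half-open overlap check (hypothetical helper; upstream the test is pnfs_lseg_request_intersecting()):

    #include <stdbool.h>
    #include <stdint.h>

    /* Half-open intervals [start, end) intersect iff each starts before
     * the other ends; when this is false for the lseg and the request,
     * pgio->pg_lseg is put and looked up afresh. */
    static bool ranges_intersect(uint64_t a_start, uint64_t a_end,
                                 uint64_t b_start, uint64_t b_end)
    {
            return a_start < b_end && b_start < a_end;
    }

fs/nfs/super.c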
···23012301/*23022302 * Initialise the common bits of the superblock23032303 */23042304-inline void nfs_initialise_sb(struct super_block *sb)23042304+static void nfs_initialise_sb(struct super_block *sb)23052305{23062306 struct nfs_server *server = NFS_SB(sb);23072307···23482348/*23492349 * Finish setting up a cloned NFS2/3/4 superblock23502350 */23512351-void nfs_clone_super(struct super_block *sb, struct nfs_mount_info *mount_info)23512351+static void nfs_clone_super(struct super_block *sb,23522352+ struct nfs_mount_info *mount_info)23522353{23532354 const struct super_block *old_sb = mount_info->cloned->sb;23542355 struct nfs_server *server = NFS_SB(sb);
+6-17
fs/nfsd/nfs3xdr.c
···334334 if (!p)335335 return 0;336336 p = xdr_decode_hyper(p, &args->offset);337337+337338 args->count = ntohl(*p++);338338-339339- if (!xdr_argsize_check(rqstp, p))340340- return 0;341341-342339 len = min(args->count, max_blocksize);343340344341 /* set up the kvec */···349352 v++;350353 }351354 args->vlen = v;352352- return 1;355355+ return xdr_argsize_check(rqstp, p);353356}354357355358int···541544 p = decode_fh(p, &args->fh);542545 if (!p)543546 return 0;544544- if (!xdr_argsize_check(rqstp, p))545545- return 0;546547 args->buffer = page_address(*(rqstp->rq_next_page++));547548548548- return 1;549549+ return xdr_argsize_check(rqstp, p);549550}550551551552int···569574 args->verf = p; p += 2;570575 args->dircount = ~0;571576 args->count = ntohl(*p++);572572-573573- if (!xdr_argsize_check(rqstp, p))574574- return 0;575575-576577 args->count = min_t(u32, args->count, PAGE_SIZE);577578 args->buffer = page_address(*(rqstp->rq_next_page++));578579579579- return 1;580580+ return xdr_argsize_check(rqstp, p);580581}581582582583int···590599 args->dircount = ntohl(*p++);591600 args->count = ntohl(*p++);592601593593- if (!xdr_argsize_check(rqstp, p))594594- return 0;595595-596602 len = args->count = min(args->count, max_blocksize);597603 while (len > 0) {598604 struct page *p = *(rqstp->rq_next_page++);···597609 args->buffer = page_address(p);598610 len -= PAGE_SIZE;599611 }600600- return 1;612612+613613+ return xdr_argsize_check(rqstp, p);601614}602615603616int
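
The nfs3xdr decoders above now consume every field first and make xdr_argsize_check(rqstp, p) the last word, so the final cursor position covers all preceding reads instead of only those made before an early check. The pattern in miniature (userspace sketch; the backing buffer is assumed padded, as RPC receive pages are, so the reads themselves stay in bounds and the check only decides whether the values may be trusted):

    #include <stdbool.h>
    #include <stdint.h>

    struct hyp_buf { const uint32_t *base, *end; }; /* received RPC data */

    /* Model of xdr_argsize_check(): the decode is valid only if the cursor
     * never ran past the received data.  Calling it once, after the last
     * field, validates every read that preceded it. */
    static bool hyp_argsize_check(const struct hyp_buf *buf, const uint32_t *p)
    {
            return p >= buf->base && p <= buf->end;
    }

    static bool decode_two_words(const struct hyp_buf *buf,
                                 uint32_t *a, uint32_t *b)
    {
            const uint32_t *p = buf->base;

            *a = *p++;      /* upstream these reads are ntohl(*p++) */
            *b = *p++;
            return hyp_argsize_check(buf, p);   /* check last, as in the fix */
    }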
+6-7
fs/nfsd/nfs4proc.c
···17691769 opdesc->op_get_currentstateid(cstate, &op->u);17701770 op->status = opdesc->op_func(rqstp, cstate, &op->u);1771177117721772+ /* Only from SEQUENCE */17731773+ if (cstate->status == nfserr_replay_cache) {17741774+ dprintk("%s NFS4.1 replay from cache\n", __func__);17751775+ status = op->status;17761776+ goto out;17771777+ }17721778 if (!op->status) {17731779 if (opdesc->op_set_currentstateid)17741780 opdesc->op_set_currentstateid(cstate, &op->u);···17851779 if (need_wrongsec_check(rqstp))17861780 op->status = check_nfsd_access(current_fh->fh_export, rqstp);17871781 }17881788-17891782encode_op:17901790- /* Only from SEQUENCE */17911791- if (cstate->status == nfserr_replay_cache) {17921792- dprintk("%s NFS4.1 replay from cache\n", __func__);17931793- status = op->status;17941794- goto out;17951795- }17961783 if (op->status == nfserr_replay_me) {17971784 op->replay = &cstate->replay_owner->so_replay;17981785 nfsd4_encode_replay(&resp->xdr, op);
+3-10
fs/nfsd/nfsxdr.c
···257257 len = args->count = ntohl(*p++);258258 p++; /* totalcount - unused */259259260260- if (!xdr_argsize_check(rqstp, p))261261- return 0;262262-263260 len = min_t(unsigned int, len, NFSSVC_MAXBLKSIZE_V2);264261265262 /* set up somewhere to store response.···272275 v++;273276 }274277 args->vlen = v;275275- return 1;278278+ return xdr_argsize_check(rqstp, p);276279}277280278281int···362365 p = decode_fh(p, &args->fh);363366 if (!p)364367 return 0;365365- if (!xdr_argsize_check(rqstp, p))366366- return 0;367368 args->buffer = page_address(*(rqstp->rq_next_page++));368369369369- return 1;370370+ return xdr_argsize_check(rqstp, p);370371}371372372373int···402407 args->cookie = ntohl(*p++);403408 args->count = ntohl(*p++);404409 args->count = min_t(u32, args->count, PAGE_SIZE);405405- if (!xdr_argsize_check(rqstp, p))406406- return 0;407410 args->buffer = page_address(*(rqstp->rq_next_page++));408411409409- return 1;412412+ return xdr_argsize_check(rqstp, p);410413}411414412415/*
+1-1
fs/ntfs/namei.c
···159159 PTR_ERR(dent_inode));160160 kfree(name);161161 /* Return the error code. */162162- return (struct dentry *)dent_inode;162162+ return ERR_CAST(dent_inode);163163 }164164 /* It is guaranteed that @name is no longer allocated at this point. */165165 if (MREF_ERR(mref) == -ENOENT) {
+4
fs/ocfs2/dlmglue.c
···25912591	struct ocfs2_lock_res *lockres;2592259225932593	lockres = &OCFS2_I(inode)->ip_inode_lockres;25942594+	/* had_lock means that the current process already took the cluster25952595+	 * lock. If had_lock is 1, we have nothing to do here, and25962596+	 * it will get unlocked where we got the lock.25972597+	 */25942598	if (!had_lock) {25952599		ocfs2_remove_holder(lockres, oh);25962600		ocfs2_inode_unlock(inode, ex);
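
fs/overlayfs/Kconfig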
···11config OVERLAY_FS22 tristate "Overlay filesystem support"33+ select EXPORTFS34 help45 An overlay filesystem combines two filesystems - an 'upper' filesystem56 and a 'lower' filesystem. When a name exists in both filesystems, the
+37-22
fs/overlayfs/copy_up.c
···300300 return PTR_ERR(fh);301301 }302302303303- err = ovl_do_setxattr(upper, OVL_XATTR_ORIGIN, fh, fh ? fh->len : 0, 0);303303+ /*304304+ * Do not fail when upper doesn't support xattrs.305305+ */306306+ err = ovl_check_setxattr(dentry, upper, OVL_XATTR_ORIGIN, fh,307307+ fh ? fh->len : 0, 0);304308 kfree(fh);305309306310 return err;···330326 .link = link331327 };332328333333- upper = lookup_one_len(dentry->d_name.name, upperdir,334334- dentry->d_name.len);335335- err = PTR_ERR(upper);336336- if (IS_ERR(upper))337337- goto out;338338-339329 err = security_inode_copy_up(dentry, &new_creds);340330 if (err < 0)341341- goto out1;331331+ goto out;342332343333 if (new_creds)344334 old_creds = override_creds(new_creds);···340342 if (tmpfile)341343 temp = ovl_do_tmpfile(upperdir, stat->mode);342344 else343343- temp = ovl_lookup_temp(workdir, dentry);344344- err = PTR_ERR(temp);345345- if (IS_ERR(temp))346346- goto out1;347347-345345+ temp = ovl_lookup_temp(workdir);348346 err = 0;349349- if (!tmpfile)347347+ if (IS_ERR(temp)) {348348+ err = PTR_ERR(temp);349349+ temp = NULL;350350+ }351351+352352+ if (!err && !tmpfile)350353 err = ovl_create_real(wdir, temp, &cattr, NULL, true);351354352355 if (new_creds) {···356357 }357358358359 if (err)359359- goto out2;360360+ goto out;360361361362 if (S_ISREG(stat->mode)) {362363 struct path upperpath;···392393 /*393394 * Store identifier of lower inode in upper inode xattr to394395 * allow lookup of the copy up origin inode.396396+ *397397+ * Don't set origin when we are breaking the association with a lower398398+ * hard link.395399 */396396- err = ovl_set_origin(dentry, lowerpath->dentry, temp);397397- if (err)400400+ if (S_ISDIR(stat->mode) || stat->nlink == 1) {401401+ err = ovl_set_origin(dentry, lowerpath->dentry, temp);402402+ if (err)403403+ goto out_cleanup;404404+ }405405+406406+ upper = lookup_one_len(dentry->d_name.name, upperdir,407407+ dentry->d_name.len);408408+ if (IS_ERR(upper)) {409409+ err = PTR_ERR(upper);410410+ upper = NULL;398411 goto out_cleanup;412412+ }399413400414 if (tmpfile)401415 err = ovl_do_link(temp, udir, upper, true);···423411424412 /* Restore timestamps on parent (best effort) */425413 ovl_set_timestamps(upperdir, pstat);426426-out2:427427- dput(temp);428428-out1:429429- dput(upper);430414out:415415+ dput(temp);416416+ dput(upper);431417 return err;432418433419out_cleanup:434420 if (!tmpfile)435421 ovl_cleanup(wdir, temp);436436- goto out2;422422+ goto out;437423}438424439425/*···463453464454 ovl_path_upper(parent, &parentpath);465455 upperdir = parentpath.dentry;456456+457457+ /* Mark parent "impure" because it may now contain non-pure upper */458458+ err = ovl_set_impure(parent, upperdir);459459+ if (err)460460+ return err;466461467462 err = vfs_getattr(&parentpath, &pstat,468463 STATX_ATIME | STATX_MTIME, AT_STATX_SYNC_AS_STAT);
+47-14
fs/overlayfs/dir.c
···4141 }4242}43434444-struct dentry *ovl_lookup_temp(struct dentry *workdir, struct dentry *dentry)4444+struct dentry *ovl_lookup_temp(struct dentry *workdir)4545{4646 struct dentry *temp;4747 char name[20];···6868 struct dentry *whiteout;6969 struct inode *wdir = workdir->d_inode;70707171- whiteout = ovl_lookup_temp(workdir, dentry);7171+ whiteout = ovl_lookup_temp(workdir);7272 if (IS_ERR(whiteout))7373 return whiteout;7474···127127 return err;128128}129129130130-static int ovl_set_opaque(struct dentry *dentry, struct dentry *upperdentry)130130+static int ovl_set_opaque_xerr(struct dentry *dentry, struct dentry *upper,131131+ int xerr)131132{132133 int err;133134134134- err = ovl_do_setxattr(upperdentry, OVL_XATTR_OPAQUE, "y", 1, 0);135135+ err = ovl_check_setxattr(dentry, upper, OVL_XATTR_OPAQUE, "y", 1, xerr);135136 if (!err)136137 ovl_dentry_set_opaque(dentry);137138138139 return err;140140+}141141+142142+static int ovl_set_opaque(struct dentry *dentry, struct dentry *upperdentry)143143+{144144+ /*145145+ * Fail with -EIO when trying to create opaque dir and upper doesn't146146+ * support xattrs. ovl_rename() calls ovl_set_opaque_xerr(-EXDEV) to147147+ * return a specific error for noxattr case.148148+ */149149+ return ovl_set_opaque_xerr(dentry, upperdentry, -EIO);139150}140151141152/* Common operations required to be done after creation of file on upper */···171160static bool ovl_type_merge(struct dentry *dentry)172161{173162 return OVL_TYPE_MERGE(ovl_path_type(dentry));163163+}164164+165165+static bool ovl_type_origin(struct dentry *dentry)166166+{167167+ return OVL_TYPE_ORIGIN(ovl_path_type(dentry));174168}175169176170static int ovl_create_upper(struct dentry *dentry, struct inode *inode,···266250 if (upper->d_parent->d_inode != udir)267251 goto out_unlock;268252269269- opaquedir = ovl_lookup_temp(workdir, dentry);253253+ opaquedir = ovl_lookup_temp(workdir);270254 err = PTR_ERR(opaquedir);271255 if (IS_ERR(opaquedir))272256 goto out_unlock;···398382 if (err)399383 goto out;400384401401- newdentry = ovl_lookup_temp(workdir, dentry);385385+ newdentry = ovl_lookup_temp(workdir);402386 err = PTR_ERR(newdentry);403387 if (IS_ERR(newdentry))404388 goto out_unlock;···862846 if (IS_ERR(redirect))863847 return PTR_ERR(redirect);864848865865- err = ovl_do_setxattr(ovl_dentry_upper(dentry), OVL_XATTR_REDIRECT,866866- redirect, strlen(redirect), 0);849849+ err = ovl_check_setxattr(dentry, ovl_dentry_upper(dentry),850850+ OVL_XATTR_REDIRECT,851851+ redirect, strlen(redirect), -EXDEV);867852 if (!err) {868853 spin_lock(&dentry->d_lock);869854 ovl_dentry_set_redirect(dentry, redirect);870855 spin_unlock(&dentry->d_lock);871856 } else {872857 kfree(redirect);873873- if (err == -EOPNOTSUPP)874874- ovl_clear_redirect_dir(dentry->d_sb);875875- else876876- pr_warn_ratelimited("overlay: failed to set redirect (%i)\n", err);858858+ pr_warn_ratelimited("overlay: failed to set redirect (%i)\n", err);877859 /* Fall back to userspace copy-up */878860 err = -EXDEV;879861 }···957943 old_upperdir = ovl_dentry_upper(old->d_parent);958944 new_upperdir = ovl_dentry_upper(new->d_parent);959945946946+ if (!samedir) {947947+ /*948948+ * When moving a merge dir or non-dir with copy up origin into949949+ * a new parent, we are marking the new parent dir "impure".950950+ * When ovl_iterate() iterates an "impure" upper dir, it will951951+ * lookup the origin inodes of the entries to fill d_ino.952952+ */953953+ if (ovl_type_origin(old)) {954954+ err = ovl_set_impure(new->d_parent, new_upperdir);955955+ if 
(err)956956+ goto out_revert_creds;957957+ }958958+ if (!overwrite && ovl_type_origin(new)) {959959+ err = ovl_set_impure(old->d_parent, old_upperdir);960960+ if (err)961961+ goto out_revert_creds;962962+ }963963+ }964964+960965 trap = lock_rename(new_upperdir, old_upperdir);961966962967 olddentry = lookup_one_len(old->d_name.name, old_upperdir,···1025992 if (ovl_type_merge_or_lower(old))1026993 err = ovl_set_redirect(old, samedir);1027994 else if (!old_opaque && ovl_type_merge(new->d_parent))10281028- err = ovl_set_opaque(old, olddentry);995995+ err = ovl_set_opaque_xerr(old, olddentry, -EXDEV);1029996 if (err)1030997 goto out_dput;1031998 }···10331000 if (ovl_type_merge_or_lower(new))10341001 err = ovl_set_redirect(new, samedir);10351002 else if (!new_opaque && ovl_type_merge(old->d_parent))10361036- err = ovl_set_opaque(new, newdentry);10031003+ err = ovl_set_opaque_xerr(new, newdentry, -EXDEV);10371004 if (err)10381005 goto out_dput;10391006 }
+11-1
fs/overlayfs/inode.c
···240240 return res;241241}242242243243+static bool ovl_can_list(const char *s)244244+{245245+ /* List all non-trusted xatts */246246+ if (strncmp(s, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) != 0)247247+ return true;248248+249249+ /* Never list trusted.overlay, list other trusted for superuser only */250250+ return !ovl_is_private_xattr(s) && capable(CAP_SYS_ADMIN);251251+}252252+243253ssize_t ovl_listxattr(struct dentry *dentry, char *list, size_t size)244254{245255 struct dentry *realdentry = ovl_dentry_real(dentry);···273263 return -EIO;274264275265 len -= slen;276276- if (ovl_is_private_xattr(s)) {266266+ if (!ovl_can_list(s)) {277267 res -= slen;278268 memmove(s, s + slen, len);279269 } else {
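
ovl_can_list() above implements a three-way policy: names outside the trusted namespace are always listed, trusted.overlay.* never is, and the rest of trusted.* shows up only for CAP_SYS_ADMIN. A userspace rendering with the capability check replaced by a flag:

    #include <stdbool.h>
    #include <string.h>

    #define TRUSTED_PREFIX     "trusted."
    #define OVL_PRIVATE_PREFIX "trusted.overlay."

    /* Mirrors ovl_can_list(): non-trusted xattrs for everyone, the
     * overlay-private prefix for no one, other trusted xattrs only for a
     * privileged caller. */
    static bool can_list(const char *name, bool privileged)
    {
            if (strncmp(name, TRUSTED_PREFIX, strlen(TRUSTED_PREFIX)) != 0)
                    return true;
            if (strncmp(name, OVL_PRIVATE_PREFIX,
                        strlen(OVL_PRIVATE_PREFIX)) == 0)
                    return false;
            return privileged;
    }

fs/ufs/balloc.c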
···8282 ufs_error (sb, "ufs_free_fragments",8383 "bit already cleared for fragment %u", i);8484 }8585-8585+8686+ inode_sub_bytes(inode, count << uspi->s_fshift);8687 fs32_add(sb, &ucg->cg_cs.cs_nffree, count);8788 uspi->cs_total.cs_nffree += count;8889 fs32_add(sb, &UFS_SB(sb)->fs_cs(cgno).cs_nffree, count);···185184 ufs_error(sb, "ufs_free_blocks", "freeing free fragment");186185 }187186 ubh_setblock(UCPI_UBH(ucpi), ucpi->c_freeoff, blkno);187187+ inode_sub_bytes(inode, uspi->s_fpb << uspi->s_fshift);188188 if ((UFS_SB(sb)->s_flags & UFS_CG_MASK) == UFS_CG_44BSD)189189 ufs_clusteracct (sb, ucpi, blkno, 1);190190···400398 /*401399 * There is not enough space for user on the device402400 */403403- if (!capable(CAP_SYS_RESOURCE) && ufs_freespace(uspi, UFS_MINFREE) <= 0) {404404- mutex_unlock(&UFS_SB(sb)->s_lock);405405- UFSD("EXIT (FAILED)\n");406406- return 0;401401+ if (unlikely(ufs_freefrags(uspi) <= uspi->s_root_blocks)) {402402+ if (!capable(CAP_SYS_RESOURCE)) {403403+ mutex_unlock(&UFS_SB(sb)->s_lock);404404+ UFSD("EXIT (FAILED)\n");405405+ return 0;406406+ }407407 }408408409409 if (goal >= uspi->s_size) ···423419 if (result) {424420 ufs_clear_frags(inode, result + oldcount,425421 newcount - oldcount, locked_page != NULL);422422+ *err = 0;426423 write_seqlock(&UFS_I(inode)->meta_lock);427424 ufs_cpu_to_data_ptr(sb, p, result);428428- write_sequnlock(&UFS_I(inode)->meta_lock);429429- *err = 0;430425 UFS_I(inode)->i_lastfrag =431426 max(UFS_I(inode)->i_lastfrag, fragment + count);427427+ write_sequnlock(&UFS_I(inode)->meta_lock);432428 }433429 mutex_unlock(&UFS_SB(sb)->s_lock);434430 UFSD("EXIT, result %llu\n", (unsigned long long)result);···441437 result = ufs_add_fragments(inode, tmp, oldcount, newcount);442438 if (result) {443439 *err = 0;440440+ read_seqlock_excl(&UFS_I(inode)->meta_lock);444441 UFS_I(inode)->i_lastfrag = max(UFS_I(inode)->i_lastfrag,445442 fragment + count);443443+ read_sequnlock_excl(&UFS_I(inode)->meta_lock);446444 ufs_clear_frags(inode, result + oldcount, newcount - oldcount,447445 locked_page != NULL);448446 mutex_unlock(&UFS_SB(sb)->s_lock);···455449 /*456450 * allocate new block and move data457451 */458458- switch (fs32_to_cpu(sb, usb1->fs_optim)) {459459- case UFS_OPTSPACE:452452+ if (fs32_to_cpu(sb, usb1->fs_optim) == UFS_OPTSPACE) {460453 request = newcount;461461- if (uspi->s_minfree < 5 || uspi->cs_total.cs_nffree462462- > uspi->s_dsize * uspi->s_minfree / (2 * 100))463463- break;464464- usb1->fs_optim = cpu_to_fs32(sb, UFS_OPTTIME);465465- break;466466- default:467467- usb1->fs_optim = cpu_to_fs32(sb, UFS_OPTTIME);468468-469469- case UFS_OPTTIME:454454+ if (uspi->cs_total.cs_nffree < uspi->s_space_to_time)455455+ usb1->fs_optim = cpu_to_fs32(sb, UFS_OPTTIME);456456+ } else {470457 request = uspi->s_fpb;471471- if (uspi->cs_total.cs_nffree < uspi->s_dsize *472472- (uspi->s_minfree - 2) / 100)473473- break;474474- usb1->fs_optim = cpu_to_fs32(sb, UFS_OPTTIME);475475- break;458458+ if (uspi->cs_total.cs_nffree > uspi->s_time_to_space)459459+ usb1->fs_optim = cpu_to_fs32(sb, UFS_OPTSPACE);476460 }477461 result = ufs_alloc_fragments (inode, cgno, goal, request, err);478462 if (result) {479463 ufs_clear_frags(inode, result + oldcount, newcount - oldcount,480464 locked_page != NULL);465465+ mutex_unlock(&UFS_SB(sb)->s_lock);481466 ufs_change_blocknr(inode, fragment - oldcount, oldcount,482467 uspi->s_sbbase + tmp,483468 uspi->s_sbbase + result, locked_page);469469+ *err = 0;484470 write_seqlock(&UFS_I(inode)->meta_lock);485471 ufs_cpu_to_data_ptr(sb, p, 
result);486486- write_sequnlock(&UFS_I(inode)->meta_lock);487487- *err = 0;488472 UFS_I(inode)->i_lastfrag = max(UFS_I(inode)->i_lastfrag,489473 fragment + count);490490- mutex_unlock(&UFS_SB(sb)->s_lock);474474+ write_sequnlock(&UFS_I(inode)->meta_lock);491475 if (newcount < request)492476 ufs_free_fragments (inode, result + newcount, request - newcount);493477 ufs_free_fragments (inode, tmp, oldcount);···489493 UFSD("EXIT (FAILED)\n");490494 return 0;491495} 496496+497497+static bool try_add_frags(struct inode *inode, unsigned frags)498498+{499499+ unsigned size = frags * i_blocksize(inode);500500+ spin_lock(&inode->i_lock);501501+ __inode_add_bytes(inode, size);502502+ if (unlikely((u32)inode->i_blocks != inode->i_blocks)) {503503+ __inode_sub_bytes(inode, size);504504+ spin_unlock(&inode->i_lock);505505+ return false;506506+ }507507+ spin_unlock(&inode->i_lock);508508+ return true;509509+}492510493511static u64 ufs_add_fragments(struct inode *inode, u64 fragment,494512 unsigned oldcount, unsigned newcount)···540530 for (i = oldcount; i < newcount; i++)541531 if (ubh_isclr (UCPI_UBH(ucpi), ucpi->c_freeoff, fragno + i))542532 return 0;533533+534534+ if (!try_add_frags(inode, count))535535+ return 0;543536 /*544537 * Block can be extended545538 */···660647 ubh_setbit (UCPI_UBH(ucpi), ucpi->c_freeoff, goal + i);661648 i = uspi->s_fpb - count;662649650650+ inode_sub_bytes(inode, i << uspi->s_fshift);663651 fs32_add(sb, &ucg->cg_cs.cs_nffree, i);664652 uspi->cs_total.cs_nffree += i;665653 fs32_add(sb, &UFS_SB(sb)->fs_cs(cgno).cs_nffree, i);···670656671657 result = ufs_bitmap_search (sb, ucpi, goal, allocsize);672658 if (result == INVBLOCK)659659+ return 0;660660+ if (!try_add_frags(inode, count))673661 return 0;674662 for (i = 0; i < count; i++)675663 ubh_clrbit (UCPI_UBH(ucpi), ucpi->c_freeoff, result + i);···732716 return INVBLOCK;733717 ucpi->c_rotor = result;734718gotit:719719+ if (!try_add_frags(inode, uspi->s_fpb))720720+ return 0;735721 blkno = ufs_fragstoblks(result);736722 ubh_clrblock (UCPI_UBH(ucpi), ucpi->c_freeoff, blkno);737723 if ((UFS_SB(sb)->s_flags & UFS_CG_MASK) == UFS_CG_44BSD)
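
try_add_frags() above detects i_blocks overflow by a round-trip through u32: the on-disk UFS block count is 32 bits wide, so if the updated value no longer fits it is backed out under the same lock. The truncation test by itself:

    #include <stdbool.h>
    #include <stdint.h>

    /* Add @delta to a counter that must stay representable in 32 bits,
     * mirroring the "(u32)inode->i_blocks != inode->i_blocks" rollback. */
    static bool add_u32_checked(uint64_t *blocks, uint64_t delta)
    {
            uint64_t next = *blocks + delta;

            if ((uint32_t)next != next)
                    return false;   /* would not fit on disk; roll back */
            *blocks = next;
            return true;
    }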
+52-44
fs/ufs/inode.c
···235235236236 p = ufs_get_direct_data_ptr(uspi, ufsi, block);237237 tmp = ufs_new_fragments(inode, p, lastfrag, ufs_data_ptr_to_cpu(sb, p),238238- new_size, err, locked_page);238238+ new_size - (lastfrag & uspi->s_fpbmask), err,239239+ locked_page);239240 return tmp != 0;240241}241242···285284 goal += uspi->s_fpb;286285 }287286 tmp = ufs_new_fragments(inode, p, ufs_blknum(new_fragment),288288- goal, uspi->s_fpb, err, locked_page);287287+ goal, nfrags, err, locked_page);289288290289 if (!tmp) {291290 *err = -ENOSPC;···401400 u64 phys64 = 0;402401 unsigned frag = fragment & uspi->s_fpbmask;403402404404- if (!create) {405405- phys64 = ufs_frag_map(inode, offsets, depth);406406- goto out;407407- }403403+ phys64 = ufs_frag_map(inode, offsets, depth);404404+ if (!create)405405+ goto done;408406407407+ if (phys64) {408408+ if (fragment >= UFS_NDIR_FRAGMENT)409409+ goto done;410410+ read_seqlock_excl(&UFS_I(inode)->meta_lock);411411+ if (fragment < UFS_I(inode)->i_lastfrag) {412412+ read_sequnlock_excl(&UFS_I(inode)->meta_lock);413413+ goto done;414414+ }415415+ read_sequnlock_excl(&UFS_I(inode)->meta_lock);416416+ }409417 /* This code entered only while writing ....? */410418411419 mutex_lock(&UFS_I(inode)->truncate_mutex);···458448 }459449 mutex_unlock(&UFS_I(inode)->truncate_mutex);460450 return err;451451+452452+done:453453+ if (phys64)454454+ map_bh(bh_result, sb, phys64 + frag);455455+ return 0;461456}462457463458static int ufs_writepage(struct page *page, struct writeback_control *wbc)···566551 */567552 inode->i_mode = mode = fs16_to_cpu(sb, ufs_inode->ui_mode);568553 set_nlink(inode, fs16_to_cpu(sb, ufs_inode->ui_nlink));569569- if (inode->i_nlink == 0) {570570- ufs_error (sb, "ufs_read_inode", "inode %lu has zero nlink\n", inode->i_ino);571571- return -1;572572- }554554+ if (inode->i_nlink == 0)555555+ return -ESTALE;573556574557 /*575558 * Linux now has 32-bit uid and gid, so we can support EFT.···576563 i_gid_write(inode, ufs_get_inode_gid(sb, ufs_inode));577564578565 inode->i_size = fs64_to_cpu(sb, ufs_inode->ui_size);579579- inode->i_atime.tv_sec = fs32_to_cpu(sb, ufs_inode->ui_atime.tv_sec);580580- inode->i_ctime.tv_sec = fs32_to_cpu(sb, ufs_inode->ui_ctime.tv_sec);581581- inode->i_mtime.tv_sec = fs32_to_cpu(sb, ufs_inode->ui_mtime.tv_sec);566566+ inode->i_atime.tv_sec = (signed)fs32_to_cpu(sb, ufs_inode->ui_atime.tv_sec);567567+ inode->i_ctime.tv_sec = (signed)fs32_to_cpu(sb, ufs_inode->ui_ctime.tv_sec);568568+ inode->i_mtime.tv_sec = (signed)fs32_to_cpu(sb, ufs_inode->ui_mtime.tv_sec);582569 inode->i_mtime.tv_nsec = 0;583570 inode->i_atime.tv_nsec = 0;584571 inode->i_ctime.tv_nsec = 0;···612599 */613600 inode->i_mode = mode = fs16_to_cpu(sb, ufs2_inode->ui_mode);614601 set_nlink(inode, fs16_to_cpu(sb, ufs2_inode->ui_nlink));615615- if (inode->i_nlink == 0) {616616- ufs_error (sb, "ufs_read_inode", "inode %lu has zero nlink\n", inode->i_ino);617617- return -1;618618- }602602+ if (inode->i_nlink == 0)603603+ return -ESTALE;619604620605 /*621606 * Linux now has 32-bit uid and gid, so we can support EFT.···653642 struct ufs_sb_private_info *uspi = UFS_SB(sb)->s_uspi;654643 struct buffer_head * bh;655644 struct inode *inode;656656- int err;645645+ int err = -EIO;657646658647 UFSD("ENTER, ino %lu\n", ino);659648···688677 err = ufs1_read_inode(inode,689678 ufs_inode + ufs_inotofsbo(inode->i_ino));690679 }691691-680680+ brelse(bh);692681 if (err)693682 goto bad_inode;683683+694684 inode->i_version++;695685 ufsi->i_lastfrag =696686 (inode->i_size + uspi->s_fsize - 1) >> 
uspi->s_fshift;···700688701689 ufs_set_inode_ops(inode);702690703703- brelse(bh);704704-705691 UFSD("EXIT\n");706692 unlock_new_inode(inode);707693 return inode;708694709695bad_inode:710696 iget_failed(inode);711711- return ERR_PTR(-EIO);697697+ return ERR_PTR(err);712698}713699714700static void ufs1_update_inode(struct inode *inode, struct ufs_inode *ufs_inode)···851841 truncate_inode_pages_final(&inode->i_data);852842 if (want_delete) {853843 inode->i_size = 0;854854- if (inode->i_blocks)844844+ if (inode->i_blocks &&845845+ (S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode) ||846846+ S_ISLNK(inode->i_mode)))855847 ufs_truncate_blocks(inode);848848+ ufs_update_inode(inode, inode_needs_sync(inode));856849 }857850858851 invalidate_inode_buffers(inode);···881868 ctx->to = from + count;882869}883870884884-#define DIRECT_BLOCK ((inode->i_size + uspi->s_bsize - 1) >> uspi->s_bshift)885871#define DIRECT_FRAGMENT ((inode->i_size + uspi->s_fsize - 1) >> uspi->s_fshift)886872887873static void ufs_trunc_direct(struct inode *inode)···11121100 return err;11131101}1114110211151115-static void __ufs_truncate_blocks(struct inode *inode)11031103+static void ufs_truncate_blocks(struct inode *inode)11161104{11171105 struct ufs_inode_info *ufsi = UFS_I(inode);11181106 struct super_block *sb = inode->i_sb;11191107 struct ufs_sb_private_info *uspi = UFS_SB(sb)->s_uspi;11201108 unsigned offsets[4];11211121- int depth = ufs_block_to_path(inode, DIRECT_BLOCK, offsets);11091109+ int depth;11221110 int depth2;11231111 unsigned i;11241112 struct ufs_buffer_head *ubh[3];11251113 void *p;11261114 u64 block;1127111511281128- if (!depth)11291129- return;11161116+ if (inode->i_size) {11171117+ sector_t last = (inode->i_size - 1) >> uspi->s_bshift;11181118+ depth = ufs_block_to_path(inode, last, offsets);11191119+ if (!depth)11201120+ return;11211121+ } else {11221122+ depth = 1;11231123+ }1130112411311131- /* find the last non-zero in offsets[] */11321125 for (depth2 = depth - 1; depth2; depth2--)11331133- if (offsets[depth2])11261126+ if (offsets[depth2] != uspi->s_apb - 1)11341127 break;1135112811361129 mutex_lock(&ufsi->truncate_mutex);···11441127 offsets[0] = UFS_IND_BLOCK;11451128 } else {11461129 /* get the blocks that should be partially emptied */11471147- p = ufs_get_direct_data_ptr(uspi, ufsi, offsets[0]);11301130+ p = ufs_get_direct_data_ptr(uspi, ufsi, offsets[0]++);11481131 for (i = 0; i < depth2; i++) {11491149- offsets[i]++; /* next branch is fully freed */11501132 block = ufs_data_ptr_to_cpu(sb, p);11511133 if (!block)11521134 break;···11561140 write_sequnlock(&ufsi->meta_lock);11571141 break;11581142 }11591159- p = ubh_get_data_ptr(uspi, ubh[i], offsets[i + 1]);11431143+ p = ubh_get_data_ptr(uspi, ubh[i], offsets[i + 1]++);11601144 }11611145 while (i--)11621146 free_branch_tail(inode, offsets[i + 1], ubh[i], depth - i - 1);···11711155 free_full_branch(inode, block, i - UFS_IND_BLOCK + 1);11721156 }11731157 }11581158+ read_seqlock_excl(&ufsi->meta_lock);11741159 ufsi->i_lastfrag = DIRECT_FRAGMENT;11601160+ read_sequnlock_excl(&ufsi->meta_lock);11751161 mark_inode_dirty(inode);11761162 mutex_unlock(&ufsi->truncate_mutex);11771163}···1201118312021184 truncate_setsize(inode, size);1203118512041204- __ufs_truncate_blocks(inode);11861186+ ufs_truncate_blocks(inode);12051187 inode->i_mtime = inode->i_ctime = current_time(inode);12061188 mark_inode_dirty(inode);12071189out:12081190 UFSD("EXIT: err %d\n", err);12091191 return err;12101210-}12111211-12121212-static void ufs_truncate_blocks(struct inode 
*inode)12131213-{12141214- if (!(S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode) ||12151215- S_ISLNK(inode->i_mode)))12161216- return;12171217- if (IS_APPEND(inode) || IS_IMMUTABLE(inode))12181218- return;12191219- __ufs_truncate_blocks(inode);12201192}1221119312221194int ufs_setattr(struct dentry *dentry, struct iattr *attr)
+69-27
fs/ufs/super.c
···
     usb3 = ubh_get_usb_third(uspi);
 
     if ((mtype == UFS_MOUNT_UFSTYPE_44BSD &&
-         (usb1->fs_flags & UFS_FLAGS_UPDATED)) ||
+         (usb2->fs_un.fs_u2.fs_maxbsize == usb1->fs_bsize)) ||
         mtype == UFS_MOUNT_UFSTYPE_UFS2) {
         /*we have statistic in different place, then usual*/
         uspi->cs_total.cs_ndir = fs64_to_cpu(sb, usb2->fs_un.fs_u2.cs_ndir);
···
     usb2 = ubh_get_usb_second(uspi);
     usb3 = ubh_get_usb_third(uspi);
 
-    if ((mtype == UFS_MOUNT_UFSTYPE_44BSD &&
-         (usb1->fs_flags & UFS_FLAGS_UPDATED)) ||
-        mtype == UFS_MOUNT_UFSTYPE_UFS2) {
+    if (mtype == UFS_MOUNT_UFSTYPE_UFS2) {
         /*we have statistic in different place, then usual*/
         usb2->fs_un.fs_u2.cs_ndir =
             cpu_to_fs64(sb, uspi->cs_total.cs_ndir);
···
             cpu_to_fs64(sb, uspi->cs_total.cs_nifree);
         usb3->fs_un1.fs_u2.cs_nffree =
             cpu_to_fs64(sb, uspi->cs_total.cs_nffree);
-    } else {
-        usb1->fs_cstotal.cs_ndir =
-            cpu_to_fs32(sb, uspi->cs_total.cs_ndir);
-        usb1->fs_cstotal.cs_nbfree =
-            cpu_to_fs32(sb, uspi->cs_total.cs_nbfree);
-        usb1->fs_cstotal.cs_nifree =
-            cpu_to_fs32(sb, uspi->cs_total.cs_nifree);
-        usb1->fs_cstotal.cs_nffree =
-            cpu_to_fs32(sb, uspi->cs_total.cs_nffree);
+        goto out;
     }
+
+    if (mtype == UFS_MOUNT_UFSTYPE_44BSD &&
+        (usb2->fs_un.fs_u2.fs_maxbsize == usb1->fs_bsize)) {
+        /* store stats in both old and new places */
+        usb2->fs_un.fs_u2.cs_ndir =
+            cpu_to_fs64(sb, uspi->cs_total.cs_ndir);
+        usb2->fs_un.fs_u2.cs_nbfree =
+            cpu_to_fs64(sb, uspi->cs_total.cs_nbfree);
+        usb3->fs_un1.fs_u2.cs_nifree =
+            cpu_to_fs64(sb, uspi->cs_total.cs_nifree);
+        usb3->fs_un1.fs_u2.cs_nffree =
+            cpu_to_fs64(sb, uspi->cs_total.cs_nffree);
+    }
+    usb1->fs_cstotal.cs_ndir = cpu_to_fs32(sb, uspi->cs_total.cs_ndir);
+    usb1->fs_cstotal.cs_nbfree = cpu_to_fs32(sb, uspi->cs_total.cs_nbfree);
+    usb1->fs_cstotal.cs_nifree = cpu_to_fs32(sb, uspi->cs_total.cs_nifree);
+    usb1->fs_cstotal.cs_nffree = cpu_to_fs32(sb, uspi->cs_total.cs_nffree);
+out:
     ubh_mark_buffer_dirty(USPI_UBH(uspi));
     ufs_print_super_stuff(sb, usb1, usb2, usb3);
     UFSD("EXIT\n");
···
     return;
 }
 
+static u64 ufs_max_bytes(struct super_block *sb)
+{
+    struct ufs_sb_private_info *uspi = UFS_SB(sb)->s_uspi;
+    int bits = uspi->s_apbshift;
+    u64 res;
+
+    if (bits > 21)
+        res = ~0ULL;
+    else
+        res = UFS_NDADDR + (1LL << bits) + (1LL << (2*bits)) +
+            (1LL << (3*bits));
+
+    if (res >= (MAX_LFS_FILESIZE >> uspi->s_bshift))
+        return MAX_LFS_FILESIZE;
+    return res << uspi->s_bshift;
+}
+
 static int ufs_fill_super(struct super_block *sb, void *data, int silent)
 {
     struct ufs_sb_info * sbi;
···
     uspi->s_dirblksize = UFS_SECTOR_SIZE;
     super_block_offset=UFS_SBLOCK;
 
-    /* Keep 2Gig file limit. Some UFS variants need to override
-       this but as I don't know which I'll let those in the know loosen
-       the rules */
+    sb->s_maxbytes = MAX_LFS_FILESIZE;
+
     switch (sbi->s_mount_opt & UFS_MOUNT_UFSTYPE) {
     case UFS_MOUNT_UFSTYPE_44BSD:
         UFSD("ufstype=44bsd\n");
···
         flags |= UFS_ST_SUN;
     }
 
+    if ((flags & UFS_ST_MASK) == UFS_ST_44BSD &&
+        uspi->s_postblformat == UFS_42POSTBLFMT) {
+        if (!silent)
+            pr_err("this is not a 44bsd filesystem");
+        goto failed;
+    }
+
     /*
      * Check ufs magic number
      */
···
     uspi->s_cgmask = fs32_to_cpu(sb, usb1->fs_cgmask);
 
     if ((flags & UFS_TYPE_MASK) == UFS_TYPE_UFS2) {
-        uspi->s_u2_size = fs64_to_cpu(sb, usb3->fs_un1.fs_u2.fs_size);
-        uspi->s_u2_dsize = fs64_to_cpu(sb, usb3->fs_un1.fs_u2.fs_dsize);
+        uspi->s_size = fs64_to_cpu(sb, usb3->fs_un1.fs_u2.fs_size);
+        uspi->s_dsize = fs64_to_cpu(sb, usb3->fs_un1.fs_u2.fs_dsize);
     } else {
         uspi->s_size = fs32_to_cpu(sb, usb1->fs_size);
         uspi->s_dsize = fs32_to_cpu(sb, usb1->fs_dsize);
···
     uspi->s_postbloff = fs32_to_cpu(sb, usb3->fs_postbloff);
     uspi->s_rotbloff = fs32_to_cpu(sb, usb3->fs_rotbloff);
 
+    uspi->s_root_blocks = mul_u64_u32_div(uspi->s_dsize,
+                                          uspi->s_minfree, 100);
+    if (uspi->s_minfree <= 5) {
+        uspi->s_time_to_space = ~0ULL;
+        uspi->s_space_to_time = 0;
+        usb1->fs_optim = cpu_to_fs32(sb, UFS_OPTSPACE);
+    } else {
+        uspi->s_time_to_space = (uspi->s_root_blocks / 2) + 1;
+        uspi->s_space_to_time = mul_u64_u32_div(uspi->s_dsize,
+                                      uspi->s_minfree - 2, 100) - 1;
+    }
+
     /*
      * Compute another frequently used values
      */
···
                "fast symlink size (%u)\n", uspi->s_maxsymlinklen);
         uspi->s_maxsymlinklen = maxsymlen;
     }
+    sb->s_maxbytes = ufs_max_bytes(sb);
     sb->s_max_links = UFS_LINK_MAX;
 
     inode = ufs_iget(sb, UFS_ROOTINO);
···
     mutex_lock(&UFS_SB(sb)->s_lock);
     usb3 = ubh_get_usb_third(uspi);
 
-    if ((flags & UFS_TYPE_MASK) == UFS_TYPE_UFS2) {
+    if ((flags & UFS_TYPE_MASK) == UFS_TYPE_UFS2)
         buf->f_type = UFS2_MAGIC;
-        buf->f_blocks = fs64_to_cpu(sb, usb3->fs_un1.fs_u2.fs_dsize);
-    } else {
+    else
         buf->f_type = UFS_MAGIC;
-        buf->f_blocks = uspi->s_dsize;
-    }
-    buf->f_bfree = ufs_blkstofrags(uspi->cs_total.cs_nbfree) +
-        uspi->cs_total.cs_nffree;
+
+    buf->f_blocks = uspi->s_dsize;
+    buf->f_bfree = ufs_freefrags(uspi);
     buf->f_ffree = uspi->cs_total.cs_nifree;
     buf->f_bsize = sb->s_blocksize;
-    buf->f_bavail = (buf->f_bfree > (((long)buf->f_blocks / 100) * uspi->s_minfree))
-        ? (buf->f_bfree - (((long)buf->f_blocks / 100) * uspi->s_minfree)) : 0;
+    buf->f_bavail = (buf->f_bfree > uspi->s_root_blocks)
+        ? (buf->f_bfree - uspi->s_root_blocks) : 0;
     buf->f_files = uspi->s_ncg * uspi->s_ipg;
     buf->f_namelen = UFS_MAXNAMLEN;
     buf->f_fsid.val[0] = (u32)id;
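The new ufs_max_bytes() above derives the limit from the indirect-block fan-out. As a rough worked example (a self-contained userspace sketch with assumed parameters, not kernel code: 12 direct pointers for UFS_NDADDR, 1024 pointers per indirect block, 4 KiB blocks):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t ndaddr = 12; /* direct pointers (assumed UFS_NDADDR) */
        const int bits = 10;        /* log2(pointers per indirect block), assumed */
        const int bshift = 12;      /* log2(block size), 4 KiB assumed */

        /* direct + single + double + triple indirect, as in ufs_max_bytes() */
        uint64_t blocks = ndaddr + (1ULL << bits) + (1ULL << (2 * bits)) +
                          (1ULL << (3 * bits));

        printf("addressable bytes: %llu\n",
               (unsigned long long)(blocks << bshift));
        return 0;
    }

ufs_fill_super() then clamps the result against MAX_LFS_FILESIZE before storing it in sb->s_maxbytes.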
fs/ufs/ufs_fs.h (+5, -4)
···
     __u32   s_dblkno;       /* offset of first data after cg */
     __u32   s_cgoffset;     /* cylinder group offset in cylinder */
     __u32   s_cgmask;       /* used to calc mod fs_ntrak */
-    __u32   s_size;         /* number of blocks (fragments) in fs */
-    __u32   s_dsize;        /* number of data blocks in fs */
-    __u64   s_u2_size;      /* ufs2: number of blocks (fragments) in fs */
-    __u64   s_u2_dsize;     /*ufs2: number of data blocks in fs */
+    __u64   s_size;         /* number of blocks (fragments) in fs */
+    __u64   s_dsize;        /* number of data blocks in fs */
     __u32   s_ncg;          /* number of cylinder groups */
     __u32   s_bsize;        /* size of basic blocks */
     __u32   s_fsize;        /* size of fragments */
···
     __u32   s_maxsymlinklen;/* upper limit on fast symlinks' size */
     __s32   fs_magic;       /* filesystem magic */
     unsigned int s_dirblksize;
+    __u64   s_root_blocks;
+    __u64   s_time_to_space;
+    __u64   s_space_to_time;
 };
 
 /*
fs/ufs/util.h
···
 #define ubh_blkmap(ubh,begin,bit) \
     ((*ubh_get_addr(ubh, (begin) + ((bit) >> 3)) >> ((bit) & 7)) & (0xff >> (UFS_MAXFRAG - uspi->s_fpb)))
 
-/*
- * Determine the number of available frags given a
- * percentage to hold in reserve.
- */
 static inline u64
-ufs_freespace(struct ufs_sb_private_info *uspi, int percentreserved)
+ufs_freefrags(struct ufs_sb_private_info *uspi)
 {
     return ufs_blkstofrags(uspi->cs_total.cs_nbfree) +
-        uspi->cs_total.cs_nffree -
-        (uspi->s_dsize * (percentreserved) / 100);
+        uspi->cs_total.cs_nffree;
 }
 
 /*
···
 static inline int _ubh_isblockset_(struct ufs_sb_private_info * uspi,
     struct ufs_buffer_head * ubh, unsigned begin, unsigned block)
 {
+    u8 mask;
     switch (uspi->s_fpb) {
     case 8:
         return (*ubh_get_addr (ubh, begin + block) == 0xff);
     case 4:
-        return (*ubh_get_addr (ubh, begin + (block >> 1)) == (0x0f << ((block & 0x01) << 2)));
+        mask = 0x0f << ((block & 0x01) << 2);
+        return (*ubh_get_addr (ubh, begin + (block >> 1)) & mask) == mask;
     case 2:
-        return (*ubh_get_addr (ubh, begin + (block >> 2)) == (0x03 << ((block & 0x03) << 1)));
+        mask = 0x03 << ((block & 0x03) << 1);
+        return (*ubh_get_addr (ubh, begin + (block >> 2)) & mask) == mask;
     case 1:
-        return (*ubh_get_addr (ubh, begin + (block >> 3)) == (0x01 << (block & 0x07)));
+        mask = 0x01 << (block & 0x07);
+        return (*ubh_get_addr (ubh, begin + (block >> 3)) & mask) == mask;
     }
     return 0;
 }
fs/userfaultfd.c (+21, -8)
···
     bool must_wait, return_to_userland;
     long blocking_state;
 
-    BUG_ON(!rwsem_is_locked(&mm->mmap_sem));
-
     ret = VM_FAULT_SIGBUS;
+
+    /*
+     * We don't do userfault handling for the final child pid update.
+     *
+     * We also don't do userfault handling during
+     * coredumping. hugetlbfs has the special
+     * follow_hugetlb_page() to skip missing pages in the
+     * FOLL_DUMP case, anon memory also checks for FOLL_DUMP with
+     * the no_page_table() helper in follow_page_mask(), but the
+     * shmem_vm_ops->fault method is invoked even during
+     * coredumping without mmap_sem and it ends up here.
+     */
+    if (current->flags & (PF_EXITING|PF_DUMPCORE))
+        goto out;
+
+    /*
+     * Coredumping runs without mmap_sem so we can only check that
+     * the mmap_sem is held, if PF_DUMPCORE was not set.
+     */
+    WARN_ON_ONCE(!rwsem_is_locked(&mm->mmap_sem));
+
     ctx = vmf->vma->vm_userfaultfd_ctx.ctx;
     if (!ctx)
         goto out;
···
      * caller of handle_userfault to release the mmap_sem.
      */
     if (unlikely(ACCESS_ONCE(ctx->released)))
-        goto out;
-
-    /*
-     * We don't do userfault handling for the final child pid update.
-     */
-    if (current->flags & PF_EXITING)
         goto out;
 
     /*
fs/xfs/xfs_aops.c (+5, -2)
···
      * The swap code (ab-)uses ->bmap to get a block mapping and then
      * bypasses the file system for actual I/O. We really can't allow
      * that on reflinks inodes, so we have to skip out here. And yes,
-     * 0 is the magic code for a bmap error..
+     * 0 is the magic code for a bmap error.
+     *
+     * Since we don't pass back blockdev info, we can't return bmap
+     * information for rt files either.
      */
-    if (xfs_is_reflink_inode(ip))
+    if (xfs_is_reflink_inode(ip) || XFS_IS_REALTIME_INODE(ip))
         return 0;
 
     filemap_write_and_wait(mapping);
fs/xfs/xfs_buf.c (+26, -12)
···
 xfs_buf_ioacct_inc(
     struct xfs_buf  *bp)
 {
-    if (bp->b_flags & (XBF_NO_IOACCT|_XBF_IN_FLIGHT))
+    if (bp->b_flags & XBF_NO_IOACCT)
         return;
 
     ASSERT(bp->b_flags & XBF_ASYNC);
-    bp->b_flags |= _XBF_IN_FLIGHT;
-    percpu_counter_inc(&bp->b_target->bt_io_count);
+    spin_lock(&bp->b_lock);
+    if (!(bp->b_state & XFS_BSTATE_IN_FLIGHT)) {
+        bp->b_state |= XFS_BSTATE_IN_FLIGHT;
+        percpu_counter_inc(&bp->b_target->bt_io_count);
+    }
+    spin_unlock(&bp->b_lock);
 }
 
 /*
···
  * freed and unaccount from the buftarg.
  */
 static inline void
+__xfs_buf_ioacct_dec(
+    struct xfs_buf  *bp)
+{
+    lockdep_assert_held(&bp->b_lock);
+
+    if (bp->b_state & XFS_BSTATE_IN_FLIGHT) {
+        bp->b_state &= ~XFS_BSTATE_IN_FLIGHT;
+        percpu_counter_dec(&bp->b_target->bt_io_count);
+    }
+}
+
+static inline void
 xfs_buf_ioacct_dec(
     struct xfs_buf  *bp)
 {
-    if (!(bp->b_flags & _XBF_IN_FLIGHT))
-        return;
-
-    bp->b_flags &= ~_XBF_IN_FLIGHT;
-    percpu_counter_dec(&bp->b_target->bt_io_count);
+    spin_lock(&bp->b_lock);
+    __xfs_buf_ioacct_dec(bp);
+    spin_unlock(&bp->b_lock);
 }
 
 /*
···
      * unaccounted (released to LRU) before that occurs. Drop in-flight
      * status now to preserve accounting consistency.
      */
-    xfs_buf_ioacct_dec(bp);
-
     spin_lock(&bp->b_lock);
+    __xfs_buf_ioacct_dec(bp);
+
     atomic_set(&bp->b_lru_ref, 0);
     if (!(bp->b_state & XFS_BSTATE_DISPOSE) &&
         (list_lru_del(&bp->b_target->bt_lru, &bp->b_lru)))
···
          * ensures the decrement occurs only once per-buf.
          */
         if ((atomic_read(&bp->b_hold) == 1) && !list_empty(&bp->b_lru))
-            xfs_buf_ioacct_dec(bp);
+            __xfs_buf_ioacct_dec(bp);
         goto out_unlock;
     }
 
     /* the last reference has been dropped ... */
-    xfs_buf_ioacct_dec(bp);
+    __xfs_buf_ioacct_dec(bp);
     if (!(bp->b_flags & XBF_STALE) && atomic_read(&bp->b_lru_ref)) {
         /*
          * If the buffer is added to the LRU take a new reference to the
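The point of this rework is to make the in-flight accounting idempotent: the counter only moves when the state bit actually flips, and both are protected by the same lock. Distilled to its core, the pattern looks like this (a generic userspace sketch, not the XFS code):

    #include <pthread.h>

    struct tracked {
        pthread_mutex_t lock;
        unsigned int in_flight; /* state bit: accounted or not */
        long io_count;          /* stands in for bt_io_count */
    };

    /* Safe to call any number of times; accounts only on the 0->1 edge. */
    static void acct_inc(struct tracked *t)
    {
        pthread_mutex_lock(&t->lock);
        if (!t->in_flight) {
            t->in_flight = 1;
            t->io_count++;
        }
        pthread_mutex_unlock(&t->lock);
    }

    /* Likewise accounts only on the 1->0 edge. */
    static void acct_dec(struct tracked *t)
    {
        pthread_mutex_lock(&t->lock);
        if (t->in_flight) {
            t->in_flight = 0;
            t->io_count--;
        }
        pthread_mutex_unlock(&t->lock);
    }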
fs/xfs/xfs_buf.h (+2, -3)
···
 #define _XBF_KMEM        (1 << 21)/* backed by heap memory */
 #define _XBF_DELWRI_Q    (1 << 22)/* buffer on a delwri queue */
 #define _XBF_COMPOUND    (1 << 23)/* compound buffer */
-#define _XBF_IN_FLIGHT   (1 << 25) /* I/O in flight, for accounting purposes */
 
 typedef unsigned int xfs_buf_flags_t;
 
···
     { _XBF_PAGES,       "PAGES" }, \
     { _XBF_KMEM,        "KMEM" }, \
     { _XBF_DELWRI_Q,    "DELWRI_Q" }, \
-    { _XBF_COMPOUND,    "COMPOUND" }, \
-    { _XBF_IN_FLIGHT,   "IN_FLIGHT" }
+    { _XBF_COMPOUND,    "COMPOUND" }
 
 
 /*
  * Internal state flags.
  */
 #define XFS_BSTATE_DISPOSE   (1 << 0)    /* buffer being discarded */
+#define XFS_BSTATE_IN_FLIGHT (1 << 1)    /* I/O in flight */
 
 /*
  * The xfs_buftarg contains 2 notions of "sector size" -
include/acpi/actbl.h
···
     u16 validation_count;
 };
 
+/*
+ * Maximum value of the validation_count field in struct acpi_table_desc.
+ * When reached, validation_count cannot be changed any more and the table will
+ * be permanently regarded as validated.
+ *
+ * This is to prevent situations in which unbalanced table get/put operations
+ * may cause premature table unmapping in the OS to happen.
+ *
+ * The maximum validation count can be defined to any value, but should be
+ * greater than the maximum number of OS early stage mapping slots to avoid
+ * leaking early stage table mappings to the late stage.
+ */
+#define ACPI_MAX_TABLE_VALIDATIONS ACPI_UINT16_MAX
+
 /* Masks for Flags field above */
 
 #define ACPI_TABLE_ORIGIN_EXTERNAL_VIRTUAL (0) /* Virtual address, external maintained */
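A minimal sketch of the saturating get/put discipline the new constant enables (hypothetical helpers for illustration; the real ACPICA accessors differ):

    /* Hypothetical sketch: saturate instead of wrapping, so an unbalanced
     * put can never unmap a table that is still referenced. */
    static void table_get(struct acpi_table_desc *desc)
    {
        if (desc->validation_count < ACPI_MAX_TABLE_VALIDATIONS)
            desc->validation_count++;
    }

    static void table_put(struct acpi_table_desc *desc)
    {
        /* once saturated, the count is frozen: never decrement it again */
        if (desc->validation_count &&
            desc->validation_count < ACPI_MAX_TABLE_VALIDATIONS)
            desc->validation_count--;
    }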
include/drm/drm_dp_helper.h (+51)
···
 int drm_dp_start_crc(struct drm_dp_aux *aux, struct drm_crtc *crtc);
 int drm_dp_stop_crc(struct drm_dp_aux *aux);
 
+struct drm_dp_dpcd_ident {
+    u8 oui[3];
+    u8 device_id[6];
+    u8 hw_rev;
+    u8 sw_major_rev;
+    u8 sw_minor_rev;
+} __packed;
+
+/**
+ * struct drm_dp_desc - DP branch/sink device descriptor
+ * @ident: DP device identification from DPCD 0x400 (sink) or 0x500 (branch).
+ * @quirks: Quirks; use drm_dp_has_quirk() to query for the quirks.
+ */
+struct drm_dp_desc {
+    struct drm_dp_dpcd_ident ident;
+    u32 quirks;
+};
+
+int drm_dp_read_desc(struct drm_dp_aux *aux, struct drm_dp_desc *desc,
+                     bool is_branch);
+
+/**
+ * enum drm_dp_quirk - Display Port sink/branch device specific quirks
+ *
+ * Display Port sink and branch devices in the wild have a variety of bugs, try
+ * to collect them here. The quirks are shared, but it's up to the drivers to
+ * implement workarounds for them.
+ */
+enum drm_dp_quirk {
+    /**
+     * @DP_DPCD_QUIRK_LIMITED_M_N:
+     *
+     * The device requires main link attributes Mvid and Nvid to be limited
+     * to 16 bits.
+     */
+    DP_DPCD_QUIRK_LIMITED_M_N,
+};
+
+/**
+ * drm_dp_has_quirk() - does the DP device have a specific quirk
+ * @desc: Device descriptor filled by drm_dp_read_desc()
+ * @quirk: Quirk to query for
+ *
+ * Return true if DP device identified by @desc has @quirk.
+ */
+static inline bool
+drm_dp_has_quirk(const struct drm_dp_desc *desc, enum drm_dp_quirk quirk)
+{
+    return desc->quirks & BIT(quirk);
+}
+
 #endif /* _DRM_DP_HELPER_H_ */
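A minimal sketch of how a driver might consume the new descriptor API (the surrounding driver context and helper name are assumptions):

    /* Sketch: read the sink descriptor once at link setup, then consult
     * the quirk bitmask when programming link M/N values. */
    static bool sink_needs_limited_m_n(struct drm_dp_aux *aux)
    {
        struct drm_dp_desc desc;

        if (drm_dp_read_desc(aux, &desc, false)) /* false: sink, not branch */
            return false; /* on read failure, assume no quirk */

        return drm_dp_has_quirk(&desc, DP_DPCD_QUIRK_LIMITED_M_N);
    }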
include/linux/bio.h
···
 
 extern void bio_init(struct bio *bio, struct bio_vec *table,
                      unsigned short max_vecs);
+extern void bio_uninit(struct bio *);
 extern void bio_reset(struct bio *);
 void bio_chain(struct bio *, struct bio *);
 
include/linux/cgroup-defs.h
···
     CSS_ONLINE    = (1 << 1), /* between ->css_online() and ->css_offline() */
     CSS_RELEASED  = (1 << 2), /* refcnt reached zero, released */
     CSS_VISIBLE   = (1 << 3), /* css is visible to userland */
+    CSS_DYING     = (1 << 4), /* css is dying */
 };
 
 /* bits in struct cgroup flags field */
include/linux/cgroup.h (+20)
···
 }
 
 /**
+ * css_is_dying - test whether the specified css is dying
+ * @css: target css
+ *
+ * Test whether @css is in the process of offlining or already offline. In
+ * most cases, ->css_online() and ->css_offline() callbacks should be
+ * enough; however, the actual offline operations are RCU delayed and this
+ * test returns %true also when @css is scheduled to be offlined.
+ *
+ * This is useful, for example, when the use case requires synchronous
+ * behavior with respect to cgroup removal. cgroup removal schedules css
+ * offlining but the css can seem alive while the operation is being
+ * delayed. If the delay affects user visible semantics, this test can be
+ * used to resolve the situation.
+ */
+static inline bool css_is_dying(struct cgroup_subsys_state *css)
+{
+    return !(css->flags & CSS_NO_REF) && percpu_ref_is_dying(&css->refcnt);
+}
+
+/**
  * css_put - put a css reference
  * @css: target css
  *
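In line with the kernel-doc above, a subsystem that must be synchronous with removal can use the new test as an early bail-out (an illustrative sketch, not taken from a real controller):

    /* Sketch: refuse to start a new operation on a css whose offline has
     * already been scheduled, even though the RCU-delayed teardown makes
     * it look alive for a while. */
    static int example_start_op(struct cgroup_subsys_state *css)
    {
        if (css_is_dying(css))
            return -ENODEV;
        /* ... safe to proceed: removal was not yet scheduled ... */
        return 0;
    }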
include/linux/compiler-clang.h (+8)
···
  * with any version that can compile the kernel
  */
 #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
+
+/*
+ * GCC does not warn about unused static inline functions for
+ * -Wunused-function. This turns out to avoid the need for complex #ifdef
+ * directives. Suppress the warning in clang as well.
+ */
+#undef inline
+#define inline inline __attribute__((unused)) notrace
include/linux/hashtable.h
···
 /**
  * hash_for_each_possible_rcu - iterate over all possible objects hashing to the
  * same bucket in an rcu enabled hashtable
- * in a rcu enabled hashtable
  * @name: hashtable to iterate
  * @obj: the type * to use as a loop cursor for each entry
  * @member: the name of the hlist_node within the struct
include/linux/jiffies.h
···
 /* TICK_USEC is the time between ticks in usec assuming fake USER_HZ */
 #define TICK_USEC ((1000000UL + USER_HZ/2) / USER_HZ)
 
+#ifndef __jiffy_arch_data
+#define __jiffy_arch_data
+#endif
+
 /*
  * The 64-bit value is not atomic - you MUST NOT read it
  * without sampling the sequence number in jiffies_lock.
  * get_jiffies_64() will do this for you as appropriate.
  */
 extern u64 __cacheline_aligned_in_smp jiffies_64;
-extern unsigned long volatile __cacheline_aligned_in_smp jiffies;
+extern unsigned long volatile __cacheline_aligned_in_smp __jiffy_arch_data jiffies;
 
 #if (BITS_PER_LONG < 64)
 u64 get_jiffies_64(void);
include/linux/key.h (-1)
···
 #ifdef KEY_DEBUGGING
     unsigned magic;
 #define KEY_DEBUG_MAGIC     0x18273645u
-#define KEY_DEBUG_MAGIC_X   0xf8e9dacbu
 #endif
 
     unsigned long flags; /* status flags (change with bitops) */
include/linux/mm.h
···
 
 int get_cmdline(struct task_struct *task, char *buffer, int buflen);
 
-/* Is the vma a continuation of the stack vma above it? */
-static inline int vma_growsdown(struct vm_area_struct *vma, unsigned long addr)
-{
-    return vma && (vma->vm_end == addr) && (vma->vm_flags & VM_GROWSDOWN);
-}
-
 static inline bool vma_is_anonymous(struct vm_area_struct *vma)
 {
     return !vma->vm_ops;
···
 #else
 static inline bool vma_is_shmem(struct vm_area_struct *vma) { return false; }
 #endif
-
-static inline int stack_guard_page_start(struct vm_area_struct *vma,
-                                         unsigned long addr)
-{
-    return (vma->vm_flags & VM_GROWSDOWN) &&
-        (vma->vm_start == addr) &&
-        !vma_growsdown(vma->vm_prev, addr);
-}
-
-/* Is the vma a continuation of the stack vma below it? */
-static inline int vma_growsup(struct vm_area_struct *vma, unsigned long addr)
-{
-    return vma && (vma->vm_start == addr) && (vma->vm_flags & VM_GROWSUP);
-}
-
-static inline int stack_guard_page_end(struct vm_area_struct *vma,
-                                       unsigned long addr)
-{
-    return (vma->vm_flags & VM_GROWSUP) &&
-        (vma->vm_end == addr) &&
-        !vma_growsup(vma->vm_next, addr);
-}
 
 int vma_is_stack_for_current(struct vm_area_struct *vma);
 
···
             pgoff_t offset,
             unsigned long size);
 
+extern unsigned long stack_guard_gap;
 /* Generic expand stack which grows the stack according to GROWS{UP,DOWN} */
 extern int expand_stack(struct vm_area_struct *vma, unsigned long address);
 
···
     if (vma && end_addr <= vma->vm_start)
         vma = NULL;
     return vma;
+}
+
+static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
+{
+    unsigned long vm_start = vma->vm_start;
+
+    if (vma->vm_flags & VM_GROWSDOWN) {
+        vm_start -= stack_guard_gap;
+        if (vm_start > vma->vm_start)
+            vm_start = 0;
+    }
+    return vm_start;
+}
+
+static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
+{
+    unsigned long vm_end = vma->vm_end;
+
+    if (vma->vm_flags & VM_GROWSUP) {
+        vm_end += stack_guard_gap;
+        if (vm_end < vma->vm_end)
+            vm_end = -PAGE_SIZE;
+    }
+    return vm_end;
 }
 
 static inline unsigned long vma_pages(struct vm_area_struct *vma)
···
 #define FOLL_MLOCK   0x1000 /* lock present pages */
 #define FOLL_REMOTE  0x2000 /* we are working on non-current tsk/mm */
 #define FOLL_COW     0x4000 /* internal GUP flag */
+
+static inline int vm_fault_to_errno(int vm_fault, int foll_flags)
+{
+    if (vm_fault & VM_FAULT_OOM)
+        return -ENOMEM;
+    if (vm_fault & (VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE))
+        return (foll_flags & FOLL_HWPOISON) ? -EHWPOISON : -EFAULT;
+    if (vm_fault & (VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV))
+        return -EFAULT;
+    return 0;
+}
 
 typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr,
                         void *data);
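The two new helpers virtually extend a stack VMA by stack_guard_gap, and the mm/mmap.c hunks further down use them in exactly this shape of gap-aware placement check (condensed here for illustration; the function name is ours):

    /* Sketch of a gap-aware placement check, modeled on the mm/mmap.c
     * changes below: a candidate range [addr, addr+len) must clear the
     * guard-extended bounds of both neighbours. */
    static bool range_fits(struct vm_area_struct *vma,
                           struct vm_area_struct *prev,
                           unsigned long addr, unsigned long len)
    {
        return (!vma || addr + len <= vm_start_gap(vma)) &&
               (!prev || addr >= vm_end_gap(prev));
    }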
include/linux/mmzone.h (+1)
···
      * is the first PFN that needs to be initialised.
      */
     unsigned long first_deferred_pfn;
+    unsigned long static_init_size;
 #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
include/linux/moduleparam.h
···
     hwparam_ioport,          /* Module parameter configures an I/O port */
     hwparam_iomem,           /* Module parameter configures an I/O mem address */
     hwparam_ioport_or_iomem, /* Module parameter could be either, depending on other option */
-    hwparam_irq,             /* Module parameter configures an I/O port */
+    hwparam_irq,             /* Module parameter configures an IRQ */
     hwparam_dma,             /* Module parameter configures a DMA channel */
     hwparam_dma_addr,        /* Module parameter configures a DMA buffer address */
     hwparam_other,           /* Module parameter configures some other value */
include/linux/netdevice.h (+10, -5)
···
  *
  * int (*ndo_change_mtu)(struct net_device *dev, int new_mtu);
  *    Called when a user wants to change the Maximum Transfer Unit
- *    of a device. If not defined, any request to change MTU will
- *    will return an error.
+ *    of a device.
  *
  * void (*ndo_tx_timeout)(struct net_device *dev);
  *    Callback used when the transmitter has not made any progress
···
  *    @rtnl_link_state: This enum represents the phases of creating
  *        a new link
  *
- *    @destructor: Called from unregister,
- *        can be used to call free_netdev
+ *    @needs_free_netdev: Should unregister perform free_netdev?
+ *    @priv_destructor: Called from unregister
  *    @npinfo: XXX: need comments on this one
  *    @nd_net: Network namespace this network device is inside
  *
···
         RTNL_LINK_INITIALIZING,
     } rtnl_link_state:16;
 
-    void (*destructor)(struct net_device *dev);
+    bool needs_free_netdev;
+    void (*priv_destructor)(struct net_device *dev);
 
 #ifdef CONFIG_NETPOLL
     struct netpoll_info __rcu *npinfo;
···
     if (!dev->name[0] || strchr(dev->name, '%'))
         return "(unnamed net_device)";
     return dev->name;
+}
+
+static inline bool netdev_unregistering(const struct net_device *dev)
+{
+    return dev->reg_state == NETREG_UNREGISTERING;
 }
 
 static inline const char *netdev_reg_state(const struct net_device *dev)
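For driver authors the conversion implied by this split is mechanical (a sketch with hypothetical driver names; the pattern mirrors what in-tree virtual drivers do):

    /* Sketch: ->destructor used to both free private state and call
     * free_netdev(); now the core frees the netdev when asked to, and
     * the driver keeps only its private teardown. */
    static void example_priv_uninit(struct net_device *dev);

    static void example_setup(struct net_device *dev)
    {
        /* ... device init ... */
        dev->needs_free_netdev = true;              /* core calls free_netdev() */
        dev->priv_destructor = example_priv_uninit; /* private teardown only */
    }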
include/linux/pinctrl/pinconf-generic.h (-3)
···
  * @PIN_CONFIG_BIAS_PULL_UP: the pin will be pulled up (usually with high
  *    impedance to VDD). If the argument is != 0 pull-up is enabled,
  *    if it is 0, pull-up is total, i.e. the pin is connected to VDD.
- * @PIN_CONFIG_BIDIRECTIONAL: the pin will be configured to allow simultaneous
- *    input and output operations.
  * @PIN_CONFIG_DRIVE_OPEN_DRAIN: the pin will be driven with open drain (open
  *    collector) which means it is usually wired with other output ports
  *    which are then pulled up with an external resistor. Setting this
···
     PIN_CONFIG_BIAS_PULL_DOWN,
     PIN_CONFIG_BIAS_PULL_PIN_DEFAULT,
     PIN_CONFIG_BIAS_PULL_UP,
-    PIN_CONFIG_BIDIRECTIONAL,
     PIN_CONFIG_DRIVE_OPEN_DRAIN,
     PIN_CONFIG_DRIVE_OPEN_SOURCE,
     PIN_CONFIG_DRIVE_PUSH_PULL,
include/linux/timekeeper_internal.h
···
  */
 struct tk_read_base {
     struct clocksource  *clock;
-    u64                 (*read)(struct clocksource *cs);
     u64                 mask;
     u64                 cycle_last;
     u32                 mult;
···
  *            interval.
  * @xtime_remainder:    Shifted nano seconds left over when rounding
  *            @cycle_interval
- * @raw_interval:    Raw nano seconds accumulated per NTP interval.
+ * @raw_interval:    Shifted raw nano seconds accumulated per NTP interval.
  * @ntp_error:        Difference between accumulated time and NTP time in ntp
  *            shifted nano seconds.
  * @ntp_error_shift:    Shift conversion between clock shifted nano seconds and
···
     u64    cycle_interval;
     u64    xtime_interval;
     s64    xtime_remainder;
-    u32    raw_interval;
+    u64    raw_interval;
     /* The ntp_tick_length() value currently being used.
      * This cached copy ensures we consistently apply the tick
      * length for an entire tick, as ntp_tick_length may change
include/media/cec-notifier.h (+11, -1)
···
 struct cec_adapter;
 struct cec_notifier;
 
-#ifdef CONFIG_MEDIA_CEC_NOTIFIER
+#if IS_REACHABLE(CONFIG_CEC_CORE) && IS_ENABLED(CONFIG_CEC_NOTIFIER)
 
 /**
  * cec_notifier_get - find or create a new cec_notifier for the given device.
···
 
 static inline void cec_notifier_set_phys_addr_from_edid(struct cec_notifier *n,
                                                         const struct edid *edid)
+{
+}
+
+static inline void cec_notifier_register(struct cec_notifier *n,
+             struct cec_adapter *adap,
+             void (*callback)(struct cec_adapter *adap, u16 pa))
+{
+}
+
+static inline void cec_notifier_unregister(struct cec_notifier *n)
 {
 }
 
include/net/tcp.h
···
     void (*cwnd_event)(struct sock *sk, enum tcp_ca_event ev);
     /* call when ack arrives (optional) */
     void (*in_ack_event)(struct sock *sk, u32 flags);
-    /* new value of cwnd after loss (optional) */
+    /* new value of cwnd after loss (required) */
     u32  (*undo_cwnd)(struct sock *sk);
     /* hook for packet ack accounting (optional) */
     void (*pkts_acked)(struct sock *sk, const struct ack_sample *sample);
include/net/wext.h (+2, -2)
···
 struct net;
 
 #ifdef CONFIG_WEXT_CORE
-int wext_handle_ioctl(struct net *net, struct ifreq *ifr, unsigned int cmd,
+int wext_handle_ioctl(struct net *net, struct iwreq *iwr, unsigned int cmd,
                       void __user *arg);
 int compat_wext_handle_ioctl(struct net *net, unsigned int cmd,
                              unsigned long arg);
···
 struct iw_statistics *get_wireless_stats(struct net_device *dev);
 int call_commit_handler(struct net_device *dev);
 #else
-static inline int wext_handle_ioctl(struct net *net, struct ifreq *ifr, unsigned int cmd,
+static inline int wext_handle_ioctl(struct net *net, struct iwreq *iwr, unsigned int cmd,
                                     void __user *arg)
 {
     return -EINVAL;
include/rdma/rdma_netlink.h
···
     struct module *module;
 };
 
-int ibnl_init(void);
-void ibnl_cleanup(void);
-
 /**
  * Add a a client to the list of IB netlink exporters.
  * @index: Index of the added client
···
  */
 int ibnl_multicast(struct sk_buff *skb, struct nlmsghdr *nlh,
                    unsigned int group, gfp_t flags);
-
-/**
- * Check if there are any listeners to the netlink group
- * @group: the netlink group ID
- * Returns 0 on success or a negative for no listeners.
- */
-int ibnl_chk_listeners(unsigned int group);
 
 #endif /* _RDMA_NETLINK_H */
include/uapi/linux/a.out.h
···
 #define N_TXTADDR(x) (N_MAGIC(x) == QMAGIC ? PAGE_SIZE : 0)
 #endif
 
-/* Address of data segment in memory after it is loaded.
-   Note that it is up to you to define SEGMENT_SIZE
-   on machines not listed here.  */
-#if defined(vax) || defined(hp300) || defined(pyr)
-#define SEGMENT_SIZE page_size
-#endif
-#ifdef sony
-#define SEGMENT_SIZE 0x2000
-#endif /* Sony.  */
-#ifdef is68k
-#define SEGMENT_SIZE 0x20000
-#endif
-#if defined(m68k) && defined(PORTAR)
-#define PAGE_SIZE 0x400
-#define SEGMENT_SIZE PAGE_SIZE
-#endif
-
-#ifdef linux
+/* Address of data segment in memory after it is loaded. */
 #ifndef __KERNEL__
 #include <unistd.h>
 #endif
···
 #ifndef SEGMENT_SIZE
 #ifndef __KERNEL__
 #define SEGMENT_SIZE getpagesize()
-#endif
 #endif
 #endif
 #endif
···
         unsigned int r_extern:1;
         /* Four bits that aren't used, but when writing an object file
            it is desirable to clear them. */
-#ifdef NS32K
-        unsigned r_bsr:1;
-        unsigned r_disp:1;
-        unsigned r_pad:2;
-#else
         unsigned int r_pad:4;
-#endif
 };
 #endif /* no N_RELOCATION_INFO_DECLARED. */
 
include/uapi/linux/ethtool.h (+4, -2)
···
  * it was forced up into this mode or autonegotiated.
  */
 
-/* The forced speed, in units of 1Mb. All values 0 to INT_MAX are legal. */
-/* Update drivers/net/phy/phy.c:phy_speed_to_str() when adding new values */
+/* The forced speed, in units of 1Mb. All values 0 to INT_MAX are legal.
+ * Update drivers/net/phy/phy.c:phy_speed_to_str() and
+ * drivers/net/bonding/bond_3ad.c:__get_link_speed() when adding new values.
+ */
 #define SPEED_10    10
 #define SPEED_100   100
 #define SPEED_1000  1000
include/uapi/linux/openvswitch.h
···
 #define OVS_KEY_ATTR_MAX (__OVS_KEY_ATTR_MAX - 1)
 
 enum ovs_tunnel_key_attr {
+    /* OVS_TUNNEL_KEY_ATTR_NONE, standard nl API requires this attribute! */
     OVS_TUNNEL_KEY_ATTR_ID,        /* be64 Tunnel ID */
     OVS_TUNNEL_KEY_ATTR_IPV4_SRC,  /* be32 src IP address. */
     OVS_TUNNEL_KEY_ATTR_IPV4_DST,  /* be32 dst IP address. */
kernel/bpf/verifier.c (+5)
···
     if (err)
         return err;
 
+    if (is_pointer_value(env, insn->src_reg)) {
+        verbose("R%d leaks addr into mem\n", insn->src_reg);
+        return -EACCES;
+    }
+
     /* check whether atomic_add can read the memory */
     err = check_mem_access(env, insn->dst_reg, insn->off,
                            BPF_SIZE(insn->code), BPF_READ, -1);
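For context, the pattern this check rejects looks roughly like the following instruction sequence (a sketch built from the kernel's insn macros; the test scaffolding around it is assumed):

    /* Sketch: an XADD whose source register holds a pointer (here the
     * frame pointer) would let the stack slot absorb a kernel address
     * that a later load could hand back to user space; the verifier now
     * refuses it with "R10 leaks addr into mem". */
    struct bpf_insn leak_sketch[] = {
        BPF_MOV64_IMM(BPF_REG_0, 0),
        BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),   /* *(fp-8) = 0   */
        BPF_STX_XADD(BPF_DW, BPF_REG_10, BPF_REG_10, -8), /* *(fp-8) += fp */
        BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),   /* r0 = *(fp-8)  */
        BPF_EXIT_INSN(),
    };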
kernel/cgroup/cgroup.c (+5)
···
 {
     lockdep_assert_held(&cgroup_mutex);
 
+    if (css->flags & CSS_DYING)
+        return;
+
+    css->flags |= CSS_DYING;
+
     /*
      * This must happen before css is disassociated with its cgroup.
      * See seq_css() for details.
kernel/events/core.c
···
     return __perf_event_account_interrupt(event, 1);
 }
 
+static bool sample_is_allowed(struct perf_event *event, struct pt_regs *regs)
+{
+    /*
+     * Due to interrupt latency (AKA "skid"), we may enter the
+     * kernel before taking an overflow, even if the PMU is only
+     * counting user events.
+     * To avoid leaking information to userspace, we must always
+     * reject kernel samples when exclude_kernel is set.
+     */
+    if (event->attr.exclude_kernel && !user_mode(regs))
+        return false;
+
+    return true;
+}
+
 /*
  * Generic event overflow handling, sampling.
  */
···
         return 0;
 
     ret = __perf_event_account_interrupt(event, throttle);
+
+    /*
+     * For security, drop the skid kernel samples if necessary.
+     */
+    if (!sample_is_allowed(event, regs))
+        return ret;
 
     /*
      * XXX event_limit might not quite work as expected on inherited
kernel/events/ring_buffer.c (+1, -1)
···
     int ret = -ENOMEM, max_order = 0;
 
     if (!has_aux(event))
-        return -ENOTSUPP;
+        return -EOPNOTSUPP;
 
     if (event->pmu->capabilities & PERF_PMU_CAP_AUX_NO_SG) {
         /*
kernel/irq/manage.c (+3, -1)
···
         ret = __irq_set_trigger(desc,
                                 new->flags & IRQF_TRIGGER_MASK);
 
-        if (ret)
+        if (ret) {
+            irq_release_resources(desc);
             goto out_mask;
+        }
     }
 
     desc->istate &= ~(IRQS_AUTODETECT | IRQS_SPURIOUS_DISABLED | \
kernel/livepatch/Kconfig (+1)
···
     depends on SYSFS
     depends on KALLSYMS_ALL
     depends on HAVE_LIVEPATCH
+    depends on !TRIM_UNUSED_KSYMS
     help
       Say Y here if you want to support kernel live patching.
       This option has no runtime impact until a kernel "patch"
kernel/livepatch/patch.c (+6, -2)
···
 
     ops = container_of(fops, struct klp_ops, fops);
 
-    rcu_read_lock();
+    /*
+     * A variant of synchronize_sched() is used to allow patching functions
+     * where RCU is not watching, see klp_synchronize_transition().
+     */
+    preempt_disable_notrace();
 
     func = list_first_or_null_rcu(&ops->func_stack, struct klp_func,
                                   stack_node);
···
 
     klp_arch_set_pc(regs, (unsigned long)func->new_func);
 unlock:
-    rcu_read_unlock();
+    preempt_enable_notrace();
 }
 
 /*
kernel/livepatch/transition.c (+31, -5)
···
 static DECLARE_DELAYED_WORK(klp_transition_work, klp_transition_work_fn);
 
 /*
+ * This function is just a stub to implement a hard force
+ * of synchronize_sched(). This requires synchronizing
+ * tasks even in userspace and idle.
+ */
+static void klp_sync(struct work_struct *work)
+{
+}
+
+/*
+ * We allow to patch also functions where RCU is not watching,
+ * e.g. before user_exit(). We can not rely on the RCU infrastructure
+ * to do the synchronization. Instead hard force the sched synchronization.
+ *
+ * This approach allows to use RCU functions for manipulating func_stack
+ * safely.
+ */
+static void klp_synchronize_transition(void)
+{
+    schedule_on_each_cpu(klp_sync);
+}
+
+/*
  * The transition to the target patch state is complete. Clean up the data
  * structures.
  */
···
          * func->transition gets cleared, the handler may choose a
          * removed function.
          */
-        synchronize_rcu();
+        klp_synchronize_transition();
     }
 
     if (klp_transition_patch->immediate)
···
 
     /* Prevent klp_ftrace_handler() from seeing KLP_UNDEFINED state */
     if (klp_target_state == KLP_PATCHED)
-        synchronize_rcu();
+        klp_synchronize_transition();
 
     read_lock(&tasklist_lock);
     for_each_process_thread(g, task) {
···
  */
 void klp_update_patch_state(struct task_struct *task)
 {
-    rcu_read_lock();
+    /*
+     * A variant of synchronize_sched() is used to allow patching functions
+     * where RCU is not watching, see klp_synchronize_transition().
+     */
+    preempt_disable_notrace();
 
     /*
      * This test_and_clear_tsk_thread_flag() call also serves as a read
···
     if (test_and_clear_tsk_thread_flag(task, TIF_PATCH_PENDING))
         task->patch_state = READ_ONCE(klp_target_state);
 
-    rcu_read_unlock();
+    preempt_enable_notrace();
 }
 
 /*
···
         clear_tsk_thread_flag(idle_task(cpu), TIF_PATCH_PENDING);
 
     /* Let any remaining calls to klp_update_patch_state() complete */
-    synchronize_rcu();
+    klp_synchronize_transition();
 
     klp_start_transition();
 }
kernel/power/process.c (+1, -1)
···
     if (!pm_freezing)
         atomic_inc(&system_freezing_cnt);
 
-    pm_wakeup_clear(true);
+    pm_wakeup_clear();
     pr_info("Freezing user space processes ... ");
     pm_freezing = true;
     error = try_to_freeze_tasks(true);
kernel/power/suspend.c (+4, -25)
···
 
 static void freeze_enter(void)
 {
-    trace_suspend_resume(TPS("machine_suspend"), PM_SUSPEND_FREEZE, true);
-
     spin_lock_irq(&suspend_freeze_lock);
     if (pm_wakeup_pending())
         goto out;
···
  out:
     suspend_freeze_state = FREEZE_STATE_NONE;
     spin_unlock_irq(&suspend_freeze_lock);
-
-    trace_suspend_resume(TPS("machine_suspend"), PM_SUSPEND_FREEZE, false);
-}
-
-static void s2idle_loop(void)
-{
-    do {
-        freeze_enter();
-
-        if (freeze_ops && freeze_ops->wake)
-            freeze_ops->wake();
-
-        dpm_resume_noirq(PMSG_RESUME);
-        if (freeze_ops && freeze_ops->sync)
-            freeze_ops->sync();
-
-        if (pm_wakeup_pending())
-            break;
-
-        pm_wakeup_clear(false);
-    } while (!dpm_suspend_noirq(PMSG_SUSPEND));
 }
 
 void freeze_wake(void)
···
      * all the devices are suspended.
      */
     if (state == PM_SUSPEND_FREEZE) {
-        s2idle_loop();
-        goto Platform_early_resume;
+        trace_suspend_resume(TPS("machine_suspend"), state, true);
+        freeze_enter();
+        trace_suspend_resume(TPS("machine_suspend"), state, false);
+        goto Platform_wake;
     }
 
     error = disable_nonboot_cpus();
kernel/printk/printk.c (+10, -36)
···
 #define MAX_CMDLINECONSOLES 8
 
 static struct console_cmdline console_cmdline[MAX_CMDLINECONSOLES];
-static int console_cmdline_cnt;
 
 static int preferred_console = -1;
 int console_set_on_cmdline;
···
      *    See if this tty is not yet registered, and
      *    if we have a slot free.
      */
-    for (i = 0, c = console_cmdline; i < console_cmdline_cnt; i++, c++) {
+    for (i = 0, c = console_cmdline;
+         i < MAX_CMDLINECONSOLES && c->name[0];
+         i++, c++) {
         if (strcmp(c->name, name) == 0 && c->index == idx) {
-            if (brl_options)
-                return 0;
-
-            /*
-             * Maintain an invariant that will help to find if
-             * the matching console is preferred, see
-             * register_console():
-             *
-             * The last non-braille console is always
-             * the preferred one.
-             */
-            if (i != console_cmdline_cnt - 1)
-                swap(console_cmdline[i],
-                     console_cmdline[console_cmdline_cnt - 1]);
-
-            preferred_console = console_cmdline_cnt - 1;
-
+            if (!brl_options)
+                preferred_console = i;
             return 0;
         }
     }
···
     braille_set_options(c, brl_options);
 
     c->index = idx;
-    console_cmdline_cnt++;
     return 0;
 }
 /*
···
     }
 
     /*
-     * See if this console matches one we selected on the command line.
-     *
-     * There may be several entries in the console_cmdline array matching
-     * with the same console, one with newcon->match(), another by
-     * name/index:
-     *
-     *    pl011,mmio,0x87e024000000,115200 -- added from SPCR
-     *    ttyAMA0 -- added from command line
-     *
-     * Traverse the console_cmdline array in reverse order to be
-     * sure that if this console is preferred then it will be the first
-     * matching entry. We use the invariant that is maintained in
-     * __add_preferred_console().
+     * See if this console matches one we selected on
+     * the command line.
      */
-    for (i = console_cmdline_cnt - 1; i >= 0; i--) {
-        c = console_cmdline + i;
-
+    for (i = 0, c = console_cmdline;
+         i < MAX_CMDLINECONSOLES && c->name[0];
+         i++, c++) {
         if (!newcon->match ||
             newcon->match(newcon, c->name, c->index, c->options) != 0) {
             /* default matching */
kernel/rcu/srcu.c (+2, -3)
···
 
 /*
  * Counts the new reader in the appropriate per-CPU element of the
- * srcu_struct. Must be called from process context.
+ * srcu_struct.
  * Returns an index that must be passed to the matching srcu_read_unlock().
  */
 int __srcu_read_lock(struct srcu_struct *sp)
···
     int idx;
 
     idx = READ_ONCE(sp->completed) & 0x1;
-    __this_cpu_inc(sp->per_cpu_ref->lock_count[idx]);
+    this_cpu_inc(sp->per_cpu_ref->lock_count[idx]);
     smp_mb(); /* B */  /* Avoid leaking the critical section. */
     return idx;
 }
···
  * Removes the count for the old reader from the appropriate per-CPU
  * element of the srcu_struct. Note that this may well be a different
  * CPU than that which was incremented by the corresponding srcu_read_lock().
- * Must be called from process context.
  */
 void __srcu_read_unlock(struct srcu_struct *sp, int idx)
 {
kernel/rcu/srcutiny.c (+4, -3)
···
 
 /*
  * Counts the new reader in the appropriate per-CPU element of the
- * srcu_struct. Must be called from process context.
- * Returns an index that must be passed to the matching srcu_read_unlock().
+ * srcu_struct. Can be invoked from irq/bh handlers, but the matching
+ * __srcu_read_unlock() must be in the same handler instance. Returns an
+ * index that must be passed to the matching srcu_read_unlock().
  */
 int __srcu_read_lock(struct srcu_struct *sp)
 {
···
 
 /*
  * Removes the count for the old reader from the appropriate element of
- * the srcu_struct. Must be called from process context.
+ * the srcu_struct.
  */
 void __srcu_read_unlock(struct srcu_struct *sp, int idx)
 {
kernel/rcu/srcutree.c (+2, -3)
···
 
 /*
  * Counts the new reader in the appropriate per-CPU element of the
- * srcu_struct. Must be called from process context.
+ * srcu_struct.
  * Returns an index that must be passed to the matching srcu_read_unlock().
  */
 int __srcu_read_lock(struct srcu_struct *sp)
···
     int idx;
 
     idx = READ_ONCE(sp->srcu_idx) & 0x1;
-    __this_cpu_inc(sp->sda->srcu_lock_count[idx]);
+    this_cpu_inc(sp->sda->srcu_lock_count[idx]);
     smp_mb(); /* B */  /* Avoid leaking the critical section. */
     return idx;
 }
···
  * Removes the count for the old reader from the appropriate per-CPU
  * element of the srcu_struct. Note that this may well be a different
  * CPU than that which was incremented by the corresponding srcu_read_lock().
- * Must be called from process context.
  */
 void __srcu_read_unlock(struct srcu_struct *sp, int idx)
 {
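The switch from __this_cpu_inc() to this_cpu_inc() in these three SRCU hunks is what makes the relaxed comments legal: the counter update is now safe against interruption, so a read-side critical section may sit entirely inside an irq/bh handler as long as lock and unlock stay in the same handler instance. An illustrative sketch (hypothetical tasklet, not from the kernel tree):

    /* Sketch: an SRCU reader wholly contained in one softirq handler
     * instance is now permitted by the updated rules above. */
    static void example_tasklet_fn(unsigned long data)
    {
        struct srcu_struct *sp = (struct srcu_struct *)data;
        int idx;

        idx = srcu_read_lock(sp);
        /* ... dereference SRCU-protected state ... */
        srcu_read_unlock(sp, idx);
    }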
kernel/trace/trace_functions.c
···
 {
     struct ftrace_probe_ops *ops;
 
+    if (!tr)
+        return -ENODEV;
+
     /* we register both traceon and traceoff to this callback */
     if (strcmp(cmd, "traceon") == 0)
         ops = param ? &traceon_count_probe_ops : &traceon_probe_ops;
···
 {
     struct ftrace_probe_ops *ops;
 
+    if (!tr)
+        return -ENODEV;
+
     ops = param ? &stacktrace_count_probe_ops : &stacktrace_probe_ops;
 
     return ftrace_trace_probe_callback(tr, ops, hash, glob, cmd,
···
             char *glob, char *cmd, char *param, int enable)
 {
     struct ftrace_probe_ops *ops;
+
+    if (!tr)
+        return -ENODEV;
 
     ops = &dump_probe_ops;
 
···
             char *glob, char *cmd, char *param, int enable)
 {
     struct ftrace_probe_ops *ops;
+
+    if (!tr)
+        return -ENODEV;
 
     ops = &cpudump_probe_ops;
 
kernel/trace/trace_kprobe.c (+5, -9)
···
         pr_info("Probe point is not specified.\n");
         return -EINVAL;
     }
-    if (isdigit(argv[1][0])) {
-        /* an address specified */
-        ret = kstrtoul(&argv[1][0], 0, (unsigned long *)&addr);
-        if (ret) {
-            pr_info("Failed to parse address.\n");
-            return ret;
-        }
-    } else {
+
+    /* try to parse an address. if that fails, try to read the
+     * input as a symbol. */
+    if (kstrtoul(argv[1], 0, (unsigned long *)&addr)) {
         /* a symbol specified */
         symbol = argv[1];
         /* TODO: support .init module functions */
         ret = traceprobe_split_symbol_offset(symbol, &offset);
         if (ret) {
-            pr_info("Failed to parse symbol.\n");
+            pr_info("Failed to parse either an address or a symbol.\n");
             return ret;
         }
         if (offset && is_return &&
mm/memblock.c
···
     }
 }
 
+extern unsigned long __init_memblock
+memblock_reserved_memory_within(phys_addr_t start_addr, phys_addr_t end_addr)
+{
+    struct memblock_region *rgn;
+    unsigned long size = 0;
+    int idx;
+
+    for_each_memblock_type((&memblock.reserved), rgn) {
+        phys_addr_t start, end;
+
+        if (rgn->base + rgn->size < start_addr)
+            continue;
+        if (rgn->base > end_addr)
+            continue;
+
+        start = rgn->base;
+        end = start + rgn->size;
+        size += end - start;
+    }
+
+    return size;
+}
+
 void __init_memblock __memblock_dump_all(void)
 {
     pr_info("MEMBLOCK configuration:\n");
mm/memory-failure.c (+6, -7)
···
      * page_remove_rmap() in try_to_unmap_one(). So to determine page status
      * correctly, we save a copy of the page flags at this time.
      */
-    page_flags = p->flags;
+    if (PageHuge(p))
+        page_flags = hpage->flags;
+    else
+        page_flags = p->flags;
 
     /*
      * unpoison always clear PG_hwpoison inside page lock
···
     if (ret) {
         pr_info("soft offline: %#lx: migration failed %d, type %lx (%pGp)\n",
             pfn, ret, page->flags, &page->flags);
-        /*
-         * We know that soft_offline_huge_page() tries to migrate
-         * only one hugepage pointed to by hpage, so we need not
-         * run through the pagelist here.
-         */
-        putback_active_hugepage(hpage);
+        if (!list_empty(&pagelist))
+            putback_movable_pages(&pagelist);
         if (ret > 0)
             ret = -EIO;
     } else {
mm/memory.c (+30, -48)
···
 }
 
 /*
- * This is like a special single-page "expand_{down|up}wards()",
- * except we must first make sure that 'address{-|+}PAGE_SIZE'
- * doesn't hit another vma.
- */
-static inline int check_stack_guard_page(struct vm_area_struct *vma, unsigned long address)
-{
-    address &= PAGE_MASK;
-    if ((vma->vm_flags & VM_GROWSDOWN) && address == vma->vm_start) {
-        struct vm_area_struct *prev = vma->vm_prev;
-
-        /*
-         * Is there a mapping abutting this one below?
-         *
-         * That's only ok if it's the same stack mapping
-         * that has gotten split..
-         */
-        if (prev && prev->vm_end == address)
-            return prev->vm_flags & VM_GROWSDOWN ? 0 : -ENOMEM;
-
-        return expand_downwards(vma, address - PAGE_SIZE);
-    }
-    if ((vma->vm_flags & VM_GROWSUP) && address + PAGE_SIZE == vma->vm_end) {
-        struct vm_area_struct *next = vma->vm_next;
-
-        /* As VM_GROWSDOWN but s/below/above/ */
-        if (next && next->vm_start == address + PAGE_SIZE)
-            return next->vm_flags & VM_GROWSUP ? 0 : -ENOMEM;
-
-        return expand_upwards(vma, address + PAGE_SIZE);
-    }
-    return 0;
-}
-
-/*
  * We enter with non-exclusive mmap_sem (to exclude vma changes,
  * but allow concurrent faults), and pte mapped but not yet locked.
  * We return with mmap_sem still held, but pte unmapped and unlocked.
···
     /* File mapping without ->vm_ops ? */
     if (vma->vm_flags & VM_SHARED)
         return VM_FAULT_SIGBUS;
-
-    /* Check if we need to add a guard page to the stack */
-    if (check_stack_guard_page(vma, vmf->address) < 0)
-        return VM_FAULT_SIGSEGV;
 
     /*
      * Use pte_alloc() instead of pte_alloc_map(). We can't run
···
     return ret;
 }
 
+/*
+ * The ordering of these checks is important for pmds with _PAGE_DEVMAP set.
+ * If we check pmd_trans_unstable() first we will trip the bad_pmd() check
+ * inside of pmd_none_or_trans_huge_or_clear_bad(). This will end up correctly
+ * returning 1 but not before it spams dmesg with the pmd_clear_bad() output.
+ */
+static int pmd_devmap_trans_unstable(pmd_t *pmd)
+{
+    return pmd_devmap(*pmd) || pmd_trans_unstable(pmd);
+}
+
 static int pte_alloc_one_map(struct vm_fault *vmf)
 {
     struct vm_area_struct *vma = vmf->vma;
···
 map_pte:
     /*
      * If a huge pmd materialized under us just retry later. Use
-     * pmd_trans_unstable() instead of pmd_trans_huge() to ensure the pmd
-     * didn't become pmd_trans_huge under us and then back to pmd_none, as
-     * a result of MADV_DONTNEED running immediately after a huge pmd fault
-     * in a different thread of this mm, in turn leading to a misleading
-     * pmd_trans_huge() retval. All we have to ensure is that it is a
-     * regular pmd that we can walk with pte_offset_map() and we can do that
-     * through an atomic read in C, which is what pmd_trans_unstable()
-     * provides.
+     * pmd_trans_unstable() via pmd_devmap_trans_unstable() instead of
+     * pmd_trans_huge() to ensure the pmd didn't become pmd_trans_huge
+     * under us and then back to pmd_none, as a result of MADV_DONTNEED
+     * running immediately after a huge pmd fault in a different thread of
+     * this mm, in turn leading to a misleading pmd_trans_huge() retval.
+     * All we have to ensure is that it is a regular pmd that we can walk
+     * with pte_offset_map() and we can do that through an atomic read in
+     * C, which is what pmd_trans_unstable() provides.
      */
-    if (pmd_trans_unstable(vmf->pmd) || pmd_devmap(*vmf->pmd))
+    if (pmd_devmap_trans_unstable(vmf->pmd))
         return VM_FAULT_NOPAGE;
 
+    /*
+     * At this point we know that our vmf->pmd points to a page of ptes
+     * and it cannot become pmd_none(), pmd_devmap() or pmd_trans_huge()
+     * for the duration of the fault. If a racing MADV_DONTNEED runs and
+     * we zap the ptes pointed to by our vmf->pmd, the vmf->ptl will still
+     * be valid and we will re-check to make sure the vmf->pte isn't
+     * pte_none() under vmf->ptl protection when we return to
+     * alloc_set_pte().
+     */
     vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
                                    &vmf->ptl);
     return 0;
···
         vmf->pte = NULL;
     } else {
         /* See comment in pte_alloc_one_map() */
-        if (pmd_trans_unstable(vmf->pmd) || pmd_devmap(*vmf->pmd))
+        if (pmd_devmap_trans_unstable(vmf->pmd))
             return 0;
         /*
          * A regular pmd is established and it can't morph into a huge
mm/mlock.c (+3, -2)
···
 {
     int i;
     int nr = pagevec_count(pvec);
-    int delta_munlocked;
+    int delta_munlocked = -nr;
     struct pagevec pvec_putback;
     int pgrescued = 0;
 
···
                 continue;
             else
                 __munlock_isolation_failed(page);
+        } else {
+            delta_munlocked++;
         }
 
         /*
···
         pagevec_add(&pvec_putback, pvec->pages[i]);
         pvec->pages[i] = NULL;
     }
-    delta_munlocked = -nr + pagevec_count(&pvec_putback);
     __mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
     spin_unlock_irq(zone_lru_lock(zone));
 
mm/mmap.c (+97, -63)
···183183 unsigned long retval;184184 unsigned long newbrk, oldbrk;185185 struct mm_struct *mm = current->mm;186186+ struct vm_area_struct *next;186187 unsigned long min_brk;187188 bool populate;188189 LIST_HEAD(uf);···230229 }231230232231 /* Check against existing mmap mappings. */233233- if (find_vma_intersection(mm, oldbrk, newbrk+PAGE_SIZE))232232+ next = find_vma(mm, oldbrk);233233+ if (next && newbrk + PAGE_SIZE > vm_start_gap(next))234234 goto out;235235236236 /* Ok, looks good - let it rip. */···255253256254static long vma_compute_subtree_gap(struct vm_area_struct *vma)257255{258258- unsigned long max, subtree_gap;259259- max = vma->vm_start;260260- if (vma->vm_prev)261261- max -= vma->vm_prev->vm_end;256256+ unsigned long max, prev_end, subtree_gap;257257+258258+ /*259259+ * Note: in the rare case of a VM_GROWSDOWN above a VM_GROWSUP, we260260+ * allow two stack_guard_gaps between them here, and when choosing261261+ * an unmapped area; whereas when expanding we only require one.262262+ * That's a little inconsistent, but keeps the code here simpler.263263+ */264264+ max = vm_start_gap(vma);265265+ if (vma->vm_prev) {266266+ prev_end = vm_end_gap(vma->vm_prev);267267+ if (max > prev_end)268268+ max -= prev_end;269269+ else270270+ max = 0;271271+ }262272 if (vma->vm_rb.rb_left) {263273 subtree_gap = rb_entry(vma->vm_rb.rb_left,264274 struct vm_area_struct, vm_rb)->rb_subtree_gap;···366352 anon_vma_unlock_read(anon_vma);367353 }368354369369- highest_address = vma->vm_end;355355+ highest_address = vm_end_gap(vma);370356 vma = vma->vm_next;371357 i++;372358 }···555541 if (vma->vm_next)556542 vma_gap_update(vma->vm_next);557543 else558558- mm->highest_vm_end = vma->vm_end;544544+ mm->highest_vm_end = vm_end_gap(vma);559545560546 /*561547 * vma->vm_prev wasn't known when we followed the rbtree to find the···870856 vma_gap_update(vma);871857 if (end_changed) {872858 if (!next)873873- mm->highest_vm_end = end;859859+ mm->highest_vm_end = vm_end_gap(vma);874860 else if (!adjust_next)875861 vma_gap_update(next);876862 }···955941 * mm->highest_vm_end doesn't need any update956942 * in remove_next == 1 case.957943 */958958- VM_WARN_ON(mm->highest_vm_end != end);944944+ VM_WARN_ON(mm->highest_vm_end != vm_end_gap(vma));959945 }960946 }961947 if (insert && file)···1801178718021788 while (true) {18031789 /* Visit left subtree if it looks promising */18041804- gap_end = vma->vm_start;17901790+ gap_end = vm_start_gap(vma);18051791 if (gap_end >= low_limit && vma->vm_rb.rb_left) {18061792 struct vm_area_struct *left =18071793 rb_entry(vma->vm_rb.rb_left,···18121798 }18131799 }1814180018151815- gap_start = vma->vm_prev ? vma->vm_prev->vm_end : 0;18011801+ gap_start = vma->vm_prev ? 
vm_end_gap(vma->vm_prev) : 0;18161802check_current:18171803 /* Check if current node has a suitable gap */18181804 if (gap_start > high_limit)18191805 return -ENOMEM;18201820- if (gap_end >= low_limit && gap_end - gap_start >= length)18061806+ if (gap_end >= low_limit &&18071807+ gap_end > gap_start && gap_end - gap_start >= length)18211808 goto found;1822180918231810 /* Visit right subtree if it looks promising */···18401825 vma = rb_entry(rb_parent(prev),18411826 struct vm_area_struct, vm_rb);18421827 if (prev == vma->vm_rb.rb_left) {18431843- gap_start = vma->vm_prev->vm_end;18441844- gap_end = vma->vm_start;18281828+ gap_start = vm_end_gap(vma->vm_prev);18291829+ gap_end = vm_start_gap(vma);18451830 goto check_current;18461831 }18471832 }···1905189019061891 while (true) {19071892 /* Visit right subtree if it looks promising */19081908- gap_start = vma->vm_prev ? vma->vm_prev->vm_end : 0;18931893+ gap_start = vma->vm_prev ? vm_end_gap(vma->vm_prev) : 0;19091894 if (gap_start <= high_limit && vma->vm_rb.rb_right) {19101895 struct vm_area_struct *right =19111896 rb_entry(vma->vm_rb.rb_right,···1918190319191904check_current:19201905 /* Check if current node has a suitable gap */19211921- gap_end = vma->vm_start;19061906+ gap_end = vm_start_gap(vma);19221907 if (gap_end < low_limit)19231908 return -ENOMEM;19241924- if (gap_start <= high_limit && gap_end - gap_start >= length)19091909+ if (gap_start <= high_limit &&19101910+ gap_end > gap_start && gap_end - gap_start >= length)19251911 goto found;1926191219271913 /* Visit left subtree if it looks promising */···19451929 struct vm_area_struct, vm_rb);19461930 if (prev == vma->vm_rb.rb_right) {19471931 gap_start = vma->vm_prev ?19481948- vma->vm_prev->vm_end : 0;19321932+ vm_end_gap(vma->vm_prev) : 0;19491933 goto check_current;19501934 }19511935 }···19831967 unsigned long len, unsigned long pgoff, unsigned long flags)19841968{19851969 struct mm_struct *mm = current->mm;19861986- struct vm_area_struct *vma;19701970+ struct vm_area_struct *vma, *prev;19871971 struct vm_unmapped_area_info info;1988197219891973 if (len > TASK_SIZE - mmap_min_addr)···1994197819951979 if (addr) {19961980 addr = PAGE_ALIGN(addr);19971997- vma = find_vma(mm, addr);19811981+ vma = find_vma_prev(mm, addr, &prev);19981982 if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&19991999- (!vma || addr + len <= vma->vm_start))19831983+ (!vma || addr + len <= vm_start_gap(vma)) &&19841984+ (!prev || addr >= vm_end_gap(prev)))20001985 return addr;20011986 }20021987···20202003 const unsigned long len, const unsigned long pgoff,20212004 const unsigned long flags)20222005{20232023- struct vm_area_struct *vma;20062006+ struct vm_area_struct *vma, *prev;20242007 struct mm_struct *mm = current->mm;20252008 unsigned long addr = addr0;20262009 struct vm_unmapped_area_info info;···20352018 /* requesting a specific address */20362019 if (addr) {20372020 addr = PAGE_ALIGN(addr);20382038- vma = find_vma(mm, addr);20212021+ vma = find_vma_prev(mm, addr, &prev);20392022 if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&20402040- (!vma || addr + len <= vma->vm_start))20232023+ (!vma || addr + len <= vm_start_gap(vma)) &&20242024+ (!prev || addr >= vm_end_gap(prev)))20412025 return addr;20422026 }20432027···21732155 * update accounting. 
This is shared with both the21742156 * grow-up and grow-down cases.21752157 */21762176-static int acct_stack_growth(struct vm_area_struct *vma, unsigned long size, unsigned long grow)21582158+static int acct_stack_growth(struct vm_area_struct *vma,21592159+ unsigned long size, unsigned long grow)21772160{21782161 struct mm_struct *mm = vma->vm_mm;21792162 struct rlimit *rlim = current->signal->rlim;21802180- unsigned long new_start, actual_size;21632163+ unsigned long new_start;2181216421822165 /* address space limit tests */21832166 if (!may_expand_vm(mm, vma->vm_flags, grow))21842167 return -ENOMEM;2185216821862169 /* Stack limit test */21872187- actual_size = size;21882188- if (size && (vma->vm_flags & (VM_GROWSUP | VM_GROWSDOWN)))21892189- actual_size -= PAGE_SIZE;21902190- if (actual_size > READ_ONCE(rlim[RLIMIT_STACK].rlim_cur))21702170+ if (size > READ_ONCE(rlim[RLIMIT_STACK].rlim_cur))21912171 return -ENOMEM;2192217221932173 /* mlock limit tests */···22232207int expand_upwards(struct vm_area_struct *vma, unsigned long address)22242208{22252209 struct mm_struct *mm = vma->vm_mm;22102210+ struct vm_area_struct *next;22112211+ unsigned long gap_addr;22262212 int error = 0;2227221322282214 if (!(vma->vm_flags & VM_GROWSUP))22292215 return -EFAULT;2230221622312231- /* Guard against wrapping around to address 0. */22322232- if (address < PAGE_ALIGN(address+4))22332233- address = PAGE_ALIGN(address+4);22342234- else22172217+ /* Guard against exceeding limits of the address space. */22182218+ address &= PAGE_MASK;22192219+ if (address >= TASK_SIZE)22352220 return -ENOMEM;22212221+ address += PAGE_SIZE;22222222+22232223+ /* Enforce stack_guard_gap */22242224+ gap_addr = address + stack_guard_gap;22252225+22262226+ /* Guard against overflow */22272227+ if (gap_addr < address || gap_addr > TASK_SIZE)22282228+ gap_addr = TASK_SIZE;22292229+22302230+ next = vma->vm_next;22312231+ if (next && next->vm_start < gap_addr) {22322232+ if (!(next->vm_flags & VM_GROWSUP))22332233+ return -ENOMEM;22342234+ /* Check that both stack segments have the same anon_vma? */22352235+ }2236223622372237 /* We must make sure the anon_vma is allocated. */22382238 if (unlikely(anon_vma_prepare(vma)))···22932261 if (vma->vm_next)22942262 vma_gap_update(vma->vm_next);22952263 else22962296- mm->highest_vm_end = address;22642264+ mm->highest_vm_end = vm_end_gap(vma);22972265 spin_unlock(&mm->page_table_lock);2298226622992267 perf_event_mmap(vma);···23142282 unsigned long address)23152283{23162284 struct mm_struct *mm = vma->vm_mm;22852285+ struct vm_area_struct *prev;22862286+ unsigned long gap_addr;23172287 int error;2318228823192289 address &= PAGE_MASK;23202290 error = security_mmap_addr(address);23212291 if (error)23222292 return error;22932293+22942294+ /* Enforce stack_guard_gap */22952295+ gap_addr = address - stack_guard_gap;22962296+ if (gap_addr > address)22972297+ return -ENOMEM;22982298+ prev = vma->vm_prev;22992299+ if (prev && prev->vm_end > gap_addr) {23002300+ if (!(prev->vm_flags & VM_GROWSDOWN))23012301+ return -ENOMEM;23022302+ /* Check that both stack segments have the same anon_vma? */23032303+ }2323230423242305 /* We must make sure the anon_vma is allocated. */23252306 if (unlikely(anon_vma_prepare(vma)))···23882343 return error;23892344}2390234523912391-/*23922392- * Note how expand_stack() refuses to expand the stack all the way to23932393- * abut the next virtual mapping, *unless* that mapping itself is also23942394- * a stack mapping. 
We want to leave room for a guard page, after all23952395- * (the guard page itself is not added here, that is done by the23962396- * actual page faulting logic)23972397- *23982398- * This matches the behavior of the guard page logic (see mm/memory.c:23992399- * check_stack_guard_page()), which only allows the guard page to be24002400- * removed under these circumstances.24012401- */23462346+/* enforced gap between the expanding stack and other mappings. */23472347+unsigned long stack_guard_gap = 256UL<<PAGE_SHIFT;23482348+23492349+static int __init cmdline_parse_stack_guard_gap(char *p)23502350+{23512351+ unsigned long val;23522352+ char *endptr;23532353+23542354+ val = simple_strtoul(p, &endptr, 10);23552355+ if (!*endptr)23562356+ stack_guard_gap = val << PAGE_SHIFT;23572357+23582358+ return 0;23592359+}23602360+__setup("stack_guard_gap=", cmdline_parse_stack_guard_gap);23612361+24022362#ifdef CONFIG_STACK_GROWSUP24032363int expand_stack(struct vm_area_struct *vma, unsigned long address)24042364{24052405- struct vm_area_struct *next;24062406-24072407- address &= PAGE_MASK;24082408- next = vma->vm_next;24092409- if (next && next->vm_start == address + PAGE_SIZE) {24102410- if (!(next->vm_flags & VM_GROWSUP))24112411- return -ENOMEM;24122412- }24132365 return expand_upwards(vma, address);24142366}24152367···24282386#else24292387int expand_stack(struct vm_area_struct *vma, unsigned long address)24302388{24312431- struct vm_area_struct *prev;24322432-24332433- address &= PAGE_MASK;24342434- prev = vma->vm_prev;24352435- if (prev && prev->vm_end == address) {24362436- if (!(prev->vm_flags & VM_GROWSDOWN))24372437- return -ENOMEM;24382438- }24392389 return expand_downwards(vma, address);24402390}24412391···25252491 vma->vm_prev = prev;25262492 vma_gap_update(vma);25272493 } else25282528- mm->highest_vm_end = prev ? prev->vm_end : 0;24942494+ mm->highest_vm_end = prev ? vm_end_gap(prev) : 0;25292495 tail_vma->vm_next = NULL;2530249625312497 /* Kill the cache */
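The gap-aware search and expansion logic above leans on two small helpers that the same patch adds to include/linux/mm.h; they are not visible in these hunks. As a reference sketch (from the companion header change, lightly abridged), a stack VMA's effective extent is simply widened by stack_guard_gap, clamped against wraparound:

	static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
	{
		unsigned long vm_start = vma->vm_start;

		if (vma->vm_flags & VM_GROWSDOWN) {
			vm_start -= stack_guard_gap;
			if (vm_start > vma->vm_start)	/* underflow */
				vm_start = 0;
		}
		return vm_start;
	}

	static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
	{
		unsigned long vm_end = vma->vm_end;

		if (vma->vm_flags & VM_GROWSUP) {
			vm_end += stack_guard_gap;
			if (vm_end < vma->vm_end)	/* overflow */
				vm_end = -PAGE_SIZE;
		}
		return vm_end;
	}

Every comparison that used to look at vm_start/vm_end directly now accounts for the guard gap through these helpers, which is why the gap checks in the hunks above read so uniformly.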
mm/page_alloc.c (+25, -12)
···
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 static inline void reset_deferred_meminit(pg_data_t *pgdat)
 {
+	unsigned long max_initialise;
+	unsigned long reserved_lowmem;
+
+	/*
+	 * Initialise at least 2G of a node, but also take into account
+	 * that two large system hashes may take up 1GB for 0.25TB/node.
+	 */
+	max_initialise = max(2UL << (30 - PAGE_SHIFT),
+		(pgdat->node_spanned_pages >> 8));
+
+	/*
+	 * Compensate for all the memblock reservations (e.g. crash kernel)
+	 * from the initial estimation to make sure we will initialise
+	 * enough memory to boot.
+	 */
+	reserved_lowmem = memblock_reserved_memory_within(pgdat->node_start_pfn,
+			pgdat->node_start_pfn + max_initialise);
+	max_initialise += reserved_lowmem;
+
+	pgdat->static_init_size = min(max_initialise, pgdat->node_spanned_pages);
 	pgdat->first_deferred_pfn = ULONG_MAX;
 }
···
 		unsigned long pfn, unsigned long zone_end,
 		unsigned long *nr_initialised)
 {
-	unsigned long max_initialise;
-
 	/* Always populate low zones for address-constrained allocations */
 	if (zone_end < pgdat_end_pfn(pgdat))
 		return true;
-	/*
-	 * Initialise at least 2G of a node but also take into account that
-	 * two large system hashes that can take up 1GB for 0.25TB/node.
-	 */
-	max_initialise = max(2UL << (30 - PAGE_SHIFT),
-		(pgdat->node_spanned_pages >> 8));
-
 	(*nr_initialised)++;
-	if ((*nr_initialised > max_initialise) &&
+	if ((*nr_initialised > pgdat->static_init_size) &&
 	    (pfn & (PAGES_PER_SECTION - 1)) == 0) {
 		pgdat->first_deferred_pfn = pfn;
 		return false;
···
 		goto got_pg;
 
 	/* Avoid allocations with no watermarks from looping endlessly */
-	if (test_thread_flag(TIF_MEMDIE))
+	if (test_thread_flag(TIF_MEMDIE) &&
+	    (alloc_flags == ALLOC_NO_WATERMARKS ||
+	     (gfp_mask & __GFP_NOMEMALLOC)))
 		goto nopage;
 
 	/* Retry as long as the OOM killer is making progress */
···
 	/* pg_data_t should be reset to zero when it's allocated */
 	WARN_ON(pgdat->nr_zones || pgdat->kswapd_classzone_idx);
 
-	reset_deferred_meminit(pgdat);
 	pgdat->node_id = nid;
 	pgdat->node_start_pfn = node_start_pfn;
 	pgdat->per_cpu_nodestats = NULL;
···
 		(unsigned long)pgdat->node_mem_map);
 #endif
 
+	reset_deferred_meminit(pgdat);
 	free_area_init_core(pgdat);
 }
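The pgdat->static_init_size consumer above implies a matching new field in struct pglist_data; the include/linux/mmzone.h side of the change is not shown in this hunk, but it presumably amounts to something like the following (first_deferred_pfn already exists, only static_init_size is assumed new):

	#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
		/*
		 * Number of pages to initialise eagerly at boot; everything
		 * past first_deferred_pfn is filled in later by the
		 * deferred page-init threads.
		 */
		unsigned long static_init_size;
		unsigned long first_deferred_pfn;
	#endif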
mm/slub.c (+30, -16)
···55125512 char mbuf[64];55135513 char *buf;55145514 struct slab_attribute *attr = to_slab_attr(slab_attrs[i]);55155515+ ssize_t len;5515551655165517 if (!attr || !attr->store || !attr->show)55175518 continue;···55375536 buf = buffer;55385537 }5539553855405540- attr->show(root_cache, buf);55415541- attr->store(s, buf, strlen(buf));55395539+ len = attr->show(root_cache, buf);55405540+ if (len > 0)55415541+ attr->store(s, buf, len);55425542 }5543554355445544 if (buffer)···56255623 return name;56265624}5627562556265626+static void sysfs_slab_remove_workfn(struct work_struct *work)56275627+{56285628+ struct kmem_cache *s =56295629+ container_of(work, struct kmem_cache, kobj_remove_work);56305630+56315631+ if (!s->kobj.state_in_sysfs)56325632+ /*56335633+ * For a memcg cache, this may be called during56345634+ * deactivation and again on shutdown. Remove only once.56355635+ * A cache is never shut down before deactivation is56365636+ * complete, so no need to worry about synchronization.56375637+ */56385638+ return;56395639+56405640+#ifdef CONFIG_MEMCG56415641+ kset_unregister(s->memcg_kset);56425642+#endif56435643+ kobject_uevent(&s->kobj, KOBJ_REMOVE);56445644+ kobject_del(&s->kobj);56455645+ kobject_put(&s->kobj);56465646+}56475647+56285648static int sysfs_slab_add(struct kmem_cache *s)56295649{56305650 int err;56315651 const char *name;56325652 struct kset *kset = cache_kset(s);56335653 int unmergeable = slab_unmergeable(s);56545654+56555655+ INIT_WORK(&s->kobj_remove_work, sysfs_slab_remove_workfn);5634565656355657 if (!kset) {56365658 kobject_init(&s->kobj, &slab_ktype);···57195693 */57205694 return;5721569557225722- if (!s->kobj.state_in_sysfs)57235723- /*57245724- * For a memcg cache, this may be called during57255725- * deactivation and again on shutdown. Remove only once.57265726- * A cache is never shut down before deactivation is57275727- * complete, so no need to worry about synchronization.57285728- */57295729- return;57305730-57315731-#ifdef CONFIG_MEMCG57325732- kset_unregister(s->memcg_kset);57335733-#endif57345734- kobject_uevent(&s->kobj, KOBJ_REMOVE);57355735- kobject_del(&s->kobj);56965696+ kobject_get(&s->kobj);56975697+ schedule_work(&s->kobj_remove_work);57365698}5737569957385700void sysfs_slab_release(struct kmem_cache *s)
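Pushing the sysfs removal into a workqueue only works if struct kmem_cache has somewhere to park the work item; the header side of this change is not in the hunk, but it presumably adds a field along these lines to include/linux/slub_def.h:

	struct kmem_cache {
		/* ... existing fields ... */
	#ifdef CONFIG_SYSFS
		struct kobject kobj;			/* for sysfs */
		struct work_struct kobj_remove_work;	/* assumed new field */
	#endif
		/* ... */
	};

With that in place, sysfs_slab_remove() only pins the kobject with kobject_get() and schedules the work, so the kobject_del()/kset_unregister() teardown runs later from clean process context instead of the caller's.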
mm/swap_cgroup.c (+3)
···
 		if (!page)
 			goto not_enough_page;
 		ctrl->map[idx] = page;
+
+		if (!(idx % SWAP_CLUSTER_MAX))
+			cond_resched();
 	}
 	return 0;
 not_enough_page:
mm/util.c (+5, -2)
···
 	WARN_ON_ONCE((flags & GFP_KERNEL) != GFP_KERNEL);
 
 	/*
-	 * Make sure that larger requests are not too disruptive - no OOM
-	 * killer and no allocation failure warnings as we have a fallback
+	 * We want to attempt a large physically contiguous block first
+	 * because it is less likely to fragment multiple larger blocks and
+	 * therefore contributes less to long-term fragmentation than the
+	 * vmalloc fallback. However, make sure that larger requests are not
+	 * too disruptive - no OOM killer and no allocation failure warnings,
+	 * as we have a fallback.
 	 */
 	if (size > PAGE_SIZE) {
 		kmalloc_flags |= __GFP_NOWARN;
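Nothing changes for callers: kvmalloc() and friends still try the physically contiguous allocation first and fall back to vmalloc(). A minimal illustrative caller (alloc_table() and struct entry are made-up names for this sketch):

	struct entry { u64 key; u64 val; };

	static struct entry *alloc_table(size_t nr)
	{
		/*
		 * Tries kmalloc() first (for sizes above PAGE_SIZE with
		 * __GFP_NOWARN | __GFP_NORETRY, per the comment above) and
		 * transparently falls back to vmalloc() on failure.
		 */
		return kvmalloc_array(nr, sizeof(struct entry), GFP_KERNEL);
	}

The result must be released with kvfree(), which copes with either backing allocation; a plain kfree() would be a bug whenever the vmalloc fallback was taken.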
mm/vmalloc.c (+13, -2)
···
 	if (p4d_none(*p4d))
 		return NULL;
 	pud = pud_offset(p4d, addr);
-	if (pud_none(*pud))
+
+	/*
+	 * Don't dereference bad PUD or PMD (below) entries. This will also
+	 * identify huge mappings, which we may encounter on architectures
+	 * that define CONFIG_HAVE_ARCH_HUGE_VMAP=y. Such regions will be
+	 * identified as vmalloc addresses by is_vmalloc_addr(), but are
+	 * not [unambiguously] associated with a struct page, so there is
+	 * no correct value to return for them.
+	 */
+	WARN_ON_ONCE(pud_bad(*pud));
+	if (pud_none(*pud) || pud_bad(*pud))
 		return NULL;
 	pmd = pmd_offset(pud, addr);
-	if (pmd_none(*pmd))
+	WARN_ON_ONCE(pmd_bad(*pmd));
+	if (pmd_none(*pmd) || pmd_bad(*pmd))
 		return NULL;
 
 	ptep = pte_offset_map(pmd, addr);
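One consequence worth spelling out for callers: an address accepted by is_vmalloc_addr() no longer guarantees a usable struct page. A defensive pattern, as a sketch:

	/*
	 * vmalloc_to_page() may return NULL even for a vmalloc-range
	 * address, e.g. a huge mapping on an architecture with
	 * CONFIG_HAVE_ARCH_HUGE_VMAP=y.
	 */
	struct page *page = vmalloc_to_page(addr);

	if (!page)
		return -EFAULT;	/* no unambiguous struct page here */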
mm/vmpressure.c (+3, -3)
···
 	unsigned long pressure = 0;
 
 	/*
-	 * reclaimed can be greater than scanned in cases
-	 * like THP, where the scanned is 1 and reclaimed
-	 * could be 512
+	 * reclaimed can be greater than scanned for things such as reclaimed
+	 * slab pages. shrink_node() just adds reclaimed pages without a
+	 * related increment to scanned pages.
 	 */
 	if (reclaimed >= scanned)
 		goto out;
···
 		err = 0;
 		switch (nla_type(attr)) {
 		case IFLA_BRIDGE_VLAN_TUNNEL_INFO:
-			if (!(p->flags & BR_VLAN_TUNNEL))
+			if (!p || !(p->flags & BR_VLAN_TUNNEL))
 				return -EINVAL;
 			err = br_parse_vlan_tunnel_info(attr, &tinfo_curr);
 			if (err)
net/bridge/br_stp_if.c (+2, -1)
···
 		br_debug(br, "using kernel STP\n");
 
 		/* To start timers on any ports left in blocking */
-		mod_timer(&br->hello_timer, jiffies + br->hello_time);
+		if (br->dev->flags & IFF_UP)
+			mod_timer(&br->hello_timer, jiffies + br->hello_time);
 		br_port_state_selection(br);
 	}
···
 
 static int can_pernet_init(struct net *net)
 {
-	net->can.can_rcvlists_lock =
-		__SPIN_LOCK_UNLOCKED(net->can.can_rcvlists_lock);
+	spin_lock_init(&net->can.can_rcvlists_lock);
 	net->can.can_rx_alldev_list =
 		kzalloc(sizeof(struct dev_rcv_lists), GFP_KERNEL);
net/core/dev.c (+50, -24)
···12531253 if (!new_ifalias)12541254 return -ENOMEM;12551255 dev->ifalias = new_ifalias;12561256+ memcpy(dev->ifalias, alias, len);12571257+ dev->ifalias[len] = 0;1256125812571257- strlcpy(dev->ifalias, alias, len+1);12581259 return len;12591260}12601261···47674766}47684767EXPORT_SYMBOL(gro_find_complete_by_type);4769476847694769+static void napi_skb_free_stolen_head(struct sk_buff *skb)47704770+{47714771+ skb_dst_drop(skb);47724772+ secpath_reset(skb);47734773+ kmem_cache_free(skbuff_head_cache, skb);47744774+}47754775+47704776static gro_result_t napi_skb_finish(gro_result_t ret, struct sk_buff *skb)47714777{47724778 switch (ret) {···47874779 break;4788478047894781 case GRO_MERGED_FREE:47904790- if (NAPI_GRO_CB(skb)->free == NAPI_GRO_FREE_STOLEN_HEAD) {47914791- skb_dst_drop(skb);47924792- secpath_reset(skb);47934793- kmem_cache_free(skbuff_head_cache, skb);47944794- } else {47824782+ if (NAPI_GRO_CB(skb)->free == NAPI_GRO_FREE_STOLEN_HEAD)47834783+ napi_skb_free_stolen_head(skb);47844784+ else47954785 __kfree_skb(skb);47964796- }47974786 break;4798478747994788 case GRO_HELD:···48624857 break;4863485848644859 case GRO_DROP:48654865- case GRO_MERGED_FREE:48664860 napi_reuse_skb(napi, skb);48614861+ break;48624862+48634863+ case GRO_MERGED_FREE:48644864+ if (NAPI_GRO_CB(skb)->free == NAPI_GRO_FREE_STOLEN_HEAD)48654865+ napi_skb_free_stolen_head(skb);48664866+ else48674867+ napi_reuse_skb(napi, skb);48674868 break;4868486948694870 case GRO_MERGED:···49594948}49604949EXPORT_SYMBOL(__skb_gro_checksum_complete);4961495049514951+static void net_rps_send_ipi(struct softnet_data *remsd)49524952+{49534953+#ifdef CONFIG_RPS49544954+ while (remsd) {49554955+ struct softnet_data *next = remsd->rps_ipi_next;49564956+49574957+ if (cpu_online(remsd->cpu))49584958+ smp_call_function_single_async(remsd->cpu, &remsd->csd);49594959+ remsd = next;49604960+ }49614961+#endif49624962+}49634963+49624964/*49634965 * net_rps_action_and_irq_enable sends any pending IPI's for rps.49644966 * Note: called with local irq disabled, but exits with local irq enabled.···49874963 local_irq_enable();4988496449894965 /* Send pending IPI's to kick RPS processing on remote cpus. 
*/49904990- while (remsd) {49914991- struct softnet_data *next = remsd->rps_ipi_next;49924992-49934993- if (cpu_online(remsd->cpu))49944994- smp_call_function_single_async(remsd->cpu,49954995- &remsd->csd);49964996- remsd = next;49974997- }49664966+ net_rps_send_ipi(remsd);49984967 } else49994968#endif50004969 local_irq_enable();···52165199 if (rc == BUSY_POLL_BUDGET)52175200 __napi_schedule(napi);52185201 local_bh_enable();52195219- if (local_softirq_pending())52205220- do_softirq();52215202}5222520352235204void napi_busy_loop(unsigned int napi_id,···75167501err_uninit:75177502 if (dev->netdev_ops->ndo_uninit)75187503 dev->netdev_ops->ndo_uninit(dev);75047504+ if (dev->priv_destructor)75057505+ dev->priv_destructor(dev);75197506 goto out;75207507}75217508EXPORT_SYMBOL(register_netdevice);···77257708 WARN_ON(rcu_access_pointer(dev->ip6_ptr));77267709 WARN_ON(dev->dn_ptr);7727771077287728- if (dev->destructor)77297729- dev->destructor(dev);77117711+ if (dev->priv_destructor)77127712+ dev->priv_destructor(dev);77137713+ if (dev->needs_free_netdev)77147714+ free_netdev(dev);7730771577317716 /* Report a network device has been unregistered */77327717 rtnl_lock();···77937774 } else {77947775 netdev_stats_to_stats64(storage, &dev->stats);77957776 }77967796- storage->rx_dropped += atomic_long_read(&dev->rx_dropped);77977797- storage->tx_dropped += atomic_long_read(&dev->tx_dropped);77987798- storage->rx_nohandler += atomic_long_read(&dev->rx_nohandler);77777777+ storage->rx_dropped += (unsigned long)atomic_long_read(&dev->rx_dropped);77787778+ storage->tx_dropped += (unsigned long)atomic_long_read(&dev->tx_dropped);77797779+ storage->rx_nohandler += (unsigned long)atomic_long_read(&dev->rx_nohandler);77997780 return storage;78007781}78017782EXPORT_SYMBOL(dev_get_stats);···82118192 struct sk_buff **list_skb;82128193 struct sk_buff *skb;82138194 unsigned int cpu;82148214- struct softnet_data *sd, *oldsd;81958195+ struct softnet_data *sd, *oldsd, *remsd = NULL;8215819682168197 local_irq_disable();82178198 cpu = smp_processor_id();···8251823282528233 raise_softirq_irqoff(NET_TX_SOFTIRQ);82538234 local_irq_enable();82358235+82368236+#ifdef CONFIG_RPS82378237+ remsd = oldsd->rps_ipi_list;82388238+ oldsd->rps_ipi_list = NULL;82398239+#endif82408240+ /* send out pending IPI's on offline CPU */82418241+ net_rps_send_ipi(remsd);8254824282558243 /* Process offline CPU's input_pkt_queue */82568244 while ((skb = __skb_dequeue(&oldsd->process_queue))) {
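The destructor rework visible above replaces the old catch-all dev->destructor with two pieces: dev->priv_destructor for driver-private teardown (now also run on the register_netdevice() error path) and dev->needs_free_netdev to make the core call free_netdev() itself. The ipmr hunk further down shows the simplest conversion; the general driver-side pattern is roughly this (my_teardown is a placeholder name):

	static void my_teardown(struct net_device *dev)
	{
		/* free driver-private state only; no free_netdev() here */
	}

	static void my_setup(struct net_device *dev)
	{
		/* was: dev->destructor = free_netdev; */
		dev->needs_free_netdev = true;
		dev->priv_destructor = my_teardown;	/* optional */
	}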
net/core/dev_ioctl.c (+16, -3)
···410410 if (cmd == SIOCGIFNAME)411411 return dev_ifname(net, (struct ifreq __user *)arg);412412413413+ /*414414+ * Take care of Wireless Extensions. Unfortunately struct iwreq415415+ * isn't a proper subset of struct ifreq (it's 8 byte shorter)416416+ * so we need to treat it specially, otherwise applications may417417+ * fault if the struct they're passing happens to land at the418418+ * end of a mapped page.419419+ */420420+ if (cmd >= SIOCIWFIRST && cmd <= SIOCIWLAST) {421421+ struct iwreq iwr;422422+423423+ if (copy_from_user(&iwr, arg, sizeof(iwr)))424424+ return -EFAULT;425425+426426+ return wext_handle_ioctl(net, &iwr, cmd, arg);427427+ }428428+413429 if (copy_from_user(&ifr, arg, sizeof(struct ifreq)))414430 return -EFAULT;415431···575559 ret = -EFAULT;576560 return ret;577561 }578578- /* Take care of Wireless Extensions */579579- if (cmd >= SIOCIWFIRST && cmd <= SIOCIWLAST)580580- return wext_handle_ioctl(net, &ifr, cmd, arg);581562 return -ENOTTY;582563 }583564}
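The special-casing exists because, as the new comment says, struct iwreq is 8 bytes shorter than struct ifreq, so copying a full ifreq from a user pointer that really refers to an iwreq can run off the end of a mapped page. A tiny userspace program makes the mismatch visible (sizes are architecture-dependent):

	#include <linux/wireless.h>	/* pulls in linux/if.h for struct ifreq */
	#include <stdio.h>

	int main(void)
	{
		printf("sizeof(struct ifreq) = %zu\n", sizeof(struct ifreq));
		printf("sizeof(struct iwreq) = %zu\n", sizeof(struct iwreq));
		return 0;
	}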
net/core/devlink.c (+6, -2)
···
 
 	hdr = genlmsg_put(skb, info->snd_portid, info->snd_seq,
 			  &devlink_nl_family, NLM_F_MULTI, cmd);
-	if (!hdr)
+	if (!hdr) {
+		nlmsg_free(skb);
 		return -EMSGSIZE;
+	}
 
 	if (devlink_nl_put_handle(skb, devlink))
 		goto nla_put_failure;
···
 
 	hdr = genlmsg_put(skb, info->snd_portid, info->snd_seq,
 			  &devlink_nl_family, NLM_F_MULTI, cmd);
-	if (!hdr)
+	if (!hdr) {
+		nlmsg_free(skb);
 		return -EMSGSIZE;
+	}
 
 	if (devlink_nl_put_handle(skb, devlink))
 		goto nla_put_failure;
net/core/dst.c (+14)
···
 	spin_lock_bh(&dst_garbage.lock);
 	dst = dst_garbage.list;
 	dst_garbage.list = NULL;
+	/* The code in dst_ifdown places a hold on the loopback device.
+	 * If the gc entry processing is set to expire after a lengthy
+	 * interval, this hold can cause netdev_wait_allrefs() to hang
+	 * out and wait for a long time -- until the loopback
+	 * interface is released. If we're really unlucky, it'll emit
+	 * pr_emerg messages to console too. Reset the interval here,
+	 * so dst cleanups occur in a more timely fashion.
+	 */
+	if (dst_garbage.timer_inc > DST_GC_INC) {
+		dst_garbage.timer_inc = DST_GC_INC;
+		dst_garbage.timer_expires = DST_GC_MIN;
+		mod_delayed_work(system_wq, &dst_gc_work,
+				 dst_garbage.timer_expires);
+	}
 	spin_unlock_bh(&dst_garbage.lock);
 
 	if (last)
net/core/fib_rules.c (+14, -7)
···
 	struct net *net = sock_net(skb->sk);
 	struct fib_rule_hdr *frh = nlmsg_data(nlh);
 	struct fib_rules_ops *ops = NULL;
-	struct fib_rule *rule, *tmp;
+	struct fib_rule *rule, *r;
 	struct nlattr *tb[FRA_MAX+1];
 	struct fib_kuid_range range;
 	int err = -EINVAL;
···
 
 	/*
 	 * Check if this rule is a target to any of them. If so,
+	 * adjust to the next one with the same preference or
 	 * disable them. As this operation is eventually very
-	 * expensive, it is only performed if goto rules have
-	 * actually been added.
+	 * expensive, it is only performed if goto rules (not
+	 * counting the rule being deleted itself) have actually
+	 * been added.
 	 */
 	if (ops->nr_goto_rules > 0) {
-		list_for_each_entry(tmp, &ops->rules_list, list) {
-			if (rtnl_dereference(tmp->ctarget) == rule) {
-				RCU_INIT_POINTER(tmp->ctarget, NULL);
+		struct fib_rule *n;
+
+		n = list_next_entry(rule, list);
+		if (&n->list == &ops->rules_list || n->pref != rule->pref)
+			n = NULL;
+		list_for_each_entry(r, &ops->rules_list, list) {
+			if (rtnl_dereference(r->ctarget) != rule)
+				continue;
+			rcu_assign_pointer(r->ctarget, n);
+			if (!n)
 				ops->unresolved_rules++;
-			}
 		}
 	}
net/core/rtnetlink.c (+4, -1)
···
 	       + nla_total_size(1) /* IFLA_LINKMODE */
 	       + nla_total_size(4) /* IFLA_CARRIER_CHANGES */
 	       + nla_total_size(4) /* IFLA_LINK_NETNSID */
+	       + nla_total_size(4) /* IFLA_GROUP */
 	       + nla_total_size(ext_filter_mask
 				& RTEXT_FILTER_VF ? 4 : 0) /* IFLA_NUM_VF */
 	       + rtnl_vfinfo_size(dev, ext_filter_mask) /* IFLA_VFINFO_LIST */
···
 		struct ifla_vf_mac vf_mac;
 		struct ifla_vf_info ivi;
 
+		memset(&ivi, 0, sizeof(ivi));
+
 		/* Not all SR-IOV capable drivers support the
 		 * spoofcheck and "RSS query enable" query. Preset to
 		 * -1 so the user space tool can detect that the driver
···
 		ivi.spoofchk = -1;
 		ivi.rss_query_en = -1;
 		ivi.trusted = -1;
-		memset(ivi.mac, 0, sizeof(ivi.mac));
 		/* The default value for VF link state is "auto"
 		 * IFLA_VF_LINK_STATE_AUTO which equals zero
 		 */
···
 	[IFLA_LINK_NETNSID]	= { .type = NLA_S32 },
 	[IFLA_PROTO_DOWN]	= { .type = NLA_U8 },
 	[IFLA_XDP]		= { .type = NLA_NESTED },
+	[IFLA_GROUP]		= { .type = NLA_U32 },
 };
 
 static const struct nla_policy ifla_info_policy[IFLA_INFO_MAX+1] = {
net/core/skbuff.c (+4, -1)
···
 
 	spin_lock_irqsave(&q->lock, flags);
 	skb = __skb_dequeue(q);
-	if (skb && (skb_next = skb_peek(q)))
+	if (skb && (skb_next = skb_peek(q))) {
 		icmp_next = is_icmp_err_skb(skb_next);
+		if (icmp_next)
+			sk->sk_err = SKB_EXT_ERR(skb_next)->ee.ee_origin;
+	}
 	spin_unlock_irqrestore(&q->lock, flags);
 
 	if (is_icmp_err_skb(skb) && !icmp_next)
···
 {
 	struct nlmsghdr *nlh = nlmsg_hdr(skb);
 
-	if (nlh->nlmsg_len < sizeof(*nlh) || skb->len < nlh->nlmsg_len)
+	if (skb->len < sizeof(*nlh) ||
+	    nlh->nlmsg_len < sizeof(*nlh) ||
+	    skb->len < nlh->nlmsg_len)
 		return;
 
 	if (!netlink_capable(skb, CAP_NET_ADMIN))
net/dsa/dsa.c (+47)
···223223 return 0;224224}225225226226+#ifdef CONFIG_PM_SLEEP227227+int dsa_switch_suspend(struct dsa_switch *ds)228228+{229229+ int i, ret = 0;230230+231231+ /* Suspend slave network devices */232232+ for (i = 0; i < ds->num_ports; i++) {233233+ if (!dsa_is_port_initialized(ds, i))234234+ continue;235235+236236+ ret = dsa_slave_suspend(ds->ports[i].netdev);237237+ if (ret)238238+ return ret;239239+ }240240+241241+ if (ds->ops->suspend)242242+ ret = ds->ops->suspend(ds);243243+244244+ return ret;245245+}246246+EXPORT_SYMBOL_GPL(dsa_switch_suspend);247247+248248+int dsa_switch_resume(struct dsa_switch *ds)249249+{250250+ int i, ret = 0;251251+252252+ if (ds->ops->resume)253253+ ret = ds->ops->resume(ds);254254+255255+ if (ret)256256+ return ret;257257+258258+ /* Resume slave network devices */259259+ for (i = 0; i < ds->num_ports; i++) {260260+ if (!dsa_is_port_initialized(ds, i))261261+ continue;262262+263263+ ret = dsa_slave_resume(ds->ports[i].netdev);264264+ if (ret)265265+ return ret;266266+ }267267+268268+ return 0;269269+}270270+EXPORT_SYMBOL_GPL(dsa_switch_resume);271271+#endif272272+226273static struct packet_type dsa_pack_type __read_mostly = {227274 .type = cpu_to_be16(ETH_P_XDSA),228275 .func = dsa_switch_rcv,
net/dsa/dsa2.c (+3, -1)
···
 		dsa_ds_unapply(dst, ds);
 	}
 
-	if (dst->cpu_switch)
+	if (dst->cpu_switch) {
 		dsa_cpu_port_ethtool_restore(dst->cpu_switch);
+		dst->cpu_switch = NULL;
+	}
 
 	pr_info("DSA: tree %d unapplied\n", dst->tree);
 	dst->applied = false;
net/dsa/legacy.c (-47)
···289289 dsa_switch_unregister_notifier(ds);290290}291291292292-#ifdef CONFIG_PM_SLEEP293293-int dsa_switch_suspend(struct dsa_switch *ds)294294-{295295- int i, ret = 0;296296-297297- /* Suspend slave network devices */298298- for (i = 0; i < ds->num_ports; i++) {299299- if (!dsa_is_port_initialized(ds, i))300300- continue;301301-302302- ret = dsa_slave_suspend(ds->ports[i].netdev);303303- if (ret)304304- return ret;305305- }306306-307307- if (ds->ops->suspend)308308- ret = ds->ops->suspend(ds);309309-310310- return ret;311311-}312312-EXPORT_SYMBOL_GPL(dsa_switch_suspend);313313-314314-int dsa_switch_resume(struct dsa_switch *ds)315315-{316316- int i, ret = 0;317317-318318- if (ds->ops->resume)319319- ret = ds->ops->resume(ds);320320-321321- if (ret)322322- return ret;323323-324324- /* Resume slave network devices */325325- for (i = 0; i < ds->num_ports; i++) {326326- if (!dsa_is_port_initialized(ds, i))327327- continue;328328-329329- ret = dsa_slave_resume(ds->ports[i].netdev);330330- if (ret)331331- return ret;332332- }333333-334334- return 0;335335-}336336-EXPORT_SYMBOL_GPL(dsa_switch_resume);337337-#endif338338-339292/* platform driver init and cleanup *****************************************/340293static int dev_is_class(struct device *dev, void *class)341294{
···
 	/* Needed by both icmp_global_allow and icmp_xmit_lock */
 	local_bh_disable();
 
-	/* Check global sysctl_icmp_msgs_per_sec ratelimit */
-	if (!icmpv4_global_allow(net, type, code))
+	/* Check global sysctl_icmp_msgs_per_sec ratelimit, unless the
+	 * incoming dev is loopback. If the outgoing dev changes to a
+	 * non-loopback device, the per-peer ratelimit still applies
+	 * (in icmpv4_xrlim_allow).
+	 */
+	if (!(skb_in->dev && (skb_in->dev->flags & IFF_LOOPBACK)) &&
+	    !icmpv4_global_allow(net, type, code))
 		goto out_bh_enable;
 
 	sk = icmp_xmit_lock(net);
···101101static void ipmr_free_table(struct mr_table *mrt);102102103103static void ip_mr_forward(struct net *net, struct mr_table *mrt,104104- struct sk_buff *skb, struct mfc_cache *cache,105105- int local);104104+ struct net_device *dev, struct sk_buff *skb,105105+ struct mfc_cache *cache, int local);106106static int ipmr_cache_report(struct mr_table *mrt,107107 struct sk_buff *pkt, vifi_t vifi, int assert);108108static int __ipmr_fill_mroute(struct mr_table *mrt, struct sk_buff *skb,···501501 dev->mtu = ETH_DATA_LEN - sizeof(struct iphdr) - 8;502502 dev->flags = IFF_NOARP;503503 dev->netdev_ops = ®_vif_netdev_ops;504504- dev->destructor = free_netdev;504504+ dev->needs_free_netdev = true;505505 dev->features |= NETIF_F_NETNS_LOCAL;506506}507507···988988989989 rtnl_unicast(skb, net, NETLINK_CB(skb).portid);990990 } else {991991- ip_mr_forward(net, mrt, skb, c, 0);991991+ ip_mr_forward(net, mrt, skb->dev, skb, c, 0);992992 }993993 }994994}···1073107310741074/* Queue a packet for resolution. It gets locked cache entry! */10751075static int ipmr_cache_unresolved(struct mr_table *mrt, vifi_t vifi,10761076- struct sk_buff *skb)10761076+ struct sk_buff *skb, struct net_device *dev)10771077{10781078 const struct iphdr *iph = ip_hdr(skb);10791079 struct mfc_cache *c;···11301130 kfree_skb(skb);11311131 err = -ENOBUFS;11321132 } else {11331133+ if (dev) {11341134+ skb->dev = dev;11351135+ skb->skb_iif = dev->ifindex;11361136+ }11331137 skb_queue_tail(&c->mfc_un.unres.unresolved, skb);11341138 err = 0;11351139 }···1832182818331829/* "local" means that we should preserve one skb (for local delivery) */18341830static void ip_mr_forward(struct net *net, struct mr_table *mrt,18351835- struct sk_buff *skb, struct mfc_cache *cache,18361836- int local)18311831+ struct net_device *dev, struct sk_buff *skb,18321832+ struct mfc_cache *cache, int local)18371833{18381838- int true_vifi = ipmr_find_vif(mrt, skb->dev);18341834+ int true_vifi = ipmr_find_vif(mrt, dev);18391835 int psend = -1;18401836 int vif, ct;18411837···18571853 }1858185418591855 /* Wrong interface: drop packet and (maybe) send PIM assert. */18601860- if (mrt->vif_table[vif].dev != skb->dev) {18611861- struct net_device *mdev;18621862-18631863- mdev = l3mdev_master_dev_rcu(mrt->vif_table[vif].dev);18641864- if (mdev == skb->dev)18651865- goto forward;18661866-18561856+ if (mrt->vif_table[vif].dev != dev) {18671857 if (rt_is_output_route(skb_rtable(skb))) {18681858 /* It is our own packet, looped back.18691859 * Very complicated situation...···20512053 read_lock(&mrt_lock);20522054 vif = ipmr_find_vif(mrt, dev);20532055 if (vif >= 0) {20542054- int err2 = ipmr_cache_unresolved(mrt, vif, skb);20562056+ int err2 = ipmr_cache_unresolved(mrt, vif, skb, dev);20552057 read_unlock(&mrt_lock);2056205820572059 return err2;···20622064 }2063206520642066 read_lock(&mrt_lock);20652065- ip_mr_forward(net, mrt, skb, cache, local);20672067+ ip_mr_forward(net, mrt, dev, skb, cache, local);20662068 read_unlock(&mrt_lock);2067206920682070 if (local)···22362238 iph->saddr = saddr;22372239 iph->daddr = daddr;22382240 iph->version = 0;22392239- err = ipmr_cache_unresolved(mrt, vif, skb2);22412241+ err = ipmr_cache_unresolved(mrt, vif, skb2, dev);22402242 read_unlock(&mrt_lock);22412243 rcu_read_unlock();22422244 return err;
net/ipv4/tcp.c (+6, -2)
···23302330 tcp_init_send_head(sk);23312331 memset(&tp->rx_opt, 0, sizeof(tp->rx_opt));23322332 __sk_dst_reset(sk);23332333+ dst_release(sk->sk_rx_dst);23342334+ sk->sk_rx_dst = NULL;23332335 tcp_saved_syn_free(tp);2334233623352337 /* Clean up fastopen related fields */···23832381 return 0;23842382}2385238323862386-static int tcp_repair_options_est(struct tcp_sock *tp,23842384+static int tcp_repair_options_est(struct sock *sk,23872385 struct tcp_repair_opt __user *optbuf, unsigned int len)23882386{23872387+ struct tcp_sock *tp = tcp_sk(sk);23892388 struct tcp_repair_opt opt;2390238923912390 while (len >= sizeof(opt)) {···23992396 switch (opt.opt_code) {24002397 case TCPOPT_MSS:24012398 tp->rx_opt.mss_clamp = opt.opt_val;23992399+ tcp_mtup_init(sk);24022400 break;24032401 case TCPOPT_WINDOW:24042402 {···25592555 if (!tp->repair)25602556 err = -EINVAL;25612557 else if (sk->sk_state == TCP_ESTABLISHED)25622562- err = tcp_repair_options_est(tp,25582558+ err = tcp_repair_options_est(sk,25632559 (struct tcp_repair_opt __user *)optval,25642560 optlen);25652561 else
net/ipv4/tcp_cong.c (+1)
···
 {
 	const struct inet_connection_sock *icsk = inet_csk(sk);
 
+	tcp_sk(sk)->prior_ssthresh = 0;
 	if (icsk->icsk_ca_ops->init)
 		icsk->icsk_ca_ops->init(sk);
 	if (tcp_ca_needs_ecn(sk))
net/ipv6/addrconf.c (+6, -5)
···332332static void addrconf_mod_dad_work(struct inet6_ifaddr *ifp,333333 unsigned long delay)334334{335335- if (!delayed_work_pending(&ifp->dad_work))336336- in6_ifa_hold(ifp);337337- mod_delayed_work(addrconf_wq, &ifp->dad_work, delay);335335+ in6_ifa_hold(ifp);336336+ if (mod_delayed_work(addrconf_wq, &ifp->dad_work, delay))337337+ in6_ifa_put(ifp);338338}339339340340static int snmp6_alloc_dev(struct inet6_dev *idev)···33693369 struct net_device *dev = netdev_notifier_info_to_dev(ptr);33703370 struct netdev_notifier_changeupper_info *info;33713371 struct inet6_dev *idev = __in6_dev_get(dev);33723372+ struct net *net = dev_net(dev);33723373 int run_pending = 0;33733374 int err;33743375···33853384 case NETDEV_CHANGEMTU:33863385 /* if MTU under IPV6_MIN_MTU stop IPv6 on this interface. */33873386 if (dev->mtu < IPV6_MIN_MTU) {33883388- addrconf_ifdown(dev, 1);33873387+ addrconf_ifdown(dev, dev != net->loopback_dev);33893388 break;33903389 }33913390···35013500 * IPV6_MIN_MTU stop IPv6 on this interface.35023501 */35033502 if (dev->mtu < IPV6_MIN_MTU)35043504- addrconf_ifdown(dev, 1);35033503+ addrconf_ifdown(dev, dev != net->loopback_dev);35053504 }35063505 break;35073506
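The addrconf_mod_dad_work() rewrite is the standard fix for a refcount-vs-workqueue race: the old code took a reference only when no work was pending, which races with the work completing in that same window. The corrected idiom, annotated:

	in6_ifa_hold(ifp);		/* reference owned by the queued work */
	if (mod_delayed_work(addrconf_wq, &ifp->dad_work, delay))
		in6_ifa_put(ifp);	/* returned true: a pending work was
					 * merely re-timed and already owns a
					 * reference, so drop the extra one */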
net/ipv6/calipso.c (+5, -1)
···13191319 struct ipv6hdr *ip6_hdr;13201320 struct ipv6_opt_hdr *hop;13211321 unsigned char buf[CALIPSO_MAX_BUFFER];13221322- int len_delta, new_end, pad;13221322+ int len_delta, new_end, pad, payload;13231323 unsigned int start, end;1324132413251325 ip6_hdr = ipv6_hdr(skb);···13461346 if (ret_val < 0)13471347 return ret_val;1348134813491349+ ip6_hdr = ipv6_hdr(skb); /* Reset as skb_cow() may have moved it */13501350+13491351 if (len_delta) {13501352 if (len_delta > 0)13511353 skb_push(skb, len_delta);···13571355 sizeof(*ip6_hdr) + start);13581356 skb_reset_network_header(skb);13591357 ip6_hdr = ipv6_hdr(skb);13581358+ payload = ntohs(ip6_hdr->payload_len);13591359+ ip6_hdr->payload_len = htons(payload + len_delta);13601360 }1361136113621362 hop = (struct ipv6_opt_hdr *)(ip6_hdr + 1);
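The re-fetch of ip6_hdr (and the payload_len fix-up) illustrates a general skb rule: any helper that may reallocate the head, such as the skb_cow() mentioned in the new comment, invalidates previously cached header pointers. Schematically (needed_headroom is a placeholder):

	struct ipv6hdr *hdr = ipv6_hdr(skb);

	if (skb_cow(skb, needed_headroom))	/* may replace skb->head */
		return -ENOMEM;

	hdr = ipv6_hdr(skb);	/* re-fetch; the old pointer may be stale */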
net/ipv6/datagram.c (+7, -1)
···
 	 */
 
 	err = ip6_datagram_dst_update(sk, true);
-	if (err)
+	if (err) {
+		/* Reset daddr and dport so that udp_v6_early_demux()
+		 * fails to find this socket
+		 */
+		memset(&sk->sk_v6_daddr, 0, sizeof(sk->sk_v6_daddr));
+		inet->inet_dport = 0;
 		goto out;
+	}
 
 	sk->sk_state = TCP_ESTABLISHED;
 	sk_set_txhash(sk);
net/ipv6/esp6_offload.c (+25)
···3030#include <net/ipv6.h>3131#include <linux/icmpv6.h>32323333+static __u16 esp6_nexthdr_esp_offset(struct ipv6hdr *ipv6_hdr, int nhlen)3434+{3535+ int off = sizeof(struct ipv6hdr);3636+ struct ipv6_opt_hdr *exthdr;3737+3838+ if (likely(ipv6_hdr->nexthdr == NEXTHDR_ESP))3939+ return offsetof(struct ipv6hdr, nexthdr);4040+4141+ while (off < nhlen) {4242+ exthdr = (void *)ipv6_hdr + off;4343+ if (exthdr->nexthdr == NEXTHDR_ESP)4444+ return off;4545+4646+ off += ipv6_optlen(exthdr);4747+ }4848+4949+ return 0;5050+}5151+3352static struct sk_buff **esp6_gro_receive(struct sk_buff **head,3453 struct sk_buff *skb)3554{···5738 struct xfrm_state *x;5839 __be32 seq;5940 __be32 spi;4141+ int nhoff;6042 int err;61436244 skb_pull(skb, offset);···92729373 xo->flags |= XFRM_GRO;94747575+ nhoff = esp6_nexthdr_esp_offset(ipv6_hdr(skb), offset);7676+ if (!nhoff)7777+ goto out;7878+7979+ IP6CB(skb)->nhoff = nhoff;9580 XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6 = NULL;9681 XFRM_SPI_SKB_CB(skb)->family = AF_INET6;9782 XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct ipv6hdr, daddr);
···77 * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>88 * Copyright 2007, Michael Wu <flamingice@sourmilk.net>99 * Copyright 2007-2010, Intel Corporation1010+ * Copyright 2017 Intel Deutschland GmbH1011 *1112 * This program is free software; you can redistribute it and/or modify1213 * it under the terms of the GNU General Public License version 2 as···290289{291290 int i;292291293293- cancel_work_sync(&sta->ampdu_mlme.work);294294-295292 for (i = 0; i < IEEE80211_NUM_TIDS; i++) {296293 __ieee80211_stop_tx_ba_session(sta, i, reason);297294 __ieee80211_stop_rx_ba_session(sta, i, WLAN_BACK_RECIPIENT,···297298 reason != AGG_STOP_DESTROY_STA &&298299 reason != AGG_STOP_PEER_REQUEST);299300 }301301+302302+ /* stopping might queue the work again - so cancel only afterwards */303303+ cancel_work_sync(&sta->ampdu_mlme.work);300304}301305302306void ieee80211_ba_session_work(struct work_struct *work)···354352 spin_unlock_bh(&sta->lock);355353356354 tid_tx = rcu_dereference_protected_tid_tx(sta, tid);357357- if (tid_tx && test_and_clear_bit(HT_AGG_STATE_WANT_STOP,358358- &tid_tx->state))355355+ if (!tid_tx)356356+ continue;357357+358358+ if (test_and_clear_bit(HT_AGG_STATE_START_CB, &tid_tx->state))359359+ ieee80211_start_tx_ba_cb(sta, tid, tid_tx);360360+ if (test_and_clear_bit(HT_AGG_STATE_WANT_STOP, &tid_tx->state))359361 ___ieee80211_stop_tx_ba_session(sta, tid,360362 AGG_STOP_LOCAL_REQUEST);363363+ if (test_and_clear_bit(HT_AGG_STATE_STOP_CB, &tid_tx->state))364364+ ieee80211_stop_tx_ba_cb(sta, tid, tid_tx);361365 }362366 mutex_unlock(&sta->ampdu_mlme.mtx);363367}
···601601 struct ieee80211_supported_band *sband;602602 struct ieee80211_chanctx_conf *chanctx_conf;603603 struct ieee80211_channel *chan;604604- u32 rate_flags, rates = 0;604604+ u32 rates = 0;605605606606 sdata_assert_lock(sdata);607607···612612 return;613613 }614614 chan = chanctx_conf->def.chan;615615- rate_flags = ieee80211_chandef_rate_flags(&chanctx_conf->def);616615 rcu_read_unlock();617616 sband = local->hw.wiphy->bands[chan->band];618617 shift = ieee80211_vif_get_shift(&sdata->vif);···635636 */636637 rates_len = 0;637638 for (i = 0; i < sband->n_bitrates; i++) {638638- if ((rate_flags & sband->bitrates[i].flags)639639- != rate_flags)640640- continue;641639 rates |= BIT(i);642640 rates_len++;643641 }···28142818 u32 *rates, u32 *basic_rates,28152819 bool *have_higher_than_11mbit,28162820 int *min_rate, int *min_rate_index,28172817- int shift, u32 rate_flags)28212821+ int shift)28182822{28192823 int i, j;28202824···28422846 int brate;2843284728442848 br = &sband->bitrates[j];28452845- if ((rate_flags & br->flags) != rate_flags)28462846- continue;2847284928482850 brate = DIV_ROUND_UP(br->bitrate, (1 << shift) * 5);28492851 if (brate == rate) {···43924398 return -ENOMEM;43934399 }4394440043954395- if (new_sta || override) {43964396- err = ieee80211_prep_channel(sdata, cbss);43974397- if (err) {43984398- if (new_sta)43994399- sta_info_free(local, new_sta);44004400- return -EINVAL;44014401- }44024402- }44034403-44014401+ /*44024402+ * Set up the information for the new channel before setting the44034403+ * new channel. We can't - completely race-free - change the basic44044404+ * rates bitmap and the channel (sband) that it refers to, but if44054405+ * we set it up before we at least avoid calling into the driver's44064406+ * bss_info_changed() method with invalid information (since we do44074407+ * call that from changing the channel - only for IDLE and perhaps44084408+ * some others, but ...).44094409+ *44104410+ * So to avoid that, just set up all the new information before the44114411+ * channel, but tell the driver to apply it only afterwards, since44124412+ * it might need the new channel for that.44134413+ */44044414 if (new_sta) {44054415 u32 rates = 0, basic_rates = 0;44064416 bool have_higher_than_11mbit;44074417 int min_rate = INT_MAX, min_rate_index = -1;44084408- struct ieee80211_chanctx_conf *chanctx_conf;44094418 const struct cfg80211_bss_ies *ies;44104419 int shift = ieee80211_vif_get_shift(&sdata->vif);44114411- u32 rate_flags;44124412-44134413- rcu_read_lock();44144414- chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);44154415- if (WARN_ON(!chanctx_conf)) {44164416- rcu_read_unlock();44174417- sta_info_free(local, new_sta);44184418- return -EINVAL;44194419- }44204420- rate_flags = ieee80211_chandef_rate_flags(&chanctx_conf->def);44214421- rcu_read_unlock();4422442044234421 ieee80211_get_rates(sband, bss->supp_rates,44244422 bss->supp_rates_len,44254423 &rates, &basic_rates,44264424 &have_higher_than_11mbit,44274425 &min_rate, &min_rate_index,44284428- shift, rate_flags);44264426+ shift);4429442744304428 /*44314429 * This used to be a workaround for basic rates missing···44754489 sdata->vif.bss_conf.sync_dtim_count = 0;44764490 }44774491 rcu_read_unlock();44924492+ }4478449344794479- /* tell driver about BSSID, basic rates and timing */44944494+ if (new_sta || override) {44954495+ err = ieee80211_prep_channel(sdata, cbss);44964496+ if (err) {44974497+ if (new_sta)44984498+ sta_info_free(local, new_sta);44994499+ return -EINVAL;45004500+ }45014501+ 
}45024502+45034503+ if (new_sta) {45044504+ /*45054505+ * tell driver about BSSID, basic rates and timing45064506+ * this was set up above, before setting the channel45074507+ */44804508 ieee80211_bss_info_change_notify(sdata,44814509 BSS_CHANGED_BSSID | BSS_CHANGED_BASIC_RATES |44824510 BSS_CHANGED_BEACON_INT);
net/mac80211/rx.c (+5, -1)
···16131613 */16141614 if (!ieee80211_hw_check(&sta->local->hw, AP_LINK_PS) &&16151615 !ieee80211_has_morefrags(hdr->frame_control) &&16161616+ !ieee80211_is_back_req(hdr->frame_control) &&16161617 !(status->rx_flags & IEEE80211_RX_DEFERRED_RELEASE) &&16171618 (rx->sdata->vif.type == NL80211_IFTYPE_AP ||16181619 rx->sdata->vif.type == NL80211_IFTYPE_AP_VLAN) &&16191619- /* PM bit is only checked in frames where it isn't reserved,16201620+ /*16211621+ * PM bit is only checked in frames where it isn't reserved,16201622 * in AP mode it's reserved in non-bufferable management frames16211623 * (cf. IEEE 802.11-2012 8.2.4.1.7 Power Management field)16241624+ * BAR frames should be ignored as specified in16251625+ * IEEE 802.11-2012 10.2.1.2.16221626 */16231627 (!ieee80211_is_mgmt(hdr->frame_control) ||16241628 ieee80211_is_bufferable_mmpdu(hdr->frame_control))) {
···
 	}
 out:
 	local_bh_enable();
-	if (last)
+	if (last) {
+		/* nf ct hash resize happened, now clear the leftover. */
+		if ((struct nf_conn *)cb->args[1] == last)
+			cb->args[1] = 0;
+
 		nf_ct_put(last);
+	}
 
 	while (i) {
 		i--;
net/netfilter/nf_conntrack_proto_sctp.c (+6, -3)
···
 			u8 pf, unsigned int hooknum)
 {
 	const struct sctphdr *sh;
-	struct sctphdr _sctph;
 	const char *logmsg;
 
-	sh = skb_header_pointer(skb, dataoff, sizeof(_sctph), &_sctph);
-	if (!sh) {
+	if (skb->len < dataoff + sizeof(struct sctphdr)) {
 		logmsg = "nf_ct_sctp: short packet ";
 		goto out_invalid;
 	}
 	if (net->ct.sysctl_checksum && hooknum == NF_INET_PRE_ROUTING &&
 	    skb->ip_summed == CHECKSUM_NONE) {
+		if (!skb_make_writable(skb, dataoff + sizeof(struct sctphdr))) {
+			logmsg = "nf_ct_sctp: failed to read header ";
+			goto out_invalid;
+		}
+		sh = (const struct sctphdr *)(skb->data + dataoff);
 		if (sh->checksum != sctp_compute_cksum(skb, dataoff)) {
 			logmsg = "nf_ct_sctp: bad CRC ";
 			goto out_invalid;
net/netfilter/nf_nat_core.c (+1, -1)
···
 	 * Else, when the conntrack is destroyed, nf_nat_cleanup_conntrack()
 	 * will delete entry from already-freed table.
 	 */
-	ct->status &= ~IPS_NAT_DONE_MASK;
+	clear_bit(IPS_SRC_NAT_DONE_BIT, &ct->status);
 	rhltable_remove(&nf_nat_bysource_table, &ct->nat_bysource,
 			nf_nat_bysource_params);
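The one-liner matters because ct->status is modified concurrently: a plain read-modify-write can lose a bit that another CPU sets between the load and the store, while the atomic bit op cannot. Schematically:

	ct->status &= ~IPS_NAT_DONE_MASK;	/* load, mask, store: a
						 * concurrent set_bit() in the
						 * window is silently undone */

	clear_bit(IPS_SRC_NAT_DONE_BIT, &ct->status);	/* atomic, touches
							 * exactly one bit */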
net/netfilter/nft_set_rbtree.c (+11, -11)
···
 		else if (d > 0)
 			p = &parent->rb_right;
 		else {
-			if (nft_set_elem_active(&rbe->ext, genmask)) {
-				if (nft_rbtree_interval_end(rbe) &&
-				    !nft_rbtree_interval_end(new))
-					p = &parent->rb_left;
-				else if (!nft_rbtree_interval_end(rbe) &&
-					 nft_rbtree_interval_end(new))
-					p = &parent->rb_right;
-				else {
-					*ext = &rbe->ext;
-					return -EEXIST;
-				}
+			if (nft_rbtree_interval_end(rbe) &&
+			    !nft_rbtree_interval_end(new)) {
+				p = &parent->rb_left;
+			} else if (!nft_rbtree_interval_end(rbe) &&
+				   nft_rbtree_interval_end(new)) {
+				p = &parent->rb_right;
+			} else if (nft_set_elem_active(&rbe->ext, genmask)) {
+				*ext = &rbe->ext;
+				return -EEXIST;
+			} else {
+				p = &parent->rb_left;
 			}
 		}
 	}
···
 
 	for (i = 0; i < (reqs << 1); i++) {
 		rqst = kzalloc(sizeof(*rqst), GFP_KERNEL);
-		if (!rqst) {
-			pr_err("RPC:       %s: Failed to create bc rpc_rqst\n",
-			       __func__);
+		if (!rqst)
 			goto out_free;
-		}
+
 		dprintk("RPC:       %s: new rqst %p\n", __func__, rqst);
 
 		rqst->rq_xprt = &r_xprt->rx_xprt;
net/sunrpc/xprtsock.c (+6, -1)
···
 	case -ENETUNREACH:
 	case -EADDRINUSE:
 	case -ENOBUFS:
-		/* retry with existing socket, after a delay */
+		/*
+		 * xs_tcp_force_close() wakes tasks with -EIO.
+		 * We need to wake them first to ensure the
+		 * correct error code.
+		 */
+		xprt_wake_pending_tasks(xprt, status);
 		xs_tcp_force_close(xprt);
 		goto out;
 	}
net/tipc/msg.c (+1, -1)
···
 	}
 
 	if (skb_cloned(_skb) &&
-	    pskb_expand_head(_skb, BUF_HEADROOM, BUF_TAILROOM, GFP_KERNEL))
+	    pskb_expand_head(_skb, BUF_HEADROOM, BUF_TAILROOM, GFP_ATOMIC))
 		goto exit;
 
 	/* Now reverse the concerned fields */
net/unix/af_unix.c (+6, -1)
···
 	struct path path = { };
 
 	err = -EINVAL;
-	if (sunaddr->sun_family != AF_UNIX)
+	if (addr_len < offsetofend(struct sockaddr_un, sun_family) ||
+	    sunaddr->sun_family != AF_UNIX)
 		goto out;
 
 	if (addr_len == sizeof(short)) {
···
 	struct sock *other;
 	unsigned int hash;
 	int err;
+
+	err = -EINVAL;
+	if (alen < offsetofend(struct sockaddr, sa_family))
+		goto out;
 
 	if (addr->sa_family != AF_UNSPEC) {
 		err = unix_mkname(sunaddr, alen, &hash);
net/wireless/wext-core.c (+9, -13)
···914914 * Main IOCTl dispatcher.915915 * Check the type of IOCTL and call the appropriate wrapper...916916 */917917-static int wireless_process_ioctl(struct net *net, struct ifreq *ifr,917917+static int wireless_process_ioctl(struct net *net, struct iwreq *iwr,918918 unsigned int cmd,919919 struct iw_request_info *info,920920 wext_ioctl_func standard,921921 wext_ioctl_func private)922922{923923- struct iwreq *iwr = (struct iwreq *) ifr;924923 struct net_device *dev;925924 iw_handler handler;926925···927928 * The copy_to/from_user() of ifr is also dealt with in there */928929929930 /* Make sure the device exist */930930- if ((dev = __dev_get_by_name(net, ifr->ifr_name)) == NULL)931931+ if ((dev = __dev_get_by_name(net, iwr->ifr_name)) == NULL)931932 return -ENODEV;932933933934 /* A bunch of special cases, then the generic case...···956957 else if (private)957958 return private(dev, iwr, cmd, info, handler);958959 }959959- /* Old driver API : call driver ioctl handler */960960- if (dev->netdev_ops->ndo_do_ioctl)961961- return dev->netdev_ops->ndo_do_ioctl(dev, ifr, cmd);962960 return -EOPNOTSUPP;963961}964962···973977}974978975979/* entry point from dev ioctl */976976-static int wext_ioctl_dispatch(struct net *net, struct ifreq *ifr,980980+static int wext_ioctl_dispatch(struct net *net, struct iwreq *iwr,977981 unsigned int cmd, struct iw_request_info *info,978982 wext_ioctl_func standard,979983 wext_ioctl_func private)···983987 if (ret)984988 return ret;985989986986- dev_load(net, ifr->ifr_name);990990+ dev_load(net, iwr->ifr_name);987991 rtnl_lock();988988- ret = wireless_process_ioctl(net, ifr, cmd, info, standard, private);992992+ ret = wireless_process_ioctl(net, iwr, cmd, info, standard, private);989993 rtnl_unlock();990994991995 return ret;···10351039}103610401037104110381038-int wext_handle_ioctl(struct net *net, struct ifreq *ifr, unsigned int cmd,10421042+int wext_handle_ioctl(struct net *net, struct iwreq *iwr, unsigned int cmd,10391043 void __user *arg)10401044{10411045 struct iw_request_info info = { .cmd = cmd, .flags = 0 };10421046 int ret;1043104710441044- ret = wext_ioctl_dispatch(net, ifr, cmd, &info,10481048+ ret = wext_ioctl_dispatch(net, iwr, cmd, &info,10451049 ioctl_standard_call,10461050 ioctl_private_call);10471051 if (ret >= 0 &&10481052 IW_IS_GET(cmd) &&10491049- copy_to_user(arg, ifr, sizeof(struct iwreq)))10531053+ copy_to_user(arg, iwr, sizeof(struct iwreq)))10501054 return -EFAULT;1051105510521056 return ret;···11031107 info.cmd = cmd;11041108 info.flags = IW_REQUEST_FLAG_COMPAT;1105110911061106- ret = wext_ioctl_dispatch(net, (struct ifreq *) &iwr, cmd, &info,11101110+ ret = wext_ioctl_dispatch(net, &iwr, cmd, &info,11071111 compat_standard_call,11081112 compat_private_call);11091113
···
 include scripts/Kbuild.include
 
 srcdir := $(srctree)/$(obj)
-subdirs := $(patsubst $(srcdir)/%/.,%,$(wildcard $(srcdir)/*/.))
+
+# When make is run under a fakechroot environment, the function
+# $(wildcard $(srcdir)/*/.) doesn't only return directories, but also regular
+# files. So, we are using a combination of sort/dir/wildcard which works
+# with fakechroot.
+subdirs := $(patsubst $(srcdir)/%/,%,\
+	   $(filter-out $(srcdir)/,\
+	   $(sort $(dir $(wildcard $(srcdir)/*/)))))
+
 # caller may set destination dir (when installing to asm/)
 _dst := $(if $(dst),$(dst),$(obj))
···
 
 	  If you are unsure as to whether this is required, answer N.
 
+config KEYS_COMPAT
+	def_bool y
+	depends on COMPAT && KEYS
+
 config PERSISTENT_KEYRINGS
 	bool "Enable register of persistent per-UID keyrings"
 	depends on KEYS
···
 config KEY_DH_OPERATIONS
 	bool "Diffie-Hellman operations on retained keys"
 	depends on KEYS
-	select MPILIB
 	select CRYPTO
 	select CRYPTO_HASH
+	select CRYPTO_DH
 	help
 	  This option provides support for calculating Diffie-Hellman
 	  public keys and shared secrets using values stored as keys
security/keys/dh.c (+190, -122)
···88 * 2 of the License, or (at your option) any later version.99 */10101111-#include <linux/mpi.h>1211#include <linux/slab.h>1312#include <linux/uaccess.h>1313+#include <linux/scatterlist.h>1414#include <linux/crypto.h>1515#include <crypto/hash.h>1616+#include <crypto/kpp.h>1717+#include <crypto/dh.h>1618#include <keys/user-type.h>1719#include "internal.h"18201919-/*2020- * Public key or shared secret generation function [RFC2631 sec 2.1.1]2121- *2222- * ya = g^xa mod p;2323- * or2424- * ZZ = yb^xa mod p;2525- *2626- * where xa is the local private key, ya is the local public key, g is2727- * the generator, p is the prime, yb is the remote public key, and ZZ2828- * is the shared secret.2929- *3030- * Both are the same calculation, so g or yb are the "base" and ya or3131- * ZZ are the "result".3232- */3333-static int do_dh(MPI result, MPI base, MPI xa, MPI p)3434-{3535- return mpi_powm(result, base, xa, p);3636-}3737-3838-static ssize_t mpi_from_key(key_serial_t keyid, size_t maxlen, MPI *mpi)2121+static ssize_t dh_data_from_key(key_serial_t keyid, void **data)3922{4023 struct key *key;4124 key_ref_t key_ref;···3956 status = key_validate(key);4057 if (status == 0) {4158 const struct user_key_payload *payload;5959+ uint8_t *duplicate;42604361 payload = user_key_payload_locked(key);44624545- if (maxlen == 0) {4646- *mpi = NULL;6363+ duplicate = kmemdup(payload->data, payload->datalen,6464+ GFP_KERNEL);6565+ if (duplicate) {6666+ *data = duplicate;4767 ret = payload->datalen;4848- } else if (payload->datalen <= maxlen) {4949- *mpi = mpi_read_raw_data(payload->data,5050- payload->datalen);5151- if (*mpi)5252- ret = payload->datalen;5368 } else {5454- ret = -EINVAL;6969+ ret = -ENOMEM;5570 }5671 }5772 up_read(&key->sem);···5877 key_put(key);5978error:6079 return ret;8080+}8181+8282+static void dh_free_data(struct dh *dh)8383+{8484+ kzfree(dh->key);8585+ kzfree(dh->p);8686+ kzfree(dh->g);8787+}8888+8989+struct dh_completion {9090+ struct completion completion;9191+ int err;9292+};9393+9494+static void dh_crypto_done(struct crypto_async_request *req, int err)9595+{9696+ struct dh_completion *compl = req->data;9797+9898+ if (err == -EINPROGRESS)9999+ return;100100+101101+ compl->err = err;102102+ complete(&compl->completion);61103}6210463105struct kdf_sdesc {···9389 struct crypto_shash *tfm;9490 struct kdf_sdesc *sdesc;9591 int size;9292+ int err;96939794 /* allocate synchronous hash */9895 tfm = crypto_alloc_shash(hashname, 0, 0);···10297 return PTR_ERR(tfm);10398 }10499100100+ err = -EINVAL;101101+ if (crypto_shash_digestsize(tfm) == 0)102102+ goto out_free_tfm;103103+104104+ err = -ENOMEM;105105 size = sizeof(struct shash_desc) + crypto_shash_descsize(tfm);106106 sdesc = kmalloc(size, GFP_KERNEL);107107 if (!sdesc)108108- return -ENOMEM;108108+ goto out_free_tfm;109109 sdesc->shash.tfm = tfm;110110 sdesc->shash.flags = 0x0;111111112112 *sdesc_ret = sdesc;113113114114 return 0;115115+116116+out_free_tfm:117117+ crypto_free_shash(tfm);118118+ return err;115119}116120117121static void kdf_dealloc(struct kdf_sdesc *sdesc)···134120 kzfree(sdesc);135121}136122137137-/* convert 32 bit integer into its string representation */138138-static inline void crypto_kw_cpu_to_be32(u32 val, u8 *buf)139139-{140140- __be32 *a = (__be32 *)buf;141141-142142- *a = cpu_to_be32(val);143143-}144144-145123/*146124 * Implementation of the KDF in counter mode according to SP800-108 section 5.1147125 * as well as SP800-56A section 5.8.1 (Single-step KDF).···144138 * 5.8.1.2).145139 */146140static int kdf_ctr(struct 
kdf_sdesc *sdesc, const u8 *src, unsigned int slen,147147- u8 *dst, unsigned int dlen)141141+ u8 *dst, unsigned int dlen, unsigned int zlen)148142{149143 struct shash_desc *desc = &sdesc->shash;150144 unsigned int h = crypto_shash_digestsize(desc->tfm);151145 int err = 0;152146 u8 *dst_orig = dst;153153- u32 i = 1;154154- u8 iteration[sizeof(u32)];147147+ __be32 counter = cpu_to_be32(1);155148156149 while (dlen) {157150 err = crypto_shash_init(desc);158151 if (err)159152 goto err;160153161161- crypto_kw_cpu_to_be32(i, iteration);162162- err = crypto_shash_update(desc, iteration, sizeof(u32));154154+ err = crypto_shash_update(desc, (u8 *)&counter, sizeof(__be32));163155 if (err)164156 goto err;157157+158158+ if (zlen && h) {159159+ u8 tmpbuffer[h];160160+ size_t chunk = min_t(size_t, zlen, h);161161+ memset(tmpbuffer, 0, chunk);162162+163163+ do {164164+ err = crypto_shash_update(desc, tmpbuffer,165165+ chunk);166166+ if (err)167167+ goto err;168168+169169+ zlen -= chunk;170170+ chunk = min_t(size_t, zlen, h);171171+ } while (zlen);172172+ }165173166174 if (src && slen) {167175 err = crypto_shash_update(desc, src, slen);···199179200180 dlen -= h;201181 dst += h;202202- i++;182182+ counter = cpu_to_be32(be32_to_cpu(counter) + 1);203183 }204184 }205185···212192213193static int keyctl_dh_compute_kdf(struct kdf_sdesc *sdesc,214194 char __user *buffer, size_t buflen,215215- uint8_t *kbuf, size_t kbuflen)195195+ uint8_t *kbuf, size_t kbuflen, size_t lzero)216196{217197 uint8_t *outbuf = NULL;218198 int ret;···223203 goto err;224204 }225205226226- ret = kdf_ctr(sdesc, kbuf, kbuflen, outbuf, buflen);206206+ ret = kdf_ctr(sdesc, kbuf, kbuflen, outbuf, buflen, lzero);227207 if (ret)228208 goto err;229209···241221 struct keyctl_kdf_params *kdfcopy)242222{243223 long ret;244244- MPI base, private, prime, result;245245- unsigned nbytes;224224+ ssize_t dlen;225225+ int secretlen;226226+ int outlen;246227 struct keyctl_dh_params pcopy;247247- uint8_t *kbuf;248248- ssize_t keylen;249249- size_t resultlen;228228+ struct dh dh_inputs;229229+ struct scatterlist outsg;230230+ struct dh_completion compl;231231+ struct crypto_kpp *tfm;232232+ struct kpp_request *req;233233+ uint8_t *secret;234234+ uint8_t *outbuf;250235 struct kdf_sdesc *sdesc = NULL;251236252237 if (!params || (!buffer && buflen)) {253238 ret = -EINVAL;254254- goto out;239239+ goto out1;255240 }256241 if (copy_from_user(&pcopy, params, sizeof(pcopy)) != 0) {257242 ret = -EFAULT;258258- goto out;243243+ goto out1;259244 }260245261246 if (kdfcopy) {···269244 if (buflen > KEYCTL_KDF_MAX_OUTPUT_LEN ||270245 kdfcopy->otherinfolen > KEYCTL_KDF_MAX_OI_LEN) {271246 ret = -EMSGSIZE;272272- goto out;247247+ goto out1;273248 }274249275250 /* get KDF name string */276251 hashname = strndup_user(kdfcopy->hashname, CRYPTO_MAX_ALG_NAME);277252 if (IS_ERR(hashname)) {278253 ret = PTR_ERR(hashname);279279- goto out;254254+ goto out1;280255 }281256282257 /* allocate KDF from the kernel crypto API */283258 ret = kdf_alloc(&sdesc, hashname);284259 kfree(hashname);285260 if (ret)286286- goto out;261261+ goto out1;287262 }288263289289- /*290290- * If the caller requests postprocessing with a KDF, allow an291291- * arbitrary output buffer size since the KDF ensures proper truncation.292292- */293293- keylen = mpi_from_key(pcopy.prime, kdfcopy ? 
SIZE_MAX : buflen, &prime);294294- if (keylen < 0 || !prime) {295295- /* buflen == 0 may be used to query the required buffer size,296296- * which is the prime key length.297297- */298298- ret = keylen;299299- goto out;264264+ memset(&dh_inputs, 0, sizeof(dh_inputs));265265+266266+ dlen = dh_data_from_key(pcopy.prime, &dh_inputs.p);267267+ if (dlen < 0) {268268+ ret = dlen;269269+ goto out1;300270 }271271+ dh_inputs.p_size = dlen;301272302302- /* The result is never longer than the prime */303303- resultlen = keylen;304304-305305- keylen = mpi_from_key(pcopy.base, SIZE_MAX, &base);306306- if (keylen < 0 || !base) {307307- ret = keylen;308308- goto error1;273273+ dlen = dh_data_from_key(pcopy.base, &dh_inputs.g);274274+ if (dlen < 0) {275275+ ret = dlen;276276+ goto out2;309277 }278278+ dh_inputs.g_size = dlen;310279311311- keylen = mpi_from_key(pcopy.private, SIZE_MAX, &private);312312- if (keylen < 0 || !private) {313313- ret = keylen;314314- goto error2;280280+ dlen = dh_data_from_key(pcopy.private, &dh_inputs.key);281281+ if (dlen < 0) {282282+ ret = dlen;283283+ goto out2;315284 }285285+ dh_inputs.key_size = dlen;316286317317- result = mpi_alloc(0);318318- if (!result) {287287+ secretlen = crypto_dh_key_len(&dh_inputs);288288+ secret = kmalloc(secretlen, GFP_KERNEL);289289+ if (!secret) {319290 ret = -ENOMEM;320320- goto error3;291291+ goto out2;321292 }322322-323323- /* allocate space for DH shared secret and SP800-56A otherinfo */324324- kbuf = kmalloc(kdfcopy ? (resultlen + kdfcopy->otherinfolen) : resultlen,325325- GFP_KERNEL);326326- if (!kbuf) {327327- ret = -ENOMEM;328328- goto error4;329329- }330330-331331- /*332332- * Concatenate SP800-56A otherinfo past DH shared secret -- the333333- * input to the KDF is (DH shared secret || otherinfo)334334- */335335- if (kdfcopy && kdfcopy->otherinfo &&336336- copy_from_user(kbuf + resultlen, kdfcopy->otherinfo,337337- kdfcopy->otherinfolen) != 0) {338338- ret = -EFAULT;339339- goto error5;340340- }341341-342342- ret = do_dh(result, base, private, prime);293293+ ret = crypto_dh_encode_key(secret, secretlen, &dh_inputs);343294 if (ret)344344- goto error5;295295+ goto out3;345296346346- ret = mpi_read_buffer(result, kbuf, resultlen, &nbytes, NULL);347347- if (ret != 0)348348- goto error5;297297+ tfm = crypto_alloc_kpp("dh", CRYPTO_ALG_TYPE_KPP, 0);298298+ if (IS_ERR(tfm)) {299299+ ret = PTR_ERR(tfm);300300+ goto out3;301301+ }302302+303303+ ret = crypto_kpp_set_secret(tfm, secret, secretlen);304304+ if (ret)305305+ goto out4;306306+307307+ outlen = crypto_kpp_maxsize(tfm);308308+309309+ if (!kdfcopy) {310310+ /*311311+ * When not using a KDF, buflen 0 is used to read the312312+ * required buffer length313313+ */314314+ if (buflen == 0) {315315+ ret = outlen;316316+ goto out4;317317+ } else if (outlen > buflen) {318318+ ret = -EOVERFLOW;319319+ goto out4;320320+ }321321+ }322322+323323+ outbuf = kzalloc(kdfcopy ? 
(outlen + kdfcopy->otherinfolen) : outlen,324324+ GFP_KERNEL);325325+ if (!outbuf) {326326+ ret = -ENOMEM;327327+ goto out4;328328+ }329329+330330+ sg_init_one(&outsg, outbuf, outlen);331331+332332+ req = kpp_request_alloc(tfm, GFP_KERNEL);333333+ if (!req) {334334+ ret = -ENOMEM;335335+ goto out5;336336+ }337337+338338+ kpp_request_set_input(req, NULL, 0);339339+ kpp_request_set_output(req, &outsg, outlen);340340+ init_completion(&compl.completion);341341+ kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |342342+ CRYPTO_TFM_REQ_MAY_SLEEP,343343+ dh_crypto_done, &compl);344344+345345+ /*346346+ * For DH, generate_public_key and generate_shared_secret are347347+ * the same calculation348348+ */349349+ ret = crypto_kpp_generate_public_key(req);350350+ if (ret == -EINPROGRESS) {351351+ wait_for_completion(&compl.completion);352352+ ret = compl.err;353353+ if (ret)354354+ goto out6;355355+ }349356350357 if (kdfcopy) {351351- ret = keyctl_dh_compute_kdf(sdesc, buffer, buflen, kbuf,352352- resultlen + kdfcopy->otherinfolen);353353- } else {354354- ret = nbytes;355355- if (copy_to_user(buffer, kbuf, nbytes) != 0)358358+ /*359359+ * Concatenate SP800-56A otherinfo past DH shared secret -- the360360+ * input to the KDF is (DH shared secret || otherinfo)361361+ */362362+ if (copy_from_user(outbuf + req->dst_len, kdfcopy->otherinfo,363363+ kdfcopy->otherinfolen) != 0) {356364 ret = -EFAULT;365365+ goto out6;366366+ }367367+368368+ ret = keyctl_dh_compute_kdf(sdesc, buffer, buflen, outbuf,369369+ req->dst_len + kdfcopy->otherinfolen,370370+ outlen - req->dst_len);371371+ } else if (copy_to_user(buffer, outbuf, req->dst_len) == 0) {372372+ ret = req->dst_len;373373+ } else {374374+ ret = -EFAULT;357375 }358376359359-error5:360360- kzfree(kbuf);361361-error4:362362- mpi_free(result);363363-error3:364364- mpi_free(private);365365-error2:366366- mpi_free(base);367367-error1:368368- mpi_free(prime);369369-out:377377+out6:378378+ kpp_request_free(req);379379+out5:380380+ kzfree(outbuf);381381+out4:382382+ crypto_free_kpp(tfm);383383+out3:384384+ kzfree(secret);385385+out2:386386+ dh_free_data(&dh_inputs);387387+out1:370388 kdf_dealloc(sdesc);371389 return ret;372390}
+75-131
security/keys/encrypted-keys/encrypted.c
···
 #include <linux/scatterlist.h>
 #include <linux/ctype.h>
 #include <crypto/aes.h>
+#include <crypto/algapi.h>
 #include <crypto/hash.h>
 #include <crypto/sha.h>
 #include <crypto/skcipher.h>
···
 #define MAX_DATA_SIZE 4096
 #define MIN_DATA_SIZE 20

-struct sdesc {
-	struct shash_desc shash;
-	char ctx[];
-};
-
-static struct crypto_shash *hashalg;
-static struct crypto_shash *hmacalg;
+static struct crypto_shash *hash_tfm;

 enum {
 	Opt_err = -1, Opt_new, Opt_load, Opt_update
···
  */
 static int valid_master_desc(const char *new_desc, const char *orig_desc)
 {
-	if (!memcmp(new_desc, KEY_TRUSTED_PREFIX, KEY_TRUSTED_PREFIX_LEN)) {
-		if (strlen(new_desc) == KEY_TRUSTED_PREFIX_LEN)
-			goto out;
-		if (orig_desc)
-			if (memcmp(new_desc, orig_desc, KEY_TRUSTED_PREFIX_LEN))
-				goto out;
-	} else if (!memcmp(new_desc, KEY_USER_PREFIX, KEY_USER_PREFIX_LEN)) {
-		if (strlen(new_desc) == KEY_USER_PREFIX_LEN)
-			goto out;
-		if (orig_desc)
-			if (memcmp(new_desc, orig_desc, KEY_USER_PREFIX_LEN))
-				goto out;
-	} else
-		goto out;
+	int prefix_len;
+
+	if (!strncmp(new_desc, KEY_TRUSTED_PREFIX, KEY_TRUSTED_PREFIX_LEN))
+		prefix_len = KEY_TRUSTED_PREFIX_LEN;
+	else if (!strncmp(new_desc, KEY_USER_PREFIX, KEY_USER_PREFIX_LEN))
+		prefix_len = KEY_USER_PREFIX_LEN;
+	else
+		return -EINVAL;
+
+	if (!new_desc[prefix_len])
+		return -EINVAL;
+
+	if (orig_desc && strncmp(new_desc, orig_desc, prefix_len))
+		return -EINVAL;
+
 	return 0;
-out:
-	return -EINVAL;
 }
···
 	return ukey;
 }

-static struct sdesc *alloc_sdesc(struct crypto_shash *alg)
+static int calc_hash(struct crypto_shash *tfm, u8 *digest,
+		     const u8 *buf, unsigned int buflen)
 {
-	struct sdesc *sdesc;
-	int size;
+	SHASH_DESC_ON_STACK(desc, tfm);
+	int err;

-	size = sizeof(struct shash_desc) + crypto_shash_descsize(alg);
-	sdesc = kmalloc(size, GFP_KERNEL);
-	if (!sdesc)
-		return ERR_PTR(-ENOMEM);
-	sdesc->shash.tfm = alg;
-	sdesc->shash.flags = 0x0;
-	return sdesc;
+	desc->tfm = tfm;
+	desc->flags = 0;
+
+	err = crypto_shash_digest(desc, buf, buflen, digest);
+	shash_desc_zero(desc);
+	return err;
 }

 static int calc_hmac(u8 *digest, const u8 *key, unsigned int keylen,
 		     const u8 *buf, unsigned int buflen)
 {
-	struct sdesc *sdesc;
-	int ret;
+	struct crypto_shash *tfm;
+	int err;

-	sdesc = alloc_sdesc(hmacalg);
-	if (IS_ERR(sdesc)) {
-		pr_info("encrypted_key: can't alloc %s\n", hmac_alg);
-		return PTR_ERR(sdesc);
+	tfm = crypto_alloc_shash(hmac_alg, 0, CRYPTO_ALG_ASYNC);
+	if (IS_ERR(tfm)) {
+		pr_err("encrypted_key: can't alloc %s transform: %ld\n",
+		       hmac_alg, PTR_ERR(tfm));
+		return PTR_ERR(tfm);
 	}

-	ret = crypto_shash_setkey(hmacalg, key, keylen);
-	if (!ret)
-		ret = crypto_shash_digest(&sdesc->shash, buf, buflen, digest);
-	kfree(sdesc);
-	return ret;
-}
-
-static int calc_hash(u8 *digest, const u8 *buf, unsigned int buflen)
-{
-	struct sdesc *sdesc;
-	int ret;
-
-	sdesc = alloc_sdesc(hashalg);
-	if (IS_ERR(sdesc)) {
-		pr_info("encrypted_key: can't alloc %s\n", hash_alg);
-		return PTR_ERR(sdesc);
-	}
-
-	ret = crypto_shash_digest(&sdesc->shash, buf, buflen, digest);
-	kfree(sdesc);
-	return ret;
+	err = crypto_shash_setkey(tfm, key, keylen);
+	if (!err)
+		err = calc_hash(tfm, digest, buf, buflen);
+	crypto_free_shash(tfm);
+	return err;
 }

 enum derived_key_type { ENC_KEY, AUTH_KEY };
···
 		derived_buf_len = HASH_SIZE;

 	derived_buf = kzalloc(derived_buf_len, GFP_KERNEL);
-	if (!derived_buf) {
-		pr_err("encrypted_key: out of memory\n");
+	if (!derived_buf)
 		return -ENOMEM;
-	}
+
 	if (key_type)
 		strcpy(derived_buf, "AUTH_KEY");
 	else
···
 	memcpy(derived_buf + strlen(derived_buf) + 1, master_key,
 	       master_keylen);
-	ret = calc_hash(derived_key, derived_buf, derived_buf_len);
-	kfree(derived_buf);
+	ret = calc_hash(hash_tfm, derived_key, derived_buf, derived_buf_len);
+	kzfree(derived_buf);
 	return ret;
 }
···
 	struct skcipher_request *req;
 	unsigned int encrypted_datalen;
 	u8 iv[AES_BLOCK_SIZE];
-	unsigned int padlen;
-	char pad[16];
 	int ret;

 	encrypted_datalen = roundup(epayload->decrypted_datalen, blksize);
-	padlen = encrypted_datalen - epayload->decrypted_datalen;

 	req = init_skcipher_req(derived_key, derived_keylen);
 	ret = PTR_ERR(req);
···
 		goto out;
 	dump_decrypted_data(epayload);

-	memset(pad, 0, sizeof pad);
 	sg_init_table(sg_in, 2);
 	sg_set_buf(&sg_in[0], epayload->decrypted_data,
 		   epayload->decrypted_datalen);
-	sg_set_buf(&sg_in[1], pad, padlen);
+	sg_set_page(&sg_in[1], ZERO_PAGE(0), AES_BLOCK_SIZE, 0);

 	sg_init_table(sg_out, 1);
 	sg_set_buf(sg_out, epayload->encrypted_data, encrypted_datalen);
···
 	if (!ret)
 		dump_hmac(NULL, digest, HASH_SIZE);
 out:
+	memzero_explicit(derived_key, sizeof(derived_key));
 	return ret;
 }
···
 	ret = calc_hmac(digest, derived_key, sizeof derived_key, p, len);
 	if (ret < 0)
 		goto out;
-	ret = memcmp(digest, epayload->format + epayload->datablob_len,
-		     sizeof digest);
+	ret = crypto_memneq(digest, epayload->format + epayload->datablob_len,
+			    sizeof(digest));
 	if (ret) {
 		ret = -EINVAL;
 		dump_hmac("datablob",
···
 		dump_hmac("calc", digest, HASH_SIZE);
 	}
 out:
+	memzero_explicit(derived_key, sizeof(derived_key));
 	return ret;
 }
···
 	struct skcipher_request *req;
 	unsigned int encrypted_datalen;
 	u8 iv[AES_BLOCK_SIZE];
-	char pad[16];
+	u8 *pad;
 	int ret;
+
+	/* Throwaway buffer to hold the unused zero padding at the end */
+	pad = kmalloc(AES_BLOCK_SIZE, GFP_KERNEL);
+	if (!pad)
+		return -ENOMEM;

 	encrypted_datalen = roundup(epayload->decrypted_datalen, blksize);
 	req = init_skcipher_req(derived_key, derived_keylen);
···
 		goto out;
 	dump_encrypted_data(epayload, encrypted_datalen);

-	memset(pad, 0, sizeof pad);
 	sg_init_table(sg_in, 1);
 	sg_init_table(sg_out, 2);
 	sg_set_buf(sg_in, epayload->encrypted_data, encrypted_datalen);
 	sg_set_buf(&sg_out[0], epayload->decrypted_data,
 		   epayload->decrypted_datalen);
-	sg_set_buf(&sg_out[1], pad, sizeof pad);
+	sg_set_buf(&sg_out[1], pad, AES_BLOCK_SIZE);

 	memcpy(iv, epayload->iv, sizeof(iv));
 	skcipher_request_set_crypt(req, sg_in, sg_out, encrypted_datalen, iv);
···
 		goto out;
 	dump_decrypted_data(epayload);
 out:
+	kfree(pad);
 	return ret;
 }
···
 out:
 	up_read(&mkey->sem);
 	key_put(mkey);
+	memzero_explicit(derived_key, sizeof(derived_key));
 	return ret;
 }
···
 	ret = encrypted_init(epayload, key->description, format, master_desc,
 			     decrypted_datalen, hex_encoded_iv);
 	if (ret < 0) {
-		kfree(epayload);
+		kzfree(epayload);
 		goto out;
 	}

 	rcu_assign_keypointer(key, epayload);
 out:
-	kfree(datablob);
+	kzfree(datablob);
 	return ret;
 }
···
 	struct encrypted_key_payload *epayload;

 	epayload = container_of(rcu, struct encrypted_key_payload, rcu);
-	memset(epayload->decrypted_data, 0, epayload->decrypted_datalen);
-	kfree(epayload);
+	kzfree(epayload);
 }

 /*
···
 	rcu_assign_keypointer(key, new_epayload);
 	call_rcu(&epayload->rcu, encrypted_rcu_free);
 out:
-	kfree(buf);
+	kzfree(buf);
 	return ret;
 }
···

 	up_read(&mkey->sem);
 	key_put(mkey);
+	memzero_explicit(derived_key, sizeof(derived_key));

 	if (copy_to_user(buffer, ascii_buf, asciiblob_len) != 0)
 		ret = -EFAULT;
-	kfree(ascii_buf);
+	kzfree(ascii_buf);

 	return asciiblob_len;
 out:
 	up_read(&mkey->sem);
 	key_put(mkey);
+	memzero_explicit(derived_key, sizeof(derived_key));
 	return ret;
 }

 /*
- * encrypted_destroy - before freeing the key, clear the decrypted data
- *
- * Before freeing the key, clear the memory containing the decrypted
- * key data.
+ * encrypted_destroy - clear and free the key's payload
  */
 static void encrypted_destroy(struct key *key)
 {
-	struct encrypted_key_payload *epayload = key->payload.data[0];
-
-	if (!epayload)
-		return;
-
-	memzero_explicit(epayload->decrypted_data, epayload->decrypted_datalen);
-	kfree(key->payload.data[0]);
+	kzfree(key->payload.data[0]);
 }

 struct key_type key_type_encrypted = {
···
 };
 EXPORT_SYMBOL_GPL(key_type_encrypted);

-static void encrypted_shash_release(void)
-{
-	if (hashalg)
-		crypto_free_shash(hashalg);
-	if (hmacalg)
-		crypto_free_shash(hmacalg);
-}
-
-static int __init encrypted_shash_alloc(void)
-{
-	int ret;
-
-	hmacalg = crypto_alloc_shash(hmac_alg, 0, CRYPTO_ALG_ASYNC);
-	if (IS_ERR(hmacalg)) {
-		pr_info("encrypted_key: could not allocate crypto %s\n",
-			hmac_alg);
-		return PTR_ERR(hmacalg);
-	}
-
-	hashalg = crypto_alloc_shash(hash_alg, 0, CRYPTO_ALG_ASYNC);
-	if (IS_ERR(hashalg)) {
-		pr_info("encrypted_key: could not allocate crypto %s\n",
-			hash_alg);
-		ret = PTR_ERR(hashalg);
-		goto hashalg_fail;
-	}
-
-	return 0;
-
-hashalg_fail:
-	crypto_free_shash(hmacalg);
-	return ret;
-}
-
 static int __init init_encrypted(void)
 {
 	int ret;

-	ret = encrypted_shash_alloc();
-	if (ret < 0)
-		return ret;
+	hash_tfm = crypto_alloc_shash(hash_alg, 0, CRYPTO_ALG_ASYNC);
+	if (IS_ERR(hash_tfm)) {
+		pr_err("encrypted_key: can't allocate %s transform: %ld\n",
+		       hash_alg, PTR_ERR(hash_tfm));
+		return PTR_ERR(hash_tfm);
+	}
+
 	ret = aes_get_sizes();
 	if (ret < 0)
 		goto out;
···
 		goto out;
 	return 0;
 out:
-	encrypted_shash_release();
+	crypto_free_shash(hash_tfm);
 	return ret;

 }

 static void __exit cleanup_encrypted(void)
 {
-	encrypted_shash_release();
+	crypto_free_shash(hash_tfm);
 	unregister_key_type(&key_type_encrypted);
 }
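The rewrite above folds the old heap-allocated sdesc helper into the SHASH_DESC_ON_STACK() idiom: the descriptor lives on the stack, sized for the transform, and is wiped after use. A minimal sketch of that pattern in isolation (`digest_example` is a hypothetical helper name, not from the patch):

	#include <crypto/hash.h>

	/* One-shot digest with a stack-allocated shash descriptor */
	static int digest_example(struct crypto_shash *tfm, const u8 *buf,
				  unsigned int buflen, u8 *digest)
	{
		SHASH_DESC_ON_STACK(desc, tfm);	/* sized for tfm's descsize */
		int err;

		desc->tfm = tfm;
		desc->flags = 0;	/* synchronous; no CRYPTO_TFM_REQ_MAY_SLEEP */

		err = crypto_shash_digest(desc, buf, buflen, digest);
		shash_desc_zero(desc);	/* wipe key-derived state off the stack */
		return err;
	}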
security/keys/key.c
···
 	goto error;

 found:
-	/* pretend it doesn't exist if it is awaiting deletion */
-	if (refcount_read(&key->usage) == 0)
-		goto not_found;
-
-	/* this races with key_put(), but that doesn't matter since key_put()
-	 * doesn't actually change the key
+	/* A key is allowed to be looked up only if someone still owns a
+	 * reference to it - otherwise it's awaiting the gc.
 	 */
-	__key_get(key);
+	if (!refcount_inc_not_zero(&key->usage))
+		goto not_found;

 error:
 	spin_unlock(&key_serial_lock);
···
 	/* the key must be writable */
 	ret = key_permission(key_ref, KEY_NEED_WRITE);
 	if (ret < 0)
-		goto error;
+		return ret;

 	/* attempt to update it if supported */
-	ret = -EOPNOTSUPP;
 	if (!key->type->update)
-		goto error;
+		return -EOPNOTSUPP;

 	memset(&prep, 0, sizeof(prep));
 	prep.data = payload;
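The key_lookup() change replaces a read-then-get pair with refcount_inc_not_zero(), so a key whose refcount has already reached zero can never be resurrected by a racing lookup. The same pattern, sketched generically (the names `obj_lock`, `obj_find()` and `->usage` here are illustrative, not from the patch):

	/* Lookup that must not revive a dying object */
	spin_lock(&obj_lock);
	obj = obj_find(id);
	if (obj && !refcount_inc_not_zero(&obj->usage))
		obj = NULL;	/* refcount already hit zero: object is dying */
	spin_unlock(&obj_lock);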
+11-5
security/keys/keyctl.c
···
 	/* pull the payload in if one was supplied */
 	payload = NULL;

-	if (_payload) {
+	if (plen) {
 		ret = -ENOMEM;
 		payload = kvmalloc(plen, GFP_KERNEL);
 		if (!payload)
···

 	key_ref_put(keyring_ref);
 error3:
-	kvfree(payload);
+	if (payload) {
+		memzero_explicit(payload, plen);
+		kvfree(payload);
+	}
 error2:
 	kfree(description);
 error:
···

 	/* pull the payload in if one was supplied */
 	payload = NULL;
-	if (_payload) {
+	if (plen) {
 		ret = -ENOMEM;
 		payload = kmalloc(plen, GFP_KERNEL);
 		if (!payload)
···

 	key_ref_put(key_ref);
 error2:
-	kfree(payload);
+	kzfree(payload);
 error:
 	return ret;
 }
···
 		keyctl_change_reqkey_auth(NULL);

 error2:
-	kvfree(payload);
+	if (payload) {
+		memzero_explicit(payload, plen);
+		kvfree(payload);
+	}
 error:
 	return ret;
 }
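Note why the kvmalloc() paths are open-coded above: kzfree() only handles kmalloc() memory, so buffers that may have come from vmalloc() need an explicit memzero_explicit() before kvfree(). The pairing could be captured in a helper like this hypothetical kvzfree() (not part of the patch):

	/* Hypothetical helper: zeroise-then-free for kvmalloc()ed buffers,
	 * mirroring what the patch open-codes at each error label. */
	static inline void kvzfree(void *ptr, size_t len)
	{
		if (!ptr)
			return;
		memzero_explicit(ptr, len);	/* not elided by the optimizer */
		kvfree(ptr);			/* handles kmalloc and vmalloc memory */
	}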
+6-6
security/keys/keyring.c
···
 	 * Non-keyrings avoid the leftmost branch of the root entirely (root
 	 * slots 1-15).
 	 */
-	ptr = ACCESS_ONCE(keyring->keys.root);
+	ptr = READ_ONCE(keyring->keys.root);
 	if (!ptr)
 		goto not_this_keyring;
···
 		if ((shortcut->index_key[0] & ASSOC_ARRAY_FAN_MASK) != 0)
 			goto not_this_keyring;

-		ptr = ACCESS_ONCE(shortcut->next_node);
+		ptr = READ_ONCE(shortcut->next_node);
 		node = assoc_array_ptr_to_node(ptr);
 		goto begin_node;
 	}
···
 	if (assoc_array_ptr_is_shortcut(ptr)) {
 		shortcut = assoc_array_ptr_to_shortcut(ptr);
 		smp_read_barrier_depends();
-		ptr = ACCESS_ONCE(shortcut->next_node);
+		ptr = READ_ONCE(shortcut->next_node);
 		BUG_ON(!assoc_array_ptr_is_node(ptr));
 	}
 	node = assoc_array_ptr_to_node(ptr);
···
 ascend_to_node:
 	/* Go through the slots in a node */
 	for (; slot < ASSOC_ARRAY_FAN_OUT; slot++) {
-		ptr = ACCESS_ONCE(node->slots[slot]);
+		ptr = READ_ONCE(node->slots[slot]);

 		if (assoc_array_ptr_is_meta(ptr) && node->back_pointer)
 			goto descend_to_node;
···
 	/* We've dealt with all the slots in the current node, so now we need
 	 * to ascend to the parent and continue processing there.
 	 */
-	ptr = ACCESS_ONCE(node->back_pointer);
+	ptr = READ_ONCE(node->back_pointer);
 	slot = node->parent_slot;

 	if (ptr && assoc_array_ptr_is_shortcut(ptr)) {
 		shortcut = assoc_array_ptr_to_shortcut(ptr);
 		smp_read_barrier_depends();
-		ptr = ACCESS_ONCE(shortcut->back_pointer);
+		ptr = READ_ONCE(shortcut->back_pointer);
 		slot = shortcut->parent_slot;
 	}
 	if (!ptr)
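ACCESS_ONCE() is being retired tree-wide in favour of READ_ONCE()/WRITE_ONCE(), which also work on non-scalar types and make the intent explicit: each annotated load happens exactly once and may not be torn or refetched by the compiler. A generic sketch of the lockless-walk idiom the keyring code relies on (`struct item` and `sum_items` are illustrative, not kernel code):

	struct item {
		struct item *next;
		int val;
	};

	/* Lockless reader: every pointer is loaded exactly once, so a
	 * concurrent updater cannot make us refetch and see two values. */
	static int sum_items(struct item *head)
	{
		struct item *i;
		int sum = 0;

		for (i = READ_ONCE(head); i; i = READ_ONCE(i->next))
			sum += i->val;
		return sum;
	}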
+4-3
security/keys/process_keys.c
···
 		ret = PTR_ERR(keyring);
 		goto error2;
 	} else if (keyring == new->session_keyring) {
-		key_put(keyring);
 		ret = 0;
-		goto error2;
+		goto error3;
 	}

 	/* we've got a keyring - now to install it */
 	ret = install_session_keyring_to_cred(new, keyring);
 	if (ret < 0)
-		goto error2;
+		goto error3;

 	commit_creds(new);
 	mutex_unlock(&key_session_mutex);
···
 okay:
 	return ret;

+error3:
+	key_put(keyring);
 error2:
 	mutex_unlock(&key_session_mutex);
 error:
sound/core/pcm_lib.c
···
 	struct snd_pcm_substream *substream;
 	const struct snd_pcm_chmap_elem *map;

-	if (snd_BUG_ON(!info->chmap))
+	if (!info->chmap)
 		return -EINVAL;
 	substream = snd_pcm_chmap_substream(info, idx);
 	if (!substream)
···
 	unsigned int __user *dst;
 	int c, count = 0;

-	if (snd_BUG_ON(!info->chmap))
+	if (!info->chmap)
 		return -EINVAL;
 	if (size < 8)
 		return -ENOMEM;
+5-2
sound/core/timer.c
···
 	if (err < 0)
 		goto __err;

+	tu->qhead = tu->qtail = tu->qused = 0;
 	kfree(tu->queue);
 	tu->queue = NULL;
 	kfree(tu->tqueue);
···

 	tu = file->private_data;
 	unit = tu->tread ? sizeof(struct snd_timer_tread) : sizeof(struct snd_timer_read);
+	mutex_lock(&tu->ioctl_lock);
 	spin_lock_irq(&tu->qlock);
 	while ((long)count - result >= unit) {
 		while (!tu->qused) {
···
 			add_wait_queue(&tu->qchange_sleep, &wait);

 			spin_unlock_irq(&tu->qlock);
+			mutex_unlock(&tu->ioctl_lock);
 			schedule();
+			mutex_lock(&tu->ioctl_lock);
 			spin_lock_irq(&tu->qlock);

 			remove_wait_queue(&tu->qchange_sleep, &wait);
···
 		tu->qused--;
 		spin_unlock_irq(&tu->qlock);

-		mutex_lock(&tu->ioctl_lock);
 		if (tu->tread) {
 			if (copy_to_user(buffer, &tu->tqueue[qhead],
 					 sizeof(struct snd_timer_tread)))
···
 					 sizeof(struct snd_timer_read)))
 				err = -EFAULT;
 		}
-		mutex_unlock(&tu->ioctl_lock);

 		spin_lock_irq(&tu->qlock);
 		if (err < 0)
···
 	}
 _error:
 	spin_unlock_irq(&tu->qlock);
+	mutex_unlock(&tu->ioctl_lock);
 	return result > 0 ? result : err;
 }
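With the read path now holding tu->ioctl_lock for its whole duration, it must drop both locks before blocking; calling schedule() with the mutex held would deadlock against the ioctl side. The unlock/sleep/relock shape, reduced to a skeleton (`waitq`, `lock` and `ioctl_mutex` stand in for tu->qchange_sleep, tu->qlock and tu->ioctl_lock):

	add_wait_queue(&waitq, &wait);

	spin_unlock_irq(&lock);
	mutex_unlock(&ioctl_mutex);	/* never schedule() with either lock held */
	schedule();
	mutex_lock(&ioctl_mutex);	/* retake in the same order as the fast path */
	spin_lock_irq(&lock);

	remove_wait_queue(&waitq, &wait);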
+6-2
sound/firewire/amdtp-stream.c
···
 		cycle = increment_cycle_count(cycle, 1);
 		if (s->handle_packet(s, 0, cycle, i) < 0) {
 			s->packet_index = -1;
-			amdtp_stream_pcm_abort(s);
+			if (in_interrupt())
+				amdtp_stream_pcm_abort(s);
+			WRITE_ONCE(s->pcm_buffer_pointer, SNDRV_PCM_POS_XRUN);
 			return;
 		}
 	}
···
 	/* Queueing error or detecting invalid payload. */
 	if (i < packets) {
 		s->packet_index = -1;
-		amdtp_stream_pcm_abort(s);
+		if (in_interrupt())
+			amdtp_stream_pcm_abort(s);
+		WRITE_ONCE(s->pcm_buffer_pointer, SNDRV_PCM_POS_XRUN);
 		return;
 	}
+1-1
sound/firewire/amdtp-stream.h
···
 	/* For a PCM substream processing. */
 	struct snd_pcm_substream *pcm;
 	struct tasklet_struct period_tasklet;
-	unsigned int pcm_buffer_pointer;
+	snd_pcm_uframes_t pcm_buffer_pointer;
 	unsigned int pcm_period_pointer;

 	/* To wait for first packet. */
sound/pci/hda/hda_controller.c
···
 /* configure each codec instance */
 int azx_codec_configure(struct azx *chip)
 {
-	struct hda_codec *codec;
-	list_for_each_codec(codec, &chip->bus) {
+	struct hda_codec *codec, *next;
+
+	/* use _safe version here since snd_hda_codec_configure() deregisters
+	 * the device upon error and deletes itself from the bus list.
+	 */
+	list_for_each_codec_safe(codec, next, &chip->bus) {
 		snd_hda_codec_configure(codec);
 	}
 	return 0;
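list_for_each_codec_safe() is the HDA-bus spelling of the standard list_for_each_entry_safe() idiom: a lookahead cursor is fetched before the loop body runs, so the iteration stays valid even when the current entry unlinks itself. The generic form (`struct widget` and friends are hypothetical):

	struct widget *w, *tmp;

	/* tmp already points at the next entry, so the body may safely
	 * unlink and free the current one. */
	list_for_each_entry_safe(w, tmp, &widget_list, node) {
		if (widget_is_dead(w)) {
			list_del(&w->node);
			kfree(w);
		}
	}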
sound/soc/soc-core.c
···
 	list_for_each_entry(rtd, &card->rtd_list, list)
 		flush_delayed_work(&rtd->delayed_work);

+	/* free the ALSA card at first; this syncs with pending operations */
+	snd_card_free(card->snd_card);
+
 	/* remove and free each DAI */
 	soc_remove_dai_links(card);
 	soc_remove_pcm_runtimes(card);
···
 	if (card->remove)
 		card->remove(card);

-	snd_card_free(card->snd_card);
 	return 0;
-
 }

 /* removes a socdev */
tools/perf/Documentation/perf-probe.txt
···
  or
  ./perf probe --add='schedule:12 cpu'

- this will add one or more probes which has the name start with "schedule".
+Add one or more probes which has the name start with "schedule".

- Add probes on lines in schedule() function which calls update_rq_clock().
+ ./perf probe schedule*
+ or
+ ./perf probe --add='schedule*'
+
+Add probes on lines in schedule() function which calls update_rq_clock().

  ./perf probe 'schedule;update_rq_clock*'
  or
+1-1
tools/perf/Documentation/perf-script-perl.txt
···
 When perf script is invoked using a trace script, a user-defined
 'handler function' is called for each event in the trace. If there's
 no handler function defined for a given event type, the event is
-ignored (or passed to a 'trace_handled' function, see below) and the
+ignored (or passed to a 'trace_unhandled' function, see below) and the
 next event is processed.

 Most of the event's field values are passed as arguments to the
+9-14
tools/perf/Documentation/perf-script-python.txt
···
 	print "id=%d, args=%s\n" % \
 	(id, args),

-def trace_unhandled(event_name, context, common_cpu, common_secs, common_nsecs,
-		common_pid, common_comm):
-	print_header(event_name, common_cpu, common_secs, common_nsecs,
-	common_pid, common_comm)
+def trace_unhandled(event_name, context, event_fields_dict):
+	print ' '.join(['%s=%s'%(k,str(v))for k,v in sorted(event_fields_dict.items())])

 def print_header(event_name, cpu, secs, nsecs, pid, comm):
 	print "%-20s %5u %05u.%09u %8u %-20s " % \
···
 process can be generalized to any tracepoint or set of tracepoints
 you're interested in - basically find the tracepoint(s) you're
 interested in by looking at the list of available events shown by
-'perf list' and/or look in /sys/kernel/debug/tracing events for
+'perf list' and/or look in /sys/kernel/debug/tracing/events/ for
 detailed event and field info, record the corresponding trace data
 using 'perf record', passing it the list of interesting events,
 generate a skeleton script using 'perf script -g python' and modify the
···
 scripts listed by the 'perf script -l' command e.g.:

 ----
-root@tropicana:~# perf script -l
+# perf script -l
 List of available trace scripts:
   wakeup-latency system-wide min/max/avg wakeup latency
   rw-by-file <comm> r/w activity for a program, by file
···

 ----
 # ls -al kernel-source/tools/perf/scripts/python
-
-root@tropicana:/home/trz/src/tip# ls -al tools/perf/scripts/python
 total 32
 drwxr-xr-x 4 trz trz 4096 2010-01-26 22:30 .
 drwxr-xr-x 4 trz trz 4096 2010-01-26 22:29 ..
···
 should show a new entry for your script:

 ----
-root@tropicana:~# perf script -l
+# perf script -l
 List of available trace scripts:
   wakeup-latency system-wide min/max/avg wakeup latency
   rw-by-file <comm> r/w activity for a program, by file
···
 When perf script is invoked using a trace script, a user-defined
 'handler function' is called for each event in the trace. If there's
 no handler function defined for a given event type, the event is
-ignored (or passed to a 'trace_handled' function, see below) and the
+ignored (or passed to a 'trace_unhandled' function, see below) and the
 next event is processed.

 Most of the event's field values are passed as arguments to the
···
 gives scripts a chance to do setup tasks:

 ----
-def trace_begin:
+def trace_begin():
 	pass
 ----
···
 as display results:

 ----
-def trace_end:
+def trace_end():
 	pass
 ----
···
 of common arguments are passed into it:

 ----
-def trace_unhandled(event_name, context, common_cpu, common_secs,
-		common_nsecs, common_pid, common_comm):
+def trace_unhandled(event_name, context, event_fields_dict):
 	pass
 ----
+19-19
tools/perf/Makefile.config
···

 include $(srctree)/tools/scripts/Makefile.arch

-$(call detected_var,ARCH)
+$(call detected_var,SRCARCH)

 NO_PERF_REGS := 1

 # Additional ARCH settings for ppc
-ifeq ($(ARCH),powerpc)
+ifeq ($(SRCARCH),powerpc)
   NO_PERF_REGS := 0
   LIBUNWIND_LIBS := -lunwind -lunwind-ppc64
 endif

 # Additional ARCH settings for x86
-ifeq ($(ARCH),x86)
+ifeq ($(SRCARCH),x86)
   $(call detected,CONFIG_X86)
   ifeq (${IS_64_BIT}, 1)
     CFLAGS += -DHAVE_ARCH_X86_64_SUPPORT -DHAVE_SYSCALL_TABLE -I$(OUTPUT)arch/x86/include/generated
···
   NO_PERF_REGS := 0
 endif

-ifeq ($(ARCH),arm)
+ifeq ($(SRCARCH),arm)
   NO_PERF_REGS := 0
   LIBUNWIND_LIBS = -lunwind -lunwind-arm
 endif

-ifeq ($(ARCH),arm64)
+ifeq ($(SRCARCH),arm64)
   NO_PERF_REGS := 0
   LIBUNWIND_LIBS = -lunwind -lunwind-aarch64
 endif
···
 # Disable it on all other architectures in case libdw unwind
 # support is detected in system. Add supported architectures
 # to the check.
-ifneq ($(ARCH),$(filter $(ARCH),x86 arm))
+ifneq ($(SRCARCH),$(filter $(SRCARCH),x86 arm))
   NO_LIBDW_DWARF_UNWIND := 1
 endif
···
 FEATURE_CHECK_CFLAGS-libbabeltrace := $(LIBBABELTRACE_CFLAGS)
 FEATURE_CHECK_LDFLAGS-libbabeltrace := $(LIBBABELTRACE_LDFLAGS) -lbabeltrace-ctf

-FEATURE_CHECK_CFLAGS-bpf = -I. -I$(srctree)/tools/include -I$(srctree)/tools/arch/$(ARCH)/include/uapi -I$(srctree)/tools/include/uapi
+FEATURE_CHECK_CFLAGS-bpf = -I. -I$(srctree)/tools/include -I$(srctree)/tools/arch/$(SRCARCH)/include/uapi -I$(srctree)/tools/include/uapi
 # include ARCH specific config
--include $(src-perf)/arch/$(ARCH)/Makefile
+-include $(src-perf)/arch/$(SRCARCH)/Makefile

 ifdef PERF_HAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
   CFLAGS += -DHAVE_ARCH_REGS_QUERY_REGISTER_OFFSET
···
 endif

 INC_FLAGS += -I$(src-perf)/util/include
-INC_FLAGS += -I$(src-perf)/arch/$(ARCH)/include
+INC_FLAGS += -I$(src-perf)/arch/$(SRCARCH)/include
 INC_FLAGS += -I$(srctree)/tools/include/uapi
 INC_FLAGS += -I$(srctree)/tools/include/
-INC_FLAGS += -I$(srctree)/tools/arch/$(ARCH)/include/uapi
-INC_FLAGS += -I$(srctree)/tools/arch/$(ARCH)/include/
-INC_FLAGS += -I$(srctree)/tools/arch/$(ARCH)/
+INC_FLAGS += -I$(srctree)/tools/arch/$(SRCARCH)/include/uapi
+INC_FLAGS += -I$(srctree)/tools/arch/$(SRCARCH)/include/
+INC_FLAGS += -I$(srctree)/tools/arch/$(SRCARCH)/

 # $(obj-perf)      for generated common-cmds.h
 # $(obj-perf)/util for generated bison/flex headers
···

   ifndef NO_DWARF
     ifeq ($(origin PERF_HAVE_DWARF_REGS), undefined)
-      msg := $(warning DWARF register mappings have not been defined for architecture $(ARCH), DWARF support disabled);
+      msg := $(warning DWARF register mappings have not been defined for architecture $(SRCARCH), DWARF support disabled);
       NO_DWARF := 1
     else
       CFLAGS += -DHAVE_DWARF_SUPPORT $(LIBDW_CFLAGS)
···
         CFLAGS += -DHAVE_BPF_PROLOGUE
         $(call detected,CONFIG_BPF_PROLOGUE)
       else
-        msg := $(warning BPF prologue is not supported by architecture $(ARCH), missing regs_query_register_offset());
+        msg := $(warning BPF prologue is not supported by architecture $(SRCARCH), missing regs_query_register_offset());
       endif
     else
       msg := $(warning DWARF support is off, BPF prologue is disabled);
···
   endif
 endif

-ifeq ($(ARCH),powerpc)
+ifeq ($(SRCARCH),powerpc)
   ifndef NO_DWARF
     CFLAGS += -DHAVE_SKIP_CALLCHAIN_IDX
   endif
···
 endif

 ifndef NO_LOCAL_LIBUNWIND
-  ifeq ($(ARCH),$(filter $(ARCH),arm arm64))
+  ifeq ($(SRCARCH),$(filter $(SRCARCH),arm arm64))
     $(call feature_check,libunwind-debug-frame)
     ifneq ($(feature-libunwind-debug-frame), 1)
       msg := $(warning No debug_frame support found in libunwind);
···
     NO_PERF_READ_VDSO32 := 1
   endif
 endif
-ifneq ($(ARCH), x86)
+ifneq ($(SRCARCH), x86)
   NO_PERF_READ_VDSOX32 := 1
 endif
 ifndef NO_PERF_READ_VDSOX32
···
 endif

 ifndef NO_AUXTRACE
-  ifeq ($(ARCH),x86)
+  ifeq ($(SRCARCH),x86)
     ifeq ($(feature-get_cpuid), 0)
       msg := $(warning Your gcc lacks the __get_cpuid() builtin, disables support for auxtrace/Intel PT, please install a newer gcc);
       NO_AUXTRACE := 1
···
 ETC_PERFCONFIG = etc/perfconfig
 endif
 ifndef lib
-ifeq ($(ARCH)$(IS_64_BIT), x861)
+ifeq ($(SRCARCH)$(IS_64_BIT), x861)
 lib = lib64
 else
 lib = lib
+1-1
tools/perf/Makefile.perf
···

 ifeq ($(config),0)
 include $(srctree)/tools/scripts/Makefile.arch
--include arch/$(ARCH)/Makefile
+-include arch/$(SRCARCH)/Makefile
 endif

 # The FEATURE_DUMP_EXPORT holds location of the actual
tools/perf/tests/bp_signal.c
···
 	return count1 == 1 && overflows == 3 && count2 == 3 && overflows_2 == 3 && count3 == 2 ?
 		TEST_OK : TEST_FAIL;
 }
+
+bool test__bp_signal_is_supported(void)
+{
+/*
+ * The powerpc so far does not have support to even create
+ * instruction breakpoint using the perf event interface.
+ * Once it's there we can release this.
+ */
+#ifdef __powerpc__
+	return false;
+#else
+	return true;
+#endif
+}
+7
tools/perf/tests/builtin-test.c
···
 	{
 		.desc = "Breakpoint overflow signal handler",
 		.func = test__bp_signal,
+		.is_supported = test__bp_signal_is_supported,
 	},
 	{
 		.desc = "Breakpoint overflow sampling",
 		.func = test__bp_signal_overflow,
+		.is_supported = test__bp_signal_is_supported,
 	},
 	{
 		.desc = "Number of exit events of a simple workload",
···

 		if (!perf_test__matches(t, curr, argc, argv))
 			continue;
+
+		if (t->is_supported && !t->is_supported()) {
+			pr_debug("%2d: %-*s: Disabled\n", i, width, t->desc);
+			continue;
+		}

 		pr_info("%2d: %-*s:", i, width, t->desc);
+19-1
tools/perf/tests/code-reading.c
···
 	unsigned char buf2[BUFSZ];
 	size_t ret_len;
 	u64 objdump_addr;
+	const char *objdump_name;
+	char decomp_name[KMOD_DECOMP_LEN];
 	int ret;

 	pr_debug("Reading object code for memory address: %#"PRIx64"\n", addr);
···
 		state->done[state->done_cnt++] = al.map->start;
 	}

+	objdump_name = al.map->dso->long_name;
+	if (dso__needs_decompress(al.map->dso)) {
+		if (dso__decompress_kmodule_path(al.map->dso, objdump_name,
+						 decomp_name,
+						 sizeof(decomp_name)) < 0) {
+			pr_debug("decompression failed\n");
+			return -1;
+		}
+
+		objdump_name = decomp_name;
+	}
+
 	/* Read the object code using objdump */
 	objdump_addr = map__rip_2objdump(al.map, al.addr);
-	ret = read_via_objdump(al.map->dso->long_name, objdump_addr, buf2, len);
+	ret = read_via_objdump(objdump_name, objdump_addr, buf2, len);
+
+	if (dso__needs_decompress(al.map->dso))
+		unlink(objdump_name);
+
 	if (ret > 0) {
 		/*
 		 * The kernel maps are inaccurate - assume objdump is right in
tools/perf/util/evsel.c
···
 	struct perf_evsel *evsel;

 	event_attr_init(&attr);
+	/*
+	 * Unnamed union member, not supported as struct member named
+	 * initializer in older compilers such as gcc 4.4.7
+	 *
+	 * Just for probing the precise_ip:
+	 */
+	attr.sample_period = 1;

 	perf_event_attr__set_max_precise_ip(&attr);
+	/*
+	 * Now let the usual logic to set up the perf_event_attr defaults
+	 * to kick in when we return and before perf_evsel__open() is called.
+	 */
+	attr.sample_period = 0;

 	evsel = perf_evsel__new(&attr);
 	if (evsel == NULL)
+11-3
tools/perf/util/header.c
···

 /*
  * default get_cpuid(): nothing gets recorded
- * actual implementation must be in arch/$(ARCH)/util/header.c
+ * actual implementation must be in arch/$(SRCARCH)/util/header.c
  */
 int __weak get_cpuid(char *buffer __maybe_unused, size_t sz __maybe_unused)
 {
···

 	dso__set_build_id(dso, &bev->build_id);

-	if (!is_kernel_module(filename, cpumode))
-		dso->kernel = dso_type;
+	if (dso_type != DSO_TYPE_USER) {
+		struct kmod_path m = { .name = NULL, };
+
+		if (!kmod_path__parse_name(&m, filename) && m.kmod)
+			dso__set_module_info(dso, &m, machine);
+		else
+			dso->kernel = dso_type;
+
+		free(m.name);
+	}

 	build_id__sprintf(dso->build_id, sizeof(dso->build_id),
 			  sbuild_id);
+7-14
tools/perf/util/machine.c
···
 	if (dso == NULL)
 		goto out_unlock;

-	if (machine__is_host(machine))
-		dso->symtab_type = DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE;
-	else
-		dso->symtab_type = DSO_BINARY_TYPE__GUEST_KMODULE;
-
-	/* _KMODULE_COMP should be next to _KMODULE */
-	if (m->kmod && m->comp)
-		dso->symtab_type++;
-
-	dso__set_short_name(dso, strdup(m->name), true);
+	dso__set_module_info(dso, m, machine);
 	dso__set_long_name(dso, strdup(filename), true);
 }
···
 	 */
 	map_groups__fixup_end(&machine->kmaps);

-	if (machine__get_running_kernel_start(machine, &name, &addr)) {
-	} else if (maps__set_kallsyms_ref_reloc_sym(machine->vmlinux_maps, name, addr)) {
-		machine__destroy_kernel_maps(machine);
-		return -1;
+	if (!machine__get_running_kernel_start(machine, &name, &addr)) {
+		if (name &&
+		    maps__set_kallsyms_ref_reloc_sym(machine->vmlinux_maps, name, addr)) {
+			machine__destroy_kernel_maps(machine);
+			return -1;
+		}
 	}

 	return 0;
+1-1
tools/perf/util/probe-event.c
···
 			struct map *map, unsigned long offs)
 {
 	struct symbol *sym;
-	u64 addr = tp->address + tp->offset - offs;
+	u64 addr = tp->address - offs;

 	sym = map__find_symbol(map, addr);
 	if (!sym)
tools/perf/util/scripting-engines/trace-event-python.c
···
 	fprintf(ofp, "# be retrieved using Python functions of the form "
 		"common_*(context).\n");

-	fprintf(ofp, "# See the perf-trace-python Documentation for the list "
+	fprintf(ofp, "# See the perf-script-python Documentation for the list "
 		"of available functions.\n\n");

 	fprintf(ofp, "import os\n");
+3-38
tools/perf/util/symbol-elf.c
···
 	return 0;
 }

-static int decompress_kmodule(struct dso *dso, const char *name,
-			      enum dso_binary_type type)
-{
-	int fd = -1;
-	char tmpbuf[] = "/tmp/perf-kmod-XXXXXX";
-	struct kmod_path m;
-
-	if (type != DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE_COMP &&
-	    type != DSO_BINARY_TYPE__GUEST_KMODULE_COMP &&
-	    type != DSO_BINARY_TYPE__BUILD_ID_CACHE)
-		return -1;
-
-	if (type == DSO_BINARY_TYPE__BUILD_ID_CACHE)
-		name = dso->long_name;
-
-	if (kmod_path__parse_ext(&m, name) || !m.comp)
-		return -1;
-
-	fd = mkstemp(tmpbuf);
-	if (fd < 0) {
-		dso->load_errno = errno;
-		goto out;
-	}
-
-	if (!decompress_to_file(m.ext, name, fd)) {
-		dso->load_errno = DSO_LOAD_ERRNO__DECOMPRESSION_FAILURE;
-		close(fd);
-		fd = -1;
-	}
-
-	unlink(tmpbuf);
-
-out:
-	free(m.ext);
-	return fd;
-}
-
 bool symsrc__possibly_runtime(struct symsrc *ss)
 {
 	return ss->dynsym || ss->opdsec;
···
 	int fd;

 	if (dso__needs_decompress(dso)) {
-		fd = decompress_kmodule(dso, name, type);
+		fd = dso__decompress_kmodule_fd(dso, name);
 		if (fd < 0)
 			return -1;
+
+		type = dso->symtab_type;
 	} else {
 		fd = open(name, O_RDONLY);
 		if (fd < 0) {
-4
tools/perf/util/symbol.c
···
 	if (!runtime_ss && syms_ss)
 		runtime_ss = syms_ss;

-	if (syms_ss && syms_ss->type == DSO_BINARY_TYPE__BUILD_ID_CACHE)
-		if (dso__build_id_is_kmod(dso, name, PATH_MAX))
-			kmod = true;
-
 	if (syms_ss)
 		ret = dso__load_sym(dso, map, syms_ss, runtime_ss, kmod);
 	else
+17-1
tools/perf/util/unwind-libdw.c
···
 		return 0;

 	mod = dwfl_addrmodule(ui->dwfl, ip);
+	if (mod) {
+		Dwarf_Addr s;
+
+		dwfl_module_info(mod, NULL, &s, NULL, NULL, NULL, NULL, NULL);
+		if (s != al->map->start)
+			mod = 0;
+	}
+
 	if (!mod)
 		mod = dwfl_report_elf(ui->dwfl, dso->short_name,
 				      dso->long_name, -1, al->map->start,
···
 	Dwarf_Addr pc;
 	bool isactivation;

+	if (!dwfl_frame_pc(state, &pc, NULL)) {
+		pr_err("%s", dwfl_errmsg(-1));
+		return DWARF_CB_ABORT;
+	}
+
+	// report the module before we query for isactivation
+	report_module(pc, ui);
+
 	if (!dwfl_frame_pc(state, &pc, &isactivation)) {
 		pr_err("%s", dwfl_errmsg(-1));
 		return DWARF_CB_ABORT;
···

 	err = dwfl_getthread_frames(ui->dwfl, thread->tid, frame_callback, ui);

-	if (err && !ui->max_stack)
+	if (err && ui->max_stack != max_stack)
 		err = 0;

 	/*
+30-11
tools/testing/selftests/bpf/bpf_endian.h
···
 #ifndef __BPF_ENDIAN__
 #define __BPF_ENDIAN__

-#include <asm/byteorder.h>
+#include <linux/swab.h>

-#if __BYTE_ORDER == __LITTLE_ENDIAN
-# define __bpf_ntohs(x)		__builtin_bswap16(x)
-# define __bpf_htons(x)		__builtin_bswap16(x)
-#elif __BYTE_ORDER == __BIG_ENDIAN
-# define __bpf_ntohs(x)		(x)
-# define __bpf_htons(x)		(x)
+/* LLVM's BPF target selects the endianness of the CPU
+ * it compiles on, or the user specifies (bpfel/bpfeb),
+ * respectively. The used __BYTE_ORDER__ is defined by
+ * the compiler, we cannot rely on __BYTE_ORDER from
+ * libc headers, since it doesn't reflect the actual
+ * requested byte order.
+ *
+ * Note, LLVM's BPF target has different __builtin_bswapX()
+ * semantics. It does map to BPF_ALU | BPF_END | BPF_TO_BE
+ * in bpfel and bpfeb case, which means below, that we map
+ * to cpu_to_be16(). We could use it unconditionally in BPF
+ * case, but better not rely on it, so that this header here
+ * can be used from application and BPF program side, which
+ * use different targets.
+ */
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+# define __bpf_ntohs(x)			__builtin_bswap16(x)
+# define __bpf_htons(x)			__builtin_bswap16(x)
+# define __bpf_constant_ntohs(x)	___constant_swab16(x)
+# define __bpf_constant_htons(x)	___constant_swab16(x)
+#elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+# define __bpf_ntohs(x)			(x)
+# define __bpf_htons(x)			(x)
+# define __bpf_constant_ntohs(x)	(x)
+# define __bpf_constant_htons(x)	(x)
 #else
-# error "Fix your __BYTE_ORDER?!"
+# error "Fix your compiler's __BYTE_ORDER__?!"
 #endif

 #define bpf_htons(x)				\
 	(__builtin_constant_p(x) ?		\
-	 __constant_htons(x) : __bpf_htons(x))
+	 __bpf_constant_htons(x) : __bpf_htons(x))
 #define bpf_ntohs(x)				\
 	(__builtin_constant_p(x) ?		\
-	 __constant_ntohs(x) : __bpf_ntohs(x))
+	 __bpf_constant_ntohs(x) : __bpf_ntohs(x))

-#endif
+#endif /* __BPF_ENDIAN__ */
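With the __bpf_constant_*() variants in place, bpf_htons() folds at compile time for literals and emits the endianness-correct swap (or no-op) for runtime values on either BPF target. Usage in a BPF program might look like the fragment below; `eth`, `host_port` and `parse_ipv4()` are assumed context, not names from the patch:

	/* Compile-time constant: resolved via __bpf_constant_htons() */
	if (eth->h_proto == bpf_htons(ETH_P_IP))
		return parse_ipv4(skb);

	/* Runtime value: goes through __bpf_htons(), i.e. a byte swap on a
	 * little-endian target or the identity on a big-endian one */
	__u16 wire_port = bpf_htons(host_port);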
···
 endchoice

 config INITRAMFS_COMPRESSION
+	depends on INITRAMFS_SOURCE!=""
 	string
 	default "" if INITRAMFS_COMPRESSION_NONE
 	default ".gz" if INITRAMFS_COMPRESSION_GZIP