Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Minor comment conflict in mac80211.

Signed-off-by: David S. Miller <davem@davemloft.net>

+840 -433
+2
Documentation/devicetree/bindings/crypto/allwinner,sun4i-a10-crypto.yaml
···
       - items:
           - const: allwinner,sun7i-a20-crypto
           - const: allwinner,sun4i-a10-crypto
+      - items:
+          - const: allwinner,sun8i-a33-crypto

   reg:
     maxItems: 1
+1
Documentation/devicetree/bindings/input/cypress,tm2-touchkey.txt
···
     * "cypress,tm2-touchkey" - for the touchkey found on the tm2 board
     * "cypress,midas-touchkey" - for the touchkey found on midas boards
     * "cypress,aries-touchkey" - for the touchkey found on aries boards
+    * "coreriver,tc360-touchkey" - for the Coreriver TouchCore 360 touchkey
 - reg: I2C address of the chip.
 - interrupts: interrupt to which the chip is connected (see interrupt
   binding[0]).
+2
Documentation/devicetree/bindings/vendor-prefixes.yaml
···
     description: Colorful GRP, Shenzhen Xueyushi Technology Ltd.
   "^compulab,.*":
     description: CompuLab Ltd.
+  "^coreriver,.*":
+    description: CORERIVER Semiconductor Co.,Ltd.
   "^corpro,.*":
     description: Chengdu Corpro Technology Co., Ltd.
   "^cortina,.*":
+25
Documentation/virt/kvm/amd-memory-encryption.rst
···
 encrypting bootstrap code, snapshot, migrating and debugging the guest. For more
 information, see the SEV Key Management spec [api-spec]_

+The main ioctl to access SEV is KVM_MEM_ENCRYPT_OP.  If the argument
+to KVM_MEM_ENCRYPT_OP is NULL, the ioctl returns 0 if SEV is enabled
+and ``ENOTTY`` if it is disabled (on some older versions of Linux,
+the ioctl runs normally even with a NULL argument, and therefore will
+likely return ``EFAULT``).  If non-NULL, the argument to KVM_MEM_ENCRYPT_OP
+must be a struct kvm_sev_cmd::
+
+        struct kvm_sev_cmd {
+                __u32 id;
+                __u64 data;
+                __u32 error;
+                __u32 sev_fd;
+        };
+
+
+The ``id`` field contains the subcommand, and the ``data`` field points to
+another struct containing arguments specific to the command.  The ``sev_fd``
+should point to a file descriptor that is opened on the ``/dev/sev``
+device, if needed (see individual commands).
+
+On output, ``error`` is zero on success, or an error code.  Error codes
+are defined in ``<linux/psp-dev.h>``.
+
 KVM implements the following commands to support common lifecycle events of SEV
 guests, such as launching, running, snapshotting, migrating and decommissioning.
···
 };

 On success, the 'handle' field contains a new handle and on error, a negative value.
+
+KVM_SEV_LAUNCH_START requires the ``sev_fd`` field to be valid.

 For more details, see SEV spec Section 6.2.
+4 -5
MAINTAINERS
···
 HISILICON ROCE DRIVER
 M:	Lijun Ou <oulijun@huawei.com>
-M:	Wei Hu(Xavier) <xavier.huwei@huawei.com>
+M:	Wei Hu(Xavier) <huwei87@hisilicon.com>
+M:	Weihang Li <liweihang@huawei.com>
 L:	linux-rdma@vger.kernel.org
 S:	Maintained
 F:	drivers/infiniband/hw/hns/
···
 F:	include/uapi/rdma/siw-abi.h

 SOFT-ROCE DRIVER (rxe)
-M:	Moni Shoua <monis@mellanox.com>
+M:	Zhu Yanjun <yanjunz@mellanox.com>
 L:	linux-rdma@vger.kernel.org
 S:	Supported
-W:	https://github.com/SoftRoCE/rxe-dev/wiki/rxe-dev:-Home
-Q:	http://patchwork.kernel.org/project/linux-rdma/list/
 F:	drivers/infiniband/sw/rxe/
 F:	include/uapi/rdma/rdma_user_rxe.h
···
 S:	Maintained
 F:	drivers/media/platform/ti-vpe/
 F:	Documentation/devicetree/bindings/media/ti,vpe.yaml
-	Documentation/devicetree/bindings/media/ti,cal.yaml
+F:	Documentation/devicetree/bindings/media/ti,cal.yaml

 TI WILINK WIRELESS DRIVERS
 L:	linux-wireless@vger.kernel.org
+1 -1
Makefile
···
 VERSION = 5
 PATCHLEVEL = 6
 SUBLEVEL = 0
-EXTRAVERSION = -rc7
+EXTRAVERSION =
 NAME = Kleptomaniac Octopus

 # *DOCUMENTATION*
+1
arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
···
 &sdhci {
 	#address-cells = <1>;
 	#size-cells = <0>;
+	pinctrl-names = "default";
 	pinctrl-0 = <&emmc_gpio34 &gpclk2_gpio43>;
 	bus-width = <4>;
 	mmc-pwrseq = <&wifi_pwrseq>;
+1
arch/arm/boot/dts/bcm2835-rpi.dtsi
···
 	firmware: firmware {
 		compatible = "raspberrypi,bcm2835-firmware", "simple-bus";
 		mboxes = <&mailbox>;
+		dma-ranges;
 	};

 	power: power {
+2 -2
arch/arm/boot/dts/dm8148-evm.dts
···
 &cpsw_emac0 {
 	phy-handle = <&ethphy0>;
-	phy-mode = "rgmii";
+	phy-mode = "rgmii-id";
 };

 &cpsw_emac1 {
 	phy-handle = <&ethphy1>;
-	phy-mode = "rgmii";
+	phy-mode = "rgmii-id";
 };

 &davinci_mdio {
+2 -2
arch/arm/boot/dts/dm8148-t410.dts
···
 &cpsw_emac0 {
 	phy-handle = <&ethphy0>;
-	phy-mode = "rgmii";
+	phy-mode = "rgmii-id";
 };

 &cpsw_emac1 {
 	phy-handle = <&ethphy1>;
-	phy-mode = "rgmii";
+	phy-mode = "rgmii-id";
 };

 &davinci_mdio {
+2 -2
arch/arm/boot/dts/dra62x-j5eco-evm.dts
···
 &cpsw_emac0 {
 	phy-handle = <&ethphy0>;
-	phy-mode = "rgmii";
+	phy-mode = "rgmii-id";
 };

 &cpsw_emac1 {
 	phy-handle = <&ethphy1>;
-	phy-mode = "rgmii";
+	phy-mode = "rgmii-id";
 };

 &davinci_mdio {
+1
arch/arm/boot/dts/dra7.dtsi
···
 		#address-cells = <1>;
 		#size-cells = <1>;
 		ranges = <0x0 0x0 0x0 0xc0000000>;
+		dma-ranges = <0x80000000 0x0 0x80000000 0x80000000>;
 		ti,hwmods = "l3_main_1", "l3_main_2";
 		reg = <0x0 0x44000000 0x0 0x1000000>,
 		      <0x0 0x45000000 0x0 0x1000>;
+2 -2
arch/arm/boot/dts/exynos4412-galaxy-s3.dtsi
···
 		};
 	};

-	lcd_vdd3_reg: voltage-regulator-6 {
+	lcd_vdd3_reg: voltage-regulator-7 {
 		compatible = "regulator-fixed";
 		regulator-name = "LCD_VDD_2.2V";
 		regulator-min-microvolt = <2200000>;
···
 		enable-active-high;
 	};

-	ps_als_reg: voltage-regulator-7 {
+	ps_als_reg: voltage-regulator-8 {
 		compatible = "regulator-fixed";
 		regulator-name = "LED_A_3.0V";
 		regulator-min-microvolt = <3000000>;
+1 -1
arch/arm/boot/dts/exynos4412-n710x.dts
···
 	/* bootargs are passed in by bootloader */

-	cam_vdda_reg: voltage-regulator-6 {
+	cam_vdda_reg: voltage-regulator-7 {
 		compatible = "regulator-fixed";
 		regulator-name = "CAM_SENSOR_CORE_1.2V";
 		regulator-min-microvolt = <1200000>;
+2 -2
arch/arm/boot/dts/imx6qdl-phytec-phycore-som.dtsi
···
 	regulators {
 		vdd_arm: buck1 {
 			regulator-name = "vdd_arm";
-			regulator-min-microvolt = <730000>;
+			regulator-min-microvolt = <925000>;
 			regulator-max-microvolt = <1380000>;
 			regulator-initial-mode = <DA9063_BUCK_MODE_SYNC>;
 			regulator-always-on;
···
 		vdd_soc: buck2 {
 			regulator-name = "vdd_soc";
-			regulator-min-microvolt = <730000>;
+			regulator-min-microvolt = <1150000>;
 			regulator-max-microvolt = <1380000>;
 			regulator-initial-mode = <DA9063_BUCK_MODE_SYNC>;
 			regulator-always-on;
+1 -1
arch/arm/boot/dts/motorola-mapphone-common.dtsi
···
 	reset-gpios = <&gpio6 13 GPIO_ACTIVE_HIGH>;	/* gpio173 */

 	/* gpio_183 with sys_nirq2 pad as wakeup */
-	interrupts-extended = <&gpio6 23 IRQ_TYPE_EDGE_FALLING>,
+	interrupts-extended = <&gpio6 23 IRQ_TYPE_LEVEL_LOW>,
 			      <&omap4_pmx_core 0x160>;
 	interrupt-names = "irq", "wakeup";
 	wakeup-source;
+28 -16
arch/arm/boot/dts/omap3-n900.dts
···
 	compatible = "ti,omap2-onenand";
 	reg = <0 0 0x20000>;	/* CS0, offset 0, IO size 128K */

+	/*
+	 * These timings are based on CONFIG_OMAP_GPMC_DEBUG=y reported
+	 * bootloader set values when booted with v5.1
+	 * (OneNAND Manufacturer: Samsung):
+	 *
+	 * cs0 GPMC_CS_CONFIG1: 0xfb001202
+	 * cs0 GPMC_CS_CONFIG2: 0x00111100
+	 * cs0 GPMC_CS_CONFIG3: 0x00020200
+	 * cs0 GPMC_CS_CONFIG4: 0x11001102
+	 * cs0 GPMC_CS_CONFIG5: 0x03101616
+	 * cs0 GPMC_CS_CONFIG6: 0x90060000
+	 */
 	gpmc,sync-read;
 	gpmc,sync-write;
 	gpmc,burst-length = <16>;
 	gpmc,burst-read;
 	gpmc,burst-wrap;
 	gpmc,burst-write;
-	gpmc,device-width = <2>; /* GPMC_DEVWIDTH_16BIT */
-	gpmc,mux-add-data = <2>; /* GPMC_MUX_AD */
+	gpmc,device-width = <2>;
+	gpmc,mux-add-data = <2>;
 	gpmc,cs-on-ns = <0>;
-	gpmc,cs-rd-off-ns = <87>;
-	gpmc,cs-wr-off-ns = <87>;
+	gpmc,cs-rd-off-ns = <102>;
+	gpmc,cs-wr-off-ns = <102>;
 	gpmc,adv-on-ns = <0>;
-	gpmc,adv-rd-off-ns = <10>;
-	gpmc,adv-wr-off-ns = <10>;
-	gpmc,oe-on-ns = <15>;
-	gpmc,oe-off-ns = <87>;
+	gpmc,adv-rd-off-ns = <12>;
+	gpmc,adv-wr-off-ns = <12>;
+	gpmc,oe-on-ns = <12>;
+	gpmc,oe-off-ns = <102>;
 	gpmc,we-on-ns = <0>;
-	gpmc,we-off-ns = <87>;
-	gpmc,rd-cycle-ns = <112>;
-	gpmc,wr-cycle-ns = <112>;
-	gpmc,access-ns = <81>;
-	gpmc,page-burst-access-ns = <15>;
+	gpmc,we-off-ns = <102>;
+	gpmc,rd-cycle-ns = <132>;
+	gpmc,wr-cycle-ns = <132>;
+	gpmc,access-ns = <96>;
+	gpmc,page-burst-access-ns = <18>;
 	gpmc,bus-turnaround-ns = <0>;
 	gpmc,cycle2cycle-delay-ns = <0>;
 	gpmc,wait-monitoring-ns = <0>;
-	gpmc,clk-activation-ns = <5>;
-	gpmc,wr-data-mux-bus-ns = <30>;
-	gpmc,wr-access-ns = <81>;
+	gpmc,clk-activation-ns = <6>;
+	gpmc,wr-data-mux-bus-ns = <36>;
+	gpmc,wr-access-ns = <96>;
 	gpmc,sync-clk-ps = <15000>;

 	/*
+1
arch/arm/boot/dts/omap5.dtsi
···
 		#address-cells = <1>;
 		#size-cells = <1>;
 		ranges = <0 0 0 0xc0000000>;
+		dma-ranges = <0x80000000 0x0 0x80000000 0x80000000>;
 		ti,hwmods = "l3_main_1", "l3_main_2", "l3_main_3";
 		reg = <0 0x44000000 0 0x2000>,
 		      <0 0x44800000 0 0x3000>,
+2 -2
arch/arm/boot/dts/ox810se.dtsi
···
 		interrupt-controller;
 		reg = <0 0x200>;
 		#interrupt-cells = <1>;
-		valid-mask = <0xFFFFFFFF>;
-		clear-mask = <0>;
+		valid-mask = <0xffffffff>;
+		clear-mask = <0xffffffff>;
 	};

 	timer0: timer@200 {
+2 -2
arch/arm/boot/dts/ox820.dtsi
···
 		reg = <0 0x200>;
 		interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
 		#interrupt-cells = <1>;
-		valid-mask = <0xFFFFFFFF>;
-		clear-mask = <0>;
+		valid-mask = <0xffffffff>;
+		clear-mask = <0xffffffff>;
 	};

 	timer0: timer@200 {
+1 -1
arch/arm/boot/dts/sun8i-a33.dtsi
···
 		};

 		crypto: crypto-engine@1c15000 {
-			compatible = "allwinner,sun4i-a10-crypto";
+			compatible = "allwinner,sun8i-a33-crypto";
 			reg = <0x01c15000 0x1000>;
 			interrupts = <GIC_SPI 80 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&ccu CLK_BUS_SS>, <&ccu CLK_SS>;
+4 -3
arch/arm/boot/dts/sun8i-a83t-tbs-a711.dts
···
 };

 &reg_dldo3 {
-	regulator-min-microvolt = <2800000>;
-	regulator-max-microvolt = <2800000>;
+	regulator-min-microvolt = <1800000>;
+	regulator-max-microvolt = <1800000>;
 	regulator-name = "vdd-csi";
 };
···
 };

 &usbphy {
-	usb0_id_det-gpios = <&pio 7 11 GPIO_ACTIVE_HIGH>; /* PH11 */
+	usb0_id_det-gpios = <&pio 7 11 (GPIO_ACTIVE_HIGH | GPIO_PULL_UP)>; /* PH11 */
+	usb0_vbus_power-supply = <&usb_power_supply>;
 	usb0_vbus-supply = <&reg_drivevbus>;
 	usb1_vbus-supply = <&reg_vmain>;
 	usb2_vbus-supply = <&reg_vmain>;
+3 -3
arch/arm/boot/dts/sun8i-a83t.dtsi
···
 			reg = <0x01c30000 0x104>;
 			interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_HIGH>;
 			interrupt-names = "macirq";
-			resets = <&ccu CLK_BUS_EMAC>;
-			reset-names = "stmmaceth";
-			clocks = <&ccu RST_BUS_EMAC>;
+			clocks = <&ccu CLK_BUS_EMAC>;
 			clock-names = "stmmaceth";
+			resets = <&ccu RST_BUS_EMAC>;
+			reset-names = "stmmaceth";
 			status = "disabled";

 			mdio: mdio {
+62 -63
arch/arm/boot/dts/sun8i-r40.dtsi
···
 			interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>;
 		};

+		spi0: spi@1c05000 {
+			compatible = "allwinner,sun8i-r40-spi",
+				     "allwinner,sun8i-h3-spi";
+			reg = <0x01c05000 0x1000>;
+			interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
+			clocks = <&ccu CLK_BUS_SPI0>, <&ccu CLK_SPI0>;
+			clock-names = "ahb", "mod";
+			resets = <&ccu RST_BUS_SPI0>;
+			status = "disabled";
+			#address-cells = <1>;
+			#size-cells = <0>;
+		};
+
+		spi1: spi@1c06000 {
+			compatible = "allwinner,sun8i-r40-spi",
+				     "allwinner,sun8i-h3-spi";
+			reg = <0x01c06000 0x1000>;
+			interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
+			clocks = <&ccu CLK_BUS_SPI1>, <&ccu CLK_SPI1>;
+			clock-names = "ahb", "mod";
+			resets = <&ccu RST_BUS_SPI1>;
+			status = "disabled";
+			#address-cells = <1>;
+			#size-cells = <0>;
+		};
+
 		csi0: csi@1c09000 {
 			compatible = "allwinner,sun8i-r40-csi0",
 				     "allwinner,sun7i-a20-csi0";
···
 			resets = <&ccu RST_BUS_CE>;
 		};

+		spi2: spi@1c17000 {
+			compatible = "allwinner,sun8i-r40-spi",
+				     "allwinner,sun8i-h3-spi";
+			reg = <0x01c17000 0x1000>;
+			interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
+			clocks = <&ccu CLK_BUS_SPI2>, <&ccu CLK_SPI2>;
+			clock-names = "ahb", "mod";
+			resets = <&ccu RST_BUS_SPI2>;
+			status = "disabled";
+			#address-cells = <1>;
+			#size-cells = <0>;
+		};
+
+		ahci: sata@1c18000 {
+			compatible = "allwinner,sun8i-r40-ahci";
+			reg = <0x01c18000 0x1000>;
+			interrupts = <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>;
+			clocks = <&ccu CLK_BUS_SATA>, <&ccu CLK_SATA>;
+			resets = <&ccu RST_BUS_SATA>;
+			reset-names = "ahci";
+			status = "disabled";
+		};
+
 		ehci1: usb@1c19000 {
 			compatible = "allwinner,sun8i-r40-ehci", "generic-ehci";
 			reg = <0x01c19000 0x100>;
···
 			phys = <&usbphy 2>;
 			phy-names = "usb";
 			status = "disabled";
+		};
+
+		spi3: spi@1c1f000 {
+			compatible = "allwinner,sun8i-r40-spi",
+				     "allwinner,sun8i-h3-spi";
+			reg = <0x01c1f000 0x1000>;
+			interrupts = <GIC_SPI 50 IRQ_TYPE_LEVEL_HIGH>;
+			clocks = <&ccu CLK_BUS_SPI3>, <&ccu CLK_SPI3>;
+			clock-names = "ahb", "mod";
+			resets = <&ccu RST_BUS_SPI3>;
+			status = "disabled";
+			#address-cells = <1>;
+			#size-cells = <0>;
 		};

 		ccu: clock@1c20000 {
···
 			status = "disabled";
 			#address-cells = <1>;
 			#size-cells = <0>;
-		};
-
-		spi0: spi@1c05000 {
-			compatible = "allwinner,sun8i-r40-spi",
-				     "allwinner,sun8i-h3-spi";
-			reg = <0x01c05000 0x1000>;
-			interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
-			clocks = <&ccu CLK_BUS_SPI0>, <&ccu CLK_SPI0>;
-			clock-names = "ahb", "mod";
-			resets = <&ccu RST_BUS_SPI0>;
-			status = "disabled";
-			#address-cells = <1>;
-			#size-cells = <0>;
-		};
-
-		spi1: spi@1c06000 {
-			compatible = "allwinner,sun8i-r40-spi",
-				     "allwinner,sun8i-h3-spi";
-			reg = <0x01c06000 0x1000>;
-			interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
-			clocks = <&ccu CLK_BUS_SPI1>, <&ccu CLK_SPI1>;
-			clock-names = "ahb", "mod";
-			resets = <&ccu RST_BUS_SPI1>;
-			status = "disabled";
-			#address-cells = <1>;
-			#size-cells = <0>;
-		};
-
-		spi2: spi@1c07000 {
-			compatible = "allwinner,sun8i-r40-spi",
-				     "allwinner,sun8i-h3-spi";
-			reg = <0x01c07000 0x1000>;
-			interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
-			clocks = <&ccu CLK_BUS_SPI2>, <&ccu CLK_SPI2>;
-			clock-names = "ahb", "mod";
-			resets = <&ccu RST_BUS_SPI2>;
-			status = "disabled";
-			#address-cells = <1>;
-			#size-cells = <0>;
-		};
-
-		spi3: spi@1c0f000 {
-			compatible = "allwinner,sun8i-r40-spi",
-				     "allwinner,sun8i-h3-spi";
-			reg = <0x01c0f000 0x1000>;
-			interrupts = <GIC_SPI 50 IRQ_TYPE_LEVEL_HIGH>;
-			clocks = <&ccu CLK_BUS_SPI3>, <&ccu CLK_SPI3>;
-			clock-names = "ahb", "mod";
-			resets = <&ccu RST_BUS_SPI3>;
-			status = "disabled";
-			#address-cells = <1>;
-			#size-cells = <0>;
-		};
-
-		ahci: sata@1c18000 {
-			compatible = "allwinner,sun8i-r40-ahci";
-			reg = <0x01c18000 0x1000>;
-			interrupts = <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>;
-			clocks = <&ccu CLK_BUS_SATA>, <&ccu CLK_SATA>;
-			resets = <&ccu RST_BUS_SATA>;
-			reset-names = "ahci";
-			status = "disabled";
-
 		};

 		gmac: ethernet@1c50000 {
+1 -1
arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
···
 	 * PSCI node is not added default, U-boot will add missing
 	 * parts if it determines to use PSCI.
 	 */
-	entry-method = "arm,psci";
+	entry-method = "psci";

 	CPU_PW20: cpu-pw20 {
 		compatible = "arm,idle-state";
+1 -1
arch/arm64/boot/dts/sprd/sc9863a.dtsi
···
 	};

 	idle-states {
-		entry-method = "arm,psci";
+		entry-method = "psci";
 		CORE_PD: core-pd {
 			compatible = "arm,idle-state";
 			entry-latency-us = <4000>;
+1 -1
arch/arm64/include/asm/alternative.h
···

 .macro user_alt, label, oldinstr, newinstr, cond
 9999:	alternative_insn "\oldinstr", "\newinstr", \cond
-	_ASM_EXTABLE 9999b, \label
+	_asm_extable 9999b, \label
 .endm

 /*
+5
arch/parisc/Kconfig
···
 config STACK_GROWSUP
 	def_bool y

+config ARCH_DEFCONFIG
+	string
+	default "arch/parisc/configs/generic-32bit_defconfig" if !64BIT
+	default "arch/parisc/configs/generic-64bit_defconfig" if 64BIT
+
 config GENERIC_LOCKBREAK
 	bool
 	default y
+7
arch/parisc/Makefile
···
 LD_BFD := elf32-hppa-linux
 endif

+# select defconfig based on actual architecture
+ifeq ($(shell uname -m),parisc64)
+	KBUILD_DEFCONFIG := generic-64bit_defconfig
+else
+	KBUILD_DEFCONFIG := generic-32bit_defconfig
+endif
+
 export LD_BFD

 ifneq ($(SUBARCH),$(UTS_MACHINE))
-1
arch/riscv/Kconfig
···
 	select PCI_DOMAINS_GENERIC if PCI
 	select PCI_MSI if PCI
 	select RISCV_TIMER
-	select UACCESS_MEMCPY if !MMU
 	select GENERIC_IRQ_MULTI_HANDLER
 	select GENERIC_ARCH_TOPOLOGY if SMP
 	select ARCH_HAS_PTE_SPECIAL
-14
arch/riscv/Kconfig.socs
···
 config SOC_VIRT
 	bool "QEMU Virt Machine"
-	select VIRTIO_PCI
-	select VIRTIO_BALLOON
-	select VIRTIO_MMIO
-	select VIRTIO_CONSOLE
-	select VIRTIO_NET
-	select NET_9P_VIRTIO
-	select VIRTIO_BLK
-	select SCSI_VIRTIO
-	select DRM_VIRTIO_GPU
-	select HW_RANDOM_VIRTIO
-	select RPMSG_CHAR
-	select RPMSG_VIRTIO
-	select CRYPTO_DEV_VIRTIO
-	select VIRTIO_INPUT
 	select POWER_RESET_SYSCON
 	select POWER_RESET_SYSCON_POWEROFF
 	select GOLDFISH
+15 -1
arch/riscv/configs/defconfig
···
 CONFIG_IP_PNP_RARP=y
 CONFIG_NETLINK_DIAG=y
 CONFIG_NET_9P=y
+CONFIG_NET_9P_VIRTIO=y
 CONFIG_PCI=y
 CONFIG_PCIEPORTBUS=y
 CONFIG_PCI_HOST_GENERIC=y
···
 CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_BLK_DEV_LOOP=y
+CONFIG_VIRTIO_BLK=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_BLK_DEV_SR=y
+CONFIG_SCSI_VIRTIO=y
 CONFIG_ATA=y
 CONFIG_SATA_AHCI=y
 CONFIG_SATA_AHCI_PLATFORM=y
 CONFIG_NETDEVICES=y
+CONFIG_VIRTIO_NET=y
 CONFIG_MACB=y
 CONFIG_E1000E=y
 CONFIG_R8169=y
···
 CONFIG_SERIAL_OF_PLATFORM=y
 CONFIG_SERIAL_EARLYCON_RISCV_SBI=y
 CONFIG_HVC_RISCV_SBI=y
+CONFIG_VIRTIO_CONSOLE=y
 CONFIG_HW_RANDOM=y
+CONFIG_HW_RANDOM_VIRTIO=y
 CONFIG_SPI=y
 CONFIG_SPI_SIFIVE=y
 # CONFIG_PTP_1588_CLOCK is not set
 CONFIG_POWER_RESET=y
 CONFIG_DRM=y
 CONFIG_DRM_RADEON=y
+CONFIG_DRM_VIRTIO_GPU=y
 CONFIG_FRAMEBUFFER_CONSOLE=y
 CONFIG_USB=y
 CONFIG_USB_XHCI_HCD=y
···
 CONFIG_MMC=y
 CONFIG_MMC_SPI=y
 CONFIG_RTC_CLASS=y
+CONFIG_VIRTIO_PCI=y
+CONFIG_VIRTIO_BALLOON=y
+CONFIG_VIRTIO_INPUT=y
+CONFIG_VIRTIO_MMIO=y
+CONFIG_RPMSG_CHAR=y
+CONFIG_RPMSG_VIRTIO=y
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_POSIX_ACL=y
 CONFIG_AUTOFS4_FS=y
···
 CONFIG_ROOT_NFS=y
 CONFIG_9P_FS=y
 CONFIG_CRYPTO_USER_API_HASH=y
+CONFIG_CRYPTO_DEV_VIRTIO=y
 CONFIG_PRINTK_TIME=y
 CONFIG_DEBUG_FS=y
 CONFIG_DEBUG_PAGEALLOC=y
+CONFIG_SCHED_STACK_END_CHECK=y
 CONFIG_DEBUG_VM=y
 CONFIG_DEBUG_VM_PGFLAGS=y
 CONFIG_DEBUG_MEMORY_INIT=y
 CONFIG_DEBUG_PER_CPU_MAPS=y
 CONFIG_SOFTLOCKUP_DETECTOR=y
 CONFIG_WQ_WATCHDOG=y
-CONFIG_SCHED_STACK_END_CHECK=y
 CONFIG_DEBUG_TIMEKEEPING=y
 CONFIG_DEBUG_RT_MUTEXES=y
 CONFIG_DEBUG_SPINLOCK=y
+15 -1
arch/riscv/configs/rv32_defconfig
···
 CONFIG_IP_PNP_RARP=y
 CONFIG_NETLINK_DIAG=y
 CONFIG_NET_9P=y
+CONFIG_NET_9P_VIRTIO=y
 CONFIG_PCI=y
 CONFIG_PCIEPORTBUS=y
 CONFIG_PCI_HOST_GENERIC=y
···
 CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_BLK_DEV_LOOP=y
+CONFIG_VIRTIO_BLK=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_BLK_DEV_SR=y
+CONFIG_SCSI_VIRTIO=y
 CONFIG_ATA=y
 CONFIG_SATA_AHCI=y
 CONFIG_SATA_AHCI_PLATFORM=y
 CONFIG_NETDEVICES=y
+CONFIG_VIRTIO_NET=y
 CONFIG_MACB=y
 CONFIG_E1000E=y
 CONFIG_R8169=y
···
 CONFIG_SERIAL_OF_PLATFORM=y
 CONFIG_SERIAL_EARLYCON_RISCV_SBI=y
 CONFIG_HVC_RISCV_SBI=y
+CONFIG_VIRTIO_CONSOLE=y
 CONFIG_HW_RANDOM=y
+CONFIG_HW_RANDOM_VIRTIO=y
 # CONFIG_PTP_1588_CLOCK is not set
 CONFIG_POWER_RESET=y
 CONFIG_DRM=y
 CONFIG_DRM_RADEON=y
+CONFIG_DRM_VIRTIO_GPU=y
 CONFIG_FRAMEBUFFER_CONSOLE=y
 CONFIG_USB=y
 CONFIG_USB_XHCI_HCD=y
···
 CONFIG_USB_STORAGE=y
 CONFIG_USB_UAS=y
 CONFIG_RTC_CLASS=y
+CONFIG_VIRTIO_PCI=y
+CONFIG_VIRTIO_BALLOON=y
+CONFIG_VIRTIO_INPUT=y
+CONFIG_VIRTIO_MMIO=y
+CONFIG_RPMSG_CHAR=y
+CONFIG_RPMSG_VIRTIO=y
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_POSIX_ACL=y
 CONFIG_AUTOFS4_FS=y
···
 CONFIG_ROOT_NFS=y
 CONFIG_9P_FS=y
 CONFIG_CRYPTO_USER_API_HASH=y
+CONFIG_CRYPTO_DEV_VIRTIO=y
 CONFIG_PRINTK_TIME=y
 CONFIG_DEBUG_FS=y
 CONFIG_DEBUG_PAGEALLOC=y
+CONFIG_SCHED_STACK_END_CHECK=y
 CONFIG_DEBUG_VM=y
 CONFIG_DEBUG_VM_PGFLAGS=y
 CONFIG_DEBUG_MEMORY_INIT=y
 CONFIG_DEBUG_PER_CPU_MAPS=y
 CONFIG_SOFTLOCKUP_DETECTOR=y
 CONFIG_WQ_WATCHDOG=y
-CONFIG_SCHED_STACK_END_CHECK=y
 CONFIG_DEBUG_TIMEKEEPING=y
 CONFIG_DEBUG_RT_MUTEXES=y
 CONFIG_DEBUG_SPINLOCK=y
+4 -4
arch/riscv/include/asm/clint.h
···
 	writel(1, clint_ipi_base + hartid);
 }

-static inline void clint_send_ipi_mask(const struct cpumask *hartid_mask)
+static inline void clint_send_ipi_mask(const struct cpumask *mask)
 {
-	int hartid;
+	int cpu;

-	for_each_cpu(hartid, hartid_mask)
-		clint_send_ipi_single(hartid);
+	for_each_cpu(cpu, mask)
+		clint_send_ipi_single(cpuid_to_hartid_map(cpu));
 }

 static inline void clint_clear_ipi(unsigned long hartid)
+41 -37
arch/riscv/include/asm/pgtable.h
···
 #include <asm/tlbflush.h>
 #include <linux/mm_types.h>

+#ifdef CONFIG_MMU
+
+#define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
+#define VMALLOC_END      (PAGE_OFFSET - 1)
+#define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)
+
+#define BPF_JIT_REGION_SIZE	(SZ_128M)
+#define BPF_JIT_REGION_START	(PAGE_OFFSET - BPF_JIT_REGION_SIZE)
+#define BPF_JIT_REGION_END	(VMALLOC_END)
+
+/*
+ * Roughly size the vmemmap space to be large enough to fit enough
+ * struct pages to map half the virtual address space. Then
+ * position vmemmap directly below the VMALLOC region.
+ */
+#define VMEMMAP_SHIFT \
+	(CONFIG_VA_BITS - PAGE_SHIFT - 1 + STRUCT_PAGE_MAX_SHIFT)
+#define VMEMMAP_SIZE	BIT(VMEMMAP_SHIFT)
+#define VMEMMAP_END	(VMALLOC_START - 1)
+#define VMEMMAP_START	(VMALLOC_START - VMEMMAP_SIZE)
+
+/*
+ * Define vmemmap for pfn_to_page & page_to_pfn calls. Needed if kernel
+ * is configured with CONFIG_SPARSEMEM_VMEMMAP enabled.
+ */
+#define vmemmap		((struct page *)VMEMMAP_START)
+
+#define PCI_IO_SIZE      SZ_16M
+#define PCI_IO_END       VMEMMAP_START
+#define PCI_IO_START     (PCI_IO_END - PCI_IO_SIZE)
+
+#define FIXADDR_TOP      PCI_IO_START
+#ifdef CONFIG_64BIT
+#define FIXADDR_SIZE     PMD_SIZE
+#else
+#define FIXADDR_SIZE     PGDIR_SIZE
+#endif
+#define FIXADDR_START    (FIXADDR_TOP - FIXADDR_SIZE)
+
+#endif
+
 #ifdef CONFIG_64BIT
 #include <asm/pgtable-64.h>
 #else
···
 #define __S101	PAGE_READ_EXEC
 #define __S110	PAGE_SHARED_EXEC
 #define __S111	PAGE_SHARED_EXEC
-
-#define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
-#define VMALLOC_END      (PAGE_OFFSET - 1)
-#define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)
-
-#define BPF_JIT_REGION_SIZE	(SZ_128M)
-#define BPF_JIT_REGION_START	(PAGE_OFFSET - BPF_JIT_REGION_SIZE)
-#define BPF_JIT_REGION_END	(VMALLOC_END)
-
-/*
- * Roughly size the vmemmap space to be large enough to fit enough
- * struct pages to map half the virtual address space. Then
- * position vmemmap directly below the VMALLOC region.
- */
-#define VMEMMAP_SHIFT \
-	(CONFIG_VA_BITS - PAGE_SHIFT - 1 + STRUCT_PAGE_MAX_SHIFT)
-#define VMEMMAP_SIZE	BIT(VMEMMAP_SHIFT)
-#define VMEMMAP_END	(VMALLOC_START - 1)
-#define VMEMMAP_START	(VMALLOC_START - VMEMMAP_SIZE)
-
-/*
- * Define vmemmap for pfn_to_page & page_to_pfn calls. Needed if kernel
- * is configured with CONFIG_SPARSEMEM_VMEMMAP enabled.
- */
-#define vmemmap		((struct page *)VMEMMAP_START)

 static inline int pmd_present(pmd_t pmd)
 {
···
 #define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val(pte) })
 #define __swp_entry_to_pte(x)	((pte_t) { (x).val })
-
-#define PCI_IO_SIZE      SZ_16M
-#define PCI_IO_END       VMEMMAP_START
-#define PCI_IO_START     (PCI_IO_END - PCI_IO_SIZE)
-
-#define FIXADDR_TOP      PCI_IO_START
-#ifdef CONFIG_64BIT
-#define FIXADDR_SIZE     PMD_SIZE
-#else
-#define FIXADDR_SIZE     PGDIR_SIZE
-#endif
-#define FIXADDR_START    (FIXADDR_TOP - FIXADDR_SIZE)

 /*
  * Task size is 0x4000000000 for RV64 or 0x9fc00000 for RV32.
+18 -18
arch/riscv/include/asm/uaccess.h
··· 11 11 /* 12 12 * User space memory access functions 13 13 */ 14 + 15 + extern unsigned long __must_check __asm_copy_to_user(void __user *to, 16 + const void *from, unsigned long n); 17 + extern unsigned long __must_check __asm_copy_from_user(void *to, 18 + const void __user *from, unsigned long n); 19 + 20 + static inline unsigned long 21 + raw_copy_from_user(void *to, const void __user *from, unsigned long n) 22 + { 23 + return __asm_copy_from_user(to, from, n); 24 + } 25 + 26 + static inline unsigned long 27 + raw_copy_to_user(void __user *to, const void *from, unsigned long n) 28 + { 29 + return __asm_copy_to_user(to, from, n); 30 + } 31 + 14 32 #ifdef CONFIG_MMU 15 33 #include <linux/errno.h> 16 34 #include <linux/compiler.h> ··· 384 366 __put_user((x), __p) : \ 385 367 -EFAULT; \ 386 368 }) 387 - 388 - 389 - extern unsigned long __must_check __asm_copy_to_user(void __user *to, 390 - const void *from, unsigned long n); 391 - extern unsigned long __must_check __asm_copy_from_user(void *to, 392 - const void __user *from, unsigned long n); 393 - 394 - static inline unsigned long 395 - raw_copy_from_user(void *to, const void __user *from, unsigned long n) 396 - { 397 - return __asm_copy_from_user(to, from, n); 398 - } 399 - 400 - static inline unsigned long 401 - raw_copy_to_user(void __user *to, const void *from, unsigned long n) 402 - { 403 - return __asm_copy_to_user(to, from, n); 404 - } 405 369 406 370 extern long strncpy_from_user(char *dest, const char __user *src, long count); 407 371
+1 -1
arch/riscv/kernel/smp.c
···
 	if (IS_ENABLED(CONFIG_RISCV_SBI))
 		sbi_send_ipi(cpumask_bits(&hartid_mask));
 	else
-		clint_send_ipi_mask(&hartid_mask);
+		clint_send_ipi_mask(mask);
 }

 static void send_ipi_single(int cpu, enum ipi_message_type op)
+1 -1
arch/riscv/lib/Makefile
···
 lib-y			+= delay.o
 lib-y			+= memcpy.o
 lib-y			+= memset.o
-lib-$(CONFIG_MMU)	+= uaccess.o
+lib-y			+= uaccess.o
 lib-$(CONFIG_64BIT)	+= tishift.o
+7 -1
arch/x86/kvm/lapic.c
···
 	}
 }

+static void cancel_hv_timer(struct kvm_lapic *apic);
+
 static void apic_update_lvtt(struct kvm_lapic *apic)
 {
 	u32 timer_mode = kvm_lapic_get_reg(apic, APIC_LVTT) &
···
 	if (apic_lvtt_tscdeadline(apic) != (timer_mode ==
 			APIC_LVT_TIMER_TSCDEADLINE)) {
 		hrtimer_cancel(&apic->lapic_timer.timer);
+		preempt_disable();
+		if (apic->lapic_timer.hv_timer_in_use)
+			cancel_hv_timer(apic);
+		preempt_enable();
 		kvm_lapic_set_reg(apic, APIC_TMICT, 0);
 		apic->lapic_timer.period = 0;
 		apic->lapic_timer.tscdeadline = 0;
···

 	hrtimer_start(&apic->lapic_timer.timer,
 		apic->lapic_timer.target_expiration,
-		HRTIMER_MODE_ABS);
+		HRTIMER_MODE_ABS_HARD);
 }

 bool kvm_lapic_hv_timer_in_use(struct kvm_vcpu *vcpu)
+17 -8
arch/x86/kvm/svm.c
···
 static void __unregister_enc_region_locked(struct kvm *kvm,
 					   struct enc_region *region)
 {
-	/*
-	 * The guest may change the memory encryption attribute from C=0 -> C=1
-	 * or vice versa for this memory range. Lets make sure caches are
-	 * flushed to ensure that guest data gets written into memory with
-	 * correct C-bit.
-	 */
-	sev_clflush_pages(region->pages, region->npages);
-
 	sev_unpin_memory(kvm, region->pages, region->npages);
 	list_del(&region->list);
 	kfree(region);
···
 		return;

 	mutex_lock(&kvm->lock);
+
+	/*
+	 * Ensure that all guest tagged cache entries are flushed before
+	 * releasing the pages back to the system for use. CLFLUSH will
+	 * not do this, so issue a WBINVD.
+	 */
+	wbinvd_on_all_cpus();

 	/*
 	 * if userspace was terminated before unregistering the memory regions
···
 	if (!svm_sev_enabled())
 		return -ENOTTY;

+	if (!argp)
+		return 0;
+
 	if (copy_from_user(&sev_cmd, argp, sizeof(struct kvm_sev_cmd)))
 		return -EFAULT;
···
 		ret = -EINVAL;
 		goto failed;
 	}
+
+	/*
+	 * Ensure that all guest tagged cache entries are flushed before
+	 * releasing the pages back to the system for use. CLFLUSH will
+	 * not do this, so issue a WBINVD.
+	 */
+	wbinvd_on_all_cpus();

 	__unregister_enc_region_locked(kvm, region);
+1 -1
arch/x86/kvm/vmx/vmx.c
···
 #endif
 		ASM_CALL_CONSTRAINT
 		:
-		THUNK_TARGET(entry),
+		[thunk_target]"r"(entry),
 		[ss]"i"(__KERNEL_DS),
 		[cs]"i"(__KERNEL_CS)
 	);
+4 -2
arch/x86/kvm/x86.c
··· 1554 1554 */ 1555 1555 static int handle_fastpath_set_x2apic_icr_irqoff(struct kvm_vcpu *vcpu, u64 data) 1556 1556 { 1557 - if (lapic_in_kernel(vcpu) && apic_x2apic_mode(vcpu->arch.apic) && 1557 + if (!lapic_in_kernel(vcpu) || !apic_x2apic_mode(vcpu->arch.apic)) 1558 + return 1; 1559 + 1560 + if (((data & APIC_SHORT_MASK) == APIC_DEST_NOSHORT) && 1558 1561 ((data & APIC_DEST_MASK) == APIC_DEST_PHYSICAL) && 1559 1562 ((data & APIC_MODE_MASK) == APIC_DM_FIXED)) { 1560 1563 ··· 2447 2444 vcpu->hv_clock.tsc_timestamp = tsc_timestamp; 2448 2445 vcpu->hv_clock.system_time = kernel_ns + v->kvm->arch.kvmclock_offset; 2449 2446 vcpu->last_guest_tsc = tsc_timestamp; 2450 - WARN_ON((s64)vcpu->hv_clock.system_time < 0); 2451 2447 2452 2448 /* If the host uses TSC clocksource, then it is stable */ 2453 2449 pvclock_flags = 0;
+3 -20
drivers/base/memory.c
··· 97 97 } 98 98 99 99 /* 100 - * Show whether the memory block is likely to be offlineable (or is already 101 - * offline). Once offline, the memory block could be removed. The return 102 - * value does, however, not indicate that there is a way to remove the 103 - * memory block. 100 + * Legacy interface that we cannot remove. Always indicate "removable" 101 + * with CONFIG_MEMORY_HOTREMOVE - bad heuristic. 104 102 */ 105 103 static ssize_t removable_show(struct device *dev, struct device_attribute *attr, 106 104 char *buf) 107 105 { 108 - struct memory_block *mem = to_memory_block(dev); 109 - unsigned long pfn; 110 - int ret = 1, i; 111 - 112 - if (mem->state != MEM_ONLINE) 113 - goto out; 114 - 115 - for (i = 0; i < sections_per_block; i++) { 116 - if (!present_section_nr(mem->start_section_nr + i)) 117 - continue; 118 - pfn = section_nr_to_pfn(mem->start_section_nr + i); 119 - ret &= is_mem_section_removable(pfn, PAGES_PER_SECTION); 120 - } 121 - 122 - out: 123 - return sprintf(buf, "%d\n", ret); 106 + return sprintf(buf, "%d\n", (int)IS_ENABLED(CONFIG_MEMORY_HOTREMOVE)); 124 107 } 125 108 126 109 /*
+1 -1
drivers/bus/sunxi-rsb.c
··· 345 345 if (ret) 346 346 goto unlock; 347 347 348 - *buf = readl(rsb->regs + RSB_DATA); 348 + *buf = readl(rsb->regs + RSB_DATA) & GENMASK(len * 8 - 1, 0); 349 349 350 350 unlock: 351 351 mutex_unlock(&rsb->lock);
+2 -1
drivers/bus/ti-sysc.c
··· 1266 1266 SYSC_QUIRK("gpu", 0x50000000, 0x14, -1, -1, 0x00010201, 0xffffffff, 0), 1267 1267 SYSC_QUIRK("gpu", 0x50000000, 0xfe00, 0xfe10, -1, 0x40000000 , 0xffffffff, 1268 1268 SYSC_MODULE_QUIRK_SGX), 1269 + SYSC_QUIRK("lcdc", 0, 0, 0x54, -1, 0x4f201000, 0xffffffff, 1270 + SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY), 1269 1271 SYSC_QUIRK("usb_otg_hs", 0, 0x400, 0x404, 0x408, 0x00000050, 1270 1272 0xffffffff, SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY), 1271 1273 SYSC_QUIRK("usb_otg_hs", 0, 0, 0x10, -1, 0x4ea2080d, 0xffffffff, ··· 1296 1294 SYSC_QUIRK("gpu", 0, 0xfe00, 0xfe10, -1, 0x40000000 , 0xffffffff, 0), 1297 1295 SYSC_QUIRK("hsi", 0, 0, 0x10, 0x14, 0x50043101, 0xffffffff, 0), 1298 1296 SYSC_QUIRK("iss", 0, 0, 0x10, -1, 0x40000101, 0xffffffff, 0), 1299 - SYSC_QUIRK("lcdc", 0, 0, 0x54, -1, 0x4f201000, 0xffffffff, 0), 1300 1297 SYSC_QUIRK("mcasp", 0, 0, 0x4, -1, 0x44306302, 0xffffffff, 0), 1301 1298 SYSC_QUIRK("mcasp", 0, 0, 0x4, -1, 0x44307b02, 0xffffffff, 0), 1302 1299 SYSC_QUIRK("mcbsp", 0, -1, 0x8c, -1, 0, 0, 0),
+2 -2
drivers/clk/imx/clk-imx8mp.c
··· 560 560 hws[IMX8MP_CLK_MEDIA_AXI] = imx8m_clk_hw_composite("media_axi", imx8mp_media_axi_sels, ccm_base + 0x8a00); 561 561 hws[IMX8MP_CLK_MEDIA_APB] = imx8m_clk_hw_composite("media_apb", imx8mp_media_apb_sels, ccm_base + 0x8a80); 562 562 hws[IMX8MP_CLK_HDMI_APB] = imx8m_clk_hw_composite("hdmi_apb", imx8mp_media_apb_sels, ccm_base + 0x8b00); 563 - hws[IMX8MP_CLK_HDMI_AXI] = imx8m_clk_hw_composite("hdmi_axi", imx8mp_media_apb_sels, ccm_base + 0x8b80); 563 + hws[IMX8MP_CLK_HDMI_AXI] = imx8m_clk_hw_composite("hdmi_axi", imx8mp_media_axi_sels, ccm_base + 0x8b80); 564 564 hws[IMX8MP_CLK_GPU_AXI] = imx8m_clk_hw_composite("gpu_axi", imx8mp_gpu_axi_sels, ccm_base + 0x8c00); 565 565 hws[IMX8MP_CLK_GPU_AHB] = imx8m_clk_hw_composite("gpu_ahb", imx8mp_gpu_ahb_sels, ccm_base + 0x8c80); 566 566 hws[IMX8MP_CLK_NOC] = imx8m_clk_hw_composite_critical("noc", imx8mp_noc_sels, ccm_base + 0x8d00); ··· 686 686 hws[IMX8MP_CLK_CAN1_ROOT] = imx_clk_hw_gate2("can1_root_clk", "can1", ccm_base + 0x4350, 0); 687 687 hws[IMX8MP_CLK_CAN2_ROOT] = imx_clk_hw_gate2("can2_root_clk", "can2", ccm_base + 0x4360, 0); 688 688 hws[IMX8MP_CLK_SDMA1_ROOT] = imx_clk_hw_gate4("sdma1_root_clk", "ipg_root", ccm_base + 0x43a0, 0); 689 - hws[IMX8MP_CLK_ENET_QOS_ROOT] = imx_clk_hw_gate4("enet_qos_root_clk", "enet_axi", ccm_base + 0x43b0, 0); 689 + hws[IMX8MP_CLK_ENET_QOS_ROOT] = imx_clk_hw_gate4("enet_qos_root_clk", "sim_enet_root_clk", ccm_base + 0x43b0, 0); 690 690 hws[IMX8MP_CLK_SIM_ENET_ROOT] = imx_clk_hw_gate4("sim_enet_root_clk", "enet_axi", ccm_base + 0x4400, 0); 691 691 hws[IMX8MP_CLK_GPU2D_ROOT] = imx_clk_hw_gate4("gpu2d_root_clk", "gpu2d_div", ccm_base + 0x4450, 0); 692 692 hws[IMX8MP_CLK_GPU3D_ROOT] = imx_clk_hw_gate4("gpu3d_root_clk", "gpu3d_core_div", ccm_base + 0x4460, 0);
+4 -4
drivers/clk/imx/clk-scu.c
··· 43 43 __le32 rate; 44 44 __le16 resource; 45 45 u8 clk; 46 - } __packed; 46 + } __packed __aligned(4); 47 47 48 48 struct req_get_clock_rate { 49 49 __le16 resource; 50 50 u8 clk; 51 - } __packed; 51 + } __packed __aligned(4); 52 52 53 53 struct resp_get_clock_rate { 54 54 __le32 rate; ··· 84 84 struct req_get_clock_parent { 85 85 __le16 resource; 86 86 u8 clk; 87 - } __packed req; 87 + } __packed __aligned(4) req; 88 88 struct resp_get_clock_parent { 89 89 u8 parent; 90 90 } resp; ··· 121 121 u8 clk; 122 122 u8 enable; 123 123 u8 autog; 124 - } __packed; 124 + } __packed __aligned(4); 125 125 126 126 static inline struct clk_scu *to_clk_scu(struct clk_hw *hw) 127 127 {
+1 -1
drivers/clk/ti/clk-43xx.c
··· 78 78 }; 79 79 80 80 static const struct omap_clkctrl_reg_data am4_l4_rtc_clkctrl_regs[] __initconst = { 81 - { AM4_L4_RTC_RTC_CLKCTRL, NULL, CLKF_SW_SUP, "clk_32768_ck" }, 81 + { AM4_L4_RTC_RTC_CLKCTRL, NULL, CLKF_SW_SUP, "clkdiv32k_ick" }, 82 82 { 0 }, 83 83 }; 84 84
+4 -2
drivers/clocksource/hyperv_timer.c
··· 343 343 344 344 static u64 read_hv_sched_clock_tsc(void) 345 345 { 346 - return read_hv_clock_tsc() - hv_sched_clock_offset; 346 + return (read_hv_clock_tsc() - hv_sched_clock_offset) * 347 + (NSEC_PER_SEC / HV_CLOCK_HZ); 347 348 } 348 349 349 350 static void suspend_hv_clock_tsc(struct clocksource *arg) ··· 399 398 400 399 static u64 read_hv_sched_clock_msr(void) 401 400 { 402 - return read_hv_clock_msr() - hv_sched_clock_offset; 401 + return (read_hv_clock_msr() - hv_sched_clock_offset) * 402 + (NSEC_PER_SEC / HV_CLOCK_HZ); 403 403 } 404 404 405 405 static struct clocksource hyperv_cs_msr = {
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 974 974 /* Map SG to device */ 975 975 r = -ENOMEM; 976 976 nents = dma_map_sg(adev->dev, ttm->sg->sgl, ttm->sg->nents, direction); 977 - if (nents != ttm->sg->nents) 977 + if (nents == 0) 978 978 goto release_sg; 979 979 980 980 /* convert SG to linear array of pages and dma addresses */
+1 -1
drivers/gpu/drm/drm_prime.c
··· 967 967 968 968 index = 0; 969 969 for_each_sg(sgt->sgl, sg, sgt->nents, count) { 970 - len = sg->length; 970 + len = sg_dma_len(sg); 971 971 page = sg_page(sg); 972 972 addr = sg_dma_address(sg); 973 973
+1 -1
drivers/gpu/drm/radeon/radeon_ttm.c
··· 528 528 529 529 r = -ENOMEM; 530 530 nents = dma_map_sg(rdev->dev, ttm->sg->sgl, ttm->sg->nents, direction); 531 - if (nents != ttm->sg->nents) 531 + if (nents == 0) 532 532 goto release_sg; 533 533 534 534 drm_prime_sg_to_page_addr_arrays(ttm->sg, ttm->pages,
+2
drivers/gpu/drm/scheduler/sched_main.c
··· 661 661 662 662 trace_drm_sched_process_job(s_fence); 663 663 664 + dma_fence_get(&s_fence->finished); 664 665 drm_sched_fence_finished(s_fence); 666 + dma_fence_put(&s_fence->finished); 665 667 wake_up_interruptible(&sched->wake_up_worker); 666 668 } 667 669
+1
drivers/i2c/busses/i2c-hix5hd2.c
··· 477 477 i2c_del_adapter(&priv->adap); 478 478 pm_runtime_disable(priv->dev); 479 479 pm_runtime_set_suspended(priv->dev); 480 + clk_disable_unprepare(priv->clk); 480 481 481 482 return 0; 482 483 }
+7 -11
drivers/i2c/busses/i2c-nvidia-gpu.c
··· 8 8 #include <linux/delay.h> 9 9 #include <linux/i2c.h> 10 10 #include <linux/interrupt.h> 11 + #include <linux/iopoll.h> 11 12 #include <linux/module.h> 12 13 #include <linux/pci.h> 13 14 #include <linux/platform_device.h> ··· 76 75 77 76 static int gpu_i2c_check_status(struct gpu_i2c_dev *i2cd) 78 77 { 79 - unsigned long target = jiffies + msecs_to_jiffies(1000); 80 78 u32 val; 79 + int ret; 81 80 82 - do { 83 - val = readl(i2cd->regs + I2C_MST_CNTL); 84 - if (!(val & I2C_MST_CNTL_CYCLE_TRIGGER)) 85 - break; 86 - if ((val & I2C_MST_CNTL_STATUS) != 87 - I2C_MST_CNTL_STATUS_BUS_BUSY) 88 - break; 89 - usleep_range(500, 600); 90 - } while (time_is_after_jiffies(target)); 81 + ret = readl_poll_timeout(i2cd->regs + I2C_MST_CNTL, val, 82 + !(val & I2C_MST_CNTL_CYCLE_TRIGGER) || 83 + (val & I2C_MST_CNTL_STATUS) != I2C_MST_CNTL_STATUS_BUS_BUSY, 84 + 500, 1000 * USEC_PER_MSEC); 91 85 92 - if (time_is_before_jiffies(target)) { 86 + if (ret) { 93 87 dev_err(i2cd->dev, "i2c timeout error %x\n", val); 94 88 return -ETIMEDOUT; 95 89 }
+1 -1
drivers/i2c/busses/i2c-pca-platform.c
··· 140 140 int ret = 0; 141 141 int irq; 142 142 143 - irq = platform_get_irq(pdev, 0); 143 + irq = platform_get_irq_optional(pdev, 0); 144 144 /* If irq is 0, we do polling. */ 145 145 if (irq < 0) 146 146 irq = 0;
+1
drivers/i2c/busses/i2c-st.c
··· 434 434 /** 435 435 * st_i2c_rd_fill_tx_fifo() - Fill the Tx FIFO in read mode 436 436 * @i2c_dev: Controller's private data 437 + * @max: Maximum amount of data to fill into the Tx FIFO 437 438 * 438 439 * This functions fills the Tx FIFO with fixed pattern when 439 440 * in read mode to trigger clock.
+3 -1
drivers/infiniband/core/device.c
··· 896 896 cdev->dev.parent = device->dev.parent; 897 897 rdma_init_coredev(cdev, device, read_pnet(&rnet->net)); 898 898 cdev->dev.release = compatdev_release; 899 - dev_set_name(&cdev->dev, "%s", dev_name(&device->dev)); 899 + ret = dev_set_name(&cdev->dev, "%s", dev_name(&device->dev)); 900 + if (ret) 901 + goto add_err; 900 902 901 903 ret = device_add(&cdev->dev); 902 904 if (ret)
+5 -1
drivers/infiniband/core/nldev.c
··· 918 918 919 919 nla_strlcpy(name, tb[RDMA_NLDEV_ATTR_DEV_NAME], 920 920 IB_DEVICE_NAME_MAX); 921 + if (strlen(name) == 0) { 922 + err = -EINVAL; 923 + goto done; 924 + } 921 925 err = ib_device_rename(device, name); 922 926 goto done; 923 927 } ··· 1518 1514 1519 1515 nla_strlcpy(ibdev_name, tb[RDMA_NLDEV_ATTR_DEV_NAME], 1520 1516 sizeof(ibdev_name)); 1521 - if (strchr(ibdev_name, '%')) 1517 + if (strchr(ibdev_name, '%') || strlen(ibdev_name) == 0) 1522 1518 return -EINVAL; 1523 1519 1524 1520 nla_strlcpy(type, tb[RDMA_NLDEV_ATTR_LINK_TYPE], sizeof(type));
+3 -8
drivers/infiniband/core/security.c
··· 349 349 else if (qp_pps) 350 350 new_pps->main.pkey_index = qp_pps->main.pkey_index; 351 351 352 - if ((qp_attr_mask & IB_QP_PKEY_INDEX) && (qp_attr_mask & IB_QP_PORT)) 352 + if (((qp_attr_mask & IB_QP_PKEY_INDEX) && 353 + (qp_attr_mask & IB_QP_PORT)) || 354 + (qp_pps && qp_pps->main.state != IB_PORT_PKEY_NOT_VALID)) 353 355 new_pps->main.state = IB_PORT_PKEY_VALID; 354 - 355 - if (!(qp_attr_mask & (IB_QP_PKEY_INDEX | IB_QP_PORT)) && qp_pps) { 356 - new_pps->main.port_num = qp_pps->main.port_num; 357 - new_pps->main.pkey_index = qp_pps->main.pkey_index; 358 - if (qp_pps->main.state != IB_PORT_PKEY_NOT_VALID) 359 - new_pps->main.state = IB_PORT_PKEY_VALID; 360 - } 361 356 362 357 if (qp_attr_mask & IB_QP_ALT_PATH) { 363 358 new_pps->alt.port_num = qp_attr->alt_port_num;
+1 -1
drivers/infiniband/core/umem_odp.c
··· 275 275 mmu_interval_notifier_remove(&umem_odp->notifier); 276 276 kvfree(umem_odp->dma_list); 277 277 kvfree(umem_odp->page_list); 278 - put_pid(umem_odp->tgid); 279 278 } 279 + put_pid(umem_odp->tgid); 280 280 kfree(umem_odp); 281 281 } 282 282 EXPORT_SYMBOL(ib_umem_odp_release);
+22 -11
drivers/infiniband/core/user_mad.c
··· 1129 1129 .llseek = no_llseek, 1130 1130 }; 1131 1131 1132 + static struct ib_umad_port *get_port(struct ib_device *ibdev, 1133 + struct ib_umad_device *umad_dev, 1134 + unsigned int port) 1135 + { 1136 + if (!umad_dev) 1137 + return ERR_PTR(-EOPNOTSUPP); 1138 + if (!rdma_is_port_valid(ibdev, port)) 1139 + return ERR_PTR(-EINVAL); 1140 + if (!rdma_cap_ib_mad(ibdev, port)) 1141 + return ERR_PTR(-EOPNOTSUPP); 1142 + 1143 + return &umad_dev->ports[port - rdma_start_port(ibdev)]; 1144 + } 1145 + 1132 1146 static int ib_umad_get_nl_info(struct ib_device *ibdev, void *client_data, 1133 1147 struct ib_client_nl_info *res) 1134 1148 { 1135 - struct ib_umad_device *umad_dev = client_data; 1149 + struct ib_umad_port *port = get_port(ibdev, client_data, res->port); 1136 1150 1137 - if (!rdma_is_port_valid(ibdev, res->port)) 1138 - return -EINVAL; 1151 + if (IS_ERR(port)) 1152 + return PTR_ERR(port); 1139 1153 1140 1154 res->abi = IB_USER_MAD_ABI_VERSION; 1141 - res->cdev = &umad_dev->ports[res->port - rdma_start_port(ibdev)].dev; 1142 - 1155 + res->cdev = &port->dev; 1143 1156 return 0; 1144 1157 } 1145 1158 ··· 1167 1154 static int ib_issm_get_nl_info(struct ib_device *ibdev, void *client_data, 1168 1155 struct ib_client_nl_info *res) 1169 1156 { 1170 - struct ib_umad_device *umad_dev = 1171 - ib_get_client_data(ibdev, &umad_client); 1157 + struct ib_umad_port *port = get_port(ibdev, client_data, res->port); 1172 1158 1173 - if (!rdma_is_port_valid(ibdev, res->port)) 1174 - return -EINVAL; 1159 + if (IS_ERR(port)) 1160 + return PTR_ERR(port); 1175 1161 1176 1162 res->abi = IB_USER_MAD_ABI_VERSION; 1177 - res->cdev = &umad_dev->ports[res->port - rdma_start_port(ibdev)].sm_dev; 1178 - 1163 + res->cdev = &port->sm_dev; 1179 1164 return 0; 1180 1165 } 1181 1166
+22 -3
drivers/infiniband/hw/hfi1/user_sdma.c
··· 141 141 */ 142 142 xchg(&pq->state, SDMA_PKT_Q_DEFERRED); 143 143 if (list_empty(&pq->busy.list)) { 144 + pq->busy.lock = &sde->waitlock; 144 145 iowait_get_priority(&pq->busy); 145 146 iowait_queue(pkts_sent, &pq->busy, &sde->dmawait); 146 147 } ··· 156 155 { 157 156 struct hfi1_user_sdma_pkt_q *pq = 158 157 container_of(wait, struct hfi1_user_sdma_pkt_q, busy); 158 + pq->busy.lock = NULL; 159 159 xchg(&pq->state, SDMA_PKT_Q_ACTIVE); 160 160 wake_up(&wait->wait_dma); 161 161 }; ··· 258 256 return ret; 259 257 } 260 258 259 + static void flush_pq_iowait(struct hfi1_user_sdma_pkt_q *pq) 260 + { 261 + unsigned long flags; 262 + seqlock_t *lock = pq->busy.lock; 263 + 264 + if (!lock) 265 + return; 266 + write_seqlock_irqsave(lock, flags); 267 + if (!list_empty(&pq->busy.list)) { 268 + list_del_init(&pq->busy.list); 269 + pq->busy.lock = NULL; 270 + } 271 + write_sequnlock_irqrestore(lock, flags); 272 + } 273 + 261 274 int hfi1_user_sdma_free_queues(struct hfi1_filedata *fd, 262 275 struct hfi1_ctxtdata *uctxt) 263 276 { ··· 298 281 kfree(pq->reqs); 299 282 kfree(pq->req_in_use); 300 283 kmem_cache_destroy(pq->txreq_cache); 284 + flush_pq_iowait(pq); 301 285 kfree(pq); 302 286 } else { 303 287 spin_unlock(&fd->pq_rcu_lock); ··· 605 587 if (ret < 0) { 606 588 if (ret != -EBUSY) 607 589 goto free_req; 608 - wait_event_interruptible_timeout( 590 + if (wait_event_interruptible_timeout( 609 591 pq->busy.wait_dma, 610 - (pq->state == SDMA_PKT_Q_ACTIVE), 592 + pq->state == SDMA_PKT_Q_ACTIVE, 611 593 msecs_to_jiffies( 612 - SDMA_IOWAIT_TIMEOUT)); 594 + SDMA_IOWAIT_TIMEOUT)) <= 0) 595 + flush_pq_iowait(pq); 613 596 } 614 597 } 615 598 *count += idx;
+25 -2
drivers/infiniband/hw/mlx5/cq.c
··· 330 330 dump_cqe(dev, cqe); 331 331 } 332 332 333 + static void handle_atomics(struct mlx5_ib_qp *qp, struct mlx5_cqe64 *cqe64, 334 + u16 tail, u16 head) 335 + { 336 + u16 idx; 337 + 338 + do { 339 + idx = tail & (qp->sq.wqe_cnt - 1); 340 + if (idx == head) 341 + break; 342 + 343 + tail = qp->sq.w_list[idx].next; 344 + } while (1); 345 + tail = qp->sq.w_list[idx].next; 346 + qp->sq.last_poll = tail; 347 + } 348 + 333 349 static void free_cq_buf(struct mlx5_ib_dev *dev, struct mlx5_ib_cq_buf *buf) 334 350 { 335 351 mlx5_frag_buf_free(dev->mdev, &buf->frag_buf); ··· 384 368 } 385 369 386 370 static void sw_comp(struct mlx5_ib_qp *qp, int num_entries, struct ib_wc *wc, 387 - int *npolled, int is_send) 371 + int *npolled, bool is_send) 388 372 { 389 373 struct mlx5_ib_wq *wq; 390 374 unsigned int cur; ··· 399 383 return; 400 384 401 385 for (i = 0; i < cur && np < num_entries; i++) { 402 - wc->wr_id = wq->wrid[wq->tail & (wq->wqe_cnt - 1)]; 386 + unsigned int idx; 387 + 388 + idx = (is_send) ? wq->last_poll : wq->tail; 389 + idx &= (wq->wqe_cnt - 1); 390 + wc->wr_id = wq->wrid[idx]; 403 391 wc->status = IB_WC_WR_FLUSH_ERR; 404 392 wc->vendor_err = MLX5_CQE_SYNDROME_WR_FLUSH_ERR; 405 393 wq->tail++; 394 + if (is_send) 395 + wq->last_poll = wq->w_list[idx].next; 406 396 np++; 407 397 wc->qp = &qp->ibqp; 408 398 wc++; ··· 495 473 wqe_ctr = be16_to_cpu(cqe64->wqe_counter); 496 474 idx = wqe_ctr & (wq->wqe_cnt - 1); 497 475 handle_good_req(wc, cqe64, wq, idx); 476 + handle_atomics(*cur_qp, cqe64, wq->last_poll, idx); 498 477 wc->wr_id = wq->wrid[idx]; 499 478 wq->tail = wq->wqe_head[idx] + 1; 500 479 wc->status = IB_WC_SUCCESS;
+3 -2
drivers/infiniband/hw/mlx5/main.c
··· 5723 5723 const struct mlx5_ib_counters *cnts = 5724 5724 get_counters(dev, counter->port - 1); 5725 5725 5726 - /* Q counters are in the beginning of all counters */ 5727 5726 return rdma_alloc_hw_stats_struct(cnts->names, 5728 - cnts->num_q_counters, 5727 + cnts->num_q_counters + 5728 + cnts->num_cong_counters + 5729 + cnts->num_ext_ppcnt_counters, 5729 5730 RDMA_HW_STATS_DEFAULT_LIFESPAN); 5730 5731 } 5731 5732
+1
drivers/infiniband/hw/mlx5/mlx5_ib.h
··· 288 288 unsigned head; 289 289 unsigned tail; 290 290 u16 cur_post; 291 + u16 last_poll; 291 292 void *cur_edge; 292 293 }; 293 294
+5
drivers/infiniband/hw/mlx5/qp.c
··· 3775 3775 qp->sq.cur_post = 0; 3776 3776 if (qp->sq.wqe_cnt) 3777 3777 qp->sq.cur_edge = get_sq_edge(&qp->sq, 0); 3778 + qp->sq.last_poll = 0; 3778 3779 qp->db.db[MLX5_RCV_DBR] = 0; 3779 3780 qp->db.db[MLX5_SND_DBR] = 0; 3780 3781 } ··· 6204 6203 min_resp_len = offsetof(typeof(resp), reserved) + sizeof(resp.reserved); 6205 6204 if (udata->outlen && udata->outlen < min_resp_len) 6206 6205 return ERR_PTR(-EINVAL); 6206 + 6207 + if (!capable(CAP_SYS_RAWIO) && 6208 + init_attr->create_flags & IB_WQ_FLAGS_DELAY_DROP) 6209 + return ERR_PTR(-EPERM); 6207 6210 6208 6211 dev = to_mdev(pd->device); 6209 6212 switch (init_attr->wq_type) {
+1 -1
drivers/infiniband/sw/rdmavt/cq.c
··· 327 327 if (cq->ip) 328 328 kref_put(&cq->ip->ref, rvt_release_mmap_info); 329 329 else 330 - vfree(cq->queue); 330 + vfree(cq->kqueue); 331 331 } 332 332 333 333 /**
+1
drivers/input/input.c
··· 190 190 input_value_sync 191 191 }; 192 192 193 + input_set_timestamp(dev, ktime_get()); 193 194 input_pass_values(dev, vals, ARRAY_SIZE(vals)); 194 195 195 196 if (dev->rep[REP_PERIOD])
+11
drivers/input/keyboard/tm2-touchkey.c
··· 75 75 .cmd_led_off = ARIES_TOUCHKEY_CMD_LED_OFF, 76 76 }; 77 77 78 + static const struct touchkey_variant tc360_touchkey_variant = { 79 + .keycode_reg = 0x00, 80 + .base_reg = 0x00, 81 + .fixed_regulator = true, 82 + .cmd_led_on = TM2_TOUCHKEY_CMD_LED_ON, 83 + .cmd_led_off = TM2_TOUCHKEY_CMD_LED_OFF, 84 + }; 85 + 78 86 static int tm2_touchkey_led_brightness_set(struct led_classdev *led_dev, 79 87 enum led_brightness brightness) 80 88 { ··· 335 327 }, { 336 328 .compatible = "cypress,aries-touchkey", 337 329 .data = &aries_touchkey_variant, 330 + }, { 331 + .compatible = "coreriver,tc360-touchkey", 332 + .data = &tc360_touchkey_variant, 338 333 }, 339 334 { }, 340 335 };
+1
drivers/input/mouse/synaptics.c
··· 186 186 "SYN3052", /* HP EliteBook 840 G4 */ 187 187 "SYN3221", /* HP 15-ay000 */ 188 188 "SYN323d", /* HP Spectre X360 13-w013dx */ 189 + "SYN3257", /* HP Envy 13-ad105ng */ 189 190 NULL 190 191 }; 191 192
+2 -2
drivers/input/rmi4/rmi_f11.c
··· 1203 1203 * If distance threshold values are set, switch to reduced reporting 1204 1204 * mode so they actually get used by the controller. 1205 1205 */ 1206 - if (ctrl->ctrl0_11[RMI_F11_DELTA_X_THRESHOLD] || 1207 - ctrl->ctrl0_11[RMI_F11_DELTA_Y_THRESHOLD]) { 1206 + if (sensor->axis_align.delta_x_threshold || 1207 + sensor->axis_align.delta_y_threshold) { 1208 1208 ctrl->ctrl0_11[0] &= ~RMI_F11_REPORT_MODE_MASK; 1209 1209 ctrl->ctrl0_11[0] |= RMI_F11_REPORT_MODE_REDUCED; 1210 1210 }
+4 -4
drivers/input/touchscreen/raydium_i2c_ts.c
··· 432 432 return 0; 433 433 } 434 434 435 - static bool raydium_i2c_boot_trigger(struct i2c_client *client) 435 + static int raydium_i2c_boot_trigger(struct i2c_client *client) 436 436 { 437 437 static const u8 cmd[7][6] = { 438 438 { 0x08, 0x0C, 0x09, 0x00, 0x50, 0xD7 }, ··· 457 457 } 458 458 } 459 459 460 - return false; 460 + return 0; 461 461 } 462 462 463 - static bool raydium_i2c_fw_trigger(struct i2c_client *client) 463 + static int raydium_i2c_fw_trigger(struct i2c_client *client) 464 464 { 465 465 static const u8 cmd[5][11] = { 466 466 { 0, 0x09, 0x71, 0x0C, 0x09, 0x00, 0x50, 0xD7, 0, 0, 0 }, ··· 483 483 } 484 484 } 485 485 486 - return false; 486 + return 0; 487 487 } 488 488 489 489 static int raydium_i2c_check_path(struct i2c_client *client)
+4 -4
drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c
··· 648 648 return 0; 649 649 650 650 err_erif_unresolve: 651 - list_for_each_entry_from_reverse(erve, &mr_vif->route_evif_list, 652 - vif_node) 651 + list_for_each_entry_continue_reverse(erve, &mr_vif->route_evif_list, 652 + vif_node) 653 653 mlxsw_sp_mr_route_evif_unresolve(mr_table, erve); 654 654 err_irif_unresolve: 655 - list_for_each_entry_from_reverse(irve, &mr_vif->route_ivif_list, 656 - vif_node) 655 + list_for_each_entry_continue_reverse(irve, &mr_vif->route_ivif_list, 656 + vif_node) 657 657 mlxsw_sp_mr_route_ivif_unresolve(mr_table, irve); 658 658 mr_vif->rif = NULL; 659 659 return err;
+52 -4
drivers/net/ethernet/micrel/ks8851_mll.c
··· 157 157 */ 158 158 159 159 /** 160 + * ks_check_endian - Check whether endianness of the bus is correct 161 + * @ks : The chip information 162 + * 163 + * The KS8851-16MLL EESK pin allows selecting the endianness of the 16bit 164 + * bus. To maintain optimum performance, the bus endianness should be set 165 + * such that it matches the endianness of the CPU. 166 + */ 167 + 168 + static int ks_check_endian(struct ks_net *ks) 169 + { 170 + u16 cider; 171 + 172 + /* 173 + * Read CIDER register first, however read it the "wrong" way around. 174 + * If the endian strap on the KS8851-16MLL is incorrect and the chip 175 + * is operating in different endianness than the CPU, then the meaning 176 + * of BE[3:0] byte-enable bits is also swapped such that: 177 + * BE[3,2,1,0] becomes BE[1,0,3,2] 178 + * 179 + * Luckily for us, the byte-enable bits are the top four MSbits of 180 + * the address register and the CIDER register is at offset 0xc0. 181 + * Hence, by reading address 0xc0c0, which is not impacted by endian 182 + * swapping, we assert either BE[3:2] or BE[1:0] while reading the 183 + * CIDER register. 184 + * 185 + * If the bus configuration is correct, reading 0xc0c0 asserts 186 + * BE[3:2] and this read returns 0x0000, because to read register 187 + * with bottom two LSbits of address set to 0, BE[1:0] must be 188 + * asserted. 189 + * 190 + * If the bus configuration is NOT correct, reading 0xc0c0 asserts 191 + * BE[1:0] and this read returns non-zero 0x8872 value.
192 + */ 193 + iowrite16(BE3 | BE2 | KS_CIDER, ks->hw_addr_cmd); 194 + cider = ioread16(ks->hw_addr); 195 + if (!cider) 196 + return 0; 197 + 198 + netdev_err(ks->netdev, "incorrect EESK endian strap setting\n"); 199 + 200 + return -EINVAL; 201 + } 202 + 203 + /** 160 204 * ks_rdreg16 - read 16 bit register from device 161 205 * @ks : The chip information 162 206 * @offset: The register address ··· 210 166 211 167 static u16 ks_rdreg16(struct ks_net *ks, int offset) 212 168 { 213 - ks->cmd_reg_cache = (u16)offset | ((BE3 | BE2) >> (offset & 0x02)); 169 + ks->cmd_reg_cache = (u16)offset | ((BE1 | BE0) << (offset & 0x02)); 214 170 iowrite16(ks->cmd_reg_cache, ks->hw_addr_cmd); 215 171 return ioread16(ks->hw_addr); 216 172 } ··· 225 181 226 182 static void ks_wrreg16(struct ks_net *ks, int offset, u16 value) 227 183 { 228 - ks->cmd_reg_cache = (u16)offset | ((BE3 | BE2) >> (offset & 0x02)); 184 + ks->cmd_reg_cache = (u16)offset | ((BE1 | BE0) << (offset & 0x02)); 229 185 iowrite16(ks->cmd_reg_cache, ks->hw_addr_cmd); 230 186 iowrite16(value, ks->hw_addr); 231 187 } ··· 241 197 { 242 198 len >>= 1; 243 199 while (len--) 244 - *wptr++ = be16_to_cpu(ioread16(ks->hw_addr)); 200 + *wptr++ = (u16)ioread16(ks->hw_addr); 245 201 } 246 202 247 203 /** ··· 255 211 { 256 212 len >>= 1; 257 213 while (len--) 258 - iowrite16(cpu_to_be16(*wptr++), ks->hw_addr); 214 + iowrite16(*wptr++, ks->hw_addr); 259 215 } 260 216 261 217 static void ks_disable_int(struct ks_net *ks) ··· 1261 1217 err = PTR_ERR(ks->hw_addr_cmd); 1262 1218 goto err_free; 1263 1219 } 1220 + 1221 + err = ks_check_endian(ks); 1222 + if (err) 1223 + goto err_free; 1264 1224 1265 1225 netdev->irq = platform_get_irq(pdev, 0); 1266 1226
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_init.c
··· 1720 1720 1721 1721 ahw->reset.seq_error = 0; 1722 1722 ahw->reset.buff = kzalloc(QLC_83XX_RESTART_TEMPLATE_SIZE, GFP_KERNEL); 1723 - if (p_dev->ahw->reset.buff == NULL) 1723 + if (ahw->reset.buff == NULL) 1724 1724 return -ENOMEM; 1725 1725 1726 1726 p_buff = p_dev->ahw->reset.buff;
+7 -9
drivers/net/ethernet/realtek/r8169_main.c
··· 5182 5182 if (!tp->phydev) { 5183 5183 mdiobus_unregister(new_bus); 5184 5184 return -ENODEV; 5185 + } else if (!tp->phydev->drv) { 5186 + /* Most chip versions fail with the genphy driver. 5187 + * Therefore ensure that the dedicated PHY driver is loaded. 5188 + */ 5189 + dev_err(&pdev->dev, "realtek.ko not loaded, maybe it needs to be added to initramfs?\n"); 5190 + mdiobus_unregister(new_bus); 5191 + return -EUNATCH; 5185 5192 } 5186 5193 5187 5194 /* PHY will be woken up in rtl_open() */ ··· 5350 5343 enum mac_version chipset; 5351 5344 struct net_device *dev; 5352 5345 u16 xid; 5353 - 5354 - /* Some tools for creating an initramfs don't consider softdeps, then 5355 - * r8169.ko may be in initramfs, but realtek.ko not. Then the generic 5356 - * PHY driver is used that doesn't work with most chip versions. 5357 - */ 5358 - if (!driver_find("RTL8201CP Ethernet", &mdio_bus_type)) { 5359 - dev_err(&pdev->dev, "realtek.ko not loaded, maybe it needs to be added to initramfs?\n"); 5360 - return -ENOENT; 5361 - } 5362 5346 5363 5347 dev = devm_alloc_etherdev(&pdev->dev, sizeof (*tp)); 5364 5348 if (!dev)
+2 -2
drivers/scsi/qla2xxx/qla_os.c
··· 864 864 goto qc24_fail_command; 865 865 } 866 866 867 - if (atomic_read(&fcport->state) != FCS_ONLINE) { 867 + if (atomic_read(&fcport->state) != FCS_ONLINE || fcport->deleted) { 868 868 if (atomic_read(&fcport->state) == FCS_DEVICE_DEAD || 869 869 atomic_read(&base_vha->loop_state) == LOOP_DEAD) { 870 870 ql_dbg(ql_dbg_io, vha, 0x3005, ··· 946 946 goto qc24_fail_command; 947 947 } 948 948 949 - if (atomic_read(&fcport->state) != FCS_ONLINE) { 949 + if (atomic_read(&fcport->state) != FCS_ONLINE || fcport->deleted) { 950 950 if (atomic_read(&fcport->state) == FCS_DEVICE_DEAD || 951 951 atomic_read(&base_vha->loop_state) == LOOP_DEAD) { 952 952 ql_dbg(ql_dbg_io, vha, 0x3077,
+3 -1
drivers/scsi/sd.c
··· 3169 3169 if (sd_validate_opt_xfer_size(sdkp, dev_max)) { 3170 3170 q->limits.io_opt = logical_to_bytes(sdp, sdkp->opt_xfer_blocks); 3171 3171 rw_max = logical_to_sectors(sdp, sdkp->opt_xfer_blocks); 3172 - } else 3172 + } else { 3173 + q->limits.io_opt = 0; 3173 3174 rw_max = min_not_zero(logical_to_sectors(sdp, dev_max), 3174 3175 (sector_t)BLK_DEF_MAX_SECTORS); 3176 + } 3175 3177 3176 3178 /* Do not exceed controller limit */ 3177 3179 rw_max = min(rw_max, queue_max_hw_sectors(q));
+4 -4
drivers/soc/fsl/dpio/dpio-driver.c
··· 233 233 goto err_allocate_irqs; 234 234 } 235 235 236 - err = register_dpio_irq_handlers(dpio_dev, desc.cpu); 237 - if (err) 238 - goto err_register_dpio_irq; 239 - 240 236 priv->io = dpaa2_io_create(&desc, dev); 241 237 if (!priv->io) { 242 238 dev_err(dev, "dpaa2_io_create failed\n"); 243 239 err = -ENOMEM; 244 240 goto err_dpaa2_io_create; 245 241 } 242 + 243 + err = register_dpio_irq_handlers(dpio_dev, desc.cpu); 244 + if (err) 245 + goto err_register_dpio_irq; 246 246 247 247 dev_info(dev, "probed\n"); 248 248 dev_dbg(dev, " receives_notifications = %d\n",
+1 -1
drivers/soc/samsung/exynos-chipid.c
··· 59 59 syscon = of_find_compatible_node(NULL, NULL, 60 60 "samsung,exynos4210-chipid"); 61 61 if (!syscon) 62 - return ENODEV; 62 + return -ENODEV; 63 63 64 64 regmap = device_node_to_regmap(syscon); 65 65 of_node_put(syscon);
+3
drivers/tee/amdtee/core.c
··· 139 139 u32 index = get_session_index(session); 140 140 struct amdtee_session *sess; 141 141 142 + if (index >= TEE_NUM_SESSIONS) 143 + return NULL; 144 + 142 145 list_for_each_entry(sess, &ctxdata->sess_list, list_node) 143 146 if (ta_handle == sess->ta_handle && 144 147 test_bit(index, sess->sess_mask))
+2
fs/afs/fs_probe.c
··· 145 145 read_lock(&server->fs_lock); 146 146 ac.alist = rcu_dereference_protected(server->addresses, 147 147 lockdep_is_held(&server->fs_lock)); 148 + afs_get_addrlist(ac.alist); 148 149 read_unlock(&server->fs_lock); 149 150 150 151 atomic_set(&server->probe_outstanding, ac.alist->nr_addrs); ··· 164 163 165 164 if (!in_progress) 166 165 afs_fs_probe_done(server); 166 + afs_put_addrlist(ac.alist); 167 167 return in_progress; 168 168 } 169 169
+11 -3
fs/ceph/file.c
··· 1415 1415 struct inode *inode = file_inode(file); 1416 1416 struct ceph_inode_info *ci = ceph_inode(inode); 1417 1417 struct ceph_fs_client *fsc = ceph_inode_to_client(inode); 1418 + struct ceph_osd_client *osdc = &fsc->client->osdc; 1418 1419 struct ceph_cap_flush *prealloc_cf; 1419 1420 ssize_t count, written = 0; 1420 1421 int err, want, got; 1421 1422 bool direct_lock = false; 1423 + u32 map_flags; 1424 + u64 pool_flags; 1422 1425 loff_t pos; 1423 1426 loff_t limit = max(i_size_read(inode), fsc->max_file_size); 1424 1427 ··· 1484 1481 goto out; 1485 1482 } 1486 1483 1487 - /* FIXME: not complete since it doesn't account for being at quota */ 1488 - if (ceph_osdmap_flag(&fsc->client->osdc, CEPH_OSDMAP_FULL)) { 1484 + down_read(&osdc->lock); 1485 + map_flags = osdc->osdmap->flags; 1486 + pool_flags = ceph_pg_pool_flags(osdc->osdmap, ci->i_layout.pool_id); 1487 + up_read(&osdc->lock); 1488 + if ((map_flags & CEPH_OSDMAP_FULL) || 1489 + (pool_flags & CEPH_POOL_FLAG_FULL)) { 1489 1490 err = -ENOSPC; 1490 1491 goto out; 1491 1492 } ··· 1582 1575 } 1583 1576 1584 1577 if (written >= 0) { 1585 - if (ceph_osdmap_flag(&fsc->client->osdc, CEPH_OSDMAP_NEARFULL)) 1578 + if ((map_flags & CEPH_OSDMAP_NEARFULL) || 1579 + (pool_flags & CEPH_POOL_FLAG_NEARFULL)) 1586 1580 iocb->ki_flags |= IOCB_DSYNC; 1587 1581 written = generic_write_sync(iocb, written); 1588 1582 }
+1
fs/ceph/snap.c
··· 1155 1155 pr_err("snapid map %llx -> %x still in use\n", 1156 1156 sm->snap, sm->dev); 1157 1157 } 1158 + kfree(sm); 1158 1159 } 1159 1160 }
+1
include/linux/bpf.h
··· 161 161 } 162 162 void copy_map_value_locked(struct bpf_map *map, void *dst, void *src, 163 163 bool lock_src); 164 + int bpf_obj_name_cpy(char *dst, const char *src, unsigned int size); 164 165 165 166 struct bpf_offload_dev; 166 167 struct bpf_offloaded_map;
+4 -3
include/linux/ceph/messenger.h
··· 175 175 #endif /* CONFIG_BLOCK */ 176 176 struct ceph_bvec_iter bvec_pos; 177 177 struct { 178 - struct page **pages; /* NOT OWNER. */ 178 + struct page **pages; 179 179 size_t length; /* total # bytes */ 180 180 unsigned int alignment; /* first page */ 181 + bool own_pages; 181 182 }; 182 183 struct ceph_pagelist *pagelist; 183 184 }; ··· 357 356 extern bool ceph_con_keepalive_expired(struct ceph_connection *con, 358 357 unsigned long interval); 359 358 360 - extern void ceph_msg_data_add_pages(struct ceph_msg *msg, struct page **pages, 361 - size_t length, size_t alignment); 359 + void ceph_msg_data_add_pages(struct ceph_msg *msg, struct page **pages, 360 + size_t length, size_t alignment, bool own_pages); 362 361 extern void ceph_msg_data_add_pagelist(struct ceph_msg *msg, 363 362 struct ceph_pagelist *pagelist); 364 363 #ifdef CONFIG_BLOCK
+4
include/linux/ceph/osdmap.h
··· 37 37 #define CEPH_POOL_FLAG_HASHPSPOOL (1ULL << 0) /* hash pg seed and pool id 38 38 together */ 39 39 #define CEPH_POOL_FLAG_FULL (1ULL << 1) /* pool is full */ 40 + #define CEPH_POOL_FLAG_FULL_QUOTA (1ULL << 10) /* pool ran out of quota, 41 + will set FULL too */ 42 + #define CEPH_POOL_FLAG_NEARFULL (1ULL << 11) /* pool is nearfull */ 40 43 41 44 struct ceph_pg_pool_info { 42 45 struct rb_node node; ··· 307 304 308 305 extern const char *ceph_pg_pool_name_by_id(struct ceph_osdmap *map, u64 id); 309 306 extern int ceph_pg_poolid_by_name(struct ceph_osdmap *map, const char *name); 307 + u64 ceph_pg_pool_flags(struct ceph_osdmap *map, u64 id); 310 308 311 309 #endif
+4 -2
include/linux/ceph/rados.h
··· 143 143 /* 144 144 * osd map flag bits 145 145 */ 146 - #define CEPH_OSDMAP_NEARFULL (1<<0) /* sync writes (near ENOSPC) */ 147 - #define CEPH_OSDMAP_FULL (1<<1) /* no data writes (ENOSPC) */ 146 + #define CEPH_OSDMAP_NEARFULL (1<<0) /* sync writes (near ENOSPC), 147 + not set since ~luminous */ 148 + #define CEPH_OSDMAP_FULL (1<<1) /* no data writes (ENOSPC), 149 + not set since ~luminous */ 148 150 #define CEPH_OSDMAP_PAUSERD (1<<2) /* pause all reads */ 149 151 #define CEPH_OSDMAP_PAUSEWR (1<<3) /* pause all writes */ 150 152 #define CEPH_OSDMAP_PAUSEREC (1<<4) /* pause recovery */
+5 -5
include/linux/clk-provider.h
··· 522 522 * @clk_gate_flags: gate-specific flags for this clock 523 523 * @lock: shared register lock for this clock 524 524 */ 525 - #define clk_hw_register_gate_parent_hw(dev, name, parent_name, flags, reg, \ 525 + #define clk_hw_register_gate_parent_hw(dev, name, parent_hw, flags, reg, \ 526 526 bit_idx, clk_gate_flags, lock) \ 527 - __clk_hw_register_gate((dev), NULL, (name), (parent_name), NULL, \ 527 + __clk_hw_register_gate((dev), NULL, (name), NULL, (parent_hw), \ 528 528 NULL, (flags), (reg), (bit_idx), \ 529 529 (clk_gate_flags), (lock)) 530 530 /** ··· 539 539 * @clk_gate_flags: gate-specific flags for this clock 540 540 * @lock: shared register lock for this clock 541 541 */ 542 - #define clk_hw_register_gate_parent_data(dev, name, parent_name, flags, reg, \ 542 + #define clk_hw_register_gate_parent_data(dev, name, parent_data, flags, reg, \ 543 543 bit_idx, clk_gate_flags, lock) \ 544 - __clk_hw_register_gate((dev), NULL, (name), (parent_name), NULL, \ 545 - NULL, (flags), (reg), (bit_idx), \ 544 + __clk_hw_register_gate((dev), NULL, (name), NULL, NULL, (parent_data), \ 545 + (flags), (reg), (bit_idx), \ 546 546 (clk_gate_flags), (lock)) 547 547 void clk_unregister_gate(struct clk *clk); 548 548 void clk_hw_unregister_gate(struct clk_hw *hw);
+2 -2
include/linux/i2c.h
··· 506 506 * @smbus_xfer_atomic: same as @smbus_xfer. Yet, only using atomic context 507 507 * so e.g. PMICs can be accessed very late before shutdown. Optional. 508 508 * @functionality: Return the flags that this algorithm/adapter pair supports 509 - * from the I2C_FUNC_* flags. 509 + * from the ``I2C_FUNC_*`` flags. 510 510 * @reg_slave: Register given client to I2C slave mode of this adapter 511 511 * @unreg_slave: Unregister given client from I2C slave mode of this adapter 512 512 * ··· 515 515 * be addressed using the same bus algorithms - i.e. bit-banging or the PCF8584 516 516 * to name two of the most common. 517 517 * 518 - * The return codes from the @master_xfer{_atomic} fields should indicate the 518 + * The return codes from the ``master_xfer{_atomic}`` fields should indicate the 519 519 * type of error code that occurred during the transfer, as documented in the 520 520 * Kernel Documentation file Documentation/i2c/fault-codes.rst. 521 521 */
+2 -2
include/linux/ieee80211.h
··· 2111 2111 { 2112 2112 struct ieee80211_he_spr *he_spr = (void *)he_spr_ie; 2113 2113 u8 spr_len = sizeof(struct ieee80211_he_spr); 2114 - u32 he_spr_params; 2114 + u8 he_spr_params; 2115 2115 2116 2116 /* Make sure the input is not NULL */ 2117 2117 if (!he_spr_ie) 2118 2118 return 0; 2119 2119 2120 2120 /* Calc required length */ 2121 - he_spr_params = le32_to_cpu(he_spr->he_sr_control); 2121 + he_spr_params = he_spr->he_sr_control; 2122 2122 if (he_spr_params & IEEE80211_HE_SPR_NON_SRG_OFFSET_PRESENT) 2123 2123 spr_len++; 2124 2124 if (he_spr_params & IEEE80211_HE_SPR_SRG_INFORMATION_PRESENT)
+12
include/linux/memcontrol.h
··· 695 695 void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx, 696 696 int val); 697 697 void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val); 698 + void mod_memcg_obj_state(void *p, int idx, int val); 698 699 699 700 static inline void mod_lruvec_state(struct lruvec *lruvec, 700 701 enum node_stat_item idx, int val) ··· 1124 1123 __mod_node_page_state(page_pgdat(page), idx, val); 1125 1124 } 1126 1125 1126 + static inline void mod_memcg_obj_state(void *p, int idx, int val) 1127 + { 1128 + } 1129 + 1127 1130 static inline 1128 1131 unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order, 1129 1132 gfp_t gfp_mask, ··· 1432 1427 return memcg ? memcg->kmemcg_id : -1; 1433 1428 } 1434 1429 1430 + struct mem_cgroup *mem_cgroup_from_obj(void *p); 1431 + 1435 1432 #else 1436 1433 1437 1434 static inline int memcg_kmem_charge(struct page *page, gfp_t gfp, int order) ··· 1473 1466 1474 1467 static inline void memcg_put_cache_ids(void) 1475 1468 { 1469 + } 1470 + 1471 + static inline struct mem_cgroup *mem_cgroup_from_obj(void *p) 1472 + { 1473 + return NULL; 1476 1474 } 1477 1475 1478 1476 #endif /* CONFIG_MEMCG_KMEM */
+3
include/uapi/linux/input-event-codes.h
··· 652 652 /* Electronic privacy screen control */ 653 653 #define KEY_PRIVACY_SCREEN_TOGGLE 0x279 654 654 655 + /* Select an area of screen to be copied */ 656 + #define KEY_SELECTIVE_SCREENSHOT 0x27a 657 + 655 658 /* 656 659 * Some keyboards have keys which do not have a defined meaning, these keys 657 660 * are intended to be programmed / bound to macros by the user. For most
+5 -5
include/uapi/linux/serio.h
··· 9 9 #ifndef _UAPI_SERIO_H 10 10 #define _UAPI_SERIO_H 11 11 12 - 12 + #include <linux/const.h> 13 13 #include <linux/ioctl.h> 14 14 15 15 #define SPIOCSTYPE _IOW('q', 0x01, unsigned long) ··· 18 18 /* 19 19 * bit masks for use in "interrupt" flags (3rd argument) 20 20 */ 21 - #define SERIO_TIMEOUT BIT(0) 22 - #define SERIO_PARITY BIT(1) 23 - #define SERIO_FRAME BIT(2) 24 - #define SERIO_OOB_DATA BIT(3) 21 + #define SERIO_TIMEOUT _BITUL(0) 22 + #define SERIO_PARITY _BITUL(1) 23 + #define SERIO_FRAME _BITUL(2) 24 + #define SERIO_OOB_DATA _BITUL(3) 25 25 26 26 /* 27 27 * Serio types
+2 -1
kernel/bpf/btf.c
··· 4577 4577 union bpf_attr __user *uattr) 4578 4578 { 4579 4579 struct bpf_btf_info __user *uinfo; 4580 - struct bpf_btf_info info = {}; 4580 + struct bpf_btf_info info; 4581 4581 u32 info_copy, btf_copy; 4582 4582 void __user *ubtf; 4583 4583 u32 uinfo_len; ··· 4586 4586 uinfo_len = attr->info.info_len; 4587 4587 4588 4588 info_copy = min_t(u32, uinfo_len, sizeof(info)); 4589 + memset(&info, 0, sizeof(info)); 4589 4590 if (copy_from_user(&info, uinfo, info_copy)) 4590 4591 return -EFAULT; 4591 4592
+20 -14
kernel/bpf/syscall.c
··· 689 689 offsetof(union bpf_attr, CMD##_LAST_FIELD) - \ 690 690 sizeof(attr->CMD##_LAST_FIELD)) != NULL 691 691 692 - /* dst and src must have at least BPF_OBJ_NAME_LEN number of bytes. 693 - * Return 0 on success and < 0 on error. 692 + /* dst and src must have at least "size" number of bytes. 693 + * Return strlen on success and < 0 on error. 694 694 */ 695 - static int bpf_obj_name_cpy(char *dst, const char *src) 695 + int bpf_obj_name_cpy(char *dst, const char *src, unsigned int size) 696 696 { 697 - const char *end = src + BPF_OBJ_NAME_LEN; 697 + const char *end = src + size; 698 + const char *orig_src = src; 698 699 699 - memset(dst, 0, BPF_OBJ_NAME_LEN); 700 + memset(dst, 0, size); 700 701 /* Copy all isalnum(), '_' and '.' chars. */ 701 702 while (src < end && *src) { 702 703 if (!isalnum(*src) && ··· 706 705 *dst++ = *src++; 707 706 } 708 707 709 - /* No '\0' found in BPF_OBJ_NAME_LEN number of bytes */ 708 + /* No '\0' found in "size" number of bytes */ 710 709 if (src == end) 711 710 return -EINVAL; 712 711 713 - return 0; 712 + return src - orig_src; 714 713 } 715 714 716 715 int map_check_no_btf(const struct bpf_map *map, ··· 804 803 if (IS_ERR(map)) 805 804 return PTR_ERR(map); 806 805 807 - err = bpf_obj_name_cpy(map->name, attr->map_name); 808 - if (err) 806 + err = bpf_obj_name_cpy(map->name, attr->map_name, 807 + sizeof(attr->map_name)); 808 + if (err < 0) 809 809 goto free_map; 810 810 811 811 atomic64_set(&map->refcnt, 1); ··· 2104 2102 goto free_prog; 2105 2103 2106 2104 prog->aux->load_time = ktime_get_boottime_ns(); 2107 - err = bpf_obj_name_cpy(prog->aux->name, attr->prog_name); 2108 - if (err) 2105 + err = bpf_obj_name_cpy(prog->aux->name, attr->prog_name, 2106 + sizeof(attr->prog_name)); 2107 + if (err < 0) 2109 2108 goto free_prog; 2110 2109 2111 2110 /* run eBPF verifier */ ··· 2990 2987 union bpf_attr __user *uattr) 2991 2988 { 2992 2989 struct bpf_prog_info __user *uinfo = u64_to_user_ptr(attr->info.info); 2993 2990 - struct bpf_prog_info info = {}; + struct bpf_prog_info info; 2994 2991 u32 info_len = attr->info.info_len; 2995 2992 struct bpf_prog_stats stats; 2996 2993 char __user *uinsns; ··· 3002 2999 return err; 3003 3000 info_len = min_t(u32, sizeof(info), info_len); 3004 3001 3002 + memset(&info, 0, sizeof(info)); 3005 3003 if (copy_from_user(&info, uinfo, info_len)) 3006 3004 return -EFAULT; 3007 3005 ··· 3266 3262 union bpf_attr __user *uattr) 3267 3263 { 3268 3264 struct bpf_map_info __user *uinfo = u64_to_user_ptr(attr->info.info); 3269 - struct bpf_map_info info = {}; 3265 + struct bpf_map_info info; 3270 3266 u32 info_len = attr->info.info_len; 3271 3267 int err; 3272 3268 ··· 3275 3271 return err; 3276 3272 info_len = min_t(u32, sizeof(info), info_len); 3277 3273 3274 + memset(&info, 0, sizeof(info)); 3278 3275 info.type = map->map_type; 3279 3276 info.id = map->id; 3280 3277 info.key_size = map->key_size; ··· 3566 3561 3567 3562 SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size) 3568 3563 { 3569 - union bpf_attr attr = {}; 3564 + union bpf_attr attr; 3570 3565 int err; 3571 3566 if (sysctl_unprivileged_bpf_disabled && !capable(CAP_SYS_ADMIN)) ··· 3578 3573 size = min_t(u32, size, sizeof(attr)); 3579 3574 3580 3575 /* copy attributes from user space, may be less than sizeof(bpf_attr) */ 3576 + memset(&attr, 0, sizeof(attr)); 3581 3577 if (copy_from_user(&attr, uattr, size) != 0) 3582 3578 return -EFAULT; 3583 3579
+2 -2
kernel/fork.c
··· 397 397 mod_zone_page_state(page_zone(first_page), NR_KERNEL_STACK_KB, 398 398 THREAD_SIZE / 1024 * account); 399 399 400 - mod_memcg_page_state(first_page, MEMCG_KERNEL_STACK_KB, 401 - account * (THREAD_SIZE / 1024)); 400 + mod_memcg_obj_state(stack, MEMCG_KERNEL_STACK_KB, 401 + account * (THREAD_SIZE / 1024)); 402 402 } 403 403 } 404 404
+9 -2
kernel/irq/manage.c
··· 323 323 324 324 if (desc->affinity_notify) { 325 325 kref_get(&desc->affinity_notify->kref); 326 - schedule_work(&desc->affinity_notify->work); 326 + if (!schedule_work(&desc->affinity_notify->work)) { 327 + /* Work was already scheduled, drop our extra ref */ 328 + kref_put(&desc->affinity_notify->kref, 329 + desc->affinity_notify->release); 330 + } 327 331 } 328 332 irqd_set(data, IRQD_AFFINITY_SET); 329 333 ··· 427 423 raw_spin_unlock_irqrestore(&desc->lock, flags); 428 424 429 425 if (old_notify) { 430 - cancel_work_sync(&old_notify->work); 426 + if (cancel_work_sync(&old_notify->work)) { 427 + /* Pending work had a ref, put that one too */ 428 + kref_put(&old_notify->kref, old_notify->release); 429 + } 431 430 kref_put(&old_notify->kref, old_notify->release); 432 431 } 433 432
+1 -2
mm/hugetlb_cgroup.c
··· 240 240 if (!page_counter_try_charge(&h_cg->hugepage[idx], nr_pages, 241 241 &counter)) { 242 242 ret = -ENOMEM; 243 - hugetlb_event(hugetlb_cgroup_from_counter(counter, idx), idx, 244 - HUGETLB_MAX); 243 + hugetlb_event(h_cg, idx, HUGETLB_MAX); 245 244 } 246 245 css_put(&h_cg->css); 247 246 done:
+38
mm/memcontrol.c
··· 777 777 rcu_read_unlock(); 778 778 } 779 779 780 + void mod_memcg_obj_state(void *p, int idx, int val) 781 + { 782 + struct mem_cgroup *memcg; 783 + 784 + rcu_read_lock(); 785 + memcg = mem_cgroup_from_obj(p); 786 + if (memcg) 787 + mod_memcg_state(memcg, idx, val); 788 + rcu_read_unlock(); 789 + } 790 + 780 791 /** 781 792 * __count_memcg_events - account VM events in a cgroup 782 793 * @memcg: the memory cgroup ··· 2672 2661 } 2673 2662 2674 2663 #ifdef CONFIG_MEMCG_KMEM 2664 + /* 2665 + * Returns a pointer to the memory cgroup to which the kernel object is charged. 2666 + * 2667 + * The caller must ensure the memcg lifetime, e.g. by taking rcu_read_lock(), 2668 + * cgroup_mutex, etc. 2669 + */ 2670 + struct mem_cgroup *mem_cgroup_from_obj(void *p) 2671 + { 2672 + struct page *page; 2673 + 2674 + if (mem_cgroup_disabled()) 2675 + return NULL; 2676 + 2677 + page = virt_to_head_page(p); 2678 + 2679 + /* 2680 + * Slab pages don't have page->mem_cgroup set because corresponding 2681 + * kmem caches can be reparented during the lifetime. That's why 2682 + * memcg_from_slab_page() should be used instead. 2683 + */ 2684 + if (PageSlab(page)) 2685 + return memcg_from_slab_page(page); 2686 + 2687 + /* All other pages use page->mem_cgroup */ 2688 + return page->mem_cgroup; 2689 + } 2690 + 2675 2691 static int memcg_alloc_cache_id(void) 2676 2692 { 2677 2693 int id, size;
+6
mm/sparse.c
··· 781 781 ms->usage = NULL; 782 782 } 783 783 memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); 784 + /* 785 + * Mark the section invalid so that valid_section() 786 + * return false. This prevents code from dereferencing 787 + * ms->usage array. 788 + */ 789 + ms->section_mem_map &= ~SECTION_HAS_MEM_MAP; 784 790 } 785 791 786 792 if (section_is_early && memmap)
+20 -21
mm/swapfile.c
··· 2899 2899 p->bdev = inode->i_sb->s_bdev; 2900 2900 } 2901 2901 2902 - inode_lock(inode); 2903 - if (IS_SWAPFILE(inode)) 2904 - return -EBUSY; 2905 - 2906 2902 return 0; 2907 2903 } 2908 2904 ··· 3153 3157 mapping = swap_file->f_mapping; 3154 3158 inode = mapping->host; 3155 3159 3156 - /* will take i_rwsem; */ 3157 3160 error = claim_swapfile(p, inode); 3158 3161 if (unlikely(error)) 3159 3162 goto bad_swap; 3163 + 3164 + inode_lock(inode); 3165 + if (IS_SWAPFILE(inode)) { 3166 + error = -EBUSY; 3167 + goto bad_swap_unlock_inode; 3168 + } 3160 3169 3161 3170 /* 3162 3171 * Read the swap header. 3163 3172 */ 3164 3173 if (!mapping->a_ops->readpage) { 3165 3174 error = -EINVAL; 3166 - goto bad_swap; 3175 + goto bad_swap_unlock_inode; 3167 3176 } 3168 3177 page = read_mapping_page(mapping, 0, swap_file); 3169 3178 if (IS_ERR(page)) { 3170 3179 error = PTR_ERR(page); 3171 - goto bad_swap; 3180 + goto bad_swap_unlock_inode; 3172 3181 } 3173 3182 swap_header = kmap(page); 3174 3183 3175 3184 maxpages = read_swap_header(p, swap_header, inode); 3176 3185 if (unlikely(!maxpages)) { 3177 3186 error = -EINVAL; 3178 - goto bad_swap; 3187 + goto bad_swap_unlock_inode; 3179 3188 } 3180 3189 3181 3190 /* OK, set up the swap map and apply the bad block list */ 3182 3191 swap_map = vzalloc(maxpages); 3183 3192 if (!swap_map) { 3184 3193 error = -ENOMEM; 3185 - goto bad_swap; 3194 + goto bad_swap_unlock_inode; 3186 3195 } 3187 3196 3188 3197 if (bdi_cap_stable_pages_required(inode_to_bdi(inode))) ··· 3212 3211 GFP_KERNEL); 3213 3212 if (!cluster_info) { 3214 3213 error = -ENOMEM; 3215 - goto bad_swap; 3214 + goto bad_swap_unlock_inode; 3216 3215 } 3217 3216 3218 3217 for (ci = 0; ci < nr_cluster; ci++) ··· 3221 3220 p->percpu_cluster = alloc_percpu(struct percpu_cluster); 3222 3221 if (!p->percpu_cluster) { 3223 3222 error = -ENOMEM; 3224 - goto bad_swap; 3223 + goto bad_swap_unlock_inode; 3225 3224 } 3226 3225 for_each_possible_cpu(cpu) { 3227 3226 struct percpu_cluster *cluster; ··· 3235 3234 3236 3235 error = swap_cgroup_swapon(p->type, maxpages); 3237 3236 if (error) 3238 - goto bad_swap; 3237 + goto bad_swap_unlock_inode; 3239 3238 3240 3239 nr_extents = setup_swap_map_and_extents(p, swap_header, swap_map, 3241 3240 cluster_info, maxpages, &span); 3242 3241 if (unlikely(nr_extents < 0)) { 3243 3242 error = nr_extents; 3244 - goto bad_swap; 3243 + goto bad_swap_unlock_inode; 3245 3244 } 3246 3245 /* frontswap enabled? set up bit-per-page map for frontswap */ 3247 3246 if (IS_ENABLED(CONFIG_FRONTSWAP)) ··· 3281 3280 3282 3281 error = init_swap_address_space(p->type, maxpages); 3283 3282 if (error) 3284 - goto bad_swap; 3283 + goto bad_swap_unlock_inode; 3285 3284 3286 3285 /* 3287 3286 * Flush any pending IO and dirty mappings before we start using this ··· 3291 3290 error = inode_drain_writes(inode); 3292 3291 if (error) { 3293 3292 inode->i_flags &= ~S_SWAPFILE; 3294 - goto bad_swap; 3293 + goto bad_swap_unlock_inode; 3295 3294 } 3296 3295 3297 3296 mutex_lock(&swapon_mutex); ··· 3316 3315 3317 3316 error = 0; 3318 3317 goto out; 3318 + bad_swap_unlock_inode: 3319 + inode_unlock(inode); 3319 3320 bad_swap: 3320 3321 free_percpu(p->percpu_cluster); 3321 3322 p->percpu_cluster = NULL; ··· 3325 3322 set_blocksize(p->bdev, p->old_block_size); 3326 3323 blkdev_put(p->bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL); 3327 3324 } 3325 + inode = NULL; 3328 3326 destroy_swap_extents(p); 3329 3327 swap_cgroup_swapoff(p->type); 3330 3328 spin_lock(&swap_lock); ··· 3337 3333 kvfree(frontswap_map); 3338 3334 if (inced_nr_rotate_swap) 3339 3335 atomic_dec(&nr_rotate_swap); 3340 - if (swap_file) { 3341 - if (inode) { 3342 - inode_unlock(inode); 3343 - inode = NULL; 3344 - } 3336 + if (swap_file) 3345 3337 filp_close(swap_file, NULL); 3346 - } 3347 3338 out: 3348 3339 if (page && !IS_ERR(page)) { 3349 3340 kunmap(page);
+7 -2
net/ceph/messenger.c
··· 3248 3248 3249 3249 static void ceph_msg_data_destroy(struct ceph_msg_data *data) 3250 3250 { 3251 - if (data->type == CEPH_MSG_DATA_PAGELIST) 3251 + if (data->type == CEPH_MSG_DATA_PAGES && data->own_pages) { 3252 + int num_pages = calc_pages_for(data->alignment, data->length); 3253 + ceph_release_page_vector(data->pages, num_pages); 3254 + } else if (data->type == CEPH_MSG_DATA_PAGELIST) { 3252 3255 ceph_pagelist_release(data->pagelist); 3256 + } 3253 3257 } 3254 3258 3255 3259 void ceph_msg_data_add_pages(struct ceph_msg *msg, struct page **pages, 3256 - size_t length, size_t alignment) 3260 + size_t length, size_t alignment, bool own_pages) 3257 3261 { 3258 3262 struct ceph_msg_data *data; 3259 3263 ··· 3269 3265 data->pages = pages; 3270 3266 data->length = length; 3271 3267 data->alignment = alignment & ~PAGE_MASK; 3268 + data->own_pages = own_pages; 3272 3269 3273 3270 msg->data_length += length; 3274 3271 }
+3 -11
net/ceph/osd_client.c
··· 962 962 BUG_ON(length > (u64) SIZE_MAX); 963 963 if (length) 964 964 ceph_msg_data_add_pages(msg, osd_data->pages, 965 - length, osd_data->alignment); 965 + length, osd_data->alignment, false); 966 966 } else if (osd_data->type == CEPH_OSD_DATA_TYPE_PAGELIST) { 967 967 BUG_ON(!length); 968 968 ceph_msg_data_add_pagelist(msg, osd_data->pagelist); ··· 4436 4436 CEPH_MSG_DATA_PAGES); 4437 4437 *lreq->preply_pages = data->pages; 4438 4438 *lreq->preply_len = data->length; 4439 - } else { 4440 - ceph_release_page_vector(data->pages, 4441 - calc_pages_for(0, data->length)); 4439 + data->own_pages = false; 4442 4440 } 4443 4441 } 4444 4442 lreq->notify_finish_error = return_code; ··· 5504 5506 return m; 5505 5507 } 5506 5508 5507 - /* 5508 - * TODO: switch to a msg-owned pagelist 5509 - */ 5510 5509 static struct ceph_msg *alloc_msg_with_page_vector(struct ceph_msg_header *hdr) 5511 5510 { 5512 5511 struct ceph_msg *m; ··· 5517 5522 5518 5523 if (data_len) { 5519 5524 struct page **pages; 5520 - struct ceph_osd_data osd_data; 5521 5525 5522 5526 pages = ceph_alloc_page_vector(calc_pages_for(0, data_len), 5523 5527 GFP_NOIO); ··· 5525 5531 return NULL; 5526 5532 } 5527 5533 5528 - ceph_osd_data_pages_init(&osd_data, pages, data_len, 0, false, 5529 - false); 5530 - ceph_osdc_msg_data_add(m, &osd_data); 5534 + ceph_msg_data_add_pages(m, pages, data_len, 0, true); 5531 5535 } 5532 5536 5533 5537 return m;
+9
net/ceph/osdmap.c
··· 710 710 } 711 711 EXPORT_SYMBOL(ceph_pg_poolid_by_name); 712 712 713 + u64 ceph_pg_pool_flags(struct ceph_osdmap *map, u64 id) 714 + { 715 + struct ceph_pg_pool_info *pi; 716 + 717 + pi = __lookup_pg_pool(&map->pg_pools, id); 718 + return pi ? pi->flags : 0; 719 + } 720 + EXPORT_SYMBOL(ceph_pg_pool_flags); 721 + 713 722 static void __remove_pg_pool(struct rb_root *root, struct ceph_pg_pool_info *pi) 714 723 { 715 724 rb_erase(&pi->node, root);
+1
net/ipv4/Kconfig
··· 303 303 304 304 config NET_IPVTI 305 305 tristate "Virtual (secure) IP: tunneling" 306 + depends on IPV6 || IPV6=n 306 307 select INET_TUNNEL 307 308 select NET_IP_TUNNEL 308 309 select XFRM
+2 -5
net/ipv4/bpf_tcp_ca.c
··· 184 184 { 185 185 const struct tcp_congestion_ops *utcp_ca; 186 186 struct tcp_congestion_ops *tcp_ca; 187 - size_t tcp_ca_name_len; 188 187 int prog_fd; 189 188 u32 moff; 190 189 ··· 198 199 tcp_ca->flags = utcp_ca->flags; 199 200 return 1; 200 201 case offsetof(struct tcp_congestion_ops, name): 201 - tcp_ca_name_len = strnlen(utcp_ca->name, sizeof(utcp_ca->name)); 202 - if (!tcp_ca_name_len || 203 - tcp_ca_name_len == sizeof(utcp_ca->name)) 202 + if (bpf_obj_name_cpy(tcp_ca->name, utcp_ca->name, 203 + sizeof(tcp_ca->name)) <= 0) 204 204 return -EINVAL; 205 205 if (tcp_ca_find(utcp_ca->name)) 206 206 return -EEXIST; 207 - memcpy(tcp_ca->name, utcp_ca->name, sizeof(tcp_ca->name)); 208 207 return 1; 209 208 } 210 209
+29 -7
net/ipv4/ip_vti.c
··· 187 187 int mtu; 188 188 189 189 if (!dst) { 190 - struct rtable *rt; 190 + switch (skb->protocol) { 191 + case htons(ETH_P_IP): { 192 + struct rtable *rt; 191 193 192 - fl->u.ip4.flowi4_oif = dev->ifindex; 193 - fl->u.ip4.flowi4_flags |= FLOWI_FLAG_ANYSRC; 194 - rt = __ip_route_output_key(dev_net(dev), &fl->u.ip4); 195 - if (IS_ERR(rt)) { 194 + fl->u.ip4.flowi4_oif = dev->ifindex; 195 + fl->u.ip4.flowi4_flags |= FLOWI_FLAG_ANYSRC; 196 + rt = __ip_route_output_key(dev_net(dev), &fl->u.ip4); 197 + if (IS_ERR(rt)) { 198 + dev->stats.tx_carrier_errors++; 199 + goto tx_error_icmp; 200 + } 201 + dst = &rt->dst; 202 + skb_dst_set(skb, dst); 203 + break; 204 + } 205 + #if IS_ENABLED(CONFIG_IPV6) 206 + case htons(ETH_P_IPV6): 207 + fl->u.ip6.flowi6_oif = dev->ifindex; 208 + fl->u.ip6.flowi6_flags |= FLOWI_FLAG_ANYSRC; 209 + dst = ip6_route_output(dev_net(dev), NULL, &fl->u.ip6); 210 + if (dst->error) { 211 + dst_release(dst); 212 + dst = NULL; 213 + dev->stats.tx_carrier_errors++; 214 + goto tx_error_icmp; 215 + } 216 + skb_dst_set(skb, dst); 217 + break; 218 + #endif 219 + default: 196 220 dev->stats.tx_carrier_errors++; 197 221 goto tx_error_icmp; 198 222 } 199 - dst = &rt->dst; 200 - skb_dst_set(skb, dst); 201 223 } 202 224 203 225 dst_hold(dst);
+26 -8
net/ipv6/ip6_vti.c
··· 311 311 312 312 if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb)) { 313 313 rcu_read_unlock(); 314 - return 0; 314 + goto discard; 315 315 } 316 316 317 317 ipv6h = ipv6_hdr(skb); ··· 450 450 int mtu; 451 451 452 452 if (!dst) { 453 - fl->u.ip6.flowi6_oif = dev->ifindex; 454 - fl->u.ip6.flowi6_flags |= FLOWI_FLAG_ANYSRC; 455 - dst = ip6_route_output(dev_net(dev), NULL, &fl->u.ip6); 456 - if (dst->error) { 457 - dst_release(dst); 458 - dst = NULL; 453 + switch (skb->protocol) { 454 + case htons(ETH_P_IP): { 455 + struct rtable *rt; 456 + 457 + fl->u.ip4.flowi4_oif = dev->ifindex; 458 + fl->u.ip4.flowi4_flags |= FLOWI_FLAG_ANYSRC; 459 + rt = __ip_route_output_key(dev_net(dev), &fl->u.ip4); 460 + if (IS_ERR(rt)) 461 + goto tx_err_link_failure; 462 + dst = &rt->dst; 463 + skb_dst_set(skb, dst); 464 + break; 465 + } 466 + case htons(ETH_P_IPV6): 467 + fl->u.ip6.flowi6_oif = dev->ifindex; 468 + fl->u.ip6.flowi6_flags |= FLOWI_FLAG_ANYSRC; 469 + dst = ip6_route_output(dev_net(dev), NULL, &fl->u.ip6); 470 + if (dst->error) { 471 + dst_release(dst); 472 + dst = NULL; 473 + goto tx_err_link_failure; 474 + } 475 + skb_dst_set(skb, dst); 476 + break; 477 + default: 459 478 goto tx_err_link_failure; 460 479 } 461 - skb_dst_set(skb, dst); 462 480 } 463 481 464 482 dst_hold(dst);
+1 -1
net/ipv6/xfrm6_tunnel.c
··· 78 78 79 79 hlist_for_each_entry_rcu(x6spi, 80 80 &xfrm6_tn->spi_byaddr[xfrm6_tunnel_spi_hash_byaddr(saddr)], 81 - list_byaddr) { 81 + list_byaddr, lockdep_is_held(&xfrm6_tunnel_spi_lock)) { 82 82 if (xfrm6_addr_equal(&x6spi->addr, saddr)) 83 83 return x6spi; 84 84 }
+2 -1
net/mac80211/debugfs_sta.c
··· 5 5 * Copyright 2007 Johannes Berg <johannes@sipsolutions.net> 6 6 * Copyright 2013-2014 Intel Mobile Communications GmbH 7 7 * Copyright(c) 2016 Intel Deutschland GmbH 8 - * Copyright (C) 2018 - 2019 Intel Corporation 8 + * Copyright (C) 2018 - 2020 Intel Corporation 9 9 */ 10 10 11 11 #include <linux/debugfs.h> ··· 78 78 FLAG(MPSP_OWNER), 79 79 FLAG(MPSP_RECIPIENT), 80 80 FLAG(PS_DELIVER), 81 + FLAG(USES_ENCRYPTION), 81 82 #undef FLAG 82 83 }; 83 84
+12 -8
net/mac80211/key.c
··· 6 6 * Copyright 2007-2008 Johannes Berg <johannes@sipsolutions.net> 7 7 * Copyright 2013-2014 Intel Mobile Communications GmbH 8 8 * Copyright 2015-2017 Intel Deutschland GmbH 9 - * Copyright 2018-2019 Intel Corporation 9 + * Copyright 2018-2020 Intel Corporation 10 10 */ 11 11 12 12 #include <linux/if_ether.h> ··· 277 277 sta ? sta->sta.addr : bcast_addr, ret); 278 278 } 279 279 280 - int ieee80211_set_tx_key(struct ieee80211_key *key) 280 + static int _ieee80211_set_tx_key(struct ieee80211_key *key, bool force) 281 281 { 282 282 struct sta_info *sta = key->sta; 283 283 struct ieee80211_local *local = key->local; 284 284 285 285 assert_key_lock(local); 286 286 287 + set_sta_flag(sta, WLAN_STA_USES_ENCRYPTION); 288 + 287 289 sta->ptk_idx = key->conf.keyidx; 288 290 289 - if (!ieee80211_hw_check(&local->hw, AMPDU_KEYBORDER_SUPPORT)) 291 + if (force || !ieee80211_hw_check(&local->hw, AMPDU_KEYBORDER_SUPPORT)) 290 292 clear_sta_flag(sta, WLAN_STA_BLOCK_BA); 291 293 ieee80211_check_fast_xmit(sta); 292 294 293 295 return 0; 296 + } 297 + 298 + int ieee80211_set_tx_key(struct ieee80211_key *key) 299 + { 300 + return _ieee80211_set_tx_key(key, false); 294 301 } 295 302 296 303 static void ieee80211_pairwise_rekey(struct ieee80211_key *old, ··· 488 481 if (pairwise) { 489 482 rcu_assign_pointer(sta->ptk[idx], new); 490 483 if (new && 491 - !(new->conf.flags & IEEE80211_KEY_FLAG_NO_AUTO_TX)) { 492 - sta->ptk_idx = idx; 493 - clear_sta_flag(sta, WLAN_STA_BLOCK_BA); 494 - ieee80211_check_fast_xmit(sta); 495 - } 484 + !(new->conf.flags & IEEE80211_KEY_FLAG_NO_AUTO_TX)) 485 + _ieee80211_set_tx_key(new, true); 496 486 } else { 497 487 rcu_assign_pointer(sta->gtk[idx], new); 498 488 }
+5
net/mac80211/sta_info.c
··· 1049 1049 might_sleep(); 1050 1050 lockdep_assert_held(&local->sta_mtx); 1051 1051 1052 + while (sta->sta_state == IEEE80211_STA_AUTHORIZED) { 1053 + ret = sta_info_move_state(sta, IEEE80211_STA_ASSOC); 1054 + WARN_ON_ONCE(ret); 1055 + } 1056 + 1052 1057 /* now keys can no longer be reached */ 1053 1058 ieee80211_free_sta_keys(local, sta); 1054 1059
+1
net/mac80211/sta_info.h
··· 98 98 WLAN_STA_MPSP_OWNER, 99 99 WLAN_STA_MPSP_RECIPIENT, 100 100 WLAN_STA_PS_DELIVER, 101 + WLAN_STA_USES_ENCRYPTION, 101 102 102 103 NUM_WLAN_STA_FLAGS, 103 104 };
+32 -5
net/mac80211/tx.c
··· 590 590 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx->skb); 591 591 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)tx->skb->data; 592 592 593 - if (unlikely(info->flags & IEEE80211_TX_INTFL_DONT_ENCRYPT)) 593 + if (unlikely(info->flags & IEEE80211_TX_INTFL_DONT_ENCRYPT)) { 594 594 tx->key = NULL; 595 - else if (tx->sta && 596 - (key = rcu_dereference(tx->sta->ptk[tx->sta->ptk_idx]))) 595 + return TX_CONTINUE; 596 + } 597 + 598 + if (tx->sta && 599 + (key = rcu_dereference(tx->sta->ptk[tx->sta->ptk_idx]))) 597 600 tx->key = key; 598 601 else if (ieee80211_is_group_privacy_action(tx->skb) && 599 602 (key = rcu_dereference(tx->sdata->default_multicast_key))) ··· 657 654 if (!skip_hw && tx->key && 658 655 tx->key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE) 659 656 info->control.hw_key = &tx->key->conf; 657 + } else if (!ieee80211_is_mgmt(hdr->frame_control) && tx->sta && 658 + test_sta_flag(tx->sta, WLAN_STA_USES_ENCRYPTION)) { 659 + return TX_DROP; 660 660 } 661 661 662 662 return TX_CONTINUE; ··· 3605 3599 tx.skb = skb; 3606 3600 tx.sdata = vif_to_sdata(info->control.vif); 3607 3601 3608 - if (txq->sta) 3602 + if (txq->sta) { 3609 3603 tx.sta = container_of(txq->sta, struct sta_info, sta); 3604 + /* 3605 + * Drop unicast frames to unauthorised stations unless they are 3606 + * EAPOL frames from the local station. 3607 + */ 3608 + if (unlikely(!ieee80211_vif_is_mesh(&tx.sdata->vif) && 3609 + tx.sdata->vif.type != NL80211_IFTYPE_OCB && 3610 + !is_multicast_ether_addr(hdr->addr1) && 3611 + !test_sta_flag(tx.sta, WLAN_STA_AUTHORIZED) && 3612 + (!(info->control.flags & 3613 + IEEE80211_TX_CTRL_PORT_CTRL_PROTO) || 3614 + !ether_addr_equal(tx.sdata->vif.addr, 3615 + hdr->addr2)))) { 3616 + I802_DEBUG_INC(local->tx_handlers_drop_unauth_port); 3617 + ieee80211_free_txskb(&local->hw, skb); 3618 + goto begin; 3619 + } 3620 + } 3610 3621 3611 3622 /* 3612 3623 * The key can be removed while the packet was queued, so need to call ··· 5341 5318 struct ieee80211_local *local = sdata->local; 5342 5319 struct sk_buff *skb; 5343 5320 struct ethhdr *ehdr; 5321 + u32 ctrl_flags = 0; 5344 5322 u32 flags; 5345 5323 5346 5324 /* Only accept CONTROL_PORT_PROTOCOL configured in CONNECT/ASSOCIATE ··· 5350 5326 if (proto != sdata->control_port_protocol && 5351 5327 proto != cpu_to_be16(ETH_P_PREAUTH)) 5352 5328 return -EINVAL; 5329 + 5330 + if (proto == sdata->control_port_protocol) 5331 + ctrl_flags |= IEEE80211_TX_CTRL_PORT_CTRL_PROTO; 5353 5332 5354 5333 if (unencrypted) 5355 5334 flags = IEEE80211_TX_INTFL_DONT_ENCRYPT; ··· 5379 5352 skb_reset_mac_header(skb); 5380 5353 5381 5354 local_bh_disable(); 5382 - __ieee80211_subif_start_xmit(skb, skb->dev, flags, 0); 5355 + __ieee80211_subif_start_xmit(skb, skb->dev, flags, ctrl_flags); 5383 5356 local_bh_enable(); 5384 5357 5385 5358 return 0;
+1 -1
net/wireless/nl80211.c
··· 16790 16790 goto nla_put_failure; 16791 16791 16792 16792 if ((sta_opmode->changed & STA_OPMODE_MAX_BW_CHANGED) && 16793 - nla_put_u8(msg, NL80211_ATTR_CHANNEL_WIDTH, sta_opmode->bw)) 16793 + nla_put_u32(msg, NL80211_ATTR_CHANNEL_WIDTH, sta_opmode->bw)) 16794 16794 goto nla_put_failure; 16795 16795 16796 16796 if ((sta_opmode->changed & STA_OPMODE_N_SS_CHANGED) &&
+5 -1
net/wireless/scan.c
··· 2019 2019 2020 2020 spin_lock_bh(&rdev->bss_lock); 2021 2021 2022 - if (WARN_ON(cbss->pub.channel == chan)) 2022 + /* 2023 + * Some APs use CSA also for bandwidth changes, i.e., without actually 2024 + * changing the control channel, so no need to update in such a case. 2025 + */ 2026 + if (cbss->pub.channel == chan) 2023 2027 goto done; 2024 2028 2025 2029 /* use transmitting bss */
+5 -4
net/xfrm/xfrm_device.c
··· 78 78 int err; 79 79 unsigned long flags; 80 80 struct xfrm_state *x; 81 - struct sk_buff *skb2, *nskb; 82 81 struct softnet_data *sd; 82 + struct sk_buff *skb2, *nskb, *pskb = NULL; 83 83 netdev_features_t esp_features = features; 84 84 struct xfrm_offload *xo = xfrm_offload(skb); 85 85 struct sec_path *sp; ··· 168 168 } else { 169 169 if (skb == skb2) 170 170 skb = nskb; 171 - 172 - if (!skb) 173 - return NULL; 171 + else 172 + pskb->next = nskb; 174 173 175 174 continue; 176 175 } 177 176 178 177 skb_push(skb2, skb2->data - skb_mac_header(skb2)); 178 + pskb = skb2; 179 179 } 180 180 181 181 return skb; ··· 383 383 return xfrm_dev_feat_change(dev); 384 384 385 385 case NETDEV_DOWN: 386 + case NETDEV_UNREGISTER: 386 387 return xfrm_dev_down(dev); 387 388 } 388 389 return NOTIFY_DONE;
+2
net/xfrm/xfrm_policy.c
··· 434 434 435 435 static void xfrm_policy_kill(struct xfrm_policy *policy) 436 436 { 437 + write_lock_bh(&policy->lock); 437 438 policy->walk.dead = 1; 439 + write_unlock_bh(&policy->lock); 438 440 439 441 atomic_inc(&policy->genid); 440 442
+5 -1
net/xfrm/xfrm_user.c
··· 110 110 return 0;
111 111
112 112 uctx = nla_data(rt);
113 - if (uctx->len != (sizeof(struct xfrm_user_sec_ctx) + uctx->ctx_len))
113 + if (uctx->len > nla_len(rt) ||
114 + uctx->len != (sizeof(struct xfrm_user_sec_ctx) + uctx->ctx_len))
114 115 return -EINVAL;
115 116
116 117 return 0;
··· 2274 2273 xfrm_mark_get(attrs, &mark);
2275 2274
2276 2275 err = verify_newpolicy_info(&ua->policy);
2276 + if (err)
2277 + goto free_state;
2278 + err = verify_sec_ctx_len(attrs);
2277 2279 if (err)
2278 2280 goto free_state;
2279 2281
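The xfrm_user.c fix is a classic check on an attacker-controlled length: the length field embedded in the blob (`uctx->len`) must first be bounded by the number of bytes actually received (`nla_len()`) before the internal-consistency check is allowed to dereference anything. A simplified sketch of the same ordering, under an assumed struct layout (`struct sec_ctx_hdr` and `verify_ctx` are illustrative names, not the kernel's types):

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified analogue of verify_sec_ctx_len(): a header whose
 * self-declared lengths must be validated against the bytes we
 * really received before any of them are trusted. */
struct sec_ctx_hdr {
	uint16_t len;		/* total length the sender claims */
	uint16_t ctx_len;	/* length of the context body that follows */
};

static int verify_ctx(const struct sec_ctx_hdr *uctx, size_t outer_len)
{
	if (outer_len < sizeof(*uctx))
		return -1;	/* too short to even read the header */
	if (uctx->len > outer_len)
		return -1;	/* claims more bytes than were received */
	if (uctx->len != sizeof(*uctx) + uctx->ctx_len)
		return -1;	/* internally inconsistent */
	return 0;
}
```

Without the `outer_len` bound, the equality check alone would accept a `len` that is self-consistent but larger than the attribute payload, and later copies would read past the buffer; the second hunk closes the path where `xfrm_add_acquire()` skipped this validation entirely.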
-1
scripts/dtc/dtc-lexer.l
··· 23 23 #include "srcpos.h"
24 24 #include "dtc-parser.tab.h"
25 25
26 - YYLTYPE yylloc;
27 26 extern bool treesource_error;
28 27
29 28 /* CAUTION: this will stop working if we ever use yyless() or yyunput() */
+27 -4
scripts/parse-maintainers.pl
··· 8 8 my $output_file = "MAINTAINERS.new";
9 9 my $output_section = "SECTION.new";
10 10 my $help = 0;
11 -
11 + my $order = 0;
12 12 my $P = $0;
13 13
14 14 if (!GetOptions(
15 15 'input=s' => \$input_file,
16 16 'output=s' => \$output_file,
17 17 'section=s' => \$output_section,
18 + 'order!' => \$order,
18 19 'h|help|usage' => \$help,
19 20 )) {
20 21 die "$P: invalid argument - use --help if necessary\n";
··· 33 32 --input => MAINTAINERS file to read (default: MAINTAINERS)
34 33 --output => sorted MAINTAINERS file to write (default: MAINTAINERS.new)
35 34 --section => new sorted MAINTAINERS file to write to (default: SECTION.new)
35 + --order => Use the preferred section content output ordering (default: 0)
36 + Preferred ordering of section output is:
37 + M: Person acting as a maintainer
38 + R: Person acting as a patch reviewer
39 + L: Mailing list where patches should be sent
40 + S: Maintenance status
41 + W: URI for general information
42 + Q: URI for patchwork tracking
43 + B: URI for bug tracking/submission
44 + C: URI for chat
45 + P: URI or file for subsystem specific coding styles
46 + T: SCM tree type and location
47 + F: File and directory pattern
48 + X: File and directory exclusion pattern
49 + N: File glob
50 + K: Keyword - patch content regex
36 51
37 52 If <pattern match regexes> exist, then the sections that match the
38 53 regexes are not written to the output file but are written to the
··· 73 56
74 57 sub by_pattern($$) {
75 58 my ($a, $b) = @_;
76 - my $preferred_order = 'MRPLSWTQBCFXNK';
59 + my $preferred_order = 'MRLSWQBCPTFXNK';
77 60
78 61 my $a1 = uc(substr($a, 0, 1));
79 62 my $b1 = uc(substr($b, 0, 1));
··· 122 105 print $file $separator;
123 106 }
124 107 print $file $key . "\n";
125 - foreach my $pattern (sort by_pattern split('\n', %$hashref{$key})) {
126 - print $file ($pattern . "\n");
108 + if ($order) {
109 + foreach my $pattern (sort by_pattern split('\n', %$hashref{$key})) {
110 + print $file ($pattern . "\n");
111 + }
112 + } else {
113 + foreach my $pattern (split('\n', %$hashref{$key})) {
114 + print $file ($pattern . "\n");
115 + }
127 116 }
128 117 }
129 118 }
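When the script's new `--order` flag is set, each section's lines are sorted by the position of their leading tag letter in the preferred-order string, falling back to plain string comparison. The comparison can be sketched in C as a `qsort` comparator (an illustrative analogue of the script's `by_pattern`, not part of the kernel tree; lines are assumed to start with an uppercase tag letter):

```c
#include <stdlib.h>
#include <string.h>

/* Preferred tag order from the script above. */
static const char preferred[] = "MRLSWQBCPTFXNK";

/* Rank a section line by its leading tag letter; unknown tags sort last. */
static int rank(const char *line)
{
	const char *p = strchr(preferred, line[0]);

	return p ? (int)(p - preferred) : (int)sizeof(preferred);
}

/* qsort comparator over an array of section-line strings. */
static int by_pattern(const void *a, const void *b)
{
	const char *la = *(const char * const *)a;
	const char *lb = *(const char * const *)b;
	int ra = rank(la), rb = rank(lb);

	if (ra != rb)
		return ra - rb;
	return strcmp(la, lb);	/* same tag: fall back to string order */
}
```

This also shows why the hunk changes the string from 'MRPLSWTQBCFXNK' to 'MRLSWQBCPTFXNK': the comparator's order is only as good as the table, and the table now matches the documented ordering printed in the help text.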