Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.20-rc7 into usb-next

We need the USB changes in here for additional patches to be able to
apply cleanly.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3437 -1692
+8
CREDITS
··· 2541
 S: Victoria 3163
 S: Australia

+N: Eric Miao
+E: eric.y.miao@gmail.com
+D: MMP support
+
 N: Pauline Middelink
 E: middelin@polyware.nl
 D: General low-level bug fixes, /proc fixes, identd support
··· 4118
 S: 1507 145th Place SE #B5
 S: Bellevue, Washington 98007
 S: USA
+
+N: Haojian Zhuang
+E: haojian.zhuang@gmail.com
+D: MMP support

 N: Richard Zidlicky
 E: rz@linux-m68k.org, rdzidlic@geocities.com
+4 -1
Documentation/core-api/xarray.rst
··· 187
  * :c:func:`xa_erase_bh`
  * :c:func:`xa_erase_irq`
  * :c:func:`xa_cmpxchg`
+ * :c:func:`xa_cmpxchg_bh`
+ * :c:func:`xa_cmpxchg_irq`
  * :c:func:`xa_store_range`
  * :c:func:`xa_alloc`
  * :c:func:`xa_alloc_bh`
··· 265
 context, or :c:func:`xa_lock_irq` in process context and :c:func:`xa_lock`
 in the interrupt handler.  Some of the more common patterns have helper
 functions such as :c:func:`xa_store_bh`, :c:func:`xa_store_irq`,
-:c:func:`xa_erase_bh` and :c:func:`xa_erase_irq`.
+:c:func:`xa_erase_bh`, :c:func:`xa_erase_irq`, :c:func:`xa_cmpxchg_bh`
+and :c:func:`xa_cmpxchg_irq`.

 Sometimes you need to protect access to the XArray with a mutex because
 that lock sits above another mutex in the locking hierarchy.  That does
+10
Documentation/media/uapi/v4l/extended-controls.rst
··· 1505
 configuring a stateless hardware decoding pipeline for MPEG-2.
 The bitstream parameters are defined according to :ref:`mpeg2part2`.

+.. note::
+
+   This compound control is not yet part of the public kernel API and
+   it is expected to change.
+
 .. c:type:: v4l2_ctrl_mpeg2_slice_params

 .. cssclass:: longtable
··· 1629
 ``V4L2_CID_MPEG_VIDEO_MPEG2_QUANTIZATION (struct)``
     Specifies quantization matrices (as extracted from the bitstream) for the
     associated MPEG-2 slice data.
+
+    .. note::
+
+       This compound control is not yet part of the public kernel API and
+       it is expected to change.

 .. c:type:: v4l2_ctrl_mpeg2_quantization
+16 -7
MAINTAINERS
··· 1739
 M:	Matthias Brugger <matthias.bgg@gmail.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	linux-mediatek@lists.infradead.org (moderated for non-subscribers)
+W:	https://mtk.bcnfs.org/
+C:	irc://chat.freenode.net/linux-mediatek
 S:	Maintained
 F:	arch/arm/boot/dts/mt6*
 F:	arch/arm/boot/dts/mt7*
 F:	arch/arm/boot/dts/mt8*
 F:	arch/arm/mach-mediatek/
 F:	arch/arm64/boot/dts/mediatek/
+F:	drivers/soc/mediatek/
 N:	mtk
+N:	mt[678]
 K:	mediatek

 ARM/Mediatek USB3 PHY DRIVER
··· 4847

 DRM DRIVERS
 M:	David Airlie <airlied@linux.ie>
+M:	Daniel Vetter <daniel@ffwll.ch>
 L:	dri-devel@lists.freedesktop.org
 T:	git git://anongit.freedesktop.org/drm/drm
 B:	https://bugs.freedesktop.org/
··· 8944

 MARVELL 88E6XXX ETHERNET SWITCH FABRIC DRIVER
 M:	Andrew Lunn <andrew@lunn.ch>
-M:	Vivien Didelot <vivien.didelot@savoirfairelinux.com>
+M:	Vivien Didelot <vivien.didelot@gmail.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/dsa/mv88e6xxx/
··· 9448
 F:	drivers/media/platform/mtk-vpu/
 F:	Documentation/devicetree/bindings/media/mediatek-vcodec.txt
 F:	Documentation/devicetree/bindings/media/mediatek-vpu.txt
+
+MEDIATEK MT76 WIRELESS LAN DRIVER
+M:	Felix Fietkau <nbd@nbd.name>
+M:	Lorenzo Bianconi <lorenzo.bianconi83@gmail.com>
+L:	linux-wireless@vger.kernel.org
+S:	Maintained
+F:	drivers/net/wireless/mediatek/mt76/

 MEDIATEK MT7601U WIRELESS LAN DRIVER
 M:	Jakub Kicinski <kubakici@wp.pl>
··· 10018
 F:	drivers/media/radio/radio-miropcm20*

 MMP SUPPORT
-M:	Eric Miao <eric.y.miao@gmail.com>
-M:	Haojian Zhuang <haojian.zhuang@gmail.com>
+R:	Lubomir Rintel <lkundrak@v3.sk>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-T:	git git://github.com/hzhuang1/linux.git
-T:	git git://git.linaro.org/people/ycmiao/pxa-linux.git
-S:	Maintained
+S:	Odd Fixes
 F:	arch/arm/boot/dts/mmp*
 F:	arch/arm/mach-mmp/
··· 10426

 NETWORKING [DSA]
 M:	Andrew Lunn <andrew@lunn.ch>
-M:	Vivien Didelot <vivien.didelot@savoirfairelinux.com>
+M:	Vivien Didelot <vivien.didelot@gmail.com>
 M:	Florian Fainelli <f.fainelli@gmail.com>
 S:	Maintained
 F:	Documentation/devicetree/bindings/net/dsa/
+1 -1
Makefile
··· 2
 VERSION = 4
 PATCHLEVEL = 20
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
 NAME = Shy Crocodile

 # *DOCUMENTATION*
+1
arch/alpha/kernel/setup.c
··· 634

	/* Find our memory.  */
	setup_memory(kernel_end);
+	memblock_set_bottom_up(true);

	/* First guess at cpu cache sizes.  Do this before init_arch.  */
	determine_cpu_caches(cpu->type);
+3 -3
arch/alpha/mm/numa.c
··· 144
	if (!nid && (node_max_pfn < end_kernel_pfn || node_min_pfn > start_kernel_pfn))
		panic("kernel loaded out of ram");

+	memblock_add(PFN_PHYS(node_min_pfn),
+		     (node_max_pfn - node_min_pfn) << PAGE_SHIFT);
+
	/* Zone start phys-addr must be 2^(MAX_ORDER-1) aligned.
	   Note that we round this down, not up - node memory
	   has much larger alignment than 8Mb, so it's safe. */
	node_min_pfn &= ~((1UL << (MAX_ORDER-1))-1);
-
-	memblock_add(PFN_PHYS(node_min_pfn),
-		     (node_max_pfn - node_min_pfn) << PAGE_SHIFT);

	NODE_DATA(nid)->node_start_pfn = node_min_pfn;
	NODE_DATA(nid)->node_present_pages = node_max_pfn - node_min_pfn;
+2 -2
arch/arm/boot/dts/arm-realview-pb1176.dts
··· 45
	};

	/* The voltage to the MMC card is hardwired at 3.3V */
-	vmmc: fixedregulator@0 {
+	vmmc: regulator-vmmc {
		compatible = "regulator-fixed";
		regulator-name = "vmmc";
		regulator-min-microvolt = <3300000>;
··· 53
		regulator-boot-on;
	};

-	veth: fixedregulator@0 {
+	veth: regulator-veth {
		compatible = "regulator-fixed";
		regulator-name = "veth";
		regulator-min-microvolt = <3300000>;
+2 -2
arch/arm/boot/dts/arm-realview-pb11mp.dts
··· 145
	};

	/* The voltage to the MMC card is hardwired at 3.3V */
-	vmmc: fixedregulator@0 {
+	vmmc: regulator-vmmc {
		compatible = "regulator-fixed";
		regulator-name = "vmmc";
		regulator-min-microvolt = <3300000>;
··· 153
		regulator-boot-on;
	};

-	veth: fixedregulator@0 {
+	veth: regulator-veth {
		compatible = "regulator-fixed";
		regulator-name = "veth";
		regulator-min-microvolt = <3300000>;
+1 -1
arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts
··· 31

	wifi_pwrseq: wifi-pwrseq {
		compatible = "mmc-pwrseq-simple";
-		reset-gpios = <&expgpio 1 GPIO_ACTIVE_HIGH>;
+		reset-gpios = <&expgpio 1 GPIO_ACTIVE_LOW>;
	};
 };
+1 -1
arch/arm/boot/dts/bcm2837-rpi-3-b.dts
··· 26

	wifi_pwrseq: wifi-pwrseq {
		compatible = "mmc-pwrseq-simple";
-		reset-gpios = <&expgpio 1 GPIO_ACTIVE_HIGH>;
+		reset-gpios = <&expgpio 1 GPIO_ACTIVE_LOW>;
	};
 };
+7 -2
arch/arm/boot/dts/imx7d-nitrogen7.dts
··· 86
		compatible = "regulator-fixed";
		regulator-min-microvolt = <3300000>;
		regulator-max-microvolt = <3300000>;
-		clocks = <&clks IMX7D_CLKO2_ROOT_DIV>;
-		clock-names = "slow";
		regulator-name = "reg_wlan";
		startup-delay-us = <70000>;
		gpio = <&gpio4 21 GPIO_ACTIVE_HIGH>;
		enable-active-high;
+	};
+
+	usdhc2_pwrseq: usdhc2_pwrseq {
+		compatible = "mmc-pwrseq-simple";
+		clocks = <&clks IMX7D_CLKO2_ROOT_DIV>;
+		clock-names = "ext_clock";
	};
 };
··· 379
	bus-width = <4>;
	non-removable;
	vmmc-supply = <&reg_wlan>;
+	mmc-pwrseq = <&usdhc2_pwrseq>;
	cap-power-off-card;
	keep-power-in-suspend;
	status = "okay";
+21 -1
arch/arm/boot/dts/imx7d-pico.dtsi
··· 100
		regulator-min-microvolt = <1800000>;
		regulator-max-microvolt = <1800000>;
	};
+
+	usdhc2_pwrseq: usdhc2_pwrseq {
+		compatible = "mmc-pwrseq-simple";
+		clocks = <&clks IMX7D_CLKO2_ROOT_DIV>;
+		clock-names = "ext_clock";
+	};
+};
+
+&clks {
+	assigned-clocks = <&clks IMX7D_CLKO2_ROOT_SRC>,
+			  <&clks IMX7D_CLKO2_ROOT_DIV>;
+	assigned-clock-parents = <&clks IMX7D_CKIL>;
+	assigned-clock-rates = <0>, <32768>;
 };

 &i2c4 {
··· 212
 &usdhc2 { /* Wifi SDIO */
	pinctrl-names = "default";
-	pinctrl-0 = <&pinctrl_usdhc2>;
+	pinctrl-0 = <&pinctrl_usdhc2 &pinctrl_wifi_clk>;
	no-1-8-v;
	non-removable;
	keep-power-in-suspend;
	wakeup-source;
	vmmc-supply = <&reg_ap6212>;
+	mmc-pwrseq = <&usdhc2_pwrseq>;
	status = "okay";
 };
··· 315
 };

 &iomuxc_lpsr {
+	pinctrl_wifi_clk: wificlkgrp {
+		fsl,pins = <
+			MX7D_PAD_LPSR_GPIO1_IO03__CCM_CLKO2	0x7d
+		>;
+	};
+
	pinctrl_wdog: wdoggrp {
		fsl,pins = <
			MX7D_PAD_LPSR_GPIO1_IO00__WDOG1_WDOG_B	0x74
+2 -2
arch/arm/boot/dts/sun8i-a83t-bananapi-m3.dts
··· 314

 &reg_dldo3 {
	regulator-always-on;
-	regulator-min-microvolt = <2500000>;
-	regulator-max-microvolt = <2500000>;
+	regulator-min-microvolt = <3300000>;
+	regulator-max-microvolt = <3300000>;
	regulator-name = "vcc-pd";
 };
+1 -1
arch/arm/mach-imx/cpuidle-imx6sx.c
··· 110
	 * except for power up sw2iso which need to be
	 * larger than LDO ramp up time.
	 */
-	imx_gpc_set_arm_power_up_timing(2, 1);
+	imx_gpc_set_arm_power_up_timing(0xf, 1);
	imx_gpc_set_arm_power_down_timing(1, 1);

	return cpuidle_register(&imx6sx_cpuidle_driver, NULL);
+4 -2
arch/arm/mach-mmp/cputype.h
··· 44
 #define cpu_is_pxa910()	(0)
 #endif

-#ifdef CONFIG_CPU_MMP2
+#if defined(CONFIG_CPU_MMP2) || defined(CONFIG_MACH_MMP2_DT)
 static inline int cpu_is_mmp2(void)
 {
-	return (((read_cpuid_id() >> 8) & 0xff) == 0x58);
+	return (((read_cpuid_id() >> 8) & 0xff) == 0x58) &&
+		(((mmp_chip_id & 0xfff) == 0x410) ||
+		 ((mmp_chip_id & 0xfff) == 0x610));
 }
 #else
 #define cpu_is_mmp2()	(0)
-4
arch/arm64/boot/dts/marvell/armada-ap806-quad.dtsi
··· 20
			compatible = "arm,cortex-a72", "arm,armv8";
			reg = <0x000>;
			enable-method = "psci";
-			cpu-idle-states = <&CPU_SLEEP_0>;
		};
		cpu1: cpu@1 {
			device_type = "cpu";
			compatible = "arm,cortex-a72", "arm,armv8";
			reg = <0x001>;
			enable-method = "psci";
-			cpu-idle-states = <&CPU_SLEEP_0>;
		};
		cpu2: cpu@100 {
			device_type = "cpu";
			compatible = "arm,cortex-a72", "arm,armv8";
			reg = <0x100>;
			enable-method = "psci";
-			cpu-idle-states = <&CPU_SLEEP_0>;
		};
		cpu3: cpu@101 {
			device_type = "cpu";
			compatible = "arm,cortex-a72", "arm,armv8";
			reg = <0x101>;
			enable-method = "psci";
-			cpu-idle-states = <&CPU_SLEEP_0>;
		};
	};
 };
-27
arch/arm64/boot/dts/marvell/armada-ap806.dtsi
··· 28
		method = "smc";
	};

-	cpus {
-		#address-cells = <1>;
-		#size-cells = <0>;
-
-		idle_states {
-			entry_method = "arm,pcsi";
-
-			CPU_SLEEP_0: cpu-sleep-0 {
-				compatible = "arm,idle-state";
-				local-timer-stop;
-				arm,psci-suspend-param = <0x0010000>;
-				entry-latency-us = <80>;
-				exit-latency-us = <160>;
-				min-residency-us = <320>;
-			};
-
-			CLUSTER_SLEEP_0: cluster-sleep-0 {
-				compatible = "arm,idle-state";
-				local-timer-stop;
-				arm,psci-suspend-param = <0x1010000>;
-				entry-latency-us = <500>;
-				exit-latency-us = <1000>;
-				min-residency-us = <2500>;
-			};
-		};
-	};
-
	ap806 {
		#address-cells = <2>;
		#size-cells = <2>;
+6 -1
arch/arm64/boot/dts/mediatek/mt7622-bananapi-bpi-r64.dts
··· 16
	model = "Bananapi BPI-R64";
	compatible = "bananapi,bpi-r64", "mediatek,mt7622";

+	aliases {
+		serial0 = &uart0;
+	};
+
	chosen {
-		bootargs = "earlycon=uart8250,mmio32,0x11002000 console=ttyS0,115200n1 swiotlb=512";
+		stdout-path = "serial0:115200n8";
+		bootargs = "earlycon=uart8250,mmio32,0x11002000 swiotlb=512";
	};

	cpus {
+6 -1
arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
··· 17
	model = "MediaTek MT7622 RFB1 board";
	compatible = "mediatek,mt7622-rfb1", "mediatek,mt7622";

+	aliases {
+		serial0 = &uart0;
+	};
+
	chosen {
-		bootargs = "earlycon=uart8250,mmio32,0x11002000 console=ttyS0,115200n1 swiotlb=512";
+		stdout-path = "serial0:115200n8";
+		bootargs = "earlycon=uart8250,mmio32,0x11002000 swiotlb=512";
	};

	cpus {
-10
arch/arm64/boot/dts/mediatek/mt7622.dtsi
··· 227
		#reset-cells = <1>;
	};

-	timer: timer@10004000 {
-		compatible = "mediatek,mt7622-timer",
-			     "mediatek,mt6577-timer";
-		reg = <0 0x10004000 0 0x80>;
-		interrupts = <GIC_SPI 152 IRQ_TYPE_LEVEL_LOW>;
-		clocks = <&infracfg CLK_INFRA_APXGPT_PD>,
-			 <&topckgen CLK_TOP_RTC>;
-		clock-names = "system-clk", "rtc-clk";
-	};
-
	scpsys: scpsys@10006000 {
		compatible = "mediatek,mt7622-scpsys",
			     "syscon";
-9
arch/arm64/include/asm/memory.h
··· 35
 #define PCI_IO_SIZE		SZ_16M

 /*
- * Log2 of the upper bound of the size of a struct page. Used for sizing
- * the vmemmap region only, does not affect actual memory footprint.
- * We don't use sizeof(struct page) directly since taking its size here
- * requires its definition to be available at this point in the inclusion
- * chain, and it may not be a power of 2 in the first place.
- */
-#define STRUCT_PAGE_MAX_SHIFT	6
-
-/*
  * VMEMMAP_SIZE - allows the whole linear region to be covered by
  * a struct page array
  */
+1 -1
arch/arm64/mm/dma-mapping.c
··· 429
				   prot,
				   __builtin_return_address(0));
	if (addr) {
-		memset(addr, 0, size);
		if (!coherent)
			__dma_flush_area(page_to_virt(page), iosize);
+		memset(addr, 0, size);
	} else {
		iommu_dma_unmap_page(dev, *handle, iosize, 0, attrs);
		dma_release_from_contiguous(dev, page,
-8
arch/arm64/mm/init.c
··· 610
	BUILD_BUG_ON(TASK_SIZE_32 > TASK_SIZE_64);
 #endif

-#ifdef CONFIG_SPARSEMEM_VMEMMAP
-	/*
-	 * Make sure we chose the upper bound of sizeof(struct page)
-	 * correctly when sizing the VMEMMAP array.
-	 */
-	BUILD_BUG_ON(sizeof(struct page) > (1 << STRUCT_PAGE_MAX_SHIFT));
-#endif
-
	if (PAGE_SIZE >= 16384 && get_num_physpages() <= 128) {
		extern int sysctl_overcommit_memory;
		/*
+1 -1
arch/powerpc/boot/Makefile
··· 197
 $(obj)/zImage.coff.lds $(obj)/zImage.ps3.lds : $(obj)/%: $(srctree)/$(src)/%.S
	$(Q)cp $< $@

-$(obj)/serial.c: $(obj)/autoconf.h
+$(srctree)/$(src)/serial.c: $(obj)/autoconf.h

 $(obj)/autoconf.h: $(obj)/%: $(objtree)/include/generated/%
	$(Q)cp $< $@
+3 -1
arch/powerpc/boot/crt0.S
··· 15
 RELA = 7
 RELACOUNT = 0x6ffffff9

-	.text
+	.data
 /* A procedure descriptor used when booting this as a COFF file.
  * When making COFF, this comes first in the link and we're
  * linked at 0x500000.
··· 23
	.globl	_zimage_start_opd
 _zimage_start_opd:
	.long	0x500000, 0, 0, 0
+	.text
+	b	_zimage_start

 #ifdef __powerpc64__
 .balign 8
+2
arch/powerpc/include/asm/perf_event.h
··· 26
 #include <asm/ptrace.h>
 #include <asm/reg.h>

+#define perf_arch_bpf_user_pt_regs(regs) &regs->user_regs
+
 /*
  * Overload regs->result to specify whether we should use the MSR (result
  * is zero) or the SIAR (result is non zero).
-1
arch/powerpc/include/uapi/asm/Kbuild
··· 1
 # UAPI Header export list
 include include/uapi/asm-generic/Kbuild.asm

-generic-y += bpf_perf_event.h
 generic-y += param.h
 generic-y += poll.h
 generic-y += resource.h
+9
arch/powerpc/include/uapi/asm/bpf_perf_event.h
··· 1
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _UAPI__ASM_BPF_PERF_EVENT_H__
+#define _UAPI__ASM_BPF_PERF_EVENT_H__
+
+#include <asm/ptrace.h>
+
+typedef struct user_pt_regs bpf_user_pt_regs_t;
+
+#endif /* _UAPI__ASM_BPF_PERF_EVENT_H__ */
+5 -1
arch/powerpc/kernel/legacy_serial.c
··· 372

	/* Now find out if one of these is out firmware console */
	path = of_get_property(of_chosen, "linux,stdout-path", NULL);
+	if (path == NULL)
+		path = of_get_property(of_chosen, "stdout-path", NULL);
	if (path != NULL) {
		stdout = of_find_node_by_path(path);
		if (stdout)
··· 597
	/* We are getting a weird phandle from OF ... */
	/* ... So use the full path instead */
	name = of_get_property(of_chosen, "linux,stdout-path", NULL);
+	if (name == NULL)
+		name = of_get_property(of_chosen, "stdout-path", NULL);
	if (name == NULL) {
-		DBG(" no linux,stdout-path !\n");
+		DBG(" no stdout-path !\n");
		return -ENODEV;
	}
	prom_stdout = of_find_node_by_path(name);
+6 -1
arch/powerpc/kernel/msi.c
··· 34
 {
	struct pci_controller *phb = pci_bus_to_host(dev->bus);

-	phb->controller_ops.teardown_msi_irqs(dev);
+	/*
+	 * We can be called even when arch_setup_msi_irqs() returns -ENOSYS,
+	 * so check the pointer again.
+	 */
+	if (phb->controller_ops.teardown_msi_irqs)
+		phb->controller_ops.teardown_msi_irqs(dev);
 }
+6 -1
arch/powerpc/kernel/ptrace.c
··· 3266
	user_exit();

	if (test_thread_flag(TIF_SYSCALL_EMU)) {
-		ptrace_report_syscall(regs);
		/*
+		 * A nonzero return code from tracehook_report_syscall_entry()
+		 * tells us to prevent the syscall execution, but we are not
+		 * going to execute it anyway.
+		 *
		 * Returning -1 will skip the syscall execution. We want to
		 * avoid clobbering any register also, thus, not 'gotoing'
		 * skip label.
		 */
+		if (tracehook_report_syscall_entry(regs))
+			;
		return -1;
	}
+1
arch/powerpc/mm/dump_linuxpagetables.c
··· 19
 #include <linux/hugetlb.h>
 #include <linux/io.h>
 #include <linux/mm.h>
+#include <linux/highmem.h>
 #include <linux/sched.h>
 #include <linux/seq_file.h>
 #include <asm/fixmap.h>
+16 -3
arch/powerpc/mm/init_64.c
··· 188
	pr_debug("vmemmap_populate %lx..%lx, node %d\n", start, end, node);

	for (; start < end; start += page_size) {
-		void *p;
+		void *p = NULL;
		int rc;

		if (vmemmap_populated(start, page_size))
			continue;

+		/*
+		 * Allocate from the altmap first if we have one. This may
+		 * fail due to alignment issues when using 16MB hugepages, so
+		 * fall back to system memory if the altmap allocation fail.
+		 */
		if (altmap)
			p = altmap_alloc_block_buf(page_size, altmap);
-		else
+		if (!p)
			p = vmemmap_alloc_block_buf(page_size, node);
		if (!p)
			return -ENOMEM;
··· 260
 {
	unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift;
	unsigned long page_order = get_order(page_size);
+	unsigned long alt_start = ~0, alt_end = ~0;
+	unsigned long base_pfn;

	start = _ALIGN_DOWN(start, page_size);
+	if (altmap) {
+		alt_start = altmap->base_pfn;
+		alt_end = altmap->base_pfn + altmap->reserve +
+			  altmap->free + altmap->alloc + altmap->align;
+	}

	pr_debug("vmemmap_free %lx...%lx\n", start, end);
··· 292
		page = pfn_to_page(addr >> PAGE_SHIFT);
		section_base = pfn_to_page(vmemmap_section_start(start));
		nr_pages = 1 << page_order;
+		base_pfn = PHYS_PFN(addr);

-		if (altmap) {
+		if (base_pfn >= alt_start && base_pfn < alt_end) {
			vmem_altmap_free(altmap, nr_pages);
		} else if (PageReserved(page)) {
			/* allocated from bootmem */
+1 -2
arch/powerpc/platforms/pseries/Kconfig
··· 140
	  Bus device driver for GX bus based adapters.

 config PAPR_SCM
-	depends on PPC_PSERIES && MEMORY_HOTPLUG
-	select LIBNVDIMM
+	depends on PPC_PSERIES && MEMORY_HOTPLUG && LIBNVDIMM
	tristate "Support for the PAPR Storage Class Memory interface"
	help
	  Enable access to hypervisor provided storage class memory.
+30 -9
arch/powerpc/platforms/pseries/papr_scm.c
··· 55
	do {
		rc = plpar_hcall(H_SCM_BIND_MEM, ret, p->drc_index, 0,
				p->blocks, BIND_ANY_ADDR, token);
-		token = be64_to_cpu(ret[0]);
+		token = ret[0];
		cond_resched();
	} while (rc == H_BUSY);
··· 64
		return -ENXIO;
	}

-	p->bound_addr = be64_to_cpu(ret[1]);
+	p->bound_addr = ret[1];

	dev_dbg(&p->pdev->dev, "bound drc %x to %pR\n", p->drc_index, &p->res);
··· 82
	do {
		rc = plpar_hcall(H_SCM_UNBIND_MEM, ret, p->drc_index,
				p->bound_addr, p->blocks, token);
-		token = be64_to_cpu(ret);
+		token = ret[0];
		cond_resched();
	} while (rc == H_BUSY);
··· 223
		goto err;
	}

+	if (nvdimm_bus_check_dimm_count(p->bus, 1))
+		goto err;
+
	/* now add the region */

	memset(&mapping, 0, sizeof(mapping));
··· 260

 static int papr_scm_probe(struct platform_device *pdev)
 {
-	uint32_t drc_index, metadata_size, unit_cap[2];
	struct device_node *dn = pdev->dev.of_node;
+	u32 drc_index, metadata_size;
+	u64 blocks, block_size;
	struct papr_scm_priv *p;
+	const char *uuid_str;
+	u64 uuid[2];
	int rc;

	/* check we have all the required DT properties */
··· 274
		return -ENODEV;
	}

-	if (of_property_read_u32_array(dn, "ibm,unit-capacity", unit_cap, 2)) {
-		dev_err(&pdev->dev, "%pOF: missing unit-capacity!\n", dn);
+	if (of_property_read_u64(dn, "ibm,block-size", &block_size)) {
+		dev_err(&pdev->dev, "%pOF: missing block-size!\n", dn);
+		return -ENODEV;
+	}
+
+	if (of_property_read_u64(dn, "ibm,number-of-blocks", &blocks)) {
+		dev_err(&pdev->dev, "%pOF: missing number-of-blocks!\n", dn);
+		return -ENODEV;
+	}
+
+	if (of_property_read_string(dn, "ibm,unit-guid", &uuid_str)) {
+		dev_err(&pdev->dev, "%pOF: missing unit-guid!\n", dn);
		return -ENODEV;
	}
··· 298

	p->dn = dn;
	p->drc_index = drc_index;
-	p->block_size = unit_cap[0];
-	p->blocks = unit_cap[1];
+	p->block_size = block_size;
+	p->blocks = blocks;
+
+	/* We just need to ensure that set cookies are unique across */
+	uuid_parse(uuid_str, (uuid_t *) uuid);
+	p->nd_set.cookie1 = uuid[0];
+	p->nd_set.cookie2 = uuid[1];

	/* might be zero */
	p->metadata_size = metadata_size;
··· 317

	/* setup the resource for the newly bound range */
	p->res.start = p->bound_addr;
-	p->res.end   = p->bound_addr + p->blocks * p->block_size;
+	p->res.end   = p->bound_addr + p->blocks * p->block_size - 1;
	p->res.name  = pdev->name;
	p->res.flags = IORESOURCE_MEM;
+1
arch/sh/include/asm/io.h
··· 24
 #define __IO_PREFIX     generic
 #include <asm/io_generic.h>
 #include <asm/io_trapped.h>
+#include <asm-generic/pci_iomap.h>
 #include <mach/mangle-port.h>

 #define __raw_writeb(v,a)	(__chk_io_ptr(a), *(volatile u8 __force *)(a) = (v))
+1
arch/x86/include/asm/msr-index.h
··· 390
 #define MSR_F15H_NB_PERF_CTR		0xc0010241
 #define MSR_F15H_PTSC			0xc0010280
 #define MSR_F15H_IC_CFG			0xc0011021
+#define MSR_F15H_EX_CFG			0xc001102c

 /* Fam 10h MSRs */
 #define MSR_FAM10H_MMIO_CONF_BASE	0xc0010058
+2
arch/x86/kvm/vmx.c
··· 11985
		kunmap(vmx->nested.pi_desc_page);
		kvm_release_page_dirty(vmx->nested.pi_desc_page);
		vmx->nested.pi_desc_page = NULL;
+		vmx->nested.pi_desc = NULL;
+		vmcs_write64(POSTED_INTR_DESC_ADDR, -1ull);
	}
	page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->posted_intr_desc_addr);
	if (is_error_page(page))
+3 -1
arch/x86/kvm/x86.c
··· 2426
	case MSR_AMD64_PATCH_LOADER:
	case MSR_AMD64_BU_CFG2:
	case MSR_AMD64_DC_CFG:
+	case MSR_F15H_EX_CFG:
		break;

	case MSR_IA32_UCODE_REV:
··· 2722
	case MSR_AMD64_BU_CFG2:
	case MSR_IA32_PERF_CTL:
	case MSR_AMD64_DC_CFG:
+	case MSR_F15H_EX_CFG:
		msr_info->data = 0;
		break;
	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
··· 7448

 static void vcpu_scan_ioapic(struct kvm_vcpu *vcpu)
 {
-	if (!kvm_apic_hw_enabled(vcpu->arch.apic))
+	if (!kvm_apic_present(vcpu))
		return;

	bitmap_zero(vcpu->arch.ioapic_handled_vectors, 256);
+2 -1
block/bio.c
··· 1261
		if (ret)
			goto cleanup;
	} else {
-		zero_fill_bio(bio);
+		if (bmd->is_our_pages)
+			zero_fill_bio(bio);
		iov_iter_advance(iter, bio->bi_iter.bi_size);
	}
+1 -1
block/blk-zoned.c
··· 378
	struct page *page;
	int order;

-	for (order = get_order(size); order > 0; order--) {
+	for (order = get_order(size); order >= 0; order--) {
		page = alloc_pages_node(node, GFP_NOIO | __GFP_ZERO, order);
		if (page) {
			*nr_zones = min_t(unsigned int, *nr_zones,
+1 -1
drivers/clk/qcom/gcc-qcs404.c
··· 297
	.hw.init = &(struct clk_init_data){
		.name = "gpll0_out_main",
		.parent_names = (const char *[])
-				{ "gpll0_sleep_clk_src" },
+				{ "cxo" },
		.num_parents = 1,
		.ops = &clk_alpha_pll_ops,
	},
+7
drivers/crypto/chelsio/chtls/chtls.h
··· 153
	unsigned int cdev_state;
 };

+struct chtls_listen {
+	struct chtls_dev *cdev;
+	struct sock *sk;
+};
+
 struct chtls_hws {
	struct sk_buff_head sk_recv_queue;
	u8 txqid;
··· 220
	u16 resv2;
	u32 delack_mode;
	u32 delack_seq;
+	u32 snd_win;
+	u32 rcv_win;

	void *passive_reap_next;        /* placeholder for passive */
	struct chtls_hws tlshws;
+52 -26
drivers/crypto/chelsio/chtls/chtls_cm.c
··· 21
 #include <linux/kallsyms.h>
 #include <linux/kprobes.h>
 #include <linux/if_vlan.h>
+#include <net/inet_common.h>
 #include <net/tcp.h>
 #include <net/dst.h>
··· 888
	return mtu_idx;
 }

-static unsigned int select_rcv_wnd(struct chtls_sock *csk)
-{
-	unsigned int rcvwnd;
-	unsigned int wnd;
-	struct sock *sk;
-
-	sk = csk->sk;
-	wnd = tcp_full_space(sk);
-
-	if (wnd < MIN_RCV_WND)
-		wnd = MIN_RCV_WND;
-
-	rcvwnd = MAX_RCV_WND;
-
-	csk_set_flag(csk, CSK_UPDATE_RCV_WND);
-	return min(wnd, rcvwnd);
-}
-
 static unsigned int select_rcv_wscale(int space, int wscale_ok, int win_clamp)
 {
	int wscale = 0;
··· 934
	csk->mtu_idx = chtls_select_mss(csk, dst_mtu(__sk_dst_get(sk)),
					req);
	opt0 = TCAM_BYPASS_F |
-	       WND_SCALE_V((tp)->rx_opt.rcv_wscale) |
+	       WND_SCALE_V(RCV_WSCALE(tp)) |
	       MSS_IDX_V(csk->mtu_idx) |
	       L2T_IDX_V(csk->l2t_entry->idx) |
	       NAGLE_V(!(tp->nonagle & TCP_NAGLE_OFF)) |
··· 986
	}
	BLOG_SKB_CB(skb)->backlog_rcv(sk, skb);
	return 0;
+}
+
+static void chtls_set_tcp_window(struct chtls_sock *csk)
+{
+	struct net_device *ndev = csk->egress_dev;
+	struct port_info *pi = netdev_priv(ndev);
+	unsigned int linkspeed;
+	u8 scale;
+
+	linkspeed = pi->link_cfg.speed;
+	scale = linkspeed / SPEED_10000;
+#define CHTLS_10G_RCVWIN (256 * 1024)
+	csk->rcv_win = CHTLS_10G_RCVWIN;
+	if (scale)
+		csk->rcv_win *= scale;
+#define CHTLS_10G_SNDWIN (256 * 1024)
+	csk->snd_win = CHTLS_10G_SNDWIN;
+	if (scale)
+		csk->snd_win *= scale;
 }

 static struct sock *chtls_recv_sock(struct sock *lsk,
··· 1069
	csk->port_id = port_id;
	csk->egress_dev = ndev;
	csk->tos = PASS_OPEN_TOS_G(ntohl(req->tos_stid));
+	chtls_set_tcp_window(csk);
+	tp->rcv_wnd = csk->rcv_win;
+	csk->sndbuf = csk->snd_win;
	csk->ulp_mode = ULP_MODE_TLS;
	step = cdev->lldi->nrxq / cdev->lldi->nchan;
	csk->rss_qid = cdev->lldi->rxq_ids[port_id * step];
··· 1081
	csk->sndbuf = newsk->sk_sndbuf;
	csk->smac_idx = cxgb4_tp_smt_idx(cdev->lldi->adapter_type,
					 cxgb4_port_viid(ndev));
-	tp->rcv_wnd = select_rcv_wnd(csk);
	RCV_WSCALE(tp) = select_rcv_wscale(tcp_full_space(newsk),
-					   WSCALE_OK(tp),
+					   sock_net(newsk)->
+						ipv4.sysctl_tcp_window_scaling,
					   tp->window_clamp);
	neigh_release(n);
	inet_inherit_port(&tcp_hashinfo, lsk, newsk);
··· 1135
	struct cpl_t5_pass_accept_rpl *rpl;
	struct cpl_pass_accept_req *req;
	struct listen_ctx *listen_ctx;
+	struct vlan_ethhdr *vlan_eh;
	struct request_sock *oreq;
	struct sk_buff *reply_skb;
	struct chtls_sock *csk;
··· 1148
	unsigned int stid;
	unsigned int len;
	unsigned int tid;
+	bool th_ecn, ect;
+	__u8 ip_dsfield; /* IPv4 tos or IPv6 dsfield */
+	u16 eth_hdr_len;
+	bool ecn_ok;

	req = cplhdr(skb) + RSS_HDR;
	tid = GET_TID(req);
··· 1190
	oreq->mss = 0;
	oreq->ts_recent = 0;

-	eh = (struct ethhdr *)(req + 1);
-	iph = (struct iphdr *)(eh + 1);
+	eth_hdr_len = T6_ETH_HDR_LEN_G(ntohl(req->hdr_len));
+	if (eth_hdr_len == ETH_HLEN) {
+		eh = (struct ethhdr *)(req + 1);
+		iph = (struct iphdr *)(eh + 1);
+		network_hdr = (void *)(eh + 1);
+	} else {
+		vlan_eh = (struct vlan_ethhdr *)(req + 1);
+		iph = (struct iphdr *)(vlan_eh + 1);
+		network_hdr = (void *)(vlan_eh + 1);
+	}
	if (iph->version != 0x4)
		goto free_oreq;

-	network_hdr = (void *)(eh + 1);
	tcph = (struct tcphdr *)(iph + 1);
+	skb_set_network_header(skb, (void *)iph - (void *)req);

	tcp_rsk(oreq)->tfo_listener = false;
	tcp_rsk(oreq)->rcv_isn = ntohl(tcph->seq);
	chtls_set_req_port(oreq, tcph->source, tcph->dest);
-	inet_rsk(oreq)->ecn_ok = 0;
	chtls_set_req_addr(oreq, iph->daddr, iph->saddr);
+	ip_dsfield = ipv4_get_dsfield(iph);
-	if (req->tcpopt.wsf <= 14) {
+	if (req->tcpopt.wsf <= 14 &&
+	    sock_net(sk)->ipv4.sysctl_tcp_window_scaling) {
		inet_rsk(oreq)->wscale_ok = 1;
		inet_rsk(oreq)->snd_wscale = req->tcpopt.wsf;
	}
	inet_rsk(oreq)->ir_iif = sk->sk_bound_dev_if;
+	th_ecn = tcph->ece && tcph->cwr;
+	if (th_ecn) {
+		ect = !INET_ECN_is_not_ect(ip_dsfield);
+		ecn_ok = sock_net(sk)->ipv4.sysctl_tcp_ecn;
+		if ((!ect && ecn_ok) || tcp_ca_needs_ecn(sk))
+			inet_rsk(oreq)->ecn_ok = 1;
+	}

	newsk = chtls_recv_sock(sk, oreq, network_hdr, req, cdev);
	if (!newsk)
+8 -12
drivers/crypto/chelsio/chtls/chtls_io.c
··· 397 397
398 398 req_wr->lsodisable_to_flags =
399 399 htonl(TX_ULP_MODE_V(ULP_MODE_TLS) |
400 - FW_OFLD_TX_DATA_WR_URGENT_V(skb_urgent(skb)) |
400 + TX_URG_V(skb_urgent(skb)) |
401 401 T6_TX_FORCE_F | wr_ulp_mode_force |
402 402 TX_SHOVE_V((!csk_flag(sk, CSK_TX_MORE_DATA)) &&
403 403 skb_queue_empty(&csk->txq)));
··· 534 534 FW_OFLD_TX_DATA_WR_SHOVE_F);
535 535
536 536 req->tunnel_to_proxy = htonl(wr_ulp_mode_force |
537 - FW_OFLD_TX_DATA_WR_URGENT_V(skb_urgent(skb)) |
538 - FW_OFLD_TX_DATA_WR_SHOVE_V((!csk_flag
539 - (sk, CSK_TX_MORE_DATA)) &&
540 - skb_queue_empty(&csk->txq)));
537 + TX_URG_V(skb_urgent(skb)) |
538 + TX_SHOVE_V((!csk_flag(sk, CSK_TX_MORE_DATA)) &&
539 + skb_queue_empty(&csk->txq)));
541 540 req->plen = htonl(len);
542 541 }
543 542
··· 994 995 int mss, flags, err;
995 996 int recordsz = 0;
996 997 int copied = 0;
997 - int hdrlen = 0;
998 998 long timeo;
999 999
1000 1000 lock_sock(sk);
··· 1030 1032
1031 1033 recordsz = tls_header_read(&hdr, &msg->msg_iter);
1032 1034 size -= TLS_HEADER_LENGTH;
1033 - hdrlen += TLS_HEADER_LENGTH;
1035 + copied += TLS_HEADER_LENGTH;
1034 1036 csk->tlshws.txleft = recordsz;
1035 1037 csk->tlshws.type = hdr.type;
1036 1038 if (skb)
··· 1081 1083 int off = TCP_OFF(sk);
1082 1084 bool merge;
1083 1085
1084 - if (!page)
1085 - goto wait_for_memory;
1086 -
1087 - pg_size <<= compound_order(page);
1086 + if (page)
1087 + pg_size <<= compound_order(page);
1088 1088 if (off < pg_size &&
1089 1089 skb_can_coalesce(skb, i, page, off)) {
1090 1090 merge = 1;
··· 1183 1187 chtls_tcp_push(sk, flags);
1184 1188 done:
1185 1189 release_sock(sk);
1186 - return copied + hdrlen;
1190 + return copied;
1187 1191 do_fault:
1188 1192 if (!skb->len) {
1189 1193 __skb_unlink(skb, &csk->txq);
+63 -42
drivers/crypto/chelsio/chtls/chtls_main.c
··· 55 55 static int listen_notify_handler(struct notifier_block *this,
56 56 unsigned long event, void *data)
57 57 {
58 - struct chtls_dev *cdev;
59 - struct sock *sk;
60 - int ret;
58 + struct chtls_listen *clisten;
59 + int ret = NOTIFY_DONE;
61 60
62 - sk = data;
63 - ret = NOTIFY_DONE;
61 + clisten = (struct chtls_listen *)data;
64 62
65 63 switch (event) {
66 64 case CHTLS_LISTEN_START:
65 + ret = chtls_listen_start(clisten->cdev, clisten->sk);
66 + kfree(clisten);
67 + break;
67 68 case CHTLS_LISTEN_STOP:
68 - mutex_lock(&cdev_list_lock);
69 - list_for_each_entry(cdev, &cdev_list, list) {
70 - if (event == CHTLS_LISTEN_START)
71 - ret = chtls_listen_start(cdev, sk);
72 - else
73 - chtls_listen_stop(cdev, sk);
74 - }
75 - mutex_unlock(&cdev_list_lock);
69 + chtls_listen_stop(clisten->cdev, clisten->sk);
70 + kfree(clisten);
76 71 break;
77 72 }
78 73 return ret;
··· 85 90 return 0;
86 91 }
87 92
88 - static int chtls_start_listen(struct sock *sk)
93 + static int chtls_start_listen(struct chtls_dev *cdev, struct sock *sk)
89 94 {
95 + struct chtls_listen *clisten;
90 96 int err;
91 97
92 98 if (sk->sk_protocol != IPPROTO_TCP)
··· 98 102 return -EADDRNOTAVAIL;
99 103
100 104 sk->sk_backlog_rcv = listen_backlog_rcv;
105 + clisten = kmalloc(sizeof(*clisten), GFP_KERNEL);
106 + if (!clisten)
107 + return -ENOMEM;
108 + clisten->cdev = cdev;
109 + clisten->sk = sk;
101 110 mutex_lock(&notify_mutex);
102 111 err = raw_notifier_call_chain(&listen_notify_list,
103 - CHTLS_LISTEN_START, sk);
112 + CHTLS_LISTEN_START, clisten);
104 113 mutex_unlock(&notify_mutex);
105 114 return err;
106 115 }
107 116
108 - static void chtls_stop_listen(struct sock *sk)
117 + static void chtls_stop_listen(struct chtls_dev *cdev, struct sock *sk)
109 118 {
119 + struct chtls_listen *clisten;
120 +
110 121 if (sk->sk_protocol != IPPROTO_TCP)
111 122 return;
112 123
124 + clisten = kmalloc(sizeof(*clisten), GFP_KERNEL);
125 + if (!clisten)
126 + return;
127 + clisten->cdev = cdev;
128 + clisten->sk = sk;
113 129 mutex_lock(&notify_mutex);
114 130 raw_notifier_call_chain(&listen_notify_list,
115 - CHTLS_LISTEN_STOP, sk);
131 + CHTLS_LISTEN_STOP, clisten);
116 132 mutex_unlock(&notify_mutex);
117 133 }
118 134
··· 146 138
147 139 static int chtls_create_hash(struct tls_device *dev, struct sock *sk)
148 140 {
141 + struct chtls_dev *cdev = to_chtls_dev(dev);
142 +
149 143 if (sk->sk_state == TCP_LISTEN)
150 - return chtls_start_listen(sk);
144 + return chtls_start_listen(cdev, sk);
151 145 return 0;
152 146 }
153 147
154 148 static void chtls_destroy_hash(struct tls_device *dev, struct sock *sk)
155 149 {
150 + struct chtls_dev *cdev = to_chtls_dev(dev);
151 +
156 152 if (sk->sk_state == TCP_LISTEN)
157 - chtls_stop_listen(sk);
153 + chtls_stop_listen(cdev, sk);
154 + }
155 +
156 + static void chtls_free_uld(struct chtls_dev *cdev)
157 + {
158 + int i;
159 +
160 + tls_unregister_device(&cdev->tlsdev);
161 + kvfree(cdev->kmap.addr);
162 + idr_destroy(&cdev->hwtid_idr);
163 + for (i = 0; i < (1 << RSPQ_HASH_BITS); i++)
164 + kfree_skb(cdev->rspq_skb_cache[i]);
165 + kfree(cdev->lldi);
166 + kfree_skb(cdev->askb);
167 + kfree(cdev);
168 + }
169 +
170 + static inline void chtls_dev_release(struct kref *kref)
171 + {
172 + struct chtls_dev *cdev;
173 + struct tls_device *dev;
174 +
175 + dev = container_of(kref, struct tls_device, kref);
176 + cdev = to_chtls_dev(dev);
177 + chtls_free_uld(cdev);
158 178 }
159 179
160 180 static void chtls_register_dev(struct chtls_dev *cdev)
··· 195 159 tlsdev->feature = chtls_inline_feature;
196 160 tlsdev->hash = chtls_create_hash;
197 161 tlsdev->unhash = chtls_destroy_hash;
198 - tls_register_device(&cdev->tlsdev);
162 + tlsdev->release = chtls_dev_release;
163 + kref_init(&tlsdev->kref);
164 + tls_register_device(tlsdev);
199 165 cdev->cdev_state = CHTLS_CDEV_STATE_UP;
200 - }
201 -
202 - static void chtls_unregister_dev(struct chtls_dev *cdev)
203 - {
204 - tls_unregister_device(&cdev->tlsdev);
205 166 }
206 167
207 168 static void process_deferq(struct work_struct *task_param)
··· 295 262 return NULL;
296 263 }
297 264
298 - static void chtls_free_uld(struct chtls_dev *cdev)
299 - {
300 - int i;
301 -
302 - chtls_unregister_dev(cdev);
303 - kvfree(cdev->kmap.addr);
304 - idr_destroy(&cdev->hwtid_idr);
305 - for (i = 0; i < (1 << RSPQ_HASH_BITS); i++)
306 - kfree_skb(cdev->rspq_skb_cache[i]);
307 - kfree(cdev->lldi);
308 - kfree_skb(cdev->askb);
309 - kfree(cdev);
310 - }
311 -
312 265 static void chtls_free_all_uld(void)
313 266 {
314 267 struct chtls_dev *cdev, *tmp;
315 268
316 269 mutex_lock(&cdev_mutex);
317 270 list_for_each_entry_safe(cdev, tmp, &cdev_list, list) {
318 - if (cdev->cdev_state == CHTLS_CDEV_STATE_UP)
319 - chtls_free_uld(cdev);
271 + if (cdev->cdev_state == CHTLS_CDEV_STATE_UP) {
272 + list_del(&cdev->list);
273 + kref_put(&cdev->tlsdev.kref, cdev->tlsdev.release);
274 + }
320 275 }
321 276 mutex_unlock(&cdev_mutex);
322 277 }
··· 325 304 mutex_lock(&cdev_mutex);
326 305 list_del(&cdev->list);
327 306 mutex_unlock(&cdev_mutex);
328 - chtls_free_uld(cdev);
307 + kref_put(&cdev->tlsdev.kref, cdev->tlsdev.release);
329 308 break;
330 309 default:
331 310 break;
+30 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
··· 330 330 case CHIP_TOPAZ:
331 331 if (((adev->pdev->device == 0x6900) && (adev->pdev->revision == 0x81)) ||
332 332 ((adev->pdev->device == 0x6900) && (adev->pdev->revision == 0x83)) ||
333 - ((adev->pdev->device == 0x6907) && (adev->pdev->revision == 0x87))) {
333 + ((adev->pdev->device == 0x6907) && (adev->pdev->revision == 0x87)) ||
334 + ((adev->pdev->device == 0x6900) && (adev->pdev->revision == 0xD1)) ||
335 + ((adev->pdev->device == 0x6900) && (adev->pdev->revision == 0xD3))) {
334 336 info->is_kicker = true;
335 337 strcpy(fw_name, "amdgpu/topaz_k_smc.bin");
336 338 } else
··· 353 351 if (type == CGS_UCODE_ID_SMU) {
354 352 if (((adev->pdev->device == 0x67ef) &&
355 353 ((adev->pdev->revision == 0xe0) ||
356 - (adev->pdev->revision == 0xe2) ||
357 354 (adev->pdev->revision == 0xe5))) ||
358 355 ((adev->pdev->device == 0x67ff) &&
359 356 ((adev->pdev->revision == 0xcf) ||
··· 360 359 (adev->pdev->revision == 0xff)))) {
361 360 info->is_kicker = true;
362 361 strcpy(fw_name, "amdgpu/polaris11_k_smc.bin");
363 - } else
362 + } else if ((adev->pdev->device == 0x67ef) &&
363 + (adev->pdev->revision == 0xe2)) {
364 + info->is_kicker = true;
365 + strcpy(fw_name, "amdgpu/polaris11_k2_smc.bin");
366 + } else {
364 367 strcpy(fw_name, "amdgpu/polaris11_smc.bin");
368 + }
365 369 } else if (type == CGS_UCODE_ID_SMU_SK) {
366 370 strcpy(fw_name, "amdgpu/polaris11_smc_sk.bin");
367 371 }
··· 381 375 (adev->pdev->revision == 0xe7) ||
382 376 (adev->pdev->revision == 0xef))) ||
383 377 ((adev->pdev->device == 0x6fdf) &&
384 - (adev->pdev->revision == 0xef))) {
378 + ((adev->pdev->revision == 0xef) ||
379 + (adev->pdev->revision == 0xff)))) {
385 380 info->is_kicker = true;
386 381 strcpy(fw_name, "amdgpu/polaris10_k_smc.bin");
387 - } else
382 + } else if ((adev->pdev->device == 0x67df) &&
383 + ((adev->pdev->revision == 0xe1) ||
384 + (adev->pdev->revision == 0xf7))) {
385 + info->is_kicker = true;
386 + strcpy(fw_name, "amdgpu/polaris10_k2_smc.bin");
387 + } else {
388 388 strcpy(fw_name, "amdgpu/polaris10_smc.bin");
389 + }
389 390 } else if (type == CGS_UCODE_ID_SMU_SK) {
390 391 strcpy(fw_name, "amdgpu/polaris10_smc_sk.bin");
391 392 }
392 393 break;
393 394 case CHIP_POLARIS12:
394 - strcpy(fw_name, "amdgpu/polaris12_smc.bin");
395 + if (((adev->pdev->device == 0x6987) &&
396 + ((adev->pdev->revision == 0xc0) ||
397 + (adev->pdev->revision == 0xc3))) ||
398 + ((adev->pdev->device == 0x6981) &&
399 + ((adev->pdev->revision == 0x00) ||
400 + (adev->pdev->revision == 0x01) ||
401 + (adev->pdev->revision == 0x10)))) {
402 + info->is_kicker = true;
403 + strcpy(fw_name, "amdgpu/polaris12_k_smc.bin");
404 + } else {
405 + strcpy(fw_name, "amdgpu/polaris12_smc.bin");
406 + }
395 407 break;
396 408 case CHIP_VEGAM:
397 409 strcpy(fw_name, "amdgpu/vegam_smc.bin");
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
··· 124 124 goto free_chunk;
125 125 }
126 126
127 + mutex_lock(&p->ctx->lock);
128 +
127 129 /* skip guilty context job */
128 130 if (atomic_read(&p->ctx->guilty) == 1) {
129 131 ret = -ECANCELED;
130 132 goto free_chunk;
131 133 }
132 -
133 - mutex_lock(&p->ctx->lock);
134 134
135 135 /* get chunks */
136 136 chunk_array_user = u64_to_user_ptr(cs->in.chunks);
+7
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 872 872 {0x1002, 0x6864, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
873 873 {0x1002, 0x6867, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
874 874 {0x1002, 0x6868, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
875 + {0x1002, 0x6869, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
876 + {0x1002, 0x686a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
877 + {0x1002, 0x686b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
875 878 {0x1002, 0x686c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
879 + {0x1002, 0x686d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
880 + {0x1002, 0x686e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
881 + {0x1002, 0x686f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
876 882 {0x1002, 0x687f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
877 883 /* Vega 12 */
878 884 {0x1002, 0x69A0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA12},
··· 891 885 {0x1002, 0x66A1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA20},
892 886 {0x1002, 0x66A2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA20},
893 887 {0x1002, 0x66A3, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA20},
888 + {0x1002, 0x66A4, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA20},
894 889 {0x1002, 0x66A7, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA20},
895 890 {0x1002, 0x66AF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA20},
896 891 /* Raven */
+7
drivers/gpu/drm/amd/amdkfd/kfd_device.c
··· 337 337 { 0x6864, &vega10_device_info }, /* Vega10 */
338 338 { 0x6867, &vega10_device_info }, /* Vega10 */
339 339 { 0x6868, &vega10_device_info }, /* Vega10 */
340 + { 0x6869, &vega10_device_info }, /* Vega10 */
341 + { 0x686A, &vega10_device_info }, /* Vega10 */
342 + { 0x686B, &vega10_device_info }, /* Vega10 */
340 343 { 0x686C, &vega10_vf_device_info }, /* Vega10 vf*/
344 + { 0x686D, &vega10_device_info }, /* Vega10 */
345 + { 0x686E, &vega10_device_info }, /* Vega10 */
346 + { 0x686F, &vega10_device_info }, /* Vega10 */
341 347 { 0x687F, &vega10_device_info }, /* Vega10 */
342 348 { 0x66a0, &vega20_device_info }, /* Vega20 */
343 349 { 0x66a1, &vega20_device_info }, /* Vega20 */
344 350 { 0x66a2, &vega20_device_info }, /* Vega20 */
345 351 { 0x66a3, &vega20_device_info }, /* Vega20 */
352 + { 0x66a4, &vega20_device_info }, /* Vega20 */
346 353 { 0x66a7, &vega20_device_info }, /* Vega20 */
347 354 { 0x66af, &vega20_device_info } /* Vega20 */
348 355 };
+1 -1
drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
··· 130 130 data->registry_data.disable_auto_wattman = 1;
131 131 data->registry_data.auto_wattman_debug = 0;
132 132 data->registry_data.auto_wattman_sample_period = 100;
133 - data->registry_data.fclk_gfxclk_ratio = 0x3F6CCCCD;
133 + data->registry_data.fclk_gfxclk_ratio = 0;
134 134 data->registry_data.auto_wattman_threshold = 50;
135 135 data->registry_data.gfxoff_controlled_by_driver = 1;
136 136 data->gfxoff_allowed = false;
+2
drivers/gpu/drm/amd/powerplay/inc/smu7_ppsmc.h
··· 386 386 #define PPSMC_MSG_AgmResetPsm ((uint16_t) 0x403)
387 387 #define PPSMC_MSG_ReadVftCell ((uint16_t) 0x404)
388 388
389 + #define PPSMC_MSG_ApplyAvfsCksOffVoltage ((uint16_t) 0x415)
390 +
389 391 #define PPSMC_MSG_GFX_CU_PG_ENABLE ((uint16_t) 0x280)
390 392 #define PPSMC_MSG_GFX_CU_PG_DISABLE ((uint16_t) 0x281)
391 393 #define PPSMC_MSG_GetCurrPkgPwr ((uint16_t) 0x282)
+6
drivers/gpu/drm/amd/powerplay/smumgr/polaris10_smumgr.c
··· 1985 1985
1986 1986 smum_send_msg_to_smc(hwmgr, PPSMC_MSG_EnableAvfs);
1987 1987
1988 + /* Apply avfs cks-off voltages to avoid the overshoot
1989 + * when switching to the highest sclk frequency
1990 + */
1991 + if (data->apply_avfs_cks_off_voltage)
1992 + smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ApplyAvfsCksOffVoltage);
1993 +
1988 1994 return 0;
1989 1995 }
1990 1996
+3
drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c
··· 37 37 MODULE_FIRMWARE("amdgpu/polaris10_smc.bin");
38 38 MODULE_FIRMWARE("amdgpu/polaris10_smc_sk.bin");
39 39 MODULE_FIRMWARE("amdgpu/polaris10_k_smc.bin");
40 + MODULE_FIRMWARE("amdgpu/polaris10_k2_smc.bin");
40 41 MODULE_FIRMWARE("amdgpu/polaris11_smc.bin");
41 42 MODULE_FIRMWARE("amdgpu/polaris11_smc_sk.bin");
42 43 MODULE_FIRMWARE("amdgpu/polaris11_k_smc.bin");
44 + MODULE_FIRMWARE("amdgpu/polaris11_k2_smc.bin");
43 45 MODULE_FIRMWARE("amdgpu/polaris12_smc.bin");
46 + MODULE_FIRMWARE("amdgpu/polaris12_k_smc.bin");
44 47 MODULE_FIRMWARE("amdgpu/vegam_smc.bin");
45 48 MODULE_FIRMWARE("amdgpu/vega10_smc.bin");
46 49 MODULE_FIRMWARE("amdgpu/vega10_acg_smc.bin");
+1 -1
drivers/gpu/drm/i915/gvt/fb_decoder.c
··· 235 235 plane->bpp = skl_pixel_formats[fmt].bpp;
236 236 plane->drm_format = skl_pixel_formats[fmt].drm_format;
237 237 } else {
238 - plane->tiled = !!(val & DISPPLANE_TILED);
238 + plane->tiled = val & DISPPLANE_TILED;
239 239 fmt = bdw_format_to_drm(val & DISPPLANE_PIXFORMAT_MASK);
240 240 plane->bpp = bdw_pixel_formats[fmt].bpp;
241 241 plane->drm_format = bdw_pixel_formats[fmt].drm_format;
+1
drivers/gpu/drm/i915/i915_drv.c
··· 1444 1444
1445 1445 intel_uncore_sanitize(dev_priv);
1446 1446
1447 + intel_gt_init_workarounds(dev_priv);
1447 1448 i915_gem_load_init_fences(dev_priv);
1448 1449
1449 1450 /* On the 945G/GM, the chipset reports the MSI capability on the
+9
drivers/gpu/drm/i915/i915_drv.h
··· 67 67 #include "intel_ringbuffer.h"
68 68 #include "intel_uncore.h"
69 69 #include "intel_wopcm.h"
70 + #include "intel_workarounds.h"
70 71 #include "intel_uc.h"
71 72
72 73 #include "i915_gem.h"
··· 1806 1805 int dpio_phy_iosf_port[I915_NUM_PHYS_VLV];
1807 1806
1808 1807 struct i915_workarounds workarounds;
1808 + struct i915_wa_list gt_wa_list;
1809 1809
1810 1810 struct i915_frontbuffer_tracking fb_tracking;
1811 1811
··· 2150 2148 struct delayed_work idle_work;
2151 2149
2152 2150 ktime_t last_init_time;
2151 +
2152 + struct i915_vma *scratch;
2153 2153 } gt;
2154 2154
2155 2155 /* perform PHY state sanity checks? */
··· 3872 3868 return CNL_HWS_CSB_WRITE_INDEX;
3873 3869 else
3874 3870 return I915_HWS_CSB_WRITE_INDEX;
3871 + }
3872 +
3873 + static inline u32 i915_scratch_offset(const struct drm_i915_private *i915)
3874 + {
3875 + return i915_ggtt_offset(i915->gt.scratch);
3875 3876 }
3876 3877
3877 3878 #endif
+52 -2
drivers/gpu/drm/i915/i915_gem.c
··· 5305 5305 }
5306 5306 }
5307 5307
5308 - intel_gt_workarounds_apply(dev_priv);
5308 + intel_gt_apply_workarounds(dev_priv);
5309 5309
5310 5310 i915_gem_init_swizzling(dev_priv);
5311 5311
··· 5500 5500 goto out_ctx;
5501 5501 }
5502 5502
5503 + static int
5504 + i915_gem_init_scratch(struct drm_i915_private *i915, unsigned int size)
5505 + {
5506 + struct drm_i915_gem_object *obj;
5507 + struct i915_vma *vma;
5508 + int ret;
5509 +
5510 + obj = i915_gem_object_create_stolen(i915, size);
5511 + if (!obj)
5512 + obj = i915_gem_object_create_internal(i915, size);
5513 + if (IS_ERR(obj)) {
5514 + DRM_ERROR("Failed to allocate scratch page\n");
5515 + return PTR_ERR(obj);
5516 + }
5517 +
5518 + vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
5519 + if (IS_ERR(vma)) {
5520 + ret = PTR_ERR(vma);
5521 + goto err_unref;
5522 + }
5523 +
5524 + ret = i915_vma_pin(vma, 0, 0, PIN_GLOBAL | PIN_HIGH);
5525 + if (ret)
5526 + goto err_unref;
5527 +
5528 + i915->gt.scratch = vma;
5529 + return 0;
5530 +
5531 + err_unref:
5532 + i915_gem_object_put(obj);
5533 + return ret;
5534 + }
5535 +
5536 + static void i915_gem_fini_scratch(struct drm_i915_private *i915)
5537 + {
5538 + i915_vma_unpin_and_release(&i915->gt.scratch, 0);
5539 + }
5540 +
5503 5541 int i915_gem_init(struct drm_i915_private *dev_priv)
5504 5542 {
5505 5543 int ret;
··· 5584 5546 goto err_unlock;
5585 5547 }
5586 5548
5587 - ret = i915_gem_contexts_init(dev_priv);
5549 + ret = i915_gem_init_scratch(dev_priv,
5550 + IS_GEN2(dev_priv) ? SZ_256K : PAGE_SIZE);
5588 5551 if (ret) {
5589 5552 GEM_BUG_ON(ret == -EIO);
5590 5553 goto err_ggtt;
5554 + }
5555 +
5556 + ret = i915_gem_contexts_init(dev_priv);
5557 + if (ret) {
5558 + GEM_BUG_ON(ret == -EIO);
5559 + goto err_scratch;
5591 5560 }
5592 5561
5593 5562 ret = intel_engines_init(dev_priv);
··· 5669 5624 err_context:
5670 5625 if (ret != -EIO)
5671 5626 i915_gem_contexts_fini(dev_priv);
5627 + err_scratch:
5628 + i915_gem_fini_scratch(dev_priv);
5672 5629 err_ggtt:
5673 5630 err_unlock:
5674 5631 intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
··· 5722 5675 intel_uc_fini(dev_priv);
5723 5676 i915_gem_cleanup_engines(dev_priv);
5724 5677 i915_gem_contexts_fini(dev_priv);
5678 + i915_gem_fini_scratch(dev_priv);
5725 5679 mutex_unlock(&dev_priv->drm.struct_mutex);
5680 +
5681 + intel_wa_list_free(&dev_priv->gt_wa_list);
5726 5682
5727 5683 intel_cleanup_gt_powersave(dev_priv);
5728 5684
+1 -6
drivers/gpu/drm/i915/i915_gem_execbuffer.c
··· 1268 1268 else if (gen >= 4)
1269 1269 len = 4;
1270 1270 else
1271 - len = 6;
1271 + len = 3;
1272 1272
1273 1273 batch = reloc_gpu(eb, vma, len);
1274 1274 if (IS_ERR(batch))
··· 1306 1306 *batch++ = addr;
1307 1307 *batch++ = target_offset;
1308 1308 } else {
1309 - *batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL;
1310 - *batch++ = addr;
1311 - *batch++ = target_offset;
1312 -
1313 - /* And again for good measure (blb/pnv) */
1314 1309 *batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL;
1315 1310 *batch++ = addr;
1316 1311 *batch++ = target_offset;
+1 -1
drivers/gpu/drm/i915/i915_gpu_error.c
··· 1495 1495 if (HAS_BROKEN_CS_TLB(i915))
1496 1496 ee->wa_batchbuffer =
1497 1497 i915_error_object_create(i915,
1498 - engine->scratch);
1498 + i915->gt.scratch);
1499 1499 request_record_user_bo(request, ee);
1500 1500
1501 1501 ee->ctx =
+2 -42
drivers/gpu/drm/i915/intel_engine_cs.c
··· 490 490 intel_engine_init_cmd_parser(engine);
491 491 }
492 492
493 - int intel_engine_create_scratch(struct intel_engine_cs *engine,
494 - unsigned int size)
495 - {
496 - struct drm_i915_gem_object *obj;
497 - struct i915_vma *vma;
498 - int ret;
499 -
500 - WARN_ON(engine->scratch);
501 -
502 - obj = i915_gem_object_create_stolen(engine->i915, size);
503 - if (!obj)
504 - obj = i915_gem_object_create_internal(engine->i915, size);
505 - if (IS_ERR(obj)) {
506 - DRM_ERROR("Failed to allocate scratch page\n");
507 - return PTR_ERR(obj);
508 - }
509 -
510 - vma = i915_vma_instance(obj, &engine->i915->ggtt.vm, NULL);
511 - if (IS_ERR(vma)) {
512 - ret = PTR_ERR(vma);
513 - goto err_unref;
514 - }
515 -
516 - ret = i915_vma_pin(vma, 0, 0, PIN_GLOBAL | PIN_HIGH);
517 - if (ret)
518 - goto err_unref;
519 -
520 - engine->scratch = vma;
521 - return 0;
522 -
523 - err_unref:
524 - i915_gem_object_put(obj);
525 - return ret;
526 - }
527 -
528 - void intel_engine_cleanup_scratch(struct intel_engine_cs *engine)
529 - {
530 - i915_vma_unpin_and_release(&engine->scratch, 0);
531 - }
532 -
533 493 static void cleanup_status_page(struct intel_engine_cs *engine)
534 494 {
535 495 if (HWS_NEEDS_PHYSICAL(engine->i915)) {
··· 664 704 {
665 705 struct drm_i915_private *i915 = engine->i915;
666 706
667 - intel_engine_cleanup_scratch(engine);
668 -
669 707 cleanup_status_page(engine);
670 708
671 709 intel_engine_fini_breadcrumbs(engine);
··· 678 720 __intel_context_unpin(i915->kernel_context, engine);
679 721
680 722 i915_timeline_fini(&engine->timeline);
723 +
724 + intel_wa_list_free(&engine->wa_list);
681 725 }
682 726
683 727 u64 intel_engine_get_active_head(const struct intel_engine_cs *engine)
+16 -14
drivers/gpu/drm/i915/intel_lrc.c
··· 442 442 * may not be visible to the HW prior to the completion of the UC
443 443 * register write and that we may begin execution from the context
444 444 * before its image is complete leading to invalid PD chasing.
445 + *
446 + * Furthermore, Braswell, at least, wants a full mb to be sure that
447 + * the writes are coherent in memory (visible to the GPU) prior to
448 + * execution, and not just visible to other CPUs (as is the result of
449 + * wmb).
445 450 */
446 - wmb();
451 + mb();
447 452 return ce->lrc_desc;
448 453 }
··· 1448 1443 static u32 *
1449 1444 gen8_emit_flush_coherentl3_wa(struct intel_engine_cs *engine, u32 *batch)
1450 1445 {
1446 + /* NB no one else is allowed to scribble over scratch + 256! */
1451 1447 *batch++ = MI_STORE_REGISTER_MEM_GEN8 | MI_SRM_LRM_GLOBAL_GTT;
1452 1448 *batch++ = i915_mmio_reg_offset(GEN8_L3SQCREG4);
1453 - *batch++ = i915_ggtt_offset(engine->scratch) + 256;
1449 + *batch++ = i915_scratch_offset(engine->i915) + 256;
1454 1450 *batch++ = 0;
1455 1451
1456 1452 *batch++ = MI_LOAD_REGISTER_IMM(1);
··· 1465 1459
1466 1460 *batch++ = MI_LOAD_REGISTER_MEM_GEN8 | MI_SRM_LRM_GLOBAL_GTT;
1467 1461 *batch++ = i915_mmio_reg_offset(GEN8_L3SQCREG4);
1468 - *batch++ = i915_ggtt_offset(engine->scratch) + 256;
1462 + *batch++ = i915_scratch_offset(engine->i915) + 256;
1469 1463 *batch++ = 0;
1470 1464
1471 1465 return batch;
··· 1502 1496 PIPE_CONTROL_GLOBAL_GTT_IVB |
1503 1497 PIPE_CONTROL_CS_STALL |
1504 1498 PIPE_CONTROL_QW_WRITE,
1505 - i915_ggtt_offset(engine->scratch) +
1499 + i915_scratch_offset(engine->i915) +
1506 1500 2 * CACHELINE_BYTES);
1507 1501
1508 1502 *batch++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
··· 1579 1573 PIPE_CONTROL_GLOBAL_GTT_IVB |
1580 1574 PIPE_CONTROL_CS_STALL |
1581 1575 PIPE_CONTROL_QW_WRITE,
1582 - i915_ggtt_offset(engine->scratch)
1576 + i915_scratch_offset(engine->i915)
1583 1577 + 2 * CACHELINE_BYTES);
1584 1578 }
1585 1579
··· 1799 1793
1800 1794 static int gen8_init_common_ring(struct intel_engine_cs *engine)
1801 1795 {
1796 + intel_engine_apply_workarounds(engine);
1797 +
1802 1798 intel_mocs_init_engine(engine);
1803 1799
1804 1800 intel_engine_reset_breadcrumbs(engine);
··· 2147 2139 {
2148 2140 struct intel_engine_cs *engine = request->engine;
2149 2141 u32 scratch_addr =
2150 - i915_ggtt_offset(engine->scratch) + 2 * CACHELINE_BYTES;
2142 + i915_scratch_offset(engine->i915) + 2 * CACHELINE_BYTES;
2151 2143 bool vf_flush_wa = false, dc_flush_wa = false;
2152 2144 u32 *cs, flags = 0;
2153 2145 int len;
··· 2484 2476 if (ret)
2485 2477 return ret;
2486 2478
2487 - ret = intel_engine_create_scratch(engine, PAGE_SIZE);
2488 - if (ret)
2489 - goto err_cleanup_common;
2490 -
2491 2479 ret = intel_init_workaround_bb(engine);
2492 2480 if (ret) {
2493 2481 /*
··· 2495 2491 ret);
2496 2492 }
2497 2493
2498 - return 0;
2494 + intel_engine_init_workarounds(engine);
2499 2495
2500 - err_cleanup_common:
2501 - intel_engine_cleanup_common(engine);
2502 - return ret;
2496 + return 0;
2503 2497 }
2504 2498
2505 2499 int logical_xcs_ring_init(struct intel_engine_cs *engine)
+24 -28
drivers/gpu/drm/i915/intel_ringbuffer.c
··· 69 69 static int
70 70 gen2_render_ring_flush(struct i915_request *rq, u32 mode)
71 71 {
72 + unsigned int num_store_dw;
72 73 u32 cmd, *cs;
73 74
74 75 cmd = MI_FLUSH;
75 -
76 + num_store_dw = 0;
76 77 if (mode & EMIT_INVALIDATE)
77 78 cmd |= MI_READ_FLUSH;
79 + if (mode & EMIT_FLUSH)
80 + num_store_dw = 4;
78 81
79 - cs = intel_ring_begin(rq, 2);
82 + cs = intel_ring_begin(rq, 2 + 3 * num_store_dw);
80 83 if (IS_ERR(cs))
81 84 return PTR_ERR(cs);
82 85
83 86 *cs++ = cmd;
84 - *cs++ = MI_NOOP;
87 + while (num_store_dw--) {
88 + *cs++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL;
89 + *cs++ = i915_scratch_offset(rq->i915);
90 + *cs++ = 0;
91 + }
92 + *cs++ = MI_FLUSH | MI_NO_WRITE_FLUSH;
93 +
85 94 intel_ring_advance(rq, cs);
86 95
87 96 return 0;
··· 159 150 */
160 151 if (mode & EMIT_INVALIDATE) {
161 152 *cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;
162 - *cs++ = i915_ggtt_offset(rq->engine->scratch) |
163 - PIPE_CONTROL_GLOBAL_GTT;
153 + *cs++ = i915_scratch_offset(rq->i915) | PIPE_CONTROL_GLOBAL_GTT;
164 154 *cs++ = 0;
165 155 *cs++ = 0;
166 156
··· 167 159 *cs++ = MI_FLUSH;
168 160
169 161 *cs++ = GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE;
170 - *cs++ = i915_ggtt_offset(rq->engine->scratch) |
171 - PIPE_CONTROL_GLOBAL_GTT;
162 + *cs++ = i915_scratch_offset(rq->i915) | PIPE_CONTROL_GLOBAL_GTT;
172 163 *cs++ = 0;
173 164 *cs++ = 0;
174 165 }
··· 219 212 static int
220 213 intel_emit_post_sync_nonzero_flush(struct i915_request *rq)
221 214 {
222 - u32 scratch_addr =
223 - i915_ggtt_offset(rq->engine->scratch) + 2 * CACHELINE_BYTES;
215 + u32 scratch_addr = i915_scratch_offset(rq->i915) + 2 * CACHELINE_BYTES;
224 216 u32 *cs;
225 217
226 218 cs = intel_ring_begin(rq, 6);
··· 252 246 static int
253 247 gen6_render_ring_flush(struct i915_request *rq, u32 mode)
254 248 {
255 - u32 scratch_addr =
256 - i915_ggtt_offset(rq->engine->scratch) + 2 * CACHELINE_BYTES;
249 + u32 scratch_addr = i915_scratch_offset(rq->i915) + 2 * CACHELINE_BYTES;
257 250 u32 *cs, flags = 0;
258 251 int ret;
259 252
··· 321 316 static int
322 317 gen7_render_ring_flush(struct i915_request *rq, u32 mode)
323 318 {
324 - u32 scratch_addr =
325 - i915_ggtt_offset(rq->engine->scratch) + 2 * CACHELINE_BYTES;
319 + u32 scratch_addr = i915_scratch_offset(rq->i915) + 2 * CACHELINE_BYTES;
326 320 u32 *cs, flags = 0;
327 321
328 322 /*
··· 975 971 }
976 972
977 973 /* Just userspace ABI convention to limit the wa batch bo to a resonable size */
978 - #define I830_BATCH_LIMIT (256*1024)
974 + #define I830_BATCH_LIMIT SZ_256K
979 975 #define I830_TLB_ENTRIES (2)
980 976 #define I830_WA_SIZE max(I830_TLB_ENTRIES*4096, I830_BATCH_LIMIT)
981 977 static int
··· 983 979 u64 offset, u32 len,
984 980 unsigned int dispatch_flags)
985 981 {
986 - u32 *cs, cs_offset = i915_ggtt_offset(rq->engine->scratch);
982 + u32 *cs, cs_offset = i915_scratch_offset(rq->i915);
983 +
984 + GEM_BUG_ON(rq->i915->gt.scratch->size < I830_WA_SIZE);
987 985
988 986 cs = intel_ring_begin(rq, 6);
989 987 if (IS_ERR(cs))
··· 1443 1437 {
1444 1438 struct i915_timeline *timeline;
1445 1439 struct intel_ring *ring;
1446 - unsigned int size;
1447 1440 int err;
1448 1441
1449 1442 intel_engine_setup_common(engine);
··· 1467 1462 GEM_BUG_ON(engine->buffer);
1468 1463 engine->buffer = ring;
1469 1464
1470 - size = PAGE_SIZE;
1471 - if (HAS_BROKEN_CS_TLB(engine->i915))
1472 - size = I830_WA_SIZE;
1473 - err = intel_engine_create_scratch(engine, size);
1465 + err = intel_engine_init_common(engine);
1474 1466 if (err)
1475 1467 goto err_unpin;
1476 1468
1477 - err = intel_engine_init_common(engine);
1478 - if (err)
1479 - goto err_scratch;
1480 -
1481 1469 return 0;
1482 1470
1483 - err_scratch:
1484 - intel_engine_cleanup_scratch(engine);
1485 1471 err_unpin:
1486 1472 intel_ring_unpin(ring);
1487 1473 err_ring:
··· 1546 1550 /* Stall until the page table load is complete */
1547 1551 *cs++ = MI_STORE_REGISTER_MEM | MI_SRM_LRM_GLOBAL_GTT;
1548 1552 *cs++ = i915_mmio_reg_offset(RING_PP_DIR_BASE(engine));
1549 - *cs++ = i915_ggtt_offset(engine->scratch);
1553 + *cs++ = i915_scratch_offset(rq->i915);
1550 1554 *cs++ = MI_NOOP;
1551 1555
1552 1556 intel_ring_advance(rq, cs);
··· 1655 1659 /* Insert a delay before the next switch! */
1656 1660 *cs++ = MI_STORE_REGISTER_MEM | MI_SRM_LRM_GLOBAL_GTT;
1657 1661 *cs++ = i915_mmio_reg_offset(last_reg);
1658 - *cs++ = i915_ggtt_offset(engine->scratch);
1662 + *cs++ = i915_scratch_offset(rq->i915);
1659 1663 *cs++ = MI_NOOP;
1660 1664 }
1661 1665 *cs++ = MI_ARB_ON_OFF | MI_ARB_ENABLE;
+2 -5
drivers/gpu/drm/i915/intel_ringbuffer.h
··· 15 15 #include "i915_selftest.h"
16 16 #include "i915_timeline.h"
17 17 #include "intel_gpu_commands.h"
18 + #include "intel_workarounds.h"
18 19
19 20 struct drm_printer;
20 21 struct i915_sched_attr;
··· 441 440
442 441 struct intel_hw_status_page status_page;
443 442 struct i915_ctx_workarounds wa_ctx;
444 - struct i915_vma *scratch;
443 + struct i915_wa_list wa_list;
445 444
446 445 u32 irq_keep_mask; /* always keep these interrupts */
447 446 u32 irq_enable_mask; /* bitmask to enable ring interrupt */
··· 898 897 void intel_engine_setup_common(struct intel_engine_cs *engine);
899 898 int intel_engine_init_common(struct intel_engine_cs *engine);
900 899 void intel_engine_cleanup_common(struct intel_engine_cs *engine);
901 -
902 - int intel_engine_create_scratch(struct intel_engine_cs *engine,
903 - unsigned int size);
904 - void intel_engine_cleanup_scratch(struct intel_engine_cs *engine);
905 900
906 901 int intel_init_render_ring_buffer(struct intel_engine_cs *engine);
907 902 int intel_init_bsd_ring_buffer(struct intel_engine_cs *engine);
+390 -201
drivers/gpu/drm/i915/intel_workarounds.c
··· 48 48 * - Public functions to init or apply the given workaround type.
49 49 */
50 50
51 + static void wa_init_start(struct i915_wa_list *wal, const char *name)
52 + {
53 + wal->name = name;
54 + }
55 +
56 + static void wa_init_finish(struct i915_wa_list *wal)
57 + {
58 + if (!wal->count)
59 + return;
60 +
61 + DRM_DEBUG_DRIVER("Initialized %u %s workarounds\n",
62 + wal->count, wal->name);
63 + }
64 +
51 65 static void wa_add(struct drm_i915_private *i915,
52 66 i915_reg_t reg, const u32 mask, const u32 val)
53 67 {
··· 594 580 return 0;
595 581 }
596 582
597 - static void bdw_gt_workarounds_apply(struct drm_i915_private *dev_priv)
583 + static void
584 + wal_add(struct i915_wa_list *wal, const struct i915_wa *wa)
598 585 {
586 + const unsigned int grow = 1 << 4;
587 +
588 + GEM_BUG_ON(!is_power_of_2(grow));
589 +
590 + if (IS_ALIGNED(wal->count, grow)) { /* Either uninitialized or full. */
591 + struct i915_wa *list;
592 +
593 + list = kmalloc_array(ALIGN(wal->count + 1, grow), sizeof(*wa),
594 + GFP_KERNEL);
595 + if (!list) {
596 + DRM_ERROR("No space for workaround init!\n");
597 + return;
598 + }
599 +
600 + if (wal->list)
601 + memcpy(list, wal->list, sizeof(*wa) * wal->count);
602 +
603 + wal->list = list;
604 + }
605 +
606 + wal->list[wal->count++] = *wa;
599 607 }
600 608
601 - static void chv_gt_workarounds_apply(struct drm_i915_private *dev_priv)
609 + static void
610 + wa_masked_en(struct i915_wa_list *wal, i915_reg_t reg, u32 val)
602 611 {
612 + struct i915_wa wa = {
613 + .reg = reg,
614 + .mask = val,
615 + .val = _MASKED_BIT_ENABLE(val)
616 + };
617 +
618 + wal_add(wal, &wa);
603 619 }
604 620
605 - static void gen9_gt_workarounds_apply(struct drm_i915_private *dev_priv)
621 + static void
622 + wa_write_masked_or(struct i915_wa_list *wal, i915_reg_t reg, u32 mask,
623 + u32 val)
606 624 {
607 - /* WaContextSwitchWithConcurrentTLBInvalidate:skl,bxt,kbl,glk,cfl */
608 - I915_WRITE(GEN9_CSFE_CHICKEN1_RCS,
609 - _MASKED_BIT_ENABLE(GEN9_PREEMPT_GPGPU_SYNC_SWITCH_DISABLE));
625 + struct i915_wa wa = {
626 + .reg = reg,
627 + .mask = mask,
628 + .val = val
629 + };
610 630
611 - /* WaEnableLbsSlaRetryTimerDecrement:skl,bxt,kbl,glk,cfl */
612 - I915_WRITE(BDW_SCRATCH1, I915_READ(BDW_SCRATCH1) |
613 - GEN9_LBS_SLA_RETRY_TIMER_DECREMENT_ENABLE);
631 + wal_add(wal, &wa);
632 + }
633 +
634 + static void
635 + wa_write(struct i915_wa_list *wal, i915_reg_t reg, u32 val)
636 + {
637 + wa_write_masked_or(wal, reg, ~0, val);
638 + }
639 +
640 + static void
641 + wa_write_or(struct i915_wa_list *wal, i915_reg_t reg, u32 val)
642 + {
643 + wa_write_masked_or(wal, reg, val, val);
644 + }
645 +
646 + static void gen9_gt_workarounds_init(struct drm_i915_private *i915)
647 + {
648 + struct i915_wa_list *wal = &i915->gt_wa_list;
614 649
615 650 /* WaDisableKillLogic:bxt,skl,kbl */
616 - if (!IS_COFFEELAKE(dev_priv))
617 - I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) |
618 - ECOCHK_DIS_TLB);
651 + if (!IS_COFFEELAKE(i915))
652 + wa_write_or(wal,
653 + GAM_ECOCHK,
654 + ECOCHK_DIS_TLB);
619 655
620 - if (HAS_LLC(dev_priv)) {
656 + if (HAS_LLC(i915)) {
621 657 /* WaCompressedResourceSamplerPbeMediaNewHashMode:skl,kbl
622 658 *
623 659 * Must match Display Engine. See
624 660 * WaCompressedResourceDisplayNewHashMode.
625 661 */
626 - I915_WRITE(MMCD_MISC_CTRL,
627 - I915_READ(MMCD_MISC_CTRL) |
628 - MMCD_PCLA |
629 - MMCD_HOTSPOT_EN);
662 + wa_write_or(wal,
663 + MMCD_MISC_CTRL,
664 + MMCD_PCLA | MMCD_HOTSPOT_EN);
630 665 }
631 666
632 667 /* WaDisableHDCInvalidation:skl,bxt,kbl,cfl */
633 - I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) |
634 - BDW_DISABLE_HDC_INVALIDATION);
635 -
636 - /* WaProgramL3SqcReg1DefaultForPerf:bxt,glk */
637 - if (IS_GEN9_LP(dev_priv)) {
638 - u32 val = I915_READ(GEN8_L3SQCREG1);
639 -
640 - val &= ~L3_PRIO_CREDITS_MASK;
641 - val |= L3_GENERAL_PRIO_CREDITS(62) | L3_HIGH_PRIO_CREDITS(2);
642 - I915_WRITE(GEN8_L3SQCREG1, val);
643 - }
644 -
645 - /* WaOCLCoherentLineFlush:skl,bxt,kbl,cfl */
646 - I915_WRITE(GEN8_L3SQCREG4,
647 - I915_READ(GEN8_L3SQCREG4) | GEN8_LQSC_FLUSH_COHERENT_LINES);
648 -
649 - /* WaEnablePreemptionGranularityControlByUMD:skl,bxt,kbl,cfl,[cnl] */
650 - I915_WRITE(GEN7_FF_SLICE_CS_CHICKEN1,
651 - _MASKED_BIT_ENABLE(GEN9_FFSC_PERCTX_PREEMPT_CTRL));
668 + wa_write_or(wal,
669 + GAM_ECOCHK,
670 + BDW_DISABLE_HDC_INVALIDATION);
652 671 }
653 672
654 - static void skl_gt_workarounds_apply(struct drm_i915_private *dev_priv)
673 + static void skl_gt_workarounds_init(struct drm_i915_private *i915)
655 674 {
656 - gen9_gt_workarounds_apply(dev_priv);
675 + struct i915_wa_list *wal = &i915->gt_wa_list;
657 676
658 - /* WaEnableGapsTsvCreditFix:skl */
659 - I915_WRITE(GEN8_GARBCNTL,
660 - I915_READ(GEN8_GARBCNTL) | GEN9_GAPS_TSV_CREDIT_DISABLE);
677 + gen9_gt_workarounds_init(i915);
661 678
662 679 /* WaDisableGafsUnitClkGating:skl */
663 - I915_WRITE(GEN7_UCGCTL4,
664 - I915_READ(GEN7_UCGCTL4) | GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE);
680 + wa_write_or(wal,
681 + GEN7_UCGCTL4,
682 + GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE);
665 683
666 684 /* WaInPlaceDecompressionHang:skl */
667 - if (IS_SKL_REVID(dev_priv, SKL_REVID_H0, REVID_FOREVER))
668 - I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA,
669 - I915_READ(GEN9_GAMT_ECO_REG_RW_IA) |
670 -
GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS); 685 + if (IS_SKL_REVID(i915, SKL_REVID_H0, REVID_FOREVER)) 686 + wa_write_or(wal, 687 + GEN9_GAMT_ECO_REG_RW_IA, 688 + GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS); 671 689 } 672 690 673 - static void bxt_gt_workarounds_apply(struct drm_i915_private *dev_priv) 691 + static void bxt_gt_workarounds_init(struct drm_i915_private *i915) 674 692 { 675 - gen9_gt_workarounds_apply(dev_priv); 693 + struct i915_wa_list *wal = &i915->gt_wa_list; 676 694 677 - /* WaDisablePooledEuLoadBalancingFix:bxt */ 678 - I915_WRITE(FF_SLICE_CS_CHICKEN2, 679 - _MASKED_BIT_ENABLE(GEN9_POOLED_EU_LOAD_BALANCING_FIX_DISABLE)); 695 + gen9_gt_workarounds_init(i915); 680 696 681 697 /* WaInPlaceDecompressionHang:bxt */ 682 - I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, 683 - I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | 684 - GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS); 698 + wa_write_or(wal, 699 + GEN9_GAMT_ECO_REG_RW_IA, 700 + GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS); 685 701 } 686 702 687 - static void kbl_gt_workarounds_apply(struct drm_i915_private *dev_priv) 703 + static void kbl_gt_workarounds_init(struct drm_i915_private *i915) 688 704 { 689 - gen9_gt_workarounds_apply(dev_priv); 705 + struct i915_wa_list *wal = &i915->gt_wa_list; 690 706 691 - /* WaEnableGapsTsvCreditFix:kbl */ 692 - I915_WRITE(GEN8_GARBCNTL, 693 - I915_READ(GEN8_GARBCNTL) | GEN9_GAPS_TSV_CREDIT_DISABLE); 707 + gen9_gt_workarounds_init(i915); 694 708 695 709 /* WaDisableDynamicCreditSharing:kbl */ 696 - if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_B0)) 697 - I915_WRITE(GAMT_CHKN_BIT_REG, 698 - I915_READ(GAMT_CHKN_BIT_REG) | 699 - GAMT_CHKN_DISABLE_DYNAMIC_CREDIT_SHARING); 710 + if (IS_KBL_REVID(i915, 0, KBL_REVID_B0)) 711 + wa_write_or(wal, 712 + GAMT_CHKN_BIT_REG, 713 + GAMT_CHKN_DISABLE_DYNAMIC_CREDIT_SHARING); 700 714 701 715 /* WaDisableGafsUnitClkGating:kbl */ 702 - I915_WRITE(GEN7_UCGCTL4, 703 - I915_READ(GEN7_UCGCTL4) | GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE); 716 + wa_write_or(wal, 717 + GEN7_UCGCTL4, 718 + 
GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE); 704 719 705 720 /* WaInPlaceDecompressionHang:kbl */ 706 - I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, 707 - I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | 708 - GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS); 709 - 710 - /* WaKBLVECSSemaphoreWaitPoll:kbl */ 711 - if (IS_KBL_REVID(dev_priv, KBL_REVID_A0, KBL_REVID_E0)) { 712 - struct intel_engine_cs *engine; 713 - unsigned int tmp; 714 - 715 - for_each_engine(engine, dev_priv, tmp) { 716 - if (engine->id == RCS) 717 - continue; 718 - 719 - I915_WRITE(RING_SEMA_WAIT_POLL(engine->mmio_base), 1); 720 - } 721 - } 721 + wa_write_or(wal, 722 + GEN9_GAMT_ECO_REG_RW_IA, 723 + GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS); 722 724 } 723 725 724 - static void glk_gt_workarounds_apply(struct drm_i915_private *dev_priv) 726 + static void glk_gt_workarounds_init(struct drm_i915_private *i915) 725 727 { 726 - gen9_gt_workarounds_apply(dev_priv); 728 + gen9_gt_workarounds_init(i915); 727 729 } 728 730 729 - static void cfl_gt_workarounds_apply(struct drm_i915_private *dev_priv) 731 + static void cfl_gt_workarounds_init(struct drm_i915_private *i915) 730 732 { 731 - gen9_gt_workarounds_apply(dev_priv); 733 + struct i915_wa_list *wal = &i915->gt_wa_list; 732 734 733 - /* WaEnableGapsTsvCreditFix:cfl */ 734 - I915_WRITE(GEN8_GARBCNTL, 735 - I915_READ(GEN8_GARBCNTL) | GEN9_GAPS_TSV_CREDIT_DISABLE); 735 + gen9_gt_workarounds_init(i915); 736 736 737 737 /* WaDisableGafsUnitClkGating:cfl */ 738 - I915_WRITE(GEN7_UCGCTL4, 739 - I915_READ(GEN7_UCGCTL4) | GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE); 738 + wa_write_or(wal, 739 + GEN7_UCGCTL4, 740 + GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE); 740 741 741 742 /* WaInPlaceDecompressionHang:cfl */ 742 - I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, 743 - I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | 744 - GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS); 743 + wa_write_or(wal, 744 + GEN9_GAMT_ECO_REG_RW_IA, 745 + GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS); 745 746 } 746 747 747 748 static void wa_init_mcr(struct drm_i915_private *dev_priv) 748 749 
{ 749 750 const struct sseu_dev_info *sseu = &(INTEL_INFO(dev_priv)->sseu); 750 - u32 mcr; 751 + struct i915_wa_list *wal = &dev_priv->gt_wa_list; 751 752 u32 mcr_slice_subslice_mask; 752 753 753 754 /* ··· 799 770 WARN_ON((enabled_mask & disabled_mask) != enabled_mask); 800 771 } 801 772 802 - mcr = I915_READ(GEN8_MCR_SELECTOR); 803 - 804 773 if (INTEL_GEN(dev_priv) >= 11) 805 774 mcr_slice_subslice_mask = GEN11_MCR_SLICE_MASK | 806 775 GEN11_MCR_SUBSLICE_MASK; ··· 816 789 * occasions, such as INSTDONE, where this value is dependent 817 790 * on s/ss combo, the read should be done with read_subslice_reg. 818 791 */ 819 - mcr &= ~mcr_slice_subslice_mask; 820 - mcr |= intel_calculate_mcr_s_ss_select(dev_priv); 821 - I915_WRITE(GEN8_MCR_SELECTOR, mcr); 792 + wa_write_masked_or(wal, 793 + GEN8_MCR_SELECTOR, 794 + mcr_slice_subslice_mask, 795 + intel_calculate_mcr_s_ss_select(dev_priv)); 822 796 } 823 797 824 - static void cnl_gt_workarounds_apply(struct drm_i915_private *dev_priv) 798 + static void cnl_gt_workarounds_init(struct drm_i915_private *i915) 825 799 { 826 - wa_init_mcr(dev_priv); 800 + struct i915_wa_list *wal = &i915->gt_wa_list; 801 + 802 + wa_init_mcr(i915); 827 803 828 804 /* WaDisableI2mCycleOnWRPort:cnl (pre-prod) */ 829 - if (IS_CNL_REVID(dev_priv, CNL_REVID_B0, CNL_REVID_B0)) 830 - I915_WRITE(GAMT_CHKN_BIT_REG, 831 - I915_READ(GAMT_CHKN_BIT_REG) | 832 - GAMT_CHKN_DISABLE_I2M_CYCLE_ON_WR_PORT); 805 + if (IS_CNL_REVID(i915, CNL_REVID_B0, CNL_REVID_B0)) 806 + wa_write_or(wal, 807 + GAMT_CHKN_BIT_REG, 808 + GAMT_CHKN_DISABLE_I2M_CYCLE_ON_WR_PORT); 833 809 834 810 /* WaInPlaceDecompressionHang:cnl */ 835 - I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, 836 - I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | 837 - GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS); 838 - 839 - /* WaEnablePreemptionGranularityControlByUMD:cnl */ 840 - I915_WRITE(GEN7_FF_SLICE_CS_CHICKEN1, 841 - _MASKED_BIT_ENABLE(GEN9_FFSC_PERCTX_PREEMPT_CTRL)); 811 + wa_write_or(wal, 812 + GEN9_GAMT_ECO_REG_RW_IA, 813 + 
GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS); 842 814 } 843 815 844 - static void icl_gt_workarounds_apply(struct drm_i915_private *dev_priv) 816 + static void icl_gt_workarounds_init(struct drm_i915_private *i915) 845 817 { 846 - wa_init_mcr(dev_priv); 818 + struct i915_wa_list *wal = &i915->gt_wa_list; 847 819 848 - /* This is not an Wa. Enable for better image quality */ 849 - I915_WRITE(_3D_CHICKEN3, 850 - _MASKED_BIT_ENABLE(_3D_CHICKEN3_AA_LINE_QUALITY_FIX_ENABLE)); 820 + wa_init_mcr(i915); 851 821 852 822 /* WaInPlaceDecompressionHang:icl */ 853 - I915_WRITE(GEN9_GAMT_ECO_REG_RW_IA, I915_READ(GEN9_GAMT_ECO_REG_RW_IA) | 854 - GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS); 855 - 856 - /* WaPipelineFlushCoherentLines:icl */ 857 - I915_WRITE(GEN8_L3SQCREG4, I915_READ(GEN8_L3SQCREG4) | 858 - GEN8_LQSC_FLUSH_COHERENT_LINES); 859 - 860 - /* Wa_1405543622:icl 861 - * Formerly known as WaGAPZPriorityScheme 862 - */ 863 - I915_WRITE(GEN8_GARBCNTL, I915_READ(GEN8_GARBCNTL) | 864 - GEN11_ARBITRATION_PRIO_ORDER_MASK); 865 - 866 - /* Wa_1604223664:icl 867 - * Formerly known as WaL3BankAddressHashing 868 - */ 869 - I915_WRITE(GEN8_GARBCNTL, 870 - (I915_READ(GEN8_GARBCNTL) & ~GEN11_HASH_CTRL_EXCL_MASK) | 871 - GEN11_HASH_CTRL_EXCL_BIT0); 872 - I915_WRITE(GEN11_GLBLINVL, 873 - (I915_READ(GEN11_GLBLINVL) & ~GEN11_BANK_HASH_ADDR_EXCL_MASK) | 874 - GEN11_BANK_HASH_ADDR_EXCL_BIT0); 823 + wa_write_or(wal, 824 + GEN9_GAMT_ECO_REG_RW_IA, 825 + GAMT_ECO_ENABLE_IN_PLACE_DECOMPRESS); 875 826 876 827 /* WaModifyGamTlbPartitioning:icl */ 877 - I915_WRITE(GEN11_GACB_PERF_CTRL, 878 - (I915_READ(GEN11_GACB_PERF_CTRL) & ~GEN11_HASH_CTRL_MASK) | 879 - GEN11_HASH_CTRL_BIT0 | GEN11_HASH_CTRL_BIT4); 880 - 881 - /* Wa_1405733216:icl 882 - * Formerly known as WaDisableCleanEvicts 883 - */ 884 - I915_WRITE(GEN8_L3SQCREG4, I915_READ(GEN8_L3SQCREG4) | 885 - GEN11_LQSC_CLEAN_EVICT_DISABLE); 828 + wa_write_masked_or(wal, 829 + GEN11_GACB_PERF_CTRL, 830 + GEN11_HASH_CTRL_MASK, 831 + GEN11_HASH_CTRL_BIT0 | 
GEN11_HASH_CTRL_BIT4); 886 832 887 833 /* Wa_1405766107:icl 888 834 * Formerly known as WaCL2SFHalfMaxAlloc 889 835 */ 890 - I915_WRITE(GEN11_LSN_UNSLCVC, I915_READ(GEN11_LSN_UNSLCVC) | 891 - GEN11_LSN_UNSLCVC_GAFS_HALF_SF_MAXALLOC | 892 - GEN11_LSN_UNSLCVC_GAFS_HALF_CL2_MAXALLOC); 836 + wa_write_or(wal, 837 + GEN11_LSN_UNSLCVC, 838 + GEN11_LSN_UNSLCVC_GAFS_HALF_SF_MAXALLOC | 839 + GEN11_LSN_UNSLCVC_GAFS_HALF_CL2_MAXALLOC); 893 840 894 841 /* Wa_220166154:icl 895 842 * Formerly known as WaDisCtxReload 896 843 */ 897 - I915_WRITE(GAMW_ECO_DEV_RW_IA_REG, I915_READ(GAMW_ECO_DEV_RW_IA_REG) | 898 - GAMW_ECO_DEV_CTX_RELOAD_DISABLE); 844 + wa_write_or(wal, 845 + GEN8_GAMW_ECO_DEV_RW_IA, 846 + GAMW_ECO_DEV_CTX_RELOAD_DISABLE); 899 847 900 848 /* Wa_1405779004:icl (pre-prod) */ 901 - if (IS_ICL_REVID(dev_priv, ICL_REVID_A0, ICL_REVID_A0)) 902 - I915_WRITE(SLICE_UNIT_LEVEL_CLKGATE, 903 - I915_READ(SLICE_UNIT_LEVEL_CLKGATE) | 904 - MSCUNIT_CLKGATE_DIS); 849 + if (IS_ICL_REVID(i915, ICL_REVID_A0, ICL_REVID_A0)) 850 + wa_write_or(wal, 851 + SLICE_UNIT_LEVEL_CLKGATE, 852 + MSCUNIT_CLKGATE_DIS); 905 853 906 854 /* Wa_1406680159:icl */ 907 - I915_WRITE(SUBSLICE_UNIT_LEVEL_CLKGATE, 908 - I915_READ(SUBSLICE_UNIT_LEVEL_CLKGATE) | 909 - GWUNIT_CLKGATE_DIS); 910 - 911 - /* Wa_1604302699:icl */ 912 - I915_WRITE(GEN10_L3_CHICKEN_MODE_REGISTER, 913 - I915_READ(GEN10_L3_CHICKEN_MODE_REGISTER) | 914 - GEN11_I2M_WRITE_DISABLE); 855 + wa_write_or(wal, 856 + SUBSLICE_UNIT_LEVEL_CLKGATE, 857 + GWUNIT_CLKGATE_DIS); 915 858 916 859 /* Wa_1406838659:icl (pre-prod) */ 917 - if (IS_ICL_REVID(dev_priv, ICL_REVID_A0, ICL_REVID_B0)) 918 - I915_WRITE(INF_UNIT_LEVEL_CLKGATE, 919 - I915_READ(INF_UNIT_LEVEL_CLKGATE) | 920 - CGPSF_CLKGATE_DIS); 921 - 922 - /* WaForwardProgressSoftReset:icl */ 923 - I915_WRITE(GEN10_SCRATCH_LNCF2, 924 - I915_READ(GEN10_SCRATCH_LNCF2) | 925 - PMFLUSHDONE_LNICRSDROP | 926 - PMFLUSH_GAPL3UNBLOCK | 927 - PMFLUSHDONE_LNEBLK); 860 + if (IS_ICL_REVID(i915, ICL_REVID_A0, 
ICL_REVID_B0)) 861 + wa_write_or(wal, 862 + INF_UNIT_LEVEL_CLKGATE, 863 + CGPSF_CLKGATE_DIS); 928 864 929 865 /* Wa_1406463099:icl 930 866 * Formerly known as WaGamTlbPendError 931 867 */ 932 - I915_WRITE(GAMT_CHKN_BIT_REG, 933 - I915_READ(GAMT_CHKN_BIT_REG) | 934 - GAMT_CHKN_DISABLE_L3_COH_PIPE); 868 + wa_write_or(wal, 869 + GAMT_CHKN_BIT_REG, 870 + GAMT_CHKN_DISABLE_L3_COH_PIPE); 935 871 } 936 872 937 - void intel_gt_workarounds_apply(struct drm_i915_private *dev_priv) 873 + void intel_gt_init_workarounds(struct drm_i915_private *i915) 938 874 { 939 - if (INTEL_GEN(dev_priv) < 8) 875 + struct i915_wa_list *wal = &i915->gt_wa_list; 876 + 877 + wa_init_start(wal, "GT"); 878 + 879 + if (INTEL_GEN(i915) < 8) 940 880 return; 941 - else if (IS_BROADWELL(dev_priv)) 942 - bdw_gt_workarounds_apply(dev_priv); 943 - else if (IS_CHERRYVIEW(dev_priv)) 944 - chv_gt_workarounds_apply(dev_priv); 945 - else if (IS_SKYLAKE(dev_priv)) 946 - skl_gt_workarounds_apply(dev_priv); 947 - else if (IS_BROXTON(dev_priv)) 948 - bxt_gt_workarounds_apply(dev_priv); 949 - else if (IS_KABYLAKE(dev_priv)) 950 - kbl_gt_workarounds_apply(dev_priv); 951 - else if (IS_GEMINILAKE(dev_priv)) 952 - glk_gt_workarounds_apply(dev_priv); 953 - else if (IS_COFFEELAKE(dev_priv)) 954 - cfl_gt_workarounds_apply(dev_priv); 955 - else if (IS_CANNONLAKE(dev_priv)) 956 - cnl_gt_workarounds_apply(dev_priv); 957 - else if (IS_ICELAKE(dev_priv)) 958 - icl_gt_workarounds_apply(dev_priv); 881 + else if (IS_BROADWELL(i915)) 882 + return; 883 + else if (IS_CHERRYVIEW(i915)) 884 + return; 885 + else if (IS_SKYLAKE(i915)) 886 + skl_gt_workarounds_init(i915); 887 + else if (IS_BROXTON(i915)) 888 + bxt_gt_workarounds_init(i915); 889 + else if (IS_KABYLAKE(i915)) 890 + kbl_gt_workarounds_init(i915); 891 + else if (IS_GEMINILAKE(i915)) 892 + glk_gt_workarounds_init(i915); 893 + else if (IS_COFFEELAKE(i915)) 894 + cfl_gt_workarounds_init(i915); 895 + else if (IS_CANNONLAKE(i915)) 896 + cnl_gt_workarounds_init(i915); 897 + else 
if (IS_ICELAKE(i915)) 898 + icl_gt_workarounds_init(i915); 959 899 else 960 - MISSING_CASE(INTEL_GEN(dev_priv)); 900 + MISSING_CASE(INTEL_GEN(i915)); 901 + 902 + wa_init_finish(wal); 903 + } 904 + 905 + static enum forcewake_domains 906 + wal_get_fw_for_rmw(struct drm_i915_private *dev_priv, 907 + const struct i915_wa_list *wal) 908 + { 909 + enum forcewake_domains fw = 0; 910 + struct i915_wa *wa; 911 + unsigned int i; 912 + 913 + for (i = 0, wa = wal->list; i < wal->count; i++, wa++) 914 + fw |= intel_uncore_forcewake_for_reg(dev_priv, 915 + wa->reg, 916 + FW_REG_READ | 917 + FW_REG_WRITE); 918 + 919 + return fw; 920 + } 921 + 922 + static void 923 + wa_list_apply(struct drm_i915_private *dev_priv, const struct i915_wa_list *wal) 924 + { 925 + enum forcewake_domains fw; 926 + unsigned long flags; 927 + struct i915_wa *wa; 928 + unsigned int i; 929 + 930 + if (!wal->count) 931 + return; 932 + 933 + fw = wal_get_fw_for_rmw(dev_priv, wal); 934 + 935 + spin_lock_irqsave(&dev_priv->uncore.lock, flags); 936 + intel_uncore_forcewake_get__locked(dev_priv, fw); 937 + 938 + for (i = 0, wa = wal->list; i < wal->count; i++, wa++) { 939 + u32 val = I915_READ_FW(wa->reg); 940 + 941 + val &= ~wa->mask; 942 + val |= wa->val; 943 + 944 + I915_WRITE_FW(wa->reg, val); 945 + } 946 + 947 + intel_uncore_forcewake_put__locked(dev_priv, fw); 948 + spin_unlock_irqrestore(&dev_priv->uncore.lock, flags); 949 + 950 + DRM_DEBUG_DRIVER("Applied %u %s workarounds\n", wal->count, wal->name); 951 + } 952 + 953 + void intel_gt_apply_workarounds(struct drm_i915_private *dev_priv) 954 + { 955 + wa_list_apply(dev_priv, &dev_priv->gt_wa_list); 961 956 } 962 957 963 958 struct whitelist { ··· 1124 1075 struct whitelist w; 1125 1076 1126 1077 whitelist_apply(engine, whitelist_build(engine, &w)); 1078 + } 1079 + 1080 + static void rcs_engine_wa_init(struct intel_engine_cs *engine) 1081 + { 1082 + struct drm_i915_private *i915 = engine->i915; 1083 + struct i915_wa_list *wal = &engine->wa_list; 1084 + 
1085 + if (IS_ICELAKE(i915)) { 1086 + /* This is not an Wa. Enable for better image quality */ 1087 + wa_masked_en(wal, 1088 + _3D_CHICKEN3, 1089 + _3D_CHICKEN3_AA_LINE_QUALITY_FIX_ENABLE); 1090 + 1091 + /* WaPipelineFlushCoherentLines:icl */ 1092 + wa_write_or(wal, 1093 + GEN8_L3SQCREG4, 1094 + GEN8_LQSC_FLUSH_COHERENT_LINES); 1095 + 1096 + /* 1097 + * Wa_1405543622:icl 1098 + * Formerly known as WaGAPZPriorityScheme 1099 + */ 1100 + wa_write_or(wal, 1101 + GEN8_GARBCNTL, 1102 + GEN11_ARBITRATION_PRIO_ORDER_MASK); 1103 + 1104 + /* 1105 + * Wa_1604223664:icl 1106 + * Formerly known as WaL3BankAddressHashing 1107 + */ 1108 + wa_write_masked_or(wal, 1109 + GEN8_GARBCNTL, 1110 + GEN11_HASH_CTRL_EXCL_MASK, 1111 + GEN11_HASH_CTRL_EXCL_BIT0); 1112 + wa_write_masked_or(wal, 1113 + GEN11_GLBLINVL, 1114 + GEN11_BANK_HASH_ADDR_EXCL_MASK, 1115 + GEN11_BANK_HASH_ADDR_EXCL_BIT0); 1116 + 1117 + /* 1118 + * Wa_1405733216:icl 1119 + * Formerly known as WaDisableCleanEvicts 1120 + */ 1121 + wa_write_or(wal, 1122 + GEN8_L3SQCREG4, 1123 + GEN11_LQSC_CLEAN_EVICT_DISABLE); 1124 + 1125 + /* Wa_1604302699:icl */ 1126 + wa_write_or(wal, 1127 + GEN10_L3_CHICKEN_MODE_REGISTER, 1128 + GEN11_I2M_WRITE_DISABLE); 1129 + 1130 + /* WaForwardProgressSoftReset:icl */ 1131 + wa_write_or(wal, 1132 + GEN10_SCRATCH_LNCF2, 1133 + PMFLUSHDONE_LNICRSDROP | 1134 + PMFLUSH_GAPL3UNBLOCK | 1135 + PMFLUSHDONE_LNEBLK); 1136 + } 1137 + 1138 + if (IS_GEN9(i915) || IS_CANNONLAKE(i915)) { 1139 + /* WaEnablePreemptionGranularityControlByUMD:skl,bxt,kbl,cfl,cnl */ 1140 + wa_masked_en(wal, 1141 + GEN7_FF_SLICE_CS_CHICKEN1, 1142 + GEN9_FFSC_PERCTX_PREEMPT_CTRL); 1143 + } 1144 + 1145 + if (IS_SKYLAKE(i915) || IS_KABYLAKE(i915) || IS_COFFEELAKE(i915)) { 1146 + /* WaEnableGapsTsvCreditFix:skl,kbl,cfl */ 1147 + wa_write_or(wal, 1148 + GEN8_GARBCNTL, 1149 + GEN9_GAPS_TSV_CREDIT_DISABLE); 1150 + } 1151 + 1152 + if (IS_BROXTON(i915)) { 1153 + /* WaDisablePooledEuLoadBalancingFix:bxt */ 1154 + wa_masked_en(wal, 1155 + 
FF_SLICE_CS_CHICKEN2, 1156 + GEN9_POOLED_EU_LOAD_BALANCING_FIX_DISABLE); 1157 + } 1158 + 1159 + if (IS_GEN9(i915)) { 1160 + /* WaContextSwitchWithConcurrentTLBInvalidate:skl,bxt,kbl,glk,cfl */ 1161 + wa_masked_en(wal, 1162 + GEN9_CSFE_CHICKEN1_RCS, 1163 + GEN9_PREEMPT_GPGPU_SYNC_SWITCH_DISABLE); 1164 + 1165 + /* WaEnableLbsSlaRetryTimerDecrement:skl,bxt,kbl,glk,cfl */ 1166 + wa_write_or(wal, 1167 + BDW_SCRATCH1, 1168 + GEN9_LBS_SLA_RETRY_TIMER_DECREMENT_ENABLE); 1169 + 1170 + /* WaProgramL3SqcReg1DefaultForPerf:bxt,glk */ 1171 + if (IS_GEN9_LP(i915)) 1172 + wa_write_masked_or(wal, 1173 + GEN8_L3SQCREG1, 1174 + L3_PRIO_CREDITS_MASK, 1175 + L3_GENERAL_PRIO_CREDITS(62) | 1176 + L3_HIGH_PRIO_CREDITS(2)); 1177 + 1178 + /* WaOCLCoherentLineFlush:skl,bxt,kbl,cfl */ 1179 + wa_write_or(wal, 1180 + GEN8_L3SQCREG4, 1181 + GEN8_LQSC_FLUSH_COHERENT_LINES); 1182 + } 1183 + } 1184 + 1185 + static void xcs_engine_wa_init(struct intel_engine_cs *engine) 1186 + { 1187 + struct drm_i915_private *i915 = engine->i915; 1188 + struct i915_wa_list *wal = &engine->wa_list; 1189 + 1190 + /* WaKBLVECSSemaphoreWaitPoll:kbl */ 1191 + if (IS_KBL_REVID(i915, KBL_REVID_A0, KBL_REVID_E0)) { 1192 + wa_write(wal, 1193 + RING_SEMA_WAIT_POLL(engine->mmio_base), 1194 + 1); 1195 + } 1196 + } 1197 + 1198 + void intel_engine_init_workarounds(struct intel_engine_cs *engine) 1199 + { 1200 + struct i915_wa_list *wal = &engine->wa_list; 1201 + 1202 + if (GEM_WARN_ON(INTEL_GEN(engine->i915) < 8)) 1203 + return; 1204 + 1205 + wa_init_start(wal, engine->name); 1206 + 1207 + if (engine->id == RCS) 1208 + rcs_engine_wa_init(engine); 1209 + else 1210 + xcs_engine_wa_init(engine); 1211 + 1212 + wa_init_finish(wal); 1213 + } 1214 + 1215 + void intel_engine_apply_workarounds(struct intel_engine_cs *engine) 1216 + { 1217 + wa_list_apply(engine->i915, &engine->wa_list); 1127 1218 } 1128 1219 1129 1220 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
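The intel_workarounds.c hunks above replace immediate MMIO writes with a deferred `i915_wa_list` whose backing array grows in power-of-two chunks: `wal_add()` reallocates whenever `count` is aligned to the grow quantum, which covers both the empty and the just-filled case. A minimal plain-C sketch of that growth strategy follows; the type names are stand-ins, and unlike the hunk above (which relies on `kmalloc_array` and does not release the previous chunk in this revision) the sketch frees the old array after copying.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for the kernel's i915_wa / i915_wa_list types. */
struct wa { unsigned int reg, mask, val; };

struct wa_list {
    struct wa *list;
    unsigned int count;
};

/*
 * Mirror of the diff's wal_add() growth strategy: reallocate whenever
 * count is a multiple of the grow quantum, so an empty list (count == 0)
 * and a full chunk both trigger an allocation.
 */
static int wal_add(struct wa_list *wal, const struct wa *wa)
{
    const unsigned int grow = 1 << 4;   /* 16 entries per chunk */

    if ((wal->count % grow) == 0) {     /* uninitialized or full */
        unsigned int cap = ((wal->count + 1 + grow - 1) / grow) * grow;
        struct wa *list = malloc(cap * sizeof(*wa));

        if (!list)
            return -1;
        if (wal->list)
            memcpy(list, wal->list, sizeof(*wa) * wal->count);
        free(wal->list);                /* sketch-only: the hunk keeps kmalloc'd chunks */
        wal->list = list;
    }

    wal->list[wal->count++] = *wa;
    return 0;
}
```

The alignment check makes the common path (appending into an existing chunk) branch-free apart from one modulo test, at the cost of over-allocating up to one chunk.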
+25 -1
drivers/gpu/drm/i915/intel_workarounds.h
··· 7 7 #ifndef _I915_WORKAROUNDS_H_ 8 8 #define _I915_WORKAROUNDS_H_ 9 9 10 + #include <linux/slab.h> 11 + 12 + struct i915_wa { 13 + i915_reg_t reg; 14 + u32 mask; 15 + u32 val; 16 + }; 17 + 18 + struct i915_wa_list { 19 + const char *name; 20 + struct i915_wa *list; 21 + unsigned int count; 22 + }; 23 + 24 + static inline void intel_wa_list_free(struct i915_wa_list *wal) 25 + { 26 + kfree(wal->list); 27 + memset(wal, 0, sizeof(*wal)); 28 + } 29 + 10 30 int intel_ctx_workarounds_init(struct drm_i915_private *dev_priv); 11 31 int intel_ctx_workarounds_emit(struct i915_request *rq); 12 32 13 - void intel_gt_workarounds_apply(struct drm_i915_private *dev_priv); 33 + void intel_gt_init_workarounds(struct drm_i915_private *dev_priv); 34 + void intel_gt_apply_workarounds(struct drm_i915_private *dev_priv); 14 35 15 36 void intel_whitelist_workarounds_apply(struct intel_engine_cs *engine); 37 + 38 + void intel_engine_init_workarounds(struct intel_engine_cs *engine); 39 + void intel_engine_apply_workarounds(struct intel_engine_cs *engine); 16 40 17 41 #endif
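Each `struct i915_wa` entry added in the header above is later applied by `wa_list_apply()` as a read-modify-write: `val = (read & ~mask) | wa->val`. The helper constructors in the .c hunks are just special cases of that one operation. A hedged sketch of how each helper's mask/val pair reduces under the RMW (field widths and register reads are modelled, not real MMIO):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of one deferred workaround entry from the diff. */
struct wa { uint32_t mask, val; };

/* The read-modify-write that wa_list_apply() performs per register. */
static uint32_t wa_apply(uint32_t reg_read, const struct wa *wa)
{
    return (reg_read & ~wa->mask) | wa->val;
}

/* Constructors mirroring wa_write(), wa_write_or() and
 * wa_write_masked_or() from the patch. */
static struct wa wa_write(uint32_t val)
{
    return (struct wa){ .mask = ~0u, .val = val };  /* full overwrite */
}

static struct wa wa_write_or(uint32_t val)
{
    return (struct wa){ .mask = val, .val = val };  /* set bits, keep rest */
}

static struct wa wa_write_masked_or(uint32_t mask, uint32_t val)
{
    return (struct wa){ .mask = mask, .val = val }; /* update one field */
}
```

Collapsing all three into a single `(mask, val)` pair is what lets one `wa_list_apply()` loop handle every entry uniformly.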
+7 -4
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 818 818 dsi->encoder.possible_crtcs = 1; 819 819 820 820 /* If there's a bridge, attach to it and let it create the connector */ 821 - ret = drm_bridge_attach(&dsi->encoder, dsi->bridge, NULL); 822 - if (ret) { 823 - DRM_ERROR("Failed to attach bridge to drm\n"); 824 - 821 + if (dsi->bridge) { 822 + ret = drm_bridge_attach(&dsi->encoder, dsi->bridge, NULL); 823 + if (ret) { 824 + DRM_ERROR("Failed to attach bridge to drm\n"); 825 + goto err_encoder_cleanup; 826 + } 827 + } else { 825 828 /* Otherwise create our own connector and attach to a panel */ 826 829 ret = mtk_dsi_create_connector(drm, dsi); 827 830 if (ret)
+19 -11
drivers/gpu/drm/nouveau/dispnv50/disp.c
··· 198 198 /****************************************************************************** 199 199 * EVO channel helpers 200 200 *****************************************************************************/ 201 + static void 202 + evo_flush(struct nv50_dmac *dmac) 203 + { 204 + /* Push buffer fetches are not coherent with BAR1, we need to ensure 205 + * writes have been flushed right through to VRAM before writing PUT. 206 + */ 207 + if (dmac->push.type & NVIF_MEM_VRAM) { 208 + struct nvif_device *device = dmac->base.device; 209 + nvif_wr32(&device->object, 0x070000, 0x00000001); 210 + nvif_msec(device, 2000, 211 + if (!(nvif_rd32(&device->object, 0x070000) & 0x00000002)) 212 + break; 213 + ); 214 + } 215 + } 216 + 201 217 u32 * 202 218 evo_wait(struct nv50_dmac *evoc, int nr) 203 219 { ··· 224 208 mutex_lock(&dmac->lock); 225 209 if (put + nr >= (PAGE_SIZE / 4) - 8) { 226 210 dmac->ptr[put] = 0x20000000; 211 + evo_flush(dmac); 227 212 228 213 nvif_wr32(&dmac->base.user, 0x0000, 0x00000000); 229 214 if (nvif_msec(device, 2000, ··· 247 230 { 248 231 struct nv50_dmac *dmac = evoc; 249 232 250 - /* Push buffer fetches are not coherent with BAR1, we need to ensure 251 - * writes have been flushed right through to VRAM before writing PUT. 252 - */ 253 - if (dmac->push.type & NVIF_MEM_VRAM) { 254 - struct nvif_device *device = dmac->base.device; 255 - nvif_wr32(&device->object, 0x070000, 0x00000001); 256 - nvif_msec(device, 2000, 257 - if (!(nvif_rd32(&device->object, 0x070000) & 0x00000002)) 258 - break; 259 - ); 260 - } 233 + evo_flush(dmac); 261 234 262 235 nvif_wr32(&dmac->base.user, 0x0000, (push - dmac->ptr) << 2); 263 236 mutex_unlock(&dmac->lock); ··· 1271 1264 { 1272 1265 struct nv50_mstm *mstm = *pmstm; 1273 1266 if (mstm) { 1267 + drm_dp_mst_topology_mgr_destroy(&mstm->mgr); 1274 1268 kfree(*pmstm); 1275 1269 *pmstm = NULL; 1276 1270 }
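The nouveau hunk above extracts `evo_flush()` so the VRAM push-buffer flush happens before *every* PUT update, including the wrap path in `evo_wait()` that previously skipped it. A toy C model of that ordering invariant (no real hardware access; names are illustrative):

```c
#include <assert.h>

/* Toy model of the ordering the patch enforces: any update of the ring's
 * PUT pointer (the wrap path in evo_wait() and the kick path alike) must
 * be preceded by a flush of the push buffer. */
struct ring {
    int flushed;      /* set by flush, consumed by the next PUT write */
    int put_ok;       /* stays 1 only if every PUT write was flushed first */
    unsigned put;
};

static void ring_flush(struct ring *r) { r->flushed = 1; }

static void ring_write_put(struct ring *r, unsigned put)
{
    if (!r->flushed)
        r->put_ok = 0;        /* ordering bug: PUT raced the data */
    r->flushed = 0;
    r->put = put;
}

/* Wrap path: reset PUT to 0, as evo_wait() does when the buffer fills. */
static void ring_wrap(struct ring *r)
{
    ring_flush(r);            /* the call the patch adds before the wrap */
    ring_write_put(r, 0);
}

/* Kick path, mirroring evo_kick(). */
static void ring_kick(struct ring *r, unsigned put)
{
    ring_flush(r);
    ring_write_put(r, put);
}
```

Before the patch, only the kick path flushed; the model makes the missing-flush wrap visible as `put_ok` dropping to 0.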
+6
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 1171 1171 goto err_free; 1172 1172 } 1173 1173 1174 + err = nouveau_drm_device_init(drm); 1175 + if (err) 1176 + goto err_put; 1177 + 1174 1178 platform_set_drvdata(pdev, drm); 1175 1179 1176 1180 return drm; 1177 1181 1182 + err_put: 1183 + drm_dev_put(drm); 1178 1184 err_free: 1179 1185 nvkm_device_del(pdevice); 1180 1186
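The nouveau_drm.c hunk adds an `err_put` label so that once the DRM device object exists, a failure in `nouveau_drm_device_init()` drops the device reference before falling through to the earlier `err_free` teardown. A small, purely illustrative model of that unwind ordering:

```c
#include <assert.h>

/* Toy model of the unwind order the hunk fixes: after the device object
 * exists, a failed init step must drop the reference (err_put, modelling
 * drm_dev_put) and then release earlier resources (err_free). All names
 * here are illustrative, not the real nouveau API. */
struct toy_dev { int refs; int freed; };

static int toy_device_init(int should_fail)
{
    return should_fail ? -1 : 0;
}

static int toy_probe(struct toy_dev *d, int fail_init)
{
    int err;

    d->refs = 1;            /* models drm_dev_alloc() taking a reference */
    d->freed = 0;

    err = toy_device_init(fail_init);
    if (err)
        goto err_put;

    return 0;               /* success: caller keeps the reference */

err_put:
    d->refs--;              /* models drm_dev_put() */
    d->freed = 1;           /* models the pre-existing err_free teardown */
    return err;
}
```

The pre-patch code jumped straight to the free label, leaking the device reference on init failure.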
-6
drivers/gpu/drm/rockchip/rockchip_drm_drv.c
··· 448 448 return 0; 449 449 } 450 450 451 - static void rockchip_drm_platform_shutdown(struct platform_device *pdev) 452 - { 453 - rockchip_drm_platform_remove(pdev); 454 - } 455 - 456 451 static const struct of_device_id rockchip_drm_dt_ids[] = { 457 452 { .compatible = "rockchip,display-subsystem", }, 458 453 { /* sentinel */ }, ··· 457 462 static struct platform_driver rockchip_drm_platform_driver = { 458 463 .probe = rockchip_drm_platform_probe, 459 464 .remove = rockchip_drm_platform_remove, 460 - .shutdown = rockchip_drm_platform_shutdown, 461 465 .driver = { 462 466 .name = "rockchip-drm", 463 467 .of_match_table = rockchip_drm_dt_ids,
+3 -1
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 49 49 50 50 #define VMWGFX_REPO "In Tree" 51 51 52 + #define VMWGFX_VALIDATION_MEM_GRAN (16*PAGE_SIZE) 53 + 52 54 53 55 /** 54 56 * Fully encoded drm commands. Might move to vmw_drm.h ··· 920 918 spin_unlock(&dev_priv->cap_lock); 921 919 } 922 920 923 - 921 + vmw_validation_mem_init_ttm(dev_priv, VMWGFX_VALIDATION_MEM_GRAN); 924 922 ret = vmw_kms_init(dev_priv); 925 923 if (unlikely(ret != 0)) 926 924 goto out_no_kms;
+5
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 606 606 607 607 struct vmw_cmdbuf_man *cman; 608 608 DECLARE_BITMAP(irqthread_pending, VMW_IRQTHREAD_MAX); 609 + 610 + /* Validation memory reservation */ 611 + struct vmw_validation_mem vvm; 609 612 }; 610 613 611 614 static inline struct vmw_surface *vmw_res_to_srf(struct vmw_resource *res) ··· 849 846 extern void vmw_ttm_global_release(struct vmw_private *dev_priv); 850 847 extern int vmw_mmap(struct file *filp, struct vm_area_struct *vma); 851 848 849 + extern void vmw_validation_mem_init_ttm(struct vmw_private *dev_priv, 850 + size_t gran); 852 851 /** 853 852 * TTM buffer object driver - vmwgfx_ttm_buffer.c 854 853 */
+2 -2
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
··· 1738 1738 void *buf) 1739 1739 { 1740 1740 struct vmw_buffer_object *vmw_bo; 1741 - int ret; 1742 1741 1743 1742 struct { 1744 1743 uint32_t header; ··· 1747 1748 return vmw_translate_guest_ptr(dev_priv, sw_context, 1748 1749 &cmd->body.ptr, 1749 1750 &vmw_bo); 1750 - return ret; 1751 1751 } 1752 1752 1753 1753 ··· 3834 3836 int32_t out_fence_fd = -1; 3835 3837 struct sync_file *sync_file = NULL; 3836 3838 DECLARE_VAL_CONTEXT(val_ctx, &sw_context->res_ht, 1); 3839 + 3840 + vmw_validation_set_val_mem(&val_ctx, &dev_priv->vvm); 3837 3841 3838 3842 if (flags & DRM_VMW_EXECBUF_FLAG_EXPORT_FENCE_FD) { 3839 3843 out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
+36
drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c
··· 96 96 drm_global_item_unref(&dev_priv->bo_global_ref.ref); 97 97 drm_global_item_unref(&dev_priv->mem_global_ref); 98 98 } 99 + 100 + /* struct vmw_validation_mem callback */ 101 + static int vmw_vmt_reserve(struct vmw_validation_mem *m, size_t size) 102 + { 103 + static struct ttm_operation_ctx ctx = {.interruptible = false, 104 + .no_wait_gpu = false}; 105 + struct vmw_private *dev_priv = container_of(m, struct vmw_private, vvm); 106 + 107 + return ttm_mem_global_alloc(vmw_mem_glob(dev_priv), size, &ctx); 108 + } 109 + 110 + /* struct vmw_validation_mem callback */ 111 + static void vmw_vmt_unreserve(struct vmw_validation_mem *m, size_t size) 112 + { 113 + struct vmw_private *dev_priv = container_of(m, struct vmw_private, vvm); 114 + 115 + return ttm_mem_global_free(vmw_mem_glob(dev_priv), size); 116 + } 117 + 118 + /** 119 + * vmw_validation_mem_init_ttm - Interface the validation memory tracker 120 + * to ttm. 121 + * @dev_priv: Pointer to struct vmw_private. The reason we choose a vmw private 122 + * rather than a struct vmw_validation_mem is to make sure assumption in the 123 + * callbacks that struct vmw_private derives from struct vmw_validation_mem 124 + * holds true. 125 + * @gran: The recommended allocation granularity 126 + */ 127 + void vmw_validation_mem_init_ttm(struct vmw_private *dev_priv, size_t gran) 128 + { 129 + struct vmw_validation_mem *vvm = &dev_priv->vvm; 130 + 131 + vvm->reserve_mem = vmw_vmt_reserve; 132 + vvm->unreserve_mem = vmw_vmt_unreserve; 133 + vvm->gran = gran; 134 + }
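The `vmw_vmt_reserve()`/`vmw_vmt_unreserve()` callbacks above receive only the embedded `struct vmw_validation_mem` and recover the enclosing `vmw_private` with `container_of()`, which is why the kerneldoc stresses that the interface struct must live inside the private struct. A self-contained sketch of that idiom (the types here are hypothetical mirrors, not the vmwgfx ones):

```c
#include <assert.h>
#include <stddef.h>

/* Plain-C container_of, as used by the callbacks in the hunk to get from
 * the embedded interface struct back to the enclosing private struct. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical mirror of the reservation interface. */
struct validation_mem {
    int (*reserve)(struct validation_mem *m, size_t size);
    size_t reserved;
};

struct private_dev {
    int id;
    struct validation_mem vvm;   /* embedded, like dev_priv->vvm */
};

static int dev_reserve(struct validation_mem *m, size_t size)
{
    struct private_dev *dev = container_of(m, struct private_dev, vvm);

    /* The real callback charges a TTM global; here we only account it. */
    m->reserved += size;
    return dev->id >= 0 ? 0 : -1;
}
```

The pattern keeps the callback signature generic (only the interface struct crosses the boundary) while still giving the implementation full access to its private state.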
+20 -1
drivers/gpu/drm/vmwgfx/vmwgfx_validation.c
··· 104 104 return NULL; 105 105 106 106 if (ctx->mem_size_left < size) { 107 - struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO); 107 + struct page *page; 108 108 109 + if (ctx->vm && ctx->vm_size_left < PAGE_SIZE) { 110 + int ret = ctx->vm->reserve_mem(ctx->vm, ctx->vm->gran); 111 + 112 + if (ret) 113 + return NULL; 114 + 115 + ctx->vm_size_left += ctx->vm->gran; 116 + ctx->total_mem += ctx->vm->gran; 117 + } 118 + 119 + page = alloc_page(GFP_KERNEL | __GFP_ZERO); 109 120 if (!page) 110 121 return NULL; 122 + 123 + if (ctx->vm) 124 + ctx->vm_size_left -= PAGE_SIZE; 111 125 112 126 list_add_tail(&page->lru, &ctx->page_list); 113 127 ctx->page_address = page_address(page); ··· 152 138 } 153 139 154 140 ctx->mem_size_left = 0; 141 + if (ctx->vm && ctx->total_mem) { 142 + ctx->vm->unreserve_mem(ctx->vm, ctx->total_mem); 143 + ctx->total_mem = 0; 144 + ctx->vm_size_left = 0; 145 + } 155 146 } 156 147 157 148 /**
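The accounting the validation.c hunk adds works in granules: whenever less than a page of reserved headroom (`vm_size_left`) remains, another granule is reserved and added to `total_mem`; each allocated page consumes `PAGE_SIZE` of headroom; teardown hands back the whole `total_mem` at once. A minimal sketch of that bookkeeping, with the granule size taken from `VMWGFX_VALIDATION_MEM_GRAN` in the vmwgfx_drv.c hunk (the actual page allocation is omitted):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096u
#define GRAN (16u * PAGE_SIZE)   /* VMWGFX_VALIDATION_MEM_GRAN in the diff */

/* Accounting state mirroring vm_size_left / total_mem in the hunk. */
struct val_ctx {
    size_t vm_size_left;   /* reserved but not yet backed by a page */
    size_t total_mem;      /* everything reserved so far */
};

/* One page allocation under the reservation scheme: top the reservation
 * up by a granule whenever less than a page of headroom is left. */
static int ctx_alloc_page(struct val_ctx *ctx)
{
    if (ctx->vm_size_left < PAGE_SIZE) {
        ctx->vm_size_left += GRAN;
        ctx->total_mem += GRAN;
    }
    ctx->vm_size_left -= PAGE_SIZE;
    return 0;
}

/* Teardown: return the whole reservation in one call, as the hunk does
 * via unreserve_mem(ctx->total_mem). */
static size_t ctx_unreserve_all(struct val_ctx *ctx)
{
    size_t freed = ctx->total_mem;

    ctx->total_mem = 0;
    ctx->vm_size_left = 0;
    return freed;
}
```

Batching reservations per granule matches the `@gran` hint in the interface kerneldoc: one reserve call amortizes over sixteen page allocations instead of charging the global accounting on every page.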
+37
drivers/gpu/drm/vmwgfx/vmwgfx_validation.h
··· 34 34 #include <drm/ttm/ttm_execbuf_util.h> 35 35 36 36 /** 37 + * struct vmw_validation_mem - Custom interface to provide memory reservations 38 + * for the validation code. 39 + * @reserve_mem: Callback to reserve memory 40 + * @unreserve_mem: Callback to unreserve memory 41 + * @gran: Reservation granularity. Contains a hint how much memory should 42 + * be reserved in each call to @reserve_mem(). A slow implementation may want 43 + * reservation to be done in large batches. 44 + */ 45 + struct vmw_validation_mem { 46 + int (*reserve_mem)(struct vmw_validation_mem *m, size_t size); 47 + void (*unreserve_mem)(struct vmw_validation_mem *m, size_t size); 48 + size_t gran; 49 + }; 50 + 51 + /** 37 52 * struct vmw_validation_context - Per command submission validation context 38 53 * @ht: Hash table used to find resource- or buffer object duplicates 39 54 * @resource_list: List head for resource validation metadata ··· 62 47 * buffer objects 63 48 * @mem_size_left: Free memory left in the last page in @page_list 64 49 * @page_address: Kernel virtual address of the last page in @page_list 50 + * @vm: A pointer to the memory reservation interface or NULL if no 51 + * memory reservation is needed. 52 + * @vm_size_left: Amount of reserved memory that so far has not been allocated. 53 + * @total_mem: Amount of reserved memory. 
65 54 */ 66 55 struct vmw_validation_context { 67 56 struct drm_open_hash *ht; ··· 78 59 unsigned int merge_dups; 79 60 unsigned int mem_size_left; 80 61 u8 *page_address; 62 + struct vmw_validation_mem *vm; 63 + size_t vm_size_left; 64 + size_t total_mem; 81 65 }; 82 66 83 67 struct vmw_buffer_object; ··· 121 99 vmw_validation_has_bos(struct vmw_validation_context *ctx) 122 100 { 123 101 return !list_empty(&ctx->bo_list); 102 + } 103 + 104 + /** 105 + * vmw_validation_set_val_mem - Register a validation mem object for 106 + * validation memory reservation 107 + * @ctx: The validation context 108 + * @vm: Pointer to a struct vmw_validation_mem 109 + * 110 + * Must be set before the first attempt to allocate validation memory. 111 + */ 112 + static inline void 113 + vmw_validation_set_val_mem(struct vmw_validation_context *ctx, 114 + struct vmw_validation_mem *vm) 115 + { 116 + ctx->vm = vm; 124 117 } 125 118 126 119 /**
+7
drivers/hid/hid-ids.h
··· 17 17 #ifndef HID_IDS_H_FILE 18 18 #define HID_IDS_H_FILE 19 19 20 + #define USB_VENDOR_ID_258A 0x258a 21 + #define USB_DEVICE_ID_258A_6A88 0x6a88 22 + 20 23 #define USB_VENDOR_ID_3M 0x0596 21 24 #define USB_DEVICE_ID_3M1968 0x0500 22 25 #define USB_DEVICE_ID_3M2256 0x0502 ··· 943 940 944 941 #define USB_VENDOR_ID_REALTEK 0x0bda 945 942 #define USB_DEVICE_ID_REALTEK_READER 0x0152 943 + 944 + #define USB_VENDOR_ID_RETROUSB 0xf000 945 + #define USB_DEVICE_ID_RETROUSB_SNES_RETROPAD 0x0003 946 + #define USB_DEVICE_ID_RETROUSB_SNES_RETROPORT 0x00f1 946 947 947 948 #define USB_VENDOR_ID_ROCCAT 0x1e7d 948 949 #define USB_DEVICE_ID_ROCCAT_ARVO 0x30d4
+1
drivers/hid/hid-ite.c
··· 42 42 43 43 static const struct hid_device_id ite_devices[] = { 44 44 { HID_USB_DEVICE(USB_VENDOR_ID_ITE, USB_DEVICE_ID_ITE8595) }, 45 + { HID_USB_DEVICE(USB_VENDOR_ID_258A, USB_DEVICE_ID_258A_6A88) }, 45 46 { } 46 47 }; 47 48 MODULE_DEVICE_TABLE(hid, ite_devices);
+2
drivers/hid/hid-quirks.c
··· 137 137 { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3003), HID_QUIRK_NOGET }, 138 138 { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3008), HID_QUIRK_NOGET }, 139 139 { HID_USB_DEVICE(USB_VENDOR_ID_REALTEK, USB_DEVICE_ID_REALTEK_READER), HID_QUIRK_NO_INIT_REPORTS }, 140 + { HID_USB_DEVICE(USB_VENDOR_ID_RETROUSB, USB_DEVICE_ID_RETROUSB_SNES_RETROPAD), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, 141 + { HID_USB_DEVICE(USB_VENDOR_ID_RETROUSB, USB_DEVICE_ID_RETROUSB_SNES_RETROPORT), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, 140 142 { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RUMBLEPAD), HID_QUIRK_BADPAD }, 141 143 { HID_USB_DEVICE(USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD2), HID_QUIRK_NO_INIT_REPORTS }, 142 144 { HID_USB_DEVICE(USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD), HID_QUIRK_NO_INIT_REPORTS },
+3
drivers/infiniband/core/roce_gid_mgmt.c
··· 267 267 struct net_device *cookie_ndev = cookie; 268 268 bool match = false; 269 269 270 + if (!rdma_ndev) 271 + return false; 272 + 270 273 rcu_read_lock(); 271 274 if (netif_is_bond_master(cookie_ndev) && 272 275 rdma_is_upper_dev_rcu(rdma_ndev, cookie_ndev))
+2 -1
drivers/infiniband/hw/hfi1/chip.c
··· 12500 12500 } 12501 12501 12502 12502 /* allocate space for the counter values */ 12503 - dd->cntrs = kcalloc(dd->ndevcntrs, sizeof(u64), GFP_KERNEL); 12503 + dd->cntrs = kcalloc(dd->ndevcntrs + num_driver_cntrs, sizeof(u64), 12504 + GFP_KERNEL); 12504 12505 if (!dd->cntrs) 12505 12506 goto bail; 12506 12507
+2
drivers/infiniband/hw/hfi1/hfi.h
··· 155 155 extern struct hfi1_ib_stats hfi1_stats; 156 156 extern const struct pci_error_handlers hfi1_pci_err_handler; 157 157 158 + extern int num_driver_cntrs; 159 + 158 160 /* 159 161 * First-cut criterion for "device is active" is 160 162 * two thousand dwords combined Tx, Rx traffic per
+7
drivers/infiniband/hw/hfi1/qp.c
··· 340 340 default: 341 341 break; 342 342 } 343 + 344 + /* 345 + * System latency between send and schedule is large enough that 346 + * forcing call_send to true for piothreshold packets is necessary. 347 + */ 348 + if (wqe->length <= piothreshold) 349 + *call_send = true; 343 350 return 0; 344 351 } 345 352
+1 -1
drivers/infiniband/hw/hfi1/verbs.c
··· 1479 1479 static DEFINE_MUTEX(cntr_names_lock); /* protects the *_cntr_names buffers */ 1480 1480 static const char **dev_cntr_names; 1481 1481 static const char **port_cntr_names; 1482 - static int num_driver_cntrs = ARRAY_SIZE(driver_cntr_names); 1482 + int num_driver_cntrs = ARRAY_SIZE(driver_cntr_names); 1483 1483 static int num_dev_cntrs; 1484 1484 static int num_port_cntrs; 1485 1485 static int cntr_names_initialized;
+3 -1
drivers/infiniband/hw/mlx5/devx.c
··· 1066 1066 1067 1067 err = uverbs_get_flags32(&access, attrs, 1068 1068 MLX5_IB_ATTR_DEVX_UMEM_REG_ACCESS, 1069 - IB_ACCESS_SUPPORTED); 1069 + IB_ACCESS_LOCAL_WRITE | 1070 + IB_ACCESS_REMOTE_WRITE | 1071 + IB_ACCESS_REMOTE_READ); 1070 1072 if (err) 1071 1073 return err; 1072 1074
+4 -5
drivers/infiniband/hw/mlx5/odp.c
··· 506 506 static int pagefault_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr, 507 507 u64 io_virt, size_t bcnt, u32 *bytes_mapped) 508 508 { 509 + int npages = 0, current_seq, page_shift, ret, np; 510 + bool implicit = false; 509 511 struct ib_umem_odp *odp_mr = to_ib_umem_odp(mr->umem); 510 512 u64 access_mask = ODP_READ_ALLOWED_BIT; 511 - int npages = 0, page_shift, np; 512 513 u64 start_idx, page_mask; 513 514 struct ib_umem_odp *odp; 514 - int current_seq; 515 515 size_t size; 516 - int ret; 517 516 518 517 if (!odp_mr->page_list) { 519 518 odp = implicit_mr_get_data(mr, io_virt, bcnt); ··· 520 521 if (IS_ERR(odp)) 521 522 return PTR_ERR(odp); 522 523 mr = odp->private; 523 - 524 + implicit = true; 524 525 } else { 525 526 odp = odp_mr; 526 527 } ··· 599 600 600 601 out: 601 602 if (ret == -EAGAIN) { 602 - if (mr->parent || !odp->dying) { 603 + if (implicit || !odp->dying) { 603 604 unsigned long timeout = 604 605 msecs_to_jiffies(MMU_NOTIFIER_TIMEOUT); 605 606
+4
drivers/md/dm-cache-metadata.c
··· 930 930 bool dirty_flag; 931 931 *result = true; 932 932 933 + if (from_cblock(cmd->cache_blocks) == 0) 934 + /* Nothing to do */ 935 + return 0; 936 + 933 937 r = dm_bitset_cursor_begin(&cmd->dirty_info, cmd->dirty_root, 934 938 from_cblock(cmd->cache_blocks), &cmd->dirty_cursor); 935 939 if (r) {
+37 -35
drivers/md/dm-thin.c
··· 195 195 struct dm_thin_new_mapping; 196 196 197 197 /* 198 - * The pool runs in 4 modes. Ordered in degraded order for comparisons. 198 + * The pool runs in various modes. Ordered in degraded order for comparisons. 199 199 */ 200 200 enum pool_mode { 201 201 PM_WRITE, /* metadata may be changed */ ··· 282 282 mempool_t mapping_pool; 283 283 }; 284 284 285 - static enum pool_mode get_pool_mode(struct pool *pool); 286 285 static void metadata_operation_failed(struct pool *pool, const char *op, int r); 286 + 287 + static enum pool_mode get_pool_mode(struct pool *pool) 288 + { 289 + return pool->pf.mode; 290 + } 291 + 292 + static void notify_of_pool_mode_change(struct pool *pool) 293 + { 294 + const char *descs[] = { 295 + "write", 296 + "out-of-data-space", 297 + "read-only", 298 + "read-only", 299 + "fail" 300 + }; 301 + const char *extra_desc = NULL; 302 + enum pool_mode mode = get_pool_mode(pool); 303 + 304 + if (mode == PM_OUT_OF_DATA_SPACE) { 305 + if (!pool->pf.error_if_no_space) 306 + extra_desc = " (queue IO)"; 307 + else 308 + extra_desc = " (error IO)"; 309 + } 310 + 311 + dm_table_event(pool->ti->table); 312 + DMINFO("%s: switching pool to %s%s mode", 313 + dm_device_name(pool->pool_md), 314 + descs[(int)mode], extra_desc ? : ""); 315 + } 287 316 288 317 /* 289 318 * Target context for a pool. ··· 2380 2351 queue_delayed_work(pool->wq, &pool->waker, COMMIT_PERIOD); 2381 2352 } 2382 2353 2383 - static void notify_of_pool_mode_change_to_oods(struct pool *pool); 2384 - 2385 2354 /* 2386 2355 * We're holding onto IO to allow userland time to react. 
After the 2387 2356 * timeout either the pool will have been resized (and thus back in ··· 2392 2365 2393 2366 if (get_pool_mode(pool) == PM_OUT_OF_DATA_SPACE && !pool->pf.error_if_no_space) { 2394 2367 pool->pf.error_if_no_space = true; 2395 - notify_of_pool_mode_change_to_oods(pool); 2368 + notify_of_pool_mode_change(pool); 2396 2369 error_retry_list_with_code(pool, BLK_STS_NOSPC); 2397 2370 } 2398 2371 } ··· 2460 2433 2461 2434 /*----------------------------------------------------------------*/ 2462 2435 2463 - static enum pool_mode get_pool_mode(struct pool *pool) 2464 - { 2465 - return pool->pf.mode; 2466 - } 2467 - 2468 - static void notify_of_pool_mode_change(struct pool *pool, const char *new_mode) 2469 - { 2470 - dm_table_event(pool->ti->table); 2471 - DMINFO("%s: switching pool to %s mode", 2472 - dm_device_name(pool->pool_md), new_mode); 2473 - } 2474 - 2475 - static void notify_of_pool_mode_change_to_oods(struct pool *pool) 2476 - { 2477 - if (!pool->pf.error_if_no_space) 2478 - notify_of_pool_mode_change(pool, "out-of-data-space (queue IO)"); 2479 - else 2480 - notify_of_pool_mode_change(pool, "out-of-data-space (error IO)"); 2481 - } 2482 - 2483 2436 static bool passdown_enabled(struct pool_c *pt) 2484 2437 { 2485 2438 return pt->adjusted_pf.discard_passdown; ··· 2508 2501 2509 2502 switch (new_mode) { 2510 2503 case PM_FAIL: 2511 - if (old_mode != new_mode) 2512 - notify_of_pool_mode_change(pool, "failure"); 2513 2504 dm_pool_metadata_read_only(pool->pmd); 2514 2505 pool->process_bio = process_bio_fail; 2515 2506 pool->process_discard = process_bio_fail; ··· 2521 2516 2522 2517 case PM_OUT_OF_METADATA_SPACE: 2523 2518 case PM_READ_ONLY: 2524 - if (!is_read_only_pool_mode(old_mode)) 2525 - notify_of_pool_mode_change(pool, "read-only"); 2526 2519 dm_pool_metadata_read_only(pool->pmd); 2527 2520 pool->process_bio = process_bio_read_only; 2528 2521 pool->process_discard = process_bio_success; ··· 2541 2538 * alarming rate. 
Adjust your low water mark if you're 2542 2539 * frequently seeing this mode. 2543 2540 */ 2544 - if (old_mode != new_mode) 2545 - notify_of_pool_mode_change_to_oods(pool); 2546 2541 pool->out_of_data_space = true; 2547 2542 pool->process_bio = process_bio_read_only; 2548 2543 pool->process_discard = process_discard_bio; ··· 2553 2552 break; 2554 2553 2555 2554 case PM_WRITE: 2556 - if (old_mode != new_mode) 2557 - notify_of_pool_mode_change(pool, "write"); 2558 2555 if (old_mode == PM_OUT_OF_DATA_SPACE) 2559 2556 cancel_delayed_work_sync(&pool->no_space_timeout); 2560 2557 pool->out_of_data_space = false; ··· 2572 2573 * doesn't cause an unexpected mode transition on resume. 2573 2574 */ 2574 2575 pt->adjusted_pf.mode = new_mode; 2576 + 2577 + if (old_mode != new_mode) 2578 + notify_of_pool_mode_change(pool); 2575 2579 } 2576 2580 2577 2581 static void abort_transaction(struct pool *pool) ··· 4025 4023 .name = "thin-pool", 4026 4024 .features = DM_TARGET_SINGLETON | DM_TARGET_ALWAYS_WRITEABLE | 4027 4025 DM_TARGET_IMMUTABLE, 4028 - .version = {1, 20, 0}, 4026 + .version = {1, 21, 0}, 4029 4027 .module = THIS_MODULE, 4030 4028 .ctr = pool_ctr, 4031 4029 .dtr = pool_dtr, ··· 4399 4397 4400 4398 static struct target_type thin_target = { 4401 4399 .name = "thin", 4402 - .version = {1, 20, 0}, 4400 + .version = {1, 21, 0}, 4403 4401 .module = THIS_MODULE, 4404 4402 .ctr = thin_ctr, 4405 4403 .dtr = thin_dtr,
+38 -84
drivers/md/dm-zoned-target.c
··· 20 20 struct dm_zone *zone; 21 21 struct bio *bio; 22 22 refcount_t ref; 23 - blk_status_t status; 24 23 }; 25 24 26 25 /* ··· 77 78 { 78 79 struct dmz_bioctx *bioctx = dm_per_bio_data(bio, sizeof(struct dmz_bioctx)); 79 80 80 - if (bioctx->status == BLK_STS_OK && status != BLK_STS_OK) 81 - bioctx->status = status; 82 - bio_endio(bio); 81 + if (status != BLK_STS_OK && bio->bi_status == BLK_STS_OK) 82 + bio->bi_status = status; 83 + 84 + if (refcount_dec_and_test(&bioctx->ref)) { 85 + struct dm_zone *zone = bioctx->zone; 86 + 87 + if (zone) { 88 + if (bio->bi_status != BLK_STS_OK && 89 + bio_op(bio) == REQ_OP_WRITE && 90 + dmz_is_seq(zone)) 91 + set_bit(DMZ_SEQ_WRITE_ERR, &zone->flags); 92 + dmz_deactivate_zone(zone); 93 + } 94 + bio_endio(bio); 95 + } 83 96 } 84 97 85 98 /* 86 - * Partial clone read BIO completion callback. This terminates the 99 + * Completion callback for an internally cloned target BIO. This terminates the 87 100 * target BIO when there are no more references to its context. 88 101 */ 89 - static void dmz_read_bio_end_io(struct bio *bio) 102 + static void dmz_clone_endio(struct bio *clone) 90 103 { 91 - struct dmz_bioctx *bioctx = bio->bi_private; 92 - blk_status_t status = bio->bi_status; 104 + struct dmz_bioctx *bioctx = clone->bi_private; 105 + blk_status_t status = clone->bi_status; 93 106 94 - bio_put(bio); 107 + bio_put(clone); 95 108 dmz_bio_endio(bioctx->bio, status); 96 109 } 97 110 98 111 /* 99 - * Issue a BIO to a zone. The BIO may only partially process the 112 + * Issue a clone of a target BIO. The clone may only partially process the 100 113 * original target BIO. 
101 114 */ 102 - static int dmz_submit_read_bio(struct dmz_target *dmz, struct dm_zone *zone, 103 - struct bio *bio, sector_t chunk_block, 104 - unsigned int nr_blocks) 115 + static int dmz_submit_bio(struct dmz_target *dmz, struct dm_zone *zone, 116 + struct bio *bio, sector_t chunk_block, 117 + unsigned int nr_blocks) 105 118 { 106 119 struct dmz_bioctx *bioctx = dm_per_bio_data(bio, sizeof(struct dmz_bioctx)); 107 - sector_t sector; 108 120 struct bio *clone; 109 121 110 - /* BIO remap sector */ 111 - sector = dmz_start_sect(dmz->metadata, zone) + dmz_blk2sect(chunk_block); 112 - 113 - /* If the read is not partial, there is no need to clone the BIO */ 114 - if (nr_blocks == dmz_bio_blocks(bio)) { 115 - /* Setup and submit the BIO */ 116 - bio->bi_iter.bi_sector = sector; 117 - refcount_inc(&bioctx->ref); 118 - generic_make_request(bio); 119 - return 0; 120 - } 121 - 122 - /* Partial BIO: we need to clone the BIO */ 123 122 clone = bio_clone_fast(bio, GFP_NOIO, &dmz->bio_set); 124 123 if (!clone) 125 124 return -ENOMEM; 126 125 127 - /* Setup the clone */ 128 - clone->bi_iter.bi_sector = sector; 126 + bio_set_dev(clone, dmz->dev->bdev); 127 + clone->bi_iter.bi_sector = 128 + dmz_start_sect(dmz->metadata, zone) + dmz_blk2sect(chunk_block); 129 129 clone->bi_iter.bi_size = dmz_blk2sect(nr_blocks) << SECTOR_SHIFT; 130 - clone->bi_end_io = dmz_read_bio_end_io; 130 + clone->bi_end_io = dmz_clone_endio; 131 131 clone->bi_private = bioctx; 132 132 133 133 bio_advance(bio, clone->bi_iter.bi_size); 134 134 135 - /* Submit the clone */ 136 135 refcount_inc(&bioctx->ref); 137 136 generic_make_request(clone); 137 + 138 + if (bio_op(bio) == REQ_OP_WRITE && dmz_is_seq(zone)) 139 + zone->wp_block += nr_blocks; 138 140 139 141 return 0; 140 142 } ··· 214 214 if (nr_blocks) { 215 215 /* Valid blocks found: read them */ 216 216 nr_blocks = min_t(unsigned int, nr_blocks, end_block - chunk_block); 217 - ret = dmz_submit_read_bio(dmz, rzone, bio, chunk_block, nr_blocks); 217 + ret = 
dmz_submit_bio(dmz, rzone, bio, chunk_block, nr_blocks); 218 218 if (ret) 219 219 return ret; 220 220 chunk_block += nr_blocks; ··· 226 226 } 227 227 228 228 return 0; 229 - } 230 - 231 - /* 232 - * Issue a write BIO to a zone. 233 - */ 234 - static void dmz_submit_write_bio(struct dmz_target *dmz, struct dm_zone *zone, 235 - struct bio *bio, sector_t chunk_block, 236 - unsigned int nr_blocks) 237 - { 238 - struct dmz_bioctx *bioctx = dm_per_bio_data(bio, sizeof(struct dmz_bioctx)); 239 - 240 - /* Setup and submit the BIO */ 241 - bio_set_dev(bio, dmz->dev->bdev); 242 - bio->bi_iter.bi_sector = dmz_start_sect(dmz->metadata, zone) + dmz_blk2sect(chunk_block); 243 - refcount_inc(&bioctx->ref); 244 - generic_make_request(bio); 245 - 246 - if (dmz_is_seq(zone)) 247 - zone->wp_block += nr_blocks; 248 229 } 249 230 250 231 /* ··· 246 265 return -EROFS; 247 266 248 267 /* Submit write */ 249 - dmz_submit_write_bio(dmz, zone, bio, chunk_block, nr_blocks); 268 + ret = dmz_submit_bio(dmz, zone, bio, chunk_block, nr_blocks); 269 + if (ret) 270 + return ret; 250 271 251 272 /* 252 273 * Validate the blocks in the data zone and invalidate ··· 284 301 return -EROFS; 285 302 286 303 /* Submit write */ 287 - dmz_submit_write_bio(dmz, bzone, bio, chunk_block, nr_blocks); 304 + ret = dmz_submit_bio(dmz, bzone, bio, chunk_block, nr_blocks); 305 + if (ret) 306 + return ret; 288 307 289 308 /* 290 309 * Validate the blocks in the buffer zone ··· 585 600 bioctx->zone = NULL; 586 601 bioctx->bio = bio; 587 602 refcount_set(&bioctx->ref, 1); 588 - bioctx->status = BLK_STS_OK; 589 603 590 604 /* Set the BIO pending in the flush list */ 591 605 if (!nr_sectors && bio_op(bio) == REQ_OP_WRITE) { ··· 605 621 dmz_queue_chunk_work(dmz, bio); 606 622 607 623 return DM_MAPIO_SUBMITTED; 608 - } 609 - 610 - /* 611 - * Completed target BIO processing. 
612 - */ 613 - static int dmz_end_io(struct dm_target *ti, struct bio *bio, blk_status_t *error) 614 - { 615 - struct dmz_bioctx *bioctx = dm_per_bio_data(bio, sizeof(struct dmz_bioctx)); 616 - 617 - if (bioctx->status == BLK_STS_OK && *error) 618 - bioctx->status = *error; 619 - 620 - if (!refcount_dec_and_test(&bioctx->ref)) 621 - return DM_ENDIO_INCOMPLETE; 622 - 623 - /* Done */ 624 - bio->bi_status = bioctx->status; 625 - 626 - if (bioctx->zone) { 627 - struct dm_zone *zone = bioctx->zone; 628 - 629 - if (*error && bio_op(bio) == REQ_OP_WRITE) { 630 - if (dmz_is_seq(zone)) 631 - set_bit(DMZ_SEQ_WRITE_ERR, &zone->flags); 632 - } 633 - dmz_deactivate_zone(zone); 634 - } 635 - 636 - return DM_ENDIO_DONE; 637 624 } 638 625 639 626 /* ··· 901 946 .ctr = dmz_ctr, 902 947 .dtr = dmz_dtr, 903 948 .map = dmz_map, 904 - .end_io = dmz_end_io, 905 949 .io_hints = dmz_io_hints, 906 950 .prepare_ioctl = dmz_prepare_ioctl, 907 951 .postsuspend = dmz_suspend,
+2
drivers/md/dm.c
··· 1593 1593 return ret; 1594 1594 } 1595 1595 1596 + blk_queue_split(md->queue, &bio); 1597 + 1596 1598 init_clone_info(&ci, md, map, bio); 1597 1599 1598 1600 if (bio->bi_opf & REQ_PREFLUSH) {
+13
drivers/media/Kconfig
··· 110 110 111 111 This is currently experimental. 112 112 113 + config MEDIA_CONTROLLER_REQUEST_API 114 + bool "Enable Media controller Request API (EXPERIMENTAL)" 115 + depends on MEDIA_CONTROLLER && STAGING_MEDIA 116 + default n 117 + ---help--- 118 + DO NOT ENABLE THIS OPTION UNLESS YOU KNOW WHAT YOU'RE DOING. 119 + 120 + This option enables the Request API for the Media controller and V4L2 121 + interfaces. It is currently needed by a few stateless codec drivers. 122 + 123 + There is currently no intention to provide API or ABI stability for 124 + this new API as of yet. 125 + 113 126 # 114 127 # Video4Linux support 115 128 # Only enables if one of the V4L2 types (ATV, webcam, radio) is selected
+35 -9
drivers/media/common/videobuf2/videobuf2-core.c
··· 947 947 } 948 948 atomic_dec(&q->owned_by_drv_count); 949 949 950 - if (vb->req_obj.req) { 950 + if (state != VB2_BUF_STATE_QUEUED && vb->req_obj.req) { 951 951 /* This is not supported at the moment */ 952 952 WARN_ON(state == VB2_BUF_STATE_REQUEUEING); 953 953 media_request_object_unbind(&vb->req_obj); ··· 1359 1359 { 1360 1360 struct vb2_buffer *vb = container_of(obj, struct vb2_buffer, req_obj); 1361 1361 1362 - if (vb->state == VB2_BUF_STATE_IN_REQUEST) 1362 + if (vb->state == VB2_BUF_STATE_IN_REQUEST) { 1363 1363 vb->state = VB2_BUF_STATE_DEQUEUED; 1364 + if (vb->request) 1365 + media_request_put(vb->request); 1366 + vb->request = NULL; 1367 + } 1364 1368 } 1365 1369 1366 1370 static const struct media_request_object_ops vb2_core_req_ops = { ··· 1532 1528 return ret; 1533 1529 1534 1530 vb->state = VB2_BUF_STATE_IN_REQUEST; 1531 + 1532 + /* 1533 + * Increment the refcount and store the request. 1534 + * The request refcount is decremented again when the 1535 + * buffer is dequeued. This is to prevent vb2_buffer_done() 1536 + * from freeing the request from interrupt context, which can 1537 + * happen if the application closed the request fd after 1538 + * queueing the request. 
1539 + */ 1540 + media_request_get(req); 1541 + vb->request = req; 1542 + 1535 1543 /* Fill buffer information for the userspace */ 1536 1544 if (pb) { 1537 1545 call_void_bufop(q, copy_timestamp, vb, pb); ··· 1765 1749 call_void_memop(vb, unmap_dmabuf, vb->planes[i].mem_priv); 1766 1750 vb->planes[i].dbuf_mapped = 0; 1767 1751 } 1768 - if (vb->req_obj.req) { 1769 - media_request_object_unbind(&vb->req_obj); 1770 - media_request_object_put(&vb->req_obj); 1771 - } 1772 1752 call_void_bufop(q, init_buffer, vb); 1773 1753 } 1774 1754 ··· 1808 1796 1809 1797 /* go back to dequeued state */ 1810 1798 __vb2_dqbuf(vb); 1799 + 1800 + if (WARN_ON(vb->req_obj.req)) { 1801 + media_request_object_unbind(&vb->req_obj); 1802 + media_request_object_put(&vb->req_obj); 1803 + } 1804 + if (vb->request) 1805 + media_request_put(vb->request); 1806 + vb->request = NULL; 1811 1807 1812 1808 dprintk(2, "dqbuf of buffer %d, with state %d\n", 1813 1809 vb->index, vb->state); ··· 1923 1903 vb->prepared = false; 1924 1904 } 1925 1905 __vb2_dqbuf(vb); 1906 + 1907 + if (vb->req_obj.req) { 1908 + media_request_object_unbind(&vb->req_obj); 1909 + media_request_object_put(&vb->req_obj); 1910 + } 1911 + if (vb->request) 1912 + media_request_put(vb->request); 1913 + vb->request = NULL; 1926 1914 } 1927 1915 } 1928 1916 ··· 1968 1940 if (ret) 1969 1941 return ret; 1970 1942 ret = vb2_start_streaming(q); 1971 - if (ret) { 1972 - __vb2_queue_cancel(q); 1943 + if (ret) 1973 1944 return ret; 1974 - } 1975 1945 } 1976 1946 1977 1947 q->streaming = 1;
+9 -4
drivers/media/common/videobuf2/videobuf2-v4l2.c
··· 333 333 } 334 334 335 335 static int vb2_queue_or_prepare_buf(struct vb2_queue *q, struct media_device *mdev, 336 - struct v4l2_buffer *b, 337 - const char *opname, 336 + struct v4l2_buffer *b, bool is_prepare, 338 337 struct media_request **p_req) 339 338 { 339 + const char *opname = is_prepare ? "prepare_buf" : "qbuf"; 340 340 struct media_request *req; 341 341 struct vb2_v4l2_buffer *vbuf; 342 342 struct vb2_buffer *vb; ··· 377 377 if (ret) 378 378 return ret; 379 379 } 380 + 381 + if (is_prepare) 382 + return 0; 380 383 381 384 if (!(b->flags & V4L2_BUF_FLAG_REQUEST_FD)) { 382 385 if (q->uses_requests) { ··· 634 631 *caps |= V4L2_BUF_CAP_SUPPORTS_USERPTR; 635 632 if (q->io_modes & VB2_DMABUF) 636 633 *caps |= V4L2_BUF_CAP_SUPPORTS_DMABUF; 634 + #ifdef CONFIG_MEDIA_CONTROLLER_REQUEST_API 637 635 if (q->supports_requests) 638 636 *caps |= V4L2_BUF_CAP_SUPPORTS_REQUESTS; 637 + #endif 639 638 } 640 639 641 640 int vb2_reqbufs(struct vb2_queue *q, struct v4l2_requestbuffers *req) ··· 662 657 if (b->flags & V4L2_BUF_FLAG_REQUEST_FD) 663 658 return -EINVAL; 664 659 665 - ret = vb2_queue_or_prepare_buf(q, mdev, b, "prepare_buf", NULL); 660 + ret = vb2_queue_or_prepare_buf(q, mdev, b, true, NULL); 666 661 667 662 return ret ? ret : vb2_core_prepare_buf(q, b->index, b); 668 663 } ··· 734 729 return -EBUSY; 735 730 } 736 731 737 - ret = vb2_queue_or_prepare_buf(q, mdev, b, "qbuf", &req); 732 + ret = vb2_queue_or_prepare_buf(q, mdev, b, false, &req); 738 733 if (ret) 739 734 return ret; 740 735 ret = vb2_core_qbuf(q, b->index, b, req);
+4
drivers/media/media-device.c
··· 381 381 static long media_device_request_alloc(struct media_device *mdev, 382 382 int *alloc_fd) 383 383 { 384 + #ifdef CONFIG_MEDIA_CONTROLLER_REQUEST_API 384 385 if (!mdev->ops || !mdev->ops->req_validate || !mdev->ops->req_queue) 385 386 return -ENOTTY; 386 387 387 388 return media_request_alloc(mdev, alloc_fd); 389 + #else 390 + return -ENOTTY; 391 + #endif 388 392 } 389 393 390 394 static long copy_arg_from_user(void *karg, void __user *uarg, unsigned int cmd)
+10 -3
drivers/media/platform/vicodec/vicodec-core.c
··· 997 997 998 998 q_data->sequence = 0; 999 999 1000 - if (!V4L2_TYPE_IS_OUTPUT(q->type)) 1000 + if (!V4L2_TYPE_IS_OUTPUT(q->type)) { 1001 + if (!ctx->is_enc) { 1002 + state->width = q_data->width; 1003 + state->height = q_data->height; 1004 + } 1001 1005 return 0; 1006 + } 1002 1007 1003 - state->width = q_data->width; 1004 - state->height = q_data->height; 1008 + if (ctx->is_enc) { 1009 + state->width = q_data->width; 1010 + state->height = q_data->height; 1011 + } 1005 1012 state->ref_frame.width = state->ref_frame.height = 0; 1006 1013 state->ref_frame.luma = kvmalloc(size + 2 * size / chroma_div, 1007 1014 GFP_KERNEL);
-2
drivers/media/platform/vivid/vivid-sdr-cap.c
··· 276 276 277 277 list_for_each_entry_safe(buf, tmp, &dev->sdr_cap_active, list) { 278 278 list_del(&buf->list); 279 - v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req, 280 - &dev->ctrl_hdl_sdr_cap); 281 279 vb2_buffer_done(&buf->vb.vb2_buf, 282 280 VB2_BUF_STATE_QUEUED); 283 281 }
-2
drivers/media/platform/vivid/vivid-vbi-cap.c
··· 204 204 205 205 list_for_each_entry_safe(buf, tmp, &dev->vbi_cap_active, list) { 206 206 list_del(&buf->list); 207 - v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req, 208 - &dev->ctrl_hdl_vbi_cap); 209 207 vb2_buffer_done(&buf->vb.vb2_buf, 210 208 VB2_BUF_STATE_QUEUED); 211 209 }
-2
drivers/media/platform/vivid/vivid-vbi-out.c
··· 96 96 97 97 list_for_each_entry_safe(buf, tmp, &dev->vbi_out_active, list) { 98 98 list_del(&buf->list); 99 - v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req, 100 - &dev->ctrl_hdl_vbi_out); 101 99 vb2_buffer_done(&buf->vb.vb2_buf, 102 100 VB2_BUF_STATE_QUEUED); 103 101 }
-2
drivers/media/platform/vivid/vivid-vid-cap.c
··· 243 243 244 244 list_for_each_entry_safe(buf, tmp, &dev->vid_cap_active, list) { 245 245 list_del(&buf->list); 246 - v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req, 247 - &dev->ctrl_hdl_vid_cap); 248 246 vb2_buffer_done(&buf->vb.vb2_buf, 249 247 VB2_BUF_STATE_QUEUED); 250 248 }
-2
drivers/media/platform/vivid/vivid-vid-out.c
··· 162 162 163 163 list_for_each_entry_safe(buf, tmp, &dev->vid_out_active, list) { 164 164 list_del(&buf->list); 165 - v4l2_ctrl_request_complete(buf->vb.vb2_buf.req_obj.req, 166 - &dev->ctrl_hdl_vid_out); 167 165 vb2_buffer_done(&buf->vb.vb2_buf, 168 166 VB2_BUF_STATE_QUEUED); 169 167 }
+1 -1
drivers/media/platform/vsp1/vsp1_lif.c
··· 95 95 format = vsp1_entity_get_pad_format(&lif->entity, lif->entity.config, 96 96 LIF_PAD_SOURCE); 97 97 98 - switch (entity->vsp1->version & VI6_IP_VERSION_SOC_MASK) { 98 + switch (entity->vsp1->version & VI6_IP_VERSION_MODEL_MASK) { 99 99 case VI6_IP_VERSION_MODEL_VSPD_GEN2: 100 100 case VI6_IP_VERSION_MODEL_VSPD_V2H: 101 101 hbth = 1536;
+2 -2
drivers/media/v4l2-core/v4l2-ctrls.c
··· 1563 1563 u64 offset; 1564 1564 s64 val; 1565 1565 1566 - switch (ctrl->type) { 1566 + switch ((u32)ctrl->type) { 1567 1567 case V4L2_CTRL_TYPE_INTEGER: 1568 1568 return ROUND_TO_RANGE(ptr.p_s32[idx], u32, ctrl); 1569 1569 case V4L2_CTRL_TYPE_INTEGER64: ··· 2232 2232 is_array = nr_of_dims > 0; 2233 2233 2234 2234 /* Prefill elem_size for all types handled by std_type_ops */ 2235 - switch (type) { 2235 + switch ((u32)type) { 2236 2236 case V4L2_CTRL_TYPE_INTEGER64: 2237 2237 elem_size = sizeof(s64); 2238 2238 break;
+10 -5
drivers/mmc/core/block.c
··· 472 472 static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md, 473 473 struct mmc_blk_ioc_data *idata) 474 474 { 475 - struct mmc_command cmd = {}; 475 + struct mmc_command cmd = {}, sbc = {}; 476 476 struct mmc_data data = {}; 477 477 struct mmc_request mrq = {}; 478 478 struct scatterlist sg; ··· 550 550 } 551 551 552 552 if (idata->rpmb) { 553 - err = mmc_set_blockcount(card, data.blocks, 554 - idata->ic.write_flag & (1 << 31)); 555 - if (err) 556 - return err; 553 + sbc.opcode = MMC_SET_BLOCK_COUNT; 554 + /* 555 + * We don't do any blockcount validation because the max size 556 + * may be increased by a future standard. We just copy the 557 + * 'Reliable Write' bit here. 558 + */ 559 + sbc.arg = data.blocks | (idata->ic.write_flag & BIT(31)); 560 + sbc.flags = MMC_RSP_R1 | MMC_CMD_AC; 561 + mrq.sbc = &sbc; 557 562 } 558 563 559 564 if ((MMC_EXTRACT_INDEX_FROM_ARG(cmd.arg) == EXT_CSD_SANITIZE_START) &&
+15 -9
drivers/mmc/core/mmc.c
··· 30 30 #include "pwrseq.h" 31 31 32 32 #define DEFAULT_CMD6_TIMEOUT_MS 500 33 + #define MIN_CACHE_EN_TIMEOUT_MS 1600 33 34 34 35 static const unsigned int tran_exp[] = { 35 36 10000, 100000, 1000000, 10000000, ··· 527 526 card->cid.year += 16; 528 527 529 528 /* check whether the eMMC card supports BKOPS */ 530 - if (!mmc_card_broken_hpi(card) && 531 - ext_csd[EXT_CSD_BKOPS_SUPPORT] & 0x1) { 529 + if (ext_csd[EXT_CSD_BKOPS_SUPPORT] & 0x1) { 532 530 card->ext_csd.bkops = 1; 533 531 card->ext_csd.man_bkops_en = 534 532 (ext_csd[EXT_CSD_BKOPS_EN] & ··· 1782 1782 if (err) { 1783 1783 pr_warn("%s: Enabling HPI failed\n", 1784 1784 mmc_hostname(card->host)); 1785 + card->ext_csd.hpi_en = 0; 1785 1786 err = 0; 1786 - } else 1787 + } else { 1787 1788 card->ext_csd.hpi_en = 1; 1789 + } 1788 1790 } 1789 1791 1790 1792 /* 1791 - * If cache size is higher than 0, this indicates 1792 - * the existence of cache and it can be turned on. 1793 + * If cache size is higher than 0, this indicates the existence of cache 1794 + * and it can be turned on. Note that some eMMCs from Micron have been 1795 + * reported to need ~800 ms timeout, while enabling the cache after 1796 + * sudden power failure tests. Let's extend the timeout to a minimum of 1797 + * MIN_CACHE_EN_TIMEOUT_MS and do it for all cards. 1793 1798 */ 1794 - if (!mmc_card_broken_hpi(card) && 1795 - card->ext_csd.cache_size > 0) { 1799 + if (card->ext_csd.cache_size > 0) { 1800 + unsigned int timeout_ms = MIN_CACHE_EN_TIMEOUT_MS; 1801 + 1802 + timeout_ms = max(card->ext_csd.generic_cmd6_time, timeout_ms); 1796 1803 err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, 1797 - EXT_CSD_CACHE_CTRL, 1, 1798 - card->ext_csd.generic_cmd6_time); 1804 + EXT_CSD_CACHE_CTRL, 1, timeout_ms); 1799 1805 if (err && err != -EBADMSG) 1800 1806 goto free_card 1801 1807
+9 -2
drivers/mmc/host/omap.c
··· 104 104 unsigned int vdd; 105 105 u16 saved_con; 106 106 u16 bus_mode; 107 + u16 power_mode; 107 108 unsigned int fclk_freq; 108 109 109 110 struct tasklet_struct cover_tasklet; ··· 1158 1157 struct mmc_omap_slot *slot = mmc_priv(mmc); 1159 1158 struct mmc_omap_host *host = slot->host; 1160 1159 int i, dsor; 1161 - int clk_enabled; 1160 + int clk_enabled, init_stream; 1162 1161 1163 1162 mmc_omap_select_slot(slot, 0); 1164 1163 ··· 1168 1167 slot->vdd = ios->vdd; 1169 1168 1170 1169 clk_enabled = 0; 1170 + init_stream = 0; 1171 1171 switch (ios->power_mode) { 1172 1172 case MMC_POWER_OFF: 1173 1173 mmc_omap_set_power(slot, 0, ios->vdd); ··· 1176 1174 case MMC_POWER_UP: 1177 1175 /* Cannot touch dsor yet, just power up MMC */ 1178 1176 mmc_omap_set_power(slot, 1, ios->vdd); 1177 + slot->power_mode = ios->power_mode; 1179 1178 goto exit; 1180 1179 case MMC_POWER_ON: 1181 1180 mmc_omap_fclk_enable(host, 1); 1182 1181 clk_enabled = 1; 1183 1182 dsor |= 1 << 11; 1183 + if (slot->power_mode != MMC_POWER_ON) 1184 + init_stream = 1; 1184 1185 break; 1185 1186 } 1187 + slot->power_mode = ios->power_mode; 1186 1188 1187 1189 if (slot->bus_mode != ios->bus_mode) { 1188 1190 if (slot->pdata->set_bus_mode != NULL) ··· 1202 1196 for (i = 0; i < 2; i++) 1203 1197 OMAP_MMC_WRITE(host, CON, dsor); 1204 1198 slot->saved_con = dsor; 1205 - if (ios->power_mode == MMC_POWER_ON) { 1199 + if (init_stream) { 1206 1200 /* worst case at 400kHz, 80 cycles makes 200 microsecs */ 1207 1201 int usecs = 250; 1208 1202 ··· 1240 1234 slot->host = host; 1241 1235 slot->mmc = mmc; 1242 1236 slot->id = id; 1237 + slot->power_mode = MMC_POWER_UNDEFINED; 1243 1238 slot->pdata = &host->pdata->slots[id]; 1244 1239 1245 1240 host->slots[id] = slot;
+11 -1
drivers/mmc/host/omap_hsmmc.c
··· 1909 1909 mmc->max_blk_size = 512; /* Block Length at max can be 1024 */ 1910 1910 mmc->max_blk_count = 0xFFFF; /* No. of Blocks is 16 bits */ 1911 1911 mmc->max_req_size = mmc->max_blk_size * mmc->max_blk_count; 1912 - mmc->max_seg_size = mmc->max_req_size; 1913 1912 1914 1913 mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED | 1915 1914 MMC_CAP_WAIT_WHILE_BUSY | MMC_CAP_ERASE | MMC_CAP_CMD23; ··· 1937 1938 ret = PTR_ERR(host->tx_chan); 1938 1939 goto err_irq; 1939 1940 } 1941 + 1942 + /* 1943 + * Limit the maximum segment size to the lower of the request size 1944 + * and the DMA engine device segment size limits. In reality, with 1945 + * 32-bit transfers, the DMA engine can do longer segments than this 1946 + * but there is no way to represent that in the DMA model - if we 1947 + * increase this figure here, we get warnings from the DMA API debug. 1948 + */ 1949 + mmc->max_seg_size = min3(mmc->max_req_size, 1950 + dma_get_max_seg_size(host->rx_chan->device->dev), 1951 + dma_get_max_seg_size(host->tx_chan->device->dev)); 1940 1952 1941 1953 /* Request IRQ for MMC operations */ 1942 1954 ret = devm_request_irq(&pdev->dev, host->irq, omap_hsmmc_irq, 0,
+8 -4
drivers/mmc/host/sdhci-omap.c
··· 288 288 struct device *dev = omap_host->dev; 289 289 struct mmc_ios *ios = &mmc->ios; 290 290 u32 start_window = 0, max_window = 0; 291 + bool dcrc_was_enabled = false; 291 292 u8 cur_match, prev_match = 0; 292 293 u32 length = 0, max_len = 0; 293 - u32 ier = host->ier; 294 294 u32 phase_delay = 0; 295 295 int ret = 0; 296 296 u32 reg; ··· 317 317 * during the tuning procedure. So disable it during the 318 318 * tuning procedure. 319 319 */ 320 - ier &= ~SDHCI_INT_DATA_CRC; 321 - sdhci_writel(host, ier, SDHCI_INT_ENABLE); 322 - sdhci_writel(host, ier, SDHCI_SIGNAL_ENABLE); 320 + if (host->ier & SDHCI_INT_DATA_CRC) { 321 + host->ier &= ~SDHCI_INT_DATA_CRC; 322 + dcrc_was_enabled = true; 323 + } 323 324 324 325 while (phase_delay <= MAX_PHASE_DELAY) { 325 326 sdhci_omap_set_dll(omap_host, phase_delay); ··· 367 366 368 367 ret: 369 368 sdhci_reset(host, SDHCI_RESET_CMD | SDHCI_RESET_DATA); 369 + /* Reenable forbidden interrupt */ 370 + if (dcrc_was_enabled) 371 + host->ier |= SDHCI_INT_DATA_CRC; 370 372 sdhci_writel(host, host->ier, SDHCI_INT_ENABLE); 371 373 sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE); 372 374 return ret;
+4 -4
drivers/mmc/host/sdhci-tegra.c
··· 510 510 511 511 err = device_property_read_u32(host->mmc->parent, 512 512 "nvidia,pad-autocal-pull-up-offset-3v3-timeout", 513 - &autocal->pull_up_3v3); 513 + &autocal->pull_up_3v3_timeout); 514 514 if (err) 515 515 autocal->pull_up_3v3_timeout = 0; 516 516 517 517 err = device_property_read_u32(host->mmc->parent, 518 518 "nvidia,pad-autocal-pull-down-offset-3v3-timeout", 519 - &autocal->pull_down_3v3); 519 + &autocal->pull_down_3v3_timeout); 520 520 if (err) 521 521 autocal->pull_down_3v3_timeout = 0; 522 522 523 523 err = device_property_read_u32(host->mmc->parent, 524 524 "nvidia,pad-autocal-pull-up-offset-1v8-timeout", 525 - &autocal->pull_up_1v8); 525 + &autocal->pull_up_1v8_timeout); 526 526 if (err) 527 527 autocal->pull_up_1v8_timeout = 0; 528 528 529 529 err = device_property_read_u32(host->mmc->parent, 530 530 "nvidia,pad-autocal-pull-down-offset-1v8-timeout", 531 - &autocal->pull_down_1v8); 531 + &autocal->pull_down_1v8_timeout); 532 532 if (err) 533 533 autocal->pull_down_1v8_timeout = 0; 534 534
+15 -7
drivers/mmc/host/sdhci.c
··· 127 127 { 128 128 u16 ctrl2; 129 129 130 - ctrl2 = sdhci_readb(host, SDHCI_HOST_CONTROL2); 130 + ctrl2 = sdhci_readw(host, SDHCI_HOST_CONTROL2); 131 131 if (ctrl2 & SDHCI_CTRL_V4_MODE) 132 132 return; 133 133 134 134 ctrl2 |= SDHCI_CTRL_V4_MODE; 135 - sdhci_writeb(host, ctrl2, SDHCI_HOST_CONTROL); 135 + sdhci_writew(host, ctrl2, SDHCI_HOST_CONTROL2); 136 136 } 137 137 138 138 /* ··· 216 216 timeout = ktime_add_ms(ktime_get(), 100); 217 217 218 218 /* hw clears the bit when it's done */ 219 - while (sdhci_readb(host, SDHCI_SOFTWARE_RESET) & mask) { 220 - if (ktime_after(ktime_get(), timeout)) { 219 + while (1) { 220 + bool timedout = ktime_after(ktime_get(), timeout); 221 + 222 + if (!(sdhci_readb(host, SDHCI_SOFTWARE_RESET) & mask)) 223 + break; 224 + if (timedout) { 221 225 pr_err("%s: Reset 0x%x never completed.\n", 222 226 mmc_hostname(host->mmc), (int)mask); 223 227 sdhci_dumpregs(host); ··· 1612 1608 1613 1609 /* Wait max 20 ms */ 1614 1610 timeout = ktime_add_ms(ktime_get(), 20); 1615 - while (!((clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL)) 1616 - & SDHCI_CLOCK_INT_STABLE)) { 1617 - if (ktime_after(ktime_get(), timeout)) { 1611 + while (1) { 1612 + bool timedout = ktime_after(ktime_get(), timeout); 1613 + 1614 + clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL); 1615 + if (clk & SDHCI_CLOCK_INT_STABLE) 1616 + break; 1617 + if (timedout) { 1618 1618 pr_err("%s: Internal clock never stabilised.\n", 1619 1619 mmc_hostname(host->mmc)); 1620 1620 sdhci_dumpregs(host);
+1 -1
drivers/net/dsa/mv88e6xxx/chip.c
··· 1124 1124 u16 *p = _p; 1125 1125 int i; 1126 1126 1127 - regs->version = 0; 1127 + regs->version = chip->info->prod_num; 1128 1128 1129 1129 memset(p, 0xff, 32 * sizeof(u16)); 1130 1130
-3
drivers/net/ethernet/apm/xgene/xgene_enet_main.c
··· 29 29 #define RES_RING_CSR 1 30 30 #define RES_RING_CMD 2 31 31 32 - static const struct of_device_id xgene_enet_of_match[]; 33 - static const struct acpi_device_id xgene_enet_acpi_match[]; 34 - 35 32 static void xgene_enet_init_bufpool(struct xgene_enet_desc_ring *buf_pool) 36 33 { 37 34 struct xgene_enet_raw_desc16 *raw_desc;
+2
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 1282 1282 BNX2X_SP_RTNL_TX_STOP, 1283 1283 BNX2X_SP_RTNL_GET_DRV_VERSION, 1284 1284 BNX2X_SP_RTNL_CHANGE_UDP_PORT, 1285 + BNX2X_SP_RTNL_UPDATE_SVID, 1285 1286 }; 1286 1287 1287 1288 enum bnx2x_iov_flag { ··· 2521 2520 void bnx2x_init_ptp(struct bnx2x *bp); 2522 2521 int bnx2x_configure_ptp_filters(struct bnx2x *bp); 2523 2522 void bnx2x_set_rx_ts(struct bnx2x *bp, struct sk_buff *skb); 2523 + void bnx2x_register_phc(struct bnx2x *bp); 2524 2524 2525 2525 #define BNX2X_MAX_PHC_DRIFT 31000000 2526 2526 #define BNX2X_PTP_TX_TIMEOUT
+1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
··· 2842 2842 bnx2x_set_rx_mode_inner(bp); 2843 2843 2844 2844 if (bp->flags & PTP_SUPPORTED) { 2845 + bnx2x_register_phc(bp); 2845 2846 bnx2x_init_ptp(bp); 2846 2847 bnx2x_configure_ptp_filters(bp); 2847 2848 }
+49 -21
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 2925 2925 func_params.f_obj = &bp->func_obj; 2926 2926 func_params.cmd = BNX2X_F_CMD_SWITCH_UPDATE; 2927 2927 2928 + /* Prepare parameters for function state transitions */ 2929 + __set_bit(RAMROD_COMP_WAIT, &func_params.ramrod_flags); 2930 + __set_bit(RAMROD_RETRY, &func_params.ramrod_flags); 2931 + 2928 2932 if (IS_MF_UFP(bp) || IS_MF_BD(bp)) { 2929 2933 int func = BP_ABS_FUNC(bp); 2930 2934 u32 val; ··· 4315 4311 bnx2x_handle_eee_event(bp); 4316 4312 4317 4313 if (val & DRV_STATUS_OEM_UPDATE_SVID) 4318 - bnx2x_handle_update_svid_cmd(bp); 4314 + bnx2x_schedule_sp_rtnl(bp, 4315 + BNX2X_SP_RTNL_UPDATE_SVID, 0); 4319 4316 4320 4317 if (bp->link_vars.periodic_flags & 4321 4318 PERIODIC_FLAGS_LINK_EVENT) { ··· 7728 7723 REG_WR(bp, reg_addr, val); 7729 7724 } 7730 7725 7726 + if (CHIP_IS_E3B0(bp)) 7727 + bp->flags |= PTP_SUPPORTED; 7728 + 7731 7729 return 0; 7732 7730 } 7733 7731 ··· 8480 8472 /* Fill a user request section if needed */ 8481 8473 if (!test_bit(RAMROD_CONT, ramrod_flags)) { 8482 8474 ramrod_param.user_req.u.vlan.vlan = vlan; 8475 + __set_bit(BNX2X_VLAN, &ramrod_param.user_req.vlan_mac_flags); 8483 8476 /* Set the command: ADD or DEL */ 8484 8477 if (set) 8485 8478 ramrod_param.user_req.cmd = BNX2X_VLAN_MAC_ADD; ··· 8499 8490 } 8500 8491 8501 8492 return rc; 8493 + } 8494 + 8495 + static int bnx2x_del_all_vlans(struct bnx2x *bp) 8496 + { 8497 + struct bnx2x_vlan_mac_obj *vlan_obj = &bp->sp_objs[0].vlan_obj; 8498 + unsigned long ramrod_flags = 0, vlan_flags = 0; 8499 + struct bnx2x_vlan_entry *vlan; 8500 + int rc; 8501 + 8502 + __set_bit(RAMROD_COMP_WAIT, &ramrod_flags); 8503 + __set_bit(BNX2X_VLAN, &vlan_flags); 8504 + rc = vlan_obj->delete_all(bp, vlan_obj, &vlan_flags, &ramrod_flags); 8505 + if (rc) 8506 + return rc; 8507 + 8508 + /* Mark that hw forgot all entries */ 8509 + list_for_each_entry(vlan, &bp->vlan_reg, link) 8510 + vlan->hw = false; 8511 + bp->vlan_cnt = 0; 8512 + 8513 + return 0; 8502 8514 } 8503 8515 8504 8516 int 
bnx2x_del_all_macs(struct bnx2x *bp, ··· 9360 9330 BNX2X_ERR("Failed to schedule DEL commands for UC MACs list: %d\n", 9361 9331 rc); 9362 9332 9333 + /* Remove all currently configured VLANs */ 9334 + rc = bnx2x_del_all_vlans(bp); 9335 + if (rc < 0) 9336 + BNX2X_ERR("Failed to delete all VLANs\n"); 9337 + 9363 9338 /* Disable LLH */ 9364 9339 if (!CHIP_IS_E1(bp)) 9365 9340 REG_WR(bp, NIG_REG_LLH0_FUNC_EN + port*8, 0); ··· 9452 9417 * function stop ramrod is sent, since as part of this ramrod FW access 9453 9418 * PTP registers. 9454 9419 */ 9455 - if (bp->flags & PTP_SUPPORTED) 9420 + if (bp->flags & PTP_SUPPORTED) { 9456 9421 bnx2x_stop_ptp(bp); 9422 + if (bp->ptp_clock) { 9423 + ptp_clock_unregister(bp->ptp_clock); 9424 + bp->ptp_clock = NULL; 9425 + } 9426 + } 9457 9427 9458 9428 /* Disable HW interrupts, NAPI */ 9459 9429 bnx2x_netif_stop(bp, 1); ··· 10398 10358 if (test_and_clear_bit(BNX2X_SP_RTNL_GET_DRV_VERSION, 10399 10359 &bp->sp_rtnl_state)) 10400 10360 bnx2x_update_mng_version(bp); 10361 + 10362 + if (test_and_clear_bit(BNX2X_SP_RTNL_UPDATE_SVID, &bp->sp_rtnl_state)) 10363 + bnx2x_handle_update_svid_cmd(bp); 10401 10364 10402 10365 if (test_and_clear_bit(BNX2X_SP_RTNL_CHANGE_UDP_PORT, 10403 10366 &bp->sp_rtnl_state)) { ··· 11793 11750 * If maximum allowed number of connections is zero - 11794 11751 * disable the feature. 
11795 11752 */ 11796 - if (!bp->cnic_eth_dev.max_fcoe_conn) 11753 + if (!bp->cnic_eth_dev.max_fcoe_conn) { 11797 11754 bp->flags |= NO_FCOE_FLAG; 11755 + eth_zero_addr(bp->fip_mac); 11756 + } 11798 11757 } 11799 11758 11800 11759 static void bnx2x_get_cnic_info(struct bnx2x *bp) ··· 12539 12494 12540 12495 bp->dump_preset_idx = 1; 12541 12496 12542 - if (CHIP_IS_E3B0(bp)) 12543 - bp->flags |= PTP_SUPPORTED; 12544 - 12545 12497 return rc; 12546 12498 } 12547 12499 ··· 13066 13024 13067 13025 int bnx2x_vlan_reconfigure_vid(struct bnx2x *bp) 13068 13026 { 13069 - struct bnx2x_vlan_entry *vlan; 13070 - 13071 - /* The hw forgot all entries after reload */ 13072 - list_for_each_entry(vlan, &bp->vlan_reg, link) 13073 - vlan->hw = false; 13074 - bp->vlan_cnt = 0; 13075 - 13076 13027 /* Don't set rx mode here. Our caller will do it. */ 13077 13028 bnx2x_vlan_configure(bp, false); 13078 13029 ··· 13930 13895 return -ENOTSUPP; 13931 13896 } 13932 13897 13933 - static void bnx2x_register_phc(struct bnx2x *bp) 13898 + void bnx2x_register_phc(struct bnx2x *bp) 13934 13899 { 13935 13900 /* Fill the ptp_clock_info struct and register PTP clock*/ 13936 13901 bp->ptp_clock_info.owner = THIS_MODULE; ··· 14132 14097 dev->base_addr, bp->pdev->irq, dev->dev_addr); 14133 14098 pcie_print_link_status(bp->pdev); 14134 14099 14135 - bnx2x_register_phc(bp); 14136 - 14137 14100 if (!IS_MF_SD_STORAGE_PERSONALITY_ONLY(bp)) 14138 14101 bnx2x_set_os_driver_state(bp, OS_DRIVER_STATE_DISABLED); 14139 14102 ··· 14164 14131 struct bnx2x *bp, 14165 14132 bool remove_netdev) 14166 14133 { 14167 - if (bp->ptp_clock) { 14168 - ptp_clock_unregister(bp->ptp_clock); 14169 - bp->ptp_clock = NULL; 14170 - } 14171 - 14172 14134 /* Delete storage MAC address */ 14173 14135 if (!NO_FCOE(bp)) { 14174 14136 rtnl_lock();
+3 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.h
··· 265 265 BNX2X_ETH_MAC, 266 266 BNX2X_ISCSI_ETH_MAC, 267 267 BNX2X_NETQ_ETH_MAC, 268 + BNX2X_VLAN, 268 269 BNX2X_DONT_CONSUME_CAM_CREDIT, 269 270 BNX2X_DONT_CONSUME_CAM_CREDIT_DEST, 270 271 }; ··· 273 272 #define BNX2X_VLAN_MAC_CMP_MASK (1 << BNX2X_UC_LIST_MAC | \ 274 273 1 << BNX2X_ETH_MAC | \ 275 274 1 << BNX2X_ISCSI_ETH_MAC | \ 276 - 1 << BNX2X_NETQ_ETH_MAC) 275 + 1 << BNX2X_NETQ_ETH_MAC | \ 276 + 1 << BNX2X_VLAN) 277 277 #define BNX2X_VLAN_MAC_CMP_FLAGS(flags) \ 278 278 ((flags) & BNX2X_VLAN_MAC_CMP_MASK) 279 279
+4 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 2572 2572 static int bnxt_run_loopback(struct bnxt *bp) 2573 2573 { 2574 2574 struct bnxt_tx_ring_info *txr = &bp->tx_ring[0]; 2575 + struct bnxt_rx_ring_info *rxr = &bp->rx_ring[0]; 2575 2576 struct bnxt_cp_ring_info *cpr; 2576 2577 int pkt_size, i = 0; 2577 2578 struct sk_buff *skb; ··· 2580 2579 u8 *data; 2581 2580 int rc; 2582 2581 2583 - cpr = &txr->bnapi->cp_ring; 2582 + cpr = &rxr->bnapi->cp_ring; 2583 + if (bp->flags & BNXT_FLAG_CHIP_P5) 2584 + cpr = cpr->cp_ring_arr[BNXT_RX_HDL]; 2584 2585 pkt_size = min(bp->dev->mtu + ETH_HLEN, bp->rx_copy_thresh); 2585 2586 skb = netdev_alloc_skb(bp->dev, pkt_size); 2586 2587 if (!skb)
+42 -6
drivers/net/ethernet/cadence/macb_main.c
··· 61 61 #define MACB_TX_ERR_FLAGS (MACB_BIT(ISR_TUND) \ 62 62 | MACB_BIT(ISR_RLE) \ 63 63 | MACB_BIT(TXERR)) 64 - #define MACB_TX_INT_FLAGS (MACB_TX_ERR_FLAGS | MACB_BIT(TCOMP)) 64 + #define MACB_TX_INT_FLAGS (MACB_TX_ERR_FLAGS | MACB_BIT(TCOMP) \ 65 + | MACB_BIT(TXUBR)) 65 66 66 67 /* Max length of transmit frame must be a multiple of 8 bytes */ 67 68 #define MACB_TX_LEN_ALIGN 8 ··· 681 680 if (bp->hw_dma_cap & HW_DMA_CAP_64B) { 682 681 desc_64 = macb_64b_desc(bp, desc); 683 682 desc_64->addrh = upper_32_bits(addr); 683 + /* The low bits of RX address contain the RX_USED bit, clearing 684 + * of which allows packet RX. Make sure the high bits are also 685 + * visible to HW at that point. 686 + */ 687 + dma_wmb(); 684 688 } 685 689 #endif 686 690 desc->addr = lower_32_bits(addr); ··· 934 928 935 929 if (entry == bp->rx_ring_size - 1) 936 930 paddr |= MACB_BIT(RX_WRAP); 937 - macb_set_addr(bp, desc, paddr); 938 931 desc->ctrl = 0; 932 + /* Setting addr clears RX_USED and allows reception, 933 + * make sure ctrl is cleared first to avoid a race. 934 + */ 935 + dma_wmb(); 936 + macb_set_addr(bp, desc, paddr); 939 937 940 938 /* properly align Ethernet header */ 941 939 skb_reserve(skb, NET_IP_ALIGN); 942 940 } else { 943 - desc->addr &= ~MACB_BIT(RX_USED); 944 941 desc->ctrl = 0; 942 + dma_wmb(); 943 + desc->addr &= ~MACB_BIT(RX_USED); 945 944 } 946 945 } 947 946 ··· 1000 989 1001 990 rxused = (desc->addr & MACB_BIT(RX_USED)) ? 
true : false; 1002 991 addr = macb_get_addr(bp, desc); 1003 - ctrl = desc->ctrl; 1004 992 1005 993 if (!rxused) 1006 994 break; 995 + 996 + /* Ensure ctrl is at least as up-to-date as rxused */ 997 + dma_rmb(); 998 + 999 + ctrl = desc->ctrl; 1007 1000 1008 1001 queue->rx_tail++; 1009 1002 count++; ··· 1183 1168 /* Make hw descriptor updates visible to CPU */ 1184 1169 rmb(); 1185 1170 1186 - ctrl = desc->ctrl; 1187 - 1188 1171 if (!(desc->addr & MACB_BIT(RX_USED))) 1189 1172 break; 1173 + 1174 + /* Ensure ctrl is at least as up-to-date as addr */ 1175 + dma_rmb(); 1176 + 1177 + ctrl = desc->ctrl; 1190 1178 1191 1179 if (ctrl & MACB_BIT(RX_SOF)) { 1192 1180 if (first_frag != -1) ··· 1330 1312 netif_tx_start_all_queues(dev); 1331 1313 } 1332 1314 1315 + static void macb_tx_restart(struct macb_queue *queue) 1316 + { 1317 + unsigned int head = queue->tx_head; 1318 + unsigned int tail = queue->tx_tail; 1319 + struct macb *bp = queue->bp; 1320 + 1321 + if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE) 1322 + queue_writel(queue, ISR, MACB_BIT(TXUBR)); 1323 + 1324 + if (head == tail) 1325 + return; 1326 + 1327 + macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART)); 1328 + } 1329 + 1333 1330 static irqreturn_t macb_interrupt(int irq, void *dev_id) 1334 1331 { 1335 1332 struct macb_queue *queue = dev_id; ··· 1401 1368 1402 1369 if (status & MACB_BIT(TCOMP)) 1403 1370 macb_tx_interrupt(queue); 1371 + 1372 + if (status & MACB_BIT(TXUBR)) 1373 + macb_tx_restart(queue); 1404 1374 1405 1375 /* Link change detection isn't possible with RMII, so we'll 1406 1376 * add that if/when we get our hands on a full-blown MII PHY.
+2
drivers/net/ethernet/cadence/macb_ptp.c
··· 319 319 desc_ptp = macb_ptp_desc(queue->bp, desc); 320 320 tx_timestamp = &queue->tx_timestamps[head]; 321 321 tx_timestamp->skb = skb; 322 + /* ensure ts_1/ts_2 is loaded after ctrl (TX_USED check) */ 323 + dma_rmb(); 322 324 tx_timestamp->desc_ptp.ts_1 = desc_ptp->ts_1; 323 325 tx_timestamp->desc_ptp.ts_2 = desc_ptp->ts_2; 324 326 /* move head */
+3
drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
··· 1453 1453 #define T6_TX_FORCE_V(x) ((x) << T6_TX_FORCE_S) 1454 1454 #define T6_TX_FORCE_F T6_TX_FORCE_V(1U) 1455 1455 1456 + #define TX_URG_S 16 1457 + #define TX_URG_V(x) ((x) << TX_URG_S) 1458 + 1456 1459 #define TX_SHOVE_S 14 1457 1460 #define TX_SHOVE_V(x) ((x) << TX_SHOVE_S) 1458 1461
+3
drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
··· 379 379 380 380 hns_ae_ring_enable_all(handle, 0); 381 381 382 + /* clean rx fbd. */ 383 + hns_rcb_wait_fbd_clean(handle->qs, handle->q_num, RCB_INT_FLAG_RX); 384 + 382 385 (void)hns_mac_vm_config_bc_en(mac_cb, 0, false); 383 386 } 384 387
+10 -4
drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
··· 67 67 struct mac_driver *drv = (struct mac_driver *)mac_drv; 68 68 69 69 /*enable GE rX/tX */ 70 - if ((mode == MAC_COMM_MODE_TX) || (mode == MAC_COMM_MODE_RX_AND_TX)) 70 + if (mode == MAC_COMM_MODE_TX || mode == MAC_COMM_MODE_RX_AND_TX) 71 71 dsaf_set_dev_bit(drv, GMAC_PORT_EN_REG, GMAC_PORT_TX_EN_B, 1); 72 72 73 - if ((mode == MAC_COMM_MODE_RX) || (mode == MAC_COMM_MODE_RX_AND_TX)) 73 + if (mode == MAC_COMM_MODE_RX || mode == MAC_COMM_MODE_RX_AND_TX) { 74 + /* enable rx pcs */ 75 + dsaf_set_dev_bit(drv, GMAC_PCS_RX_EN_REG, 0, 0); 74 76 dsaf_set_dev_bit(drv, GMAC_PORT_EN_REG, GMAC_PORT_RX_EN_B, 1); 77 + } 75 78 } 76 79 77 80 static void hns_gmac_disable(void *mac_drv, enum mac_commom_mode mode) ··· 82 79 struct mac_driver *drv = (struct mac_driver *)mac_drv; 83 80 84 81 /*disable GE rX/tX */ 85 - if ((mode == MAC_COMM_MODE_TX) || (mode == MAC_COMM_MODE_RX_AND_TX)) 82 + if (mode == MAC_COMM_MODE_TX || mode == MAC_COMM_MODE_RX_AND_TX) 86 83 dsaf_set_dev_bit(drv, GMAC_PORT_EN_REG, GMAC_PORT_TX_EN_B, 0); 87 84 88 - if ((mode == MAC_COMM_MODE_RX) || (mode == MAC_COMM_MODE_RX_AND_TX)) 85 + if (mode == MAC_COMM_MODE_RX || mode == MAC_COMM_MODE_RX_AND_TX) { 86 + /* disable rx pcs */ 87 + dsaf_set_dev_bit(drv, GMAC_PCS_RX_EN_REG, 0, 1); 89 88 dsaf_set_dev_bit(drv, GMAC_PORT_EN_REG, GMAC_PORT_RX_EN_B, 0); 89 + } 90 90 } 91 91 92 92 /* hns_gmac_get_en - get port enable
+15
drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
··· 778 778 return rc; 779 779 } 780 780 781 + static void hns_mac_remove_phydev(struct hns_mac_cb *mac_cb) 782 + { 783 + if (!to_acpi_device_node(mac_cb->fw_port) || !mac_cb->phy_dev) 784 + return; 785 + 786 + phy_device_remove(mac_cb->phy_dev); 787 + phy_device_free(mac_cb->phy_dev); 788 + 789 + mac_cb->phy_dev = NULL; 790 + } 791 + 781 792 #define MAC_MEDIA_TYPE_MAX_LEN 16 782 793 783 794 static const struct { ··· 1128 1117 int max_port_num = hns_mac_get_max_port_num(dsaf_dev); 1129 1118 1130 1119 for (i = 0; i < max_port_num; i++) { 1120 + if (!dsaf_dev->mac_cb[i]) 1121 + continue; 1122 + 1131 1123 dsaf_dev->misc_op->cpld_reset_led(dsaf_dev->mac_cb[i]); 1124 + hns_mac_remove_phydev(dsaf_dev->mac_cb[i]); 1132 1125 dsaf_dev->mac_cb[i] = NULL; 1133 1126 } 1134 1127 }
+349 -174
drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
··· 935 935 } 936 936 937 937 /** 938 + * hns_dsaf_tcam_uc_cfg_vague - INT 939 + * @dsaf_dev: dsa fabric device struct pointer 940 + * @address, 941 + * @ptbl_tcam_data, 942 + */ 943 + static void hns_dsaf_tcam_uc_cfg_vague(struct dsaf_device *dsaf_dev, 944 + u32 address, 945 + struct dsaf_tbl_tcam_data *tcam_data, 946 + struct dsaf_tbl_tcam_data *tcam_mask, 947 + struct dsaf_tbl_tcam_ucast_cfg *tcam_uc) 948 + { 949 + spin_lock_bh(&dsaf_dev->tcam_lock); 950 + hns_dsaf_tbl_tcam_addr_cfg(dsaf_dev, address); 951 + hns_dsaf_tbl_tcam_data_cfg(dsaf_dev, tcam_data); 952 + hns_dsaf_tbl_tcam_ucast_cfg(dsaf_dev, tcam_uc); 953 + hns_dsaf_tbl_tcam_match_cfg(dsaf_dev, tcam_mask); 954 + hns_dsaf_tbl_tcam_data_ucast_pul(dsaf_dev); 955 + 956 + /*Restore Match Data*/ 957 + tcam_mask->tbl_tcam_data_high = 0xffffffff; 958 + tcam_mask->tbl_tcam_data_low = 0xffffffff; 959 + hns_dsaf_tbl_tcam_match_cfg(dsaf_dev, tcam_mask); 960 + 961 + spin_unlock_bh(&dsaf_dev->tcam_lock); 962 + } 963 + 964 + /** 965 + * hns_dsaf_tcam_mc_cfg_vague - INT 966 + * @dsaf_dev: dsa fabric device struct pointer 967 + * @address, 968 + * @ptbl_tcam_data, 969 + * @ptbl_tcam_mask 970 + * @ptbl_tcam_mcast 971 + */ 972 + static void hns_dsaf_tcam_mc_cfg_vague(struct dsaf_device *dsaf_dev, 973 + u32 address, 974 + struct dsaf_tbl_tcam_data *tcam_data, 975 + struct dsaf_tbl_tcam_data *tcam_mask, 976 + struct dsaf_tbl_tcam_mcast_cfg *tcam_mc) 977 + { 978 + spin_lock_bh(&dsaf_dev->tcam_lock); 979 + hns_dsaf_tbl_tcam_addr_cfg(dsaf_dev, address); 980 + hns_dsaf_tbl_tcam_data_cfg(dsaf_dev, tcam_data); 981 + hns_dsaf_tbl_tcam_mcast_cfg(dsaf_dev, tcam_mc); 982 + hns_dsaf_tbl_tcam_match_cfg(dsaf_dev, tcam_mask); 983 + hns_dsaf_tbl_tcam_data_mcast_pul(dsaf_dev); 984 + 985 + /*Restore Match Data*/ 986 + tcam_mask->tbl_tcam_data_high = 0xffffffff; 987 + tcam_mask->tbl_tcam_data_low = 0xffffffff; 988 + hns_dsaf_tbl_tcam_match_cfg(dsaf_dev, tcam_mask); 989 + 990 + spin_unlock_bh(&dsaf_dev->tcam_lock); 991 + } 992 + 993 + /** 938 
994 * hns_dsaf_tcam_mc_invld - INT 939 995 * @dsaf_id: dsa fabric id 940 996 * @address ··· 1544 1488 return i; 1545 1489 1546 1490 soft_mac_entry++; 1491 + } 1492 + return DSAF_INVALID_ENTRY_IDX; 1493 + } 1494 + 1495 + /** 1496 + * hns_dsaf_find_empty_mac_entry_reverse 1497 + * search dsa fabric soft empty-entry from the end 1498 + * @dsaf_dev: dsa fabric device struct pointer 1499 + */ 1500 + static u16 hns_dsaf_find_empty_mac_entry_reverse(struct dsaf_device *dsaf_dev) 1501 + { 1502 + struct dsaf_drv_priv *priv = hns_dsaf_dev_priv(dsaf_dev); 1503 + struct dsaf_drv_soft_mac_tbl *soft_mac_entry; 1504 + int i; 1505 + 1506 + soft_mac_entry = priv->soft_mac_tbl + (DSAF_TCAM_SUM - 1); 1507 + for (i = (DSAF_TCAM_SUM - 1); i > 0; i--) { 1508 + /* search all entry from end to start.*/ 1509 + if (soft_mac_entry->index == DSAF_INVALID_ENTRY_IDX) 1510 + return i; 1511 + soft_mac_entry--; 1547 1512 } 1548 1513 return DSAF_INVALID_ENTRY_IDX; 1549 1514 } ··· 2243 2166 DSAF_INODE_LOCAL_ADDR_FALSE_NUM_0_REG + 0x80 * (u64)node_num); 2244 2167 2245 2168 hw_stats->vlan_drop += dsaf_read_dev(dsaf_dev, 2246 - DSAF_INODE_SW_VLAN_TAG_DISC_0_REG + 0x80 * (u64)node_num); 2169 + DSAF_INODE_SW_VLAN_TAG_DISC_0_REG + 4 * (u64)node_num); 2247 2170 hw_stats->stp_drop += dsaf_read_dev(dsaf_dev, 2248 - DSAF_INODE_IN_DATA_STP_DISC_0_REG + 0x80 * (u64)node_num); 2171 + DSAF_INODE_IN_DATA_STP_DISC_0_REG + 4 * (u64)node_num); 2249 2172 2250 2173 /* pfc pause frame statistics stored in dsaf inode*/ 2251 2174 if ((node_num < DSAF_SERVICE_NW_NUM) && !is_ver1) { ··· 2362 2285 DSAF_INODE_BD_ORDER_STATUS_0_REG + j * 4); 2363 2286 p[223 + i] = dsaf_read_dev(ddev, 2364 2287 DSAF_INODE_SW_VLAN_TAG_DISC_0_REG + j * 4); 2365 - p[224 + i] = dsaf_read_dev(ddev, 2288 + p[226 + i] = dsaf_read_dev(ddev, 2366 2289 DSAF_INODE_IN_DATA_STP_DISC_0_REG + j * 4); 2367 2290 } 2368 2291 2369 - p[227] = dsaf_read_dev(ddev, DSAF_INODE_GE_FC_EN_0_REG + port * 4); 2292 + p[229] = dsaf_read_dev(ddev, DSAF_INODE_GE_FC_EN_0_REG + 
port * 4); 2370 2293 2371 2294 for (i = 0; i < DSAF_INODE_NUM / DSAF_COMM_CHN; i++) { 2372 2295 j = i * DSAF_COMM_CHN + port; 2373 - p[228 + i] = dsaf_read_dev(ddev, 2296 + p[230 + i] = dsaf_read_dev(ddev, 2374 2297 DSAF_INODE_VC0_IN_PKT_NUM_0_REG + j * 4); 2375 2298 } 2376 2299 2377 - p[231] = dsaf_read_dev(ddev, 2378 - DSAF_INODE_VC1_IN_PKT_NUM_0_REG + port * 4); 2300 + p[233] = dsaf_read_dev(ddev, 2301 + DSAF_INODE_VC1_IN_PKT_NUM_0_REG + port * 0x80); 2379 2302 2380 2303 /* dsaf inode registers */ 2381 2304 for (i = 0; i < HNS_DSAF_SBM_NUM(ddev) / DSAF_COMM_CHN; i++) { 2382 2305 j = i * DSAF_COMM_CHN + port; 2383 - p[232 + i] = dsaf_read_dev(ddev, 2306 + p[234 + i] = dsaf_read_dev(ddev, 2384 2307 DSAF_SBM_CFG_REG_0_REG + j * 0x80); 2385 - p[235 + i] = dsaf_read_dev(ddev, 2308 + p[237 + i] = dsaf_read_dev(ddev, 2386 2309 DSAF_SBM_BP_CFG_0_XGE_REG_0_REG + j * 0x80); 2387 - p[238 + i] = dsaf_read_dev(ddev, 2310 + p[240 + i] = dsaf_read_dev(ddev, 2388 2311 DSAF_SBM_BP_CFG_1_REG_0_REG + j * 0x80); 2389 - p[241 + i] = dsaf_read_dev(ddev, 2312 + p[243 + i] = dsaf_read_dev(ddev, 2390 2313 DSAF_SBM_BP_CFG_2_XGE_REG_0_REG + j * 0x80); 2391 - p[244 + i] = dsaf_read_dev(ddev, 2314 + p[246 + i] = dsaf_read_dev(ddev, 2392 2315 DSAF_SBM_FREE_CNT_0_0_REG + j * 0x80); 2393 - p[245 + i] = dsaf_read_dev(ddev, 2316 + p[249 + i] = dsaf_read_dev(ddev, 2394 2317 DSAF_SBM_FREE_CNT_1_0_REG + j * 0x80); 2395 - p[248 + i] = dsaf_read_dev(ddev, 2318 + p[252 + i] = dsaf_read_dev(ddev, 2396 2319 DSAF_SBM_BP_CNT_0_0_REG + j * 0x80); 2397 - p[251 + i] = dsaf_read_dev(ddev, 2320 + p[255 + i] = dsaf_read_dev(ddev, 2398 2321 DSAF_SBM_BP_CNT_1_0_REG + j * 0x80); 2399 - p[254 + i] = dsaf_read_dev(ddev, 2322 + p[258 + i] = dsaf_read_dev(ddev, 2400 2323 DSAF_SBM_BP_CNT_2_0_REG + j * 0x80); 2401 - p[257 + i] = dsaf_read_dev(ddev, 2324 + p[261 + i] = dsaf_read_dev(ddev, 2402 2325 DSAF_SBM_BP_CNT_3_0_REG + j * 0x80); 2403 - p[260 + i] = dsaf_read_dev(ddev, 2326 + p[264 + i] = dsaf_read_dev(ddev, 2404 
2327 DSAF_SBM_INER_ST_0_REG + j * 0x80); 2405 - p[263 + i] = dsaf_read_dev(ddev, 2328 + p[267 + i] = dsaf_read_dev(ddev, 2406 2329 DSAF_SBM_MIB_REQ_FAILED_TC_0_REG + j * 0x80); 2407 - p[266 + i] = dsaf_read_dev(ddev, 2330 + p[270 + i] = dsaf_read_dev(ddev, 2408 2331 DSAF_SBM_LNK_INPORT_CNT_0_REG + j * 0x80); 2409 - p[269 + i] = dsaf_read_dev(ddev, 2332 + p[273 + i] = dsaf_read_dev(ddev, 2410 2333 DSAF_SBM_LNK_DROP_CNT_0_REG + j * 0x80); 2411 - p[272 + i] = dsaf_read_dev(ddev, 2334 + p[276 + i] = dsaf_read_dev(ddev, 2412 2335 DSAF_SBM_INF_OUTPORT_CNT_0_REG + j * 0x80); 2413 - p[275 + i] = dsaf_read_dev(ddev, 2336 + p[279 + i] = dsaf_read_dev(ddev, 2414 2337 DSAF_SBM_LNK_INPORT_TC0_CNT_0_REG + j * 0x80); 2415 - p[278 + i] = dsaf_read_dev(ddev, 2338 + p[282 + i] = dsaf_read_dev(ddev, 2416 2339 DSAF_SBM_LNK_INPORT_TC1_CNT_0_REG + j * 0x80); 2417 - p[281 + i] = dsaf_read_dev(ddev, 2340 + p[285 + i] = dsaf_read_dev(ddev, 2418 2341 DSAF_SBM_LNK_INPORT_TC2_CNT_0_REG + j * 0x80); 2419 - p[284 + i] = dsaf_read_dev(ddev, 2342 + p[288 + i] = dsaf_read_dev(ddev, 2420 2343 DSAF_SBM_LNK_INPORT_TC3_CNT_0_REG + j * 0x80); 2421 - p[287 + i] = dsaf_read_dev(ddev, 2344 + p[291 + i] = dsaf_read_dev(ddev, 2422 2345 DSAF_SBM_LNK_INPORT_TC4_CNT_0_REG + j * 0x80); 2423 - p[290 + i] = dsaf_read_dev(ddev, 2346 + p[294 + i] = dsaf_read_dev(ddev, 2424 2347 DSAF_SBM_LNK_INPORT_TC5_CNT_0_REG + j * 0x80); 2425 - p[293 + i] = dsaf_read_dev(ddev, 2348 + p[297 + i] = dsaf_read_dev(ddev, 2426 2349 DSAF_SBM_LNK_INPORT_TC6_CNT_0_REG + j * 0x80); 2427 - p[296 + i] = dsaf_read_dev(ddev, 2350 + p[300 + i] = dsaf_read_dev(ddev, 2428 2351 DSAF_SBM_LNK_INPORT_TC7_CNT_0_REG + j * 0x80); 2429 - p[299 + i] = dsaf_read_dev(ddev, 2352 + p[303 + i] = dsaf_read_dev(ddev, 2430 2353 DSAF_SBM_LNK_REQ_CNT_0_REG + j * 0x80); 2431 - p[302 + i] = dsaf_read_dev(ddev, 2354 + p[306 + i] = dsaf_read_dev(ddev, 2432 2355 DSAF_SBM_LNK_RELS_CNT_0_REG + j * 0x80); 2433 - p[305 + i] = dsaf_read_dev(ddev, 2356 + p[309 + i] = 
dsaf_read_dev(ddev, 2434 2357 DSAF_SBM_BP_CFG_3_REG_0_REG + j * 0x80); 2435 - p[308 + i] = dsaf_read_dev(ddev, 2358 + p[312 + i] = dsaf_read_dev(ddev, 2436 2359 DSAF_SBM_BP_CFG_4_REG_0_REG + j * 0x80); 2437 2360 } 2438 2361 2439 2362 /* dsaf onode registers */ 2440 2363 for (i = 0; i < DSAF_XOD_NUM; i++) { 2441 - p[311 + i] = dsaf_read_dev(ddev, 2364 + p[315 + i] = dsaf_read_dev(ddev, 2442 2365 DSAF_XOD_ETS_TSA_TC0_TC3_CFG_0_REG + i * 0x90); 2443 - p[319 + i] = dsaf_read_dev(ddev, 2366 + p[323 + i] = dsaf_read_dev(ddev, 2444 2367 DSAF_XOD_ETS_TSA_TC4_TC7_CFG_0_REG + i * 0x90); 2445 - p[327 + i] = dsaf_read_dev(ddev, 2368 + p[331 + i] = dsaf_read_dev(ddev, 2446 2369 DSAF_XOD_ETS_BW_TC0_TC3_CFG_0_REG + i * 0x90); 2447 - p[335 + i] = dsaf_read_dev(ddev, 2370 + p[339 + i] = dsaf_read_dev(ddev, 2448 2371 DSAF_XOD_ETS_BW_TC4_TC7_CFG_0_REG + i * 0x90); 2449 - p[343 + i] = dsaf_read_dev(ddev, 2372 + p[347 + i] = dsaf_read_dev(ddev, 2450 2373 DSAF_XOD_ETS_BW_OFFSET_CFG_0_REG + i * 0x90); 2451 - p[351 + i] = dsaf_read_dev(ddev, 2374 + p[355 + i] = dsaf_read_dev(ddev, 2452 2375 DSAF_XOD_ETS_TOKEN_CFG_0_REG + i * 0x90); 2453 2376 } 2454 2377 2455 - p[359] = dsaf_read_dev(ddev, DSAF_XOD_PFS_CFG_0_0_REG + port * 0x90); 2456 - p[360] = dsaf_read_dev(ddev, DSAF_XOD_PFS_CFG_1_0_REG + port * 0x90); 2457 - p[361] = dsaf_read_dev(ddev, DSAF_XOD_PFS_CFG_2_0_REG + port * 0x90); 2378 + p[363] = dsaf_read_dev(ddev, DSAF_XOD_PFS_CFG_0_0_REG + port * 0x90); 2379 + p[364] = dsaf_read_dev(ddev, DSAF_XOD_PFS_CFG_1_0_REG + port * 0x90); 2380 + p[365] = dsaf_read_dev(ddev, DSAF_XOD_PFS_CFG_2_0_REG + port * 0x90); 2458 2381 2459 2382 for (i = 0; i < DSAF_XOD_BIG_NUM / DSAF_COMM_CHN; i++) { 2460 2383 j = i * DSAF_COMM_CHN + port; 2461 - p[362 + i] = dsaf_read_dev(ddev, 2384 + p[366 + i] = dsaf_read_dev(ddev, 2462 2385 DSAF_XOD_GNT_L_0_REG + j * 0x90); 2463 - p[365 + i] = dsaf_read_dev(ddev, 2386 + p[369 + i] = dsaf_read_dev(ddev, 2464 2387 DSAF_XOD_GNT_H_0_REG + j * 0x90); 2465 - p[368 + i] = 
dsaf_read_dev(ddev, 2388 + p[372 + i] = dsaf_read_dev(ddev, 2466 2389 DSAF_XOD_CONNECT_STATE_0_REG + j * 0x90); 2467 - p[371 + i] = dsaf_read_dev(ddev, 2390 + p[375 + i] = dsaf_read_dev(ddev, 2468 2391 DSAF_XOD_RCVPKT_CNT_0_REG + j * 0x90); 2469 - p[374 + i] = dsaf_read_dev(ddev, 2392 + p[378 + i] = dsaf_read_dev(ddev, 2470 2393 DSAF_XOD_RCVTC0_CNT_0_REG + j * 0x90); 2471 - p[377 + i] = dsaf_read_dev(ddev, 2394 + p[381 + i] = dsaf_read_dev(ddev, 2472 2395 DSAF_XOD_RCVTC1_CNT_0_REG + j * 0x90); 2473 - p[380 + i] = dsaf_read_dev(ddev, 2396 + p[384 + i] = dsaf_read_dev(ddev, 2474 2397 DSAF_XOD_RCVTC2_CNT_0_REG + j * 0x90); 2475 - p[383 + i] = dsaf_read_dev(ddev, 2398 + p[387 + i] = dsaf_read_dev(ddev, 2476 2399 DSAF_XOD_RCVTC3_CNT_0_REG + j * 0x90); 2477 - p[386 + i] = dsaf_read_dev(ddev, 2400 + p[390 + i] = dsaf_read_dev(ddev, 2478 2401 DSAF_XOD_RCVVC0_CNT_0_REG + j * 0x90); 2479 - p[389 + i] = dsaf_read_dev(ddev, 2402 + p[393 + i] = dsaf_read_dev(ddev, 2480 2403 DSAF_XOD_RCVVC1_CNT_0_REG + j * 0x90); 2481 2404 } 2482 2405 2483 - p[392] = dsaf_read_dev(ddev, 2484 - DSAF_XOD_XGE_RCVIN0_CNT_0_REG + port * 0x90); 2485 - p[393] = dsaf_read_dev(ddev, 2486 - DSAF_XOD_XGE_RCVIN1_CNT_0_REG + port * 0x90); 2487 - p[394] = dsaf_read_dev(ddev, 2488 - DSAF_XOD_XGE_RCVIN2_CNT_0_REG + port * 0x90); 2489 - p[395] = dsaf_read_dev(ddev, 2490 - DSAF_XOD_XGE_RCVIN3_CNT_0_REG + port * 0x90); 2491 2406 p[396] = dsaf_read_dev(ddev, 2492 - DSAF_XOD_XGE_RCVIN4_CNT_0_REG + port * 0x90); 2407 + DSAF_XOD_XGE_RCVIN0_CNT_0_REG + port * 0x90); 2493 2408 p[397] = dsaf_read_dev(ddev, 2494 - DSAF_XOD_XGE_RCVIN5_CNT_0_REG + port * 0x90); 2409 + DSAF_XOD_XGE_RCVIN1_CNT_0_REG + port * 0x90); 2495 2410 p[398] = dsaf_read_dev(ddev, 2496 - DSAF_XOD_XGE_RCVIN6_CNT_0_REG + port * 0x90); 2411 + DSAF_XOD_XGE_RCVIN2_CNT_0_REG + port * 0x90); 2497 2412 p[399] = dsaf_read_dev(ddev, 2498 - DSAF_XOD_XGE_RCVIN7_CNT_0_REG + port * 0x90); 2413 + DSAF_XOD_XGE_RCVIN3_CNT_0_REG + port * 0x90); 2499 2414 p[400] = 
	       dsaf_read_dev(ddev,
2500 - 			   DSAF_XOD_PPE_RCVIN0_CNT_0_REG + port * 0x90);
2415 + 			   DSAF_XOD_XGE_RCVIN4_CNT_0_REG + port * 0x90);
2501 2416 	p[401] = dsaf_read_dev(ddev,
2502 - 			   DSAF_XOD_PPE_RCVIN1_CNT_0_REG + port * 0x90);
2417 + 			   DSAF_XOD_XGE_RCVIN5_CNT_0_REG + port * 0x90);
2503 2418 	p[402] = dsaf_read_dev(ddev,
2504 - 			   DSAF_XOD_ROCEE_RCVIN0_CNT_0_REG + port * 0x90);
2419 + 			   DSAF_XOD_XGE_RCVIN6_CNT_0_REG + port * 0x90);
2505 2420 	p[403] = dsaf_read_dev(ddev,
2506 - 			   DSAF_XOD_ROCEE_RCVIN1_CNT_0_REG + port * 0x90);
2421 + 			   DSAF_XOD_XGE_RCVIN7_CNT_0_REG + port * 0x90);
2507 2422 	p[404] = dsaf_read_dev(ddev,
2423 + 			   DSAF_XOD_PPE_RCVIN0_CNT_0_REG + port * 0x90);
2424 + 	p[405] = dsaf_read_dev(ddev,
2425 + 			   DSAF_XOD_PPE_RCVIN1_CNT_0_REG + port * 0x90);
2426 + 	p[406] = dsaf_read_dev(ddev,
2427 + 			   DSAF_XOD_ROCEE_RCVIN0_CNT_0_REG + port * 0x90);
2428 + 	p[407] = dsaf_read_dev(ddev,
2429 + 			   DSAF_XOD_ROCEE_RCVIN1_CNT_0_REG + port * 0x90);
2430 + 	p[408] = dsaf_read_dev(ddev,
2508 2431 			   DSAF_XOD_FIFO_STATUS_0_REG + port * 0x90);
2509 2432 
2510 2433 	/* dsaf voq registers */
2511 2434 	for (i = 0; i < DSAF_VOQ_NUM / DSAF_COMM_CHN; i++) {
2512 2435 		j = (i * DSAF_COMM_CHN + port) * 0x90;
2513 - 		p[405 + i] = dsaf_read_dev(ddev,
2436 + 		p[409 + i] = dsaf_read_dev(ddev,
2514 2437 			DSAF_VOQ_ECC_INVERT_EN_0_REG + j);
2515 - 		p[408 + i] = dsaf_read_dev(ddev,
2438 + 		p[412 + i] = dsaf_read_dev(ddev,
2516 2439 			DSAF_VOQ_SRAM_PKT_NUM_0_REG + j);
2517 - 		p[411 + i] = dsaf_read_dev(ddev, DSAF_VOQ_IN_PKT_NUM_0_REG + j);
2518 - 		p[414 + i] = dsaf_read_dev(ddev,
2440 + 		p[415 + i] = dsaf_read_dev(ddev, DSAF_VOQ_IN_PKT_NUM_0_REG + j);
2441 + 		p[418 + i] = dsaf_read_dev(ddev,
2519 2442 			DSAF_VOQ_OUT_PKT_NUM_0_REG + j);
2520 - 		p[417 + i] = dsaf_read_dev(ddev,
2443 + 		p[421 + i] = dsaf_read_dev(ddev,
2521 2444 			DSAF_VOQ_ECC_ERR_ADDR_0_REG + j);
2522 - 		p[420 + i] = dsaf_read_dev(ddev, DSAF_VOQ_BP_STATUS_0_REG + j);
2523 - 		p[423 + i] = dsaf_read_dev(ddev, DSAF_VOQ_SPUP_IDLE_0_REG + j);
2524 - 		p[426 + i] = dsaf_read_dev(ddev,
2445 + 		p[424 + i] = dsaf_read_dev(ddev, DSAF_VOQ_BP_STATUS_0_REG + j);
2446 + 		p[427 + i] = dsaf_read_dev(ddev, DSAF_VOQ_SPUP_IDLE_0_REG + j);
2447 + 		p[430 + i] = dsaf_read_dev(ddev,
2525 2448 			DSAF_VOQ_XGE_XOD_REQ_0_0_REG + j);
2526 - 		p[429 + i] = dsaf_read_dev(ddev,
2449 + 		p[433 + i] = dsaf_read_dev(ddev,
2527 2450 			DSAF_VOQ_XGE_XOD_REQ_1_0_REG + j);
2528 - 		p[432 + i] = dsaf_read_dev(ddev,
2451 + 		p[436 + i] = dsaf_read_dev(ddev,
2529 2452 			DSAF_VOQ_PPE_XOD_REQ_0_REG + j);
2530 - 		p[435 + i] = dsaf_read_dev(ddev,
2453 + 		p[439 + i] = dsaf_read_dev(ddev,
2531 2454 			DSAF_VOQ_ROCEE_XOD_REQ_0_REG + j);
2532 - 		p[438 + i] = dsaf_read_dev(ddev,
2455 + 		p[442 + i] = dsaf_read_dev(ddev,
2533 2456 			DSAF_VOQ_BP_ALL_THRD_0_REG + j);
2534 2457 	}
2535 2458 
2536 2459 	/* dsaf tbl registers */
2537 - 	p[441] = dsaf_read_dev(ddev, DSAF_TBL_CTRL_0_REG);
2538 - 	p[442] = dsaf_read_dev(ddev, DSAF_TBL_INT_MSK_0_REG);
2539 - 	p[443] = dsaf_read_dev(ddev, DSAF_TBL_INT_SRC_0_REG);
2540 - 	p[444] = dsaf_read_dev(ddev, DSAF_TBL_INT_STS_0_REG);
2541 - 	p[445] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_ADDR_0_REG);
2542 - 	p[446] = dsaf_read_dev(ddev, DSAF_TBL_LINE_ADDR_0_REG);
2543 - 	p[447] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_HIGH_0_REG);
2544 - 	p[448] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_LOW_0_REG);
2545 - 	p[449] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_MCAST_CFG_4_0_REG);
2546 - 	p[450] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_MCAST_CFG_3_0_REG);
2547 - 	p[451] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_MCAST_CFG_2_0_REG);
2548 - 	p[452] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_MCAST_CFG_1_0_REG);
2549 - 	p[453] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_MCAST_CFG_0_0_REG);
2550 - 	p[454] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_UCAST_CFG_0_REG);
2551 - 	p[455] = dsaf_read_dev(ddev, DSAF_TBL_LIN_CFG_0_REG);
2552 - 	p[456] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_RDATA_HIGH_0_REG);
2553 - 	p[457] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_RDATA_LOW_0_REG);
2554 - 	p[458] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_RAM_RDATA4_0_REG);
2555 - 	p[459] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_RAM_RDATA3_0_REG);
2556 - 	p[460] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_RAM_RDATA2_0_REG);
2557 - 	p[461] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_RAM_RDATA1_0_REG);
2558 - 	p[462] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_RAM_RDATA0_0_REG);
2559 - 	p[463] = dsaf_read_dev(ddev, DSAF_TBL_LIN_RDATA_0_REG);
2460 + 	p[445] = dsaf_read_dev(ddev, DSAF_TBL_CTRL_0_REG);
2461 + 	p[446] = dsaf_read_dev(ddev, DSAF_TBL_INT_MSK_0_REG);
2462 + 	p[447] = dsaf_read_dev(ddev, DSAF_TBL_INT_SRC_0_REG);
2463 + 	p[448] = dsaf_read_dev(ddev, DSAF_TBL_INT_STS_0_REG);
2464 + 	p[449] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_ADDR_0_REG);
2465 + 	p[450] = dsaf_read_dev(ddev, DSAF_TBL_LINE_ADDR_0_REG);
2466 + 	p[451] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_HIGH_0_REG);
2467 + 	p[452] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_LOW_0_REG);
2468 + 	p[453] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_MCAST_CFG_4_0_REG);
2469 + 	p[454] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_MCAST_CFG_3_0_REG);
2470 + 	p[455] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_MCAST_CFG_2_0_REG);
2471 + 	p[456] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_MCAST_CFG_1_0_REG);
2472 + 	p[457] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_MCAST_CFG_0_0_REG);
2473 + 	p[458] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_UCAST_CFG_0_REG);
2474 + 	p[459] = dsaf_read_dev(ddev, DSAF_TBL_LIN_CFG_0_REG);
2475 + 	p[460] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_RDATA_HIGH_0_REG);
2476 + 	p[461] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_RDATA_LOW_0_REG);
2477 + 	p[462] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_RAM_RDATA4_0_REG);
2478 + 	p[463] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_RAM_RDATA3_0_REG);
2479 + 	p[464] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_RAM_RDATA2_0_REG);
2480 + 	p[465] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_RAM_RDATA1_0_REG);
2481 + 	p[466] = dsaf_read_dev(ddev, DSAF_TBL_TCAM_RAM_RDATA0_0_REG);
2482 + 	p[467] = dsaf_read_dev(ddev, DSAF_TBL_LIN_RDATA_0_REG);
2560 2483 
2561 2484 	for (i = 0; i < DSAF_SW_PORT_NUM; i++) {
2562 2485 		j = i * 0x8;
2563 - 		p[464 + 2 * i] = dsaf_read_dev(ddev,
2486 + 		p[468 + 2 * i] = dsaf_read_dev(ddev,
2564 2487 			DSAF_TBL_DA0_MIS_INFO1_0_REG + j);
2565 - 		p[465 + 2 * i] = dsaf_read_dev(ddev,
2488 + 		p[469 + 2 * i] = dsaf_read_dev(ddev,
2566 2489 			DSAF_TBL_DA0_MIS_INFO0_0_REG + j);
2567 2490 	}
2568 2491 
2569 - 	p[480] = dsaf_read_dev(ddev, DSAF_TBL_SA_MIS_INFO2_0_REG);
2570 - 	p[481] = dsaf_read_dev(ddev, DSAF_TBL_SA_MIS_INFO1_0_REG);
2571 - 	p[482] = dsaf_read_dev(ddev, DSAF_TBL_SA_MIS_INFO0_0_REG);
2572 - 	p[483] = dsaf_read_dev(ddev, DSAF_TBL_PUL_0_REG);
2573 - 	p[484] = dsaf_read_dev(ddev, DSAF_TBL_OLD_RSLT_0_REG);
2574 - 	p[485] = dsaf_read_dev(ddev, DSAF_TBL_OLD_SCAN_VAL_0_REG);
2575 - 	p[486] = dsaf_read_dev(ddev, DSAF_TBL_DFX_CTRL_0_REG);
2576 - 	p[487] = dsaf_read_dev(ddev, DSAF_TBL_DFX_STAT_0_REG);
2577 - 	p[488] = dsaf_read_dev(ddev, DSAF_TBL_DFX_STAT_2_0_REG);
2578 - 	p[489] = dsaf_read_dev(ddev, DSAF_TBL_LKUP_NUM_I_0_REG);
2579 - 	p[490] = dsaf_read_dev(ddev, DSAF_TBL_LKUP_NUM_O_0_REG);
2580 - 	p[491] = dsaf_read_dev(ddev, DSAF_TBL_UCAST_BCAST_MIS_INFO_0_0_REG);
2492 + 	p[484] = dsaf_read_dev(ddev, DSAF_TBL_SA_MIS_INFO2_0_REG);
2493 + 	p[485] = dsaf_read_dev(ddev, DSAF_TBL_SA_MIS_INFO1_0_REG);
2494 + 	p[486] = dsaf_read_dev(ddev, DSAF_TBL_SA_MIS_INFO0_0_REG);
2495 + 	p[487] = dsaf_read_dev(ddev, DSAF_TBL_PUL_0_REG);
2496 + 	p[488] = dsaf_read_dev(ddev, DSAF_TBL_OLD_RSLT_0_REG);
2497 + 	p[489] = dsaf_read_dev(ddev, DSAF_TBL_OLD_SCAN_VAL_0_REG);
2498 + 	p[490] = dsaf_read_dev(ddev, DSAF_TBL_DFX_CTRL_0_REG);
2499 + 	p[491] = dsaf_read_dev(ddev, DSAF_TBL_DFX_STAT_0_REG);
2500 + 	p[492] = dsaf_read_dev(ddev, DSAF_TBL_DFX_STAT_2_0_REG);
2501 + 	p[493] = dsaf_read_dev(ddev, DSAF_TBL_LKUP_NUM_I_0_REG);
2502 + 	p[494] = dsaf_read_dev(ddev, DSAF_TBL_LKUP_NUM_O_0_REG);
2503 + 	p[495] = dsaf_read_dev(ddev, DSAF_TBL_UCAST_BCAST_MIS_INFO_0_0_REG);
2581 2504 
2582 2505 	/* dsaf other registers */
2583 - 	p[492] = dsaf_read_dev(ddev, DSAF_INODE_FIFO_WL_0_REG + port * 0x4);
2584 - 	p[493] = dsaf_read_dev(ddev, DSAF_ONODE_FIFO_WL_0_REG + port * 0x4);
2585 - 	p[494] = dsaf_read_dev(ddev, DSAF_XGE_GE_WORK_MODE_0_REG + port * 0x4);
2586 - 	p[495] = dsaf_read_dev(ddev,
2506 + 	p[496] = dsaf_read_dev(ddev, DSAF_INODE_FIFO_WL_0_REG + port * 0x4);
2507 + 	p[497] = dsaf_read_dev(ddev, DSAF_ONODE_FIFO_WL_0_REG + port * 0x4);
2508 + 	p[498] = dsaf_read_dev(ddev, DSAF_XGE_GE_WORK_MODE_0_REG + port * 0x4);
2509 + 	p[499] = dsaf_read_dev(ddev,
2587 2510 		DSAF_XGE_APP_RX_LINK_UP_0_REG + port * 0x4);
2588 - 	p[496] = dsaf_read_dev(ddev, DSAF_NETPORT_CTRL_SIG_0_REG + port * 0x4);
2589 - 	p[497] = dsaf_read_dev(ddev, DSAF_XGE_CTRL_SIG_CFG_0_REG + port * 0x4);
2511 + 	p[500] = dsaf_read_dev(ddev, DSAF_NETPORT_CTRL_SIG_0_REG + port * 0x4);
2512 + 	p[501] = dsaf_read_dev(ddev, DSAF_XGE_CTRL_SIG_CFG_0_REG + port * 0x4);
2590 2513 
2591 2514 	if (!is_ver1)
2592 - 		p[498] = dsaf_read_dev(ddev, DSAF_PAUSE_CFG_REG + port * 0x4);
2515 + 		p[502] = dsaf_read_dev(ddev, DSAF_PAUSE_CFG_REG + port * 0x4);
2593 2516 
2594 2517 	/* mark end of dsaf regs */
2595 - 	for (i = 499; i < 504; i++)
2518 + 	for (i = 503; i < 504; i++)
2596 2519 		p[i] = 0xdddddddd;
2597 2520 }
2598 2521 
···
2750 2673 	return DSAF_DUMP_REGS_NUM;
2751 2674 }
2752 2675 
2676 + static void set_promisc_tcam_enable(struct dsaf_device *dsaf_dev, u32 port)
2677 + {
2678 + 	struct dsaf_tbl_tcam_ucast_cfg tbl_tcam_ucast = {0, 1, 0, 0, 0x80};
2679 + 	struct dsaf_tbl_tcam_data tbl_tcam_data_mc = {0x01000000, port};
2680 + 	struct dsaf_tbl_tcam_data tbl_tcam_mask_uc = {0x01000000, 0xf};
2681 + 	struct dsaf_tbl_tcam_mcast_cfg tbl_tcam_mcast = {0, 0, {0} };
2682 + 	struct dsaf_drv_priv *priv = hns_dsaf_dev_priv(dsaf_dev);
2683 + 	struct dsaf_tbl_tcam_data tbl_tcam_data_uc = {0, port};
2684 + 	struct dsaf_drv_mac_single_dest_entry mask_entry;
2685 + 	struct dsaf_drv_tbl_tcam_key temp_key, mask_key;
2686 + 	struct dsaf_drv_soft_mac_tbl *soft_mac_entry;
2687 + 	u16 entry_index = DSAF_INVALID_ENTRY_IDX;
2688 + 	struct dsaf_drv_tbl_tcam_key mac_key;
2689 + 	struct hns_mac_cb *mac_cb;
2690 + 	u8 addr[ETH_ALEN] = {0};
2691 + 	u8 port_num;
2692 + 	u16 mskid;
2693 + 
2694 + 	/* promisc use vague table match with vlanid = 0 & macaddr = 0 */
2695 + 	hns_dsaf_set_mac_key(dsaf_dev, &mac_key, 0x00, port, addr);
2696 + 	entry_index = hns_dsaf_find_soft_mac_entry(dsaf_dev, &mac_key);
2697 + 	if (entry_index != DSAF_INVALID_ENTRY_IDX)
2698 + 		return;
2699 + 
2700 + 	/* put promisc tcam entry in the end. */
2701 + 	/* 1. set promisc unicast vague tcam entry. */
2702 + 	entry_index = hns_dsaf_find_empty_mac_entry_reverse(dsaf_dev);
2703 + 	if (entry_index == DSAF_INVALID_ENTRY_IDX) {
2704 + 		dev_err(dsaf_dev->dev,
2705 + 			"enable uc promisc failed (port:%#x)\n",
2706 + 			port);
2707 + 		return;
2708 + 	}
2709 + 
2710 + 	mac_cb = dsaf_dev->mac_cb[port];
2711 + 	(void)hns_mac_get_inner_port_num(mac_cb, 0, &port_num);
2712 + 	tbl_tcam_ucast.tbl_ucast_out_port = port_num;
2713 + 
2714 + 	/* config uc vague table */
2715 + 	hns_dsaf_tcam_uc_cfg_vague(dsaf_dev, entry_index, &tbl_tcam_data_uc,
2716 + 				   &tbl_tcam_mask_uc, &tbl_tcam_ucast);
2717 + 
2718 + 	/* update software entry */
2719 + 	soft_mac_entry = priv->soft_mac_tbl;
2720 + 	soft_mac_entry += entry_index;
2721 + 	soft_mac_entry->index = entry_index;
2722 + 	soft_mac_entry->tcam_key.high.val = mac_key.high.val;
2723 + 	soft_mac_entry->tcam_key.low.val = mac_key.low.val;
2724 + 	/* step back to the START for mc. */
2725 + 	soft_mac_entry = priv->soft_mac_tbl;
2726 + 
2727 + 	/* 2. set promisc multicast vague tcam entry. */
2728 + 	entry_index = hns_dsaf_find_empty_mac_entry_reverse(dsaf_dev);
2729 + 	if (entry_index == DSAF_INVALID_ENTRY_IDX) {
2730 + 		dev_err(dsaf_dev->dev,
2731 + 			"enable mc promisc failed (port:%#x)\n",
2732 + 			port);
2733 + 		return;
2734 + 	}
2735 + 
2736 + 	memset(&mask_entry, 0x0, sizeof(mask_entry));
2737 + 	memset(&mask_key, 0x0, sizeof(mask_key));
2738 + 	memset(&temp_key, 0x0, sizeof(temp_key));
2739 + 	mask_entry.addr[0] = 0x01;
2740 + 	hns_dsaf_set_mac_key(dsaf_dev, &mask_key, mask_entry.in_vlan_id,
2741 + 			     port, mask_entry.addr);
2742 + 	tbl_tcam_mcast.tbl_mcast_item_vld = 1;
2743 + 	tbl_tcam_mcast.tbl_mcast_old_en = 0;
2744 + 
2745 + 	if (port < DSAF_SERVICE_NW_NUM) {
2746 + 		mskid = port;
2747 + 	} else if (port >= DSAF_BASE_INNER_PORT_NUM) {
2748 + 		mskid = port - DSAF_BASE_INNER_PORT_NUM + DSAF_SERVICE_NW_NUM;
2749 + 	} else {
2750 + 		dev_err(dsaf_dev->dev, "%s,pnum(%d)error,key(%#x:%#x)\n",
2751 + 			dsaf_dev->ae_dev.name, port,
2752 + 			mask_key.high.val, mask_key.low.val);
2753 + 		return;
2754 + 	}
2755 + 
2756 + 	dsaf_set_bit(tbl_tcam_mcast.tbl_mcast_port_msk[mskid / 32],
2757 + 		     mskid % 32, 1);
2758 + 	memcpy(&temp_key, &mask_key, sizeof(mask_key));
2759 + 	hns_dsaf_tcam_mc_cfg_vague(dsaf_dev, entry_index, &tbl_tcam_data_mc,
2760 + 				   (struct dsaf_tbl_tcam_data *)(&mask_key),
2761 + 				   &tbl_tcam_mcast);
2762 + 
2763 + 	/* update software entry */
2764 + 	soft_mac_entry += entry_index;
2765 + 	soft_mac_entry->index = entry_index;
2766 + 	soft_mac_entry->tcam_key.high.val = temp_key.high.val;
2767 + 	soft_mac_entry->tcam_key.low.val = temp_key.low.val;
2768 + }
2769 + 
2770 + static void set_promisc_tcam_disable(struct dsaf_device *dsaf_dev, u32 port)
2771 + {
2772 + 	struct dsaf_tbl_tcam_data tbl_tcam_data_mc = {0x01000000, port};
2773 + 	struct dsaf_tbl_tcam_ucast_cfg tbl_tcam_ucast = {0, 0, 0, 0, 0};
2774 + 	struct dsaf_tbl_tcam_mcast_cfg tbl_tcam_mcast = {0, 0, {0} };
2775 + 	struct dsaf_drv_priv *priv = hns_dsaf_dev_priv(dsaf_dev);
2776 + 	struct dsaf_tbl_tcam_data tbl_tcam_data_uc = {0, 0};
2777 + 	struct dsaf_tbl_tcam_data tbl_tcam_mask = {0, 0};
2778 + 	struct dsaf_drv_soft_mac_tbl *soft_mac_entry;
2779 + 	u16 entry_index = DSAF_INVALID_ENTRY_IDX;
2780 + 	struct dsaf_drv_tbl_tcam_key mac_key;
2781 + 	u8 addr[ETH_ALEN] = {0};
2782 + 
2783 + 	/* 1. delete uc vague tcam entry. */
2784 + 	/* promisc use vague table match with vlanid = 0 & macaddr = 0 */
2785 + 	hns_dsaf_set_mac_key(dsaf_dev, &mac_key, 0x00, port, addr);
2786 + 	entry_index = hns_dsaf_find_soft_mac_entry(dsaf_dev, &mac_key);
2787 + 
2788 + 	if (entry_index == DSAF_INVALID_ENTRY_IDX)
2789 + 		return;
2790 + 
2791 + 	/* config uc vague table */
2792 + 	hns_dsaf_tcam_uc_cfg_vague(dsaf_dev, entry_index, &tbl_tcam_data_uc,
2793 + 				   &tbl_tcam_mask, &tbl_tcam_ucast);
2794 + 	/* update soft management table. */
2795 + 	soft_mac_entry = priv->soft_mac_tbl;
2796 + 	soft_mac_entry += entry_index;
2797 + 	soft_mac_entry->index = DSAF_INVALID_ENTRY_IDX;
2798 + 	/* step back to the START for mc. */
2799 + 	soft_mac_entry = priv->soft_mac_tbl;
2800 + 
2801 + 	/* 2. delete mc vague tcam entry. */
2802 + 	addr[0] = 0x01;
2803 + 	memset(&mac_key, 0x0, sizeof(mac_key));
2804 + 	hns_dsaf_set_mac_key(dsaf_dev, &mac_key, 0x00, port, addr);
2805 + 	entry_index = hns_dsaf_find_soft_mac_entry(dsaf_dev, &mac_key);
2806 + 
2807 + 	if (entry_index == DSAF_INVALID_ENTRY_IDX)
2808 + 		return;
2809 + 
2810 + 	/* config mc vague table */
2811 + 	hns_dsaf_tcam_mc_cfg_vague(dsaf_dev, entry_index, &tbl_tcam_data_mc,
2812 + 				   &tbl_tcam_mask, &tbl_tcam_mcast);
2813 + 	/* update soft management table. */
2814 + 	soft_mac_entry += entry_index;
2815 + 	soft_mac_entry->index = DSAF_INVALID_ENTRY_IDX;
2816 + }
2817 + 
2753 2818 /* Reserve the last TCAM entry for promisc support */
2754 - #define dsaf_promisc_tcam_entry(port) \
2755 - 	(DSAF_TCAM_SUM - DSAFV2_MAC_FUZZY_TCAM_NUM + (port))
2756 2819 void hns_dsaf_set_promisc_tcam(struct dsaf_device *dsaf_dev,
2757 2820 			       u32 port, bool enable)
2758 2821 {
2759 - 	struct dsaf_drv_priv *priv = hns_dsaf_dev_priv(dsaf_dev);
2760 - 	struct dsaf_drv_soft_mac_tbl *soft_mac_entry = priv->soft_mac_tbl;
2761 - 	u16 entry_index;
2762 - 	struct dsaf_drv_tbl_tcam_key tbl_tcam_data, tbl_tcam_mask;
2763 - 	struct dsaf_tbl_tcam_mcast_cfg mac_data = {0};
2764 - 
2765 - 	if ((AE_IS_VER1(dsaf_dev->dsaf_ver)) || HNS_DSAF_IS_DEBUG(dsaf_dev))
2766 - 		return;
2767 - 
2768 - 	/* find the tcam entry index for promisc */
2769 - 	entry_index = dsaf_promisc_tcam_entry(port);
2770 - 
2771 - 	memset(&tbl_tcam_data, 0, sizeof(tbl_tcam_data));
2772 - 	memset(&tbl_tcam_mask, 0, sizeof(tbl_tcam_mask));
2773 - 
2774 - 	/* config key mask */
2775 - 	if (enable) {
2776 - 		dsaf_set_field(tbl_tcam_data.low.bits.port_vlan,
2777 - 			       DSAF_TBL_TCAM_KEY_PORT_M,
2778 - 			       DSAF_TBL_TCAM_KEY_PORT_S, port);
2779 - 		dsaf_set_field(tbl_tcam_mask.low.bits.port_vlan,
2780 - 			       DSAF_TBL_TCAM_KEY_PORT_M,
2781 - 			       DSAF_TBL_TCAM_KEY_PORT_S, 0xf);
2782 - 
2783 - 		/* SUB_QID */
2784 - 		dsaf_set_bit(mac_data.tbl_mcast_port_msk[0],
2785 - 			     DSAF_SERVICE_NW_NUM, true);
2786 - 		mac_data.tbl_mcast_item_vld = true;	/* item_vld bit */
2787 - 	} else {
2788 - 		mac_data.tbl_mcast_item_vld = false;	/* item_vld bit */
2789 - 	}
2790 - 
2791 - 	dev_dbg(dsaf_dev->dev,
2792 - 		"set_promisc_entry, %s Mac key(%#x:%#x) entry_index%d\n",
2793 - 		dsaf_dev->ae_dev.name, tbl_tcam_data.high.val,
2794 - 		tbl_tcam_data.low.val, entry_index);
2795 - 
2796 - 	/* config promisc entry with mask */
2797 - 	hns_dsaf_tcam_mc_cfg(dsaf_dev, entry_index,
2798 - 			     (struct dsaf_tbl_tcam_data *)&tbl_tcam_data,
2799 - 			     (struct dsaf_tbl_tcam_data *)&tbl_tcam_mask,
2800 - 			     &mac_data);
2801 - 
2802 - 	/* config software entry */
2803 - 	soft_mac_entry += entry_index;
2804 - 	soft_mac_entry->index = enable ? entry_index : DSAF_INVALID_ENTRY_IDX;
2822 + 	if (enable)
2823 + 		set_promisc_tcam_enable(dsaf_dev, port);
2824 + 	else
2825 + 		set_promisc_tcam_disable(dsaf_dev, port);
2805 2826 }
2806 2827 
2807 2828 int hns_dsaf_wait_pkt_clean(struct dsaf_device *dsaf_dev, int port)
+7 -6
drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
···
176 176 #define DSAF_INODE_IN_DATA_STP_DISC_0_REG	0x1A50
177 177 #define DSAF_INODE_GE_FC_EN_0_REG		0x1B00
178 178 #define DSAF_INODE_VC0_IN_PKT_NUM_0_REG	0x1B50
179 - #define DSAF_INODE_VC1_IN_PKT_NUM_0_REG	0x1C00
179 + #define DSAF_INODE_VC1_IN_PKT_NUM_0_REG	0x103C
180 180 #define DSAF_INODE_IN_PRIO_PAUSE_BASE_REG	0x1C00
181 181 #define DSAF_INODE_IN_PRIO_PAUSE_BASE_OFFSET	0x100
182 182 #define DSAF_INODE_IN_PRIO_PAUSE_OFFSET	0x50
···
404 404 #define RCB_ECC_ERR_ADDR4_REG			0x460
405 405 #define RCB_ECC_ERR_ADDR5_REG			0x464
406 406 
407 - #define RCB_COM_SF_CFG_INTMASK_RING		0x480
408 - #define RCB_COM_SF_CFG_RING_STS		0x484
409 - #define RCB_COM_SF_CFG_RING			0x488
410 - #define RCB_COM_SF_CFG_INTMASK_BD		0x48C
411 - #define RCB_COM_SF_CFG_BD_RINT_STS		0x470
407 + #define RCB_COM_SF_CFG_INTMASK_RING		0x470
408 + #define RCB_COM_SF_CFG_RING_STS		0x474
409 + #define RCB_COM_SF_CFG_RING			0x478
410 + #define RCB_COM_SF_CFG_INTMASK_BD		0x47C
411 + #define RCB_COM_SF_CFG_BD_RINT_STS		0x480
412 412 #define RCB_COM_RCB_RD_BD_BUSY			0x490
413 413 #define RCB_COM_RCB_FBD_CRT_EN			0x494
414 414 #define RCB_COM_AXI_WR_ERR_INTMASK		0x498
···
534 534 #define GMAC_LD_LINK_COUNTER_REG		0x01D0UL
535 535 #define GMAC_LOOP_REG				0x01DCUL
536 536 #define GMAC_RECV_CONTROL_REG			0x01E0UL
537 + #define GMAC_PCS_RX_EN_REG			0x01E4UL
537 538 #define GMAC_VLAN_CODE_REG			0x01E8UL
538 539 #define GMAC_RX_OVERRUN_CNT_REG		0x01ECUL
539 540 #define GMAC_RX_LENGTHFIELD_ERR_CNT_REG	0x01F4UL
+39 -4
drivers/net/ethernet/hisilicon/hns/hns_enet.c
···
1186 1186 	if (h->phy_if == PHY_INTERFACE_MODE_XGMII)
1187 1187 		phy_dev->autoneg = false;
1188 1188 
1189 + 	if (h->phy_if == PHY_INTERFACE_MODE_SGMII)
1190 + 		phy_stop(phy_dev);
1191 + 
1189 1192 	return 0;
1190 1193 }
1191 1194 
···
1284 1281 	return cpu;
1285 1282 }
1286 1283 
1284 + static void hns_nic_free_irq(int q_num, struct hns_nic_priv *priv)
1285 + {
1286 + 	int i;
1287 + 
1288 + 	for (i = 0; i < q_num * 2; i++) {
1289 + 		if (priv->ring_data[i].ring->irq_init_flag == RCB_IRQ_INITED) {
1290 + 			irq_set_affinity_hint(priv->ring_data[i].ring->irq,
1291 + 					      NULL);
1292 + 			free_irq(priv->ring_data[i].ring->irq,
1293 + 				 &priv->ring_data[i]);
1294 + 			priv->ring_data[i].ring->irq_init_flag =
1295 + 				RCB_IRQ_NOT_INITED;
1296 + 		}
1297 + 	}
1298 + }
1299 + 
1287 1300 static int hns_nic_init_irq(struct hns_nic_priv *priv)
1288 1301 {
1289 1302 	struct hnae_handle *h = priv->ae_handle;
···
1325 1306 		if (ret) {
1326 1307 			netdev_err(priv->netdev, "request irq(%d) fail\n",
1327 1308 				   rd->ring->irq);
1328 - 			return ret;
1309 + 			goto out_free_irq;
1329 1310 		}
1330 1311 		disable_irq(rd->ring->irq);
1331 1312 
···
1340 1321 	}
1341 1322 
1342 1323 	return 0;
1324 + 
1325 + out_free_irq:
1326 + 	hns_nic_free_irq(h->q_num, priv);
1327 + 	return ret;
1343 1328 }
1344 1329 
1345 1330 static int hns_nic_net_up(struct net_device *ndev)
···
1352 1329 	struct hnae_handle *h = priv->ae_handle;
1353 1330 	int i, j;
1354 1331 	int ret;
1332 + 
1333 + 	if (!test_bit(NIC_STATE_DOWN, &priv->state))
1334 + 		return 0;
1355 1335 
1356 1336 	ret = hns_nic_init_irq(priv);
1357 1337 	if (ret != 0) {
···
1391 1365 		for (j = i - 1; j >= 0; j--)
1392 1366 			hns_nic_ring_close(ndev, j);
1393 1367 
1368 + 	hns_nic_free_irq(h->q_num, priv);
1394 1369 	set_bit(NIC_STATE_DOWN, &priv->state);
1395 1370 
1396 1371 	return ret;
···
1509 1482 }
1510 1483 
1511 1484 static void hns_tx_timeout_reset(struct hns_nic_priv *priv);
1485 + #define HNS_TX_TIMEO_LIMIT (40 * HZ)
1512 1486 static void hns_nic_net_timeout(struct net_device *ndev)
1513 1487 {
1514 1488 	struct hns_nic_priv *priv = netdev_priv(ndev);
1515 1489 
1516 - 	hns_tx_timeout_reset(priv);
1490 + 	if (ndev->watchdog_timeo < HNS_TX_TIMEO_LIMIT) {
1491 + 		ndev->watchdog_timeo *= 2;
1492 + 		netdev_info(ndev, "watchdog_timo changed to %d.\n",
1493 + 			    ndev->watchdog_timeo);
1494 + 	} else {
1495 + 		ndev->watchdog_timeo = HNS_NIC_TX_TIMEOUT;
1496 + 		hns_tx_timeout_reset(priv);
1497 + 	}
1517 1498 }
1518 1499 
1519 1500 static int hns_nic_do_ioctl(struct net_device *netdev, struct ifreq *ifr,
···
2084 2049 		= container_of(work, struct hns_nic_priv, service_task);
2085 2050 	struct hnae_handle *h = priv->ae_handle;
2086 2051 
2052 + 	hns_nic_reset_subtask(priv);
2087 2053 	hns_nic_update_link_status(priv->netdev);
2088 2054 	h->dev->ops->update_led_status(h);
2089 2055 	hns_nic_update_stats(priv->netdev);
2090 2056 
2091 - 	hns_nic_reset_subtask(priv);
2092 2057 	hns_nic_service_event_complete(priv);
2093 2058 }
2094 2059 
···
2374 2339 	ndev->min_mtu = MAC_MIN_MTU;
2375 2340 	switch (priv->enet_ver) {
2376 2341 	case AE_VERSION_2:
2377 - 		ndev->features |= NETIF_F_TSO | NETIF_F_TSO6;
2342 + 		ndev->features |= NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_NTUPLE;
2378 2343 		ndev->hw_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
2379 2344 			NETIF_F_RXCSUM | NETIF_F_SG | NETIF_F_GSO |
2380 2345 			NETIF_F_GRO | NETIF_F_TSO | NETIF_F_TSO6;
+10 -8
drivers/net/ethernet/ibm/ibmvnic.c
···
1939 1939 static struct ibmvnic_rwi *get_next_rwi(struct ibmvnic_adapter *adapter)
1940 1940 {
1941 1941 	struct ibmvnic_rwi *rwi;
1942 + 	unsigned long flags;
1942 1943 
1943 - 	mutex_lock(&adapter->rwi_lock);
1944 + 	spin_lock_irqsave(&adapter->rwi_lock, flags);
1944 1945 
1945 1946 	if (!list_empty(&adapter->rwi_list)) {
1946 1947 		rwi = list_first_entry(&adapter->rwi_list, struct ibmvnic_rwi,
···
1951 1950 		rwi = NULL;
1952 1951 	}
1953 1952 
1954 - 	mutex_unlock(&adapter->rwi_lock);
1953 + 	spin_unlock_irqrestore(&adapter->rwi_lock, flags);
1955 1954 	return rwi;
1956 1955 }
···
2026 2025 	struct list_head *entry, *tmp_entry;
2027 2026 	struct ibmvnic_rwi *rwi, *tmp;
2028 2027 	struct net_device *netdev = adapter->netdev;
2028 + 	unsigned long flags;
2029 2029 	int ret;
2030 2030 
2031 2031 	if (adapter->state == VNIC_REMOVING ||
···
2043 2041 		goto err;
2044 2042 	}
2045 2043 
2046 - 	mutex_lock(&adapter->rwi_lock);
2044 + 	spin_lock_irqsave(&adapter->rwi_lock, flags);
2047 2045 
2048 2046 	list_for_each(entry, &adapter->rwi_list) {
2049 2047 		tmp = list_entry(entry, struct ibmvnic_rwi, list);
2050 2048 		if (tmp->reset_reason == reason) {
2051 2049 			netdev_dbg(netdev, "Skipping matching reset\n");
2052 - 			mutex_unlock(&adapter->rwi_lock);
2050 + 			spin_unlock_irqrestore(&adapter->rwi_lock, flags);
2053 2051 			ret = EBUSY;
2054 2052 			goto err;
2055 2053 		}
2056 2054 	}
2057 2055 
2058 - 	rwi = kzalloc(sizeof(*rwi), GFP_KERNEL);
2056 + 	rwi = kzalloc(sizeof(*rwi), GFP_ATOMIC);
2059 2057 	if (!rwi) {
2060 - 		mutex_unlock(&adapter->rwi_lock);
2058 + 		spin_unlock_irqrestore(&adapter->rwi_lock, flags);
2061 2059 		ibmvnic_close(netdev);
2062 2060 		ret = ENOMEM;
2063 2061 		goto err;
···
2071 2069 	}
2072 2070 	rwi->reset_reason = reason;
2073 2071 	list_add_tail(&rwi->list, &adapter->rwi_list);
2074 - 	mutex_unlock(&adapter->rwi_lock);
2072 + 	spin_unlock_irqrestore(&adapter->rwi_lock, flags);
2075 2073 	adapter->resetting = true;
2076 2074 	netdev_dbg(adapter->netdev, "Scheduling reset (reason %d)\n", reason);
2077 2075 	schedule_work(&adapter->ibmvnic_reset);
···
4761 4759 
4762 4760 	INIT_WORK(&adapter->ibmvnic_reset, __ibmvnic_reset);
4763 4761 	INIT_LIST_HEAD(&adapter->rwi_list);
4764 - 	mutex_init(&adapter->rwi_lock);
4762 + 	spin_lock_init(&adapter->rwi_lock);
4765 4763 	adapter->resetting = false;
4766 4764 
4767 4765 	adapter->mac_change_pending = false;
+1 -1
drivers/net/ethernet/ibm/ibmvnic.h
···
1075 1075 	struct tasklet_struct tasklet;
1076 1076 	enum vnic_state state;
1077 1077 	enum ibmvnic_reset_reason reset_reason;
1078 - 	struct mutex rwi_lock;
1078 + 	spinlock_t rwi_lock;
1079 1079 	struct list_head rwi_list;
1080 1080 	struct work_struct ibmvnic_reset;
1081 1081 	bool resetting;
+7 -7
drivers/net/ethernet/intel/i40e/i40e_main.c
···
1543 1543 	netdev_info(netdev, "set new mac address %pM\n", addr->sa_data);
1544 1544 
1545 1545 	/* Copy the address first, so that we avoid a possible race with
1546 - 	 * .set_rx_mode(). If we copy after changing the address in the filter
1547 - 	 * list, we might open ourselves to a narrow race window where
1548 - 	 * .set_rx_mode could delete our dev_addr filter and prevent traffic
1549 - 	 * from passing.
1546 + 	 * .set_rx_mode().
1547 + 	 * - Remove old address from MAC filter
1548 + 	 * - Copy new address
1549 + 	 * - Add new address to MAC filter
1550 1550 	 */
1551 - 	ether_addr_copy(netdev->dev_addr, addr->sa_data);
1552 - 
1553 1551 	spin_lock_bh(&vsi->mac_filter_hash_lock);
1554 1552 	i40e_del_mac_filter(vsi, netdev->dev_addr);
1555 - 	i40e_add_mac_filter(vsi, addr->sa_data);
1553 + 	ether_addr_copy(netdev->dev_addr, addr->sa_data);
1554 + 	i40e_add_mac_filter(vsi, netdev->dev_addr);
1556 1555 	spin_unlock_bh(&vsi->mac_filter_hash_lock);
1556 + 
1557 1557 	if (vsi->type == I40E_VSI_MAIN) {
1558 1558 		i40e_status ret;
1559 1559 
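The i40e hunk above moves the `dev_addr` copy inside the filter-list lock, so delete-old, copy, add-new happen as one atomic sequence with respect to `.set_rx_mode()`. The ordering can be sketched with a toy filter table and a pthread mutex (the table, helper names, and sizes are all hypothetical stand-ins, not the driver's data structures):

```c
#include <pthread.h>
#include <stdbool.h>
#include <string.h>

#define ETH_ALEN 6
#define MAX_FILTERS 8

/* Toy stand-in for the i40e MAC filter hash. */
static unsigned char filters[MAX_FILTERS][ETH_ALEN];
static bool used[MAX_FILTERS];
unsigned char dev_addr[ETH_ALEN];
static pthread_mutex_t filter_lock = PTHREAD_MUTEX_INITIALIZER;

void filter_add(const unsigned char *mac)
{
	for (int i = 0; i < MAX_FILTERS; i++) {
		if (!used[i]) {
			memcpy(filters[i], mac, ETH_ALEN);
			used[i] = true;
			return;
		}
	}
}

void filter_del(const unsigned char *mac)
{
	for (int i = 0; i < MAX_FILTERS; i++) {
		if (used[i] && memcmp(filters[i], mac, ETH_ALEN) == 0) {
			used[i] = false;
			return;
		}
	}
}

bool filter_has(const unsigned char *mac)
{
	for (int i = 0; i < MAX_FILTERS; i++)
		if (used[i] && memcmp(filters[i], mac, ETH_ALEN) == 0)
			return true;
	return false;
}

/* Mirror of the patched ordering: delete the old address, update
 * dev_addr, then add the new one, all under the same lock, so a
 * concurrent rx-mode update never sees dev_addr without its filter. */
void set_mac_address(const unsigned char *new_mac)
{
	pthread_mutex_lock(&filter_lock);
	filter_del(dev_addr);
	memcpy(dev_addr, new_mac, ETH_ALEN);
	filter_add(dev_addr);
	pthread_mutex_unlock(&filter_lock);
}
```

Doing the copy outside the lock (as the pre-patch code did) leaves a window where another thread could delete the filter for the freshly copied `dev_addr` before it is re-added.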
+12 -31
drivers/net/ethernet/intel/i40e/i40e_txrx.c
···
1559 1559 }
1560 1560 
1561 1561 /**
1562 - * i40e_receive_skb - Send a completed packet up the stack
1563 - * @rx_ring: rx ring in play
1564 - * @skb: packet to send up
1565 - * @vlan_tag: vlan tag for packet
1566 - **/
1567 - void i40e_receive_skb(struct i40e_ring *rx_ring,
1568 - 		      struct sk_buff *skb, u16 vlan_tag)
1569 - {
1570 - 	struct i40e_q_vector *q_vector = rx_ring->q_vector;
1571 - 
1572 - 	if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
1573 - 	    (vlan_tag & VLAN_VID_MASK))
1574 - 		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
1575 - 
1576 - 	napi_gro_receive(&q_vector->napi, skb);
1577 - }
1578 - 
1579 - /**
1580 1562  * i40e_alloc_rx_buffers - Replace used receive buffers
1581 1563  * @rx_ring: ring to place buffers on
1582 1564  * @cleaned_count: number of buffers to replace
···
1775 1793  * other fields within the skb.
1776 1794  **/
1777 1795 void i40e_process_skb_fields(struct i40e_ring *rx_ring,
1778 - 			     union i40e_rx_desc *rx_desc, struct sk_buff *skb,
1779 - 			     u8 rx_ptype)
1796 + 			     union i40e_rx_desc *rx_desc, struct sk_buff *skb)
1780 1797 {
1781 1798 	u64 qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
1782 1799 	u32 rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
···
1783 1802 	u32 tsynvalid = rx_status & I40E_RXD_QW1_STATUS_TSYNVALID_MASK;
1784 1803 	u32 tsyn = (rx_status & I40E_RXD_QW1_STATUS_TSYNINDX_MASK) >>
1785 1804 		   I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT;
1805 + 	u8 rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
1806 + 		      I40E_RXD_QW1_PTYPE_SHIFT;
1786 1807 
1787 1808 	if (unlikely(tsynvalid))
1788 1809 		i40e_ptp_rx_hwtstamp(rx_ring->vsi->back, skb, tsyn);
···
1794 1811 	i40e_rx_checksum(rx_ring->vsi, skb, rx_desc);
1795 1812 
1796 1813 	skb_record_rx_queue(skb, rx_ring->queue_index);
1814 + 
1815 + 	if (qword & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
1816 + 		u16 vlan_tag = rx_desc->wb.qword0.lo_dword.l2tag1;
1817 + 
1818 + 		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
1819 + 				       le16_to_cpu(vlan_tag));
1820 + 	}
1797 1821 
1798 1822 	/* modifies the skb - consumes the enet header */
1799 1823 	skb->protocol = eth_type_trans(skb, rx_ring->netdev);
···
2340 2350 	struct i40e_rx_buffer *rx_buffer;
2341 2351 	union i40e_rx_desc *rx_desc;
2342 2352 	unsigned int size;
2343 - 	u16 vlan_tag;
2344 - 	u8 rx_ptype;
2345 2353 	u64 qword;
2346 2354 
2347 2355 	/* return some buffers to hardware, one at a time is too slow */
···
2432 2444 		/* probably a little skewed due to removing CRC */
2433 2445 		total_rx_bytes += skb->len;
2434 2446 
2435 - 		qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
2436 - 		rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
2437 - 			   I40E_RXD_QW1_PTYPE_SHIFT;
2438 - 
2439 2447 		/* populate checksum, VLAN, and protocol */
2440 - 		i40e_process_skb_fields(rx_ring, rx_desc, skb, rx_ptype);
2441 - 
2442 - 		vlan_tag = (qword & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
2443 - 			   le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1) : 0;
2448 + 		i40e_process_skb_fields(rx_ring, rx_desc, skb);
2444 2449 
2445 2450 		i40e_trace(clean_rx_irq_rx, rx_ring, rx_desc, skb);
2446 - 		i40e_receive_skb(rx_ring, skb, vlan_tag);
2451 + 		napi_gro_receive(&rx_ring->q_vector->napi, skb);
2447 2452 		skb = NULL;
2448 2453 
2449 2454 		/* update budget accounting */
+1 -4
drivers/net/ethernet/intel/i40e/i40e_txrx_common.h
···
12 12 			 union i40e_rx_desc *rx_desc,
13 13 			 u64 qw);
14 14 void i40e_process_skb_fields(struct i40e_ring *rx_ring,
15 - 			     union i40e_rx_desc *rx_desc, struct sk_buff *skb,
16 - 			     u8 rx_ptype);
17 - void i40e_receive_skb(struct i40e_ring *rx_ring,
18 - 		      struct sk_buff *skb, u16 vlan_tag);
15 + 			     union i40e_rx_desc *rx_desc, struct sk_buff *skb);
19 16 void i40e_xdp_ring_update_tail(struct i40e_ring *xdp_ring);
20 17 void i40e_update_rx_stats(struct i40e_ring *rx_ring,
21 18 			  unsigned int total_rx_bytes,
+2 -10
drivers/net/ethernet/intel/i40e/i40e_xsk.c
···
634 634 	struct i40e_rx_buffer *bi;
635 635 	union i40e_rx_desc *rx_desc;
636 636 	unsigned int size;
637 - 	u16 vlan_tag;
638 - 	u8 rx_ptype;
639 637 	u64 qword;
640 638 
641 639 	if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
···
711 713 		total_rx_bytes += skb->len;
712 714 		total_rx_packets++;
713 715 
714 - 		qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
715 - 		rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
716 - 			   I40E_RXD_QW1_PTYPE_SHIFT;
717 - 		i40e_process_skb_fields(rx_ring, rx_desc, skb, rx_ptype);
718 - 
719 - 		vlan_tag = (qword & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
720 - 			   le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1) : 0;
721 - 		i40e_receive_skb(rx_ring, skb, vlan_tag);
716 + 		i40e_process_skb_fields(rx_ring, rx_desc, skb);
717 + 		napi_gro_receive(&rx_ring->q_vector->napi, skb);
722 718 	}
723 719 
724 720 	i40e_finalize_xdp_rx(rx_ring, xdp_xmit);
+10 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
···
700 700 	u8 num_tcs = adapter->hw_tcs;
701 701 	u32 reg_val;
702 702 	u32 queue;
703 - 	u32 word;
704 703 
705 704 	/* remove VLAN filters beloning to this VF */
706 705 	ixgbe_clear_vf_vlans(adapter, vf);
···
756 757 			IXGBE_WRITE_REG(hw, IXGBE_PVFTXDCTL(reg_idx), reg_val);
757 758 		}
758 759 	}
760 + 
761 + 	IXGBE_WRITE_FLUSH(hw);
762 + }
763 + 
764 + static void ixgbe_vf_clear_mbx(struct ixgbe_adapter *adapter, u32 vf)
765 + {
766 + 	struct ixgbe_hw *hw = &adapter->hw;
767 + 	u32 word;
759 768 
760 769 	/* Clear VF's mailbox memory */
761 770 	for (word = 0; word < IXGBE_VFMAILBOX_SIZE; word++)
···
837 830 
838 831 	/* reset the filters for the device */
839 832 	ixgbe_vf_reset_event(adapter, vf);
833 + 
834 + 	ixgbe_vf_clear_mbx(adapter, vf);
840 835 
841 836 	/* set vf mac address */
842 837 	if (!is_zero_ether_addr(vf_mac))
+3 -3
drivers/net/ethernet/marvell/mvneta.c
···
408 408 	struct mvneta_pcpu_stats __percpu *stats;
409 409 
410 410 	int pkt_size;
411 - 	unsigned int frag_size;
412 411 	void __iomem *base;
413 412 	struct mvneta_rx_queue *rxqs;
414 413 	struct mvneta_tx_queue *txqs;
···
2904 2905 	if (!pp->bm_priv) {
2905 2906 		/* Set Offset */
2906 2907 		mvneta_rxq_offset_set(pp, rxq, 0);
2907 - 		mvneta_rxq_buf_size_set(pp, rxq, pp->frag_size);
2908 + 		mvneta_rxq_buf_size_set(pp, rxq, PAGE_SIZE < SZ_64K ?
2909 + 					PAGE_SIZE :
2910 + 					MVNETA_RX_BUF_SIZE(pp->pkt_size));
2908 2911 		mvneta_rxq_bm_disable(pp, rxq);
2909 2912 		mvneta_rxq_fill(pp, rxq, rxq->size);
2910 2913 	} else {
···
3761 3760 	int ret;
3762 3761 
3763 3762 	pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
3764 - 	pp->frag_size = PAGE_SIZE;
3765 3763 
3766 3764 	ret = mvneta_setup_rxqs(pp);
3767 3765 	if (ret)
+9 -7
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
···
4390 4390 	case PHY_INTERFACE_MODE_10GKR:
4391 4391 	case PHY_INTERFACE_MODE_XAUI:
4392 4392 	case PHY_INTERFACE_MODE_NA:
4393 - 		phylink_set(mask, 10000baseCR_Full);
4394 - 		phylink_set(mask, 10000baseSR_Full);
4395 - 		phylink_set(mask, 10000baseLR_Full);
4396 - 		phylink_set(mask, 10000baseLRM_Full);
4397 - 		phylink_set(mask, 10000baseER_Full);
4398 - 		phylink_set(mask, 10000baseKR_Full);
4393 + 		if (port->gop_id == 0) {
4394 + 			phylink_set(mask, 10000baseT_Full);
4395 + 			phylink_set(mask, 10000baseCR_Full);
4396 + 			phylink_set(mask, 10000baseSR_Full);
4397 + 			phylink_set(mask, 10000baseLR_Full);
4398 + 			phylink_set(mask, 10000baseLRM_Full);
4399 + 			phylink_set(mask, 10000baseER_Full);
4400 + 			phylink_set(mask, 10000baseKR_Full);
4401 + 		}
4399 4402 		/* Fall-through */
4400 4403 	case PHY_INTERFACE_MODE_RGMII:
4401 4404 	case PHY_INTERFACE_MODE_RGMII_ID:
···
4409 4406 		phylink_set(mask, 10baseT_Full);
4410 4407 		phylink_set(mask, 100baseT_Half);
4411 4408 		phylink_set(mask, 100baseT_Full);
4412 - 		phylink_set(mask, 10000baseT_Full);
4413 4409 		/* Fall-through */
4414 4410 	case PHY_INTERFACE_MODE_1000BASEX:
4415 4411 	case PHY_INTERFACE_MODE_2500BASEX:
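The mvpp2 hunk above gates the 10G link modes on `gop_id == 0` while letting lower speeds accumulate through the switch fall-through. The accumulation pattern can be sketched with a plain bitmask (the bit names, enum values, and function are illustrative stand-ins for the phylink/ethtool link-mode machinery):

```c
#include <stdint.h>

/* Illustrative bit positions standing in for ethtool link-mode bits. */
enum { M_10T_HALF, M_10T_FULL, M_100T_FULL, M_1000T_FULL,
       M_10000T_FULL, M_10000KR_FULL };

enum phy_mode { MODE_10GKR, MODE_RGMII, MODE_SGMII };

#define set_mode(mask, bit) ((mask) |= UINT64_C(1) << (bit))
#define has_mode(mask, bit) (((mask) >> (bit)) & 1)

/* Sketch of the validate logic after the patch: only the first GOP
 * instance (gop_id == 0) may advertise 10G modes; lower speeds are
 * added for every interface type via the switch fall-through. */
uint64_t build_mask(enum phy_mode mode, int gop_id)
{
	uint64_t mask = 0;

	switch (mode) {
	case MODE_10GKR:
		if (gop_id == 0) {
			set_mode(mask, M_10000T_FULL);
			set_mode(mask, M_10000KR_FULL);
		}
		/* fall through */
	case MODE_RGMII:
	case MODE_SGMII:
		set_mode(mask, M_10T_HALF);
		set_mode(mask, M_10T_FULL);
		set_mode(mask, M_100T_FULL);
		set_mode(mask, M_1000T_FULL);
		break;
	}
	return mask;
}
```

Before the patch, `10000baseT_Full` sat in the shared fall-through section, so every port advertised it; moving it inside the `gop_id == 0` branch restricts it to the one MAC instance that can actually drive 10G.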
+3 -8
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
···
1190 1190 				struct ethtool_ts_info *info)
1191 1191 {
1192 1192 	struct mlx5_core_dev *mdev = priv->mdev;
1193 - 	int ret;
1194 - 
1195 - 	ret = ethtool_op_get_ts_info(priv->netdev, info);
1196 - 	if (ret)
1197 - 		return ret;
1198 1193 
1199 1194 	info->phc_index = mlx5_clock_get_ptp_index(mdev);
1200 1195 
···
1197 1202 	    info->phc_index == -1)
1198 1203 		return 0;
1199 1204 
1200 - 	info->so_timestamping |= SOF_TIMESTAMPING_TX_HARDWARE |
1201 - 				 SOF_TIMESTAMPING_RX_HARDWARE |
1202 - 				 SOF_TIMESTAMPING_RAW_HARDWARE;
1205 + 	info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
1206 + 				SOF_TIMESTAMPING_RX_HARDWARE |
1207 + 				SOF_TIMESTAMPING_RAW_HARDWARE;
1203 1208 
1204 1209 	info->tx_types = BIT(HWTSTAMP_TX_OFF) |
1205 1210 			 BIT(HWTSTAMP_TX_ON);
+6
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
···
128 128 	return !params->lro_en && frag_sz <= PAGE_SIZE;
129 129 }
130 130 
131 + #define MLX5_MAX_MPWQE_LOG_WQE_STRIDE_SZ ((BIT(__mlx5_bit_sz(wq, log_wqe_stride_size)) - 1) + \
132 + 	MLX5_MPWQE_LOG_STRIDE_SZ_BASE)
131 133 static bool mlx5e_rx_mpwqe_is_linear_skb(struct mlx5_core_dev *mdev,
132 134 					 struct mlx5e_params *params)
133 135 {
···
138 136 	u8 log_num_strides;
139 137 
140 138 	if (!mlx5e_rx_is_linear_skb(mdev, params))
139 + 		return false;
140 + 
141 + 	if (order_base_2(frag_sz) > MLX5_MAX_MPWQE_LOG_WQE_STRIDE_SZ)
141 142 		return false;
142 143 
143 144 	if (MLX5_CAP_GEN(mdev, ext_stride_num_range))
···
1401 1396 	struct mlx5_core_dev *mdev = c->mdev;
1402 1397 	struct mlx5_rate_limit rl = {0};
1403 1398 
1399 + 	cancel_work_sync(&sq->dim.work);
1404 1400 	mlx5e_destroy_sq(mdev, sq->sqn);
1405 1401 	if (sq->rate_limit) {
1406 1402 		rl.rate = sq->rate_limit;
+4 -5
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
···
46 46 
47 47 #define MLX5E_REP_PARAMS_LOG_SQ_SIZE \
48 48 	max(0x6, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE)
49 + #define MLX5E_REP_PARAMS_DEF_NUM_CHANNELS 1
49 50 
50 51 static const char mlx5e_rep_driver_name[] = "mlx5e_rep";
51 52 
···
467 466 
468 467 	ASSERT_RTNL();
469 468 
470 - 	if ((!neigh_connected && (e->flags & MLX5_ENCAP_ENTRY_VALID)) ||
471 - 	    !ether_addr_equal(e->h_dest, ha))
469 + 	if ((e->flags & MLX5_ENCAP_ENTRY_VALID) &&
470 + 	    (!neigh_connected || !ether_addr_equal(e->h_dest, ha)))
472 471 		mlx5e_tc_encap_flows_del(priv, e);
473 472 
474 473 	if (neigh_connected && !(e->flags & MLX5_ENCAP_ENTRY_VALID)) {
···
1084 1083 	if (err)
1085 1084 		return err;
1086 1085 
1087 - 
1088 - 	priv->channels.params.num_channels =
1089 - 		mlx5e_get_netdev_max_channels(netdev);
1086 + 	priv->channels.params.num_channels = MLX5E_REP_PARAMS_DEF_NUM_CHANNELS;
1090 1087 
1091 1088 	mlx5e_build_rep_params(mdev, &priv->channels.params, netdev->mtu);
1092 1089 	mlx5e_build_rep_netdev(netdev);
+6 -4
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 1190 1190 int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget) 1191 1191 { 1192 1192 struct mlx5e_rq *rq = container_of(cq, struct mlx5e_rq, cq); 1193 - struct mlx5e_xdpsq *xdpsq; 1193 + struct mlx5e_xdpsq *xdpsq = &rq->xdpsq; 1194 1194 struct mlx5_cqe64 *cqe; 1195 1195 int work_done = 0; 1196 1196 ··· 1201 1201 work_done += mlx5e_decompress_cqes_cont(rq, cq, 0, budget); 1202 1202 1203 1203 cqe = mlx5_cqwq_get_cqe(&cq->wq); 1204 - if (!cqe) 1204 + if (!cqe) { 1205 + if (unlikely(work_done)) 1206 + goto out; 1205 1207 return 0; 1206 - 1207 - xdpsq = &rq->xdpsq; 1208 + } 1208 1209 1209 1210 do { 1210 1211 if (mlx5_get_cqe_format(cqe) == MLX5_COMPRESSED) { ··· 1220 1219 rq->handle_rx_cqe(rq, cqe); 1221 1220 } while ((++work_done < budget) && (cqe = mlx5_cqwq_get_cqe(&cq->wq))); 1222 1221 1222 + out: 1223 1223 if (xdpsq->doorbell) { 1224 1224 mlx5e_xmit_xdp_doorbell(xdpsq); 1225 1225 xdpsq->doorbell = false;
-2
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
··· 74 74 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_recover) }, 75 75 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_cqes) }, 76 76 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_queue_wake) }, 77 - { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_udp_seg_rem) }, 78 77 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_cqe_err) }, 79 78 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xdp_xmit) }, 80 79 { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xdp_full) }, ··· 197 198 s->tx_nop += sq_stats->nop; 198 199 s->tx_queue_stopped += sq_stats->stopped; 199 200 s->tx_queue_wake += sq_stats->wake; 200 - s->tx_udp_seg_rem += sq_stats->udp_seg_rem; 201 201 s->tx_queue_dropped += sq_stats->dropped; 202 202 s->tx_cqe_err += sq_stats->cqe_err; 203 203 s->tx_recover += sq_stats->recover;
-2
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
··· 87 87 u64 tx_recover; 88 88 u64 tx_cqes; 89 89 u64 tx_queue_wake; 90 - u64 tx_udp_seg_rem; 91 90 u64 tx_cqe_err; 92 91 u64 tx_xdp_xmit; 93 92 u64 tx_xdp_full; ··· 220 221 u64 csum_partial_inner; 221 222 u64 added_vlan_packets; 222 223 u64 nop; 223 - u64 udp_seg_rem; 224 224 #ifdef CONFIG_MLX5_EN_TLS 225 225 u64 tls_ooo; 226 226 u64 tls_resync_bytes;
+22 -14
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 870 870 struct mlx5_flow_handle *rule; 871 871 872 872 memcpy(slow_attr, flow->esw_attr, sizeof(*slow_attr)); 873 - slow_attr->action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST, 874 - slow_attr->mirror_count = 0, 875 - slow_attr->dest_chain = FDB_SLOW_PATH_CHAIN, 873 + slow_attr->action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; 874 + slow_attr->mirror_count = 0; 875 + slow_attr->dest_chain = FDB_SLOW_PATH_CHAIN; 876 876 877 877 rule = mlx5e_tc_offload_fdb_rules(esw, flow, spec, slow_attr); 878 878 if (!IS_ERR(rule)) ··· 887 887 struct mlx5_esw_flow_attr *slow_attr) 888 888 { 889 889 memcpy(slow_attr, flow->esw_attr, sizeof(*slow_attr)); 890 + slow_attr->action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; 891 + slow_attr->mirror_count = 0; 892 + slow_attr->dest_chain = FDB_SLOW_PATH_CHAIN; 890 893 mlx5e_tc_unoffload_fdb_rules(esw, flow, slow_attr); 891 894 flow->flags &= ~MLX5E_TC_FLOW_SLOW; 892 895 } ··· 910 907 struct mlx5e_priv *out_priv; 911 908 int err = 0, encap_err = 0; 912 909 913 - /* if prios are not supported, keep the old behaviour of using same prio 914 - * for all offloaded rules. 
915 - */ 916 - if (!mlx5_eswitch_prios_supported(esw)) 917 - attr->prio = 1; 910 + if (!mlx5_eswitch_prios_supported(esw) && attr->prio != 1) { 911 + NL_SET_ERR_MSG(extack, "E-switch priorities unsupported, upgrade FW"); 912 + return -EOPNOTSUPP; 913 + } 918 914 919 915 if (attr->chain > max_chain) { 920 916 NL_SET_ERR_MSG(extack, "Requested chain is out of supported range"); ··· 1096 1094 flow->rule[0] = rule; 1097 1095 } 1098 1096 1099 - if (e->flags & MLX5_ENCAP_ENTRY_VALID) { 1100 - e->flags &= ~MLX5_ENCAP_ENTRY_VALID; 1101 - mlx5_packet_reformat_dealloc(priv->mdev, e->encap_id); 1102 - } 1097 + /* we know that the encap is valid */ 1098 + e->flags &= ~MLX5_ENCAP_ENTRY_VALID; 1099 + mlx5_packet_reformat_dealloc(priv->mdev, e->encap_id); 1103 1100 } 1104 1101 1105 1102 static struct mlx5_fc *mlx5e_tc_get_counter(struct mlx5e_tc_flow *flow) ··· 2967 2966 NL_SET_ERR_MSG(extack, "Requested destination chain is out of supported range"); 2968 2967 return -EOPNOTSUPP; 2969 2968 } 2970 - action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | 2971 - MLX5_FLOW_CONTEXT_ACTION_COUNT; 2969 + action |= MLX5_FLOW_CONTEXT_ACTION_COUNT; 2972 2970 attr->dest_chain = dest_chain; 2973 2971 2974 2972 continue; ··· 2979 2979 attr->action = action; 2980 2980 if (!actions_match_supported(priv, exts, parse_attr, flow, extack)) 2981 2981 return -EOPNOTSUPP; 2982 + 2983 + if (attr->dest_chain) { 2984 + if (attr->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) { 2985 + NL_SET_ERR_MSG(extack, "Mirroring goto chain rules isn't supported"); 2986 + return -EOPNOTSUPP; 2987 + } 2988 + attr->action |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST; 2989 + } 2982 2990 2983 2991 if (attr->mirror_count > 0 && !mlx5_esw_has_fwd_fdb(priv->mdev)) { 2984 2992 NL_SET_ERR_MSG_MOD(extack,
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
··· 452 452 453 453 if ((fte->action.action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) && 454 454 --fte->dests_size) { 455 - modify_mask = BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_DESTINATION_LIST), 455 + modify_mask = BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_DESTINATION_LIST); 456 456 update_fte = true; 457 457 } 458 458 out:
+18 -1
drivers/net/ethernet/mellanox/mlxsw/core.c
··· 81 81 struct mlxsw_core_port *ports; 82 82 unsigned int max_ports; 83 83 bool reload_fail; 84 + bool fw_flash_in_progress; 84 85 unsigned long driver_priv[0]; 85 86 /* driver_priv has to be always the last item */ 86 87 }; ··· 429 428 struct rcu_head rcu; 430 429 }; 431 430 432 - #define MLXSW_EMAD_TIMEOUT_MS 200 431 + #define MLXSW_EMAD_TIMEOUT_DURING_FW_FLASH_MS 3000 432 + #define MLXSW_EMAD_TIMEOUT_MS 200 433 433 434 434 static void mlxsw_emad_trans_timeout_schedule(struct mlxsw_reg_trans *trans) 435 435 { 436 436 unsigned long timeout = msecs_to_jiffies(MLXSW_EMAD_TIMEOUT_MS); 437 + 438 + if (trans->core->fw_flash_in_progress) 439 + timeout = msecs_to_jiffies(MLXSW_EMAD_TIMEOUT_DURING_FW_FLASH_MS); 437 440 438 441 queue_delayed_work(trans->core->emad_wq, &trans->timeout_dw, timeout); 439 442 } ··· 1858 1853 p_linear_size); 1859 1854 } 1860 1855 EXPORT_SYMBOL(mlxsw_core_kvd_sizes_get); 1856 + 1857 + void mlxsw_core_fw_flash_start(struct mlxsw_core *mlxsw_core) 1858 + { 1859 + mlxsw_core->fw_flash_in_progress = true; 1860 + } 1861 + EXPORT_SYMBOL(mlxsw_core_fw_flash_start); 1862 + 1863 + void mlxsw_core_fw_flash_end(struct mlxsw_core *mlxsw_core) 1864 + { 1865 + mlxsw_core->fw_flash_in_progress = false; 1866 + } 1867 + EXPORT_SYMBOL(mlxsw_core_fw_flash_end); 1861 1868 1862 1869 static int __init mlxsw_core_module_init(void) 1863 1870 {
+3
drivers/net/ethernet/mellanox/mlxsw/core.h
··· 292 292 u64 *p_single_size, u64 *p_double_size, 293 293 u64 *p_linear_size); 294 294 295 + void mlxsw_core_fw_flash_start(struct mlxsw_core *mlxsw_core); 296 + void mlxsw_core_fw_flash_end(struct mlxsw_core *mlxsw_core); 297 + 295 298 bool mlxsw_core_res_valid(struct mlxsw_core *mlxsw_core, 296 299 enum mlxsw_res_id res_id); 297 300
+7 -1
drivers/net/ethernet/mellanox/mlxsw/spectrum.c
··· 309 309 }, 310 310 .mlxsw_sp = mlxsw_sp 311 311 }; 312 + int err; 312 313 313 - return mlxfw_firmware_flash(&mlxsw_sp_mlxfw_dev.mlxfw_dev, firmware); 314 + mlxsw_core_fw_flash_start(mlxsw_sp->core); 315 + err = mlxfw_firmware_flash(&mlxsw_sp_mlxfw_dev.mlxfw_dev, firmware); 316 + mlxsw_core_fw_flash_end(mlxsw_sp->core); 317 + 318 + return err; 314 319 } 315 320 316 321 static int mlxsw_sp_fw_rev_validate(struct mlxsw_sp *mlxsw_sp) ··· 3526 3521 MLXSW_SP_RXL_MR_MARK(ACL2, TRAP_TO_CPU, MULTICAST, false), 3527 3522 /* NVE traps */ 3528 3523 MLXSW_SP_RXL_MARK(NVE_ENCAP_ARP, TRAP_TO_CPU, ARP, false), 3524 + MLXSW_SP_RXL_NO_MARK(NVE_DECAP_ARP, TRAP_TO_CPU, ARP, false), 3529 3525 }; 3530 3526 3531 3527 static int mlxsw_sp_cpu_policers_set(struct mlxsw_core *mlxsw_core)
+1 -1
drivers/net/ethernet/mellanox/mlxsw/spectrum_nve.c
··· 977 977 { 978 978 WARN_ON(mlxsw_sp->nve->num_nve_tunnels); 979 979 rhashtable_destroy(&mlxsw_sp->nve->mc_list_ht); 980 - mlxsw_sp->nve = NULL; 981 980 kfree(mlxsw_sp->nve); 981 + mlxsw_sp->nve = NULL; 982 982 }
+1
drivers/net/ethernet/mellanox/mlxsw/trap.h
··· 60 60 MLXSW_TRAP_ID_IPV6_MC_LINK_LOCAL_DEST = 0x91, 61 61 MLXSW_TRAP_ID_HOST_MISS_IPV6 = 0x92, 62 62 MLXSW_TRAP_ID_IPIP_DECAP_ERROR = 0xB1, 63 + MLXSW_TRAP_ID_NVE_DECAP_ARP = 0xB8, 63 64 MLXSW_TRAP_ID_NVE_ENCAP_ARP = 0xBD, 64 65 MLXSW_TRAP_ID_ROUTER_ALERT_IPV4 = 0xD6, 65 66 MLXSW_TRAP_ID_ROUTER_ALERT_IPV6 = 0xD7,
+3 -8
drivers/net/ethernet/microchip/lan743x_main.c
··· 802 802 u32 mac_addr_hi = 0; 803 803 u32 mac_addr_lo = 0; 804 804 u32 data; 805 - int ret; 806 805 807 806 netdev = adapter->netdev; 808 - lan743x_csr_write(adapter, MAC_CR, MAC_CR_RST_); 809 - ret = lan743x_csr_wait_for_bit(adapter, MAC_CR, MAC_CR_RST_, 810 - 0, 1000, 20000, 100); 811 - if (ret) 812 - return ret; 813 807 814 808 /* setup auto duplex, and speed detection */ 815 809 data = lan743x_csr_read(adapter, MAC_CR); ··· 2713 2719 snprintf(adapter->mdiobus->id, MII_BUS_ID_SIZE, 2714 2720 "pci-%s", pci_name(adapter->pdev)); 2715 2721 2716 - /* set to internal PHY id */ 2717 - adapter->mdiobus->phy_mask = ~(u32)BIT(1); 2722 + if ((adapter->csr.id_rev & ID_REV_ID_MASK_) == ID_REV_ID_LAN7430_) 2723 + /* LAN7430 uses internal phy at address 1 */ 2724 + adapter->mdiobus->phy_mask = ~(u32)BIT(1); 2718 2725 2719 2726 /* register mdiobus */ 2720 2727 ret = mdiobus_register(adapter->mdiobus);
+1 -1
drivers/net/ethernet/neterion/vxge/vxge-config.c
··· 808 808 struct vxge_hw_device_date *fw_date = &hw_info->fw_date; 809 809 struct vxge_hw_device_version *flash_version = &hw_info->flash_version; 810 810 struct vxge_hw_device_date *flash_date = &hw_info->flash_date; 811 - u64 data0, data1 = 0, steer_ctrl = 0; 811 + u64 data0 = 0, data1 = 0, steer_ctrl = 0; 812 812 enum vxge_hw_status status; 813 813 814 814 status = vxge_hw_vpath_fw_api(vpath,
+22 -6
drivers/net/ethernet/netronome/nfp/flower/offload.c
··· 345 345 !(tcp_flags & (TCPHDR_FIN | TCPHDR_SYN | TCPHDR_RST))) 346 346 return -EOPNOTSUPP; 347 347 348 - /* We need to store TCP flags in the IPv4 key space, thus 349 - * we need to ensure we include a IPv4 key layer if we have 350 - * not done so already. 348 + /* We need to store TCP flags in the either the IPv4 or IPv6 key 349 + * space, thus we need to ensure we include a IPv4/IPv6 key 350 + * layer if we have not done so already. 351 351 */ 352 - if (!(key_layer & NFP_FLOWER_LAYER_IPV4)) { 353 - key_layer |= NFP_FLOWER_LAYER_IPV4; 354 - key_size += sizeof(struct nfp_flower_ipv4); 352 + if (!key_basic) 353 + return -EOPNOTSUPP; 354 + 355 + if (!(key_layer & NFP_FLOWER_LAYER_IPV4) && 356 + !(key_layer & NFP_FLOWER_LAYER_IPV6)) { 357 + switch (key_basic->n_proto) { 358 + case cpu_to_be16(ETH_P_IP): 359 + key_layer |= NFP_FLOWER_LAYER_IPV4; 360 + key_size += sizeof(struct nfp_flower_ipv4); 361 + break; 362 + 363 + case cpu_to_be16(ETH_P_IPV6): 364 + key_layer |= NFP_FLOWER_LAYER_IPV6; 365 + key_size += sizeof(struct nfp_flower_ipv6); 366 + break; 367 + 368 + default: 369 + return -EOPNOTSUPP; 370 + } 355 371 } 356 372 } 357 373
+1 -1
drivers/net/ethernet/nuvoton/w90p910_ether.c
··· 912 912 .ndo_validate_addr = eth_validate_addr, 913 913 }; 914 914 915 - static void __init get_mac_address(struct net_device *dev) 915 + static void get_mac_address(struct net_device *dev) 916 916 { 917 917 struct w90p910_ether *ether = netdev_priv(dev); 918 918 struct platform_device *pdev;
+2 -1
drivers/net/ethernet/qlogic/qed/qed_hsi.h
··· 12831 12831 MFW_DRV_MSG_BW_UPDATE10, 12832 12832 MFW_DRV_MSG_TRANSCEIVER_STATE_CHANGE, 12833 12833 MFW_DRV_MSG_BW_UPDATE11, 12834 - MFW_DRV_MSG_OEM_CFG_UPDATE, 12834 + MFW_DRV_MSG_RESERVED, 12835 12835 MFW_DRV_MSG_GET_TLV_REQ, 12836 + MFW_DRV_MSG_OEM_CFG_UPDATE, 12836 12837 MFW_DRV_MSG_MAX 12837 12838 }; 12838 12839
+1
drivers/net/ethernet/qlogic/qed/qed_ll2.c
··· 2496 2496 if (unlikely(dma_mapping_error(&cdev->pdev->dev, mapping))) { 2497 2497 DP_NOTICE(cdev, 2498 2498 "Unable to map frag - dropping packet\n"); 2499 + rc = -ENOMEM; 2499 2500 goto err; 2500 2501 } 2501 2502
+1 -1
drivers/net/ethernet/realtek/r8169.c
··· 6469 6469 goto out; 6470 6470 } 6471 6471 6472 - if (status & LinkChg) 6472 + if (status & LinkChg && tp->dev->phydev) 6473 6473 phy_mac_interrupt(tp->dev->phydev); 6474 6474 6475 6475 if (unlikely(status & RxFIFOOver &&
+1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 4250 4250 priv->wq = create_singlethread_workqueue("stmmac_wq"); 4251 4251 if (!priv->wq) { 4252 4252 dev_err(priv->device, "failed to create workqueue\n"); 4253 + ret = -ENOMEM; 4253 4254 goto error_wq; 4254 4255 } 4255 4256
+2 -2
drivers/net/ieee802154/ca8210.c
··· 721 721 static void ca8210_rx_done(struct cas_control *cas_ctl) 722 722 { 723 723 u8 *buf; 724 - u8 len; 724 + unsigned int len; 725 725 struct work_priv_container *mlme_reset_wpc; 726 726 struct ca8210_priv *priv = cas_ctl->priv; 727 727 ··· 730 730 if (len > CA8210_SPI_BUF_SIZE) { 731 731 dev_crit( 732 732 &priv->spi->dev, 733 - "Received packet len (%d) erroneously long\n", 733 + "Received packet len (%u) erroneously long\n", 734 734 len 735 735 ); 736 736 goto finish;
+2 -2
drivers/net/ieee802154/mac802154_hwsim.c
··· 492 492 !info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE]) 493 493 return -EINVAL; 494 494 495 - if (nla_parse_nested(edge_attrs, MAC802154_HWSIM_EDGE_ATTR_MAX + 1, 495 + if (nla_parse_nested(edge_attrs, MAC802154_HWSIM_EDGE_ATTR_MAX, 496 496 info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE], 497 497 hwsim_edge_policy, NULL)) 498 498 return -EINVAL; ··· 542 542 !info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE]) 543 543 return -EINVAL; 544 544 545 - if (nla_parse_nested(edge_attrs, MAC802154_HWSIM_EDGE_ATTR_MAX + 1, 545 + if (nla_parse_nested(edge_attrs, MAC802154_HWSIM_EDGE_ATTR_MAX, 546 546 info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE], 547 547 hwsim_edge_policy, NULL)) 548 548 return -EINVAL;
+2 -5
drivers/net/phy/phy_device.c
··· 308 308 if (ret < 0) 309 309 return ret; 310 310 311 - /* The PHY needs to renegotiate. */ 312 - phydev->link = 0; 313 - phydev->state = PHY_UP; 314 - 315 - phy_start_machine(phydev); 311 + if (phydev->attached_dev && phydev->adjust_link) 312 + phy_start_machine(phydev); 316 313 317 314 return 0; 318 315 }
+16 -2
drivers/net/usb/hso.c
··· 2807 2807 return -EIO; 2808 2808 } 2809 2809 2810 + /* check if we have a valid interface */ 2811 + if (if_num > 16) { 2812 + kfree(config_data); 2813 + return -EINVAL; 2814 + } 2815 + 2810 2816 switch (config_data[if_num]) { 2811 2817 case 0x0: 2812 2818 result = 0; ··· 2883 2877 2884 2878 /* Get the interface/port specification from either driver_info or from 2885 2879 * the device itself */ 2886 - if (id->driver_info) 2880 + if (id->driver_info) { 2881 + /* if_num is controlled by the device, driver_info is a 0 terminated 2882 + * array. Make sure, the access is in bounds! */ 2883 + for (i = 0; i <= if_num; ++i) 2884 + if (((u32 *)(id->driver_info))[i] == 0) 2885 + goto exit; 2887 2886 port_spec = ((u32 *)(id->driver_info))[if_num]; 2888 - else 2887 + } else { 2889 2888 port_spec = hso_get_config_data(interface); 2889 + if (port_spec < 0) 2890 + goto exit; 2891 + } 2890 2892 2891 2893 /* Check if we need to switch to alt interfaces prior to port 2892 2894 * configuration */
+4
drivers/net/usb/lan78xx.c
··· 2320 2320 ret = lan78xx_write_reg(dev, RX_ADDRL, addr_lo); 2321 2321 ret = lan78xx_write_reg(dev, RX_ADDRH, addr_hi); 2322 2322 2323 + /* Added to support MAC address changes */ 2324 + ret = lan78xx_write_reg(dev, MAF_LO(0), addr_lo); 2325 + ret = lan78xx_write_reg(dev, MAF_HI(0), addr_hi | MAF_HI_VALID_); 2326 + 2323 2327 return 0; 2324 2328 } 2325 2329
+2
drivers/net/usb/qmi_wwan.c
··· 1117 1117 {QMI_FIXED_INTF(0x1435, 0xd181, 4)}, /* Wistron NeWeb D18Q1 */ 1118 1118 {QMI_FIXED_INTF(0x1435, 0xd181, 5)}, /* Wistron NeWeb D18Q1 */ 1119 1119 {QMI_FIXED_INTF(0x1435, 0xd191, 4)}, /* Wistron NeWeb D19Q1 */ 1120 + {QMI_QUIRK_SET_DTR(0x1508, 0x1001, 4)}, /* Fibocom NL668 series */ 1120 1121 {QMI_FIXED_INTF(0x16d8, 0x6003, 0)}, /* CMOTech 6003 */ 1121 1122 {QMI_FIXED_INTF(0x16d8, 0x6007, 0)}, /* CMOTech CHE-628S */ 1122 1123 {QMI_FIXED_INTF(0x16d8, 0x6008, 0)}, /* CMOTech CMU-301 */ ··· 1230 1229 {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */ 1231 1230 {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */ 1232 1231 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1201, 2)}, /* Telit LE920, LE920A4 */ 1232 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1900, 1)}, /* Telit LN940 series */ 1233 1233 {QMI_FIXED_INTF(0x1c9e, 0x9801, 3)}, /* Telewell TW-3G HSPA+ */ 1234 1234 {QMI_FIXED_INTF(0x1c9e, 0x9803, 4)}, /* Telewell TW-3G HSPA+ */ 1235 1235 {QMI_FIXED_INTF(0x1c9e, 0x9b01, 3)}, /* XS Stick W100-2 from 4G Systems */
+22 -11
drivers/net/usb/r8152.c
··· 129 129 #define USB_UPS_CTRL 0xd800 130 130 #define USB_POWER_CUT 0xd80a 131 131 #define USB_MISC_0 0xd81a 132 + #define USB_MISC_1 0xd81f 132 133 #define USB_AFE_CTRL2 0xd824 133 134 #define USB_UPS_CFG 0xd842 134 135 #define USB_UPS_FLAGS 0xd848 ··· 556 555 557 556 /* MAC PASSTHRU */ 558 557 #define AD_MASK 0xfee0 558 + #define BND_MASK 0x0004 559 559 #define EFUSE 0xcfdb 560 560 #define PASS_THRU_MASK 0x1 561 561 ··· 1152 1150 return ret; 1153 1151 } 1154 1152 1155 - /* Devices containing RTL8153-AD can support a persistent 1153 + /* Devices containing proper chips can support a persistent 1156 1154 * host system provided MAC address. 1157 1155 * Examples of this are Dell TB15 and Dell WD15 docks 1158 1156 */ ··· 1167 1165 1168 1166 /* test for -AD variant of RTL8153 */ 1169 1167 ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_MISC_0); 1170 - if ((ocp_data & AD_MASK) != 0x1000) 1171 - return -ENODEV; 1172 - 1173 - /* test for MAC address pass-through bit */ 1174 - ocp_data = ocp_read_byte(tp, MCU_TYPE_USB, EFUSE); 1175 - if ((ocp_data & PASS_THRU_MASK) != 1) 1176 - return -ENODEV; 1168 + if ((ocp_data & AD_MASK) == 0x1000) { 1169 + /* test for MAC address pass-through bit */ 1170 + ocp_data = ocp_read_byte(tp, MCU_TYPE_USB, EFUSE); 1171 + if ((ocp_data & PASS_THRU_MASK) != 1) { 1172 + netif_dbg(tp, probe, tp->netdev, 1173 + "No efuse for RTL8153-AD MAC pass through\n"); 1174 + return -ENODEV; 1175 + } 1176 + } else { 1177 + /* test for RTL8153-BND */ 1178 + ocp_data = ocp_read_byte(tp, MCU_TYPE_USB, USB_MISC_1); 1179 + if ((ocp_data & BND_MASK) == 0) { 1180 + netif_dbg(tp, probe, tp->netdev, 1181 + "Invalid variant for MAC pass through\n"); 1182 + return -ENODEV; 1183 + } 1184 + } 1177 1185 1178 1186 /* returns _AUXMAC_#AABBCCDDEEFF# */ 1179 1187 status = acpi_evaluate_object(NULL, "\\_SB.AMAC", NULL, &buffer); ··· 1229 1217 if (tp->version == RTL_VER_01) { 1230 1218 ret = pla_ocp_read(tp, PLA_IDR, 8, sa.sa_data); 1231 1219 } else { 1232 - /* if this is not an RTL8153-AD, no eFuse mac pass thru set, 1233 - * or system doesn't provide valid _SB.AMAC this will be 1234 - * be expected to non-zero 1235 - */ 1220 + /* if device doesn't support MAC pass through this will 1221 + * be expected to be non-zero 1222 + */ 1236 1223 ret = vendor_mac_passthru_addr_read(tp, &sa); 1237 1224 if (ret < 0)
+14 -7
drivers/net/vxlan.c
··· 568 568 rd->remote_port = port; 569 569 rd->remote_vni = vni; 570 570 rd->remote_ifindex = ifindex; 571 + rd->offloaded = false; 571 572 return 1; 572 573 } 573 574 ··· 3259 3258 struct vxlan_net *vn = net_generic(net, vxlan_net_id); 3260 3259 struct vxlan_dev *vxlan = netdev_priv(dev); 3261 3260 struct vxlan_fdb *f = NULL; 3261 + bool unregister = false; 3262 3262 int err; 3263 3263 3264 3264 err = vxlan_dev_configure(net, dev, conf, false, extack); ··· 3285 3283 err = register_netdevice(dev); 3286 3284 if (err) 3287 3285 goto errout; 3286 + unregister = true; 3288 3287 3289 3288 err = rtnl_configure_link(dev, NULL); 3290 - if (err) { 3291 - unregister_netdevice(dev); 3289 + if (err) 3292 3290 goto errout; 3293 - } 3294 3291 3295 3292 /* notify default fdb entry */ 3296 3293 if (f) ··· 3297 3296 3298 3297 list_add(&vxlan->next, &vn->vxlan_list); 3299 3298 return 0; 3299 + 3300 3300 errout: 3301 + /* unregister_netdevice() destroys the default FDB entry with deletion 3302 + * notification. But the addition notification was not sent yet, so 3303 + * destroy the entry by hand here. 
3304 + */ 3301 3305 if (f) 3302 3306 vxlan_fdb_destroy(vxlan, f, false); 3307 + if (unregister) 3308 + unregister_netdevice(dev); 3303 3309 return err; 3304 3310 } 3305 3311 ··· 3542 3534 struct vxlan_rdst *dst = &vxlan->default_dst; 3543 3535 struct vxlan_rdst old_dst; 3544 3536 struct vxlan_config conf; 3545 - struct vxlan_fdb *f = NULL; 3546 3537 int err; 3547 3538 3548 3539 err = vxlan_nl2conf(tb, data, ··· 3567 3560 old_dst.remote_ifindex, 0); 3568 3561 3569 3562 if (!vxlan_addr_any(&dst->remote_ip)) { 3570 - err = vxlan_fdb_create(vxlan, all_zeros_mac, 3563 + err = vxlan_fdb_update(vxlan, all_zeros_mac, 3571 3564 &dst->remote_ip, 3572 3565 NUD_REACHABLE | NUD_PERMANENT, 3566 + NLM_F_APPEND | NLM_F_CREATE, 3573 3567 vxlan->cfg.dst_port, 3574 3568 dst->remote_vni, 3575 3569 dst->remote_vni, 3576 3570 dst->remote_ifindex, 3577 - NTF_SELF, &f); 3571 + NTF_SELF); 3578 3572 if (err) { 3579 3573 spin_unlock_bh(&vxlan->hash_lock); 3580 3574 return err; 3581 3575 } 3582 - vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), RTM_NEWNEIGH); 3583 3576 } 3584 3577 spin_unlock_bh(&vxlan->hash_lock); 3585 3578 }
+28
drivers/net/wireless/ath/ath10k/core.c
··· 2418 2418 return 0; 2419 2419 } 2420 2420 2421 + static int ath10k_core_compat_services(struct ath10k *ar) 2422 + { 2423 + struct ath10k_fw_file *fw_file = &ar->normal_mode_fw.fw_file; 2424 + 2425 + /* all 10.x firmware versions support thermal throttling but don't 2426 + * advertise the support via service flags so we have to hardcode 2427 + * it here 2428 + */ 2429 + switch (fw_file->wmi_op_version) { 2430 + case ATH10K_FW_WMI_OP_VERSION_10_1: 2431 + case ATH10K_FW_WMI_OP_VERSION_10_2: 2432 + case ATH10K_FW_WMI_OP_VERSION_10_2_4: 2433 + case ATH10K_FW_WMI_OP_VERSION_10_4: 2434 + set_bit(WMI_SERVICE_THERM_THROT, ar->wmi.svc_map); 2435 + break; 2436 + default: 2437 + break; 2438 + } 2439 + 2440 + return 0; 2441 + } 2442 + 2421 2443 int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode, 2422 2444 const struct ath10k_fw_components *fw) 2423 2445 { ··· 2636 2614 status = ath10k_wmi_wait_for_unified_ready(ar); 2637 2615 if (status) { 2638 2616 ath10k_err(ar, "wmi unified ready event not received\n"); 2617 + goto err_hif_stop; 2618 + } 2619 + 2620 + status = ath10k_core_compat_services(ar); 2621 + if (status) { 2622 + ath10k_err(ar, "compat services failed: %d\n", status); 2639 2623 goto err_hif_stop; 2640 2624 } 2641 2625
+3 -2
drivers/net/wireless/ath/ath10k/debug.c
··· 2578 2578 debugfs_create_file("pktlog_filter", 0644, ar->debug.debugfs_phy, ar, 2579 2579 &fops_pktlog_filter); 2580 2580 2581 - debugfs_create_file("quiet_period", 0644, ar->debug.debugfs_phy, ar, 2582 - &fops_quiet_period); 2581 + if (test_bit(WMI_SERVICE_THERM_THROT, ar->wmi.svc_map)) 2582 + debugfs_create_file("quiet_period", 0644, ar->debug.debugfs_phy, ar, 2583 + &fops_quiet_period); 2583 2584 2584 2585 debugfs_create_file("tpc_stats", 0400, ar->debug.debugfs_phy, ar, 2585 2586 &fops_tpc_stats);
+9
drivers/net/wireless/ath/ath10k/thermal.c
··· 140 140 141 141 lockdep_assert_held(&ar->conf_mutex); 142 142 143 + if (!test_bit(WMI_SERVICE_THERM_THROT, ar->wmi.svc_map)) 144 + return; 145 + 143 146 if (!ar->wmi.ops->gen_pdev_set_quiet_mode) 144 147 return; 145 148 ··· 167 164 struct thermal_cooling_device *cdev; 168 165 struct device *hwmon_dev; 169 166 int ret; 167 + 168 + if (!test_bit(WMI_SERVICE_THERM_THROT, ar->wmi.svc_map)) 169 + return 0; 170 170 171 171 cdev = thermal_cooling_device_register("ath10k_thermal", ar, 172 172 &ath10k_thermal_ops); ··· 222 216 223 217 void ath10k_thermal_unregister(struct ath10k *ar) 224 218 { 219 + if (!test_bit(WMI_SERVICE_THERM_THROT, ar->wmi.svc_map)) 220 + return; 221 + 225 222 sysfs_remove_link(&ar->dev->kobj, "cooling_device"); 226 223 thermal_cooling_device_unregister(ar->thermal.cdev); 227 224 }
+3
drivers/net/wireless/ath/ath10k/wmi-tlv.h
··· 1564 1564 SVCMAP(WMI_TLV_SERVICE_SPOOF_MAC_SUPPORT, 1565 1565 WMI_SERVICE_SPOOF_MAC_SUPPORT, 1566 1566 WMI_TLV_MAX_SERVICE); 1567 + SVCMAP(WMI_TLV_SERVICE_THERM_THROT, 1568 + WMI_SERVICE_THERM_THROT, 1569 + WMI_TLV_MAX_SERVICE); 1567 1570 } 1568 1571 1569 1572 #undef SVCMAP
+1
drivers/net/wireless/ath/ath10k/wmi.h
··· 205 205 WMI_SERVICE_SPOOF_MAC_SUPPORT, 206 206 WMI_SERVICE_TX_DATA_ACK_RSSI, 207 207 WMI_SERVICE_VDEV_DIFFERENT_BEACON_INTERVAL_SUPPORT, 208 + WMI_SERVICE_THERM_THROT, 208 209 209 210 /* keep last */ 210 211 WMI_SERVICE_MAX,
+9
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
··· 881 881 int ret, i, j; 882 882 u16 cmd_wide_id = WIDE_ID(PHY_OPS_GROUP, GEO_TX_POWER_LIMIT); 883 883 884 + /* 885 + * This command is not supported on earlier firmware versions. 886 + * Unfortunately, we don't have a TLV API flag to rely on, so 887 + * rely on the major version which is in the first byte of 888 + * ucode_ver. 889 + */ 890 + if (IWL_UCODE_SERIAL(mvm->fw->ucode_ver) < 41) 891 + return 0; 892 + 884 893 ret = iwl_mvm_sar_get_wgds_table(mvm); 885 894 if (ret < 0) { 886 895 IWL_DEBUG_RADIO(mvm,
+2 -3
drivers/net/wireless/marvell/mwifiex/11n.c
··· 696 696 "Send delba to tid=%d, %pM\n", 697 697 tid, rx_reor_tbl_ptr->ta); 698 698 mwifiex_send_delba(priv, tid, rx_reor_tbl_ptr->ta, 0); 699 - spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, 700 - flags); 701 - return; 699 + goto exit; 702 700 } 703 701 } 702 + exit: 704 703 spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 705 704 } 706 705
+49 -47
drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c
··· 103 103 * There could be holes in the buffer, which are skipped by the function. 104 104 * Since the buffer is linear, the function uses rotation to simulate 105 105 * circular buffer. 106 - * 107 - * The caller must hold rx_reorder_tbl_lock spinlock. 108 106 */ 109 107 static void 110 108 mwifiex_11n_dispatch_pkt_until_start_win(struct mwifiex_private *priv, ··· 111 113 { 112 114 int pkt_to_send, i; 113 115 void *rx_tmp_ptr; 116 + unsigned long flags; 114 117 115 118 pkt_to_send = (start_win > tbl->start_win) ? 116 119 min((start_win - tbl->start_win), tbl->win_size) : 117 120 tbl->win_size; 118 121 119 122 for (i = 0; i < pkt_to_send; ++i) { 123 + spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 120 124 rx_tmp_ptr = NULL; 121 125 if (tbl->rx_reorder_ptr[i]) { 122 126 rx_tmp_ptr = tbl->rx_reorder_ptr[i]; 123 127 tbl->rx_reorder_ptr[i] = NULL; 124 128 } 129 + spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 125 130 if (rx_tmp_ptr) 126 131 mwifiex_11n_dispatch_pkt(priv, rx_tmp_ptr); 127 132 } 128 133 134 + spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 129 135 /* 130 136 * We don't have a circular buffer, hence use rotation to simulate 131 137 * circular buffer ··· 140 138 } 141 139 142 140 tbl->start_win = start_win; 141 + spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 143 142 } 144 143 145 144 /* ··· 150 147 * The start window is adjusted automatically when a hole is located. 151 148 * Since the buffer is linear, the function uses rotation to simulate 152 149 * circular buffer. 153 - * 154 - * The caller must hold rx_reorder_tbl_lock spinlock. 
155 150 */ 156 151 static void 157 152 mwifiex_11n_scan_and_dispatch(struct mwifiex_private *priv, ··· 157 156 { 158 157 int i, j, xchg; 159 158 void *rx_tmp_ptr; 159 + unsigned long flags; 160 160 161 161 for (i = 0; i < tbl->win_size; ++i) { 162 - if (!tbl->rx_reorder_ptr[i]) 162 + spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 163 + if (!tbl->rx_reorder_ptr[i]) { 164 + spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, 165 + flags); 163 166 break; 167 + } 164 168 rx_tmp_ptr = tbl->rx_reorder_ptr[i]; 165 169 tbl->rx_reorder_ptr[i] = NULL; 170 + spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 166 171 mwifiex_11n_dispatch_pkt(priv, rx_tmp_ptr); 167 172 } 168 173 174 + spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 169 175 /* 170 176 * We don't have a circular buffer, hence use rotation to simulate 171 177 * circular buffer ··· 185 177 } 186 178 } 187 179 tbl->start_win = (tbl->start_win + i) & (MAX_TID_VALUE - 1); 180 + spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 188 181 } 189 182 190 183 /* ··· 193 184 * 194 185 * The function stops the associated timer and dispatches all the 195 186 * pending packets in the Rx reorder table before deletion. 196 - * 197 - * The caller must hold rx_reorder_tbl_lock spinlock. 198 187 */ 199 188 static void 200 189 mwifiex_del_rx_reorder_entry(struct mwifiex_private *priv, ··· 218 211 219 212 del_timer_sync(&tbl->timer_context.timer); 220 213 tbl->timer_context.timer_is_set = false; 214 + 215 + spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 221 216 list_del(&tbl->list); 217 + spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 218 + 222 219 kfree(tbl->rx_reorder_ptr); 223 220 kfree(tbl); 224 221 ··· 235 224 /* 236 225 * This function returns the pointer to an entry in Rx reordering 237 226 * table which matches the given TA/TID pair. 238 - * 239 - * The caller must hold rx_reorder_tbl_lock spinlock. 
240 227 */ 241 228 struct mwifiex_rx_reorder_tbl * 242 229 mwifiex_11n_get_rx_reorder_tbl(struct mwifiex_private *priv, int tid, u8 *ta) 243 230 { 244 231 struct mwifiex_rx_reorder_tbl *tbl; 232 + unsigned long flags; 245 233 246 - list_for_each_entry(tbl, &priv->rx_reorder_tbl_ptr, list) 247 - if (!memcmp(tbl->ta, ta, ETH_ALEN) && tbl->tid == tid) 234 + spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 235 + list_for_each_entry(tbl, &priv->rx_reorder_tbl_ptr, list) { 236 + if (!memcmp(tbl->ta, ta, ETH_ALEN) && tbl->tid == tid) { 237 + spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, 238 + flags); 248 239 return tbl; 240 + } 241 + } 242 + spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 249 243 250 244 return NULL; 251 245 } ··· 267 251 return; 268 252 269 253 spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 270 - list_for_each_entry_safe(tbl, tmp, &priv->rx_reorder_tbl_ptr, list) 271 - if (!memcmp(tbl->ta, ta, ETH_ALEN)) 254 + list_for_each_entry_safe(tbl, tmp, &priv->rx_reorder_tbl_ptr, list) { 255 + if (!memcmp(tbl->ta, ta, ETH_ALEN)) { 256 + spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, 257 + flags); 272 258 mwifiex_del_rx_reorder_entry(priv, tbl); 259 + spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 260 + } 261 + } 273 262 spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 274 263 275 264 return; ··· 283 262 /* 284 263 * This function finds the last sequence number used in the packets 285 264 * buffered in Rx reordering table. 286 - * 287 - * The caller must hold rx_reorder_tbl_lock spinlock. 
288 265 */ 289 266 static int 290 267 mwifiex_11n_find_last_seq_num(struct reorder_tmr_cnxt *ctx) 291 268 { 292 269 struct mwifiex_rx_reorder_tbl *rx_reorder_tbl_ptr = ctx->ptr; 270 + struct mwifiex_private *priv = ctx->priv; 271 + unsigned long flags; 293 272 int i; 294 273 295 - for (i = rx_reorder_tbl_ptr->win_size - 1; i >= 0; --i) 296 - if (rx_reorder_tbl_ptr->rx_reorder_ptr[i]) 274 + spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 275 + for (i = rx_reorder_tbl_ptr->win_size - 1; i >= 0; --i) { 276 + if (rx_reorder_tbl_ptr->rx_reorder_ptr[i]) { 277 + spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, 278 + flags); 297 279 return i; 280 + } 281 + } 282 + spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 298 283 299 284 return -1; 300 285 } ··· 318 291 struct reorder_tmr_cnxt *ctx = 319 292 from_timer(ctx, t, timer); 320 293 int start_win, seq_num; 321 - unsigned long flags; 322 294 323 295 ctx->timer_is_set = false; 324 - spin_lock_irqsave(&ctx->priv->rx_reorder_tbl_lock, flags); 325 296 seq_num = mwifiex_11n_find_last_seq_num(ctx); 326 297 327 - if (seq_num < 0) { 328 - spin_unlock_irqrestore(&ctx->priv->rx_reorder_tbl_lock, flags); 298 + if (seq_num < 0) 329 299 return; 330 - } 331 300 332 301 mwifiex_dbg(ctx->priv->adapter, INFO, "info: flush data %d\n", seq_num); 333 302 start_win = (ctx->ptr->start_win + seq_num + 1) & (MAX_TID_VALUE - 1); 334 303 mwifiex_11n_dispatch_pkt_until_start_win(ctx->priv, ctx->ptr, 335 304 start_win); 336 - spin_unlock_irqrestore(&ctx->priv->rx_reorder_tbl_lock, flags); 337 305 } 338 306 339 307 /* ··· 355 333 * If we get a TID, ta pair which is already present dispatch all the 356 334 * the packets and move the window size until the ssn 357 335 */ 358 - spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 359 336 tbl = mwifiex_11n_get_rx_reorder_tbl(priv, tid, ta); 360 337 if (tbl) { 361 338 mwifiex_11n_dispatch_pkt_until_start_win(priv, tbl, seq_num); 362 - spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 363 339 return; 364 340 } 365 - spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 366 341 /* if !tbl then create one */ 367 342 new_node = kzalloc(sizeof(struct mwifiex_rx_reorder_tbl), GFP_KERNEL); 368 343 if (!new_node) ··· 570 551 int prev_start_win, start_win, end_win, win_size; 571 552 u16 pkt_index; 572 553 bool init_window_shift = false; 573 - unsigned long flags; 574 554 int ret = 0; 575 555 576 - spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 577 556 tbl = mwifiex_11n_get_rx_reorder_tbl(priv, tid, ta); 578 557 if (!tbl) { 579 - spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 580 558 if (pkt_type != PKT_TYPE_BAR) 581 559 mwifiex_11n_dispatch_pkt(priv, payload); 582 560 return ret; 583 561 } 584 562 585 563 if ((pkt_type == PKT_TYPE_AMSDU) && !tbl->amsdu) { 586 - spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 587 564 mwifiex_11n_dispatch_pkt(priv, payload); 588 565 return ret; 589 566 } ··· 666 651 if (!tbl->timer_context.timer_is_set || 667 652 prev_start_win != tbl->start_win) 668 653 mwifiex_11n_rxreorder_timer_restart(tbl); 669 - 670 - spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 671 654 return ret; 672 655 } ··· 694 681 peer_mac, tid, initiator); 695 682 696 683 if (cleanup_rx_reorder_tbl) { 697 - spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 698 684 tbl = mwifiex_11n_get_rx_reorder_tbl(priv, tid, 699 685 peer_mac); 700 686 if (!tbl) { 701 - spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, 702 - flags); 703 687 mwifiex_dbg(priv->adapter, EVENT, 704 688 "event: TID, TA not found in table\n"); 705 689 return; 706 690 } 707 691 mwifiex_del_rx_reorder_entry(priv, tbl); 708 692 } else { 709 693 ptx_tbl = mwifiex_get_ba_tbl(priv, tid, peer_mac); 710 694 if (!ptx_tbl) { ··· 735 726 int tid, win_size; 736 727 struct mwifiex_rx_reorder_tbl *tbl; 737 728 uint16_t block_ack_param_set; 738 - unsigned long flags; 739 729 740
730 block_ack_param_set = le16_to_cpu(add_ba_rsp->block_ack_param_set); 741 731 ··· 748 740 mwifiex_dbg(priv->adapter, ERROR, "ADDBA RSP: failed %pM tid=%d)\n", 749 741 add_ba_rsp->peer_mac_addr, tid); 750 742 751 - spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 752 743 tbl = mwifiex_11n_get_rx_reorder_tbl(priv, tid, 753 744 add_ba_rsp->peer_mac_addr); 754 745 if (tbl) 755 746 mwifiex_del_rx_reorder_entry(priv, tbl); 756 747 757 - spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 758 748 return 0; 759 749 } 760 750 761 751 win_size = (block_ack_param_set & IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK) 762 752 >> BLOCKACKPARAM_WINSIZE_POS; 763 753 764 - spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 765 754 tbl = mwifiex_11n_get_rx_reorder_tbl(priv, tid, 766 755 add_ba_rsp->peer_mac_addr); 767 756 if (tbl) { ··· 769 764 else 770 765 tbl->amsdu = false; 771 766 } 772 - spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 773 767 774 768 mwifiex_dbg(priv->adapter, CMD, 775 769 "cmd: ADDBA RSP: %pM tid=%d ssn=%d win_size=%d\n", ··· 808 804 809 805 spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 810 806 list_for_each_entry_safe(del_tbl_ptr, tmp_node, 811 - &priv->rx_reorder_tbl_ptr, list) 807 + &priv->rx_reorder_tbl_ptr, list) { 808 + spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 812 809 mwifiex_del_rx_reorder_entry(priv, del_tbl_ptr); 810 + spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 811 + } 813 812 INIT_LIST_HEAD(&priv->rx_reorder_tbl_ptr); 814 813 spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 815 814 ··· 936 929 int tlv_buf_left = len; 937 930 int ret; 938 931 u8 *tmp; 939 - unsigned long flags; 940 932 941 933 mwifiex_dbg_dump(priv->adapter, EVT_D, "RXBA_SYNC event:", 942 934 event_buf, len); ··· 955 949 tlv_rxba->mac, tlv_rxba->tid, tlv_seq_num, 956 950 tlv_bitmap_len); 957 951 958 - spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 959 952 rx_reor_tbl_ptr = 960 953 
mwifiex_11n_get_rx_reorder_tbl(priv, tlv_rxba->tid, 961 954 tlv_rxba->mac); 962 955 if (!rx_reor_tbl_ptr) { 963 - spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, 964 - flags); 965 956 mwifiex_dbg(priv->adapter, ERROR, 966 957 "Can not find rx_reorder_tbl!"); 967 958 return; 968 959 } 969 - spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 970 960 971 961 for (i = 0; i < tlv_bitmap_len; i++) { 972 962 for (j = 0 ; j < 8; j++) {
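The mwifiex hunks above move the `rx_reorder_tbl_lock` acquisition into `mwifiex_11n_get_rx_reorder_tbl()` itself, so the helper releases the lock on every exit path and its callers stay lock-free. A minimal userspace sketch of that pattern, with a toy bookkeeping lock and list standing in for the kernel spinlock and `list_for_each_entry` (all names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-ins for the kernel's spinlock and reorder-table list. */
struct toy_lock { int held; int acquisitions; };

static void toy_lock_acquire(struct toy_lock *l)
{
	assert(!l->held);
	l->held = 1;
	l->acquisitions++;
}

static void toy_lock_release(struct toy_lock *l)
{
	assert(l->held);
	l->held = 0;
}

struct reorder_tbl {
	int tid;
	unsigned char ta[6];
	struct reorder_tbl *next;
};

struct toy_priv {
	struct toy_lock lock;
	struct reorder_tbl *tbl_list;
};

/*
 * Like the patched mwifiex_11n_get_rx_reorder_tbl(): the lookup takes the
 * lock itself and drops it on both the hit and miss paths, so callers no
 * longer wrap the call in lock/unlock.
 */
static struct reorder_tbl *
get_reorder_tbl(struct toy_priv *priv, int tid, const unsigned char *ta)
{
	struct reorder_tbl *tbl;

	toy_lock_acquire(&priv->lock);
	for (tbl = priv->tbl_list; tbl; tbl = tbl->next) {
		if (!memcmp(tbl->ta, ta, 6) && tbl->tid == tid) {
			toy_lock_release(&priv->lock);
			return tbl;
		}
	}
	toy_lock_release(&priv->lock);
	return NULL;
}

/* Exercise one hit and one miss; the lock must end up released and balanced. */
static int reorder_demo(void)
{
	struct toy_priv priv = { { 0, 0 }, NULL };
	struct reorder_tbl tbl = { 3, { 1, 2, 3, 4, 5, 6 }, NULL };
	unsigned char ta[6] = { 1, 2, 3, 4, 5, 6 };

	priv.tbl_list = &tbl;
	if (get_reorder_tbl(&priv, 3, ta) != &tbl)
		return 1;
	if (get_reorder_tbl(&priv, 9, ta) != NULL)
		return 2;
	if (priv.lock.held || priv.lock.acquisitions != 2)
		return 3;
	return 0;
}
```

The cost of the self-locking helper, visible in the delete path above, is that callers which then modify the list must drop and retake the lock around the helper's result.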
-3
drivers/net/wireless/marvell/mwifiex/uap_txrx.c
··· 421 421 spin_unlock_irqrestore(&priv->sta_list_spinlock, flags); 422 422 } 423 423 424 - spin_lock_irqsave(&priv->rx_reorder_tbl_lock, flags); 425 424 if (!priv->ap_11n_enabled || 426 425 (!mwifiex_11n_get_rx_reorder_tbl(priv, uap_rx_pd->priority, ta) && 427 426 (le16_to_cpu(uap_rx_pd->rx_pkt_type) != PKT_TYPE_AMSDU))) { 428 427 ret = mwifiex_handle_uap_rx_forward(priv, skb); 429 - spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 430 428 return ret; 431 429 } 432 - spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 433 430 434 431 /* Reorder and send to kernel */ 435 432 pkt_type = (u8)le16_to_cpu(uap_rx_pd->rx_pkt_type);
+6 -1
drivers/net/wireless/mediatek/mt76/tx.c
··· 400 400 401 401 for (i = 0; i < ARRAY_SIZE(sta->txq); i++) { 402 402 struct ieee80211_txq *txq = sta->txq[i]; 403 - struct mt76_txq *mtxq = (struct mt76_txq *) txq->drv_priv; 403 + struct mt76_txq *mtxq; 404 + 405 + if (!txq) 406 + continue; 407 + 408 + mtxq = (struct mt76_txq *)txq->drv_priv; 404 409 405 410 spin_lock_bh(&mtxq->hwq->lock); 406 411 mtxq->send_bar = mtxq->aggr && send_bar;
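The mt76 fix above guards against NULL entries in `sta->txq[]` before dereferencing `drv_priv`. A small sketch of the same skip-before-dereference loop over a pointer array (toy types, not the real mt76 structures):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for sta->txq[]: some slots can legitimately be NULL. */
struct toy_txq { int queued; };

static int flush_txqs(struct toy_txq *const txqs[], int n)
{
	int i, flushed = 0;

	for (i = 0; i < n; i++) {
		struct toy_txq *q = txqs[i];

		if (!q)		/* the check the patch adds */
			continue;
		q->queued = 0;
		flushed++;
	}
	return flushed;
}

static int txq_demo(void)
{
	struct toy_txq a = { 3 }, b = { 5 };
	struct toy_txq *const v[4] = { &a, NULL, &b, NULL };

	if (flush_txqs(v, 4) != 2)
		return 1;
	return (a.queued == 0 && b.queued == 0) ? 0 : 2;
}
```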
+1
drivers/net/wireless/realtek/rtlwifi/base.c
··· 2289 2289 2290 2290 if (rtl_c2h_fast_cmd(hw, skb)) { 2291 2291 rtl_c2h_content_parsing(hw, skb); 2292 + kfree_skb(skb); 2292 2293 return; 2293 2294 } 2294 2295
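The rtlwifi hunk plugs an skb leak: the early-return fast path consumed the buffer without freeing it. The general rule is that a function which takes ownership must release on every return path. A userspace sketch with a leak counter in place of skb accounting (names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

static int live_bufs;	/* outstanding allocations, for the demo only */

struct buf { int len; };

static struct buf *buf_alloc(void)
{
	live_bufs++;
	return malloc(sizeof(struct buf));
}

static void buf_free(struct buf *b)
{
	live_bufs--;
	free(b);
}

/*
 * A consumer that owns 'b' must release it on *every* return path;
 * the rtlwifi bug was an early return that skipped kfree_skb().
 */
static void consume(struct buf *b, int fast_path)
{
	if (fast_path) {
		/* handle inline ... */
		buf_free(b);	/* the free the patch adds */
		return;
	}
	/* ... slow path also releases before returning */
	buf_free(b);
}

static int leak_demo(void)
{
	consume(buf_alloc(), 1);
	consume(buf_alloc(), 0);
	return live_bufs;	/* 0 iff nothing leaked */
}
```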
+1 -1
drivers/net/xen-netfront.c
··· 905 905 if (skb_shinfo(skb)->nr_frags == MAX_SKB_FRAGS) { 906 906 unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to; 907 907 908 - BUG_ON(pull_to <= skb_headlen(skb)); 908 + BUG_ON(pull_to < skb_headlen(skb)); 909 909 __pskb_pull_tail(skb, pull_to - skb_headlen(skb)); 910 910 } 911 911 if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
+1 -1
drivers/pci/pcie/aer.c
··· 1064 1064 .regs = aer_regs, 1065 1065 }; 1066 1066 1067 - if (kfifo_in_spinlocked(&aer_recover_ring, &entry, sizeof(entry), 1067 + if (kfifo_in_spinlocked(&aer_recover_ring, &entry, 1, 1068 1068 &aer_recover_ring_lock)) 1069 1069 schedule_work(&aer_recover_work); 1070 1070 else
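The aer.c fix hinges on kfifo semantics for typed fifos: `kfifo_in_spinlocked()` on a fifo declared over `struct aer_recover_entry` counts *elements*, so passing `sizeof(entry)` asked to queue many records instead of one. A toy typed ring with the same element-count contract (toy names, fixed capacity):

```c
#include <assert.h>

struct aer_entry { unsigned long addr; unsigned int regs; };

#define RING_CAP 4

/*
 * 'n' counts elements, like kfifo_in() on a typed kfifo. Passing a byte
 * count such as sizeof(struct aer_entry) would request far more records
 * than intended.
 */
struct ring {
	struct aer_entry slot[RING_CAP];
	unsigned int len;
};

static unsigned int ring_in(struct ring *r, const struct aer_entry *e,
			    unsigned int n)
{
	unsigned int copied = 0;

	while (copied < n && r->len < RING_CAP)
		r->slot[r->len++] = e[copied++];
	return copied;	/* elements actually queued */
}

static int ring_demo(void)
{
	struct ring r = { .len = 0 };
	struct aer_entry batch[6] = { { 0x1000, 1 }, { 0x2000, 2 } };

	if (ring_in(&r, batch, 1) != 1)	/* one record, not one sizeof */
		return 1;
	if (ring_in(&r, batch, 6) != RING_CAP - 1)	/* clamped at capacity */
		return 2;
	return 0;
}
```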
+2 -1
drivers/pinctrl/meson/pinctrl-meson.c
··· 191 191 case PIN_CONFIG_BIAS_DISABLE: 192 192 dev_dbg(pc->dev, "pin %u: disable bias\n", pin); 193 193 194 - meson_calc_reg_and_bit(bank, pin, REG_PULL, &reg, &bit); 194 + meson_calc_reg_and_bit(bank, pin, REG_PULLEN, &reg, 195 + &bit); 195 196 ret = regmap_update_bits(pc->reg_pullen, reg, 196 197 BIT(bit), 0); 197 198 if (ret)
+15 -13
drivers/pinctrl/qcom/pinctrl-sdm660.c
··· 33 33 } 34 34 35 35 36 - #define PINGROUP(id, base, f1, f2, f3, f4, f5, f6, f7, f8, f9) \ 36 + #define PINGROUP(id, _tile, f1, f2, f3, f4, f5, f6, f7, f8, f9) \ 37 37 { \ 38 38 .name = "gpio" #id, \ 39 39 .pins = gpio##id##_pins, \ ··· 51 51 msm_mux_##f9 \ 52 52 }, \ 53 53 .nfuncs = 10, \ 54 - .ctl_reg = base + REG_SIZE * id, \ 55 - .io_reg = base + 0x4 + REG_SIZE * id, \ 56 - .intr_cfg_reg = base + 0x8 + REG_SIZE * id, \ 57 - .intr_status_reg = base + 0xc + REG_SIZE * id, \ 58 - .intr_target_reg = base + 0x8 + REG_SIZE * id, \ 54 + .ctl_reg = REG_SIZE * id, \ 55 + .io_reg = 0x4 + REG_SIZE * id, \ 56 + .intr_cfg_reg = 0x8 + REG_SIZE * id, \ 57 + .intr_status_reg = 0xc + REG_SIZE * id, \ 58 + .intr_target_reg = 0x8 + REG_SIZE * id, \ 59 + .tile = _tile, \ 59 60 .mux_bit = 2, \ 60 61 .pull_bit = 0, \ 61 62 .drv_bit = 6, \ ··· 83 82 .intr_cfg_reg = 0, \ 84 83 .intr_status_reg = 0, \ 85 84 .intr_target_reg = 0, \ 85 + .tile = NORTH, \ 86 86 .mux_bit = -1, \ 87 87 .pull_bit = pull, \ 88 88 .drv_bit = drv, \ ··· 1399 1397 PINGROUP(111, SOUTH, _, _, _, _, _, _, _, _, _), 1400 1398 PINGROUP(112, SOUTH, _, _, _, _, _, _, _, _, _), 1401 1399 PINGROUP(113, SOUTH, _, _, _, _, _, _, _, _, _), 1402 - SDC_QDSD_PINGROUP(sdc1_clk, 0x99a000, 13, 6), 1403 - SDC_QDSD_PINGROUP(sdc1_cmd, 0x99a000, 11, 3), 1404 - SDC_QDSD_PINGROUP(sdc1_data, 0x99a000, 9, 0), 1405 - SDC_QDSD_PINGROUP(sdc2_clk, 0x99b000, 14, 6), 1406 - SDC_QDSD_PINGROUP(sdc2_cmd, 0x99b000, 11, 3), 1407 - SDC_QDSD_PINGROUP(sdc2_data, 0x99b000, 9, 0), 1408 - SDC_QDSD_PINGROUP(sdc1_rclk, 0x99a000, 15, 0), 1400 + SDC_QDSD_PINGROUP(sdc1_clk, 0x9a000, 13, 6), 1401 + SDC_QDSD_PINGROUP(sdc1_cmd, 0x9a000, 11, 3), 1402 + SDC_QDSD_PINGROUP(sdc1_data, 0x9a000, 9, 0), 1403 + SDC_QDSD_PINGROUP(sdc2_clk, 0x9b000, 14, 6), 1404 + SDC_QDSD_PINGROUP(sdc2_cmd, 0x9b000, 11, 3), 1405 + SDC_QDSD_PINGROUP(sdc2_data, 0x9b000, 9, 0), 1406 + SDC_QDSD_PINGROUP(sdc1_rclk, 0x9a000, 15, 0), 1409 1407 }; 1410 1408 1411 1409 static const struct 
msm_pinctrl_soc_data sdm660_pinctrl = {
+1 -1
drivers/pinctrl/sunxi/pinctrl-sun8i-a83t.c
··· 568 568 SUNXI_PIN(SUNXI_PINCTRL_PIN(H, 11), 569 569 SUNXI_FUNCTION(0x0, "gpio_in"), 570 570 SUNXI_FUNCTION(0x1, "gpio_out"), 571 - SUNXI_FUNCTION_IRQ_BANK(0x6, 2, 1)), /* PH_EINT11 */ 571 + SUNXI_FUNCTION_IRQ_BANK(0x6, 2, 11)), /* PH_EINT11 */ 572 572 }; 573 573 574 574 static const struct sunxi_pinctrl_desc sun8i_a83t_pinctrl_data = {
+1 -1
drivers/scsi/bnx2fc/bnx2fc_fcoe.c
··· 2364 2364 if (!interface) { 2365 2365 printk(KERN_ERR PFX "bnx2fc_interface_create failed\n"); 2366 2366 rc = -ENOMEM; 2367 - goto ifput_err; 2367 + goto netdev_err; 2368 2368 } 2369 2369 2370 2370 if (is_vlan_dev(netdev)) {
+2 -2
drivers/scsi/qla2xxx/qla_os.c
··· 4886 4886 fcport->d_id = e->u.new_sess.id; 4887 4887 fcport->flags |= FCF_FABRIC_DEVICE; 4888 4888 fcport->fw_login_state = DSC_LS_PLOGI_PEND; 4889 - if (e->u.new_sess.fc4_type & FS_FC4TYPE_FCP) 4889 + if (e->u.new_sess.fc4_type == FS_FC4TYPE_FCP) 4890 4890 fcport->fc4_type = FC4_TYPE_FCP_SCSI; 4891 4891 4892 - if (e->u.new_sess.fc4_type & FS_FC4TYPE_NVME) { 4892 + if (e->u.new_sess.fc4_type == FS_FC4TYPE_NVME) { 4893 4893 fcport->fc4_type = FC4_TYPE_OTHER; 4894 4894 fcport->fc4f_nvme = FC4_TYPE_NVME; 4895 4895 }
+1
drivers/staging/media/sunxi/cedrus/Kconfig
··· 3 3 depends on VIDEO_DEV && VIDEO_V4L2 && MEDIA_CONTROLLER 4 4 depends on HAS_DMA 5 5 depends on OF 6 + depends on MEDIA_CONTROLLER_REQUEST_API 6 7 select SUNXI_SRAM 7 8 select VIDEOBUF2_DMA_CONTIG 8 9 select V4L2_MEM2MEM_DEV
+2 -2
drivers/staging/media/sunxi/cedrus/cedrus_hw.c
··· 255 255 256 256 res = platform_get_resource(dev->pdev, IORESOURCE_MEM, 0); 257 257 dev->base = devm_ioremap_resource(dev->dev, res); 258 - if (!dev->base) { 258 + if (IS_ERR(dev->base)) { 259 259 v4l2_err(&dev->v4l2_dev, "Failed to map registers\n"); 260 260 261 - ret = -ENOMEM; 261 + ret = PTR_ERR(dev->base); 262 262 goto err_sram; 263 263 } 264 264
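The cedrus fix is the classic `devm_ioremap_resource()` pitfall: the function never returns NULL on failure, it returns an `ERR_PTR`, so the result must be tested with `IS_ERR()` and decoded with `PTR_ERR()`. A userspace sketch of the kernel's error-pointer encoding (errors live in the top 4095 values of the address space):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_ERRNO 4095

/* Userspace re-creation of the kernel's ERR_PTR scheme. */
static void *err_ptr(long error)
{
	return (void *)error;
}

static int is_err(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

static long ptr_err(const void *ptr)
{
	return (long)(intptr_t)ptr;
}

static int errptr_demo(void)
{
	int backing = 0;
	void *ok = &backing;
	void *bad = err_ptr(-12 /* -ENOMEM */);

	if (is_err(ok))
		return 1;
	if (!is_err(bad))	/* '!bad' would wrongly look like success */
		return 2;
	if (bad == NULL)	/* ...because the error pointer is not NULL */
		return 3;
	return ptr_err(bad) == -12 ? 0 : 4;
}
```

This is exactly why the original `if (!dev->base)` never fired: a failed mapping is a non-NULL pointer.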
+2 -2
drivers/thermal/hisi_thermal.c
··· 424 424 struct platform_device *pdev = data->pdev; 425 425 struct device *dev = &pdev->dev; 426 426 427 - data->nr_sensors = 2; 427 + data->nr_sensors = 1; 428 428 429 429 data->sensor = devm_kzalloc(dev, sizeof(*data->sensor) * 430 430 data->nr_sensors, GFP_KERNEL); ··· 589 589 return ret; 590 590 } 591 591 592 - ret = platform_get_irq_byname(pdev, sensor->irq_name); 592 + ret = platform_get_irq(pdev, 0); 593 593 if (ret < 0) 594 594 return ret; 595 595
+6 -6
drivers/thermal/st/stm_thermal.c
··· 241 241 sensor->t0 = TS1_T0_VAL1; 242 242 243 243 /* Retrieve fmt0 and put it on Hz */ 244 - sensor->fmt0 = ADJUST * readl_relaxed(sensor->base + DTS_T0VALR1_OFFSET) 245 - & TS1_FMT0_MASK; 244 + sensor->fmt0 = ADJUST * (readl_relaxed(sensor->base + 245 + DTS_T0VALR1_OFFSET) & TS1_FMT0_MASK); 246 246 247 247 /* Retrieve ramp coefficient */ 248 248 sensor->ramp_coeff = readl_relaxed(sensor->base + DTS_RAMPVALR_OFFSET) & ··· 532 532 if (ret) 533 533 return ret; 534 534 535 + ret = stm_thermal_read_factory_settings(sensor); 536 + if (ret) 537 + goto thermal_unprepare; 538 + 535 539 ret = stm_thermal_calibration(sensor); 536 540 if (ret) 537 541 goto thermal_unprepare; ··· 639 635 640 636 /* Populate sensor */ 641 637 sensor->base = base; 642 - 643 - ret = stm_thermal_read_factory_settings(sensor); 644 - if (ret) 645 - return ret; 646 638 647 639 sensor->clk = devm_clk_get(&pdev->dev, "pclk"); 648 640 if (IS_ERR(sensor->clk)) {
+2 -1
drivers/usb/host/xhci-hub.c
··· 1551 1551 portsc_buf[port_index] = 0; 1552 1552 1553 1553 /* Bail out if a USB3 port has a new device in link training */ 1554 - if ((t1 & PORT_PLS_MASK) == XDEV_POLLING) { 1554 + if ((hcd->speed >= HCD_USB3) && 1555 + (t1 & PORT_PLS_MASK) == XDEV_POLLING) { 1555 1556 bus_state->bus_suspended = 0; 1556 1557 spin_unlock_irqrestore(&xhci->lock, flags); 1557 1558 xhci_dbg(xhci, "Bus suspend bailout, port in polling\n");
+2 -2
drivers/usb/host/xhci.h
··· 1854 1854 struct xhci_hub usb3_rhub; 1855 1855 /* support xHCI 1.0 spec USB2 hardware LPM */ 1856 1856 unsigned hw_lpm_support:1; 1857 + /* Broken Suspend flag for SNPS Suspend resume issue */ 1858 + unsigned broken_suspend:1; 1857 1859 /* cached usb2 extened protocol capabilites */ 1858 1860 u32 *ext_caps; 1859 1861 unsigned int num_ext_caps; ··· 1873 1871 void *dbc; 1874 1872 /* platform-specific data -- must come last */ 1875 1873 unsigned long priv[0] __aligned(sizeof(s64)); 1876 - /* Broken Suspend flag for SNPS Suspend resume issue */ 1877 - u8 broken_suspend; 1878 1874 }; 1879 1875 1880 1876 /* Platform specific overrides to generic XHCI hc_driver ops */
+15 -1
drivers/usb/serial/option.c
··· 1164 1164 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1213, 0xff) }, 1165 1165 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920A4_1214), 1166 1166 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) }, 1167 + { USB_DEVICE(TELIT_VENDOR_ID, 0x1900), /* Telit LN940 (QMI) */ 1168 + .driver_info = NCTRL(0) | RSVD(1) }, 1169 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1901, 0xff), /* Telit LN940 (MBIM) */ 1170 + .driver_info = NCTRL(0) }, 1167 1171 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */ 1168 1172 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0002, 0xff, 0xff, 0xff), 1169 1173 .driver_info = RSVD(1) }, ··· 1332 1328 .driver_info = RSVD(4) }, 1333 1329 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0414, 0xff, 0xff, 0xff) }, 1334 1330 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0417, 0xff, 0xff, 0xff) }, 1331 + { USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x0602, 0xff) }, /* GosunCn ZTE WeLink ME3630 (MBIM mode) */ 1335 1332 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1008, 0xff, 0xff, 0xff), 1336 1333 .driver_info = RSVD(4) }, 1337 1334 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1010, 0xff, 0xff, 0xff), ··· 1536 1531 .driver_info = RSVD(2) }, 1537 1532 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1428, 0xff, 0xff, 0xff), /* Telewell TW-LTE 4G v2 */ 1538 1533 .driver_info = RSVD(2) }, 1534 + { USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x1476, 0xff) }, /* GosunCn ZTE WeLink ME3630 (ECM/NCM mode) */ 1539 1535 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1533, 0xff, 0xff, 0xff) }, 1540 1536 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1534, 0xff, 0xff, 0xff) }, 1541 1537 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1535, 0xff, 0xff, 0xff) }, ··· 1764 1758 { USB_DEVICE_AND_INTERFACE_INFO(ALINK_VENDOR_ID, ALINK_PRODUCT_3GU, 0xff, 0xff, 0xff) }, 1765 1759 { USB_DEVICE(ALINK_VENDOR_ID, SIMCOM_PRODUCT_SIM7100E), 1766 1760 .driver_info = 
RSVD(5) | RSVD(6) }, 1761 + { USB_DEVICE_INTERFACE_CLASS(0x1e0e, 0x9003, 0xff) }, /* Simcom SIM7500/SIM7600 MBIM mode */ 1767 1762 { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X060S_X200), 1768 1763 .driver_info = NCTRL(0) | NCTRL(1) | RSVD(4) }, 1769 1764 { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X220_X500D), ··· 1947 1940 { USB_DEVICE_AND_INTERFACE_INFO(WETELECOM_VENDOR_ID, WETELECOM_PRODUCT_WMD200, 0xff, 0xff, 0xff) }, 1948 1941 { USB_DEVICE_AND_INTERFACE_INFO(WETELECOM_VENDOR_ID, WETELECOM_PRODUCT_6802, 0xff, 0xff, 0xff) }, 1949 1942 { USB_DEVICE_AND_INTERFACE_INFO(WETELECOM_VENDOR_ID, WETELECOM_PRODUCT_WMD300, 0xff, 0xff, 0xff) }, 1950 - { USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0x421d, 0xff, 0xff, 0xff) }, /* HP lt2523 (Novatel E371) */ 1943 + { USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0x421d, 0xff, 0xff, 0xff) }, /* HP lt2523 (Novatel E371) */ 1944 + { USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0xa31d, 0xff, 0x06, 0x10) }, /* HP lt4132 (Huawei ME906s-158) */ 1945 + { USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0xa31d, 0xff, 0x06, 0x12) }, 1946 + { USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0xa31d, 0xff, 0x06, 0x13) }, 1947 + { USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0xa31d, 0xff, 0x06, 0x14) }, 1948 + { USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0xa31d, 0xff, 0x06, 0x1b) }, 1949 + { USB_DEVICE(0x1508, 0x1001), /* Fibocom NL668 */ 1950 + .driver_info = RSVD(4) | RSVD(5) | RSVD(6) }, 1951 1951 { } /* Terminating entry */ 1952 1952 }; 1953 1953 MODULE_DEVICE_TABLE(usb, option_ids);
+7 -1
drivers/vhost/net.c
··· 513 513 struct socket *sock; 514 514 struct vhost_virtqueue *vq = poll_rx ? tvq : rvq; 515 515 516 - mutex_lock_nested(&vq->mutex, poll_rx ? VHOST_NET_VQ_TX: VHOST_NET_VQ_RX); 516 + /* Try to hold the vq mutex of the paired virtqueue. We can't 517 + * use mutex_lock() here since we could not guarantee a 518 + * consistent lock ordering. 519 + */ 520 + if (!mutex_trylock(&vq->mutex)) 521 + return; 522 + 517 523 vhost_disable_notify(&net->dev, vq); 518 524 sock = rvq->private_data; 519 525
+19 -4
drivers/vhost/vhost.c
··· 295 295 { 296 296 int i; 297 297 298 - for (i = 0; i < d->nvqs; ++i) { 299 - mutex_lock(&d->vqs[i]->mutex); 298 + for (i = 0; i < d->nvqs; ++i) 300 299 __vhost_vq_meta_reset(d->vqs[i]); 301 - mutex_unlock(&d->vqs[i]->mutex); 302 - } 303 300 } 304 301 305 302 static void vhost_vq_reset(struct vhost_dev *dev, ··· 892 895 #define vhost_get_used(vq, x, ptr) \ 893 896 vhost_get_user(vq, x, ptr, VHOST_ADDR_USED) 894 897 898 + static void vhost_dev_lock_vqs(struct vhost_dev *d) 899 + { 900 + int i = 0; 901 + for (i = 0; i < d->nvqs; ++i) 902 + mutex_lock_nested(&d->vqs[i]->mutex, i); 903 + } 904 + 905 + static void vhost_dev_unlock_vqs(struct vhost_dev *d) 906 + { 907 + int i = 0; 908 + for (i = 0; i < d->nvqs; ++i) 909 + mutex_unlock(&d->vqs[i]->mutex); 910 + } 911 + 895 912 static int vhost_new_umem_range(struct vhost_umem *umem, 896 913 u64 start, u64 size, u64 end, 897 914 u64 userspace_addr, int perm) ··· 987 976 int ret = 0; 988 977 989 978 mutex_lock(&dev->mutex); 979 + vhost_dev_lock_vqs(dev); 990 980 switch (msg->type) { 991 981 case VHOST_IOTLB_UPDATE: 992 982 if (!dev->iotlb) { ··· 1021 1009 break; 1022 1010 } 1023 1011 1012 + vhost_dev_unlock_vqs(dev); 1024 1013 mutex_unlock(&dev->mutex); 1025 1014 1026 1015 return ret; ··· 2233 2220 return -EFAULT; 2234 2221 } 2235 2222 if (unlikely(vq->log_used)) { 2223 + /* Make sure used idx is seen before log. */ 2224 + smp_wmb(); 2236 2225 /* Log used index update. */ 2237 2226 log_write(vq->log_base, 2238 2227 vq->log_addr + offsetof(struct vring_used, idx),
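`vhost_dev_lock_vqs()` above takes every virtqueue mutex in ascending index order (passing the index as a lockdep subclass), which is the standard way to make multi-lock paths deadlock-free: if everyone acquires in the same order, no cycle can form. A toy sketch with bookkeeping "mutexes" instead of real ones (lockdep subclasses have no userspace analogue here):

```c
#include <assert.h>

#define NVQS 3

struct toy_mutex { int held; };

struct toy_dev {
	struct toy_mutex vq_lock[NVQS];
	int max_held;
};

/* Always lock index 0..NVQS-1 in order, mirroring vhost_dev_lock_vqs(). */
static void dev_lock_vqs(struct toy_dev *d)
{
	int i, held = 0;

	for (i = 0; i < NVQS; i++) {
		assert(!d->vq_lock[i].held);
		d->vq_lock[i].held = 1;
		held++;
	}
	if (held > d->max_held)
		d->max_held = held;
}

static void dev_unlock_vqs(struct toy_dev *d)
{
	int i;

	for (i = 0; i < NVQS; i++) {
		assert(d->vq_lock[i].held);
		d->vq_lock[i].held = 0;
	}
}

static int vq_lock_demo(void)
{
	struct toy_dev d = { 0 };
	int i;

	dev_lock_vqs(&d);
	dev_unlock_vqs(&d);
	for (i = 0; i < NVQS; i++)
		if (d.vq_lock[i].held)
			return 1;
	return d.max_held == NVQS ? 0 : 2;
}
```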
+35 -6
drivers/video/backlight/pwm_bl.c
··· 562 562 goto err_alloc; 563 563 } 564 564 565 - if (!data->levels) { 565 + if (data->levels) { 566 + /* 567 + * For the DT case, only when brightness levels is defined 568 + * data->levels is filled. For the non-DT case, data->levels 569 + * can come from platform data, however is not usual. 570 + */ 571 + for (i = 0; i <= data->max_brightness; i++) { 572 + if (data->levels[i] > pb->scale) 573 + pb->scale = data->levels[i]; 574 + 575 + pb->levels = data->levels; 576 + } 577 + } else if (!data->max_brightness) { 578 + /* 579 + * If no brightness levels are provided and max_brightness is 580 + * not set, use the default brightness table. For the DT case, 581 + * max_brightness is set to 0 when brightness levels is not 582 + * specified. For the non-DT case, max_brightness is usually 583 + * set to some value. 584 + */ 585 + 586 + /* Get the PWM period (in nanoseconds) */ 587 + pwm_get_state(pb->pwm, &state); 588 + 566 589 ret = pwm_backlight_brightness_default(&pdev->dev, data, 567 590 state.period); 568 591 if (ret < 0) { ··· 593 570 "failed to setup default brightness table\n"); 594 571 goto err_alloc; 595 572 } 596 - } 597 573 598 - for (i = 0; i <= data->max_brightness; i++) { 599 - if (data->levels[i] > pb->scale) 600 - pb->scale = data->levels[i]; 574 + for (i = 0; i <= data->max_brightness; i++) { 575 + if (data->levels[i] > pb->scale) 576 + pb->scale = data->levels[i]; 601 577 602 - pb->levels = data->levels; 578 + pb->levels = data->levels; 579 + } 580 + } else { 581 + /* 582 + * That only happens for the non-DT case, where platform data 583 + * sets the max_brightness value. 584 + */ 585 + pb->scale = data->max_brightness; 603 586 } 604 587 605 588 pb->lth_brightness = data->lth_brightness * (state.period / pb->scale);
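The pwm_bl restructure keeps one invariant in all three branches: `pb->scale` ends up as the largest brightness value, so a requested level maps to a PWM duty of `level * period / scale` and the top level gives a full-scale duty cycle. A sketch of that computation (standalone helpers, not the driver's actual functions):

```c
#include <assert.h>

/* scale = max entry of the brightness table, as in the loop above. */
static unsigned int backlight_scale(const unsigned int *levels,
				    int max_brightness)
{
	unsigned int scale = 0;
	int i;

	for (i = 0; i <= max_brightness; i++)
		if (levels[i] > scale)
			scale = levels[i];
	return scale;
}

static unsigned long duty_ns(unsigned int level, unsigned long period_ns,
			     unsigned int scale)
{
	return (unsigned long)level * period_ns / scale;
}

static int scale_demo(void)
{
	unsigned int levels[] = { 0, 4, 16, 64, 255 };
	unsigned int scale = backlight_scale(levels, 4);

	if (scale != 255)
		return 1;
	if (duty_ns(255, 1000000, scale) != 1000000)	/* top => full period */
		return 2;
	return duty_ns(0, 1000000, scale) == 0 ? 0 : 3;
}
```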
+2
fs/aio.c
··· 45 45 46 46 #include <asm/kmap_types.h> 47 47 #include <linux/uaccess.h> 48 + #include <linux/nospec.h> 48 49 49 50 #include "internal.h" 50 51 ··· 1039 1038 if (!table || id >= table->nr) 1040 1039 goto out; 1041 1040 1041 + id = array_index_nospec(id, table->nr); 1042 1042 ctx = rcu_dereference(table->table[id]); 1043 1043 if (ctx && ctx->user_id == ctx_id) { 1044 1044 if (percpu_ref_tryget_live(&ctx->users))
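The aio change is a Spectre-v1 hardening: after the bounds check, `array_index_nospec()` clamps `id` without a branch the CPU could speculate past. A userspace version of the kernel's generic-C mask (assumes arithmetic right shift of negative values, as the kernel's fallback does; real architectures use inline asm instead):

```c
#include <assert.h>
#include <limits.h>

/*
 * Branchless clamp in the spirit of array_index_nospec(): returns 'index'
 * when index < size, else 0. Valid for index and size below LONG_MAX.
 */
static unsigned long index_nospec(unsigned long index, unsigned long size)
{
	unsigned long mask =
		(unsigned long)(~(long)(index | (size - 1 - index)) >>
				(sizeof(long) * CHAR_BIT - 1));

	return index & mask;
}
```

In-bounds indices pass through unchanged; anything at or past `size` collapses to 0, so even a mis-speculated load stays inside the array.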
+2 -2
fs/ceph/super.c
··· 563 563 seq_puts(m, ",noacl"); 564 564 #endif 565 565 566 - if (fsopt->flags & CEPH_MOUNT_OPT_NOCOPYFROM) 567 - seq_puts(m, ",nocopyfrom"); 566 + if ((fsopt->flags & CEPH_MOUNT_OPT_NOCOPYFROM) == 0) 567 + seq_puts(m, ",copyfrom"); 568 568 569 569 if (fsopt->mds_namespace) 570 570 seq_show_option(m, "mds_namespace", fsopt->mds_namespace);
+3 -1
fs/ceph/super.h
··· 42 42 #define CEPH_MOUNT_OPT_NOQUOTADF (1<<13) /* no root dir quota in statfs */ 43 43 #define CEPH_MOUNT_OPT_NOCOPYFROM (1<<14) /* don't use RADOS 'copy-from' op */ 44 44 45 - #define CEPH_MOUNT_OPT_DEFAULT CEPH_MOUNT_OPT_DCACHE 45 + #define CEPH_MOUNT_OPT_DEFAULT \ 46 + (CEPH_MOUNT_OPT_DCACHE | \ 47 + CEPH_MOUNT_OPT_NOCOPYFROM) 46 48 47 49 #define ceph_set_mount_opt(fsc, opt) \ 48 50 (fsc)->mount_options->flags |= CEPH_MOUNT_OPT_##opt;
+23 -3
fs/fuse/dir.c
··· 1119 1119 if (fc->default_permissions || 1120 1120 ((mask & MAY_EXEC) && S_ISREG(inode->i_mode))) { 1121 1121 struct fuse_inode *fi = get_fuse_inode(inode); 1122 + u32 perm_mask = STATX_MODE | STATX_UID | STATX_GID; 1122 1123 1123 - if (time_before64(fi->i_time, get_jiffies_64())) { 1124 + if (perm_mask & READ_ONCE(fi->inval_mask) || 1125 + time_before64(fi->i_time, get_jiffies_64())) { 1124 1126 refreshed = true; 1125 1127 1126 1128 err = fuse_perm_getattr(inode, mask); ··· 1243 1241 1244 1242 static int fuse_dir_release(struct inode *inode, struct file *file) 1245 1243 { 1246 - fuse_release_common(file, FUSE_RELEASEDIR); 1244 + fuse_release_common(file, true); 1247 1245 1248 1246 return 0; 1249 1247 } ··· 1251 1249 static int fuse_dir_fsync(struct file *file, loff_t start, loff_t end, 1252 1250 int datasync) 1253 1251 { 1254 - return fuse_fsync_common(file, start, end, datasync, 1); 1252 + struct inode *inode = file->f_mapping->host; 1253 + struct fuse_conn *fc = get_fuse_conn(inode); 1254 + int err; 1255 + 1256 + if (is_bad_inode(inode)) 1257 + return -EIO; 1258 + 1259 + if (fc->no_fsyncdir) 1260 + return 0; 1261 + 1262 + inode_lock(inode); 1263 + err = fuse_fsync_common(file, start, end, datasync, FUSE_FSYNCDIR); 1264 + if (err == -ENOSYS) { 1265 + fc->no_fsyncdir = 1; 1266 + err = 0; 1267 + } 1268 + inode_unlock(inode); 1269 + 1270 + return err; 1255 1271 } 1256 1272 1257 1273 static long fuse_dir_ioctl(struct file *file, unsigned int cmd,
+33 -31
fs/fuse/file.c
··· 89 89 iput(req->misc.release.inode); 90 90 } 91 91 92 - static void fuse_file_put(struct fuse_file *ff, bool sync) 92 + static void fuse_file_put(struct fuse_file *ff, bool sync, bool isdir) 93 93 { 94 94 if (refcount_dec_and_test(&ff->count)) { 95 95 struct fuse_req *req = ff->reserved_req; 96 96 97 - if (ff->fc->no_open) { 97 + if (ff->fc->no_open && !isdir) { 98 98 /* 99 99 * Drop the release request when client does not 100 100 * implement 'open' ··· 247 247 req->in.args[0].value = inarg; 248 248 } 249 249 250 - void fuse_release_common(struct file *file, int opcode) 250 + void fuse_release_common(struct file *file, bool isdir) 251 251 { 252 252 struct fuse_file *ff = file->private_data; 253 253 struct fuse_req *req = ff->reserved_req; 254 + int opcode = isdir ? FUSE_RELEASEDIR : FUSE_RELEASE; 254 255 255 256 fuse_prepare_release(ff, file->f_flags, opcode); 256 257 ··· 273 272 * synchronous RELEASE is allowed (and desirable) in this case 274 273 * because the server can be trusted not to screw up. 
275 274 */ 276 - fuse_file_put(ff, ff->fc->destroy_req != NULL); 275 + fuse_file_put(ff, ff->fc->destroy_req != NULL, isdir); 277 276 } 278 277 279 278 static int fuse_open(struct inode *inode, struct file *file) ··· 289 288 if (fc->writeback_cache) 290 289 write_inode_now(inode, 1); 291 290 292 - fuse_release_common(file, FUSE_RELEASE); 291 + fuse_release_common(file, false); 293 292 294 293 /* return value is ignored by VFS */ 295 294 return 0; ··· 303 302 * iput(NULL) is a no-op and since the refcount is 1 and everything's 304 303 * synchronous, we are fine with not doing igrab() here" 305 304 */ 306 - fuse_file_put(ff, true); 305 + fuse_file_put(ff, true, false); 307 306 } 308 307 EXPORT_SYMBOL_GPL(fuse_sync_release); 309 308 ··· 442 441 } 443 442 444 443 int fuse_fsync_common(struct file *file, loff_t start, loff_t end, 445 - int datasync, int isdir) 444 + int datasync, int opcode) 446 445 { 447 446 struct inode *inode = file->f_mapping->host; 448 447 struct fuse_conn *fc = get_fuse_conn(inode); 449 448 struct fuse_file *ff = file->private_data; 450 449 FUSE_ARGS(args); 451 450 struct fuse_fsync_in inarg; 451 + 452 + memset(&inarg, 0, sizeof(inarg)); 453 + inarg.fh = ff->fh; 454 + inarg.fsync_flags = datasync ? 
1 : 0; 455 + args.in.h.opcode = opcode; 456 + args.in.h.nodeid = get_node_id(inode); 457 + args.in.numargs = 1; 458 + args.in.args[0].size = sizeof(inarg); 459 + args.in.args[0].value = &inarg; 460 + return fuse_simple_request(fc, &args); 461 + } 462 + 463 + static int fuse_fsync(struct file *file, loff_t start, loff_t end, 464 + int datasync) 465 + { 466 + struct inode *inode = file->f_mapping->host; 467 + struct fuse_conn *fc = get_fuse_conn(inode); 452 468 int err; 453 469 454 470 if (is_bad_inode(inode)) ··· 497 479 if (err) 498 480 goto out; 499 481 500 - if ((!isdir && fc->no_fsync) || (isdir && fc->no_fsyncdir)) 482 + if (fc->no_fsync) 501 483 goto out; 502 484 503 - memset(&inarg, 0, sizeof(inarg)); 504 - inarg.fh = ff->fh; 505 - inarg.fsync_flags = datasync ? 1 : 0; 506 - args.in.h.opcode = isdir ? FUSE_FSYNCDIR : FUSE_FSYNC; 507 - args.in.h.nodeid = get_node_id(inode); 508 - args.in.numargs = 1; 509 - args.in.args[0].size = sizeof(inarg); 510 - args.in.args[0].value = &inarg; 511 - err = fuse_simple_request(fc, &args); 485 + err = fuse_fsync_common(file, start, end, datasync, FUSE_FSYNC); 512 486 if (err == -ENOSYS) { 513 - if (isdir) 514 - fc->no_fsyncdir = 1; 515 - else 516 - fc->no_fsync = 1; 487 + fc->no_fsync = 1; 517 488 err = 0; 518 489 } 519 490 out: 520 491 inode_unlock(inode); 521 - return err; 522 - } 523 492 524 - static int fuse_fsync(struct file *file, loff_t start, loff_t end, 525 - int datasync) 526 - { 527 - return fuse_fsync_common(file, start, end, datasync, 0); 493 + return err; 528 494 } 529 495 530 496 void fuse_read_fill(struct fuse_req *req, struct file *file, loff_t pos, ··· 809 807 put_page(page); 810 808 } 811 809 if (req->ff) 812 - fuse_file_put(req->ff, false); 810 + fuse_file_put(req->ff, false, false); 813 811 } 814 812 815 813 static void fuse_send_readpages(struct fuse_req *req, struct file *file) ··· 1462 1460 __free_page(req->pages[i]); 1463 1461 1464 1462 if (req->ff) 1465 - fuse_file_put(req->ff, false); 1463 + 
fuse_file_put(req->ff, false, false); 1466 1464 } 1467 1465 1468 1466 static void fuse_writepage_finish(struct fuse_conn *fc, struct fuse_req *req) ··· 1621 1619 ff = __fuse_write_file_get(fc, fi); 1622 1620 err = fuse_flush_times(inode, ff); 1623 1621 if (ff) 1624 - fuse_file_put(ff, 0); 1622 + fuse_file_put(ff, false, false); 1625 1623 1626 1624 return err; 1627 1625 } ··· 1942 1940 err = 0; 1943 1941 } 1944 1942 if (data.ff) 1945 - fuse_file_put(data.ff, false); 1943 + fuse_file_put(data.ff, false, false); 1946 1944 1947 1945 kfree(data.orig_pages); 1948 1946 out:
+2 -2
fs/fuse/fuse_i.h
··· 822 822 /** 823 823 * Send RELEASE or RELEASEDIR request 824 824 */ 825 - void fuse_release_common(struct file *file, int opcode); 825 + void fuse_release_common(struct file *file, bool isdir); 826 826 827 827 /** 828 828 * Send FSYNC or FSYNCDIR request 829 829 */ 830 830 int fuse_fsync_common(struct file *file, loff_t start, loff_t end, 831 - int datasync, int isdir); 831 + int datasync, int opcode); 832 832 833 833 /** 834 834 * Notify poll wakeup
+2 -1
fs/fuse/inode.c
··· 115 115 static void fuse_destroy_inode(struct inode *inode) 116 116 { 117 117 struct fuse_inode *fi = get_fuse_inode(inode); 118 - if (S_ISREG(inode->i_mode)) { 118 + if (S_ISREG(inode->i_mode) && !is_bad_inode(inode)) { 119 119 WARN_ON(!list_empty(&fi->write_files)); 120 120 WARN_ON(!list_empty(&fi->queued_writes)); 121 121 } ··· 1068 1068 1069 1069 fuse_conn_put(fc); 1070 1070 } 1071 + kfree(fud->pq.processing); 1071 1072 kfree(fud); 1072 1073 } 1073 1074 EXPORT_SYMBOL_GPL(fuse_dev_free);
+13 -1
fs/overlayfs/dir.c
··· 651 651 return ovl_create_object(dentry, S_IFLNK, 0, link); 652 652 } 653 653 654 + static int ovl_set_link_redirect(struct dentry *dentry) 655 + { 656 + const struct cred *old_cred; 657 + int err; 658 + 659 + old_cred = ovl_override_creds(dentry->d_sb); 660 + err = ovl_set_redirect(dentry, false); 661 + revert_creds(old_cred); 662 + 663 + return err; 664 + } 665 + 654 666 static int ovl_link(struct dentry *old, struct inode *newdir, 655 667 struct dentry *new) 656 668 { ··· 682 670 goto out_drop_write; 683 671 684 672 if (ovl_is_metacopy_dentry(old)) { 685 - err = ovl_set_redirect(old, false); 673 + err = ovl_set_link_redirect(old); 686 674 if (err) 687 675 goto out_drop_write; 688 676 }
+3 -3
fs/overlayfs/export.c
··· 754 754 goto out; 755 755 } 756 756 757 - /* Otherwise, get a connected non-upper dir or disconnected non-dir */ 758 - if (d_is_dir(origin.dentry) && 759 - (origin.dentry->d_flags & DCACHE_DISCONNECTED)) { 757 + /* Find origin.dentry again with ovl_acceptable() layer check */ 758 + if (d_is_dir(origin.dentry)) { 760 759 dput(origin.dentry); 761 760 origin.dentry = NULL; 762 761 err = ovl_check_origin_fh(ofs, fh, true, NULL, &stack); ··· 768 769 goto out_err; 769 770 } 770 771 772 + /* Get a connected non-upper dir or disconnected non-dir */ 771 773 dentry = ovl_get_dentry(sb, NULL, &origin, index); 772 774 773 775 out:
+4 -13
fs/overlayfs/inode.c
··· 286 286 if (err) 287 287 return err; 288 288 289 - /* No need to do any access on underlying for special files */ 290 - if (special_file(realinode->i_mode)) 291 - return 0; 292 - 293 - /* No need to access underlying for execute */ 294 - mask &= ~MAY_EXEC; 295 - if ((mask & (MAY_READ | MAY_WRITE)) == 0) 296 - return 0; 297 - 298 - /* Lower files get copied up, so turn write access into read */ 299 - if (!upperinode && mask & MAY_WRITE) { 289 + old_cred = ovl_override_creds(inode->i_sb); 290 + if (!upperinode && 291 + !special_file(realinode->i_mode) && mask & MAY_WRITE) { 300 292 mask &= ~(MAY_WRITE | MAY_APPEND); 293 + /* Make sure mounter can read file for copy up later */ 301 294 mask |= MAY_READ; 302 295 } 303 - 304 - old_cred = ovl_override_creds(inode->i_sb); 305 296 err = inode_permission(realinode, mask); 306 297 revert_creds(old_cred); 307 298
+2 -1
fs/userfaultfd.c
··· 1566 1566 cond_resched(); 1567 1567 1568 1568 BUG_ON(!vma_can_userfault(vma)); 1569 - WARN_ON(!(vma->vm_flags & VM_MAYWRITE)); 1570 1569 1571 1570 /* 1572 1571 * Nothing to do: this vma is already registered into this ··· 1573 1574 */ 1574 1575 if (!vma->vm_userfaultfd_ctx.ctx) 1575 1576 goto skip; 1577 + 1578 + WARN_ON(!(vma->vm_flags & VM_MAYWRITE)); 1576 1579 1577 1580 if (vma->vm_start > start) 1578 1581 start = vma->vm_start;
+1
include/asm-generic/fixmap.h
··· 16 16 #define __ASM_GENERIC_FIXMAP_H 17 17 18 18 #include <linux/bug.h> 19 + #include <linux/mm_types.h> 19 20 20 21 #define __fix_to_virt(x) (FIXADDR_TOP - ((x) << PAGE_SHIFT)) 21 22 #define __virt_to_fix(x) ((FIXADDR_TOP - ((x)&PAGE_MASK)) >> PAGE_SHIFT)
+1 -1
include/linux/filter.h
··· 861 861 extern int bpf_jit_enable; 862 862 extern int bpf_jit_harden; 863 863 extern int bpf_jit_kallsyms; 864 - extern int bpf_jit_limit; 864 + extern long bpf_jit_limit; 865 865 866 866 typedef void (*bpf_jit_fill_hole_t)(void *area, unsigned int size); 867 867
+6 -4
include/linux/mlx5/mlx5_ifc.h
··· 582 582 }; 583 583 584 584 struct mlx5_ifc_flow_table_eswitch_cap_bits { 585 - u8 reserved_at_0[0x1c]; 586 - u8 fdb_multi_path_to_table[0x1]; 587 - u8 reserved_at_1d[0x1]; 585 + u8 reserved_at_0[0x1a]; 588 586 u8 multi_fdb_encap[0x1]; 589 - u8 reserved_at_1e[0x1e1]; 587 + u8 reserved_at_1b[0x1]; 588 + u8 fdb_multi_path_to_table[0x1]; 589 + u8 reserved_at_1d[0x3]; 590 + 591 + u8 reserved_at_20[0x1e0]; 590 592 591 593 struct mlx5_ifc_flow_table_prop_layout_bits flow_table_properties_nic_esw_fdb; 592 594
+5
include/linux/mm_types.h
··· 206 206 #endif 207 207 } _struct_page_alignment; 208 208 209 + /* 210 + * Used for sizing the vmemmap region on some architectures 211 + */ 212 + #define STRUCT_PAGE_MAX_SHIFT (order_base_2(sizeof(struct page))) 213 + 209 214 #define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK) 210 215 #define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE) 211 216
+6
include/linux/mmzone.h
··· 783 783 static inline void memory_present(int nid, unsigned long start, unsigned long end) {} 784 784 #endif 785 785 786 + #if defined(CONFIG_SPARSEMEM) 787 + void memblocks_present(void); 788 + #else 789 + static inline void memblocks_present(void) {} 790 + #endif 791 + 786 792 #ifdef CONFIG_HAVE_MEMORYLESS_NODES 787 793 int local_memory_node(int node_id); 788 794 #else
+1 -1
include/linux/mod_devicetable.h
··· 565 565 /** 566 566 * struct mdio_device_id - identifies PHY devices on an MDIO/MII bus 567 567 * @phy_id: The result of 568 - * (mdio_read(&MII_PHYSID1) << 16 | mdio_read(&PHYSID2)) & @phy_id_mask 568 + * (mdio_read(&MII_PHYSID1) << 16 | mdio_read(&MII_PHYSID2)) & @phy_id_mask 569 569 * for this PHY type 570 570 * @phy_id_mask: Defines the significant bits of @phy_id. A value of 0 571 571 * is used to terminate an array of struct mdio_device_id.
-12
include/linux/netfilter/nfnetlink.h
··· 62 62 } 63 63 #endif /* CONFIG_PROVE_LOCKING */ 64 64 65 - /* 66 - * nfnl_dereference - fetch RCU pointer when updates are prevented by subsys mutex 67 - * 68 - * @p: The pointer to read, prior to dereferencing 69 - * @ss: The nfnetlink subsystem ID 70 - * 71 - * Return the value of the specified RCU-protected pointer, but omit 72 - * the READ_ONCE(), because caller holds the NFNL subsystem mutex. 73 - */ 74 - #define nfnl_dereference(p, ss) \ 75 - rcu_dereference_protected(p, lockdep_nfnl_is_held(ss)) 76 - 77 65 #define MODULE_ALIAS_NFNL_SUBSYS(subsys) \ 78 66 MODULE_ALIAS("nfnetlink-subsys-" __stringify(subsys)) 79 67
+5 -4
include/linux/t10-pi.h
··· 39 39 40 40 static inline u32 t10_pi_ref_tag(struct request *rq) 41 41 { 42 + unsigned int shift = ilog2(queue_logical_block_size(rq->q)); 43 + 42 44 #ifdef CONFIG_BLK_DEV_INTEGRITY 43 - return blk_rq_pos(rq) >> 44 - (rq->q->integrity.interval_exp - 9) & 0xffffffff; 45 - #else 46 - return -1U; 45 + if (rq->q->integrity.interval_exp) 46 + shift = rq->q->integrity.interval_exp; 47 47 #endif 48 + return blk_rq_pos(rq) >> (shift - SECTOR_SHIFT) & 0xffffffff; 48 49 } 49 50 50 51 extern const struct blk_integrity_profile t10_pi_type1_crc;
+54
include/linux/xarray.h
··· 554 554 } 555 555 556 556 /** 557 + * xa_cmpxchg_bh() - Conditionally replace an entry in the XArray. 558 + * @xa: XArray. 559 + * @index: Index into array. 560 + * @old: Old value to test against. 561 + * @entry: New value to place in array. 562 + * @gfp: Memory allocation flags. 563 + * 564 + * This function is like calling xa_cmpxchg() except it disables softirqs 565 + * while holding the array lock. 566 + * 567 + * Context: Any context. Takes and releases the xa_lock while 568 + * disabling softirqs. May sleep if the @gfp flags permit. 569 + * Return: The old value at this index or xa_err() if an error happened. 570 + */ 571 + static inline void *xa_cmpxchg_bh(struct xarray *xa, unsigned long index, 572 + void *old, void *entry, gfp_t gfp) 573 + { 574 + void *curr; 575 + 576 + xa_lock_bh(xa); 577 + curr = __xa_cmpxchg(xa, index, old, entry, gfp); 578 + xa_unlock_bh(xa); 579 + 580 + return curr; 581 + } 582 + 583 + /** 584 + * xa_cmpxchg_irq() - Conditionally replace an entry in the XArray. 585 + * @xa: XArray. 586 + * @index: Index into array. 587 + * @old: Old value to test against. 588 + * @entry: New value to place in array. 589 + * @gfp: Memory allocation flags. 590 + * 591 + * This function is like calling xa_cmpxchg() except it disables interrupts 592 + * while holding the array lock. 593 + * 594 + * Context: Process context. Takes and releases the xa_lock while 595 + * disabling interrupts. May sleep if the @gfp flags permit. 596 + * Return: The old value at this index or xa_err() if an error happened. 597 + */ 598 + static inline void *xa_cmpxchg_irq(struct xarray *xa, unsigned long index, 599 + void *old, void *entry, gfp_t gfp) 600 + { 601 + void *curr; 602 + 603 + xa_lock_irq(xa); 604 + curr = __xa_cmpxchg(xa, index, old, entry, gfp); 605 + xa_unlock_irq(xa); 606 + 607 + return curr; 608 + } 609 + 610 + /** 557 611 * xa_insert() - Store this entry in the XArray unless another entry is 558 612 * already present. 559 613 * @xa: XArray.
+86
include/media/mpeg2-ctrls.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * These are the MPEG2 state controls for use with stateless MPEG-2 4 + * codec drivers. 5 + * 6 + * It turns out that these structs are not stable yet and will undergo 7 + * more changes. So keep them private until they are stable and ready to 8 + * become part of the official public API. 9 + */ 10 + 11 + #ifndef _MPEG2_CTRLS_H_ 12 + #define _MPEG2_CTRLS_H_ 13 + 14 + #define V4L2_CID_MPEG_VIDEO_MPEG2_SLICE_PARAMS (V4L2_CID_MPEG_BASE+250) 15 + #define V4L2_CID_MPEG_VIDEO_MPEG2_QUANTIZATION (V4L2_CID_MPEG_BASE+251) 16 + 17 + /* enum v4l2_ctrl_type type values */ 18 + #define V4L2_CTRL_TYPE_MPEG2_SLICE_PARAMS 0x0103 19 + #define V4L2_CTRL_TYPE_MPEG2_QUANTIZATION 0x0104 20 + 21 + #define V4L2_MPEG2_PICTURE_CODING_TYPE_I 1 22 + #define V4L2_MPEG2_PICTURE_CODING_TYPE_P 2 23 + #define V4L2_MPEG2_PICTURE_CODING_TYPE_B 3 24 + #define V4L2_MPEG2_PICTURE_CODING_TYPE_D 4 25 + 26 + struct v4l2_mpeg2_sequence { 27 + /* ISO/IEC 13818-2, ITU-T Rec. H.262: Sequence header */ 28 + __u16 horizontal_size; 29 + __u16 vertical_size; 30 + __u32 vbv_buffer_size; 31 + 32 + /* ISO/IEC 13818-2, ITU-T Rec. H.262: Sequence extension */ 33 + __u8 profile_and_level_indication; 34 + __u8 progressive_sequence; 35 + __u8 chroma_format; 36 + __u8 pad; 37 + }; 38 + 39 + struct v4l2_mpeg2_picture { 40 + /* ISO/IEC 13818-2, ITU-T Rec. H.262: Picture header */ 41 + __u8 picture_coding_type; 42 + 43 + /* ISO/IEC 13818-2, ITU-T Rec. H.262: Picture coding extension */ 44 + __u8 f_code[2][2]; 45 + __u8 intra_dc_precision; 46 + __u8 picture_structure; 47 + __u8 top_field_first; 48 + __u8 frame_pred_frame_dct; 49 + __u8 concealment_motion_vectors; 50 + __u8 q_scale_type; 51 + __u8 intra_vlc_format; 52 + __u8 alternate_scan; 53 + __u8 repeat_first_field; 54 + __u8 progressive_frame; 55 + __u8 pad; 56 + }; 57 + 58 + struct v4l2_ctrl_mpeg2_slice_params { 59 + __u32 bit_size; 60 + __u32 data_bit_offset; 61 + 62 + struct v4l2_mpeg2_sequence sequence; 63 + struct v4l2_mpeg2_picture picture; 64 + 65 + /* ISO/IEC 13818-2, ITU-T Rec. H.262: Slice */ 66 + __u8 quantiser_scale_code; 67 + 68 + __u8 backward_ref_index; 69 + __u8 forward_ref_index; 70 + __u8 pad; 71 + }; 72 + 73 + struct v4l2_ctrl_mpeg2_quantization { 74 + /* ISO/IEC 13818-2, ITU-T Rec. H.262: Quant matrix extension */ 75 + __u8 load_intra_quantiser_matrix; 76 + __u8 load_non_intra_quantiser_matrix; 77 + __u8 load_chroma_intra_quantiser_matrix; 78 + __u8 load_chroma_non_intra_quantiser_matrix; 79 + 80 + __u8 intra_quantiser_matrix[64]; 81 + __u8 non_intra_quantiser_matrix[64]; 82 + __u8 chroma_intra_quantiser_matrix[64]; 83 + __u8 chroma_non_intra_quantiser_matrix[64]; 84 + }; 85 + 86 + #endif
+6
include/media/v4l2-ctrls.h
··· 22 22 #include <linux/videodev2.h> 23 23 #include <media/media-request.h> 24 24 25 + /* 26 + * Include the mpeg2 stateless codec compound control definitions. 27 + * This will move to the public headers once this API is fully stable. 28 + */ 29 + #include <media/mpeg2-ctrls.h> 30 + 25 31 /* forward references */ 26 32 struct file; 27 33 struct v4l2_ctrl_handler;
+2
include/media/videobuf2-core.h
··· 239 239 * @num_planes: number of planes in the buffer 240 240 * on an internal driver queue. 241 241 * @timestamp: frame timestamp in ns. 242 + * @request: the request this buffer is associated with. 242 243 * @req_obj: used to bind this buffer to a request. This 243 244 * request object has a refcount. 244 245 */ ··· 250 249 unsigned int memory; 251 250 unsigned int num_planes; 252 251 u64 timestamp; 252 + struct media_request *request; 253 253 struct media_request_object req_obj; 254 254 255 255 /* private: internal use only
-19
include/net/ip_tunnels.h
··· 144 144 bool ignore_df; 145 145 }; 146 146 147 - #define TUNNEL_CSUM __cpu_to_be16(0x01) 148 - #define TUNNEL_ROUTING __cpu_to_be16(0x02) 149 - #define TUNNEL_KEY __cpu_to_be16(0x04) 150 - #define TUNNEL_SEQ __cpu_to_be16(0x08) 151 - #define TUNNEL_STRICT __cpu_to_be16(0x10) 152 - #define TUNNEL_REC __cpu_to_be16(0x20) 153 - #define TUNNEL_VERSION __cpu_to_be16(0x40) 154 - #define TUNNEL_NO_KEY __cpu_to_be16(0x80) 155 - #define TUNNEL_DONT_FRAGMENT __cpu_to_be16(0x0100) 156 - #define TUNNEL_OAM __cpu_to_be16(0x0200) 157 - #define TUNNEL_CRIT_OPT __cpu_to_be16(0x0400) 158 - #define TUNNEL_GENEVE_OPT __cpu_to_be16(0x0800) 159 - #define TUNNEL_VXLAN_OPT __cpu_to_be16(0x1000) 160 - #define TUNNEL_NOCACHE __cpu_to_be16(0x2000) 161 - #define TUNNEL_ERSPAN_OPT __cpu_to_be16(0x4000) 162 - 163 - #define TUNNEL_OPTIONS_PRESENT \ 164 - (TUNNEL_GENEVE_OPT | TUNNEL_VXLAN_OPT | TUNNEL_ERSPAN_OPT) 165 - 166 147 struct tnl_ptk_info { 167 148 __be16 flags; 168 149 __be16 proto;
+21 -4
include/net/sock.h
··· 2340 2340 void __sock_tx_timestamp(__u16 tsflags, __u8 *tx_flags); 2341 2341 2342 2342 /** 2343 - * sock_tx_timestamp - checks whether the outgoing packet is to be time stamped 2343 + * _sock_tx_timestamp - checks whether the outgoing packet is to be time stamped 2344 2344 * @sk: socket sending this packet 2345 2345 * @tsflags: timestamping flags to use 2346 2346 * @tx_flags: completed with instructions for time stamping 2347 + * @tskey: filled in with next sk_tskey (not for TCP, which uses seqno) 2347 2348 * 2348 2349 * Note: callers should take care of initial ``*tx_flags`` value (usually 0) 2349 2350 */ 2350 - static inline void sock_tx_timestamp(const struct sock *sk, __u16 tsflags, 2351 - __u8 *tx_flags) 2351 + static inline void _sock_tx_timestamp(struct sock *sk, __u16 tsflags, 2352 + __u8 *tx_flags, __u32 *tskey) 2352 2353 { 2353 - if (unlikely(tsflags)) 2354 + if (unlikely(tsflags)) { 2354 2355 __sock_tx_timestamp(tsflags, tx_flags); 2356 + if (tsflags & SOF_TIMESTAMPING_OPT_ID && tskey && 2357 + tsflags & SOF_TIMESTAMPING_TX_RECORD_MASK) 2358 + *tskey = sk->sk_tskey++; 2359 + } 2355 2360 if (unlikely(sock_flag(sk, SOCK_WIFI_STATUS))) 2356 2361 *tx_flags |= SKBTX_WIFI_STATUS; 2362 + } 2363 + 2364 + static inline void sock_tx_timestamp(struct sock *sk, __u16 tsflags, 2365 + __u8 *tx_flags) 2366 + { 2367 + _sock_tx_timestamp(sk, tsflags, tx_flags, NULL); 2368 + } 2369 + 2370 + static inline void skb_setup_tx_timestamp(struct sk_buff *skb, __u16 tsflags) 2371 + { 2372 + _sock_tx_timestamp(skb->sk, tsflags, &skb_shinfo(skb)->tx_flags, 2373 + &skb_shinfo(skb)->tskey); 2357 2374 } 2358 2375 2359 2376 /**
+6
include/net/tls.h
··· 76 76 * 77 77 * void (*unhash)(struct tls_device *device, struct sock *sk); 78 78 * This function cleans listen state set by Inline TLS driver 79 + * 80 + * void (*release)(struct kref *kref); 81 + * Release the registered device and allocated resources 82 + * @kref: Number of reference to tls_device 79 83 */ 80 84 struct tls_device { 81 85 char name[TLS_DEVICE_NAME_MAX]; ··· 87 83 int (*feature)(struct tls_device *device); 88 84 int (*hash)(struct tls_device *device, struct sock *sk); 89 85 void (*unhash)(struct tls_device *device, struct sock *sk); 86 + void (*release)(struct kref *kref); 87 + struct kref kref; 90 88 }; 91 89 92 90 enum {
+1
include/net/xfrm.h
··· 1552 1552 int (*func)(struct xfrm_state *, int, void*), void *); 1553 1553 void xfrm_state_walk_done(struct xfrm_state_walk *walk, struct net *net); 1554 1554 struct xfrm_state *xfrm_state_alloc(struct net *net); 1555 + void xfrm_state_free(struct xfrm_state *x); 1555 1556 struct xfrm_state *xfrm_state_find(const xfrm_address_t *daddr, 1556 1557 const xfrm_address_t *saddr, 1557 1558 const struct flowi *fl,
+1
include/uapi/asm-generic/Kbuild.asm
··· 3 3 # 4 4 mandatory-y += auxvec.h 5 5 mandatory-y += bitsperlong.h 6 + mandatory-y += bpf_perf_event.h 6 7 mandatory-y += byteorder.h 7 8 mandatory-y += errno.h 8 9 mandatory-y += fcntl.h
+2 -2
include/uapi/linux/blkzoned.h
··· 141 141 */ 142 142 #define BLKREPORTZONE _IOWR(0x12, 130, struct blk_zone_report) 143 143 #define BLKRESETZONE _IOW(0x12, 131, struct blk_zone_range) 144 - #define BLKGETZONESZ _IOW(0x12, 132, __u32) 145 - #define BLKGETNRZONES _IOW(0x12, 133, __u32) 144 + #define BLKGETZONESZ _IOR(0x12, 132, __u32) 145 + #define BLKGETNRZONES _IOR(0x12, 133, __u32) 146 146 147 147 #endif /* _UAPI_BLKZONED_H */
+20
include/uapi/linux/if_tunnel.h
··· 160 160 }; 161 161 162 162 #define IFLA_VTI_MAX (__IFLA_VTI_MAX - 1) 163 + 164 + #define TUNNEL_CSUM __cpu_to_be16(0x01) 165 + #define TUNNEL_ROUTING __cpu_to_be16(0x02) 166 + #define TUNNEL_KEY __cpu_to_be16(0x04) 167 + #define TUNNEL_SEQ __cpu_to_be16(0x08) 168 + #define TUNNEL_STRICT __cpu_to_be16(0x10) 169 + #define TUNNEL_REC __cpu_to_be16(0x20) 170 + #define TUNNEL_VERSION __cpu_to_be16(0x40) 171 + #define TUNNEL_NO_KEY __cpu_to_be16(0x80) 172 + #define TUNNEL_DONT_FRAGMENT __cpu_to_be16(0x0100) 173 + #define TUNNEL_OAM __cpu_to_be16(0x0200) 174 + #define TUNNEL_CRIT_OPT __cpu_to_be16(0x0400) 175 + #define TUNNEL_GENEVE_OPT __cpu_to_be16(0x0800) 176 + #define TUNNEL_VXLAN_OPT __cpu_to_be16(0x1000) 177 + #define TUNNEL_NOCACHE __cpu_to_be16(0x2000) 178 + #define TUNNEL_ERSPAN_OPT __cpu_to_be16(0x4000) 179 + 180 + #define TUNNEL_OPTIONS_PRESENT \ 181 + (TUNNEL_GENEVE_OPT | TUNNEL_VXLAN_OPT | TUNNEL_ERSPAN_OPT) 182 + 163 183 #endif /* _UAPI_IF_TUNNEL_H_ */
+7 -3
include/uapi/linux/in.h
··· 266 266 267 267 #define IN_CLASSD(a) ((((long int) (a)) & 0xf0000000) == 0xe0000000) 268 268 #define IN_MULTICAST(a) IN_CLASSD(a) 269 - #define IN_MULTICAST_NET 0xF0000000 269 + #define IN_MULTICAST_NET 0xe0000000 270 270 271 - #define IN_EXPERIMENTAL(a) ((((long int) (a)) & 0xf0000000) == 0xf0000000) 272 - #define IN_BADCLASS(a) IN_EXPERIMENTAL((a)) 271 + #define IN_BADCLASS(a) ((((long int) (a) ) == 0xffffffff) 272 + #define IN_EXPERIMENTAL(a) IN_BADCLASS((a)) 273 + 274 + #define IN_CLASSE(a) ((((long int) (a)) & 0xf0000000) == 0xf0000000) 275 + #define IN_CLASSE_NET 0xffffffff 276 + #define IN_CLASSE_NSHIFT 0 273 277 274 278 /* Address to accept any incoming messages. */ 275 279 #define INADDR_ANY ((unsigned long int) 0x00000000)
+9
include/uapi/linux/input-event-codes.h
··· 752 752 753 753 #define ABS_MISC 0x28 754 754 755 + /* 756 + * 0x2e is reserved and should not be used in input drivers. 757 + * It was used by HID as ABS_MISC+6 and userspace needs to detect if 758 + * the next ABS_* event is correct or is just ABS_MISC + n. 759 + * We define here ABS_RESERVED so userspace can rely on it and detect 760 + * the situation described above. 761 + */ 762 + #define ABS_RESERVED 0x2e 763 + 755 764 #define ABS_MT_SLOT 0x2f /* MT slot being modified */ 756 765 #define ABS_MT_TOUCH_MAJOR 0x30 /* Major axis of touching ellipse */ 757 766 #define ABS_MT_TOUCH_MINOR 0x31 /* Minor axis (omit if circular) */
+2 -2
include/uapi/linux/net_tstamp.h
··· 155 155 }; 156 156 157 157 struct sock_txtime { 158 - clockid_t clockid; /* reference clockid */ 159 - __u32 flags; /* as defined by enum txtime_flags */ 158 + __kernel_clockid_t clockid;/* reference clockid */ 159 + __u32 flags; /* as defined by enum txtime_flags */ 160 160 }; 161 161 162 162 #endif /* _NET_TIMESTAMPING_H */
+1 -1
include/uapi/linux/netlink.h
··· 155 155 #define NETLINK_LIST_MEMBERSHIPS 9 156 156 #define NETLINK_CAP_ACK 10 157 157 #define NETLINK_EXT_ACK 11 158 - #define NETLINK_DUMP_STRICT_CHK 12 158 + #define NETLINK_GET_STRICT_CHK 12 159 159 160 160 struct nl_pktinfo { 161 161 __u32 group;
-68
include/uapi/linux/v4l2-controls.h
··· 404 404 #define V4L2_CID_MPEG_VIDEO_MV_V_SEARCH_RANGE (V4L2_CID_MPEG_BASE+228) 405 405 #define V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME (V4L2_CID_MPEG_BASE+229) 406 406 407 - #define V4L2_CID_MPEG_VIDEO_MPEG2_SLICE_PARAMS (V4L2_CID_MPEG_BASE+250) 408 - #define V4L2_CID_MPEG_VIDEO_MPEG2_QUANTIZATION (V4L2_CID_MPEG_BASE+251) 409 - 410 407 #define V4L2_CID_MPEG_VIDEO_H263_I_FRAME_QP (V4L2_CID_MPEG_BASE+300) 411 408 #define V4L2_CID_MPEG_VIDEO_H263_P_FRAME_QP (V4L2_CID_MPEG_BASE+301) 412 409 #define V4L2_CID_MPEG_VIDEO_H263_B_FRAME_QP (V4L2_CID_MPEG_BASE+302)
··· 1093 1096 #define V4L2_CID_DETECT_MD_GLOBAL_THRESHOLD (V4L2_CID_DETECT_CLASS_BASE + 2) 1094 1097 #define V4L2_CID_DETECT_MD_THRESHOLD_GRID (V4L2_CID_DETECT_CLASS_BASE + 3) 1095 1098 #define V4L2_CID_DETECT_MD_REGION_GRID (V4L2_CID_DETECT_CLASS_BASE + 4) 1096 - 1097 - #define V4L2_MPEG2_PICTURE_CODING_TYPE_I 1 1098 - #define V4L2_MPEG2_PICTURE_CODING_TYPE_P 2 1099 - #define V4L2_MPEG2_PICTURE_CODING_TYPE_B 3 1100 - #define V4L2_MPEG2_PICTURE_CODING_TYPE_D 4 1101 - 1102 - struct v4l2_mpeg2_sequence { 1103 - /* ISO/IEC 13818-2, ITU-T Rec. H.262: Sequence header */ 1104 - __u16 horizontal_size; 1105 - __u16 vertical_size; 1106 - __u32 vbv_buffer_size; 1107 - 1108 - /* ISO/IEC 13818-2, ITU-T Rec. H.262: Sequence extension */ 1109 - __u8 profile_and_level_indication; 1110 - __u8 progressive_sequence; 1111 - __u8 chroma_format; 1112 - __u8 pad; 1113 - }; 1114 - 1115 - struct v4l2_mpeg2_picture { 1116 - /* ISO/IEC 13818-2, ITU-T Rec. H.262: Picture header */ 1117 - __u8 picture_coding_type; 1118 - 1119 - /* ISO/IEC 13818-2, ITU-T Rec. H.262: Picture coding extension */ 1120 - __u8 f_code[2][2]; 1121 - __u8 intra_dc_precision; 1122 - __u8 picture_structure; 1123 - __u8 top_field_first; 1124 - __u8 frame_pred_frame_dct; 1125 - __u8 concealment_motion_vectors; 1126 - __u8 q_scale_type; 1127 - __u8 intra_vlc_format; 1128 - __u8 alternate_scan; 1129 - __u8 repeat_first_field; 1130 - __u8 progressive_frame; 1131 - __u8 pad; 1132 - }; 1133 - 1134 - struct v4l2_ctrl_mpeg2_slice_params { 1135 - __u32 bit_size; 1136 - __u32 data_bit_offset; 1137 - 1138 - struct v4l2_mpeg2_sequence sequence; 1139 - struct v4l2_mpeg2_picture picture; 1140 - 1141 - /* ISO/IEC 13818-2, ITU-T Rec. H.262: Slice */ 1142 - __u8 quantiser_scale_code; 1143 - 1144 - __u8 backward_ref_index; 1145 - __u8 forward_ref_index; 1146 - __u8 pad; 1147 - }; 1148 - 1149 - struct v4l2_ctrl_mpeg2_quantization { 1150 - /* ISO/IEC 13818-2, ITU-T Rec. H.262: Quant matrix extension */ 1151 - __u8 load_intra_quantiser_matrix; 1152 - __u8 load_non_intra_quantiser_matrix; 1153 - __u8 load_chroma_intra_quantiser_matrix; 1154 - __u8 load_chroma_non_intra_quantiser_matrix; 1155 - 1156 - __u8 intra_quantiser_matrix[64]; 1157 - __u8 non_intra_quantiser_matrix[64]; 1158 - __u8 chroma_intra_quantiser_matrix[64]; 1159 - __u8 chroma_non_intra_quantiser_matrix[64]; 1160 - }; 1161 1099 1162 1100 #endif
-4
include/uapi/linux/videodev2.h
··· 1622 1622 __u8 __user *p_u8; 1623 1623 __u16 __user *p_u16; 1624 1624 __u32 __user *p_u32; 1625 - struct v4l2_ctrl_mpeg2_slice_params __user *p_mpeg2_slice_params; 1626 - struct v4l2_ctrl_mpeg2_quantization __user *p_mpeg2_quantization; 1627 1625 void __user *ptr; 1628 1626 }; 1629 1627 } __attribute__ ((packed)); ··· 1667 1669 V4L2_CTRL_TYPE_U8 = 0x0100, 1668 1670 V4L2_CTRL_TYPE_U16 = 0x0101, 1669 1671 V4L2_CTRL_TYPE_U32 = 0x0102, 1670 - V4L2_CTRL_TYPE_MPEG2_SLICE_PARAMS = 0x0103, 1671 - V4L2_CTRL_TYPE_MPEG2_QUANTIZATION = 0x0104, 1672 1672 }; 1673 1673 1674 1674 /* Used in the VIDIOC_QUERYCTRL ioctl for querying controls */
+2 -2
init/Kconfig
··· 515 515 depends on PSI 516 516 help 517 517 If set, pressure stall information tracking will be disabled 518 - per default but can be enabled through passing psi_enable=1 519 - on the kernel commandline during boot. 518 + per default but can be enabled through passing psi=1 on the 519 + kernel commandline during boot. 520 520 521 521 endmenu # "CPU/Task time and stats accounting" 522 522
+15 -6
kernel/bpf/core.c
··· 365 365 } 366 366 367 367 #ifdef CONFIG_BPF_JIT 368 - # define BPF_JIT_LIMIT_DEFAULT (PAGE_SIZE * 40000) 369 - 370 368 /* All BPF JIT sysctl knobs here. */ 371 369 int bpf_jit_enable __read_mostly = IS_BUILTIN(CONFIG_BPF_JIT_ALWAYS_ON); 372 370 int bpf_jit_harden __read_mostly; 373 371 int bpf_jit_kallsyms __read_mostly; 374 - int bpf_jit_limit __read_mostly = BPF_JIT_LIMIT_DEFAULT; 372 + long bpf_jit_limit __read_mostly; 375 373 376 374 static __always_inline void 377 375 bpf_get_prog_addr_region(const struct bpf_prog *prog, ··· 578 580 579 581 static atomic_long_t bpf_jit_current; 580 582 583 + /* Can be overridden by an arch's JIT compiler if it has a custom, 584 + * dedicated BPF backend memory area, or if neither of the two 585 + * below apply. 586 + */ 587 + u64 __weak bpf_jit_alloc_exec_limit(void) 588 + { 581 589 #if defined(MODULES_VADDR) 590 + return MODULES_END - MODULES_VADDR; 591 + #else 592 + return VMALLOC_END - VMALLOC_START; 593 + #endif 594 + } 595 + 582 596 static int __init bpf_jit_charge_init(void) 583 597 { 584 598 /* Only used as heuristic here to derive limit. */ 585 - bpf_jit_limit = min_t(u64, round_up((MODULES_END - MODULES_VADDR) >> 2, 586 - PAGE_SIZE), INT_MAX); 599 + bpf_jit_limit = min_t(u64, round_up(bpf_jit_alloc_exec_limit() >> 2, 600 + PAGE_SIZE), LONG_MAX); 587 601 return 0; 588 602 } 589 603 pure_initcall(bpf_jit_charge_init); 590 - #endif 591 604 592 605 static int bpf_jit_charge_modmem(u32 pages) 593 606 {
+10 -3
kernel/bpf/verifier.c
··· 5102 5102 } 5103 5103 new_sl->next = env->explored_states[insn_idx]; 5104 5104 env->explored_states[insn_idx] = new_sl; 5105 - /* connect new state to parentage chain */ 5106 - for (i = 0; i < BPF_REG_FP; i++) 5107 - cur_regs(env)[i].parent = &new->frame[new->curframe]->regs[i]; 5105 + /* connect new state to parentage chain. Current frame needs all 5106 + * registers connected. Only r6 - r9 of the callers are alive (pushed 5107 + * to the stack implicitly by JITs) so in callers' frames connect just 5108 + * r6 - r9 as an optimization. Callers will have r1 - r5 connected to 5109 + * the state of the call instruction (with WRITTEN set), and r0 comes 5110 + * from callee with its full parentage chain, anyway. 5111 + */ 5112 + for (j = 0; j <= cur->curframe; j++) 5113 + for (i = j < cur->curframe ? BPF_REG_6 : 0; i < BPF_REG_FP; i++) 5114 + cur->frame[j]->regs[i].parent = &new->frame[j]->regs[i]; 5108 5115 /* clear write marks in current state: the writes we did are not writes 5109 5116 * our child did, so they don't screen off its reads from us. 5110 5117 * (There are no read marks in current state, because reads always mark
+6 -1
kernel/dma/direct.c
··· 309 309 310 310 min_mask = min_t(u64, min_mask, (max_pfn - 1) << PAGE_SHIFT); 311 311 312 - return mask >= phys_to_dma(dev, min_mask); 312 + /* 313 + * This check needs to be against the actual bit mask value, so 314 + * use __phys_to_dma() here so that the SME encryption mask isn't 315 + * part of the check. 316 + */ 317 + return mask >= __phys_to_dma(dev, min_mask); 313 318 } 314 319 315 320 int dma_direct_mapping_error(struct device *dev, dma_addr_t dma_addr)
+1
kernel/trace/ftrace.c
··· 5460 5460 if (ops->flags & FTRACE_OPS_FL_ENABLED) 5461 5461 ftrace_shutdown(ops, 0); 5462 5462 ops->flags |= FTRACE_OPS_FL_DELETED; 5463 + ftrace_free_filter(ops); 5463 5464 mutex_unlock(&ftrace_lock); 5464 5465 } 5465 5466
+4 -1
kernel/trace/trace_events_filter.c
··· 570 570 } 571 571 } 572 572 573 + kfree(op_stack); 574 + kfree(inverts); 573 575 return prog; 574 576 out_free: 575 577 kfree(op_stack); 576 - kfree(prog_stack); 577 578 kfree(inverts); 579 + kfree(prog_stack); 578 580 return ERR_PTR(ret); 579 581 } 580 582 ··· 1720 1718 err = process_preds(call, filter_string, *filterp, pe); 1721 1719 if (err && set_str) 1722 1720 append_filter_err(pe, *filterp); 1721 + create_filter_finish(pe); 1723 1722 1724 1723 return err; 1725 1724 }
+4 -2
kernel/trace/trace_events_trigger.c
··· 732 732 733 733 /* The filter is for the 'trigger' event, not the triggered event */ 734 734 ret = create_event_filter(file->event_call, filter_str, false, &filter); 735 - if (ret) 736 - goto out; 735 + /* 736 + * If create_event_filter() fails, filter still needs to be freed. 737 + * Which the calling code will do with data->filter. 738 + */ 737 739 assign: 738 740 tmp = rcu_access_pointer(data->filter); 739 741
+2 -2
lib/radix-tree.c
··· 784 784 while (radix_tree_is_internal_node(node)) { 785 785 unsigned offset; 786 786 787 - if (node == RADIX_TREE_RETRY) 788 - goto restart; 789 787 parent = entry_to_node(node); 790 788 offset = radix_tree_descend(parent, &node, index); 791 789 slot = parent->slots + offset; 790 + if (node == RADIX_TREE_RETRY) 791 + goto restart; 792 792 if (parent->shift == 0) 793 793 break; 794 794 }
+112 -43
lib/test_xarray.c
··· 28 28 } while (0) 29 29 #endif 30 30 31 + static void *xa_mk_index(unsigned long index) 32 + { 33 + return xa_mk_value(index & LONG_MAX); 34 + } 35 + 31 36 static void *xa_store_index(struct xarray *xa, unsigned long index, gfp_t gfp) 32 37 { 33 - return xa_store(xa, index, xa_mk_value(index & LONG_MAX), gfp); 38 + return xa_store(xa, index, xa_mk_index(index), gfp); 34 39 } 35 40 36 41 static void xa_alloc_index(struct xarray *xa, unsigned long index, gfp_t gfp) 37 42 { 38 43 u32 id = 0; 39 44 40 - XA_BUG_ON(xa, xa_alloc(xa, &id, UINT_MAX, xa_mk_value(index & LONG_MAX), 45 + XA_BUG_ON(xa, xa_alloc(xa, &id, UINT_MAX, xa_mk_index(index), 41 46 gfp) != 0); 42 47 XA_BUG_ON(xa, id != index); 43 48 } 44 49 45 50 static void xa_erase_index(struct xarray *xa, unsigned long index) 46 51 { 47 - XA_BUG_ON(xa, xa_erase(xa, index) != xa_mk_value(index & LONG_MAX)); 52 + XA_BUG_ON(xa, xa_erase(xa, index) != xa_mk_index(index)); 48 53 XA_BUG_ON(xa, xa_load(xa, index) != NULL); 49 54 }
··· 123 118 124 119 xas_set(&xas, 0); 125 120 xas_for_each(&xas, entry, ULONG_MAX) { 126 - xas_store(&xas, xa_mk_value(xas.xa_index)); 121 + xas_store(&xas, xa_mk_index(xas.xa_index)); 127 122 } 128 123 xas_unlock(&xas); 129 124
··· 201 196 XA_BUG_ON(xa, xa_store_index(xa, index + 2, GFP_KERNEL)); 202 197 xa_set_mark(xa, index + 2, XA_MARK_1); 203 198 XA_BUG_ON(xa, xa_store_index(xa, next, GFP_KERNEL)); 204 - xa_store_order(xa, index, order, xa_mk_value(index), 199 + xa_store_order(xa, index, order, xa_mk_index(index), 205 200 GFP_KERNEL); 206 201 for (i = base; i < next; i++) { 207 202 XA_STATE(xas, xa, i);
··· 410 405 xas_set(&xas, j); 411 406 do { 412 407 xas_lock(&xas); 413 - xas_store(&xas, xa_mk_value(j)); 408 + xas_store(&xas, xa_mk_index(j)); 414 409 xas_unlock(&xas); 415 410 } while (xas_nomem(&xas, GFP_KERNEL)); 416 411 }
··· 428 423 xas_set(&xas, 0); 429 424 j = i; 430 425 xas_for_each(&xas, entry, ULONG_MAX) { 431 426 XA_BUG_ON(xa, entry != xa_mk_index(j)); 432 427 xas_store(&xas, NULL); 433 428 j++; 434 429 }
··· 445 440 unsigned long min = index & ~((1UL << order) - 1); 446 441 unsigned long max = min + (1UL << order); 447 442 448 - xa_store_order(xa, index, order, xa_mk_value(index), GFP_KERNEL); 449 - XA_BUG_ON(xa, xa_load(xa, min) != xa_mk_value(index)); 450 - XA_BUG_ON(xa, xa_load(xa, max - 1) != xa_mk_value(index)); 443 + xa_store_order(xa, index, order, xa_mk_index(index), GFP_KERNEL); 444 + XA_BUG_ON(xa, xa_load(xa, min) != xa_mk_index(index)); 445 + XA_BUG_ON(xa, xa_load(xa, max - 1) != xa_mk_index(index)); 451 446 XA_BUG_ON(xa, xa_load(xa, max) != NULL); 452 447 XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL); 453 448 454 449 xas_lock(&xas); 455 - XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(min)) != xa_mk_value(index)); 450 + XA_BUG_ON(xa, xas_store(&xas, xa_mk_index(min)) != xa_mk_index(index)); 456 451 xas_unlock(&xas); 457 - XA_BUG_ON(xa, xa_load(xa, min) != xa_mk_value(min)); 458 - XA_BUG_ON(xa, xa_load(xa, max - 1) != xa_mk_value(min)); 452 + XA_BUG_ON(xa, xa_load(xa, min) != xa_mk_index(min)); 453 + XA_BUG_ON(xa, xa_load(xa, max - 1) != xa_mk_index(min)); 459 454 XA_BUG_ON(xa, xa_load(xa, max) != NULL); 460 455 XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL); 461 456
··· 475 470 XA_BUG_ON(xa, xas_store(&xas, NULL) != xa_mk_value(1)); 476 471 xas_unlock(&xas); 477 472 XA_BUG_ON(xa, !xa_empty(xa)); 473 + } 474 + 475 + static noinline void check_multi_store_3(struct xarray *xa, unsigned long index, 476 + unsigned int order) 477 + { 478 + XA_STATE(xas, xa, 0); 479 + void *entry; 480 + int n = 0; 481 + 482 + xa_store_order(xa, index, order, xa_mk_index(index), GFP_KERNEL); 483 + 484 + xas_lock(&xas); 485 + xas_for_each(&xas, entry, ULONG_MAX) { 486 + XA_BUG_ON(xa, entry != xa_mk_index(index)); 487 + n++; 488 + } 489 + XA_BUG_ON(xa, n != 1); 490 + xas_set(&xas, index + 1); 491 + xas_for_each(&xas, entry, ULONG_MAX) { 492 + XA_BUG_ON(xa, entry != xa_mk_index(index)); 493 + n++; 494 + } 495 + XA_BUG_ON(xa, n != 2); 496 + xas_unlock(&xas); 497 + 498 + xa_destroy(xa); 478 499 } 479 500 #endif
··· 554 523 555 524 for (i = 0; i < max_order; i++) { 556 525 for (j = 0; j < max_order; j++) { 557 - xa_store_order(xa, 0, i, xa_mk_value(i), GFP_KERNEL); 558 - xa_store_order(xa, 0, j, xa_mk_value(j), GFP_KERNEL); 526 + xa_store_order(xa, 0, i, xa_mk_index(i), GFP_KERNEL); 527 + xa_store_order(xa, 0, j, xa_mk_index(j), GFP_KERNEL); 559 528 560 529 for (k = 0; k < max_order; k++) { 561 530 void *entry = xa_load(xa, (1UL << k) - 1); 562 531 if ((i < k) && (j < k)) 563 532 XA_BUG_ON(xa, entry != NULL); 564 533 else 565 - XA_BUG_ON(xa, entry != xa_mk_value(j)); 534 + XA_BUG_ON(xa, entry != xa_mk_index(j)); 566 535 } 567 536 568 537 xa_erase(xa, 0);
··· 576 545 check_multi_store_1(xa, (1UL << i) + 1, i); 577 546 } 578 547 check_multi_store_2(xa, 4095, 9); 548 + 549 + for (i = 1; i < 20; i++) { 550 + check_multi_store_3(xa, 0, i); 551 + check_multi_store_3(xa, 1UL << i, i); 552 + } 579 553 #endif 580 554 }
··· 623 587 xa_destroy(&xa0); 624 588 625 589 id = 0xfffffffeU; 626 - XA_BUG_ON(&xa0, xa_alloc(&xa0, &id, UINT_MAX, xa_mk_value(0), 590 + XA_BUG_ON(&xa0, xa_alloc(&xa0, &id, UINT_MAX, xa_mk_index(id), 627 591 GFP_KERNEL) != 0); 628 592 XA_BUG_ON(&xa0, id != 0xfffffffeU); 629 - XA_BUG_ON(&xa0, xa_alloc(&xa0, &id, UINT_MAX, xa_mk_value(0), 593 + XA_BUG_ON(&xa0, xa_alloc(&xa0, &id, UINT_MAX, xa_mk_index(id), 630 594 GFP_KERNEL) != 0); 631 595 XA_BUG_ON(&xa0, id != 0xffffffffU); 632 - XA_BUG_ON(&xa0, xa_alloc(&xa0, &id, UINT_MAX, xa_mk_value(0), 596 + XA_BUG_ON(&xa0, xa_alloc(&xa0, &id, UINT_MAX, xa_mk_index(id), 633 597 GFP_KERNEL) != -ENOSPC); 634 598 XA_BUG_ON(&xa0, id != 0xffffffffU); 635 599 xa_destroy(&xa0); 600 + 601 + id = 10; 602 + XA_BUG_ON(&xa0, xa_alloc(&xa0, &id, 5, xa_mk_index(id), 603 + GFP_KERNEL) != -ENOSPC); 604 + XA_BUG_ON(&xa0, xa_store_index(&xa0, 3, GFP_KERNEL) != 0); 605 + XA_BUG_ON(&xa0, xa_alloc(&xa0, &id, 5, xa_mk_index(id), 606 + GFP_KERNEL) != -ENOSPC); 607 + xa_erase_index(&xa0, 3); 608 + XA_BUG_ON(&xa0, !xa_empty(&xa0)); 636 609 } 637 610 638 611 static noinline void __check_store_iter(struct xarray *xa, unsigned long start,
··· 655 610 xas_lock(&xas); 656 611 xas_for_each_conflict(&xas, entry) { 657 612 XA_BUG_ON(xa, !xa_is_value(entry)); 658 - XA_BUG_ON(xa, entry < xa_mk_value(start)); 659 - XA_BUG_ON(xa, entry > xa_mk_value(start + (1UL << order) - 1)); 613 + XA_BUG_ON(xa, entry < xa_mk_index(start)); 614 + XA_BUG_ON(xa, entry > xa_mk_index(start + (1UL << order) - 1)); 660 615 count++; 661 616 } 662 - xas_store(&xas, xa_mk_value(start)); 617 + xas_store(&xas, xa_mk_index(start)); 663 618 xas_unlock(&xas); 664 619 if (xas_nomem(&xas, GFP_KERNEL)) { 665 620 count = 0;
··· 667 622 } 668 623 XA_BUG_ON(xa, xas_error(&xas)); 669 624 XA_BUG_ON(xa, count != present); 670 - XA_BUG_ON(xa, xa_load(xa, start) != xa_mk_value(start)); 625 + XA_BUG_ON(xa, xa_load(xa, start) != xa_mk_index(start)); 671 626 XA_BUG_ON(xa, xa_load(xa, start + (1UL << order) - 1) != 672 - xa_mk_value(start)); 627 + xa_mk_index(start)); 673 628 xa_erase_index(xa, start); 674 629 }
··· 748 703 for (j = 0; j < index; j++) { 749 704 XA_STATE(xas, xa, j + index); 750 705 xa_store_index(xa, index - 1, GFP_KERNEL); 751 - xa_store_order(xa, index, i, xa_mk_value(index), 706 + xa_store_order(xa, index, i, xa_mk_index(index), 752 707 GFP_KERNEL); 753 708 rcu_read_lock(); 754 709 xas_for_each(&xas, entry, ULONG_MAX) {
··· 823 778 j = 0; 824 779 index = 0; 825 780 xa_for_each(xa, entry, index, ULONG_MAX, XA_PRESENT) { 826 - XA_BUG_ON(xa, xa_mk_value(index) != entry); 781 + XA_BUG_ON(xa, xa_mk_index(index) != entry); 827 782 XA_BUG_ON(xa, index != j++); 828 783 } 829 784 }
··· 831 786 xa_destroy(xa); 832 787 } 833 788 789 + static noinline void check_find_3(struct xarray *xa) 790 + { 791 + XA_STATE(xas, xa, 0); 792 + unsigned long i, j, k; 793 + void *entry; 794 + 795 + for (i = 0; i < 100; i++) { 796 + for (j = 0; j < 100; j++) { 797 + for (k = 0; k < 100; k++) { 798 + xas_set(&xas, j); 799 + xas_for_each_marked(&xas, entry, k, XA_MARK_0) 800 + ; 801 + if (j > k) 802 + XA_BUG_ON(xa, 803 + xas.xa_node != XAS_RESTART); 804 + } 805 + } 806 + xa_store_index(xa, i, GFP_KERNEL); 807 + xa_set_mark(xa, i, XA_MARK_0); 808 + } 809 + xa_destroy(xa); 810 + } 811 + 834 812 static noinline void check_find(struct xarray *xa) 835 813 { 836 814 check_find_1(xa); 837 815 check_find_2(xa); 816 + check_find_3(xa); 838 817 check_multi_find(xa); 839 818 check_multi_find_2(xa); 840 819 }
··· 898 829 for (index = 0; index < (1UL << (order + 5)); 899 830 index += (1UL << order)) { 900 831 xa_store_order(xa, index, order, 901 - xa_mk_value(index), GFP_KERNEL); 832 + xa_mk_index(index), GFP_KERNEL); 902 833 XA_BUG_ON(xa, xa_load(xa, index) != 903 - xa_mk_value(index)); 834 + xa_mk_index(index)); 904 835 XA_BUG_ON(xa, xa_find_entry(xa, 905 - xa_mk_value(index)) != index); 836 + xa_mk_index(index)) != index); 906 837 } 907 838 XA_BUG_ON(xa, xa_find_entry(xa, xa) != -1); 908 839 xa_destroy(xa);
··· 913 844 XA_BUG_ON(xa, xa_find_entry(xa, xa) != -1); 914 845 xa_store_index(xa, ULONG_MAX, GFP_KERNEL); 915 846 XA_BUG_ON(xa, xa_find_entry(xa, xa) != -1); 916 - XA_BUG_ON(xa, xa_find_entry(xa, xa_mk_value(LONG_MAX)) != -1); 847 + XA_BUG_ON(xa, xa_find_entry(xa, xa_mk_index(ULONG_MAX)) != -1); 917 848 xa_erase_index(xa, ULONG_MAX); 918 849 XA_BUG_ON(xa, !xa_empty(xa)); 919 850 }
··· 933 864 XA_BUG_ON(xa, xas.xa_node == XAS_RESTART); 934 865 XA_BUG_ON(xa, xas.xa_index != i); 935 866 if (i == 0 || i == idx) 936 - XA_BUG_ON(xa, entry != xa_mk_value(i)); 867 + XA_BUG_ON(xa, entry != xa_mk_index(i)); 937 868 else 938 869 XA_BUG_ON(xa, entry != NULL); 939 870 }
··· 947 878 XA_BUG_ON(xa, xas.xa_node == XAS_RESTART); 948 879 XA_BUG_ON(xa, xas.xa_index != i); 949 880 if (i == 0 || i == idx) 950 881 XA_BUG_ON(xa, entry != xa_mk_index(i)); 951 882 else 952 883 XA_BUG_ON(xa, entry != NULL); 953 884 } while (i > 0);
··· 978 909 do { 979 910 void *entry = xas_prev(&xas); 980 911 i--; 981 - XA_BUG_ON(xa, entry != xa_mk_value(i)); 912 + XA_BUG_ON(xa, entry != xa_mk_index(i)); 982 913 XA_BUG_ON(xa, i != xas.xa_index); 983 914 } while (i != 0);
··· 987 918 988 919 do { 989 920 void *entry = xas_next(&xas); 990 - XA_BUG_ON(xa, entry != xa_mk_value(i)); 921 + XA_BUG_ON(xa, entry != xa_mk_index(i)); 991 922 XA_BUG_ON(xa, i != xas.xa_index); 992 923 i++; 993 924 } while (i < (1 << 16));
··· 1003 934 void *entry = xas_prev(&xas); 1004 935 i--; 1005 936 if ((i < (1 << 8)) || (i >= (1 << 15))) 1006 - XA_BUG_ON(xa, entry != xa_mk_value(i)); 937 + XA_BUG_ON(xa, entry != xa_mk_index(i)); 1007 938 else 1008 939 XA_BUG_ON(xa, entry != NULL); 1009 940 XA_BUG_ON(xa, i != xas.xa_index);
··· 1015 946 do { 1016 947 void *entry = xas_next(&xas); 1017 948 if ((i < (1 << 8)) || (i >= (1 << 15))) 1018 - XA_BUG_ON(xa, entry != xa_mk_value(i)); 949 + XA_BUG_ON(xa, entry != xa_mk_index(i)); 1019 950 else 1020 951 XA_BUG_ON(xa, entry != NULL); 1021 952 XA_BUG_ON(xa, i != xas.xa_index);
··· 1045 976 if (xas_error(&xas)) 1046 977 goto unlock; 1047 978 for (i = 0; i < (1U << order); i++) { 1048 - XA_BUG_ON(xa, xas_store(&xas, xa_mk_value(index + i))); 979 + XA_BUG_ON(xa, xas_store(&xas, xa_mk_index(index + i))); 1049 980 xas_next(&xas); 1050 981 } 1051 982 unlock:
··· 1100 1031 if (xas_error(&xas)) 1101 1032 goto unlock; 1102 1033 for (i = 0; i < (1UL << order); i++) { 1103 - void *old = xas_store(&xas, xa_mk_value(base + i)); 1034 + void *old = xas_store(&xas, xa_mk_index(base + i)); 1104 1035 if (xas.xa_index == index) 1105 - XA_BUG_ON(xa, old != xa_mk_value(base + i)); 1036 + XA_BUG_ON(xa, old != xa_mk_index(base + i)); 1106 1037 else 1107 1038 XA_BUG_ON(xa, old != NULL); 1108 1039 xas_next(&xas);
··· 1154 1085 unsigned long last) 1155 1086 { 1156 1087 #ifdef CONFIG_XARRAY_MULTI 1157 - xa_store_range(xa, first, last, xa_mk_value(first), GFP_KERNEL); 1088 + 
xa_store_range(xa, first, last, xa_mk_index(first), GFP_KERNEL); 1158 1089 1159 - XA_BUG_ON(xa, xa_load(xa, first) != xa_mk_value(first)); 1160 - XA_BUG_ON(xa, xa_load(xa, last) != xa_mk_value(first)); 1090 + XA_BUG_ON(xa, xa_load(xa, first) != xa_mk_index(first)); 1091 + XA_BUG_ON(xa, xa_load(xa, last) != xa_mk_index(first)); 1161 1092 XA_BUG_ON(xa, xa_load(xa, first - 1) != NULL); 1162 1093 XA_BUG_ON(xa, xa_load(xa, last + 1) != NULL); 1163 1094 ··· 1264 1195 XA_BUG_ON(xa, xas.xa_node->nr_values != 0); 1265 1196 rcu_read_unlock(); 1266 1197 1267 - xa_store_order(xa, 1 << order, order, xa_mk_value(1 << order), 1198 + xa_store_order(xa, 1 << order, order, xa_mk_index(1UL << order), 1268 1199 GFP_KERNEL); 1269 1200 XA_BUG_ON(xa, xas.xa_node->count != xas.xa_node->nr_values * 2); 1270 1201
+3 -5
lib/xarray.c
···
1131 1131 	entry = xa_head(xas->xa);
1132 1132 	xas->xa_node = NULL;
1133 1133 	if (xas->xa_index > max_index(entry))
1134 - 		goto bounds;
1134 + 		goto out;
1135 1135 	if (!xa_is_node(entry)) {
1136 1136 		if (xa_marked(xas->xa, mark))
1137 1137 			return entry;
···
1180 1180 	}
1181 1181 
1182 1182 out:
1183 - 	if (!max)
1183 + 	if (xas->xa_index > max)
1184 1184 		goto max;
1185 - bounds:
1186 - 	xas->xa_node = XAS_BOUNDS;
1187 - 	return NULL;
1185 + 	return set_bounds(xas);
1188 1186 max:
1189 1187 	xas->xa_node = XAS_RESTART;
1190 1188 	return NULL;
+3 -2
mm/hugetlb.c
···
1248 1248 		(struct hugepage_subpool *)page_private(page);
1249 1249 	bool restore_reserve;
1250 1250 
1251 - 	set_page_private(page, 0);
1252 - 	page->mapping = NULL;
1253 1251 	VM_BUG_ON_PAGE(page_count(page), page);
1254 1252 	VM_BUG_ON_PAGE(page_mapcount(page), page);
1253 + 
1254 + 	set_page_private(page, 0);
1255 + 	page->mapping = NULL;
1255 1256 	restore_reserve = PagePrivate(page);
1256 1257 	ClearPagePrivate(page);
1257 1258 
+1 -1
mm/memblock.c
···
1727 1727 	return -1;
1728 1728 }
1729 1729 
1730 - bool __init memblock_is_reserved(phys_addr_t addr)
1730 + bool __init_memblock memblock_is_reserved(phys_addr_t addr)
1731 1731 {
1732 1732 	return memblock_search(&memblock.reserved, addr) != -1;
1733 1733 }
+1 -3
mm/shmem.c
···
661 661 {
662 662 	void *old;
663 663 
664 - 	xa_lock_irq(&mapping->i_pages);
665 - 	old = __xa_cmpxchg(&mapping->i_pages, index, radswap, NULL, 0);
666 - 	xa_unlock_irq(&mapping->i_pages);
664 + 	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
667 665 	if (old != radswap)
668 666 		return -ENOENT;
669 667 	free_swap_and_cache(radix_to_swp_entry(radswap));
+16
mm/sparse.c
···
240 240 }
241 241 
242 242 /*
243 +  * Mark all memblocks as present using memory_present(). This is a
244 +  * convienence function that is useful for a number of arches
245 +  * to mark all of the systems memory as present during initialization.
246 +  */
247 + void __init memblocks_present(void)
248 + {
249 + 	struct memblock_region *reg;
250 + 
251 + 	for_each_memblock(memory, reg) {
252 + 		memory_present(memblock_get_region_node(reg),
253 + 			       memblock_region_memory_base_pfn(reg),
254 + 			       memblock_region_memory_end_pfn(reg));
255 + 	}
256 + }
257 + 
258 + /*
243 259  * Subtle, we encode the real pfn into the mem_map such that
244 260  * the identity pfn - section_mem_map will return the actual
245 261  * physical page frame number.
+1 -1
net/can/raw.c
···
771 771 	if (err < 0)
772 772 		goto free_skb;
773 773 
774 - 	sock_tx_timestamp(sk, sk->sk_tsflags, &skb_shinfo(skb)->tx_flags);
774 + 	skb_setup_tx_timestamp(skb, sk->sk_tsflags);
775 775 
776 776 	skb->dev = dev;
777 777 	skb->sk = sk;
+5 -1
net/core/flow_dissector.c
···
783 783 	/* Pass parameters to the BPF program */
784 784 	cb->qdisc_cb.flow_keys = &flow_keys;
785 785 	flow_keys.nhoff = nhoff;
786 + 	flow_keys.thoff = nhoff;
786 787 
787 788 	bpf_compute_data_pointers((struct sk_buff *)skb);
788 789 	result = BPF_PROG_RUN(attached, skb);
···
791 790 	/* Restore state */
792 791 	memcpy(cb, &cb_saved, sizeof(cb_saved));
793 792 
793 + 	flow_keys.nhoff = clamp_t(u16, flow_keys.nhoff, 0, skb->len);
794 + 	flow_keys.thoff = clamp_t(u16, flow_keys.thoff,
795 + 				  flow_keys.nhoff, skb->len);
796 + 
794 797 	__skb_flow_bpf_to_target(&flow_keys, flow_dissector,
795 798 				 target_container);
796 - 	key_control->thoff = min_t(u16, key_control->thoff, skb->len);
797 799 	rcu_read_unlock();
798 800 	return result == BPF_OK;
799 801 }
+1
net/core/gro_cells.c
···
84 84 	for_each_possible_cpu(i) {
85 85 		struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);
86 86 
87 + 		napi_disable(&cell->napi);
87 88 		netif_napi_del(&cell->napi);
88 89 		__skb_queue_purge(&cell->napi_skbs);
89 90 	}
+6 -1
net/core/neighbour.c
···
2494 2494 
2495 2495 	ndm = nlmsg_data(nlh);
2496 2496 	if (ndm->ndm_pad1 || ndm->ndm_pad2 || ndm->ndm_ifindex ||
2497 - 	    ndm->ndm_state || ndm->ndm_flags || ndm->ndm_type) {
2497 + 	    ndm->ndm_state || ndm->ndm_type) {
2498 2498 		NL_SET_ERR_MSG(extack, "Invalid values in header for neighbor dump request");
2499 + 		return -EINVAL;
2500 + 	}
2501 + 
2502 + 	if (ndm->ndm_flags & ~NTF_PROXY) {
2503 + 		NL_SET_ERR_MSG(extack, "Invalid flags in header for neighbor dump request");
2499 2504 		return -EINVAL;
2500 2505 	}
2501 2506 
+17 -3
net/core/sysctl_net_core.c
···
28 28 static int min_sndbuf = SOCK_MIN_SNDBUF;
29 29 static int min_rcvbuf = SOCK_MIN_RCVBUF;
30 30 static int max_skb_frags = MAX_SKB_FRAGS;
31 + static long long_one __maybe_unused = 1;
32 + static long long_max __maybe_unused = LONG_MAX;
31 33 
32 34 static int net_msg_warn;	/* Unused, but still a sysctl */
33 35 
···
291 289 
292 290 	return proc_dointvec_minmax(table, write, buffer, lenp, ppos);
293 291 }
292 + 
293 + static int
294 + proc_dolongvec_minmax_bpf_restricted(struct ctl_table *table, int write,
295 + 				     void __user *buffer, size_t *lenp,
296 + 				     loff_t *ppos)
297 + {
298 + 	if (!capable(CAP_SYS_ADMIN))
299 + 		return -EPERM;
300 + 
301 + 	return proc_doulongvec_minmax(table, write, buffer, lenp, ppos);
302 + }
294 303 #endif
295 304 
296 305 static struct ctl_table net_core_table[] = {
···
411 398 	{
412 399 		.procname	= "bpf_jit_limit",
413 400 		.data		= &bpf_jit_limit,
414 - 		.maxlen		= sizeof(int),
401 + 		.maxlen		= sizeof(long),
415 402 		.mode		= 0600,
416 - 		.proc_handler	= proc_dointvec_minmax_bpf_restricted,
417 - 		.extra1		= &one,
403 + 		.proc_handler	= proc_dolongvec_minmax_bpf_restricted,
404 + 		.extra1		= &long_one,
405 + 		.extra2		= &long_max,
418 406 	},
419 407 #endif
420 408 	{
+3 -2
net/ipv4/devinet.c
···
952 952 {
953 953 	int rc = -1;	/* Something else, probably a multicast. */
954 954 
955 - 	if (ipv4_is_zeronet(addr))
955 + 	if (ipv4_is_zeronet(addr) || ipv4_is_lbcast(addr))
956 956 		rc = 0;
957 957 	else {
958 958 		__u32 haddr = ntohl(addr);
959 - 
960 959 		if (IN_CLASSA(haddr))
961 960 			rc = 8;
962 961 		else if (IN_CLASSB(haddr))
963 962 			rc = 16;
964 963 		else if (IN_CLASSC(haddr))
965 964 			rc = 24;
965 + 		else if (IN_CLASSE(haddr))
966 + 			rc = 32;
966 967 	}
967 968 
968 969 	return rc;
+1
net/ipv4/ip_forward.c
···
72 72 	if (unlikely(opt->optlen))
73 73 		ip_forward_options(skb);
74 74 
75 + 	skb->tstamp = 0;
75 76 	return dst_output(net, sk, skb);
76 77 }
77 78 
+12 -6
net/ipv4/ip_fragment.c
···
346 346 	struct net *net = container_of(qp->q.net, struct net, ipv4.frags);
347 347 	struct rb_node **rbn, *parent;
348 348 	struct sk_buff *skb1, *prev_tail;
349 + 	int ihl, end, skb1_run_end;
349 350 	struct net_device *dev;
350 351 	unsigned int fragsize;
351 352 	int flags, offset;
352 - 	int ihl, end;
353 353 	int err = -ENOENT;
354 354 	u8 ecn;
355 355 
···
419 419 	 * overlapping fragment, the entire datagram (and any constituent
420 420 	 * fragments) MUST be silently discarded.
421 421 	 *
422 - 	 * We do the same here for IPv4 (and increment an snmp counter).
422 + 	 * We do the same here for IPv4 (and increment an snmp counter) but
423 + 	 * we do not want to drop the whole queue in response to a duplicate
424 + 	 * fragment.
423 425 	 */
424 426 
425 427 	err = -EINVAL;
···
446 444 	do {
447 445 		parent = *rbn;
448 446 		skb1 = rb_to_skb(parent);
447 + 		skb1_run_end = skb1->ip_defrag_offset +
448 + 			       FRAG_CB(skb1)->frag_run_len;
449 449 		if (end <= skb1->ip_defrag_offset)
450 450 			rbn = &parent->rb_left;
451 - 		else if (offset >= skb1->ip_defrag_offset +
452 - 			 FRAG_CB(skb1)->frag_run_len)
451 + 		else if (offset >= skb1_run_end)
453 452 			rbn = &parent->rb_right;
454 - 		else /* Found an overlap with skb1. */
455 - 			goto overlap;
453 + 		else if (offset >= skb1->ip_defrag_offset &&
454 + 			 end <= skb1_run_end)
455 + 			goto err; /* No new data, potential duplicate */
456 + 		else
457 + 			goto overlap; /* Found an overlap */
456 458 	} while (*rbn);
457 459 	/* Here we have parent properly set, and rbn pointing to
458 460 	 * one of its NULL left/right children. Insert skb.
+2
net/ipv4/ipconfig.c
···
429 429 		ic_netmask = htonl(IN_CLASSB_NET);
430 430 	else if (IN_CLASSC(ntohl(ic_myaddr)))
431 431 		ic_netmask = htonl(IN_CLASSC_NET);
432 + 	else if (IN_CLASSE(ntohl(ic_myaddr)))
433 + 		ic_netmask = htonl(IN_CLASSE_NET);
432 434 	else {
433 435 		pr_err("IP-Config: Unable to guess netmask for address %pI4\n",
434 436 		       &ic_myaddr);
+4
net/ipv4/ipmr.c
···
69 69 #include <net/nexthop.h>
70 70 #include <net/switchdev.h>
71 71 
72 + #include <linux/nospec.h>
73 + 
72 74 struct ipmr_rule {
73 75 	struct fib_rule common;
74 76 };
···
1614 1612 			return -EFAULT;
1615 1613 		if (vr.vifi >= mrt->maxvif)
1616 1614 			return -EINVAL;
1615 + 		vr.vifi = array_index_nospec(vr.vifi, mrt->maxvif);
1617 1616 		read_lock(&mrt_lock);
1618 1617 		vif = &mrt->vif_table[vr.vifi];
1619 1618 		if (VIF_EXISTS(mrt, vr.vifi)) {
···
1689 1686 			return -EFAULT;
1690 1687 		if (vr.vifi >= mrt->maxvif)
1691 1688 			return -EINVAL;
1689 + 		vr.vifi = array_index_nospec(vr.vifi, mrt->maxvif);
1692 1690 		read_lock(&mrt_lock);
1693 1691 		vif = &mrt->vif_table[vr.vifi];
1694 1692 		if (VIF_EXISTS(mrt, vr.vifi)) {
+1 -1
net/ipv4/raw.c
···
391 391 
392 392 	skb->ip_summed = CHECKSUM_NONE;
393 393 
394 - 	sock_tx_timestamp(sk, sockc->tsflags, &skb_shinfo(skb)->tx_flags);
394 + 	skb_setup_tx_timestamp(skb, sockc->tsflags);
395 395 
396 396 	if (flags & MSG_CONFIRM)
397 397 		skb_set_dst_pending_confirm(skb, 1);
+1
net/ipv6/ip6_output.c
···
378 378 	__IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTFORWDATAGRAMS);
379 379 	__IP6_ADD_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTOCTETS, skb->len);
380 380 
381 + 	skb->tstamp = 0;
381 382 	return dst_output(net, sk, skb);
382 383 }
383 384 
+2 -1
net/ipv6/ip6_udp_tunnel.c
···
15 15 int udp_sock_create6(struct net *net, struct udp_port_cfg *cfg,
16 16 		     struct socket **sockp)
17 17 {
18 - 	struct sockaddr_in6 udp6_addr;
18 + 	struct sockaddr_in6 udp6_addr = {};
19 19 	int err;
20 20 	struct socket *sock = NULL;
21 21 
···
42 42 		goto error;
43 43 
44 44 	if (cfg->peer_udp_port) {
45 + 		memset(&udp6_addr, 0, sizeof(udp6_addr));
45 46 		udp6_addr.sin6_family = AF_INET6;
46 47 		memcpy(&udp6_addr.sin6_addr, &cfg->peer_ip6,
47 48 		       sizeof(udp6_addr.sin6_addr));
+4
net/ipv6/ip6mr.c
···
52 52 #include <net/ip6_checksum.h>
53 53 #include <linux/netconf.h>
54 54 
55 + #include <linux/nospec.h>
56 + 
55 57 struct ip6mr_rule {
56 58 	struct fib_rule common;
57 59 };
···
1843 1841 			return -EFAULT;
1844 1842 		if (vr.mifi >= mrt->maxvif)
1845 1843 			return -EINVAL;
1844 + 		vr.mifi = array_index_nospec(vr.mifi, mrt->maxvif);
1846 1845 		read_lock(&mrt_lock);
1847 1846 		vif = &mrt->vif_table[vr.mifi];
1848 1847 		if (VIF_EXISTS(mrt, vr.mifi)) {
···
1918 1915 			return -EFAULT;
1919 1916 		if (vr.mifi >= mrt->maxvif)
1920 1917 			return -EINVAL;
1918 + 		vr.mifi = array_index_nospec(vr.mifi, mrt->maxvif);
1921 1919 		read_lock(&mrt_lock);
1922 1920 		vif = &mrt->vif_table[vr.mifi];
1923 1921 		if (VIF_EXISTS(mrt, vr.mifi)) {
+2
net/ipv6/raw.c
···
658 658 
659 659 	skb->ip_summed = CHECKSUM_NONE;
660 660 
661 + 	skb_setup_tx_timestamp(skb, sockc->tsflags);
662 + 
661 663 	if (flags & MSG_CONFIRM)
662 664 		skb_set_dst_pending_confirm(skb, 1);
663 665 
+3
net/mac80211/iface.c
···
7 7  * Copyright 2008, Johannes Berg <johannes@sipsolutions.net>
8 8  * Copyright 2013-2014 Intel Mobile Communications GmbH
9 9  * Copyright (c) 2016 Intel Deutschland GmbH
10 +  * Copyright (C) 2018 Intel Corporation
10 11  *
11 12  * This program is free software; you can redistribute it and/or modify
12 13  * it under the terms of the GNU General Public License version 2 as
···
1951 1950 
1952 1951 	WARN(local->open_count, "%s: open count remains %d\n",
1953 1952 	     wiphy_name(local->hw.wiphy), local->open_count);
1953 + 
1954 + 	ieee80211_txq_teardown_flows(local);
1954 1955 
1955 1956 	mutex_lock(&local->iflist_mtx);
1956 1957 	list_for_each_entry_safe(sdata, tmp, &local->interfaces, list) {
-2
net/mac80211/main.c
···
1262 1262 	rtnl_unlock();
1263 1263 	ieee80211_led_exit(local);
1264 1264 	ieee80211_wep_free(local);
1265 - 	ieee80211_txq_teardown_flows(local);
1266 1265  fail_flows:
1267 1266 	destroy_workqueue(local->workqueue);
1268 1267  fail_workqueue:
···
1287 1288 #if IS_ENABLED(CONFIG_IPV6)
1288 1289 	unregister_inet6addr_notifier(&local->ifa6_notifier);
1289 1290 #endif
1290 - 	ieee80211_txq_teardown_flows(local);
1291 1291 
1292 1292 	rtnl_lock();
1293 1293 
+5
net/mac80211/status.c
···
556 556 	}
557 557 
558 558 	ieee80211_led_tx(local);
559 + 
560 + 	if (skb_has_frag_list(skb)) {
561 + 		kfree_skb_list(skb_shinfo(skb)->frag_list);
562 + 		skb_shinfo(skb)->frag_list = NULL;
563 + 	}
559 564 }
560 565 
561 566 /*
+1 -1
net/netfilter/ipset/ip_set_list_set.c
···
531 531 		ret = -EMSGSIZE;
532 532 	} else {
533 533 		cb->args[IPSET_CB_ARG0] = i;
534 + 		ipset_nest_end(skb, atd);
534 535 	}
535 - 	ipset_nest_end(skb, atd);
536 536 out:
537 537 	rcu_read_unlock();
538 538 	return ret;
+1 -1
net/netfilter/nf_conncount.c
···
427 427 	count = 1;
428 428 	rbconn->list.count = count;
429 429 
430 - 	rb_link_node(&rbconn->node, parent, rbnode);
430 + 	rb_link_node_rcu(&rbconn->node, parent, rbnode);
431 431 	rb_insert_color(&rbconn->node, root);
432 432 out_unlock:
433 433 	spin_unlock_bh(&nf_conncount_locks[hash % CONNCOUNT_LOCK_SLOTS]);
+4 -3
net/netfilter/nf_conntrack_seqadj.c
···
115 115 /* TCP SACK sequence number adjustment */
116 116 static unsigned int nf_ct_sack_adjust(struct sk_buff *skb,
117 117 				      unsigned int protoff,
118 - 				      struct tcphdr *tcph,
119 118 				      struct nf_conn *ct,
120 119 				      enum ip_conntrack_info ctinfo)
121 120 {
122 - 	unsigned int dir, optoff, optend;
121 + 	struct tcphdr *tcph = (void *)skb->data + protoff;
123 122 	struct nf_conn_seqadj *seqadj = nfct_seqadj(ct);
123 + 	unsigned int dir, optoff, optend;
124 124 
125 125 	optoff = protoff + sizeof(struct tcphdr);
126 126 	optend = protoff + tcph->doff * 4;
···
128 128 	if (!skb_make_writable(skb, optend))
129 129 		return 0;
130 130 
131 + 	tcph = (void *)skb->data + protoff;
131 132 	dir = CTINFO2DIR(ctinfo);
132 133 
133 134 	while (optoff < optend) {
···
208 207 			 ntohl(newack));
209 208 	tcph->ack_seq = newack;
210 209 
211 - 	res = nf_ct_sack_adjust(skb, protoff, tcph, ct, ctinfo);
210 + 	res = nf_ct_sack_adjust(skb, protoff, ct, ctinfo);
212 211 out:
213 212 	spin_unlock_bh(&ct->lock);
214 213 
+2 -1
net/netfilter/nf_nat_core.c
···
117 117 	dst = skb_dst(skb);
118 118 	if (dst->xfrm)
119 119 		dst = ((struct xfrm_dst *)dst)->route;
120 - 	dst_hold(dst);
120 + 	if (!dst_hold_safe(dst))
121 + 		return -EHOSTUNREACH;
121 122 
122 123 	if (sk && !net_eq(net, sock_net(sk)))
123 124 		sk = NULL;
+13 -8
net/netfilter/nf_tables_api.c
···
1216 1216 	if (nla_put_string(skb, NFTA_CHAIN_TYPE, basechain->type->name))
1217 1217 		goto nla_put_failure;
1218 1218 
1219 - 	if (basechain->stats && nft_dump_stats(skb, basechain->stats))
1219 + 	if (rcu_access_pointer(basechain->stats) &&
1220 + 	    nft_dump_stats(skb, rcu_dereference(basechain->stats)))
1220 1221 		goto nla_put_failure;
1221 1222 }
1222 1223 
···
1393 1392 	return newstats;
1394 1393 }
1395 1394 
1396 - static void nft_chain_stats_replace(struct nft_base_chain *chain,
1395 + static void nft_chain_stats_replace(struct net *net,
1396 + 				    struct nft_base_chain *chain,
1397 1397 				    struct nft_stats __percpu *newstats)
1398 1398 {
1399 1399 	struct nft_stats __percpu *oldstats;
···
1402 1400 	if (newstats == NULL)
1403 1401 		return;
1404 1402 
1405 - 	if (chain->stats) {
1406 - 		oldstats = nfnl_dereference(chain->stats, NFNL_SUBSYS_NFTABLES);
1403 + 	if (rcu_access_pointer(chain->stats)) {
1404 + 		oldstats = rcu_dereference_protected(chain->stats,
1405 + 					lockdep_commit_lock_is_held(net));
1407 1406 		rcu_assign_pointer(chain->stats, newstats);
1408 1407 		synchronize_rcu();
1409 1408 		free_percpu(oldstats);
···
1442 1439 		struct nft_base_chain *basechain = nft_base_chain(chain);
1443 1440 
1444 1441 		module_put(basechain->type->owner);
1445 - 		free_percpu(basechain->stats);
1446 - 		if (basechain->stats)
1442 + 		if (rcu_access_pointer(basechain->stats)) {
1447 1443 			static_branch_dec(&nft_counters_enabled);
1444 + 			free_percpu(rcu_dereference_raw(basechain->stats));
1445 + 		}
1448 1446 		kfree(chain->name);
1449 1447 		kfree(basechain);
1450 1448 	} else {
···
1594 1590 			kfree(basechain);
1595 1591 			return PTR_ERR(stats);
1596 1592 		}
1597 - 		basechain->stats = stats;
1593 + 		rcu_assign_pointer(basechain->stats, stats);
1598 1594 		static_branch_inc(&nft_counters_enabled);
1599 1595 	}
1600 1596 
···
6184 6180 		return;
6185 6181 
6186 6182 	basechain = nft_base_chain(trans->ctx.chain);
6187 - 	nft_chain_stats_replace(basechain, nft_trans_chain_stats(trans));
6183 + 	nft_chain_stats_replace(trans->ctx.net, basechain,
6184 + 				nft_trans_chain_stats(trans));
6188 6185 
6189 6186 	switch (nft_trans_chain_policy(trans)) {
6190 6187 	case NF_DROP:
+1 -1
net/netfilter/nf_tables_core.c
···
101 101 	struct nft_stats *stats;
102 102 
103 103 	base_chain = nft_base_chain(chain);
104 - 	if (!base_chain->stats)
104 + 	if (!rcu_access_pointer(base_chain->stats))
105 105 		return;
106 106 
107 107 	local_bh_disable();
+2 -2
net/netlink/af_netlink.c
···
1706 1706 			nlk->flags &= ~NETLINK_F_EXT_ACK;
1707 1707 		err = 0;
1708 1708 		break;
1709 - 	case NETLINK_DUMP_STRICT_CHK:
1709 + 	case NETLINK_GET_STRICT_CHK:
1710 1710 		if (val)
1711 1711 			nlk->flags |= NETLINK_F_STRICT_CHK;
1712 1712 		else
···
1806 1806 			return -EFAULT;
1807 1807 		err = 0;
1808 1808 		break;
1809 - 	case NETLINK_DUMP_STRICT_CHK:
1809 + 	case NETLINK_GET_STRICT_CHK:
1810 1810 		if (len < sizeof(int))
1811 1811 			return -EINVAL;
1812 1812 		len = sizeof(int);
+3 -3
net/packet/af_packet.c
···
1965 1965 	skb->mark = sk->sk_mark;
1966 1966 	skb->tstamp = sockc.transmit_time;
1967 1967 
1968 - 	sock_tx_timestamp(sk, sockc.tsflags, &skb_shinfo(skb)->tx_flags);
1968 + 	skb_setup_tx_timestamp(skb, sockc.tsflags);
1969 1969 
1970 1970 	if (unlikely(extra_len == 4))
1971 1971 		skb->no_fcs = 1;
···
2460 2460 	skb->priority = po->sk.sk_priority;
2461 2461 	skb->mark = po->sk.sk_mark;
2462 2462 	skb->tstamp = sockc->transmit_time;
2463 - 	sock_tx_timestamp(&po->sk, sockc->tsflags, &skb_shinfo(skb)->tx_flags);
2463 + 	skb_setup_tx_timestamp(skb, sockc->tsflags);
2464 2464 	skb_zcopy_set_nouarg(skb, ph.raw);
2465 2465 
2466 2466 	skb_reserve(skb, hlen);
···
2898 2898 		goto out_free;
2899 2899 	}
2900 2900 
2901 - 	sock_tx_timestamp(sk, sockc.tsflags, &skb_shinfo(skb)->tx_flags);
2901 + 	skb_setup_tx_timestamp(skb, sockc.tsflags);
2902 2902 
2903 2903 	if (!vnet_hdr.gso_type && (len > dev->mtu + reserve + extra_len) &&
2904 2904 	    !packet_extra_vlan_len_allowed(dev, skb)) {
+19 -7
net/rds/message.c
···
308 308 /*
309 309  * RDS ops use this to grab SG entries from the rm's sg pool.
310 310  */
311 - struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents)
311 + struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents,
312 + 					  int *ret)
312 313 {
313 314 	struct scatterlist *sg_first = (struct scatterlist *) &rm[1];
314 315 	struct scatterlist *sg_ret;
315 316 
316 - 	WARN_ON(rm->m_used_sgs + nents > rm->m_total_sgs);
317 - 	WARN_ON(!nents);
318 - 
319 - 	if (rm->m_used_sgs + nents > rm->m_total_sgs)
317 + 	if (WARN_ON(!ret))
320 318 		return NULL;
319 + 
320 + 	if (nents <= 0) {
321 + 		pr_warn("rds: alloc sgs failed! nents <= 0\n");
322 + 		*ret = -EINVAL;
323 + 		return NULL;
324 + 	}
325 + 
326 + 	if (rm->m_used_sgs + nents > rm->m_total_sgs) {
327 + 		pr_warn("rds: alloc sgs failed! total %d used %d nents %d\n",
328 + 			rm->m_total_sgs, rm->m_used_sgs, nents);
329 + 		*ret = -ENOMEM;
330 + 		return NULL;
331 + 	}
321 332 
322 333 	sg_ret = &sg_first[rm->m_used_sgs];
323 334 	sg_init_table(sg_ret, nents);
···
343 332 	unsigned int i;
344 333 	int num_sgs = ceil(total_len, PAGE_SIZE);
345 334 	int extra_bytes = num_sgs * sizeof(struct scatterlist);
335 + 	int ret;
346 336 
347 337 	rm = rds_message_alloc(extra_bytes, GFP_NOWAIT);
348 338 	if (!rm)
···
352 340 	set_bit(RDS_MSG_PAGEVEC, &rm->m_flags);
353 341 	rm->m_inc.i_hdr.h_len = cpu_to_be32(total_len);
354 342 	rm->data.op_nents = ceil(total_len, PAGE_SIZE);
355 - 	rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs);
343 + 	rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs, &ret);
356 344 	if (!rm->data.op_sg) {
357 345 		rds_message_put(rm);
358 - 		return ERR_PTR(-ENOMEM);
346 + 		return ERR_PTR(ret);
359 347 	}
360 348 
361 349 	for (i = 0; i < rm->data.op_nents; ++i) {
+37 -40
net/rds/rdma.c
···
517 517 	return tot_pages;
518 518 }
519 519 
520 - int rds_rdma_extra_size(struct rds_rdma_args *args)
520 + int rds_rdma_extra_size(struct rds_rdma_args *args,
521 + 			struct rds_iov_vector *iov)
521 522 {
522 - 	struct rds_iovec vec;
523 + 	struct rds_iovec *vec;
523 524 	struct rds_iovec __user *local_vec;
524 525 	int tot_pages = 0;
525 526 	unsigned int nr_pages;
···
531 530 	if (args->nr_local == 0)
532 531 		return -EINVAL;
533 532 
534 - 	/* figure out the number of pages in the vector */
535 - 	for (i = 0; i < args->nr_local; i++) {
536 - 		if (copy_from_user(&vec, &local_vec[i],
537 - 				   sizeof(struct rds_iovec)))
538 - 			return -EFAULT;
533 + 	iov->iov = kcalloc(args->nr_local,
534 + 			   sizeof(struct rds_iovec),
535 + 			   GFP_KERNEL);
536 + 	if (!iov->iov)
537 + 		return -ENOMEM;
539 538 
540 - 		nr_pages = rds_pages_in_vec(&vec);
539 + 	vec = &iov->iov[0];
540 + 
541 + 	if (copy_from_user(vec, local_vec, args->nr_local *
542 + 			   sizeof(struct rds_iovec)))
543 + 		return -EFAULT;
544 + 	iov->len = args->nr_local;
545 + 
546 + 	/* figure out the number of pages in the vector */
547 + 	for (i = 0; i < args->nr_local; i++, vec++) {
548 + 
549 + 		nr_pages = rds_pages_in_vec(vec);
541 550 		if (nr_pages == 0)
542 551 			return -EINVAL;
···
569 558  * Extract all arguments and set up the rdma_op
570 559  */
571 560 int rds_cmsg_rdma_args(struct rds_sock *rs, struct rds_message *rm,
572 - 		       struct cmsghdr *cmsg)
561 + 		       struct cmsghdr *cmsg,
562 + 		       struct rds_iov_vector *vec)
573 563 {
574 564 	struct rds_rdma_args *args;
575 565 	struct rm_rdma_op *op = &rm->rdma;
576 566 	int nr_pages;
577 567 	unsigned int nr_bytes;
578 568 	struct page **pages = NULL;
579 - 	struct rds_iovec iovstack[UIO_FASTIOV], *iovs = iovstack;
580 - 	int iov_size;
569 + 	struct rds_iovec *iovs;
581 570 	unsigned int i, j;
582 571 	int ret = 0;
···
597 586 		goto out_ret;
598 587 	}
599 588 
600 - 	/* Check whether to allocate the iovec area */
601 - 	iov_size = args->nr_local * sizeof(struct rds_iovec);
602 - 	if (args->nr_local > UIO_FASTIOV) {
603 - 		iovs = sock_kmalloc(rds_rs_to_sk(rs), iov_size, GFP_KERNEL);
604 - 		if (!iovs) {
605 - 			ret = -ENOMEM;
606 - 			goto out_ret;
607 - 		}
589 + 	if (vec->len != args->nr_local) {
590 + 		ret = -EINVAL;
591 + 		goto out_ret;
608 592 	}
609 593 
610 - 	if (copy_from_user(iovs, (struct rds_iovec __user *)(unsigned long) args->local_vec_addr, iov_size)) {
611 - 		ret = -EFAULT;
612 - 		goto out;
613 - 	}
594 + 	iovs = vec->iov;
614 595 
615 596 	nr_pages = rds_rdma_pages(iovs, args->nr_local);
616 597 	if (nr_pages < 0) {
617 598 		ret = -EINVAL;
618 - 		goto out;
599 + 		goto out_ret;
619 600 	}
620 601 
621 602 	pages = kcalloc(nr_pages, sizeof(struct page *), GFP_KERNEL);
622 603 	if (!pages) {
623 604 		ret = -ENOMEM;
624 - 		goto out;
605 + 		goto out_ret;
625 606 	}
626 607 
627 608 	op->op_write = !!(args->flags & RDS_RDMA_READWRITE);
···
623 620 	op->op_active = 1;
624 621 	op->op_recverr = rs->rs_recverr;
625 622 	WARN_ON(!nr_pages);
626 - 	op->op_sg = rds_message_alloc_sgs(rm, nr_pages);
627 - 	if (!op->op_sg) {
628 - 		ret = -ENOMEM;
629 - 		goto out;
630 - 	}
623 + 	op->op_sg = rds_message_alloc_sgs(rm, nr_pages, &ret);
624 + 	if (!op->op_sg)
625 + 		goto out_pages;
631 626 
632 627 	if (op->op_notify || op->op_recverr) {
633 628 		/* We allocate an uninitialized notifier here, because
···
636 635 		op->op_notifier = kmalloc(sizeof(struct rds_notifier), GFP_KERNEL);
637 636 		if (!op->op_notifier) {
638 637 			ret = -ENOMEM;
639 - 			goto out;
638 + 			goto out_pages;
640 639 		}
641 640 		op->op_notifier->n_user_token = args->user_token;
642 641 		op->op_notifier->n_status = RDS_RDMA_SUCCESS;
···
682 681 		 */
683 682 		ret = rds_pin_pages(iov->addr, nr, pages, !op->op_write);
684 683 		if (ret < 0)
685 - 			goto out;
684 + 			goto out_pages;
686 685 		else
687 686 			ret = 0;
···
715 714 			nr_bytes,
716 715 			(unsigned int) args->remote_vec.bytes);
717 716 		ret = -EINVAL;
718 - 		goto out;
717 + 		goto out_pages;
719 718 	}
720 719 	op->op_bytes = nr_bytes;
721 720 
722 - out:
723 - 	if (iovs != iovstack)
724 - 		sock_kfree_s(rds_rs_to_sk(rs), iovs, iov_size);
721 + out_pages:
725 722 	kfree(pages);
726 723 out_ret:
727 724 	if (ret)
···
837 838 	rm->atomic.op_silent = !!(args->flags & RDS_RDMA_SILENT);
838 839 	rm->atomic.op_active = 1;
839 840 	rm->atomic.op_recverr = rs->rs_recverr;
840 - 	rm->atomic.op_sg = rds_message_alloc_sgs(rm, 1);
841 - 	if (!rm->atomic.op_sg) {
842 - 		ret = -ENOMEM;
841 + 	rm->atomic.op_sg = rds_message_alloc_sgs(rm, 1, &ret);
842 + 	if (!rm->atomic.op_sg)
843 843 		goto err;
844 - 	}
845 844 
846 845 	/* verify 8 byte-aligned */
847 846 	if (args->local_addr & 0x7) {
+18 -5
net/rds/rds.h
···
386 386 	INIT_LIST_HEAD(&q->zcookie_head);
387 387 }
388 388 
389 + struct rds_iov_vector {
390 + 	struct rds_iovec *iov;
391 + 	int len;
392 + };
393 + 
394 + struct rds_iov_vector_arr {
395 + 	struct rds_iov_vector *vec;
396 + 	int len;
397 + 	int indx;
398 + 	int incr;
399 + };
400 + 
389 401 struct rds_message {
390 402 	refcount_t m_refcount;
391 403 	struct list_head m_sock_item;
···
839 827 
840 828 /* message.c */
841 829 struct rds_message *rds_message_alloc(unsigned int nents, gfp_t gfp);
842 - struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents);
830 + struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents,
831 + 					  int *ret);
843 832 int rds_message_copy_from_user(struct rds_message *rm, struct iov_iter *from,
844 833 			       bool zcopy);
845 834 struct rds_message *rds_message_map_pages(unsigned long *page_addrs, unsigned int total_len);
···
917 904 int rds_get_mr_for_dest(struct rds_sock *rs, char __user *optval, int optlen);
918 905 int rds_free_mr(struct rds_sock *rs, char __user *optval, int optlen);
919 906 void rds_rdma_drop_keys(struct rds_sock *rs);
920 - int rds_rdma_extra_size(struct rds_rdma_args *args);
921 - int rds_cmsg_rdma_args(struct rds_sock *rs, struct rds_message *rm,
922 - 		       struct cmsghdr *cmsg);
907 + int rds_rdma_extra_size(struct rds_rdma_args *args,
908 + 			struct rds_iov_vector *iov);
923 909 int rds_cmsg_rdma_dest(struct rds_sock *rs, struct rds_message *rm,
924 910 		       struct cmsghdr *cmsg);
925 911 int rds_cmsg_rdma_args(struct rds_sock *rs, struct rds_message *rm,
926 - 		       struct cmsghdr *cmsg);
912 + 		       struct cmsghdr *cmsg,
913 + 		       struct rds_iov_vector *vec);
927 914 int rds_cmsg_rdma_map(struct rds_sock *rs, struct rds_message *rm,
928 915 		      struct cmsghdr *cmsg);
929 916 void rds_rdma_free_op(struct rm_rdma_op *ro);
+50 -11
net/rds/send.c
···
876 876 * rds_message is getting to be quite complicated, and we'd like to allocate
877 877 * it all in one go. This figures out how big it needs to be up front.
878 878 */
879 - static int rds_rm_size(struct msghdr *msg, int num_sgs)
879 + static int rds_rm_size(struct msghdr *msg, int num_sgs,
880 + struct rds_iov_vector_arr *vct)
880 881 {
881 882 struct cmsghdr *cmsg;
882 883 int size = 0;
883 884 int cmsg_groups = 0;
884 885 int retval;
885 886 bool zcopy_cookie = false;
887 + struct rds_iov_vector *iov, *tmp_iov;
888 +
889 + if (num_sgs < 0)
890 + return -EINVAL;
886 891
887 892 for_each_cmsghdr(cmsg, msg) {
888 893 if (!CMSG_OK(msg, cmsg))
···
898 893
899 894 switch (cmsg->cmsg_type) {
900 895 case RDS_CMSG_RDMA_ARGS:
896 + if (vct->indx >= vct->len) {
897 + vct->len += vct->incr;
898 + tmp_iov =
899 + krealloc(vct->vec,
900 + vct->len *
901 + sizeof(struct rds_iov_vector),
902 + GFP_KERNEL);
903 + if (!tmp_iov) {
904 + vct->len -= vct->incr;
905 + return -ENOMEM;
906 + }
907 + vct->vec = tmp_iov;
908 + }
909 + iov = &vct->vec[vct->indx];
910 + memset(iov, 0, sizeof(struct rds_iov_vector));
911 + vct->indx++;
901 912 cmsg_groups |= 1;
902 - retval = rds_rdma_extra_size(CMSG_DATA(cmsg));
913 + retval = rds_rdma_extra_size(CMSG_DATA(cmsg), iov);
903 914 if (retval < 0)
904 915 return retval;
905 916 size += retval;
···
972 951 }
973 952
974 953 static int rds_cmsg_send(struct rds_sock *rs, struct rds_message *rm,
975 - struct msghdr *msg, int *allocated_mr)
954 + struct msghdr *msg, int *allocated_mr,
955 + struct rds_iov_vector_arr *vct)
976 956 {
977 957 struct cmsghdr *cmsg;
978 - int ret = 0;
958 + int ret = 0, ind = 0;
979 959
980 960 for_each_cmsghdr(cmsg, msg) {
981 961 if (!CMSG_OK(msg, cmsg))
···
990 968 */
991 969 switch (cmsg->cmsg_type) {
992 970 case RDS_CMSG_RDMA_ARGS:
993 - ret = rds_cmsg_rdma_args(rs, rm, cmsg);
971 + if (ind >= vct->indx)
972 + return -ENOMEM;
973 + ret = rds_cmsg_rdma_args(rs, rm, cmsg, &vct->vec[ind]);
974 + ind++;
994 975 break;
995 976
996 977 case RDS_CMSG_RDMA_DEST:
···
1109 1084 sock_flag(rds_rs_to_sk(rs), SOCK_ZEROCOPY));
1110 1085 int num_sgs = ceil(payload_len, PAGE_SIZE);
1111 1086 int namelen;
1087 + struct rds_iov_vector_arr vct;
1088 + int ind;
1089 +
1090 + memset(&vct, 0, sizeof(vct));
1091 +
1092 + /* expect 1 RDMA CMSG per rds_sendmsg. can still grow if more needed. */
1093 + vct.incr = 1;
1112 1094
1113 1095 /* Mirror Linux UDP mirror of BSD error message compatibility */
1114 1096 /* XXX: Perhaps MSG_MORE someday */
···
1252 1220 num_sgs = iov_iter_npages(&msg->msg_iter, INT_MAX);
1253 1221 }
1254 1222 /* size of rm including all sgs */
1255 - ret = rds_rm_size(msg, num_sgs);
1223 + ret = rds_rm_size(msg, num_sgs, &vct);
1256 1224 if (ret < 0)
1257 1225 goto out;
···
1264 1232
1265 1233 /* Attach data to the rm */
1266 1234 if (payload_len) {
1267 - rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs);
1268 - if (!rm->data.op_sg) {
1269 - ret = -ENOMEM;
1235 + rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs, &ret);
1236 + if (!rm->data.op_sg)
1270 1237 goto out;
1271 - }
1272 1238 ret = rds_message_copy_from_user(rm, &msg->msg_iter, zcopy);
1273 1239 if (ret)
1274 1240 goto out;
···
1300 1270 rm->m_conn_path = cpath;
1301 1271
1302 1272 /* Parse any control messages the user may have included. */
1303 - ret = rds_cmsg_send(rs, rm, msg, &allocated_mr);
1273 + ret = rds_cmsg_send(rs, rm, msg, &allocated_mr, &vct);
1304 1274 if (ret) {
1305 1275 /* Trigger connection so that its ready for the next retry */
1306 1276 if (ret == -EAGAIN)
···
1378 1348 if (ret)
1379 1349 goto out;
1380 1350 rds_message_put(rm);
1351 +
1352 + for (ind = 0; ind < vct.indx; ind++)
1353 + kfree(vct.vec[ind].iov);
1354 + kfree(vct.vec);
1355 +
1381 1356 return payload_len;
1382 1357
1383 1358 out:
1359 + for (ind = 0; ind < vct.indx; ind++)
1360 + kfree(vct.vec[ind].iov);
1361 + kfree(vct.vec);
1362 +
1384 1363 /* If the user included a RDMA_MAP cmsg, we allocated a MR on the fly.
1385 1364 * If the sendmsg goes through, we keep the MR. If it fails with EAGAIN
1386 1365 * or in any other way, we need to destroy the MR again */
+3 -4
net/sched/cls_flower.c
··· 1258 1258 fnew->flags |= TCA_CLS_FLAGS_NOT_IN_HW; 1259 1259 1260 1260 if (fold) { 1261 - if (!tc_skip_sw(fold->flags)) 1262 - rhashtable_remove_fast(&fold->mask->ht, 1263 - &fold->ht_node, 1264 - fold->mask->filter_ht_params); 1261 + rhashtable_remove_fast(&fold->mask->ht, 1262 + &fold->ht_node, 1263 + fold->mask->filter_ht_params); 1265 1264 if (!tc_skip_hw(fold->flags)) 1266 1265 fl_hw_destroy_filter(tp, fold, NULL); 1267 1266 }
+1
net/sctp/ipv6.c
··· 101 101 if (addr) { 102 102 addr->a.v6.sin6_family = AF_INET6; 103 103 addr->a.v6.sin6_port = 0; 104 + addr->a.v6.sin6_flowinfo = 0; 104 105 addr->a.v6.sin6_addr = ifa->addr; 105 106 addr->a.v6.sin6_scope_id = ifa->idev->dev->ifindex; 106 107 addr->valid = 1;
+12 -2
net/smc/af_smc.c
··· 147 147 sk->sk_shutdown |= SHUTDOWN_MASK; 148 148 } 149 149 if (smc->clcsock) { 150 + if (smc->use_fallback && sk->sk_state == SMC_LISTEN) { 151 + /* wake up clcsock accept */ 152 + rc = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR); 153 + } 154 + mutex_lock(&smc->clcsock_release_lock); 150 155 sock_release(smc->clcsock); 151 156 smc->clcsock = NULL; 157 + mutex_unlock(&smc->clcsock_release_lock); 152 158 } 153 159 if (smc->use_fallback) { 154 160 if (sk->sk_state != SMC_LISTEN && sk->sk_state != SMC_INIT) ··· 211 205 spin_lock_init(&smc->conn.send_lock); 212 206 sk->sk_prot->hash(sk); 213 207 sk_refcnt_debug_inc(sk); 208 + mutex_init(&smc->clcsock_release_lock); 214 209 215 210 return sk; 216 211 } ··· 828 821 struct socket *new_clcsock = NULL; 829 822 struct sock *lsk = &lsmc->sk; 830 823 struct sock *new_sk; 831 - int rc; 824 + int rc = -EINVAL; 832 825 833 826 release_sock(lsk); 834 827 new_sk = smc_sock_alloc(sock_net(lsk), NULL, lsk->sk_protocol); ··· 841 834 } 842 835 *new_smc = smc_sk(new_sk); 843 836 844 - rc = kernel_accept(lsmc->clcsock, &new_clcsock, 0); 837 + mutex_lock(&lsmc->clcsock_release_lock); 838 + if (lsmc->clcsock) 839 + rc = kernel_accept(lsmc->clcsock, &new_clcsock, 0); 840 + mutex_unlock(&lsmc->clcsock_release_lock); 845 841 lock_sock(lsk); 846 842 if (rc < 0) 847 843 lsk->sk_err = -rc;
+4
net/smc/smc.h
··· 219 219 * started, waiting for unsent 220 220 * data to be sent 221 221 */ 222 + struct mutex clcsock_release_lock; 223 + /* protects clcsock of a listen 224 + * socket 225 + * */ 222 226 }; 223 227 224 228 static inline struct smc_sock *smc_sk(const struct sock *sk)
+1
net/sunrpc/clnt.c
··· 1952 1952 /* retry with existing socket, after a delay */ 1953 1953 rpc_delay(task, 3*HZ); 1954 1954 /* fall through */ 1955 + case -ENOTCONN: 1955 1956 case -EAGAIN: 1956 1957 /* Check for timeouts before looping back to call_bind */ 1957 1958 case -ETIMEDOUT:
+4 -31
net/sunrpc/xprt.c
···
67 67 */
68 68 static void xprt_init(struct rpc_xprt *xprt, struct net *net);
69 69 static __be32 xprt_alloc_xid(struct rpc_xprt *xprt);
70 - static void xprt_connect_status(struct rpc_task *task);
71 70 static void xprt_destroy(struct rpc_xprt *xprt);
72 71
73 72 static DEFINE_SPINLOCK(xprt_list_lock);
···
679 680 /* Try to schedule an autoclose RPC call */
680 681 if (test_and_set_bit(XPRT_LOCKED, &xprt->state) == 0)
681 682 queue_work(xprtiod_workqueue, &xprt->task_cleanup);
682 - xprt_wake_pending_tasks(xprt, -EAGAIN);
683 + else if (xprt->snd_task)
684 + rpc_wake_up_queued_task_set_status(&xprt->pending,
685 + xprt->snd_task, -ENOTCONN);
683 686 spin_unlock_bh(&xprt->transport_lock);
684 687 }
685 688 EXPORT_SYMBOL_GPL(xprt_force_disconnect);
···
821 820 if (!xprt_connected(xprt)) {
822 821 task->tk_timeout = task->tk_rqstp->rq_timeout;
823 822 task->tk_rqstp->rq_connect_cookie = xprt->connect_cookie;
824 - rpc_sleep_on(&xprt->pending, task, xprt_connect_status);
823 + rpc_sleep_on(&xprt->pending, task, NULL);
825 824
826 825 if (test_bit(XPRT_CLOSING, &xprt->state))
827 826 return;
···
838 837 }
839 838 }
840 839 xprt_release_write(xprt, task);
841 - }
842 -
843 - static void xprt_connect_status(struct rpc_task *task)
844 - {
845 - switch (task->tk_status) {
846 - case 0:
847 - dprintk("RPC: %5u xprt_connect_status: connection established\n",
848 - task->tk_pid);
849 - break;
850 - case -ECONNREFUSED:
851 - case -ECONNRESET:
852 - case -ECONNABORTED:
853 - case -ENETUNREACH:
854 - case -EHOSTUNREACH:
855 - case -EPIPE:
856 - case -EAGAIN:
857 - dprintk("RPC: %5u xprt_connect_status: retrying\n", task->tk_pid);
858 - break;
859 - case -ETIMEDOUT:
860 - dprintk("RPC: %5u xprt_connect_status: connect attempt timed "
861 - "out\n", task->tk_pid);
862 - break;
863 - default:
864 - dprintk("RPC: %5u xprt_connect_status: error %d connecting to "
865 - "server %s\n", task->tk_pid, -task->tk_status,
866 - task->tk_rqstp->rq_xprt->servername);
867 - task->tk_status = -EIO;
868 - }
869 840 }
870 841
871 842 enum xprt_xid_rb_cmp {
+4 -6
net/sunrpc/xprtsock.c
··· 1217 1217 1218 1218 trace_rpc_socket_close(xprt, sock); 1219 1219 sock_release(sock); 1220 + 1221 + xprt_disconnect_done(xprt); 1220 1222 } 1221 1223 1222 1224 /** ··· 1239 1237 1240 1238 xs_reset_transport(transport); 1241 1239 xprt->reestablish_timeout = 0; 1242 - 1243 - xprt_disconnect_done(xprt); 1244 1240 } 1245 1241 1246 1242 static void xs_inject_disconnect(struct rpc_xprt *xprt) ··· 1489 1489 &transport->sock_state)) 1490 1490 xprt_clear_connecting(xprt); 1491 1491 clear_bit(XPRT_CLOSING, &xprt->state); 1492 - if (sk->sk_err) 1493 - xprt_wake_pending_tasks(xprt, -sk->sk_err); 1494 1492 /* Trigger the socket release */ 1495 1493 xs_tcp_force_close(xprt); 1496 1494 } ··· 2090 2092 trace_rpc_socket_connect(xprt, sock, 0); 2091 2093 status = 0; 2092 2094 out: 2093 - xprt_unlock_connect(xprt, transport); 2094 2095 xprt_clear_connecting(xprt); 2096 + xprt_unlock_connect(xprt, transport); 2095 2097 xprt_wake_pending_tasks(xprt, status); 2096 2098 } 2097 2099 ··· 2327 2329 } 2328 2330 status = -EAGAIN; 2329 2331 out: 2330 - xprt_unlock_connect(xprt, transport); 2331 2332 xprt_clear_connecting(xprt); 2333 + xprt_unlock_connect(xprt, transport); 2332 2334 xprt_wake_pending_tasks(xprt, status); 2333 2335 } 2334 2336
+24 -16
net/tipc/socket.c
··· 880 880 DECLARE_SOCKADDR(struct sockaddr_tipc *, dest, m->msg_name); 881 881 int blks = tsk_blocks(GROUP_H_SIZE + dlen); 882 882 struct tipc_sock *tsk = tipc_sk(sk); 883 - struct tipc_group *grp = tsk->group; 884 883 struct net *net = sock_net(sk); 885 884 struct tipc_member *mb = NULL; 886 885 u32 node, port; ··· 893 894 /* Block or return if destination link or member is congested */ 894 895 rc = tipc_wait_for_cond(sock, &timeout, 895 896 !tipc_dest_find(&tsk->cong_links, node, 0) && 896 - !tipc_group_cong(grp, node, port, blks, &mb)); 897 + tsk->group && 898 + !tipc_group_cong(tsk->group, node, port, blks, 899 + &mb)); 897 900 if (unlikely(rc)) 898 901 return rc; 899 902 ··· 925 924 struct tipc_sock *tsk = tipc_sk(sk); 926 925 struct list_head *cong_links = &tsk->cong_links; 927 926 int blks = tsk_blocks(GROUP_H_SIZE + dlen); 928 - struct tipc_group *grp = tsk->group; 929 927 struct tipc_msg *hdr = &tsk->phdr; 930 928 struct tipc_member *first = NULL; 931 929 struct tipc_member *mbr = NULL; ··· 941 941 type = msg_nametype(hdr); 942 942 inst = dest->addr.name.name.instance; 943 943 scope = msg_lookup_scope(hdr); 944 - exclude = tipc_group_exclude(grp); 945 944 946 945 while (++lookups < 4) { 946 + exclude = tipc_group_exclude(tsk->group); 947 + 947 948 first = NULL; 948 949 949 950 /* Look for a non-congested destination member, if any */ ··· 953 952 &dstcnt, exclude, false)) 954 953 return -EHOSTUNREACH; 955 954 tipc_dest_pop(&dsts, &node, &port); 956 - cong = tipc_group_cong(grp, node, port, blks, &mbr); 955 + cong = tipc_group_cong(tsk->group, node, port, blks, 956 + &mbr); 957 957 if (!cong) 958 958 break; 959 959 if (mbr == first) ··· 973 971 /* Block or return if destination link or member is congested */ 974 972 rc = tipc_wait_for_cond(sock, &timeout, 975 973 !tipc_dest_find(cong_links, node, 0) && 976 - !tipc_group_cong(grp, node, port, 974 + tsk->group && 975 + !tipc_group_cong(tsk->group, node, port, 977 976 blks, &mbr)); 978 977 if (unlikely(rc)) 
979 978 return rc;
···
1009 1006 struct sock *sk = sock->sk;
1010 1007 struct net *net = sock_net(sk);
1011 1008 struct tipc_sock *tsk = tipc_sk(sk);
1012 - struct tipc_group *grp = tsk->group;
1013 - struct tipc_nlist *dsts = tipc_group_dests(grp);
1009 + struct tipc_nlist *dsts;
1014 1010 struct tipc_mc_method *method = &tsk->mc_method;
1015 1011 bool ack = method->mandatory && method->rcast;
1016 1012 int blks = tsk_blocks(MCAST_H_SIZE + dlen);
···
1018 1016 struct sk_buff_head pkts;
1019 1017 int rc = -EHOSTUNREACH;
1020 1018
1021 - if (!dsts->local && !dsts->remote)
1022 - return -EHOSTUNREACH;
1023 -
1024 1019 /* Block or return if any destination link or member is congested */
1025 - rc = tipc_wait_for_cond(sock, &timeout, !tsk->cong_link_cnt &&
1026 - !tipc_group_bc_cong(grp, blks));
1020 + rc = tipc_wait_for_cond(sock, &timeout,
1021 + !tsk->cong_link_cnt && tsk->group &&
1022 + !tipc_group_bc_cong(tsk->group, blks));
1027 1023 if (unlikely(rc))
1028 1024 return rc;
1025 +
1026 + dsts = tipc_group_dests(tsk->group);
1027 + if (!dsts->local && !dsts->remote)
1028 + return -EHOSTUNREACH;
1029 1029
1030 1030 /* Complete message header */
1031 1031 if (dest) {
···
1040 1036 msg_set_hdr_sz(hdr, GROUP_H_SIZE);
1041 1037 msg_set_destport(hdr, 0);
1042 1038 msg_set_destnode(hdr, 0);
1043 - msg_set_grp_bc_seqno(hdr, tipc_group_bc_snd_nxt(grp));
1039 + msg_set_grp_bc_seqno(hdr, tipc_group_bc_snd_nxt(tsk->group));
1044 1040
1045 1041 /* Avoid getting stuck with repeated forced replicasts */
1046 1042 msg_set_grp_bc_ack_req(hdr, ack);
···
2728 2724 rhashtable_walk_start(&iter);
2729 2725
2730 2726 while ((tsk = rhashtable_walk_next(&iter)) && !IS_ERR(tsk)) {
2731 - spin_lock_bh(&tsk->sk.sk_lock.slock);
2727 + sock_hold(&tsk->sk);
2728 + rhashtable_walk_stop(&iter);
2729 + lock_sock(&tsk->sk);
2732 2730 msg = &tsk->phdr;
2733 2731 msg_set_prevnode(msg, tipc_own_addr(net));
2734 2732 msg_set_orignode(msg, tipc_own_addr(net));
2735 - spin_unlock_bh(&tsk->sk.sk_lock.slock);
2733 + release_sock(&tsk->sk);
2734 + rhashtable_walk_start(&iter);
2735 + sock_put(&tsk->sk);
2736 2736 }
2737 2737
2738 2738 rhashtable_walk_stop(&iter);
+6 -3
net/tipc/udp_media.c
··· 245 245 } 246 246 247 247 err = tipc_udp_xmit(net, _skb, ub, src, &rcast->addr); 248 - if (err) { 249 - kfree_skb(_skb); 248 + if (err) 250 249 goto out; 251 - } 252 250 } 253 251 err = 0; 254 252 out: ··· 678 680 err = tipc_parse_udp_addr(opts[TIPC_NLA_UDP_REMOTE], &remote, NULL); 679 681 if (err) 680 682 goto err; 683 + 684 + if (remote.proto != local.proto) { 685 + err = -EINVAL; 686 + goto err; 687 + } 681 688 682 689 /* Checking remote ip address */ 683 690 rmcast = tipc_udp_is_mcast_addr(&remote);
+27 -17
net/tls/tls_main.c
···
56 56 static struct proto *saved_tcpv6_prot;
57 57 static DEFINE_MUTEX(tcpv6_prot_mutex);
58 58 static LIST_HEAD(device_list);
59 - static DEFINE_MUTEX(device_mutex);
59 + static DEFINE_SPINLOCK(device_spinlock);
60 60 static struct proto tls_prots[TLS_NUM_PROTS][TLS_NUM_CONFIG][TLS_NUM_CONFIG];
61 61 static struct proto_ops tls_sw_proto_ops;
62 62
···
538 538 struct inet_connection_sock *icsk = inet_csk(sk);
539 539 struct tls_context *ctx;
540 540
541 - ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
541 + ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);
542 542 if (!ctx)
543 543 return NULL;
544 544
545 545 icsk->icsk_ulp_data = ctx;
546 + ctx->setsockopt = sk->sk_prot->setsockopt;
547 + ctx->getsockopt = sk->sk_prot->getsockopt;
548 + ctx->sk_proto_close = sk->sk_prot->close;
546 549 return ctx;
547 550 }
···
555 552 struct tls_device *dev;
556 553 int rc = 0;
557 554
558 - mutex_lock(&device_mutex);
555 + spin_lock_bh(&device_spinlock);
559 556 list_for_each_entry(dev, &device_list, dev_list) {
560 557 if (dev->feature && dev->feature(dev)) {
561 558 ctx = create_ctx(sk);
···
573 570 }
574 571 }
575 572 out:
576 - mutex_unlock(&device_mutex);
573 + spin_unlock_bh(&device_spinlock);
577 574 return rc;
578 575 }
···
582 579 struct tls_context *ctx = tls_get_ctx(sk);
583 580 struct tls_device *dev;
584 581
585 - mutex_lock(&device_mutex);
582 + spin_lock_bh(&device_spinlock);
586 583 list_for_each_entry(dev, &device_list, dev_list) {
587 - if (dev->unhash)
584 + if (dev->unhash) {
585 + kref_get(&dev->kref);
586 + spin_unlock_bh(&device_spinlock);
588 587 dev->unhash(dev, sk);
588 + kref_put(&dev->kref, dev->release);
589 + spin_lock_bh(&device_spinlock);
590 + }
589 591 }
590 - mutex_unlock(&device_mutex);
592 + spin_unlock_bh(&device_spinlock);
591 593 ctx->unhash(sk);
592 594 }
···
603 595 int err;
604 596
605 597 err = ctx->hash(sk);
606 - mutex_lock(&device_mutex);
598 + spin_lock_bh(&device_spinlock);
607 599 list_for_each_entry(dev, &device_list, dev_list) {
608 - if (dev->hash)
600 + if (dev->hash) {
601 + kref_get(&dev->kref);
602 + spin_unlock_bh(&device_spinlock);
609 603 err |= dev->hash(dev, sk);
604 + kref_put(&dev->kref, dev->release);
605 + spin_lock_bh(&device_spinlock);
606 + }
610 607 }
611 - mutex_unlock(&device_mutex);
608 + spin_unlock_bh(&device_spinlock);
612 609
613 610 if (err)
614 611 tls_hw_unhash(sk);
···
688 675 rc = -ENOMEM;
689 676 goto out;
690 677 }
691 - ctx->setsockopt = sk->sk_prot->setsockopt;
692 - ctx->getsockopt = sk->sk_prot->getsockopt;
693 - ctx->sk_proto_close = sk->sk_prot->close;
694 678
695 679 /* Build IPv6 TLS whenever the address of tcpv6 _prot changes */
696 680 if (ip_ver == TLSV6 &&
···
709 699
710 700 void tls_register_device(struct tls_device *device)
711 701 {
712 - mutex_lock(&device_mutex);
702 + spin_lock_bh(&device_spinlock);
713 703 list_add_tail(&device->dev_list, &device_list);
714 - mutex_unlock(&device_mutex);
704 + spin_unlock_bh(&device_spinlock);
715 705 }
716 706 EXPORT_SYMBOL(tls_register_device);
717 707
718 708 void tls_unregister_device(struct tls_device *device)
719 709 {
720 - mutex_lock(&device_mutex);
710 + spin_lock_bh(&device_spinlock);
721 711 list_del(&device->dev_list);
722 - mutex_unlock(&device_mutex);
712 + spin_unlock_bh(&device_spinlock);
723 713 }
724 714 EXPORT_SYMBOL(tls_unregister_device);
725 715
+6 -1
net/vmw_vsock/af_vsock.c
··· 107 107 #include <linux/mutex.h> 108 108 #include <linux/net.h> 109 109 #include <linux/poll.h> 110 + #include <linux/random.h> 110 111 #include <linux/skbuff.h> 111 112 #include <linux/smp.h> 112 113 #include <linux/socket.h> ··· 505 504 static int __vsock_bind_stream(struct vsock_sock *vsk, 506 505 struct sockaddr_vm *addr) 507 506 { 508 - static u32 port = LAST_RESERVED_PORT + 1; 507 + static u32 port = 0; 509 508 struct sockaddr_vm new_addr; 509 + 510 + if (!port) 511 + port = LAST_RESERVED_PORT + 1 + 512 + prandom_u32_max(U32_MAX - LAST_RESERVED_PORT); 510 513 511 514 vsock_addr_init(&new_addr, addr->svm_cid, addr->svm_port); 512 515
+50 -17
net/vmw_vsock/vmci_transport.c
···
264 264 }
265 265
266 266 static int
267 + vmci_transport_alloc_send_control_pkt(struct sockaddr_vm *src,
268 + struct sockaddr_vm *dst,
269 + enum vmci_transport_packet_type type,
270 + u64 size,
271 + u64 mode,
272 + struct vmci_transport_waiting_info *wait,
273 + u16 proto,
274 + struct vmci_handle handle)
275 + {
276 + struct vmci_transport_packet *pkt;
277 + int err;
278 +
279 + pkt = kmalloc(sizeof(*pkt), GFP_KERNEL);
280 + if (!pkt)
281 + return -ENOMEM;
282 +
283 + err = __vmci_transport_send_control_pkt(pkt, src, dst, type, size,
284 + mode, wait, proto, handle,
285 + true);
286 + kfree(pkt);
287 +
288 + return err;
289 + }
290 +
291 + static int
267 292 vmci_transport_send_control_pkt(struct sock *sk,
268 293 enum vmci_transport_packet_type type,
269 294 u64 size,
···
297 272 u16 proto,
298 273 struct vmci_handle handle)
299 274 {
300 - struct vmci_transport_packet *pkt;
301 275 struct vsock_sock *vsk;
302 - int err;
303 276
304 277 vsk = vsock_sk(sk);
···
307 284 if (!vsock_addr_bound(&vsk->remote_addr))
308 285 return -EINVAL;
309 286
310 - pkt = kmalloc(sizeof(*pkt), GFP_KERNEL);
311 - if (!pkt)
312 - return -ENOMEM;
313 -
314 - err = __vmci_transport_send_control_pkt(pkt, &vsk->local_addr,
315 - &vsk->remote_addr, type, size,
316 - mode, wait, proto, handle,
317 - true);
318 - kfree(pkt);
319 -
320 - return err;
287 + return vmci_transport_alloc_send_control_pkt(&vsk->local_addr,
288 + &vsk->remote_addr,
289 + type, size, mode,
290 + wait, proto, handle);
321 291 }
322 292
323 293 static int vmci_transport_send_reset_bh(struct sockaddr_vm *dst,
···
328 312 static int vmci_transport_send_reset(struct sock *sk,
329 313 struct vmci_transport_packet *pkt)
330 314 {
315 + struct sockaddr_vm *dst_ptr;
316 + struct sockaddr_vm dst;
317 + struct vsock_sock *vsk;
318 +
331 319 if (pkt->type == VMCI_TRANSPORT_PACKET_TYPE_RST)
332 320 return 0;
333 - return vmci_transport_send_control_pkt(sk,
334 - VMCI_TRANSPORT_PACKET_TYPE_RST,
335 - 0, 0, NULL, VSOCK_PROTO_INVALID,
336 - VMCI_INVALID_HANDLE);
321 +
322 + vsk = vsock_sk(sk);
323 +
324 + if (!vsock_addr_bound(&vsk->local_addr))
325 + return -EINVAL;
326 +
327 + if (vsock_addr_bound(&vsk->remote_addr)) {
328 + dst_ptr = &vsk->remote_addr;
329 + } else {
330 + vsock_addr_init(&dst, pkt->dg.src.context,
331 + pkt->src_port);
332 + dst_ptr = &dst;
333 + }
334 + return vmci_transport_alloc_send_control_pkt(&vsk->local_addr, dst_ptr,
335 + VMCI_TRANSPORT_PACKET_TYPE_RST,
336 + 0, 0, NULL, VSOCK_PROTO_INVALID,
337 + VMCI_INVALID_HANDLE);
337 338 }
338 339
339 340 static int vmci_transport_send_negotiate(struct sock *sk, size_t size)
+3 -1
net/wireless/nl80211.c
··· 8930 8930 if (info->attrs[NL80211_ATTR_CONTROL_PORT_OVER_NL80211]) { 8931 8931 int r = validate_pae_over_nl80211(rdev, info); 8932 8932 8933 - if (r < 0) 8933 + if (r < 0) { 8934 + kzfree(connkeys); 8934 8935 return r; 8936 + } 8935 8937 8936 8938 ibss.control_port_over_nl80211 = true; 8937 8939 }
+6 -1
net/xfrm/xfrm_input.c
··· 346 346 347 347 skb->sp->xvec[skb->sp->len++] = x; 348 348 349 + skb_dst_force(skb); 350 + if (!skb_dst(skb)) { 351 + XFRM_INC_STATS(net, LINUX_MIB_XFRMINERROR); 352 + goto drop; 353 + } 354 + 349 355 lock: 350 356 spin_lock(&x->lock); 351 357 ··· 391 385 XFRM_SKB_CB(skb)->seq.input.low = seq; 392 386 XFRM_SKB_CB(skb)->seq.input.hi = seq_hi; 393 387 394 - skb_dst_force(skb); 395 388 dev_hold(skb->dev); 396 389 397 390 if (crypto_done)
+1
net/xfrm/xfrm_output.c
··· 102 102 skb_dst_force(skb); 103 103 if (!skb_dst(skb)) { 104 104 XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTERROR); 105 + err = -EHOSTUNREACH; 105 106 goto error_nolock; 106 107 } 107 108
+8 -2
net/xfrm/xfrm_state.c
··· 426 426 module_put(mode->owner); 427 427 } 428 428 429 + void xfrm_state_free(struct xfrm_state *x) 430 + { 431 + kmem_cache_free(xfrm_state_cache, x); 432 + } 433 + EXPORT_SYMBOL(xfrm_state_free); 434 + 429 435 static void xfrm_state_gc_destroy(struct xfrm_state *x) 430 436 { 431 437 tasklet_hrtimer_cancel(&x->mtimer); ··· 458 452 } 459 453 xfrm_dev_state_free(x); 460 454 security_xfrm_state_free(x); 461 - kmem_cache_free(xfrm_state_cache, x); 455 + xfrm_state_free(x); 462 456 } 463 457 464 458 static void xfrm_state_gc_task(struct work_struct *work) ··· 794 788 { 795 789 spin_lock_bh(&net->xfrm.xfrm_state_lock); 796 790 si->sadcnt = net->xfrm.state_num; 797 - si->sadhcnt = net->xfrm.state_hmask; 791 + si->sadhcnt = net->xfrm.state_hmask + 1; 798 792 si->sadhmcnt = xfrm_state_hashmax; 799 793 spin_unlock_bh(&net->xfrm.xfrm_state_lock); 800 794 }
+2 -2
net/xfrm/xfrm_user.c
··· 2288 2288 2289 2289 } 2290 2290 2291 - kfree(x); 2291 + xfrm_state_free(x); 2292 2292 kfree(xp); 2293 2293 2294 2294 return 0; 2295 2295 2296 2296 free_state: 2297 - kfree(x); 2297 + xfrm_state_free(x); 2298 2298 nomem: 2299 2299 return err; 2300 2300 }
+2 -2
scripts/checkstack.pl
··· 47 47 $xs = "[0-9a-f ]"; # hex character or space 48 48 $funcre = qr/^$x* <(.*)>:$/; 49 49 if ($arch eq 'aarch64') { 50 - #ffffffc0006325cc: a9bb7bfd stp x29, x30, [sp,#-80]! 51 - $re = qr/^.*stp.*sp,\#-([0-9]{1,8})\]\!/o; 50 + #ffffffc0006325cc: a9bb7bfd stp x29, x30, [sp, #-80]! 51 + $re = qr/^.*stp.*sp, \#-([0-9]{1,8})\]\!/o; 52 52 } elsif ($arch eq 'arm') { 53 53 #c0008ffc: e24dd064 sub sp, sp, #100 ; 0x64 54 54 $re = qr/.*sub.*sp, sp, #(([0-9]{2}|[3-9])[0-9]{2})/o;
+4 -2
scripts/spdxcheck.py
··· 168 168 self.curline = 0 169 169 try: 170 170 for line in fd: 171 + line = line.decode(locale.getpreferredencoding(False), errors='ignore') 171 172 self.curline += 1 172 173 if self.curline > maxlines: 173 174 break ··· 250 249 251 250 try: 252 251 if len(args.path) and args.path[0] == '-': 253 - parser.parse_lines(sys.stdin, args.maxlines, '-') 252 + stdin = os.fdopen(sys.stdin.fileno(), 'rb') 253 + parser.parse_lines(stdin, args.maxlines, '-') 254 254 else: 255 255 if args.path: 256 256 for p in args.path: 257 257 if os.path.isfile(p): 258 - parser.parse_lines(open(p), args.maxlines, p) 258 + parser.parse_lines(open(p, 'rb'), args.maxlines, p) 259 259 elif os.path.isdir(p): 260 260 scan_git_subtree(repo.head.reference.commit.tree, p) 261 261 else:
+5 -5
security/integrity/ima/ima_policy.c
··· 580 580 ima_update_policy_flag(); 581 581 } 582 582 583 + /* Keep the enumeration in sync with the policy_tokens! */ 583 584 enum { 584 - Opt_err = -1, 585 - Opt_measure = 1, Opt_dont_measure, 585 + Opt_measure, Opt_dont_measure, 586 586 Opt_appraise, Opt_dont_appraise, 587 587 Opt_audit, Opt_hash, Opt_dont_hash, 588 588 Opt_obj_user, Opt_obj_role, Opt_obj_type, ··· 592 592 Opt_uid_gt, Opt_euid_gt, Opt_fowner_gt, 593 593 Opt_uid_lt, Opt_euid_lt, Opt_fowner_lt, 594 594 Opt_appraise_type, Opt_permit_directio, 595 - Opt_pcr 595 + Opt_pcr, Opt_err 596 596 }; 597 597 598 - static match_table_t policy_tokens = { 598 + static const match_table_t policy_tokens = { 599 599 {Opt_measure, "measure"}, 600 600 {Opt_dont_measure, "dont_measure"}, 601 601 {Opt_appraise, "appraise"}, ··· 1103 1103 { 1104 1104 } 1105 1105 1106 - #define pt(token) policy_tokens[token + Opt_err].pattern 1106 + #define pt(token) policy_tokens[token].pattern 1107 1107 #define mt(token) mask_tokens[token] 1108 1108 1109 1109 /*
+1 -1
security/keys/keyctl_pkey.c
··· 25 25 } 26 26 27 27 enum { 28 - Opt_err = -1, 28 + Opt_err, 29 29 Opt_enc, /* "enc=<encoding>" eg. "enc=oaep" */ 30 30 Opt_hash, /* "hash=<digest-name>" eg. "hash=sha1" */ 31 31 };
+1 -1
security/keys/trusted.c
··· 711 711 } 712 712 713 713 enum { 714 - Opt_err = -1, 714 + Opt_err, 715 715 Opt_new, Opt_load, Opt_update, 716 716 Opt_keyhandle, Opt_keyauth, Opt_blobauth, 717 717 Opt_pcrinfo, Opt_pcrlock, Opt_migratable,
+1 -1
sound/firewire/fireface/ff-protocol-ff400.c
··· 30 30 int err; 31 31 32 32 err = snd_fw_transaction(ff->unit, TCODE_READ_QUADLET_REQUEST, 33 - FF400_SYNC_STATUS, &reg, sizeof(reg), 0); 33 + FF400_CLOCK_CONFIG, &reg, sizeof(reg), 0); 34 34 if (err < 0) 35 35 return err; 36 36 data = le32_to_cpu(reg);
+77
sound/pci/hda/patch_realtek.c
···
5520 5520 ALC285_FIXUP_LENOVO_HEADPHONE_NOISE,
5521 5521 ALC295_FIXUP_HP_AUTO_MUTE,
5522 5522 ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE,
5523 + ALC294_FIXUP_ASUS_MIC,
5524 + ALC294_FIXUP_ASUS_HEADSET_MIC,
5525 + ALC294_FIXUP_ASUS_SPK,
5523 5526 };
5524 5527
5525 5528 static const struct hda_fixup alc269_fixups[] = {
···
6395 6392 [ALC285_FIXUP_LENOVO_HEADPHONE_NOISE] = {
6396 6393 .type = HDA_FIXUP_FUNC,
6397 6394 .v.func = alc285_fixup_invalidate_dacs,
6395 + .chained = true,
6396 + .chain_id = ALC269_FIXUP_THINKPAD_ACPI
6398 6397 },
6399 6398 [ALC295_FIXUP_HP_AUTO_MUTE] = {
6400 6399 .type = HDA_FIXUP_FUNC,
···
6410 6405 },
6411 6406 .chained = true,
6412 6407 .chain_id = ALC269_FIXUP_HEADSET_MIC
6408 + },
6409 + [ALC294_FIXUP_ASUS_MIC] = {
6410 + .type = HDA_FIXUP_PINS,
6411 + .v.pins = (const struct hda_pintbl[]) {
6412 + { 0x13, 0x90a60160 }, /* use as internal mic */
6413 + { 0x19, 0x04a11120 }, /* use as headset mic, without its own jack detect */
6414 + { }
6415 + },
6416 + .chained = true,
6417 + .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
6418 + },
6419 + [ALC294_FIXUP_ASUS_HEADSET_MIC] = {
6420 + .type = HDA_FIXUP_PINS,
6421 + .v.pins = (const struct hda_pintbl[]) {
6422 + { 0x19, 0x01a1113c }, /* use as headset mic, without its own jack detect */
6423 + { }
6424 + },
6425 + .chained = true,
6426 + .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
6427 + },
6428 + [ALC294_FIXUP_ASUS_SPK] = {
6429 + .type = HDA_FIXUP_VERBS,
6430 + .v.verbs = (const struct hda_verb[]) {
6431 + /* Set EAPD high */
6432 + { 0x20, AC_VERB_SET_COEF_INDEX, 0x40 },
6433 + { 0x20, AC_VERB_SET_PROC_COEF, 0x8800 },
6434 + { }
6435 + },
6436 + .chained = true,
6437 + .chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC
6413 6438 },
6414 6439 };
6415 6440
···
6583 6548 SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
6584 6549 SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
6585 6550 SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
6551 + SND_PCI_QUIRK(0x1043, 0x14a1, "ASUS UX533FD", ALC294_FIXUP_ASUS_SPK),
6586 6552 SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
6587 6553 SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
6588 6554 SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
···
7191 7155 SND_HDA_PIN_QUIRK(0x10ec0293, 0x1028, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE,
7192 7156 ALC292_STANDARD_PINS,
7193 7157 {0x13, 0x90a60140}),
7158 + SND_HDA_PIN_QUIRK(0x10ec0294, 0x1043, "ASUS", ALC294_FIXUP_ASUS_MIC,
7159 + {0x14, 0x90170110},
7160 + {0x1b, 0x90a70130},
7161 + {0x21, 0x04211020}),
7162 + SND_HDA_PIN_QUIRK(0x10ec0294, 0x1043, "ASUS", ALC294_FIXUP_ASUS_SPK,
7163 + {0x12, 0x90a60130},
7164 + {0x17, 0x90170110},
7165 + {0x21, 0x04211020}),
7194 7166 SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
7195 7167 ALC295_STANDARD_PINS,
7196 7168 {0x17, 0x21014020},
···
7269 7225
7270 7226 /* HP */
7271 7227 alc_update_coef_idx(codec, 0x4, 0, 1<<11);
7228 + }
7229 +
7230 + static void alc294_hp_init(struct hda_codec *codec)
7231 + {
7232 + struct alc_spec *spec = codec->spec;
7233 + hda_nid_t hp_pin = spec->gen.autocfg.hp_pins[0];
7234 + int i, val;
7235 +
7236 + if (!hp_pin)
7237 + return;
7238 +
7239 + snd_hda_codec_write(codec, hp_pin, 0,
7240 + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
7241 +
7242 + msleep(100);
7243 +
7244 + snd_hda_codec_write(codec, hp_pin, 0,
7245 + AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
7246 +
7247 + alc_update_coef_idx(codec, 0x6f, 0x000f, 0);/* Set HP depop to manual mode */
7248 + alc_update_coefex_idx(codec, 0x58, 0x00, 0x8000, 0x8000); /* HP depop procedure start */
7249 +
7250 + /* Wait for depop procedure finish */
7251 + val = alc_read_coefex_idx(codec, 0x58, 0x01);
7252 + for (i = 0; i < 20 && val & 0x0080; i++) {
7253 + msleep(50);
7254 + val = alc_read_coefex_idx(codec, 0x58, 0x01);
7255 + }
7256 + /* Set HP depop to auto mode */
7257 + alc_update_coef_idx(codec, 0x6f, 0x000f, 0x000b);
7258 + msleep(50);
7272 7259 }
7273 7260
7274 7261 /*
···
7427 7352 spec->codec_variant = ALC269_TYPE_ALC294;
7428 7353 spec->gen.mixer_nid = 0; /* ALC2x4 does not have any loopback mixer path */
7429 7354 alc_update_coef_idx(codec, 0x6b, 0x0018, (1<<4) | (1<<3)); /* UAJ MIC Vref control by verb */
7355 + alc294_hp_init(codec);
7430 7356 break;
7431 7357 case 0x10ec0300:
7432 7358 spec->codec_variant = ALC269_TYPE_ALC300;
···
7439 7363 spec->codec_variant = ALC269_TYPE_ALC700;
7440 7364 spec->gen.mixer_nid = 0; /* ALC700 does not have any loopback mixer path */
7441 7365 alc_update_coef_idx(codec, 0x4a, 1 << 15, 0); /* Combo jack auto trigger control */
7366 + alc294_hp_init(codec);
7442 7367 break;
7443 7368
7444 7369 }
+1 -1
tools/include/uapi/linux/netlink.h
··· 155 155 #define NETLINK_LIST_MEMBERSHIPS 9 156 156 #define NETLINK_CAP_ACK 10 157 157 #define NETLINK_EXT_ACK 11 158 - #define NETLINK_DUMP_STRICT_CHK 12 158 + #define NETLINK_GET_STRICT_CHK 12 159 159 160 160 struct nl_pktinfo { 161 161 __u32 group;
+1
tools/testing/radix-tree/Makefile
··· 7 7 TARGETS = main idr-test multiorder xarray 8 8 CORE_OFILES := xarray.o radix-tree.o idr.o linux.o test.o find_bit.o bitmap.o 9 9 OFILES = main.o $(CORE_OFILES) regression1.o regression2.o regression3.o \ 10 + regression4.o \ 10 11 tag_check.o multiorder.o idr-test.o iteration_check.o benchmark.o 11 12 12 13 ifndef SHIFT
+1
tools/testing/radix-tree/main.c
··· 308 308 regression1_test(); 309 309 regression2_test(); 310 310 regression3_test(); 311 + regression4_test(); 311 312 iteration_test(0, 10 + 90 * long_run); 312 313 iteration_test(7, 10 + 90 * long_run); 313 314 single_thread_tests(long_run);
+1
tools/testing/radix-tree/regression.h
··· 5 5 void regression1_test(void); 6 6 void regression2_test(void); 7 7 void regression3_test(void); 8 + void regression4_test(void); 8 9 9 10 #endif
+79
tools/testing/radix-tree/regression4.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/kernel.h> 3 + #include <linux/gfp.h> 4 + #include <linux/slab.h> 5 + #include <linux/radix-tree.h> 6 + #include <linux/rcupdate.h> 7 + #include <stdlib.h> 8 + #include <pthread.h> 9 + #include <stdio.h> 10 + #include <assert.h> 11 + 12 + #include "regression.h" 13 + 14 + static pthread_barrier_t worker_barrier; 15 + static int obj0, obj1; 16 + static RADIX_TREE(mt_tree, GFP_KERNEL); 17 + 18 + static void *reader_fn(void *arg) 19 + { 20 + int i; 21 + void *entry; 22 + 23 + rcu_register_thread(); 24 + pthread_barrier_wait(&worker_barrier); 25 + 26 + for (i = 0; i < 1000000; i++) { 27 + rcu_read_lock(); 28 + entry = radix_tree_lookup(&mt_tree, 0); 29 + rcu_read_unlock(); 30 + if (entry != &obj0) { 31 + printf("iteration %d bad entry = %p\n", i, entry); 32 + abort(); 33 + } 34 + } 35 + 36 + rcu_unregister_thread(); 37 + 38 + return NULL; 39 + } 40 + 41 + static void *writer_fn(void *arg) 42 + { 43 + int i; 44 + 45 + rcu_register_thread(); 46 + pthread_barrier_wait(&worker_barrier); 47 + 48 + for (i = 0; i < 1000000; i++) { 49 + radix_tree_insert(&mt_tree, 1, &obj1); 50 + radix_tree_delete(&mt_tree, 1); 51 + } 52 + 53 + rcu_unregister_thread(); 54 + 55 + return NULL; 56 + } 57 + 58 + void regression4_test(void) 59 + { 60 + pthread_t reader, writer; 61 + 62 + printv(1, "regression test 4 starting\n"); 63 + 64 + radix_tree_insert(&mt_tree, 0, &obj0); 65 + pthread_barrier_init(&worker_barrier, NULL, 2); 66 + 67 + if (pthread_create(&reader, NULL, reader_fn, NULL) || 68 + pthread_create(&writer, NULL, writer_fn, NULL)) { 69 + perror("pthread_create"); 70 + exit(1); 71 + } 72 + 73 + if (pthread_join(reader, NULL) || pthread_join(writer, NULL)) { 74 + perror("pthread_join"); 75 + exit(1); 76 + } 77 + 78 + printv(1, "regression test 4 passed\n"); 79 + }
+17 -19
tools/testing/selftests/bpf/bpf_flow.c
··· 70 70 {
71 71 void *data_end = (void *)(long)skb->data_end;
72 72 void *data = (void *)(long)skb->data;
73 - __u16 nhoff = skb->flow_keys->nhoff;
73 + __u16 thoff = skb->flow_keys->thoff;
74 74 __u8 *hdr;
75 75
76 76 /* Verifies this variable offset does not overflow */
77 - if (nhoff > (USHRT_MAX - hdr_size))
77 + if (thoff > (USHRT_MAX - hdr_size))
78 78 return NULL;
79 79
80 - hdr = data + nhoff;
80 + hdr = data + thoff;
81 81 if (hdr + hdr_size <= data_end)
82 82 return hdr;
83 83
84 - if (bpf_skb_load_bytes(skb, nhoff, buffer, hdr_size))
84 + if (bpf_skb_load_bytes(skb, thoff, buffer, hdr_size))
85 85 return NULL;
86 86
87 87 return buffer;
··· 158 158 /* Only inspect standard GRE packets with version 0 */
159 159 return BPF_OK;
160 160
161 - keys->nhoff += sizeof(*gre); /* Step over GRE Flags and Proto */
161 + keys->thoff += sizeof(*gre); /* Step over GRE Flags and Proto */
162 162 if (GRE_IS_CSUM(gre->flags))
163 - keys->nhoff += 4; /* Step over chksum and Padding */
163 + keys->thoff += 4; /* Step over chksum and Padding */
164 164 if (GRE_IS_KEY(gre->flags))
165 - keys->nhoff += 4; /* Step over key */
165 + keys->thoff += 4; /* Step over key */
166 166 if (GRE_IS_SEQ(gre->flags))
167 - keys->nhoff += 4; /* Step over sequence number */
167 + keys->thoff += 4; /* Step over sequence number */
168 168
169 169 keys->is_encap = true;
170 170
··· 174 174 if (!eth)
175 175 return BPF_DROP;
176 176
177 - keys->nhoff += sizeof(*eth);
177 + keys->thoff += sizeof(*eth);
178 178
179 179 return parse_eth_proto(skb, eth->h_proto);
180 180 } else {
··· 191 191 if ((__u8 *)tcp + (tcp->doff << 2) > data_end)
192 192 return BPF_DROP;
193 193
194 - keys->thoff = keys->nhoff;
195 194 keys->sport = tcp->source;
196 195 keys->dport = tcp->dest;
197 196 return BPF_OK;
··· 200 201 if (!udp)
201 202 return BPF_DROP;
202 203
203 - keys->thoff = keys->nhoff;
204 204 keys->sport = udp->source;
205 205 keys->dport = udp->dest;
206 206 return BPF_OK;
··· 250 252 keys->ipv4_src = iph->saddr;
251 253 keys->ipv4_dst = iph->daddr;
252 254
253 - keys->nhoff += iph->ihl << 2;
254 - if (data + keys->nhoff > data_end)
255 + keys->thoff += iph->ihl << 2;
256 + if (data + keys->thoff > data_end)
255 257 return BPF_DROP;
256 258
257 259 if (iph->frag_off & bpf_htons(IP_MF | IP_OFFSET)) {
··· 283 285 keys->addr_proto = ETH_P_IPV6;
284 286 memcpy(&keys->ipv6_src, &ip6h->saddr, 2*sizeof(ip6h->saddr));
285 287
286 - keys->nhoff += sizeof(struct ipv6hdr);
288 + keys->thoff += sizeof(struct ipv6hdr);
287 289
288 290 return parse_ipv6_proto(skb, ip6h->nexthdr);
289 291 }
··· 299 301 /* hlen is in 8-octets and does not include the first 8 bytes
300 302 * of the header
301 303 */
302 - skb->flow_keys->nhoff += (1 + ip6h->hdrlen) << 3;
304 + skb->flow_keys->thoff += (1 + ip6h->hdrlen) << 3;
303 305
304 306 return parse_ipv6_proto(skb, ip6h->nexthdr);
305 307 }
··· 313 315 if (!fragh)
314 316 return BPF_DROP;
315 317
316 - keys->nhoff += sizeof(*fragh);
318 + keys->thoff += sizeof(*fragh);
317 319 keys->is_frag = true;
318 320 if (!(fragh->frag_off & bpf_htons(IP6_OFFSET)))
319 321 keys->is_first_frag = true;
··· 339 341 __be16 proto;
340 342
341 343 /* Peek back to see if single or double-tagging */
342 - if (bpf_skb_load_bytes(skb, keys->nhoff - sizeof(proto), &proto,
344 + if (bpf_skb_load_bytes(skb, keys->thoff - sizeof(proto), &proto,
343 345 sizeof(proto)))
344 346 return BPF_DROP;
··· 352 354 if (vlan->h_vlan_encapsulated_proto != bpf_htons(ETH_P_8021Q))
353 355 return BPF_DROP;
354 356
355 - keys->nhoff += sizeof(*vlan);
357 + keys->thoff += sizeof(*vlan);
356 358 }
357 359
358 360 vlan = bpf_flow_dissect_get_header(skb, sizeof(*vlan), &_vlan);
359 361 if (!vlan)
360 362 return BPF_DROP;
361 363
362 - keys->nhoff += sizeof(*vlan);
364 + keys->thoff += sizeof(*vlan);
363 365 /* Only allow 8021AD + 8021Q double tagging and no triple tagging.*/
364 366 if (vlan->h_vlan_encapsulated_proto == bpf_htons(ETH_P_8021AD) ||
365 367 vlan->h_vlan_encapsulated_proto == bpf_htons(ETH_P_8021Q))
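The helper being reworked above (`bpf_flow_dissect_get_header`) implements a common dissector pattern: validate the running offset, try a direct bounds-checked pointer into the packet, and advance `thoff` past each header that parses. A userspace sketch of that offset-walking logic, with the fallback-copy path omitted and all packet sizes illustrative:

```c
#include <stddef.h>

/* Bounds-checked view of hdr_size bytes at offset thoff, like the
 * fast path of the dissector's get_header helper. */
static const unsigned char *get_header(const unsigned char *pkt, size_t pkt_len,
                                       size_t thoff, size_t hdr_size)
{
    if (thoff > pkt_len || hdr_size > pkt_len - thoff)
        return NULL;          /* header would run past the packet end */
    return pkt + thoff;
}

/* Walk a packet the way the dissector does: thoff starts at the network
 * header and is advanced past each successfully parsed header. */
size_t demo_walk(void)
{
    unsigned char pkt[64] = { 0 };
    size_t thoff = 0;

    if (get_header(pkt, sizeof(pkt), thoff, 20))  /* 20-byte IPv4-ish hdr */
        thoff += 20;
    if (get_header(pkt, sizeof(pkt), thoff, 8))   /* 8-byte UDP-ish hdr */
        thoff += 8;
    /* a header that would overrun the packet is rejected, walk stops */
    if (!get_header(pkt, sizeof(pkt), thoff, 64))
        return thoff;                             /* 28: 20 + 8 */
    return 0;
}
```

The rename in the hunk reflects exactly this: there is only one running offset, so tracking it as `thoff` throughout removes the `keys->thoff = keys->nhoff` hand-off that TCP/UDP parsing previously needed.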
+33 -5
tools/testing/selftests/bpf/test_verifier.c
··· 13915 13915 .result_unpriv = REJECT,
13916 13916 .result = ACCEPT,
13917 13917 },
13918 + {
13919 + "calls: cross frame pruning",
13920 + .insns = {
13921 + /* r8 = !!random();
13922 + * call pruner()
13923 + * if (r8)
13924 + * do something bad;
13925 + */
13926 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
13927 + BPF_FUNC_get_prandom_u32),
13928 + BPF_MOV64_IMM(BPF_REG_8, 0),
13929 + BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
13930 + BPF_MOV64_IMM(BPF_REG_8, 1),
13931 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
13932 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
13933 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 1, 1),
13934 + BPF_LDX_MEM(BPF_B, BPF_REG_9, BPF_REG_1, 0),
13935 + BPF_MOV64_IMM(BPF_REG_0, 0),
13936 + BPF_EXIT_INSN(),
13937 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 0),
13938 + BPF_EXIT_INSN(),
13939 + },
13940 + .prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
13941 + .errstr_unpriv = "function calls to other bpf functions are allowed for root only",
13942 + .result_unpriv = REJECT,
13943 + .errstr = "!read_ok",
13944 + .result = REJECT,
13945 + },
13918 13946 };
13919 13947
13920 13948 static int probe_filter_length(const struct bpf_insn *fp)
··· 13968 13940 return fd;
13969 13941 }
13970 13942
13971 - static int create_prog_dummy1(enum bpf_map_type prog_type)
13943 + static int create_prog_dummy1(enum bpf_prog_type prog_type)
13972 13944 {
13973 13945 struct bpf_insn prog[] = {
13974 13946 BPF_MOV64_IMM(BPF_REG_0, 42),
··· 13979 13951 ARRAY_SIZE(prog), "GPL", 0, NULL, 0);
13980 13952 }
13981 13953
13982 - static int create_prog_dummy2(enum bpf_map_type prog_type, int mfd, int idx)
13954 + static int create_prog_dummy2(enum bpf_prog_type prog_type, int mfd, int idx)
13983 13955 {
13984 13956 struct bpf_insn prog[] = {
13985 13957 BPF_MOV64_IMM(BPF_REG_3, idx),
··· 13994 13966 ARRAY_SIZE(prog), "GPL", 0, NULL, 0);
13995 13967 }
13996 13968
13997 - static int create_prog_array(enum bpf_map_type prog_type, uint32_t max_elem,
13969 + static int create_prog_array(enum bpf_prog_type prog_type, uint32_t max_elem,
13998 13970 int p1key)
13999 13971 {
14000 13972 int p2key = 1;
··· 14065 14037
14066 14038 static char bpf_vlog[UINT_MAX >> 8];
14067 14039
14068 - static void do_test_fixup(struct bpf_test *test, enum bpf_map_type prog_type,
14040 + static void do_test_fixup(struct bpf_test *test, enum bpf_prog_type prog_type,
14069 14041 struct bpf_insn *prog, int *map_fds)
14070 14042 {
14071 14043 int *fixup_map_hash_8b = test->fixup_map_hash_8b;
··· 14194 14166 do {
14195 14167 prog[*fixup_map_stacktrace].imm = map_fds[12];
14196 14168 fixup_map_stacktrace++;
14197 - } while (fixup_map_stacktrace);
14169 + } while (*fixup_map_stacktrace);
14198 14170 }
14199 14172
14200 14172
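The last hunk in this file fixes a classic loop bug: the `do/while` condition tested the pointer `fixup_map_stacktrace` (always non-NULL once incremented) instead of the value it points at, so the zero terminator of the fixup array was never seen and the loop ran off the end. A minimal reproduction of the corrected idiom, with `apply_fixups` and the array contents as illustrative stand-ins for `do_test_fixup`:

```c
/* Patch every instruction slot named in a zero-terminated index array,
 * the way do_test_fixup() walks a test's fixup list. */
int apply_fixups(int *prog, const int *fixup, int fd)
{
    int patched = 0;

    if (*fixup) {
        do {
            prog[*fixup] = fd;
            patched++;
            fixup++;
        } while (*fixup);   /* dereference: stop at the 0 terminator */
    }
    return patched;
}

int demo_fixups(void)
{
    int prog[8] = { 0 };
    int fixup[] = { 2, 5, 0 };  /* patch slots 2 and 5, then terminate */

    if (apply_fixups(prog, fixup, 42) != 2)
        return 0;
    return prog[2] == 42 && prog[5] == 42;
}
```

Note the terminator convention also means slot 0 can never be a fixup target, which holds for the kernel tests too: the first instruction is never patched.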
+1
tools/testing/selftests/net/Makefile
··· 7 7 TEST_PROGS := run_netsocktests run_afpackettests test_bpf.sh netdevice.sh rtnetlink.sh 8 8 TEST_PROGS += fib_tests.sh fib-onlink-tests.sh pmtu.sh udpgso.sh ip_defrag.sh 9 9 TEST_PROGS += udpgso_bench.sh fib_rule_tests.sh msg_zerocopy.sh psock_snd.sh 10 + TEST_PROGS += test_vxlan_fdb_changelink.sh 10 11 TEST_PROGS_EXTENDED := in_netns.sh 11 12 TEST_GEN_FILES = socket 12 13 TEST_GEN_FILES += psock_fanout psock_tpacket msg_zerocopy
+29
tools/testing/selftests/net/test_vxlan_fdb_changelink.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + # Check FDB default-remote handling across "ip link set". 5 + 6 + check_remotes() 7 + { 8 + local what=$1; shift 9 + local N=$(bridge fdb sh dev vx | grep 00:00:00:00:00:00 | wc -l) 10 + 11 + echo -ne "expected two remotes after $what\t" 12 + if [[ $N != 2 ]]; then 13 + echo "[FAIL]" 14 + EXIT_STATUS=1 15 + else 16 + echo "[ OK ]" 17 + fi 18 + } 19 + 20 + ip link add name vx up type vxlan id 2000 dstport 4789 21 + bridge fdb ap dev vx 00:00:00:00:00:00 dst 192.0.2.20 self permanent 22 + bridge fdb ap dev vx 00:00:00:00:00:00 dst 192.0.2.30 self permanent 23 + check_remotes "fdb append" 24 + 25 + ip link set dev vx type vxlan remote 192.0.2.30 26 + check_remotes "link set" 27 + 28 + ip link del dev vx 29 + exit $EXIT_STATUS
+7 -2
tools/testing/selftests/seccomp/seccomp_bpf.c
··· 2731 2731 ASSERT_EQ(child_pid, waitpid(child_pid, &status, 0)); 2732 2732 ASSERT_EQ(true, WIFSTOPPED(status)); 2733 2733 ASSERT_EQ(SIGSTOP, WSTOPSIG(status)); 2734 - /* Verify signal delivery came from parent now. */ 2735 2734 ASSERT_EQ(0, ptrace(PTRACE_GETSIGINFO, child_pid, NULL, &info)); 2736 - EXPECT_EQ(getpid(), info.si_pid); 2735 + /* 2736 + * There is no siginfo on SIGSTOP any more, so we can't verify 2737 + * signal delivery came from parent now (getpid() == info.si_pid). 2738 + * https://lkml.kernel.org/r/CAGXu5jJaZAOzP1qFz66tYrtbuywqb+UN2SOA1VLHpCCOiYvYeg@mail.gmail.com 2739 + * At least verify the SIGSTOP via PTRACE_GETSIGINFO. 2740 + */ 2741 + EXPECT_EQ(SIGSTOP, info.si_signo); 2737 2742 2738 2743 /* Restart nanosleep with SIGCONT, which triggers restart_syscall. */ 2739 2744 ASSERT_EQ(0, kill(child_pid, SIGCONT));
+4
tools/virtio/linux/kernel.h
··· 23 23 #define PAGE_MASK (~(PAGE_SIZE-1)) 24 24 #define PAGE_ALIGN(x) ((x + PAGE_SIZE - 1) & PAGE_MASK) 25 25 26 + /* generic data direction definitions */ 27 + #define READ 0 28 + #define WRITE 1 29 + 26 30 typedef unsigned long long phys_addr_t; 27 31 typedef unsigned long long dma_addr_t; 28 32 typedef size_t __kernel_size_t;
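The `READ`/`WRITE` definitions added here mirror the kernel's generic data-direction flags so userspace virtio test code can compile against headers that use them. A trivial illustration of the convention, with the request struct purely hypothetical:

```c
#define READ  0
#define WRITE 1

struct io_req { int dir; unsigned long len; };

static const char *dir_name(const struct io_req *r)
{
    return r->dir == WRITE ? "write" : "read";
}

int demo_dir(void)
{
    struct io_req r = { .dir = WRITE, .len = 512 };
    return dir_name(&r)[0];   /* 'w' */
}
```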
+5 -1
virt/kvm/coalesced_mmio.c
··· 175 175 { 176 176 struct kvm_coalesced_mmio_dev *dev, *tmp; 177 177 178 + if (zone->pio != 1 && zone->pio != 0) 179 + return -EINVAL; 180 + 178 181 mutex_lock(&kvm->slots_lock); 179 182 180 183 list_for_each_entry_safe(dev, tmp, &kvm->coalesced_zones, list) 181 - if (coalesced_mmio_in_range(dev, zone->addr, zone->size)) { 184 + if (zone->pio == dev->zone.pio && 185 + coalesced_mmio_in_range(dev, zone->addr, zone->size)) { 182 186 kvm_io_bus_unregister_dev(kvm, 183 187 zone->pio ? KVM_PIO_BUS : KVM_MMIO_BUS, &dev->dev); 184 188 kvm_iodevice_destructor(&dev->dev);
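The KVM hunk above does two things: it rejects any `zone->pio` value other than 0 or 1 up front, and it requires the bus type to match (not just the address range) before a coalesced zone is unregistered, so an MMIO zone can no longer be torn down by a PIO request covering the same addresses. The same validate-then-match shape in plain C, with `struct zone` and `unregister_zone` as illustrative stand-ins for the KVM structures:

```c
struct zone { int pio; unsigned long addr, len; };

/* Unregister zones matching both the bus type and the range; reject
 * pio values other than 0 and 1 up front, as the patch does. */
static int unregister_zone(struct zone *zones, int nzones, const struct zone *z)
{
    int removed = 0;

    if (z->pio != 0 && z->pio != 1)
        return -22;                       /* -EINVAL */

    for (int i = 0; i < nzones; i++)
        if (zones[i].pio == z->pio &&     /* bus type must agree... */
            z->addr >= zones[i].addr &&   /* ...as well as the range */
            z->addr < zones[i].addr + zones[i].len)
            removed++;
    return removed;
}

int demo_unregister(void)
{
    struct zone zones[] = {
        { .pio = 0, .addr = 0x1000, .len = 0x100 },  /* MMIO zone */
        { .pio = 1, .addr = 0x1000, .len = 0x100 },  /* PIO zone, same range */
    };
    struct zone bad  = { .pio = 7, .addr = 0x1000, .len = 0x10 };
    struct zone mmio = { .pio = 0, .addr = 0x1000, .len = 0x10 };

    if (unregister_zone(zones, 2, &bad) != -22)
        return 0;
    /* only the MMIO zone matches: pio gates the range comparison */
    return unregister_zone(zones, 2, &mmio);
}
```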