Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Conflicts:

drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
9e26680733d5 ("bnxt_en: Update firmware call to retrieve TX PTP timestamp")
9e518f25802c ("bnxt_en: 1PPS functions to configure TSIO pins")
099fdeda659d ("bnxt_en: Event handler for PPS events")

kernel/bpf/helpers.c
include/linux/bpf-cgroup.h
a2baf4e8bb0f ("bpf: Fix potentially incorrect results with bpf_get_local_storage()")
c7603cfa04e7 ("bpf: Add ambient BPF runtime context stored in current")

drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
5957cc557dc5 ("net/mlx5: Set all field of mlx5_irq before inserting it to the xarray")
2d0b41a37679 ("net/mlx5: Refcount mlx5_irq with integer")

MAINTAINERS
7b637cd52f02 ("MAINTAINERS: fix Microchip CAN BUS Analyzer Tool entry typo")
7d901a1e878a ("net: phy: add Maxlinear GPY115/21x/24x driver")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+2986 -1480
+2 -2
Documentation/bpf/libbpf/libbpf_naming_convention.rst
··· 108 108 109 109 For example, if current state of ``libbpf.map`` is: 110 110 111 - .. code-block:: c 111 + .. code-block:: none 112 112 113 113 LIBBPF_0.0.1 { 114 114 global: ··· 121 121 , and a new symbol ``bpf_func_c`` is being introduced, then 122 122 ``libbpf.map`` should be changed like this: 123 123 124 - .. code-block:: c 124 + .. code-block:: none 125 125 126 126 LIBBPF_0.0.1 { 127 127 global:
-109
Documentation/gpu/rfc/i915_gem_lmem.rst
··· 18 18 * Route shmem backend over to TTM SYSTEM for discrete 19 19 * TTM purgeable object support 20 20 * Move i915 buddy allocator over to TTM 21 - * MMAP ioctl mode(see `I915 MMAP`_) 22 - * SET/GET ioctl caching(see `I915 SET/GET CACHING`_) 23 21 * Send RFC(with mesa-dev on cc) for final sign off on the uAPI 24 22 * Add pciid for DG1 and turn on uAPI for real 25 - 26 - New object placement and region query uAPI 27 - ========================================== 28 - Starting from DG1 we need to give userspace the ability to allocate buffers from 29 - device local-memory. Currently the driver supports gem_create, which can place 30 - buffers in system memory via shmem, and the usual assortment of other 31 - interfaces, like dumb buffers and userptr. 32 - 33 - To support this new capability, while also providing a uAPI which will work 34 - beyond just DG1, we propose to offer three new bits of uAPI: 35 - 36 - DRM_I915_QUERY_MEMORY_REGIONS 37 - ----------------------------- 38 - New query ID which allows userspace to discover the list of supported memory 39 - regions(like system-memory and local-memory) for a given device. We identify 40 - each region with a class and instance pair, which should be unique. The class 41 - here would be DEVICE or SYSTEM, and the instance would be zero, on platforms 42 - like DG1. 43 - 44 - Side note: The class/instance design is borrowed from our existing engine uAPI, 45 - where we describe every physical engine in terms of its class, and the 46 - particular instance, since we can have more than one per class. 47 - 48 - In the future we also want to expose more information which can further 49 - describe the capabilities of a region. 50 - 51 - .. kernel-doc:: include/uapi/drm/i915_drm.h 52 - :functions: drm_i915_gem_memory_class drm_i915_gem_memory_class_instance drm_i915_memory_region_info drm_i915_query_memory_regions 53 - 54 - GEM_CREATE_EXT 55 - -------------- 56 - New ioctl which is basically just gem_create but now allows userspace to provide 57 - a chain of possible extensions. Note that if we don't provide any extensions and 58 - set flags=0 then we get the exact same behaviour as gem_create. 59 - 60 - Side note: We also need to support PXP[1] in the near future, which is also 61 - applicable to integrated platforms, and adds its own gem_create_ext extension, 62 - which basically lets userspace mark a buffer as "protected". 63 - 64 - .. kernel-doc:: include/uapi/drm/i915_drm.h 65 - :functions: drm_i915_gem_create_ext 66 - 67 - I915_GEM_CREATE_EXT_MEMORY_REGIONS 68 - ---------------------------------- 69 - Implemented as an extension for gem_create_ext, we would now allow userspace to 70 - optionally provide an immutable list of preferred placements at creation time, 71 - in priority order, for a given buffer object. For the placements we expect 72 - them each to use the class/instance encoding, as per the output of the regions 73 - query. Having the list in priority order will be useful in the future when 74 - placing an object, say during eviction. 75 - 76 - .. kernel-doc:: include/uapi/drm/i915_drm.h 77 - :functions: drm_i915_gem_create_ext_memory_regions 78 - 79 - One fair criticism here is that this seems a little over-engineered[2]. If we 80 - just consider DG1 then yes, a simple gem_create.flags or something is totally 81 - all that's needed to tell the kernel to allocate the buffer in local-memory or 82 - whatever. However looking to the future we need uAPI which can also support 83 - upcoming Xe HP multi-tile architecture in a sane way, where there can be 84 - multiple local-memory instances for a given device, and so using both class and 85 - instance in our uAPI to describe regions is desirable, although specifically 86 - for DG1 it's uninteresting, since we only have a single local-memory instance. 87 - 88 - Existing uAPI issues 89 - ==================== 90 - Some potential issues we still need to resolve. 91 - 92 - I915 MMAP 93 - --------- 94 - In i915 there are multiple ways to MMAP GEM object, including mapping the same 95 - object using different mapping types(WC vs WB), i.e multiple active mmaps per 96 - object. TTM expects one MMAP at most for the lifetime of the object. If it 97 - turns out that we have to backpedal here, there might be some potential 98 - userspace fallout. 99 - 100 - I915 SET/GET CACHING 101 - -------------------- 102 - In i915 we have set/get_caching ioctl. TTM doesn't let us to change this, but 103 - DG1 doesn't support non-snooped pcie transactions, so we can just always 104 - allocate as WB for smem-only buffers. If/when our hw gains support for 105 - non-snooped pcie transactions then we must fix this mode at allocation time as 106 - a new GEM extension. 107 - 108 - This is related to the mmap problem, because in general (meaning, when we're 109 - not running on intel cpus) the cpu mmap must not, ever, be inconsistent with 110 - allocation mode. 111 - 112 - Possible idea is to let the kernel picks the mmap mode for userspace from the 113 - following table: 114 - 115 - smem-only: WB. Userspace does not need to call clflush. 116 - 117 - smem+lmem: We only ever allow a single mode, so simply allocate this as uncached 118 - memory, and always give userspace a WC mapping. GPU still does snooped access 119 - here(assuming we can't turn it off like on DG1), which is a bit inefficient. 120 - 121 - lmem only: always WC 122 - 123 - This means on discrete you only get a single mmap mode, all others must be 124 - rejected. That's probably going to be a new default mode or something like 125 - that. 126 - 127 - Links 128 - ===== 129 - [1] https://patchwork.freedesktop.org/series/86798/ 130 - 131 - [2] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5599#note_553791
-10
Documentation/networking/nf_conntrack-sysctl.rst
··· 191 191 TCP connections may be offloaded from nf conntrack to nf flow table. 192 192 Once aged, the connection is returned to nf conntrack with tcp pickup timeout. 193 193 194 - nf_flowtable_tcp_pickup - INTEGER (seconds) 195 - default 120 196 - 197 - TCP connection timeout after being aged from nf flow table offload. 198 - 199 194 nf_flowtable_udp_timeout - INTEGER (seconds) 200 195 default 30 201 196 202 197 Control offload timeout for udp connections. 203 198 UDP connections may be offloaded from nf conntrack to nf flow table. 204 199 Once aged, the connection is returned to nf conntrack with udp pickup timeout. 205 - 206 - nf_flowtable_udp_pickup - INTEGER (seconds) 207 - default 30 208 - 209 - UDP connection timeout after being aged from nf flow table offload.
+1 -1
Documentation/userspace-api/seccomp_filter.rst
··· 263 263 ``ioctl(SECCOMP_IOCTL_NOTIF_ADDFD)``. The ``id`` member of 264 264 ``struct seccomp_notif_addfd`` should be the same ``id`` as in 265 265 ``struct seccomp_notif``. The ``newfd_flags`` flag may be used to set flags 266 - like O_EXEC on the file descriptor in the notifying process. If the supervisor 266 + like O_CLOEXEC on the file descriptor in the notifying process. If the supervisor 267 267 wants to inject the file descriptor with a specific number, the 268 268 ``SECCOMP_ADDFD_FLAG_SETFD`` flag can be used, and set the ``newfd`` member to 269 269 the specific number to use. If that file descriptor is already open in the
+5 -4
MAINTAINERS
··· 11347 11347 S: Supported 11348 11348 F: drivers/net/phy/mxl-gpy.c 11349 11349 11350 - MCAB MICROCHIP CAN BUS ANALYZER TOOL DRIVER 11350 + MCBA MICROCHIP CAN BUS ANALYZER TOOL DRIVER 11351 11351 R: Yasushi SHOJI <yashi@spacecubics.com> 11352 11352 L: linux-can@vger.kernel.org 11353 11353 S: Maintained ··· 15823 15823 F: drivers/i2c/busses/i2c-emev2.c 15824 15824 15825 15825 RENESAS ETHERNET DRIVERS 15826 - R: Sergei Shtylyov <sergei.shtylyov@gmail.com> 15826 + R: Sergey Shtylyov <s.shtylyov@omp.ru> 15827 15827 L: netdev@vger.kernel.org 15828 15828 L: linux-renesas-soc@vger.kernel.org 15829 15829 F: Documentation/devicetree/bindings/net/renesas,*.yaml ··· 17835 17835 F: include/uapi/linux/sync_file.h 17836 17836 17837 17837 SYNOPSYS ARC ARCHITECTURE 17838 - M: Vineet Gupta <vgupta@synopsys.com> 17838 + M: Vineet Gupta <vgupta@kernel.org> 17839 17839 L: linux-snps-arc@lists.infradead.org 17840 17840 S: Supported 17841 17841 T: git git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc.git ··· 20037 20037 F: Documentation/devicetree/bindings/mfd/wlf,arizona.yaml 20038 20038 F: Documentation/devicetree/bindings/mfd/wm831x.txt 20039 20039 F: Documentation/devicetree/bindings/regulator/wlf,arizona.yaml 20040 - F: Documentation/devicetree/bindings/sound/wlf,arizona.yaml 20040 + F: Documentation/devicetree/bindings/sound/wlf,*.yaml 20041 + F: Documentation/devicetree/bindings/sound/wm* 20041 20042 F: Documentation/hwmon/wm83??.rst 20042 20043 F: arch/arm/mach-s3c/mach-crag6410* 20043 20044 F: drivers/clk/clk-wm83*.c
+11 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 14 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc4 5 + EXTRAVERSION = -rc5 6 6 NAME = Opossums on Parade 7 7 8 8 # *DOCUMENTATION* ··· 1315 1315 PHONY += scripts_unifdef 1316 1316 scripts_unifdef: scripts_basic 1317 1317 $(Q)$(MAKE) $(build)=scripts scripts/unifdef 1318 + 1319 + # --------------------------------------------------------------------------- 1320 + # Install 1321 + 1322 + # Many distributions have the custom install script, /sbin/installkernel. 1323 + # If DKMS is installed, 'make install' will eventually recurse back 1324 + # to this Makefile to build and install external modules. 1325 + # Cancel sub_make_done so that options such as M=, V=, etc. are parsed. 1326 + 1327 + install: sub_make_done := 1318 1328 1319 1329 # --------------------------------------------------------------------------- 1320 1330 # Tools
+1 -1
arch/arc/Kconfig
··· 409 409 help 410 410 Depending on the configuration, CPU can contain DSP registers 411 411 (ACC0_GLO, ACC0_GHI, DSP_BFLY0, DSP_CTRL, DSP_FFT_CTRL). 412 - Bellow is options describing how to handle these registers in 412 + Below are options describing how to handle these registers in 413 413 interrupt entry / exit and in context switch. 414 414 415 415 config ARC_DSP_NONE
+1 -1
arch/arc/include/asm/checksum.h
··· 24 24 */ 25 25 static inline __sum16 csum_fold(__wsum s) 26 26 { 27 - unsigned r = s << 16 | s >> 16; /* ror */ 27 + unsigned int r = s << 16 | s >> 16; /* ror */ 28 28 s = ~s; 29 29 s -= r; 30 30 return s >> 16;
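A standalone model of the rotate-based fold above may make the arithmetic easier to follow (a sketch with hypothetical names, using plain `uint32_t`/`uint16_t` instead of the kernel's `__wsum`/`__sum16` types):

```c
#include <stdint.h>

/* Fold a 32-bit ones'-complement sum into 16 bits, mirroring the ARC
 * csum_fold() shown above: rotate by 16, complement, subtract, keep the
 * top half. Hypothetical standalone sketch, not the kernel implementation. */
static uint16_t fold16(uint32_t s)
{
	uint32_t r = (s << 16) | (s >> 16);	/* rotate by 16 bits */

	s = ~s;
	s -= r;
	return (uint16_t)(s >> 16);
}
```

For example, `fold16(0x00010002)` yields `0xfffc`, matching the conventional `~((sum & 0xffff) + (sum >> 16))` fold.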
+1 -1
arch/arc/include/asm/perf_event.h
··· 123 123 #define C(_x) PERF_COUNT_HW_CACHE_##_x 124 124 #define CACHE_OP_UNSUPPORTED 0xffff 125 125 126 - static const unsigned arc_pmu_cache_map[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = { 126 + static const unsigned int arc_pmu_cache_map[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = { 127 127 [C(L1D)] = { 128 128 [C(OP_READ)] = { 129 129 [C(RESULT_ACCESS)] = PERF_COUNT_ARC_LDC,
+6 -3
arch/arc/kernel/fpu.c
··· 57 57 58 58 void fpu_init_task(struct pt_regs *regs) 59 59 { 60 + const unsigned int fwe = 0x80000000; 61 + 60 62 /* default rounding mode */ 61 63 write_aux_reg(ARC_REG_FPU_CTRL, 0x100); 62 64 63 - /* set "Write enable" to allow explicit write to exception flags */ 64 - write_aux_reg(ARC_REG_FPU_STATUS, 0x80000000); 65 + /* Initialize to zero: setting requires FWE be set */ 66 + write_aux_reg(ARC_REG_FPU_STATUS, fwe); 65 67 } 66 68 67 69 void fpu_save_restore(struct task_struct *prev, struct task_struct *next) 68 70 { 69 71 struct arc_fpu *save = &prev->thread.fpu; 70 72 struct arc_fpu *restore = &next->thread.fpu; 73 + const unsigned int fwe = 0x80000000; 71 74 72 75 save->ctrl = read_aux_reg(ARC_REG_FPU_CTRL); 73 76 save->status = read_aux_reg(ARC_REG_FPU_STATUS); 74 77 75 78 write_aux_reg(ARC_REG_FPU_CTRL, restore->ctrl); 76 - write_aux_reg(ARC_REG_FPU_STATUS, restore->status); 79 + write_aux_reg(ARC_REG_FPU_STATUS, (fwe | restore->status)); 77 80 } 78 81 79 82 #endif
+5 -5
arch/arc/kernel/unwind.c
··· 260 260 { 261 261 const u8 *ptr; 262 262 unsigned long tableSize = table->size, hdrSize; 263 - unsigned n; 263 + unsigned int n; 264 264 const u32 *fde; 265 265 struct { 266 266 u8 version; ··· 462 462 { 463 463 const u8 *cur = *pcur; 464 464 uleb128_t value; 465 - unsigned shift; 465 + unsigned int shift; 466 466 467 467 for (shift = 0, value = 0; cur < end; shift += 7) { 468 468 if (shift + 7 > 8 * sizeof(value) ··· 483 483 { 484 484 const u8 *cur = *pcur; 485 485 sleb128_t value; 486 - unsigned shift; 486 + unsigned int shift; 487 487 488 488 for (shift = 0, value = 0; cur < end; shift += 7) { 489 489 if (shift + 7 > 8 * sizeof(value) ··· 609 609 static signed fde_pointer_type(const u32 *cie) 610 610 { 611 611 const u8 *ptr = (const u8 *)(cie + 2); 612 - unsigned version = *ptr; 612 + unsigned int version = *ptr; 613 613 614 614 if (*++ptr) { 615 615 const char *aug; ··· 904 904 const u8 *ptr = NULL, *end = NULL; 905 905 unsigned long pc = UNW_PC(frame) - frame->call_frame; 906 906 unsigned long startLoc = 0, endLoc = 0, cfa; 907 - unsigned i; 907 + unsigned int i; 908 908 signed ptrType = -1; 909 909 uleb128_t retAddrReg = 0; 910 910 const struct unwind_table *table;
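The hunks above only widen `unsigned` to `unsigned int` in the DWARF unwinder; the LEB128 decoding loops themselves are unchanged. A minimal standalone ULEB128 decoder of the same shape, without the kernel version's overflow guard, might look like this (illustrative names, not the kernel's `get_uleb128()`):

```c
#include <stddef.h>
#include <stdint.h>

/* Decode one unsigned LEB128 value from [*p, end), advancing *p.
 * Each byte contributes 7 payload bits, least-significant group first;
 * the high bit of each byte marks continuation. */
static uint64_t uleb128_decode(const uint8_t **p, const uint8_t *end)
{
	uint64_t value = 0;
	unsigned int shift;

	for (shift = 0; *p < end; shift += 7) {
		uint8_t byte = *(*p)++;

		value |= (uint64_t)(byte & 0x7f) << shift;
		if (!(byte & 0x80))
			break;	/* high bit clear: last byte */
	}
	return value;
}
```

The byte sequence `0xe5 0x8e 0x26` decodes to 624485 (101 + (14 << 7) + (38 << 14)).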
+2
arch/arc/kernel/vmlinux.lds.S
··· 88 88 CPUIDLE_TEXT 89 89 LOCK_TEXT 90 90 KPROBES_TEXT 91 + IRQENTRY_TEXT 92 + SOFTIRQENTRY_TEXT 91 93 *(.fixup) 92 94 *(.gnu.warning) 93 95 }
+1 -1
arch/arm/boot/dts/am437x-l4.dtsi
··· 1595 1595 compatible = "ti,am4372-d_can", "ti,am3352-d_can"; 1596 1596 reg = <0x0 0x2000>; 1597 1597 clocks = <&dcan1_fck>; 1598 - clock-name = "fck"; 1598 + clock-names = "fck"; 1599 1599 syscon-raminit = <&scm_conf 0x644 1>; 1600 1600 interrupts = <GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>; 1601 1601 status = "disabled";
+1 -1
arch/arm/boot/dts/am43x-epos-evm.dts
··· 582 582 status = "okay"; 583 583 pinctrl-names = "default"; 584 584 pinctrl-0 = <&i2c0_pins>; 585 - clock-frequency = <400000>; 585 + clock-frequency = <100000>; 586 586 587 587 tps65218: tps65218@24 { 588 588 reg = <0x24>;
+2 -2
arch/arm/boot/dts/imx53-m53menlo.dts
··· 388 388 389 389 pinctrl_power_button: powerbutgrp { 390 390 fsl,pins = < 391 - MX53_PAD_SD2_DATA2__GPIO1_13 0x1e4 391 + MX53_PAD_SD2_DATA0__GPIO1_15 0x1e4 392 392 >; 393 393 }; 394 394 395 395 pinctrl_power_out: poweroutgrp { 396 396 fsl,pins = < 397 - MX53_PAD_SD2_DATA0__GPIO1_15 0x1e4 397 + MX53_PAD_SD2_DATA2__GPIO1_13 0x1e4 398 398 >; 399 399 }; 400 400
+7 -1
arch/arm/boot/dts/imx6qdl-sr-som.dtsi
··· 54 54 pinctrl-names = "default"; 55 55 pinctrl-0 = <&pinctrl_microsom_enet_ar8035>; 56 56 phy-mode = "rgmii-id"; 57 - phy-reset-duration = <2>; 57 + 58 + /* 59 + * The PHY seems to require a long-enough reset duration to avoid 60 + * some rare issues where the PHY gets stuck in an inconsistent and 61 + * non-functional state at boot-up. 10ms proved to be fine. 62 + */ 63 + phy-reset-duration = <10>; 58 64 phy-reset-gpios = <&gpio4 15 GPIO_ACTIVE_LOW>; 59 65 status = "okay"; 60 66
+1
arch/arm/boot/dts/imx6ull-colibri-wifi.dtsi
··· 43 43 assigned-clock-rates = <0>, <198000000>; 44 44 cap-power-off-card; 45 45 keep-power-in-suspend; 46 + max-frequency = <25000000>; 46 47 mmc-pwrseq = <&wifi_pwrseq>; 47 48 no-1-8-v; 48 49 non-removable;
+1 -8
arch/arm/boot/dts/omap5-board-common.dtsi
··· 30 30 regulator-max-microvolt = <5000000>; 31 31 }; 32 32 33 - vdds_1v8_main: fixedregulator-vdds_1v8_main { 34 - compatible = "regulator-fixed"; 35 - regulator-name = "vdds_1v8_main"; 36 - vin-supply = <&smps7_reg>; 37 - regulator-min-microvolt = <1800000>; 38 - regulator-max-microvolt = <1800000>; 39 - }; 40 - 41 33 vmmcsd_fixed: fixedregulator-mmcsd { 42 34 compatible = "regulator-fixed"; 43 35 regulator-name = "vmmcsd_fixed"; ··· 479 487 regulator-boot-on; 480 488 }; 481 489 490 + vdds_1v8_main: 482 491 smps7_reg: smps7 { 483 492 /* VDDS_1v8_OMAP over VDDS_1v8_MAIN */ 484 493 regulator-name = "smps7";
+2 -2
arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
··· 755 755 status = "disabled"; 756 756 }; 757 757 758 - vica: intc@10140000 { 758 + vica: interrupt-controller@10140000 { 759 759 compatible = "arm,versatile-vic"; 760 760 interrupt-controller; 761 761 #interrupt-cells = <1>; 762 762 reg = <0x10140000 0x20>; 763 763 }; 764 764 765 - vicb: intc@10140020 { 765 + vicb: interrupt-controller@10140020 { 766 766 compatible = "arm,versatile-vic"; 767 767 interrupt-controller; 768 768 #interrupt-cells = <1>;
+14 -10
arch/arm/boot/dts/stm32mp15xx-dhcom-pdk2.dtsi
··· 37 37 poll-interval = <20>; 38 38 39 39 /* 40 - * The EXTi IRQ line 3 is shared with touchscreen and ethernet, 40 + * The EXTi IRQ line 3 is shared with ethernet, 41 41 * so mark this as polled GPIO key. 42 42 */ 43 43 button-0 { 44 44 label = "TA1-GPIO-A"; 45 45 linux,code = <KEY_A>; 46 46 gpios = <&gpiof 3 GPIO_ACTIVE_LOW>; 47 + }; 48 + 49 + /* 50 + * The EXTi IRQ line 6 is shared with touchscreen, 51 + * so mark this as polled GPIO key. 52 + */ 53 + button-1 { 54 + label = "TA2-GPIO-B"; 55 + linux,code = <KEY_B>; 56 + gpios = <&gpiod 6 GPIO_ACTIVE_LOW>; 47 57 }; 48 58 49 59 /* ··· 70 60 gpio-keys { 71 61 compatible = "gpio-keys"; 72 62 73 - button-1 { 74 - label = "TA2-GPIO-B"; 75 - linux,code = <KEY_B>; 76 - gpios = <&gpiod 6 GPIO_ACTIVE_LOW>; 77 - wakeup-source; 78 - }; 79 - 80 63 button-3 { 81 64 label = "TA4-GPIO-D"; 82 65 linux,code = <KEY_D>; ··· 85 82 label = "green:led5"; 86 83 gpios = <&gpioc 6 GPIO_ACTIVE_HIGH>; 87 84 default-state = "off"; 85 + status = "disabled"; 88 86 }; 89 87 90 88 led-1 { ··· 189 185 touchscreen@38 { 190 186 compatible = "edt,edt-ft5406"; 191 187 reg = <0x38>; 192 - interrupt-parent = <&gpiog>; 193 - interrupts = <2 IRQ_TYPE_EDGE_FALLING>; /* GPIO E */ 188 + interrupt-parent = <&gpioc>; 189 + interrupts = <6 IRQ_TYPE_EDGE_FALLING>; /* GPIO E */ 194 190 }; 195 191 }; 196 192
+4 -1
arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
··· 12 12 aliases { 13 13 ethernet0 = &ethernet0; 14 14 ethernet1 = &ksz8851; 15 + rtc0 = &hwrtc; 16 + rtc1 = &rtc; 15 17 }; 16 18 17 19 memory@c0000000 { ··· 140 138 reset-gpios = <&gpioh 3 GPIO_ACTIVE_LOW>; 141 139 reset-assert-us = <500>; 142 140 reset-deassert-us = <500>; 141 + smsc,disable-energy-detect; 143 142 interrupt-parent = <&gpioi>; 144 143 interrupts = <11 IRQ_TYPE_LEVEL_LOW>; 145 144 }; ··· 251 248 /delete-property/dmas; 252 249 /delete-property/dma-names; 253 250 254 - rtc@32 { 251 + hwrtc: rtc@32 { 255 252 compatible = "microcrystal,rv8803"; 256 253 reg = <0x32>; 257 254 };
+1 -1
arch/arm/mach-imx/common.h
··· 68 68 void v7_secondary_startup(void); 69 69 void imx_scu_map_io(void); 70 70 void imx_smp_prepare(void); 71 - void imx_gpcv2_set_core1_pdn_pup_by_software(bool pdn); 72 71 #else 73 72 static inline void imx_scu_map_io(void) {} 74 73 static inline void imx_smp_prepare(void) {} ··· 80 81 void imx_gpc_restore_all(void); 81 82 void imx_gpc_hwirq_mask(unsigned int hwirq); 82 83 void imx_gpc_hwirq_unmask(unsigned int hwirq); 84 + void imx_gpcv2_set_core1_pdn_pup_by_software(bool pdn); 83 85 void imx_anatop_init(void); 84 86 void imx_anatop_pre_suspend(void); 85 87 void imx_anatop_post_resume(void);
+14 -3
arch/arm/mach-imx/mmdc.c
··· 103 103 struct perf_event *mmdc_events[MMDC_NUM_COUNTERS]; 104 104 struct hlist_node node; 105 105 struct fsl_mmdc_devtype_data *devtype_data; 106 + struct clk *mmdc_ipg_clk; 106 107 }; 107 108 108 109 /* ··· 463 462 464 463 cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node); 465 464 perf_pmu_unregister(&pmu_mmdc->pmu); 465 + iounmap(pmu_mmdc->mmdc_base); 466 + clk_disable_unprepare(pmu_mmdc->mmdc_ipg_clk); 466 467 kfree(pmu_mmdc); 467 468 return 0; 468 469 } 469 470 470 - static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_base) 471 + static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_base, 472 + struct clk *mmdc_ipg_clk) 471 473 { 472 474 struct mmdc_pmu *pmu_mmdc; 473 475 char *name; ··· 498 494 } 499 495 500 496 mmdc_num = mmdc_pmu_init(pmu_mmdc, mmdc_base, &pdev->dev); 497 + pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk; 501 498 if (mmdc_num == 0) 502 499 name = "mmdc"; 503 500 else ··· 534 529 535 530 #else 536 531 #define imx_mmdc_remove NULL 537 - #define imx_mmdc_perf_init(pdev, mmdc_base) 0 532 + #define imx_mmdc_perf_init(pdev, mmdc_base, mmdc_ipg_clk) 0 538 533 #endif 539 534 540 535 static int imx_mmdc_probe(struct platform_device *pdev) ··· 572 567 val &= ~(1 << BP_MMDC_MAPSR_PSD); 573 568 writel_relaxed(val, reg); 574 569 575 - return imx_mmdc_perf_init(pdev, mmdc_base); 570 + err = imx_mmdc_perf_init(pdev, mmdc_base, mmdc_ipg_clk); 571 + if (err) { 572 + iounmap(mmdc_base); 573 + clk_disable_unprepare(mmdc_ipg_clk); 574 + } 575 + 576 + return err; 576 577 } 577 578 578 579 int imx_mmdc_get_ddr_type(void)
+1
arch/arm/mach-ixp4xx/Kconfig
··· 91 91 92 92 config MACH_GORAMO_MLR 93 93 bool "GORAMO Multi Link Router" 94 + depends on IXP4XX_PCI_LEGACY 94 95 help 95 96 Say 'Y' here if you want your kernel to support GORAMO 96 97 MultiLink router.
+9 -1
arch/arm/mach-omap2/omap_hwmod.c
··· 3776 3776 struct omap_hwmod_ocp_if *oi; 3777 3777 struct clockdomain *clkdm; 3778 3778 struct clk_hw_omap *clk; 3779 + struct clk_hw *hw; 3779 3780 3780 3781 if (!oh) 3781 3782 return NULL; ··· 3793 3792 c = oi->_clk; 3794 3793 } 3795 3794 3796 - clk = to_clk_hw_omap(__clk_get_hw(c)); 3795 + hw = __clk_get_hw(c); 3796 + if (!hw) 3797 + return NULL; 3798 + 3799 + clk = to_clk_hw_omap(hw); 3800 + if (!clk) 3801 + return NULL; 3802 + 3797 3803 clkdm = clk->clkdm; 3798 3804 if (!clkdm) 3799 3805 return NULL;
+6 -3
arch/arm64/Kconfig
··· 1800 1800 If unsure, say N. 1801 1801 1802 1802 config RANDOMIZE_MODULE_REGION_FULL 1803 - bool "Randomize the module region over a 4 GB range" 1803 + bool "Randomize the module region over a 2 GB range" 1804 1804 depends on RANDOMIZE_BASE 1805 1805 default y 1806 1806 help 1807 - Randomizes the location of the module region inside a 4 GB window 1807 + Randomizes the location of the module region inside a 2 GB window 1808 1808 covering the core kernel. This way, it is less likely for modules 1809 1809 to leak information about the location of core kernel data structures 1810 1810 but it does imply that function calls between modules and the core ··· 1812 1812 1813 1813 When this option is not set, the module region will be randomized over 1814 1814 a limited range that contains the [_stext, _etext] interval of the 1815 - core kernel, so branch relocations are always in range. 1815 + core kernel, so branch relocations are almost always in range unless 1816 + ARM64_MODULE_PLTS is enabled and the region is exhausted. In this 1817 + particular case of region exhaustion, modules might be able to fall 1818 + back to a larger 2GB area. 1816 1819 1817 1820 config CC_HAVE_STACKPROTECTOR_SYSREG 1818 1821 def_bool $(cc-option,-mstack-protector-guard=sysreg -mstack-protector-guard-reg=sp_el0 -mstack-protector-guard-offset=0)
+12 -9
arch/arm64/Makefile
··· 21 21 endif 22 22 23 23 ifeq ($(CONFIG_ARM64_ERRATUM_843419),y) 24 - ifneq ($(CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419),y) 25 - $(warning ld does not support --fix-cortex-a53-843419; kernel may be susceptible to erratum) 26 - else 24 + ifeq ($(CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419),y) 27 25 LDFLAGS_vmlinux += --fix-cortex-a53-843419 28 - endif 29 - endif 30 - 31 - ifeq ($(CONFIG_ARM64_USE_LSE_ATOMICS), y) 32 - ifneq ($(CONFIG_ARM64_LSE_ATOMICS), y) 33 - $(warning LSE atomics not supported by binutils) 34 26 endif 35 27 endif 36 28 ··· 168 176 169 177 archprepare: 170 178 $(Q)$(MAKE) $(build)=arch/arm64/tools kapi 179 + ifeq ($(CONFIG_ARM64_ERRATUM_843419),y) 180 + ifneq ($(CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419),y) 181 + @echo "warning: ld does not support --fix-cortex-a53-843419; kernel may be susceptible to erratum" >&2 182 + endif 183 + endif 184 + ifeq ($(CONFIG_ARM64_USE_LSE_ATOMICS),y) 185 + ifneq ($(CONFIG_ARM64_LSE_ATOMICS),y) 186 + @echo "warning: LSE atomics not supported by binutils" >&2 187 + endif 188 + endif 189 + 171 190 172 191 # We use MRPROPER_FILES and CLEAN_FILES now 173 192 archclean:
+2
arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var2.dts
··· 54 54 55 55 &mscc_felix_port0 { 56 56 label = "swp0"; 57 + managed = "in-band-status"; 57 58 phy-handle = <&phy0>; 58 59 phy-mode = "sgmii"; 59 60 status = "okay"; ··· 62 61 63 62 &mscc_felix_port1 { 64 63 label = "swp1"; 64 + managed = "in-band-status"; 65 65 phy-handle = <&phy1>; 66 66 phy-mode = "sgmii"; 67 67 status = "okay";
+1 -1
arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
··· 66 66 }; 67 67 }; 68 68 69 - sysclk: clock-sysclk { 69 + sysclk: sysclk { 70 70 compatible = "fixed-clock"; 71 71 #clock-cells = <0>; 72 72 clock-frequency = <100000000>;
+3
arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
··· 19 19 aliases { 20 20 spi0 = &spi0; 21 21 ethernet1 = &eth1; 22 + mmc0 = &sdhci0; 23 + mmc1 = &sdhci1; 22 24 }; 23 25 24 26 chosen { ··· 121 119 pinctrl-names = "default"; 122 120 pinctrl-0 = <&i2c1_pins>; 123 121 clock-frequency = <100000>; 122 + /delete-property/ mrvl,i2c-fast-mode; 124 123 status = "okay"; 125 124 126 125 rtc@6f {
+54 -6
arch/arm64/boot/dts/nvidia/tegra194.dtsi
··· 1840 1840 1841 1841 interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE1R &emc>, 1842 1842 <&mc TEGRA194_MEMORY_CLIENT_PCIE1W &emc>; 1843 - interconnect-names = "read", "write"; 1843 + interconnect-names = "dma-mem", "write"; 1844 + iommus = <&smmu TEGRA194_SID_PCIE1>; 1845 + iommu-map = <0x0 &smmu TEGRA194_SID_PCIE1 0x1000>; 1846 + iommu-map-mask = <0x0>; 1847 + dma-coherent; 1844 1848 }; 1845 1849 1846 1850 pcie@14120000 { ··· 1894 1890 1895 1891 interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE2AR &emc>, 1896 1892 <&mc TEGRA194_MEMORY_CLIENT_PCIE2AW &emc>; 1897 - interconnect-names = "read", "write"; 1893 + interconnect-names = "dma-mem", "write"; 1894 + iommus = <&smmu TEGRA194_SID_PCIE2>; 1895 + iommu-map = <0x0 &smmu TEGRA194_SID_PCIE2 0x1000>; 1896 + iommu-map-mask = <0x0>; 1897 + dma-coherent; 1898 1898 }; 1899 1899 1900 1900 pcie@14140000 { ··· 1948 1940 1949 1941 interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE3R &emc>, 1950 1942 <&mc TEGRA194_MEMORY_CLIENT_PCIE3W &emc>; 1951 - interconnect-names = "read", "write"; 1943 + interconnect-names = "dma-mem", "write"; 1944 + iommus = <&smmu TEGRA194_SID_PCIE3>; 1945 + iommu-map = <0x0 &smmu TEGRA194_SID_PCIE3 0x1000>; 1946 + iommu-map-mask = <0x0>; 1947 + dma-coherent; 1952 1948 }; 1953 1949 1954 1950 pcie@14160000 { ··· 2002 1990 2003 1991 interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE4R &emc>, 2004 1992 <&mc TEGRA194_MEMORY_CLIENT_PCIE4W &emc>; 2005 - interconnect-names = "read", "write"; 1993 + interconnect-names = "dma-mem", "write"; 1994 + iommus = <&smmu TEGRA194_SID_PCIE4>; 1995 + iommu-map = <0x0 &smmu TEGRA194_SID_PCIE4 0x1000>; 1996 + iommu-map-mask = <0x0>; 1997 + dma-coherent; 2006 1998 }; 2007 1999 2008 2000 pcie@14180000 { ··· 2056 2040 2057 2041 interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE0R &emc>, 2058 2042 <&mc TEGRA194_MEMORY_CLIENT_PCIE0W &emc>; 2059 - interconnect-names = "read", "write"; 2043 + interconnect-names = "dma-mem", "write"; 2044 + iommus = <&smmu TEGRA194_SID_PCIE0>; 2045 + iommu-map = <0x0 &smmu TEGRA194_SID_PCIE0 0x1000>; 2046 + iommu-map-mask = <0x0>; 2047 + dma-coherent; 2060 2048 }; 2061 2049 2062 2050 pcie@141a0000 { ··· 2114 2094 2115 2095 interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE5R &emc>, 2116 2096 <&mc TEGRA194_MEMORY_CLIENT_PCIE5W &emc>; 2117 - interconnect-names = "read", "write"; 2097 + interconnect-names = "dma-mem", "write"; 2098 + iommus = <&smmu TEGRA194_SID_PCIE5>; 2099 + iommu-map = <0x0 &smmu TEGRA194_SID_PCIE5 0x1000>; 2100 + iommu-map-mask = <0x0>; 2101 + dma-coherent; 2118 2102 }; 2119 2103 2120 2104 pcie_ep@14160000 { ··· 2151 2127 nvidia,aspm-cmrt-us = <60>; 2152 2128 nvidia,aspm-pwr-on-t-us = <20>; 2153 2129 nvidia,aspm-l0s-entrance-latency-us = <3>; 2130 + 2131 + interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE4R &emc>, 2132 + <&mc TEGRA194_MEMORY_CLIENT_PCIE4W &emc>; 2133 + interconnect-names = "dma-mem", "write"; 2134 + iommus = <&smmu TEGRA194_SID_PCIE4>; 2135 + iommu-map = <0x0 &smmu TEGRA194_SID_PCIE4 0x1000>; 2136 + iommu-map-mask = <0x0>; 2137 + dma-coherent; 2154 2138 }; 2155 2139 2156 2140 pcie_ep@14180000 { ··· 2191 2159 nvidia,aspm-cmrt-us = <60>; 2192 2160 nvidia,aspm-pwr-on-t-us = <20>; 2193 2161 nvidia,aspm-l0s-entrance-latency-us = <3>; 2162 + 2163 + interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE0R &emc>, 2164 + <&mc TEGRA194_MEMORY_CLIENT_PCIE0W &emc>; 2165 + interconnect-names = "dma-mem", "write"; 2166 + iommus = <&smmu TEGRA194_SID_PCIE0>; 2167 + iommu-map = <0x0 &smmu TEGRA194_SID_PCIE0 0x1000>; 2168 + iommu-map-mask = <0x0>; 2169 + dma-coherent; 2194 2170 }; 2195 2171 2196 2172 pcie_ep@141a0000 { ··· 2234 2194 nvidia,aspm-cmrt-us = <60>; 2235 2195 nvidia,aspm-pwr-on-t-us = <20>; 2236 2196 nvidia,aspm-l0s-entrance-latency-us = <3>; 2197 + 2198 + interconnects = <&mc TEGRA194_MEMORY_CLIENT_PCIE5R &emc>, 2199 + <&mc TEGRA194_MEMORY_CLIENT_PCIE5W &emc>; 2200 + interconnect-names = "dma-mem", "write"; 2201 + iommus = <&smmu TEGRA194_SID_PCIE5>; 2202 + iommu-map = <0x0 &smmu TEGRA194_SID_PCIE5 0x1000>; 2203 + iommu-map-mask = <0x0>; 2204 + dma-coherent; 2237 2205 }; 2238 2206 2239 2207 sram@40000000 {
+11 -1
arch/arm64/include/asm/ptrace.h
··· 320 320 321 321 static inline unsigned long regs_return_value(struct pt_regs *regs) 322 322 { 323 - return regs->regs[0]; 323 + unsigned long val = regs->regs[0]; 324 + 325 + /* 326 + * Audit currently uses regs_return_value() instead of 327 + * syscall_get_return_value(). Apply the same sign-extension here until 328 + * audit is updated to use syscall_get_return_value(). 329 + */ 330 + if (compat_user_mode(regs)) 331 + val = sign_extend64(val, 31); 332 + 333 + return val; 324 334 } 325 335 326 336 static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
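The fix above sign-extends `x0` at bit 31 for compat (AArch32) tasks, so a negative errno returned by a 32-bit syscall is seen as negative by 64-bit consumers such as audit. The extension itself can be sketched as a shift pair (hypothetical helper name, mirroring the semantics of the kernel's `sign_extend64()`):

```c
#include <stdint.h>

/* Sign-extend @value, treating bit @index as the sign bit
 * (sketch of sign_extend64() semantics; the name is ours). */
static int64_t sign_extend64_sketch(uint64_t value, int index)
{
	int shift = 63 - index;

	/* Move the sign bit up to bit 63, then arithmetic-shift back down. */
	return (int64_t)(value << shift) >> shift;
}
```

With `index = 31`, a compat return of `0xffffffff` (32-bit `-1`) becomes `-1`, while a non-negative value such as `0x7fffffff` is left unchanged.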
+1 -1
arch/arm64/include/asm/stacktrace.h
··· 35 35 * accounting information necessary for robust unwinding. 36 36 * 37 37 * @fp: The fp value in the frame record (or the real fp) 38 - * @pc: The fp value in the frame record (or the real lr) 38 + * @pc: The lr value in the frame record (or the real lr) 39 39 * 40 40 * @stacks_done: Stacks which have been entirely unwound, for which it is no 41 41 * longer valid to unwind to.
+11 -10
arch/arm64/include/asm/syscall.h
··· 29 29 regs->regs[0] = regs->orig_x0; 30 30 } 31 31 32 + static inline long syscall_get_return_value(struct task_struct *task, 33 + struct pt_regs *regs) 34 + { 35 + unsigned long val = regs->regs[0]; 36 + 37 + if (is_compat_thread(task_thread_info(task))) 38 + val = sign_extend64(val, 31); 39 + 40 + return val; 41 + } 32 42 33 43 static inline long syscall_get_error(struct task_struct *task, 34 44 struct pt_regs *regs) 35 45 { 36 - unsigned long error = regs->regs[0]; 37 - 38 - if (is_compat_thread(task_thread_info(task))) 39 - error = sign_extend64(error, 31); 46 + unsigned long error = syscall_get_return_value(task, regs); 40 47 41 48 return IS_ERR_VALUE(error) ? error : 0; 42 - } 43 - 44 - static inline long syscall_get_return_value(struct task_struct *task, 45 - struct pt_regs *regs) 46 - { 47 - return regs->regs[0]; 48 49 } 49 50 50 51 static inline void syscall_set_return_value(struct task_struct *task,
+3 -1
arch/arm64/kernel/kaslr.c
··· 162 162 * a PAGE_SIZE multiple in the range [_etext - MODULES_VSIZE, 163 163 * _stext) . This guarantees that the resulting region still 164 164 * covers [_stext, _etext], and that all relative branches can 165 - * be resolved without veneers. 165 + * be resolved without veneers unless this region is exhausted 166 + * and we fall back to a larger 2GB window in module_alloc() 167 + * when ARM64_MODULE_PLTS is enabled. 166 168 */ 167 169 module_range = MODULES_VSIZE - (u64)(_etext - _stext); 168 170 module_alloc_base = (u64)_etext + offset - MODULES_VSIZE;
+1 -1
arch/arm64/kernel/ptrace.c
··· 1862 1862 audit_syscall_exit(regs); 1863 1863 1864 1864 if (flags & _TIF_SYSCALL_TRACEPOINT) 1865 - trace_sys_exit(regs, regs_return_value(regs)); 1865 + trace_sys_exit(regs, syscall_get_return_value(current, regs)); 1866 1866 1867 1867 if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP)) 1868 1868 tracehook_report_syscall(regs, PTRACE_SYSCALL_EXIT);
+2 -1
arch/arm64/kernel/signal.c
··· 29 29 #include <asm/unistd.h> 30 30 #include <asm/fpsimd.h> 31 31 #include <asm/ptrace.h> 32 + #include <asm/syscall.h> 32 33 #include <asm/signal32.h> 33 34 #include <asm/traps.h> 34 35 #include <asm/vdso.h> ··· 891 890 retval == -ERESTART_RESTARTBLOCK || 892 891 (retval == -ERESTARTSYS && 893 892 !(ksig.ka.sa.sa_flags & SA_RESTART)))) { 894 - regs->regs[0] = -EINTR; 893 + syscall_set_return_value(current, regs, -EINTR, 0); 895 894 regs->pc = continue_addr; 896 895 } 897 896
+1 -1
arch/arm64/kernel/stacktrace.c
··· 218 218 219 219 #ifdef CONFIG_STACKTRACE 220 220 221 - noinline void arch_stack_walk(stack_trace_consume_fn consume_entry, 221 + noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry, 222 222 void *cookie, struct task_struct *task, 223 223 struct pt_regs *regs) 224 224 {
+3 -6
arch/arm64/kernel/syscall.c
··· 54 54 ret = do_ni_syscall(regs, scno); 55 55 } 56 56 57 - if (is_compat_task()) 58 - ret = lower_32_bits(ret); 59 - 60 - regs->regs[0] = ret; 57 + syscall_set_return_value(current, regs, 0, ret); 61 58 62 59 /* 63 60 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(), ··· 112 115 * syscall. do_notify_resume() will send a signal to userspace 113 116 * before the syscall is restarted. 114 117 */ 115 - regs->regs[0] = -ERESTARTNOINTR; 118 + syscall_set_return_value(current, regs, -ERESTARTNOINTR, 0); 116 119 return; 117 120 } 118 121 ··· 133 136 * anyway. 134 137 */ 135 138 if (scno == NO_SYSCALL) 136 - regs->regs[0] = -ENOSYS; 139 + syscall_set_return_value(current, regs, -ENOSYS, 0); 137 140 scno = syscall_trace_enter(regs); 138 141 if (scno == NO_SYSCALL) 139 142 goto trace_exit;
+1 -1
arch/mips/Makefile
··· 321 321 322 322 ifdef CONFIG_MIPS 323 323 CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \ 324 - egrep -vw '__GNUC_(|MINOR_|PATCHLEVEL_)_' | \ 324 + egrep -vw '__GNUC_(MINOR_|PATCHLEVEL_)?_' | \ 325 325 sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/" -e 's/\$$/&&/g') 326 326 endif 327 327
+11 -6
arch/mips/include/asm/pgalloc.h
··· 58 58 59 59 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address) 60 60 { 61 - pmd_t *pmd = NULL; 61 + pmd_t *pmd; 62 62 struct page *pg; 63 63 64 - pg = alloc_pages(GFP_KERNEL | __GFP_ACCOUNT, PMD_ORDER); 65 - if (pg) { 66 - pgtable_pmd_page_ctor(pg); 67 - pmd = (pmd_t *)page_address(pg); 68 - pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table); 64 + pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_ORDER); 65 + if (!pg) 66 + return NULL; 67 + 68 + if (!pgtable_pmd_page_ctor(pg)) { 69 + __free_pages(pg, PMD_ORDER); 70 + return NULL; 69 71 } 72 + 73 + pmd = (pmd_t *)page_address(pg); 74 + pmd_init((unsigned long)pmd, (unsigned long)invalid_pte_table); 70 75 return pmd; 71 76 } 72 77
+2 -1
arch/mips/mti-malta/malta-platform.c
··· 48 48 .mapbase = 0x1f000900, /* The CBUS UART */ 49 49 .irq = MIPS_CPU_IRQ_BASE + MIPSCPU_INT_MB2, 50 50 .uartclk = 3686400, /* Twice the usual clk! */ 51 - .iotype = UPIO_MEM32, 51 + .iotype = IS_ENABLED(CONFIG_CPU_BIG_ENDIAN) ? 52 + UPIO_MEM32BE : UPIO_MEM32, 52 53 .flags = CBUS_UART_FLAGS, 53 54 .regshift = 3, 54 55 },
+7
arch/riscv/Kconfig
··· 492 492 493 493 config STACKPROTECTOR_PER_TASK 494 494 def_bool y 495 + depends on !GCC_PLUGIN_RANDSTRUCT 495 496 depends on STACKPROTECTOR && CC_HAVE_STACKPROTECTOR_TLS 497 + 498 + config PHYS_RAM_BASE_FIXED 499 + bool "Explicitly specified physical RAM address" 500 + default n 496 501 497 502 config PHYS_RAM_BASE 498 503 hex "Platform Physical RAM address" 504 + depends on PHYS_RAM_BASE_FIXED 499 505 default "0x80000000" 500 506 help 501 507 This is the physical address of RAM in the system. It has to be ··· 514 508 # This prevents XIP from being enabled by all{yes,mod}config, which 515 509 # fail to build since XIP doesn't support large kernels. 516 510 depends on !COMPILE_TEST 511 + select PHYS_RAM_BASE_FIXED 517 512 help 518 513 Execute-In-Place allows the kernel to run from non-volatile storage 519 514 directly addressable by the CPU, such as NOR flash. This saves RAM
+1 -1
arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts
··· 24 24 25 25 memory@80000000 { 26 26 device_type = "memory"; 27 - reg = <0x0 0x80000000 0x2 0x00000000>; 27 + reg = <0x0 0x80000000 0x4 0x00000000>; 28 28 }; 29 29 30 30 soc {
+4 -3
arch/riscv/include/asm/page.h
··· 103 103 }; 104 104 105 105 extern struct kernel_mapping kernel_map; 106 + extern phys_addr_t phys_ram_base; 106 107 107 108 #ifdef CONFIG_64BIT 108 109 #define is_kernel_mapping(x) \ ··· 114 113 #define linear_mapping_pa_to_va(x) ((void *)((unsigned long)(x) + kernel_map.va_pa_offset)) 115 114 #define kernel_mapping_pa_to_va(y) ({ \ 116 115 unsigned long _y = y; \ 117 - (_y >= CONFIG_PHYS_RAM_BASE) ? \ 118 - (void *)((unsigned long)(_y) + kernel_map.va_kernel_pa_offset + XIP_OFFSET) : \ 119 - (void *)((unsigned long)(_y) + kernel_map.va_kernel_xip_pa_offset); \ 116 + (IS_ENABLED(CONFIG_XIP_KERNEL) && _y < phys_ram_base) ? \ 117 + (void *)((unsigned long)(_y) + kernel_map.va_kernel_xip_pa_offset) : \ 118 + (void *)((unsigned long)(_y) + kernel_map.va_kernel_pa_offset + XIP_OFFSET); \ 120 119 }) 121 120 #define __pa_to_va_nodebug(x) linear_mapping_pa_to_va(x) 122 121
+1 -1
arch/riscv/kernel/stacktrace.c
··· 27 27 fp = frame_pointer(regs); 28 28 sp = user_stack_pointer(regs); 29 29 pc = instruction_pointer(regs); 30 - } else if (task == current) { 30 + } else if (task == NULL || task == current) { 31 31 fp = (unsigned long)__builtin_frame_address(1); 32 32 sp = (unsigned long)__builtin_frame_address(0); 33 33 pc = (unsigned long)__builtin_return_address(0);
+12 -5
arch/riscv/mm/init.c
··· 36 36 #define kernel_map (*(struct kernel_mapping *)XIP_FIXUP(&kernel_map)) 37 37 #endif 38 38 39 + phys_addr_t phys_ram_base __ro_after_init; 40 + EXPORT_SYMBOL(phys_ram_base); 41 + 39 42 #ifdef CONFIG_XIP_KERNEL 40 43 extern char _xiprom[], _exiprom[]; 41 44 #endif ··· 163 160 phys_addr_t vmlinux_end = __pa_symbol(&_end); 164 161 phys_addr_t vmlinux_start = __pa_symbol(&_start); 165 162 phys_addr_t __maybe_unused max_mapped_addr; 166 - phys_addr_t dram_end; 163 + phys_addr_t phys_ram_end; 167 164 168 165 #ifdef CONFIG_XIP_KERNEL 169 166 vmlinux_start = __pa_symbol(&_sdata); ··· 184 181 #endif 185 182 memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start); 186 183 187 - dram_end = memblock_end_of_DRAM(); 188 184 185 + phys_ram_end = memblock_end_of_DRAM(); 189 186 #ifndef CONFIG_64BIT 187 #ifndef CONFIG_XIP_KERNEL 188 + phys_ram_base = memblock_start_of_DRAM(); 189 + #endif 190 190 /* 191 191 * memblock allocator is not aware of the fact that last 4K bytes of 192 192 * the addressable memory can not be mapped because of IS_ERR_VALUE ··· 200 194 * be done in create_kernel_page_table. 201 195 */ 202 196 max_mapped_addr = __pa(~(ulong)0); 203 - if (max_mapped_addr == (dram_end - 1)) 197 + if (max_mapped_addr == (phys_ram_end - 1)) 204 198 memblock_set_current_limit(max_mapped_addr - 4096); 205 199 #endif 206 200 207 - min_low_pfn = PFN_UP(memblock_start_of_DRAM()); 208 - max_low_pfn = max_pfn = PFN_DOWN(dram_end); 201 + min_low_pfn = PFN_UP(phys_ram_base); 202 + max_low_pfn = max_pfn = PFN_DOWN(phys_ram_end); 209 203 210 204 dma32_phys_limit = min(4UL * SZ_1G, (unsigned long)PFN_PHYS(max_low_pfn)); 211 205 set_max_mapnr(max_low_pfn - ARCH_PFN_OFFSET); ··· 564 558 kernel_map.xiprom = (uintptr_t)CONFIG_XIP_PHYS_ADDR; 565 559 kernel_map.xiprom_sz = (uintptr_t)(&_exiprom) - (uintptr_t)(&_xiprom); 566 560 561 + phys_ram_base = CONFIG_PHYS_RAM_BASE; 567 562 kernel_map.phys_addr = (uintptr_t)CONFIG_PHYS_RAM_BASE; 568 563 kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_sdata); 569 564
+7 -5
arch/x86/events/core.c
··· 2489 2489 return; 2490 2490 2491 2491 for_each_set_bit(i, cpuc->dirty, X86_PMC_IDX_MAX) { 2492 - /* Metrics and fake events don't have corresponding HW counters. */ 2493 - if (is_metric_idx(i) || (i == INTEL_PMC_IDX_FIXED_VLBR)) 2494 - continue; 2495 - else if (i >= INTEL_PMC_IDX_FIXED) 2492 + if (i >= INTEL_PMC_IDX_FIXED) { 2493 + /* Metrics and fake events don't have corresponding HW counters. */ 2494 + if ((i - INTEL_PMC_IDX_FIXED) >= hybrid(cpuc->pmu, num_counters_fixed)) 2495 + continue; 2496 + 2496 2497 wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + (i - INTEL_PMC_IDX_FIXED), 0); 2497 - else 2498 + } else { 2498 2499 wrmsrl(x86_pmu_event_addr(i), 0); 2500 + } 2499 2501 } 2500 2502 2501 2503 bitmap_zero(cpuc->dirty, X86_PMC_IDX_MAX);
+15 -8
arch/x86/events/intel/core.c
··· 2904 2904 */ 2905 2905 static int intel_pmu_handle_irq(struct pt_regs *regs) 2906 2906 { 2907 - struct cpu_hw_events *cpuc; 2907 + struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events); 2908 + bool late_ack = hybrid_bit(cpuc->pmu, late_ack); 2909 + bool mid_ack = hybrid_bit(cpuc->pmu, mid_ack); 2908 2910 int loops; 2909 2911 u64 status; 2910 2912 int handled; 2911 2913 int pmu_enabled; 2912 - 2913 - cpuc = this_cpu_ptr(&cpu_hw_events); 2914 2914 2915 2915 /* 2916 2916 * Save the PMU state. ··· 2918 2918 */ 2919 2919 pmu_enabled = cpuc->enabled; 2920 2920 /* 2921 - * No known reason to not always do late ACK, 2922 - * but just in case do it opt-in. 2921 + * In general, the early ACK is only applied for old platforms. 2922 + * For the big core starts from Haswell, the late ACK should be 2923 + * applied. 2924 + * For the small core after Tremont, we have to do the ACK right 2925 + * before re-enabling counters, which is in the middle of the 2926 + * NMI handler. 2923 2927 */ 2924 - if (!x86_pmu.late_ack) 2928 + if (!late_ack && !mid_ack) 2925 2929 apic_write(APIC_LVTPC, APIC_DM_NMI); 2926 2930 intel_bts_disable_local(); 2927 2931 cpuc->enabled = 0; ··· 2962 2958 goto again; 2963 2959 2964 2960 done: 2961 + if (mid_ack) 2962 + apic_write(APIC_LVTPC, APIC_DM_NMI); 2965 2963 /* Only restore PMU state when it's active. See x86_pmu_disable(). */ 2966 2964 cpuc->enabled = pmu_enabled; 2967 2965 if (pmu_enabled) ··· 2975 2969 * have been reset. This avoids spurious NMIs on 2976 2970 * Haswell CPUs. 2977 2971 */ 2978 - if (x86_pmu.late_ack) 2972 + if (late_ack) 2979 2973 apic_write(APIC_LVTPC, APIC_DM_NMI); 2980 2974 return handled; 2981 2975 } ··· 6135 6129 static_branch_enable(&perf_is_hybrid); 6136 6130 x86_pmu.num_hybrid_pmus = X86_HYBRID_NUM_PMUS; 6137 6131 6138 - x86_pmu.late_ack = true; 6139 6132 x86_pmu.pebs_aliases = NULL; 6140 6133 x86_pmu.pebs_prec_dist = true; 6141 6134 x86_pmu.pebs_block = true; ··· 6172 6167 pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX]; 6173 6168 pmu->name = "cpu_core"; 6174 6169 pmu->cpu_type = hybrid_big; 6170 + pmu->late_ack = true; 6175 6171 if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) { 6176 6172 pmu->num_counters = x86_pmu.num_counters + 2; 6177 6173 pmu->num_counters_fixed = x86_pmu.num_counters_fixed + 1; ··· 6198 6192 pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_ATOM_IDX]; 6199 6193 pmu->name = "cpu_atom"; 6200 6194 pmu->cpu_type = hybrid_small; 6195 + pmu->mid_ack = true; 6201 6196 pmu->num_counters = x86_pmu.num_counters; 6202 6197 pmu->num_counters_fixed = x86_pmu.num_counters_fixed; 6203 6198 pmu->max_pebs_events = x86_pmu.max_pebs_events;
+17 -1
arch/x86/events/perf_event.h
··· 656 656 struct event_constraint *event_constraints; 657 657 struct event_constraint *pebs_constraints; 658 658 struct extra_reg *extra_regs; 659 + 660 + unsigned int late_ack :1, 661 + mid_ack :1, 662 + enabled_ack :1; 659 663 }; 660 664 661 665 static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu) ··· 689 685 \ 690 686 __Fp; \ 691 687 })) 688 + 689 + #define hybrid_bit(_pmu, _field) \ 690 + ({ \ 691 + bool __Fp = x86_pmu._field; \ 692 + \ 693 + if (is_hybrid() && (_pmu)) \ 694 + __Fp = hybrid_pmu(_pmu)->_field; \ 695 + \ 696 + __Fp; \ 697 + }) 692 698 693 699 enum hybrid_pmu_type { 694 700 hybrid_big = 0x40, ··· 769 755 770 756 /* PMI handler bits */ 771 757 unsigned int late_ack :1, 758 + mid_ack :1, 772 759 enabled_ack :1; 773 760 /* 774 761 * sysfs attrs ··· 1130 1115 1131 1116 static inline void x86_pmu_disable_event(struct perf_event *event) 1132 1117 { 1118 + u64 disable_mask = __this_cpu_read(cpu_hw_events.perf_ctr_virt_mask); 1133 1119 struct hw_perf_event *hwc = &event->hw; 1134 1120 1135 - wrmsrl(hwc->config_base, hwc->config); 1121 + wrmsrl(hwc->config_base, hwc->config & ~disable_mask); 1136 1122 1137 1123 if (is_counter_pair(hwc)) 1138 1124 wrmsrl(x86_pmu_config_addr(hwc->idx + 1), 0);
+4 -4
arch/x86/tools/relocs.c
··· 57 57 [S_REL] = 58 58 "^(__init_(begin|end)|" 59 59 "__x86_cpu_dev_(start|end)|" 60 - "(__parainstructions|__alt_instructions)(|_end)|" 61 - "(__iommu_table|__apicdrivers|__smp_locks)(|_end)|" 60 + "(__parainstructions|__alt_instructions)(_end)?|" 61 + "(__iommu_table|__apicdrivers|__smp_locks)(_end)?|" 62 62 "__(start|end)_pci_.*|" 63 63 "__(start|end)_builtin_fw|" 64 - "__(start|stop)___ksymtab(|_gpl)|" 65 - "__(start|stop)___kcrctab(|_gpl)|" 64 + "__(start|stop)___ksymtab(_gpl)?|" 65 + "__(start|stop)___kcrctab(_gpl)?|" 66 66 "__(start|stop)___param|" 67 67 "__(start|stop)___modver|" 68 68 "__(start|stop)___bug_table|"
+8 -6
block/blk-cgroup.c
··· 790 790 struct blkcg_gq *parent = blkg->parent; 791 791 struct blkg_iostat_set *bisc = per_cpu_ptr(blkg->iostat_cpu, cpu); 792 792 struct blkg_iostat cur, delta; 793 + unsigned long flags; 793 794 unsigned int seq; 794 795 795 796 /* fetch the current per-cpu values */ ··· 800 799 } while (u64_stats_fetch_retry(&bisc->sync, seq)); 801 800 802 801 /* propagate percpu delta to global */ 803 - u64_stats_update_begin(&blkg->iostat.sync); 802 + flags = u64_stats_update_begin_irqsave(&blkg->iostat.sync); 804 803 blkg_iostat_set(&delta, &cur); 805 804 blkg_iostat_sub(&delta, &bisc->last); 806 805 blkg_iostat_add(&blkg->iostat.cur, &delta); 807 806 blkg_iostat_add(&bisc->last, &delta); 808 - u64_stats_update_end(&blkg->iostat.sync); 807 + u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags); 809 808 810 809 /* propagate global delta to parent (unless that's root) */ 811 810 if (parent && parent->parent) { 812 - u64_stats_update_begin(&parent->iostat.sync); 811 + flags = u64_stats_update_begin_irqsave(&parent->iostat.sync); 813 812 blkg_iostat_set(&delta, &blkg->iostat.cur); 814 813 blkg_iostat_sub(&delta, &blkg->iostat.last); 815 814 blkg_iostat_add(&parent->iostat.cur, &delta); 816 815 blkg_iostat_add(&blkg->iostat.last, &delta); 817 816 u64_stats_update_end_irqrestore(&parent->iostat.sync, flags); 818 817 } 819 818 } 820 819 ··· 849 848 memset(&tmp, 0, sizeof(tmp)); 850 849 for_each_possible_cpu(cpu) { 851 850 struct disk_stats *cpu_dkstats; 851 + unsigned long flags; 852 852 853 853 cpu_dkstats = per_cpu_ptr(bdev->bd_stats, cpu); 854 854 tmp.ios[BLKG_IOSTAT_READ] += ··· 866 864 tmp.bytes[BLKG_IOSTAT_DISCARD] += 867 865 cpu_dkstats->sectors[STAT_DISCARD] << 9; 868 866 869 867 flags = u64_stats_update_begin_irqsave(&blkg->iostat.sync); 870 868 blkg_iostat_set(&blkg->iostat.cur, &tmp); 871 - u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags); 872 870 } 873 871 } 874 872 }
+5 -1
block/blk-iolatency.c
··· 833 833 834 834 enable = iolatency_set_min_lat_nsec(blkg, lat_val); 835 835 if (enable) { 836 - WARN_ON_ONCE(!blk_get_queue(blkg->q)); 836 + if (!blk_get_queue(blkg->q)) { 837 + ret = -ENODEV; 838 + goto out; 839 + } 840 + 837 841 blkg_get(blkg); 838 842 } 839 843
+1 -1
block/kyber-iosched.c
··· 596 596 struct list_head *head = &kcq->rq_list[sched_domain]; 597 597 598 598 spin_lock(&kcq->lock); 599 + trace_block_rq_insert(rq); 599 600 if (at_head) 600 601 list_move(&rq->queuelist, head); 601 602 else 602 603 list_move_tail(&rq->queuelist, head); 603 604 sbitmap_set_bit(&khd->kcq_map[sched_domain], 604 605 rq->mq_ctx->index_hw[hctx->type]); 605 - trace_block_rq_insert(rq); 606 606 spin_unlock(&kcq->lock); 607 607 } 608 608 }
+1 -1
block/partitions/ldm.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 - /** 2 + /* 3 3 * ldm - Support for Windows Logical Disk Manager (Dynamic Disks) 4 4 * 5 5 * Copyright (C) 2001,2002 Richard Russon <ldm@flatcap.org>
-7
drivers/acpi/acpica/nsrepair2.c
··· 379 379 380 380 (*element_ptr)->common.reference_count = 381 381 original_ref_count; 382 - 383 - /* 384 - * The original_element holds a reference from the package object 385 - * that represents _HID. Since a new element was created by _HID, 386 - * remove the reference from the _CID package. 387 - */ 388 - acpi_ut_remove_reference(original_element); 389 382 } 390 383 391 384 element_ptr++;
+2 -2
drivers/base/dd.c
··· 653 653 else if (drv->remove) 654 654 drv->remove(dev); 655 655 probe_failed: 656 - kfree(dev->dma_range_map); 657 - dev->dma_range_map = NULL; 658 656 if (dev->bus) 659 657 blocking_notifier_call_chain(&dev->bus->p->bus_notifier, 660 658 BUS_NOTIFY_DRIVER_NOT_BOUND, dev); ··· 660 662 device_links_no_driver(dev); 661 663 devres_release_all(dev); 662 664 arch_teardown_dma_ops(dev); 665 + kfree(dev->dma_range_map); 666 + dev->dma_range_map = NULL; 663 667 driver_sysfs_remove(dev); 664 668 dev->driver = NULL; 665 669 dev_set_drvdata(dev, NULL);
+8 -6
drivers/base/firmware_loader/fallback.c
··· 89 89 { 90 90 /* 91 91 * There is a small window in which user can write to 'loading' 92 - * between loading done and disappearance of 'loading' 92 + * between loading done/aborted and disappearance of 'loading' 93 93 */ 94 - if (fw_sysfs_done(fw_priv)) 94 + if (fw_state_is_aborted(fw_priv) || fw_sysfs_done(fw_priv)) 95 95 return; 96 96 97 - list_del_init(&fw_priv->pending_list); 98 97 fw_state_aborted(fw_priv); 99 98 } 100 99 ··· 279 280 * Same logic as fw_load_abort, only the DONE bit 280 281 * is ignored and we set ABORT only on failure. 281 282 */ 282 - list_del_init(&fw_priv->pending_list); 283 283 if (rc) { 284 284 fw_state_aborted(fw_priv); 285 285 written = rc; ··· 511 513 } 512 514 513 515 mutex_lock(&fw_lock); 516 + if (fw_state_is_aborted(fw_priv)) { 517 + mutex_unlock(&fw_lock); 518 + retval = -EINTR; 519 + goto out; 520 + } 514 521 list_add(&fw_priv->pending_list, &pending_fw_head); 515 522 mutex_unlock(&fw_lock); 516 523 ··· 538 535 if (fw_state_is_aborted(fw_priv)) { 539 536 if (retval == -ERESTARTSYS) 540 537 retval = -EINTR; 541 - else 542 - retval = -EAGAIN; 543 538 } else if (fw_priv->is_paged_buf && !fw_priv->data) 544 539 retval = -ENOMEM; 545 540 541 + out: 546 542 device_del(f_dev); 547 543 err_put_dev: 548 544 put_device(f_dev);
+9 -1
drivers/base/firmware_loader/firmware.h
··· 117 117 118 118 WRITE_ONCE(fw_st->status, status); 119 119 120 - if (status == FW_STATUS_DONE || status == FW_STATUS_ABORTED) 120 + if (status == FW_STATUS_DONE || status == FW_STATUS_ABORTED) { 121 + #ifdef CONFIG_FW_LOADER_USER_HELPER 122 + /* 123 + * Doing this here ensures that the fw_priv is deleted from 124 + * the pending list in all abort/done paths. 125 + */ 126 + list_del_init(&fw_priv->pending_list); 127 + #endif 121 128 complete_all(&fw_st->completion); 129 + } 122 130 } 123 131 124 132 static inline void fw_state_aborted(struct fw_priv *fw_priv)
+2
drivers/base/firmware_loader/main.c
··· 783 783 return; 784 784 785 785 fw_priv = fw->priv; 786 + mutex_lock(&fw_lock); 786 787 if (!fw_state_is_aborted(fw_priv)) 787 788 fw_state_aborted(fw_priv); 789 + mutex_unlock(&fw_lock); 788 790 } 789 791 790 792 /* called from request_firmware() and request_firmware_work_func() */
+1 -1
drivers/block/n64cart.c
··· 74 74 75 75 n64cart_wait_dma(); 76 76 77 - n64cart_write_reg(PI_DRAM_REG, dma_addr + bv->bv_offset); 77 + n64cart_write_reg(PI_DRAM_REG, dma_addr); 78 78 n64cart_write_reg(PI_CART_REG, (bstart | CART_DOMAIN) & CART_MAX); 79 79 n64cart_write_reg(PI_WRITE_REG, bv->bv_len - 1); 80 80
+15 -7
drivers/bus/ti-sysc.c
··· 100 100 * @cookie: data used by legacy platform callbacks 101 101 * @name: name if available 102 102 * @revision: interconnect target module revision 103 + * @reserved: target module is reserved and already in use 103 104 * @enabled: sysc runtime enabled status 104 105 * @needs_resume: runtime resume needed on resume from suspend 105 106 * @child_needs_resume: runtime resume needed for child on resume from suspend ··· 131 130 struct ti_sysc_cookie cookie; 132 131 const char *name; 133 132 u32 revision; 133 + unsigned int reserved:1; 134 134 unsigned int enabled:1; 135 135 unsigned int needs_resume:1; 136 136 unsigned int child_needs_resume:1; ··· 2953 2951 case SOC_3430 ... SOC_3630: 2954 2952 sysc_add_disabled(0x48304000); /* timer12 */ 2955 2953 break; 2954 + case SOC_AM3: 2955 + sysc_add_disabled(0x48310000); /* rng */ 2956 2956 default: 2957 2957 break; 2958 2958 } ··· 3097 3093 return error; 3098 3094 3099 3095 error = sysc_check_active_timer(ddata); 3100 - if (error) 3101 - return error; 3096 + if (error == -EBUSY) 3097 + ddata->reserved = true; 3102 3098 3103 3099 error = sysc_get_clocks(ddata); 3104 3100 if (error) ··· 3134 3130 sysc_show_registers(ddata); 3135 3131 3136 3132 ddata->dev->type = &sysc_device_type; 3137 - error = of_platform_populate(ddata->dev->of_node, sysc_match_table, 3138 - pdata ? pdata->auxdata : NULL, 3139 - ddata->dev); 3140 - if (error) 3141 - goto err; 3133 + 3134 + if (!ddata->reserved) { 3135 + error = of_platform_populate(ddata->dev->of_node, 3136 + sysc_match_table, 3137 + pdata ? pdata->auxdata : NULL, 3138 + ddata->dev); 3139 + if (error) 3140 + goto err; 3141 + } 3142 3142 3143 3143 INIT_DELAYED_WORK(&ddata->idle_work, ti_sysc_idle); 3144 3144
+4 -4
drivers/char/tpm/tpm_ftpm_tee.c
··· 254 254 pvt_data->session = sess_arg.session; 255 255 256 256 /* Allocate dynamic shared memory with fTPM TA */ 257 - pvt_data->shm = tee_shm_alloc(pvt_data->ctx, 258 - MAX_COMMAND_SIZE + MAX_RESPONSE_SIZE, 259 - TEE_SHM_MAPPED | TEE_SHM_DMA_BUF); 257 + pvt_data->shm = tee_shm_alloc_kernel_buf(pvt_data->ctx, 258 + MAX_COMMAND_SIZE + 259 + MAX_RESPONSE_SIZE); 260 260 if (IS_ERR(pvt_data->shm)) { 261 - dev_err(dev, "%s: tee_shm_alloc failed\n", __func__); 261 + dev_err(dev, "%s: tee_shm_alloc_kernel_buf failed\n", __func__); 262 262 rc = -ENOMEM; 263 263 goto out_shm_alloc; 264 264 }
+33 -19
drivers/cpuidle/governors/teo.c
··· 382 382 alt_intercepts = 2 * idx_intercept_sum > cpu_data->total - idx_hit_sum; 383 383 alt_recent = idx_recent_sum > NR_RECENT / 2; 384 384 if (alt_recent || alt_intercepts) { 385 - s64 last_enabled_span_ns = duration_ns; 386 - int last_enabled_idx = idx; 385 + s64 first_suitable_span_ns = duration_ns; 386 + int first_suitable_idx = idx; 387 387 388 388 /* 389 389 * Look for the deepest idle state whose target residency had ··· 397 397 intercept_sum = 0; 398 398 recent_sum = 0; 399 399 400 - for (i = idx - 1; i >= idx0; i--) { 400 + for (i = idx - 1; i >= 0; i--) { 401 401 struct teo_bin *bin = &cpu_data->state_bins[i]; 402 402 s64 span_ns; 403 403 404 404 intercept_sum += bin->intercepts; 405 405 recent_sum += bin->recent; 406 406 407 - if (dev->states_usage[i].disable) 408 - continue; 409 - 410 407 span_ns = teo_middle_of_bin(i, drv); 411 - if (!teo_time_ok(span_ns)) { 412 - /* 413 - * The current state is too shallow, so select 414 - * the first enabled deeper state. 415 - */ 416 - duration_ns = last_enabled_span_ns; 417 - idx = last_enabled_idx; 418 - break; 419 - } 420 408 421 409 if ((!alt_recent || 2 * recent_sum > idx_recent_sum) && 422 410 (!alt_intercepts || 423 411 2 * intercept_sum > idx_intercept_sum)) { 424 412 if (teo_time_ok(span_ns) && 413 + !dev->states_usage[i].disable) { 414 + idx = i; 415 + duration_ns = span_ns; 416 + } else { 417 + /* 418 + * The current state is too shallow or 419 + * disabled, so take the first enabled 420 + * deeper state with suitable time span. 421 + */ 422 + idx = first_suitable_idx; 423 + duration_ns = first_suitable_span_ns; 424 + } 426 425 break; 427 426 } 428 427 429 - last_enabled_span_ns = span_ns; 430 - last_enabled_idx = i; 428 + if (dev->states_usage[i].disable) 429 + continue; 430 + 431 + if (!teo_time_ok(span_ns)) { 432 + /* 433 + * The current state is too shallow, but if an 434 + * alternative candidate state has been found, 435 + * it may still turn out to be a better choice. 436 + */ 437 + if (first_suitable_idx != idx) 438 + continue; 439 + 440 + break; 441 + } 442 + 443 + first_suitable_span_ns = span_ns; 444 + first_suitable_idx = i; 431 445 } 432 446 } 433 447
+14
drivers/dma/idxd/idxd.h
··· 294 294 struct idxd_wq *wq; 295 295 }; 296 296 297 + /* 298 + * This is software defined error for the completion status. We overload the error code 299 + * that will never appear in completion status and only SWERR register. 300 + */ 301 + enum idxd_completion_status { 302 + IDXD_COMP_DESC_ABORT = 0xff, 303 + }; 304 + 297 305 #define confdev_to_idxd(dev) container_of(dev, struct idxd_device, conf_dev) 298 306 #define confdev_to_wq(dev) container_of(dev, struct idxd_wq, conf_dev) 299 307 ··· 489 481 static inline void perfmon_init(void) {} 490 482 static inline void perfmon_exit(void) {} 491 483 #endif 484 + 485 + static inline void complete_desc(struct idxd_desc *desc, enum idxd_complete_type reason) 486 + { 487 + idxd_dma_complete_txd(desc, reason); 488 + idxd_free_desc(desc->wq, desc); 489 + } 492 490 493 491 #endif
+20 -10
drivers/dma/idxd/init.c
··· 102 102 spin_lock_init(&idxd->irq_entries[i].list_lock); 103 103 } 104 104 105 + idxd_msix_perm_setup(idxd); 106 + 105 107 irq_entry = &idxd->irq_entries[0]; 106 108 rc = request_threaded_irq(irq_entry->vector, NULL, idxd_misc_thread, 107 109 0, "idxd-misc", irq_entry); ··· 150 148 } 151 149 152 150 idxd_unmask_error_interrupts(idxd); 153 - idxd_msix_perm_setup(idxd); 154 151 return 0; 155 152 156 153 err_wq_irqs: ··· 163 162 err_misc_irq: 164 163 /* Disable error interrupt generation */ 165 164 idxd_mask_error_interrupts(idxd); 165 + idxd_msix_perm_clear(idxd); 166 166 err_irq_entries: 167 167 pci_free_irq_vectors(pdev); 168 168 dev_err(dev, "No usable interrupts\n"); ··· 760 758 for (i = 0; i < msixcnt; i++) { 761 759 irq_entry = &idxd->irq_entries[i]; 762 760 synchronize_irq(irq_entry->vector); 763 - free_irq(irq_entry->vector, irq_entry); 764 761 if (i == 0) 765 762 continue; 766 763 idxd_flush_pending_llist(irq_entry); 767 764 idxd_flush_work_list(irq_entry); 768 765 } 769 - 770 - idxd_msix_perm_clear(idxd); 771 - idxd_release_int_handles(idxd); 772 - pci_free_irq_vectors(pdev); 773 - pci_iounmap(pdev, idxd->reg_base); 774 - pci_disable_device(pdev); 775 - destroy_workqueue(idxd->wq); 766 + flush_workqueue(idxd->wq); 776 767 } 777 768 778 769 static void idxd_remove(struct pci_dev *pdev) 779 770 { 780 771 struct idxd_device *idxd = pci_get_drvdata(pdev); 772 + struct idxd_irq_entry *irq_entry; 773 + int msixcnt = pci_msix_vec_count(pdev); 774 + int i; 781 775 782 776 dev_dbg(&pdev->dev, "%s called\n", __func__); 783 777 idxd_shutdown(pdev); 784 778 if (device_pasid_enabled(idxd)) 785 779 idxd_disable_system_pasid(idxd); 786 780 idxd_unregister_devices(idxd); 787 - perfmon_pmu_remove(idxd); 781 + 782 + for (i = 0; i < msixcnt; i++) { 783 + irq_entry = &idxd->irq_entries[i]; 784 + free_irq(irq_entry->vector, irq_entry); 785 + } 786 + idxd_msix_perm_clear(idxd); 787 + idxd_release_int_handles(idxd); 788 + pci_free_irq_vectors(pdev); 789 + pci_iounmap(pdev, idxd->reg_base); 788 790 iommu_dev_disable_feature(&pdev->dev, IOMMU_DEV_FEAT_SVA); 791 + pci_disable_device(pdev); 792 + destroy_workqueue(idxd->wq); 793 + perfmon_pmu_remove(idxd); 794 + device_unregister(&idxd->conf_dev); 789 795 } 790 796 791 797 static struct pci_driver idxd_pci_driver = {
+18 -9
drivers/dma/idxd/irq.c
··· 245 245 return false; 246 246 } 247 247 248 - static inline void complete_desc(struct idxd_desc *desc, enum idxd_complete_type reason) 249 - { 250 - idxd_dma_complete_txd(desc, reason); 251 - idxd_free_desc(desc->wq, desc); 252 - } 253 - 254 248 static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry, 255 249 enum irq_work_type wtype, 256 250 int *processed, u64 data) ··· 266 272 reason = IDXD_COMPLETE_DEV_FAIL; 267 273 268 274 llist_for_each_entry_safe(desc, t, head, llnode) { 269 - if (desc->completion->status) { 270 - if ((desc->completion->status & DSA_COMP_STATUS_MASK) != DSA_COMP_SUCCESS) 275 + u8 status = desc->completion->status & DSA_COMP_STATUS_MASK; 276 + 277 + if (status) { 278 + if (unlikely(status == IDXD_COMP_DESC_ABORT)) { 279 + complete_desc(desc, IDXD_COMPLETE_ABORT); 280 + (*processed)++; 281 + continue; 282 + } 283 + 284 + if (unlikely(status != DSA_COMP_SUCCESS)) 271 285 match_fault(desc, data); 272 286 complete_desc(desc, reason); 273 287 (*processed)++; ··· 331 329 spin_unlock_irqrestore(&irq_entry->list_lock, flags); 332 330 333 331 list_for_each_entry(desc, &flist, list) { 334 - if ((desc->completion->status & DSA_COMP_STATUS_MASK) != DSA_COMP_SUCCESS) 332 + u8 status = desc->completion->status & DSA_COMP_STATUS_MASK; 333 + 334 + if (unlikely(status == IDXD_COMP_DESC_ABORT)) { 335 + complete_desc(desc, IDXD_COMPLETE_ABORT); 336 + continue; 337 + } 338 + 339 + if (unlikely(status != DSA_COMP_SUCCESS)) 335 340 match_fault(desc, data); 336 341 complete_desc(desc, reason); 337 342 }
+70 -22
drivers/dma/idxd/submit.c
··· 25 25 * Descriptor completion vectors are 1...N for MSIX. We will round 26 26 * robin through the N vectors. 27 27 */ 28 - wq->vec_ptr = (wq->vec_ptr % idxd->num_wq_irqs) + 1; 28 + wq->vec_ptr = desc->vector = (wq->vec_ptr % idxd->num_wq_irqs) + 1; 29 29 if (!idxd->int_handles) { 30 30 desc->hw->int_handle = wq->vec_ptr; 31 31 } else { 32 - desc->vector = wq->vec_ptr; 33 32 /* 34 33 * int_handles are only for descriptor completion. However for device 35 34 * MSIX enumeration, vec 0 is used for misc interrupts. Therefore even ··· 87 88 sbitmap_queue_clear(&wq->sbq, desc->id, cpu); 88 89 } 89 90 91 + static struct idxd_desc *list_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie, 92 + struct idxd_desc *desc) 93 + { 94 + struct idxd_desc *d, *n; 95 + 96 + lockdep_assert_held(&ie->list_lock); 97 + list_for_each_entry_safe(d, n, &ie->work_list, list) { 98 + if (d == desc) { 99 + list_del(&d->list); 100 + return d; 101 + } 102 + } 103 + 104 + /* 105 + * At this point, the desc needs to be aborted is held by the completion 106 + * handler where it has taken it off the pending list but has not added to the 107 + * work list. It will be cleaned up by the interrupt handler when it sees the 108 + * IDXD_COMP_DESC_ABORT for completion status. 109 + */ 110 + return NULL; 111 + } 112 + 113 + static void llist_abort_desc(struct idxd_wq *wq, struct idxd_irq_entry *ie, 114 + struct idxd_desc *desc) 115 + { 116 + struct idxd_desc *d, *t, *found = NULL; 117 + struct llist_node *head; 118 + unsigned long flags; 119 + 120 + desc->completion->status = IDXD_COMP_DESC_ABORT; 121 + /* 122 + * Grab the list lock so it will block the irq thread handler. This allows the 123 + * abort code to locate the descriptor need to be aborted. 124 + */ 125 + spin_lock_irqsave(&ie->list_lock, flags); 126 + head = llist_del_all(&ie->pending_llist); 127 + if (head) { 128 + llist_for_each_entry_safe(d, t, head, llnode) { 129 + if (d == desc) { 130 + found = desc; 131 + continue; 132 + } 133 + list_add_tail(&desc->list, &ie->work_list); 134 + } 135 + } 136 + 137 + if (!found) 138 + found = list_abort_desc(wq, ie, desc); 139 + spin_unlock_irqrestore(&ie->list_lock, flags); 140 + 141 + if (found) 142 + complete_desc(found, IDXD_COMPLETE_ABORT); 143 + } 144 + 90 145 int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc) 91 146 { 92 147 struct idxd_device *idxd = wq->idxd; 148 + struct idxd_irq_entry *ie = NULL; 93 149 void __iomem *portal; 94 150 int rc; 95 151 ··· 162 108 * even on UP because the recipient is a device. 163 109 */ 164 110 wmb(); 111 + 112 + /* 113 + * Pending the descriptor to the lockless list for the irq_entry 114 + * that we designated the descriptor to. 115 + */ 116 + if (desc->hw->flags & IDXD_OP_FLAG_RCI) { 117 + ie = &idxd->irq_entries[desc->vector]; 118 + llist_add(&desc->llnode, &ie->pending_llist); 119 + } 120 + 165 121 if (wq_dedicated(wq)) { 166 122 iosubmit_cmds512(portal, desc->hw, 1); 167 123 } else { ··· 182 118 * device is not accepting descriptor at all. 183 119 */ 184 120 rc = enqcmds(portal, desc->hw); 185 - if (rc < 0) 121 + if (rc < 0) { 122 + if (ie) 123 + llist_abort_desc(wq, ie, desc); 186 124 return rc; 125 + } 187 126 188 127 percpu_ref_put(&wq->wq_active); 190 - 191 - /* 192 - * Pending the descriptor to the lockless list for the irq_entry 193 - * that we designated the descriptor to. 194 - */ 195 - if (desc->hw->flags & IDXD_OP_FLAG_RCI) { 196 - int vec; 197 - 198 - /* 199 - * If the driver is on host kernel, it would be the value 200 - * assigned to interrupt handle, which is index for MSIX 201 - * vector. If it's guest then can't use the int_handle since 202 - * that is the index to IMS for the entire device. The guest 203 - * * device local index will be used. 204 - */ 205 - vec = !idxd->int_handles ? desc->hw->int_handle : desc->vector; 206 - llist_add(&desc->llnode, &idxd->irq_entries[vec].pending_llist); 207 - } 208 - 209 129 return 0; 210 130 }
-2
drivers/dma/idxd/sysfs.c
··· 1744 1744 1745 1745 device_unregister(&group->conf_dev); 1746 1746 } 1747 - 1748 - device_unregister(&idxd->conf_dev); 1749 1747 } 1750 1748 1751 1749 int idxd_register_bus_type(void)
+2
drivers/dma/imx-dma.c
··· 812 812 dma_length += sg_dma_len(sg); 813 813 } 814 814 815 + imxdma_config_write(chan, &imxdmac->config, direction); 816 + 815 817 switch (imxdmac->word_size) { 816 818 case DMA_SLAVE_BUSWIDTH_4_BYTES: 817 819 if (sg_dma_len(sgl) & 3 || sgl->dma_address & 3)
+7 -2
drivers/dma/of-dma.c
··· 67 67 return NULL; 68 68 69 69 ofdma_target = of_dma_find_controller(&dma_spec_target); 70 - if (!ofdma_target) 71 - return NULL; 70 + if (!ofdma_target) { 71 + ofdma->dma_router->route_free(ofdma->dma_router->dev, 72 + route_data); 73 + chan = ERR_PTR(-EPROBE_DEFER); 74 + goto err; 75 + } 72 76 73 77 chan = ofdma_target->of_dma_xlate(&dma_spec_target, ofdma_target); 74 78 if (IS_ERR_OR_NULL(chan)) { ··· 93 89 } 94 90 } 95 91 92 + err: 96 93 /* 97 94 * Need to put the node back since the ofdma->of_dma_route_allocate 98 95 * has taken it for generating the new, translated dma_spec
+1 -1
drivers/dma/sh/usb-dmac.c
··· 855 855 856 856 error: 857 857 of_dma_controller_free(pdev->dev.of_node); 858 - pm_runtime_put(&pdev->dev); 859 858 error_pm: 859 + pm_runtime_put(&pdev->dev); 860 860 pm_runtime_disable(&pdev->dev); 861 861 return ret; 862 862 }
+2 -2
drivers/dma/stm32-dma.c
··· 1200 1200 1201 1201 chan->config_init = false; 1202 1202 1203 - ret = pm_runtime_get_sync(dmadev->ddev.dev); 1203 + ret = pm_runtime_resume_and_get(dmadev->ddev.dev); 1204 1204 if (ret < 0) 1205 1205 return ret; 1206 1206 ··· 1470 1470 struct stm32_dma_device *dmadev = dev_get_drvdata(dev); 1471 1471 int id, ret, scr; 1472 1472 1473 - ret = pm_runtime_get_sync(dev); 1473 + ret = pm_runtime_resume_and_get(dev); 1474 1474 if (ret < 0) 1475 1475 return ret; 1476 1476
+3 -3
drivers/dma/stm32-dmamux.c
··· 137 137 138 138 /* Set dma request */ 139 139 spin_lock_irqsave(&dmamux->lock, flags); 140 - ret = pm_runtime_get_sync(&pdev->dev); 140 + ret = pm_runtime_resume_and_get(&pdev->dev); 141 141 if (ret < 0) { 142 142 spin_unlock_irqrestore(&dmamux->lock, flags); 143 143 goto error; ··· 336 336 struct stm32_dmamux_data *stm32_dmamux = platform_get_drvdata(pdev); 337 337 int i, ret; 338 338 339 - ret = pm_runtime_get_sync(dev); 339 + ret = pm_runtime_resume_and_get(dev); 340 340 if (ret < 0) 341 341 return ret; 342 342 ··· 361 361 if (ret < 0) 362 362 return ret; 363 363 364 - ret = pm_runtime_get_sync(dev); 364 + ret = pm_runtime_resume_and_get(dev); 365 365 if (ret < 0) 366 366 return ret; 367 367
+2 -2
drivers/dma/uniphier-xdmac.c
··· 209 209 writel(0, xc->reg_ch_base + XDMAC_TSS); 210 210 211 211 /* wait until transfer is stopped */ 212 - return readl_poll_timeout(xc->reg_ch_base + XDMAC_STAT, val, 213 - !(val & XDMAC_STAT_TENF), 100, 1000); 212 + return readl_poll_timeout_atomic(xc->reg_ch_base + XDMAC_STAT, val, 213 + !(val & XDMAC_STAT_TENF), 100, 1000); 214 214 } 215 215 216 216 /* xc->vc.lock must be held by caller */
+12
drivers/dma/xilinx/xilinx_dma.c
··· 394 394 * @genlock: Support genlock mode 395 395 * @err: Channel has errors 396 396 * @idle: Check for channel idle 397 + * @terminating: Check for channel being synchronized by user 397 398 * @tasklet: Cleanup work after irq 398 399 * @config: Device configuration info 399 400 * @flush_on_fsync: Flush on Frame sync ··· 432 431 bool genlock; 433 432 bool err; 434 433 bool idle; 434 + bool terminating; 435 435 struct tasklet_struct tasklet; 436 436 struct xilinx_vdma_config config; 437 437 bool flush_on_fsync; ··· 1051 1049 /* Run any dependencies, then free the descriptor */ 1052 1050 dma_run_dependencies(&desc->async_tx); 1053 1051 xilinx_dma_free_tx_descriptor(chan, desc); 1052 + 1053 + /* 1054 + * While we ran a callback the user called a terminate function, 1055 + * which takes care of cleaning up any remaining descriptors 1056 + */ 1057 + if (chan->terminating) 1058 + break; 1054 1059 } 1055 1060 1056 1061 spin_unlock_irqrestore(&chan->lock, flags); ··· 1974 1965 if (desc->cyclic) 1975 1966 chan->cyclic = true; 1976 1967 1968 + chan->terminating = false; 1969 + 1977 1970 spin_unlock_irqrestore(&chan->lock, flags); 1978 1971 1979 1972 return cookie; ··· 2447 2436 2448 2437 xilinx_dma_chan_reset(chan); 2449 2438 /* Remove and free all of the descriptors in the lists */ 2439 + chan->terminating = true; 2450 2440 xilinx_dma_free_descriptors(chan); 2451 2441 chan->idle = true; 2452 2442
+11 -3
drivers/firmware/broadcom/tee_bnxt_fw.c
··· 212 212 213 213 pvt_data.dev = dev; 214 214 215 - fw_shm_pool = tee_shm_alloc(pvt_data.ctx, MAX_SHM_MEM_SZ, 216 - TEE_SHM_MAPPED | TEE_SHM_DMA_BUF); 215 + fw_shm_pool = tee_shm_alloc_kernel_buf(pvt_data.ctx, MAX_SHM_MEM_SZ); 217 216 if (IS_ERR(fw_shm_pool)) { 218 - dev_err(pvt_data.dev, "tee_shm_alloc failed\n"); 217 + dev_err(pvt_data.dev, "tee_shm_alloc_kernel_buf failed\n"); 219 218 err = PTR_ERR(fw_shm_pool); 220 219 goto out_sess; 221 220 } ··· 241 242 return 0; 242 243 } 243 244 245 + static void tee_bnxt_fw_shutdown(struct device *dev) 246 + { 247 + tee_shm_free(pvt_data.fw_shm_pool); 248 + tee_client_close_session(pvt_data.ctx, pvt_data.session_id); 249 + tee_client_close_context(pvt_data.ctx); 250 + pvt_data.ctx = NULL; 251 + } 252 + 244 253 static const struct tee_client_device_id tee_bnxt_fw_id_table[] = { 245 254 {UUID_INIT(0x6272636D, 0x2019, 0x0716, 246 255 0x42, 0x43, 0x4D, 0x5F, 0x53, 0x43, 0x48, 0x49)}, ··· 264 257 .bus = &tee_bus_type, 265 258 .probe = tee_bnxt_fw_probe, 266 259 .remove = tee_bnxt_fw_remove, 260 + .shutdown = tee_bnxt_fw_shutdown, 267 261 }, 268 262 }; 269 263
+2
drivers/fpga/dfl-fme-perf.c
··· 953 953 return 0; 954 954 955 955 priv->cpu = target; 956 + perf_pmu_migrate_context(&priv->pmu, cpu, target); 957 + 956 958 return 0; 957 959 } 958 960
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
··· 1040 1040 */ 1041 1041 bool amdgpu_acpi_is_s0ix_supported(struct amdgpu_device *adev) 1042 1042 { 1043 - #if defined(CONFIG_AMD_PMC) || defined(CONFIG_AMD_PMC_MODULE) 1043 + #if IS_ENABLED(CONFIG_AMD_PMC) && IS_ENABLED(CONFIG_PM_SLEEP) 1044 1044 if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0) { 1045 1045 if (adev->flags & AMD_IS_APU) 1046 1046 return pm_suspend_target_state == PM_SUSPEND_TO_IDLE;
+40
drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
··· 468 468 return (fw_cap & ATOM_FIRMWARE_CAP_DYNAMIC_BOOT_CFG_ENABLE) ? true : false; 469 469 } 470 470 471 + /* 472 + * Helper function to query RAS EEPROM address 473 + * 474 + * @adev: amdgpu_device pointer 475 + * 476 + * Return true if vbios supports ras rom address reporting 477 + */ 478 + bool amdgpu_atomfirmware_ras_rom_addr(struct amdgpu_device *adev, uint8_t* i2c_address) 479 + { 480 + struct amdgpu_mode_info *mode_info = &adev->mode_info; 481 + int index; 482 + u16 data_offset, size; 483 + union firmware_info *firmware_info; 484 + u8 frev, crev; 485 + 486 + if (i2c_address == NULL) 487 + return false; 488 + 489 + *i2c_address = 0; 490 + 491 + index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1, 492 + firmwareinfo); 493 + 494 + if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, 495 + index, &size, &frev, &crev, &data_offset)) { 496 + /* support firmware_info 3.4 + */ 497 + if ((frev == 3 && crev >=4) || (frev > 3)) { 498 + firmware_info = (union firmware_info *) 499 + (mode_info->atom_context->bios + data_offset); 500 + *i2c_address = firmware_info->v34.ras_rom_i2c_slave_addr; 501 + } 502 + } 503 + 504 + if (*i2c_address != 0) 505 + return true; 506 + 507 + return false; 508 + } 509 + 510 + 471 511 union smu_info { 472 512 struct atom_smu_info_v3_1 v31; 473 513 };
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h
··· 36 36 int amdgpu_atomfirmware_get_gfx_info(struct amdgpu_device *adev); 37 37 bool amdgpu_atomfirmware_mem_ecc_supported(struct amdgpu_device *adev); 38 38 bool amdgpu_atomfirmware_sram_ecc_supported(struct amdgpu_device *adev); 39 + bool amdgpu_atomfirmware_ras_rom_addr(struct amdgpu_device *adev, uint8_t* i2c_address); 39 40 bool amdgpu_atomfirmware_mem_training_supported(struct amdgpu_device *adev); 40 41 bool amdgpu_atomfirmware_dynamic_boot_config_supported(struct amdgpu_device *adev); 41 42 int amdgpu_atomfirmware_get_fw_reserved_fb_size(struct amdgpu_device *adev);
+9 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
··· 299 299 ip->major, ip->minor, 300 300 ip->revision); 301 301 302 + if (le16_to_cpu(ip->hw_id) == VCN_HWID) 303 + adev->vcn.num_vcn_inst++; 304 + 302 305 for (k = 0; k < num_base_address; k++) { 303 306 /* 304 307 * convert the endianness of base addresses in place, ··· 388 385 { 389 386 struct binary_header *bhdr; 390 387 struct harvest_table *harvest_info; 391 - int i; 388 + int i, vcn_harvest_count = 0; 392 389 393 390 bhdr = (struct binary_header *)adev->mman.discovery_bin; 394 391 harvest_info = (struct harvest_table *)(adev->mman.discovery_bin + ··· 400 397 401 398 switch (le32_to_cpu(harvest_info->list[i].hw_id)) { 402 399 case VCN_HWID: 403 - adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK; 404 - adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK; 400 + vcn_harvest_count++; 405 401 break; 406 402 case DMU_HWID: 407 403 adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK; ··· 408 406 default: 409 407 break; 410 408 } 409 + } 410 + if (vcn_harvest_count == adev->vcn.num_vcn_inst) { 411 + adev->harvest_ip_mask |= AMD_HARVEST_IP_VCN_MASK; 412 + adev->harvest_ip_mask |= AMD_HARVEST_IP_JPEG_MASK; 411 413 } 412 414 } 413 415
+9
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 1213 1213 {0x1002, 0x740F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT}, 1214 1214 {0x1002, 0x7410, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT}, 1215 1215 1216 + /* BEIGE_GOBY */ 1217 + {0x1002, 0x7420, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY}, 1218 + {0x1002, 0x7421, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY}, 1219 + {0x1002, 0x7422, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY}, 1220 + {0x1002, 0x7423, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY}, 1221 + {0x1002, 0x743F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BEIGE_GOBY}, 1222 + 1216 1223 {0, 0, 0} 1217 1224 }; 1218 1225 ··· 1571 1564 pci_ignore_hotplug(pdev); 1572 1565 pci_set_power_state(pdev, PCI_D3cold); 1573 1566 drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF; 1567 + } else if (amdgpu_device_supports_boco(drm_dev)) { 1568 + /* nothing to do */ 1574 1569 } else if (amdgpu_device_supports_baco(drm_dev)) { 1575 1570 amdgpu_device_baco_enter(drm_dev); 1576 1571 }
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
··· 26 26 #include "amdgpu_ras.h" 27 27 #include <linux/bits.h> 28 28 #include "atom.h" 29 + #include "amdgpu_atomfirmware.h" 29 30 30 31 #define EEPROM_I2C_TARGET_ADDR_VEGA20 0xA0 31 32 #define EEPROM_I2C_TARGET_ADDR_ARCTURUS 0xA8 ··· 96 95 { 97 96 if (!i2c_addr) 98 97 return false; 98 + 99 + if (amdgpu_atomfirmware_ras_rom_addr(adev, (uint8_t*)i2c_addr)) 100 + return true; 99 101 100 102 switch (adev->asic_type) { 101 103 case CHIP_VEGA20:
+2 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_res_cursor.h
··· 54 54 { 55 55 struct drm_mm_node *node; 56 56 57 - if (!res) { 57 + if (!res || res->mem_type == TTM_PL_SYSTEM) { 58 58 cur->start = start; 59 59 cur->size = size; 60 60 cur->remaining = size; 61 61 cur->node = NULL; 62 + WARN_ON(res && start + size > res->num_pages << PAGE_SHIFT); 62 63 return; 63 64 } 64 65
+20 -1
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 1295 1295 return false; 1296 1296 } 1297 1297 1298 + static bool check_if_enlarge_doorbell_range(struct amdgpu_device *adev) 1299 + { 1300 + if ((adev->asic_type == CHIP_RENOIR) && 1301 + (adev->gfx.me_fw_version >= 0x000000a5) && 1302 + (adev->gfx.me_feature_version >= 52)) 1303 + return true; 1304 + else 1305 + return false; 1306 + } 1307 + 1298 1308 static void gfx_v9_0_check_if_need_gfxoff(struct amdgpu_device *adev) 1299 1309 { 1300 1310 if (gfx_v9_0_should_disable_gfxoff(adev->pdev)) ··· 3685 3675 if (ring->use_doorbell) { 3686 3676 WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_LOWER, 3687 3677 (adev->doorbell_index.kiq * 2) << 2); 3688 - WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER, 3678 + /* If GC has entered CGPG, ringing doorbell > first page 3679 + * doesn't wakeup GC. Enlarge CP_MEC_DOORBELL_RANGE_UPPER to 3680 + * workaround this issue. And this change has to align with firmware 3681 + * update. 3682 + */ 3683 + if (check_if_enlarge_doorbell_range(adev)) 3684 + WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER, 3685 + (adev->doorbell.size - 4)); 3686 + else 3687 + WREG32_SOC15(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER, 3689 3688 (adev->doorbell_index.userqueue_end * 2) << 2); 3690 3689 } 3691 3690
+7 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 1548 1548 } 1549 1549 1550 1550 hdr = (const struct dmcub_firmware_header_v1_0 *)adev->dm.dmub_fw->data; 1551 + adev->dm.dmcub_fw_version = le32_to_cpu(hdr->header.ucode_version); 1551 1552 1552 1553 if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) { 1553 1554 adev->firmware.ucode[AMDGPU_UCODE_ID_DMCUB].ucode_id = ··· 1562 1561 adev->dm.dmcub_fw_version); 1563 1562 } 1564 1563 1565 - adev->dm.dmcub_fw_version = le32_to_cpu(hdr->header.ucode_version); 1566 1564 1567 1565 adev->dm.dmub_srv = kzalloc(sizeof(*adev->dm.dmub_srv), GFP_KERNEL); 1568 1566 dmub_srv = adev->dm.dmub_srv; ··· 9605 9605 } else if (amdgpu_freesync_vid_mode && aconnector && 9606 9606 is_freesync_video_mode(&new_crtc_state->mode, 9607 9607 aconnector)) { 9608 - set_freesync_fixed_config(dm_new_crtc_state); 9608 + struct drm_display_mode *high_mode; 9609 + 9610 + high_mode = get_highest_refresh_rate_mode(aconnector, false); 9611 + if (!drm_mode_equal(&new_crtc_state->mode, high_mode)) { 9612 + set_freesync_fixed_config(dm_new_crtc_state); 9613 + } 9609 9614 } 9610 9615 9611 9616 ret = dm_atomic_get_state(state, &dm_state);
+1 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
··· 584 584 handler_data = container_of(handler_list->next, struct amdgpu_dm_irq_handler_data, list); 585 585 586 586 /*allocate a new amdgpu_dm_irq_handler_data*/ 587 - handler_data_add = kzalloc(sizeof(*handler_data), GFP_KERNEL); 587 + handler_data_add = kzalloc(sizeof(*handler_data), GFP_ATOMIC); 588 588 if (!handler_data_add) { 589 589 DRM_ERROR("DM_IRQ: failed to allocate irq handler!\n"); 590 590 return;
+3 -1
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
··· 66 66 for (i = 0; i < context->stream_count; i++) { 67 67 const struct dc_stream_state *stream = context->streams[i]; 68 68 69 + /* Extend the WA to DP for Linux*/ 69 70 if (stream->signal == SIGNAL_TYPE_HDMI_TYPE_A || 70 71 stream->signal == SIGNAL_TYPE_DVI_SINGLE_LINK || 71 - stream->signal == SIGNAL_TYPE_DVI_DUAL_LINK) 72 + stream->signal == SIGNAL_TYPE_DVI_DUAL_LINK || 73 + stream->signal == SIGNAL_TYPE_DISPLAY_PORT) 72 74 tmds_present = true; 73 75 } 74 76
+2
drivers/gpu/drm/amd/display/dc/dc.h
··· 183 183 unsigned int cursor_cache_size; 184 184 struct dc_plane_cap planes[MAX_PLANES]; 185 185 struct dc_color_caps color; 186 + bool vbios_lttpr_aware; 187 + bool vbios_lttpr_enable; 186 188 }; 187 189 188 190 struct dc_bug_wa {
+1 -1
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_optc.c
··· 464 464 465 465 REG_UPDATE_2(OTG_GLOBAL_CONTROL1, 466 466 MASTER_UPDATE_LOCK_DB_X, 467 - h_blank_start - 200 - 1, 467 + (h_blank_start - 200 - 1) / optc1->opp_count, 468 468 MASTER_UPDATE_LOCK_DB_Y, 469 469 v_blank_start - 1); 470 470 }
+20 -1
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
··· 1788 1788 } 1789 1789 pri_pipe->next_odm_pipe = sec_pipe; 1790 1790 sec_pipe->prev_odm_pipe = pri_pipe; 1791 - ASSERT(sec_pipe->top_pipe == NULL); 1792 1791 1793 1792 if (!sec_pipe->top_pipe) 1794 1793 sec_pipe->stream_res.opp = pool->opps[pipe_idx]; ··· 2615 2616 dc->caps.color.mpc.ogam_rom_caps.pq = 0; 2616 2617 dc->caps.color.mpc.ogam_rom_caps.hlg = 0; 2617 2618 dc->caps.color.mpc.ocsc = 1; 2619 + 2620 + /* read VBIOS LTTPR caps */ 2621 + { 2622 + if (ctx->dc_bios->funcs->get_lttpr_caps) { 2623 + enum bp_result bp_query_result; 2624 + uint8_t is_vbios_lttpr_enable = 0; 2625 + 2626 + bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable); 2627 + dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable; 2628 + } 2629 + 2630 + if (ctx->dc_bios->funcs->get_lttpr_interop) { 2631 + enum bp_result bp_query_result; 2632 + uint8_t is_vbios_interop_enabled = 0; 2633 + 2634 + bp_query_result = ctx->dc_bios->funcs->get_lttpr_interop(ctx->dc_bios, 2635 + &is_vbios_interop_enabled); 2636 + dc->caps.vbios_lttpr_aware = (bp_query_result == BP_RESULT_OK) && !!is_vbios_interop_enabled; 2637 + } 2638 + } 2618 2639 2619 2640 if (dc->ctx->dce_environment == DCE_ENV_PRODUCTION_DRV) 2620 2641 dc->debug = debug_defaults_drv;
+2 -2
drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
··· 146 146 147 147 .min_dcfclk = 500.0, /* TODO: set this to actual min DCFCLK */ 148 148 .num_states = 1, 149 - .sr_exit_time_us = 26.5, 150 - .sr_enter_plus_exit_time_us = 31, 149 + .sr_exit_time_us = 35.5, 150 + .sr_enter_plus_exit_time_us = 40, 151 151 .urgent_latency_us = 4.0, 152 152 .urgent_latency_pixel_data_only_us = 4.0, 153 153 .urgent_latency_pixel_mixed_with_vm_data_us = 4.0,
+16
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
··· 1968 1968 dc->caps.color.mpc.ogam_rom_caps.hlg = 0; 1969 1969 dc->caps.color.mpc.ocsc = 1; 1970 1970 1971 + /* read VBIOS LTTPR caps */ 1972 + { 1973 + if (ctx->dc_bios->funcs->get_lttpr_caps) { 1974 + enum bp_result bp_query_result; 1975 + uint8_t is_vbios_lttpr_enable = 0; 1976 + 1977 + bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable); 1978 + dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable; 1979 + } 1980 + 1981 + /* interop bit is implicit */ 1982 + { 1983 + dc->caps.vbios_lttpr_aware = true; 1984 + } 1985 + } 1986 + 1971 1987 if (dc->ctx->dce_environment == DCE_ENV_PRODUCTION_DRV) 1972 1988 dc->debug = debug_defaults_drv; 1973 1989 else if (dc->ctx->dce_environment == DCE_ENV_FPGA_MAXIMUS) {
+5 -3
drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
··· 267 267 268 268 bool dmub_dcn31_is_hw_init(struct dmub_srv *dmub) 269 269 { 270 - uint32_t is_hw_init; 270 + union dmub_fw_boot_status status; 271 + uint32_t is_enable; 271 272 272 - REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_hw_init); 273 + status.all = REG_READ(DMCUB_SCRATCH0); 274 + REG_GET(DMCUB_CNTL, DMCUB_ENABLE, &is_enable); 273 275 274 - return is_hw_init != 0; 276 + return is_enable != 0 && status.bits.dal_fw; 275 277 } 276 278 277 279 bool dmub_dcn31_is_supported(struct dmub_srv *dmub)
+1 -1
drivers/gpu/drm/amd/include/atomfirmware.h
··· 590 590 uint8_t board_i2c_feature_id; // enum of atom_board_i2c_feature_id_def 591 591 uint8_t board_i2c_feature_gpio_id; // i2c id find in gpio_lut data table gpio_id 592 592 uint8_t board_i2c_feature_slave_addr; 593 - uint8_t reserved3; 593 + uint8_t ras_rom_i2c_slave_addr; 594 594 uint16_t bootup_mvddq_mv; 595 595 uint16_t bootup_mvpp_mv; 596 596 uint32_t zfbstartaddrin16mb;
+1 -1
drivers/gpu/drm/amd/pm/inc/smu_v13_0.h
··· 26 26 #include "amdgpu_smu.h" 27 27 28 28 #define SMU13_DRIVER_IF_VERSION_INV 0xFFFFFFFF 29 - #define SMU13_DRIVER_IF_VERSION_YELLOW_CARP 0x03 29 + #define SMU13_DRIVER_IF_VERSION_YELLOW_CARP 0x04 30 30 #define SMU13_DRIVER_IF_VERSION_ALDE 0x07 31 31 32 32 /* MP Apertures */
+3 -1
drivers/gpu/drm/amd/pm/inc/smu_v13_0_1_pmfw.h
··· 111 111 uint32_t InWhisperMode : 1; 112 112 uint32_t spare0 : 1; 113 113 uint32_t ZstateStatus : 4; 114 - uint32_t spare1 :12; 114 + uint32_t spare1 : 4; 115 + uint32_t DstateFun : 4; 116 + uint32_t DstateDev : 4; 115 117 // MP1_EXT_SCRATCH2 116 118 uint32_t P2JobHandler :24; 117 119 uint32_t RsmuPmiP2FinishedCnt : 8;
+1 -2
drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
··· 353 353 struct amdgpu_device *adev = smu->adev; 354 354 uint32_t val; 355 355 356 - if (powerplay_table->platform_caps & SMU_11_0_7_PP_PLATFORM_CAP_BACO || 357 - powerplay_table->platform_caps & SMU_11_0_7_PP_PLATFORM_CAP_MACO) { 356 + if (powerplay_table->platform_caps & SMU_11_0_7_PP_PLATFORM_CAP_BACO) { 358 357 val = RREG32_SOC15(NBIO, 0, mmRCC_BIF_STRAP0); 359 358 smu_baco->platform_support = 360 359 (val & RCC_BIF_STRAP0__STRAP_PX_CAPABLE_MASK) ? true :
+1 -1
drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
··· 256 256 return 0; 257 257 258 258 err3_out: 259 - kfree(smu_table->clocks_table); 259 + kfree(smu_table->watermarks_table); 260 260 err2_out: 261 261 kfree(smu_table->gpu_metrics_table); 262 262 err1_out:
+24 -10
drivers/gpu/drm/i915/display/intel_display.c
··· 5746 5746 5747 5747 switch (crtc_state->pipe_bpp) { 5748 5748 case 18: 5749 - val |= PIPEMISC_DITHER_6_BPC; 5749 + val |= PIPEMISC_6_BPC; 5750 5750 break; 5751 5751 case 24: 5752 - val |= PIPEMISC_DITHER_8_BPC; 5752 + val |= PIPEMISC_8_BPC; 5753 5753 break; 5754 5754 case 30: 5755 - val |= PIPEMISC_DITHER_10_BPC; 5755 + val |= PIPEMISC_10_BPC; 5756 5756 break; 5757 5757 case 36: 5758 - val |= PIPEMISC_DITHER_12_BPC; 5758 + /* Port output 12BPC defined for ADLP+ */ 5759 + if (DISPLAY_VER(dev_priv) > 12) 5760 + val |= PIPEMISC_12_BPC_ADLP; 5759 5761 break; 5760 5762 default: 5761 5763 MISSING_CASE(crtc_state->pipe_bpp); ··· 5810 5808 5811 5809 tmp = intel_de_read(dev_priv, PIPEMISC(crtc->pipe)); 5812 5810 5813 - switch (tmp & PIPEMISC_DITHER_BPC_MASK) { 5814 - case PIPEMISC_DITHER_6_BPC: 5811 + switch (tmp & PIPEMISC_BPC_MASK) { 5812 + case PIPEMISC_6_BPC: 5815 5813 return 18; 5816 - case PIPEMISC_DITHER_8_BPC: 5814 + case PIPEMISC_8_BPC: 5817 5815 return 24; 5818 - case PIPEMISC_DITHER_10_BPC: 5816 + case PIPEMISC_10_BPC: 5819 5817 return 30; 5820 - case PIPEMISC_DITHER_12_BPC: 5821 - return 36; 5818 + /* 5819 + * PORT OUTPUT 12 BPC defined for ADLP+. 5820 + * 5821 + * TODO: 5822 + * For previous platforms with DSI interface, bits 5:7 5823 + * are used for storing pipe_bpp irrespective of dithering. 5824 + * Since the value of 12 BPC is not defined for these bits 5825 + * on older platforms, need to find a workaround for 12 BPC 5826 + * MIPI DSI HW readout. 5827 + */ 5828 + case PIPEMISC_12_BPC_ADLP: 5829 + if (DISPLAY_VER(dev_priv) > 12) 5830 + return 36; 5831 + fallthrough; 5822 5832 default: 5823 5833 MISSING_CASE(tmp); 5824 5834 return 0;
+1
drivers/gpu/drm/i915/gvt/handlers.c
··· 3149 3149 MMIO_DFH(_MMIO(0xb100), D_BDW, F_CMD_ACCESS, NULL, NULL); 3150 3150 MMIO_DFH(_MMIO(0xb10c), D_BDW, F_CMD_ACCESS, NULL, NULL); 3151 3151 MMIO_D(_MMIO(0xb110), D_BDW); 3152 + MMIO_D(GEN9_SCRATCH_LNCF1, D_BDW_PLUS); 3152 3153 3153 3154 MMIO_F(_MMIO(0x24d0), 48, F_CMD_ACCESS | F_CMD_WRITE_PATCH, 0, 0, 3154 3155 D_BDW_PLUS, NULL, force_nonpriv_write);
+2
drivers/gpu/drm/i915/gvt/mmio_context.c
··· 105 105 {RCS0, COMMON_SLICE_CHICKEN2, 0xffff, true}, /* 0x7014 */ 106 106 {RCS0, GEN9_CS_DEBUG_MODE1, 0xffff, false}, /* 0x20ec */ 107 107 {RCS0, GEN8_L3SQCREG4, 0, false}, /* 0xb118 */ 108 + {RCS0, GEN9_SCRATCH1, 0, false}, /* 0xb11c */ 109 + {RCS0, GEN9_SCRATCH_LNCF1, 0, false}, /* 0xb008 */ 108 110 {RCS0, GEN7_HALF_SLICE_CHICKEN1, 0xffff, true}, /* 0xe100 */ 109 111 {RCS0, HALF_SLICE_CHICKEN2, 0xffff, true}, /* 0xe180 */ 110 112 {RCS0, HALF_SLICE_CHICKEN3, 0xffff, true}, /* 0xe184 */
+2 -2
drivers/gpu/drm/i915/i915_globals.c
··· 138 138 atomic_inc(&active); 139 139 } 140 140 141 - static void __exit __i915_globals_flush(void) 141 + static void __i915_globals_flush(void) 142 142 { 143 143 atomic_inc(&active); /* skip shrinking */ 144 144 ··· 148 148 atomic_dec(&active); 149 149 } 150 150 151 - void __exit i915_globals_exit(void) 151 + void i915_globals_exit(void) 152 152 { 153 153 GEM_BUG_ON(atomic_read(&active)); 154 154
+18 -1
drivers/gpu/drm/i915/i915_gpu_error.c
··· 727 727 if (GRAPHICS_VER(m->i915) >= 12) { 728 728 int i; 729 729 730 - for (i = 0; i < GEN12_SFC_DONE_MAX; i++) 730 + for (i = 0; i < GEN12_SFC_DONE_MAX; i++) { 731 + /* 732 + * SFC_DONE resides in the VD forcewake domain, so it 733 + * only exists if the corresponding VCS engine is 734 + * present. 735 + */ 736 + if (!HAS_ENGINE(gt->_gt, _VCS(i * 2))) 737 + continue; 738 + 731 739 err_printf(m, " SFC_DONE[%d]: 0x%08x\n", i, 732 740 gt->sfc_done[i]); 741 + } 733 742 734 743 err_printf(m, " GAM_DONE: 0x%08x\n", gt->gam_done); 735 744 } ··· 1590 1581 1591 1582 if (GRAPHICS_VER(i915) >= 12) { 1592 1583 for (i = 0; i < GEN12_SFC_DONE_MAX; i++) { 1584 + /* 1585 + * SFC_DONE resides in the VD forcewake domain, so it 1586 + * only exists if the corresponding VCS engine is 1587 + * present. 1588 + */ 1589 + if (!HAS_ENGINE(gt->_gt, _VCS(i * 2))) 1590 + continue; 1591 + 1593 1592 gt->sfc_done[i] = 1594 1593 intel_uncore_read(uncore, GEN12_SFC_DONE(i)); 1595 1594 }
+1
drivers/gpu/drm/i915/i915_pci.c
··· 1195 1195 err = pci_register_driver(&i915_pci_driver); 1196 1196 if (err) { 1197 1197 i915_pmu_exit(); 1198 + i915_globals_exit(); 1198 1199 return err; 1199 1200 } 1200 1201
+12 -6
drivers/gpu/drm/i915/i915_reg.h
··· 422 422 #define GEN12_HCP_SFC_LOCK_ACK_BIT REG_BIT(1) 423 423 #define GEN12_HCP_SFC_USAGE_BIT REG_BIT(0) 424 424 425 - #define GEN12_SFC_DONE(n) _MMIO(0x1cc00 + (n) * 0x100) 425 + #define GEN12_SFC_DONE(n) _MMIO(0x1cc000 + (n) * 0x1000) 426 426 #define GEN12_SFC_DONE_MAX 4 427 427 428 428 #define RING_PP_DIR_BASE(base) _MMIO((base) + 0x228) ··· 6163 6163 #define PIPEMISC_HDR_MODE_PRECISION (1 << 23) /* icl+ */ 6164 6164 #define PIPEMISC_OUTPUT_COLORSPACE_YUV (1 << 11) 6165 6165 #define PIPEMISC_PIXEL_ROUNDING_TRUNC REG_BIT(8) /* tgl+ */ 6166 - #define PIPEMISC_DITHER_BPC_MASK (7 << 5) 6167 - #define PIPEMISC_DITHER_8_BPC (0 << 5) 6168 - #define PIPEMISC_DITHER_10_BPC (1 << 5) 6169 - #define PIPEMISC_DITHER_6_BPC (2 << 5) 6170 - #define PIPEMISC_DITHER_12_BPC (3 << 5) 6166 + /* 6167 + * For Display < 13, Bits 5-7 of PIPE MISC represent DITHER BPC with 6168 + * valid values of: 6, 8, 10 BPC. 6169 + * ADLP+, the bits 5-7 represent PORT OUTPUT BPC with valid values of: 6170 + * 6, 8, 10, 12 BPC. 6171 + */ 6172 + #define PIPEMISC_BPC_MASK (7 << 5) 6173 + #define PIPEMISC_8_BPC (0 << 5) 6174 + #define PIPEMISC_10_BPC (1 << 5) 6175 + #define PIPEMISC_6_BPC (2 << 5) 6176 + #define PIPEMISC_12_BPC_ADLP (4 << 5) /* adlp+ */ 6171 6177 #define PIPEMISC_DITHER_ENABLE (1 << 4) 6172 6178 #define PIPEMISC_DITHER_TYPE_MASK (3 << 2) 6173 6179 #define PIPEMISC_DITHER_TYPE_SP (0 << 2)
+18 -4
drivers/gpu/drm/kmb/kmb_drv.c
··· 203 203 unsigned long status, val, val1; 204 204 int plane_id, dma0_state, dma1_state; 205 205 struct kmb_drm_private *kmb = to_kmb(dev); 206 + u32 ctrl = 0; 206 207 207 208 status = kmb_read_lcd(kmb, LCD_INT_STATUS); 208 209 ··· 227 226 228 227 kmb_clr_bitmask_lcd(kmb, LCD_CONTROL, 229 228 kmb->plane_status[plane_id].ctrl); 229 + 230 + ctrl = kmb_read_lcd(kmb, LCD_CONTROL); 231 + if (!(ctrl & (LCD_CTRL_VL1_ENABLE | 232 + LCD_CTRL_VL2_ENABLE | 233 + LCD_CTRL_GL1_ENABLE | 234 + LCD_CTRL_GL2_ENABLE))) { 235 + /* If no LCD layers are using DMA, 236 + * then disable DMA pipelined AXI read 237 + * transactions. 238 + */ 239 + kmb_clr_bitmask_lcd(kmb, LCD_CONTROL, 240 + LCD_CTRL_PIPELINE_DMA); 241 + } 230 242 231 243 kmb->plane_status[plane_id].disable = false; 232 244 } ··· 425 411 .fops = &fops, 426 412 DRM_GEM_CMA_DRIVER_OPS_VMAP, 427 413 .name = "kmb-drm", 428 - .desc = "KEEMBAY DISPLAY DRIVER ", 429 - .date = "20201008", 430 - .major = 1, 431 - .minor = 0, 414 + .desc = "KEEMBAY DISPLAY DRIVER", 415 + .date = DRIVER_DATE, 416 + .major = DRIVER_MAJOR, 417 + .minor = DRIVER_MINOR, 432 418 }; 433 419 434 420 static int kmb_remove(struct platform_device *pdev)
+5
drivers/gpu/drm/kmb/kmb_drv.h
··· 15 15 #define KMB_MAX_HEIGHT 1080 /*Max height in pixels */ 16 16 #define KMB_MIN_WIDTH 1920 /*Max width in pixels */ 17 17 #define KMB_MIN_HEIGHT 1080 /*Max height in pixels */ 18 + 19 + #define DRIVER_DATE "20210223" 20 + #define DRIVER_MAJOR 1 21 + #define DRIVER_MINOR 1 22 + 18 23 #define KMB_LCD_DEFAULT_CLK 200000000 19 24 #define KMB_SYS_CLK_MHZ 500 20 25
+13 -2
drivers/gpu/drm/kmb/kmb_plane.c
··· 427 427 428 428 kmb_set_bitmask_lcd(kmb, LCD_CONTROL, ctrl); 429 429 430 - /* FIXME no doc on how to set output format,these values are 431 - * taken from the Myriadx tests 430 + /* Enable pipeline AXI read transactions for the DMA 431 + * after setting graphics layers. This must be done 432 + * in a separate write cycle. 433 + */ 434 + kmb_set_bitmask_lcd(kmb, LCD_CONTROL, LCD_CTRL_PIPELINE_DMA); 435 + 436 + /* FIXME no doc on how to set output format, these values are taken 437 + * from the Myriadx tests 432 438 */ 433 439 out_format |= LCD_OUTF_FORMAT_RGB888; 434 440 ··· 531 525 &primary->base_plane); 532 526 plane->id = i; 533 527 } 528 + 529 + /* Disable pipeline AXI read transactions for the DMA 530 + * prior to setting graphics layers 531 + */ 532 + kmb_clr_bitmask_lcd(kmb, LCD_CONTROL, LCD_CTRL_PIPELINE_DMA); 534 533 535 534 return primary; 536 535 cleanup:
+5 -1
drivers/gpu/drm/mediatek/mtk_dpi.c
··· 605 605 struct drm_crtc_state *crtc_state, 606 606 struct drm_connector_state *conn_state) 607 607 { 608 - struct mtk_dpi *dpi = bridge->driver_private; 608 + struct mtk_dpi *dpi = bridge_to_dpi(bridge); 609 609 unsigned int out_bus_format; 610 610 611 611 out_bus_format = bridge_state->output_bus_cfg.format; 612 + 613 + if (out_bus_format == MEDIA_BUS_FMT_FIXED) 614 + if (dpi->conf->num_output_fmts) 615 + out_bus_format = dpi->conf->output_fmts[0]; 612 616 613 617 dev_dbg(dpi->dev, "input format 0x%04x, output format 0x%04x\n", 614 618 bridge_state->input_bus_cfg.format,
-3
drivers/gpu/drm/mediatek/mtk_drm_crtc.c
··· 532 532 struct drm_atomic_state *state) 533 533 { 534 534 struct mtk_drm_crtc *mtk_crtc = to_mtk_crtc(crtc); 535 - const struct drm_plane_helper_funcs *plane_helper_funcs = 536 - plane->helper_private; 537 535 538 536 if (!mtk_crtc->enabled) 539 537 return; 540 538 541 - plane_helper_funcs->atomic_update(plane, state); 542 539 mtk_drm_crtc_update_config(mtk_crtc, false); 543 540 } 544 541
+34 -26
drivers/gpu/drm/mediatek/mtk_drm_plane.c
··· 110 110 true, true); 111 111 } 112 112 113 + static void mtk_plane_update_new_state(struct drm_plane_state *new_state, 114 + struct mtk_plane_state *mtk_plane_state) 115 + { 116 + struct drm_framebuffer *fb = new_state->fb; 117 + struct drm_gem_object *gem; 118 + struct mtk_drm_gem_obj *mtk_gem; 119 + unsigned int pitch, format; 120 + dma_addr_t addr; 121 + 122 + gem = fb->obj[0]; 123 + mtk_gem = to_mtk_gem_obj(gem); 124 + addr = mtk_gem->dma_addr; 125 + pitch = fb->pitches[0]; 126 + format = fb->format->format; 127 + 128 + addr += (new_state->src.x1 >> 16) * fb->format->cpp[0]; 129 + addr += (new_state->src.y1 >> 16) * pitch; 130 + 131 + mtk_plane_state->pending.enable = true; 132 + mtk_plane_state->pending.pitch = pitch; 133 + mtk_plane_state->pending.format = format; 134 + mtk_plane_state->pending.addr = addr; 135 + mtk_plane_state->pending.x = new_state->dst.x1; 136 + mtk_plane_state->pending.y = new_state->dst.y1; 137 + mtk_plane_state->pending.width = drm_rect_width(&new_state->dst); 138 + mtk_plane_state->pending.height = drm_rect_height(&new_state->dst); 139 + mtk_plane_state->pending.rotation = new_state->rotation; 140 + } 141 + 113 142 static void mtk_plane_atomic_async_update(struct drm_plane *plane, 114 143 struct drm_atomic_state *state) 115 144 { ··· 155 126 plane->state->src_h = new_state->src_h; 156 127 plane->state->src_w = new_state->src_w; 157 128 swap(plane->state->fb, new_state->fb); 158 - new_plane_state->pending.async_dirty = true; 159 129 130 + mtk_plane_update_new_state(new_state, new_plane_state); 131 + wmb(); /* Make sure the above parameters are set before update */ 132 + new_plane_state->pending.async_dirty = true; 160 133 mtk_drm_crtc_async_update(new_state->crtc, plane, state); 161 134 } 162 135 ··· 220 189 struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 221 190 plane); 222 191 struct mtk_plane_state *mtk_plane_state = to_mtk_plane_state(new_state); 223 - struct drm_crtc *crtc = new_state->crtc; 224 - 
struct drm_framebuffer *fb = new_state->fb; 225 - struct drm_gem_object *gem; 226 - struct mtk_drm_gem_obj *mtk_gem; 227 - unsigned int pitch, format; 228 - dma_addr_t addr; 229 192 230 - if (!crtc || WARN_ON(!fb)) 193 + if (!new_state->crtc || WARN_ON(!new_state->fb)) 231 194 return; 232 195 233 196 if (!new_state->visible) { ··· 229 204 return; 230 205 } 231 206 232 - gem = fb->obj[0]; 233 - mtk_gem = to_mtk_gem_obj(gem); 234 - addr = mtk_gem->dma_addr; 235 - pitch = fb->pitches[0]; 236 - format = fb->format->format; 237 - 238 - addr += (new_state->src.x1 >> 16) * fb->format->cpp[0]; 239 - addr += (new_state->src.y1 >> 16) * pitch; 240 - 241 - mtk_plane_state->pending.enable = true; 242 - mtk_plane_state->pending.pitch = pitch; 243 - mtk_plane_state->pending.format = format; 244 - mtk_plane_state->pending.addr = addr; 245 - mtk_plane_state->pending.x = new_state->dst.x1; 246 - mtk_plane_state->pending.y = new_state->dst.y1; 247 - mtk_plane_state->pending.width = drm_rect_width(&new_state->dst); 248 - mtk_plane_state->pending.height = drm_rect_height(&new_state->dst); 249 - mtk_plane_state->pending.rotation = new_state->rotation; 207 + mtk_plane_update_new_state(new_state, mtk_plane_state); 250 208 wmb(); /* Make sure the above parameters are set before update */ 251 209 mtk_plane_state->pending.dirty = true; 252 210 }
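The mediatek change above deduplicates the pending-state fill into mtk_plane_update_new_state() while preserving the driver's ordering rule: write all pending fields, issue wmb(), and only then set the dirty flag that the consumer polls. A minimal userspace sketch of that publish pattern, with a C11 release fence standing in for wmb() (struct and field names are illustrative, not the driver's):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical pending-state struct; the real one lives in
 * struct mtk_plane_state. */
struct pending_state {
	unsigned int pitch;
	unsigned int format;
	atomic_bool dirty;	/* consumer polls this flag */
};

static void publish(struct pending_state *p, unsigned int pitch,
		    unsigned int format)
{
	p->pitch = pitch;
	p->format = format;
	/* analogue of wmb(): order the field writes before the flag */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&p->dirty, true, memory_order_relaxed);
}
```

The matching consumer reads the flag first and must order its reads after it (the kernel side pairs the wmb() with a read-side barrier or dependency) before trusting the fields.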
+5
drivers/gpu/drm/meson/meson_registers.h
··· 634 634 #define VPP_WRAP_OSD3_MATRIX_PRE_OFFSET2 0x3dbc 635 635 #define VPP_WRAP_OSD3_MATRIX_EN_CTRL 0x3dbd 636 636 637 + /* osd1 HDR */ 638 + #define OSD1_HDR2_CTRL 0x38a0 639 + #define OSD1_HDR2_CTRL_VDIN0_HDR2_TOP_EN BIT(13) 640 + #define OSD1_HDR2_CTRL_REG_ONLY_MAT BIT(16) 641 + 637 642 /* osd2 scaler */ 638 643 #define OSD2_VSC_PHASE_STEP 0x3d00 639 644 #define OSD2_VSC_INI_PHASE 0x3d01
+6 -1
drivers/gpu/drm/meson/meson_viu.c
··· 425 425 if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXM) || 426 426 meson_vpu_is_compatible(priv, VPU_COMPATIBLE_GXL)) 427 427 meson_viu_load_matrix(priv); 428 - else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) 428 + else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) { 429 429 meson_viu_set_g12a_osd1_matrix(priv, RGB709_to_YUV709l_coeff, 430 430 true); 431 + /* fix green/pink color distortion from vendor u-boot */ 432 + writel_bits_relaxed(OSD1_HDR2_CTRL_REG_ONLY_MAT | 433 + OSD1_HDR2_CTRL_VDIN0_HDR2_TOP_EN, 0, 434 + priv->io_base + _REG(OSD1_HDR2_CTRL)); 435 + } 431 436 432 437 /* Initialize OSD1 fifo control register */ 433 438 reg = VIU_OSD_DDR_PRIORITY_URGENT |
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 492 492 resource_size_t vram_start; 493 493 resource_size_t vram_size; 494 494 resource_size_t prim_bb_mem; 495 - void __iomem *rmmio; 495 + u32 __iomem *rmmio; 496 496 u32 *fifo_mem; 497 497 resource_size_t fifo_mem_size; 498 498 uint32_t fb_max_width;
+16 -1
drivers/infiniband/core/cma.c
··· 926 926 return ret; 927 927 } 928 928 929 + static int cma_init_conn_qp(struct rdma_id_private *id_priv, struct ib_qp *qp) 930 + { 931 + struct ib_qp_attr qp_attr; 932 + int qp_attr_mask, ret; 933 + 934 + qp_attr.qp_state = IB_QPS_INIT; 935 + ret = rdma_init_qp_attr(&id_priv->id, &qp_attr, &qp_attr_mask); 936 + if (ret) 937 + return ret; 938 + 939 + return ib_modify_qp(qp, &qp_attr, qp_attr_mask); 940 + } 941 + 929 942 int rdma_create_qp(struct rdma_cm_id *id, struct ib_pd *pd, 930 943 struct ib_qp_init_attr *qp_init_attr) 931 944 { 932 945 struct rdma_id_private *id_priv; 933 946 struct ib_qp *qp; 934 - int ret = 0; 947 + int ret; 935 948 936 949 id_priv = container_of(id, struct rdma_id_private, id); 937 950 if (id->device != pd->device) { ··· 961 948 962 949 if (id->qp_type == IB_QPT_UD) 963 950 ret = cma_init_ud_qp(id_priv, qp); 951 + else 952 + ret = cma_init_conn_qp(id_priv, qp); 964 953 if (ret) 965 954 goto out_destroy; 966 955
+9 -3
drivers/infiniband/hw/cxgb4/cq.c
··· 967 967 return !err || err == -ENODATA ? npolled : err; 968 968 } 969 969 970 + void c4iw_cq_rem_ref(struct c4iw_cq *chp) 971 + { 972 + if (refcount_dec_and_test(&chp->refcnt)) 973 + complete(&chp->cq_rel_comp); 974 + } 975 + 970 976 int c4iw_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata) 971 977 { 972 978 struct c4iw_cq *chp; ··· 982 976 chp = to_c4iw_cq(ib_cq); 983 977 984 978 xa_erase_irq(&chp->rhp->cqs, chp->cq.cqid); 985 - refcount_dec(&chp->refcnt); 986 - wait_event(chp->wait, !refcount_read(&chp->refcnt)); 979 + c4iw_cq_rem_ref(chp); 980 + wait_for_completion(&chp->cq_rel_comp); 987 981 988 982 ucontext = rdma_udata_to_drv_context(udata, struct c4iw_ucontext, 989 983 ibucontext); ··· 1087 1081 spin_lock_init(&chp->lock); 1088 1082 spin_lock_init(&chp->comp_handler_lock); 1089 1083 refcount_set(&chp->refcnt, 1); 1090 - init_waitqueue_head(&chp->wait); 1084 + init_completion(&chp->cq_rel_comp); 1091 1085 ret = xa_insert_irq(&rhp->cqs, chp->cq.cqid, chp, GFP_KERNEL); 1092 1086 if (ret) 1093 1087 goto err_destroy_cq;
+2 -4
drivers/infiniband/hw/cxgb4/ev.c
··· 213 213 break; 214 214 } 215 215 done: 216 - if (refcount_dec_and_test(&chp->refcnt)) 217 - wake_up(&chp->wait); 216 + c4iw_cq_rem_ref(chp); 218 217 c4iw_qp_rem_ref(&qhp->ibqp); 219 218 out: 220 219 return; ··· 233 234 spin_lock_irqsave(&chp->comp_handler_lock, flag); 234 235 (*chp->ibcq.comp_handler)(&chp->ibcq, chp->ibcq.cq_context); 235 236 spin_unlock_irqrestore(&chp->comp_handler_lock, flag); 236 - if (refcount_dec_and_test(&chp->refcnt)) 237 - wake_up(&chp->wait); 237 + c4iw_cq_rem_ref(chp); 238 238 } else { 239 239 pr_debug("unknown cqid 0x%x\n", qid); 240 240 xa_unlock_irqrestore(&dev->cqs, flag);
+2 -1
drivers/infiniband/hw/cxgb4/iw_cxgb4.h
··· 428 428 spinlock_t lock; 429 429 spinlock_t comp_handler_lock; 430 430 refcount_t refcnt; 431 - wait_queue_head_t wait; 431 + struct completion cq_rel_comp; 432 432 struct c4iw_wr_wait *wr_waitp; 433 433 }; 434 434 ··· 979 979 struct ib_mr *c4iw_get_dma_mr(struct ib_pd *pd, int acc); 980 980 int c4iw_dereg_mr(struct ib_mr *ib_mr, struct ib_udata *udata); 981 981 int c4iw_destroy_cq(struct ib_cq *ib_cq, struct ib_udata *udata); 982 + void c4iw_cq_rem_ref(struct c4iw_cq *chp); 982 983 int c4iw_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, 983 984 struct ib_udata *udata); 984 985 int c4iw_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags);
+3 -4
drivers/infiniband/hw/hns/hns_roce_cmd.c
··· 213 213 214 214 hr_cmd->context = 215 215 kcalloc(hr_cmd->max_cmds, sizeof(*hr_cmd->context), GFP_KERNEL); 216 - if (!hr_cmd->context) 216 + if (!hr_cmd->context) { 217 + hr_dev->cmd_mod = 0; 217 218 return -ENOMEM; 219 + } 218 220 219 221 for (i = 0; i < hr_cmd->max_cmds; ++i) { 220 222 hr_cmd->context[i].token = i; ··· 230 228 spin_lock_init(&hr_cmd->context_lock); 231 229 232 230 hr_cmd->use_events = 1; 233 - down(&hr_cmd->poll_sem); 234 231 235 232 return 0; 236 233 } ··· 240 239 241 240 kfree(hr_cmd->context); 242 241 hr_cmd->use_events = 0; 243 - 244 - up(&hr_cmd->poll_sem); 245 242 } 246 243 247 244 struct hns_roce_cmd_mailbox *
+1 -3
drivers/infiniband/hw/hns/hns_roce_main.c
··· 873 873 874 874 if (hr_dev->cmd_mod) { 875 875 ret = hns_roce_cmd_use_events(hr_dev); 876 - if (ret) { 876 + if (ret) 877 877 dev_warn(dev, 878 878 "Cmd event mode failed, set back to poll!\n"); 879 - hns_roce_cmd_use_polling(hr_dev); 880 - } 881 879 } 882 880 883 881 ret = hns_roce_init_hem(hr_dev);
+1 -3
drivers/infiniband/hw/mlx5/cq.c
··· 945 945 u32 *cqb = NULL; 946 946 void *cqc; 947 947 int cqe_size; 948 - unsigned int irqn; 949 948 int eqn; 950 949 int err; 951 950 ··· 983 984 INIT_WORK(&cq->notify_work, notify_soft_wc_handler); 984 985 } 985 986 986 - err = mlx5_vector2eqn(dev->mdev, vector, &eqn, &irqn); 987 + err = mlx5_vector2eqn(dev->mdev, vector, &eqn); 987 988 if (err) 988 989 goto err_cqb; 989 990 ··· 1006 1007 goto err_cqb; 1007 1008 1008 1009 mlx5_ib_dbg(dev, "cqn 0x%x\n", cq->mcq.cqn); 1009 - cq->mcq.irqn = irqn; 1010 1010 if (udata) 1011 1011 cq->mcq.tasklet_ctx.comp = mlx5_ib_cq_comp; 1012 1012 else
+1 -2
drivers/infiniband/hw/mlx5/devx.c
··· 975 975 struct mlx5_ib_dev *dev; 976 976 int user_vector; 977 977 int dev_eqn; 978 - unsigned int irqn; 979 978 int err; 980 979 981 980 if (uverbs_copy_from(&user_vector, attrs, ··· 986 987 return PTR_ERR(c); 987 988 dev = to_mdev(c->ibucontext.device); 988 989 989 - err = mlx5_vector2eqn(dev->mdev, user_vector, &dev_eqn, &irqn); 990 + err = mlx5_vector2eqn(dev->mdev, user_vector, &dev_eqn); 990 991 if (err < 0) 991 992 return err; 992 993
+2 -2
drivers/infiniband/hw/mlx5/mr.c
··· 531 531 */ 532 532 spin_unlock_irq(&ent->lock); 533 533 need_delay = need_resched() || someone_adding(cache) || 534 - time_after(jiffies, 535 - READ_ONCE(cache->last_add) + 300 * HZ); 534 + !time_after(jiffies, 535 + READ_ONCE(cache->last_add) + 300 * HZ); 536 536 spin_lock_irq(&ent->lock); 537 537 if (ent->disabled) 538 538 goto out;
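The one-line mlx5 fix above negates the time_after() test: the cache shrinker should delay while an MR was added within the last 300*HZ jiffies, not after. time_after() itself is wrap-safe because it compares via a signed difference; a 32-bit sketch of that idiom (the kernel's version lives in include/linux/jiffies.h):

```c
#include <stdbool.h>
#include <stdint.h>

/* Wrap-safe "a is after b": cast the difference to a signed type so
 * the comparison stays correct across counter wraparound. */
static bool time_after32(uint32_t a, uint32_t b)
{
	return (int32_t)(b - a) < 0;
}
```

A plain `a > b` would give the wrong answer once the counter wraps, which for a 32-bit jiffies counter at HZ=1000 happens after roughly 49 days.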
+1
drivers/infiniband/sw/rxe/rxe_net.c
··· 259 259 260 260 iph->version = IPVERSION; 261 261 iph->ihl = sizeof(struct iphdr) >> 2; 262 + iph->tot_len = htons(skb->len); 262 263 iph->frag_off = df; 263 264 iph->protocol = proto; 264 265 iph->tos = tos;
+1 -1
drivers/infiniband/sw/rxe/rxe_resp.c
··· 318 318 pr_warn("%s: invalid num_sge in SRQ entry\n", __func__); 319 319 return RESPST_ERR_MALFORMED_WQE; 320 320 } 321 - size = sizeof(wqe) + wqe->dma.num_sge*sizeof(struct rxe_sge); 321 + size = sizeof(*wqe) + wqe->dma.num_sge*sizeof(struct rxe_sge); 322 322 memcpy(&qp->resp.srq_wqe, wqe, size); 323 323 324 324 qp->resp.wqe = &qp->resp.srq_wqe.wqe;
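The rxe fix above is the classic sizeof(ptr) vs sizeof(*ptr) bug: sizeof(wqe) measured the pointer, so the memcpy copied only pointer-size bytes instead of the WQE header plus its SGE array. Simplified stand-in types (not the rxe definitions) showing the corrected size calculation:

```c
#include <stddef.h>

struct sge {
	unsigned long addr;
	unsigned int len;
};

struct wqe {
	unsigned int num_sge;
	struct sge sge[];	/* flexible array of scatter/gather entries */
};

/* Size of the header plus all trailing SGEs; note sizeof(*wqe),
 * not sizeof(wqe), which would be the size of the pointer. */
static size_t wqe_copy_size(const struct wqe *wqe)
{
	return sizeof(*wqe) + wqe->num_sge * sizeof(struct sge);
}
```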
+8 -1
drivers/interconnect/core.c
··· 403 403 { 404 404 struct icc_path **ptr, *path; 405 405 406 - ptr = devres_alloc(devm_icc_release, sizeof(**ptr), GFP_KERNEL); 406 + ptr = devres_alloc(devm_icc_release, sizeof(*ptr), GFP_KERNEL); 407 407 if (!ptr) 408 408 return ERR_PTR(-ENOMEM); 409 409 ··· 973 973 } 974 974 node->avg_bw = node->init_avg; 975 975 node->peak_bw = node->init_peak; 976 + 977 + if (provider->pre_aggregate) 978 + provider->pre_aggregate(node); 979 + 976 980 if (provider->aggregate) 977 981 provider->aggregate(node, 0, node->init_avg, node->init_peak, 978 982 &node->avg_bw, &node->peak_bw); 983 + 979 984 provider->set(node, node); 980 985 node->avg_bw = 0; 981 986 node->peak_bw = 0; ··· 1111 1106 dev_dbg(p->dev, "interconnect provider is in synced state\n"); 1112 1107 list_for_each_entry(n, &p->nodes, node_list) { 1113 1108 if (n->init_avg || n->init_peak) { 1109 + n->init_avg = 0; 1110 + n->init_peak = 0; 1114 1111 aggregate_requests(n); 1115 1112 p->set(n, n); 1116 1113 }
+10 -12
drivers/interconnect/qcom/icc-rpmh.c
··· 20 20 { 21 21 size_t i; 22 22 struct qcom_icc_node *qn; 23 + struct qcom_icc_provider *qp; 23 24 24 25 qn = node->data; 26 + qp = to_qcom_provider(node->provider); 25 27 26 28 for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) { 27 29 qn->sum_avg[i] = 0; 28 30 qn->max_peak[i] = 0; 29 31 } 32 + 33 + for (i = 0; i < qn->num_bcms; i++) 34 + qcom_icc_bcm_voter_add(qp->voter, qn->bcms[i]); 30 35 } 31 36 EXPORT_SYMBOL_GPL(qcom_icc_pre_aggregate); 32 37 ··· 49 44 { 50 45 size_t i; 51 46 struct qcom_icc_node *qn; 52 - struct qcom_icc_provider *qp; 53 47 54 48 qn = node->data; 55 - qp = to_qcom_provider(node->provider); 56 49 57 50 if (!tag) 58 51 tag = QCOM_ICC_TAG_ALWAYS; ··· 60 57 qn->sum_avg[i] += avg_bw; 61 58 qn->max_peak[i] = max_t(u32, qn->max_peak[i], peak_bw); 62 59 } 60 + 61 + if (node->init_avg || node->init_peak) { 62 + qn->sum_avg[i] = max_t(u64, qn->sum_avg[i], node->init_avg); 63 + qn->max_peak[i] = max_t(u64, qn->max_peak[i], node->init_peak); 64 + } 63 65 } 64 66 65 67 *agg_avg += avg_bw; 66 68 *agg_peak = max_t(u32, *agg_peak, peak_bw); 67 - 68 - for (i = 0; i < qn->num_bcms; i++) 69 - qcom_icc_bcm_voter_add(qp->voter, qn->bcms[i]); 70 69 71 70 return 0; 72 71 } ··· 84 79 int qcom_icc_set(struct icc_node *src, struct icc_node *dst) 85 80 { 86 81 struct qcom_icc_provider *qp; 87 - struct qcom_icc_node *qn; 88 82 struct icc_node *node; 89 83 90 84 if (!src) ··· 92 88 node = src; 93 89 94 90 qp = to_qcom_provider(node->provider); 95 - qn = node->data; 96 - 97 - qn->sum_avg[QCOM_ICC_BUCKET_AMC] = max_t(u64, qn->sum_avg[QCOM_ICC_BUCKET_AMC], 98 - node->avg_bw); 99 - qn->max_peak[QCOM_ICC_BUCKET_AMC] = max_t(u64, qn->max_peak[QCOM_ICC_BUCKET_AMC], 100 - node->peak_bw); 101 91 102 92 qcom_icc_bcm_voter_commit(qp->voter); 103 93
-2
drivers/md/raid1.c
··· 474 474 /* 475 475 * When the device is faulty, it is not necessary to 476 476 * handle write error. 477 - * For failfast, this is the only remaining device, 478 - * We need to retry the write without FailFast. 479 477 */ 480 478 if (!test_bit(Faulty, &rdev->flags)) 481 479 set_bit(R1BIO_WriteError, &r1_bio->state);
+2 -2
drivers/md/raid10.c
··· 471 471 /* 472 472 * When the device is faulty, it is not necessary to 473 473 * handle write error. 474 - * For failfast, this is the only remaining device, 475 - * We need to retry the write without FailFast. 476 474 */ 477 475 if (!test_bit(Faulty, &rdev->flags)) 478 476 set_bit(R10BIO_WriteError, &r10_bio->state); 479 477 else { 478 + /* Fail the request */ 479 + set_bit(R10BIO_Degraded, &r10_bio->state); 480 480 r10_bio->devs[slot].bio = NULL; 481 481 to_put = bio; 482 482 dec_rdev = 1;
+11 -5
drivers/net/bareudp.c
··· 71 71 family = AF_INET6; 72 72 73 73 if (bareudp->ethertype == htons(ETH_P_IP)) { 74 - struct iphdr *iphdr; 74 + __u8 ipversion; 75 75 76 - iphdr = (struct iphdr *)(skb->data + BAREUDP_BASE_HLEN); 77 - if (iphdr->version == 4) { 78 - proto = bareudp->ethertype; 79 - } else if (bareudp->multi_proto_mode && (iphdr->version == 6)) { 76 + if (skb_copy_bits(skb, BAREUDP_BASE_HLEN, &ipversion, 77 + sizeof(ipversion))) { 78 + bareudp->dev->stats.rx_dropped++; 79 + goto drop; 80 + } 81 + ipversion >>= 4; 82 + 83 + if (ipversion == 4) { 84 + proto = htons(ETH_P_IP); 85 + } else if (ipversion == 6 && bareudp->multi_proto_mode) { 80 86 proto = htons(ETH_P_IPV6); 81 87 } else { 82 88 bareudp->dev->stats.rx_dropped++;
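The bareudp fix above stops dereferencing a struct iphdr at a fixed offset (the header may not sit in the skb's linear area) and instead copies a single byte out with skb_copy_bits(), then takes its top nibble, which carries the IP version for both IPv4 and IPv6. The nibble extraction in isolation:

```c
#include <stdint.h>

/* First byte of an IP header: version in the high nibble
 * (IHL for IPv4, traffic-class bits for IPv6 in the low nibble). */
static unsigned int ip_version_from_first_byte(uint8_t b)
{
	return b >> 4;	/* 4 for IPv4, 6 for IPv6 */
}
```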
+4 -4
drivers/net/can/m_can/m_can.c
··· 1168 1168 FIELD_PREP(TDCR_TDCO_MASK, tdco)); 1169 1169 } 1170 1170 1171 - reg_btp = FIELD_PREP(NBTP_NBRP_MASK, brp) | 1172 - FIELD_PREP(NBTP_NSJW_MASK, sjw) | 1173 - FIELD_PREP(NBTP_NTSEG1_MASK, tseg1) | 1174 - FIELD_PREP(NBTP_NTSEG2_MASK, tseg2); 1171 + reg_btp |= FIELD_PREP(DBTP_DBRP_MASK, brp) | 1172 + FIELD_PREP(DBTP_DSJW_MASK, sjw) | 1173 + FIELD_PREP(DBTP_DTSEG1_MASK, tseg1) | 1174 + FIELD_PREP(DBTP_DTSEG2_MASK, tseg2); 1175 1175 1176 1176 m_can_write(cdev, M_CAN_DBTP, reg_btp); 1177 1177 }
+5 -2
drivers/net/dsa/hirschmann/hellcreek.c
··· 912 912 { 913 913 struct hellcreek *hellcreek = ds->priv; 914 914 u16 entries; 915 + int ret = 0; 915 916 size_t i; 916 917 917 918 mutex_lock(&hellcreek->reg_lock); ··· 944 943 if (!(entry.portmask & BIT(port))) 945 944 continue; 946 945 947 - cb(entry.mac, 0, entry.is_static, data); 946 + ret = cb(entry.mac, 0, entry.is_static, data); 947 + if (ret) 948 + break; 948 949 } 949 950 950 951 mutex_unlock(&hellcreek->reg_lock); 951 952 952 - return 0; 953 + return ret; 953 954 } 954 955 955 956 static int hellcreek_vlan_filtering(struct dsa_switch *ds, int port,
+19 -15
drivers/net/dsa/lan9303-core.c
··· 557 557 return 0; 558 558 } 559 559 560 - typedef void alr_loop_cb_t(struct lan9303 *chip, u32 dat0, u32 dat1, 561 - int portmap, void *ctx); 560 + typedef int alr_loop_cb_t(struct lan9303 *chip, u32 dat0, u32 dat1, 561 + int portmap, void *ctx); 562 562 563 - static void lan9303_alr_loop(struct lan9303 *chip, alr_loop_cb_t *cb, void *ctx) 563 + static int lan9303_alr_loop(struct lan9303 *chip, alr_loop_cb_t *cb, void *ctx) 564 564 { 565 - int i; 565 + int ret = 0, i; 566 566 567 567 mutex_lock(&chip->alr_mutex); 568 568 lan9303_write_switch_reg(chip, LAN9303_SWE_ALR_CMD, ··· 582 582 LAN9303_ALR_DAT1_PORT_BITOFFS; 583 583 portmap = alrport_2_portmap[alrport]; 584 584 585 - cb(chip, dat0, dat1, portmap, ctx); 585 + ret = cb(chip, dat0, dat1, portmap, ctx); 586 + if (ret) 587 + break; 586 588 587 589 lan9303_write_switch_reg(chip, LAN9303_SWE_ALR_CMD, 588 590 LAN9303_ALR_CMD_GET_NEXT); 589 591 lan9303_write_switch_reg(chip, LAN9303_SWE_ALR_CMD, 0); 590 592 } 591 593 mutex_unlock(&chip->alr_mutex); 594 + 595 + return ret; 592 596 } 593 597 594 598 static void alr_reg_to_mac(u32 dat0, u32 dat1, u8 mac[6]) ··· 610 606 }; 611 607 612 608 /* Clear learned (non-static) entry on given port */ 613 - static void alr_loop_cb_del_port_learned(struct lan9303 *chip, u32 dat0, 614 - u32 dat1, int portmap, void *ctx) 609 + static int alr_loop_cb_del_port_learned(struct lan9303 *chip, u32 dat0, 610 + u32 dat1, int portmap, void *ctx) 615 611 { 616 612 struct del_port_learned_ctx *del_ctx = ctx; 617 613 int port = del_ctx->port; 618 614 619 615 if (((BIT(port) & portmap) == 0) || (dat1 & LAN9303_ALR_DAT1_STATIC)) 620 - return; 616 + return 0; 621 617 622 618 /* learned entries has only one port, we can just delete */ 623 619 dat1 &= ~LAN9303_ALR_DAT1_VALID; /* delete entry */ 624 620 lan9303_alr_make_entry_raw(chip, dat0, dat1); 621 + 622 + return 0; 625 623 } 626 624 627 625 struct port_fdb_dump_ctx { ··· 632 626 dsa_fdb_dump_cb_t *cb; 633 627 }; 634 628 635 - static void 
alr_loop_cb_fdb_port_dump(struct lan9303 *chip, u32 dat0, 636 - u32 dat1, int portmap, void *ctx) 629 + static int alr_loop_cb_fdb_port_dump(struct lan9303 *chip, u32 dat0, 630 + u32 dat1, int portmap, void *ctx) 637 631 { 638 632 struct port_fdb_dump_ctx *dump_ctx = ctx; 639 633 u8 mac[ETH_ALEN]; 640 634 bool is_static; 641 635 642 636 if ((BIT(dump_ctx->port) & portmap) == 0) 643 - return; 637 + return 0; 644 638 645 639 alr_reg_to_mac(dat0, dat1, mac); 646 640 is_static = !!(dat1 & LAN9303_ALR_DAT1_STATIC); 647 - dump_ctx->cb(mac, 0, is_static, dump_ctx->data); 641 + return dump_ctx->cb(mac, 0, is_static, dump_ctx->data); 648 642 } 649 643 650 644 /* Set a static ALR entry. Delete entry if port_map is zero */ ··· 1216 1210 }; 1217 1211 1218 1212 dev_dbg(chip->dev, "%s(%d)\n", __func__, port); 1219 - lan9303_alr_loop(chip, alr_loop_cb_fdb_port_dump, &dump_ctx); 1220 - 1221 - return 0; 1213 + return lan9303_alr_loop(chip, alr_loop_cb_fdb_port_dump, &dump_ctx); 1222 1214 } 1223 1215 1224 1216 static int lan9303_port_mdb_prepare(struct dsa_switch *ds, int port,
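Several drivers in this series (hellcreek, lan9303, lantiq_gswip, sja1105) make the same change: the FDB-dump walker's callback now returns int, and the loop stops and propagates the first error instead of discarding dsa_fdb_dump_cb_t's return value. The generic shape of such an error-propagating walker (names are illustrative):

```c
typedef int (*entry_cb_t)(int entry, void *ctx);

static int walk_entries(const int *entries, int n, entry_cb_t cb, void *ctx)
{
	int ret = 0;

	for (int i = 0; i < n; i++) {
		ret = cb(entries[i], ctx);
		if (ret)
			break;	/* stop and propagate the first error */
	}
	return ret;
}

/* Sample callback: count visited entries, fail on a sentinel value */
struct count_ctx {
	int visited;
	int fail_on;
};

static int count_cb(int entry, void *ctx)
{
	struct count_ctx *c = ctx;

	c->visited++;
	return entry == c->fail_on ? -1 : 0;
}
```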
+10 -4
drivers/net/dsa/lantiq_gswip.c
··· 1404 1404 addr[1] = mac_bridge.key[2] & 0xff; 1405 1405 addr[0] = (mac_bridge.key[2] >> 8) & 0xff; 1406 1406 if (mac_bridge.val[1] & GSWIP_TABLE_MAC_BRIDGE_STATIC) { 1407 - if (mac_bridge.val[0] & BIT(port)) 1408 - cb(addr, 0, true, data); 1407 + if (mac_bridge.val[0] & BIT(port)) { 1408 + err = cb(addr, 0, true, data); 1409 + if (err) 1410 + return err; 1411 + } 1409 1412 } else { 1410 - if (((mac_bridge.val[0] & GENMASK(7, 4)) >> 4) == port) 1411 - cb(addr, 0, false, data); 1413 + if (((mac_bridge.val[0] & GENMASK(7, 4)) >> 4) == port) { 1414 + err = cb(addr, 0, false, data); 1415 + if (err) 1416 + return err; 1417 + } 1412 1418 } 1413 1419 } 1414 1420 return 0;
+67 -15
drivers/net/dsa/microchip/ksz8795.c
··· 687 687 shifts = ksz8->shifts; 688 688 689 689 ksz8_r_table(dev, TABLE_VLAN, addr, &data); 690 - addr *= dev->phy_port_cnt; 691 - for (i = 0; i < dev->phy_port_cnt; i++) { 690 + addr *= 4; 691 + for (i = 0; i < 4; i++) { 692 692 dev->vlan_cache[addr + i].table[0] = (u16)data; 693 693 data >>= shifts[VLAN_TABLE]; 694 694 } ··· 702 702 u64 buf; 703 703 704 704 data = (u16 *)&buf; 705 - addr = vid / dev->phy_port_cnt; 705 + addr = vid / 4; 706 706 index = vid & 3; 707 707 ksz8_r_table(dev, TABLE_VLAN, addr, &buf); 708 708 *vlan = data[index]; ··· 716 716 u64 buf; 717 717 718 718 data = (u16 *)&buf; 719 - addr = vid / dev->phy_port_cnt; 719 + addr = vid / 4; 720 720 index = vid & 3; 721 721 ksz8_r_table(dev, TABLE_VLAN, addr, &buf); 722 722 data[index] = vlan; ··· 1119 1119 if (ksz_is_ksz88x3(dev)) 1120 1120 return -ENOTSUPP; 1121 1121 1122 + /* Discard packets with VID not enabled on the switch */ 1122 1123 ksz_cfg(dev, S_MIRROR_CTRL, SW_VLAN_ENABLE, flag); 1123 1124 1125 + /* Discard packets with VID not enabled on the ingress port */ 1126 + for (port = 0; port < dev->phy_port_cnt; ++port) 1127 + ksz_port_cfg(dev, port, REG_PORT_CTRL_2, PORT_INGRESS_FILTER, 1128 + flag); 1129 + 1124 1130 return 0; 1131 + } 1132 + 1133 + static void ksz8_port_enable_pvid(struct ksz_device *dev, int port, bool state) 1134 + { 1135 + if (ksz_is_ksz88x3(dev)) { 1136 + ksz_cfg(dev, REG_SW_INSERT_SRC_PVID, 1137 + 0x03 << (4 - 2 * port), state); 1138 + } else { 1139 + ksz_pwrite8(dev, port, REG_PORT_CTRL_12, state ? 
0x0f : 0x00); 1140 + } 1125 1141 } 1126 1142 1127 1143 static int ksz8_port_vlan_add(struct dsa_switch *ds, int port, ··· 1146 1130 { 1147 1131 bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED; 1148 1132 struct ksz_device *dev = ds->priv; 1133 + struct ksz_port *p = &dev->ports[port]; 1149 1134 u16 data, new_pvid = 0; 1150 1135 u8 fid, member, valid; 1151 1136 1152 1137 if (ksz_is_ksz88x3(dev)) 1153 1138 return -ENOTSUPP; 1154 1139 1155 - ksz_port_cfg(dev, port, P_TAG_CTRL, PORT_REMOVE_TAG, untagged); 1140 + /* If a VLAN is added with untagged flag different from the 1141 + * port's Remove Tag flag, we need to change the latter. 1142 + * Ignore VID 0, which is always untagged. 1143 + * Ignore CPU port, which will always be tagged. 1144 + */ 1145 + if (untagged != p->remove_tag && vlan->vid != 0 && 1146 + port != dev->cpu_port) { 1147 + unsigned int vid; 1148 + 1149 + /* Reject attempts to add a VLAN that requires the 1150 + * Remove Tag flag to be changed, unless there are no 1151 + * other VLANs currently configured. 
1152 + */ 1153 + for (vid = 1; vid < dev->num_vlans; ++vid) { 1154 + /* Skip the VID we are going to add or reconfigure */ 1155 + if (vid == vlan->vid) 1156 + continue; 1157 + 1158 + ksz8_from_vlan(dev, dev->vlan_cache[vid].table[0], 1159 + &fid, &member, &valid); 1160 + if (valid && (member & BIT(port))) 1161 + return -EINVAL; 1162 + } 1163 + 1164 + ksz_port_cfg(dev, port, P_TAG_CTRL, PORT_REMOVE_TAG, untagged); 1165 + p->remove_tag = untagged; 1166 + } 1156 1167 1157 1168 ksz8_r_vlan_table(dev, vlan->vid, &data); 1158 1169 ksz8_from_vlan(dev, data, &fid, &member, &valid); ··· 1203 1160 u16 vid; 1204 1161 1205 1162 ksz_pread16(dev, port, REG_PORT_CTRL_VID, &vid); 1206 - vid &= 0xfff; 1163 + vid &= ~VLAN_VID_MASK; 1207 1164 vid |= new_pvid; 1208 1165 ksz_pwrite16(dev, port, REG_PORT_CTRL_VID, vid); 1166 + 1167 + ksz8_port_enable_pvid(dev, port, true); 1209 1168 } 1210 1169 1211 1170 return 0; ··· 1216 1171 static int ksz8_port_vlan_del(struct dsa_switch *ds, int port, 1217 1172 const struct switchdev_obj_port_vlan *vlan) 1218 1173 { 1219 - bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED; 1220 1174 struct ksz_device *dev = ds->priv; 1221 - u16 data, pvid, new_pvid = 0; 1175 + u16 data, pvid; 1222 1176 u8 fid, member, valid; 1223 1177 1224 1178 if (ksz_is_ksz88x3(dev)) ··· 1225 1181 1226 1182 ksz_pread16(dev, port, REG_PORT_CTRL_VID, &pvid); 1227 1183 pvid = pvid & 0xFFF; 1228 - 1229 - ksz_port_cfg(dev, port, P_TAG_CTRL, PORT_REMOVE_TAG, untagged); 1230 1184 1231 1185 ksz8_r_vlan_table(dev, vlan->vid, &data); 1232 1186 ksz8_from_vlan(dev, data, &fid, &member, &valid); ··· 1237 1195 valid = 0; 1238 1196 } 1239 1197 1240 - if (pvid == vlan->vid) 1241 - new_pvid = 1; 1242 - 1243 1198 ksz8_to_vlan(dev, fid, member, valid, &data); 1244 1199 ksz8_w_vlan_table(dev, vlan->vid, data); 1245 1200 1246 - if (new_pvid != pvid) 1247 - ksz_pwrite16(dev, port, REG_PORT_CTRL_VID, pvid); 1201 + if (pvid == vlan->vid) 1202 + ksz8_port_enable_pvid(dev, port, false); 1248 1203 
1249 1204 return 0; 1250 1205 } ··· 1473 1434 ksz_cfg(dev, S_REPLACE_VID_CTRL, SW_REPLACE_VID, false); 1474 1435 1475 1436 ksz_cfg(dev, S_MIRROR_CTRL, SW_MIRROR_RX_TX, false); 1437 + 1438 + if (!ksz_is_ksz88x3(dev)) 1439 + ksz_cfg(dev, REG_SW_CTRL_19, SW_INS_TAG_ENABLE, true); 1476 1440 1477 1441 /* set broadcast storm protection 10% rate */ 1478 1442 regmap_update_bits(dev->regmap[1], S_REPLACE_VID_CTRL, ··· 1758 1716 1759 1717 /* set the real number of ports */ 1760 1718 dev->ds->num_ports = dev->port_cnt; 1719 + 1720 + /* We rely on software untagging on the CPU port, so that we 1721 + * can support both tagged and untagged VLANs 1722 + */ 1723 + dev->ds->untag_bridge_pvid = true; 1724 + 1725 + /* VLAN filtering is partly controlled by the global VLAN 1726 + * Enable flag 1727 + */ 1728 + dev->ds->vlan_filtering_is_global = true; 1761 1729 1762 1730 return 0; 1763 1731 }
+4
drivers/net/dsa/microchip/ksz8795_reg.h
··· 631 631 #define REG_PORT_4_OUT_RATE_3 0xEE 632 632 #define REG_PORT_5_OUT_RATE_3 0xFE 633 633 634 + /* 88x3 specific */ 635 + 636 + #define REG_SW_INSERT_SRC_PVID 0xC2 637 + 634 638 /* PME */ 635 639 636 640 #define SW_PME_OUTPUT_ENABLE BIT(1)
+3 -6
drivers/net/dsa/microchip/ksz_common.h
··· 27 27 struct ksz_port { 28 28 u16 member; 29 29 u16 vid_member; 30 + bool remove_tag; /* Remove Tag flag set, for ksz8795 only */ 30 31 int stp_state; 31 32 struct phy_device phydev; 32 33 ··· 206 205 int ret; 207 206 208 207 ret = regmap_bulk_read(dev->regmap[2], reg, value, 2); 209 - if (!ret) { 210 - /* Ick! ToDo: Add 64bit R/W to regmap on 32bit systems */ 211 - value[0] = swab32(value[0]); 212 - value[1] = swab32(value[1]); 213 - *val = swab64((u64)*value); 214 - } 208 + if (!ret) 209 + *val = (u64)value[0] << 32 | value[1]; 215 210 216 211 return ret; 217 212 }
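The ksz_read64() cleanup above drops the double-swab gymnastics flagged by the old "Ick!" comment and simply treats the first 32-bit register as the high word. The arithmetic in isolation:

```c
#include <stdint.h>

/* Combine two 32-bit register reads into one 64-bit value,
 * high word first. */
static uint64_t regs_to_u64(uint32_t hi, uint32_t lo)
{
	return (uint64_t)hi << 32 | lo;
}
```

The cast before the shift matters: shifting a plain uint32_t left by 32 is undefined behavior in C.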
+1
drivers/net/dsa/mt7530.c
··· 47 47 MIB_DESC(2, 0x48, "TxBytes"), 48 48 MIB_DESC(1, 0x60, "RxDrop"), 49 49 MIB_DESC(1, 0x64, "RxFiltering"), 50 + MIB_DESC(1, 0x68, "RxUnicast"), 50 51 MIB_DESC(1, 0x6c, "RxMulticast"), 51 52 MIB_DESC(1, 0x70, "RxBroadcast"), 52 53 MIB_DESC(1, 0x74, "RxAlignErr"),
+72 -1
drivers/net/dsa/qca/ar9331.c
··· 101 101 AR9331_SW_PORT_STATUS_RX_FLOW_EN | AR9331_SW_PORT_STATUS_TX_FLOW_EN | \ 102 102 AR9331_SW_PORT_STATUS_SPEED_M) 103 103 104 + #define AR9331_SW_REG_PORT_CTRL(_port) (0x104 + (_port) * 0x100) 105 + #define AR9331_SW_PORT_CTRL_HEAD_EN BIT(11) 106 + #define AR9331_SW_PORT_CTRL_PORT_STATE GENMASK(2, 0) 107 + #define AR9331_SW_PORT_CTRL_PORT_STATE_DISABLED 0 108 + #define AR9331_SW_PORT_CTRL_PORT_STATE_BLOCKING 1 109 + #define AR9331_SW_PORT_CTRL_PORT_STATE_LISTENING 2 110 + #define AR9331_SW_PORT_CTRL_PORT_STATE_LEARNING 3 111 + #define AR9331_SW_PORT_CTRL_PORT_STATE_FORWARD 4 112 + 113 + #define AR9331_SW_REG_PORT_VLAN(_port) (0x108 + (_port) * 0x100) 114 + #define AR9331_SW_PORT_VLAN_8021Q_MODE GENMASK(31, 30) 115 + #define AR9331_SW_8021Q_MODE_SECURE 3 116 + #define AR9331_SW_8021Q_MODE_CHECK 2 117 + #define AR9331_SW_8021Q_MODE_FALLBACK 1 118 + #define AR9331_SW_8021Q_MODE_NONE 0 119 + #define AR9331_SW_PORT_VLAN_PORT_VID_MEMBER GENMASK(25, 16) 120 + 104 121 /* MIB registers */ 105 122 #define AR9331_MIB_COUNTER(x) (0x20000 + ((x) * 0x100)) 106 123 ··· 388 371 return 0; 389 372 } 390 373 374 + static int ar9331_sw_setup_port(struct dsa_switch *ds, int port) 375 + { 376 + struct ar9331_sw_priv *priv = (struct ar9331_sw_priv *)ds->priv; 377 + struct regmap *regmap = priv->regmap; 378 + u32 port_mask, port_ctrl, val; 379 + int ret; 380 + 381 + /* Generate default port settings */ 382 + port_ctrl = FIELD_PREP(AR9331_SW_PORT_CTRL_PORT_STATE, 383 + AR9331_SW_PORT_CTRL_PORT_STATE_FORWARD); 384 + 385 + if (dsa_is_cpu_port(ds, port)) { 386 + /* CPU port should be allowed to communicate with all user 387 + * ports. 388 + */ 389 + port_mask = dsa_user_ports(ds); 390 + /* Enable Atheros header on CPU port. This will allow us 391 + * communicate with each port separately 392 + */ 393 + port_ctrl |= AR9331_SW_PORT_CTRL_HEAD_EN; 394 + } else if (dsa_is_user_port(ds, port)) { 395 + /* User ports should communicate only with the CPU port. 
396 + */ 397 + port_mask = BIT(dsa_upstream_port(ds, port)); 398 + } else { 399 + /* Other ports do not need to communicate at all */ 400 + port_mask = 0; 401 + } 402 + 403 + val = FIELD_PREP(AR9331_SW_PORT_VLAN_8021Q_MODE, 404 + AR9331_SW_8021Q_MODE_NONE) | 405 + FIELD_PREP(AR9331_SW_PORT_VLAN_PORT_VID_MEMBER, port_mask); 406 + 407 + ret = regmap_write(regmap, AR9331_SW_REG_PORT_VLAN(port), val); 408 + if (ret) 409 + goto error; 410 + 411 + ret = regmap_write(regmap, AR9331_SW_REG_PORT_CTRL(port), port_ctrl); 412 + if (ret) 413 + goto error; 414 + 415 + return 0; 416 + error: 417 + dev_err(priv->dev, "%s: error: %i\n", __func__, ret); 418 + 419 + return ret; 420 + } 421 + 391 422 static int ar9331_sw_setup(struct dsa_switch *ds) 392 423 { 393 424 struct ar9331_sw_priv *priv = (struct ar9331_sw_priv *)ds->priv; 394 425 struct regmap *regmap = priv->regmap; 395 - int ret; 426 + int ret, i; 396 427 397 428 ret = ar9331_sw_reset(priv); 398 429 if (ret) ··· 466 401 AR9331_SW_GLOBAL_CTRL_MFS_M); 467 402 if (ret) 468 403 goto error; 404 + 405 + for (i = 0; i < ds->num_ports; i++) { 406 + ret = ar9331_sw_setup_port(ds, i); 407 + if (ret) 408 + goto error; 409 + } 469 410 470 411 ds->configure_vlan_while_not_filtering = false; 471 412
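The new ar9331_sw_setup_port() above builds its register values with GENMASK()/FIELD_PREP(). Rough userspace approximations of those helpers, assuming constant non-zero masks and the GCC/Clang __builtin_ctz builtin (the kernel versions in linux/bits.h and linux/bitfield.h add compile-time mask checking):

```c
#include <stdint.h>

/* Bits h..l set, e.g. GENMASK32(25, 16) == 0x03ff0000 */
#define GENMASK32(h, l)	((~0u << (l)) & (~0u >> (31 - (h))))

/* Shift a value into the field described by mask, e.g.
 * FIELD_PREP32(GENMASK32(31, 30), 3) == 0xc0000000 */
#define FIELD_PREP32(mask, val) \
	(((uint32_t)(val) << __builtin_ctz(mask)) & (mask))
```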
+4 -1
drivers/net/dsa/sja1105/sja1105_main.c
··· 1789 1789 /* We need to hide the dsa_8021q VLANs from the user. */ 1790 1790 if (!priv->vlan_aware) 1791 1791 l2_lookup.vlanid = 0; 1792 - cb(macaddr, l2_lookup.vlanid, l2_lookup.lockeds, data); 1792 + rc = cb(macaddr, l2_lookup.vlanid, l2_lookup.lockeds, data); 1793 + if (rc) 1794 + return rc; 1793 1795 } 1794 1796 return 0; 1795 1797 } ··· 2651 2649 } 2652 2650 2653 2651 sja1105_devlink_teardown(ds); 2652 + sja1105_mdiobus_unregister(ds); 2654 2653 sja1105_flower_teardown(ds); 2655 2654 sja1105_tas_teardown(ds); 2656 2655 sja1105_ptp_clock_unregister(ds);
+4 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 428 428 429 429 if (ptp && ptp->tx_tstamp_en && !skb_is_gso(skb) && 430 430 atomic_dec_if_positive(&ptp->tx_avail) >= 0) { 431 - if (!bnxt_ptp_parse(skb, &ptp->tx_seqid)) { 431 + if (!bnxt_ptp_parse(skb, &ptp->tx_seqid, 432 + &ptp->tx_hdr_off)) { 433 + if (vlan_tag_flags) 434 + ptp->tx_hdr_off += VLAN_HLEN; 432 435 lflags |= cpu_to_le32(TX_BD_FLAGS_STAMP); 433 436 skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; 434 437 } else {
+55 -21
drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
··· 368 368 #define HWRM_FUNC_PTP_TS_QUERY 0x19fUL 369 369 #define HWRM_FUNC_PTP_EXT_CFG 0x1a0UL 370 370 #define HWRM_FUNC_PTP_EXT_QCFG 0x1a1UL 371 + #define HWRM_FUNC_KEY_CTX_ALLOC 0x1a2UL 371 372 #define HWRM_SELFTEST_QLIST 0x200UL 372 373 #define HWRM_SELFTEST_EXEC 0x201UL 373 374 #define HWRM_SELFTEST_IRQ 0x202UL ··· 532 531 #define HWRM_VERSION_MAJOR 1 533 532 #define HWRM_VERSION_MINOR 10 534 533 #define HWRM_VERSION_UPDATE 2 535 - #define HWRM_VERSION_RSVD 47 536 - #define HWRM_VERSION_STR "1.10.2.47" 534 + #define HWRM_VERSION_RSVD 52 535 + #define HWRM_VERSION_STR "1.10.2.52" 537 536 538 537 /* hwrm_ver_get_input (size:192b/24B) */ 539 538 struct hwrm_ver_get_input { ··· 586 585 #define VER_GET_RESP_DEV_CAPS_CFG_CFA_ADV_FLOW_MGNT_SUPPORTED 0x1000UL 587 586 #define VER_GET_RESP_DEV_CAPS_CFG_CFA_TFLIB_SUPPORTED 0x2000UL 588 587 #define VER_GET_RESP_DEV_CAPS_CFG_CFA_TRUFLOW_SUPPORTED 0x4000UL 588 + #define VER_GET_RESP_DEV_CAPS_CFG_SECURE_BOOT_CAPABLE 0x8000UL 589 589 u8 roce_fw_maj_8b; 590 590 u8 roce_fw_min_8b; 591 591 u8 roce_fw_bld_8b; ··· 888 886 #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_FW_EXCEPTION_FATAL (0x2UL << 8) 889 887 #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_FW_EXCEPTION_NON_FATAL (0x3UL << 8) 890 888 #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_FAST_RESET (0x4UL << 8) 891 - #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_LAST ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_FAST_RESET 889 + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_FW_ACTIVATION (0x5UL << 8) 890 + #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_LAST ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_REASON_CODE_FW_ACTIVATION 892 891 #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_DELAY_IN_100MS_TICKS_MASK 0xffff0000UL 893 892 #define ASYNC_EVENT_CMPL_RESET_NOTIFY_EVENT_DATA1_DELAY_IN_100MS_TICKS_SFT 16 894 893 }; ··· 1239 1236 u8 timestamp_lo; 1240 1237 __le16 timestamp_hi; 
1241 1238 __le32 event_data1; 1242 - #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_MASK 0xffUL 1243 - #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_SFT 0 1244 - #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_RESERVED 0x0UL 1245 - #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_PAUSE_STORM 0x1UL 1246 - #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_INVALID_SIGNAL 0x2UL 1247 - #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_NVM 0x3UL 1248 - #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_LAST ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_NVM 1239 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_MASK 0xffUL 1240 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_SFT 0 1241 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_RESERVED 0x0UL 1242 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_PAUSE_STORM 0x1UL 1243 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_INVALID_SIGNAL 0x2UL 1244 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_NVM 0x3UL 1245 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_DOORBELL_DROP_THRESHOLD 0x4UL 1246 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_LAST ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_DOORBELL_DROP_THRESHOLD 1249 1247 }; 1250 1248 1251 1249 /* hwrm_async_event_cmpl_error_report_pause_storm (size:128b/16B) */ ··· 1450 1446 #define FUNC_VF_CFG_REQ_ENABLES_NUM_VNICS 0x200UL 1451 1447 #define FUNC_VF_CFG_REQ_ENABLES_NUM_STAT_CTXS 0x400UL 1452 1448 #define FUNC_VF_CFG_REQ_ENABLES_NUM_HW_RING_GRPS 0x800UL 1449 + #define FUNC_VF_CFG_REQ_ENABLES_NUM_TX_KEY_CTXS 0x1000UL 1450 + #define FUNC_VF_CFG_REQ_ENABLES_NUM_RX_KEY_CTXS 0x2000UL 1453 1451 __le16 mtu; 1454 1452 __le16 guest_vlan; 1455 1453 __le16 async_event_cr; ··· 1475 1469 
__le16 num_vnics; 1476 1470 __le16 num_stat_ctxs; 1477 1471 __le16 num_hw_ring_grps; 1478 - u8 unused_0[4]; 1472 + __le16 num_tx_key_ctxs; 1473 + __le16 num_rx_key_ctxs; 1479 1474 }; 1480 1475 1481 1476 /* hwrm_func_vf_cfg_output (size:128b/16B) */ ··· 1500 1493 u8 unused_0[6]; 1501 1494 }; 1502 1495 1503 - /* hwrm_func_qcaps_output (size:704b/88B) */ 1496 + /* hwrm_func_qcaps_output (size:768b/96B) */ 1504 1497 struct hwrm_func_qcaps_output { 1505 1498 __le16 error_code; 1506 1499 __le16 req_type; ··· 1594 1587 #define FUNC_QCAPS_RESP_MPC_CHNLS_CAP_TE_CFA 0x4UL 1595 1588 #define FUNC_QCAPS_RESP_MPC_CHNLS_CAP_RE_CFA 0x8UL 1596 1589 #define FUNC_QCAPS_RESP_MPC_CHNLS_CAP_PRIMATE 0x10UL 1597 - u8 unused_1; 1590 + __le16 max_key_ctxs_alloc; 1591 + u8 unused_1[7]; 1598 1592 u8 valid; 1599 1593 }; 1600 1594 ··· 1610 1602 u8 unused_0[6]; 1611 1603 }; 1612 1604 1613 - /* hwrm_func_qcfg_output (size:832b/104B) */ 1605 + /* hwrm_func_qcfg_output (size:896b/112B) */ 1614 1606 struct hwrm_func_qcfg_output { 1615 1607 __le16 error_code; 1616 1608 __le16 req_type; ··· 1757 1749 #define FUNC_QCFG_RESP_PARTITION_MAX_BW_BW_VALUE_UNIT_PERCENT1_100 (0x1UL << 29) 1758 1750 #define FUNC_QCFG_RESP_PARTITION_MAX_BW_BW_VALUE_UNIT_LAST FUNC_QCFG_RESP_PARTITION_MAX_BW_BW_VALUE_UNIT_PERCENT1_100 1759 1751 __le16 host_mtu; 1760 - u8 unused_3; 1752 + __le16 alloc_tx_key_ctxs; 1753 + __le16 alloc_rx_key_ctxs; 1754 + u8 unused_3[5]; 1761 1755 u8 valid; 1762 1756 }; 1763 1757 1764 - /* hwrm_func_cfg_input (size:832b/104B) */ 1758 + /* hwrm_func_cfg_input (size:896b/112B) */ 1765 1759 struct hwrm_func_cfg_input { 1766 1760 __le16 req_type; 1767 1761 __le16 cmpl_ring; ··· 1830 1820 #define FUNC_CFG_REQ_ENABLES_PARTITION_MAX_BW 0x8000000UL 1831 1821 #define FUNC_CFG_REQ_ENABLES_TPID 0x10000000UL 1832 1822 #define FUNC_CFG_REQ_ENABLES_HOST_MTU 0x20000000UL 1823 + #define FUNC_CFG_REQ_ENABLES_TX_KEY_CTXS 0x40000000UL 1824 + #define FUNC_CFG_REQ_ENABLES_RX_KEY_CTXS 0x80000000UL 1833 1825 __le16 
admin_mtu; 1834 1826 __le16 mru; 1835 1827 __le16 num_rsscos_ctxs; ··· 1941 1929 #define FUNC_CFG_REQ_PARTITION_MAX_BW_BW_VALUE_UNIT_LAST FUNC_CFG_REQ_PARTITION_MAX_BW_BW_VALUE_UNIT_PERCENT1_100 1942 1930 __be16 tpid; 1943 1931 __le16 host_mtu; 1932 + __le16 num_tx_key_ctxs; 1933 + __le16 num_rx_key_ctxs; 1934 + u8 unused_0[4]; 1944 1935 }; 1945 1936 1946 1937 /* hwrm_func_cfg_output (size:128b/16B) */ ··· 2114 2099 #define FUNC_DRV_RGTR_REQ_FLAGS_MASTER_SUPPORT 0x40UL 2115 2100 #define FUNC_DRV_RGTR_REQ_FLAGS_FAST_RESET_SUPPORT 0x80UL 2116 2101 #define FUNC_DRV_RGTR_REQ_FLAGS_RSS_STRICT_HASH_TYPE_SUPPORT 0x100UL 2102 + #define FUNC_DRV_RGTR_REQ_FLAGS_NPAR_1_2_SUPPORT 0x200UL 2117 2103 __le32 enables; 2118 2104 #define FUNC_DRV_RGTR_REQ_ENABLES_OS_TYPE 0x1UL 2119 2105 #define FUNC_DRV_RGTR_REQ_ENABLES_VER 0x2UL ··· 2284 2268 u8 unused_0[6]; 2285 2269 }; 2286 2270 2287 - /* hwrm_func_resource_qcaps_output (size:448b/56B) */ 2271 + /* hwrm_func_resource_qcaps_output (size:512b/64B) */ 2288 2272 struct hwrm_func_resource_qcaps_output { 2289 2273 __le16 error_code; 2290 2274 __le16 req_type; ··· 2316 2300 __le16 max_tx_scheduler_inputs; 2317 2301 __le16 flags; 2318 2302 #define FUNC_RESOURCE_QCAPS_RESP_FLAGS_MIN_GUARANTEED 0x1UL 2303 + __le16 min_tx_key_ctxs; 2304 + __le16 max_tx_key_ctxs; 2305 + __le16 min_rx_key_ctxs; 2306 + __le16 max_rx_key_ctxs; 2319 2307 u8 unused_0[5]; 2320 2308 u8 valid; 2321 2309 }; 2322 2310 2323 - /* hwrm_func_vf_resource_cfg_input (size:448b/56B) */ 2311 + /* hwrm_func_vf_resource_cfg_input (size:512b/64B) */ 2324 2312 struct hwrm_func_vf_resource_cfg_input { 2325 2313 __le16 req_type; 2326 2314 __le16 cmpl_ring; ··· 2351 2331 __le16 max_hw_ring_grps; 2352 2332 __le16 flags; 2353 2333 #define FUNC_VF_RESOURCE_CFG_REQ_FLAGS_MIN_GUARANTEED 0x1UL 2334 + __le16 min_tx_key_ctxs; 2335 + __le16 max_tx_key_ctxs; 2336 + __le16 min_rx_key_ctxs; 2337 + __le16 max_rx_key_ctxs; 2354 2338 u8 unused_0[2]; 2355 2339 }; 2356 2340 ··· 2372 2348 __le16 
reserved_vnics; 2373 2349 __le16 reserved_stat_ctx; 2374 2350 __le16 reserved_hw_ring_grps; 2375 - u8 unused_0[7]; 2351 + __le16 reserved_tx_key_ctxs; 2352 + __le16 reserved_rx_key_ctxs; 2353 + u8 unused_0[3]; 2376 2354 u8 valid; 2377 2355 }; 2378 2356 ··· 4246 4220 u8 valid; 4247 4221 }; 4248 4222 4249 - /* hwrm_port_ts_query_input (size:256b/32B) */ 4223 + /* hwrm_port_ts_query_input (size:320b/40B) */ 4250 4224 struct hwrm_port_ts_query_input { 4251 4225 __le16 req_type; 4252 4226 __le16 cmpl_ring; ··· 4264 4238 __le16 enables; 4265 4239 #define PORT_TS_QUERY_REQ_ENABLES_TS_REQ_TIMEOUT 0x1UL 4266 4240 #define PORT_TS_QUERY_REQ_ENABLES_PTP_SEQ_ID 0x2UL 4241 + #define PORT_TS_QUERY_REQ_ENABLES_PTP_HDR_OFFSET 0x4UL 4267 4242 __le16 ts_req_timeout; 4268 4243 __le32 ptp_seq_id; 4244 + __le16 ptp_hdr_offset; 4245 + u8 unused_1[6]; 4269 4246 }; 4270 4247 4271 4248 /* hwrm_port_ts_query_output (size:192b/24B) */ ··· 8201 8172 u8 host_idx; 8202 8173 u8 flags; 8203 8174 #define FW_RESET_REQ_FLAGS_RESET_GRACEFUL 0x1UL 8175 + #define FW_RESET_REQ_FLAGS_FW_ACTIVATION 0x2UL 8204 8176 u8 unused_0[4]; 8205 8177 }; 8206 8178 ··· 8982 8952 u8 valid; 8983 8953 }; 8984 8954 8985 - /* hwrm_nvm_write_input (size:384b/48B) */ 8955 + /* hwrm_nvm_write_input (size:448b/56B) */ 8986 8956 struct hwrm_nvm_write_input { 8987 8957 __le16 req_type; 8988 8958 __le16 cmpl_ring; ··· 8998 8968 __le16 option; 8999 8969 __le16 flags; 9000 8970 #define NVM_WRITE_REQ_FLAGS_KEEP_ORIG_ACTIVE_IMG 0x1UL 8971 + #define NVM_WRITE_REQ_FLAGS_BATCH_MODE 0x2UL 8972 + #define NVM_WRITE_REQ_FLAGS_BATCH_LAST 0x4UL 9001 8973 __le32 dir_item_length; 8974 + __le32 offset; 8975 + __le32 len; 9002 8976 __le32 unused_0; 9003 8977 }; 9004 8978
+3 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
··· 20 20 #include "bnxt.h" 21 21 #include "bnxt_ptp.h" 22 22 23 - int bnxt_ptp_parse(struct sk_buff *skb, u16 *seq_id) 23 + int bnxt_ptp_parse(struct sk_buff *skb, u16 *seq_id, u16 *hdr_off) 24 24 { 25 25 unsigned int ptp_class; 26 26 struct ptp_header *hdr; ··· 34 34 if (!hdr) 35 35 return -EINVAL; 36 36 37 + *hdr_off = (u8 *)hdr - skb->data; 37 38 *seq_id = ntohs(hdr->sequence_id); 38 39 return 0; 39 40 default: ··· 95 94 PORT_TS_QUERY_REQ_FLAGS_PATH_TX) { 96 95 req.enables = cpu_to_le16(BNXT_PTP_QTS_TX_ENABLES); 97 96 req.ptp_seq_id = cpu_to_le32(bp->ptp_cfg->tx_seqid); 97 + req.ptp_hdr_offset = cpu_to_le16(bp->ptp_cfg->tx_hdr_off); 98 98 req.ts_req_timeout = cpu_to_le16(BNXT_PTP_QTS_TIMEOUT); 99 99 } 100 100 mutex_lock(&bp->hwrm_cmd_lock);
+6 -4
drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
··· 10 10 #ifndef BNXT_PTP_H 11 11 #define BNXT_PTP_H 12 12 13 - #define BNXT_PTP_GRC_WIN 5 14 - #define BNXT_PTP_GRC_WIN_BASE 0x5000 13 + #define BNXT_PTP_GRC_WIN 6 14 + #define BNXT_PTP_GRC_WIN_BASE 0x6000 15 15 16 16 #define BNXT_MAX_PHC_DRIFT 31000000 17 17 #define BNXT_LO_TIMER_MASK 0x0000ffffffffUL ··· 19 19 20 20 #define BNXT_PTP_QTS_TIMEOUT 1000 21 21 #define BNXT_PTP_QTS_TX_ENABLES (PORT_TS_QUERY_REQ_ENABLES_PTP_SEQ_ID | \ 22 - PORT_TS_QUERY_REQ_ENABLES_TS_REQ_TIMEOUT) 22 + PORT_TS_QUERY_REQ_ENABLES_TS_REQ_TIMEOUT | \ 23 + PORT_TS_QUERY_REQ_ENABLES_PTP_HDR_OFFSET) 23 24 24 25 struct pps_pin { 25 26 u8 event; ··· 89 88 #define BNXT_PHC_OVERFLOW_PERIOD (19 * 3600 * HZ) 90 89 91 90 u16 tx_seqid; 91 + u16 tx_hdr_off; 92 92 struct bnxt *bp; 93 93 atomic_t tx_avail; 94 94 #define BNXT_MAX_TX_TS 1 ··· 127 125 ((dst) = READ_ONCE(src)) 128 126 #endif 129 127 130 - int bnxt_ptp_parse(struct sk_buff *skb, u16 *seq_id); 128 + int bnxt_ptp_parse(struct sk_buff *skb, u16 *seq_id, u16 *hdr_off); 131 129 void bnxt_ptp_pps_event(struct bnxt *bp, u32 data1, u32 data2); 132 130 void bnxt_ptp_reapply_pps(struct bnxt *bp); 133 131 int bnxt_hwtstamp_set(struct net_device *dev, struct ifreq *ifr);
+8 -5
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 1530 1530 set_bit(__IAVF_VSI_DOWN, adapter->vsi.state); 1531 1531 1532 1532 iavf_map_rings_to_vectors(adapter); 1533 - 1534 - if (RSS_AQ(adapter)) 1535 - adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_RSS; 1536 - else 1537 - err = iavf_init_rss(adapter); 1538 1533 err: 1539 1534 return err; 1540 1535 } ··· 2218 2223 2219 2224 if (adapter->flags & IAVF_FLAG_REINIT_ITR_NEEDED) { 2220 2225 err = iavf_reinit_interrupt_scheme(adapter); 2226 + if (err) 2227 + goto reset_err; 2228 + } 2229 + 2230 + if (RSS_AQ(adapter)) { 2231 + adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_RSS; 2232 + } else { 2233 + err = iavf_init_rss(adapter); 2221 2234 if (err) 2222 2235 goto reset_err; 2223 2236 }
+1
drivers/net/ethernet/intel/ice/ice.h
··· 234 234 ICE_VFLR_EVENT_PENDING, 235 235 ICE_FLTR_OVERFLOW_PROMISC, 236 236 ICE_VF_DIS, 237 + ICE_VF_DEINIT_IN_PROGRESS, 237 238 ICE_CFG_BUSY, 238 239 ICE_SERVICE_SCHED, 239 240 ICE_SERVICE_DIS,
+20 -8
drivers/net/ethernet/intel/ice/ice_main.c
··· 191 191 struct ice_netdev_priv *np = netdev_priv(netdev); 192 192 struct ice_vsi *vsi = np->vsi; 193 193 194 + /* Under some circumstances, we might receive a request to delete our 195 + * own device address from our uc list. Because we store the device 196 + * address in the VSI's MAC filter list, we need to ignore such 197 + * requests and not delete our device address from this list. 198 + */ 199 + if (ether_addr_equal(addr, netdev->dev_addr)) 200 + return 0; 201 + 194 202 if (ice_fltr_add_mac_to_list(vsi, &vsi->tmp_unsync_list, addr, 195 203 ICE_FWD_TO_VSI)) 196 204 return -EINVAL; ··· 4202 4194 struct ice_hw *hw; 4203 4195 int i, err; 4204 4196 4197 + if (pdev->is_virtfn) { 4198 + dev_err(dev, "can't probe a virtual function\n"); 4199 + return -EINVAL; 4200 + } 4201 + 4205 4202 /* this driver uses devres, see 4206 4203 * Documentation/driver-api/driver-model/devres.rst 4207 4204 */ ··· 5132 5119 return -EADDRNOTAVAIL; 5133 5120 5134 5121 if (ether_addr_equal(netdev->dev_addr, mac)) { 5135 - netdev_warn(netdev, "already using mac %pM\n", mac); 5122 + netdev_dbg(netdev, "already using mac %pM\n", mac); 5136 5123 return 0; 5137 5124 } 5138 5125 ··· 5143 5130 return -EBUSY; 5144 5131 } 5145 5132 5133 + netif_addr_lock_bh(netdev); 5146 5134 /* Clean up old MAC filter. Not an error if old filter doesn't exist */ 5147 5135 status = ice_fltr_remove_mac(vsi, netdev->dev_addr, ICE_FWD_TO_VSI); 5148 5136 if (status && status != ICE_ERR_DOES_NOT_EXIST) { ··· 5153 5139 5154 5140 /* Add filter for new MAC. If filter exists, return success */ 5155 5141 status = ice_fltr_add_mac(vsi, mac, ICE_FWD_TO_VSI); 5156 - if (status == ICE_ERR_ALREADY_EXISTS) { 5142 + if (status == ICE_ERR_ALREADY_EXISTS) 5157 5143 /* Although this MAC filter is already present in hardware it's 5158 5144 * possible in some cases (e.g. bonding) that dev_addr was 5159 5145 * modified outside of the driver and needs to be restored back 5160 5146 * to this value. 
5161 5147 */ 5162 - memcpy(netdev->dev_addr, mac, netdev->addr_len); 5163 5148 netdev_dbg(netdev, "filter for MAC %pM already exists\n", mac); 5164 - return 0; 5165 - } 5166 - 5167 - /* error if the new filter addition failed */ 5168 - if (status) 5149 + else if (status) 5150 + /* error if the new filter addition failed */ 5169 5151 err = -EADDRNOTAVAIL; 5170 5152 5171 5153 err_update_filters: 5172 5154 if (err) { 5173 5155 netdev_err(netdev, "can't set MAC %pM. filter update failed\n", 5174 5156 mac); 5157 + netif_addr_unlock_bh(netdev); 5175 5158 return err; 5176 5159 } 5177 5160 5178 5161 /* change the netdev's MAC address */ 5179 5162 memcpy(netdev->dev_addr, mac, netdev->addr_len); 5163 + netif_addr_unlock_bh(netdev); 5180 5164 netdev_dbg(vsi->netdev, "updated MAC address to %pM\n", 5181 5165 netdev->dev_addr); 5182 5166
+7
drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
··· 615 615 struct ice_hw *hw = &pf->hw; 616 616 unsigned int tmp, i; 617 617 618 + set_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state); 619 + 618 620 if (!pf->vf) 619 621 return; 620 622 ··· 682 680 i); 683 681 684 682 clear_bit(ICE_VF_DIS, pf->state); 683 + clear_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state); 685 684 clear_bit(ICE_FLAG_SRIOV_ENA, pf->flags); 686 685 } 687 686 ··· 4417 4414 struct ice_vf *vf = NULL; 4418 4415 struct device *dev; 4419 4416 int err = 0; 4417 + 4418 + /* if de-init is underway, don't process messages from VF */ 4419 + if (test_bit(ICE_VF_DEINIT_IN_PROGRESS, pf->state)) 4420 + return; 4420 4421 4421 4422 dev = ice_pf_to_dev(pf); 4422 4423 if (ice_validate_vf_id(pf, vf_id)) {
+1 -1
drivers/net/ethernet/marvell/mvpp2/mvpp2.h
··· 938 938 #define MVPP2_BM_COOKIE_POOL_OFFS 8 939 939 #define MVPP2_BM_COOKIE_CPU_OFFS 24 940 940 941 - #define MVPP2_BM_SHORT_FRAME_SIZE 704 /* frame size 128 */ 941 + #define MVPP2_BM_SHORT_FRAME_SIZE 736 /* frame size 128 */ 942 942 #define MVPP2_BM_LONG_FRAME_SIZE 2240 /* frame size 1664 */ 943 943 #define MVPP2_BM_JUMBO_FRAME_SIZE 10432 /* frame size 9856 */ 944 944 /* BM short pool packet size
+2 -2
drivers/net/ethernet/marvell/prestera/prestera_switchdev.c
··· 758 758 prestera_fdb_offload_notify(struct prestera_port *port, 759 759 struct switchdev_notifier_fdb_info *info) 760 760 { 761 - struct switchdev_notifier_fdb_info send_info; 761 + struct switchdev_notifier_fdb_info send_info = {}; 762 762 763 763 send_info.addr = info->addr; 764 764 send_info.vid = info->vid; ··· 1133 1133 static void prestera_fdb_event(struct prestera_switch *sw, 1134 1134 struct prestera_event *evt, void *arg) 1135 1135 { 1136 - struct switchdev_notifier_fdb_info info; 1136 + struct switchdev_notifier_fdb_info info = {}; 1137 1137 struct net_device *dev = NULL; 1138 1138 struct prestera_port *port; 1139 1139 struct prestera_lag *lag;
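The `= {}` initializers added in the prestera hunk above (and in the mlxsw, sparx5, rocker, and cpsw hunks below) all fix the same class of bug: a stack struct handed to a notifier chain with some members never written, so consumers read indeterminate values. A minimal toy sketch of why the zero-initializer matters — the struct and field names here are stand-ins, not the real `switchdev_notifier_fdb_info`:

```c
#include <assert.h>

/* Simplified stand-in for a notifier info struct whose callers
 * historically set only addr and vid, leaving the rest untouched. */
struct toy_fdb_info {
	const char *addr;
	unsigned short vid;
	unsigned char added_by_user;	/* read by consumers, rarely set */
	unsigned char offloaded;	/* read by consumers, rarely set */
};

/* Mirrors the fixed pattern: zero everything first (like `= {}` in the
 * diff, written portably as `= { 0 }` here), then fill known fields. */
struct toy_fdb_info make_info(const char *addr, unsigned short vid)
{
	struct toy_fdb_info info = { 0 };

	info.addr = addr;
	info.vid = vid;
	return info;
}
```

With the zero-initializer, every member a caller does not set is guaranteed zero rather than stack garbage, which is exactly what the notifier consumers assume.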
+1
drivers/net/ethernet/mellanox/mlx5/core/cq.c
··· 135 135 cq->cqn); 136 136 137 137 cq->uar = dev->priv.uar; 138 + cq->irqn = eq->core.irqn; 138 139 139 140 return 0; 140 141
+9 -2
drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
··· 1019 1019 MLX5_NB_INIT(&tracer->nb, fw_tracer_event, DEVICE_TRACER); 1020 1020 mlx5_eq_notifier_register(dev, &tracer->nb); 1021 1021 1022 - mlx5_fw_tracer_start(tracer); 1023 - 1022 + err = mlx5_fw_tracer_start(tracer); 1023 + if (err) { 1024 + mlx5_core_warn(dev, "FWTracer: Failed to start tracer %d\n", err); 1025 + goto err_notifier_unregister; 1026 + } 1024 1027 return 0; 1025 1028 1029 + err_notifier_unregister: 1030 + mlx5_eq_notifier_unregister(dev, &tracer->nb); 1031 + mlx5_core_destroy_mkey(dev, &tracer->buff.mkey); 1026 1032 err_dealloc_pd: 1027 1033 mlx5_core_dealloc_pd(dev, tracer->buff.pdn); 1034 + cancel_work_sync(&tracer->read_fw_strings_work); 1028 1035 return err; 1029 1036 } 1030 1037
+5
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
··· 124 124 if (IS_ERR(rt)) 125 125 return PTR_ERR(rt); 126 126 127 + if (rt->rt_type != RTN_UNICAST) { 128 + ret = -ENETUNREACH; 129 + goto err_rt_release; 130 + } 131 + 127 132 if (mlx5_lag_is_multipath(mdev) && rt->rt_gw_family != AF_INET) { 128 133 ret = -ENETUNREACH; 129 134 goto err_rt_release;
+12 -21
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 1535 1535 { 1536 1536 struct mlx5_core_dev *mdev = priv->mdev; 1537 1537 struct mlx5_core_cq *mcq = &cq->mcq; 1538 - int eqn_not_used; 1539 - unsigned int irqn; 1540 1538 int err; 1541 1539 u32 i; 1542 - 1543 - err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn_not_used, &irqn); 1544 - if (err) 1545 - return err; 1546 1540 1547 1541 err = mlx5_cqwq_create(mdev, &param->wq, param->cqc, &cq->wq, 1548 1542 &cq->wq_ctrl); ··· 1551 1557 mcq->vector = param->eq_ix; 1552 1558 mcq->comp = mlx5e_completion_event; 1553 1559 mcq->event = mlx5e_cq_error_event; 1554 - mcq->irqn = irqn; 1555 1560 1556 1561 for (i = 0; i < mlx5_cqwq_get_size(&cq->wq); i++) { 1557 1562 struct mlx5_cqe64 *cqe = mlx5_cqwq_get_wqe(&cq->wq, i); ··· 1598 1605 void *in; 1599 1606 void *cqc; 1600 1607 int inlen; 1601 - unsigned int irqn_not_used; 1602 1608 int eqn; 1603 1609 int err; 1604 1610 1605 - err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn, &irqn_not_used); 1611 + err = mlx5_vector2eqn(mdev, param->eq_ix, &eqn); 1606 1612 if (err) 1607 1613 return err; 1608 1614 ··· 1883 1891 if (err) 1884 1892 goto err_close_icosq; 1885 1893 1894 + err = mlx5e_open_rxq_rq(c, params, &cparam->rq); 1895 + if (err) 1896 + goto err_close_sqs; 1897 + 1886 1898 if (c->xdp) { 1887 1899 err = mlx5e_open_xdpsq(c, params, &cparam->xdp_sq, NULL, 1888 1900 &c->rq_xdpsq, false); 1889 1901 if (err) 1890 - goto err_close_sqs; 1902 + goto err_close_rq; 1891 1903 } 1892 - 1893 - err = mlx5e_open_rxq_rq(c, params, &cparam->rq); 1894 - if (err) 1895 - goto err_close_xdp_sq; 1896 1904 1897 1905 err = mlx5e_open_xdpsq(c, params, &cparam->xdp_sq, NULL, &c->xdpsq, true); 1898 1906 if (err) 1899 - goto err_close_rq; 1907 + goto err_close_xdp_sq; 1900 1908 1901 1909 return 0; 1902 - 1903 - err_close_rq: 1904 - mlx5e_close_rq(&c->rq); 1905 1910 1906 1911 err_close_xdp_sq: 1907 1912 if (c->xdp) 1908 1913 mlx5e_close_xdpsq(&c->rq_xdpsq); 1914 + 1915 + err_close_rq: 1916 + mlx5e_close_rq(&c->rq); 1909 1917 1910 1918 err_close_sqs: 1911 1919 
mlx5e_close_sqs(c); ··· 1941 1949 static void mlx5e_close_queues(struct mlx5e_channel *c) 1942 1950 { 1943 1951 mlx5e_close_xdpsq(&c->xdpsq); 1944 - mlx5e_close_rq(&c->rq); 1945 1952 if (c->xdp) 1946 1953 mlx5e_close_xdpsq(&c->rq_xdpsq); 1954 + mlx5e_close_rq(&c->rq); 1947 1955 mlx5e_close_sqs(c); 1948 1956 mlx5e_close_icosq(&c->icosq); 1949 1957 mlx5e_close_icosq(&c->async_icosq); ··· 1975 1983 struct mlx5e_channel *c; 1976 1984 unsigned int irq; 1977 1985 int err; 1978 - int eqn; 1979 1986 1980 - err = mlx5_vector2eqn(priv->mdev, ix, &eqn, &irq); 1987 + err = mlx5_vector2irqn(priv->mdev, ix, &irq); 1981 1988 if (err) 1982 1989 return err; 1983 1990
+16 -4
drivers/net/ethernet/mellanox/mlx5/core/eq.c
··· 855 855 return err; 856 856 } 857 857 858 - int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn, 859 - unsigned int *irqn) 858 + static int vector2eqnirqn(struct mlx5_core_dev *dev, int vector, int *eqn, 859 + unsigned int *irqn) 860 860 { 861 861 struct mlx5_eq_table *table = dev->priv.eq_table; 862 862 struct mlx5_eq_comp *eq, *n; ··· 865 865 866 866 list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) { 867 867 if (i++ == vector) { 868 - *eqn = eq->core.eqn; 869 - *irqn = eq->core.irqn; 868 + if (irqn) 869 + *irqn = eq->core.irqn; 870 + if (eqn) 871 + *eqn = eq->core.eqn; 870 872 err = 0; 871 873 break; 872 874 } ··· 876 874 877 875 return err; 878 876 } 877 + 878 + int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn) 879 + { 880 + return vector2eqnirqn(dev, vector, eqn, NULL); 881 + } 879 882 EXPORT_SYMBOL(mlx5_vector2eqn); 883 + 884 + int mlx5_vector2irqn(struct mlx5_core_dev *dev, int vector, unsigned int *irqn) 885 + { 886 + return vector2eqnirqn(dev, vector, NULL, irqn); 887 + } 880 888 881 889 unsigned int mlx5_comp_vectors_count(struct mlx5_core_dev *dev) 882 890 {
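The eq.c hunk above splits one combined lookup into two single-output wrappers around a shared static helper that tolerates NULL out-parameters. A toy sketch of that refactoring pattern, using a plain array in place of the driver's EQ list — the `toy_` names are hypothetical, only the shape mirrors the diff:

```c
#include <assert.h>
#include <stddef.h>

struct toy_eq {
	int eqn;
	unsigned int irqn;
};

static const struct toy_eq eq_table[] = {
	{ .eqn = 10, .irqn = 100 },
	{ .eqn = 11, .irqn = 101 },
};

/* Shared helper: fills whichever out-parameters are non-NULL. */
static int toy_vector2eqnirqn(int vector, int *eqn, unsigned int *irqn)
{
	if (vector < 0 || vector >= (int)(sizeof(eq_table) / sizeof(eq_table[0])))
		return -1;	/* stand-in for -ENOENT */
	if (eqn)
		*eqn = eq_table[vector].eqn;
	if (irqn)
		*irqn = eq_table[vector].irqn;
	return 0;
}

/* Thin wrappers, one per consumer: exported EQN lookup vs. the
 * driver-internal IRQN lookup, matching the split in the hunk. */
int toy_vector2eqn(int vector, int *eqn)
{
	return toy_vector2eqnirqn(vector, eqn, NULL);
}

int toy_vector2irqn(int vector, unsigned int *irqn)
{
	return toy_vector2eqnirqn(vector, NULL, irqn);
}
```

The payoff is in the callers changed elsewhere in this merge: en_main.c, fpga/conn.c, and dr_send.c no longer declare throwaway `irqn_not_used`/`eqn_not_used` variables just to satisfy the old two-output signature.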
+3 -3
drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
··· 69 69 mlx5_esw_bridge_fdb_offload_notify(struct net_device *dev, const unsigned char *addr, u16 vid, 70 70 unsigned long val) 71 71 { 72 - struct switchdev_notifier_fdb_info send_info; 72 + struct switchdev_notifier_fdb_info send_info = {}; 73 73 74 74 send_info.addr = addr; 75 75 send_info.vid = vid; ··· 579 579 xa_init(&bridge->vports); 580 580 bridge->ifindex = ifindex; 581 581 bridge->refcnt = 1; 582 - bridge->ageing_time = BR_DEFAULT_AGEING_TIME; 582 + bridge->ageing_time = clock_t_to_jiffies(BR_DEFAULT_AGEING_TIME); 583 583 list_add(&bridge->list, &br_offloads->bridges); 584 584 585 585 return bridge; ··· 1006 1006 if (!vport->bridge) 1007 1007 return -EINVAL; 1008 1008 1009 - vport->bridge->ageing_time = ageing_time; 1009 + vport->bridge->ageing_time = clock_t_to_jiffies(ageing_time); 1010 1010 return 0; 1011 1011 } 1012 1012
+1
drivers/net/ethernet/mellanox/mlx5/core/esw/sample.c
··· 501 501 err_offload_rule: 502 502 mlx5_esw_vporttbl_put(esw, &per_vport_tbl_attr); 503 503 err_default_tbl: 504 + kfree(sample_flow); 504 505 return ERR_PTR(err); 505 506 } 506 507
+11 -3
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 48 48 #include "lib/fs_chains.h" 49 49 #include "en_tc.h" 50 50 #include "en/mapping.h" 51 + #include "devlink.h" 51 52 52 53 #define mlx5_esw_for_each_rep(esw, i, rep) \ 53 54 xa_for_each(&((esw)->offloads.vport_reps), i, rep) ··· 3361 3360 if (cur_mlx5_mode == mlx5_mode) 3362 3361 goto unlock; 3363 3362 3364 - if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV) 3363 + if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV) { 3364 + if (mlx5_devlink_trap_get_num_active(esw->dev)) { 3365 + NL_SET_ERR_MSG_MOD(extack, 3366 + "Can't change mode while devlink traps are active"); 3367 + err = -EOPNOTSUPP; 3368 + goto unlock; 3369 + } 3365 3370 err = esw_offloads_start(esw, extack); 3366 - else if (mode == DEVLINK_ESWITCH_MODE_LEGACY) 3371 + } else if (mode == DEVLINK_ESWITCH_MODE_LEGACY) { 3367 3372 err = esw_offloads_stop(esw, extack); 3368 - else 3373 + } else { 3369 3374 err = -EINVAL; 3375 + } 3370 3376 3371 3377 unlock: 3372 3378 mlx5_esw_unlock(esw);
+1 -3
drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
··· 417 417 struct mlx5_wq_param wqp; 418 418 struct mlx5_cqe64 *cqe; 419 419 int inlen, err, eqn; 420 - unsigned int irqn; 421 420 void *cqc, *in; 422 421 __be64 *pas; 423 422 u32 i; ··· 445 446 goto err_cqwq; 446 447 } 447 448 448 - err = mlx5_vector2eqn(mdev, smp_processor_id(), &eqn, &irqn); 449 + err = mlx5_vector2eqn(mdev, smp_processor_id(), &eqn); 449 450 if (err) { 450 451 kvfree(in); 451 452 goto err_cqwq; ··· 475 476 *conn->cq.mcq.arm_db = 0; 476 477 conn->cq.mcq.vector = 0; 477 478 conn->cq.mcq.comp = mlx5_fpga_conn_cq_complete; 478 - conn->cq.mcq.irqn = irqn; 479 479 conn->cq.mcq.uar = fdev->conn_res.uar; 480 480 tasklet_setup(&conn->cq.tasklet, mlx5_fpga_conn_cq_tasklet); 481 481
+2
drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
··· 104 104 struct cpu_rmap *mlx5_eq_table_get_rmap(struct mlx5_core_dev *dev); 105 105 #endif 106 106 107 + int mlx5_vector2irqn(struct mlx5_core_dev *dev, int vector, unsigned int *irqn); 108 + 107 109 #endif
+4 -8
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 1839 1839 if (err) 1840 1840 goto err_sf; 1841 1841 1842 - #ifdef CONFIG_MLX5_CORE_EN 1843 1842 err = mlx5e_init(); 1844 - if (err) { 1845 - pci_unregister_driver(&mlx5_core_driver); 1846 - goto err_debug; 1847 - } 1848 - #endif 1843 + if (err) 1844 + goto err_en; 1849 1845 1850 1846 return 0; 1851 1847 1848 + err_en: 1849 + mlx5_sf_driver_unregister(); 1852 1850 err_sf: 1853 1851 pci_unregister_driver(&mlx5_core_driver); 1854 1852 err_debug: ··· 1856 1858 1857 1859 static void __exit cleanup(void) 1858 1860 { 1859 - #ifdef CONFIG_MLX5_CORE_EN 1860 1861 mlx5e_cleanup(); 1861 - #endif 1862 1862 mlx5_sf_driver_unregister(); 1863 1863 pci_unregister_driver(&mlx5_core_driver); 1864 1864 mlx5_unregister_debugfs();
+5
drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
··· 208 208 int mlx5_fw_version_query(struct mlx5_core_dev *dev, 209 209 u32 *running_ver, u32 *stored_ver); 210 210 211 + #ifdef CONFIG_MLX5_CORE_EN 211 212 int mlx5e_init(void); 212 213 void mlx5e_cleanup(void); 214 + #else 215 + static inline int mlx5e_init(void){ return 0; } 216 + static inline void mlx5e_cleanup(void){} 217 + #endif 213 218 214 219 static inline bool mlx5_sriov_is_enabled(struct mlx5_core_dev *dev) 215 220 {
+7 -3
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
··· 234 234 err = -ENOMEM; 235 235 goto err_cpumask; 236 236 } 237 + irq->pool = pool; 237 238 irq->refcount = 1; 238 239 irq->index = i; 239 240 err = xa_err(xa_store(&pool->irqs, irq->index, irq, GFP_KERNEL)); ··· 243 242 irq->index, err); 244 243 goto err_xa; 245 244 } 246 - irq->pool = pool; 247 245 return irq; 248 246 err_xa: 249 247 free_cpumask_var(irq->mask); ··· 271 271 272 272 int mlx5_irq_detach_nb(struct mlx5_irq *irq, struct notifier_block *nb) 273 273 { 274 + int err = 0; 275 + 276 + err = atomic_notifier_chain_unregister(&irq->nh, nb); 274 277 irq_put(irq); 275 - return atomic_notifier_chain_unregister(&irq->nh, nb); 278 + return err; 276 279 } 277 280 278 281 struct cpumask *mlx5_irq_get_affinity_mask(struct mlx5_irq *irq) ··· 459 456 if (!pool) 460 457 return ERR_PTR(-ENOMEM); 461 458 pool->dev = dev; 459 + mutex_init(&pool->lock); 462 460 xa_init_flags(&pool->irqs, XA_FLAGS_ALLOC); 463 461 pool->xa_num_irqs.min = start; 464 462 pool->xa_num_irqs.max = start + size - 1; ··· 468 464 name); 469 465 pool->min_threshold = min_threshold * MLX5_EQ_REFS_PER_IRQ; 470 466 pool->max_threshold = max_threshold * MLX5_EQ_REFS_PER_IRQ; 471 - mutex_init(&pool->lock); 472 467 mlx5_core_dbg(dev, "pool->name = %s, pool->size = %d, pool->start = %d", 473 468 name, size, start); 474 469 return pool; ··· 485 482 xa_for_each(&pool->irqs, index, irq) 486 483 irq_release(irq); 487 484 xa_destroy(&pool->irqs); 485 + mutex_destroy(&pool->lock); 488 486 kvfree(pool); 489 487 } 490 488
+1 -3
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
··· 749 749 struct mlx5_cqe64 *cqe; 750 750 struct mlx5dr_cq *cq; 751 751 int inlen, err, eqn; 752 - unsigned int irqn; 753 752 void *cqc, *in; 754 753 __be64 *pas; 755 754 int vector; ··· 781 782 goto err_cqwq; 782 783 783 784 vector = raw_smp_processor_id() % mlx5_comp_vectors_count(mdev); 784 - err = mlx5_vector2eqn(mdev, vector, &eqn, &irqn); 785 + err = mlx5_vector2eqn(mdev, vector, &eqn); 785 786 if (err) { 786 787 kvfree(in); 787 788 goto err_cqwq; ··· 817 818 *cq->mcq.arm_db = cpu_to_be32(2 << 28); 818 819 819 820 cq->mcq.vector = 0; 820 - cq->mcq.irqn = irqn; 821 821 cq->mcq.uar = uar; 822 822 823 823 return cq;
+2
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v0.c
··· 352 352 { 353 353 MLX5_SET(ste_rx_steering_mult, hw_ste_p, tunneling_action, 354 354 DR_STE_TUNL_ACTION_DECAP); 355 + MLX5_SET(ste_rx_steering_mult, hw_ste_p, fail_on_error, 1); 355 356 } 356 357 357 358 static void dr_ste_v0_set_rx_pop_vlan(u8 *hw_ste_p) ··· 366 365 MLX5_SET(ste_rx_steering_mult, hw_ste_p, tunneling_action, 367 366 DR_STE_TUNL_ACTION_L3_DECAP); 368 367 MLX5_SET(ste_modify_packet, hw_ste_p, action_description, vlan ? 1 : 0); 368 + MLX5_SET(ste_rx_steering_mult, hw_ste_p, fail_on_error, 1); 369 369 } 370 370 371 371 static void dr_ste_v0_set_rewrite_actions(u8 *hw_ste_p, u16 num_of_actions,
+2 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
··· 9079 9079 9080 9080 static void mlxsw_sp_rif_fid_fdb_del(struct mlxsw_sp_rif *rif, const char *mac) 9081 9081 { 9082 - struct switchdev_notifier_fdb_info info; 9082 + struct switchdev_notifier_fdb_info info = {}; 9083 9083 struct net_device *dev; 9084 9084 9085 9085 dev = br_fdb_find_port(rif->dev, mac, 0); ··· 9127 9127 9128 9128 static void mlxsw_sp_rif_vlan_fdb_del(struct mlxsw_sp_rif *rif, const char *mac) 9129 9129 { 9130 + struct switchdev_notifier_fdb_info info = {}; 9130 9131 u16 vid = mlxsw_sp_fid_8021q_vid(rif->fid); 9131 - struct switchdev_notifier_fdb_info info; 9132 9132 struct net_device *br_dev; 9133 9133 struct net_device *dev; 9134 9134
+1 -1
drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
··· 2520 2520 const char *mac, u16 vid, 2521 2521 struct net_device *dev, bool offloaded) 2522 2522 { 2523 - struct switchdev_notifier_fdb_info info; 2523 + struct switchdev_notifier_fdb_info info = {}; 2524 2524 2525 2525 info.addr = mac; 2526 2526 info.vid = vid;
+1 -1
drivers/net/ethernet/microchip/sparx5/sparx5_mactable.c
··· 277 277 const char *mac, u16 vid, 278 278 struct net_device *dev, bool offloaded) 279 279 { 280 - struct switchdev_notifier_fdb_info info; 280 + struct switchdev_notifier_fdb_info info = {}; 281 281 282 282 info.addr = mac; 283 283 info.vid = vid;
+4
drivers/net/ethernet/realtek/r8169_main.c
··· 3502 3502 RTL_W8(tp, MCU, RTL_R8(tp, MCU) | EN_NDP | EN_OOB_RESET); 3503 3503 RTL_W8(tp, DLLPR, RTL_R8(tp, DLLPR) & ~PFM_EN); 3504 3504 3505 + /* The default value is 0x13. Change it to 0x2f */ 3506 + rtl_csi_access_enable(tp, 0x2f); 3507 + 3505 3508 rtl_eri_write(tp, 0x1d0, ERIAR_MASK_0011, 0x0000); 3506 3509 3507 3510 /* disable EEE */ 3508 3511 rtl_eri_write(tp, 0x1b0, ERIAR_MASK_0011, 0x0000); 3509 3512 3510 3513 rtl_pcie_state_l2l3_disable(tp); 3514 + rtl_hw_aspm_clkreq_enable(tp, true); 3511 3515 } 3512 3516 3513 3517 DECLARE_RTL_COND(rtl_mac_ocp_e00e_cond)
+1 -1
drivers/net/ethernet/rocker/rocker_main.c
··· 2716 2716 rocker_fdb_offload_notify(struct rocker_port *rocker_port, 2717 2717 struct switchdev_notifier_fdb_info *recv_info) 2718 2718 { 2719 - struct switchdev_notifier_fdb_info info; 2719 + struct switchdev_notifier_fdb_info info = {}; 2720 2720 2721 2721 info.addr = recv_info->addr; 2722 2722 info.vid = recv_info->vid;
+1 -1
drivers/net/ethernet/rocker/rocker_ofdpa.c
··· 1822 1822 container_of(work, struct ofdpa_fdb_learn_work, work); 1823 1823 bool removing = (lw->flags & OFDPA_OP_FLAG_REMOVE); 1824 1824 bool learned = (lw->flags & OFDPA_OP_FLAG_LEARNED); 1825 - struct switchdev_notifier_fdb_info info; 1825 + struct switchdev_notifier_fdb_info info = {}; 1826 1826 1827 1827 info.addr = lw->addr; 1828 1828 info.vid = lw->vid;
+1 -1
drivers/net/ethernet/ti/am65-cpsw-switchdev.c
··· 358 358 static void am65_cpsw_fdb_offload_notify(struct net_device *ndev, 359 359 struct switchdev_notifier_fdb_info *rcv) 360 360 { 361 - struct switchdev_notifier_fdb_info info; 361 + struct switchdev_notifier_fdb_info info = {}; 362 362 363 363 info.addr = rcv->addr; 364 364 info.vid = rcv->vid;
+5 -2
drivers/net/ethernet/ti/cpsw_new.c
··· 921 921 struct cpdma_chan *txch; 922 922 int ret, q_idx; 923 923 924 - if (skb_padto(skb, CPSW_MIN_PACKET_SIZE)) { 924 + if (skb_put_padto(skb, READ_ONCE(priv->tx_packet_min))) { 925 925 cpsw_err(priv, tx_err, "packet pad failed\n"); 926 926 ndev->stats.tx_dropped++; 927 927 return NET_XMIT_DROP; ··· 1101 1101 1102 1102 for (i = 0; i < n; i++) { 1103 1103 xdpf = frames[i]; 1104 - if (xdpf->len < CPSW_MIN_PACKET_SIZE) 1104 + if (xdpf->len < READ_ONCE(priv->tx_packet_min)) 1105 1105 break; 1106 1106 1107 1107 if (cpsw_xdp_tx_frame(priv, xdpf, NULL, priv->emac_port)) ··· 1390 1390 priv->dev = dev; 1391 1391 priv->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG); 1392 1392 priv->emac_port = i + 1; 1393 + priv->tx_packet_min = CPSW_MIN_PACKET_SIZE; 1393 1394 1394 1395 if (is_valid_ether_addr(slave_data->mac_addr)) { 1395 1396 ether_addr_copy(priv->mac_addr, slave_data->mac_addr); ··· 1699 1698 1700 1699 priv = netdev_priv(sl_ndev); 1701 1700 slave->port_vlan = vlan; 1701 + WRITE_ONCE(priv->tx_packet_min, CPSW_MIN_PACKET_SIZE_VLAN); 1702 1702 if (netif_running(sl_ndev)) 1703 1703 cpsw_port_add_switch_def_ale_entries(priv, 1704 1704 slave); ··· 1728 1726 1729 1727 priv = netdev_priv(slave->ndev); 1730 1728 slave->port_vlan = slave->data->dual_emac_res_vlan; 1729 + WRITE_ONCE(priv->tx_packet_min, CPSW_MIN_PACKET_SIZE); 1731 1730 cpsw_port_add_dual_emac_def_ale_entries(priv, slave); 1732 1731 } 1733 1732
+3 -1
drivers/net/ethernet/ti/cpsw_priv.h
··· 89 89 90 90 #define CPSW_POLL_WEIGHT 64 91 91 #define CPSW_RX_VLAN_ENCAP_HDR_SIZE 4 92 - #define CPSW_MIN_PACKET_SIZE (VLAN_ETH_ZLEN) 92 + #define CPSW_MIN_PACKET_SIZE_VLAN (VLAN_ETH_ZLEN) 93 + #define CPSW_MIN_PACKET_SIZE (ETH_ZLEN) 93 94 #define CPSW_MAX_PACKET_SIZE (VLAN_ETH_FRAME_LEN +\ 94 95 ETH_FCS_LEN +\ 95 96 CPSW_RX_VLAN_ENCAP_HDR_SIZE) ··· 381 380 u32 emac_port; 382 381 struct cpsw_common *cpsw; 383 382 int offload_fwd_mark; 383 + u32 tx_packet_min; 384 384 }; 385 385 386 386 #define ndev_to_cpsw(ndev) (((struct cpsw_priv *)netdev_priv(ndev))->cpsw)
+1 -1
drivers/net/ethernet/ti/cpsw_switchdev.c
··· 368 368 static void cpsw_fdb_offload_notify(struct net_device *ndev, 369 369 struct switchdev_notifier_fdb_info *rcv) 370 370 { 371 - struct switchdev_notifier_fdb_info info; 371 + struct switchdev_notifier_fdb_info info = {}; 372 372 373 373 info.addr = rcv->addr; 374 374 info.vid = rcv->vid;
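The `= {}` fix here (and the matching ones in the ofdpa, am65-cpsw and qeth_l2 hunks) closes an infoleak-style bug: `switchdev_notifier_fdb_info` has members the callers never assign (e.g. `is_local`), so without the initializer they reached the notifier chain as uninitialized stack memory. A minimal userspace sketch of the pattern; the struct and helper below are hypothetical stand-ins, not kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for switchdev_notifier_fdb_info: the callers
 * only fill addr/vid, so every member they do not know about must be
 * zeroed up front (the kernel hunks use "= {}" for the same effect). */
struct fdb_info {
	const unsigned char *addr;
	unsigned short vid;
	bool added_by_user;
	bool is_local;	/* newer member the old callers never set */
};

static struct fdb_info make_info(const unsigned char *addr, unsigned short vid)
{
	struct fdb_info info = { 0 };	/* zero-initialize all members */

	info.addr = addr;
	info.vid = vid;
	return info;
}
```

Without the initializer, `added_by_user` and `is_local` would hold whatever happened to be on the stack.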
+3 -3
drivers/net/ieee802154/mac802154_hwsim.c
··· 418 418 struct hwsim_edge *e; 419 419 u32 v0, v1; 420 420 421 - if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] && 421 + if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] || 422 422 !info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE]) 423 423 return -EINVAL; 424 424 ··· 528 528 u32 v0, v1; 529 529 u8 lqi; 530 530 531 - if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] && 531 + if (!info->attrs[MAC802154_HWSIM_ATTR_RADIO_ID] || 532 532 !info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE]) 533 533 return -EINVAL; 534 534 535 535 if (nla_parse_nested_deprecated(edge_attrs, MAC802154_HWSIM_EDGE_ATTR_MAX, info->attrs[MAC802154_HWSIM_ATTR_RADIO_EDGE], hwsim_edge_policy, NULL)) 536 536 return -EINVAL; 537 537 538 - if (!edge_attrs[MAC802154_HWSIM_EDGE_ATTR_ENDPOINT_ID] && 538 + if (!edge_attrs[MAC802154_HWSIM_EDGE_ATTR_ENDPOINT_ID] || 539 539 !edge_attrs[MAC802154_HWSIM_EDGE_ATTR_LQI]) 540 540 return -EINVAL; 541 541
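The `&&` to `||` changes above fix inverted attribute validation: both netlink attributes are mandatory, so the handler must bail out when either one is missing, while the old check only fired when both were absent. A hedged sketch of the corrected predicate (the function name and hard-coded `-22` errno are illustrative only):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the fixed validation: each required attribute pointer must
 * be non-NULL, so reject when EITHER is missing (||). The buggy "&&"
 * version only rejected when both were NULL, letting half-formed
 * requests through to nla_parse_nested_deprecated(). */
static int validate_required(const void *radio_id_attr,
			     const void *radio_edge_attr)
{
	if (!radio_id_attr || !radio_edge_attr)
		return -22;	/* stands in for -EINVAL */
	return 0;
}
```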
+1 -1
drivers/net/pcs/pcs-xpcs.c
··· 1089 1089 1090 1090 xpcs = kzalloc(sizeof(*xpcs), GFP_KERNEL); 1091 1091 if (!xpcs) 1092 - return NULL; 1092 + return ERR_PTR(-ENOMEM); 1093 1093 1094 1094 xpcs->mdiodev = mdiodev; 1095 1095
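Returning `ERR_PTR(-ENOMEM)` instead of `NULL` matters because callers of this allocation path test the result with `IS_ERR()`, and `IS_ERR(NULL)` is false, so a `NULL` return would be treated as a valid pointer. A userspace re-creation of the kernel's `<linux/err.h>` encoding (simplified; the real helpers carry extra annotations):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ERRNO 4095
#define ENOMEM 12

/* Userspace re-creation of the kernel's ERR_PTR/IS_ERR/PTR_ERR: small
 * negative errno values are encoded into the top MAX_ERRNO addresses,
 * a range no valid pointer occupies. */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

Note that `NULL` falls outside the error range, which is exactly why the pre-fix `return NULL` slipped past `IS_ERR()` checks in callers.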
-2
drivers/net/phy/micrel.c
··· 1760 1760 .name = "Micrel KSZ87XX Switch", 1761 1761 /* PHY_BASIC_FEATURES */ 1762 1762 .config_init = kszphy_config_init, 1763 - .config_aneg = ksz8873mll_config_aneg, 1764 - .read_status = ksz8873mll_read_status, 1765 1763 .match_phy_device = ksz8795_match_phy_device, 1766 1764 .suspend = genphy_suspend, 1767 1765 .resume = genphy_resume,
+16 -5
drivers/net/ppp/ppp_generic.c
··· 284 284 static int ppp_connect_channel(struct channel *pch, int unit); 285 285 static int ppp_disconnect_channel(struct channel *pch); 286 286 static void ppp_destroy_channel(struct channel *pch); 287 - static int unit_get(struct idr *p, void *ptr); 287 + static int unit_get(struct idr *p, void *ptr, int min); 288 288 static int unit_set(struct idr *p, void *ptr, int n); 289 289 static void unit_put(struct idr *p, int n); 290 290 static void *unit_find(struct idr *p, int n); ··· 1155 1155 mutex_lock(&pn->all_ppp_mutex); 1156 1156 1157 1157 if (unit < 0) { 1158 - ret = unit_get(&pn->units_idr, ppp); 1158 + ret = unit_get(&pn->units_idr, ppp, 0); 1159 1159 if (ret < 0) 1160 1160 goto err; 1161 + if (!ifname_is_set) { 1162 + while (1) { 1163 + snprintf(ppp->dev->name, IFNAMSIZ, "ppp%i", ret); 1164 + if (!__dev_get_by_name(ppp->ppp_net, ppp->dev->name)) 1165 + break; 1166 + unit_put(&pn->units_idr, ret); 1167 + ret = unit_get(&pn->units_idr, ppp, ret + 1); 1168 + if (ret < 0) 1169 + goto err; 1170 + } 1171 + } 1161 1172 } else { 1162 1173 /* Caller asked for a specific unit number. Fail with -EEXIST 1163 1174 * if unavailable. For backward compatibility, return -EEXIST ··· 1317 1306 * the PPP unit identifer as suffix (i.e. ppp<unit_id>). This allows 1318 1307 * userspace to infer the device name using to the PPPIOCGUNIT ioctl. 1319 1308 */ 1320 - if (!tb[IFLA_IFNAME]) 1309 + if (!tb[IFLA_IFNAME] || !nla_len(tb[IFLA_IFNAME]) || !*(char *)nla_data(tb[IFLA_IFNAME])) 1321 1310 conf.ifname_is_set = false; 1322 1311 1323 1312 err = ppp_dev_configure(src_net, dev, &conf); ··· 3563 3552 } 3564 3553 3565 3554 /* get new free unit number and associate pointer with it */ 3566 - static int unit_get(struct idr *p, void *ptr) 3555 + static int unit_get(struct idr *p, void *ptr, int min) 3567 3556 { 3568 - return idr_alloc(p, ptr, 0, 0, GFP_KERNEL); 3557 + return idr_alloc(p, ptr, min, 0, GFP_KERNEL); 3569 3558 } 3570 3559 3571 3560 /* put unit number back to a pool */
+6 -6
drivers/net/wwan/mhi_wwan_ctrl.c
··· 41 41 /* Increment RX budget and schedule RX refill if necessary */ 42 42 static void mhi_wwan_rx_budget_inc(struct mhi_wwan_dev *mhiwwan) 43 43 { 44 - spin_lock(&mhiwwan->rx_lock); 44 + spin_lock_bh(&mhiwwan->rx_lock); 45 45 46 46 mhiwwan->rx_budget++; 47 47 48 48 if (test_bit(MHI_WWAN_RX_REFILL, &mhiwwan->flags)) 49 49 schedule_work(&mhiwwan->rx_refill); 50 50 51 - spin_unlock(&mhiwwan->rx_lock); 51 + spin_unlock_bh(&mhiwwan->rx_lock); 52 52 } 53 53 54 54 /* Decrement RX budget if non-zero and return true on success */ ··· 56 56 { 57 57 bool ret = false; 58 58 59 - spin_lock(&mhiwwan->rx_lock); 59 + spin_lock_bh(&mhiwwan->rx_lock); 60 60 61 61 if (mhiwwan->rx_budget) { 62 62 mhiwwan->rx_budget--; ··· 64 64 ret = true; 65 65 } 66 66 67 - spin_unlock(&mhiwwan->rx_lock); 67 + spin_unlock_bh(&mhiwwan->rx_lock); 68 68 69 69 return ret; 70 70 } ··· 130 130 { 131 131 struct mhi_wwan_dev *mhiwwan = wwan_port_get_drvdata(port); 132 132 133 - spin_lock(&mhiwwan->rx_lock); 133 + spin_lock_bh(&mhiwwan->rx_lock); 134 134 clear_bit(MHI_WWAN_RX_REFILL, &mhiwwan->flags); 135 - spin_unlock(&mhiwwan->rx_lock); 135 + spin_unlock_bh(&mhiwwan->rx_lock); 136 136 137 137 cancel_work_sync(&mhiwwan->rx_refill); 138 138
+8 -4
drivers/net/wwan/wwan_core.c
··· 164 164 goto done_unlock; 165 165 166 166 id = ida_alloc(&wwan_dev_ids, GFP_KERNEL); 167 - if (id < 0) 167 + if (id < 0) { 168 + wwandev = ERR_PTR(id); 168 169 goto done_unlock; 170 + } 169 171 170 172 wwandev = kzalloc(sizeof(*wwandev), GFP_KERNEL); 171 173 if (!wwandev) { 174 + wwandev = ERR_PTR(-ENOMEM); 172 175 ida_free(&wwan_dev_ids, id); 173 176 goto done_unlock; 174 177 } ··· 185 182 err = device_register(&wwandev->dev); 186 183 if (err) { 187 184 put_device(&wwandev->dev); 188 - wwandev = NULL; 185 + wwandev = ERR_PTR(err); 186 + goto done_unlock; 189 187 } 190 188 191 189 done_unlock: ··· 1021 1017 return -EINVAL; 1022 1018 1023 1019 wwandev = wwan_create_dev(parent); 1024 - if (!wwandev) 1025 - return -ENOMEM; 1020 + if (IS_ERR(wwandev)) 1021 + return PTR_ERR(wwandev); 1026 1022 1027 1023 if (WARN_ON(wwandev->ops)) { 1028 1024 wwan_remove_dev(wwandev);
+3
drivers/platform/x86/Kconfig
··· 508 508 depends on RFKILL || RFKILL = n 509 509 depends on ACPI_VIDEO || ACPI_VIDEO = n 510 510 depends on BACKLIGHT_CLASS_DEVICE 511 + depends on I2C 511 512 select ACPI_PLATFORM_PROFILE 512 513 select HWMON 513 514 select NVRAM ··· 692 691 tristate "INTEL HID Event" 693 692 depends on ACPI 694 693 depends on INPUT 694 + depends on I2C 695 695 select INPUT_SPARSEKMAP 696 696 help 697 697 This driver provides support for the Intel HID Event hotkey interface. ··· 744 742 tristate "INTEL VIRTUAL BUTTON" 745 743 depends on ACPI 746 744 depends on INPUT 745 + depends on I2C 747 746 select INPUT_SPARSEKMAP 748 747 help 749 748 This driver provides support for the Intel Virtual Button interface.
+76
drivers/platform/x86/dual_accel_detect.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */
2 + /*
3 + * Helper code to detect 360 degree hinges (yoga) style 2-in-1 devices using 2 accelerometers
4 + * to allow the OS to determine the angle between the display and the base of the device.
5 + *
6 + * On Windows these are read by a special HingeAngleService process which calls undocumented
7 + * ACPI methods, to let the firmware know if the 2-in-1 is in tablet- or laptop-mode.
8 + * The firmware may use this to disable the kbd and touchpad to avoid spurious input in
9 + * tablet-mode as well as to report SW_TABLET_MODE info to the OS.
10 + *
11 + * Since Linux does not call these undocumented methods, the SW_TABLET_MODE info reported
12 + * by various drivers/platform/x86 drivers is incorrect. These drivers use the detection
13 + * code in this file to disable SW_TABLET_MODE reporting to avoid reporting broken info
14 + * (instead userspace can derive the status itself by directly reading the 2 accels).
15 + */
16 +
17 + #include <linux/acpi.h>
18 + #include <linux/i2c.h>
19 +
20 + static int dual_accel_i2c_resource_count(struct acpi_resource *ares, void *data)
21 + {
22 + struct acpi_resource_i2c_serialbus *sb;
23 + int *count = data;
24 +
25 + if (i2c_acpi_get_i2c_resource(ares, &sb))
26 + *count = *count + 1;
27 +
28 + return 1;
29 + }
30 +
31 + static int dual_accel_i2c_client_count(struct acpi_device *adev)
32 + {
33 + int ret, count = 0;
34 + LIST_HEAD(r);
35 +
36 + ret = acpi_dev_get_resources(adev, &r, dual_accel_i2c_resource_count, &count);
37 + if (ret < 0)
38 + return ret;
39 +
40 + acpi_dev_free_resource_list(&r);
41 + return count;
42 + }
43 +
44 + static bool dual_accel_detect_bosc0200(void)
45 + {
46 + struct acpi_device *adev;
47 + int count;
48 +
49 + adev = acpi_dev_get_first_match_dev("BOSC0200", NULL, -1);
50 + if (!adev)
51 + return false;
52 +
53 + count = dual_accel_i2c_client_count(adev);
54 +
55 + acpi_dev_put(adev);
56 +
57 + return count == 2;
58 + }
59 +
60 + static bool dual_accel_detect(void)
61 + {
62 + /* Systems which use a pair of accels with KIOX010A / KIOX020A ACPI ids */
63 + if (acpi_dev_present("KIOX010A", NULL, -1) &&
64 + acpi_dev_present("KIOX020A", NULL, -1))
65 + return true;
66 +
67 + /* Systems which use a single DUAL250E ACPI device to model 2 accels */
68 + if (acpi_dev_present("DUAL250E", NULL, -1))
69 + return true;
70 +
71 + /* Systems which use a single BOSC0200 ACPI device to model 2 accels */
72 + if (dual_accel_detect_bosc0200())
73 + return true;
74 +
75 + return false;
76 + }
+6 -15
drivers/platform/x86/intel-hid.c
··· 14 14 #include <linux/module.h> 15 15 #include <linux/platform_device.h> 16 16 #include <linux/suspend.h> 17 + #include "dual_accel_detect.h" 17 18 18 19 /* When NOT in tablet mode, VGBS returns with the flag 0x40 */ 19 20 #define TABLET_MODE_FLAG BIT(6) ··· 123 122 struct input_dev *array; 124 123 struct input_dev *switches; 125 124 bool wakeup_mode; 125 + bool dual_accel; 126 126 }; 127 127 128 128 #define HID_EVENT_FILTER_UUID "eeec56b3-4442-408f-a792-4edd4d758054" ··· 453 451 * SW_TABLET_MODE report, in these cases we enable support when receiving 454 452 * the first event instead of during driver setup. 455 453 * 456 - * Some 360 degree hinges (yoga) style 2-in-1 devices use 2 accelerometers 457 - * to allow the OS to determine the angle between the display and the base 458 - * of the device. On Windows these are read by a special HingeAngleService 459 - * process which calls an ACPI DSM (Device Specific Method) on the 460 - * ACPI KIOX010A device node for the sensor in the display, to let the 461 - * firmware know if the 2-in-1 is in tablet- or laptop-mode so that it can 462 - * disable the kbd and touchpad to avoid spurious input in tablet-mode. 463 - * 464 - * The linux kxcjk1013 driver calls the DSM for this once at probe time 465 - * to ensure that the builtin kbd and touchpad work. On some devices this 466 - * causes a "spurious" 0xcd event on the intel-hid ACPI dev. In this case 467 - * there is not a functional tablet-mode switch, so we should not register 468 - * the tablet-mode switch device. 454 + * See dual_accel_detect.h for more info on the dual_accel check. 
469 455 */ 470 - if (!priv->switches && (event == 0xcc || event == 0xcd) && 471 - !acpi_dev_present("KIOX010A", NULL, -1)) { 456 + if (!priv->switches && !priv->dual_accel && (event == 0xcc || event == 0xcd)) { 472 457 dev_info(&device->dev, "switch event received, enable switches supports\n"); 473 458 err = intel_hid_switches_setup(device); 474 459 if (err) ··· 595 606 if (!priv) 596 607 return -ENOMEM; 597 608 dev_set_drvdata(&device->dev, priv); 609 + 610 + priv->dual_accel = dual_accel_detect(); 598 611 599 612 err = intel_hid_input_setup(device); 600 613 if (err) {
+15 -3
drivers/platform/x86/intel-vbtn.c
··· 14 14 #include <linux/module.h>
15 15 #include <linux/platform_device.h>
16 16 #include <linux/suspend.h>
17 + #include "dual_accel_detect.h"
17 18
18 19 /* Returned when NOT in tablet mode on some HP Stream x360 11 models */
19 20 #define VGBS_TABLET_MODE_FLAG_ALT 0x10
··· 67 66 struct intel_vbtn_priv {
68 67 struct input_dev *buttons_dev;
69 68 struct input_dev *switches_dev;
69 + bool dual_accel;
70 70 bool has_buttons;
71 71 bool has_switches;
72 72 bool wakeup_mode;
··· 162 160 input_dev = priv->buttons_dev;
163 161 } else if ((ke = sparse_keymap_entry_from_scancode(priv->switches_dev, event))) {
164 162 if (!priv->has_switches) {
163 + /* See dual_accel_detect.h for more info */
164 + if (priv->dual_accel)
165 + return;
166 +
165 167 dev_info(&device->dev, "Registering Intel Virtual Switches input-dev after receiving a switch event\n");
166 168 ret = input_register_device(priv->switches_dev);
167 169 if (ret)
··· 254 248 {} /* Array terminator */
255 249 };
256 250
257 - static bool intel_vbtn_has_switches(acpi_handle handle)
251 + static bool intel_vbtn_has_switches(acpi_handle handle, bool dual_accel)
258 252 {
259 253 unsigned long long vgbs;
260 254 acpi_status status;
255 +
256 + /* See dual_accel_detect.h for more info */
257 + if (dual_accel)
258 + return false;
261 259
262 260 if (!dmi_check_system(dmi_switches_allow_list))
263 261 return false;
··· 273 263 static int intel_vbtn_probe(struct platform_device *device)
274 264 {
275 265 acpi_handle handle = ACPI_HANDLE(&device->dev);
276 - bool has_buttons, has_switches;
266 + bool dual_accel, has_buttons, has_switches;
277 267 struct intel_vbtn_priv *priv;
278 268 acpi_status status;
279 269 int err;
280 270
271 + dual_accel = dual_accel_detect();
281 272 has_buttons = acpi_has_method(handle, "VBDL");
282 - has_switches = intel_vbtn_has_switches(handle);
273 + has_switches = intel_vbtn_has_switches(handle, dual_accel);
283 274
284 275 if (!has_buttons && !has_switches) {
285 276 dev_warn(&device->dev, "failed to read Intel Virtual Button driver\n");
··· 292 281 return -ENOMEM;
293 282 dev_set_drvdata(&device->dev, priv);
294 283
284 + priv->dual_accel = dual_accel;
295 285 priv->has_buttons = has_buttons;
296 286 priv->has_switches = has_switches;
297 287
+2
drivers/platform/x86/pcengines-apuv2.c
··· 94 94 NULL, 1, GPIO_ACTIVE_LOW), 95 95 GPIO_LOOKUP_IDX(AMD_FCH_GPIO_DRIVER_NAME, APU2_GPIO_LINE_LED3, 96 96 NULL, 2, GPIO_ACTIVE_LOW), 97 + {} /* Terminating entry */ 97 98 } 98 99 }; 99 100 ··· 124 123 .table = { 125 124 GPIO_LOOKUP_IDX(AMD_FCH_GPIO_DRIVER_NAME, APU2_GPIO_LINE_MODESW, 126 125 NULL, 0, GPIO_ACTIVE_LOW), 126 + {} /* Terminating entry */ 127 127 } 128 128 }; 129 129
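The added `{}` terminating entries matter because GPIO lookup tables are walked until an all-zero sentinel entry; without one, the walk runs past the end of the array into adjacent memory. A userspace sketch of the convention, using a hypothetical table type in place of `struct gpiod_lookup`:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for a lookup-table entry; the real kernel
 * struct is struct gpiod_lookup. An all-zero entry ends the table. */
struct lookup_entry {
	const char *key;
	unsigned int idx;
};

/* Count entries up to the zeroed sentinel. This only terminates if
 * the table actually has one, which is what the hunk above adds. */
static size_t table_len(const struct lookup_entry *table)
{
	size_t n = 0;

	while (table[n].key)
		n++;
	return n;
}
```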
+2 -1
drivers/platform/x86/thinkpad_acpi.c
··· 73 73 #include <linux/uaccess.h> 74 74 #include <acpi/battery.h> 75 75 #include <acpi/video.h> 76 + #include "dual_accel_detect.h" 76 77 77 78 /* ThinkPad CMOS commands */ 78 79 #define TP_CMOS_VOLUME_DOWN 0 ··· 3233 3232 * the laptop/tent/tablet mode to the EC. The bmc150 iio driver 3234 3233 * does not support this, so skip the hotkey on these models. 3235 3234 */ 3236 - if (has_tablet_mode && !acpi_dev_present("BOSC0200", "1", -1)) 3235 + if (has_tablet_mode && !dual_accel_detect()) 3237 3236 tp_features.hotkey_tablet = TP_HOTKEY_TABLET_USES_GMMS; 3238 3237 type = "GMMS"; 3239 3238 } else if (acpi_evalf(hkey_handle, &res, "MHKG", "qd")) {
+1 -1
drivers/ptp/ptp_sysfs.c
··· 154 154 struct ptp_clock *ptp = dev_get_drvdata(dev); 155 155 struct ptp_clock_info *info = ptp->info; 156 156 struct ptp_vclock *vclock; 157 - u8 *num = data; 157 + u32 *num = data; 158 158 159 159 vclock = info_to_vclock(info); 160 160 dev_info(dev->parent, "delete virtual clock ptp%d\n",
+11 -2
drivers/s390/block/dasd_eckd.c
··· 1004 1004 static void dasd_eckd_store_conf_data(struct dasd_device *device, 1005 1005 struct dasd_conf_data *conf_data, int chp) 1006 1006 { 1007 + struct dasd_eckd_private *private = device->private; 1007 1008 struct channel_path_desc_fmt0 *chp_desc; 1008 1009 struct subchannel_id sch_id; 1010 + void *cdp; 1009 1011 1010 - ccw_device_get_schid(device->cdev, &sch_id); 1011 1012 /* 1012 1013 * path handling and read_conf allocate data 1013 1014 * free it before replacing the pointer 1015 + * also replace the old private->conf_data pointer 1016 + * with the new one if this points to the same data 1014 1017 */ 1015 - kfree(device->path[chp].conf_data); 1018 + cdp = device->path[chp].conf_data; 1019 + if (private->conf_data == cdp) { 1020 + private->conf_data = (void *)conf_data; 1021 + dasd_eckd_identify_conf_parts(private); 1022 + } 1023 + ccw_device_get_schid(device->cdev, &sch_id); 1016 1024 device->path[chp].conf_data = conf_data; 1017 1025 device->path[chp].cssid = sch_id.cssid; 1018 1026 device->path[chp].ssid = sch_id.ssid; ··· 1028 1020 if (chp_desc) 1029 1021 device->path[chp].chpid = chp_desc->chpid; 1030 1022 kfree(chp_desc); 1023 + kfree(cdp); 1031 1024 } 1032 1025 1033 1026 static void dasd_eckd_clear_conf_data(struct dasd_device *device)
+2 -2
drivers/s390/net/qeth_l2_main.c
··· 279 279 280 280 static void qeth_l2_dev2br_fdb_flush(struct qeth_card *card) 281 281 { 282 - struct switchdev_notifier_fdb_info info; 282 + struct switchdev_notifier_fdb_info info = {}; 283 283 284 284 QETH_CARD_TEXT(card, 2, "fdbflush"); 285 285 ··· 636 636 struct net_if_token *token, 637 637 struct mac_addr_lnid *addr_lnid) 638 638 { 639 - struct switchdev_notifier_fdb_info info; 639 + struct switchdev_notifier_fdb_info info = {}; 640 640 u8 ntfy_mac[ETH_ALEN]; 641 641 642 642 ether_addr_copy(ntfy_mac, addr_lnid->mac);
+1 -1
drivers/soc/Makefile
··· 13 13 obj-y += fsl/ 14 14 obj-$(CONFIG_ARCH_GEMINI) += gemini/ 15 15 obj-y += imx/ 16 - obj-$(CONFIG_ARCH_IXP4XX) += ixp4xx/ 16 + obj-y += ixp4xx/ 17 17 obj-$(CONFIG_SOC_XWAY) += lantiq/ 18 18 obj-$(CONFIG_LITEX_SOC_CONTROLLER) += litex/ 19 19 obj-y += mediatek/
+12 -72
drivers/soc/imx/soc-imx8m.c
··· 5 5
6 6 #include <linux/init.h>
7 7 #include <linux/io.h>
8 - #include <linux/module.h>
9 - #include <linux/nvmem-consumer.h>
10 8 #include <linux/of_address.h>
11 9 #include <linux/slab.h>
12 10 #include <linux/sys_soc.h>
··· 29 31
30 32 struct imx8_soc_data {
31 33 char *name;
32 - u32 (*soc_revision)(struct device *dev);
34 + u32 (*soc_revision)(void);
33 35 };
34 36
35 37 static u64 soc_uid;
··· 50 52 static inline u32 imx8mq_soc_revision_from_atf(void) { return 0; };
51 53 #endif
52 54
53 - static u32 __init imx8mq_soc_revision(struct device *dev)
55 + static u32 __init imx8mq_soc_revision(void)
54 56 {
55 57 struct device_node *np;
56 58 void __iomem *ocotp_base;
··· 75 77 rev = REV_B1;
76 78 }
77 79
78 - if (dev) {
79 - int ret;
80 -
81 - ret = nvmem_cell_read_u64(dev, "soc_unique_id", &soc_uid);
82 - if (ret) {
83 - iounmap(ocotp_base);
84 - of_node_put(np);
85 - return ret;
86 - }
87 - } else {
88 - soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH);
89 - soc_uid <<= 32;
90 - soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW);
91 - }
80 + soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH);
81 + soc_uid <<= 32;
82 + soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW);
92 83
93 84 iounmap(ocotp_base);
94 85 of_node_put(np);
··· 107 120 of_node_put(np);
108 121 }
109 122
110 - static u32 __init imx8mm_soc_revision(struct device *dev)
123 + static u32 __init imx8mm_soc_revision(void)
111 124 {
112 125 struct device_node *np;
113 126 void __iomem *anatop_base;
··· 125 138 iounmap(anatop_base);
126 139 of_node_put(np);
127 140
128 - if (dev) {
129 - int ret;
130 -
131 - ret = nvmem_cell_read_u64(dev, "soc_unique_id", &soc_uid);
132 - if (ret)
133 - return ret;
134 - } else {
135 - imx8mm_soc_uid();
136 - }
141 + imx8mm_soc_uid();
137 142
138 143 return rev;
139 144 }
··· 150 171 .soc_revision = imx8mm_soc_revision,
151 172 };
152 173
153 - static __maybe_unused const struct of_device_id imx8_machine_match[] = {
174 + static __maybe_unused const struct of_device_id imx8_soc_match[] = {
154 175 { .compatible = "fsl,imx8mq", .data = &imx8mq_soc_data, },
155 176 { .compatible = "fsl,imx8mm", .data = &imx8mm_soc_data, },
156 177 { .compatible = "fsl,imx8mn", .data = &imx8mn_soc_data, },
157 178 { .compatible = "fsl,imx8mp", .data = &imx8mp_soc_data, },
158 - { }
159 - };
160 -
161 - static __maybe_unused const struct of_device_id imx8_soc_match[] = {
162 - { .compatible = "fsl,imx8mq-soc", .data = &imx8mq_soc_data, },
163 - { .compatible = "fsl,imx8mm-soc", .data = &imx8mm_soc_data, },
164 - { .compatible = "fsl,imx8mn-soc", .data = &imx8mn_soc_data, },
165 - { .compatible = "fsl,imx8mp-soc", .data = &imx8mp_soc_data, },
166 179 { }
167 180 };
··· 163 192 kasprintf(GFP_KERNEL, "%d.%d", (soc_rev >> 4) & 0xf, soc_rev & 0xf) : \
164 193 "unknown"
165 194
166 - static int imx8_soc_info(struct platform_device *pdev)
195 + static int __init imx8_soc_init(void)
167 196 {
168 197 struct soc_device_attribute *soc_dev_attr;
169 198 struct soc_device *soc_dev;
··· 182 211 if (ret)
183 212 goto free_soc;
184 213
185 - if (pdev)
186 - id = of_match_node(imx8_soc_match, pdev->dev.of_node);
187 - else
188 - id = of_match_node(imx8_machine_match, of_root);
214 + id = of_match_node(imx8_soc_match, of_root);
189 215 if (!id) {
190 216 ret = -ENODEV;
191 217 goto free_soc;
··· 191 223 data = id->data;
192 224 if (data) {
193 225 soc_dev_attr->soc_id = data->name;
194 - if (data->soc_revision) {
195 - if (pdev) {
196 - soc_rev = data->soc_revision(&pdev->dev);
197 - ret = soc_rev;
198 - if (ret < 0)
199 - goto free_soc;
200 - } else {
201 - soc_rev = data->soc_revision(NULL);
202 - }
203 - }
226 + if (data->soc_revision)
227 + soc_rev = data->soc_revision();
204 228 }
205 229
206 230 soc_dev_attr->revision = imx8_revision(soc_rev);
··· 230 270 kfree(soc_dev_attr);
231 271 return ret;
232 272 }
233 -
234 - /* Retain device_initcall is for backward compatibility with DTS. */
235 - static int __init imx8_soc_init(void)
236 - {
237 - if (of_find_matching_node_and_match(NULL, imx8_soc_match, NULL))
238 - return 0;
239 -
240 - return imx8_soc_info(NULL);
241 - }
242 273 device_initcall(imx8_soc_init);
243 -
244 - static struct platform_driver imx8_soc_info_driver = {
245 - .probe = imx8_soc_info,
246 - .driver = {
247 - .name = "imx8_soc_info",
248 - .of_match_table = imx8_soc_match,
249 - },
250 - };
251 -
252 - module_platform_driver(imx8_soc_info_driver);
253 - MODULE_LICENSE("GPL v2");
+5 -7
drivers/soc/ixp4xx/ixp4xx-npe.c
··· 21 21 #include <linux/of_platform.h> 22 22 #include <linux/platform_device.h> 23 23 #include <linux/soc/ixp4xx/npe.h> 24 - #include <mach/hardware.h> 25 24 #include <linux/soc/ixp4xx/cpu.h> 26 25 27 26 #define DEBUG_MSG 0 ··· 693 694 694 695 if (!(ixp4xx_read_feature_bits() & 695 696 (IXP4XX_FEATURE_RESET_NPEA << i))) { 696 - dev_info(dev, "NPE%d at 0x%08x-0x%08x not available\n", 697 - i, res->start, res->end); 697 + dev_info(dev, "NPE%d at %pR not available\n", 698 + i, res); 698 699 continue; /* NPE already disabled or not present */ 699 700 } 700 701 npe->regs = devm_ioremap_resource(dev, res); ··· 702 703 return PTR_ERR(npe->regs); 703 704 704 705 if (npe_reset(npe)) { 705 - dev_info(dev, "NPE%d at 0x%08x-0x%08x does not reset\n", 706 - i, res->start, res->end); 706 + dev_info(dev, "NPE%d at %pR does not reset\n", 707 + i, res); 707 708 continue; 708 709 } 709 710 npe->valid = 1; 710 - dev_info(dev, "NPE%d at 0x%08x-0x%08x registered\n", 711 - i, res->start, res->end); 711 + dev_info(dev, "NPE%d at %pR registered\n", i, res); 712 712 found++; 713 713 } 714 714
+5 -5
drivers/soc/ixp4xx/ixp4xx-qmgr.c
··· 12 12 #include <linux/of.h> 13 13 #include <linux/platform_device.h> 14 14 #include <linux/soc/ixp4xx/qmgr.h> 15 - #include <mach/hardware.h> 16 15 #include <linux/soc/ixp4xx/cpu.h> 17 16 18 17 static struct qmgr_regs __iomem *qmgr_regs; ··· 146 147 /* ACK - it may clear any bits so don't rely on it */ 147 148 __raw_writel(0xFFFFFFFF, &qmgr_regs->irqstat[0]); 148 149 149 - en_bitmap = qmgr_regs->irqen[0]; 150 + en_bitmap = __raw_readl(&qmgr_regs->irqen[0]); 150 151 while (en_bitmap) { 151 152 i = __fls(en_bitmap); /* number of the last "low" queue */ 152 153 en_bitmap &= ~BIT(i); 153 - src = qmgr_regs->irqsrc[i >> 3]; 154 - stat = qmgr_regs->stat1[i >> 3]; 154 + src = __raw_readl(&qmgr_regs->irqsrc[i >> 3]); 155 + stat = __raw_readl(&qmgr_regs->stat1[i >> 3]); 155 156 if (src & 4) /* the IRQ condition is inverted */ 156 157 stat = ~stat; 157 158 if (stat & BIT(src & 3)) { ··· 171 172 /* ACK - it may clear any bits so don't rely on it */ 172 173 __raw_writel(0xFFFFFFFF, &qmgr_regs->irqstat[1]); 173 174 174 - req_bitmap = qmgr_regs->irqen[1] & qmgr_regs->statne_h; 175 + req_bitmap = __raw_readl(&qmgr_regs->irqen[1]) & 176 + __raw_readl(&qmgr_regs->statne_h); 175 177 while (req_bitmap) { 176 178 i = __fls(req_bitmap); /* number of the last "high" queue */ 177 179 req_bitmap &= ~BIT(i);
+4 -2
drivers/soc/tegra/Kconfig
··· 15 15 select PL310_ERRATA_769419 if CACHE_L2X0 16 16 select SOC_TEGRA_FLOWCTRL 17 17 select SOC_TEGRA_PMC 18 - select SOC_TEGRA20_VOLTAGE_COUPLER 18 + select SOC_TEGRA20_VOLTAGE_COUPLER if REGULATOR 19 19 select TEGRA_TIMER 20 20 help 21 21 Support for NVIDIA Tegra AP20 and T20 processors, based on the ··· 29 29 select PL310_ERRATA_769419 if CACHE_L2X0 30 30 select SOC_TEGRA_FLOWCTRL 31 31 select SOC_TEGRA_PMC 32 - select SOC_TEGRA30_VOLTAGE_COUPLER 32 + select SOC_TEGRA30_VOLTAGE_COUPLER if REGULATOR 33 33 select TEGRA_TIMER 34 34 help 35 35 Support for NVIDIA Tegra T30 processor family, based on the ··· 155 155 config SOC_TEGRA20_VOLTAGE_COUPLER 156 156 bool "Voltage scaling support for Tegra20 SoCs" 157 157 depends on ARCH_TEGRA_2x_SOC || COMPILE_TEST 158 + depends on REGULATOR 158 159 159 160 config SOC_TEGRA30_VOLTAGE_COUPLER 160 161 bool "Voltage scaling support for Tegra30 SoCs" 161 162 depends on ARCH_TEGRA_3x_SOC || COMPILE_TEST 163 + depends on REGULATOR
+18 -3
drivers/spi/spi-cadence-quadspi.c
··· 325 325 f_pdata->inst_width = CQSPI_INST_TYPE_SINGLE; 326 326 f_pdata->addr_width = CQSPI_INST_TYPE_SINGLE; 327 327 f_pdata->data_width = CQSPI_INST_TYPE_SINGLE; 328 - f_pdata->dtr = op->data.dtr && op->cmd.dtr && op->addr.dtr; 328 + 329 + /* 330 + * For an op to be DTR, cmd phase along with every other non-empty 331 + * phase should have dtr field set to 1. If an op phase has zero 332 + * nbytes, ignore its dtr field; otherwise, check its dtr field. 333 + */ 334 + f_pdata->dtr = op->cmd.dtr && 335 + (!op->addr.nbytes || op->addr.dtr) && 336 + (!op->data.nbytes || op->data.dtr); 329 337 330 338 switch (op->data.buswidth) { 331 339 case 0: ··· 1236 1228 { 1237 1229 bool all_true, all_false; 1238 1230 1239 - all_true = op->cmd.dtr && op->addr.dtr && op->dummy.dtr && 1240 - op->data.dtr; 1231 + /* 1232 + * op->dummy.dtr is required for converting nbytes into ncycles. 1233 + * Also, don't check the dtr field of the op phase having zero nbytes. 1234 + */ 1235 + all_true = op->cmd.dtr && 1236 + (!op->addr.nbytes || op->addr.dtr) && 1237 + (!op->dummy.nbytes || op->dummy.dtr) && 1238 + (!op->data.nbytes || op->data.dtr); 1239 + 1241 1240 all_false = !op->cmd.dtr && !op->addr.dtr && !op->dummy.dtr && 1242 1241 !op->data.dtr; 1243 1242
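The reworked DTR check above reduces to a small predicate: an op is DTR only if the command phase is DTR and every phase that actually transfers bytes is DTR as well, with empty phases ignored. A userspace sketch of that logic; the structs are simplified stand-ins for the `struct spi_mem_op` phases, not the real layout:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for the spi-mem op phases checked above. */
struct phase {
	unsigned int nbytes;
	bool dtr;
};

struct op {
	struct phase cmd, addr, dummy, data;
};

/* DTR only when cmd is DTR and every non-empty phase is DTR too;
 * a phase with nbytes == 0 is skipped, matching the fixed check. */
static bool op_is_dtr(const struct op *op)
{
	return op->cmd.dtr &&
	       (!op->addr.nbytes || op->addr.dtr) &&
	       (!op->dummy.nbytes || op->dummy.dtr) &&
	       (!op->data.nbytes || op->data.dtr);
}
```

This is why an op with no address phase no longer has to fake `addr.dtr = 1` just to pass the check.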
+16 -2
drivers/spi/spi-imx.c
··· 505 505 struct spi_message *msg) 506 506 { 507 507 struct spi_device *spi = msg->spi; 508 + struct spi_transfer *xfer; 508 509 u32 ctrl = MX51_ECSPI_CTRL_ENABLE; 510 + u32 min_speed_hz = ~0U; 509 511 u32 testreg, delay; 510 512 u32 cfg = readl(spi_imx->base + MX51_ECSPI_CONFIG); 511 513 ··· 579 577 * be asserted before the SCLK polarity changes, which would disrupt 580 578 * the SPI communication as the device on the other end would consider 581 579 * the change of SCLK polarity as a clock tick already. 580 + * 581 + * Because spi_imx->spi_bus_clk is only set in bitbang prepare_message 582 + * callback, iterate over all the transfers in spi_message, find the 583 + * one with lowest bus frequency, and use that bus frequency for the 584 + * delay calculation. In case all transfers have speed_hz == 0, then 585 + * min_speed_hz is ~0 and the resulting delay is zero. 582 586 */ 583 - delay = (2 * 1000000) / spi_imx->spi_bus_clk; 584 - if (likely(delay < 10)) /* SCLK is faster than 100 kHz */ 587 + list_for_each_entry(xfer, &msg->transfers, transfer_list) { 588 + if (!xfer->speed_hz) 589 + continue; 590 + min_speed_hz = min(xfer->speed_hz, min_speed_hz); 591 + } 592 + 593 + delay = (2 * 1000000) / min_speed_hz; 594 + if (likely(delay < 10)) /* SCLK is faster than 200 kHz */ 585 595 udelay(delay); 586 596 else /* SCLK is _very_ slow */ 587 597 usleep_range(delay, delay + 10);
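The new delay computation above scans the message for the slowest transfer and derives two SCLK periods in microseconds from it; if no transfer sets `speed_hz`, the minimum stays `~0U` and the delay rounds down to zero. A sketch of just that arithmetic (function and parameter names are hypothetical, not driver API):

```c
#include <assert.h>

/* Two SCLK periods at the slowest transfer speed in the message, in
 * microseconds. Transfers with speed_hz == 0 are skipped; if all are
 * zero, min_speed_hz stays ~0U and the division yields 0. */
static unsigned int cs_delay_us(const unsigned int *speeds_hz, int n)
{
	unsigned int min_speed_hz = ~0U;
	int i;

	for (i = 0; i < n; i++) {
		if (!speeds_hz[i])
			continue;
		if (speeds_hz[i] < min_speed_hz)
			min_speed_hz = speeds_hz[i];
	}
	return (2 * 1000000) / min_speed_hz;
}
```

At 100 kHz the result is 20 us, which is why the driver only bothers to delay when the bus is slower than 200 kHz (delay >= 10 us).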
+2
drivers/spi/spi-meson-spicc.c
··· 785 785 clk_disable_unprepare(spicc->core); 786 786 clk_disable_unprepare(spicc->pclk); 787 787 788 + spi_master_put(spicc->master); 789 + 788 790 return 0; 789 791 } 790 792
+5 -14
drivers/spi/spi-mt65xx.c
··· 426 426 mtk_spi_prepare_transfer(master, xfer); 427 427 mtk_spi_setup_packet(master); 428 428 429 - cnt = xfer->len / 4; 430 - if (xfer->tx_buf) 429 + if (xfer->tx_buf) { 430 + cnt = xfer->len / 4; 431 431 iowrite32_rep(mdata->base + SPI_TX_DATA_REG, xfer->tx_buf, cnt); 432 - 433 - if (xfer->rx_buf) 434 - ioread32_rep(mdata->base + SPI_RX_DATA_REG, xfer->rx_buf, cnt); 435 - 436 - remainder = xfer->len % 4; 437 - if (remainder > 0) { 438 - reg_val = 0; 439 - if (xfer->tx_buf) { 432 + remainder = xfer->len % 4; 433 + if (remainder > 0) { 434 + reg_val = 0; 440 435 memcpy(&reg_val, xfer->tx_buf + (cnt * 4), remainder); 441 436 writel(reg_val, mdata->base + SPI_TX_DATA_REG); 442 - } 443 - if (xfer->rx_buf) { 444 - reg_val = readl(mdata->base + SPI_RX_DATA_REG); 445 - memcpy(xfer->rx_buf + (cnt * 4), &reg_val, remainder); 446 437 } 447 438 } 448 439
+8
drivers/spi/spi-mux.c
··· 167 167 return ret; 168 168 } 169 169 170 + static const struct spi_device_id spi_mux_id[] = { 171 + { "spi-mux" }, 172 + { } 173 + }; 174 + MODULE_DEVICE_TABLE(spi, spi_mux_id); 175 + 170 176 static const struct of_device_id spi_mux_of_match[] = { 171 177 { .compatible = "spi-mux" }, 172 178 { } 173 179 }; 180 + MODULE_DEVICE_TABLE(of, spi_mux_of_match); 174 181 175 182 static struct spi_driver spi_mux_driver = { 176 183 .probe = spi_mux_probe, ··· 185 178 .name = "spi-mux", 186 179 .of_match_table = spi_mux_of_match, 187 180 }, 181 + .id_table = spi_mux_id, 188 182 }; 189 183 190 184 module_spi_driver(spi_mux_driver);
+4
drivers/spi/spi.c
··· 58 58 const struct spi_device *spi = to_spi_device(dev); 59 59 int len; 60 60 61 + len = of_device_modalias(dev, buf, PAGE_SIZE); 62 + if (len != -ENODEV) 63 + return len; 64 + 61 65 len = acpi_device_modalias(dev, buf, PAGE_SIZE - 1); 62 66 if (len != -ENODEV) 63 67 return len;
-1
drivers/staging/mt7621-pci/pci-mt7621.c
··· 422 422 dev_err(dev, "pcie%d no card, disable it (RST & CLK)\n", 423 423 slot); 424 424 mt7621_control_assert(port); 425 - clk_disable_unprepare(port->clk); 426 425 port->enabled = false; 427 426 428 427 if (slot == 0) {
+20 -10
drivers/staging/rtl8712/hal_init.c
··· 29 29 #define FWBUFF_ALIGN_SZ 512 30 30 #define MAX_DUMP_FWSZ (48 * 1024) 31 31 32 + static void rtl871x_load_fw_fail(struct _adapter *adapter) 33 + { 34 + struct usb_device *udev = adapter->dvobjpriv.pusbdev; 35 + struct device *dev = &udev->dev; 36 + struct device *parent = dev->parent; 37 + 38 + complete(&adapter->rtl8712_fw_ready); 39 + 40 + dev_err(&udev->dev, "r8712u: Firmware request failed\n"); 41 + 42 + if (parent) 43 + device_lock(parent); 44 + 45 + device_release_driver(dev); 46 + 47 + if (parent) 48 + device_unlock(parent); 49 + } 50 + 32 51 static void rtl871x_load_fw_cb(const struct firmware *firmware, void *context) 33 52 { 34 53 struct _adapter *adapter = context; 35 54 36 55 if (!firmware) { 37 - struct usb_device *udev = adapter->dvobjpriv.pusbdev; 38 - struct usb_interface *usb_intf = adapter->pusb_intf; 39 - 40 - dev_err(&udev->dev, "r8712u: Firmware request failed\n"); 41 - usb_put_dev(udev); 42 - usb_set_intfdata(usb_intf, NULL); 43 - r8712_free_drv_sw(adapter); 44 - adapter->dvobj_deinit(adapter); 45 - complete(&adapter->rtl8712_fw_ready); 46 - free_netdev(adapter->pnetdev); 56 + rtl871x_load_fw_fail(adapter); 47 57 return; 48 58 } 49 59 adapter->fw = firmware;
+8
drivers/staging/rtl8712/rtl8712_led.c
··· 1820 1820 break; 1821 1821 } 1822 1822 } 1823 + 1824 + void r8712_flush_led_works(struct _adapter *padapter) 1825 + { 1826 + struct led_priv *pledpriv = &padapter->ledpriv; 1827 + 1828 + flush_work(&pledpriv->SwLed0.BlinkWorkItem); 1829 + flush_work(&pledpriv->SwLed1.BlinkWorkItem); 1830 + }
+1
drivers/staging/rtl8712/rtl871x_led.h
··· 112 112 void r8712_InitSwLeds(struct _adapter *padapter); 113 113 void r8712_DeInitSwLeds(struct _adapter *padapter); 114 114 void LedControl871x(struct _adapter *padapter, enum LED_CTL_MODE LedAction); 115 + void r8712_flush_led_works(struct _adapter *padapter); 115 116 116 117 #endif 117 118
+8
drivers/staging/rtl8712/rtl871x_pwrctrl.c
··· 224 224 } 225 225 mutex_unlock(&pwrctrl->mutex_lock); 226 226 } 227 + 228 + void r8712_flush_rwctrl_works(struct _adapter *padapter) 229 + { 230 + struct pwrctrl_priv *pwrctrl = &padapter->pwrctrlpriv; 231 + 232 + flush_work(&pwrctrl->SetPSModeWorkItem); 233 + flush_work(&pwrctrl->rpwm_workitem); 234 + }
+1
drivers/staging/rtl8712/rtl871x_pwrctrl.h
··· 108 108 void r8712_set_ps_mode(struct _adapter *padapter, uint ps_mode, 109 109 uint smart_ps); 110 110 void r8712_set_rpwm(struct _adapter *padapter, u8 val8); 111 + void r8712_flush_rwctrl_works(struct _adapter *padapter); 111 112 112 113 #endif /* __RTL871X_PWRCTRL_H_ */
+21 -26
drivers/staging/rtl8712/usb_intf.c
··· 591 591 { 592 592 struct net_device *pnetdev = usb_get_intfdata(pusb_intf); 593 593 struct usb_device *udev = interface_to_usbdev(pusb_intf); 594 + struct _adapter *padapter = netdev_priv(pnetdev); 594 595 595 - if (pnetdev) { 596 - struct _adapter *padapter = netdev_priv(pnetdev); 596 + /* never exit with a firmware callback pending */ 597 + wait_for_completion(&padapter->rtl8712_fw_ready); 598 + usb_set_intfdata(pusb_intf, NULL); 599 + release_firmware(padapter->fw); 600 + if (drvpriv.drv_registered) 601 + padapter->surprise_removed = true; 602 + if (pnetdev->reg_state != NETREG_UNINITIALIZED) 603 + unregister_netdev(pnetdev); /* will call netdev_close() */ 604 + r8712_flush_rwctrl_works(padapter); 605 + r8712_flush_led_works(padapter); 606 + udelay(1); 607 + /* Stop driver mlme relation timer */ 608 + r8712_stop_drv_timers(padapter); 609 + r871x_dev_unload(padapter); 610 + r8712_free_drv_sw(padapter); 611 + free_netdev(pnetdev); 597 612 598 - /* never exit with a firmware callback pending */ 599 - wait_for_completion(&padapter->rtl8712_fw_ready); 600 - pnetdev = usb_get_intfdata(pusb_intf); 601 - usb_set_intfdata(pusb_intf, NULL); 602 - if (!pnetdev) 603 - goto firmware_load_fail; 604 - release_firmware(padapter->fw); 605 - if (drvpriv.drv_registered) 606 - padapter->surprise_removed = true; 607 - if (pnetdev->reg_state != NETREG_UNINITIALIZED) 608 - unregister_netdev(pnetdev); /* will call netdev_close() */ 609 - flush_scheduled_work(); 610 - udelay(1); 611 - /* Stop driver mlme relation timer */ 612 - r8712_stop_drv_timers(padapter); 613 - r871x_dev_unload(padapter); 614 - r8712_free_drv_sw(padapter); 615 - free_netdev(pnetdev); 613 + /* decrease the reference count of the usb device structure 614 + * when disconnect 615 + */ 616 + usb_put_dev(udev); 616 617 617 - /* decrease the reference count of the usb device structure 618 - * when disconnect 619 - */ 620 - usb_put_dev(udev); 621 - } 622 - firmware_load_fail: 623 618 /* If we didn't unplug usb dongle and remove/insert module, driver 624 619 * fails on sitesurvey for the first time when device is up. 625 620 * Reset usb port for sitesurvey fail issue.
+1
drivers/staging/rtl8723bs/Kconfig
··· 5 5 depends on m 6 6 select WIRELESS_EXT 7 7 select WEXT_PRIV 8 + select CRYPTO_LIB_ARC4 8 9 help 9 10 This option enables support for RTL8723BS SDIO drivers, such as 10 11 the wifi found on the 1st gen Intel Compute Stick, the CHIP
+2
drivers/staging/rtl8723bs/hal/sdio_ops.c
··· 909 909 } else { 910 910 rtw_c2h_wk_cmd(adapter, (u8 *)c2h_evt); 911 911 } 912 + } else { 913 + kfree(c2h_evt); 912 914 } 913 915 } else { 914 916 /* Error handling for malloc fail */
+34 -4
drivers/tee/optee/call.c
··· 184 184 struct optee_msg_arg *ma; 185 185 186 186 shm = tee_shm_alloc(ctx, OPTEE_MSG_GET_ARG_SIZE(num_params), 187 - TEE_SHM_MAPPED); 187 + TEE_SHM_MAPPED | TEE_SHM_PRIV); 188 188 if (IS_ERR(shm)) 189 189 return shm; 190 190 ··· 416 416 } 417 417 418 418 /** 419 - * optee_disable_shm_cache() - Disables caching of some shared memory allocation 420 - * in OP-TEE 419 + * __optee_disable_shm_cache() - Disables caching of some shared memory 420 + * allocation in OP-TEE 421 421 * @optee: main service struct 422 + * @is_mapped: true if the cached shared memory addresses were mapped by this 423 + * kernel, are safe to dereference, and should be freed 422 424 */ 423 - void optee_disable_shm_cache(struct optee *optee) 425 + static void __optee_disable_shm_cache(struct optee *optee, bool is_mapped) 424 426 { 425 427 struct optee_call_waiter w; 426 428 ··· 441 439 if (res.result.status == OPTEE_SMC_RETURN_OK) { 442 440 struct tee_shm *shm; 443 441 442 + /* 443 + * Shared memory references that were not mapped by 444 + * this kernel must be ignored to prevent a crash. 445 + */ 446 + if (!is_mapped) 447 + continue; 448 + 444 449 shm = reg_pair_to_ptr(res.result.shm_upper32, 445 450 res.result.shm_lower32); 446 451 tee_shm_free(shm); ··· 456 447 } 457 448 } 458 449 optee_cq_wait_final(&optee->call_queue, &w); 450 + } 451 + 452 + /** 453 + * optee_disable_shm_cache() - Disables caching of mapped shared memory 454 + * allocations in OP-TEE 455 + * @optee: main service struct 456 + */ 457 + void optee_disable_shm_cache(struct optee *optee) 458 + { 459 + return __optee_disable_shm_cache(optee, true); 460 + } 461 + 462 + /** 463 + * optee_disable_unmapped_shm_cache() - Disables caching of shared memory 464 + * allocations in OP-TEE which are not 465 + * currently mapped 466 + * @optee: main service struct 467 + */ 468 + void optee_disable_unmapped_shm_cache(struct optee *optee) 469 + { 470 + return __optee_disable_shm_cache(optee, false); 459 471 } 460 472 461 473 #define PAGELIST_ENTRIES_PER_PAGE \
+42 -1
drivers/tee/optee/core.c
··· 6 6 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 7 7 8 8 #include <linux/arm-smccc.h> 9 + #include <linux/crash_dump.h> 9 10 #include <linux/errno.h> 10 11 #include <linux/io.h> 11 12 #include <linux/module.h> ··· 278 277 if (!ctxdata) 279 278 return; 280 279 281 - shm = tee_shm_alloc(ctx, sizeof(struct optee_msg_arg), TEE_SHM_MAPPED); 280 + shm = tee_shm_alloc(ctx, sizeof(struct optee_msg_arg), 281 + TEE_SHM_MAPPED | TEE_SHM_PRIV); 282 282 if (!IS_ERR(shm)) { 283 283 arg = tee_shm_get_va(shm, 0); 284 284 /* ··· 574 572 return ERR_PTR(-EINVAL); 575 573 } 576 574 575 + /* optee_remove - Device Removal Routine 576 + * @pdev: platform device information struct 577 + * 578 + * optee_remove is called by platform subsystem to alert the driver 579 + * that it should release the device 580 + */ 581 + 577 582 static int optee_remove(struct platform_device *pdev) 578 583 { 579 584 struct optee *optee = platform_get_drvdata(pdev); ··· 611 602 return 0; 612 603 } 613 604 605 + /* optee_shutdown - Device Removal Routine 606 + * @pdev: platform device information struct 607 + * 608 + * platform_shutdown is called by the platform subsystem to alert 609 + * the driver that a shutdown, reboot, or kexec is happening and 610 + * device must be disabled. 611 + */ 612 + static void optee_shutdown(struct platform_device *pdev) 613 + { 614 + optee_disable_shm_cache(platform_get_drvdata(pdev)); 615 + } 616 + 614 617 static int optee_probe(struct platform_device *pdev) 615 618 { 616 619 optee_invoke_fn *invoke_fn; ··· 632 611 struct tee_device *teedev; 633 612 u32 sec_caps; 634 613 int rc; 614 + 615 + /* 616 + * The kernel may have crashed at the same time that all available 617 + * secure world threads were suspended and we cannot reschedule the 618 + * suspended threads without access to the crashed kernel's wait_queue. 619 + * Therefore, we cannot reliably initialize the OP-TEE driver in the 620 + * kdump kernel. 621 + */ 622 + if (is_kdump_kernel()) 623 + return -ENODEV; 635 624 636 625 invoke_fn = get_invoke_func(&pdev->dev); 637 626 if (IS_ERR(invoke_fn)) ··· 717 686 optee->memremaped_shm = memremaped_shm; 718 687 optee->pool = pool; 719 688 689 + /* 690 + * Ensure that there are no pre-existing shm objects before enabling 691 + * the shm cache so that there's no chance of receiving an invalid 692 + * address during shutdown. This could occur, for example, if we're 693 + * kexec booting from an older kernel that did not properly cleanup the 694 + * shm cache. 695 + */ 696 + optee_disable_unmapped_shm_cache(optee); 697 + 720 698 optee_enable_shm_cache(optee); 721 699 722 700 if (optee->sec_caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) ··· 768 728 static struct platform_driver optee_driver = { 769 729 .probe = optee_probe, 770 730 .remove = optee_remove, 731 + .shutdown = optee_shutdown, 771 732 .driver = { 772 733 .name = "optee", 773 734 .of_match_table = optee_dt_match,
+1
drivers/tee/optee/optee_private.h
··· 159 159 160 160 void optee_enable_shm_cache(struct optee *optee); 161 161 void optee_disable_shm_cache(struct optee *optee); 162 + void optee_disable_unmapped_shm_cache(struct optee *optee); 162 163 163 164 int optee_shm_register(struct tee_context *ctx, struct tee_shm *shm, 164 165 struct page **pages, size_t num_pages,
+3 -2
drivers/tee/optee/rpc.c
··· 314 314 shm = cmd_alloc_suppl(ctx, sz); 315 315 break; 316 316 case OPTEE_RPC_SHM_TYPE_KERNEL: 317 - shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED); 317 + shm = tee_shm_alloc(ctx, sz, TEE_SHM_MAPPED | TEE_SHM_PRIV); 318 318 break; 319 319 default: 320 320 arg->ret = TEEC_ERROR_BAD_PARAMETERS; ··· 502 502 503 503 switch (OPTEE_SMC_RETURN_GET_RPC_FUNC(param->a0)) { 504 504 case OPTEE_SMC_RPC_FUNC_ALLOC: 505 - shm = tee_shm_alloc(ctx, param->a1, TEE_SHM_MAPPED); 505 + shm = tee_shm_alloc(ctx, param->a1, 506 + TEE_SHM_MAPPED | TEE_SHM_PRIV); 506 507 if (!IS_ERR(shm) && !tee_shm_get_pa(shm, 0, &pa)) { 507 508 reg_pair_from_64(&param->a1, &param->a2, pa); 508 509 reg_pair_from_64(&param->a4, &param->a5,
+16 -4
drivers/tee/optee/shm_pool.c
··· 27 27 shm->paddr = page_to_phys(page); 28 28 shm->size = PAGE_SIZE << order; 29 29 30 - if (shm->flags & TEE_SHM_DMA_BUF) { 30 + /* 31 + * Shared memory private to the OP-TEE driver doesn't need 32 + * to be registered with OP-TEE. 33 + */ 34 + if (!(shm->flags & TEE_SHM_PRIV)) { 31 35 unsigned int nr_pages = 1 << order, i; 32 36 struct page **pages; 33 37 34 38 pages = kcalloc(nr_pages, sizeof(pages), GFP_KERNEL); 35 - if (!pages) 36 - return -ENOMEM; 39 + if (!pages) { 40 + rc = -ENOMEM; 41 + goto err; 42 + } 37 43 38 44 for (i = 0; i < nr_pages; i++) { 39 45 pages[i] = page; ··· 50 44 rc = optee_shm_register(shm->ctx, shm, pages, nr_pages, 51 45 (unsigned long)shm->kaddr); 52 46 kfree(pages); 47 + if (rc) 48 + goto err; 53 49 } 54 50 51 + return 0; 52 + 53 + err: 54 + __free_pages(page, order); 55 55 return rc; 56 56 } 57 57 58 58 static void pool_op_free(struct tee_shm_pool_mgr *poolm, 59 59 struct tee_shm *shm) 60 60 { 61 - if (shm->flags & TEE_SHM_DMA_BUF) 61 + if (!(shm->flags & TEE_SHM_PRIV)) 62 62 optee_shm_unregister(shm->ctx, shm); 63 63 64 64 free_pages((unsigned long)shm->kaddr, get_order(shm->size));
+19 -1
drivers/tee/tee_shm.c
··· 117 117 return ERR_PTR(-EINVAL); 118 118 } 119 119 120 - if ((flags & ~(TEE_SHM_MAPPED | TEE_SHM_DMA_BUF))) { 120 + if ((flags & ~(TEE_SHM_MAPPED | TEE_SHM_DMA_BUF | TEE_SHM_PRIV))) { 121 121 dev_err(teedev->dev.parent, "invalid shm flags 0x%x", flags); 122 122 return ERR_PTR(-EINVAL); 123 123 } ··· 192 192 return ret; 193 193 } 194 194 EXPORT_SYMBOL_GPL(tee_shm_alloc); 195 + 196 + /** 197 + * tee_shm_alloc_kernel_buf() - Allocate shared memory for kernel buffer 198 + * @ctx: Context that allocates the shared memory 199 + * @size: Requested size of shared memory 200 + * 201 + * The returned memory registered in secure world and is suitable to be 202 + * passed as a memory buffer in parameter argument to 203 + * tee_client_invoke_func(). The memory allocated is later freed with a 204 + * call to tee_shm_free(). 205 + * 206 + * @returns a pointer to 'struct tee_shm' 207 + */ 208 + struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size) 209 + { 210 + return tee_shm_alloc(ctx, size, TEE_SHM_MAPPED); 211 + } 212 + EXPORT_SYMBOL_GPL(tee_shm_alloc_kernel_buf); 195 213 196 214 struct tee_shm *tee_shm_register(struct tee_context *ctx, unsigned long addr, 197 215 size_t length, u32 flags)
+1 -14
drivers/thunderbolt/switch.c
··· 1875 1875 NULL, 1876 1876 }; 1877 1877 1878 - static bool has_port(const struct tb_switch *sw, enum tb_port_type type) 1879 - { 1880 - const struct tb_port *port; 1881 - 1882 - tb_switch_for_each_port(sw, port) { 1883 - if (!port->disabled && port->config.type == type) 1884 - return true; 1885 - } 1886 - 1887 - return false; 1888 - } 1889 - 1890 1878 static umode_t switch_attr_is_visible(struct kobject *kobj, 1891 1879 struct attribute *attr, int n) 1892 1880 { ··· 1883 1895 1884 1896 if (attr == &dev_attr_authorized.attr) { 1885 1897 if (sw->tb->security_level == TB_SECURITY_NOPCIE || 1886 - sw->tb->security_level == TB_SECURITY_DPONLY || 1887 - !has_port(sw, TB_TYPE_PCIE_UP)) 1898 + sw->tb->security_level == TB_SECURITY_DPONLY) 1888 1899 return 0; 1889 1900 } else if (attr == &dev_attr_device.attr) { 1890 1901 if (!sw->device)
+3 -2
drivers/tty/serial/8250/8250_aspeed_vuart.c
··· 329 329 { 330 330 struct uart_8250_port *up = up_to_u8250p(port); 331 331 unsigned int iir, lsr; 332 + unsigned long flags; 332 333 unsigned int space, count; 333 334 334 335 iir = serial_port_in(port, UART_IIR); ··· 337 336 if (iir & UART_IIR_NO_INT) 338 337 return 0; 339 338 340 - spin_lock(&port->lock); 339 + spin_lock_irqsave(&port->lock, flags); 341 340 342 341 lsr = serial_port_in(port, UART_LSR); 343 342 ··· 371 370 if (lsr & UART_LSR_THRE) 372 371 serial8250_tx_chars(up); 373 372 374 - uart_unlock_and_check_sysrq(port); 373 + uart_unlock_and_check_sysrq_irqrestore(port, flags); 375 374 376 375 return 1; 377 376 }
+3 -2
drivers/tty/serial/8250/8250_fsl.c
··· 30 30 int fsl8250_handle_irq(struct uart_port *port) 31 31 { 32 32 unsigned char lsr, orig_lsr; 33 + unsigned long flags; 33 34 unsigned int iir; 34 35 struct uart_8250_port *up = up_to_u8250p(port); 35 36 36 - spin_lock(&up->port.lock); 37 + spin_lock_irqsave(&up->port.lock, flags); 37 38 38 39 iir = port->serial_in(port, UART_IIR); 39 40 if (iir & UART_IIR_NO_INT) { ··· 83 82 84 83 up->lsr_saved_flags = orig_lsr; 85 84 86 - uart_unlock_and_check_sysrq(&up->port); 85 + uart_unlock_and_check_sysrq_irqrestore(&up->port, flags); 87 86 88 87 return 1; 89 88 }
+5
drivers/tty/serial/8250/8250_mtk.c
··· 93 93 struct dma_tx_state state; 94 94 int copied, total, cnt; 95 95 unsigned char *ptr; 96 + unsigned long flags; 96 97 97 98 if (data->rx_status == DMA_RX_SHUTDOWN) 98 99 return; 100 + 101 + spin_lock_irqsave(&up->port.lock, flags); 99 102 100 103 dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state); 101 104 total = dma->rx_size - state.residue; ··· 123 120 tty_flip_buffer_push(tty_port); 124 121 125 122 mtk8250_rx_dma(up); 123 + 124 + spin_unlock_irqrestore(&up->port.lock, flags); 126 125 } 127 126 128 127 static void mtk8250_rx_dma(struct uart_8250_port *up)
+7
drivers/tty/serial/8250/8250_pci.c
··· 3836 3836 { PCI_VDEVICE(INTEL, 0x0f0c), }, 3837 3837 { PCI_VDEVICE(INTEL, 0x228a), }, 3838 3838 { PCI_VDEVICE(INTEL, 0x228c), }, 3839 + { PCI_VDEVICE(INTEL, 0x4b96), }, 3840 + { PCI_VDEVICE(INTEL, 0x4b97), }, 3841 + { PCI_VDEVICE(INTEL, 0x4b98), }, 3842 + { PCI_VDEVICE(INTEL, 0x4b99), }, 3843 + { PCI_VDEVICE(INTEL, 0x4b9a), }, 3844 + { PCI_VDEVICE(INTEL, 0x4b9b), }, 3839 3845 { PCI_VDEVICE(INTEL, 0x9ce3), }, 3840 3846 { PCI_VDEVICE(INTEL, 0x9ce4), }, 3841 3847 ··· 4002 3996 if (pci_match_id(pci_use_msi, dev)) { 4003 3997 dev_dbg(&dev->dev, "Using MSI(-X) interrupts\n"); 4004 3998 pci_set_master(dev); 3999 + uart.port.flags &= ~UPF_SHARE_IRQ; 4005 4000 rc = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_ALL_TYPES); 4006 4001 } else { 4007 4002 dev_dbg(&dev->dev, "Using legacy interrupts\n");
+12 -5
drivers/tty/serial/8250/8250_port.c
··· 311 311 /* Uart divisor latch read */ 312 312 static int default_serial_dl_read(struct uart_8250_port *up) 313 313 { 314 - return serial_in(up, UART_DLL) | serial_in(up, UART_DLM) << 8; 314 + /* Assign these in pieces to truncate any bits above 7. */ 315 + unsigned char dll = serial_in(up, UART_DLL); 316 + unsigned char dlm = serial_in(up, UART_DLM); 317 + 318 + return dll | dlm << 8; 315 319 } 316 320 317 321 /* Uart divisor latch write */ ··· 1301 1297 serial_out(up, UART_LCR, 0); 1302 1298 1303 1299 serial_out(up, UART_FCR, UART_FCR_ENABLE_FIFO); 1304 - scratch = serial_in(up, UART_IIR) >> 6; 1305 1300 1306 - switch (scratch) { 1301 + /* Assign this as it is to truncate any bits above 7. */ 1302 + scratch = serial_in(up, UART_IIR); 1303 + 1304 + switch (scratch >> 6) { 1307 1305 case 0: 1308 1306 autoconfig_8250(up); 1309 1307 break; ··· 1899 1893 unsigned char status; 1900 1894 struct uart_8250_port *up = up_to_u8250p(port); 1901 1895 bool skip_rx = false; 1896 + unsigned long flags; 1902 1897 1903 1898 if (iir & UART_IIR_NO_INT) 1904 1899 return 0; 1905 1900 1906 - spin_lock(&port->lock); 1901 + spin_lock_irqsave(&port->lock, flags); 1907 1902 1908 1903 status = serial_port_in(port, UART_LSR); 1909 1904 ··· 1930 1923 (up->ier & UART_IER_THRI)) 1931 1924 serial8250_tx_chars(up); 1932 1925 1933 - uart_unlock_and_check_sysrq(port); 1926 + uart_unlock_and_check_sysrq_irqrestore(port, flags); 1934 1927 1935 1928 return 1; 1936 1929 }
+1 -1
drivers/tty/serial/fsl_lpuart.c
··· 1415 1415 1416 1416 static unsigned int lpuart32_get_mctrl(struct uart_port *port) 1417 1417 { 1418 - unsigned int mctrl = 0; 1418 + unsigned int mctrl = TIOCM_CAR | TIOCM_DSR | TIOCM_CTS; 1419 1419 u32 reg; 1420 1420 1421 1421 reg = lpuart32_read(port, UARTCTRL);
+2 -1
drivers/tty/serial/max310x.c
··· 1293 1293 freq = uartclk; 1294 1294 if (freq == 0) { 1295 1295 dev_err(dev, "Cannot get clock rate\n"); 1296 - return -EINVAL; 1296 + ret = -EINVAL; 1297 + goto out_clk; 1297 1298 } 1298 1299 1299 1300 if (xtal) {
+4 -2
drivers/tty/serial/serial-tegra.c
··· 1045 1045 1046 1046 if (tup->cdata->fifo_mode_enable_status) { 1047 1047 ret = tegra_uart_wait_fifo_mode_enabled(tup); 1048 - dev_err(tup->uport.dev, "FIFO mode not enabled\n"); 1049 - if (ret < 0) 1048 + if (ret < 0) { 1049 + dev_err(tup->uport.dev, 1050 + "Failed to enable FIFO mode: %d\n", ret); 1050 1051 return ret; 1052 + } 1051 1053 } else { 1052 1054 /* 1053 1055 * For all tegra devices (up to t210), there is a hardware
+1
drivers/usb/cdns3/cdns3-ep0.c
··· 731 731 request->actual = 0; 732 732 priv_dev->status_completion_no_call = true; 733 733 priv_dev->pending_status_request = request; 734 + usb_gadget_set_state(&priv_dev->gadget, USB_STATE_CONFIGURED); 734 735 spin_unlock_irqrestore(&priv_dev->lock, flags); 735 736 736 737 /*
+1 -1
drivers/usb/cdns3/cdnsp-gadget.c
··· 1882 1882 pdev->gadget.name = "cdnsp-gadget"; 1883 1883 pdev->gadget.speed = USB_SPEED_UNKNOWN; 1884 1884 pdev->gadget.sg_supported = 1; 1885 - pdev->gadget.max_speed = USB_SPEED_SUPER_PLUS; 1885 + pdev->gadget.max_speed = max_speed; 1886 1886 pdev->gadget.lpm_capable = 1; 1887 1887 1888 1888 pdev->setup_buf = kzalloc(CDNSP_EP0_SETUP_SIZE, GFP_KERNEL);
+2 -2
drivers/usb/cdns3/cdnsp-gadget.h
··· 383 383 #define IMAN_IE BIT(1) 384 384 #define IMAN_IP BIT(0) 385 385 /* bits 2:31 need to be preserved */ 386 - #define IMAN_IE_SET(p) (((p) & IMAN_IE) | 0x2) 387 - #define IMAN_IE_CLEAR(p) (((p) & IMAN_IE) & ~(0x2)) 386 + #define IMAN_IE_SET(p) ((p) | IMAN_IE) 387 + #define IMAN_IE_CLEAR(p) ((p) & ~IMAN_IE) 388 388 389 389 /* IMOD - Interrupter Moderation Register - irq_control bitmasks. */ 390 390 /*
+8 -10
drivers/usb/cdns3/cdnsp-ring.c
··· 1932 1932 } 1933 1933 1934 1934 if (enqd_len + trb_buff_len >= full_len) { 1935 - if (need_zero_pkt && zero_len_trb) { 1936 - zero_len_trb = true; 1937 - } else { 1938 - field &= ~TRB_CHAIN; 1939 - field |= TRB_IOC; 1940 - more_trbs_coming = false; 1941 - need_zero_pkt = false; 1942 - preq->td.last_trb = ring->enqueue; 1943 - } 1935 + if (need_zero_pkt) 1936 + zero_len_trb = !zero_len_trb; 1937 + 1938 + field &= ~TRB_CHAIN; 1939 + field |= TRB_IOC; 1940 + more_trbs_coming = false; 1941 + preq->td.last_trb = ring->enqueue; 1944 1942 } 1945 1943 1946 1944 /* Only set interrupt on short packet for OUT endpoints. */ ··· 1953 1955 length_field = TRB_LEN(trb_buff_len) | TRB_TD_SIZE(remainder) | 1954 1956 TRB_INTR_TARGET(0); 1955 1957 1956 - cdnsp_queue_trb(pdev, ring, more_trbs_coming | need_zero_pkt, 1958 + cdnsp_queue_trb(pdev, ring, more_trbs_coming | zero_len_trb, 1957 1959 lower_32_bits(send_addr), 1958 1960 upper_32_bits(send_addr), 1959 1961 length_field,
+1 -8
drivers/usb/class/usbtmc.c
··· 2324 2324 dev_err(dev, "overflow with length %d, actual length is %d\n", 2325 2325 data->iin_wMaxPacketSize, urb->actual_length); 2326 2326 fallthrough; 2327 - case -ECONNRESET: 2328 - case -ENOENT: 2329 - case -ESHUTDOWN: 2330 - case -EILSEQ: 2331 - case -ETIME: 2332 - case -EPIPE: 2327 + default: 2333 2328 /* urb terminated, clean up */ 2334 2329 dev_dbg(dev, "urb terminated, status: %d\n", status); 2335 2330 return; 2336 - default: 2337 - dev_err(dev, "unknown status received: %d\n", status); 2338 2331 } 2339 2332 exit: 2340 2333 rv = usb_submit_urb(urb, GFP_ATOMIC);
+5 -1
drivers/usb/common/usb-otg-fsm.c
··· 193 193 if (!fsm->host_req_flag) 194 194 return; 195 195 196 - INIT_DELAYED_WORK(&fsm->hnp_polling_work, otg_hnp_polling_work); 196 + if (!fsm->hnp_work_inited) { 197 + INIT_DELAYED_WORK(&fsm->hnp_polling_work, otg_hnp_polling_work); 198 + fsm->hnp_work_inited = true; 199 + } 200 + 197 201 schedule_delayed_work(&fsm->hnp_polling_work, 198 202 msecs_to_jiffies(T_HOST_REQ_POLL)); 199 203 }
+27 -2
drivers/usb/dwc3/gadget.c
··· 1741 1741 { 1742 1742 struct dwc3_request *req; 1743 1743 struct dwc3_request *tmp; 1744 + struct list_head local; 1744 1745 struct dwc3 *dwc = dep->dwc; 1745 1746 1746 - list_for_each_entry_safe(req, tmp, &dep->cancelled_list, list) { 1747 + restart: 1748 + list_replace_init(&dep->cancelled_list, &local); 1749 + 1750 + list_for_each_entry_safe(req, tmp, &local, list) { 1747 1751 dwc3_gadget_ep_skip_trbs(dep, req); 1748 1752 switch (req->status) { 1749 1753 case DWC3_REQUEST_STATUS_DISCONNECTED: ··· 1765 1761 break; 1766 1762 } 1767 1763 } 1764 + 1765 + if (!list_empty(&dep->cancelled_list)) 1766 + goto restart; 1768 1767 } 1769 1768 1770 1769 static int dwc3_gadget_ep_dequeue(struct usb_ep *ep, ··· 2254 2247 dev_err(dwc->dev, "timed out waiting for SETUP phase\n"); 2255 2248 return -ETIMEDOUT; 2256 2249 } 2250 + } 2251 + 2252 + /* 2253 + * Avoid issuing a runtime resume if the device is already in the 2254 + * suspended state during gadget disconnect. DWC3 gadget was already 2255 + * halted/stopped during runtime suspend. 2256 + */ 2257 + if (!is_on) { 2258 + pm_runtime_barrier(dwc->dev); 2259 + if (pm_runtime_suspended(dwc->dev)) 2260 + return 0; 2257 2261 } 2258 2262 2259 2263 /* ··· 2976 2958 { 2977 2959 struct dwc3_request *req; 2978 2960 struct dwc3_request *tmp; 2961 + struct list_head local; 2979 2962 2980 - list_for_each_entry_safe(req, tmp, &dep->started_list, list) { 2963 + restart: 2964 + list_replace_init(&dep->started_list, &local); 2965 + 2966 + list_for_each_entry_safe(req, tmp, &local, list) { 2981 2967 int ret; 2982 2968 2983 2969 ret = dwc3_gadget_ep_cleanup_completed_request(dep, event, ··· 2989 2967 if (ret) 2990 2968 break; 2991 2969 } 2970 + 2971 + if (!list_empty(&dep->started_list)) 2972 + goto restart; 2992 2973 } 2993 2974 2994 2975 static bool dwc3_gadget_ep_should_continue(struct dwc3_ep *dep)
+39 -7
drivers/usb/gadget/function/f_hid.c
··· 41 41 unsigned char bInterfaceSubClass; 42 42 unsigned char bInterfaceProtocol; 43 43 unsigned char protocol; 44 + unsigned char idle; 44 45 unsigned short report_desc_length; 45 46 char *report_desc; 46 47 unsigned short report_length; ··· 339 338 340 339 spin_lock_irqsave(&hidg->write_spinlock, flags); 341 340 341 + if (!hidg->req) { 342 + spin_unlock_irqrestore(&hidg->write_spinlock, flags); 343 + return -ESHUTDOWN; 344 + } 345 + 342 346 #define WRITE_COND (!hidg->write_pending) 343 347 try_again: 344 348 /* write queue */ ··· 364 358 count = min_t(unsigned, count, hidg->report_length); 365 359 366 360 spin_unlock_irqrestore(&hidg->write_spinlock, flags); 367 - status = copy_from_user(req->buf, buffer, count); 368 361 362 + if (!req) { 363 + ERROR(hidg->func.config->cdev, "hidg->req is NULL\n"); 364 + status = -ESHUTDOWN; 365 + goto release_write_pending; 366 + } 367 + 368 + status = copy_from_user(req->buf, buffer, count); 369 369 if (status != 0) { 370 370 ERROR(hidg->func.config->cdev, 371 371 "copy_from_user error\n"); ··· 399 387 400 388 spin_unlock_irqrestore(&hidg->write_spinlock, flags); 401 389 402 - status = usb_ep_queue(hidg->in_ep, req, GFP_ATOMIC); 403 - if (status < 0) { 404 - ERROR(hidg->func.config->cdev, 405 - "usb_ep_queue error on int endpoint %zd\n", status); 390 + if (!hidg->in_ep->enabled) { 391 + ERROR(hidg->func.config->cdev, "in_ep is disabled\n"); 392 + status = -ESHUTDOWN; 406 393 goto release_write_pending; 407 - } else { 408 - status = count; 409 - } 394 + } 395 + 396 + status = usb_ep_queue(hidg->in_ep, req, GFP_ATOMIC); 397 + if (status < 0) 398 + goto release_write_pending; 399 + else 400 + status = count; 410 401 411 402 return status; 412 403 release_write_pending: ··· 538 523 goto respond; 539 524 break; 540 525 526 + case ((USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8 527 + | HID_REQ_GET_IDLE): 528 + VDBG(cdev, "get_idle\n"); 529 + length = min_t(unsigned int, length, 1); 530 + ((u8 *) req->buf)[0] = hidg->idle; 531 + goto respond; 532 + break; 533 + 541 534 case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8 542 535 | HID_REQ_SET_REPORT): 543 536 VDBG(cdev, "set_report | wLength=%d\n", ctrl->wLength); ··· 567 544 goto respond; 568 545 } 569 546 goto stall; 547 + break; 548 + 549 + case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8 550 + | HID_REQ_SET_IDLE): 551 + VDBG(cdev, "set_idle\n"); 552 + length = 0; 553 + hidg->idle = value >> 8; 554 + goto respond; 570 555 break; 571 556 572 557 case ((USB_DIR_IN | USB_TYPE_STANDARD | USB_RECIP_INTERFACE) << 8 ··· 804 773 hidg_interface_desc.bInterfaceSubClass = hidg->bInterfaceSubClass; 805 774 hidg_interface_desc.bInterfaceProtocol = hidg->bInterfaceProtocol; 806 775 hidg->protocol = HID_REPORT_PROTOCOL; 776 + hidg->idle = 1; 807 777 hidg_ss_in_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length); 808 778 hidg_ss_in_comp_desc.wBytesPerInterval = 809 779 cpu_to_le16(hidg->report_length);
+10 -4
drivers/usb/gadget/udc/max3420_udc.c
··· 1255 1255 err = devm_request_irq(&spi->dev, irq, max3420_irq_handler, 0, 1256 1256 "max3420", udc); 1257 1257 if (err < 0) 1258 - return err; 1258 + goto del_gadget; 1259 1259 1260 1260 udc->thread_task = kthread_create(max3420_thread, udc, 1261 1261 "max3420-thread"); 1262 - if (IS_ERR(udc->thread_task)) 1263 - return PTR_ERR(udc->thread_task); 1262 + if (IS_ERR(udc->thread_task)) { 1263 + err = PTR_ERR(udc->thread_task); 1264 + goto del_gadget; 1265 + } 1264 1266 1265 1267 irq = of_irq_get_byname(spi->dev.of_node, "vbus"); 1266 1268 if (irq <= 0) { /* no vbus irq implies self-powered design */ ··· 1282 1280 err = devm_request_irq(&spi->dev, irq, 1283 1281 max3420_vbus_handler, 0, "vbus", udc); 1284 1282 if (err < 0) 1285 - return err; 1283 + goto del_gadget; 1286 1284 } 1287 1285 1288 1286 return 0; 1287 + 1288 + del_gadget: 1289 + usb_del_gadget_udc(&udc->gadget); 1290 + return err; 1289 1291 } 1290 1292 1291 1293 static int max3420_remove(struct spi_device *spi)
+5 -4
drivers/usb/host/ohci-at91.c
··· 611 611 if (ohci_at91->wakeup) 612 612 enable_irq_wake(hcd->irq); 613 613 614 - ohci_at91_port_suspend(ohci_at91->sfr_regmap, 1); 615 - 616 614 ret = ohci_suspend(hcd, ohci_at91->wakeup); 617 615 if (ret) { 618 616 if (ohci_at91->wakeup) ··· 630 632 /* flush the writes */ 631 633 (void) ohci_readl (ohci, &ohci->regs->control); 632 634 msleep(1); 635 + ohci_at91_port_suspend(ohci_at91->sfr_regmap, 1); 633 636 at91_stop_clock(ohci_at91); 637 + } else { 638 + ohci_at91_port_suspend(ohci_at91->sfr_regmap, 1); 634 639 } 635 640 636 641 return ret; ··· 645 644 struct usb_hcd *hcd = dev_get_drvdata(dev); 646 645 struct ohci_at91_priv *ohci_at91 = hcd_to_ohci_at91_priv(hcd); 647 646 647 + ohci_at91_port_suspend(ohci_at91->sfr_regmap, 0); 648 + 648 649 if (ohci_at91->wakeup) 649 650 disable_irq_wake(hcd->irq); 650 651 else 651 652 at91_start_clock(ohci_at91); 652 653 653 654 ohci_resume(hcd, false); 654 - 655 - ohci_at91_port_suspend(ohci_at91->sfr_regmap, 0); 656 655 657 656 return 0; 658 657 }
+38 -5
drivers/usb/musb/omap2430.c
··· 35 35 struct device *control_otghs; 36 36 unsigned int is_runtime_suspended:1; 37 37 unsigned int needs_resume:1; 38 + unsigned int phy_suspended:1; 38 39 }; 39 40 #define glue_to_musb(g) platform_get_drvdata(g->musb) 40 41 ··· 459 458 460 459 omap2430_low_level_exit(musb); 461 460 462 - phy_power_off(musb->phy); 463 - phy_exit(musb->phy); 461 + if (!glue->phy_suspended) { 462 + phy_power_off(musb->phy); 463 + phy_exit(musb->phy); 464 + } 464 465 465 466 glue->is_runtime_suspended = 1; 466 467 ··· 477 474 if (!musb) 478 475 return 0; 479 476 480 - phy_init(musb->phy); 481 - phy_power_on(musb->phy); 477 + if (!glue->phy_suspended) { 478 + phy_init(musb->phy); 479 + phy_power_on(musb->phy); 480 + } 482 481 483 482 omap2430_low_level_init(musb); 484 483 musb_writel(musb->mregs, OTG_INTERFSEL, ··· 494 489 return 0; 495 490 } 496 491 492 + /* I2C and SPI PHYs need to be suspended before the glue layer */ 497 493 static int omap2430_suspend(struct device *dev) 494 + { 495 + struct omap2430_glue *glue = dev_get_drvdata(dev); 496 + struct musb *musb = glue_to_musb(glue); 497 + 498 + phy_power_off(musb->phy); 499 + phy_exit(musb->phy); 500 + glue->phy_suspended = 1; 501 + 502 + return 0; 503 + } 504 + 505 + /* Glue layer needs to be suspended after musb_suspend() */ 506 + static int omap2430_suspend_late(struct device *dev) 498 507 { 499 508 struct omap2430_glue *glue = dev_get_drvdata(dev); 500 509 ··· 520 501 return omap2430_runtime_suspend(dev); 521 502 } 522 503 523 - static int omap2430_resume(struct device *dev) 504 + static int omap2430_resume_early(struct device *dev) 524 505 { 525 506 struct omap2430_glue *glue = dev_get_drvdata(dev); 526 507 ··· 532 513 return omap2430_runtime_resume(dev); 533 514 } 534 515 516 + static int omap2430_resume(struct device *dev) 517 + { 518 + struct omap2430_glue *glue = dev_get_drvdata(dev); 519 + struct musb *musb = glue_to_musb(glue); 520 + 521 + phy_init(musb->phy); 522 + phy_power_on(musb->phy); 523 + glue->phy_suspended = 0; 524 + 525 + return 0; 526 + } 527 + 535 528 static const struct dev_pm_ops omap2430_pm_ops = { 536 529 .runtime_suspend = omap2430_runtime_suspend, 537 530 .runtime_resume = omap2430_runtime_resume, 538 531 .suspend = omap2430_suspend, 532 + .suspend_late = omap2430_suspend_late, 533 + .resume_early = omap2430_resume_early, 539 534 .resume = omap2430_resume, 540 535 };
+1
drivers/usb/serial/ch341.c
··· 851 851 .owner = THIS_MODULE, 852 852 .name = "ch341-uart", 853 853 }, 854 + .bulk_in_size = 512, 854 855 .id_table = id_table, 855 856 .num_ports = 1, 856 857 .open = ch341_open,
+1
drivers/usb/serial/ftdi_sio.c
··· 219 219 { USB_DEVICE(FTDI_VID, FTDI_MTXORB_6_PID) }, 220 220 { USB_DEVICE(FTDI_VID, FTDI_R2000KU_TRUE_RNG) }, 221 221 { USB_DEVICE(FTDI_VID, FTDI_VARDAAN_PID) }, 222 + { USB_DEVICE(FTDI_VID, FTDI_AUTO_M3_OP_COM_V2_PID) }, 222 223 { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0100_PID) }, 223 224 { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0101_PID) }, 224 225 { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_0102_PID) },
+3
drivers/usb/serial/ftdi_sio_ids.h
··· 159 159 /* Vardaan Enterprises Serial Interface VEUSB422R3 */ 160 160 #define FTDI_VARDAAN_PID 0xF070 161 161 162 + /* Auto-M3 Ltd. - OP-COM USB V2 - OBD interface Adapter */ 163 + #define FTDI_AUTO_M3_OP_COM_V2_PID 0x4f50 164 + 162 165 /* 163 166 * Xsens Technologies BV products (http://www.xsens.com). 164 167 */
+2
drivers/usb/serial/option.c
··· 1203 1203 .driver_info = NCTRL(2) | RSVD(3) }, 1204 1204 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1055, 0xff), /* Telit FN980 (PCIe) */ 1205 1205 .driver_info = NCTRL(0) | RSVD(1) }, 1206 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1056, 0xff), /* Telit FD980 */ 1207 + .driver_info = NCTRL(2) | RSVD(3) }, 1206 1208 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), 1207 1209 .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) }, 1208 1210 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+26 -16
drivers/usb/serial/pl2303.c
··· 418 418 bcdDevice = le16_to_cpu(desc->bcdDevice); 419 419 bcdUSB = le16_to_cpu(desc->bcdUSB); 420 420 421 - switch (bcdDevice) { 422 - case 0x100: 423 - /* 424 - * Assume it's an HXN-type if the device doesn't support the old read 425 - * request value. 426 - */ 427 - if (bcdUSB == 0x200 && !pl2303_supports_hx_status(serial)) 428 - return TYPE_HXN; 421 + switch (bcdUSB) { 422 + case 0x110: 423 + switch (bcdDevice) { 424 + case 0x300: 425 + return TYPE_HX; 426 + case 0x400: 427 + return TYPE_HXD; 428 + default: 429 + return TYPE_HX; 430 + } 429 431 break; 430 - case 0x300: 431 - if (bcdUSB == 0x200) 432 + case 0x200: 433 + switch (bcdDevice) { 434 + case 0x100: 435 + case 0x305: 436 + /* 437 + * Assume it's an HXN-type if the device doesn't 438 + * support the old read request value. 439 + */ 440 + if (!pl2303_supports_hx_status(serial)) 441 + return TYPE_HXN; 442 + break; 443 + case 0x300: 432 444 return TYPE_TA; 433 - 434 - return TYPE_HX; 435 - case 0x400: 436 - return TYPE_HXD; 437 - case 0x500: 438 - return TYPE_TB; 445 + case 0x500: 446 + return TYPE_TB; 447 + } 448 + break; 439 449 } 440 450 441 451 dev_err(&serial->interface->dev,
+2 -2
drivers/usb/typec/tcpm/tcpm.c
··· 5369 5369 void tcpm_sink_frs(struct tcpm_port *port) 5370 5370 { 5371 5371 spin_lock(&port->pd_event_lock); 5372 - port->pd_events = TCPM_FRS_EVENT; 5372 + port->pd_events |= TCPM_FRS_EVENT; 5373 5373 spin_unlock(&port->pd_event_lock); 5374 5374 kthread_queue_work(port->wq, &port->event_work); 5375 5375 } ··· 5378 5378 void tcpm_sourcing_vbus(struct tcpm_port *port) 5379 5379 { 5380 5380 spin_lock(&port->pd_event_lock); 5381 - port->pd_events = TCPM_SOURCING_VBUS; 5381 + port->pd_events |= TCPM_SOURCING_VBUS; 5382 5382 spin_unlock(&port->pd_event_lock); 5383 5383 kthread_queue_work(port->wq, &port->event_work); 5384 5384 }
+1 -2
drivers/vdpa/mlx5/net/mlx5_vnet.c
··· 526 526 void __iomem *uar_page = ndev->mvdev.res.uar->map; 527 527 u32 out[MLX5_ST_SZ_DW(create_cq_out)]; 528 528 struct mlx5_vdpa_cq *vcq = &mvq->cq; 529 - unsigned int irqn; 530 529 __be64 *pas; 531 530 int inlen; 532 531 void *cqc; ··· 565 566 /* Use vector 0 by default. Consider adding code to choose least used 566 567 * vector. 567 568 */ 568 - err = mlx5_vector2eqn(mdev, 0, &eqn, &irqn); 569 + err = mlx5_vector2eqn(mdev, 0, &eqn); 569 570 if (err) 570 571 goto err_vec; 571 572
+8 -8
drivers/virt/acrn/vm.c
··· 64 64 test_and_set_bit(ACRN_VM_FLAG_DESTROYED, &vm->flags)) 65 65 return 0; 66 66 67 + ret = hcall_destroy_vm(vm->vmid); 68 + if (ret < 0) { 69 + dev_err(acrn_dev.this_device, 70 + "Failed to destroy VM %u\n", vm->vmid); 71 + clear_bit(ACRN_VM_FLAG_DESTROYED, &vm->flags); 72 + return ret; 73 + } 74 + 67 75 /* Remove from global VM list */ 68 76 write_lock_bh(&acrn_vm_list_lock); 69 77 list_del_init(&vm->list); ··· 84 76 if (vm->monitor_page) { 85 77 put_page(vm->monitor_page); 86 78 vm->monitor_page = NULL; 87 - } 88 - 89 - ret = hcall_destroy_vm(vm->vmid); 90 - if (ret < 0) { 91 - dev_err(acrn_dev.this_device, 92 - "Failed to destroy VM %u\n", vm->vmid); 93 - clear_bit(ACRN_VM_FLAG_DESTROYED, &vm->flags); 94 - return ret; 95 79 } 96 80 97 81 acrn_vm_all_ram_unmap(vm);
+16 -1
fs/ceph/caps.c
··· 4150 4150 4151 4151 /* 4152 4152 * Delayed work handler to process end of delayed cap release LRU list. 4153 + * 4154 + * If new caps are added to the list while processing it, these won't get 4155 + * processed in this run. In this case, the ci->i_hold_caps_max will be 4156 + * returned so that the work can be scheduled accordingly. 4153 4157 */ 4154 - void ceph_check_delayed_caps(struct ceph_mds_client *mdsc) 4158 + unsigned long ceph_check_delayed_caps(struct ceph_mds_client *mdsc) 4155 4159 { 4156 4160 struct inode *inode; 4157 4161 struct ceph_inode_info *ci; 4162 + struct ceph_mount_options *opt = mdsc->fsc->mount_options; 4163 + unsigned long delay_max = opt->caps_wanted_delay_max * HZ; 4164 + unsigned long loop_start = jiffies; 4165 + unsigned long delay = 0; 4158 4166 4159 4167 dout("check_delayed_caps\n"); 4160 4168 spin_lock(&mdsc->cap_delay_lock); ··· 4170 4162 ci = list_first_entry(&mdsc->cap_delay_list, 4171 4163 struct ceph_inode_info, 4172 4164 i_cap_delay_list); 4165 + if (time_before(loop_start, ci->i_hold_caps_max - delay_max)) { 4166 + dout("%s caps added recently. Exiting loop", __func__); 4167 + delay = ci->i_hold_caps_max; 4168 + break; 4169 + } 4173 4170 if ((ci->i_ceph_flags & CEPH_I_FLUSH) == 0 && 4174 4171 time_before(jiffies, ci->i_hold_caps_max)) 4175 4172 break; ··· 4190 4177 } 4191 4178 } 4192 4179 spin_unlock(&mdsc->cap_delay_lock); 4180 + 4181 + return delay; 4193 4182 } 4194 4183 4195 4184 /*
+16 -9
fs/ceph/mds_client.c
··· 4490 4490 } 4491 4491 4492 4492 /* 4493 - * delayed work -- periodically trim expired leases, renew caps with mds 4493 + * delayed work -- periodically trim expired leases, renew caps with mds. If 4494 + * the @delay parameter is set to 0 or if it's more than 5 secs, the default 4495 + * workqueue delay value of 5 secs will be used. 4494 4496 */ 4495 - static void schedule_delayed(struct ceph_mds_client *mdsc) 4497 + static void schedule_delayed(struct ceph_mds_client *mdsc, unsigned long delay) 4496 4498 { 4497 - int delay = 5; 4498 - unsigned hz = round_jiffies_relative(HZ * delay); 4499 - schedule_delayed_work(&mdsc->delayed_work, hz); 4499 + unsigned long max_delay = HZ * 5; 4500 + 4501 + /* 5 secs default delay */ 4502 + if (!delay || (delay > max_delay)) 4503 + delay = max_delay; 4504 + schedule_delayed_work(&mdsc->delayed_work, 4505 + round_jiffies_relative(delay)); 4500 4506 } 4501 4507 4502 4508 static void delayed_work(struct work_struct *work) 4503 4509 { 4504 - int i; 4505 4510 struct ceph_mds_client *mdsc = 4506 4511 container_of(work, struct ceph_mds_client, delayed_work.work); 4512 + unsigned long delay; 4507 4513 int renew_interval; 4508 4514 int renew_caps; 4515 + int i; 4509 4516 4510 4517 dout("mdsc delayed_work\n"); 4511 4518 ··· 4552 4545 } 4553 4546 mutex_unlock(&mdsc->mutex); 4554 4547 4555 - ceph_check_delayed_caps(mdsc); 4548 + delay = ceph_check_delayed_caps(mdsc); 4556 4549 4557 4550 ceph_queue_cap_reclaim_work(mdsc); 4558 4551 ··· 4560 4553 4561 4554 maybe_recover_session(mdsc); 4562 4555 4563 - schedule_delayed(mdsc); 4556 + schedule_delayed(mdsc, delay); 4564 4557 } 4565 4558 4566 4559 int ceph_mdsc_init(struct ceph_fs_client *fsc) ··· 5037 5030 mdsc->mdsmap->m_epoch); 5038 5031 5039 5032 mutex_unlock(&mdsc->mutex); 5040 - schedule_delayed(mdsc); 5033 + schedule_delayed(mdsc, 0); 5041 5034 return; 5042 5035 5043 5036 bad_unlock:
+17 -17
fs/ceph/snap.c
··· 67 67 { 68 68 lockdep_assert_held(&mdsc->snap_rwsem); 69 69 70 - dout("get_realm %p %d -> %d\n", realm, 71 - atomic_read(&realm->nref), atomic_read(&realm->nref)+1); 72 70 /* 73 - * since we _only_ increment realm refs or empty the empty 74 - * list with snap_rwsem held, adjusting the empty list here is 75 - * safe. we do need to protect against concurrent empty list 76 - * additions, however. 71 + * The 0->1 and 1->0 transitions must take the snap_empty_lock 72 + * atomically with the refcount change. Go ahead and bump the 73 + * nref here, unless it's 0, in which case we take the spinlock 74 + * and then do the increment and remove it from the list. 77 75 */ 78 - if (atomic_inc_return(&realm->nref) == 1) { 79 - spin_lock(&mdsc->snap_empty_lock); 76 + if (atomic_inc_not_zero(&realm->nref)) 77 + return; 78 + 79 + spin_lock(&mdsc->snap_empty_lock); 80 + if (atomic_inc_return(&realm->nref) == 1) 80 81 list_del_init(&realm->empty_item); 81 - spin_unlock(&mdsc->snap_empty_lock); 82 - } 82 + spin_unlock(&mdsc->snap_empty_lock); 83 83 } 84 84 85 85 static void __insert_snap_realm(struct rb_root *root, ··· 208 208 { 209 209 lockdep_assert_held_write(&mdsc->snap_rwsem); 210 210 211 - dout("__put_snap_realm %llx %p %d -> %d\n", realm->ino, realm, 212 - atomic_read(&realm->nref), atomic_read(&realm->nref)-1); 211 + /* 212 + * We do not require the snap_empty_lock here, as any caller that 213 + * increments the value must hold the snap_rwsem. 214 + */ 213 215 if (atomic_dec_and_test(&realm->nref)) 214 216 __destroy_snap_realm(mdsc, realm); 215 217 } 216 218 217 219 /* 218 - * caller needn't hold any locks 220 + * See comments in ceph_get_snap_realm. Caller needn't hold any locks. 
219 221 */ 220 222 void ceph_put_snap_realm(struct ceph_mds_client *mdsc, 221 223 struct ceph_snap_realm *realm) 222 224 { 223 - dout("put_snap_realm %llx %p %d -> %d\n", realm->ino, realm, 224 - atomic_read(&realm->nref), atomic_read(&realm->nref)-1); 225 - if (!atomic_dec_and_test(&realm->nref)) 225 + if (!atomic_dec_and_lock(&realm->nref, &mdsc->snap_empty_lock)) 226 226 return; 227 227 228 228 if (down_write_trylock(&mdsc->snap_rwsem)) { 229 + spin_unlock(&mdsc->snap_empty_lock); 229 230 __destroy_snap_realm(mdsc, realm); 230 231 up_write(&mdsc->snap_rwsem); 231 232 } else { 232 - spin_lock(&mdsc->snap_empty_lock); 233 233 list_add(&realm->empty_item, &mdsc->snap_empty); 234 234 spin_unlock(&mdsc->snap_empty_lock); 235 235 }
+1 -1
fs/ceph/super.h
··· 1167 1167 extern bool __ceph_should_report_size(struct ceph_inode_info *ci); 1168 1168 extern void ceph_check_caps(struct ceph_inode_info *ci, int flags, 1169 1169 struct ceph_mds_session *session); 1170 - extern void ceph_check_delayed_caps(struct ceph_mds_client *mdsc); 1170 + extern unsigned long ceph_check_delayed_caps(struct ceph_mds_client *mdsc); 1171 1171 extern void ceph_flush_dirty_caps(struct ceph_mds_client *mdsc); 1172 1172 extern int ceph_drop_caps_for_unlink(struct inode *inode); 1173 1173 extern int ceph_encode_inode_release(void **p, struct inode *inode,
-3
fs/ext4/ext4_jbd2.c
··· 244 244 * "bh" may be NULL: a metadata block may have been freed from memory 245 245 * but there may still be a record of it in the journal, and that record 246 246 * still needs to be revoked. 247 - * 248 - * If the handle isn't valid we're not journaling, but we still need to 249 - * call into ext4_journal_revoke() to put the buffer head. 250 247 */ 251 248 int __ext4_forget(const char *where, unsigned int line, handle_t *handle, 252 249 int is_metadata, struct inode *inode,
+1 -1
fs/ext4/mmp.c
··· 138 138 unsigned mmp_check_interval; 139 139 unsigned long last_update_time; 140 140 unsigned long diff; 141 - int retval; 141 + int retval = 0; 142 142 143 143 mmp_block = le64_to_cpu(es->s_mmp_block); 144 144 mmp = (struct mmp_struct *)(bh->b_data);
+1 -1
fs/ext4/namei.c
··· 2517 2517 goto journal_error; 2518 2518 err = ext4_handle_dirty_dx_node(handle, dir, 2519 2519 frame->bh); 2520 - if (err) 2520 + if (restart || err) 2521 2521 goto journal_error; 2522 2522 } else { 2523 2523 struct dx_root *dxroot;
+45 -26
fs/io-wq.c
··· 130 130 }; 131 131 132 132 static void create_io_worker(struct io_wq *wq, struct io_wqe *wqe, int index); 133 + static void io_wqe_dec_running(struct io_worker *worker); 133 134 134 135 static bool io_worker_get(struct io_worker *worker) 135 136 { ··· 169 168 { 170 169 struct io_wqe *wqe = worker->wqe; 171 170 struct io_wqe_acct *acct = io_wqe_get_acct(worker); 172 - unsigned flags; 173 171 174 172 if (refcount_dec_and_test(&worker->ref)) 175 173 complete(&worker->ref_done); 176 174 wait_for_completion(&worker->ref_done); 177 175 178 - preempt_disable(); 179 - current->flags &= ~PF_IO_WORKER; 180 - flags = worker->flags; 181 - worker->flags = 0; 182 - if (flags & IO_WORKER_F_RUNNING) 183 - atomic_dec(&acct->nr_running); 184 - worker->flags = 0; 185 - preempt_enable(); 186 - 187 176 raw_spin_lock_irq(&wqe->lock); 188 - if (flags & IO_WORKER_F_FREE) 177 + if (worker->flags & IO_WORKER_F_FREE) 189 178 hlist_nulls_del_rcu(&worker->nulls_node); 190 179 list_del_rcu(&worker->all_list); 191 180 acct->nr_workers--; 181 + preempt_disable(); 182 + io_wqe_dec_running(worker); 183 + worker->flags = 0; 184 + current->flags &= ~PF_IO_WORKER; 185 + preempt_enable(); 192 186 raw_spin_unlock_irq(&wqe->lock); 193 187 194 188 kfree_rcu(worker, rcu); ··· 210 214 struct hlist_nulls_node *n; 211 215 struct io_worker *worker; 212 216 213 - n = rcu_dereference(hlist_nulls_first_rcu(&wqe->free_list)); 214 - if (is_a_nulls(n)) 215 - return false; 216 - 217 - worker = hlist_nulls_entry(n, struct io_worker, nulls_node); 218 - if (io_worker_get(worker)) { 219 - wake_up_process(worker->task); 217 + /* 218 + * Iterate free_list and see if we can find an idle worker to 219 + * activate. If a given worker is on the free_list but in the process 220 + * of exiting, keep trying. 
221 + */ 222 + hlist_nulls_for_each_entry_rcu(worker, n, &wqe->free_list, nulls_node) { 223 + if (!io_worker_get(worker)) 224 + continue; 225 + if (wake_up_process(worker->task)) { 226 + io_worker_release(worker); 227 + return true; 228 + } 220 229 io_worker_release(worker); 221 230 - return true; 222 231 } 223 232 224 233 return false; ··· 247 247 ret = io_wqe_activate_free_worker(wqe); 248 248 rcu_read_unlock(); 249 249 250 - if (!ret && acct->nr_workers < acct->max_workers) { 251 - atomic_inc(&acct->nr_running); 252 - atomic_inc(&wqe->wq->worker_refs); 253 - create_io_worker(wqe->wq, wqe, acct->index); 250 + if (!ret) { 251 + bool do_create = false; 252 + 253 + raw_spin_lock_irq(&wqe->lock); 254 + if (acct->nr_workers < acct->max_workers) { 255 + atomic_inc(&acct->nr_running); 256 + atomic_inc(&wqe->wq->worker_refs); 257 + acct->nr_workers++; 258 + do_create = true; 259 + } 260 + raw_spin_unlock_irq(&wqe->lock); 261 + if (do_create) 262 + create_io_worker(wqe->wq, wqe, acct->index); 254 263 } 255 264 } 256 265 ··· 280 271 { 281 272 struct create_worker_data *cwd; 282 273 struct io_wq *wq; 274 + struct io_wqe *wqe; 275 + struct io_wqe_acct *acct; 283 276 284 277 cwd = container_of(cb, struct create_worker_data, work); 285 - wq = cwd->wqe->wq; 278 + wqe = cwd->wqe; 279 + wq = wqe->wq; 280 + acct = &wqe->acct[cwd->index]; 281 + raw_spin_lock_irq(&wqe->lock); 282 + if (acct->nr_workers < acct->max_workers) 283 + acct->nr_workers++; 284 + raw_spin_unlock_irq(&wqe->lock); 286 285 create_io_worker(wq, cwd->wqe, cwd->index); 287 286 kfree(cwd); 288 287 } ··· 652 635 kfree(worker); 653 636 fail: 654 637 atomic_dec(&acct->nr_running); 638 + raw_spin_lock_irq(&wqe->lock); 639 + acct->nr_workers--; 640 + raw_spin_unlock_irq(&wqe->lock); 655 641 io_worker_ref_put(wq); 656 642 return; 657 643 } ··· 670 650 worker->flags |= IO_WORKER_F_FREE; 671 651 if (index == IO_WQ_ACCT_BOUND) 672 652 worker->flags |= IO_WORKER_F_BOUND; 673 - if (!acct->nr_workers && (worker->flags & IO_WORKER_F_BOUND)) 653 + if ((acct->nr_workers == 1) && (worker->flags & IO_WORKER_F_BOUND)) 674 654 worker->flags |= IO_WORKER_F_FIXED; 675 - acct->nr_workers++; 676 655 raw_spin_unlock_irq(&wqe->lock); 677 656 wake_up_new_task(tsk); 678 657 }
+28 -14
fs/namespace.c
··· 1938 1938 namespace_unlock(); 1939 1939 } 1940 1940 1941 + static bool has_locked_children(struct mount *mnt, struct dentry *dentry) 1942 + { 1943 + struct mount *child; 1944 + 1945 + list_for_each_entry(child, &mnt->mnt_mounts, mnt_child) { 1946 + if (!is_subdir(child->mnt_mountpoint, dentry)) 1947 + continue; 1948 + 1949 + if (child->mnt.mnt_flags & MNT_LOCKED) 1950 + return true; 1951 + } 1952 + return false; 1953 + } 1954 + 1941 1955 /** 1942 1956 * clone_private_mount - create a private clone of a path 1943 1957 * @path: path to clone ··· 1967 1953 struct mount *old_mnt = real_mount(path->mnt); 1968 1954 struct mount *new_mnt; 1969 1955 1956 + down_read(&namespace_sem); 1970 1957 if (IS_MNT_UNBINDABLE(old_mnt)) 1971 - return ERR_PTR(-EINVAL); 1958 + goto invalid; 1959 + 1960 + if (!check_mnt(old_mnt)) 1961 + goto invalid; 1962 + 1963 + if (has_locked_children(old_mnt, path->dentry)) 1964 + goto invalid; 1972 1965 1973 1966 new_mnt = clone_mnt(old_mnt, path->dentry, CL_PRIVATE); 1967 + up_read(&namespace_sem); 1968 + 1974 1969 if (IS_ERR(new_mnt)) 1975 1970 return ERR_CAST(new_mnt); 1976 1971 ··· 1987 1964 new_mnt->mnt_ns = MNT_NS_INTERNAL; 1988 1965 1989 1966 return &new_mnt->mnt; 1967 + 1968 + invalid: 1969 + up_read(&namespace_sem); 1970 + return ERR_PTR(-EINVAL); 1990 1971 } 1991 1972 EXPORT_SYMBOL_GPL(clone_private_mount); 1992 1973 ··· 2340 2313 out_unlock: 2341 2314 namespace_unlock(); 2342 2315 return err; 2343 - } 2344 - 2345 - static bool has_locked_children(struct mount *mnt, struct dentry *dentry) 2346 - { 2347 - struct mount *child; 2348 - list_for_each_entry(child, &mnt->mnt_mounts, mnt_child) { 2349 - if (!is_subdir(child->mnt_mountpoint, dentry)) 2350 - continue; 2351 - 2352 - if (child->mnt.mnt_flags & MNT_LOCKED) 2353 - return true; 2354 - } 2355 - return false; 2356 2316 } 2357 2317 2358 2318 static struct mount *__do_loopback(struct path *old_path, int recurse)
+11 -6
fs/notify/fanotify/fanotify_user.c
··· 54 54 55 55 #include <linux/sysctl.h> 56 56 57 + static long ft_zero = 0; 58 + static long ft_int_max = INT_MAX; 59 + 57 60 struct ctl_table fanotify_table[] = { 58 61 { 59 62 .procname = "max_user_groups", 60 63 .data = &init_user_ns.ucount_max[UCOUNT_FANOTIFY_GROUPS], 61 - .maxlen = sizeof(int), 64 + .maxlen = sizeof(long), 62 65 .mode = 0644, 63 - .proc_handler = proc_dointvec_minmax, 64 - .extra1 = SYSCTL_ZERO, 66 + .proc_handler = proc_doulongvec_minmax, 67 + .extra1 = &ft_zero, 68 + .extra2 = &ft_int_max, 65 69 }, 66 70 { 67 71 .procname = "max_user_marks", 68 72 .data = &init_user_ns.ucount_max[UCOUNT_FANOTIFY_MARKS], 69 - .maxlen = sizeof(int), 73 + .maxlen = sizeof(long), 70 74 .mode = 0644, 71 - .proc_handler = proc_dointvec_minmax, 72 - .extra1 = SYSCTL_ZERO, 75 + .proc_handler = proc_doulongvec_minmax, 76 + .extra1 = &ft_zero, 77 + .extra2 = &ft_int_max, 73 78 }, 74 79 { 75 80 .procname = "max_queued_events",
+11 -6
fs/notify/inotify/inotify_user.c
··· 55 55 56 56 #include <linux/sysctl.h> 57 57 58 + static long it_zero = 0; 59 + static long it_int_max = INT_MAX; 60 + 58 61 struct ctl_table inotify_table[] = { 59 62 { 60 63 .procname = "max_user_instances", 61 64 .data = &init_user_ns.ucount_max[UCOUNT_INOTIFY_INSTANCES], 62 - .maxlen = sizeof(int), 65 + .maxlen = sizeof(long), 63 66 .mode = 0644, 64 - .proc_handler = proc_dointvec_minmax, 65 - .extra1 = SYSCTL_ZERO, 67 + .proc_handler = proc_doulongvec_minmax, 68 + .extra1 = &it_zero, 69 + .extra2 = &it_int_max, 66 70 }, 67 71 { 68 72 .procname = "max_user_watches", 69 73 .data = &init_user_ns.ucount_max[UCOUNT_INOTIFY_WATCHES], 70 - .maxlen = sizeof(int), 74 + .maxlen = sizeof(long), 71 75 .mode = 0644, 72 - .proc_handler = proc_dointvec_minmax, 73 - .extra1 = SYSCTL_ZERO, 76 + .proc_handler = proc_doulongvec_minmax, 77 + .extra1 = &it_zero, 78 + .extra2 = &it_int_max, 74 79 }, 75 80 { 76 81 .procname = "max_queued_events",
+1 -1
fs/overlayfs/export.c
··· 392 392 */ 393 393 take_dentry_name_snapshot(&name, real); 394 394 this = lookup_one_len(name.name.name, connected, name.name.len); 395 + release_dentry_name_snapshot(&name); 395 396 err = PTR_ERR(this); 396 397 if (IS_ERR(this)) { 397 398 goto fail; ··· 407 406 } 408 407 409 408 out: 410 - release_dentry_name_snapshot(&name); 411 409 dput(parent); 412 410 inode_unlock(dir); 413 411 return this;
+46 -1
fs/overlayfs/file.c
··· 392 392 return ret; 393 393 } 394 394 395 + /* 396 + * Calling iter_file_splice_write() directly from overlay's f_op may deadlock 397 + * due to lock order inversion between pipe->mutex in iter_file_splice_write() 398 + * and file_start_write(real.file) in ovl_write_iter(). 399 + * 400 + * So do everything ovl_write_iter() does and call iter_file_splice_write() on 401 + * the real file. 402 + */ 403 + static ssize_t ovl_splice_write(struct pipe_inode_info *pipe, struct file *out, 404 + loff_t *ppos, size_t len, unsigned int flags) 405 + { 406 + struct fd real; 407 + const struct cred *old_cred; 408 + struct inode *inode = file_inode(out); 409 + struct inode *realinode = ovl_inode_real(inode); 410 + ssize_t ret; 411 + 412 + inode_lock(inode); 413 + /* Update mode */ 414 + ovl_copyattr(realinode, inode); 415 + ret = file_remove_privs(out); 416 + if (ret) 417 + goto out_unlock; 418 + 419 + ret = ovl_real_fdget(out, &real); 420 + if (ret) 421 + goto out_unlock; 422 + 423 + old_cred = ovl_override_creds(inode->i_sb); 424 + file_start_write(real.file); 425 + 426 + ret = iter_file_splice_write(pipe, real.file, ppos, len, flags); 427 + 428 + file_end_write(real.file); 429 + /* Update size */ 430 + ovl_copyattr(realinode, inode); 431 + revert_creds(old_cred); 432 + fdput(real); 433 + 434 + out_unlock: 435 + inode_unlock(inode); 436 + 437 + return ret; 438 + } 439 + 395 440 static int ovl_fsync(struct file *file, loff_t start, loff_t end, int datasync) 396 441 { 397 442 struct fd real; ··· 648 603 .fadvise = ovl_fadvise, 649 604 .flush = ovl_flush, 650 605 .splice_read = generic_file_splice_read, 651 - .splice_write = iter_file_splice_write, 606 + .splice_write = ovl_splice_write, 652 607 653 608 .copy_file_range = ovl_copy_file_range, 654 609 .remap_file_range = ovl_remap_file_range,
+5
fs/overlayfs/readdir.c
··· 481 481 } 482 482 this = lookup_one_len(p->name, dir, p->len); 483 483 if (IS_ERR_OR_NULL(this) || !this->d_inode) { 484 + /* Mark a stale entry */ 485 + p->is_whiteout = true; 484 486 if (IS_ERR(this)) { 485 487 err = PTR_ERR(this); 486 488 this = NULL; ··· 778 776 if (err) 779 777 goto out; 780 778 } 779 + } 780 + /* ovl_cache_update_ino() sets is_whiteout on stale entry */ 781 + if (!p->is_whiteout) { 781 782 if (!dir_emit(ctx, p->name, p->len, p->ino, p->type)) 782 783 break; 783 784 }
+1
include/asm-generic/vmlinux.lds.h
··· 586 586 NOINSTR_TEXT \ 587 587 *(.text..refcount) \ 588 588 *(.ref.text) \ 589 + *(.text.asan.* .text.tsan.*) \ 589 590 TEXT_CFI_JT \ 590 591 MEM_KEEP(init.text*) \ 591 592 MEM_KEEP(exit.text*) \
+1 -1
include/linux/inetdevice.h
··· 41 41 unsigned long mr_qri; /* Query Response Interval */ 42 42 unsigned char mr_qrv; /* Query Robustness Variable */ 43 43 unsigned char mr_gq_running; 44 - unsigned char mr_ifc_count; 44 + u32 mr_ifc_count; 45 45 struct timer_list mr_gq_timer; /* general query timer */ 46 46 struct timer_list mr_ifc_timer; /* interface change timer */ 47 47
+1 -2
include/linux/mlx5/driver.h
··· 1047 1047 void mlx5_fill_page_array(struct mlx5_frag_buf *buf, __be64 *pas); 1048 1048 void mlx5_fill_page_frag_array_perm(struct mlx5_frag_buf *buf, __be64 *pas, u8 perm); 1049 1049 void mlx5_fill_page_frag_array(struct mlx5_frag_buf *frag_buf, __be64 *pas); 1050 - int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn, 1051 - unsigned int *irqn); 1050 + int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn); 1052 1051 int mlx5_core_attach_mcg(struct mlx5_core_dev *dev, union ib_gid *mgid, u32 qpn); 1053 1052 int mlx5_core_detach_mcg(struct mlx5_core_dev *dev, union ib_gid *mgid, u32 qpn); 1054 1053
+3
include/linux/netfilter/ipset/ip_set.h
··· 196 196 u32 elements; /* Number of elements vs timeout */ 197 197 }; 198 198 199 + /* Max range where every element is added/deleted in one step */ 200 + #define IPSET_MAX_RANGE (1<<20) 201 + 199 202 /* The max revision number supported by any set type + 1 */ 200 203 #define IPSET_REVISION_MAX 9 201 204
+2 -2
include/linux/once.h
··· 7 7 8 8 bool __do_once_start(bool *done, unsigned long *flags); 9 9 void __do_once_done(bool *done, struct static_key_true *once_key, 10 - unsigned long *flags); 10 + unsigned long *flags, struct module *mod); 11 11 12 12 /* Call a function exactly once. The idea of DO_ONCE() is to perform 13 13 * a function call such as initialization of random seeds, etc, only ··· 46 46 if (unlikely(___ret)) { \ 47 47 func(__VA_ARGS__); \ 48 48 __do_once_done(&___done, &___once_key, \ 49 - &___flags); \ 49 + &___flags, THIS_MODULE); \ 50 50 } \ 51 51 } \ 52 52 ___ret; \
+2 -1
include/linux/security.h
··· 120 120 LOCKDOWN_MMIOTRACE, 121 121 LOCKDOWN_DEBUGFS, 122 122 LOCKDOWN_XMON_WR, 123 + LOCKDOWN_BPF_WRITE_USER, 123 124 LOCKDOWN_INTEGRITY_MAX, 124 125 LOCKDOWN_KCORE, 125 126 LOCKDOWN_KPROBES, 126 - LOCKDOWN_BPF_READ, 127 + LOCKDOWN_BPF_READ_KERNEL, 127 128 LOCKDOWN_PERF, 128 129 LOCKDOWN_TRACEFS, 129 130 LOCKDOWN_XMON_RW,
+24
include/linux/serial_core.h
··· 518 518 if (sysrq_ch) 519 519 handle_sysrq(sysrq_ch); 520 520 } 521 + 522 + static inline void uart_unlock_and_check_sysrq_irqrestore(struct uart_port *port, 523 + unsigned long flags) 524 + { 525 + int sysrq_ch; 526 + 527 + if (!port->has_sysrq) { 528 + spin_unlock_irqrestore(&port->lock, flags); 529 + return; 530 + } 531 + 532 + sysrq_ch = port->sysrq_ch; 533 + port->sysrq_ch = 0; 534 + 535 + spin_unlock_irqrestore(&port->lock, flags); 536 + 537 + if (sysrq_ch) 538 + handle_sysrq(sysrq_ch); 539 + } 521 540 #else /* CONFIG_MAGIC_SYSRQ_SERIAL */ 522 541 static inline int uart_handle_sysrq_char(struct uart_port *port, unsigned int ch) 523 542 { ··· 549 530 static inline void uart_unlock_and_check_sysrq(struct uart_port *port) 550 531 { 551 532 spin_unlock(&port->lock); 533 + } 534 + static inline void uart_unlock_and_check_sysrq_irqrestore(struct uart_port *port, 535 + unsigned long flags) 536 + { 537 + spin_unlock_irqrestore(&port->lock, flags); 552 538 } 553 539 #endif /* CONFIG_MAGIC_SYSRQ_SERIAL */ 554 540
+2
include/linux/tee_drv.h
··· 27 27 #define TEE_SHM_USER_MAPPED BIT(4) /* Memory mapped in user space */ 28 28 #define TEE_SHM_POOL BIT(5) /* Memory allocated from pool */ 29 29 #define TEE_SHM_KERNEL_MAPPED BIT(6) /* Memory mapped in kernel space */ 30 + #define TEE_SHM_PRIV BIT(7) /* Memory private to TEE driver */ 30 31 31 32 struct device; 32 33 struct tee_device; ··· 333 332 * @returns a pointer to 'struct tee_shm' 334 333 */ 335 334 struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags); 335 + struct tee_shm *tee_shm_alloc_kernel_buf(struct tee_context *ctx, size_t size); 336 336 337 337 /** 338 338 * tee_shm_register() - Register shared memory buffer
+1
include/linux/usb/otg-fsm.h
··· 196 196 struct mutex lock; 197 197 u8 *host_req_flag; 198 198 struct delayed_work hnp_polling_work; 199 + bool hnp_work_inited; 199 200 bool state_changed; 200 201 }; 201 202
-2
include/net/netns/conntrack.h
··· 30 30 u8 tcp_ignore_invalid_rst; 31 31 #if IS_ENABLED(CONFIG_NF_FLOW_TABLE) 32 32 unsigned int offload_timeout; 33 - unsigned int offload_pickup; 34 33 #endif 35 34 }; 36 35 ··· 43 44 unsigned int timeouts[UDP_CT_MAX]; 44 45 #if IS_ENABLED(CONFIG_NF_FLOW_TABLE) 45 46 unsigned int offload_timeout; 46 - unsigned int offload_pickup; 47 47 #endif 48 48 }; 49 49
+2
include/net/psample.h
··· 31 31 void psample_group_take(struct psample_group *group); 32 32 void psample_group_put(struct psample_group *group); 33 33 34 + struct sk_buff; 35 + 34 36 #if IS_ENABLED(CONFIG_PSAMPLE) 35 37 36 38 void psample_sample_packet(struct psample_group *group, struct sk_buff *skb,
+5 -2
include/uapi/linux/neighbour.h
··· 66 66 #define NUD_NONE 0x00 67 67 68 68 /* NUD_NOARP & NUD_PERMANENT are pseudostates, they never change 69 - and make no address resolution or NUD. 70 - NUD_PERMANENT also cannot be deleted by garbage collectors. 69 + * and make no address resolution or NUD. 70 + * NUD_PERMANENT also cannot be deleted by garbage collectors. 71 + * When NTF_EXT_LEARNED is set for a bridge fdb entry the different cache entry 72 + * states don't make sense and thus are ignored. Such entries don't age and 73 + * can roam. 71 74 */ 72 75 73 76 struct nda_cacheinfo {
+6 -1
kernel/bpf/core.c
··· 1362 1362 } 1363 1363 1364 1364 /** 1365 - * __bpf_prog_run - run eBPF program on a given context 1365 + * ___bpf_prog_run - run eBPF program on a given context 1366 1366 * @regs: is the array of MAX_BPF_EXT_REG eBPF pseudo-registers 1367 1367 * @insn: is the array of eBPF instructions 1368 1368 * 1369 1369 * Decode and execute eBPF instructions. 1370 + * 1371 + * Return: whatever value is in %BPF_R0 at program exit 1370 1372 */ 1371 1373 static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn) 1372 1374 { ··· 1880 1878 * 1881 1879 * Try to JIT eBPF program, if JIT is not available, use interpreter. 1882 1880 * The BPF program will be executed via BPF_PROG_RUN() macro. 1881 + * 1882 + * Return: the &fp argument along with &err set to 0 for success or 1883 + * a negative errno code on failure 1883 1884 */ 1884 1885 struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err) 1885 1886 {
+2 -2
kernel/bpf/hashtab.c
··· 1644 1644 /* We cannot do copy_from_user or copy_to_user inside 1645 1645 * the rcu_read_lock. Allocate enough space here. 1646 1646 */ 1647 - keys = kvmalloc(key_size * bucket_size, GFP_USER | __GFP_NOWARN); 1648 - values = kvmalloc(value_size * bucket_size, GFP_USER | __GFP_NOWARN); 1647 + keys = kvmalloc_array(key_size, bucket_size, GFP_USER | __GFP_NOWARN); 1648 + values = kvmalloc_array(value_size, bucket_size, GFP_USER | __GFP_NOWARN); 1649 1649 if (!keys || !values) { 1650 1650 ret = -ENOMEM; 1651 1651 goto after_loop;
+2 -2
kernel/bpf/helpers.c
··· 1396 1396 case BPF_FUNC_probe_read_user: 1397 1397 return &bpf_probe_read_user_proto; 1398 1398 case BPF_FUNC_probe_read_kernel: 1399 - return security_locked_down(LOCKDOWN_BPF_READ) < 0 ? 1399 + return security_locked_down(LOCKDOWN_BPF_READ_KERNEL) < 0 ? 1400 1400 NULL : &bpf_probe_read_kernel_proto; 1401 1401 case BPF_FUNC_probe_read_user_str: 1402 1402 return &bpf_probe_read_user_str_proto; 1403 1403 case BPF_FUNC_probe_read_kernel_str: 1404 - return security_locked_down(LOCKDOWN_BPF_READ) < 0 ? 1404 + return security_locked_down(LOCKDOWN_BPF_READ_KERNEL) < 0 ? 1405 1405 NULL : &bpf_probe_read_kernel_str_proto; 1406 1406 case BPF_FUNC_snprintf_btf: 1407 1407 return &bpf_snprintf_btf_proto;
+11 -8
kernel/cgroup/rstat.c
··· 347 347 } 348 348 349 349 static struct cgroup_rstat_cpu * 350 - cgroup_base_stat_cputime_account_begin(struct cgroup *cgrp) 350 + cgroup_base_stat_cputime_account_begin(struct cgroup *cgrp, unsigned long *flags) 351 351 { 352 352 struct cgroup_rstat_cpu *rstatc; 353 353 354 354 rstatc = get_cpu_ptr(cgrp->rstat_cpu); 355 - u64_stats_update_begin(&rstatc->bsync); 355 + *flags = u64_stats_update_begin_irqsave(&rstatc->bsync); 356 356 return rstatc; 357 357 } 358 358 359 359 static void cgroup_base_stat_cputime_account_end(struct cgroup *cgrp, 360 - struct cgroup_rstat_cpu *rstatc) 360 + struct cgroup_rstat_cpu *rstatc, 361 + unsigned long flags) 361 362 { 362 - u64_stats_update_end(&rstatc->bsync); 363 + u64_stats_update_end_irqrestore(&rstatc->bsync, flags); 363 364 cgroup_rstat_updated(cgrp, smp_processor_id()); 364 365 put_cpu_ptr(rstatc); 365 366 } ··· 368 367 void __cgroup_account_cputime(struct cgroup *cgrp, u64 delta_exec) 369 368 { 370 369 struct cgroup_rstat_cpu *rstatc; 370 + unsigned long flags; 371 371 372 - rstatc = cgroup_base_stat_cputime_account_begin(cgrp); 372 + rstatc = cgroup_base_stat_cputime_account_begin(cgrp, &flags); 373 373 rstatc->bstat.cputime.sum_exec_runtime += delta_exec; 374 - cgroup_base_stat_cputime_account_end(cgrp, rstatc); 374 + cgroup_base_stat_cputime_account_end(cgrp, rstatc, flags); 375 375 } 376 376 377 377 void __cgroup_account_cputime_field(struct cgroup *cgrp, 378 378 enum cpu_usage_stat index, u64 delta_exec) 379 379 { 380 380 struct cgroup_rstat_cpu *rstatc; 381 + unsigned long flags; 381 382 382 - rstatc = cgroup_base_stat_cputime_account_begin(cgrp); 383 + rstatc = cgroup_base_stat_cputime_account_begin(cgrp, &flags); 383 384 384 385 switch (index) { 385 386 case CPUTIME_USER: ··· 397 394 break; 398 395 } 399 396 400 - cgroup_base_stat_cputime_account_end(cgrp, rstatc); 397 + cgroup_base_stat_cputime_account_end(cgrp, rstatc, flags); 401 398 } 402 399 403 400 /*
+32 -3
kernel/events/core.c
··· 11917 11917 return gctx; 11918 11918 } 11919 11919 11920 + static bool 11921 + perf_check_permission(struct perf_event_attr *attr, struct task_struct *task) 11922 + { 11923 + unsigned int ptrace_mode = PTRACE_MODE_READ_REALCREDS; 11924 + bool is_capable = perfmon_capable(); 11925 + 11926 + if (attr->sigtrap) { 11927 + /* 11928 + * perf_event_attr::sigtrap sends signals to the other task. 11929 + * Require the current task to also have CAP_KILL. 11930 + */ 11931 + rcu_read_lock(); 11932 + is_capable &= ns_capable(__task_cred(task)->user_ns, CAP_KILL); 11933 + rcu_read_unlock(); 11934 + 11935 + /* 11936 + * If the required capabilities aren't available, checks for 11937 + * ptrace permissions: upgrade to ATTACH, since sending signals 11938 + * can effectively change the target task. 11939 + */ 11940 + ptrace_mode = PTRACE_MODE_ATTACH_REALCREDS; 11941 + } 11942 + 11943 + /* 11944 + * Preserve ptrace permission check for backwards compatibility. The 11945 + * ptrace check also includes checks that the current task and other 11946 + * task have matching uids, and is therefore not done here explicitly. 11947 + */ 11948 + return is_capable || ptrace_may_access(task, ptrace_mode); 11949 + } 11950 + 11920 11951 /** 11921 11952 * sys_perf_event_open - open a performance event, associate it to a task/cpu 11922 11953 * ··· 12194 12163 goto err_file; 12195 12164 12196 12165 /* 12197 - * Preserve ptrace permission check for backwards compatibility. 12198 - * 12199 12166 * We must hold exec_update_lock across this and any potential 12200 12167 * perf_install_in_context() call for this new event to 12201 12168 * serialize against exec() altering our credentials (and the 12202 12169 * perf_event_exit_task() that could imply). 12203 12170 */ 12204 12171 err = -EACCES; 12205 - if (!perfmon_capable() && !ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS)) 12172 + if (!perf_check_permission(&attr, task)) 12206 12173 goto err_cred; 12207 12174 } 12208 12175
+35 -55
kernel/sched/core.c
··· 1981 1981 dequeue_task(rq, p, flags); 1982 1982 } 1983 1983 1984 - /* 1985 - * __normal_prio - return the priority that is based on the static prio 1986 - */ 1987 - static inline int __normal_prio(struct task_struct *p) 1984 + static inline int __normal_prio(int policy, int rt_prio, int nice) 1988 1985 { 1989 - return p->static_prio; 1986 + int prio; 1987 + 1988 + if (dl_policy(policy)) 1989 + prio = MAX_DL_PRIO - 1; 1990 + else if (rt_policy(policy)) 1991 + prio = MAX_RT_PRIO - 1 - rt_prio; 1992 + else 1993 + prio = NICE_TO_PRIO(nice); 1994 + 1995 + return prio; 1990 1996 } 1991 1997 1992 1998 /* ··· 2004 1998 */ 2005 1999 static inline int normal_prio(struct task_struct *p) 2006 2000 { 2007 - int prio; 2008 - 2009 - if (task_has_dl_policy(p)) 2010 - prio = MAX_DL_PRIO-1; 2011 - else if (task_has_rt_policy(p)) 2012 - prio = MAX_RT_PRIO-1 - p->rt_priority; 2013 - else 2014 - prio = __normal_prio(p); 2015 - return prio; 2001 + return __normal_prio(p->policy, p->rt_priority, PRIO_TO_NICE(p->static_prio)); 2016 2002 } 2017 2003 2018 2004 /* ··· 4097 4099 } else if (PRIO_TO_NICE(p->static_prio) < 0) 4098 4100 p->static_prio = NICE_TO_PRIO(0); 4099 4101 4100 - p->prio = p->normal_prio = __normal_prio(p); 4102 + p->prio = p->normal_prio = p->static_prio; 4101 4103 set_load_weight(p, false); 4102 4104 4103 4105 /* ··· 6339 6341 } 6340 6342 EXPORT_SYMBOL(default_wake_function); 6341 6343 6344 + static void __setscheduler_prio(struct task_struct *p, int prio) 6345 + { 6346 + if (dl_prio(prio)) 6347 + p->sched_class = &dl_sched_class; 6348 + else if (rt_prio(prio)) 6349 + p->sched_class = &rt_sched_class; 6350 + else 6351 + p->sched_class = &fair_sched_class; 6352 + 6353 + p->prio = prio; 6354 + } 6355 + 6342 6356 #ifdef CONFIG_RT_MUTEXES 6343 6357 6344 6358 static inline int __rt_effective_prio(struct task_struct *pi_task, int prio) ··· 6466 6456 } else { 6467 6457 p->dl.pi_se = &p->dl; 6468 6458 } 6469 - p->sched_class = &dl_sched_class; 6470 6459 } else if 
(rt_prio(prio)) { 6471 6460 if (dl_prio(oldprio)) 6472 6461 p->dl.pi_se = &p->dl; 6473 6462 if (oldprio < prio) 6474 6463 queue_flag |= ENQUEUE_HEAD; 6475 - p->sched_class = &rt_sched_class; 6476 6464 } else { 6477 6465 if (dl_prio(oldprio)) 6478 6466 p->dl.pi_se = &p->dl; 6479 6467 if (rt_prio(oldprio)) 6480 6468 p->rt.timeout = 0; 6481 - p->sched_class = &fair_sched_class; 6482 6469 } 6483 6470 6484 - p->prio = prio; 6471 + __setscheduler_prio(p, prio); 6485 6472 6486 6473 if (queued) 6487 6474 enqueue_task(rq, p, queue_flag); ··· 6831 6824 set_load_weight(p, true); 6832 6825 } 6833 6826 6834 - /* Actually do priority change: must hold pi & rq lock. */ 6835 - static void __setscheduler(struct rq *rq, struct task_struct *p, 6836 - const struct sched_attr *attr, bool keep_boost) 6837 - { 6838 - /* 6839 - * If params can't change scheduling class changes aren't allowed 6840 - * either. 6841 - */ 6842 - if (attr->sched_flags & SCHED_FLAG_KEEP_PARAMS) 6843 - return; 6844 - 6845 - __setscheduler_params(p, attr); 6846 - 6847 - /* 6848 - * Keep a potential priority boosting if called from 6849 - * sched_setscheduler(). 6850 - */ 6851 - p->prio = normal_prio(p); 6852 - if (keep_boost) 6853 - p->prio = rt_effective_prio(p, p->prio); 6854 - 6855 - if (dl_prio(p->prio)) 6856 - p->sched_class = &dl_sched_class; 6857 - else if (rt_prio(p->prio)) 6858 - p->sched_class = &rt_sched_class; 6859 - else 6860 - p->sched_class = &fair_sched_class; 6861 - } 6862 - 6863 6827 /* 6864 6828 * Check the target process has a UID that matches the current process's: 6865 6829 */ ··· 6851 6873 const struct sched_attr *attr, 6852 6874 bool user, bool pi) 6853 6875 { 6854 - int newprio = dl_policy(attr->sched_policy) ? 
MAX_DL_PRIO - 1 : 6855 - MAX_RT_PRIO - 1 - attr->sched_priority; 6856 - int retval, oldprio, oldpolicy = -1, queued, running; 6857 - int new_effective_prio, policy = attr->sched_policy; 6876 + int oldpolicy = -1, policy = attr->sched_policy; 6877 + int retval, oldprio, newprio, queued, running; 6858 6878 const struct sched_class *prev_class; 6859 6879 struct callback_head *head; 6860 6880 struct rq_flags rf; ··· 7050 7074 p->sched_reset_on_fork = reset_on_fork; 7051 7075 oldprio = p->prio; 7052 7076 7077 + newprio = __normal_prio(policy, attr->sched_priority, attr->sched_nice); 7053 7078 if (pi) { 7054 7079 /* 7055 7080 * Take priority boosted tasks into account. If the new ··· 7059 7082 * the runqueue. This will be done when the task deboost 7060 7083 * itself. 7061 7084 */ 7062 - new_effective_prio = rt_effective_prio(p, newprio); 7063 - if (new_effective_prio == oldprio) 7085 + newprio = rt_effective_prio(p, newprio); 7086 + if (newprio == oldprio) 7064 7087 queue_flags &= ~DEQUEUE_MOVE; 7065 7088 } 7066 7089 ··· 7073 7096 7074 7097 prev_class = p->sched_class; 7075 7098 7076 - __setscheduler(rq, p, attr, pi); 7099 + if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) { 7100 + __setscheduler_params(p, attr); 7101 + __setscheduler_prio(p, newprio); 7102 + } 7077 7103 __setscheduler_uclamp(p, attr); 7078 7104 7079 7105 if (queued) {
+1 -1
kernel/seccomp.c
··· 602 602 smp_store_release(&thread->seccomp.filter, 603 603 caller->seccomp.filter); 604 604 atomic_set(&thread->seccomp.filter_count, 605 - atomic_read(&thread->seccomp.filter_count)); 605 + atomic_read(&caller->seccomp.filter_count)); 606 606 607 607 /* 608 608 * Don't let an unprivileged task work around
+4 -2
kernel/time/timer.c
··· 1265 1265 static void timer_sync_wait_running(struct timer_base *base) 1266 1266 { 1267 1267 if (atomic_read(&base->timer_waiters)) { 1268 + raw_spin_unlock_irq(&base->lock); 1268 1269 spin_unlock(&base->expiry_lock); 1269 1270 spin_lock(&base->expiry_lock); 1271 + raw_spin_lock_irq(&base->lock); 1270 1272 } 1271 1273 } 1272 1274 ··· 1459 1457 if (timer->flags & TIMER_IRQSAFE) { 1460 1458 raw_spin_unlock(&base->lock); 1461 1459 call_timer_fn(timer, fn, baseclk); 1462 - base->running_timer = NULL; 1463 1460 raw_spin_lock(&base->lock); 1461 + base->running_timer = NULL; 1464 1462 } else { 1465 1463 raw_spin_unlock_irq(&base->lock); 1466 1464 call_timer_fn(timer, fn, baseclk); 1465 + raw_spin_lock_irq(&base->lock); 1467 1466 base->running_timer = NULL; 1468 1467 timer_sync_wait_running(base); 1469 - raw_spin_lock_irq(&base->lock); 1470 1468 } 1471 1469 } 1472 1470 }
+7 -6
kernel/trace/bpf_trace.c
··· 1017 1017 return &bpf_get_numa_node_id_proto; 1018 1018 case BPF_FUNC_perf_event_read: 1019 1019 return &bpf_perf_event_read_proto; 1020 - case BPF_FUNC_probe_write_user: 1021 - return bpf_get_probe_write_proto(); 1022 1020 case BPF_FUNC_current_task_under_cgroup: 1023 1021 return &bpf_current_task_under_cgroup_proto; 1024 1022 case BPF_FUNC_get_prandom_u32: 1025 1023 return &bpf_get_prandom_u32_proto; 1024 + case BPF_FUNC_probe_write_user: 1025 + return security_locked_down(LOCKDOWN_BPF_WRITE_USER) < 0 ? 1026 + NULL : bpf_get_probe_write_proto(); 1026 1027 case BPF_FUNC_probe_read_user: 1027 1028 return &bpf_probe_read_user_proto; 1028 1029 case BPF_FUNC_probe_read_kernel: 1029 - return security_locked_down(LOCKDOWN_BPF_READ) < 0 ? 1030 + return security_locked_down(LOCKDOWN_BPF_READ_KERNEL) < 0 ? 1030 1031 NULL : &bpf_probe_read_kernel_proto; 1031 1032 case BPF_FUNC_probe_read_user_str: 1032 1033 return &bpf_probe_read_user_str_proto; 1033 1034 case BPF_FUNC_probe_read_kernel_str: 1034 - return security_locked_down(LOCKDOWN_BPF_READ) < 0 ? 1035 + return security_locked_down(LOCKDOWN_BPF_READ_KERNEL) < 0 ? 1035 1036 NULL : &bpf_probe_read_kernel_str_proto; 1036 1037 #ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE 1037 1038 case BPF_FUNC_probe_read: 1038 - return security_locked_down(LOCKDOWN_BPF_READ) < 0 ? 1039 + return security_locked_down(LOCKDOWN_BPF_READ_KERNEL) < 0 ? 1039 1040 NULL : &bpf_probe_read_compat_proto; 1040 1041 case BPF_FUNC_probe_read_str: 1041 - return security_locked_down(LOCKDOWN_BPF_READ) < 0 ? 1042 + return security_locked_down(LOCKDOWN_BPF_READ_KERNEL) < 0 ? 1042 1043 NULL : &bpf_probe_read_compat_str_proto; 1043 1044 #endif 1044 1045 #ifdef CONFIG_CGROUPS
+135 -20
kernel/tracepoint.c
··· 15 15 #include <linux/sched/task.h> 16 16 #include <linux/static_key.h> 17 17 18 + enum tp_func_state { 19 + TP_FUNC_0, 20 + TP_FUNC_1, 21 + TP_FUNC_2, 22 + TP_FUNC_N, 23 + }; 24 + 18 25 extern tracepoint_ptr_t __start___tracepoints_ptrs[]; 19 26 extern tracepoint_ptr_t __stop___tracepoints_ptrs[]; 20 27 21 28 DEFINE_SRCU(tracepoint_srcu); 22 29 EXPORT_SYMBOL_GPL(tracepoint_srcu); 30 + 31 + enum tp_transition_sync { 32 + TP_TRANSITION_SYNC_1_0_1, 33 + TP_TRANSITION_SYNC_N_2_1, 34 + 35 + _NR_TP_TRANSITION_SYNC, 36 + }; 37 + 38 + struct tp_transition_snapshot { 39 + unsigned long rcu; 40 + unsigned long srcu; 41 + bool ongoing; 42 + }; 43 + 44 + /* Protected by tracepoints_mutex */ 45 + static struct tp_transition_snapshot tp_transition_snapshot[_NR_TP_TRANSITION_SYNC]; 46 + 47 + static void tp_rcu_get_state(enum tp_transition_sync sync) 48 + { 49 + struct tp_transition_snapshot *snapshot = &tp_transition_snapshot[sync]; 50 + 51 + /* Keep the latest get_state snapshot. */ 52 + snapshot->rcu = get_state_synchronize_rcu(); 53 + snapshot->srcu = start_poll_synchronize_srcu(&tracepoint_srcu); 54 + snapshot->ongoing = true; 55 + } 56 + 57 + static void tp_rcu_cond_sync(enum tp_transition_sync sync) 58 + { 59 + struct tp_transition_snapshot *snapshot = &tp_transition_snapshot[sync]; 60 + 61 + if (!snapshot->ongoing) 62 + return; 63 + cond_synchronize_rcu(snapshot->rcu); 64 + if (!poll_state_synchronize_srcu(&tracepoint_srcu, snapshot->srcu)) 65 + synchronize_srcu(&tracepoint_srcu); 66 + snapshot->ongoing = false; 67 + } 23 68 24 69 /* Set to 1 to enable tracepoint debug output */ 25 70 static const int tracepoint_debug; ··· 291 246 return old; 292 247 } 293 248 294 - static void tracepoint_update_call(struct tracepoint *tp, struct tracepoint_func *tp_funcs, bool sync) 249 + /* 250 + * Count the number of functions (enum tp_func_state) in a tp_funcs array. 
251 + */ 252 + static enum tp_func_state nr_func_state(const struct tracepoint_func *tp_funcs) 253 + { 254 + if (!tp_funcs) 255 + return TP_FUNC_0; 256 + if (!tp_funcs[1].func) 257 + return TP_FUNC_1; 258 + if (!tp_funcs[2].func) 259 + return TP_FUNC_2; 260 + return TP_FUNC_N; /* 3 or more */ 261 + } 262 + 263 + static void tracepoint_update_call(struct tracepoint *tp, struct tracepoint_func *tp_funcs) 295 264 { 296 265 void *func = tp->iterator; 297 266 298 267 /* Synthetic events do not have static call sites */ 299 268 if (!tp->static_call_key) 300 269 return; 301 - 302 - if (!tp_funcs[1].func) { 270 + if (nr_func_state(tp_funcs) == TP_FUNC_1) 303 271 func = tp_funcs[0].func; 304 - /* 305 - * If going from the iterator back to a single caller, 306 - * we need to synchronize with __DO_TRACE to make sure 307 - * that the data passed to the callback is the one that 308 - * belongs to that callback. 309 - */ 310 - if (sync) 311 - tracepoint_synchronize_unregister(); 312 - } 313 - 314 272 __static_call_update(tp->static_call_key, tp->static_call_tramp, func); 315 273 } 316 274 ··· 347 299 * a pointer to it. This array is referenced by __DO_TRACE from 348 300 * include/linux/tracepoint.h using rcu_dereference_sched(). 349 301 */ 350 - tracepoint_update_call(tp, tp_funcs, false); 351 - rcu_assign_pointer(tp->funcs, tp_funcs); 352 - static_key_enable(&tp->key); 302 + switch (nr_func_state(tp_funcs)) { 303 + case TP_FUNC_1: /* 0->1 */ 304 + /* 305 + * Make sure new static func never uses old data after a 306 + * 1->0->1 transition sequence. 
307 + */ 308 + tp_rcu_cond_sync(TP_TRANSITION_SYNC_1_0_1); 309 + /* Set static call to first function */ 310 + tracepoint_update_call(tp, tp_funcs); 311 + /* Both iterator and static call handle NULL tp->funcs */ 312 + rcu_assign_pointer(tp->funcs, tp_funcs); 313 + static_key_enable(&tp->key); 314 + break; 315 + case TP_FUNC_2: /* 1->2 */ 316 + /* Set iterator static call */ 317 + tracepoint_update_call(tp, tp_funcs); 318 + /* 319 + * Iterator callback installed before updating tp->funcs. 320 + * Requires ordering between RCU assign/dereference and 321 + * static call update/call. 322 + */ 323 + fallthrough; 324 + case TP_FUNC_N: /* N->N+1 (N>1) */ 325 + rcu_assign_pointer(tp->funcs, tp_funcs); 326 + /* 327 + * Make sure static func never uses incorrect data after a 328 + * N->...->2->1 (N>1) transition sequence. 329 + */ 330 + if (tp_funcs[0].data != old[0].data) 331 + tp_rcu_get_state(TP_TRANSITION_SYNC_N_2_1); 332 + break; 333 + default: 334 + WARN_ON_ONCE(1); 335 + break; 336 + } 353 337 354 338 release_probes(old); 355 339 return 0; ··· 408 328 /* Failed allocating new tp_funcs, replaced func with stub */ 409 329 return 0; 410 330 411 - if (!tp_funcs) { 331 + switch (nr_func_state(tp_funcs)) { 332 + case TP_FUNC_0: /* 1->0 */ 412 333 /* Removed last function */ 413 334 if (tp->unregfunc && static_key_enabled(&tp->key)) 414 335 tp->unregfunc(); 415 336 416 337 static_key_disable(&tp->key); 338 + /* Set iterator static call */ 339 + tracepoint_update_call(tp, tp_funcs); 340 + /* Both iterator and static call handle NULL tp->funcs */ 341 + rcu_assign_pointer(tp->funcs, NULL); 342 + /* 343 + * Make sure new static func never uses old data after a 344 + * 1->0->1 transition sequence. 
345 + */ 346 + tp_rcu_get_state(TP_TRANSITION_SYNC_1_0_1); 347 + break; 348 + case TP_FUNC_1: /* 2->1 */ 417 349 rcu_assign_pointer(tp->funcs, tp_funcs); 418 - } else { 350 + /* 351 + * Make sure static func never uses incorrect data after a 352 + * N->...->2->1 (N>2) transition sequence. If the first 353 + * element's data has changed, then force the synchronization 354 + * to prevent current readers that have loaded the old data 355 + * from calling the new function. 356 + */ 357 + if (tp_funcs[0].data != old[0].data) 358 + tp_rcu_get_state(TP_TRANSITION_SYNC_N_2_1); 359 + tp_rcu_cond_sync(TP_TRANSITION_SYNC_N_2_1); 360 + /* Set static call to first function */ 361 + tracepoint_update_call(tp, tp_funcs); 362 + break; 363 + case TP_FUNC_2: /* N->N-1 (N>2) */ 364 + fallthrough; 365 + case TP_FUNC_N: 419 366 rcu_assign_pointer(tp->funcs, tp_funcs); 420 - tracepoint_update_call(tp, tp_funcs, 421 - tp_funcs[0].func != old[0].func); 367 + /* 368 + * Make sure static func never uses incorrect data after a 369 + * N->...->2->1 (N>2) transition sequence. 370 + */ 371 + if (tp_funcs[0].data != old[0].data) 372 + tp_rcu_get_state(TP_TRANSITION_SYNC_N_2_1); 373 + break; 374 + default: 375 + WARN_ON_ONCE(1); 376 + break; 422 377 } 423 378 release_probes(old); 424 379 return 0;
+11 -8
kernel/ucount.c
··· 58 58 .permissions = set_permissions, 59 59 }; 60 60 61 - #define UCOUNT_ENTRY(name) \ 62 - { \ 63 - .procname = name, \ 64 - .maxlen = sizeof(int), \ 65 - .mode = 0644, \ 66 - .proc_handler = proc_dointvec_minmax, \ 67 - .extra1 = SYSCTL_ZERO, \ 68 - .extra2 = SYSCTL_INT_MAX, \ 61 + static long ue_zero = 0; 62 + static long ue_int_max = INT_MAX; 63 + 64 + #define UCOUNT_ENTRY(name) \ 65 + { \ 66 + .procname = name, \ 67 + .maxlen = sizeof(long), \ 68 + .mode = 0644, \ 69 + .proc_handler = proc_doulongvec_minmax, \ 70 + .extra1 = &ue_zero, \ 71 + .extra2 = &ue_int_max, \ 69 72 } 70 73 static struct ctl_table user_table[] = { 71 74 UCOUNT_ENTRY("max_user_namespaces"),
+8 -3
lib/once.c
··· 3 3 #include <linux/spinlock.h> 4 4 #include <linux/once.h> 5 5 #include <linux/random.h> 6 + #include <linux/module.h> 6 7 7 8 struct once_work { 8 9 struct work_struct work; 9 10 struct static_key_true *key; 11 + struct module *module; 10 12 }; 11 13 12 14 static void once_deferred(struct work_struct *w) ··· 18 16 work = container_of(w, struct once_work, work); 19 17 BUG_ON(!static_key_enabled(work->key)); 20 18 static_branch_disable(work->key); 19 + module_put(work->module); 21 20 kfree(work); 22 21 } 23 22 24 - static void once_disable_jump(struct static_key_true *key) 23 + static void once_disable_jump(struct static_key_true *key, struct module *mod) 25 24 { 26 25 struct once_work *w; 27 26 ··· 32 29 33 30 INIT_WORK(&w->work, once_deferred); 34 31 w->key = key; 32 + w->module = mod; 33 + __module_get(mod); 35 34 schedule_work(&w->work); 36 35 } 37 36 ··· 58 53 EXPORT_SYMBOL(__do_once_start); 59 54 60 55 void __do_once_done(bool *done, struct static_key_true *once_key, 61 - unsigned long *flags) 56 + unsigned long *flags, struct module *mod) 62 57 __releases(once_lock) 63 58 { 64 59 *done = true; 65 60 spin_unlock_irqrestore(&once_lock, *flags); 66 - once_disable_jump(once_key); 61 + once_disable_jump(once_key, mod); 67 62 } 68 63 EXPORT_SYMBOL(__do_once_done);
+4
net/bpf/test_run.c
··· 7 7 #include <linux/vmalloc.h> 8 8 #include <linux/etherdevice.h> 9 9 #include <linux/filter.h> 10 + #include <linux/rcupdate_trace.h> 10 11 #include <linux/sched/signal.h> 11 12 #include <net/bpf_sk_storage.h> 12 13 #include <net/sock.h> ··· 1045 1044 goto out; 1046 1045 } 1047 1046 } 1047 + 1048 + rcu_read_lock_trace(); 1048 1049 retval = bpf_prog_run_pin_on_cpu(prog, ctx); 1050 + rcu_read_unlock_trace(); 1049 1051 1050 1052 if (copy_to_user(&uattr->test.retval, &retval, sizeof(u32))) { 1051 1053 err = -EFAULT;
+1 -2
net/bridge/br.c
··· 166 166 case SWITCHDEV_FDB_ADD_TO_BRIDGE: 167 167 fdb_info = ptr; 168 168 err = br_fdb_external_learn_add(br, p, fdb_info->addr, 169 - fdb_info->vid, 170 - fdb_info->is_local, false); 169 + fdb_info->vid, false); 171 170 if (err) { 172 171 err = notifier_from_errno(err); 173 172 break;
+4 -7
net/bridge/br_fdb.c
··· 1036 1036 "FDB entry towards bridge must be permanent"); 1037 1037 return -EINVAL; 1038 1038 } 1039 - 1040 - err = br_fdb_external_learn_add(br, p, addr, vid, 1041 - ndm->ndm_state & NUD_PERMANENT, 1042 - true); 1039 + err = br_fdb_external_learn_add(br, p, addr, vid, true); 1043 1040 } else { 1044 1041 spin_lock_bh(&br->hash_lock); 1045 1042 err = fdb_add_entry(br, p, addr, ndm, nlh_flags, vid, nfea_tb); ··· 1264 1267 } 1265 1268 1266 1269 int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p, 1267 - const unsigned char *addr, u16 vid, bool is_local, 1270 + const unsigned char *addr, u16 vid, 1268 1271 bool swdev_notify) 1269 1272 { 1270 1273 struct net_bridge_fdb_entry *fdb; ··· 1282 1285 if (swdev_notify) 1283 1286 flags |= BIT(BR_FDB_ADDED_BY_USER); 1284 1287 1285 - if (is_local) 1288 + if (!p) 1286 1289 flags |= BIT(BR_FDB_LOCAL); 1287 1290 1288 1291 fdb = fdb_create(br, p, addr, vid, flags); ··· 1311 1314 if (swdev_notify) 1312 1315 set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags); 1313 1316 1314 - if (is_local) 1317 + if (!p) 1315 1318 set_bit(BR_FDB_LOCAL, &fdb->flags); 1316 1319 1317 1320 if (modified)
+2
net/bridge/br_if.c
··· 614 614 615 615 err = dev_set_allmulti(dev, 1); 616 616 if (err) { 617 + br_multicast_del_port(p); 617 618 kfree(p); /* kobject not yet init'd, manually free */ 618 619 goto err1; 619 620 } ··· 723 722 err3: 724 723 sysfs_remove_link(br->ifobj, p->dev->name); 725 724 err2: 725 + br_multicast_del_port(p); 726 726 kobject_put(&p->kobj); 727 727 dev_set_allmulti(dev, -1); 728 728 err1:
+1 -1
net/bridge/br_private.h
··· 770 770 int br_fdb_sync_static(struct net_bridge *br, struct net_bridge_port *p); 771 771 void br_fdb_unsync_static(struct net_bridge *br, struct net_bridge_port *p); 772 772 int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p, 773 - const unsigned char *addr, u16 vid, bool is_local, 773 + const unsigned char *addr, u16 vid, 774 774 bool swdev_notify); 775 775 int br_fdb_external_learn_del(struct net_bridge *br, struct net_bridge_port *p, 776 776 const unsigned char *addr, u16 vid,
+6
net/bridge/netfilter/nf_conntrack_bridge.c
··· 88 88 89 89 skb = ip_fraglist_next(&iter); 90 90 } 91 + 92 + if (!err) 93 + return 0; 94 + 95 + kfree_skb_list(iter.frag); 96 + 91 97 return err; 92 98 } 93 99 slow_path:
+9 -1
net/core/page_pool.c
··· 739 739 struct page_pool *pp; 740 740 741 741 page = compound_head(page); 742 - if (unlikely(page->pp_magic != PP_SIGNATURE)) 742 + 743 + /* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation 744 + * in order to preserve any existing bits, such as bit 0 for the 745 + * head page of compound page and bit 1 for pfmemalloc page, so 746 + * mask those bits for freeing side when doing below checking, 747 + * and page_is_pfmemalloc() is checked in __page_pool_put_page() 748 + * to avoid recycling the pfmemalloc page. 749 + */ 750 + if (unlikely((page->pp_magic & ~0x3UL) != PP_SIGNATURE)) 743 751 return false; 744 752 745 753 pp = page->pp;
+3 -3
net/dccp/dccp.h
··· 41 41 #define dccp_pr_debug_cat(format, a...) DCCP_PRINTK(dccp_debug, format, ##a) 42 42 #define dccp_debug(fmt, a...) dccp_pr_debug_cat(KERN_DEBUG fmt, ##a) 43 43 #else 44 - #define dccp_pr_debug(format, a...) 45 - #define dccp_pr_debug_cat(format, a...) 46 - #define dccp_debug(format, a...) 44 + #define dccp_pr_debug(format, a...) do {} while (0) 45 + #define dccp_pr_debug_cat(format, a...) do {} while (0) 46 + #define dccp_debug(format, a...) do {} while (0) 47 47 #endif 48 48 49 49 extern struct inet_hashinfo dccp_hashinfo;
+1 -1
net/dsa/slave.c
··· 2281 2281 static void 2282 2282 dsa_fdb_offload_notify(struct dsa_switchdev_event_work *switchdev_work) 2283 2283 { 2284 + struct switchdev_notifier_fdb_info info = {}; 2284 2285 struct dsa_switch *ds = switchdev_work->ds; 2285 - struct switchdev_notifier_fdb_info info; 2286 2286 struct dsa_port *dp; 2287 2287 2288 2288 if (!dsa_is_user_port(ds, switchdev_work->port))
+6 -1
net/ieee802154/socket.c
··· 983 983 .sendpage = sock_no_sendpage, 984 984 }; 985 985 986 + static void ieee802154_sock_destruct(struct sock *sk) 987 + { 988 + skb_queue_purge(&sk->sk_receive_queue); 989 + } 990 + 986 991 /* Create a socket. Initialise the socket, blank the addresses 987 992 * set the state. 988 993 */ ··· 1028 1023 sock->ops = ops; 1029 1024 1030 1025 sock_init_data(sock, sk); 1031 - /* FIXME: sk->sk_destruct */ 1026 + sk->sk_destruct = ieee802154_sock_destruct; 1032 1027 sk->sk_family = PF_IEEE802154; 1033 1028 1034 1029 /* Checksums on by default */
+14 -7
net/ipv4/igmp.c
··· 803 803 static void igmp_ifc_timer_expire(struct timer_list *t) 804 804 { 805 805 struct in_device *in_dev = from_timer(in_dev, t, mr_ifc_timer); 806 + u32 mr_ifc_count; 806 807 807 808 igmpv3_send_cr(in_dev); 808 - if (in_dev->mr_ifc_count) { 809 - in_dev->mr_ifc_count--; 809 + restart: 810 + mr_ifc_count = READ_ONCE(in_dev->mr_ifc_count); 811 + 812 + if (mr_ifc_count) { 813 + if (cmpxchg(&in_dev->mr_ifc_count, 814 + mr_ifc_count, 815 + mr_ifc_count - 1) != mr_ifc_count) 816 + goto restart; 810 817 igmp_ifc_start_timer(in_dev, 811 818 unsolicited_report_interval(in_dev)); 812 819 } ··· 825 818 struct net *net = dev_net(in_dev->dev); 826 819 if (IGMP_V1_SEEN(in_dev) || IGMP_V2_SEEN(in_dev)) 827 820 return; 828 - in_dev->mr_ifc_count = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; 821 + WRITE_ONCE(in_dev->mr_ifc_count, in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv); 829 822 igmp_ifc_start_timer(in_dev, 1); 830 823 } 831 824 ··· 964 957 in_dev->mr_qri; 965 958 } 966 959 /* cancel the interface change timer */ 967 - in_dev->mr_ifc_count = 0; 960 + WRITE_ONCE(in_dev->mr_ifc_count, 0); 968 961 if (del_timer(&in_dev->mr_ifc_timer)) 969 962 __in_dev_put(in_dev); 970 963 /* clear deleted report items */ ··· 1731 1724 igmp_group_dropped(pmc); 1732 1725 1733 1726 #ifdef CONFIG_IP_MULTICAST 1734 - in_dev->mr_ifc_count = 0; 1727 + WRITE_ONCE(in_dev->mr_ifc_count, 0); 1735 1728 if (del_timer(&in_dev->mr_ifc_timer)) 1736 1729 __in_dev_put(in_dev); 1737 1730 in_dev->mr_gq_running = 0; ··· 1948 1941 pmc->sfmode = MCAST_INCLUDE; 1949 1942 #ifdef CONFIG_IP_MULTICAST 1950 1943 pmc->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; 1951 - in_dev->mr_ifc_count = pmc->crcount; 1944 + WRITE_ONCE(in_dev->mr_ifc_count, pmc->crcount); 1952 1945 for (psf = pmc->sources; psf; psf = psf->sf_next) 1953 1946 psf->sf_crcount = 0; 1954 1947 igmp_ifc_event(pmc->interface); ··· 2127 2120 /* else no filters; keep old mode for reports */ 2128 2121 2129 2122 pmc->crcount = in_dev->mr_qrv ?: 
net->ipv4.sysctl_igmp_qrv; 2130 - in_dev->mr_ifc_count = pmc->crcount; 2123 + WRITE_ONCE(in_dev->mr_ifc_count, pmc->crcount); 2131 2124 for (psf = pmc->sources; psf; psf = psf->sf_next) 2132 2125 psf->sf_crcount = 0; 2133 2126 igmp_ifc_event(in_dev);
+1 -1
net/ipv4/tcp_bbr.c
··· 1041 1041 bbr->prior_cwnd = 0; 1042 1042 tp->snd_ssthresh = TCP_INFINITE_SSTHRESH; 1043 1043 bbr->rtt_cnt = 0; 1044 - bbr->next_rtt_delivered = 0; 1044 + bbr->next_rtt_delivered = tp->delivered; 1045 1045 bbr->prev_ca_state = TCP_CA_Open; 1046 1046 bbr->packet_conservation = 0; 1047 1047
+8 -1
net/netfilter/ipset/ip_set_hash_ip.c
··· 132 132 ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP_TO], &ip_to); 133 133 if (ret) 134 134 return ret; 135 - if (ip > ip_to) 135 + if (ip > ip_to) { 136 + if (ip_to == 0) 137 + return -IPSET_ERR_HASH_ELEM; 136 138 swap(ip, ip_to); 139 + } 137 140 } else if (tb[IPSET_ATTR_CIDR]) { 138 141 u8 cidr = nla_get_u8(tb[IPSET_ATTR_CIDR]); 139 142 ··· 146 143 } 147 144 148 145 hosts = h->netmask == 32 ? 1 : 2 << (32 - h->netmask - 1); 146 + 147 + /* 64bit division is not allowed on 32bit */ 148 + if (((u64)ip_to - ip + 1) >> (32 - h->netmask) > IPSET_MAX_RANGE) 149 + return -ERANGE; 149 150 150 151 if (retried) { 151 152 ip = ntohl(h->next.ip);
+9 -1
net/netfilter/ipset/ip_set_hash_ipmark.c
··· 121 121 122 122 e.mark = ntohl(nla_get_be32(tb[IPSET_ATTR_MARK])); 123 123 e.mark &= h->markmask; 124 + if (e.mark == 0 && e.ip == 0) 125 + return -IPSET_ERR_HASH_ELEM; 124 126 125 127 if (adt == IPSET_TEST || 126 128 !(tb[IPSET_ATTR_IP_TO] || tb[IPSET_ATTR_CIDR])) { ··· 135 133 ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP_TO], &ip_to); 136 134 if (ret) 137 135 return ret; 138 - if (ip > ip_to) 136 + if (ip > ip_to) { 137 + if (e.mark == 0 && ip_to == 0) 138 + return -IPSET_ERR_HASH_ELEM; 139 139 swap(ip, ip_to); 140 + } 140 141 } else if (tb[IPSET_ATTR_CIDR]) { 141 142 u8 cidr = nla_get_u8(tb[IPSET_ATTR_CIDR]); 142 143 ··· 147 142 return -IPSET_ERR_INVALID_CIDR; 148 143 ip_set_mask_from_to(ip, ip_to, cidr); 149 144 } 145 + 146 + if (((u64)ip_to - ip + 1) > IPSET_MAX_RANGE) 147 + return -ERANGE; 150 148 151 149 if (retried) 152 150 ip = ntohl(h->next.ip);
+3
net/netfilter/ipset/ip_set_hash_ipport.c
··· 173 173 swap(port, port_to); 174 174 } 175 175 176 + if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE) 177 + return -ERANGE; 178 + 176 179 if (retried) 177 180 ip = ntohl(h->next.ip); 178 181 for (; ip <= ip_to; ip++) {
+3
net/netfilter/ipset/ip_set_hash_ipportip.c
··· 180 180 swap(port, port_to); 181 181 } 182 182 183 + if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE) 184 + return -ERANGE; 185 + 183 186 if (retried) 184 187 ip = ntohl(h->next.ip); 185 188 for (; ip <= ip_to; ip++) {
+3
net/netfilter/ipset/ip_set_hash_ipportnet.c
··· 253 253 swap(port, port_to); 254 254 } 255 255 256 + if (((u64)ip_to - ip + 1)*(port_to - port + 1) > IPSET_MAX_RANGE) 257 + return -ERANGE; 258 + 256 259 ip2_to = ip2_from; 257 260 if (tb[IPSET_ATTR_IP2_TO]) { 258 261 ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP2_TO], &ip2_to);
+10 -1
net/netfilter/ipset/ip_set_hash_net.c
··· 140 140 ipset_adtfn adtfn = set->variant->adt[adt]; 141 141 struct hash_net4_elem e = { .cidr = HOST_MASK }; 142 142 struct ip_set_ext ext = IP_SET_INIT_UEXT(set); 143 - u32 ip = 0, ip_to = 0; 143 + u32 ip = 0, ip_to = 0, ipn, n = 0; 144 144 int ret; 145 145 146 146 if (tb[IPSET_ATTR_LINENO]) ··· 188 188 if (ip + UINT_MAX == ip_to) 189 189 return -IPSET_ERR_HASH_RANGE; 190 190 } 191 + ipn = ip; 192 + do { 193 + ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr); 194 + n++; 195 + } while (ipn++ < ip_to); 196 + 197 + if (n > IPSET_MAX_RANGE) 198 + return -ERANGE; 199 + 191 200 if (retried) 192 201 ip = ntohl(h->next.ip); 193 202 do {
+9 -1
net/netfilter/ipset/ip_set_hash_netiface.c
··· 202 202 ipset_adtfn adtfn = set->variant->adt[adt]; 203 203 struct hash_netiface4_elem e = { .cidr = HOST_MASK, .elem = 1 }; 204 204 struct ip_set_ext ext = IP_SET_INIT_UEXT(set); 205 - u32 ip = 0, ip_to = 0; 205 + u32 ip = 0, ip_to = 0, ipn, n = 0; 206 206 int ret; 207 207 208 208 if (tb[IPSET_ATTR_LINENO]) ··· 256 256 } else { 257 257 ip_set_mask_from_to(ip, ip_to, e.cidr); 258 258 } 259 + ipn = ip; 260 + do { 261 + ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr); 262 + n++; 263 + } while (ipn++ < ip_to); 264 + 265 + if (n > IPSET_MAX_RANGE) 266 + return -ERANGE; 259 267 260 268 if (retried) 261 269 ip = ntohl(h->next.ip);
+15 -1
net/netfilter/ipset/ip_set_hash_netnet.c
··· 168 168 struct hash_netnet4_elem e = { }; 169 169 struct ip_set_ext ext = IP_SET_INIT_UEXT(set); 170 170 u32 ip = 0, ip_to = 0; 171 - u32 ip2 = 0, ip2_from = 0, ip2_to = 0; 171 + u32 ip2 = 0, ip2_from = 0, ip2_to = 0, ipn; 172 + u64 n = 0, m = 0; 172 173 int ret; 173 174 174 175 if (tb[IPSET_ATTR_LINENO]) ··· 245 244 } else { 246 245 ip_set_mask_from_to(ip2_from, ip2_to, e.cidr[1]); 247 246 } 247 + ipn = ip; 248 + do { 249 + ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr[0]); 250 + n++; 251 + } while (ipn++ < ip_to); 252 + ipn = ip2_from; 253 + do { 254 + ipn = ip_set_range_to_cidr(ipn, ip2_to, &e.cidr[1]); 255 + m++; 256 + } while (ipn++ < ip2_to); 257 + 258 + if (n*m > IPSET_MAX_RANGE) 259 + return -ERANGE; 248 260 249 261 if (retried) { 250 262 ip = ntohl(h->next.ip[0]);
+10 -1
net/netfilter/ipset/ip_set_hash_netport.c
··· 158 158 ipset_adtfn adtfn = set->variant->adt[adt]; 159 159 struct hash_netport4_elem e = { .cidr = HOST_MASK - 1 }; 160 160 struct ip_set_ext ext = IP_SET_INIT_UEXT(set); 161 - u32 port, port_to, p = 0, ip = 0, ip_to = 0; 161 + u32 port, port_to, p = 0, ip = 0, ip_to = 0, ipn; 162 + u64 n = 0; 162 163 bool with_ports = false; 163 164 u8 cidr; 164 165 int ret; ··· 236 235 } else { 237 236 ip_set_mask_from_to(ip, ip_to, e.cidr + 1); 238 237 } 238 + ipn = ip; 239 + do { 240 + ipn = ip_set_range_to_cidr(ipn, ip_to, &cidr); 241 + n++; 242 + } while (ipn++ < ip_to); 243 + 244 + if (n*(port_to - port + 1) > IPSET_MAX_RANGE) 245 + return -ERANGE; 239 246 240 247 if (retried) { 241 248 ip = ntohl(h->next.ip);
+15 -1
net/netfilter/ipset/ip_set_hash_netportnet.c
··· 182 182 struct hash_netportnet4_elem e = { }; 183 183 struct ip_set_ext ext = IP_SET_INIT_UEXT(set); 184 184 u32 ip = 0, ip_to = 0, p = 0, port, port_to; 185 - u32 ip2_from = 0, ip2_to = 0, ip2; 185 + u32 ip2_from = 0, ip2_to = 0, ip2, ipn; 186 + u64 n = 0, m = 0; 186 187 bool with_ports = false; 187 188 int ret; 188 189 ··· 285 284 } else { 286 285 ip_set_mask_from_to(ip2_from, ip2_to, e.cidr[1]); 287 286 } 287 + ipn = ip; 288 + do { 289 + ipn = ip_set_range_to_cidr(ipn, ip_to, &e.cidr[0]); 290 + n++; 291 + } while (ipn++ < ip_to); 292 + ipn = ip2_from; 293 + do { 294 + ipn = ip_set_range_to_cidr(ipn, ip2_to, &e.cidr[1]); 295 + m++; 296 + } while (ipn++ < ip2_to); 297 + 298 + if (n*m*(port_to - port + 1) > IPSET_MAX_RANGE) 299 + return -ERANGE; 288 300 289 301 if (retried) { 290 302 ip = ntohl(h->next.ip[0]);
+22 -49
net/netfilter/nf_conntrack_core.c
··· 66 66 67 67 struct conntrack_gc_work { 68 68 struct delayed_work dwork; 69 - u32 last_bucket; 69 + u32 next_bucket; 70 70 bool exiting; 71 71 bool early_drop; 72 - long next_gc_run; 73 72 }; 74 73 75 74 static __read_mostly struct kmem_cache *nf_conntrack_cachep; 76 75 static DEFINE_SPINLOCK(nf_conntrack_locks_all_lock); 77 76 static __read_mostly bool nf_conntrack_locks_all; 78 77 79 - /* every gc cycle scans at most 1/GC_MAX_BUCKETS_DIV part of table */ 80 - #define GC_MAX_BUCKETS_DIV 128u 81 - /* upper bound of full table scan */ 82 - #define GC_MAX_SCAN_JIFFIES (16u * HZ) 83 - /* desired ratio of entries found to be expired */ 84 - #define GC_EVICT_RATIO 50u 78 + #define GC_SCAN_INTERVAL (120u * HZ) 79 + #define GC_SCAN_MAX_DURATION msecs_to_jiffies(10) 85 80 86 81 static struct conntrack_gc_work conntrack_gc_work; 87 82 ··· 1358 1363 1359 1364 static void gc_worker(struct work_struct *work) 1360 1365 { 1361 - unsigned int min_interval = max(HZ / GC_MAX_BUCKETS_DIV, 1u); 1362 - unsigned int i, goal, buckets = 0, expired_count = 0; 1363 - unsigned int nf_conntrack_max95 = 0; 1366 + unsigned long end_time = jiffies + GC_SCAN_MAX_DURATION; 1367 + unsigned int i, hashsz, nf_conntrack_max95 = 0; 1368 + unsigned long next_run = GC_SCAN_INTERVAL; 1364 1369 struct conntrack_gc_work *gc_work; 1365 - unsigned int ratio, scanned = 0; 1366 - unsigned long next_run; 1367 - 1368 1370 gc_work = container_of(work, struct conntrack_gc_work, dwork.work); 1369 1371 1370 - goal = nf_conntrack_htable_size / GC_MAX_BUCKETS_DIV; 1371 - i = gc_work->last_bucket; 1372 + i = gc_work->next_bucket; 1372 1373 if (gc_work->early_drop) 1373 1374 nf_conntrack_max95 = nf_conntrack_max / 100u * 95u; 1374 1375 ··· 1372 1381 struct nf_conntrack_tuple_hash *h; 1373 1382 struct hlist_nulls_head *ct_hash; 1374 1383 struct hlist_nulls_node *n; 1375 - unsigned int hashsz; 1376 1384 struct nf_conn *tmp; 1377 1385 1378 - i++; 1379 1386 rcu_read_lock(); 1380 1387 1381 1388 
nf_conntrack_get_ht(&ct_hash, &hashsz); 1382 - if (i >= hashsz) 1383 - i = 0; 1389 + if (i >= hashsz) { 1390 + rcu_read_unlock(); 1391 + break; 1392 + } 1384 1393 1385 1394 hlist_nulls_for_each_entry_rcu(h, n, &ct_hash[i], hnnode) { 1386 1395 struct nf_conntrack_net *cnet; ··· 1388 1397 1389 1398 tmp = nf_ct_tuplehash_to_ctrack(h); 1390 1399 1391 - scanned++; 1392 1400 if (test_bit(IPS_OFFLOAD_BIT, &tmp->status)) { 1393 1401 nf_ct_offload_timeout(tmp); 1394 1402 continue; ··· 1395 1405 1396 1406 if (nf_ct_is_expired(tmp)) { 1397 1407 nf_ct_gc_expired(tmp); 1398 - expired_count++; 1399 1408 continue; 1400 1409 } 1401 1410 ··· 1427 1438 */ 1428 1439 rcu_read_unlock(); 1429 1440 cond_resched(); 1430 - } while (++buckets < goal); 1441 + i++; 1442 + 1443 + if (time_after(jiffies, end_time) && i < hashsz) { 1444 + gc_work->next_bucket = i; 1445 + next_run = 0; 1446 + break; 1447 + } 1448 + } while (i < hashsz); 1431 1449 1432 1450 if (gc_work->exiting) 1433 1451 return; ··· 1445 1449 * 1446 1450 * This worker is only here to reap expired entries when system went 1447 1451 * idle after a busy period. 1448 - * 1449 - * The heuristics below are supposed to balance conflicting goals: 1450 - * 1451 - * 1. Minimize time until we notice a stale entry 1452 - * 2. Maximize scan intervals to not waste cycles 1453 - * 1454 - * Normally, expire ratio will be close to 0. 1455 - * 1456 - * As soon as a sizeable fraction of the entries have expired 1457 - * increase scan frequency. 1458 1452 */ 1459 - ratio = scanned ? 
expired_count * 100 / scanned : 0; 1460 - if (ratio > GC_EVICT_RATIO) { 1461 - gc_work->next_gc_run = min_interval; 1462 - } else { 1463 - unsigned int max = GC_MAX_SCAN_JIFFIES / GC_MAX_BUCKETS_DIV; 1464 - 1465 - BUILD_BUG_ON((GC_MAX_SCAN_JIFFIES / GC_MAX_BUCKETS_DIV) == 0); 1466 - 1467 - gc_work->next_gc_run += min_interval; 1468 - if (gc_work->next_gc_run > max) 1469 - gc_work->next_gc_run = max; 1453 + if (next_run) { 1454 + gc_work->early_drop = false; 1455 + gc_work->next_bucket = 0; 1470 1456 } 1471 - 1472 - next_run = gc_work->next_gc_run; 1473 - gc_work->last_bucket = i; 1474 - gc_work->early_drop = false; 1475 1457 queue_delayed_work(system_power_efficient_wq, &gc_work->dwork, next_run); 1476 1458 } 1477 1459 1478 1460 static void conntrack_gc_work_init(struct conntrack_gc_work *gc_work) 1479 1461 { 1480 1462 INIT_DEFERRABLE_WORK(&gc_work->dwork, gc_worker); 1481 - gc_work->next_gc_run = HZ; 1482 1463 gc_work->exiting = false; 1483 1464 } 1484 1465
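The conntrack rewrite above drops the old eviction-ratio heuristics in favor of a simple time budget: scan buckets until a deadline, remember where to stop, and reschedule immediately if the table was not finished (otherwise wait a full interval). A minimal Python sketch of that resumable-scan shape — names and constants are illustrative, not kernel API:

```python
import time

SCAN_INTERVAL = 120.0   # seconds between full scans (cf. GC_SCAN_INTERVAL)
MAX_DURATION = 0.010    # per-invocation budget (cf. GC_SCAN_MAX_DURATION)

class GcWorker:
    def __init__(self, buckets):
        self.buckets = buckets
        self.next_bucket = 0    # cf. gc_work->next_bucket

    def run_once(self, budget=MAX_DURATION):
        """Scan from next_bucket until done or out of budget.

        Returns the delay before the next invocation: 0 means
        "resume where we stopped", SCAN_INTERVAL means "pass complete"."""
        deadline = time.monotonic() + budget
        i = self.next_bucket
        while i < len(self.buckets):
            self.scan_bucket(i)
            i += 1
            if time.monotonic() > deadline and i < len(self.buckets):
                self.next_bucket = i        # remember where to resume
                return 0.0
        self.next_bucket = 0                # full pass done; start over
        return SCAN_INTERVAL

    def scan_bucket(self, i):
        # evict expired entries, like nf_ct_gc_expired() per bucket
        self.buckets[i] = [e for e in self.buckets[i] if not e.get("expired")]
```

Passing a non-positive budget forces the "out of time" path, which is what the `next_run = 0` branch in the kernel worker corresponds to.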
-1
net/netfilter/nf_conntrack_proto_tcp.c
··· 1478 1478 1479 1479 #if IS_ENABLED(CONFIG_NF_FLOW_TABLE) 1480 1480 tn->offload_timeout = 30 * HZ; 1481 - tn->offload_pickup = 120 * HZ; 1482 1481 #endif 1483 1482 } 1484 1483
-1
net/netfilter/nf_conntrack_proto_udp.c
··· 271 271 272 272 #if IS_ENABLED(CONFIG_NF_FLOW_TABLE) 273 273 un->offload_timeout = 30 * HZ; 274 - un->offload_pickup = 30 * HZ; 275 274 #endif 276 275 } 277 276
-16
net/netfilter/nf_conntrack_standalone.c
··· 575 575 NF_SYSCTL_CT_PROTO_TIMEOUT_TCP_UNACK, 576 576 #if IS_ENABLED(CONFIG_NF_FLOW_TABLE) 577 577 NF_SYSCTL_CT_PROTO_TIMEOUT_TCP_OFFLOAD, 578 - NF_SYSCTL_CT_PROTO_TIMEOUT_TCP_OFFLOAD_PICKUP, 579 578 #endif 580 579 NF_SYSCTL_CT_PROTO_TCP_LOOSE, 581 580 NF_SYSCTL_CT_PROTO_TCP_LIBERAL, ··· 584 585 NF_SYSCTL_CT_PROTO_TIMEOUT_UDP_STREAM, 585 586 #if IS_ENABLED(CONFIG_NF_FLOW_TABLE) 586 587 NF_SYSCTL_CT_PROTO_TIMEOUT_UDP_OFFLOAD, 587 - NF_SYSCTL_CT_PROTO_TIMEOUT_UDP_OFFLOAD_PICKUP, 588 588 #endif 589 589 NF_SYSCTL_CT_PROTO_TIMEOUT_ICMP, 590 590 NF_SYSCTL_CT_PROTO_TIMEOUT_ICMPV6, ··· 774 776 .mode = 0644, 775 777 .proc_handler = proc_dointvec_jiffies, 776 778 }, 777 - [NF_SYSCTL_CT_PROTO_TIMEOUT_TCP_OFFLOAD_PICKUP] = { 778 - .procname = "nf_flowtable_tcp_pickup", 779 - .maxlen = sizeof(unsigned int), 780 - .mode = 0644, 781 - .proc_handler = proc_dointvec_jiffies, 782 - }, 783 779 #endif 784 780 [NF_SYSCTL_CT_PROTO_TCP_LOOSE] = { 785 781 .procname = "nf_conntrack_tcp_loose", ··· 820 828 #if IS_ENABLED(CONFIG_NFT_FLOW_OFFLOAD) 821 829 [NF_SYSCTL_CT_PROTO_TIMEOUT_UDP_OFFLOAD] = { 822 830 .procname = "nf_flowtable_udp_timeout", 823 - .maxlen = sizeof(unsigned int), 824 - .mode = 0644, 825 - .proc_handler = proc_dointvec_jiffies, 826 - }, 827 - [NF_SYSCTL_CT_PROTO_TIMEOUT_UDP_OFFLOAD_PICKUP] = { 828 - .procname = "nf_flowtable_udp_pickup", 829 831 .maxlen = sizeof(unsigned int), 830 832 .mode = 0644, 831 833 .proc_handler = proc_dointvec_jiffies, ··· 1004 1018 1005 1019 #if IS_ENABLED(CONFIG_NF_FLOW_TABLE) 1006 1020 table[NF_SYSCTL_CT_PROTO_TIMEOUT_TCP_OFFLOAD].data = &tn->offload_timeout; 1007 - table[NF_SYSCTL_CT_PROTO_TIMEOUT_TCP_OFFLOAD_PICKUP].data = &tn->offload_pickup; 1008 1021 #endif 1009 1022 1010 1023 } ··· 1096 1111 table[NF_SYSCTL_CT_PROTO_TIMEOUT_UDP_STREAM].data = &un->timeouts[UDP_CT_REPLIED]; 1097 1112 #if IS_ENABLED(CONFIG_NF_FLOW_TABLE) 1098 1113 table[NF_SYSCTL_CT_PROTO_TIMEOUT_UDP_OFFLOAD].data = &un->offload_timeout; 1099 - 
table[NF_SYSCTL_CT_PROTO_TIMEOUT_UDP_OFFLOAD_PICKUP].data = &un->offload_pickup; 1100 1114 #endif 1101 1115 1102 1116 nf_conntrack_standalone_init_tcp_sysctl(net, table);
+8 -3
net/netfilter/nf_flow_table_core.c
··· 182 182 { 183 183 struct net *net = nf_ct_net(ct); 184 184 int l4num = nf_ct_protonum(ct); 185 - unsigned int timeout; 185 + s32 timeout; 186 186 187 187 if (l4num == IPPROTO_TCP) { 188 188 struct nf_tcp_net *tn = nf_tcp_pernet(net); 189 189 190 - timeout = tn->offload_pickup; 190 + timeout = tn->timeouts[TCP_CONNTRACK_ESTABLISHED]; 191 + timeout -= tn->offload_timeout; 191 192 } else if (l4num == IPPROTO_UDP) { 192 193 struct nf_udp_net *tn = nf_udp_pernet(net); 193 194 194 - timeout = tn->offload_pickup; 195 + timeout = tn->timeouts[UDP_CT_REPLIED]; 196 + timeout -= tn->offload_timeout; 195 197 } else { 196 198 return; 197 199 } 200 + 201 + if (timeout < 0) 202 + timeout = 0; 198 203 199 204 if (nf_flow_timeout_delta(ct->timeout) > (__s32)timeout) 200 205 ct->timeout = nfct_time_stamp + timeout;
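With the dedicated `offload_pickup` sysctls removed (see the proto_tcp, proto_udp, and standalone hunks above), the flowtable now derives the pickup timeout from the regular conntrack timeout minus the offload timeout, clamped at zero. The signed type matters because `offload_timeout` is tunable and may exceed the base timeout. A sketch of that arithmetic:

```python
def pickup_timeout(base_timeout, offload_timeout):
    """Remaining conntrack lifetime once a flow leaves hardware offload.

    Mirrors the signed subtraction + clamp in flow_offload_fixup_ct():
    an unsigned subtraction here would wrap to a huge timeout instead
    of clamping to zero."""
    timeout = base_timeout - offload_timeout   # may go negative
    return max(timeout, 0)

# kernel defaults: TCP established 5 days, offload 30 s
TCP_ESTABLISHED = 5 * 24 * 3600
OFFLOAD = 30
```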
+7 -6
net/openvswitch/flow.c
··· 293 293 } 294 294 295 295 /** 296 - * Parse vlan tag from vlan header. 296 + * parse_vlan_tag - Parse vlan tag from vlan header. 297 297 * @skb: skb containing frame to parse 298 298 * @key_vh: pointer to parsed vlan tag 299 299 * @untag_vlan: should the vlan header be removed from the frame 300 300 * 301 - * Returns ERROR on memory error. 302 - * Returns 0 if it encounters a non-vlan or incomplete packet. 303 - * Returns 1 after successfully parsing vlan tag. 301 + * Return: ERROR on memory error. 302 + * %0 if it encounters a non-vlan or incomplete packet. 303 + * %1 after successfully parsing vlan tag. 304 304 */ 305 305 static int parse_vlan_tag(struct sk_buff *skb, struct vlan_head *key_vh, 306 306 bool untag_vlan) ··· 532 532 * L3 header 533 533 * @key: output flow key 534 534 * 535 + * Return: %0 if successful, otherwise a negative errno value. 535 536 */ 536 537 static int key_extract_l3l4(struct sk_buff *skb, struct sw_flow_key *key) 537 538 { ··· 749 748 * 750 749 * The caller must ensure that skb->len >= ETH_HLEN. 751 750 * 752 - * Returns 0 if successful, otherwise a negative errno value. 753 - * 754 751 * Initializes @skb header fields as follows: 755 752 * 756 753 * - skb->mac_header: the L2 header. ··· 763 764 * 764 765 * - skb->protocol: the type of the data starting at skb->network_header. 765 766 * Equals to key->eth.type. 767 + * 768 + * Return: %0 if successful, otherwise a negative errno value. 766 769 */ 767 770 static int key_extract(struct sk_buff *skb, struct sw_flow_key *key) 768 771 {
+3
net/sched/act_mirred.c
··· 271 271 goto out; 272 272 } 273 273 274 + /* All mirred/redirected skbs should clear previous ct info */ 275 + nf_reset_ct(skb2); 276 + 274 277 want_ingress = tcf_mirred_act_wants_ingress(m_eaction); 275 278 276 279 expects_nh = want_ingress || !m_mac_header_xmit;
+1 -1
net/smc/af_smc.c
··· 795 795 reason_code = SMC_CLC_DECL_NOSRVLINK; 796 796 goto connect_abort; 797 797 } 798 - smc->conn.lnk = link; 798 + smc_switch_link_and_count(&smc->conn, link); 799 799 } 800 800 801 801 /* create send buffer and rmb */
+2 -2
net/smc/smc_core.c
··· 917 917 return rc; 918 918 } 919 919 920 - static void smc_switch_link_and_count(struct smc_connection *conn, 921 - struct smc_link *to_lnk) 920 + void smc_switch_link_and_count(struct smc_connection *conn, 921 + struct smc_link *to_lnk) 922 922 { 923 923 atomic_dec(&conn->lnk->conn_cnt); 924 924 conn->lnk = to_lnk;
+4
net/smc/smc_core.h
··· 97 97 unsigned long *wr_tx_mask; /* bit mask of used indexes */ 98 98 u32 wr_tx_cnt; /* number of WR send buffers */ 99 99 wait_queue_head_t wr_tx_wait; /* wait for free WR send buf */ 100 + atomic_t wr_tx_refcnt; /* tx refs to link */ 100 101 101 102 struct smc_wr_buf *wr_rx_bufs; /* WR recv payload buffers */ 102 103 struct ib_recv_wr *wr_rx_ibs; /* WR recv meta data */ ··· 110 109 111 110 struct ib_reg_wr wr_reg; /* WR register memory region */ 112 111 wait_queue_head_t wr_reg_wait; /* wait for wr_reg result */ 112 + atomic_t wr_reg_refcnt; /* reg refs to link */ 113 113 enum smc_wr_reg_state wr_reg_state; /* state of wr_reg request */ 114 114 115 115 u8 gid[SMC_GID_SIZE];/* gid matching used vlan id*/ ··· 446 444 int smcr_link_init(struct smc_link_group *lgr, struct smc_link *lnk, 447 445 u8 link_idx, struct smc_init_info *ini); 448 446 void smcr_link_clear(struct smc_link *lnk, bool log); 447 + void smc_switch_link_and_count(struct smc_connection *conn, 448 + struct smc_link *to_lnk); 449 449 int smcr_buf_map_lgr(struct smc_link *lnk); 450 450 int smcr_buf_reg_lgr(struct smc_link *lnk); 451 451 void smcr_lgr_set_type(struct smc_link_group *lgr, enum smc_lgr_type new_type);
+4 -6
net/smc/smc_llc.c
··· 888 888 if (!rc) 889 889 goto out; 890 890 out_clear_lnk: 891 + lnk_new->state = SMC_LNK_INACTIVE; 891 892 smcr_link_clear(lnk_new, false); 892 893 out_reject: 893 894 smc_llc_cli_add_link_reject(qentry); ··· 1185 1184 goto out_err; 1186 1185 return 0; 1187 1186 out_err: 1187 + link_new->state = SMC_LNK_INACTIVE; 1188 1188 smcr_link_clear(link_new, false); 1189 1189 return rc; 1190 1190 } ··· 1288 1286 del_llc->reason = 0; 1289 1287 smc_llc_send_message(lnk, &qentry->msg); /* response */ 1290 1288 1291 - if (smc_link_downing(&lnk_del->state)) { 1292 - if (smc_switch_conns(lgr, lnk_del, false)) 1293 - smc_wr_tx_wait_no_pending_sends(lnk_del); 1294 - } 1289 + if (smc_link_downing(&lnk_del->state)) 1290 + smc_switch_conns(lgr, lnk_del, false); 1295 1291 smcr_link_clear(lnk_del, true); 1296 1292 1297 1293 active_links = smc_llc_active_link_count(lgr); ··· 1805 1805 link->smcibdev->ibdev->name, link->ibport); 1806 1806 complete(&link->llc_testlink_resp); 1807 1807 cancel_delayed_work_sync(&link->llc_testlink_wrk); 1808 - smc_wr_wakeup_reg_wait(link); 1809 - smc_wr_wakeup_tx_wait(link); 1810 1808 } 1811 1809 1812 1810 /* register a new rtoken at the remote peer (for all links) */
+17 -1
net/smc/smc_tx.c
··· 496 496 /* Wakeup sndbuf consumers from any context (IRQ or process) 497 497 * since there is more data to transmit; usable snd_wnd as max transmit 498 498 */ 499 - static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn) 499 + static int _smcr_tx_sndbuf_nonempty(struct smc_connection *conn) 500 500 { 501 501 struct smc_cdc_producer_flags *pflags = &conn->local_tx_ctrl.prod_flags; 502 502 struct smc_link *link = conn->lnk; ··· 547 547 548 548 out_unlock: 549 549 spin_unlock_bh(&conn->send_lock); 550 + return rc; 551 + } 552 + 553 + static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn) 554 + { 555 + struct smc_link *link = conn->lnk; 556 + int rc = -ENOLINK; 557 + 558 + if (!link) 559 + return rc; 560 + 561 + atomic_inc(&link->wr_tx_refcnt); 562 + if (smc_link_usable(link)) 563 + rc = _smcr_tx_sndbuf_nonempty(conn); 564 + if (atomic_dec_and_test(&link->wr_tx_refcnt)) 565 + wake_up_all(&link->wr_tx_wait); 550 566 return rc; 551 567 } 552 568
+10
net/smc/smc_wr.c
··· 322 322 if (rc) 323 323 return rc; 324 324 325 + atomic_inc(&link->wr_reg_refcnt); 325 326 rc = wait_event_interruptible_timeout(link->wr_reg_wait, 326 327 (link->wr_reg_state != POSTED), 327 328 SMC_WR_REG_MR_WAIT_TIME); 329 + if (atomic_dec_and_test(&link->wr_reg_refcnt)) 330 + wake_up_all(&link->wr_reg_wait); 328 331 if (!rc) { 329 332 /* timeout - terminate link */ 330 333 smcr_link_down_cond_sched(link); ··· 569 566 return; 570 567 ibdev = lnk->smcibdev->ibdev; 571 568 569 + smc_wr_wakeup_reg_wait(lnk); 570 + smc_wr_wakeup_tx_wait(lnk); 571 + 572 572 if (smc_wr_tx_wait_no_pending_sends(lnk)) 573 573 memset(lnk->wr_tx_mask, 0, 574 574 BITS_TO_LONGS(SMC_WR_BUF_CNT) * 575 575 sizeof(*lnk->wr_tx_mask)); 576 + wait_event(lnk->wr_reg_wait, (!atomic_read(&lnk->wr_reg_refcnt))); 577 + wait_event(lnk->wr_tx_wait, (!atomic_read(&lnk->wr_tx_refcnt))); 576 578 577 579 if (lnk->wr_rx_dma_addr) { 578 580 ib_dma_unmap_single(ibdev, lnk->wr_rx_dma_addr, ··· 736 728 memset(lnk->wr_tx_mask, 0, 737 729 BITS_TO_LONGS(SMC_WR_BUF_CNT) * sizeof(*lnk->wr_tx_mask)); 738 730 init_waitqueue_head(&lnk->wr_tx_wait); 731 + atomic_set(&lnk->wr_tx_refcnt, 0); 739 732 init_waitqueue_head(&lnk->wr_reg_wait); 733 + atomic_set(&lnk->wr_reg_refcnt, 0); 740 734 return rc; 741 735 742 736 dma_unmap:
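The SMC hunks above (smc_tx.c and smc_wr.c) pair every transient user of a link with a refcount (`wr_tx_refcnt`, `wr_reg_refcnt`): users bump the count around the critical call and wake waiters on the last drop, while link teardown waits for the count to reach zero before freeing resources. A generic Python sketch of that gate pattern, with illustrative names:

```python
import threading

class LinkGate:
    """Count transient users of a resource; let teardown drain them."""
    def __init__(self):
        self._refs = 0
        self._cond = threading.Condition()

    def enter(self):
        with self._cond:
            self._refs += 1            # cf. atomic_inc(&link->wr_tx_refcnt)

    def exit(self):
        with self._cond:
            self._refs -= 1
            if self._refs == 0:        # cf. atomic_dec_and_test + wake_up_all
                self._cond.notify_all()

    def drain(self):
        """Block until no users remain (cf. wait_event in smc_wr_free_link)."""
        with self._cond:
            self._cond.wait_for(lambda: self._refs == 0)

def send_on_link(gate, usable, do_send):
    """Shape of the new smcr_tx_sndbuf_nonempty(): ref, check, send, unref."""
    gate.enter()
    try:
        return do_send() if usable() else -1   # -1 stands in for -ENOLINK
    finally:
        gate.exit()
```

The point of checking usability only after taking the reference is that teardown cannot complete between the check and the send: `drain()` will not return while any `send_on_link` is in flight.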
+3 -3
net/tipc/link.c
··· 913 913 skb = tipc_msg_create(SOCK_WAKEUP, 0, INT_H_SIZE, 0, 914 914 dnode, l->addr, dport, 0, 0); 915 915 if (!skb) 916 - return -ENOMEM; 916 + return -ENOBUFS; 917 917 msg_set_dest_droppable(buf_msg(skb), true); 918 918 TIPC_SKB_CB(skb)->chain_imp = msg_importance(hdr); 919 919 skb_queue_tail(&l->wakeupq, skb); ··· 1031 1031 * 1032 1032 * Consumes the buffer chain. 1033 1033 * Messages at TIPC_SYSTEM_IMPORTANCE are always accepted 1034 - * Return: 0 if success, or errno: -ELINKCONG, -EMSGSIZE or -ENOBUFS or -ENOMEM 1034 + * Return: 0 if success, or errno: -ELINKCONG, -EMSGSIZE or -ENOBUFS 1035 1035 */ 1036 1036 int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list, 1037 1037 struct sk_buff_head *xmitq) ··· 1089 1089 if (!_skb) { 1090 1090 kfree_skb(skb); 1091 1091 __skb_queue_purge(list); 1092 - return -ENOMEM; 1092 + return -ENOBUFS; 1093 1093 } 1094 1094 __skb_queue_tail(transmq, skb); 1095 1095 tipc_link_set_skb_retransmit_time(skb, l);
+5 -2
net/vmw_vsock/virtio_transport.c
··· 357 357 358 358 static void virtio_vsock_reset_sock(struct sock *sk) 359 359 { 360 - lock_sock(sk); 360 + /* vmci_transport.c doesn't take sk_lock here either. At least we're 361 + * under vsock_table_lock so the sock cannot disappear while we're 362 + * executing. 363 + */ 364 + 361 365 sk->sk_state = TCP_CLOSE; 362 366 sk->sk_err = ECONNRESET; 363 367 sk_error_report(sk); 364 - release_sock(sk); 365 368 } 366 369 367 370 static void virtio_vsock_update_guest_cid(struct virtio_vsock *vsock)
+11 -7
scripts/checkversion.pl
··· 1 1 #! /usr/bin/env perl 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # 4 - # checkversion find uses of LINUX_VERSION_CODE or KERNEL_VERSION 5 - # without including <linux/version.h>, or cases of 6 - # including <linux/version.h> that don't need it. 7 - # Copyright (C) 2003, Randy Dunlap <rdunlap@xenotime.net> 4 + # checkversion finds uses of all macros in <linux/version.h> 5 + # where the source files do not #include <linux/version.h>; or cases 6 + # of including <linux/version.h> where it is not needed. 7 + # Copyright (C) 2003, Randy Dunlap <rdunlap@infradead.org> 8 8 9 9 use strict; 10 10 ··· 13 13 my $debugging; 14 14 15 15 foreach my $file (@ARGV) { 16 - next if $file =~ "include/linux/version\.h"; 16 + next if $file =~ "include/generated/uapi/linux/version\.h"; 17 + next if $file =~ "usr/include/linux/version\.h"; 17 18 # Open this file. 18 19 open( my $f, '<', $file ) 19 20 or die "Can't open $file: $!\n"; ··· 42 41 $iLinuxVersion = $. if m/^\s*#\s*include\s*<linux\/version\.h>/o; 43 42 } 44 43 45 - # Look for uses: LINUX_VERSION_CODE, KERNEL_VERSION, UTS_RELEASE 46 - if (($_ =~ /LINUX_VERSION_CODE/) || ($_ =~ /\WKERNEL_VERSION/)) { 44 + # Look for uses: LINUX_VERSION_CODE, KERNEL_VERSION, 45 + # LINUX_VERSION_MAJOR, LINUX_VERSION_PATCHLEVEL, LINUX_VERSION_SUBLEVEL 46 + if (($_ =~ /LINUX_VERSION_CODE/) || ($_ =~ /\WKERNEL_VERSION/) || 47 + ($_ =~ /LINUX_VERSION_MAJOR/) || ($_ =~ /LINUX_VERSION_PATCHLEVEL/) || 48 + ($_ =~ /LINUX_VERSION_SUBLEVEL/)) { 47 49 $fUseVersion = 1; 48 50 last if $iLinuxVersion; 49 51 }
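The checkversion.pl update above widens the set of macros the script looks for. Its core check — a file uses a version macro without including <linux/version.h>, or includes it without needing it — can be sketched in a few lines of Python (a simplification for illustration, not a replacement for the Perl script):

```python
import re

VERSION_MACROS = re.compile(
    r"LINUX_VERSION_CODE|\WKERNEL_VERSION|"
    r"LINUX_VERSION_MAJOR|LINUX_VERSION_PATCHLEVEL|LINUX_VERSION_SUBLEVEL")
INCLUDE = re.compile(r"^\s*#\s*include\s*<linux/version\.h>")

def check_source(text):
    """Return a problem description, or None if the file is consistent."""
    lines = text.splitlines()
    has_include = any(INCLUDE.match(line) for line in lines)
    uses_macro = any(VERSION_MACROS.search(line) for line in lines)
    if uses_macro and not has_include:
        return "uses version macro without <linux/version.h>"
    if has_include and not uses_macro:
        return "includes <linux/version.h> but does not need it"
    return None
```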
+2 -1
security/security.c
··· 58 58 [LOCKDOWN_MMIOTRACE] = "unsafe mmio", 59 59 [LOCKDOWN_DEBUGFS] = "debugfs access", 60 60 [LOCKDOWN_XMON_WR] = "xmon write access", 61 + [LOCKDOWN_BPF_WRITE_USER] = "use of bpf to write user RAM", 61 62 [LOCKDOWN_INTEGRITY_MAX] = "integrity", 62 63 [LOCKDOWN_KCORE] = "/proc/kcore access", 63 64 [LOCKDOWN_KPROBES] = "use of kprobes", 64 - [LOCKDOWN_BPF_READ] = "use of bpf to read kernel RAM", 65 + [LOCKDOWN_BPF_READ_KERNEL] = "use of bpf to read kernel RAM", 65 66 [LOCKDOWN_PERF] = "unsafe use of perf", 66 67 [LOCKDOWN_TRACEFS] = "use of tracefs", 67 68 [LOCKDOWN_XMON_RW] = "xmon read and write access",
+1 -1
sound/core/memalloc.c
··· 215 215 struct vm_area_struct *area) 216 216 { 217 217 return remap_pfn_range(area, area->vm_start, 218 - dmab->addr >> PAGE_SHIFT, 218 + page_to_pfn(virt_to_page(dmab->area)), 219 219 area->vm_end - area->vm_start, 220 220 area->vm_page_prot); 221 221 }
+5 -2
sound/core/pcm_native.c
··· 246 246 if (!(substream->runtime->hw.info & SNDRV_PCM_INFO_MMAP)) 247 247 return false; 248 248 249 - if (substream->ops->mmap) 249 + if (substream->ops->mmap || substream->ops->page) 250 250 return true; 251 251 252 252 switch (substream->dma_buffer.dev.type) { 253 253 case SNDRV_DMA_TYPE_UNKNOWN: 254 - return false; 254 + /* we can't know the device, so just assume that the driver does 255 + * everything right 256 + */ 257 + return true; 255 258 case SNDRV_DMA_TYPE_CONTINUOUS: 256 259 case SNDRV_DMA_TYPE_VMALLOC: 257 260 return true;
+27 -12
sound/core/seq/seq_ports.c
··· 514 514 return err; 515 515 } 516 516 517 - static void delete_and_unsubscribe_port(struct snd_seq_client *client, 518 - struct snd_seq_client_port *port, 519 - struct snd_seq_subscribers *subs, 520 - bool is_src, bool ack) 517 + /* called with grp->list_mutex held */ 518 + static void __delete_and_unsubscribe_port(struct snd_seq_client *client, 519 + struct snd_seq_client_port *port, 520 + struct snd_seq_subscribers *subs, 521 + bool is_src, bool ack) 521 522 { 522 523 struct snd_seq_port_subs_info *grp; 523 524 struct list_head *list; ··· 526 525 527 526 grp = is_src ? &port->c_src : &port->c_dest; 528 527 list = is_src ? &subs->src_list : &subs->dest_list; 529 - down_write(&grp->list_mutex); 530 528 write_lock_irq(&grp->list_lock); 531 529 empty = list_empty(list); 532 530 if (!empty) ··· 535 535 536 536 if (!empty) 537 537 unsubscribe_port(client, port, grp, &subs->info, ack); 538 + } 539 + 540 + static void delete_and_unsubscribe_port(struct snd_seq_client *client, 541 + struct snd_seq_client_port *port, 542 + struct snd_seq_subscribers *subs, 543 + bool is_src, bool ack) 544 + { 545 + struct snd_seq_port_subs_info *grp; 546 + 547 + grp = is_src ? 
&port->c_src : &port->c_dest; 548 + down_write(&grp->list_mutex); 549 + __delete_and_unsubscribe_port(client, port, subs, is_src, ack); 538 550 up_write(&grp->list_mutex); 539 551 } 540 552 ··· 602 590 struct snd_seq_client_port *dest_port, 603 591 struct snd_seq_port_subscribe *info) 604 592 { 605 - struct snd_seq_port_subs_info *src = &src_port->c_src; 593 + struct snd_seq_port_subs_info *dest = &dest_port->c_dest; 606 594 struct snd_seq_subscribers *subs; 607 595 int err = -ENOENT; 608 596 609 - down_write(&src->list_mutex); 597 + /* always start from deleting the dest port for avoiding concurrent 598 + * deletions 599 + */ 600 + down_write(&dest->list_mutex); 610 601 /* look for the connection */ 611 - list_for_each_entry(subs, &src->list_head, src_list) { 602 + list_for_each_entry(subs, &dest->list_head, dest_list) { 612 603 if (match_subs_info(info, &subs->info)) { 613 - atomic_dec(&subs->ref_count); /* mark as not ready */ 604 + __delete_and_unsubscribe_port(dest_client, dest_port, 605 + subs, false, 606 + connector->number != dest_client->number); 614 607 err = 0; 615 608 break; 616 609 } 617 610 } 618 - up_write(&src->list_mutex); 611 + up_write(&dest->list_mutex); 619 612 if (err < 0) 620 613 return err; 621 614 622 615 delete_and_unsubscribe_port(src_client, src_port, subs, true, 623 616 connector->number != src_client->number); 624 - delete_and_unsubscribe_port(dest_client, dest_port, subs, false, 625 - connector->number != dest_client->number); 626 617 kfree(subs); 627 618 return 0; 628 619 }
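The seq_ports fix above is a classic lock-split refactor: the unsubscribe work moves into a `__`-prefixed helper that assumes the list lock is already held, so the deletion path can search and delete under a single hold of `dest->list_mutex` instead of dropping the lock between "found it" and "removed it". A small Python sketch of that convention, with illustrative names:

```python
import threading

class SubscriptionList:
    def __init__(self):
        self._lock = threading.Lock()
        self._subs = []

    def _unsubscribe_locked(self, sub):
        """Caller must hold self._lock (cf. __delete_and_unsubscribe_port)."""
        if sub in self._subs:
            self._subs.remove(sub)
            return True
        return False

    def unsubscribe(self, sub):
        """Public wrapper: takes the lock, then delegates."""
        with self._lock:
            return self._unsubscribe_locked(sub)

    def find_and_unsubscribe(self, predicate):
        """Search and delete under one lock hold, so a concurrent caller
        cannot delete the entry between the match and the removal."""
        with self._lock:
            for sub in self._subs:
                if predicate(sub):
                    self._unsubscribe_locked(sub)
                    return sub
        return None
```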
+4
sound/pci/hda/patch_realtek.c
··· 8274 8274 SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC), 8275 8275 SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC), 8276 8276 SND_PCI_QUIRK(0x1025, 0x129c, "Acer SWIFT SF314-55", ALC256_FIXUP_ACER_HEADSET_MIC), 8277 + SND_PCI_QUIRK(0x1025, 0x1300, "Acer SWIFT SF314-56", ALC256_FIXUP_ACER_MIC_NO_PRESENCE), 8277 8278 SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC), 8278 8279 SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC), 8279 8280 SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC), 8281 + SND_PCI_QUIRK(0x1025, 0x142b, "Acer Swift SF314-42", ALC255_FIXUP_ACER_MIC_NO_PRESENCE), 8280 8282 SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE), 8281 8283 SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC), 8282 8284 SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z), ··· 8431 8429 SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED), 8432 8430 SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED), 8433 8431 SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP), 8432 + SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED), 8434 8433 SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), 8435 8434 SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), 8436 8435 SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), ··· 8466 8463 SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC), 8467 8464 SND_PCI_QUIRK(0x1043, 0x1740, "ASUS UX430UA", ALC295_FIXUP_ASUS_DACS), 8468 8465 SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK), 8466 + SND_PCI_QUIRK(0x1043, 
0x1662, "ASUS GV301QH", ALC294_FIXUP_ASUS_DUAL_SPK), 8469 8467 SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS), 8470 8468 SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC), 8471 8469 SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),
+1
sound/soc/Kconfig
··· 36 36 37 37 config SND_SOC_TOPOLOGY 38 38 bool 39 + select SND_DYNAMIC_MINORS 39 40 40 41 config SND_SOC_TOPOLOGY_KUNIT_TEST 41 42 tristate "KUnit tests for SoC topology"
+5
sound/soc/amd/acp-da7219-max98357a.c
··· 525 525 | SND_SOC_DAIFMT_CBM_CFM, 526 526 .init = cz_da7219_init, 527 527 .dpcm_playback = 1, 528 + .stop_dma_first = 1, 528 529 .ops = &cz_da7219_play_ops, 529 530 SND_SOC_DAILINK_REG(designware1, dlgs, platform), 530 531 }, ··· 535 534 .dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF 536 535 | SND_SOC_DAIFMT_CBM_CFM, 537 536 .dpcm_capture = 1, 537 + .stop_dma_first = 1, 538 538 .ops = &cz_da7219_cap_ops, 539 539 SND_SOC_DAILINK_REG(designware2, dlgs, platform), 540 540 }, ··· 545 543 .dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF 546 544 | SND_SOC_DAIFMT_CBM_CFM, 547 545 .dpcm_playback = 1, 546 + .stop_dma_first = 1, 548 547 .ops = &cz_max_play_ops, 549 548 SND_SOC_DAILINK_REG(designware3, mx, platform), 550 549 }, ··· 556 553 .dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF 557 554 | SND_SOC_DAIFMT_CBM_CFM, 558 555 .dpcm_capture = 1, 556 + .stop_dma_first = 1, 559 557 .ops = &cz_dmic0_cap_ops, 560 558 SND_SOC_DAILINK_REG(designware3, adau, platform), 561 559 }, ··· 567 563 .dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF 568 564 | SND_SOC_DAIFMT_CBM_CFM, 569 565 .dpcm_capture = 1, 566 + .stop_dma_first = 1, 570 567 .ops = &cz_dmic1_cap_ops, 571 568 SND_SOC_DAILINK_REG(designware2, adau, platform), 572 569 },
+1 -1
sound/soc/amd/acp-pcm-dma.c
··· 969 969 970 970 acp_set_sram_bank_state(rtd->acp_mmio, 0, true); 971 971 /* Save for runtime private data */ 972 - rtd->dma_addr = substream->dma_buffer.addr; 972 + rtd->dma_addr = runtime->dma_addr; 973 973 rtd->order = get_order(size); 974 974 975 975 /* Fill the page table entries in ACP SRAM */
+1 -1
sound/soc/amd/raven/acp3x-pcm-dma.c
··· 286 286 pr_err("pinfo failed\n"); 287 287 } 288 288 size = params_buffer_bytes(params); 289 - rtd->dma_addr = substream->dma_buffer.addr; 289 + rtd->dma_addr = substream->runtime->dma_addr; 290 290 rtd->num_pages = (PAGE_ALIGN(size) >> PAGE_SHIFT); 291 291 config_acp3x_dma(rtd, substream->stream); 292 292 return 0;
+1 -1
sound/soc/amd/renoir/acp3x-pdm-dma.c
··· 242 242 return -EINVAL; 243 243 size = params_buffer_bytes(params); 244 244 period_bytes = params_period_bytes(params); 245 - rtd->dma_addr = substream->dma_buffer.addr; 245 + rtd->dma_addr = substream->runtime->dma_addr; 246 246 rtd->num_pages = (PAGE_ALIGN(size) >> PAGE_SHIFT); 247 247 config_acp_dma(rtd, substream->stream); 248 248 init_pdm_ring_buffer(MEM_WINDOW_START, size, period_bytes,
+2
sound/soc/amd/renoir/rn-pci-acp3x.c
··· 382 382 .runtime_resume = snd_rn_acp_resume, 383 383 .suspend = snd_rn_acp_suspend, 384 384 .resume = snd_rn_acp_resume, 385 + .restore = snd_rn_acp_resume, 386 + .poweroff = snd_rn_acp_suspend, 385 387 }; 386 388 387 389 static void snd_rn_acp_remove(struct pci_dev *pci)
+1
sound/soc/codecs/Kconfig
··· 1559 1559 config SND_SOC_WCD938X 1560 1560 depends on SND_SOC_WCD938X_SDW 1561 1561 tristate 1562 + depends on SOUNDWIRE || !SOUNDWIRE 1562 1563 1563 1564 config SND_SOC_WCD938X_SDW 1564 1565 tristate "WCD9380/WCD9385 Codec - SDW"
+4 -1
sound/soc/codecs/Makefile
··· 583 583 obj-$(CONFIG_SND_SOC_WCD9335) += snd-soc-wcd9335.o 584 584 obj-$(CONFIG_SND_SOC_WCD934X) += snd-soc-wcd934x.o 585 585 obj-$(CONFIG_SND_SOC_WCD938X) += snd-soc-wcd938x.o 586 - obj-$(CONFIG_SND_SOC_WCD938X_SDW) += snd-soc-wcd938x-sdw.o 586 + ifdef CONFIG_SND_SOC_WCD938X_SDW 587 + # avoid link failure by forcing sdw code built-in when needed 588 + obj-$(CONFIG_SND_SOC_WCD938X) += snd-soc-wcd938x-sdw.o 589 + endif 587 590 obj-$(CONFIG_SND_SOC_WL1273) += snd-soc-wl1273.o 588 591 obj-$(CONFIG_SND_SOC_WM0010) += snd-soc-wm0010.o 589 592 obj-$(CONFIG_SND_SOC_WM1250_EV1) += snd-soc-wm1250-ev1.o
+70 -34
sound/soc/codecs/cs42l42.c
··· 405 405 .use_single_write = true, 406 406 }; 407 407 408 - static DECLARE_TLV_DB_SCALE(adc_tlv, -9600, 100, false); 408 + static DECLARE_TLV_DB_SCALE(adc_tlv, -9700, 100, true); 409 409 static DECLARE_TLV_DB_SCALE(mixer_tlv, -6300, 100, true); 410 410 411 411 static const char * const cs42l42_hpf_freq_text[] = { ··· 425 425 CS42L42_ADC_WNF_CF_SHIFT, 426 426 cs42l42_wnf3_freq_text); 427 427 428 - static const char * const cs42l42_wnf05_freq_text[] = { 429 - "280Hz", "315Hz", "350Hz", "385Hz", 430 - "420Hz", "455Hz", "490Hz", "525Hz" 431 - }; 432 - 433 - static SOC_ENUM_SINGLE_DECL(cs42l42_wnf05_freq_enum, CS42L42_ADC_WNF_HPF_CTL, 434 - CS42L42_ADC_WNF_CF_SHIFT, 435 - cs42l42_wnf05_freq_text); 436 - 437 428 static const struct snd_kcontrol_new cs42l42_snd_controls[] = { 438 429 /* ADC Volume and Filter Controls */ 439 430 SOC_SINGLE("ADC Notch Switch", CS42L42_ADC_CTL, 440 - CS42L42_ADC_NOTCH_DIS_SHIFT, true, false), 431 + CS42L42_ADC_NOTCH_DIS_SHIFT, true, true), 441 432 SOC_SINGLE("ADC Weak Force Switch", CS42L42_ADC_CTL, 442 433 CS42L42_ADC_FORCE_WEAK_VCM_SHIFT, true, false), 443 434 SOC_SINGLE("ADC Invert Switch", CS42L42_ADC_CTL, 444 435 CS42L42_ADC_INV_SHIFT, true, false), 445 436 SOC_SINGLE("ADC Boost Switch", CS42L42_ADC_CTL, 446 437 CS42L42_ADC_DIG_BOOST_SHIFT, true, false), 447 - SOC_SINGLE_SX_TLV("ADC Volume", CS42L42_ADC_VOLUME, 448 - CS42L42_ADC_VOL_SHIFT, 0xA0, 0x6C, adc_tlv), 438 + SOC_SINGLE_S8_TLV("ADC Volume", CS42L42_ADC_VOLUME, -97, 12, adc_tlv), 449 439 SOC_SINGLE("ADC WNF Switch", CS42L42_ADC_WNF_HPF_CTL, 450 440 CS42L42_ADC_WNF_EN_SHIFT, true, false), 451 441 SOC_SINGLE("ADC HPF Switch", CS42L42_ADC_WNF_HPF_CTL, 452 442 CS42L42_ADC_HPF_EN_SHIFT, true, false), 453 443 SOC_ENUM("HPF Corner Freq", cs42l42_hpf_freq_enum), 454 444 SOC_ENUM("WNF 3dB Freq", cs42l42_wnf3_freq_enum), 455 - SOC_ENUM("WNF 05dB Freq", cs42l42_wnf05_freq_enum), 456 445 457 446 /* DAC Volume and Filter Controls */ 458 447 SOC_SINGLE("DACA Invert Switch", 
CS42L42_DAC_CTL1, ··· 460 471 SND_SOC_DAPM_OUTPUT("HP"), 461 472 SND_SOC_DAPM_DAC("DAC", NULL, CS42L42_PWR_CTL1, CS42L42_HP_PDN_SHIFT, 1), 462 473 SND_SOC_DAPM_MIXER("MIXER", CS42L42_PWR_CTL1, CS42L42_MIXER_PDN_SHIFT, 1, NULL, 0), 463 - SND_SOC_DAPM_AIF_IN("SDIN1", NULL, 0, CS42L42_ASP_RX_DAI0_EN, CS42L42_ASP_RX0_CH1_SHIFT, 0), 464 - SND_SOC_DAPM_AIF_IN("SDIN2", NULL, 1, CS42L42_ASP_RX_DAI0_EN, CS42L42_ASP_RX0_CH2_SHIFT, 0), 474 + SND_SOC_DAPM_AIF_IN("SDIN1", NULL, 0, SND_SOC_NOPM, 0, 0), 475 + SND_SOC_DAPM_AIF_IN("SDIN2", NULL, 1, SND_SOC_NOPM, 0, 0), 465 476 466 477 /* Playback Requirements */ 467 478 SND_SOC_DAPM_SUPPLY("ASP DAI0", CS42L42_PWR_CTL1, CS42L42_ASP_DAI_PDN_SHIFT, 1, NULL, 0), ··· 619 630 620 631 for (i = 0; i < ARRAY_SIZE(pll_ratio_table); i++) { 621 632 if (pll_ratio_table[i].sclk == clk) { 633 + cs42l42->pll_config = i; 634 + 622 635 /* Configure the internal sample rate */ 623 636 snd_soc_component_update_bits(component, CS42L42_MCLK_CTL, 624 637 CS42L42_INTERNAL_FS_MASK, ··· 629 638 (pll_ratio_table[i].mclk_int != 630 639 24000000)) << 631 640 CS42L42_INTERNAL_FS_SHIFT); 632 - /* Set the MCLK src (PLL or SCLK) and the divide 633 - * ratio 634 - */ 641 + 635 642 snd_soc_component_update_bits(component, CS42L42_MCLK_SRC_SEL, 636 - CS42L42_MCLK_SRC_SEL_MASK | 637 643 CS42L42_MCLKDIV_MASK, 638 - (pll_ratio_table[i].mclk_src_sel 639 - << CS42L42_MCLK_SRC_SEL_SHIFT) | 640 644 (pll_ratio_table[i].mclk_div << 641 645 CS42L42_MCLKDIV_SHIFT)); 642 646 /* Set up the LRCLK */ ··· 667 681 CS42L42_FSYNC_PULSE_WIDTH_MASK, 668 682 CS42L42_FRAC1_VAL(fsync - 1) << 669 683 CS42L42_FSYNC_PULSE_WIDTH_SHIFT); 670 - snd_soc_component_update_bits(component, 671 - CS42L42_ASP_FRM_CFG, 672 - CS42L42_ASP_5050_MASK, 673 - CS42L42_ASP_5050_MASK); 674 - /* Set the frame delay to 1.0 SCLK clocks */ 675 - snd_soc_component_update_bits(component, CS42L42_ASP_FRM_CFG, 676 - CS42L42_ASP_FSD_MASK, 677 - CS42L42_ASP_FSD_1_0 << 678 - CS42L42_ASP_FSD_SHIFT); 679 684 /* Set the sample 
rates (96k or lower) */ 680 685 snd_soc_component_update_bits(component, CS42L42_FS_RATE_EN, 681 686 CS42L42_FS_EN_MASK, ··· 766 789 /* interface format */ 767 790 switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) { 768 791 case SND_SOC_DAIFMT_I2S: 769 - case SND_SOC_DAIFMT_LEFT_J: 792 + /* 793 + * 5050 mode, frame starts on falling edge of LRCLK, 794 + * frame delayed by 1.0 SCLKs 795 + */ 796 + snd_soc_component_update_bits(component, 797 + CS42L42_ASP_FRM_CFG, 798 + CS42L42_ASP_STP_MASK | 799 + CS42L42_ASP_5050_MASK | 800 + CS42L42_ASP_FSD_MASK, 801 + CS42L42_ASP_5050_MASK | 802 + (CS42L42_ASP_FSD_1_0 << 803 + CS42L42_ASP_FSD_SHIFT)); 770 804 break; 771 805 default: 772 806 return -EINVAL; ··· 807 819 return 0; 808 820 } 809 821 822 + static int cs42l42_dai_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai) 823 + { 824 + struct snd_soc_component *component = dai->component; 825 + struct cs42l42_private *cs42l42 = snd_soc_component_get_drvdata(component); 826 + 827 + /* 828 + * Sample rates < 44.1 kHz would produce an out-of-range SCLK with 829 + * a standard I2S frame. If the machine driver sets SCLK it must be 830 + * legal. 
831 + */ 832 + if (cs42l42->sclk) 833 + return 0; 834 + 835 + /* Machine driver has not set a SCLK, limit bottom end to 44.1 kHz */ 836 + return snd_pcm_hw_constraint_minmax(substream->runtime, 837 + SNDRV_PCM_HW_PARAM_RATE, 838 + 44100, 192000); 839 + } 840 + 810 841 static int cs42l42_pcm_hw_params(struct snd_pcm_substream *substream, 811 842 struct snd_pcm_hw_params *params, 812 843 struct snd_soc_dai *dai) ··· 838 831 839 832 cs42l42->srate = params_rate(params); 840 833 cs42l42->bclk = snd_soc_params_to_bclk(params); 834 + 835 + /* I2S frame always has 2 channels even for mono audio */ 836 + if (channels == 1) 837 + cs42l42->bclk *= 2; 841 838 842 839 switch(substream->stream) { 843 840 case SNDRV_PCM_STREAM_CAPTURE: ··· 866 855 snd_soc_component_update_bits(component, CS42L42_ASP_RX_DAI0_CH2_AP_RES, 867 856 CS42L42_ASP_RX_CH_AP_MASK | 868 857 CS42L42_ASP_RX_CH_RES_MASK, val); 858 + 859 + /* Channel B comes from the last active channel */ 860 + snd_soc_component_update_bits(component, CS42L42_SP_RX_CH_SEL, 861 + CS42L42_SP_RX_CHB_SEL_MASK, 862 + (channels - 1) << CS42L42_SP_RX_CHB_SEL_SHIFT); 863 + 864 + /* Both LRCLK slots must be enabled */ 865 + snd_soc_component_update_bits(component, CS42L42_ASP_RX_DAI0_EN, 866 + CS42L42_ASP_RX0_CH_EN_MASK, 867 + BIT(CS42L42_ASP_RX0_CH1_SHIFT) | 868 + BIT(CS42L42_ASP_RX0_CH2_SHIFT)); 869 869 break; 870 870 default: 871 871 break; ··· 922 900 */ 923 901 regmap_multi_reg_write(cs42l42->regmap, cs42l42_to_osc_seq, 924 902 ARRAY_SIZE(cs42l42_to_osc_seq)); 903 + 904 + /* Must disconnect PLL before stopping it */ 905 + snd_soc_component_update_bits(component, 906 + CS42L42_MCLK_SRC_SEL, 907 + CS42L42_MCLK_SRC_SEL_MASK, 908 + 0); 909 + usleep_range(100, 200); 910 + 925 911 snd_soc_component_update_bits(component, CS42L42_PLL_CTL1, 926 912 CS42L42_PLL_START_MASK, 0); 927 913 } 928 914 } else { 929 915 if (!cs42l42->stream_use) { 930 916 /* SCLK must be running before codec unmute */ 931 - if ((cs42l42->bclk < 11289600) && 
(cs42l42->sclk < 11289600)) { 917 + if (pll_ratio_table[cs42l42->pll_config].mclk_src_sel) { 932 918 snd_soc_component_update_bits(component, CS42L42_PLL_CTL1, 933 919 CS42L42_PLL_START_MASK, 1); 934 920 ··· 957 927 CS42L42_PLL_LOCK_TIMEOUT_US); 958 928 if (ret < 0) 959 929 dev_warn(component->dev, "PLL failed to lock: %d\n", ret); 930 + 931 + /* PLL must be running to drive glitchless switch logic */ 932 + snd_soc_component_update_bits(component, 933 + CS42L42_MCLK_SRC_SEL, 934 + CS42L42_MCLK_SRC_SEL_MASK, 935 + CS42L42_MCLK_SRC_SEL_MASK); 960 936 } 961 937 962 938 /* Mark SCLK as present, turn off internal oscillator */ ··· 996 960 SNDRV_PCM_FMTBIT_S24_LE |\ 997 961 SNDRV_PCM_FMTBIT_S32_LE ) 998 962 999 - 1000 963 static const struct snd_soc_dai_ops cs42l42_ops = { 964 + .startup = cs42l42_dai_startup, 1001 965 .hw_params = cs42l42_pcm_hw_params, 1002 966 .set_fmt = cs42l42_set_dai_fmt, 1003 967 .set_sysclk = cs42l42_set_sysclk,
+3
sound/soc/codecs/cs42l42.h
··· 653 653 654 654 /* Page 0x25 Audio Port Registers */ 655 655 #define CS42L42_SP_RX_CH_SEL (CS42L42_PAGE_25 + 0x01) 656 + #define CS42L42_SP_RX_CHB_SEL_SHIFT 2 657 + #define CS42L42_SP_RX_CHB_SEL_MASK (3 << CS42L42_SP_RX_CHB_SEL_SHIFT) 656 658 657 659 #define CS42L42_SP_RX_ISOC_CTL (CS42L42_PAGE_25 + 0x02) 658 660 #define CS42L42_SP_RX_RSYNC_SHIFT 6 ··· 777 775 struct gpio_desc *reset_gpio; 778 776 struct completion pdn_done; 779 777 struct snd_soc_jack *jack; 778 + int pll_config; 780 779 int bclk; 781 780 u32 sclk; 782 781 u32 srate;
+6 -36
sound/soc/codecs/nau8824.c
··· 828 828 } 829 829 } 830 830 831 - static void nau8824_dapm_disable_pin(struct nau8824 *nau8824, const char *pin) 832 - { 833 - struct snd_soc_dapm_context *dapm = nau8824->dapm; 834 - const char *prefix = dapm->component->name_prefix; 835 - char prefixed_pin[80]; 836 - 837 - if (prefix) { 838 - snprintf(prefixed_pin, sizeof(prefixed_pin), "%s %s", 839 - prefix, pin); 840 - snd_soc_dapm_disable_pin(dapm, prefixed_pin); 841 - } else { 842 - snd_soc_dapm_disable_pin(dapm, pin); 843 - } 844 - } 845 - 846 - static void nau8824_dapm_enable_pin(struct nau8824 *nau8824, const char *pin) 847 - { 848 - struct snd_soc_dapm_context *dapm = nau8824->dapm; 849 - const char *prefix = dapm->component->name_prefix; 850 - char prefixed_pin[80]; 851 - 852 - if (prefix) { 853 - snprintf(prefixed_pin, sizeof(prefixed_pin), "%s %s", 854 - prefix, pin); 855 - snd_soc_dapm_force_enable_pin(dapm, prefixed_pin); 856 - } else { 857 - snd_soc_dapm_force_enable_pin(dapm, pin); 858 - } 859 - } 860 - 861 831 static void nau8824_eject_jack(struct nau8824 *nau8824) 862 832 { 863 833 struct snd_soc_dapm_context *dapm = nau8824->dapm; ··· 836 866 /* Clear all interruption status */ 837 867 nau8824_int_status_clear_all(regmap); 838 868 839 - nau8824_dapm_disable_pin(nau8824, "SAR"); 840 - nau8824_dapm_disable_pin(nau8824, "MICBIAS"); 869 + snd_soc_dapm_disable_pin(dapm, "SAR"); 870 + snd_soc_dapm_disable_pin(dapm, "MICBIAS"); 841 871 snd_soc_dapm_sync(dapm); 842 872 843 873 /* Enable the insertion interruption, disable the ejection ··· 867 897 struct regmap *regmap = nau8824->regmap; 868 898 int adc_value, event = 0, event_mask = 0; 869 899 870 - nau8824_dapm_enable_pin(nau8824, "MICBIAS"); 871 - nau8824_dapm_enable_pin(nau8824, "SAR"); 900 + snd_soc_dapm_enable_pin(dapm, "MICBIAS"); 901 + snd_soc_dapm_enable_pin(dapm, "SAR"); 872 902 snd_soc_dapm_sync(dapm); 873 903 874 904 msleep(100); ··· 879 909 if (adc_value < HEADSET_SARADC_THD) { 880 910 event |= SND_JACK_HEADPHONE; 881 911 882 - 
nau8824_dapm_disable_pin(nau8824, "SAR"); 883 - nau8824_dapm_disable_pin(nau8824, "MICBIAS"); 912 + snd_soc_dapm_disable_pin(dapm, "SAR"); 913 + snd_soc_dapm_disable_pin(dapm, "MICBIAS"); 884 914 snd_soc_dapm_sync(dapm); 885 915 } else { 886 916 event |= SND_JACK_HEADSET;
+1
sound/soc/codecs/rt5682.c
··· 44 44 {RT5682_I2C_CTRL, 0x000f}, 45 45 {RT5682_PLL2_INTERNAL, 0x8266}, 46 46 {RT5682_SAR_IL_CMD_3, 0x8365}, 47 + {RT5682_SAR_IL_CMD_6, 0x0180}, 47 48 }; 48 49 49 50 void rt5682_apply_patch_list(struct rt5682_priv *rt5682, struct device *dev)
+10
sound/soc/codecs/tlv320aic31xx.c
··· 35 35 36 36 #include "tlv320aic31xx.h" 37 37 38 + static int aic31xx_set_jack(struct snd_soc_component *component, 39 + struct snd_soc_jack *jack, void *data); 40 + 38 41 static const struct reg_default aic31xx_reg_defaults[] = { 39 42 { AIC31XX_CLKMUX, 0x00 }, 40 43 { AIC31XX_PLLPR, 0x11 }, ··· 1258 1255 aic31xx->supplies); 1259 1256 return ret; 1260 1257 } 1258 + 1259 + /* 1260 + * The jack detection configuration is in the same register 1261 + * that is used to report jack detect status so is volatile 1262 + * and not covered by the cache sync, restore it separately. 1263 + */ 1264 + aic31xx_set_jack(component, aic31xx->jack, NULL); 1261 1265 1262 1266 return 0; 1263 1267 }
+26 -7
sound/soc/codecs/tlv320aic32x4.c
··· 682 682 static int aic32x4_set_processing_blocks(struct snd_soc_component *component, 683 683 u8 r_block, u8 p_block) 684 684 { 685 - if (r_block > 18 || p_block > 25) 686 - return -EINVAL; 685 + struct aic32x4_priv *aic32x4 = snd_soc_component_get_drvdata(component); 687 686 688 - snd_soc_component_write(component, AIC32X4_ADCSPB, r_block); 689 - snd_soc_component_write(component, AIC32X4_DACSPB, p_block); 687 + if (aic32x4->type == AIC32X4_TYPE_TAS2505) { 688 + if (r_block || p_block > 3) 689 + return -EINVAL; 690 + 691 + snd_soc_component_write(component, AIC32X4_DACSPB, p_block); 692 + } else { /* AIC32x4 */ 693 + if (r_block > 18 || p_block > 25) 694 + return -EINVAL; 695 + 696 + snd_soc_component_write(component, AIC32X4_ADCSPB, r_block); 697 + snd_soc_component_write(component, AIC32X4_DACSPB, p_block); 698 + } 690 699 691 700 return 0; 692 701 } ··· 704 695 unsigned int sample_rate, unsigned int channels, 705 696 unsigned int bit_depth) 706 697 { 698 + struct aic32x4_priv *aic32x4 = snd_soc_component_get_drvdata(component); 707 699 u8 aosr; 708 700 u16 dosr; 709 701 u8 adc_resource_class, dac_resource_class; ··· 731 721 adc_resource_class = 6; 732 722 dac_resource_class = 8; 733 723 dosr_increment = 8; 734 - aic32x4_set_processing_blocks(component, 1, 1); 724 + if (aic32x4->type == AIC32X4_TYPE_TAS2505) 725 + aic32x4_set_processing_blocks(component, 0, 1); 726 + else 727 + aic32x4_set_processing_blocks(component, 1, 1); 735 728 } else if (sample_rate <= 96000) { 736 729 aosr = 64; 737 730 adc_resource_class = 6; 738 731 dac_resource_class = 8; 739 732 dosr_increment = 4; 740 - aic32x4_set_processing_blocks(component, 1, 9); 733 + if (aic32x4->type == AIC32X4_TYPE_TAS2505) 734 + aic32x4_set_processing_blocks(component, 0, 1); 735 + else 736 + aic32x4_set_processing_blocks(component, 1, 9); 741 737 } else if (sample_rate == 192000) { 742 738 aosr = 32; 743 739 adc_resource_class = 3; 744 740 dac_resource_class = 4; 745 741 dosr_increment = 2; 746 - 
aic32x4_set_processing_blocks(component, 13, 19); 742 + if (aic32x4->type == AIC32X4_TYPE_TAS2505) 743 + aic32x4_set_processing_blocks(component, 0, 1); 744 + else 745 + aic32x4_set_processing_blocks(component, 13, 19); 747 746 } else { 748 747 dev_err(component->dev, "Sampling rate not supported\n"); 749 748 return -EINVAL;
-1
sound/soc/codecs/wm_adsp.c
··· 747 747 static void wm_adsp2_cleanup_debugfs(struct wm_adsp *dsp) 748 748 { 749 749 wm_adsp_debugfs_clear(dsp); 750 - debugfs_remove_recursive(dsp->debugfs_root); 751 750 } 752 751 #else 753 752 static inline void wm_adsp2_init_debugfs(struct wm_adsp *dsp,
+1 -2
sound/soc/intel/atom/sst-mfld-platform-pcm.c
··· 127 127 snd_pcm_uframes_t period_size; 128 128 ssize_t periodbytes; 129 129 ssize_t buffer_bytes = snd_pcm_lib_buffer_bytes(substream); 130 - u32 buffer_addr = virt_to_phys(substream->dma_buffer.area); 130 + u32 buffer_addr = substream->runtime->dma_addr; 131 131 132 132 channels = substream->runtime->channels; 133 133 period_size = substream->runtime->period_size; ··· 233 233 /* set codec params and inform SST driver the same */ 234 234 sst_fill_pcm_params(substream, &param); 235 235 sst_fill_alloc_params(substream, &alloc_params); 236 - substream->runtime->dma_area = substream->dma_buffer.area; 237 236 str_params.sparams = param; 238 237 str_params.aparams = alloc_params; 239 238 str_params.codec = SST_CODEC_TYPE_PCM;
+1 -1
sound/soc/intel/boards/sof_da7219_max98373.c
··· 404 404 return -ENOMEM; 405 405 406 406 /* By default dais[0] is configured for max98373 */ 407 - if (!strcmp(pdev->name, "sof_da7219_max98360a")) { 407 + if (!strcmp(pdev->name, "sof_da7219_mx98360a")) { 408 408 dais[0] = (struct snd_soc_dai_link) { 409 409 .name = "SSP1-Codec", 410 410 .id = 0,
+18 -8
sound/soc/kirkwood/kirkwood-dma.c
··· 104 104 int err; 105 105 struct snd_pcm_runtime *runtime = substream->runtime; 106 106 struct kirkwood_dma_data *priv = kirkwood_priv(substream); 107 - const struct mbus_dram_target_info *dram; 108 - unsigned long addr; 109 107 110 108 snd_soc_set_runtime_hwparams(substream, &kirkwood_dma_snd_hw); 111 109 ··· 140 142 writel((unsigned int)-1, priv->io + KIRKWOOD_ERR_MASK); 141 143 } 142 144 143 - dram = mv_mbus_dram_info(); 144 - addr = substream->dma_buffer.addr; 145 145 if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { 146 146 if (priv->substream_play) 147 147 return -EBUSY; 148 148 priv->substream_play = substream; 149 - kirkwood_dma_conf_mbus_windows(priv->io, 150 - KIRKWOOD_PLAYBACK_WIN, addr, dram); 151 149 } else { 152 150 if (priv->substream_rec) 153 151 return -EBUSY; 154 152 priv->substream_rec = substream; 155 - kirkwood_dma_conf_mbus_windows(priv->io, 156 - KIRKWOOD_RECORD_WIN, addr, dram); 157 153 } 158 154 159 155 return 0; ··· 171 179 free_irq(priv->irq, priv); 172 180 } 173 181 182 + return 0; 183 + } 184 + 185 + static int kirkwood_dma_hw_params(struct snd_soc_component *component, 186 + struct snd_pcm_substream *substream, 187 + struct snd_pcm_hw_params *params) 188 + { 189 + struct kirkwood_dma_data *priv = kirkwood_priv(substream); 190 + const struct mbus_dram_target_info *dram = mv_mbus_dram_info(); 191 + unsigned long addr = substream->runtime->dma_addr; 192 + 193 + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) 194 + kirkwood_dma_conf_mbus_windows(priv->io, 195 + KIRKWOOD_PLAYBACK_WIN, addr, dram); 196 + else 197 + kirkwood_dma_conf_mbus_windows(priv->io, 198 + KIRKWOOD_RECORD_WIN, addr, dram); 174 199 return 0; 175 200 } 176 201 ··· 255 246 .name = DRV_NAME, 256 247 .open = kirkwood_dma_open, 257 248 .close = kirkwood_dma_close, 249 + .hw_params = kirkwood_dma_hw_params, 258 250 .prepare = kirkwood_dma_prepare, 259 251 .pointer = kirkwood_dma_pointer, 260 252 .pcm_construct = kirkwood_dma_new,
+27 -36
sound/soc/soc-component.c
··· 148 148 return soc_component_ret(component, ret); 149 149 } 150 150 151 - static int soc_component_pin(struct snd_soc_component *component, 152 - const char *pin, 153 - int (*pin_func)(struct snd_soc_dapm_context *dapm, 154 - const char *pin)) 155 - { 156 - struct snd_soc_dapm_context *dapm = 157 - snd_soc_component_get_dapm(component); 158 - char *full_name; 159 - int ret; 160 - 161 - if (!component->name_prefix) { 162 - ret = pin_func(dapm, pin); 163 - goto end; 164 - } 165 - 166 - full_name = kasprintf(GFP_KERNEL, "%s %s", component->name_prefix, pin); 167 - if (!full_name) { 168 - ret = -ENOMEM; 169 - goto end; 170 - } 171 - 172 - ret = pin_func(dapm, full_name); 173 - kfree(full_name); 174 - end: 175 - return soc_component_ret(component, ret); 176 - } 177 - 178 151 int snd_soc_component_enable_pin(struct snd_soc_component *component, 179 152 const char *pin) 180 153 { 181 - return soc_component_pin(component, pin, snd_soc_dapm_enable_pin); 154 + struct snd_soc_dapm_context *dapm = 155 + snd_soc_component_get_dapm(component); 156 + return snd_soc_dapm_enable_pin(dapm, pin); 182 157 } 183 158 EXPORT_SYMBOL_GPL(snd_soc_component_enable_pin); 184 159 185 160 int snd_soc_component_enable_pin_unlocked(struct snd_soc_component *component, 186 161 const char *pin) 187 162 { 188 - return soc_component_pin(component, pin, snd_soc_dapm_enable_pin_unlocked); 163 + struct snd_soc_dapm_context *dapm = 164 + snd_soc_component_get_dapm(component); 165 + return snd_soc_dapm_enable_pin_unlocked(dapm, pin); 189 166 } 190 167 EXPORT_SYMBOL_GPL(snd_soc_component_enable_pin_unlocked); 191 168 192 169 int snd_soc_component_disable_pin(struct snd_soc_component *component, 193 170 const char *pin) 194 171 { 195 - return soc_component_pin(component, pin, snd_soc_dapm_disable_pin); 172 + struct snd_soc_dapm_context *dapm = 173 + snd_soc_component_get_dapm(component); 174 + return snd_soc_dapm_disable_pin(dapm, pin); 196 175 } 197 176 EXPORT_SYMBOL_GPL(snd_soc_component_disable_pin); 
198 177 199 178 int snd_soc_component_disable_pin_unlocked(struct snd_soc_component *component, 200 179 const char *pin) 201 180 { 202 - return soc_component_pin(component, pin, snd_soc_dapm_disable_pin_unlocked); 181 + struct snd_soc_dapm_context *dapm = 182 + snd_soc_component_get_dapm(component); 183 + return snd_soc_dapm_disable_pin_unlocked(dapm, pin); 203 184 } 204 185 EXPORT_SYMBOL_GPL(snd_soc_component_disable_pin_unlocked); 205 186 206 187 int snd_soc_component_nc_pin(struct snd_soc_component *component, 207 188 const char *pin) 208 189 { 209 - return soc_component_pin(component, pin, snd_soc_dapm_nc_pin); 190 + struct snd_soc_dapm_context *dapm = 191 + snd_soc_component_get_dapm(component); 192 + return snd_soc_dapm_nc_pin(dapm, pin); 210 193 } 211 194 EXPORT_SYMBOL_GPL(snd_soc_component_nc_pin); 212 195 213 196 int snd_soc_component_nc_pin_unlocked(struct snd_soc_component *component, 214 197 const char *pin) 215 198 { 216 - return soc_component_pin(component, pin, snd_soc_dapm_nc_pin_unlocked); 199 + struct snd_soc_dapm_context *dapm = 200 + snd_soc_component_get_dapm(component); 201 + return snd_soc_dapm_nc_pin_unlocked(dapm, pin); 217 202 } 218 203 EXPORT_SYMBOL_GPL(snd_soc_component_nc_pin_unlocked); 219 204 220 205 int snd_soc_component_get_pin_status(struct snd_soc_component *component, 221 206 const char *pin) 222 207 { 223 - return soc_component_pin(component, pin, snd_soc_dapm_get_pin_status); 208 + struct snd_soc_dapm_context *dapm = 209 + snd_soc_component_get_dapm(component); 210 + return snd_soc_dapm_get_pin_status(dapm, pin); 224 211 } 225 212 EXPORT_SYMBOL_GPL(snd_soc_component_get_pin_status); 226 213 227 214 int snd_soc_component_force_enable_pin(struct snd_soc_component *component, 228 215 const char *pin) 229 216 { 230 - return soc_component_pin(component, pin, snd_soc_dapm_force_enable_pin); 217 + struct snd_soc_dapm_context *dapm = 218 + snd_soc_component_get_dapm(component); 219 + return snd_soc_dapm_force_enable_pin(dapm, pin); 231 
220 } 232 221 EXPORT_SYMBOL_GPL(snd_soc_component_force_enable_pin); 233 222 ··· 224 235 struct snd_soc_component *component, 225 236 const char *pin) 226 237 { 227 - return soc_component_pin(component, pin, snd_soc_dapm_force_enable_pin_unlocked); 238 + struct snd_soc_dapm_context *dapm = 239 + snd_soc_component_get_dapm(component); 240 + return snd_soc_dapm_force_enable_pin_unlocked(dapm, pin); 228 241 } 229 242 EXPORT_SYMBOL_GPL(snd_soc_component_force_enable_pin_unlocked); 230 243
+2 -2
sound/soc/sof/intel/Kconfig
··· 278 278 279 279 config SND_SOC_SOF_INTEL_SOUNDWIRE_LINK_BASELINE 280 280 tristate 281 + select SOUNDWIRE_INTEL if SND_SOC_SOF_INTEL_SOUNDWIRE 282 + select SND_INTEL_SOUNDWIRE_ACPI if SND_SOC_SOF_INTEL_SOUNDWIRE 281 283 282 284 config SND_SOC_SOF_INTEL_SOUNDWIRE 283 285 tristate "SOF support for SoundWire" ··· 287 285 depends on SND_SOC_SOF_INTEL_SOUNDWIRE_LINK_BASELINE 288 286 depends on ACPI && SOUNDWIRE 289 287 depends on !(SOUNDWIRE=m && SND_SOC_SOF_INTEL_SOUNDWIRE_LINK_BASELINE=y) 290 - select SOUNDWIRE_INTEL 291 - select SND_INTEL_SOUNDWIRE_ACPI 292 288 help 293 289 This adds support for SoundWire with Sound Open Firmware 294 290 for Intel(R) platforms.
+2 -2
sound/soc/sof/intel/hda-ipc.c
··· 107 107 } else { 108 108 /* reply correct size ? */ 109 109 if (reply.hdr.size != msg->reply_size && 110 - /* getter payload is never known upfront */ 111 - !(reply.hdr.cmd & SOF_IPC_GLB_PROBE)) { 110 + /* getter payload is never known upfront */ 111 + ((reply.hdr.cmd & SOF_GLB_TYPE_MASK) != SOF_IPC_GLB_PROBE)) { 112 112 dev_err(sdev->dev, "error: reply expected %zu got %u bytes\n", 113 113 msg->reply_size, reply.hdr.size); 114 114 ret = -EINVAL;
+12
sound/soc/sof/intel/hda.c
··· 187 187 int hda_sdw_startup(struct snd_sof_dev *sdev) 188 188 { 189 189 struct sof_intel_hda_dev *hdev; 190 + struct snd_sof_pdata *pdata = sdev->pdata; 190 191 191 192 hdev = sdev->pdata->hw_pdata; 192 193 193 194 if (!hdev->sdw) 195 + return 0; 196 + 197 + if (pdata->machine && !pdata->machine->mach_params.link_mask) 194 198 return 0; 195 199 196 200 return sdw_intel_startup(hdev->sdw); ··· 1006 1002 hda_mach->mach_params.dmic_num = dmic_num; 1007 1003 pdata->machine = hda_mach; 1008 1004 pdata->tplg_filename = tplg_filename; 1005 + 1006 + if (codec_num == 2) { 1007 + /* 1008 + * Prevent SoundWire links from starting when an external 1009 + * HDaudio codec is used 1010 + */ 1011 + hda_mach->mach_params.link_mask = 0; 1012 + } 1009 1013 } 1010 1014 } 1011 1015
+1 -1
sound/soc/uniphier/aio-dma.c
··· 198 198 vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot); 199 199 200 200 return remap_pfn_range(vma, vma->vm_start, 201 - substream->dma_buffer.addr >> PAGE_SHIFT, 201 + substream->runtime->dma_addr >> PAGE_SHIFT, 202 202 vma->vm_end - vma->vm_start, vma->vm_page_prot); 203 203 } 204 204
+2 -2
sound/soc/xilinx/xlnx_formatter_pcm.c
··· 452 452 453 453 stream_data->buffer_size = size; 454 454 455 - low = lower_32_bits(substream->dma_buffer.addr); 456 - high = upper_32_bits(substream->dma_buffer.addr); 455 + low = lower_32_bits(runtime->dma_addr); 456 + high = upper_32_bits(runtime->dma_addr); 457 457 writel(low, stream_data->mmio + XLNX_AUD_BUFF_ADDR_LSB); 458 458 writel(high, stream_data->mmio + XLNX_AUD_BUFF_ADDR_MSB); 459 459
+1 -1
sound/usb/card.c
··· 907 907 } 908 908 } 909 909 910 - if (chip->quirk_type & QUIRK_SETUP_DISABLE_AUTOSUSPEND) 910 + if (chip->quirk_type == QUIRK_SETUP_DISABLE_AUTOSUSPEND) 911 911 usb_enable_autosuspend(interface_to_usbdev(intf)); 912 912 913 913 chip->num_interfaces--;
+6
sound/usb/clock.c
··· 324 324 sources[ret - 1], 325 325 visited, validate); 326 326 if (ret > 0) { 327 + /* 328 + * For Samsung USBC Headset (AKG), setting clock selector again 329 + * will result in incorrect default clock setting problems 330 + */ 331 + if (chip->usb_id == USB_ID(0x04e8, 0xa051)) 332 + return ret; 327 333 err = uac_clock_selector_set_val(chip, entity_id, cur); 328 334 if (err < 0) 329 335 return err;
+20 -15
sound/usb/mixer.c
··· 1816 1816 strlcat(name, " - Output Jack", name_size); 1817 1817 } 1818 1818 1819 + /* get connector value to "wake up" the USB audio */ 1820 + static int connector_mixer_resume(struct usb_mixer_elem_list *list) 1821 + { 1822 + struct usb_mixer_elem_info *cval = mixer_elem_list_to_info(list); 1823 + 1824 + get_connector_value(cval, NULL, NULL); 1825 + return 0; 1826 + } 1827 + 1819 1828 /* Build a mixer control for a UAC connector control (jack-detect) */ 1820 1829 static void build_connector_control(struct usb_mixer_interface *mixer, 1821 1830 const struct usbmix_name_map *imap, ··· 1842 1833 if (!cval) 1843 1834 return; 1844 1835 snd_usb_mixer_elem_init_std(&cval->head, mixer, term->id); 1836 + 1837 + /* set up a specific resume callback */ 1838 + cval->head.resume = connector_mixer_resume; 1839 + 1845 1840 /* 1846 1841 * UAC2: The first byte from reading the UAC2_TE_CONNECTOR control returns the 1847 1842 * number of channels connected. ··· 3655 3642 return 0; 3656 3643 } 3657 3644 3658 - static int default_mixer_resume(struct usb_mixer_elem_list *list) 3659 - { 3660 - struct usb_mixer_elem_info *cval = mixer_elem_list_to_info(list); 3661 - 3662 - /* get connector value to "wake up" the USB audio */ 3663 - if (cval->val_type == USB_MIXER_BOOLEAN && cval->channels == 1) 3664 - get_connector_value(cval, NULL, NULL); 3665 - 3666 - return 0; 3667 - } 3668 - 3669 3645 static int default_mixer_reset_resume(struct usb_mixer_elem_list *list) 3670 3646 { 3671 - int err = default_mixer_resume(list); 3647 + int err; 3672 3648 3673 - if (err < 0) 3674 - return err; 3649 + if (list->resume) { 3650 + err = list->resume(list); 3651 + if (err < 0) 3652 + return err; 3653 + } 3675 3654 return restore_mixer_value(list); 3676 3655 } 3677 3656 ··· 3702 3697 list->id = unitid; 3703 3698 list->dump = snd_usb_mixer_dump_cval; 3704 3699 #ifdef CONFIG_PM 3705 - list->resume = default_mixer_resume; 3700 + list->resume = NULL; 3706 3701 list->reset_resume = default_mixer_reset_resume; 
3707 3702 #endif 3708 3703 }
+24 -10
sound/usb/mixer_scarlett_gen2.c
··· 228 228 }; 229 229 230 230 static const char *const scarlett2_dim_mute_names[SCARLETT2_DIM_MUTE_COUNT] = { 231 - "Mute", "Dim" 231 + "Mute Playback Switch", "Dim Playback Switch" 232 232 }; 233 233 234 234 /* Description of each hardware port type: ··· 1856 1856 struct snd_ctl_elem_value *ucontrol) 1857 1857 { 1858 1858 struct usb_mixer_elem_info *elem = kctl->private_data; 1859 - struct scarlett2_data *private = elem->head.mixer->private_data; 1859 + struct usb_mixer_interface *mixer = elem->head.mixer; 1860 + struct scarlett2_data *private = mixer->private_data; 1860 1861 int index = line_out_remap(private, elem->control); 1862 + 1863 + mutex_lock(&private->data_mutex); 1864 + if (private->vol_updated) 1865 + scarlett2_update_volumes(mixer); 1866 + mutex_unlock(&private->data_mutex); 1861 1867 1862 1868 ucontrol->value.integer.value[0] = private->mute_switch[index]; 1863 1869 return 0; ··· 1961 1955 ~SNDRV_CTL_ELEM_ACCESS_WRITE; 1962 1956 } 1963 1957 1964 - /* Notify of write bit change */ 1965 - snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_INFO, 1958 + /* Notify of write bit and possible value change */ 1959 + snd_ctl_notify(card, 1960 + SNDRV_CTL_EVENT_MASK_VALUE | SNDRV_CTL_EVENT_MASK_INFO, 1966 1961 &private->vol_ctls[index]->id); 1967 - snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_INFO, 1962 + snd_ctl_notify(card, 1963 + SNDRV_CTL_EVENT_MASK_VALUE | SNDRV_CTL_EVENT_MASK_INFO, 1968 1964 &private->mute_ctls[index]->id); 1969 1965 } 1970 1966 ··· 2538 2530 { 2539 2531 struct scarlett2_data *private = mixer->private_data; 2540 2532 const struct scarlett2_device_info *info = private->info; 2533 + const char *s; 2541 2534 2542 2535 if (!info->direct_monitor) 2543 2536 return 0; 2544 2537 2538 + s = info->direct_monitor == 1 2539 + ? 
"Direct Monitor Playback Switch" 2540 + : "Direct Monitor Playback Enum"; 2541 + 2545 2542 return scarlett2_add_new_ctl( 2546 2543 mixer, &scarlett2_direct_monitor_ctl[info->direct_monitor - 1], 2547 - 0, 1, "Direct Monitor Playback Switch", 2548 - &private->direct_monitor_ctl); 2544 + 0, 1, s, &private->direct_monitor_ctl); 2549 2545 } 2550 2546 2551 2547 /*** Speaker Switching Control ***/ ··· 2601 2589 2602 2590 /* disable the line out SW/HW switch */ 2603 2591 scarlett2_sw_hw_ctl_ro(private, i); 2604 - snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_INFO, 2592 + snd_ctl_notify(card, 2593 + SNDRV_CTL_EVENT_MASK_VALUE | 2594 + SNDRV_CTL_EVENT_MASK_INFO, 2605 2595 &private->sw_hw_ctls[i]->id); 2606 2596 } 2607 2597 ··· 2927 2913 if (private->vol_sw_hw_switch[line_index]) { 2928 2914 private->mute_switch[line_index] = val; 2929 2915 snd_ctl_notify(mixer->chip->card, 2930 - SNDRV_CTL_EVENT_MASK_INFO, 2916 + SNDRV_CTL_EVENT_MASK_VALUE, 2931 2917 &private->mute_ctls[i]->id); 2932 2918 } 2933 2919 } ··· 3469 3455 3470 3456 /* Add MSD control */ 3471 3457 return scarlett2_add_new_ctl(mixer, &scarlett2_msd_ctl, 3472 - 0, 1, "MSD Mode", NULL); 3458 + 0, 1, "MSD Mode Switch", NULL); 3473 3459 } 3474 3460 3475 3461 /*** Cleanup/Suspend Callbacks ***/
+1
sound/usb/quirks.c
··· 1899 1899 REG_QUIRK_ENTRY(0x0951, 0x16ea, 2), /* Kingston HyperX Cloud Flight S */ 1900 1900 REG_QUIRK_ENTRY(0x0ecb, 0x1f46, 2), /* JBL Quantum 600 */ 1901 1901 REG_QUIRK_ENTRY(0x0ecb, 0x2039, 2), /* JBL Quantum 400 */ 1902 + REG_QUIRK_ENTRY(0x0ecb, 0x203c, 2), /* JBL Quantum 600 */ 1902 1903 REG_QUIRK_ENTRY(0x0ecb, 0x203e, 2), /* JBL Quantum 800 */ 1903 1904 { 0 } /* terminator */ 1904 1905 };
+1 -2
tools/lib/bpf/btf.c
··· 804 804 btf->nr_types = 0; 805 805 btf->start_id = 1; 806 806 btf->start_str_off = 0; 807 + btf->fd = -1; 807 808 808 809 if (base_btf) { 809 810 btf->base_btf = base_btf; ··· 832 831 err = err ?: btf_parse_type_sec(btf); 833 832 if (err) 834 833 goto done; 835 - 836 - btf->fd = -1; 837 834 838 835 done: 839 836 if (err) {
+3 -1
tools/lib/bpf/libbpf_probes.c
··· 75 75 case BPF_PROG_TYPE_CGROUP_SOCK_ADDR: 76 76 xattr.expected_attach_type = BPF_CGROUP_INET4_CONNECT; 77 77 break; 78 + case BPF_PROG_TYPE_CGROUP_SOCKOPT: 79 + xattr.expected_attach_type = BPF_CGROUP_GETSOCKOPT; 80 + break; 78 81 case BPF_PROG_TYPE_SK_LOOKUP: 79 82 xattr.expected_attach_type = BPF_SK_LOOKUP; 80 83 break; ··· 107 104 case BPF_PROG_TYPE_SK_REUSEPORT: 108 105 case BPF_PROG_TYPE_FLOW_DISSECTOR: 109 106 case BPF_PROG_TYPE_CGROUP_SYSCTL: 110 - case BPF_PROG_TYPE_CGROUP_SOCKOPT: 111 107 case BPF_PROG_TYPE_TRACING: 112 108 case BPF_PROG_TYPE_STRUCT_OPS: 113 109 case BPF_PROG_TYPE_EXT: