···
 .. SPDX-License-Identifier: GPL-2.0+
 
-==============================================================
-Linux* Base Driver for the Intel(R) PRO/100 Family of Adapters
-==============================================================
+=============================================================
+Linux Base Driver for the Intel(R) PRO/100 Family of Adapters
+=============================================================
 
 June 1, 2018
 
···
 In This Release
 ===============
 
-This file describes the Linux* Base Driver for the Intel(R) PRO/100 Family of
+This file describes the Linux Base Driver for the Intel(R) PRO/100 Family of
 Adapters. This driver includes support for Itanium(R)2-based systems.
 
 For questions related to hardware requirements, refer to the documentation
···
 The latest release of ethtool can be found from
 https://www.kernel.org/pub/software/network/ethtool/
 
-Enabling Wake on LAN* (WoL)
----------------------------
-WoL is provided through the ethtool* utility. For instructions on
+Enabling Wake on LAN (WoL)
+--------------------------
+WoL is provided through the ethtool utility. For instructions on
 enabling WoL with ethtool, refer to the ethtool man page. WoL will be
 enabled on the system during the next shut down or reboot. For this
 driver version, in order to enable WoL, the e100 driver must be loaded
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-===========================================================
-Linux* Base Driver for Intel(R) Ethernet Network Connection
-===========================================================
+==========================================================
+Linux Base Driver for Intel(R) Ethernet Network Connection
+==========================================================
 
 Intel Gigabit Linux driver.
 Copyright(c) 1999 - 2013 Intel Corporation.
···
  The latest release of ethtool can be found from
  https://www.kernel.org/pub/software/network/ethtool/
 
-Enabling Wake on LAN* (WoL)
----------------------------
+Enabling Wake on LAN (WoL)
+--------------------------
 
- WoL is configured through the ethtool* utility.
+ WoL is configured through the ethtool utility.
 
  WoL will be enabled on the system during the next shut down or reboot.
  For this driver version, in order to enable WoL, the e1000 driver must be
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-======================================================
-Linux* Driver for Intel(R) Ethernet Network Connection
-======================================================
+=====================================================
+Linux Driver for Intel(R) Ethernet Network Connection
+=====================================================
 
 Intel Gigabit Linux driver.
 Copyright(c) 2008-2018 Intel Corporation.
···
 manually set devices for 1 Gbps and higher.
 
 Speed, duplex, and autonegotiation advertising are configured through the
-ethtool* utility.
+ethtool utility.
 
 Caution: Only experienced network administrators should force speed and duplex
 or change autonegotiation advertising manually. The settings at the switch must
···
 operate only in full duplex and only at their native speed.
 
 
-Enabling Wake on LAN* (WoL)
----------------------------
-WoL is configured through the ethtool* utility.
+Enabling Wake on LAN (WoL)
+--------------------------
+WoL is configured through the ethtool utility.
 
 WoL will be enabled on the system during the next shut down or reboot. For
 this driver version, in order to enable WoL, the e1000e driver must be loaded
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-==============================================================
-Linux* Base Driver for Intel(R) Ethernet Multi-host Controller
-==============================================================
+=============================================================
+Linux Base Driver for Intel(R) Ethernet Multi-host Controller
+=============================================================
 
 August 20, 2018
 Copyright(c) 2015-2018 Intel Corporation.
···
 Known Issues/Troubleshooting
 ============================
 
-Enabling SR-IOV in a 64-bit Microsoft* Windows Server* 2012/R2 guest OS under Linux KVM
----------------------------------------------------------------------------------------
+Enabling SR-IOV in a 64-bit Microsoft Windows Server 2012/R2 guest OS under Linux KVM
+-------------------------------------------------------------------------------------
 KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM. This
 includes traditional PCIe devices, as well as SR-IOV-capable devices based on
 the Intel Ethernet Controller XL710.
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-==================================================================
-Linux* Base Driver for the Intel(R) Ethernet Controller 700 Series
-==================================================================
+=================================================================
+Linux Base Driver for the Intel(R) Ethernet Controller 700 Series
+=================================================================
 
 Intel 40 Gigabit Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.
···
 Network Adapter XXV710 based devices.
 
 Speed, duplex, and autonegotiation advertising are configured through the
-ethtool* utility.
+ethtool utility.
 
 Caution: Only experienced network administrators should force speed and duplex
 or change autonegotiation advertising manually. The settings at the switch must
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-==================================================================
-Linux* Base Driver for Intel(R) Ethernet Adaptive Virtual Function
-==================================================================
+=================================================================
+Linux Base Driver for Intel(R) Ethernet Adaptive Virtual Function
+=================================================================
 
 Intel Ethernet Adaptive Virtual Function Linux driver.
 Copyright(c) 2013-2018 Intel Corporation.
···
 Overview
 ========
 
-This file describes the iavf Linux* Base Driver. This driver was formerly
+This file describes the iavf Linux Base Driver. This driver was formerly
 called i40evf.
 
 The iavf driver supports the below mentioned virtual function devices and
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-===================================================================
-Linux* Base Driver for the Intel(R) Ethernet Connection E800 Series
-===================================================================
+==================================================================
+Linux Base Driver for the Intel(R) Ethernet Connection E800 Series
+==================================================================
 
 Intel ice Linux driver.
 Copyright(c) 2018 Intel Corporation.
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-===========================================================
-Linux* Base Driver for Intel(R) Ethernet Network Connection
-===========================================================
+==========================================================
+Linux Base Driver for Intel(R) Ethernet Network Connection
+==========================================================
 
 Intel Gigabit Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.
···
 https://www.kernel.org/pub/software/network/ethtool/
 
 
-Enabling Wake on LAN* (WoL)
----------------------------
-WoL is configured through the ethtool* utility.
+Enabling Wake on LAN (WoL)
+--------------------------
+WoL is configured through the ethtool utility.
 
 WoL will be enabled on the system during the next shut down or reboot. For
 this driver version, in order to enable WoL, the igb driver must be loaded
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-============================================================
-Linux* Base Virtual Function Driver for Intel(R) 1G Ethernet
-============================================================
+===========================================================
+Linux Base Virtual Function Driver for Intel(R) 1G Ethernet
+===========================================================
 
 Intel Gigabit Virtual Function Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-============================================================================
-Linux* Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Adapters
-============================================================================
+===========================================================================
+Linux Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Adapters
+===========================================================================
 
 Intel 10 Gigabit Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.
···
 Known Issues/Troubleshooting
 ============================
 
-Enabling SR-IOV in a 64-bit Microsoft* Windows Server* 2012/R2 guest OS
------------------------------------------------------------------------
+Enabling SR-IOV in a 64-bit Microsoft Windows Server 2012/R2 guest OS
+---------------------------------------------------------------------
 Linux KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM.
 This includes traditional PCIe devices, as well as SR-IOV-capable devices based
 on the Intel Ethernet Controller XL710.
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-=============================================================
-Linux* Base Virtual Function Driver for Intel(R) 10G Ethernet
-=============================================================
+============================================================
+Linux Base Virtual Function Driver for Intel(R) 10G Ethernet
+============================================================
 
 Intel 10 Gigabit Virtual Function Linux driver.
 Copyright(c) 1999-2018 Intel Corporation.
···
 .. SPDX-License-Identifier: GPL-2.0+
 
-=========================================================
-Linux* Driver for the Pensando(R) Ethernet adapter family
-=========================================================
+========================================================
+Linux Driver for the Pensando(R) Ethernet adapter family
+========================================================
 
 Pensando Linux Ethernet driver.
 Copyright(c) 2019 Pensando Systems, Inc
+7-4
Documentation/networking/ip-sysctl.txt
···
 
 somaxconn - INTEGER
 	Limit of socket listen() backlog, known in userspace as SOMAXCONN.
-	Defaults to 128. See also tcp_max_syn_backlog for additional tuning
-	for TCP sockets.
+	Defaults to 4096. (Was 128 before linux-5.4)
+	See also tcp_max_syn_backlog for additional tuning for TCP sockets.
 
 tcp_abort_on_overflow - BOOLEAN
 	If listening service is too slow to accept new connections,
···
 	up to ~64K of unswappable memory.
 
 tcp_max_syn_backlog - INTEGER
-	Maximal number of remembered connection requests, which have not
-	received an acknowledgment from connecting client.
+	Maximal number of remembered connection requests (SYN_RECV),
+	which have not received an acknowledgment from connecting client.
+	This is a per-listener limit.
 	The minimal value is 128 for low memory machines, and it will
 	increase in proportion to the memory of machine.
 	If server suffers from overload, try increasing this number.
+	Remember to also check /proc/sys/net/core/somaxconn
+	A SYN_RECV request socket consumes about 304 bytes of memory.
 
 tcp_max_tw_buckets - INTEGER
 	Maximal number of timewait sockets held by system simultaneously.
+3-4
MAINTAINERS
···
 NETWORKING [TLS]
 M:	Boris Pismenny <borisp@mellanox.com>
 M:	Aviad Yehezkel <aviadye@mellanox.com>
-M:	Dave Watson <davejwatson@fb.com>
 M:	John Fastabend <john.fastabend@gmail.com>
 M:	Daniel Borkmann <daniel@iogearbox.net>
 M:	Jakub Kicinski <jakub.kicinski@netronome.com>
···
 
 RISC-V ARCHITECTURE
 M:	Paul Walmsley <paul.walmsley@sifive.com>
-M:	Palmer Dabbelt <palmer@sifive.com>
+M:	Palmer Dabbelt <palmer@dabbelt.com>
 M:	Albert Ou <aou@eecs.berkeley.edu>
 L:	linux-riscv@lists.infradead.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git
···
 F:	drivers/media/mmc/siano/
 
 SIFIVE DRIVERS
-M:	Palmer Dabbelt <palmer@sifive.com>
+M:	Palmer Dabbelt <palmer@dabbelt.com>
 M:	Paul Walmsley <paul.walmsley@sifive.com>
 L:	linux-riscv@lists.infradead.org
 T:	git git://github.com/sifive/riscv-linux.git
···
 
 SIFIVE FU540 SYSTEM-ON-CHIP
 M:	Paul Walmsley <paul.walmsley@sifive.com>
-M:	Palmer Dabbelt <palmer@sifive.com>
+M:	Palmer Dabbelt <palmer@dabbelt.com>
 L:	linux-riscv@lists.infradead.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/pjw/sifive.git
 S:	Supported
···
 CONFIG_DEVTMPFS=y
 # CONFIG_STANDALONE is not set
 # CONFIG_PREVENT_FIRMWARE_BUILD is not set
+CONFIG_MTD=y
+CONFIG_MTD_SPI_NOR=y
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_NETDEVICES=y
···
 CONFIG_GPIO_DWAPB=y
 CONFIG_GPIO_SNPS_CREG=y
 # CONFIG_HWMON is not set
+CONFIG_REGULATOR=y
+CONFIG_REGULATOR_FIXED_VOLTAGE=y
 CONFIG_DRM=y
 # CONFIG_DRM_FBDEV_EMULATION is not set
 CONFIG_DRM_UDL=y
···
 CONFIG_MMC_DW=y
 CONFIG_DMADEVICES=y
 CONFIG_DW_AXI_DMAC=y
+CONFIG_IIO=y
+CONFIG_TI_ADC108S102=y
 CONFIG_EXT3_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
+2-2
arch/arc/kernel/perf_event.c
···
 	/* loop thru all available h/w condition indexes */
 	for (i = 0; i < cc_bcr.c; i++) {
 		write_aux_reg(ARC_REG_CC_INDEX, i);
-		cc_name.indiv.word0 = read_aux_reg(ARC_REG_CC_NAME0);
-		cc_name.indiv.word1 = read_aux_reg(ARC_REG_CC_NAME1);
+		cc_name.indiv.word0 = le32_to_cpu(read_aux_reg(ARC_REG_CC_NAME0));
+		cc_name.indiv.word1 = le32_to_cpu(read_aux_reg(ARC_REG_CC_NAME1));
 
 		arc_pmu_map_hw_event(i, cc_name.str);
 		arc_pmu_add_raw_event_attr(i, cc_name.str);
···
 
 static inline void kuap_update_sr(u32 sr, u32 addr, u32 end)
 {
+	addr &= 0xf0000000;	/* align addr to start of segment */
 	barrier();	/* make sure thread.kuap is updated before playing with SRs */
 	while (addr < end) {
 		mtsrin(sr, addr);
+3
arch/powerpc/include/asm/elf.h
···
 	ARCH_DLINFO_CACHE_GEOMETRY;					\
 } while (0)
 
+/* Relocate the kernel image to @final_address */
+void relocate(unsigned long final_address);
+
 #endif /* _ASM_POWERPC_ELF_H */
+13
arch/powerpc/kernel/prom_init.c
···
 	/* Switch to secure mode. */
 	prom_printf("Switching to secure mode.\n");
 
+	/*
+	 * The ultravisor will do an integrity check of the kernel image but we
+	 * relocated it so the check will fail. Restore the original image by
+	 * relocating it back to the kernel virtual base address.
+	 */
+	if (IS_ENABLED(CONFIG_RELOCATABLE))
+		relocate(KERNELBASE);
+
 	ret = enter_secure_mode(kbase, fdt);
+
+	/* Relocate the kernel again. */
+	if (IS_ENABLED(CONFIG_RELOCATABLE))
+		relocate(kbase);
+
 	if (ret != U_SUCCESS) {
 		prom_printf("Returned %d from switching to secure mode.\n", ret);
 		prom_rtas_os_term("Switch to secure mode failed.\n");
···
  * Copyright (C) 2015 Regents of the University of California
  */
 
+#include <linux/elf.h>
 #include <linux/mm.h>
 #include <linux/slab.h>
 #include <linux/binfmts.h>
···
 	struct vdso_data data;
 	u8 page[PAGE_SIZE];
 } vdso_data_store __page_aligned_data;
-struct vdso_data *vdso_data = &vdso_data_store.data;
+static struct vdso_data *vdso_data = &vdso_data_store.data;
 
 static int __init vdso_init(void)
 {
+1
arch/riscv/mm/context.c
···
 #include <linux/mm.h>
 #include <asm/tlbflush.h>
 #include <asm/cacheflush.h>
+#include <asm/mmu_context.h>
 
 /*
  * When necessary, performs a deferred icache flush for the given MM context,
+2
arch/riscv/mm/fault.c
···
 #include <asm/ptrace.h>
 #include <asm/tlbflush.h>
 
+#include "../kernel/head.h"
+
 /*
  * This routine handles page faults. It determines the address and the
  * problem, and then passes it off to one of the appropriate routines.
+3-2
arch/riscv/mm/init.c
···
 #include <asm/pgtable.h>
 #include <asm/io.h>
 
+#include "../kernel/head.h"
+
 unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
 							__page_aligned_bss;
 EXPORT_SYMBOL(empty_zero_page);
···
  */
 
 #ifndef __riscv_cmodel_medany
-#error "setup_vm() is called from head.S before relocate so it should "
-	"not use absolute addressing."
+#error "setup_vm() is called from head.S before relocate so it should not use absolute addressing."
 #endif
 
 asmlinkage void __init setup_vm(uintptr_t dtb_pa)
···
 static void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	vcpu->arch.efer = efer;
-	if (!npt_enabled && !(efer & EFER_LMA))
-		efer &= ~EFER_LME;
+
+	if (!npt_enabled) {
+		/* Shadow paging assumes NX to be available. */
+		efer |= EFER_NX;
+
+		if (!(efer & EFER_LMA))
+			efer &= ~EFER_LME;
+	}
 
 	to_svm(vcpu)->vmcb->save.efer = efer | EFER_SVME;
 	mark_dirty(to_svm(vcpu)->vmcb, VMCB_CR);
+3-11
arch/x86/kvm/vmx/vmx.c
···
 	u64 guest_efer = vmx->vcpu.arch.efer;
 	u64 ignore_bits = 0;
 
-	if (!enable_ept) {
-		/*
-		 * NX is needed to handle CR0.WP=1, CR4.SMEP=1. Testing
-		 * host CPUID is more efficient than testing guest CPUID
-		 * or CR4. Host SMEP is anyway a requirement for guest SMEP.
-		 */
-		if (boot_cpu_has(X86_FEATURE_SMEP))
-			guest_efer |= EFER_NX;
-		else if (!(guest_efer & EFER_NX))
-			ignore_bits |= EFER_NX;
-	}
+	/* Shadow paging assumes NX to be available. */
+	if (!enable_ept)
+		guest_efer |= EFER_NX;
 
 	/*
 	 * LMA and LME handled by hardware; SCE meaningless outside long mode.
···
 	if (!sdma->script_number)
 		sdma->script_number = SDMA_SCRIPT_ADDRS_ARRAY_SIZE_V1;
 
+	if (sdma->script_number > sizeof(struct sdma_script_start_addrs)
+				  / sizeof(s32)) {
+		dev_err(sdma->dev,
+			"SDMA script number %d not match with firmware.\n",
+			sdma->script_number);
+		return;
+	}
+
 	for (i = 0; i < sdma->script_number; i++)
 		if (addr_arr[i] > 0)
 			saddr_arr[i] = addr_arr[i];
+19
drivers/dma/qcom/bam_dma.c
···
 
 	/* remove all transactions, including active transaction */
 	spin_lock_irqsave(&bchan->vc.lock, flag);
+	/*
+	 * If we have transactions queued, then some might be committed to the
+	 * hardware in the desc fifo. The only way to reset the desc fifo is
+	 * to do a hardware reset (either by pipe or the entire block).
+	 * bam_chan_init_hw() will trigger a pipe reset, and also reinit the
+	 * pipe. If the pipe is left disabled (default state after pipe reset)
+	 * and is accessed by a connected hardware engine, a fatal error in
+	 * the BAM will occur. There is a small window where this could happen
+	 * with bam_chan_init_hw(), but it is assumed that the caller has
+	 * stopped activity on any attached hardware engine. Make sure to do
+	 * this first so that the BAM hardware doesn't cause memory corruption
+	 * by accessing freed resources.
+	 */
+	if (!list_empty(&bchan->desc_list)) {
+		async_desc = list_first_entry(&bchan->desc_list,
+					      struct bam_async_desc, desc_node);
+		bam_chan_init_hw(bchan, async_desc->dir);
+	}
+
 	list_for_each_entry_safe(async_desc, tmp,
 				 &bchan->desc_list, desc_node) {
 		list_add(&async_desc->vd.node, &bchan->vc.desc_issued);
···
 
 config EFI_RCI2_TABLE
 	bool "EFI Runtime Configuration Interface Table Version 2 Support"
+	depends on X86 || COMPILE_TEST
 	help
 	  Displays the content of the Runtime Configuration Interface
 	  Table version 2 on Dell EMC PowerEdge systems as a binary
+1-1
drivers/firmware/efi/efi.c
···
 			      sizeof(*seed) + size);
 	if (seed != NULL) {
 		pr_notice("seeding entropy pool\n");
-		add_device_randomness(seed->bits, seed->size);
+		add_bootloader_randomness(seed->bits, seed->size);
 		early_memunmap(seed, sizeof(*seed) + size);
 	} else {
 		pr_err("Could not map UEFI random seed!\n");
···
 			     unsigned long dram_base,
 			     efi_loaded_image_t *image)
 {
+	unsigned long kernel_base;
 	efi_status_t status;
 
 	/*
···
 	 * loaded. These assumptions are made by the decompressor,
 	 * before any memory map is available.
 	 */
-	dram_base = round_up(dram_base, SZ_128M);
+	kernel_base = round_up(dram_base, SZ_128M);
 
-	status = reserve_kernel_base(sys_table, dram_base, reserve_addr,
+	/*
+	 * Note that some platforms (notably, the Raspberry Pi 2) put
+	 * spin-tables and other pieces of firmware at the base of RAM,
+	 * abusing the fact that the window of TEXT_OFFSET bytes at the
+	 * base of the kernel image is only partially used at the moment.
+	 * (Up to 5 pages are used for the swapper page tables)
+	 */
+	kernel_base += TEXT_OFFSET - 5 * PAGE_SIZE;
+
+	status = reserve_kernel_base(sys_table, kernel_base, reserve_addr,
 				     reserve_size);
 	if (status != EFI_SUCCESS) {
 		pr_efi_err(sys_table, "Unable to allocate memory for uncompressed kernel.\n");
···
 	*image_size = image->image_size;
 	status = efi_relocate_kernel(sys_table, image_addr, *image_size,
 				     *image_size,
-				     dram_base + MAX_UNCOMP_KERNEL_SIZE, 0);
+				     kernel_base + MAX_UNCOMP_KERNEL_SIZE, 0, 0);
 	if (status != EFI_SUCCESS) {
 		pr_efi_err(sys_table, "Failed to relocate kernel.\n");
 		efi_free(sys_table, *reserve_size, *reserve_addr);
+10-14
drivers/firmware/efi/libstub/efi-stub-helper.c
···
 }
 
 /*
- * Allocate at the lowest possible address.
+ * Allocate at the lowest possible address that is not below 'min'.
  */
-efi_status_t efi_low_alloc(efi_system_table_t *sys_table_arg,
-			   unsigned long size, unsigned long align,
-			   unsigned long *addr)
+efi_status_t efi_low_alloc_above(efi_system_table_t *sys_table_arg,
+				 unsigned long size, unsigned long align,
+				 unsigned long *addr, unsigned long min)
 {
 	unsigned long map_size, desc_size, buff_size;
 	efi_memory_desc_t *map;
···
 		start = desc->phys_addr;
 		end = start + desc->num_pages * EFI_PAGE_SIZE;
 
-		/*
-		 * Don't allocate at 0x0. It will confuse code that
-		 * checks pointers against NULL. Skip the first 8
-		 * bytes so we start at a nice even number.
-		 */
-		if (start == 0x0)
-			start += 8;
+		if (start < min)
+			start = min;
 
 		start = round_up(start, align);
 		if ((start + size) > end)
···
 			 unsigned long image_size,
 			 unsigned long alloc_size,
 			 unsigned long preferred_addr,
-			 unsigned long alignment)
+			 unsigned long alignment,
+			 unsigned long min_addr)
 {
 	unsigned long cur_image_addr;
 	unsigned long new_addr = 0;
···
 	 * possible.
 	 */
 	if (status != EFI_SUCCESS) {
-		status = efi_low_alloc(sys_table_arg, alloc_size, alignment,
-				       &new_addr);
+		status = efi_low_alloc_above(sys_table_arg, alloc_size,
+					     alignment, &new_addr, min_addr);
 	}
 	if (status != EFI_SUCCESS) {
 		pr_efi_err(sys_table_arg, "Failed to allocate usable memory for kernel.\n");
+8
drivers/firmware/efi/test/efi_test.c
···
 #include <linux/init.h>
 #include <linux/proc_fs.h>
 #include <linux/efi.h>
+#include <linux/security.h>
 #include <linux/slab.h>
 #include <linux/uaccess.h>
 
···
 
 static int efi_test_open(struct inode *inode, struct file *file)
 {
+	int ret = security_locked_down(LOCKDOWN_EFI_TEST);
+
+	if (ret)
+		return ret;
+
+	if (!capable(CAP_SYS_ADMIN))
+		return -EACCES;
 	/*
 	 * nothing special to do here
 	 * We do accept multiple open files at the same time as we
+1
drivers/firmware/efi/tpm.c
···
 
 	if (tbl_size < 0) {
 		pr_err(FW_BUG "Failed to parse event in TPM Final Events Log\n");
+		ret = -EINVAL;
 		goto out_calc;
 	}
 
+3-1
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
···
 	struct amdgpu_ring *ring = to_amdgpu_ring(sched_job->sched);
 	struct dma_fence *fence = NULL, *finished;
 	struct amdgpu_job *job;
-	int r;
+	int r = 0;
 
 	job = to_amdgpu_job(sched_job);
 	finished = &job->base.s_fence->finished;
···
 	job->fence = dma_fence_get(fence);
 
 	amdgpu_job_free_resources(job);
+
+	fence = r ? ERR_PTR(r) : fence;
 	return fence;
 }
 
···
 #ifdef CONFIG_DRM_AMD_DC_DCN2_0
 	// Allocate memory for the vm_helper
 	dc->vm_helper = kzalloc(sizeof(struct vm_helper), GFP_KERNEL);
+	if (!dc->vm_helper) {
+		dm_error("%s: failed to create dc->vm_helper\n", __func__);
+		goto fail;
+	}
 
 #endif
 	memcpy(&dc->bb_overrides, &init_params->bb_overrides, sizeof(dc->bb_overrides));
+9
drivers/gpu/drm/amd/display/dc/core/dc_link.c
···
 			CONTROLLER_DP_TEST_PATTERN_VIDEOMODE,
 			COLOR_DEPTH_UNDEFINED);
 
+	/* This second call is needed to reconfigure the DIG
+	 * as a workaround for the incorrect value being applied
+	 * from transmitter control.
+	 */
+	if (!dc_is_virtual_signal(pipe_ctx->stream->signal))
+		stream->link->link_enc->funcs->setup(
+			stream->link->link_enc,
+			pipe_ctx->stream->signal);
+
 #ifdef CONFIG_DRM_AMD_DC_DSC_SUPPORT
 	if (pipe_ctx->stream->timing.flags.DSC) {
 		if (dc_is_dp_signal(pipe_ctx->stream->signal) ||
+18-6
drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c
···
 	enum display_dongle_type *dongle = &sink_cap->dongle_type;
 	uint8_t type2_dongle_buf[DP_ADAPTOR_TYPE2_SIZE];
 	bool is_type2_dongle = false;
+	int retry_count = 2;
 	struct dp_hdmi_dongle_signature_data *dongle_signature;
 
 	/* Assume we have no valid DP passive dongle connected */
···
 			DP_HDMI_DONGLE_ADDRESS,
 			type2_dongle_buf,
 			sizeof(type2_dongle_buf))) {
-		*dongle = DISPLAY_DONGLE_DP_DVI_DONGLE;
-		sink_cap->max_hdmi_pixel_clock = DP_ADAPTOR_DVI_MAX_TMDS_CLK;
+		/* Passive HDMI dongles can sometimes fail here without retrying*/
+		while (retry_count > 0) {
+			if (i2c_read(ddc,
+				DP_HDMI_DONGLE_ADDRESS,
+				type2_dongle_buf,
+				sizeof(type2_dongle_buf)))
+				break;
+			retry_count--;
+		}
+		if (retry_count == 0) {
+			*dongle = DISPLAY_DONGLE_DP_DVI_DONGLE;
+			sink_cap->max_hdmi_pixel_clock = DP_ADAPTOR_DVI_MAX_TMDS_CLK;
 
-		CONN_DATA_DETECT(ddc->link, type2_dongle_buf, sizeof(type2_dongle_buf),
-				"DP-DVI passive dongle %dMhz: ",
-				DP_ADAPTOR_DVI_MAX_TMDS_CLK / 1000);
-		return;
+			CONN_DATA_DETECT(ddc->link, type2_dongle_buf, sizeof(type2_dongle_buf),
+					"DP-DVI passive dongle %dMhz: ",
+					DP_ADAPTOR_DVI_MAX_TMDS_CLK / 1000);
+			return;
+		}
 	}
 
 	/* Check if Type 2 dongle.*/
+6
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
···
 	if (stream1->view_format != stream2->view_format)
 		return false;
 
+	if (stream1->ignore_msa_timing_param || stream2->ignore_msa_timing_param)
+		return false;
+
 	return true;
 }
 static bool is_dp_and_hdmi_sharable(
···
 {
 
 	if (!are_stream_backends_same(old_stream, stream))
+		return false;
+
+	if (old_stream->ignore_msa_timing_param != stream->ignore_msa_timing_param)
 		return false;
 
 	return true;
···
 static void lpt_init_pch_refclk(struct drm_i915_private *dev_priv)
 {
 	struct intel_encoder *encoder;
-	bool pch_ssc_in_use = false;
 	bool has_fdi = false;
 
 	for_each_intel_encoder(&dev_priv->drm, encoder) {
···
 	 * clock hierarchy. That would also allow us to do
 	 * clock bending finally.
 	 */
+	dev_priv->pch_ssc_use = 0;
+
 	if (spll_uses_pch_ssc(dev_priv)) {
 		DRM_DEBUG_KMS("SPLL using PCH SSC\n");
-		pch_ssc_in_use = true;
+		dev_priv->pch_ssc_use |= BIT(DPLL_ID_SPLL);
 	}
 
 	if (wrpll_uses_pch_ssc(dev_priv, DPLL_ID_WRPLL1)) {
 		DRM_DEBUG_KMS("WRPLL1 using PCH SSC\n");
-		pch_ssc_in_use = true;
+		dev_priv->pch_ssc_use |= BIT(DPLL_ID_WRPLL1);
 	}
 
 	if (wrpll_uses_pch_ssc(dev_priv, DPLL_ID_WRPLL2)) {
 		DRM_DEBUG_KMS("WRPLL2 using PCH SSC\n");
-		pch_ssc_in_use = true;
+		dev_priv->pch_ssc_use |= BIT(DPLL_ID_WRPLL2);
 	}
 
-	if (pch_ssc_in_use)
+	if (dev_priv->pch_ssc_use)
 		return;
 
 	if (has_fdi) {
+15
drivers/gpu/drm/i915/display/intel_dpll_mgr.c
···
 	val = I915_READ(WRPLL_CTL(id));
 	I915_WRITE(WRPLL_CTL(id), val & ~WRPLL_PLL_ENABLE);
 	POSTING_READ(WRPLL_CTL(id));
+
+	/*
+	 * Try to set up the PCH reference clock once all DPLLs
+	 * that depend on it have been shut down.
+	 */
+	if (dev_priv->pch_ssc_use & BIT(id))
+		intel_init_pch_refclk(dev_priv);
 }
 
 static void hsw_ddi_spll_disable(struct drm_i915_private *dev_priv,
 				 struct intel_shared_dpll *pll)
 {
+	enum intel_dpll_id id = pll->info->id;
 	u32 val;
 
 	val = I915_READ(SPLL_CTL);
 	I915_WRITE(SPLL_CTL, val & ~SPLL_PLL_ENABLE);
 	POSTING_READ(SPLL_CTL);
+
+	/*
+	 * Try to set up the PCH reference clock once all DPLLs
+	 * that depend on it have been shut down.
+	 */
+	if (dev_priv->pch_ssc_use & BIT(id))
+		intel_init_pch_refclk(dev_priv);
 }
 
 static bool hsw_ddi_wrpll_get_hw_state(struct drm_i915_private *dev_priv,
···
 static void
 radeon_pci_shutdown(struct pci_dev *pdev)
 {
+#ifdef CONFIG_PPC64
+	struct drm_device *ddev = pci_get_drvdata(pdev);
+#endif
+
 	/* if we are running in a VM, make sure the device
 	 * torn down properly on reboot/shutdown
 	 */
 	if (radeon_device_is_virtual())
 		radeon_pci_remove(pdev);
+
+#ifdef CONFIG_PPC64
+	/* Some adapters need to be suspended before a
+	 * shutdown occurs in order to prevent an error
+	 * during kexec.
+	 * Make this power specific becauase it breaks
+	 * some non-power boards.
+	 */
+	radeon_suspend_kms(ddev, true, true, false);
+#endif
 }
 
 static int radeon_pmops_suspend(struct device *dev)
···
 	   MY PICTURES => KEY_WORDPROCESSOR
 	   MY MUSIC=> KEY_SPREADSHEET
 	*/
-	unsigned int keys[] = {
+	static const unsigned int keys[] = {
 		KEY_FN,
 		KEY_MESSENGER, KEY_CALENDAR,
 		KEY_ADDRESSBOOK, KEY_DOCUMENTS,
···
 		0
 	};
 
-	unsigned int *pkeys = &keys[0];
+	const unsigned int *pkeys = &keys[0];
 	unsigned short i;
 
 	if (pm->ifnum != 1)	/* only set up ONCE for interace 1 */
···
 {
 	struct zpff_device *zpff;
 	struct hid_report *report;
-	struct hid_input *hidinput = list_entry(hid->inputs.next,
-						struct hid_input, list);
-	struct input_dev *dev = hidinput->input;
+	struct hid_input *hidinput;
+	struct input_dev *dev;
 	int i, error;
+
+	if (list_empty(&hid->inputs)) {
+		hid_err(hid, "no inputs found\n");
+		return -ENODEV;
+	}
+	hidinput = list_entry(hid->inputs.next, struct hid_input, list);
+	dev = hidinput->input;
 
 	for (i = 0; i < 4; i++) {
 		report = hid_validate_values(hid, HID_OUTPUT_REPORT, 0, i, 1);
drivers/hid/i2c-hid/i2c-hid-core.c (+7, -111)

@@ -26 +26 @@
 #include <linux/delay.h>
 #include <linux/slab.h>
 #include <linux/pm.h>
-#include <linux/pm_runtime.h>
 #include <linux/device.h>
 #include <linux/wait.h>
 #include <linux/err.h>
@@ -47 +48 @@
 /* quirks to control the device */
 #define I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV	BIT(0)
 #define I2C_HID_QUIRK_NO_IRQ_AFTER_RESET	BIT(1)
-#define I2C_HID_QUIRK_NO_RUNTIME_PM		BIT(2)
-#define I2C_HID_QUIRK_DELAY_AFTER_SLEEP		BIT(3)
 #define I2C_HID_QUIRK_BOGUS_IRQ			BIT(4)
 
 /* flags */
@@ -169 +172 @@
 	{ USB_VENDOR_ID_WEIDA, HID_ANY_ID,
		I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV },
 	{ I2C_VENDOR_ID_HANTICK, I2C_PRODUCT_ID_HANTICK_5288,
-		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET |
-		I2C_HID_QUIRK_NO_RUNTIME_PM },
-	{ I2C_VENDOR_ID_RAYDIUM, I2C_PRODUCT_ID_RAYDIUM_4B33,
-		I2C_HID_QUIRK_DELAY_AFTER_SLEEP },
-	{ USB_VENDOR_ID_LG, I2C_DEVICE_ID_LG_8001,
-		I2C_HID_QUIRK_NO_RUNTIME_PM },
-	{ I2C_VENDOR_ID_GOODIX, I2C_DEVICE_ID_GOODIX_01F0,
-		I2C_HID_QUIRK_NO_RUNTIME_PM },
+		I2C_HID_QUIRK_NO_IRQ_AFTER_RESET },
 	{ USB_VENDOR_ID_ELAN, HID_ANY_ID,
		I2C_HID_QUIRK_BOGUS_IRQ },
 	{ 0, 0 }
@@ -387 +397 @@
 {
 	struct i2c_hid *ihid = i2c_get_clientdata(client);
 	int ret;
-	unsigned long now, delay;
 
 	i2c_hid_dbg(ihid, "%s\n", __func__);
@@ -404 +415 @@
 		goto set_pwr_exit;
 	}
 
-	if (ihid->quirks & I2C_HID_QUIRK_DELAY_AFTER_SLEEP &&
-	    power_state == I2C_HID_PWR_ON) {
-		now = jiffies;
-		if (time_after(ihid->sleep_delay, now)) {
-			delay = jiffies_to_usecs(ihid->sleep_delay - now);
-			usleep_range(delay, delay + 1);
-		}
-	}
-
 	ret = __i2c_hid_command(client, &hid_set_power_cmd, power_state,
				0, NULL, 0, NULL, 0);
-
-	if (ihid->quirks & I2C_HID_QUIRK_DELAY_AFTER_SLEEP &&
-	    power_state == I2C_HID_PWR_SLEEP)
-		ihid->sleep_delay = jiffies + msecs_to_jiffies(20);
 
 	if (ret)
		dev_err(&client->dev, "failed to change power setting.\n");
@@ -767 +791 @@
 {
 	struct i2c_client *client = hid->driver_data;
 	struct i2c_hid *ihid = i2c_get_clientdata(client);
-	int ret = 0;
-
-	ret = pm_runtime_get_sync(&client->dev);
-	if (ret < 0)
-		return ret;
 
 	set_bit(I2C_HID_STARTED, &ihid->flags);
 	return 0;
@@ -778 +807 @@
 	struct i2c_hid *ihid = i2c_get_clientdata(client);
 
 	clear_bit(I2C_HID_STARTED, &ihid->flags);
-
-	/* Save some power */
-	pm_runtime_put(&client->dev);
-}
-
-static int i2c_hid_power(struct hid_device *hid, int lvl)
-{
-	struct i2c_client *client = hid->driver_data;
-	struct i2c_hid *ihid = i2c_get_clientdata(client);
-
-	i2c_hid_dbg(ihid, "%s lvl:%d\n", __func__, lvl);
-
-	switch (lvl) {
-	case PM_HINT_FULLON:
-		pm_runtime_get_sync(&client->dev);
-		break;
-	case PM_HINT_NORMAL:
-		pm_runtime_put(&client->dev);
-		break;
-	}
-	return 0;
 }
 
 struct hid_ll_driver i2c_hid_ll_driver = {
@@ -786 +836 @@
 	.stop = i2c_hid_stop,
 	.open = i2c_hid_open,
 	.close = i2c_hid_close,
-	.power = i2c_hid_power,
 	.output_report = i2c_hid_output_report,
 	.raw_request = i2c_hid_raw_request,
 };
@@ -1053 +1104 @@
 
 	i2c_hid_acpi_fix_up_power(&client->dev);
 
-	pm_runtime_get_noresume(&client->dev);
-	pm_runtime_set_active(&client->dev);
-	pm_runtime_enable(&client->dev);
 	device_enable_async_suspend(&client->dev);
 
 	/* Make sure there is something at this address */
@@ -1060 +1114 @@
 	if (ret < 0) {
		dev_dbg(&client->dev, "nothing at this address: %d\n", ret);
		ret = -ENXIO;
-		goto err_pm;
+		goto err_regulator;
 	}
 
 	ret = i2c_hid_fetch_hid_descriptor(ihid);
 	if (ret < 0)
-		goto err_pm;
+		goto err_regulator;
 
 	ret = i2c_hid_init_irq(client);
 	if (ret < 0)
-		goto err_pm;
+		goto err_regulator;
 
 	hid = hid_allocate_device();
 	if (IS_ERR(hid)) {
@@ -1100 +1154 @@
		goto err_mem_free;
 	}
 
-	if (!(ihid->quirks & I2C_HID_QUIRK_NO_RUNTIME_PM))
-		pm_runtime_put(&client->dev);
-
 	return 0;
 
 err_mem_free:
@@ -1107 +1164 @@
 
 err_irq:
 	free_irq(client->irq, ihid);
-
-err_pm:
-	pm_runtime_put_noidle(&client->dev);
-	pm_runtime_disable(&client->dev);
 
 err_regulator:
 	regulator_bulk_disable(ARRAY_SIZE(ihid->pdata.supplies),
@@ -1119 +1180 @@
 {
 	struct i2c_hid *ihid = i2c_get_clientdata(client);
 	struct hid_device *hid;
-
-	if (!(ihid->quirks & I2C_HID_QUIRK_NO_RUNTIME_PM))
-		pm_runtime_get_sync(&client->dev);
-	pm_runtime_disable(&client->dev);
-	pm_runtime_set_suspended(&client->dev);
-	pm_runtime_put_noidle(&client->dev);
 
 	hid = ihid->hid;
 	hid_destroy_device(hid);
@@ -1152 +1219 @@
 	int wake_status;
 
 	if (hid->driver && hid->driver->suspend) {
-		/*
-		 * Wake up the device so that IO issues in
-		 * HID driver's suspend code can succeed.
-		 */
-		ret = pm_runtime_resume(dev);
-		if (ret < 0)
-			return ret;
-
		ret = hid->driver->suspend(hid, PMSG_SUSPEND);
		if (ret < 0)
			return ret;
 	}
 
-	if (!pm_runtime_suspended(dev)) {
-		/* Save some power */
-		i2c_hid_set_power(client, I2C_HID_PWR_SLEEP);
+	/* Save some power */
+	i2c_hid_set_power(client, I2C_HID_PWR_SLEEP);
 
-		disable_irq(client->irq);
-	}
+	disable_irq(client->irq);
 
 	if (device_may_wakeup(&client->dev)) {
		wake_status = enable_irq_wake(client->irq);
@@ -1202 +1279 @@
			       wake_status);
 	}
 
-	/* We'll resume to full power */
-	pm_runtime_disable(dev);
-	pm_runtime_set_active(dev);
-	pm_runtime_enable(dev);
-
 	enable_irq(client->irq);
 
 	/* Instead of resetting device, simply powers the device on. This
@@ -1222 +1304 @@
 }
 #endif
 
-#ifdef CONFIG_PM
-static int i2c_hid_runtime_suspend(struct device *dev)
-{
-	struct i2c_client *client = to_i2c_client(dev);
-
-	i2c_hid_set_power(client, I2C_HID_PWR_SLEEP);
-	disable_irq(client->irq);
-	return 0;
-}
-
-static int i2c_hid_runtime_resume(struct device *dev)
-{
-	struct i2c_client *client = to_i2c_client(dev);
-
-	enable_irq(client->irq);
-	i2c_hid_set_power(client, I2C_HID_PWR_ON);
-	return 0;
-}
-#endif
-
 static const struct dev_pm_ops i2c_hid_pm = {
 	SET_SYSTEM_SLEEP_PM_OPS(i2c_hid_suspend, i2c_hid_resume)
-	SET_RUNTIME_PM_OPS(i2c_hid_runtime_suspend, i2c_hid_runtime_resume,
-			   NULL)
 };
 
 static const struct i2c_device_id i2c_hid_id_table[] = {
drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c (+19)

@@ -323 +323 @@
		.driver_data = (void *)&sipodev_desc
 	},
 	{
+		/*
+		 * There are at least 2 Primebook C11B versions, the older
+		 * version has a product-name of "Primebook C11B", and a
+		 * bios version / release / firmware revision of:
+		 * V2.1.2 / 05/03/2018 / 18.2
+		 * The new version has "PRIMEBOOK C11B" as product-name and a
+		 * bios version / release / firmware revision of:
+		 * CFALKSW05_BIOS_V1.1.2 / 11/19/2018 / 19.2
+		 * Only the older version needs this quirk, note the newer
+		 * version will not match as it has a different product-name.
+		 */
+		.ident = "Trekstor Primebook C11B",
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TREKSTOR"),
+			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Primebook C11B"),
+		},
+		.driver_data = (void *)&sipodev_desc
+	},
+	{
		.ident = "Direkt-Tek DTLAPY116-2",
		.matches = {
			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Direkt-Tek"),
@@ -170 +170 @@
 
 	/* Polling the CVRF bit to make sure read data is ready */
 	return regmap_field_read_poll_timeout(ina->fields[F_CVRF],
-					      cvrf, cvrf, wait, 100000);
+					      cvrf, cvrf, wait, wait * 2);
 }
 
 static int ina3221_read_value(struct ina3221_data *ina, unsigned int reg,
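The change above scales the poll timeout to twice the expected conversion wait instead of a fixed 100 ms ceiling. A hypothetical userspace analogue of such a read-poll-timeout helper (the function names and the sleep being elided are assumptions; the kernel helper is a macro that really sleeps between reads):

```c
#include <assert.h>

/* Hypothetical poll loop: checks ready() once per `wait_us` interval and
 * gives up after `timeout_us`, mirroring read_poll_timeout-style helpers
 * where the timeout is derived from the conversion time (wait * 2)
 * rather than being a fixed ceiling. */
int poll_ready(int (*ready)(void *), void *ctx,
	       unsigned int wait_us, unsigned int timeout_us)
{
	unsigned int elapsed = 0;

	for (;;) {
		if (ready(ctx))
			return 0;
		if (elapsed >= timeout_us)
			return -1; /* the kernel helper returns -ETIMEDOUT */
		/* a real implementation would sleep wait_us here */
		elapsed += wait_us;
	}
}

/* Test predicate: becomes ready on the Nth check. */
int ready_after(void *ctx)
{
	int *countdown = ctx;

	return --(*countdown) <= 0;
}

/* One device that becomes ready in time, one that never does. */
int demo(void)
{
	int soon = 3, never = 1000;

	if (poll_ready(ready_after, &soon, 100, 200) != 0)
		return 0;
	return poll_ready(ready_after, &never, 100, 200) == -1;
}
```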
drivers/hwmon/nct7904.c (+12, -3)

@@ -82 +82 @@
 #define FANCTL1_FMR_REG		0x00 /* Bank 3; 1 reg per channel */
 #define FANCTL1_OUT_REG		0x10 /* Bank 3; 1 reg per channel */
 
+#define VOLT_MONITOR_MODE	0x0
+#define THERMAL_DIODE_MODE	0x1
+#define THERMISTOR_MODE		0x3
+
 #define ENABLE_TSI	BIT(1)
 
 static const unsigned short normal_i2c[] = {
@@ -939 +935 @@
 	for (i = 0; i < 4; i++) {
		val = (ret >> (i * 2)) & 0x03;
		bit = (1 << i);
-		if (val == 0) {
+		if (val == VOLT_MONITOR_MODE) {
			data->tcpu_mask &= ~bit;
+		} else if (val == THERMAL_DIODE_MODE && i < 2) {
+			data->temp_mode |= bit;
+			data->vsen_mask &= ~(0x06 << (i * 2));
+		} else if (val == THERMISTOR_MODE) {
+			data->vsen_mask &= ~(0x02 << (i * 2));
		} else {
-			if (val == 0x1 || val == 0x2)
-				data->temp_mode |= bit;
+			/* Reserved */
+			data->tcpu_mask &= ~bit;
			data->vsen_mask &= ~(0x06 << (i * 2));
		}
 	}
@@ -65 +65 @@
 #define SDMA_DESCQ_CNT 2048
 #define SDMA_DESC_INTR 64
 #define INVALID_TAIL 0xffff
+#define SDMA_PAD max_t(size_t, MAX_16B_PADDING, sizeof(u32))
 
 static uint sdma_descq_cnt = SDMA_DESCQ_CNT;
 module_param(sdma_descq_cnt, uint, S_IRUGO);
@@ -1297 +1296 @@
 	struct sdma_engine *sde;
 
 	if (dd->sdma_pad_dma) {
-		dma_free_coherent(&dd->pcidev->dev, 4,
+		dma_free_coherent(&dd->pcidev->dev, SDMA_PAD,
				  (void *)dd->sdma_pad_dma,
				  dd->sdma_pad_phys);
		dd->sdma_pad_dma = NULL;
@@ -1492 +1491 @@
 	}
 
 	/* Allocate memory for pad */
-	dd->sdma_pad_dma = dma_alloc_coherent(&dd->pcidev->dev, sizeof(u32),
+	dd->sdma_pad_dma = dma_alloc_coherent(&dd->pcidev->dev, SDMA_PAD,
					      &dd->sdma_pad_phys, GFP_KERNEL);
 	if (!dd->sdma_pad_dma) {
		dd_dev_err(dd, "failed to allocate SendDMA pad memory\n");
drivers/infiniband/hw/hfi1/tid_rdma.c (-5)

@@ -2736 +2736 @@
			diff = cmp_psn(psn,
				       flow->flow_state.r_next_psn);
			if (diff > 0) {
-				if (!(qp->r_flags & RVT_R_RDMAR_SEQ))
-					restart_tid_rdma_read_req(rcd,
-								  qp,
-								  wqe);
-
				/* Drop the packet.*/
				goto s_unlock;
			} else if (diff < 0) {
drivers/infiniband/hw/hfi1/verbs.c (+4, -6)

@@ -147 +147 @@
 /* Length of buffer to create verbs txreq cache name */
 #define TXREQ_NAME_LEN 24
 
-/* 16B trailing buffer */
-static const u8 trail_buf[MAX_16B_PADDING];
-
 static uint wss_threshold = 80;
 module_param(wss_threshold, uint, S_IRUGO);
 MODULE_PARM_DESC(wss_threshold, "Percentage (1-100) of LLC to use as a threshold for a cacheless copy");
@@ -817 +820 @@
 
 	/* add icrc, lt byte, and padding to flit */
 	if (extra_bytes)
-		ret = sdma_txadd_kvaddr(sde->dd, &tx->txreq,
-					(void *)trail_buf, extra_bytes);
+		ret = sdma_txadd_daddr(sde->dd, &tx->txreq,
+				       sde->dd->sdma_pad_phys, extra_bytes);
 
 bail_txadd:
 	return ret;
@@ -1086 +1089 @@
		}
		/* add icrc, lt byte, and padding to flit */
		if (extra_bytes)
-			seg_pio_copy_mid(pbuf, trail_buf, extra_bytes);
+			seg_pio_copy_mid(pbuf, ppd->dd->sdma_pad_dma,
+					 extra_bytes);
 
		seg_pio_copy_end(pbuf);
 	}
@@ -1967 +1967 @@
 	int err;
 
 	if (IS_ENABLED(CONFIG_INFINIBAND_ON_DEMAND_PAGING)) {
-		xa_erase(&dev->mdev->priv.mkey_table,
-			 mlx5_base_mkey(mmw->mmkey.key));
+		xa_erase_irq(&dev->mdev->priv.mkey_table,
+			     mlx5_base_mkey(mmw->mmkey.key));
		/*
		 * pagefault_single_data_segment() may be accessing mmw under
		 * SRCU if the user bound an ODP MR to this MW.
drivers/infiniband/hw/mlx5/qp.c (+5, -3)

@@ -3249 +3249 @@
 	}
 
 	/* Only remove the old rate after new rate was set */
-	if ((old_rl.rate &&
-	     !mlx5_rl_are_equal(&old_rl, &new_rl)) ||
-	    (new_state != MLX5_SQC_STATE_RDY))
+	if ((old_rl.rate && !mlx5_rl_are_equal(&old_rl, &new_rl)) ||
+	    (new_state != MLX5_SQC_STATE_RDY)) {
		mlx5_rl_remove_rate(dev, &old_rl);
+		if (new_state != MLX5_SQC_STATE_RDY)
+			memset(&new_rl, 0, sizeof(new_rl));
+	}
 
 	ibqp->rl = new_rl;
 	sq->state = new_state;
@@ -2794 +2794 @@
 	struct device_domain_info *info;
 
 	info = dev->archdata.iommu;
-	if (info && info != DUMMY_DEVICE_DOMAIN_INFO)
+	if (info && info != DUMMY_DEVICE_DOMAIN_INFO && info != DEFER_DEVICE_DOMAIN_INFO)
		return (info->domain == si_domain);
 
 	return 0;
drivers/iommu/ipmmu-vmsa.c (+1, -3)

@@ -1105 +1105 @@
 	/* Root devices have mandatory IRQs */
 	if (ipmmu_is_root(mmu)) {
		irq = platform_get_irq(pdev, 0);
-		if (irq < 0) {
-			dev_err(&pdev->dev, "no IRQ found\n");
+		if (irq < 0)
			return irq;
-		}
 
		ret = devm_request_irq(&pdev->dev, irq, ipmmu_irq, 0,
				       dev_name(&pdev->dev), mmu);
drivers/isdn/capi/capi.c (+1, -1)

@@ -744 +744 @@
 
 	poll_wait(file, &(cdev->recvwait), wait);
 	mask = EPOLLOUT | EPOLLWRNORM;
-	if (!skb_queue_empty(&cdev->recvqueue))
+	if (!skb_queue_empty_lockless(&cdev->recvqueue))
		mask |= EPOLLIN | EPOLLRDNORM;
 	return mask;
 }
@@ -37 +37 @@
 	unsigned int i;
 	u32 reg, offset;
 
-	if (priv->type == BCM7445_DEVICE_ID)
-		offset = CORE_STS_OVERRIDE_IMP;
-	else
-		offset = CORE_STS_OVERRIDE_IMP2;
-
 	/* Enable the port memories */
 	reg = core_readl(priv, CORE_MEM_PSM_VDD_CTRL);
 	reg &= ~P_TXQ_PSM_VDD(port);
 	core_writel(priv, reg, CORE_MEM_PSM_VDD_CTRL);
-
-	/* Enable Broadcast, Multicast, Unicast forwarding to IMP port */
-	reg = core_readl(priv, CORE_IMP_CTL);
-	reg |= (RX_BCST_EN | RX_MCST_EN | RX_UCST_EN);
-	reg &= ~(RX_DIS | TX_DIS);
-	core_writel(priv, reg, CORE_IMP_CTL);
 
 	/* Enable forwarding */
 	core_writel(priv, SW_FWDG_EN, CORE_SWMODE);
@@ -60 +71 @@
 
		b53_brcm_hdr_setup(ds, port);
 
-	/* Force link status for IMP port */
-	reg = core_readl(priv, offset);
-	reg |= (MII_SW_OR | LINK_STS);
-	core_writel(priv, reg, offset);
+	if (port == 8) {
+		if (priv->type == BCM7445_DEVICE_ID)
+			offset = CORE_STS_OVERRIDE_IMP;
+		else
+			offset = CORE_STS_OVERRIDE_IMP2;
+
+		/* Force link status for IMP port */
+		reg = core_readl(priv, offset);
+		reg |= (MII_SW_OR | LINK_STS);
+		core_writel(priv, reg, offset);
+
+		/* Enable Broadcast, Multicast, Unicast forwarding to IMP port */
+		reg = core_readl(priv, CORE_IMP_CTL);
+		reg |= (RX_BCST_EN | RX_MCST_EN | RX_UCST_EN);
+		reg &= ~(RX_DIS | TX_DIS);
+		core_writel(priv, reg, CORE_IMP_CTL);
+	} else {
+		reg = core_readl(priv, CORE_G_PCTL_PORT(port));
+		reg &= ~(RX_DIS | TX_DIS);
+		core_writel(priv, reg, CORE_G_PCTL_PORT(port));
+	}
 }
 
 static void bcm_sf2_gphy_enable_set(struct dsa_switch *ds, bool enable)
drivers/net/dsa/sja1105/Kconfig (+2, -2)

@@ -26 +26 @@
 
 config NET_DSA_SJA1105_TAS
 	bool "Support for the Time-Aware Scheduler on NXP SJA1105"
-	depends on NET_DSA_SJA1105
-	depends on NET_SCH_TAPRIO
+	depends on NET_DSA_SJA1105 && NET_SCH_TAPRIO
+	depends on NET_SCH_TAPRIO=y || NET_DSA_SJA1105=m
 	help
	  This enables support for the TTEthernet-based egress scheduling
	  engine in the SJA1105 DSA driver, which is controlled using a
drivers/net/ethernet/arc/emac_rockchip.c (+3)

@@ -256 +256 @@
 	if (priv->regulator)
		regulator_disable(priv->regulator);
 
+	if (priv->soc_data->need_div_macclk)
+		clk_disable_unprepare(priv->macclk);
+
 	free_netdev(ndev);
 	return err;
 }
drivers/net/ethernet/broadcom/bnxt/bnxt.c (+4, -6)

@@ -10382 +10382 @@
 {
 	bnxt_unmap_bars(bp, bp->pdev);
 	pci_release_regions(bp->pdev);
-	pci_disable_device(bp->pdev);
+	if (pci_is_enabled(bp->pdev))
+		pci_disable_device(bp->pdev);
 }
 
 static void bnxt_init_dflt_coal(struct bnxt *bp)
@@ -10670 +10669 @@
		bp->fw_reset_state = BNXT_FW_RESET_STATE_RESET_FW;
 	}
 	/* fall through */
-	case BNXT_FW_RESET_STATE_RESET_FW: {
-		u32 wait_dsecs = bp->fw_health->post_reset_wait_dsecs;
-
+	case BNXT_FW_RESET_STATE_RESET_FW:
		bnxt_reset_all(bp);
		bp->fw_reset_state = BNXT_FW_RESET_STATE_ENABLE_DEV;
-		bnxt_queue_fw_reset_work(bp, wait_dsecs * HZ / 10);
+		bnxt_queue_fw_reset_work(bp, bp->fw_reset_min_dsecs * HZ / 10);
		return;
-	}
 	case BNXT_FW_RESET_STATE_ENABLE_DEV:
		if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state) &&
		    bp->fw_health) {
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c (+67, -45)

@@ -29 +29 @@
 	val = bnxt_fw_health_readl(bp, BNXT_FW_HEALTH_REG);
 	health_status = val & 0xffff;
 
-	if (health_status == BNXT_FW_STATUS_HEALTHY) {
-		rc = devlink_fmsg_string_pair_put(fmsg, "FW status",
-						  "Healthy;");
-		if (rc)
-			return rc;
-	} else if (health_status < BNXT_FW_STATUS_HEALTHY) {
-		rc = devlink_fmsg_string_pair_put(fmsg, "FW status",
-						  "Not yet completed initialization;");
+	if (health_status < BNXT_FW_STATUS_HEALTHY) {
+		rc = devlink_fmsg_string_pair_put(fmsg, "Description",
+						  "Not yet completed initialization");
		if (rc)
			return rc;
 	} else if (health_status > BNXT_FW_STATUS_HEALTHY) {
-		rc = devlink_fmsg_string_pair_put(fmsg, "FW status",
-						  "Encountered fatal error and cannot recover;");
+		rc = devlink_fmsg_string_pair_put(fmsg, "Description",
+						  "Encountered fatal error and cannot recover");
		if (rc)
			return rc;
 	}
 
 	if (val >> 16) {
-		rc = devlink_fmsg_u32_pair_put(fmsg, "Error", val >> 16);
+		rc = devlink_fmsg_u32_pair_put(fmsg, "Error code", val >> 16);
		if (rc)
			return rc;
 	}
@@ -210 +215 @@
 
 static const struct bnxt_dl_nvm_param nvm_params[] = {
 	{DEVLINK_PARAM_GENERIC_ID_ENABLE_SRIOV, NVM_OFF_ENABLE_SRIOV,
-	 BNXT_NVM_SHARED_CFG, 1},
+	 BNXT_NVM_SHARED_CFG, 1, 1},
 	{DEVLINK_PARAM_GENERIC_ID_IGNORE_ARI, NVM_OFF_IGNORE_ARI,
-	 BNXT_NVM_SHARED_CFG, 1},
+	 BNXT_NVM_SHARED_CFG, 1, 1},
 	{DEVLINK_PARAM_GENERIC_ID_MSIX_VEC_PER_PF_MAX,
-	 NVM_OFF_MSIX_VEC_PER_PF_MAX, BNXT_NVM_SHARED_CFG, 10},
+	 NVM_OFF_MSIX_VEC_PER_PF_MAX, BNXT_NVM_SHARED_CFG, 10, 4},
 	{DEVLINK_PARAM_GENERIC_ID_MSIX_VEC_PER_PF_MIN,
-	 NVM_OFF_MSIX_VEC_PER_PF_MIN, BNXT_NVM_SHARED_CFG, 7},
+	 NVM_OFF_MSIX_VEC_PER_PF_MIN, BNXT_NVM_SHARED_CFG, 7, 4},
 	{BNXT_DEVLINK_PARAM_ID_GRE_VER_CHECK, NVM_OFF_DIS_GRE_VER_CHECK,
-	 BNXT_NVM_SHARED_CFG, 1},
+	 BNXT_NVM_SHARED_CFG, 1, 1},
 };
+
+union bnxt_nvm_data {
+	u8	val8;
+	__le32	val32;
+};
+
+static void bnxt_copy_to_nvm_data(union bnxt_nvm_data *dst,
+				  union devlink_param_value *src,
+				  int nvm_num_bits, int dl_num_bytes)
+{
+	u32 val32 = 0;
+
+	if (nvm_num_bits == 1) {
+		dst->val8 = src->vbool;
+		return;
+	}
+	if (dl_num_bytes == 4)
+		val32 = src->vu32;
+	else if (dl_num_bytes == 2)
+		val32 = (u32)src->vu16;
+	else if (dl_num_bytes == 1)
+		val32 = (u32)src->vu8;
+	dst->val32 = cpu_to_le32(val32);
+}
+
+static void bnxt_copy_from_nvm_data(union devlink_param_value *dst,
+				    union bnxt_nvm_data *src,
+				    int nvm_num_bits, int dl_num_bytes)
+{
+	u32 val32;
+
+	if (nvm_num_bits == 1) {
+		dst->vbool = src->val8;
+		return;
+	}
+	val32 = le32_to_cpu(src->val32);
+	if (dl_num_bytes == 4)
+		dst->vu32 = val32;
+	else if (dl_num_bytes == 2)
+		dst->vu16 = (u16)val32;
+	else if (dl_num_bytes == 1)
+		dst->vu8 = (u8)val32;
+}
 
 static int bnxt_hwrm_nvm_req(struct bnxt *bp, u32 param_id, void *msg,
			     int msg_len, union devlink_param_value *val)
 {
 	struct hwrm_nvm_get_variable_input *req = msg;
-	void *data_addr = NULL, *buf = NULL;
 	struct bnxt_dl_nvm_param nvm_param;
-	int bytesize, idx = 0, rc, i;
+	union bnxt_nvm_data *data;
 	dma_addr_t data_dma_addr;
+	int idx = 0, rc, i;
 
 	/* Get/Set NVM CFG parameter is supported only on PFs */
 	if (BNXT_VF(bp))
@@ -292 +254 @@
 	else if (nvm_param.dir_type == BNXT_NVM_FUNC_CFG)
		idx = bp->pf.fw_fid - BNXT_FIRST_PF_FID;
 
-	bytesize = roundup(nvm_param.num_bits, BITS_PER_BYTE) / BITS_PER_BYTE;
-	switch (bytesize) {
-	case 1:
-		if (nvm_param.num_bits == 1)
-			buf = &val->vbool;
-		else
-			buf = &val->vu8;
-		break;
-	case 2:
-		buf = &val->vu16;
-		break;
-	case 4:
-		buf = &val->vu32;
-		break;
-	default:
-		return -EFAULT;
-	}
-
-	data_addr = dma_alloc_coherent(&bp->pdev->dev, bytesize,
-				       &data_dma_addr, GFP_KERNEL);
-	if (!data_addr)
+	data = dma_alloc_coherent(&bp->pdev->dev, sizeof(*data),
+				  &data_dma_addr, GFP_KERNEL);
+	if (!data)
		return -ENOMEM;
 
 	req->dest_data_addr = cpu_to_le64(data_dma_addr);
-	req->data_len = cpu_to_le16(nvm_param.num_bits);
+	req->data_len = cpu_to_le16(nvm_param.nvm_num_bits);
 	req->option_num = cpu_to_le16(nvm_param.offset);
 	req->index_0 = cpu_to_le16(idx);
 	if (idx)
		req->dimensions = cpu_to_le16(1);
 
 	if (req->req_type == cpu_to_le16(HWRM_NVM_SET_VARIABLE)) {
-		memcpy(data_addr, buf, bytesize);
+		bnxt_copy_to_nvm_data(data, val, nvm_param.nvm_num_bits,
+				      nvm_param.dl_num_bytes);
		rc = hwrm_send_message(bp, msg, msg_len, HWRM_CMD_TIMEOUT);
 	} else {
		rc = hwrm_send_message_silent(bp, msg, msg_len,
					      HWRM_CMD_TIMEOUT);
+		if (!rc)
+			bnxt_copy_from_nvm_data(val, data,
+						nvm_param.nvm_num_bits,
+						nvm_param.dl_num_bytes);
 	}
-	if (!rc && req->req_type == cpu_to_le16(HWRM_NVM_GET_VARIABLE))
-		memcpy(buf, data_addr, bytesize);
-
-	dma_free_coherent(&bp->pdev->dev, bytesize, data_addr, data_dma_addr);
+	dma_free_coherent(&bp->pdev->dev, sizeof(*data), data, data_dma_addr);
 	if (rc == -EACCES)
		netdev_err(bp->dev, "PF does not have admin privileges to modify NVM config\n");
 	return rc;
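The new helpers above funnel 1-, 2-, and 4-byte devlink values through a single little-endian 32-bit wire word. The serialization idea can be sketched in portable userspace C, using explicit byte shifts in place of `cpu_to_le32`/`le32_to_cpu` (function names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Serialize a host value into a 4-byte little-endian buffer, the way
 * bnxt_copy_to_nvm_data() widens every field into one __le32 word. */
void put_le32_field(uint8_t dst[4], uint32_t val)
{
	dst[0] = val & 0xff;
	dst[1] = (val >> 8) & 0xff;
	dst[2] = (val >> 16) & 0xff;
	dst[3] = (val >> 24) & 0xff;
}

/* Deserialize the 4-byte little-endian buffer back to a host value. */
uint32_t get_le32_field(const uint8_t src[4])
{
	return (uint32_t)src[0] | ((uint32_t)src[1] << 8) |
	       ((uint32_t)src[2] << 16) | ((uint32_t)src[3] << 24);
}

/* Round-trip a 16-bit value through the 32-bit wire format:
 * widen on the way out (as for vu16), narrow on the way back. */
uint16_t roundtrip_u16(uint16_t v)
{
	uint8_t buf[4];

	put_le32_field(buf, (uint32_t)v);
	return (uint16_t)get_le32_field(buf);
}
```

Doing the width dispatch in one pair of helpers keeps the DMA buffer a fixed size and avoids pointing the firmware at differently sized host fields.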
@@ -695 +695 @@
 	lld->write_cmpl_support = adap->params.write_cmpl_support;
 }
 
-static void uld_attach(struct adapter *adap, unsigned int uld)
+static int uld_attach(struct adapter *adap, unsigned int uld)
 {
-	void *handle;
 	struct cxgb4_lld_info lli;
+	void *handle;
 
 	uld_init(adap, &lli);
 	uld_queue_init(adap, uld, &lli);
@@ -708 +708 @@
		dev_warn(adap->pdev_dev,
			 "could not attach to the %s driver, error %ld\n",
			 adap->uld[uld].name, PTR_ERR(handle));
-		return;
+		return PTR_ERR(handle);
 	}
 
 	adap->uld[uld].handle = handle;
@@ -716 +716 @@
 
 	if (adap->flags & CXGB4_FULL_INIT_DONE)
		adap->uld[uld].state_change(handle, CXGB4_STATE_UP);
+
+	return 0;
 }
 
-/**
- * cxgb4_register_uld - register an upper-layer driver
- * @type: the ULD type
- * @p: the ULD methods
+/* cxgb4_register_uld - register an upper-layer driver
+ * @type: the ULD type
+ * @p: the ULD methods
  *
- *	Registers an upper-layer driver with this driver and notifies the ULD
- *	about any presently available devices that support its type.  Returns
- *	%-EBUSY if a ULD of the same type is already registered.
+ * Registers an upper-layer driver with this driver and notifies the ULD
+ * about any presently available devices that support its type.
  */
 void cxgb4_register_uld(enum cxgb4_uld type,
			const struct cxgb4_uld_info *p)
 {
-	int ret = 0;
 	struct adapter *adap;
+	int ret = 0;
 
 	if (type >= CXGB4_ULD_MAX)
		return;
@@ -763 +763 @@
		if (ret)
			goto free_irq;
		adap->uld[type] = *p;
-		uld_attach(adap, type);
+		ret = uld_attach(adap, type);
+		if (ret)
+			goto free_txq;
		continue;
+free_txq:
+		release_sge_txq_uld(adap, type);
 free_irq:
		if (adap->flags & CXGB4_FULL_INIT_DONE)
			quiesce_rx_uld(adap, type);
drivers/net/ethernet/chelsio/cxgb4/sge.c (+2, -6)

@@ -3791 +3791 @@
 	 * write the CIDX Updates into the Status Page at the end of the
 	 * TX Queue.
 	 */
-	c.autoequiqe_to_viid = htonl((dbqt
-				      ? FW_EQ_ETH_CMD_AUTOEQUIQE_F
-				      : FW_EQ_ETH_CMD_AUTOEQUEQE_F) |
+	c.autoequiqe_to_viid = htonl(FW_EQ_ETH_CMD_AUTOEQUEQE_F |
				     FW_EQ_ETH_CMD_VIID_V(pi->viid));
 
 	c.fetchszm_to_iqid =
-		htonl(FW_EQ_ETH_CMD_HOSTFCMODE_V(dbqt
-						 ? HOSTFCMODE_INGRESS_QUEUE_X
-						 : HOSTFCMODE_STATUS_PAGE_X) |
+		htonl(FW_EQ_ETH_CMD_HOSTFCMODE_V(HOSTFCMODE_STATUS_PAGE_X) |
		      FW_EQ_ETH_CMD_PCIECHN_V(pi->tx_chan) |
		      FW_EQ_ETH_CMD_FETCHRO_F | FW_EQ_ETH_CMD_IQID_V(iqid));
@@ -3558 +3558 @@
 
 	for (i = 0; i < irq_cnt; i++) {
		snprintf(irq_name, sizeof(irq_name), "int%d", i);
-		irq = platform_get_irq_byname(pdev, irq_name);
+		irq = platform_get_irq_byname_optional(pdev, irq_name);
		if (irq < 0)
			irq = platform_get_irq(pdev, i);
		if (irq < 0) {
drivers/net/ethernet/freescale/fec_ptp.c (+2, -2)

@@ -600 +600 @@
 
 	INIT_DELAYED_WORK(&fep->time_keep, fec_time_keep);
 
-	irq = platform_get_irq_byname(pdev, "pps");
+	irq = platform_get_irq_byname_optional(pdev, "pps");
 	if (irq < 0)
-		irq = platform_get_irq(pdev, irq_idx);
+		irq = platform_get_irq_optional(pdev, irq_idx);
 	/* Failure to get an irq is not fatal,
 	 * only the PTP_CLOCK_PPS clock events should stop
 	 */
drivers/net/ethernet/google/gve/gve_rx.c (+2)

@@ -289 +289 @@
 
 	len = be16_to_cpu(rx_desc->len) - GVE_RX_PAD;
 	page_info = &rx->data.page_info[idx];
+	dma_sync_single_for_cpu(&priv->pdev->dev, rx->data.qpl->page_buses[idx],
+				PAGE_SIZE, DMA_FROM_DEVICE);
 
 	/* gvnic can only receive into registered segments. If the buffer
 	 * can't be recycled, our only choice is to copy the data out of
@@ -607 +607 @@
 	for (i = 0; i < adapter->num_rx_queues; i++)
		rxdr[i].count = rxdr->count;
 
+	err = 0;
 	if (netif_running(adapter->netdev)) {
		/* Try to get new resources before deleting old */
		err = e1000_setup_all_rx_resources(adapter);
@@ -628 +627 @@
		adapter->rx_ring = rxdr;
		adapter->tx_ring = txdr;
		err = e1000_up(adapter);
-		if (err)
-			goto err_setup;
 	}
 	kfree(tx_old);
 	kfree(rx_old);
 
 	clear_bit(__E1000_RESETTING, &adapter->flags);
-	return 0;
+	return err;
+
 err_setup_tx:
 	e1000_free_all_rx_resources(adapter);
 err_setup_rx:
@@ -646 +646 @@
 err_alloc_tx:
 	if (netif_running(adapter->netdev))
		e1000_up(adapter);
-err_setup:
 	clear_bit(__E1000_RESETTING, &adapter->flags);
 	return err;
 }
drivers/net/ethernet/intel/i40e/i40e_xsk.c (-5)

@@ -157 +157 @@
		err = i40e_queue_pair_enable(vsi, qid);
		if (err)
			return err;
-
-		/* Kick start the NAPI context so that receiving will start */
-		err = i40e_xsk_wakeup(vsi->netdev, qid, XDP_WAKEUP_RX);
-		if (err)
-			return err;
 	}
 
 	return 0;
drivers/net/ethernet/intel/igb/e1000_82575.c (+1, -1)

@@ -466 +466 @@
		? igb_setup_copper_link_82575
		: igb_setup_serdes_link_82575;
 
-	if (mac->type == e1000_82580) {
+	if (mac->type == e1000_82580 || mac->type == e1000_i350) {
		switch (hw->device_id) {
		/* feature not supported on these id's */
		case E1000_DEV_ID_DH89XXCC_SGMII:
drivers/net/ethernet/intel/igb/igb_main.c (+5, -3)

@@ -753 +753 @@
		struct net_device *netdev = igb->netdev;
		hw->hw_addr = NULL;
		netdev_err(netdev, "PCIe link lost\n");
-		WARN(1, "igb: Failed to read reg 0x%x!\n", reg);
+		WARN(pci_device_is_present(igb->pdev),
+		     "igb: Failed to read reg 0x%x!\n", reg);
 	}
 
 	return value;
@@ -2065 +2064 @@
 	if ((hw->phy.media_type == e1000_media_type_copper) &&
 	    (!(connsw & E1000_CONNSW_AUTOSENSE_EN))) {
		swap_now = true;
-	} else if (!(connsw & E1000_CONNSW_SERDESD)) {
+	} else if ((hw->phy.media_type != e1000_media_type_copper) &&
+		   !(connsw & E1000_CONNSW_SERDESD)) {
		/* copper signal takes time to appear */
		if (adapter->copper_tries < 4) {
			adapter->copper_tries++;
@@ -2372 +2370 @@
		adapter->ei.get_invariants(hw);
		adapter->flags &= ~IGB_FLAG_MEDIA_RESET;
 	}
-	if ((mac->type == e1000_82575) &&
+	if ((mac->type == e1000_82575 || mac->type == e1000_i350) &&
 	    (adapter->flags & IGB_FLAG_MAS_ENABLE)) {
		igb_enable_mas(adapter);
 	}
drivers/net/ethernet/intel/igc/igc_main.c (+2, -1)

@@ -4047 +4047 @@
		hw->hw_addr = NULL;
		netif_device_detach(netdev);
		netdev_err(netdev, "PCIe link lost, device now detached\n");
-		WARN(1, "igc: Failed to read reg 0x%x!\n", reg);
+		WARN(pci_device_is_present(igc->pdev),
+		     "igc: Failed to read reg 0x%x!\n", reg);
 	}
 
 	return value;
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c (-1)

@@ -4310 +4310 @@
		if (test_bit(__IXGBE_RX_FCOE, &rx_ring->state))
			set_bit(__IXGBE_RX_3K_BUFFER, &rx_ring->state);
 
-		clear_bit(__IXGBE_RX_BUILD_SKB_ENABLED, &rx_ring->state);
		if (adapter->flags2 & IXGBE_FLAG2_RX_LEGACY)
			continue;
@@ -471 +471 @@
		priv->mfunc.master.res_tracker.res_alloc[RES_MPT].quota[pf];
 }
 
-static int get_max_gauranteed_vfs_counter(struct mlx4_dev *dev)
+static int
+mlx4_calc_res_counter_guaranteed(struct mlx4_dev *dev,
+				 struct resource_allocator *res_alloc,
+				 int vf)
 {
-	/* reduce the sink counter */
-	return (dev->caps.max_counters - 1 -
-		(MLX4_PF_COUNTERS_PER_PORT * MLX4_MAX_PORTS))
-		/ MLX4_MAX_PORTS;
+	struct mlx4_active_ports actv_ports;
+	int ports, counters_guaranteed;
+
+	/* For master, only allocate according to the number of phys ports */
+	if (vf == mlx4_master_func_num(dev))
+		return MLX4_PF_COUNTERS_PER_PORT * dev->caps.num_ports;
+
+	/* calculate real number of ports for the VF */
+	actv_ports = mlx4_get_active_ports(dev, vf);
+	ports = bitmap_weight(actv_ports.ports, dev->caps.num_ports);
+	counters_guaranteed = ports * MLX4_VF_COUNTERS_PER_PORT;
+
+	/* If we do not have enough counters for this VF, do not
+	 * allocate any for it. '-1' to reduce the sink counter.
+	 */
+	if ((res_alloc->res_reserved + counters_guaranteed) >
+	    (dev->caps.max_counters - 1))
+		return 0;
+
+	return counters_guaranteed;
 }
 
 int mlx4_init_resource_tracker(struct mlx4_dev *dev)
@@ -503 +484 @@
 	struct mlx4_priv *priv = mlx4_priv(dev);
 	int i, j;
 	int t;
-	int max_vfs_guarantee_counter = get_max_gauranteed_vfs_counter(dev);
 
 	priv->mfunc.master.res_tracker.slave_list =
		kcalloc(dev->num_slaves, sizeof(struct slave_list),
@@ -621 +603 @@
			break;
		case RES_COUNTER:
			res_alloc->quota[t] = dev->caps.max_counters;
-			if (t == mlx4_master_func_num(dev))
-				res_alloc->guaranteed[t] =
-					MLX4_PF_COUNTERS_PER_PORT *
-					MLX4_MAX_PORTS;
-			else if (t <= max_vfs_guarantee_counter)
-				res_alloc->guaranteed[t] =
-					MLX4_VF_COUNTERS_PER_PORT *
-					MLX4_MAX_PORTS;
-			else
-				res_alloc->guaranteed[t] = 0;
+			res_alloc->guaranteed[t] =
+				mlx4_calc_res_counter_guaranteed(dev, res_alloc, t);
			break;
		default:
			break;
@@ -38 +38 @@
		return -ENOMEM;
 
 	tx_priv->expected_seq = start_offload_tcp_sn;
-	tx_priv->crypto_info = crypto_info;
+	tx_priv->crypto_info = *(struct tls12_crypto_info_aes_gcm_128 *)crypto_info;
 	mlx5e_set_ktls_tx_priv_ctx(tls_ctx, tx_priv);
 
 	/* tc and underlay_qpn values are not in use for tls tis */
···
 	if (err)
 		goto err_thermal_init;
 
-	if (mlxsw_driver->params_register && !reload)
+	if (mlxsw_driver->params_register)
 		devlink_params_publish(devlink);
 
 	return 0;
···
 		return;
 	}
 
-	if (mlxsw_core->driver->params_unregister && !reload)
+	if (mlxsw_core->driver->params_unregister)
 		devlink_params_unpublish(devlink);
 	mlxsw_thermal_fini(mlxsw_core->thermal);
 	mlxsw_hwmon_fini(mlxsw_core->hwmon);
+9 -2
drivers/net/ethernet/mscc/ocelot.c
···
 	port->pvid = vid;
 
 	/* Untagged egress vlan clasification */
-	if (untagged)
+	if (untagged && port->vid != vid) {
+		if (port->vid) {
+			dev_err(ocelot->dev,
+				"Port already has a native VLAN: %d\n",
+				port->vid);
+			return -EBUSY;
+		}
 		port->vid = vid;
+	}
 
 	ocelot_vlan_port_apply(ocelot, port);
···
 static int ocelot_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
 				  u16 vid)
 {
-	return ocelot_vlan_vid_add(dev, vid, false, true);
+	return ocelot_vlan_vid_add(dev, vid, false, false);
 }
 
 static int ocelot_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
···
 
 static LIST_HEAD(bpq_devices);
 
-/*
- * bpqether network devices are paired with ethernet devices below them, so
- * form a special "super class" of normal ethernet devices; split their locks
- * off into a separate class since they always nest.
- */
-static struct lock_class_key bpq_netdev_xmit_lock_key;
-static struct lock_class_key bpq_netdev_addr_lock_key;
-
-static void bpq_set_lockdep_class_one(struct net_device *dev,
-				      struct netdev_queue *txq,
-				      void *_unused)
-{
-	lockdep_set_class(&txq->_xmit_lock, &bpq_netdev_xmit_lock_key);
-}
-
-static void bpq_set_lockdep_class(struct net_device *dev)
-{
-	lockdep_set_class(&dev->addr_list_lock, &bpq_netdev_addr_lock_key);
-	netdev_for_each_tx_queue(dev, bpq_set_lockdep_class_one, NULL);
-}
-
 /* ------------------------------------------------------------------------ */
···
 	err = register_netdevice(ndev);
 	if (err)
 		goto error;
-	bpq_set_lockdep_class(ndev);
 
 	/* List protected by RTNL */
 	list_add_rcu(&bpq->bpq_list, &bpq_devices);
+11 -4
drivers/net/hyperv/netvsc_drv.c
···
 	if (netif_running(ndev)) {
 		ret = rndis_filter_open(nvdev);
 		if (ret)
-			return ret;
+			goto err;
 
 		rdev = nvdev->extension;
 		if (!rdev->link_state)
···
 	}
 
 	return 0;
+
+err:
+	netif_device_detach(ndev);
+
+	rndis_filter_device_remove(hdev, nvdev);
+
+	return ret;
 }
 
 static int netvsc_set_channels(struct net_device *net,
···
 
 	ret = rndis_filter_set_offload_params(ndev, nvdev, &offloads);
 
-	if (ret)
+	if (ret) {
 		features ^= NETIF_F_LRO;
+		ndev->features = features;
+	}
 
 syncvf:
 	if (!vf_netdev)
···
 			NETIF_F_HIGHDMA | NETIF_F_HW_VLAN_CTAG_TX |
 			NETIF_F_HW_VLAN_CTAG_RX;
 	net->vlan_features = net->features;
-
-	netdev_lockdep_set_classes(net);
 
 	/* MTU range: 68 - 1500 or 65521 */
 	net->min_mtu = NETVSC_MTU_MIN;
···
 {
 	struct ppp *ppp;
 
-	netdev_lockdep_set_classes(dev);
-
 	ppp = netdev_priv(dev);
 	/* Let the netdevice take a reference on the ppp file. This ensures
 	 * that ppp_destroy_interface() won't run before the device gets
···
 
 	/* similarly, oper state is irrelevant; set to up to avoid confusion */
 	dev->operstate = IF_OPER_UP;
-	netdev_lockdep_set_classes(dev);
 	return 0;
 
 out_rth:
+50 -12
drivers/net/vxlan.c
···
 		vni = tunnel_id_to_key32(info->key.tun_id);
 		ifindex = 0;
 		dst_cache = &info->dst_cache;
-		if (info->options_len &&
-		    info->key.tun_flags & TUNNEL_VXLAN_OPT)
+		if (info->key.tun_flags & TUNNEL_VXLAN_OPT) {
+			if (info->options_len < sizeof(*md))
+				goto drop;
 			md = ip_tunnel_info_opts(info);
+		}
 		ttl = info->key.ttl;
 		tos = info->key.tos;
 		label = info->key.label;
···
 {
 	struct vxlan_net *vn = net_generic(net, vxlan_net_id);
 	struct vxlan_dev *vxlan = netdev_priv(dev);
+	struct net_device *remote_dev = NULL;
 	struct vxlan_fdb *f = NULL;
 	bool unregister = false;
+	struct vxlan_rdst *dst;
 	int err;
 
+	dst = &vxlan->default_dst;
 	err = vxlan_dev_configure(net, dev, conf, false, extack);
 	if (err)
 		return err;
···
 	dev->ethtool_ops = &vxlan_ethtool_ops;
 
 	/* create an fdb entry for a valid default destination */
-	if (!vxlan_addr_any(&vxlan->default_dst.remote_ip)) {
+	if (!vxlan_addr_any(&dst->remote_ip)) {
 		err = vxlan_fdb_create(vxlan, all_zeros_mac,
-				       &vxlan->default_dst.remote_ip,
+				       &dst->remote_ip,
 				       NUD_REACHABLE | NUD_PERMANENT,
 				       vxlan->cfg.dst_port,
-				       vxlan->default_dst.remote_vni,
-				       vxlan->default_dst.remote_vni,
-				       vxlan->default_dst.remote_ifindex,
+				       dst->remote_vni,
+				       dst->remote_vni,
+				       dst->remote_ifindex,
 				       NTF_SELF, &f);
 		if (err)
 			return err;
···
 		goto errout;
 	unregister = true;
 
+	if (dst->remote_ifindex) {
+		remote_dev = __dev_get_by_index(net, dst->remote_ifindex);
+		if (!remote_dev)
+			goto errout;
+
+		err = netdev_upper_dev_link(remote_dev, dev, extack);
+		if (err)
+			goto errout;
+	}
+
 	err = rtnl_configure_link(dev, NULL);
 	if (err)
-		goto errout;
+		goto unlink;
 
 	if (f) {
-		vxlan_fdb_insert(vxlan, all_zeros_mac,
-				 vxlan->default_dst.remote_vni, f);
+		vxlan_fdb_insert(vxlan, all_zeros_mac, dst->remote_vni, f);
 
 		/* notify default fdb entry */
 		err = vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f),
 				       RTM_NEWNEIGH, true, extack);
 		if (err) {
 			vxlan_fdb_destroy(vxlan, f, false, false);
+			if (remote_dev)
+				netdev_upper_dev_unlink(remote_dev, dev);
 			goto unregister;
 		}
 	}
 
 	list_add(&vxlan->next, &vn->vxlan_list);
+	if (remote_dev)
+		dst->remote_dev = remote_dev;
 	return 0;
-
+unlink:
+	if (remote_dev)
+		netdev_upper_dev_unlink(remote_dev, dev);
 errout:
 	/* unregister_netdevice() destroys the default FDB entry with deletion
 	 * notification. But the addition notification was not sent yet, so
···
 			     struct netlink_ext_ack *extack)
 {
 	struct vxlan_dev *vxlan = netdev_priv(dev);
-	struct vxlan_rdst *dst = &vxlan->default_dst;
 	struct net_device *lowerdev;
 	struct vxlan_config conf;
+	struct vxlan_rdst *dst;
 	int err;
 
+	dst = &vxlan->default_dst;
 	err = vxlan_nl2conf(tb, data, dev, &conf, true, extack);
 	if (err)
 		return err;
 
 	err = vxlan_config_validate(vxlan->net, &conf, &lowerdev,
 				    vxlan, extack);
+	if (err)
+		return err;
+
+	if (dst->remote_dev == lowerdev)
+		lowerdev = NULL;
+
+	err = netdev_adjacent_change_prepare(dst->remote_dev, lowerdev, dev,
+					     extack);
 	if (err)
 		return err;
···
 					   NTF_SELF, true, extack);
 		if (err) {
 			spin_unlock_bh(&vxlan->hash_lock[hash_index]);
+			netdev_adjacent_change_abort(dst->remote_dev,
+						     lowerdev, dev);
 			return err;
 		}
 	}
···
 	if (conf.age_interval != vxlan->cfg.age_interval)
 		mod_timer(&vxlan->age_timer, jiffies);
 
+	netdev_adjacent_change_commit(dst->remote_dev, lowerdev, dev);
+	if (lowerdev && lowerdev != dst->remote_dev) {
+		dst->remote_dev = lowerdev;
+		netdev_update_lockdep_key(lowerdev);
+	}
 	vxlan_config_apply(dev, &conf, lowerdev, vxlan->net, true);
 	return 0;
 }
···
 
 	list_del(&vxlan->next);
 	unregister_netdevice_queue(dev, head);
+	if (vxlan->default_dst.remote_dev)
+		netdev_upper_dev_unlink(vxlan->default_dst.remote_dev, dev);
 }
 
 static size_t vxlan_get_size(const struct net_device *dev)
···
 *	STA_CONTEXT_DOT11AX_API_S
 * @IWL_UCODE_TLV_CAPA_SAR_TABLE_VER: This ucode supports different sar
 *	version tables.
+ * @IWL_UCODE_TLV_API_REDUCED_SCAN_CONFIG: This ucode supports v3 of
+ *	SCAN_CONFIG_DB_CMD_API_S.
 *
 * @NUM_IWL_UCODE_TLV_API: number of bits used
 */
···
 	IWL_UCODE_TLV_API_WOWLAN_TCP_SYN_WAKE	= (__force iwl_ucode_tlv_api_t)53,
 	IWL_UCODE_TLV_API_FTM_RTT_ACCURACY	= (__force iwl_ucode_tlv_api_t)54,
 	IWL_UCODE_TLV_API_SAR_TABLE_VER		= (__force iwl_ucode_tlv_api_t)55,
+	IWL_UCODE_TLV_API_REDUCED_SCAN_CONFIG	= (__force iwl_ucode_tlv_api_t)56,
 	IWL_UCODE_TLV_API_ADWELL_HB_DEF_N_AP	= (__force iwl_ucode_tlv_api_t)57,
 	IWL_UCODE_TLV_API_SCAN_EXT_CHAN_VER	= (__force iwl_ucode_tlv_api_t)58,
+1
drivers/net/wireless/intel/iwlwifi/iwl-csr.h
···
 *        Indicates MAC is entering a power-saving sleep power-down.
 *        Not a good time to access device-internal resources.
 */
+#define CSR_GP_CNTRL_REG_FLAG_INIT_DONE              (0x00000004)
 #define CSR_GP_CNTRL_REG_FLAG_GOING_TO_SLEEP         (0x00000010)
 #define CSR_GP_CNTRL_REG_FLAG_XTAL_ON                (0x00000400)
···
 			    mvm_sta->sta_id, i);
 		txq_id = iwl_mvm_tvqm_enable_txq(mvm, mvm_sta->sta_id,
 						 i, wdg);
+		/*
+		 * on failures, just set it to IWL_MVM_INVALID_QUEUE
+		 * to try again later, we have no other good way of
+		 * failing here
+		 */
+		if (txq_id < 0)
+			txq_id = IWL_MVM_INVALID_QUEUE;
 		tid_data->txq_id = txq_id;
 
 		/*
···
 	sta->sta_id = IWL_MVM_INVALID_STA;
 }
 
-static void iwl_mvm_enable_aux_snif_queue(struct iwl_mvm *mvm, u16 *queue,
+static void iwl_mvm_enable_aux_snif_queue(struct iwl_mvm *mvm, u16 queue,
 					  u8 sta_id, u8 fifo)
 {
 	unsigned int wdg_timeout = iwlmvm_mod_params.tfd_q_hang_detect ?
 		mvm->trans->trans_cfg->base_params->wd_timeout :
 		IWL_WATCHDOG_DISABLED;
+	struct iwl_trans_txq_scd_cfg cfg = {
+		.fifo = fifo,
+		.sta_id = sta_id,
+		.tid = IWL_MAX_TID_COUNT,
+		.aggregate = false,
+		.frame_limit = IWL_FRAME_LIMIT,
+	};
 
-	if (iwl_mvm_has_new_tx_api(mvm)) {
-		int tvqm_queue =
-			iwl_mvm_tvqm_enable_txq(mvm, sta_id,
-						IWL_MAX_TID_COUNT,
-						wdg_timeout);
-		*queue = tvqm_queue;
-	} else {
-		struct iwl_trans_txq_scd_cfg cfg = {
-			.fifo = fifo,
-			.sta_id = sta_id,
-			.tid = IWL_MAX_TID_COUNT,
-			.aggregate = false,
-			.frame_limit = IWL_FRAME_LIMIT,
-		};
+	WARN_ON(iwl_mvm_has_new_tx_api(mvm));
 
-		iwl_mvm_enable_txq(mvm, NULL, *queue, 0, &cfg, wdg_timeout);
+	iwl_mvm_enable_txq(mvm, NULL, queue, 0, &cfg, wdg_timeout);
+}
+
+static int iwl_mvm_enable_aux_snif_queue_tvqm(struct iwl_mvm *mvm, u8 sta_id)
+{
+	unsigned int wdg_timeout = iwlmvm_mod_params.tfd_q_hang_detect ?
+		mvm->trans->trans_cfg->base_params->wd_timeout :
+		IWL_WATCHDOG_DISABLED;
+
+	WARN_ON(!iwl_mvm_has_new_tx_api(mvm));
+
+	return iwl_mvm_tvqm_enable_txq(mvm, sta_id, IWL_MAX_TID_COUNT,
+				       wdg_timeout);
+}
+
+static int iwl_mvm_add_int_sta_with_queue(struct iwl_mvm *mvm, int macidx,
+					  int maccolor,
+					  struct iwl_mvm_int_sta *sta,
+					  u16 *queue, int fifo)
+{
+	int ret;
+
+	/* Map queue to fifo - needs to happen before adding station */
+	if (!iwl_mvm_has_new_tx_api(mvm))
+		iwl_mvm_enable_aux_snif_queue(mvm, *queue, sta->sta_id, fifo);
+
+	ret = iwl_mvm_add_int_sta_common(mvm, sta, NULL, macidx, maccolor);
+	if (ret) {
+		if (!iwl_mvm_has_new_tx_api(mvm))
+			iwl_mvm_disable_txq(mvm, NULL, *queue,
+					    IWL_MAX_TID_COUNT, 0);
+		return ret;
 	}
+
+	/*
+	 * For 22000 firmware and on we cannot add queue to a station unknown
+	 * to firmware so enable queue here - after the station was added
+	 */
+	if (iwl_mvm_has_new_tx_api(mvm)) {
+		int txq;
+
+		txq = iwl_mvm_enable_aux_snif_queue_tvqm(mvm, sta->sta_id);
+		if (txq < 0) {
+			iwl_mvm_rm_sta_common(mvm, sta->sta_id);
+			return txq;
+		}
+
+		*queue = txq;
+	}
+
+	return 0;
 }
 
 int iwl_mvm_add_aux_sta(struct iwl_mvm *mvm)
···
 	if (ret)
 		return ret;
 
-	/* Map Aux queue to fifo - needs to happen before adding Aux station */
-	if (!iwl_mvm_has_new_tx_api(mvm))
-		iwl_mvm_enable_aux_snif_queue(mvm, &mvm->aux_queue,
-					      mvm->aux_sta.sta_id,
-					      IWL_MVM_TX_FIFO_MCAST);
-
-	ret = iwl_mvm_add_int_sta_common(mvm, &mvm->aux_sta, NULL,
-					 MAC_INDEX_AUX, 0);
+	ret = iwl_mvm_add_int_sta_with_queue(mvm, MAC_INDEX_AUX, 0,
+					     &mvm->aux_sta, &mvm->aux_queue,
+					     IWL_MVM_TX_FIFO_MCAST);
 	if (ret) {
 		iwl_mvm_dealloc_int_sta(mvm, &mvm->aux_sta);
 		return ret;
 	}
-
-	/*
-	 * For 22000 firmware and on we cannot add queue to a station unknown
-	 * to firmware so enable queue here - after the station was added
-	 */
-	if (iwl_mvm_has_new_tx_api(mvm))
-		iwl_mvm_enable_aux_snif_queue(mvm, &mvm->aux_queue,
-					      mvm->aux_sta.sta_id,
-					      IWL_MVM_TX_FIFO_MCAST);
 
 	return 0;
 }
···
 int iwl_mvm_add_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
 {
 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
-	int ret;
 
 	lockdep_assert_held(&mvm->mutex);
 
-	/* Map snif queue to fifo - must happen before adding snif station */
-	if (!iwl_mvm_has_new_tx_api(mvm))
-		iwl_mvm_enable_aux_snif_queue(mvm, &mvm->snif_queue,
-					      mvm->snif_sta.sta_id,
-					      IWL_MVM_TX_FIFO_BE);
-
-	ret = iwl_mvm_add_int_sta_common(mvm, &mvm->snif_sta, vif->addr,
-					 mvmvif->id, 0);
-	if (ret)
-		return ret;
-
-	/*
-	 * For 22000 firmware and on we cannot add queue to a station unknown
-	 * to firmware so enable queue here - after the station was added
-	 */
-	if (iwl_mvm_has_new_tx_api(mvm))
-		iwl_mvm_enable_aux_snif_queue(mvm, &mvm->snif_queue,
-					      mvm->snif_sta.sta_id,
-					      IWL_MVM_TX_FIFO_BE);
-
-	return 0;
+	return iwl_mvm_add_int_sta_with_queue(mvm, mvmvif->id, mvmvif->color,
+					      &mvm->snif_sta, &mvm->snif_queue,
+					      IWL_MVM_TX_FIFO_BE);
 }
 
 int iwl_mvm_rm_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
···
 		queue = iwl_mvm_tvqm_enable_txq(mvm, bsta->sta_id,
 						IWL_MAX_TID_COUNT,
 						wdg_timeout);
+		if (queue < 0) {
+			iwl_mvm_rm_sta_common(mvm, bsta->sta_id);
+			return queue;
+		}
 
 		if (vif->type == NL80211_IFTYPE_AP ||
 		    vif->type == NL80211_IFTYPE_ADHOC)
···
 	}
 	ret = iwl_mvm_add_int_sta_common(mvm, msta, maddr,
 					 mvmvif->id, mvmvif->color);
-	if (ret) {
-		iwl_mvm_dealloc_int_sta(mvm, msta);
-		return ret;
-	}
+	if (ret)
+		goto err;
 
 	/*
 	 * Enable cab queue after the ADD_STA command is sent.
···
 		int queue = iwl_mvm_tvqm_enable_txq(mvm, msta->sta_id,
 						    0,
 						    timeout);
+		if (queue < 0) {
+			ret = queue;
+			goto err;
+		}
 		mvmvif->cab_queue = queue;
 	} else if (!fw_has_api(&mvm->fw->ucode_capa,
 			       IWL_UCODE_TLV_API_STA_TYPE))
···
 				   timeout);
 
 	return 0;
+err:
+	iwl_mvm_dealloc_int_sta(mvm, msta);
+	return ret;
 }
 
 static int __iwl_mvm_remove_sta_key(struct iwl_mvm *mvm, u8 sta_id,
···
 #include "internal.h"
 #include "fw/dbg.h"
 
+static int iwl_pcie_gen2_force_power_gating(struct iwl_trans *trans)
+{
+	iwl_set_bits_prph(trans, HPM_HIPM_GEN_CFG,
+			  HPM_HIPM_GEN_CFG_CR_FORCE_ACTIVE);
+	udelay(20);
+	iwl_set_bits_prph(trans, HPM_HIPM_GEN_CFG,
+			  HPM_HIPM_GEN_CFG_CR_PG_EN |
+			  HPM_HIPM_GEN_CFG_CR_SLP_EN);
+	udelay(20);
+	iwl_clear_bits_prph(trans, HPM_HIPM_GEN_CFG,
+			    HPM_HIPM_GEN_CFG_CR_FORCE_ACTIVE);
+
+	iwl_trans_sw_reset(trans);
+	iwl_clear_bit(trans, CSR_GP_CNTRL, CSR_GP_CNTRL_REG_FLAG_INIT_DONE);
+
+	return 0;
+}
+
 /*
 * Start up NIC's basic functionality after it has been reset
 * (e.g. after platform boot, or shutdown via iwl_pcie_apm_stop())
···
 		    CSR_HW_IF_CONFIG_REG_BIT_HAP_WAKE_L1A);
 
 	iwl_pcie_apm_config(trans);
+
+	if (trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000 &&
+	    trans->cfg->integrated) {
+		ret = iwl_pcie_gen2_force_power_gating(trans);
+		if (ret)
+			return ret;
+	}
 
 	ret = iwl_finish_nic_init(trans, trans->trans_cfg);
 	if (ret)
-25
drivers/net/wireless/intersil/hostap/hostap_hw.c
···
 	}
 }
 
-
-/*
- * HostAP uses two layers of net devices, where the inner
- * layer gets called all the time from the outer layer.
- * This is a natural nesting, which needs a split lock type.
- */
-static struct lock_class_key hostap_netdev_xmit_lock_key;
-static struct lock_class_key hostap_netdev_addr_lock_key;
-
-static void prism2_set_lockdep_class_one(struct net_device *dev,
-					 struct netdev_queue *txq,
-					 void *_unused)
-{
-	lockdep_set_class(&txq->_xmit_lock,
-			  &hostap_netdev_xmit_lock_key);
-}
-
-static void prism2_set_lockdep_class(struct net_device *dev)
-{
-	lockdep_set_class(&dev->addr_list_lock,
-			  &hostap_netdev_addr_lock_key);
-	netdev_for_each_tx_queue(dev, prism2_set_lockdep_class_one, NULL);
-}
-
 static struct net_device *
 prism2_init_local_data(struct prism2_helper_functions *funcs, int card_idx,
 		       struct device *sdev)
···
 	if (ret >= 0)
 		ret = register_netdevice(dev);
 
-	prism2_set_lockdep_class(dev);
 	rtnl_unlock();
 	if (ret < 0) {
 		printk(KERN_WARNING "%s: register netdevice failed!\n",
···
+// SPDX-License-Identifier: ISC
+/*
+ * Copyright (C) 2019 Lorenzo Bianconi <lorenzo@kernel.org>
+ */
+
+#include <linux/pci.h>
+
+void mt76_pci_disable_aspm(struct pci_dev *pdev)
+{
+	struct pci_dev *parent = pdev->bus->self;
+	u16 aspm_conf, parent_aspm_conf = 0;
+
+	pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &aspm_conf);
+	aspm_conf &= PCI_EXP_LNKCTL_ASPMC;
+	if (parent) {
+		pcie_capability_read_word(parent, PCI_EXP_LNKCTL,
+					  &parent_aspm_conf);
+		parent_aspm_conf &= PCI_EXP_LNKCTL_ASPMC;
+	}
+
+	if (!aspm_conf && (!parent || !parent_aspm_conf)) {
+		/* aspm already disabled */
+		return;
+	}
+
+	dev_info(&pdev->dev, "disabling ASPM %s %s\n",
+		 (aspm_conf & PCI_EXP_LNKCTL_ASPM_L0S) ? "L0s" : "",
+		 (aspm_conf & PCI_EXP_LNKCTL_ASPM_L1) ? "L1" : "");
+
+	if (IS_ENABLED(CONFIG_PCIEASPM)) {
+		int err;
+
+		err = pci_disable_link_state(pdev, aspm_conf);
+		if (!err)
+			return;
+	}
+
+	/* both device and parent should have the same ASPM setting.
+	 * disable ASPM in downstream component first and then upstream.
+	 */
+	pcie_capability_clear_word(pdev, PCI_EXP_LNKCTL, aspm_conf);
+	if (parent)
+		pcie_capability_clear_word(parent, PCI_EXP_LNKCTL,
+					   aspm_conf);
+}
+EXPORT_SYMBOL_GPL(mt76_pci_disable_aspm);
+2 -1
drivers/net/wireless/realtek/rtlwifi/pci.c
···
 		hdr = rtl_get_hdr(skb);
 		fc = rtl_get_fc(skb);
 
-		if (!stats.crc && !stats.hwerror) {
+		if (!stats.crc && !stats.hwerror && (skb->len > FCS_LEN)) {
 			memcpy(IEEE80211_SKB_RXCB(skb), &rx_status,
 			       sizeof(rx_status));
···
 				_rtl_pci_rx_to_mac80211(hw, skb, rx_status);
 			}
 		} else {
+			/* drop packets with errors or those too short */
 			dev_kfree_skb_any(skb);
 		}
 new_trx_end:
···
 		if (err)
 			return err;
 
-		/*
-		 * .apply might have to round some values in *state, if possible
-		 * read the actually implemented value back.
-		 */
-		if (chip->ops->get_state)
-			chip->ops->get_state(chip, pwm, &pwm->state);
-		else
-			pwm->state = *state;
+		pwm->state = *state;
 	} else {
 		/*
 		 * FIXME: restore the initial state in case of error.
···
 #include <linux/platform_device.h>
 #include "core.h"
 #include "drd.h"
+#include "host-export.h"
 
 static int __cdns3_host_init(struct cdns3 *cdns)
 {
+5
drivers/usb/core/config.c
···
 
 	/* Validate the wMaxPacketSize field */
 	maxp = usb_endpoint_maxp(&endpoint->desc);
+	if (maxp == 0) {
+		dev_warn(ddev, "config %d interface %d altsetting %d endpoint 0x%X has wMaxPacketSize 0, skipping\n",
+			 cfgno, inum, asnum, d->bEndpointAddress);
+		goto skip_to_next_endpoint_or_interface_descriptor;
+	}
 
 	/* Find the highest legal maxpacket size for this endpoint */
 	i = 0;		/* additional transactions per microframe */
+1
drivers/usb/dwc3/Kconfig
···
 	depends on ARCH_MESON || COMPILE_TEST
 	default USB_DWC3
 	select USB_ROLE_SWITCH
+	select REGMAP_MMIO
 	help
 	  Support USB2/3 functionality in Amlogic G12A platforms.
 	  Say 'Y' or 'M' if you have one such device.
+1 -2
drivers/usb/dwc3/core.c
···
 
 	reg = dwc3_readl(dwc->regs, DWC3_GFLADJ);
 	dft = reg & DWC3_GFLADJ_30MHZ_MASK;
-	if (!dev_WARN_ONCE(dwc->dev, dft == dwc->fladj,
-	    "request value same as default, ignoring\n")) {
+	if (dft != dwc->fladj) {
 		reg &= ~DWC3_GFLADJ_30MHZ_MASK;
 		reg |= DWC3_GFLADJ_30MHZ_SDBND_SEL | dwc->fladj;
 		dwc3_writel(dwc->regs, DWC3_GFLADJ, reg);
+1 -1
drivers/usb/dwc3/dwc3-pci.c
···
 
 	ret = platform_device_add_properties(dwc->dwc3, p);
 	if (ret < 0)
-		return ret;
+		goto err;
 
 	ret = dwc3_pci_quirks(dwc);
 	if (ret)
···
 	if (ep->enabled)
 		goto out;
 
+	/* UDC drivers can't handle endpoints with maxpacket size 0 */
+	if (usb_endpoint_maxp(ep->desc) == 0) {
+		/*
+		 * We should log an error message here, but we can't call
+		 * dev_err() because there's no way to find the gadget
+		 * given only ep.
+		 */
+		ret = -EINVAL;
+		goto out;
+	}
+
 	ret = ep->ops->enable(ep, ep->desc);
 	if (ret)
 		goto out;
+1 -1
drivers/usb/gadget/udc/fsl_udc_core.c
···
 	dma_pool_destroy(udc_controller->td_pool);
 	free_irq(udc_controller->irq, udc_controller);
 	iounmap(dr_regs);
-	if (pdata->operating_mode == FSL_USB2_DR_DEVICE)
+	if (res && (pdata->operating_mode == FSL_USB2_DR_DEVICE))
 		release_mem_region(res->start, resource_size(res));
 
 	/* free udc --wait for the release() finished */
···
 		if (xhci_urb_suitable_for_idt(urb)) {
 			memcpy(&send_addr, urb->transfer_buffer,
 			       trb_buff_len);
+			le64_to_cpus(&send_addr);
 			field |= TRB_IDT;
 		}
 	}
···
 	if (xhci_urb_suitable_for_idt(urb)) {
 		memcpy(&addr, urb->transfer_buffer,
 		       urb->transfer_buffer_length);
+		le64_to_cpus(&addr);
 		field |= TRB_IDT;
 	} else {
 		addr = (u64) urb->transfer_dma;
+45 -9
drivers/usb/host/xhci.c
···
 	}
 }
 
+static void xhci_endpoint_disable(struct usb_hcd *hcd,
+				  struct usb_host_endpoint *host_ep)
+{
+	struct xhci_hcd *xhci;
+	struct xhci_virt_device *vdev;
+	struct xhci_virt_ep *ep;
+	struct usb_device *udev;
+	unsigned long flags;
+	unsigned int ep_index;
+
+	xhci = hcd_to_xhci(hcd);
+rescan:
+	spin_lock_irqsave(&xhci->lock, flags);
+
+	udev = (struct usb_device *)host_ep->hcpriv;
+	if (!udev || !udev->slot_id)
+		goto done;
+
+	vdev = xhci->devs[udev->slot_id];
+	if (!vdev)
+		goto done;
+
+	ep_index = xhci_get_endpoint_index(&host_ep->desc);
+	ep = &vdev->eps[ep_index];
+	if (!ep)
+		goto done;
+
+	/* wait for hub_tt_work to finish clearing hub TT */
+	if (ep->ep_state & EP_CLEARING_TT) {
+		spin_unlock_irqrestore(&xhci->lock, flags);
+		schedule_timeout_uninterruptible(1);
+		goto rescan;
+	}
+
+	if (ep->ep_state)
+		xhci_dbg(xhci, "endpoint disable with ep_state 0x%x\n",
+			 ep->ep_state);
+done:
+	host_ep->hcpriv = NULL;
+	spin_unlock_irqrestore(&xhci->lock, flags);
+}
+
 /*
 * Called after usb core issues a clear halt control message.
 * The host side of the halt should already be cleared by a reset endpoint
···
 	unsigned int ep_index;
 	unsigned long flags;
 
-	/*
-	 * udev might be NULL if tt buffer is cleared during a failed device
-	 * enumeration due to a halted control endpoint. Usb core might
-	 * have allocated a new udev for the next enumeration attempt.
-	 */
-
 	xhci = hcd_to_xhci(hcd);
+
+	spin_lock_irqsave(&xhci->lock, flags);
 	udev = (struct usb_device *)ep->hcpriv;
-	if (!udev)
-		return;
 	slot_id = udev->slot_id;
 	ep_index = xhci_get_endpoint_index(&ep->desc);
 
-	spin_lock_irqsave(&xhci->lock, flags);
 	xhci->devs[slot_id]->eps[ep_index].ep_state &= ~EP_CLEARING_TT;
 	xhci_ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
 	spin_unlock_irqrestore(&xhci->lock, flags);
···
 	.free_streams =		xhci_free_streams,
 	.add_endpoint =		xhci_add_endpoint,
 	.drop_endpoint =	xhci_drop_endpoint,
+	.endpoint_disable =	xhci_endpoint_disable,
 	.endpoint_reset =	xhci_endpoint_reset,
 	.check_bandwidth =	xhci_check_bandwidth,
 	.reset_bandwidth =	xhci_reset_bandwidth,
···
 
 struct whiteheat_port_settings {
 	__u8	port;		/* port number (1 to N) */
-	__u32	baud;		/* any value 7 - 460800, firmware calculates
+	__le32	baud;		/* any value 7 - 460800, firmware calculates
 				   best fit; arrives little endian */
 	__u8	bits;		/* 5, 6, 7, or 8 */
 	__u8	stop;		/* 1 or 2, default 1 (2 = 1.5 if bits = 5) */
-10
drivers/usb/storage/scsiglue.c
···
 static int slave_alloc (struct scsi_device *sdev)
 {
 	struct us_data *us = host_to_us(sdev->host);
-	int maxp;
 
 	/*
 	 * Set the INQUIRY transfer length to 36.  We don't use any of
···
 	 * less than 36 bytes.
 	 */
 	sdev->inquiry_len = 36;
-
-	/*
-	 * USB has unusual scatter-gather requirements: the length of each
-	 * scatterlist element except the last must be divisible by the
-	 * Bulk maxpacket value.  Fortunately this value is always a
-	 * power of 2.  Inform the block layer about this requirement.
-	 */
-	maxp = usb_maxpacket(us->pusb_dev, us->recv_bulk_pipe, 0);
-	blk_queue_virt_boundary(sdev->request_queue, maxp - 1);
 
 	/*
 	 * Some host controllers may have alignment requirements.
-20
drivers/usb/storage/uas.c
···
 {
 	struct uas_dev_info *devinfo =
 		(struct uas_dev_info *)sdev->host->hostdata;
-	int maxp;
 
 	sdev->hostdata = devinfo;
-
-	/*
-	 * We have two requirements here. We must satisfy the requirements
-	 * of the physical HC and the demands of the protocol, as we
-	 * definitely want no additional memory allocation in this path
-	 * ruling out using bounce buffers.
-	 *
-	 * For a transmission on USB to continue we must never send
-	 * a package that is smaller than maxpacket. Hence the length of each
-	 * scatterlist element except the last must be divisible by the
-	 * Bulk maxpacket value.
-	 * If the HC does not ensure that through SG,
-	 * the upper layer must do that. We must assume nothing
-	 * about the capabilities off the HC, so we use the most
-	 * pessimistic requirement.
-	 */
-
-	maxp = usb_maxpacket(devinfo->udev, devinfo->data_in_pipe, 0);
-	blk_queue_virt_boundary(sdev->request_queue, maxp - 1);
 
 	/*
 	 * The protocol has no requirements on alignment in the strict sense.
+3
drivers/usb/usbip/vhci_tx.c
···
 		}
 
 		kfree(iov);
+		/* This is only for isochronous case */
 		kfree(iso_buffer);
+		iso_buffer = NULL;
+
 		usbip_dbg_vhci_tx("send txdata\n");
 
 		total_size += txsize;
+7 -1
drivers/vhost/vringh.c
···
 	return 0;
 }
 
+static inline int kern_xfer(void *dst, void *src, size_t len)
+{
+	memcpy(dst, src, len);
+	return 0;
+}
+
 /**
 * vringh_init_kern - initialize a vringh for a kernelspace vring.
 * @vrh: the vringh to initialize.
···
 ssize_t vringh_iov_push_kern(struct vringh_kiov *wiov,
 			     const void *src, size_t len)
 {
-	return vringh_iov_xfer(wiov, (void *)src, len, xfer_kern);
+	return vringh_iov_xfer(wiov, (void *)src, len, kern_xfer);
 }
 EXPORT_SYMBOL(vringh_iov_push_kern);
+3 -4
drivers/virtio/virtio_ring.c
···
 		 * counter first before updating event flags.
 		 */
 		virtio_wmb(vq->weak_barriers);
-	} else {
-		used_idx = vq->last_used_idx;
-		wrap_counter = vq->packed.used_wrap_counter;
 	}
 
 	if (vq->packed.event_flags_shadow == VRING_PACKED_EVENT_FLAG_DISABLE) {
···
 	 */
 	virtio_mb(vq->weak_barriers);
 
-	if (is_used_desc_packed(vq, used_idx, wrap_counter)) {
+	if (is_used_desc_packed(vq,
+				vq->last_used_idx,
+				vq->packed.used_wrap_counter)) {
 		END_USE(vq);
 		return false;
 	}
···
 	else
 		fuse_invalidate_entry_cache(entry);
 
-	fuse_advise_use_readdirplus(dir);
+	if (inode)
+		fuse_advise_use_readdirplus(dir);
 	return newent;
 
 out_iput:
···
 		if (WARN_ON(!S_ISREG(inode->i_mode)))
 			return -EIO;
 		is_truncate = true;
+	}
+
+	/* Flush dirty data/metadata before non-truncate SETATTR */
+	if (is_wb && S_ISREG(inode->i_mode) &&
+	    attr->ia_valid &
+			(ATTR_MODE | ATTR_UID | ATTR_GID | ATTR_MTIME_SET |
+			 ATTR_TIMES_SET)) {
+		err = write_inode_now(inode, true);
+		if (err)
+			return err;
+
+		fuse_set_nowrite(inode);
+		fuse_release_nowrite(inode);
 	}
 
 	if (is_truncate) {
+8 -6
fs/fuse/file.c
···
 {
 	struct fuse_conn *fc = get_fuse_conn(inode);
 	int err;
-	bool lock_inode = (file->f_flags & O_TRUNC) &&
+	bool is_wb_truncate = (file->f_flags & O_TRUNC) &&
 			  fc->atomic_o_trunc &&
 			  fc->writeback_cache;
···
 	if (err)
 		return err;
 
-	if (lock_inode)
+	if (is_wb_truncate) {
 		inode_lock(inode);
+		fuse_set_nowrite(inode);
+	}
 
 	err = fuse_do_open(fc, get_node_id(inode), file, isdir);
 
 	if (!err)
 		fuse_finish_open(inode, file);
 
-	if (lock_inode)
+	if (is_wb_truncate) {
+		fuse_release_nowrite(inode);
 		inode_unlock(inode);
+	}
 
 	return err;
 }
···
 
 	if (!data->ff) {
 		err = -EIO;
-		data->ff = fuse_write_file_get(fc, get_fuse_inode(inode));
+		data->ff = fuse_write_file_get(fc, fi);
 		if (!data->ff)
 			goto out_unlock;
 	}
···
 	 * under writeback, so we can release the page lock.
 	 */
 	if (data->wpa == NULL) {
-		struct fuse_inode *fi = get_fuse_inode(inode);
-
 		err = -ENOMEM;
 		wpa = fuse_writepage_args_alloc();
 		if (!wpa) {
+4
fs/fuse/fuse_i.h
···479479 bool destroy:1;480480 bool no_control:1;481481 bool no_force_umount:1;482482+ bool no_mount_options:1;482483 unsigned int max_read;483484 unsigned int blksize;484485 const char *subtype;···713712714713 /** Do not allow MNT_FORCE umount */715714 unsigned int no_force_umount:1;715715+716716+ /* Do not show mount options */717717+ unsigned int no_mount_options:1;716718717719 /** The number of requests waiting for completion */718720 atomic_t num_waiting;
···204204 do { if (0) printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__); } while (0)205205#define dynamic_dev_dbg(dev, fmt, ...) \206206 do { if (0) dev_printk(KERN_DEBUG, dev, fmt, ##__VA_ARGS__); } while (0)207207+#define dynamic_hex_dump(prefix_str, prefix_type, rowsize, \208208+ groupsize, buf, len, ascii) \209209+ do { if (0) \210210+ print_hex_dump(KERN_DEBUG, prefix_str, prefix_type, \211211+ rowsize, groupsize, buf, len, ascii); \212212+ } while (0)207213#endif208214209215#endif
+16-2
include/linux/efi.h
···15791579efi_status_t efi_get_memory_map(efi_system_table_t *sys_table_arg,15801580 struct efi_boot_memmap *map);1581158115821582+efi_status_t efi_low_alloc_above(efi_system_table_t *sys_table_arg,15831583+ unsigned long size, unsigned long align,15841584+ unsigned long *addr, unsigned long min);15851585+15861586+static inline15821587efi_status_t efi_low_alloc(efi_system_table_t *sys_table_arg,15831588 unsigned long size, unsigned long align,15841584- unsigned long *addr);15891589+ unsigned long *addr)15901590+{15911591+ /*15921592+ * Don't allocate at 0x0. It will confuse code that15931593+ * checks pointers against NULL. Skip the first 815941594+ * bytes so we start at a nice even number.15951595+ */15961596+ return efi_low_alloc_above(sys_table_arg, size, align, addr, 0x8);15971597+}1585159815861599efi_status_t efi_high_alloc(efi_system_table_t *sys_table_arg,15871600 unsigned long size, unsigned long align,···16051592 unsigned long image_size,16061593 unsigned long alloc_size,16071594 unsigned long preferred_addr,16081608- unsigned long alignment);15951595+ unsigned long alignment,15961596+ unsigned long min_addr);1609159716101598efi_status_t handle_cmdline_files(efi_system_table_t *sys_table_arg,16111599 efi_loaded_image_t *image,
···325325 return !!(gfp_flags & __GFP_DIRECT_RECLAIM);326326}327327328328+/**329329+ * gfpflags_normal_context - is gfp_flags a normal sleepable context?330330+ * @gfp_flags: gfp_flags to test331331+ *332332+ * Test whether @gfp_flags indicates that the allocation is from the333333+ * %current context and allowed to sleep.334334+ *335335+ * An allocation being allowed to block doesn't mean it owns the %current336336+ * context. When direct reclaim path tries to allocate memory, the337337+ * allocation context is nested inside whatever %current was doing at the338338+ * time of the original allocation. The nested allocation may be allowed339339+ * to block but modifying anything %current owns can corrupt the outer340340+ * context's expectations.341341+ *342342+ * %true result from this function indicates that the allocation context343343+ * can sleep and use anything that's associated with %current.344344+ */345345+static inline bool gfpflags_normal_context(const gfp_t gfp_flags)346346+{347347+ return (gfp_flags & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC)) ==348348+ __GFP_DIRECT_RECLAIM;349349+}350350+328351#ifdef CONFIG_HIGHMEM329352#define OPT_ZONE_HIGHMEM ZONE_HIGHMEM330353#else
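The new helper's flag test can be exercised in isolation. A minimal userspace sketch, using illustrative flag values rather than the kernel's actual bit layout (the real definitions live in include/linux/gfp.h):

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int gfp_t;

/* Illustrative values only; not the kernel's actual bit positions. */
#define __GFP_DIRECT_RECLAIM 0x400u
#define __GFP_MEMALLOC       0x20000u

/* A "normal" sleepable context may block (direct reclaim allowed) and
 * is not itself an allocation nested inside the reclaim path, which is
 * what __GFP_MEMALLOC signals. */
static inline bool gfpflags_normal_context(const gfp_t gfp_flags)
{
	return (gfp_flags & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC)) ==
		__GFP_DIRECT_RECLAIM;
}
```

Note that a plain "may this allocation block?" test would answer true for both of the first two cases below; the two-flag mask is what distinguishes the nested reclaim context.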
-1
include/linux/if_macvlan.h
···2929 netdev_features_t set_features;3030 enum macvlan_mode mode;3131 u16 flags;3232- int nest_level;3332 unsigned int macaddr_count;3433#ifdef CONFIG_NET_POLL_CONTROLLER3534 struct netpoll *netpoll;
···925925struct devlink;926926struct tlsdev_ops;927927928928+928929/*929930 * This structure defines the management hooks for network devices.930931 * The following hooks can be defined; unless noted otherwise, they are···14221421 void (*ndo_dfwd_del_station)(struct net_device *pdev,14231422 void *priv);1424142314251425- int (*ndo_get_lock_subclass)(struct net_device *dev);14261424 int (*ndo_set_tx_maxrate)(struct net_device *dev,14271425 int queue_index,14281426 u32 maxrate);···16491649 * @perm_addr: Permanent hw address16501650 * @addr_assign_type: Hw address assignment type16511651 * @addr_len: Hardware address length16521652+ * @upper_level: Maximum depth level of upper devices.16531653+ * @lower_level: Maximum depth level of lower devices.16521654 * @neigh_priv_len: Used in neigh_alloc()16531655 * @dev_id: Used to differentiate devices that share16541656 * the same link layer address···17601758 * @phydev: Physical device may attach itself17611759 * for hardware timestamping17621760 * @sfp_bus: attached &struct sfp_bus structure.17631763- *17641764- * @qdisc_tx_busylock: lockdep class annotating Qdisc->busylock spinlock17651765- * @qdisc_running_key: lockdep class annotating Qdisc->running seqcount17611761+ * @qdisc_tx_busylock_key: lockdep class annotating Qdisc->busylock17621762+ spinlock17631763+ * @qdisc_running_key: lockdep class annotating Qdisc->running seqcount17641764+ * @qdisc_xmit_lock_key: lockdep class annotating17651765+ * netdev_queue->_xmit_lock spinlock17661766+ * @addr_list_lock_key: lockdep class annotating17671767+ * net_device->addr_list_lock spinlock17661768 *17671769 * @proto_down: protocol port state information can be sent to the17681770 * switch driver and used to set the phys state of the···18811875 unsigned char perm_addr[MAX_ADDR_LEN];18821876 unsigned char addr_assign_type;18831877 unsigned char addr_len;18781878+ unsigned char upper_level;18791879+ unsigned char lower_level;18841880 unsigned short neigh_priv_len;18851881 unsigned 
short dev_id;18861882 unsigned short dev_port;···20532045#endif20542046 struct phy_device *phydev;20552047 struct sfp_bus *sfp_bus;20562056- struct lock_class_key *qdisc_tx_busylock;20572057- struct lock_class_key *qdisc_running_key;20482048+ struct lock_class_key qdisc_tx_busylock_key;20492049+ struct lock_class_key qdisc_running_key;20502050+ struct lock_class_key qdisc_xmit_lock_key;20512051+ struct lock_class_key addr_list_lock_key;20582052 bool proto_down;20592053 unsigned wol_enabled:1;20602054};···2132212221332123 for (i = 0; i < dev->num_tx_queues; i++)21342124 f(dev, &dev->_tx[i], arg);21352135-}21362136-21372137-#define netdev_lockdep_set_classes(dev) \21382138-{ \21392139- static struct lock_class_key qdisc_tx_busylock_key; \21402140- static struct lock_class_key qdisc_running_key; \21412141- static struct lock_class_key qdisc_xmit_lock_key; \21422142- static struct lock_class_key dev_addr_list_lock_key; \21432143- unsigned int i; \21442144- \21452145- (dev)->qdisc_tx_busylock = &qdisc_tx_busylock_key; \21462146- (dev)->qdisc_running_key = &qdisc_running_key; \21472147- lockdep_set_class(&(dev)->addr_list_lock, \21482148- &dev_addr_list_lock_key); \21492149- for (i = 0; i < (dev)->num_tx_queues; i++) \21502150- lockdep_set_class(&(dev)->_tx[i]._xmit_lock, \21512151- &qdisc_xmit_lock_key); \21522125}2153212621542127u16 netdev_pick_tx(struct net_device *dev, struct sk_buff *skb,···31323139}3133314031343141void netif_tx_stop_all_queues(struct net_device *dev);31423142+void netdev_update_lockdep_key(struct net_device *dev);3135314331363144static inline bool netif_tx_queue_stopped(const struct netdev_queue *dev_queue)31373145{···40504056 spin_lock(&dev->addr_list_lock);40514057}4052405840534053-static inline void netif_addr_lock_nested(struct net_device *dev)40544054-{40554055- int subclass = SINGLE_DEPTH_NESTING;40564056-40574057- if (dev->netdev_ops->ndo_get_lock_subclass)40584058- subclass = dev->netdev_ops->ndo_get_lock_subclass(dev);40594059-40604060- 
spin_lock_nested(&dev->addr_list_lock, subclass);40614061-}40624062-40634059static inline void netif_addr_lock_bh(struct net_device *dev)40644060{40654061 spin_lock_bh(&dev->addr_list_lock);···43134329 struct netlink_ext_ack *extack);43144330void netdev_upper_dev_unlink(struct net_device *dev,43154331 struct net_device *upper_dev);43324332+int netdev_adjacent_change_prepare(struct net_device *old_dev,43334333+ struct net_device *new_dev,43344334+ struct net_device *dev,43354335+ struct netlink_ext_ack *extack);43364336+void netdev_adjacent_change_commit(struct net_device *old_dev,43374337+ struct net_device *new_dev,43384338+ struct net_device *dev);43394339+void netdev_adjacent_change_abort(struct net_device *old_dev,43404340+ struct net_device *new_dev,43414341+ struct net_device *dev);43164342void netdev_adjacent_rename_links(struct net_device *dev, char *oldname);43174343void *netdev_lower_dev_get_private(struct net_device *dev,43184344 struct net_device *lower_dev);···43344340extern u8 netdev_rss_key[NETDEV_RSS_KEY_LEN] __read_mostly;43354341void netdev_rss_key_fill(void *buffer, size_t len);4336434243374337-int dev_get_nest_level(struct net_device *dev);43384343int skb_checksum_help(struct sk_buff *skb);43394344int skb_crc32c_csum_help(struct sk_buff *skb);43404345int skb_csum_hwoffload_help(struct sk_buff *skb,
+1-1
include/linux/perf_event.h
···292292 * -EBUSY -- @event is for this PMU but PMU temporarily unavailable293293 * -EINVAL -- @event is for this PMU but @event is not valid294294 * -EOPNOTSUPP -- @event is for this PMU, @event is valid, but not supported295295- * -EACCESS -- @event is for this PMU, @event is valid, but no privilidges295295+ * -EACCES -- @event is for this PMU, @event is valid, but no privileges296296 *297297 * 0 -- @event is for this PMU and valid298298 *
+3
include/linux/platform_data/dma-imx-sdma.h
···5151 /* End of v2 array */5252 s32 zcanfd_2_mcu_addr;5353 s32 zqspi_2_mcu_addr;5454+ s32 mcu_2_ecspi_addr;5455 /* End of v3 array */5656+ s32 mcu_2_zqspi_addr;5757+ /* End of v4 array */5558};56595760/**
···13541354 return skb->hash;13551355}1356135613571357-__u32 skb_get_hash_perturb(const struct sk_buff *skb, u32 perturb);13571357+__u32 skb_get_hash_perturb(const struct sk_buff *skb,13581358+ const siphash_key_t *perturb);1358135913591360static inline __u32 skb_get_hash_raw(const struct sk_buff *skb)13601361{···14941493{14951494 return list->next == (const struct sk_buff *) list;14961495}14961496+14971497+/**14981498+ * skb_queue_empty_lockless - check if a queue is empty14991499+ * @list: queue head15001500+ *15011501+ * Returns true if the queue is empty, false otherwise.15021502+ * This variant can be used in lockless contexts.15031503+ */15041504+static inline bool skb_queue_empty_lockless(const struct sk_buff_head *list)15051505+{15061506+ return READ_ONCE(list->next) == (const struct sk_buff *) list;15071507+}15081508+1497150914981510/**14991511 * skb_queue_is_last - check if skb is the last entry in the queue···18611847 struct sk_buff *prev, struct sk_buff *next,18621848 struct sk_buff_head *list)18631849{18641864- newsk->next = next;18651865- newsk->prev = prev;18661866- next->prev = prev->next = newsk;18501850+ /* see skb_queue_empty_lockless() for the opposite READ_ONCE() */18511851+ WRITE_ONCE(newsk->next, next);18521852+ WRITE_ONCE(newsk->prev, prev);18531853+ WRITE_ONCE(next->prev, newsk);18541854+ WRITE_ONCE(prev->next, newsk);18671855 list->qlen++;18681856}18691857···18761860 struct sk_buff *first = list->next;18771861 struct sk_buff *last = list->prev;1878186218791879- first->prev = prev;18801880- prev->next = first;18631863+ WRITE_ONCE(first->prev, prev);18641864+ WRITE_ONCE(prev->next, first);1881186518821882- last->next = next;18831883- next->prev = last;18661866+ WRITE_ONCE(last->next, next);18671867+ WRITE_ONCE(next->prev, last);18841868}1885186918861870/**···20212005 next = skb->next;20222006 prev = skb->prev;20232007 skb->next = skb->prev = NULL;20242024- next->prev = prev;20252025- prev->next = next;20082008+ WRITE_ONCE(next->prev, 
prev);20092009+ WRITE_ONCE(prev->next, next);20262010}2027201120282012/**
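The READ_ONCE()/WRITE_ONCE() pairing above can be sketched in userspace with a toy circular queue in the style of sk_buff_head (empty when the head points at itself); the macros below are stand-ins for the kernel's, and all names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace stand-ins for the kernel's READ_ONCE()/WRITE_ONCE(): a
 * single volatile access the compiler may not tear or duplicate. */
#define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

/* Circular list head: empty when it points at itself. */
struct queue_head { struct queue_head *next, *prev; };

static void queue_init(struct queue_head *list)
{
	list->next = list->prev = list;
}

/* Lockless empty check: reads the next pointer exactly once. */
static bool queue_empty_lockless(struct queue_head *list)
{
	return READ_ONCE(list->next) == list;
}

/* Publishes every pointer with WRITE_ONCE so a concurrent
 * queue_empty_lockless() never observes a torn update. */
static void queue_insert(struct queue_head *newsk,
			 struct queue_head *prev, struct queue_head *next)
{
	WRITE_ONCE(newsk->next, next);
	WRITE_ONCE(newsk->prev, prev);
	WRITE_ONCE(next->prev, newsk);
	WRITE_ONCE(prev->next, newsk);
}

static bool queue_demo(void)
{
	struct queue_head head, a;

	queue_init(&head);
	if (!queue_empty_lockless(&head))
		return false;
	queue_insert(&a, &head, head.next);
	return !queue_empty_lockless(&head) &&
	       head.next == &a && head.prev == &a;
}
```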
+1-1
include/linux/socket.h
···263263#define PF_MAX AF_MAX264264265265/* Maximum queue length specifiable by listen. */266266-#define SOMAXCONN 128266266+#define SOMAXCONN 4096267267268268/* Flags we can use with send/ and recv.269269 Added those for 1003.1g not all are supported yet
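As a rough sketch of what the larger default buys: listen() backlogs are silently capped at SOMAXCONN, so the old default of 128 clamped anything a server requested above that. A hedged illustration (names and the clamp helper are illustrative, not the kernel's exact code path):

```c
#include <assert.h>

#define SOMAXCONN 4096	/* new default; previously 128 */

/* Mirrors the clamp the kernel applies to the listen() backlog. */
static int clamp_backlog(int backlog)
{
	return backlog > SOMAXCONN ? SOMAXCONN : backlog;
}
```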
···185185}186186187187struct ip_frag_state {188188- struct iphdr *iph;188188+ bool DF;189189 unsigned int hlen;190190 unsigned int ll_rs;191191 unsigned int mtu;···196196};197197198198void ip_frag_init(struct sk_buff *skb, unsigned int hlen, unsigned int ll_rs,199199- unsigned int mtu, struct ip_frag_state *state);199199+ unsigned int mtu, bool DF, struct ip_frag_state *state);200200struct sk_buff *ip_frag_next(struct sk_buff *skb,201201 struct ip_frag_state *state);202202
+1
include/net/ip_vs.h
···889889 struct delayed_work defense_work; /* Work handler */890890 int drop_rate;891891 int drop_counter;892892+ int old_secure_tcp;892893 atomic_t dropentry;893894 /* locks in ctl.c */894895 spinlock_t dropentry_lock; /* drop entry handling */
+1-1
include/net/net_namespace.h
···342342#define __net_initconst __initconst343343#endif344344345345-int peernet2id_alloc(struct net *net, struct net *peer);345345+int peernet2id_alloc(struct net *net, struct net *peer, gfp_t gfp);346346int peernet2id(struct net *net, struct net *peer);347347bool peernet_has_id(struct net *net, struct net *peer);348348struct net *get_net_ns_by_id(struct net *net, int id);
+10-5
include/net/sock.h
···954954{955955	int cpu = raw_smp_processor_id();956956957957-	if (unlikely(sk->sk_incoming_cpu != cpu))958958-		sk->sk_incoming_cpu = cpu;957957+	if (unlikely(READ_ONCE(sk->sk_incoming_cpu) != cpu))958958+		WRITE_ONCE(sk->sk_incoming_cpu, cpu);959959}960960961961static inline void sock_rps_record_flow_hash(__u32 hash)···22422242 * sk_page_frag - return an appropriate page_frag22432243 * @sk: socket22442244 *22452245- * If socket allocation mode allows current thread to sleep, it means its22462246- * safe to use the per task page_frag instead of the per socket one.22452245+ * Use the per task page_frag instead of the per socket one for22462246+ * optimization when we know that we're in the normal context and owns22472247+ * everything that's associated with %current.22482248+ *22492249+ * gfpflags_allow_blocking() isn't enough here as direct reclaim may nest22502250+ * inside other socket operations and end up recursing into sk_page_frag()22512251+ * while it's already in use.22472252 */22482253static inline struct page_frag *sk_page_frag(struct sock *sk)22492254{22502250-	if (gfpflags_allow_blocking(sk->sk_allocation))22552255+	if (gfpflags_normal_context(sk->sk_allocation))22512256		return &current->task_frag;2252225722532258	return &sk->sk_frag;
···3838 *3939 * Protocol changelog:4040 *4141+ * 7.1:4242+ * - add the following messages:4343+ * FUSE_SETATTR, FUSE_SYMLINK, FUSE_MKNOD, FUSE_MKDIR, FUSE_UNLINK,4444+ * FUSE_RMDIR, FUSE_RENAME, FUSE_LINK, FUSE_OPEN, FUSE_READ, FUSE_WRITE,4545+ * FUSE_RELEASE, FUSE_FSYNC, FUSE_FLUSH, FUSE_SETXATTR, FUSE_GETXATTR,4646+ * FUSE_LISTXATTR, FUSE_REMOVEXATTR, FUSE_OPENDIR, FUSE_READDIR,4747+ * FUSE_RELEASEDIR4848+ * - add padding to messages to accommodate 32-bit servers on 64-bit kernels4949+ *5050+ * 7.2:5151+ * - add FOPEN_DIRECT_IO and FOPEN_KEEP_CACHE flags5252+ * - add FUSE_FSYNCDIR message5353+ *5454+ * 7.3:5555+ * - add FUSE_ACCESS message5656+ * - add FUSE_CREATE message5757+ * - add filehandle to fuse_setattr_in5858+ *5959+ * 7.4:6060+ * - add frsize to fuse_kstatfs6161+ * - clean up request size limit checking6262+ *6363+ * 7.5:6464+ * - add flags and max_write to fuse_init_out6565+ *6666+ * 7.6:6767+ * - add max_readahead to fuse_init_in and fuse_init_out6868+ *6969+ * 7.7:7070+ * - add FUSE_INTERRUPT message7171+ * - add POSIX file lock support7272+ *7373+ * 7.8:7474+ * - add lock_owner and flags fields to fuse_release_in7575+ * - add FUSE_BMAP message7676+ * - add FUSE_DESTROY message7777+ *4178 * 7.9:4279 * - new fuse_getattr_in input argument of GETATTR4380 * - add lk_flags in fuse_lk_in
+1-1
kernel/bpf/core.c
···502502 return WARN_ON_ONCE(bpf_adj_branches(prog, off, off + cnt, off, false));503503}504504505505-void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp)505505+static void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp)506506{507507 int i;508508
+32-1
kernel/bpf/devmap.c
···128128129129 if (!dtab->n_buckets) /* Overflow check */130130 return -EINVAL;131131- cost += sizeof(struct hlist_head) * dtab->n_buckets;131131+ cost += (u64) sizeof(struct hlist_head) * dtab->n_buckets;132132 }133133134134 /* if map size is larger than memlock limit, reject it */···719719 .map_check_btf = map_check_no_btf,720720};721721722722+static void dev_map_hash_remove_netdev(struct bpf_dtab *dtab,723723+ struct net_device *netdev)724724+{725725+ unsigned long flags;726726+ u32 i;727727+728728+ spin_lock_irqsave(&dtab->index_lock, flags);729729+ for (i = 0; i < dtab->n_buckets; i++) {730730+ struct bpf_dtab_netdev *dev;731731+ struct hlist_head *head;732732+ struct hlist_node *next;733733+734734+ head = dev_map_index_hash(dtab, i);735735+736736+ hlist_for_each_entry_safe(dev, next, head, index_hlist) {737737+ if (netdev != dev->dev)738738+ continue;739739+740740+ dtab->items--;741741+ hlist_del_rcu(&dev->index_hlist);742742+ call_rcu(&dev->rcu, __dev_map_entry_free);743743+ }744744+ }745745+ spin_unlock_irqrestore(&dtab->index_lock, flags);746746+}747747+722748static int dev_map_notification(struct notifier_block *notifier,723749 ulong event, void *ptr)724750{···761735 */762736 rcu_read_lock();763737 list_for_each_entry_rcu(dtab, &dev_map_list, list) {738738+ if (dtab->map.map_type == BPF_MAP_TYPE_DEVMAP_HASH) {739739+ dev_map_hash_remove_netdev(dtab, netdev);740740+ continue;741741+ }742742+764743 for (i = 0; i < dtab->map.max_entries; i++) {765744 struct bpf_dtab_netdev *dev, *odev;766745
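The bucket walk added for DEVMAP_HASH maps follows a common cleanup shape: visit every bucket and unlink each entry that references the departing device. A minimal userspace sketch, with a singly linked list standing in for the kernel's hlist and an int standing in for the net_device pointer (all names illustrative):

```c
#include <assert.h>
#include <stdlib.h>

struct entry { int dev; struct entry *next; };

/* Walk every bucket and unlink all entries for the departing device,
 * returning how many were removed. */
static unsigned int remove_dev(struct entry **buckets,
			       unsigned int n_buckets, int dev)
{
	unsigned int removed = 0;

	for (unsigned int i = 0; i < n_buckets; i++) {
		struct entry **pp = &buckets[i];

		while (*pp) {
			if ((*pp)->dev == dev) {
				struct entry *victim = *pp;

				*pp = victim->next;
				free(victim);
				removed++;
			} else {
				pp = &(*pp)->next;
			}
		}
	}
	return removed;
}

static unsigned int remove_dev_demo(void)
{
	struct entry *buckets[2] = { NULL, NULL };
	int devs[3] = { 1, 2, 1 };

	for (int i = 0; i < 3; i++) {
		struct entry *e = malloc(sizeof(*e));

		e->dev = devs[i];
		e->next = buckets[i % 2];
		buckets[i % 2] = e;
	}
	/* Two of the three entries reference device 1. */
	return remove_dev(buckets, 2, 1);
}
```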
+20-11
kernel/bpf/syscall.c
···13261326{13271327 struct bpf_prog_aux *aux = container_of(rcu, struct bpf_prog_aux, rcu);1328132813291329+ kvfree(aux->func_info);13291330 free_used_maps(aux);13301331 bpf_prog_uncharge_memlock(aux->prog);13311332 security_bpf_prog_free(aux);13321333 bpf_prog_free(aux->prog);13341334+}13351335+13361336+static void __bpf_prog_put_noref(struct bpf_prog *prog, bool deferred)13371337+{13381338+ bpf_prog_kallsyms_del_all(prog);13391339+ btf_put(prog->aux->btf);13401340+ bpf_prog_free_linfo(prog);13411341+13421342+ if (deferred)13431343+ call_rcu(&prog->aux->rcu, __bpf_prog_put_rcu);13441344+ else13451345+ __bpf_prog_put_rcu(&prog->aux->rcu);13331346}1334134713351348static void __bpf_prog_put(struct bpf_prog *prog, bool do_idr_lock)···13511338 perf_event_bpf_event(prog, PERF_BPF_EVENT_PROG_UNLOAD, 0);13521339 /* bpf_prog_free_id() must be called first */13531340 bpf_prog_free_id(prog, do_idr_lock);13541354- bpf_prog_kallsyms_del_all(prog);13551355- btf_put(prog->aux->btf);13561356- kvfree(prog->aux->func_info);13571357- bpf_prog_free_linfo(prog);13581358-13591359- call_rcu(&prog->aux->rcu, __bpf_prog_put_rcu);13411341+ __bpf_prog_put_noref(prog, true);13601342 }13611343}13621344···17491741 return err;1750174217511743free_used_maps:17521752- bpf_prog_free_linfo(prog);17531753- kvfree(prog->aux->func_info);17541754- btf_put(prog->aux->btf);17551755- bpf_prog_kallsyms_del_subprogs(prog);17561756- free_used_maps(prog->aux);17441744+ /* In case we have subprogs, we need to wait for a grace17451745+ * period before we can tear down JIT memory since symbols17461746+ * are already exposed under kallsyms.17471747+ */17481748+ __bpf_prog_put_noref(prog, prog->aux->func_cnt);17491749+ return err;17571750free_prog:17581751 bpf_prog_uncharge_memlock(prog);17591752free_prog_sec:
+2-1
kernel/cgroup/cpuset.c
···798798 cpumask_subset(cp->cpus_allowed, top_cpuset.effective_cpus))799799 continue;800800801801- if (is_sched_load_balance(cp))801801+ if (is_sched_load_balance(cp) &&802802+ !cpumask_empty(cp->effective_cpus))802803 csa[csn++] = cp;803804804805 /* skip @cp's subtree if not a partition root */
···740740 return 0;741741}742742743743-/* batman-adv network devices have devices nesting below it and are a special744744- * "super class" of normal network devices; split their locks off into a745745- * separate class since they always nest.746746- */747747-static struct lock_class_key batadv_netdev_xmit_lock_key;748748-static struct lock_class_key batadv_netdev_addr_lock_key;749749-750750-/**751751- * batadv_set_lockdep_class_one() - Set lockdep class for a single tx queue752752- * @dev: device which owns the tx queue753753- * @txq: tx queue to modify754754- * @_unused: always NULL755755- */756756-static void batadv_set_lockdep_class_one(struct net_device *dev,757757- struct netdev_queue *txq,758758- void *_unused)759759-{760760- lockdep_set_class(&txq->_xmit_lock, &batadv_netdev_xmit_lock_key);761761-}762762-763763-/**764764- * batadv_set_lockdep_class() - Set txq and addr_list lockdep class765765- * @dev: network device to modify766766- */767767-static void batadv_set_lockdep_class(struct net_device *dev)768768-{769769- lockdep_set_class(&dev->addr_list_lock, &batadv_netdev_addr_lock_key);770770- netdev_for_each_tx_queue(dev, batadv_set_lockdep_class_one, NULL);771771-}772772-773743/**774744 * batadv_softif_init_late() - late stage initialization of soft interface775745 * @dev: registered network device to modify···752782 u32 random_seqno;753783 int ret;754784 size_t cnt_len = sizeof(u64) * BATADV_CNT_NUM;755755-756756- batadv_set_lockdep_class(dev);757785758786 bat_priv = netdev_priv(dev);759787 bat_priv->soft_iface = dev;
+7
net/batman-adv/types.h
···1717#include <linux/if.h>1818#include <linux/if_ether.h>1919#include <linux/kref.h>2020+#include <linux/mutex.h>2021#include <linux/netdevice.h>2122#include <linux/netlink.h>2223#include <linux/sched.h> /* for linux/wait.h */···82818382 /** @ogm_seqno: OGM sequence number - used to identify each OGM */8483 atomic_t ogm_seqno;8484+8585+ /** @ogm_buff_mutex: lock protecting ogm_buff and ogm_buff_len */8686+ struct mutex ogm_buff_mutex;8587};86888789/**···1542153815431539 /** @ogm_seqno: OGM sequence number - used to identify each OGM */15441540 atomic_t ogm_seqno;15411541+15421542+ /** @ogm_buff_mutex: lock protecting ogm_buff and ogm_buff_len */15431543+ struct mutex ogm_buff_mutex;1545154415461545 /** @ogm_wq: workqueue used to schedule OGM transmissions */15471546 struct delayed_work ogm_wq;
···9595 * This may also be a clone skbuff, we could preserve the geometry for9696 * the copies but probably not worth the effort.9797 */9898- ip_frag_init(skb, hlen, ll_rs, frag_max_size, &state);9898+ ip_frag_init(skb, hlen, ll_rs, frag_max_size, false, &state);9999100100 while (state.left > 0) {101101 struct sk_buff *skb2;
···509509 key = &tun_info->key;510510 if (!(tun_info->key.tun_flags & TUNNEL_ERSPAN_OPT))511511 goto err_free_skb;512512- md = ip_tunnel_info_opts(tun_info);513513- if (!md)512512+ if (tun_info->options_len < sizeof(*md))514513 goto err_free_skb;514514+ md = ip_tunnel_info_opts(tun_info);515515516516 /* ERSPAN has fixed 8 byte GRE header */517517 version = md->version;
+6-5
net/ipv4/ip_output.c
···645645EXPORT_SYMBOL(ip_fraglist_prepare);646646647647void ip_frag_init(struct sk_buff *skb, unsigned int hlen,648648- unsigned int ll_rs, unsigned int mtu,648648+ unsigned int ll_rs, unsigned int mtu, bool DF,649649 struct ip_frag_state *state)650650{651651 struct iphdr *iph = ip_hdr(skb);652652653653+ state->DF = DF;653654 state->hlen = hlen;654655 state->ll_rs = ll_rs;655656 state->mtu = mtu;···668667{669668 /* Copy the flags to each fragment. */670669 IPCB(to)->flags = IPCB(from)->flags;671671-672672- if (IPCB(from)->flags & IPSKB_FRAG_PMTU)673673- state->iph->frag_off |= htons(IP_DF);674670675671 /* ANK: dirty, but effective trick. Upgrade options only if676672 * the segment to be fragmented was THE FIRST (otherwise,···736738 */737739 iph = ip_hdr(skb2);738740 iph->frag_off = htons((state->offset >> 3));741741+ if (state->DF)742742+ iph->frag_off |= htons(IP_DF);739743740744 /*741745 * Added AC : If we are fragmenting a fragment that's not the···883883 * Fragment the datagram.884884 */885885886886- ip_frag_init(skb, hlen, ll_rs, mtu, &state);886886+ ip_frag_init(skb, hlen, ll_rs, mtu, IPCB(skb)->flags & IPSKB_FRAG_PMTU,887887+ &state);887888888889 /*889890 * Keep copying data until we run out.
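The per-fragment header logic above now applies IP_DF from the cached state instead of reading the original header. A hypothetical helper sketching how each fragment's frag_off field is assembled (host byte order for simplicity; the kernel applies htons()):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define IP_DF 0x4000u	/* don't-fragment flag inside iph->frag_off */

/* Illustrative only: the offset in 8-byte units, with IP_DF kept on
 * every fragment when the original skb had IPSKB_FRAG_PMTU set. */
static uint16_t frag_off_field(unsigned int byte_offset, bool DF)
{
	uint16_t v = (uint16_t)(byte_offset >> 3);

	if (DF)
		v |= IP_DF;
	return v;
}
```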
+2-2
net/ipv4/tcp.c
···584584 }585585 /* This barrier is coupled with smp_wmb() in tcp_reset() */586586 smp_rmb();587587- if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))587587+ if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))588588 mask |= EPOLLERR;589589590590 return mask;···19641964 if (unlikely(flags & MSG_ERRQUEUE))19651965 return inet_recv_error(sk, msg, len, addr_len);1966196619671967- if (sk_can_busy_loop(sk) && skb_queue_empty(&sk->sk_receive_queue) &&19671967+ if (sk_can_busy_loop(sk) && skb_queue_empty_lockless(&sk->sk_receive_queue) &&19681968 (sk->sk_state == TCP_ESTABLISHED))19691969 sk_busy_loop(sk, nonblock);19701970
···388388 return -1;389389 score += 4;390390391391- if (sk->sk_incoming_cpu == raw_smp_processor_id())391391+ if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())392392 score++;393393 return score;394394}···13161316 scratch->_tsize_state |= UDP_SKB_IS_STATELESS;13171317}1318131813191319+static void udp_skb_csum_unnecessary_set(struct sk_buff *skb)13201320+{13211321+ /* We come here after udp_lib_checksum_complete() returned 0.13221322+ * This means that __skb_checksum_complete() might have13231323+ * set skb->csum_valid to 1.13241324+ * On 64bit platforms, we can set csum_unnecessary13251325+ * to true, but only if the skb is not shared.13261326+ */13271327+#if BITS_PER_LONG == 6413281328+ if (!skb_shared(skb))13291329+ udp_skb_scratch(skb)->csum_unnecessary = true;13301330+#endif13311331+}13321332+13191333static int udp_skb_truesize(struct sk_buff *skb)13201334{13211335 return udp_skb_scratch(skb)->_tsize_state & ~UDP_SKB_IS_STATELESS;···15641550 *total += skb->truesize;15651551 kfree_skb(skb);15661552 } else {15671567- /* the csum related bits could be changed, refresh15681568- * the scratch area15691569- */15701570- udp_set_dev_scratch(skb);15531553+ udp_skb_csum_unnecessary_set(skb);15711554 break;15721555 }15731556 }···1588157715891578 spin_lock_bh(&rcvq->lock);15901579 skb = __first_packet_length(sk, rcvq, &total);15911591- if (!skb && !skb_queue_empty(sk_queue)) {15801580+ if (!skb && !skb_queue_empty_lockless(sk_queue)) {15921581 spin_lock(&sk_queue->lock);15931582 skb_queue_splice_tail_init(sk_queue, rcvq);15941583 spin_unlock(&sk_queue->lock);···16611650 return skb;16621651 }1663165216641664- if (skb_queue_empty(sk_queue)) {16531653+ if (skb_queue_empty_lockless(sk_queue)) {16651654 spin_unlock_bh(&queue->lock);16661655 goto busy_check;16671656 }···16871676 break;1688167716891678 sk_busy_loop(sk, flags & MSG_DONTWAIT);16901690- } while (!skb_queue_empty(sk_queue));16791679+ } while (!skb_queue_empty_lockless(sk_queue));1691168016921681 /* 
sk_queue is empty, reader_queue may contain peeked packets */16931682 } while (timeo &&···27232712 __poll_t mask = datagram_poll(file, sock, wait);27242713 struct sock *sk = sock->sk;2725271427262726- if (!skb_queue_empty(&udp_sk(sk)->reader_queue))27152715+ if (!skb_queue_empty_lockless(&udp_sk(sk)->reader_queue))27272716 mask |= EPOLLIN | EPOLLRDNORM;2728271727292718 /* Check for false positives due to checksum errors */
+1
net/ipv6/addrconf_core.c
···77#include <linux/export.h>88#include <net/ipv6.h>99#include <net/ipv6_stubs.h>1010+#include <net/addrconf.h>1011#include <net/ip.h>11121213/* if ipv6 module registers this function is used by xfrm to force all
+1-1
net/ipv6/inet6_hashtables.c
···105105 return -1;106106107107 score = 1;108108- if (sk->sk_incoming_cpu == raw_smp_processor_id())108108+ if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())109109 score++;110110 }111111 return score;
+2-2
net/ipv6/ip6_gre.c
···980980 dsfield = key->tos;981981 if (!(tun_info->key.tun_flags & TUNNEL_ERSPAN_OPT))982982 goto tx_err;983983- md = ip_tunnel_info_opts(tun_info);984984- if (!md)983983+ if (tun_info->options_len < sizeof(*md))985984 goto tx_err;985985+ md = ip_tunnel_info_opts(tun_info);986986987987 tun_id = tunnel_id_to_key32(key->tun_id);988988 if (md->version == 1) {
+1-1
net/ipv6/udp.c
···135135 return -1;136136 score++;137137138138- if (sk->sk_incoming_cpu == raw_smp_processor_id())138138+ if (READ_ONCE(sk->sk_incoming_cpu) == raw_smp_processor_id())139139 score++;140140141141 return score;
···193193194194 mutex_lock(&__ip_vs_app_mutex);195195196196+ /* increase the module use count */197197+ if (!ip_vs_use_count_inc()) {198198+ err = -ENOENT;199199+ goto out_unlock;200200+ }201201+196202 list_for_each_entry(a, &ipvs->app_list, a_list) {197203 if (!strcmp(app->name, a->name)) {198204 err = -EEXIST;205205+ /* decrease the module use count */206206+ ip_vs_use_count_dec();199207 goto out_unlock;200208 }201209 }202210 a = kmemdup(app, sizeof(*app), GFP_KERNEL);203211 if (!a) {204212 err = -ENOMEM;213213+ /* decrease the module use count */214214+ ip_vs_use_count_dec();205215 goto out_unlock;206216 }207217 INIT_LIST_HEAD(&a->incs_list);208218 list_add(&a->a_list, &ipvs->app_list);209209- /* increase the module use count */210210- ip_vs_use_count_inc();211219212220out_unlock:213221 mutex_unlock(&__ip_vs_app_mutex);
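The reordering above follows a general refcount rule: take the module reference before any early-exit path, and drop it again on every error path so the count stays balanced. A toy sketch under that assumption (the real counter is try_module_get()/module_put() behind ip_vs_use_count_inc()/_dec(); all names here are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

static int use_count;

static bool use_count_inc(void) { use_count++; return true; }
static void use_count_dec(void) { use_count--; }
static int  use_count_value(void) { return use_count; }

/* 'duplicate' stands in for the -EEXIST lookup in the hunk above. */
static int register_app(bool duplicate)
{
	if (!use_count_inc())
		return -2;	/* -ENOENT: module going away */
	if (duplicate) {
		use_count_dec();	/* balance on the error path */
		return -17;	/* -EEXIST */
	}
	return 0;	/* success keeps the reference */
}
```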
+11-18
net/netfilter/ipvs/ip_vs_ctl.c
···9393static void update_defense_level(struct netns_ipvs *ipvs)9494{9595 struct sysinfo i;9696- static int old_secure_tcp = 0;9796 int availmem;9897 int nomem;9998 int to_change = -1;···173174 spin_lock(&ipvs->securetcp_lock);174175 switch (ipvs->sysctl_secure_tcp) {175176 case 0:176176- if (old_secure_tcp >= 2)177177+ if (ipvs->old_secure_tcp >= 2)177178 to_change = 0;178179 break;179180 case 1:180181 if (nomem) {181181- if (old_secure_tcp < 2)182182+ if (ipvs->old_secure_tcp < 2)182183 to_change = 1;183184 ipvs->sysctl_secure_tcp = 2;184185 } else {185185- if (old_secure_tcp >= 2)186186+ if (ipvs->old_secure_tcp >= 2)186187 to_change = 0;187188 }188189 break;189190 case 2:190191 if (nomem) {191191- if (old_secure_tcp < 2)192192+ if (ipvs->old_secure_tcp < 2)192193 to_change = 1;193194 } else {194194- if (old_secure_tcp >= 2)195195+ if (ipvs->old_secure_tcp >= 2)195196 to_change = 0;196197 ipvs->sysctl_secure_tcp = 1;197198 }198199 break;199200 case 3:200200- if (old_secure_tcp < 2)201201+ if (ipvs->old_secure_tcp < 2)201202 to_change = 1;202203 break;203204 }204204- old_secure_tcp = ipvs->sysctl_secure_tcp;205205+ ipvs->old_secure_tcp = ipvs->sysctl_secure_tcp;205206 if (to_change >= 0)206207 ip_vs_protocol_timeout_change(ipvs,207208 ipvs->sysctl_secure_tcp > 1);···12741275 struct ip_vs_service *svc = NULL;1275127612761277 /* increase the module use count */12771277- ip_vs_use_count_inc();12781278+ if (!ip_vs_use_count_inc())12791279+ return -ENOPROTOOPT;1278128012791281 /* Lookup the scheduler by 'u->sched_name' */12801282 if (strcmp(u->sched_name, "none")) {···24352435 if (copy_from_user(arg, user, len) != 0)24362436 return -EFAULT;2437243724382438- /* increase the module use count */24392439- ip_vs_use_count_inc();24402440-24412438 /* Handle daemons since they have another lock */24422439 if (cmd == IP_VS_SO_SET_STARTDAEMON ||24432440 cmd == IP_VS_SO_SET_STOPDAEMON) {···24472450 ret = -EINVAL;24482451 if (strscpy(cfg.mcast_ifn, dm->mcast_ifn,24492452 
sizeof(cfg.mcast_ifn)) <= 0)24502450- goto out_dec;24532453+ return ret;24512454 cfg.syncid = dm->syncid;24522455 ret = start_sync_thread(ipvs, &cfg, dm->state);24532456 } else {24542457 ret = stop_sync_thread(ipvs, dm->state);24552458 }24562456- goto out_dec;24592459+ return ret;24572460 }2458246124592462 mutex_lock(&__ip_vs_mutex);···2548255125492552 out_unlock:25502553 mutex_unlock(&__ip_vs_mutex);25512551- out_dec:25522552- /* decrease the module use count */25532553- ip_vs_use_count_dec();25542554-25552554 return ret;25562555}25572556
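Moving old_secure_tcp out of a function-local static and into struct netns_ipvs fixes state leaking between network namespaces: a static is one variable shared by every caller, while a struct field gives each namespace its own history. A minimal sketch of the difference, with illustrative names:

```c
#include <assert.h>
#include <stdbool.h>

/* Per-"namespace" state; before the fix, old_secure_tcp was a
 * function-local static shared by every instance. */
struct netns_state {
	int sysctl_secure_tcp;
	int old_secure_tcp;
};

/* Returns true when secure_tcp mode changed since the last call for
 * THIS instance; with the old static, one instance's update would be
 * observed by all of them. */
static bool secure_tcp_changed(struct netns_state *ipvs)
{
	bool changed = ipvs->sysctl_secure_tcp != ipvs->old_secure_tcp;

	ipvs->old_secure_tcp = ipvs->sysctl_secure_tcp;
	return changed;
}

static bool per_ns_demo(void)
{
	struct netns_state a = { 0, 0 }, b = { 0, 0 };

	a.sysctl_secure_tcp = 2;
	if (!secure_tcp_changed(&a))
		return false;
	/* b was untouched: its own history must not report a change. */
	return !secure_tcp_changed(&b);
}
```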
+2-1
net/netfilter/ipvs/ip_vs_pe.c
···6868 struct ip_vs_pe *tmp;69697070 /* increase the module use count */7171- ip_vs_use_count_inc();7171+ if (!ip_vs_use_count_inc())7272+ return -ENOENT;72737374 mutex_lock(&ip_vs_pe_mutex);7475 /* Make sure that the pe with this name doesn't exist
+2 -1
net/netfilter/ipvs/ip_vs_sched.c

···
 	}

 	/* increase the module use count */
-	ip_vs_use_count_inc();
+	if (!ip_vs_use_count_inc())
+		return -ENOENT;

 	mutex_lock(&ip_vs_sched_mutex);
net/netfilter/nf_tables_offload.c

···
 
 		policy = nft_trans_chain_policy(trans);
 		err = nft_flow_offload_chain(trans->ctx.chain, &policy,
-					     FLOW_BLOCK_BIND);
+					     FLOW_BLOCK_UNBIND);
 		break;
 	case NFT_MSG_NEWRULE:
 		if (!(trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD))
+38
net/netfilter/nft_payload.c

···
 
 	switch (priv->offset) {
 	case offsetof(struct ethhdr, h_source):
+		if (priv->len != ETH_ALEN)
+			return -EOPNOTSUPP;
+
 		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_ETH_ADDRS, eth_addrs,
 				  src, ETH_ALEN, reg);
 		break;
 	case offsetof(struct ethhdr, h_dest):
+		if (priv->len != ETH_ALEN)
+			return -EOPNOTSUPP;
+
 		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_ETH_ADDRS, eth_addrs,
 				  dst, ETH_ALEN, reg);
 		break;
+	default:
+		return -EOPNOTSUPP;
 	}

 	return 0;

···
 
 	switch (priv->offset) {
 	case offsetof(struct iphdr, saddr):
+		if (priv->len != sizeof(struct in_addr))
+			return -EOPNOTSUPP;
+
 		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV4_ADDRS, ipv4, src,
 				  sizeof(struct in_addr), reg);
 		break;
 	case offsetof(struct iphdr, daddr):
+		if (priv->len != sizeof(struct in_addr))
+			return -EOPNOTSUPP;
+
 		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV4_ADDRS, ipv4, dst,
 				  sizeof(struct in_addr), reg);
 		break;
 	case offsetof(struct iphdr, protocol):
+		if (priv->len != sizeof(__u8))
+			return -EOPNOTSUPP;
+
 		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_BASIC, basic, ip_proto,
 				  sizeof(__u8), reg);
 		nft_offload_set_dependency(ctx, NFT_OFFLOAD_DEP_TRANSPORT);

···
 
 	switch (priv->offset) {
 	case offsetof(struct ipv6hdr, saddr):
+		if (priv->len != sizeof(struct in6_addr))
+			return -EOPNOTSUPP;
+
 		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV6_ADDRS, ipv6, src,
 				  sizeof(struct in6_addr), reg);
 		break;
 	case offsetof(struct ipv6hdr, daddr):
+		if (priv->len != sizeof(struct in6_addr))
+			return -EOPNOTSUPP;
+
 		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_IPV6_ADDRS, ipv6, dst,
 				  sizeof(struct in6_addr), reg);
 		break;
 	case offsetof(struct ipv6hdr, nexthdr):
+		if (priv->len != sizeof(__u8))
+			return -EOPNOTSUPP;
+
 		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_BASIC, basic, ip_proto,
 				  sizeof(__u8), reg);
 		nft_offload_set_dependency(ctx, NFT_OFFLOAD_DEP_TRANSPORT);

···
 
 	switch (priv->offset) {
 	case offsetof(struct tcphdr, source):
+		if (priv->len != sizeof(__be16))
+			return -EOPNOTSUPP;
+
 		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, src,
 				  sizeof(__be16), reg);
 		break;
 	case offsetof(struct tcphdr, dest):
+		if (priv->len != sizeof(__be16))
+			return -EOPNOTSUPP;
+
 		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, dst,
 				  sizeof(__be16), reg);
 		break;

···
 
 	switch (priv->offset) {
 	case offsetof(struct udphdr, source):
+		if (priv->len != sizeof(__be16))
+			return -EOPNOTSUPP;
+
 		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, src,
 				  sizeof(__be16), reg);
 		break;
 	case offsetof(struct udphdr, dest):
+		if (priv->len != sizeof(__be16))
+			return -EOPNOTSUPP;
+
 		NFT_OFFLOAD_MATCH(FLOW_DISSECTOR_KEY_PORTS, tp, dst,
 				  sizeof(__be16), reg);
 		break;
-23
net/netrom/af_netrom.c

···
 static const struct proto_ops nr_proto_ops;

 /*
- * NETROM network devices are virtual network devices encapsulating NETROM
- * frames into AX.25 which will be sent through an AX.25 device, so form a
- * special "super class" of normal net devices; split their locks off into a
- * separate class since they always nest.
- */
-static struct lock_class_key nr_netdev_xmit_lock_key;
-static struct lock_class_key nr_netdev_addr_lock_key;
-
-static void nr_set_lockdep_one(struct net_device *dev,
-			       struct netdev_queue *txq,
-			       void *_unused)
-{
-	lockdep_set_class(&txq->_xmit_lock, &nr_netdev_xmit_lock_key);
-}
-
-static void nr_set_lockdep_key(struct net_device *dev)
-{
-	lockdep_set_class(&dev->addr_list_lock, &nr_netdev_addr_lock_key);
-	netdev_for_each_tx_queue(dev, nr_set_lockdep_one, NULL);
-}
-
-/*
  * Socket removal during an interrupt is now safe.
  */
 static void nr_remove_socket(struct sock *sk)

···
 			free_netdev(dev);
 			goto fail;
 		}
-		nr_set_lockdep_key(dev);
 		dev_nr[i] = dev;
 	}
+2 -2
net/nfc/llcp_sock.c

···
 	if (sk->sk_state == LLCP_LISTEN)
 		return llcp_accept_poll(sk);

-	if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
+	if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
 		mask |= EPOLLERR |
 			(sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);

-	if (!skb_queue_empty(&sk->sk_receive_queue))
+	if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
 		mask |= EPOLLIN | EPOLLRDNORM;

 	if (sk->sk_state == LLCP_CLOSED)
net/phonet/socket.c

···
 
 	if (sk->sk_state == TCP_CLOSE)
 		return EPOLLERR;
-	if (!skb_queue_empty(&sk->sk_receive_queue))
+	if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
 		mask |= EPOLLIN | EPOLLRDNORM;
-	if (!skb_queue_empty(&pn->ctrlreq_queue))
+	if (!skb_queue_empty_lockless(&pn->ctrlreq_queue))
 		mask |= EPOLLPRI;
 	if (!mask && sk->sk_state == TCP_CLOSE_WAIT)
 		return EPOLLHUP;
-23
net/rose/af_rose.c

···
 ax25_address rose_callsign;

 /*
- * ROSE network devices are virtual network devices encapsulating ROSE
- * frames into AX.25 which will be sent through an AX.25 device, so form a
- * special "super class" of normal net devices; split their locks off into a
- * separate class since they always nest.
- */
-static struct lock_class_key rose_netdev_xmit_lock_key;
-static struct lock_class_key rose_netdev_addr_lock_key;
-
-static void rose_set_lockdep_one(struct net_device *dev,
-				 struct netdev_queue *txq,
-				 void *_unused)
-{
-	lockdep_set_class(&txq->_xmit_lock, &rose_netdev_xmit_lock_key);
-}
-
-static void rose_set_lockdep_key(struct net_device *dev)
-{
-	lockdep_set_class(&dev->addr_list_lock, &rose_netdev_addr_lock_key);
-	netdev_for_each_tx_queue(dev, rose_set_lockdep_one, NULL);
-}
-
-/*
  * Convert a ROSE address into text.
  */
 char *rose2asc(char *buf, const rose_address *addr)

···
 			free_netdev(dev);
 			goto fail;
 		}
-		rose_set_lockdep_key(dev);
 		dev_rose[i] = dev;
 	}
+1
net/rxrpc/ar-internal.h

···
 	int			debug_id;	/* debug ID for printks */
 	unsigned short		rx_pkt_offset;	/* Current recvmsg packet offset */
 	unsigned short		rx_pkt_len;	/* Current recvmsg packet len */
+	bool			rx_pkt_last;	/* Current recvmsg packet is last */

 	/* Rx/Tx circular buffer, depending on phase.
 	 *
+13 -5
net/rxrpc/recvmsg.c

···
  */
 static int rxrpc_locate_data(struct rxrpc_call *call, struct sk_buff *skb,
 			     u8 *_annotation,
-			     unsigned int *_offset, unsigned int *_len)
+			     unsigned int *_offset, unsigned int *_len,
+			     bool *_last)
 {
 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
 	unsigned int offset = sizeof(struct rxrpc_wire_header);
 	unsigned int len;
+	bool last = false;
 	int ret;
 	u8 annotation = *_annotation;
 	u8 subpacket = annotation & RXRPC_RX_ANNO_SUBPACKET;

···
 	len = skb->len - offset;
 	if (subpacket < sp->nr_subpackets - 1)
 		len = RXRPC_JUMBO_DATALEN;
+	else if (sp->rx_flags & RXRPC_SKB_INCL_LAST)
+		last = true;

 	if (!(annotation & RXRPC_RX_ANNO_VERIFIED)) {
 		ret = rxrpc_verify_packet(call, skb, annotation, offset, len);

···
 
 	*_offset = offset;
 	*_len = len;
+	*_last = last;
 	call->security->locate_data(call, skb, _offset, _len);
 	return 0;
 }

···
 	rxrpc_serial_t serial;
 	rxrpc_seq_t hard_ack, top, seq;
 	size_t remain;
-	bool last;
+	bool rx_pkt_last;
 	unsigned int rx_pkt_offset, rx_pkt_len;
 	int ix, copy, ret = -EAGAIN, ret2;

···
 
 	rx_pkt_offset = call->rx_pkt_offset;
 	rx_pkt_len = call->rx_pkt_len;
+	rx_pkt_last = call->rx_pkt_last;

 	if (call->state >= RXRPC_CALL_SERVER_ACK_REQUEST) {
 		seq = call->rx_hard_ack;

···
 	/* Barriers against rxrpc_input_data(). */
 	hard_ack = call->rx_hard_ack;
 	seq = hard_ack + 1;
+
 	while (top = smp_load_acquire(&call->rx_top),
 	       before_eq(seq, top)
 	       ) {

···
 		if (rx_pkt_offset == 0) {
 			ret2 = rxrpc_locate_data(call, skb,
 						 &call->rxtx_annotations[ix],
-						 &rx_pkt_offset, &rx_pkt_len);
+						 &rx_pkt_offset, &rx_pkt_len,
+						 &rx_pkt_last);
 			trace_rxrpc_recvmsg(call, rxrpc_recvmsg_next, seq,
 					    rx_pkt_offset, rx_pkt_len, ret2);
 			if (ret2 < 0) {

···
 		}

 		/* The whole packet has been transferred. */
-		last = sp->hdr.flags & RXRPC_LAST_PACKET;
 		if (!(flags & MSG_PEEK))
 			rxrpc_rotate_rx_window(call);
 		rx_pkt_offset = 0;
 		rx_pkt_len = 0;

-		if (last) {
+		if (rx_pkt_last) {
 			ASSERTCMP(seq, ==, READ_ONCE(call->rx_top));
 			ret = 1;
 			goto out;

···
 
 	if (!(flags & MSG_PEEK)) {
 		call->rx_pkt_offset = rx_pkt_offset;
 		call->rx_pkt_len = rx_pkt_len;
+		call->rx_pkt_last = rx_pkt_last;
 	}
 done:
 	trace_rxrpc_recvmsg(call, rxrpc_recvmsg_data_return, seq,
net/sched/sch_taprio.c

···
  * offload state (PENDING, ACTIVE, INACTIVE) so it can be visible in dump().
  * This is left as TODO.
  */
-void taprio_offload_config_changed(struct taprio_sched *q)
+static void taprio_offload_config_changed(struct taprio_sched *q)
 {
 	struct sched_gate_list *oper, *admin;
+4 -4
net/sctp/socket.c

···
 	mask = 0;

 	/* Is there any exceptional events?  */
-	if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
+	if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
 		mask |= EPOLLERR |
 			(sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);
 	if (sk->sk_shutdown & RCV_SHUTDOWN)

···
 		mask |= EPOLLHUP;

 	/* Is it readable?  Reconsider this code with TCP-style support.  */
-	if (!skb_queue_empty(&sk->sk_receive_queue))
+	if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
 		mask |= EPOLLIN | EPOLLRDNORM;

 	/* The association is either gone or not ready.  */

···
 		if (sk_can_busy_loop(sk)) {
 			sk_busy_loop(sk, noblock);

-			if (!skb_queue_empty(&sk->sk_receive_queue))
+			if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
 				continue;
 		}

···
 	newinet->inet_rcv_saddr = inet->inet_rcv_saddr;
 	newinet->inet_dport = htons(asoc->peer.port);
 	newinet->pmtudisc = inet->pmtudisc;
-	newinet->inet_id = asoc->next_tsn ^ jiffies;
+	newinet->inet_id = prandom_u32();

 	newinet->uc_ttl = inet->uc_ttl;
 	newinet->mc_loop = 1;
+10 -3
net/smc/af_smc.c

···
 };
 EXPORT_SYMBOL_GPL(smc_proto6);

+static void smc_restore_fallback_changes(struct smc_sock *smc)
+{
+	smc->clcsock->file->private_data = smc->sk.sk_socket;
+	smc->clcsock->file = NULL;
+}
+
 static int __smc_release(struct smc_sock *smc)
 {
 	struct sock *sk = &smc->sk;

···
 		}
 		sk->sk_state = SMC_CLOSED;
 		sk->sk_state_change(sk);
+		smc_restore_fallback_changes(smc);
 	}

 	sk->sk_prot->unhash(sk);

···
 	int smc_type;
 	int rc = 0;

-	sock_hold(&smc->sk); /* sock put in passive closing */
-
 	if (smc->use_fallback)
 		return smc_connect_fallback(smc, smc->fallback_rsn);

···
 	rc = kernel_connect(smc->clcsock, addr, alen, flags);
 	if (rc && rc != -EINPROGRESS)
 		goto out;
+
+	sock_hold(&smc->sk); /* sock put in passive closing */
 	if (flags & O_NONBLOCK) {
 		if (schedule_work(&smc->connect_work))
 			smc->connect_nonblock = 1;

···
 	/* check if RDMA is available */
 	if (!ism_supported) { /* SMC_TYPE_R or SMC_TYPE_B */
 		/* prepare RDMA check */
-		memset(&ini, 0, sizeof(ini));
 		ini.is_smcd = false;
+		ini.ism_dev = NULL;
 		ini.ib_lcl = &pclc->lcl;
 		rc = smc_find_rdma_device(new_smc, &ini);
 		if (rc) {
+1 -1
net/smc/smc_core.c

···
 	}

 	rtnl_lock();
-	nest_lvl = dev_get_nest_level(ndev);
+	nest_lvl = ndev->lower_level;
 	for (i = 0; i < nest_lvl; i++) {
 		struct list_head *lower = &ndev->adj_list.lower;
+1 -1
net/smc/smc_pnet.c

···
 	int i, nest_lvl;

 	rtnl_lock();
-	nest_lvl = dev_get_nest_level(ndev);
+	nest_lvl = ndev->lower_level;
 	for (i = 0; i < nest_lvl; i++) {
 		struct list_head *lower = &ndev->adj_list.lower;
net/sunrpc/xprt.c

···
 	rpc_destroy_wait_queue(&xprt->backlog);
 	kfree(xprt->servername);
 	/*
+	 * Destroy any existing back channel
+	 */
+	xprt_destroy_backchannel(xprt, UINT_MAX);
+
+	/*
 	 * Tear down transport state and free the rpc_xprt
 	 */
 	xprt->ops->destroy(xprt);
net/tipc/socket.c

···
 		/* fall through */
 	case TIPC_LISTEN:
 	case TIPC_CONNECTING:
-		if (!skb_queue_empty(&sk->sk_receive_queue))
+		if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
 			revents |= EPOLLIN | EPOLLRDNORM;
 		break;
 	case TIPC_OPEN:

···
 			revents |= EPOLLOUT;
 		if (!tipc_sk_type_connectionless(sk))
 			break;
-		if (skb_queue_empty(&sk->sk_receive_queue))
+		if (skb_queue_empty_lockless(&sk->sk_receive_queue))
 			break;
 		revents |= EPOLLIN | EPOLLRDNORM;
 		break;
+3 -3
net/unix/af_unix.c

···
 		mask |= EPOLLRDHUP | EPOLLIN | EPOLLRDNORM;

 	/* readable? */
-	if (!skb_queue_empty(&sk->sk_receive_queue))
+	if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
 		mask |= EPOLLIN | EPOLLRDNORM;

 	/* Connection-based need to check for termination and startup */

···
 	mask = 0;

 	/* exceptional events? */
-	if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue))
+	if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
 		mask |= EPOLLERR |
 			(sock_flag(sk, SOCK_SELECT_ERR_QUEUE) ? EPOLLPRI : 0);

···
 		mask |= EPOLLHUP;

 	/* readable? */
-	if (!skb_queue_empty(&sk->sk_receive_queue))
+	if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
 		mask |= EPOLLIN | EPOLLRDNORM;

 	/* Connection-based need to check for termination and startup */
+1 -1
net/vmw_vsock/af_vsock.c

···
 	 * the queue and write as long as the socket isn't shutdown for
 	 * sending.
 	 */
-	if (!skb_queue_empty(&sk->sk_receive_queue) ||
+	if (!skb_queue_empty_lockless(&sk->sk_receive_queue) ||
 	    (sk->sk_shutdown & RCV_SHUTDOWN)) {
 		mask |= EPOLLIN | EPOLLRDNORM;
 	}
+5
net/wireless/chan.c

···
 		return false;
 	}

+	/* channel 14 is only for IEEE 802.11b */
+	if (chandef->center_freq1 == 2484 &&
+	    chandef->width != NL80211_CHAN_WIDTH_20_NOHT)
+		return false;
+
 	if (cfg80211_chandef_is_edmg(chandef) &&
 	    !cfg80211_edmg_chandef_valid(chandef))
 		return false;
sound/pci/hda/patch_hdmi.c

···
 	struct snd_array pins; /* struct hdmi_spec_per_pin */
 	struct hdmi_pcm pcm_rec[16];
 	struct mutex pcm_lock;
+	struct mutex bind_lock; /* for audio component binding */
 	/* pcm_bitmap means which pcms have been assigned to pins*/
 	unsigned long pcm_bitmap;
 	int pcm_used;	/* counter of pcm_rec[] */

···
 	struct hdmi_spec *spec = codec->spec;
 	int pin_idx;

-	mutex_lock(&spec->pcm_lock);
+	mutex_lock(&spec->bind_lock);
 	spec->use_jack_detect = !codec->jackpoll_interval;
 	for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) {
 		struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx);

···
 			snd_hda_jack_detect_enable_callback(codec, pin_nid,
 							    jack_callback);
 	}
-	mutex_unlock(&spec->pcm_lock);
+	mutex_unlock(&spec->bind_lock);
 	return 0;
 }

···
 	spec->ops = generic_standard_hdmi_ops;
 	spec->dev_num = 1;	/* initialize to 1 */
 	mutex_init(&spec->pcm_lock);
+	mutex_init(&spec->bind_lock);
 	snd_hdac_register_chmap_ops(&codec->core, &spec->chmap);

 	spec->chmap.ops.get_chmap = hdmi_get_chmap;

···
 	int i;

 	spec = container_of(acomp->audio_ops, struct hdmi_spec, drm_audio_ops);
-	mutex_lock(&spec->pcm_lock);
+	mutex_lock(&spec->bind_lock);
 	spec->use_acomp_notifier = use_acomp;
 	spec->codec->relaxed_resume = use_acomp;
 	/* reprogram each jack detection logic depending on the notifier */

···
 					     get_pin(spec, i)->pin_nid,
 					     use_acomp);
 	}
-	mutex_unlock(&spec->pcm_lock);
+	mutex_unlock(&spec->bind_lock);
 }

 /* enable / disable the notifier via master bind / unbind */
+11
sound/pci/hda/patch_realtek.c

···
 	case 0x10ec0672:
 		alc_update_coef_idx(codec, 0xd, 0, 1<<14); /* EAPD Ctrl */
 		break;
+	case 0x10ec0623:
+		alc_update_coef_idx(codec, 0x19, 1<<13, 0);
+		break;
 	case 0x10ec0668:
 		alc_update_coef_idx(codec, 0x7, 3<<13, 0);
 		break;

···
 	ALC269_TYPE_ALC225,
 	ALC269_TYPE_ALC294,
 	ALC269_TYPE_ALC300,
+	ALC269_TYPE_ALC623,
 	ALC269_TYPE_ALC700,
 };

···
 	case ALC269_TYPE_ALC225:
 	case ALC269_TYPE_ALC294:
 	case ALC269_TYPE_ALC300:
+	case ALC269_TYPE_ALC623:
 	case ALC269_TYPE_ALC700:
 		ssids = alc269_ssids;
 		break;

···
 	SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
 	SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
 	SND_PCI_QUIRK(0x17aa, 0x3151, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+	SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+	SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
 	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),

···
 	case 0x10ec0300:
 		spec->codec_variant = ALC269_TYPE_ALC300;
 		spec->gen.mixer_nid = 0; /* no loopback on ALC300 */
+		break;
+	case 0x10ec0623:
+		spec->codec_variant = ALC269_TYPE_ALC623;
 		break;
 	case 0x10ec0700:
 	case 0x10ec0701:

···
 	HDA_CODEC_ENTRY(0x10ec0298, "ALC298", patch_alc269),
 	HDA_CODEC_ENTRY(0x10ec0299, "ALC299", patch_alc269),
 	HDA_CODEC_ENTRY(0x10ec0300, "ALC300", patch_alc269),
+	HDA_CODEC_ENTRY(0x10ec0623, "ALC623", patch_alc269),
 	HDA_CODEC_REV_ENTRY(0x10ec0861, 0x100340, "ALC660", patch_alc861),
 	HDA_CODEC_ENTRY(0x10ec0660, "ALC660-VD", patch_alc861vd),
 	HDA_CODEC_ENTRY(0x10ec0861, "ALC861", patch_alc861),
+1
sound/usb/quirks.c

···
 	case 0x23ba:  /* Playback Designs */
 	case 0x25ce:  /* Mytek devices */
 	case 0x278b:  /* Rotel? */
+	case 0x292b:  /* Gustard/Ess based devices */
 	case 0x2ab6:  /* T+A devices */
 	case 0x3842:  /* EVGA */
 	case 0xc502:  /* HiBy devices */
+5
tools/testing/selftests/bpf/test_offload.py

···
 import pprint
 import random
 import re
+import stat
 import string
 import struct
 import subprocess

···
     for f in out.split():
         if f == "ports":
             continue
+
         p = os.path.join(path, f)
+        if not os.stat(p).st_mode & stat.S_IRUSR:
+            continue
+
         if os.path.isfile(p):
             _, out = cmd('cat %s/%s' % (path, f))
             dfs[f] = out.strip()
tools/testing/selftests/net/fib_tests.sh

···
 	fi
 	log_test $rc 0 "Prefix route with metric on link up"

+	# explicitly check for metric changes on edge scenarios
+	run_cmd "$IP addr flush dev dummy2"
+	run_cmd "$IP addr add dev dummy2 172.16.104.0/24 metric 259"
+	run_cmd "$IP addr change dev dummy2 172.16.104.0/24 metric 260"
+	rc=$?
+	if [ $rc -eq 0 ]; then
+		check_route "172.16.104.0/24 dev dummy2 proto kernel scope link src 172.16.104.0 metric 260"
+		rc=$?
+	fi
+	log_test $rc 0 "Modify metric of .0/24 address"
+
+	run_cmd "$IP addr flush dev dummy2"
+	run_cmd "$IP addr add dev dummy2 172.16.104.1/32 peer 172.16.104.2 metric 260"
+	run_cmd "$IP addr change dev dummy2 172.16.104.1/32 peer 172.16.104.2 metric 261"
+	rc=$?
+	if [ $rc -eq 0 ]; then
+		check_route "172.16.104.2 dev dummy2 proto kernel scope link src 172.16.104.1 metric 261"
+		rc=$?
+	fi
+	log_test $rc 0 "Modify metric of address with peer route"
+
 	$IP li del dummy1
 	$IP li del dummy2
 	cleanup
tools/testing/selftests/net/l2tp.sh
+2 -1
tools/testing/selftests/net/reuseport_dualstack.c

···
 {
 	struct epoll_event ev;
 	int epfd, i, test_fd;
-	uint16_t test_family;
+	int test_family;
 	socklen_t len;

 	epfd = epoll_create(1);

···
 	send_from_v4(proto);

 	test_fd = receive_once(epfd, proto);
+	len = sizeof(test_family);
 	if (getsockopt(test_fd, SOL_SOCKET, SO_DOMAIN, &test_family, &len))
 		error(1, errno, "failed to read socket domain");
 	if (test_family != AF_INET)
+4 -2
tools/usb/usbip/libsrc/usbip_device_driver.c

···
 	FILE *fd = NULL;
 	struct udev_device *plat;
 	const char *speed;
-	int ret = 0;
+	size_t ret;

 	plat = udev_device_get_parent(sdev);
 	path = udev_device_get_syspath(plat);

···
 	if (!fd)
 		return -1;
 	ret = fread((char *) &descr, sizeof(descr), 1, fd);
-	if (ret < 0)
+	if (ret != 1) {
+		err("Cannot read vudc device descr file: %s", strerror(errno));
 		goto err;
+	}
 	fclose(fd);

 	copy_descr_attr(dev, &descr, bDeviceClass);
+26 -22
virt/kvm/kvm_main.c

···
 
 static struct kvm *kvm_create_vm(unsigned long type)
 {
-	int r, i;
 	struct kvm *kvm = kvm_arch_alloc_vm();
+	int r = -ENOMEM;
+	int i;

 	if (!kvm)
 		return ERR_PTR(-ENOMEM);

···
 	mutex_init(&kvm->lock);
 	mutex_init(&kvm->irq_lock);
 	mutex_init(&kvm->slots_lock);
-	refcount_set(&kvm->users_count, 1);
 	INIT_LIST_HEAD(&kvm->devices);

+	BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		struct kvm_memslots *slots = kvm_alloc_memslots();
+
+		if (!slots)
+			goto out_err_no_arch_destroy_vm;
+		/* Generations must be different for each address space. */
+		slots->generation = i;
+		rcu_assign_pointer(kvm->memslots[i], slots);
+	}
+
+	for (i = 0; i < KVM_NR_BUSES; i++) {
+		rcu_assign_pointer(kvm->buses[i],
+				   kzalloc(sizeof(struct kvm_io_bus), GFP_KERNEL_ACCOUNT));
+		if (!kvm->buses[i])
+			goto out_err_no_arch_destroy_vm;
+	}
+
+	refcount_set(&kvm->users_count, 1);
 	r = kvm_arch_init_vm(kvm, type);
 	if (r)
-		goto out_err_no_disable;
+		goto out_err_no_arch_destroy_vm;

 	r = hardware_enable_all();
 	if (r)

···
 	INIT_HLIST_HEAD(&kvm->irq_ack_notifier_list);
 #endif

-	BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
-
-	r = -ENOMEM;
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		struct kvm_memslots *slots = kvm_alloc_memslots();
-		if (!slots)
-			goto out_err_no_srcu;
-		/* Generations must be different for each address space. */
-		slots->generation = i;
-		rcu_assign_pointer(kvm->memslots[i], slots);
-	}
-
 	if (init_srcu_struct(&kvm->srcu))
 		goto out_err_no_srcu;
 	if (init_srcu_struct(&kvm->irq_srcu))
 		goto out_err_no_irq_srcu;
-	for (i = 0; i < KVM_NR_BUSES; i++) {
-		rcu_assign_pointer(kvm->buses[i],
-			kzalloc(sizeof(struct kvm_io_bus), GFP_KERNEL_ACCOUNT));
-		if (!kvm->buses[i])
-			goto out_err;
-	}

 	r = kvm_init_mmu_notifier(kvm);
 	if (r)

···
 out_err_no_srcu:
 	hardware_disable_all();
 out_err_no_disable:
-	refcount_set(&kvm->users_count, 0);
+	kvm_arch_destroy_vm(kvm);
+	WARN_ON_ONCE(!refcount_dec_and_test(&kvm->users_count));
+out_err_no_arch_destroy_vm:
 	for (i = 0; i < KVM_NR_BUSES; i++)
 		kfree(kvm_get_bus(kvm, i));
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)