···
 Jaegeuk Kim <jaegeuk@kernel.org> <jaegeuk@google.com>
 Jaegeuk Kim <jaegeuk@kernel.org> <jaegeuk@motorola.com>
 Jaegeuk Kim <jaegeuk@kernel.org> <jaegeuk.kim@samsung.com>
+Jakub Kicinski <kuba@kernel.org> <jakub.kicinski@netronome.com>
 James Bottomley <jejb@mulgrave.(none)>
 James Bottomley <jejb@titanic.il.steeleye.com>
 James E Wilson <wilson@specifix.com>
+11-2
Documentation/ABI/stable/sysfs-driver-mlxreg-io
···
 
 	The files are read only.
 
-What: /sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/jtag_enable
+What: /sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/cpld3_version
 Date: November 2018
 KernelVersion: 5.0
 Contact: Vadim Pasternak <vadimpmellanox.com>
 Description: These files show with which CPLD versions have been burned
-	on LED board.
+	on LED or Gearbox board.
 
 	The files are read only.
 
···
 	Value 1 in file means this is reset cause, 0 - otherwise.
 	Only one bit could be 1 at the same time, representing only
 	the last reset cause.
+
+	The files are read only.
+
+What: /sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/cpld4_version
+Date: November 2018
+KernelVersion: 5.0
+Contact: Vadim Pasternak <vadimpmellanox.com>
+Description: These files show with which CPLD versions have been burned
+	on LED board.
 
 	The files are read only.
 
+1-1
Documentation/admin-guide/devices.txt
···
 		182 = /dev/perfctr	Performance-monitoring counters
 		183 = /dev/hwrng	Generic random number generator
 		184 = /dev/cpu/microcode CPU microcode update interface
-		186 = /dev/atomicps	Atomic shapshot of process state data
+		186 = /dev/atomicps	Atomic snapshot of process state data
 		187 = /dev/irnet	IrNET device
 		188 = /dev/smbusbios	SMBus BIOS
 		189 = /dev/ussp_ctl	User space serial port control
+1-1
Documentation/media/v4l-drivers/meye.rst
···
 
 Besides the video4linux interface, the driver has a private interface
 for accessing the Motion Eye extended parameters (camera sharpness,
-agc, video framerate), the shapshot and the MJPEG capture facilities.
+agc, video framerate), the snapshot and the MJPEG capture facilities.
 
 This interface consists of several ioctls (prototypes and structures
 can be found in include/linux/meye.h):
···
   Red Hat	Josh Poimboeuf <jpoimboe@redhat.com>
   SUSE		Jiri Kosina <jkosina@suse.cz>
 
-  Amazon
+  Amazon	Peter Bowen <pzb@amzn.com>
   Google	Kees Cook <keescook@chromium.org>
   ============= ========================================================
 
+10-10
MAINTAINERS
···
 F: drivers/i2c/busses/i2c-altera.c
 
 ALTERA MAILBOX DRIVER
-M: Ley Foon Tan <lftan@altera.com>
+M: Ley Foon Tan <ley.foon.tan@intel.com>
 L: nios2-dev@lists.rocketboards.org (moderated for non-subscribers)
 S: Maintained
 F: drivers/mailbox/mailbox-altera.c
···
 
 ARM/ACTIONS SEMI ARCHITECTURE
 M: Andreas Färber <afaerber@suse.de>
-R: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 N: owl
···
 F: arch/mips/net/
 
 BPF JIT for NFP NICs
-M: Jakub Kicinski <jakub.kicinski@netronome.com>
+M: Jakub Kicinski <kuba@kernel.org>
 L: netdev@vger.kernel.org
 L: bpf@vger.kernel.org
 S: Supported
···
 F: net/netrom/
 
 NETRONOME ETHERNET DRIVERS
-M: Jakub Kicinski <jakub.kicinski@netronome.com>
+M: Jakub Kicinski <kuba@kernel.org>
 L: oss-drivers@netronome.com
 S: Maintained
 F: drivers/net/ethernet/netronome/
···
 M: Aviad Yehezkel <aviadye@mellanox.com>
 M: John Fastabend <john.fastabend@gmail.com>
 M: Daniel Borkmann <daniel@iogearbox.net>
-M: Jakub Kicinski <jakub.kicinski@netronome.com>
+M: Jakub Kicinski <kuba@kernel.org>
 L: netdev@vger.kernel.org
 S: Maintained
 F: net/tls/*
···
 Q: http://patchwork.kernel.org/project/linux-wireless/list/
 
 NETDEVSIM
-M: Jakub Kicinski <jakub.kicinski@netronome.com>
+M: Jakub Kicinski <kuba@kernel.org>
 S: Maintained
 F: drivers/net/netdevsim/*
 
···
 F: drivers/scsi/nsp32*
 
 NIOS2 ARCHITECTURE
-M: Ley Foon Tan <lftan@altera.com>
+M: Ley Foon Tan <ley.foon.tan@intel.com>
 L: nios2-dev@lists.rocketboards.org (moderated for non-subscribers)
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/lftan/nios2.git
 S: Maintained
···
 F: drivers/pci/controller/pci-aardvark.c
 
 PCI DRIVER FOR ALTERA PCIE IP
-M: Ley Foon Tan <lftan@altera.com>
+M: Ley Foon Tan <ley.foon.tan@intel.com>
 L: rfi@lists.rocketboards.org (moderated for non-subscribers)
 L: linux-pci@vger.kernel.org
 S: Supported
···
 F: Documentation/PCI/pci-error-recovery.rst
 
 PCI MSI DRIVER FOR ALTERA MSI IP
-M: Ley Foon Tan <lftan@altera.com>
+M: Ley Foon Tan <ley.foon.tan@intel.com>
 L: rfi@lists.rocketboards.org (moderated for non-subscribers)
 L: linux-pci@vger.kernel.org
 S: Supported
···
 M: Alexei Starovoitov <ast@kernel.org>
 M: Daniel Borkmann <daniel@iogearbox.net>
 M: David S. Miller <davem@davemloft.net>
-M: Jakub Kicinski <jakub.kicinski@netronome.com>
+M: Jakub Kicinski <kuba@kernel.org>
 M: Jesper Dangaard Brouer <hawk@kernel.org>
 M: John Fastabend <john.fastabend@gmail.com>
 L: netdev@vger.kernel.org
···
 #include <linux/ftrace.h>
 #include <asm-generic/asm-prototypes.h>
 
+long long __lshrti3(long long a, int b);
+long long __ashrti3(long long a, int b);
+long long __ashlti3(long long a, int b);
+
 #endif /* _ASM_RISCV_PROTOTYPES_H */
+10-6
arch/riscv/kernel/head.S
···
 
 #ifdef CONFIG_SMP
     li t0, CONFIG_NR_CPUS
-    bgeu a0, t0, .Lsecondary_park
+    blt a0, t0, .Lgood_cores
+    tail .Lsecondary_park
+.Lgood_cores:
 #endif
 
     /* Pick one hart to run the main boot sequence */
···
     tail smp_callin
 #endif
 
-.align 2
-.Lsecondary_park:
-    /* We lack SMP support or have too many harts, so park this hart */
-    wfi
-    j .Lsecondary_park
 END(_start)
 
 #ifdef CONFIG_RISCV_M_MODE
···
     ret
 END(reset_regs)
 #endif /* CONFIG_RISCV_M_MODE */
+
+.section ".text", "ax",@progbits
+.align 2
+.Lsecondary_park:
+    /* We lack SMP support or have too many harts, so park this hart */
+    wfi
+    j .Lsecondary_park
 
 __PAGE_ALIGNED_BSS
     /* Empty zero page */
···
 
     if (!early_ipl_comp_list_addr)
         return;
-    if (ipl_block.hdr.flags & IPL_PL_FLAG_IPLSR)
+    if (ipl_block.hdr.flags & IPL_PL_FLAG_SIPL)
         pr_info("Linux is running with Secure-IPL enabled\n");
     else
         pr_info("Linux is running with Secure-IPL disabled\n");
···
 {
     struct thermal_state *state = &per_cpu(thermal_state, cpu);
     struct device *dev = get_cpu_device(cpu);
+    u32 l;
 
     state->package_throttle.level = PACKAGE_LEVEL;
     state->core_throttle.level = CORE_LEVEL;
 
     INIT_DELAYED_WORK(&state->package_throttle.therm_work, throttle_active_work);
     INIT_DELAYED_WORK(&state->core_throttle.therm_work, throttle_active_work);
+
+    /* Unmask the thermal vector after the above workqueues are initialized. */
+    l = apic_read(APIC_LVTTHMR);
+    apic_write(APIC_LVTTHMR, l & ~APIC_LVT_MASKED);
 
     return thermal_throttle_add_dev(dev, cpu);
 }
···
 
     rdmsr(MSR_IA32_MISC_ENABLE, l, h);
     wrmsr(MSR_IA32_MISC_ENABLE, l | MSR_IA32_MISC_ENABLE_TM1, h);
-
-    /* Unmask the thermal vector: */
-    l = apic_read(APIC_LVTTHMR);
-    apic_write(APIC_LVTTHMR, l & ~APIC_LVT_MASKED);
 
     pr_info_once("CPU0: Thermal monitoring enabled (%s)\n",
                  tm2 ? "TM2" : "TM1");
+1-1
arch/x86/kernel/cpu/resctrl/core.c
···
     if (static_branch_unlikely(&rdt_mon_enable_key))
         rmdir_mondata_subdir_allrdtgrp(r, d->id);
     list_del(&d->list);
-    if (is_mbm_enabled())
+    if (r->mon_capable && is_mbm_enabled())
         cancel_delayed_work(&d->mbm_over);
     if (is_llc_occupancy_enabled() && has_busy_rmid(r, d)) {
         /*
+3-3
arch/x86/kernel/cpu/resctrl/rdtgroup.c
···
     struct rdt_domain *d;
     int cpu;
 
-    if (!zalloc_cpumask_var(&cpu_mask, GFP_KERNEL))
-        return -ENOMEM;
-
     if (level == RDT_RESOURCE_L3)
         update = l3_qos_cfg_update;
     else if (level == RDT_RESOURCE_L2)
         update = l2_qos_cfg_update;
     else
         return -EINVAL;
+
+    if (!zalloc_cpumask_var(&cpu_mask, GFP_KERNEL))
+        return -ENOMEM;
 
     r_l = &rdt_resources_all[level];
     list_for_each_entry(d, &r_l->domains, list) {
+7-2
block/blk-merge.c
···
     unsigned long mask = queue_segment_boundary(q);
 
     offset = mask & (page_to_phys(start_page) + offset);
-    return min_t(unsigned long, mask - offset + 1,
-            queue_max_segment_size(q));
+
+    /*
+     * overflow may be triggered in case of zero page physical address
+     * on 32bit arch, use queue's max segment size when that happens.
+     */
+    return min_not_zero(mask - offset + 1,
+            (unsigned long)queue_max_segment_size(q));
 }
 
 /**
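The situation the new comment describes is plain unsigned arithmetic and can be reproduced outside the kernel. A minimal user-space sketch (the helper below mimics the kernel's min_not_zero(); the 64 KiB limit is an assumed stand-in for queue_max_segment_size()):

    #include <stdio.h>

    /* Stand-in for the kernel's min_not_zero(): prefer the smaller non-zero value. */
    static unsigned long min_not_zero(unsigned long a, unsigned long b)
    {
        if (a == 0)
            return b;
        if (b == 0)
            return a;
        return a < b ? a : b;
    }

    int main(void)
    {
        unsigned long mask = ~0UL;     /* no segment boundary limit */
        unsigned long offset = 0;      /* zero page physical address, 32-bit case */
        unsigned long max_seg = 65536; /* assumed queue_max_segment_size() value */
        unsigned long wrapped = mask - offset + 1;  /* ~0UL + 1 wraps to 0 */

        /* A plain minimum would pick the wrapped 0; min_not_zero() keeps 65536. */
        printf("wrapped=%lu, segment size=%lu\n", wrapped,
               min_not_zero(wrapped, max_seg));
        return 0;
    }

With the old min_t() the wrapped value of 0 would have been returned as the maximum segment size; min_not_zero() falls back to the queue limit instead.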
+1-1
block/blk-settings.c
···
  *   storage device can address. The default of 512 covers most
  *   hardware.
  **/
-void blk_queue_logical_block_size(struct request_queue *q, unsigned short size)
+void blk_queue_logical_block_size(struct request_queue *q, unsigned int size)
 {
     q->limits.logical_block_size = size;
 
···
         return BLK_STS_IOERR;
     case BLK_ZONE_COND_EMPTY:
     case BLK_ZONE_COND_IMP_OPEN:
+    case BLK_ZONE_COND_EXP_OPEN:
+    case BLK_ZONE_COND_CLOSED:
         /* Writes must be at the write pointer position */
         if (sector != zone->wp)
             return BLK_STS_IOERR;
 
-        if (zone->cond == BLK_ZONE_COND_EMPTY)
+        if (zone->cond != BLK_ZONE_COND_EXP_OPEN)
             zone->cond = BLK_ZONE_COND_IMP_OPEN;
 
         zone->wp += nr_sectors;
+8-2
drivers/bus/ti-sysc.c
···
         return -EINVAL;
     }
 
+    /* Always add a slot for main clocks fck and ick even if unused */
+    if (!nr_fck)
+        ddata->nr_clocks++;
+    if (!nr_ick)
+        ddata->nr_clocks++;
+
     ddata->clocks = devm_kcalloc(ddata->dev,
                                  ddata->nr_clocks, sizeof(*ddata->clocks),
                                  GFP_KERNEL);
···
     struct clk *clock;
     int i, error;
 
-    if (!ddata->clocks)
+    if (!ddata->clocks || ddata->nr_clocks < SYSC_OPTFCK0 + 1)
         return 0;
 
     for (i = SYSC_OPTFCK0; i < SYSC_MAX_CLOCKS; i++) {
···
     struct clk *clock;
     int i;
 
-    if (!ddata->clocks)
+    if (!ddata->clocks || ddata->nr_clocks < SYSC_OPTFCK0 + 1)
         return;
 
     for (i = SYSC_OPTFCK0; i < SYSC_MAX_CLOCKS; i++) {
+8-2
drivers/clk/clk.c
···
     if (core->flags & CLK_IS_CRITICAL) {
         unsigned long flags;
 
-        clk_core_prepare(core);
+        ret = clk_core_prepare(core);
+        if (ret)
+            goto out;
 
         flags = clk_enable_lock();
-        clk_core_enable(core);
+        ret = clk_core_enable(core);
         clk_enable_unlock(flags);
+        if (ret) {
+            clk_core_unprepare(core);
+            goto out;
+        }
     }
 
     clk_core_reparent_orphans_nolock();
···
 #include <linux/clk-provider.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
+#include <linux/clk.h>
 
 #include "clk.h"
 #include "clk-cpu.h"
···
         exynos5_subcmus_init(ctx, ARRAY_SIZE(exynos5x_subcmus),
                              exynos5x_subcmus);
     }
+
+    /*
+     * Keep top part of G3D clock path enabled permanently to ensure
+     * that the internal busses get their clock regardless of the
+     * main G3D clock enablement status.
+     */
+    clk_prepare_enable(__clk_lookup("mout_sw_aclk_g3d"));
 
     samsung_clk_of_add_provider(np, ctx);
 }
···
 config PMS7003
     tristate "Plantower PMS7003 particulate matter sensor"
     depends on SERIAL_DEV_BUS
+    select IIO_BUFFER
     select IIO_TRIGGERED_BUFFER
     help
       Say Y here to build support for the Plantower PMS7003 particulate
+2-1
drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
···
 
     for (i = 0; i < ARRAY_SIZE(st_lsm6dsx_sensor_settings); i++) {
         for (j = 0; j < ST_LSM6DSX_MAX_ID; j++) {
-            if (id == st_lsm6dsx_sensor_settings[i].id[j].hw_id)
+            if (st_lsm6dsx_sensor_settings[i].id[j].name &&
+                id == st_lsm6dsx_sensor_settings[i].id[j].hw_id)
                 break;
         }
         if (j < ST_LSM6DSX_MAX_ID)
+5-1
drivers/iio/industrialio-buffer.c
···
             const unsigned long *mask, bool timestamp)
 {
     unsigned bytes = 0;
-    int length, i;
+    int length, i, largest = 0;
 
     /* How much space will the demuxed element take? */
     for_each_set_bit(i, mask,
···
         length = iio_storage_bytes_for_si(indio_dev, i);
         bytes = ALIGN(bytes, length);
         bytes += length;
+        largest = max(largest, length);
     }
 
     if (timestamp) {
         length = iio_storage_bytes_for_timestamp(indio_dev);
         bytes = ALIGN(bytes, length);
         bytes += length;
+        largest = max(largest, length);
     }
+
+    bytes = ALIGN(bytes, largest);
     return bytes;
 }
 
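The effect of the added final ALIGN() to the largest element is easy to see with a toy recomputation of the same arithmetic (the channel sizes below are invented; align_up() mirrors the kernel's ALIGN() for power-of-two alignments):

    #include <stdio.h>

    /* Round x up to a multiple of a (a must be a power of two), like ALIGN(). */
    static unsigned int align_up(unsigned int x, unsigned int a)
    {
        return (x + a - 1) & ~(a - 1);
    }

    int main(void)
    {
        /* Hypothetical scan: a 4-byte channel followed by a 2-byte channel. */
        unsigned int sizes[] = { 4, 2 };
        unsigned int bytes = 0, largest = 0, i;

        for (i = 0; i < 2; i++) {
            bytes = align_up(bytes, sizes[i]); /* naturally align each element */
            bytes += sizes[i];
            if (sizes[i] > largest)
                largest = sizes[i];
        }

        /* Without the final step the scan is 6 bytes, so the next repetition of
         * the 4-byte channel in the buffer would start misaligned; padding the
         * scan to the largest element (4) gives 8. */
        printf("unpadded=%u, padded=%u\n", bytes, align_up(bytes, largest));
        return 0;
    }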
···
 
 void lkdtm_UNSET_SMEP(void)
 {
-#ifdef CONFIG_X86_64
+#if IS_ENABLED(CONFIG_X86_64) && !IS_ENABLED(CONFIG_UML)
 #define MOV_CR4_DEPTH	64
     void (*direct_write_cr4)(unsigned long val);
     unsigned char *insn;
···
         native_write_cr4(cr4);
     }
 #else
-    pr_err("FAIL: this test is x86_64-only\n");
+    pr_err("XFAIL: this test is x86_64-only\n");
 #endif
 }
 
-#ifdef CONFIG_X86_32
 void lkdtm_DOUBLE_FAULT(void)
 {
+#ifdef CONFIG_X86_32
     /*
      * Trigger #DF by setting the stack limit to zero. This clobbers
      * a GDT TLS slot, which is okay because the current task will die
···
     asm volatile ("movw %0, %%ss; addl $0, (%%esp)" ::
                   "r" ((unsigned short)(GDT_ENTRY_TLS_MIN << 3)));
 
-    panic("tried to double fault but didn't die\n");
-}
+    pr_err("FAIL: tried to double fault but didn't die\n");
+#else
+    pr_err("XFAIL: this test is ia32-only\n");
 #endif
+}
+10-1
drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
···
     struct resources *r = &this->resources;
     int ret;
 
+    ret = pm_runtime_get_sync(this->dev);
+    if (ret < 0)
+        return ret;
+
     ret = gpmi_reset_block(r->gpmi_regs, false);
     if (ret)
         goto err_out;
···
      */
     writel(BM_GPMI_CTRL1_DECOUPLE_CS, r->gpmi_regs + HW_GPMI_CTRL1_SET);
 
-    return 0;
 err_out:
+    pm_runtime_mark_last_busy(this->dev);
+    pm_runtime_put_autosuspend(this->dev);
     return ret;
 }
 
···
         dev_err(this->dev, "Error setting GPMI : %d\n", ret);
         return ret;
     }
+
+    /* Set flag to get timing setup restored for next exec_op */
+    if (this->hw.clk_rate)
+        this->hw.must_apply_timings = true;
 
     /* re-init the BCH registers */
     ret = bch_set_geometry(this);
···
         ring->switch_queue = qp;
         ring->switch_port = port;
         ring->inspect = true;
-        priv->ring_map[q + port * num_tx_queues] = ring;
+        priv->ring_map[qp + port * num_tx_queues] = ring;
         qp++;
     }
 
···
     struct net_device *slave_dev;
     unsigned int num_tx_queues;
     struct net_device *dev;
-    unsigned int q, port;
+    unsigned int q, qp, port;
 
     priv = container_of(nb, struct bcm_sysport_priv, dsa_notifier);
     if (priv->netdev != info->master)
···
             continue;
 
         ring->inspect = false;
-        priv->ring_map[q + port * num_tx_queues] = NULL;
+        qp = ring->switch_queue;
+        priv->ring_map[qp + port * num_tx_queues] = NULL;
     }
 
     return 0;
+20-9
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
     struct flow_keys *keys1 = &f1->fkeys;
     struct flow_keys *keys2 = &f2->fkeys;
 
-    if (keys1->addrs.v4addrs.src == keys2->addrs.v4addrs.src &&
-        keys1->addrs.v4addrs.dst == keys2->addrs.v4addrs.dst &&
-        keys1->ports.ports == keys2->ports.ports &&
-        keys1->basic.ip_proto == keys2->basic.ip_proto &&
-        keys1->basic.n_proto == keys2->basic.n_proto &&
+    if (keys1->basic.n_proto != keys2->basic.n_proto ||
+        keys1->basic.ip_proto != keys2->basic.ip_proto)
+        return false;
+
+    if (keys1->basic.n_proto == htons(ETH_P_IP)) {
+        if (keys1->addrs.v4addrs.src != keys2->addrs.v4addrs.src ||
+            keys1->addrs.v4addrs.dst != keys2->addrs.v4addrs.dst)
+            return false;
+    } else {
+        if (memcmp(&keys1->addrs.v6addrs.src, &keys2->addrs.v6addrs.src,
+                   sizeof(keys1->addrs.v6addrs.src)) ||
+            memcmp(&keys1->addrs.v6addrs.dst, &keys2->addrs.v6addrs.dst,
+                   sizeof(keys1->addrs.v6addrs.dst)))
+            return false;
+    }
+
+    if (keys1->ports.ports == keys2->ports.ports &&
         keys1->control.flags == keys2->control.flags &&
         ether_addr_equal(f1->src_mac_addr, f2->src_mac_addr) &&
         ether_addr_equal(f1->dst_mac_addr, f2->dst_mac_addr))
···
         return -EOPNOTSUPP;
 
     /* The PF and it's VF-reps only support the switchdev framework */
-    if (!BNXT_PF(bp))
+    if (!BNXT_PF(bp) || !(bp->flags & BNXT_FLAG_DSN_VALID))
         return -EOPNOTSUPP;
 
     ppid->id_len = sizeof(bp->switch_id);
···
     put_unaligned_le32(dw, &dsn[0]);
     pci_read_config_dword(pdev, pos + 4, &dw);
     put_unaligned_le32(dw, &dsn[4]);
+    bp->flags |= BNXT_FLAG_DSN_VALID;
     return 0;
 }
 
···
 
     if (BNXT_PF(bp)) {
         /* Read the adapter's DSN to use as the eswitch switch_id */
-        rc = bnxt_pcie_dsn_get(bp, bp->switch_id);
-        if (rc)
-            goto init_err_pci_clean;
+        bnxt_pcie_dsn_get(bp, bp->switch_id);
     }
 
     /* MTU range: 60 - FW defined max */
+1-3
drivers/net/ethernet/broadcom/bnxt/bnxt.h
···
     #define BNXT_FLAG_NO_AGG_RINGS	0x20000
     #define BNXT_FLAG_RX_PAGE_MODE	0x40000
     #define BNXT_FLAG_MULTI_HOST	0x100000
+    #define BNXT_FLAG_DSN_VALID		0x200000
     #define BNXT_FLAG_DOUBLE_DB	0x400000
     #define BNXT_FLAG_CHIP_NITRO_A0	0x1000000
     #define BNXT_FLAG_DIM		0x2000000
···
     case HWRM_CFA_ENCAP_RECORD_FREE:
     case HWRM_CFA_DECAP_FILTER_ALLOC:
     case HWRM_CFA_DECAP_FILTER_FREE:
-    case HWRM_CFA_NTUPLE_FILTER_ALLOC:
-    case HWRM_CFA_NTUPLE_FILTER_FREE:
-    case HWRM_CFA_NTUPLE_FILTER_CFG:
     case HWRM_CFA_EM_FLOW_ALLOC:
     case HWRM_CFA_EM_FLOW_FREE:
     case HWRM_CFA_EM_FLOW_CFG:
+3
drivers/net/ethernet/broadcom/bnxt/bnxt_vfr.c
···
     struct net_device *dev;
     int rc, i;
 
+    if (!(bp->flags & BNXT_FLAG_DSN_VALID))
+        return -ENODEV;
+
     bp->vf_reps = kcalloc(num_vfs, sizeof(vf_rep), GFP_KERNEL);
     if (!bp->vf_reps)
         return -ENOMEM;
+17-13
drivers/net/ethernet/cadence/macb_main.c
···
     .mac_link_up = macb_mac_link_up,
 };
 
+static bool macb_phy_handle_exists(struct device_node *dn)
+{
+    dn = of_parse_phandle(dn, "phy-handle", 0);
+    of_node_put(dn);
+    return dn != NULL;
+}
+
 static int macb_phylink_connect(struct macb *bp)
 {
+    struct device_node *dn = bp->pdev->dev.of_node;
     struct net_device *dev = bp->dev;
     struct phy_device *phydev;
     int ret;
 
-    if (bp->pdev->dev.of_node &&
-        of_parse_phandle(bp->pdev->dev.of_node, "phy-handle", 0)) {
-        ret = phylink_of_phy_connect(bp->phylink, bp->pdev->dev.of_node,
-                                     0);
-        if (ret) {
-            netdev_err(dev, "Could not attach PHY (%d)\n", ret);
-            return ret;
-        }
-    } else {
+    if (dn)
+        ret = phylink_of_phy_connect(bp->phylink, dn, 0);
+
+    if (!dn || (ret && !macb_phy_handle_exists(dn))) {
         phydev = phy_find_first(bp->mii_bus);
         if (!phydev) {
             netdev_err(dev, "no PHY found\n");
···
 
         /* attach the mac to the phy */
         ret = phylink_connect_phy(bp->phylink, phydev);
-        if (ret) {
-            netdev_err(dev, "Could not attach to PHY (%d)\n", ret);
-            return ret;
-        }
+    }
+
+    if (ret) {
+        netdev_err(dev, "Could not attach PHY (%d)\n", ret);
+        return ret;
     }
 
     phylink_start(bp->phylink);
+11-3
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
···
 {
     struct port_info *pi = netdev_priv(dev);
     struct adapter *adap = pi->adapter;
+    struct ch_sched_queue qe = { 0 };
+    struct ch_sched_params p = { 0 };
     struct sched_class *e;
-    struct ch_sched_params p;
-    struct ch_sched_queue qe;
     u32 req_rate;
     int err = 0;
 
···
             "Failed to rate limit on queue %d. Link Down?\n",
             index);
         return -EINVAL;
+    }
+
+    qe.queue = index;
+    e = cxgb4_sched_queue_lookup(dev, &qe);
+    if (e && e->info.u.params.level != SCHED_CLASS_LEVEL_CL_RL) {
+        dev_err(adap->pdev_dev,
+                "Queue %u already bound to class %u of type: %u\n",
+                index, e->idx, e->info.u.params.level);
+        return -EBUSY;
     }
 
     /* Convert from Mbps to Kbps */
···
         return 0;
 
     /* Fetch any available unused or matching scheduling class */
-    memset(&p, 0, sizeof(p));
     p.type = SCHED_CLASS_TYPE_PACKET;
     p.u.params.level = SCHED_CLASS_LEVEL_CL_RL;
     p.u.params.mode = SCHED_CLASS_MODE_CLASS;
···
 #define HNS3_INNER_VLAN_TAG	1
 #define HNS3_OUTER_VLAN_TAG	2
 
+#define HNS3_MIN_TX_LEN		33U
+
 /* hns3_pci_tbl - PCI Device ID Table
  *
  * Last entry must be all 0s
···
     struct sk_buff *frag_skb;
     int bd_num = 0;
     int ret;
+
+    /* Hardware can only handle short frames above 32 bytes */
+    if (skb_put_padto(skb, HNS3_MIN_TX_LEN))
+        return NETDEV_TX_OK;
 
     /* Prefetch the data used later */
     prefetch(skb->data);
···
     struct ixgbe_hw *hw = &adapter->hw;
     int count = 0;
 
-    if ((netdev_uc_count(netdev)) > 10) {
-        pr_err("Too many unicast filters - No Space\n");
-        return -ENOSPC;
-    }
-
     if (!netdev_uc_empty(netdev)) {
         struct netdev_hw_addr *ha;
 
+10-9
drivers/net/ethernet/marvell/mvneta.c
···
 mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
                struct bpf_prog *prog, struct xdp_buff *xdp)
 {
-    u32 ret, act = bpf_prog_run_xdp(prog, xdp);
+    unsigned int len;
+    u32 ret, act;
+
+    len = xdp->data_end - xdp->data_hard_start - pp->rx_offset_correction;
+    act = bpf_prog_run_xdp(prog, xdp);
 
     switch (act) {
     case XDP_PASS:
···
         if (err) {
             ret = MVNETA_XDP_DROPPED;
             __page_pool_put_page(rxq->page_pool,
-                    virt_to_head_page(xdp->data),
-                    xdp->data_end - xdp->data_hard_start,
-                    true);
+                                 virt_to_head_page(xdp->data),
+                                 len, true);
         } else {
             ret = MVNETA_XDP_REDIR;
         }
···
         ret = mvneta_xdp_xmit_back(pp, xdp);
         if (ret != MVNETA_XDP_TX)
             __page_pool_put_page(rxq->page_pool,
-                    virt_to_head_page(xdp->data),
-                    xdp->data_end - xdp->data_hard_start,
-                    true);
+                                 virt_to_head_page(xdp->data),
+                                 len, true);
         break;
     default:
         bpf_warn_invalid_xdp_action(act);
···
     case XDP_DROP:
         __page_pool_put_page(rxq->page_pool,
                              virt_to_head_page(xdp->data),
-                             xdp->data_end - xdp->data_hard_start,
-                             true);
+                             len, true);
         ret = MVNETA_XDP_DROPPED;
         break;
     }
+1-1
drivers/net/ethernet/mellanox/mlx4/crdump.c
···
     crdump_enable_crspace_access(dev, cr_space);
 
     /* Get the available snapshot ID for the dumps */
-    id = devlink_region_shapshot_id_get(devlink);
+    id = devlink_region_snapshot_id_get(devlink);
 
     /* Try to capture dumps */
     mlx4_crdump_collect_crspace(dev, cr_space, id);
···
     u64 len;
     int err;
 
+    if (skb_cow_head(skb, MLXSW_TXHDR_LEN)) {
+        this_cpu_inc(mlxsw_sx_port->pcpu_stats->tx_dropped);
+        dev_kfree_skb_any(skb);
+        return NETDEV_TX_OK;
+    }
+
     memset(skb->cb, 0, sizeof(struct mlxsw_skb_cb));
 
     if (mlxsw_core_skb_transmit_busy(mlxsw_sx->core, &tx_info))
         return NETDEV_TX_BUSY;
 
-    if (unlikely(skb_headroom(skb) < MLXSW_TXHDR_LEN)) {
-        struct sk_buff *skb_orig = skb;
-
-        skb = skb_realloc_headroom(skb, MLXSW_TXHDR_LEN);
-        if (!skb) {
-            this_cpu_inc(mlxsw_sx_port->pcpu_stats->tx_dropped);
-            dev_kfree_skb_any(skb_orig);
-            return NETDEV_TX_OK;
-        }
-        dev_consume_skb_any(skb_orig);
-    }
     mlxsw_sx_txhdr_construct(skb, &tx_info);
     /* TX header is consumed by HW on the way so we shouldn't count its
      * bytes as being sent.
···
     if (attr->max_size && (attr->max_size > size))
         size = attr->max_size;
 
-    skb = netdev_alloc_skb_ip_align(priv->dev, size);
+    skb = netdev_alloc_skb(priv->dev, size);
     if (!skb)
         return NULL;
 
···
                 struct net_device *orig_ndev)
 {
     struct stmmac_test_priv *tpriv = pt->af_packet_priv;
+    unsigned char *src = tpriv->packet->src;
+    unsigned char *dst = tpriv->packet->dst;
     struct stmmachdr *shdr;
     struct ethhdr *ehdr;
     struct udphdr *uhdr;
···
         goto out;
 
     ehdr = (struct ethhdr *)skb_mac_header(skb);
-    if (tpriv->packet->dst) {
-        if (!ether_addr_equal(ehdr->h_dest, tpriv->packet->dst))
+    if (dst) {
+        if (!ether_addr_equal_unaligned(ehdr->h_dest, dst))
             goto out;
     }
     if (tpriv->packet->sarc) {
-        if (!ether_addr_equal(ehdr->h_source, ehdr->h_dest))
+        if (!ether_addr_equal_unaligned(ehdr->h_source, ehdr->h_dest))
             goto out;
-    } else if (tpriv->packet->src) {
-        if (!ether_addr_equal(ehdr->h_source, tpriv->packet->src))
+    } else if (src) {
+        if (!ether_addr_equal_unaligned(ehdr->h_source, src))
             goto out;
     }
 
···
     struct ethhdr *ehdr;
 
     ehdr = (struct ethhdr *)skb_mac_header(skb);
-    if (!ether_addr_equal(ehdr->h_source, orig_ndev->dev_addr))
+    if (!ether_addr_equal_unaligned(ehdr->h_source, orig_ndev->dev_addr))
         goto out;
     if (ehdr->h_proto != htons(ETH_P_PAUSE))
         goto out;
···
     if (tpriv->vlan_id) {
         if (skb->vlan_proto != htons(proto))
             goto out;
-        if (skb->vlan_tci != tpriv->vlan_id)
+        if (skb->vlan_tci != tpriv->vlan_id) {
+            /* Means filter did not work. */
+            tpriv->ok = false;
+            complete(&tpriv->comp);
             goto out;
+        }
     }
 
     ehdr = (struct ethhdr *)skb_mac_header(skb);
-    if (!ether_addr_equal(ehdr->h_dest, tpriv->packet->dst))
+    if (!ether_addr_equal_unaligned(ehdr->h_dest, tpriv->packet->dst))
         goto out;
 
     ihdr = ip_hdr(skb);
···
 {
     int ret, prev_cap = priv->dma_cap.vlhash;
 
+    if (!(priv->dev->features & NETIF_F_HW_VLAN_CTAG_FILTER))
+        return -EOPNOTSUPP;
+
     priv->dma_cap.vlhash = 0;
     ret = __stmmac_test_vlanfilt(priv);
     priv->dma_cap.vlhash = prev_cap;
···
 static int stmmac_test_dvlanfilt_perfect(struct stmmac_priv *priv)
 {
     int ret, prev_cap = priv->dma_cap.vlhash;
+
+    if (!(priv->dev->features & NETIF_F_HW_VLAN_STAG_FILTER))
+        return -EOPNOTSUPP;
 
     priv->dma_cap.vlhash = 0;
     ret = __stmmac_test_dvlanfilt(priv);
···
     struct stmmac_packet_attrs attr = { };
     struct flow_dissector *dissector;
     struct flow_cls_offload *cls;
+    int ret, old_enable = 0;
     struct flow_rule *rule;
-    int ret;
 
     if (!tc_can_offload(priv->dev))
         return -EOPNOTSUPP;
     if (!priv->dma_cap.l3l4fnum)
         return -EOPNOTSUPP;
-    if (priv->rss.enable)
+    if (priv->rss.enable) {
+        old_enable = priv->rss.enable;
+        priv->rss.enable = false;
         stmmac_rss_configure(priv, priv->hw, NULL,
                              priv->plat->rx_queues_to_use);
+    }
 
     dissector = kzalloc(sizeof(*dissector), GFP_KERNEL);
     if (!dissector) {
···
 cleanup_dissector:
     kfree(dissector);
 cleanup_rss:
-    if (priv->rss.enable) {
+    if (old_enable) {
+        priv->rss.enable = old_enable;
         stmmac_rss_configure(priv, priv->hw, &priv->rss,
                              priv->plat->rx_queues_to_use);
     }
···
     struct stmmac_packet_attrs attr = { };
     struct flow_dissector *dissector;
     struct flow_cls_offload *cls;
+    int ret, old_enable = 0;
     struct flow_rule *rule;
-    int ret;
 
     if (!tc_can_offload(priv->dev))
         return -EOPNOTSUPP;
     if (!priv->dma_cap.l3l4fnum)
         return -EOPNOTSUPP;
-    if (priv->rss.enable)
+    if (priv->rss.enable) {
+        old_enable = priv->rss.enable;
+        priv->rss.enable = false;
         stmmac_rss_configure(priv, priv->hw, NULL,
                              priv->plat->rx_queues_to_use);
+    }
 
     dissector = kzalloc(sizeof(*dissector), GFP_KERNEL);
     if (!dissector) {
···
 cleanup_dissector:
     kfree(dissector);
 cleanup_rss:
-    if (priv->rss.enable) {
+    if (old_enable) {
+        priv->rss.enable = old_enable;
         stmmac_rss_configure(priv, priv->hw, &priv->rss,
                              priv->plat->rx_queues_to_use);
     }
···
     struct arphdr *ahdr;
 
     ehdr = (struct ethhdr *)skb_mac_header(skb);
-    if (!ether_addr_equal(ehdr->h_dest, tpriv->packet->src))
+    if (!ether_addr_equal_unaligned(ehdr->h_dest, tpriv->packet->src))
         goto out;
 
     ahdr = arp_hdr(skb);
+4
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
···
 {
     int ret = 0;
 
+    /* When RSS is enabled, the filtering will be bypassed */
+    if (priv->rss.enable)
+        return -EBUSY;
+
     switch (cls->command) {
     case FLOW_CLS_REPLACE:
         ret = tc_add_flow(priv, cls);
···
 
     get_random_bytes(dummy_data, NSIM_DEV_DUMMY_REGION_SIZE);
 
-    id = devlink_region_shapshot_id_get(priv_to_devlink(nsim_dev));
+    id = devlink_region_snapshot_id_get(priv_to_devlink(nsim_dev));
     err = devlink_region_snapshot_create(nsim_dev->dummy_region,
                                          dummy_data, id, kfree);
     if (err) {
+4-4
drivers/net/phy/Kconfig
···
       Currently supports dm9161e and dm9131
 
 config DP83822_PHY
-    tristate "Texas Instruments DP83822 PHY"
+    tristate "Texas Instruments DP83822/825 PHYs"
     ---help---
-      Supports the DP83822 PHY.
+      Supports the DP83822 and DP83825I PHYs.
 
 config DP83TC811_PHY
-    tristate "Texas Instruments DP83TC822 PHY"
+    tristate "Texas Instruments DP83TC811 PHY"
     ---help---
-      Supports the DP83TC822 PHY.
+      Supports the DP83TC811 PHY.
 
 config DP83848_PHY
     tristate "Texas Instruments DP83848 PHY"
+7-1
drivers/net/phy/dp83867.c
···
 #define DP83867_PHYCR_FIFO_DEPTH_MAX		0x03
 #define DP83867_PHYCR_FIFO_DEPTH_MASK		GENMASK(15, 14)
 #define DP83867_PHYCR_RESERVED_MASK		BIT(11)
+#define DP83867_PHYCR_FORCE_LINK_GOOD		BIT(10)
 
 /* RGMIIDCTL bits */
 #define DP83867_RGMII_TX_CLK_DELAY_MAX		0xf
···
 
     usleep_range(10, 20);
 
-    return 0;
+    /* After reset FORCE_LINK_GOOD bit is set. Although the
+     * default value should be unset. Disable FORCE_LINK_GOOD
+     * for the phy to work properly.
+     */
+    return phy_modify(phydev, MII_DP83867_PHYCTRL,
+                      DP83867_PHYCR_FORCE_LINK_GOOD, 0);
 }
 
 static struct phy_driver dp83867_driver[] = {
···
         drvres = ap_drv->flags & AP_DRIVER_FLAG_DEFAULT;
         if (!!devres != !!drvres)
             return -ENODEV;
-        /* (re-)init queue's state machine */
-        ap_queue_reinit_state(to_ap_queue(dev));
     }
 
     /* Add queue/card to list of active queues/cards */
+1-1
drivers/s390/crypto/ap_bus.h
···
 void ap_queue_remove(struct ap_queue *aq);
 void ap_queue_suspend(struct ap_device *ap_dev);
 void ap_queue_resume(struct ap_device *ap_dev);
-void ap_queue_reinit_state(struct ap_queue *aq);
+void ap_queue_init_state(struct ap_queue *aq);
 
 struct ap_card *ap_card_create(int id, int queue_depth, int raw_device_type,
                                int comp_device_type, unsigned int functions);
···
 
 int vnic_dev_hang_notify(struct vnic_dev *vdev)
 {
-    u64 a0, a1;
+    u64 a0 = 0, a1 = 0;
     int wait = 1000;
     return vnic_dev_cmd(vdev, CMD_HANG_NOTIFY, &a0, &a1, wait);
 }
 
 int vnic_dev_mac_addr(struct vnic_dev *vdev, u8 *mac_addr)
 {
-    u64 a0, a1;
+    u64 a[2] = {};
     int wait = 1000;
     int err, i;
 
     for (i = 0; i < ETH_ALEN; i++)
         mac_addr[i] = 0;
 
-    err = vnic_dev_cmd(vdev, CMD_MAC_ADDR, &a0, &a1, wait);
+    err = vnic_dev_cmd(vdev, CMD_MAC_ADDR, &a[0], &a[1], wait);
     if (err)
         return err;
 
     for (i = 0; i < ETH_ALEN; i++)
-        mac_addr[i] = ((u8 *)&a0)[i];
+        mac_addr[i] = ((u8 *)&a)[i];
 
     return 0;
 }
···
 
 void vnic_dev_add_addr(struct vnic_dev *vdev, u8 *addr)
 {
-    u64 a0 = 0, a1 = 0;
+    u64 a[2] = {};
     int wait = 1000;
     int err;
     int i;
 
     for (i = 0; i < ETH_ALEN; i++)
-        ((u8 *)&a0)[i] = addr[i];
+        ((u8 *)&a)[i] = addr[i];
 
-    err = vnic_dev_cmd(vdev, CMD_ADDR_ADD, &a0, &a1, wait);
+    err = vnic_dev_cmd(vdev, CMD_ADDR_ADD, &a[0], &a[1], wait);
     if (err)
         pr_err("Can't add addr [%pM], %d\n", addr, err);
 }
 
 void vnic_dev_del_addr(struct vnic_dev *vdev, u8 *addr)
 {
-    u64 a0 = 0, a1 = 0;
+    u64 a[2] = {};
     int wait = 1000;
     int err;
     int i;
 
     for (i = 0; i < ETH_ALEN; i++)
-        ((u8 *)&a0)[i] = addr[i];
+        ((u8 *)&a)[i] = addr[i];
 
-    err = vnic_dev_cmd(vdev, CMD_ADDR_DEL, &a0, &a1, wait);
+    err = vnic_dev_cmd(vdev, CMD_ADDR_DEL, &a[0], &a[1], wait);
     if (err)
         pr_err("Can't del addr [%pM], %d\n", addr, err);
 }
+3-1
drivers/scsi/sd.c
···
     u8 type;
     int ret = 0;
 
-    if (scsi_device_protection(sdp) == 0 || (buffer[12] & 1) == 0)
+    if (scsi_device_protection(sdp) == 0 || (buffer[12] & 1) == 0) {
+        sdkp->protection_type = 0;
         return ret;
+    }
 
     type = ((buffer[12] >> 1) & 7) + 1; /* P_TYPE 0 = Type 1 */
 
+3-1
drivers/scsi/storvsc_drv.c
···
      */
     host->sg_tablesize = (stor_device->max_transfer_bytes >> PAGE_SHIFT);
     /*
+     * For non-IDE disks, the host supports multiple channels.
      * Set the number of HW queues we are supporting.
      */
-    host->nr_hw_queues = num_present_cpus();
+    if (!dev_is_ide)
+        host->nr_hw_queues = num_present_cpus();
 
     /*
      * Set the error handler work queue.
+15-9
drivers/soc/amlogic/meson-ee-pwrc.c
···
                                    struct meson_ee_pwrc *pwrc,
                                    struct meson_ee_pwrc_domain *dom)
 {
+    int ret;
+
     dom->pwrc = pwrc;
     dom->num_rstc = dom->desc.reset_names_count;
     dom->num_clks = dom->desc.clk_names_count;
···
      * prepare/enable counters won't be in sync.
      */
     if (dom->num_clks && dom->desc.get_power && !dom->desc.get_power(dom)) {
-        int ret = clk_bulk_prepare_enable(dom->num_clks, dom->clks);
+        ret = clk_bulk_prepare_enable(dom->num_clks, dom->clks);
         if (ret)
             return ret;
 
-        pm_genpd_init(&dom->base, &pm_domain_always_on_gov, false);
-    } else
-        pm_genpd_init(&dom->base, NULL,
-                      (dom->desc.get_power ?
-                       dom->desc.get_power(dom) : true));
+        ret = pm_genpd_init(&dom->base, &pm_domain_always_on_gov,
+                            false);
+        if (ret)
+            return ret;
+    } else {
+        ret = pm_genpd_init(&dom->base, NULL,
+                            (dom->desc.get_power ?
+                             dom->desc.get_power(dom) : true));
+        if (ret)
+            return ret;
+    }
 
     return 0;
 }
···
         pwrc->xlate.domains[i] = &dom->base;
     }
 
-    of_genpd_add_provider_onecell(pdev->dev.of_node, &pwrc->xlate);
-
-    return 0;
+    return of_genpd_add_provider_onecell(pdev->dev.of_node, &pwrc->xlate);
 }
 
 static void meson_ee_pwrc_shutdown(struct platform_device *pdev)
···
         }
     }
 
-    if (!rv)
-        return -ENODATA;
-
     /* Second, find the set of routes valid for this device. */
     for (i = 0; ni_device_routes_list[i]; ++i) {
         if (memcmp(ni_device_routes_list[i]->device, board_name,
···
         }
     }
 
-    if (!dr)
-        return -ENODATA;
-
     tables->route_values = rv;
     tables->valid_routes = dr;
+
+    if (!rv || !dr)
+        return -ENODATA;
 
     return 0;
 }
···
                          const struct ni_route_tables *tables)
 {
     int src;
+
+    if (!tables->route_values)
+        return -EINVAL;
 
     dest = B(dest); /* subtract NI names offset */
     /* ensure we are not going to under/over run the route value table */
···
      * PORT_OVER_CURRENT is not. So check for any of them.
      */
     if (udev || (portstatus & USB_PORT_STAT_CONNECTION) ||
+        (portchange & USB_PORT_STAT_C_CONNECTION) ||
         (portstatus & USB_PORT_STAT_OVERCURRENT) ||
         (portchange & USB_PORT_STAT_C_OVERCURRENT))
         set_bit(port1, hub->change_bits);
+5-1
drivers/usb/serial/ch341.c
···
 static int ch341_reset_resume(struct usb_serial *serial)
 {
     struct usb_serial_port *port = serial->port[0];
-    struct ch341_private *priv = usb_get_serial_port_data(port);
+    struct ch341_private *priv;
     int ret;
+
+    priv = usb_get_serial_port_data(port);
+    if (!priv)
+        return 0;
 
     /* reconfigure ch341 serial port after bus-reset */
     ch341_configure(serial->dev, priv);
+9-7
drivers/usb/serial/io_edgeport.c
···
     if (txCredits) {
         port = edge_serial->serial->port[portNumber];
         edge_port = usb_get_serial_port_data(port);
-        if (edge_port->open) {
+        if (edge_port && edge_port->open) {
             spin_lock_irqsave(&edge_port->ep_lock,
                               flags);
             edge_port->txCredits += txCredits;
···
 static void process_rcvd_data(struct edgeport_serial *edge_serial,
                               unsigned char *buffer, __u16 bufferLength)
 {
-    struct device *dev = &edge_serial->serial->dev->dev;
+    struct usb_serial *serial = edge_serial->serial;
+    struct device *dev = &serial->dev->dev;
     struct usb_serial_port *port;
     struct edgeport_port *edge_port;
     __u16 lastBufferLength;
···
 
             /* spit this data back into the tty driver if this
                port is open */
-            if (rxLen) {
-                port = edge_serial->serial->port[
-                            edge_serial->rxPort];
+            if (rxLen && edge_serial->rxPort < serial->num_ports) {
+                port = serial->port[edge_serial->rxPort];
                 edge_port = usb_get_serial_port_data(port);
-                if (edge_port->open) {
+                if (edge_port && edge_port->open) {
                     dev_dbg(dev, "%s - Sending %d bytes to TTY for port %d\n",
                             __func__, rxLen,
                             edge_serial->rxPort);
···
                             rxLen);
                     edge_port->port->icount.rx += rxLen;
                 }
-                buffer += rxLen;
             }
+            buffer += rxLen;
             break;
 
         case EXPECT_HDR3: /* Expect 3rd byte of status header */
···
     __u8 code = edge_serial->rxStatusCode;
 
     /* switch the port pointer to the one being currently talked about */
+    if (edge_serial->rxPort >= edge_serial->serial->num_ports)
+        return;
     port = edge_serial->serial->port[edge_serial->rxPort];
     edge_port = usb_get_serial_port_data(port);
     if (edge_port == NULL) {
+4
drivers/usb/serial/keyspan.c
···
     for (i = 0; i < serial->num_ports; ++i) {
         port = serial->port[i];
         p_priv = usb_get_serial_port_data(port);
+        if (!p_priv)
+            continue;
 
         if (p_priv->resend_cont) {
             dev_dbg(&port->dev, "%s - sending setup\n", __func__);
···
     for (i = 0; i < serial->num_ports; ++i) {
         port = serial->port[i];
         p_priv = usb_get_serial_port_data(port);
+        if (!p_priv)
+            continue;
 
         if (p_priv->resend_cont) {
             dev_dbg(&port->dev, "%s - sending setup\n", __func__);
···
     u8 newMSR = (u8) *ch;
     unsigned long flags;
 
+    /* May be called from qt2_process_read_urb() for an unbound port. */
     port_priv = usb_get_serial_port_data(port);
+    if (!port_priv)
+        return;
 
     spin_lock_irqsave(&port_priv->lock, flags);
     port_priv->shadowMSR = newMSR;
···
     unsigned long flags;
     u8 newLSR = (u8) *ch;
 
+    /* May be called from qt2_process_read_urb() for an unbound port. */
     port_priv = usb_get_serial_port_data(port);
+    if (!port_priv)
+        return;
 
     if (newLSR & UART_LSR_BI)
         newLSR &= (u8) (UART_LSR_OE | UART_LSR_BI);
···
         return -EINVAL;
     }
 
+    /* Prevent individual ports from being unbound. */
+    driver->driver.suppress_bind_attrs = true;
+
     usb_serial_operations_init(driver);
 
     /* Add this device to our list of devices */
···
 }
 
 static int btrfs_unlink_subvol(struct btrfs_trans_handle *trans,
-                               struct inode *dir, u64 objectid,
-                               const char *name, int name_len)
+                               struct inode *dir, struct dentry *dentry)
 {
     struct btrfs_root *root = BTRFS_I(dir)->root;
+    struct btrfs_inode *inode = BTRFS_I(d_inode(dentry));
     struct btrfs_path *path;
     struct extent_buffer *leaf;
     struct btrfs_dir_item *di;
     struct btrfs_key key;
+    const char *name = dentry->d_name.name;
+    int name_len = dentry->d_name.len;
     u64 index;
     int ret;
+    u64 objectid;
     u64 dir_ino = btrfs_ino(BTRFS_I(dir));
+
+    if (btrfs_ino(inode) == BTRFS_FIRST_FREE_OBJECTID) {
+        objectid = inode->root->root_key.objectid;
+    } else if (btrfs_ino(inode) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID) {
+        objectid = inode->location.objectid;
+    } else {
+        WARN_ON(1);
+        return -EINVAL;
+    }
 
     path = btrfs_alloc_path();
     if (!path)
···
     }
     btrfs_release_path(path);
 
-    ret = btrfs_del_root_ref(trans, objectid, root->root_key.objectid,
-                             dir_ino, &index, name, name_len);
-    if (ret < 0) {
-        if (ret != -ENOENT) {
-            btrfs_abort_transaction(trans, ret);
-            goto out;
-        }
+    /*
+     * This is a placeholder inode for a subvolume we didn't have a
+     * reference to at the time of the snapshot creation. In the meantime
+     * we could have renamed the real subvol link into our snapshot, so
+     * depending on btrfs_del_root_ref to return -ENOENT here is incorret.
+     * Instead simply lookup the dir_index_item for this entry so we can
+     * remove it. Otherwise we know we have a ref to the root and we can
+     * call btrfs_del_root_ref, and it _shouldn't_ fail.
+     */
+    if (btrfs_ino(inode) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID) {
         di = btrfs_search_dir_index_item(root, path, dir_ino,
                                          name, name_len);
         if (IS_ERR_OR_NULL(di)) {
···
         leaf = path->nodes[0];
         btrfs_item_key_to_cpu(leaf, &key, path->slots[0]);
         index = key.offset;
+        btrfs_release_path(path);
+    } else {
+        ret = btrfs_del_root_ref(trans, objectid,
+                                 root->root_key.objectid, dir_ino,
+                                 &index, name, name_len);
+        if (ret) {
+            btrfs_abort_transaction(trans, ret);
+            goto out;
+        }
     }
-    btrfs_release_path(path);
 
     ret = btrfs_delete_delayed_dir_index(trans, BTRFS_I(dir), index);
     if (ret) {
···
 
     btrfs_record_snapshot_destroy(trans, BTRFS_I(dir));
 
-    ret = btrfs_unlink_subvol(trans, dir, dest->root_key.objectid,
-                              dentry->d_name.name, dentry->d_name.len);
+    ret = btrfs_unlink_subvol(trans, dir, dentry);
     if (ret) {
         err = ret;
         btrfs_abort_transaction(trans, ret);
···
         return PTR_ERR(trans);
 
     if (unlikely(btrfs_ino(BTRFS_I(inode)) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID)) {
-        err = btrfs_unlink_subvol(trans, dir,
-                                  BTRFS_I(inode)->location.objectid,
-                                  dentry->d_name.name,
-                                  dentry->d_name.len);
+        err = btrfs_unlink_subvol(trans, dir, dentry);
         goto out;
     }
 
···
     u64 new_ino = btrfs_ino(BTRFS_I(new_inode));
     u64 old_idx = 0;
     u64 new_idx = 0;
-    u64 root_objectid;
     int ret;
     bool root_log_pinned = false;
     bool dest_log_pinned = false;
···
 
     /* src is a subvolume */
     if (old_ino == BTRFS_FIRST_FREE_OBJECTID) {
-        root_objectid = BTRFS_I(old_inode)->root->root_key.objectid;
-        ret = btrfs_unlink_subvol(trans, old_dir, root_objectid,
-                                  old_dentry->d_name.name,
-                                  old_dentry->d_name.len);
+        ret = btrfs_unlink_subvol(trans, old_dir, old_dentry);
     } else { /* src is an inode */
         ret = __btrfs_unlink_inode(trans, root, BTRFS_I(old_dir),
                                    BTRFS_I(old_dentry->d_inode),
···
 
     /* dest is a subvolume */
     if (new_ino == BTRFS_FIRST_FREE_OBJECTID) {
-        root_objectid = BTRFS_I(new_inode)->root->root_key.objectid;
-        ret = btrfs_unlink_subvol(trans, new_dir, root_objectid,
-                                  new_dentry->d_name.name,
-                                  new_dentry->d_name.len);
+        ret = btrfs_unlink_subvol(trans, new_dir, new_dentry);
     } else { /* dest is an inode */
         ret = __btrfs_unlink_inode(trans, dest, BTRFS_I(new_dir),
                                    BTRFS_I(new_dentry->d_inode),
···
     struct inode *new_inode = d_inode(new_dentry);
     struct inode *old_inode = d_inode(old_dentry);
     u64 index = 0;
-    u64 root_objectid;
     int ret;
     u64 old_ino = btrfs_ino(BTRFS_I(old_inode));
     bool log_pinned = false;
···
                 BTRFS_I(old_inode), 1);
 
     if (unlikely(old_ino == BTRFS_FIRST_FREE_OBJECTID)) {
-        root_objectid = BTRFS_I(old_inode)->root->root_key.objectid;
-        ret = btrfs_unlink_subvol(trans, old_dir, root_objectid,
-                                  old_dentry->d_name.name,
-                                  old_dentry->d_name.len);
+        ret = btrfs_unlink_subvol(trans, old_dir, old_dentry);
     } else {
         ret = __btrfs_unlink_inode(trans, root, BTRFS_I(old_dir),
                                    BTRFS_I(d_inode(old_dentry)),
···
         new_inode->i_ctime = current_time(new_inode);
         if (unlikely(btrfs_ino(BTRFS_I(new_inode)) ==
                      BTRFS_EMPTY_SUBVOL_DIR_OBJECTID)) {
-            root_objectid = BTRFS_I(new_inode)->location.objectid;
-            ret = btrfs_unlink_subvol(trans, new_dir, root_objectid,
-                                      new_dentry->d_name.name,
-                                      new_dentry->d_name.len);
+            ret = btrfs_unlink_subvol(trans, new_dir, new_dentry);
             BUG_ON(new_inode->i_nlink == 0);
         } else {
             ret = btrfs_unlink_inode(trans, dest, BTRFS_I(new_dir),
+13-1
fs/btrfs/ioctl.c
···
                           &sa->progress, sa->flags & BTRFS_SCRUB_READONLY,
                           0);
 
-    if (ret == 0 && copy_to_user(arg, sa, sizeof(*sa)))
+    /*
+     * Copy scrub args to user space even if btrfs_scrub_dev() returned an
+     * error. This is important as it allows user space to know how much
+     * progress scrub has done. For example, if scrub is canceled we get
+     * -ECANCELED from btrfs_scrub_dev() and return that error back to user
+     * space. Later user space can inspect the progress from the structure
+     * btrfs_ioctl_scrub_args and resume scrub from where it left off
+     * previously (btrfs-progs does this).
+     * If we fail to copy the btrfs_ioctl_scrub_args structure to user space
+     * then return -EFAULT to signal the structure was not copied or it may
+     * be corrupt and unreliable due to a partial copy.
+     */
+    if (copy_to_user(arg, sa, sizeof(*sa)))
         ret = -EFAULT;
 
     if (!(sa->flags & BTRFS_SCRUB_READONLY))
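For context on why the structure is copied back unconditionally: user space keys off the returned progress to resume. A rough sketch of a scrub caller (hypothetical fd and device id; BTRFS_IOC_SCRUB and struct btrfs_ioctl_scrub_args come from the <linux/btrfs.h> uapi header, and resuming from progress.last_physical is what btrfs-progs does):

    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/btrfs.h>

    /* Sketch only: fd is an open descriptor on the mounted filesystem. */
    static int scrub_device(int fd, unsigned long long devid,
                            unsigned long long start)
    {
        struct btrfs_ioctl_scrub_args sa;
        int ret;

        memset(&sa, 0, sizeof(sa));
        sa.devid = devid;
        sa.start = start;
        sa.end = (unsigned long long)-1;

        ret = ioctl(fd, BTRFS_IOC_SCRUB, &sa);

        /* Even on error (e.g. a canceled scrub) the args were copied back,
         * so the progress is usable for logging and for a later resume. */
        fprintf(stderr, "scrub stopped at last_physical=%llu\n",
                (unsigned long long)sa.progress.last_physical);
        return ret;
    }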
+5-1
fs/btrfs/qgroup.c
···
     u64 nr_old_roots = 0;
     int ret = 0;
 
+    /*
+     * If quotas get disabled meanwhile, the resouces need to be freed and
+     * we can't just exit here.
+     */
     if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags))
-        return 0;
+        goto out_free;
 
     if (new_roots) {
         if (!maybe_fs_roots(new_roots))
+46-5
fs/btrfs/relocation.c
···
     return 1;
 }
 
+static bool reloc_root_is_dead(struct btrfs_root *root)
+{
+    /*
+     * Pair with set_bit/clear_bit in clean_dirty_subvols and
+     * btrfs_update_reloc_root. We need to see the updated bit before
+     * trying to access reloc_root
+     */
+    smp_rmb();
+    if (test_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state))
+        return true;
+    return false;
+}
+
+/*
+ * Check if this subvolume tree has valid reloc tree.
+ *
+ * Reloc tree after swap is considered dead, thus not considered as valid.
+ * This is enough for most callers, as they don't distinguish dead reloc root
+ * from no reloc root. But should_ignore_root() below is a special case.
+ */
+static bool have_reloc_root(struct btrfs_root *root)
+{
+    if (reloc_root_is_dead(root))
+        return false;
+    if (!root->reloc_root)
+        return false;
+    return true;
+}
 
 static int should_ignore_root(struct btrfs_root *root)
 {
···
 
     if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
         return 0;
+
+    /* This root has been merged with its reloc tree, we can ignore it */
+    if (reloc_root_is_dead(root))
+        return 1;
 
     reloc_root = root->reloc_root;
     if (!reloc_root)
···
      * The subvolume has reloc tree but the swap is finished, no need to
      * create/update the dead reloc tree
      */
-    if (test_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state))
+    if (reloc_root_is_dead(root))
         return 0;
 
     if (root->reloc_root) {
···
     struct btrfs_root_item *root_item;
     int ret;
 
-    if (test_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state) ||
-        !root->reloc_root)
+    if (!have_reloc_root(root))
         goto out;
 
     reloc_root = root->reloc_root;
···
     if (fs_info->reloc_ctl->merge_reloc_tree &&
         btrfs_root_refs(root_item) == 0) {
         set_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state);
+        /*
+         * Mark the tree as dead before we change reloc_root so
+         * have_reloc_root will not touch it from now on.
+         */
+        smp_wmb();
         __del_reloc_root(reloc_root);
     }
 
···
             if (ret2 < 0 && !ret)
                 ret = ret2;
         }
+        /*
+         * Need barrier to ensure clear_bit() only happens after
+         * root->reloc_root = NULL. Pairs with have_reloc_root.
+         */
+        smp_wmb();
         clear_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state);
         btrfs_put_fs_root(root);
     } else {
···
     struct btrfs_root *root = pending->root;
     struct reloc_control *rc = root->fs_info->reloc_ctl;
 
-    if (!root->reloc_root || !rc)
+    if (!rc || !have_reloc_root(root))
         return;
 
     if (!rc->merge_reloc_tree)
···
     struct reloc_control *rc = root->fs_info->reloc_ctl;
     int ret;
 
-    if (!root->reloc_root || !rc)
+    if (!rc || !have_reloc_root(root))
         return 0;
 
     rc = root->fs_info->reloc_ctl;
···
         }
     }
 
-    num_devices = btrfs_num_devices(fs_info);
+    /*
+     * rw_devices will not change at the moment, device add/delete/replace
+     * are excluded by EXCL_OP
+     */
+    num_devices = fs_info->fs_devices->rw_devices;
 
     /*
      * SINGLE profile on-disk has no profile bit, but in-memory we have a
+3-1
fs/fuse/file.c
···
     struct fuse_args_pages *ap = &ia->ap;
     loff_t pos = page_offset(ap->pages[0]);
     size_t count = ap->num_pages << PAGE_SHIFT;
+    ssize_t res;
     int err;
 
     ap->args.out_pages = true;
···
         if (!err)
             return;
     } else {
-        err = fuse_simple_request(fc, &ap->args);
+        res = fuse_simple_request(fc, &ap->args);
+        err = res < 0 ? res : 0;
     }
     fuse_readpages_end(fc, &ap->args, err);
 }
···
                              struct iovec *iovec, struct iovec *fast_iov,
                              struct iov_iter *iter)
 {
+    if (req->opcode == IORING_OP_READ_FIXED ||
+        req->opcode == IORING_OP_WRITE_FIXED)
+        return 0;
     if (!req->io && io_alloc_async_ctx(req))
         return -ENOMEM;
 
···
     if (!force_nonblock)
         req->rw.kiocb.ki_flags &= ~IOCB_NOWAIT;
 
+    req->result = 0;
     io_size = ret;
     if (req->flags & REQ_F_LINK)
         req->result = io_size;
···
     if (!force_nonblock)
         req->rw.kiocb.ki_flags &= ~IOCB_NOWAIT;
 
+    req->result = 0;
     io_size = ret;
     if (req->flags & REQ_F_LINK)
         req->result = io_size;
···
     return false;
 }
 
+static void io_link_work_cb(struct io_wq_work **workptr)
+{
+    struct io_wq_work *work = *workptr;
+    struct io_kiocb *link = work->data;
+
+    io_queue_linked_timeout(link);
+    work->func = io_wq_submit_work;
+}
+
+static void io_wq_assign_next(struct io_wq_work **workptr, struct io_kiocb *nxt)
+{
+    struct io_kiocb *link;
+
+    io_prep_async_work(nxt, &link);
+    *workptr = &nxt->work;
+    if (link) {
+        nxt->work.flags |= IO_WQ_WORK_CB;
+        nxt->work.func = io_link_work_cb;
+        nxt->work.data = link;
+    }
+}
+
 static void io_fsync_finish(struct io_wq_work **workptr)
 {
     struct io_kiocb *req = container_of(*workptr, struct io_kiocb, work);
···
     io_cqring_add_event(req, ret);
     io_put_req_find_next(req, &nxt);
     if (nxt)
-        *workptr = &nxt->work;
+        io_wq_assign_next(workptr, nxt);
 }
 
 static int io_fsync(struct io_kiocb *req, struct io_kiocb **nxt,
···
     io_cqring_add_event(req, ret);
     io_put_req_find_next(req, &nxt);
     if (nxt)
-        *workptr = &nxt->work;
+        io_wq_assign_next(workptr, nxt);
 }
 
 static int io_sync_file_range(struct io_kiocb *req, struct io_kiocb **nxt,
···
         return;
     __io_accept(req, &nxt, false);
     if (nxt)
-        *workptr = &nxt->work;
+        io_wq_assign_next(workptr, nxt);
 }
 #endif
 
···
         req_set_fail_links(req);
     io_put_req_find_next(req, &nxt);
     if (nxt)
-        *workptr = &nxt->work;
+        io_wq_assign_next(workptr, nxt);
 }
 
 static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
···
         return ret;
 
     if (ctx->flags & IORING_SETUP_IOPOLL) {
+        const bool in_async = io_wq_current_is_worker();
+
         if (req->result == -EAGAIN)
             return -EAGAIN;
 
+        /* workqueue context doesn't hold uring_lock, grab it now */
+        if (in_async)
+            mutex_lock(&ctx->uring_lock);
+
         io_iopoll_req_issued(req);
+
+        if (in_async)
+            mutex_unlock(&ctx->uring_lock);
     }
 
     return 0;
-}
-
-static void io_link_work_cb(struct io_wq_work **workptr)
-{
-    struct io_wq_work *work = *workptr;
-    struct io_kiocb *link = work->data;
-
-    io_queue_linked_timeout(link);
-    work->func = io_wq_submit_work;
 }
 
 static void io_wq_submit_work(struct io_wq_work **workptr)
···
     }
 
     /* if a dependent link is ready, pass it back */
-    if (!ret && nxt) {
-        struct io_kiocb *link;
-
-        io_prep_async_work(nxt, &link);
-        *workptr = &nxt->work;
-        if (link) {
-            nxt->work.flags |= IO_WQ_WORK_CB;
-            nxt->work.func = io_link_work_cb;
-            nxt->work.data = link;
-        }
-    }
+    if (!ret && nxt)
+        io_wq_assign_next(workptr, nxt);
 }
 
 static bool io_req_op_valid(int op)
···
         submitted = to_submit;
     } else if (to_submit) {
         struct mm_struct *cur_mm;
+
+        if (current->mm != ctx->sqo_mm ||
+            current_cred() != ctx->creds) {
+            ret = -EPERM;
+            goto out;
+        }
 
         to_submit = min(to_submit, ctx->sq_entries);
         mutex_lock(&ctx->uring_lock);
+13-77
fs/namei.c
···12321232 BUG_ON(!path->dentry->d_op);12331233 BUG_ON(!path->dentry->d_op->d_manage);12341234 ret = path->dentry->d_op->d_manage(path, false);12351235+ flags = smp_load_acquire(&path->dentry->d_flags);12351236 if (ret < 0)12361237 break;12371238 }···16501649 if (IS_ERR(dentry))16511650 return dentry;16521651 if (unlikely(!d_in_lookup(dentry))) {16531653- if (!(flags & LOOKUP_NO_REVAL)) {16541654- int error = d_revalidate(dentry, flags);16551655- if (unlikely(error <= 0)) {16561656- if (!error) {16571657- d_invalidate(dentry);16581658- dput(dentry);16591659- goto again;16601660- }16521652+ int error = d_revalidate(dentry, flags);16531653+ if (unlikely(error <= 0)) {16541654+ if (!error) {16551655+ d_invalidate(dentry);16611656 dput(dentry);16621662- dentry = ERR_PTR(error);16571657+ goto again;16631658 }16591659+ dput(dentry);16601660+ dentry = ERR_PTR(error);16641661 }16651662 } else {16661663 old = inode->i_op->lookup(inode, dentry, flags);···26172618EXPORT_SYMBOL(user_path_at_empty);2618261926192620/**26202620- * mountpoint_last - look up last component for umount26212621- * @nd: pathwalk nameidata - currently pointing at parent directory of "last"26222622- *26232623- * This is a special lookup_last function just for umount. In this case, we26242624- * need to resolve the path without doing any revalidation.26252625- *26262626- * The nameidata should be the result of doing a LOOKUP_PARENT pathwalk. Since26272627- * mountpoints are always pinned in the dcache, their ancestors are too. Thus,26282628- * in almost all cases, this lookup will be served out of the dcache. The only26292629- * cases where it won't are if nd->last refers to a symlink or the path is26302630- * bogus and it doesn't exist.26312631- *26322632- * Returns:26332633- * -error: if there was an error during lookup. This includes -ENOENT if the26342634- * lookup found a negative dentry.26352635- *26362636- * 0: if we successfully resolved nd->last and found it to not to be a26372637- * symlink that needs to be followed.26382638- *26392639- * 1: if we successfully resolved nd->last and found it to be a symlink26402640- * that needs to be followed.26412641- */26422642-static int26432643-mountpoint_last(struct nameidata *nd)26442644-{26452645- int error = 0;26462646- struct dentry *dir = nd->path.dentry;26472647- struct path path;26482648-26492649- /* If we're in rcuwalk, drop out of it to handle last component */26502650- if (nd->flags & LOOKUP_RCU) {26512651- if (unlazy_walk(nd))26522652- return -ECHILD;26532653- }26542654-26552655- nd->flags &= ~LOOKUP_PARENT;26562656-26572657- if (unlikely(nd->last_type != LAST_NORM)) {26582658- error = handle_dots(nd, nd->last_type);26592659- if (error)26602660- return error;26612661- path.dentry = dget(nd->path.dentry);26622662- } else {26632663- path.dentry = d_lookup(dir, &nd->last);26642664- if (!path.dentry) {26652665- /*26662666- * No cached dentry. 
Mounted dentries are pinned in the26672667- * cache, so that means that this dentry is probably26682668- * a symlink or the path doesn't actually point26692669- * to a mounted dentry.26702670- */26712671- path.dentry = lookup_slow(&nd->last, dir,26722672- nd->flags | LOOKUP_NO_REVAL);26732673- if (IS_ERR(path.dentry))26742674- return PTR_ERR(path.dentry);26752675- }26762676- }26772677- if (d_flags_negative(smp_load_acquire(&path.dentry->d_flags))) {26782678- dput(path.dentry);26792679- return -ENOENT;26802680- }26812681- path.mnt = nd->path.mnt;26822682- return step_into(nd, &path, 0, d_backing_inode(path.dentry), 0);26832683-}26842684-26852685-/**26862621 * path_mountpoint - look up a path to be umounted26872622 * @nd: lookup context26882623 * @flags: lookup flags···26322699 int err;2633270026342701 while (!(err = link_path_walk(s, nd)) &&26352635- (err = mountpoint_last(nd)) > 0) {27022702+ (err = lookup_last(nd)) > 0) {26362703 s = trailing_symlink(nd);26372704 }27052705+ if (!err && (nd->flags & LOOKUP_RCU))27062706+ err = unlazy_walk(nd);27072707+ if (!err)27082708+ err = handle_lookup_down(nd);26382709 if (!err) {26392710 *path = nd->path;26402711 nd->path.mnt = NULL;26412712 nd->path.dentry = NULL;26422642- follow_mount(path);26432713 }26442714 terminate_walk(nd);26452715 return err;
···1111 * The cache doesn't need to be flushed when TLB entries change when1212 * the cache is mapped to physical memory, not virtual memory1313 */1414+#ifndef flush_cache_all1415static inline void flush_cache_all(void)1516{1617}1818+#endif17192020+#ifndef flush_cache_mm1821static inline void flush_cache_mm(struct mm_struct *mm)1922{2023}2424+#endif21252626+#ifndef flush_cache_dup_mm2227static inline void flush_cache_dup_mm(struct mm_struct *mm)2328{2429}3030+#endif25313232+#ifndef flush_cache_range2633static inline void flush_cache_range(struct vm_area_struct *vma,2734 unsigned long start,2835 unsigned long end)2936{3037}3838+#endif31394040+#ifndef flush_cache_page3241static inline void flush_cache_page(struct vm_area_struct *vma,3342 unsigned long vmaddr,3443 unsigned long pfn)3544{3645}4646+#endif37474848+#ifndef flush_dcache_page3849static inline void flush_dcache_page(struct page *page)3950{4051}5252+#endif41535454+#ifndef flush_dcache_mmap_lock4255static inline void flush_dcache_mmap_lock(struct address_space *mapping)4356{4457}5858+#endif45596060+#ifndef flush_dcache_mmap_unlock4661static inline void flush_dcache_mmap_unlock(struct address_space *mapping)4762{4863}6464+#endif49656666+#ifndef flush_icache_range5067static inline void flush_icache_range(unsigned long start, unsigned long end)5168{5269}7070+#endif53717272+#ifndef flush_icache_page5473static inline void flush_icache_page(struct vm_area_struct *vma,5574 struct page *page)5675{5776}7777+#endif58787979+#ifndef flush_icache_user_range5980static inline void flush_icache_user_range(struct vm_area_struct *vma,6081 struct page *page,6182 unsigned long addr, int len)6283{6384}8585+#endif64868787+#ifndef flush_cache_vmap6588static inline void flush_cache_vmap(unsigned long start, unsigned long end)6689{6790}9191+#endif68929393+#ifndef flush_cache_vunmap6994static inline void flush_cache_vunmap(unsigned long start, unsigned long end)7095{7196}9797+#endif72987373-#define copy_to_user_page(vma, page, vaddr, dst, src, len) \9999+#ifndef copy_to_user_page100100+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \74101 do { \75102 memcpy(dst, src, len); \76103 flush_icache_user_range(vma, page, vaddr, len); \77104 } while (0)105105+#endif106106+107107+#ifndef copy_from_user_page78108#define copy_from_user_page(vma, page, vaddr, dst, src, len) \79109 memcpy(dst, src, len)110110+#endif8011181112#endif /* __ASM_CACHEFLUSH_H */
+6
include/drm/drm_dp_mst_helper.h
···
 	 * &drm_dp_sideband_msg_tx.state once they are queued
 	 */
 	struct mutex qlock;
+
+	/**
+	 * @is_waiting_for_dwn_reply: indicate whether is waiting for down reply
+	 */
+	bool is_waiting_for_dwn_reply;
+
 	/**
 	 * @tx_msg_downq: List of pending down replies.
 	 */
···328328 unsigned int max_sectors;329329 unsigned int max_segment_size;330330 unsigned int physical_block_size;331331+ unsigned int logical_block_size;331332 unsigned int alignment_offset;332333 unsigned int io_min;333334 unsigned int io_opt;···339338 unsigned int discard_granularity;340339 unsigned int discard_alignment;341340342342- unsigned short logical_block_size;343341 unsigned short max_segments;344342 unsigned short max_integrity_segments;345343 unsigned short max_discard_segments;···10771077 unsigned int max_write_same_sectors);10781078extern void blk_queue_max_write_zeroes_sectors(struct request_queue *q,10791079 unsigned int max_write_same_sectors);10801080-extern void blk_queue_logical_block_size(struct request_queue *, unsigned short);10801080+extern void blk_queue_logical_block_size(struct request_queue *, unsigned int);10811081extern void blk_queue_physical_block_size(struct request_queue *, unsigned int);10821082extern void blk_queue_alignment_offset(struct request_queue *q,10831083 unsigned int alignment);···12911291 return q->limits.max_segment_size;12921292}1293129312941294-static inline unsigned short queue_logical_block_size(const struct request_queue *q)12941294+static inline unsigned queue_logical_block_size(const struct request_queue *q)12951295{12961296 int retval = 512;12971297···13011301 return retval;13021302}1303130313041304-static inline unsigned short bdev_logical_block_size(struct block_device *bdev)13041304+static inline unsigned int bdev_logical_block_size(struct block_device *bdev)13051305{13061306 return queue_logical_block_size(bdev_get_queue(bdev));13071307}
+15-3
include/linux/mm.h
···26582658 !page_poisoning_enabled();26592659}2660266026612661-#ifdef CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT26622662-DECLARE_STATIC_KEY_TRUE(_debug_pagealloc_enabled);26612661+#ifdef CONFIG_DEBUG_PAGEALLOC26622662+extern void init_debug_pagealloc(void);26632663#else26642664-DECLARE_STATIC_KEY_FALSE(_debug_pagealloc_enabled);26642664+static inline void init_debug_pagealloc(void) {}26652665#endif26662666+extern bool _debug_pagealloc_enabled_early;26672667+DECLARE_STATIC_KEY_FALSE(_debug_pagealloc_enabled);2666266826672669static inline bool debug_pagealloc_enabled(void)26702670+{26712671+ return IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) &&26722672+ _debug_pagealloc_enabled_early;26732673+}26742674+26752675+/*26762676+ * For use in fast paths after init_debug_pagealloc() has run, or when a26772677+ * false negative result is not harmful when called too early.26782678+ */26792679+static inline bool debug_pagealloc_enabled_static(void)26682680{26692681 if (!IS_ENABLED(CONFIG_DEBUG_PAGEALLOC))26702682 return false;
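The hunk above splits the debug_pagealloc check into an early boolean, valid from boot, and a static key that only becomes authoritative after init_debug_pagealloc() has run. A rough userspace model of that two-stage switch is sketched below; the plain bool standing in for the jump-label static key and the hard-coded config macro are assumptions of the demo, not kernel code.

/*
 * Hypothetical userspace model of the two-stage debug_pagealloc switch.
 * A plain bool stands in for the kernel's jump-label static key; the point
 * is only the ordering: early code may use the boolean read, fast-path code
 * may use the "static key" once init_debug_pagealloc() has run.
 */
#include <stdbool.h>
#include <stdio.h>

#define CONFIG_DEBUG_PAGEALLOC 1			/* assumed enabled for the demo */

static bool _debug_pagealloc_enabled_early = true;	/* as if "debug_pagealloc=on" was given */
static bool _debug_pagealloc_key;			/* stand-in for the static key */

static bool debug_pagealloc_enabled(void)
{
	return CONFIG_DEBUG_PAGEALLOC && _debug_pagealloc_enabled_early;
}

/* Only meaningful once init_debug_pagealloc() ran, or where a false negative is harmless. */
static bool debug_pagealloc_enabled_static(void)
{
	return CONFIG_DEBUG_PAGEALLOC && _debug_pagealloc_key;
}

static void init_debug_pagealloc(void)
{
	if (debug_pagealloc_enabled())
		_debug_pagealloc_key = true;	/* static_branch_enable() in the kernel */
}

int main(void)
{
	printf("before init: early=%d static=%d\n",
	       debug_pagealloc_enabled(), debug_pagealloc_enabled_static());
	init_debug_pagealloc();
	printf("after  init: early=%d static=%d\n",
	       debug_pagealloc_enabled(), debug_pagealloc_enabled_static());
	return 0;
}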
+2-3
include/linux/mmzone.h
···
 	NR_INACTIVE_FILE,	/* " " " " " */
 	NR_ACTIVE_FILE,		/* " " " " " */
 	NR_UNEVICTABLE,		/* " " " " " */
-	NR_SLAB_RECLAIMABLE,	/* Please do not reorder this item */
-	NR_SLAB_UNRECLAIMABLE,	/* and this one without looking at
-				 * memcg_flush_percpu_vmstats() first. */
+	NR_SLAB_RECLAIMABLE,
+	NR_SLAB_UNRECLAIMABLE,
 	NR_ISOLATED_ANON,	/* Temporary isolated pages from anon lru */
 	NR_ISOLATED_FILE,	/* Temporary isolated pages from file lru */
 	WORKINGSET_NODES,
-1
include/linux/namei.h
···
 
 /* internal use only */
 #define LOOKUP_PARENT		0x0010
-#define LOOKUP_NO_REVAL		0x0080
 #define LOOKUP_JUMPED		0x1000
 #define LOOKUP_ROOT		0x2000
 #define LOOKUP_ROOT_GRABBED	0x0008
+2-2
include/linux/sched.h
···
 
 /*
  * If parent process has a registered restartable sequences area, the
- * child inherits. Only applies when forking a process, not a thread.
+ * child inherits. Unregister rseq for a clone with CLONE_VM set.
  */
 static inline void rseq_fork(struct task_struct *t, unsigned long clone_flags)
 {
-	if (clone_flags & CLONE_THREAD) {
+	if (clone_flags & CLONE_VM) {
 		t->rseq = NULL;
 		t->rseq_sig = 0;
 		t->rseq_event_mask = 0;
···
 /* Shift (rsh) a tnum right (by a fixed shift) */
 struct tnum tnum_rshift(struct tnum a, u8 shift);
 /* Shift (arsh) a tnum right (by a fixed min_shift) */
-struct tnum tnum_arshift(struct tnum a, u8 min_shift);
+struct tnum tnum_arshift(struct tnum a, u8 min_shift, u8 insn_bitness);
 /* Add two tnums, return @a + @b */
 struct tnum tnum_add(struct tnum a, struct tnum b);
 /* Subtract two tnums, return @a - @b */
+5
include/net/cfg80211.h
···
  *
  * @start_radar_detection: Start radar detection in the driver.
  *
+ * @end_cac: End running CAC, probably because a related CAC
+ *	was finished on another phy.
+ *
  * @update_ft_ies: Provide updated Fast BSS Transition information to the
  *	driver. If the SME is in the driver/firmware, this information can be
  *	used in building Authentication and Reassociation Request frames.
···
 				    struct net_device *dev,
 				    struct cfg80211_chan_def *chandef,
 				    u32 cac_time_ms);
+	void	(*end_cac)(struct wiphy *wiphy,
+			   struct net_device *dev);
 	int	(*update_ft_ies)(struct wiphy *wiphy, struct net_device *dev,
 				 struct cfg80211_update_ft_ies_params *ftie);
 	int	(*crit_proto_start)(struct wiphy *wiphy,
···
 	return TNUM(a.value >> shift, a.mask >> shift);
 }
 
-struct tnum tnum_arshift(struct tnum a, u8 min_shift)
+struct tnum tnum_arshift(struct tnum a, u8 min_shift, u8 insn_bitness)
 {
 	/* if a.value is negative, arithmetic shifting by minimum shift
 	 * will have larger negative offset compared to more shifting.
 	 * If a.value is nonnegative, arithmetic shifting by minimum shift
 	 * will have larger positive offset compare to more shifting.
 	 */
-	return TNUM((s64)a.value >> min_shift, (s64)a.mask >> min_shift);
+	if (insn_bitness == 32)
+		return TNUM((u32)(((s32)a.value) >> min_shift),
+			    (u32)(((s32)a.mask) >> min_shift));
+	else
+		return TNUM((s64)a.value >> min_shift,
+			    (s64)a.mask >> min_shift);
 }
 
 struct tnum tnum_add(struct tnum a, struct tnum b)
+10-3
kernel/bpf/verifier.c
···
 		/* Upon reaching here, src_known is true and
 		 * umax_val is equal to umin_val.
 		 */
-		dst_reg->smin_value >>= umin_val;
-		dst_reg->smax_value >>= umin_val;
-		dst_reg->var_off = tnum_arshift(dst_reg->var_off, umin_val);
+		if (insn_bitness == 32) {
+			dst_reg->smin_value = (u32)(((s32)dst_reg->smin_value) >> umin_val);
+			dst_reg->smax_value = (u32)(((s32)dst_reg->smax_value) >> umin_val);
+		} else {
+			dst_reg->smin_value >>= umin_val;
+			dst_reg->smax_value >>= umin_val;
+		}
+
+		dst_reg->var_off = tnum_arshift(dst_reg->var_off, umin_val,
+						insn_bitness);
 
 		/* blow away the dst_reg umin_value/umax_value and rely on
 		 * dst_reg var_off to refine the result.
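Both hunks special-case ALU32 because an arithmetic right shift must sign-extend from bit 31 rather than bit 63 for 32-bit instructions. The stand-alone sketch below mirrors the new tnum_arshift() logic with local types (the TNUM() helper and the struct layout are the demo's own, not the kernel header) and prints how the two bitnesses diverge for a value with bit 31 set.

/* Minimal stand-alone model of tnum_arshift(); TNUM() is a local helper,
 * not the kernel macro.  Build with: cc -o arsh arsh.c && ./arsh
 */
#include <stdint.h>
#include <stdio.h>

struct tnum { uint64_t value; uint64_t mask; };

static struct tnum TNUM(uint64_t value, uint64_t mask)
{
	struct tnum t = { value, mask };
	return t;
}

static struct tnum tnum_arshift(struct tnum a, uint8_t min_shift, uint8_t insn_bitness)
{
	if (insn_bitness == 32)
		return TNUM((uint32_t)(((int32_t)a.value) >> min_shift),
			    (uint32_t)(((int32_t)a.mask) >> min_shift));
	return TNUM((int64_t)a.value >> min_shift,
		    (int64_t)a.mask >> min_shift);
}

int main(void)
{
	/* 0x80000000 is negative as a 32-bit value but positive as a 64-bit one. */
	struct tnum a = TNUM(0x80000000ull, 0);
	struct tnum r32 = tnum_arshift(a, 4, 32);
	struct tnum r64 = tnum_arshift(a, 4, 64);

	printf("32-bit arsh by 4: value=%#llx\n", (unsigned long long)r32.value);
	printf("64-bit arsh by 4: value=%#llx\n", (unsigned long long)r64.value);
	return 0;
}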
+72-71
kernel/cpu.c
···19091909}19101910EXPORT_SYMBOL(__cpuhp_remove_state);1911191119121912+#ifdef CONFIG_HOTPLUG_SMT19131913+static void cpuhp_offline_cpu_device(unsigned int cpu)19141914+{19151915+ struct device *dev = get_cpu_device(cpu);19161916+19171917+ dev->offline = true;19181918+ /* Tell user space about the state change */19191919+ kobject_uevent(&dev->kobj, KOBJ_OFFLINE);19201920+}19211921+19221922+static void cpuhp_online_cpu_device(unsigned int cpu)19231923+{19241924+ struct device *dev = get_cpu_device(cpu);19251925+19261926+ dev->offline = false;19271927+ /* Tell user space about the state change */19281928+ kobject_uevent(&dev->kobj, KOBJ_ONLINE);19291929+}19301930+19311931+int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)19321932+{19331933+ int cpu, ret = 0;19341934+19351935+ cpu_maps_update_begin();19361936+ for_each_online_cpu(cpu) {19371937+ if (topology_is_primary_thread(cpu))19381938+ continue;19391939+ ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);19401940+ if (ret)19411941+ break;19421942+ /*19431943+ * As this needs to hold the cpu maps lock it's impossible19441944+ * to call device_offline() because that ends up calling19451945+ * cpu_down() which takes cpu maps lock. cpu maps lock19461946+ * needs to be held as this might race against in kernel19471947+ * abusers of the hotplug machinery (thermal management).19481948+ *19491949+ * So nothing would update device:offline state. That would19501950+ * leave the sysfs entry stale and prevent onlining after19511951+ * smt control has been changed to 'off' again. This is19521952+ * called under the sysfs hotplug lock, so it is properly19531953+ * serialized against the regular offline usage.19541954+ */19551955+ cpuhp_offline_cpu_device(cpu);19561956+ }19571957+ if (!ret)19581958+ cpu_smt_control = ctrlval;19591959+ cpu_maps_update_done();19601960+ return ret;19611961+}19621962+19631963+int cpuhp_smt_enable(void)19641964+{19651965+ int cpu, ret = 0;19661966+19671967+ cpu_maps_update_begin();19681968+ cpu_smt_control = CPU_SMT_ENABLED;19691969+ for_each_present_cpu(cpu) {19701970+ /* Skip online CPUs and CPUs on offline nodes */19711971+ if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))19721972+ continue;19731973+ ret = _cpu_up(cpu, 0, CPUHP_ONLINE);19741974+ if (ret)19751975+ break;19761976+ /* See comment in cpuhp_smt_disable() */19771977+ cpuhp_online_cpu_device(cpu);19781978+ }19791979+ cpu_maps_update_done();19801980+ return ret;19811981+}19821982+#endif19831983+19121984#if defined(CONFIG_SYSFS) && defined(CONFIG_HOTPLUG_CPU)19131985static ssize_t show_cpuhp_state(struct device *dev,19141986 struct device_attribute *attr, char *buf)···21342062};2135206321362064#ifdef CONFIG_HOTPLUG_SMT21372137-21382138-static void cpuhp_offline_cpu_device(unsigned int cpu)21392139-{21402140- struct device *dev = get_cpu_device(cpu);21412141-21422142- dev->offline = true;21432143- /* Tell user space about the state change */21442144- kobject_uevent(&dev->kobj, KOBJ_OFFLINE);21452145-}21462146-21472147-static void cpuhp_online_cpu_device(unsigned int cpu)21482148-{21492149- struct device *dev = get_cpu_device(cpu);21502150-21512151- dev->offline = false;21522152- /* Tell user space about the state change */21532153- kobject_uevent(&dev->kobj, KOBJ_ONLINE);21542154-}21552155-21562156-int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)21572157-{21582158- int cpu, ret = 0;21592159-21602160- cpu_maps_update_begin();21612161- for_each_online_cpu(cpu) {21622162- if (topology_is_primary_thread(cpu))21632163- continue;21642164- ret = 
cpu_down_maps_locked(cpu, CPUHP_OFFLINE);21652165- if (ret)21662166- break;21672167- /*21682168- * As this needs to hold the cpu maps lock it's impossible21692169- * to call device_offline() because that ends up calling21702170- * cpu_down() which takes cpu maps lock. cpu maps lock21712171- * needs to be held as this might race against in kernel21722172- * abusers of the hotplug machinery (thermal management).21732173- *21742174- * So nothing would update device:offline state. That would21752175- * leave the sysfs entry stale and prevent onlining after21762176- * smt control has been changed to 'off' again. This is21772177- * called under the sysfs hotplug lock, so it is properly21782178- * serialized against the regular offline usage.21792179- */21802180- cpuhp_offline_cpu_device(cpu);21812181- }21822182- if (!ret)21832183- cpu_smt_control = ctrlval;21842184- cpu_maps_update_done();21852185- return ret;21862186-}21872187-21882188-int cpuhp_smt_enable(void)21892189-{21902190- int cpu, ret = 0;21912191-21922192- cpu_maps_update_begin();21932193- cpu_smt_control = CPU_SMT_ENABLED;21942194- for_each_present_cpu(cpu) {21952195- /* Skip online CPUs and CPUs on offline nodes */21962196- if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))21972197- continue;21982198- ret = _cpu_up(cpu, 0, CPUHP_ONLINE);21992199- if (ret)22002200- break;22012201- /* See comment in cpuhp_smt_disable() */22022202- cpuhp_online_cpu_device(cpu);22032203- }22042204- cpu_maps_update_done();22052205- return ret;22062206-}22072207-2208206522092066static ssize_t22102067__store_smt_control(struct device *dev, struct device_attribute *attr,
···
 		}
 	}
 
-	if (perf_need_aux_event(event) && !perf_get_aux_event(event, group_leader))
+	if (perf_need_aux_event(event) && !perf_get_aux_event(event, group_leader)) {
+		err = -EINVAL;
 		goto err_locked;
+	}
 
 	/*
 	 * Must be under the same ctx::mutex as perf_install_in_context(),
+1
kernel/futex.c
···
 
 /**
  * wait_for_owner_exiting - Block until the owner has exited
+ * @ret: owner's current futex lock status
  * @exiting:	Pointer to the exiting task
  *
  * Caller must hold a refcount on @exiting.
···
 		 * In this case, we attempt to acquire the lock again
 		 * without sleeping.
 		 */
-		if ((wstate == WRITER_HANDOFF) &&
-		    (rwsem_spin_on_owner(sem, 0) == OWNER_NULL))
+		if (wstate == WRITER_HANDOFF &&
+		    rwsem_spin_on_owner(sem, RWSEM_NONSPINNABLE) == OWNER_NULL)
 			goto trylock_again;
 
 		/* Block until there are no active lockers. */
+10-5
kernel/ptrace.c
···
 	return ret;
 }
 
-static int ptrace_has_cap(struct user_namespace *ns, unsigned int mode)
+static bool ptrace_has_cap(const struct cred *cred, struct user_namespace *ns,
+			   unsigned int mode)
 {
+	int ret;
+
 	if (mode & PTRACE_MODE_NOAUDIT)
-		return has_ns_capability_noaudit(current, ns, CAP_SYS_PTRACE);
+		ret = security_capable(cred, ns, CAP_SYS_PTRACE, CAP_OPT_NOAUDIT);
 	else
-		return has_ns_capability(current, ns, CAP_SYS_PTRACE);
+		ret = security_capable(cred, ns, CAP_SYS_PTRACE, CAP_OPT_NONE);
+
+	return ret == 0;
 }
 
 /* Returns 0 on success, -errno on denial. */
···
 	    gid_eq(caller_gid, tcred->sgid) &&
 	    gid_eq(caller_gid, tcred->gid))
 		goto ok;
-	if (ptrace_has_cap(tcred->user_ns, mode))
+	if (ptrace_has_cap(cred, tcred->user_ns, mode))
 		goto ok;
 	rcu_read_unlock();
 	return -EPERM;
···
 	mm = task->mm;
 	if (mm &&
 	    ((get_dumpable(mm) != SUID_DUMP_USER) &&
-	     !ptrace_has_cap(mm->user_ns, mode)))
+	     !ptrace_has_cap(cred, mm->user_ns, mode)))
 		return -EPERM;
 
 	return security_ptrace_access_check(task, mode);
+2
kernel/rseq.c
···
 	int ret;
 
 	if (flags & RSEQ_FLAG_UNREGISTER) {
+		if (flags & ~RSEQ_FLAG_UNREGISTER)
+			return -EINVAL;
 		/* Unregister rseq for current thread. */
 		if (current->rseq != rseq || !current->rseq)
 			return -EINVAL;
···
 
 	/*
 	 * Do a quick check without holding jiffies_lock:
+	 * The READ_ONCE() pairs with two updates done later in this function.
 	 */
-	delta = ktime_sub(now, last_jiffies_update);
+	delta = ktime_sub(now, READ_ONCE(last_jiffies_update));
 	if (delta < tick_period)
 		return;
 
···
 	if (delta >= tick_period) {
 
 		delta = ktime_sub(delta, tick_period);
-		last_jiffies_update = ktime_add(last_jiffies_update,
-						tick_period);
+		/* Pairs with the lockless read in this function. */
+		WRITE_ONCE(last_jiffies_update,
+			   ktime_add(last_jiffies_update, tick_period));
 
 		/* Slow path for long timeouts */
 		if (unlikely(delta >= tick_period)) {
···
 			ticks = ktime_divns(delta, incr);
 
-			last_jiffies_update = ktime_add_ns(last_jiffies_update,
-							   incr * ticks);
+			/* Pairs with the lockless read in this function. */
+			WRITE_ONCE(last_jiffies_update,
+				   ktime_add_ns(last_jiffies_update,
+						incr * ticks));
 		}
 		do_timer(++ticks);
 
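The change turns the unlocked peek at last_jiffies_update into a READ_ONCE()/WRITE_ONCE() pair so the lockless quick check cannot observe a torn value. A loose userspace analogue of the idiom, using C11 relaxed atomics and a pthread mutex instead of the kernel primitives (names and numbers are made up), might look like this; build with cc -pthread.

/* Sketch of the "quick unlocked check, then recheck under the lock"
 * pattern; relaxed C11 atomics stand in for READ_ONCE()/WRITE_ONCE().
 * This illustrates the idiom, not the tick-sched code itself.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t update_lock = PTHREAD_MUTEX_INITIALIZER;
static _Atomic long last_update;	/* analogue of last_jiffies_update */

static void maybe_advance(long now, long period)
{
	/* Quick check without holding the lock; pairs with the store below. */
	if (now - atomic_load_explicit(&last_update, memory_order_relaxed) < period)
		return;

	pthread_mutex_lock(&update_lock);
	/* Re-read under the lock: another thread may have advanced it already. */
	long last = atomic_load_explicit(&last_update, memory_order_relaxed);
	if (now - last >= period) {
		/* Pairs with the lockless read above. */
		atomic_store_explicit(&last_update, last + period,
				      memory_order_relaxed);
		printf("advanced to %ld\n", last + period);
	}
	pthread_mutex_unlock(&update_lock);
}

int main(void)
{
	maybe_advance(5, 10);	/* too early, returns without taking the lock */
	maybe_advance(25, 10);	/* advances to 10 */
	maybe_advance(25, 10);	/* advances to 20 */
	return 0;
}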
+1
lib/vdso/gettimeofday.c
···
 	return 0;
 }
 
+static __maybe_unused
 int __cvdso_clock_getres(clockid_t clock, struct __kernel_timespec *res)
 {
 	int ret = __cvdso_clock_getres_common(clock, res);
+24-14
mm/huge_memory.c
···527527 set_compound_page_dtor(page, TRANSHUGE_PAGE_DTOR);528528}529529530530-static unsigned long __thp_get_unmapped_area(struct file *filp, unsigned long len,530530+static unsigned long __thp_get_unmapped_area(struct file *filp,531531+ unsigned long addr, unsigned long len,531532 loff_t off, unsigned long flags, unsigned long size)532533{533533- unsigned long addr;534534 loff_t off_end = off + len;535535 loff_t off_align = round_up(off, size);536536- unsigned long len_pad;536536+ unsigned long len_pad, ret;537537538538 if (off_end <= off_align || (off_end - off_align) < size)539539 return 0;···542542 if (len_pad < len || (off + len_pad) < off)543543 return 0;544544545545- addr = current->mm->get_unmapped_area(filp, 0, len_pad,545545+ ret = current->mm->get_unmapped_area(filp, addr, len_pad,546546 off >> PAGE_SHIFT, flags);547547- if (IS_ERR_VALUE(addr))547547+548548+ /*549549+ * The failure might be due to length padding. The caller will retry550550+ * without the padding.551551+ */552552+ if (IS_ERR_VALUE(ret))548553 return 0;549554550550- addr += (off - addr) & (size - 1);551551- return addr;555555+ /*556556+ * Do not try to align to THP boundary if allocation at the address557557+ * hint succeeds.558558+ */559559+ if (ret == addr)560560+ return addr;561561+562562+ ret += (off - ret) & (size - 1);563563+ return ret;552564}553565554566unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,555567 unsigned long len, unsigned long pgoff, unsigned long flags)556568{569569+ unsigned long ret;557570 loff_t off = (loff_t)pgoff << PAGE_SHIFT;558571559559- if (addr)560560- goto out;561572 if (!IS_DAX(filp->f_mapping->host) || !IS_ENABLED(CONFIG_FS_DAX_PMD))562573 goto out;563574564564- addr = __thp_get_unmapped_area(filp, len, off, flags, PMD_SIZE);565565- if (addr)566566- return addr;567567-568568- out:575575+ ret = __thp_get_unmapped_area(filp, addr, len, off, flags, PMD_SIZE);576576+ if (ret)577577+ return ret;578578+out:569579 return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);570580}571581EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
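The only arithmetic in the tail of __thp_get_unmapped_area() is ret += (off - ret) & (size - 1), which moves the found address forward so that (address - file offset) becomes a multiple of the PMD size; the earlier len + size padding guarantees the nudged range still fits. A tiny demo with invented numbers (assuming a 2 MiB PMD, not any real mapping) shows the effect.

/* Demonstrates the alignment fix-up applied after the padded search:
 * the result is moved forward so (addr - off) becomes a multiple of size.
 * All numbers are arbitrary; nothing is actually mapped.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t size = 2ull << 20;		/* PMD_SIZE on x86-64 */
	uint64_t off  = 3ull << 20;		/* file offset: 3 MiB */
	uint64_t ret  = 0x7f0000a00000ull;	/* address found for len + size */

	uint64_t aligned = ret + ((off - ret) & (size - 1));

	printf("found   %#llx ((addr - off) %% size = %llu)\n",
	       (unsigned long long)ret,
	       (unsigned long long)((ret - off) & (size - 1)));
	printf("aligned %#llx ((addr - off) %% size = %llu)\n",
	       (unsigned long long)aligned,
	       (unsigned long long)((aligned - off) & (size - 1)));
	return 0;
}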
+9-28
mm/memcontrol.c
···32873287 }32883288}3289328932903290-static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg, bool slab_only)32903290+static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)32913291{32923292- unsigned long stat[MEMCG_NR_STAT];32923292+ unsigned long stat[MEMCG_NR_STAT] = {0};32933293 struct mem_cgroup *mi;32943294 int node, cpu, i;32953295- int min_idx, max_idx;32963296-32973297- if (slab_only) {32983298- min_idx = NR_SLAB_RECLAIMABLE;32993299- max_idx = NR_SLAB_UNRECLAIMABLE;33003300- } else {33013301- min_idx = 0;33023302- max_idx = MEMCG_NR_STAT;33033303- }33043304-33053305- for (i = min_idx; i < max_idx; i++)33063306- stat[i] = 0;3307329533083296 for_each_online_cpu(cpu)33093309- for (i = min_idx; i < max_idx; i++)32973297+ for (i = 0; i < MEMCG_NR_STAT; i++)33103298 stat[i] += per_cpu(memcg->vmstats_percpu->stat[i], cpu);3311329933123300 for (mi = memcg; mi; mi = parent_mem_cgroup(mi))33133313- for (i = min_idx; i < max_idx; i++)33013301+ for (i = 0; i < MEMCG_NR_STAT; i++)33143302 atomic_long_add(stat[i], &mi->vmstats[i]);33153315-33163316- if (!slab_only)33173317- max_idx = NR_VM_NODE_STAT_ITEMS;3318330333193304 for_each_node(node) {33203305 struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];33213306 struct mem_cgroup_per_node *pi;3322330733233323- for (i = min_idx; i < max_idx; i++)33083308+ for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)33243309 stat[i] = 0;3325331033263311 for_each_online_cpu(cpu)33273327- for (i = min_idx; i < max_idx; i++)33123312+ for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)33283313 stat[i] += per_cpu(33293314 pn->lruvec_stat_cpu->count[i], cpu);3330331533313316 for (pi = pn; pi; pi = parent_nodeinfo(pi, node))33323332- for (i = min_idx; i < max_idx; i++)33173317+ for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)33333318 atomic_long_add(stat[i], &pi->lruvec_stat[i]);33343319 }33353320}···33883403 parent = root_mem_cgroup;3389340433903405 /*33913391- * Deactivate and reparent kmem_caches. Then flush percpu33923392- * slab statistics to have precise values at the parent and33933393- * all ancestor levels. It's required to keep slab stats33943394- * accurate after the reparenting of kmem_caches.34063406+ * Deactivate and reparent kmem_caches.33953407 */33963408 memcg_deactivate_kmem_caches(memcg, parent);33973397- memcg_flush_percpu_vmstats(memcg, true);3398340933993410 kmemcg_id = memcg->kmemcg_id;34003411 BUG_ON(kmemcg_id < 0);···48944913 * Flush percpu vmstats and vmevents to guarantee the value correctness48954914 * on parent's and all ancestor levels.48964915 */48974897- memcg_flush_percpu_vmstats(memcg, false);49164916+ memcg_flush_percpu_vmstats(memcg);48984917 memcg_flush_percpu_vmevents(memcg);48994918 __mem_cgroup_free(memcg);49004919}
+7-3
mm/mempolicy.c
···
 		nmask = policy_nodemask(gfp, pol);
 		if (!nmask || node_isset(hpage_node, *nmask)) {
 			mpol_cond_put(pol);
+			/*
+			 * First, try to allocate THP only on local node, but
+			 * don't reclaim unnecessarily, just compact.
+			 */
 			page = __alloc_pages_node(hpage_node,
-				gfp | __GFP_THISNODE, order);
+				gfp | __GFP_THISNODE | __GFP_NORETRY, order);
 
 			/*
 			 * If hugepage allocations are configured to always
 			 * synchronous compact or the vma has been madvised
 			 * to prefer hugepage backing, retry allowing remote
-			 * memory as well.
+			 * memory with both reclaim and compact as well.
 			 */
 			if (!page && (gfp & __GFP_DIRECT_RECLAIM))
 				page = __alloc_pages_node(hpage_node,
-							  gfp | __GFP_NORETRY, order);
+							  gfp, order);
 
 			goto out;
 		}
+5-5
mm/page-writeback.c
···
 	if (this_bw < tot_bw) {
 		if (min) {
 			min *= this_bw;
-			do_div(min, tot_bw);
+			min = div64_ul(min, tot_bw);
 		}
 		if (max < 100) {
 			max *= this_bw;
-			do_div(max, tot_bw);
+			max = div64_ul(max, tot_bw);
 		}
 	}
 
···
 	struct wb_domain *dom = dtc_dom(dtc);
 	unsigned long thresh = dtc->thresh;
 	u64 wb_thresh;
-	long numerator, denominator;
+	unsigned long numerator, denominator;
 	unsigned long wb_min_ratio, wb_max_ratio;
 
 	/*
···
 
 	wb_thresh = (thresh * (100 - bdi_min_ratio)) / 100;
 	wb_thresh *= numerator;
-	do_div(wb_thresh, denominator);
+	wb_thresh = div64_ul(wb_thresh, denominator);
 
 	wb_min_max_ratio(dtc->wb, &wb_min_ratio, &wb_max_ratio);
 
···
 	bw = written - min(written, wb->written_stamp);
 	bw *= HZ;
 	if (unlikely(elapsed > period)) {
-		do_div(bw, elapsed);
+		bw = div64_ul(bw, elapsed);
 		avg = bw;
 		goto out;
 	}
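do_div() takes a 32-bit divisor, so switching to div64_ul() is what keeps the proportional scaling correct once the denominator (total writeback bandwidth here) no longer fits in 32 bits. A minimal userspace sketch of that scaling, assuming the intermediate product stays within 64 bits; the function and figures below are illustrative only.

/* Proportional scaling in the spirit of the per-writeback min/max ratio
 * calculation: scaled = value * this_bw / tot_bw, done entirely in 64-bit
 * arithmetic.  A 32-bit-divisor helper would truncate tot_bw here.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t scale_ratio(uint64_t value, uint64_t this_bw, uint64_t tot_bw)
{
	if (tot_bw == 0 || this_bw >= tot_bw)
		return value;		/* nothing to scale down */
	return value * this_bw / tot_bw;
}

int main(void)
{
	/* A denominator that does not fit in 32 bits. */
	uint64_t tot_bw  = 6ull * 1024 * 1024 * 1024;
	uint64_t this_bw = 2ull * 1024 * 1024 * 1024;

	printf("min ratio 30 scaled by this_bw/tot_bw: %" PRIu64 "\n",
	       scale_ratio(30, this_bw, tot_bw));
	return 0;
}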
+18-43
mm/page_alloc.c
···694694#ifdef CONFIG_DEBUG_PAGEALLOC695695unsigned int _debug_guardpage_minorder;696696697697-#ifdef CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT698698-DEFINE_STATIC_KEY_TRUE(_debug_pagealloc_enabled);699699-#else697697+bool _debug_pagealloc_enabled_early __read_mostly698698+ = IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);699699+EXPORT_SYMBOL(_debug_pagealloc_enabled_early);700700DEFINE_STATIC_KEY_FALSE(_debug_pagealloc_enabled);701701-#endif702701EXPORT_SYMBOL(_debug_pagealloc_enabled);703702704703DEFINE_STATIC_KEY_FALSE(_debug_guardpage_enabled);705704706705static int __init early_debug_pagealloc(char *buf)707706{708708- bool enable = false;709709-710710- if (kstrtobool(buf, &enable))711711- return -EINVAL;712712-713713- if (enable)714714- static_branch_enable(&_debug_pagealloc_enabled);715715-716716- return 0;707707+ return kstrtobool(buf, &_debug_pagealloc_enabled_early);717708}718709early_param("debug_pagealloc", early_debug_pagealloc);719710720720-static void init_debug_guardpage(void)711711+void init_debug_pagealloc(void)721712{722713 if (!debug_pagealloc_enabled())723714 return;715715+716716+ static_branch_enable(&_debug_pagealloc_enabled);724717725718 if (!debug_guardpage_minorder())726719 return;···11791186 */11801187 arch_free_page(page, order);1181118811821182- if (debug_pagealloc_enabled())11891189+ if (debug_pagealloc_enabled_static())11831190 kernel_map_pages(page, 1 << order, 0);1184119111851192 kasan_free_nondeferred_pages(page, order);···1200120712011208static bool bulkfree_pcp_prepare(struct page *page)12021209{12031203- if (debug_pagealloc_enabled())12101210+ if (debug_pagealloc_enabled_static())12041211 return free_pages_check(page);12051212 else12061213 return false;···12141221 */12151222static bool free_pcp_prepare(struct page *page)12161223{12171217- if (debug_pagealloc_enabled())12241224+ if (debug_pagealloc_enabled_static())12181225 return free_pages_prepare(page, 0, true);12191226 else12201227 return free_pages_prepare(page, 0, false);···1966197319671974 for_each_populated_zone(zone)19681975 set_zone_contiguous(zone);19691969-19701970-#ifdef CONFIG_DEBUG_PAGEALLOC19711971- init_debug_guardpage();19721972-#endif19731976}1974197719751978#ifdef CONFIG_CMA···20952106 */20962107static inline bool check_pcp_refill(struct page *page)20972108{20982098- if (debug_pagealloc_enabled())21092109+ if (debug_pagealloc_enabled_static())20992110 return check_new_page(page);21002111 else21012112 return false;···21172128}21182129static inline bool check_new_pcp(struct page *page)21192130{21202120- if (debug_pagealloc_enabled())21312131+ if (debug_pagealloc_enabled_static())21212132 return check_new_page(page);21222133 else21232134 return false;···21442155 set_page_refcounted(page);2145215621462157 arch_alloc_page(page, order);21472147- if (debug_pagealloc_enabled())21582158+ if (debug_pagealloc_enabled_static())21482159 kernel_map_pages(page, 1 << order, 1);21492160 kasan_alloc_pages(page, order);21502161 kernel_poison_pages(page, 1 << order, 1);···44654476 if (page)44664477 goto got_pg;4467447844684468- if (order >= pageblock_order && (gfp_mask & __GFP_IO) &&44694469- !(gfp_mask & __GFP_RETRY_MAYFAIL)) {44794479+ /*44804480+ * Checks for costly allocations with __GFP_NORETRY, which44814481+ * includes some THP page fault allocations44824482+ */44834483+ if (costly_order && (gfp_mask & __GFP_NORETRY)) {44704484 /*44714485 * If allocating entire pageblock(s) and compaction44724486 * failed because all zones are below low watermarks···44894497 */44904498 if (compact_result == 
COMPACT_SKIPPED ||44914499 compact_result == COMPACT_DEFERRED)44924492- goto nopage;44934493- }44944494-44954495- /*44964496- * Checks for costly allocations with __GFP_NORETRY, which44974497- * includes THP page fault allocations44984498- */44994499- if (costly_order && (gfp_mask & __GFP_NORETRY)) {45004500- /*45014501- * If compaction is deferred for high-order allocations,45024502- * it is because sync compaction recently failed. If45034503- * this is the case and the caller requested a THP45044504- * allocation, we do not want to heavily disrupt the45054505- * system, so we fail the allocation instead of entering45064506- * direct reclaim.45074507- */45084508- if (compact_result == COMPACT_DEFERRED)45094500 goto nopage;4510450145114502 /*
+4-3
mm/shmem.c
···
 	/*
 	 * Our priority is to support MAP_SHARED mapped hugely;
 	 * and support MAP_PRIVATE mapped hugely too, until it is COWed.
-	 * But if caller specified an address hint, respect that as before.
+	 * But if caller specified an address hint and we allocated area there
+	 * successfully, respect that as before.
 	 */
-	if (uaddr)
+	if (uaddr == addr)
 		return addr;
 
 	if (shmem_huge != SHMEM_HUGE_FORCE) {
···
 	if (inflated_len < len)
 		return addr;
 
-	inflated_addr = get_area(NULL, 0, inflated_len, 0, flags);
+	inflated_addr = get_area(NULL, uaddr, inflated_len, 0, flags);
 	if (IS_ERR_VALUE(inflated_addr))
 		return addr;
 	if (inflated_addr & ~PAGE_MASK)
+2-2
mm/slab.c
···
 #if DEBUG
 static bool is_debug_pagealloc_cache(struct kmem_cache *cachep)
 {
-	if (debug_pagealloc_enabled() && OFF_SLAB(cachep) &&
+	if (debug_pagealloc_enabled_static() && OFF_SLAB(cachep) &&
 		(cachep->size % PAGE_SIZE) == 0)
 		return true;
 
···
 	 * to check size >= 256. It guarantees that all necessary small
 	 * sized slab is initialized in current slab initialization sequence.
 	 */
-	if (debug_pagealloc_enabled() && (flags & SLAB_POISON) &&
+	if (debug_pagealloc_enabled_static() && (flags & SLAB_POISON) &&
 	    size >= 256 && cachep->object_size > cache_line_size()) {
 		if (size < PAGE_SIZE || size % PAGE_SIZE == 0) {
 			size_t tmp_size = ALIGN(size, PAGE_SIZE);
+2-1
mm/slab_common.c
···
 	 * deactivates the memcg kmem_caches through workqueue. Make sure all
 	 * previous workitems on workqueue are processed.
 	 */
-	flush_workqueue(memcg_kmem_cache_wq);
+	if (likely(memcg_kmem_cache_wq))
+		flush_workqueue(memcg_kmem_cache_wq);
 
 	/*
 	 * If we're racing with children kmem_cache deactivation, it might
+1-1
mm/slub.c
···
 	unsigned long freepointer_addr;
 	void *p;
 
-	if (!debug_pagealloc_enabled())
+	if (!debug_pagealloc_enabled_static())
 		return get_freepointer(s, object);
 
 	freepointer_addr = (unsigned long)object + s->offset;
+8-1
mm/sparse.c
···
 	if (bitmap_empty(subsection_map, SUBSECTIONS_PER_SECTION)) {
 		unsigned long section_nr = pfn_to_section_nr(pfn);
 
-		if (!section_is_early) {
+		/*
+		 * When removing an early section, the usage map is kept (as the
+		 * usage maps of other sections fall into the same page). It
+		 * will be re-used when re-adding the section - which is then no
+		 * longer an early section. If the usage map is PageReserved, it
+		 * was allocated during boot.
+		 */
+		if (!PageReserved(virt_to_page(ms->usage))) {
 			kfree(ms->usage);
 			ms->usage = NULL;
 		}
+2-2
mm/vmalloc.c
···
 {
 	flush_cache_vunmap(va->va_start, va->va_end);
 	unmap_vmap_area(va);
-	if (debug_pagealloc_enabled())
+	if (debug_pagealloc_enabled_static())
 		flush_tlb_kernel_range(va->va_start, va->va_end);
 
 	free_vmap_area_noflush(va);
···
 
 	vunmap_page_range((unsigned long)addr, (unsigned long)addr + size);
 
-	if (debug_pagealloc_enabled())
+	if (debug_pagealloc_enabled_static())
 		flush_tlb_kernel_range((unsigned long)addr,
 				       (unsigned long)addr + size);
 
···9177917791789178void netdev_update_lockdep_key(struct net_device *dev)91799179{91809180- struct netdev_queue *queue;91819181- int i;91829182-91839183- lockdep_unregister_key(&dev->qdisc_xmit_lock_key);91849180 lockdep_unregister_key(&dev->addr_list_lock_key);91859185-91869186- lockdep_register_key(&dev->qdisc_xmit_lock_key);91879181 lockdep_register_key(&dev->addr_list_lock_key);9188918291899183 lockdep_set_class(&dev->addr_list_lock, &dev->addr_list_lock_key);91909190- for (i = 0; i < dev->num_tx_queues; i++) {91919191- queue = netdev_get_tx_queue(dev, i);91929192-91939193- lockdep_set_class(&queue->_xmit_lock,91949194- &dev->qdisc_xmit_lock_key);91959195- }91969184}91979185EXPORT_SYMBOL(netdev_update_lockdep_key);91989186
+4-4
net/core/devlink.c
···64066406 devlink_port->attrs.flavour != DEVLINK_PORT_FLAVOUR_DSA;64076407}6408640864096409-#define DEVLINK_PORT_TYPE_WARN_TIMEOUT (HZ * 30)64096409+#define DEVLINK_PORT_TYPE_WARN_TIMEOUT (HZ * 3600)6410641064116411static void devlink_port_type_warn_schedule(struct devlink_port *devlink_port)64126412{···75637563EXPORT_SYMBOL_GPL(devlink_region_destroy);7564756475657565/**75667566- * devlink_region_shapshot_id_get - get snapshot ID75667566+ * devlink_region_snapshot_id_get - get snapshot ID75677567 *75687568 * This callback should be called when adding a new snapshot,75697569 * Driver should use the same id for multiple snapshots taken···75717571 *75727572 * @devlink: devlink75737573 */75747574-u32 devlink_region_shapshot_id_get(struct devlink *devlink)75747574+u32 devlink_region_snapshot_id_get(struct devlink *devlink)75757575{75767576 u32 id;75777577···7581758175827582 return id;75837583}75847584-EXPORT_SYMBOL_GPL(devlink_region_shapshot_id_get);75847584+EXPORT_SYMBOL_GPL(devlink_region_snapshot_id_get);7585758575867586/**75877587 * devlink_region_snapshot_create - create a new snapshot
···
 	int count = cb->args[2];
 	t_key key = cb->args[3];
 
+	/* First time here, count and key are both always 0. Count > 0
+	 * and key == 0 means the dump has wrapped around and we are done.
+	 */
+	if (count && !key)
+		return skb->len;
+
 	while ((l = leaf_walk_rcu(&tp, key)) != NULL) {
 		int err;
 
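The added check relies on the dump cursor encoding: key is where the next chunk resumes and count is how many entries were already emitted, so count > 0 with key == 0 can only mean the key counter wrapped and the walk finished. A reduced model of such a resumable walk, shrunk to an 8-bit key space purely for the demo:

/* Resumable walk over a small key space, stopping cleanly once the saved
 * cursor shows the key counter has wrapped back to zero.
 */
#include <stdint.h>
#include <stdio.h>

struct cursor { uint32_t count; uint8_t key; };

/* Emit up to 'budget' keys per call, as a netlink dump would per skb. */
static void dump_chunk(struct cursor *c, int budget)
{
	/* count > 0 and key == 0 means the previous chunk wrapped: done. */
	if (c->count && !c->key) {
		printf("dump already complete\n");
		return;
	}

	while (budget--) {
		printf("entry %u\n", c->key);
		c->count++;
		if (++c->key == 0)	/* wrapped past the last key */
			break;
	}
}

int main(void)
{
	struct cursor c = { 0, 250 };	/* resume near the end of the space */

	dump_chunk(&c, 4);		/* emits 250..253 */
	dump_chunk(&c, 4);		/* emits 254, 255, then wraps */
	dump_chunk(&c, 4);		/* detects the wrap, emits nothing */
	return 0;
}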
···121121 struct sk_psock *psock;122122 int copied, ret;123123124124- if (unlikely(flags & MSG_ERRQUEUE))125125- return inet_recv_error(sk, msg, len, addr_len);126126- if (!skb_queue_empty(&sk->sk_receive_queue))127127- return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);128128-129124 psock = sk_psock_get(sk);130125 if (unlikely(!psock))126126+ return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);127127+ if (unlikely(flags & MSG_ERRQUEUE))128128+ return inet_recv_error(sk, msg, len, addr_len);129129+ if (!skb_queue_empty(&sk->sk_receive_queue) &&130130+ sk_psock_queue_empty(psock))131131 return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);132132 lock_sock(sk);133133msg_bytes_ready:···139139 timeo = sock_rcvtimeo(sk, nonblock);140140 data = tcp_bpf_wait_data(sk, psock, flags, timeo, &err);141141 if (data) {142142- if (skb_queue_empty(&sk->sk_receive_queue))142142+ if (!sk_psock_queue_empty(psock))143143 goto msg_bytes_ready;144144 release_sock(sk);145145 sk_psock_put(sk, psock);···315315 */316316 delta = msg->sg.size;317317 psock->eval = sk_psock_msg_verdict(sk, psock, msg);318318- if (msg->sg.size < delta)319319- delta -= msg->sg.size;320320- else321321- delta = 0;318318+ delta -= msg->sg.size;322319 }323320324321 if (msg->cork_bytes &&
+4-3
net/ipv4/tcp_input.c
···
 /* This must be called before lost_out is incremented */
 static void tcp_verify_retransmit_hint(struct tcp_sock *tp, struct sk_buff *skb)
 {
-	if (!tp->retransmit_skb_hint ||
-	    before(TCP_SKB_CB(skb)->seq,
-		   TCP_SKB_CB(tp->retransmit_skb_hint)->seq))
+	if ((!tp->retransmit_skb_hint && tp->retrans_out >= tp->lost_out) ||
+	    (tp->retransmit_skb_hint &&
+	     before(TCP_SKB_CB(skb)->seq,
+		    TCP_SKB_CB(tp->retransmit_skb_hint)->seq)))
 		tp->retransmit_skb_hint = skb;
 }
 
+4-2
net/ipv4/tcp_ulp.c
···
 	rcu_read_unlock();
 }
 
-void tcp_update_ulp(struct sock *sk, struct proto *proto)
+void tcp_update_ulp(struct sock *sk, struct proto *proto,
+		    void (*write_space)(struct sock *sk))
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 
 	if (!icsk->icsk_ulp_ops) {
+		sk->sk_write_space = write_space;
 		sk->sk_prot = proto;
 		return;
 	}
 
 	if (icsk->icsk_ulp_ops->update)
-		icsk->icsk_ulp_ops->update(sk, proto);
+		icsk->icsk_ulp_ops->update(sk, proto, write_space);
 }
 
 void tcp_cleanup_ulp(struct sock *sk)
+23
net/mac80211/cfg.c
···29542954 return err;29552955}2956295629572957+static void ieee80211_end_cac(struct wiphy *wiphy,29582958+ struct net_device *dev)29592959+{29602960+ struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);29612961+ struct ieee80211_local *local = sdata->local;29622962+29632963+ mutex_lock(&local->mtx);29642964+ list_for_each_entry(sdata, &local->interfaces, list) {29652965+ /* it might be waiting for the local->mtx, but then29662966+ * by the time it gets it, sdata->wdev.cac_started29672967+ * will no longer be true29682968+ */29692969+ cancel_delayed_work(&sdata->dfs_cac_timer_work);29702970+29712971+ if (sdata->wdev.cac_started) {29722972+ ieee80211_vif_release_channel(sdata);29732973+ sdata->wdev.cac_started = false;29742974+ }29752975+ }29762976+ mutex_unlock(&local->mtx);29772977+}29782978+29572979static struct cfg80211_beacon_data *29582980cfg80211_beacon_dup(struct cfg80211_beacon_data *beacon)29592981{···40454023#endif40464024 .get_channel = ieee80211_cfg_get_channel,40474025 .start_radar_detection = ieee80211_start_radar_detection,40264026+ .end_cac = ieee80211_end_cac,40484027 .channel_switch = ieee80211_channel_switch,40494028 .set_qos_map = ieee80211_set_qos_map,40504029 .set_ap_chanwidth = ieee80211_set_ap_chanwidth,
+3
net/mac80211/mesh_hwmp.c
···
 	unsigned long fail_avg =
 		ewma_mesh_fail_avg_read(&sta->mesh->fail_avg);
 
+	if (sta->mesh->plink_state != NL80211_PLINK_ESTAB)
+		return MAX_METRIC;
+
 	/* Try to get rate based on HW/SW RC algorithm.
 	 * Rate is returned in units of Kbps, correct this
 	 * to comply with airtime calculation units
+15-3
net/mac80211/tkip.c
···
 	if ((keyid >> 6) != key->conf.keyidx)
 		return TKIP_DECRYPT_INVALID_KEYIDX;
 
-	if (rx_ctx->ctx.state != TKIP_STATE_NOT_INIT &&
-	    (iv32 < rx_ctx->iv32 ||
-	     (iv32 == rx_ctx->iv32 && iv16 <= rx_ctx->iv16)))
+	/* Reject replays if the received TSC is smaller than or equal to the
+	 * last received value in a valid message, but with an exception for
+	 * the case where a new key has been set and no valid frame using that
+	 * key has yet received and the local RSC was initialized to 0. This
+	 * exception allows the very first frame sent by the transmitter to be
+	 * accepted even if that transmitter were to use TSC 0 (IEEE 802.11
+	 * described TSC to be initialized to 1 whenever a new key is taken into
+	 * use).
+	 */
+	if (iv32 < rx_ctx->iv32 ||
+	    (iv32 == rx_ctx->iv32 &&
+	     (iv16 < rx_ctx->iv16 ||
+	      (iv16 == rx_ctx->iv16 &&
+	       (rx_ctx->iv32 || rx_ctx->iv16 ||
+		rx_ctx->ctx.state != TKIP_STATE_NOT_INIT)))))
 		return TKIP_DECRYPT_REPLAY;
 
 	if (only_iv) {
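The new predicate only calls a frame a replay when the received (iv32, iv16) pair is at or below the stored counters and those counters are known to have come from a real frame; a freshly keyed context (counters zero, state still TKIP_STATE_NOT_INIT) keeps accepting TSC 0. The self-contained sketch below reimplements just that predicate with a local struct (field names follow the diff, everything else is invented for the demo) and exercises the interesting cases.

/* Stand-alone version of the replay predicate from the hunk above,
 * returning 1 for "reject as replay".  TKIP_STATE_NOT_INIT is modelled
 * as a plain flag; nothing here is the kernel implementation.
 */
#include <stdint.h>
#include <stdio.h>

struct rx_ctx {
	uint32_t iv32;
	uint16_t iv16;
	int initialized;	/* 0 == TKIP_STATE_NOT_INIT */
};

static int is_replay(const struct rx_ctx *rx, uint32_t iv32, uint16_t iv16)
{
	return iv32 < rx->iv32 ||
	       (iv32 == rx->iv32 &&
		(iv16 < rx->iv16 ||
		 (iv16 == rx->iv16 &&
		  (rx->iv32 || rx->iv16 || rx->initialized))));
}

int main(void)
{
	struct rx_ctx fresh = { 0, 0, 0 };	/* new key, nothing received yet */
	struct rx_ctx used  = { 0, 0, 1 };	/* a frame with TSC 0 was accepted */

	printf("fresh ctx, TSC 0:   %s\n", is_replay(&fresh, 0, 0) ? "replay" : "accept");
	printf("used ctx,  TSC 0:   %s\n", is_replay(&used, 0, 0) ? "replay" : "accept");
	printf("used ctx,  TSC 0.1: %s\n", is_replay(&used, 0, 1) ? "replay" : "accept");
	return 0;
}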
+1-1
net/netfilter/ipset/ip_set_bitmap_gen.h
···
 	if (SET_WITH_TIMEOUT(set))
 		del_timer_sync(&map->gc);
 
-	ip_set_free(map->members);
 	if (set->dsize && set->extensions & IPSET_EXT_DESTROY)
 		mtype_ext_cleanup(set);
+	ip_set_free(map->members);
 	ip_set_free(map);
 
 	set->data = NULL;
+13
net/netfilter/nf_nat_proto.c
···
 		return false;
 
 	hdr = (struct icmphdr *)(skb->data + hdroff);
+	switch (hdr->type) {
+	case ICMP_ECHO:
+	case ICMP_ECHOREPLY:
+	case ICMP_TIMESTAMP:
+	case ICMP_TIMESTAMPREPLY:
+	case ICMP_INFO_REQUEST:
+	case ICMP_INFO_REPLY:
+	case ICMP_ADDRESS:
+	case ICMP_ADDRESSREPLY:
+		break;
+	default:
+		return true;
+	}
 	inet_proto_csum_replace2(&hdr->checksum, skb,
 				 hdr->un.echo.id, tuple->src.u.icmp.id, false);
 	hdr->un.echo.id = tuple->src.u.icmp.id;
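Only ICMP query messages carry an identifier in their header, so rewriting un.echo.id for any other type would scribble over unrelated bytes of an error message; the switch above whitelists exactly the query types. A small userspace helper making the same distinction, assuming the glibc <netinet/ip_icmp.h> constants are available (this is a demo, not the netfilter code):

/* Returns 1 when the ICMP type is a query message whose header carries
 * an id field that NAT may rewrite; anything else is left alone.
 */
#include <netinet/ip_icmp.h>
#include <stdio.h>

static int icmp_type_has_id(unsigned int type)
{
	switch (type) {
	case ICMP_ECHO:
	case ICMP_ECHOREPLY:
	case ICMP_TIMESTAMP:
	case ICMP_TIMESTAMPREPLY:
	case ICMP_INFO_REQUEST:
	case ICMP_INFO_REPLY:
	case ICMP_ADDRESS:
	case ICMP_ADDRESSREPLY:
		return 1;
	default:
		return 0;
	}
}

int main(void)
{
	printf("echo request has id: %d\n", icmp_type_has_id(ICMP_ECHO));
	printf("dest unreachable has id: %d\n", icmp_type_has_id(ICMP_DEST_UNREACH));
	return 0;
}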
+26-13
net/netfilter/nf_tables_api.c
···2222#include <net/net_namespace.h>2323#include <net/sock.h>24242525+#define NFT_MODULE_AUTOLOAD_LIMIT (MODULE_NAME_LEN - sizeof("nft-expr-255-"))2626+2527static LIST_HEAD(nf_tables_expressions);2628static LIST_HEAD(nf_tables_objects);2729static LIST_HEAD(nf_tables_flowtables);···566564}567565568566/*569569- * Loading a module requires dropping mutex that guards the570570- * transaction.571571- * We first need to abort any pending transactions as once572572- * mutex is unlocked a different client could start a new573573- * transaction. It must not see any 'future generation'574574- * changes * as these changes will never happen.567567+ * Loading a module requires dropping mutex that guards the transaction.568568+ * A different client might race to start a new transaction meanwhile. Zap the569569+ * list of pending transaction and then restore it once the mutex is grabbed570570+ * again. Users of this function return EAGAIN which implicitly triggers the571571+ * transaction abort path to clean up the list of pending transactions.575572 */576573#ifdef CONFIG_MODULES577577-static int __nf_tables_abort(struct net *net);578578-579574static void nft_request_module(struct net *net, const char *fmt, ...)580575{581576 char module_name[MODULE_NAME_LEN];577577+ LIST_HEAD(commit_list);582578 va_list args;583579 int ret;584580585585- __nf_tables_abort(net);581581+ list_splice_init(&net->nft.commit_list, &commit_list);586582587583 va_start(args, fmt);588584 ret = vsnprintf(module_name, MODULE_NAME_LEN, fmt, args);589585 va_end(args);590590- if (WARN(ret >= MODULE_NAME_LEN, "truncated: '%s' (len %d)", module_name, ret))586586+ if (ret >= MODULE_NAME_LEN)591587 return;592588593589 mutex_unlock(&net->nft.commit_mutex);594590 request_module("%s", module_name);595591 mutex_lock(&net->nft.commit_mutex);592592+593593+ WARN_ON_ONCE(!list_empty(&net->nft.commit_list));594594+ list_splice(&commit_list, &net->nft.commit_list);596595}597596#endif598597···10481045 }1049104610501047 list_for_each_entry_safe(flowtable, nft, &ctx->table->flowtables, list) {10481048+ if (!nft_is_active_next(ctx->net, flowtable))10491049+ continue;10501050+10511051 err = nft_delflowtable(ctx, flowtable);10521052 if (err < 0)10531053 goto out;10541054 }1055105510561056 list_for_each_entry_safe(obj, ne, &ctx->table->objects, list) {10571057+ if (!nft_is_active_next(ctx->net, obj))10581058+ continue;10591059+10571060 err = nft_delobj(ctx, obj);10581061 if (err < 0)10591062 goto out;···12501241 .len = NFT_CHAIN_MAXNAMELEN - 1 },12511242 [NFTA_CHAIN_HOOK] = { .type = NLA_NESTED },12521243 [NFTA_CHAIN_POLICY] = { .type = NLA_U32 },12531253- [NFTA_CHAIN_TYPE] = { .type = NLA_STRING },12441244+ [NFTA_CHAIN_TYPE] = { .type = NLA_STRING,12451245+ .len = NFT_MODULE_AUTOLOAD_LIMIT },12541246 [NFTA_CHAIN_COUNTERS] = { .type = NLA_NESTED },12551247 [NFTA_CHAIN_FLAGS] = { .type = NLA_U32 },12561248};···16861676 goto err_hook;16871677 }16881678 if (nft_hook_list_find(hook_list, hook)) {16791679+ kfree(hook);16891680 err = -EEXIST;16901681 goto err_hook;16911682 }···23662355}2367235623682357static const struct nla_policy nft_expr_policy[NFTA_EXPR_MAX + 1] = {23692369- [NFTA_EXPR_NAME] = { .type = NLA_STRING },23582358+ [NFTA_EXPR_NAME] = { .type = NLA_STRING,23592359+ .len = NFT_MODULE_AUTOLOAD_LIMIT },23702360 [NFTA_EXPR_DATA] = { .type = NLA_NESTED },23712361};23722362···42104198 [NFTA_SET_ELEM_USERDATA] = { .type = NLA_BINARY,42114199 .len = NFT_USERDATA_MAXLEN },42124200 [NFTA_SET_ELEM_EXPR] = { .type = NLA_NESTED },42134213- [NFTA_SET_ELEM_OBJREF] 
= { .type = NLA_STRING },42014201+ [NFTA_SET_ELEM_OBJREF] = { .type = NLA_STRING,42024202+ .len = NFT_OBJ_MAXNAMELEN - 1 },42144203};4215420442164205static const struct nla_policy nft_set_elem_list_policy[NFTA_SET_ELEM_LIST_MAX + 1] = {
+4-1
net/netfilter/nft_tunnel.c
···
 	struct nft_tunnel *priv = nft_expr_priv(expr);
 	u32 len;
 
-	if (!tb[NFTA_TUNNEL_KEY] &&
+	if (!tb[NFTA_TUNNEL_KEY] ||
 	    !tb[NFTA_TUNNEL_DREG])
 		return -EINVAL;
 
···
 					  NULL);
 	if (err < 0)
 		return err;
+
+	if (!tb[NFTA_TUNNEL_KEY_ERSPAN_VERSION])
+		return -EINVAL;
 
 	version = ntohl(nla_get_be32(tb[NFTA_TUNNEL_KEY_ERSPAN_VERSION]));
 	switch (version) {
···256256 return ret;257257258258 ret = crypto_wait_req(ret, &ctx->async_wait);259259- } else if (ret == -EBADMSG) {260260- TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSDECRYPTERROR);261259 }262260263261 if (async)···680682681683 split_point = msg_pl->apply_bytes;682684 split = split_point && split_point < msg_pl->sg.size;685685+ if (unlikely((!split &&686686+ msg_pl->sg.size +687687+ prot->overhead_size > msg_en->sg.size) ||688688+ (split &&689689+ split_point +690690+ prot->overhead_size > msg_en->sg.size))) {691691+ split = true;692692+ split_point = msg_en->sg.size;693693+ }683694 if (split) {684695 rc = tls_split_open_record(sk, rec, &tmp, msg_pl, msg_en,685696 split_point, prot->overhead_size,686697 &orig_end);687698 if (rc < 0)688699 return rc;700700+ /* This can happen if above tls_split_open_record allocates701701+ * a single large encryption buffer instead of two smaller702702+ * ones. In this case adjust pointers and continue without703703+ * split.704704+ */705705+ if (!msg_pl->sg.size) {706706+ tls_merge_open_record(sk, rec, tmp, orig_end);707707+ msg_pl = &rec->msg_plaintext;708708+ msg_en = &rec->msg_encrypted;709709+ split = false;710710+ }689711 sk_msg_trim(sk, msg_en, msg_pl->sg.size +690712 prot->overhead_size);691713 }···725707 &rec->sg_content_type);726708 } else {727709 sg_mark_end(sk_msg_elem(msg_pl, i));710710+ }711711+712712+ if (msg_pl->sg.end < msg_pl->sg.start) {713713+ sg_chain(&msg_pl->sg.data[msg_pl->sg.start],714714+ MAX_SKB_FRAGS - msg_pl->sg.start + 1,715715+ msg_pl->sg.data);728716 }729717730718 i = msg_pl->sg.start;···796772 psock = sk_psock_get(sk);797773 if (!psock || !policy) {798774 err = tls_push_record(sk, flags, record_type);799799- if (err) {775775+ if (err && err != -EINPROGRESS) {800776 *copied -= sk_msg_free(sk, msg);801777 tls_free_open_rec(sk);802778 }···807783 if (psock->eval == __SK_NONE) {808784 delta = msg->sg.size;809785 psock->eval = sk_psock_msg_verdict(sk, psock, msg);810810- if (delta < msg->sg.size)811811- delta -= msg->sg.size;812812- else813813- delta = 0;786786+ delta -= msg->sg.size;814787 }815788 if (msg->cork_bytes && msg->cork_bytes > msg->sg.size &&816789 !enospc && !full_record) {···822801 switch (psock->eval) {823802 case __SK_PASS:824803 err = tls_push_record(sk, flags, record_type);825825- if (err < 0) {804804+ if (err && err != -EINPROGRESS) {826805 *copied -= sk_msg_free(sk, msg);827806 tls_free_open_rec(sk);828807 goto out_err;···15361515 if (err == -EINPROGRESS)15371516 tls_advance_record_sn(sk, prot,15381517 &tls_ctx->rx);15391539-15181518+ else if (err == -EBADMSG)15191519+ TLS_INC_STATS(sock_net(sk),15201520+ LINUX_MIB_TLSDECRYPTERROR);15401521 return err;15411522 }15421523 } else {
+6-59
net/vmw_vsock/hyperv_transport.c
···138138 ****************************************************************************139139 * The only valid Service GUIDs, from the perspectives of both the host and *140140 * Linux VM, that can be connected by the other end, must conform to this *141141- * format: <port>-facb-11e6-bd58-64006a7986d3, and the "port" must be in *142142- * this range [0, 0x7FFFFFFF]. *141141+ * format: <port>-facb-11e6-bd58-64006a7986d3. *143142 ****************************************************************************144143 *145144 * When we write apps on the host to connect(), the GUID ServiceID is used.146145 * When we write apps in Linux VM to connect(), we only need to specify the147146 * port and the driver will form the GUID and use that to request the host.148147 *149149- * From the perspective of Linux VM:150150- * 1. the local ephemeral port (i.e. the local auto-bound port when we call151151- * connect() without explicit bind()) is generated by __vsock_bind_stream(),152152- * and the range is [1024, 0xFFFFFFFF).153153- * 2. the remote ephemeral port (i.e. the auto-generated remote port for154154- * a connect request initiated by the host's connect()) is generated by155155- * hvs_remote_addr_init() and the range is [0x80000000, 0xFFFFFFFF).156148 */157157-158158-#define MAX_LISTEN_PORT ((u32)0x7FFFFFFF)159159-#define MAX_VM_LISTEN_PORT MAX_LISTEN_PORT160160-#define MAX_HOST_LISTEN_PORT MAX_LISTEN_PORT161161-#define MIN_HOST_EPHEMERAL_PORT (MAX_HOST_LISTEN_PORT + 1)162149163150/* 00000000-facb-11e6-bd58-64006a7986d3 */164151static const guid_t srv_id_template =···169182 unsigned int port = get_port_by_srv_id(svr_id);170183171184 vsock_addr_init(addr, VMADDR_CID_ANY, port);172172-}173173-174174-static void hvs_remote_addr_init(struct sockaddr_vm *remote,175175- struct sockaddr_vm *local)176176-{177177- static u32 host_ephemeral_port = MIN_HOST_EPHEMERAL_PORT;178178- struct sock *sk;179179-180180- /* Remote peer is always the host */181181- vsock_addr_init(remote, VMADDR_CID_HOST, VMADDR_PORT_ANY);182182-183183- while (1) {184184- /* Wrap around ? */185185- if (host_ephemeral_port < MIN_HOST_EPHEMERAL_PORT ||186186- host_ephemeral_port == VMADDR_PORT_ANY)187187- host_ephemeral_port = MIN_HOST_EPHEMERAL_PORT;188188-189189- remote->svm_port = host_ephemeral_port++;190190-191191- sk = vsock_find_connected_socket(remote, local);192192- if (!sk) {193193- /* Found an available ephemeral port */194194- return;195195- }196196-197197- /* Release refcnt got in vsock_find_connected_socket */198198- sock_put(sk);199199- }200185}201186202187static void hvs_set_channel_pending_send_size(struct vmbus_channel *chan)···300341 if_type = &chan->offermsg.offer.if_type;301342 if_instance = &chan->offermsg.offer.if_instance;302343 conn_from_host = chan->offermsg.offer.u.pipe.user_def[0];303303-304304- /* The host or the VM should only listen on a port in305305- * [0, MAX_LISTEN_PORT]306306- */307307- if (!is_valid_srv_id(if_type) ||308308- get_port_by_srv_id(if_type) > MAX_LISTEN_PORT)344344+ if (!is_valid_srv_id(if_type))309345 return;310346311347 hvs_addr_init(&addr, conn_from_host ? 
if_type : if_instance);···325371 vnew = vsock_sk(new);326372327373 hvs_addr_init(&vnew->local_addr, if_type);328328- hvs_remote_addr_init(&vnew->remote_addr, &vnew->local_addr);329374375375+ /* Remote peer is always the host */376376+ vsock_addr_init(&vnew->remote_addr,377377+ VMADDR_CID_HOST, VMADDR_PORT_ANY);378378+ vnew->remote_addr.svm_port = get_port_by_srv_id(if_instance);330379 ret = vsock_assign_transport(vnew, vsock_sk(sk));331380 /* Transport assigned (looking at remote_addr) must be the332381 * same where we received the request.···723766724767static bool hvs_stream_allow(u32 cid, u32 port)725768{726726- /* The host's port range [MIN_HOST_EPHEMERAL_PORT, 0xFFFFFFFF) is727727- * reserved as ephemeral ports, which are used as the host's ports728728- * when the host initiates connections.729729- *730730- * Perform this check in the guest so an immediate error is produced731731- * instead of a timeout.732732- */733733- if (port > MAX_HOST_LISTEN_PORT)734734- return false;735735-736769 if (cid == VMADDR_CID_HOST)737770 return true;738771
···
 rdev_set_wiphy_params(struct cfg80211_registered_device *rdev, u32 changed)
 {
 	int ret;
+
+	if (!rdev->ops->set_wiphy_params)
+		return -EOPNOTSUPP;
+
 	trace_rdev_set_wiphy_params(&rdev->wiphy, changed);
 	ret = rdev->ops->set_wiphy_params(&rdev->wiphy, changed);
 	trace_rdev_return_int(&rdev->wiphy, ret);
···
 				       chandef, cac_time_ms);
 	trace_rdev_return_int(&rdev->wiphy, ret);
 	return ret;
+}
+
+static inline void
+rdev_end_cac(struct cfg80211_registered_device *rdev,
+	     struct net_device *dev)
+{
+	trace_rdev_end_cac(&rdev->wiphy, dev);
+	if (rdev->ops->end_cac)
+		rdev->ops->end_cac(&rdev->wiphy, dev);
+	trace_rdev_return_void(&rdev->wiphy);
 }
 
 static inline int
+32-4
net/wireless/reg.c
···
 static void handle_channel_custom(struct wiphy *wiphy,
 				  struct ieee80211_channel *chan,
-				  const struct ieee80211_regdomain *regd)
+				  const struct ieee80211_regdomain *regd,
+				  u32 min_bw)
 {
 	u32 bw_flags = 0;
 	const struct ieee80211_reg_rule *reg_rule = NULL;
 	const struct ieee80211_power_rule *power_rule = NULL;
 	u32 bw;
 
-	for (bw = MHZ_TO_KHZ(20); bw >= MHZ_TO_KHZ(5); bw = bw / 2) {
+	for (bw = MHZ_TO_KHZ(20); bw >= min_bw; bw = bw / 2) {
 		reg_rule = freq_reg_info_regd(MHZ_TO_KHZ(chan->center_freq),
 					      regd, bw);
 		if (!IS_ERR(reg_rule))
···
 	if (!sband)
 		return;
 
+	/*
+	 * We currently assume that you always want at least 20 MHz,
+	 * otherwise channel 12 might get enabled if this rule is
+	 * compatible with the US rule, which permits 2402 - 2472 MHz.
+	 */
 	for (i = 0; i < sband->n_channels; i++)
-		handle_channel_custom(wiphy, &sband->channels[i], regd);
+		handle_channel_custom(wiphy, &sband->channels[i], regd,
+				      MHZ_TO_KHZ(20));
 }
 
 /* Used by drivers prior to wiphy registration */
···
 }
 EXPORT_SYMBOL(regulatory_pre_cac_allowed);
 
+static void cfg80211_check_and_end_cac(struct cfg80211_registered_device *rdev)
+{
+	struct wireless_dev *wdev;
+	/* If we finished CAC or received radar, we should end any
+	 * CAC running on the same channels.
+	 * The check !cfg80211_chandef_dfs_usable covers two cases:
+	 * either all channels are available - then the CAC_FINISHED
+	 * event has affected another wdev's state, or a channel in
+	 * the wdev's chandef is in the unavailable state - then the
+	 * RADAR_DETECTED event has affected another wdev's state.
+	 * In both cases we should end the CAC on the wdev.
+	 */
+	list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list) {
+		if (wdev->cac_started &&
+		    !cfg80211_chandef_dfs_usable(&rdev->wiphy, &wdev->chandef))
+			rdev_end_cac(rdev, wdev->netdev);
+	}
+}
+
 void regulatory_propagate_dfs_state(struct wiphy *wiphy,
 				    struct cfg80211_chan_def *chandef,
 				    enum nl80211_dfs_state dfs_state,
···
 		cfg80211_set_dfs_state(&rdev->wiphy, chandef, dfs_state);
 
 		if (event == NL80211_RADAR_DETECTED ||
-		    event == NL80211_RADAR_CAC_FINISHED)
+		    event == NL80211_RADAR_CAC_FINISHED) {
 			cfg80211_sched_dfs_chan_update(rdev);
+			cfg80211_check_and_end_cac(rdev);
+		}
 
 		nl80211_radar_notify(rdev, chandef, event, NULL, GFP_KERNEL);
 	}
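The 20 MHz floor added above can be sanity-checked with simple arithmetic: 2.4 GHz channel 12 is centred at 2467 MHz, so at 20 MHz it would span 2457-2477 MHz and overflow a 2402-2472 MHz rule, while at 10 MHz (2462-2472 MHz) it would still fit and the channel could be enabled. A standalone sketch of that containment check follows; fits_rule() is an illustrative helper, not cfg80211's actual rule matching.

#include <stdbool.h>
#include <stdio.h>

/* Sketch: does a channel of a given width fit inside a regulatory
 * rule's frequency range? Frequencies are in kHz, mirroring the
 * MHZ_TO_KHZ() convention used by reg.c.
 */
static bool fits_rule(unsigned int center_khz, unsigned int bw_khz,
		      unsigned int rule_start_khz, unsigned int rule_end_khz)
{
	return center_khz - bw_khz / 2 >= rule_start_khz &&
	       center_khz + bw_khz / 2 <= rule_end_khz;
}

int main(void)
{
	const unsigned int ch12 = 2467000;	/* channel 12 centre */
	const unsigned int start = 2402000, end = 2472000;
	unsigned int bw;

	for (bw = 20000; bw >= 5000; bw /= 2)
		printf("channel 12 at %u kHz width: %s\n", bw,
		       fits_rule(ch12, bw, start, end) ? "fits" : "does not fit");
	return 0;
}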
+3-3
net/wireless/sme.c
···
 	if (wdev->conn_owner_nlportid) {
 		switch (wdev->iftype) {
 		case NL80211_IFTYPE_ADHOC:
-			cfg80211_leave_ibss(rdev, wdev->netdev, false);
+			__cfg80211_leave_ibss(rdev, wdev->netdev, false);
 			break;
 		case NL80211_IFTYPE_AP:
 		case NL80211_IFTYPE_P2P_GO:
-			cfg80211_stop_ap(rdev, wdev->netdev, false);
+			__cfg80211_stop_ap(rdev, wdev->netdev, false);
 			break;
 		case NL80211_IFTYPE_MESH_POINT:
-			cfg80211_leave_mesh(rdev, wdev->netdev);
+			__cfg80211_leave_mesh(rdev, wdev->netdev);
 			break;
 		case NL80211_IFTYPE_STATION:
 		case NL80211_IFTYPE_P2P_CLIENT:
···
 {
 	struct rt5640_priv *rt5640 = snd_soc_component_get_drvdata(component);
 
+	/*
+	 * soc_remove_component() force-disables jack and thus rt5640->jack
+	 * could be NULL at the time of driver's module unloading.
+	 */
+	if (!rt5640->jack)
+		return;
+
 	disable_irq(rt5640->irq);
 	rt5640_cancel_work(rt5640);
 
···
 	stop_traffic
 	local ucth1=${uc_rate[1]}
 
-	start_traffic $h1 own bc bc
+	start_traffic $h1 192.0.2.65 bc bc
 
 	local d0=$(date +%s)
 	local t0=$(ethtool_stats_get $h3 rx_octets_prio_0)
···
 		    ret = 100 * ($ucth1 - $ucth2) / $ucth1
 		    if (ret > 0) { ret } else { 0 }
 		")
-	check_err $(bc <<< "$deg > 25")
+
+	# Minimum shaper of 200Mbps on MC TCs should cause about 20% of
+	# degradation on 1Gbps link.
+	check_err $(bc <<< "$deg < 15") "Minimum shaper not in effect"
+	check_err $(bc <<< "$deg > 25") "MC traffic degrades UC performance too much"
 
 	local interval=$((d1 - d0))
 	local mc_ir=$(rate $u0 $u1 $interval)
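For the 15-25% window introduced above: the script computes deg = 100 * (ucth1 - ucth2) / ucth1, and with roughly 1 Gbps of unicast throughput before the multicast stream starts and roughly 800 Mbps once the 200 Mbps multicast minimum shaper claims its share, that works out to about 100 * (1000 - 800) / 1000 = 20%. A reading below 15% therefore suggests the minimum shaper is not engaged, while one above 25% suggests multicast is degrading unicast beyond what the shaper alone explains; the absolute throughputs here are illustrative, only the ratio matters to the check.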