@@ -315,6 +315,7 @@
 Mythri P K <mythripk@ti.com>
 Nadia Yvette Chambers <nyc@holomorphy.com> William Lee Irwin III <wli@holomorphy.com>
 Nathan Chancellor <nathan@kernel.org> <natechancellor@gmail.com>
+Neil Armstrong <neil.armstrong@linaro.org> <narmstrong@baylibre.com>
 Nguyen Anh Quynh <aquynh@gmail.com>
 Nicholas Piggin <npiggin@gmail.com> <npiggen@suse.de>
 Nicholas Piggin <npiggin@gmail.com> <npiggin@kernel.dk>
@@ -8,7 +8,7 @@
 title: Amlogic Meson Firmware registers Interface
 
 maintainers:
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
 
 description: |
   The Meson SoCs have a register bank with status and data shared with the

@@ -8,7 +8,7 @@
 title: Amlogic specific extensions to the Synopsys Designware HDMI Controller
 
 maintainers:
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
 
 allOf:
   - $ref: /schemas/sound/name-prefix.yaml#

@@ -8,7 +8,7 @@
 title: Amlogic Meson Display Controller
 
 maintainers:
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
 
 description: |
   The Amlogic Meson Display controller is composed of several components

@@ -8,7 +8,7 @@
 
 maintainers:
   - Andrzej Hajda <andrzej.hajda@intel.com>
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
   - Robert Foss <robert.foss@linaro.org>
 
 properties:

@@ -8,7 +8,7 @@
 
 maintainers:
   - Phong LE <ple@baylibre.com>
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
 
 description: |
   The IT66121 is a high-performance and low-power single channel HDMI

@@ -7,7 +7,7 @@
 title: Generic i.MX bus frequency device
 
 maintainers:
-  - Leonard Crestez <leonard.crestez@nxp.com>
+  - Peng Fan <peng.fan@nxp.com>
 
 description: |
   The i.MX SoC family has multiple buses for which clock frequency (and

@@ -8,7 +8,7 @@
 title: Amlogic Meson Message-Handling-Unit Controller
 
 maintainers:
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
 
 description: |
   The Amlogic's Meson SoCs Message-Handling-Unit (MHU) is a mailbox controller

@@ -8,7 +8,7 @@
 title: Amlogic Video Decoder
 
 maintainers:
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
   - Maxime Jourdan <mjourdan@baylibre.com>
 
 description: |

@@ -8,7 +8,7 @@
 title: Amlogic Meson AO-CEC Controller
 
 maintainers:
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
 
 description: |
   The Amlogic Meson AO-CEC module is present is Amlogic SoCs and its purpose is

@@ -7,7 +7,7 @@
 title: i.MX8M DDR Controller
 
 maintainers:
-  - Leonard Crestez <leonard.crestez@nxp.com>
+  - Peng Fan <peng.fan@nxp.com>
 
 description:
   The DDRC block is integrated in i.MX8M for interfacing with DDR based

@@ -7,7 +7,7 @@
 title: Khadas on-board Microcontroller Device Tree Bindings
 
 maintainers:
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
 
 description: |
   Khadas embeds a microcontroller on their VIM and Edge boards adding some

@@ -8,7 +8,7 @@
 title: Amlogic Meson DWMAC Ethernet controller
 
 maintainers:
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
   - Martin Blumenstingl <martin.blumenstingl@googlemail.com>
 
 # We need a select here so we don't match all nodes with 'snps,dwmac'

@@ -7,7 +7,7 @@
 title: Qualcomm Technologies, Inc. SC7280 TLMM block
 
 maintainers:
-  - Rajendra Nayak <rnayak@codeaurora.org>
+  - Bjorn Andersson <andersson@kernel.org>
 
 description: |
   This binding describes the Top Level Mode Multiplexer block found in the

@@ -8,7 +8,7 @@
 title: Amlogic Meson Everything-Else Power Domains
 
 maintainers:
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
 
 description: |+
   The Everything-Else Power Domains node should be the child of a syscon

@@ -7,7 +7,7 @@
 title: Qualcomm RPM/RPMh Power domains
 
 maintainers:
-  - Rajendra Nayak <rnayak@codeaurora.org>
+  - Bjorn Andersson <andersson@kernel.org>
 
 description:
   For RPM/RPMh Power domains, we communicate a performance state to RPM/RPMh

@@ -8,7 +8,7 @@
 title: Amlogic Meson Random number generator
 
 maintainers:
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
 
 properties:
   compatible:

@@ -8,7 +8,7 @@
 title: Amlogic Meson SoC UART Serial Interface
 
 maintainers:
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
 
 description: |
   The Amlogic Meson SoC UART Serial Interface is present on a large range

@@ -8,7 +8,7 @@
 title: Amlogic Canvas Video Lookup Table
 
 maintainers:
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
   - Maxime Jourdan <mjourdan@baylibre.com>
 
 description: |

@@ -8,7 +8,7 @@
 title: Amlogic Meson G12A DWC3 USB SoC Controller Glue
 
 maintainers:
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
 
 description: |
   The Amlogic G12A embeds a DWC3 USB IP Core configured for USB2 and USB3

@@ -8,7 +8,7 @@
 title: Meson GXBB SoCs Watchdog timer
 
 maintainers:
-  - Neil Armstrong <narmstrong@baylibre.com>
+  - Neil Armstrong <neil.armstrong@linaro.org>
 
 allOf:
   - $ref: watchdog.yaml#
Documentation/i2c/dev-interface.rst (+1 -1)

@@ -148,7 +148,7 @@
 You do not need to pass the address byte; instead, set it through
 ioctl I2C_SLAVE before you try to access the device.
 
-You can do SMBus level transactions (see documentation file smbus-protocol
+You can do SMBus level transactions (see documentation file smbus-protocol.rst
 for details) through the following functions::
 
   __s32 i2c_smbus_write_quick(int file, __u8 value);
Documentation/i2c/slave-interface.rst (+3 -3)

@@ -32,9 +32,9 @@
 ===========
 
 I2C slave backends behave like standard I2C clients. So, you can instantiate
-them as described in the document 'instantiating-devices'. The only difference
-is that i2c slave backends have their own address space. So, you have to add
-0x1000 to the address you would originally request. An example for
+them as described in the document instantiating-devices.rst. The only
+difference is that i2c slave backends have their own address space. So, you
+have to add 0x1000 to the address you would originally request. An example for
 instantiating the slave-eeprom driver from userspace at the 7 bit address 0x64
 on bus 1::
 
Documentation/i2c/writing-clients.rst (+2 -2)

@@ -364,7 +364,7 @@
 contains for each message the client address, the number of bytes of the
 message and the message data itself.
 
-You can read the file ``i2c-protocol`` for more information about the
+You can read the file i2c-protocol.rst for more information about the
 actual I2C protocol.
 
 
@@ -414,7 +414,7 @@
 value, except for block transactions, which return the number of values
 read. The block buffers need not be longer than 32 bytes.
 
-You can read the file ``smbus-protocol`` for more information about the
+You can read the file smbus-protocol.rst for more information about the
 actual SMBus protocol.
 
 
Documentation/networking/mptcp-sysctl.rst (-1)

@@ -47,7 +47,6 @@
	Default: 1
 
 pm_type - INTEGER
-
	Set the default path manager type to use for each new MPTCP
	socket. In-kernel path management will control subflow
	connections and address advertisements according to
Documentation/networking/nf_conntrack-sysctl.rst (-9)

@@ -70,15 +70,6 @@
	Default for generic timeout. This refers to layer 4 unknown/unsupported
	protocols.
 
-nf_conntrack_helper - BOOLEAN
-	- 0 - disabled (default)
-	- not 0 - enabled
-
-	Enable automatic conntrack helper assignment.
-	If disabled it is required to set up iptables rules to assign
-	helpers to connections. See the CT target description in the
-	iptables-extensions(8) man page for further information.
-
 nf_conntrack_icmp_timeout - INTEGER (seconds)
	default 30
 
MAINTAINERS (+20 -17)

@@ -671,7 +671,8 @@
 F:	include/trace/events/afs.h
 
 AGPGART DRIVER
-M:	David Airlie <airlied@linux.ie>
+M:	David Airlie <airlied@redhat.com>
+L:	dri-devel@lists.freedesktop.org
 S:	Maintained
 T:	git git://anongit.freedesktop.org/drm/drm
 F:	drivers/char/agp/
@@ -1011,7 +1010,6 @@
 
 AMD MP2 I2C DRIVER
 M:	Elie Morisse <syniurge@gmail.com>
-M:	Nehal Shah <nehal-bakulchandra.shah@amd.com>
 M:	Shyam Sundar S K <shyam-sundar.s-k@amd.com>
 L:	linux-i2c@vger.kernel.org
 S:	Maintained
@@ -1803,7 +1803,7 @@
 N:	sun50i
 
 ARM/Amlogic Meson SoC CLOCK FRAMEWORK
-M:	Neil Armstrong <narmstrong@baylibre.com>
+M:	Neil Armstrong <neil.armstrong@linaro.org>
M:	Jerome Brunet <jbrunet@baylibre.com>
 L:	linux-amlogic@lists.infradead.org
 S:	Maintained
@@ -1828,7 +1828,7 @@
 F:	sound/soc/meson/
 
 ARM/Amlogic Meson SoC support
-M:	Neil Armstrong <narmstrong@baylibre.com>
+M:	Neil Armstrong <neil.armstrong@linaro.org>
 M:	Kevin Hilman <khilman@baylibre.com>
 R:	Jerome Brunet <jbrunet@baylibre.com>
 R:	Martin Blumenstingl <martin.blumenstingl@googlemail.com>
@@ -2531,7 +2531,7 @@
 F:	arch/arm/mach-orion5x/ts78xx-*
 
 ARM/OXNAS platform support
-M:	Neil Armstrong <narmstrong@baylibre.com>
+M:	Neil Armstrong <neil.armstrong@linaro.org>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	linux-oxnas@groups.io (moderated for non-subscribers)
 S:	Maintained
@@ -5245,6 +5245,7 @@
 F:	include/linux/blk-cgroup.h
 
 CONTROL GROUP - CPUSET
+M:	Waiman Long <longman@redhat.com>
 M:	Zefan Li <lizefan.x@bytedance.com>
 L:	cgroups@vger.kernel.org
 S:	Maintained
@@ -6754,7 +6753,7 @@
 F:	drivers/gpu/drm/panel/panel-widechips-ws2401.c
 
 DRM DRIVERS
-M:	David Airlie <airlied@linux.ie>
+M:	David Airlie <airlied@gmail.com>
 M:	Daniel Vetter <daniel@ffwll.ch>
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
@@ -6793,7 +6792,7 @@
 F:	drivers/gpu/drm/sun4i/
 
 DRM DRIVERS FOR AMLOGIC SOCS
-M:	Neil Armstrong <narmstrong@baylibre.com>
+M:	Neil Armstrong <neil.armstrong@linaro.org>
 L:	dri-devel@lists.freedesktop.org
 L:	linux-amlogic@lists.infradead.org
 S:	Supported
@@ -6815,7 +6814,7 @@
 
 DRM DRIVERS FOR BRIDGE CHIPS
 M:	Andrzej Hajda <andrzej.hajda@intel.com>
-M:	Neil Armstrong <narmstrong@baylibre.com>
+M:	Neil Armstrong <neil.armstrong@linaro.org>
 M:	Robert Foss <robert.foss@linaro.org>
 R:	Laurent Pinchart <Laurent.pinchart@ideasonboard.com>
 R:	Jonas Karlman <jonas@kwiboo.se>
@@ -8653,8 +8652,8 @@
 
 GOOGLE ETHERNET DRIVERS
 M:	Jeroen de Borst <jeroendb@google.com>
-R:	Catherine Sullivan <csully@google.com>
-R:	David Awogbemila <awogbemila@google.com>
+M:	Catherine Sullivan <csully@google.com>
+R:	Shailend Chand <shailend@google.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	Documentation/networking/device_drivers/ethernet/google/gve.rst
@@ -9123,7 +9122,7 @@
 F:	drivers/dma/hisi_dma.c
 
 HISILICON GPIO DRIVER
-M:	Luo Jiaxing <luojiaxing@huawei.com>
+M:	Jay Fang <f.fangjian@huawei.com>
 L:	linux-gpio@vger.kernel.org
 S:	Maintained
 F:	drivers/gpio/gpio-hisi.c
@@ -10829,7 +10828,7 @@
 
 ITE IT66121 HDMI BRIDGE DRIVER
 M:	Phong LE <ple@baylibre.com>
-M:	Neil Armstrong <narmstrong@baylibre.com>
+M:	Neil Armstrong <neil.armstrong@linaro.org>
 S:	Maintained
 T:	git git://anongit.freedesktop.org/drm/drm-misc
 F:	Documentation/devicetree/bindings/display/bridge/ite,it66121.yaml
@@ -11348,7 +11347,7 @@
 F:	kernel/module/kdb.c
 
 KHADAS MCU MFD DRIVER
-M:	Neil Armstrong <narmstrong@baylibre.com>
+M:	Neil Armstrong <neil.armstrong@linaro.org>
 L:	linux-amlogic@lists.infradead.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/mfd/khadas,mcu.yaml
@@ -13219,7 +13218,7 @@
 F:	drivers/watchdog/menz69_wdt.c
 
 MESON AO CEC DRIVER FOR AMLOGIC SOCS
-M:	Neil Armstrong <narmstrong@baylibre.com>
+M:	Neil Armstrong <neil.armstrong@linaro.org>
 L:	linux-media@vger.kernel.org
 L:	linux-amlogic@lists.infradead.org
 S:	Supported
@@ -13230,7 +13229,7 @@
 F:	drivers/media/cec/platform/meson/ao-cec.c
 
 MESON GE2D DRIVER FOR AMLOGIC SOCS
-M:	Neil Armstrong <narmstrong@baylibre.com>
+M:	Neil Armstrong <neil.armstrong@linaro.org>
 L:	linux-media@vger.kernel.org
 L:	linux-amlogic@lists.infradead.org
 S:	Supported
@@ -13246,7 +13245,7 @@
 F:	drivers/mtd/nand/raw/meson_*
 
 MESON VIDEO DECODER DRIVER FOR AMLOGIC SOCS
-M:	Neil Armstrong <narmstrong@baylibre.com>
+M:	Neil Armstrong <neil.armstrong@linaro.org>
 L:	linux-media@vger.kernel.org
 L:	linux-amlogic@lists.infradead.org
 S:	Supported
@@ -16858,6 +16857,7 @@
 
 QUALCOMM ETHQOS ETHERNET DRIVER
 M:	Vinod Koul <vkoul@kernel.org>
+R:	Bhupesh Sharma <bhupesh.sharma@linaro.org>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/net/qcom,ethqos.txt
@@ -19961,6 +19959,7 @@
 F:	drivers/net/team/
 F:	include/linux/if_team.h
 F:	include/uapi/linux/if_team.h
+F:	tools/testing/selftests/net/team/
 
 TECHNOLOGIC SYSTEMS TS-5500 PLATFORM SUPPORT
 M:	"Savoir-faire Linux Inc." <kernel@savoirfairelinux.com>
@@ -21568,7 +21565,7 @@
 F:	include/uapi/linux/virtio_gpio.h
 
 VIRTIO GPU DRIVER
-M:	David Airlie <airlied@linux.ie>
+M:	David Airlie <airlied@redhat.com>
 M:	Gerd Hoffmann <kraxel@redhat.com>
 R:	Gurchetan Singh <gurchetansingh@chromium.org>
 R:	Chia-I Wu <olvaffe@gmail.com>
@@ -88,3 +88,8 @@
		};
	};
 };
+
+&wlan_host_wake_l {
+	/* Kevin has an external pull up, but Bob does not. */
+	rockchip,pins = <0 RK_PB0 RK_FUNC_GPIO &pcfg_pull_up>;
+};
@@ -244,6 +244,14 @@
 &edp {
	status = "okay";
 
+	/*
+	 * eDP PHY/clk don't sync reliably at anything other than 24 MHz. Only
+	 * set this here, because rk3399-gru.dtsi ensures we can generate this
+	 * off GPLL=600MHz, whereas some other RK3399 boards may not.
+	 */
+	assigned-clocks = <&cru PCLK_EDP>;
+	assigned-clock-rates = <24000000>;
+
	ports {
		edp_out: port@1 {
			reg = <1>;
@@ -586,6 +578,7 @@
	};
 
	wlan_host_wake_l: wlan-host-wake-l {
+		/* Kevin has an external pull up, but Bob does not */
		rockchip,pins = <0 RK_PB0 RK_FUNC_GPIO &pcfg_pull_none>;
	};
 };
@@ -2114,7 +2114,7 @@
	 * at, which would end badly once inaccessible.
	 */
	kmemleak_free_part(__hyp_bss_start, __hyp_bss_end - __hyp_bss_start);
-	kmemleak_free_part(__va(hyp_mem_base), hyp_mem_size);
+	kmemleak_free_part_phys(hyp_mem_base, hyp_mem_size);
	return pkvm_drop_host_privileges();
 }
 
arch/arm64/mm/mmu.c (+18 -14)

@@ -331,12 +331,6 @@
	}
	BUG_ON(p4d_bad(p4d));
 
-	/*
-	 * No need for locking during early boot. And it doesn't work as
-	 * expected with KASLR enabled.
-	 */
-	if (system_state != SYSTEM_BOOTING)
-		mutex_lock(&fixmap_lock);
	pudp = pud_set_fixmap_offset(p4dp, addr);
	do {
		pud_t old_pud = READ_ONCE(*pudp);
@@ -362,15 +368,13 @@
	} while (pudp++, addr = next, addr != end);
 
	pud_clear_fixmap();
-	if (system_state != SYSTEM_BOOTING)
-		mutex_unlock(&fixmap_lock);
 }
 
-static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
-				 unsigned long virt, phys_addr_t size,
-				 pgprot_t prot,
-				 phys_addr_t (*pgtable_alloc)(int),
-				 int flags)
+static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
+					unsigned long virt, phys_addr_t size,
+					pgprot_t prot,
+					phys_addr_t (*pgtable_alloc)(int),
+					int flags)
 {
	unsigned long addr, end, next;
	pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
@@ -392,8 +400,20 @@
	} while (pgdp++, addr = next, addr != end);
 }
 
+static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
+				 unsigned long virt, phys_addr_t size,
+				 pgprot_t prot,
+				 phys_addr_t (*pgtable_alloc)(int),
+				 int flags)
+{
+	mutex_lock(&fixmap_lock);
+	__create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
+				    pgtable_alloc, flags);
+	mutex_unlock(&fixmap_lock);
+}
+
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-extern __alias(__create_pgd_mapping)
+extern __alias(__create_pgd_mapping_locked)
 void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
			     phys_addr_t size, pgprot_t prot,
			     phys_addr_t (*pgtable_alloc)(int), int flags);
@@ -224,8 +224,18 @@
	  Enabling this option will probably slow down your kernel.
 
 config 64BIT
-	def_bool "$(ARCH)" = "parisc64"
+	def_bool y if "$(ARCH)" = "parisc64"
+	bool "64-bit kernel" if "$(ARCH)" = "parisc"
	depends on PA8X00
+	help
+	  Enable this if you want to support 64bit kernel on PA-RISC platform.
+
+	  At the moment, only people willing to use more than 2GB of RAM,
+	  or having a 64bit-only capable PA-RISC machine should say Y here.
+
+	  Since there is no 64bit userland on PA-RISC, there is no point to
+	  enable this option otherwise. The 64bit kernel is significantly bigger
+	  and slower than the 32bit one.
 
 choice
	prompt "Kernel page size"
arch/riscv/Kconfig (+1)

@@ -386,6 +386,7 @@
 config RISCV_ISA_SVPBMT
	bool "SVPBMT extension support"
	depends on 64BIT && MMU
+	depends on !XIP_KERNEL
	select RISCV_ALTERNATIVE
	default y
	help
arch/riscv/Kconfig.erratas (+2 -2)

@@ -46,7 +46,7 @@
 
 config ERRATA_THEAD_PBMT
	bool "Apply T-Head memory type errata"
-	depends on ERRATA_THEAD && 64BIT
+	depends on ERRATA_THEAD && 64BIT && MMU
	select RISCV_ALTERNATIVE_EARLY
	default y
	help
@@ -57,7 +57,7 @@
 
 config ERRATA_THEAD_CMO
	bool "Apply T-Head cache management errata"
-	depends on ERRATA_THEAD
+	depends on ERRATA_THEAD && MMU
	select RISCV_DMA_NONCOHERENT
	default y
	help
@@ -12,7 +12,7 @@
 #include <linux/of_device.h>
 #include <asm/cacheflush.h>
 
-static unsigned int riscv_cbom_block_size = L1_CACHE_BYTES;
+unsigned int riscv_cbom_block_size;
 static bool noncoherent_supported;
 
 void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
@@ -79,18 +79,16 @@
 void riscv_init_cbom_blocksize(void)
 {
	struct device_node *node;
+	unsigned long cbom_hartid;
+	u32 val, probed_block_size;
	int ret;
-	u32 val;
 
+	probed_block_size = 0;
	for_each_of_cpu_node(node) {
		unsigned long hartid;
-		int cbom_hartid;
 
		ret = riscv_of_processor_hartid(node, &hartid);
		if (ret)
-			continue;
-
-		if (hartid < 0)
			continue;
 
		/* set block-size for cbom extension if available */
@@ -97,19 +99,22 @@
		if (ret)
			continue;
 
-		if (!riscv_cbom_block_size) {
-			riscv_cbom_block_size = val;
+		if (!probed_block_size) {
+			probed_block_size = val;
			cbom_hartid = hartid;
		} else {
-			if (riscv_cbom_block_size != val)
-				pr_warn("cbom-block-size mismatched between harts %d and %lu\n",
+			if (probed_block_size != val)
+				pr_warn("cbom-block-size mismatched between harts %lu and %lu\n",
					cbom_hartid, hartid);
		}
	}
+
+	if (probed_block_size)
+		riscv_cbom_block_size = probed_block_size;
 }
 #endif
 
 void riscv_noncoherent_supported(void)
 {
+	WARN(!riscv_cbom_block_size,
+	     "Non-coherent DMA support enabled without a block size\n");
	noncoherent_supported = true;
 }
arch/s390/kvm/gaccess.c (+13 -3)

@@ -489,6 +489,8 @@
	PROT_TYPE_ALC = 2,
	PROT_TYPE_DAT = 3,
	PROT_TYPE_IEP = 4,
+	/* Dummy value for passing an initialized value when code != PGM_PROTECTION */
+	PROT_NONE,
 };
 
 static int trans_exc_ending(struct kvm_vcpu *vcpu, int code, unsigned long gva, u8 ar,
@@ -506,6 +504,10 @@
	switch (code) {
	case PGM_PROTECTION:
		switch (prot) {
+		case PROT_NONE:
+			/* We should never get here, acts like termination */
+			WARN_ON_ONCE(1);
+			break;
		case PROT_TYPE_IEP:
			tec->b61 = 1;
			fallthrough;
@@ -974,8 +968,10 @@
		return rc;
	} else {
		gpa = kvm_s390_real_to_abs(vcpu, ga);
-		if (kvm_is_error_gpa(vcpu->kvm, gpa))
+		if (kvm_is_error_gpa(vcpu->kvm, gpa)) {
			rc = PGM_ADDRESSING;
+			prot = PROT_NONE;
+		}
	}
	if (rc)
		return trans_exc(vcpu, rc, ga, ar, mode, prot);
@@ -1120,8 +1112,6 @@
		if (rc == PGM_PROTECTION && try_storage_prot_override)
			rc = access_guest_page_with_key(vcpu->kvm, mode, gpas[idx],
							data, fragment_len, PAGE_SPO_ACC);
-		if (rc == PGM_PROTECTION)
-			prot = PROT_TYPE_KEYC;
		if (rc)
			break;
		len -= fragment_len;
@@ -1129,6 +1123,10 @@
	if (rc > 0) {
		bool terminate = (mode == GACC_STORE) && (idx > 0);
 
+		if (rc == PGM_PROTECTION)
+			prot = PROT_TYPE_KEYC;
+		else
+			prot = PROT_NONE;
		rc = trans_exc_ending(vcpu, rc, ga, ar, mode, prot, terminate);
	}
 out_unlock:
arch/s390/kvm/interrupt.c (+1 -1)

@@ -3324,7 +3324,7 @@
	if (gaite->count == 0)
		return;
	if (gaite->aisb != 0)
-		set_bit_inv(gaite->aisbo, (unsigned long *)gaite->aisb);
+		set_bit_inv(gaite->aisbo, phys_to_virt(gaite->aisb));
 
	kvm = kvm_s390_pci_si_to_kvm(aift, si);
	if (!kvm)
arch/s390/kvm/kvm-s390.c (+2 -2)

@@ -505,7 +505,7 @@
		goto out;
	}
 
-	if (kvm_s390_pci_interp_allowed()) {
+	if (IS_ENABLED(CONFIG_VFIO_PCI_ZDEV_KVM)) {
		rc = kvm_s390_pci_init();
		if (rc) {
			pr_err("Unable to allocate AIFT for PCI\n");
@@ -527,7 +527,7 @@
 void kvm_arch_exit(void)
 {
	kvm_s390_gib_destroy();
-	if (kvm_s390_pci_interp_allowed())
+	if (IS_ENABLED(CONFIG_VFIO_PCI_ZDEV_KVM))
		kvm_s390_pci_exit();
	debug_unregister(kvm_s390_dbf);
	debug_unregister(kvm_s390_dbf_uv);
@@ -132,10 +132,18 @@
 # The wrappers will select whether using "malloc" or the kernel allocator.
 LINK_WRAPS = -Wl,--wrap,malloc -Wl,--wrap,free -Wl,--wrap,calloc
 
+# Avoid binutils 2.39+ warnings by marking the stack non-executable and
+# ignorning warnings for the kallsyms sections.
+LDFLAGS_EXECSTACK = -z noexecstack
+ifeq ($(CONFIG_LD_IS_BFD),y)
+LDFLAGS_EXECSTACK += $(call ld-option,--no-warn-rwx-segments)
+endif
+
 LD_FLAGS_CMDLINE = $(foreach opt,$(KBUILD_LDFLAGS),-Wl,$(opt))
 
 # Used by link-vmlinux.sh which has special support for um link
 export CFLAGS_vmlinux := $(LINK-y) $(LINK_WRAPS) $(LD_FLAGS_CMDLINE)
+export LDFLAGS_vmlinux := $(LDFLAGS_EXECSTACK)
 
 # When cleaning we don't include .config, so we don't include
 # TT or skas makefiles and don't clean skas_ptregs.h.
@@ -33,7 +33,7 @@
 #include "um_arch.h"
 
 #define DEFAULT_COMMAND_LINE_ROOT "root=98:0"
-#define DEFAULT_COMMAND_LINE_CONSOLE "console=tty"
+#define DEFAULT_COMMAND_LINE_CONSOLE "console=tty0"
 
 /* Changed in add_arg and setup_arch, which run before SMP is started */
 static char __initdata command_line[COMMAND_LINE_SIZE] = { 0 };
@@ -315,7 +315,6 @@
 {
	struct kvm_lapic *apic = vcpu->arch.apic;
	struct kvm_cpuid_entry2 *best;
-	u64 guest_supported_xcr0;
 
	best = kvm_find_cpuid_entry(vcpu, 1);
	if (best && apic) {
@@ -326,10 +327,16 @@
		kvm_apic_set_version(vcpu);
	}
 
-	guest_supported_xcr0 =
+	vcpu->arch.guest_supported_xcr0 =
		cpuid_get_supported_xcr0(vcpu->arch.cpuid_entries, vcpu->arch.cpuid_nent);
 
-	vcpu->arch.guest_fpu.fpstate->user_xfeatures = guest_supported_xcr0;
+	/*
+	 * FP+SSE can always be saved/restored via KVM_{G,S}ET_XSAVE, even if
+	 * XSAVE/XCRO are not exposed to the guest, and even if XSAVE isn't
+	 * supported by the host.
+	 */
+	vcpu->arch.guest_fpu.fpstate->user_xfeatures = vcpu->arch.guest_supported_xcr0 |
						       XFEATURE_MASK_FPSSE;
 
	kvm_update_pv_runtime(vcpu);
@@ -295,7 +295,7 @@
 
	while (!blk_try_enter_queue(q, pm)) {
		if (flags & BLK_MQ_REQ_NOWAIT)
-			return -EBUSY;
+			return -EAGAIN;
 
		/*
		 * read pair of barrier in blk_freeze_queue_start(), we need to
@@ -325,7 +325,7 @@
		if (test_bit(GD_DEAD, &disk->state))
			goto dead;
		bio_wouldblock_error(bio);
-		return -EBUSY;
+		return -EAGAIN;
	}
 
	/*
block/blk-lib.c (+8 -3)

@@ -309,6 +309,11 @@
	struct blk_plug plug;
	int ret = 0;
 
+	/* make sure that "len << SECTOR_SHIFT" doesn't overflow */
+	if (max_sectors > UINT_MAX >> SECTOR_SHIFT)
+		max_sectors = UINT_MAX >> SECTOR_SHIFT;
+	max_sectors &= ~bs_mask;
+
	if (max_sectors == 0)
		return -EOPNOTSUPP;
	if ((sector | nr_sects) & bs_mask)
@@ -327,10 +322,10 @@
 
		bio = blk_next_bio(bio, bdev, 0, REQ_OP_SECURE_ERASE, gfp);
		bio->bi_iter.bi_sector = sector;
-		bio->bi_iter.bi_size = len;
+		bio->bi_iter.bi_size = len << SECTOR_SHIFT;
 
-		sector += len << SECTOR_SHIFT;
-		nr_sects -= len << SECTOR_SHIFT;
+		sector += len;
+		nr_sects -= len;
		if (!nr_sects) {
			ret = submit_bio_wait(bio);
			bio_put(bio);
block/genhd.c (+2 -1)

@@ -602,7 +602,6 @@
	 * Prevent new I/O from crossing bio_queue_enter().
	 */
	blk_queue_start_drain(q);
-	blk_mq_freeze_queue_wait(q);
 
	if (!(disk->flags & GENHD_FL_HIDDEN)) {
		sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");
@@ -624,6 +625,8 @@
	sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk)));
	pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);
	device_del(disk_to_dev(disk));
+
+	blk_mq_freeze_queue_wait(q);
 
	blk_throtl_cancel_bios(disk->queue);
 
certs/Kconfig (+1 -1)

@@ -43,7 +43,7 @@
	bool "Provide system-wide ring of trusted keys"
	depends on KEYS
	depends on ASYMMETRIC_KEY_TYPE
-	depends on X509_CERTIFICATE_PARSER
+	depends on X509_CERTIFICATE_PARSER = y
	help
	  Provide a system keyring to which trusted keys can be added. Keys in
	  the keyring are considered to be trusted. Keys may be added at will
@@ -449,6 +449,9 @@
			return -EINVAL;
		}
 
+		/* Enable IRQ line */
+		irq_enabled |= BIT(event_node->channel);
+
		/* Skip configuration if it is the same as previously set */
		if (priv->irq_trigger[event_node->channel] == next_irq_trigger)
			continue;
@@ -465,9 +462,6 @@
			priv->irq_trigger[event_node->channel] << 3;
		iowrite8(QUAD8_CTR_IOR | ior_cfg,
			 &priv->reg->channel[event_node->channel].control);
-
-		/* Enable IRQ line */
-		irq_enabled |= BIT(event_node->channel);
	}
 
	iowrite8(irq_enabled, &priv->reg->index_interrupt);
@@ -14,7 +14,7 @@
 
 /* SHIM variables */
 static const efi_guid_t shim_guid = EFI_SHIM_LOCK_GUID;
-static const efi_char16_t shim_MokSBState_name[] = L"MokSBState";
+static const efi_char16_t shim_MokSBState_name[] = L"MokSBStateRT";
 
 static efi_status_t get_var(efi_char16_t *name, efi_guid_t *vendor, u32 *attr,
			    unsigned long *data_size, void *data)
@@ -43,8 +43,8 @@
 
	/*
	 * See if a user has put the shim into insecure mode. If so, and if the
-	 * variable doesn't have the runtime attribute set, we might as well
-	 * honor that.
+	 * variable doesn't have the non-volatile attribute set, we might as
+	 * well honor that.
	 */
	size = sizeof(moksbstate);
	status = get_efi_var(shim_MokSBState_name, &shim_guid,
@@ -53,7 +53,7 @@
	/* If it fails, we don't care why. Default to secure */
	if (status != EFI_SUCCESS)
		goto secure_boot_enabled;
-	if (!(attr & EFI_VARIABLE_RUNTIME_ACCESS) && moksbstate == 1)
+	if (!(attr & EFI_VARIABLE_NON_VOLATILE) && moksbstate == 1)
		return efi_secureboot_mode_disabled;
 
 secure_boot_enabled:
drivers/firmware/efi/libstub/x86-stub.c (+7)

@@ -516,6 +516,13 @@
	hdr->ramdisk_image = 0;
	hdr->ramdisk_size = 0;
 
+	/*
+	 * Disregard any setup data that was provided by the bootloader:
+	 * setup_data could be pointing anywhere, and we have no way of
+	 * authenticating or validating the payload.
+	 */
+	hdr->setup_data = 0;
+
	efi_stub_entry(handle, sys_table_arg, boot_params);
	/* not reached */
 
drivers/fpga/intel-m10-bmc-sec-update.c (+4 -4)

@@ -148,10 +148,6 @@
	stride = regmap_get_reg_stride(sec->m10bmc->regmap);
	num_bits = FLASH_COUNT_SIZE * 8;
 
-	flash_buf = kmalloc(FLASH_COUNT_SIZE, GFP_KERNEL);
-	if (!flash_buf)
-		return -ENOMEM;
-
	if (FLASH_COUNT_SIZE % stride) {
		dev_err(sec->dev,
			"FLASH_COUNT_SIZE (0x%x) not aligned to stride (0x%x)\n",
@@ -163,6 +159,10 @@
		WARN_ON_ONCE(1);
		return -EINVAL;
	}
+
+	flash_buf = kmalloc(FLASH_COUNT_SIZE, GFP_KERNEL);
+	if (!flash_buf)
+		return -ENOMEM;
 
	ret = regmap_bulk_read(sec->m10bmc->regmap, STAGING_FLASH_COUNT,
			       flash_buf, FLASH_COUNT_SIZE / stride);
@@ -2365,8 +2365,16 @@
		}
		adev->ip_blocks[i].status.sw = true;
 
-		/* need to do gmc hw init early so we can allocate gpu mem */
-		if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC) {
+		if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_COMMON) {
+			/* need to do common hw init early so everything is set up for gmc */
+			r = adev->ip_blocks[i].version->funcs->hw_init((void *)adev);
+			if (r) {
+				DRM_ERROR("hw_init %d failed %d\n", i, r);
+				goto init_failed;
+			}
+			adev->ip_blocks[i].status.hw = true;
+		} else if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC) {
+			/* need to do gmc hw init early so we can allocate gpu mem */
			/* Try to reserve bad pages early */
			if (amdgpu_sriov_vf(adev))
				amdgpu_virt_exchange_data(adev);
@@ -3060,8 +3052,8 @@
	int i, r;
 
	static enum amd_ip_block_type ip_order[] = {
-		AMD_IP_BLOCK_TYPE_GMC,
		AMD_IP_BLOCK_TYPE_COMMON,
+		AMD_IP_BLOCK_TYPE_GMC,
		AMD_IP_BLOCK_TYPE_PSP,
		AMD_IP_BLOCK_TYPE_IH,
	};
@@ -1211,25 +1211,6 @@
	return 0;
 }
 
-static void soc15_doorbell_range_init(struct amdgpu_device *adev)
-{
-	int i;
-	struct amdgpu_ring *ring;
-
-	/* sdma/ih doorbell range are programed by hypervisor */
-	if (!amdgpu_sriov_vf(adev)) {
-		for (i = 0; i < adev->sdma.num_instances; i++) {
-			ring = &adev->sdma.instance[i].ring;
-			adev->nbio.funcs->sdma_doorbell_range(adev, i,
-				ring->use_doorbell, ring->doorbell_index,
-				adev->doorbell_index.sdma_doorbell_range);
-		}
-
-		adev->nbio.funcs->ih_doorbell_range(adev, adev->irq.ih.use_doorbell,
-						adev->irq.ih.doorbell_index);
-	}
-}
-
 static int soc15_common_hw_init(void *handle)
 {
	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
@@ -1249,12 +1230,6 @@
 
	/* enable the doorbell aperture */
	soc15_enable_doorbell_aperture(adev, true);
-	/* HW doorbell routing policy: doorbell writing not
-	 * in SDMA/IH/MM/ACV range will be routed to CP. So
-	 * we need to init SDMA/IH/MM/ACV doorbell range prior
-	 * to CP ip block init and ring test.
-	 */
-	soc15_doorbell_range_init(adev);
 
	return 0;
 }
drivers/gpu/drm/amd/amdgpu/soc21.c (+1)

@@ -421,6 +421,7 @@
 {
	switch (adev->ip_versions[GC_HWIP][0]) {
	case IP_VERSION(11, 0, 0):
+		return amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__UMC);
	case IP_VERSION(11, 0, 2):
		return false;
	default:
@@ -2758,8 +2758,14 @@
			skip_video_pattern);
 
		/* Transmit idle pattern once training successful. */
-		if (status == LINK_TRAINING_SUCCESS && !is_link_bw_low)
+		if (status == LINK_TRAINING_SUCCESS && !is_link_bw_low) {
			dp_set_hw_test_pattern(link, &pipe_ctx->link_res, DP_TEST_PATTERN_VIDEO_MODE, NULL, 0);
+			/* Update verified link settings to current one
+			 * Because DPIA LT might fallback to lower link setting.
+			 */
+			link->verified_link_cap.link_rate = link->cur_link_settings.link_rate;
+			link->verified_link_cap.lane_count = link->cur_link_settings.lane_count;
+		}
	} else {
		status = dc_link_dp_perform_link_training(link,
						&pipe_ctx->link_res,
@@ -5120,6 +5126,14 @@
	link->dpcd_caps.lttpr_caps.supported_128b_132b_rates.raw =
			lttpr_dpcd_data[DP_PHY_REPEATER_128B132B_RATES -
							DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV];
+
+	/* If this chip cap is set, at least one retimer must exist in the chain
+	 * Override count to 1 if we receive a known bad count (0 or an invalid value) */
+	if (link->chip_caps & EXT_DISPLAY_PATH_CAPS__DP_FIXED_VS_EN &&
+			(dp_convert_to_count(link->dpcd_caps.lttpr_caps.phy_repeater_cnt) == 0)) {
+		ASSERT(0);
+		link->dpcd_caps.lttpr_caps.phy_repeater_cnt = 0x80;
+	}
 
	/* Attempt to train in LTTPR transparent mode if repeater count exceeds 8. */
	is_lttpr_present = (link->dpcd_caps.lttpr_caps.max_lane_count > 0 &&
+17
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
···35843584 }35853585}3586358635873587+void reset_sync_context_for_pipe(const struct dc *dc,35883588+ struct dc_state *context,35893589+ uint8_t pipe_idx)35903590+{35913591+ int i;35923592+ struct pipe_ctx *pipe_ctx_reset;35933593+35943594+ /* reset the otg sync context for the pipe and its slave pipes if any */35953595+ for (i = 0; i < dc->res_pool->pipe_count; i++) {35963596+ pipe_ctx_reset = &context->res_ctx.pipe_ctx[i];35973597+35983598+ if (((GET_PIPE_SYNCD_FROM_PIPE(pipe_ctx_reset) == pipe_idx) &&35993599+ IS_PIPE_SYNCD_VALID(pipe_ctx_reset)) || (i == pipe_idx))36003600+ SET_PIPE_SYNCD_TO_PIPE(pipe_ctx_reset, i);36013601+ }36023602+}36033603+35873604uint8_t resource_transmitter_to_phy_idx(const struct dc *dc, enum transmitter transmitter)35883605{35893606 /* TODO - get transmitter to phy idx mapping from DMUB */
+1-1
drivers/gpu/drm/amd/display/dc/core/dc_stream.c
···329329330330 dc = stream->ctx->dc;331331332332- if (attributes->height * attributes->width * 4 > 16384)332332+ if (dc->debug.allow_sw_cursor_fallback && attributes->height * attributes->width * 4 > 16384)333333 if (stream->mall_stream_config.type == SUBVP_MAIN)334334 return false;335335
+1
drivers/gpu/drm/amd/display/dc/dc.h
···745745 bool disable_fixed_vs_aux_timeout_wa;746746 bool force_disable_subvp;747747 bool force_subvp_mclk_switch;748748+ bool allow_sw_cursor_fallback;748749 bool force_usr_allow;749750 /* uses value at boot and disables switch */750751 bool disable_dtb_ref_clk_switch;
···15651565 /* Any updates are handled in dc interface, just need15661566 * to apply existing for plane enable / opp change */15671567 if (pipe_ctx->update_flags.bits.enable || pipe_ctx->update_flags.bits.opp_changed15681568+ || pipe_ctx->update_flags.bits.plane_changed15681569 || pipe_ctx->stream->update_flags.bits.gamut_remap15691570 || pipe_ctx->stream->update_flags.bits.out_csc) {15701571 /* dpp/cm gamut remap*/
···244244}245245246246/**247247+ * Finds dummy_latency_index when MCLK switching using firmware based248248+ * vblank stretch is enabled. This function will iterate through the249249+ * table of dummy pstate latencies until the lowest value that allows250250+ * dm_allow_self_refresh_and_mclk_switch to happen is found251251+ */252252+int dcn32_find_dummy_latency_index_for_fw_based_mclk_switch(struct dc *dc,253253+ struct dc_state *context,254254+ display_e2e_pipe_params_st *pipes,255255+ int pipe_cnt,256256+ int vlevel)257257+{258258+ const int max_latency_table_entries = 4;259259+ const struct vba_vars_st *vba = &context->bw_ctx.dml.vba;260260+ int dummy_latency_index = 0;261261+262262+ dc_assert_fp_enabled();263263+264264+ while (dummy_latency_index < max_latency_table_entries) {265265+ context->bw_ctx.dml.soc.dram_clock_change_latency_us =266266+ dc->clk_mgr->bw_params->dummy_pstate_table[dummy_latency_index].dummy_pstate_latency_us;267267+ dcn32_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, false);268268+269269+ if (vlevel < context->bw_ctx.dml.vba.soc.num_states &&270270+ vba->DRAMClockChangeSupport[vlevel][vba->maxMpcComb] != dm_dram_clock_change_unsupported)271271+ break;272272+273273+ dummy_latency_index++;274274+ }275275+276276+ if (dummy_latency_index == max_latency_table_entries) {277277+ ASSERT(dummy_latency_index != max_latency_table_entries);278278+ /* If the execution gets here, it means dummy p_states are279279+ * not possible. This should never happen and would mean280280+ * something is severely wrong.281281+ * Here we reset dummy_latency_index to 3, because it is282282+ * better to have underflows than system crashes.283283+ */284284+ dummy_latency_index = max_latency_table_entries - 1;285285+ }286286+287287+ return dummy_latency_index;288288+}289289+290290+/**247291 * dcn32_helper_populate_phantom_dlg_params - Get DLG params for phantom pipes248292 * and populate pipe_ctx with those params.249293 *···16901646 dcn30_can_support_mclk_switch_using_fw_based_vblank_stretch(dc, context);1691164716921648 if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching) {16931693- dummy_latency_index = dcn30_find_dummy_latency_index_for_fw_based_mclk_switch(dc,16491649+ dummy_latency_index = dcn32_find_dummy_latency_index_for_fw_based_mclk_switch(dc,16941650 context, pipes, pipe_cnt, vlevel);1695165116961652 /* After calling dcn30_find_dummy_latency_index_for_fw_based_mclk_switch
···71717272void dcn32_update_bw_bounding_box_fpu(struct dc *dc, struct clk_bw_params *bw_params);73737474+int dcn32_find_dummy_latency_index_for_fw_based_mclk_switch(struct dc *dc,7575+ struct dc_state *context,7676+ display_e2e_pipe_params_st *pipes,7777+ int pipe_cnt,7878+ int vlevel);7979+7480#endif
···651651652652 unsigned int OutputTypeAndRatePerState[DC__VOLTAGE_STATES][DC__NUM_DPP__MAX];653653 double RequiredDISPCLKPerSurface[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];654654- unsigned int MicroTileHeightY[DC__NUM_DPP__MAX];655655- unsigned int MicroTileHeightC[DC__NUM_DPP__MAX];656656- unsigned int MicroTileWidthY[DC__NUM_DPP__MAX];657657- unsigned int MicroTileWidthC[DC__NUM_DPP__MAX];654654+ unsigned int MacroTileHeightY[DC__NUM_DPP__MAX];655655+ unsigned int MacroTileHeightC[DC__NUM_DPP__MAX];656656+ unsigned int MacroTileWidthY[DC__NUM_DPP__MAX];657657+ unsigned int MacroTileWidthC[DC__NUM_DPP__MAX];658658 bool ImmediateFlipRequiredFinal;659659 bool DCCProgrammingAssumesScanDirectionUnknownFinal;660660 bool EnoughWritebackUnits;···800800 double PSCL_FACTOR[DC__NUM_DPP__MAX];801801 double PSCL_FACTOR_CHROMA[DC__NUM_DPP__MAX];802802 double MaximumVStartup[DC__VOLTAGE_STATES][2][DC__NUM_DPP__MAX];803803- unsigned int MacroTileWidthY[DC__NUM_DPP__MAX];804804- unsigned int MacroTileWidthC[DC__NUM_DPP__MAX];805803 double AlignedDCCMetaPitch[DC__NUM_DPP__MAX];806804 double AlignedYPitch[DC__NUM_DPP__MAX];807805 double AlignedCPitch[DC__NUM_DPP__MAX];
+1
drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr.h
···214214struct clk_bw_params {215215 unsigned int vram_type;216216 unsigned int num_channels;217217+ unsigned int dram_channel_width_bytes;217218 unsigned int dispclk_vco_khz;218219 unsigned int dc_mode_softmax_memclk;219220 struct clk_limit_table clk_table;
···209209 if (!adev->scpm_enabled)210210 return 0;211211212212- if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 7))212212+ if ((adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 7)) ||213213+ (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0)))213214 return 0;214215215216 /* override pptable_id from driver parameter */···219218 dev_info(adev->dev, "override pptable id %d\n", pptable_id);220219 } else {221220 pptable_id = smu->smu_table.boot_values.pp_table_id;222222-223223- /*224224- * Temporary solution for SMU V13.0.0 with SCPM enabled:225225- * - use vbios carried pptable when pptable_id is 3664, 3715 or 3795226226- * - use 36831 soft pptable when pptable_id is 3683227227- */228228- if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0)) {229229- switch (pptable_id) {230230- case 3664:231231- case 3715:232232- case 3795:233233- pptable_id = 0;234234- break;235235- case 3683:236236- pptable_id = 36831;237237- break;238238- default:239239- dev_err(adev->dev, "Unsupported pptable id %d\n", pptable_id);240240- return -EINVAL;241241- }242242- }243221 }244222245223 /* "pptable_id == 0" means vbios carries the pptable. */···451471 } else {452472 pptable_id = smu->smu_table.boot_values.pp_table_id;453473454454- /*455455- * Temporary solution for SMU V13.0.0 with SCPM disabled:456456- * - use 3664, 3683 or 3715 on request457457- * - use 3664 when pptable_id is 0458458- * TODO: drop these when the pptable carried in vbios is ready.459459- */460460- if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0)) {461461- switch (pptable_id) {462462- case 0:463463- pptable_id = 3664;464464- break;465465- case 3664:466466- case 3683:467467- case 3715:468468- break;469469- default:470470- dev_err(adev->dev, "Unsupported pptable id %d\n", pptable_id);471471- return -EINVAL;472472- }473473- }474474 }475475476476 /* force using vbios pptable in sriov mode */
···410410{411411 struct smu_table_context *smu_table = &smu->smu_table;412412 struct amdgpu_device *adev = smu->adev;413413- uint32_t pptable_id;414413 int ret = 0;415414416416- /*417417- * With SCPM enabled, the pptable used will be signed. It cannot418418- * be used directly by driver. To get the raw pptable, we need to419419- * rely on the combo pptable(and its revelant SMU message).420420- */421421- if (adev->scpm_enabled) {422422- ret = smu_v13_0_0_get_pptable_from_pmfw(smu,423423- &smu_table->power_play_table,424424- &smu_table->power_play_table_size);425425- } else {426426- /* override pptable_id from driver parameter */427427- if (amdgpu_smu_pptable_id >= 0) {428428- pptable_id = amdgpu_smu_pptable_id;429429- dev_info(adev->dev, "override pptable id %d\n", pptable_id);430430- } else {431431- pptable_id = smu_table->boot_values.pp_table_id;432432- }433433-434434- /*435435- * Temporary solution for SMU V13.0.0 with SCPM disabled:436436- * - use vbios carried pptable when pptable_id is 3664, 3715 or 3795437437- * - use soft pptable when pptable_id is 3683438438- */439439- if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0)) {440440- switch (pptable_id) {441441- case 3664:442442- case 3715:443443- case 3795:444444- pptable_id = 0;445445- break;446446- case 3683:447447- break;448448- default:449449- dev_err(adev->dev, "Unsupported pptable id %d\n", pptable_id);450450- return -EINVAL;451451- }452452- }453453-454454- /* force using vbios pptable in sriov mode */455455- if ((amdgpu_sriov_vf(adev) || !pptable_id) && (amdgpu_emu_mode != 1))456456- ret = smu_v13_0_0_get_pptable_from_pmfw(smu,457457- &smu_table->power_play_table,458458- &smu_table->power_play_table_size);459459- else460460- ret = smu_v13_0_get_pptable_from_firmware(smu,461461- &smu_table->power_play_table,462462- &smu_table->power_play_table_size,463463- pptable_id);464464- }415415+ ret = smu_v13_0_0_get_pptable_from_pmfw(smu,416416+ &smu_table->power_play_table,417417+ &smu_table->power_play_table_size);465418 if (ret)466419 return ret;467420
···112112{113113 struct psb_gem_object *pobj = to_psb_gem_object(obj);114114115115- drm_gem_object_release(obj);116116-117115 /* Undo the mmap pin if we are destroying the object */118116 if (pobj->mmapping)119117 psb_gem_unpin(pobj);118118+119119+ drm_gem_object_release(obj);120120121121 WARN_ON(pobj->in_gart && !pobj->stolen);122122
+7-4
drivers/gpu/drm/gma500/gma_display.c
···532532 WARN_ON(drm_crtc_vblank_get(crtc) != 0);533533534534 gma_crtc->page_flip_event = event;535535+ spin_unlock_irqrestore(&dev->event_lock, flags);535536536537 /* Call this locked if we want an event at vblank interrupt. */537538 ret = crtc_funcs->mode_set_base(crtc, crtc->x, crtc->y, old_fb);538539 if (ret) {539539- gma_crtc->page_flip_event = NULL;540540- drm_crtc_vblank_put(crtc);540540+ spin_lock_irqsave(&dev->event_lock, flags);541541+ if (gma_crtc->page_flip_event) {542542+ gma_crtc->page_flip_event = NULL;543543+ drm_crtc_vblank_put(crtc);544544+ }545545+ spin_unlock_irqrestore(&dev->event_lock, flags);541546 }542542-543543- spin_unlock_irqrestore(&dev->event_lock, flags);544547 } else {545548 ret = crtc_funcs->mode_set_base(crtc, crtc->x, crtc->y, old_fb);546549 }
+1-4
drivers/gpu/drm/gma500/oaktrail_device.c
···501501static int oaktrail_chip_setup(struct drm_device *dev)502502{503503 struct drm_psb_private *dev_priv = to_drm_psb_private(dev);504504- struct pci_dev *pdev = to_pci_dev(dev->dev);505504 int ret;506505507507- if (pci_enable_msi(pdev))508508- dev_warn(dev->dev, "Enabling MSI failed!\n");509509-506506+ dev_priv->use_msi = true;510507 dev_priv->regmap = oaktrail_regmap;511508512509 ret = mid_chip_setup(dev);
+1-7
drivers/gpu/drm/gma500/power.c
···139139 dev_priv->regs.saveBSM = bsm;140140 pci_read_config_dword(pdev, 0xFC, &vbt);141141 dev_priv->regs.saveVBT = vbt;142142- pci_read_config_dword(pdev, PSB_PCIx_MSI_ADDR_LOC, &dev_priv->msi_addr);143143- pci_read_config_dword(pdev, PSB_PCIx_MSI_DATA_LOC, &dev_priv->msi_data);144142145143 pci_disable_device(pdev);146144 pci_set_power_state(pdev, PCI_D3hot);···166168 pci_restore_state(pdev);167169 pci_write_config_dword(pdev, 0x5c, dev_priv->regs.saveBSM);168170 pci_write_config_dword(pdev, 0xFC, dev_priv->regs.saveVBT);169169- /* restoring MSI address and data in PCIx space */170170- pci_write_config_dword(pdev, PSB_PCIx_MSI_ADDR_LOC, dev_priv->msi_addr);171171- pci_write_config_dword(pdev, PSB_PCIx_MSI_DATA_LOC, dev_priv->msi_data);172171 ret = pci_enable_device(pdev);173172174173 if (ret != 0)···218223 mutex_lock(&power_mutex);219224 gma_resume_pci(pdev);220225 gma_resume_display(pdev);221221- gma_irq_preinstall(dev);222222- gma_irq_postinstall(dev);226226+ gma_irq_install(dev);223227 mutex_unlock(&power_mutex);224228 return 0;225229}
+1-1
drivers/gpu/drm/gma500/psb_drv.c
···383383 PSB_WVDC32(0xFFFFFFFF, PSB_INT_MASK_R);384384 spin_unlock_irqrestore(&dev_priv->irqmask_lock, irqflags);385385386386- gma_irq_install(dev, pdev->irq);386386+ gma_irq_install(dev);387387388388 dev->max_vblank_count = 0xffffff; /* only 24 bits of frame count */389389
+1-4
drivers/gpu/drm/gma500/psb_drv.h
···490490 int rpm_enabled;491491492492 /* MID specific */493493+ bool use_msi;493494 bool has_gct;494495 struct oaktrail_gct_data gct_data;495496···499498500499 /* Register state */501500 struct psb_save_area regs;502502-503503- /* MSI reg save */504504- uint32_t msi_addr;505505- uint32_t msi_data;506501507502 /* Hotplug handling */508503 struct work_struct hotplug_work;
+12-3
drivers/gpu/drm/gma500/psb_irq.c
···316316 spin_unlock_irqrestore(&dev_priv->irqmask_lock, irqflags);317317}318318319319-int gma_irq_install(struct drm_device *dev, unsigned int irq)319319+int gma_irq_install(struct drm_device *dev)320320{321321+ struct drm_psb_private *dev_priv = to_drm_psb_private(dev);322322+ struct pci_dev *pdev = to_pci_dev(dev->dev);321323 int ret;322324323323- if (irq == IRQ_NOTCONNECTED)325325+ if (dev_priv->use_msi && pci_enable_msi(pdev)) {326326+ dev_warn(dev->dev, "Enabling MSI failed!\n");327327+ dev_priv->use_msi = false;328328+ }329329+330330+ if (pdev->irq == IRQ_NOTCONNECTED)324331 return -ENOTCONN;325332326333 gma_irq_preinstall(dev);327334328335 /* PCI devices require shared interrupts. */329329- ret = request_irq(irq, gma_irq_handler, IRQF_SHARED, dev->driver->name, dev);336336+ ret = request_irq(pdev->irq, gma_irq_handler, IRQF_SHARED, dev->driver->name, dev);330337 if (ret)331338 return ret;332339···376369 spin_unlock_irqrestore(&dev_priv->irqmask_lock, irqflags);377370378371 free_irq(pdev->irq, dev);372372+ if (dev_priv->use_msi)373373+ pci_disable_msi(pdev);379374}380375381376int gma_crtc_enable_vblank(struct drm_crtc *crtc)
···14381438 if (!guc_submission_initialized(guc))14391439 return;1440144014411441- cancel_delayed_work(&guc->timestamp.work);14411441+ /*14421442+ * There is a race with suspend flow where the worker runs after suspend14431443+ * and causes an unclaimed register access warning. Cancel the worker14441444+ * synchronously here.14451445+ */14461446+ cancel_delayed_work_sync(&guc->timestamp.work);1442144714431448 /*14441449 * Before parking, we should sample engine busyness stats if we need to.
+2-1
drivers/gpu/drm/i915/i915_gem.c
···1191119111921192 intel_uc_cleanup_firmwares(&to_gt(dev_priv)->uc);1193119311941194- i915_gem_drain_freed_objects(dev_priv);11941194+ /* Flush any outstanding work, including i915_gem_context.release_work. */11951195+ i915_gem_drain_workqueue(dev_priv);1195119611961197 drm_WARN_ON(&dev_priv->drm, !list_empty(&dev_priv->gem.contexts.list));11971198}
···685685 if (--dsi->refcount != 0)686686 return;687687688688+ /*689689+ * mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since690690+ * mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(),691691+ * which needs irq for vblank, and mtk_dsi_stop() will disable irq.692692+ * mtk_dsi_start() needs to be called in mtk_output_dsi_enable(),693693+ * after dsi is fully set.694694+ */695695+ mtk_dsi_stop(dsi);696696+697697+ mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500);688698 mtk_dsi_reset_engine(dsi);689699 mtk_dsi_lane0_ulp_mode_enter(dsi);690700 mtk_dsi_clk_ulp_mode_enter(dsi);···744734{745735 if (!dsi->enabled)746736 return;747747-748748- /*749749- * mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since750750- * mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(),751751- * which needs irq for vblank, and mtk_dsi_stop() will disable irq.752752- * mtk_dsi_start() needs to be called in mtk_output_dsi_enable(),753753- * after dsi is fully set.754754- */755755- mtk_dsi_stop(dsi);756756-757757- mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500);758737759738 dsi->enabled = false;760739}···807808808809static const struct drm_bridge_funcs mtk_dsi_bridge_funcs = {809810 .attach = mtk_dsi_bridge_attach,811811+ .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,810812 .atomic_disable = mtk_dsi_bridge_atomic_disable,813813+ .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,811814 .atomic_enable = mtk_dsi_bridge_atomic_enable,812815 .atomic_pre_enable = mtk_dsi_bridge_atomic_pre_enable,813816 .atomic_post_disable = mtk_dsi_bridge_atomic_post_disable,817817+ .atomic_reset = drm_atomic_helper_bridge_reset,814818 .mode_set = mtk_dsi_bridge_mode_set,815819};816820
+1-1
drivers/gpu/drm/meson/meson_plane.c
···170170171171 /* Enable OSD and BLK0, set max global alpha */172172 priv->viu.osd1_ctrl_stat = OSD_ENABLE |173173- (0xFF << OSD_GLOBAL_ALPHA_SHIFT) |173173+ (0x100 << OSD_GLOBAL_ALPHA_SHIFT) |174174 OSD_BLK0_ENABLE;175175176176 priv->viu.osd1_ctrl_stat2 = readl(priv->io_base +
···262262 if (ret)263263 return ret;264264265265- drm_fbdev_generic_setup(dev, 0);265265+ /*266266+ * FIXME: A 24-bit color depth does not work with 24 bpp on267267+ * G200ER. Force 32 bpp.268268+ */269269+ drm_fbdev_generic_setup(dev, 32);266270267271 return 0;268272}
···14391439 die &= ~RK3568_SYS_DSP_INFACE_EN_HDMI_MUX;14401440 die |= RK3568_SYS_DSP_INFACE_EN_HDMI |14411441 FIELD_PREP(RK3568_SYS_DSP_INFACE_EN_HDMI_MUX, vp->id);14421442+ dip &= ~RK3568_DSP_IF_POL__HDMI_PIN_POL;14431443+ dip |= FIELD_PREP(RK3568_DSP_IF_POL__HDMI_PIN_POL, polflags);14421444 break;14431445 case ROCKCHIP_VOP2_EP_EDP0:14441446 die &= ~RK3568_SYS_DSP_INFACE_EN_EDP_MUX;14451447 die |= RK3568_SYS_DSP_INFACE_EN_EDP |14461448 FIELD_PREP(RK3568_SYS_DSP_INFACE_EN_EDP_MUX, vp->id);14491449+ dip &= ~RK3568_DSP_IF_POL__EDP_PIN_POL;14501450+ dip |= FIELD_PREP(RK3568_DSP_IF_POL__EDP_PIN_POL, polflags);14471451 break;14481452 case ROCKCHIP_VOP2_EP_MIPI0:14491453 die &= ~RK3568_SYS_DSP_INFACE_EN_MIPI0_MUX;
+1-1
drivers/hv/hv_fcopy.c
···129129130130 /*131131 * The strings sent from the host are encoded in132132- * in utf16; convert it to utf8 strings.132132+ * utf16; convert it to utf8 strings.133133 * The host assures us that the utf16 strings will not exceed134134 * the max lengths specified. We will however, reserve room135135 * for the string terminating character - in the utf16s_utf8s()
+41-15
drivers/hv/vmbus_drv.c
···3535#include <linux/kernel.h>3636#include <linux/syscore_ops.h>3737#include <linux/dma-map-ops.h>3838+#include <linux/pci.h>3839#include <clocksource/hyperv_timer.h>3940#include "hyperv_vmbus.h"4041···2263226222642263static void vmbus_reserve_fb(void)22652264{22662266- int size;22652265+ resource_size_t start = 0, size;22662266+ struct pci_dev *pdev;22672267+22682268+ if (efi_enabled(EFI_BOOT)) {22692269+ /* Gen2 VM: get FB base from EFI framebuffer */22702270+ start = screen_info.lfb_base;22712271+ size = max_t(__u32, screen_info.lfb_size, 0x800000);22722272+ } else {22732273+ /* Gen1 VM: get FB base from PCI */22742274+ pdev = pci_get_device(PCI_VENDOR_ID_MICROSOFT,22752275+ PCI_DEVICE_ID_HYPERV_VIDEO, NULL);22762276+ if (!pdev)22772277+ return;22782278+22792279+ if (pdev->resource[0].flags & IORESOURCE_MEM) {22802280+ start = pci_resource_start(pdev, 0);22812281+ size = pci_resource_len(pdev, 0);22822282+ }22832283+22842284+ /*22852285+ * Release the PCI device so hyperv_drm or hyperv_fb driver can22862286+ * grab it later.22872287+ */22882288+ pci_dev_put(pdev);22892289+ }22902290+22912291+ if (!start)22922292+ return;22932293+22672294 /*22682295 * Make a claim for the frame buffer in the resource tree under the22692296 * first node, which will be the one below 4GB. The length seems to22702297 * be underreported, particularly in a Generation 1 VM. So start out22712298 * reserving a larger area and make it smaller until it succeeds.22722299 */22732273-22742274- if (screen_info.lfb_base) {22752275- if (efi_enabled(EFI_BOOT))22762276- size = max_t(__u32, screen_info.lfb_size, 0x800000);22772277- else22782278- size = max_t(__u32, screen_info.lfb_size, 0x4000000);22792279-22802280- for (; !fb_mmio && (size >= 0x100000); size >>= 1) {22812281- fb_mmio = __request_region(hyperv_mmio,22822282- screen_info.lfb_base, size,22832283- fb_mmio_name, 0);22842284- }22852285- }23002300+ for (; !fb_mmio && (size >= 0x100000); size >>= 1)23012301+ fb_mmio = __request_region(hyperv_mmio, start, size, fb_mmio_name, 0);22862302}2287230322882304/**···23312313 bool fb_overlap_ok)23322314{23332315 struct resource *iter, *shadow;23342334- resource_size_t range_min, range_max, start;23162316+ resource_size_t range_min, range_max, start, end;23352317 const char *dev_n = dev_name(&device_obj->device);23362318 int retval;23372319···23662348 range_max = iter->end;23672349 start = (range_min + align - 1) & ~(align - 1);23682350 for (; start + size - 1 <= range_max; start += align) {23512351+ end = start + size - 1;23522352+23532353+ /* Skip the whole fb_mmio region if not fb_overlap_ok */23542354+ if (!fb_overlap_ok && fb_mmio &&23552355+ (((start >= fb_mmio->start) && (start <= fb_mmio->end)) ||23562356+ ((end >= fb_mmio->start) && (end <= fb_mmio->end))))23572357+ continue;23582358+23692359 shadow = __request_region(iter, start, size, NULL,23702360 IORESOURCE_BUSY);23712361 if (!shadow)
+1-1
drivers/i2c/busses/i2c-imx.c
···15831583 if (i2c_imx->dma)15841584 i2c_imx_dma_free(i2c_imx);1585158515861586- if (ret == 0) {15861586+ if (ret >= 0) {15871587 /* setup chip registers to defaults */15881588 imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IADR);15891589 imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IFDR);
···23492349 if (!dmar_in_use())23502350 return 0;2351235123522352- /*23532353- * It's unlikely that any I/O board is hot added before the IOMMU23542354- * subsystem is initialized.23552355- */23562356- if (IS_ENABLED(CONFIG_INTEL_IOMMU) && !intel_iommu_enabled)23572357- return -EOPNOTSUPP;23582358-23592352 if (dmar_detect_dsm(handle, DMAR_DSM_FUNC_DRHD)) {23602353 tmp = handle;23612354 } else {
+26-3
drivers/iommu/intel/iommu.c
···399399{400400 unsigned long fl_sagaw, sl_sagaw;401401402402- fl_sagaw = BIT(2) | (cap_fl1gp_support(iommu->cap) ? BIT(3) : 0);402402+ fl_sagaw = BIT(2) | (cap_5lp_support(iommu->cap) ? BIT(3) : 0);403403 sl_sagaw = cap_sagaw(iommu->cap);404404405405 /* Second level only. */···3019301930203020#ifdef CONFIG_INTEL_IOMMU_SVM30213021 if (pasid_supported(iommu) && ecap_prs(iommu->ecap)) {30223022+ /*30233023+ * Call dmar_alloc_hwirq() with dmar_global_lock held,30243024+ * could cause possible lock race condition.30253025+ */30263026+ up_write(&dmar_global_lock);30223027 ret = intel_svm_enable_prq(iommu);30283028+ down_write(&dmar_global_lock);30233029 if (ret)30243030 goto free_iommu;30253031 }···39383932 force_on = (!intel_iommu_tboot_noforce && tboot_force_iommu()) ||39393933 platform_optin_force_iommu();3940393439353935+ down_write(&dmar_global_lock);39413936 if (dmar_table_init()) {39423937 if (force_on)39433938 panic("tboot: Failed to initialize DMAR table\n");···39503943 panic("tboot: Failed to initialize DMAR device scope\n");39513944 goto out_free_dmar;39523945 }39463946+39473947+ up_write(&dmar_global_lock);39483948+39493949+ /*39503950+ * The bus notifier takes the dmar_global_lock, so lockdep will39513951+ * complain later when we register it under the lock.39523952+ */39533953+ dmar_register_bus_notifier();39543954+39553955+ down_write(&dmar_global_lock);3953395639543957 if (!no_iommu)39553958 intel_iommu_debugfs_init();···40053988 pr_err("Initialization failed\n");40063989 goto out_free_dmar;40073990 }39913991+ up_write(&dmar_global_lock);4008399240093993 init_iommu_pm_ops();4010399439953995+ down_read(&dmar_global_lock);40113996 for_each_active_iommu(iommu, drhd) {40123997 /*40133998 * The flush queue implementation does not perform···40274008 "%s", iommu->name);40284009 iommu_device_register(&iommu->iommu, &intel_iommu_ops, NULL);40294010 }40114011+ up_read(&dmar_global_lock);4030401240314013 bus_set_iommu(&pci_bus_type, &intel_iommu_ops);40324014 if (si_domain && !hw_pass_through)40334015 register_memory_notifier(&intel_iommu_memory_nb);4034401640174017+ down_read(&dmar_global_lock);40354018 if (probe_acpi_namespace_devices())40364019 pr_warn("ACPI name space devices didn't probe correctly\n");40374020···4044402340454024 iommu_disable_protect_mem_regions(iommu);40464025 }40264026+ up_read(&dmar_global_lock);40274027+40284028+ pr_info("Intel(R) Virtualization Technology for Directed I/O\n");4047402940484030 intel_iommu_enabled = 1;40494049- dmar_register_bus_notifier();40504050- pr_info("Intel(R) Virtualization Technology for Directed I/O\n");4051403140524032 return 0;4053403340544034out_free_dmar:40554035 intel_iommu_free_dmars();40364036+ up_write(&dmar_global_lock);40564037 return ret;40574038}40584039
+1-1
drivers/media/usb/b2c2/flexcop-usb.c
···511511512512 if (fc_usb->uintf->cur_altsetting->desc.bNumEndpoints < 1)513513 return -ENODEV;514514- if (!usb_endpoint_is_isoc_in(&fc_usb->uintf->cur_altsetting->endpoint[1].desc))514514+ if (!usb_endpoint_is_isoc_in(&fc_usb->uintf->cur_altsetting->endpoint[0].desc))515515 return -ENODEV;516516517517 switch (fc_usb->udev->speed) {
···157157 int err;158158159159 err = gve_alloc_page(priv, &priv->pdev->dev, &buf_state->page_info.page,160160- &buf_state->addr, DMA_FROM_DEVICE, GFP_KERNEL);160160+ &buf_state->addr, DMA_FROM_DEVICE, GFP_ATOMIC);161161 if (err)162162 return err;163163
+26-6
drivers/net/ethernet/intel/i40e/i40e_main.c
···59095909}5910591059115911/**59125912+ * i40e_bw_bytes_to_mbits - Convert max_tx_rate from bytes to mbits59135913+ * @vsi: Pointer to vsi structure59145914+ * @max_tx_rate: max TX rate in bytes to be converted into Mbits59155915+ *59165916+ * Helper function to convert units before send to set BW limit59175917+ **/59185918+static u64 i40e_bw_bytes_to_mbits(struct i40e_vsi *vsi, u64 max_tx_rate)59195919+{59205920+ if (max_tx_rate < I40E_BW_MBPS_DIVISOR) {59215921+ dev_warn(&vsi->back->pdev->dev,59225922+ "Setting max tx rate to minimum usable value of 50Mbps.\n");59235923+ max_tx_rate = I40E_BW_CREDIT_DIVISOR;59245924+ } else {59255925+ do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);59265926+ }59275927+59285928+ return max_tx_rate;59295929+}59305930+59315931+/**59125932 * i40e_set_bw_limit - setup BW limit for Tx traffic based on max_tx_rate59135933 * @vsi: VSI to be configured59145934 * @seid: seid of the channel/VSI···59505930 max_tx_rate, seid);59515931 return -EINVAL;59525932 }59535953- if (max_tx_rate && max_tx_rate < 50) {59335933+ if (max_tx_rate && max_tx_rate < I40E_BW_CREDIT_DIVISOR) {59545934 dev_warn(&pf->pdev->dev,59555935 "Setting max tx rate to minimum usable value of 50Mbps.\n");59565956- max_tx_rate = 50;59365936+ max_tx_rate = I40E_BW_CREDIT_DIVISOR;59575937 }5958593859595939 /* Tx rate credits are in values of 50Mbps, 0 is disabled */···8244822482458225 if (i40e_is_tc_mqprio_enabled(pf)) {82468226 if (vsi->mqprio_qopt.max_rate[0]) {82478247- u64 max_tx_rate = vsi->mqprio_qopt.max_rate[0];82278227+ u64 max_tx_rate = i40e_bw_bytes_to_mbits(vsi,82288228+ vsi->mqprio_qopt.max_rate[0]);8248822982498249- do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);82508230 ret = i40e_set_bw_limit(vsi, vsi->seid, max_tx_rate);82518231 if (!ret) {82528232 u64 credits = max_tx_rate;···1099110971 }10992109721099310973 if (vsi->mqprio_qopt.max_rate[0]) {1099410994- u64 max_tx_rate = vsi->mqprio_qopt.max_rate[0];1097410974+ u64 max_tx_rate = i40e_bw_bytes_to_mbits(vsi,1097510975+ vsi->mqprio_qopt.max_rate[0]);1099510976 u64 credits = 0;10996109771099710997- do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);1099810978 ret = i40e_set_bw_limit(vsi, vsi->seid, max_tx_rate);1099910979 if (ret)1100010980 goto end_unlock;
···20392039}2040204020412041/**20422042+ * i40e_vc_get_max_frame_size20432043+ * @vf: pointer to the VF20442044+ *20452045+ * Max frame size is determined based on the current port's max frame size and20462046+ * whether a port VLAN is configured on this VF. The VF is not aware whether20472047+ * it's in a port VLAN so the PF needs to account for this in max frame size20482048+ * checks and sending the max frame size to the VF.20492049+ **/20502050+static u16 i40e_vc_get_max_frame_size(struct i40e_vf *vf)20512051+{20522052+ u16 max_frame_size = vf->pf->hw.phy.link_info.max_frame_size;20532053+20542054+ if (vf->port_vlan_id)20552055+ max_frame_size -= VLAN_HLEN;20562056+20572057+ return max_frame_size;20582058+}20592059+20602060+/**20422061 * i40e_vc_get_vf_resources_msg20432062 * @vf: pointer to the VF info20442063 * @msg: pointer to the msg buffer···21582139 vfres->max_vectors = pf->hw.func_caps.num_msix_vectors_vf;21592140 vfres->rss_key_size = I40E_HKEY_ARRAY_SIZE;21602141 vfres->rss_lut_size = I40E_VF_HLUT_ARRAY_SIZE;21422142+ vfres->max_mtu = i40e_vc_get_max_frame_size(vf);2161214321622144 if (vf->lan_vsi_idx) {21632145 vfres->vsi_res[0].vsi_id = vf->lan_vsi_id;
+3-6
drivers/net/ethernet/intel/iavf/iavf_main.c
···10771077{10781078 struct iavf_adapter *adapter = netdev_priv(netdev);10791079 struct sockaddr *addr = p;10801080- bool handle_mac = iavf_is_mac_set_handled(netdev, addr->sa_data);10811080 int ret;1082108110831082 if (!is_valid_ether_addr(addr->sa_data))···10931094 return 0;10941095 }1095109610961096- if (handle_mac)10971097- goto done;10981098-10991099- ret = wait_event_interruptible_timeout(adapter->vc_waitqueue, false, msecs_to_jiffies(2500));10971097+ ret = wait_event_interruptible_timeout(adapter->vc_waitqueue,10981098+ iavf_is_mac_set_handled(netdev, addr->sa_data),10991099+ msecs_to_jiffies(2500));1100110011011101 /* If ret < 0 then it means wait was interrupted.11021102 * If ret == 0 then it means we got a timeout.···11091111 if (!ret)11101112 return -EAGAIN;1111111311121112-done:11131114 if (!ether_addr_equal(netdev->dev_addr, addr->sa_data))11141115 return -EACCES;11151116
+6-3
drivers/net/ethernet/intel/iavf/iavf_txrx.c
···114114{115115 u32 head, tail;116116117117+ /* underlying hardware might not allow access and/or always return118118+ * 0 for the head/tail registers so just use the cached values119119+ */117120 head = ring->next_to_clean;118118- tail = readl(ring->tail);121121+ tail = ring->next_to_use;119122120123 if (head != tail)121124 return (head < tail) ?···13931390#endif13941391 struct sk_buff *skb;1395139213961396- if (!rx_buffer)13931393+ if (!rx_buffer || !size)13971394 return NULL;13981395 /* prefetch first cache line of first page */13991396 va = page_address(rx_buffer->page) + rx_buffer->page_offset;···15511548 /* exit if we failed to retrieve a buffer */15521549 if (!skb) {15531550 rx_ring->rx_stats.alloc_buff_failed++;15541554- if (rx_buffer)15511551+ if (rx_buffer && size)15551552 rx_buffer->pagecnt_bias++;15561553 break;15571554 }
+5-2
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
···269269void iavf_configure_queues(struct iavf_adapter *adapter)270270{271271 struct virtchnl_vsi_queue_config_info *vqci;272272- struct virtchnl_queue_pair_info *vqpi;272272+ int i, max_frame = adapter->vf_res->max_mtu;273273 int pairs = adapter->num_active_queues;274274- int i, max_frame = IAVF_MAX_RXBUFFER;274274+ struct virtchnl_queue_pair_info *vqpi;275275 size_t len;276276+277277+ if (max_frame > IAVF_MAX_RXBUFFER || !max_frame)278278+ max_frame = IAVF_MAX_RXBUFFER;276279277280 if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {278281 /* bail because we already have a command pending */
+28-18
drivers/net/ethernet/intel/ice/ice_lib.c
···914914 */915915static int ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)916916{917917- u16 offset = 0, qmap = 0, tx_count = 0, pow = 0;917917+ u16 offset = 0, qmap = 0, tx_count = 0, rx_count = 0, pow = 0;918918 u16 num_txq_per_tc, num_rxq_per_tc;919919 u16 qcount_tx = vsi->alloc_txq;920920 u16 qcount_rx = vsi->alloc_rxq;···981981 * at least 1)982982 */983983 if (offset)984984- vsi->num_rxq = offset;984984+ rx_count = offset;985985 else986986- vsi->num_rxq = num_rxq_per_tc;986986+ rx_count = num_rxq_per_tc;987987988988- if (vsi->num_rxq > vsi->alloc_rxq) {988988+ if (rx_count > vsi->alloc_rxq) {989989 dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n",990990- vsi->num_rxq, vsi->alloc_rxq);990990+ rx_count, vsi->alloc_rxq);991991+ return -EINVAL;992992+ }993993+994994+ if (tx_count > vsi->alloc_txq) {995995+ dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",996996+ tx_count, vsi->alloc_txq);991997 return -EINVAL;992998 }9939999941000 vsi->num_txq = tx_count;995995- if (vsi->num_txq > vsi->alloc_txq) {996996- dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",997997- vsi->num_txq, vsi->alloc_txq);998998- return -EINVAL;999999- }10011001+ vsi->num_rxq = rx_count;1000100210011003 if (vsi->type == ICE_VSI_VF && vsi->num_txq != vsi->num_rxq) {10021004 dev_dbg(ice_pf_to_dev(vsi->back), "VF VSI should have same number of Tx and Rx queues. 
Hence making them equal\n");···34923490 u16 pow, offset = 0, qcount_tx = 0, qcount_rx = 0, qmap;34933491 u16 tc0_offset = vsi->mqprio_qopt.qopt.offset[0];34943492 int tc0_qcount = vsi->mqprio_qopt.qopt.count[0];34933493+ u16 new_txq, new_rxq;34953494 u8 netdev_tc = 0;34963495 int i;34973496···35333530 }35343531 }3535353235363536- /* Set actual Tx/Rx queue pairs */35373537- vsi->num_txq = offset + qcount_tx;35383538- if (vsi->num_txq > vsi->alloc_txq) {35333533+ new_txq = offset + qcount_tx;35343534+ if (new_txq > vsi->alloc_txq) {35393535 dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",35403540- vsi->num_txq, vsi->alloc_txq);35363536+ new_txq, vsi->alloc_txq);35413537 return -EINVAL;35423538 }3543353935443544- vsi->num_rxq = offset + qcount_rx;35453545- if (vsi->num_rxq > vsi->alloc_rxq) {35403540+ new_rxq = offset + qcount_rx;35413541+ if (new_rxq > vsi->alloc_rxq) {35463542 dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n",35473547- vsi->num_rxq, vsi->alloc_rxq);35433543+ new_rxq, vsi->alloc_rxq);35483544 return -EINVAL;35493545 }35463546+35473547+ /* Set actual Tx/Rx queue pairs */35483548+ vsi->num_txq = new_txq;35493549+ vsi->num_rxq = new_rxq;3550355035513551 /* Setup queue TC[0].qmap for given VSI context */35523552 ctxt->info.tc_mapping[0] = cpu_to_le16(qmap);···35823576{35833577 u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };35843578 struct ice_pf *pf = vsi->back;35793579+ struct ice_tc_cfg old_tc_cfg;35853580 struct ice_vsi_ctx *ctx;35863581 struct device *dev;35873582 int i, ret = 0;···36073600 max_txqs[i] = vsi->num_txq;36083601 }3609360236033603+ memcpy(&old_tc_cfg, &vsi->tc_cfg, sizeof(old_tc_cfg));36103604 vsi->tc_cfg.ena_tc = ena_tc;36113605 vsi->tc_cfg.numtc = num_tc;36123606···36243616 else36253617 ret = ice_vsi_setup_q_map(vsi, ctx);3626361836273627- if (ret)36193619+ if (ret) {36203620+ memcpy(&vsi->tc_cfg, &old_tc_cfg, sizeof(vsi->tc_cfg));36283621 
goto out;36223622+ }3629362336303624 /* must to indicate which section of VSI context are being modified */36313625 ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);
+14-11
drivers/net/ethernet/intel/ice/ice_main.c
···23992399 return -EBUSY;24002400 }2401240124022402- ice_unplug_aux_dev(pf);24032403-24042402 switch (reset) {24052403 case ICE_RESET_PFR:24062404 set_bit(ICE_PFR_REQ, pf->state);···66496651 */66506652int ice_down(struct ice_vsi *vsi)66516653{66526652- int i, tx_err, rx_err, link_err = 0, vlan_err = 0;66546654+ int i, tx_err, rx_err, vlan_err = 0;6653665566546656 WARN_ON(!test_bit(ICE_VSI_DOWN, vsi->state));66556657···6683668566846686 ice_napi_disable_all(vsi);6685668766866686- if (test_bit(ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA, vsi->back->flags)) {66876687- link_err = ice_force_phys_link_state(vsi, false);66886688- if (link_err)66896689- netdev_err(vsi->netdev, "Failed to set physical link down, VSI %d error %d\n",66906690- vsi->vsi_num, link_err);66916691- }66926692-66936688 ice_for_each_txq(vsi, i)66946689 ice_clean_tx_ring(vsi->tx_rings[i]);6695669066966691 ice_for_each_rxq(vsi, i)66976692 ice_clean_rx_ring(vsi->rx_rings[i]);6698669366996699- if (tx_err || rx_err || link_err || vlan_err) {66946694+ if (tx_err || rx_err || vlan_err) {67006695 netdev_err(vsi->netdev, "Failed to close VSI 0x%04X on switch 0x%04X\n",67016696 vsi->vsi_num, vsi->vsw->sw_id);67026697 return -EIO;···68506859 err = ice_vsi_req_irq_msix(vsi, int_name);68516860 if (err)68526861 goto err_setup_rx;68626862+68636863+ ice_vsi_cfg_netdev_tc(vsi, vsi->tc_cfg.ena_tc);6853686468546865 if (vsi->type == ICE_VSI_PF) {68556866 /* Notify the stack of the actual queue counts. 
*/···88838890 if (ice_is_reset_in_progress(pf->state)) {88848891 netdev_err(netdev, "can't stop net device while reset is in progress");88858892 return -EBUSY;88938893+ }88948894+88958895+ if (test_bit(ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA, vsi->back->flags)) {88968896+ int link_err = ice_force_phys_link_state(vsi, false);88978897+88988898+ if (link_err) {88998899+ netdev_err(vsi->netdev, "Failed to set physical link down, VSI %d error %d\n",89008900+ vsi->vsi_num, link_err);89018901+ return -EIO;89028902+ }88868903 }8887890488888905 ice_vsi_close(vsi);
+4-1
drivers/net/ethernet/intel/ice/ice_txrx.c
···610610 if (test_bit(ICE_VSI_DOWN, vsi->state))611611 return -ENETDOWN;612612613613- if (!ice_is_xdp_ena_vsi(vsi) || queue_index >= vsi->num_xdp_txq)613613+ if (!ice_is_xdp_ena_vsi(vsi))614614 return -ENXIO;615615616616 if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))···621621 xdp_ring = vsi->xdp_rings[queue_index];622622 spin_lock(&xdp_ring->tx_lock);623623 } else {624624+ /* Generally, should not happen */625625+ if (unlikely(queue_index >= vsi->num_xdp_txq))626626+ return -ENXIO;624627 xdp_ring = vsi->xdp_rings[queue_index];625628 }626629
···179179 /* Only return ad bits of the gw register */180180 ret &= MLXBF_GIGE_MDIO_GW_AD_MASK;181181182182+ /* The MDIO lock is set on read. To release it, clear gw register */183183+ writel(0, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET);184184+182185 return ret;183186}184187···205202 ret = readl_poll_timeout_atomic(priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET,206203 temp, !(temp & MLXBF_GIGE_MDIO_GW_BUSY_MASK),207204 5, 1000000);205205+206206+ /* The MDIO lock is set on read. To release it, clear gw register */207207+ writel(0, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET);208208209209 return ret;210210}
+10-4
drivers/net/ethernet/microsoft/mana/gdma_main.c
···397397 break;398398 }399399400400+ /* Per GDMA spec, rmb is necessary after checking owner_bits, before401401+ * reading eqe.402402+ */403403+ rmb();404404+400405 mana_gd_process_eqe(eq);401406402407 eq->head++;···11391134 if (WARN_ON_ONCE(owner_bits != new_bits))11401135 return -1;1141113611371137+ /* Per GDMA spec, rmb is necessary after checking owner_bits, before11381138+ * reading completion info11391139+ */11401140+ rmb();11411141+11421142 comp->wq_num = cqe->cqe_info.wq_num;11431143 comp->is_sq = cqe->cqe_info.is_sq;11441144 memcpy(comp->cqe_data, cqe->cqe_data, GDMA_COMP_DATA_SIZE);···1474146414751465 pci_disable_device(pdev);14761466}14771477-14781478-#ifndef PCI_VENDOR_ID_MICROSOFT14791479-#define PCI_VENDOR_ID_MICROSOFT 0x141414801480-#endif1481146714821468static const struct pci_device_id mana_id_table[] = {14831469 { PCI_DEVICE(PCI_VENDOR_ID_MICROSOFT, MANA_PF_DEVICE_ID) },
+2
drivers/net/ethernet/renesas/ravb_main.c
···14491449 phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT);14501450 }1451145114521452+ /* Indicate that the MAC is responsible for managing PHY PM */14531453+ phydev->mac_managed_pm = true;14521454 phy_attached_info(phydev);1453145514541456 return 0;
+2
drivers/net/ethernet/renesas/sh_eth.c
···20292029 if (mdp->cd->register_type != SH_ETH_REG_GIGABIT)20302030 phy_set_max_speed(phydev, SPEED_100);2031203120322032+ /* Indicate that the MAC is responsible for managing PHY PM */20332033+ phydev->mac_managed_pm = true;20322034 phy_attached_info(phydev);2033203520342036 return 0;
···8686 IPA_QMI_PLATFORM_TYPE_MSM_QNX_V01 = 0x5, /* QNX MSM */8787};88888989-/* This defines the start and end offset of a range of memory. Both9090- * fields are offsets relative to the start of IPA shared memory.9191- * The end value is the last addressable byte *within* the range.8989+/* This defines the start and end offset of a range of memory. The start9090+ * value is a byte offset relative to the start of IPA shared memory. The9191+ * end value is the last addressable unit *within* the range. Typically9292+ * the end value is in units of bytes, however it can also be a maximum9393+ * array index value.9294 */9395struct ipa_mem_bounds {9496 u32 start;···131129 u8 hdr_tbl_info_valid;132130 struct ipa_mem_bounds hdr_tbl_info;133131134134- /* Routing table information. These define the location and size of135135- * non-hashable IPv4 and IPv6 filter tables. The start values are136136- * offsets relative to the start of IPA shared memory.132132+ /* Routing table information. These define the location and maximum133133+ * *index* (not byte) for the modem portion of non-hashable IPv4 and134134+ * IPv6 routing tables. The start values are byte offsets relative135135+ * to the start of IPA shared memory.137136 */138137 u8 v4_route_tbl_info_valid;139139- struct ipa_mem_array v4_route_tbl_info;138138+ struct ipa_mem_bounds v4_route_tbl_info;140139 u8 v6_route_tbl_info_valid;141141- struct ipa_mem_array v6_route_tbl_info;140140+ struct ipa_mem_bounds v6_route_tbl_info;142141143142 /* Filter table information. These define the location of the144143 * non-hashable IPv4 and IPv6 filter tables. The start values are145145- * offsets relative to the start of IPA shared memory.144144+ * byte offsets relative to the start of IPA shared memory.146145 */147146 u8 v4_filter_tbl_start_valid;148147 u32 v4_filter_tbl_start;···184181 u8 zip_tbl_info_valid;185182 struct ipa_mem_bounds zip_tbl_info;186183187187- /* Routing table information. 
These define the location and size188188- * of hashable IPv4 and IPv6 filter tables. The start values are189189- * offsets relative to the start of IPA shared memory.184184+ /* Routing table information. These define the location and maximum185185+ * *index* (not byte) for the modem portion of hashable IPv4 and IPv6186186+ * routing tables (if supported by hardware). The start values are187187+ * byte offsets relative to the start of IPA shared memory.190188 */191189 u8 v4_hash_route_tbl_info_valid;192192- struct ipa_mem_array v4_hash_route_tbl_info;190190+ struct ipa_mem_bounds v4_hash_route_tbl_info;193191 u8 v6_hash_route_tbl_info_valid;194194- struct ipa_mem_array v6_hash_route_tbl_info;192192+ struct ipa_mem_bounds v6_hash_route_tbl_info;195193196194 /* Filter table information. These define the location and size197197- * of hashable IPv4 and IPv6 filter tables. The start values are198198- * offsets relative to the start of IPA shared memory.195195+ * of hashable IPv4 and IPv6 filter tables (if supported by hardware).196196+ * The start values are byte offsets relative to the start of IPA197197+ * shared memory.199198 */200199 u8 v4_hash_filter_tbl_start_valid;201200 u32 v4_hash_filter_tbl_start;
-2
drivers/net/ipa/ipa_table.c
···108108109109/* Assignment of route table entries to the modem and AP */110110#define IPA_ROUTE_MODEM_MIN 0111111-#define IPA_ROUTE_MODEM_COUNT 8112112-113111#define IPA_ROUTE_AP_MIN IPA_ROUTE_MODEM_COUNT114112#define IPA_ROUTE_AP_COUNT \115113 (IPA_ROUTE_COUNT_MAX - IPA_ROUTE_MODEM_COUNT)
+3
drivers/net/ipa/ipa_table.h
···1313/* The maximum number of filter table entries (IPv4, IPv6; hashed or not) */1414#define IPA_FILTER_COUNT_MAX 1415151616+/* The number of route table entries allotted to the modem */1717+#define IPA_ROUTE_MODEM_COUNT 81818+1619/* The maximum number of route table entries (IPv4, IPv6; hashed or not) */1720#define IPA_ROUTE_COUNT_MAX 151821
+4-2
drivers/net/ipvlan/ipvlan_core.c
···495495496496static int ipvlan_process_outbound(struct sk_buff *skb)497497{498498- struct ethhdr *ethh = eth_hdr(skb);499498 int ret = NET_XMIT_DROP;500499501500 /* The ipvlan is a pseudo-L2 device, so the packets that we receive···504505 if (skb_mac_header_was_set(skb)) {505506 /* In this mode we dont care about506507 * multicast and broadcast traffic */508508+ struct ethhdr *ethh = eth_hdr(skb);509509+507510 if (is_multicast_ether_addr(ethh->h_dest)) {508511 pr_debug_ratelimited(509512 "Dropped {multi|broad}cast of type=[%x]\n",···590589static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)591590{592591 const struct ipvl_dev *ipvlan = netdev_priv(dev);593593- struct ethhdr *eth = eth_hdr(skb);592592+ struct ethhdr *eth = skb_eth_hdr(skb);594593 struct ipvl_addr *addr;595594 void *lyr3h;596595 int addr_type;···620619 return dev_forward_skb(ipvlan->phy_dev, skb);621620622621 } else if (is_multicast_ether_addr(eth->h_dest)) {622622+ skb_reset_mac_header(skb);623623 ipvlan_skb_crossing_ns(skb, NULL);624624 ipvlan_multicast_enqueue(ipvlan->port, skb, true);625625 return NET_XMIT_SUCCESS;
···140140 depends on INTEL_MEI141141 depends on PM142142 depends on CFG80211143143+ depends on BROKEN143144 help144145 Enables the iwlmei kernel module.145146
+2-2
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
···18331833 * If nss < MAX: we can set zeros in other streams18341834 */18351835 if (nss > MAX_HE_SUPP_NSS) {18361836- IWL_INFO(mvm, "Got NSS = %d - trimming to %d\n", nss,18371837- MAX_HE_SUPP_NSS);18361836+ IWL_DEBUG_INFO(mvm, "Got NSS = %d - trimming to %d\n", nss,18371837+ MAX_HE_SUPP_NSS);18381838 nss = MAX_HE_SUPP_NSS;18391839 }18401840
···221221222222static struct irt_entry *iosapic_alloc_irt(int num_entries)223223{224224- unsigned long a;225225-226226- /* The IRT needs to be 8-byte aligned for the PDC call. 227227- * Normally kmalloc would guarantee larger alignment, but228228- * if CONFIG_DEBUG_SLAB is enabled, then we can get only229229- * 4-byte alignment on 32-bit kernels230230- */231231- a = (unsigned long)kmalloc(sizeof(struct irt_entry) * num_entries + 8, GFP_KERNEL);232232- a = (a + 7UL) & ~7UL;233233- return (struct irt_entry *)a;224224+ return kcalloc(num_entries, sizeof(struct irt_entry), GFP_KERNEL);234225}235226236227/**
···2151215121522152 abort_cmd = ha->tgt.tgt_ops->find_cmd_by_tag(sess,21532153 le32_to_cpu(abts->exchange_addr_to_abort));21542154- if (!abort_cmd)21542154+ if (!abort_cmd) {21552155+ mempool_free(mcmd, qla_tgt_mgmt_cmd_mempool);21552156 return -EIO;21572157+ }21562158 mcmd->unpacked_lun = abort_cmd->se_cmd.orig_fe_lun;2157215921582160 if (abort_cmd->qpair) {
+1
drivers/thunderbolt/icm.c
···25292529 tb->cm_ops = &icm_icl_ops;25302530 break;2531253125322532+ case PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_2C_NHI:25322533 case PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_4C_NHI:25332534 icm->is_supported = icm_tgl_is_supported;25342535 icm->get_mode = icm_ar_get_mode;
+1
drivers/thunderbolt/nhi.h
···5555 * need for the PCI quirk anymore as we will use ICM also on Apple5656 * hardware.5757 */5858+#define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_2C_NHI 0x11345859#define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_4C_NHI 0x11375960#define PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_NHI 0x157d6061#define PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_BRIDGE 0x157e
···60396039 *60406040 * Return: The same as for usb_reset_and_verify_device().60416041 * However, if a reset is already in progress (for instance, if a60426042- * driver doesn't have pre_ or post_reset() callbacks, and while60426042+ * driver doesn't have pre_reset() or post_reset() callbacks, and while60436043 * being unbound or re-bound during the ongoing reset its disconnect()60446044 * or probe() routine tries to perform a second, nested reset), the60456045 * routine returns -EINPROGRESS.
+7-6
drivers/usb/dwc3/core.c
···1752175217531753 dwc3_get_properties(dwc);1754175417551755- if (!dwc->sysdev_is_parent) {17561756- ret = dma_set_mask_and_coherent(dwc->sysdev, DMA_BIT_MASK(64));17571757- if (ret)17581758- return ret;17591759- }17601760-17611755 dwc->reset = devm_reset_control_array_get_optional_shared(dev);17621756 if (IS_ERR(dwc->reset))17631757 return PTR_ERR(dwc->reset);···1816182218171823 platform_set_drvdata(pdev, dwc);18181824 dwc3_cache_hwparams(dwc);18251825+18261826+ if (!dwc->sysdev_is_parent &&18271827+ DWC3_GHWPARAMS0_AWIDTH(dwc->hwparams.hwparams0) == 64) {18281828+ ret = dma_set_mask_and_coherent(dwc->sysdev, DMA_BIT_MASK(64));18291829+ if (ret)18301830+ goto disable_clks;18311831+ }1819183218201833 spin_lock_init(&dwc->lock);18211834 mutex_init(&dwc->mutex);
···5656 tristate "Analogix ANX7411 Type-C DRP Port controller driver"5757 depends on I2C5858 depends on USB_ROLE_SWITCH5959+ depends on POWER_SUPPLY5960 help6061 Say Y or M here if your system has Analogix ANX7411 Type-C DRP Port6162 controller driver.
···382382 unsigned long ring_size = nr_pages * XEN_PAGE_SIZE;383383 grant_ref_t gref_head;384384 unsigned int i;385385+ void *addr;385386 int ret;386387387387- *vaddr = alloc_pages_exact(ring_size, gfp | __GFP_ZERO);388388+ addr = *vaddr = alloc_pages_exact(ring_size, gfp | __GFP_ZERO);388389 if (!*vaddr) {389390 ret = -ENOMEM;390391 goto err;···402401 unsigned long gfn;403402404403 if (is_vmalloc_addr(*vaddr))405405- gfn = pfn_to_gfn(vmalloc_to_pfn(vaddr[i]));404404+ gfn = pfn_to_gfn(vmalloc_to_pfn(addr));406405 else407407- gfn = virt_to_gfn(vaddr[i]);406406+ gfn = virt_to_gfn(addr);408407409408 grefs[i] = gnttab_claim_grant_reference(&gref_head);410409 gnttab_grant_foreign_access_ref(grefs[i], dev->otherend_id,411410 gfn, 0);411411+412412+ addr += XEN_PAGE_SIZE;412413 }413414414415 return 0;
+36-6
fs/btrfs/disk-io.c
···44754475 set_bit(BTRFS_FS_CLOSING_START, &fs_info->flags);4476447644774477 /*44784478+ * If we had UNFINISHED_DROPS we could still be processing them, so44794479+ * clear that bit and wake up relocation so it can stop.44804480+ * We must do this before stopping the block group reclaim task, because44814481+ * at btrfs_relocate_block_group() we wait for this bit, and after the44824482+ * wait we stop with -EINTR if btrfs_fs_closing() returns non-zero - we44834483+ * have just set BTRFS_FS_CLOSING_START, so btrfs_fs_closing() will44844484+ * return 1.44854485+ */44864486+ btrfs_wake_unfinished_drop(fs_info);44874487+44884488+ /*44784489 * We may have the reclaim task running and relocating a data block group,44794490 * in which case it may create delayed iputs. So stop it before we park44804491 * the cleaner kthread otherwise we can get new delayed iputs after···45024491 * still try to wake up the cleaner.45034492 */45044493 kthread_park(fs_info->cleaner_kthread);45054505-45064506- /*45074507- * If we had UNFINISHED_DROPS we could still be processing them, so45084508- * clear that bit and wake up relocation so it can stop.45094509- */45104510- btrfs_wake_unfinished_drop(fs_info);4511449445124495 /* wait for the qgroup rescan worker to stop */45134496 btrfs_qgroup_wait_for_completion(fs_info, false);···4524451945254520 /* clear out the rbtree of defraggable inodes */45264521 btrfs_cleanup_defrag_inodes(fs_info);45224522+45234523+ /*45244524+ * After we parked the cleaner kthread, ordered extents may have45254525+ * completed and created new delayed iputs. 
If one of the async reclaim45264526+ * tasks is running and in the RUN_DELAYED_IPUTS flush state, then we45274527+ * can hang forever trying to stop it, because if a delayed iput is45284528+ * added after it ran btrfs_run_delayed_iputs() and before it called45294529+ * btrfs_wait_on_delayed_iputs(), it will hang forever since there is45304530+ * no one else to run iputs.45314531+ *45324532+ * So wait for all ongoing ordered extents to complete and then run45334533+ * delayed iputs. This works because once we reach this point no one45344534+ * can either create new ordered extents nor create delayed iputs45354535+ * through some other means.45364536+ *45374537+ * Also note that btrfs_wait_ordered_roots() is not safe here, because45384538+ * it waits for BTRFS_ORDERED_COMPLETE to be set on an ordered extent,45394539+ * but the delayed iput for the respective inode is made only when doing45404540+ * the final btrfs_put_ordered_extent() (which must happen at45414541+ * btrfs_finish_ordered_io() when we are unmounting).45424542+ */45434543+ btrfs_flush_workqueue(fs_info->endio_write_workers);45444544+ /* Ordered extents for free space inodes. */45454545+ btrfs_flush_workqueue(fs_info->endio_freespace_worker);45464546+ btrfs_run_delayed_iputs(fs_info);4527454745284548 cancel_work_sync(&fs_info->async_reclaim_work);45294549 cancel_work_sync(&fs_info->async_data_reclaim_work);
+38-2
fs/btrfs/zoned.c
···19181918 return ret;19191919}1920192019211921+static void wait_eb_writebacks(struct btrfs_block_group *block_group)19221922+{19231923+ struct btrfs_fs_info *fs_info = block_group->fs_info;19241924+ const u64 end = block_group->start + block_group->length;19251925+ struct radix_tree_iter iter;19261926+ struct extent_buffer *eb;19271927+ void __rcu **slot;19281928+19291929+ rcu_read_lock();19301930+ radix_tree_for_each_slot(slot, &fs_info->buffer_radix, &iter,19311931+ block_group->start >> fs_info->sectorsize_bits) {19321932+ eb = radix_tree_deref_slot(slot);19331933+ if (!eb)19341934+ continue;19351935+ if (radix_tree_deref_retry(eb)) {19361936+ slot = radix_tree_iter_retry(&iter);19371937+ continue;19381938+ }19391939+19401940+ if (eb->start < block_group->start)19411941+ continue;19421942+ if (eb->start >= end)19431943+ break;19441944+19451945+ slot = radix_tree_iter_resume(slot, &iter);19461946+ rcu_read_unlock();19471947+ wait_on_extent_buffer_writeback(eb);19481948+ rcu_read_lock();19491949+ }19501950+ rcu_read_unlock();19511951+}19521952+19211953static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_written)19221954{19231955 struct btrfs_fs_info *fs_info = block_group->fs_info;19241956 struct map_lookup *map;19571957+ const bool is_metadata = (block_group->flags &19581958+ (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_SYSTEM));19251959 int ret = 0;19261960 int i;19271961···19661932 }1967193319681934 /* Check if we have unwritten allocated space */19691969- if ((block_group->flags &19701970- (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_SYSTEM)) &&19351935+ if (is_metadata &&19711936 block_group->start + block_group->alloc_offset > block_group->meta_write_pointer) {19721937 spin_unlock(&block_group->lock);19731938 return -EAGAIN;···19911958 /* No need to wait for NOCOW writers. 
Zoned mode does not allow that */19921959 btrfs_wait_ordered_roots(fs_info, U64_MAX, block_group->start,19931960 block_group->length);19611961+ /* Wait for extent buffers to be written. */19621962+ if (is_metadata)19631963+ wait_eb_writebacks(block_group);1994196419951965 spin_lock(&block_group->lock);19961966
+2-2
fs/cifs/cifsfs.h
···153153#endif /* CONFIG_CIFS_NFSD_EXPORT */154154155155/* when changing internal version - update following two lines at same time */156156-#define SMB3_PRODUCT_BUILD 38157157-#define CIFS_VERSION "2.38"156156+#define SMB3_PRODUCT_BUILD 39157157+#define CIFS_VERSION "2.39"158158#endif /* _CIFSFS_H */
···167167#define EXT4_MB_CR0_OPTIMIZED 0x8000168168/* Avg fragment size rb tree lookup succeeded at least once for cr = 1 */169169#define EXT4_MB_CR1_OPTIMIZED 0x00010000170170-/* Perform linear traversal for one group */171171-#define EXT4_MB_SEARCH_NEXT_LINEAR 0x00020000172170struct ext4_allocation_request {173171 /* target inode for block we're allocating */174172 struct inode *inode;···15981600 struct list_head s_discard_list;15991601 struct work_struct s_discard_work;16001602 atomic_t s_retry_alloc_pending;16011601- struct rb_root s_mb_avg_fragment_size_root;16021602- rwlock_t s_mb_rb_lock;16031603+ struct list_head *s_mb_avg_fragment_size;16041604+ rwlock_t *s_mb_avg_fragment_size_locks;16031605 struct list_head *s_mb_largest_free_orders;16041606 rwlock_t *s_mb_largest_free_orders_locks;16051607···34113413 ext4_grpblk_t bb_first_free; /* first free block */34123414 ext4_grpblk_t bb_free; /* total free blocks */34133415 ext4_grpblk_t bb_fragments; /* nr of freespace fragments */34163416+ int bb_avg_fragment_size_order; /* order of average34173417+ fragment in BG */34143418 ext4_grpblk_t bb_largest_free_order;/* order of largest frag in BG */34153419 ext4_group_t bb_group; /* Group number */34163420 struct list_head bb_prealloc_list;···34203420 void *bb_bitmap;34213421#endif34223422 struct rw_semaphore alloc_sem;34233423- struct rb_node bb_avg_fragment_size_rb;34233423+ struct list_head bb_avg_fragment_size_node;34243424 struct list_head bb_largest_free_order_node;34253425 ext4_grpblk_t bb_counters[]; /* Nr of free power-of-two-block34263426 * regions, index is order.
+4
fs/ext4/extents.c
···460460 error_msg = "invalid eh_entries";461461 goto corrupted;462462 }463463+ if (unlikely((eh->eh_entries == 0) && (depth > 0))) {464464+ error_msg = "eh_entries is 0 but eh_depth is > 0";465465+ goto corrupted;466466+ }463467 if (!ext4_valid_extent_entries(inode, eh, lblk, &pblk, depth)) {464468 error_msg = "invalid extent entries";465469 goto corrupted;
···140140 * number of buddy bitmap orders possible) number of lists. Group-infos are141141 * placed in appropriate lists.142142 *143143- * 2) Average fragment size rb tree (sbi->s_mb_avg_fragment_size_root)143143+ * 2) Average fragment size lists (sbi->s_mb_avg_fragment_size)144144 *145145- * Locking: sbi->s_mb_rb_lock (rwlock)145145+ * Locking: sbi->s_mb_avg_fragment_size_locks(array of rw locks)146146 *147147- * This is a red black tree consisting of group infos and the tree is sorted148148- * by average fragment sizes (which is calculated as ext4_group_info->bb_free149149- * / ext4_group_info->bb_fragments).147147+ * This is an array of lists where in the i-th list there are groups with148148+ * average fragment size >= 2^i and < 2^(i+1). The average fragment size149149+ * is computed as ext4_group_info->bb_free / ext4_group_info->bb_fragments.150150+ * Note that we don't bother with a special list for completely empty groups151151+ * so we only have MB_NUM_ORDERS(sb) lists.150152 *151153 * When "mb_optimize_scan" mount option is set, mballoc consults the above data152154 * structures to decide the order in which groups are to be traversed for···162160 *163161 * At CR = 1, we only consider groups where average fragment size > request164162 * size. 
So, we lookup a group which has average fragment size just above or165165- * equal to request size using our rb tree (data structure 2) in O(log N) time.163163+ * equal to request size using our average fragment size group lists (data164164+ * structure 2) in O(1) time.166165 *167166 * If "mb_optimize_scan" mount option is not set, mballoc traverses groups in168167 * linear order which requires O(N) search time for each CR 0 and CR 1 phase.···805802 }806803}807804808808-static void ext4_mb_rb_insert(struct rb_root *root, struct rb_node *new,809809- int (*cmp)(struct rb_node *, struct rb_node *))805805+static int mb_avg_fragment_size_order(struct super_block *sb, ext4_grpblk_t len)810806{811811- struct rb_node **iter = &root->rb_node, *parent = NULL;807807+ int order;812808813813- while (*iter) {814814- parent = *iter;815815- if (cmp(new, *iter) > 0)816816- iter = &((*iter)->rb_left);817817- else818818- iter = &((*iter)->rb_right);819819- }820820-821821- rb_link_node(new, parent, iter);822822- rb_insert_color(new, root);809809+ /*810810+ * We don't bother with a special lists groups with only 1 block free811811+ * extents and for completely empty groups.812812+ */813813+ order = fls(len) - 2;814814+ if (order < 0)815815+ return 0;816816+ if (order == MB_NUM_ORDERS(sb))817817+ order--;818818+ return order;823819}824820825825-static int826826-ext4_mb_avg_fragment_size_cmp(struct rb_node *rb1, struct rb_node *rb2)827827-{828828- struct ext4_group_info *grp1 = rb_entry(rb1,829829- struct ext4_group_info,830830- bb_avg_fragment_size_rb);831831- struct ext4_group_info *grp2 = rb_entry(rb2,832832- struct ext4_group_info,833833- bb_avg_fragment_size_rb);834834- int num_frags_1, num_frags_2;835835-836836- num_frags_1 = grp1->bb_fragments ?837837- grp1->bb_free / grp1->bb_fragments : 0;838838- num_frags_2 = grp2->bb_fragments ?839839- grp2->bb_free / grp2->bb_fragments : 0;840840-841841- return (num_frags_2 - num_frags_1);842842-}843843-844844-/*845845- * Reinsert grpinfo 
- * into the avg_fragment_size tree with new average
- * fragment size.
- */
+/* Move group to appropriate avg_fragment_size list */
 static void
 mb_update_avg_fragment_size(struct super_block *sb, struct ext4_group_info *grp)
 {
 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+	int new_order;
 
 	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) || grp->bb_free == 0)
 		return;
 
-	write_lock(&sbi->s_mb_rb_lock);
-	if (!RB_EMPTY_NODE(&grp->bb_avg_fragment_size_rb)) {
-		rb_erase(&grp->bb_avg_fragment_size_rb,
-				&sbi->s_mb_avg_fragment_size_root);
-		RB_CLEAR_NODE(&grp->bb_avg_fragment_size_rb);
-	}
+	new_order = mb_avg_fragment_size_order(sb,
+					grp->bb_free / grp->bb_fragments);
+	if (new_order == grp->bb_avg_fragment_size_order)
+		return;
 
-	ext4_mb_rb_insert(&sbi->s_mb_avg_fragment_size_root,
-		&grp->bb_avg_fragment_size_rb,
-		ext4_mb_avg_fragment_size_cmp);
-	write_unlock(&sbi->s_mb_rb_lock);
+	if (grp->bb_avg_fragment_size_order != -1) {
+		write_lock(&sbi->s_mb_avg_fragment_size_locks[
+					grp->bb_avg_fragment_size_order]);
+		list_del(&grp->bb_avg_fragment_size_node);
+		write_unlock(&sbi->s_mb_avg_fragment_size_locks[
+					grp->bb_avg_fragment_size_order]);
+	}
+	grp->bb_avg_fragment_size_order = new_order;
+	write_lock(&sbi->s_mb_avg_fragment_size_locks[
+					grp->bb_avg_fragment_size_order]);
+	list_add_tail(&grp->bb_avg_fragment_size_node,
+		&sbi->s_mb_avg_fragment_size[grp->bb_avg_fragment_size_order]);
+	write_unlock(&sbi->s_mb_avg_fragment_size_locks[
+					grp->bb_avg_fragment_size_order]);
 }
 
 /*
···
 		*new_cr = 1;
 	} else {
 		*group = grp->bb_group;
-		ac->ac_last_optimal_group = *group;
 		ac->ac_flags |= EXT4_MB_CR0_OPTIMIZED;
 	}
 }
 
 /*
- * Choose next group by traversing average fragment size tree. Updates *new_cr
- * if cr lvel needs an update. Sets EXT4_MB_SEARCH_NEXT_LINEAR to indicate that
- * the linear search should continue for one iteration since there's lock
- * contention on the rb tree lock.
+ * Choose next group by traversing average fragment size list of suitable
+ * order. Updates *new_cr if cr level needs an update.
  */
 static void ext4_mb_choose_next_group_cr1(struct ext4_allocation_context *ac,
 		int *new_cr, ext4_group_t *group, ext4_group_t ngroups)
 {
 	struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
-	int avg_fragment_size, best_so_far;
-	struct rb_node *node, *found;
-	struct ext4_group_info *grp;
-
-	/*
-	 * If there is contention on the lock, instead of waiting for the lock
-	 * to become available, just continue searching lineraly. We'll resume
-	 * our rb tree search later starting at ac->ac_last_optimal_group.
-	 */
-	if (!read_trylock(&sbi->s_mb_rb_lock)) {
-		ac->ac_flags |= EXT4_MB_SEARCH_NEXT_LINEAR;
-		return;
-	}
+	struct ext4_group_info *grp, *iter;
+	int i;
 
 	if (unlikely(ac->ac_flags & EXT4_MB_CR1_OPTIMIZED)) {
 		if (sbi->s_mb_stats)
 			atomic_inc(&sbi->s_bal_cr1_bad_suggestions);
-		/* We have found something at CR 1 in the past */
-		grp = ext4_get_group_info(ac->ac_sb, ac->ac_last_optimal_group);
-		for (found = rb_next(&grp->bb_avg_fragment_size_rb); found != NULL;
-		     found = rb_next(found)) {
-			grp = rb_entry(found, struct ext4_group_info,
-				       bb_avg_fragment_size_rb);
+	}
+
+	for (i = mb_avg_fragment_size_order(ac->ac_sb, ac->ac_g_ex.fe_len);
+	     i < MB_NUM_ORDERS(ac->ac_sb); i++) {
+		if (list_empty(&sbi->s_mb_avg_fragment_size[i]))
+			continue;
+		read_lock(&sbi->s_mb_avg_fragment_size_locks[i]);
+		if (list_empty(&sbi->s_mb_avg_fragment_size[i])) {
+			read_unlock(&sbi->s_mb_avg_fragment_size_locks[i]);
+			continue;
+		}
+		grp = NULL;
+		list_for_each_entry(iter, &sbi->s_mb_avg_fragment_size[i],
+				    bb_avg_fragment_size_node) {
 			if (sbi->s_mb_stats)
 				atomic64_inc(&sbi->s_bal_cX_groups_considered[1]);
-			if (likely(ext4_mb_good_group(ac, grp->bb_group, 1)))
+			if (likely(ext4_mb_good_group(ac, iter->bb_group, 1))) {
+				grp = iter;
 				break;
-		}
-		goto done;
-	}
-
-	node = sbi->s_mb_avg_fragment_size_root.rb_node;
-	best_so_far = 0;
-	found = NULL;
-
-	while (node) {
-		grp = rb_entry(node, struct ext4_group_info,
-			       bb_avg_fragment_size_rb);
-		avg_fragment_size = 0;
-		if (ext4_mb_good_group(ac, grp->bb_group, 1)) {
-			avg_fragment_size = grp->bb_fragments ?
-				grp->bb_free / grp->bb_fragments : 0;
-			if (!best_so_far || avg_fragment_size < best_so_far) {
-				best_so_far = avg_fragment_size;
-				found = node;
 			}
 		}
-		if (avg_fragment_size > ac->ac_g_ex.fe_len)
-			node = node->rb_right;
-		else
-			node = node->rb_left;
+		read_unlock(&sbi->s_mb_avg_fragment_size_locks[i]);
+		if (grp)
+			break;
 	}
 
-done:
-	if (found) {
-		grp = rb_entry(found, struct ext4_group_info,
-			       bb_avg_fragment_size_rb);
+	if (grp) {
 		*group = grp->bb_group;
 		ac->ac_flags |= EXT4_MB_CR1_OPTIMIZED;
 	} else {
 		*new_cr = 2;
 	}
-
-	read_unlock(&sbi->s_mb_rb_lock);
-	ac->ac_last_optimal_group = *group;
 }
 
 static inline int should_optimize_scan(struct ext4_allocation_context *ac)
···
 	if (ac->ac_groups_linear_remaining) {
 		ac->ac_groups_linear_remaining--;
-		goto inc_and_return;
-	}
-
-	if (ac->ac_flags & EXT4_MB_SEARCH_NEXT_LINEAR) {
-		ac->ac_flags &= ~EXT4_MB_SEARCH_NEXT_LINEAR;
 		goto inc_and_return;
 	}
···
 {
 	*new_cr = ac->ac_criteria;
 
-	if (!should_optimize_scan(ac) || ac->ac_groups_linear_remaining)
+	if (!should_optimize_scan(ac) || ac->ac_groups_linear_remaining) {
+		*group = next_linear_group(ac, *group, ngroups);
 		return;
+	}
 
 	if (*new_cr == 0) {
 		ext4_mb_choose_next_group_cr0(ac, new_cr, group, ngroups);
···
 	struct ext4_sb_info *sbi = EXT4_SB(sb);
 	int i;
 
-	if (test_opt2(sb, MB_OPTIMIZE_SCAN) && grp->bb_largest_free_order >= 0) {
+	for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--)
+		if (grp->bb_counters[i] > 0)
+			break;
+	/* No need to move between order lists? */
+	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) ||
+	    i == grp->bb_largest_free_order) {
+		grp->bb_largest_free_order = i;
+		return;
+	}
+
+	if (grp->bb_largest_free_order >= 0) {
 		write_lock(&sbi->s_mb_largest_free_orders_locks[
 					      grp->bb_largest_free_order]);
 		list_del_init(&grp->bb_largest_free_order_node);
 		write_unlock(&sbi->s_mb_largest_free_orders_locks[
 					      grp->bb_largest_free_order]);
 	}
-	grp->bb_largest_free_order = -1; /* uninit */
-
-	for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--) {
-		if (grp->bb_counters[i] > 0) {
-			grp->bb_largest_free_order = i;
-			break;
-		}
-	}
-	if (test_opt2(sb, MB_OPTIMIZE_SCAN) &&
-	    grp->bb_largest_free_order >= 0 && grp->bb_free) {
+	grp->bb_largest_free_order = i;
+	if (grp->bb_largest_free_order >= 0 && grp->bb_free) {
 		write_lock(&sbi->s_mb_largest_free_orders_locks[
 					      grp->bb_largest_free_order]);
 		list_add_tail(&grp->bb_largest_free_order_node,
···
 			EXT4_GROUP_INFO_BBITMAP_CORRUPT);
 	}
 	mb_set_largest_free_order(sb, grp);
+	mb_update_avg_fragment_size(sb, grp);
 
 	clear_bit(EXT4_GROUP_INFO_NEED_INIT_BIT, &(grp->bb_state));
 
 	period = get_cycles() - period;
 	atomic_inc(&sbi->s_mb_buddies_generated);
 	atomic64_add(period, &sbi->s_mb_generation_time);
-	mb_update_avg_fragment_size(sb, grp);
 }
 
 /* The buddy information is attached the buddy cache inode
···
 ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
 {
 	ext4_group_t prefetch_grp = 0, ngroups, group, i;
-	int cr = -1;
+	int cr = -1, new_cr;
 	int err = 0, first_err = 0;
 	unsigned int nr = 0, prefetch_ios = 0;
 	struct ext4_sb_info *sbi;
···
 	 * from the goal value specified
 	 */
 	group = ac->ac_g_ex.fe_group;
-	ac->ac_last_optimal_group = group;
 	ac->ac_groups_linear_remaining = sbi->s_mb_max_linear_groups;
 	prefetch_grp = group;
 
-	for (i = 0; i < ngroups; group = next_linear_group(ac, group, ngroups),
-		     i++) {
-		int ret = 0, new_cr;
+	for (i = 0, new_cr = cr; i < ngroups; i++,
+	     ext4_mb_choose_next_group(ac, &new_cr, &group, ngroups)) {
+		int ret = 0;
 
 		cond_resched();
-
-		ext4_mb_choose_next_group(ac, &new_cr, &group, ngroups);
 		if (new_cr != cr) {
 			cr = new_cr;
 			goto repeat;
···
 	struct super_block *sb = pde_data(file_inode(seq->file));
 	unsigned long position;
 
-	read_lock(&EXT4_SB(sb)->s_mb_rb_lock);
-
-	if (*pos < 0 || *pos >= MB_NUM_ORDERS(sb) + 1)
+	if (*pos < 0 || *pos >= 2*MB_NUM_ORDERS(sb))
 		return NULL;
 	position = *pos + 1;
 	return (void *) ((unsigned long) position);
···
 	unsigned long position;
 
 	++*pos;
-	if (*pos < 0 || *pos >= MB_NUM_ORDERS(sb) + 1)
+	if (*pos < 0 || *pos >= 2*MB_NUM_ORDERS(sb))
 		return NULL;
 	position = *pos + 1;
 	return (void *) ((unsigned long) position);
···
 	struct ext4_sb_info *sbi = EXT4_SB(sb);
 	unsigned long position = ((unsigned long) v);
 	struct ext4_group_info *grp;
-	struct rb_node *n;
-	unsigned int count, min, max;
+	unsigned int count;
 
 	position--;
 	if (position >= MB_NUM_ORDERS(sb)) {
-		seq_puts(seq, "fragment_size_tree:\n");
-		n = rb_first(&sbi->s_mb_avg_fragment_size_root);
-		if (!n) {
-			seq_puts(seq, "\ttree_min: 0\n\ttree_max: 0\n\ttree_nodes: 0\n");
-			return 0;
-		}
-		grp = rb_entry(n, struct ext4_group_info, bb_avg_fragment_size_rb);
-		min = grp->bb_fragments ? grp->bb_free / grp->bb_fragments : 0;
-		count = 1;
-		while (rb_next(n)) {
-			count++;
-			n = rb_next(n);
-		}
-		grp = rb_entry(n, struct ext4_group_info, bb_avg_fragment_size_rb);
-		max = grp->bb_fragments ? grp->bb_free / grp->bb_fragments : 0;
+		position -= MB_NUM_ORDERS(sb);
+		if (position == 0)
+			seq_puts(seq, "avg_fragment_size_lists:\n");
 
-		seq_printf(seq, "\ttree_min: %u\n\ttree_max: %u\n\ttree_nodes: %u\n",
-			   min, max, count);
+		count = 0;
+		read_lock(&sbi->s_mb_avg_fragment_size_locks[position]);
+		list_for_each_entry(grp, &sbi->s_mb_avg_fragment_size[position],
+				    bb_avg_fragment_size_node)
+			count++;
+		read_unlock(&sbi->s_mb_avg_fragment_size_locks[position]);
+		seq_printf(seq, "\tlist_order_%u_groups: %u\n",
+			   (unsigned int)position, count);
 		return 0;
 	}
 
···
 		seq_puts(seq, "max_free_order_lists:\n");
 	}
 	count = 0;
+	read_lock(&sbi->s_mb_largest_free_orders_locks[position]);
 	list_for_each_entry(grp, &sbi->s_mb_largest_free_orders[position],
 			    bb_largest_free_order_node)
 		count++;
+	read_unlock(&sbi->s_mb_largest_free_orders_locks[position]);
 	seq_printf(seq, "\tlist_order_%u_groups: %u\n",
 		   (unsigned int)position, count);
 
···
 }
 
 static void ext4_mb_seq_structs_summary_stop(struct seq_file *seq, void *v)
-__releases(&EXT4_SB(sb)->s_mb_rb_lock)
 {
-	struct super_block *sb = pde_data(file_inode(seq->file));
-
-	read_unlock(&EXT4_SB(sb)->s_mb_rb_lock);
 }
 
 const struct seq_operations ext4_mb_seq_structs_summary_ops = {
···
 		init_rwsem(&meta_group_info[i]->alloc_sem);
 		meta_group_info[i]->bb_free_root = RB_ROOT;
 		INIT_LIST_HEAD(&meta_group_info[i]->bb_largest_free_order_node);
-		RB_CLEAR_NODE(&meta_group_info[i]->bb_avg_fragment_size_rb);
+		INIT_LIST_HEAD(&meta_group_info[i]->bb_avg_fragment_size_node);
 		meta_group_info[i]->bb_largest_free_order = -1;  /* uninit */
+		meta_group_info[i]->bb_avg_fragment_size_order = -1;  /* uninit */
 		meta_group_info[i]->bb_group = group;
 
 		mb_group_bb_bitmap_alloc(sb, meta_group_info[i], group);
···
 		i++;
 	} while (i < MB_NUM_ORDERS(sb));
 
-	sbi->s_mb_avg_fragment_size_root = RB_ROOT;
+	sbi->s_mb_avg_fragment_size =
+		kmalloc_array(MB_NUM_ORDERS(sb), sizeof(struct list_head),
+			      GFP_KERNEL);
+	if (!sbi->s_mb_avg_fragment_size) {
+		ret = -ENOMEM;
+		goto out;
+	}
+	sbi->s_mb_avg_fragment_size_locks =
+		kmalloc_array(MB_NUM_ORDERS(sb), sizeof(rwlock_t),
+			      GFP_KERNEL);
+	if (!sbi->s_mb_avg_fragment_size_locks) {
+		ret = -ENOMEM;
+		goto out;
+	}
+	for (i = 0; i < MB_NUM_ORDERS(sb); i++) {
+		INIT_LIST_HEAD(&sbi->s_mb_avg_fragment_size[i]);
+		rwlock_init(&sbi->s_mb_avg_fragment_size_locks[i]);
+	}
 	sbi->s_mb_largest_free_orders =
 		kmalloc_array(MB_NUM_ORDERS(sb), sizeof(struct list_head),
 			      GFP_KERNEL);
···
 		INIT_LIST_HEAD(&sbi->s_mb_largest_free_orders[i]);
 		rwlock_init(&sbi->s_mb_largest_free_orders_locks[i]);
 	}
-	rwlock_init(&sbi->s_mb_rb_lock);
 
 	spin_lock_init(&sbi->s_md_lock);
 	sbi->s_mb_free_pending = 0;
···
 	free_percpu(sbi->s_locality_groups);
 	sbi->s_locality_groups = NULL;
out:
+	kfree(sbi->s_mb_avg_fragment_size);
+	kfree(sbi->s_mb_avg_fragment_size_locks);
 	kfree(sbi->s_mb_largest_free_orders);
 	kfree(sbi->s_mb_largest_free_orders_locks);
 	kfree(sbi->s_mb_offsets);
···
 		kvfree(group_info);
 		rcu_read_unlock();
 	}
+	kfree(sbi->s_mb_avg_fragment_size);
+	kfree(sbi->s_mb_avg_fragment_size_locks);
 	kfree(sbi->s_mb_largest_free_orders);
 	kfree(sbi->s_mb_largest_free_orders_locks);
 	kfree(sbi->s_mb_offsets);
···
 	struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb);
 	int bsbits = ac->ac_sb->s_blocksize_bits;
 	loff_t size, isize;
+	bool inode_pa_eligible, group_pa_eligible;
 
 	if (!(ac->ac_flags & EXT4_MB_HINT_DATA))
 		return;
···
 	if (unlikely(ac->ac_flags & EXT4_MB_HINT_GOAL_ONLY))
 		return;
 
+	group_pa_eligible = sbi->s_mb_group_prealloc > 0;
+	inode_pa_eligible = true;
 	size = ac->ac_o_ex.fe_logical + EXT4_C2B(sbi, ac->ac_o_ex.fe_len);
 	isize = (i_size_read(ac->ac_inode) + ac->ac_sb->s_blocksize - 1)
 		>> bsbits;
 
+	/* No point in using inode preallocation for closed files */
 	if ((size == isize) && !ext4_fs_is_busy(sbi) &&
-	    !inode_is_open_for_write(ac->ac_inode)) {
-		ac->ac_flags |= EXT4_MB_HINT_NOPREALLOC;
-		return;
-	}
+	    !inode_is_open_for_write(ac->ac_inode))
+		inode_pa_eligible = false;
 
-	if (sbi->s_mb_group_prealloc <= 0) {
-		ac->ac_flags |= EXT4_MB_STREAM_ALLOC;
-		return;
-	}
-
-	/* don't use group allocation for large files */
 	size = max(size, isize);
-	if (size > sbi->s_mb_stream_request) {
-		ac->ac_flags |= EXT4_MB_STREAM_ALLOC;
+	/* Don't use group allocation for large files */
+	if (size > sbi->s_mb_stream_request)
+		group_pa_eligible = false;
+
+	if (!group_pa_eligible) {
+		if (inode_pa_eligible)
+			ac->ac_flags |= EXT4_MB_STREAM_ALLOC;
+		else
+			ac->ac_flags |= EXT4_MB_HINT_NOPREALLOC;
 		return;
 	}
 
···
 	ext4_fsblk_t block = 0;
 	unsigned int inquota = 0;
 	unsigned int reserv_clstrs = 0;
+	int retries = 0;
 	u64 seq;
 
 	might_sleep();
···
 			ar->len = ac->ac_b_ex.fe_len;
 		}
 	} else {
-		if (ext4_mb_discard_preallocations_should_retry(sb, ac, &seq))
+		if (++retries < 3 &&
+		    ext4_mb_discard_preallocations_should_retry(sb, ac, &seq))
 			goto repeat;
 		/*
 		 * If block allocation fails then the pa allocated above
-1
fs/ext4/mballoc.h
···
 	/* copy of the best found extent taken before preallocation efforts */
 	struct ext4_free_extent ac_f_ex;
 
-	ext4_group_t ac_last_optimal_group;
 	__u32 ac_groups_considered;
 	__u32 ac_flags;		/* allocation hints */
 	__u16 ac_groups_scanned;
+25
fs/nfs/internal.h
···
 	return GFP_KERNEL;
 }
 
+/*
+ * Special version of should_remove_suid() that ignores capabilities.
+ */
+static inline int nfs_should_remove_suid(const struct inode *inode)
+{
+	umode_t mode = inode->i_mode;
+	int kill = 0;
+
+	/* suid always must be killed */
+	if (unlikely(mode & S_ISUID))
+		kill = ATTR_KILL_SUID;
+
+	/*
+	 * sgid without any exec bits is just a mandatory locking mark; leave
+	 * it alone. If some exec bits are set, it's a real sgid; kill it.
+	 */
+	if (unlikely((mode & S_ISGID) && (mode & S_IXGRP)))
+		kill |= ATTR_KILL_SGID;
+
+	if (unlikely(kill && S_ISREG(mode)))
+		return kill;
+
+	return 0;
+}
+
 /* unlink.c */
 extern struct rpc_task *
 nfs_async_rename(struct inode *old_dir, struct inode *new_dir,
+7-2
fs/nfs/nfs42proc.c
···
 
 	status = nfs4_call_sync(server->client, server, msg,
 				&args.seq_args, &res.seq_res, 0);
-	if (status == 0)
+	if (status == 0) {
+		if (nfs_should_remove_suid(inode)) {
+			spin_lock(&inode->i_lock);
+			nfs_set_cache_invalid(inode, NFS_INO_INVALID_MODE);
+			spin_unlock(&inode->i_lock);
+		}
 		status = nfs_post_op_update_inode_force_wcc(inode,
 							    res.falloc_fattr);
-
+	}
 	if (msg->rpc_proc == &nfs4_procedures[NFSPROC4_CLNT_ALLOCATE])
 		trace_nfs4_fallocate(inode, &args, status);
 	else
+18-9
fs/nfs/super.c
···
 	if (ctx->bsize)
 		sb->s_blocksize = nfs_block_size(ctx->bsize, &sb->s_blocksize_bits);
 
-	if (server->nfs_client->rpc_ops->version != 2) {
-		/* The VFS shouldn't apply the umask to mode bits. We will do
-		 * so ourselves when necessary.
+	switch (server->nfs_client->rpc_ops->version) {
+	case 2:
+		sb->s_time_gran = 1000;
+		sb->s_time_min = 0;
+		sb->s_time_max = U32_MAX;
+		break;
+	case 3:
+		/*
+		 * The VFS shouldn't apply the umask to mode bits.
+		 * We will do so ourselves when necessary.
 		 */
 		sb->s_flags |= SB_POSIXACL;
 		sb->s_time_gran = 1;
-		sb->s_export_op = &nfs_export_ops;
-	} else
-		sb->s_time_gran = 1000;
-
-	if (server->nfs_client->rpc_ops->version != 4) {
 		sb->s_time_min = 0;
 		sb->s_time_max = U32_MAX;
-	} else {
+		sb->s_export_op = &nfs_export_ops;
+		break;
+	case 4:
+		sb->s_flags |= SB_POSIXACL;
+		sb->s_time_gran = 1;
 		sb->s_time_min = S64_MIN;
 		sb->s_time_max = S64_MAX;
+		if (server->caps & NFS_CAP_ATOMIC_OPEN_V1)
+			sb->s_export_op = &nfs_export_ops;
+		break;
 	}
 
 	sb->s_magic = NFS_SUPER_MAGIC;
-25
fs/nfs/write.c
···
 	NFS_PROTO(data->inode)->commit_rpc_prepare(task, data);
 }
 
-/*
- * Special version of should_remove_suid() that ignores capabilities.
- */
-static int nfs_should_remove_suid(const struct inode *inode)
-{
-	umode_t mode = inode->i_mode;
-	int kill = 0;
-
-	/* suid always must be killed */
-	if (unlikely(mode & S_ISUID))
-		kill = ATTR_KILL_SUID;
-
-	/*
-	 * sgid without any exec bits is just a mandatory locking mark; leave
-	 * it alone. If some exec bits are set, it's a real sgid; kill it.
-	 */
-	if (unlikely((mode & S_ISGID) && (mode & S_IXGRP)))
-		kill |= ATTR_KILL_SGID;
-
-	if (unlikely(kill && S_ISREG(mode)))
-		return kill;
-
-	return 0;
-}
-
 static void nfs_writeback_check_extend(struct nfs_pgio_header *hdr,
 				       struct nfs_fattr *fattr)
 {
+16-13
fs/nfsd/vfs.c
···
 static void
 nfsd_sanitize_attrs(struct inode *inode, struct iattr *iap)
 {
+	/* Ignore mode updates on symlinks */
+	if (S_ISLNK(inode->i_mode))
+		iap->ia_valid &= ~ATTR_MODE;
+
 	/* sanitize the mode change */
 	if (iap->ia_valid & ATTR_MODE) {
 		iap->ia_mode &= S_IALLUGO;
···
 	int accmode = NFSD_MAY_SATTR;
 	umode_t ftype = 0;
 	__be32 err;
-	int host_err;
+	int host_err = 0;
 	bool get_write_count;
 	bool size_change = (iap->ia_valid & ATTR_SIZE);
 
···
 
 	dentry = fhp->fh_dentry;
 	inode = d_inode(dentry);
-
-	/* Ignore any mode updates on symlinks */
-	if (S_ISLNK(inode->i_mode))
-		iap->ia_valid &= ~ATTR_MODE;
-
-	if (!iap->ia_valid)
-		return 0;
 
 	nfsd_sanitize_attrs(inode, iap);
 
···
 		goto out_unlock;
 	}
 
-	iap->ia_valid |= ATTR_CTIME;
-	host_err = notify_change(&init_user_ns, dentry, iap, NULL);
+	if (iap->ia_valid) {
+		iap->ia_valid |= ATTR_CTIME;
+		host_err = notify_change(&init_user_ns, dentry, iap, NULL);
+	}
 
out_unlock:
 	if (attr->na_seclabel && attr->na_seclabel->len)
···
 			     struct splice_desc *sd)
 {
 	struct svc_rqst *rqstp = sd->u.data;
+	struct page *page = buf->page;	// may be a compound one
+	unsigned offset = buf->offset;
 
-	svc_rqst_replace_page(rqstp, buf->page);
-	if (rqstp->rq_res.page_len == 0)
-		rqstp->rq_res.page_base = buf->offset;
+	page += offset / PAGE_SIZE;
+	for (int i = sd->len; i > 0; i -= PAGE_SIZE)
+		svc_rqst_replace_page(rqstp, page++);
+	if (rqstp->rq_res.page_len == 0)	// first call
+		rqstp->rq_res.page_base = offset % PAGE_SIZE;
 	rqstp->rq_res.page_len += sd->len;
 	return sd->len;
 }
···
  * cover a worst-case of every other cpu being on one of two nodes for a
  * very large NR_CPUS.
  *
- * Use PAGE_SIZE as a minimum for smaller configurations.
+ * Use PAGE_SIZE as a minimum for smaller configurations while avoiding
+ * unsigned comparison to -1.
  */
-#define CPUMAP_FILE_MAX_BYTES  ((((NR_CPUS * 9)/32 - 1) > PAGE_SIZE) \
+#define CPUMAP_FILE_MAX_BYTES  (((NR_CPUS * 9)/32 > PAGE_SIZE) \
 					? (NR_CPUS * 9)/32 - 1 : PAGE_SIZE)
#define CPULIST_FILE_MAX_BYTES  (((NR_CPUS * 7)/2 > PAGE_SIZE) ? (NR_CPUS * 7)/2 : PAGE_SIZE)
 
···
 
#define HP_SDC_CMD_SET_IM	0x40    /* 010xxxxx == set irq mask */
 
-/* The documents provided do not explicitly state that all registers betweem 
+/* The documents provided do not explicitly state that all registers between
  * 0x01 and 0x1f inclusive can be read by sending their register index as a 
  * command, but this is implied and appears to be the case.
  */
···
/* number of characters left in xmit buffer before we ask for more */
#define WAKEUP_CHARS		256
 
+/**
+ * uart_xmit_advance - Advance xmit buffer and account Tx'ed chars
+ * @up: uart_port structure describing the port
+ * @chars: number of characters sent
+ *
+ * This function advances the tail of circular xmit buffer by the number of
+ * @chars transmitted and handles accounting of transmitted bytes (into
+ * @up's icount.tx).
+ */
+static inline void uart_xmit_advance(struct uart_port *up, unsigned int chars)
+{
+	struct circ_buf *xmit = &up->state->xmit;
+
+	xmit->tail = (xmit->tail + chars) & (UART_XMIT_SIZE - 1);
+	up->icount.tx += chars;
+}
+
struct module;
struct tty_driver;
 
···
 		io_kill_timeouts(ctx, NULL, true);
 		/* if we failed setting up the ctx, we might not have any rings */
 		io_iopoll_try_reap_events(ctx);
+		/* drop cached put refs after potentially doing completions */
+		if (current->io_uring)
+			io_uring_drop_tctx_refs(current);
 	}
 
 	INIT_WORK(&ctx->exit_work, io_ring_exit_work);
+2-1
io_uring/msg_ring.c
···
 		req_set_fail(req);
 	io_req_set_res(req, ret, 0);
 	/* put file to avoid an attempt to IOPOLL the req */
-	io_put_file(req->file);
+	if (!(req->flags & REQ_F_FIXED_FILE))
+		io_put_file(req->file);
 	req->file = NULL;
 	return IOU_OK;
 }
···
 	/*
 	 * If the new process will be in a different time namespace
 	 * do not allow it to share VM or a thread group with the forking task.
-	 *
-	 * On vfork, the child process enters the target time namespace only
-	 * after exec.
 	 */
-	if ((clone_flags & (CLONE_VM | CLONE_VFORK)) == CLONE_VM) {
+	if (clone_flags & (CLONE_THREAD | CLONE_VM)) {
 		if (nsp->time_ns != nsp->time_ns_for_children)
 			return ERR_PTR(-EINVAL);
 	}
+1-2
kernel/nsproxy.c
···
 	if (IS_ERR(new_ns))
 		return  PTR_ERR(new_ns);
 
-	if ((flags & CLONE_VM) == 0)
-		timens_on_fork(new_ns, tsk);
+	timens_on_fork(new_ns, tsk);
 
 	tsk->nsproxy = new_ns;
 	return 0;
+2-4
kernel/workqueue.c
···
 	if (WARN_ON(!work->func))
 		return false;
 
-	if (!from_cancel) {
-		lock_map_acquire(&work->lockdep_map);
-		lock_map_release(&work->lockdep_map);
-	}
+	lock_map_acquire(&work->lockdep_map);
+	lock_map_release(&work->lockdep_map);
 
 	if (start_flush_work(work, &barr, from_cancel)) {
 		wait_for_completion(&barr.done);
+3-1
lib/Kconfig.debug
···
config DEBUG_INFO_DWARF4
	bool "Generate DWARF Version 4 debuginfo"
	select DEBUG_INFO
+	depends on !CC_IS_CLANG || (CC_IS_CLANG && (AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502)))
	help
-	  Generate DWARF v4 debug info. This requires gcc 4.5+ and gdb 7.0+.
+	  Generate DWARF v4 debug info. This requires gcc 4.5+, binutils 2.35.2
+	  if using clang without clang's integrated assembler, and gdb 7.0+.
 
	  If you have consumers of DWARF debug info that are not ready for
	  newer revisions of DWARF, you may wish to choose this or have your
+4-1
mm/slab_common.c
···
void kmem_cache_destroy(struct kmem_cache *s)
{
 	int refcnt;
+	bool rcu_set;
 
 	if (unlikely(!s) || !kasan_check_byte(s))
 		return;
 
 	cpus_read_lock();
 	mutex_lock(&slab_mutex);
+
+	rcu_set = s->flags & SLAB_TYPESAFE_BY_RCU;
 
 	refcnt = --s->refcount;
 	if (refcnt)
···
out_unlock:
 	mutex_unlock(&slab_mutex);
 	cpus_read_unlock();
-	if (!refcnt && !(s->flags & SLAB_TYPESAFE_BY_RCU))
+	if (!refcnt && !rcu_set)
 		kmem_cache_release(s);
}
EXPORT_SYMBOL(kmem_cache_destroy);
+16-2
mm/slub.c
···
 */
static nodemask_t slab_nodes;
 
+/*
+ * Workqueue used for flush_cpu_slab().
+ */
+static struct workqueue_struct *flushwq;
+
/********************************************************************
 *			Core slab cache functions
 *******************************************************************/
···
 		INIT_WORK(&sfw->work, flush_cpu_slab);
 		sfw->skip = false;
 		sfw->s = s;
-		schedule_work_on(cpu, &sfw->work);
+		queue_work_on(cpu, flushwq, &sfw->work);
 	}
 
 	for_each_online_cpu(cpu) {
···
 
void __init kmem_cache_init_late(void)
{
+	flushwq = alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM, 0);
+	WARN_ON(!flushwq);
}
 
struct kmem_cache *
···
 	/* Honor the call site pointer we received. */
 	trace_kmalloc(caller, ret, s, size, s->size, gfpflags);
 
+	ret = kasan_kmalloc(s, ret, size, gfpflags);
+
 	return ret;
}
EXPORT_SYMBOL(__kmalloc_track_caller);
···
 
 	/* Honor the call site pointer we received. */
 	trace_kmalloc_node(caller, ret, s, size, s->size, gfpflags, node);
+
+	ret = kasan_kmalloc(s, ret, size, gfpflags);
 
 	return ret;
}
···
 	char *name = kmalloc(ID_STR_LENGTH, GFP_KERNEL);
 	char *p = name;
 
-	BUG_ON(!name);
+	if (!name)
+		return ERR_PTR(-ENOMEM);
 
 	*p++ = ':';
 	/*
···
 	 * for the symlinks.
 	 */
 		name = create_unique_id(s);
+		if (IS_ERR(name))
+			return PTR_ERR(name);
 	}
 
 	s->kobj.kset = kset;
+4
net/batman-adv/hard-interface.c
···
#include <linux/atomic.h>
#include <linux/byteorder/generic.h>
#include <linux/container_of.h>
+#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/if.h>
#include <linux/if_arp.h>
···
 	__be16 ethertype = htons(ETH_P_BATMAN);
 	int max_header_len = batadv_max_header_len();
 	int ret;
+
+	if (hard_iface->net_dev->mtu < ETH_MIN_MTU + max_header_len)
+		return -EINVAL;
 
 	if (hard_iface->if_status != BATADV_IF_NOT_IN_USE)
 		goto out;
+3-1
net/bridge/netfilter/ebtables.c
···
 		goto free_iterate;
 	}
 
-	if (repl->valid_hooks != t->valid_hooks)
+	if (repl->valid_hooks != t->valid_hooks) {
+		ret = -EINVAL;
 		goto free_unlock;
+	}
 
 	if (repl->num_counters && repl->num_counters != t->private->nentries) {
 		ret = -EINVAL;
···
 		 MPTCP_SKB_CB(from)->map_seq, MPTCP_SKB_CB(to)->map_seq,
 		 to->len, MPTCP_SKB_CB(from)->end_seq);
 	MPTCP_SKB_CB(to)->end_seq = MPTCP_SKB_CB(from)->end_seq;
-	kfree_skb_partial(from, fragstolen);
+
+	/* note the fwd memory can reach a negative value after accounting
+	 * for the delta, but the later skb free will restore a non
+	 * negative one
+	 */
 	atomic_add(delta, &sk->sk_rmem_alloc);
 	mptcp_rmem_charge(sk, delta);
+	kfree_skb_partial(from, fragstolen);
+
 	return true;
}
 
···
 	data = ib_ptr;
 	data_limit = ib_ptr + datalen;
 
-	/* strlen("\1DCC SENT t AAAAAAAA P\1\n")=24
-	 * 5+MINMATCHLEN+strlen("t AAAAAAAA P\1\n")=14 */
-	while (data < data_limit - (19 + MINMATCHLEN)) {
-		if (memcmp(data, "\1DCC ", 5)) {
+	/* Skip any whitespace */
+	while (data < data_limit - 10) {
+		if (*data == ' ' || *data == '\r' || *data == '\n')
+			data++;
+		else
+			break;
+	}
+
+	/* strlen("PRIVMSG x ")=10 */
+	if (data < data_limit - 10) {
+		if (strncasecmp("PRIVMSG ", data, 8))
+			goto out;
+		data += 8;
+	}
+
+	/* strlen(" :\1DCC SENT t AAAAAAAA P\1\n")=26
+	 * 7+MINMATCHLEN+strlen("t AAAAAAAA P\1\n")=26
+	 */
+	while (data < data_limit - (21 + MINMATCHLEN)) {
+		/* Find first " :", the start of message */
+		if (memcmp(data, " :", 2)) {
 			data++;
 			continue;
 		}
+		data += 2;
+
+		/* then check that place only for the DCC command */
+		if (memcmp(data, "\1DCC ", 5))
+			goto out;
 		data += 5;
-		/* we have at least (19+MINMATCHLEN)-5 bytes valid data left */
+		/* we have at least (21+MINMATCHLEN)-(2+5) bytes valid data left */
 
 		iph = ip_hdr(skb);
 		pr_debug("DCC found in master %pI4:%u %pI4:%u\n",
···
 			pr_debug("DCC %s detected\n", dccprotos[i]);
 
 			/* we have at least
-			 * (19+MINMATCHLEN)-5-dccprotos[i].matchlen bytes valid
+			 * (21+MINMATCHLEN)-7-dccprotos[i].matchlen bytes valid
 			 * data left (== 14/13 bytes) */
 			if (parse_dcc(data, data_limit, &dcc_ip,
 				       &dcc_port, &addr_beg_p, &addr_end_p)) {
···
 	}
 
 	if (chain->tmplt_ops && chain->tmplt_ops != tp->ops) {
+		tfilter_put(tp, fh);
 		NL_SET_ERR_MSG(extack, "Chain template is set to a different filter kind");
 		err = -EINVAL;
 		goto errout;
+11-7
net/sched/sch_taprio.c
···
 	u32 flags;
 	enum tk_offsets tk_offset;
 	int clockid;
+	bool offloaded;
 	atomic64_t picos_per_byte; /* Using picoseconds because for 10Gbps+
 				    * speeds it's sub-nanoseconds per byte
 				    */
···
 		goto done;
 	}
 
+	q->offloaded = true;
+
done:
 	taprio_offload_free(offload);
 
···
 	struct tc_taprio_qopt_offload *offload;
 	int err;
 
-	if (!FULL_OFFLOAD_IS_ENABLED(q->flags))
+	if (!q->offloaded)
 		return 0;
-
-	if (!ops->ndo_setup_tc)
-		return -EOPNOTSUPP;
 
 	offload = taprio_offload_alloc(0);
 	if (!offload) {
···
 			   "Device failed to disable offload");
 		goto out;
 	}
+
+	q->offloaded = false;
 
out:
 	taprio_offload_free(offload);
···
 
static struct Qdisc *taprio_leaf(struct Qdisc *sch, unsigned long cl)
{
-	struct netdev_queue *dev_queue = taprio_queue_get(sch, cl);
+	struct taprio_sched *q = qdisc_priv(sch);
+	struct net_device *dev = qdisc_dev(sch);
+	unsigned int ntx = cl - 1;
 
-	if (!dev_queue)
+	if (ntx >= dev->num_tx_queues)
 		return NULL;
 
-	return dev_queue->qdisc_sleeping;
+	return q->qdiscs[ntx];
}
 
static unsigned long taprio_find(struct Qdisc *sch, u32 classid)
···
bool menu_is_visible(struct menu *menu);
bool menu_has_prompt(struct menu *menu);
const char *menu_get_prompt(struct menu *menu);
-struct menu *menu_get_root_menu(struct menu *menu);
struct menu *menu_get_parent_menu(struct menu *menu);
bool menu_has_help(struct menu *menu);
const char *menu_get_help(struct menu *menu);
-5
scripts/kconfig/menu.c
···
 	return NULL;
}
 
-struct menu *menu_get_root_menu(struct menu *menu)
-{
-	return &rootmenu;
-}
-
struct menu *menu_get_parent_menu(struct menu *menu)
{
 	enum prop_type type;
+5-5
sound/core/init.c
···
 		return -ENOMEM;
 
 	err = snd_card_init(card, parent, idx, xid, module, extra_size);
-	if (err < 0) {
-		kfree(card);
-		return err;
-	}
+	if (err < 0)
+		return err; /* card is freed by error handler */
 
 	*card_ret = card;
 	return 0;
···
 	card->managed = true;
 	err = snd_card_init(card, parent, idx, xid, module, extra_size);
 	if (err < 0) {
-		devres_free(card);
+		devres_free(card); /* in managed mode, we need to free manually */
 		return err;
 	}
 
···
 		mutex_unlock(&snd_card_mutex);
 		dev_err(parent, "cannot find the slot for index %d (range 0-%i), error: %d\n",
 			 idx, snd_ecards_limit - 1, err);
+		if (!card->managed)
+			kfree(card); /* manually free here, as no destructor called */
 		return err;
 	}
 	set_bit(idx, snd_cards_lock);		/* lock it */
+2-2
sound/pci/hda/hda_bind.c
···
 		return codec->bus->core.ext_ops->hdev_detach(&codec->core);
 	}
 
-	refcount_dec(&codec->pcm_ref);
 	snd_hda_codec_disconnect_pcms(codec);
 	snd_hda_jack_tbl_disconnect(codec);
-	wait_event(codec->remove_sleep, !refcount_read(&codec->pcm_ref));
+	if (!refcount_dec_and_test(&codec->pcm_ref))
+		wait_event(codec->remove_sleep, !refcount_read(&codec->pcm_ref));
 	snd_power_sync_ref(codec->bus->card);
 
 	if (codec->patch_ops.free)
···758758 * The endpoint needs to be closed via snd_usb_endpoint_close() later.759759 *760760 * Note that this function doesn't configure the endpoint. The substream761761- * needs to set it up later via snd_usb_endpoint_set_params() and762762- * snd_usb_endpoint_prepare().761761+ * needs to set it up later via snd_usb_endpoint_configure().763762 */764763struct snd_usb_endpoint *765764snd_usb_endpoint_open(struct snd_usb_audio *chip,···12921293/*12931294 * snd_usb_endpoint_set_params: configure an snd_usb_endpoint12941295 *12951295- * It's called either from hw_params callback.12961296 * Determine the number of URBs to be used on this endpoint.12971297 * An endpoint must be configured before it can be started.12981298 * An endpoint that is already running can not be reconfigured.12991299 */13001300-int snd_usb_endpoint_set_params(struct snd_usb_audio *chip,13011301- struct snd_usb_endpoint *ep)13001300+static int snd_usb_endpoint_set_params(struct snd_usb_audio *chip,13011301+ struct snd_usb_endpoint *ep)13021302{13031303 const struct audioformat *fmt = ep->cur_audiofmt;13041304 int err;···13801382}1381138313821384/*13831383- * snd_usb_endpoint_prepare: Prepare the endpoint13851385+ * snd_usb_endpoint_configure: Configure the endpoint13841386 *13851387 * This function sets up the EP to be fully usable state.13861386- * It's called either from prepare callback.13881388+ * It's called either from hw_params or prepare callback.13871389 * The function checks need_setup flag, and performs nothing unless needed,13881390 * so it's safe to call this multiple times.13891391 *13901392 * This returns zero if unchanged, 1 if the configuration has changed,13911393 * or a negative error code.13921394 */13931393-int snd_usb_endpoint_prepare(struct snd_usb_audio *chip,13941394- struct snd_usb_endpoint *ep)13951395+int snd_usb_endpoint_configure(struct snd_usb_audio *chip,13961396+ struct snd_usb_endpoint *ep)13951397{13961398 bool iface_first;13971399 int err = 0;···14121414 if (err < 0)14131415 goto unlock;14141416 }14171417+ err = snd_usb_endpoint_set_params(chip, ep);14181418+ if (err < 0)14191419+ goto unlock;14151420 goto done;14161421 }14171422···14391438 goto unlock;1440143914411440 err = init_sample_rate(chip, ep);14411441+ if (err < 0)14421442+ goto unlock;14431443+14441444+ err = snd_usb_endpoint_set_params(chip, ep);14421445 if (err < 0)14431446 goto unlock;14441447
···443443 if (stop_endpoints(subs, false))444444 sync_pending_stops(subs);445445 if (subs->sync_endpoint) {446446- err = snd_usb_endpoint_prepare(chip, subs->sync_endpoint);446446+ err = snd_usb_endpoint_configure(chip, subs->sync_endpoint);447447 if (err < 0)448448 return err;449449 }450450- err = snd_usb_endpoint_prepare(chip, subs->data_endpoint);450450+ err = snd_usb_endpoint_configure(chip, subs->data_endpoint);451451 if (err < 0)452452 return err;453453 snd_usb_set_format_quirk(subs, subs->cur_audiofmt);454454 } else {455455 if (subs->sync_endpoint) {456456- err = snd_usb_endpoint_prepare(chip, subs->sync_endpoint);456456+ err = snd_usb_endpoint_configure(chip, subs->sync_endpoint);457457 if (err < 0)458458 return err;459459 }···551551 subs->cur_audiofmt = fmt;552552 mutex_unlock(&chip->mutex);553553554554- if (subs->sync_endpoint) {555555- ret = snd_usb_endpoint_set_params(chip, subs->sync_endpoint);556556- if (ret < 0)557557- goto unlock;558558- }559559-560560- ret = snd_usb_endpoint_set_params(chip, subs->data_endpoint);554554+ ret = configure_endpoints(chip, subs);561555562556 unlock:563557 if (ret < 0)
+3-2
tools/arch/x86/include/asm/cpufeatures.h
···457457#define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */458458#define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */459459#define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */460460-#define X86_BUG_RETBLEED X86_BUG(26) /* CPU is affected by RETBleed */461461-#define X86_BUG_EIBRS_PBRSB X86_BUG(27) /* EIBRS is vulnerable to Post Barrier RSB Predictions */460460+#define X86_BUG_MMIO_UNKNOWN X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */461461+#define X86_BUG_RETBLEED X86_BUG(27) /* CPU is affected by RETBleed */462462+#define X86_BUG_EIBRS_PBRSB X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */462463463464#endif /* _ASM_X86_CPUFEATURES_H */
+3-3
tools/hv/hv_kvp_daemon.c
···44444545/*4646 * KVP protocol: The user mode component first registers with the4747- * the kernel component. Subsequently, the kernel component requests, data4747+ * kernel component. Subsequently, the kernel component requests, data4848 * for the specified keys. In response to this message the user mode component4949 * fills in the value corresponding to the specified key. We overload the5050 * sequence field in the cn_msg header to define our KVP message types.···772772 const char *str;773773774774 if (family == AF_INET) {775775- addr = (struct sockaddr_in *)addrp;775775+ addr = addrp;776776 str = inet_ntop(family, &addr->sin_addr, tmp, 50);777777 addr_length = INET_ADDRSTRLEN;778778 } else {779779- addr6 = (struct sockaddr_in6 *)addrp;779779+ addr6 = addrp;780780 str = inet_ntop(family, &addr6->sin6_addr.s6_addr, tmp, 50);781781 addr_length = INET6_ADDRSTRLEN;782782 }
···33713371 return 0;3372337233733373 perf_cpu_map__for_each_cpu(cpu, idx, cpus) {33743374+ if (cpu.cpu == -1)33753375+ continue;33743376 /* Return ENODEV is input cpu is greater than max cpu */33753377 if ((unsigned long)cpu.cpu > mask->nbits)33763378 return -ENODEV;
+83
tools/perf/tests/shell/stat_bpf_counters_cgrp.sh
···11+#!/bin/sh22+# perf stat --bpf-counters --for-each-cgroup test33+# SPDX-License-Identifier: GPL-2.044+55+set -e66+77+test_cgroups=88+if [ "$1" = "-v" ]; then99+ verbose="1"1010+fi1111+1212+# skip if --bpf-counters --for-each-cgroup is not supported1313+check_bpf_counter()1414+{1515+ if ! perf stat -a --bpf-counters --for-each-cgroup / true > /dev/null 2>&1; then1616+ if [ "${verbose}" = "1" ]; then1717+ echo "Skipping: --bpf-counters --for-each-cgroup not supported"1818+ perf --no-pager stat -a --bpf-counters --for-each-cgroup / true || true1919+ fi2020+ exit 22121+ fi2222+}2323+2424+# find two cgroups to measure2525+find_cgroups()2626+{2727+ # try usual systemd slices first2828+ if [ -d /sys/fs/cgroup/system.slice -a -d /sys/fs/cgroup/user.slice ]; then2929+ test_cgroups="system.slice,user.slice"3030+ return3131+ fi3232+3333+ # try root and self cgroups3434+ local self_cgrp=$(grep perf_event /proc/self/cgroup | cut -d: -f3)3535+ if [ -z ${self_cgrp} ]; then3636+ # cgroup v2 doesn't specify perf_event3737+ self_cgrp=$(grep ^0: /proc/self/cgroup | cut -d: -f3)3838+ fi3939+4040+ if [ -z ${self_cgrp} ]; then4141+ test_cgroups="/"4242+ else4343+ test_cgroups="/,${self_cgrp}"4444+ fi4545+}4646+4747+# As cgroup events are cpu-wide, we cannot simply compare the result.4848+# Just check if it runs without failure and has non-zero results.4949+check_system_wide_counted()5050+{5151+ local output5252+5353+ output=$(perf stat -a --bpf-counters --for-each-cgroup ${test_cgroups} -e cpu-clock -x, sleep 1 2>&1)5454+ if echo ${output} | grep -q -F "<not "; then5555+ echo "Some system-wide events are not counted"5656+ if [ "${verbose}" = "1" ]; then5757+ echo ${output}5858+ fi5959+ exit 16060+ fi6161+}6262+6363+check_cpu_list_counted()6464+{6565+ local output6666+6767+ output=$(perf stat -C 1 --bpf-counters --for-each-cgroup ${test_cgroups} -e cpu-clock -x, taskset -c 1 sleep 1 2>&1)6868+ if echo ${output} | grep -q -F "<not "; then6969+ echo "Some CPU events are not counted"7070+ if [ "${verbose}" = "1" ]; then7171+ echo ${output}7272+ fi7373+ exit 17474+ fi7575+}7676+7777+check_bpf_counter7878+find_cgroups7979+8080+check_system_wide_counted8181+check_cpu_list_counted8282+8383+exit 0
+8-2
tools/perf/tests/wp.c
···22#include <stdlib.h>33#include <string.h>44#include <unistd.h>55+#include <errno.h>56#include <sys/ioctl.h>77+#include <linux/compiler.h>68#include <linux/hw_breakpoint.h>79#include <linux/kernel.h>810#include "tests.h"···139137#endif140138}141139142142-static int test__wp_modify(struct test_suite *test __maybe_unused,143143- int subtest __maybe_unused)140140+static int test__wp_modify(struct test_suite *test __maybe_unused, int subtest __maybe_unused)144141{145142#if defined(__s390x__)146143 return TEST_SKIP;···161160 new_attr.disabled = 1;162161 ret = ioctl(fd, PERF_EVENT_IOC_MODIFY_ATTRIBUTES, &new_attr);163162 if (ret < 0) {163163+ if (errno == ENOTTY) {164164+ test->test_cases[subtest].skip_reason = "missing kernel support";165165+ ret = TEST_SKIP;166166+ }167167+164168 pr_debug("ioctl(PERF_EVENT_IOC_MODIFY_ATTRIBUTES) failed\n");165169 close(fd);166170 return ret;
+5-5
tools/perf/util/bpf_counter_cgroup.c
···95959696 perf_cpu_map__for_each_cpu(cpu, i, evlist->core.all_cpus) {9797 link = bpf_program__attach_perf_event(skel->progs.on_cgrp_switch,9898- FD(cgrp_switch, cpu.cpu));9898+ FD(cgrp_switch, i));9999 if (IS_ERR(link)) {100100 pr_err("Failed to attach cgroup program\n");101101 err = PTR_ERR(link);···115115 evsel->cgrp = NULL;116116117117 /* open single copy of the events w/o cgroup */118118- err = evsel__open_per_cpu(evsel, evlist->core.all_cpus, -1);118118+ err = evsel__open_per_cpu(evsel, evsel->core.cpus, -1);119119 if (err) {120120 pr_err("Failed to open first cgroup events\n");121121 goto out;122122 }123123124124 map_fd = bpf_map__fd(skel->maps.events);125125- perf_cpu_map__for_each_cpu(cpu, j, evlist->core.all_cpus) {126126- int fd = FD(evsel, cpu.cpu);125125+ perf_cpu_map__for_each_cpu(cpu, j, evsel->core.cpus) {126126+ int fd = FD(evsel, j);127127 __u32 idx = evsel->core.idx * total_cpus + cpu.cpu;128128129129 err = bpf_map_update_elem(map_fd, &idx, &fd,···269269 goto out;270270 }271271272272- perf_cpu_map__for_each_cpu(cpu, i, evlist->core.all_cpus) {272272+ perf_cpu_map__for_each_cpu(cpu, i, evsel->core.cpus) {273273 counts = perf_counts(evsel->counts, i, 0);274274 counts->val = values[cpu.cpu].counter;275275 counts->ena = values[cpu.cpu].enabled;
+1-1
tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
···176176}177177178178// This will be attached to cgroup-switches event for each cpu179179-SEC("perf_events")179179+SEC("perf_event")180180int BPF_PROG(on_cgrp_switch)181181{182182 return bperf_cgroup_count();
···21022102 * unusual. One significant peculiarity is that the mapping (start -> pgoff)21032103 * is not the same for the kernel map and the modules map. That happens because21042104 * the data is copied adjacently whereas the original kcore has gaps. Finally,21052105- * kallsyms and modules files are compared with their copies to check that21062106- * modules have not been loaded or unloaded while the copies were taking place.21052105+ * kallsyms file is compared with its copy to check that modules have not been21062106+ * loaded or unloaded while the copies were taking place.21072107 *21082108 * Return: %0 on success, %-1 on failure.21092109 */···21652165 if (copy_bytes(kcore.fd, p->offset, extract.fd, offs, p->len))21662166 goto out_extract_close;21672167 }21682168-21692169- if (kcore_copy__compare_file(from_dir, to_dir, "modules"))21702170- goto out_extract_close;2171216821722169 if (kcore_copy__compare_file(from_dir, to_dir, "kallsyms"))21732170 goto out_extract_close;
···11+#!/bin/sh22+# SPDX-License-Identifier: GPL-2.033+#44+# cause kernel oops in bond_rr_gen_slave_id55+DEBUG=${DEBUG:-0}66+77+set -e88+test ${DEBUG} -ne 0 && set -x99+1010+finish()1111+{1212+ ip netns delete server || true1313+ ip netns delete client || true1414+ ip link del link1_1 || true1515+}1616+1717+trap finish EXIT1818+1919+client_ip4=192.168.1.1982020+server_ip4=192.168.1.2542121+2222+# setup kernel so it reboots after causing the panic2323+echo 180 >/proc/sys/kernel/panic2424+2525+# build namespaces2626+ip link add dev link1_1 type veth peer name link1_22727+2828+ip netns add "server"2929+ip link set dev link1_2 netns server up name eth03030+ip netns exec server ip addr add ${server_ip4}/24 dev eth03131+3232+ip netns add "client"3333+ip link set dev link1_1 netns client down name eth03434+ip netns exec client ip link add dev bond0 down type bond mode 1 \3535+ miimon 100 all_slaves_active 13636+ip netns exec client ip link set dev eth0 down master bond03737+ip netns exec client ip link set dev bond0 up3838+ip netns exec client ip addr add ${client_ip4}/24 dev bond03939+ip netns exec client ping -c 5 $server_ip4 >/dev/null4040+4141+ip netns exec client ip link set dev eth0 down nomaster4242+ip netns exec client ip link set dev bond0 down4343+ip netns exec client ip link set dev bond0 type bond mode 0 \4444+ arp_interval 1000 arp_ip_target "+${server_ip4}"4545+ip netns exec client ip link set dev eth0 down master bond04646+ip netns exec client ip link set dev bond0 up4747+ip netns exec client ping -c 5 $server_ip4 >/dev/null4848+4949+exit 0
···11+#!/bin/bash22+# SPDX-License-Identifier: GPL-2.033+#44+# Test bond device handling of addr lists (dev->uc, mc)55+#66+77+ALL_TESTS="88+ bond_cleanup_mode199+ bond_cleanup_mode41010+ bond_listen_lacpdu_multicast_case_down1111+ bond_listen_lacpdu_multicast_case_up1212+"1313+1414+REQUIRE_MZ=no1515+NUM_NETIFS=01616+lib_dir=$(dirname "$0")1717+source "$lib_dir"/../../../net/forwarding/lib.sh1818+1919+source "$lib_dir"/lag_lib.sh2020+2121+2222+destroy()2323+{2424+ local ifnames=(dummy1 dummy2 bond1 mv0)2525+ local ifname2626+2727+ for ifname in "${ifnames[@]}"; do2828+ ip link del "$ifname" &>/dev/null2929+ done3030+}3131+3232+cleanup()3333+{3434+ pre_cleanup3535+3636+ destroy3737+}3838+3939+4040+# bond driver control paths vary between modes that have a primary slave4141+# (bond_uses_primary()) and others. Test both kinds of modes.4242+4343+bond_cleanup_mode1()4444+{4545+ RET=04646+4747+ test_LAG_cleanup "bonding" "active-backup"4848+}4949+5050+bond_cleanup_mode4() {5151+ RET=05252+5353+ test_LAG_cleanup "bonding" "802.3ad"5454+}5555+5656+bond_listen_lacpdu_multicast()5757+{5858+ # Initial state of bond device, up | down5959+ local init_state=$16060+ local lacpdu_mc="01:80:c2:00:00:02"6161+6262+ ip link add dummy1 type dummy6363+ ip link add bond1 "$init_state" type bond mode 802.3ad6464+ ip link set dev dummy1 master bond16565+ if [ "$init_state" = "down" ]; then6666+ ip link set dev bond1 up6767+ fi6868+6969+ grep_bridge_fdb "$lacpdu_mc" bridge fdb show brport dummy1 >/dev/null7070+ check_err $? "LACPDU multicast address not present on slave (1)"7171+7272+ ip link set dev bond1 down7373+7474+ not grep_bridge_fdb "$lacpdu_mc" bridge fdb show brport dummy1 >/dev/null7575+ check_err $? "LACPDU multicast address still present on slave"7676+7777+ ip link set dev bond1 up7878+7979+ grep_bridge_fdb "$lacpdu_mc" bridge fdb show brport dummy1 >/dev/null8080+ check_err $? "LACPDU multicast address not present on slave (2)"8181+8282+ cleanup8383+8484+ log_test "bonding LACPDU multicast address to slave (from bond $init_state)"8585+}8686+8787+# The LACPDU mc addr is added by different paths depending on the initial state8888+# of the bond when enslaving a device. Test both cases.8989+9090+bond_listen_lacpdu_multicast_case_down()9191+{9292+ RET=09393+9494+ bond_listen_lacpdu_multicast "down"9595+}9696+9797+bond_listen_lacpdu_multicast_case_up()9898+{9999+ RET=0100100+101101+ bond_listen_lacpdu_multicast "up"102102+}103103+104104+105105+trap cleanup EXIT106106+107107+tests_run108108+109109+exit "$EXIT_STATUS"
···11+#!/bin/bash22+# SPDX-License-Identifier: GPL-2.033+44+# Test that a link aggregation device (bonding, team) removes the hardware55+# addresses that it adds on its underlying devices.66+test_LAG_cleanup()77+{88+ local driver=$199+ local mode=$21010+ local ucaddr="02:00:00:12:34:56"1111+ local addr6="fe80::78:9abc/64"1212+ local mcaddr="33:33:ff:78:9a:bc"1313+ local name1414+1515+ ip link add dummy1 type dummy1616+ ip link add dummy2 type dummy1717+ if [ "$driver" = "bonding" ]; then1818+ name="bond1"1919+ ip link add "$name" up type bond mode "$mode"2020+ ip link set dev dummy1 master "$name"2121+ ip link set dev dummy2 master "$name"2222+ elif [ "$driver" = "team" ]; then2323+ name="team0"2424+ teamd -d -c '2525+ {2626+ "device": "'"$name"'",2727+ "runner": {2828+ "name": "'"$mode"'"2929+ },3030+ "ports": {3131+ "dummy1":3232+ {},3333+ "dummy2":3434+ {}3535+ }3636+ }3737+ '3838+ ip link set dev "$name" up3939+ else4040+ check_err 14141+ log_test test_LAG_cleanup ": unknown driver \"$driver\""4242+ return4343+ fi4444+4545+ # Used to test dev->uc handling4646+ ip link add mv0 link "$name" up address "$ucaddr" type macvlan4747+ # Used to test dev->mc handling4848+ ip address add "$addr6" dev "$name"4949+ ip link set dev "$name" down5050+ ip link del "$name"5151+5252+ not grep_bridge_fdb "$ucaddr" bridge fdb show >/dev/null5353+ check_err $? "macvlan unicast address still present on a slave"5454+5555+ not grep_bridge_fdb "$mcaddr" bridge fdb show >/dev/null5656+ check_err $? "IPv6 solicited-node multicast mac address still present on a slave"5757+5858+ cleanup5959+6060+ log_test "$driver cleanup mode $mode"6161+}
+6
tools/testing/selftests/drivers/net/team/Makefile
···11+# SPDX-License-Identifier: GPL-2.022+# Makefile for net selftests33+44+TEST_PROGS := dev_addr_lists.sh55+66+include ../../../lib.mk
···11+#!/bin/bash22+# SPDX-License-Identifier: GPL-2.033+#44+# Test team device handling of addr lists (dev->uc, mc)55+#66+77+ALL_TESTS="88+ team_cleanup99+"1010+1111+REQUIRE_MZ=no1212+NUM_NETIFS=01313+lib_dir=$(dirname "$0")1414+source "$lib_dir"/../../../net/forwarding/lib.sh1515+1616+source "$lib_dir"/../bonding/lag_lib.sh1717+1818+1919+destroy()2020+{2121+ local ifnames=(dummy0 dummy1 team0 mv0)2222+ local ifname2323+2424+ for ifname in "${ifnames[@]}"; do2525+ ip link del "$ifname" &>/dev/null2626+ done2727+}2828+2929+cleanup()3030+{3131+ pre_cleanup3232+3333+ destroy3434+}3535+3636+3737+team_cleanup()3838+{3939+ RET=04040+4141+ test_LAG_cleanup "team" "lacp"4242+}4343+4444+4545+require_command teamd4646+4747+trap cleanup EXIT4848+4949+tests_run5050+5151+exit "$EXIT_STATUS"
+1-1
tools/testing/selftests/kvm/rseq_test.c
···227227 ucall_init(vm, NULL);228228229229 pthread_create(&migration_thread, NULL, migration_worker,230230- (void *)(unsigned long)gettid());230230+ (void *)(unsigned long)syscall(SYS_gettid));231231232232 for (i = 0; !done; i++) {233233 vcpu_run(vcpu);
···4242selfdir = $(realpath $(dir $(filter %/lib.mk,$(MAKEFILE_LIST))))4343top_srcdir = $(selfdir)/../../..44444545+ifeq ($(KHDR_INCLUDES),)4646+KHDR_INCLUDES := -isystem $(top_srcdir)/usr/include4747+endif4848+4549# The following are built by lib.mk common compile rules.4650# TEST_CUSTOM_PROGS should be used by tests that require4751# custom build rule and prevent common build rule use.
···2828# +------------------+ +------------------+2929#30303131-ALL_TESTS="mcast_v4 mcast_v6 rpf_v4 rpf_v6"3131+ALL_TESTS="mcast_v4 mcast_v6 rpf_v4 rpf_v6 unres_v4 unres_v6"3232NUM_NETIFS=63333source lib.sh3434source tc_common.sh···404404 tc filter del dev $h1 ingress protocol ipv6 pref 1 handle 1 flower405405406406 log_test "RPF IPv6"407407+}408408+409409+unres_v4()410410+{411411+ # Send a multicast packet not corresponding to an installed route,412412+ # causing the kernel to queue the packet for resolution and emit an413413+ # IGMPMSG_NOCACHE notification. smcrouted will react to this414414+ # notification by consulting its (*, G) list and installing an (S, G)415415+ # route, which will be used to forward the queued packet.416416+417417+ RET=0418418+419419+ tc filter add dev $h2 ingress protocol ip pref 1 handle 1 flower \420420+ dst_ip 225.1.2.3 ip_proto udp dst_port 12345 action drop421421+ tc filter add dev $h3 ingress protocol ip pref 1 handle 1 flower \422422+ dst_ip 225.1.2.3 ip_proto udp dst_port 12345 action drop423423+424424+ # Forwarding should fail before installing a matching (*, G).425425+ $MZ $h1 -c 1 -p 128 -t udp "ttl=10,sp=54321,dp=12345" \426426+ -a 00:11:22:33:44:55 -b 01:00:5e:01:02:03 \427427+ -A 198.51.100.2 -B 225.1.2.3 -q428428+429429+ tc_check_packets "dev $h2 ingress" 1 0430430+ check_err $? "Multicast received on first host when should not"431431+ tc_check_packets "dev $h3 ingress" 1 0432432+ check_err $? "Multicast received on second host when should not"433433+434434+ # Create (*, G). Will not be installed in the kernel.435435+ create_mcast_sg $rp1 0.0.0.0 225.1.2.3 $rp2 $rp3436436+437437+ $MZ $h1 -c 1 -p 128 -t udp "ttl=10,sp=54321,dp=12345" \438438+ -a 00:11:22:33:44:55 -b 01:00:5e:01:02:03 \439439+ -A 198.51.100.2 -B 225.1.2.3 -q440440+441441+ tc_check_packets "dev $h2 ingress" 1 1442442+ check_err $? "Multicast not received on first host"443443+ tc_check_packets "dev $h3 ingress" 1 1444444+ check_err $? "Multicast not received on second host"445445+446446+ delete_mcast_sg $rp1 0.0.0.0 225.1.2.3 $rp2 $rp3447447+448448+ tc filter del dev $h3 ingress protocol ip pref 1 handle 1 flower449449+ tc filter del dev $h2 ingress protocol ip pref 1 handle 1 flower450450+451451+ log_test "Unresolved queue IPv4"452452+}453453+454454+unres_v6()455455+{456456+ # Send a multicast packet not corresponding to an installed route,457457+ # causing the kernel to queue the packet for resolution and emit an458458+ # MRT6MSG_NOCACHE notification. smcrouted will react to this459459+ # notification by consulting its (*, G) list and installing an (S, G)460460+ # route, which will be used to forward the queued packet.461461+462462+ RET=0463463+464464+ tc filter add dev $h2 ingress protocol ipv6 pref 1 handle 1 flower \465465+ dst_ip ff0e::3 ip_proto udp dst_port 12345 action drop466466+ tc filter add dev $h3 ingress protocol ipv6 pref 1 handle 1 flower \467467+ dst_ip ff0e::3 ip_proto udp dst_port 12345 action drop468468+469469+ # Forwarding should fail before installing a matching (*, G).470470+ $MZ $h1 -6 -c 1 -p 128 -t udp "ttl=10,sp=54321,dp=12345" \471471+ -a 00:11:22:33:44:55 -b 33:33:00:00:00:03 \472472+ -A 2001:db8:1::2 -B ff0e::3 -q473473+474474+ tc_check_packets "dev $h2 ingress" 1 0475475+ check_err $? "Multicast received on first host when should not"476476+ tc_check_packets "dev $h3 ingress" 1 0477477+ check_err $? "Multicast received on second host when should not"478478+479479+ # Create (*, G). Will not be installed in the kernel.480480+ create_mcast_sg $rp1 :: ff0e::3 $rp2 $rp3481481+482482+ $MZ $h1 -6 -c 1 -p 128 -t udp "ttl=10,sp=54321,dp=12345" \483483+ -a 00:11:22:33:44:55 -b 33:33:00:00:00:03 \484484+ -A 2001:db8:1::2 -B ff0e::3 -q485485+486486+ tc_check_packets "dev $h2 ingress" 1 1487487+ check_err $? "Multicast not received on first host"488488+ tc_check_packets "dev $h3 ingress" 1 1489489+ check_err $? "Multicast not received on second host"490490+491491+ delete_mcast_sg $rp1 :: ff0e::3 $rp2 $rp3492492+493493+ tc filter del dev $h3 ingress protocol ipv6 pref 1 handle 1 flower494494+ tc filter del dev $h2 ingress protocol ipv6 pref 1 handle 1 flower495495+496496+ log_test "Unresolved queue IPv6"407497}408498409499trap cleanup EXIT
+1
tools/testing/selftests/net/forwarding/sch_red.sh
···11+#!/bin/bash12# SPDX-License-Identifier: GPL-2.02334# This test sends one stream of traffic from H1 through a TBF shaper, to a RED