CREDITS
@@
 D: APM driver (early port)
 D: DRM drivers (author of several)
 
+N: Veaceslav Falico
+E: vfalico@gmail.com
+D: Co-maintainer and co-author of the network bonding driver.
+
 N: János Farkas
 E: chexum@shadow.banki.hu
 D: romfs, various (mostly networking) fixes
@@
 D: XF86_Mach8
 D: XF86_8514
 D: cfdisk (curses based disk partitioning program)
+
+N: Mat Martineau
+E: mat@martineau.name
+D: MPTCP subsystem co-maintainer 2020-2023
+D: Keyctl restricted keyring and Diffie-Hellman UAPI
+D: Bluetooth L2CAP ERTM mode and AMP
+S: USA
 
 N: John S. Marvin
 E: jsm@fc.hp.com
@@
 S: B-1206 Jingmao Guojigongyu
 S: 16 Baliqiao Nanjie, Beijing 101100
 S: People's Repulic of China
+
+N: Vlad Yasevich
+E: vyasevich@gmail.com
+D: SCTP protocol maintainer.
 
 N: Aviad Yehezkel
 E: aviadye@nvidia.com
Documentation/admin-guide/cgroup-v2.rst (+6 -9)
@@
 	This is a simple interface to trigger memory reclaim in the
 	target cgroup.
 
-	This file accepts a string which contains the number of bytes to
-	reclaim.
+	This file accepts a single key, the number of bytes to reclaim.
+	No nested keys are currently supported.
 
 	Example::
 
 	  echo "1G" > memory.reclaim
+
+	The interface can be later extended with nested keys to
+	configure the reclaim behavior. For example, specify the
+	type of memory to reclaim from (anon, file, ..).
 
 	Please note that the kernel can over or under reclaim from
 	the target cgroup. If less bytes are reclaimed than the
@@
 	the memory reclaim normally is not exercised in this case.
 	This means that the networking layer will not adapt based on
 	reclaim induced by memory.reclaim.
-
-	This file also allows the user to specify the nodes to reclaim from,
-	via the 'nodes=' key, for example::
-
-	  echo "1G nodes=0,1" > memory.reclaim
-
-	The above instructs the kernel to reclaim memory from nodes 0,1.
 
 memory.peak
 	A read-only single value file which exists on non-root
@@
     additional information and example.
 
 patternProperties:
-  # 25 LDOs
-  "^LDO([1-9]|[1][0-9]|2[0-5])$":
+  # 25 LDOs, without LDO10-12
+  "^LDO([1-9]|1[3-9]|2[0-5])$":
     type: object
     $ref: regulator.yaml#
     unevaluatedProperties: false
     description:
       Properties for single LDO regulator.
+
+    required:
+      - regulator-name
+
+  "^LDO(1[0-2])$":
+    type: object
+    $ref: regulator.yaml#
+    unevaluatedProperties: false
+    description:
+      Properties for single LDO regulator.
+
+    properties:
+      samsung,ext-control-gpios:
+        maxItems: 1
+        description:
+          LDO10, LDO11 and LDO12 can be configured to external control over
+          GPIO.
 
     required:
       - regulator-name
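The split between the two `patternProperties` keys above can be sanity-checked outside the schema tooling. A minimal sketch, assuming only that the two regexes are copied verbatim from the binding (the `classify` helper is hypothetical, not part of the schema):

```python
import re

# The two patterns from the binding: LDO10-12 are carved out of the
# general rule so that they alone may carry samsung,ext-control-gpios.
plain_ldo = re.compile(r"^LDO([1-9]|1[3-9]|2[0-5])$")
ext_ctrl_ldo = re.compile(r"^LDO(1[0-2])$")

def classify(name):
    """Return which schema rule an LDO node name falls under."""
    if plain_ldo.match(name):
        return "plain"
    if ext_ctrl_ldo.match(name):
        return "ext-control"
    return "unmatched"
```

Every name from LDO1 to LDO25 matches exactly one of the two rules, so the split covers the same 25 LDOs as the old single pattern.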
Documentation/devicetree/bindings/riscv/cpus.yaml (+1 -1)
@@
       insensitive, letters in the riscv,isa string must be all
       lowercase to simplify parsing.
     $ref: "/schemas/types.yaml#/definitions/string"
-    pattern: ^rv(?:64|32)imaf?d?q?c?b?v?k?h?(?:_[hsxz](?:[a-z])+)*$
+    pattern: ^rv(?:64|32)imaf?d?q?c?b?k?j?p?v?h?(?:[hsxz](?:[a-z])+)?(?:_[hsxz](?:[a-z])+)*$
 
   # RISC-V requires 'timebase-frequency' in /cpus, so disallow it here
   timebase-frequency: false
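The updated `riscv,isa` pattern can be exercised directly, since Python's `re` accepts the same syntax. A small sketch (the sample ISA strings are illustrative, not taken from the binding):

```python
import re

# The new riscv,isa pattern from the binding, copied verbatim.
ISA = re.compile(
    r"^rv(?:64|32)imaf?d?q?c?b?k?j?p?v?h?(?:[hsxz](?:[a-z])+)?(?:_[hsxz](?:[a-z])+)*$"
)

# Example strings a DT author might write:
accepted = ["rv64imac", "rv64imafdc", "rv64imafdc_zicsr_zifencei", "rv32imafdcv"]
rejected = ["rv64",        # missing the mandatory "ima" base
            "rv64imadf",   # single-letter extensions out of canonical order
            "rv128imac"]   # only rv32/rv64 are allowed
```

The key change is that single-letter extensions must appear in canonical order, and multi-letter extensions (`z*`, `s*`, `x*`, `h*`) may follow with or without a leading underscore on the first one.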
@@
   description:
     Indicates that the setting of RTC time is allowed by the host CPU.
 
+  wakeup-source: true
+
 required:
   - compatible
   - reg
@@
 userspace tools.
 
 Documentation for Linux bridging is on:
-   http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge
+   https://wiki.linuxfoundation.org/networking/bridge
 
 The bridge-utilities are maintained at:
    git://git.kernel.org/pub/scm/linux/kernel/git/shemminger/bridge-utils.git
@@
 ----
 This driver supports NAPI (Rx polling mode).
 For more information on NAPI, see
-https://www.linuxfoundation.org/collaborate/workgroups/networking/napi
+https://wiki.linuxfoundation.org/networking/napi
 
 
 MACVLAN
Documentation/networking/nf_conntrack-sysctl.rst (+3 -7)
@@
 	default 3
 
 nf_conntrack_sctp_timeout_established - INTEGER (seconds)
-	default 432000 (5 days)
+	default 210
+
+	Default is set to (hb_interval * path_max_retrans + rto_max)
 
 nf_conntrack_sctp_timeout_shutdown_sent - INTEGER (seconds)
 	default 0.3
@@
 
 	This timeout is used to setup conntrack entry on secondary paths.
 	Default is set to hb_interval.
-
-nf_conntrack_sctp_timeout_heartbeat_acked - INTEGER (seconds)
-	default 210
-
-	This timeout is used to setup conntrack entry on secondary paths.
-	Default is set to (hb_interval * path_max_retrans + rto_max)
 
 nf_conntrack_udp_timeout - INTEGER (seconds)
 	default 30
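The new default of 210 can be reproduced from the formula in the hunk, assuming the kernel's stock SCTP protocol parameters (hb_interval 30 s, path_max_retrans 5, rto_max 60 s; these values are assumptions about the defaults, not stated in the hunk itself):

```python
# Assumed Linux SCTP protocol defaults:
hb_interval = 30        # seconds between heartbeats
path_max_retrans = 5    # retransmissions before a path is marked down
rto_max = 60            # upper bound on the retransmission timeout

# Formula from the document: (hb_interval * path_max_retrans + rto_max)
timeout_established = hb_interval * path_max_retrans + rto_max
print(timeout_established)  # 210
```

With those defaults the established-state conntrack entry survives as long as a fully failing SCTP path would take to be declared dead, instead of the old fixed 5 days.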
Documentation/virt/kvm/api.rst (+7 -3)
@@
 state is final and avoid missing dirty pages from another ioctl ordered
 after the bitmap collection.
 
-NOTE: One example of using the backup bitmap is saving arm64 vgic/its
-tables through KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_SAVE_TABLES} command on
-KVM device "kvm-arm-vgic-its" when dirty ring is enabled.
+NOTE: Multiple examples of using the backup bitmap: (1) save vgic/its
+tables through command KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_SAVE_TABLES} on
+KVM device "kvm-arm-vgic-its". (2) restore vgic/its tables through
+command KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_RESTORE_TABLES} on KVM device
+"kvm-arm-vgic-its". VGICv3 LPI pending status is restored. (3) save
+vgic3 pending table through KVM_DEV_ARM_VGIC_{GRP_CTRL, SAVE_PENDING_TABLES}
+command on KVM device "kvm-arm-vgic-v3".
 
 8.30 KVM_CAP_XEN_HVM
 --------------------
Documentation/x86/amd-memory-encryption.rst (+36)
@@
 not enable SME, then Linux will not be able to activate memory encryption, even
 if configured to do so by default or the mem_encrypt=on command line parameter
 is specified.
+
+Secure Nested Paging (SNP)
+==========================
+
+SEV-SNP introduces new features (SEV_FEATURES[1:63]) which can be enabled
+by the hypervisor for security enhancements. Some of these features need
+guest side implementation to function correctly. The below table lists the
+expected guest behavior with various possible scenarios of guest/hypervisor
+SNP feature support.
+
++-----------------+---------------+---------------+------------------+
+| Feature Enabled | Guest needs   | Guest has     | Guest boot       |
+| by the HV       | implementation| implementation| behaviour        |
++=================+===============+===============+==================+
+| No              | No            | No            | Boot             |
+|                 |               |               |                  |
++-----------------+---------------+---------------+------------------+
+| No              | Yes           | No            | Boot             |
+|                 |               |               |                  |
++-----------------+---------------+---------------+------------------+
+| No              | Yes           | Yes           | Boot             |
+|                 |               |               |                  |
++-----------------+---------------+---------------+------------------+
+| Yes             | No            | No            | Boot with        |
+|                 |               |               | feature enabled  |
++-----------------+---------------+---------------+------------------+
+| Yes             | Yes           | No            | Graceful boot    |
+|                 |               |               | failure          |
++-----------------+---------------+---------------+------------------+
+| Yes             | Yes           | Yes           | Boot with        |
+|                 |               |               | feature enabled  |
++-----------------+---------------+---------------+------------------+
+
+More details in AMD64 APM[1] Vol 2: 15.34.10 SEV_STATUS MSR
+
+[1] https://www.amd.com/system/files/TechDocs/40332.pdf
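The decision table above collapses to a single rule: the only failing combination is a hypervisor-enabled feature that needs guest support the guest lacks. A sketch of the table as a function (the function name and return strings are illustrative, not kernel API):

```python
def snp_boot_outcome(enabled_by_hv, guest_needs_impl, guest_has_impl):
    """The SNP guest/HV compatibility table, row by row."""
    if not enabled_by_hv:
        return "boot"                   # feature off: guest support is irrelevant
    if guest_needs_impl and not guest_has_impl:
        return "graceful boot failure"  # HV enabled a feature the guest can't handle
    return "boot with feature enabled"
```

This is the behavior the boot-time `snp_check_features()` check (added later in this series) enforces by terminating via the GHCB protocol instead of failing in an undefined way.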
MAINTAINERS (+23 -15)
@@
 F:	drivers/dma/ptdma/
 
 AMD SEATTLE DEVICE TREE SUPPORT
-M:	Brijesh Singh <brijeshkumar.singh@amd.com>
 M:	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
 M:	Tom Lendacky <thomas.lendacky@amd.com>
 S:	Supported
@@
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
 X:	drivers/media/i2c/
+F:	arch/arm64/boot/dts/freescale/
+X:	arch/arm64/boot/dts/freescale/fsl-*
+X:	arch/arm64/boot/dts/freescale/qoriq-*
 N:	imx
 N:	mxs
@@
 ARM/Mediatek SoC support
 M:	Matthias Brugger <matthias.bgg@gmail.com>
+R:	AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
+L:	linux-kernel@vger.kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	linux-mediatek@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 W:	https://mtk.wiki.kernel.org/
-C:	irc://chat.freenode.net/linux-mediatek
+C:	irc://irc.libera.chat/linux-mediatek
+F:	arch/arm/boot/dts/mt2*
 F:	arch/arm/boot/dts/mt6*
 F:	arch/arm/boot/dts/mt7*
 F:	arch/arm/boot/dts/mt8*
@@
 F:	arch/arm64/boot/dts/mediatek/
 F:	drivers/soc/mediatek/
 N:	mtk
-N:	mt[678]
+N:	mt[2678]
 K:	mediatek
 
 ARM/Mediatek USB3 PHY DRIVER
@@
 BONDING DRIVER
 M:	Jay Vosburgh <j.vosburgh@gmail.com>
-M:	Veaceslav Falico <vfalico@gmail.com>
 M:	Andy Gospodarek <andy@greyhouse.net>
 L:	netdev@vger.kernel.org
 S:	Supported
@@
 F:	drivers/firmware/efi/test/
 
 EFI VARIABLE FILESYSTEM
-M:	Matthew Garrett <matthew.garrett@nebula.com>
 M:	Jeremy Kerr <jk@ozlabs.org>
 M:	Ard Biesheuvel <ardb@kernel.org>
 L:	linux-efi@vger.kernel.org
@@
 
 EXTRA BOOT CONFIG
 M:	Masami Hiramatsu <mhiramat@kernel.org>
+L:	linux-kernel@vger.kernel.org
+L:	linux-trace-kernel@vger.kernel.org
+Q:	https://patchwork.kernel.org/project/linux-trace-kernel/list/
 S:	Maintained
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
 F:	Documentation/admin-guide/bootconfig.rst
 F:	fs/proc/bootconfig.c
 F:	include/linux/bootconfig.h
@@
 F:	include/linux/fscache*.h
 
 FSCRYPT: FILE SYSTEM LEVEL ENCRYPTION SUPPORT
+M:	Eric Biggers <ebiggers@kernel.org>
 M:	Theodore Y. Ts'o <tytso@mit.edu>
 M:	Jaegeuk Kim <jaegeuk@kernel.org>
-M:	Eric Biggers <ebiggers@kernel.org>
 L:	linux-fscrypt@vger.kernel.org
 S:	Supported
 Q:	https://patchwork.kernel.org/project/linux-fscrypt/list/
-T:	git git://git.kernel.org/pub/scm/fs/fscrypt/fscrypt.git
+T:	git https://git.kernel.org/pub/scm/fs/fscrypt/linux.git
 F:	Documentation/filesystems/fscrypt.rst
 F:	fs/crypto/
-F:	include/linux/fscrypt*.h
+F:	include/linux/fscrypt.h
 F:	include/uapi/linux/fscrypt.h
 
 FSI SUBSYSTEM
@@
 FSVERITY: READ-ONLY FILE-BASED AUTHENTICITY PROTECTION
 M:	Eric Biggers <ebiggers@kernel.org>
 M:	Theodore Y. Ts'o <tytso@mit.edu>
-L:	linux-fscrypt@vger.kernel.org
+L:	fsverity@lists.linux.dev
 S:	Supported
-Q:	https://patchwork.kernel.org/project/linux-fscrypt/list/
-T:	git git://git.kernel.org/pub/scm/fs/fscrypt/fscrypt.git fsverity
+Q:	https://patchwork.kernel.org/project/fsverity/list/
+T:	git https://git.kernel.org/pub/scm/fs/fsverity/linux.git
 F:	Documentation/filesystems/fsverity.rst
 F:	fs/verity/
 F:	include/linux/fsverity.h
@@
 F:	arch/*/*/*/*ftrace*
 F:	arch/*/*/*ftrace*
 F:	include/*/ftrace.h
+F:	samples/ftrace
 
 FUNGIBLE ETHERNET DRIVERS
 M:	Dimitris Michailidis <dmichail@fungible.com>
@@
 NETWORKING [IPv4/IPv6]
 M:	"David S. Miller" <davem@davemloft.net>
-M:	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
 M:	David Ahern <dsahern@kernel.org>
 L:	netdev@vger.kernel.org
 S:	Maintained
@@
 F:	net/netlabel/
 
 NETWORKING [MPTCP]
-M:	Mat Martineau <mathew.j.martineau@linux.intel.com>
 M:	Matthieu Baerts <matthieu.baerts@tessares.net>
 L:	netdev@vger.kernel.org
 L:	mptcp@lists.linux.dev
@@
 M:	Jonas Bonn <jonas@southpole.se>
 M:	Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
 M:	Stafford Horne <shorne@gmail.com>
-L:	openrisc@lists.librecores.org
+L:	linux-openrisc@vger.kernel.org
 S:	Maintained
 W:	http://openrisc.io
 T:	git https://github.com/openrisc/linux.git
@@
 L:	linux-riscv@lists.infradead.org
 S:	Supported
 Q:	https://patchwork.kernel.org/project/linux-riscv/list/
+C:	irc://irc.libera.chat/riscv
 P:	Documentation/riscv/patch-acceptance.rst
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git
 F:	arch/riscv/
@@
 F:	include/target/
 
 SCTP PROTOCOL
-M:	Vlad Yasevich <vyasevich@gmail.com>
 M:	Neil Horman <nhorman@tuxdriver.com>
 M:	Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
+M:	Xin Long <lucien.xin@gmail.com>
 L:	linux-sctp@vger.kernel.org
 S:	Maintained
 W:	http://lksctp.sourceforge.net
@@
 USB WEBCAM GADGET
 M:	Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+M:	Daniel Scally <dan.scally@ideasonboard.com>
 L:	linux-usb@vger.kernel.org
 S:	Maintained
 F:	drivers/usb/gadget/function/*uvc*
@@
 
 clean-files += poly1305-core.S sha256-core.S sha512-core.S
 
+aflags-thumb2-$(CONFIG_THUMB2_KERNEL) := -U__thumb2__ -D__thumb2__=1
+
+AFLAGS_sha256-core.o += $(aflags-thumb2-y)
+AFLAGS_sha512-core.o += $(aflags-thumb2-y)
+
 # massage the perlasm code a bit so we only get the NEON routine if we need it
 poly1305-aflags-$(CONFIG_CPU_V7) := -U__LINUX_ARM_ARCH__ -D__LINUX_ARM_ARCH__=5
 poly1305-aflags-$(CONFIG_KERNEL_MODE_NEON) := -U__LINUX_ARM_ARCH__ -D__LINUX_ARM_ARCH__=7
-AFLAGS_poly1305-core.o += $(poly1305-aflags-y)
+AFLAGS_poly1305-core.o += $(poly1305-aflags-y) $(aflags-thumb2-y)
arch/arm/mm/nommu.c (+1 -1)
@@
 	mpu_setup();
 
 	/* allocate the zero page. */
-	zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+	zero_page = (void *)memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 	if (!zero_page)
 		panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
 		      __func__, PAGE_SIZE, PAGE_SIZE);
@@
 
 		/* uaccess failed, don't leave stale tags */
 		if (num_tags != MTE_GRANULES_PER_PAGE)
-			mte_clear_page_tags(page);
+			mte_clear_page_tags(maddr);
 		set_page_mte_tagged(page);
 
 		kvm_release_pfn_dirty(pfn);
arch/arm64/kvm/vgic/vgic-its.c (+5 -8)
@@
 	       ((u64)ite->irq->intid << KVM_ITS_ITE_PINTID_SHIFT) |
 		ite->collection->collection_id;
 	val = cpu_to_le64(val);
-	return kvm_write_guest_lock(kvm, gpa, &val, ite_esz);
+	return vgic_write_guest_lock(kvm, gpa, &val, ite_esz);
 }
 
 /**
@@
 	       (itt_addr_field << KVM_ITS_DTE_ITTADDR_SHIFT) |
 		(dev->num_eventid_bits - 1));
 	val = cpu_to_le64(val);
-	return kvm_write_guest_lock(kvm, ptr, &val, dte_esz);
+	return vgic_write_guest_lock(kvm, ptr, &val, dte_esz);
 }
 
 /**
@@
 	       ((u64)collection->target_addr << KVM_ITS_CTE_RDBASE_SHIFT) |
 	       collection->collection_id);
 	val = cpu_to_le64(val);
-	return kvm_write_guest_lock(its->dev->kvm, gpa, &val, esz);
+	return vgic_write_guest_lock(its->dev->kvm, gpa, &val, esz);
 }
 
 /*
@@
 	 */
 	val = 0;
 	BUG_ON(cte_esz > sizeof(val));
-	ret = kvm_write_guest_lock(its->dev->kvm, gpa, &val, cte_esz);
+	ret = vgic_write_guest_lock(its->dev->kvm, gpa, &val, cte_esz);
 	return ret;
 }
 
@@
 static int vgic_its_ctrl(struct kvm *kvm, struct vgic_its *its, u64 attr)
 {
 	const struct vgic_its_abi *abi = vgic_its_get_abi(its);
-	struct vgic_dist *dist = &kvm->arch.vgic;
 	int ret = 0;
 
 	if (attr == KVM_DEV_ARM_VGIC_CTRL_INIT) /* Nothing to do */
@@
 		vgic_its_reset(kvm, its);
 		break;
 	case KVM_DEV_ARM_ITS_SAVE_TABLES:
-		dist->save_its_tables_in_progress = true;
 		ret = abi->save_tables(its);
-		dist->save_its_tables_in_progress = false;
 		break;
 	case KVM_DEV_ARM_ITS_RESTORE_TABLES:
 		ret = abi->restore_tables(its);
@@
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
 
-	return dist->save_its_tables_in_progress;
+	return dist->table_write_in_progress;
 }
 
 static int vgic_its_set_attr(struct kvm_device *dev,
arch/arm64/kvm/vgic/vgic-v3.c (+13 -16)
@@
 		if (status) {
 			/* clear consumed data */
 			val &= ~(1 << bit_nr);
-			ret = kvm_write_guest_lock(kvm, ptr, &val, 1);
+			ret = vgic_write_guest_lock(kvm, ptr, &val, 1);
 			if (ret)
 				return ret;
 		}
@@
  * The deactivation of the doorbell interrupt will trigger the
  * unmapping of the associated vPE.
  */
-static void unmap_all_vpes(struct vgic_dist *dist)
+static void unmap_all_vpes(struct kvm *kvm)
 {
-	struct irq_desc *desc;
+	struct vgic_dist *dist = &kvm->arch.vgic;
 	int i;
 
-	for (i = 0; i < dist->its_vm.nr_vpes; i++) {
-		desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
-		irq_domain_deactivate_irq(irq_desc_get_irq_data(desc));
-	}
+	for (i = 0; i < dist->its_vm.nr_vpes; i++)
+		free_irq(dist->its_vm.vpes[i]->irq, kvm_get_vcpu(kvm, i));
 }
 
-static void map_all_vpes(struct vgic_dist *dist)
+static void map_all_vpes(struct kvm *kvm)
 {
-	struct irq_desc *desc;
+	struct vgic_dist *dist = &kvm->arch.vgic;
 	int i;
 
-	for (i = 0; i < dist->its_vm.nr_vpes; i++) {
-		desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
-		irq_domain_activate_irq(irq_desc_get_irq_data(desc), false);
-	}
+	for (i = 0; i < dist->its_vm.nr_vpes; i++)
+		WARN_ON(vgic_v4_request_vpe_irq(kvm_get_vcpu(kvm, i),
+						dist->its_vm.vpes[i]->irq));
 }
 
 /**
@@
 	 * and enabling of the doorbells have already been done.
 	 */
 	if (kvm_vgic_global_state.has_gicv4_1) {
-		unmap_all_vpes(dist);
+		unmap_all_vpes(kvm);
 		vlpi_avail = true;
 	}
@@
 		else
 			val &= ~(1 << bit_nr);
 
-		ret = kvm_write_guest_lock(kvm, ptr, &val, 1);
+		ret = vgic_write_guest_lock(kvm, ptr, &val, 1);
 		if (ret)
 			goto out;
 	}
 
 out:
 	if (vlpi_avail)
-		map_all_vpes(dist);
+		map_all_vpes(kvm);
 
 	return ret;
 }
arch/arm64/kvm/vgic/vgic-v4.c (+6 -2)
@@
 	*val = !!(*ptr & mask);
 }
 
+int vgic_v4_request_vpe_irq(struct kvm_vcpu *vcpu, int irq)
+{
+	return request_irq(irq, vgic_v4_doorbell_handler, 0, "vcpu", vcpu);
+}
+
 /**
  * vgic_v4_init - Initialize the GICv4 data structures
  * @kvm:	Pointer to the VM being initialized
@@
 		irq_flags &= ~IRQ_NOAUTOEN;
 		irq_set_status_flags(irq, irq_flags);
 
-		ret = request_irq(irq, vgic_v4_doorbell_handler,
-				  0, "vcpu", vcpu);
+		ret = vgic_v4_request_vpe_irq(vcpu, irq);
 		if (ret) {
 			kvm_err("failed to allocate vcpu IRQ%d\n", irq);
 			/*
@@
 	return flags;
 }
 
+static inline notrace unsigned long irq_soft_mask_andc_return(unsigned long mask)
+{
+	unsigned long flags = irq_soft_mask_return();
+
+	irq_soft_mask_set(flags & ~mask);
+
+	return flags;
+}
+
 static inline unsigned long arch_local_save_flags(void)
 {
 	return irq_soft_mask_return();
@@
 
 static inline unsigned long arch_local_irq_save(void)
 {
-	return irq_soft_mask_set_return(IRQS_DISABLED);
+	return irq_soft_mask_or_return(IRQS_DISABLED);
 }
 
 static inline bool arch_irqs_disabled_flags(unsigned long flags)
@@
  * is a different soft-masked interrupt pending that requires hard
  * masking.
  */
-static inline bool should_hard_irq_enable(void)
+static inline bool should_hard_irq_enable(struct pt_regs *regs)
 {
 	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) {
-		WARN_ON(irq_soft_mask_return() == IRQS_ENABLED);
+		WARN_ON(irq_soft_mask_return() != IRQS_ALL_DISABLED);
+		WARN_ON(!(get_paca()->irq_happened & PACA_IRQ_HARD_DIS));
 		WARN_ON(mfmsr() & MSR_EE);
 	}
@@
 	 *
 	 * TODO: Add test for 64e
 	 */
-	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !power_pmu_wants_prompt_pmi())
-		return false;
+	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) {
+		if (!power_pmu_wants_prompt_pmi())
+			return false;
+		/*
+		 * If PMIs are disabled then IRQs should be disabled as well,
+		 * so we shouldn't see this condition, check for it just in
+		 * case because we are about to enable PMIs.
+		 */
+		if (WARN_ON_ONCE(regs->softe & IRQS_PMI_DISABLED))
+			return false;
+	}
 
 	if (get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK)
 		return false;
@@
 
 /*
  * Do the hard enabling, only call this if should_hard_irq_enable is true.
+ * This allows PMI interrupts to profile irq handlers.
  */
 static inline void do_hard_irq_enable(void)
 {
-	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) {
-		WARN_ON(irq_soft_mask_return() == IRQS_ENABLED);
-		WARN_ON(get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK);
-		WARN_ON(mfmsr() & MSR_EE);
-	}
 	/*
-	 * This allows PMI interrupts (and watchdog soft-NMIs) through.
-	 * There is no other reason to enable this way.
+	 * Asynch interrupts come in with IRQS_ALL_DISABLED,
+	 * PACA_IRQ_HARD_DIS, and MSR[EE]=0.
 	 */
+	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64))
+		irq_soft_mask_andc_return(IRQS_PMI_DISABLED);
 	get_paca()->irq_happened &= ~PACA_IRQ_HARD_DIS;
 	__hard_irq_enable();
 }
@@
 	return !(regs->msr & MSR_EE);
 }
 
-static __always_inline bool should_hard_irq_enable(void)
+static __always_inline bool should_hard_irq_enable(struct pt_regs *regs)
 {
 	return false;
 }
arch/powerpc/kernel/dbell.c (+1 -1)
@@
 
 	ppc_msgsync();
 
-	if (should_hard_irq_enable())
+	if (should_hard_irq_enable(regs))
 		do_hard_irq_enable();
 
 	kvmppc_clear_host_ipi(smp_processor_id());
arch/powerpc/kernel/head_85xx.S (+2 -1)
@@
  * SPE unavailable trap from kernel - print a message, but let
  * the task use SPE in the kernel until it returns to user mode.
  */
-KernelSPE:
+SYM_FUNC_START_LOCAL(KernelSPE)
 	lwz	r3,_MSR(r1)
 	oris	r3,r3,MSR_SPE@h
 	stw	r3,_MSR(r1)	/* enable use of SPE after return */
@@
 #endif
 	.align	4,0
 
+SYM_FUNC_END(KernelSPE)
 #endif /* CONFIG_SPE */
 
 /*
arch/powerpc/kernel/irq.c (+1 -1)
@@
 	irq = static_call(ppc_get_irq)();
 
 	/* We can hard enable interrupts now to allow perf interrupts */
-	if (should_hard_irq_enable())
+	if (should_hard_irq_enable(regs))
 		do_hard_irq_enable();
 
 	/* And finally process it */
arch/powerpc/kernel/time.c (+1 -1)
@@
 	}
 
 	/* Conditionally hard-enable interrupts. */
-	if (should_hard_irq_enable()) {
+	if (should_hard_irq_enable(regs)) {
 		/*
 		 * Ensure a positive value is written to the decrementer, or
 		 * else some CPUs will continue to take decrementer exceptions.
arch/powerpc/kexec/file_load_64.c (+7 -4)
@@
 	 * linux,drconf-usable-memory properties. Get an approximate on the
 	 * number of usable memory entries and use for FDT size estimation.
 	 */
-	usm_entries = ((memblock_end_of_DRAM() / drmem_lmb_size()) +
-		       (2 * (resource_size(&crashk_res) / drmem_lmb_size())));
-
-	extra_size = (unsigned int)(usm_entries * sizeof(u64));
+	if (drmem_lmb_size()) {
+		usm_entries = ((memory_hotplug_max() / drmem_lmb_size()) +
+			       (2 * (resource_size(&crashk_res) / drmem_lmb_size())));
+		extra_size = (unsigned int)(usm_entries * sizeof(u64));
+	} else {
+		extra_size = 0;
+	}
 
 	/*
 	 * Get the number of CPU nodes in the current DT. This allows to
@@
 	end = (unsigned long)__end_rodata;
 
 	radix__change_memory_range(start, end, _PAGE_WRITE);
+
+	for (start = PAGE_OFFSET; start < (unsigned long)_stext; start += PAGE_SIZE) {
+		end = start + PAGE_SIZE;
+		if (overlaps_interrupt_vector_text(start, end))
+			radix__change_memory_range(start, end, _PAGE_WRITE);
+		else
+			break;
+	}
 }
 
 void radix__mark_initmem_nx(void)
@@
 static unsigned long next_boundary(unsigned long addr, unsigned long end)
 {
 #ifdef CONFIG_STRICT_KERNEL_RWX
+	unsigned long stext_phys;
+
+	stext_phys = __pa_symbol(_stext);
+
+	// Relocatable kernel running at non-zero real address
+	if (stext_phys != 0) {
+		// The end of interrupts code at zero is a rodata boundary
+		unsigned long end_intr = __pa_symbol(__end_interrupts) - stext_phys;
+
+		if (addr < end_intr)
+			return end_intr;
+
+		// Start of relocated kernel text is a rodata boundary
+		if (addr < stext_phys)
+			return stext_phys;
+	}
+
 	if (addr < __pa_symbol(__srwx_boundary))
 		return __pa_symbol(__srwx_boundary);
 #endif
arch/powerpc/perf/imc-pmu.c (+7 -7)
@@
  * Used to avoid races in counting the nest-pmu units during hotplug
  * register and unregister
  */
-static DEFINE_SPINLOCK(nest_init_lock);
+static DEFINE_MUTEX(nest_init_lock);
 static DEFINE_PER_CPU(struct imc_pmu_ref *, local_nest_imc_refc);
 static struct imc_pmu **per_nest_pmu_arr;
 static cpumask_t nest_imc_cpumask;
@@
 static void imc_common_cpuhp_mem_free(struct imc_pmu *pmu_ptr)
 {
 	if (pmu_ptr->domain == IMC_DOMAIN_NEST) {
-		spin_lock(&nest_init_lock);
+		mutex_lock(&nest_init_lock);
 		if (nest_pmus == 1) {
 			cpuhp_remove_state(CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE);
 			kfree(nest_imc_refc);
@@
 
 		if (nest_pmus > 0)
 			nest_pmus--;
-		spin_unlock(&nest_init_lock);
+		mutex_unlock(&nest_init_lock);
 	}
 
 	/* Free core_imc memory */
@@
 		 * rest. To handle the cpuhotplug callback unregister, we track
 		 * the number of nest pmus in "nest_pmus".
 		 */
-		spin_lock(&nest_init_lock);
+		mutex_lock(&nest_init_lock);
 		if (nest_pmus == 0) {
 			ret = init_nest_pmu_ref();
 			if (ret) {
-				spin_unlock(&nest_init_lock);
+				mutex_unlock(&nest_init_lock);
 				kfree(per_nest_pmu_arr);
 				per_nest_pmu_arr = NULL;
 				goto err_free_mem;
@@
 			/* Register for cpu hotplug notification. */
 			ret = nest_pmu_cpumask_init();
 			if (ret) {
-				spin_unlock(&nest_init_lock);
+				mutex_unlock(&nest_init_lock);
 				kfree(nest_imc_refc);
 				kfree(per_nest_pmu_arr);
 				per_nest_pmu_arr = NULL;
@@
 			}
 		}
 		nest_pmus++;
-		spin_unlock(&nest_init_lock);
+		mutex_unlock(&nest_init_lock);
 		break;
 	case IMC_DOMAIN_CORE:
 		ret = core_imc_pmu_cpumask_init();
@@
  */
 enum riscv_isa_ext_key {
 	RISCV_ISA_EXT_KEY_FPU,		/* For 'F' and 'D' */
-	RISCV_ISA_EXT_KEY_ZIHINTPAUSE,
 	RISCV_ISA_EXT_KEY_SVINVAL,
 	RISCV_ISA_EXT_KEY_MAX,
 };
@@
 		return RISCV_ISA_EXT_KEY_FPU;
 	case RISCV_ISA_EXT_d:
 		return RISCV_ISA_EXT_KEY_FPU;
-	case RISCV_ISA_EXT_ZIHINTPAUSE:
-		return RISCV_ISA_EXT_KEY_ZIHINTPAUSE;
 	case RISCV_ISA_EXT_SVINVAL:
 		return RISCV_ISA_EXT_KEY_SVINVAL;
 	default:
arch/riscv/include/asm/vdso/processor.h (+12 -16)
@@
 
 #ifndef __ASSEMBLY__
 
-#include <linux/jump_label.h>
 #include <asm/barrier.h>
-#include <asm/hwcap.h>
 
 static inline void cpu_relax(void)
 {
-	if (!static_branch_likely(&riscv_isa_ext_keys[RISCV_ISA_EXT_KEY_ZIHINTPAUSE])) {
 #ifdef __riscv_muldiv
-		int dummy;
-		/* In lieu of a halt instruction, induce a long-latency stall. */
-		__asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy));
+	int dummy;
+	/* In lieu of a halt instruction, induce a long-latency stall. */
+	__asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy));
 #endif
-	} else {
-		/*
-		 * Reduce instruction retirement.
-		 * This assumes the PC changes.
-		 */
-#ifdef CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE
-		__asm__ __volatile__ ("pause");
+
+#ifdef __riscv_zihintpause
+	/*
+	 * Reduce instruction retirement.
+	 * This assumes the PC changes.
+	 */
+	__asm__ __volatile__ ("pause");
 #else
-		/* Encoding of the pause instruction */
-		__asm__ __volatile__ (".4byte 0x100000F");
+	/* Encoding of the pause instruction */
+	__asm__ __volatile__ (".4byte 0x100000F");
 #endif
-	}
 	barrier();
 }
arch/riscv/kernel/head.S (+1 -1)
@@
 	call soc_early_init
 	tail start_kernel
 
-#if CONFIG_RISCV_BOOT_SPINWAIT
+#ifdef CONFIG_RISCV_BOOT_SPINWAIT
 .Lsecondary_start:
 	/* Set trap vector to spin forever to help debug */
 	la a3, .Lsecondary_park
arch/riscv/kernel/probes/kprobes.c (+18)
@@
 	post_kprobe_handler(p, kcb, regs);
 }
 
+static bool __kprobes arch_check_kprobe(struct kprobe *p)
+{
+	unsigned long tmp  = (unsigned long)p->addr - p->offset;
+	unsigned long addr = (unsigned long)p->addr;
+
+	while (tmp <= addr) {
+		if (tmp == addr)
+			return true;
+
+		tmp += GET_INSN_LENGTH(*(u16 *)tmp);
+	}
+
+	return false;
+}
+
 int __kprobes arch_prepare_kprobe(struct kprobe *p)
 {
 	unsigned long probe_addr = (unsigned long)p->addr;
 
 	if (probe_addr & 0x1)
+		return -EILSEQ;
+
+	if (!arch_check_kprobe(p))
 		return -EILSEQ;
 
 	/* copy instruction */
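The walk in `arch_check_kprobe()` starts at the symbol base (`p->addr - p->offset`) and steps one instruction at a time, accepting the probe only if it lands exactly on an instruction boundary — otherwise a probe placed inside a 4-byte instruction would corrupt it. A sketch of the same walk (the decode rule mimics RISC-V, where an instruction is 2 bytes unless the low two bits of its first halfword are both 1; the dict-based memory model is invented for illustration):

```python
def insn_length(halfword):
    """2-byte compressed insn unless both low bits are set (4-byte insn)."""
    return 2 if (halfword & 0x3) != 0x3 else 4

def check_probe(halfwords, base, probe_addr):
    """halfwords: dict mapping addr -> first 16 bits of the insn at addr.
    Walk forward from base; True iff probe_addr is an insn boundary."""
    addr = base
    while addr <= probe_addr:
        if addr == probe_addr:
            return True
        addr += insn_length(halfwords[addr])
    return False

# 4-byte insn at 0, 2-byte insn at 4, 4-byte insn at 6:
code = {0: 0x0003, 4: 0x0001, 6: 0x0003}
```

With this layout, a probe at 4 or 6 is legal, while a probe at 2 falls inside the first instruction and is rejected (`-EILSEQ` in the kernel code).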
arch/riscv/kernel/probes/simulate-insn.c (+2 -2)
@@
 	u32 rd_index = (opcode >> 7) & 0x1f;
 	u32 rs1_index = (opcode >> 15) & 0x1f;
 
-	ret = rv_insn_reg_set_val(regs, rd_index, addr + 4);
+	ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr);
 	if (!ret)
 		return ret;
 
-	ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr);
+	ret = rv_insn_reg_set_val(regs, rd_index, addr + 4);
 	if (!ret)
 		return ret;
 
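The reordering above matters when `rd` and `rs1` name the same register, e.g. `jalr x5, x5, 0`: writing the return address into `rd` first would clobber the jump base before it is read. A sketch of the fixed ordering (the register-file model and function name are hypothetical, only the ordering mirrors the hunk):

```python
def simulate_jalr(regs, pc, rd, rs1, imm=0):
    """JALR simulation with the fixed ordering: read the jump base from
    rs1 BEFORE writing the link address into rd, so rd == rs1 works."""
    base = regs[rs1]           # read first (the fix)
    regs[rd] = pc + 4          # then write the return address
    return (base + imm) & ~1   # next pc, low bit cleared per the ISA

# rd == rs1 == x5: the jump must go to the OLD x5 value.
regs = {5: 0x2000}
target = simulate_jalr(regs, pc=0x1000, rd=5, rs1=5)
```

With the old ordering the sketch would jump to `pc + 4` instead of the original base whenever `rd == rs1`.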
arch/riscv/kernel/smpboot.c (+2 -1)
@@
 
 void __init smp_prepare_boot_cpu(void)
 {
-	init_cpu_topology();
 }
 
 void __init smp_prepare_cpus(unsigned int max_cpus)
@@
 	int cpuid;
 	int ret;
 	unsigned int curr_cpuid;
+
+	init_cpu_topology();
 
 	curr_cpuid = smp_processor_id();
 	store_cpu_topology(curr_cpuid);
@@
  * Written by Niibe Yutaka and Paul Mundt
  */
 OUTPUT_ARCH(sh)
+#define RUNTIME_DISCARD_EXIT
 #include <asm/thread_info.h>
 #include <asm/cache.h>
 #include <asm/vmlinux.lds.h>
@@
 
 	/* Load the new page-table. */
 	write_cr3(top_level_pgt);
+
+	/*
+	 * Now that the required page table mappings are established and a
+	 * GHCB can be used, check for SNP guest/HV feature compatibility.
+	 */
+	snp_check_features();
 }
 
 static pte_t *split_large_pmd(struct x86_mapping_info *info,
arch/x86/boot/compressed/misc.h (+2)
@@
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 void sev_enable(struct boot_params *bp);
+void snp_check_features(void);
 void sev_es_shutdown_ghcb(void);
 extern bool sev_es_check_ghcb_fault(unsigned long address);
 void snp_set_page_private(unsigned long paddr);
@@
 	if (bp)
 		bp->cc_blob_address = 0;
 }
+static inline void snp_check_features(void) { }
 static inline void sev_es_shutdown_ghcb(void) { }
 static inline bool sev_es_check_ghcb_fault(unsigned long address)
 {
+70
arch/x86/boot/compressed/sev.c
···
 		error("Can't unmap GHCB page");
 }
 
+static void __noreturn sev_es_ghcb_terminate(struct ghcb *ghcb, unsigned int set,
+					     unsigned int reason, u64 exit_info_2)
+{
+	u64 exit_info_1 = SVM_VMGEXIT_TERM_REASON(set, reason);
+
+	vc_ghcb_invalidate(ghcb);
+	ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_TERM_REQUEST);
+	ghcb_set_sw_exit_info_1(ghcb, exit_info_1);
+	ghcb_set_sw_exit_info_2(ghcb, exit_info_2);
+
+	sev_es_wr_ghcb_msr(__pa(ghcb));
+	VMGEXIT();
+
+	while (true)
+		asm volatile("hlt\n" : : : "memory");
+}
+
 bool sev_es_check_ghcb_fault(unsigned long address)
 {
 	/* Check whether the fault was on the GHCB page */
···
 	attrs = 1;
 	if (rmpadjust((unsigned long)&boot_ghcb_page, RMP_PG_SIZE_4K, attrs))
 		sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_NOT_VMPL0);
+}
+
+/*
+ * SNP_FEATURES_IMPL_REQ is the mask of SNP features that will need
+ * guest side implementation for proper functioning of the guest. If any
+ * of these features are enabled in the hypervisor but are lacking guest
+ * side implementation, the behavior of the guest will be undefined. The
+ * guest could fail in non-obvious way making it difficult to debug.
+ *
+ * As the behavior of reserved feature bits is unknown to be on the
+ * safe side add them to the required features mask.
+ */
+#define SNP_FEATURES_IMPL_REQ	(MSR_AMD64_SNP_VTOM |			\
+				 MSR_AMD64_SNP_REFLECT_VC |		\
+				 MSR_AMD64_SNP_RESTRICTED_INJ |		\
+				 MSR_AMD64_SNP_ALT_INJ |		\
+				 MSR_AMD64_SNP_DEBUG_SWAP |		\
+				 MSR_AMD64_SNP_VMPL_SSS |		\
+				 MSR_AMD64_SNP_SECURE_TSC |		\
+				 MSR_AMD64_SNP_VMGEXIT_PARAM |		\
+				 MSR_AMD64_SNP_VMSA_REG_PROTECTION |	\
+				 MSR_AMD64_SNP_RESERVED_BIT13 |		\
+				 MSR_AMD64_SNP_RESERVED_BIT15 |		\
+				 MSR_AMD64_SNP_RESERVED_MASK)
+
+/*
+ * SNP_FEATURES_PRESENT is the mask of SNP features that are implemented
+ * by the guest kernel. As and when a new feature is implemented in the
+ * guest kernel, a corresponding bit should be added to the mask.
+ */
+#define SNP_FEATURES_PRESENT (0)
+
+void snp_check_features(void)
+{
+	u64 unsupported;
+
+	if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+		return;
+
+	/*
+	 * Terminate the boot if hypervisor has enabled any feature lacking
+	 * guest side implementation. Pass on the unsupported features mask through
+	 * EXIT_INFO_2 of the GHCB protocol so that those features can be reported
+	 * as part of the guest boot failure.
+	 */
+	unsupported = sev_status & SNP_FEATURES_IMPL_REQ & ~SNP_FEATURES_PRESENT;
+	if (unsupported) {
+		if (ghcb_version < 2 || (!boot_ghcb && !early_setup_ghcb()))
+			sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED);
+
+		sev_es_ghcb_terminate(boot_ghcb, SEV_TERM_SET_GEN,
+				      GHCB_SNP_UNSUPPORTED, unsupported);
+	}
 }
 
 void sev_enable(struct boot_params *bp)
+1
arch/x86/events/intel/core.c
···
 		break;
 
 	case INTEL_FAM6_SAPPHIRERAPIDS_X:
+	case INTEL_FAM6_EMERALDRAPIDS_X:
 		pmem = true;
 		x86_pmu.late_ack = true;
 		memcpy(hw_cache_event_ids, spr_hw_cache_event_ids, sizeof(hw_cache_event_ids));
···
 #include <asm/mmu.h>
 #include <asm/mpspec.h>
 #include <asm/x86_init.h>
+#include <asm/cpufeature.h>
 
 #ifdef CONFIG_ACPI_APEI
 # include <asm/pgtable_types.h>
···
 
 /* Physical address to resume after wakeup */
 unsigned long acpi_get_wakeup_address(void);
+
+static inline bool acpi_skip_set_wakeup_address(void)
+{
+	return cpu_feature_enabled(X86_FEATURE_XENPV);
+}
+
+#define acpi_skip_set_wakeup_address acpi_skip_set_wakeup_address
 
 /*
  * Check if the CPU can handle C2 and deeper
+24-2
arch/x86/include/asm/debugreg.h
···
 		asm("mov %%db6, %0" :"=r" (val));
 		break;
 	case 7:
-		asm("mov %%db7, %0" :"=r" (val));
+		/*
+		 * Apply __FORCE_ORDER to DR7 reads to forbid re-ordering them
+		 * with other code.
+		 *
+		 * This is needed because a DR7 access can cause a #VC exception
+		 * when running under SEV-ES. Taking a #VC exception is not a
+		 * safe thing to do just anywhere in the entry code and
+		 * re-ordering might place the access into an unsafe location.
+		 *
+		 * This happened in the NMI handler, where the DR7 read was
+		 * re-ordered to happen before the call to sev_es_ist_enter(),
+		 * causing stack recursion.
+		 */
+		asm volatile("mov %%db7, %0" : "=r" (val) : __FORCE_ORDER);
 		break;
 	default:
 		BUG();
···
 		asm("mov %0, %%db6"	::"r" (value));
 		break;
 	case 7:
-		asm("mov %0, %%db7"	::"r" (value));
+		/*
+		 * Apply __FORCE_ORDER to DR7 writes to forbid re-ordering them
+		 * with other code.
+		 *
+		 * While is didn't happen with a DR7 write (see the DR7 read
+		 * comment above which explains where it happened), add the
+		 * __FORCE_ORDER here too to avoid similar problems in the
+		 * future.
+		 */
+		asm volatile("mov %0, %%db7"	::"r" (value), __FORCE_ORDER);
 		break;
 	default:
 		BUG();
···
 
 static void disable_freq_invariance_workfn(struct work_struct *work)
 {
+	int cpu;
+
 	static_branch_disable(&arch_scale_freq_key);
+
+	/*
+	 * Set arch_freq_scale to a default value on all cpus
+	 * This negates the effect of scaling
+	 */
+	for_each_possible_cpu(cpu)
+		per_cpu(arch_freq_scale, cpu) = SCHED_CAPACITY_SCALE;
 }
 
 static DECLARE_WORK(disable_freq_invariance_work,
···
 
 	legacy_pic->init(0);
 
-	for (i = 0; i < nr_legacy_irqs(); i++)
+	for (i = 0; i < nr_legacy_irqs(); i++) {
 		irq_set_chip_and_handler(i, chip, handle_level_irq);
+		irq_set_status_flags(i, IRQ_LEVEL);
+	}
 }
 
 void __init init_IRQ(void)
+9-12
arch/x86/kvm/vmx/vmx.c
···
 {
 	u32 ar;
 
-	if (var->unusable || !var->present)
-		ar = 1 << 16;
-	else {
-		ar = var->type & 15;
-		ar |= (var->s & 1) << 4;
-		ar |= (var->dpl & 3) << 5;
-		ar |= (var->present & 1) << 7;
-		ar |= (var->avl & 1) << 12;
-		ar |= (var->l & 1) << 13;
-		ar |= (var->db & 1) << 14;
-		ar |= (var->g & 1) << 15;
-	}
+	ar = var->type & 15;
+	ar |= (var->s & 1) << 4;
+	ar |= (var->dpl & 3) << 5;
+	ar |= (var->present & 1) << 7;
+	ar |= (var->avl & 1) << 12;
+	ar |= (var->l & 1) << 13;
+	ar |= (var->db & 1) << 14;
+	ar |= (var->g & 1) << 15;
+	ar |= (var->unusable || !var->present) << 16;
 
 	return ar;
 }
···
 	struct blkg_iostat_set *bis;
 	unsigned long flags;
 
+	/* Root-level stats are sourced from system-wide IO stats */
+	if (!cgroup_parent(blkcg->css.cgroup))
+		return;
+
 	cpu = get_cpu();
 	bis = per_cpu_ptr(bio->bi_blkg->iostat_cpu, cpu);
 	flags = u64_stats_update_begin_irqsave(&bis->sync);
+3-2
block/blk-mq.c
···
  * blk_mq_destroy_queue - shutdown a request queue
  * @q: request queue to shutdown
  *
- * This shuts down a request queue allocated by blk_mq_init_queue() and drops
- * the initial reference.  All future requests will failed with -ENODEV.
+ * This shuts down a request queue allocated by blk_mq_init_queue(). All future
+ * requests will be failed with -ENODEV. The caller is responsible for dropping
+ * the reference from blk_mq_init_queue() by calling blk_put_queue().
  *
  * Context: can sleep
  */
···
 	.priority = 0,
 };
 
+#ifndef acpi_skip_set_wakeup_address
+#define acpi_skip_set_wakeup_address() false
+#endif
+
 static int acpi_sleep_prepare(u32 acpi_state)
 {
 #ifdef CONFIG_ACPI_SLEEP
 	unsigned long acpi_wakeup_address;
 
 	/* do we have a wakeup address for S2 and S3? */
-	if (acpi_state == ACPI_STATE_S3) {
+	if (acpi_state == ACPI_STATE_S3 && !acpi_skip_set_wakeup_address()) {
 		acpi_wakeup_address = acpi_get_wakeup_address();
 		if (!acpi_wakeup_address)
 			return -EFAULT;
+28-21
drivers/acpi/video_detect.c
···
 }
 #endif
 
-static bool apple_gmux_backlight_present(void)
-{
-	struct acpi_device *adev;
-	struct device *dev;
-
-	adev = acpi_dev_get_first_match_dev(GMUX_ACPI_HID, NULL, -1);
-	if (!adev)
-		return false;
-
-	dev = acpi_get_first_physical_node(adev);
-	if (!dev)
-		return false;
-
-	/*
-	 * drivers/platform/x86/apple-gmux.c only supports old style
-	 * Apple GMUX with an IO-resource.
-	 */
-	return pnp_get_resource(to_pnp_dev(dev), IORESOURCE_IO, 0) != NULL;
-}
-
 /* Force to use vendor driver when the ACPI device is known to be
  * buggy */
 static int video_detect_force_vendor(const struct dmi_system_id *d)
···
 	},
 	{
 	 .callback = video_detect_force_native,
+	 /* Asus U46E */
+	 .matches = {
+		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer Inc."),
+		DMI_MATCH(DMI_PRODUCT_NAME, "U46E"),
+		},
+	},
+	{
+	 .callback = video_detect_force_native,
 	 /* Asus UX303UB */
 	 .matches = {
 		DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
 		DMI_MATCH(DMI_PRODUCT_NAME, "UX303UB"),
+		},
+	},
+	{
+	 .callback = video_detect_force_native,
+	 /* HP EliteBook 8460p */
+	 .matches = {
+		DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+		DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 8460p"),
+		},
+	},
+	{
+	 .callback = video_detect_force_native,
+	 /* HP Pavilion g6-1d80nr / B4U19UA */
+	 .matches = {
+		DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+		DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion g6 Notebook PC"),
+		DMI_MATCH(DMI_PRODUCT_SKU, "B4U19UA"),
 		},
 	},
 	{
···
 {
 	static DEFINE_MUTEX(init_mutex);
 	static bool nvidia_wmi_ec_present;
+	static bool apple_gmux_present;
 	static bool native_available;
 	static bool init_done;
 	static long video_caps;
···
 				    ACPI_UINT32_MAX, find_video, NULL,
 				    &video_caps, NULL);
 		nvidia_wmi_ec_present = nvidia_wmi_ec_supported();
+		apple_gmux_present = apple_gmux_detect(NULL, NULL);
 		init_done = true;
 	}
 	if (native)
···
 	if (nvidia_wmi_ec_present)
 		return acpi_backlight_nvidia_wmi_ec;
 
-	if (apple_gmux_backlight_present())
+	if (apple_gmux_present)
 		return acpi_backlight_apple_gmux;
 
 	/* Use ACPI video if available, except when native should be preferred. */
···
 static DEFINE_MUTEX(device_ctls_mutex);
 static LIST_HEAD(edac_device_list);
 
+/* Default workqueue processing interval on this instance, in msecs */
+#define DEFAULT_POLL_INTERVAL 1000
+
 #ifdef CONFIG_EDAC_DEBUG
 static void edac_device_dump_device(struct edac_device_ctl_info *edac_dev)
 {
···
 	 * whole one second to save timers firing all over the period
 	 * between integral seconds
 	 */
-	if (edac_dev->poll_msec == 1000)
+	if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL)
 		edac_queue_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay));
 	else
 		edac_queue_work(&edac_dev->work, edac_dev->delay);
···
 	 * timers firing on sub-second basis, while they are happy
 	 * to fire together on the 1 second exactly
 	 */
-	if (edac_dev->poll_msec == 1000)
+	if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL)
 		edac_queue_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay));
 	else
 		edac_queue_work(&edac_dev->work, edac_dev->delay);
···
 	edac_dev->delay = msecs_to_jiffies(msec);
 
 	/* See comment in edac_device_workq_setup() above */
-	if (edac_dev->poll_msec == 1000)
+	if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL)
 		edac_mod_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay));
 	else
 		edac_mod_work(&edac_dev->work, edac_dev->delay);
···
 		/* This instance is NOW RUNNING */
 		edac_dev->op_state = OP_RUNNING_POLL;
 
-		/*
-		 * enable workq processing on this instance,
-		 * default = 1000 msec
-		 */
-		edac_device_workq_setup(edac_dev, 1000);
+		edac_device_workq_setup(edac_dev, edac_dev->poll_msec ?: DEFAULT_POLL_INTERVAL);
 	} else {
 		edac_dev->op_state = OP_RUNNING_INTERRUPT;
 	}
···
 
 	r = container_of(resource, struct inbound_transaction_resource,
 			 resource);
-	if (is_fcp_request(r->request))
+	if (is_fcp_request(r->request)) {
+		kfree(r->data);
 		goto out;
+	}
 
 	if (a->length != fw_get_response_length(r->request)) {
 		ret = -EINVAL;
+2
drivers/firmware/efi/efi.c
···
 	/* first try to find a slot in an existing linked list entry */
 	for (prsv = efi_memreserve_root->next; prsv; ) {
 		rsv = memremap(prsv, sizeof(*rsv), MEMREMAP_WB);
+		if (!rsv)
+			return -ENOMEM;
 		index = atomic_fetch_add_unless(&rsv->count, 1, rsv->size);
 		if (index < rsv->size) {
 			rsv->entry[index].base = addr;
+1-1
drivers/firmware/efi/memattr.c
···
 		return -ENOMEM;
 	}
 
-	if (tbl->version > 1) {
+	if (tbl->version > 2) {
 		pr_warn("Unexpected EFI Memory Attributes table version %d\n",
 			tbl->version);
 		goto unmap;
+12-5
drivers/fpga/intel-m10-bmc-sec-update.c
···
 	len = scnprintf(buf, SEC_UPDATE_LEN_MAX, "secure-update%d",
 			sec->fw_name_id);
 	sec->fw_name = kmemdup_nul(buf, len, GFP_KERNEL);
-	if (!sec->fw_name)
-		return -ENOMEM;
+	if (!sec->fw_name) {
+		ret = -ENOMEM;
+		goto fw_name_fail;
+	}
 
 	fwl = firmware_upload_register(THIS_MODULE, sec->dev, sec->fw_name,
 				       &m10bmc_ops, sec);
 	if (IS_ERR(fwl)) {
 		dev_err(sec->dev, "Firmware Upload driver failed to start\n");
-		kfree(sec->fw_name);
-		xa_erase(&fw_upload_xa, sec->fw_name_id);
-		return PTR_ERR(fwl);
+		ret = PTR_ERR(fwl);
+		goto fw_uploader_fail;
 	}
 
 	sec->fwl = fwl;
 	return 0;
+
+fw_uploader_fail:
+	kfree(sec->fw_name);
+fw_name_fail:
+	xa_erase(&fw_upload_xa, sec->fw_name_id);
+	return ret;
 }
 
 static int m10bmc_sec_remove(struct platform_device *pdev)
+2-2
drivers/fpga/stratix10-soc.c
···
 	/* Allocate buffers from the service layer's pool. */
 	for (i = 0; i < NUM_SVC_BUFS; i++) {
 		kbuf = stratix10_svc_allocate_memory(priv->chan, SVC_BUF_SIZE);
-		if (!kbuf) {
+		if (IS_ERR(kbuf)) {
 			s10_free_buffers(mgr);
-			ret = -ENOMEM;
+			ret = PTR_ERR(kbuf);
 			goto init_done;
 		}
 
···
 		dev_dbg(&adev->dev, "IRQ %d already in use\n", irq);
 	}
 
-	if (wake_capable)
+	/* avoid suspend issues with GPIOs when systems are using S3 */
+	if (wake_capable && acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)
 		*wake_capable = info.wake_capable;
 
 	return irq;
+2-2
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
···
 	 * zero here */
 	WARN_ON(simd != 0);
 
-	/* type 2 wave data */
-	dst[(*no_fields)++] = 2;
+	/* type 3 wave data */
+	dst[(*no_fields)++] = 3;
 	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_STATUS);
 	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_PC_LO);
 	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_PC_HI);
+1
drivers/gpu/drm/amd/amdgpu/imu_v11_0.c
···
 MODULE_FIRMWARE("amdgpu/gc_11_0_1_imu.bin");
 MODULE_FIRMWARE("amdgpu/gc_11_0_2_imu.bin");
 MODULE_FIRMWARE("amdgpu/gc_11_0_3_imu.bin");
+MODULE_FIRMWARE("amdgpu/gc_11_0_4_imu.bin");
 
 static int imu_v11_0_init_microcode(struct amdgpu_device *adev)
 {
+2-1
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
···
 MODULE_FIRMWARE("amdgpu/gc_11_0_2_mes1.bin");
 MODULE_FIRMWARE("amdgpu/gc_11_0_3_mes.bin");
 MODULE_FIRMWARE("amdgpu/gc_11_0_3_mes1.bin");
+MODULE_FIRMWARE("amdgpu/gc_11_0_4_mes.bin");
+MODULE_FIRMWARE("amdgpu/gc_11_0_4_mes1.bin");
 
 static int mes_v11_0_hw_fini(void *handle);
 static int mes_v11_0_kiq_hw_init(struct amdgpu_device *adev);
···
 	mes_add_queue_pkt.trap_handler_addr = input->tba_addr;
 	mes_add_queue_pkt.tma_addr = input->tma_addr;
 	mes_add_queue_pkt.is_kfd_process = input->is_kfd_process;
-	mes_add_queue_pkt.trap_en = 1;
 
 	/* For KFD, gds_size is re-used for queue size (needed in MES for AQL queues) */
 	mes_add_queue_pkt.is_aql_queue = input->is_aql_queue;
···
 static int dm_early_init(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	struct amdgpu_mode_info *mode_info = &adev->mode_info;
+	struct atom_context *ctx = mode_info->atom_context;
+	int index = GetIndexIntoMasterTable(DATA, Object_Header);
+	u16 data_offset;
+
+	/* if there is no object header, skip DM */
+	if (!amdgpu_atom_parse_data_header(ctx, index, NULL, NULL, NULL, &data_offset)) {
+		adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK;
+		dev_info(adev->dev, "No object header, skipping DM\n");
+		return -ENOENT;
+	}
 
 	switch (adev->asic_type) {
 #if defined(CONFIG_DRM_AMD_DC_SI)
···
 	if (!dm_old_crtc_state->stream)
 		goto skip_modeset;
 
+	/* Unset freesync video if it was active before */
+	if (dm_old_crtc_state->freesync_config.state == VRR_STATE_ACTIVE_FIXED) {
+		dm_new_crtc_state->freesync_config.state = VRR_STATE_INACTIVE;
+		dm_new_crtc_state->freesync_config.fixed_refresh_in_uhz = 0;
+	}
+
+	/* Now check if we should set freesync video mode */
 	if (amdgpu_freesync_vid_mode && dm_new_crtc_state->stream &&
 	    is_timing_unchanged_for_freesync(new_crtc_state,
 					     old_crtc_state)) {
···
 	bool lock_and_validation_needed = false;
 	struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state;
 #if defined(CONFIG_DRM_AMD_DC_DCN)
+	struct drm_dp_mst_topology_mgr *mgr;
+	struct drm_dp_mst_topology_state *mst_state;
 	struct dsc_mst_fairness_vars vars[MAX_PIPES];
 #endif
···
 
 		lock_and_validation_needed = true;
 	}
+
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+	/* set the slot info for each mst_state based on the link encoding format */
+	for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {
+		struct amdgpu_dm_connector *aconnector;
+		struct drm_connector *connector;
+		struct drm_connector_list_iter iter;
+		u8 link_coding_cap;
+
+		drm_connector_list_iter_begin(dev, &iter);
+		drm_for_each_connector_iter(connector, &iter) {
+			if (connector->index == mst_state->mgr->conn_base_id) {
+				aconnector = to_amdgpu_dm_connector(connector);
+				link_coding_cap = dc_link_dp_mst_decide_link_encoding_format(aconnector->dc_link);
+				drm_dp_mst_update_slots(mst_state, link_coding_cap);
+
+				break;
+			}
+		}
+		drm_connector_list_iter_end(&iter);
+	}
+#endif
 
 	/**
 	 * Streams and planes are reset when there are changes that affect
···
 	if (IS_ERR(mst_state))
 		return PTR_ERR(mst_state);
 
-	mst_state->pbn_div = dm_mst_get_pbn_divider(dc_link);
-#if defined(CONFIG_DRM_AMD_DC_DCN)
-	drm_dp_mst_update_slots(mst_state, dc_link_dp_mst_decide_link_encoding_format(dc_link));
-#endif
-
 	/* Set up params */
 	for (i = 0; i < dc_state->stream_count; i++) {
 		struct dc_dsc_policy dsc_policy = {0};
+12-2
drivers/gpu/drm/amd/display/dc/core/dc_link.c
···
 	struct fixed31_32 avg_time_slots_per_mtp = dc_fixpt_from_int(0);
 	int i;
 	bool mst_mode = (link->type == dc_connection_mst_branch);
+	/* adjust for drm changes*/
+	bool update_drm_mst_state = true;
 	const struct link_hwss *link_hwss = get_link_hwss(link, &pipe_ctx->link_res);
 	const struct dc_link_settings empty_link_settings = {0};
 	DC_LOGGER_INIT(link->ctx->logger);
+
 
 	/* deallocate_mst_payload is called before disable link. When mode or
 	 * disable/enable monitor, new stream is created which is not in link
···
 			&empty_link_settings,
 			avg_time_slots_per_mtp);
 
-	if (mst_mode) {
+	if (mst_mode || update_drm_mst_state) {
 		/* when link is in mst mode, reply on mst manager to remove
 		 * payload
 		 */
···
 				stream->ctx,
 				stream);
 
+		if (!update_drm_mst_state)
+			dm_helpers_dp_mst_send_payload_allocation(
+					stream->ctx,
+					stream,
+					false);
+	}
+
+	if (update_drm_mst_state)
 		dm_helpers_dp_mst_send_payload_allocation(
 				stream->ctx,
 				stream,
 				false);
-	}
 
 	return DC_OK;
 }
···
 	if (dmub->hw_funcs.reset)
 		dmub->hw_funcs.reset(dmub);
 
+	/* reset the cache of the last wptr as well now that hw is reset */
+	dmub->inbox1_last_wptr = 0;
+
 	cw0.offset.quad_part = inst_fb->gpu_addr;
 	cw0.region.base = DMUB_CW0_BASE;
 	cw0.region.top = cw0.region.base + inst_fb->size - 1;
···
 
 	if (dmub->hw_funcs.reset)
 		dmub->hw_funcs.reset(dmub);
+
+	/* mailboxes have been reset in hw, so reset the sw state as well */
+	dmub->inbox1_last_wptr = 0;
+	dmub->inbox1_rb.wrpt = 0;
+	dmub->inbox1_rb.rptr = 0;
+	dmub->outbox0_rb.wrpt = 0;
+	dmub->outbox0_rb.rptr = 0;
+	dmub->outbox1_rb.wrpt = 0;
+	dmub->outbox1_rb.rptr = 0;
 
 	dmub->hw_init = false;
 
···
 	}
 
 	/*
+	 * For SMU 13.0.4/11, PMFW will handle the features disablement properly
+	 * for gpu reset case. Driver involvement is unnecessary.
+	 */
+	if (amdgpu_in_reset(adev)) {
+		switch (adev->ip_versions[MP1_HWIP][0]) {
+		case IP_VERSION(13, 0, 4):
+		case IP_VERSION(13, 0, 11):
+			return 0;
+		default:
+			break;
+		}
+	}
+
+	/*
 	 * For gpu reset, runpm and hibernation through BACO,
 	 * BACO feature has to be kept enabled.
 	 */
···
 	.fb_imageblit	= drm_fbdev_fb_imageblit,
 };
 
-static struct fb_deferred_io drm_fbdev_defio = {
-	.delay		= HZ / 20,
-	.deferred_io	= drm_fb_helper_deferred_io,
-};
-
 /*
  * This function uses the client API to create a framebuffer backed by a dumb buffer.
  */
···
 			return -ENOMEM;
 		fbi->flags |= FBINFO_VIRTFB | FBINFO_READS_FAST;
 
-		fbi->fbdefio = &drm_fbdev_defio;
-		fb_deferred_io_init(fbi);
+		/* Set a default deferred I/O handler */
+		fb_helper->fbdefio.delay = HZ / 20;
+		fb_helper->fbdefio.deferred_io = drm_fb_helper_deferred_io;
+
+		fbi->fbdefio = &fb_helper->fbdefio;
+		ret = fb_deferred_io_init(fbi);
+		if (ret)
+			return ret;
 	} else {
 		/* buffer is mapped for HW framebuffer */
 		ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
+54-22
drivers/gpu/drm/drm_vma_manager.c
···
 }
 EXPORT_SYMBOL(drm_vma_offset_remove);
 
-/**
- * drm_vma_node_allow - Add open-file to list of allowed users
- * @node: Node to modify
- * @tag: Tag of file to remove
- *
- * Add @tag to the list of allowed open-files for this node. If @tag is
- * already on this list, the ref-count is incremented.
- *
- * The list of allowed-users is preserved across drm_vma_offset_add() and
- * drm_vma_offset_remove() calls. You may even call it if the node is currently
- * not added to any offset-manager.
- *
- * You must remove all open-files the same number of times as you added them
- * before destroying the node. Otherwise, you will leak memory.
- *
- * This is locked against concurrent access internally.
- *
- * RETURNS:
- * 0 on success, negative error code on internal failure (out-of-mem)
- */
-int drm_vma_node_allow(struct drm_vma_offset_node *node, struct drm_file *tag)
+static int vma_node_allow(struct drm_vma_offset_node *node,
+			  struct drm_file *tag, bool ref_counted)
 {
 	struct rb_node **iter;
 	struct rb_node *parent = NULL;
···
 		entry = rb_entry(*iter, struct drm_vma_offset_file, vm_rb);
 
 		if (tag == entry->vm_tag) {
-			entry->vm_count++;
+			if (ref_counted)
+				entry->vm_count++;
 			goto unlock;
 		} else if (tag > entry->vm_tag) {
 			iter = &(*iter)->rb_right;
···
 	kfree(new);
 	return ret;
 }
+
+/**
+ * drm_vma_node_allow - Add open-file to list of allowed users
+ * @node: Node to modify
+ * @tag: Tag of file to remove
+ *
+ * Add @tag to the list of allowed open-files for this node. If @tag is
+ * already on this list, the ref-count is incremented.
+ *
+ * The list of allowed-users is preserved across drm_vma_offset_add() and
+ * drm_vma_offset_remove() calls. You may even call it if the node is currently
+ * not added to any offset-manager.
+ *
+ * You must remove all open-files the same number of times as you added them
+ * before destroying the node. Otherwise, you will leak memory.
+ *
+ * This is locked against concurrent access internally.
+ *
+ * RETURNS:
+ * 0 on success, negative error code on internal failure (out-of-mem)
+ */
+int drm_vma_node_allow(struct drm_vma_offset_node *node, struct drm_file *tag)
+{
+	return vma_node_allow(node, tag, true);
+}
 EXPORT_SYMBOL(drm_vma_node_allow);
+
+/**
+ * drm_vma_node_allow_once - Add open-file to list of allowed users
+ * @node: Node to modify
+ * @tag: Tag of file to remove
+ *
+ * Add @tag to the list of allowed open-files for this node.
+ *
+ * The list of allowed-users is preserved across drm_vma_offset_add() and
+ * drm_vma_offset_remove() calls. You may even call it if the node is currently
+ * not added to any offset-manager.
+ *
+ * This is not ref-counted unlike drm_vma_node_allow() hence drm_vma_node_revoke()
+ * should only be called once after this.
+ *
+ * This is locked against concurrent access internally.
+ *
+ * RETURNS:
+ * 0 on success, negative error code on internal failure (out-of-mem)
+ */
+int drm_vma_node_allow_once(struct drm_vma_offset_node *node, struct drm_file *tag)
+{
+	return vma_node_allow(node, tag, false);
+}
+EXPORT_SYMBOL(drm_vma_node_allow_once);
 
 /**
  * drm_vma_node_revoke - Remove open-file from list of allowed users
···
 	vm = ctx->vm;
 	GEM_BUG_ON(!vm);
 
-	err = xa_alloc(&file_priv->vm_xa, &id, vm, xa_limit_32b, GFP_KERNEL);
-	if (err)
-		return err;
-
+	/*
+	 * Get a reference for the allocated handle. Once the handle is
+	 * visible in the vm_xa table, userspace could try to close it
+	 * from under our feet, so we need to hold the extra reference
+	 * first.
+	 */
 	i915_vm_get(vm);
+
+	err = xa_alloc(&file_priv->vm_xa, &id, vm, xa_limit_32b, GFP_KERNEL);
+	if (err) {
+		i915_vm_put(vm);
+		return err;
+	}
 
 	GEM_BUG_ON(id == 0); /* reserved for invalid/unassigned ppgtt */
 	args->value = id;
···
 	spin_unlock(&obj->vma.lock);
 
 	obj->tiling_and_stride = tiling | stride;
-	i915_gem_object_unlock(obj);
-
-	/* Force the fence to be reacquired for GTT access */
-	i915_gem_object_release_mmap_gtt(obj);
 
 	/* Try to preallocate memory required to save swizzling on put-pages */
 	if (i915_gem_object_needs_bit17_swizzle(obj)) {
···
 		bitmap_free(obj->bit_17);
 		obj->bit_17 = NULL;
 	}
+
+	i915_gem_object_unlock(obj);
+
+	/* Force the fence to be reacquired for GTT access */
+	i915_gem_object_release_mmap_gtt(obj);
 
 	return 0;
 }
···
 		goto next_context;
 
 	guilty = false;
-	rq = intel_context_find_active_request(ce);
+	rq = intel_context_get_active_request(ce);
 	if (!rq) {
 		head = ce->ring->tail;
 		goto out_replay;
···
 	head = intel_ring_wrap(ce->ring, rq->head);
 
 	__i915_request_reset(rq, guilty);
+	i915_request_put(rq);
 out_replay:
 	guc_reset_state(ce, head, guilty);
 next_context:
···
 
 	xa_lock_irqsave(&guc->context_lookup, flags);
 	xa_for_each(&guc->context_lookup, index, ce) {
+		bool found;
+
 		if (!kref_get_unless_zero(&ce->ref))
 			continue;
 
···
 			goto next;
 		}
 
+		found = false;
+		spin_lock(&ce->guc_state.lock);
 		list_for_each_entry(rq, &ce->guc_state.requests, sched.link) {
 			if (i915_test_request_state(rq) != I915_REQUEST_ACTIVE)
 				continue;
 
+			found = true;
+			break;
+		}
+		spin_unlock(&ce->guc_state.lock);
+
+		if (found) {
 			intel_engine_set_hung_context(engine, ce);
 
 			/* Can only cope with one hang at a time... */
···
 			xa_lock(&guc->context_lookup);
 			goto done;
 		}
+
 next:
 		intel_context_put(ce);
 		xa_lock(&guc->context_lookup);
+6-27
drivers/gpu/drm/i915/i915_gpu_error.c
···
 {
 	struct intel_engine_capture_vma *capture = NULL;
 	struct intel_engine_coredump *ee;
-	struct intel_context *ce;
+	struct intel_context *ce = NULL;
 	struct i915_request *rq = NULL;
-	unsigned long flags;
 
 	ee = intel_engine_coredump_alloc(engine, ALLOW_FAIL, dump_flags);
 	if (!ee)
 		return NULL;
 
-	ce = intel_engine_get_hung_context(engine);
-	if (ce) {
-		intel_engine_clear_hung_context(engine);
-		rq = intel_context_find_active_request(ce);
-		if (!rq || !i915_request_started(rq))
-			goto no_request_capture;
-	} else {
-		/*
-		 * Getting here with GuC enabled means it is a forced error capture
-		 * with no actual hang. So, no need to attempt the execlist search.
-		 */
-		if (!intel_uc_uses_guc_submission(&engine->gt->uc)) {
-			spin_lock_irqsave(&engine->sched_engine->lock, flags);
-			rq = intel_engine_execlist_find_hung_request(engine);
-			spin_unlock_irqrestore(&engine->sched_engine->lock,
-					       flags);
-		}
-	}
-	if (rq)
-		rq = i915_request_get_rcu(rq);
-
-	if (!rq)
+	intel_engine_get_hung_entity(engine, &ce, &rq);
+	if (!rq || !i915_request_started(rq))
 		goto no_request_capture;
 
 	capture = intel_engine_coredump_add_request(ee, rq, ATOMIC_MAYFAIL);
-	if (!capture) {
-		i915_request_put(rq);
+	if (!capture)
 		goto no_request_capture;
-	}
 	if (dump_flags & CORE_DUMP_FLAG_IS_GUC_CAPTURE)
 		intel_guc_capture_get_matching_node(engine->gt, ee, ce);
 
···
 	return ee;
 
 no_request_capture:
+	if (rq)
+		i915_request_put(rq);
 	kfree(ee);
 	return NULL;
 }
···
+/*
+ * Copyright 2018 Red Hat Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include "gf100.h"
+#include "ram.h"
+
+bool
+tu102_fb_vpr_scrub_required(struct nvkm_fb *fb)
+{
+	return (nvkm_rd32(fb->subdev.device, 0x1fa80c) & 0x00000010) != 0;
+}
+
+static const struct nvkm_fb_func
+tu102_fb = {
+	.dtor = gf100_fb_dtor,
+	.oneinit = gf100_fb_oneinit,
+	.init = gm200_fb_init,
+	.init_page = gv100_fb_init_page,
+	.init_unkn = gp100_fb_init_unkn,
+	.sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init,
+	.vpr.scrub_required = tu102_fb_vpr_scrub_required,
+	.vpr.scrub = gp102_fb_vpr_scrub,
+	.ram_new = gp100_ram_new,
+	.default_bigpage = 16,
+};
+
+int
+tu102_fb_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, struct nvkm_fb **pfb)
+{
+	return gp102_fb_new_(&tu102_fb, device, type, inst, pfb);
+}
+
+MODULE_FIRMWARE("nvidia/tu102/nvdec/scrubber.bin");
+MODULE_FIRMWARE("nvidia/tu104/nvdec/scrubber.bin");
+MODULE_FIRMWARE("nvidia/tu106/nvdec/scrubber.bin");
+MODULE_FIRMWARE("nvidia/tu116/nvdec/scrubber.bin");
+MODULE_FIRMWARE("nvidia/tu117/nvdec/scrubber.bin");
···351351352352 if (dev->flags & ACCESS_NO_IRQ_SUSPEND) {353353 dev_pm_set_driver_flags(&pdev->dev,354354- DPM_FLAG_SMART_PREPARE |355355- DPM_FLAG_MAY_SKIP_RESUME);354354+ DPM_FLAG_SMART_PREPARE);356355 } else {357356 dev_pm_set_driver_flags(&pdev->dev,358357 DPM_FLAG_SMART_PREPARE |359359- DPM_FLAG_SMART_SUSPEND |360360- DPM_FLAG_MAY_SKIP_RESUME);358358+ DPM_FLAG_SMART_SUSPEND);361359 }362360363361 device_enable_async_suspend(&pdev->dev);···417419 */418420 return !has_acpi_companion(dev);419421}420420-421421-static void dw_i2c_plat_complete(struct device *dev)422422-{423423- /*424424- * The device can only be in runtime suspend at this point if it has not425425- * been resumed throughout the ending system suspend/resume cycle, so if426426- * the platform firmware might mess up with it, request the runtime PM427427- * framework to resume it.428428- */429429- if (pm_runtime_suspended(dev) && pm_resume_via_firmware())430430- pm_request_resume(dev);431431-}432422#else433423#define dw_i2c_plat_prepare NULL434434-#define dw_i2c_plat_complete NULL435424#endif436425437426#ifdef CONFIG_PM···468483469484static const struct dev_pm_ops dw_i2c_dev_pm_ops = {470485 .prepare = dw_i2c_plat_prepare,471471- .complete = dw_i2c_plat_complete,472486 SET_LATE_SYSTEM_SLEEP_PM_OPS(dw_i2c_plat_suspend, dw_i2c_plat_resume)473487 SET_RUNTIME_PM_OPS(dw_i2c_plat_runtime_suspend, dw_i2c_plat_runtime_resume, NULL)474488};
+2-2
drivers/i2c/busses/i2c-mxs.c
···826826 /* Setup the DMA */827827 i2c->dmach = dma_request_chan(dev, "rx-tx");828828 if (IS_ERR(i2c->dmach)) {829829- dev_err(dev, "Failed to request dma\n");830830- return PTR_ERR(i2c->dmach);829829+ return dev_err_probe(dev, PTR_ERR(i2c->dmach),830830+ "Failed to request dma\n");831831 }832832833833 platform_set_drvdata(pdev, i2c);
+22-22
drivers/i2c/busses/i2c-rk3x.c
···8080#define DEFAULT_SCL_RATE (100 * 1000) /* Hz */81818282/**8383- * struct i2c_spec_values:8383+ * struct i2c_spec_values - I2C specification values for various modes8484 * @min_hold_start_ns: min hold time (repeated) START condition8585 * @min_low_ns: min LOW period of the SCL clock8686 * @min_high_ns: min HIGH period of the SCL cloc···136136};137137138138/**139139- * struct rk3x_i2c_calced_timings:139139+ * struct rk3x_i2c_calced_timings - calculated V1 timings140140 * @div_low: Divider output for low141141 * @div_high: Divider output for high142142 * @tuning: Used to adjust setup/hold data time,···159159};160160161161/**162162- * struct rk3x_i2c_soc_data:162162+ * struct rk3x_i2c_soc_data - SOC-specific data163163 * @grf_offset: offset inside the grf regmap for setting the i2c type164164 * @calc_timings: Callback function for i2c timing information calculated165165 */···239239}240240241241/**242242- * Generate a START condition, which triggers a REG_INT_START interrupt.242242+ * rk3x_i2c_start - Generate a START condition, which triggers a REG_INT_START interrupt.243243+ * @i2c: target controller data243244 */244245static void rk3x_i2c_start(struct rk3x_i2c *i2c)245246{···259258}260259261260/**262262- * Generate a STOP condition, which triggers a REG_INT_STOP interrupt.263263- *261261+ * rk3x_i2c_stop - Generate a STOP condition, which triggers a REG_INT_STOP interrupt.262262+ * @i2c: target controller data264263 * @error: Error code to return in rk3x_i2c_xfer265264 */266265static void rk3x_i2c_stop(struct rk3x_i2c *i2c, int error)···299298}300299301300/**302302- * Setup a read according to i2c->msg301301+ * rk3x_i2c_prepare_read - Setup a read according to i2c->msg302302+ * @i2c: target controller data303303 */304304static void rk3x_i2c_prepare_read(struct rk3x_i2c *i2c)305305{···331329}332330333331/**334334- * Fill the transmit buffer with data from i2c->msg332332+ * rk3x_i2c_fill_transmit_buf - Fill the transmit buffer with data from i2c->msg333333+ * 
@i2c: target controller data335334 */336335static void rk3x_i2c_fill_transmit_buf(struct rk3x_i2c *i2c)337336{···535532}536533537534/**538538- * Get timing values of I2C specification539539- *535535+ * rk3x_i2c_get_spec - Get timing values of I2C specification540536 * @speed: Desired SCL frequency541537 *542542- * Returns: Matched i2c spec values.538538+ * Return: Matched i2c_spec_values.543539 */544540static const struct i2c_spec_values *rk3x_i2c_get_spec(unsigned int speed)545541{···551549}552550553551/**554554- * Calculate divider values for desired SCL frequency555555- *552552+ * rk3x_i2c_v0_calc_timings - Calculate divider values for desired SCL frequency556553 * @clk_rate: I2C input clock rate557554 * @t: Known I2C timing information558555 * @t_calc: Calculated rk3x private timings that would be written into regs559556 *560560- * Returns: 0 on success, -EINVAL if the goal SCL rate is too slow. In that case557557+ * Return: %0 on success, -%EINVAL if the goal SCL rate is too slow. In that case561558 * a best-effort divider value is returned in divs. If the target rate is562559 * too high, we silently use the highest possible rate.563560 */···711710}712711713712/**714714- * Calculate timing values for desired SCL frequency715715- *713713+ * rk3x_i2c_v1_calc_timings - Calculate timing values for desired SCL frequency716714 * @clk_rate: I2C input clock rate717715 * @t: Known I2C timing information718716 * @t_calc: Calculated rk3x private timings that would be written into regs719717 *720720- * Returns: 0 on success, -EINVAL if the goal SCL rate is too slow. In that case718718+ * Return: %0 on success, -%EINVAL if the goal SCL rate is too slow. In that case721719 * a best-effort divider value is returned in divs. 
If the target rate is722720 * too high, we silently use the highest possible rate.723721 * The following formulas are v1's method to calculate timings.···960960}961961962962/**963963- * Setup I2C registers for an I2C operation specified by msgs, num.964964- *965965- * Must be called with i2c->lock held.966966- *963963+ * rk3x_i2c_setup - Setup I2C registers for an I2C operation specified by msgs, num.964964+ * @i2c: target controller data967965 * @msgs: I2C msgs to process968966 * @num: Number of msgs969967 *970970- * returns: Number of I2C msgs processed or negative in case of error968968+ * Must be called with i2c->lock held.969969+ *970970+ * Return: Number of I2C msgs processed or negative in case of error971971 */972972static int rk3x_i2c_setup(struct rk3x_i2c *i2c, struct i2c_msg *msgs, int num)973973{
···1010#include <linux/regmap.h>1111#include <linux/acpi.h>1212#include <linux/bitops.h>1313+#include <linux/bitfield.h>13141415#include <linux/iio/iio.h>1516#include <linux/iio/sysfs.h>···145144#define FXOS8700_NVM_DATA_BNK0 0xa7146145147146/* Bit definitions for FXOS8700_CTRL_REG1 */148148-#define FXOS8700_CTRL_ODR_MSK 0x38149147#define FXOS8700_CTRL_ODR_MAX 0x00150150-#define FXOS8700_CTRL_ODR_MIN GENMASK(4, 3)148148+#define FXOS8700_CTRL_ODR_MSK GENMASK(5, 3)151149152150/* Bit definitions for FXOS8700_M_CTRL_REG1 */153151#define FXOS8700_HMS_MASK GENMASK(1, 0)···320320 switch (iio_type) {321321 case IIO_ACCEL:322322 return FXOS8700_ACCEL;323323- case IIO_ANGL_VEL:323323+ case IIO_MAGN:324324 return FXOS8700_MAGN;325325 default:326326 return -EINVAL;···345345static int fxos8700_set_scale(struct fxos8700_data *data,346346 enum fxos8700_sensor t, int uscale)347347{348348- int i;348348+ int i, ret, val;349349+ bool active_mode;349350 static const int scale_num = ARRAY_SIZE(fxos8700_accel_scale);350351 struct device *dev = regmap_get_device(data->regmap);351352352353 if (t == FXOS8700_MAGN) {353353- dev_err(dev, "Magnetometer scale is locked at 1200uT\n");354354+ dev_err(dev, "Magnetometer scale is locked at 0.001Gs\n");354355 return -EINVAL;356356+ }357357+358358+ /*359359+ * When the device is in active mode, it fails to set an ACCEL360360+ * full-scale range (2g/4g/8g) in FXOS8700_XYZ_DATA_CFG.361361+ * This is not aligned with the datasheet, but it is the fxos8700362362+ * chip's behavior. 
Set the device in standby mode before setting363363+ * an ACCEL full-scale range.364364+ */365365+ ret = regmap_read(data->regmap, FXOS8700_CTRL_REG1, &val);366366+ if (ret)367367+ return ret;368368+369369+ active_mode = val & FXOS8700_ACTIVE;370370+ if (active_mode) {371371+ ret = regmap_write(data->regmap, FXOS8700_CTRL_REG1,372372+ val & ~FXOS8700_ACTIVE);373373+ if (ret)374374+ return ret;355375 }356376357377 for (i = 0; i < scale_num; i++)···381361 if (i == scale_num)382362 return -EINVAL;383363384384- return regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG,364364+ ret = regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG,385365 fxos8700_accel_scale[i].bits);366366+ if (ret)367367+ return ret;368368+ return regmap_write(data->regmap, FXOS8700_CTRL_REG1,369369+ active_mode);386370}387371388372static int fxos8700_get_scale(struct fxos8700_data *data,···396372 static const int scale_num = ARRAY_SIZE(fxos8700_accel_scale);397373398374 if (t == FXOS8700_MAGN) {399399- *uscale = 1200; /* Magnetometer is locked at 1200uT */375375+ *uscale = 1000; /* Magnetometer is locked at 0.001Gs */400376 return 0;401377 }402378···418394 int axis, int *val)419395{420396 u8 base, reg;397397+ s16 tmp;421398 int ret;422422- enum fxos8700_sensor type = fxos8700_to_sensor(chan_type);423399424424- base = type ? FXOS8700_OUT_X_MSB : FXOS8700_M_OUT_X_MSB;400400+ /*401401+ * Different register base addresses vary with the channel type.402402+ * This bug went unnoticed before because selecting the base via403403+ * an enum is hard to read. 
Use a switch statement to make the selection explicit.404404+ */405405+ switch (chan_type) {406406+ case IIO_ACCEL:407407+ base = FXOS8700_OUT_X_MSB;408408+ break;409409+ case IIO_MAGN:410410+ base = FXOS8700_M_OUT_X_MSB;411411+ break;412412+ default:413413+ return -EINVAL;414414+ }425415426416 /* Block read 6 bytes of device output registers to avoid data loss */427417 ret = regmap_bulk_read(data->regmap, base, data->buf,428428- FXOS8700_DATA_BUF_SIZE);418418+ sizeof(data->buf));429419 if (ret)430420 return ret;431421432422 /* Convert axis to buffer index */433423 reg = axis - IIO_MOD_X;434424425425+ /*426426+ * Convert to native endianness. The accel data and magn data427427+ * are signed, so a forced type conversion is needed.428428+ */429429+ tmp = be16_to_cpu(data->buf[reg]);430430+431431+ /*432432+ * ACCEL output data registers contain the X-axis, Y-axis, and Z-axis433433+ * 14-bit left-justified sample data and MAGN output data registers434434+ * contain the X-axis, Y-axis, and Z-axis 16-bit sample data. 
Apply435435+ * a signed 2-bit right shift to the raw data read back from the ACCEL436436+ * output data registers and keep the MAGN sensor data as-is.437437+ * The value should be sign-extended to 32 bits.438438+ */439439+ switch (chan_type) {440440+ case IIO_ACCEL:441441+ tmp = tmp >> 2;442442+ break;443443+ case IIO_MAGN:444444+ /* Nothing to do */445445+ break;446446+ default:447447+ return -EINVAL;448448+ }449449+435450 /* Convert to native endianness */436436- *val = sign_extend32(be16_to_cpu(data->buf[reg]), 15);451451+ *val = sign_extend32(tmp, 15);437452438453 return 0;439454}···508445 if (i >= odr_num)509446 return -EINVAL;510447511511- return regmap_update_bits(data->regmap,512512- FXOS8700_CTRL_REG1,513513- FXOS8700_CTRL_ODR_MSK + FXOS8700_ACTIVE,514514- fxos8700_odr[i].bits << 3 | active_mode);448448+ val &= ~FXOS8700_CTRL_ODR_MSK;449449+ val |= FIELD_PREP(FXOS8700_CTRL_ODR_MSK, fxos8700_odr[i].bits) | FXOS8700_ACTIVE;450450+ return regmap_write(data->regmap, FXOS8700_CTRL_REG1, val);515451}516452517453static int fxos8700_get_odr(struct fxos8700_data *data, enum fxos8700_sensor t,···523461 if (ret)524462 return ret;525463526526- val &= FXOS8700_CTRL_ODR_MSK;464464+ val = FIELD_GET(FXOS8700_CTRL_ODR_MSK, val);527465528466 for (i = 0; i < odr_num; i++)529467 if (val == fxos8700_odr[i].bits)···588526static IIO_CONST_ATTR(in_magn_sampling_frequency_available,589527 "1.5625 6.25 12.5 50 100 200 400 800");590528static IIO_CONST_ATTR(in_accel_scale_available, "0.000244 0.000488 0.000976");591591-static IIO_CONST_ATTR(in_magn_scale_available, "0.000001200");529529+static IIO_CONST_ATTR(in_magn_scale_available, "0.001000");592530593531static struct attribute *fxos8700_attrs[] = {594532 &iio_const_attr_in_accel_sampling_frequency_available.dev_attr.attr,···654592 if (ret)655593 return ret;656594657657- /* Max ODR (800Hz individual or 400Hz hybrid), active mode */658658- ret = regmap_write(data->regmap, FXOS8700_CTRL_REG1,659659- FXOS8700_CTRL_ODR_MAX | 
FXOS8700_ACTIVE);595595+ /*596596+ * Set max full-scale range (+/-8G) for ACCEL sensor in chip597597+ * initialization then activate the device.598598+ */599599+ ret = regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, MODE_8G);660600 if (ret)661601 return ret;662602663663- /* Set for max full-scale range (+/-8G) */664664- return regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, MODE_8G);603603+ /* Max ODR (800Hz individual or 400Hz hybrid), active mode */604604+ return regmap_update_bits(data->regmap, FXOS8700_CTRL_REG1,605605+ FXOS8700_CTRL_ODR_MSK | FXOS8700_ACTIVE,606606+ FIELD_PREP(FXOS8700_CTRL_ODR_MSK, FXOS8700_CTRL_ODR_MAX) |607607+ FXOS8700_ACTIVE);665608}666609667610static void fxos8700_chip_uninit(void *data)
+1
drivers/iio/imu/st_lsm6dsx/Kconfig
···44 tristate "ST_LSM6DSx driver for STM 6-axis IMU MEMS sensors"55 depends on (I2C || SPI || I3C)66 select IIO_BUFFER77+ select IIO_TRIGGERED_BUFFER78 select IIO_KFIFO_BUF89 select IIO_ST_LSM6DSX_I2C if (I2C)910 select IIO_ST_LSM6DSX_SPI if (SPI_MASTER)
+5-4
drivers/iio/light/cm32181.c
···440440 if (!indio_dev)441441 return -ENOMEM;442442443443+ i2c_set_clientdata(client, indio_dev);444444+443445 /*444446 * Some ACPI systems list 2 I2C resources for the CM3218 sensor, the445447 * SMBus Alert Response Address (ARA, 0x0c) and the actual I2C address.···461459 if (IS_ERR(client))462460 return PTR_ERR(client);463461 }464464-465465- i2c_set_clientdata(client, indio_dev);466462467463 cm32181 = iio_priv(indio_dev);468464 cm32181->client = client;···490490491491static int cm32181_suspend(struct device *dev)492492{493493- struct i2c_client *client = to_i2c_client(dev);493493+ struct cm32181_chip *cm32181 = iio_priv(dev_get_drvdata(dev));494494+ struct i2c_client *client = cm32181->client;494495495496 return i2c_smbus_write_word_data(client, CM32181_REG_ADDR_CMD,496497 CM32181_CMD_ALS_DISABLE);···499498500499static int cm32181_resume(struct device *dev)501500{502502- struct i2c_client *client = to_i2c_client(dev);503501 struct cm32181_chip *cm32181 = iio_priv(dev_get_drvdata(dev));502502+ struct i2c_client *client = cm32181->client;504503505504 return i2c_smbus_write_word_data(client, CM32181_REG_ADDR_CMD,506505 cm32181->conf_regs[CM32181_REG_ADDR_CMD]);
-1
drivers/input/mouse/synaptics.c
···192192 "SYN3221", /* HP 15-ay000 */193193 "SYN323d", /* HP Spectre X360 13-w013dx */194194 "SYN3257", /* HP Envy 13-ad105ng */195195- "SYN3286", /* HP Laptop 15-da3001TU */196195 NULL197196};198197
···149149 bytes, GFP_KERNEL);150150 if (!i)151151 return -ENOMEM;152152- memcpy(&i->j, j, bytes);152152+ unsafe_memcpy(&i->j, j, bytes,153153+ /* "bytes" was calculated by set_bytes() above */);153154 /* Add to the location after 'where' points to */154155 list_add(&i->list, where);155156 ret = 1;
···3535 the xrx200 / VR9 SoC.36363737config NET_DSA_MT75303838- tristate "MediaTek MT753x and MT7621 Ethernet switch support"3838+ tristate "MediaTek MT7530 and MT7531 Ethernet switch support"3939 select NET_DSA_TAG_MTK4040 select MEDIATEK_GE_PHY4141 help4242- This enables support for the MediaTek MT7530, MT7531, and MT76214343- Ethernet switch chips.4242+ This enables support for the MediaTek MT7530 and MT7531 Ethernet4343+ switch chips. Multi-chip module MT7530 in MT7621AT, MT7621DAT,4444+ MT7621ST and MT7623AI SoCs is supported.44454546config NET_DSA_MV88E60604647 tristate "Marvell 88E6060 ethernet switch chip support"
···450450 /* ring full, shall not happen because queue is stopped if full451451 * below452452 */453453- netif_stop_queue(tx->adapter->netdev);453453+ netif_stop_subqueue(tx->adapter->netdev, tx->queue_index);454454455455 spin_unlock_irqrestore(&tx->lock, flags);456456···493493494494 if (tsnep_tx_desc_available(tx) < (MAX_SKB_FRAGS + 1)) {495495 /* ring can get full with next frame */496496- netif_stop_queue(tx->adapter->netdev);496496+ netif_stop_subqueue(tx->adapter->netdev, tx->queue_index);497497 }498498499499 spin_unlock_irqrestore(&tx->lock, flags);···503503504504static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget)505505{506506+ struct tsnep_tx_entry *entry;507507+ struct netdev_queue *nq;506508 unsigned long flags;507509 int budget = 128;508508- struct tsnep_tx_entry *entry;509509- int count;510510 int length;511511+ int count;512512+513513+ nq = netdev_get_tx_queue(tx->adapter->netdev, tx->queue_index);511514512515 spin_lock_irqsave(&tx->lock, flags);513516···567564 } while (likely(budget));568565569566 if ((tsnep_tx_desc_available(tx) >= ((MAX_SKB_FRAGS + 1) * 2)) &&570570- netif_queue_stopped(tx->adapter->netdev)) {571571- netif_wake_queue(tx->adapter->netdev);567567+ netif_tx_queue_stopped(nq)) {568568+ netif_tx_wake_queue(nq);572569 }573570574571 spin_unlock_irqrestore(&tx->lock, flags);
···36413641 struct ice_vsi *vsi = np->vsi;36423642 struct ice_pf *pf = vsi->back;36433643 int new_rx = 0, new_tx = 0;36443644+ bool locked = false;36443645 u32 curr_combined;36463646+ int ret = 0;3645364736463648 /* do not support changing channels in Safe Mode */36473649 if (ice_is_safe_mode(pf)) {···37073705 return -EINVAL;37083706 }3709370737103710- ice_vsi_recfg_qs(vsi, new_rx, new_tx);37083708+ if (pf->adev) {37093709+ mutex_lock(&pf->adev_mutex);37103710+ device_lock(&pf->adev->dev);37113711+ locked = true;37123712+ if (pf->adev->dev.driver) {37133713+ netdev_err(dev, "Cannot change channels when RDMA is active\n");37143714+ ret = -EBUSY;37153715+ goto adev_unlock;37163716+ }37173717+ }3711371837123712- if (!netif_is_rxfh_configured(dev))37133713- return ice_vsi_set_dflt_rss_lut(vsi, new_rx);37193719+ ice_vsi_recfg_qs(vsi, new_rx, new_tx, locked);37203720+37213721+ if (!netif_is_rxfh_configured(dev)) {37223722+ ret = ice_vsi_set_dflt_rss_lut(vsi, new_rx);37233723+ goto adev_unlock;37243724+ }3714372537153726 /* Update rss_size due to change in Rx queues */37163727 vsi->rss_size = ice_get_valid_rss_size(&pf->hw, new_rx);3717372837183718- return 0;37293729+adev_unlock:37303730+ if (locked) {37313731+ device_unlock(&pf->adev->dev);37323732+ mutex_unlock(&pf->adev_mutex);37333733+ }37343734+ return ret;37193735}3720373637213737/**
-3
drivers/net/ethernet/intel/ice/ice_lib.c
···32353235 }32363236 }3237323732383238- if (vsi->type == ICE_VSI_PF)32393239- ice_devlink_destroy_pf_port(pf);32403240-32413238 if (vsi->type == ICE_VSI_VF &&32423239 vsi->agg_node && vsi->agg_node->valid)32433240 vsi->agg_node->num_vsis--;
+20-10
drivers/net/ethernet/intel/ice/ice_main.c
···41954195 * @vsi: VSI being changed41964196 * @new_rx: new number of Rx queues41974197 * @new_tx: new number of Tx queues41984198+ * @locked: is adev device_lock held41984199 *41994200 * Only change the number of queues if new_tx, or new_rx is non-0.42004201 *42014202 * Returns 0 on success.42024203 */42034203-int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx)42044204+int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx, bool locked)42044205{42054206 struct ice_pf *pf = vsi->back;42064207 int err = 0, timeout = 50;···4230422942314230 ice_vsi_close(vsi);42324231 ice_vsi_rebuild(vsi, false);42334233- ice_pf_dcb_recfg(pf);42324232+ ice_pf_dcb_recfg(pf, locked);42344233 ice_vsi_open(vsi);42354234done:42364235 clear_bit(ICE_CFG_BUSY, pf->state);···45914590}4592459145934592/**45944594- * ice_register_netdev - register netdev and devlink port45934593+ * ice_register_netdev - register netdev45954594 * @pf: pointer to the PF struct45964595 */45974596static int ice_register_netdev(struct ice_pf *pf)···46034602 if (!vsi || !vsi->netdev)46044603 return -EIO;4605460446064606- err = ice_devlink_create_pf_port(pf);46074607- if (err)46084608- goto err_devlink_create;46094609-46104610- SET_NETDEV_DEVLINK_PORT(vsi->netdev, &pf->devlink_port);46114605 err = register_netdev(vsi->netdev);46124606 if (err)46134607 goto err_register_netdev;···4613461746144618 return 0;46154619err_register_netdev:46164616- ice_devlink_destroy_pf_port(pf);46174617-err_devlink_create:46184620 free_netdev(vsi->netdev);46194621 vsi->netdev = NULL;46204622 clear_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state);···46304636ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)46314637{46324638 struct device *dev = &pdev->dev;46394639+ struct ice_vsi *vsi;46334640 struct ice_pf *pf;46344641 struct ice_hw *hw;46354642 int i, err;···49134918 pcie_print_link_status(pf->pdev);4914491949154920probe_done:49214921+ err = ice_devlink_create_pf_port(pf);49224922+ if 
(err)49234923+ goto err_create_pf_port;49244924+49254925+ vsi = ice_get_main_vsi(pf);49264926+ if (!vsi || !vsi->netdev) {49274927+ err = -EINVAL;49284928+ goto err_netdev_reg;49294929+ }49304930+49314931+ SET_NETDEV_DEVLINK_PORT(vsi->netdev, &pf->devlink_port);49324932+49164933 err = ice_register_netdev(pf);49174934 if (err)49184935 goto err_netdev_reg;···49624955err_devlink_reg_param:49634956 ice_devlink_unregister_params(pf);49644957err_netdev_reg:49584958+ ice_devlink_destroy_pf_port(pf);49594959+err_create_pf_port:49654960err_send_version_unroll:49664961 ice_vsi_release_all(pf);49674962err_alloc_sw_unroll:···50925083 ice_setup_mc_magic_wake(pf);50935084 ice_vsi_release_all(pf);50945085 mutex_destroy(&(&pf->hw)->fdir_fltr_lock);50865086+ ice_devlink_destroy_pf_port(pf);50955087 ice_set_wake(pf);50965088 ice_free_irq_msix_misc(pf);50975089 ice_for_each_vsi(pf, i) {
+9-5
drivers/net/ethernet/intel/igc/igc_ptp.c
···417417 *418418 * We need to convert the system time value stored in the RX/TXSTMP registers419419 * into a hwtstamp which can be used by the upper level timestamping functions.420420+ *421421+ * Returns 0 on success.420422 **/421421-static void igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter,422422- struct skb_shared_hwtstamps *hwtstamps,423423- u64 systim)423423+static int igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter,424424+ struct skb_shared_hwtstamps *hwtstamps,425425+ u64 systim)424426{425427 switch (adapter->hw.mac.type) {426428 case igc_i225:···432430 systim & 0xFFFFFFFF);433431 break;434432 default:435435- break;433433+ return -EINVAL;436434 }435435+ return 0;437436}438437439438/**···655652656653 regval = rd32(IGC_TXSTMPL);657654 regval |= (u64)rd32(IGC_TXSTMPH) << 32;658658- igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval);655655+ if (igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval))656656+ return;659657660658 switch (adapter->link_speed) {661659 case SPEED_10:
···15001500 BIT(DEVLINK_PARAM_CMODE_RUNTIME),15011501 rvu_af_dl_dwrr_mtu_get, rvu_af_dl_dwrr_mtu_set,15021502 rvu_af_dl_dwrr_mtu_validate),15031503+};15041504+15051505+static const struct devlink_param rvu_af_dl_param_exact_match[] = {15031506 DEVLINK_PARAM_DRIVER(RVU_AF_DEVLINK_PARAM_ID_NPC_EXACT_FEATURE_DISABLE,15041507 "npc_exact_feature_disable", DEVLINK_PARAM_TYPE_STRING,15051508 BIT(DEVLINK_PARAM_CMODE_RUNTIME),···15591556{15601557 struct rvu_devlink *rvu_dl;15611558 struct devlink *dl;15621562- size_t size;15631559 int err;1564156015651561 dl = devlink_alloc(&rvu_devlink_ops, sizeof(struct rvu_devlink),···15801578 goto err_dl_health;15811579 }1582158015831583- /* Register exact match devlink only for CN10K-B */15841584- size = ARRAY_SIZE(rvu_af_dl_params);15851585- if (!rvu_npc_exact_has_match_table(rvu))15861586- size -= 1;15871587-15881588- err = devlink_params_register(dl, rvu_af_dl_params, size);15811581+ err = devlink_params_register(dl, rvu_af_dl_params, ARRAY_SIZE(rvu_af_dl_params));15891582 if (err) {15901583 dev_err(rvu->dev,15911584 "devlink params register failed with error %d", err);15921585 goto err_dl_health;15931586 }1594158715881588+ /* Register exact match devlink only for CN10K-B */15891589+ if (!rvu_npc_exact_has_match_table(rvu))15901590+ goto done;15911591+15921592+ err = devlink_params_register(dl, rvu_af_dl_param_exact_match,15931593+ ARRAY_SIZE(rvu_af_dl_param_exact_match));15941594+ if (err) {15951595+ dev_err(rvu->dev,15961596+ "devlink exact match params register failed with error %d", err);15971597+ goto err_dl_exact_match;15981598+ }15991599+16001600+done:15951601 devlink_register(dl);15961602 return 0;16031603+16041604+err_dl_exact_match:16051605+ devlink_params_unregister(dl, rvu_af_dl_params, ARRAY_SIZE(rvu_af_dl_params));1597160615981607err_dl_health:15991608 rvu_health_reporters_destroy(rvu);···16181605 struct devlink *dl = rvu_dl->dl;1619160616201607 devlink_unregister(dl);16211621- devlink_params_unregister(dl, 
rvu_af_dl_params,16221622- ARRAY_SIZE(rvu_af_dl_params));16081608+16091609+ devlink_params_unregister(dl, rvu_af_dl_params, ARRAY_SIZE(rvu_af_dl_params));16101610+16111611+ /* Unregister exact match devlink only for CN10K-B */16121612+ if (rvu_npc_exact_has_match_table(rvu))16131613+ devlink_params_unregister(dl, rvu_af_dl_param_exact_match,16141614+ ARRAY_SIZE(rvu_af_dl_param_exact_match));16151615+16231616 rvu_health_reporters_destroy(rvu);16241617 devlink_free(dl);16251618}
+4-2
drivers/net/ethernet/mediatek/mtk_eth_soc.c
···31773177 struct mtk_eth *eth = mac->hw;31783178 int i, err;3179317931803180- if (mtk_uses_dsa(dev) && !eth->prog) {31803180+ if ((mtk_uses_dsa(dev) && !eth->prog) &&31813181+ !(mac->id == 1 && MTK_HAS_CAPS(eth->soc->caps, MTK_GMAC1_TRGMII))) {31813182 for (i = 0; i < ARRAY_SIZE(eth->dsa_meta); i++) {31823183 struct metadata_dst *md_dst = eth->dsa_meta[i];31833184···31953194 }31963195 } else {31973196 /* Hardware special tag parsing needs to be disabled if at least31983198- * one MAC does not use DSA.31973197+ * one MAC does not use DSA, or the second MAC of the MT7621 and31983198+ * MT7623 SoCs is being used.31993199 */32003200 u32 val = mtk_r32(eth, MTK_CDMP_IG_CTRL);32013201 val &= ~MTK_CDMP_STAG_EN;
+3-1
drivers/net/ethernet/mediatek/mtk_eth_soc.h
···519519#define SGMII_SPEED_10 FIELD_PREP(SGMII_SPEED_MASK, 0)520520#define SGMII_SPEED_100 FIELD_PREP(SGMII_SPEED_MASK, 1)521521#define SGMII_SPEED_1000 FIELD_PREP(SGMII_SPEED_MASK, 2)522522-#define SGMII_DUPLEX_FULL BIT(4)522522+#define SGMII_DUPLEX_HALF BIT(4)523523#define SGMII_IF_MODE_BIT5 BIT(5)524524#define SGMII_REMOTE_FAULT_DIS BIT(8)525525#define SGMII_CODE_SYNC_SET_VAL BIT(9)···10361036 * @regmap: The register map pointing at the range used to setup10371037 * SGMII modes10381038 * @ana_rgc3: The offset refers to register ANA_RGC3 related to regmap10391039+ * @interface: Currently configured interface mode10391040 * @pcs: Phylink PCS structure10401041 */10411042struct mtk_pcs {10421043 struct regmap *regmap;10431044 u32 ana_rgc3;10451045+ phy_interface_t interface;10441046 struct phylink_pcs pcs;10451047};10461048
···4343 int advertise, link_timer;4444 bool changed, use_an;45454646- if (interface == PHY_INTERFACE_MODE_2500BASEX)4747- rgc3 = RG_PHY_SPEED_3_125G;4848- else4949- rgc3 = 0;5050-5146 advertise = phylink_mii_c22_pcs_encode_advertisement(interface,5247 advertising);5348 if (advertise < 0)···8388 bmcr = 0;8489 }85908686- /* Configure the underlying interface speed */8787- regmap_update_bits(mpcs->regmap, mpcs->ana_rgc3,8888- RG_PHY_SPEED_3_125G, rgc3);9191+ if (mpcs->interface != interface) {9292+ /* PHYA power down */9393+ regmap_update_bits(mpcs->regmap, SGMSYS_QPHY_PWR_STATE_CTRL,9494+ SGMII_PHYA_PWD, SGMII_PHYA_PWD);9595+9696+ if (interface == PHY_INTERFACE_MODE_2500BASEX)9797+ rgc3 = RG_PHY_SPEED_3_125G;9898+ else9999+ rgc3 = 0;100100+101101+ /* Configure the underlying interface speed */102102+ regmap_update_bits(mpcs->regmap, mpcs->ana_rgc3,103103+ RG_PHY_SPEED_3_125G, rgc3);104104+105105+ mpcs->interface = interface;106106+ }8910790108 /* Update the advertisement, noting whether it has changed */91109 regmap_update_bits_check(mpcs->regmap, SGMSYS_PCS_ADVERTISE,···116108 regmap_update_bits(mpcs->regmap, SGMSYS_PCS_CONTROL_1,117109 SGMII_AN_RESTART | SGMII_AN_ENABLE, bmcr);118110119119- /* Release PHYA power down state */120120- regmap_update_bits(mpcs->regmap, SGMSYS_QPHY_PWR_STATE_CTRL,121121- SGMII_PHYA_PWD, 0);111111+ /* Release PHYA power down state112112+ * Only removing bit SGMII_PHYA_PWD isn't enough.113113+ * There are cases when the SGMII_PHYA_PWD register contains 0x9 which114114+ * prevents SGMII from working. The SGMII still shows link but no traffic115115+ * can flow. Writing 0x0 to the PHYA_PWD register fixes the issue. 
0x0 was116116+ * taken from a good working state of the SGMII interface.117117+ * It is unknown how much time the QPHY needs, but it is racy without a sleep.118118+ * Tested on mt7622 & mt7986.119119+ */120120+ usleep_range(50, 100);121121+ regmap_write(mpcs->regmap, SGMSYS_QPHY_PWR_STATE_CTRL, 0);122122123123 return changed;124124}···154138 else155139 sgm_mode = SGMII_SPEED_1000;156140157157- if (duplex == DUPLEX_FULL)158158- sgm_mode |= SGMII_DUPLEX_FULL;141141+ if (duplex != DUPLEX_FULL)142142+ sgm_mode |= SGMII_DUPLEX_HALF;159143160144 regmap_update_bits(mpcs->regmap, SGMSYS_SGMII_MODE,161161- SGMII_DUPLEX_FULL | SGMII_SPEED_MASK,145145+ SGMII_DUPLEX_HALF | SGMII_SPEED_MASK,162146 sgm_mode);163147 }164148}···187171 return PTR_ERR(ss->pcs[i].regmap);188172189173 ss->pcs[i].pcs.ops = &mtk_pcs_ops;174174+ ss->pcs[i].pcs.poll = true;175175+ ss->pcs[i].interface = PHY_INTERFACE_MODE_NA;190176 }191177192178 return 0;
···14381438 rx_work_done = (likely(fp->type & QEDE_FASTPATH_RX) &&14391439 qede_has_rx_work(fp->rxq)) ?14401440 qede_rx_int(fp, budget) : 0;14411441+14421442+ if (fp->xdp_xmit & QEDE_XDP_REDIRECT)14431443+ xdp_do_flush();14441444+14411445 /* Handle case where we are called by netpoll with a budget of 0 */14421446 if (rx_work_done < budget || !budget) {14431447 if (!qede_poll_is_more_work(fp)) {···14601456 fp->xdp_tx->tx_db.data.bd_prod = cpu_to_le16(xdp_prod);14611457 qede_update_tx_producer(fp->xdp_tx);14621458 }14631463-14641464- if (fp->xdp_xmit & QEDE_XDP_REDIRECT)14651465- xdp_do_flush_map();1466145914671460 return rx_work_done;14681461}
+8-2
drivers/net/ethernet/renesas/ravb_main.c
···11011101 ravb_write(ndev, ~(EIS_QFS | EIS_RESERVED), EIS);11021102 if (eis & EIS_QFS) {11031103 ris2 = ravb_read(ndev, RIS2);11041104- ravb_write(ndev, ~(RIS2_QFF0 | RIS2_RFFF | RIS2_RESERVED),11041104+ ravb_write(ndev, ~(RIS2_QFF0 | RIS2_QFF1 | RIS2_RFFF | RIS2_RESERVED),11051105 RIS2);1106110611071107 /* Receive Descriptor Empty int */11081108 if (ris2 & RIS2_QFF0)11091109 priv->stats[RAVB_BE].rx_over_errors++;1110111011111111- /* Receive Descriptor Empty int */11111111+ /* Receive Descriptor Empty int */11121112 if (ris2 & RIS2_QFF1)11131113 priv->stats[RAVB_NC].rx_over_errors++;11141114···29732973 else29742974 ret = ravb_close(ndev);2975297529762976+ if (priv->info->ccc_gac)29772977+ ravb_ptp_stop(ndev);29782978+29762979 return ret;29772980}29782981···3013301030143011 /* Restore descriptor base address table */30153012 ravb_write(ndev, priv->desc_bat_dma, DBAT);30133013+30143014+ if (priv->info->ccc_gac)30153015+ ravb_ptp_init(ndev, priv->pdev);3016301630173017 if (netif_running(ndev)) {30183018 if (priv->wol_enabled) {
+13-9
drivers/net/ethernet/renesas/rswitch.c
···10741074 port = NULL;10751075 goto out;10761076 }10771077- if (index == rdev->etha->index)10771077+ if (index == rdev->etha->index) {10781078+ if (!of_device_is_available(port))10791079+ port = NULL;10781080 break;10811081+ }10791082 }1080108310811084out:···1109110611101107 port = rswitch_get_port_node(rdev);11111108 if (!port)11121112- return -ENODEV;11091109+ return 0; /* ignored */1113111011141111 err = of_get_phy_mode(port, &rdev->etha->phy_interface);11151112 of_node_put(port);···13271324{13281325 int i, err;1329132613301330- for (i = 0; i < RSWITCH_NUM_PORTS; i++) {13271327+ rswitch_for_each_enabled_port(priv, i) {13311328 err = rswitch_ether_port_init_one(priv->rdev[i]);13321329 if (err)13331330 goto err_init_one;13341331 }1335133213361336- for (i = 0; i < RSWITCH_NUM_PORTS; i++) {13331333+ rswitch_for_each_enabled_port(priv, i) {13371334 err = rswitch_serdes_init(priv->rdev[i]);13381335 if (err)13391336 goto err_serdes;···13421339 return 0;1343134013441341err_serdes:13451345- for (i--; i >= 0; i--)13421342+ rswitch_for_each_enabled_port_continue_reverse(priv, i)13461343 rswitch_serdes_deinit(priv->rdev[i]);13471344 i = RSWITCH_NUM_PORTS;1348134513491346err_init_one:13501350- for (i--; i >= 0; i--)13471347+ rswitch_for_each_enabled_port_continue_reverse(priv, i)13511348 rswitch_ether_port_deinit_one(priv->rdev[i]);1352134913531350 return err;···16111608 netif_napi_add(ndev, &rdev->napi, rswitch_poll);1612160916131610 port = rswitch_get_port_node(rdev);16111611+ rdev->disabled = !port;16141612 err = of_get_ethdev_address(port, ndev);16151613 of_node_put(port);16161614 if (err) {···17111707 if (err)17121708 goto err_ether_port_init_all;1713170917141714- for (i = 0; i < RSWITCH_NUM_PORTS; i++) {17101710+ rswitch_for_each_enabled_port(priv, i) {17151711 err = register_netdev(priv->rdev[i]->ndev);17161712 if (err) {17171717- for (i--; i >= 0; i--)17131713+ rswitch_for_each_enabled_port_continue_reverse(priv, i)17181714 
unregister_netdev(priv->rdev[i]->ndev);17191715 goto err_register_netdev;17201716 }17211717 }1722171817231723- for (i = 0; i < RSWITCH_NUM_PORTS; i++)17191719+ rswitch_for_each_enabled_port(priv, i)17241720 netdev_info(priv->rdev[i]->ndev, "MAC address %pM\n",17251721 priv->rdev[i]->ndev->dev_addr);17261722
+12
drivers/net/ethernet/renesas/rswitch.h
···1313#define RSWITCH_MAX_NUM_QUEUES 12814141515#define RSWITCH_NUM_PORTS 31616+#define rswitch_for_each_enabled_port(priv, i) \1717+ for (i = 0; i < RSWITCH_NUM_PORTS; i++) \1818+ if (priv->rdev[i]->disabled) \1919+ continue; \2020+ else2121+2222+#define rswitch_for_each_enabled_port_continue_reverse(priv, i) \2323+ for (i--; i >= 0; i--) \2424+ if (priv->rdev[i]->disabled) \2525+ continue; \2626+ else16271728#define TX_RING_SIZE 10241829#define RX_RING_SIZE 1024···949938 struct rswitch_gwca_queue *tx_queue;950939 struct rswitch_gwca_queue *rx_queue;951940 u8 ts_tag;941941+ bool disabled;952942953943 int port;954944 struct rswitch_etha *etha;
···987987void netvsc_dma_unmap(struct hv_device *hv_dev,988988 struct hv_netvsc_packet *packet)989989{990990- u32 page_count = packet->cp_partial ?991991- packet->page_buf_cnt - packet->rmsg_pgcnt :992992- packet->page_buf_cnt;993990 int i;994991995992 if (!hv_is_isolation_supported())···995998 if (!packet->dma_range)996999 return;9971000998998- for (i = 0; i < page_count; i++)10011001+ for (i = 0; i < packet->page_buf_cnt; i++)9991002 dma_unmap_single(&hv_dev->device, packet->dma_range[i].dma,10001003 packet->dma_range[i].mapping_size,10011004 DMA_TO_DEVICE);···10251028 struct hv_netvsc_packet *packet,10261029 struct hv_page_buffer *pb)10271030{10281028- u32 page_count = packet->cp_partial ?10291029- packet->page_buf_cnt - packet->rmsg_pgcnt :10301030- packet->page_buf_cnt;10311031+ u32 page_count = packet->page_buf_cnt;10311032 dma_addr_t dma;10321033 int i;10331034
+16-7
drivers/net/mdio/mdio-mux-meson-g12a.c
···44 */5566#include <linux/bitfield.h>77+#include <linux/delay.h>78#include <linux/clk.h>89#include <linux/clk-provider.h>910#include <linux/device.h>···151150152151static int g12a_enable_internal_mdio(struct g12a_mdio_mux *priv)153152{153153+ u32 value;154154 int ret;155155156156 /* Enable the phy clock */···165163166164 /* Initialize ephy control */167165 writel(EPHY_G12A_ID, priv->regs + ETH_PHY_CNTL0);168168- writel(FIELD_PREP(PHY_CNTL1_ST_MODE, 3) |169169- FIELD_PREP(PHY_CNTL1_ST_PHYADD, EPHY_DFLT_ADD) |170170- FIELD_PREP(PHY_CNTL1_MII_MODE, EPHY_MODE_RMII) |171171- PHY_CNTL1_CLK_EN |172172- PHY_CNTL1_CLKFREQ |173173- PHY_CNTL1_PHY_ENB,174174- priv->regs + ETH_PHY_CNTL1);166166+167167+ /* Make sure we get a 0 -> 1 transition on the enable bit */168168+ value = FIELD_PREP(PHY_CNTL1_ST_MODE, 3) |169169+ FIELD_PREP(PHY_CNTL1_ST_PHYADD, EPHY_DFLT_ADD) |170170+ FIELD_PREP(PHY_CNTL1_MII_MODE, EPHY_MODE_RMII) |171171+ PHY_CNTL1_CLK_EN |172172+ PHY_CNTL1_CLKFREQ;173173+ writel(value, priv->regs + ETH_PHY_CNTL1);175174 writel(PHY_CNTL2_USE_INTERNAL |176175 PHY_CNTL2_SMI_SRC_MAC |177176 PHY_CNTL2_RX_CLK_EPHY,178177 priv->regs + ETH_PHY_CNTL2);178178+179179+ value |= PHY_CNTL1_PHY_ENB;180180+ writel(value, priv->regs + ETH_PHY_CNTL1);181181+182182+ /* The phy needs a bit of time to power up */183183+ mdelay(10);179184180185 return 0;181186}
+4-2
drivers/net/phy/dp83822.c
···233233 DP83822_ENERGY_DET_INT_EN |234234 DP83822_LINK_QUAL_INT_EN);235235236236- if (!dp83822->fx_enabled)236236+ /* Private data pointer is NULL on DP83825/26 */237237+ if (!dp83822 || !dp83822->fx_enabled)237238 misr_status |= DP83822_ANEG_COMPLETE_INT_EN |238239 DP83822_DUP_MODE_CHANGE_INT_EN |239240 DP83822_SPEED_CHANGED_INT_EN;···254253 DP83822_PAGE_RX_INT_EN |255254 DP83822_EEE_ERROR_CHANGE_INT_EN);256255257257- if (!dp83822->fx_enabled)256256+ /* Private data pointer is NULL on DP83825/26 */257257+ if (!dp83822 || !dp83822->fx_enabled)258258 misr_status |= DP83822_ANEG_ERR_INT_EN |259259 DP83822_WOL_PKT_INT_EN;260260
···15171517 * another mac interface, so we should create a device link between15181518 * phy dev and mac dev.15191519 */15201520- if (phydev->mdio.bus->parent && dev->dev.parent != phydev->mdio.bus->parent)15201520+ if (dev && phydev->mdio.bus->parent && dev->dev.parent != phydev->mdio.bus->parent)15211521 phydev->devlink = device_link_add(dev->dev.parent, &phydev->mdio.dev,15221522 DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);15231523
+4-4
drivers/net/virtio_net.c
···1677167716781678 received = virtnet_receive(rq, budget, &xdp_xmit);1679167916801680+ if (xdp_xmit & VIRTIO_XDP_REDIR)16811681+ xdp_do_flush();16821682+16801683 /* Out of packets? */16811684 if (received < budget)16821685 virtqueue_napi_complete(napi, rq->vq, received);16831683-16841684- if (xdp_xmit & VIRTIO_XDP_REDIR)16851685- xdp_do_flush();1686168616871687 if (xdp_xmit & VIRTIO_XDP_TX) {16881688 sq = virtnet_xdp_get_sq(vi);···21582158 cancel_delayed_work_sync(&vi->refill);2159215921602160 for (i = 0; i < vi->max_queue_pairs; i++) {21612161- xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);21622161 napi_disable(&vi->rq[i].napi);21622162+ xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);21632163 virtnet_napi_tx_disable(&vi->sq[i].napi);21642164 }21652165
···840840841841 if (!rxq->que_started) {842842 atomic_set(&rxq->rx_processing, 0);843843+ pm_runtime_put_autosuspend(rxq->dpmaif_ctrl->dev);843844 dev_err(rxq->dpmaif_ctrl->dev, "Work RXQ: %d has not been started\n", rxq->index);844845 return work_done;845846 }846847847847- if (!rxq->sleep_lock_pending) {848848- pm_runtime_get_noresume(rxq->dpmaif_ctrl->dev);848848+ if (!rxq->sleep_lock_pending)849849 t7xx_pci_disable_sleep(t7xx_dev);850850- }851850852851 ret = try_wait_for_completion(&t7xx_dev->sleep_lock_acquire);853852 if (!ret) {···875876 napi_complete_done(napi, work_done);876877 t7xx_dpmaif_clr_ip_busy_sts(&rxq->dpmaif_ctrl->hw_info);877878 t7xx_dpmaif_dlq_unmask_rx_done(&rxq->dpmaif_ctrl->hw_info, rxq->index);879879+ t7xx_pci_enable_sleep(rxq->dpmaif_ctrl->t7xx_dev);880880+ pm_runtime_mark_last_busy(rxq->dpmaif_ctrl->dev);881881+ pm_runtime_put_autosuspend(rxq->dpmaif_ctrl->dev);882882+ atomic_set(&rxq->rx_processing, 0);878883 } else {879884 t7xx_dpmaif_clr_ip_busy_sts(&rxq->dpmaif_ctrl->hw_info);880885 }881881-882882- t7xx_pci_enable_sleep(rxq->dpmaif_ctrl->t7xx_dev);883883- pm_runtime_mark_last_busy(rxq->dpmaif_ctrl->dev);884884- pm_runtime_put_noidle(rxq->dpmaif_ctrl->dev);885885- atomic_set(&rxq->rx_processing, 0);886886887887 return work_done;888888}···889891void t7xx_dpmaif_irq_rx_done(struct dpmaif_ctrl *dpmaif_ctrl, const unsigned int que_mask)890892{891893 struct dpmaif_rx_queue *rxq;892892- int qno;894894+ struct dpmaif_ctrl *ctrl;895895+ int qno, ret;893896894897 qno = ffs(que_mask) - 1;895898 if (qno < 0 || qno > DPMAIF_RXQ_NUM - 1) {···899900 }900901901902 rxq = &dpmaif_ctrl->rxq[qno];903903+ ctrl = rxq->dpmaif_ctrl;904904+ /* We need to make sure that the modem has been resumed before905905+ * calling napi. 
This can't be done inside the polling function906906+ * as we could be blocked waiting for device to be resumed,907907+ * which can't be done from softirq context the poll function908908+ * is running in.909909+ */910910+ ret = pm_runtime_resume_and_get(ctrl->dev);911911+ if (ret < 0 && ret != -EACCES) {912912+ dev_err(ctrl->dev, "Failed to resume device: %d\n", ret);913913+ return;914914+ }902915 napi_schedule(&rxq->napi);903916}904917
+15-1
drivers/net/wwan/t7xx/t7xx_netdev.c
···2727#include <linux/list.h>2828#include <linux/netdev_features.h>2929#include <linux/netdevice.h>3030+#include <linux/pm_runtime.h>3031#include <linux/skbuff.h>3132#include <linux/types.h>3233#include <linux/wwan.h>···46454746static void t7xx_ccmni_enable_napi(struct t7xx_ccmni_ctrl *ctlb)4847{4949- int i;4848+ struct dpmaif_ctrl *ctrl;4949+ int i, ret;5050+5151+ ctrl = ctlb->hif_ctrl;50525153 if (ctlb->is_napi_en)5254 return;53555456 for (i = 0; i < RXQ_NUM; i++) {5757+ /* The usage count has to be bumped every time before calling5858+ * napi_schedule. It will be decreased in the poll routine,5959+ * right after napi_complete_done is called.6060+ */6161+ ret = pm_runtime_resume_and_get(ctrl->dev);6262+ if (ret < 0) {6363+ dev_err(ctrl->dev, "Failed to resume device: %d\n",6464+ ret);6565+ return;6666+ }5567 napi_enable(ctlb->napi[i]);5668 napi_schedule(ctlb->napi[i]);5769 }
···10931093 if (ns) {10941094 if (ns->head->effects)10951095 effects = le32_to_cpu(ns->head->effects->iocs[opcode]);10961096- if (ns->head->ids.csi == NVME_CAP_CSS_NVM)10961096+ if (ns->head->ids.csi == NVME_CSI_NVM)10971097 effects |= nvme_known_nvm_effects(opcode);10981098 if (effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC))10991099 dev_warn_once(ctrl->device,···49214921 blk_mq_destroy_queue(ctrl->admin_q);49224922 blk_put_queue(ctrl->admin_q);49234923out_free_tagset:49244924- blk_mq_free_tag_set(ctrl->admin_tagset);49244924+ blk_mq_free_tag_set(set);49254925+ ctrl->admin_q = NULL;49264926+ ctrl->fabrics_q = NULL;49254927 return ret;49264928}49274929EXPORT_SYMBOL_GPL(nvme_alloc_admin_tag_set);···4985498349864984out_free_tag_set:49874985 blk_mq_free_tag_set(set);49864986+ ctrl->connect_q = NULL;49884987 return ret;49894988}49904989EXPORT_SYMBOL_GPL(nvme_alloc_io_tag_set);
+8-10
drivers/nvme/host/fc.c
···3521352135223522 nvme_fc_init_queue(ctrl, 0);3523352335243524- ret = nvme_alloc_admin_tag_set(&ctrl->ctrl, &ctrl->admin_tag_set,35253525- &nvme_fc_admin_mq_ops,35263526- struct_size((struct nvme_fcp_op_w_sgl *)NULL, priv,35273527- ctrl->lport->ops->fcprqst_priv_sz));35283528- if (ret)35293529- goto out_free_queues;35303530-35313524 /*35323525 * Would have been nice to init io queues tag set as well.35333526 * However, we require interaction from the controller···3530353735313538 ret = nvme_init_ctrl(&ctrl->ctrl, dev, &nvme_fc_ctrl_ops, 0);35323539 if (ret)35333533- goto out_cleanup_tagset;35403540+ goto out_free_queues;3534354135353542 /* at this point, teardown path changes to ref counting on nvme ctrl */35433543+35443544+ ret = nvme_alloc_admin_tag_set(&ctrl->ctrl, &ctrl->admin_tag_set,35453545+ &nvme_fc_admin_mq_ops,35463546+ struct_size((struct nvme_fcp_op_w_sgl *)NULL, priv,35473547+ ctrl->lport->ops->fcprqst_priv_sz));35483548+ if (ret)35493549+ goto fail_ctrl;3536355035373551 spin_lock_irqsave(&rport->lock, flags);35383552 list_add_tail(&ctrl->ctrl_list, &rport->ctrl_list);···3592359235933593 return ERR_PTR(-EIO);3594359435953595-out_cleanup_tagset:35963596- nvme_remove_admin_tag_set(&ctrl->ctrl);35973595out_free_queues:35983596 kfree(ctrl->queues);35993597out_free_ida:
···4141 void *val, size_t bytes)4242{4343 struct sunxi_sid *sid = context;4444+ u32 word;44454545- memcpy_fromio(val, sid->base + sid->value_offset + offset, bytes);4646+ /* .stride = 4 so offset is guaranteed to be aligned */4747+ __ioread32_copy(val, sid->base + sid->value_offset + offset, bytes / 4);4848+4949+ val += round_down(bytes, 4);5050+ offset += round_down(bytes, 4);5151+ bytes = bytes % 4;5252+5353+ if (!bytes)5454+ return 0;5555+5656+ /* Handle any trailing bytes */5757+ word = readl_relaxed(sid->base + sid->value_offset + offset);5858+ memcpy(val, &word, bytes);46594760 return 0;4861}
+1-5
drivers/of/fdt.c
···2626#include <linux/serial_core.h>2727#include <linux/sysfs.h>2828#include <linux/random.h>2929-#include <linux/kmemleak.h>30293130#include <asm/setup.h> /* for COMMAND_LINE_SIZE */3231#include <asm/page.h>···524525 size = dt_mem_next_cell(dt_root_size_cells, &prop);525526526527 if (size &&527527- early_init_dt_reserve_memory(base, size, nomap) == 0) {528528+ early_init_dt_reserve_memory(base, size, nomap) == 0)528529 pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %lu MiB\n",529530 uname, &base, (unsigned long)(size / SZ_1M));530530- if (!nomap)531531- kmemleak_alloc_phys(base, size, 0);532532- }533531 else534532 pr_err("Reserved memory: failed to reserve memory for node '%s': base %pa, size %lu MiB\n",535533 uname, &base, (unsigned long)(size / SZ_1M));
+3-6
drivers/parisc/pdc_stable.c
···274274275275 /* We'll use a local copy of buf */276276 count = min_t(size_t, count, sizeof(in)-1);277277- strncpy(in, buf, count);278278- in[count] = '\0';277277+ strscpy(in, buf, count + 1);279278280279 /* Let's clean up the target. 0xff is a blank pattern */281280 memset(&hwpath, 0xff, sizeof(hwpath));···387388388389 /* We'll use a local copy of buf */389390 count = min_t(size_t, count, sizeof(in)-1);390390- strncpy(in, buf, count);391391- in[count] = '\0';391391+ strscpy(in, buf, count + 1);392392393393 /* Let's clean up the target. 0 is a blank pattern */394394 memset(&layers, 0, sizeof(layers));···754756755757 /* We'll use a local copy of buf */756758 count = min_t(size_t, count, sizeof(in)-1);757757- strncpy(in, buf, count);758758- in[count] = '\0';759759+ strscpy(in, buf, count + 1);759760760761 /* Current flags are stored in primary boot path entry */761762 pathentry = &pdcspath_entry_primary;
+6-1
drivers/perf/arm-cmn.c
···15761576 hw->dn++;15771577 continue;15781578 }15791579- hw->dtcs_used |= arm_cmn_node_to_xp(cmn, dn)->dtc;15801579 hw->num_dns++;15811580 if (bynodeid)15821581 break;···15881589 nodeid, nid.x, nid.y, nid.port, nid.dev, type);15891590 return -EINVAL;15901591 }15921592+ /*15931593+ * Keep assuming non-cycles events count in all DTC domains; turns out15941594+ * it's hard to make a worthwhile optimisation around this, short of15951595+ * going all-in with domain-local counter allocation as well.15961596+ */15971597+ hw->dtcs_used = (1U << cmn->num_dtcs) - 1;1591159815921599 return arm_cmn_validate_group(cmn, event);15931600}
+1
drivers/platform/x86/amd/Kconfig
···88config AMD_PMC99 tristate "AMD SoC PMC driver"1010 depends on ACPI && PCI && RTC_CLASS1111+ select SERIO1112 help1213 The driver provides support for AMD Power Management Controller1314 primarily responsible for S2Idle transactions that are driven from
+56-2
drivers/platform/x86/amd/pmc.c
···2222#include <linux/pci.h>2323#include <linux/platform_device.h>2424#include <linux/rtc.h>2525+#include <linux/serio.h>2526#include <linux/suspend.h>2627#include <linux/seq_file.h>2728#include <linux/uaccess.h>···160159static bool enable_stb;161160module_param(enable_stb, bool, 0644);162161MODULE_PARM_DESC(enable_stb, "Enable the STB debug mechanism");162162+163163+static bool disable_workarounds;164164+module_param(disable_workarounds, bool, 0644);165165+MODULE_PARM_DESC(disable_workarounds, "Disable workarounds for platform bugs");163166164167static struct amd_pmc_dev pmc;165168static int amd_pmc_send_cmd(struct amd_pmc_dev *dev, u32 arg, u32 *data, u8 msg, bool ret);···658653 return -EINVAL;659654}660655656656+static int amd_pmc_czn_wa_irq1(struct amd_pmc_dev *pdev)657657+{658658+ struct device *d;659659+ int rc;660660+661661+ if (!pdev->major) {662662+ rc = amd_pmc_get_smu_version(pdev);663663+ if (rc)664664+ return rc;665665+ }666666+667667+ if (pdev->major > 64 || (pdev->major == 64 && pdev->minor > 65))668668+ return 0;669669+670670+ d = bus_find_device_by_name(&serio_bus, NULL, "serio0");671671+ if (!d)672672+ return 0;673673+ if (device_may_wakeup(d)) {674674+ dev_info_once(d, "Disabling IRQ1 wakeup source to avoid platform firmware bug\n");675675+ disable_irq_wake(1);676676+ device_set_wakeup_enable(d, false);677677+ }678678+ put_device(d);679679+680680+ return 0;681681+}682682+661683static int amd_pmc_verify_czn_rtc(struct amd_pmc_dev *pdev, u32 *arg)662684{663685 struct rtc_device *rtc_device;···747715 /* Reset and Start SMU logging - to monitor the s0i3 stats */748716 amd_pmc_setup_smu_logging(pdev);749717750750- /* Activate CZN specific RTC functionality */751751- if (pdev->cpu_id == AMD_CPU_ID_CZN) {718718+ /* Activate CZN specific platform bug workarounds */719719+ if (pdev->cpu_id == AMD_CPU_ID_CZN && !disable_workarounds) {752720 rc = amd_pmc_verify_czn_rtc(pdev, &arg);753721 if (rc) {754722 dev_err(pdev->dev, "failed to set RTC: %d\n", 
rc);···814782 .check = amd_pmc_s2idle_check,815783 .restore = amd_pmc_s2idle_restore,816784};785785+786786+static int __maybe_unused amd_pmc_suspend_handler(struct device *dev)787787+{788788+ struct amd_pmc_dev *pdev = dev_get_drvdata(dev);789789+790790+ if (pdev->cpu_id == AMD_CPU_ID_CZN && !disable_workarounds) {791791+ int rc = amd_pmc_czn_wa_irq1(pdev);792792+793793+ if (rc) {794794+ dev_err(pdev->dev, "failed to adjust keyboard wakeup: %d\n", rc);795795+ return rc;796796+ }797797+ }798798+799799+ return 0;800800+}801801+802802+static SIMPLE_DEV_PM_OPS(amd_pmc_pm, amd_pmc_suspend_handler, NULL);803803+817804#endif818805819806static const struct pci_device_id pmc_pci_ids[] = {···1031980 .name = "amd_pmc",1032981 .acpi_match_table = amd_pmc_acpi_ids,1033982 .dev_groups = pmc_groups,983983+#ifdef CONFIG_SUSPEND984984+ .pm = &amd_pmc_pm,985985+#endif1034986 },1035987 .probe = amd_pmc_probe,1036988 .remove = amd_pmc_remove,
+1-8
drivers/platform/x86/amd/pmf/auto-mode.c
···275275 */276276277277 if (is_apmf_func_supported(dev, APMF_FUNC_STATIC_SLIDER_GRANULAR)) {278278- int mode = amd_pmf_get_pprof_modes(dev);279279-280280- if (mode < 0)281281- return mode;282282-283278 dev_dbg(dev->dev, "resetting AMT thermals\n");284284- amd_pmf_update_slider(dev, SLIDER_OP_SET, mode, NULL);279279+ amd_pmf_set_sps_power_limits(dev);285280 }286281 return 0;287282}···294299void amd_pmf_init_auto_mode(struct amd_pmf_dev *dev)295300{296301 amd_pmf_load_defaults_auto_mode(dev);297297- /* update the thermal limits for Automode */298298- amd_pmf_set_automode(dev, config_store.current_mode, NULL);299302 amd_pmf_init_metrics_table(dev);300303}
+5-9
drivers/platform/x86/amd/pmf/cnqf.c
···103103104104 src = amd_pmf_cnqf_get_power_source(dev);105105106106- if (dev->current_profile == PLATFORM_PROFILE_BALANCED) {106106+ if (is_pprof_balanced(dev)) {107107 amd_pmf_set_cnqf(dev, src, config_store.current_mode, NULL);108108 } else {109109 /*···307307 const char *buf, size_t count)308308{309309 struct amd_pmf_dev *pdev = dev_get_drvdata(dev);310310- int mode, result, src;310310+ int result, src;311311 bool input;312312-313313- mode = amd_pmf_get_pprof_modes(pdev);314314- if (mode < 0)315315- return mode;316312317313 result = kstrtobool(buf, &input);318314 if (result)···317321 src = amd_pmf_cnqf_get_power_source(pdev);318322 pdev->cnqf_enabled = input;319323320320- if (pdev->cnqf_enabled && pdev->current_profile == PLATFORM_PROFILE_BALANCED) {324324+ if (pdev->cnqf_enabled && is_pprof_balanced(pdev)) {321325 amd_pmf_set_cnqf(pdev, src, config_store.current_mode, NULL);322326 } else {323327 if (is_apmf_func_supported(pdev, APMF_FUNC_STATIC_SLIDER_GRANULAR))324324- amd_pmf_update_slider(pdev, SLIDER_OP_SET, mode, NULL);328328+ amd_pmf_set_sps_power_limits(pdev);325329 }326330327331 dev_dbg(pdev->dev, "Received CnQF %s\n", input ? "on" : "off");···382386 dev->cnqf_enabled = amd_pmf_check_flags(dev);383387384388 /* update the thermal for CnQF */385385- if (dev->cnqf_enabled && dev->current_profile == PLATFORM_PROFILE_BALANCED) {389389+ if (dev->cnqf_enabled && is_pprof_balanced(dev)) {386390 src = amd_pmf_cnqf_get_power_source(dev);387391 amd_pmf_set_cnqf(dev, src, config_store.current_mode, NULL);388392 }
···981981 *982982 * Returns true if and only if alua_rtpg_work() will be called asynchronously.983983 * That function is responsible for calling @qdata->fn().984984+ *985985+ * Context: may be called from atomic context (alua_check()) only if the caller986986+ * holds an sdev reference.984987 */985988static bool alua_rtpg_queue(struct alua_port_group *pg,986989 struct scsi_device *sdev,···991988{992989 int start_queue = 0;993990 unsigned long flags;994994-995995- might_sleep();996991997992 if (WARN_ON_ONCE(!pg) || scsi_device_get(sdev))998993 return false;
+1-1
drivers/scsi/hpsa.c
···58505850{58515851 struct Scsi_Host *sh;5852585258535853- sh = scsi_host_alloc(&hpsa_driver_template, sizeof(h));58535853+ sh = scsi_host_alloc(&hpsa_driver_template, sizeof(struct ctlr_info));58545854 if (sh == NULL) {58555855 dev_err(&h->pdev->dev, "scsi_host_alloc failed\n");58565856 return -ENOMEM;
+16-6
drivers/scsi/iscsi_tcp.c
···849849 enum iscsi_host_param param, char *buf)850850{851851 struct iscsi_sw_tcp_host *tcp_sw_host = iscsi_host_priv(shost);852852- struct iscsi_session *session = tcp_sw_host->session;852852+ struct iscsi_session *session;853853 struct iscsi_conn *conn;854854 struct iscsi_tcp_conn *tcp_conn;855855 struct iscsi_sw_tcp_conn *tcp_sw_conn;···859859860860 switch (param) {861861 case ISCSI_HOST_PARAM_IPADDRESS:862862+ session = tcp_sw_host->session;862863 if (!session)863864 return -ENOTCONN;864865···960959 if (!cls_session)961960 goto remove_host;962961 session = cls_session->dd_data;963963- tcp_sw_host = iscsi_host_priv(shost);964964- tcp_sw_host->session = session;965962966963 if (iscsi_tcp_r2tpool_alloc(session))967964 goto remove_session;965965+966966+ /* We are now fully setup so expose the session to sysfs. */967967+ tcp_sw_host = iscsi_host_priv(shost);968968+ tcp_sw_host->session = session;968969 return cls_session;969970970971remove_session:···986983 if (WARN_ON_ONCE(session->leadconn))987984 return;988985989989- iscsi_tcp_r2tpool_free(cls_session->dd_data);990990- iscsi_session_teardown(cls_session);991991-986986+ iscsi_session_remove(cls_session);987987+ /*988988+ * Our get_host_param needs to access the session, so remove the989989+ * host from sysfs before freeing the session to make sure userspace990990+ * is no longer accessing the callout.991991+ */992992 iscsi_host_remove(shost, false);993993+994994+ iscsi_tcp_r2tpool_free(cls_session->dd_data);995995+996996+ iscsi_session_free(cls_session);993997 iscsi_host_free(shost);994998}995999
+31-7
drivers/scsi/libiscsi.c
···31043104}31053105EXPORT_SYMBOL_GPL(iscsi_session_setup);3106310631073107-/**31083108- * iscsi_session_teardown - destroy session, host, and cls_session31093109- * @cls_session: iscsi session31073107+/*31083108+ * iscsi_session_remove - Remove session from iSCSI class.31103109 */31113111-void iscsi_session_teardown(struct iscsi_cls_session *cls_session)31103110+void iscsi_session_remove(struct iscsi_cls_session *cls_session)31123111{31133112 struct iscsi_session *session = cls_session->dd_data;31143114- struct module *owner = cls_session->transport->owner;31153113 struct Scsi_Host *shost = session->host;3116311431173115 iscsi_remove_session(cls_session);31163116+ /*31173117+ * host removal only has to wait for its children to be removed from31183118+ * sysfs, and iscsi_tcp needs to do iscsi_host_remove before freeing31193119+ * the session, so drop the session count here.31203120+ */31213121+ iscsi_host_dec_session_cnt(shost);31223122+}31233123+EXPORT_SYMBOL_GPL(iscsi_session_remove);31243124+31253125+/**31263126+ * iscsi_session_free - Free iscsi session and its resources31273127+ * @cls_session: iscsi session31283128+ */31293129+void iscsi_session_free(struct iscsi_cls_session *cls_session)31303130+{31313131+ struct iscsi_session *session = cls_session->dd_data;31323132+ struct module *owner = cls_session->transport->owner;3118313331193134 iscsi_pool_free(&session->cmdpool);31203135 kfree(session->password);
iscsi_session_free(cls_session);31533147}31543148EXPORT_SYMBOL_GPL(iscsi_session_teardown);31553149
···7373{7474 struct se_session *sess = se_cmd->se_sess;75757676- assert_spin_locked(&sess->sess_cmd_lock);7777- WARN_ON_ONCE(!irqs_disabled());7676+ lockdep_assert_held(&sess->sess_cmd_lock);7777+7878 /*7979 * If command already reached CMD_T_COMPLETE state within8080 * target_complete_cmd() or CMD_T_FABRIC_STOP due to shutdown,
···4343 struct uart_8250_dma *dma = p->dma;4444 struct tty_port *tty_port = &p->port.state->port;4545 struct dma_tx_state state;4646+ enum dma_status dma_status;4647 int count;47484848- dma->rx_running = 0;4949- dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);4949+ /*5050+ * New DMA Rx can be started during the completion handler before it5151+ * can acquire the port's lock and it might still be ongoing. Don't do5252+ * anything in such a case.5353+ */5454+ dma_status = dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);5555+ if (dma_status == DMA_IN_PROGRESS)5656+ return;50575158 count = dma->rx_size - state.residue;52595360 tty_insert_flip_string(tty_port, dma->rx_buf, count);5461 p->port.icount.rx += count;6262+ dma->rx_running = 0;55635664 tty_flip_buffer_push(tty_port);5765}···7062 struct uart_8250_dma *dma = p->dma;7163 unsigned long flags;72647373- __dma_rx_complete(p);7474-7565 spin_lock_irqsave(&p->port.lock, flags);6666+ if (dma->rx_running)6767+ __dma_rx_complete(p);6868+6969+ /*7070+ * Cannot be combined with the previous check because __dma_rx_complete()7171+ * changes dma->rx_running.7272+ */7673 if (!dma->rx_running && (serial_lsr_in(p) & UART_LSR_DR))7774 p->dma->rx_dma(p);7875 spin_unlock_irqrestore(&p->port.lock, flags);
+5-28
drivers/tty/serial/stm32-usart.c
···797797 spin_unlock(&port->lock);798798 }799799800800- if (stm32_usart_rx_dma_enabled(port))801801- return IRQ_WAKE_THREAD;802802- else803803- return IRQ_HANDLED;804804-}805805-806806-static irqreturn_t stm32_usart_threaded_interrupt(int irq, void *ptr)807807-{808808- struct uart_port *port = ptr;809809- struct tty_port *tport = &port->state->port;810810- struct stm32_port *stm32_port = to_stm32_port(port);811811- unsigned int size;812812- unsigned long flags;813813-814800 /* Receiver timeout irq for DMA RX */815815- if (!stm32_port->throttled) {816816- spin_lock_irqsave(&port->lock, flags);801801+ if (stm32_usart_rx_dma_enabled(port) && !stm32_port->throttled) {802802+ spin_lock(&port->lock);817803 size = stm32_usart_receive_chars(port, false);818818- uart_unlock_and_check_sysrq_irqrestore(port, flags);804804+ uart_unlock_and_check_sysrq(port);819805 if (size)820806 tty_flip_buffer_push(tport);821807 }···10011015 u32 val;10021016 int ret;1003101710041004- ret = request_threaded_irq(port->irq, stm32_usart_interrupt,10051005- stm32_usart_threaded_interrupt,10061006- IRQF_ONESHOT | IRQF_NO_SUSPEND,10071007- name, port);10181018+ ret = request_irq(port->irq, stm32_usart_interrupt,10191019+ IRQF_NO_SUSPEND, name, port);10081020 if (ret)10091021 return ret;10101022···15841600 struct device *dev = &pdev->dev;15851601 struct dma_slave_config config;15861602 int ret;15871587-15881588- /*15891589- * Using DMA and threaded handler for the console could lead to15901590- * deadlocks.15911591- */15921592- if (uart_console(port))15931593- return -ENODEV;1594160315951604 stm32port->rx_buf = dma_alloc_coherent(dev, RX_BUF_L,15961605 &stm32port->rx_dma_buf,
+5-4
drivers/tty/vt/vc_screen.c
···386386387387 uni_mode = use_unicode(inode);388388 attr = use_attributes(inode);389389- ret = -ENXIO;390390- vc = vcs_vc(inode, &viewed);391391- if (!vc)392392- goto unlock_out;393389394390 ret = -EINVAL;395391 if (pos < 0)···402406 while (count) {403407 unsigned int this_round, skip = 0;404408 int size;409409+410410+ ret = -ENXIO;411411+ vc = vcs_vc(inode, &viewed);412412+ if (!vc)413413+ goto unlock_out;405414406415 /* Check whether we are above size each round,407416 * as copy_to_user at the end of this loop
+15-14
drivers/ufs/core/ufshcd.c
···12341234 * clock scaling is in progress12351235 */12361236 ufshcd_scsi_block_requests(hba);12371237+ mutex_lock(&hba->wb_mutex);12371238 down_write(&hba->clk_scaling_lock);1238123912391240 if (!hba->clk_scaling.is_allowed ||12401241 ufshcd_wait_for_doorbell_clr(hba, DOORBELL_CLR_TOUT_US)) {12411242 ret = -EBUSY;12421243 up_write(&hba->clk_scaling_lock);12441244+ mutex_unlock(&hba->wb_mutex);12431245 ufshcd_scsi_unblock_requests(hba);12441246 goto out;12451247 }···12531251 return ret;12541252}1255125312561256-static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, bool writelock)12541254+static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, int err, bool scale_up)12571255{12581258- if (writelock)12591259- up_write(&hba->clk_scaling_lock);12601260- else12611261- up_read(&hba->clk_scaling_lock);12561256+ up_write(&hba->clk_scaling_lock);12571257+12581258+ /* Enable Write Booster if we have scaled up else disable it */12591259+ if (ufshcd_enable_wb_if_scaling_up(hba) && !err)12601260+ ufshcd_wb_toggle(hba, scale_up);12611261+12621262+ mutex_unlock(&hba->wb_mutex);12631263+12621264 ufshcd_scsi_unblock_requests(hba);12631265 ufshcd_release(hba);12641266}···12791273static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)12801274{12811275 int ret = 0;12821282- bool is_writelock = true;1283127612841277 ret = ufshcd_clock_scaling_prepare(hba);12851278 if (ret)···13071302 }13081303 }1309130413101310- /* Enable Write Booster if we have scaled up else disable it */13111311- if (ufshcd_enable_wb_if_scaling_up(hba)) {13121312- downgrade_write(&hba->clk_scaling_lock);13131313- is_writelock = false;13141314- ufshcd_wb_toggle(hba, scale_up);13151315- }13161316-13171305out_unprepare:13181318- ufshcd_clock_scaling_unprepare(hba, is_writelock);13061306+ ufshcd_clock_scaling_unprepare(hba, ret, scale_up);13191307 return ret;13201308}13211309···6064606660656067static void ufshcd_clk_scaling_allow(struct ufs_hba *hba, bool allow)60666068{60696069+ 
mutex_lock(&hba->wb_mutex);60676070 down_write(&hba->clk_scaling_lock);60686071 hba->clk_scaling.is_allowed = allow;60696072 up_write(&hba->clk_scaling_lock);60736073+ mutex_unlock(&hba->wb_mutex);60706074}6071607560726076static void ufshcd_clk_scaling_suspend(struct ufs_hba *hba, bool suspend)···97939793 /* Initialize mutex for exception event control */97949794 mutex_init(&hba->ee_ctrl_mutex);9795979597969796+ mutex_init(&hba->wb_mutex);97969797 init_rwsem(&hba->clk_scaling_lock);9797979897989799 ufshcd_init_clk_gating(hba);
+1-1
drivers/usb/dwc3/dwc3-qcom.c
···901901 qcom->mode = usb_get_dr_mode(&qcom->dwc3->dev);902902903903 /* enable vbus override for device mode */904904- if (qcom->mode == USB_DR_MODE_PERIPHERAL)904904+ if (qcom->mode != USB_DR_MODE_HOST)905905 dwc3_qcom_vbus_override_enable(qcom, true);906906907907 /* register extcon to override sw_vbus on Vbus change later */
-1
drivers/usb/fotg210/fotg210-udc.c
···10101010 int ret;1011101110121012 /* hook up the driver */10131013- driver->driver.bus = NULL;10141013 fotg210->driver = driver;10151014 fotg210->gadget.dev.of_node = fotg210->dev->of_node;10161015 fotg210->gadget.speed = USB_SPEED_UNKNOWN;
+3-1
drivers/usb/gadget/function/f_fs.c
···279279 struct usb_request *req = ffs->ep0req;280280 int ret;281281282282- if (!req)282282+ if (!req) {283283+ spin_unlock_irq(&ffs->ev.waitq.lock);283284 return -EINVAL;285285+ }284286285287 req->zero = len < le16_to_cpu(ffs->ev.setup.wLength);286288
···22852285 /* lock is needed but whether should use this lock or another */22862286 spin_lock_irqsave(&udc->lock, flags);2287228722882288- driver->driver.bus = NULL;22892288 /* hook up the driver */22902289 udc->driver = driver;22912290 udc->gadget.speed = driver->max_speed;
-1
drivers/usb/gadget/udc/fsl_udc_core.c
···19431943 /* lock is needed but whether should use this lock or another */19441944 spin_lock_irqsave(&udc_controller->lock, flags);1945194519461946- driver->driver.bus = NULL;19471946 /* hook up the driver */19481947 udc_controller->driver = driver;19491948 spin_unlock_irqrestore(&udc_controller->lock, flags);
···14001400 con->port = NULL;14011401 }1402140214031403+ kfree(ucsi->connector);14041404+ ucsi->connector = NULL;14051405+14031406err_reset:14041407 memset(&ucsi->cap, 0, sizeof(ucsi->cap));14051408 ucsi_reset_ppm(ucsi);···1434143114351432int ucsi_resume(struct ucsi *ucsi)14361433{14371437- queue_work(system_long_wq, &ucsi->resume_work);14341434+ if (ucsi->connector)14351435+ queue_work(system_long_wq, &ucsi->resume_work);14381436 return 0;14391437}14401438EXPORT_SYMBOL_GPL(ucsi_resume);···1554155015551551 /* Disable notifications */15561552 ucsi->ops->async_write(ucsi, UCSI_CONTROL, &cmd, sizeof(cmd));15531553+15541554+ if (!ucsi->connector)15551555+ return;1557155615581557 for (i = 0; i < ucsi->cap.num_connectors; i++) {15591558 cancel_work_sync(&ucsi->connector[i].work);
+1-1
drivers/vdpa/ifcvf/ifcvf_main.c
··· 849
 	ret = ifcvf_init_hw(vf, pdev);
 	if (ret) {
 		IFCVF_ERR(pdev, "Failed to init IFCVF hw\n");
-		return ret;
+		goto err;
 	}
 
 	for (i = 0; i < vf->nr_vring; i++)
+20-11
drivers/vfio/vfio_iommu_type1.c
··· 1856
  * significantly boosts non-hugetlbfs mappings and doesn't seem to hurt when
  * hugetlbfs is in use.
  */
-static void vfio_test_domain_fgsp(struct vfio_domain *domain)
+static void vfio_test_domain_fgsp(struct vfio_domain *domain, struct list_head *regions)
 {
-	struct page *pages;
 	int ret, order = get_order(PAGE_SIZE * 2);
+	struct vfio_iova *region;
+	struct page *pages;
+	dma_addr_t start;
 
 	pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
 	if (!pages)
 		return;
 
-	ret = iommu_map(domain->domain, 0, page_to_phys(pages), PAGE_SIZE * 2,
-			IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
-	if (!ret) {
-		size_t unmapped = iommu_unmap(domain->domain, 0, PAGE_SIZE);
+	list_for_each_entry(region, regions, list) {
+		start = ALIGN(region->start, PAGE_SIZE * 2);
+		if (start >= region->end || (region->end - start < PAGE_SIZE * 2))
+			continue;
 
-		if (unmapped == PAGE_SIZE)
-			iommu_unmap(domain->domain, PAGE_SIZE, PAGE_SIZE);
-		else
-			domain->fgsp = true;
+		ret = iommu_map(domain->domain, start, page_to_phys(pages), PAGE_SIZE * 2,
+				IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
+		if (!ret) {
+			size_t unmapped = iommu_unmap(domain->domain, start, PAGE_SIZE);
+
+			if (unmapped == PAGE_SIZE)
+				iommu_unmap(domain->domain, start + PAGE_SIZE, PAGE_SIZE);
+			else
+				domain->fgsp = true;
+		}
+		break;
 	}
 
 	__free_pages(pages, order);
··· 2335
 		}
 	}
 
-	vfio_test_domain_fgsp(domain);
+	vfio_test_domain_fgsp(domain, &iova_copy);
 
 	/* replay mappings on new domains */
 	ret = vfio_iommu_replay(iommu, domain);
+3
drivers/vhost/net.c
··· 1511
 	nvq = &n->vqs[index];
 	mutex_lock(&vq->mutex);
 
+	if (fd == -1)
+		vhost_clear_msg(&n->dev);
+
 	/* Verify that ring has been setup correctly. */
 	if (!vhost_vq_access_ok(vq)) {
 		r = -EFAULT;
··· 49
 	struct clk *lcdc_clk;
 
 	struct backlight_device *backlight;
-	u8 bl_power;
 	u8 saved_lcdcon;
 
 	u32 pseudo_palette[16];
··· 108
 static int atmel_bl_update_status(struct backlight_device *bl)
 {
 	struct atmel_lcdfb_info *sinfo = bl_get_data(bl);
-	int power = sinfo->bl_power;
-	int brightness = bl->props.brightness;
-
-	/* REVISIT there may be a meaningful difference between
-	 * fb_blank and power ... there seem to be some cases
-	 * this doesn't handle correctly.
-	 */
-	if (bl->props.fb_blank != sinfo->bl_power)
-		power = bl->props.fb_blank;
-	else if (bl->props.power != sinfo->bl_power)
-		power = bl->props.power;
-
-	if (brightness < 0 && power == FB_BLANK_UNBLANK)
-		brightness = lcdc_readl(sinfo, ATMEL_LCDC_CONTRAST_VAL);
-	else if (power != FB_BLANK_UNBLANK)
-		brightness = 0;
+	int brightness = backlight_get_brightness(bl);
 
 	lcdc_writel(sinfo, ATMEL_LCDC_CONTRAST_VAL, brightness);
 	if (contrast_ctr & ATMEL_LCDC_POL_POSITIVE)
··· 116
 			    brightness ? contrast_ctr : 0);
 	else
 		lcdc_writel(sinfo, ATMEL_LCDC_CONTRAST_CTR, contrast_ctr);
-
-	bl->props.fb_blank = bl->props.power = sinfo->bl_power = power;
 
 	return 0;
 }
··· 136
 {
 	struct backlight_properties props;
 	struct backlight_device *bl;
-
-	sinfo->bl_power = FB_BLANK_UNBLANK;
 
 	if (sinfo->backlight)
 		return;
+2-4
drivers/video/fbdev/aty/aty128fb.c
··· 1766
 	unsigned int reg = aty_ld_le32(LVDS_GEN_CNTL);
 	int level;
 
-	if (bd->props.power != FB_BLANK_UNBLANK ||
-	    bd->props.fb_blank != FB_BLANK_UNBLANK ||
-	    !par->lcd_on)
+	if (!par->lcd_on)
 		level = 0;
 	else
-		level = bd->props.brightness;
+		level = backlight_get_brightness(bd);
 
 	reg |= LVDS_BL_MOD_EN | LVDS_BLON;
 	if (level > 0) {
+1-7
drivers/video/fbdev/aty/atyfb_base.c
··· 2219
 {
 	struct atyfb_par *par = bl_get_data(bd);
 	unsigned int reg = aty_ld_lcd(LCD_MISC_CNTL, par);
-	int level;
-
-	if (bd->props.power != FB_BLANK_UNBLANK ||
-	    bd->props.fb_blank != FB_BLANK_UNBLANK)
-		level = 0;
-	else
-		level = bd->props.brightness;
+	int level = backlight_get_brightness(bd);
 
 	reg |= (BLMOD_EN | BIASMOD_EN);
 	if (level > 0) {
+1-5
drivers/video/fbdev/aty/radeon_backlight.c
··· 57
 	 * backlight. This provides some greater power saving and the display
 	 * is useless without backlight anyway.
 	 */
-	if (bd->props.power != FB_BLANK_UNBLANK ||
-	    bd->props.fb_blank != FB_BLANK_UNBLANK)
-		level = 0;
-	else
-		level = bd->props.brightness;
+	level = backlight_get_brightness(bd);
 
 	del_timer_sync(&rinfo->lvds_timer);
 	radeon_engine_idle();
+5-2
drivers/video/fbdev/core/fbcon.c
··· 2495
 	    h > FBCON_SWAP(info->var.rotate, info->var.yres, info->var.xres))
 		return -EINVAL;
 
+	if (font->width > 32 || font->height > 32)
+		return -EINVAL;
+
 	/* Make sure drawing engine can handle the font */
-	if (!(info->pixmap.blit_x & (1 << (font->width - 1))) ||
-	    !(info->pixmap.blit_y & (1 << (font->height - 1))))
+	if (!(info->pixmap.blit_x & BIT(font->width - 1)) ||
+	    !(info->pixmap.blit_y & BIT(font->height - 1)))
 		return -EINVAL;
 
 	/* Make sure driver can handle the font length */
+1-1
drivers/video/fbdev/core/fbmon.c
··· 1050
 }
 
 /**
- * fb_get_hblank_by_freq - get horizontal blank time given hfreq
+ * fb_get_hblank_by_hfreq - get horizontal blank time given hfreq
  * @hfreq: horizontal freq
  * @xres: horizontal resolution in pixels
  *
+1-6
drivers/video/fbdev/mx3fb.c
··· 283
 static int mx3fb_bl_update_status(struct backlight_device *bl)
 {
 	struct mx3fb_data *fbd = bl_get_data(bl);
-	int brightness = bl->props.brightness;
-
-	if (bl->props.power != FB_BLANK_UNBLANK)
-		brightness = 0;
-	if (bl->props.fb_blank != FB_BLANK_UNBLANK)
-		brightness = 0;
+	int brightness = backlight_get_brightness(bl);
 
 	fbd->backlight_level = (fbd->backlight_level & ~0xFF) | brightness;
 
··· 10
 #define DSS_SUBSYS_NAME "DISPLAY"
 
 #include <linux/kernel.h>
+#include <linux/kstrtox.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>
 #include <linux/sysfs.h>
··· 37
 	int r;
 	bool enable;
 
-	r = strtobool(buf, &enable);
+	r = kstrtobool(buf, &enable);
 	if (r)
 		return r;
 
··· 74
 	if (!dssdev->driver->enable_te || !dssdev->driver->get_te)
 		return -ENOENT;
 
-	r = strtobool(buf, &te);
+	r = kstrtobool(buf, &te);
 	if (r)
 		return r;
 
··· 184
 	if (!dssdev->driver->set_mirror || !dssdev->driver->get_mirror)
 		return -ENOENT;
 
-	r = strtobool(buf, &mirror);
+	r = kstrtobool(buf, &mirror);
 	if (r)
 		return r;
 
··· 10
 #define DSS_SUBSYS_NAME "MANAGER"
 
 #include <linux/kernel.h>
+#include <linux/kstrtox.h>
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>
··· 247
 	bool enable;
 	int r;
 
-	r = strtobool(buf, &enable);
+	r = kstrtobool(buf, &enable);
 	if (r)
 		return r;
 
··· 291
 	if(!dss_has_feature(FEAT_ALPHA_FIXED_ZORDER))
 		return -ENODEV;
 
-	r = strtobool(buf, &enable);
+	r = kstrtobool(buf, &enable);
 	if (r)
 		return r;
 
··· 330
 	if (!dss_has_feature(FEAT_CPR))
 		return -ENODEV;
 
-	r = strtobool(buf, &enable);
+	r = kstrtobool(buf, &enable);
 	if (r)
 		return r;
 
··· 806
 {
 	struct ceph_mds_session *s;
 
+	if (READ_ONCE(mdsc->fsc->mount_state) == CEPH_MOUNT_FENCE_IO)
+		return ERR_PTR(-EIO);
+
 	if (mds >= mdsc->mdsmap->possible_max_rank)
 		return ERR_PTR(-EINVAL);
 
··· 1480
 	struct ceph_msg *msg;
 	int mstate;
 	int mds = session->s_mds;
+
+	if (READ_ONCE(mdsc->fsc->mount_state) == CEPH_MOUNT_FENCE_IO)
+		return -EIO;
 
 	/* wait for mds to go active? */
 	mstate = ceph_mdsmap_get_state(mdsc->mdsmap, mds);
··· 2866
 		return;
 	}
 
+	if (READ_ONCE(mdsc->fsc->mount_state) == CEPH_MOUNT_FENCE_IO) {
+		dout("do_request metadata corrupted\n");
+		err = -EIO;
+		goto finish;
+	}
 	if (req->r_timeout &&
 	    time_after_eq(jiffies, req->r_started + req->r_timeout)) {
 		dout("do_request timed out\n");
··· 3256
 	u64 tid;
 	int err, result;
 	int mds = session->s_mds;
+	bool close_sessions = false;
 
 	if (msg->front.iov_len < sizeof(*head)) {
 		pr_err("mdsc_handle_reply got corrupt (short) reply\n");
··· 3363
 	realm = NULL;
 	if (rinfo->snapblob_len) {
 		down_write(&mdsc->snap_rwsem);
-		ceph_update_snap_trace(mdsc, rinfo->snapblob,
+		err = ceph_update_snap_trace(mdsc, rinfo->snapblob,
 				rinfo->snapblob + rinfo->snapblob_len,
 				le32_to_cpu(head->op) == CEPH_MDS_OP_RMSNAP,
 				&realm);
+		if (err) {
+			up_write(&mdsc->snap_rwsem);
+			close_sessions = true;
+			if (err == -EIO)
+				ceph_msg_dump(msg);
+			goto out_err;
+		}
 		downgrade_write(&mdsc->snap_rwsem);
 	} else {
 		down_read(&mdsc->snap_rwsem);
··· 3431
 				   req->r_end_latency, err);
 out:
 	ceph_mdsc_put_request(req);
+
+	/* Defer closing the sessions after s_mutex lock being released */
+	if (close_sessions)
+		ceph_mdsc_close_sessions(mdsc);
 	return;
 }
··· 5034
 }
 
 /*
- * called after sb is ro.
+ * called after sb is ro or when metadata corrupted.
 */
 void ceph_mdsc_close_sessions(struct ceph_mds_client *mdsc)
 {
··· 5324
 	struct ceph_mds_client *mdsc = s->s_mdsc;
 
 	pr_warn("mds%d closed our session\n", s->s_mds);
-	send_mds_reconnect(mdsc, s);
+	if (READ_ONCE(mdsc->fsc->mount_state) != CEPH_MOUNT_FENCE_IO)
+		send_mds_reconnect(mdsc, s);
 }
 
 static void mds_dispatch(struct ceph_connection *con, struct ceph_msg *msg)
+34-2
fs/ceph/snap.c
··· 1
 // SPDX-License-Identifier: GPL-2.0
 #include <linux/ceph/ceph_debug.h>
 
+#include <linux/fs.h>
 #include <linux/sort.h>
 #include <linux/slab.h>
 #include <linux/iversion.h>
··· 767
 	struct ceph_snap_realm *realm;
 	struct ceph_snap_realm *first_realm = NULL;
 	struct ceph_snap_realm *realm_to_rebuild = NULL;
+	struct ceph_client *client = mdsc->fsc->client;
 	int rebuild_snapcs;
 	int err = -ENOMEM;
+	int ret;
 	LIST_HEAD(dirty_realms);
 
 	lockdep_assert_held_write(&mdsc->snap_rwsem);
··· 887
 	if (first_realm)
 		ceph_put_snap_realm(mdsc, first_realm);
 	pr_err("%s error %d\n", __func__, err);
+
+	/*
+	 * When receiving a corrupted snap trace we don't know what
+	 * exactly has happened in MDS side. And we shouldn't continue
+	 * writing to OSD, which may corrupt the snapshot contents.
+	 *
+	 * Just try to blocklist this kclient and then this kclient
+	 * must be remounted to continue after the corrupted metadata
+	 * fixed in the MDS side.
+	 */
+	WRITE_ONCE(mdsc->fsc->mount_state, CEPH_MOUNT_FENCE_IO);
+	ret = ceph_monc_blocklist_add(&client->monc, &client->msgr.inst.addr);
+	if (ret)
+		pr_err("%s failed to blocklist %s: %d\n", __func__,
+		       ceph_pr_addr(&client->msgr.inst.addr), ret);
+
+	WARN(1, "%s: %s%sdo remount to continue%s",
+	     __func__, ret ? "" : ceph_pr_addr(&client->msgr.inst.addr),
+	     ret ? "" : " was blocklisted, ",
+	     err == -EIO ? " after corrupted snaptrace is fixed" : "");
+
 	return err;
 }
··· 1008
 	__le64 *split_inos = NULL, *split_realms = NULL;
 	int i;
 	int locked_rwsem = 0;
+	bool close_sessions = false;
 
 	/* decode */
 	if (msg->front.iov_len < sizeof(*h))
··· 1117
 	 * update using the provided snap trace. if we are deleting a
 	 * snap, we can avoid queueing cap_snaps.
 	 */
-	ceph_update_snap_trace(mdsc, p, e,
-			       op == CEPH_SNAP_OP_DESTROY, NULL);
+	if (ceph_update_snap_trace(mdsc, p, e,
+				   op == CEPH_SNAP_OP_DESTROY,
+				   NULL)) {
+		close_sessions = true;
+		goto bad;
+	}
 
 	if (op == CEPH_SNAP_OP_SPLIT)
 		/* we took a reference when we created the realm, above */
··· 1141
 out:
 	if (locked_rwsem)
 		up_write(&mdsc->snap_rwsem);
+
+	if (close_sessions)
+		ceph_mdsc_close_sessions(mdsc);
 	return;
 }
 
+11
fs/ceph/super.h
··· 100
 	char *mon_addr;
 };
 
+/* mount state */
+enum {
+	CEPH_MOUNT_MOUNTING,
+	CEPH_MOUNT_MOUNTED,
+	CEPH_MOUNT_UNMOUNTING,
+	CEPH_MOUNT_UNMOUNTED,
+	CEPH_MOUNT_SHUTDOWN,
+	CEPH_MOUNT_RECOVER,
+	CEPH_MOUNT_FENCE_IO,
+};
+
 #define CEPH_ASYNC_CREATE_CONFLICT_BITS 8
 
 struct ceph_fs_client {
··· 482
 		 */
 		e_hash = ext4_xattr_hash_entry_signed(entry->e_name, entry->e_name_len,
 						      &tmp_data, 1);
-		if (e_hash == entry->e_hash)
-			return 0;
-
 		/* Still no match - bad */
-		return -EFSCORRUPTED;
+		if (e_hash != entry->e_hash)
+			return -EFSCORRUPTED;
+
+		/* Let people know about old hash */
+		pr_warn_once("ext4: filesystem with signed xattr name hash");
 	}
 	return 0;
 }
··· 3096
 	while (name_len--) {
 		hash = (hash << NAME_HASH_SHIFT) ^
 		       (hash >> (8*sizeof(hash) - NAME_HASH_SHIFT)) ^
-		       *name++;
+		       (unsigned char)*name++;
 	}
 	while (value_count--) {
 		hash = (hash << VALUE_HASH_SHIFT) ^
+1-1
fs/freevxfs/Kconfig
··· 8
 	  of SCO UnixWare (and possibly others) and optionally available
 	  for Sunsoft Solaris, HP-UX and many other operating systems. However
 	  these particular OS implementations of vxfs may differ in on-disk
-	  data endianess and/or superblock offset. The vxfs module has been
+	  data endianness and/or superblock offset. The vxfs module has been
 	  tested with SCO UnixWare and HP-UX B.10.20 (pa-risc 1.1 arch.)
 	  Currently only readonly access is supported and VxFX versions
 	  2, 3 and 4. Tests were performed with HP-UX VxFS version 3.
··· 80
 	brelse(bd->bd_bh);
 }
 
+static int __gfs2_writepage(struct page *page, struct writeback_control *wbc,
+			    void *data)
+{
+	struct address_space *mapping = data;
+	int ret = mapping->a_ops->writepage(page, wbc);
+	mapping_set_error(mapping, ret);
+	return ret;
+}
+
 /**
  * gfs2_ail1_start_one - Start I/O on a transaction
  * @sdp: The superblock
··· 140
 		if (!mapping)
 			continue;
 		spin_unlock(&sdp->sd_ail_lock);
-		ret = filemap_fdatawrite_wbc(mapping, wbc);
+		ret = write_cache_pages(mapping, wbc, __gfs2_writepage, mapping);
 		if (need_resched()) {
 			blk_finish_plug(plug);
 			cond_resched();
+15-2
fs/ksmbd/connection.c
··· 280
 {
 	struct ksmbd_conn *conn = (struct ksmbd_conn *)p;
 	struct ksmbd_transport *t = conn->transport;
-	unsigned int pdu_size;
+	unsigned int pdu_size, max_allowed_pdu_size;
 	char hdr_buf[4] = {0,};
 	int size;
 
··· 305
 	pdu_size = get_rfc1002_len(hdr_buf);
 	ksmbd_debug(CONN, "RFC1002 header %u bytes\n", pdu_size);
 
+	if (conn->status == KSMBD_SESS_GOOD)
+		max_allowed_pdu_size =
+			SMB3_MAX_MSGSIZE + conn->vals->max_write_size;
+	else
+		max_allowed_pdu_size = SMB3_MAX_MSGSIZE;
+
+	if (pdu_size > max_allowed_pdu_size) {
+		pr_err_ratelimited("PDU length(%u) exceeds maximum allowed pdu size(%u) on connection(%d)\n",
+				   pdu_size, max_allowed_pdu_size,
+				   conn->status);
+		break;
+	}
+
 	/*
 	 * Check if pdu size is valid (min : smb header size,
 	 * max : 0x00FFFFFF).
 	 */
 	if (pdu_size < __SMB2_HEADER_STRUCTURE_SIZE ||
 	    pdu_size > MAX_STREAM_PROT_LEN) {
-		continue;
+		break;
 	}
 
 	/* 4 for rfc1002 length field */
+2-1
fs/ksmbd/ksmbd_netlink.h
··· 106
 	__u32	sub_auth[3];		/* Subauth value for Security ID */
 	__u32	smb2_max_credits;	/* MAX credits */
 	__u32	smbd_max_io_size;	/* smbd read write size */
-	__u32	reserved[127];		/* Reserved room */
+	__u32	max_connections;	/* Number of maximum simultaneous connections */
+	__u32	reserved[126];		/* Reserved room */
 	__u32	ifc_list_sz;		/* interfaces list size */
 	__s8	____payload[];
 };
+4-4
fs/ksmbd/ndr.c
··· 242
 		return ret;
 
 	if (da->version != 3 && da->version != 4) {
-		pr_err("v%d version is not supported\n", da->version);
+		ksmbd_debug(VFS, "v%d version is not supported\n", da->version);
 		return -EINVAL;
 	}
 
··· 251
 		return ret;
 
 	if (da->version != version2) {
-		pr_err("ndr version mismatched(version: %d, version2: %d)\n",
+		ksmbd_debug(VFS, "ndr version mismatched(version: %d, version2: %d)\n",
 		       da->version, version2);
 		return -EINVAL;
 	}
··· 457
 	if (ret)
 		return ret;
 	if (acl->version != 4) {
-		pr_err("v%d version is not supported\n", acl->version);
+		ksmbd_debug(VFS, "v%d version is not supported\n", acl->version);
 		return -EINVAL;
 	}
 
··· 465
 	if (ret)
 		return ret;
 	if (acl->version != version2) {
-		pr_err("ndr version mismatched(version: %d, version2: %d)\n",
+		ksmbd_debug(VFS, "ndr version mismatched(version: %d, version2: %d)\n",
 		       acl->version, version2);
 		return -EINVAL;
 	}
+1
fs/ksmbd/server.h
··· 41
 	unsigned int		share_fake_fscaps;
 	struct smb_sid		domain_sid;
 	unsigned int		auth_mechs;
+	unsigned int		max_connections;
 
 	char			*conf[SERVER_CONF_WORK_GROUP + 1];
 };
··· 308
 	if (req->smbd_max_io_size)
 		init_smbd_max_io_size(req->smbd_max_io_size);
 
+	if (req->max_connections)
+		server_conf.max_connections = req->max_connections;
+
 	ret = ksmbd_set_netbios_name(req->netbios_name);
 	ret |= ksmbd_set_server_string(req->server_string);
 	ret |= ksmbd_set_work_group(req->work_group);
+16-1
fs/ksmbd/transport_tcp.c
··· 15
 #define IFACE_STATE_DOWN		BIT(0)
 #define IFACE_STATE_CONFIGURED		BIT(1)
 
+static atomic_t active_num_conn;
+
 struct interface {
 	struct task_struct	*ksmbd_kthread;
 	struct socket		*ksmbd_socket;
··· 187
 	struct tcp_transport *t;
 
 	t = alloc_transport(client_sk);
-	if (!t)
+	if (!t) {
+		sock_release(client_sk);
 		return -ENOMEM;
+	}
 
 	csin = KSMBD_TCP_PEER_SOCKADDR(KSMBD_TRANS(t)->conn);
 	if (kernel_getpeername(client_sk, csin) < 0) {
··· 240
 			if (ret == -EAGAIN)
 				/* check for new connections every 100 msecs */
 				schedule_timeout_interruptible(HZ / 10);
+			continue;
+		}
+
+		if (server_conf.max_connections &&
+		    atomic_inc_return(&active_num_conn) >= server_conf.max_connections) {
+			pr_info_ratelimited("Limit the maximum number of connections(%u)\n",
+					    atomic_read(&active_num_conn));
+			atomic_dec(&active_num_conn);
+			sock_release(client_sk);
 			continue;
 		}
 
··· 381
 static void ksmbd_tcp_disconnect(struct ksmbd_transport *t)
 {
 	free_transport(TCP_TRANS(t));
+	if (server_conf.max_connections)
+		atomic_dec(&active_num_conn);
 }
 
 static void tcp_destroy_socket(struct socket *ksmbd_socket)
+36-25
fs/nfsd/filecache.c
··· 662
 };
 
 /**
+ * nfsd_file_cond_queue - conditionally unhash and queue a nfsd_file
+ * @nf: nfsd_file to attempt to queue
+ * @dispose: private list to queue successfully-put objects
+ *
+ * Unhash an nfsd_file, try to get a reference to it, and then put that
+ * reference. If it's the last reference, queue it to the dispose list.
+ */
+static void
+nfsd_file_cond_queue(struct nfsd_file *nf, struct list_head *dispose)
+	__must_hold(RCU)
+{
+	int decrement = 1;
+
+	/* If we raced with someone else unhashing, ignore it */
+	if (!nfsd_file_unhash(nf))
+		return;
+
+	/* If we can't get a reference, ignore it */
+	if (!nfsd_file_get(nf))
+		return;
+
+	/* Extra decrement if we remove from the LRU */
+	if (nfsd_file_lru_remove(nf))
+		++decrement;
+
+	/* If refcount goes to 0, then put on the dispose list */
+	if (refcount_sub_and_test(decrement, &nf->nf_ref)) {
+		list_add(&nf->nf_lru, dispose);
+		trace_nfsd_file_closing(nf);
+	}
+}
+
+/**
  * nfsd_file_queue_for_close: try to close out any open nfsd_files for an inode
  * @inode: inode on which to close out nfsd_files
  * @dispose: list on which to gather nfsd_files to close out
··· 721
 
 	rcu_read_lock();
 	do {
-		int decrement = 1;
-
 		nf = rhashtable_lookup(&nfsd_file_rhash_tbl, &key,
 				       nfsd_file_rhash_params);
 		if (!nf)
 			break;
-
-		/* If we raced with someone else unhashing, ignore it */
-		if (!nfsd_file_unhash(nf))
-			continue;
-
-		/* If we can't get a reference, ignore it */
-		if (!nfsd_file_get(nf))
-			continue;
-
-		/* Extra decrement if we remove from the LRU */
-		if (nfsd_file_lru_remove(nf))
-			++decrement;
-
-		/* If refcount goes to 0, then put on the dispose list */
-		if (refcount_sub_and_test(decrement, &nf->nf_ref)) {
-			list_add(&nf->nf_lru, dispose);
-			trace_nfsd_file_closing(nf);
-		}
+		nfsd_file_cond_queue(nf, dispose);
 	} while (1);
 	rcu_read_unlock();
 }
··· 942
 
 	nf = rhashtable_walk_next(&iter);
 	while (!IS_ERR_OR_NULL(nf)) {
-		if (!net || nf->nf_net == net) {
-			nfsd_file_unhash(nf);
-			nfsd_file_lru_remove(nf);
-			list_add(&nf->nf_lru, &dispose);
-		}
+		if (!net || nf->nf_net == net)
+			nfsd_file_cond_queue(nf, &dispose);
 		nf = rhashtable_walk_next(&iter);
 	}
 
··· 63
 	long long		bytes_used;
 	unsigned int		inodes;
 	unsigned int		fragments;
-	int			xattr_ids;
+	unsigned int		xattr_ids;
 	unsigned int		ids;
 	bool			panic_on_errors;
 	const struct squashfs_decompressor_thread_ops *thread_ops;
+2-2
fs/squashfs/xattr.h
··· 10
 
 #ifdef CONFIG_SQUASHFS_XATTR
 extern __le64 *squashfs_read_xattr_id_table(struct super_block *, u64,
-		u64 *, int *);
+		u64 *, unsigned int *);
 extern int squashfs_xattr_lookup(struct super_block *, unsigned int, int *,
 		unsigned int *, unsigned long long *);
 #else
 static inline __le64 *squashfs_read_xattr_id_table(struct super_block *sb,
-		u64 start, u64 *xattr_table_start, int *xattr_ids)
+		u64 start, u64 *xattr_table_start, unsigned int *xattr_ids)
 {
 	struct squashfs_xattr_id_table *id_table;
 
+2-2
fs/squashfs/xattr_id.c
··· 56
  * Read uncompressed xattr id lookup table indexes from disk into memory
  */
 __le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 table_start,
-		u64 *xattr_table_start, int *xattr_ids)
+		u64 *xattr_table_start, unsigned int *xattr_ids)
 {
 	struct squashfs_sb_info *msblk = sb->s_fs_info;
 	unsigned int len, indexes;
··· 76
 	/* Sanity check values */
 
 	/* there is always at least one xattr id */
-	if (*xattr_ids == 0)
+	if (*xattr_ids <= 0)
 		return ERR_PTR(-EINVAL);
 
 	len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
+12
include/drm/drm_fb_helper.h
··· 208
 	 * the smem_start field should always be cleared to zero.
 	 */
 	bool hint_leak_smem_start;
+
+#ifdef CONFIG_FB_DEFERRED_IO
+	/**
+	 * @fbdefio:
+	 *
+	 * Temporary storage for the driver's FB deferred I/O handler. If the
+	 * driver uses the DRM fbdev emulation layer, this is set by the core
+	 * to a generic deferred I/O handler if a driver is preferring to use
+	 * a shadow buffer.
+	 */
+	struct fb_deferred_io fbdefio;
+#endif
 };
 
 static inline struct drm_fb_helper *
··· 303
  */
 #define kunit_test_init_section_suites(__suites...)			\
 	__kunit_test_suites(CONCATENATE(__UNIQUE_ID(array), _probe),	\
-			    CONCATENATE(__UNIQUE_ID(suites), _probe),	\
 			    ##__suites)
 
 #define kunit_test_init_section_suite(suite)	\
··· 682
 		.right_text = #right,					       \
 	};								       \
 									       \
-	if (likely(memcmp(__left, __right, __size) op 0))		       \
-		break;							       \
+	if (likely(__left && __right))					       \
+		if (likely(memcmp(__left, __right, __size) op 0))	       \
+			break;						       \
 									       \
 	_KUNIT_FAILED(test,						       \
 		      assert_type,					       \
+1-1
include/kvm/arm_vgic.h
··· 263
 	struct vgic_io_device	dist_iodev;
 
 	bool			has_its;
-	bool			save_its_tables_in_progress;
+	bool			table_write_in_progress;
 
 	/*
 	 * Contains the attributes and gpa of the LPI configuration table.
+107-2
include/linux/apple-gmux.h
··· 8
 #define LINUX_APPLE_GMUX_H
 
 #include <linux/acpi.h>
+#include <linux/io.h>
+#include <linux/pnp.h>
 
 #define GMUX_ACPI_HID "APP000B"
 
+/*
+ * gmux port offsets. Many of these are not yet used, but may be in the
+ * future, and it's useful to have them documented here anyhow.
+ */
+#define GMUX_PORT_VERSION_MAJOR		0x04
+#define GMUX_PORT_VERSION_MINOR		0x05
+#define GMUX_PORT_VERSION_RELEASE	0x06
+#define GMUX_PORT_SWITCH_DISPLAY	0x10
+#define GMUX_PORT_SWITCH_GET_DISPLAY	0x11
+#define GMUX_PORT_INTERRUPT_ENABLE	0x14
+#define GMUX_PORT_INTERRUPT_STATUS	0x16
+#define GMUX_PORT_SWITCH_DDC		0x28
+#define GMUX_PORT_SWITCH_EXTERNAL	0x40
+#define GMUX_PORT_SWITCH_GET_EXTERNAL	0x41
+#define GMUX_PORT_DISCRETE_POWER	0x50
+#define GMUX_PORT_MAX_BRIGHTNESS	0x70
+#define GMUX_PORT_BRIGHTNESS		0x74
+#define GMUX_PORT_VALUE			0xc2
+#define GMUX_PORT_READ			0xd0
+#define GMUX_PORT_WRITE			0xd4
+
+#define GMUX_MIN_IO_LEN			(GMUX_PORT_BRIGHTNESS + 4)
+
 #if IS_ENABLED(CONFIG_APPLE_GMUX)
+static inline bool apple_gmux_is_indexed(unsigned long iostart)
+{
+	u16 val;
+
+	outb(0xaa, iostart + 0xcc);
+	outb(0x55, iostart + 0xcd);
+	outb(0x00, iostart + 0xce);
+
+	val = inb(iostart + 0xcc) | (inb(iostart + 0xcd) << 8);
+	if (val == 0x55aa)
+		return true;
+
+	return false;
+}
 
 /**
- * apple_gmux_present() - detect if gmux is built into the machine
+ * apple_gmux_detect() - detect if gmux is built into the machine
+ *
+ * @pnp_dev: Device to probe or NULL to use the first matching device
+ * @indexed_ret: Returns (by reference) if the gmux is indexed or not
+ *
+ * Detect if a supported gmux device is present by actually probing it.
+ * This avoids the false positives returned on some models by
+ * apple_gmux_present().
+ *
+ * Return: %true if a supported gmux ACPI device is detected and the kernel
+ * was configured with CONFIG_APPLE_GMUX, %false otherwise.
+ */
+static inline bool apple_gmux_detect(struct pnp_dev *pnp_dev, bool *indexed_ret)
+{
+	u8 ver_major, ver_minor, ver_release;
+	struct device *dev = NULL;
+	struct acpi_device *adev;
+	struct resource *res;
+	bool indexed = false;
+	bool ret = false;
+
+	if (!pnp_dev) {
+		adev = acpi_dev_get_first_match_dev(GMUX_ACPI_HID, NULL, -1);
+		if (!adev)
+			return false;
+
+		dev = get_device(acpi_get_first_physical_node(adev));
+		acpi_dev_put(adev);
+		if (!dev)
+			return false;
+
+		pnp_dev = to_pnp_dev(dev);
+	}
+
+	res = pnp_get_resource(pnp_dev, IORESOURCE_IO, 0);
+	if (!res || resource_size(res) < GMUX_MIN_IO_LEN)
+		goto out;
+
+	/*
+	 * Invalid version information may indicate either that the gmux
+	 * device isn't present or that it's a new one that uses indexed io.
+	 */
+	ver_major = inb(res->start + GMUX_PORT_VERSION_MAJOR);
+	ver_minor = inb(res->start + GMUX_PORT_VERSION_MINOR);
+	ver_release = inb(res->start + GMUX_PORT_VERSION_RELEASE);
+	if (ver_major == 0xff && ver_minor == 0xff && ver_release == 0xff) {
+		indexed = apple_gmux_is_indexed(res->start);
+		if (!indexed)
+			goto out;
+	}
+
+	if (indexed_ret)
+		*indexed_ret = indexed;
+
+	ret = true;
+out:
+	put_device(dev);
+	return ret;
+}
+
+/**
+ * apple_gmux_present() - check if gmux ACPI device is present
  *
  * Drivers may use this to activate quirks specific to dual GPU MacBook Pros
  * and Mac Pros, e.g. for deferred probing, runtime pm and backlight.
  *
- * Return: %true if gmux is present and the kernel was configured
+ * Return: %true if gmux ACPI device is present and the kernel was configured
  * with CONFIG_APPLE_GMUX, %false otherwise.
  */
 static inline bool apple_gmux_present(void)
··· 30
 #else /* !CONFIG_APPLE_GMUX */
 
 static inline bool apple_gmux_present(void)
+{
+	return false;
+}
+
+static inline bool apple_gmux_detect(struct pnp_dev *pnp_dev, bool *indexed_ret)
 {
 	return false;
 }
-10
include/linux/ceph/libceph.h
··· 99
 
 #define CEPH_AUTH_NAME_DEFAULT   "guest"
 
-/* mount state */
-enum {
-	CEPH_MOUNT_MOUNTING,
-	CEPH_MOUNT_MOUNTED,
-	CEPH_MOUNT_UNMOUNTING,
-	CEPH_MOUNT_UNMOUNTED,
-	CEPH_MOUNT_SHUTDOWN,
-	CEPH_MOUNT_RECOVER,
-};
-
 static inline unsigned long ceph_timeout_jiffies(unsigned long timeout)
 {
 	return timeout ?: MAX_SCHEDULE_TIMEOUT;
··· 252
 	int rss_en;
 	int mac_port_sel_speed;
 	bool en_tx_lpi_clockgating;
+	bool rx_clk_runs_in_lpi;
 	int has_xgmac;
 	bool vlan_fail_q_en;
 	u8 vlan_fail_q;
+1-2
include/linux/swap.h
··· 418
 extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
 						  unsigned long nr_pages,
 						  gfp_t gfp_mask,
-						  unsigned int reclaim_options,
-						  nodemask_t *nodemask);
+						  unsigned int reclaim_options);
 extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem,
 						gfp_t gfp_mask, bool noswap,
 						pg_data_t *pgdat,
+12
include/linux/util_macros.h
··· 38
  */
 #define find_closest_descending(x, a, as) __find_closest(x, a, as, >=)
 
+/**
+ * is_insidevar - check if the @ptr points inside the @var memory range.
+ * @ptr:	the pointer to a memory address.
+ * @var:	the variable which address and size identify the memory range.
+ *
+ * Evaluates to true if the address in @ptr lies within the memory
+ * range allocated to @var.
+ */
+#define is_insidevar(ptr, var)						\
+	((uintptr_t)(ptr) >= (uintptr_t)(var) &&			\
+	 (uintptr_t)(ptr) <  (uintptr_t)(var) + sizeof(var))
+
 #endif
··· 94
 	CTA_TIMEOUT_SCTP_SHUTDOWN_RECD,
 	CTA_TIMEOUT_SCTP_SHUTDOWN_ACK_SENT,
 	CTA_TIMEOUT_SCTP_HEARTBEAT_SENT,
-	CTA_TIMEOUT_SCTP_HEARTBEAT_ACKED,
-	CTA_TIMEOUT_SCTP_DATA_SENT,
+	CTA_TIMEOUT_SCTP_HEARTBEAT_ACKED, /* no longer used */
 	__CTA_TIMEOUT_SCTP_MAX
 };
 #define CTA_TIMEOUT_SCTP_MAX (__CTA_TIMEOUT_SCTP_MAX - 1)
+2
include/ufs/ufshcd.h
··· 808
  * @urgent_bkops_lvl: keeps track of urgent bkops level for device
  * @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for
  *  device is known or not.
+ * @wb_mutex: used to serialize devfreq and sysfs write booster toggling
  * @clk_scaling_lock: used to serialize device commands and clock scaling
  * @desc_size: descriptor sizes reported by device
  * @scsi_block_reqs_cnt: reference counting for scsi block requests
··· 952
 	enum bkops_status urgent_bkops_lvl;
 	bool is_urgent_bkops_lvl_checked;
 
+	struct mutex wb_mutex;
 	struct rw_semaphore clk_scaling_lock;
 	unsigned char desc_size[QUERY_DESC_IDN_MAX];
 	atomic_t scsi_block_reqs_cnt;
+8-10
io_uring/io_uring.c
···
 	}
 	spin_unlock(&ctx->completion_lock);

-	ret = io_req_prep_async(req);
-	if (ret) {
-fail:
-		io_req_defer_failed(req, ret);
-		return;
-	}
 	io_prep_async_link(req);
 	de = kmalloc(sizeof(*de), GFP_KERNEL);
 	if (!de) {
 		ret = -ENOMEM;
-		goto fail;
+		io_req_defer_failed(req, ret);
+		return;
 	}

 	spin_lock(&ctx->completion_lock);
···
 		req->flags &= ~REQ_F_HARDLINK;
 		req->flags |= REQ_F_LINK;
 		io_req_defer_failed(req, req->cqe.res);
-	} else if (unlikely(req->ctx->drain_active)) {
-		io_drain_req(req);
 	} else {
 		int ret = io_req_prep_async(req);

-		if (unlikely(ret))
+		if (unlikely(ret)) {
 			io_req_defer_failed(req, ret);
+			return;
+		}
+
+		if (unlikely(req->ctx->drain_active))
+			io_drain_req(req);
 		else
 			io_queue_iowq(req, NULL);
 	}
+11
io_uring/net.c
···
 	u16 flags;
 	/* initialised and used only by !msg send variants */
 	u16 addr_len;
+	u16 buf_group;
 	void __user *addr;
 	/* used only for send zerocopy */
 	struct io_kiocb *notif;
···
 		if (req->opcode == IORING_OP_RECV && sr->len)
 			return -EINVAL;
 		req->flags |= REQ_F_APOLL_MULTISHOT;
+		/*
+		 * Store the buffer group for this multishot receive separately,
+		 * as if we end up doing an io-wq based issue that selects a
+		 * buffer, it has to be committed immediately and that will
+		 * clear ->buf_list. This means we lose the link to the buffer
+		 * list, and the eventual buffer put on completion then cannot
+		 * restore it.
+		 */
+		sr->buf_group = req->buf_index;
 	}

 #ifdef CONFIG_COMPAT
···

 	sr->done_io = 0;
 	sr->len = 0; /* get from the provided buffer */
+	req->buf_index = sr->buf_group;
 }

 /*
···
 	return reg->type != SCALAR_VALUE;
 }

+/* Copy src state preserving dst->parent and dst->live fields */
+static void copy_register_state(struct bpf_reg_state *dst, const struct bpf_reg_state *src)
+{
+	struct bpf_reg_state *parent = dst->parent;
+	enum bpf_reg_liveness live = dst->live;
+
+	*dst = *src;
+	dst->parent = parent;
+	dst->live = live;
+}
+
 static void save_register_state(struct bpf_func_state *state,
				int spi, struct bpf_reg_state *reg,
				int size)
 {
 	int i;

-	state->stack[spi].spilled_ptr = *reg;
+	copy_register_state(&state->stack[spi].spilled_ptr, reg);
 	if (size == BPF_REG_SIZE)
 		state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;

···
 		 */
 		s32 subreg_def = state->regs[dst_regno].subreg_def;

-		state->regs[dst_regno] = *reg;
+		copy_register_state(&state->regs[dst_regno], reg);
 		state->regs[dst_regno].subreg_def = subreg_def;
 	} else {
 		for (i = 0; i < size; i++) {
···

 	if (dst_regno >= 0) {
 		/* restore register state from stack */
-		state->regs[dst_regno] = *reg;
+		copy_register_state(&state->regs[dst_regno], reg);
 		/* mark reg as written since spilled pointer state likely
 		 * has its liveness marks cleared by is_state_visited()
 		 * which resets stack/reg liveness for state transitions
···
 	 */
 	if (!ptr_is_dst_reg) {
 		tmp = *dst_reg;
-		*dst_reg = *ptr_reg;
+		copy_register_state(dst_reg, ptr_reg);
 	}
 	ret = sanitize_speculative_path(env, NULL, env->insn_idx + 1,
					env->insn_idx);
···
 				 * to propagate min/max range.
 				 */
 				src_reg->id = ++env->id_gen;
-				*dst_reg = *src_reg;
+				copy_register_state(dst_reg, src_reg);
 				dst_reg->live |= REG_LIVE_WRITTEN;
 				dst_reg->subreg_def = DEF_NOT_SUBREG;
 			} else {
···
 					insn->src_reg);
 				return -EACCES;
 			} else if (src_reg->type == SCALAR_VALUE) {
-				*dst_reg = *src_reg;
+				copy_register_state(dst_reg, src_reg);
 				/* Make sure ID is cleared otherwise
 				 * dst_reg min/max could be incorrectly
 				 * propagated into src_reg by find_equal_scalars()
···

 	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
 		if (reg->type == SCALAR_VALUE && reg->id == known_reg->id)
-			*reg = *known_reg;
+			copy_register_state(reg, known_reg);
 	}));
 }

+2-1
kernel/cgroup/cpuset.c
···
 	 * A parent can be left with no CPU as long as there is no
 	 * task directly associated with the parent partition.
 	 */
-	if (!cpumask_intersects(cs->cpus_allowed, parent->effective_cpus) &&
+	if (cpumask_subset(parent->effective_cpus, cs->cpus_allowed) &&
 	    partition_is_populated(parent, cs))
 		return PERR_NOCPUS;

···
 		new_prs = -new_prs;
 	spin_lock_irq(&callback_lock);
 	cs->partition_root_state = new_prs;
+	WRITE_ONCE(cs->prs_err, err);
 	spin_unlock_irq(&callback_lock);
 	/*
 	 * Update child cpusets, if present.
+17-22
kernel/events/core.c
···

 	cpc = per_cpu_ptr(pmu->cpu_pmu_context, event->cpu);
 	epc = &cpc->epc;
-
+	raw_spin_lock_irq(&ctx->lock);
 	if (!epc->ctx) {
 		atomic_set(&epc->refcount, 1);
 		epc->embedded = 1;
-		raw_spin_lock_irq(&ctx->lock);
 		list_add(&epc->pmu_ctx_entry, &ctx->pmu_ctx_list);
 		epc->ctx = ctx;
-		raw_spin_unlock_irq(&ctx->lock);
 	} else {
 		WARN_ON_ONCE(epc->ctx != ctx);
 		atomic_inc(&epc->refcount);
 	}
-
+	raw_spin_unlock_irq(&ctx->lock);
 	return epc;
 }

···

 static void put_pmu_ctx(struct perf_event_pmu_context *epc)
 {
+	struct perf_event_context *ctx = epc->ctx;
 	unsigned long flags;

-	if (!atomic_dec_and_test(&epc->refcount))
+	/*
+	 * XXX
+	 *
+	 * lockdep_assert_held(&ctx->mutex);
+	 *
+	 * can't because of the call-site in _free_event()/put_event()
+	 * which isn't always called under ctx->mutex.
+	 */
+	if (!atomic_dec_and_raw_lock_irqsave(&epc->refcount, &ctx->lock, flags))
 		return;

-	if (epc->ctx) {
-		struct perf_event_context *ctx = epc->ctx;
+	WARN_ON_ONCE(list_empty(&epc->pmu_ctx_entry));

-		/*
-		 * XXX
-		 *
-		 * lockdep_assert_held(&ctx->mutex);
-		 *
-		 * can't because of the call-site in _free_event()/put_event()
-		 * which isn't always called under ctx->mutex.
-		 */
-
-		WARN_ON_ONCE(list_empty(&epc->pmu_ctx_entry));
-		raw_spin_lock_irqsave(&ctx->lock, flags);
-		list_del_init(&epc->pmu_ctx_entry);
-		epc->ctx = NULL;
-		raw_spin_unlock_irqrestore(&ctx->lock, flags);
-	}
+	list_del_init(&epc->pmu_ctx_entry);
+	epc->ctx = NULL;

 	WARN_ON_ONCE(!list_empty(&epc->pinned_active));
 	WARN_ON_ONCE(!list_empty(&epc->flexible_active));
+
+	raw_spin_unlock_irqrestore(&ctx->lock, flags);

 	if (epc->embedded)
 		return;
···
 	sched_annotate_sleep();
 	mutex_lock(&module_mutex);
 	mod = find_module_all(name, strlen(name), true);
-	ret = !mod || mod->state == MODULE_STATE_LIVE;
+	ret = !mod || mod->state == MODULE_STATE_LIVE
+		|| mod->state == MODULE_STATE_GOING;
 	mutex_unlock(&module_mutex);

 	return ret;
···

 	mod->state = MODULE_STATE_UNFORMED;

-again:
 	mutex_lock(&module_mutex);
 	old = find_module_all(mod->name, strlen(mod->name), true);
 	if (old != NULL) {
-		if (old->state != MODULE_STATE_LIVE) {
+		if (old->state == MODULE_STATE_COMING
+		    || old->state == MODULE_STATE_UNFORMED) {
 			/* Wait in case it fails to load. */
 			mutex_unlock(&module_mutex);
 			err = wait_event_interruptible(module_wq,
					       finished_loading(mod->name));
 			if (err)
 				goto out_unlocked;
-			goto again;
+
+			/* The module might have gone in the meantime. */
+			mutex_lock(&module_mutex);
+			old = find_module_all(mod->name, strlen(mod->name),
+					      true);
 		}
-		err = -EEXIST;
+
+		/*
+		 * We are here only when the same module was being loaded. Do
+		 * not try to load it again right now. It prevents long delays
+		 * caused by serialized module load failures. It might happen
+		 * when more devices of the same type trigger load of
+		 * a particular module.
+		 */
+		if (old && old->state == MODULE_STATE_LIVE)
+			err = -EEXIST;
+		else
+			err = -EBUSY;
 		goto out;
 	}
 	mod_update_bounds(mod);
+8-2
kernel/sched/core.c
···
 	if (retval)
 		goto out_put_task;

+	/*
+	 * With non-SMP configs, user_cpus_ptr/user_mask isn't used and
+	 * alloc_user_cpus_ptr() returns NULL.
+	 */
 	user_mask = alloc_user_cpus_ptr(NUMA_NO_NODE);
-	if (IS_ENABLED(CONFIG_SMP) && !user_mask) {
+	if (user_mask) {
+		cpumask_copy(user_mask, in_mask);
+	} else if (IS_ENABLED(CONFIG_SMP)) {
 		retval = -ENOMEM;
 		goto out_put_task;
 	}
-	cpumask_copy(user_mask, in_mask);
+
 	ac = (struct affinity_context){
 		.new_mask  = in_mask,
 		.user_mask = user_mask,
+26-20
kernel/sched/fair.c
···
 	eenv_task_busy_time(&eenv, p, prev_cpu);

 	for (; pd; pd = pd->next) {
+		unsigned long util_min = p_util_min, util_max = p_util_max;
 		unsigned long cpu_cap, cpu_thermal_cap, util;
 		unsigned long cur_delta, max_spare_cap = 0;
 		unsigned long rq_util_min, rq_util_max;
-		unsigned long util_min, util_max;
 		unsigned long prev_spare_cap = 0;
 		int max_spare_cap_cpu = -1;
 		unsigned long base_energy;
···
 		eenv.pd_cap = 0;

 		for_each_cpu(cpu, cpus) {
+			struct rq *rq = cpu_rq(cpu);
+
 			eenv.pd_cap += cpu_thermal_cap;

 			if (!cpumask_test_cpu(cpu, sched_domain_span(sd)))
···
 			 * much capacity we can get out of the CPU; this is
 			 * aligned with sched_cpu_util().
 			 */
-			if (uclamp_is_used()) {
-				if (uclamp_rq_is_idle(cpu_rq(cpu))) {
-					util_min = p_util_min;
-					util_max = p_util_max;
-				} else {
-					/*
-					 * Open code uclamp_rq_util_with() except for
-					 * the clamp() part. Ie: apply max aggregation
-					 * only. util_fits_cpu() logic requires to
-					 * operate on non clamped util but must use the
-					 * max-aggregated uclamp_{min, max}.
-					 */
-					rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN);
-					rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX);
-
-					util_min = max(rq_util_min, p_util_min);
-					util_max = max(rq_util_max, p_util_max);
-				}
+			if (uclamp_is_used() && !uclamp_rq_is_idle(rq)) {
+				/*
+				 * Open code uclamp_rq_util_with() except for
+				 * the clamp() part. Ie: apply max aggregation
+				 * only. util_fits_cpu() logic requires to
+				 * operate on non clamped util but must use the
+				 * max-aggregated uclamp_{min, max}.
+				 */
+				rq_util_min = uclamp_rq_get(rq, UCLAMP_MIN);
+				rq_util_max = uclamp_rq_get(rq, UCLAMP_MAX);
+
+				util_min = max(rq_util_min, p_util_min);
+				util_max = max(rq_util_max, p_util_max);
 			}
 			if (!util_fits_cpu(util, util_min, util_max, cpu))
 				continue;
···
 	 * * Thermal pressure will impact all cpus in this perf domain
 	 *   equally.
 	 */
-	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+	if (sched_energy_enabled()) {
 		unsigned long inv_cap = capacity_orig - thermal_load_avg(rq);
-		struct perf_domain *pd = rcu_dereference(rq->rd->pd);
+		struct perf_domain *pd;

+		rcu_read_lock();
+
+		pd = rcu_dereference(rq->rd->pd);
 		rq->cpu_capacity_inverted = 0;

 		for (; pd; pd = pd->next) {
 			struct cpumask *pd_span = perf_domain_span(pd);
 			unsigned long pd_cap_orig, pd_cap;
+
+			/* We can't be inverted against our own pd */
+			if (cpumask_test_cpu(cpu_of(rq), pd_span))
+				continue;

 			cpu = cpumask_any(pd_span);
 			pd_cap_orig = arch_scale_cpu_capacity(cpu);
···
 				break;
 			}
 		}
+
+		rcu_read_unlock();
 	}

 	trace_sched_cpu_capacity_tp(rq);
+4-4
kernel/trace/Kconfig
···
 	default y
 	help
 	  The ring buffer has its own internal recursion. Although when
-	  recursion happens it wont cause harm because of the protection,
-	  but it does cause an unwanted overhead. Enabling this option will
+	  recursion happens it won't cause harm because of the protection,
+	  but it does cause unwanted overhead. Enabling this option will
 	  place where recursion was detected into the ftrace "recursed_functions"
 	  file.

···
 	  The test runs for 10 seconds. This will slow your boot time
 	  by at least 10 more seconds.

-	  At the end of the test, statics and more checks are done.
-	  It will output the stats of each per cpu buffer. What
+	  At the end of the test, statistics and more checks are done.
+	  It will output the stats of each per cpu buffer: What
 	  was written, the sizes, what was read, what was lost, and
 	  other similar details.

+2-1
kernel/trace/bpf_trace.c
···

 	work = container_of(entry, struct send_signal_irq_work, irq_work);
 	group_send_sig_info(work->sig, SEND_SIG_PRIV, work->task, work->type);
+	put_task_struct(work->task);
 }

 static int bpf_send_signal_common(u32 sig, enum pid_type type)
···
 	 * to the irq_work. The current task may change when queued
 	 * irq works get executed.
 	 */
-	work->task = current;
+	work->task = get_task_struct(current);
 	work->sig = sig;
 	work->type = type;
 	irq_work_queue(&work->irq_work);
+22-1
kernel/trace/ftrace.c
···
 	call_rcu(&hash->rcu, __free_ftrace_hash_rcu);
 }

+/**
+ * ftrace_free_filter - remove all filters for an ftrace_ops
+ * @ops - the ops to remove the filters from
+ */
 void ftrace_free_filter(struct ftrace_ops *ops)
 {
 	ftrace_ops_init(ops);
 	free_ftrace_hash(ops->func_hash->filter_hash);
 	free_ftrace_hash(ops->func_hash->notrace_hash);
 }
+EXPORT_SYMBOL_GPL(ftrace_free_filter);

 static struct ftrace_hash *alloc_ftrace_hash(int size_bits)
 {
···
 *
 * Filters denote which functions should be enabled when tracing is enabled
 * If @ip is NULL, it fails to update filter.
+ *
+ * This can allocate memory which must be freed before @ops can be freed,
+ * either by removing each filtered addr or by using
+ * ftrace_free_filter(@ops).
 */
 int ftrace_set_filter_ip(struct ftrace_ops *ops, unsigned long ip,
			 int remove, int reset)
···
 *
 * Filters denote which functions should be enabled when tracing is enabled
 * If @ips array or any ip specified within is NULL , it fails to update filter.
- */
+ *
+ * This can allocate memory which must be freed before @ops can be freed,
+ * either by removing each filtered addr or by using
+ * ftrace_free_filter(@ops).
+*/
 int ftrace_set_filter_ips(struct ftrace_ops *ops, unsigned long *ips,
			  unsigned int cnt, int remove, int reset)
 {
···
 *
 * Filters denote which functions should be enabled when tracing is enabled.
 * If @buf is NULL and reset is set, all functions will be enabled for tracing.
+ *
+ * This can allocate memory which must be freed before @ops can be freed,
+ * either by removing each filtered addr or by using
+ * ftrace_free_filter(@ops).
 */
 int ftrace_set_filter(struct ftrace_ops *ops, unsigned char *buf,
		      int len, int reset)
···
 * Notrace Filters denote which functions should not be enabled when tracing
 * is enabled. If @buf is NULL and reset is set, all functions will be enabled
 * for tracing.
+ *
+ * This can allocate memory which must be freed before @ops can be freed,
+ * either by removing each filtered addr or by using
+ * ftrace_free_filter(@ops).
 */
 int ftrace_set_notrace(struct ftrace_ops *ops, unsigned char *buf,
		       int len, int reset)
+1-1
kernel/trace/rv/rv.c
···
 	struct rv_monitor_def *mdef;
 	int retval = -EINVAL;
 	bool enable = true;
-	char *ptr = buff;
+	char *ptr;
 	int len;

 	if (count < 1 || count > MAX_RV_MONITOR_NAME_SIZE + 1)
···
 extern void trace_event_enable_tgid_record(bool enable);

 extern int event_trace_init(void);
+extern int init_events(void);
 extern int event_trace_add_tracer(struct dentry *parent, struct trace_array *tr);
 extern int event_trace_del_tracer(struct trace_array *tr);
 extern void __trace_early_add_events(struct trace_array *tr);
+4-4
kernel/trace/trace_events_filter.c
···
 }

 /**
- * prog_entry - a singe entry in the filter program
+ * struct prog_entry - a singe entry in the filter program
  * @target:	     Index to jump to on a branch (actually one minus the index)
  * @when_to_branch: The value of the result of the predicate to do a branch
  * @pred:	     The predicate to execute.
···
 };

 /**
- * update_preds- assign a program entry a label target
+ * update_preds - assign a program entry a label target
  * @prog: The program array
  * @N: The index of the current entry in @prog
- * @when_to_branch: What to assign a program entry for its branch condition
+ * @invert: What to assign a program entry for its branch condition
  *
  * The program entry at @N has a target that points to the index of a program
  * entry that can have its target and when_to_branch fields updated.
  * Update the current program entry denoted by index @N target field to be
  * that of the updated entry. This will denote the entry to update if
- * we are processing an "||" after an "&&"
+ * we are processing an "||" after an "&&".
  */
 static void update_preds(struct prog_entry *prog, int N, int invert)
 {
···
 	select KALLSYMS
 	select CRC32
 	select STACKDEPOT
+	select STACKDEPOT_ALWAYS_INIT if !DEBUG_KMEMLEAK_DEFAULT_OFF
 	help
 	  Say Y here if you want to enable the memory leak
 	  detector. The memory allocation/freeing is traced in a way
···
 	depends on DEBUG_KERNEL && PROC_FS
 	default y
 	help
-	  If you say Y here, the /proc/sched_debug file will be provided
+	  If you say Y here, the /sys/kernel/debug/sched file will be provided
 	  that can help debug the scheduler. The runtime overhead of this
 	  option is minimal.

···
 	help
 	  Add fault injections into various functions that are annotated with
 	  ALLOW_ERROR_INJECTION() in the kernel. BPF may also modify the return
-	  value of theses functions. This is useful to test error paths of code.
+	  value of these functions. This is useful to test error paths of code.

 	  If unsure, say N

···
 	  to the KUnit documentation in Documentation/dev-tools/kunit/.

 	  If unsure, say N.
+
+config MEMCPY_SLOW_KUNIT_TEST
+	bool "Include exhaustive memcpy tests"
+	depends on MEMCPY_KUNIT_TEST
+	default y
+	help
+	  Some memcpy tests are quite exhaustive in checking for overlaps
+	  and bit ranges. These can be very slow, so they are split out
+	  as a separate config, in case they need to be disabled.

 config IS_SIGNED_TYPE_KUNIT_TEST
 	tristate "Test is_signed_type() macro" if !KUNIT_ALL_TESTS
+1-1
lib/Kconfig.kcsan
···
 	  Enable support for modeling a subset of weak memory, which allows
 	  detecting a subset of data races due to missing memory barriers.

-	  Depends on KCSAN_STRICT, because the options strenghtening certain
+	  Depends on KCSAN_STRICT, because the options strengthening certain
 	  plain accesses by default (depending on !KCSAN_STRICT) reduce the
 	  ability to detect any data races invoving reordered accesses, in
 	  particular reordered writes.
+31
lib/dec_and_lock.c
···
 	return 0;
 }
 EXPORT_SYMBOL(_atomic_dec_and_lock_irqsave);
+
+int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock)
+{
+	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
+	if (atomic_add_unless(atomic, -1, 1))
+		return 0;
+
+	/* Otherwise do it the slow way */
+	raw_spin_lock(lock);
+	if (atomic_dec_and_test(atomic))
+		return 1;
+	raw_spin_unlock(lock);
+	return 0;
+}
+EXPORT_SYMBOL(_atomic_dec_and_raw_lock);
+
+int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock,
+				     unsigned long *flags)
+{
+	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
+	if (atomic_add_unless(atomic, -1, 1))
+		return 0;
+
+	/* Otherwise do it the slow way */
+	raw_spin_lock_irqsave(lock, *flags);
+	if (atomic_dec_and_test(atomic))
+		return 1;
+	raw_spin_unlock_irqrestore(lock, *flags);
+	return 0;
+}
+EXPORT_SYMBOL(_atomic_dec_and_raw_lock_irqsave);
···
			    unsigned char piv)
 {
 	struct maple_node *node = mte_to_node(mn);
+	enum maple_type type = mte_node_type(mn);

-	if (piv >= mt_pivots[piv]) {
+	if (piv >= mt_pivots[type]) {
 		WARN_ON(1);
 		return 0;
 	}
-	switch (mte_node_type(mn)) {
+	switch (type) {
 	case maple_arange_64:
 		return node->ma64.pivot[piv];
 	case maple_range_64:
···
 	unsigned long *pivots, *gaps;
 	void __rcu **slots;
 	unsigned long gap = 0;
-	unsigned long max, min, index;
+	unsigned long max, min;
 	unsigned char offset;

 	if (unlikely(mas_is_err(mas)))
···
 		min = mas_safe_min(mas, pivots, --offset);

 	max = mas_safe_pivot(mas, pivots, offset, type);
-	index = mas->index;
-	while (index <= max) {
+	while (mas->index <= max) {
 		gap = 0;
 		if (gaps)
 			gap = gaps[offset];
···
 		min = mas_safe_min(mas, pivots, offset);
 	}

-	if (unlikely(index > max)) {
-		mas_set_err(mas, -EBUSY);
-		return false;
-	}
+	if (unlikely((mas->index > max) || (size - 1 > max - mas->index)))
+		goto no_space;

 	if (unlikely(ma_is_leaf(type))) {
 		mas->offset = offset;
···
 	return false;

ascend:
-	if (mte_is_root(mas->node))
-		mas_set_err(mas, -EBUSY);
+	if (!mte_is_root(mas->node))
+		return false;

+no_space:
+	mas_set_err(mas, -EBUSY);
 	return false;
 }

+2
lib/memcpy_kunit.c
···

 static void init_large(struct kunit *test)
 {
+	if (!IS_ENABLED(CONFIG_MEMCPY_SLOW_KUNIT_TEST))
+		kunit_skip(test, "Slow test skipped. Enable with CONFIG_MEMCPY_SLOW_KUNIT_TEST=y");

 	/* Get many bit patterns. */
 	get_random_bytes(large_src, ARRAY_SIZE(large_src));
+3
lib/nlattr.c
···
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/jiffies.h>
+#include <linux/nospec.h>
 #include <linux/skbuff.h>
 #include <linux/string.h>
 #include <linux/types.h>
···
 	if (type <= 0 || type > maxtype)
 		return 0;

+	type = array_index_nospec(type, maxtype + 1);
 	pt = &policy[type];

 	BUG_ON(pt->type > NLA_TYPE_MAX);
···
 		}
 		continue;
 	}
+	type = array_index_nospec(type, maxtype + 1);
 	if (policy) {
 		int err = validate_nla(nla, maxtype, policy,
				       validate, extack, depth);
+89
lib/test_maple_tree.c
···
 	mt_set_non_kernel(0);
 }

+static noinline void check_empty_area_window(struct maple_tree *mt)
+{
+	unsigned long i, nr_entries = 20;
+	MA_STATE(mas, mt, 0, 0);
+
+	for (i = 1; i <= nr_entries; i++)
+		mtree_store_range(mt, i*10, i*10 + 9,
+				  xa_mk_value(i), GFP_KERNEL);
+
+	/* Create another hole besides the one at 0 */
+	mtree_store_range(mt, 160, 169, NULL, GFP_KERNEL);
+
+	/* Check lower bounds that don't fit */
+	rcu_read_lock();
+	MT_BUG_ON(mt, mas_empty_area_rev(&mas, 5, 90, 10) != -EBUSY);
+
+	mas_reset(&mas);
+	MT_BUG_ON(mt, mas_empty_area_rev(&mas, 6, 90, 5) != -EBUSY);
+
+	/* Check lower bound that does fit */
+	mas_reset(&mas);
+	MT_BUG_ON(mt, mas_empty_area_rev(&mas, 5, 90, 5) != 0);
+	MT_BUG_ON(mt, mas.index != 5);
+	MT_BUG_ON(mt, mas.last != 9);
+	rcu_read_unlock();
+
+	/* Check one gap that doesn't fit and one that does */
+	rcu_read_lock();
+	mas_reset(&mas);
+	MT_BUG_ON(mt, mas_empty_area_rev(&mas, 5, 217, 9) != 0);
+	MT_BUG_ON(mt, mas.index != 161);
+	MT_BUG_ON(mt, mas.last != 169);
+
+	/* Check one gap that does fit above the min */
+	mas_reset(&mas);
+	MT_BUG_ON(mt, mas_empty_area_rev(&mas, 100, 218, 3) != 0);
+	MT_BUG_ON(mt, mas.index != 216);
+	MT_BUG_ON(mt, mas.last != 218);
+
+	/* Check size that doesn't fit any gap */
+	mas_reset(&mas);
+	MT_BUG_ON(mt, mas_empty_area_rev(&mas, 100, 218, 16) != -EBUSY);
+
+	/*
+	 * Check size that doesn't fit the lower end of the window but
+	 * does fit the gap
+	 */
+	mas_reset(&mas);
+	MT_BUG_ON(mt, mas_empty_area_rev(&mas, 167, 200, 4) != -EBUSY);
+
+	/*
+	 * Check size that doesn't fit the upper end of the window but
+	 * does fit the gap
+	 */
+	mas_reset(&mas);
+	MT_BUG_ON(mt, mas_empty_area_rev(&mas, 100, 162, 4) != -EBUSY);
+
+	/* Check mas_empty_area forward */
+	mas_reset(&mas);
+	MT_BUG_ON(mt, mas_empty_area(&mas, 0, 100, 9) != 0);
+	MT_BUG_ON(mt, mas.index != 0);
+	MT_BUG_ON(mt, mas.last != 8);
+
+	mas_reset(&mas);
+	MT_BUG_ON(mt, mas_empty_area(&mas, 0, 100, 4) != 0);
+	MT_BUG_ON(mt, mas.index != 0);
+	MT_BUG_ON(mt, mas.last != 3);
+
+	mas_reset(&mas);
+	MT_BUG_ON(mt, mas_empty_area(&mas, 0, 100, 11) != -EBUSY);
+
+	mas_reset(&mas);
+	MT_BUG_ON(mt, mas_empty_area(&mas, 5, 100, 6) != -EBUSY);
+
+	mas_reset(&mas);
+	MT_BUG_ON(mt, mas_empty_area(&mas, 0, 8, 10) != -EBUSY);
+
+	mas_reset(&mas);
+	mas_empty_area(&mas, 100, 165, 3);
+
+	mas_reset(&mas);
+	MT_BUG_ON(mt, mas_empty_area(&mas, 100, 163, 6) != -EBUSY);
+	rcu_read_unlock();
+}
+
 static DEFINE_MTREE(tree);
 static int maple_tree_seed(void)
 {
···
 	mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
 	check_bnode_min_spanning(&tree);
+	mtree_destroy(&tree);
+
+	mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
+	check_empty_area_window(&tree);
 	mtree_destroy(&tree);

 #if defined(BENCH)
···
 		entry = huge_pte_clear_uffd_wp(entry);
 		set_huge_pte_at(dst, addr, dst_pte, entry);
 	} else if (unlikely(is_pte_marker(entry))) {
+		/* No swap on hugetlb */
+		WARN_ON_ONCE(
+		    is_swapin_error_entry(pte_to_swp_entry(entry)));
 		/*
 		 * We copy the pte marker only if the dst vma has
 		 * uffd-wp enabled.
+21-1
mm/khugepaged.c
···
 	return SCAN_SUCCEED;
 }

+/*
+ * See pmd_trans_unstable() for how the result may change out from
+ * underneath us, even if we hold mmap_lock in read.
+ */
 static int find_pmd_or_thp_or_none(struct mm_struct *mm,
				   unsigned long address,
				   pmd_t **pmd)
···
 #endif
 	if (pmd_none(pmde))
 		return SCAN_PMD_NONE;
+	if (!pmd_present(pmde))
+		return SCAN_PMD_NULL;
 	if (pmd_trans_huge(pmde))
 		return SCAN_PMD_MAPPED;
+	if (pmd_devmap(pmde))
+		return SCAN_PMD_NULL;
 	if (pmd_bad(pmde))
 		return SCAN_PMD_NULL;
 	return SCAN_SUCCEED;
···
 		 * has higher cost too. It would also probably require locking
 		 * the anon_vma.
 		 */
-		if (vma->anon_vma) {
+		if (READ_ONCE(vma->anon_vma)) {
 			result = SCAN_PAGE_ANON;
 			goto next;
 		}
···
 		result = SCAN_PTE_MAPPED_HUGEPAGE;
 		if ((cc->is_khugepaged || is_target) &&
		    mmap_write_trylock(mm)) {
+			/*
+			 * Re-check whether we have an ->anon_vma, because
+			 * collapse_and_free_pmd() requires that either no
+			 * ->anon_vma exists or the anon_vma is locked.
+			 * We already checked ->anon_vma above, but that check
+			 * is racy because ->anon_vma can be populated under the
+			 * mmap lock in read mode.
+			 */
+			if (vma->anon_vma) {
+				result = SCAN_PAGE_ANON;
+				goto unlock_next;
+			}
 			/*
 			 * When a vma is registered with uffd-wp, we can't
 			 * recycle the pmd pgtable because there can be pte
···
 			return -EBUSY;
 		return -ENOENT;
 	} else if (is_pte_marker_entry(entry)) {
-		/*
-		 * We're copying the pgtable should only because dst_vma has
-		 * uffd-wp enabled, do sanity check.
-		 */
-		WARN_ON_ONCE(!userfaultfd_wp(dst_vma));
-		set_pte_at(dst_mm, addr, dst_pte, pte);
+		if (is_swapin_error_entry(entry) || userfaultfd_wp(dst_vma))
+			set_pte_at(dst_mm, addr, dst_pte, pte);
 		return 0;
 	}
 	if (!userfaultfd_wp(dst_vma))
···
 	/*
 	 * Be careful so that we will only recover a special uffd-wp pte into a
 	 * none pte.  Otherwise it means the pte could have changed, so retry.
+	 *
+	 * This should also cover the case where e.g. the pte changed
+	 * quickly from a PTE_MARKER_UFFD_WP into PTE_MARKER_SWAPIN_ERROR.
+	 * So is_pte_marker() check is not enough to safely drop the pte.
 	 */
-	if (is_pte_marker(*vmf->pte))
+	if (pte_same(vmf->orig_pte, *vmf->pte))
 		pte_clear(vmf->vma->vm_mm, vmf->address, vmf->pte);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	return 0;
+2-1
mm/mempolicy.c
···

 	/* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
 	if (flags & (MPOL_MF_MOVE_ALL) ||
-	    (flags & MPOL_MF_MOVE && page_mapcount(page) == 1)) {
+	    (flags & MPOL_MF_MOVE && page_mapcount(page) == 1 &&
+	     !hugetlb_pmd_shared(pte))) {
 		if (isolate_hugetlb(page, qp->pagelist) &&
		    (flags & MPOL_MF_STRICT))
 			/*
+7-1
mm/mprotect.c
···
 					newpte = pte_swp_mksoft_dirty(newpte);
 				if (pte_swp_uffd_wp(oldpte))
 					newpte = pte_swp_mkuffd_wp(newpte);
-			} else if (pte_marker_entry_uffd_wp(entry)) {
+			} else if (is_pte_marker_entry(entry)) {
+				/*
+				 * Ignore swapin errors unconditionally,
+				 * because any access should sigbus anyway.
+				 */
+				if (is_swapin_error_entry(entry))
+					continue;
 				/*
 				 * If this is uffd-wp pte marker and we'd like
 				 * to unprotect it, drop it; the next page
+19-6
mm/mremap.c
···
 	}
 
 	/*
-	 * Function vma_merge() is called on the extension we are adding to
-	 * the already existing vma, vma_merge() will merge this extension with
-	 * the already existing vma (expand operation itself) and possibly also
-	 * with the next vma if it becomes adjacent to the expanded vma and
-	 * otherwise compatible.
+	 * Function vma_merge() is called on the extension we
+	 * are adding to the already existing vma, vma_merge()
+	 * will merge this extension with the already existing
+	 * vma (expand operation itself) and possibly also with
+	 * the next vma if it becomes adjacent to the expanded
+	 * vma and otherwise compatible.
+	 *
+	 * However, vma_merge() can currently fail due to
+	 * is_mergeable_vma() check for vm_ops->close (see the
+	 * comment there). Yet this should not prevent vma
+	 * expanding, so perform a simple expand for such vma.
+	 * Ideally the check for close op should be only done
+	 * when a vma would be actually removed due to a merge.
 	 */
-	vma = vma_merge(mm, vma, extension_start, extension_end,
+	if (!vma->vm_ops || !vma->vm_ops->close) {
+		vma = vma_merge(mm, vma, extension_start, extension_end,
 			vma->vm_flags, vma->anon_vma, vma->vm_file,
 			extension_pgoff, vma_policy(vma),
 			vma->vm_userfaultfd_ctx, anon_vma_name(vma));
+	} else if (vma_adjust(vma, vma->vm_start, addr + new_len,
+		   vma->vm_pgoff, NULL)) {
+		vma = NULL;
+	}
 	if (!vma) {
 		vm_unacct_memory(pages);
 		ret = -ENOMEM;
···
 	if (mem_cgroup_disabled())
 		return;
 
+	/* migration can happen before addition */
+	if (!mm->lru_gen.memcg)
+		return;
+
 	rcu_read_lock();
 	memcg = mem_cgroup_from_task(task);
 	rcu_read_unlock();
 	if (memcg == mm->lru_gen.memcg)
 		return;
 
-	VM_WARN_ON_ONCE(!mm->lru_gen.memcg);
 	VM_WARN_ON_ONCE(list_empty(&mm->lru_gen.list));
 
 	lru_gen_del_mm(mm);
···
 unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
 					   unsigned long nr_pages,
 					   gfp_t gfp_mask,
-					   unsigned int reclaim_options,
-					   nodemask_t *nodemask)
+					   unsigned int reclaim_options)
 {
 	unsigned long nr_reclaimed;
 	unsigned int noreclaim_flag;
···
 		.may_unmap = 1,
 		.may_swap = !!(reclaim_options & MEMCG_RECLAIM_MAY_SWAP),
 		.proactive = !!(reclaim_options & MEMCG_RECLAIM_PROACTIVE),
-		.nodemask = nodemask,
 	};
 	/*
 	 * Traverse the ZONELIST_FALLBACK zonelist of the current node to put
+205-32
mm/zsmalloc.c
···
  * have room for two bit at least.
  */
 #define OBJ_ALLOCATED_TAG 1
-#define OBJ_TAG_BITS 1
+
+#ifdef CONFIG_ZPOOL
+/*
+ * The second least-significant bit in the object's header identifies if the
+ * value stored at the header is a deferred handle from the last reclaim
+ * attempt.
+ *
+ * As noted above, this is valid because we have room for two bits.
+ */
+#define OBJ_DEFERRED_HANDLE_TAG 2
+#define OBJ_TAG_BITS 2
+#define OBJ_TAG_MASK (OBJ_ALLOCATED_TAG | OBJ_DEFERRED_HANDLE_TAG)
+#else
+#define OBJ_TAG_BITS 1
+#define OBJ_TAG_MASK OBJ_ALLOCATED_TAG
+#endif /* CONFIG_ZPOOL */
+
 #define OBJ_INDEX_BITS (BITS_PER_LONG - _PFN_BITS - OBJ_TAG_BITS)
 #define OBJ_INDEX_MASK ((_AC(1, UL) << OBJ_INDEX_BITS) - 1)
···
 		 * Handle of allocated object.
 		 */
 		unsigned long handle;
+#ifdef CONFIG_ZPOOL
+		/*
+		 * Deferred handle of a reclaimed object.
+		 */
+		unsigned long deferred_handle;
+#endif
 	};
 };
···
 	/* links the zspage to the lru list in the pool */
 	struct list_head lru;
 	bool under_reclaim;
-	/* list of unfreed handles whose objects have been reclaimed */
-	unsigned long *deferred_handles;
 #endif
 
 	struct zs_pool *pool;
···
 	return *(unsigned long *)handle;
 }
 
-static bool obj_allocated(struct page *page, void *obj, unsigned long *phandle)
+static bool obj_tagged(struct page *page, void *obj, unsigned long *phandle,
+		int tag)
 {
 	unsigned long handle;
 	struct zspage *zspage = get_zspage(page);
···
 	} else
 		handle = *(unsigned long *)obj;
 
-	if (!(handle & OBJ_ALLOCATED_TAG))
+	if (!(handle & tag))
 		return false;
 
-	*phandle = handle & ~OBJ_ALLOCATED_TAG;
+	/* Clear all tags before returning the handle */
+	*phandle = handle & ~OBJ_TAG_MASK;
 	return true;
 }
+
+static inline bool obj_allocated(struct page *page, void *obj, unsigned long *phandle)
+{
+	return obj_tagged(page, obj, phandle, OBJ_ALLOCATED_TAG);
+}
+
+#ifdef CONFIG_ZPOOL
+static bool obj_stores_deferred_handle(struct page *page, void *obj,
+		unsigned long *phandle)
+{
+	return obj_tagged(page, obj, phandle, OBJ_DEFERRED_HANDLE_TAG);
+}
+#endif
 
 static void reset_page(struct page *page)
 {
···
 }
 
 #ifdef CONFIG_ZPOOL
+static unsigned long find_deferred_handle_obj(struct size_class *class,
+		struct page *page, int *obj_idx);
+
 /*
  * Free all the deferred handles whose objects are freed in zs_free.
  */
-static void free_handles(struct zs_pool *pool, struct zspage *zspage)
+static void free_handles(struct zs_pool *pool, struct size_class *class,
+		struct zspage *zspage)
 {
-	unsigned long handle = (unsigned long)zspage->deferred_handles;
+	int obj_idx = 0;
+	struct page *page = get_first_page(zspage);
+	unsigned long handle;
 
-	while (handle) {
-		unsigned long nxt_handle = handle_to_obj(handle);
+	while (1) {
+		handle = find_deferred_handle_obj(class, page, &obj_idx);
+		if (!handle) {
+			page = get_next_page(page);
+			if (!page)
+				break;
+			obj_idx = 0;
+			continue;
+		}
 
 		cache_free_handle(pool, handle);
-		handle = nxt_handle;
+		obj_idx++;
 	}
 }
 #else
-static inline void free_handles(struct zs_pool *pool, struct zspage *zspage) {}
+static inline void free_handles(struct zs_pool *pool, struct size_class *class,
+		struct zspage *zspage) {}
 #endif
 
 static void __free_zspage(struct zs_pool *pool, struct size_class *class,
···
 	VM_BUG_ON(fg != ZS_EMPTY);
 
 	/* Free all deferred handles from zs_free */
-	free_handles(pool, zspage);
+	free_handles(pool, class, zspage);
 
 	next = page = get_first_page(zspage);
 	do {
···
 #ifdef CONFIG_ZPOOL
 	INIT_LIST_HEAD(&zspage->lru);
 	zspage->under_reclaim = false;
-	zspage->deferred_handles = NULL;
 #endif
 
 	set_freeobj(zspage, 0);
···
 }
 EXPORT_SYMBOL_GPL(zs_malloc);
 
-static void obj_free(int class_size, unsigned long obj)
+static void obj_free(int class_size, unsigned long obj, unsigned long *handle)
 {
 	struct link_free *link;
 	struct zspage *zspage;
···
 	zspage = get_zspage(f_page);
 
 	vaddr = kmap_atomic(f_page);
-
-	/* Insert this object in containing zspage's freelist */
 	link = (struct link_free *)(vaddr + f_offset);
-	if (likely(!ZsHugePage(zspage)))
-		link->next = get_freeobj(zspage) << OBJ_TAG_BITS;
-	else
-		f_page->index = 0;
+
+	if (handle) {
+#ifdef CONFIG_ZPOOL
+		/* Stores the (deferred) handle in the object's header */
+		*handle |= OBJ_DEFERRED_HANDLE_TAG;
+		*handle &= ~OBJ_ALLOCATED_TAG;
+
+		if (likely(!ZsHugePage(zspage)))
+			link->deferred_handle = *handle;
+		else
+			f_page->index = *handle;
+#endif
+	} else {
+		/* Insert this object in containing zspage's freelist */
+		if (likely(!ZsHugePage(zspage)))
+			link->next = get_freeobj(zspage) << OBJ_TAG_BITS;
+		else
+			f_page->index = 0;
+		set_freeobj(zspage, f_objidx);
+	}
+
 	kunmap_atomic(vaddr);
-	set_freeobj(zspage, f_objidx);
 	mod_zspage_inuse(zspage, -1);
 }
···
 	zspage = get_zspage(f_page);
 	class = zspage_class(pool, zspage);
 
-	obj_free(class->size, obj);
 	class_stat_dec(class, OBJ_USED, 1);
 
 #ifdef CONFIG_ZPOOL
···
 		 * Reclaim needs the handles during writeback. It'll free
 		 * them along with the zspage when it's done with them.
 		 *
-		 * Record current deferred handle at the memory location
-		 * whose address is given by handle.
+		 * Record current deferred handle in the object's header.
 		 */
-		record_obj(handle, (unsigned long)zspage->deferred_handles);
-		zspage->deferred_handles = (unsigned long *)handle;
+		obj_free(class->size, obj, &handle);
 		spin_unlock(&pool->lock);
 		return;
 	}
 #endif
+	obj_free(class->size, obj, NULL);
+
 	fullness = fix_fullness_group(class, zspage);
 	if (fullness == ZS_EMPTY)
 		free_zspage(pool, class, zspage);
···
 }
 
 /*
- * Find alloced object in zspage from index object and
+ * Find object with a certain tag in zspage from index object and
  * return handle.
  */
-static unsigned long find_alloced_obj(struct size_class *class,
-				      struct page *page, int *obj_idx)
+static unsigned long find_tagged_obj(struct size_class *class,
+				     struct page *page, int *obj_idx, int tag)
 {
 	unsigned int offset;
 	int index = *obj_idx;
···
 	offset += class->size * index;
 
 	while (offset < PAGE_SIZE) {
-		if (obj_allocated(page, addr + offset, &handle))
+		if (obj_tagged(page, addr + offset, &handle, tag))
 			break;
 
 		offset += class->size;
···
 
 	return handle;
 }
+
+/*
+ * Find alloced object in zspage from index object and
+ * return handle.
+ */
+static unsigned long find_alloced_obj(struct size_class *class,
+				      struct page *page, int *obj_idx)
+{
+	return find_tagged_obj(class, page, obj_idx, OBJ_ALLOCATED_TAG);
+}
+
+#ifdef CONFIG_ZPOOL
+/*
+ * Find object storing a deferred handle in header in zspage from index object
+ * and return handle.
+ */
+static unsigned long find_deferred_handle_obj(struct size_class *class,
+		struct page *page, int *obj_idx)
+{
+	return find_tagged_obj(class, page, obj_idx, OBJ_DEFERRED_HANDLE_TAG);
+}
+#endif
 
 struct zs_compact_control {
 	/* Source spage for migration which could be a subpage of zspage */
···
 		zs_object_copy(class, free_obj, used_obj);
 		obj_idx++;
 		record_obj(handle, free_obj);
-		obj_free(class->size, used_obj);
+		obj_free(class->size, used_obj, NULL);
 	}
 
 	/* Remember last position in this iteration */
···
 EXPORT_SYMBOL_GPL(zs_destroy_pool);
 
 #ifdef CONFIG_ZPOOL
+static void restore_freelist(struct zs_pool *pool, struct size_class *class,
+		struct zspage *zspage)
+{
+	unsigned int obj_idx = 0;
+	unsigned long handle, off = 0; /* off is within-page offset */
+	struct page *page = get_first_page(zspage);
+	struct link_free *prev_free = NULL;
+	void *prev_page_vaddr = NULL;
+
+	/* in case no free object found */
+	set_freeobj(zspage, (unsigned int)(-1UL));
+
+	while (page) {
+		void *vaddr = kmap_atomic(page);
+		struct page *next_page;
+
+		while (off < PAGE_SIZE) {
+			void *obj_addr = vaddr + off;
+
+			/* skip allocated object */
+			if (obj_allocated(page, obj_addr, &handle)) {
+				obj_idx++;
+				off += class->size;
+				continue;
+			}
+
+			/* free deferred handle from reclaim attempt */
+			if (obj_stores_deferred_handle(page, obj_addr, &handle))
+				cache_free_handle(pool, handle);
+
+			if (prev_free)
+				prev_free->next = obj_idx << OBJ_TAG_BITS;
+			else /* first free object found */
+				set_freeobj(zspage, obj_idx);
+
+			prev_free = (struct link_free *)vaddr + off / sizeof(*prev_free);
+			/* if last free object in a previous page, need to unmap */
+			if (prev_page_vaddr) {
+				kunmap_atomic(prev_page_vaddr);
+				prev_page_vaddr = NULL;
+			}
+
+			obj_idx++;
+			off += class->size;
+		}
+
+		/*
+		 * Handle the last (full or partial) object on this page.
+		 */
+		next_page = get_next_page(page);
+		if (next_page) {
+			if (!prev_free || prev_page_vaddr) {
+				/*
+				 * There is no free object in this page, so we can safely
+				 * unmap it.
+				 */
+				kunmap_atomic(vaddr);
+			} else {
+				/* update prev_page_vaddr since prev_free is on this page */
+				prev_page_vaddr = vaddr;
+			}
+		} else { /* this is the last page */
+			if (prev_free) {
+				/*
+				 * Reset OBJ_TAG_BITS bit to last link to tell
+				 * whether it's allocated object or not.
+				 */
+				prev_free->next = -1UL << OBJ_TAG_BITS;
+			}
+
+			/* unmap previous page (if not done yet) */
+			if (prev_page_vaddr) {
+				kunmap_atomic(prev_page_vaddr);
+				prev_page_vaddr = NULL;
+			}
+
+			kunmap_atomic(vaddr);
+		}
+
+		page = next_page;
+		off %= PAGE_SIZE;
+	}
+}
+
 static int zs_reclaim_page(struct zs_pool *pool, unsigned int retries)
 {
 	int i, obj_idx, ret = 0;
···
 		return 0;
 	}
 
+	/*
+	 * Eviction fails on one of the handles, so we need to restore zspage.
+	 * We need to rebuild its freelist (and free stored deferred handles),
+	 * put it back to the correct size class, and add it to the LRU list.
+	 */
+	restore_freelist(pool, class, zspage);
 	putback_zspage(class, zspage);
 	list_add(&zspage->lru, &pool->lru);
 	unlock_zspage(zspage);
···
 	canid_t rxid;
 	ktime_t tx_gap;
 	ktime_t lastrxcf_tstamp;
-	struct hrtimer rxtimer, txtimer;
+	struct hrtimer rxtimer, txtimer, txfrtimer;
 	struct can_isotp_options opt;
 	struct can_isotp_fc_options rxfc, txfc;
 	struct can_isotp_ll_options ll;
···
 	}
 
 	/* start timer to send next consecutive frame with correct delay */
-	hrtimer_start(&so->txtimer, so->tx_gap, HRTIMER_MODE_REL_SOFT);
+	hrtimer_start(&so->txfrtimer, so->tx_gap, HRTIMER_MODE_REL_SOFT);
 }
 
 static enum hrtimer_restart isotp_tx_timer_handler(struct hrtimer *hrtimer)
···
 	struct isotp_sock *so = container_of(hrtimer, struct isotp_sock,
 					     txtimer);
 	struct sock *sk = &so->sk;
-	enum hrtimer_restart restart = HRTIMER_NORESTART;
 
-	switch (so->tx.state) {
-	case ISOTP_SENDING:
+	/* don't handle timeouts in IDLE state */
+	if (so->tx.state == ISOTP_IDLE)
+		return HRTIMER_NORESTART;
 
-		/* cfecho should be consumed by isotp_rcv_echo() here */
-		if (!so->cfecho) {
-			/* start timeout for unlikely lost echo skb */
-			hrtimer_set_expires(&so->txtimer,
-					    ktime_add(ktime_get(),
-						      ktime_set(ISOTP_ECHO_TIMEOUT, 0)));
-			restart = HRTIMER_RESTART;
+	/* we did not get any flow control or echo frame in time */
 
-			/* push out the next consecutive frame */
-			isotp_send_cframe(so);
-			break;
-		}
+	/* report 'communication error on send' */
+	sk->sk_err = ECOMM;
+	if (!sock_flag(sk, SOCK_DEAD))
+		sk_error_report(sk);
 
-		/* cfecho has not been cleared in isotp_rcv_echo() */
-		pr_notice_once("can-isotp: cfecho %08X timeout\n", so->cfecho);
-		fallthrough;
+	/* reset tx state */
+	so->tx.state = ISOTP_IDLE;
+	wake_up_interruptible(&so->wait);
 
-	case ISOTP_WAIT_FC:
-	case ISOTP_WAIT_FIRST_FC:
+	return HRTIMER_NORESTART;
+}
 
-		/* we did not get any flow control frame in time */
+static enum hrtimer_restart isotp_txfr_timer_handler(struct hrtimer *hrtimer)
+{
+	struct isotp_sock *so = container_of(hrtimer, struct isotp_sock,
+					     txfrtimer);
 
-		/* report 'communication error on send' */
-		sk->sk_err = ECOMM;
-		if (!sock_flag(sk, SOCK_DEAD))
-			sk_error_report(sk);
+	/* start echo timeout handling and cover below protocol error */
+	hrtimer_start(&so->txtimer, ktime_set(ISOTP_ECHO_TIMEOUT, 0),
+		      HRTIMER_MODE_REL_SOFT);
 
-		/* reset tx state */
-		so->tx.state = ISOTP_IDLE;
-		wake_up_interruptible(&so->wait);
-		break;
+	/* cfecho should be consumed by isotp_rcv_echo() here */
+	if (so->tx.state == ISOTP_SENDING && !so->cfecho)
+		isotp_send_cframe(so);
 
-	default:
-		WARN_ONCE(1, "can-isotp: tx timer state %08X cfecho %08X\n",
-			  so->tx.state, so->cfecho);
-	}
-
-	return restart;
+	return HRTIMER_NORESTART;
 }
 
 static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
···
 	/* wait for complete transmission of current pdu */
 	wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE);
 
+	/* force state machines to be idle also when a signal occurred */
+	so->tx.state = ISOTP_IDLE;
+	so->rx.state = ISOTP_IDLE;
+
 	spin_lock(&isotp_notifier_lock);
 	while (isotp_busy_notifier == so) {
 		spin_unlock(&isotp_notifier_lock);
···
 		}
 	}
 
+	hrtimer_cancel(&so->txfrtimer);
 	hrtimer_cancel(&so->txtimer);
 	hrtimer_cancel(&so->rxtimer);
···
 	so->rxtimer.function = isotp_rx_timer_handler;
 	hrtimer_init(&so->txtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
 	so->txtimer.function = isotp_tx_timer_handler;
+	hrtimer_init(&so->txfrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
+	so->txfrtimer.function = isotp_txfr_timer_handler;
 
 	init_waitqueue_head(&so->wait);
 	spin_lock_init(&so->rx_lock);
-4
net/can/j1939/transport.c
···
 	bool active;
 
 	j1939_session_list_lock(priv);
-	/* This function should be called with a session ref-count of at
-	 * least 2.
-	 */
-	WARN_ON_ONCE(kref_read(&session->kref) < 2);
 	active = j1939_session_deactivate_locked(session);
 	j1939_session_list_unlock(priv);
+31-16
net/can/raw.c
···
 		return;
 
 	/* make sure to not pass oversized frames to the socket */
-	if ((can_is_canfd_skb(oskb) && !ro->fd_frames && !ro->xl_frames) ||
-	    (can_is_canxl_skb(oskb) && !ro->xl_frames))
+	if ((!ro->fd_frames && can_is_canfd_skb(oskb)) ||
+	    (!ro->xl_frames && can_is_canxl_skb(oskb)))
 		return;
 
 	/* eliminate multiple filter matches for the same skb */
···
 		if (copy_from_sockptr(&ro->fd_frames, optval, optlen))
 			return -EFAULT;
 
+		/* Enabling CAN XL includes CAN FD */
+		if (ro->xl_frames && !ro->fd_frames) {
+			ro->fd_frames = ro->xl_frames;
+			return -EINVAL;
+		}
 		break;
 
 	case CAN_RAW_XL_FRAMES:
···
 		if (copy_from_sockptr(&ro->xl_frames, optval, optlen))
 			return -EFAULT;
 
+		/* Enabling CAN XL includes CAN FD */
+		if (ro->xl_frames)
+			ro->fd_frames = ro->xl_frames;
 		break;
 
 	case CAN_RAW_JOIN_FILTERS:
···
 	return 0;
 }
 
+static bool raw_bad_txframe(struct raw_sock *ro, struct sk_buff *skb, int mtu)
+{
+	/* Classical CAN -> no checks for flags and device capabilities */
+	if (can_is_can_skb(skb))
+		return false;
+
+	/* CAN FD -> needs to be enabled and a CAN FD or CAN XL device */
+	if (ro->fd_frames && can_is_canfd_skb(skb) &&
+	    (mtu == CANFD_MTU || can_is_canxl_dev_mtu(mtu)))
+		return false;
+
+	/* CAN XL -> needs to be enabled and a CAN XL device */
+	if (ro->xl_frames && can_is_canxl_skb(skb) &&
+	    can_is_canxl_dev_mtu(mtu))
+		return false;
+
+	return true;
+}
+
 static int raw_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
 {
 	struct sock *sk = sock->sk;
···
 		goto free_skb;
 
 	err = -EINVAL;
-	if (ro->xl_frames && can_is_canxl_dev_mtu(dev->mtu)) {
-		/* CAN XL, CAN FD and Classical CAN */
-		if (!can_is_canxl_skb(skb) && !can_is_canfd_skb(skb) &&
-		    !can_is_can_skb(skb))
-			goto free_skb;
-	} else if (ro->fd_frames && dev->mtu == CANFD_MTU) {
-		/* CAN FD and Classical CAN */
-		if (!can_is_canfd_skb(skb) && !can_is_can_skb(skb))
-			goto free_skb;
-	} else {
-		/* Classical CAN */
-		if (!can_is_can_skb(skb))
-			goto free_skb;
-	}
+	if (raw_bad_txframe(ro, skb, dev->mtu))
+		goto free_skb;
 
 	sockcm_init(&sockc, sk);
 	if (msg->msg_controllen) {
+9
net/core/gro.c
···
 	struct sk_buff *lp;
 	int segs;
 
+	/* Do not splice page pool based packets w/ non-page pool
+	 * packets. This can result in reference count issues as page
+	 * pool pages will not decrement the reference count and will
+	 * instead be immediately returned to the pool or have frag
+	 * count decremented.
+	 */
+	if (p->pp_recycle != skb->pp_recycle)
+		return -ETOOMANYREFS;
+
 	/* pairs with WRITE_ONCE() in netif_set_gro_max_size() */
 	gro_max_size = READ_ONCE(p->dev->gro_max_size);
···
 #include <linux/bpf.h>
 #include <linux/init.h>
 #include <linux/wait.h>
+#include <linux/util_macros.h>
 
 #include <net/inet_common.h>
 #include <net/tls.h>
···
  */
 void tcp_bpf_clone(const struct sock *sk, struct sock *newsk)
 {
-	int family = sk->sk_family == AF_INET6 ? TCP_BPF_IPV6 : TCP_BPF_IPV4;
 	struct proto *prot = newsk->sk_prot;
 
-	if (prot == &tcp_bpf_prots[family][TCP_BPF_BASE])
+	if (is_insidevar(prot, tcp_bpf_prots))
 		newsk->sk_prot = sk->sk_prot_creator;
 }
 #endif /* CONFIG_BPF_SYSCALL */
+32-27
net/ipv6/addrconf.c
···
 	offset = sizeof(struct in6_addr) - 4;
 	memcpy(&addr.s6_addr32[3], idev->dev->dev_addr + offset, 4);
 
-	if (idev->dev->flags&IFF_POINTOPOINT) {
+	if (!(idev->dev->flags & IFF_POINTOPOINT) && idev->dev->type == ARPHRD_SIT) {
+		scope = IPV6_ADDR_COMPATv4;
+		plen = 96;
+		pflags |= RTF_NONEXTHOP;
+	} else {
 		if (idev->cnf.addr_gen_mode == IN6_ADDR_GEN_MODE_NONE)
 			return;
 
 		addr.s6_addr32[0] = htonl(0xfe800000);
 		scope = IFA_LINK;
 		plen = 64;
-	} else {
-		scope = IPV6_ADDR_COMPATv4;
-		plen = 96;
-		pflags |= RTF_NONEXTHOP;
 	}
 
 	if (addr.s6_addr32[3]) {
···
 }
 #endif
 
+static void addrconf_init_auto_addrs(struct net_device *dev)
+{
+	switch (dev->type) {
+#if IS_ENABLED(CONFIG_IPV6_SIT)
+	case ARPHRD_SIT:
+		addrconf_sit_config(dev);
+		break;
+#endif
+#if IS_ENABLED(CONFIG_NET_IPGRE) || IS_ENABLED(CONFIG_IPV6_GRE)
+	case ARPHRD_IP6GRE:
+	case ARPHRD_IPGRE:
+		addrconf_gre_config(dev);
+		break;
+#endif
+	case ARPHRD_LOOPBACK:
+		init_loopback(dev);
+		break;
+
+	default:
+		addrconf_dev_config(dev);
+		break;
+	}
+}
+
 static int fixup_permanent_addr(struct net *net,
 				struct inet6_dev *idev,
 				struct inet6_ifaddr *ifp)
···
 			run_pending = 1;
 		}
 
-		switch (dev->type) {
-#if IS_ENABLED(CONFIG_IPV6_SIT)
-		case ARPHRD_SIT:
-			addrconf_sit_config(dev);
-			break;
-#endif
-#if IS_ENABLED(CONFIG_NET_IPGRE) || IS_ENABLED(CONFIG_IPV6_GRE)
-		case ARPHRD_IP6GRE:
-		case ARPHRD_IPGRE:
-			addrconf_gre_config(dev);
-			break;
-#endif
-		case ARPHRD_LOOPBACK:
-			init_loopback(dev);
-			break;
-
-		default:
-			addrconf_dev_config(dev);
-			break;
-		}
+		addrconf_init_auto_addrs(dev);
 
 		if (!IS_ERR_OR_NULL(idev)) {
 			if (run_pending)
···
 
 		if (idev->cnf.addr_gen_mode != new_val) {
 			idev->cnf.addr_gen_mode = new_val;
-			addrconf_dev_config(idev->dev);
+			addrconf_init_auto_addrs(idev->dev);
 		}
 	} else if (&net->ipv6.devconf_all->addr_gen_mode == ctl->data) {
 		struct net_device *dev;
···
 			if (idev &&
 			    idev->cnf.addr_gen_mode != new_val) {
 				idev->cnf.addr_gen_mode = new_val;
-				addrconf_dev_config(idev->dev);
+				addrconf_init_auto_addrs(idev->dev);
 			}
 		}
 	}
+14-1
net/ipv6/ip6_output.c
···
 	    pneigh_lookup(&nd_tbl, net, &hdr->daddr, skb->dev, 0)) {
 		int proxied = ip6_forward_proxy_check(skb);
 		if (proxied > 0) {
-			hdr->hop_limit--;
+			/* It's tempting to decrease the hop limit
+			 * here by 1, as we do at the end of the
+			 * function too.
+			 *
+			 * But that would be incorrect, as proxying is
+			 * not forwarding.  The ip6_input function
+			 * will handle this packet locally, and it
+			 * depends on the hop limit being unchanged.
+			 *
+			 * One example is the NDP hop limit, that
+			 * always has to stay 255, but other would be
+			 * similar checks around RA packets, where the
+			 * user can even change the desired limit.
+			 */
 			return ip6_input(skb);
 		} else if (proxied < 0) {
 			__IP6_INC_STATS(net, idev, IPSTATS_MIB_INDISCARDS);
-1
net/mac802154/rx.c
···
 	ret = ieee802154_parse_frame_start(skb, &hdr);
 	if (ret) {
 		pr_debug("got invalid frame\n");
-		kfree_skb(skb);
 		return;
 	}
+13-3
net/mctp/af_mctp.c
···
 
 static void mctp_sk_close(struct sock *sk, long timeout)
 {
-	struct mctp_sock *msk = container_of(sk, struct mctp_sock, sk);
-
-	del_timer_sync(&msk->key_expiry);
 	sk_common_release(sk);
 }
···
 		spin_lock_irqsave(&key->lock, fl2);
 		__mctp_key_remove(key, net, fl2, MCTP_TRACE_KEY_CLOSED);
 	}
+	sock_set_flag(sk, SOCK_DEAD);
 	spin_unlock_irqrestore(&net->mctp.keys_lock, flags);
+
+	/* Since there are no more tag allocations (we have removed all of the
+	 * keys), stop any pending expiry events. the timer cannot be re-queued
+	 * as the sk is no longer observable
+	 */
+	del_timer_sync(&msk->key_expiry);
+}
+
+static void mctp_sk_destruct(struct sock *sk)
+{
+	skb_queue_purge(&sk->sk_receive_queue);
 }
 
 static struct proto mctp_proto = {
···
 		return -ENOMEM;
 
 	sock_init_data(sock, sk);
+	sk->sk_destruct = mctp_sk_destruct;
 
 	rc = 0;
 	if (sk->sk_prot->init)
+21-13
net/mctp/route.c
···
 	key->valid = true;
 	spin_lock_init(&key->lock);
 	refcount_set(&key->refs, 1);
+	sock_hold(key->sk);
 
 	return key;
 }
···
 	mctp_dev_release_key(key->dev, key);
 	spin_unlock_irqrestore(&key->lock, flags);
 
+	sock_put(key->sk);
 	kfree(key);
 }
···
 	int rc = 0;
 
 	spin_lock_irqsave(&net->mctp.keys_lock, flags);
+
+	if (sock_flag(&msk->sk, SOCK_DEAD)) {
+		rc = -EINVAL;
+		goto out_unlock;
+	}
 
 	hlist_for_each_entry(tmp, &net->mctp.keys, hlist) {
 		if (mctp_key_match(tmp, key->local_addr, key->peer_addr,
···
 		hlist_add_head(&key->sklist, &msk->keys);
 	}
 
+out_unlock:
 	spin_unlock_irqrestore(&net->mctp.keys_lock, flags);
 
 	return rc;
···
 
 static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
 {
+	struct mctp_sk_key *key, *any_key = NULL;
 	struct net *net = dev_net(skb->dev);
-	struct mctp_sk_key *key;
 	struct mctp_sock *msk;
 	struct mctp_hdr *mh;
 	unsigned long f;
···
 		 * key for reassembly - we'll create a more specific
 		 * one for future packets if required (ie, !EOM).
 		 */
-		key = mctp_lookup_key(net, skb, MCTP_ADDR_ANY, &f);
-		if (key) {
-			msk = container_of(key->sk,
+		any_key = mctp_lookup_key(net, skb, MCTP_ADDR_ANY, &f);
+		if (any_key) {
+			msk = container_of(any_key->sk,
 					   struct mctp_sock, sk);
-			spin_unlock_irqrestore(&key->lock, f);
-			mctp_key_unref(key);
-			key = NULL;
+			spin_unlock_irqrestore(&any_key->lock, f);
 		}
 	}
···
 			 * this function.
 			 */
 			rc = mctp_key_add(key, msk);
-			if (rc) {
-				kfree(key);
-			} else {
+			if (!rc)
 				trace_mctp_key_acquire(key);
 
-				/* we don't need to release key->lock on exit */
-				mctp_key_unref(key);
-			}
+			/* we don't need to release key->lock on exit, so
+			 * clean up here and suppress the unlock via
+			 * setting to NULL
+			 */
+			mctp_key_unref(key);
 			key = NULL;
 
 		} else {
···
 		spin_unlock_irqrestore(&key->lock, f);
 		mctp_key_unref(key);
 	}
+	if (any_key)
+		mctp_key_unref(any_key);
 out:
 	if (rc)
 		kfree_skb(skb);
+71-96
net/netfilter/nf_conntrack_proto_sctp.c
···
 #include <net/netfilter/nf_conntrack_ecache.h>
 #include <net/netfilter/nf_conntrack_timeout.h>
 
-/* FIXME: Examine ipfilter's timeouts and conntrack transitions more
-   closely.  They're more complex. --RR
-
-   And so for me for SCTP :D -Kiran */
-
 static const char *const sctp_conntrack_names[] = {
-	"NONE",
-	"CLOSED",
-	"COOKIE_WAIT",
-	"COOKIE_ECHOED",
-	"ESTABLISHED",
-	"SHUTDOWN_SENT",
-	"SHUTDOWN_RECD",
-	"SHUTDOWN_ACK_SENT",
-	"HEARTBEAT_SENT",
-	"HEARTBEAT_ACKED",
+	[SCTP_CONNTRACK_NONE]			= "NONE",
+	[SCTP_CONNTRACK_CLOSED]			= "CLOSED",
+	[SCTP_CONNTRACK_COOKIE_WAIT]		= "COOKIE_WAIT",
+	[SCTP_CONNTRACK_COOKIE_ECHOED]		= "COOKIE_ECHOED",
+	[SCTP_CONNTRACK_ESTABLISHED]		= "ESTABLISHED",
+	[SCTP_CONNTRACK_SHUTDOWN_SENT]		= "SHUTDOWN_SENT",
+	[SCTP_CONNTRACK_SHUTDOWN_RECD]		= "SHUTDOWN_RECD",
+	[SCTP_CONNTRACK_SHUTDOWN_ACK_SENT]	= "SHUTDOWN_ACK_SENT",
+	[SCTP_CONNTRACK_HEARTBEAT_SENT]		= "HEARTBEAT_SENT",
 };
 
 #define SECS  * HZ
···
 	[SCTP_CONNTRACK_CLOSED]			= 10 SECS,
 	[SCTP_CONNTRACK_COOKIE_WAIT]		= 3 SECS,
 	[SCTP_CONNTRACK_COOKIE_ECHOED]		= 3 SECS,
-	[SCTP_CONNTRACK_ESTABLISHED]		= 5 DAYS,
+	[SCTP_CONNTRACK_ESTABLISHED]		= 210 SECS,
 	[SCTP_CONNTRACK_SHUTDOWN_SENT]		= 300 SECS / 1000,
 	[SCTP_CONNTRACK_SHUTDOWN_RECD]		= 300 SECS / 1000,
 	[SCTP_CONNTRACK_SHUTDOWN_ACK_SENT]	= 3 SECS,
 	[SCTP_CONNTRACK_HEARTBEAT_SENT]		= 30 SECS,
-	[SCTP_CONNTRACK_HEARTBEAT_ACKED]	= 210 SECS,
-	[SCTP_CONNTRACK_DATA_SENT]		= 30 SECS,
 };
 
 #define	SCTP_FLAG_HEARTBEAT_VTAG_FAILED	1
···
 #define	sSR SCTP_CONNTRACK_SHUTDOWN_RECD
 #define	sSA SCTP_CONNTRACK_SHUTDOWN_ACK_SENT
 #define	sHS SCTP_CONNTRACK_HEARTBEAT_SENT
-#define	sHA SCTP_CONNTRACK_HEARTBEAT_ACKED
-#define	sDS SCTP_CONNTRACK_DATA_SENT
 #define	sIV SCTP_CONNTRACK_MAX
 
 /*
···
 CLOSED - We have seen a SHUTDOWN_COMPLETE chunk in the direction of
 	the SHUTDOWN chunk. Connection is closed.
 HEARTBEAT_SENT - We have seen a HEARTBEAT in a new flow.
-HEARTBEAT_ACKED - We have seen a HEARTBEAT-ACK/DATA/SACK in the direction
-	opposite to that of the HEARTBEAT/DATA chunk. Secondary connection
-	is established.
-DATA_SENT - We have seen a DATA/SACK in a new flow.
 */
 
 /* TODO
···
 */
 
 /* SCTP conntrack state transitions */
-static const u8 sctp_conntracks[2][12][SCTP_CONNTRACK_MAX] = {
+static const u8 sctp_conntracks[2][11][SCTP_CONNTRACK_MAX] = {
 	{
 /* ORIGINAL */
-/*                  sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA, sDS */
-/* init         */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCW, sHA, sCW},
-/* init_ack     */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA, sCL},
-/* abort        */ {sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL},
-/* shutdown     */ {sCL, sCL, sCW, sCE, sSS, sSS, sSR, sSA, sCL, sSS, sCL},
-/* shutdown_ack */ {sSA, sCL, sCW, sCE, sES, sSA, sSA, sSA, sSA, sHA, sSA},
-/* error        */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA, sCL},/* Can't have Stale cookie*/
-/* cookie_echo  */ {sCL, sCL, sCE, sCE, sES, sSS, sSR, sSA, sCL, sHA, sCL},/* 5.2.4 - Big TODO */
-/* cookie_ack   */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA, sCL},/* Can't come in orig dir */
-/* shutdown_comp*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sCL, sCL, sHA, sCL},
-/* heartbeat    */ {sHS, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA, sDS},
-/* heartbeat_ack*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA, sDS},
-/* data/sack    */ {sDS, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA, sDS}
+/*                  sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS */
+/* init         */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCW},
+/* init_ack     */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},
+/* abort        */ {sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL},
+/* shutdown     */ {sCL, sCL, sCW, sCE, sSS, sSS, sSR, sSA, sCL},
+/* shutdown_ack */ {sSA, sCL, sCW, sCE, sES, sSA, sSA, sSA, sSA},
+/* error        */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},/* Can't have Stale cookie*/
+/* cookie_echo  */ {sCL, sCL, sCE, sCE, sES, sSS, sSR, sSA, sCL},/* 5.2.4 - Big TODO */
+/* cookie_ack   */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},/* Can't come in orig dir */
+/* shutdown_comp*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sCL, sCL},
+/* heartbeat    */ {sHS, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
+/* heartbeat_ack*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
 	},
 	{
 /* REPLY */
-/*                  sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA, sDS */
-/* init         */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA, sIV},/* INIT in sCL Big TODO */
-/* init_ack     */ {sIV, sCW, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA, sIV},
-/* abort        */ {sIV, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sIV, sCL, sIV},
-/* shutdown     */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA, sIV, sSR, sIV},
-/* shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA, sIV, sHA, sIV},
-/* error        */ {sIV, sCL, sCW, sCL, sES, sSS, sSR, sSA, sIV, sHA, sIV},
-/* cookie_echo  */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA, sIV},/* Can't come in reply dir */
-/* cookie_ack   */ {sIV, sCL, sCW, sES, sES, sSS, sSR, sSA, sIV, sHA, sIV},
-/* shutdown_comp*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sCL, sIV, sHA, sIV},
-/* heartbeat    */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA, sHA},
-/* heartbeat_ack*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHA, sHA, sHA},
-/* data/sack    */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHA, sHA, sHA},
+/*                  sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS */
+/* init         */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV},/* INIT in sCL Big TODO */
+/* init_ack     */ {sIV, sCW, sCW, sCE, sES, sSS, sSR, sSA, sIV},
+/* abort        */ {sIV, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sIV},
+/* shutdown     */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA, sIV},
+/*
shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA, sIV},142142+/* error */ {sIV, sCL, sCW, sCL, sES, sSS, sSR, sSA, sIV},143143+/* cookie_echo */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV},/* Can't come in reply dir */144144+/* cookie_ack */ {sIV, sCL, sCW, sES, sES, sSS, sSR, sSA, sIV},145145+/* shutdown_comp*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sCL, sIV},146146+/* heartbeat */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},147147+/* heartbeat_ack*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sES},136148 }137149};138150···142158}143159#endif144160161161+/* do_basic_checks ensures sch->length > 0, do not use before */145162#define for_each_sctp_chunk(skb, sch, _sch, offset, dataoff, count) \146163for ((offset) = (dataoff) + sizeof(struct sctphdr), (count) = 0; \147164 (offset) < (skb)->len && \···243258 pr_debug("SCTP_CID_HEARTBEAT_ACK");244259 i = 10;245260 break;246246- case SCTP_CID_DATA:247247- case SCTP_CID_SACK:248248- pr_debug("SCTP_CID_DATA/SACK");249249- i = 11;250250- break;251261 default:252262 /* Other chunks like DATA or SACK do not change the state */253263 pr_debug("Unknown chunk type, Will stay in %s\n",···296316 ih->init_tag);297317298318 ct->proto.sctp.vtag[IP_CT_DIR_REPLY] = ih->init_tag;299299- } else if (sch->type == SCTP_CID_HEARTBEAT ||300300- sch->type == SCTP_CID_DATA ||301301- sch->type == SCTP_CID_SACK) {319319+ } else if (sch->type == SCTP_CID_HEARTBEAT) {302320 pr_debug("Setting vtag %x for secondary conntrack\n",303321 sh->vtag);304322 ct->proto.sctp.vtag[IP_CT_DIR_ORIGINAL] = sh->vtag;···382404383405 if (!sctp_new(ct, skb, sh, dataoff))384406 return -NF_ACCEPT;385385- } else {386386- /* Check the verification tag (Sec 8.5) */387387- if (!test_bit(SCTP_CID_INIT, map) &&388388- !test_bit(SCTP_CID_SHUTDOWN_COMPLETE, map) &&389389- !test_bit(SCTP_CID_COOKIE_ECHO, map) &&390390- !test_bit(SCTP_CID_ABORT, map) &&391391- !test_bit(SCTP_CID_SHUTDOWN_ACK, map) &&392392- !test_bit(SCTP_CID_HEARTBEAT, map) &&393393- 
!test_bit(SCTP_CID_HEARTBEAT_ACK, map) &&394394- sh->vtag != ct->proto.sctp.vtag[dir]) {395395- pr_debug("Verification tag check failed\n");396396- goto out;397397- }407407+ }408408+409409+ /* Check the verification tag (Sec 8.5) */410410+ if (!test_bit(SCTP_CID_INIT, map) &&411411+ !test_bit(SCTP_CID_SHUTDOWN_COMPLETE, map) &&412412+ !test_bit(SCTP_CID_COOKIE_ECHO, map) &&413413+ !test_bit(SCTP_CID_ABORT, map) &&414414+ !test_bit(SCTP_CID_SHUTDOWN_ACK, map) &&415415+ !test_bit(SCTP_CID_HEARTBEAT, map) &&416416+ !test_bit(SCTP_CID_HEARTBEAT_ACK, map) &&417417+ sh->vtag != ct->proto.sctp.vtag[dir]) {418418+ pr_debug("Verification tag check failed\n");419419+ goto out;398420 }399421400422 old_state = new_state = SCTP_CONNTRACK_NONE;···402424 for_each_sctp_chunk (skb, sch, _sch, offset, dataoff, count) {403425 /* Special cases of Verification tag check (Sec 8.5.1) */404426 if (sch->type == SCTP_CID_INIT) {405405- /* Sec 8.5.1 (A) */427427+ /* (A) vtag MUST be zero */406428 if (sh->vtag != 0)407429 goto out_unlock;408430 } else if (sch->type == SCTP_CID_ABORT) {409409- /* Sec 8.5.1 (B) */410410- if (sh->vtag != ct->proto.sctp.vtag[dir] &&411411- sh->vtag != ct->proto.sctp.vtag[!dir])431431+ /* (B) vtag MUST match own vtag if T flag is unset OR432432+ * MUST match peer's vtag if T flag is set433433+ */434434+ if ((!(sch->flags & SCTP_CHUNK_FLAG_T) &&435435+ sh->vtag != ct->proto.sctp.vtag[dir]) ||436436+ ((sch->flags & SCTP_CHUNK_FLAG_T) &&437437+ sh->vtag != ct->proto.sctp.vtag[!dir]))412438 goto out_unlock;413439 } else if (sch->type == SCTP_CID_SHUTDOWN_COMPLETE) {414414- /* Sec 8.5.1 (C) */415415- if (sh->vtag != ct->proto.sctp.vtag[dir] &&416416- sh->vtag != ct->proto.sctp.vtag[!dir] &&417417- sch->flags & SCTP_CHUNK_FLAG_T)440440+ /* (C) vtag MUST match own vtag if T flag is unset OR441441+ * MUST match peer's vtag if T flag is set442442+ */443443+ if ((!(sch->flags & SCTP_CHUNK_FLAG_T) &&444444+ sh->vtag != ct->proto.sctp.vtag[dir]) ||445445+ ((sch->flags & 
SCTP_CHUNK_FLAG_T) &&446446+ sh->vtag != ct->proto.sctp.vtag[!dir]))418447 goto out_unlock;419448 } else if (sch->type == SCTP_CID_COOKIE_ECHO) {420420- /* Sec 8.5.1 (D) */449449+ /* (D) vtag must be same as init_vtag as found in INIT_ACK */421450 if (sh->vtag != ct->proto.sctp.vtag[dir])422451 goto out_unlock;423452 } else if (sch->type == SCTP_CID_HEARTBEAT) {···460475 ct->proto.sctp.vtag[!dir] = 0;461476 } else if (ct->proto.sctp.flags & SCTP_FLAG_HEARTBEAT_VTAG_FAILED) {462477 ct->proto.sctp.flags &= ~SCTP_FLAG_HEARTBEAT_VTAG_FAILED;463463- }464464- } else if (sch->type == SCTP_CID_DATA || sch->type == SCTP_CID_SACK) {465465- if (ct->proto.sctp.vtag[dir] == 0) {466466- pr_debug("Setting vtag %x for dir %d\n", sh->vtag, dir);467467- ct->proto.sctp.vtag[dir] = sh->vtag;468478 }469479 }470480···498518 }499519500520 ct->proto.sctp.state = new_state;501501- if (old_state != new_state)521521+ if (old_state != new_state) {502522 nf_conntrack_event_cache(IPCT_PROTOINFO, ct);523523+ if (new_state == SCTP_CONNTRACK_ESTABLISHED &&524524+ !test_and_set_bit(IPS_ASSURED_BIT, &ct->status))525525+ nf_conntrack_event_cache(IPCT_ASSURED, ct);526526+ }503527 }504528 spin_unlock_bh(&ct->lock);505529···516532 timeouts = nf_sctp_pernet(nf_ct_net(ct))->timeouts;517533518534 nf_ct_refresh_acct(ct, ctinfo, skb, timeouts[new_state]);519519-520520- if (old_state == SCTP_CONNTRACK_COOKIE_ECHOED &&521521- dir == IP_CT_DIR_REPLY &&522522- new_state == SCTP_CONNTRACK_ESTABLISHED) {523523- pr_debug("Setting assured bit\n");524524- set_bit(IPS_ASSURED_BIT, &ct->status);525525- nf_conntrack_event_cache(IPCT_ASSURED, ct);526526- }527535528536 return NF_ACCEPT;529537···677701 [CTA_TIMEOUT_SCTP_SHUTDOWN_ACK_SENT] = { .type = NLA_U32 },678702 [CTA_TIMEOUT_SCTP_HEARTBEAT_SENT] = { .type = NLA_U32 },679703 [CTA_TIMEOUT_SCTP_HEARTBEAT_ACKED] = { .type = NLA_U32 },680680- [CTA_TIMEOUT_SCTP_DATA_SENT] = { .type = NLA_U32 },681704};682705#endif /* CONFIG_NF_CONNTRACK_TIMEOUT */683706
···
 	return !nft_rbtree_interval_end(rbe);
 }
 
-static bool nft_rbtree_equal(const struct nft_set *set, const void *this,
-			     const struct nft_rbtree_elem *interval)
+static int nft_rbtree_cmp(const struct nft_set *set,
+			  const struct nft_rbtree_elem *e1,
+			  const struct nft_rbtree_elem *e2)
 {
-	return memcmp(this, nft_set_ext_key(&interval->ext), set->klen) == 0;
+	return memcmp(nft_set_ext_key(&e1->ext), nft_set_ext_key(&e2->ext),
+		      set->klen);
 }
 
 static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
···
 	const struct nft_rbtree_elem *rbe, *interval = NULL;
 	u8 genmask = nft_genmask_cur(net);
 	const struct rb_node *parent;
-	const void *this;
 	int d;
 
 	parent = rcu_dereference_raw(priv->root.rb_node);
···
 
 		rbe = rb_entry(parent, struct nft_rbtree_elem, node);
 
-		this = nft_set_ext_key(&rbe->ext);
-		d = memcmp(this, key, set->klen);
+		d = memcmp(nft_set_ext_key(&rbe->ext), key, set->klen);
 		if (d < 0) {
 			parent = rcu_dereference_raw(parent->rb_left);
 			if (interval &&
-			    nft_rbtree_equal(set, this, interval) &&
+			    !nft_rbtree_cmp(set, rbe, interval) &&
 			    nft_rbtree_interval_end(rbe) &&
 			    nft_rbtree_interval_start(interval))
 				continue;
···
 	return rbe;
 }
 
+static int nft_rbtree_gc_elem(const struct nft_set *__set,
+			      struct nft_rbtree *priv,
+			      struct nft_rbtree_elem *rbe)
+{
+	struct nft_set *set = (struct nft_set *)__set;
+	struct rb_node *prev = rb_prev(&rbe->node);
+	struct nft_rbtree_elem *rbe_prev;
+	struct nft_set_gc_batch *gcb;
+
+	gcb = nft_set_gc_batch_check(set, NULL, GFP_ATOMIC);
+	if (!gcb)
+		return -ENOMEM;
+
+	/* search for expired end interval coming before this element. */
+	do {
+		rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
+		if (nft_rbtree_interval_end(rbe_prev))
+			break;
+
+		prev = rb_prev(prev);
+	} while (prev != NULL);
+
+	rb_erase(&rbe_prev->node, &priv->root);
+	rb_erase(&rbe->node, &priv->root);
+	atomic_sub(2, &set->nelems);
+
+	nft_set_gc_batch_add(gcb, rbe);
+	nft_set_gc_batch_complete(gcb);
+
+	return 0;
+}
+
+static bool nft_rbtree_update_first(const struct nft_set *set,
+				    struct nft_rbtree_elem *rbe,
+				    struct rb_node *first)
+{
+	struct nft_rbtree_elem *first_elem;
+
+	first_elem = rb_entry(first, struct nft_rbtree_elem, node);
+	/* this element is closest to where the new element is to be inserted:
+	 * update the first element for the node list path.
+	 */
+	if (nft_rbtree_cmp(set, rbe, first_elem) < 0)
+		return true;
+
+	return false;
+}
+
 static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
 			       struct nft_rbtree_elem *new,
 			       struct nft_set_ext **ext)
 {
-	bool overlap = false, dup_end_left = false, dup_end_right = false;
+	struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL;
+	struct rb_node *node, *parent, **p, *first = NULL;
 	struct nft_rbtree *priv = nft_set_priv(set);
 	u8 genmask = nft_genmask_next(net);
-	struct nft_rbtree_elem *rbe;
-	struct rb_node *parent, **p;
-	int d;
+	int d, err;
 
-	/* Detect overlaps as we descend the tree. Set the flag in these cases:
-	 *
-	 * a1. _ _ __>|  ?_ _ __|  (insert end before existing end)
-	 * a2. _ _ ___|  ?_ _ _>|  (insert end after existing end)
-	 * a3. _ _ ___? >|_ _ __|  (insert start before existing end)
-	 *
-	 * and clear it later on, as we eventually reach the points indicated by
-	 * '?' above, in the cases described below. We'll always meet these
-	 * later, locally, due to tree ordering, and overlaps for the intervals
-	 * that are the closest together are always evaluated last.
-	 *
-	 * b1. _ _ __>|  !_ _ __|  (insert end before existing start)
-	 * b2. _ _ ___|  !_ _ _>|  (insert end after existing start)
-	 * b3. _ _ ___! >|_ _ __|  (insert start after existing end, as a leaf)
-	 *            '--' no nodes falling in this range
-	 * b4.          >|_ _   !  (insert start before existing start)
-	 *
-	 * Case a3. resolves to b3.:
-	 * - if the inserted start element is the leftmost, because the '0'
-	 *   element in the tree serves as end element
-	 * - otherwise, if an existing end is found immediately to the left. If
-	 *   there are existing nodes in between, we need to further descend the
-	 *   tree before we can conclude the new start isn't causing an overlap
-	 *
-	 * or to b4., which, preceded by a3., means we already traversed one or
-	 * more existing intervals entirely, from the right.
-	 *
-	 * For a new, rightmost pair of elements, we'll hit cases b3. and b2.,
-	 * in that order.
-	 *
-	 * The flag is also cleared in two special cases:
-	 *
-	 * b5. |__ _ _!|<_ _ _   (insert start right before existing end)
-	 * b6. |__ _ >|!__ _ _   (insert end right after existing start)
-	 *
-	 * which always happen as last step and imply that no further
-	 * overlapping is possible.
-	 *
-	 * Another special case comes from the fact that start elements matching
-	 * an already existing start element are allowed: insertion is not
-	 * performed but we return -EEXIST in that case, and the error will be
-	 * cleared by the caller if NLM_F_EXCL is not present in the request.
-	 * This way, request for insertion of an exact overlap isn't reported as
-	 * error to userspace if not desired.
-	 *
-	 * However, if the existing start matches a pre-existing start, but the
-	 * end element doesn't match the corresponding pre-existing end element,
-	 * we need to report a partial overlap. This is a local condition that
-	 * can be noticed without need for a tracking flag, by checking for a
-	 * local duplicated end for a corresponding start, from left and right,
-	 * separately.
+	/* Descend the tree to search for an existing element greater than the
+	 * key value to insert that is greater than the new element. This is the
+	 * first element to walk the ordered elements to find possible overlap.
 	 */
-
 	parent = NULL;
 	p = &priv->root.rb_node;
 	while (*p != NULL) {
 		parent = *p;
 		rbe = rb_entry(parent, struct nft_rbtree_elem, node);
-		d = memcmp(nft_set_ext_key(&rbe->ext),
-			   nft_set_ext_key(&new->ext),
-			   set->klen);
+		d = nft_rbtree_cmp(set, rbe, new);
+
 		if (d < 0) {
 			p = &parent->rb_left;
-
-			if (nft_rbtree_interval_start(new)) {
-				if (nft_rbtree_interval_end(rbe) &&
-				    nft_set_elem_active(&rbe->ext, genmask) &&
-				    !nft_set_elem_expired(&rbe->ext) && !*p)
-					overlap = false;
-			} else {
-				if (dup_end_left && !*p)
-					return -ENOTEMPTY;
-
-				overlap = nft_rbtree_interval_end(rbe) &&
-					  nft_set_elem_active(&rbe->ext,
-							      genmask) &&
-					  !nft_set_elem_expired(&rbe->ext);
-
-				if (overlap) {
-					dup_end_right = true;
-					continue;
-				}
-			}
 		} else if (d > 0) {
+			if (!first ||
+			    nft_rbtree_update_first(set, rbe, first))
+				first = &rbe->node;
+
 			p = &parent->rb_right;
-
-			if (nft_rbtree_interval_end(new)) {
-				if (dup_end_right && !*p)
-					return -ENOTEMPTY;
-
-				overlap = nft_rbtree_interval_end(rbe) &&
-					  nft_set_elem_active(&rbe->ext,
-							      genmask) &&
-					  !nft_set_elem_expired(&rbe->ext);
-
-				if (overlap) {
-					dup_end_left = true;
-					continue;
-				}
-			} else if (nft_set_elem_active(&rbe->ext, genmask) &&
-				   !nft_set_elem_expired(&rbe->ext)) {
-				overlap = nft_rbtree_interval_end(rbe);
-			}
 		} else {
-			if (nft_rbtree_interval_end(rbe) &&
-			    nft_rbtree_interval_start(new)) {
+			if (nft_rbtree_interval_end(rbe))
 				p = &parent->rb_left;
-
-				if (nft_set_elem_active(&rbe->ext, genmask) &&
-				    !nft_set_elem_expired(&rbe->ext))
-					overlap = false;
-			} else if (nft_rbtree_interval_start(rbe) &&
-				   nft_rbtree_interval_end(new)) {
+			else
 				p = &parent->rb_right;
-
-				if (nft_set_elem_active(&rbe->ext, genmask) &&
-				    !nft_set_elem_expired(&rbe->ext))
-					overlap = false;
-			} else if (nft_set_elem_active(&rbe->ext, genmask) &&
-				   !nft_set_elem_expired(&rbe->ext)) {
-				*ext = &rbe->ext;
-				return -EEXIST;
-			} else {
-				overlap = false;
-				if (nft_rbtree_interval_end(rbe))
-					p = &parent->rb_left;
-				else
-					p = &parent->rb_right;
-			}
 		}
-
-		dup_end_left = dup_end_right = false;
 	}
 
-	if (overlap)
+	if (!first)
+		first = rb_first(&priv->root);
+
+	/* Detect overlap by going through the list of valid tree nodes.
+	 * Values stored in the tree are in reversed order, starting from
+	 * highest to lowest value.
+	 */
+	for (node = first; node != NULL; node = rb_next(node)) {
+		rbe = rb_entry(node, struct nft_rbtree_elem, node);
+
+		if (!nft_set_elem_active(&rbe->ext, genmask))
+			continue;
+
+		/* perform garbage collection to avoid bogus overlap reports. */
+		if (nft_set_elem_expired(&rbe->ext)) {
+			err = nft_rbtree_gc_elem(set, priv, rbe);
+			if (err < 0)
+				return err;
+
+			continue;
+		}
+
+		d = nft_rbtree_cmp(set, rbe, new);
+		if (d == 0) {
+			/* Matching end element: no need to look for an
+			 * overlapping greater or equal element.
+			 */
+			if (nft_rbtree_interval_end(rbe)) {
+				rbe_le = rbe;
+				break;
+			}
+
+			/* first element that is greater or equal to key value. */
+			if (!rbe_ge) {
+				rbe_ge = rbe;
+				continue;
+			}
+
+			/* this is a closer more or equal element, update it. */
+			if (nft_rbtree_cmp(set, rbe_ge, new) != 0) {
+				rbe_ge = rbe;
+				continue;
+			}
+
+			/* element is equal to key value, make sure flags are
+			 * the same, an existing more or equal start element
+			 * must not be replaced by more or equal end element.
+			 */
+			if ((nft_rbtree_interval_start(new) &&
+			     nft_rbtree_interval_start(rbe_ge)) ||
+			    (nft_rbtree_interval_end(new) &&
+			     nft_rbtree_interval_end(rbe_ge))) {
+				rbe_ge = rbe;
+				continue;
+			}
+		} else if (d > 0) {
+			/* annotate element greater than the new element. */
+			rbe_ge = rbe;
+			continue;
+		} else if (d < 0) {
+			/* annotate element less than the new element. */
+			rbe_le = rbe;
+			break;
+		}
+	}
+
+	/* - new start element matching existing start element: full overlap
+	 *   reported as -EEXIST, cleared by caller if NLM_F_EXCL is not given.
+	 */
+	if (rbe_ge && !nft_rbtree_cmp(set, new, rbe_ge) &&
+	    nft_rbtree_interval_start(rbe_ge) == nft_rbtree_interval_start(new)) {
+		*ext = &rbe_ge->ext;
+		return -EEXIST;
+	}
+
+	/* - new end element matching existing end element: full overlap
+	 *   reported as -EEXIST, cleared by caller if NLM_F_EXCL is not given.
+	 */
+	if (rbe_le && !nft_rbtree_cmp(set, new, rbe_le) &&
+	    nft_rbtree_interval_end(rbe_le) == nft_rbtree_interval_end(new)) {
+		*ext = &rbe_le->ext;
+		return -EEXIST;
+	}
+
+	/* - new start element with existing closest, less or equal key value
+	 *   being a start element: partial overlap, reported as -ENOTEMPTY.
+	 *   Anonymous sets allow for two consecutive start element since they
+	 *   are constant, skip them to avoid bogus overlap reports.
+	 */
+	if (!nft_set_is_anonymous(set) && rbe_le &&
+	    nft_rbtree_interval_start(rbe_le) && nft_rbtree_interval_start(new))
 		return -ENOTEMPTY;
+
+	/* - new end element with existing closest, less or equal key value
+	 *   being a end element: partial overlap, reported as -ENOTEMPTY.
+	 */
+	if (rbe_le &&
+	    nft_rbtree_interval_end(rbe_le) && nft_rbtree_interval_end(new))
+		return -ENOTEMPTY;
+
+	/* - new end element with existing closest, greater or equal key value
+	 *   being an end element: partial overlap, reported as -ENOTEMPTY
+	 */
+	if (rbe_ge &&
+	    nft_rbtree_interval_end(rbe_ge) && nft_rbtree_interval_end(new))
+		return -ENOTEMPTY;
+
+	/* Accepted element: pick insertion point depending on key value */
+	parent = NULL;
+	p = &priv->root.rb_node;
+	while (*p != NULL) {
+		parent = *p;
+		rbe = rb_entry(parent, struct nft_rbtree_elem, node);
+		d = nft_rbtree_cmp(set, rbe, new);
+
+		if (d < 0)
+			p = &parent->rb_left;
+		else if (d > 0)
+			p = &parent->rb_right;
+		else if (nft_rbtree_interval_end(rbe))
+			p = &parent->rb_left;
+		else
+			p = &parent->rb_right;
+	}
 
 	rb_link_node_rcu(&new->node, parent, p);
 	rb_insert_color(&new->node, &priv->root);
···
 	struct nft_rbtree *priv;
 	struct rb_node *node;
 	struct nft_set *set;
+	struct net *net;
+	u8 genmask;
 
 	priv = container_of(work, struct nft_rbtree, gc_work.work);
 	set  = nft_set_container_of(priv);
+	net  = read_pnet(&set->net);
+	genmask = nft_genmask_cur(net);
 
 	write_lock_bh(&priv->lock);
 	write_seqcount_begin(&priv->count);
 	for (node = rb_first(&priv->root); node != NULL; node = rb_next(node)) {
 		rbe = rb_entry(node, struct nft_rbtree_elem, node);
 
+		if (!nft_set_elem_active(&rbe->ext, genmask))
+			continue;
+
+		/* elements are reversed in the rbtree for historical reasons,
+		 * from highest to lowest value, that is why end element is
+		 * always visited before the start element.
+		 */
 		if (nft_rbtree_interval_end(rbe)) {
 			rbe_end = rbe;
 			continue;
 		}
 		if (!nft_set_elem_expired(&rbe->ext))
 			continue;
-		if (nft_set_elem_mark_busy(&rbe->ext))
+
+		if (nft_set_elem_mark_busy(&rbe->ext)) {
+			rbe_end = NULL;
 			continue;
+		}
 
 		if (rbe_prev) {
 			rb_erase(&rbe_prev->node, &priv->root);
+24-14
net/netlink/af_netlink.c
···
 	if (nlk_sk(sk)->bound)
 		goto err;
 
-	nlk_sk(sk)->portid = portid;
+	/* portid can be read locklessly from netlink_getname(). */
+	WRITE_ONCE(nlk_sk(sk)->portid, portid);
+
 	sock_hold(sk);
 
 	err = __netlink_insert(table, sk);
···
 		return -EINVAL;
 
 	if (addr->sa_family == AF_UNSPEC) {
-		sk->sk_state	= NETLINK_UNCONNECTED;
-		nlk->dst_portid	= 0;
-		nlk->dst_group  = 0;
+		/* paired with READ_ONCE() in netlink_getsockbyportid() */
+		WRITE_ONCE(sk->sk_state, NETLINK_UNCONNECTED);
+		/* dst_portid and dst_group can be read locklessly */
+		WRITE_ONCE(nlk->dst_portid, 0);
+		WRITE_ONCE(nlk->dst_group, 0);
 		return 0;
 	}
 	if (addr->sa_family != AF_NETLINK)
···
 		err = netlink_autobind(sock);
 
 	if (err == 0) {
-		sk->sk_state	= NETLINK_CONNECTED;
-		nlk->dst_portid = nladdr->nl_pid;
-		nlk->dst_group  = ffs(nladdr->nl_groups);
+		/* paired with READ_ONCE() in netlink_getsockbyportid() */
+		WRITE_ONCE(sk->sk_state, NETLINK_CONNECTED);
+		/* dst_portid and dst_group can be read locklessly */
+		WRITE_ONCE(nlk->dst_portid, nladdr->nl_pid);
+		WRITE_ONCE(nlk->dst_group, ffs(nladdr->nl_groups));
 	}
 
 	return err;
···
 	nladdr->nl_pad = 0;
 
 	if (peer) {
-		nladdr->nl_pid = nlk->dst_portid;
-		nladdr->nl_groups = netlink_group_mask(nlk->dst_group);
+		/* Paired with WRITE_ONCE() in netlink_connect() */
+		nladdr->nl_pid = READ_ONCE(nlk->dst_portid);
+		nladdr->nl_groups = netlink_group_mask(READ_ONCE(nlk->dst_group));
 	} else {
-		nladdr->nl_pid = nlk->portid;
+		/* Paired with WRITE_ONCE() in netlink_insert() */
+		nladdr->nl_pid = READ_ONCE(nlk->portid);
 		netlink_lock_table();
 		nladdr->nl_groups = nlk->groups ? nlk->groups[0] : 0;
 		netlink_unlock_table();
···
 
 	/* Don't bother queuing skb if kernel socket has no input function */
 	nlk = nlk_sk(sock);
-	if (sock->sk_state == NETLINK_CONNECTED &&
-	    nlk->dst_portid != nlk_sk(ssk)->portid) {
+	/* dst_portid and sk_state can be changed in netlink_connect() */
+	if (READ_ONCE(sock->sk_state) == NETLINK_CONNECTED &&
+	    READ_ONCE(nlk->dst_portid) != nlk_sk(ssk)->portid) {
 		sock_put(sock);
 		return ERR_PTR(-ECONNREFUSED);
 	}
···
 			goto out;
 		netlink_skb_flags |= NETLINK_SKB_DST;
 	} else {
-		dst_portid = nlk->dst_portid;
-		dst_group  = nlk->dst_group;
+		/* Paired with WRITE_ONCE() in netlink_connect() */
+		dst_portid = READ_ONCE(nlk->dst_portid);
+		dst_group  = READ_ONCE(nlk->dst_group);
 	}
 
 	/* Paired with WRITE_ONCE() in netlink_insert() */
···
 	while (cl->cmode == HTB_MAY_BORROW && p && mask) {
 		m = mask;
 		while (m) {
-			int prio = ffz(~m);
+			unsigned int prio = ffz(~m);
+
+			if (WARN_ON_ONCE(prio >= ARRAY_SIZE(p->inner.clprio)))
+				break;
 			m &= ~(1 << prio);
 
 			if (p->inner.clprio[prio].feed.rb_node)
···
 		}
 	}
 
+	/* If somehow no addresses were found that can be used with this
+	 * scope, it's an error.
+	 */
+	if (list_empty(&dest->address_list))
+		error = -ENETUNREACH;
+
 out:
 	if (error)
 		sctp_bind_addr_clean(dest);
+1-3
net/sctp/transport.c
···
 
 	/* When a data chunk is sent, reset the heartbeat interval. */
 	expires = jiffies + sctp_transport_timeout(transport);
-	if ((time_before(transport->hb_timer.expires, expires) ||
-	     !timer_pending(&transport->hb_timer)) &&
-	    !mod_timer(&transport->hb_timer,
+	if (!mod_timer(&transport->hb_timer,
 		       expires + get_random_u32_below(transport->rto)))
 		sctp_transport_hold(transport);
 }
···
 	int rc = -EOPNOTSUPP;
 
 	lock_sock(sk);
+	if (sock->state != SS_UNCONNECTED) {
+		rc = -EINVAL;
+		release_sock(sk);
+		return rc;
+	}
+
 	if (sk->sk_state != TCP_LISTEN) {
 		memset(&x25_sk(sk)->dest_addr, 0, X25_ADDR_LEN);
 		sk->sk_max_ack_backlog = backlog;
+18-11
rust/kernel/print.rs
···
 macro_rules! print_macro (
     // The non-continuation cases (most of them, e.g. `INFO`).
     ($format_string:path, false, $($arg:tt)+) => (
-        // SAFETY: This hidden macro should only be called by the documented
-        // printing macros which ensure the format string is one of the fixed
-        // ones. All `__LOG_PREFIX`s are null-terminated as they are generated
-        // by the `module!` proc macro or fixed values defined in a kernel
-        // crate.
-        unsafe {
-            $crate::print::call_printk(
-                &$format_string,
-                crate::__LOG_PREFIX,
-                format_args!($($arg)+),
-            );
+        // To remain sound, `arg`s must be expanded outside the `unsafe` block.
+        // Typically one would use a `let` binding for that; however, `format_args!`
+        // takes borrows on the arguments, but does not extend the scope of temporaries.
+        // Therefore, a `match` expression is used to keep them around, since
+        // the scrutinee is kept until the end of the `match`.
+        match format_args!($($arg)+) {
+            // SAFETY: This hidden macro should only be called by the documented
+            // printing macros which ensure the format string is one of the fixed
+            // ones. All `__LOG_PREFIX`s are null-terminated as they are generated
+            // by the `module!` proc macro or fixed values defined in a kernel
+            // crate.
+            args => unsafe {
+                $crate::print::call_printk(
+                    &$format_string,
+                    crate::__LOG_PREFIX,
+                    args,
+                );
+            }
         }
     );
 
···
 # (note, if this is a problem with function_graph tracing, then simply
 # replace "function" with "function_graph" in the following steps).
 #
-#  # cd /sys/kernel/debug/tracing
+#  # cd /sys/kernel/tracing
 #  # echo schedule > set_ftrace_filter
 #  # echo function > current_tracer
 #
···
 #
 #  # echo nop > current_tracer
 #
-#  # cat available_filter_functions > ~/full-file
+# Starting with v5.1 this can be done with numbers, making it much faster:
+#
+# The old (slow) way, for kernels before v5.1.
+#
+# [old-way] #  # cat available_filter_functions > ~/full-file
+#
+# [old-way] *** Note *** this process will take several minutes to update the
+# [old-way] filters. Setting multiple functions is an O(n^2) operation, and we
+# [old-way] are dealing with thousands of functions. So go have coffee, talk
+# [old-way] with your coworkers, read facebook. And eventually, this operation
+# [old-way] will end.
+#
+# The new way (using numbers) is an O(n) operation, and usually takes less than a second.
+#
+#  # seq `wc -l available_filter_functions | cut -d' ' -f1` > ~/full-file
+#
+# This will create a sequence of numbers that match the functions in
+# available_filter_functions, and when echoing in a number into the
+# set_ftrace_filter file, it will enable the corresponding function in
+# O(1) time. Making enabling all functions O(n) where n is the number of
+# functions to enable.
+#
+# For either the new or old way, the rest of the operations remain the same.
+#
 #  # ftrace-bisect ~/full-file ~/test-file ~/non-test-file
 #  # cat ~/test-file > set_ftrace_filter
-#
-# *** Note *** this will take several minutes. Setting multiple functions is
-# an O(n^2) operation, and we are dealing with thousands of functions. So go
-# have coffee, talk with your coworkers, read facebook. And eventually, this
-# operation will end.
 #
 #  # echo function > current_tracer
 #
···
 #
 # Reboot back to test kernel.
 #
-#  # cd /sys/kernel/debug/tracing
+#  # cd /sys/kernel/tracing
 #  # mv ~/test-file ~/full-file
 #
 # If it didn't crash.
+68-17
sound/core/memalloc.c
···541541 struct sg_table *sgt;
542542 void *p;
543543
544544+#ifdef CONFIG_SND_DMA_SGBUF
545545+ if (cpu_feature_enabled(X86_FEATURE_XENPV))
546546+ return snd_dma_sg_fallback_alloc(dmab, size);
547547+#endif
544548 sgt = dma_alloc_noncontiguous(dmab->dev.dev, size, dmab->dev.dir,
545549 DEFAULT_GFP, 0);
546550#ifdef CONFIG_SND_DMA_SGBUF
547547- if (!sgt && !get_dma_ops(dmab->dev.dev)) {
548548- if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG)
549549- dmab->dev.type = SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK;
550550- else
551551- dmab->dev.type = SNDRV_DMA_TYPE_DEV_SG_FALLBACK;
551551+ if (!sgt && !get_dma_ops(dmab->dev.dev))
552552 return snd_dma_sg_fallback_alloc(dmab, size);
553553- }
554553#endif
555554 if (!sgt)
556555 return NULL;
···716717
717718/* Fallback SG-buffer allocations for x86 */
718719struct snd_dma_sg_fallback {
720720+ bool use_dma_alloc_coherent;
719721 size_t count;
720722 struct page **pages;
723723+ /* DMA address array; the first page contains #pages in ~PAGE_MASK */
724724+ dma_addr_t *addrs;
721725};
722726
723727static void __snd_dma_sg_fallback_free(struct snd_dma_buffer *dmab,
724728 struct snd_dma_sg_fallback *sgbuf)
725729{
726726- bool wc = dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK;
727727- size_t i;
730730+ size_t i, size;
728731
729729- for (i = 0; i < sgbuf->count && sgbuf->pages[i]; i++)
730730- do_free_pages(page_address(sgbuf->pages[i]), PAGE_SIZE, wc);
732732+ if (sgbuf->pages && sgbuf->addrs) {
733733+ i = 0;
734734+ while (i < sgbuf->count) {
735735+ if (!sgbuf->pages[i] || !sgbuf->addrs[i])
736736+ break;
737737+ size = sgbuf->addrs[i] & ~PAGE_MASK;
738738+ if (WARN_ON(!size))
739739+ break;
740740+ if (sgbuf->use_dma_alloc_coherent)
741741+ dma_free_coherent(dmab->dev.dev, size << PAGE_SHIFT,
742742+ page_address(sgbuf->pages[i]),
743743+ sgbuf->addrs[i] & PAGE_MASK);
744744+ else
745745+ do_free_pages(page_address(sgbuf->pages[i]),
746746+ size << PAGE_SHIFT, false);
747747+ i += size;
748748+ }
749749+ }
731750 kvfree(sgbuf->pages);
751751+ kvfree(sgbuf->addrs);
732752 kfree(sgbuf);
733753}
734754
···756738 struct snd_dma_sg_fallback *sgbuf;
757739 struct page **pagep, *curp;
758740 size_t chunk, npages;
741741+ dma_addr_t *addrp;
759742 dma_addr_t addr;
760743 void *p;
761761- bool wc = dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK;
744744+
745745+ /* correct the type */
746746+ if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_SG)
747747+ dmab->dev.type = SNDRV_DMA_TYPE_DEV_SG_FALLBACK;
748748+ else if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG)
749749+ dmab->dev.type = SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK;
762750
763751 sgbuf = kzalloc(sizeof(*sgbuf), GFP_KERNEL);
764752 if (!sgbuf)
765753 return NULL;
754754+ sgbuf->use_dma_alloc_coherent = cpu_feature_enabled(X86_FEATURE_XENPV);
766755 size = PAGE_ALIGN(size);
767756 sgbuf->count = size >> PAGE_SHIFT;
768757 sgbuf->pages = kvcalloc(sgbuf->count, sizeof(*sgbuf->pages), GFP_KERNEL);
769769- if (!sgbuf->pages)
758758+ sgbuf->addrs = kvcalloc(sgbuf->count, sizeof(*sgbuf->addrs), GFP_KERNEL);
759759+ if (!sgbuf->pages || !sgbuf->addrs)
770760 goto error;
771761
772762 pagep = sgbuf->pages;
773773- chunk = size;
763763+ addrp = sgbuf->addrs;
764764+ chunk = (PAGE_SIZE - 1) << PAGE_SHIFT; /* to fit in low bits in addrs */
774765 while (size > 0) {
775766 chunk = min(size, chunk);
776776- p = do_alloc_pages(dmab->dev.dev, chunk, &addr, wc);
767767+ if (sgbuf->use_dma_alloc_coherent)
768768+ p = dma_alloc_coherent(dmab->dev.dev, chunk, &addr, DEFAULT_GFP);
769769+ else
770770+ p = do_alloc_pages(dmab->dev.dev, chunk, &addr, false);
777771 if (!p) {
778772 if (chunk <= PAGE_SIZE)
779773 goto error;
···797767 size -= chunk;
798768 /* fill pages */
799769 npages = chunk >> PAGE_SHIFT;
770770+ *addrp = npages; /* store in lower bits */
800771 curp = virt_to_page(p);
801801- while (npages--)
772772+ while (npages--) {
802773 *pagep++ = curp++;
774774+ *addrp++ |= addr;
775775+ addr += PAGE_SIZE;
776776+ }
803777 }
804778
805779 p = vmap(sgbuf->pages, sgbuf->count, VM_MAP, PAGE_KERNEL);
806780 if (!p)
807781 goto error;
782782+
783783+ if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK)
784784+ set_pages_array_wc(sgbuf->pages, sgbuf->count);
785785+
808786 dmab->private_data = sgbuf;
809787 /* store the first page address for convenience */
810810- dmab->addr = snd_sgbuf_get_addr(dmab, 0);
788788+ dmab->addr = sgbuf->addrs[0] & PAGE_MASK;
811789 return p;
812790
813791 error:
···825787
826788static void snd_dma_sg_fallback_free(struct snd_dma_buffer *dmab)
827789{
790790+ struct snd_dma_sg_fallback *sgbuf = dmab->private_data;
791791+
792792+ if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK)
793793+ set_pages_array_wb(sgbuf->pages, sgbuf->count);
828794 vunmap(dmab->area);
829795 __snd_dma_sg_fallback_free(dmab, dmab->private_data);
796796+}
797797+
798798+static dma_addr_t snd_dma_sg_fallback_get_addr(struct snd_dma_buffer *dmab,
799799+ size_t offset)
800800+{
801801+ struct snd_dma_sg_fallback *sgbuf = dmab->private_data;
802802+ size_t index = offset >> PAGE_SHIFT;
803803+
804804+ return (sgbuf->addrs[index] & PAGE_MASK) | (offset & ~PAGE_MASK);
830805}
831806
832807static int snd_dma_sg_fallback_mmap(struct snd_dma_buffer *dmab,
···856805 .alloc = snd_dma_sg_fallback_alloc,
857806 .free = snd_dma_sg_fallback_free,
858807 .mmap = snd_dma_sg_fallback_mmap,
808808+ .get_addr = snd_dma_sg_fallback_get_addr,
859809 /* reuse vmalloc helpers */
860860- .get_addr = snd_dma_vmalloc_get_addr,
861810 .get_page = snd_dma_vmalloc_get_page,
862811 .get_chunk_size = snd_dma_vmalloc_get_chunk_size,
863812};
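The fallback allocator above packs two facts into one `dma_addr_t` array: chunk DMA addresses are page-aligned, so the low `PAGE_SHIFT` bits of the entry for a chunk's first page are free to hold the chunk's page count. Below is a minimal userspace sketch of that encoding, not the kernel code; the fixed `PAGE_SHIFT` of 12 and the helper names are assumptions for illustration.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Userspace sketch of the SG-fallback addrs[] trick: addresses are
 * page-aligned, so the low PAGE_SHIFT bits of a chunk's first entry
 * are free to store the chunk's page count. Illustrative names only.
 */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

typedef uint64_t dma_addr_t;

/* Record one chunk of npages pages starting at page-aligned addr. */
static void fill_chunk(dma_addr_t *addrp, dma_addr_t addr, size_t npages)
{
	*addrp = npages;            /* page count goes into the low bits */
	while (npages--) {
		*addrp++ |= addr;   /* high bits: per-page DMA address */
		addr += PAGE_SIZE;
	}
}

static dma_addr_t chunk_addr(const dma_addr_t *addrs, size_t index)
{
	return addrs[index] & PAGE_MASK;
}

static size_t chunk_pages(const dma_addr_t *addrs, size_t index)
{
	return addrs[index] & ~PAGE_MASK;
}
```

This is why `__snd_dma_sg_fallback_free()` can walk the buffer chunk by chunk (`i += size`) without storing a separate chunk list.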
···344344 size_t count, loff_t *ppos)
345345{
346346 struct sof_mtrace_priv *priv = file->private_data;
347347- int id, ret;
347347+ unsigned int id;
348348 char *buf;
349349 u32 mask;
350350+ int ret;
350351
351352 /*
352353 * To update Nth mask entry, write:
···358357 if (IS_ERR(buf))
359358 return PTR_ERR(buf);
360359
361361- ret = sscanf(buf, "%d,0x%x", &id, &mask);
360360+ ret = sscanf(buf, "%u,0x%x", &id, &mask);
362361 if (ret != 2) {
363363- ret = sscanf(buf, "%d,%x", &id, &mask);
362362+ ret = sscanf(buf, "%u,%x", &id, &mask);
364363 if (ret != 2) {
365364 ret = -EINVAL;
366365 goto out;
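The mtrace hunk switches `id` to `unsigned int` and the `sscanf` conversions from `%d` to `%u`, keeping the two-step format fallback that accepts the mask with or without a `0x` prefix. A small userspace sketch of that parse; `parse_mask_update` is a hypothetical stand-in for the debugfs write handler, not a kernel function.

```c
#include <assert.h>
#include <stdio.h>

/* Sketch of the debugfs parser's two-step format: accept either
 * "N,0xMASK" or "N,MASK" for a mask update string. The id conversion
 * is unsigned, matching the fixed kernel code.
 */
static int parse_mask_update(const char *buf, unsigned int *id,
			     unsigned int *mask)
{
	if (sscanf(buf, "%u,0x%x", id, mask) == 2)
		return 0;
	if (sscanf(buf, "%u,%x", id, mask) == 2)
		return 0;
	return -1;	/* corresponds to the kernel's -EINVAL path */
}
```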
+9-7
sound/soc/sof/sof-audio.c
···271271 struct snd_sof_widget *swidget = widget->dobj.private;
272272 struct snd_soc_dapm_path *p;
273273
274274- /* return if the widget is in use or if it is already unprepared */
275275- if (!swidget->prepared || swidget->use_count > 1)
276276- return;
274274+ /* skip if the widget is in use or if it is already unprepared */
275275+ if (!swidget || !swidget->prepared || swidget->use_count > 0)
276276+ goto sink_unprepare;
277277
278278 if (widget_ops[widget->id].ipc_unprepare)
279279 /* unprepare the source widget */
···281281
282282 swidget->prepared = false;
283283
284284+sink_unprepare:
284285 /* unprepare all widgets in the sink paths */
285286 snd_soc_dapm_widget_for_each_sink_path(widget, p) {
286287 if (!p->walking && p->sink->dobj.private) {
···304303 struct snd_soc_dapm_path *p;
305304 int ret;
306305
307307- if (!widget_ops[widget->id].ipc_prepare || swidget->prepared)
306306+ if (!swidget || !widget_ops[widget->id].ipc_prepare || swidget->prepared)
308307 goto sink_prepare;
309308
310309 /* prepare the source widget */
···327326 p->walking = false;
328327 if (ret < 0) {
329328 /* unprepare the source widget */
330330- if (widget_ops[widget->id].ipc_unprepare && swidget->prepared) {
329329+ if (widget_ops[widget->id].ipc_unprepare &&
330330+ swidget && swidget->prepared) {
331331 widget_ops[widget->id].ipc_unprepare(swidget);
332332 swidget->prepared = false;
333333 }
···431429
432430 for_each_dapm_widgets(list, i, widget) {
433431 /* starting widget for playback is AIF type */
434434- if (dir == SNDRV_PCM_STREAM_PLAYBACK && !WIDGET_IS_AIF(widget->id))
432432+ if (dir == SNDRV_PCM_STREAM_PLAYBACK && widget->id != snd_soc_dapm_aif_in)
435433 continue;
436434
437435 /* starting widget for capture is DAI type */
438438- if (dir == SNDRV_PCM_STREAM_CAPTURE && !WIDGET_IS_DAI(widget->id))
436436+ if (dir == SNDRV_PCM_STREAM_CAPTURE && widget->id != snd_soc_dapm_dai_out)
439437 continue;
440438
441439 switch (op) {
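The sof-audio change replaces the early `return` with a `goto sink_unprepare`, so a widget that cannot be unprepared itself (missing private data, still in use, or already unprepared) no longer stops the walk into its sink paths. A toy sketch of that control flow; `struct node` and `unprepare_paths` are illustrative names, with a single sink per node instead of the DAPM path list.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the unprepare fix: skip the current node when it cannot
 * be unprepared, but keep walking downstream so sinks are still
 * unprepared. Illustrative structure, not the SOF data model.
 */
struct node {
	int use_count;
	int prepared;
	struct node *sink;	/* single sink path for simplicity */
};

static void unprepare_paths(struct node *n)
{
	if (!n)
		return;

	/* skip if the node is in use or already unprepared */
	if (!n->prepared || n->use_count > 0)
		goto sink_unprepare;

	n->prepared = 0;

sink_unprepare:
	/* unprepare all nodes further down the sink path */
	unprepare_paths(n->sink);
}
```

With the old early `return`, a busy node in the middle of a path would have left every widget behind it prepared.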
···3030#define MAX_STRERR_LEN 256
3131#define MAX_TEST_NAME 80
3232
3333+#define __always_unused __attribute__((__unused__))
3434+
3335#define _FAIL(errnum, fmt...) \
3436 ({ \
3537 error_at_line(0, (errnum), __func__, __LINE__, fmt); \
···323321 return socket_loopback_reuseport(family, sotype, -1);
324322}
325323
326326-static void test_insert_invalid(int family, int sotype, int mapfd)
324324+static void test_insert_invalid(struct test_sockmap_listen *skel __always_unused,
325325+ int family, int sotype, int mapfd)
327326{
328327 u32 key = 0;
329328 u64 value;
···341338 FAIL_ERRNO("map_update: expected EBADF");
342339}
343340
344344-static void test_insert_opened(int family, int sotype, int mapfd)
341341+static void test_insert_opened(struct test_sockmap_listen *skel __always_unused,
342342+ int family, int sotype, int mapfd)
345343{
346344 u32 key = 0;
347345 u64 value;
···363359 xclose(s);
364360}
365361
366366-static void test_insert_bound(int family, int sotype, int mapfd)
362362+static void test_insert_bound(struct test_sockmap_listen *skel __always_unused,
363363+ int family, int sotype, int mapfd)
367364{
368365 struct sockaddr_storage addr;
369366 socklen_t len;
···391386 xclose(s);
392387}
393388
394394-static void test_insert(int family, int sotype, int mapfd)
389389+static void test_insert(struct test_sockmap_listen *skel __always_unused,
390390+ int family, int sotype, int mapfd)
395391{
396392 u64 value;
397393 u32 key;
···408402 xclose(s);
409403}
410404
411411-static void test_delete_after_insert(int family, int sotype, int mapfd)
405405+static void test_delete_after_insert(struct test_sockmap_listen *skel __always_unused,
406406+ int family, int sotype, int mapfd)
412407{
413408 u64 value;
414409 u32 key;
···426419 xclose(s);
427420}
428421
429429-static void test_delete_after_close(int family, int sotype, int mapfd)
422422+static void test_delete_after_close(struct test_sockmap_listen *skel __always_unused,
423423+ int family, int sotype, int mapfd)
430424{
431425 int err, s;
432426 u64 value;
···450442 FAIL_ERRNO("map_delete: expected EINVAL/EINVAL");
451443}
452444
453453-static void test_lookup_after_insert(int family, int sotype, int mapfd)
445445+static void test_lookup_after_insert(struct test_sockmap_listen *skel __always_unused,
446446+ int family, int sotype, int mapfd)
454447{
455448 u64 cookie, value;
456449 socklen_t len;
···479470 xclose(s);
480471}
481472
482482-static void test_lookup_after_delete(int family, int sotype, int mapfd)
473473+static void test_lookup_after_delete(struct test_sockmap_listen *skel __always_unused,
474474+ int family, int sotype, int mapfd)
483475{
484476 int err, s;
485477 u64 value;
···503493 xclose(s);
504494}
505495
506506-static void test_lookup_32_bit_value(int family, int sotype, int mapfd)
496496+static void test_lookup_32_bit_value(struct test_sockmap_listen *skel __always_unused,
497497+ int family, int sotype, int mapfd)
507498{
508499 u32 key, value32;
509500 int err, s;
···534523 xclose(s);
535524}
536525
537537-static void test_update_existing(int family, int sotype, int mapfd)
526526+static void test_update_existing(struct test_sockmap_listen *skel __always_unused,
527527+ int family, int sotype, int mapfd)
538528{
539529 int s1, s2;
540530 u64 value;
···563551/* Exercise the code path where we destroy child sockets that never
564552 * got accept()'ed, aka orphans, when parent socket gets closed.
565553 */
566566-static void test_destroy_orphan_child(int family, int sotype, int mapfd)
554554+static void do_destroy_orphan_child(int family, int sotype, int mapfd)
567555{
568556 struct sockaddr_storage addr;
569557 socklen_t len;
···594582 xclose(s);
595583}
596584
585585+static void test_destroy_orphan_child(struct test_sockmap_listen *skel,
586586+ int family, int sotype, int mapfd)
587587+{
588588+ int msg_verdict = bpf_program__fd(skel->progs.prog_msg_verdict);
589589+ int skb_verdict = bpf_program__fd(skel->progs.prog_skb_verdict);
590590+ const struct test {
591591+ int progfd;
592592+ enum bpf_attach_type atype;
593593+ } tests[] = {
594594+ { -1, -1 },
595595+ { msg_verdict, BPF_SK_MSG_VERDICT },
596596+ { skb_verdict, BPF_SK_SKB_VERDICT },
597597+ };
598598+ const struct test *t;
599599+
600600+ for (t = tests; t < tests + ARRAY_SIZE(tests); t++) {
601601+ if (t->progfd != -1 &&
602602+ xbpf_prog_attach(t->progfd, mapfd, t->atype, 0) != 0)
603603+ return;
604604+
605605+ do_destroy_orphan_child(family, sotype, mapfd);
606606+
607607+ if (t->progfd != -1)
608608+ xbpf_prog_detach2(t->progfd, mapfd, t->atype);
609609+ }
610610+}
611611+
597612/* Perform a passive open after removing listening socket from SOCKMAP
598613 * to ensure that callbacks get restored properly.
599614 */
600600-static void test_clone_after_delete(int family, int sotype, int mapfd)
615615+static void test_clone_after_delete(struct test_sockmap_listen *skel __always_unused,
616616+ int family, int sotype, int mapfd)
601617{
602618 struct sockaddr_storage addr;
603619 socklen_t len;
···661621 * SOCKMAP, but got accept()'ed only after the parent has been removed
662622 * from SOCKMAP, gets cloned without parent psock state or callbacks.
663623 */
664664-static void test_accept_after_delete(int family, int sotype, int mapfd)
624624+static void test_accept_after_delete(struct test_sockmap_listen *skel __always_unused,
625625+ int family, int sotype, int mapfd)
665626{
666627 struct sockaddr_storage addr;
667628 const u32 zero = 0;
···716675/* Check that child socket that got created and accepted while parent
717676 * was in a SOCKMAP is cloned without parent psock state or callbacks.
718677 */
719719-static void test_accept_before_delete(int family, int sotype, int mapfd)
678678+static void test_accept_before_delete(struct test_sockmap_listen *skel __always_unused,
679679+ int family, int sotype, int mapfd)
720680{
721681 struct sockaddr_storage addr;
722682 const u32 zero = 0, one = 1;
···826784 return NULL;
827785}
828786
829829-static void test_syn_recv_insert_delete(int family, int sotype, int mapfd)
787787+static void test_syn_recv_insert_delete(struct test_sockmap_listen *skel __always_unused,
788788+ int family, int sotype, int mapfd)
830789{
831790 struct connect_accept_ctx ctx = { 0 };
832791 struct sockaddr_storage addr;
···890847 return NULL;
891848}
892849
893893-static void test_race_insert_listen(int family, int socktype, int mapfd)
850850+static void test_race_insert_listen(struct test_sockmap_listen *skel __always_unused,
851851+ int family, int socktype, int mapfd)
894852{
895853 struct connect_accept_ctx ctx = { 0 };
896854 const u32 zero = 0;
···15171473 int family, int sotype)
15181474{
15191475 const struct op_test {
15201520- void (*fn)(int family, int sotype, int mapfd);
14761476+ void (*fn)(struct test_sockmap_listen *skel,
14771477+ int family, int sotype, int mapfd);
15211478 const char *name;
15221479 int sotype;
15231480 } tests[] = {
···15651520 if (!test__start_subtest(s))
15661521 continue;
15671522
15681568- t->fn(family, sotype, map_fd);
15231523+ t->fn(skel, family, sotype, map_fd);
15691524 test_ops_cleanup(map);
15701525 }
15711526}
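The sockmap refactor threads the skeleton into every test function so that `test_destroy_orphan_child` can run the same scenario under a table of verdict programs, using `progfd == -1` for the no-program case. A userspace sketch of that attach/run/detach loop; the stub functions and counters are stand-ins for the libbpf calls and the real scenario, added only to make the pattern observable.

```c
#include <assert.h>
#include <stddef.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Sketch of the table-driven pattern: run the scenario once with no
 * BPF program attached (progfd == -1) and once per verdict program.
 * Counters record what the stubs were asked to do.
 */
static int runs, attaches, detaches;

static int stub_attach(int progfd)
{
	(void)progfd;
	attaches++;
	return 0;
}

static void stub_detach(int progfd)
{
	(void)progfd;
	detaches++;
}

static void run_scenario(void)
{
	runs++;
}

static void run_all(const int *progfds, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (progfds[i] != -1 && stub_attach(progfds[i]) != 0)
			return;

		run_scenario();

		if (progfds[i] != -1)
			stub_detach(progfds[i]);
	}
}
```

The `__always_unused` annotation on the other tests keeps the compiler quiet about the new `skel` parameter they do not use, while keeping one uniform function-pointer type for the `op_test` table.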