Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

net/mac80211/key.c
02e0e426a2fb ("wifi: mac80211: fix error path key leak")
2a8b665e6bcc ("wifi: mac80211: remove key_mtx")
7d6904bf26b9 ("Merge wireless into wireless-next")
https://lore.kernel.org/all/20231012113648.46eea5ec@canb.auug.org.au/

Adjacent changes:

drivers/net/ethernet/ti/Kconfig
a602ee3176a8 ("net: ethernet: ti: Fix mixed module-builtin object")
98bdeae9502b ("net: cpmac: remove driver to prepare for platform removal")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+2810 -1465
+7 -7
Documentation/ABI/testing/sysfs-class-firmware
···
 What:		/sys/class/firmware/.../data
 Date:		July 2022
 KernelVersion:	5.19
-Contact:	Russ Weight <russell.h.weight@intel.com>
+Contact:	Russ Weight <russ.weight@linux.dev>
 Description:	The data sysfs file is used for firmware-fallback and for
 		firmware uploads. Cat a firmware image to this sysfs file
 		after you echo 1 to the loading sysfs file. When the firmware
···
 What:		/sys/class/firmware/.../cancel
 Date:		July 2022
 KernelVersion:	5.19
-Contact:	Russ Weight <russell.h.weight@intel.com>
+Contact:	Russ Weight <russ.weight@linux.dev>
 Description:	Write-only. For firmware uploads, write a "1" to this file to
 		request that the transfer of firmware data to the lower-level
 		device be canceled. This request will be rejected (EBUSY) if
···
 What:		/sys/class/firmware/.../error
 Date:		July 2022
 KernelVersion:	5.19
-Contact:	Russ Weight <russell.h.weight@intel.com>
+Contact:	Russ Weight <russ.weight@linux.dev>
 Description:	Read-only. Returns a string describing a failed firmware
 		upload. This string will be in the form of <STATUS>:<ERROR>,
 		where <STATUS> will be one of the status strings described
···
 What:		/sys/class/firmware/.../loading
 Date:		July 2022
 KernelVersion:	5.19
-Contact:	Russ Weight <russell.h.weight@intel.com>
+Contact:	Russ Weight <russ.weight@linux.dev>
 Description:	The loading sysfs file is used for both firmware-fallback and
 		for firmware uploads. Echo 1 onto the loading file to indicate
 		you are writing a firmware file to the data sysfs node. Echo
···
 What:		/sys/class/firmware/.../remaining_size
 Date:		July 2022
 KernelVersion:	5.19
-Contact:	Russ Weight <russell.h.weight@intel.com>
+Contact:	Russ Weight <russ.weight@linux.dev>
 Description:	Read-only. For firmware upload, this file contains the size
 		of the firmware data that remains to be transferred to the
 		lower-level device driver. The size value is initialized to
···
 What:		/sys/class/firmware/.../status
 Date:		July 2022
 KernelVersion:	5.19
-Contact:	Russ Weight <russell.h.weight@intel.com>
+Contact:	Russ Weight <russ.weight@linux.dev>
 Description:	Read-only. Returns a string describing the current status of
 		a firmware upload. The string will be one of the following:
 		idle, "receiving", "preparing", "transferring", "programming".
···
 What:		/sys/class/firmware/.../timeout
 Date:		July 2022
 KernelVersion:	5.19
-Contact:	Russ Weight <russell.h.weight@intel.com>
+Contact:	Russ Weight <russ.weight@linux.dev>
 Description:	This file supports the timeout mechanism for firmware
 		fallback. This file has no affect on firmware uploads. For
 		more information on timeouts please see the documentation
+2 -2
Documentation/core-api/workqueue.rst
···
 time thus achieving the same ordering property as ST wq.
 
 In the current implementation the above configuration only guarantees
-ST behavior within a given NUMA node. Instead ``alloc_ordered_queue()`` should
+ST behavior within a given NUMA node. Instead ``alloc_ordered_workqueue()`` should
 be used to achieve system-wide ST behavior.
···
 scope can be changed using ``apply_workqueue_attrs()``.
 
 If ``WQ_SYSFS`` is set, the workqueue will have the following affinity scope
-related interface files under its ``/sys/devices/virtual/WQ_NAME/``
+related interface files under its ``/sys/devices/virtual/workqueue/WQ_NAME/``
 directory.
 
 ``affinity_scope``
+3
Documentation/devicetree/bindings/dma/xilinx/xlnx,zynqmp-dma-1.0.yaml
···
 
 maintainers:
   - Michael Tretter <m.tretter@pengutronix.de>
+  - Harini Katakam <harini.katakam@amd.com>
+  - Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
 
 allOf:
   - $ref: ../dma-controller.yaml#
···
   - interrupts
   - clocks
   - clock-names
+  - xlnx,bus-width
 
 additionalProperties: false
 
+1 -1
Documentation/devicetree/bindings/iio/adc/adi,ad7292.yaml
···
 required:
   - reg
 
-additionalProperties: true
+additionalProperties: false
 
 allOf:
   - $ref: /schemas/spi/spi-peripheral-props.yaml#
+1
Documentation/devicetree/bindings/iio/light/rohm,bu27010.yaml
···
     light-sensor@38 {
         compatible = "rohm,bu27010";
         reg = <0x38>;
+        vdd-supply = <&vdd>;
     };
 };
+12
Documentation/filesystems/overlayfs.rst
···
 rightmost one and going left. In the above example lower1 will be the
 top, lower2 the middle and lower3 the bottom layer.
 
+Note: directory names containing colons can be provided as lower layer by
+escaping the colons with a single backslash.  For example:
+
+  mount -t overlay overlay -olowerdir=/a\:lower\:\:dir /merged
+
+Since kernel version v6.5, directory names containing colons can also
+be provided as lower layer using the fsconfig syscall from new mount api:
+
+  fsconfig(fs_fd, FSCONFIG_SET_STRING, "lowerdir", "/a:lower::dir", 0);
+
+In the latter case, colons in lower layer directory names will be escaped
+as an octal characters (\072) when displayed in /proc/self/mountinfo.
 
 Metadata only copy up
 ---------------------
+9 -9
Documentation/netlink/specs/devlink.yaml
···
             - dev-name
             - sb-index
         reply: &sb-get-reply
-          value: 11
+          value: 13
           attributes: *sb-id-attrs
       dump:
         request:
···
             - sb-index
             - sb-pool-index
         reply: &sb-pool-get-reply
-          value: 15
+          value: 17
           attributes: *sb-pool-id-attrs
       dump:
         request:
···
             - sb-index
             - sb-pool-index
         reply: &sb-port-pool-get-reply
-          value: 19
+          value: 21
           attributes: *sb-port-pool-id-attrs
       dump:
         request:
···
             - sb-pool-type
             - sb-tc-index
         reply: &sb-tc-pool-bind-get-reply
-          value: 23
+          value: 25
           attributes: *sb-tc-pool-bind-id-attrs
       dump:
         request:
···
             - dev-name
             - trap-name
         reply: &trap-get-reply
-          value: 61
+          value: 63
           attributes: *trap-id-attrs
       dump:
         request:
···
             - dev-name
             - trap-group-name
         reply: &trap-group-get-reply
-          value: 65
+          value: 67
           attributes: *trap-group-id-attrs
       dump:
         request:
···
             - dev-name
             - trap-policer-id
         reply: &trap-policer-get-reply
-          value: 69
+          value: 71
           attributes: *trap-policer-id-attrs
       dump:
         request:
···
             - port-index
             - rate-node-name
         reply: &rate-get-reply
-          value: 74
+          value: 76
           attributes: *rate-id-attrs
       dump:
         request:
···
             - dev-name
             - linecard-index
         reply: &linecard-get-reply
-          value: 78
+          value: 80
           attributes: *linecard-id-attrs
       dump:
         request:
+5 -3
Documentation/networking/representors.rst
···
 The representor netdevice should *not* directly refer to a PCIe device (e.g.
 through ``net_dev->dev.parent`` / ``SET_NETDEV_DEV()``), either of the
 representee or of the switchdev function.
-Instead, it should implement the ``ndo_get_devlink_port()`` netdevice op, which
-the kernel uses to provide the ``phys_switch_id`` and ``phys_port_name`` sysfs
-nodes. (Some legacy drivers implement ``ndo_get_port_parent_id()`` and
+Instead, the driver should use the ``SET_NETDEV_DEVLINK_PORT`` macro to
+assign a devlink port instance to the netdevice before registering the
+netdevice; the kernel uses the devlink port to provide the ``phys_switch_id``
+and ``phys_port_name`` sysfs nodes.
+(Some legacy drivers implement ``ndo_get_port_parent_id()`` and
 ``ndo_get_phys_port_name()`` directly, but this is deprecated.) See
 :ref:`Documentation/networking/devlink/devlink-port.rst <devlink_port>` for the
 details of this API.
+12 -7
Documentation/process/embargoed-hardware-issues.rst
···
 The Linux kernel hardware security team is separate from the regular Linux
 kernel security team.
 
-The team only handles the coordination of embargoed hardware security
-issues. Reports of pure software security bugs in the Linux kernel are not
+The team only handles developing fixes for embargoed hardware security
+issues. Reports of pure software security bugs in the Linux kernel are not
 handled by this team and the reporter will be guided to contact the regular
 Linux kernel security team (:ref:`Documentation/admin-guide/
 <securitybugs>`) instead.
 
 The team can be contacted by email at <hardware-security@kernel.org>. This
-is a private list of security officers who will help you to coordinate an
-issue according to our documented process.
+is a private list of security officers who will help you to coordinate a
+fix according to our documented process.
 
 The list is encrypted and email to the list can be sent by either PGP or
 S/MIME encrypted and must be signed with the reporter's PGP key or S/MIME
···
 The hardware security team will provide an incident-specific encrypted
 mailing-list which will be used for initial discussion with the reporter,
-further disclosure and coordination.
+further disclosure, and coordination of fixes.
 
 The hardware security team will provide the disclosing party a list of
 developers (domain experts) who should be informed initially about the
-issue after confirming with the developers that they will adhere to this
+issue after confirming with the developers that they will adhere to this
 Memorandum of Understanding and the documented process. These developers
 form the initial response team and will be responsible for handling the
 issue after initial contact. The hardware security team is supporting the
···
 After acknowledgement or resolution of an objection the expert is disclosed
 by the incident team and brought into the development process.
 
+List participants may not communicate about the issue outside of the
+private mailing list. List participants may not use any shared resources
+(e.g. employer build farms, CI systems, etc) when working on patches.
+
 
 Coordinated release
 """""""""""""""""""
 
 The involved parties will negotiate the date and time where the embargo
 ends. At that point the prepared mitigations are integrated into the
-relevant kernel trees and published.
+relevant kernel trees and published. There is no pre-notification process:
+fixes are published in public and available to everyone at the same time.
 
 While we understand that hardware security issues need coordinated embargo
 time, the embargo time should be constrained to the minimum time which is
+6 -2
Documentation/trace/fprobe.rst
···
 
 .. code-block:: c
 
- int entry_callback(struct fprobe *fp, unsigned long entry_ip, struct pt_regs *regs, void *entry_data);
+ int entry_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct pt_regs *regs, void *entry_data);
 
- void exit_callback(struct fprobe *fp, unsigned long entry_ip, struct pt_regs *regs, void *entry_data);
+ void exit_callback(struct fprobe *fp, unsigned long entry_ip, unsigned long ret_ip, struct pt_regs *regs, void *entry_data);
 
 Note that the @entry_ip is saved at function entry and passed to exit handler.
 If the entry callback function returns !0, the corresponding exit callback will be cancelled.
···
 This is the ftrace address of the traced function (both entry and exit).
 Note that this may not be the actual entry address of the function but
 the address where the ftrace is instrumented.
 
+@ret_ip
+ This is the return address that the traced function will return to,
+ somewhere in the caller. This can be used at both entry and exit.
+
 @regs
 This is the `pt_regs` data structure at the entry and exit. Note that
+1 -1
Documentation/translations/zh_CN/core-api/workqueue.rst
···
 同的排序属性。
 
 在目前的实现中,上述配置只保证了特定NUMA节点内的ST行为。相反,
-``alloc_ordered_queue()`` 应该被用来实现全系统的ST行为。
+``alloc_ordered_workqueue()`` 应该被用来实现全系统的ST行为。
 
 
 执行场景示例
+1 -1
MAINTAINERS
···
 
 FIRMWARE LOADER (request_firmware)
 M:	Luis Chamberlain <mcgrof@kernel.org>
-M:	Russ Weight <russell.h.weight@intel.com>
+M:	Russ Weight <russ.weight@linux.dev>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	Documentation/firmware_class/
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 6
 SUBLEVEL = 0
-EXTRAVERSION = -rc5
+EXTRAVERSION = -rc6
 NAME = Hurr durr I'ma ninja sloth
 
 # *DOCUMENTATION*
+2 -2
arch/arm64/include/asm/kvm_arm.h
···
  */
 #define __HFGRTR_EL2_RES0	(GENMASK(63, 56) | GENMASK(53, 51))
 #define __HFGRTR_EL2_MASK	GENMASK(49, 0)
-#define __HFGRTR_EL2_nMASK	(GENMASK(55, 54) | BIT(50))
+#define __HFGRTR_EL2_nMASK	(GENMASK(58, 57) | GENMASK(55, 54) | BIT(50))
 
 #define __HFGWTR_EL2_RES0	(GENMASK(63, 56) | GENMASK(53, 51) | \
 				 BIT(46) | BIT(42) | BIT(40) | BIT(28) | \
 				 GENMASK(26, 25) | BIT(21) | BIT(18) | \
 				 GENMASK(15, 14) | GENMASK(10, 9) | BIT(2))
 #define __HFGWTR_EL2_MASK	GENMASK(49, 0)
-#define __HFGWTR_EL2_nMASK	(GENMASK(55, 54) | BIT(50))
+#define __HFGWTR_EL2_nMASK	(GENMASK(58, 57) | GENMASK(55, 54) | BIT(50))
 
 #define __HFGITR_EL2_RES0	GENMASK(63, 57)
 #define __HFGITR_EL2_MASK	GENMASK(54, 0)
+3 -10
arch/arm64/kvm/arch_timer.c
···
 	.get_input_level = kvm_arch_timer_get_input_level,
 };
 
-static bool has_cntpoff(void)
-{
-	return (has_vhe() && cpus_have_final_cap(ARM64_HAS_ECV_CNTPOFF));
-}
-
 static int nr_timers(struct kvm_vcpu *vcpu)
 {
 	if (!vcpu_has_nv(vcpu))
···
 	return timecounter->cc->read(timecounter->cc);
 }
 
-static void get_timer_map(struct kvm_vcpu *vcpu, struct timer_map *map)
+void get_timer_map(struct kvm_vcpu *vcpu, struct timer_map *map)
 {
 	if (vcpu_has_nv(vcpu)) {
 		if (is_hyp_ctxt(vcpu)) {
···
 	timer_set_ctl(ctx, read_sysreg_el0(SYS_CNTP_CTL));
 	cval = read_sysreg_el0(SYS_CNTP_CVAL);
 
-	if (!has_cntpoff())
-		cval -= timer_get_offset(ctx);
+	cval -= timer_get_offset(ctx);
 
 	timer_set_cval(ctx, cval);
 
···
 	cval = timer_get_cval(ctx);
 	offset = timer_get_offset(ctx);
 	set_cntpoff(offset);
-	if (!has_cntpoff())
-		cval += offset;
+	cval += offset;
 	write_sysreg_el0(cval, SYS_CNTP_CVAL);
 	isb();
 	write_sysreg_el0(timer_get_ctl(ctx), SYS_CNTP_CTL);
+2
arch/arm64/kvm/emulate-nested.c
···
 
 static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
 	/* HFGRTR_EL2, HFGWTR_EL2 */
+	SR_FGT(SYS_PIR_EL1, HFGxTR, nPIR_EL1, 0),
+	SR_FGT(SYS_PIRE0_EL1, HFGxTR, nPIRE0_EL1, 0),
 	SR_FGT(SYS_TPIDR2_EL0, HFGxTR, nTPIDR2_EL0, 0),
 	SR_FGT(SYS_SMPRI_EL1, HFGxTR, nSMPRI_EL1, 0),
 	SR_FGT(SYS_ACCDATA_EL1, HFGxTR, nACCDATA_EL1, 0),
+44
arch/arm64/kvm/hyp/vhe/switch.c
···
 
 	___activate_traps(vcpu);
 
+	if (has_cntpoff()) {
+		struct timer_map map;
+
+		get_timer_map(vcpu, &map);
+
+		/*
+		 * We're entrering the guest. Reload the correct
+		 * values from memory now that TGE is clear.
+		 */
+		if (map.direct_ptimer == vcpu_ptimer(vcpu))
+			val = __vcpu_sys_reg(vcpu, CNTP_CVAL_EL0);
+		if (map.direct_ptimer == vcpu_hptimer(vcpu))
+			val = __vcpu_sys_reg(vcpu, CNTHP_CVAL_EL2);
+
+		if (map.direct_ptimer) {
+			write_sysreg_el0(val, SYS_CNTP_CVAL);
+			isb();
+		}
+	}
+
 	val = read_sysreg(cpacr_el1);
 	val |= CPACR_ELx_TTA;
 	val &= ~(CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN |
···
 	___deactivate_traps(vcpu);
 
 	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+
+	if (has_cntpoff()) {
+		struct timer_map map;
+		u64 val, offset;
+
+		get_timer_map(vcpu, &map);
+
+		/*
+		 * We're exiting the guest. Save the latest CVAL value
+		 * to memory and apply the offset now that TGE is set.
+		 */
+		val = read_sysreg_el0(SYS_CNTP_CVAL);
+		if (map.direct_ptimer == vcpu_ptimer(vcpu))
+			__vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = val;
+		if (map.direct_ptimer == vcpu_hptimer(vcpu))
+			__vcpu_sys_reg(vcpu, CNTHP_CVAL_EL2) = val;
+
+		offset = read_sysreg_s(SYS_CNTPOFF_EL2);
+
+		if (map.direct_ptimer && offset) {
+			write_sysreg_el0(val + offset, SYS_CNTP_CVAL);
+			isb();
+		}
+	}
 
 	/*
 	 * ARM errata 1165522 and 1530923 require the actual execution of the
+2 -2
arch/arm64/kvm/pmu.c
···
 {
 	struct kvm_pmu_events *pmu = kvm_get_pmu_events();
 
-	if (!kvm_arm_support_pmu_v3() || !pmu || !kvm_pmu_switch_needed(attr))
+	if (!kvm_arm_support_pmu_v3() || !kvm_pmu_switch_needed(attr))
 		return;
 
 	if (!attr->exclude_host)
···
 {
 	struct kvm_pmu_events *pmu = kvm_get_pmu_events();
 
-	if (!kvm_arm_support_pmu_v3() || !pmu)
+	if (!kvm_arm_support_pmu_v3())
 		return;
 
 	pmu->events_host &= ~clr;
+2 -2
arch/arm64/kvm/sys_regs.c
···
 	{ SYS_DESC(SYS_PMMIR_EL1), trap_raz_wi },
 
 	{ SYS_DESC(SYS_MAIR_EL1), access_vm_reg, reset_unknown, MAIR_EL1 },
-	{ SYS_DESC(SYS_PIRE0_EL1), access_vm_reg, reset_unknown, PIRE0_EL1 },
-	{ SYS_DESC(SYS_PIR_EL1), access_vm_reg, reset_unknown, PIR_EL1 },
+	{ SYS_DESC(SYS_PIRE0_EL1), NULL, reset_unknown, PIRE0_EL1 },
+	{ SYS_DESC(SYS_PIR_EL1), NULL, reset_unknown, PIR_EL1 },
 	{ SYS_DESC(SYS_AMAIR_EL1), access_vm_reg, reset_amair_el1, AMAIR_EL1 },
 
 	{ SYS_DESC(SYS_LORSA_EL1), trap_loregion },
-5
arch/ia64/include/asm/cpu.h
···
 
 DECLARE_PER_CPU(int, cpu_state);
 
-#ifdef CONFIG_HOTPLUG_CPU
-extern int arch_register_cpu(int num);
-extern void arch_unregister_cpu(int);
-#endif
-
 #endif /* _ASM_IA64_CPU_H_ */
+1 -1
arch/ia64/kernel/topology.c
···
 }
 EXPORT_SYMBOL(arch_unregister_cpu);
 #else
-static int __init arch_register_cpu(int num)
+int __init arch_register_cpu(int num)
 {
 	return register_cpu(&sysfs_cpus[num].cpu, num);
 }
+2 -3
arch/loongarch/include/asm/io.h
···
  * @offset:	bus address of the memory
  * @size:	size of the resource to map
  */
-extern pgprot_t pgprot_wc;
-
 #define ioremap_wc(offset, size)	\
-	ioremap_prot((offset), (size), pgprot_val(pgprot_wc))
+	ioremap_prot((offset), (size),	\
+		pgprot_val(wc_enabled ? PAGE_KERNEL_WUC : PAGE_KERNEL_SUC))
 
 #define ioremap_cache(offset, size)	\
 	ioremap_prot((offset), (size), pgprot_val(PAGE_KERNEL))
+8
arch/loongarch/include/asm/linkage.h
···
 	.cfi_endproc;					\
 	SYM_END(name, SYM_T_FUNC)
 
+#define SYM_CODE_START(name)				\
+	SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)	\
+	.cfi_startproc;
+
+#define SYM_CODE_END(name)				\
+	.cfi_endproc;					\
+	SYM_END(name, SYM_T_NONE)
+
 #endif
+3 -1
arch/loongarch/include/asm/pgtable-bits.h
···
 	return __pgprot(prot);
 }
 
+extern bool wc_enabled;
+
 #define pgprot_writecombine pgprot_writecombine
 
 static inline pgprot_t pgprot_writecombine(pgprot_t _prot)
 {
 	unsigned long prot = pgprot_val(_prot);
 
-	prot = (prot & ~_CACHE_MASK) | _CACHE_WUC;
+	prot = (prot & ~_CACHE_MASK) | (wc_enabled ? _CACHE_WUC : _CACHE_SUC);
 
 	return __pgprot(prot);
 }
+2 -2
arch/loongarch/kernel/entry.S
···
 	.text
 	.cfi_sections	.debug_frame
 	.align	5
-SYM_FUNC_START(handle_syscall)
+SYM_CODE_START(handle_syscall)
 	csrrd		t0, PERCPU_BASE_KS
 	la.pcrel	t1, kernelsp
 	add.d		t1, t1, t0
···
 	bl	do_syscall
 
 	RESTORE_ALL_AND_RET
-SYM_FUNC_END(handle_syscall)
+SYM_CODE_END(handle_syscall)
 _ASM_NOKPROBE(handle_syscall)
 
 SYM_CODE_START(ret_from_fork)
+8 -8
arch/loongarch/kernel/genex.S
···
 1:	jr	ra
 SYM_FUNC_END(__arch_cpu_idle)
 
-SYM_FUNC_START(handle_vint)
+SYM_CODE_START(handle_vint)
 	BACKUP_T0T1
 	SAVE_ALL
 	la_abs	t1, __arch_cpu_idle
···
 	la_abs	t0, do_vint
 	jirl	ra, t0, 0
 	RESTORE_ALL_AND_RET
-SYM_FUNC_END(handle_vint)
+SYM_CODE_END(handle_vint)
 
-SYM_FUNC_START(except_vec_cex)
+SYM_CODE_START(except_vec_cex)
 	b	cache_parity_error
-SYM_FUNC_END(except_vec_cex)
+SYM_CODE_END(except_vec_cex)
 
 	.macro	build_prep_badv
 	csrrd	t0, LOONGARCH_CSR_BADV
···
 
 	.macro	BUILD_HANDLER exception handler prep
 	.align	5
-	SYM_FUNC_START(handle_\exception)
+	SYM_CODE_START(handle_\exception)
 	666:
 	BACKUP_T0T1
 	SAVE_ALL
···
 	jirl	ra, t0, 0
 	668:
 	RESTORE_ALL_AND_RET
-	SYM_FUNC_END(handle_\exception)
+	SYM_CODE_END(handle_\exception)
 	SYM_DATA(unwind_hint_\exception, .word 668b - 666b)
 	.endm
···
 	BUILD_HANDLER watch watch none
 	BUILD_HANDLER reserved reserved none	/* others */
 
-SYM_FUNC_START(handle_sys)
+SYM_CODE_START(handle_sys)
 	la_abs	t0, handle_syscall
 	jr	t0
-SYM_FUNC_END(handle_sys)
+SYM_CODE_END(handle_sys)
+5 -5
arch/loongarch/kernel/setup.c
···
 }
 
 #ifdef CONFIG_ARCH_WRITECOMBINE
-pgprot_t pgprot_wc = PAGE_KERNEL_WUC;
+bool wc_enabled = true;
 #else
-pgprot_t pgprot_wc = PAGE_KERNEL_SUC;
+bool wc_enabled = false;
 #endif
 
-EXPORT_SYMBOL(pgprot_wc);
+EXPORT_SYMBOL(wc_enabled);
 
 static int __init setup_writecombine(char *p)
 {
 	if (!strcmp(p, "on"))
-		pgprot_wc = PAGE_KERNEL_WUC;
+		wc_enabled = true;
 	else if (!strcmp(p, "off"))
-		pgprot_wc = PAGE_KERNEL_SUC;
+		wc_enabled = false;
 	else
 		pr_warn("Unknown writecombine setting \"%s\".\n", p);
 
+5 -4
arch/loongarch/mm/init.c
···
 {
 	void *vfrom, *vto;
 
-	vto = kmap_atomic(to);
-	vfrom = kmap_atomic(from);
+	vfrom = kmap_local_page(from);
+	vto = kmap_local_page(to);
 	copy_page(vto, vfrom);
-	kunmap_atomic(vfrom);
-	kunmap_atomic(vto);
+	kunmap_local(vfrom);
+	kunmap_local(vto);
 	/* Make sure this page is cleared on other CPU's too before using it */
 	smp_wmb();
 }
···
 pgd_t invalid_pg_dir[_PTRS_PER_PGD] __page_aligned_bss;
 #ifndef __PAGETABLE_PUD_FOLDED
 pud_t invalid_pud_table[PTRS_PER_PUD] __page_aligned_bss;
+EXPORT_SYMBOL(invalid_pud_table);
 #endif
 #ifndef __PAGETABLE_PMD_FOLDED
 pmd_t invalid_pmd_table[PTRS_PER_PMD] __page_aligned_bss;
+18 -18
arch/loongarch/mm/tlbex.S
···
 #define PTRS_PER_PTE_BITS	(PAGE_SHIFT - 3)
 
 	.macro tlb_do_page_fault, write
-	SYM_FUNC_START(tlb_do_page_fault_\write)
+	SYM_CODE_START(tlb_do_page_fault_\write)
 	SAVE_ALL
 	csrrd	a2, LOONGARCH_CSR_BADV
 	move	a0, sp
···
 	li.w	a1, \write
 	bl	do_page_fault
 	RESTORE_ALL_AND_RET
-	SYM_FUNC_END(tlb_do_page_fault_\write)
+	SYM_CODE_END(tlb_do_page_fault_\write)
 	.endm
 
 	tlb_do_page_fault 0
 	tlb_do_page_fault 1
 
-SYM_FUNC_START(handle_tlb_protect)
+SYM_CODE_START(handle_tlb_protect)
 	BACKUP_T0T1
 	SAVE_ALL
 	move	a0, sp
···
 	la_abs	t0, do_page_fault
 	jirl	ra, t0, 0
 	RESTORE_ALL_AND_RET
-SYM_FUNC_END(handle_tlb_protect)
+SYM_CODE_END(handle_tlb_protect)
 
-SYM_FUNC_START(handle_tlb_load)
+SYM_CODE_START(handle_tlb_load)
 	csrwr	t0, EXCEPTION_KS0
 	csrwr	t1, EXCEPTION_KS1
 	csrwr	ra, EXCEPTION_KS2
···
 	csrrd	ra, EXCEPTION_KS2
 	la_abs	t0, tlb_do_page_fault_0
 	jr	t0
-SYM_FUNC_END(handle_tlb_load)
+SYM_CODE_END(handle_tlb_load)
 
-SYM_FUNC_START(handle_tlb_load_ptw)
+SYM_CODE_START(handle_tlb_load_ptw)
 	csrwr	t0, LOONGARCH_CSR_KS0
 	csrwr	t1, LOONGARCH_CSR_KS1
 	la_abs	t0, tlb_do_page_fault_0
 	jr	t0
-SYM_FUNC_END(handle_tlb_load_ptw)
+SYM_CODE_END(handle_tlb_load_ptw)
 
-SYM_FUNC_START(handle_tlb_store)
+SYM_CODE_START(handle_tlb_store)
 	csrwr	t0, EXCEPTION_KS0
 	csrwr	t1, EXCEPTION_KS1
 	csrwr	ra, EXCEPTION_KS2
···
 	csrrd	ra, EXCEPTION_KS2
 	la_abs	t0, tlb_do_page_fault_1
 	jr	t0
-SYM_FUNC_END(handle_tlb_store)
+SYM_CODE_END(handle_tlb_store)
 
-SYM_FUNC_START(handle_tlb_store_ptw)
+SYM_CODE_START(handle_tlb_store_ptw)
 	csrwr	t0, LOONGARCH_CSR_KS0
 	csrwr	t1, LOONGARCH_CSR_KS1
 	la_abs	t0, tlb_do_page_fault_1
 	jr	t0
-SYM_FUNC_END(handle_tlb_store_ptw)
+SYM_CODE_END(handle_tlb_store_ptw)
 
-SYM_FUNC_START(handle_tlb_modify)
+SYM_CODE_START(handle_tlb_modify)
 	csrwr	t0, EXCEPTION_KS0
 	csrwr	t1, EXCEPTION_KS1
 	csrwr	ra, EXCEPTION_KS2
···
 	csrrd	ra, EXCEPTION_KS2
 	la_abs	t0, tlb_do_page_fault_1
 	jr	t0
-SYM_FUNC_END(handle_tlb_modify)
+SYM_CODE_END(handle_tlb_modify)
 
-SYM_FUNC_START(handle_tlb_modify_ptw)
+SYM_CODE_START(handle_tlb_modify_ptw)
 	csrwr	t0, LOONGARCH_CSR_KS0
 	csrwr	t1, LOONGARCH_CSR_KS1
 	la_abs	t0, tlb_do_page_fault_1
 	jr	t0
-SYM_FUNC_END(handle_tlb_modify_ptw)
+SYM_CODE_END(handle_tlb_modify_ptw)
 
-SYM_FUNC_START(handle_tlb_refill)
+SYM_CODE_START(handle_tlb_refill)
 	csrwr	t0, LOONGARCH_CSR_TLBRSAVE
 	csrrd	t0, LOONGARCH_CSR_PGD
 	lddir	t0, t0, 3
···
 	tlbfill
 	csrrd	t0, LOONGARCH_CSR_TLBRSAVE
 	ertn
-SYM_FUNC_END(handle_tlb_refill)
+SYM_CODE_END(handle_tlb_refill)
+1 -2
arch/mips/kvm/mmu.c
···
 	gfn_t gfn = gpa >> PAGE_SHIFT;
 	int srcu_idx, err;
 	kvm_pfn_t pfn;
-	pte_t *ptep, entry, old_pte;
+	pte_t *ptep, entry;
 	bool writeable;
 	unsigned long prot_bits;
 	unsigned long mmu_seq;
···
 	entry = pfn_pte(pfn, __pgprot(prot_bits));
 
 	/* Write the PTE */
-	old_pte = *ptep;
 	set_pte(ptep, entry);
 
 	err = 0;
+7
arch/powerpc/include/asm/nohash/32/pte-8xx.h
···
 
 #define pte_wrprotect pte_wrprotect
 
+static inline int pte_read(pte_t pte)
+{
+	return (pte_val(pte) & _PAGE_RO) != _PAGE_NA;
+}
+
+#define pte_read pte_read
+
 static inline int pte_write(pte_t pte)
 {
 	return !(pte_val(pte) & _PAGE_RO);
+1 -1
arch/powerpc/include/asm/nohash/64/pgtable.h
···
 {
 	unsigned long old;
 
-	if (pte_young(*ptep))
+	if (!pte_young(*ptep))
 		return 0;
 	old = pte_update(mm, addr, ptep, _PAGE_ACCESSED, 0, 0);
 	return (old & _PAGE_ACCESSED) != 0;
+2
arch/powerpc/include/asm/nohash/pgtable.h
···
 	return pte_val(pte) & _PAGE_RW;
 }
 #endif
+#ifndef pte_read
 static inline int pte_read(pte_t pte)		{ return 1; }
+#endif
 static inline int pte_dirty(pte_t pte)		{ return pte_val(pte) & _PAGE_DIRTY; }
 static inline int pte_special(pte_t pte)	{ return pte_val(pte) & _PAGE_SPECIAL; }
 static inline int pte_none(pte_t pte)		{ return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
+5 -3
arch/powerpc/kernel/entry_32.S
···
 	lis	r4,icache_44x_need_flush@ha
 	lwz	r5,icache_44x_need_flush@l(r4)
 	cmplwi	cr0,r5,0
-	bne-	2f
+	bne-	.L44x_icache_flush
 #endif /* CONFIG_PPC_47x */
+.L44x_icache_flush_return:
 	kuep_unlock
 	lwz	r4,_LINK(r1)
 	lwz	r5,_CCR(r1)
···
 	b	1b
 
 #ifdef CONFIG_44x
-2:	li	r7,0
+.L44x_icache_flush:
+	li	r7,0
 	iccci	r0,r0
 	stw	r7,icache_44x_need_flush@l(r4)
-	b	1b
+	b	.L44x_icache_flush_return
 #endif /* CONFIG_44x */
 
 	.globl	ret_from_fork
+1 -1
arch/powerpc/kernel/head_85xx.S
···
 #ifdef CONFIG_PPC_FPU
 	FP_UNAVAILABLE_EXCEPTION
 #else
-	EXCEPTION(0x0800, FP_UNAVAIL, FloatingPointUnavailable, unknown_exception)
+	EXCEPTION(0x0800, FP_UNAVAIL, FloatingPointUnavailable, emulation_assist_interrupt)
 #endif
 
 	/* System Call Interrupt */
+1 -7
arch/powerpc/platforms/pseries/hvCall.S
···
 plpar_hcall_trace:
 	HCALL_INST_PRECALL(R5)
 
-	std	r4,STK_PARAM(R4)(r1)
-	mr	r0,r4
-
 	mr	r4,r5
 	mr	r5,r6
 	mr	r6,r7
···
 
 	HVSC
 
-	ld	r12,STK_PARAM(R4)(r1)
+	ld	r12,STACK_FRAME_MIN_SIZE+STK_PARAM(R4)(r1)
 	std	r4,0(r12)
 	std	r5,8(r12)
 	std	r6,16(r12)
···
 #ifdef CONFIG_TRACEPOINTS
 plpar_hcall9_trace:
 	HCALL_INST_PRECALL(R5)
-
-	std	r4,STK_PARAM(R4)(r1)
-	mr	r0,r4
 
 	mr	r4,r5
 	mr	r5,r6
-1
arch/riscv/Makefile
···
 # for more details.
 #
 
-OBJCOPYFLAGS	:= -O binary
 LDFLAGS_vmlinux := -z norelro
 ifeq ($(CONFIG_RELOCATABLE),y)
 LDFLAGS_vmlinux += -shared -Bsymbolic -z notext --emit-relocs
+4
arch/riscv/errata/andes/Makefile
···
+ifdef CONFIG_RISCV_ALTERNATIVE_EARLY
+CFLAGS_errata.o := -mcmodel=medany
+endif
+
 obj-y += errata.o
+21
arch/riscv/include/asm/ftrace.h
···
 	return addr;
 }
 
+/*
+ * Let's do like x86/arm64 and ignore the compat syscalls.
+ */
+#define ARCH_TRACE_IGNORE_COMPAT_SYSCALLS
+static inline bool arch_trace_is_compat_syscall(struct pt_regs *regs)
+{
+	return is_compat_task();
+}
+
+#define ARCH_HAS_SYSCALL_MATCH_SYM_NAME
+static inline bool arch_syscall_match_sym_name(const char *sym,
+					       const char *name)
+{
+	/*
+	 * Since all syscall functions have __riscv_ prefix, we must skip it.
+	 * However, as we described above, we decided to ignore compat
+	 * syscalls, so we don't care about __riscv_compat_ prefix here.
+	 */
+	return !strcmp(sym + 8, name);
+}
+
 struct dyn_arch_ftrace {
 };
 #endif
+9
arch/riscv/include/asm/kprobes.h
···
 int kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr);
 bool kprobe_breakpoint_handler(struct pt_regs *regs);
 bool kprobe_single_step_handler(struct pt_regs *regs);
+#else
+static inline bool kprobe_breakpoint_handler(struct pt_regs *regs)
+{
+	return false;
+}
+
+static inline bool kprobe_single_step_handler(struct pt_regs *regs)
+{
+	return false;
+}
 #endif /* CONFIG_KPROBES */
 #endif /* _ASM_RISCV_KPROBES_H */
+11
arch/riscv/include/asm/uprobes.h
··· 34 34 bool simulate; 35 35 }; 36 36 37 + #ifdef CONFIG_UPROBES 37 38 bool uprobe_breakpoint_handler(struct pt_regs *regs); 38 39 bool uprobe_single_step_handler(struct pt_regs *regs); 40 + #else 41 + static inline bool uprobe_breakpoint_handler(struct pt_regs *regs) 42 + { 43 + return false; 44 + } 39 45 46 + static inline bool uprobe_single_step_handler(struct pt_regs *regs) 47 + { 48 + return false; 49 + } 50 + #endif /* CONFIG_UPROBES */ 40 51 #endif /* _ASM_RISCV_UPROBES_H */
+2 -2
arch/riscv/kernel/irq.c
··· 60 60 } 61 61 #endif /* CONFIG_VMAP_STACK */ 62 62 63 - #ifdef CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK 63 + #ifdef CONFIG_SOFTIRQ_ON_OWN_STACK 64 64 void do_softirq_own_stack(void) 65 65 { 66 66 #ifdef CONFIG_IRQ_STACKS ··· 92 92 #endif 93 93 __do_softirq(); 94 94 } 95 - #endif /* CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK */ 95 + #endif /* CONFIG_SOFTIRQ_ON_OWN_STACK */ 96 96 97 97 #else 98 98 static void init_irq_stacks(void) {}
-13
arch/riscv/kernel/setup.c
··· 173 173 if (ret < 0) 174 174 goto error; 175 175 176 - #ifdef CONFIG_KEXEC_CORE 177 - if (crashk_res.start != crashk_res.end) { 178 - ret = add_resource(&iomem_resource, &crashk_res); 179 - if (ret < 0) 180 - goto error; 181 - } 182 - if (crashk_low_res.start != crashk_low_res.end) { 183 - ret = add_resource(&iomem_resource, &crashk_low_res); 184 - if (ret < 0) 185 - goto error; 186 - } 187 - #endif 188 - 189 176 #ifdef CONFIG_CRASH_DUMP 190 177 if (elfcorehdr_size > 0) { 191 178 elfcorehdr_res.start = elfcorehdr_addr;
-7
arch/riscv/kernel/signal.c
··· 311 311 /* Align the stack frame. */ 312 312 sp &= ~0xfUL; 313 313 314 - /* 315 - * Fail if the size of the altstack is not large enough for the 316 - * sigframe construction. 317 - */ 318 - if (current->sas_ss_size && sp < current->sas_ss_sp) 319 - return (void __user __force *)-1UL; 320 - 321 314 return (void __user *)sp; 322 315 } 323 316
+18 -10
arch/riscv/kernel/traps.c
··· 13 13 #include <linux/kdebug.h> 14 14 #include <linux/uaccess.h> 15 15 #include <linux/kprobes.h> 16 + #include <linux/uprobes.h> 17 + #include <asm/uprobes.h> 16 18 #include <linux/mm.h> 17 19 #include <linux/module.h> 18 20 #include <linux/irq.h> ··· 249 247 return GET_INSN_LENGTH(insn); 250 248 } 251 249 250 + static bool probe_single_step_handler(struct pt_regs *regs) 251 + { 252 + bool user = user_mode(regs); 253 + 254 + return user ? uprobe_single_step_handler(regs) : kprobe_single_step_handler(regs); 255 + } 256 + 257 + static bool probe_breakpoint_handler(struct pt_regs *regs) 258 + { 259 + bool user = user_mode(regs); 260 + 261 + return user ? uprobe_breakpoint_handler(regs) : kprobe_breakpoint_handler(regs); 262 + } 263 + 252 264 void handle_break(struct pt_regs *regs) 253 265 { 254 - #ifdef CONFIG_KPROBES 255 - if (kprobe_single_step_handler(regs)) 266 + if (probe_single_step_handler(regs)) 256 267 return; 257 268 258 - if (kprobe_breakpoint_handler(regs)) 259 - return; 260 - #endif 261 - #ifdef CONFIG_UPROBES 262 - if (uprobe_single_step_handler(regs)) 269 + if (probe_breakpoint_handler(regs)) 263 270 return; 264 271 265 - if (uprobe_breakpoint_handler(regs)) 266 - return; 267 - #endif 268 272 current->thread.bad_cause = regs->cause; 269 273 270 274 if (user_mode(regs))
+6 -10
arch/s390/kvm/interrupt.c
··· 303 303 return 0; 304 304 } 305 305 306 - static inline int gisa_in_alert_list(struct kvm_s390_gisa *gisa) 307 - { 308 - return READ_ONCE(gisa->next_alert) != (u32)virt_to_phys(gisa); 309 - } 310 - 311 306 static inline void gisa_set_ipm_gisc(struct kvm_s390_gisa *gisa, u32 gisc) 312 307 { 313 308 set_bit_inv(IPM_BIT_OFFSET + gisc, (unsigned long *) gisa); ··· 3211 3216 3212 3217 if (!gi->origin) 3213 3218 return; 3214 - if (gi->alert.mask) 3215 - KVM_EVENT(3, "vm 0x%pK has unexpected iam 0x%02x", 3216 - kvm, gi->alert.mask); 3217 - while (gisa_in_alert_list(gi->origin)) 3218 - cpu_relax(); 3219 + WARN(gi->alert.mask != 0x00, 3220 + "unexpected non zero alert.mask 0x%02x", 3221 + gi->alert.mask); 3222 + gi->alert.mask = 0x00; 3223 + if (gisa_set_iam(gi->origin, gi->alert.mask)) 3224 + process_gib_alert_list(); 3219 3225 hrtimer_cancel(&gi->timer); 3220 3226 gi->origin = NULL; 3221 3227 VM_EVENT(kvm, 3, "gisa 0x%pK destroyed", gisa);
+3 -2
arch/x86/events/utils.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include <asm/insn.h> 3 + #include <linux/mm.h> 3 4 4 5 #include "perf_event.h" 5 6 ··· 133 132 * The LBR logs any address in the IP, even if the IP just 134 133 * faulted. This means userspace can control the from address. 135 134 * Ensure we don't blindly read any address by validating it is 136 - * a known text address. 135 + * a known text address and not a vsyscall address. 137 136 */ 138 - if (kernel_text_address(from)) { 137 + if (kernel_text_address(from) && !in_gate_area_no_mm(from)) { 139 138 addr = (void *)from; 140 139 /* 141 140 * Assume we can get the maximum possible size
-2
arch/x86/include/asm/cpu.h
··· 28 28 }; 29 29 30 30 #ifdef CONFIG_HOTPLUG_CPU 31 - extern int arch_register_cpu(int num); 32 - extern void arch_unregister_cpu(int); 33 31 extern void soft_restart_cpu(void); 34 32 #endif 35 33
+2 -1
arch/x86/include/asm/fpu/api.h
··· 157 157 static inline void fpu_sync_guest_vmexit_xfd_state(void) { } 158 158 #endif 159 159 160 - extern void fpu_copy_guest_fpstate_to_uabi(struct fpu_guest *gfpu, void *buf, unsigned int size, u32 pkru); 160 + extern void fpu_copy_guest_fpstate_to_uabi(struct fpu_guest *gfpu, void *buf, 161 + unsigned int size, u64 xfeatures, u32 pkru); 161 162 extern int fpu_copy_uabi_to_guest_fpstate(struct fpu_guest *gfpu, const void *buf, u64 xcr0, u32 *vpkru); 162 163 163 164 static inline void fpstate_set_confidential(struct fpu_guest *gfpu)
-1
arch/x86/include/asm/kvm_host.h
··· 528 528 u64 raw_event_mask; 529 529 struct kvm_pmc gp_counters[KVM_INTEL_PMC_MAX_GENERIC]; 530 530 struct kvm_pmc fixed_counters[KVM_PMC_MAX_FIXED]; 531 - struct irq_work irq_work; 532 531 533 532 /* 534 533 * Overlay the bitmap with a 64-bit atomic so that all bits can be
+7 -2
arch/x86/include/asm/msr-index.h
··· 637 637 /* AMD Last Branch Record MSRs */ 638 638 #define MSR_AMD64_LBR_SELECT 0xc000010e 639 639 640 - /* Fam 17h MSRs */ 641 - #define MSR_F17H_IRPERF 0xc00000e9 640 + /* Zen4 */ 641 + #define MSR_ZEN4_BP_CFG 0xc001102e 642 + #define MSR_ZEN4_BP_CFG_SHARED_BTB_FIX_BIT 5 642 643 644 + /* Zen 2 */ 643 645 #define MSR_ZEN2_SPECTRAL_CHICKEN 0xc00110e3 644 646 #define MSR_ZEN2_SPECTRAL_CHICKEN_BIT BIT_ULL(1) 647 + 648 + /* Fam 17h MSRs */ 649 + #define MSR_F17H_IRPERF 0xc00000e9 645 650 646 651 /* Fam 16h MSRs */ 647 652 #define MSR_F16H_L2I_PERF_CTL 0xc0010230
-1
arch/x86/include/asm/smp.h
··· 129 129 void native_send_call_func_ipi(const struct cpumask *mask); 130 130 void native_send_call_func_single_ipi(int cpu); 131 131 132 - bool smp_park_other_cpus_in_init(void); 133 132 void smp_store_cpu_info(int id); 134 133 135 134 asmlinkage __visible void smp_reboot_interrupt(void);
+1
arch/x86/include/asm/svm.h
··· 268 268 AVIC_IPI_FAILURE_TARGET_NOT_RUNNING, 269 269 AVIC_IPI_FAILURE_INVALID_TARGET, 270 270 AVIC_IPI_FAILURE_INVALID_BACKING_PAGE, 271 + AVIC_IPI_FAILURE_INVALID_IPI_VECTOR, 271 272 }; 272 273 273 274 #define AVIC_PHYSICAL_MAX_INDEX_MASK GENMASK_ULL(8, 0)
+13
arch/x86/kernel/alternative.c
··· 403 403 u8 insn_buff[MAX_PATCH_LEN]; 404 404 405 405 DPRINTK(ALT, "alt table %px, -> %px", start, end); 406 + 407 + /* 408 + * In the case CONFIG_X86_5LEVEL=y, KASAN_SHADOW_START is defined using 409 + * cpu_feature_enabled(X86_FEATURE_LA57) and is therefore patched here. 410 + * During the process, KASAN becomes confused seeing partial LA57 411 + * conversion and triggers a false-positive out-of-bound report. 412 + * 413 + * Disable KASAN until the patching is complete. 414 + */ 415 + kasan_disable_current(); 416 + 406 417 /* 407 418 * The scan order should be from start to end. A later scanned 408 419 * alternative code can overwrite previously scanned alternative code. ··· 463 452 464 453 text_poke_early(instr, insn_buff, insn_buff_sz); 465 454 } 455 + 456 + kasan_enable_current(); 466 457 } 467 458 468 459 static inline bool is_jcc32(struct insn *insn)
+8
arch/x86/kernel/cpu/amd.c
··· 80 80 AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x17, 0x00, 0x0, 0x2f, 0xf), 81 81 AMD_MODEL_RANGE(0x17, 0x50, 0x0, 0x5f, 0xf)); 82 82 83 + static const int amd_erratum_1485[] = 84 + AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x19, 0x10, 0x0, 0x1f, 0xf), 85 + AMD_MODEL_RANGE(0x19, 0x60, 0x0, 0xaf, 0xf)); 86 + 83 87 static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum) 84 88 { 85 89 int osvw_id = *erratum++; ··· 1153 1149 pr_notice_once("AMD Zen1 DIV0 bug detected. Disable SMT for full protection.\n"); 1154 1150 setup_force_cpu_bug(X86_BUG_DIV0); 1155 1151 } 1152 + 1153 + if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && 1154 + cpu_has_amd_erratum(c, amd_erratum_1485)) 1155 + msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_SHARED_BTB_FIX_BIT); 1156 1156 } 1157 1157 1158 1158 #ifdef CONFIG_X86_32
+5 -5
arch/x86/kernel/cpu/resctrl/monitor.c
··· 30 30 struct list_head list; 31 31 }; 32 32 33 - /** 34 - * @rmid_free_lru A least recently used list of free RMIDs 33 + /* 34 + * @rmid_free_lru - A least recently used list of free RMIDs 35 35 * These RMIDs are guaranteed to have an occupancy less than the 36 36 * threshold occupancy 37 37 */ 38 38 static LIST_HEAD(rmid_free_lru); 39 39 40 - /** 41 - * @rmid_limbo_count count of currently unused but (potentially) 40 + /* 41 + * @rmid_limbo_count - count of currently unused but (potentially) 42 42 * dirty RMIDs. 43 43 * This counts RMIDs that no one is currently using but that 44 44 * may have a occupancy value > resctrl_rmid_realloc_threshold. User can ··· 46 46 */ 47 47 static unsigned int rmid_limbo_count; 48 48 49 - /** 49 + /* 50 50 * @rmid_entry - The entry in the limbo and free lists. 51 51 */ 52 52 static struct rmid_entry *rmid_ptrs;
+3 -2
arch/x86/kernel/fpu/core.c
··· 369 369 EXPORT_SYMBOL_GPL(fpu_swap_kvm_fpstate); 370 370 371 371 void fpu_copy_guest_fpstate_to_uabi(struct fpu_guest *gfpu, void *buf, 372 - unsigned int size, u32 pkru) 372 + unsigned int size, u64 xfeatures, u32 pkru) 373 373 { 374 374 struct fpstate *kstate = gfpu->fpstate; 375 375 union fpregs_state *ustate = buf; 376 376 struct membuf mb = { .p = buf, .left = size }; 377 377 378 378 if (cpu_feature_enabled(X86_FEATURE_XSAVE)) { 379 - __copy_xstate_to_uabi_buf(mb, kstate, pkru, XSTATE_COPY_XSAVE); 379 + __copy_xstate_to_uabi_buf(mb, kstate, xfeatures, pkru, 380 + XSTATE_COPY_XSAVE); 380 381 } else { 381 382 memcpy(&ustate->fxsave, &kstate->regs.fxsave, 382 383 sizeof(ustate->fxsave));
+6 -6
arch/x86/kernel/fpu/xstate.c
··· 1049 1049 * __copy_xstate_to_uabi_buf - Copy kernel saved xstate to a UABI buffer 1050 1050 * @to: membuf descriptor 1051 1051 * @fpstate: The fpstate buffer from which to copy 1052 + * @xfeatures: The mask of xfeatures to save (XSAVE mode only) 1052 1053 * @pkru_val: The PKRU value to store in the PKRU component 1053 1054 * @copy_mode: The requested copy mode 1054 1055 * ··· 1060 1059 * It supports partial copy but @to.pos always starts from zero. 1061 1060 */ 1062 1061 void __copy_xstate_to_uabi_buf(struct membuf to, struct fpstate *fpstate, 1063 - u32 pkru_val, enum xstate_copy_mode copy_mode) 1062 + u64 xfeatures, u32 pkru_val, 1063 + enum xstate_copy_mode copy_mode) 1064 1064 { 1065 1065 const unsigned int off_mxcsr = offsetof(struct fxregs_state, mxcsr); 1066 1066 struct xregs_state *xinit = &init_fpstate.regs.xsave; ··· 1085 1083 break; 1086 1084 1087 1085 case XSTATE_COPY_XSAVE: 1088 - header.xfeatures &= fpstate->user_xfeatures; 1086 + header.xfeatures &= fpstate->user_xfeatures & xfeatures; 1089 1087 break; 1090 1088 } 1091 1089 ··· 1187 1185 enum xstate_copy_mode copy_mode) 1188 1186 { 1189 1187 __copy_xstate_to_uabi_buf(to, tsk->thread.fpu.fpstate, 1188 + tsk->thread.fpu.fpstate->user_xfeatures, 1190 1189 tsk->thread.pkru, copy_mode); 1191 1190 } 1192 1191 ··· 1539 1536 fpregs_restore_userregs(); 1540 1537 1541 1538 newfps->xfeatures = curfps->xfeatures | xfeatures; 1542 - 1543 - if (!guest_fpu) 1544 - newfps->user_xfeatures = curfps->user_xfeatures | xfeatures; 1545 - 1539 + newfps->user_xfeatures = curfps->user_xfeatures | xfeatures; 1546 1540 newfps->xfd = curfps->xfd & ~xfeatures; 1547 1541 1548 1542 /* Do the final updates within the locked region */
+2 -1
arch/x86/kernel/fpu/xstate.h
··· 43 43 44 44 struct membuf; 45 45 extern void __copy_xstate_to_uabi_buf(struct membuf to, struct fpstate *fpstate, 46 - u32 pkru_val, enum xstate_copy_mode copy_mode); 46 + u64 xfeatures, u32 pkru_val, 47 + enum xstate_copy_mode copy_mode); 47 48 extern void copy_xstate_to_uabi_buf(struct membuf to, struct task_struct *tsk, 48 49 enum xstate_copy_mode mode); 49 50 extern int copy_uabi_from_kernel_to_xstate(struct fpstate *fpstate, const void *kbuf, u32 *pkru);
+7 -32
arch/x86/kernel/smp.c
··· 131 131 } 132 132 133 133 /* 134 - * Disable virtualization, APIC etc. and park the CPU in a HLT loop 134 + * this function calls the 'stop' function on all other CPUs in the system. 135 135 */ 136 136 DEFINE_IDTENTRY_SYSVEC(sysvec_reboot) 137 137 { ··· 172 172 * 2) Wait for all other CPUs to report that they reached the 173 173 * HLT loop in stop_this_cpu() 174 174 * 175 - * 3) If the system uses INIT/STARTUP for CPU bringup, then 176 - * send all present CPUs an INIT vector, which brings them 177 - * completely out of the way. 175 + * 3) If #2 timed out send an NMI to the CPUs which did not 176 + * yet report 178 177 * 179 - * 4) If #3 is not possible and #2 timed out send an NMI to the 180 - * CPUs which did not yet report 181 - * 182 - * 5) Wait for all other CPUs to report that they reached the 178 + * 4) Wait for all other CPUs to report that they reached the 183 179 * HLT loop in stop_this_cpu() 184 180 * 185 - * #4 can obviously race against a CPU reaching the HLT loop late. 181 + * #3 can obviously race against a CPU reaching the HLT loop late. 186 182 * That CPU will have reported already and the "have all CPUs 187 183 * reached HLT" condition will be true despite the fact that the 188 184 * other CPU is still handling the NMI. Again, there is no ··· 194 198 /* 195 199 * Don't wait longer than a second for IPI completion. The 196 200 * wait request is not checked here because that would 197 - * prevent an NMI/INIT shutdown in case that not all 201 + * prevent an NMI shutdown attempt in case that not all 198 202 * CPUs reach shutdown state. 199 203 */ 200 204 timeout = USEC_PER_SEC; ··· 202 206 udelay(1); 203 207 } 204 208 205 - /* 206 - * Park all other CPUs in INIT including "offline" CPUs, if 207 - * possible. That's a safe place where they can't resume execution 208 - * of HLT and then execute the HLT loop from overwritten text or 209 - * page tables. 
210 - * 211 - * The only downside is a broadcast MCE, but up to the point where 212 - * the kexec() kernel brought all APs online again an MCE will just 213 - * make HLT resume and handle the MCE. The machine crashes and burns 214 - * due to overwritten text, page tables and data. So there is a 215 - * choice between fire and frying pan. The result is pretty much 216 - * the same. Chose frying pan until x86 provides a sane mechanism 217 - * to park a CPU. 218 - */ 219 - if (smp_park_other_cpus_in_init()) 220 - goto done; 221 - 222 - /* 223 - * If park with INIT was not possible and the REBOOT_VECTOR didn't 224 - * take all secondary CPUs offline, try with the NMI. 225 - */ 209 + /* if the REBOOT_VECTOR didn't work, try with the NMI */ 226 210 if (!cpumask_empty(&cpus_stop_mask)) { 227 211 /* 228 212 * If NMI IPI is enabled, try to register the stop handler ··· 225 249 udelay(1); 226 250 } 227 251 228 - done: 229 252 local_irq_save(flags); 230 253 disable_local_APIC(); 231 254 mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
-27
arch/x86/kernel/smpboot.c
··· 1240 1240 cache_aps_init(); 1241 1241 } 1242 1242 1243 - bool smp_park_other_cpus_in_init(void) 1244 - { 1245 - unsigned int cpu, this_cpu = smp_processor_id(); 1246 - unsigned int apicid; 1247 - 1248 - if (apic->wakeup_secondary_cpu_64 || apic->wakeup_secondary_cpu) 1249 - return false; 1250 - 1251 - /* 1252 - * If this is a crash stop which does not execute on the boot CPU, 1253 - * then this cannot use the INIT mechanism because INIT to the boot 1254 - * CPU will reset the machine. 1255 - */ 1256 - if (this_cpu) 1257 - return false; 1258 - 1259 - for_each_cpu_and(cpu, &cpus_booted_once_mask, cpu_present_mask) { 1260 - if (cpu == this_cpu) 1261 - continue; 1262 - apicid = apic->cpu_present_to_apicid(cpu); 1263 - if (apicid == BAD_APICID) 1264 - continue; 1265 - send_init_sequence(apicid); 1266 - } 1267 - return true; 1268 - } 1269 - 1270 1243 /* 1271 1244 * Early setup to make printk work. 1272 1245 */
+1 -1
arch/x86/kernel/topology.c
··· 54 54 EXPORT_SYMBOL(arch_unregister_cpu); 55 55 #else /* CONFIG_HOTPLUG_CPU */ 56 56 57 - static int __init arch_register_cpu(int num) 57 + int __init arch_register_cpu(int num) 58 58 { 59 59 return register_cpu(&per_cpu(cpu_devices, num).cpu, num); 60 60 }
-8
arch/x86/kvm/cpuid.c
··· 360 360 vcpu->arch.guest_supported_xcr0 = 361 361 cpuid_get_supported_xcr0(vcpu->arch.cpuid_entries, vcpu->arch.cpuid_nent); 362 362 363 - /* 364 - * FP+SSE can always be saved/restored via KVM_{G,S}ET_XSAVE, even if 365 - * XSAVE/XCRO are not exposed to the guest, and even if XSAVE isn't 366 - * supported by the host. 367 - */ 368 - vcpu->arch.guest_fpu.fpstate->user_xfeatures = vcpu->arch.guest_supported_xcr0 | 369 - XFEATURE_MASK_FPSSE; 370 - 371 363 kvm_update_pv_runtime(vcpu); 372 364 373 365 vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu);
+6 -2
arch/x86/kvm/lapic.c
··· 2759 2759 { 2760 2760 u32 reg = kvm_lapic_get_reg(apic, lvt_type); 2761 2761 int vector, mode, trig_mode; 2762 + int r; 2762 2763 2763 2764 if (kvm_apic_hw_enabled(apic) && !(reg & APIC_LVT_MASKED)) { 2764 2765 vector = reg & APIC_VECTOR_MASK; 2765 2766 mode = reg & APIC_MODE_MASK; 2766 2767 trig_mode = reg & APIC_LVT_LEVEL_TRIGGER; 2767 - return __apic_accept_irq(apic, mode, vector, 1, trig_mode, 2768 - NULL); 2768 + 2769 + r = __apic_accept_irq(apic, mode, vector, 1, trig_mode, NULL); 2770 + if (r && lvt_type == APIC_LVTPC) 2771 + kvm_lapic_set_reg(apic, APIC_LVTPC, reg | APIC_LVT_MASKED); 2772 + return r; 2769 2773 } 2770 2774 return 0; 2771 2775 }
+1 -26
arch/x86/kvm/pmu.c
··· 93 93 #undef __KVM_X86_PMU_OP 94 94 } 95 95 96 - static void kvm_pmi_trigger_fn(struct irq_work *irq_work) 97 - { 98 - struct kvm_pmu *pmu = container_of(irq_work, struct kvm_pmu, irq_work); 99 - struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu); 100 - 101 - kvm_pmu_deliver_pmi(vcpu); 102 - } 103 - 104 96 static inline void __kvm_perf_overflow(struct kvm_pmc *pmc, bool in_pmi) 105 97 { 106 98 struct kvm_pmu *pmu = pmc_to_pmu(pmc); ··· 116 124 __set_bit(pmc->idx, (unsigned long *)&pmu->global_status); 117 125 } 118 126 119 - if (!pmc->intr || skip_pmi) 120 - return; 121 - 122 - /* 123 - * Inject PMI. If vcpu was in a guest mode during NMI PMI 124 - * can be ejected on a guest mode re-entry. Otherwise we can't 125 - * be sure that vcpu wasn't executing hlt instruction at the 126 - * time of vmexit and is not going to re-enter guest mode until 127 - * woken up. So we should wake it, but this is impossible from 128 - * NMI context. Do it from irq work instead. 129 - */ 130 - if (in_pmi && !kvm_handling_nmi_from_guest(pmc->vcpu)) 131 - irq_work_queue(&pmc_to_pmu(pmc)->irq_work); 132 - else 127 + if (pmc->intr && !skip_pmi) 133 128 kvm_make_request(KVM_REQ_PMI, pmc->vcpu); 134 129 } 135 130 ··· 654 675 655 676 void kvm_pmu_reset(struct kvm_vcpu *vcpu) 656 677 { 657 - struct kvm_pmu *pmu = vcpu_to_pmu(vcpu); 658 - 659 - irq_work_sync(&pmu->irq_work); 660 678 static_call(kvm_x86_pmu_reset)(vcpu); 661 679 } 662 680 ··· 663 687 664 688 memset(pmu, 0, sizeof(*pmu)); 665 689 static_call(kvm_x86_pmu_init)(vcpu); 666 - init_irq_work(&pmu->irq_work, kvm_pmi_trigger_fn); 667 690 pmu->event_count = 0; 668 691 pmu->need_cleanup = false; 669 692 kvm_pmu_refresh(vcpu);
+6
arch/x86/kvm/pmu.h
··· 74 74 return counter & pmc_bitmask(pmc); 75 75 } 76 76 77 + static inline void pmc_write_counter(struct kvm_pmc *pmc, u64 val) 78 + { 79 + pmc->counter += val - pmc_read_counter(pmc); 80 + pmc->counter &= pmc_bitmask(pmc); 81 + } 82 + 77 83 static inline void pmc_release_perf_event(struct kvm_pmc *pmc) 78 84 { 79 85 if (pmc->perf_event) {
+4 -1
arch/x86/kvm/svm/avic.c
··· 529 529 case AVIC_IPI_FAILURE_INVALID_BACKING_PAGE: 530 530 WARN_ONCE(1, "Invalid backing page\n"); 531 531 break; 532 + case AVIC_IPI_FAILURE_INVALID_IPI_VECTOR: 533 + /* Invalid IPI with vector < 16 */ 534 + break; 532 535 default: 533 - pr_err("Unknown IPI interception\n"); 536 + vcpu_unimpl(vcpu, "Unknown avic incomplete IPI interception\n"); 534 537 } 535 538 536 539 return 1;
+3
arch/x86/kvm/svm/nested.c
··· 1253 1253 1254 1254 nested_svm_uninit_mmu_context(vcpu); 1255 1255 vmcb_mark_all_dirty(svm->vmcb); 1256 + 1257 + if (kvm_apicv_activated(vcpu->kvm)) 1258 + kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu); 1256 1259 } 1257 1260 1258 1261 kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
+1 -1
arch/x86/kvm/svm/pmu.c
··· 160 160 /* MSR_PERFCTRn */ 161 161 pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_COUNTER); 162 162 if (pmc) { 163 - pmc->counter += data - pmc_read_counter(pmc); 163 + pmc_write_counter(pmc, data); 164 164 pmc_update_sample_period(pmc); 165 165 return 0; 166 166 }
+2 -3
arch/x86/kvm/svm/svm.c
··· 691 691 */ 692 692 if (boot_cpu_has(X86_FEATURE_V_TSC_AUX)) { 693 693 struct sev_es_save_area *hostsa; 694 - u32 msr_hi; 694 + u32 __maybe_unused msr_hi; 695 695 696 696 hostsa = (struct sev_es_save_area *)(page_address(sd->save_area) + 0x400); 697 697 ··· 913 913 if (intercept == svm->x2avic_msrs_intercepted) 914 914 return; 915 915 916 - if (!x2avic_enabled || 917 - !apic_x2apic_mode(svm->vcpu.arch.apic)) 916 + if (!x2avic_enabled) 918 917 return; 919 918 920 919 for (i = 0; i < MAX_DIRECT_ACCESS_MSRS; i++) {
+2 -2
arch/x86/kvm/vmx/pmu_intel.c
··· 436 436 if (!msr_info->host_initiated && 437 437 !(msr & MSR_PMC_FULL_WIDTH_BIT)) 438 438 data = (s64)(s32)data; 439 - pmc->counter += data - pmc_read_counter(pmc); 439 + pmc_write_counter(pmc, data); 440 440 pmc_update_sample_period(pmc); 441 441 break; 442 442 } else if ((pmc = get_fixed_pmc(pmu, msr))) { 443 - pmc->counter += data - pmc_read_counter(pmc); 443 + pmc_write_counter(pmc, data); 444 444 pmc_update_sample_period(pmc); 445 445 break; 446 446 } else if ((pmc = get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0))) {
+27 -13
arch/x86/kvm/x86.c
··· 5382 5382 return 0; 5383 5383 } 5384 5384 5385 - static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu, 5386 - struct kvm_xsave *guest_xsave) 5387 - { 5388 - if (fpstate_is_confidential(&vcpu->arch.guest_fpu)) 5389 - return; 5390 - 5391 - fpu_copy_guest_fpstate_to_uabi(&vcpu->arch.guest_fpu, 5392 - guest_xsave->region, 5393 - sizeof(guest_xsave->region), 5394 - vcpu->arch.pkru); 5395 - } 5396 5385 5397 5386 static void kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu, 5398 5387 u8 *state, unsigned int size) 5399 5388 { 5389 + /* 5390 + * Only copy state for features that are enabled for the guest. The 5391 + * state itself isn't problematic, but setting bits in the header for 5392 + * features that are supported in *this* host but not exposed to the 5393 + * guest can result in KVM_SET_XSAVE failing when live migrating to a 5394 + * compatible host without the features that are NOT exposed to the 5395 + * guest. 5396 + * 5397 + * FP+SSE can always be saved/restored via KVM_{G,S}ET_XSAVE, even if 5398 + * XSAVE/XCRO are not exposed to the guest, and even if XSAVE isn't 5399 + * supported by the host. 
5400 + */ 5401 + u64 supported_xcr0 = vcpu->arch.guest_supported_xcr0 | 5402 + XFEATURE_MASK_FPSSE; 5403 + 5400 5404 if (fpstate_is_confidential(&vcpu->arch.guest_fpu)) 5401 5405 return; 5402 5406 5403 - fpu_copy_guest_fpstate_to_uabi(&vcpu->arch.guest_fpu, 5404 - state, size, vcpu->arch.pkru); 5407 + fpu_copy_guest_fpstate_to_uabi(&vcpu->arch.guest_fpu, state, size, 5408 + supported_xcr0, vcpu->arch.pkru); 5409 + } 5410 + 5411 + static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu, 5412 + struct kvm_xsave *guest_xsave) 5413 + { 5414 + return kvm_vcpu_ioctl_x86_get_xsave2(vcpu, (void *)guest_xsave->region, 5415 + sizeof(guest_xsave->region)); 5405 5416 } 5406 5417 5407 5418 static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu, ··· 12853 12842 static_call(kvm_x86_smi_allowed)(vcpu, false))) 12854 12843 return true; 12855 12844 #endif 12845 + 12846 + if (kvm_test_request(KVM_REQ_PMI, vcpu)) 12847 + return true; 12856 12848 12857 12849 if (kvm_arch_interrupt_allowed(vcpu) && 12858 12850 (kvm_cpu_has_interrupt(vcpu) ||
+16 -5
block/fops.c
··· 772 772 773 773 filemap_invalidate_lock(inode->i_mapping); 774 774 775 - /* Invalidate the page cache, including dirty pages. */ 776 - error = truncate_bdev_range(bdev, file_to_blk_mode(file), start, end); 777 - if (error) 778 - goto fail; 779 - 775 + /* 776 + * Invalidate the page cache, including dirty pages, for valid 777 + * de-allocate mode calls to fallocate(). 778 + */ 780 779 switch (mode) { 781 780 case FALLOC_FL_ZERO_RANGE: 782 781 case FALLOC_FL_ZERO_RANGE | FALLOC_FL_KEEP_SIZE: 782 + error = truncate_bdev_range(bdev, file_to_blk_mode(file), start, end); 783 + if (error) 784 + goto fail; 785 + 783 786 error = blkdev_issue_zeroout(bdev, start >> SECTOR_SHIFT, 784 787 len >> SECTOR_SHIFT, GFP_KERNEL, 785 788 BLKDEV_ZERO_NOUNMAP); 786 789 break; 787 790 case FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE: 791 + error = truncate_bdev_range(bdev, file_to_blk_mode(file), start, end); 792 + if (error) 793 + goto fail; 794 + 788 795 error = blkdev_issue_zeroout(bdev, start >> SECTOR_SHIFT, 789 796 len >> SECTOR_SHIFT, GFP_KERNEL, 790 797 BLKDEV_ZERO_NOFALLBACK); 791 798 break; 792 799 case FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE | FALLOC_FL_NO_HIDE_STALE: 800 + error = truncate_bdev_range(bdev, file_to_blk_mode(file), start, end); 801 + if (error) 802 + goto fail; 803 + 793 804 error = blkdev_issue_discard(bdev, start >> SECTOR_SHIFT, 794 805 len >> SECTOR_SHIFT, GFP_KERNEL); 795 806 break;
+1
drivers/acpi/acpi_processor.c
··· 12 12 #define pr_fmt(fmt) "ACPI: " fmt 13 13 14 14 #include <linux/acpi.h> 15 + #include <linux/cpu.h> 15 16 #include <linux/device.h> 16 17 #include <linux/dmi.h> 17 18 #include <linux/kernel.h>
+11
drivers/acpi/ec.c
··· 1915 1915 }, 1916 1916 { 1917 1917 /* 1918 + * HP Pavilion Gaming Laptop 15-dk1xxx 1919 + * https://github.com/systemd/systemd/issues/28942 1920 + */ 1921 + .callback = ec_honor_dsdt_gpe, 1922 + .matches = { 1923 + DMI_MATCH(DMI_SYS_VENDOR, "HP"), 1924 + DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion Gaming Laptop 15-dk1xxx"), 1925 + }, 1926 + }, 1927 + { 1928 + /* 1918 1929 * Samsung hardware 1919 1930 * https://bugzilla.kernel.org/show_bug.cgi?id=44161 1920 1931 */
+20 -6
drivers/acpi/resource.c
··· 440 440 }, 441 441 }, 442 442 { 443 + .ident = "Asus ExpertBook B1402CBA", 444 + .matches = { 445 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 446 + DMI_MATCH(DMI_BOARD_NAME, "B1402CBA"), 447 + }, 448 + }, 449 + { 443 450 .ident = "Asus ExpertBook B1502CBA", 444 451 .matches = { 445 452 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), ··· 507 500 508 501 static const struct dmi_system_id pcspecialist_laptop[] = { 509 502 { 510 - .ident = "PCSpecialist Elimina Pro 16 M", 511 - /* 512 - * Some models have product-name "Elimina Pro 16 M", 513 - * others "GM6BGEQ". Match on board-name to match both. 514 - */ 503 + /* TongFang GM6BGEQ / PCSpecialist Elimina Pro 16 M, RTX 3050 */ 515 504 .matches = { 516 - DMI_MATCH(DMI_SYS_VENDOR, "PCSpecialist"), 517 505 DMI_MATCH(DMI_BOARD_NAME, "GM6BGEQ"), 506 + }, 507 + }, 508 + { 509 + /* TongFang GM6BG5Q, RTX 4050 */ 510 + .matches = { 511 + DMI_MATCH(DMI_BOARD_NAME, "GM6BG5Q"), 512 + }, 513 + }, 514 + { 515 + /* TongFang GM6BG0Q / PCSpecialist Elimina Pro 16 M, RTX 4060 */ 516 + .matches = { 517 + DMI_MATCH(DMI_BOARD_NAME, "GM6BG0Q"), 518 518 }, 519 519 }, 520 520 { }
+2
drivers/android/binder.c
··· 4812 4812 "undelivered TRANSACTION_ERROR: %u\n", 4813 4813 e->cmd); 4814 4814 } break; 4815 + case BINDER_WORK_TRANSACTION_PENDING: 4816 + case BINDER_WORK_TRANSACTION_ONEWAY_SPAM_SUSPECT: 4815 4817 case BINDER_WORK_TRANSACTION_COMPLETE: { 4816 4818 binder_debug(BINDER_DEBUG_DEAD_TRANSACTION, 4817 4819 "undelivered TRANSACTION_COMPLETE\n");
+1 -1
drivers/base/regmap/regmap.c
··· 1478 1478 1479 1479 /* If the user didn't specify a name match any */ 1480 1480 if (data) 1481 - return !strcmp((*r)->name, data); 1481 + return (*r)->name && !strcmp((*r)->name, data); 1482 1482 else 1483 1483 return 1; 1484 1484 }
+3 -7
drivers/bluetooth/btrtl.c
··· 962 962 skb_put_data(skb, buf, strlen(buf)); 963 963 } 964 964 965 - static int btrtl_register_devcoredump_support(struct hci_dev *hdev) 965 + static void btrtl_register_devcoredump_support(struct hci_dev *hdev) 966 966 { 967 - int err; 967 + hci_devcd_register(hdev, btrtl_coredump, btrtl_dmp_hdr, NULL); 968 968 969 - err = hci_devcd_register(hdev, btrtl_coredump, btrtl_dmp_hdr, NULL); 970 - 971 - return err; 972 969 } 973 970 974 971 void btrtl_set_driver_name(struct hci_dev *hdev, const char *driver_name) ··· 1252 1255 } 1253 1256 1254 1257 done: 1255 - if (!err) 1256 - err = btrtl_register_devcoredump_support(hdev); 1258 + btrtl_register_devcoredump_support(hdev); 1257 1259 1258 1260 return err; 1259 1261 }
+3
drivers/bluetooth/hci_vhci.c
··· 74 74 struct vhci_data *data = hci_get_drvdata(hdev); 75 75 76 76 memcpy(skb_push(skb, 1), &hci_skb_pkt_type(skb), 1); 77 + 78 + mutex_lock(&data->open_mutex); 77 79 skb_queue_tail(&data->readq, skb); 80 + mutex_unlock(&data->open_mutex); 78 81 79 82 wake_up_interruptible(&data->read_wait); 80 83 return 0;
+2 -2
drivers/counter/counter-chrdev.c
··· 247 247 if (*id == component_id) 248 248 return 0; 249 249 250 - if (ext->type == COUNTER_COMP_ARRAY) { 251 - element = ext->priv; 250 + if (ext[*ext_idx].type == COUNTER_COMP_ARRAY) { 251 + element = ext[*ext_idx].priv; 252 252 253 253 if (component_id - *id < element->length) 254 254 return 0;
+1 -1
drivers/counter/microchip-tcb-capture.c
··· 97 97 priv->qdec_mode = 0; 98 98 /* Set highest rate based on whether soc has gclk or not */ 99 99 bmr &= ~(ATMEL_TC_QDEN | ATMEL_TC_POSEN); 100 - if (priv->tc_cfg->has_gclk) 100 + if (!priv->tc_cfg->has_gclk) 101 101 cmr |= ATMEL_TC_TIMER_CLOCK2; 102 102 else 103 103 cmr |= ATMEL_TC_TIMER_CLOCK1;
+4 -9
drivers/dma-buf/dma-fence-unwrap.c
··· 76 76 dma_fence_unwrap_for_each(tmp, &iter[i], fences[i]) { 77 77 if (!dma_fence_is_signaled(tmp)) { 78 78 ++count; 79 - } else if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, 80 - &tmp->flags)) { 81 - if (ktime_after(tmp->timestamp, timestamp)) 82 - timestamp = tmp->timestamp; 83 79 } else { 84 - /* 85 - * Use the current time if the fence is 86 - * currently signaling. 87 - */ 88 - timestamp = ktime_get(); 80 + ktime_t t = dma_fence_timestamp(tmp); 81 + 82 + if (ktime_after(t, timestamp)) 83 + timestamp = t; 89 84 } 90 85 } 91 86 }
+3 -6
drivers/dma-buf/sync_file.c
··· 268 268 sizeof(info->driver_name)); 269 269 270 270 info->status = dma_fence_get_status(fence); 271 - while (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) && 272 - !test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags)) 273 - cpu_relax(); 274 271 info->timestamp_ns = 275 - test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags) ? 276 - ktime_to_ns(fence->timestamp) : 277 - ktime_set(0, 0); 272 + dma_fence_is_signaled(fence) ? 273 + ktime_to_ns(dma_fence_timestamp(fence)) : 274 + ktime_set(0, 0); 278 275 279 276 return info->status; 280 277 }
+22 -3
drivers/dma/fsl-edma-common.c
··· 92 92 93 93 edma_writel_chreg(fsl_chan, val, ch_sbr); 94 94 95 - if (flags & FSL_EDMA_DRV_HAS_CHMUX) 96 - edma_writel_chreg(fsl_chan, fsl_chan->srcid, ch_mux); 95 + if (flags & FSL_EDMA_DRV_HAS_CHMUX) { 96 + /* 97 + * ch_mux: With the exception of 0, attempts to write a value 98 + * already in use will be forced to 0. 99 + */ 100 + if (!edma_readl_chreg(fsl_chan, ch_mux)) 101 + edma_writel_chreg(fsl_chan, fsl_chan->srcid, ch_mux); 102 + } 97 103 98 104 val = edma_readl_chreg(fsl_chan, ch_csr); 99 105 val |= EDMA_V3_CH_CSR_ERQ; ··· 454 448 455 449 edma_write_tcdreg(fsl_chan, tcd->dlast_sga, dlast_sga); 456 450 451 + csr = le16_to_cpu(tcd->csr); 452 + 457 453 if (fsl_chan->is_sw) { 458 - csr = le16_to_cpu(tcd->csr); 459 454 csr |= EDMA_TCD_CSR_START; 460 455 tcd->csr = cpu_to_le16(csr); 461 456 } 457 + 458 + /* 459 + * Must clear CHn_CSR[DONE] bit before enable TCDn_CSR[ESG] at EDMAv3 460 + * eDMAv4 have not such requirement. 461 + * Change MLINK need clear CHn_CSR[DONE] for both eDMAv3 and eDMAv4. 462 + */ 463 + if (((fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_CLEAR_DONE_E_SG) && 464 + (csr & EDMA_TCD_CSR_E_SG)) || 465 + ((fsl_edma_drvflags(fsl_chan) & FSL_EDMA_DRV_CLEAR_DONE_E_LINK) && 466 + (csr & EDMA_TCD_CSR_E_LINK))) 467 + edma_writel_chreg(fsl_chan, edma_readl_chreg(fsl_chan, ch_csr), ch_csr); 468 + 462 469 463 470 edma_write_tcdreg(fsl_chan, tcd->csr, csr); 464 471 }
+13 -1
drivers/dma/fsl-edma-common.h
··· 183 183 #define FSL_EDMA_DRV_BUS_8BYTE BIT(10) 184 184 #define FSL_EDMA_DRV_DEV_TO_DEV BIT(11) 185 185 #define FSL_EDMA_DRV_ALIGN_64BYTE BIT(12) 186 + /* Need clean CHn_CSR DONE before enable TCD's ESG */ 187 + #define FSL_EDMA_DRV_CLEAR_DONE_E_SG BIT(13) 188 + /* Need clean CHn_CSR DONE before enable TCD's MAJORELINK */ 189 + #define FSL_EDMA_DRV_CLEAR_DONE_E_LINK BIT(14) 186 190 187 191 #define FSL_EDMA_DRV_EDMA3 (FSL_EDMA_DRV_SPLIT_REG | \ 188 192 FSL_EDMA_DRV_BUS_8BYTE | \ 189 193 FSL_EDMA_DRV_DEV_TO_DEV | \ 190 - FSL_EDMA_DRV_ALIGN_64BYTE) 194 + FSL_EDMA_DRV_ALIGN_64BYTE | \ 195 + FSL_EDMA_DRV_CLEAR_DONE_E_SG | \ 196 + FSL_EDMA_DRV_CLEAR_DONE_E_LINK) 197 + 198 + #define FSL_EDMA_DRV_EDMA4 (FSL_EDMA_DRV_SPLIT_REG | \ 199 + FSL_EDMA_DRV_BUS_8BYTE | \ 200 + FSL_EDMA_DRV_DEV_TO_DEV | \ 201 + FSL_EDMA_DRV_ALIGN_64BYTE | \ 202 + FSL_EDMA_DRV_CLEAR_DONE_E_LINK) 191 203 192 204 struct fsl_edma_drvdata { 193 205 u32 dmamuxs; /* only used before v3 */
+5 -3
drivers/dma/fsl-edma-main.c
··· 154 154 fsl_chan = to_fsl_edma_chan(chan); 155 155 i = fsl_chan - fsl_edma->chans; 156 156 157 - chan = dma_get_slave_channel(chan); 158 - chan->device->privatecnt++; 159 157 fsl_chan->priority = dma_spec->args[1]; 160 158 fsl_chan->is_rxchan = dma_spec->args[2] & ARGS_RX; 161 159 fsl_chan->is_remote = dma_spec->args[2] & ARGS_REMOTE; 162 160 fsl_chan->is_multi_fifo = dma_spec->args[2] & ARGS_MULTI_FIFO; 163 161 164 162 if (!b_chmux && i == dma_spec->args[0]) { 163 + chan = dma_get_slave_channel(chan); 164 + chan->device->privatecnt++; 165 165 mutex_unlock(&fsl_edma->fsl_edma_mutex); 166 166 return chan; 167 167 } else if (b_chmux && !fsl_chan->srcid) { 168 168 /* if controller support channel mux, choose a free channel */ 169 + chan = dma_get_slave_channel(chan); 170 + chan->device->privatecnt++; 169 171 fsl_chan->srcid = dma_spec->args[0]; 170 172 mutex_unlock(&fsl_edma->fsl_edma_mutex); 171 173 return chan; ··· 357 355 }; 358 356 359 357 static struct fsl_edma_drvdata imx93_data4 = { 360 - .flags = FSL_EDMA_DRV_HAS_CHMUX | FSL_EDMA_DRV_HAS_DMACLK | FSL_EDMA_DRV_EDMA3, 358 + .flags = FSL_EDMA_DRV_HAS_CHMUX | FSL_EDMA_DRV_HAS_DMACLK | FSL_EDMA_DRV_EDMA4, 361 359 .chreg_space_sz = 0x8000, 362 360 .chreg_off = 0x10000, 363 361 .setup_irq = fsl_edma3_irq_init,
+3 -2
drivers/dma/idxd/device.c
··· 477 477 union idxd_command_reg cmd; 478 478 DECLARE_COMPLETION_ONSTACK(done); 479 479 u32 stat; 480 + unsigned long flags; 480 481 481 482 if (idxd_device_is_halted(idxd)) { 482 483 dev_warn(&idxd->pdev->dev, "Device is HALTED!\n"); ··· 491 490 cmd.operand = operand; 492 491 cmd.int_req = 1; 493 492 494 - spin_lock(&idxd->cmd_lock); 493 + spin_lock_irqsave(&idxd->cmd_lock, flags); 495 494 wait_event_lock_irq(idxd->cmd_waitq, 496 495 !test_bit(IDXD_FLAG_CMD_RUNNING, &idxd->flags), 497 496 idxd->cmd_lock); ··· 508 507 * After command submitted, release lock and go to sleep until 509 508 * the command completes via interrupt. 510 509 */ 511 - spin_unlock(&idxd->cmd_lock); 510 + spin_unlock_irqrestore(&idxd->cmd_lock, flags); 512 511 wait_for_completion(&done); 513 512 stat = ioread32(idxd->reg_base + IDXD_CMDSTS_OFFSET); 514 513 spin_lock(&idxd->cmd_lock);
+1 -2
drivers/dma/mediatek/mtk-uart-apdma.c
··· 450 450 mtk_uart_apdma_write(c, VFF_EN, VFF_EN_CLR_B); 451 451 mtk_uart_apdma_write(c, VFF_INT_EN, VFF_INT_EN_CLR_B); 452 452 453 - synchronize_irq(c->irq); 454 - 455 453 spin_unlock_irqrestore(&c->vc.lock, flags); 454 + synchronize_irq(c->irq); 456 455 457 456 return 0; 458 457 }
+1
drivers/dma/ste_dma40.c
··· 3668 3668 regulator_disable(base->lcpa_regulator); 3669 3669 regulator_put(base->lcpa_regulator); 3670 3670 } 3671 + pm_runtime_disable(base->dev); 3671 3672 3672 3673 report_failure: 3673 3674 d40_err(dev, "probe failed\n");
+7 -4
drivers/dma/stm32-dma.c
··· 1113 1113 chan->chan_reg.dma_scr &= ~STM32_DMA_SCR_PFCTRL; 1114 1114 1115 1115 /* Activate Double Buffer Mode if DMA triggers STM32 MDMA and more than 1 sg */ 1116 - if (chan->trig_mdma && sg_len > 1) 1116 + if (chan->trig_mdma && sg_len > 1) { 1117 1117 chan->chan_reg.dma_scr |= STM32_DMA_SCR_DBM; 1118 + chan->chan_reg.dma_scr &= ~STM32_DMA_SCR_CT; 1119 + } 1118 1120 1119 1121 for_each_sg(sgl, sg, sg_len, i) { 1120 1122 ret = stm32_dma_set_xfer_param(chan, direction, &buswidth, ··· 1389 1387 1390 1388 residue = stm32_dma_get_remaining_bytes(chan); 1391 1389 1392 - if (chan->desc->cyclic && !stm32_dma_is_current_sg(chan)) { 1390 + if ((chan->desc->cyclic || chan->trig_mdma) && !stm32_dma_is_current_sg(chan)) { 1393 1391 n_sg++; 1394 1392 if (n_sg == chan->desc->num_sgs) 1395 1393 n_sg = 0; 1396 - residue = sg_req->len; 1394 + if (!chan->trig_mdma) 1395 + residue = sg_req->len; 1397 1396 } 1398 1397 1399 1398 /* ··· 1404 1401 * residue = remaining bytes from NDTR + remaining 1405 1402 * periods/sg to be transferred 1406 1403 */ 1407 - if (!chan->desc->cyclic || n_sg != 0) 1404 + if ((!chan->desc->cyclic && !chan->trig_mdma) || n_sg != 0) 1408 1405 for (i = n_sg; i < desc->num_sgs; i++) 1409 1406 residue += desc->sg_req[i].len; 1410 1407
+24 -9
drivers/dma/stm32-mdma.c
··· 777 777 /* Enable interrupts */ 778 778 ccr &= ~STM32_MDMA_CCR_IRQ_MASK; 779 779 ccr |= STM32_MDMA_CCR_TEIE | STM32_MDMA_CCR_CTCIE; 780 - if (sg_len > 1) 781 - ccr |= STM32_MDMA_CCR_BTIE; 782 780 desc->ccr = ccr; 783 781 784 782 return 0; ··· 1234 1236 unsigned long flags; 1235 1237 u32 status, reg; 1236 1238 1239 + /* Transfer can be terminated */ 1240 + if (!chan->desc || (stm32_mdma_read(dmadev, STM32_MDMA_CCR(chan->id)) & STM32_MDMA_CCR_EN)) 1241 + return -EPERM; 1242 + 1237 1243 hwdesc = chan->desc->node[chan->curr_hwdesc].hwdesc; 1238 1244 1239 1245 spin_lock_irqsave(&chan->vchan.lock, flags); ··· 1318 1316 1319 1317 static size_t stm32_mdma_desc_residue(struct stm32_mdma_chan *chan, 1320 1318 struct stm32_mdma_desc *desc, 1321 - u32 curr_hwdesc) 1319 + u32 curr_hwdesc, 1320 + struct dma_tx_state *state) 1322 1321 { 1323 1322 struct stm32_mdma_device *dmadev = stm32_mdma_get_dev(chan); 1324 1323 struct stm32_mdma_hwdesc *hwdesc; 1325 - u32 cbndtr, residue, modulo, burst_size; 1324 + u32 cisr, clar, cbndtr, residue, modulo, burst_size; 1326 1325 int i; 1327 1326 1327 + cisr = stm32_mdma_read(dmadev, STM32_MDMA_CISR(chan->id)); 1328 + 1328 1329 residue = 0; 1329 - for (i = curr_hwdesc + 1; i < desc->count; i++) { 1330 + /* Get the next hw descriptor to process from current transfer */ 1331 + clar = stm32_mdma_read(dmadev, STM32_MDMA_CLAR(chan->id)); 1332 + for (i = desc->count - 1; i >= 0; i--) { 1330 1333 hwdesc = desc->node[i].hwdesc; 1334 + 1335 + if (hwdesc->clar == clar) 1336 + break;/* Current transfer found, stop cumulating */ 1337 + 1338 + /* Cumulate residue of unprocessed hw descriptors */ 1331 1339 residue += STM32_MDMA_CBNDTR_BNDT(hwdesc->cbndtr); 1332 1340 } 1333 1341 cbndtr = stm32_mdma_read(dmadev, STM32_MDMA_CBNDTR(chan->id)); 1334 1342 residue += cbndtr & STM32_MDMA_CBNDTR_BNDT_MASK; 1343 + 1344 + state->in_flight_bytes = 0; 1345 + if (chan->chan_config.m2m_hw && (cisr & STM32_MDMA_CISR_CRQA)) 1346 + state->in_flight_bytes = cbndtr & STM32_MDMA_CBNDTR_BNDT_MASK; 1335 1347 1336 1348 if (!chan->mem_burst) 1337 1349 return residue; ··· 1376 1360 1377 1361 vdesc = vchan_find_desc(&chan->vchan, cookie); 1378 1362 if (chan->desc && cookie == chan->desc->vdesc.tx.cookie) 1379 - residue = stm32_mdma_desc_residue(chan, chan->desc, 1380 - chan->curr_hwdesc); 1363 + residue = stm32_mdma_desc_residue(chan, chan->desc, chan->curr_hwdesc, state); 1381 1364 else if (vdesc) 1382 - residue = stm32_mdma_desc_residue(chan, 1383 - to_stm32_mdma_desc(vdesc), 0); 1365 + residue = stm32_mdma_desc_residue(chan, to_stm32_mdma_desc(vdesc), 0, state); 1366 + 1384 1367 dma_set_residue(state, residue); 1385 1368 1386 1369 spin_unlock_irqrestore(&chan->vchan.lock, flags);
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_doorbell_mgr.c
··· 142 142 int r; 143 143 int size; 144 144 145 + /* SI HW does not have doorbells, skip allocation */ 146 + if (adev->doorbell.num_kernel_doorbells == 0) 147 + return 0; 148 + 145 149 /* Reserve first num_kernel_doorbells (page-aligned) for kernel ops */ 146 150 size = ALIGN(adev->doorbell.num_kernel_doorbells * sizeof(u32), PAGE_SIZE); 147 151
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
··· 252 252 struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); 253 253 struct amdgpu_res_cursor cursor; 254 254 255 - if (bo->tbo.resource->mem_type != TTM_PL_VRAM) 255 + if (!bo->tbo.resource || bo->tbo.resource->mem_type != TTM_PL_VRAM) 256 256 return false; 257 257 258 258 amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &cursor);
+3
drivers/gpu/drm/amd/display/dc/core/dc.c
··· 1262 1262 if (stream == NULL) 1263 1263 continue; 1264 1264 1265 + if (stream->apply_seamless_boot_optimization) 1266 + continue; 1267 + 1265 1268 // only looking for first odm pipe 1266 1269 if (pipe->prev_odm_pipe) 1267 1270 continue;
+13 -4
drivers/gpu/drm/drm_atomic_helper.c
··· 290 290 update_connector_routing(struct drm_atomic_state *state, 291 291 struct drm_connector *connector, 292 292 struct drm_connector_state *old_connector_state, 293 - struct drm_connector_state *new_connector_state) 293 + struct drm_connector_state *new_connector_state, 294 + bool added_by_user) 294 295 { 295 296 const struct drm_connector_helper_funcs *funcs; 296 297 struct drm_encoder *new_encoder; ··· 340 339 * there's a chance the connector may have been destroyed during the 341 340 * process, but it's better to ignore that then cause 342 341 * drm_atomic_helper_resume() to fail. 342 + * 343 + * Last, we want to ignore connector registration when the connector 344 + * was not pulled in the atomic state by user-space (ie, was pulled 345 + * in by the driver, e.g. when updating a DP-MST stream). 343 346 */ 344 347 if (!state->duplicated && drm_connector_is_unregistered(connector) && 345 - crtc_state->active) { 348 + added_by_user && crtc_state->active) { 346 349 drm_dbg_atomic(connector->dev, 347 350 "[CONNECTOR:%d:%s] is not registered\n", 348 351 connector->base.id, connector->name); ··· 625 620 struct drm_connector *connector; 626 621 struct drm_connector_state *old_connector_state, *new_connector_state; 627 622 int i, ret; 628 - unsigned int connectors_mask = 0; 623 + unsigned int connectors_mask = 0, user_connectors_mask = 0; 624 + 625 + for_each_oldnew_connector_in_state(state, connector, old_connector_state, new_connector_state, i) 626 + user_connectors_mask |= BIT(i); 629 627 630 628 for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { 631 629 bool has_connectors = ··· 693 685 */ 694 686 ret = update_connector_routing(state, connector, 695 687 old_connector_state, 696 - new_connector_state); 688 + new_connector_state, 689 + BIT(i) & user_connectors_mask); 697 690 if (ret) 698 691 return ret; 699 692 if (old_connector_state->crtc) {
+4 -2
drivers/gpu/drm/drm_gem.c
··· 540 540 struct page **pages; 541 541 struct folio *folio; 542 542 struct folio_batch fbatch; 543 - int i, j, npages; 543 + long i, j, npages; 544 544 545 545 if (WARN_ON(!obj->filp)) 546 546 return ERR_PTR(-EINVAL); ··· 564 564 565 565 i = 0; 566 566 while (i < npages) { 567 + long nr; 567 568 folio = shmem_read_folio_gfp(mapping, i, 568 569 mapping_gfp_mask(mapping)); 569 570 if (IS_ERR(folio)) 570 571 goto fail; 571 - for (j = 0; j < folio_nr_pages(folio); j++, i++) 572 + nr = min(npages - i, folio_nr_pages(folio)); 573 + for (j = 0; j < nr; j++, i++) 572 574 pages[i] = folio_file_page(folio, i); 573 575 574 576 /* Make sure shmem keeps __GFP_DMA32 allocated pages in the
+18 -9
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
··· 119 119 struct dpu_sw_pipe_cfg *pipe_cfg) 120 120 { 121 121 int src_width, src_height, dst_height, fps; 122 + u64 plane_pixel_rate, plane_bit_rate; 122 123 u64 plane_prefill_bw; 123 124 u64 plane_bw; 124 125 u32 hw_latency_lines; ··· 137 136 scale_factor = src_height > dst_height ? 138 137 mult_frac(src_height, 1, dst_height) : 1; 139 138 140 - plane_bw = 141 - src_width * mode->vtotal * fps * fmt->bpp * 142 - scale_factor; 139 + plane_pixel_rate = src_width * mode->vtotal * fps; 140 + plane_bit_rate = plane_pixel_rate * fmt->bpp; 143 141 144 - plane_prefill_bw = 145 - src_width * hw_latency_lines * fps * fmt->bpp * 146 - scale_factor * mode->vtotal; 142 + plane_bw = plane_bit_rate * scale_factor; 143 + 144 + plane_prefill_bw = plane_bw * hw_latency_lines; 147 145 148 146 if ((vbp+vpw) > hw_latency_lines) 149 147 do_div(plane_prefill_bw, (vbp+vpw)); ··· 733 733 static int dpu_plane_atomic_check_pipe(struct dpu_plane *pdpu, 734 734 struct dpu_sw_pipe *pipe, 735 735 struct dpu_sw_pipe_cfg *pipe_cfg, 736 - const struct dpu_format *fmt) 736 + const struct dpu_format *fmt, 737 + const struct drm_display_mode *mode) 737 738 { 738 739 uint32_t min_src_size; 740 + struct dpu_kms *kms = _dpu_plane_get_kms(&pdpu->base); 739 741 740 742 min_src_size = DPU_FORMAT_IS_YUV(fmt) ? 2 : 1; 741 743 ··· 774 772 DPU_DEBUG_PLANE(pdpu, "invalid dest rect " DRM_RECT_FMT "\n", 775 773 DRM_RECT_ARG(&pipe_cfg->dst_rect)); 776 774 return -EINVAL; 775 + } 776 + 777 + /* max clk check */ 778 + if (_dpu_plane_calc_clk(mode, pipe_cfg) > kms->perf.max_core_clk_rate) { 779 + DPU_DEBUG_PLANE(pdpu, "plane exceeds max mdp core clk limits\n"); 780 + return -E2BIG; 777 781 } 778 782 779 783 return 0; ··· 907 899 r_pipe_cfg->dst_rect.x1 = pipe_cfg->dst_rect.x2; 908 900 } 909 901 910 - ret = dpu_plane_atomic_check_pipe(pdpu, pipe, pipe_cfg, fmt); 902 + ret = dpu_plane_atomic_check_pipe(pdpu, pipe, pipe_cfg, fmt, &crtc_state->adjusted_mode); 911 903 if (ret) 912 904 return ret; 913 905 914 906 if (r_pipe->sspp) { 915 - ret = dpu_plane_atomic_check_pipe(pdpu, r_pipe, r_pipe_cfg, fmt); 907 + ret = dpu_plane_atomic_check_pipe(pdpu, r_pipe, r_pipe_cfg, fmt, 908 + &crtc_state->adjusted_mode); 916 909 if (ret) 917 910 return ret; 918 911 }
+6 -7
drivers/gpu/drm/msm/dp/dp_ctrl.c
··· 1774 1774 return rc; 1775 1775 1776 1776 while (--link_train_max_retries) { 1777 - rc = dp_ctrl_reinitialize_mainlink(ctrl); 1778 - if (rc) { 1779 - DRM_ERROR("Failed to reinitialize mainlink. rc=%d\n", 1780 - rc); 1781 - break; 1782 - } 1783 - 1784 1777 training_step = DP_TRAINING_NONE; 1785 1778 rc = dp_ctrl_setup_main_link(ctrl, &training_step); 1786 1779 if (rc == 0) { ··· 1824 1831 1825 1832 /* stop link training before start re training */ 1826 1833 dp_ctrl_clear_training_pattern(ctrl); 1834 + } 1835 + 1836 + rc = dp_ctrl_reinitialize_mainlink(ctrl); 1837 + if (rc) { 1838 + DRM_ERROR("Failed to reinitialize mainlink. rc=%d\n", rc); 1839 + break; 1827 1840 } 1828 1841 } 1829 1842
+2 -2
drivers/gpu/drm/msm/dp/dp_link.c
··· 1090 1090 } else if (dp_link_read_psr_error_status(link)) { 1091 1091 DRM_ERROR("PSR IRQ_HPD received\n"); 1092 1092 } else if (dp_link_psr_capability_changed(link)) { 1093 - drm_dbg_dp(link->drm_dev, "PSR Capability changed"); 1093 + drm_dbg_dp(link->drm_dev, "PSR Capability changed\n"); 1094 1094 } else { 1095 1095 ret = dp_link_process_link_status_update(link); 1096 1096 if (!ret) { ··· 1107 1107 } 1108 1108 } 1109 1109 1110 - drm_dbg_dp(link->drm_dev, "sink request=%#x", 1110 + drm_dbg_dp(link->drm_dev, "sink request=%#x\n", 1111 1111 dp_link->sink_request); 1112 1112 return ret; 1113 1113 }
+15 -4
drivers/gpu/drm/msm/dsi/dsi_host.c
··· 1082 1082 1083 1083 static void dsi_wait4video_eng_busy(struct msm_dsi_host *msm_host) 1084 1084 { 1085 + u32 data; 1086 + 1085 1087 if (!(msm_host->mode_flags & MIPI_DSI_MODE_VIDEO)) 1088 + return; 1089 + 1090 + data = dsi_read(msm_host, REG_DSI_STATUS0); 1091 + 1092 + /* if video mode engine is not busy, its because 1093 + * either timing engine was not turned on or the 1094 + * DSI controller has finished transmitting the video 1095 + * data already, so no need to wait in those cases 1096 + */ 1097 + if (!(data & DSI_STATUS0_VIDEO_MODE_ENGINE_BUSY)) 1086 1098 return; 1087 1099 1088 1100 if (msm_host->power_on && msm_host->enabled) { ··· 1906 1894 } 1907 1895 1908 1896 msm_host->irq = irq_of_parse_and_map(pdev->dev.of_node, 0); 1909 - if (msm_host->irq < 0) { 1910 - ret = msm_host->irq; 1911 - dev_err(&pdev->dev, "failed to get irq: %d\n", ret); 1912 - return ret; 1897 + if (!msm_host->irq) { 1898 + dev_err(&pdev->dev, "failed to get irq\n"); 1899 + return -EINVAL; 1913 1900 } 1914 1901 1915 1902 /* do not autoenable, will be enabled later */
+1 -1
drivers/gpu/drm/msm/msm_mdss.c
··· 511 511 static const struct msm_mdss_data msm8998_data = { 512 512 .ubwc_enc_version = UBWC_1_0, 513 513 .ubwc_dec_version = UBWC_1_0, 514 - .highest_bank_bit = 1, 514 + .highest_bank_bit = 2, 515 515 }; 516 516 517 517 static const struct msm_mdss_data qcm2290_data = {
+1 -3
drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
··· 1342 1342 _INIT_DCS_CMD(0xB1, 0x01, 0xBF, 0x11), 1343 1343 _INIT_DCS_CMD(0xCB, 0x86), 1344 1344 _INIT_DCS_CMD(0xD2, 0x3C, 0xFA), 1345 - _INIT_DCS_CMD(0xE9, 0xC5), 1346 - _INIT_DCS_CMD(0xD3, 0x00, 0x00, 0x00, 0x00, 0x80, 0x0C, 0x01), 1347 - _INIT_DCS_CMD(0xE9, 0x3F), 1345 + _INIT_DCS_CMD(0xD3, 0x00, 0x00, 0x44, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x0C, 0x01), 1348 1346 _INIT_DCS_CMD(0xE7, 0x02, 0x00, 0x28, 0x01, 0x7E, 0x0F, 0x7E, 0x10, 0xA0, 0x00, 0x00, 0x20, 0x40, 0x50, 0x40), 1349 1347 _INIT_DCS_CMD(0xBD, 0x02), 1350 1348 _INIT_DCS_CMD(0xD8, 0xFF, 0xFF, 0xBF, 0xFE, 0xAA, 0xA0, 0xFF, 0xFF, 0xBF, 0xFE, 0xAA, 0xA0),
+1 -1
drivers/gpu/drm/scheduler/sched_main.c
··· 929 929 930 930 if (next) { 931 931 next->s_fence->scheduled.timestamp = 932 - job->s_fence->finished.timestamp; 932 + dma_fence_timestamp(&job->s_fence->finished); 933 933 /* start TO timer for next job */ 934 934 drm_sched_start_timeout(sched); 935 935 }
+1 -1
drivers/gpu/drm/tiny/simpledrm.c
··· 745 745 746 746 ret = devm_aperture_acquire_from_firmware(dev, res->start, resource_size(res)); 747 747 if (ret) { 748 - drm_err(dev, "could not acquire memory range %pr: %d\n", &res, ret); 748 + drm_err(dev, "could not acquire memory range %pr: %d\n", res, ret); 749 749 return ERR_PTR(ret); 750 750 } 751 751
+4 -3
drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
··· 34 34 35 35 static void vmw_bo_release(struct vmw_bo *vbo) 36 36 { 37 + WARN_ON(vbo->tbo.base.funcs && 38 + kref_read(&vbo->tbo.base.refcount) != 0); 37 39 vmw_bo_unmap(vbo); 38 40 drm_gem_object_release(&vbo->tbo.base); 39 41 } ··· 499 497 if (!(flags & drm_vmw_synccpu_allow_cs)) { 500 498 atomic_dec(&vmw_bo->cpu_writers); 501 499 } 502 - vmw_user_bo_unref(vmw_bo); 500 + vmw_user_bo_unref(&vmw_bo); 503 501 } 504 502 505 503 return ret; ··· 541 539 return ret; 542 540 543 541 ret = vmw_user_bo_synccpu_grab(vbo, arg->flags); 544 - vmw_user_bo_unref(vbo); 542 + vmw_user_bo_unref(&vbo); 545 543 if (unlikely(ret != 0)) { 546 544 if (ret == -ERESTARTSYS || ret == -EBUSY) 547 545 return -EBUSY; ··· 614 612 } 615 613 616 614 *out = to_vmw_bo(gobj); 617 - ttm_bo_get(&(*out)->tbo); 618 615 619 616 return 0; 620 617 }
+12 -5
drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
··· 195 195 return buf; 196 196 } 197 197 198 - static inline void vmw_user_bo_unref(struct vmw_bo *vbo) 198 + static inline struct vmw_bo *vmw_user_bo_ref(struct vmw_bo *vbo) 199 199 { 200 - if (vbo) { 201 - ttm_bo_put(&vbo->tbo); 202 - drm_gem_object_put(&vbo->tbo.base); 203 - } 200 + drm_gem_object_get(&vbo->tbo.base); 201 + return vbo; 202 + } 203 + 204 + static inline void vmw_user_bo_unref(struct vmw_bo **buf) 205 + { 206 + struct vmw_bo *tmp_buf = *buf; 207 + 208 + *buf = NULL; 209 + if (tmp_buf) 210 + drm_gem_object_put(&tmp_buf->tbo.base); 204 211 } 205 212 206 213 static inline struct vmw_bo *to_vmw_bo(struct drm_gem_object *gobj)
+3 -3
drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c
··· 432 432 * for the new COTable. Initially pin the buffer object to make sure 433 433 * we can use tryreserve without failure. 434 434 */ 435 - ret = vmw_bo_create(dev_priv, &bo_params, &buf); 435 + ret = vmw_gem_object_create(dev_priv, &bo_params, &buf); 436 436 if (ret) { 437 437 DRM_ERROR("Failed initializing new cotable MOB.\n"); 438 438 goto out_done; ··· 502 502 503 503 vmw_resource_mob_attach(res); 504 504 /* Let go of the old mob. */ 505 - vmw_bo_unreference(&old_buf); 505 + vmw_user_bo_unref(&old_buf); 506 506 res->id = vcotbl->type; 507 507 508 508 ret = dma_resv_reserve_fences(bo->base.resv, 1); ··· 521 521 out_wait: 522 522 ttm_bo_unpin(bo); 523 523 ttm_bo_unreserve(bo); 524 - vmw_bo_unreference(&buf); 524 + vmw_user_bo_unref(&buf); 525 525 526 526 out_done: 527 527 MKS_STAT_TIME_POP(MKSSTAT_KERN_COTABLE_RESIZE);
+4
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 853 853 /** 854 854 * GEM related functionality - vmwgfx_gem.c 855 855 */ 856 + struct vmw_bo_params; 857 + int vmw_gem_object_create(struct vmw_private *vmw, 858 + struct vmw_bo_params *params, 859 + struct vmw_bo **p_vbo); 856 860 extern int vmw_gem_object_create_with_handle(struct vmw_private *dev_priv, 857 861 struct drm_file *filp, 858 862 uint32_t size,
+7 -5
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
··· 1151 1151 SVGAMobId *id, 1152 1152 struct vmw_bo **vmw_bo_p) 1153 1153 { 1154 - struct vmw_bo *vmw_bo; 1154 + struct vmw_bo *vmw_bo, *tmp_bo; 1155 1155 uint32_t handle = *id; 1156 1156 struct vmw_relocation *reloc; 1157 1157 int ret; ··· 1164 1164 } 1165 1165 vmw_bo_placement_set(vmw_bo, VMW_BO_DOMAIN_MOB, VMW_BO_DOMAIN_MOB); 1166 1166 ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo); 1167 - vmw_user_bo_unref(vmw_bo); 1167 + tmp_bo = vmw_bo; 1168 + vmw_user_bo_unref(&tmp_bo); 1168 1169 if (unlikely(ret != 0)) 1169 1170 return ret; 1170 1171 ··· 1207 1206 SVGAGuestPtr *ptr, 1208 1207 struct vmw_bo **vmw_bo_p) 1209 1208 { 1210 - struct vmw_bo *vmw_bo; 1209 + struct vmw_bo *vmw_bo, *tmp_bo; 1211 1210 uint32_t handle = ptr->gmrId; 1212 1211 struct vmw_relocation *reloc; 1213 1212 int ret; ··· 1221 1220 vmw_bo_placement_set(vmw_bo, VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM, 1222 1221 VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM); 1223 1222 ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo); 1224 - vmw_user_bo_unref(vmw_bo); 1223 + tmp_bo = vmw_bo; 1224 + vmw_user_bo_unref(&tmp_bo); 1225 1225 if (unlikely(ret != 0)) 1226 1226 return ret; 1227 1227 ··· 1621 1619 { 1622 1620 VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdSetTextureState); 1623 1621 SVGA3dTextureState *last_state = (SVGA3dTextureState *) 1624 - ((unsigned long) header + header->size + sizeof(header)); 1622 + ((unsigned long) header + header->size + sizeof(*header)); 1625 1623 SVGA3dTextureState *cur_state = (SVGA3dTextureState *) 1626 1624 ((unsigned long) header + sizeof(*cmd)); 1627 1625 struct vmw_resource *ctx;
+15 -3
drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
··· 111 111 .vm_ops = &vmw_vm_ops, 112 112 }; 113 113 114 + int vmw_gem_object_create(struct vmw_private *vmw, 115 + struct vmw_bo_params *params, 116 + struct vmw_bo **p_vbo) 117 + { 118 + int ret = vmw_bo_create(vmw, params, p_vbo); 119 + 120 + if (ret != 0) 121 + goto out_no_bo; 122 + 123 + (*p_vbo)->tbo.base.funcs = &vmw_gem_object_funcs; 124 + out_no_bo: 125 + return ret; 126 + } 127 + 114 128 int vmw_gem_object_create_with_handle(struct vmw_private *dev_priv, 115 129 struct drm_file *filp, 116 130 uint32_t size, ··· 140 126 .pin = false 141 127 }; 142 128 143 - ret = vmw_bo_create(dev_priv, &params, p_vbo); 129 + ret = vmw_gem_object_create(dev_priv, &params, p_vbo); 144 130 if (ret != 0) 145 131 goto out_no_bo; 146 - 147 - (*p_vbo)->tbo.base.funcs = &vmw_gem_object_funcs; 148 132 149 133 ret = drm_gem_handle_create(filp, &(*p_vbo)->tbo.base, handle); 150 134 out_no_bo:
+3 -3
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 1471 1471 /* Reserve and switch the backing mob. */ 1472 1472 mutex_lock(&res->dev_priv->cmdbuf_mutex); 1473 1473 (void) vmw_resource_reserve(res, false, true); 1474 - vmw_bo_unreference(&res->guest_memory_bo); 1475 - res->guest_memory_bo = vmw_bo_reference(bo_mob); 1474 + vmw_user_bo_unref(&res->guest_memory_bo); 1475 + res->guest_memory_bo = vmw_user_bo_ref(bo_mob); 1476 1476 res->guest_memory_offset = 0; 1477 1477 vmw_resource_unreserve(res, false, false, false, NULL, 0); 1478 1478 mutex_unlock(&res->dev_priv->cmdbuf_mutex); ··· 1666 1666 err_out: 1667 1667 /* vmw_user_lookup_handle takes one ref so does new_fb */ 1668 1668 if (bo) 1669 - vmw_user_bo_unref(bo); 1669 + vmw_user_bo_unref(&bo); 1670 1670 if (surface) 1671 1671 vmw_surface_unreference(&surface); 1672 1672
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
··· 451 451 452 452 ret = vmw_overlay_update_stream(dev_priv, buf, arg, true); 453 453 454 - vmw_user_bo_unref(buf); 454 + vmw_user_bo_unref(&buf); 455 455 456 456 out_unlock: 457 457 mutex_unlock(&overlay->mutex);
+6 -6
drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
··· 141 141 if (res->coherent) 142 142 vmw_bo_dirty_release(res->guest_memory_bo); 143 143 ttm_bo_unreserve(bo); 144 - vmw_bo_unreference(&res->guest_memory_bo); 144 + vmw_user_bo_unref(&res->guest_memory_bo); 145 145 } 146 146 147 147 if (likely(res->hw_destroy != NULL)) { ··· 338 338 return 0; 339 339 } 340 340 341 - ret = vmw_bo_create(res->dev_priv, &bo_params, &gbo); 341 + ret = vmw_gem_object_create(res->dev_priv, &bo_params, &gbo); 342 342 if (unlikely(ret != 0)) 343 343 goto out_no_bo; 344 344 ··· 457 457 vmw_resource_mob_detach(res); 458 458 if (res->coherent) 459 459 vmw_bo_dirty_release(res->guest_memory_bo); 460 - vmw_bo_unreference(&res->guest_memory_bo); 460 + vmw_user_bo_unref(&res->guest_memory_bo); 461 461 } 462 462 463 463 if (new_guest_memory_bo) { 464 - res->guest_memory_bo = vmw_bo_reference(new_guest_memory_bo); 464 + res->guest_memory_bo = vmw_user_bo_ref(new_guest_memory_bo); 465 465 466 466 /* 467 467 * The validation code should already have added a ··· 551 551 ttm_bo_put(val_buf->bo); 552 552 val_buf->bo = NULL; 553 553 if (guest_memory_dirty) 554 - vmw_bo_unreference(&res->guest_memory_bo); 554 + vmw_user_bo_unref(&res->guest_memory_bo); 555 555 556 556 return ret; 557 557 } ··· 727 727 goto out_no_validate; 728 728 else if (!res->func->needs_guest_memory && res->guest_memory_bo) { 729 729 WARN_ON_ONCE(vmw_resource_mob_attached(res)); 730 - vmw_bo_unreference(&res->guest_memory_bo); 730 + vmw_user_bo_unref(&res->guest_memory_bo); 731 731 } 732 732 733 733 return 0;
+2 -2
drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
··· 180 180 181 181 res->guest_memory_size = size; 182 182 if (byte_code) { 183 - res->guest_memory_bo = vmw_bo_reference(byte_code); 183 + res->guest_memory_bo = vmw_user_bo_ref(byte_code); 184 184 res->guest_memory_offset = offset; 185 185 } 186 186 shader->size = size; ··· 809 809 shader_type, num_input_sig, 810 810 num_output_sig, tfile, shader_handle); 811 811 out_bad_arg: 812 - vmw_user_bo_unref(buffer); 812 + vmw_user_bo_unref(&buffer); 813 813 return ret; 814 814 } 815 815
+11 -18
drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
··· 686 686 container_of(base, struct vmw_user_surface, prime.base); 687 687 struct vmw_resource *res = &user_srf->srf.res; 688 688 689 - if (res->guest_memory_bo) 690 - drm_gem_object_put(&res->guest_memory_bo->tbo.base); 691 - 692 689 *p_base = NULL; 693 690 vmw_resource_unreference(&res); 694 691 } ··· 852 855 * expect a backup buffer to be present. 853 856 */ 854 857 if (dev_priv->has_mob && req->shareable) { 855 - uint32_t backup_handle; 858 + struct vmw_bo_params params = { 859 + .domain = VMW_BO_DOMAIN_SYS, 860 + .busy_domain = VMW_BO_DOMAIN_SYS, 861 + .bo_type = ttm_bo_type_device, 862 + .size = res->guest_memory_size, 863 + .pin = false 864 + }; 856 865 857 - ret = vmw_gem_object_create_with_handle(dev_priv, 858 - file_priv, 859 - res->guest_memory_size, 860 - &backup_handle, 861 - &res->guest_memory_bo); 866 + ret = vmw_gem_object_create(dev_priv, 867 + &params, 868 + &res->guest_memory_bo); 862 869 if (unlikely(ret != 0)) { 863 870 vmw_resource_unreference(&res); 864 871 goto out_unlock; 865 872 } 866 - vmw_bo_reference(res->guest_memory_bo); 867 - /* 868 - * We don't expose the handle to the userspace and surface 869 - * already holds a gem reference 870 - */ 871 - drm_gem_handle_delete(file_priv, backup_handle); 872 873 } 873 874 874 875 tmp = vmw_resource_reference(&srf->res); ··· 1507 1512 if (ret == 0) { 1508 1513 if (res->guest_memory_bo->tbo.base.size < res->guest_memory_size) { 1509 1514 VMW_DEBUG_USER("Surface backup buffer too small.\n"); 1510 - vmw_bo_unreference(&res->guest_memory_bo); 1515 + vmw_user_bo_unref(&res->guest_memory_bo); 1511 1516 ret = -EINVAL; 1512 1517 goto out_unlock; 1513 1518 } else { ··· 1521 1526 res->guest_memory_size, 1522 1527 &backup_handle, 1523 1528 &res->guest_memory_bo); 1524 - if (ret == 0) 1525 - vmw_bo_reference(res->guest_memory_bo); 1526 1529 } 1527 1530 1528 1531 if (unlikely(ret != 0)) {
+15 -12
drivers/hwtracing/coresight/coresight-tmc-etr.c
··· 610 610 611 611 flat_buf->vaddr = dma_alloc_noncoherent(real_dev, etr_buf->size, 612 612 &flat_buf->daddr, 613 - DMA_FROM_DEVICE, GFP_KERNEL); 613 + DMA_FROM_DEVICE, 614 + GFP_KERNEL | __GFP_NOWARN); 614 615 if (!flat_buf->vaddr) { 615 616 kfree(flat_buf); 616 617 return -ENOMEM; ··· 1175 1174 } 1176 1175 1177 1176 /* 1178 - * In sysFS mode we can have multiple writers per sink. Since this 1179 - * sink is already enabled no memory is needed and the HW need not be 1180 - * touched, even if the buffer size has changed. 1181 - */ 1182 - if (drvdata->mode == CS_MODE_SYSFS) { 1183 - atomic_inc(&csdev->refcnt); 1184 - goto out; 1185 - } 1186 - 1187 - /* 1188 1177 * If we don't have a buffer or it doesn't match the requested size, 1189 1178 * use the buffer allocated above. Otherwise reuse the existing buffer. 1190 1179 */ ··· 1195 1204 1196 1205 static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev) 1197 1206 { 1198 - int ret; 1207 + int ret = 0; 1199 1208 unsigned long flags; 1200 1209 struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 1201 1210 struct etr_buf *sysfs_buf = tmc_etr_get_sysfs_buffer(csdev); ··· 1204 1213 return PTR_ERR(sysfs_buf); 1205 1214 1206 1215 spin_lock_irqsave(&drvdata->spinlock, flags); 1216 + 1217 + /* 1218 + * In sysFS mode we can have multiple writers per sink. Since this 1219 + * sink is already enabled no memory is needed and the HW need not be 1220 + * touched, even if the buffer size has changed. 1221 + */ 1222 + if (drvdata->mode == CS_MODE_SYSFS) { 1223 + atomic_inc(&csdev->refcnt); 1224 + goto out; 1225 + } 1226 + 1207 1227 ret = tmc_etr_enable_hw(drvdata, sysfs_buf); 1208 1228 if (!ret) { 1209 1229 drvdata->mode = CS_MODE_SYSFS; 1210 1230 atomic_inc(&csdev->refcnt); 1211 1231 } 1212 1232 1233 + out: 1213 1234 spin_unlock_irqrestore(&drvdata->spinlock, flags); 1214 1235 1215 1236 if (!ret)
+25 -4
drivers/iio/adc/ad7192.c
··· 177 177 struct ad7192_state { 178 178 const struct ad7192_chip_info *chip_info; 179 179 struct regulator *avdd; 180 + struct regulator *vref; 180 181 struct clk *mclk; 181 182 u16 int_vref_mv; 182 183 u32 fclk; ··· 1009 1008 if (ret) 1010 1009 return dev_err_probe(&spi->dev, ret, "Failed to enable specified DVdd supply\n"); 1011 1010 1012 - ret = regulator_get_voltage(st->avdd); 1013 - if (ret < 0) { 1014 - dev_err(&spi->dev, "Device tree error, reference voltage undefined\n"); 1015 - return ret; 1011 + st->vref = devm_regulator_get_optional(&spi->dev, "vref"); 1012 + if (IS_ERR(st->vref)) { 1013 + if (PTR_ERR(st->vref) != -ENODEV) 1014 + return PTR_ERR(st->vref); 1015 + 1016 + ret = regulator_get_voltage(st->avdd); 1017 + if (ret < 0) 1018 + return dev_err_probe(&spi->dev, ret, 1019 + "Device tree error, AVdd voltage undefined\n"); 1020 + } else { 1021 + ret = regulator_enable(st->vref); 1022 + if (ret) { 1023 + dev_err(&spi->dev, "Failed to enable specified Vref supply\n"); 1024 + return ret; 1025 + } 1026 + 1027 + ret = devm_add_action_or_reset(&spi->dev, ad7192_reg_disable, st->vref); 1028 + if (ret) 1029 + return ret; 1030 + 1031 + ret = regulator_get_voltage(st->vref); 1032 + if (ret < 0) 1033 + return dev_err_probe(&spi->dev, ret, 1034 + "Device tree error, Vref voltage undefined\n"); 1016 1035 } 1017 1036 st->int_vref_mv = ret / 1000; 1018 1037
+2 -2
drivers/iio/adc/imx8qxp-adc.c
··· 38 38 #define IMX8QXP_ADR_ADC_FCTRL 0x30 39 39 #define IMX8QXP_ADR_ADC_SWTRIG 0x34 40 40 #define IMX8QXP_ADR_ADC_TCTRL(tid) (0xc0 + (tid) * 4) 41 - #define IMX8QXP_ADR_ADC_CMDH(cid) (0x100 + (cid) * 8) 42 - #define IMX8QXP_ADR_ADC_CMDL(cid) (0x104 + (cid) * 8) 41 + #define IMX8QXP_ADR_ADC_CMDL(cid) (0x100 + (cid) * 8) 42 + #define IMX8QXP_ADR_ADC_CMDH(cid) (0x104 + (cid) * 8) 43 43 #define IMX8QXP_ADR_ADC_RESFIFO 0x300 44 44 #define IMX8QXP_ADR_ADC_TST 0xffc 45 45
+2
drivers/iio/addac/Kconfig
··· 24 24 depends on GPIOLIB && SPI 25 25 select REGMAP_SPI 26 26 select CRC8 27 + select IIO_BUFFER 28 + select IIO_TRIGGERED_BUFFER 27 29 help 28 30 Say yes here to build support for Analog Devices AD74412R/AD74413R 29 31 quad-channel software configurable input/output solution.
+5 -1
drivers/iio/common/cros_ec_sensors/cros_ec_sensors_core.c
··· 190 190 /* 191 191 * Ignore samples if the buffer is not set: it is needed if the ODR is 192 192 * set but the buffer is not enabled yet. 193 + * 194 + * Note: iio_device_claim_buffer_mode() returns -EBUSY if the buffer 195 + * is not enabled. 193 196 */ 194 - if (!iio_buffer_enabled(indio_dev)) 197 + if (iio_device_claim_buffer_mode(indio_dev) < 0) 195 198 return 0; 196 199 197 200 out = (s16 *)st->samples; ··· 213 210 iio_push_to_buffers_with_timestamp(indio_dev, st->samples, 214 211 timestamp + delta); 215 212 213 + iio_device_release_buffer_mode(indio_dev); 216 214 return 0; 217 215 } 218 216 EXPORT_SYMBOL_GPL(cros_ec_sensors_push_data);
+2 -2
drivers/iio/dac/ad3552r.c
··· 140 140 }; 141 141 142 142 enum ad3542r_id { 143 - AD3542R_ID = 0x4008, 144 - AD3552R_ID = 0x4009, 143 + AD3542R_ID = 0x4009, 144 + AD3552R_ID = 0x4008, 145 145 }; 146 146 147 147 enum ad3552r_ch_output_range {
+2 -2
drivers/iio/frequency/admv1013.c
··· 351 351 if (vcm < 0) 352 352 return vcm; 353 353 354 - if (vcm < 1800000) 354 + if (vcm <= 1800000) 355 355 mixer_vgate = (2389 * vcm / 1000000 + 8100) / 100; 356 - else if (vcm > 1800000 && vcm < 2600000) 356 + else if (vcm > 1800000 && vcm <= 2600000) 357 357 mixer_vgate = (2375 * vcm / 1000000 + 125) / 100; 358 358 else 359 359 return -EINVAL;
+2
drivers/iio/imu/bno055/Kconfig
··· 2 2 3 3 config BOSCH_BNO055 4 4 tristate 5 + select IIO_BUFFER 6 + select IIO_TRIGGERED_BUFFER 5 7 6 8 config BOSCH_BNO055_SERIAL 7 9 tristate "Bosch BNO055 attached via UART"
-1
drivers/iio/light/vcnl4000.c
··· 1513 1513 1514 1514 out: 1515 1515 mutex_unlock(&data->vcnl4000_lock); 1516 - data->chip_spec->set_power_state(data, data->ps_int || data->als_int); 1517 1516 1518 1517 return ret; 1519 1518 }
+1 -1
drivers/iio/pressure/bmp280-core.c
··· 2179 2179 * however as it happens, the BMP085 shares the chip ID of BMP180 2180 2180 * so we look for an IRQ if we have that. 2181 2181 */ 2182 - if (irq > 0 || (chip_id == BMP180_CHIP_ID)) { 2182 + if (irq > 0 && (chip_id == BMP180_CHIP_ID)) { 2183 2183 ret = bmp085_fetch_eoc_irq(dev, name, irq, data); 2184 2184 if (ret) 2185 2185 return ret;
+4 -4
drivers/iio/pressure/dps310.c
··· 57 57 #define DPS310_RESET_MAGIC 0x09 58 58 #define DPS310_COEF_BASE 0x10 59 59 60 - /* Make sure sleep time is <= 20ms for usleep_range */ 61 - #define DPS310_POLL_SLEEP_US(t) min(20000, (t) / 8) 60 + /* Make sure sleep time is <= 30ms for usleep_range */ 61 + #define DPS310_POLL_SLEEP_US(t) min(30000, (t) / 8) 62 62 /* Silently handle error in rate value here */ 63 63 #define DPS310_POLL_TIMEOUT_US(rc) ((rc) <= 0 ? 1000000 : 1000000 / (rc)) 64 64 ··· 402 402 if (rc) 403 403 return rc; 404 404 405 - /* Wait for device chip access: 2.5ms in specification */ 406 - usleep_range(2500, 12000); 405 + /* Wait for device chip access: 15ms in specification */ 406 + usleep_range(15000, 55000); 407 407 return 0; 408 408 } 409 409
+1 -1
drivers/iio/pressure/ms5611_core.c
··· 76 76 77 77 crc = (crc >> 12) & 0x000F; 78 78 79 - return crc_orig != 0x0000 && crc == crc_orig; 79 + return crc == crc_orig; 80 80 } 81 81 82 82 static int ms5611_read_prom(struct iio_dev *indio_dev)
+3 -3
drivers/iio/proximity/irsd200.c
··· 759 759 { 760 760 struct iio_dev *indio_dev = ((struct iio_poll_func *)pollf)->indio_dev; 761 761 struct irsd200_data *data = iio_priv(indio_dev); 762 - s16 buf = 0; 762 + s64 buf[2] = {}; 763 763 int ret; 764 764 765 - ret = irsd200_read_data(data, &buf); 765 + ret = irsd200_read_data(data, (s16 *)buf); 766 766 if (ret) 767 767 goto end; 768 768 769 - iio_push_to_buffers_with_timestamp(indio_dev, &buf, 769 + iio_push_to_buffers_with_timestamp(indio_dev, buf, 770 770 iio_get_time_ns(indio_dev)); 771 771 772 772 end:
+4
drivers/input/joystick/xpad.c
··· 130 130 { 0x0079, 0x18d4, "GPD Win 2 X-Box Controller", 0, XTYPE_XBOX360 }, 131 131 { 0x03eb, 0xff01, "Wooting One (Legacy)", 0, XTYPE_XBOX360 }, 132 132 { 0x03eb, 0xff02, "Wooting Two (Legacy)", 0, XTYPE_XBOX360 }, 133 + { 0x03f0, 0x0495, "HyperX Clutch Gladiate", 0, XTYPE_XBOXONE }, 133 134 { 0x044f, 0x0f00, "Thrustmaster Wheel", 0, XTYPE_XBOX }, 134 135 { 0x044f, 0x0f03, "Thrustmaster Wheel", 0, XTYPE_XBOX }, 135 136 { 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX }, ··· 273 272 { 0x1038, 0x1430, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 }, 274 273 { 0x1038, 0x1431, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 }, 275 274 { 0x11c9, 0x55f0, "Nacon GC-100XF", 0, XTYPE_XBOX360 }, 275 + { 0x11ff, 0x0511, "PXN V900", 0, XTYPE_XBOX360 }, 276 276 { 0x1209, 0x2882, "Ardwiino Controller", 0, XTYPE_XBOX360 }, 277 277 { 0x12ab, 0x0004, "Honey Bee Xbox360 dancepad", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 }, 278 278 { 0x12ab, 0x0301, "PDP AFTERGLOW AX.1", 0, XTYPE_XBOX360 }, ··· 461 459 { USB_INTERFACE_INFO('X', 'B', 0) }, /* Xbox USB-IF not-approved class */ 462 460 XPAD_XBOX360_VENDOR(0x0079), /* GPD Win 2 controller */ 463 461 XPAD_XBOX360_VENDOR(0x03eb), /* Wooting Keyboards (Legacy) */ 462 + XPAD_XBOXONE_VENDOR(0x03f0), /* HP HyperX Xbox One controllers */ 464 463 XPAD_XBOX360_VENDOR(0x044f), /* Thrustmaster Xbox 360 controllers */ 465 464 XPAD_XBOX360_VENDOR(0x045e), /* Microsoft Xbox 360 controllers */ 466 465 XPAD_XBOXONE_VENDOR(0x045e), /* Microsoft Xbox One controllers */ ··· 480 477 XPAD_XBOX360_VENDOR(0x1038), /* SteelSeries controllers */ 481 478 XPAD_XBOXONE_VENDOR(0x10f5), /* Turtle Beach Controllers */ 482 479 XPAD_XBOX360_VENDOR(0x11c9), /* Nacon GC100XF */ 480 + XPAD_XBOX360_VENDOR(0x11ff), /* PXN V900 */ 483 481 XPAD_XBOX360_VENDOR(0x1209), /* Ardwiino Controllers */ 484 482 XPAD_XBOX360_VENDOR(0x12ab), /* Xbox 360 dance pads */ 485 483 XPAD_XBOX360_VENDOR(0x1430), /* RedOctane Xbox 360 controllers */
+1
drivers/input/misc/powermate.c
··· 425 425 pm->requires_update = 0; 426 426 usb_kill_urb(pm->irq); 427 427 input_unregister_device(pm->input); 428 + usb_kill_urb(pm->config); 428 429 usb_free_urb(pm->irq); 429 430 usb_free_urb(pm->config); 430 431 powermate_free_buffers(interface_to_usbdev(intf), pm);
+1
drivers/input/mouse/elantech.c
··· 2114 2114 psmouse->protocol_handler = elantech_process_byte; 2115 2115 psmouse->disconnect = elantech_disconnect; 2116 2116 psmouse->reconnect = elantech_reconnect; 2117 + psmouse->fast_reconnect = NULL; 2117 2118 psmouse->pktsize = info->hw_version > 1 ? 6 : 4; 2118 2119 2119 2120 return 0;
+7 -12
drivers/input/mouse/psmouse-smbus.c
··· 5 5 6 6 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 7 7 8 - #include <linux/delay.h> 9 8 #include <linux/kernel.h> 10 9 #include <linux/module.h> 11 10 #include <linux/libps2.h> ··· 118 119 return PSMOUSE_FULL_PACKET; 119 120 } 120 121 121 - static void psmouse_activate_smbus_mode(struct psmouse_smbus_dev *smbdev) 122 - { 123 - if (smbdev->need_deactivate) { 124 - psmouse_deactivate(smbdev->psmouse); 125 - /* Give the device time to switch into SMBus mode */ 126 - msleep(30); 127 - } 128 - } 129 - 130 122 static int psmouse_smbus_reconnect(struct psmouse *psmouse) 131 123 { 132 - psmouse_activate_smbus_mode(psmouse->private); 124 + struct psmouse_smbus_dev *smbdev = psmouse->private; 125 + 126 + if (smbdev->need_deactivate) 127 + psmouse_deactivate(psmouse); 128 + 133 129 return 0; 134 130 } 135 131 ··· 257 263 } 258 264 } 259 265 260 - psmouse_activate_smbus_mode(smbdev); 266 + if (need_deactivate) 267 + psmouse_deactivate(psmouse); 261 268 262 269 psmouse->private = smbdev; 263 270 psmouse->protocol_handler = psmouse_smbus_process_byte;
+2
drivers/input/mouse/synaptics.c
··· 1623 1623 psmouse->set_rate = synaptics_set_rate; 1624 1624 psmouse->disconnect = synaptics_disconnect; 1625 1625 psmouse->reconnect = synaptics_reconnect; 1626 + psmouse->fast_reconnect = NULL; 1626 1627 psmouse->cleanup = synaptics_reset; 1627 1628 /* Synaptics can usually stay in sync without extra help */ 1628 1629 psmouse->resync_time = 0; ··· 1753 1752 psmouse_matches_pnp_id(psmouse, topbuttonpad_pnp_ids) && 1754 1753 !SYN_CAP_EXT_BUTTONS_STICK(info->ext_cap_10); 1755 1754 const struct rmi_device_platform_data pdata = { 1755 + .reset_delay_ms = 30, 1756 1756 .sensor_pdata = { 1757 1757 .sensor_type = rmi_sensor_touchpad, 1758 1758 .axis_align.flip_y = true,
+28 -22
drivers/input/rmi4/rmi_smbus.c
··· 235 235 236 236 static int rmi_smb_enable_smbus_mode(struct rmi_smb_xport *rmi_smb) 237 237 { 238 - int retval; 238 + struct i2c_client *client = rmi_smb->client; 239 + int smbus_version; 240 + 241 + /* 242 + * psmouse driver resets the controller, we only need to wait 243 + * to give the firmware chance to fully reinitialize. 244 + */ 245 + if (rmi_smb->xport.pdata.reset_delay_ms) 246 + msleep(rmi_smb->xport.pdata.reset_delay_ms); 239 247 240 248 /* we need to get the smbus version to activate the touchpad */ 241 - retval = rmi_smb_get_version(rmi_smb); 242 - if (retval < 0) 243 - return retval; 249 + smbus_version = rmi_smb_get_version(rmi_smb); 250 + if (smbus_version < 0) 251 + return smbus_version; 252 + 253 + rmi_dbg(RMI_DEBUG_XPORT, &client->dev, "Smbus version is %d", 254 + smbus_version); 255 + 256 + if (smbus_version != 2 && smbus_version != 3) { 257 + dev_err(&client->dev, "Unrecognized SMB version %d\n", 258 + smbus_version); 259 + return -ENODEV; 260 + } 244 261 245 262 return 0; 246 263 } ··· 270 253 rmi_smb_clear_state(rmi_smb); 271 254 272 255 /* 273 - * we do not call the actual reset command, it has to be handled in 274 - * PS/2 or there will be races between PS/2 and SMBus. 275 - * PS/2 should ensure that a psmouse_reset is called before 276 - * intializing the device and after it has been removed to be in a known 277 - * state. 256 + * We do not call the actual reset command, it has to be handled in 257 + * PS/2 or there will be races between PS/2 and SMBus. PS/2 should 258 + * ensure that a psmouse_reset is called before initializing the 259 + * device and after it has been removed to be in a known state. 278 260 */ 279 261 return rmi_smb_enable_smbus_mode(rmi_smb); 280 262 } ··· 288 272 { 289 273 struct rmi_device_platform_data *pdata = dev_get_platdata(&client->dev); 290 274 struct rmi_smb_xport *rmi_smb; 291 - int smbus_version; 292 275 int error; 293 276 294 277 if (!pdata) { ··· 326 311 rmi_smb->xport.proto_name = "smb"; 327 312 rmi_smb->xport.ops = &rmi_smb_ops; 328 313 329 - smbus_version = rmi_smb_get_version(rmi_smb); 330 - if (smbus_version < 0) 331 - return smbus_version; 332 - 333 - rmi_dbg(RMI_DEBUG_XPORT, &client->dev, "Smbus version is %d", 334 - smbus_version); 335 - 336 - if (smbus_version != 2 && smbus_version != 3) { 337 - dev_err(&client->dev, "Unrecognized SMB version %d\n", 338 - smbus_version); 339 - return -ENODEV; 340 - } 314 + error = rmi_smb_enable_smbus_mode(rmi_smb); 315 + if (error) 316 + return error; 341 317 342 318 i2c_set_clientdata(client, rmi_smb); 343 319
+8
drivers/input/serio/i8042-acpipnpio.h
··· 619 619 .driver_data = (void *)(SERIO_QUIRK_NOMUX) 620 620 }, 621 621 { 622 + /* Fujitsu Lifebook E5411 */ 623 + .matches = { 624 + DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU CLIENT COMPUTING LIMITED"), 625 + DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK E5411"), 626 + }, 627 + .driver_data = (void *)(SERIO_QUIRK_NOAUX) 628 + }, 629 + { 622 630 /* Gigabyte M912 */ 623 631 .matches = { 624 632 DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"),
+19
drivers/input/touchscreen/goodix.c
··· 900 900 dev_info(dev, "No ACPI GpioInt resource, assuming that the GPIO order is reset, int\n"); 901 901 ts->irq_pin_access_method = IRQ_PIN_ACCESS_ACPI_GPIO; 902 902 gpio_mapping = acpi_goodix_int_last_gpios; 903 + } else if (ts->gpio_count == 1 && ts->gpio_int_idx == 0) { 904 + /* 905 + * On newer devices there is only 1 GpioInt resource and _PS0 906 + * does the whole reset sequence for us. 907 + */ 908 + acpi_device_fix_up_power(ACPI_COMPANION(dev)); 909 + 910 + /* 911 + * Before the _PS0 call the int GPIO may have been in output 912 + * mode and the call should have put the int GPIO in input mode, 913 + * but the GPIO subsys cached state may still think it is 914 + * in output mode, causing gpiochip_lock_as_irq() failure. 915 + * 916 + * Add a mapping for the int GPIO to make the 917 + * gpiod_int = gpiod_get(..., GPIOD_IN) call succeed, 918 + * which will explicitly set the direction to input. 919 + */ 920 + ts->irq_pin_access_method = IRQ_PIN_ACCESS_NONE; 921 + gpio_mapping = acpi_goodix_int_first_gpios; 903 922 } else { 904 923 dev_warn(dev, "Unexpected ACPI resources: gpio_count %d, gpio_int_idx %d\n", 905 924 ts->gpio_count, ts->gpio_int_idx);
+3 -7
drivers/mcb/mcb-core.c
··· 387 387 388 388 static int __mcb_bus_add_devices(struct device *dev, void *data) 389 389 { 390 - struct mcb_device *mdev = to_mcb_device(dev); 391 390 int retval; 392 391 393 - if (mdev->is_added) 394 - return 0; 395 - 396 392 retval = device_attach(dev); 397 - if (retval < 0) 393 + if (retval < 0) { 398 394 dev_err(dev, "Error adding device (%d)\n", retval); 399 - 400 - mdev->is_added = true; 395 + return retval; 396 + } 401 397 402 398 return 0; 403 399 }
-2
drivers/mcb/mcb-parse.c
··· 99 99 mdev->mem.end = mdev->mem.start + size - 1; 100 100 mdev->mem.flags = IORESOURCE_MEM; 101 101 102 - mdev->is_added = false; 103 - 104 102 ret = mcb_device_register(bus, mdev); 105 103 if (ret < 0) 106 104 goto err;
+2 -8
drivers/media/i2c/ov8858.c
··· 1850 1850 } 1851 1851 1852 1852 ret = v4l2_fwnode_endpoint_parse(endpoint, &vep); 1853 + fwnode_handle_put(endpoint); 1853 1854 if (ret) { 1854 1855 dev_err(dev, "Failed to parse endpoint: %d\n", ret); 1855 - fwnode_handle_put(endpoint); 1856 1856 return ret; 1857 1857 } 1858 1858 ··· 1864 1864 default: 1865 1865 dev_err(dev, "Unsupported number of data lanes %u\n", 1866 1866 ov8858->num_lanes); 1867 - fwnode_handle_put(endpoint); 1868 1867 return -EINVAL; 1869 1868 } 1870 - 1871 - ov8858->subdev.fwnode = endpoint; 1872 1869 1873 1870 return 0; 1874 1871 } ··· 1910 1913 1911 1914 ret = ov8858_init_ctrls(ov8858); 1912 1915 if (ret) 1913 - goto err_put_fwnode; 1916 + return ret; 1914 1917 1915 1918 sd = &ov8858->subdev; 1916 1919 sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE | V4L2_SUBDEV_FL_HAS_EVENTS; ··· 1961 1964 media_entity_cleanup(&sd->entity); 1962 1965 err_free_handler: 1963 1966 v4l2_ctrl_handler_free(&ov8858->ctrl_handler); 1964 - err_put_fwnode: 1965 - fwnode_handle_put(ov8858->subdev.fwnode); 1966 1967 1967 1968 return ret; 1968 1969 } ··· 1973 1978 v4l2_async_unregister_subdev(sd); 1974 1979 media_entity_cleanup(&sd->entity); 1975 1980 v4l2_ctrl_handler_free(&ov8858->ctrl_handler); 1976 - fwnode_handle_put(ov8858->subdev.fwnode); 1977 1981 1978 1982 pm_runtime_disable(&client->dev); 1979 1983 if (!pm_runtime_status_suspended(&client->dev))
+3 -1
drivers/media/pci/intel/ipu-bridge.c
··· 107 107 for_each_acpi_dev_match(ivsc_adev, acpi_id->id, NULL, -1) 108 108 /* camera sensor depends on IVSC in DSDT if exist */ 109 109 for_each_acpi_consumer_dev(ivsc_adev, consumer) 110 - if (consumer->handle == handle) 110 + if (consumer->handle == handle) { 111 + acpi_dev_put(consumer); 111 112 return ivsc_adev; 113 + } 112 114 } 113 115 114 116 return NULL;
+11 -4
drivers/media/platform/xilinx/xilinx-vipp.c
··· 55 55 { 56 56 struct xvip_graph_entity *entity; 57 57 struct v4l2_async_connection *asd; 58 + struct list_head *lists[] = { 59 + &xdev->notifier.done_list, 60 + &xdev->notifier.waiting_list 61 + }; 62 + unsigned int i; 58 63 59 - list_for_each_entry(asd, &xdev->notifier.done_list, asc_entry) { 60 - entity = to_xvip_entity(asd); 61 - if (entity->asd.match.fwnode == fwnode) 62 - return entity; 64 + for (i = 0; i < ARRAY_SIZE(lists); i++) { 65 + list_for_each_entry(asd, lists[i], asc_entry) { 66 + entity = to_xvip_entity(asd); 67 + if (entity->asd.match.fwnode == fwnode) 68 + return entity; 69 + } 63 70 } 64 71 65 72 return NULL;
+7
drivers/media/v4l2-core/v4l2-subdev.c
··· 502 502 V4L2_SUBDEV_CLIENT_CAP_STREAMS; 503 503 int rval; 504 504 505 + /* 506 + * If the streams API is not enabled, remove V4L2_SUBDEV_CAP_STREAMS. 507 + * Remove this when the API is no longer experimental. 508 + */ 509 + if (!v4l2_subdev_enable_streams_api) 510 + streams_subdev = false; 511 + 505 512 switch (cmd) { 506 513 case VIDIOC_SUBDEV_QUERYCAP: { 507 514 struct v4l2_subdev_capability *cap = arg;
+1 -1
drivers/net/bonding/bond_main.c
··· 4023 4023 if (likely(n <= hlen)) 4024 4024 return data; 4025 4025 else if (skb && likely(pskb_may_pull(skb, n))) 4026 - return skb->head; 4026 + return skb->data; 4027 4027 4028 4028 return NULL; 4029 4029 }
+15 -9
drivers/net/dsa/bcm_sf2.c
··· 617 617 dn = of_find_compatible_node(NULL, NULL, "brcm,unimac-mdio"); 618 618 priv->master_mii_bus = of_mdio_find_bus(dn); 619 619 if (!priv->master_mii_bus) { 620 - of_node_put(dn); 621 - return -EPROBE_DEFER; 620 + err = -EPROBE_DEFER; 621 + goto err_of_node_put; 622 622 } 623 623 624 - get_device(&priv->master_mii_bus->dev); 625 624 priv->master_mii_dn = dn; 626 625 627 626 priv->slave_mii_bus = mdiobus_alloc(); 628 627 if (!priv->slave_mii_bus) { 629 - of_node_put(dn); 630 - return -ENOMEM; 628 + err = -ENOMEM; 629 + goto err_put_master_mii_bus_dev; 631 630 } 632 631 633 632 priv->slave_mii_bus->priv = priv; ··· 683 684 } 684 685 685 686 err = mdiobus_register(priv->slave_mii_bus); 686 - if (err && dn) { 687 - mdiobus_free(priv->slave_mii_bus); 688 - of_node_put(dn); 689 - } 687 + if (err && dn) 688 + goto err_free_slave_mii_bus; 690 689 690 + return 0; 691 + 692 + err_free_slave_mii_bus: 693 + mdiobus_free(priv->slave_mii_bus); 694 + err_put_master_mii_bus_dev: 695 + put_device(&priv->master_mii_bus->dev); 696 + err_of_node_put: 697 + of_node_put(dn); 691 698 return err; 692 699 } 693 700 ··· 701 696 { 702 697 mdiobus_unregister(priv->slave_mii_bus); 703 698 mdiobus_free(priv->slave_mii_bus); 699 + put_device(&priv->master_mii_bus->dev); 704 700 of_node_put(priv->master_mii_dn); 705 701 } 706 702
+29 -7
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
··· 911 911 struct sock *sk, long *timeo_p) 912 912 { 913 913 DEFINE_WAIT_FUNC(wait, woken_wake_function); 914 - int err = 0; 914 + int ret, err = 0; 915 915 long current_timeo; 916 916 long vm_wait = 0; 917 917 bool noblock; ··· 942 942 943 943 set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 944 944 sk->sk_write_pending++; 945 - sk_wait_event(sk, &current_timeo, sk->sk_err || 946 - (sk->sk_shutdown & SEND_SHUTDOWN) || 947 - (csk_mem_free(cdev, sk) && !vm_wait), &wait); 945 + ret = sk_wait_event(sk, &current_timeo, sk->sk_err || 946 + (sk->sk_shutdown & SEND_SHUTDOWN) || 947 + (csk_mem_free(cdev, sk) && !vm_wait), 948 + &wait); 948 949 sk->sk_write_pending--; 950 + if (ret < 0) 951 + goto do_error; 949 952 950 953 if (vm_wait) { 951 954 vm_wait -= current_timeo; ··· 1351 1348 int copied = 0; 1352 1349 int target; 1353 1350 long timeo; 1351 + int ret; 1354 1352 1355 1353 buffers_freed = 0; 1356 1354 ··· 1427 1423 if (copied >= target) 1428 1424 break; 1429 1425 chtls_cleanup_rbuf(sk, copied); 1430 - sk_wait_data(sk, &timeo, NULL); 1426 + ret = sk_wait_data(sk, &timeo, NULL); 1427 + if (ret < 0) { 1428 + copied = copied ? : ret; 1429 + goto unlock; 1430 + } 1431 1431 continue; 1432 1432 found_ok_skb: 1433 1433 if (!skb->len) { ··· 1526 1518 1527 1519 if (buffers_freed) 1528 1520 chtls_cleanup_rbuf(sk, copied); 1521 + 1522 + unlock: 1529 1523 release_sock(sk); 1530 1524 return copied; 1531 1525 } ··· 1544 1534 int copied = 0; 1545 1535 size_t avail; /* amount of available data in current skb */ 1546 1536 long timeo; 1537 + int ret; 1547 1538 1548 1539 lock_sock(sk); 1549 1540 timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT); ··· 1596 1585 release_sock(sk); 1597 1586 lock_sock(sk); 1598 1587 } else { 1599 - sk_wait_data(sk, &timeo, NULL); 1588 + ret = sk_wait_data(sk, &timeo, NULL); 1589 + if (ret < 0) { 1590 + /* here 'copied' is 0 due to previous checks */ 1591 + copied = ret; 1592 + break; 1593 + } 1600 1594 } 1601 1595 1602 1596 if (unlikely(peek_seq != tp->copied_seq)) { ··· 1672 1656 int copied = 0; 1673 1657 long timeo; 1674 1658 int target; /* Read at least this many bytes */ 1659 + int ret; 1675 1660 1676 1661 buffers_freed = 0; 1677 1662 ··· 1764 1747 if (copied >= target) 1765 1748 break; 1766 1749 chtls_cleanup_rbuf(sk, copied); 1767 1750 ret = sk_wait_data(sk, &timeo, NULL); 1751 + if (ret < 0) { 1752 + copied = copied ? : ret; 1753 + goto unlock; 1754 + } 1768 1755 continue; 1769 1756 found_ok_skb: ··· 1837 1816 if (buffers_freed) 1838 1817 chtls_cleanup_rbuf(sk, copied); 1839 1818 1819 + unlock: 1840 1820 release_sock(sk); 1841 1821 return copied; 1842 1822 }
+16 -2
drivers/net/ethernet/google/gve/gve_rx.c
··· 146 146 err = gve_rx_alloc_buffer(priv, &priv->pdev->dev, &rx->data.page_info[i], 147 147 &rx->data.data_ring[i]); 148 148 if (err) 149 - goto alloc_err; 149 + goto alloc_err_rda; 150 150 } 151 151 152 152 if (!rx->data.raw_addressing) { ··· 171 171 return slots; 172 172 173 173 alloc_err_qpl: 174 + /* Fully free the copy pool pages. */ 174 175 while (j--) { 175 176 page_ref_sub(rx->qpl_copy_pool[j].page, 176 177 rx->qpl_copy_pool[j].pagecnt_bias - 1); 177 178 put_page(rx->qpl_copy_pool[j].page); 178 179 } 179 - alloc_err: 180 + 181 + /* Do not fully free QPL pages - only remove the bias added in this 182 + * function with gve_setup_rx_buffer. 183 + */ 184 + while (i--) 185 + page_ref_sub(rx->data.page_info[i].page, 186 + rx->data.page_info[i].pagecnt_bias - 1); 187 + 188 + gve_unassign_qpl(priv, rx->data.qpl->id); 189 + rx->data.qpl = NULL; 190 + 191 + return err; 192 + 193 + alloc_err_rda: 180 194 while (i--) 181 195 gve_rx_free_buffer(&priv->pdev->dev, 182 196 &rx->data.page_info[i],
+2 -2
drivers/net/ethernet/intel/i40e/i40e_common.c
··· 1095 1095 I40E_PFLAN_QALLOC_FIRSTQ_SHIFT; 1096 1096 j = (val & I40E_PFLAN_QALLOC_LASTQ_MASK) >> 1097 1097 I40E_PFLAN_QALLOC_LASTQ_SHIFT; 1098 - if (val & I40E_PFLAN_QALLOC_VALID_MASK) 1098 + if (val & I40E_PFLAN_QALLOC_VALID_MASK && j >= base_queue) 1099 1099 num_queues = (j - base_queue) + 1; 1100 1100 else 1101 1101 num_queues = 0; ··· 1105 1105 I40E_PF_VT_PFALLOC_FIRSTVF_SHIFT; 1106 1106 j = (val & I40E_PF_VT_PFALLOC_LASTVF_MASK) >> 1107 1107 I40E_PF_VT_PFALLOC_LASTVF_SHIFT; 1108 - if (val & I40E_PF_VT_PFALLOC_VALID_MASK) 1108 + if (val & I40E_PF_VT_PFALLOC_VALID_MASK && j >= i) 1109 1109 num_vfs = (j - i) + 1; 1110 1110 else 1111 1111 num_vfs = 0;
+1 -2
drivers/net/ethernet/intel/ice/ice_lib.c
··· 1201 1201 1202 1202 ctxt->info.q_opt_rss = ((lut_type << ICE_AQ_VSI_Q_OPT_RSS_LUT_S) & 1203 1203 ICE_AQ_VSI_Q_OPT_RSS_LUT_M) | 1204 - ((hash_type << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) & 1205 - ICE_AQ_VSI_Q_OPT_RSS_HASH_M); 1204 + (hash_type & ICE_AQ_VSI_Q_OPT_RSS_HASH_M); 1206 1205 } 1207 1206 1208 1207 static void
+18
drivers/net/ethernet/intel/ice/ice_main.c
··· 6 6 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 7 7 8 8 #include <generated/utsrelease.h> 9 + #include <linux/crash_dump.h> 9 10 #include "ice.h" 10 11 #include "ice_base.h" 11 12 #include "ice_lib.h" ··· 4688 4687 4689 4688 static void ice_deinit_features(struct ice_pf *pf) 4690 4689 { 4690 + if (ice_is_safe_mode(pf)) 4691 + return; 4692 + 4691 4693 ice_deinit_lag(pf); 4692 4694 if (test_bit(ICE_FLAG_DCB_CAPABLE, pf->flags)) 4693 4695 ice_cfg_lldp_mib_change(&pf->hw, false); ··· 5022 5018 if (pdev->is_virtfn) { 5023 5019 dev_err(dev, "can't probe a virtual function\n"); 5024 5020 return -EINVAL; 5021 + } 5022 + 5023 + /* when under a kdump kernel initiate a reset before enabling the 5024 + * device in order to clear out any pending DMA transactions. These 5025 + * transactions can cause some systems to machine check when doing 5026 + * the pcim_enable_device() below. 5027 + */ 5028 + if (is_kdump_kernel()) { 5029 + pci_save_state(pdev); 5030 + pci_clear_master(pdev); 5031 + err = pcie_flr(pdev); 5032 + if (err) 5033 + return err; 5034 + pci_restore_state(pdev); 5025 5035 } 5026 5036 5027 5037 /* this driver uses devres, see
+6 -7
drivers/net/ethernet/marvell/octeon_ep/octep_main.c
··· 898 898 hw_desc->dptr = tx_buffer->sglist_dma; 899 899 } 900 900 901 - /* Flush the hw descriptor before writing to doorbell */ 902 - wmb(); 903 - 904 - /* Ring Doorbell to notify the NIC there is a new packet */ 905 - writel(1, iq->doorbell_reg); 901 + netdev_tx_sent_queue(iq->netdev_q, skb->len); 902 + skb_tx_timestamp(skb); 906 903 atomic_inc(&iq->instr_pending); 907 904 wi++; 908 905 if (wi == iq->max_count) 909 906 wi = 0; 910 907 iq->host_write_index = wi; 908 + /* Flush the hw descriptor before writing to doorbell */ 909 + wmb(); 911 910 912 - netdev_tx_sent_queue(iq->netdev_q, skb->len); 911 + /* Ring Doorbell to notify the NIC there is a new packet */ 912 + writel(1, iq->doorbell_reg); 913 913 iq->stats.instr_posted++; 914 - skb_tx_timestamp(skb); 915 914 return NETDEV_TX_OK; 916 915 917 916 dma_map_sg_err:
+28 -36
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
··· 2256 2256 2257 2257 int mlx5_cmd_init(struct mlx5_core_dev *dev) 2258 2258 { 2259 - int size = sizeof(struct mlx5_cmd_prot_block); 2260 - int align = roundup_pow_of_two(size); 2261 2259 struct mlx5_cmd *cmd = &dev->cmd; 2262 - u32 cmd_l; 2263 - int err; 2264 2260 2265 - cmd->pool = dma_pool_create("mlx5_cmd", mlx5_core_dma_dev(dev), size, align, 0); 2266 - if (!cmd->pool) 2267 - return -ENOMEM; 2268 - 2269 - err = alloc_cmd_page(dev, cmd); 2270 - if (err) 2271 - goto err_free_pool; 2272 - 2273 - cmd_l = (u32)(cmd->dma); 2274 - if (cmd_l & 0xfff) { 2275 - mlx5_core_err(dev, "invalid command queue address\n"); 2276 - err = -ENOMEM; 2277 - goto err_cmd_page; 2278 - } 2279 2261 cmd->checksum_disabled = 1; 2280 2262 2281 2263 spin_lock_init(&cmd->alloc_lock); 2282 2264 spin_lock_init(&cmd->token_lock); 2283 2265 2284 - create_msg_cache(dev); 2285 - 2286 2266 set_wqname(dev); 2287 2267 cmd->wq = create_singlethread_workqueue(cmd->wq_name); 2288 2268 if (!cmd->wq) { 2289 2269 mlx5_core_err(dev, "failed to create command workqueue\n"); 2290 - err = -ENOMEM; 2291 - goto err_cache; 2270 + return -ENOMEM; 2292 2271 } 2293 2272 2294 2273 mlx5_cmdif_debugfs_init(dev); 2295 2274 2296 2275 return 0; 2297 - 2298 - err_cache: 2299 - destroy_msg_cache(dev); 2300 - err_cmd_page: 2301 - free_cmd_page(dev, cmd); 2302 - err_free_pool: 2303 - dma_pool_destroy(cmd->pool); 2304 - return err; 2305 2276 } 2306 2277 2307 2278 void mlx5_cmd_cleanup(struct mlx5_core_dev *dev) ··· 2281 2310 2282 2311 mlx5_cmdif_debugfs_cleanup(dev); 2283 2312 destroy_workqueue(cmd->wq); 2284 - destroy_msg_cache(dev); 2285 - free_cmd_page(dev, cmd); 2286 - dma_pool_destroy(cmd->pool); 2287 2313 } 2288 2314 2289 2315 int mlx5_cmd_enable(struct mlx5_core_dev *dev) 2290 2316 { 2317 + int size = sizeof(struct mlx5_cmd_prot_block); 2318 + int align = roundup_pow_of_two(size); 2291 2319 struct mlx5_cmd *cmd = &dev->cmd; 2292 2320 u32 cmd_h, cmd_l; 2321 + int err; 2293 2322 2294 2323 memset(&cmd->vars, 0, sizeof(cmd->vars)); 2295 2324 cmd->vars.cmdif_rev = cmdif_rev(dev); ··· 2322 2351 sema_init(&cmd->vars.pages_sem, 1); 2323 2352 sema_init(&cmd->vars.throttle_sem, DIV_ROUND_UP(cmd->vars.max_reg_cmds, 2)); 2324 2353 2354 + cmd->pool = dma_pool_create("mlx5_cmd", mlx5_core_dma_dev(dev), size, align, 0); 2355 + if (!cmd->pool) 2356 + return -ENOMEM; 2357 + 2358 + err = alloc_cmd_page(dev, cmd); 2359 + if (err) 2360 + goto err_free_pool; 2361 + 2325 2362 cmd_h = (u32)((u64)(cmd->dma) >> 32); 2326 2363 cmd_l = (u32)(cmd->dma); 2327 - if (WARN_ON(cmd_l & 0xfff)) 2328 - return -EINVAL; 2364 + if (cmd_l & 0xfff) { 2365 + mlx5_core_err(dev, "invalid command queue address\n"); 2366 + err = -ENOMEM; 2367 + goto err_cmd_page; 2368 + } 2329 2369 2330 2370 iowrite32be(cmd_h, &dev->iseg->cmdq_addr_h); 2331 2371 iowrite32be(cmd_l, &dev->iseg->cmdq_addr_l_sz); ··· 2349 2367 cmd->mode = CMD_MODE_POLLING; 2350 2368 cmd->allowed_opcode = CMD_ALLOWED_OPCODE_ALL; 2351 2369 2370 + create_msg_cache(dev); 2352 2371 create_debugfs_files(dev); 2353 2372 2354 2373 return 0; 2374 + 2375 + err_cmd_page: 2376 + free_cmd_page(dev, cmd); 2377 + err_free_pool: 2378 + dma_pool_destroy(cmd->pool); 2379 + return err; 2355 2380 } 2356 2381 2357 2382 void mlx5_cmd_disable(struct mlx5_core_dev *dev) 2358 2383 { 2359 2384 struct mlx5_cmd *cmd = &dev->cmd; 2360 2385 2361 - clean_debug_files(dev); 2362 2386 flush_workqueue(cmd->wq); 2387 + clean_debug_files(dev); 2388 + destroy_msg_cache(dev); 2389 + free_cmd_page(dev, cmd); 2390 + dma_pool_destroy(cmd->pool); 2363 2391 } 2364 2392 2365 2393 void mlx5_cmd_set_state(struct mlx5_core_dev *dev,
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
··· 848 848 849 849 mlx5_core_dbg(tracer->dev, "FWTracer: ownership changed, current=(%d)\n", tracer->owner); 850 850 if (tracer->owner) { 851 - tracer->owner = false; 851 + mlx5_fw_tracer_ownership_acquire(tracer); 852 852 return; 853 853 } 854 854
+11
drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
··· 467 467 /* only handle the event on peers */ 468 468 if (mlx5_esw_bridge_is_local(dev, rep, esw)) 469 469 break; 470 + 471 + fdb_info = container_of(info, 472 + struct switchdev_notifier_fdb_info, 473 + info); 474 + /* Mark for deletion to prevent the update wq task from 475 + * spuriously refreshing the entry which would mark it again as 476 + * offloaded in SW bridge. After this fallthrough to regular 477 + * async delete code. 478 + */ 479 + mlx5_esw_bridge_fdb_mark_deleted(dev, vport_num, esw_owner_vhca_id, br_offloads, 480 + fdb_info); 470 481 fallthrough; 471 482 case SWITCHDEV_FDB_ADD_TO_DEVICE: 472 483 case SWITCHDEV_FDB_DEL_TO_DEVICE:
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
··· 24 24 25 25 route_dev = dev_get_by_index(dev_net(e->out_dev), e->route_dev_ifindex); 26 26 27 - if (!route_dev || !netif_is_ovs_master(route_dev)) 27 + if (!route_dev || !netif_is_ovs_master(route_dev) || 28 + attr->parse_attr->filter_dev == e->out_dev) 28 29 goto out; 29 30 30 31 err = mlx5e_set_fwd_to_int_port_actions(priv, attr, e->route_dev_ifindex,
+4 -4
drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
··· 874 874 } 875 875 876 876 out: 877 - if (flags & XDP_XMIT_FLUSH) { 878 - if (sq->mpwqe.wqe) 879 - mlx5e_xdp_mpwqe_complete(sq); 877 + if (sq->mpwqe.wqe) 878 + mlx5e_xdp_mpwqe_complete(sq); 879 + 880 + if (flags & XDP_XMIT_FLUSH) 880 881 mlx5e_xmit_xdp_doorbell(sq); 881 - } 882 882 883 883 return nxmit; 884 884 }
+9 -1
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 701 701 702 702 /* update HW stats in background for next time */ 703 703 mlx5e_queue_update_stats(priv); 704 - memcpy(stats, &priv->stats.vf_vport, sizeof(*stats)); 704 + mlx5e_stats_copy_rep_stats(stats, &priv->stats.rep_stats); 705 705 } 706 706 707 707 static int mlx5e_rep_change_mtu(struct net_device *netdev, int new_mtu) ··· 769 769 770 770 static void mlx5e_build_rep_params(struct net_device *netdev) 771 771 { 772 + const bool take_rtnl = netdev->reg_state == NETREG_REGISTERED; 772 773 struct mlx5e_priv *priv = netdev_priv(netdev); 773 774 struct mlx5e_rep_priv *rpriv = priv->ppriv; 774 775 struct mlx5_eswitch_rep *rep = rpriv->rep; ··· 795 794 /* RQ */ 796 795 mlx5e_build_rq_params(mdev, params); 797 796 797 + /* If netdev is already registered (e.g. move from nic profile to uplink, 798 + * RTNL lock must be held before triggering netdev notifiers. 799 + */ 800 + if (take_rtnl) 801 + rtnl_lock(); 798 802 /* update XDP supported features */ 799 803 mlx5e_set_xdp_feature(netdev); 804 + if (take_rtnl) 805 + rtnl_unlock(); 800 806 801 807 /* CQ moderation params */ 802 808 params->rx_dim_enabled = MLX5_CAP_GEN(mdev, cq_moderation);
+26 -9
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 457 457 static int mlx5e_refill_rx_wqes(struct mlx5e_rq *rq, u16 ix, int wqe_bulk) 458 458 { 459 459 int remaining = wqe_bulk; 460 - int i = 0; 460 + int total_alloc = 0; 461 + int refill_alloc; 462 + int refill; 461 463 462 464 /* The WQE bulk is split into smaller bulks that are sized 463 465 * according to the page pool cache refill size to avoid overflowing 464 466 * the page pool cache due to too many page releases at once. 465 467 */ 466 468 do { 467 - int refill = min_t(u16, rq->wqe.info.refill_unit, remaining); 468 - int alloc_count; 469 + refill = min_t(u16, rq->wqe.info.refill_unit, remaining); 469 470 470 - mlx5e_free_rx_wqes(rq, ix + i, refill); 471 - alloc_count = mlx5e_alloc_rx_wqes(rq, ix + i, refill); 472 - i += alloc_count; 473 - if (unlikely(alloc_count != refill)) 474 - break; 471 + mlx5e_free_rx_wqes(rq, ix + total_alloc, refill); 472 + refill_alloc = mlx5e_alloc_rx_wqes(rq, ix + total_alloc, refill); 473 + if (unlikely(refill_alloc != refill)) 474 + goto err_free; 475 475 476 + total_alloc += refill_alloc; 476 477 remaining -= refill; 477 478 } while (remaining); 478 479 479 - return i; 480 + return total_alloc; 481 + 482 + err_free: 483 + mlx5e_free_rx_wqes(rq, ix, total_alloc + refill_alloc); 484 + 485 + for (int i = 0; i < total_alloc + refill; i++) { 486 + int j = mlx5_wq_cyc_ctr2ix(&rq->wqe.wq, ix + i); 487 + struct mlx5e_wqe_frag_info *frag; 488 + 489 + frag = get_frag(rq, j); 490 + for (int k = 0; k < rq->wqe.info.num_frags; k++, frag++) 491 + frag->flags |= BIT(MLX5E_WQE_FRAG_SKIP_RELEASE); 492 + } 493 + 494 + return 0; 480 495 } 481 496 482 497 static void ··· 830 815 frag_page--; 831 816 mlx5e_page_release_fragmented(rq, frag_page); 832 817 } 818 + 819 + bitmap_fill(wi->skip_release_bitmap, rq->mpwqe.pages_per_wqe); 833 820 834 821 err: 835 822 rq->stats->buff_alloc_err++;
+10 -1
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
··· 484 484 struct mlx5e_vnic_env_stats vnic; 485 485 struct mlx5e_vport_stats vport; 486 486 struct mlx5e_pport_stats pport; 487 - struct rtnl_link_stats64 vf_vport; 488 487 struct mlx5e_pcie_stats pcie; 489 488 struct mlx5e_rep_stats rep_stats; 490 489 }; 490 + 491 + static inline void mlx5e_stats_copy_rep_stats(struct rtnl_link_stats64 *vf_vport, 492 + struct mlx5e_rep_stats *rep_stats) 493 + { 494 + memset(vf_vport, 0, sizeof(*vf_vport)); 495 + vf_vport->rx_packets = rep_stats->vport_rx_packets; 496 + vf_vport->tx_packets = rep_stats->vport_tx_packets; 497 + vf_vport->rx_bytes = rep_stats->vport_rx_bytes; 498 + vf_vport->tx_bytes = rep_stats->vport_tx_bytes; 499 + } 491 500 492 501 extern mlx5e_stats_grp_t mlx5e_nic_stats_grps[]; 493 502 unsigned int mlx5e_nic_stats_grps_num(struct mlx5e_priv *priv);
+3 -2
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 4974 4974 if (err) 4975 4975 return err; 4976 4976 4977 - rpriv->prev_vf_vport_stats = priv->stats.vf_vport; 4977 + mlx5e_stats_copy_rep_stats(&rpriv->prev_vf_vport_stats, 4978 + &priv->stats.rep_stats); 4978 4979 break; 4979 4980 default: 4980 4981 NL_SET_ERR_MSG_MOD(extack, "mlx5 supports only police action for matchall"); ··· 5015 5014 u64 dbytes; 5016 5015 u64 dpkts; 5017 5016 5018 - cur_stats = priv->stats.vf_vport; 5017 + mlx5e_stats_copy_rep_stats(&cur_stats, &priv->stats.rep_stats); 5019 5018 dpkts = cur_stats.rx_packets - rpriv->prev_vf_vport_stats.rx_packets; 5020 5019 dbytes = cur_stats.rx_bytes - rpriv->prev_vf_vport_stats.rx_bytes; 5021 5020 rpriv->prev_vf_vport_stats = cur_stats;
+24 -1
drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
··· 1748 1748 entry->lastuse = jiffies; 1749 1749 } 1750 1750 1751 + void mlx5_esw_bridge_fdb_mark_deleted(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id, 1752 + struct mlx5_esw_bridge_offloads *br_offloads, 1753 + struct switchdev_notifier_fdb_info *fdb_info) 1754 + { 1755 + struct mlx5_esw_bridge_fdb_entry *entry; 1756 + struct mlx5_esw_bridge *bridge; 1757 + 1758 + bridge = mlx5_esw_bridge_from_port_lookup(vport_num, esw_owner_vhca_id, br_offloads); 1759 + if (!bridge) 1760 + return; 1761 + 1762 + entry = mlx5_esw_bridge_fdb_lookup(bridge, fdb_info->addr, fdb_info->vid); 1763 + if (!entry) { 1764 + esw_debug(br_offloads->esw->dev, 1765 + "FDB mark deleted entry with specified key not found (MAC=%pM,vid=%u,vport=%u)\n", 1766 + fdb_info->addr, fdb_info->vid, vport_num); 1767 + return; 1768 + } 1769 + 1770 + entry->flags |= MLX5_ESW_BRIDGE_FLAG_DELETED; 1771 + } 1772 + 1751 1773 void mlx5_esw_bridge_fdb_create(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id, 1752 1774 struct mlx5_esw_bridge_offloads *br_offloads, 1753 1775 struct switchdev_notifier_fdb_info *fdb_info) ··· 1832 1810 unsigned long lastuse = 1833 1811 (unsigned long)mlx5_fc_query_lastuse(entry->ingress_counter); 1834 1812 1835 - if (entry->flags & MLX5_ESW_BRIDGE_FLAG_ADDED_BY_USER) 1813 + if (entry->flags & (MLX5_ESW_BRIDGE_FLAG_ADDED_BY_USER | 1814 + MLX5_ESW_BRIDGE_FLAG_DELETED)) 1836 1815 continue; 1837 1816 1838 1817 if (time_after(lastuse, entry->lastuse))
+3
drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.h
··· 62 62 void mlx5_esw_bridge_fdb_update_used(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id, 63 63 struct mlx5_esw_bridge_offloads *br_offloads, 64 64 struct switchdev_notifier_fdb_info *fdb_info); 65 + void mlx5_esw_bridge_fdb_mark_deleted(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id, 66 + struct mlx5_esw_bridge_offloads *br_offloads, 67 + struct switchdev_notifier_fdb_info *fdb_info); 65 68 void mlx5_esw_bridge_fdb_create(struct net_device *dev, u16 vport_num, u16 esw_owner_vhca_id, 66 69 struct mlx5_esw_bridge_offloads *br_offloads, 67 70 struct switchdev_notifier_fdb_info *fdb_info);
+1
drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_priv.h
··· 133 133 enum { 134 134 MLX5_ESW_BRIDGE_FLAG_ADDED_BY_USER = BIT(0), 135 135 MLX5_ESW_BRIDGE_FLAG_PEER = BIT(1), 136 + MLX5_ESW_BRIDGE_FLAG_DELETED = BIT(2), 136 137 }; 137 138 138 139 enum {
+8 -9
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
··· 1038 1038 return ERR_PTR(err); 1039 1039 } 1040 1040 1041 - static void mlx5_eswitch_event_handlers_register(struct mlx5_eswitch *esw) 1041 + static void mlx5_eswitch_event_handler_register(struct mlx5_eswitch *esw) 1042 1042 { 1043 - MLX5_NB_INIT(&esw->nb, eswitch_vport_event, NIC_VPORT_CHANGE); 1044 - mlx5_eq_notifier_register(esw->dev, &esw->nb); 1045 - 1046 1043 if (esw->mode == MLX5_ESWITCH_OFFLOADS && mlx5_eswitch_is_funcs_handler(esw->dev)) { 1047 1044 MLX5_NB_INIT(&esw->esw_funcs.nb, mlx5_esw_funcs_changed_handler, 1048 1045 ESW_FUNCTIONS_CHANGED); ··· 1047 1050 } 1048 1051 } 1049 1052 1050 - static void mlx5_eswitch_event_handlers_unregister(struct mlx5_eswitch *esw) 1053 + static void mlx5_eswitch_event_handler_unregister(struct mlx5_eswitch *esw) 1051 1054 { 1052 1055 if (esw->mode == MLX5_ESWITCH_OFFLOADS && mlx5_eswitch_is_funcs_handler(esw->dev)) 1053 1056 mlx5_eq_notifier_unregister(esw->dev, &esw->esw_funcs.nb); 1054 - 1055 - mlx5_eq_notifier_unregister(esw->dev, &esw->nb); 1056 1057 1057 1058 flush_workqueue(esw->work_queue); 1058 1059 } ··· 1478 1483 1479 1484 mlx5_eswitch_update_num_of_vfs(esw, num_vfs); 1480 1485 1486 + MLX5_NB_INIT(&esw->nb, eswitch_vport_event, NIC_VPORT_CHANGE); 1487 + mlx5_eq_notifier_register(esw->dev, &esw->nb); 1488 + 1481 1489 if (esw->mode == MLX5_ESWITCH_LEGACY) { 1482 1490 err = esw_legacy_enable(esw); 1483 1491 } else { ··· 1493 1495 1494 1496 esw->fdb_table.flags |= MLX5_ESW_FDB_CREATED; 1495 1497 1496 - mlx5_eswitch_event_handlers_register(esw); 1498 + mlx5_eswitch_event_handler_register(esw); 1497 1499 1498 1500 esw_info(esw->dev, "Enable: mode(%s), nvfs(%d), necvfs(%d), active vports(%d)\n", 1499 1501 esw->mode == MLX5_ESWITCH_LEGACY ? "LEGACY" : "OFFLOADS", ··· 1620 1622 */ 1621 1623 mlx5_esw_mode_change_notify(esw, MLX5_ESWITCH_LEGACY); 1622 1624 1623 - mlx5_eswitch_event_handlers_unregister(esw); 1625 + mlx5_eq_notifier_unregister(esw->dev, &esw->nb); 1626 + mlx5_eswitch_event_handler_unregister(esw); 1624 1627 1625 1628 esw_info(esw->dev, "Disable: mode(%s), nvfs(%d), necvfs(%d), active vports(%d)\n", 1626 1629 esw->mode == MLX5_ESWITCH_LEGACY ? "LEGACY" : "OFFLOADS",
+5 -2
drivers/net/ethernet/qlogic/qed/qed_ll2.c
··· 113 113 static int qed_ll2_alloc_buffer(struct qed_dev *cdev, 114 114 u8 **data, dma_addr_t *phys_addr) 115 115 { 116 - *data = kmalloc(cdev->ll2->rx_size, GFP_ATOMIC); 116 + size_t size = cdev->ll2->rx_size + NET_SKB_PAD + 117 + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); 118 + 119 + *data = kmalloc(size, GFP_ATOMIC); 117 120 if (!(*data)) { 118 121 DP_INFO(cdev, "Failed to allocate LL2 buffer data\n"); 119 122 return -ENOMEM; ··· 2592 2589 INIT_LIST_HEAD(&cdev->ll2->list); 2593 2590 spin_lock_init(&cdev->ll2->lock); 2594 2591 2595 - cdev->ll2->rx_size = NET_SKB_PAD + ETH_HLEN + 2592 + cdev->ll2->rx_size = PRM_DMA_PAD_BYTES_NUM + ETH_HLEN + 2596 2593 L1_CACHE_BYTES + params->mtu; 2597 2594 2598 2595 /* Allocate memory for LL2.
+5
drivers/net/ethernet/ti/Kconfig
··· 90 90 The unit can time stamp PTP UDP/IPv4 and Layer 2 packets, and the 91 91 driver offers a PTP Hardware Clock. 92 92 93 + config TI_K3_CPPI_DESC_POOL 94 + tristate 95 + 93 96 config TI_K3_AM65_CPSW_NUSS 94 97 tristate "TI K3 AM654x/J721E CPSW Ethernet driver" 95 98 depends on ARCH_K3 && OF && TI_K3_UDMA_GLUE_LAYER 96 99 select NET_DEVLINK 97 100 select TI_DAVINCI_MDIO 98 101 select PHYLINK 102 + select TI_K3_CPPI_DESC_POOL 99 103 imply PHY_TI_GMII_SEL 100 104 depends on TI_K3_AM65_CPTS || !TI_K3_AM65_CPTS 101 105 help ··· 184 180 tristate "TI Gigabit PRU Ethernet driver" 185 181 select PHYLIB 186 182 select TI_ICSS_IEP 183 + select TI_K3_CPPI_DESC_POOL 187 184 depends on PRU_REMOTEPROC 188 185 depends on ARCH_K3 && OF && TI_K3_UDMA_GLUE_LAYER 189 186 help
+4 -3
drivers/net/ethernet/ti/Makefile
··· 23 23 obj-$(CONFIG_TI_KEYSTONE_NETCP_ETHSS) += keystone_netcp_ethss.o 24 24 keystone_netcp_ethss-y := netcp_ethss.o netcp_sgmii.o netcp_xgbepcsr.o cpsw_ale.o 25 25 26 + obj-$(CONFIG_TI_K3_CPPI_DESC_POOL) += k3-cppi-desc-pool.o 27 + 26 28 obj-$(CONFIG_TI_K3_AM65_CPSW_NUSS) += ti-am65-cpsw-nuss.o 27 - ti-am65-cpsw-nuss-y := am65-cpsw-nuss.o cpsw_sl.o am65-cpsw-ethtool.o cpsw_ale.o k3-cppi-desc-pool.o am65-cpsw-qos.o 29 + ti-am65-cpsw-nuss-y := am65-cpsw-nuss.o cpsw_sl.o am65-cpsw-ethtool.o cpsw_ale.o am65-cpsw-qos.o 28 30 ti-am65-cpsw-nuss-$(CONFIG_TI_K3_AM65_CPSW_SWITCHDEV) += am65-cpsw-switchdev.o 29 31 obj-$(CONFIG_TI_K3_AM65_CPTS) += am65-cpts.o 30 32 31 33 obj-$(CONFIG_TI_ICSSG_PRUETH) += icssg-prueth.o 32 - icssg-prueth-y := k3-cppi-desc-pool.o \ 33 - icssg/icssg_prueth.o \ 34 + icssg-prueth-y := icssg/icssg_prueth.o \ 34 35 icssg/icssg_classifier.o \ 35 36 icssg/icssg_queues.o \ 36 37 icssg/icssg_config.o \
+2 -2
drivers/net/ethernet/ti/icssg/icssg_config.c
··· 379 379 380 380 /* Bitmask for ICSSG r30 commands */ 381 381 static const struct icssg_r30_cmd emac_r32_bitmask[] = { 382 - {{0xffff0004, 0xffff0100, 0xffff0100, EMAC_NONE}}, /* EMAC_PORT_DISABLE */ 382 + {{0xffff0004, 0xffff0100, 0xffff0004, EMAC_NONE}}, /* EMAC_PORT_DISABLE */ 383 383 {{0xfffb0040, 0xfeff0200, 0xfeff0200, EMAC_NONE}}, /* EMAC_PORT_BLOCK */ 384 - {{0xffbb0000, 0xfcff0000, 0xdcff0000, EMAC_NONE}}, /* EMAC_PORT_FORWARD */ 384 + {{0xffbb0000, 0xfcff0000, 0xdcfb0000, EMAC_NONE}}, /* EMAC_PORT_FORWARD */ 385 385 {{0xffbb0000, 0xfcff0000, 0xfcff2000, EMAC_NONE}}, /* EMAC_PORT_FORWARD_WO_LEARNING */ 386 386 {{0xffff0001, EMAC_NONE, EMAC_NONE, EMAC_NONE}}, /* ACCEPT ALL */ 387 387 {{0xfffe0002, EMAC_NONE, EMAC_NONE, EMAC_NONE}}, /* ACCEPT TAGGED */
+9
drivers/net/ethernet/ti/icssg/icssg_stats.c
··· 9 9 #include "icssg_stats.h" 10 10 #include <linux/regmap.h> 11 11 12 + #define ICSSG_TX_PACKET_OFFSET 0xA0 13 + #define ICSSG_TX_BYTE_OFFSET 0xEC 14 + 12 15 static u32 stats_base[] = { 0x54c, /* Slice 0 stats start */ 13 16 0xb18, /* Slice 1 stats start */ 14 17 }; ··· 21 18 struct prueth *prueth = emac->prueth; 22 19 int slice = prueth_emac_slice(emac); 23 20 u32 base = stats_base[slice]; 21 + u32 tx_pkt_cnt = 0; 24 22 u32 val; 25 23 int i; 26 24 ··· 33 29 base + icssg_all_stats[i].offset, 34 30 val); 35 31 32 + if (icssg_all_stats[i].offset == ICSSG_TX_PACKET_OFFSET) 33 + tx_pkt_cnt = val; 34 + 36 35 emac->stats[i] += val; 36 + if (icssg_all_stats[i].offset == ICSSG_TX_BYTE_OFFSET) 37 + emac->stats[i] -= tx_pkt_cnt * 8; 37 38 } 38 39 } 39 40
+10
drivers/net/ethernet/ti/k3-cppi-desc-pool.c
··· 39 39 40 40 gen_pool_destroy(pool->gen_pool); /* frees pool->name */ 41 41 } 42 + EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_destroy); 42 43 43 44 struct k3_cppi_desc_pool * 44 45 k3_cppi_desc_pool_create_name(struct device *dev, size_t size, ··· 99 98 devm_kfree(pool->dev, pool); 100 99 return ERR_PTR(ret); 101 100 } 101 + EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_create_name); 102 102 103 103 dma_addr_t k3_cppi_desc_pool_virt2dma(struct k3_cppi_desc_pool *pool, 104 104 void *addr) 105 105 { 106 106 return addr ? pool->dma_addr + (addr - pool->cpumem) : 0; 107 107 } 108 + EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_virt2dma); 108 109 109 110 void *k3_cppi_desc_pool_dma2virt(struct k3_cppi_desc_pool *pool, dma_addr_t dma) 110 111 { 111 112 return dma ? pool->cpumem + (dma - pool->dma_addr) : NULL; 112 113 } 114 + EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_dma2virt); 113 115 114 116 void *k3_cppi_desc_pool_alloc(struct k3_cppi_desc_pool *pool) 115 117 { 116 118 return (void *)gen_pool_alloc(pool->gen_pool, pool->desc_size); 117 119 } 120 + EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_alloc); 118 121 119 122 void k3_cppi_desc_pool_free(struct k3_cppi_desc_pool *pool, void *addr) 120 123 { 121 124 gen_pool_free(pool->gen_pool, (unsigned long)addr, pool->desc_size); 122 125 } 126 + EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_free); 123 127 124 128 size_t k3_cppi_desc_pool_avail(struct k3_cppi_desc_pool *pool) 125 129 { 126 130 return gen_pool_avail(pool->gen_pool) / pool->desc_size; 127 131 } 132 + EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_avail); 133 + 134 + MODULE_LICENSE("GPL"); 135 + MODULE_DESCRIPTION("TI K3 CPPI5 descriptors pool API");
+47
drivers/net/mdio/mdio-mux.c
··· 55 55 return r; 56 56 } 57 57 58 + static int mdio_mux_read_c45(struct mii_bus *bus, int phy_id, int dev_addr, 59 + int regnum) 60 + { 61 + struct mdio_mux_child_bus *cb = bus->priv; 62 + struct mdio_mux_parent_bus *pb = cb->parent; 63 + int r; 64 + 65 + mutex_lock_nested(&pb->mii_bus->mdio_lock, MDIO_MUTEX_MUX); 66 + r = pb->switch_fn(pb->current_child, cb->bus_number, pb->switch_data); 67 + if (r) 68 + goto out; 69 + 70 + pb->current_child = cb->bus_number; 71 + 72 + r = pb->mii_bus->read_c45(pb->mii_bus, phy_id, dev_addr, regnum); 73 + out: 74 + mutex_unlock(&pb->mii_bus->mdio_lock); 75 + 76 + return r; 77 + } 78 + 58 79 /* 59 80 * The parent bus' lock is used to order access to the switch_fn. 60 81 */ ··· 95 74 pb->current_child = cb->bus_number; 96 75 97 76 r = pb->mii_bus->write(pb->mii_bus, phy_id, regnum, val); 77 + out: 78 + mutex_unlock(&pb->mii_bus->mdio_lock); 79 + 80 + return r; 81 + } 82 + 83 + static int mdio_mux_write_c45(struct mii_bus *bus, int phy_id, int dev_addr, 84 + int regnum, u16 val) 85 + { 86 + struct mdio_mux_child_bus *cb = bus->priv; 87 + struct mdio_mux_parent_bus *pb = cb->parent; 88 + 89 + int r; 90 + 91 + mutex_lock_nested(&pb->mii_bus->mdio_lock, MDIO_MUTEX_MUX); 92 + r = pb->switch_fn(pb->current_child, cb->bus_number, pb->switch_data); 93 + if (r) 94 + goto out; 95 + 96 + pb->current_child = cb->bus_number; 97 + 98 + r = pb->mii_bus->write_c45(pb->mii_bus, phy_id, dev_addr, regnum, val); 98 99 out: 99 100 mutex_unlock(&pb->mii_bus->mdio_lock); 100 101 ··· 216 173 cb->mii_bus->parent = dev; 217 174 cb->mii_bus->read = mdio_mux_read; 218 175 cb->mii_bus->write = mdio_mux_write; 176 + if (parent_bus->read_c45) 177 + cb->mii_bus->read_c45 = mdio_mux_read_c45; 178 + if (parent_bus->write_c45) 179 + cb->mii_bus->write_c45 = mdio_mux_write_c45; 219 180 r = of_mdiobus_register(cb->mii_bus, child_bus_node); 220 181 if (r) { 221 182 mdiobus_free(cb->mii_bus);
+3
drivers/net/phy/bcm7xxx.c
··· 894 894 .name = _name, \ 895 895 /* PHY_BASIC_FEATURES */ \ 896 896 .flags = PHY_IS_INTERNAL, \ 897 + .get_sset_count = bcm_phy_get_sset_count, \ 898 + .get_strings = bcm_phy_get_strings, \ 899 + .get_stats = bcm7xxx_28nm_get_phy_stats, \ 897 900 .probe = bcm7xxx_28nm_probe, \ 898 901 .config_init = bcm7xxx_16nm_ephy_config_init, \ 899 902 .config_aneg = genphy_config_aneg, \
+5 -2
drivers/net/tun.c
··· 3073 3073 struct net *net = sock_net(&tfile->sk); 3074 3074 struct tun_struct *tun; 3075 3075 void __user* argp = (void __user*)arg; 3076 - unsigned int ifindex, carrier; 3076 + unsigned int carrier; 3077 3077 struct ifreq ifr; 3078 3078 kuid_t owner; 3079 3079 kgid_t group; 3080 + int ifindex; 3080 3081 int sndbuf; 3081 3082 int vnet_hdr_sz; 3082 3083 int le; ··· 3133 3132 ret = -EFAULT; 3134 3133 if (copy_from_user(&ifindex, argp, sizeof(ifindex))) 3135 3134 goto unlock; 3136 - 3135 + ret = -EINVAL; 3136 + if (ifindex < 0) 3137 + goto unlock; 3137 3138 ret = 0; 3138 3139 tfile->ifindex = ifindex; 3139 3140 goto unlock;
+1 -1
drivers/net/usb/smsc95xx.c
··· 897 897 898 898 if (timeout >= 100) { 899 899 netdev_warn(dev->net, "timeout waiting for completion of Lite Reset\n"); 900 - return ret; 900 + return -ETIMEDOUT; 901 901 } 902 902 903 903 ret = smsc95xx_set_mac_address(dev);
+8 -8
drivers/net/virtio_net.c
··· 607 607 608 608 --dma->ref; 609 609 610 - if (dma->ref) { 611 - if (dma->need_sync && len) { 612 - offset = buf - (head + sizeof(*dma)); 610 + if (dma->need_sync && len) { 611 + offset = buf - (head + sizeof(*dma)); 613 612 614 - virtqueue_dma_sync_single_range_for_cpu(rq->vq, dma->addr, offset, 615 - len, DMA_FROM_DEVICE); 616 - } 617 - 618 - return; 613 + virtqueue_dma_sync_single_range_for_cpu(rq->vq, dma->addr, 614 + offset, len, 615 + DMA_FROM_DEVICE); 619 616 } 617 + 618 + if (dma->ref) 619 + return; 620 620 621 621 virtqueue_dma_unmap_single_attrs(rq->vq, dma->addr, dma->len, 622 622 DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
-17
drivers/net/wwan/iosm/iosm_ipc_imem.c
··· 4 4 */ 5 5 6 6 #include <linux/delay.h> 7 - #include <linux/pm_runtime.h> 8 7 9 8 #include "iosm_ipc_chnl_cfg.h" 10 9 #include "iosm_ipc_devlink.h" ··· 631 632 /* Complete all memory stores after setting bit */ 632 633 smp_mb__after_atomic(); 633 634 634 - if (ipc_imem->pcie->pci->device == INTEL_CP_DEVICE_7560_ID) { 635 - pm_runtime_mark_last_busy(ipc_imem->dev); 636 - pm_runtime_put_autosuspend(ipc_imem->dev); 637 - } 638 - 639 635 return; 640 636 641 637 err_ipc_mux_deinit: ··· 1234 1240 1235 1241 /* forward MDM_NOT_READY to listeners */ 1236 1242 ipc_uevent_send(ipc_imem->dev, UEVENT_MDM_NOT_READY); 1237 - pm_runtime_get_sync(ipc_imem->dev); 1238 1243 1239 1244 hrtimer_cancel(&ipc_imem->td_alloc_timer); 1240 1245 hrtimer_cancel(&ipc_imem->tdupdate_timer); ··· 1419 1426 1420 1427 set_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag); 1421 1428 } 1422 - 1423 - if (!pm_runtime_enabled(ipc_imem->dev)) 1424 - pm_runtime_enable(ipc_imem->dev); 1425 - 1426 - pm_runtime_set_autosuspend_delay(ipc_imem->dev, 1427 - IPC_MEM_AUTO_SUSPEND_DELAY_MS); 1428 - pm_runtime_use_autosuspend(ipc_imem->dev); 1429 - pm_runtime_allow(ipc_imem->dev); 1430 - pm_runtime_mark_last_busy(ipc_imem->dev); 1431 - 1432 1429 return ipc_imem; 1433 1430 devlink_channel_fail: 1434 1431 ipc_devlink_deinit(ipc_imem->ipc_devlink);
-2
drivers/net/wwan/iosm/iosm_ipc_imem.h
··· 103 103 #define FULLY_FUNCTIONAL 0 104 104 #define IOSM_DEVLINK_INIT 1 105 105 106 - #define IPC_MEM_AUTO_SUSPEND_DELAY_MS 5000 107 - 108 106 /* List of the supported UL/DL pipes. */ 109 107 enum ipc_mem_pipes { 110 108 IPC_MEM_PIPE_0 = 0,
+1 -3
drivers/net/wwan/iosm/iosm_ipc_pcie.c
··· 6 6 #include <linux/acpi.h> 7 7 #include <linux/bitfield.h> 8 8 #include <linux/module.h> 9 - #include <linux/pm_runtime.h> 10 9 #include <net/rtnetlink.h> 11 10 12 11 #include "iosm_ipc_imem.h" ··· 437 438 return 0; 438 439 } 439 440 440 - static DEFINE_RUNTIME_DEV_PM_OPS(iosm_ipc_pm, ipc_pcie_suspend_cb, 441 - ipc_pcie_resume_cb, NULL); 441 + static SIMPLE_DEV_PM_OPS(iosm_ipc_pm, ipc_pcie_suspend_cb, ipc_pcie_resume_cb); 442 442 443 443 static struct pci_driver iosm_ipc_driver = { 444 444 .name = KBUILD_MODNAME,
+1 -16
drivers/net/wwan/iosm/iosm_ipc_port.c
··· 3 3 * Copyright (C) 2020-21 Intel Corporation. 4 4 */ 5 5 6 - #include <linux/pm_runtime.h> 7 - 8 6 #include "iosm_ipc_chnl_cfg.h" 9 7 #include "iosm_ipc_imem_ops.h" 10 8 #include "iosm_ipc_port.h" ··· 13 15 struct iosm_cdev *ipc_port = wwan_port_get_drvdata(port); 14 16 int ret = 0; 15 17 16 - pm_runtime_get_sync(ipc_port->ipc_imem->dev); 17 18 ipc_port->channel = ipc_imem_sys_port_open(ipc_port->ipc_imem, 18 19 ipc_port->chl_id, 19 20 IPC_HP_CDEV_OPEN); 20 21 if (!ipc_port->channel) 21 22 ret = -EIO; 22 - 23 - pm_runtime_mark_last_busy(ipc_port->ipc_imem->dev); 24 - pm_runtime_put_autosuspend(ipc_port->ipc_imem->dev); 25 23 26 24 return ret; 27 25 } ··· 27 33 { 28 34 struct iosm_cdev *ipc_port = wwan_port_get_drvdata(port); 29 35 30 - pm_runtime_get_sync(ipc_port->ipc_imem->dev); 31 36 ipc_imem_sys_port_close(ipc_port->ipc_imem, ipc_port->channel); 32 - pm_runtime_mark_last_busy(ipc_port->ipc_imem->dev); 33 - pm_runtime_put_autosuspend(ipc_port->ipc_imem->dev); 34 37 } 35 38 36 39 /* transfer control data to modem */ 37 40 static int ipc_port_ctrl_tx(struct wwan_port *port, struct sk_buff *skb) 38 41 { 39 42 struct iosm_cdev *ipc_port = wwan_port_get_drvdata(port); 40 - int ret; 41 43 42 - pm_runtime_get_sync(ipc_port->ipc_imem->dev); 43 - ret = ipc_imem_sys_cdev_write(ipc_port, skb); 44 - pm_runtime_mark_last_busy(ipc_port->ipc_imem->dev); 45 - pm_runtime_put_autosuspend(ipc_port->ipc_imem->dev); 46 - 47 - return ret; 44 + return ipc_imem_sys_cdev_write(ipc_port, skb); 48 45 } 49 46 50 47 static const struct wwan_port_ops ipc_wwan_ctrl_ops = {
-8
drivers/net/wwan/iosm/iosm_ipc_trace.c
··· 3 3 * Copyright (C) 2020-2021 Intel Corporation. 4 4 */ 5 5 6 - #include <linux/pm_runtime.h> 7 6 #include <linux/wwan.h> 8 - 9 7 #include "iosm_ipc_trace.h" 10 8 11 9 /* sub buffer size and number of sub buffer */ ··· 97 99 if (ret) 98 100 return ret; 99 101 100 - pm_runtime_get_sync(ipc_trace->ipc_imem->dev); 101 - 102 102 mutex_lock(&ipc_trace->trc_mutex); 103 103 if (val == TRACE_ENABLE && ipc_trace->mode != TRACE_ENABLE) { 104 104 ipc_trace->channel = ipc_imem_sys_port_open(ipc_trace->ipc_imem, ··· 117 121 ret = count; 118 122 unlock: 119 123 mutex_unlock(&ipc_trace->trc_mutex); 120 - 121 - pm_runtime_mark_last_busy(ipc_trace->ipc_imem->dev); 122 - pm_runtime_put_autosuspend(ipc_trace->ipc_imem->dev); 123 - 124 124 return ret; 125 125 } 126 126
+2 -19
drivers/net/wwan/iosm/iosm_ipc_wwan.c
··· 6 6 #include <linux/etherdevice.h> 7 7 #include <linux/if_arp.h> 8 8 #include <linux/if_link.h> 9 - #include <linux/pm_runtime.h> 10 9 #include <linux/rtnetlink.h> 11 10 #include <linux/wwan.h> 12 11 #include <net/pkt_sched.h> ··· 51 52 struct iosm_netdev_priv *priv = wwan_netdev_drvpriv(netdev); 52 53 struct iosm_wwan *ipc_wwan = priv->ipc_wwan; 53 54 int if_id = priv->if_id; 54 - int ret = 0; 55 55 56 56 if (if_id < IP_MUX_SESSION_START || 57 57 if_id >= ARRAY_SIZE(ipc_wwan->sub_netlist)) 58 58 return -EINVAL; 59 59 60 - pm_runtime_get_sync(ipc_wwan->ipc_imem->dev); 61 60 /* get channel id */ 62 61 priv->ch_id = ipc_imem_sys_wwan_open(ipc_wwan->ipc_imem, if_id); 63 62 ··· 63 66 dev_err(ipc_wwan->dev, 64 67 "cannot connect wwan0 & id %d to the IPC mem layer", 65 68 if_id); 66 - ret = -ENODEV; 67 - goto err_out; 69 + return -ENODEV; 68 70 } 69 71 70 72 /* enable tx path, DL data may follow */ ··· 72 76 dev_dbg(ipc_wwan->dev, "Channel id %d allocated to if_id %d", 73 77 priv->ch_id, priv->if_id); 74 78 75 - err_out: 76 - pm_runtime_mark_last_busy(ipc_wwan->ipc_imem->dev); 77 - pm_runtime_put_autosuspend(ipc_wwan->ipc_imem->dev); 78 - 79 - return ret; 79 + return 0; 80 80 } 81 81 82 82 /* Bring-down the wwan net link */ ··· 82 90 83 91 netif_stop_queue(netdev); 84 92 85 - pm_runtime_get_sync(priv->ipc_wwan->ipc_imem->dev); 86 93 ipc_imem_sys_wwan_close(priv->ipc_wwan->ipc_imem, priv->if_id, 87 94 priv->ch_id); 88 95 priv->ch_id = -1; 89 - pm_runtime_mark_last_busy(priv->ipc_wwan->ipc_imem->dev); 90 - pm_runtime_put_autosuspend(priv->ipc_wwan->ipc_imem->dev); 91 96 92 97 return 0; 93 98 } ··· 106 117 if_id >= ARRAY_SIZE(ipc_wwan->sub_netlist)) 107 118 return -EINVAL; 108 119 109 - pm_runtime_get(ipc_wwan->ipc_imem->dev); 110 120 /* Send the SKB to device for transmission */ 111 121 ret = ipc_imem_sys_wwan_transmit(ipc_wwan->ipc_imem, 112 122 if_id, priv->ch_id, skb); ··· 119 131 ret = NETDEV_TX_BUSY; 120 132 dev_err(ipc_wwan->dev, "unable to push packets"); 121 133 } else { 122 - pm_runtime_mark_last_busy(ipc_wwan->ipc_imem->dev); 123 - pm_runtime_put_autosuspend(ipc_wwan->ipc_imem->dev); 124 134 goto exit; 125 135 } 126 - 127 - pm_runtime_mark_last_busy(ipc_wwan->ipc_imem->dev); 128 - pm_runtime_put_autosuspend(ipc_wwan->ipc_imem->dev); 129 136 130 137 return ret; 131 138
+2 -1
drivers/perf/riscv_pmu.c
··· 23 23 return ((event->attr.type == PERF_TYPE_HARDWARE) || 24 24 (event->attr.type == PERF_TYPE_HW_CACHE) || 25 25 (event->attr.type == PERF_TYPE_RAW)) && 26 - !!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT); 26 + !!(event->hw.flags & PERF_EVENT_FLAG_USER_READ_CNT) && 27 + (event->hw.idx != -1); 27 28 } 28 29 29 30 void arch_perf_update_userpage(struct perf_event *event,
+10 -6
drivers/perf/riscv_pmu_sbi.c
··· 510 510 { 511 511 struct perf_event *event = (struct perf_event *)arg; 512 512 513 - csr_write(CSR_SCOUNTEREN, 514 - csr_read(CSR_SCOUNTEREN) | (1 << pmu_sbi_csr_index(event))); 513 + if (event->hw.idx != -1) 514 + csr_write(CSR_SCOUNTEREN, 515 + csr_read(CSR_SCOUNTEREN) | (1 << pmu_sbi_csr_index(event))); 515 516 } 516 517 517 518 static void pmu_sbi_reset_scounteren(void *arg) 518 519 { 519 520 struct perf_event *event = (struct perf_event *)arg; 520 521 521 - csr_write(CSR_SCOUNTEREN, 522 - csr_read(CSR_SCOUNTEREN) & ~(1 << pmu_sbi_csr_index(event))); 522 + if (event->hw.idx != -1) 523 + csr_write(CSR_SCOUNTEREN, 524 + csr_read(CSR_SCOUNTEREN) & ~(1 << pmu_sbi_csr_index(event))); 523 525 } 524 526 525 527 static void pmu_sbi_ctr_start(struct perf_event *event, u64 ival) ··· 543 541 544 542 if ((hwc->flags & PERF_EVENT_FLAG_USER_ACCESS) && 545 543 (hwc->flags & PERF_EVENT_FLAG_USER_READ_CNT)) 546 - pmu_sbi_set_scounteren((void *)event); 544 + on_each_cpu_mask(mm_cpumask(event->owner->mm), 545 + pmu_sbi_set_scounteren, (void *)event, 1); 547 546 } 548 547 549 548 static void pmu_sbi_ctr_stop(struct perf_event *event, unsigned long flag) ··· 554 551 555 552 if ((hwc->flags & PERF_EVENT_FLAG_USER_ACCESS) && 556 553 (hwc->flags & PERF_EVENT_FLAG_USER_READ_CNT)) 557 - pmu_sbi_reset_scounteren((void *)event); 554 + on_each_cpu_mask(mm_cpumask(event->owner->mm), 555 + pmu_sbi_reset_scounteren, (void *)event, 1); 558 556 559 557 ret = sbi_ecall(SBI_EXT_PMU, SBI_EXT_PMU_COUNTER_STOP, hwc->idx, 1, flag, 0, 0, 0); 560 558 if (ret.error && (ret.error != SBI_ERR_ALREADY_STOPPED) &&
+4 -4
drivers/power/supply/qcom_battmgr.c
··· 105 105 106 106 struct qcom_battmgr_update_request { 107 107 struct pmic_glink_hdr hdr; 108 - u32 battery_id; 108 + __le32 battery_id; 109 109 }; 110 110 111 111 struct qcom_battmgr_charge_time_request { ··· 1282 1282 { 1283 1283 struct qcom_battmgr *battmgr = container_of(work, struct qcom_battmgr, enable_work); 1284 1284 struct qcom_battmgr_enable_request req = { 1285 - .hdr.owner = PMIC_GLINK_OWNER_BATTMGR, 1286 - .hdr.type = PMIC_GLINK_NOTIFY, 1287 - .hdr.opcode = BATTMGR_REQUEST_NOTIFICATION, 1285 + .hdr.owner = cpu_to_le32(PMIC_GLINK_OWNER_BATTMGR), 1286 + .hdr.type = cpu_to_le32(PMIC_GLINK_NOTIFY), 1287 + .hdr.opcode = cpu_to_le32(BATTMGR_REQUEST_NOTIFICATION), 1288 1288 }; 1289 1289 int ret; 1290 1290
+4
drivers/soundwire/Makefile
··· 15 15 soundwire-bus-y += debugfs.o 16 16 endif 17 17 18 + ifdef CONFIG_IRQ_DOMAIN 19 + soundwire-bus-y += irq.o 20 + endif 21 + 18 22 #AMD driver 19 23 soundwire-amd-y := amd_manager.o 20 24 obj-$(CONFIG_SOUNDWIRE_AMD) += soundwire-amd.o
+5 -26
drivers/soundwire/bus.c
··· 3 3 4 4 #include <linux/acpi.h> 5 5 #include <linux/delay.h> 6 - #include <linux/irq.h> 7 6 #include <linux/mod_devicetable.h> 8 7 #include <linux/pm_runtime.h> 9 8 #include <linux/soundwire/sdw_registers.h> 10 9 #include <linux/soundwire/sdw.h> 11 10 #include <linux/soundwire/sdw_type.h> 12 11 #include "bus.h" 12 + #include "irq.h" 13 13 #include "sysfs_local.h" 14 14 15 15 static DEFINE_IDA(sdw_bus_ida); ··· 24 24 bus->id = rc; 25 25 return 0; 26 26 } 27 - 28 - static int sdw_irq_map(struct irq_domain *h, unsigned int virq, 29 - irq_hw_number_t hw) 30 - { 31 - struct sdw_bus *bus = h->host_data; 32 - 33 - irq_set_chip_data(virq, bus); 34 - irq_set_chip(virq, &bus->irq_chip); 35 - irq_set_nested_thread(virq, 1); 36 - irq_set_noprobe(virq); 37 - 38 - return 0; 39 - } 40 - 41 - static const struct irq_domain_ops sdw_domain_ops = { 42 - .map = sdw_irq_map, 43 - }; 44 27 45 28 /** 46 29 * sdw_bus_master_add() - add a bus Master instance ··· 151 168 bus->params.curr_bank = SDW_BANK0; 152 169 bus->params.next_bank = SDW_BANK1; 153 170 154 - bus->irq_chip.name = dev_name(bus->dev); 155 - bus->domain = irq_domain_create_linear(fwnode, SDW_MAX_DEVICES, 156 - &sdw_domain_ops, bus); 157 - if (!bus->domain) { 158 - dev_err(bus->dev, "Failed to add IRQ domain\n"); 159 - return -EINVAL; 160 - } 171 + ret = sdw_irq_create(bus, fwnode); 172 + if (ret) 173 + return ret; 161 174 162 175 return 0; 163 176 } ··· 192 213 { 193 214 device_for_each_child(bus->dev, NULL, sdw_delete_slave); 194 215 195 - irq_domain_remove(bus->domain); 216 + sdw_irq_delete(bus); 196 217 197 218 sdw_master_device_del(bus); 198 219
+4 -7
drivers/soundwire/bus_type.c
··· 7 7 #include <linux/soundwire/sdw.h> 8 8 #include <linux/soundwire/sdw_type.h> 9 9 #include "bus.h" 10 + #include "irq.h" 10 11 #include "sysfs_local.h" 11 12 12 13 /** ··· 123 122 if (drv->ops && drv->ops->read_prop) 124 123 drv->ops->read_prop(slave); 125 124 126 - if (slave->prop.use_domain_irq) { 127 - slave->irq = irq_create_mapping(slave->bus->domain, slave->dev_num); 128 - if (!slave->irq) 129 - dev_warn(dev, "Failed to map IRQ\n"); 130 - } 125 + if (slave->prop.use_domain_irq) 126 + sdw_irq_create_mapping(slave); 131 127 132 128 /* init the sysfs as we have properties now */ 133 129 ret = sdw_slave_sysfs_init(slave); ··· 174 176 slave->probed = false; 175 177 176 178 if (slave->prop.use_domain_irq) 177 - irq_dispose_mapping(irq_find_mapping(slave->bus->domain, 178 - slave->dev_num)); 179 + sdw_irq_dispose_mapping(slave); 179 180 180 181 mutex_unlock(&slave->sdw_dev_lock); 181 182
+59
drivers/soundwire/irq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (C) 2023 Cirrus Logic, Inc. and 3 + // Cirrus Logic International Semiconductor Ltd. 4 + 5 + #include <linux/device.h> 6 + #include <linux/fwnode.h> 7 + #include <linux/irq.h> 8 + #include <linux/irqdomain.h> 9 + #include <linux/soundwire/sdw.h> 10 + #include "irq.h" 11 + 12 + static int sdw_irq_map(struct irq_domain *h, unsigned int virq, 13 + irq_hw_number_t hw) 14 + { 15 + struct sdw_bus *bus = h->host_data; 16 + 17 + irq_set_chip_data(virq, bus); 18 + irq_set_chip(virq, &bus->irq_chip); 19 + irq_set_nested_thread(virq, 1); 20 + irq_set_noprobe(virq); 21 + 22 + return 0; 23 + } 24 + 25 + static const struct irq_domain_ops sdw_domain_ops = { 26 + .map = sdw_irq_map, 27 + }; 28 + 29 + int sdw_irq_create(struct sdw_bus *bus, 30 + struct fwnode_handle *fwnode) 31 + { 32 + bus->irq_chip.name = dev_name(bus->dev); 33 + 34 + bus->domain = irq_domain_create_linear(fwnode, SDW_MAX_DEVICES, 35 + &sdw_domain_ops, bus); 36 + if (!bus->domain) { 37 + dev_err(bus->dev, "Failed to add IRQ domain\n"); 38 + return -EINVAL; 39 + } 40 + 41 + return 0; 42 + } 43 + 44 + void sdw_irq_delete(struct sdw_bus *bus) 45 + { 46 + irq_domain_remove(bus->domain); 47 + } 48 + 49 + void sdw_irq_create_mapping(struct sdw_slave *slave) 50 + { 51 + slave->irq = irq_create_mapping(slave->bus->domain, slave->dev_num); 52 + if (!slave->irq) 53 + dev_warn(&slave->dev, "Failed to map IRQ\n"); 54 + } 55 + 56 + void sdw_irq_dispose_mapping(struct sdw_slave *slave) 57 + { 58 + irq_dispose_mapping(irq_find_mapping(slave->bus->domain, slave->dev_num)); 59 + }
+43
drivers/soundwire/irq.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2023 Cirrus Logic, Inc. and 4 + * Cirrus Logic International Semiconductor Ltd. 5 + */ 6 + 7 + #ifndef __SDW_IRQ_H 8 + #define __SDW_IRQ_H 9 + 10 + #include <linux/soundwire/sdw.h> 11 + #include <linux/fwnode.h> 12 + 13 + #if IS_ENABLED(CONFIG_IRQ_DOMAIN) 14 + 15 + int sdw_irq_create(struct sdw_bus *bus, 16 + struct fwnode_handle *fwnode); 17 + void sdw_irq_delete(struct sdw_bus *bus); 18 + void sdw_irq_create_mapping(struct sdw_slave *slave); 19 + void sdw_irq_dispose_mapping(struct sdw_slave *slave); 20 + 21 + #else /* CONFIG_IRQ_DOMAIN */ 22 + 23 + static inline int sdw_irq_create(struct sdw_bus *bus, 24 + struct fwnode_handle *fwnode) 25 + { 26 + return 0; 27 + } 28 + 29 + static inline void sdw_irq_delete(struct sdw_bus *bus) 30 + { 31 + } 32 + 33 + static inline void sdw_irq_create_mapping(struct sdw_slave *slave) 34 + { 35 + } 36 + 37 + static inline void sdw_irq_dispose_mapping(struct sdw_slave *slave) 38 + { 39 + } 40 + 41 + #endif /* CONFIG_IRQ_DOMAIN */ 42 + 43 + #endif /* __SDW_IRQ_H */
+3 -2
drivers/spi/spi-npcm-fiu.c
··· 353 353 uma_cfg |= ilog2(op->cmd.buswidth); 354 354 uma_cfg |= ilog2(op->addr.buswidth) 355 355 << NPCM_FIU_UMA_CFG_ADBPCK_SHIFT; 356 - uma_cfg |= ilog2(op->dummy.buswidth) 357 - << NPCM_FIU_UMA_CFG_DBPCK_SHIFT; 356 + if (op->dummy.nbytes) 357 + uma_cfg |= ilog2(op->dummy.buswidth) 358 + << NPCM_FIU_UMA_CFG_DBPCK_SHIFT; 358 359 uma_cfg |= ilog2(op->data.buswidth) 359 360 << NPCM_FIU_UMA_CFG_RDBPCK_SHIFT; 360 361 uma_cfg |= op->dummy.nbytes << NPCM_FIU_UMA_CFG_DBSIZ_SHIFT;
+20 -20
drivers/thunderbolt/icm.c
··· 41 41 #define PHY_PORT_CS1_LINK_STATE_SHIFT 26 42 42 43 43 #define ICM_TIMEOUT 5000 /* ms */ 44 + #define ICM_RETRIES 3 44 45 #define ICM_APPROVE_TIMEOUT 10000 /* ms */ 45 46 #define ICM_MAX_LINK 4 46 47 ··· 297 296 298 297 static int icm_request(struct tb *tb, const void *request, size_t request_size, 299 298 void *response, size_t response_size, size_t npackets, 300 - unsigned int timeout_msec) 299 + int retries, unsigned int timeout_msec) 301 300 { 302 301 struct icm *icm = tb_priv(tb); 303 - int retries = 3; 304 302 305 303 do { 306 304 struct tb_cfg_request *req; ··· 410 410 return -ENOMEM; 411 411 412 412 ret = icm_request(tb, &request, sizeof(request), switches, 413 - sizeof(*switches), npackets, ICM_TIMEOUT); 413 + sizeof(*switches), npackets, ICM_RETRIES, ICM_TIMEOUT); 414 414 if (ret) 415 415 goto err_free; 416 416 ··· 463 463 464 464 memset(&reply, 0, sizeof(reply)); 465 465 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 466 - 1, ICM_TIMEOUT); 466 + 1, ICM_RETRIES, ICM_TIMEOUT); 467 467 if (ret) 468 468 return ret; 469 469 ··· 488 488 memset(&reply, 0, sizeof(reply)); 489 489 /* Use larger timeout as establishing tunnels can take some time */ 490 490 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 491 - 1, ICM_APPROVE_TIMEOUT); 491 + 1, ICM_RETRIES, ICM_APPROVE_TIMEOUT); 492 492 if (ret) 493 493 return ret; 494 494 ··· 515 515 516 516 memset(&reply, 0, sizeof(reply)); 517 517 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 518 - 1, ICM_TIMEOUT); 518 + 1, ICM_RETRIES, ICM_TIMEOUT); 519 519 if (ret) 520 520 return ret; 521 521 ··· 543 543 544 544 memset(&reply, 0, sizeof(reply)); 545 545 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 546 - 1, ICM_TIMEOUT); 546 + 1, ICM_RETRIES, ICM_TIMEOUT); 547 547 if (ret) 548 548 return ret; 549 549 ··· 577 577 578 578 memset(&reply, 0, sizeof(reply)); 579 579 ret = icm_request(tb, &request, sizeof(request), &reply, 
sizeof(reply), 580 - 1, ICM_TIMEOUT); 580 + 1, ICM_RETRIES, ICM_TIMEOUT); 581 581 if (ret) 582 582 return ret; 583 583 ··· 1020 1020 1021 1021 memset(&reply, 0, sizeof(reply)); 1022 1022 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 1023 - 1, 20000); 1023 + 1, 10, 2000); 1024 1024 if (ret) 1025 1025 return ret; 1026 1026 ··· 1053 1053 1054 1054 memset(&reply, 0, sizeof(reply)); 1055 1055 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 1056 - 1, ICM_APPROVE_TIMEOUT); 1056 + 1, ICM_RETRIES, ICM_APPROVE_TIMEOUT); 1057 1057 if (ret) 1058 1058 return ret; 1059 1059 ··· 1081 1081 1082 1082 memset(&reply, 0, sizeof(reply)); 1083 1083 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 1084 - 1, ICM_TIMEOUT); 1084 + 1, ICM_RETRIES, ICM_TIMEOUT); 1085 1085 if (ret) 1086 1086 return ret; 1087 1087 ··· 1110 1110 1111 1111 memset(&reply, 0, sizeof(reply)); 1112 1112 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 1113 - 1, ICM_TIMEOUT); 1113 + 1, ICM_RETRIES, ICM_TIMEOUT); 1114 1114 if (ret) 1115 1115 return ret; 1116 1116 ··· 1144 1144 1145 1145 memset(&reply, 0, sizeof(reply)); 1146 1146 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 1147 - 1, ICM_TIMEOUT); 1147 + 1, ICM_RETRIES, ICM_TIMEOUT); 1148 1148 if (ret) 1149 1149 return ret; 1150 1150 ··· 1170 1170 1171 1171 memset(&reply, 0, sizeof(reply)); 1172 1172 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 1173 - 1, ICM_TIMEOUT); 1173 + 1, ICM_RETRIES, ICM_TIMEOUT); 1174 1174 if (ret) 1175 1175 return ret; 1176 1176 ··· 1496 1496 1497 1497 memset(&reply, 0, sizeof(reply)); 1498 1498 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 1499 - 1, ICM_TIMEOUT); 1499 + 1, ICM_RETRIES, ICM_TIMEOUT); 1500 1500 if (ret) 1501 1501 return ret; 1502 1502 ··· 1522 1522 1523 1523 memset(&reply, 0, sizeof(reply)); 1524 1524 ret = icm_request(tb, &request, sizeof(request), 
&reply, sizeof(reply), 1525 - 1, ICM_TIMEOUT); 1525 + 1, ICM_RETRIES, ICM_TIMEOUT); 1526 1526 if (ret) 1527 1527 return ret; 1528 1528 ··· 1543 1543 1544 1544 memset(&reply, 0, sizeof(reply)); 1545 1545 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 1546 - 1, ICM_TIMEOUT); 1546 + 1, ICM_RETRIES, ICM_TIMEOUT); 1547 1547 if (ret) 1548 1548 return ret; 1549 1549 ··· 1604 1604 1605 1605 memset(&reply, 0, sizeof(reply)); 1606 1606 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 1607 - 1, ICM_TIMEOUT); 1607 + 1, ICM_RETRIES, ICM_TIMEOUT); 1608 1608 if (ret) 1609 1609 return ret; 1610 1610 ··· 1626 1626 1627 1627 memset(&reply, 0, sizeof(reply)); 1628 1628 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 1629 - 1, 20000); 1629 + 1, ICM_RETRIES, 20000); 1630 1630 if (ret) 1631 1631 return ret; 1632 1632 ··· 2298 2298 2299 2299 memset(&reply, 0, sizeof(reply)); 2300 2300 ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 2301 - 1, ICM_TIMEOUT); 2301 + 1, ICM_RETRIES, ICM_TIMEOUT); 2302 2302 if (ret) 2303 2303 return ret; 2304 2304
+7
drivers/thunderbolt/switch.c
··· 2725 2725 !tb_port_is_width_supported(down, TB_LINK_WIDTH_DUAL)) 2726 2726 return 0; 2727 2727 2728 + /* 2729 + * Both lanes need to be in CL0. Here we assume lane 0 already be in 2730 + * CL0 and check just for lane 1. 2731 + */ 2732 + if (tb_wait_for_port(down->dual_link_port, false) <= 0) 2733 + return -ENOTCONN; 2734 + 2728 2735 ret = tb_port_lane_bonding_enable(up); 2729 2736 if (ret) { 2730 2737 tb_port_warn(up, "failed to enable lane bonding\n");
+1 -1
drivers/thunderbolt/tmu.c
··· 382 382 } else if (ucap && tb_port_tmu_is_unidirectional(up)) { 383 383 if (tmu_rates[TB_SWITCH_TMU_MODE_LOWRES] == rate) 384 384 sw->tmu.mode = TB_SWITCH_TMU_MODE_LOWRES; 385 - else if (tmu_rates[TB_SWITCH_TMU_MODE_LOWRES] == rate) 385 + else if (tmu_rates[TB_SWITCH_TMU_MODE_HIFI_UNI] == rate) 386 386 sw->tmu.mode = TB_SWITCH_TMU_MODE_HIFI_UNI; 387 387 } else if (rate) { 388 388 sw->tmu.mode = TB_SWITCH_TMU_MODE_HIFI_BI;
+41 -17
drivers/thunderbolt/xdomain.c
··· 703 703 mutex_unlock(&xdomain_lock); 704 704 } 705 705 706 + static void start_handshake(struct tb_xdomain *xd) 707 + { 708 + xd->state = XDOMAIN_STATE_INIT; 709 + queue_delayed_work(xd->tb->wq, &xd->state_work, 710 + msecs_to_jiffies(XDOMAIN_SHORT_TIMEOUT)); 711 + } 712 + 713 + /* Can be called from state_work */ 714 + static void __stop_handshake(struct tb_xdomain *xd) 715 + { 716 + cancel_delayed_work_sync(&xd->properties_changed_work); 717 + xd->properties_changed_retries = 0; 718 + xd->state_retries = 0; 719 + } 720 + 721 + static void stop_handshake(struct tb_xdomain *xd) 722 + { 723 + cancel_delayed_work_sync(&xd->state_work); 724 + __stop_handshake(xd); 725 + } 726 + 706 727 static void tb_xdp_handle_request(struct work_struct *work) 707 728 { 708 729 struct xdomain_request_work *xw = container_of(work, typeof(*xw), work); ··· 786 765 case UUID_REQUEST: 787 766 tb_dbg(tb, "%llx: received XDomain UUID request\n", route); 788 767 ret = tb_xdp_uuid_response(ctl, route, sequence, uuid); 768 + /* 769 + * If we've stopped the discovery with an error such as 770 + * timing out, we will restart the handshake now that we 771 + * received UUID request from the remote host. 
772 + */ 773 + if (!ret && xd && xd->state == XDOMAIN_STATE_ERROR) { 774 + dev_dbg(&xd->dev, "restarting handshake\n"); 775 + start_handshake(xd); 776 + } 789 777 break; 790 778 791 779 case LINK_STATE_STATUS_REQUEST: ··· 1551 1521 msecs_to_jiffies(XDOMAIN_SHORT_TIMEOUT)); 1552 1522 } 1553 1523 1524 + static void tb_xdomain_failed(struct tb_xdomain *xd) 1525 + { 1526 + xd->state = XDOMAIN_STATE_ERROR; 1527 + queue_delayed_work(xd->tb->wq, &xd->state_work, 1528 + msecs_to_jiffies(XDOMAIN_DEFAULT_TIMEOUT)); 1529 + } 1530 + 1554 1531 static void tb_xdomain_state_work(struct work_struct *work) 1555 1532 { 1556 1533 struct tb_xdomain *xd = container_of(work, typeof(*xd), state_work.work); ··· 1584 1547 if (ret) { 1585 1548 if (ret == -EAGAIN) 1586 1549 goto retry_state; 1587 - xd->state = XDOMAIN_STATE_ERROR; 1550 + tb_xdomain_failed(xd); 1588 1551 } else { 1589 1552 tb_xdomain_queue_properties_changed(xd); 1590 1553 if (xd->bonding_possible) ··· 1649 1612 if (ret) { 1650 1613 if (ret == -EAGAIN) 1651 1614 goto retry_state; 1652 - xd->state = XDOMAIN_STATE_ERROR; 1615 + tb_xdomain_failed(xd); 1653 1616 } else { 1654 1617 xd->state = XDOMAIN_STATE_ENUMERATED; 1655 1618 } ··· 1660 1623 break; 1661 1624 1662 1625 case XDOMAIN_STATE_ERROR: 1626 + dev_dbg(&xd->dev, "discovery failed, stopping handshake\n"); 1627 + __stop_handshake(xd); 1663 1628 break; 1664 1629 1665 1630 default: ··· 1870 1831 kfree(xd->device_name); 1871 1832 kfree(xd->vendor_name); 1872 1833 kfree(xd); 1873 - } 1874 - 1875 - static void start_handshake(struct tb_xdomain *xd) 1876 - { 1877 - xd->state = XDOMAIN_STATE_INIT; 1878 - queue_delayed_work(xd->tb->wq, &xd->state_work, 1879 - msecs_to_jiffies(XDOMAIN_SHORT_TIMEOUT)); 1880 - } 1881 - 1882 - static void stop_handshake(struct tb_xdomain *xd) 1883 - { 1884 - cancel_delayed_work_sync(&xd->properties_changed_work); 1885 - cancel_delayed_work_sync(&xd->state_work); 1886 - xd->properties_changed_retries = 0; 1887 - xd->state_retries = 0; 1888 1834 } 1889 
1835 1890 1836 static int __maybe_unused tb_xdomain_suspend(struct device *dev)
+10 -15
drivers/tty/serial/8250/8250_omap.c
··· 1617 1617 { 1618 1618 struct omap8250_priv *priv = dev_get_drvdata(dev); 1619 1619 struct uart_8250_port *up = serial8250_get_port(priv->line); 1620 - int err; 1620 + int err = 0; 1621 1621 1622 1622 serial8250_suspend_port(priv->line); 1623 1623 ··· 1627 1627 if (!device_may_wakeup(dev)) 1628 1628 priv->wer = 0; 1629 1629 serial_out(up, UART_OMAP_WER, priv->wer); 1630 - err = pm_runtime_force_suspend(dev); 1630 + if (uart_console(&up->port) && console_suspend_enabled) 1631 + err = pm_runtime_force_suspend(dev); 1631 1632 flush_work(&priv->qos_work); 1632 1633 1633 1634 return err; ··· 1637 1636 static int omap8250_resume(struct device *dev) 1638 1637 { 1639 1638 struct omap8250_priv *priv = dev_get_drvdata(dev); 1639 + struct uart_8250_port *up = serial8250_get_port(priv->line); 1640 1640 int err; 1641 1641 1642 - err = pm_runtime_force_resume(dev); 1643 - if (err) 1644 - return err; 1642 + if (uart_console(&up->port) && console_suspend_enabled) { 1643 + err = pm_runtime_force_resume(dev); 1644 + if (err) 1645 + return err; 1646 + } 1647 + 1645 1648 serial8250_resume_port(priv->line); 1646 1649 /* Paired with pm_runtime_resume_and_get() in omap8250_suspend() */ 1647 1650 pm_runtime_mark_last_busy(dev); ··· 1722 1717 1723 1718 if (priv->line >= 0) 1724 1719 up = serial8250_get_port(priv->line); 1725 - /* 1726 - * When using 'no_console_suspend', the console UART must not be 1727 - * suspended. Since driver suspend is managed by runtime suspend, 1728 - * preventing runtime suspend (by returning error) will keep device 1729 - * active during suspend. 1730 - */ 1731 - if (priv->is_suspending && !console_suspend_enabled) { 1732 - if (up && uart_console(&up->port)) 1733 - return -EBUSY; 1734 - } 1735 1720 1736 1721 if (priv->habit & UART_ERRATA_CLOCK_DISABLE) { 1737 1722 int ret;
+10 -5
drivers/tty/serial/serial_core.c
··· 156 156 * enabled, serial_port_runtime_resume() calls start_tx() again 157 157 * after enabling the device. 158 158 */ 159 - if (pm_runtime_active(&port_dev->dev)) 159 + if (!pm_runtime_enabled(port->dev) || pm_runtime_active(port->dev)) 160 160 port->ops->start_tx(port); 161 161 pm_runtime_mark_last_busy(&port_dev->dev); 162 162 pm_runtime_put_autosuspend(&port_dev->dev); ··· 1404 1404 static int uart_rs485_config(struct uart_port *port) 1405 1405 { 1406 1406 struct serial_rs485 *rs485 = &port->rs485; 1407 + unsigned long flags; 1407 1408 int ret; 1409 + 1410 + if (!(rs485->flags & SER_RS485_ENABLED)) 1411 + return 0; 1408 1412 1409 1413 uart_sanitize_serial_rs485(port, rs485); 1410 1414 uart_set_rs485_termination(port, rs485); 1411 1415 1416 + spin_lock_irqsave(&port->lock, flags); 1412 1417 ret = port->rs485_config(port, NULL, rs485); 1418 + spin_unlock_irqrestore(&port->lock, flags); 1413 1419 if (ret) 1414 1420 memset(rs485, 0, sizeof(*rs485)); 1415 1421 ··· 2480 2474 if (ret == 0) { 2481 2475 if (tty) 2482 2476 uart_change_line_settings(tty, state, NULL); 2477 + uart_rs485_config(uport); 2483 2478 spin_lock_irq(&uport->lock); 2484 2479 if (!(uport->rs485.flags & SER_RS485_ENABLED)) 2485 2480 ops->set_mctrl(uport, uport->mctrl); 2486 - else 2487 - uart_rs485_config(uport); 2488 2481 ops->start_tx(uport); 2489 2482 spin_unlock_irq(&uport->lock); 2490 2483 tty_port_set_initialized(port, true); ··· 2592 2587 port->mctrl &= TIOCM_DTR; 2593 2588 if (!(port->rs485.flags & SER_RS485_ENABLED)) 2594 2589 port->ops->set_mctrl(port, port->mctrl); 2595 - else 2596 - uart_rs485_config(port); 2597 2590 spin_unlock_irqrestore(&port->lock, flags); 2591 + 2592 + uart_rs485_config(port); 2598 2593 2599 2594 /* 2600 2595 * If this driver supports console, and it hasn't been
+1 -1
drivers/ufs/core/ufshcd.c
··· 6895 6895 mask, 0, 1000, 1000); 6896 6896 6897 6897 dev_err(hba->dev, "Clearing task management function with tag %d %s\n", 6898 - tag, err ? "succeeded" : "failed"); 6898 + tag, err < 0 ? "failed" : "succeeded"); 6899 6899 6900 6900 out: 6901 6901 return err;
+3
drivers/usb/cdns3/cdnsp-gadget.c
··· 1125 1125 unsigned long flags; 1126 1126 int ret; 1127 1127 1128 + if (request->status != -EINPROGRESS) 1129 + return 0; 1130 + 1128 1131 if (!pep->endpoint.desc) { 1129 1132 dev_err(pdev->dev, 1130 1133 "%s: can't dequeue to disabled endpoint\n",
+1 -2
drivers/usb/cdns3/core.h
··· 131 131 #else /* CONFIG_PM_SLEEP */ 132 132 static inline int cdns_resume(struct cdns *cdns) 133 133 { return 0; } 134 - static inline int cdns_set_active(struct cdns *cdns, u8 set_active) 135 - { return 0; } 134 + static inline void cdns_set_active(struct cdns *cdns, u8 set_active) { } 136 135 static inline int cdns_suspend(struct cdns *cdns) 137 136 { return 0; } 138 137 #endif /* CONFIG_PM_SLEEP */
+22 -3
drivers/usb/core/hub.c
··· 151 151 if (udev->quirks & USB_QUIRK_NO_LPM) 152 152 return 0; 153 153 154 + /* Skip if the device BOS descriptor couldn't be read */ 155 + if (!udev->bos) 156 + return 0; 157 + 154 158 /* USB 2.1 (and greater) devices indicate LPM support through 155 159 * their USB 2.0 Extended Capabilities BOS descriptor. 156 160 */ ··· 329 325 unsigned int hub_u2_del; 330 326 331 327 if (!udev->lpm_capable || udev->speed < USB_SPEED_SUPER) 328 + return; 329 + 330 + /* Skip if the device BOS descriptor couldn't be read */ 331 + if (!udev->bos) 332 332 return; 333 333 334 334 hub = usb_hub_to_struct_hub(udev->parent); ··· 2712 2704 static enum usb_ssp_rate get_port_ssp_rate(struct usb_device *hdev, 2713 2705 u32 ext_portstatus) 2714 2706 { 2715 - struct usb_ssp_cap_descriptor *ssp_cap = hdev->bos->ssp_cap; 2707 + struct usb_ssp_cap_descriptor *ssp_cap; 2716 2708 u32 attr; 2717 2709 u8 speed_id; 2718 2710 u8 ssac; 2719 2711 u8 lanes; 2720 2712 int i; 2721 2713 2714 + if (!hdev->bos) 2715 + goto out; 2716 + 2717 + ssp_cap = hdev->bos->ssp_cap; 2722 2718 if (!ssp_cap) 2723 2719 goto out; 2724 2720 ··· 4227 4215 enum usb3_link_state state) 4228 4216 { 4229 4217 int timeout; 4230 - __u8 u1_mel = udev->bos->ss_cap->bU1devExitLat; 4231 - __le16 u2_mel = udev->bos->ss_cap->bU2DevExitLat; 4218 + __u8 u1_mel; 4219 + __le16 u2_mel; 4220 + 4221 + /* Skip if the device BOS descriptor couldn't be read */ 4222 + if (!udev->bos) 4223 + return; 4224 + 4225 + u1_mel = udev->bos->ss_cap->bU1devExitLat; 4226 + u2_mel = udev->bos->ss_cap->bU2DevExitLat; 4232 4227 4233 4228 /* If the device says it doesn't have *any* exit latency to come out of 4234 4229 * U1 or U2, it's probably lying. Assume it doesn't implement that link
+1 -1
drivers/usb/core/hub.h
··· 153 153 { 154 154 return (hdev->descriptor.bDeviceProtocol == USB_HUB_PR_SS && 155 155 le16_to_cpu(hdev->descriptor.bcdUSB) >= 0x0310 && 156 - hdev->bos->ssp_cap); 156 + hdev->bos && hdev->bos->ssp_cap); 157 157 } 158 158 159 159 static inline unsigned hub_power_on_good_delay(struct usb_hub *hub)
+38 -1
drivers/usb/dwc3/core.c
··· 279 279 * XHCI driver will reset the host block. If dwc3 was configured for 280 280 * host-only mode or current role is host, then we can return early. 281 281 */ 282 - if (dwc->dr_mode == USB_DR_MODE_HOST || dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST) 282 + if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST) 283 283 return 0; 284 + 285 + /* 286 + * If the dr_mode is host and the dwc->current_dr_role is not the 287 + * corresponding DWC3_GCTL_PRTCAP_HOST, then the dwc3_core_init_mode 288 + * isn't executed yet. Ensure the phy is ready before the controller 289 + * updates the GCTL.PRTCAPDIR or other settings by soft-resetting 290 + * the phy. 291 + * 292 + * Note: GUSB3PIPECTL[n] and GUSB2PHYCFG[n] are port settings where n 293 + * is port index. If this is a multiport host, then we need to reset 294 + * all active ports. 295 + */ 296 + if (dwc->dr_mode == USB_DR_MODE_HOST) { 297 + u32 usb3_port; 298 + u32 usb2_port; 299 + 300 + usb3_port = dwc3_readl(dwc->regs, DWC3_GUSB3PIPECTL(0)); 301 + usb3_port |= DWC3_GUSB3PIPECTL_PHYSOFTRST; 302 + dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), usb3_port); 303 + 304 + usb2_port = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0)); 305 + usb2_port |= DWC3_GUSB2PHYCFG_PHYSOFTRST; 306 + dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), usb2_port); 307 + 308 + /* Small delay for phy reset assertion */ 309 + usleep_range(1000, 2000); 310 + 311 + usb3_port &= ~DWC3_GUSB3PIPECTL_PHYSOFTRST; 312 + dwc3_writel(dwc->regs, DWC3_GUSB3PIPECTL(0), usb3_port); 313 + 314 + usb2_port &= ~DWC3_GUSB2PHYCFG_PHYSOFTRST; 315 + dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), usb2_port); 316 + 317 + /* Wait for clock synchronization */ 318 + msleep(50); 319 + return 0; 320 + } 284 321 285 322 reg = dwc3_readl(dwc->regs, DWC3_DCTL); 286 323 reg |= DWC3_DCTL_CSFTRST;
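The host-mode branch added above asserts PHYSOFTRST in the port's pipe-control and phy-config registers, waits, then deasserts it. Stripped of MMIO, the read-modify-write shape is the sketch below; the bit position is an assumption for illustration, with the real DWC3_GUSB3PIPECTL_PHYSOFTRST / DWC3_GUSB2PHYCFG_PHYSOFTRST macros defined in the driver's core.h:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed bit position for the sketch; see the dwc3 headers for the
 * real DWC3_*_PHYSOFTRST definitions. */
#define PHYSOFTRST (1u << 31)

/* Mirrors the patch's sequence on one register value: set the soft-
 * reset bit, let the phy settle (usleep_range() in the kernel, elided
 * here), then clear it, leaving every other bit untouched. */
static uint32_t phy_soft_reset(uint32_t reg)
{
    reg |= PHYSOFTRST;       /* assert reset */
    /* settle delay would go here */
    reg &= ~PHYSOFTRST;      /* deassert reset */
    return reg;
}
```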
+19 -7
drivers/usb/gadget/function/f_ncm.c
··· 1156 1156 struct sk_buff_head *list) 1157 1157 { 1158 1158 struct f_ncm *ncm = func_to_ncm(&port->func); 1159 - __le16 *tmp = (void *) skb->data; 1159 + unsigned char *ntb_ptr = skb->data; 1160 + __le16 *tmp; 1160 1161 unsigned index, index2; 1161 1162 int ndp_index; 1162 1163 unsigned dg_len, dg_len2; ··· 1170 1169 const struct ndp_parser_opts *opts = ncm->parser_opts; 1171 1170 unsigned crc_len = ncm->is_crc ? sizeof(uint32_t) : 0; 1172 1171 int dgram_counter; 1172 + int to_process = skb->len; 1173 + 1174 + parse_ntb: 1175 + tmp = (__le16 *)ntb_ptr; 1173 1176 1174 1177 /* dwSignature */ 1175 1178 if (get_unaligned_le32(tmp) != opts->nth_sign) { ··· 1220 1215 * walk through NDP 1221 1216 * dwSignature 1222 1217 */ 1223 - tmp = (void *)(skb->data + ndp_index); 1218 + tmp = (__le16 *)(ntb_ptr + ndp_index); 1224 1219 if (get_unaligned_le32(tmp) != ncm->ndp_sign) { 1225 1220 INFO(port->func.config->cdev, "Wrong NDP SIGN\n"); 1226 1221 goto err; ··· 1277 1272 if (ncm->is_crc) { 1278 1273 uint32_t crc, crc2; 1279 1274 1280 - crc = get_unaligned_le32(skb->data + 1275 + crc = get_unaligned_le32(ntb_ptr + 1281 1276 index + dg_len - 1282 1277 crc_len); 1283 1278 crc2 = ~crc32_le(~0, 1284 - skb->data + index, 1279 + ntb_ptr + index, 1285 1280 dg_len - crc_len); 1286 1281 if (crc != crc2) { 1287 1282 INFO(port->func.config->cdev, ··· 1308 1303 dg_len - crc_len); 1309 1304 if (skb2 == NULL) 1310 1305 goto err; 1311 - skb_put_data(skb2, skb->data + index, 1306 + skb_put_data(skb2, ntb_ptr + index, 1312 1307 dg_len - crc_len); 1313 1308 1314 1309 skb_queue_tail(list, skb2); ··· 1321 1316 } while (ndp_len > 2 * (opts->dgram_item_len * 2)); 1322 1317 } while (ndp_index); 1323 1318 1324 - dev_consume_skb_any(skb); 1325 - 1326 1319 VDBG(port->func.config->cdev, 1327 1320 "Parsed NTB with %d frames\n", dgram_counter); 1321 + 1322 + to_process -= block_len; 1323 + if (to_process != 0) { 1324 + ntb_ptr = (unsigned char *)(ntb_ptr + block_len); 1325 + goto parse_ntb; 1326 + } 1327 + 
1328 + dev_consume_skb_any(skb); 1329 + 1328 1330 return 0; 1329 1331 err: 1330 1332 skb_queue_purge(list);
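The fix above re-enters parse_ntb until to_process reaches zero, so several NTBs concatenated in one skb are all unpacked instead of only the first. The loop shape can be modeled on a toy length-prefixed block format; the one-byte length header is invented for the sketch, since a real NTB carries its block length inside the NTH:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy aggregate: each block starts with a one-byte length that covers
 * the whole block, header included. count_blocks() mirrors the new
 * parse_ntb loop: consume one block, subtract its length from
 * to_process, and restart at ntb_ptr + block_len until nothing is
 * left. */
static int count_blocks(const uint8_t *buf, size_t len)
{
    const uint8_t *ntb_ptr = buf;
    int to_process = (int)len;
    int blocks = 0;

    while (to_process > 0) {
        int block_len = ntb_ptr[0];
        if (block_len == 0 || block_len > to_process)
            return -1;                 /* malformed aggregate */
        blocks++;
        to_process -= block_len;
        ntb_ptr += block_len;
    }
    return blocks;
}
```

The original code freed the skb after one pass, which is why frames past the first NTB were silently dropped.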
+12 -8
drivers/usb/gadget/udc/udc-xilinx.c
··· 497 497 /* Get the Buffer address and copy the transmit data.*/ 498 498 eprambase = (u32 __force *)(udc->addr + ep->rambase); 499 499 if (ep->is_in) { 500 - memcpy(eprambase, bufferptr, bytestosend); 500 + memcpy_toio((void __iomem *)eprambase, bufferptr, 501 + bytestosend); 501 502 udc->write_fn(udc->addr, ep->offset + 502 503 XUSB_EP_BUF0COUNT_OFFSET, bufferlen); 503 504 } else { 504 - memcpy(bufferptr, eprambase, bytestosend); 505 + memcpy_toio((void __iomem *)bufferptr, eprambase, 506 + bytestosend); 505 507 } 506 508 /* 507 509 * Enable the buffer for transmission. ··· 517 515 eprambase = (u32 __force *)(udc->addr + ep->rambase + 518 516 ep->ep_usb.maxpacket); 519 517 if (ep->is_in) { 520 - memcpy(eprambase, bufferptr, bytestosend); 518 + memcpy_toio((void __iomem *)eprambase, bufferptr, 519 + bytestosend); 521 520 udc->write_fn(udc->addr, ep->offset + 522 521 XUSB_EP_BUF1COUNT_OFFSET, bufferlen); 523 522 } else { 524 - memcpy(bufferptr, eprambase, bytestosend); 523 + memcpy_toio((void __iomem *)bufferptr, eprambase, 524 + bytestosend); 525 525 } 526 526 /* 527 527 * Enable the buffer for transmission. 
··· 1025 1021 udc->addr); 1026 1022 length = req->usb_req.actual = min_t(u32, length, 1027 1023 EP0_MAX_PACKET); 1028 - memcpy(corebuf, req->usb_req.buf, length); 1024 + memcpy_toio((void __iomem *)corebuf, req->usb_req.buf, length); 1029 1025 udc->write_fn(udc->addr, XUSB_EP_BUF0COUNT_OFFSET, length); 1030 1026 udc->write_fn(udc->addr, XUSB_BUFFREADY_OFFSET, 1); 1031 1027 } else { ··· 1756 1752 1757 1753 /* Load up the chapter 9 command buffer.*/ 1758 1754 ep0rambase = (u32 __force *) (udc->addr + XUSB_SETUP_PKT_ADDR_OFFSET); 1759 - memcpy(&setup, ep0rambase, 8); 1755 + memcpy_toio((void __iomem *)&setup, ep0rambase, 8); 1760 1756 1761 1757 udc->setup = setup; 1762 1758 udc->setup.wValue = cpu_to_le16((u16 __force)setup.wValue); ··· 1843 1839 (ep0->rambase << 2)); 1844 1840 buffer = req->usb_req.buf + req->usb_req.actual; 1845 1841 req->usb_req.actual = req->usb_req.actual + bytes_to_rx; 1846 - memcpy(buffer, ep0rambase, bytes_to_rx); 1842 + memcpy_toio((void __iomem *)buffer, ep0rambase, bytes_to_rx); 1847 1843 1848 1844 if (req->usb_req.length == req->usb_req.actual) { 1849 1845 /* Data transfer completed get ready for Status stage */ ··· 1919 1915 (ep0->rambase << 2)); 1920 1916 buffer = req->usb_req.buf + req->usb_req.actual; 1921 1917 req->usb_req.actual = req->usb_req.actual + length; 1922 - memcpy(ep0rambase, buffer, length); 1918 + memcpy_toio((void __iomem *)ep0rambase, buffer, length); 1923 1919 } 1924 1920 udc->write_fn(udc->addr, XUSB_EP_BUF0COUNT_OFFSET, count); 1925 1921 udc->write_fn(udc->addr, XUSB_BUFFREADY_OFFSET, 1);
+10 -9
drivers/usb/host/xhci-hub.c
··· 1062 1062 *status |= USB_PORT_STAT_C_CONFIG_ERROR << 16; 1063 1063 1064 1064 /* USB3 specific wPortStatus bits */ 1065 - if (portsc & PORT_POWER) { 1065 + if (portsc & PORT_POWER) 1066 1066 *status |= USB_SS_PORT_STAT_POWER; 1067 - /* link state handling */ 1068 - if (link_state == XDEV_U0) 1069 - bus_state->suspended_ports &= ~(1 << portnum); 1070 - } 1071 1067 1072 - /* remote wake resume signaling complete */ 1073 - if (bus_state->port_remote_wakeup & (1 << portnum) && 1068 + /* no longer suspended or resuming */ 1069 + if (link_state != XDEV_U3 && 1074 1070 link_state != XDEV_RESUME && 1075 1071 link_state != XDEV_RECOVERY) { 1076 - bus_state->port_remote_wakeup &= ~(1 << portnum); 1077 - usb_hcd_end_port_resume(&hcd->self, portnum); 1072 + /* remote wake resume signaling complete */ 1073 + if (bus_state->port_remote_wakeup & (1 << portnum)) { 1074 + bus_state->port_remote_wakeup &= ~(1 << portnum); 1075 + usb_hcd_end_port_resume(&hcd->self, portnum); 1076 + } 1077 + bus_state->suspended_ports &= ~(1 << portnum); 1078 1078 } 1079 1079 1080 1080 xhci_hub_report_usb3_link_state(xhci, status, portsc); ··· 1131 1131 usb_hcd_end_port_resume(&port->rhub->hcd->self, portnum); 1132 1132 } 1133 1133 port->rexit_active = 0; 1134 + bus_state->suspended_ports &= ~(1 << portnum); 1134 1135 } 1135 1136 } 1136 1137
+2 -2
drivers/usb/host/xhci-mem.c
··· 2285 2285 writel(erst_size, &ir->ir_set->erst_size); 2286 2286 2287 2287 erst_base = xhci_read_64(xhci, &ir->ir_set->erst_base); 2288 - erst_base &= ERST_PTR_MASK; 2289 - erst_base |= (ir->erst.erst_dma_addr & (u64) ~ERST_PTR_MASK); 2288 + erst_base &= ERST_BASE_RSVDP; 2289 + erst_base |= ir->erst.erst_dma_addr & ~ERST_BASE_RSVDP; 2290 2290 xhci_write_64(xhci, erst_base, &ir->ir_set->erst_base); 2291 2291 2292 2292 /* Set the event ring dequeue address of this interrupter */
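The corrected hunk keeps only the low reserved-preserve bits of the old register value and takes everything else from the ERST DMA address; the old code masked with the dequeue-pointer constant instead. A sketch of the intended merge, assuming the 6-bit width that matches the GENMASK_ULL(5, 0) definition this series also fixes in xhci.h:

```c
#include <assert.h>
#include <stdint.h>

#define ERST_BASE_RSVDP 0x3fULL   /* low 6 bits are reserved-preserve */

/* Read-modify-write of a 64-bit base register: preserve the reserved
 * low bits of the current value and take the address bits from the DMA
 * handle. The ERST base is 64-byte aligned, so its low 6 bits carry no
 * address information. */
static uint64_t program_erst_base(uint64_t old_reg, uint64_t erst_dma)
{
    return (old_reg & ERST_BASE_RSVDP) | (erst_dma & ~ERST_BASE_RSVDP);
}
```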
+9 -7
drivers/usb/host/xhci-ring.c
··· 798 798 static void xhci_unmap_td_bounce_buffer(struct xhci_hcd *xhci, 799 799 struct xhci_ring *ring, struct xhci_td *td) 800 800 { 801 - struct device *dev = xhci_to_hcd(xhci)->self.controller; 801 + struct device *dev = xhci_to_hcd(xhci)->self.sysdev; 802 802 struct xhci_segment *seg = td->bounce_seg; 803 803 struct urb *urb = td->urb; 804 804 size_t len; ··· 2996 2996 */ 2997 2997 static void xhci_update_erst_dequeue(struct xhci_hcd *xhci, 2998 2998 struct xhci_interrupter *ir, 2999 - union xhci_trb *event_ring_deq) 2999 + union xhci_trb *event_ring_deq, 3000 + bool clear_ehb) 3000 3001 { 3001 3002 u64 temp_64; 3002 3003 dma_addr_t deq; ··· 3018 3017 return; 3019 3018 3020 3019 /* Update HC event ring dequeue pointer */ 3021 - temp_64 &= ERST_PTR_MASK; 3020 + temp_64 &= ERST_DESI_MASK; 3022 3021 temp_64 |= ((u64) deq & (u64) ~ERST_PTR_MASK); 3023 3022 } 3024 3023 3025 3024 /* Clear the event handler busy flag (RW1C) */ 3026 - temp_64 |= ERST_EHB; 3025 + if (clear_ehb) 3026 + temp_64 |= ERST_EHB; 3027 3027 xhci_write_64(xhci, temp_64, &ir->ir_set->erst_dequeue); 3028 3028 } 3029 3029 ··· 3105 3103 while (xhci_handle_event(xhci, ir) > 0) { 3106 3104 if (event_loop++ < TRBS_PER_SEGMENT / 2) 3107 3105 continue; 3108 - xhci_update_erst_dequeue(xhci, ir, event_ring_deq); 3106 + xhci_update_erst_dequeue(xhci, ir, event_ring_deq, false); 3109 3107 event_ring_deq = ir->event_ring->dequeue; 3110 3108 3111 3109 /* ring is half-full, force isoc trbs to interrupt more often */ ··· 3115 3113 event_loop = 0; 3116 3114 } 3117 3115 3118 - xhci_update_erst_dequeue(xhci, ir, event_ring_deq); 3116 + xhci_update_erst_dequeue(xhci, ir, event_ring_deq, true); 3119 3117 ret = IRQ_HANDLED; 3120 3118 3121 3119 out: ··· 3471 3469 static int xhci_align_td(struct xhci_hcd *xhci, struct urb *urb, u32 enqd_len, 3472 3470 u32 *trb_buff_len, struct xhci_segment *seg) 3473 3471 { 3474 - struct device *dev = xhci_to_hcd(xhci)->self.controller; 3472 + struct device *dev = 
xhci_to_hcd(xhci)->self.sysdev; 3475 3473 unsigned int unalign; 3476 3474 unsigned int max_pkt; 3477 3475 u32 new_buff_len;
+1 -1
drivers/usb/host/xhci.h
··· 514 514 #define ERST_SIZE_MASK (0xffff << 16) 515 515 516 516 /* erst_base bitmasks */ 517 - #define ERST_BASE_RSVDP (0x3f) 517 + #define ERST_BASE_RSVDP (GENMASK_ULL(5, 0)) 518 518 519 519 /* erst_dequeue bitmasks */ 520 520 /* Dequeue ERST Segment Index (DESI) - Segment number (or alias)
+1
drivers/usb/misc/onboard_usb_hub.c
··· 434 434 { USB_DEVICE(VENDOR_ID_GENESYS, 0x0608) }, /* Genesys Logic GL850G USB 2.0 */ 435 435 { USB_DEVICE(VENDOR_ID_GENESYS, 0x0610) }, /* Genesys Logic GL852G USB 2.0 */ 436 436 { USB_DEVICE(VENDOR_ID_GENESYS, 0x0620) }, /* Genesys Logic GL3523 USB 3.1 */ 437 + { USB_DEVICE(VENDOR_ID_MICROCHIP, 0x2412) }, /* USB2412 USB 2.0 */ 437 438 { USB_DEVICE(VENDOR_ID_MICROCHIP, 0x2514) }, /* USB2514B USB 2.0 */ 438 439 { USB_DEVICE(VENDOR_ID_MICROCHIP, 0x2517) }, /* USB2517 USB 2.0 */ 439 440 { USB_DEVICE(VENDOR_ID_REALTEK, 0x0411) }, /* RTS5411 USB 3.1 */
+1
drivers/usb/misc/onboard_usb_hub.h
··· 47 47 }; 48 48 49 49 static const struct of_device_id onboard_hub_match[] = { 50 + { .compatible = "usb424,2412", .data = &microchip_usb424_data, }, 50 51 { .compatible = "usb424,2514", .data = &microchip_usb424_data, }, 51 52 { .compatible = "usb424,2517", .data = &microchip_usb424_data, }, 52 53 { .compatible = "usb451,8140", .data = &ti_tusb8041_data, },
+1 -1
drivers/usb/musb/musb_debugfs.c
··· 39 39 { "IntrUsbE", MUSB_INTRUSBE, 8 }, 40 40 { "DevCtl", MUSB_DEVCTL, 8 }, 41 41 { "VControl", 0x68, 32 }, 42 - { "HWVers", 0x69, 16 }, 42 + { "HWVers", MUSB_HWVERS, 16 }, 43 43 { "LinkInfo", MUSB_LINKINFO, 8 }, 44 44 { "VPLen", MUSB_VPLEN, 8 }, 45 45 { "HS_EOF1", MUSB_HS_EOF1, 8 },
+8 -1
drivers/usb/musb/musb_host.c
··· 321 321 musb_giveback(musb, urb, status); 322 322 qh->is_ready = ready; 323 323 324 + /* 325 + * musb->lock had been unlocked in musb_giveback, so qh may 326 + * be freed, need to get it again 327 + */ 328 + qh = musb_ep_get_qh(hw_ep, is_in); 329 + 324 330 /* reclaim resources (and bandwidth) ASAP; deschedule it, and 325 331 * invalidate qh as soon as list_empty(&hep->urb_list) 326 332 */ 327 - if (list_empty(&qh->hep->urb_list)) { 333 + if (qh && list_empty(&qh->hep->urb_list)) { 328 334 struct list_head *head; 329 335 struct dma_controller *dma = musb->dma_controller; 330 336 ··· 2404 2398 * and its URB list has emptied, recycle this qh. 2405 2399 */ 2406 2400 if (ready && list_empty(&qh->hep->urb_list)) { 2401 + musb_ep_set_qh(qh->hw_ep, is_in, NULL); 2407 2402 qh->hep->hcpriv = NULL; 2408 2403 list_del(&qh->ring); 2409 2404 kfree(qh);
+5
drivers/usb/typec/altmodes/displayport.c
··· 304 304 typec_altmode_update_active(alt, false); 305 305 dp->data.status = 0; 306 306 dp->data.conf = 0; 307 + if (dp->hpd) { 308 + drm_connector_oob_hotplug_event(dp->connector_fwnode); 309 + dp->hpd = false; 310 + sysfs_notify(&dp->alt->dev.kobj, "displayport", "hpd"); 311 + } 307 312 break; 308 313 case DP_CMD_STATUS_UPDATE: 309 314 dp->data.status = *vdo;
+6 -6
drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_pdphy.c
··· 381 381 struct device *dev = pmic_typec_pdphy->dev; 382 382 int ret; 383 383 384 - ret = regulator_enable(pmic_typec_pdphy->vdd_pdphy); 385 - if (ret) 386 - return ret; 387 - 388 384 /* PD 2.0, DR=TYPEC_DEVICE, PR=TYPEC_SINK */ 389 385 ret = regmap_update_bits(pmic_typec_pdphy->regmap, 390 386 pmic_typec_pdphy->base + USB_PDPHY_MSG_CONFIG_REG, ··· 418 422 ret = regmap_write(pmic_typec_pdphy->regmap, 419 423 pmic_typec_pdphy->base + USB_PDPHY_EN_CONTROL_REG, 0); 420 424 421 - regulator_disable(pmic_typec_pdphy->vdd_pdphy); 422 - 423 425 return ret; 424 426 } 425 427 ··· 441 447 int i; 442 448 int ret; 443 449 450 + ret = regulator_enable(pmic_typec_pdphy->vdd_pdphy); 451 + if (ret) 452 + return ret; 453 + 444 454 pmic_typec_pdphy->tcpm_port = tcpm_port; 445 455 446 456 ret = pmic_typec_pdphy_reset(pmic_typec_pdphy); ··· 465 467 disable_irq(pmic_typec_pdphy->irq_data[i].irq); 466 468 467 469 qcom_pmic_typec_pdphy_reset_on(pmic_typec_pdphy); 470 + 471 + regulator_disable(pmic_typec_pdphy->vdd_pdphy); 468 472 } 469 473 470 474 struct pmic_typec_pdphy *qcom_pmic_typec_pdphy_alloc(struct device *dev)
+9
drivers/usb/typec/ucsi/psy.c
··· 37 37 struct device *dev = con->ucsi->dev; 38 38 39 39 device_property_read_u8(dev, "scope", &scope); 40 + if (scope == POWER_SUPPLY_SCOPE_UNKNOWN) { 41 + u32 mask = UCSI_CAP_ATTR_POWER_AC_SUPPLY | 42 + UCSI_CAP_ATTR_BATTERY_CHARGING; 43 + 44 + if (con->ucsi->cap.attributes & mask) 45 + scope = POWER_SUPPLY_SCOPE_SYSTEM; 46 + else 47 + scope = POWER_SUPPLY_SCOPE_DEVICE; 48 + } 40 49 val->intval = scope; 41 50 return 0; 42 51 }
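When the firmware node carries no "scope" property, the fallback added above classifies the port from the UCSI capability attributes: a port that can act as an AC supply or charge the battery powers the system, anything else only powers attached devices. The decision reduces to one mask test; the constant values below are stand-ins for the power_supply.h enum and ucsi.h attribute bits:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in values; the real definitions live in power_supply.h and
 * ucsi.h. */
#define POWER_SUPPLY_SCOPE_SYSTEM 1
#define POWER_SUPPLY_SCOPE_DEVICE 2
#define UCSI_CAP_ATTR_BATTERY_CHARGING (1u << 6)
#define UCSI_CAP_ATTR_POWER_AC_SUPPLY  (1u << 8)

/* Mirrors the fallback added to the scope getter: used only when the
 * "scope" device property is absent (scope reads back as UNKNOWN). */
static int scope_from_caps(uint32_t attributes)
{
    uint32_t mask = UCSI_CAP_ATTR_POWER_AC_SUPPLY |
                    UCSI_CAP_ATTR_BATTERY_CHARGING;

    return (attributes & mask) ? POWER_SUPPLY_SCOPE_SYSTEM
                               : POWER_SUPPLY_SCOPE_DEVICE;
}
```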
+2
drivers/usb/typec/ucsi/ucsi.c
··· 787 787 788 788 typec_set_mode(con->port, TYPEC_STATE_SAFE); 789 789 790 + typec_partner_set_usb_power_delivery(con->partner, NULL); 790 791 ucsi_unregister_partner_pdos(con); 791 792 ucsi_unregister_altmodes(con, UCSI_RECIPIENT_SOP); 792 793 typec_unregister_partner(con->partner); ··· 885 884 if (ret < 0) { 886 885 dev_err(ucsi->dev, "%s: GET_CONNECTOR_STATUS failed (%d)\n", 887 886 __func__, ret); 887 + clear_bit(EVENT_PENDING, &con->ucsi->flags); 888 888 goto out_unlock; 889 889 } 890 890
+4
drivers/video/fbdev/aty/atyfb_base.c
··· 3440 3440 } 3441 3441 3442 3442 info->fix.mmio_start = raddr; 3443 + #if defined(__i386__) || defined(__ia64__) 3443 3444 /* 3444 3445 * By using strong UC we force the MTRR to never have an 3445 3446 * effect on the MMIO region on both non-PAT and PAT systems. 3446 3447 */ 3447 3448 par->ati_regbase = ioremap_uc(info->fix.mmio_start, 0x1000); 3449 + #else 3450 + par->ati_regbase = ioremap(info->fix.mmio_start, 0x1000); 3451 + #endif 3448 3452 if (par->ati_regbase == NULL) 3449 3453 return -ENOMEM; 3450 3454
+1 -1
drivers/video/fbdev/core/cfbcopyarea.c
··· 382 382 { 383 383 u32 dx = area->dx, dy = area->dy, sx = area->sx, sy = area->sy; 384 384 u32 height = area->height, width = area->width; 385 - unsigned long const bits_per_line = p->fix.line_length*8u; 385 + unsigned int const bits_per_line = p->fix.line_length * 8u; 386 386 unsigned long __iomem *base = NULL; 387 387 int bits = BITS_PER_LONG, bytes = bits >> 3; 388 388 unsigned dst_idx = 0, src_idx = 0, rev_copy = 0;
+1 -1
drivers/video/fbdev/core/syscopyarea.c
··· 316 316 { 317 317 u32 dx = area->dx, dy = area->dy, sx = area->sx, sy = area->sy; 318 318 u32 height = area->height, width = area->width; 319 - unsigned long const bits_per_line = p->fix.line_length*8u; 319 + unsigned int const bits_per_line = p->fix.line_length * 8u; 320 320 unsigned long *base = NULL; 321 321 int bits = BITS_PER_LONG, bytes = bits >> 3; 322 322 unsigned dst_idx = 0, src_idx = 0, rev_copy = 0;
+1 -1
drivers/video/fbdev/mmp/hw/mmp_ctrl.h
··· 1406 1406 1407 1407 /*pathes*/ 1408 1408 int path_num; 1409 - struct mmphw_path_plat path_plats[]; 1409 + struct mmphw_path_plat path_plats[] __counted_by(path_num); 1410 1410 }; 1411 1411 1412 1412 static inline int overlay_is_vid(struct mmp_overlay *overlay)
+2 -2
drivers/video/fbdev/omap/omapfb_main.c
··· 1645 1645 } 1646 1646 fbdev->int_irq = platform_get_irq(pdev, 0); 1647 1647 if (fbdev->int_irq < 0) { 1648 - r = ENXIO; 1648 + r = -ENXIO; 1649 1649 goto cleanup; 1650 1650 } 1651 1651 1652 1652 fbdev->ext_irq = platform_get_irq(pdev, 1); 1653 1653 if (fbdev->ext_irq < 0) { 1654 - r = ENXIO; 1654 + r = -ENXIO; 1655 1655 goto cleanup; 1656 1656 } 1657 1657
+1 -1
drivers/video/fbdev/sa1100fb.c
··· 1214 1214 }, 1215 1215 }; 1216 1216 1217 - int __init sa1100fb_init(void) 1217 + static int __init sa1100fb_init(void) 1218 1218 { 1219 1219 if (fb_get_options("sa1100fb", NULL)) 1220 1220 return -ENODEV;
+1 -1
drivers/video/fbdev/uvesafb.c
··· 1928 1928 } 1929 1929 } 1930 1930 1931 - cn_del_callback(&uvesafb_cn_id); 1932 1931 driver_remove_file(&uvesafb_driver.driver, &driver_attr_v86d); 1933 1932 platform_device_unregister(uvesafb_device); 1934 1933 platform_driver_unregister(&uvesafb_driver); 1934 + cn_del_callback(&uvesafb_cn_id); 1935 1935 } 1936 1936 1937 1937 module_exit(uvesafb_exit);
+1 -1
fs/btrfs/volumes.c
··· 5109 5109 ASSERT(space_info); 5110 5110 5111 5111 ctl->max_chunk_size = READ_ONCE(space_info->chunk_size); 5112 - ctl->max_stripe_size = ctl->max_chunk_size; 5112 + ctl->max_stripe_size = min_t(u64, ctl->max_chunk_size, SZ_1G); 5113 5113 5114 5114 if (ctl->type & BTRFS_BLOCK_GROUP_SYSTEM) 5115 5115 ctl->devs_max = min_t(int, ctl->devs_max, BTRFS_MAX_DEVS_SYS_CHUNK);
+1 -1
fs/ceph/crypto.c
··· 460 460 out: 461 461 fscrypt_fname_free_buffer(&_tname); 462 462 out_inode: 463 - if ((dir != fname->dir) && !IS_ERR(dir)) { 463 + if (dir != fname->dir) { 464 464 if ((dir->i_state & I_NEW)) 465 465 discard_new_inode(dir); 466 466 else
+1 -1
fs/ceph/file.c
··· 2969 2969 ret = do_splice_direct(src_file, &src_off, dst_file, 2970 2970 &dst_off, src_objlen, flags); 2971 2971 /* Abort on short copies or on error */ 2972 - if (ret < src_objlen) { 2972 + if (ret < (long)src_objlen) { 2973 2973 dout("Failed partial copy (%zd)\n", ret); 2974 2974 goto out; 2975 2975 }
+1 -3
fs/ceph/inode.c
··· 769 769 ci->i_truncate_seq = truncate_seq; 770 770 771 771 /* the MDS should have revoked these caps */ 772 - WARN_ON_ONCE(issued & (CEPH_CAP_FILE_EXCL | 773 - CEPH_CAP_FILE_RD | 774 - CEPH_CAP_FILE_WR | 772 + WARN_ON_ONCE(issued & (CEPH_CAP_FILE_RD | 775 773 CEPH_CAP_FILE_LAZYIO)); 776 774 /* 777 775 * If we hold relevant caps, or in the case where we're
+29 -5
fs/fs_context.c
··· 192 192 EXPORT_SYMBOL(vfs_parse_fs_string); 193 193 194 194 /** 195 - * generic_parse_monolithic - Parse key[=val][,key[=val]]* mount data 195 + * vfs_parse_monolithic_sep - Parse key[=val][,key[=val]]* mount data 196 196 * @fc: The superblock configuration to fill in. 197 197 * @data: The data to parse 198 + * @sep: callback for separating next option 198 199 * 199 - * Parse a blob of data that's in key[=val][,key[=val]]* form. This can be 200 - * called from the ->monolithic_mount_data() fs_context operation. 200 + * Parse a blob of data that's in key[=val][,key[=val]]* form with a custom 201 + * option separator callback. 201 202 * 202 203 * Returns 0 on success or the error returned by the ->parse_option() fs_context 203 204 * operation on failure. 204 205 */ 205 - int generic_parse_monolithic(struct fs_context *fc, void *data) 206 + int vfs_parse_monolithic_sep(struct fs_context *fc, void *data, 207 + char *(*sep)(char **)) 206 208 { 207 209 char *options = data, *key; 208 210 int ret = 0; ··· 216 214 if (ret) 217 215 return ret; 218 216 219 - while ((key = strsep(&options, ",")) != NULL) { 217 + while ((key = sep(&options)) != NULL) { 220 218 if (*key) { 221 219 size_t v_len = 0; 222 220 char *value = strchr(key, '='); ··· 234 232 } 235 233 236 234 return ret; 235 + } 236 + EXPORT_SYMBOL(vfs_parse_monolithic_sep); 237 + 238 + static char *vfs_parse_comma_sep(char **s) 239 + { 240 + return strsep(s, ","); 241 + } 242 + 243 + /** 244 + * generic_parse_monolithic - Parse key[=val][,key[=val]]* mount data 245 + * @fc: The superblock configuration to fill in. 246 + * @data: The data to parse 247 + * 248 + * Parse a blob of data that's in key[=val][,key[=val]]* form. This can be 249 + * called from the ->monolithic_mount_data() fs_context operation. 250 + * 251 + * Returns 0 on success or the error returned by the ->parse_option() fs_context 252 + * operation on failure. 
253 + */ 254 + int generic_parse_monolithic(struct fs_context *fc, void *data) 255 + { 256 + return vfs_parse_monolithic_sep(fc, data, vfs_parse_comma_sep); 237 257 } 238 258 EXPORT_SYMBOL(generic_parse_monolithic); 239 259
+5 -4
fs/namei.c
··· 188 188 } 189 189 } 190 190 191 - result->refcnt = 1; 191 + atomic_set(&result->refcnt, 1); 192 192 /* The empty path is special. */ 193 193 if (unlikely(!len)) { 194 194 if (empty) ··· 249 249 memcpy((char *)result->name, filename, len); 250 250 result->uptr = NULL; 251 251 result->aname = NULL; 252 - result->refcnt = 1; 252 + atomic_set(&result->refcnt, 1); 253 253 audit_getname(result); 254 254 255 255 return result; ··· 261 261 if (IS_ERR(name)) 262 262 return; 263 263 264 - BUG_ON(name->refcnt <= 0); 264 + if (WARN_ON_ONCE(!atomic_read(&name->refcnt))) 265 + return; 265 266 266 - if (--name->refcnt > 0) 267 + if (!atomic_dec_and_test(&name->refcnt)) 267 268 return; 268 269 269 270 if (name->name != name->iname) {
+5 -7
fs/ntfs3/attrib.c
··· 1106 1106 } 1107 1107 } 1108 1108 1109 - /* 1109 + /* 1110 1110 * The code below may require additional cluster (to extend attribute list) 1111 - * and / or one MFT record 1112 - * It is too complex to undo operations if -ENOSPC occurs deep inside 1111 + * and / or one MFT record 1112 + * It is too complex to undo operations if -ENOSPC occurs deep inside 1113 1113 * in 'ni_insert_nonresident'. 1114 1114 * Return in advance -ENOSPC here if there are no free cluster and no free MFT. 1115 1115 */ ··· 1736 1736 le_b = NULL; 1737 1737 attr_b = ni_find_attr(ni, NULL, &le_b, ATTR_DATA, NULL, 1738 1738 0, NULL, &mi_b); 1739 - if (!attr_b) { 1740 - err = -ENOENT; 1741 - goto out; 1742 - } 1739 + if (!attr_b) 1740 + return -ENOENT; 1743 1741 1744 1742 attr = attr_b; 1745 1743 le = le_b;
+13 -2
fs/ntfs3/attrlist.c
··· 52 52 53 53 if (!attr->non_res) { 54 54 lsize = le32_to_cpu(attr->res.data_size); 55 - le = kmalloc(al_aligned(lsize), GFP_NOFS | __GFP_NOWARN); 55 + /* attr is resident: lsize < record_size (1K or 4K) */ 56 + le = kvmalloc(al_aligned(lsize), GFP_KERNEL); 56 57 if (!le) { 57 58 err = -ENOMEM; 58 59 goto out; ··· 81 80 if (err < 0) 82 81 goto out; 83 82 84 - le = kmalloc(al_aligned(lsize), GFP_NOFS | __GFP_NOWARN); 83 + /* attr is nonresident. 84 + * The worst case: 85 + * 1T (2^40) extremely fragmented file. 86 + * cluster = 4K (2^12) => 2^28 fragments 87 + * 2^9 fragments per one record => 2^19 records 88 + * 2^5 bytes of ATTR_LIST_ENTRY per one record => 2^24 bytes. 89 + * 90 + * the result is 16M bytes per attribute list. 91 + * Use kvmalloc to allocate in range [several Kbytes - dozen Mbytes] 92 + */ 93 + le = kvmalloc(al_aligned(lsize), GFP_KERNEL); 85 94 if (!le) { 86 95 err = -ENOMEM; 87 96 goto out;
+3 -1
fs/ntfs3/bitmap.c
··· 125 125 struct rb_node *node, *next; 126 126 127 127 kfree(wnd->free_bits); 128 + wnd->free_bits = NULL; 128 129 run_close(&wnd->run); 129 130 130 131 node = rb_first(&wnd->start_tree); ··· 660 659 wnd->bits_last = wbits; 661 660 662 661 wnd->free_bits = 663 - kcalloc(wnd->nwnd, sizeof(u16), GFP_NOFS | __GFP_NOWARN); 662 + kvmalloc_array(wnd->nwnd, sizeof(u16), GFP_KERNEL | __GFP_ZERO); 663 + 664 664 if (!wnd->free_bits) 665 665 return -ENOMEM; 666 666
+5 -1
fs/ntfs3/dir.c
··· 309 309 return 0; 310 310 } 311 311 312 - dt_type = (fname->dup.fa & FILE_ATTRIBUTE_DIRECTORY) ? DT_DIR : DT_REG; 312 + /* NTFS: symlinks are "dir + reparse" or "file + reparse" */ 313 + if (fname->dup.fa & FILE_ATTRIBUTE_REPARSE_POINT) 314 + dt_type = DT_LNK; 315 + else 316 + dt_type = (fname->dup.fa & FILE_ATTRIBUTE_DIRECTORY) ? DT_DIR : DT_REG; 313 317 314 318 return !dir_emit(ctx, (s8 *)name, name_len, ino, dt_type); 315 319 }
+2 -2
fs/ntfs3/file.c
··· 745 745 } 746 746 747 747 static ssize_t ntfs_file_splice_read(struct file *in, loff_t *ppos, 748 - struct pipe_inode_info *pipe, 749 - size_t len, unsigned int flags) 748 + struct pipe_inode_info *pipe, size_t len, 749 + unsigned int flags) 750 750 { 751 751 struct inode *inode = in->f_mapping->host; 752 752 struct ntfs_inode *ni = ntfs_i(inode);
+7 -1
fs/ntfs3/frecord.c
··· 2148 2148 2149 2149 for (i = 0; i < pages_per_frame; i++) { 2150 2150 pg = pages[i]; 2151 - if (i == idx) 2151 + if (i == idx || !pg) 2152 2152 continue; 2153 2153 unlock_page(pg); 2154 2154 put_page(pg); ··· 3207 3207 fname = resident_data_ex(attr, SIZEOF_ATTRIBUTE_FILENAME); 3208 3208 if (!fname || !memcmp(&fname->dup, dup, sizeof(fname->dup))) 3209 3209 continue; 3210 + 3211 + /* Check simple case when parent inode equals current inode. */ 3212 + if (ino_get(&fname->home) == ni->vfs_inode.i_ino) { 3213 + ntfs_set_state(sbi, NTFS_DIRTY_ERROR); 3214 + continue; 3215 + } 3210 3216 3211 3217 /* ntfs_iget5 may sleep. */ 3212 3218 dir = ntfs_iget5(sb, &fname->home, NULL);
+4 -2
fs/ntfs3/fslog.c
··· 2168 2168 2169 2169 if (!page) { 2170 2170 page = kmalloc(log->page_size, GFP_NOFS); 2171 - if (!page) 2172 - return -ENOMEM; 2171 + if (!page) { 2172 + err = -ENOMEM; 2173 + goto out; 2174 + } 2173 2175 } 2174 2176 2175 2177 /*
+8 -11
fs/ntfs3/fsntfs.c
··· 983 983 if (err) 984 984 return err; 985 985 986 - mark_inode_dirty(&ni->vfs_inode); 986 + mark_inode_dirty_sync(&ni->vfs_inode); 987 987 /* verify(!ntfs_update_mftmirr()); */ 988 988 989 - /* 990 - * If we used wait=1, sync_inode_metadata waits for the io for the 991 - * inode to finish. It hangs when media is removed. 992 - * So wait=0 is sent down to sync_inode_metadata 993 - * and filemap_fdatawrite is used for the data blocks. 994 - */ 995 - err = sync_inode_metadata(&ni->vfs_inode, 0); 996 - if (!err) 997 - err = filemap_fdatawrite(ni->vfs_inode.i_mapping); 989 + /* write mft record on disk. */ 990 + err = _ni_write_inode(&ni->vfs_inode, 1); 998 991 999 992 return err; 1000 993 } ··· 2454 2461 { 2455 2462 CLST end, i, zone_len, zlen; 2456 2463 struct wnd_bitmap *wnd = &sbi->used.bitmap; 2464 + bool dirty = false; 2457 2465 2458 2466 down_write_nested(&wnd->rw_lock, BITMAP_MUTEX_CLUSTERS); 2459 2467 if (!wnd_is_used(wnd, lcn, len)) { 2460 - ntfs_set_state(sbi, NTFS_DIRTY_ERROR); 2468 + /* mark volume as dirty out of wnd->rw_lock */ 2469 + dirty = true; 2461 2470 2462 2471 end = lcn + len; 2463 2472 len = 0; ··· 2513 2518 2514 2519 out: 2515 2520 up_write(&wnd->rw_lock); 2521 + if (dirty) 2522 + ntfs_set_state(sbi, NTFS_DIRTY_ERROR); 2516 2523 } 2517 2524 2518 2525 /*
+3
fs/ntfs3/index.c
··· 729 729 u32 total = le32_to_cpu(hdr->total); 730 730 u16 offs[128]; 731 731 732 + if (unlikely(!cmp)) 733 + return NULL; 734 + 732 735 fill_table: 733 736 if (end > total) 734 737 return NULL;
+3 -2
fs/ntfs3/inode.c
··· 170 170 nt2kernel(std5->cr_time, &ni->i_crtime); 171 171 #endif 172 172 nt2kernel(std5->a_time, &inode->i_atime); 173 - ctime = inode_get_ctime(inode); 174 173 nt2kernel(std5->c_time, &ctime); 174 + inode_set_ctime_to_ts(inode, ctime); 175 175 nt2kernel(std5->m_time, &inode->i_mtime); 176 176 177 177 ni->std_fa = std5->fa; ··· 1660 1660 d_instantiate(dentry, inode); 1661 1661 1662 1662 /* Set original time. inode times (i_ctime) may be changed in ntfs_init_acl. */ 1663 - inode->i_atime = inode->i_mtime = inode_set_ctime_to_ts(inode, ni->i_crtime); 1663 + inode->i_atime = inode->i_mtime = 1664 + inode_set_ctime_to_ts(inode, ni->i_crtime); 1664 1665 dir->i_mtime = inode_set_ctime_to_ts(dir, ni->i_crtime); 1665 1666 1666 1667 mark_inode_dirty(dir);
+3 -3
fs/ntfs3/namei.c
··· 156 156 err = ntfs_link_inode(inode, de); 157 157 158 158 if (!err) { 159 - dir->i_mtime = inode_set_ctime_to_ts(inode, 160 - inode_set_ctime_current(dir)); 159 + dir->i_mtime = inode_set_ctime_to_ts( 160 + inode, inode_set_ctime_current(dir)); 161 161 mark_inode_dirty(inode); 162 162 mark_inode_dirty(dir); 163 163 d_instantiate(de, inode); ··· 373 373 374 374 #ifdef CONFIG_NTFS3_FS_POSIX_ACL 375 375 if (IS_POSIXACL(dir)) { 376 - /* 376 + /* 377 377 * Load in cache current acl to avoid ni_lock(dir): 378 378 * ntfs_create_inode -> ntfs_init_acl -> posix_acl_create -> 379 379 * ntfs_get_acl -> ntfs_get_acl_ex -> ni_lock
+1 -1
fs/ntfs3/ntfs.h
··· 847 847 // Birth Volume Id is the Object Id of the Volume on. 848 848 // which the Object Id was allocated. It never changes. 849 849 struct GUID BirthVolumeId; //0x10: 850 - 850 + 851 851 // Birth Object Id is the first Object Id that was 852 852 // ever assigned to this MFT Record. I.e. If the Object Id 853 853 // is changed for some reason, this field will reflect the
+2 -2
fs/ntfs3/ntfs_fs.h
··· 42 42 #define MINUS_ONE_T ((size_t)(-1)) 43 43 /* Biggest MFT / smallest cluster */ 44 44 #define MAXIMUM_BYTES_PER_MFT 4096 45 + #define MAXIMUM_SHIFT_BYTES_PER_MFT 12 45 46 #define NTFS_BLOCKS_PER_MFT_RECORD (MAXIMUM_BYTES_PER_MFT / 512) 46 47 47 48 #define MAXIMUM_BYTES_PER_INDEX 4096 49 + #define MAXIMUM_SHIFT_BYTES_PER_INDEX 12 48 50 #define NTFS_BLOCKS_PER_INODE (MAXIMUM_BYTES_PER_INDEX / 512) 49 51 50 52 /* NTFS specific error code when fixup failed. */ ··· 497 495 struct kstat *stat, u32 request_mask, u32 flags); 498 496 int ntfs3_setattr(struct mnt_idmap *idmap, struct dentry *dentry, 499 497 struct iattr *attr); 500 - void ntfs_sparse_cluster(struct inode *inode, struct page *page0, CLST vcn, 501 - CLST len); 502 498 int ntfs_file_open(struct inode *inode, struct file *file); 503 499 int ntfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, 504 500 __u64 start, __u64 len);
+58 -16
fs/ntfs3/record.c
··· 189 189 return err; 190 190 } 191 191 192 + /* 193 + * mi_enum_attr - start/continue attributes enumeration in record. 194 + * 195 + * NOTE: mi->mrec - memory of size sbi->record_size 196 + * here we sure that mi->mrec->total == sbi->record_size (see mi_read) 197 + */ 192 198 struct ATTRIB *mi_enum_attr(struct mft_inode *mi, struct ATTRIB *attr) 193 199 { 194 200 const struct MFT_REC *rec = mi->mrec; 195 201 u32 used = le32_to_cpu(rec->used); 196 - u32 t32, off, asize; 202 + u32 t32, off, asize, prev_type; 197 203 u16 t16; 204 + u64 data_size, alloc_size, tot_size; 198 205 199 206 if (!attr) { 200 207 u32 total = le32_to_cpu(rec->total); ··· 220 213 if (!is_rec_inuse(rec)) 221 214 return NULL; 222 215 216 + prev_type = 0; 223 217 attr = Add2Ptr(rec, off); 224 218 } else { 225 219 /* Check if input attr inside record. */ ··· 234 226 return NULL; 235 227 } 236 228 237 - if (off + asize < off) { 238 - /* Overflow check. */ 229 + /* Overflow check. */ 230 + if (off + asize < off) 239 231 return NULL; 240 - } 241 232 233 + prev_type = le32_to_cpu(attr->type); 242 234 attr = Add2Ptr(attr, asize); 243 235 off += asize; 244 236 } ··· 258 250 259 251 /* 0x100 is last known attribute for now. */ 260 252 t32 = le32_to_cpu(attr->type); 261 - if ((t32 & 0xf) || (t32 > 0x100)) 253 + if (!t32 || (t32 & 0xf) || (t32 > 0x100)) 254 + return NULL; 255 + 256 + /* attributes in record must be ordered by type */ 257 + if (t32 < prev_type) 262 258 return NULL; 263 259 264 260 /* Check overflow and boundary. */ ··· 271 259 272 260 /* Check size of attribute. */ 273 261 if (!attr->non_res) { 262 + /* Check resident fields. 
*/ 274 263 if (asize < SIZEOF_RESIDENT) 275 264 return NULL; 276 265 277 266 t16 = le16_to_cpu(attr->res.data_off); 278 - 279 267 if (t16 > asize) 280 268 return NULL; 281 269 282 - t32 = le32_to_cpu(attr->res.data_size); 283 - if (t16 + t32 > asize) 270 + if (t16 + le32_to_cpu(attr->res.data_size) > asize) 284 271 return NULL; 285 272 286 273 t32 = sizeof(short) * attr->name_len; ··· 289 278 return attr; 290 279 } 291 280 292 - /* Check some nonresident fields. */ 293 - if (attr->name_len && 294 - le16_to_cpu(attr->name_off) + sizeof(short) * attr->name_len > 295 - le16_to_cpu(attr->nres.run_off)) { 281 + /* Check nonresident fields. */ 282 + if (attr->non_res != 1) 296 283 return NULL; 297 - } 298 284 299 - if (attr->nres.svcn || !is_attr_ext(attr)) { 285 + t16 = le16_to_cpu(attr->nres.run_off); 286 + if (t16 > asize) 287 + return NULL; 288 + 289 + t32 = sizeof(short) * attr->name_len; 290 + if (t32 && le16_to_cpu(attr->name_off) + t32 > t16) 291 + return NULL; 292 + 293 + /* Check start/end vcn. 
*/ 294 + if (le64_to_cpu(attr->nres.svcn) > le64_to_cpu(attr->nres.evcn) + 1) 295 + return NULL; 296 + 297 + data_size = le64_to_cpu(attr->nres.data_size); 298 + if (le64_to_cpu(attr->nres.valid_size) > data_size) 299 + return NULL; 300 + 301 + alloc_size = le64_to_cpu(attr->nres.alloc_size); 302 + if (data_size > alloc_size) 303 + return NULL; 304 + 305 + t32 = mi->sbi->cluster_mask; 306 + if (alloc_size & t32) 307 + return NULL; 308 + 309 + if (!attr->nres.svcn && is_attr_ext(attr)) { 310 + /* First segment of sparse/compressed attribute */ 311 + if (asize + 8 < SIZEOF_NONRESIDENT_EX) 312 + return NULL; 313 + 314 + tot_size = le64_to_cpu(attr->nres.total_size); 315 + if (tot_size & t32) 316 + return NULL; 317 + 318 + if (tot_size > alloc_size) 319 + return NULL; 320 + } else { 300 321 if (asize + 8 < SIZEOF_NONRESIDENT) 301 322 return NULL; 302 323 303 324 if (attr->nres.c_unit) 304 325 return NULL; 305 - } else if (asize + 8 < SIZEOF_NONRESIDENT_EX) 306 - return NULL; 326 + } 307 327 308 328 return attr; 309 329 }
+74 -30
fs/ntfs3/super.c
··· 453 453 * ntfs3.1 454 454 * cluster size 455 455 * number of clusters 456 + * total number of mft records 457 + * number of used mft records ~= number of files + folders 458 + * real state of ntfs "dirty"/"clean" 459 + * current state of ntfs "dirty"/"clean" 456 460 */ 457 461 static int ntfs3_volinfo(struct seq_file *m, void *o) 458 462 { 459 463 struct super_block *sb = m->private; 460 464 struct ntfs_sb_info *sbi = sb->s_fs_info; 461 465 462 - seq_printf(m, "ntfs%d.%d\n%u\n%zu\n", sbi->volume.major_ver, 463 - sbi->volume.minor_ver, sbi->cluster_size, 464 - sbi->used.bitmap.nbits); 466 + seq_printf(m, "ntfs%d.%d\n%u\n%zu\n\%zu\n%zu\n%s\n%s\n", 467 + sbi->volume.major_ver, sbi->volume.minor_ver, 468 + sbi->cluster_size, sbi->used.bitmap.nbits, 469 + sbi->mft.bitmap.nbits, 470 + sbi->mft.bitmap.nbits - wnd_zeroes(&sbi->mft.bitmap), 471 + sbi->volume.real_dirty ? "dirty" : "clean", 472 + (sbi->volume.flags & VOLUME_FLAG_DIRTY) ? "dirty" : "clean"); 465 473 466 474 return 0; 467 475 } ··· 496 488 { 497 489 int err; 498 490 struct super_block *sb = pde_data(file_inode(file)); 499 - struct ntfs_sb_info *sbi = sb->s_fs_info; 500 491 ssize_t ret = count; 501 - u8 *label = kmalloc(count, GFP_NOFS); 492 + u8 *label; 493 + 494 + if (sb_rdonly(sb)) 495 + return -EROFS; 496 + 497 + label = kmalloc(count, GFP_NOFS); 502 498 503 499 if (!label) 504 500 return -ENOMEM; ··· 514 502 while (ret > 0 && label[ret - 1] == '\n') 515 503 ret -= 1; 516 504 517 - err = ntfs_set_label(sbi, label, ret); 505 + err = ntfs_set_label(sb->s_fs_info, label, ret); 518 506 519 507 if (err < 0) { 520 508 ntfs_err(sb, "failed (%d) to write label", err); ··· 588 576 wnd_close(&sbi->mft.bitmap); 589 577 wnd_close(&sbi->used.bitmap); 590 578 591 - if (sbi->mft.ni) 579 + if (sbi->mft.ni) { 592 580 iput(&sbi->mft.ni->vfs_inode); 581 + sbi->mft.ni = NULL; 582 + } 593 583 594 - if (sbi->security.ni) 584 + if (sbi->security.ni) { 595 585 iput(&sbi->security.ni->vfs_inode); 586 + sbi->security.ni = NULL; 
587 + } 596 588 597 - if (sbi->reparse.ni) 589 + if (sbi->reparse.ni) { 598 590 iput(&sbi->reparse.ni->vfs_inode); 591 + sbi->reparse.ni = NULL; 592 + } 599 593 600 - if (sbi->objid.ni) 594 + if (sbi->objid.ni) { 601 595 iput(&sbi->objid.ni->vfs_inode); 596 + sbi->objid.ni = NULL; 597 + } 602 598 603 - if (sbi->volume.ni) 599 + if (sbi->volume.ni) { 604 600 iput(&sbi->volume.ni->vfs_inode); 601 + sbi->volume.ni = NULL; 602 + } 605 603 606 604 ntfs_update_mftmirr(sbi, 0); 607 605 ··· 858 836 struct ntfs_sb_info *sbi = sb->s_fs_info; 859 837 int err; 860 838 u32 mb, gb, boot_sector_size, sct_per_clst, record_size; 861 - u64 sectors, clusters, mlcn, mlcn2; 839 + u64 sectors, clusters, mlcn, mlcn2, dev_size0; 862 840 struct NTFS_BOOT *boot; 863 841 struct buffer_head *bh; 864 842 struct MFT_REC *rec; ··· 866 844 u8 cluster_bits; 867 845 u32 boot_off = 0; 868 846 const char *hint = "Primary boot"; 847 + 848 + /* Save original dev_size. Used with alternative boot. */ 849 + dev_size0 = dev_size; 869 850 870 851 sbi->volume.blocks = dev_size >> PAGE_SHIFT; 871 852 ··· 878 853 879 854 check_boot: 880 855 err = -EINVAL; 856 + 857 + /* Corrupted image; do not read OOB */ 858 + if (bh->b_size - sizeof(*boot) < boot_off) 859 + goto out; 860 + 881 861 boot = (struct NTFS_BOOT *)Add2Ptr(bh->b_data, boot_off); 882 862 883 863 if (memcmp(boot->system_id, "NTFS ", sizeof("NTFS ") - 1)) { ··· 929 899 goto out; 930 900 } 931 901 932 - sbi->record_size = record_size = 933 - boot->record_size < 0 ? 
1 << (-boot->record_size) : 934 - (u32)boot->record_size << cluster_bits; 902 + if (boot->record_size >= 0) { 903 + record_size = (u32)boot->record_size << cluster_bits; 904 + } else if (-boot->record_size <= MAXIMUM_SHIFT_BYTES_PER_MFT) { 905 + record_size = 1u << (-boot->record_size); 906 + } else { 907 + ntfs_err(sb, "%s: invalid record size %d.", hint, 908 + boot->record_size); 909 + goto out; 910 + } 911 + 912 + sbi->record_size = record_size; 935 913 sbi->record_bits = blksize_bits(record_size); 936 914 sbi->attr_size_tr = (5 * record_size >> 4); // ~320 bytes 937 915 ··· 956 918 goto out; 957 919 } 958 920 959 - sbi->index_size = boot->index_size < 0 ? 960 - 1u << (-boot->index_size) : 961 - (u32)boot->index_size << cluster_bits; 921 + if (boot->index_size >= 0) { 922 + sbi->index_size = (u32)boot->index_size << cluster_bits; 923 + } else if (-boot->index_size <= MAXIMUM_SHIFT_BYTES_PER_INDEX) { 924 + sbi->index_size = 1u << (-boot->index_size); 925 + } else { 926 + ntfs_err(sb, "%s: invalid index size %d.", hint, 927 + boot->index_size); 928 + goto out; 929 + } 962 930 963 931 /* Check index record size. */ 964 932 if (sbi->index_size < SECTOR_SIZE || !is_power_of_2(sbi->index_size)) { ··· 1099 1055 1100 1056 if (bh->b_blocknr && !sb_rdonly(sb)) { 1101 1057 /* 1102 - * Alternative boot is ok but primary is not ok. 1103 - * Do not update primary boot here 'cause it may be faked boot. 1104 - * Let ntfs to be mounted and update boot later. 1105 - */ 1058 + * Alternative boot is ok but primary is not ok. 1059 + * Do not update primary boot here 'cause it may be faked boot. 1060 + * Let ntfs to be mounted and update boot later. 
1061 + */ 1106 1062 *boot2 = kmemdup(boot, sizeof(*boot), GFP_NOFS | __GFP_NOWARN); 1107 1063 } 1108 1064 1109 1065 out: 1110 - if (err == -EINVAL && !bh->b_blocknr && dev_size > PAGE_SHIFT) { 1066 + if (err == -EINVAL && !bh->b_blocknr && dev_size0 > PAGE_SHIFT) { 1111 1067 u32 block_size = min_t(u32, sector_size, PAGE_SIZE); 1112 - u64 lbo = dev_size - sizeof(*boot); 1068 + u64 lbo = dev_size0 - sizeof(*boot); 1113 1069 1114 1070 /* 1115 1071 * Try alternative boot (last sector) ··· 1123 1079 1124 1080 boot_off = lbo & (block_size - 1); 1125 1081 hint = "Alternative boot"; 1082 + dev_size = dev_size0; /* restore original size. */ 1126 1083 goto check_boot; 1127 1084 } 1128 1085 brelse(bh); ··· 1412 1367 } 1413 1368 1414 1369 bytes = inode->i_size; 1415 - sbi->def_table = t = kmalloc(bytes, GFP_NOFS | __GFP_NOWARN); 1370 + sbi->def_table = t = kvmalloc(bytes, GFP_KERNEL); 1416 1371 if (!t) { 1417 1372 err = -ENOMEM; 1418 1373 goto put_inode_out; ··· 1566 1521 1567 1522 if (boot2) { 1568 1523 /* 1569 - * Alternative boot is ok but primary is not ok. 1570 - * Volume is recognized as NTFS. Update primary boot. 1571 - */ 1524 + * Alternative boot is ok but primary is not ok. 1525 + * Volume is recognized as NTFS. Update primary boot. 1526 + */ 1572 1527 struct buffer_head *bh0 = sb_getblk(sb, 0); 1573 1528 if (bh0) { 1574 1529 if (buffer_locked(bh0)) ··· 1609 1564 out: 1610 1565 ntfs3_put_sbi(sbi); 1611 1566 kfree(boot2); 1567 + ntfs3_put_sbi(sbi); 1612 1568 return err; 1613 1569 } 1614 1570 ··· 1803 1757 if (IS_ENABLED(CONFIG_NTFS3_LZX_XPRESS)) 1804 1758 pr_info("ntfs3: Read-only LZX/Xpress compression included\n"); 1805 1759 1806 - 1807 1760 #ifdef CONFIG_PROC_FS 1808 1761 /* Create "/proc/fs/ntfs3" */ 1809 1762 proc_info_root = proc_mkdir("fs/ntfs3", NULL); ··· 1844 1799 if (proc_info_root) 1845 1800 remove_proc_entry("fs/ntfs3", NULL); 1846 1801 #endif 1847 - 1848 1802 } 1849 1803 1850 1804 MODULE_LICENSE("GPL");
+6 -1
fs/ntfs3/xattr.c
··· 211 211 size = le32_to_cpu(info->size); 212 212 213 213 /* Enumerate all xattrs. */ 214 - for (ret = 0, off = 0; off < size; off += ea_size) { 214 + ret = 0; 215 + for (off = 0; off + sizeof(struct EA_FULL) < size; off += ea_size) { 215 216 ea = Add2Ptr(ea_all, off); 216 217 ea_size = unpacked_ea_size(ea); 217 218 ··· 220 219 break; 221 220 222 221 if (buffer) { 222 + /* Check if we can use field ea->name */ 223 + if (off + ea_size > size) 224 + break; 225 + 223 226 if (ret + ea->name_len + 1 > bytes_per_buffer) { 224 227 err = -ERANGE; 225 228 goto out;
+54 -63
fs/overlayfs/params.c
··· 157 157 {} 158 158 }; 159 159 160 + static char *ovl_next_opt(char **s) 161 + { 162 + char *sbegin = *s; 163 + char *p; 164 + 165 + if (sbegin == NULL) 166 + return NULL; 167 + 168 + for (p = sbegin; *p; p++) { 169 + if (*p == '\\') { 170 + p++; 171 + if (!*p) 172 + break; 173 + } else if (*p == ',') { 174 + *p = '\0'; 175 + *s = p + 1; 176 + return sbegin; 177 + } 178 + } 179 + *s = NULL; 180 + return sbegin; 181 + } 182 + 183 + static int ovl_parse_monolithic(struct fs_context *fc, void *data) 184 + { 185 + return vfs_parse_monolithic_sep(fc, data, ovl_next_opt); 186 + } 187 + 160 188 static ssize_t ovl_parse_param_split_lowerdirs(char *str) 161 189 { 162 190 ssize_t nr_layers = 1, nr_colons = 0; ··· 192 164 193 165 for (s = d = str;; s++, d++) { 194 166 if (*s == '\\') { 195 - s++; 167 + /* keep esc chars in split lowerdir */ 168 + *d++ = *s++; 196 169 } else if (*s == ':') { 197 170 bool next_colon = (*(s + 1) == ':'); 198 171 ··· 268 239 } 269 240 } 270 241 271 - static int ovl_mount_dir(const char *name, struct path *path) 242 + static int ovl_mount_dir(const char *name, struct path *path, bool upper) 272 243 { 273 244 int err = -ENOMEM; 274 245 char *tmp = kstrdup(name, GFP_KERNEL); ··· 277 248 ovl_unescape(tmp); 278 249 err = ovl_mount_dir_noesc(tmp, path); 279 250 280 - if (!err && path->dentry->d_flags & DCACHE_OP_REAL) { 251 + if (!err && upper && path->dentry->d_flags & DCACHE_OP_REAL) { 281 252 pr_err("filesystem on '%s' not supported as upperdir\n", 282 253 tmp); 283 254 path_put_init(path); ··· 298 269 struct path path; 299 270 char *dup; 300 271 301 - err = ovl_mount_dir(name, &path); 272 + err = ovl_mount_dir(name, &path, true); 302 273 if (err) 303 274 return err; 304 275 ··· 350 321 * Set "/lower1", "/lower2", and "/lower3" as lower layers and 351 322 * "/data1" and "/data2" as data lower layers. Any existing lower 352 323 * layers are replaced. 353 - * (2) lowerdir=:/lower4 354 - * Append "/lower4" to current stack of lower layers. 
This requires 355 - * that there already is at least one lower layer configured. 356 - * (3) lowerdir=::/lower5 357 - * Append data "/lower5" as data lower layer. This requires that 358 - * there's at least one regular lower layer present. 359 324 */ 360 325 static int ovl_parse_param_lowerdir(const char *name, struct fs_context *fc) 361 326 { ··· 371 348 return 0; 372 349 } 373 350 374 - if (strncmp(name, "::", 2) == 0) { 375 - /* 376 - * This is a data layer. 377 - * There must be at least one regular lower layer 378 - * specified. 379 - */ 380 - if (ctx->nr == 0) { 381 - pr_err("data lower layers without regular lower layers not allowed"); 382 - return -EINVAL; 383 - } 384 - 385 - /* Skip the leading "::". */ 386 - name += 2; 387 - data_layer = true; 388 - /* 389 - * A data layer is automatically an append as there 390 - * must've been at least one regular lower layer. 391 - */ 392 - append = true; 393 - } else if (*name == ':') { 394 - /* 395 - * This is a regular lower layer. 396 - * If users want to append a layer enforce that they 397 - * have already specified a first layer before. It's 398 - * better to be strict. 399 - */ 400 - if (ctx->nr == 0) { 401 - pr_err("cannot append layer if no previous layer has been specified"); 402 - return -EINVAL; 403 - } 404 - 405 - /* 406 - * Once a sequence of data layers has started regular 407 - * lower layers are forbidden. 408 - */ 409 - if (ctx->nr_data > 0) { 410 - pr_err("regular lower layers cannot follow data lower layers"); 411 - return -EINVAL; 412 - } 413 - 414 - /* Skip the leading ":". 
*/ 415 - name++; 416 - append = true; 351 + if (*name == ':') { 352 + pr_err("cannot append lower layer"); 353 + return -EINVAL; 417 354 } 418 355 419 356 dup = kstrdup(name, GFP_KERNEL); ··· 455 472 l = &ctx->lower[nr]; 456 473 memset(l, 0, sizeof(*l)); 457 474 458 - err = ovl_mount_dir_noesc(dup_iter, &l->path); 475 + err = ovl_mount_dir(dup_iter, &l->path, false); 459 476 if (err) 460 477 goto out_put; 461 478 ··· 665 682 } 666 683 667 684 static const struct fs_context_operations ovl_context_ops = { 685 + .parse_monolithic = ovl_parse_monolithic, 668 686 .parse_param = ovl_parse_param, 669 687 .get_tree = ovl_get_tree, 670 688 .reconfigure = ovl_reconfigure, ··· 934 950 struct super_block *sb = dentry->d_sb; 935 951 struct ovl_fs *ofs = OVL_FS(sb); 936 952 size_t nr, nr_merged_lower = ofs->numlayer - ofs->numdatalayer; 937 - char **lowerdatadirs = &ofs->config.lowerdirs[nr_merged_lower]; 938 953 939 - /* lowerdirs[] starts from offset 1 */ 940 - seq_printf(m, ",lowerdir=%s", ofs->config.lowerdirs[1]); 941 - /* dump regular lower layers */ 942 - for (nr = 2; nr < nr_merged_lower; nr++) 943 - seq_printf(m, ":%s", ofs->config.lowerdirs[nr]); 944 - /* dump data lower layers */ 945 - for (nr = 0; nr < ofs->numdatalayer; nr++) 946 - seq_printf(m, "::%s", lowerdatadirs[nr]); 954 + /* 955 + * lowerdirs[] starts from offset 1, then 956 + * >= 0 regular lower layers prefixed with : and 957 + * >= 0 data-only lower layers prefixed with :: 958 + * 959 + * we need to escase comma and space like seq_show_option() does and 960 + * we also need to escape the colon separator from lowerdir paths. 
961 + */ 962 + seq_puts(m, ",lowerdir="); 963 + for (nr = 1; nr < ofs->numlayer; nr++) { 964 + if (nr > 1) 965 + seq_putc(m, ':'); 966 + if (nr >= nr_merged_lower) 967 + seq_putc(m, ':'); 968 + seq_escape(m, ofs->config.lowerdirs[nr], ":, \t\n\\"); 969 + } 947 970 if (ofs->config.upperdir) { 948 971 seq_show_option(m, "upperdir", ofs->config.upperdir); 949 972 seq_show_option(m, "workdir", ofs->config.workdir);
+69 -74
fs/smb/client/cached_dir.c
··· 15 15 static struct cached_fid *init_cached_dir(const char *path); 16 16 static void free_cached_dir(struct cached_fid *cfid); 17 17 static void smb2_close_cached_fid(struct kref *ref); 18 + static void cfids_laundromat_worker(struct work_struct *work); 18 19 19 20 static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids, 20 21 const char *path, ··· 170 169 return -ENOENT; 171 170 } 172 171 /* 173 - * At this point we either have a lease already and we can just 174 - * return it. If not we are guaranteed to be the only thread accessing 175 - * this cfid. 172 + * Return cached fid if it has a lease. Otherwise, it is either a new 173 + * entry or laundromat worker removed it from @cfids->entries. Caller 174 + * will put last reference if the latter. 176 175 */ 176 + spin_lock(&cfids->cfid_list_lock); 177 177 if (cfid->has_lease) { 178 + spin_unlock(&cfids->cfid_list_lock); 178 179 *ret_cfid = cfid; 179 180 kfree(utf16_path); 180 181 return 0; 181 182 } 183 + spin_unlock(&cfids->cfid_list_lock); 182 184 183 185 /* 184 186 * Skip any prefix paths in @path as lookup_positive_unlocked() ends up ··· 298 294 goto oshr_free; 299 295 } 300 296 } 297 + spin_lock(&cfids->cfid_list_lock); 301 298 cfid->dentry = dentry; 302 299 cfid->time = jiffies; 303 300 cfid->has_lease = true; 301 + spin_unlock(&cfids->cfid_list_lock); 304 302 305 303 oshr_free: 306 304 kfree(utf16_path); ··· 311 305 free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base); 312 306 free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base); 313 307 spin_lock(&cfids->cfid_list_lock); 314 - if (rc && !cfid->has_lease) { 315 - if (cfid->on_list) { 316 - list_del(&cfid->entry); 317 - cfid->on_list = false; 318 - cfids->num_entries--; 308 + if (!cfid->has_lease) { 309 + if (rc) { 310 + if (cfid->on_list) { 311 + list_del(&cfid->entry); 312 + cfid->on_list = false; 313 + cfids->num_entries--; 314 + } 315 + rc = -ENOENT; 316 + } else { 317 + /* 318 + * We are guaranteed to have two references at this 319 
+ * point. One for the caller and one for a potential 320 + * lease. Release the Lease-ref so that the directory 321 + * will be closed when the caller closes the cached 322 + * handle. 323 + */ 324 + spin_unlock(&cfids->cfid_list_lock); 325 + kref_put(&cfid->refcount, smb2_close_cached_fid); 326 + goto out; 319 327 } 320 - rc = -ENOENT; 321 328 } 322 329 spin_unlock(&cfids->cfid_list_lock); 323 - if (!rc && !cfid->has_lease) { 324 - /* 325 - * We are guaranteed to have two references at this point. 326 - * One for the caller and one for a potential lease. 327 - * Release the Lease-ref so that the directory will be closed 328 - * when the caller closes the cached handle. 329 - */ 330 - kref_put(&cfid->refcount, smb2_close_cached_fid); 331 - } 332 330 if (rc) { 333 331 if (cfid->is_open) 334 332 SMB2_close(0, cfid->tcon, cfid->fid.persistent_fid, ··· 340 330 free_cached_dir(cfid); 341 331 cfid = NULL; 342 332 } 343 - 333 + out: 344 334 if (rc == 0) { 345 335 *ret_cfid = cfid; 346 336 atomic_inc(&tcon->num_remote_opens); ··· 582 572 kfree(cfid); 583 573 } 584 574 585 - static int 586 - cifs_cfids_laundromat_thread(void *p) 575 + static void cfids_laundromat_worker(struct work_struct *work) 587 576 { 588 - struct cached_fids *cfids = p; 577 + struct cached_fids *cfids; 589 578 struct cached_fid *cfid, *q; 590 - struct list_head entry; 579 + LIST_HEAD(entry); 591 580 592 - while (!kthread_should_stop()) { 593 - ssleep(1); 594 - INIT_LIST_HEAD(&entry); 595 - if (kthread_should_stop()) 596 - return 0; 597 - spin_lock(&cfids->cfid_list_lock); 598 - list_for_each_entry_safe(cfid, q, &cfids->entries, entry) { 599 - if (time_after(jiffies, cfid->time + HZ * dir_cache_timeout)) { 600 - list_del(&cfid->entry); 601 - list_add(&cfid->entry, &entry); 602 - cfids->num_entries--; 603 - } 604 - } 605 - spin_unlock(&cfids->cfid_list_lock); 581 + cfids = container_of(work, struct cached_fids, laundromat_work.work); 606 582 607 - list_for_each_entry_safe(cfid, q, &entry, entry) { 583 + 
spin_lock(&cfids->cfid_list_lock); 584 + list_for_each_entry_safe(cfid, q, &cfids->entries, entry) { 585 + if (cfid->time && 586 + time_after(jiffies, cfid->time + HZ * dir_cache_timeout)) { 608 587 cfid->on_list = false; 609 - list_del(&cfid->entry); 610 - /* 611 - * Cancel, and wait for the work to finish in 612 - * case we are racing with it. 613 - */ 614 - cancel_work_sync(&cfid->lease_break); 615 - if (cfid->has_lease) { 616 - /* 617 - * We lease has not yet been cancelled from 618 - * the server so we need to drop the reference. 619 - */ 620 - spin_lock(&cfids->cfid_list_lock); 621 - cfid->has_lease = false; 622 - spin_unlock(&cfids->cfid_list_lock); 623 - kref_put(&cfid->refcount, smb2_close_cached_fid); 624 - } 588 + list_move(&cfid->entry, &entry); 589 + cfids->num_entries--; 590 + /* To prevent race with smb2_cached_lease_break() */ 591 + kref_get(&cfid->refcount); 625 592 } 626 593 } 594 + spin_unlock(&cfids->cfid_list_lock); 627 595 628 - return 0; 596 + list_for_each_entry_safe(cfid, q, &entry, entry) { 597 + list_del(&cfid->entry); 598 + /* 599 + * Cancel and wait for the work to finish in case we are racing 600 + * with it. 601 + */ 602 + cancel_work_sync(&cfid->lease_break); 603 + if (cfid->has_lease) { 604 + /* 605 + * Our lease has not yet been cancelled from the server 606 + * so we need to drop the reference. 
607 + */ 608 + spin_lock(&cfids->cfid_list_lock); 609 + cfid->has_lease = false; 610 + spin_unlock(&cfids->cfid_list_lock); 611 + kref_put(&cfid->refcount, smb2_close_cached_fid); 612 + } 613 + /* Drop the extra reference opened above */ 614 + kref_put(&cfid->refcount, smb2_close_cached_fid); 615 + } 616 + queue_delayed_work(cifsiod_wq, &cfids->laundromat_work, 617 + dir_cache_timeout * HZ); 629 618 } 630 - 631 619 632 620 struct cached_fids *init_cached_dirs(void) 633 621 { ··· 637 629 spin_lock_init(&cfids->cfid_list_lock); 638 630 INIT_LIST_HEAD(&cfids->entries); 639 631 640 - /* 641 - * since we're in a cifs function already, we know that 642 - * this will succeed. No need for try_module_get(). 643 - */ 644 - __module_get(THIS_MODULE); 645 - cfids->laundromat = kthread_run(cifs_cfids_laundromat_thread, 646 - cfids, "cifsd-cfid-laundromat"); 647 - if (IS_ERR(cfids->laundromat)) { 648 - cifs_dbg(VFS, "Failed to start cfids laundromat thread.\n"); 649 - kfree(cfids); 650 - module_put(THIS_MODULE); 651 - return NULL; 652 - } 632 + INIT_DELAYED_WORK(&cfids->laundromat_work, cfids_laundromat_worker); 633 + queue_delayed_work(cifsiod_wq, &cfids->laundromat_work, 634 + dir_cache_timeout * HZ); 635 + 653 636 return cfids; 654 637 } 655 638 ··· 656 657 if (cfids == NULL) 657 658 return; 658 659 659 - if (cfids->laundromat) { 660 - kthread_stop(cfids->laundromat); 661 - cfids->laundromat = NULL; 662 - module_put(THIS_MODULE); 663 - } 660 + cancel_delayed_work_sync(&cfids->laundromat_work); 664 661 665 662 spin_lock(&cfids->cfid_list_lock); 666 663 list_for_each_entry_safe(cfid, q, &cfids->entries, entry) {
+1 -1
fs/smb/client/cached_dir.h
··· 57 57 spinlock_t cfid_list_lock; 58 58 int num_entries; 59 59 struct list_head entries; 60 - struct task_struct *laundromat; 60 + struct delayed_work laundromat_work; 61 61 }; 62 62 63 63 extern struct cached_fids *init_cached_dirs(void);
+6 -5
fs/smb/server/smb2pdu.c
··· 231 231 { 232 232 struct smb2_hdr *rsp_hdr; 233 233 234 - if (work->next_smb2_rcv_hdr_off) 235 - rsp_hdr = ksmbd_resp_buf_next(work); 236 - else 237 - rsp_hdr = smb2_get_msg(work->response_buf); 234 + rsp_hdr = smb2_get_msg(work->response_buf); 238 235 rsp_hdr->Status = err; 236 + 237 + work->iov_idx = 0; 238 + work->iov_cnt = 0; 239 + work->next_smb2_rcv_hdr_off = 0; 239 240 smb2_set_err_rsp(work); 240 241 } 241 242 ··· 6152 6151 memcpy(aux_payload_buf, rpc_resp->payload, rpc_resp->payload_sz); 6153 6152 6154 6153 nbytes = rpc_resp->payload_sz; 6155 - kvfree(rpc_resp); 6156 6154 err = ksmbd_iov_pin_rsp_read(work, (void *)rsp, 6157 6155 offsetof(struct smb2_read_rsp, Buffer), 6158 6156 aux_payload_buf, nbytes); 6159 6157 if (err) 6160 6158 goto out; 6159 + kvfree(rpc_resp); 6161 6160 } else { 6162 6161 err = ksmbd_iov_pin_rsp(work, (void *)rsp, 6163 6162 offsetof(struct smb2_read_rsp, Buffer));
+5 -2
fs/smb/server/vfs_cache.c
··· 106 106 ci = __ksmbd_inode_lookup(inode); 107 107 if (ci) { 108 108 ret = KSMBD_INODE_STATUS_OK; 109 - if (ci->m_flags & S_DEL_PENDING) 109 + if (ci->m_flags & (S_DEL_PENDING | S_DEL_ON_CLS)) 110 110 ret = KSMBD_INODE_STATUS_PENDING_DELETE; 111 111 atomic_dec(&ci->m_count); 112 112 } ··· 116 116 117 117 bool ksmbd_inode_pending_delete(struct ksmbd_file *fp) 118 118 { 119 - return (fp->f_ci->m_flags & S_DEL_PENDING); 119 + return (fp->f_ci->m_flags & (S_DEL_PENDING | S_DEL_ON_CLS)); 120 120 } 121 121 122 122 void ksmbd_set_inode_pending_delete(struct ksmbd_file *fp) ··· 603 603 void ksmbd_update_fstate(struct ksmbd_file_table *ft, struct ksmbd_file *fp, 604 604 unsigned int state) 605 605 { 606 + if (!fp) 607 + return; 608 + 606 609 write_lock(&ft->lock); 607 610 fp->f_state = state; 608 611 write_unlock(&ft->lock);
+6
fs/xfs/libxfs/xfs_ag.c
··· 1001 1001 error = -ENOSPC; 1002 1002 goto resv_init_out; 1003 1003 } 1004 + 1005 + /* Update perag geometry */ 1006 + pag->block_count -= delta; 1007 + __xfs_agino_range(pag->pag_mount, pag->block_count, &pag->agino_min, 1008 + &pag->agino_max); 1009 + 1004 1010 xfs_ialloc_log_agi(*tpp, agibp, XFS_AGI_LENGTH); 1005 1011 xfs_alloc_log_agf(*tpp, agfbp, XFS_AGF_LENGTH); 1006 1012 return 0;
-1
fs/xfs/scrub/xfile.c
··· 10 10 #include "xfs_log_format.h" 11 11 #include "xfs_trans_resv.h" 12 12 #include "xfs_mount.h" 13 - #include "xfs_format.h" 14 13 #include "scrub/xfile.h" 15 14 #include "scrub/xfarray.h" 16 15 #include "scrub/scrub.h"
+2 -1
fs/xfs/xfs_extent_busy.c
··· 62 62 rb_link_node(&new->rb_node, parent, rbp); 63 63 rb_insert_color(&new->rb_node, &pag->pagb_tree); 64 64 65 - list_add(&new->list, busy_list); 65 + /* always process discard lists in fifo order */ 66 + list_add_tail(&new->list, busy_list); 66 67 spin_unlock(&pag->pagb_lock); 67 68 } 68 69
+5
fs/xfs/xfs_iops.c
··· 584 584 } 585 585 } 586 586 587 + if ((request_mask & STATX_CHANGE_COOKIE) && IS_I_VERSION(inode)) { 588 + stat->change_cookie = inode_query_iversion(inode); 589 + stat->result_mask |= STATX_CHANGE_COOKIE; 590 + } 591 + 587 592 /* 588 593 * Note: If you add another clause to set an attribute flag, please 589 594 * update attributes_mask below.
+3 -3
fs/xfs/xfs_notify_failure.c
··· 126 126 struct xfs_rmap_irec ri_low = { }; 127 127 struct xfs_rmap_irec ri_high; 128 128 struct xfs_agf *agf; 129 - xfs_agblock_t agend; 130 129 struct xfs_perag *pag; 130 + xfs_agblock_t range_agend; 131 131 132 132 pag = xfs_perag_get(mp, agno); 133 133 error = xfs_alloc_read_agf(pag, tp, 0, &agf_bp); ··· 148 148 ri_high.rm_startblock = XFS_FSB_TO_AGBNO(mp, end_fsbno); 149 149 150 150 agf = agf_bp->b_addr; 151 - agend = min(be32_to_cpu(agf->agf_length), 151 + range_agend = min(be32_to_cpu(agf->agf_length) - 1, 152 152 ri_high.rm_startblock); 153 153 notify.startblock = ri_low.rm_startblock; 154 - notify.blockcount = agend - ri_low.rm_startblock; 154 + notify.blockcount = range_agend + 1 - ri_low.rm_startblock; 155 155 156 156 error = xfs_rmap_query_range(cur, &ri_low, &ri_high, 157 157 xfs_dax_failure_fn, &notify);
-5
include/acpi/processor.h
··· 465 465 extern int acpi_processor_ffh_lpi_enter(struct acpi_lpi_state *lpi); 466 466 #endif 467 467 468 - #ifdef CONFIG_ACPI_HOTPLUG_CPU 469 - extern int arch_register_cpu(int cpu); 470 - extern void arch_unregister_cpu(int cpu); 471 - #endif 472 - 473 468 #endif
+7
include/kvm/arm_arch_timer.h
··· 82 82 struct arch_timer_context *emul_ptimer; 83 83 }; 84 84 85 + void get_timer_map(struct kvm_vcpu *vcpu, struct timer_map *map); 86 + 85 87 struct arch_timer_cpu { 86 88 struct arch_timer_context timers[NR_KVM_TIMERS]; 87 89 ··· 146 144 /* CPU HP callbacks */ 147 145 void kvm_timer_cpu_up(void); 148 146 void kvm_timer_cpu_down(void); 147 + 148 + static inline bool has_cntpoff(void) 149 + { 150 + return (has_vhe() && cpus_have_final_cap(ARM64_HAS_ECV_CNTPOFF)); 151 + } 149 152 150 153 #endif
+1 -1
include/linux/cgroup-defs.h
··· 238 238 * Lists running through all tasks using this cgroup group. 239 239 * mg_tasks lists tasks which belong to this cset but are in the 240 240 * process of being migrated out or in. Protected by 241 - * css_set_rwsem, but, during migration, once tasks are moved to 241 + * css_set_lock, but, during migration, once tasks are moved to 242 242 * mg_tasks, it can be read safely while holding cgroup_mutex. 243 243 */ 244 244 struct list_head tasks;
+2
include/linux/cpu.h
··· 80 80 struct device *cpu_device_create(struct device *parent, void *drvdata, 81 81 const struct attribute_group **groups, 82 82 const char *fmt, ...); 83 + extern int arch_register_cpu(int cpu); 84 + extern void arch_unregister_cpu(int cpu); 83 85 #ifdef CONFIG_HOTPLUG_CPU 84 86 extern void unregister_cpu(struct cpu *cpu); 85 87 extern ssize_t arch_cpu_probe(const char *, size_t);
+19
include/linux/dma-fence.h
··· 568 568 fence->error = error; 569 569 } 570 570 571 + /** 572 + * dma_fence_timestamp - helper to get the completion timestamp of a fence 573 + * @fence: fence to get the timestamp from. 574 + * 575 + * After a fence is signaled the timestamp is updated with the signaling time, 576 + * but setting the timestamp can race with tasks waiting for the signaling. This 577 + * helper busy waits for the correct timestamp to appear. 578 + */ 579 + static inline ktime_t dma_fence_timestamp(struct dma_fence *fence) 580 + { 581 + if (WARN_ON(!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))) 582 + return ktime_get(); 583 + 584 + while (!test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags)) 585 + cpu_relax(); 586 + 587 + return fence->timestamp; 588 + } 589 + 571 590 signed long dma_fence_wait_timeout(struct dma_fence *, 572 591 bool intr, signed long timeout); 573 592 signed long dma_fence_wait_any_timeout(struct dma_fence **fences,
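The new dma_fence_timestamp() helper handles a race where a waiter can observe DMA_FENCE_FLAG_SIGNALED_BIT before the signaler has finished storing the timestamp, so the reader busy-waits on a second flag that is only set once the timestamp is valid. A minimal user-space sketch of the same store-then-publish / spin-then-read idea, using C11 atomics (the `fake_fence` names are illustrative, not kernel API):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative stand-in for a fence: the signaler stores the
 * timestamp first, then publishes a "timestamp valid" flag
 * (mirroring DMA_FENCE_FLAG_TIMESTAMP_BIT) with release order. */
struct fake_fence {
	_Atomic int ts_valid;
	int64_t timestamp;	/* written before ts_valid is set */
};

static void fence_signal(struct fake_fence *f, int64_t now_ns)
{
	f->timestamp = now_ns;
	/* release: the timestamp store is visible before the flag */
	atomic_store_explicit(&f->ts_valid, 1, memory_order_release);
}

static int64_t fence_timestamp(struct fake_fence *f)
{
	/* busy-wait until the signaler has published the timestamp,
	 * like dma_fence_timestamp() spinning on TIMESTAMP_BIT */
	while (!atomic_load_explicit(&f->ts_valid, memory_order_acquire))
		;
	return f->timestamp;
}
```

The release/acquire pair is what makes the plain `timestamp` read safe once the flag is observed.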
+1 -1
include/linux/fs.h
··· 2403 2403 struct filename { 2404 2404 const char *name; /* pointer to actual string */ 2405 2405 const __user char *uptr; /* original userland pointer */ 2406 - int refcnt; 2406 + atomic_t refcnt; 2407 2407 struct audit_names *aname; 2408 2408 const char iname[]; 2409 2409 };
+2
include/linux/fs_context.h
··· 136 136 extern int vfs_parse_fs_param(struct fs_context *fc, struct fs_parameter *param); 137 137 extern int vfs_parse_fs_string(struct fs_context *fc, const char *key, 138 138 const char *value, size_t v_size); 139 + int vfs_parse_monolithic_sep(struct fs_context *fc, void *data, 140 + char *(*sep)(char **)); 139 141 extern int generic_parse_monolithic(struct fs_context *fc, void *data); 140 142 extern int vfs_get_tree(struct fs_context *fc); 141 143 extern void put_fs_context(struct fs_context *fc);
-1
include/linux/mcb.h
··· 63 63 struct mcb_device { 64 64 struct device dev; 65 65 struct mcb_bus *bus; 66 - bool is_added; 67 66 struct mcb_driver *driver; 68 67 u16 id; 69 68 int inst;
+16 -3
include/linux/virtio_net.h
··· 3 3 #define _LINUX_VIRTIO_NET_H 4 4 5 5 #include <linux/if_vlan.h> 6 + #include <linux/udp.h> 6 7 #include <uapi/linux/tcp.h> 7 - #include <uapi/linux/udp.h> 8 8 #include <uapi/linux/virtio_net.h> 9 9 10 10 static inline bool virtio_net_hdr_match_proto(__be16 protocol, __u8 gso_type) ··· 151 151 unsigned int nh_off = p_off; 152 152 struct skb_shared_info *shinfo = skb_shinfo(skb); 153 153 154 - /* UFO may not include transport header in gso_size. */ 155 - if (gso_type & SKB_GSO_UDP) 154 + switch (gso_type & ~SKB_GSO_TCP_ECN) { 155 + case SKB_GSO_UDP: 156 + /* UFO may not include transport header in gso_size. */ 156 157 nh_off -= thlen; 158 + break; 159 + case SKB_GSO_UDP_L4: 160 + if (!(hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM)) 161 + return -EINVAL; 162 + if (skb->csum_offset != offsetof(struct udphdr, check)) 163 + return -EINVAL; 164 + if (skb->len - p_off > gso_size * UDP_MAX_SEGMENTS) 165 + return -EINVAL; 166 + if (gso_type != SKB_GSO_UDP_L4) 167 + return -EINVAL; 168 + break; 169 + } 157 170 158 171 /* Kernel has a special handling for GSO_BY_FRAGS. */ 159 172 if (gso_size == GSO_BY_FRAGS)
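The new SKB_GSO_UDP_L4 branch above rejects, among other things, packets whose payload would split into more than UDP_MAX_SEGMENTS segments. A hedged, self-contained sketch of just that length bound (the helper name is mine; UDP_MAX_SEGMENTS is 64 in include/linux/udp.h at the time of this merge):

```c
#include <stddef.h>

#define UDP_MAX_SEGMENTS 64	/* (1 << 6UL) in include/linux/udp.h */

/* Illustrative version of the length check in the SKB_GSO_UDP_L4
 * case: a GSO payload of payload_len bytes, cut into gso_size-byte
 * segments, must not exceed UDP_MAX_SEGMENTS segments. */
static int udp_l4_gso_len_ok(size_t payload_len, size_t gso_size)
{
	if (gso_size == 0)
		return 0;
	return payload_len <= gso_size * UDP_MAX_SEGMENTS;
}
```

In the patch itself the checked length is `skb->len - p_off`, i.e. the bytes past the transport header.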
+1 -1
include/net/bluetooth/hci_mon.h
··· 56 56 __u8 type; 57 57 __u8 bus; 58 58 bdaddr_t bdaddr; 59 - char name[8]; 59 + char name[8] __nonstring; 60 60 } __packed; 61 61 #define HCI_MON_NEW_INDEX_SIZE 16 62 62
+1
include/net/netns/xfrm.h
··· 50 50 struct list_head policy_all; 51 51 struct hlist_head *policy_byidx; 52 52 unsigned int policy_idx_hmask; 53 + unsigned int idx_generator; 53 54 struct hlist_head policy_inexact[XFRM_POLICY_MAX]; 54 55 struct xfrm_policy_hash policy_bydst[XFRM_POLICY_MAX]; 55 56 unsigned int policy_count[XFRM_POLICY_MAX * 2];
+4 -6
include/net/sock.h
··· 336 336 * @sk_cgrp_data: cgroup data for this cgroup 337 337 * @sk_memcg: this socket's memory cgroup association 338 338 * @sk_write_pending: a write to stream socket waits to start 339 - * @sk_wait_pending: number of threads blocked on this socket 339 + * @sk_disconnects: number of disconnect operations performed on this sock 340 340 * @sk_state_change: callback to indicate change in the state of the sock 341 341 * @sk_data_ready: callback to indicate there is data to be processed 342 342 * @sk_write_space: callback to indicate there is bf sending space available ··· 429 429 unsigned int sk_napi_id; 430 430 #endif 431 431 int sk_rcvbuf; 432 - int sk_wait_pending; 432 + int sk_disconnects; 433 433 434 434 struct sk_filter __rcu *sk_filter; 435 435 union { ··· 1189 1189 } 1190 1190 1191 1191 #define sk_wait_event(__sk, __timeo, __condition, __wait) \ 1192 - ({ int __rc; \ 1193 - __sk->sk_wait_pending++; \ 1192 + ({ int __rc, __dis = __sk->sk_disconnects; \ 1194 1193 release_sock(__sk); \ 1195 1194 __rc = __condition; \ 1196 1195 if (!__rc) { \ ··· 1199 1200 } \ 1200 1201 sched_annotate_sleep(); \ 1201 1202 lock_sock(__sk); \ 1202 - __sk->sk_wait_pending--; \ 1203 - __rc = __condition; \ 1203 + __rc = __dis == __sk->sk_disconnects ? __condition : -EPIPE; \ 1204 1204 __rc; \ 1205 1205 }) 1206 1206
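The sk_wait_event() change replaces the sk_wait_pending counter with a generation counter: the macro samples sk_disconnects before releasing the lock, and if the value differs after re-acquiring it, a disconnect ran in the window and the wait resolves to -EPIPE instead of re-evaluating the condition. A minimal sketch of that comparison, with a hypothetical `fake_sock` in place of struct sock:

```c
#include <errno.h>

/* Illustrative generation-counter check mirroring the new
 * sk_wait_event() logic. */
struct fake_sock {
	int sk_disconnects;
};

/* gen_before is sampled while the lock was still held; if it no
 * longer matches, a disconnect happened while we slept. */
static int wait_result(const struct fake_sock *sk, int gen_before,
		       int condition)
{
	return gen_before == sk->sk_disconnects ? condition : -EPIPE;
}
```

Compared with the old waiter count, a generation counter distinguishes "a disconnect happened while I slept" from "someone else is also waiting", which is the property the fix needs.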
+3
include/net/tcp.h
··· 143 143 #define TCP_RTO_MAX ((unsigned)(120*HZ)) 144 144 #define TCP_RTO_MIN ((unsigned)(HZ/5)) 145 145 #define TCP_TIMEOUT_MIN (2U) /* Min timeout for TCP timers in jiffies */ 146 + 147 + #define TCP_TIMEOUT_MIN_US (2*USEC_PER_MSEC) /* Min TCP timeout in microsecs */ 148 + 146 149 #define TCP_TIMEOUT_INIT ((unsigned)(1*HZ)) /* RFC6298 2.1 initial RTO value */ 147 150 #define TCP_TIMEOUT_FALLBACK ((unsigned)(3*HZ)) /* RFC 1122 initial RTO value, now 148 151 * used as a fallback RTO for the
+2 -2
include/trace/events/neigh.h
··· 39 39 ), 40 40 41 41 TP_fast_assign( 42 - struct in6_addr *pin6; 43 42 __be32 *p32; 44 43 45 44 __entry->family = tbl->family; ··· 46 47 __entry->entries = atomic_read(&tbl->gc_entries); 47 48 __entry->created = n != NULL; 48 49 __entry->gc_exempt = exempt_from_gc; 49 - pin6 = (struct in6_addr *)__entry->primary_key6; 50 50 p32 = (__be32 *)__entry->primary_key4; 51 51 52 52 if (tbl->family == AF_INET) ··· 55 57 56 58 #if IS_ENABLED(CONFIG_IPV6) 57 59 if (tbl->family == AF_INET6) { 60 + struct in6_addr *pin6; 61 + 58 62 pin6 = (struct in6_addr *)__entry->primary_key6; 59 63 *pin6 = *(struct in6_addr *)pkey; 60 64 }
+1 -1
include/video/mmp_disp.h
··· 231 231 232 232 /* layers */ 233 233 int overlay_num; 234 - struct mmp_overlay overlays[]; 234 + struct mmp_overlay overlays[] __counted_by(overlay_num); 235 235 }; 236 236 237 237 extern struct mmp_path *mmp_get_path(const char *name);
-2
include/video/uvesafb.h
··· 109 109 u32 ack; 110 110 }; 111 111 112 - static int uvesafb_exec(struct uvesafb_ktask *tsk); 113 - 114 112 #define UVESAFB_EXACT_RES 1 115 113 #define UVESAFB_EXACT_DEPTH 2 116 114
+4 -4
kernel/auditsc.c
··· 2212 2212 if (!n->name) 2213 2213 continue; 2214 2214 if (n->name->uptr == uptr) { 2215 - n->name->refcnt++; 2215 + atomic_inc(&n->name->refcnt); 2216 2216 return n->name; 2217 2217 } 2218 2218 } ··· 2241 2241 n->name = name; 2242 2242 n->name_len = AUDIT_NAME_FULL; 2243 2243 name->aname = n; 2244 - name->refcnt++; 2244 + atomic_inc(&name->refcnt); 2245 2245 } 2246 2246 2247 2247 static inline int audit_copy_fcaps(struct audit_names *name, ··· 2373 2373 return; 2374 2374 if (name) { 2375 2375 n->name = name; 2376 - name->refcnt++; 2376 + atomic_inc(&name->refcnt); 2377 2377 } 2378 2378 2379 2379 out: ··· 2500 2500 if (found_parent) { 2501 2501 found_child->name = found_parent->name; 2502 2502 found_child->name_len = AUDIT_NAME_FULL; 2503 - found_child->name->refcnt++; 2503 + atomic_inc(&found_child->name->refcnt); 2504 2504 } 2505 2505 } 2506 2506
+2 -3
kernel/cgroup/cgroup-v1.c
··· 360 360 } 361 361 css_task_iter_end(&it); 362 362 length = n; 363 - /* now sort & (if procs) strip out duplicates */ 363 + /* now sort & strip out duplicates (tgids or recycled thread PIDs) */ 364 364 sort(array, length, sizeof(pid_t), cmppid, NULL); 365 - if (type == CGROUP_FILE_PROCS) 366 - length = pidlist_uniq(array, length); 365 + length = pidlist_uniq(array, length); 367 366 368 367 l = cgroup_pidlist_find_create(cgrp, type); 369 368 if (!l) {
+59 -14
kernel/sched/fair.c
··· 872 872 * 873 873 * Which allows an EDF like search on (sub)trees. 874 874 */ 875 - static struct sched_entity *pick_eevdf(struct cfs_rq *cfs_rq) 875 + static struct sched_entity *__pick_eevdf(struct cfs_rq *cfs_rq) 876 876 { 877 877 struct rb_node *node = cfs_rq->tasks_timeline.rb_root.rb_node; 878 878 struct sched_entity *curr = cfs_rq->curr; 879 879 struct sched_entity *best = NULL; 880 + struct sched_entity *best_left = NULL; 880 881 881 882 if (curr && (!curr->on_rq || !entity_eligible(cfs_rq, curr))) 882 883 curr = NULL; 884 + best = curr; 883 885 884 886 /* 885 887 * Once selected, run a task until it either becomes non-eligible or ··· 902 900 } 903 901 904 902 /* 905 - * If this entity has an earlier deadline than the previous 906 - * best, take this one. If it also has the earliest deadline 907 - * of its subtree, we're done. 903 + * Now we heap search eligible trees for the best (min_)deadline 908 904 */ 909 - if (!best || deadline_gt(deadline, best, se)) { 905 + if (!best || deadline_gt(deadline, best, se)) 910 906 best = se; 911 - if (best->deadline == best->min_deadline) 907 + 908 + /* 909 + * Every se in a left branch is eligible, keep track of the 910 + * branch with the best min_deadline 911 + */ 912 + if (node->rb_left) { 913 + struct sched_entity *left = __node_2_se(node->rb_left); 914 + 915 + if (!best_left || deadline_gt(min_deadline, best_left, left)) 916 + best_left = left; 917 + 918 + /* 919 + * min_deadline is in the left branch. rb_left and all 920 + * descendants are eligible, so immediately switch to the second 921 + * loop. 922 + */ 923 + if (left->min_deadline == se->min_deadline) 912 924 break; 913 925 } 914 926 915 - /* 916 - * If the earlest deadline in this subtree is in the fully 917 - * eligible left half of our space, go there. 918 - */ 927 + /* min_deadline is at this node, no need to look right */ 928 + if (se->deadline == se->min_deadline) 929 + break; 930 + 931 + /* else min_deadline is in the right branch. 
*/ 932 + node = node->rb_right; 933 + } 934 + 935 + /* 936 + * We ran into an eligible node which is itself the best. 937 + * (Or nr_running == 0 and both are NULL) 938 + */ 939 + if (!best_left || (s64)(best_left->min_deadline - best->deadline) > 0) 940 + return best; 941 + 942 + /* 943 + * Now best_left and all of its children are eligible, and we are just 944 + * looking for deadline == min_deadline 945 + */ 946 + node = &best_left->run_node; 947 + while (node) { 948 + struct sched_entity *se = __node_2_se(node); 949 + 950 + /* min_deadline is the current node */ 951 + if (se->deadline == se->min_deadline) 952 + return se; 953 + 954 + /* min_deadline is in the left branch */ 919 955 if (node->rb_left && 920 956 __node_2_se(node->rb_left)->min_deadline == se->min_deadline) { 921 957 node = node->rb_left; 922 958 continue; 923 959 } 924 960 961 + /* else min_deadline is in the right branch */ 925 962 node = node->rb_right; 926 963 } 964 + return NULL; 965 + } 927 966 928 - if (!best || (curr && deadline_gt(deadline, best, curr))) 929 - best = curr; 967 + static struct sched_entity *pick_eevdf(struct cfs_rq *cfs_rq) 968 + { 969 + struct sched_entity *se = __pick_eevdf(cfs_rq); 930 970 931 - if (unlikely(!best)) { 971 + if (!se) { 932 972 struct sched_entity *left = __pick_first_entity(cfs_rq); 933 973 if (left) { 934 974 pr_err("EEVDF scheduling fail, picking leftmost\n"); ··· 978 934 } 979 935 } 980 936 981 - return best; 937 + return se; 982 938 } 983 939 984 940 #ifdef CONFIG_SCHED_DEBUG ··· 3657 3613 */ 3658 3614 deadline = div_s64(deadline * old_weight, weight); 3659 3615 se->deadline = se->vruntime + deadline; 3616 + min_deadline_cb_propagate(&se->run_node, NULL); 3660 3617 } 3661 3618 3662 3619 #ifdef CONFIG_SMP
+3 -3
kernel/trace/fprobe.c
··· 189 189 { 190 190 int i, size; 191 191 192 - if (num < 0) 192 + if (num <= 0) 193 193 return -EINVAL; 194 194 195 195 if (!fp->exit_handler) { ··· 202 202 size = fp->nr_maxactive; 203 203 else 204 204 size = num * num_possible_cpus() * 2; 205 - if (size < 0) 206 - return -E2BIG; 205 + if (size <= 0) 206 + return -EINVAL; 207 207 208 208 fp->rethook = rethook_alloc((void *)fp, fprobe_exit_handler); 209 209 if (!fp->rethook)
+19 -5
kernel/workqueue.c
··· 2166 2166 { 2167 2167 struct worker *worker; 2168 2168 int id; 2169 - char id_buf[16]; 2169 + char id_buf[23]; 2170 2170 2171 2171 /* ID is needed to determine kthread name */ 2172 2172 id = ida_alloc(&pool->worker_ida, GFP_KERNEL); ··· 4600 4600 } 4601 4601 cpus_read_unlock(); 4602 4602 4603 + /* for unbound pwq, flush the pwq_release_worker ensures that the 4604 + * pwq_release_workfn() completes before calling kfree(wq). 4605 + */ 4606 + if (ret) 4607 + kthread_flush_worker(pwq_release_worker); 4608 + 4603 4609 return ret; 4604 4610 4605 4611 enomem: 4606 4612 if (wq->cpu_pwq) { 4607 - for_each_possible_cpu(cpu) 4608 - kfree(*per_cpu_ptr(wq->cpu_pwq, cpu)); 4613 + for_each_possible_cpu(cpu) { 4614 + struct pool_workqueue *pwq = *per_cpu_ptr(wq->cpu_pwq, cpu); 4615 + 4616 + if (pwq) 4617 + kmem_cache_free(pwq_cache, pwq); 4618 + } 4609 4619 free_percpu(wq->cpu_pwq); 4610 4620 wq->cpu_pwq = NULL; 4611 4621 } ··· 5792 5782 list_for_each_entry(wq, &workqueues, list) { 5793 5783 if (!(wq->flags & WQ_UNBOUND)) 5794 5784 continue; 5785 + 5795 5786 /* creating multiple pwqs breaks ordering guarantee */ 5796 - if (wq->flags & __WQ_ORDERED) 5797 - continue; 5787 + if (!list_empty(&wq->pwqs)) { 5788 + if (wq->flags & __WQ_ORDERED_EXPLICIT) 5789 + continue; 5790 + wq->flags &= ~__WQ_ORDERED; 5791 + } 5798 5792 5799 5793 ctx = apply_wqattrs_prepare(wq, wq->unbound_attrs, unbound_cpumask); 5800 5794 if (IS_ERR(ctx)) {
+5 -2
mm/slab_common.c
··· 895 895 896 896 static unsigned int __kmalloc_minalign(void) 897 897 { 898 + unsigned int minalign = dma_get_cache_alignment(); 899 + 898 900 if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && 899 901 is_swiotlb_allocated()) 900 - return ARCH_KMALLOC_MINALIGN; 901 - return dma_get_cache_alignment(); 902 + minalign = ARCH_KMALLOC_MINALIGN; 903 + 904 + return max(minalign, arch_slab_minalign()); 902 905 } 903 906 904 907 void __init
+9
net/bluetooth/hci_conn.c
··· 1627 1627 return ERR_PTR(-EOPNOTSUPP); 1628 1628 } 1629 1629 1630 + /* Reject outgoing connection to device with same BD ADDR against 1631 + * CVE-2020-26555 1632 + */ 1633 + if (!bacmp(&hdev->bdaddr, dst)) { 1634 + bt_dev_dbg(hdev, "Reject connection with same BD_ADDR %pMR\n", 1635 + dst); 1636 + return ERR_PTR(-ECONNREFUSED); 1637 + } 1638 + 1630 1639 acl = hci_conn_hash_lookup_ba(hdev, ACL_LINK, dst); 1631 1640 if (!acl) { 1632 1641 acl = hci_conn_add(hdev, ACL_LINK, dst, HCI_ROLE_MASTER);
+39 -9
net/bluetooth/hci_event.c
··· 26 26 /* Bluetooth HCI event handling. */ 27 27 28 28 #include <asm/unaligned.h> 29 + #include <linux/crypto.h> 30 + #include <crypto/algapi.h> 29 31 30 32 #include <net/bluetooth/bluetooth.h> 31 33 #include <net/bluetooth/hci_core.h> ··· 3270 3268 3271 3269 bt_dev_dbg(hdev, "bdaddr %pMR type 0x%x", &ev->bdaddr, ev->link_type); 3272 3270 3271 + /* Reject incoming connection from device with same BD ADDR against 3272 + * CVE-2020-26555 3273 + */ 3274 + if (hdev && !bacmp(&hdev->bdaddr, &ev->bdaddr)) { 3275 + bt_dev_dbg(hdev, "Reject connection with same BD_ADDR %pMR\n", 3276 + &ev->bdaddr); 3277 + hci_reject_conn(hdev, &ev->bdaddr); 3278 + return; 3279 + } 3280 + 3273 3281 mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, ev->link_type, 3274 3282 &flags); 3275 3283 ··· 4754 4742 if (!conn) 4755 4743 goto unlock; 4756 4744 4745 + /* Ignore NULL link key against CVE-2020-26555 */ 4746 + if (!crypto_memneq(ev->link_key, ZERO_KEY, HCI_LINK_KEY_SIZE)) { 4747 + bt_dev_dbg(hdev, "Ignore NULL link key (ZERO KEY) for %pMR", 4748 + &ev->bdaddr); 4749 + hci_disconnect(conn, HCI_ERROR_AUTH_FAILURE); 4750 + hci_conn_drop(conn); 4751 + goto unlock; 4752 + } 4753 + 4757 4754 hci_conn_hold(conn); 4758 4755 conn->disc_timeout = HCI_DISCONN_TIMEOUT; 4759 4756 hci_conn_drop(conn); ··· 5295 5274 * available, then do not declare that OOB data is 5296 5275 * present. 5297 5276 */ 5298 - if (!memcmp(data->rand256, ZERO_KEY, 16) || 5299 - !memcmp(data->hash256, ZERO_KEY, 16)) 5277 + if (!crypto_memneq(data->rand256, ZERO_KEY, 16) || 5278 + !crypto_memneq(data->hash256, ZERO_KEY, 16)) 5300 5279 return 0x00; 5301 5280 5302 5281 return 0x02; ··· 5306 5285 * not supported by the hardware, then check that if 5307 5286 * P-192 data values are present. 
5308 5287 */ 5309 - if (!memcmp(data->rand192, ZERO_KEY, 16) || 5310 - !memcmp(data->hash192, ZERO_KEY, 16)) 5288 + if (!crypto_memneq(data->rand192, ZERO_KEY, 16) || 5289 + !crypto_memneq(data->hash192, ZERO_KEY, 16)) 5311 5290 return 0x00; 5312 5291 5313 5292 return 0x01; ··· 5324 5303 hci_dev_lock(hdev); 5325 5304 5326 5305 conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr); 5327 - if (!conn) 5306 + if (!conn || !hci_conn_ssp_enabled(conn)) 5328 5307 goto unlock; 5329 5308 5330 5309 hci_conn_hold(conn); ··· 5571 5550 hci_dev_lock(hdev); 5572 5551 5573 5552 conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &ev->bdaddr); 5574 - if (!conn) 5553 + if (!conn || !hci_conn_ssp_enabled(conn)) 5575 5554 goto unlock; 5576 5555 5577 5556 /* Reset the authentication requirement to unknown */ ··· 7042 7021 hci_dev_unlock(hdev); 7043 7022 } 7044 7023 7024 + static int hci_iso_term_big_sync(struct hci_dev *hdev, void *data) 7025 + { 7026 + u8 handle = PTR_UINT(data); 7027 + 7028 + return hci_le_terminate_big_sync(hdev, handle, 7029 + HCI_ERROR_LOCAL_HOST_TERM); 7030 + } 7031 + 7045 7032 static void hci_le_create_big_complete_evt(struct hci_dev *hdev, void *data, 7046 7033 struct sk_buff *skb) 7047 7034 { ··· 7094 7065 rcu_read_lock(); 7095 7066 } 7096 7067 7068 + rcu_read_unlock(); 7069 + 7097 7070 if (!ev->status && !i) 7098 7071 /* If no BISes have been connected for the BIG, 7099 7072 * terminate. This is in case all bound connections 7100 7073 * have been closed before the BIG creation 7101 7074 * has completed. 7102 7075 */ 7103 - hci_le_terminate_big_sync(hdev, ev->handle, 7104 - HCI_ERROR_LOCAL_HOST_TERM); 7076 + hci_cmd_sync_queue(hdev, hci_iso_term_big_sync, 7077 + UINT_PTR(ev->handle), NULL); 7105 7078 7106 - rcu_read_unlock(); 7107 7079 hci_dev_unlock(hdev); 7108 7080 } 7109 7081
+2 -1
net/bluetooth/hci_sock.c
··· 488 488 ni->type = hdev->dev_type; 489 489 ni->bus = hdev->bus; 490 490 bacpy(&ni->bdaddr, &hdev->bdaddr); 491 - memcpy(ni->name, hdev->name, 8); 491 + memcpy_and_pad(ni->name, sizeof(ni->name), hdev->name, 492 + strnlen(hdev->name, sizeof(ni->name)), '\0'); 492 493 493 494 opcode = cpu_to_le16(HCI_MON_NEW_INDEX); 494 495 break;
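The hci_sock change switches from a fixed 8-byte memcpy to memcpy_and_pad(), so a short hdev->name no longer carries stale trailing bytes into the non-NUL-terminated 8-byte monitor field. A user-space sketch of the kernel helper's behaviour (the real memcpy_and_pad lives in include/linux/string.h; this reimplementation is for illustration only):

```c
#include <string.h>

/* Sketch of memcpy_and_pad(): copy count bytes from src, then fill
 * the remainder of dest with pad; count is clamped to dest_len. */
static void memcpy_and_pad(void *dest, size_t dest_len,
			   const void *src, size_t count, int pad)
{
	if (count > dest_len)
		count = dest_len;
	memcpy(dest, src, count);
	memset((char *)dest + count, pad, dest_len - count);
}
```

With `strnlen(hdev->name, sizeof(ni->name))` as the count, every byte of `ni->name` is defined regardless of the name's length.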
+12 -14
net/bluetooth/hci_sync.c
··· 5369 5369 {
5370 5370 int err = 0;
5371 5371 u16 handle = conn->handle;
5372 + bool disconnect = false;
5372 5373 struct hci_conn *c;
5373 5374
5374 5375 switch (conn->state) {
··· 5400 5399 hci_dev_unlock(hdev);
5401 5400 return 0;
5402 5401 case BT_BOUND:
5403 - hci_dev_lock(hdev);
5404 - hci_conn_failed(conn, reason);
5405 - hci_dev_unlock(hdev);
5406 - return 0;
5402 + break;
5407 5403 default:
5408 - hci_dev_lock(hdev);
5409 - conn->state = BT_CLOSED;
5410 - hci_disconn_cfm(conn, reason);
5411 - hci_conn_del(conn);
5412 - hci_dev_unlock(hdev);
5413 - return 0;
5404 + disconnect = true;
5405 + break;
5414 5406 }
5415 5407
5416 5408 hci_dev_lock(hdev);
5417 5409
5418 - /* Check if the connection hasn't been cleanup while waiting
5419 - * commands to complete.
5420 - */
5410 + /* Check if the connection has been cleaned up concurrently */
5421 5411 c = hci_conn_hash_lookup_handle(hdev, handle);
5422 5412 if (!c || c != conn) {
5423 5413 err = 0;
··· 5420 5428 * or in case of LE it was still scanning so it can be cleanup
5421 5429 * safely.
5422 5430 */
5423 - hci_conn_failed(conn, reason);
5431 + if (disconnect) {
5432 + conn->state = BT_CLOSED;
5433 + hci_disconn_cfm(conn, reason);
5434 + hci_conn_del(conn);
5435 + } else {
5436 + hci_conn_failed(conn, reason);
5437 + }
5424 5438
5425 5439 unlock:
5426 5440 hci_dev_unlock(hdev);
+2 -2
net/ceph/messenger.c
··· 459 459 set_sock_callbacks(sock, con); 460 460 461 461 con_sock_state_connecting(con); 462 - ret = sock->ops->connect(sock, (struct sockaddr *)&ss, sizeof(ss), 463 - O_NONBLOCK); 462 + ret = kernel_connect(sock, (struct sockaddr *)&ss, sizeof(ss), 463 + O_NONBLOCK); 464 464 if (ret == -EINPROGRESS) { 465 465 dout("connect %s EINPROGRESS sk_state = %u\n", 466 466 ceph_pr_addr(&con->peer_addr),
+50 -15
net/core/dev.c
··· 345 345 static void __netdev_name_node_alt_destroy(struct netdev_name_node *name_node) 346 346 { 347 347 list_del(&name_node->list); 348 - netdev_name_node_del(name_node); 349 348 kfree(name_node->name); 350 349 netdev_name_node_free(name_node); 351 350 } ··· 363 364 if (name_node == dev->name_node || name_node->dev != dev) 364 365 return -EINVAL; 365 366 367 + netdev_name_node_del(name_node); 368 + synchronize_rcu(); 366 369 __netdev_name_node_alt_destroy(name_node); 367 370 368 371 return 0; ··· 381 380 /* Device list insertion */ 382 381 static void list_netdevice(struct net_device *dev) 383 382 { 383 + struct netdev_name_node *name_node; 384 384 struct net *net = dev_net(dev); 385 385 386 386 ASSERT_RTNL(); ··· 392 390 hlist_add_head_rcu(&dev->index_hlist, 393 391 dev_index_hash(net, dev->ifindex)); 394 392 write_unlock(&dev_base_lock); 393 + 394 + netdev_for_each_altname(dev, name_node) 395 + netdev_name_node_add(net, name_node); 396 + 395 397 /* We reserved the ifindex, this can't fail */ 396 398 WARN_ON(xa_store(&net->dev_by_index, dev->ifindex, dev, GFP_KERNEL)); 397 399 ··· 407 401 */ 408 402 static void unlist_netdevice(struct net_device *dev, bool lock) 409 403 { 404 + struct netdev_name_node *name_node; 410 405 struct net *net = dev_net(dev); 411 406 412 407 ASSERT_RTNL(); 413 408 414 409 xa_erase(&net->dev_by_index, dev->ifindex); 410 + 411 + netdev_for_each_altname(dev, name_node) 412 + netdev_name_node_del(name_node); 415 413 416 414 /* Unlink dev from the device chain */ 417 415 if (lock) ··· 1096 1086 1097 1087 for_each_netdev(net, d) { 1098 1088 struct netdev_name_node *name_node; 1099 - list_for_each_entry(name_node, &d->name_node->list, list) { 1089 + 1090 + netdev_for_each_altname(d, name_node) { 1100 1091 if (!sscanf(name_node->name, name, &i)) 1101 1092 continue; 1102 1093 if (i < 0 || i >= max_netdevices) ··· 1132 1121 * for the digits, or if all bits are used. 
1133 1122 */ 1134 1123 return -ENFILE; 1124 + } 1125 + 1126 + static int dev_prep_valid_name(struct net *net, struct net_device *dev, 1127 + const char *want_name, char *out_name) 1128 + { 1129 + int ret; 1130 + 1131 + if (!dev_valid_name(want_name)) 1132 + return -EINVAL; 1133 + 1134 + if (strchr(want_name, '%')) { 1135 + ret = __dev_alloc_name(net, want_name, out_name); 1136 + return ret < 0 ? ret : 0; 1137 + } else if (netdev_name_in_use(net, want_name)) { 1138 + return -EEXIST; 1139 + } else if (out_name != want_name) { 1140 + strscpy(out_name, want_name, IFNAMSIZ); 1141 + } 1142 + 1143 + return 0; 1135 1144 } 1136 1145 1137 1146 static int dev_alloc_name_ns(struct net *net, ··· 1191 1160 static int dev_get_valid_name(struct net *net, struct net_device *dev, 1192 1161 const char *name) 1193 1162 { 1194 - BUG_ON(!net); 1163 + char buf[IFNAMSIZ]; 1164 + int ret; 1195 1165 1196 - if (!dev_valid_name(name)) 1197 - return -EINVAL; 1198 - 1199 - if (strchr(name, '%')) 1200 - return dev_alloc_name_ns(net, dev, name); 1201 - else if (netdev_name_in_use(net, name)) 1202 - return -EEXIST; 1203 - else if (dev->name != name) 1204 - strscpy(dev->name, name, IFNAMSIZ); 1205 - 1206 - return 0; 1166 + ret = dev_prep_valid_name(net, dev, name, buf); 1167 + if (ret >= 0) 1168 + strscpy(dev->name, buf, IFNAMSIZ); 1169 + return ret; 1207 1170 } 1208 1171 1209 1172 /** ··· 11106 11081 int __dev_change_net_namespace(struct net_device *dev, struct net *net, 11107 11082 const char *pat, int new_ifindex) 11108 11083 { 11084 + struct netdev_name_node *name_node; 11109 11085 struct net *net_old = dev_net(dev); 11086 + char new_name[IFNAMSIZ] = {}; 11110 11087 int err, new_nsid; 11111 11088 11112 11089 ASSERT_RTNL(); ··· 11135 11108 /* We get here if we can't use the current device name */ 11136 11109 if (!pat) 11137 11110 goto out; 11138 - err = dev_get_valid_name(net, dev, pat); 11111 + err = dev_prep_valid_name(net, dev, pat, new_name); 11139 11112 if (err < 0) 11140 11113 goto out; 
11141 11114 } 11115 + /* Check that none of the altnames conflicts. */ 11116 + err = -EEXIST; 11117 + netdev_for_each_altname(dev, name_node) 11118 + if (netdev_name_in_use(net, name_node->name)) 11119 + goto out; 11142 11120 11143 11121 /* Check that new_ifindex isn't used yet. */ 11144 11122 if (new_ifindex) { ··· 11210 11178 /* Send a netdev-add uevent to the new namespace */ 11211 11179 kobject_uevent(&dev->dev.kobj, KOBJ_ADD); 11212 11180 netdev_adjacent_add_links(dev); 11181 + 11182 + if (new_name[0]) /* Rename the netdev to prepared name */ 11183 + strscpy(dev->name, new_name, IFNAMSIZ); 11213 11184 11214 11185 /* Fixup kobjects */ 11215 11186 err = device_rename(&dev->dev, dev->name);
+3
net/core/dev.h
··· 62 62 int netdev_get_name(struct net *net, char *name, int ifindex); 63 63 int dev_change_name(struct net_device *dev, const char *newname); 64 64 65 + #define netdev_for_each_altname(dev, namenode) \ 66 + list_for_each_entry((namenode), &(dev)->name_node->list, list) 67 + 65 68 int netdev_name_node_alt_create(struct net_device *dev, const char *name); 66 69 int netdev_name_node_alt_destroy(struct net_device *dev, const char *name); 67 70
+7 -7
net/core/pktgen.c
··· 670 670 seq_puts(seq, " Flags: "); 671 671 672 672 for (i = 0; i < NR_PKT_FLAGS; i++) { 673 - if (i == F_FLOW_SEQ) 673 + if (i == FLOW_SEQ_SHIFT) 674 674 if (!pkt_dev->cflows) 675 675 continue; 676 676 677 - if (pkt_dev->flags & (1 << i)) 677 + if (pkt_dev->flags & (1 << i)) { 678 678 seq_printf(seq, "%s ", pkt_flag_names[i]); 679 - else if (i == F_FLOW_SEQ) 680 - seq_puts(seq, "FLOW_RND "); 681 - 682 679 #ifdef CONFIG_XFRM 683 - if (i == F_IPSEC && pkt_dev->spi) 684 - seq_printf(seq, "spi:%u", pkt_dev->spi); 680 + if (i == IPSEC_SHIFT && pkt_dev->spi) 681 + seq_printf(seq, "spi:%u ", pkt_dev->spi); 685 682 #endif 683 + } else if (i == FLOW_SEQ_SHIFT) { 684 + seq_puts(seq, "FLOW_RND "); 685 + } 686 686 } 687 687 688 688 seq_puts(seq, "\n");
+1 -3
net/core/rtnetlink.c
··· 5532 5532 rtnl_offload_xstats_get_size_hw_s_info_one(const struct net_device *dev, 5533 5533 enum netdev_offload_xstats_type type) 5534 5534 { 5535 - bool enabled = netdev_offload_xstats_enabled(dev, type); 5536 - 5537 5535 return nla_total_size(0) + 5538 5536 /* IFLA_OFFLOAD_XSTATS_HW_S_INFO_REQUEST */ 5539 5537 nla_total_size(sizeof(u8)) + 5540 5538 /* IFLA_OFFLOAD_XSTATS_HW_S_INFO_USED */ 5541 - (enabled ? nla_total_size(sizeof(u8)) : 0) + 5539 + nla_total_size(sizeof(u8)) + 5542 5540 0; 5543 5541 } 5544 5542
+7 -5
net/core/stream.c
··· 117 117 */ 118 118 int sk_stream_wait_memory(struct sock *sk, long *timeo_p) 119 119 { 120 - int err = 0; 120 + int ret, err = 0; 121 121 long vm_wait = 0; 122 122 long current_timeo = *timeo_p; 123 123 DEFINE_WAIT_FUNC(wait, woken_wake_function); ··· 142 142 143 143 set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 144 144 sk->sk_write_pending++; 145 - sk_wait_event(sk, &current_timeo, READ_ONCE(sk->sk_err) || 146 - (READ_ONCE(sk->sk_shutdown) & SEND_SHUTDOWN) || 147 - (sk_stream_memory_free(sk) && 148 - !vm_wait), &wait); 145 + ret = sk_wait_event(sk, &current_timeo, READ_ONCE(sk->sk_err) || 146 + (READ_ONCE(sk->sk_shutdown) & SEND_SHUTDOWN) || 147 + (sk_stream_memory_free(sk) && !vm_wait), 148 + &wait); 149 149 sk->sk_write_pending--; 150 + if (ret < 0) 151 + goto do_error; 150 152 151 153 if (vm_wait) { 152 154 vm_wait -= current_timeo;
+6 -26
net/ethtool/bitset.c
··· 431 431 ethnl_string_array_t names,
432 432 struct netlink_ext_ack *extack, bool *mod)
433 433 {
434 - u32 *orig_bitmap, *saved_bitmap = NULL;
435 434 struct nlattr *bit_attr;
436 435 bool no_mask;
437 - bool dummy;
438 436 int rem;
439 437 int ret;
440 438
··· 448 450 }
449 451
450 452 no_mask = tb[ETHTOOL_A_BITSET_NOMASK];
451 - if (no_mask) {
452 - unsigned int nwords = DIV_ROUND_UP(nbits, 32);
453 - unsigned int nbytes = nwords * sizeof(u32);
454 -
455 - /* The bitmap size is only the size of the map part without
456 - * its mask part.
457 - */
458 - saved_bitmap = kcalloc(nwords, sizeof(u32), GFP_KERNEL);
459 - if (!saved_bitmap)
460 - return -ENOMEM;
461 - memcpy(saved_bitmap, bitmap, nbytes);
462 - ethnl_bitmap32_clear(bitmap, 0, nbits, &dummy);
463 - orig_bitmap = saved_bitmap;
464 - } else {
465 - orig_bitmap = bitmap;
466 - }
453 + if (no_mask)
454 + ethnl_bitmap32_clear(bitmap, 0, nbits, mod);
467 455
468 456 nla_for_each_nested(bit_attr, tb[ETHTOOL_A_BITSET_BITS], rem) {
469 457 bool old_val, new_val;
··· 458 474 if (nla_type(bit_attr) != ETHTOOL_A_BITSET_BITS_BIT) {
459 475 NL_SET_ERR_MSG_ATTR(extack, bit_attr,
460 476 "only ETHTOOL_A_BITSET_BITS_BIT allowed in ETHTOOL_A_BITSET_BITS");
461 - ret = -EINVAL;
462 - goto out;
477 + return -EINVAL;
463 478 }
464 479 ret = ethnl_parse_bit(&idx, &new_val, nbits, bit_attr, no_mask,
465 480 names, extack);
466 481 if (ret < 0)
467 - goto out;
468 - old_val = orig_bitmap[idx / 32] & ((u32)1 << (idx % 32));
482 + return ret;
483 + old_val = bitmap[idx / 32] & ((u32)1 << (idx % 32));
469 484 if (new_val != old_val) {
470 485 if (new_val)
471 486 bitmap[idx / 32] |= ((u32)1 << (idx % 32));
··· 474 491 }
475 492 }
476 493
477 - ret = 0;
478 - out:
479 - kfree(saved_bitmap);
480 - return ret;
494 + return 0;
481 495 }
482 496
483 497 static int ethnl_compact_sanity_checks(unsigned int nbits,
+8 -2
net/ipv4/af_inet.c
··· 597 597 598 598 add_wait_queue(sk_sleep(sk), &wait); 599 599 sk->sk_write_pending += writebias; 600 - sk->sk_wait_pending++; 601 600 602 601 /* Basic assumption: if someone sets sk->sk_err, he _must_ 603 602 * change state of the socket from TCP_SYN_*. ··· 612 613 } 613 614 remove_wait_queue(sk_sleep(sk), &wait); 614 615 sk->sk_write_pending -= writebias; 615 - sk->sk_wait_pending--; 616 616 return timeo; 617 617 } 618 618 ··· 640 642 return -EINVAL; 641 643 642 644 if (uaddr->sa_family == AF_UNSPEC) { 645 + sk->sk_disconnects++; 643 646 err = sk->sk_prot->disconnect(sk, flags); 644 647 sock->state = err ? SS_DISCONNECTING : SS_UNCONNECTED; 645 648 goto out; ··· 695 696 int writebias = (sk->sk_protocol == IPPROTO_TCP) && 696 697 tcp_sk(sk)->fastopen_req && 697 698 tcp_sk(sk)->fastopen_req->data ? 1 : 0; 699 + int dis = sk->sk_disconnects; 698 700 699 701 /* Error code is set above */ 700 702 if (!timeo || !inet_wait_for_connect(sk, timeo, writebias)) ··· 704 704 err = sock_intr_errno(timeo); 705 705 if (signal_pending(current)) 706 706 goto out; 707 + 708 + if (dis != sk->sk_disconnects) { 709 + err = -EPIPE; 710 + goto out; 711 + } 707 712 } 708 713 709 714 /* Connection was closed by RST, timeout, ICMP error ··· 730 725 sock_error: 731 726 err = sock_error(sk) ? : -ECONNABORTED; 732 727 sock->state = SS_UNCONNECTED; 728 + sk->sk_disconnects++; 733 729 if (sk->sk_prot->disconnect(sk, flags)) 734 730 sock->state = SS_DISCONNECTING; 735 731 goto out;
+3 -1
net/ipv4/esp4.c
··· 732 732 skb->csum = csum_block_sub(skb->csum, csumdiff, 733 733 skb->len - trimlen); 734 734 } 735 - pskb_trim(skb, skb->len - trimlen); 735 + ret = pskb_trim(skb, skb->len - trimlen); 736 + if (unlikely(ret)) 737 + return ret; 736 738 737 739 ret = nexthdr[1]; 738 740
+9 -5
net/ipv4/fib_semantics.c
··· 1325 1325 unsigned char scope) 1326 1326 { 1327 1327 struct fib_nh *nh; 1328 + __be32 saddr; 1328 1329 1329 1330 if (nhc->nhc_family != AF_INET) 1330 1331 return inet_select_addr(nhc->nhc_dev, 0, scope); 1331 1332 1332 1333 nh = container_of(nhc, struct fib_nh, nh_common); 1333 - nh->nh_saddr = inet_select_addr(nh->fib_nh_dev, nh->fib_nh_gw4, scope); 1334 - nh->nh_saddr_genid = atomic_read(&net->ipv4.dev_addr_genid); 1334 + saddr = inet_select_addr(nh->fib_nh_dev, nh->fib_nh_gw4, scope); 1335 1335 1336 - return nh->nh_saddr; 1336 + WRITE_ONCE(nh->nh_saddr, saddr); 1337 + WRITE_ONCE(nh->nh_saddr_genid, atomic_read(&net->ipv4.dev_addr_genid)); 1338 + 1339 + return saddr; 1337 1340 } 1338 1341 1339 1342 __be32 fib_result_prefsrc(struct net *net, struct fib_result *res) ··· 1350 1347 struct fib_nh *nh; 1351 1348 1352 1349 nh = container_of(nhc, struct fib_nh, nh_common); 1353 - if (nh->nh_saddr_genid == atomic_read(&net->ipv4.dev_addr_genid)) 1354 - return nh->nh_saddr; 1350 + if (READ_ONCE(nh->nh_saddr_genid) == 1351 + atomic_read(&net->ipv4.dev_addr_genid)) 1352 + return READ_ONCE(nh->nh_saddr); 1355 1353 } 1356 1354 1357 1355 return fib_info_update_nhc_saddr(net, nhc, res->fi->fib_scope);
-1
net/ipv4/inet_connection_sock.c
··· 1145 1145 if (newsk) { 1146 1146 struct inet_connection_sock *newicsk = inet_csk(newsk); 1147 1147 1148 - newsk->sk_wait_pending = 0; 1149 1148 inet_sk_set_state(newsk, TCP_SYN_RECV); 1150 1149 newicsk->icsk_bind_hash = NULL; 1151 1150 newicsk->icsk_bind2_hash = NULL;
+9 -15
net/ipv4/inet_hashtables.c
··· 149 149 const struct sock *sk) 150 150 { 151 151 #if IS_ENABLED(CONFIG_IPV6) 152 - if (sk->sk_family != tb2->family) 153 - return false; 152 + if (sk->sk_family != tb2->family) { 153 + if (sk->sk_family == AF_INET) 154 + return ipv6_addr_v4mapped(&tb2->v6_rcv_saddr) && 155 + tb2->v6_rcv_saddr.s6_addr32[3] == sk->sk_rcv_saddr; 156 + 157 + return ipv6_addr_v4mapped(&sk->sk_v6_rcv_saddr) && 158 + sk->sk_v6_rcv_saddr.s6_addr32[3] == tb2->rcv_saddr; 159 + } 154 160 155 161 if (sk->sk_family == AF_INET6) 156 162 return ipv6_addr_equal(&tb2->v6_rcv_saddr, ··· 825 819 tb->l3mdev != l3mdev) 826 820 return false; 827 821 828 - #if IS_ENABLED(CONFIG_IPV6) 829 - if (sk->sk_family != tb->family) { 830 - if (sk->sk_family == AF_INET) 831 - return ipv6_addr_v4mapped(&tb->v6_rcv_saddr) && 832 - tb->v6_rcv_saddr.s6_addr32[3] == sk->sk_rcv_saddr; 833 - 834 - return false; 835 - } 836 - 837 - if (sk->sk_family == AF_INET6) 838 - return ipv6_addr_equal(&tb->v6_rcv_saddr, &sk->sk_v6_rcv_saddr); 839 - #endif 840 - return tb->rcv_saddr == sk->sk_rcv_saddr; 822 + return inet_bind2_bucket_addr_match(tb, sk); 841 823 } 842 824 843 825 bool inet_bind2_bucket_match_addr_any(const struct inet_bind2_bucket *tb, const struct net *net,
+8 -8
net/ipv4/tcp.c
··· 831 831 */ 832 832 if (!skb_queue_empty(&sk->sk_receive_queue)) 833 833 break; 834 - sk_wait_data(sk, &timeo, NULL); 834 + ret = sk_wait_data(sk, &timeo, NULL); 835 + if (ret < 0) 836 + break; 835 837 if (signal_pending(current)) { 836 838 ret = sock_intr_errno(timeo); 837 839 break; ··· 2444 2442 __sk_flush_backlog(sk); 2445 2443 } else { 2446 2444 tcp_cleanup_rbuf(sk, copied); 2447 - sk_wait_data(sk, &timeo, last); 2445 + err = sk_wait_data(sk, &timeo, last); 2446 + if (err < 0) { 2447 + err = copied ? : err; 2448 + goto out; 2449 + } 2448 2450 } 2449 2451 2450 2452 if ((flags & MSG_PEEK) && ··· 2971 2965 struct tcp_sock *tp = tcp_sk(sk); 2972 2966 int old_state = sk->sk_state; 2973 2967 u32 seq; 2974 - 2975 - /* Deny disconnect if other threads are blocked in sk_wait_event() 2976 - * or inet_wait_for_connect(). 2977 - */ 2978 - if (sk->sk_wait_pending) 2979 - return -EBUSY; 2980 2968 2981 2969 if (old_state != TCP_CLOSE) 2982 2970 tcp_set_state(sk, TCP_CLOSE);
+12
net/ipv4/tcp_bpf.c
··· 307 307 } 308 308 309 309 data = tcp_msg_wait_data(sk, psock, timeo); 310 + if (data < 0) { 311 + copied = data; 312 + goto unlock; 313 + } 310 314 if (data && !sk_psock_queue_empty(psock)) 311 315 goto msg_bytes_ready; 312 316 copied = -EAGAIN; ··· 321 317 tcp_rcv_space_adjust(sk); 322 318 if (copied > 0) 323 319 __tcp_cleanup_rbuf(sk, copied); 320 + 321 + unlock: 324 322 release_sock(sk); 325 323 sk_psock_put(sk, psock); 326 324 return copied; ··· 357 351 358 352 timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT); 359 353 data = tcp_msg_wait_data(sk, psock, timeo); 354 + if (data < 0) { 355 + ret = data; 356 + goto unlock; 357 + } 360 358 if (data) { 361 359 if (!sk_psock_queue_empty(psock)) 362 360 goto msg_bytes_ready; ··· 371 361 copied = -EAGAIN; 372 362 } 373 363 ret = copied; 364 + 365 + unlock: 374 366 release_sock(sk); 375 367 sk_psock_put(sk, psock); 376 368 return ret;
+1
net/ipv4/tcp_ipv4.c
··· 1870 1870 #ifdef CONFIG_TLS_DEVICE 1871 1871 tail->decrypted != skb->decrypted || 1872 1872 #endif 1873 + !mptcp_skb_can_collapse(tail, skb) || 1873 1874 thtail->doff != th->doff || 1874 1875 memcmp(thtail + 1, th + 1, hdrlen - sizeof(*th))) 1875 1876 goto no_coalesce;
+19 -6
net/ipv4/tcp_output.c
··· 2535 2535 return true; 2536 2536 } 2537 2537 2538 + static bool tcp_rtx_queue_empty_or_single_skb(const struct sock *sk) 2539 + { 2540 + const struct rb_node *node = sk->tcp_rtx_queue.rb_node; 2541 + 2542 + /* No skb in the rtx queue. */ 2543 + if (!node) 2544 + return true; 2545 + 2546 + /* Only one skb in rtx queue. */ 2547 + return !node->rb_left && !node->rb_right; 2548 + } 2549 + 2538 2550 /* TCP Small Queues : 2539 2551 * Control number of packets in qdisc/devices to two packets / or ~1 ms. 2540 2552 * (These limits are doubled for retransmits) ··· 2585 2573 limit += extra_bytes; 2586 2574 } 2587 2575 if (refcount_read(&sk->sk_wmem_alloc) > limit) { 2588 - /* Always send skb if rtx queue is empty. 2576 + /* Always send skb if rtx queue is empty or has one skb. 2589 2577 * No need to wait for TX completion to call us back, 2590 2578 * after softirq/tasklet schedule. 2591 2579 * This helps when TX completions are delayed too much. 2592 2580 */ 2593 - if (tcp_rtx_queue_empty(sk)) 2581 + if (tcp_rtx_queue_empty_or_single_skb(sk)) 2594 2582 return false; 2595 2583 2596 2584 set_bit(TSQ_THROTTLED, &sk->sk_tsq_flags); ··· 2794 2782 { 2795 2783 struct inet_connection_sock *icsk = inet_csk(sk); 2796 2784 struct tcp_sock *tp = tcp_sk(sk); 2797 - u32 timeout, rto_delta_us; 2785 + u32 timeout, timeout_us, rto_delta_us; 2798 2786 int early_retrans; 2799 2787 2800 2788 /* Don't do any loss probe on a Fast Open connection before 3WHS ··· 2818 2806 * sample is available then probe after TCP_TIMEOUT_INIT. 2819 2807 */ 2820 2808 if (tp->srtt_us) { 2821 - timeout = usecs_to_jiffies(tp->srtt_us >> 2); 2809 + timeout_us = tp->srtt_us >> 2; 2822 2810 if (tp->packets_out == 1) 2823 - timeout += TCP_RTO_MIN; 2811 + timeout_us += tcp_rto_min_us(sk); 2824 2812 else 2825 - timeout += TCP_TIMEOUT_MIN; 2813 + timeout_us += TCP_TIMEOUT_MIN_US; 2814 + timeout = usecs_to_jiffies(timeout_us); 2826 2815 } else { 2827 2816 timeout = TCP_TIMEOUT_INIT; 2828 2817 }
+1 -1
net/ipv4/tcp_recovery.c
··· 104 104 tp->rack.advanced = 0; 105 105 tcp_rack_detect_loss(sk, &timeout); 106 106 if (timeout) { 107 - timeout = usecs_to_jiffies(timeout) + TCP_TIMEOUT_MIN; 107 + timeout = usecs_to_jiffies(timeout + TCP_TIMEOUT_MIN_US); 108 108 inet_csk_reset_xmit_timer(sk, ICSK_TIME_REO_TIMEOUT, 109 109 timeout, inet_csk(sk)->icsk_rto); 110 110 }
+3 -1
net/ipv6/esp6.c
··· 770 770 skb->csum = csum_block_sub(skb->csum, csumdiff, 771 771 skb->len - trimlen); 772 772 } 773 - pskb_trim(skb, skb->len - trimlen); 773 + ret = pskb_trim(skb, skb->len - trimlen); 774 + if (unlikely(ret)) 775 + return ret; 774 776 775 777 ret = nexthdr[1]; 776 778
+2 -2
net/ipv6/xfrm6_policy.c
··· 117 117 { 118 118 struct xfrm_dst *xdst = (struct xfrm_dst *)dst; 119 119 120 - if (likely(xdst->u.rt6.rt6i_idev)) 121 - in6_dev_put(xdst->u.rt6.rt6i_idev); 122 120 dst_destroy_metrics_generic(dst); 123 121 rt6_uncached_list_del(&xdst->u.rt6); 122 + if (likely(xdst->u.rt6.rt6i_idev)) 123 + in6_dev_put(xdst->u.rt6.rt6i_idev); 124 124 xfrm_dst_destroy(xdst); 125 125 } 126 126
+23 -20
net/mptcp/protocol.c
··· 1298 1298 if (copy == 0) { 1299 1299 u64 snd_una = READ_ONCE(msk->snd_una); 1300 1300 1301 - if (snd_una != msk->snd_nxt) { 1301 + if (snd_una != msk->snd_nxt || tcp_write_queue_tail(ssk)) { 1302 1302 tcp_remove_empty_skb(ssk); 1303 1303 return 0; 1304 1304 } ··· 1306 1306 zero_window_probe = true; 1307 1307 data_seq = snd_una - 1; 1308 1308 copy = 1; 1309 - 1310 - /* all mptcp-level data is acked, no skbs should be present into the 1311 - * ssk write queue 1312 - */ 1313 - WARN_ON_ONCE(reuse_skb); 1314 1309 } 1315 1310 1316 1311 copy = min_t(size_t, copy, info->limit - info->sent); ··· 1334 1339 if (reuse_skb) { 1335 1340 TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_PSH; 1336 1341 mpext->data_len += copy; 1337 - WARN_ON_ONCE(zero_window_probe); 1338 1342 goto out; 1339 1343 } 1340 1344 ··· 2348 2354 #define MPTCP_CF_PUSH BIT(1) 2349 2355 #define MPTCP_CF_FASTCLOSE BIT(2) 2350 2356 2357 + /* be sure to send a reset only if the caller asked for it, also 2358 + * clean completely the subflow status when the subflow reaches 2359 + * TCP_CLOSE state 2360 + */ 2361 + static void __mptcp_subflow_disconnect(struct sock *ssk, 2362 + struct mptcp_subflow_context *subflow, 2363 + unsigned int flags) 2364 + { 2365 + if (((1 << ssk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) || 2366 + (flags & MPTCP_CF_FASTCLOSE)) { 2367 + /* The MPTCP code never wait on the subflow sockets, TCP-level 2368 + * disconnect should never fail 2369 + */ 2370 + WARN_ON_ONCE(tcp_disconnect(ssk, 0)); 2371 + mptcp_subflow_ctx_reset(subflow); 2372 + } else { 2373 + tcp_shutdown(ssk, SEND_SHUTDOWN); 2374 + } 2375 + } 2376 + 2351 2377 /* subflow sockets can be either outgoing (connect) or incoming 2352 2378 * (accept). 
2353 2379 * ··· 2405 2391 lock_sock_nested(ssk, SINGLE_DEPTH_NESTING); 2406 2392 2407 2393 if ((flags & MPTCP_CF_FASTCLOSE) && !__mptcp_check_fallback(msk)) { 2408 - /* be sure to force the tcp_disconnect() path, 2394 + /* be sure to force the tcp_close path 2409 2395 * to generate the egress reset 2410 2396 */ 2411 2397 ssk->sk_lingertime = 0; ··· 2415 2401 2416 2402 need_push = (flags & MPTCP_CF_PUSH) && __mptcp_retransmit_pending_data(sk); 2417 2403 if (!dispose_it) { 2418 - /* The MPTCP code never wait on the subflow sockets, TCP-level 2419 - * disconnect should never fail 2420 - */ 2421 - WARN_ON_ONCE(tcp_disconnect(ssk, 0)); 2422 - mptcp_subflow_ctx_reset(subflow); 2404 + __mptcp_subflow_disconnect(ssk, subflow, flags); 2423 2405 release_sock(ssk); 2424 2406 2425 2407 goto out; ··· 3108 3098 { 3109 3099 struct mptcp_sock *msk = mptcp_sk(sk); 3110 3100 3111 - /* Deny disconnect if other threads are blocked in sk_wait_event() 3112 - * or inet_wait_for_connect(). 3113 - */ 3114 - if (sk->sk_wait_pending) 3115 - return -EBUSY; 3116 - 3117 3101 /* We are on the fastopen error path. We can't call straight into the 3118 3102 * subflows cleanup code due to lock nesting (we are already under 3119 3103 * msk->firstsocket lock). ··· 3177 3173 inet_sk(nsk)->pinet6 = mptcp_inet6_sk(nsk); 3178 3174 #endif 3179 3175 3180 - nsk->sk_wait_pending = 0; 3181 3176 __mptcp_init_sock(nsk); 3182 3177 3183 3178 msk = mptcp_sk(nsk);
+34 -36
net/netfilter/nf_tables_api.c
··· 3166 3166 if (err < 0) 3167 3167 return err; 3168 3168 3169 - if (!tb[NFTA_EXPR_DATA]) 3169 + if (!tb[NFTA_EXPR_DATA] || !tb[NFTA_EXPR_NAME]) 3170 3170 return -EINVAL; 3171 3171 3172 3172 type = __nft_expr_type_get(ctx->family, tb[NFTA_EXPR_NAME]); ··· 5540 5540 const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); 5541 5541 unsigned char *b = skb_tail_pointer(skb); 5542 5542 struct nlattr *nest; 5543 - u64 timeout = 0; 5544 5543 5545 5544 nest = nla_nest_start_noflag(skb, NFTA_LIST_ELEM); 5546 5545 if (nest == NULL) ··· 5575 5576 htonl(*nft_set_ext_flags(ext)))) 5576 5577 goto nla_put_failure; 5577 5578 5578 - if (nft_set_ext_exists(ext, NFT_SET_EXT_TIMEOUT)) { 5579 - timeout = *nft_set_ext_timeout(ext); 5580 - if (nla_put_be64(skb, NFTA_SET_ELEM_TIMEOUT, 5581 - nf_jiffies64_to_msecs(timeout), 5582 - NFTA_SET_ELEM_PAD)) 5583 - goto nla_put_failure; 5584 - } else if (set->flags & NFT_SET_TIMEOUT) { 5585 - timeout = READ_ONCE(set->timeout); 5586 - } 5579 + if (nft_set_ext_exists(ext, NFT_SET_EXT_TIMEOUT) && 5580 + nla_put_be64(skb, NFTA_SET_ELEM_TIMEOUT, 5581 + nf_jiffies64_to_msecs(*nft_set_ext_timeout(ext)), 5582 + NFTA_SET_ELEM_PAD)) 5583 + goto nla_put_failure; 5587 5584 5588 5585 if (nft_set_ext_exists(ext, NFT_SET_EXT_EXPIRATION)) { 5589 5586 u64 expires, now = get_jiffies_64(); ··· 5594 5599 nf_jiffies64_to_msecs(expires), 5595 5600 NFTA_SET_ELEM_PAD)) 5596 5601 goto nla_put_failure; 5597 - 5598 - if (reset) 5599 - *nft_set_ext_expiration(ext) = now + timeout; 5600 5602 } 5601 5603 5602 5604 if (nft_set_ext_exists(ext, NFT_SET_EXT_USERDATA)) { ··· 7597 7605 return -1; 7598 7606 } 7599 7607 7608 + static void audit_log_obj_reset(const struct nft_table *table, 7609 + unsigned int base_seq, unsigned int nentries) 7610 + { 7611 + char *buf = kasprintf(GFP_ATOMIC, "%s:%u", table->name, base_seq); 7612 + 7613 + audit_log_nfcfg(buf, table->family, nentries, 7614 + AUDIT_NFT_OP_OBJ_RESET, GFP_ATOMIC); 7615 + kfree(buf); 7616 + } 7617 + 7600 7618 
struct nft_obj_filter { 7601 7619 char *table; 7602 7620 u32 type; ··· 7621 7619 struct net *net = sock_net(skb->sk); 7622 7620 int family = nfmsg->nfgen_family; 7623 7621 struct nftables_pernet *nft_net; 7622 + unsigned int entries = 0; 7624 7623 struct nft_object *obj; 7625 7624 bool reset = false; 7625 + int rc = 0; 7626 7626 7627 7627 if (NFNL_MSG_TYPE(cb->nlh->nlmsg_type) == NFT_MSG_GETOBJ_RESET) 7628 7628 reset = true; ··· 7637 7633 if (family != NFPROTO_UNSPEC && family != table->family) 7638 7634 continue; 7639 7635 7636 + entries = 0; 7640 7637 list_for_each_entry_rcu(obj, &table->objects, list) { 7641 7638 if (!nft_is_active(net, obj)) 7642 7639 goto cont; ··· 7653 7648 filter->type != NFT_OBJECT_UNSPEC && 7654 7649 obj->ops->type->type != filter->type) 7655 7650 goto cont; 7656 - if (reset) { 7657 - char *buf = kasprintf(GFP_ATOMIC, 7658 - "%s:%u", 7659 - table->name, 7660 - nft_net->base_seq); 7661 7651 7662 - audit_log_nfcfg(buf, 7663 - family, 7664 - obj->handle, 7665 - AUDIT_NFT_OP_OBJ_RESET, 7666 - GFP_ATOMIC); 7667 - kfree(buf); 7668 - } 7652 + rc = nf_tables_fill_obj_info(skb, net, 7653 + NETLINK_CB(cb->skb).portid, 7654 + cb->nlh->nlmsg_seq, 7655 + NFT_MSG_NEWOBJ, 7656 + NLM_F_MULTI | NLM_F_APPEND, 7657 + table->family, table, 7658 + obj, reset); 7659 + if (rc < 0) 7660 + break; 7669 7661 7670 - if (nf_tables_fill_obj_info(skb, net, NETLINK_CB(cb->skb).portid, 7671 - cb->nlh->nlmsg_seq, 7672 - NFT_MSG_NEWOBJ, 7673 - NLM_F_MULTI | NLM_F_APPEND, 7674 - table->family, table, 7675 - obj, reset) < 0) 7676 - goto done; 7677 - 7662 + entries++; 7678 7663 nl_dump_check_consistent(cb, nlmsg_hdr(skb)); 7679 7664 cont: 7680 7665 idx++; 7681 7666 } 7667 + if (reset && entries) 7668 + audit_log_obj_reset(table, nft_net->base_seq, entries); 7669 + if (rc < 0) 7670 + break; 7682 7671 } 7683 - done: 7684 7672 rcu_read_unlock(); 7685 7673 7686 7674 cb->args[0] = idx; ··· 7778 7780 7779 7781 audit_log_nfcfg(buf, 7780 7782 family, 7781 - obj->handle, 7783 + 1, 7782 
7784 AUDIT_NFT_OP_OBJ_RESET, 7783 7785 GFP_ATOMIC); 7784 7786 kfree(buf);
+1
net/netfilter/nft_inner.c
··· 298 298 int err; 299 299 300 300 if (!tb[NFTA_INNER_FLAGS] || 301 + !tb[NFTA_INNER_NUM] || 301 302 !tb[NFTA_INNER_HDRSIZE] || 302 303 !tb[NFTA_INNER_TYPE] || 303 304 !tb[NFTA_INNER_EXPR])
+1 -1
net/netfilter/nft_payload.c
··· 179 179 180 180 switch (priv->base) { 181 181 case NFT_PAYLOAD_LL_HEADER: 182 - if (!skb_mac_header_was_set(skb)) 182 + if (!skb_mac_header_was_set(skb) || skb_mac_header_len(skb) == 0) 183 183 goto err; 184 184 185 185 if (skb_vlan_tag_present(skb) &&
+1 -1
net/netfilter/nft_set_pipapo.h
··· 147 147 unsigned long * __percpu *scratch; 148 148 size_t bsize_max; 149 149 struct rcu_head rcu; 150 - struct nft_pipapo_field f[]; 150 + struct nft_pipapo_field f[] __counted_by(field_count); 151 151 }; 152 152 153 153 /**
+2
net/netfilter/nft_set_rbtree.c
··· 568 568 nft_rbtree_interval_end(this)) { 569 569 parent = parent->rb_right; 570 570 continue; 571 + } else if (nft_set_elem_expired(&rbe->ext)) { 572 + break; 571 573 } else if (!nft_set_elem_active(&rbe->ext, genmask)) { 572 574 parent = parent->rb_left; 573 575 continue;
+2
net/nfc/nci/spi.c
··· 151 151 int ret; 152 152 153 153 skb = nci_skb_alloc(nspi->ndev, 0, GFP_KERNEL); 154 + if (!skb) 155 + return -ENOMEM; 154 156 155 157 /* add the NCI SPI header to the start of the buffer */ 156 158 hdr = skb_push(skb, NCI_SPI_HDR_LEN);
+2 -3
net/rfkill/core.c
··· 1180 1180 init_waitqueue_head(&data->read_wait); 1181 1181 1182 1182 mutex_lock(&rfkill_global_mutex); 1183 - mutex_lock(&data->mtx); 1184 1183 /* 1185 1184 * start getting events from elsewhere but hold mtx to get 1186 1185 * startup events added first ··· 1191 1192 goto free; 1192 1193 rfkill_sync(rfkill); 1193 1194 rfkill_fill_event(&ev->ev, rfkill, RFKILL_OP_ADD); 1195 + mutex_lock(&data->mtx); 1194 1196 list_add_tail(&ev->list, &data->events); 1197 + mutex_unlock(&data->mtx); 1195 1198 } 1196 1199 list_add(&data->list, &rfkill_fds); 1197 - mutex_unlock(&data->mtx); 1198 1200 mutex_unlock(&rfkill_global_mutex); 1199 1201 1200 1202 file->private_data = data; ··· 1203 1203 return stream_open(inode, file); 1204 1204 1205 1205 free: 1206 - mutex_unlock(&data->mtx); 1207 1206 mutex_unlock(&rfkill_global_mutex); 1208 1207 mutex_destroy(&data->mtx); 1209 1208 list_for_each_entry_safe(ev, tmp, &data->events, list)
+2 -2
net/rfkill/rfkill-gpio.c
··· 108 108 109 109 rfkill->clk = devm_clk_get(&pdev->dev, NULL); 110 110 111 - gpio = devm_gpiod_get_optional(&pdev->dev, "reset", GPIOD_OUT_LOW); 111 + gpio = devm_gpiod_get_optional(&pdev->dev, "reset", GPIOD_ASIS); 112 112 if (IS_ERR(gpio)) 113 113 return PTR_ERR(gpio); 114 114 115 115 rfkill->reset_gpio = gpio; 116 116 117 - gpio = devm_gpiod_get_optional(&pdev->dev, "shutdown", GPIOD_OUT_LOW); 117 + gpio = devm_gpiod_get_optional(&pdev->dev, "shutdown", GPIOD_ASIS); 118 118 if (IS_ERR(gpio)) 119 119 return PTR_ERR(gpio); 120 120
+14 -4
net/sched/sch_hfsc.c
··· 902 902 cl->cl_flags |= HFSC_USC; 903 903 } 904 904 905 + static void 906 + hfsc_upgrade_rt(struct hfsc_class *cl) 907 + { 908 + cl->cl_fsc = cl->cl_rsc; 909 + rtsc_init(&cl->cl_virtual, &cl->cl_fsc, cl->cl_vt, cl->cl_total); 910 + cl->cl_flags |= HFSC_FSC; 911 + } 912 + 905 913 static const struct nla_policy hfsc_policy[TCA_HFSC_MAX + 1] = { 906 914 [TCA_HFSC_RSC] = { .len = sizeof(struct tc_service_curve) }, 907 915 [TCA_HFSC_FSC] = { .len = sizeof(struct tc_service_curve) }, ··· 1019 1011 if (parent == NULL) 1020 1012 return -ENOENT; 1021 1013 } 1022 - if (!(parent->cl_flags & HFSC_FSC) && parent != &q->root) { 1023 - NL_SET_ERR_MSG(extack, "Invalid parent - parent class must have FSC"); 1024 - return -EINVAL; 1025 - } 1026 1014 1027 1015 if (classid == 0 || TC_H_MAJ(classid ^ sch->handle) != 0) 1028 1016 return -EINVAL; ··· 1069 1065 cl->cf_tree = RB_ROOT; 1070 1066 1071 1067 sch_tree_lock(sch); 1068 + /* Check if the inner class is a misconfigured 'rt' */ 1069 + if (!(parent->cl_flags & HFSC_FSC) && parent != &q->root) { 1070 + NL_SET_ERR_MSG(extack, 1071 + "Forced curve change on parent 'rt' to 'sc'"); 1072 + hfsc_upgrade_rt(parent); 1073 + } 1072 1074 qdisc_class_hash_insert(&q->clhash, &cl->cl_common); 1073 1075 list_add_tail(&cl->siblings, &parent->children); 1074 1076 if (parent->level == 0)
+3 -2
net/smc/af_smc.c
··· 1201 1201 (struct smc_clc_msg_accept_confirm_v2 *)aclc; 1202 1202 struct smc_clc_first_contact_ext *fce = 1203 1203 smc_get_clc_first_contact_ext(clc_v2, false); 1204 + struct net *net = sock_net(&smc->sk); 1204 1205 int rc; 1205 1206 1206 1207 if (!ini->first_contact_peer || aclc->hdr.version == SMC_V1) ··· 1211 1210 memcpy(ini->smcrv2.nexthop_mac, &aclc->r0.lcl.mac, ETH_ALEN); 1212 1211 ini->smcrv2.uses_gateway = false; 1213 1212 } else { 1214 - if (smc_ib_find_route(smc->clcsock->sk->sk_rcv_saddr, 1213 + if (smc_ib_find_route(net, smc->clcsock->sk->sk_rcv_saddr, 1215 1214 smc_ib_gid_to_ipv4(aclc->r0.lcl.gid), 1216 1215 ini->smcrv2.nexthop_mac, 1217 1216 &ini->smcrv2.uses_gateway)) ··· 2362 2361 smc_find_ism_store_rc(rc, ini); 2363 2362 return (!rc) ? 0 : ini->rc; 2364 2363 } 2365 - return SMC_CLC_DECL_NOSMCDEV; 2364 + return prfx_rc; 2366 2365 } 2367 2366 2368 2367 /* listen worker: finish RDMA setup */
+4 -3
net/smc/smc_ib.c
··· 193 193 return smcibdev->pattr[ibport - 1].state == IB_PORT_ACTIVE; 194 194 } 195 195 196 - int smc_ib_find_route(__be32 saddr, __be32 daddr, 196 + int smc_ib_find_route(struct net *net, __be32 saddr, __be32 daddr, 197 197 u8 nexthop_mac[], u8 *uses_gateway) 198 198 { 199 199 struct neighbour *neigh = NULL; ··· 205 205 206 206 if (daddr == cpu_to_be32(INADDR_NONE)) 207 207 goto out; 208 - rt = ip_route_output_flow(&init_net, &fl4, NULL); 208 + rt = ip_route_output_flow(net, &fl4, NULL); 209 209 if (IS_ERR(rt)) 210 210 goto out; 211 211 if (rt->rt_uses_gateway && rt->rt_gw_family != AF_INET) ··· 235 235 if (smcrv2 && attr->gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP && 236 236 smc_ib_gid_to_ipv4((u8 *)&attr->gid) != cpu_to_be32(INADDR_NONE)) { 237 237 struct in_device *in_dev = __in_dev_get_rcu(ndev); 238 + struct net *net = dev_net(ndev); 238 239 const struct in_ifaddr *ifa; 239 240 bool subnet_match = false; 240 241 ··· 249 248 } 250 249 if (!subnet_match) 251 250 goto out; 252 - if (smcrv2->daddr && smc_ib_find_route(smcrv2->saddr, 251 + if (smcrv2->daddr && smc_ib_find_route(net, smcrv2->saddr, 253 252 smcrv2->daddr, 254 253 smcrv2->nexthop_mac, 255 254 &smcrv2->uses_gateway))
+1 -1
net/smc/smc_ib.h
··· 112 112 int smc_ib_determine_gid(struct smc_ib_device *smcibdev, u8 ibport, 113 113 unsigned short vlan_id, u8 gid[], u8 *sgid_index, 114 114 struct smc_init_info_smcrv2 *smcrv2); 115 - int smc_ib_find_route(__be32 saddr, __be32 daddr, 115 + int smc_ib_find_route(struct net *net, __be32 saddr, __be32 daddr, 116 116 u8 nexthop_mac[], u8 *uses_gateway); 117 117 bool smc_ib_is_valid_local_systemid(void); 118 118 int smcr_nl_get_device(struct sk_buff *skb, struct netlink_callback *cb);
+7 -3
net/tls/tls_main.c
··· 140 140 141 141 int wait_on_pending_writer(struct sock *sk, long *timeo) 142 142 { 143 - int rc = 0; 144 143 DEFINE_WAIT_FUNC(wait, woken_wake_function); 144 + int ret, rc = 0; 145 145 146 146 add_wait_queue(sk_sleep(sk), &wait); 147 147 while (1) { ··· 155 155 break; 156 156 } 157 157 158 - if (sk_wait_event(sk, timeo, 159 - !READ_ONCE(sk->sk_write_pending), &wait)) 158 + ret = sk_wait_event(sk, timeo, 159 + !READ_ONCE(sk->sk_write_pending), &wait); 160 + if (ret) { 161 + if (ret < 0) 162 + rc = ret; 160 163 break; 164 + } 161 165 } 162 166 remove_wait_queue(sk_sleep(sk), &wait); 163 167 return rc;
+13 -6
net/tls/tls_sw.c
··· 1291 1291 struct tls_context *tls_ctx = tls_get_ctx(sk); 1292 1292 struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx); 1293 1293 DEFINE_WAIT_FUNC(wait, woken_wake_function); 1294 + int ret = 0; 1294 1295 long timeo; 1295 1296 1296 1297 timeo = sock_rcvtimeo(sk, nonblock); ··· 1302 1301 1303 1302 if (sk->sk_err) 1304 1303 return sock_error(sk); 1304 + 1305 + if (ret < 0) 1306 + return ret; 1305 1307 1306 1308 if (!skb_queue_empty(&sk->sk_receive_queue)) { 1307 1309 tls_strp_check_rcv(&ctx->strp); ··· 1324 1320 released = true; 1325 1321 add_wait_queue(sk_sleep(sk), &wait); 1326 1322 sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk); 1327 - sk_wait_event(sk, &timeo, 1328 - tls_strp_msg_ready(ctx) || 1329 - !sk_psock_queue_empty(psock), 1330 - &wait); 1323 + ret = sk_wait_event(sk, &timeo, 1324 + tls_strp_msg_ready(ctx) || 1325 + !sk_psock_queue_empty(psock), 1326 + &wait); 1331 1327 sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk); 1332 1328 remove_wait_queue(sk_sleep(sk), &wait); 1333 1329 ··· 1856 1852 bool nonblock) 1857 1853 { 1858 1854 long timeo; 1855 + int ret; 1859 1856 1860 1857 timeo = sock_rcvtimeo(sk, nonblock); 1861 1858 ··· 1866 1861 ctx->reader_contended = 1; 1867 1862 1868 1863 add_wait_queue(&ctx->wq, &wait); 1869 - sk_wait_event(sk, &timeo, 1870 - !READ_ONCE(ctx->reader_present), &wait); 1864 + ret = sk_wait_event(sk, &timeo, 1865 + !READ_ONCE(ctx->reader_present), &wait); 1871 1866 remove_wait_queue(&ctx->wq, &wait); 1872 1867 1873 1868 if (timeo <= 0) 1874 1869 return -EAGAIN; 1875 1870 if (signal_pending(current)) 1876 1871 return sock_intr_errno(timeo); 1872 + if (ret < 0) 1873 + return ret; 1877 1874 } 1878 1875 1879 1876 WRITE_ONCE(ctx->reader_present, 1);
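Both tls hunks stop discarding the value of sk_wait_event(): a positive return still means the condition was met, but a negative one is now an error to propagate rather than being folded into "woken up". The control flow, sketched in Python with a hypothetical wait helper (return-value convention assumed: >0 condition met, 0 keep waiting, <0 error):

```python
# Sketch of the tls_main/tls_sw pattern above: a wait helper returns
# >0 when the condition is met, 0 on "keep waiting", <0 on error.
# The fix is to stop treating any non-zero return as success.

def wait_until(cond, budget):
    """Hypothetical stand-in for sk_wait_event()."""
    if budget < 0:
        return -4                  # e.g. -EINTR
    return 1 if cond() else 0

def wait_on_pending_writer(pending, budget):
    rc = 0
    while True:
        ret = wait_until(lambda: not pending(), budget)
        if ret:
            if ret < 0:
                rc = ret           # propagate the error, as the fix does
            break
        budget -= 1
    return rc
```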
+1 -1
net/wireless/core.c
··· 1613 1613 list_add_tail(&work->entry, &rdev->wiphy_work_list); 1614 1614 spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); 1615 1615 1616 - schedule_work(&rdev->wiphy_work); 1616 + queue_work(system_unbound_wq, &rdev->wiphy_work); 1617 1617 } 1618 1618 EXPORT_SYMBOL_GPL(wiphy_work_queue); 1619 1619
+10 -12
net/xfrm/xfrm_interface_core.c
··· 380 380 skb->dev = dev; 381 381 382 382 if (err) { 383 - dev->stats.rx_errors++; 384 - dev->stats.rx_dropped++; 383 + DEV_STATS_INC(dev, rx_errors); 384 + DEV_STATS_INC(dev, rx_dropped); 385 385 386 386 return 0; 387 387 } ··· 426 426 xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl) 427 427 { 428 428 struct xfrm_if *xi = netdev_priv(dev); 429 - struct net_device_stats *stats = &xi->dev->stats; 430 429 struct dst_entry *dst = skb_dst(skb); 431 430 unsigned int length = skb->len; 432 431 struct net_device *tdev; ··· 472 473 tdev = dst->dev; 473 474 474 475 if (tdev == dev) { 475 - stats->collisions++; 476 + DEV_STATS_INC(dev, collisions); 476 477 net_warn_ratelimited("%s: Local routing loop detected!\n", 477 478 dev->name); 478 479 goto tx_err_dst_release; ··· 511 512 if (net_xmit_eval(err) == 0) { 512 513 dev_sw_netstats_tx_add(dev, 1, length); 513 514 } else { 514 - stats->tx_errors++; 515 + DEV_STATS_INC(dev, tx_errors); 515 - stats->tx_aborted_errors++; 516 + DEV_STATS_INC(dev, tx_aborted_errors); 516 517 } 517 518 518 519 return 0; 519 520 tx_err_link_failure: 520 - stats->tx_carrier_errors++; 521 + DEV_STATS_INC(dev, tx_carrier_errors); 521 522 dst_link_failure(skb); 522 523 tx_err_dst_release: 523 524 dst_release(dst); ··· 527 528 static netdev_tx_t xfrmi_xmit(struct sk_buff *skb, struct net_device *dev) 528 529 { 529 530 struct xfrm_if *xi = netdev_priv(dev); 530 - struct net_device_stats *stats = &xi->dev->stats; 531 531 struct dst_entry *dst = skb_dst(skb); 532 532 struct flowi fl; 533 533 int ret; ··· 543 545 dst = ip6_route_output(dev_net(dev), NULL, &fl.u.ip6); 544 546 if (dst->error) { 545 547 dst_release(dst); 546 - stats->tx_carrier_errors++; 548 + DEV_STATS_INC(dev, tx_carrier_errors); 547 549 goto tx_err; 548 550 } 549 551 skb_dst_set(skb, dst); ··· 559 561 fl.u.ip4.flowi4_flags |= FLOWI_FLAG_ANYSRC; 560 562 rt = __ip_route_output_key(dev_net(dev), &fl.u.ip4); 561 563 if (IS_ERR(rt)) { 562 - stats->tx_carrier_errors++; 564 + DEV_STATS_INC(dev, tx_carrier_errors); 563 565 goto tx_err; 564 566 } 565 567 skb_dst_set(skb, &rt->dst); ··· 578 580 return NETDEV_TX_OK; 579 581 580 582 tx_err: 581 - stats->tx_errors++; 582 - stats->tx_dropped++; 583 + DEV_STATS_INC(dev, tx_errors); 584 + DEV_STATS_INC(dev, tx_dropped); 583 585 kfree_skb(skb); 584 586 return NETDEV_TX_OK; 585 587 }
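The xfrm_interface hunks replace plain `dev->stats.field++` with DEV_STATS_INC(), which increments an atomic counter so concurrent TX/RX paths cannot lose updates. The interface shape, modelled in Python (in the kernel the macro performs an atomic increment; the lock below is only a stand-in for that atomicity):

```python
import threading

# Model of the DEV_STATS_INC() conversion above: every stats update is
# funneled through one helper that increments atomically, instead of
# open-coded non-atomic `dev->stats.field++` scattered through the driver.

class NetDevice:
    def __init__(self):
        self._lock = threading.Lock()   # stands in for the atomic increment
        self.stats = {"tx_errors": 0, "tx_dropped": 0, "tx_carrier_errors": 0}

def dev_stats_inc(dev, field):
    with dev._lock:                     # atomic read-modify-write
        dev.stats[field] += 1

def xmit(dev, route_ok):
    # toy transmit path mirroring the error bookkeeping in the diff
    if not route_ok:
        dev_stats_inc(dev, "tx_carrier_errors")
        dev_stats_inc(dev, "tx_errors")
        dev_stats_inc(dev, "tx_dropped")
        return False
    return True

dev = NetDevice()
xmit(dev, route_ok=False)
xmit(dev, route_ok=True)
```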
+16 -11
net/xfrm/xfrm_policy.c
··· 851 851 struct hlist_node *newpos = NULL; 852 852 bool matches_s, matches_d; 853 853 854 - if (!policy->bydst_reinsert) 854 + if (policy->walk.dead || !policy->bydst_reinsert) 855 855 continue; 856 856 857 857 WARN_ON_ONCE(policy->family != family); ··· 1256 1256 struct xfrm_pol_inexact_bin *bin; 1257 1257 u8 dbits, sbits; 1258 1258 1259 + if (policy->walk.dead) 1260 + continue; 1261 + 1259 1262 dir = xfrm_policy_id2dir(policy->index); 1260 - if (policy->walk.dead || dir >= XFRM_POLICY_MAX) 1263 + if (dir >= XFRM_POLICY_MAX) 1261 1264 continue; 1262 1265 1263 1266 if ((dir & XFRM_POLICY_MASK) == XFRM_POLICY_OUT) { ··· 1375 1372 * of an absolute inpredictability of ordering of rules. This will not pass. */ 1376 1373 static u32 xfrm_gen_index(struct net *net, int dir, u32 index) 1377 1374 { 1378 - static u32 idx_generator; 1379 - 1380 1375 for (;;) { 1381 1376 struct hlist_head *list; 1382 1377 struct xfrm_policy *p; ··· 1382 1381 int found; 1383 1382 1384 1383 if (!index) { 1385 - idx = (idx_generator | dir); 1386 - idx_generator += 8; 1384 + idx = (net->xfrm.idx_generator | dir); 1385 + net->xfrm.idx_generator += 8; 1387 1386 } else { 1388 1387 idx = index; 1389 1388 index = 0; ··· 1824 1823 1825 1824 again: 1826 1825 list_for_each_entry(pol, &net->xfrm.policy_all, walk.all) { 1826 + if (pol->walk.dead) 1827 + continue; 1828 + 1827 1829 dir = xfrm_policy_id2dir(pol->index); 1828 - if (pol->walk.dead || 1829 - dir >= XFRM_POLICY_MAX || 1830 + if (dir >= XFRM_POLICY_MAX || 1830 1831 pol->type != type) 1831 1832 continue; 1832 1833 ··· 1865 1862 1866 1863 again: 1867 1864 list_for_each_entry(pol, &net->xfrm.policy_all, walk.all) { 1865 + if (pol->walk.dead) 1866 + continue; 1867 + 1868 1868 dir = xfrm_policy_id2dir(pol->index); 1869 - if (pol->walk.dead || 1870 - dir >= XFRM_POLICY_MAX || 1869 + if (dir >= XFRM_POLICY_MAX || 1871 1870 pol->xdo.dev != dev) 1872 1871 continue; 1873 1872 ··· 3220 3215 } 3221 3216 3222 3217 for (i = 0; i < num_pols; i++) 3223 - pols[i]->curlft.use_time = ktime_get_real_seconds(); 3218 + WRITE_ONCE(pols[i]->curlft.use_time, ktime_get_real_seconds()); 3224 3219 3225 3220 if (num_xfrms < 0) { 3226 3221 /* Prohibit the flow */
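The xfrm_policy hunk moves the function-local `static u32 idx_generator` into per-namespace state, so policy indices are generated per network namespace rather than from one global counter. Because the generator advances by 8, the low three bits of every index still encode the direction. A small Python model of that allocator (names follow the C code; illustrative only, without the collision-retry loop):

```python
# Model of xfrm_gen_index() after the fix above: the counter lives in
# per-namespace state, and indices are (generator | dir) with the
# generator advancing by 8, so bits 0-2 always hold the direction.

class Netns:
    def __init__(self):
        self.idx_generator = 0   # was a function-local static before the fix

def xfrm_gen_index(net, direction):
    assert 0 <= direction < 8    # direction must fit in the low three bits
    idx = net.idx_generator | direction
    net.idx_generator += 8
    return idx

ns_a, ns_b = Netns(), Netns()
a1 = xfrm_gen_index(ns_a, 1)
a2 = xfrm_gen_index(ns_a, 2)
b1 = xfrm_gen_index(ns_b, 1)
```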
+2 -2
sound/soc/ti/ams-delta.c
··· 336 336 } 337 337 338 338 /* Line discipline .receive_buf() */ 339 - static void cx81801_receive(struct tty_struct *tty, const u8 *cp, 340 - const char *fp, int count) 339 + static void cx81801_receive(struct tty_struct *tty, const u8 *cp, const u8 *fp, 340 + size_t count) 341 341 { 342 342 struct snd_soc_component *component = tty->disc_data; 343 343 const unsigned char *c;
+1 -1
tools/arch/x86/include/uapi/asm/unistd_32.h
··· 26 26 #ifndef __NR_setns 27 27 #define __NR_setns 346 28 28 #endif 29 - #ifdef __NR_seccomp 29 + #ifndef __NR_seccomp 30 30 #define __NR_seccomp 354 31 31 #endif
-2
tools/testing/selftests/kvm/include/ucall_common.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 2 /* 3 - * tools/testing/selftests/kvm/include/kvm_util.h 4 - * 5 3 * Copyright (C) 2018, Google LLC. 6 4 */ 7 5 #ifndef SELFTEST_KVM_UCALL_COMMON_H
+23
tools/testing/selftests/kvm/include/x86_64/processor.h
··· 68 68 #define XFEATURE_MASK_OPMASK BIT_ULL(5) 69 69 #define XFEATURE_MASK_ZMM_Hi256 BIT_ULL(6) 70 70 #define XFEATURE_MASK_Hi16_ZMM BIT_ULL(7) 71 + #define XFEATURE_MASK_PT BIT_ULL(8) 72 + #define XFEATURE_MASK_PKRU BIT_ULL(9) 73 + #define XFEATURE_MASK_PASID BIT_ULL(10) 74 + #define XFEATURE_MASK_CET_USER BIT_ULL(11) 75 + #define XFEATURE_MASK_CET_KERNEL BIT_ULL(12) 76 + #define XFEATURE_MASK_LBR BIT_ULL(15) 71 77 #define XFEATURE_MASK_XTILE_CFG BIT_ULL(17) 72 78 #define XFEATURE_MASK_XTILE_DATA BIT_ULL(18) 73 79 ··· 153 147 #define X86_FEATURE_CLWB KVM_X86_CPU_FEATURE(0x7, 0, EBX, 24) 154 148 #define X86_FEATURE_UMIP KVM_X86_CPU_FEATURE(0x7, 0, ECX, 2) 155 149 #define X86_FEATURE_PKU KVM_X86_CPU_FEATURE(0x7, 0, ECX, 3) 150 + #define X86_FEATURE_OSPKE KVM_X86_CPU_FEATURE(0x7, 0, ECX, 4) 156 151 #define X86_FEATURE_LA57 KVM_X86_CPU_FEATURE(0x7, 0, ECX, 16) 157 152 #define X86_FEATURE_RDPID KVM_X86_CPU_FEATURE(0x7, 0, ECX, 22) 158 153 #define X86_FEATURE_SGX_LC KVM_X86_CPU_FEATURE(0x7, 0, ECX, 30) ··· 560 553 __asm__ __volatile__("xsetbv" :: "a" (eax), "d" (edx), "c" (index)); 561 554 } 562 555 556 + static inline void wrpkru(u32 pkru) 557 + { 558 + /* Note, ECX and EDX are architecturally required to be '0'. */ 559 + asm volatile(".byte 0x0f,0x01,0xef\n\t" 560 + : : "a" (pkru), "c"(0), "d"(0)); 561 + } 562 + 563 563 static inline struct desc_ptr get_gdt(void) 564 564 { 565 565 struct desc_ptr gdt; ··· 920 906 921 907 return nr_bits > feature.anti_feature.bit && 922 908 !kvm_cpu_has(feature.anti_feature); 909 + } 910 + 911 + static __always_inline uint64_t kvm_cpu_supported_xcr0(void) 912 + { 913 + if (!kvm_cpu_has_p(X86_PROPERTY_SUPPORTED_XCR0_LO)) 914 + return 0; 915 + 916 + return kvm_cpu_property(X86_PROPERTY_SUPPORTED_XCR0_LO) | 917 + ((uint64_t)kvm_cpu_property(X86_PROPERTY_SUPPORTED_XCR0_HI) << 32); 923 918 } 924 919 925 920 static inline size_t kvm_cpuid2_size(int nr_entries)
+7
tools/testing/selftests/kvm/lib/guest_sprintf.c
··· 200 200 ++fmt; 201 201 } 202 202 203 + /* 204 + * Play nice with %llu, %llx, etc. KVM selftests only support 205 + * 64-bit builds, so just treat %ll* the same as %l*. 206 + */ 207 + if (qualifier == 'l' && *fmt == 'l') 208 + ++fmt; 209 + 203 210 /* default base */ 204 211 base = 10; 205 212
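guest_sprintf() gains one step that consumes a second 'l', so `%llu`/`%llx` parse the same as `%lu`/`%lx` (the selftests are 64-bit-only, making long and long long the same width). The qualifier-scanning step, isolated in a toy Python parser (not the selftest code itself):

```python
# Toy version of the qualifier scan in guest_sprintf() above: after
# reading an 'l' qualifier, a second 'l' is consumed and ignored, so
# %llu is handled exactly like %lu on a 64-bit-only build.

def parse_spec(fmt):
    """Parse a conversion spec like 'llu' -> (qualifier, remainder)."""
    i = 0
    qualifier = None
    if i < len(fmt) and fmt[i] in "hl":
        qualifier = fmt[i]
        i += 1
        # play nice with %llu, %llx, ...: treat %ll* the same as %l*
        if qualifier == "l" and i < len(fmt) and fmt[i] == "l":
            i += 1
    return qualifier, fmt[i:]
```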
-2
tools/testing/selftests/kvm/lib/x86_64/apic.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * tools/testing/selftests/kvm/lib/x86_64/processor.c 4 - * 5 3 * Copyright (C) 2021, Google LLC. 6 4 */ 7 5
+3 -6
tools/testing/selftests/kvm/memslot_perf_test.c
··· 1033 1033 struct test_result *rbestruntime) 1034 1034 { 1035 1035 uint64_t maxslots; 1036 - struct test_result result; 1036 + struct test_result result = {}; 1037 1037 1038 - result.nloops = 0; 1039 1038 if (!test_execute(targs->nslots, &maxslots, targs->seconds, data, 1040 1039 &result.nloops, 1041 1040 &result.slot_runtime, &result.guest_runtime)) { ··· 1088 1089 .seconds = 5, 1089 1090 .runs = 1, 1090 1091 }; 1091 - struct test_result rbestslottime; 1092 + struct test_result rbestslottime = {}; 1092 1093 int tctr; 1093 1094 1094 1095 if (!check_memory_sizes()) ··· 1097 1098 if (!parse_args(argc, argv, &targs)) 1098 1099 return -1; 1099 1100 1100 - rbestslottime.slottimens = 0; 1101 1101 for (tctr = targs.tfirst; tctr <= targs.tlast; tctr++) { 1102 1102 const struct test_data *data = &tests[tctr]; 1103 1103 unsigned int runctr; 1104 - struct test_result rbestruntime; 1104 + struct test_result rbestruntime = {}; 1105 1105 1106 1106 if (tctr > targs.tfirst) 1107 1107 pr_info("\n"); ··· 1108 1110 pr_info("Testing %s performance with %i runs, %d seconds each\n", 1109 1111 data->name, targs.runs, targs.seconds); 1110 1112 1111 - rbestruntime.runtimens = 0; 1112 1113 for (runctr = 0; runctr < targs.runs; runctr++) 1113 1114 if (!test_loop(data, &targs, 1114 1115 &rbestslottime, &rbestruntime))
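memslot_perf_test switches from zeroing individual fields to `struct test_result result = {};`, so every field starts known-zero before the best-of comparison loops, not just the one that used to be cleared by hand. A loose Python analogue using dataclass defaults in place of `{}` initialization (field names are illustrative):

```python
from dataclasses import dataclass

# Loose analogue of `struct test_result result = {};` above: default
# every field to zero up front so best-of comparisons never read a
# field that was left uninitialized.

@dataclass
class TestResult:
    nloops: int = 0
    slot_runtime_ns: int = 0
    guest_runtime_ns: int = 0

def keep_best(best, candidate):
    # "best" means fastest slot runtime; 0 marks "no result yet"
    if best.slot_runtime_ns == 0 or candidate.slot_runtime_ns < best.slot_runtime_ns:
        return candidate
    return best

best = TestResult()                       # fully zero-initialized
best = keep_best(best, TestResult(nloops=10, slot_runtime_ns=500))
best = keep_best(best, TestResult(nloops=12, slot_runtime_ns=300))
```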
-2
tools/testing/selftests/kvm/x86_64/hyperv_svm_test.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * KVM_GET/SET_* tests 4 - * 5 3 * Copyright (C) 2022, Red Hat, Inc. 6 4 * 7 5 * Tests for Hyper-V extensions to SVM.
-2
tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * tools/testing/selftests/kvm/nx_huge_page_test.c 4 - * 5 3 * Usage: to be run via nx_huge_page_test.sh, which does the necessary 6 4 * environment setup and teardown 7 5 *
-1
tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
··· 4 4 # Wrapper script which performs setup and cleanup for nx_huge_pages_test. 5 5 # Makes use of root privileges to set up huge pages and KVM module parameters. 6 6 # 7 - # tools/testing/selftests/kvm/nx_huge_page_test.sh 8 7 # Copyright (C) 2022, Google LLC. 9 8 10 9 set -e
+108 -2
tools/testing/selftests/kvm/x86_64/state_test.c
··· 139 139 static void __attribute__((__flatten__)) guest_code(void *arg) 140 140 { 141 141 GUEST_SYNC(1); 142 + 143 + if (this_cpu_has(X86_FEATURE_XSAVE)) { 144 + uint64_t supported_xcr0 = this_cpu_supported_xcr0(); 145 + uint8_t buffer[4096]; 146 + 147 + memset(buffer, 0xcc, sizeof(buffer)); 148 + 149 + set_cr4(get_cr4() | X86_CR4_OSXSAVE); 150 + GUEST_ASSERT(this_cpu_has(X86_FEATURE_OSXSAVE)); 151 + 152 + xsetbv(0, xgetbv(0) | supported_xcr0); 153 + 154 + /* 155 + * Modify state for all supported xfeatures to take them out of 156 + * their "init" state, i.e. to make them show up in XSTATE_BV. 157 + * 158 + * Note off-by-default features, e.g. AMX, are out of scope for 159 + * this particular testcase as they have a different ABI. 160 + */ 161 + GUEST_ASSERT(supported_xcr0 & XFEATURE_MASK_FP); 162 + asm volatile ("fincstp"); 163 + 164 + GUEST_ASSERT(supported_xcr0 & XFEATURE_MASK_SSE); 165 + asm volatile ("vmovdqu %0, %%xmm0" :: "m" (buffer)); 166 + 167 + if (supported_xcr0 & XFEATURE_MASK_YMM) 168 + asm volatile ("vmovdqu %0, %%ymm0" :: "m" (buffer)); 169 + 170 + if (supported_xcr0 & XFEATURE_MASK_AVX512) { 171 + asm volatile ("kmovq %0, %%k1" :: "r" (-1ull)); 172 + asm volatile ("vmovupd %0, %%zmm0" :: "m" (buffer)); 173 + asm volatile ("vmovupd %0, %%zmm16" :: "m" (buffer)); 174 + } 175 + 176 + if (this_cpu_has(X86_FEATURE_MPX)) { 177 + uint64_t bounds[2] = { 10, 0xffffffffull }; 178 + uint64_t output[2] = { }; 179 + 180 + GUEST_ASSERT(supported_xcr0 & XFEATURE_MASK_BNDREGS); 181 + GUEST_ASSERT(supported_xcr0 & XFEATURE_MASK_BNDCSR); 182 + 183 + /* 184 + * Don't bother trying to get BNDCSR into the INUSE 185 + * state. MSR_IA32_BNDCFGS doesn't count as it isn't 186 + * managed via XSAVE/XRSTOR, and BNDCFGU can only be 187 + * modified by XRSTOR. Stuffing XSTATE_BV in the host 188 + * is simpler than doing XRSTOR here in the guest. 189 + * 190 + * However, temporarily enable MPX in BNDCFGS so that 191 + * BNDMOV actually loads BND1. If MPX isn't *fully* 192 + * enabled, all MPX instructions are treated as NOPs. 193 + * 194 + * Hand encode "bndmov (%rax),%bnd1" as support for MPX 195 + * mnemonics/registers has been removed from gcc and 196 + * clang (and was never fully supported by clang). 197 + */ 198 + wrmsr(MSR_IA32_BNDCFGS, BIT_ULL(0)); 199 + asm volatile (".byte 0x66,0x0f,0x1a,0x08" :: "a" (bounds)); 200 + /* 201 + * Hand encode "bndmov %bnd1, (%rax)" to sanity check 202 + * that BND1 actually got loaded. 203 + */ 204 + asm volatile (".byte 0x66,0x0f,0x1b,0x08" :: "a" (output)); 205 + wrmsr(MSR_IA32_BNDCFGS, 0); 206 + 207 + GUEST_ASSERT_EQ(bounds[0], output[0]); 208 + GUEST_ASSERT_EQ(bounds[1], output[1]); 209 + } 210 + if (this_cpu_has(X86_FEATURE_PKU)) { 211 + GUEST_ASSERT(supported_xcr0 & XFEATURE_MASK_PKRU); 212 + set_cr4(get_cr4() | X86_CR4_PKE); 213 + GUEST_ASSERT(this_cpu_has(X86_FEATURE_OSPKE)); 214 + 215 + wrpkru(-1u); 216 + } 217 + } 218 + 142 219 GUEST_SYNC(2); 143 220 144 221 if (arg) { ··· 230 153 231 154 int main(int argc, char *argv[]) 232 155 { 156 + uint64_t *xstate_bv, saved_xstate_bv; 233 157 vm_vaddr_t nested_gva = 0; 234 - 158 + struct kvm_cpuid2 empty_cpuid = {}; 235 159 struct kvm_regs regs1, regs2; 236 - struct kvm_vcpu *vcpu; 160 + struct kvm_vcpu *vcpu, *vcpuN; 237 161 struct kvm_vm *vm; 238 162 struct kvm_x86_state *state; 239 163 struct ucall uc; ··· 287 209 /* Restore state in a new VM. */ 288 210 vcpu = vm_recreate_with_one_vcpu(vm); 289 211 vcpu_load_state(vcpu, state); 212 + 213 + /* 214 + * Restore XSAVE state in a dummy vCPU, first without doing 215 + * KVM_SET_CPUID2, and then with an empty guest CPUID. Except 216 + * for off-by-default xfeatures, e.g. AMX, KVM is supposed to 217 + * allow KVM_SET_XSAVE regardless of guest CPUID. Manually 218 + * load only XSAVE state, MSRs in particular have a much more 219 + * convoluted ABI. 220 + * 221 + * Load two versions of XSAVE state: one with the actual guest 222 + * XSAVE state, and one with all supported features forced "on" 223 + * in xstate_bv, e.g. to ensure that KVM allows loading all 224 + * supported features, even if something goes awry in saving 225 + * the original snapshot. 226 + */ 227 + xstate_bv = (void *)&((uint8_t *)state->xsave->region)[512]; 228 + saved_xstate_bv = *xstate_bv; 229 + 230 + vcpuN = __vm_vcpu_add(vm, vcpu->id + 1); 231 + vcpu_xsave_set(vcpuN, state->xsave); 232 + *xstate_bv = kvm_cpu_supported_xcr0(); 233 + vcpu_xsave_set(vcpuN, state->xsave); 234 + 235 + vcpu_init_cpuid(vcpuN, &empty_cpuid); 236 + vcpu_xsave_set(vcpuN, state->xsave); 237 + *xstate_bv = saved_xstate_bv; 238 + vcpu_xsave_set(vcpuN, state->xsave); 239 + 290 240 kvm_x86_state_cleanup(state); 291 241 292 242 memset(&regs2, 0, sizeof(regs2));
-4
tools/testing/selftests/kvm/x86_64/tsc_scaling_sync.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * svm_vmcall_test 4 - * 5 3 * Copyright © 2021 Amazon.com, Inc. or its affiliates. 6 - * 7 - * Xen shared_info / pvclock testing 8 4 */ 9 5 10 6 #include "test_util.h"
-4
tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * svm_vmcall_test 4 - * 5 3 * Copyright © 2021 Amazon.com, Inc. or its affiliates. 6 - * 7 - * Xen shared_info / pvclock testing 8 4 */ 9 5 10 6 #include "test_util.h"
+1
tools/testing/selftests/net/Makefile
··· 34 34 TEST_PROGS += gre_gso.sh 35 35 TEST_PROGS += cmsg_so_mark.sh 36 36 TEST_PROGS += cmsg_time.sh cmsg_ipv6.sh 37 + TEST_PROGS += netns-name.sh 37 38 TEST_PROGS += srv6_end_dt46_l3vpn_test.sh 38 39 TEST_PROGS += srv6_end_dt4_l3vpn_test.sh 39 40 TEST_PROGS += srv6_end_dt6_l3vpn_test.sh
+5 -2
tools/testing/selftests/net/fib_tests.sh
··· 2437 2437 run_cmd "ip -n ns2 route add 203.0.113.0/24 2438 2438 nexthop via 172.16.201.2 nexthop via 172.16.202.2" 2439 2439 run_cmd "ip netns exec ns2 sysctl -qw net.ipv4.fib_multipath_hash_policy=1" 2440 + run_cmd "ip netns exec ns2 sysctl -qw net.ipv4.conf.veth2.rp_filter=0" 2441 + run_cmd "ip netns exec ns2 sysctl -qw net.ipv4.conf.all.rp_filter=0" 2442 + run_cmd "ip netns exec ns2 sysctl -qw net.ipv4.conf.default.rp_filter=0" 2440 2443 set +e 2441 2444 2442 2445 local dmac=$(ip -n ns2 -j link show dev veth2 | jq -r '.[]["address"]') ··· 2452 2449 # words, the FIB lookup tracepoint needs to be triggered for every 2453 2450 # packet. 2454 2451 local t0_rx_pkts=$(link_stats_get ns2 veth2 rx packets) 2455 - run_cmd "perf stat -e fib:fib_table_lookup --filter 'err == 0' -j -o $tmp_file -- $cmd" 2452 + run_cmd "perf stat -a -e fib:fib_table_lookup --filter 'err == 0' -j -o $tmp_file -- $cmd" 2456 2453 local t1_rx_pkts=$(link_stats_get ns2 veth2 rx packets) 2457 2454 local diff=$(echo $t1_rx_pkts - $t0_rx_pkts | bc -l) 2458 2455 list_rcv_eval $tmp_file $diff ··· 2497 2494 # words, the FIB lookup tracepoint needs to be triggered for every 2498 2495 # packet. 2499 2496 local t0_rx_pkts=$(link_stats_get ns2 veth2 rx packets) 2500 - run_cmd "perf stat -e fib6:fib6_table_lookup --filter 'err == 0' -j -o $tmp_file -- $cmd" 2497 + run_cmd "perf stat -a -e fib6:fib6_table_lookup --filter 'err == 0' -j -o $tmp_file -- $cmd" 2501 2498 local t1_rx_pkts=$(link_stats_get ns2 veth2 rx packets) 2502 2499 local diff=$(echo $t1_rx_pkts - $t0_rx_pkts | bc -l) 2503 2500 list_rcv_eval $tmp_file $diff
+19 -2
tools/testing/selftests/net/mptcp/mptcp_join.sh
··· 1432 1432 count=$(get_counter ${ns_tx} "MPTcpExtMPRstTx") 1433 1433 if [ -z "$count" ]; then 1434 1434 print_skip 1435 + # accept more rst than expected except if we don't expect any 1435 - elif [ $count -lt $rst_tx ]; then 1436 + elif { [ $rst_tx -ne 0 ] && [ $count -lt $rst_tx ]; } || 1437 + { [ $rst_tx -eq 0 ] && [ $count -ne 0 ]; }; then 1436 1438 fail_test "got $count MP_RST[s] TX expected $rst_tx" 1437 1439 else 1438 1440 print_ok ··· 1444 1442 count=$(get_counter ${ns_rx} "MPTcpExtMPRstRx") 1445 1443 if [ -z "$count" ]; then 1446 1444 print_skip 1445 + # accept more rst than expected except if we don't expect any 1447 - elif [ "$count" -lt "$rst_rx" ]; then 1446 + elif { [ $rst_rx -ne 0 ] && [ $count -lt $rst_rx ]; } || 1447 + { [ $rst_rx -eq 0 ] && [ $count -ne 0 ]; }; then 1448 1448 fail_test "got $count MP_RST[s] RX expected $rst_rx" 1449 1449 else 1450 1450 print_ok ··· 2309 2305 chk_join_nr 1 1 1 2310 2306 chk_rm_tx_nr 1 2311 2307 chk_rm_nr 1 1 2308 + chk_rst_nr 0 0 2312 2309 fi 2313 2310 2314 2311 # multiple subflows, remove ··· 2322 2317 run_tests $ns1 $ns2 10.0.1.1 2323 2318 chk_join_nr 2 2 2 2324 2319 chk_rm_nr 2 2 2320 + chk_rst_nr 0 0 2325 2321 fi 2326 2322 2327 2323 # single address, remove ··· 2335 2329 chk_join_nr 1 1 1 2336 2330 chk_add_nr 1 1 2337 2331 chk_rm_nr 1 1 invert 2332 + chk_rst_nr 0 0 2338 2333 fi 2339 2334 2340 2335 # subflow and signal, remove ··· 2349 2342 chk_join_nr 2 2 2 2350 2343 chk_add_nr 1 1 2351 2344 chk_rm_nr 1 1 2345 + chk_rst_nr 0 0 2352 2346 fi 2353 2347 2354 2348 # subflows and signal, remove ··· 2364 2356 chk_join_nr 3 3 3 2365 2357 chk_add_nr 1 1 2366 2358 chk_rm_nr 2 2 2359 + chk_rst_nr 0 0 2367 2360 fi 2368 2361 2369 2362 # addresses remove ··· 2379 2370 chk_join_nr 3 3 3 2380 2371 chk_add_nr 3 3 2381 2372 chk_rm_nr 3 3 invert 2373 + chk_rst_nr 0 0 2382 2374 fi 2383 2375 2384 2376 # invalid addresses remove ··· 2394 2384 chk_join_nr 1 1 1 2395 2385 chk_add_nr 3 3 2396 2386 chk_rm_nr 3 1 invert 2387 + chk_rst_nr 0 0 2397 2388 fi 2398 2389 2399 2390 # subflows and signal, flush ··· 2409 2398 chk_join_nr 3 3 3 2410 2399 chk_add_nr 1 1 2411 2400 chk_rm_nr 1 3 invert simult 2401 + chk_rst_nr 0 0 2412 2402 fi 2413 2403 2414 2404 # subflows flush ··· 2429 2417 else 2430 2418 chk_rm_nr 3 3 2431 2419 fi 2420 + chk_rst_nr 0 0 2432 2421 fi 2433 2422 2434 2423 # addresses flush ··· 2444 2431 chk_join_nr 3 3 3 2445 2432 chk_add_nr 3 3 2446 2433 chk_rm_nr 3 3 invert simult 2434 + chk_rst_nr 0 0 2447 2435 fi 2448 2436 2449 2437 # invalid addresses flush ··· 2459 2445 chk_join_nr 1 1 1 2460 2446 chk_add_nr 3 3 2461 2447 chk_rm_nr 3 1 invert 2448 + chk_rst_nr 0 0 2462 2449 fi 2463 2450 2464 2451 # remove id 0 subflow ··· 2471 2456 run_tests $ns1 $ns2 10.0.1.1 2472 2457 chk_join_nr 1 1 1 2473 2458 chk_rm_nr 1 1 2459 + chk_rst_nr 0 0 2474 2460 fi 2475 2461 2476 2462 # remove id 0 address ··· 2484 2468 chk_join_nr 1 1 1 2485 2469 chk_add_nr 1 1 2486 2470 chk_rm_nr 1 1 invert 2471 + chk_rst_nr 0 0 invert 2487 2472 fi 2488 2473 }
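The mptcp_join.sh change tightens the MP_RST counter checks: extra resets are still tolerated when some were expected, but when the expectation is zero, any reset at all now fails the test. That predicate, restated as a small Python function mirroring the shell condition:

```python
# The shell condition above, as a predicate: with a non-zero
# expectation, fewer RSTs than expected is a failure (more is fine);
# with a zero expectation, any RST at all is a failure.

def rst_count_ok(count, expected):
    if expected != 0:
        return count >= expected   # accept more than expected
    return count == 0              # but none may appear if none expected
```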
+87
tools/testing/selftests/net/netns-name.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + set -o pipefail 5 + 6 + NS=netns-name-test 7 + DEV=dummy-dev0 8 + DEV2=dummy-dev1 9 + ALT_NAME=some-alt-name 10 + 11 + RET_CODE=0 12 + 13 + cleanup() { 14 + ip netns del $NS 15 + } 16 + 17 + trap cleanup EXIT 18 + 19 + fail() { 20 + echo "ERROR: ${1:-unexpected return code} (ret: $_)" >&2 21 + RET_CODE=1 22 + } 23 + 24 + ip netns add $NS 25 + 26 + # 27 + # Test basic move without a rename 28 + # 29 + ip -netns $NS link add name $DEV type dummy || fail 30 + ip -netns $NS link set dev $DEV netns 1 || 31 + fail "Can't perform a netns move" 32 + ip link show dev $DEV >> /dev/null || fail "Device not found after move" 33 + ip link del $DEV || fail 34 + 35 + # 36 + # Test move with a conflict 37 + # 38 + ip link add name $DEV type dummy 39 + ip -netns $NS link add name $DEV type dummy || fail 40 + ip -netns $NS link set dev $DEV netns 1 2> /dev/null && 41 + fail "Performed a netns move with a name conflict" 42 + ip link show dev $DEV >> /dev/null || fail "Device not found after move" 43 + ip -netns $NS link del $DEV || fail 44 + ip link del $DEV || fail 45 + 46 + # 47 + # Test move with a conflict and rename 48 + # 49 + ip link add name $DEV type dummy 50 + ip -netns $NS link add name $DEV type dummy || fail 51 + ip -netns $NS link set dev $DEV netns 1 name $DEV2 || 52 + fail "Can't perform a netns move with rename" 53 + ip link del $DEV2 || fail 54 + ip link del $DEV || fail 55 + 56 + # 57 + # Test dup alt-name with netns move 58 + # 59 + ip link add name $DEV type dummy || fail 60 + ip link property add dev $DEV altname $ALT_NAME || fail 61 + ip -netns $NS link add name $DEV2 type dummy || fail 62 + ip -netns $NS link property add dev $DEV2 altname $ALT_NAME || fail 63 + 64 + ip -netns $NS link set dev $DEV2 netns 1 2> /dev/null && 65 + fail "Moved with alt-name dup" 66 + 67 + ip link del $DEV || fail 68 + ip -netns $NS link del $DEV2 || fail 69 + 70 + # 71 + # Test creating alt-name in one net-ns and using in another 72 + # 73 + ip -netns $NS link add name $DEV type dummy || fail 74 + ip -netns $NS link property add dev $DEV altname $ALT_NAME || fail 75 + ip -netns $NS link set dev $DEV netns 1 || fail 76 + ip link show dev $ALT_NAME >> /dev/null || fail "Can't find alt-name after move" 77 + ip -netns $NS link show dev $ALT_NAME 2> /dev/null && 78 + fail "Can still find alt-name after move" 79 + ip link del $DEV || fail 80 + 81 + echo -ne "$(basename $0) \t\t\t\t" 82 + if [ $RET_CODE -eq 0 ]; then 83 + echo "[ OK ]" 84 + else 85 + echo "[ FAIL ]" 86 + fi 87 + exit $RET_CODE
+20 -1
tools/testing/selftests/net/openvswitch/openvswitch.sh
··· 3 3 # 4 4 # OVS kernel module self tests 5 5 6 + trap ovs_exit_sig EXIT TERM INT ERR 7 + 6 8 # Kselftest framework requirement - SKIP code is 4. 7 9 ksft_skip=4 8 10 ··· 144 142 return 0 145 143 } 146 144 145 + ovs_del_flows () { 146 + info "Deleting all flows from DP: sbx:$1 br:$2" 147 + ovs_sbx "$1" python3 $ovs_base/ovs-dpctl.py del-flows "$2" 148 + return 0 149 + } 150 + 147 151 ovs_drop_record_and_run () { 148 152 local sbx=$1 149 153 shift ··· 205 197 # Setup server namespace 206 198 ip netns exec server ip addr add 172.31.110.20/24 dev s1 207 199 ip netns exec server ip link set s1 up 200 + 201 + # Check if drop reasons can be sent 202 + ovs_add_flow "test_drop_reason" dropreason \ 203 + 'in_port(1),eth(),eth_type(0x0806),arp()' 'drop(10)' 2>/dev/null 204 + if [ $? == 1 ]; then 205 + info "no support for drop reasons - skipping" 206 + ovs_exit_sig 207 + return $ksft_skip 208 + fi 209 + 210 + ovs_del_flows "test_drop_reason" dropreason 208 211 209 212 # Allow ARP 210 213 ovs_add_flow "test_drop_reason" dropreason \ ··· 544 525 fi 545 526 546 527 if python3 ovs-dpctl.py -h 2>&1 | \ 547 - grep "Need to install the python" >/dev/null 2>&1; then 528 + grep -E "Need to (install|upgrade) the python" >/dev/null 2>&1; then 548 529 stdbuf -o0 printf "TEST: %-60s [PYLIB]\n" "${tdesc}" 549 530 return $ksft_skip 550 531 fi
+46 -2
tools/testing/selftests/net/openvswitch/ovs-dpctl.py
··· 28 28 from pyroute2.netlink import nlmsg_atoms 29 29 from pyroute2.netlink.exceptions import NetlinkError 30 30 from pyroute2.netlink.generic import GenericNetlinkSocket 31 + import pyroute2 32 + 31 33 except ModuleNotFoundError: 32 - print("Need to install the python pyroute2 package.") 34 + print("Need to install the python pyroute2 package >= 0.6.") 33 35 sys.exit(0) 34 36 35 37 ··· 1119 1117 "src", 1120 1118 lambda x: str(ipaddress.IPv4Address(x)), 1121 1119 int, 1120 + convert_ipv4, 1122 1121 ), 1123 1122 ( 1124 1123 "dst", 1125 1124 "dst", 1126 - lambda x: str(ipaddress.IPv6Address(x)), 1125 + lambda x: str(ipaddress.IPv4Address(x)), 1127 1126 int, 1127 + convert_ipv4, 1128 1128 ), 1129 1129 ("tp_src", "tp_src", "%d", int), 1130 1130 ("tp_dst", "tp_dst", "%d", int), ··· 1908 1904 raise ne 1909 1905 return reply 1910 1906 1907 + def del_flows(self, dpifindex): 1908 + """ 1909 + Send a del message to the kernel that will drop all flows. 1910 + 1911 + dpifindex should be a valid datapath obtained by calling 1912 + into the OvsDatapath lookup 1913 + """ 1914 + 1915 + flowmsg = OvsFlow.ovs_flow_msg() 1916 + flowmsg["cmd"] = OVS_FLOW_CMD_DEL 1917 + flowmsg["version"] = OVS_DATAPATH_VERSION 1918 + flowmsg["reserved"] = 0 1919 + flowmsg["dpifindex"] = dpifindex 1920 + 1921 + try: 1922 + reply = self.nlm_request( 1923 + flowmsg, 1924 + msg_type=self.prid, 1925 + msg_flags=NLM_F_REQUEST | NLM_F_ACK, 1926 + ) 1927 + reply = reply[0] 1928 + except NetlinkError as ne: 1929 + print(flowmsg) 1930 + raise ne 1931 + return reply 1932 + 1911 1933 def dump(self, dpifindex, flowspec=None): 1912 1934 """ 1913 1935 Returns a list of messages containing flows. 
··· 2028 1998 nlmsg_atoms.ovskey = ovskey 2029 1999 nlmsg_atoms.ovsactions = ovsactions 2030 2000 2001 + # version check for pyroute2 2002 + prverscheck = pyroute2.__version__.split(".") 2003 + if int(prverscheck[0]) == 0 and int(prverscheck[1]) < 6: 2004 + print("Need to upgrade the python pyroute2 package to >= 0.6.") 2005 + sys.exit(0) 2006 + 2031 2007 parser = argparse.ArgumentParser() 2032 2008 parser.add_argument( 2033 2009 "-v", ··· 2095 2059 addflcmd.add_argument("flbr", help="Datapath name") 2096 2060 addflcmd.add_argument("flow", help="Flow specification") 2097 2061 addflcmd.add_argument("acts", help="Flow actions") 2062 + 2063 + delfscmd = subparsers.add_parser("del-flows") 2064 + delfscmd.add_argument("flsbr", help="Datapath name") 2098 2065 2099 2066 args = parser.parse_args() 2100 2067 ··· 2182 2143 flow = OvsFlow.ovs_flow_msg() 2183 2144 flow.parse(args.flow, args.acts, rep["dpifindex"]) 2184 2145 ovsflow.add_flow(rep["dpifindex"], flow) 2146 + elif hasattr(args, "flsbr"): 2147 + rep = ovsdp.info(args.flsbr, 0) 2148 + if rep is None: 2149 + print("DP '%s' not found." % args.flsbr) 2150 + ovsflow.del_flows(rep["dpifindex"]) 2185 2151 2186 2152 return 0 2187 2153
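ovs-dpctl.py now refuses to run on pyroute2 older than 0.6, comparing the components of `pyroute2.__version__` as integers. Comparing them as strings would misorder releases (lexically, "0.10" sorts before "0.6"). A sketch of the numeric check on hypothetical version strings:

```python
# Sketch of the version gate added to ovs-dpctl.py above: split the
# version string and compare major/minor numerically, never lexically
# (as strings, "0.10" < "0.6", which would reject a newer release).

def pyroute2_new_enough(version, need=(0, 6)):
    parts = version.split(".")
    major, minor = int(parts[0]), int(parts[1])
    return (major, minor) >= need
```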
+52
tools/testing/selftests/netfilter/nft_audit.sh
··· 11 11 exit $SKIP_RC 12 12 } 13 13 14 + # Run everything in a separate network namespace 15 + [ "${1}" != "run" ] && { unshare -n "${0}" run; exit $?; } 16 + 17 + # give other scripts a chance to finish - audit_logread sees all activity 18 + sleep 1 19 + 14 20 logfile=$(mktemp) 15 21 rulefile=$(mktemp) 16 22 echo "logging into $logfile" ··· 99 93 do_test 'nft add counter t2 c1; add counter t2 c2' \ 100 94 'table=t2 family=2 entries=2 op=nft_register_obj' 101 95 96 + for ((i = 3; i <= 500; i++)); do 97 + echo "add counter t2 c$i" 98 + done >$rulefile 99 + do_test "nft -f $rulefile" \ 100 + 'table=t2 family=2 entries=498 op=nft_register_obj' 101 + 102 102 # adding/updating quotas 103 103 104 104 do_test 'nft add quota t1 q1 { 10 bytes }' \ ··· 112 100 113 101 do_test 'nft add quota t2 q1 { 10 bytes }; add quota t2 q2 { 10 bytes }' \ 114 102 'table=t2 family=2 entries=2 op=nft_register_obj' 103 + 104 + for ((i = 3; i <= 500; i++)); do 105 + echo "add quota t2 q$i { 10 bytes }" 106 + done >$rulefile 107 + do_test "nft -f $rulefile" \ 108 + 'table=t2 family=2 entries=498 op=nft_register_obj' 115 109 116 110 # changing the quota value triggers obj update path 117 111 do_test 'nft add quota t1 q1 { 20 bytes }' \ ··· 167 149 168 150 do_test 'nft reset set t1 s' \ 169 151 'table=t1 family=2 entries=3 op=nft_reset_setelem' 152 + 153 + # resetting counters 154 + 155 + do_test 'nft reset counter t1 c1' \ 156 + 'table=t1 family=2 entries=1 op=nft_reset_obj' 157 + 158 + do_test 'nft reset counters t1' \ 159 + 'table=t1 family=2 entries=1 op=nft_reset_obj' 160 + 161 + do_test 'nft reset counters t2' \ 162 + 'table=t2 family=2 entries=342 op=nft_reset_obj 163 + table=t2 family=2 entries=158 op=nft_reset_obj' 164 + 165 + do_test 'nft reset counters' \ 166 + 'table=t1 family=2 entries=1 op=nft_reset_obj 167 + table=t2 family=2 entries=341 op=nft_reset_obj 168 + table=t2 family=2 entries=159 op=nft_reset_obj' 169 + 170 + # resetting quotas 171 + 172 + do_test 'nft reset quota t1 q1' \
173 + 'table=t1 family=2 entries=1 op=nft_reset_obj' 174 + 175 + do_test 'nft reset quotas t1' \ 176 + 'table=t1 family=2 entries=1 op=nft_reset_obj' 177 + 178 + do_test 'nft reset quotas t2' \ 179 + 'table=t2 family=2 entries=315 op=nft_reset_obj 180 + table=t2 family=2 entries=185 op=nft_reset_obj' 181 + 182 + do_test 'nft reset quotas' \ 183 + 'table=t1 family=2 entries=1 op=nft_reset_obj 184 + table=t2 family=2 entries=314 op=nft_reset_obj 185 + table=t2 family=2 entries=186 op=nft_reset_obj' 170 186 171 187 # deleting rules 172 188
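The expected audit output above splits each bulk reset of table t2 across two `nft_reset_obj` records (342+158 counters, 315+185 quotas, and so on); the per-record entry counts vary with how the kernel packs objects into netlink messages, but each pair must cover all 500 objects created earlier (c1, c2 plus the c3..c500 batch). A quick sanity check of those totals, as plain arithmetic and not part of the selftest itself:

```python
# Each 'nft reset ...' of table t2 above covers all 500 objects
# (two added individually plus 498 from the generated rulefile),
# reported by the kernel as a pair of audit records.
expected_pairs = {
    "reset counters t2":        (342, 158),
    "reset counters (t2 part)": (341, 159),
    "reset quotas t2":          (315, 185),
    "reset quotas (t2 part)":   (314, 186),
}
for name, (a, b) in expected_pairs.items():
    assert a + b == 500, name
print("all record pairs sum to 500")
```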
+3 -3
tools/testing/selftests/riscv/mm/Makefile
··· 5 5 # Additional include paths needed by kselftest.h and local headers 6 6 CFLAGS += -D_GNU_SOURCE -std=gnu99 -I. 7 7 8 - TEST_GEN_FILES := testcases/mmap_default testcases/mmap_bottomup 8 + TEST_GEN_FILES := mmap_default mmap_bottomup 9 9 10 - TEST_PROGS := testcases/run_mmap.sh 10 + TEST_PROGS := run_mmap.sh 11 11 12 12 include ../../lib.mk 13 13 14 - $(OUTPUT)/mm: testcases/mmap_default.c testcases/mmap_bottomup.c testcases/mmap_tests.h 14 + $(OUTPUT)/mm: mmap_default.c mmap_bottomup.c mmap_tests.h 15 15 $(CC) -o$@ $(CFLAGS) $(LDFLAGS) $^
+1 -1
tools/testing/selftests/riscv/mm/testcases/mmap_bottomup.c tools/testing/selftests/riscv/mm/mmap_bottomup.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 #include <sys/mman.h> 3 - #include <testcases/mmap_test.h> 3 + #include <mmap_test.h> 4 4 5 5 #include "../../kselftest_harness.h" 6 6
+1 -1
tools/testing/selftests/riscv/mm/testcases/mmap_default.c tools/testing/selftests/riscv/mm/mmap_default.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 #include <sys/mman.h> 3 - #include <testcases/mmap_test.h> 3 + #include <mmap_test.h> 4 4 5 5 #include "../../kselftest_harness.h" 6 6
tools/testing/selftests/riscv/mm/testcases/mmap_test.h tools/testing/selftests/riscv/mm/mmap_test.h
tools/testing/selftests/riscv/mm/testcases/run_mmap.sh tools/testing/selftests/riscv/mm/run_mmap.sh