Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 5.15-rc6 into usb-next

We need the usb fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3703 -2521
+12 -12
Documentation/admin-guide/cgroup-v2.rst
··· 1226 1226 1227 1227 Note that all fields in this file are hierarchical and the 1228 1228 file modified event can be generated due to an event down the 1229 - hierarchy. For for the local events at the cgroup level see 1229 + hierarchy. For the local events at the cgroup level see 1230 1230 memory.events.local. 1231 1231 1232 1232 low ··· 2170 2170 2171 2171 Cgroup v2 device controller has no interface files and is implemented 2172 2172 on top of cgroup BPF. To control access to device files, a user may 2173 - create bpf programs of the BPF_CGROUP_DEVICE type and attach them 2174 - to cgroups. On an attempt to access a device file, corresponding 2175 - BPF programs will be executed, and depending on the return value 2176 - the attempt will succeed or fail with -EPERM. 2173 + create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach 2174 + them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a 2175 + device file, corresponding BPF programs will be executed, and depending 2176 + on the return value the attempt will succeed or fail with -EPERM. 2177 2177 2178 - A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx 2179 - structure, which describes the device access attempt: access type 2180 - (mknod/read/write) and device (type, major and minor numbers). 2181 - If the program returns 0, the attempt fails with -EPERM, otherwise 2182 - it succeeds. 2178 + A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the 2179 + bpf_cgroup_dev_ctx structure, which describes the device access attempt: 2180 + access type (mknod/read/write) and device (type, major and minor numbers). 2181 + If the program returns 0, the attempt fails with -EPERM, otherwise it 2182 + succeeds. 2183 2183 2184 - An example of BPF_CGROUP_DEVICE program may be found in the kernel 2185 - source tree in the tools/testing/selftests/bpf/progs/dev_cgroup.c file. 
2184 + An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in 2185 + tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree. 2186 2186 2187 2187 2188 2188 RDMA
+2
Documentation/devicetree/bindings/net/snps,dwmac.yaml
··· 21 21 contains: 22 22 enum: 23 23 - snps,dwmac 24 + - snps,dwmac-3.40a 24 25 - snps,dwmac-3.50a 25 26 - snps,dwmac-3.610 26 27 - snps,dwmac-3.70a ··· 77 76 - rockchip,rk3399-gmac 78 77 - rockchip,rv1108-gmac 79 78 - snps,dwmac 79 + - snps,dwmac-3.40a 80 80 - snps,dwmac-3.50a 81 81 - snps,dwmac-3.610 82 82 - snps,dwmac-3.70a
+1 -1
Documentation/devicetree/bindings/spi/snps,dw-apb-ssi.yaml
··· 171 171 cs-gpios = <&gpio0 13 0>, 172 172 <&gpio0 14 0>; 173 173 rx-sample-delay-ns = <3>; 174 - spi-flash@1 { 174 + flash@1 { 175 175 compatible = "spi-nand"; 176 176 reg = <1>; 177 177 rx-sample-delay-ns = <7>;
+76 -67
Documentation/filesystems/ntfs3.rst
··· 4 4 NTFS3 5 5 ===== 6 6 7 - 8 7 Summary and Features 9 8 ==================== 10 9 11 - NTFS3 is fully functional NTFS Read-Write driver. The driver works with 12 - NTFS versions up to 3.1, normal/compressed/sparse files 13 - and journal replaying. File system type to use on mount is 'ntfs3'. 10 + NTFS3 is fully functional NTFS Read-Write driver. The driver works with NTFS 11 + versions up to 3.1. File system type to use on mount is *ntfs3*. 14 12 15 13 - This driver implements NTFS read/write support for normal, sparse and 16 14 compressed files. 17 - - Supports native journal replaying; 18 - - Supports extended attributes 19 - Predefined extended attributes: 20 - - 'system.ntfs_security' gets/sets security 21 - descriptor (SECURITY_DESCRIPTOR_RELATIVE) 22 - - 'system.ntfs_attrib' gets/sets ntfs file/dir attributes. 23 - Note: applied to empty files, this allows to switch type between 24 - sparse(0x200), compressed(0x800) and normal; 15 + - Supports native journal replaying. 25 16 - Supports NFS export of mounted NTFS volumes. 17 + - Supports extended attributes. Predefined extended attributes: 18 + 19 + - *system.ntfs_security* gets/sets security 20 + 21 + Descriptor: SECURITY_DESCRIPTOR_RELATIVE 22 + 23 + - *system.ntfs_attrib* gets/sets ntfs file/dir attributes. 24 + 25 + Note: Applied to empty files, this allows to switch type between 26 + sparse(0x200), compressed(0x800) and normal. 26 27 27 28 Mount Options 28 29 ============= 29 30 30 31 The list below describes mount options supported by NTFS3 driver in addition to 31 - generic ones. 32 + generic ones. You can use every mount option with **no** option. If it is in 33 + this table marked with no it means default is without **no**. 32 34 33 - =============================================================================== 35 + .. flat-table:: 36 + :widths: 1 5 37 + :fill-cells: 34 38 35 - nls=name This option informs the driver how to interpret path 36 - strings and translate them to Unicode and back. 
If 37 - this option is not set, the default codepage will be 38 - used (CONFIG_NLS_DEFAULT). 39 - Examples: 40 - 'nls=utf8' 39 + * - iocharset=name 40 + - This option informs the driver how to interpret path strings and 41 + translate them to Unicode and back. If this option is not set, the 42 + default codepage will be used (CONFIG_NLS_DEFAULT). 41 43 42 - uid= 43 - gid= 44 - umask= Controls the default permissions for files/directories created 45 - after the NTFS volume is mounted. 44 + Example: iocharset=utf8 46 45 47 - fmask= 48 - dmask= Instead of specifying umask which applies both to 49 - files and directories, fmask applies only to files and 50 - dmask only to directories. 46 + * - uid= 47 + - :rspan:`1` 48 + * - gid= 51 49 52 - nohidden Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN) 53 - attribute will not be shown under Linux. 50 + * - umask= 51 + - Controls the default permissions for files/directories created after 52 + the NTFS volume is mounted. 54 53 55 - sys_immutable Files with the Windows-specific SYSTEM 56 - (FILE_ATTRIBUTE_SYSTEM) attribute will be marked as system 57 - immutable files. 54 + * - dmask= 55 + - :rspan:`1` Instead of specifying umask which applies both to files and 56 + directories, fmask applies only to files and dmask only to directories. 57 + * - fmask= 58 58 59 - discard Enable support of the TRIM command for improved performance 60 - on delete operations, which is recommended for use with the 61 - solid-state drives (SSD). 59 + * - noacsrules 60 + - "No access rules" mount option sets access rights for files/folders to 61 + 777 and owner/group to root. This mount option absorbs all other 62 + permissions. 62 63 63 - force Forces the driver to mount partitions even if 'dirty' flag 64 - (volume dirty) is set. Not recommended for use. 64 + - Permissions change for files/folders will be reported as successful, 65 + but they will remain 777. 65 66 66 - sparse Create new files as "sparse". 
67 + - Owner/group change will be reported as successful, but they will stay 68 + as root. 67 69 68 - showmeta Use this parameter to show all meta-files (System Files) on 69 - a mounted NTFS partition. 70 - By default, all meta-files are hidden. 70 + * - nohidden 71 + - Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN) attribute 72 + will not be shown under Linux. 71 73 72 - prealloc Preallocate space for files excessively when file size is 73 - increasing on writes. Decreases fragmentation in case of 74 - parallel write operations to different files. 74 + * - sys_immutable 75 + - Files with the Windows-specific SYSTEM (FILE_ATTRIBUTE_SYSTEM) attribute 76 + will be marked as system immutable files. 75 77 76 - no_acs_rules "No access rules" mount option sets access rights for 77 - files/folders to 777 and owner/group to root. This mount 78 - option absorbs all other permissions: 79 - - permissions change for files/folders will be reported 80 - as successful, but they will remain 777; 81 - - owner/group change will be reported as successful, but 82 - they will stay as root 78 + * - discard 79 + - Enable support of the TRIM command for improved performance on delete 80 + operations, which is recommended for use with the solid-state drives 81 + (SSD). 83 82 84 - acl Support POSIX ACLs (Access Control Lists). Effective if 85 - supported by Kernel. Not to be confused with NTFS ACLs. 86 - The option specified as acl enables support for POSIX ACLs. 87 86 88 - noatime All files and directories will not update their last access 89 - time attribute if a partition is mounted with this parameter. 90 - This option can speed up file system operation. 87 + * - sparse 88 + - Create new files as sparse. 
91 89 92 - =============================================================================== 90 + * - showmeta 91 + - Use this parameter to show all meta-files (System Files) on a mounted 92 + NTFS partition. By default, all meta-files are hidden. 93 93 94 - ToDo list 94 + * - prealloc 95 + - Preallocate space for files excessively when file size is increasing on 96 + writes. Decreases fragmentation in case of parallel write operations to 97 + different files. 98 + 99 + * - acl 100 + - Support POSIX ACLs (Access Control Lists). Effective if supported by 101 + Kernel. Not to be confused with NTFS ACLs. The option specified as acl 102 + enables support for POSIX ACLs. 103 + 104 + Todo list 95 105 ========= 96 - 97 - - Full journaling support (currently journal replaying is supported) over JBD. 98 - 106 + - Full journaling support over JBD. Currently journal replaying is supported 107 + which is not necessarily as effective as JBD would be. 99 108 100 109 References 101 110 ========== 102 - https://www.paragon-software.com/home/ntfs-linux-professional/ 103 - - Commercial version of the NTFS driver for Linux. 111 + - Commercial version of the NTFS driver for Linux. 112 + https://www.paragon-software.com/home/ntfs-linux-professional/ 104 113 105 - almaz.alexandrovich@paragon-software.com 106 - - Direct e-mail address for feedback and requests on the NTFS3 implementation. 114 + - Direct e-mail address for feedback and requests on the NTFS3 implementation. 115 + almaz.alexandrovich@paragon-software.com
+1 -1
Documentation/userspace-api/vduse.rst
··· 18 18 is clarified or fixed in the future. 19 19 20 20 Create/Destroy VDUSE devices 21 - ------------------------ 21 + ---------------------------- 22 22 23 23 VDUSE devices are created as follows: 24 24
+8 -7
MAINTAINERS
··· 7343 7343 7344 7344 FPGA MANAGER FRAMEWORK 7345 7345 M: Moritz Fischer <mdf@kernel.org> 7346 + M: Wu Hao <hao.wu@intel.com> 7347 + M: Xu Yilun <yilun.xu@intel.com> 7346 7348 R: Tom Rix <trix@redhat.com> 7347 7349 L: linux-fpga@vger.kernel.org 7348 7350 S: Maintained 7349 - W: http://www.rocketboards.org 7350 7351 Q: http://patchwork.kernel.org/project/linux-fpga/list/ 7351 7352 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mdf/linux-fpga.git 7352 7353 F: Documentation/devicetree/bindings/fpga/ ··· 7441 7440 M: Joakim Zhang <qiangqing.zhang@nxp.com> 7442 7441 L: netdev@vger.kernel.org 7443 7442 S: Maintained 7444 - F: Documentation/devicetree/bindings/net/fsl-fec.txt 7443 + F: Documentation/devicetree/bindings/net/fsl,fec.yaml 7445 7444 F: drivers/net/ethernet/freescale/fec.h 7446 7445 F: drivers/net/ethernet/freescale/fec_main.c 7447 7446 F: drivers/net/ethernet/freescale/fec_ptp.c ··· 9308 9307 F: drivers/platform/x86/intel/atomisp2/led.c 9309 9308 9310 9309 INTEL BIOS SAR INT1092 DRIVER 9311 - M: Shravan S <s.shravan@intel.com> 9310 + M: Shravan Sudhakar <s.shravan@intel.com> 9312 9311 M: Intel Corporation <linuxwwan@intel.com> 9313 9312 L: platform-driver-x86@vger.kernel.org 9314 9313 S: Maintained ··· 9630 9629 F: tools/power/x86/intel-speed-select/ 9631 9630 9632 9631 INTEL STRATIX10 FIRMWARE DRIVERS 9633 - M: Richard Gong <richard.gong@linux.intel.com> 9632 + M: Dinh Nguyen <dinguyen@kernel.org> 9634 9633 L: linux-kernel@vger.kernel.org 9635 9634 S: Maintained 9636 9635 F: Documentation/ABI/testing/sysfs-devices-platform-stratix10-rsu ··· 10280 10279 M: Christian Borntraeger <borntraeger@de.ibm.com> 10281 10280 M: Janosch Frank <frankja@linux.ibm.com> 10282 10281 R: David Hildenbrand <david@redhat.com> 10283 - R: Cornelia Huck <cohuck@redhat.com> 10284 10282 R: Claudio Imbrenda <imbrenda@linux.ibm.com> 10285 10283 L: kvm@vger.kernel.org 10286 10284 S: Supported ··· 11153 11153 F: Documentation/devicetree/bindings/net/dsa/marvell.txt 11154 11154 F: 
Documentation/networking/devlink/mv88e6xxx.rst 11155 11155 F: drivers/net/dsa/mv88e6xxx/ 11156 + F: include/linux/dsa/mv88e6xxx.h 11156 11157 F: include/linux/platform_data/mv88e6xxx.h 11157 11158 11158 11159 MARVELL ARMADA 3700 PHY DRIVERS ··· 16302 16301 M: Heiko Carstens <hca@linux.ibm.com> 16303 16302 M: Vasily Gorbik <gor@linux.ibm.com> 16304 16303 M: Christian Borntraeger <borntraeger@de.ibm.com> 16304 + R: Alexander Gordeev <agordeev@linux.ibm.com> 16305 16305 L: linux-s390@vger.kernel.org 16306 16306 S: Supported 16307 16307 W: http://www.ibm.com/developerworks/linux/linux390/ ··· 16381 16379 F: drivers/s390/crypto/vfio_ap_private.h 16382 16380 16383 16381 S390 VFIO-CCW DRIVER 16384 - M: Cornelia Huck <cohuck@redhat.com> 16385 16382 M: Eric Farman <farman@linux.ibm.com> 16386 16383 M: Matthew Rosato <mjrosato@linux.ibm.com> 16387 16384 R: Halil Pasic <pasic@linux.ibm.com> ··· 17987 17986 SY8106A REGULATOR DRIVER 17988 17987 M: Icenowy Zheng <icenowy@aosc.io> 17989 17988 S: Maintained 17990 - F: Documentation/devicetree/bindings/regulator/sy8106a-regulator.txt 17989 + F: Documentation/devicetree/bindings/regulator/silergy,sy8106a.yaml 17991 17990 F: drivers/regulator/sy8106a-regulator.c 17992 17991 17993 17992 SYNC FILE FRAMEWORK
+1 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 15 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc5 5 + EXTRAVERSION = -rc6 6 6 NAME = Opossums on Parade 7 7 8 8 # *DOCUMENTATION*
-5
arch/arc/include/asm/pgtable.h
··· 26 26 27 27 extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE); 28 28 29 - /* Macro to mark a page protection as uncacheable */ 30 - #define pgprot_noncached(prot) (__pgprot(pgprot_val(prot) & ~_PAGE_CACHEABLE)) 31 - 32 - extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE); 33 - 34 29 /* to cope with aliasing VIPT cache */ 35 30 #define HAVE_ARCH_UNMAPPED_AREA 36 31
+6 -5
arch/arm/boot/dts/bcm2711-rpi-4-b.dts
··· 40 40 regulator-always-on; 41 41 regulator-settling-time-us = <5000>; 42 42 gpios = <&expgpio 4 GPIO_ACTIVE_HIGH>; 43 - states = <1800000 0x1 44 - 3300000 0x0>; 43 + states = <1800000 0x1>, 44 + <3300000 0x0>; 45 45 status = "okay"; 46 46 }; 47 47 ··· 217 217 }; 218 218 219 219 &pcie0 { 220 - pci@1,0 { 220 + pci@0,0 { 221 + device_type = "pci"; 221 222 #address-cells = <3>; 222 223 #size-cells = <2>; 223 224 ranges; 224 225 225 226 reg = <0 0 0 0 0>; 226 227 227 - usb@1,0 { 228 - reg = <0x10000 0 0 0 0>; 228 + usb@0,0 { 229 + reg = <0 0 0 0 0>; 229 230 resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>; 230 231 }; 231 232 };
+10 -2
arch/arm/boot/dts/bcm2711.dtsi
··· 300 300 status = "disabled"; 301 301 }; 302 302 303 + vec: vec@7ec13000 { 304 + compatible = "brcm,bcm2711-vec"; 305 + reg = <0x7ec13000 0x1000>; 306 + clocks = <&clocks BCM2835_CLOCK_VEC>; 307 + interrupts = <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>; 308 + status = "disabled"; 309 + }; 310 + 303 311 dvp: clock@7ef00000 { 304 312 compatible = "brcm,brcm2711-dvp"; 305 313 reg = <0x7ef00000 0x10>; ··· 540 532 compatible = "brcm,genet-mdio-v5"; 541 533 reg = <0xe14 0x8>; 542 534 reg-names = "mdio"; 543 - #address-cells = <0x0>; 544 - #size-cells = <0x1>; 535 + #address-cells = <0x1>; 536 + #size-cells = <0x0>; 545 537 }; 546 538 }; 547 539 };
+8
arch/arm/boot/dts/bcm2835-common.dtsi
··· 106 106 status = "okay"; 107 107 }; 108 108 109 + vec: vec@7e806000 { 110 + compatible = "brcm,bcm2835-vec"; 111 + reg = <0x7e806000 0x1000>; 112 + clocks = <&clocks BCM2835_CLOCK_VEC>; 113 + interrupts = <2 27>; 114 + status = "disabled"; 115 + }; 116 + 109 117 pixelvalve@7e807000 { 110 118 compatible = "brcm,bcm2835-pixelvalve2"; 111 119 reg = <0x7e807000 0x100>;
-8
arch/arm/boot/dts/bcm283x.dtsi
··· 464 464 status = "disabled"; 465 465 }; 466 466 467 - vec: vec@7e806000 { 468 - compatible = "brcm,bcm2835-vec"; 469 - reg = <0x7e806000 0x1000>; 470 - clocks = <&clocks BCM2835_CLOCK_VEC>; 471 - interrupts = <2 27>; 472 - status = "disabled"; 473 - }; 474 - 475 467 usb: usb@7e980000 { 476 468 compatible = "brcm,bcm2835-usb"; 477 469 reg = <0x7e980000 0x10000>;
+1 -1
arch/arm/boot/dts/spear3xx.dtsi
··· 47 47 }; 48 48 49 49 gmac: eth@e0800000 { 50 - compatible = "st,spear600-gmac"; 50 + compatible = "snps,dwmac-3.40a"; 51 51 reg = <0xe0800000 0x8000>; 52 52 interrupts = <23 22>; 53 53 interrupt-names = "macirq", "eth_wake_irq";
-1
arch/arm/configs/multi_v7_defconfig
··· 197 197 CONFIG_DEVTMPFS=y 198 198 CONFIG_DEVTMPFS_MOUNT=y 199 199 CONFIG_OMAP_OCP2SCP=y 200 - CONFIG_SIMPLE_PM_BUS=y 201 200 CONFIG_MTD=y 202 201 CONFIG_MTD_CMDLINE_PARTS=y 203 202 CONFIG_MTD_BLOCK=y
-1
arch/arm/configs/oxnas_v6_defconfig
··· 46 46 CONFIG_DEVTMPFS_MOUNT=y 47 47 CONFIG_DMA_CMA=y 48 48 CONFIG_CMA_SIZE_MBYTES=64 49 - CONFIG_SIMPLE_PM_BUS=y 50 49 CONFIG_MTD=y 51 50 CONFIG_MTD_CMDLINE_PARTS=y 52 51 CONFIG_MTD_BLOCK=y
-1
arch/arm/configs/shmobile_defconfig
··· 40 40 CONFIG_PCIE_RCAR_HOST=y 41 41 CONFIG_DEVTMPFS=y 42 42 CONFIG_DEVTMPFS_MOUNT=y 43 - CONFIG_SIMPLE_PM_BUS=y 44 43 CONFIG_MTD=y 45 44 CONFIG_MTD_BLOCK=y 46 45 CONFIG_MTD_CFI=y
+31 -9
arch/arm/mach-imx/src.c
··· 9 9 #include <linux/iopoll.h> 10 10 #include <linux/of.h> 11 11 #include <linux/of_address.h> 12 + #include <linux/platform_device.h> 12 13 #include <linux/reset-controller.h> 13 14 #include <linux/smp.h> 14 15 #include <asm/smp_plat.h> ··· 80 79 81 80 static const struct reset_control_ops imx_src_ops = { 82 81 .reset = imx_src_reset_module, 83 - }; 84 - 85 - static struct reset_controller_dev imx_reset_controller = { 86 - .ops = &imx_src_ops, 87 - .nr_resets = ARRAY_SIZE(sw_reset_bits), 88 82 }; 89 83 90 84 static void imx_gpcv2_set_m_core_pgc(bool enable, u32 offset) ··· 173 177 src_base = of_iomap(np, 0); 174 178 WARN_ON(!src_base); 175 179 176 - imx_reset_controller.of_node = np; 177 - if (IS_ENABLED(CONFIG_RESET_CONTROLLER)) 178 - reset_controller_register(&imx_reset_controller); 179 - 180 180 /* 181 181 * force warm reset sources to generate cold reset 182 182 * for a more reliable restart ··· 206 214 if (!gpc_base) 207 215 return; 208 216 } 217 + 218 + static const struct of_device_id imx_src_dt_ids[] = { 219 + { .compatible = "fsl,imx51-src" }, 220 + { /* sentinel */ } 221 + }; 222 + 223 + static int imx_src_probe(struct platform_device *pdev) 224 + { 225 + struct reset_controller_dev *rcdev; 226 + 227 + rcdev = devm_kzalloc(&pdev->dev, sizeof(*rcdev), GFP_KERNEL); 228 + if (!rcdev) 229 + return -ENOMEM; 230 + 231 + rcdev->ops = &imx_src_ops; 232 + rcdev->dev = &pdev->dev; 233 + rcdev->of_node = pdev->dev.of_node; 234 + rcdev->nr_resets = ARRAY_SIZE(sw_reset_bits); 235 + 236 + return devm_reset_controller_register(&pdev->dev, rcdev); 237 + } 238 + 239 + static struct platform_driver imx_src_driver = { 240 + .driver = { 241 + .name = "imx-src", 242 + .of_match_table = imx_src_dt_ids, 243 + }, 244 + .probe = imx_src_probe, 245 + }; 246 + builtin_platform_driver(imx_src_driver);
-1
arch/arm/mach-omap2/Kconfig
··· 112 112 select PM_GENERIC_DOMAINS 113 113 select PM_GENERIC_DOMAINS_OF 114 114 select RESET_CONTROLLER 115 - select SIMPLE_PM_BUS 116 115 select SOC_BUS 117 116 select TI_SYSC 118 117 select OMAP_IRQCHIP
-1
arch/arm64/configs/defconfig
··· 245 245 CONFIG_FW_LOADER_USER_HELPER=y 246 246 CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y 247 247 CONFIG_HISILICON_LPC=y 248 - CONFIG_SIMPLE_PM_BUS=y 249 248 CONFIG_FSL_MC_BUS=y 250 249 CONFIG_TEGRA_ACONNECT=m 251 250 CONFIG_GNSS=m
+1 -1
arch/arm64/mm/hugetlbpage.c
··· 43 43 #ifdef CONFIG_ARM64_4K_PAGES 44 44 order = PUD_SHIFT - PAGE_SHIFT; 45 45 #else 46 - order = CONT_PMD_SHIFT + PMD_SHIFT - PAGE_SHIFT; 46 + order = CONT_PMD_SHIFT - PAGE_SHIFT; 47 47 #endif 48 48 /* 49 49 * HugeTLB CMA reservation is required for gigantic
+2 -1
arch/csky/Kconfig
··· 8 8 select ARCH_HAS_SYNC_DMA_FOR_DEVICE 9 9 select ARCH_USE_BUILTIN_BSWAP 10 10 select ARCH_USE_QUEUED_RWLOCKS 11 - select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 11 + select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace) 12 12 select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT 13 13 select COMMON_CLK 14 14 select CLKSRC_MMIO ··· 241 241 242 242 menuconfig HAVE_TCM 243 243 bool "Tightly-Coupled/Sram Memory" 244 + depends on !COMPILE_TEST 244 245 help 245 246 The implementation are not only used by TCM (Tightly-Coupled Meory) 246 247 but also used by sram on SOC bus. It follow existed linux tcm
-1
arch/csky/include/asm/bitops.h
··· 74 74 * bug fix, why only could use atomic!!!! 75 75 */ 76 76 #include <asm-generic/bitops/non-atomic.h> 77 - #define __clear_bit(nr, vaddr) clear_bit(nr, vaddr) 78 77 79 78 #include <asm-generic/bitops/le.h> 80 79 #include <asm-generic/bitops/ext2-atomic.h>
+2 -1
arch/csky/kernel/ptrace.c
··· 99 99 if (ret) 100 100 return ret; 101 101 102 - regs.sr = task_pt_regs(target)->sr; 102 + /* BIT(0) of regs.sr is Condition Code/Carry bit */ 103 + regs.sr = (regs.sr & BIT(0)) | (task_pt_regs(target)->sr & ~BIT(0)); 103 104 #ifdef CONFIG_CPU_HAS_HILO 104 105 regs.dcsr = task_pt_regs(target)->dcsr; 105 106 #endif
+4
arch/csky/kernel/signal.c
··· 52 52 struct sigcontext __user *sc) 53 53 { 54 54 int err = 0; 55 + unsigned long sr = regs->sr; 55 56 56 57 /* sc_pt_regs is structured the same as the start of pt_regs */ 57 58 err |= __copy_from_user(regs, &sc->sc_pt_regs, sizeof(struct pt_regs)); 59 + 60 + /* BIT(0) of regs->sr is Condition Code/Carry bit */ 61 + regs->sr = (sr & ~1) | (regs->sr & 1); 58 62 59 63 /* Restore the floating-point state. */ 60 64 err |= restore_fpu_state(sc);
+17 -11
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 255 255 * r3 contains the SRR1 wakeup value, SRR1 is trashed. 256 256 */ 257 257 _GLOBAL(idle_kvm_start_guest) 258 - ld r4,PACAEMERGSP(r13) 259 258 mfcr r5 260 259 mflr r0 261 - std r1,0(r4) 262 - std r5,8(r4) 263 - std r0,16(r4) 264 - subi r1,r4,STACK_FRAME_OVERHEAD 260 + std r5, 8(r1) // Save CR in caller's frame 261 + std r0, 16(r1) // Save LR in caller's frame 262 + // Create frame on emergency stack 263 + ld r4, PACAEMERGSP(r13) 264 + stdu r1, -SWITCH_FRAME_SIZE(r4) 265 + // Switch to new frame on emergency stack 266 + mr r1, r4 267 + std r3, 32(r1) // Save SRR1 wakeup value 265 268 SAVE_NVGPRS(r1) 266 269 267 270 /* ··· 315 312 beq kvm_no_guest 316 313 317 314 kvm_secondary_got_guest: 315 + 316 + // About to go to guest, clear saved SRR1 317 + li r0, 0 318 + std r0, 32(r1) 318 319 319 320 /* Set HSTATE_DSCR(r13) to something sensible */ 320 321 ld r6, PACA_DSCR_DEFAULT(r13) ··· 399 392 mfspr r4, SPRN_LPCR 400 393 rlwimi r4, r3, 0, LPCR_PECE0 | LPCR_PECE1 401 394 mtspr SPRN_LPCR, r4 402 - /* set up r3 for return */ 403 - mfspr r3,SPRN_SRR1 395 + // Return SRR1 wakeup value, or 0 if we went into the guest 396 + ld r3, 32(r1) 404 397 REST_NVGPRS(r1) 405 - addi r1, r1, STACK_FRAME_OVERHEAD 406 - ld r0, 16(r1) 407 - ld r5, 8(r1) 408 - ld r1, 0(r1) 398 + ld r1, 0(r1) // Switch back to caller stack 399 + ld r0, 16(r1) // Reload LR 400 + ld r5, 8(r1) // Reload CR 409 401 mtlr r0 410 402 mtcr r5 411 403 blr
+2 -1
arch/powerpc/sysdev/xive/common.c
··· 945 945 * interrupt to be inactive in that case. 946 946 */ 947 947 *state = (pq != XIVE_ESB_INVALID) && !xd->stale_p && 948 - (xd->saved_p || !!(pq & XIVE_ESB_VAL_P)); 948 + (xd->saved_p || (!!(pq & XIVE_ESB_VAL_P) && 949 + !irqd_irq_disabled(data))); 949 950 return 0; 950 951 default: 951 952 return -EINVAL;
+6 -7
arch/s390/lib/string.c
··· 259 259 #ifdef __HAVE_ARCH_STRRCHR 260 260 char *strrchr(const char *s, int c) 261 261 { 262 - size_t len = __strend(s) - s; 262 + ssize_t len = __strend(s) - s; 263 263 264 - if (len) 265 - do { 266 - if (s[len] == (char) c) 267 - return (char *) s + len; 268 - } while (--len > 0); 269 - return NULL; 264 + do { 265 + if (s[len] == (char)c) 266 + return (char *)s + len; 267 + } while (--len >= 0); 268 + return NULL; 270 269 } 271 270 EXPORT_SYMBOL(strrchr); 272 271 #endif
-1
arch/x86/Kconfig
··· 1525 1525 1526 1526 config AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT 1527 1527 bool "Activate AMD Secure Memory Encryption (SME) by default" 1528 - default y 1529 1528 depends on AMD_MEM_ENCRYPT 1530 1529 help 1531 1530 Say yes to have system memory encrypted by default if running on
+1
arch/x86/events/msr.c
··· 68 68 case INTEL_FAM6_BROADWELL_D: 69 69 case INTEL_FAM6_BROADWELL_G: 70 70 case INTEL_FAM6_BROADWELL_X: 71 + case INTEL_FAM6_SAPPHIRERAPIDS_X: 71 72 72 73 case INTEL_FAM6_ATOM_SILVERMONT: 73 74 case INTEL_FAM6_ATOM_SILVERMONT_D:
+1 -1
arch/x86/kernel/fpu/signal.c
··· 385 385 return -EINVAL; 386 386 } else { 387 387 /* Mask invalid bits out for historical reasons (broken hardware). */ 388 - fpu->state.fxsave.mxcsr &= ~mxcsr_feature_mask; 388 + fpu->state.fxsave.mxcsr &= mxcsr_feature_mask; 389 389 } 390 390 391 391 /* Enforce XFEATURE_MASK_FPSSE when XSAVE is enabled */
+6
block/bfq-cgroup.c
··· 666 666 bfq_put_idle_entity(bfq_entity_service_tree(entity), entity); 667 667 bfqg_and_blkg_put(bfqq_group(bfqq)); 668 668 669 + if (entity->parent && 670 + entity->parent->last_bfqq_created == bfqq) 671 + entity->parent->last_bfqq_created = NULL; 672 + else if (bfqd->last_bfqq_created == bfqq) 673 + bfqd->last_bfqq_created = NULL; 674 + 669 675 entity->parent = bfqg->my_entity; 670 676 entity->sched_data = &bfqg->sched_data; 671 677 /* pin down bfqg and its associated blkg */
+78 -70
block/blk-core.c
··· 49 49 #include "blk-mq.h" 50 50 #include "blk-mq-sched.h" 51 51 #include "blk-pm.h" 52 - #include "blk-rq-qos.h" 53 52 54 53 struct dentry *blk_debugfs_root; 55 54 ··· 336 337 } 337 338 EXPORT_SYMBOL(blk_put_queue); 338 339 339 - void blk_set_queue_dying(struct request_queue *q) 340 + void blk_queue_start_drain(struct request_queue *q) 340 341 { 341 - blk_queue_flag_set(QUEUE_FLAG_DYING, q); 342 - 343 342 /* 344 343 * When queue DYING flag is set, we need to block new req 345 344 * entering queue, so we call blk_freeze_queue_start() to 346 345 * prevent I/O from crossing blk_queue_enter(). 347 346 */ 348 347 blk_freeze_queue_start(q); 349 - 350 348 if (queue_is_mq(q)) 351 349 blk_mq_wake_waiters(q); 352 - 353 350 /* Make blk_queue_enter() reexamine the DYING flag. */ 354 351 wake_up_all(&q->mq_freeze_wq); 352 + } 353 + 354 + void blk_set_queue_dying(struct request_queue *q) 355 + { 356 + blk_queue_flag_set(QUEUE_FLAG_DYING, q); 357 + blk_queue_start_drain(q); 355 358 } 356 359 EXPORT_SYMBOL_GPL(blk_set_queue_dying); 357 360 ··· 386 385 */ 387 386 blk_freeze_queue(q); 388 387 389 - rq_qos_exit(q); 390 - 391 388 blk_queue_flag_set(QUEUE_FLAG_DEAD, q); 392 - 393 - /* for synchronous bio-based driver finish in-flight integrity i/o */ 394 - blk_flush_integrity(); 395 389 396 390 blk_sync_queue(q); 397 391 if (queue_is_mq(q)) ··· 412 416 } 413 417 EXPORT_SYMBOL(blk_cleanup_queue); 414 418 419 + static bool blk_try_enter_queue(struct request_queue *q, bool pm) 420 + { 421 + rcu_read_lock(); 422 + if (!percpu_ref_tryget_live(&q->q_usage_counter)) 423 + goto fail; 424 + 425 + /* 426 + * The code that increments the pm_only counter must ensure that the 427 + * counter is globally visible before the queue is unfrozen. 
428 + */ 429 + if (blk_queue_pm_only(q) && 430 + (!pm || queue_rpm_status(q) == RPM_SUSPENDED)) 431 + goto fail_put; 432 + 433 + rcu_read_unlock(); 434 + return true; 435 + 436 + fail_put: 437 + percpu_ref_put(&q->q_usage_counter); 438 + fail: 439 + rcu_read_unlock(); 440 + return false; 441 + } 442 + 415 443 /** 416 444 * blk_queue_enter() - try to increase q->q_usage_counter 417 445 * @q: request queue pointer ··· 445 425 { 446 426 const bool pm = flags & BLK_MQ_REQ_PM; 447 427 448 - while (true) { 449 - bool success = false; 450 - 451 - rcu_read_lock(); 452 - if (percpu_ref_tryget_live(&q->q_usage_counter)) { 453 - /* 454 - * The code that increments the pm_only counter is 455 - * responsible for ensuring that that counter is 456 - * globally visible before the queue is unfrozen. 457 - */ 458 - if ((pm && queue_rpm_status(q) != RPM_SUSPENDED) || 459 - !blk_queue_pm_only(q)) { 460 - success = true; 461 - } else { 462 - percpu_ref_put(&q->q_usage_counter); 463 - } 464 - } 465 - rcu_read_unlock(); 466 - 467 - if (success) 468 - return 0; 469 - 428 + while (!blk_try_enter_queue(q, pm)) { 470 429 if (flags & BLK_MQ_REQ_NOWAIT) 471 430 return -EBUSY; 472 431 473 432 /* 474 - * read pair of barrier in blk_freeze_queue_start(), 475 - * we need to order reading __PERCPU_REF_DEAD flag of 476 - * .q_usage_counter and reading .mq_freeze_depth or 477 - * queue dying flag, otherwise the following wait may 478 - * never return if the two reads are reordered. 433 + * read pair of barrier in blk_freeze_queue_start(), we need to 434 + * order reading __PERCPU_REF_DEAD flag of .q_usage_counter and 435 + * reading .mq_freeze_depth or queue dying flag, otherwise the 436 + * following wait may never return if the two reads are 437 + * reordered. 
479 438 */ 480 439 smp_rmb(); 481 - 482 440 wait_event(q->mq_freeze_wq, 483 441 (!q->mq_freeze_depth && 484 442 blk_pm_resume_queue(pm, q)) || ··· 464 466 if (blk_queue_dying(q)) 465 467 return -ENODEV; 466 468 } 469 + 470 + return 0; 467 471 } 468 472 469 473 static inline int bio_queue_enter(struct bio *bio) 470 474 { 471 - struct request_queue *q = bio->bi_bdev->bd_disk->queue; 472 - bool nowait = bio->bi_opf & REQ_NOWAIT; 473 - int ret; 475 + struct gendisk *disk = bio->bi_bdev->bd_disk; 476 + struct request_queue *q = disk->queue; 474 477 475 - ret = blk_queue_enter(q, nowait ? BLK_MQ_REQ_NOWAIT : 0); 476 - if (unlikely(ret)) { 477 - if (nowait && !blk_queue_dying(q)) 478 + while (!blk_try_enter_queue(q, false)) { 479 + if (bio->bi_opf & REQ_NOWAIT) { 480 + if (test_bit(GD_DEAD, &disk->state)) 481 + goto dead; 478 482 bio_wouldblock_error(bio); 479 - else 480 - bio_io_error(bio); 483 + return -EBUSY; 484 + } 485 + 486 + /* 487 + * read pair of barrier in blk_freeze_queue_start(), we need to 488 + * order reading __PERCPU_REF_DEAD flag of .q_usage_counter and 489 + * reading .mq_freeze_depth or queue dying flag, otherwise the 490 + * following wait may never return if the two reads are 491 + * reordered. 
492 + */ 493 + smp_rmb(); 494 + wait_event(q->mq_freeze_wq, 495 + (!q->mq_freeze_depth && 496 + blk_pm_resume_queue(false, q)) || 497 + test_bit(GD_DEAD, &disk->state)); 498 + if (test_bit(GD_DEAD, &disk->state)) 499 + goto dead; 481 500 } 482 501 483 - return ret; 502 + return 0; 503 + dead: 504 + bio_io_error(bio); 505 + return -ENODEV; 484 506 } 485 507 486 508 void blk_queue_exit(struct request_queue *q) ··· 917 899 struct gendisk *disk = bio->bi_bdev->bd_disk; 918 900 blk_qc_t ret = BLK_QC_T_NONE; 919 901 920 - if (blk_crypto_bio_prep(&bio)) { 921 - if (!disk->fops->submit_bio) 922 - return blk_mq_submit_bio(bio); 902 + if (unlikely(bio_queue_enter(bio) != 0)) 903 + return BLK_QC_T_NONE; 904 + 905 + if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio)) 906 + goto queue_exit; 907 + if (disk->fops->submit_bio) { 923 908 ret = disk->fops->submit_bio(bio); 909 + goto queue_exit; 924 910 } 911 + return blk_mq_submit_bio(bio); 912 + 913 + queue_exit: 925 914 blk_queue_exit(disk->queue); 926 915 return ret; 927 916 } ··· 966 941 struct request_queue *q = bio->bi_bdev->bd_disk->queue; 967 942 struct bio_list lower, same; 968 943 969 - if (unlikely(bio_queue_enter(bio) != 0)) 970 - continue; 971 - 972 944 /* 973 945 * Create a fresh bio_list for all subordinate requests. 
974 946 */ ··· 1001 979 static blk_qc_t __submit_bio_noacct_mq(struct bio *bio) 1002 980 { 1003 981 struct bio_list bio_list[2] = { }; 1004 - blk_qc_t ret = BLK_QC_T_NONE; 982 + blk_qc_t ret; 1005 983 1006 984 current->bio_list = bio_list; 1007 985 1008 986 do { 1009 - struct gendisk *disk = bio->bi_bdev->bd_disk; 1010 - 1011 - if (unlikely(bio_queue_enter(bio) != 0)) 1012 - continue; 1013 - 1014 - if (!blk_crypto_bio_prep(&bio)) { 1015 - blk_queue_exit(disk->queue); 1016 - ret = BLK_QC_T_NONE; 1017 - continue; 1018 - } 1019 - 1020 - ret = blk_mq_submit_bio(bio); 987 + ret = __submit_bio(bio); 1021 988 } while ((bio = bio_list_pop(&bio_list[0]))); 1022 989 1023 990 current->bio_list = NULL; ··· 1024 1013 */ 1025 1014 blk_qc_t submit_bio_noacct(struct bio *bio) 1026 1015 { 1027 - if (!submit_bio_checks(bio)) 1028 - return BLK_QC_T_NONE; 1029 - 1030 1016 /* 1031 1017 * We only want one ->submit_bio to be active at a time, else stack 1032 1018 * usage with stacked devices could be a problem. Use current->bio_list
+8 -1
block/blk-mq.c
··· 188 188 } 189 189 EXPORT_SYMBOL_GPL(blk_mq_freeze_queue); 190 190 191 - void blk_mq_unfreeze_queue(struct request_queue *q) 191 + void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic) 192 192 { 193 193 mutex_lock(&q->mq_freeze_lock); 194 + if (force_atomic) 195 + q->q_usage_counter.data->force_atomic = true; 194 196 q->mq_freeze_depth--; 195 197 WARN_ON_ONCE(q->mq_freeze_depth < 0); 196 198 if (!q->mq_freeze_depth) { ··· 200 198 wake_up_all(&q->mq_freeze_wq); 201 199 } 202 200 mutex_unlock(&q->mq_freeze_lock); 201 + } 202 + 203 + void blk_mq_unfreeze_queue(struct request_queue *q) 204 + { 205 + __blk_mq_unfreeze_queue(q, false); 203 206 } 204 207 EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue); 205 208
+2
block/blk.h
··· 51 51 void blk_free_flush_queue(struct blk_flush_queue *q); 52 52 53 53 void blk_freeze_queue(struct request_queue *q); 54 + void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic); 55 + void blk_queue_start_drain(struct request_queue *q); 54 56 55 57 #define BIO_INLINE_VECS 4 56 58 struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
+23
block/genhd.c
··· 26 26 #include <linux/badblocks.h> 27 27 28 28 #include "blk.h" 29 + #include "blk-rq-qos.h" 29 30 30 31 static struct kobject *block_depr; 31 32 ··· 560 559 */ 561 560 void del_gendisk(struct gendisk *disk) 562 561 { 562 + struct request_queue *q = disk->queue; 563 + 563 564 might_sleep(); 564 565 565 566 if (WARN_ON_ONCE(!disk_live(disk) && !(disk->flags & GENHD_FL_HIDDEN))) ··· 578 575 fsync_bdev(disk->part0); 579 576 __invalidate_device(disk->part0, true); 580 577 578 + /* 579 + * Fail any new I/O. 580 + */ 581 + set_bit(GD_DEAD, &disk->state); 581 582 set_capacity(disk, 0); 583 + 584 + /* 585 + * Prevent new I/O from crossing bio_queue_enter(). 586 + */ 587 + blk_queue_start_drain(q); 588 + blk_mq_freeze_queue_wait(q); 589 + 590 + rq_qos_exit(q); 591 + blk_sync_queue(q); 592 + blk_flush_integrity(); 593 + /* 594 + * Allow using passthrough request again after the queue is torn down. 595 + */ 596 + blk_queue_flag_clear(QUEUE_FLAG_INIT_DONE, q); 597 + __blk_mq_unfreeze_queue(q, true); 582 598 583 599 if (!(disk->flags & GENHD_FL_HIDDEN)) { 584 600 sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi"); ··· 1078 1056 struct gendisk *disk = dev_to_disk(dev); 1079 1057 1080 1058 might_sleep(); 1059 + WARN_ON_ONCE(disk_live(disk)); 1081 1060 1082 1061 disk_release_events(disk); 1083 1062 kfree(disk->random);
+6 -4
block/kyber-iosched.c
··· 151 151 152 152 struct kyber_queue_data { 153 153 struct request_queue *q; 154 + dev_t dev; 154 155 155 156 /* 156 157 * Each scheduling domain has a limited number of in-flight requests ··· 258 257 } 259 258 memset(buckets, 0, sizeof(kqd->latency_buckets[sched_domain][type])); 260 259 261 - trace_kyber_latency(kqd->q, kyber_domain_names[sched_domain], 260 + trace_kyber_latency(kqd->dev, kyber_domain_names[sched_domain], 262 261 kyber_latency_type_names[type], percentile, 263 262 bucket + 1, 1 << KYBER_LATENCY_SHIFT, samples); 264 263 ··· 271 270 depth = clamp(depth, 1U, kyber_depth[sched_domain]); 272 271 if (depth != kqd->domain_tokens[sched_domain].sb.depth) { 273 272 sbitmap_queue_resize(&kqd->domain_tokens[sched_domain], depth); 274 - trace_kyber_adjust(kqd->q, kyber_domain_names[sched_domain], 273 + trace_kyber_adjust(kqd->dev, kyber_domain_names[sched_domain], 275 274 depth); 276 275 } 277 276 } ··· 367 366 goto err; 368 367 369 368 kqd->q = q; 369 + kqd->dev = disk_devt(q->disk); 370 370 371 371 kqd->cpu_latency = alloc_percpu_gfp(struct kyber_cpu_latency, 372 372 GFP_KERNEL | __GFP_ZERO); ··· 776 774 list_del_init(&rq->queuelist); 777 775 return rq; 778 776 } else { 779 - trace_kyber_throttled(kqd->q, 777 + trace_kyber_throttled(kqd->dev, 780 778 kyber_domain_names[khd->cur_domain]); 781 779 } 782 780 } else if (sbitmap_any_bit_set(&khd->kcq_map[khd->cur_domain])) { ··· 789 787 list_del_init(&rq->queuelist); 790 788 return rq; 791 789 } else { 792 - trace_kyber_throttled(kqd->q, 790 + trace_kyber_throttled(kqd->dev, 793 791 kyber_domain_names[khd->cur_domain]); 794 792 } 795 793 }
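The kyber hunk swaps the request_queue pointer for a dev_t in the trace events, since the tracepoints only need the device number, not a queue reference. In-kernel dev_t packs major and minor as major << 20 | minor; a self-contained copy of those macros (mirroring include/linux/kdev_t.h):

```c
#include <assert.h>

/*
 * In-kernel dev_t layout (include/linux/kdev_t.h): 12-bit major in
 * the high bits, 20-bit minor in the low bits.
 */
#define MINORBITS	20
#define MINORMASK	((1U << MINORBITS) - 1)

#define MAJOR(dev)	((unsigned int)((dev) >> MINORBITS))
#define MINOR(dev)	((unsigned int)((dev) & MINORMASK))
#define MKDEV(ma, mi)	(((ma) << MINORBITS) | (mi))
```

With kqd->dev stored at init time via disk_devt(), the trace side can decode MAJOR()/MINOR() without ever touching the queue again.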
+1 -1
drivers/acpi/arm64/gtdt.c
··· 36 36 37 37 static struct acpi_gtdt_descriptor acpi_gtdt_desc __initdata; 38 38 39 - static inline void *next_platform_timer(void *platform_timer) 39 + static inline __init void *next_platform_timer(void *platform_timer) 40 40 { 41 41 struct acpi_gtdt_header *gh = platform_timer; 42 42
+2 -1
drivers/acpi/x86/s2idle.c
··· 371 371 return 0; 372 372 373 373 if (acpi_s2idle_vendor_amd()) { 374 - /* AMD0004, AMDI0005: 374 + /* AMD0004, AMD0005, AMDI0005: 375 375 * - Should use rev_id 0x0 376 376 * - function mask > 0x3: Should use AMD method, but has off by one bug 377 377 * - function mask = 0x3: Should use Microsoft method ··· 390 390 ACPI_LPS0_DSM_UUID_MICROSOFT, 0, 391 391 &lps0_dsm_guid_microsoft); 392 392 if (lps0_dsm_func_mask > 0x3 && (!strcmp(hid, "AMD0004") || 393 + !strcmp(hid, "AMD0005") || 393 394 !strcmp(hid, "AMDI0005"))) { 394 395 lps0_dsm_func_mask = (lps0_dsm_func_mask << 1) | 0x1; 395 396 acpi_handle_debug(adev->handle, "_DSM UUID %s: Adjusted function mask: 0x%x\n",
+1 -4
drivers/ata/libahci_platform.c
··· 440 440 hpriv->phy_regulator = devm_regulator_get(dev, "phy"); 441 441 if (IS_ERR(hpriv->phy_regulator)) { 442 442 rc = PTR_ERR(hpriv->phy_regulator); 443 - if (rc == -EPROBE_DEFER) 444 - goto err_out; 445 - rc = 0; 446 - hpriv->phy_regulator = NULL; 443 + goto err_out; 447 444 } 448 445 449 446 if (flags & AHCI_PLATFORM_GET_RESETS) {
+4 -2
drivers/ata/pata_legacy.c
··· 352 352 iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2); 353 353 354 354 if (unlikely(slop)) { 355 - __le32 pad; 355 + __le32 pad = 0; 356 + 356 357 if (rw == READ) { 357 358 pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr)); 358 359 memcpy(buf + buflen - slop, &pad, slop); ··· 743 742 ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2); 744 743 745 744 if (unlikely(slop)) { 746 - __le32 pad; 745 + __le32 pad = 0; 746 + 747 747 if (rw == WRITE) { 748 748 memcpy(&pad, buf + buflen - slop, slop); 749 749 iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
+2 -1
drivers/base/core.c
··· 687 687 { 688 688 struct device_link *link; 689 689 690 - if (!consumer || !supplier || flags & ~DL_ADD_VALID_FLAGS || 690 + if (!consumer || !supplier || consumer == supplier || 691 + flags & ~DL_ADD_VALID_FLAGS || 691 692 (flags & DL_FLAG_STATELESS && flags & DL_MANAGED_LINK_FLAGS) || 692 693 (flags & DL_FLAG_SYNC_STATE_ONLY && 693 694 (flags & ~DL_FLAG_INFERRED) != DL_FLAG_SYNC_STATE_ONLY) ||
+1 -1
drivers/base/test/Makefile
··· 2 2 obj-$(CONFIG_TEST_ASYNC_DRIVER_PROBE) += test_async_driver_probe.o 3 3 4 4 obj-$(CONFIG_DRIVER_PE_KUNIT_TEST) += property-entry-test.o 5 - CFLAGS_REMOVE_property-entry-test.o += -fplugin-arg-structleak_plugin-byref -fplugin-arg-structleak_plugin-byref-all 5 + CFLAGS_property-entry-test.o += $(DISABLE_STRUCTLEAK_PLUGIN)
+22 -22
drivers/block/brd.c
··· 373 373 struct gendisk *disk; 374 374 char buf[DISK_NAME_LEN]; 375 375 376 + mutex_lock(&brd_devices_mutex); 377 + list_for_each_entry(brd, &brd_devices, brd_list) { 378 + if (brd->brd_number == i) { 379 + mutex_unlock(&brd_devices_mutex); 380 + return -EEXIST; 381 + } 382 + } 376 383 brd = kzalloc(sizeof(*brd), GFP_KERNEL); 377 - if (!brd) 384 + if (!brd) { 385 + mutex_unlock(&brd_devices_mutex); 378 386 return -ENOMEM; 387 + } 379 388 brd->brd_number = i; 389 + list_add_tail(&brd->brd_list, &brd_devices); 390 + mutex_unlock(&brd_devices_mutex); 391 + 380 392 spin_lock_init(&brd->brd_lock); 381 393 INIT_RADIX_TREE(&brd->brd_pages, GFP_ATOMIC); 382 394 ··· 423 411 blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue); 424 412 blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, disk->queue); 425 413 add_disk(disk); 426 - list_add_tail(&brd->brd_list, &brd_devices); 427 414 428 415 return 0; 429 416 430 417 out_free_dev: 418 + mutex_lock(&brd_devices_mutex); 419 + list_del(&brd->brd_list); 420 + mutex_unlock(&brd_devices_mutex); 431 421 kfree(brd); 432 422 return -ENOMEM; 433 423 } 434 424 435 425 static void brd_probe(dev_t dev) 436 426 { 437 - int i = MINOR(dev) / max_part; 438 - struct brd_device *brd; 439 - 440 - mutex_lock(&brd_devices_mutex); 441 - list_for_each_entry(brd, &brd_devices, brd_list) { 442 - if (brd->brd_number == i) 443 - goto out_unlock; 444 - } 445 - 446 - brd_alloc(i); 447 - out_unlock: 448 - mutex_unlock(&brd_devices_mutex); 427 + brd_alloc(MINOR(dev) / max_part); 449 428 } 450 429 451 430 static void brd_del_one(struct brd_device *brd) 452 431 { 453 - list_del(&brd->brd_list); 454 432 del_gendisk(brd->brd_disk); 455 433 blk_cleanup_disk(brd->brd_disk); 456 434 brd_free_pages(brd); 435 + mutex_lock(&brd_devices_mutex); 436 + list_del(&brd->brd_list); 437 + mutex_unlock(&brd_devices_mutex); 457 438 kfree(brd); 458 439 } 459 440 ··· 496 491 497 492 brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL); 498 493 499 - 
mutex_lock(&brd_devices_mutex); 500 494 for (i = 0; i < rd_nr; i++) { 501 495 err = brd_alloc(i); 502 496 if (err) 503 497 goto out_free; 504 498 } 505 499 506 - mutex_unlock(&brd_devices_mutex); 507 - 508 500 pr_info("brd: module loaded\n"); 509 501 return 0; 510 502 511 503 out_free: 504 + unregister_blkdev(RAMDISK_MAJOR, "ramdisk"); 512 505 debugfs_remove_recursive(brd_debugfs_dir); 513 506 514 507 list_for_each_entry_safe(brd, next, &brd_devices, brd_list) 515 508 brd_del_one(brd); 516 - mutex_unlock(&brd_devices_mutex); 517 - unregister_blkdev(RAMDISK_MAJOR, "ramdisk"); 518 509 519 510 pr_info("brd: module NOT loaded !!!\n"); 520 511 return err; ··· 520 519 { 521 520 struct brd_device *brd, *next; 522 521 522 + unregister_blkdev(RAMDISK_MAJOR, "ramdisk"); 523 523 debugfs_remove_recursive(brd_debugfs_dir); 524 524 525 525 list_for_each_entry_safe(brd, next, &brd_devices, brd_list) 526 526 brd_del_one(brd); 527 - 528 - unregister_blkdev(RAMDISK_MAJOR, "ramdisk"); 529 527 530 528 pr_info("brd: module unloaded\n"); 531 529 }
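The brd hunk moves the duplicate-minor check into brd_alloc() and takes brd_devices_mutex around both the check and the list insertion, closing the window where two concurrent probes of the same minor could both pass the check and both allocate. A toy sketch of the corrected shape (the registry, array bound, and names are invented for illustration):

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

#define MAX_DEVS 8

static pthread_mutex_t devs_mutex = PTHREAD_MUTEX_INITIALIZER;
static int devs[MAX_DEVS];
static int ndevs;

/*
 * The fixed pattern: the existence check and the insertion happen
 * under one critical section, so a racing caller for the same number
 * sees either -EEXIST or the finished insertion, never neither.
 */
static int dev_alloc(int number)
{
	int i;

	pthread_mutex_lock(&devs_mutex);
	for (i = 0; i < ndevs; i++) {
		if (devs[i] == number) {
			pthread_mutex_unlock(&devs_mutex);
			return -EEXIST;
		}
	}
	if (ndevs == MAX_DEVS) {
		pthread_mutex_unlock(&devs_mutex);
		return -ENOMEM;
	}
	devs[ndevs++] = number;	/* inserted before the lock drops */
	pthread_mutex_unlock(&devs_mutex);
	return 0;
}
```

brd_probe() then collapses to a bare brd_alloc() call, exactly as in the hunk.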
+3 -1
drivers/block/rnbd/rnbd-clt-sysfs.c
··· 71 71 int opt_mask = 0; 72 72 int token; 73 73 int ret = -EINVAL; 74 - int i, dest_port, nr_poll_queues; 74 + int nr_poll_queues = 0; 75 + int dest_port = 0; 75 76 int p_cnt = 0; 77 + int i; 76 78 77 79 options = kstrdup(buf, GFP_KERNEL); 78 80 if (!options)
+6 -31
drivers/block/virtio_blk.c
··· 689 689 static unsigned int virtblk_queue_depth; 690 690 module_param_named(queue_depth, virtblk_queue_depth, uint, 0444); 691 691 692 - static int virtblk_validate(struct virtio_device *vdev) 693 - { 694 - u32 blk_size; 695 - 696 - if (!vdev->config->get) { 697 - dev_err(&vdev->dev, "%s failure: config access disabled\n", 698 - __func__); 699 - return -EINVAL; 700 - } 701 - 702 - if (!virtio_has_feature(vdev, VIRTIO_BLK_F_BLK_SIZE)) 703 - return 0; 704 - 705 - blk_size = virtio_cread32(vdev, 706 - offsetof(struct virtio_blk_config, blk_size)); 707 - 708 - if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE) 709 - __virtio_clear_bit(vdev, VIRTIO_BLK_F_BLK_SIZE); 710 - 711 - return 0; 712 - } 713 - 714 692 static int virtblk_probe(struct virtio_device *vdev) 715 693 { 716 694 struct virtio_blk *vblk; ··· 699 721 u16 min_io_size; 700 722 u8 physical_block_exp, alignment_offset; 701 723 unsigned int queue_depth; 724 + 725 + if (!vdev->config->get) { 726 + dev_err(&vdev->dev, "%s failure: config access disabled\n", 727 + __func__); 728 + return -EINVAL; 729 + } 702 730 703 731 err = ida_simple_get(&vd_index_ida, 0, minor_to_index(1 << MINORBITS), 704 732 GFP_KERNEL); ··· 819 835 blk_queue_logical_block_size(q, blk_size); 820 836 else 821 837 blk_size = queue_logical_block_size(q); 822 - 823 - if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE) { 824 - dev_err(&vdev->dev, 825 - "block size is changed unexpectedly, now is %u\n", 826 - blk_size); 827 - err = -EINVAL; 828 - goto out_cleanup_disk; 829 - } 830 838 831 839 /* Use topology information if available */ 832 840 err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY, ··· 985 1009 .driver.name = KBUILD_MODNAME, 986 1010 .driver.owner = THIS_MODULE, 987 1011 .id_table = id_table, 988 - .validate = virtblk_validate, 989 1012 .probe = virtblk_probe, 990 1013 .remove = virtblk_remove, 991 1014 .config_changed = virtblk_config_changed,
-12
drivers/bus/Kconfig
··· 152 152 Interface 2, which can be used to connect things like NAND Flash, 153 153 SRAM, ethernet adapters, FPGAs and LCD displays. 154 154 155 - config SIMPLE_PM_BUS 156 - tristate "Simple Power-Managed Bus Driver" 157 - depends on OF && PM 158 - help 159 - Driver for transparent busses that don't need a real driver, but 160 - where the bus controller is part of a PM domain, or under the control 161 - of a functional clock, and thus relies on runtime PM for managing 162 - this PM domain and/or clock. 163 - An example of such a bus controller is the Renesas Bus State 164 - Controller (BSC, sometimes called "LBSC within Bus Bridge", or 165 - "External Bus Interface") as found on several Renesas ARM SoCs. 166 - 167 155 config SUN50I_DE2_BUS 168 156 bool "Allwinner A64 DE2 Bus Driver" 169 157 default ARM64
+1 -1
drivers/bus/Makefile
··· 27 27 obj-$(CONFIG_QCOM_EBI2) += qcom-ebi2.o 28 28 obj-$(CONFIG_SUN50I_DE2_BUS) += sun50i-de2.o 29 29 obj-$(CONFIG_SUNXI_RSB) += sunxi-rsb.o 30 - obj-$(CONFIG_SIMPLE_PM_BUS) += simple-pm-bus.o 30 + obj-$(CONFIG_OF) += simple-pm-bus.o 31 31 obj-$(CONFIG_TEGRA_ACONNECT) += tegra-aconnect.o 32 32 obj-$(CONFIG_TEGRA_GMI) += tegra-gmi.o 33 33 obj-$(CONFIG_TI_PWMSS) += ti-pwmss.o
+39 -3
drivers/bus/simple-pm-bus.c
··· 13 13 #include <linux/platform_device.h> 14 14 #include <linux/pm_runtime.h> 15 15 16 - 17 16 static int simple_pm_bus_probe(struct platform_device *pdev) 18 17 { 19 - const struct of_dev_auxdata *lookup = dev_get_platdata(&pdev->dev); 20 - struct device_node *np = pdev->dev.of_node; 18 + const struct device *dev = &pdev->dev; 19 + const struct of_dev_auxdata *lookup = dev_get_platdata(dev); 20 + struct device_node *np = dev->of_node; 21 + const struct of_device_id *match; 22 + 23 + /* 24 + * Allow user to use driver_override to bind this driver to a 25 + * transparent bus device which has a different compatible string 26 + * that's not listed in simple_pm_bus_of_match. We don't want to do any 27 + * of the simple-pm-bus tasks for these devices, so return early. 28 + */ 29 + if (pdev->driver_override) 30 + return 0; 31 + 32 + match = of_match_device(dev->driver->of_match_table, dev); 33 + /* 34 + * These are transparent bus devices (not simple-pm-bus matches) that 35 + * have their child nodes populated automatically. So, don't need to 36 + * do anything more. We only match with the device if this driver is 37 + * the most specific match because we don't want to incorrectly bind to 38 + * a device that has a more specific driver. 39 + */ 40 + if (match && match->data) { 41 + if (of_property_match_string(np, "compatible", match->compatible) == 0) 42 + return 0; 43 + else 44 + return -ENODEV; 45 + } 21 46 22 47 dev_dbg(&pdev->dev, "%s\n", __func__); 23 48 ··· 56 31 57 32 static int simple_pm_bus_remove(struct platform_device *pdev) 58 33 { 34 + const void *data = of_device_get_match_data(&pdev->dev); 35 + 36 + if (pdev->driver_override || data) 37 + return 0; 38 + 59 39 dev_dbg(&pdev->dev, "%s\n", __func__); 60 40 61 41 pm_runtime_disable(&pdev->dev); 62 42 return 0; 63 43 } 64 44 45 + #define ONLY_BUS ((void *) 1) /* Match if the device is only a bus. 
*/ 46 + 65 47 static const struct of_device_id simple_pm_bus_of_match[] = { 66 48 { .compatible = "simple-pm-bus", }, 49 + { .compatible = "simple-bus", .data = ONLY_BUS }, 50 + { .compatible = "simple-mfd", .data = ONLY_BUS }, 51 + { .compatible = "isa", .data = ONLY_BUS }, 52 + { .compatible = "arm,amba-bus", .data = ONLY_BUS }, 67 53 { /* sentinel */ } 68 54 }; 69 55 MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);
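The simple-pm-bus hunk lets the driver match generic transparent buses (the ONLY_BUS entries) while only claiming a device when that compatible is the node's most specific one, and leaves driver_override binds untouched. A simplified model of that probe decision (the flattened arguments stand in for the real of_match_device() and of_property_match_string() plumbing):

```c
#include <assert.h>
#include <errno.h>
#include <string.h>
#include <stdbool.h>

#define ONLY_BUS 1	/* match entry is a plain transparent bus */

/*
 * driver_override: bind, but do none of the simple-pm-bus work.
 * ONLY_BUS entries: succeed only when the matched compatible is the
 * node's first (most specific) one; otherwise -ENODEV so a better
 * driver can bind. Returning 0 here means "probe succeeds"; the real
 * driver then decides whether to do the runtime-PM setup.
 */
static int sim_probe(bool driver_override, int match_data,
		     const char *first_compatible, const char *matched)
{
	if (driver_override)
		return 0;
	if (match_data == ONLY_BUS)
		return strcmp(first_compatible, matched) == 0 ? 0 : -ENODEV;
	return 0;	/* plain simple-pm-bus: do the PM setup */
}
```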
+1
drivers/clk/qcom/Kconfig
··· 564 564 565 565 config SM_GCC_6350 566 566 tristate "SM6350 Global Clock Controller" 567 + select QCOM_GDSC 567 568 help 568 569 Support for the global clock controller on SM6350 devices. 569 570 Say Y if you want to use peripheral devices such as UART,
+1 -1
drivers/clk/qcom/gcc-sm6115.c
··· 3242 3242 }; 3243 3243 3244 3244 static struct gdsc hlos1_vote_turing_mmu_tbu0_gdsc = { 3245 - .gdscr = 0x7d060, 3245 + .gdscr = 0x7d07c, 3246 3246 .pd = { 3247 3247 .name = "hlos1_vote_turing_mmu_tbu0", 3248 3248 },
+2
drivers/clk/renesas/r9a07g044-cpg.c
··· 186 186 187 187 static const unsigned int r9a07g044_crit_mod_clks[] __initconst = { 188 188 MOD_CLK_BASE + R9A07G044_GIC600_GICCLK, 189 + MOD_CLK_BASE + R9A07G044_IA55_CLK, 190 + MOD_CLK_BASE + R9A07G044_DMAC_ACLK, 189 191 }; 190 192 191 193 const struct rzg2l_cpg_info r9a07g044_cpg_info = {
+1 -1
drivers/clk/renesas/rzg2l-cpg.c
··· 391 391 392 392 value = readl(priv->base + CLK_MON_R(clock->off)); 393 393 394 - return !(value & bitmask); 394 + return value & bitmask; 395 395 } 396 396 397 397 static const struct clk_ops rzg2l_mod_clock_ops = {
-9
drivers/clk/socfpga/clk-agilex.c
··· 165 165 .name = "boot_clk", }, 166 166 }; 167 167 168 - static const struct clk_parent_data s2f_usr0_mux[] = { 169 - { .fw_name = "f2s-free-clk", 170 - .name = "f2s-free-clk", }, 171 - { .fw_name = "boot_clk", 172 - .name = "boot_clk", }, 173 - }; 174 - 175 168 static const struct clk_parent_data emac_mux[] = { 176 169 { .fw_name = "emaca_free_clk", 177 170 .name = "emaca_free_clk", }, ··· 305 312 4, 0x44, 28, 1, 0, 0, 0}, 306 313 { AGILEX_CS_TIMER_CLK, "cs_timer_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24, 307 314 5, 0, 0, 0, 0x30, 1, 0}, 308 - { AGILEX_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_usr0_mux, ARRAY_SIZE(s2f_usr0_mux), 0, 0x24, 309 - 6, 0, 0, 0, 0, 0, 0}, 310 315 { AGILEX_EMAC0_CLK, "emac0_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C, 311 316 0, 0, 0, 0, 0x94, 26, 0}, 312 317 { AGILEX_EMAC1_CLK, "emac1_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
+1 -1
drivers/edac/armada_xp_edac.c
··· 178 178 "details unavailable (multiple errors)"); 179 179 if (cnt_dbe) 180 180 edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 181 - cnt_sbe, /* error count */ 181 + cnt_dbe, /* error count */ 182 182 0, 0, 0, /* pfn, offset, syndrome */ 183 183 -1, -1, -1, /* top, mid, low layer */ 184 184 mci->ctl_name,
+9 -1
drivers/firmware/arm_ffa/bus.c
··· 49 49 return ffa_drv->probe(ffa_dev); 50 50 } 51 51 52 + static void ffa_device_remove(struct device *dev) 53 + { 54 + struct ffa_driver *ffa_drv = to_ffa_driver(dev->driver); 55 + 56 + ffa_drv->remove(to_ffa_dev(dev)); 57 + } 58 + 52 59 static int ffa_device_uevent(struct device *dev, struct kobj_uevent_env *env) 53 60 { 54 61 struct ffa_device *ffa_dev = to_ffa_dev(dev); ··· 93 86 .name = "arm_ffa", 94 87 .match = ffa_device_match, 95 88 .probe = ffa_device_probe, 89 + .remove = ffa_device_remove, 96 90 .uevent = ffa_device_uevent, 97 91 .dev_groups = ffa_device_attributes_groups, 98 92 }; ··· 135 127 136 128 static int __ffa_devices_unregister(struct device *dev, void *data) 137 129 { 138 - ffa_release_device(dev); 130 + device_unregister(dev); 139 131 140 132 return 0; 141 133 }
+2 -2
drivers/firmware/efi/cper.c
··· 25 25 #include <acpi/ghes.h> 26 26 #include <ras/ras_event.h> 27 27 28 - static char rcd_decode_str[CPER_REC_LEN]; 29 - 30 28 /* 31 29 * CPER record ID need to be unique even after reboot, because record 32 30 * ID is used as index for ERST storage, while CPER records from ··· 310 312 struct cper_mem_err_compact *cmem) 311 313 { 312 314 const char *ret = trace_seq_buffer_ptr(p); 315 + char rcd_decode_str[CPER_REC_LEN]; 313 316 314 317 if (cper_mem_err_location(cmem, rcd_decode_str)) 315 318 trace_seq_printf(p, "%s", rcd_decode_str); ··· 325 326 int len) 326 327 { 327 328 struct cper_mem_err_compact cmem; 329 + char rcd_decode_str[CPER_REC_LEN]; 328 330 329 331 /* Don't trust UEFI 2.1/2.2 structure with bad validation bits */ 330 332 if (len == sizeof(struct cper_sec_mem_err_old) &&
+1 -1
drivers/firmware/efi/libstub/fdt.c
··· 271 271 return status; 272 272 } 273 273 274 - efi_info("Exiting boot services and installing virtual address map...\n"); 274 + efi_info("Exiting boot services...\n"); 275 275 276 276 map.map = &memory_map; 277 277 status = efi_allocate_pages(MAX_FDT_SIZE, new_fdt_addr, ULONG_MAX);
+1 -1
drivers/firmware/efi/runtime-wrappers.c
··· 414 414 unsigned long data_size, 415 415 efi_char16_t *data) 416 416 { 417 - if (down_interruptible(&efi_runtime_lock)) { 417 + if (down_trylock(&efi_runtime_lock)) { 418 418 pr_warn("failed to invoke the reset_system() runtime service:\n" 419 419 "could not get exclusive access to the firmware\n"); 420 420 return;
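The runtime-wrappers hunk switches reset_system() from down_interruptible() to down_trylock(): the reset path can run in contexts that must not sleep, so the lock attempt has to fail fast rather than block. The same shape with a pthread mutex standing in for the efi_runtime_lock semaphore:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t efi_runtime_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Non-blocking acquire: if the firmware lock is busy, warn and skip
 * the call instead of sleeping -- mirroring the down_trylock() change.
 */
static bool try_reset_system(void)
{
	if (pthread_mutex_trylock(&efi_runtime_lock) != 0)
		return false;	/* busy: warn and bail, never sleep */
	/* ... invoke the firmware reset service here ... */
	pthread_mutex_unlock(&efi_runtime_lock);
	return true;
}
```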
+7
drivers/fpga/ice40-spi.c
··· 192 192 }; 193 193 MODULE_DEVICE_TABLE(of, ice40_fpga_of_match); 194 194 195 + static const struct spi_device_id ice40_fpga_spi_ids[] = { 196 + { .name = "ice40-fpga-mgr", }, 197 + {}, 198 + }; 199 + MODULE_DEVICE_TABLE(spi, ice40_fpga_spi_ids); 200 + 195 201 static struct spi_driver ice40_fpga_driver = { 196 202 .probe = ice40_fpga_probe, 197 203 .driver = { 198 204 .name = "ice40spi", 199 205 .of_match_table = of_match_ptr(ice40_fpga_of_match), 200 206 }, 207 + .id_table = ice40_fpga_spi_ids, 201 208 }; 202 209 203 210 module_spi_driver(ice40_fpga_driver);
+8
drivers/gpio/gpio-74x164.c
··· 174 174 return 0; 175 175 } 176 176 177 + static const struct spi_device_id gen_74x164_spi_ids[] = { 178 + { .name = "74hc595" }, 179 + { .name = "74lvc594" }, 180 + {}, 181 + }; 182 + MODULE_DEVICE_TABLE(spi, gen_74x164_spi_ids); 183 + 177 184 static const struct of_device_id gen_74x164_dt_ids[] = { 178 185 { .compatible = "fairchild,74hc595" }, 179 186 { .compatible = "nxp,74lvc594" }, ··· 195 188 }, 196 189 .probe = gen_74x164_probe, 197 190 .remove = gen_74x164_remove, 191 + .id_table = gen_74x164_spi_ids, 198 192 }; 199 193 module_spi_driver(gen_74x164_driver); 200 194
+18 -3
drivers/gpio/gpio-mockup.c
··· 476 476 477 477 static void gpio_mockup_unregister_pdevs(void) 478 478 { 479 + struct platform_device *pdev; 480 + struct fwnode_handle *fwnode; 479 481 int i; 480 482 481 - for (i = 0; i < GPIO_MOCKUP_MAX_GC; i++) 482 - platform_device_unregister(gpio_mockup_pdevs[i]); 483 + for (i = 0; i < GPIO_MOCKUP_MAX_GC; i++) { 484 + pdev = gpio_mockup_pdevs[i]; 485 + if (!pdev) 486 + continue; 487 + 488 + fwnode = dev_fwnode(&pdev->dev); 489 + platform_device_unregister(pdev); 490 + fwnode_remove_software_node(fwnode); 491 + } 483 492 } 484 493 485 494 static __init char **gpio_mockup_make_line_names(const char *label, ··· 517 508 struct property_entry properties[GPIO_MOCKUP_MAX_PROP]; 518 509 struct platform_device_info pdevinfo; 519 510 struct platform_device *pdev; 511 + struct fwnode_handle *fwnode; 520 512 char **line_names = NULL; 521 513 char chip_label[32]; 522 514 int prop = 0, base; ··· 546 536 "gpio-line-names", line_names, ngpio); 547 537 } 548 538 539 + fwnode = fwnode_create_software_node(properties, NULL); 540 + if (IS_ERR(fwnode)) 541 + return PTR_ERR(fwnode); 542 + 549 543 pdevinfo.name = "gpio-mockup"; 550 544 pdevinfo.id = idx; 551 - pdevinfo.properties = properties; 545 + pdevinfo.fwnode = fwnode; 552 546 553 547 pdev = platform_device_register_full(&pdevinfo); 554 548 kfree_strarray(line_names, ngpio); 555 549 if (IS_ERR(pdev)) { 550 + fwnode_remove_software_node(fwnode); 556 551 pr_err("error registering device"); 557 552 return PTR_ERR(pdev); 558 553 }
+9 -7
drivers/gpio/gpio-pca953x.c
··· 559 559 560 560 mutex_lock(&chip->i2c_lock); 561 561 562 - /* Disable pull-up/pull-down */ 563 - ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0); 564 - if (ret) 565 - goto exit; 566 - 567 562 /* Configure pull-up/pull-down */ 568 563 if (config == PIN_CONFIG_BIAS_PULL_UP) 569 564 ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, bit); 570 565 else if (config == PIN_CONFIG_BIAS_PULL_DOWN) 571 566 ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, 0); 567 + else 568 + ret = 0; 572 569 if (ret) 573 570 goto exit; 574 571 575 - /* Enable pull-up/pull-down */ 576 - ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit); 572 + /* Disable/Enable pull-up/pull-down */ 573 + if (config == PIN_CONFIG_BIAS_DISABLE) 574 + ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0); 575 + else 576 + ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit); 577 577 578 578 exit: 579 579 mutex_unlock(&chip->i2c_lock); ··· 587 587 588 588 switch (pinconf_to_config_param(config)) { 589 589 case PIN_CONFIG_BIAS_PULL_UP: 590 + case PIN_CONFIG_BIAS_PULL_PIN_DEFAULT: 590 591 case PIN_CONFIG_BIAS_PULL_DOWN: 592 + case PIN_CONFIG_BIAS_DISABLE: 591 593 return pca953x_gpio_set_pull_up_down(chip, offset, config); 592 594 default: 593 595 return -ENOTSUPP;
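The pca953x hunk reorders the bias programming: the pull-direction register is written first, then the enable register is touched exactly once (cleared for BIAS_DISABLE, set otherwise) instead of the old disable/configure/enable dance that glitched the line. A sketch of that sequence over plain bytes, with write_bits() mimicking regmap_write_bits() (register layout here is illustrative):

```c
#include <assert.h>
#include <stdint.h>

enum bias { BIAS_PULL_UP, BIAS_PULL_DOWN, BIAS_DISABLE };

/* Read-modify-write of the masked bits, like regmap_write_bits(). */
static uint8_t write_bits(uint8_t reg, uint8_t mask, uint8_t val)
{
	return (reg & ~mask) | (val & mask);
}

/*
 * Reordered sequence: select the pull direction first, then program
 * the enable bit exactly once for the requested bias.
 */
static void set_pull(uint8_t *sel_reg, uint8_t *en_reg, uint8_t bit,
		     enum bias b)
{
	if (b == BIAS_PULL_UP)
		*sel_reg = write_bits(*sel_reg, bit, bit);
	else if (b == BIAS_PULL_DOWN)
		*sel_reg = write_bits(*sel_reg, bit, 0);

	*en_reg = write_bits(*en_reg, bit, b == BIAS_DISABLE ? 0 : bit);
}
```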
+12 -3
drivers/gpu/drm/drm_edid.c
··· 1834 1834 u8 *edid, int num_blocks) 1835 1835 { 1836 1836 int i; 1837 - u8 num_of_ext = edid[0x7e]; 1837 + u8 last_block; 1838 + 1839 + /* 1840 + * 0x7e in the EDID is the number of extension blocks. The EDID 1841 + * is 1 (base block) + num_ext_blocks big. That means we can think 1842 + * of 0x7e in the EDID as the _index_ of the last block in the 1843 + * combined chunk of memory. 1844 + */ 1845 + last_block = edid[0x7e]; 1838 1846 1839 1847 /* Calculate real checksum for the last edid extension block data */ 1840 - connector->real_edid_checksum = 1841 - drm_edid_block_checksum(edid + num_of_ext * EDID_LENGTH); 1848 + if (last_block < num_blocks) 1849 + connector->real_edid_checksum = 1850 + drm_edid_block_checksum(edid + last_block * EDID_LENGTH); 1842 1851 1843 1852 if (connector->bad_edid_counter++ && !drm_debug_enabled(DRM_UT_KMS)) 1844 1853 return;
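The drm_edid hunk treats edid[0x7e] (the extension-block count) as the index of the last 128-byte block and only computes its checksum when that index lies within the blocks actually read. A standalone sketch of the guarded access (the helper here just sums the block's bytes, since a valid EDID block sums to 0 mod 256; the exact return convention of the kernel's drm_edid_block_checksum() isn't reproduced):

```c
#include <assert.h>
#include <stdint.h>

#define EDID_LENGTH 128

/* A valid EDID block's 128 bytes sum to 0 mod 256. */
static uint8_t edid_block_sum(const uint8_t *block)
{
	uint8_t sum = 0;
	int i;

	for (i = 0; i < EDID_LENGTH; i++)
		sum += block[i];
	return sum;
}

/*
 * Guarded access from the hunk: edid[0x7e] counts extension blocks,
 * so it is also the index of the last block in the combined buffer;
 * dereference it only when it lies inside what was actually read.
 */
static int last_block_sum(const uint8_t *edid, int num_blocks,
			  uint8_t *out)
{
	uint8_t last_block = edid[0x7e];

	if (last_block >= num_blocks)
		return -1;	/* truncated read: don't run past the end */
	*out = edid_block_sum(edid + last_block * EDID_LENGTH);
	return 0;
}
```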
+6
drivers/gpu/drm/drm_fb_helper.c
··· 1506 1506 { 1507 1507 struct drm_client_dev *client = &fb_helper->client; 1508 1508 struct drm_device *dev = fb_helper->dev; 1509 + struct drm_mode_config *config = &dev->mode_config; 1509 1510 int ret = 0; 1510 1511 int crtc_count = 0; 1511 1512 struct drm_connector_list_iter conn_iter; ··· 1664 1663 /* Handle our overallocation */ 1665 1664 sizes.surface_height *= drm_fbdev_overalloc; 1666 1665 sizes.surface_height /= 100; 1666 + if (sizes.surface_height > config->max_height) { 1667 + drm_dbg_kms(dev, "Fbdev over-allocation too large; clamping height to %d\n", 1668 + config->max_height); 1669 + sizes.surface_height = config->max_height; 1670 + } 1667 1671 1668 1672 /* push down into drivers */ 1669 1673 ret = (*fb_helper->funcs->fb_probe)(fb_helper, &sizes);
+1
drivers/gpu/drm/hyperv/hyperv_drm.h
··· 46 46 int hyperv_update_vram_location(struct hv_device *hdev, phys_addr_t vram_pp); 47 47 int hyperv_update_situation(struct hv_device *hdev, u8 active, u32 bpp, 48 48 u32 w, u32 h, u32 pitch); 49 + int hyperv_hide_hw_ptr(struct hv_device *hdev); 49 50 int hyperv_update_dirt(struct hv_device *hdev, struct drm_rect *rect); 50 51 int hyperv_connect_vsp(struct hv_device *hdev); 51 52
+1
drivers/gpu/drm/hyperv/hyperv_drm_modeset.c
··· 101 101 struct hyperv_drm_device *hv = to_hv(pipe->crtc.dev); 102 102 struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state); 103 103 104 + hyperv_hide_hw_ptr(hv->hdev); 104 105 hyperv_update_situation(hv->hdev, 1, hv->screen_depth, 105 106 crtc_state->mode.hdisplay, 106 107 crtc_state->mode.vdisplay,
+53 -1
drivers/gpu/drm/hyperv/hyperv_drm_proto.c
··· 299 299 return 0; 300 300 } 301 301 302 + /* 303 + * Hyper-V supports a hardware cursor feature. It's not used by Linux VM, 304 + * but the Hyper-V host still draws a point as an extra mouse pointer, 305 + * which is unwanted, especially when Xorg is running. 306 + * 307 + * The hyperv_fb driver uses synthvid_send_ptr() to hide the unwanted 308 + * pointer, by setting msg.ptr_pos.is_visible = 1 and setting the 309 + * msg.ptr_shape.data. Note: setting msg.ptr_pos.is_visible to 0 doesn't 310 + * work in tests. 311 + * 312 + * Copy synthvid_send_ptr() to hyperv_drm and rename it to 313 + * hyperv_hide_hw_ptr(). Note: hyperv_hide_hw_ptr() is also called in the 314 + * handler of the SYNTHVID_FEATURE_CHANGE event, otherwise the host still 315 + * draws an extra unwanted mouse pointer after the VM Connection window is 316 + * closed and reopened. 317 + */ 318 + int hyperv_hide_hw_ptr(struct hv_device *hdev) 319 + { 320 + struct synthvid_msg msg; 321 + 322 + memset(&msg, 0, sizeof(struct synthvid_msg)); 323 + msg.vid_hdr.type = SYNTHVID_POINTER_POSITION; 324 + msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) + 325 + sizeof(struct synthvid_pointer_position); 326 + msg.ptr_pos.is_visible = 1; 327 + msg.ptr_pos.video_output = 0; 328 + msg.ptr_pos.image_x = 0; 329 + msg.ptr_pos.image_y = 0; 330 + hyperv_sendpacket(hdev, &msg); 331 + 332 + memset(&msg, 0, sizeof(struct synthvid_msg)); 333 + msg.vid_hdr.type = SYNTHVID_POINTER_SHAPE; 334 + msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) + 335 + sizeof(struct synthvid_pointer_shape); 336 + msg.ptr_shape.part_idx = SYNTHVID_CURSOR_COMPLETE; 337 + msg.ptr_shape.is_argb = 1; 338 + msg.ptr_shape.width = 1; 339 + msg.ptr_shape.height = 1; 340 + msg.ptr_shape.hot_x = 0; 341 + msg.ptr_shape.hot_y = 0; 342 + msg.ptr_shape.data[0] = 0; 343 + msg.ptr_shape.data[1] = 1; 344 + msg.ptr_shape.data[2] = 1; 345 + msg.ptr_shape.data[3] = 1; 346 + hyperv_sendpacket(hdev, &msg); 347 + 348 + return 0; 349 + } 350 + 302 351 int 
hyperv_update_dirt(struct hv_device *hdev, struct drm_rect *rect) 303 352 { 304 353 struct hyperv_drm_device *hv = hv_get_drvdata(hdev); ··· 441 392 return; 442 393 } 443 394 444 - if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE) 395 + if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE) { 445 396 hv->dirt_needed = msg->feature_chg.is_dirt_needed; 397 + if (hv->dirt_needed) 398 + hyperv_hide_hw_ptr(hv->hdev); 399 + } 446 400 } 447 401 448 402 static void hyperv_receive(void *ctx)
+5 -2
drivers/gpu/drm/i915/display/intel_acpi.c
··· 186 186 { 187 187 struct pci_dev *pdev = to_pci_dev(i915->drm.dev); 188 188 acpi_handle dhandle; 189 + union acpi_object *obj; 189 190 190 191 dhandle = ACPI_HANDLE(&pdev->dev); 191 192 if (!dhandle) 192 193 return; 193 194 194 - acpi_evaluate_dsm(dhandle, &intel_dsm_guid2, INTEL_DSM_REVISION_ID, 195 - INTEL_DSM_FN_GET_BIOS_DATA_FUNCS_SUPPORTED, NULL); 195 + obj = acpi_evaluate_dsm(dhandle, &intel_dsm_guid2, INTEL_DSM_REVISION_ID, 196 + INTEL_DSM_FN_GET_BIOS_DATA_FUNCS_SUPPORTED, NULL); 197 + if (obj) 198 + ACPI_FREE(obj); 196 199 } 197 200 198 201 /*
+4 -1
drivers/gpu/drm/i915/gem/i915_gem_context.c
··· 937 937 unsigned int n; 938 938 939 939 e = alloc_engines(num_engines); 940 + if (!e) 941 + return ERR_PTR(-ENOMEM); 942 + e->num_engines = num_engines; 943 + 940 944 for (n = 0; n < num_engines; n++) { 941 945 struct intel_context *ce; 942 946 int ret; ··· 974 970 goto free_engines; 975 971 } 976 972 } 977 - e->num_engines = num_engines; 978 973 979 974 return e; 980 975
+1
drivers/gpu/drm/i915/gt/intel_context.c
··· 421 421 422 422 mutex_destroy(&ce->pin_mutex); 423 423 i915_active_fini(&ce->active); 424 + i915_sw_fence_fini(&ce->guc_blocked); 424 425 } 425 426 426 427 void i915_context_module_exit(void)
+24 -133
drivers/gpu/drm/mediatek/mtk_drm_crtc.c
··· 4 4 */ 5 5 6 6 #include <linux/clk.h> 7 - #include <linux/dma-mapping.h> 8 - #include <linux/mailbox_controller.h> 9 7 #include <linux/pm_runtime.h> 10 8 #include <linux/soc/mediatek/mtk-cmdq.h> 11 9 #include <linux/soc/mediatek/mtk-mmsys.h> ··· 50 52 bool pending_async_planes; 51 53 52 54 #if IS_REACHABLE(CONFIG_MTK_CMDQ) 53 - struct mbox_client cmdq_cl; 54 - struct mbox_chan *cmdq_chan; 55 - struct cmdq_pkt cmdq_handle; 55 + struct cmdq_client *cmdq_client; 56 56 u32 cmdq_event; 57 - u32 cmdq_vblank_cnt; 58 57 #endif 59 58 60 59 struct device *mmsys_dev; ··· 222 227 } 223 228 224 229 #if IS_REACHABLE(CONFIG_MTK_CMDQ) 225 - static int mtk_drm_cmdq_pkt_create(struct mbox_chan *chan, struct cmdq_pkt *pkt, 226 - size_t size) 230 + static void ddp_cmdq_cb(struct cmdq_cb_data data) 227 231 { 228 - struct device *dev; 229 - dma_addr_t dma_addr; 230 - 231 - pkt->va_base = kzalloc(size, GFP_KERNEL); 232 - if (!pkt->va_base) { 233 - kfree(pkt); 234 - return -ENOMEM; 235 - } 236 - pkt->buf_size = size; 237 - 238 - dev = chan->mbox->dev; 239 - dma_addr = dma_map_single(dev, pkt->va_base, pkt->buf_size, 240 - DMA_TO_DEVICE); 241 - if (dma_mapping_error(dev, dma_addr)) { 242 - dev_err(dev, "dma map failed, size=%u\n", (u32)(u64)size); 243 - kfree(pkt->va_base); 244 - kfree(pkt); 245 - return -ENOMEM; 246 - } 247 - 248 - pkt->pa_base = dma_addr; 249 - 250 - return 0; 251 - } 252 - 253 - static void mtk_drm_cmdq_pkt_destroy(struct mbox_chan *chan, struct cmdq_pkt *pkt) 254 - { 255 - dma_unmap_single(chan->mbox->dev, pkt->pa_base, pkt->buf_size, 256 - DMA_TO_DEVICE); 257 - kfree(pkt->va_base); 258 - kfree(pkt); 259 - } 260 - 261 - static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg) 262 - { 263 - struct mtk_drm_crtc *mtk_crtc = container_of(cl, struct mtk_drm_crtc, cmdq_cl); 264 - struct cmdq_cb_data *data = mssg; 265 - struct mtk_crtc_state *state; 266 - unsigned int i; 267 - 268 - state = to_mtk_crtc_state(mtk_crtc->base.state); 269 - 270 - state->pending_config = 
false; 271 - 272 - if (mtk_crtc->pending_planes) { 273 - for (i = 0; i < mtk_crtc->layer_nr; i++) { 274 - struct drm_plane *plane = &mtk_crtc->planes[i]; 275 - struct mtk_plane_state *plane_state; 276 - 277 - plane_state = to_mtk_plane_state(plane->state); 278 - 279 - plane_state->pending.config = false; 280 - } 281 - mtk_crtc->pending_planes = false; 282 - } 283 - 284 - if (mtk_crtc->pending_async_planes) { 285 - for (i = 0; i < mtk_crtc->layer_nr; i++) { 286 - struct drm_plane *plane = &mtk_crtc->planes[i]; 287 - struct mtk_plane_state *plane_state; 288 - 289 - plane_state = to_mtk_plane_state(plane->state); 290 - 291 - plane_state->pending.async_config = false; 292 - } 293 - mtk_crtc->pending_async_planes = false; 294 - } 295 - 296 - mtk_crtc->cmdq_vblank_cnt = 0; 297 - mtk_drm_cmdq_pkt_destroy(mtk_crtc->cmdq_chan, data->pkt); 232 + cmdq_pkt_destroy(data.data); 298 233 } 299 234 #endif 300 235 ··· 378 453 state->pending_vrefresh, 0, 379 454 cmdq_handle); 380 455 381 - if (!cmdq_handle) 382 - state->pending_config = false; 456 + state->pending_config = false; 383 457 } 384 458 385 459 if (mtk_crtc->pending_planes) { ··· 398 474 mtk_ddp_comp_layer_config(comp, local_layer, 399 475 plane_state, 400 476 cmdq_handle); 401 - if (!cmdq_handle) 402 - plane_state->pending.config = false; 477 + plane_state->pending.config = false; 403 478 } 404 - 405 - if (!cmdq_handle) 406 - mtk_crtc->pending_planes = false; 479 + mtk_crtc->pending_planes = false; 407 480 } 408 481 409 482 if (mtk_crtc->pending_async_planes) { ··· 420 499 mtk_ddp_comp_layer_config(comp, local_layer, 421 500 plane_state, 422 501 cmdq_handle); 423 - if (!cmdq_handle) 424 - plane_state->pending.async_config = false; 502 + plane_state->pending.async_config = false; 425 503 } 426 - 427 - if (!cmdq_handle) 428 - mtk_crtc->pending_async_planes = false; 504 + mtk_crtc->pending_async_planes = false; 429 505 } 430 506 } 431 507 ··· 430 512 bool needs_vblank) 431 513 { 432 514 #if IS_REACHABLE(CONFIG_MTK_CMDQ) 433 
- struct cmdq_pkt *cmdq_handle = &mtk_crtc->cmdq_handle; 515 + struct cmdq_pkt *cmdq_handle; 434 516 #endif 435 517 struct drm_crtc *crtc = &mtk_crtc->base; 436 518 struct mtk_drm_private *priv = crtc->dev->dev_private; ··· 468 550 mtk_mutex_release(mtk_crtc->mutex); 469 551 } 470 552 #if IS_REACHABLE(CONFIG_MTK_CMDQ) 471 - if (mtk_crtc->cmdq_chan) { 472 - mbox_flush(mtk_crtc->cmdq_chan, 2000); 473 - cmdq_handle->cmd_buf_size = 0; 553 + if (mtk_crtc->cmdq_client) { 554 + mbox_flush(mtk_crtc->cmdq_client->chan, 2000); 555 + cmdq_handle = cmdq_pkt_create(mtk_crtc->cmdq_client, PAGE_SIZE); 474 556 cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event); 475 557 cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event, false); 476 558 mtk_crtc_ddp_config(crtc, cmdq_handle); 477 559 cmdq_pkt_finalize(cmdq_handle); 478 - dma_sync_single_for_device(mtk_crtc->cmdq_chan->mbox->dev, 479 - cmdq_handle->pa_base, 480 - cmdq_handle->cmd_buf_size, 481 - DMA_TO_DEVICE); 482 - /* 483 - * CMDQ command should execute in next vblank, 484 - * If it fail to execute in next 2 vblank, timeout happen. 
485 - */ 486 - mtk_crtc->cmdq_vblank_cnt = 2; 487 - mbox_send_message(mtk_crtc->cmdq_chan, cmdq_handle); 488 - mbox_client_txdone(mtk_crtc->cmdq_chan, 0); 560 + cmdq_pkt_flush_async(cmdq_handle, ddp_cmdq_cb, cmdq_handle); 489 561 } 490 562 #endif 491 563 mtk_crtc->config_updating = false; ··· 489 581 struct mtk_drm_private *priv = crtc->dev->dev_private; 490 582 491 583 #if IS_REACHABLE(CONFIG_MTK_CMDQ) 492 - if (!priv->data->shadow_register && !mtk_crtc->cmdq_chan) 493 - mtk_crtc_ddp_config(crtc, NULL); 494 - else if (mtk_crtc->cmdq_vblank_cnt > 0 && --mtk_crtc->cmdq_vblank_cnt == 0) 495 - DRM_ERROR("mtk_crtc %d CMDQ execute command timeout!\n", 496 - drm_crtc_index(&mtk_crtc->base)); 584 + if (!priv->data->shadow_register && !mtk_crtc->cmdq_client) 497 585 #else 498 586 if (!priv->data->shadow_register) 499 - mtk_crtc_ddp_config(crtc, NULL); 500 587 #endif 588 + mtk_crtc_ddp_config(crtc, NULL); 589 + 501 590 mtk_drm_finish_page_flip(mtk_crtc); 502 591 } 503 592 ··· 829 924 mutex_init(&mtk_crtc->hw_lock); 830 925 831 926 #if IS_REACHABLE(CONFIG_MTK_CMDQ) 832 - mtk_crtc->cmdq_cl.dev = mtk_crtc->mmsys_dev; 833 - mtk_crtc->cmdq_cl.tx_block = false; 834 - mtk_crtc->cmdq_cl.knows_txdone = true; 835 - mtk_crtc->cmdq_cl.rx_callback = ddp_cmdq_cb; 836 - mtk_crtc->cmdq_chan = 837 - mbox_request_channel(&mtk_crtc->cmdq_cl, 838 - drm_crtc_index(&mtk_crtc->base)); 839 - if (IS_ERR(mtk_crtc->cmdq_chan)) { 927 + mtk_crtc->cmdq_client = 928 + cmdq_mbox_create(mtk_crtc->mmsys_dev, 929 + drm_crtc_index(&mtk_crtc->base)); 930 + if (IS_ERR(mtk_crtc->cmdq_client)) { 840 931 dev_dbg(dev, "mtk_crtc %d failed to create mailbox client, writing register by CPU now\n", 841 932 drm_crtc_index(&mtk_crtc->base)); 842 - mtk_crtc->cmdq_chan = NULL; 933 + mtk_crtc->cmdq_client = NULL; 843 934 } 844 935 845 - if (mtk_crtc->cmdq_chan) { 936 + if (mtk_crtc->cmdq_client) { 846 937 ret = of_property_read_u32_index(priv->mutex_node, 847 938 "mediatek,gce-events", 848 939 
drm_crtc_index(&mtk_crtc->base), ··· 846 945 if (ret) { 847 946 dev_dbg(dev, "mtk_crtc %d failed to get mediatek,gce-events property\n", 848 947 drm_crtc_index(&mtk_crtc->base)); 849 - mbox_free_channel(mtk_crtc->cmdq_chan); 850 - mtk_crtc->cmdq_chan = NULL; 851 - } else { 852 - ret = mtk_drm_cmdq_pkt_create(mtk_crtc->cmdq_chan, 853 - &mtk_crtc->cmdq_handle, 854 - PAGE_SIZE); 855 - if (ret) { 856 - dev_dbg(dev, "mtk_crtc %d failed to create cmdq packet\n", 857 - drm_crtc_index(&mtk_crtc->base)); 858 - mbox_free_channel(mtk_crtc->cmdq_chan); 859 - mtk_crtc->cmdq_chan = NULL; 860 - } 948 + cmdq_mbox_destroy(mtk_crtc->cmdq_client); 949 + mtk_crtc->cmdq_client = NULL; 861 950 } 862 951 } 863 952 #endif
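The mtk_drm_crtc.c change above reverts from an open-coded mailbox client back to the CMDQ helper API: allocate a fresh packet per flush with `cmdq_pkt_create()`, submit it with `cmdq_pkt_flush_async()`, and destroy it from the completion callback. A minimal userspace sketch of that per-flush packet lifecycle (all types and helper names here are stand-ins, not the real mtk-cmdq API):

```c
#include <assert.h>
#include <stdlib.h>

struct cmdq_pkt { size_t buf_size; };

static int pkts_destroyed;

static struct cmdq_pkt *pkt_create(size_t size)
{
    struct cmdq_pkt *pkt = malloc(sizeof(*pkt));
    pkt->buf_size = size;
    return pkt;
}

static void pkt_destroy(struct cmdq_pkt *pkt)
{
    pkts_destroyed++;
    free(pkt);
}

/* Mirrors ddp_cmdq_cb(): the completion callback only tears down the
 * packet it was handed as callback data. */
static void flush_done_cb(void *data)
{
    pkt_destroy(data);
}

static void flush_frame(void)
{
    struct cmdq_pkt *pkt = pkt_create(4096);
    /* ...append clear-event / wait-for-event / config commands here... */
    flush_done_cb(pkt);   /* in the driver this fires after the GCE runs it */
}
```

Each flush owns exactly one packet, so no state (buffer offsets, vblank counters) has to be reset between frames.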
+5 -4
drivers/gpu/drm/msm/adreno/a3xx_gpu.c
··· 571 571 } 572 572 573 573 icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem"); 574 - ret = IS_ERR(icc_path); 575 - if (ret) 574 + if (IS_ERR(icc_path)) { 575 + ret = PTR_ERR(icc_path); 576 576 goto fail; 577 + } 577 578 578 579 ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem"); 579 - ret = IS_ERR(ocmem_icc_path); 580 - if (ret) { 580 + if (IS_ERR(ocmem_icc_path)) { 581 + ret = PTR_ERR(ocmem_icc_path); 581 582 /* allow -ENODATA, ocmem icc is optional */ 582 583 if (ret != -ENODATA) 583 584 goto fail;
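The a3xx fix above (and the identical a4xx one below) corrects a classic error-pointer mix-up: `ret = IS_ERR(icc_path)` stores the boolean 1 instead of the encoded errno, so the later `ret != -ENODATA` check can never match. A minimal userspace sketch of the convention (a simplified reimplementation, not the real `<linux/err.h>`):

```c
#include <assert.h>

#define MAX_ERRNO 4095

/* Error pointers encode a negative errno in the top page of the address
 * space, so one return value can carry either a pointer or an error. */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Buggy shape: stores the boolean result of the test. */
static long buggy_ret(const void *p) { return IS_ERR(p); }

/* Fixed shape: recovers the encoded errno from the pointer. */
static long fixed_ret(const void *p) { return IS_ERR(p) ? PTR_ERR(p) : 0; }
```

With `-ENODATA` (-61) encoded in the pointer, the buggy form yields 1 while the fixed form yields -61, which is what the "ocmem icc is optional" check needs.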
+5 -4
drivers/gpu/drm/msm/adreno/a4xx_gpu.c
··· 699 699 } 700 700 701 701 icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem"); 702 - ret = IS_ERR(icc_path); 703 - if (ret) 702 + if (IS_ERR(icc_path)) { 703 + ret = PTR_ERR(icc_path); 704 704 goto fail; 705 + } 705 706 706 707 ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem"); 707 - ret = IS_ERR(ocmem_icc_path); 708 - if (ret) { 708 + if (IS_ERR(ocmem_icc_path)) { 709 + ret = PTR_ERR(ocmem_icc_path); 709 710 /* allow -ENODATA, ocmem icc is optional */ 710 711 if (ret != -ENODATA) 711 712 goto fail;
+6
drivers/gpu/drm/msm/adreno/a6xx_gmu.c
··· 296 296 u32 val; 297 297 int request, ack; 298 298 299 + WARN_ON_ONCE(!mutex_is_locked(&gmu->lock)); 300 + 299 301 if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits)) 300 302 return -EINVAL; 301 303 ··· 338 336 void a6xx_gmu_clear_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state) 339 337 { 340 338 int bit; 339 + 340 + WARN_ON_ONCE(!mutex_is_locked(&gmu->lock)); 341 341 342 342 if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits)) 343 343 return; ··· 1485 1481 1486 1482 if (!pdev) 1487 1483 return -ENODEV; 1484 + 1485 + mutex_init(&gmu->lock); 1488 1486 1489 1487 gmu->dev = &pdev->dev; 1490 1488
+3
drivers/gpu/drm/msm/adreno/a6xx_gmu.h
··· 44 44 struct a6xx_gmu { 45 45 struct device *dev; 46 46 47 + /* For serializing communication with the GMU: */ 48 + struct mutex lock; 49 + 47 50 struct msm_gem_address_space *aspace; 48 51 49 52 void * __iomem mmio;
+37 -9
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 106 106 u32 asid; 107 107 u64 memptr = rbmemptr(ring, ttbr0); 108 108 109 - if (ctx == a6xx_gpu->cur_ctx) 109 + if (ctx->seqno == a6xx_gpu->cur_ctx_seqno) 110 110 return; 111 111 112 112 if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid)) ··· 139 139 OUT_PKT7(ring, CP_EVENT_WRITE, 1); 140 140 OUT_RING(ring, 0x31); 141 141 142 - a6xx_gpu->cur_ctx = ctx; 142 + a6xx_gpu->cur_ctx_seqno = ctx->seqno; 143 143 } 144 144 145 145 static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) ··· 881 881 A6XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \ 882 882 A6XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR) 883 883 884 - static int a6xx_hw_init(struct msm_gpu *gpu) 884 + static int hw_init(struct msm_gpu *gpu) 885 885 { 886 886 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 887 887 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); ··· 1081 1081 /* Always come up on rb 0 */ 1082 1082 a6xx_gpu->cur_ring = gpu->rb[0]; 1083 1083 1084 - a6xx_gpu->cur_ctx = NULL; 1084 + a6xx_gpu->cur_ctx_seqno = 0; 1085 1085 1086 1086 /* Enable the SQE_to start the CP engine */ 1087 1087 gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1); ··· 1131 1131 /* Take the GMU out of its special boot mode */ 1132 1132 a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_BOOT_SLUMBER); 1133 1133 } 1134 + 1135 + return ret; 1136 + } 1137 + 1138 + static int a6xx_hw_init(struct msm_gpu *gpu) 1139 + { 1140 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 1141 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 1142 + int ret; 1143 + 1144 + mutex_lock(&a6xx_gpu->gmu.lock); 1145 + ret = hw_init(gpu); 1146 + mutex_unlock(&a6xx_gpu->gmu.lock); 1134 1147 1135 1148 return ret; 1136 1149 } ··· 1522 1509 1523 1510 trace_msm_gpu_resume(0); 1524 1511 1512 + mutex_lock(&a6xx_gpu->gmu.lock); 1525 1513 ret = a6xx_gmu_resume(a6xx_gpu); 1514 + mutex_unlock(&a6xx_gpu->gmu.lock); 1526 1515 if (ret) 1527 1516 return ret; 1528 1517 ··· 1547 1532 1548 1533 msm_devfreq_suspend(gpu); 1549 1534 1535 + 
mutex_lock(&a6xx_gpu->gmu.lock); 1550 1536 ret = a6xx_gmu_stop(a6xx_gpu); 1537 + mutex_unlock(&a6xx_gpu->gmu.lock); 1551 1538 if (ret) 1552 1539 return ret; 1553 1540 ··· 1564 1547 { 1565 1548 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 1566 1549 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 1567 - static DEFINE_MUTEX(perfcounter_oob); 1568 1550 1569 - mutex_lock(&perfcounter_oob); 1551 + mutex_lock(&a6xx_gpu->gmu.lock); 1570 1552 1571 1553 /* Force the GPU power on so we can read this register */ 1572 1554 a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET); 1573 1555 1574 1556 *value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO, 1575 - REG_A6XX_CP_ALWAYS_ON_COUNTER_HI); 1557 + REG_A6XX_CP_ALWAYS_ON_COUNTER_HI); 1576 1558 1577 1559 a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET); 1578 - mutex_unlock(&perfcounter_oob); 1560 + 1561 + mutex_unlock(&a6xx_gpu->gmu.lock); 1562 + 1579 1563 return 0; 1580 1564 } 1581 1565 ··· 1638 1620 return ~0LU; 1639 1621 1640 1622 return (unsigned long)busy_time; 1623 + } 1624 + 1625 + void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp) 1626 + { 1627 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 1628 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 1629 + 1630 + mutex_lock(&a6xx_gpu->gmu.lock); 1631 + a6xx_gmu_set_freq(gpu, opp); 1632 + mutex_unlock(&a6xx_gpu->gmu.lock); 1641 1633 } 1642 1634 1643 1635 static struct msm_gem_address_space * ··· 1794 1766 #endif 1795 1767 .gpu_busy = a6xx_gpu_busy, 1796 1768 .gpu_get_freq = a6xx_gmu_get_freq, 1797 - .gpu_set_freq = a6xx_gmu_set_freq, 1769 + .gpu_set_freq = a6xx_gpu_set_freq, 1798 1770 #if defined(CONFIG_DRM_MSM_GPU_STATE) 1799 1771 .gpu_state_get = a6xx_gpu_state_get, 1800 1772 .gpu_state_put = a6xx_gpu_state_put,
+10 -1
drivers/gpu/drm/msm/adreno/a6xx_gpu.h
··· 19 19 uint64_t sqe_iova; 20 20 21 21 struct msm_ringbuffer *cur_ring; 22 - struct msm_file_private *cur_ctx; 22 + 23 + /** 24 + * cur_ctx_seqno: 25 + * 26 + * The ctx->seqno value of the context with current pgtables 27 + * installed. Tracked by seqno rather than pointer value to 28 + * avoid dangling pointers, and cases where a ctx can be freed 29 + * and a new one created with the same address. 30 + */ 31 + int cur_ctx_seqno; 23 32 24 33 struct a6xx_gmu gmu; 25 34
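As the new kerneldoc above explains, `cur_ctx` is replaced by `cur_ctx_seqno` because a freed context's address can be recycled for a new context, letting a pointer compare silently match the wrong context (the classic ABA problem). A sketch of the seqno pattern (names illustrative):

```c
#include <assert.h>

/* Each new context gets a monotonically increasing seqno; comparing
 * seqnos instead of pointers can never confuse a recycled allocation
 * with its predecessor. */
struct file_ctx { int seqno; };

static int next_seqno;

static void ctx_init(struct file_ctx *ctx)
{
    ctx->seqno = ++next_seqno;
}

/* Mirrors the check in a6xx_set_pagetable(): skip the pagetable switch
 * only when the installed tables belong to this exact context generation. */
static int same_ctx(int cur_ctx_seqno, const struct file_ctx *ctx)
{
    return ctx->seqno == cur_ctx_seqno;
}
```

Seeding the tracker with 0 (as `a6xx_hw_init()` does) is safe because no real context ever receives seqno 0.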
+1 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
··· 794 794 DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30), 795 795 -1), 796 796 PP_BLK("pingpong_5", PINGPONG_5, 0x72800, MERGE_3D_2, sdm845_pp_sblk, 797 - DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30), 797 + DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31), 798 798 -1), 799 799 }; 800 800
+16
drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
··· 1125 1125 __drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base); 1126 1126 } 1127 1127 1128 + static const struct drm_crtc_funcs mdp5_crtc_no_lm_cursor_funcs = { 1129 + .set_config = drm_atomic_helper_set_config, 1130 + .destroy = mdp5_crtc_destroy, 1131 + .page_flip = drm_atomic_helper_page_flip, 1132 + .reset = mdp5_crtc_reset, 1133 + .atomic_duplicate_state = mdp5_crtc_duplicate_state, 1134 + .atomic_destroy_state = mdp5_crtc_destroy_state, 1135 + .atomic_print_state = mdp5_crtc_atomic_print_state, 1136 + .get_vblank_counter = mdp5_crtc_get_vblank_counter, 1137 + .enable_vblank = msm_crtc_enable_vblank, 1138 + .disable_vblank = msm_crtc_disable_vblank, 1139 + .get_vblank_timestamp = drm_crtc_vblank_helper_get_vblank_timestamp, 1140 + }; 1141 + 1128 1142 static const struct drm_crtc_funcs mdp5_crtc_funcs = { 1129 1143 .set_config = drm_atomic_helper_set_config, 1130 1144 .destroy = mdp5_crtc_destroy, ··· 1327 1313 mdp5_crtc->lm_cursor_enabled = cursor_plane ? false : true; 1328 1314 1329 1315 drm_crtc_init_with_planes(dev, crtc, plane, cursor_plane, 1316 + cursor_plane ? 1317 + &mdp5_crtc_no_lm_cursor_funcs : 1330 1318 &mdp5_crtc_funcs, NULL); 1331 1319 1332 1320 drm_flip_work_init(&mdp5_crtc->unref_cursor_work,
+5 -5
drivers/gpu/drm/msm/dp/dp_display.c
··· 1309 1309 * can not declared display is connected unless 1310 1310 * HDMI cable is plugged in and sink_count of 1311 1311 * dongle become 1 1312 + * also only signal audio when disconnected 1312 1313 */ 1313 - if (dp->link->sink_count) 1314 + if (dp->link->sink_count) { 1314 1315 dp->dp_display.is_connected = true; 1315 - else 1316 + } else { 1316 1317 dp->dp_display.is_connected = false; 1317 - 1318 - dp_display_handle_plugged_change(g_dp_display, 1319 - dp->dp_display.is_connected); 1318 + dp_display_handle_plugged_change(g_dp_display, false); 1319 + } 1320 1320 1321 1321 DRM_DEBUG_DP("After, sink_count=%d is_connected=%d core_inited=%d power_on=%d\n", 1322 1322 dp->link->sink_count, dp->dp_display.is_connected,
+3 -1
drivers/gpu/drm/msm/dsi/dsi.c
··· 215 215 goto fail; 216 216 } 217 217 218 - if (!msm_dsi_manager_validate_current_config(msm_dsi->id)) 218 + if (!msm_dsi_manager_validate_current_config(msm_dsi->id)) { 219 + ret = -EINVAL; 219 220 goto fail; 221 + } 220 222 221 223 msm_dsi->encoder = encoder; 222 224
+1 -1
drivers/gpu/drm/msm/dsi/dsi_host.c
··· 451 451 452 452 return 0; 453 453 err: 454 - for (; i > 0; i--) 454 + while (--i >= 0) 455 455 clk_disable_unprepare(msm_host->bus_clks[i]); 456 456 457 457 return ret;
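The dsi_host.c one-liner fixes an error-unwind loop: `for (; i > 0; i--)` disables `bus_clks[i]` (the clock that just failed to enable) and never reaches `bus_clks[0]`, while `while (--i >= 0)` walks exactly the clocks that were enabled. A small sketch with stand-in enable/disable bookkeeping:

```c
#include <assert.h>
#include <string.h>

#define NCLKS 4
static int enabled[NCLKS];

static int clk_enable_stub(int i, int fail_at)
{
    if (i == fail_at)
        return -1;
    enabled[i] = 1;
    return 0;
}

static int enable_all(int fail_at)
{
    int i;

    memset(enabled, 0, sizeof(enabled));
    for (i = 0; i < NCLKS; i++)
        if (clk_enable_stub(i, fail_at))
            goto err;
    return 0;
err:
    /* Correct unwind: indices i-1 down to 0, exactly the ones enabled.
     * The buggy `for (; i > 0; i--)` would touch index i (never enabled)
     * and skip index 0. */
    while (--i >= 0)
        enabled[i] = 0;
    return -1;
}
```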
+15 -15
drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
··· 110 110 static bool pll_14nm_poll_for_ready(struct dsi_pll_14nm *pll_14nm, 111 111 u32 nb_tries, u32 timeout_us) 112 112 { 113 - bool pll_locked = false; 113 + bool pll_locked = false, pll_ready = false; 114 114 void __iomem *base = pll_14nm->phy->pll_base; 115 115 u32 tries, val; 116 116 117 117 tries = nb_tries; 118 118 while (tries--) { 119 - val = dsi_phy_read(base + 120 - REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS); 119 + val = dsi_phy_read(base + REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS); 121 120 pll_locked = !!(val & BIT(5)); 122 121 123 122 if (pll_locked) ··· 125 126 udelay(timeout_us); 126 127 } 127 128 128 - if (!pll_locked) { 129 - tries = nb_tries; 130 - while (tries--) { 131 - val = dsi_phy_read(base + 132 - REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS); 133 - pll_locked = !!(val & BIT(0)); 129 + if (!pll_locked) 130 + goto out; 134 131 135 - if (pll_locked) 136 - break; 132 + tries = nb_tries; 133 + while (tries--) { 134 + val = dsi_phy_read(base + REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS); 135 + pll_ready = !!(val & BIT(0)); 137 136 138 - udelay(timeout_us); 139 - } 137 + if (pll_ready) 138 + break; 139 + 140 + udelay(timeout_us); 140 141 } 141 142 142 - DBG("DSI PLL is %slocked", pll_locked ? "" : "*not* "); 143 + out: 144 + DBG("DSI PLL is %slocked, %sready", pll_locked ? "" : "*not* ", pll_ready ? "" : "*not* "); 143 145 144 - return pll_locked; 146 + return pll_locked && pll_ready; 145 147 } 146 148 147 149 static void dsi_pll_14nm_config_init(struct dsi_pll_config *pconf)
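The 14nm PHY rework above separates the two status conditions: BIT(5) reports lock and BIT(0) reports ready, and the function now succeeds only when both are observed, instead of reusing one `pll_locked` flag for both polls. A compact userspace sketch of the two-stage poll (the register read is a stand-in):

```c
#include <assert.h>

static unsigned int fake_status;   /* stand-in for RESET_SM_READY_STATUS */

static unsigned int reg_read(void)
{
    return fake_status;
}

/* Poll for lock (BIT(5)) first; only once locked, poll for ready
 * (BIT(0)). Report success only when both bits were seen. */
static int poll_locked_and_ready(int tries)
{
    int locked = 0, ready = 0, t;

    for (t = 0; t < tries && !locked; t++)
        locked = !!(reg_read() & (1u << 5));
    if (!locked)
        return 0;

    for (t = 0; t < tries && !ready; t++)
        ready = !!(reg_read() & (1u << 0));

    return locked && ready;
}
```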
+2 -2
drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c
··· 428 428 bytediv->reg = pll_28nm->phy->pll_base + REG_DSI_28nm_8960_PHY_PLL_CTRL_9; 429 429 430 430 snprintf(parent_name, 32, "dsi%dvco_clk", pll_28nm->phy->id); 431 - snprintf(clk_name, 32, "dsi%dpllbyte", pll_28nm->phy->id); 431 + snprintf(clk_name, 32, "dsi%dpllbyte", pll_28nm->phy->id + 1); 432 432 433 433 bytediv_init.name = clk_name; 434 434 bytediv_init.ops = &clk_bytediv_ops; ··· 442 442 return ret; 443 443 provided_clocks[DSI_BYTE_PLL_CLK] = &bytediv->hw; 444 444 445 - snprintf(clk_name, 32, "dsi%dpll", pll_28nm->phy->id); 445 + snprintf(clk_name, 32, "dsi%dpll", pll_28nm->phy->id + 1); 446 446 /* DIV3 */ 447 447 hw = devm_clk_hw_register_divider(dev, clk_name, 448 448 parent_name, 0, pll_28nm->phy->pll_base +
+2 -1
drivers/gpu/drm/msm/edp/edp_ctrl.c
··· 1116 1116 int msm_edp_ctrl_init(struct msm_edp *edp) 1117 1117 { 1118 1118 struct edp_ctrl *ctrl = NULL; 1119 - struct device *dev = &edp->pdev->dev; 1119 + struct device *dev; 1120 1120 int ret; 1121 1121 1122 1122 if (!edp) { ··· 1124 1124 return -EINVAL; 1125 1125 } 1126 1126 1127 + dev = &edp->pdev->dev; 1127 1128 ctrl = devm_kzalloc(dev, sizeof(*ctrl), GFP_KERNEL); 1128 1129 if (!ctrl) 1129 1130 return -ENOMEM;
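The edp_ctrl.c fix is purely about ordering: the initializer `&edp->pdev->dev` dereferenced `edp` before the `if (!edp)` check could run. A minimal sketch of the corrected shape (types here are stand-ins):

```c
#include <assert.h>
#include <stddef.h>

struct platform_device_stub { int dev; };
struct edp_stub { struct platform_device_stub *pdev; };

static int ctrl_init(struct edp_stub *edp)
{
    struct platform_device_stub *pdev;  /* declaration only: no dereference */

    if (!edp)
        return -22;                     /* -EINVAL */

    pdev = edp->pdev;                   /* safe: edp validated above */
    (void)pdev;
    return 0;
}
```

Initializers in C run before any statement in the function body, so a NULL check after such an initializer comes too late.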
+11 -4
drivers/gpu/drm/msm/msm_drv.c
··· 630 630 if (ret) 631 631 goto err_msm_uninit; 632 632 633 - ret = msm_disp_snapshot_init(ddev); 634 - if (ret) 635 - DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret); 636 - 633 + if (kms) { 634 + ret = msm_disp_snapshot_init(ddev); 635 + if (ret) 636 + DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret); 637 + } 637 638 drm_mode_config_reset(ddev); 638 639 639 640 #ifdef CONFIG_DRM_FBDEV_EMULATION ··· 683 682 684 683 static int context_init(struct drm_device *dev, struct drm_file *file) 685 684 { 685 + static atomic_t ident = ATOMIC_INIT(0); 686 686 struct msm_drm_private *priv = dev->dev_private; 687 687 struct msm_file_private *ctx; 688 688 ··· 691 689 if (!ctx) 692 690 return -ENOMEM; 693 691 692 + INIT_LIST_HEAD(&ctx->submitqueues); 693 + rwlock_init(&ctx->queuelock); 694 + 694 695 kref_init(&ctx->ref); 695 696 msm_submitqueue_init(dev, ctx); 696 697 697 698 ctx->aspace = msm_gpu_create_private_address_space(priv->gpu, current); 698 699 file->driver_priv = ctx; 700 + 701 + ctx->seqno = atomic_inc_return(&ident); 699 702 700 703 return 0; 701 704 }
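`context_init()` above now hands every file context a unique seqno from a static atomic counter, which is what the `cur_ctx_seqno` tracking in a6xx_gpu.c consumes. The pattern, sketched with C11 atomics in place of the kernel's `atomic_t`:

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_int ident;   /* stand-in for `static atomic_t ident` */

/* Mirrors atomic_inc_return(): increment and return the new value, so
 * the first context gets seqno 1 and zero is never handed out. */
static int context_assign_seqno(void)
{
    return atomic_fetch_add(&ident, 1) + 1;
}
```

Because the counter only ever increases, no two live or dead contexts can share a seqno, unlike recycled pointer values.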
+2 -45
drivers/gpu/drm/msm/msm_drv.h
··· 53 53 54 54 #define FRAC_16_16(mult, div) (((mult) << 16) / (div)) 55 55 56 - struct msm_file_private { 57 - rwlock_t queuelock; 58 - struct list_head submitqueues; 59 - int queueid; 60 - struct msm_gem_address_space *aspace; 61 - struct kref ref; 62 - }; 63 - 64 56 enum msm_mdp_plane_property { 65 57 PLANE_PROP_ZPOS, 66 58 PLANE_PROP_ALPHA, ··· 480 488 u32 msm_readl(const void __iomem *addr); 481 489 void msm_rmw(void __iomem *addr, u32 mask, u32 or); 482 490 483 - struct msm_gpu_submitqueue; 484 - int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx); 485 - struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx, 486 - u32 id); 487 - int msm_submitqueue_create(struct drm_device *drm, 488 - struct msm_file_private *ctx, 489 - u32 prio, u32 flags, u32 *id); 490 - int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx, 491 - struct drm_msm_submitqueue_query *args); 492 - int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id); 493 - void msm_submitqueue_close(struct msm_file_private *ctx); 494 - 495 - void msm_submitqueue_destroy(struct kref *kref); 496 - 497 - static inline void __msm_file_private_destroy(struct kref *kref) 498 - { 499 - struct msm_file_private *ctx = container_of(kref, 500 - struct msm_file_private, ref); 501 - 502 - msm_gem_address_space_put(ctx->aspace); 503 - kfree(ctx); 504 - } 505 - 506 - static inline void msm_file_private_put(struct msm_file_private *ctx) 507 - { 508 - kref_put(&ctx->ref, __msm_file_private_destroy); 509 - } 510 - 511 - static inline struct msm_file_private *msm_file_private_get( 512 - struct msm_file_private *ctx) 513 - { 514 - kref_get(&ctx->ref); 515 - return ctx; 516 - } 517 - 518 491 #define DBG(fmt, ...) DRM_DEBUG_DRIVER(fmt"\n", ##__VA_ARGS__) 519 492 #define VERB(fmt, ...) 
if (0) DRM_DEBUG_DRIVER(fmt"\n", ##__VA_ARGS__) 520 493 ··· 504 547 static inline unsigned long timeout_to_jiffies(const ktime_t *timeout) 505 548 { 506 549 ktime_t now = ktime_get(); 507 - unsigned long remaining_jiffies; 550 + s64 remaining_jiffies; 508 551 509 552 if (ktime_compare(*timeout, now) < 0) { 510 553 remaining_jiffies = 0; ··· 513 556 remaining_jiffies = ktime_divns(rem, NSEC_PER_SEC / HZ); 514 557 } 515 558 516 - return remaining_jiffies; 559 + return clamp(remaining_jiffies, 0LL, (s64)INT_MAX); 517 560 } 518 561 519 562 #endif /* __MSM_DRV_H__ */
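The `timeout_to_jiffies()` change above switches the intermediate from `unsigned long` to `s64` and clamps the result, so a far-future absolute timeout cannot wrap once truncated to the `int` that some wait APIs take. A sketch of the arithmetic with illustrative names:

```c
#include <assert.h>
#include <stdint.h>
#include <limits.h>

#define NSEC_PER_SEC 1000000000LL

static int64_t clamp_s64(int64_t v, int64_t lo, int64_t hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Keep the intermediate signed and 64-bit, then clamp to [0, INT_MAX]
 * before anything narrows it. */
static long timeout_ns_to_jiffies(int64_t remaining_ns, long hz)
{
    int64_t remaining_jiffies = remaining_ns / (NSEC_PER_SEC / hz);

    return (long)clamp_s64(remaining_jiffies, 0, INT_MAX);
}
```

An already-expired timeout clamps to 0 and an absurdly large one to INT_MAX, matching the kernel's `clamp(remaining_jiffies, 0LL, (s64)INT_MAX)`.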
+4 -3
drivers/gpu/drm/msm/msm_gem_submit.c
··· 46 46 if (!submit) 47 47 return ERR_PTR(-ENOMEM); 48 48 49 - ret = drm_sched_job_init(&submit->base, &queue->entity, queue); 49 + ret = drm_sched_job_init(&submit->base, queue->entity, queue); 50 50 if (ret) { 51 51 kfree(submit); 52 52 return ERR_PTR(ret); ··· 171 171 static int submit_lookup_cmds(struct msm_gem_submit *submit, 172 172 struct drm_msm_gem_submit *args, struct drm_file *file) 173 173 { 174 - unsigned i, sz; 174 + unsigned i; 175 + size_t sz; 175 176 int ret = 0; 176 177 177 178 for (i = 0; i < args->nr_cmds; i++) { ··· 908 907 /* The scheduler owns a ref now: */ 909 908 msm_gem_submit_get(submit); 910 909 911 - drm_sched_entity_push_job(&submit->base, &queue->entity); 910 + drm_sched_entity_push_job(&submit->base, queue->entity); 912 911 913 912 args->fence = submit->fence_id; 914 913
+64 -2
drivers/gpu/drm/msm/msm_gpu.h
··· 258 258 #define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_MIN) 259 259 260 260 /** 261 + * struct msm_file_private - per-drm_file context 262 + * 263 + * @queuelock: synchronizes access to submitqueues list 264 + * @submitqueues: list of &msm_gpu_submitqueue created by userspace 265 + * @queueid: counter incremented each time a submitqueue is created, 266 + * used to assign &msm_gpu_submitqueue.id 267 + * @aspace: the per-process GPU address-space 268 + * @ref: reference count 269 + * @seqno: unique per process seqno 270 + */ 271 + struct msm_file_private { 272 + rwlock_t queuelock; 273 + struct list_head submitqueues; 274 + int queueid; 275 + struct msm_gem_address_space *aspace; 276 + struct kref ref; 277 + int seqno; 278 + 279 + /** 280 + * entities: 281 + * 282 + * Table of per-priority-level sched entities used by submitqueues 283 + * associated with this &drm_file. Because some userspace apps 284 + * make assumptions about rendering from multiple gl contexts 285 + * (of the same priority) within the process happening in FIFO 286 + * order without requiring any fencing beyond MakeCurrent(), we 287 + * create at most one &drm_sched_entity per-process per-priority- 288 + * level. 289 + */ 290 + struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS]; 291 + }; 292 + 293 + /** 261 294 * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority 262 295 * 263 296 * @gpu: the gpu instance ··· 337 304 } 338 305 339 306 /** 307 + * struct msm_gpu_submitqueues - Userspace created context. 308 + * 340 309 * A submitqueue is associated with a gl context or vk queue (or equiv) 341 310 * in userspace. 
342 311 * ··· 356 321 * seqno, protected by submitqueue lock 357 322 * @lock: submitqueue lock 358 323 * @ref: reference count 359 - * @entity: the submit job-queue 324 + * @entity: the submit job-queue 360 325 */ 361 326 struct msm_gpu_submitqueue { 362 327 int id; ··· 368 333 struct idr fence_idr; 369 334 struct mutex lock; 370 335 struct kref ref; 371 - struct drm_sched_entity entity; 336 + struct drm_sched_entity *entity; 372 337 }; 373 338 374 339 struct msm_gpu_state_bo { ··· 455 420 456 421 int msm_gpu_pm_suspend(struct msm_gpu *gpu); 457 422 int msm_gpu_pm_resume(struct msm_gpu *gpu); 423 + 424 + int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx); 425 + struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx, 426 + u32 id); 427 + int msm_submitqueue_create(struct drm_device *drm, 428 + struct msm_file_private *ctx, 429 + u32 prio, u32 flags, u32 *id); 430 + int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx, 431 + struct drm_msm_submitqueue_query *args); 432 + int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id); 433 + void msm_submitqueue_close(struct msm_file_private *ctx); 434 + 435 + void msm_submitqueue_destroy(struct kref *kref); 436 + 437 + void __msm_file_private_destroy(struct kref *kref); 438 + 439 + static inline void msm_file_private_put(struct msm_file_private *ctx) 440 + { 441 + kref_put(&ctx->ref, __msm_file_private_destroy); 442 + } 443 + 444 + static inline struct msm_file_private *msm_file_private_get( 445 + struct msm_file_private *ctx) 446 + { 447 + kref_get(&ctx->ref); 448 + return ctx; 449 + } 458 450 459 451 void msm_devfreq_init(struct msm_gpu *gpu); 460 452 void msm_devfreq_cleanup(struct msm_gpu *gpu);
+6
drivers/gpu/drm/msm/msm_gpu_devfreq.c
··· 151 151 unsigned int idle_time; 152 152 unsigned long target_freq = df->idle_freq; 153 153 154 + if (!df->devfreq) 155 + return; 156 + 154 157 /* 155 158 * Hold devfreq lock to synchronize with get_dev_status()/ 156 159 * target() callbacks ··· 188 185 { 189 186 struct msm_gpu_devfreq *df = &gpu->devfreq; 190 187 unsigned long idle_freq, target_freq = 0; 188 + 189 + if (!df->devfreq) 190 + return; 191 191 192 192 /* 193 193 * Hold devfreq lock to synchronize with get_dev_status()/
+58 -14
drivers/gpu/drm/msm/msm_submitqueue.c
··· 7 7 8 8 #include "msm_gpu.h" 9 9 10 + void __msm_file_private_destroy(struct kref *kref) 11 + { 12 + struct msm_file_private *ctx = container_of(kref, 13 + struct msm_file_private, ref); 14 + int i; 15 + 16 + for (i = 0; i < ARRAY_SIZE(ctx->entities); i++) { 17 + if (!ctx->entities[i]) 18 + continue; 19 + 20 + drm_sched_entity_destroy(ctx->entities[i]); 21 + kfree(ctx->entities[i]); 22 + } 23 + 24 + msm_gem_address_space_put(ctx->aspace); 25 + kfree(ctx); 26 + } 27 + 10 28 void msm_submitqueue_destroy(struct kref *kref) 11 29 { 12 30 struct msm_gpu_submitqueue *queue = container_of(kref, 13 31 struct msm_gpu_submitqueue, ref); 14 32 15 33 idr_destroy(&queue->fence_idr); 16 - 17 - drm_sched_entity_destroy(&queue->entity); 18 34 19 35 msm_file_private_put(queue->ctx); 20 36 ··· 77 61 } 78 62 } 79 63 64 + static struct drm_sched_entity * 65 + get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring, 66 + unsigned ring_nr, enum drm_sched_priority sched_prio) 67 + { 68 + static DEFINE_MUTEX(entity_lock); 69 + unsigned idx = (ring_nr * NR_SCHED_PRIORITIES) + sched_prio; 70 + 71 + /* We should have already validated that the requested priority is 72 + * valid by the time we get here. 
73 + */ 74 + if (WARN_ON(idx >= ARRAY_SIZE(ctx->entities))) 75 + return ERR_PTR(-EINVAL); 76 + 77 + mutex_lock(&entity_lock); 78 + 79 + if (!ctx->entities[idx]) { 80 + struct drm_sched_entity *entity; 81 + struct drm_gpu_scheduler *sched = &ring->sched; 82 + int ret; 83 + 84 + entity = kzalloc(sizeof(*ctx->entities[idx]), GFP_KERNEL); 85 + 86 + ret = drm_sched_entity_init(entity, sched_prio, &sched, 1, NULL); 87 + if (ret) { 88 + kfree(entity); 89 + return ERR_PTR(ret); 90 + } 91 + 92 + ctx->entities[idx] = entity; 93 + } 94 + 95 + mutex_unlock(&entity_lock); 96 + 97 + return ctx->entities[idx]; 98 + } 99 + 80 100 int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx, 81 101 u32 prio, u32 flags, u32 *id) 82 102 { 83 103 struct msm_drm_private *priv = drm->dev_private; 84 104 struct msm_gpu_submitqueue *queue; 85 - struct msm_ringbuffer *ring; 86 - struct drm_gpu_scheduler *sched; 87 105 enum drm_sched_priority sched_prio; 88 106 unsigned ring_nr; 89 107 int ret; ··· 141 91 queue->flags = flags; 142 92 queue->ring_nr = ring_nr; 143 93 144 - ring = priv->gpu->rb[ring_nr]; 145 - sched = &ring->sched; 146 - 147 - ret = drm_sched_entity_init(&queue->entity, 148 - sched_prio, &sched, 1, NULL); 149 - if (ret) { 94 + queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr], 95 + ring_nr, sched_prio); 96 + if (IS_ERR(queue->entity)) { 97 + ret = PTR_ERR(queue->entity); 150 98 kfree(queue); 151 99 return ret; 152 100 } ··· 187 139 * than the middle priority level. 188 140 */ 189 141 default_prio = DIV_ROUND_UP(max_priority, 2); 190 - 191 - INIT_LIST_HEAD(&ctx->submitqueues); 192 - 193 - rwlock_init(&ctx->queuelock); 194 142 195 143 return msm_submitqueue_create(drm, ctx, default_prio, 0, NULL); 196 144 }
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/fifo/chang84.c
··· 82 82 if (offset < 0) 83 83 return 0; 84 84 85 - engn = fifo->base.func->engine_id(&fifo->base, engine); 85 + engn = fifo->base.func->engine_id(&fifo->base, engine) - 1; 86 86 save = nvkm_mask(device, 0x002520, 0x0000003f, 1 << engn); 87 87 nvkm_wr32(device, 0x0032fc, chan->base.inst->addr >> 12); 88 88 done = nvkm_msec(device, 2000,
+1
drivers/gpu/drm/panel/Kconfig
··· 295 295 depends on OF 296 296 depends on I2C 297 297 depends on BACKLIGHT_CLASS_DEVICE 298 + select CRC32 298 299 help 299 300 The panel is used with different sizes LCDs, from 480x272 to 300 301 1280x800, and 24 bit per pixel.
+1 -1
drivers/gpu/drm/r128/ati_pcigart.c
··· 214 214 } 215 215 ret = 0; 216 216 217 - #if defined(__i386__) || defined(__x86_64__) 217 + #ifdef CONFIG_X86 218 218 wbinvd(); 219 219 #else 220 220 mb();
+12 -4
drivers/gpu/drm/rcar-du/rcar_du_encoder.c
··· 86 86 } 87 87 88 88 /* 89 - * Create and initialize the encoder. On Gen3 skip the LVDS1 output if 89 + * Create and initialize the encoder. On Gen3, skip the LVDS1 output if 90 90 * the LVDS1 encoder is used as a companion for LVDS0 in dual-link 91 - * mode. 91 + * mode, or any LVDS output if it isn't connected. The latter may happen 92 + * on D3 or E3 as the LVDS encoders are needed to provide the pixel 93 + * clock to the DU, even when the LVDS outputs are not used. 92 94 */ 93 - if (rcdu->info->gen >= 3 && output == RCAR_DU_OUTPUT_LVDS1) { 94 - if (rcar_lvds_dual_link(bridge)) 95 + if (rcdu->info->gen >= 3) { 96 + if (output == RCAR_DU_OUTPUT_LVDS1 && 97 + rcar_lvds_dual_link(bridge)) 98 + return -ENOLINK; 99 + 100 + if ((output == RCAR_DU_OUTPUT_LVDS0 || 101 + output == RCAR_DU_OUTPUT_LVDS1) && 102 + !rcar_lvds_is_connected(bridge)) 95 103 return -ENOLINK; 96 104 } 97 105
+11
drivers/gpu/drm/rcar-du/rcar_lvds.c
··· 576 576 { 577 577 struct rcar_lvds *lvds = bridge_to_rcar_lvds(bridge); 578 578 579 + if (!lvds->next_bridge) 580 + return 0; 581 + 579 582 return drm_bridge_attach(bridge->encoder, lvds->next_bridge, bridge, 580 583 flags); 581 584 } ··· 600 597 return lvds->link_type != RCAR_LVDS_SINGLE_LINK; 601 598 } 602 599 EXPORT_SYMBOL_GPL(rcar_lvds_dual_link); 600 + 601 + bool rcar_lvds_is_connected(struct drm_bridge *bridge) 602 + { 603 + struct rcar_lvds *lvds = bridge_to_rcar_lvds(bridge); 604 + 605 + return lvds->next_bridge != NULL; 606 + } 607 + EXPORT_SYMBOL_GPL(rcar_lvds_is_connected); 603 608 604 609 /* ----------------------------------------------------------------------------- 605 610 * Probe & Remove
+5
drivers/gpu/drm/rcar-du/rcar_lvds.h
··· 16 16 int rcar_lvds_clk_enable(struct drm_bridge *bridge, unsigned long freq); 17 17 void rcar_lvds_clk_disable(struct drm_bridge *bridge); 18 18 bool rcar_lvds_dual_link(struct drm_bridge *bridge); 19 + bool rcar_lvds_is_connected(struct drm_bridge *bridge); 19 20 #else 20 21 static inline int rcar_lvds_clk_enable(struct drm_bridge *bridge, 21 22 unsigned long freq) ··· 25 24 } 26 25 static inline void rcar_lvds_clk_disable(struct drm_bridge *bridge) { } 27 26 static inline bool rcar_lvds_dual_link(struct drm_bridge *bridge) 27 + { 28 + return false; 29 + } 30 + static inline bool rcar_lvds_is_connected(struct drm_bridge *bridge) 28 31 { 29 32 return false; 30 33 }
+1 -1
drivers/iio/accel/fxls8962af-core.c
··· 738 738 739 739 if (reg & FXLS8962AF_INT_STATUS_SRC_BUF) { 740 740 ret = fxls8962af_fifo_flush(indio_dev); 741 - if (ret) 741 + if (ret < 0) 742 742 return IRQ_NONE; 743 743 744 744 return IRQ_HANDLED;
+1
drivers/iio/adc/ad7192.c
··· 293 293 .has_registers = true, 294 294 .addr_shift = 3, 295 295 .read_mask = BIT(6), 296 + .irq_flags = IRQF_TRIGGER_FALLING, 296 297 }; 297 298 298 299 static const struct ad_sd_calib_data ad7192_calib_arr[8] = {
+1 -1
drivers/iio/adc/ad7780.c
··· 203 203 .set_mode = ad7780_set_mode, 204 204 .postprocess_sample = ad7780_postprocess_sample, 205 205 .has_registers = false, 206 - .irq_flags = IRQF_TRIGGER_LOW, 206 + .irq_flags = IRQF_TRIGGER_FALLING, 207 207 }; 208 208 209 209 #define _AD7780_CHANNEL(_bits, _wordsize, _mask_all) \
+1 -1
drivers/iio/adc/ad7793.c
··· 206 206 .has_registers = true, 207 207 .addr_shift = 3, 208 208 .read_mask = BIT(6), 209 - .irq_flags = IRQF_TRIGGER_LOW, 209 + .irq_flags = IRQF_TRIGGER_FALLING, 210 210 }; 211 211 212 212 static const struct ad_sd_calib_data ad7793_calib_arr[6] = {
+1
drivers/iio/adc/aspeed_adc.c
··· 183 183 184 184 data = iio_priv(indio_dev); 185 185 data->dev = &pdev->dev; 186 + platform_set_drvdata(pdev, indio_dev); 186 187 187 188 data->base = devm_platform_ioremap_resource(pdev, 0); 188 189 if (IS_ERR(data->base))
+1 -2
drivers/iio/adc/max1027.c
··· 103 103 .sign = 'u', \ 104 104 .realbits = depth, \ 105 105 .storagebits = 16, \ 106 - .shift = 2, \ 106 + .shift = (depth == 10) ? 2 : 0, \ 107 107 .endianness = IIO_BE, \ 108 108 }, \ 109 109 } ··· 142 142 MAX1027_V_CHAN(11, depth) 143 143 144 144 #define MAX1X31_CHANNELS(depth) \ 145 - MAX1X27_CHANNELS(depth), \ 146 145 MAX1X29_CHANNELS(depth), \ 147 146 MAX1027_V_CHAN(12, depth), \ 148 147 MAX1027_V_CHAN(13, depth), \
+8
drivers/iio/adc/mt6577_auxadc.c
··· 82 82 MT6577_AUXADC_CHANNEL(15), 83 83 }; 84 84 85 + /* For Voltage calculation */ 86 + #define VOLTAGE_FULL_RANGE 1500 /* VA voltage */ 87 + #define AUXADC_PRECISE 4096 /* 12 bits */ 88 + 85 89 static int mt_auxadc_get_cali_data(int rawdata, bool enable_cali) 86 90 { 87 91 return rawdata; ··· 195 191 } 196 192 if (adc_dev->dev_comp->sample_data_cali) 197 193 *val = mt_auxadc_get_cali_data(*val, true); 194 + 195 + /* Convert adc raw data to voltage: 0 - 1500 mV */ 196 + *val = *val * VOLTAGE_FULL_RANGE / AUXADC_PRECISE; 197 + 198 198 return IIO_VAL_INT; 199 199 200 200 default:
+4 -2
drivers/iio/adc/rzg2l_adc.c
··· 401 401 exit_hw_init: 402 402 clk_disable_unprepare(adc->pclk); 403 403 404 - return 0; 404 + return ret; 405 405 } 406 406 407 407 static void rzg2l_adc_pm_runtime_disable(void *data) ··· 570 570 return ret; 571 571 572 572 ret = clk_prepare_enable(adc->adclk); 573 - if (ret) 573 + if (ret) { 574 + clk_disable_unprepare(adc->pclk); 574 575 return ret; 576 + } 575 577 576 578 rzg2l_adc_pwr(adc, true); 577 579
+6
drivers/iio/adc/ti-adc128s052.c
··· 171 171 mutex_init(&adc->lock); 172 172 173 173 ret = iio_device_register(indio_dev); 174 + if (ret) 175 + goto err_disable_regulator; 174 176 177 + return 0; 178 + 179 + err_disable_regulator: 180 + regulator_disable(adc->reg); 175 181 return ret; 176 182 } 177 183
+9 -2
drivers/iio/common/ssp_sensors/ssp_spi.c
··· 137 137 if (length > received_len - *data_index || length <= 0) { 138 138 ssp_dbg("[SSP]: MSG From MCU-invalid debug length(%d/%d)\n", 139 139 length, received_len); 140 - return length ? length : -EPROTO; 140 + return -EPROTO; 141 141 } 142 142 143 143 ssp_dbg("[SSP]: MSG From MCU - %s\n", &data_frame[*data_index]); ··· 273 273 for (idx = 0; idx < len;) { 274 274 switch (dataframe[idx++]) { 275 275 case SSP_MSG2AP_INST_BYPASS_DATA: 276 + if (idx >= len) 277 + return -EPROTO; 276 278 sd = dataframe[idx++]; 277 279 if (sd < 0 || sd >= SSP_SENSOR_MAX) { 278 280 dev_err(SSP_DEV, ··· 284 282 285 283 if (indio_devs[sd]) { 286 284 spd = iio_priv(indio_devs[sd]); 287 - if (spd->process_data) 285 + if (spd->process_data) { 286 + if (idx >= len) 287 + return -EPROTO; 288 288 spd->process_data(indio_devs[sd], 289 289 &dataframe[idx], 290 290 data->timestamp); 291 + } 291 292 } else { 292 293 dev_err(SSP_DEV, "no client for frame\n"); 293 294 } ··· 298 293 idx += ssp_offset_map[sd]; 299 294 break; 300 295 case SSP_MSG2AP_INST_DEBUG_DATA: 296 + if (idx >= len) 297 + return -EPROTO; 301 298 sd = ssp_print_mcu_debug(dataframe, &idx, len); 302 299 if (sd) { 303 300 dev_err(SSP_DEV,
+1
drivers/iio/dac/ti-dac5571.c
··· 350 350 data->dac5571_pwrdwn = dac5571_pwrdwn_quad; 351 351 break; 352 352 default: 353 + ret = -EINVAL; 353 354 goto err; 354 355 } 355 356
+2 -1
drivers/iio/imu/adis16475.c
··· 353 353 if (dec > st->info->max_dec) 354 354 dec = st->info->max_dec; 355 355 356 - ret = adis_write_reg_16(&st->adis, ADIS16475_REG_DEC_RATE, dec); 356 + ret = __adis_write_reg_16(&st->adis, ADIS16475_REG_DEC_RATE, dec); 357 357 if (ret) 358 358 goto error; 359 359 360 + adis_dev_unlock(&st->adis); 360 361 /* 361 362 * If decimation is used, then gyro and accel data will have meaningful 362 363 * bits on the LSB registers. This info is used on the trigger handler.
+11 -3
drivers/iio/imu/adis16480.c
··· 144 144 unsigned int max_dec_rate; 145 145 const unsigned int *filter_freqs; 146 146 bool has_pps_clk_mode; 147 + bool has_sleep_cnt; 147 148 const struct adis_data adis_data; 148 149 }; 149 150 ··· 940 939 .temp_scale = 5650, /* 5.65 milli degree Celsius */ 941 940 .int_clk = 2460000, 942 941 .max_dec_rate = 2048, 942 + .has_sleep_cnt = true, 943 943 .filter_freqs = adis16480_def_filter_freqs, 944 944 .adis_data = ADIS16480_DATA(16375, &adis16485_timeouts, 0), 945 945 }, ··· 954 952 .temp_scale = 5650, /* 5.65 milli degree Celsius */ 955 953 .int_clk = 2460000, 956 954 .max_dec_rate = 2048, 955 + .has_sleep_cnt = true, 957 956 .filter_freqs = adis16480_def_filter_freqs, 958 957 .adis_data = ADIS16480_DATA(16480, &adis16480_timeouts, 0), 959 958 }, ··· 968 965 .temp_scale = 5650, /* 5.65 milli degree Celsius */ 969 966 .int_clk = 2460000, 970 967 .max_dec_rate = 2048, 968 + .has_sleep_cnt = true, 971 969 .filter_freqs = adis16480_def_filter_freqs, 972 970 .adis_data = ADIS16480_DATA(16485, &adis16485_timeouts, 0), 973 971 }, ··· 982 978 .temp_scale = 5650, /* 5.65 milli degree Celsius */ 983 979 .int_clk = 2460000, 984 980 .max_dec_rate = 2048, 981 + .has_sleep_cnt = true, 985 982 .filter_freqs = adis16480_def_filter_freqs, 986 983 .adis_data = ADIS16480_DATA(16488, &adis16485_timeouts, 0), 987 984 }, ··· 1430 1425 if (ret) 1431 1426 return ret; 1432 1427 1433 - ret = devm_add_action_or_reset(&spi->dev, adis16480_stop, indio_dev); 1434 - if (ret) 1435 - return ret; 1428 + if (st->chip_info->has_sleep_cnt) { 1429 + ret = devm_add_action_or_reset(&spi->dev, adis16480_stop, 1430 + indio_dev); 1431 + if (ret) 1432 + return ret; 1433 + } 1436 1434 1437 1435 ret = adis16480_config_irq_pin(spi->dev.of_node, st); 1438 1436 if (ret)
+3 -3
drivers/iio/light/opt3001.c
··· 276 276 ret = wait_event_timeout(opt->result_ready_queue, 277 277 opt->result_ready, 278 278 msecs_to_jiffies(OPT3001_RESULT_READY_LONG)); 279 + if (ret == 0) 280 + return -ETIMEDOUT; 279 281 } else { 280 282 /* Sleep for result ready time */ 281 283 timeout = (opt->int_time == OPT3001_INT_TIME_SHORT) ? ··· 314 312 /* Disallow IRQ to access the device while lock is active */ 315 313 opt->ok_to_ignore_lock = false; 316 314 317 - if (ret == 0) 318 - return -ETIMEDOUT; 319 - else if (ret < 0) 315 + if (ret < 0) 320 316 return ret; 321 317 322 318 if (opt->use_irq) {
+1
drivers/iio/test/Makefile
··· 5 5 6 6 # Keep in alphabetical order 7 7 obj-$(CONFIG_IIO_TEST_FORMAT) += iio-test-format.o 8 + CFLAGS_iio-test-format.o += $(DISABLE_STRUCTLEAK_PLUGIN)
+2
drivers/input/joystick/xpad.c
··· 334 334 { 0x24c6, 0x5b03, "Thrustmaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 }, 335 335 { 0x24c6, 0x5d04, "Razer Sabertooth", 0, XTYPE_XBOX360 }, 336 336 { 0x24c6, 0xfafe, "Rock Candy Gamepad for Xbox 360", 0, XTYPE_XBOX360 }, 337 + { 0x3285, 0x0607, "Nacon GC-100", 0, XTYPE_XBOX360 }, 337 338 { 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX }, 338 339 { 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX }, 339 340 { 0x0000, 0x0000, "Generic X-Box pad", 0, XTYPE_UNKNOWN } ··· 452 451 XPAD_XBOXONE_VENDOR(0x24c6), /* PowerA Controllers */ 453 452 XPAD_XBOXONE_VENDOR(0x2e24), /* Hyperkin Duke X-Box One pad */ 454 453 XPAD_XBOX360_VENDOR(0x2f24), /* GameSir Controllers */ 454 + XPAD_XBOX360_VENDOR(0x3285), /* Nacon GC-100 */ 455 455 { } 456 456 }; 457 457
+29
drivers/input/keyboard/snvs_pwrkey.c
··· 3 3 // Driver for the IMX SNVS ON/OFF Power Key 4 4 // Copyright (C) 2015 Freescale Semiconductor, Inc. All Rights Reserved. 5 5 6 + #include <linux/clk.h> 6 7 #include <linux/device.h> 7 8 #include <linux/err.h> 8 9 #include <linux/init.h> ··· 100 99 return IRQ_HANDLED; 101 100 } 102 101 102 + static void imx_snvs_pwrkey_disable_clk(void *data) 103 + { 104 + clk_disable_unprepare(data); 105 + } 106 + 103 107 static void imx_snvs_pwrkey_act(void *pdata) 104 108 { 105 109 struct pwrkey_drv_data *pd = pdata; ··· 117 111 struct pwrkey_drv_data *pdata; 118 112 struct input_dev *input; 119 113 struct device_node *np; 114 + struct clk *clk; 120 115 int error; 121 116 u32 vid; 122 117 ··· 139 132 if (of_property_read_u32(np, "linux,keycode", &pdata->keycode)) { 140 133 pdata->keycode = KEY_POWER; 141 134 dev_warn(&pdev->dev, "KEY_POWER without setting in dts\n"); 135 + } 136 + 137 + clk = devm_clk_get_optional(&pdev->dev, NULL); 138 + if (IS_ERR(clk)) { 139 + dev_err(&pdev->dev, "Failed to get snvs clock (%pe)\n", clk); 140 + return PTR_ERR(clk); 141 + } 142 + 143 + error = clk_prepare_enable(clk); 144 + if (error) { 145 + dev_err(&pdev->dev, "Failed to enable snvs clock (%pe)\n", 146 + ERR_PTR(error)); 147 + return error; 148 + } 149 + 150 + error = devm_add_action_or_reset(&pdev->dev, 151 + imx_snvs_pwrkey_disable_clk, clk); 152 + if (error) { 153 + dev_err(&pdev->dev, 154 + "Failed to register clock cleanup handler (%pe)\n", 155 + ERR_PTR(error)); 156 + return error; 142 157 } 143 158 144 159 pdata->wakeup = of_property_read_bool(np, "wakeup-source");
+21 -21
drivers/input/touchscreen.c
··· 80 80 81 81 data_present = touchscreen_get_prop_u32(dev, "touchscreen-min-x", 82 82 input_abs_get_min(input, axis_x), 83 - &minimum) | 84 - touchscreen_get_prop_u32(dev, "touchscreen-size-x", 85 - input_abs_get_max(input, 86 - axis_x) + 1, 87 - &maximum) | 88 - touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x", 89 - input_abs_get_fuzz(input, axis_x), 90 - &fuzz); 83 + &minimum); 84 + data_present |= touchscreen_get_prop_u32(dev, "touchscreen-size-x", 85 + input_abs_get_max(input, 86 + axis_x) + 1, 87 + &maximum); 88 + data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x", 89 + input_abs_get_fuzz(input, axis_x), 90 + &fuzz); 91 91 if (data_present) 92 92 touchscreen_set_params(input, axis_x, minimum, maximum - 1, fuzz); 93 93 94 94 data_present = touchscreen_get_prop_u32(dev, "touchscreen-min-y", 95 95 input_abs_get_min(input, axis_y), 96 - &minimum) | 97 - touchscreen_get_prop_u32(dev, "touchscreen-size-y", 98 - input_abs_get_max(input, 99 - axis_y) + 1, 100 - &maximum) | 101 - touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y", 102 - input_abs_get_fuzz(input, axis_y), 103 - &fuzz); 96 + &minimum); 97 + data_present |= touchscreen_get_prop_u32(dev, "touchscreen-size-y", 98 + input_abs_get_max(input, 99 + axis_y) + 1, 100 + &maximum); 101 + data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y", 102 + input_abs_get_fuzz(input, axis_y), 103 + &fuzz); 104 104 if (data_present) 105 105 touchscreen_set_params(input, axis_y, minimum, maximum - 1, fuzz); 106 106 ··· 108 108 data_present = touchscreen_get_prop_u32(dev, 109 109 "touchscreen-max-pressure", 110 110 input_abs_get_max(input, axis), 111 - &maximum) | 112 - touchscreen_get_prop_u32(dev, 113 - "touchscreen-fuzz-pressure", 114 - input_abs_get_fuzz(input, axis), 115 - &fuzz); 111 + &maximum); 112 + data_present |= touchscreen_get_prop_u32(dev, 113 + "touchscreen-fuzz-pressure", 114 + input_abs_get_fuzz(input, axis), 115 + &fuzz); 116 116 if (data_present) 117 117 
touchscreen_set_params(input, axis, 0, maximum, fuzz); 118 118
+16 -13
drivers/input/touchscreen/resistive-adc-touch.c
··· 71 71 unsigned int z2 = touch_info[st->ch_map[GRTS_CH_Z2]]; 72 72 unsigned int Rt; 73 73 74 - Rt = z2; 75 - Rt -= z1; 76 - Rt *= st->x_plate_ohms; 77 - Rt = DIV_ROUND_CLOSEST(Rt, 16); 78 - Rt *= x; 79 - Rt /= z1; 80 - Rt = DIV_ROUND_CLOSEST(Rt, 256); 81 - /* 82 - * On increased pressure the resistance (Rt) is decreasing 83 - * so, convert values to make it looks as real pressure. 84 - */ 85 - if (Rt < GRTS_DEFAULT_PRESSURE_MAX) 86 - press = GRTS_DEFAULT_PRESSURE_MAX - Rt; 74 + if (likely(x && z1)) { 75 + Rt = z2; 76 + Rt -= z1; 77 + Rt *= st->x_plate_ohms; 78 + Rt = DIV_ROUND_CLOSEST(Rt, 16); 79 + Rt *= x; 80 + Rt /= z1; 81 + Rt = DIV_ROUND_CLOSEST(Rt, 256); 82 + /* 83 + * On increased pressure the resistance (Rt) is 84 + * decreasing so, convert values to make it looks as 85 + * real pressure. 86 + */ 87 + if (Rt < GRTS_DEFAULT_PRESSURE_MAX) 88 + press = GRTS_DEFAULT_PRESSURE_MAX - Rt; 89 + } 87 90 } 88 91 89 92 if ((!x && !y) || (st->pressure && (press < st->pressure_min))) {
+8
drivers/iommu/Kconfig
··· 355 355 'arm-smmu.disable_bypass' will continue to override this 356 356 config. 357 357 358 + config ARM_SMMU_QCOM 359 + def_tristate y 360 + depends on ARM_SMMU && ARCH_QCOM 361 + select QCOM_SCM 362 + help 363 + When running on a Qualcomm platform that has the custom variant 364 + of the ARM SMMU, this needs to be built into the SMMU driver. 365 + 358 366 config ARM_SMMU_V3 359 367 tristate "ARM Ltd. System MMU Version 3 (SMMUv3) Support" 360 368 depends on ARM64
+5
drivers/isdn/capi/kcapi.c
··· 480 480 481 481 ctr_down(ctr, CAPI_CTR_DETACHED); 482 482 483 + if (ctr->cnr < 1 || ctr->cnr - 1 >= CAPI_MAXCONTR) { 484 + err = -EINVAL; 485 + goto unlock_out; 486 + } 487 + 483 488 if (capi_controller[ctr->cnr - 1] != ctr) { 484 489 err = -EINVAL; 485 490 goto unlock_out;
+1 -1
drivers/isdn/hardware/mISDN/netjet.c
··· 949 949 nj_disable_hwirq(card); 950 950 mode_tiger(&card->bc[0], ISDN_P_NONE); 951 951 mode_tiger(&card->bc[1], ISDN_P_NONE); 952 - card->isac.release(&card->isac); 953 952 spin_unlock_irqrestore(&card->lock, flags); 953 + card->isac.release(&card->isac); 954 954 release_region(card->base, card->base_s); 955 955 card->base_s = 0; 956 956 }
+1 -1
drivers/md/dm-clone-target.c
··· 161 161 162 162 static void __set_clone_mode(struct clone *clone, enum clone_metadata_mode new_mode) 163 163 { 164 - const char *descs[] = { 164 + static const char * const descs[] = { 165 165 "read-write", 166 166 "read-only", 167 167 "fail"
+8
drivers/md/dm-rq.c
··· 490 490 struct mapped_device *md = tio->md; 491 491 struct dm_target *ti = md->immutable_target; 492 492 493 + /* 494 + * blk-mq's unquiesce may come from outside events, such as 495 + * elevator switch, updating nr_requests or others, and request may 496 + * come during suspend, so simply ask for blk-mq to requeue it. 497 + */ 498 + if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags))) 499 + return BLK_STS_RESOURCE; 500 + 493 501 if (unlikely(!ti)) { 494 502 int srcu_idx; 495 503 struct dm_table *map = dm_get_live_table(md, &srcu_idx);
+12 -3
drivers/md/dm-verity-target.c
··· 475 475 struct bvec_iter start; 476 476 unsigned b; 477 477 struct crypto_wait wait; 478 + struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size); 478 479 479 480 for (b = 0; b < io->n_blocks; b++) { 480 481 int r; ··· 530 529 else if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA, 531 530 cur_block, NULL, &start) == 0) 532 531 continue; 533 - else if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA, 534 - cur_block)) 535 - return -EIO; 532 + else { 533 + if (bio->bi_status) { 534 + /* 535 + * Error correction failed; Just return error 536 + */ 537 + return -EIO; 538 + } 539 + if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA, 540 + cur_block)) 541 + return -EIO; 542 + } 536 543 } 537 544 538 545 return 0;
+10 -7
drivers/md/dm.c
··· 496 496 false, 0, &io->stats_aux); 497 497 } 498 498 499 - static void end_io_acct(struct dm_io *io) 499 + static void end_io_acct(struct mapped_device *md, struct bio *bio, 500 + unsigned long start_time, struct dm_stats_aux *stats_aux) 500 501 { 501 - struct mapped_device *md = io->md; 502 - struct bio *bio = io->orig_bio; 503 - unsigned long duration = jiffies - io->start_time; 502 + unsigned long duration = jiffies - start_time; 504 503 505 - bio_end_io_acct(bio, io->start_time); 504 + bio_end_io_acct(bio, start_time); 506 505 507 506 if (unlikely(dm_stats_used(&md->stats))) 508 507 dm_stats_account_io(&md->stats, bio_data_dir(bio), 509 508 bio->bi_iter.bi_sector, bio_sectors(bio), 510 - true, duration, &io->stats_aux); 509 + true, duration, stats_aux); 511 510 512 511 /* nudge anyone waiting on suspend queue */ 513 512 if (unlikely(wq_has_sleeper(&md->wait))) ··· 789 790 blk_status_t io_error; 790 791 struct bio *bio; 791 792 struct mapped_device *md = io->md; 793 + unsigned long start_time = 0; 794 + struct dm_stats_aux stats_aux; 792 795 793 796 /* Push-back supersedes any I/O errors */ 794 797 if (unlikely(error)) { ··· 822 821 } 823 822 824 823 io_error = io->status; 825 - end_io_acct(io); 824 + start_time = io->start_time; 825 + stats_aux = io->stats_aux; 826 826 free_io(md, io); 827 + end_io_acct(md, bio, start_time, &stats_aux); 827 828 828 829 if (io_error == BLK_STS_DM_REQUEUE) 829 830 return;
+1
drivers/misc/Kconfig
··· 224 224 tristate "HiSilicon Hi6421v600 IRQ and powerkey" 225 225 depends on OF 226 226 depends on SPMI 227 + depends on HAS_IOMEM 227 228 select MFD_CORE 228 229 select REGMAP_SPMI 229 230 help
+1 -1
drivers/misc/cb710/sgbuf2.c
··· 47 47 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 48 48 return false; 49 49 #else 50 - return ((ptr - NULL) & 3) != 0; 50 + return ((uintptr_t)ptr & 3) != 0; 51 51 #endif 52 52 } 53 53
+8
drivers/misc/eeprom/at25.c
··· 366 366 }; 367 367 MODULE_DEVICE_TABLE(of, at25_of_match); 368 368 369 + static const struct spi_device_id at25_spi_ids[] = { 370 + { .name = "at25",}, 371 + { .name = "fm25",}, 372 + { } 373 + }; 374 + MODULE_DEVICE_TABLE(spi, at25_spi_ids); 375 + 369 376 static int at25_probe(struct spi_device *spi) 370 377 { 371 378 struct at25_data *at25 = NULL; ··· 498 491 .dev_groups = sernum_groups, 499 492 }, 500 493 .probe = at25_probe, 494 + .id_table = at25_spi_ids, 501 495 }; 502 496 503 497 module_spi_driver(at25_driver);
+18
drivers/misc/eeprom/eeprom_93xx46.c
··· 406 406 }; 407 407 MODULE_DEVICE_TABLE(of, eeprom_93xx46_of_table); 408 408 409 + static const struct spi_device_id eeprom_93xx46_spi_ids[] = { 410 + { .name = "eeprom-93xx46", 411 + .driver_data = (kernel_ulong_t)&at93c46_data, }, 412 + { .name = "at93c46", 413 + .driver_data = (kernel_ulong_t)&at93c46_data, }, 414 + { .name = "at93c46d", 415 + .driver_data = (kernel_ulong_t)&atmel_at93c46d_data, }, 416 + { .name = "at93c56", 417 + .driver_data = (kernel_ulong_t)&at93c56_data, }, 418 + { .name = "at93c66", 419 + .driver_data = (kernel_ulong_t)&at93c66_data, }, 420 + { .name = "93lc46b", 421 + .driver_data = (kernel_ulong_t)&microchip_93lc46b_data, }, 422 + {} 423 + }; 424 + MODULE_DEVICE_TABLE(spi, eeprom_93xx46_spi_ids); 425 + 409 426 static int eeprom_93xx46_probe_dt(struct spi_device *spi) 410 427 { 411 428 const struct of_device_id *of_id = ··· 572 555 }, 573 556 .probe = eeprom_93xx46_probe, 574 557 .remove = eeprom_93xx46_remove, 558 + .id_table = eeprom_93xx46_spi_ids, 575 559 }; 576 560 577 561 module_spi_driver(eeprom_93xx46_driver);
+2
drivers/misc/fastrpc.c
··· 814 814 rpra[i].pv = (u64) ctx->args[i].ptr; 815 815 pages[i].addr = ctx->maps[i]->phys; 816 816 817 + mmap_read_lock(current->mm); 817 818 vma = find_vma(current->mm, ctx->args[i].ptr); 818 819 if (vma) 819 820 pages[i].addr += ctx->args[i].ptr - 820 821 vma->vm_start; 822 + mmap_read_unlock(current->mm); 821 823 822 824 pg_start = (ctx->args[i].ptr & PAGE_MASK) >> PAGE_SHIFT; 823 825 pg_end = ((ctx->args[i].ptr + len - 1) & PAGE_MASK) >>
+1
drivers/misc/gehc-achc.c
··· 539 539 540 540 static const struct spi_device_id gehc_achc_id[] = { 541 541 { "ge,achc", 0 }, 542 + { "achc", 0 }, 542 543 { } 543 544 }; 544 545 MODULE_DEVICE_TABLE(spi, gehc_achc_id);
+19 -14
drivers/misc/habanalabs/common/command_submission.c
··· 2649 2649 free_seq_arr: 2650 2650 kfree(cs_seq_arr); 2651 2651 2652 - /* update output args */ 2653 - memset(args, 0, sizeof(*args)); 2654 2652 if (rc) 2655 2653 return rc; 2654 + 2655 + if (mcs_data.wait_status == -ERESTARTSYS) { 2656 + dev_err_ratelimited(hdev->dev, 2657 + "user process got signal while waiting for Multi-CS\n"); 2658 + return -EINTR; 2659 + } 2660 + 2661 + /* update output args */ 2662 + memset(args, 0, sizeof(*args)); 2656 2663 2657 2664 if (mcs_data.completion_bitmap) { 2658 2665 args->out.status = HL_WAIT_CS_STATUS_COMPLETED; ··· 2674 2667 /* update if some CS was gone */ 2675 2668 if (mcs_data.timestamp) 2676 2669 args->out.flags |= HL_WAIT_CS_STATUS_FLAG_GONE; 2677 - } else if (mcs_data.wait_status == -ERESTARTSYS) { 2678 - args->out.status = HL_WAIT_CS_STATUS_INTERRUPTED; 2679 2670 } else { 2680 2671 args->out.status = HL_WAIT_CS_STATUS_BUSY; 2681 2672 } ··· 2693 2688 rc = _hl_cs_wait_ioctl(hdev, hpriv->ctx, args->in.timeout_us, seq, 2694 2689 &status, &timestamp); 2695 2690 2691 + if (rc == -ERESTARTSYS) { 2692 + dev_err_ratelimited(hdev->dev, 2693 + "user process got signal while waiting for CS handle %llu\n", 2694 + seq); 2695 + return -EINTR; 2696 + } 2697 + 2696 2698 memset(args, 0, sizeof(*args)); 2697 2699 2698 2700 if (rc) { 2699 - if (rc == -ERESTARTSYS) { 2700 - dev_err_ratelimited(hdev->dev, 2701 - "user process got signal while waiting for CS handle %llu\n", 2702 - seq); 2703 - args->out.status = HL_WAIT_CS_STATUS_INTERRUPTED; 2704 - rc = -EINTR; 2705 - } else if (rc == -ETIMEDOUT) { 2701 + if (rc == -ETIMEDOUT) { 2706 2702 dev_err_ratelimited(hdev->dev, 2707 2703 "CS %llu has timed-out while user process is waiting for it\n", 2708 2704 seq); ··· 2829 2823 dev_err_ratelimited(hdev->dev, 2830 2824 "user process got signal while waiting for interrupt ID %d\n", 2831 2825 interrupt->interrupt_id); 2832 - *status = HL_WAIT_CS_STATUS_INTERRUPTED; 2833 2826 rc = -EINTR; 2834 2827 } else { 2835 2828 *status = CS_WAIT_STATUS_BUSY; 
··· 2883 2878 args->in.interrupt_timeout_us, args->in.addr, 2884 2879 args->in.target, interrupt_offset, &status); 2885 2880 2886 - memset(args, 0, sizeof(*args)); 2887 - 2888 2881 if (rc) { 2889 2882 if (rc != -EINTR) 2890 2883 dev_err_ratelimited(hdev->dev, ··· 2890 2887 2891 2888 return rc; 2892 2889 } 2890 + 2891 + memset(args, 0, sizeof(*args)); 2893 2892 2894 2893 switch (status) { 2895 2894 case CS_WAIT_STATUS_COMPLETED:
+8 -4
drivers/misc/mei/hbm.c
··· 1298 1298 1299 1299 if (dev->dev_state != MEI_DEV_INIT_CLIENTS || 1300 1300 dev->hbm_state != MEI_HBM_STARTING) { 1301 - if (dev->dev_state == MEI_DEV_POWER_DOWN) { 1301 + if (dev->dev_state == MEI_DEV_POWER_DOWN || 1302 + dev->dev_state == MEI_DEV_POWERING_DOWN) { 1302 1303 dev_dbg(dev->dev, "hbm: start: on shutdown, ignoring\n"); 1303 1304 return 0; 1304 1305 } ··· 1382 1381 1383 1382 if (dev->dev_state != MEI_DEV_INIT_CLIENTS || 1384 1383 dev->hbm_state != MEI_HBM_DR_SETUP) { 1385 - if (dev->dev_state == MEI_DEV_POWER_DOWN) { 1384 + if (dev->dev_state == MEI_DEV_POWER_DOWN || 1385 + dev->dev_state == MEI_DEV_POWERING_DOWN) { 1386 1386 dev_dbg(dev->dev, "hbm: dma setup response: on shutdown, ignoring\n"); 1387 1387 return 0; 1388 1388 } ··· 1450 1448 1451 1449 if (dev->dev_state != MEI_DEV_INIT_CLIENTS || 1452 1450 dev->hbm_state != MEI_HBM_CLIENT_PROPERTIES) { 1453 - if (dev->dev_state == MEI_DEV_POWER_DOWN) { 1451 + if (dev->dev_state == MEI_DEV_POWER_DOWN || 1452 + dev->dev_state == MEI_DEV_POWERING_DOWN) { 1454 1453 dev_dbg(dev->dev, "hbm: properties response: on shutdown, ignoring\n"); 1455 1454 return 0; 1456 1455 } ··· 1493 1490 1494 1491 if (dev->dev_state != MEI_DEV_INIT_CLIENTS || 1495 1492 dev->hbm_state != MEI_HBM_ENUM_CLIENTS) { 1496 - if (dev->dev_state == MEI_DEV_POWER_DOWN) { 1493 + if (dev->dev_state == MEI_DEV_POWER_DOWN || 1494 + dev->dev_state == MEI_DEV_POWERING_DOWN) { 1497 1495 dev_dbg(dev->dev, "hbm: enumeration response: on shutdown, ignoring\n"); 1498 1496 return 0; 1499 1497 }
+1
drivers/misc/mei/hw-me-regs.h
··· 92 92 #define MEI_DEV_ID_CDF 0x18D3 /* Cedar Fork */ 93 93 94 94 #define MEI_DEV_ID_ICP_LP 0x34E0 /* Ice Lake Point LP */ 95 + #define MEI_DEV_ID_ICP_N 0x38E0 /* Ice Lake Point N */ 95 96 96 97 #define MEI_DEV_ID_JSP_N 0x4DE0 /* Jasper Lake Point N */ 97 98
+1
drivers/misc/mei/pci-me.c
··· 96 96 {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_H_3, MEI_ME_PCH8_ITOUCH_CFG)}, 97 97 98 98 {MEI_PCI_DEVICE(MEI_DEV_ID_ICP_LP, MEI_ME_PCH12_CFG)}, 99 + {MEI_PCI_DEVICE(MEI_DEV_ID_ICP_N, MEI_ME_PCH12_CFG)}, 99 100 100 101 {MEI_PCI_DEVICE(MEI_DEV_ID_TGP_LP, MEI_ME_PCH15_CFG)}, 101 102 {MEI_PCI_DEVICE(MEI_DEV_ID_TGP_H, MEI_ME_PCH15_SPS_CFG)},
+6 -2
drivers/mtd/nand/raw/qcom_nandc.c
··· 1676 1676 struct nand_ecc_ctrl *ecc = &chip->ecc; 1677 1677 int data_size1, data_size2, oob_size1, oob_size2; 1678 1678 int ret, reg_off = FLASH_BUF_ACC, read_loc = 0; 1679 + int raw_cw = cw; 1679 1680 1680 1681 nand_read_page_op(chip, page, 0, NULL, 0); 1681 1682 host->use_ecc = false; 1682 1683 1684 + if (nandc->props->qpic_v2) 1685 + raw_cw = ecc->steps - 1; 1686 + 1683 1687 clear_bam_transaction(nandc); 1684 1688 set_address(host, host->cw_size * cw, page); 1685 - update_rw_regs(host, 1, true, cw); 1689 + update_rw_regs(host, 1, true, raw_cw); 1686 1690 config_nand_page_read(chip); 1687 1691 1688 1692 data_size1 = mtd->writesize - host->cw_size * (ecc->steps - 1); ··· 1715 1711 nandc_set_read_loc(chip, cw, 3, read_loc, oob_size2, 1); 1716 1712 } 1717 1713 1718 - config_nand_cw_read(chip, false, cw); 1714 + config_nand_cw_read(chip, false, raw_cw); 1719 1715 1720 1716 read_data_dma(nandc, reg_off, data_buf, data_size1, 0); 1721 1717 reg_off += data_size1;
+3 -1
drivers/net/dsa/microchip/ksz_common.c
··· 449 449 void ksz_switch_remove(struct ksz_device *dev) 450 450 { 451 451 /* timer started */ 452 - if (dev->mib_read_interval) 452 + if (dev->mib_read_interval) { 453 + dev->mib_read_interval = 0; 453 454 cancel_delayed_work_sync(&dev->mib_read); 455 + } 454 456 455 457 dev->dev_ops->exit(dev); 456 458 dsa_unregister_switch(dev->ds);
+108 -17
drivers/net/dsa/mv88e6xxx/chip.c
··· 12 12 13 13 #include <linux/bitfield.h> 14 14 #include <linux/delay.h> 15 + #include <linux/dsa/mv88e6xxx.h> 15 16 #include <linux/etherdevice.h> 16 17 #include <linux/ethtool.h> 17 18 #include <linux/if_bridge.h> ··· 750 749 ops = chip->info->ops; 751 750 752 751 mv88e6xxx_reg_lock(chip); 753 - if ((!mv88e6xxx_port_ppu_updates(chip, port) || 752 + /* Internal PHYs propagate their configuration directly to the MAC. 753 + * External PHYs depend on whether the PPU is enabled for this port. 754 + */ 755 + if (((!mv88e6xxx_phy_is_internal(ds, port) && 756 + !mv88e6xxx_port_ppu_updates(chip, port)) || 754 757 mode == MLO_AN_FIXED) && ops->port_sync_link) 755 758 err = ops->port_sync_link(chip, port, mode, false); 756 759 mv88e6xxx_reg_unlock(chip); ··· 777 772 ops = chip->info->ops; 778 773 779 774 mv88e6xxx_reg_lock(chip); 780 - if (!mv88e6xxx_port_ppu_updates(chip, port) || mode == MLO_AN_FIXED) { 775 + /* Internal PHYs propagate their configuration directly to the MAC. 776 + * External PHYs depend on whether the PPU is enabled for this port. 
777 + */ 778 + if ((!mv88e6xxx_phy_is_internal(ds, port) && 779 + !mv88e6xxx_port_ppu_updates(chip, port)) || 780 + mode == MLO_AN_FIXED) { 781 781 /* FIXME: for an automedia port, should we force the link 782 782 * down here - what if the link comes up due to "other" media 783 783 * while we're bringing the port up, how is the exclusivity ··· 1687 1677 return 0; 1688 1678 } 1689 1679 1680 + static int mv88e6xxx_port_commit_pvid(struct mv88e6xxx_chip *chip, int port) 1681 + { 1682 + struct dsa_port *dp = dsa_to_port(chip->ds, port); 1683 + struct mv88e6xxx_port *p = &chip->ports[port]; 1684 + u16 pvid = MV88E6XXX_VID_STANDALONE; 1685 + bool drop_untagged = false; 1686 + int err; 1687 + 1688 + if (dp->bridge_dev) { 1689 + if (br_vlan_enabled(dp->bridge_dev)) { 1690 + pvid = p->bridge_pvid.vid; 1691 + drop_untagged = !p->bridge_pvid.valid; 1692 + } else { 1693 + pvid = MV88E6XXX_VID_BRIDGED; 1694 + } 1695 + } 1696 + 1697 + err = mv88e6xxx_port_set_pvid(chip, port, pvid); 1698 + if (err) 1699 + return err; 1700 + 1701 + return mv88e6xxx_port_drop_untagged(chip, port, drop_untagged); 1702 + } 1703 + 1690 1704 static int mv88e6xxx_port_vlan_filtering(struct dsa_switch *ds, int port, 1691 1705 bool vlan_filtering, 1692 1706 struct netlink_ext_ack *extack) ··· 1724 1690 return -EOPNOTSUPP; 1725 1691 1726 1692 mv88e6xxx_reg_lock(chip); 1693 + 1727 1694 err = mv88e6xxx_port_set_8021q_mode(chip, port, mode); 1695 + if (err) 1696 + goto unlock; 1697 + 1698 + err = mv88e6xxx_port_commit_pvid(chip, port); 1699 + if (err) 1700 + goto unlock; 1701 + 1702 + unlock: 1728 1703 mv88e6xxx_reg_unlock(chip); 1729 1704 1730 1705 return err; ··· 1768 1725 u16 fid; 1769 1726 int err; 1770 1727 1771 - /* Null VLAN ID corresponds to the port private database */ 1728 + /* Ports have two private address databases: one for when the port is 1729 + * standalone and one for when the port is under a bridge and the 1730 + * 802.1Q mode is disabled. 
When the port is standalone, DSA wants its 1731 + * address database to remain 100% empty, so we never load an ATU entry 1732 + * into a standalone port's database. Therefore, translate the null 1733 + * VLAN ID into the port's database used for VLAN-unaware bridging. 1734 + */ 1772 1735 if (vid == 0) { 1773 - err = mv88e6xxx_port_get_fid(chip, port, &fid); 1774 - if (err) 1775 - return err; 1736 + fid = MV88E6XXX_FID_BRIDGED; 1776 1737 } else { 1777 1738 err = mv88e6xxx_vtu_get(chip, vid, &vlan); 1778 1739 if (err) ··· 2170 2123 struct mv88e6xxx_chip *chip = ds->priv; 2171 2124 bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED; 2172 2125 bool pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID; 2126 + struct mv88e6xxx_port *p = &chip->ports[port]; 2173 2127 bool warn; 2174 2128 u8 member; 2175 2129 int err; ··· 2204 2156 } 2205 2157 2206 2158 if (pvid) { 2207 - err = mv88e6xxx_port_set_pvid(chip, port, vlan->vid); 2208 - if (err) { 2209 - dev_err(ds->dev, "p%d: failed to set PVID %d\n", 2210 - port, vlan->vid); 2159 + p->bridge_pvid.vid = vlan->vid; 2160 + p->bridge_pvid.valid = true; 2161 + 2162 + err = mv88e6xxx_port_commit_pvid(chip, port); 2163 + if (err) 2211 2164 goto out; 2212 - } 2165 + } else if (vlan->vid && p->bridge_pvid.vid == vlan->vid) { 2166 + /* The old pvid was reinstalled as a non-pvid VLAN */ 2167 + p->bridge_pvid.valid = false; 2168 + 2169 + err = mv88e6xxx_port_commit_pvid(chip, port); 2170 + if (err) 2171 + goto out; 2213 2172 } 2173 + 2214 2174 out: 2215 2175 mv88e6xxx_reg_unlock(chip); 2216 2176 ··· 2268 2212 const struct switchdev_obj_port_vlan *vlan) 2269 2213 { 2270 2214 struct mv88e6xxx_chip *chip = ds->priv; 2215 + struct mv88e6xxx_port *p = &chip->ports[port]; 2271 2216 int err = 0; 2272 2217 u16 pvid; 2273 2218 ··· 2286 2229 goto unlock; 2287 2230 2288 2231 if (vlan->vid == pvid) { 2289 - err = mv88e6xxx_port_set_pvid(chip, port, 0); 2232 + p->bridge_pvid.valid = false; 2233 + 2234 + err = mv88e6xxx_port_commit_pvid(chip, port); 2290 
2235 if (err) 2291 2236 goto unlock; 2292 2237 } ··· 2452 2393 int err; 2453 2394 2454 2395 mv88e6xxx_reg_lock(chip); 2396 + 2455 2397 err = mv88e6xxx_bridge_map(chip, br); 2398 + if (err) 2399 + goto unlock; 2400 + 2401 + err = mv88e6xxx_port_commit_pvid(chip, port); 2402 + if (err) 2403 + goto unlock; 2404 + 2405 + unlock: 2456 2406 mv88e6xxx_reg_unlock(chip); 2457 2407 2458 2408 return err; ··· 2471 2403 struct net_device *br) 2472 2404 { 2473 2405 struct mv88e6xxx_chip *chip = ds->priv; 2406 + int err; 2474 2407 2475 2408 mv88e6xxx_reg_lock(chip); 2409 + 2476 2410 if (mv88e6xxx_bridge_map(chip, br) || 2477 2411 mv88e6xxx_port_vlan_map(chip, port)) 2478 2412 dev_err(ds->dev, "failed to remap in-chip Port VLAN\n"); 2413 + 2414 + err = mv88e6xxx_port_commit_pvid(chip, port); 2415 + if (err) 2416 + dev_err(ds->dev, 2417 + "port %d failed to restore standalone pvid: %pe\n", 2418 + port, ERR_PTR(err)); 2419 + 2479 2420 mv88e6xxx_reg_unlock(chip); 2480 2421 } 2481 2422 ··· 2930 2853 if (err) 2931 2854 return err; 2932 2855 2856 + /* Associate MV88E6XXX_VID_BRIDGED with MV88E6XXX_FID_BRIDGED in the 2857 + * ATU by virtue of the fact that mv88e6xxx_atu_new() will pick it as 2858 + * the first free FID after MV88E6XXX_FID_STANDALONE. This will be used 2859 + * as the private PVID on ports under a VLAN-unaware bridge. 2860 + * Shared (DSA and CPU) ports must also be members of it, to translate 2861 + * the VID from the DSA tag into MV88E6XXX_FID_BRIDGED, instead of 2862 + * relying on their port default FID. 2863 + */ 2864 + err = mv88e6xxx_port_vlan_join(chip, port, MV88E6XXX_VID_BRIDGED, 2865 + MV88E6XXX_G1_VTU_DATA_MEMBER_TAG_UNTAGGED, 2866 + false); 2867 + if (err) 2868 + return err; 2869 + 2933 2870 if (chip->info->ops->port_set_jumbo_size) { 2934 2871 err = chip->info->ops->port_set_jumbo_size(chip, port, 10218); 2935 2872 if (err) ··· 3016 2925 * database, and allow bidirectional communication between the 3017 2926 * CPU and DSA port(s), and the other ports. 
3018 2927 */ 3019 - err = mv88e6xxx_port_set_fid(chip, port, 0); 2928 + err = mv88e6xxx_port_set_fid(chip, port, MV88E6XXX_FID_STANDALONE); 3020 2929 if (err) 3021 2930 return err; 3022 2931 ··· 3206 3115 } 3207 3116 } 3208 3117 3118 + err = mv88e6xxx_vtu_setup(chip); 3119 + if (err) 3120 + goto unlock; 3121 + 3209 3122 /* Setup Switch Port Registers */ 3210 3123 for (i = 0; i < mv88e6xxx_num_ports(chip); i++) { 3211 3124 if (dsa_is_unused_port(ds, i)) ··· 3236 3141 goto unlock; 3237 3142 3238 3143 err = mv88e6xxx_phy_setup(chip); 3239 - if (err) 3240 - goto unlock; 3241 - 3242 - err = mv88e6xxx_vtu_setup(chip); 3243 3144 if (err) 3244 3145 goto unlock; 3245 3146
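The mv88e6xxx hunks above introduce MV88E6XXX_FID_STANDALONE and MV88E6XXX_FID_BRIDGED, and translate the null VLAN ID into the bridging FID so standalone address databases stay empty. A minimal standalone sketch of that classification decision (the `fid_for_vid` helper name is hypothetical, not driver code):

```c
#include <assert.h>

/* FID values as defined in the chip.h hunk above */
#define MV88E6XXX_FID_STANDALONE 0
#define MV88E6XXX_FID_BRIDGED    1

/* Mirrors the vid == 0 special case in the chip.c hunk: frames on the
 * null VLAN of a VLAN-unaware bridge port land in the dedicated
 * bridging FID, never in the (deliberately empty) standalone FID.
 * For real VLANs the FID comes from the VTU entry. */
static int fid_for_vid(int vid, int vtu_fid)
{
	if (vid == 0)
		return MV88E6XXX_FID_BRIDGED;
	return vtu_fid;
}
```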
+9
drivers/net/dsa/mv88e6xxx/chip.h
··· 21 21 #define EDSA_HLEN 8 22 22 #define MV88E6XXX_N_FID 4096 23 23 24 + #define MV88E6XXX_FID_STANDALONE 0 25 + #define MV88E6XXX_FID_BRIDGED 1 26 + 24 27 /* PVT limits for 4-bit port and 5-bit switch */ 25 28 #define MV88E6XXX_MAX_PVT_SWITCHES 32 26 29 #define MV88E6XXX_MAX_PVT_PORTS 16 ··· 249 246 u16 vid; 250 247 }; 251 248 249 + struct mv88e6xxx_vlan { 250 + u16 vid; 251 + bool valid; 252 + }; 253 + 252 254 struct mv88e6xxx_port { 253 255 struct mv88e6xxx_chip *chip; 254 256 int port; 257 + struct mv88e6xxx_vlan bridge_pvid; 255 258 u64 serdes_stats[2]; 256 259 u64 atu_member_violation; 257 260 u64 atu_miss_violation;
+21
drivers/net/dsa/mv88e6xxx/port.c
··· 1257 1257 return 0; 1258 1258 } 1259 1259 1260 + int mv88e6xxx_port_drop_untagged(struct mv88e6xxx_chip *chip, int port, 1261 + bool drop_untagged) 1262 + { 1263 + u16 old, new; 1264 + int err; 1265 + 1266 + err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_CTL2, &old); 1267 + if (err) 1268 + return err; 1269 + 1270 + if (drop_untagged) 1271 + new = old | MV88E6XXX_PORT_CTL2_DISCARD_UNTAGGED; 1272 + else 1273 + new = old & ~MV88E6XXX_PORT_CTL2_DISCARD_UNTAGGED; 1274 + 1275 + if (new == old) 1276 + return 0; 1277 + 1278 + return mv88e6xxx_port_write(chip, port, MV88E6XXX_PORT_CTL2, new); 1279 + } 1280 + 1260 1281 int mv88e6xxx_port_set_map_da(struct mv88e6xxx_chip *chip, int port) 1261 1282 { 1262 1283 u16 reg;
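The new mv88e6xxx_port_drop_untagged() above follows the usual read-modify-write register pattern, skipping the write when the value is unchanged. A sketch with a simulated register in place of the real MDIO access (the backing variables are test scaffolding, not driver state):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bit position is illustrative; the real driver uses
 * MV88E6XXX_PORT_CTL2_DISCARD_UNTAGGED over MDIO. */
#define PORT_CTL2_DISCARD_UNTAGGED 0x0100

static uint16_t fake_ctl2;   /* simulated PORT_CTL2 register */
static int write_count;      /* counts hardware writes issued */

static int port_read(uint16_t *val)  { *val = fake_ctl2; return 0; }
static int port_write(uint16_t val)  { fake_ctl2 = val; write_count++; return 0; }

/* Same structure as mv88e6xxx_port_drop_untagged(): compute the new
 * value and only touch the hardware when something actually changes. */
static int port_drop_untagged(bool drop_untagged)
{
	uint16_t old, new;
	int err = port_read(&old);

	if (err)
		return err;

	if (drop_untagged)
		new = old | PORT_CTL2_DISCARD_UNTAGGED;
	else
		new = old & ~PORT_CTL2_DISCARD_UNTAGGED;

	if (new == old)
		return 0;	/* skip the redundant bus transaction */

	return port_write(new);
}
```

The `new == old` early return matters on slow management buses, where every register write is a multi-transaction MDIO exchange.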
+2
drivers/net/dsa/mv88e6xxx/port.h
··· 423 423 phy_interface_t mode); 424 424 int mv88e6185_port_get_cmode(struct mv88e6xxx_chip *chip, int port, u8 *cmode); 425 425 int mv88e6352_port_get_cmode(struct mv88e6xxx_chip *chip, int port, u8 *cmode); 426 + int mv88e6xxx_port_drop_untagged(struct mv88e6xxx_chip *chip, int port, 427 + bool drop_untagged); 426 428 int mv88e6xxx_port_set_map_da(struct mv88e6xxx_chip *chip, int port); 427 429 int mv88e6095_port_set_upstream_port(struct mv88e6xxx_chip *chip, int port, 428 430 int upstream_port);
+137 -12
drivers/net/dsa/ocelot/felix.c
··· 266 266 */ 267 267 static int felix_setup_mmio_filtering(struct felix *felix) 268 268 { 269 - unsigned long user_ports = 0, cpu_ports = 0; 269 + unsigned long user_ports = dsa_user_ports(felix->ds); 270 270 struct ocelot_vcap_filter *redirect_rule; 271 271 struct ocelot_vcap_filter *tagging_rule; 272 272 struct ocelot *ocelot = &felix->ocelot; 273 273 struct dsa_switch *ds = felix->ds; 274 - int port, ret; 274 + int cpu = -1, port, ret; 275 275 276 276 tagging_rule = kzalloc(sizeof(struct ocelot_vcap_filter), GFP_KERNEL); 277 277 if (!tagging_rule) ··· 284 284 } 285 285 286 286 for (port = 0; port < ocelot->num_phys_ports; port++) { 287 - if (dsa_is_user_port(ds, port)) 288 - user_ports |= BIT(port); 289 - if (dsa_is_cpu_port(ds, port)) 290 - cpu_ports |= BIT(port); 287 + if (dsa_is_cpu_port(ds, port)) { 288 + cpu = port; 289 + break; 290 + } 291 291 } 292 + 293 + if (cpu < 0) 294 + return -EINVAL; 292 295 293 296 tagging_rule->key_type = OCELOT_VCAP_KEY_ETYPE; 294 297 *(__be16 *)tagging_rule->key.etype.etype.value = htons(ETH_P_1588); ··· 328 325 * the CPU port module 329 326 */ 330 327 redirect_rule->action.mask_mode = OCELOT_MASK_MODE_REDIRECT; 331 - redirect_rule->action.port_mask = cpu_ports; 328 + redirect_rule->action.port_mask = BIT(cpu); 332 329 } else { 333 330 /* Trap PTP packets only to the CPU port module (which is 334 331 * redirected to the NPI port) ··· 1077 1074 return 0; 1078 1075 } 1079 1076 1077 + static void ocelot_port_purge_txtstamp_skb(struct ocelot *ocelot, int port, 1078 + struct sk_buff *skb) 1079 + { 1080 + struct ocelot_port *ocelot_port = ocelot->ports[port]; 1081 + struct sk_buff *clone = OCELOT_SKB_CB(skb)->clone; 1082 + struct sk_buff *skb_match = NULL, *skb_tmp; 1083 + unsigned long flags; 1084 + 1085 + if (!clone) 1086 + return; 1087 + 1088 + spin_lock_irqsave(&ocelot_port->tx_skbs.lock, flags); 1089 + 1090 + skb_queue_walk_safe(&ocelot_port->tx_skbs, skb, skb_tmp) { 1091 + if (skb != clone) 1092 + continue; 1093 + 
__skb_unlink(skb, &ocelot_port->tx_skbs); 1094 + skb_match = skb; 1095 + break; 1096 + } 1097 + 1098 + spin_unlock_irqrestore(&ocelot_port->tx_skbs.lock, flags); 1099 + 1100 + WARN_ONCE(!skb_match, 1101 + "Could not find skb clone in TX timestamping list\n"); 1102 + } 1103 + 1104 + #define work_to_xmit_work(w) \ 1105 + container_of((w), struct felix_deferred_xmit_work, work) 1106 + 1107 + static void felix_port_deferred_xmit(struct kthread_work *work) 1108 + { 1109 + struct felix_deferred_xmit_work *xmit_work = work_to_xmit_work(work); 1110 + struct dsa_switch *ds = xmit_work->dp->ds; 1111 + struct sk_buff *skb = xmit_work->skb; 1112 + u32 rew_op = ocelot_ptp_rew_op(skb); 1113 + struct ocelot *ocelot = ds->priv; 1114 + int port = xmit_work->dp->index; 1115 + int retries = 10; 1116 + 1117 + do { 1118 + if (ocelot_can_inject(ocelot, 0)) 1119 + break; 1120 + 1121 + cpu_relax(); 1122 + } while (--retries); 1123 + 1124 + if (!retries) { 1125 + dev_err(ocelot->dev, "port %d failed to inject skb\n", 1126 + port); 1127 + ocelot_port_purge_txtstamp_skb(ocelot, port, skb); 1128 + kfree_skb(skb); 1129 + return; 1130 + } 1131 + 1132 + ocelot_port_inject_frame(ocelot, port, 0, rew_op, skb); 1133 + 1134 + consume_skb(skb); 1135 + kfree(xmit_work); 1136 + } 1137 + 1138 + static int felix_port_setup_tagger_data(struct dsa_switch *ds, int port) 1139 + { 1140 + struct dsa_port *dp = dsa_to_port(ds, port); 1141 + struct ocelot *ocelot = ds->priv; 1142 + struct felix *felix = ocelot_to_felix(ocelot); 1143 + struct felix_port *felix_port; 1144 + 1145 + if (!dsa_port_is_user(dp)) 1146 + return 0; 1147 + 1148 + felix_port = kzalloc(sizeof(*felix_port), GFP_KERNEL); 1149 + if (!felix_port) 1150 + return -ENOMEM; 1151 + 1152 + felix_port->xmit_worker = felix->xmit_worker; 1153 + felix_port->xmit_work_fn = felix_port_deferred_xmit; 1154 + 1155 + dp->priv = felix_port; 1156 + 1157 + return 0; 1158 + } 1159 + 1160 + static void felix_port_teardown_tagger_data(struct dsa_switch *ds, int port) 
1161 + { 1162 + struct dsa_port *dp = dsa_to_port(ds, port); 1163 + struct felix_port *felix_port = dp->priv; 1164 + 1165 + if (!felix_port) 1166 + return; 1167 + 1168 + dp->priv = NULL; 1169 + kfree(felix_port); 1170 + } 1171 + 1080 1172 /* Hardware initialization done here so that we can allocate structures with 1081 1173 * devm without fear of dsa_register_switch returning -EPROBE_DEFER and causing 1082 1174 * us to allocate structures twice (leak memory) and map PCI memory twice ··· 1200 1102 } 1201 1103 } 1202 1104 1105 + felix->xmit_worker = kthread_create_worker(0, "felix_xmit"); 1106 + if (IS_ERR(felix->xmit_worker)) { 1107 + err = PTR_ERR(felix->xmit_worker); 1108 + goto out_deinit_timestamp; 1109 + } 1110 + 1203 1111 for (port = 0; port < ds->num_ports; port++) { 1204 1112 if (dsa_is_unused_port(ds, port)) 1205 1113 continue; ··· 1216 1112 * bits of vlan tag. 1217 1113 */ 1218 1114 felix_port_qos_map_init(ocelot, port); 1115 + 1116 + err = felix_port_setup_tagger_data(ds, port); 1117 + if (err) { 1118 + dev_err(ds->dev, 1119 + "port %d failed to set up tagger data: %pe\n", 1120 + port, ERR_PTR(err)); 1121 + goto out_deinit_ports; 1122 + } 1219 1123 } 1220 1124 1221 1125 err = ocelot_devlink_sb_register(ocelot); ··· 1238 1126 * there's no real point in checking for errors. 
1239 1127 */ 1240 1128 felix_set_tag_protocol(ds, port, felix->tag_proto); 1129 + break; 1241 1130 } 1242 1131 1243 1132 ds->mtu_enforcement_ingress = true; ··· 1251 1138 if (dsa_is_unused_port(ds, port)) 1252 1139 continue; 1253 1140 1141 + felix_port_teardown_tagger_data(ds, port); 1254 1142 ocelot_deinit_port(ocelot, port); 1255 1143 } 1256 1144 1145 + kthread_destroy_worker(felix->xmit_worker); 1146 + 1147 + out_deinit_timestamp: 1257 1148 ocelot_deinit_timestamp(ocelot); 1258 1149 ocelot_deinit(ocelot); 1259 1150 ··· 1279 1162 continue; 1280 1163 1281 1164 felix_del_tag_protocol(ds, port, felix->tag_proto); 1165 + break; 1282 1166 } 1283 - 1284 - ocelot_devlink_sb_unregister(ocelot); 1285 - ocelot_deinit_timestamp(ocelot); 1286 - ocelot_deinit(ocelot); 1287 1167 1288 1168 for (port = 0; port < ocelot->num_phys_ports; port++) { 1289 1169 if (dsa_is_unused_port(ds, port)) 1290 1170 continue; 1291 1171 1172 + felix_port_teardown_tagger_data(ds, port); 1292 1173 ocelot_deinit_port(ocelot, port); 1293 1174 } 1175 + 1176 + kthread_destroy_worker(felix->xmit_worker); 1177 + 1178 + ocelot_devlink_sb_unregister(ocelot); 1179 + ocelot_deinit_timestamp(ocelot); 1180 + ocelot_deinit(ocelot); 1294 1181 1295 1182 if (felix->info->mdio_bus_free) 1296 1183 felix->info->mdio_bus_free(ocelot); ··· 1412 1291 if (!ocelot->ptp) 1413 1292 return; 1414 1293 1415 - if (ocelot_port_txtstamp_request(ocelot, port, skb, &clone)) 1294 + if (ocelot_port_txtstamp_request(ocelot, port, skb, &clone)) { 1295 + dev_err_ratelimited(ds->dev, 1296 + "port %d delivering skb without TX timestamp\n", 1297 + port); 1416 1298 return; 1299 + } 1417 1300 1418 1301 if (clone) 1419 1302 OCELOT_SKB_CB(skb)->clone = clone;
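felix_port_deferred_xmit() above polls ocelot_can_inject() in a bounded busy-wait before giving up and dropping the skb. The control flow can be sketched in isolation (the predicate and retry budget are stand-ins; the kernel version calls cpu_relax() between polls):

```c
#include <assert.h>
#include <stdbool.h>

/* Fault-injection knob: number of polls before the simulated
 * injection resource frees up. */
static int free_after;

static bool can_inject(void)
{
	if (free_after > 0) {
		free_after--;
		return false;
	}
	return true;
}

/* Same shape as the do/while in felix_port_deferred_xmit(): poll up to
 * 'retries' times, report failure when the budget runs out. */
static int wait_for_injection(int retries)
{
	do {
		if (can_inject())
			break;
		/* cpu_relax() would go here in kernel context */
	} while (--retries);

	return retries ? 0 : -1;	/* -1 models the drop-and-free path */
}
```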
+1
drivers/net/dsa/ocelot/felix.h
··· 62 62 resource_size_t switch_base; 63 63 resource_size_t imdio_base; 64 64 enum dsa_tag_protocol tag_proto; 65 + struct kthread_worker *xmit_worker; 65 66 }; 66 67 67 68 struct net_device *felix_port_to_netdev(struct ocelot *ocelot, int port);
+1 -2
drivers/net/dsa/sja1105/sja1105_main.c
··· 3117 3117 sja1105_static_config_free(&priv->static_config); 3118 3118 } 3119 3119 3120 - const struct dsa_switch_ops sja1105_switch_ops = { 3120 + static const struct dsa_switch_ops sja1105_switch_ops = { 3121 3121 .get_tag_protocol = sja1105_get_tag_protocol, 3122 3122 .setup = sja1105_setup, 3123 3123 .teardown = sja1105_teardown, ··· 3166 3166 .port_bridge_tx_fwd_offload = dsa_tag_8021q_bridge_tx_fwd_offload, 3167 3167 .port_bridge_tx_fwd_unoffload = dsa_tag_8021q_bridge_tx_fwd_unoffload, 3168 3168 }; 3169 - EXPORT_SYMBOL_GPL(sja1105_switch_ops); 3170 3169 3171 3170 static const struct of_device_id sja1105_dt_ids[]; 3172 3171
+6 -39
drivers/net/dsa/sja1105/sja1105_ptp.c
··· 64 64 static int sja1105_change_rxtstamping(struct sja1105_private *priv, 65 65 bool on) 66 66 { 67 + struct sja1105_tagger_data *tagger_data = &priv->tagger_data; 67 68 struct sja1105_ptp_data *ptp_data = &priv->ptp_data; 68 69 struct sja1105_general_params_entry *general_params; 69 70 struct sja1105_table *table; ··· 80 79 priv->tagger_data.stampable_skb = NULL; 81 80 } 82 81 ptp_cancel_worker_sync(ptp_data->clock); 83 - skb_queue_purge(&ptp_data->skb_txtstamp_queue); 82 + skb_queue_purge(&tagger_data->skb_txtstamp_queue); 84 83 skb_queue_purge(&ptp_data->skb_rxtstamp_queue); 85 84 86 85 return sja1105_static_config_reload(priv, SJA1105_RX_HWTSTAMPING); ··· 453 452 return priv->info->rxtstamp(ds, port, skb); 454 453 } 455 454 456 - void sja1110_process_meta_tstamp(struct dsa_switch *ds, int port, u8 ts_id, 457 - enum sja1110_meta_tstamp dir, u64 tstamp) 458 - { 459 - struct sja1105_private *priv = ds->priv; 460 - struct sja1105_ptp_data *ptp_data = &priv->ptp_data; 461 - struct sk_buff *skb, *skb_tmp, *skb_match = NULL; 462 - struct skb_shared_hwtstamps shwt = {0}; 463 - 464 - /* We don't care about RX timestamps on the CPU port */ 465 - if (dir == SJA1110_META_TSTAMP_RX) 466 - return; 467 - 468 - spin_lock(&ptp_data->skb_txtstamp_queue.lock); 469 - 470 - skb_queue_walk_safe(&ptp_data->skb_txtstamp_queue, skb, skb_tmp) { 471 - if (SJA1105_SKB_CB(skb)->ts_id != ts_id) 472 - continue; 473 - 474 - __skb_unlink(skb, &ptp_data->skb_txtstamp_queue); 475 - skb_match = skb; 476 - 477 - break; 478 - } 479 - 480 - spin_unlock(&ptp_data->skb_txtstamp_queue.lock); 481 - 482 - if (WARN_ON(!skb_match)) 483 - return; 484 - 485 - shwt.hwtstamp = ns_to_ktime(sja1105_ticks_to_ns(tstamp)); 486 - skb_complete_tx_timestamp(skb_match, &shwt); 487 - } 488 - EXPORT_SYMBOL_GPL(sja1110_process_meta_tstamp); 489 - 490 455 /* In addition to cloning the skb which is done by the common 491 456 * sja1105_port_txtstamp, we need to generate a timestamp ID and save the 492 457 * packet to the 
TX timestamping queue. ··· 461 494 { 462 495 struct sk_buff *clone = SJA1105_SKB_CB(skb)->clone; 463 496 struct sja1105_private *priv = ds->priv; 464 - struct sja1105_ptp_data *ptp_data = &priv->ptp_data; 465 497 struct sja1105_port *sp = &priv->ports[port]; 466 498 u8 ts_id; 467 499 ··· 476 510 477 511 spin_unlock(&sp->data->meta_lock); 478 512 479 - skb_queue_tail(&ptp_data->skb_txtstamp_queue, clone); 513 + skb_queue_tail(&sp->data->skb_txtstamp_queue, clone); 480 514 } 481 515 482 516 /* Called from dsa_skb_tx_timestamp. This callback is just to clone ··· 919 953 /* Only used on SJA1105 */ 920 954 skb_queue_head_init(&ptp_data->skb_rxtstamp_queue); 921 955 /* Only used on SJA1110 */ 922 - skb_queue_head_init(&ptp_data->skb_txtstamp_queue); 956 + skb_queue_head_init(&tagger_data->skb_txtstamp_queue); 923 957 spin_lock_init(&tagger_data->meta_lock); 924 958 925 959 ptp_data->clock = ptp_clock_register(&ptp_data->caps, ds->dev); ··· 937 971 void sja1105_ptp_clock_unregister(struct dsa_switch *ds) 938 972 { 939 973 struct sja1105_private *priv = ds->priv; 974 + struct sja1105_tagger_data *tagger_data = &priv->tagger_data; 940 975 struct sja1105_ptp_data *ptp_data = &priv->ptp_data; 941 976 942 977 if (IS_ERR_OR_NULL(ptp_data->clock)) ··· 945 978 946 979 del_timer_sync(&ptp_data->extts_timer); 947 980 ptp_cancel_worker_sync(ptp_data->clock); 948 - skb_queue_purge(&ptp_data->skb_txtstamp_queue); 981 + skb_queue_purge(&tagger_data->skb_txtstamp_queue); 949 982 skb_queue_purge(&ptp_data->skb_rxtstamp_queue); 950 983 ptp_clock_unregister(ptp_data->clock); 951 984 ptp_data->clock = NULL;
-19
drivers/net/dsa/sja1105/sja1105_ptp.h
··· 8 8 9 9 #if IS_ENABLED(CONFIG_NET_DSA_SJA1105_PTP) 10 10 11 - /* Timestamps are in units of 8 ns clock ticks (equivalent to 12 - * a fixed 125 MHz clock). 13 - */ 14 - #define SJA1105_TICK_NS 8 15 - 16 - static inline s64 ns_to_sja1105_ticks(s64 ns) 17 - { 18 - return ns / SJA1105_TICK_NS; 19 - } 20 - 21 - static inline s64 sja1105_ticks_to_ns(s64 ticks) 22 - { 23 - return ticks * SJA1105_TICK_NS; 24 - } 25 - 26 11 /* Calculate the first base_time in the future that satisfies this 27 12 * relationship: 28 13 * ··· 62 77 struct timer_list extts_timer; 63 78 /* Used only on SJA1105 to reconstruct partial timestamps */ 64 79 struct sk_buff_head skb_rxtstamp_queue; 65 - /* Used on SJA1110 where meta frames are generated only for 66 - * 2-step TX timestamps 67 - */ 68 - struct sk_buff_head skb_txtstamp_queue; 69 80 struct ptp_clock_info caps; 70 81 struct ptp_clock *clock; 71 82 struct sja1105_ptp_cmd cmd;
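The helpers removed from sja1105_ptp.h above convert between nanoseconds and the switch's 8 ns clock ticks (a fixed 125 MHz counter); the math is a plain scale by SJA1105_TICK_NS, reproduced here as a standalone sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Timestamps are counted in 8 ns ticks: 1 / 125 MHz = 8 ns. */
#define SJA1105_TICK_NS 8

static int64_t ns_to_sja1105_ticks(int64_t ns)
{
	return ns / SJA1105_TICK_NS;
}

static int64_t sja1105_ticks_to_ns(int64_t ticks)
{
	return ticks * SJA1105_TICK_NS;
}
```

One second of wall time therefore corresponds to 125,000,000 ticks, and the conversion loses sub-tick (< 8 ns) precision in the ns-to-ticks direction.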
+1
drivers/net/ethernet/Kconfig
··· 100 100 config KORINA 101 101 tristate "Korina (IDT RC32434) Ethernet support" 102 102 depends on MIKROTIK_RB532 || COMPILE_TEST 103 + select CRC32 103 104 select MII 104 105 help 105 106 If you have a Mikrotik RouterBoard 500 or IDT RC32434
+1
drivers/net/ethernet/arc/Kconfig
··· 21 21 depends on ARC || ARCH_ROCKCHIP || COMPILE_TEST 22 22 select MII 23 23 select PHYLIB 24 + select CRC32 24 25 25 26 config ARC_EMAC 26 27 tristate "ARC EMAC support"
+7 -8
drivers/net/ethernet/intel/ice/ice_ptp.c
··· 1313 1313 { 1314 1314 u8 idx; 1315 1315 1316 - spin_lock(&tx->lock); 1317 - 1318 1316 for (idx = 0; idx < tx->len; idx++) { 1319 1317 u8 phy_idx = idx + tx->quad_offset; 1320 1318 1321 - /* Clear any potential residual timestamp in the PHY block */ 1322 - if (!pf->hw.reset_ongoing) 1323 - ice_clear_phy_tstamp(&pf->hw, tx->quad, phy_idx); 1324 - 1319 + spin_lock(&tx->lock); 1325 1320 if (tx->tstamps[idx].skb) { 1326 1321 dev_kfree_skb_any(tx->tstamps[idx].skb); 1327 1322 tx->tstamps[idx].skb = NULL; 1328 1323 } 1329 - } 1324 + clear_bit(idx, tx->in_use); 1325 + spin_unlock(&tx->lock); 1330 1326 1331 - spin_unlock(&tx->lock); 1327 + /* Clear any potential residual timestamp in the PHY block */ 1328 + if (!pf->hw.reset_ongoing) 1329 + ice_clear_phy_tstamp(&pf->hw, tx->quad, phy_idx); 1330 + } 1332 1331 } 1333 1332 1334 1333 /**
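The ice_ptp.c hunk above narrows the spinlock to the per-entry bookkeeping and moves the slow ice_clear_phy_tstamp() call outside it. The resulting lock discipline can be sketched with a violation detector (all names here are illustrative scaffolding, not driver API):

```c
#include <assert.h>
#include <stdbool.h>

#define TX_LEN 4

static bool lock_held;
static bool slow_op_under_lock;	/* records a discipline violation */

static void lock(void)   { lock_held = true; }
static void unlock(void) { lock_held = false; }

/* Stands in for ice_clear_phy_tstamp(): a slow PHY access that must
 * not run with the timestamp spinlock held. */
static void clear_phy_tstamp(int idx)
{
	(void)idx;
	if (lock_held)
		slow_op_under_lock = true;
}

/* Restructured flush, as in the hunk above: take the lock only around
 * the per-entry skb/in_use bookkeeping, then clear the PHY block. */
static void flush_tx_tracker(void)
{
	for (int idx = 0; idx < TX_LEN; idx++) {
		lock();
		/* free skb, clear in_use bit ... */
		unlock();

		clear_phy_tstamp(idx);
	}
}
```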
+3 -4
drivers/net/ethernet/mellanox/mlx5/core/cq.c
··· 155 155 u32 in[MLX5_ST_SZ_DW(destroy_cq_in)] = {}; 156 156 int err; 157 157 158 + mlx5_debug_cq_remove(dev, cq); 159 + 158 160 mlx5_eq_del_cq(mlx5_get_async_eq(dev), cq); 159 161 mlx5_eq_del_cq(&cq->eq->core, cq); 160 162 ··· 164 162 MLX5_SET(destroy_cq_in, in, cqn, cq->cqn); 165 163 MLX5_SET(destroy_cq_in, in, uid, cq->uid); 166 164 err = mlx5_cmd_exec_in(dev, destroy_cq, in); 167 - if (err) 168 - return err; 169 165 170 166 synchronize_irq(cq->irqn); 171 167 172 - mlx5_debug_cq_remove(dev, cq); 173 168 mlx5_cq_put(cq); 174 169 wait_for_completion(&cq->free); 175 170 176 - return 0; 171 + return err; 177 172 } 178 173 EXPORT_SYMBOL(mlx5_core_destroy_cq); 179 174
+4 -4
drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
··· 475 475 esw_warn(mdev, "Failed to allocate bridge offloads workqueue\n"); 476 476 goto err_alloc_wq; 477 477 } 478 - INIT_DELAYED_WORK(&br_offloads->update_work, mlx5_esw_bridge_update_work); 479 - queue_delayed_work(br_offloads->wq, &br_offloads->update_work, 480 - msecs_to_jiffies(MLX5_ESW_BRIDGE_UPDATE_INTERVAL)); 481 478 482 479 br_offloads->nb.notifier_call = mlx5_esw_bridge_switchdev_event; 483 480 err = register_switchdev_notifier(&br_offloads->nb); ··· 497 500 err); 498 501 goto err_register_netdev; 499 502 } 503 + INIT_DELAYED_WORK(&br_offloads->update_work, mlx5_esw_bridge_update_work); 504 + queue_delayed_work(br_offloads->wq, &br_offloads->update_work, 505 + msecs_to_jiffies(MLX5_ESW_BRIDGE_UPDATE_INTERVAL)); 500 506 return; 501 507 502 508 err_register_netdev: ··· 523 523 if (!br_offloads) 524 524 return; 525 525 526 + cancel_delayed_work_sync(&br_offloads->update_work); 526 527 unregister_netdevice_notifier(&br_offloads->netdev_nb); 527 528 unregister_switchdev_blocking_notifier(&br_offloads->nb_blk); 528 529 unregister_switchdev_notifier(&br_offloads->nb); 529 - cancel_delayed_work(&br_offloads->update_work); 530 530 destroy_workqueue(br_offloads->wq); 531 531 rtnl_lock(); 532 532 mlx5_esw_bridge_cleanup(esw);
+54 -7
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 2981 2981 agg_count += mqprio->qopt.count[i]; 2982 2982 } 2983 2983 2984 - if (priv->channels.params.num_channels < agg_count) { 2985 - netdev_err(netdev, "Num of queues (%d) exceeds available (%d)\n", 2984 + if (priv->channels.params.num_channels != agg_count) { 2985 + netdev_err(netdev, "Num of queues (%d) does not match available (%d)\n", 2986 2986 agg_count, priv->channels.params.num_channels); 2987 2987 return -EINVAL; 2988 2988 } ··· 3325 3325 return mlx5_set_port_fcs(mdev, !enable); 3326 3326 } 3327 3327 3328 + static int mlx5e_set_rx_port_ts(struct mlx5_core_dev *mdev, bool enable) 3329 + { 3330 + u32 in[MLX5_ST_SZ_DW(pcmr_reg)] = {}; 3331 + bool supported, curr_state; 3332 + int err; 3333 + 3334 + if (!MLX5_CAP_GEN(mdev, ports_check)) 3335 + return 0; 3336 + 3337 + err = mlx5_query_ports_check(mdev, in, sizeof(in)); 3338 + if (err) 3339 + return err; 3340 + 3341 + supported = MLX5_GET(pcmr_reg, in, rx_ts_over_crc_cap); 3342 + curr_state = MLX5_GET(pcmr_reg, in, rx_ts_over_crc); 3343 + 3344 + if (!supported || enable == curr_state) 3345 + return 0; 3346 + 3347 + MLX5_SET(pcmr_reg, in, local_port, 1); 3348 + MLX5_SET(pcmr_reg, in, rx_ts_over_crc, enable); 3349 + 3350 + return mlx5_set_ports_check(mdev, in, sizeof(in)); 3351 + } 3352 + 3328 3353 static int set_feature_rx_fcs(struct net_device *netdev, bool enable) 3329 3354 { 3330 3355 struct mlx5e_priv *priv = netdev_priv(netdev); 3356 + struct mlx5e_channels *chs = &priv->channels; 3357 + struct mlx5_core_dev *mdev = priv->mdev; 3331 3358 int err; 3332 3359 3333 3360 mutex_lock(&priv->state_lock); 3334 3361 3335 - priv->channels.params.scatter_fcs_en = enable; 3336 - err = mlx5e_modify_channels_scatter_fcs(&priv->channels, enable); 3337 - if (err) 3338 - priv->channels.params.scatter_fcs_en = !enable; 3362 + if (enable) { 3363 + err = mlx5e_set_rx_port_ts(mdev, false); 3364 + if (err) 3365 + goto out; 3339 3366 3367 + chs->params.scatter_fcs_en = true; 3368 + err = mlx5e_modify_channels_scatter_fcs(chs, 
true); 3369 + if (err) { 3370 + chs->params.scatter_fcs_en = false; 3371 + mlx5e_set_rx_port_ts(mdev, true); 3372 + } 3373 + } else { 3374 + chs->params.scatter_fcs_en = false; 3375 + err = mlx5e_modify_channels_scatter_fcs(chs, false); 3376 + if (err) { 3377 + chs->params.scatter_fcs_en = true; 3378 + goto out; 3379 + } 3380 + err = mlx5e_set_rx_port_ts(mdev, true); 3381 + if (err) { 3382 + mlx5_core_warn(mdev, "Failed to set RX port timestamp %d\n", err); 3383 + err = 0; 3384 + } 3385 + } 3386 + 3387 + out: 3340 3388 mutex_unlock(&priv->state_lock); 3341 - 3342 3389 return err; 3343 3390 } 3344 3391
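The reworked set_feature_rx_fcs() above sequences two dependent settings (the RX port timestamp trailer and FCS scatter) and rolls the first back if the second fails. A generic sketch of that enable path, with fault injection for the test (the setting names model the interplay, they are not the mlx5 API):

```c
#include <assert.h>
#include <stdbool.h>

static bool ts_over_crc = true;	/* conflicting feature, on by default */
static bool scatter_fcs;
static bool fail_scatter;	/* fault injection for the test */

static int set_ts_over_crc(bool on) { ts_over_crc = on; return 0; }

static int set_scatter_fcs(bool on)
{
	if (fail_scatter)
		return -1;
	scatter_fcs = on;
	return 0;
}

/* Enable path: disable the conflicting feature first, then enable the
 * requested one, rolling back the first step if the second fails, so
 * the device is never left in a mixed state. */
static int enable_rx_fcs(void)
{
	int err = set_ts_over_crc(false);

	if (err)
		return err;

	err = set_scatter_fcs(true);
	if (err)
		set_ts_over_crc(true);	/* roll back to a consistent state */

	return err;
}
```

The disable path in the hunk runs the same steps in the opposite order, for the same reason: the intermediate states must always be ones the hardware tolerates.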
+5 -1
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 618 618 params->mqprio.num_tc = 1; 619 619 params->tunneled_offload_en = false; 620 620 621 + /* Set an initial non-zero value, so that mlx5e_select_queue won't 622 + * divide by zero if called before first activating channels. 623 + */ 624 + priv->num_tc_x_num_ch = params->num_channels * params->mqprio.num_tc; 625 + 621 626 mlx5_query_min_inline(mdev, &params->tx_min_inline_mode); 622 627 } 623 628 ··· 648 643 netdev->hw_features |= NETIF_F_RXCSUM; 649 644 650 645 netdev->features |= netdev->hw_features; 651 - netdev->features |= NETIF_F_VLAN_CHALLENGED; 652 646 netdev->features |= NETIF_F_NETNS_LOCAL; 653 647 } 654 648
+5 -47
drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
··· 24 24 #define MLXSW_THERMAL_ZONE_MAX_NAME 16 25 25 #define MLXSW_THERMAL_TEMP_SCORE_MAX GENMASK(31, 0) 26 26 #define MLXSW_THERMAL_MAX_STATE 10 27 + #define MLXSW_THERMAL_MIN_STATE 2 27 28 #define MLXSW_THERMAL_MAX_DUTY 255 28 - /* Minimum and maximum fan allowed speed in percent: from 20% to 100%. Values 29 - * MLXSW_THERMAL_MAX_STATE + x, where x is between 2 and 10 are used for 30 - * setting fan speed dynamic minimum. For example, if value is set to 14 (40%) 31 - * cooling levels vector will be set to 4, 4, 4, 4, 4, 5, 6, 7, 8, 9, 10 to 32 - * introduce PWM speed in percent: 40, 40, 40, 40, 40, 50, 60. 70, 80, 90, 100. 33 - */ 34 - #define MLXSW_THERMAL_SPEED_MIN (MLXSW_THERMAL_MAX_STATE + 2) 35 - #define MLXSW_THERMAL_SPEED_MAX (MLXSW_THERMAL_MAX_STATE * 2) 36 - #define MLXSW_THERMAL_SPEED_MIN_LEVEL 2 /* 20% */ 37 29 38 30 /* External cooling devices, allowed for binding to mlxsw thermal zones. */ 39 31 static char * const mlxsw_thermal_external_allowed_cdev[] = { ··· 638 646 struct mlxsw_thermal *thermal = cdev->devdata; 639 647 struct device *dev = thermal->bus_info->dev; 640 648 char mfsc_pl[MLXSW_REG_MFSC_LEN]; 641 - unsigned long cur_state, i; 642 649 int idx; 643 - u8 duty; 644 650 int err; 651 + 652 + if (state > MLXSW_THERMAL_MAX_STATE) 653 + return -EINVAL; 645 654 646 655 idx = mlxsw_get_cooling_device_idx(thermal, cdev); 647 656 if (idx < 0) 648 657 return idx; 649 - 650 - /* Verify if this request is for changing allowed fan dynamical 651 - * minimum. If it is - update cooling levels accordingly and update 652 - * state, if current state is below the newly requested minimum state. 653 - * For example, if current state is 5, and minimal state is to be 654 - * changed from 4 to 6, thermal->cooling_levels[0 to 5] will be changed 655 - * all from 4 to 6. And state 5 (thermal->cooling_levels[4]) should be 656 - * overwritten. 
657 - */ 658 - if (state >= MLXSW_THERMAL_SPEED_MIN && 659 - state <= MLXSW_THERMAL_SPEED_MAX) { 660 - state -= MLXSW_THERMAL_MAX_STATE; 661 - for (i = 0; i <= MLXSW_THERMAL_MAX_STATE; i++) 662 - thermal->cooling_levels[i] = max(state, i); 663 - 664 - mlxsw_reg_mfsc_pack(mfsc_pl, idx, 0); 665 - err = mlxsw_reg_query(thermal->core, MLXSW_REG(mfsc), mfsc_pl); 666 - if (err) 667 - return err; 668 - 669 - duty = mlxsw_reg_mfsc_pwm_duty_cycle_get(mfsc_pl); 670 - cur_state = mlxsw_duty_to_state(duty); 671 - 672 - /* If current fan state is lower than requested dynamical 673 - * minimum, increase fan speed up to dynamical minimum. 674 - */ 675 - if (state < cur_state) 676 - return 0; 677 - 678 - state = cur_state; 679 - } 680 - 681 - if (state > MLXSW_THERMAL_MAX_STATE) 682 - return -EINVAL; 683 658 684 659 /* Normalize the state to the valid speed range. */ 685 660 state = thermal->cooling_levels[state]; ··· 957 998 958 999 /* Initialize cooling levels per PWM state. */ 959 1000 for (i = 0; i < MLXSW_THERMAL_MAX_STATE; i++) 960 - thermal->cooling_levels[i] = max(MLXSW_THERMAL_SPEED_MIN_LEVEL, 961 - i); 1001 + thermal->cooling_levels[i] = max(MLXSW_THERMAL_MIN_STATE, i); 962 1002 963 1003 thermal->polling_delay = bus_info->low_frequency ? 964 1004 MLXSW_THERMAL_SLOW_POLL_INT :
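After the mlxsw simplification above, cooling levels are initialized by clamping every state to a floor of MLXSW_THERMAL_MIN_STATE (20% duty), so low thermal states still keep the fans spinning. A sketch of that clamp (open-coding the kernel's max() macro):

```c
#include <assert.h>

#define MLXSW_THERMAL_MAX_STATE 10
#define MLXSW_THERMAL_MIN_STATE 2

static unsigned long cooling_levels[MLXSW_THERMAL_MAX_STATE + 1];

/* cooling_levels[i] = max(MLXSW_THERMAL_MIN_STATE, i), as in the
 * initialization loop above: states 0 and 1 are raised to 2. */
static void init_cooling_levels(void)
{
	for (unsigned long i = 0; i <= MLXSW_THERMAL_MAX_STATE; i++)
		cooling_levels[i] = i < MLXSW_THERMAL_MIN_STATE ?
				    MLXSW_THERMAL_MIN_STATE : i;
}
```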
+8 -2
drivers/net/ethernet/microchip/encx24j600-regmap.c
··· 497 497 .reg_read = regmap_encx24j600_phy_reg_read, 498 498 }; 499 499 500 - void devm_regmap_init_encx24j600(struct device *dev, 501 - struct encx24j600_context *ctx) 500 + int devm_regmap_init_encx24j600(struct device *dev, 501 + struct encx24j600_context *ctx) 502 502 { 503 503 mutex_init(&ctx->mutex); 504 504 regcfg.lock_arg = ctx; 505 505 ctx->regmap = devm_regmap_init(dev, &regmap_encx24j600, ctx, &regcfg); 506 + if (IS_ERR(ctx->regmap)) 507 + return PTR_ERR(ctx->regmap); 506 508 ctx->phymap = devm_regmap_init(dev, &phymap_encx24j600, ctx, &phycfg); 509 + if (IS_ERR(ctx->phymap)) 510 + return PTR_ERR(ctx->phymap); 511 + 512 + return 0; 507 513 } 508 514 EXPORT_SYMBOL_GPL(devm_regmap_init_encx24j600); 509 515
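The encx24j600 fix above makes devm_regmap_init_encx24j600() check its regmap pointers with IS_ERR() and propagate PTR_ERR() instead of returning void. A userspace sketch of the pointer-encoded errno convention those macros rely on (a simplified model of the kernel's include/linux/err.h, assuming errnos fit in the topmost page of address space):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_ERRNO 4095

/* A pointer in the last MAX_ERRNO bytes of address space encodes a
 * negative errno rather than a valid object. */
static inline int is_err(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

static inline long ptr_err(const void *ptr)
{
	return (long)(intptr_t)ptr;
}

static inline void *err_ptr(long err)
{
	return (void *)(intptr_t)err;
}
```

This is why the fixed function can return `PTR_ERR(ctx->regmap)` directly: the error code travels inside the pointer until someone checks it.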
+4 -1
drivers/net/ethernet/microchip/encx24j600.c
··· 1023 1023 priv->speed = SPEED_100; 1024 1024 1025 1025 priv->ctx.spi = spi; 1026 - devm_regmap_init_encx24j600(&spi->dev, &priv->ctx); 1027 1026 ndev->irq = spi->irq; 1028 1027 ndev->netdev_ops = &encx24j600_netdev_ops; 1028 + 1029 + ret = devm_regmap_init_encx24j600(&spi->dev, &priv->ctx); 1030 + if (ret) 1031 + goto out_free; 1029 1032 1030 1033 mutex_init(&priv->lock); 1031 1034
+2 -2
drivers/net/ethernet/microchip/encx24j600_hw.h
··· 15 15 int bank; 16 16 }; 17 17 18 - void devm_regmap_init_encx24j600(struct device *dev, 19 - struct encx24j600_context *ctx); 18 + int devm_regmap_init_encx24j600(struct device *dev, 19 + struct encx24j600_context *ctx); 20 20 21 21 /* Single-byte instructions */ 22 22 #define BANK_SELECT(bank) (0xC0 | ((bank & (BANK_MASK >> BANK_SHIFT)) << 1))
+3 -1
drivers/net/ethernet/microsoft/mana/mana_en.c
··· 1477 1477 if (err) 1478 1478 goto out; 1479 1479 1480 - if (cq->gdma_id >= gc->max_num_cqs) 1480 + if (WARN_ON(cq->gdma_id >= gc->max_num_cqs)) { 1481 + err = -EINVAL; 1481 1482 goto out; 1483 + } 1482 1484 1483 1485 gc->cq_table[cq->gdma_id] = cq->gdma_cq; 1484 1486
+73 -38
drivers/net/ethernet/mscc/ocelot.c
··· 472 472 !(quirks & OCELOT_QUIRK_QSGMII_PORTS_MUST_BE_UP)) 473 473 ocelot_port_rmwl(ocelot_port, 474 474 DEV_CLOCK_CFG_MAC_TX_RST | 475 - DEV_CLOCK_CFG_MAC_TX_RST, 475 + DEV_CLOCK_CFG_MAC_RX_RST, 476 476 DEV_CLOCK_CFG_MAC_TX_RST | 477 - DEV_CLOCK_CFG_MAC_TX_RST, 477 + DEV_CLOCK_CFG_MAC_RX_RST, 478 478 DEV_CLOCK_CFG); 479 479 } 480 480 EXPORT_SYMBOL_GPL(ocelot_phylink_mac_link_down); ··· 569 569 } 570 570 EXPORT_SYMBOL_GPL(ocelot_phylink_mac_link_up); 571 571 572 - static void ocelot_port_add_txtstamp_skb(struct ocelot *ocelot, int port, 573 - struct sk_buff *clone) 572 + static int ocelot_port_add_txtstamp_skb(struct ocelot *ocelot, int port, 573 + struct sk_buff *clone) 574 574 { 575 575 struct ocelot_port *ocelot_port = ocelot->ports[port]; 576 + unsigned long flags; 576 577 577 - spin_lock(&ocelot_port->ts_id_lock); 578 + spin_lock_irqsave(&ocelot->ts_id_lock, flags); 579 + 580 + if (ocelot_port->ptp_skbs_in_flight == OCELOT_MAX_PTP_ID || 581 + ocelot->ptp_skbs_in_flight == OCELOT_PTP_FIFO_SIZE) { 582 + spin_unlock_irqrestore(&ocelot->ts_id_lock, flags); 583 + return -EBUSY; 584 + } 578 585 579 586 skb_shinfo(clone)->tx_flags |= SKBTX_IN_PROGRESS; 580 587 /* Store timestamp ID in OCELOT_SKB_CB(clone)->ts_id */ 581 588 OCELOT_SKB_CB(clone)->ts_id = ocelot_port->ts_id; 582 - ocelot_port->ts_id = (ocelot_port->ts_id + 1) % 4; 589 + 590 + ocelot_port->ts_id++; 591 + if (ocelot_port->ts_id == OCELOT_MAX_PTP_ID) 592 + ocelot_port->ts_id = 0; 593 + 594 + ocelot_port->ptp_skbs_in_flight++; 595 + ocelot->ptp_skbs_in_flight++; 596 + 583 597 skb_queue_tail(&ocelot_port->tx_skbs, clone); 584 598 585 - spin_unlock(&ocelot_port->ts_id_lock); 599 + spin_unlock_irqrestore(&ocelot->ts_id_lock, flags); 600 + 601 + return 0; 586 602 } 587 603 588 - u32 ocelot_ptp_rew_op(struct sk_buff *skb) 589 - { 590 - struct sk_buff *clone = OCELOT_SKB_CB(skb)->clone; 591 - u8 ptp_cmd = OCELOT_SKB_CB(skb)->ptp_cmd; 592 - u32 rew_op = 0; 593 - 594 - if (ptp_cmd == IFH_REW_OP_TWO_STEP_PTP && 
clone) { 595 - rew_op = ptp_cmd; 596 - rew_op |= OCELOT_SKB_CB(clone)->ts_id << 3; 597 - } else if (ptp_cmd == IFH_REW_OP_ORIGIN_PTP) { 598 - rew_op = ptp_cmd; 599 - } 600 - 601 - return rew_op; 602 - } 603 - EXPORT_SYMBOL(ocelot_ptp_rew_op); 604 - 605 - static bool ocelot_ptp_is_onestep_sync(struct sk_buff *skb) 604 + static bool ocelot_ptp_is_onestep_sync(struct sk_buff *skb, 605 + unsigned int ptp_class) 606 606 { 607 607 struct ptp_header *hdr; 608 - unsigned int ptp_class; 609 608 u8 msgtype, twostep; 610 - 611 - ptp_class = ptp_classify_raw(skb); 612 - if (ptp_class == PTP_CLASS_NONE) 613 - return false; 614 609 615 610 hdr = ptp_parse_header(skb, ptp_class); 616 611 if (!hdr) ··· 626 631 { 627 632 struct ocelot_port *ocelot_port = ocelot->ports[port]; 628 633 u8 ptp_cmd = ocelot_port->ptp_cmd; 634 + unsigned int ptp_class; 635 + int err; 636 + 637 + /* Don't do anything if PTP timestamping not enabled */ 638 + if (!ptp_cmd) 639 + return 0; 640 + 641 + ptp_class = ptp_classify_raw(skb); 642 + if (ptp_class == PTP_CLASS_NONE) 643 + return -EINVAL; 629 644 630 645 /* Store ptp_cmd in OCELOT_SKB_CB(skb)->ptp_cmd */ 631 646 if (ptp_cmd == IFH_REW_OP_ORIGIN_PTP) { 632 - if (ocelot_ptp_is_onestep_sync(skb)) { 647 + if (ocelot_ptp_is_onestep_sync(skb, ptp_class)) { 633 648 OCELOT_SKB_CB(skb)->ptp_cmd = ptp_cmd; 634 649 return 0; 635 650 } ··· 653 648 if (!(*clone)) 654 649 return -ENOMEM; 655 650 656 - ocelot_port_add_txtstamp_skb(ocelot, port, *clone); 651 + err = ocelot_port_add_txtstamp_skb(ocelot, port, *clone); 652 + if (err) 653 + return err; 654 + 657 655 OCELOT_SKB_CB(skb)->ptp_cmd = ptp_cmd; 656 + OCELOT_SKB_CB(*clone)->ptp_class = ptp_class; 658 657 } 659 658 660 659 return 0; ··· 692 683 spin_unlock_irqrestore(&ocelot->ptp_clock_lock, flags); 693 684 } 694 685 686 + static bool ocelot_validate_ptp_skb(struct sk_buff *clone, u16 seqid) 687 + { 688 + struct ptp_header *hdr; 689 + 690 + hdr = ptp_parse_header(clone, OCELOT_SKB_CB(clone)->ptp_class); 691 + if 
(WARN_ON(!hdr)) 692 + return false; 693 + 694 + return seqid == ntohs(hdr->sequence_id); 695 + } 696 + 695 697 void ocelot_get_txtstamp(struct ocelot *ocelot) 696 698 { 697 699 int budget = OCELOT_PTP_QUEUE_SZ; ··· 710 690 while (budget--) { 711 691 struct sk_buff *skb, *skb_tmp, *skb_match = NULL; 712 692 struct skb_shared_hwtstamps shhwtstamps; 693 + u32 val, id, seqid, txport; 713 694 struct ocelot_port *port; 714 695 struct timespec64 ts; 715 696 unsigned long flags; 716 - u32 val, id, txport; 717 697 718 698 val = ocelot_read(ocelot, SYS_PTP_STATUS); 719 699 ··· 726 706 /* Retrieve the ts ID and Tx port */ 727 707 id = SYS_PTP_STATUS_PTP_MESS_ID_X(val); 728 708 txport = SYS_PTP_STATUS_PTP_MESS_TXPORT_X(val); 709 + seqid = SYS_PTP_STATUS_PTP_MESS_SEQ_ID(val); 729 710 730 - /* Retrieve its associated skb */ 731 711 port = ocelot->ports[txport]; 732 712 713 + spin_lock(&ocelot->ts_id_lock); 714 + port->ptp_skbs_in_flight--; 715 + ocelot->ptp_skbs_in_flight--; 716 + spin_unlock(&ocelot->ts_id_lock); 717 + 718 + /* Retrieve its associated skb */ 719 + try_again: 733 720 spin_lock_irqsave(&port->tx_skbs.lock, flags); 734 721 735 722 skb_queue_walk_safe(&port->tx_skbs, skb, skb_tmp) { ··· 749 722 750 723 spin_unlock_irqrestore(&port->tx_skbs.lock, flags); 751 724 725 + if (WARN_ON(!skb_match)) 726 + continue; 727 + 728 + if (!ocelot_validate_ptp_skb(skb_match, seqid)) { 729 + dev_err_ratelimited(ocelot->dev, 730 + "port %d received stale TX timestamp for seqid %d, discarding\n", 731 + txport, seqid); 732 + dev_kfree_skb_any(skb); 733 + goto try_again; 734 + } 735 + 752 736 /* Get the h/w timestamp */ 753 737 ocelot_get_hwtimestamp(ocelot, &ts); 754 - 755 - if (unlikely(!skb_match)) 756 - continue; 757 738 758 739 /* Set the timestamp into the skb */ 759 740 memset(&shhwtstamps, 0, sizeof(shhwtstamps)); ··· 1983 1948 struct ocelot_port *ocelot_port = ocelot->ports[port]; 1984 1949 1985 1950 skb_queue_head_init(&ocelot_port->tx_skbs); 1986 - 
spin_lock_init(&ocelot_port->ts_id_lock); 1987 1951 1988 1952 /* Basic L2 initialization */ 1989 1953 ··· 2115 2081 mutex_init(&ocelot->stats_lock); 2116 2082 mutex_init(&ocelot->ptp_lock); 2117 2083 spin_lock_init(&ocelot->ptp_clock_lock); 2084 + spin_lock_init(&ocelot->ts_id_lock); 2118 2085 snprintf(queue_name, sizeof(queue_name), "%s-stats", 2119 2086 dev_name(ocelot->dev)); 2120 2087 ocelot->stats_queue = create_singlethread_workqueue(queue_name);
+2 -1
drivers/net/ethernet/mscc/ocelot_net.c
··· 8 8 * Copyright 2020-2021 NXP 9 9 */ 10 10 11 + #include <linux/dsa/ocelot.h> 11 12 #include <linux/if_bridge.h> 12 13 #include <linux/of_net.h> 13 14 #include <linux/phy/phy.h> ··· 1626 1625 if (phy_mode == PHY_INTERFACE_MODE_QSGMII) 1627 1626 ocelot_port_rmwl(ocelot_port, 0, 1628 1627 DEV_CLOCK_CFG_MAC_TX_RST | 1629 - DEV_CLOCK_CFG_MAC_TX_RST, 1628 + DEV_CLOCK_CFG_MAC_RX_RST, 1630 1629 DEV_CLOCK_CFG); 1631 1630 1632 1631 ocelot_port->phy_mode = phy_mode;
+1 -1
drivers/net/ethernet/neterion/s2io.c
··· 8566 8566 return; 8567 8567 } 8568 8568 8569 - if (s2io_set_mac_addr(netdev, netdev->dev_addr) == FAILURE) { 8569 + if (do_s2io_prog_unicast(netdev, netdev->dev_addr) == FAILURE) { 8570 8570 s2io_card_down(sp); 8571 8571 pr_err("Can't restore mac addr after reset.\n"); 8572 8572 return;
+14 -5
drivers/net/ethernet/netronome/nfp/flower/main.c
··· 830 830 if (err) 831 831 goto err_cleanup; 832 832 833 - err = flow_indr_dev_register(nfp_flower_indr_setup_tc_cb, app); 834 - if (err) 835 - goto err_cleanup; 836 - 837 833 if (app_priv->flower_ext_feats & NFP_FL_FEATS_VF_RLIM) 838 834 nfp_flower_qos_init(app); 839 835 ··· 938 942 return err; 939 943 } 940 944 941 - return nfp_tunnel_config_start(app); 945 + err = flow_indr_dev_register(nfp_flower_indr_setup_tc_cb, app); 946 + if (err) 947 + return err; 948 + 949 + err = nfp_tunnel_config_start(app); 950 + if (err) 951 + goto err_tunnel_config; 952 + 953 + return 0; 954 + 955 + err_tunnel_config: 956 + flow_indr_dev_unregister(nfp_flower_indr_setup_tc_cb, app, 957 + nfp_flower_setup_indr_tc_release); 958 + return err; 942 959 } 943 960 944 961 static void nfp_flower_stop(struct nfp_app *app)
+4
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 1379 1379 1380 1380 static int ionic_addr_del(struct net_device *netdev, const u8 *addr) 1381 1381 { 1382 + /* Don't delete our own address from the uc list */ 1383 + if (ether_addr_equal(addr, netdev->dev_addr)) 1384 + return 0; 1385 + 1382 1386 return ionic_lif_list_addr(netdev_priv(netdev), addr, DEL_ADDR); 1383 1387 } 1384 1388
+1
drivers/net/ethernet/qlogic/qed/qed_main.c
··· 1299 1299 } else { 1300 1300 DP_NOTICE(cdev, 1301 1301 "Failed to acquire PTT for aRFS\n"); 1302 + rc = -EINVAL; 1302 1303 goto err; 1303 1304 } 1304 1305 }
+1
drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c
··· 71 71 72 72 static const struct of_device_id dwmac_generic_match[] = { 73 73 { .compatible = "st,spear600-gmac"}, 74 + { .compatible = "snps,dwmac-3.40a"}, 74 75 { .compatible = "snps,dwmac-3.50a"}, 75 76 { .compatible = "snps,dwmac-3.610"}, 76 77 { .compatible = "snps,dwmac-3.70a"},
+11 -2
drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
··· 218 218 readl(ioaddr + DMA_BUS_MODE + i * 4); 219 219 } 220 220 221 - static void dwmac1000_get_hw_feature(void __iomem *ioaddr, 222 - struct dma_features *dma_cap) 221 + static int dwmac1000_get_hw_feature(void __iomem *ioaddr, 222 + struct dma_features *dma_cap) 223 223 { 224 224 u32 hw_cap = readl(ioaddr + DMA_HW_FEATURE); 225 + 226 + if (!hw_cap) { 227 + /* 0x00000000 is the value read on old hardware that does not 228 + * implement this register 229 + */ 230 + return -EOPNOTSUPP; 231 + } 225 232 226 233 dma_cap->mbps_10_100 = (hw_cap & DMA_HW_FEAT_MIISEL); 227 234 dma_cap->mbps_1000 = (hw_cap & DMA_HW_FEAT_GMIISEL) >> 1; ··· 259 252 dma_cap->number_tx_channel = (hw_cap & DMA_HW_FEAT_TXCHCNT) >> 22; 260 253 /* Alternate (enhanced) DESC mode */ 261 254 dma_cap->enh_desc = (hw_cap & DMA_HW_FEAT_ENHDESSEL) >> 24; 255 + 256 + return 0; 262 257 } 263 258 264 259 static void dwmac1000_rx_watchdog(void __iomem *ioaddr, u32 riwt,
+4 -2
drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
··· 347 347 writel(mtl_tx_op, ioaddr + MTL_CHAN_TX_OP_MODE(channel)); 348 348 } 349 349 350 - static void dwmac4_get_hw_feature(void __iomem *ioaddr, 351 - struct dma_features *dma_cap) 350 + static int dwmac4_get_hw_feature(void __iomem *ioaddr, 351 + struct dma_features *dma_cap) 352 352 { 353 353 u32 hw_cap = readl(ioaddr + GMAC_HW_FEATURE0); 354 354 ··· 437 437 dma_cap->frpbs = (hw_cap & GMAC_HW_FEAT_FRPBS) >> 11; 438 438 dma_cap->frpsel = (hw_cap & GMAC_HW_FEAT_FRPSEL) >> 10; 439 439 dma_cap->dvlan = (hw_cap & GMAC_HW_FEAT_DVLAN) >> 5; 440 + 441 + return 0; 440 442 } 441 443 442 444 /* Enable/disable TSO feature and set MSS */
+4 -2
drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
··· 371 371 return ret; 372 372 } 373 373 374 - static void dwxgmac2_get_hw_feature(void __iomem *ioaddr, 375 - struct dma_features *dma_cap) 374 + static int dwxgmac2_get_hw_feature(void __iomem *ioaddr, 375 + struct dma_features *dma_cap) 376 376 { 377 377 u32 hw_cap; 378 378 ··· 445 445 dma_cap->frpes = (hw_cap & XGMAC_HWFEAT_FRPES) >> 11; 446 446 dma_cap->frpbs = (hw_cap & XGMAC_HWFEAT_FRPPB) >> 9; 447 447 dma_cap->frpsel = (hw_cap & XGMAC_HWFEAT_FRPSEL) >> 3; 448 + 449 + return 0; 448 450 } 449 451 450 452 static void dwxgmac2_rx_watchdog(void __iomem *ioaddr, u32 riwt, u32 queue)
+3 -3
drivers/net/ethernet/stmicro/stmmac/hwif.h
··· 203 203 int (*dma_interrupt) (void __iomem *ioaddr, 204 204 struct stmmac_extra_stats *x, u32 chan, u32 dir); 205 205 /* If supported then get the optional core features */ 206 - void (*get_hw_feature)(void __iomem *ioaddr, 207 - struct dma_features *dma_cap); 206 + int (*get_hw_feature)(void __iomem *ioaddr, 207 + struct dma_features *dma_cap); 208 208 /* Program the HW RX Watchdog */ 209 209 void (*rx_watchdog)(void __iomem *ioaddr, u32 riwt, u32 queue); 210 210 void (*set_tx_ring_len)(void __iomem *ioaddr, u32 len, u32 chan); ··· 255 255 #define stmmac_dma_interrupt_status(__priv, __args...) \ 256 256 stmmac_do_callback(__priv, dma, dma_interrupt, __args) 257 257 #define stmmac_get_hw_feature(__priv, __args...) \ 258 - stmmac_do_void_callback(__priv, dma, get_hw_feature, __args) 258 + stmmac_do_callback(__priv, dma, get_hw_feature, __args) 259 259 #define stmmac_rx_watchdog(__priv, __args...) \ 260 260 stmmac_do_void_callback(__priv, dma, rx_watchdog, __args) 261 261 #define stmmac_set_tx_ring_len(__priv, __args...) \
+8
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 508 508 plat->pmt = 1; 509 509 } 510 510 511 + if (of_device_is_compatible(np, "snps,dwmac-3.40a")) { 512 + plat->has_gmac = 1; 513 + plat->enh_desc = 1; 514 + plat->tx_coe = 1; 515 + plat->bugged_jumbo = 1; 516 + plat->pmt = 1; 517 + } 518 + 511 519 if (of_device_is_compatible(np, "snps,dwmac-4.00") || 512 520 of_device_is_compatible(np, "snps,dwmac-4.10a") || 513 521 of_device_is_compatible(np, "snps,dwmac-4.20a") ||
+3
drivers/net/phy/phy_device.c
··· 3125 3125 { 3126 3126 struct phy_device *phydev = to_phy_device(dev); 3127 3127 3128 + if (phydev->state == PHY_READY || !phydev->attached_dev) 3129 + return; 3130 + 3128 3131 phy_disable_interrupts(phydev); 3129 3132 } 3130 3133
+4
drivers/net/usb/Kconfig
··· 99 99 config USB_RTL8152 100 100 tristate "Realtek RTL8152/RTL8153 Based USB Ethernet Adapters" 101 101 select MII 102 + select CRC32 103 + select CRYPTO 104 + select CRYPTO_HASH 105 + select CRYPTO_SHA256 102 106 help 103 107 This option adds support for Realtek RTL8152 based USB 2.0 104 108 10/100 Ethernet adapters and RTL8153 based USB 3.0 10/100/1000
+1 -1
drivers/net/virtio_net.c
··· 406 406 * add_recvbuf_mergeable() + get_mergeable_buf_len() 407 407 */ 408 408 truesize = headroom ? PAGE_SIZE : truesize; 409 - tailroom = truesize - len - headroom; 409 + tailroom = truesize - len - headroom - (hdr_padded_len - hdr_len); 410 410 buf = p - headroom; 411 411 412 412 len -= hdr_len;
+12 -9
drivers/nvme/host/core.c
··· 3550 3550 return 0; 3551 3551 } 3552 3552 3553 + static void nvme_cdev_rel(struct device *dev) 3554 + { 3555 + ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(dev->devt)); 3556 + } 3557 + 3553 3558 void nvme_cdev_del(struct cdev *cdev, struct device *cdev_device) 3554 3559 { 3555 3560 cdev_device_del(cdev, cdev_device); 3556 - ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(cdev_device->devt)); 3561 + put_device(cdev_device); 3557 3562 } 3558 3563 3559 3564 int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device, ··· 3571 3566 return minor; 3572 3567 cdev_device->devt = MKDEV(MAJOR(nvme_ns_chr_devt), minor); 3573 3568 cdev_device->class = nvme_ns_chr_class; 3569 + cdev_device->release = nvme_cdev_rel; 3574 3570 device_initialize(cdev_device); 3575 3571 cdev_init(cdev, fops); 3576 3572 cdev->owner = owner; 3577 3573 ret = cdev_device_add(cdev, cdev_device); 3578 - if (ret) { 3574 + if (ret) 3579 3575 put_device(cdev_device); 3580 - ida_simple_remove(&nvme_ns_chr_minor_ida, minor); 3581 - } 3576 + 3582 3577 return ret; 3583 3578 } 3584 3579 ··· 3610 3605 ns->ctrl->instance, ns->head->instance); 3611 3606 if (ret) 3612 3607 return ret; 3613 - ret = nvme_cdev_add(&ns->cdev, &ns->cdev_device, &nvme_ns_chr_fops, 3614 - ns->ctrl->ops->module); 3615 - if (ret) 3616 - kfree_const(ns->cdev_device.kobj.name); 3617 - return ret; 3608 + 3609 + return nvme_cdev_add(&ns->cdev, &ns->cdev_device, &nvme_ns_chr_fops, 3610 + ns->ctrl->ops->module); 3618 3611 } 3619 3612 3620 3613 static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
-2
drivers/nvme/host/multipath.c
··· 431 431 return ret; 432 432 ret = nvme_cdev_add(&head->cdev, &head->cdev_device, 433 433 &nvme_ns_head_chr_fops, THIS_MODULE); 434 - if (ret) 435 - kfree_const(head->cdev_device.kobj.name); 436 434 return ret; 437 435 } 438 436
+1 -1
drivers/nvme/host/pci.c
··· 1330 1330 iod->aborted = 1; 1331 1331 1332 1332 cmd.abort.opcode = nvme_admin_abort_cmd; 1333 - cmd.abort.cid = req->tag; 1333 + cmd.abort.cid = nvme_cid(req); 1334 1334 cmd.abort.sqid = cpu_to_le16(nvmeq->qid); 1335 1335 1336 1336 dev_warn(nvmeq->dev->ctrl.device,
+2 -1
drivers/nvmem/core.c
··· 1383 1383 *p-- = 0; 1384 1384 1385 1385 /* clear msb bits if any leftover in the last byte */ 1386 - *p &= GENMASK((cell->nbits%BITS_PER_BYTE) - 1, 0); 1386 + if (cell->nbits % BITS_PER_BYTE) 1387 + *p &= GENMASK((cell->nbits % BITS_PER_BYTE) - 1, 0); 1387 1388 } 1388 1389 1389 1390 static int __nvmem_cell_read(struct nvmem_device *nvmem,
+12 -6
drivers/pci/msi.c
··· 535 535 static int msi_capability_init(struct pci_dev *dev, int nvec, 536 536 struct irq_affinity *affd) 537 537 { 538 + const struct attribute_group **groups; 538 539 struct msi_desc *entry; 539 540 int ret; 540 541 ··· 559 558 if (ret) 560 559 goto err; 561 560 562 - dev->msi_irq_groups = msi_populate_sysfs(&dev->dev); 563 - if (IS_ERR(dev->msi_irq_groups)) { 564 - ret = PTR_ERR(dev->msi_irq_groups); 561 + groups = msi_populate_sysfs(&dev->dev); 562 + if (IS_ERR(groups)) { 563 + ret = PTR_ERR(groups); 565 564 goto err; 566 565 } 566 + 567 + dev->msi_irq_groups = groups; 567 568 568 569 /* Set MSI enabled bits */ 569 570 pci_intx_for_msi(dev, 0); ··· 694 691 static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries, 695 692 int nvec, struct irq_affinity *affd) 696 693 { 694 + const struct attribute_group **groups; 697 695 void __iomem *base; 698 696 int ret, tsize; 699 697 u16 control; ··· 734 730 735 731 msix_update_entries(dev, entries); 736 732 737 - dev->msi_irq_groups = msi_populate_sysfs(&dev->dev); 738 - if (IS_ERR(dev->msi_irq_groups)) { 739 - ret = PTR_ERR(dev->msi_irq_groups); 733 + groups = msi_populate_sysfs(&dev->dev); 734 + if (IS_ERR(groups)) { 735 + ret = PTR_ERR(groups); 740 736 goto out_free; 741 737 } 738 + 739 + dev->msi_irq_groups = groups; 742 740 743 741 /* Set MSI-X enabled bits and unmask the function */ 744 742 pci_intx_for_msi(dev, 0);
+2 -2
drivers/platform/mellanox/mlxreg-io.c
··· 98 98 if (ret) 99 99 goto access_error; 100 100 101 - *regval |= rol32(val, regsize * i); 101 + *regval |= rol32(val, regsize * i * 8); 102 102 } 103 103 } 104 104 ··· 141 141 return -EINVAL; 142 142 143 143 /* Convert buffer to input value. */ 144 - ret = kstrtou32(buf, len, &input_val); 144 + ret = kstrtou32(buf, 0, &input_val); 145 145 if (ret) 146 146 return ret; 147 147
+1
drivers/platform/x86/amd-pmc.c
··· 476 476 {"AMDI0006", 0}, 477 477 {"AMDI0007", 0}, 478 478 {"AMD0004", 0}, 479 + {"AMD0005", 0}, 479 480 { } 480 481 }; 481 482 MODULE_DEVICE_TABLE(acpi, amd_pmc_acpi_ids);
+1
drivers/platform/x86/dell/Kconfig
··· 167 167 config DELL_WMI_PRIVACY 168 168 bool "Dell WMI Hardware Privacy Support" 169 169 depends on LEDS_TRIGGER_AUDIO = y || DELL_WMI = LEDS_TRIGGER_AUDIO 170 + depends on DELL_WMI 170 171 help 171 172 This option adds integration with the "Dell Hardware Privacy" 172 173 feature of Dell laptops to the dell-wmi driver.
+1
drivers/platform/x86/gigabyte-wmi.c
··· 141 141 142 142 static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = { 143 143 DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M S2H V2"), 144 + DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE AX V2"), 144 145 DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE"), 145 146 DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE V2"), 146 147 DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 GAMING X V2"),
+14 -7
drivers/platform/x86/intel/int1092/intel_sar.c
··· 42 42 43 43 if (config->device_mode_info && 44 44 context->sar_data.device_mode < config->total_dev_mode) { 45 - struct wwan_device_mode_info *dev_mode = 46 - &config->device_mode_info[context->sar_data.device_mode]; 45 + int itr = 0; 47 46 48 - context->sar_data.antennatable_index = dev_mode->antennatable_index; 49 - context->sar_data.bandtable_index = dev_mode->bandtable_index; 50 - context->sar_data.sartable_index = dev_mode->sartable_index; 47 + for (itr = 0; itr < config->total_dev_mode; itr++) { 48 + if (context->sar_data.device_mode == 49 + config->device_mode_info[itr].device_mode) { 50 + struct wwan_device_mode_info *dev_mode = 51 + &config->device_mode_info[itr]; 52 + 53 + context->sar_data.antennatable_index = dev_mode->antennatable_index; 54 + context->sar_data.bandtable_index = dev_mode->bandtable_index; 55 + context->sar_data.sartable_index = dev_mode->sartable_index; 56 + break; 57 + } 58 + } 51 59 } 52 60 } 53 61 ··· 313 305 .remove = sar_remove, 314 306 .driver = { 315 307 .name = DRVNAME, 316 - .owner = THIS_MODULE, 317 308 .acpi_match_table = ACPI_PTR(sar_device_ids) 318 309 } 319 310 }; ··· 320 313 321 314 MODULE_LICENSE("GPL v2"); 322 315 MODULE_DESCRIPTION("Platform device driver for INTEL MODEM BIOS SAR"); 323 - MODULE_AUTHOR("Shravan S <s.shravan@intel.com>"); 316 + MODULE_AUTHOR("Shravan Sudhakar <s.shravan@intel.com>");
+1 -1
drivers/platform/x86/intel/int3472/intel_skl_int3472_discrete.c
··· 401 401 402 402 gpiod_remove_lookup_table(&int3472->gpios); 403 403 404 - if (int3472->clock.ena_gpio) 404 + if (int3472->clock.cl) 405 405 skl_int3472_unregister_clock(int3472); 406 406 407 407 gpiod_put(int3472->clock.ena_gpio);
+3 -3
drivers/platform/x86/intel_scu_ipc.c
··· 75 75 #define IPC_READ_BUFFER 0x90 76 76 77 77 /* Timeout in jiffies */ 78 - #define IPC_TIMEOUT (5 * HZ) 78 + #define IPC_TIMEOUT (10 * HZ) 79 79 80 80 static struct intel_scu_ipc_dev *ipcdev; /* Only one for now */ 81 81 static DEFINE_MUTEX(ipclock); /* lock used to prevent multiple call to SCU */ ··· 232 232 /* Wait till scu status is busy */ 233 233 static inline int busy_loop(struct intel_scu_ipc_dev *scu) 234 234 { 235 - unsigned long end = jiffies + msecs_to_jiffies(IPC_TIMEOUT); 235 + unsigned long end = jiffies + IPC_TIMEOUT; 236 236 237 237 do { 238 238 u32 status; ··· 247 247 return -ETIMEDOUT; 248 248 } 249 249 250 - /* Wait till ipc ioc interrupt is received or timeout in 3 HZ */ 250 + /* Wait till ipc ioc interrupt is received or timeout in 10 HZ */ 251 251 static inline int ipc_wait_for_interrupt(struct intel_scu_ipc_dev *scu) 252 252 { 253 253 int status;
-1
drivers/soc/canaan/Kconfig
··· 5 5 depends on RISCV && SOC_CANAAN && OF 6 6 default SOC_CANAAN 7 7 select PM 8 - select SIMPLE_PM_BUS 9 8 select SYSCON 10 9 select MFD_SYSCON 11 10 help
+2 -2
drivers/spi/spi-atmel.c
··· 1301 1301 * DMA map early, for performance (empties dcache ASAP) and 1302 1302 * better fault reporting. 1303 1303 */ 1304 - if ((!master->cur_msg_mapped) 1304 + if ((!master->cur_msg->is_dma_mapped) 1305 1305 && as->use_pdc) { 1306 1306 if (atmel_spi_dma_map_xfer(as, xfer) < 0) 1307 1307 return -ENOMEM; ··· 1381 1381 } 1382 1382 } 1383 1383 1384 - if (!master->cur_msg_mapped 1384 + if (!master->cur_msg->is_dma_mapped 1385 1385 && as->use_pdc) 1386 1386 atmel_spi_dma_unmap_xfer(master, xfer); 1387 1387
+45 -32
drivers/spi/spi-bcm-qspi.c
··· 1250 1250 1251 1251 static void bcm_qspi_hw_uninit(struct bcm_qspi *qspi) 1252 1252 { 1253 + u32 status = bcm_qspi_read(qspi, MSPI, MSPI_MSPI_STATUS); 1254 + 1253 1255 bcm_qspi_write(qspi, MSPI, MSPI_SPCR2, 0); 1254 1256 if (has_bspi(qspi)) 1255 1257 bcm_qspi_write(qspi, MSPI, MSPI_WRITE_LOCK, 0); 1256 1258 1259 + /* clear interrupt */ 1260 + bcm_qspi_write(qspi, MSPI, MSPI_MSPI_STATUS, status & ~1); 1257 1261 } 1258 1262 1259 1263 static const struct spi_controller_mem_ops bcm_qspi_mem_ops = { ··· 1401 1397 if (!qspi->dev_ids) 1402 1398 return -ENOMEM; 1403 1399 1400 + /* 1401 + * Some SoCs integrate spi controller (e.g., its interrupt bits) 1402 + * in specific ways 1403 + */ 1404 + if (soc_intc) { 1405 + qspi->soc_intc = soc_intc; 1406 + soc_intc->bcm_qspi_int_set(soc_intc, MSPI_DONE, true); 1407 + } else { 1408 + qspi->soc_intc = NULL; 1409 + } 1410 + 1411 + if (qspi->clk) { 1412 + ret = clk_prepare_enable(qspi->clk); 1413 + if (ret) { 1414 + dev_err(dev, "failed to prepare clock\n"); 1415 + goto qspi_probe_err; 1416 + } 1417 + qspi->base_clk = clk_get_rate(qspi->clk); 1418 + } else { 1419 + qspi->base_clk = MSPI_BASE_FREQ; 1420 + } 1421 + 1422 + if (data->has_mspi_rev) { 1423 + rev = bcm_qspi_read(qspi, MSPI, MSPI_REV); 1424 + /* some older revs do not have a MSPI_REV register */ 1425 + if ((rev & 0xff) == 0xff) 1426 + rev = 0; 1427 + } 1428 + 1429 + qspi->mspi_maj_rev = (rev >> 4) & 0xf; 1430 + qspi->mspi_min_rev = rev & 0xf; 1431 + qspi->mspi_spcr3_sysclk = data->has_spcr3_sysclk; 1432 + 1433 + qspi->max_speed_hz = qspi->base_clk / (bcm_qspi_spbr_min(qspi) * 2); 1434 + 1435 + /* 1436 + * On SW resets it is possible to have the mask still enabled 1437 + * Need to disable the mask and clear the status while we init 1438 + */ 1439 + bcm_qspi_hw_uninit(qspi); 1440 + 1404 1441 for (val = 0; val < num_irqs; val++) { 1405 1442 irq = -1; 1406 1443 name = qspi_irq_tab[val].irq_name; ··· 1477 1432 ret = -EINVAL; 1478 1433 goto qspi_probe_err; 1479 1434 } 1480 - 
1481 - /* 1482 - * Some SoCs integrate spi controller (e.g., its interrupt bits) 1483 - * in specific ways 1484 - */ 1485 - if (soc_intc) { 1486 - qspi->soc_intc = soc_intc; 1487 - soc_intc->bcm_qspi_int_set(soc_intc, MSPI_DONE, true); 1488 - } else { 1489 - qspi->soc_intc = NULL; 1490 - } 1491 - 1492 - ret = clk_prepare_enable(qspi->clk); 1493 - if (ret) { 1494 - dev_err(dev, "failed to prepare clock\n"); 1495 - goto qspi_probe_err; 1496 - } 1497 - 1498 - qspi->base_clk = clk_get_rate(qspi->clk); 1499 - 1500 - if (data->has_mspi_rev) { 1501 - rev = bcm_qspi_read(qspi, MSPI, MSPI_REV); 1502 - /* some older revs do not have a MSPI_REV register */ 1503 - if ((rev & 0xff) == 0xff) 1504 - rev = 0; 1505 - } 1506 - 1507 - qspi->mspi_maj_rev = (rev >> 4) & 0xf; 1508 - qspi->mspi_min_rev = rev & 0xf; 1509 - qspi->mspi_spcr3_sysclk = data->has_spcr3_sysclk; 1510 - 1511 - qspi->max_speed_hz = qspi->base_clk / (bcm_qspi_spbr_min(qspi) * 2); 1512 1435 1513 1436 bcm_qspi_hw_init(qspi); 1514 1437 init_completion(&qspi->mspi_done);
+36 -28
drivers/spi/spi-mt65xx.c
··· 233 233 return delay; 234 234 inactive = (delay * DIV_ROUND_UP(mdata->spi_clk_hz, 1000000)) / 1000; 235 235 236 - setup = setup ? setup : 1; 237 - hold = hold ? hold : 1; 238 - inactive = inactive ? inactive : 1; 239 - 240 - reg_val = readl(mdata->base + SPI_CFG0_REG); 241 - if (mdata->dev_comp->enhance_timing) { 242 - hold = min_t(u32, hold, 0x10000); 243 - setup = min_t(u32, setup, 0x10000); 244 - reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_HOLD_OFFSET); 245 - reg_val |= (((hold - 1) & 0xffff) 246 - << SPI_ADJUST_CFG0_CS_HOLD_OFFSET); 247 - reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_SETUP_OFFSET); 248 - reg_val |= (((setup - 1) & 0xffff) 249 - << SPI_ADJUST_CFG0_CS_SETUP_OFFSET); 250 - } else { 251 - hold = min_t(u32, hold, 0x100); 252 - setup = min_t(u32, setup, 0x100); 253 - reg_val &= ~(0xff << SPI_CFG0_CS_HOLD_OFFSET); 254 - reg_val |= (((hold - 1) & 0xff) << SPI_CFG0_CS_HOLD_OFFSET); 255 - reg_val &= ~(0xff << SPI_CFG0_CS_SETUP_OFFSET); 256 - reg_val |= (((setup - 1) & 0xff) 257 - << SPI_CFG0_CS_SETUP_OFFSET); 236 + if (hold || setup) { 237 + reg_val = readl(mdata->base + SPI_CFG0_REG); 238 + if (mdata->dev_comp->enhance_timing) { 239 + if (hold) { 240 + hold = min_t(u32, hold, 0x10000); 241 + reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_HOLD_OFFSET); 242 + reg_val |= (((hold - 1) & 0xffff) 243 + << SPI_ADJUST_CFG0_CS_HOLD_OFFSET); 244 + } 245 + if (setup) { 246 + setup = min_t(u32, setup, 0x10000); 247 + reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_SETUP_OFFSET); 248 + reg_val |= (((setup - 1) & 0xffff) 249 + << SPI_ADJUST_CFG0_CS_SETUP_OFFSET); 250 + } 251 + } else { 252 + if (hold) { 253 + hold = min_t(u32, hold, 0x100); 254 + reg_val &= ~(0xff << SPI_CFG0_CS_HOLD_OFFSET); 255 + reg_val |= (((hold - 1) & 0xff) << SPI_CFG0_CS_HOLD_OFFSET); 256 + } 257 + if (setup) { 258 + setup = min_t(u32, setup, 0x100); 259 + reg_val &= ~(0xff << SPI_CFG0_CS_SETUP_OFFSET); 260 + reg_val |= (((setup - 1) & 0xff) 261 + << SPI_CFG0_CS_SETUP_OFFSET); 262 + } 263 + } 264 + 
writel(reg_val, mdata->base + SPI_CFG0_REG); 258 265 } 259 - writel(reg_val, mdata->base + SPI_CFG0_REG); 260 266 261 - inactive = min_t(u32, inactive, 0x100); 262 - reg_val = readl(mdata->base + SPI_CFG1_REG); 263 - reg_val &= ~SPI_CFG1_CS_IDLE_MASK; 264 - reg_val |= (((inactive - 1) & 0xff) << SPI_CFG1_CS_IDLE_OFFSET); 265 - writel(reg_val, mdata->base + SPI_CFG1_REG); 267 + if (inactive) { 268 + inactive = min_t(u32, inactive, 0x100); 269 + reg_val = readl(mdata->base + SPI_CFG1_REG); 270 + reg_val &= ~SPI_CFG1_CS_IDLE_MASK; 271 + reg_val |= (((inactive - 1) & 0xff) << SPI_CFG1_CS_IDLE_OFFSET); 272 + writel(reg_val, mdata->base + SPI_CFG1_REG); 273 + } 266 274 267 275 return 0; 268 276 }
+7
drivers/spi/spi-mux.c
··· 137 137 priv = spi_controller_get_devdata(ctlr); 138 138 priv->spi = spi; 139 139 140 + /* 141 + * Increase lockdep class as these lock are taken while the parent bus 142 + * already holds their instance's lock. 143 + */ 144 + lockdep_set_subclass(&ctlr->io_mutex, 1); 145 + lockdep_set_subclass(&ctlr->add_lock, 1); 146 + 140 147 priv->mux = devm_mux_control_get(&spi->dev, NULL); 141 148 if (IS_ERR(priv->mux)) { 142 149 ret = dev_err_probe(&spi->dev, PTR_ERR(priv->mux),
+7 -19
drivers/spi/spi-nxp-fspi.c
··· 33 33 34 34 #include <linux/acpi.h> 35 35 #include <linux/bitops.h> 36 + #include <linux/bitfield.h> 36 37 #include <linux/clk.h> 37 38 #include <linux/completion.h> 38 39 #include <linux/delay.h> ··· 316 315 #define NXP_FSPI_MIN_IOMAP SZ_4M 317 316 318 317 #define DCFG_RCWSR1 0x100 318 + #define SYS_PLL_RAT GENMASK(6, 2) 319 319 320 320 /* Access flash memory using IP bus only */ 321 321 #define FSPI_QUIRK_USE_IP_ONLY BIT(0) ··· 928 926 { .family = "QorIQ LS1028A" }, 929 927 { /* sentinel */ } 930 928 }; 931 - struct device_node *np; 932 929 struct regmap *map; 933 - u32 val = 0, sysclk = 0; 930 + u32 val, sys_pll_ratio; 934 931 int ret; 935 932 936 933 /* Check for LS1028A family */ ··· 938 937 return; 939 938 } 940 939 941 - /* Compute system clock frequency multiplier ratio */ 942 940 map = syscon_regmap_lookup_by_compatible("fsl,ls1028a-dcfg"); 943 941 if (IS_ERR(map)) { 944 942 dev_err(f->dev, "No syscon regmap\n"); ··· 948 948 if (ret < 0) 949 949 goto err; 950 950 951 - /* Strap bits 6:2 define SYS_PLL_RAT i.e frequency multiplier ratio */ 952 - val = (val >> 2) & 0x1F; 953 - WARN(val == 0, "Strapping is zero: Cannot determine ratio"); 951 + sys_pll_ratio = FIELD_GET(SYS_PLL_RAT, val); 952 + dev_dbg(f->dev, "val: 0x%08x, sys_pll_ratio: %d\n", val, sys_pll_ratio); 954 953 955 - /* Compute system clock frequency */ 956 - np = of_find_node_by_name(NULL, "clock-sysclk"); 957 - if (!np) 958 - goto err; 959 - 960 - if (of_property_read_u32(np, "clock-frequency", &sysclk)) 961 - goto err; 962 - 963 - sysclk = (sysclk * val) / 1000000; /* Convert sysclk to Mhz */ 964 - dev_dbg(f->dev, "val: 0x%08x, sysclk: %dMhz\n", val, sysclk); 965 - 966 - /* Use IP bus only if PLL is 300MHz */ 967 - if (sysclk == 300) 954 + /* Use IP bus only if platform clock is 300MHz */ 955 + if (sys_pll_ratio == 3) 968 956 f->devtype_data->quirks |= FSPI_QUIRK_USE_IP_ONLY; 969 957 970 958 return;
+1 -3
drivers/spi/spi-tegra20-slink.c
··· 1182 1182 } 1183 1183 #endif 1184 1184 1185 - #ifdef CONFIG_PM 1186 - static int tegra_slink_runtime_suspend(struct device *dev) 1185 + static int __maybe_unused tegra_slink_runtime_suspend(struct device *dev) 1187 1186 { 1188 1187 struct spi_master *master = dev_get_drvdata(dev); 1189 1188 struct tegra_slink_data *tspi = spi_master_get_devdata(master); ··· 1207 1208 } 1208 1209 return 0; 1209 1210 } 1210 - #endif /* CONFIG_PM */ 1211 1211 1212 1212 static const struct dev_pm_ops slink_pm_ops = { 1213 1213 SET_RUNTIME_PM_OPS(tegra_slink_runtime_suspend,
+11 -16
drivers/spi/spi.c
··· 478 478 */ 479 479 static DEFINE_MUTEX(board_lock); 480 480 481 - /* 482 - * Prevents addition of devices with same chip select and 483 - * addition of devices below an unregistering controller. 484 - */ 485 - static DEFINE_MUTEX(spi_add_lock); 486 - 487 481 /** 488 482 * spi_alloc_device - Allocate a new SPI device 489 483 * @ctlr: Controller to which device is connected ··· 630 636 * chipselect **BEFORE** we call setup(), else we'll trash 631 637 * its configuration. Lock against concurrent add() calls. 632 638 */ 633 - mutex_lock(&spi_add_lock); 639 + mutex_lock(&ctlr->add_lock); 634 640 status = __spi_add_device(spi); 635 - mutex_unlock(&spi_add_lock); 641 + mutex_unlock(&ctlr->add_lock); 636 642 return status; 637 643 } 638 644 EXPORT_SYMBOL_GPL(spi_add_device); ··· 652 658 /* Set the bus ID string */ 653 659 spi_dev_set_name(spi); 654 660 655 - WARN_ON(!mutex_is_locked(&spi_add_lock)); 661 + WARN_ON(!mutex_is_locked(&ctlr->add_lock)); 656 662 return __spi_add_device(spi); 657 663 } 658 664 ··· 2547 2553 return NULL; 2548 2554 2549 2555 device_initialize(&ctlr->dev); 2556 + INIT_LIST_HEAD(&ctlr->queue); 2557 + spin_lock_init(&ctlr->queue_lock); 2558 + spin_lock_init(&ctlr->bus_lock_spinlock); 2559 + mutex_init(&ctlr->bus_lock_mutex); 2560 + mutex_init(&ctlr->io_mutex); 2561 + mutex_init(&ctlr->add_lock); 2550 2562 ctlr->bus_num = -1; 2551 2563 ctlr->num_chipselect = 1; 2552 2564 ctlr->slave = slave; ··· 2825 2825 return id; 2826 2826 ctlr->bus_num = id; 2827 2827 } 2828 - INIT_LIST_HEAD(&ctlr->queue); 2829 - spin_lock_init(&ctlr->queue_lock); 2830 - spin_lock_init(&ctlr->bus_lock_spinlock); 2831 - mutex_init(&ctlr->bus_lock_mutex); 2832 - mutex_init(&ctlr->io_mutex); 2833 2828 ctlr->bus_lock_flag = 0; 2834 2829 init_completion(&ctlr->xfer_completion); 2835 2830 if (!ctlr->max_dma_len) ··· 2961 2966 2962 2967 /* Prevent addition of new devices, unregister existing ones */ 2963 2968 if (IS_ENABLED(CONFIG_SPI_DYNAMIC)) 2964 - mutex_lock(&spi_add_lock); 2969 + 
mutex_lock(&ctlr->add_lock); 2965 2970 2966 2971 device_for_each_child(&ctlr->dev, NULL, __unregister); 2967 2972 ··· 2992 2997 mutex_unlock(&board_lock); 2993 2998 2994 2999 if (IS_ENABLED(CONFIG_SPI_DYNAMIC)) 2995 - mutex_unlock(&spi_add_lock); 3000 + mutex_unlock(&ctlr->add_lock); 2996 3001 } 2997 3002 EXPORT_SYMBOL_GPL(spi_unregister_controller); 2998 3003
+14
drivers/spi/spidev.c
··· 673 673 674 674 static struct class *spidev_class; 675 675 676 + static const struct spi_device_id spidev_spi_ids[] = { 677 + { .name = "dh2228fv" }, 678 + { .name = "ltc2488" }, 679 + { .name = "sx1301" }, 680 + { .name = "bk4" }, 681 + { .name = "dhcom-board" }, 682 + { .name = "m53cpld" }, 683 + { .name = "spi-petra" }, 684 + { .name = "spi-authenta" }, 685 + {}, 686 + }; 687 + MODULE_DEVICE_TABLE(spi, spidev_spi_ids); 688 + 676 689 #ifdef CONFIG_OF 677 690 static const struct of_device_id spidev_dt_ids[] = { 678 691 { .compatible = "rohm,dh2228fv" }, ··· 831 818 }, 832 819 .probe = spidev_probe, 833 820 .remove = spidev_remove, 821 + .id_table = spidev_spi_ids, 834 822 835 823 /* NOTE: suspend/resume methods are not necessary here. 836 824 * We don't do anything except pass the requests to/from
+1 -1
drivers/staging/r8188eu/hal/hal_intf.c
··· 248 248 #ifdef CONFIG_88EU_AP_MODE 249 249 struct sta_info *psta = NULL; 250 250 struct sta_priv *pstapriv = &adapt->stapriv; 251 - if ((mac_id - 1) > 0) 251 + if (mac_id >= 2) 252 252 psta = pstapriv->sta_aid[(mac_id - 1) - 1]; 253 253 if (psta) 254 254 add_RATid(adapt, psta, 0);/* todo: based on rssi_level*/
+1 -1
drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
··· 182 182 offset = (uintptr_t)ubuf & (PAGE_SIZE - 1); 183 183 num_pages = DIV_ROUND_UP(count + offset, PAGE_SIZE); 184 184 185 - if (num_pages > (SIZE_MAX - sizeof(struct pagelist) - 185 + if ((size_t)num_pages > (SIZE_MAX - sizeof(struct pagelist) - 186 186 sizeof(struct vchiq_pagelist_info)) / 187 187 (sizeof(u32) + sizeof(pages[0]) + 188 188 sizeof(struct scatterlist)))
+3
drivers/tee/optee/core.c
··· 585 585 { 586 586 struct optee *optee = platform_get_drvdata(pdev); 587 587 588 + /* Unregister OP-TEE specific client devices on TEE bus */ 589 + optee_unregister_devices(); 590 + 588 591 /* 589 592 * Ask OP-TEE to free all cached shared memory objects to decrease 590 593 * reference counters and also avoid wild pointers in secure world
+22
drivers/tee/optee/device.c
··· 53 53 return 0; 54 54 } 55 55 56 + static void optee_release_device(struct device *dev) 57 + { 58 + struct tee_client_device *optee_device = to_tee_client_device(dev); 59 + 60 + kfree(optee_device); 61 + } 62 + 56 63 static int optee_register_device(const uuid_t *device_uuid) 57 64 { 58 65 struct tee_client_device *optee_device = NULL; ··· 70 63 return -ENOMEM; 71 64 72 65 optee_device->dev.bus = &tee_bus_type; 66 + optee_device->dev.release = optee_release_device; 73 67 if (dev_set_name(&optee_device->dev, "optee-ta-%pUb", device_uuid)) { 74 68 kfree(optee_device); 75 69 return -ENOMEM; ··· 161 153 int optee_enumerate_devices(u32 func) 162 154 { 163 155 return __optee_enumerate_devices(func); 156 + } 157 + 158 + static int __optee_unregister_device(struct device *dev, void *data) 159 + { 160 + if (!strncmp(dev_name(dev), "optee-ta", strlen("optee-ta"))) 161 + device_unregister(dev); 162 + 163 + return 0; 164 + } 165 + 166 + void optee_unregister_devices(void) 167 + { 168 + bus_for_each_dev(&tee_bus_type, NULL, NULL, 169 + __optee_unregister_device); 164 170 }
+1
drivers/tee/optee/optee_private.h
··· 184 184 #define PTA_CMD_GET_DEVICES 0x0 185 185 #define PTA_CMD_GET_DEVICES_SUPP 0x1 186 186 int optee_enumerate_devices(u32 func); 187 + void optee_unregister_devices(void); 187 188 188 189 /* 189 190 * Small helpers
+1
drivers/thunderbolt/Makefile
··· 7 7 thunderbolt-${CONFIG_ACPI} += acpi.o 8 8 thunderbolt-$(CONFIG_DEBUG_FS) += debugfs.o 9 9 thunderbolt-${CONFIG_USB4_KUNIT_TEST} += test.o 10 + CFLAGS_test.o += $(DISABLE_STRUCTLEAK_PLUGIN) 10 11 11 12 thunderbolt_dma_test-${CONFIG_USB4_DMA_TEST} += dma_test.o 12 13 obj-$(CONFIG_USB4_DMA_TEST) += thunderbolt_dma_test.o
+6 -2
drivers/tty/serial/8250/Kconfig
··· 361 361 If unsure, say N. 362 362 363 363 config SERIAL_8250_FSL 364 - bool 364 + bool "Freescale 16550 UART support" if COMPILE_TEST && !(PPC || ARM || ARM64) 365 365 depends on SERIAL_8250_CONSOLE 366 - default PPC || ARM || ARM64 || COMPILE_TEST 366 + default PPC || ARM || ARM64 367 + help 368 + Selecting this option enables a workaround for a break-detection 369 + erratum for Freescale 16550 UARTs in the 8250 driver. It also 370 + enables support for ACPI enumeration. 367 371 368 372 config SERIAL_8250_DW 369 373 tristate "Support for Synopsys DesignWare 8250 quirks"
+13 -15
drivers/usb/host/xhci-dbgtty.c
··· 408 408 return -EBUSY; 409 409 410 410 xhci_dbc_tty_init_port(dbc, port); 411 - tty_dev = tty_port_register_device(&port->port, 412 - dbc_tty_driver, 0, NULL); 413 - if (IS_ERR(tty_dev)) { 414 - ret = PTR_ERR(tty_dev); 415 - goto register_fail; 416 - } 417 411 418 412 ret = kfifo_alloc(&port->write_fifo, DBC_WRITE_BUF_SIZE, GFP_KERNEL); 419 413 if (ret) 420 - goto buf_alloc_fail; 414 + goto err_exit_port; 421 415 422 416 ret = xhci_dbc_alloc_requests(dbc, BULK_IN, &port->read_pool, 423 417 dbc_read_complete); 424 418 if (ret) 425 - goto request_fail; 419 + goto err_free_fifo; 426 420 427 421 ret = xhci_dbc_alloc_requests(dbc, BULK_OUT, &port->write_pool, 428 422 dbc_write_complete); 429 423 if (ret) 430 - goto request_fail; 424 + goto err_free_requests; 425 + 426 + tty_dev = tty_port_register_device(&port->port, 427 + dbc_tty_driver, 0, NULL); 428 + if (IS_ERR(tty_dev)) { 429 + ret = PTR_ERR(tty_dev); 430 + goto err_free_requests; 431 + } 431 432 432 433 port->registered = true; 433 434 434 435 return 0; 435 436 436 - request_fail: 437 + err_free_requests: 437 438 xhci_dbc_free_requests(&port->read_pool); 438 439 xhci_dbc_free_requests(&port->write_pool); 440 + err_free_fifo: 439 441 kfifo_free(&port->write_fifo); 440 - 441 - buf_alloc_fail: 442 - tty_unregister_device(dbc_tty_driver, 0); 443 - 444 - register_fail: 442 + err_exit_port: 445 443 xhci_dbc_tty_exit_port(port); 446 444 447 445 dev_err(dbc->dev, "can't register tty port, err %d\n", ret);
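The xhci-dbgtty change above reorders setup so the tty device is registered last, and renames the error labels so each label unwinds exactly the resources acquired before the failing step. That is the standard kernel goto-unwind idiom; a minimal userspace sketch of the pattern (hypothetical resources, not the driver's actual code):

```c
#include <stdlib.h>

/* Acquire three resources in order; on failure, unwind only what was
 * already acquired, in reverse order (kernel-style goto unwind). */
int setup_three(void **a, void **b, void **c)
{
	*a = malloc(16);
	if (!*a)
		goto err_out;
	*b = malloc(16);
	if (!*b)
		goto err_free_a;
	*c = malloc(16);
	if (!*c)
		goto err_free_b;
	return 0;		/* success: caller owns a, b and c */

err_free_b:
	free(*b);
err_free_a:
	free(*a);
err_out:
	*a = *b = *c = NULL;
	return -1;
}
```

The key property, restored by the patch, is that every label frees only resources that are guaranteed to exist when that label is reached.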
+5 -1
drivers/usb/host/xhci-pci.c
··· 30 30 #define PCI_VENDOR_ID_FRESCO_LOGIC 0x1b73 31 31 #define PCI_DEVICE_ID_FRESCO_LOGIC_PDK 0x1000 32 32 #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1009 0x1009 33 + #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1100 0x1100 33 34 #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1400 0x1400 34 35 35 36 #define PCI_VENDOR_ID_ETRON 0x1b6f ··· 121 120 /* Look for vendor-specific quirks */ 122 121 if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC && 123 122 (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK || 123 + pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1100 || 124 124 pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1400)) { 125 125 if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK && 126 126 pdev->revision == 0x0) { ··· 288 286 pdev->device == 0x3432) 289 287 xhci->quirks |= XHCI_BROKEN_STREAMS; 290 288 291 - if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483) 289 + if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483) { 292 290 xhci->quirks |= XHCI_LPM_SUPPORT; 291 + xhci->quirks |= XHCI_EP_CTX_BROKEN_DCS; 292 + } 293 293 294 294 if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && 295 295 pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
+34 -5
drivers/usb/host/xhci-ring.c
··· 366 366 /* Must be called with xhci->lock held, releases and aquires lock back */ 367 367 static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags) 368 368 { 369 - u64 temp_64; 369 + u32 temp_32; 370 370 int ret; 371 371 372 372 xhci_dbg(xhci, "Abort command ring\n"); 373 373 374 374 reinit_completion(&xhci->cmd_ring_stop_completion); 375 375 376 - temp_64 = xhci_read_64(xhci, &xhci->op_regs->cmd_ring); 377 - xhci_write_64(xhci, temp_64 | CMD_RING_ABORT, 378 - &xhci->op_regs->cmd_ring); 376 + /* 377 + * The control bits like command stop, abort are located in lower 378 + * dword of the command ring control register. Limit the write 379 + * to the lower dword to avoid corrupting the command ring pointer 380 + * in case if the command ring is stopped by the time upper dword 381 + * is written. 382 + */ 383 + temp_32 = readl(&xhci->op_regs->cmd_ring); 384 + writel(temp_32 | CMD_RING_ABORT, &xhci->op_regs->cmd_ring); 379 385 380 386 /* Section 4.6.1.2 of xHCI 1.0 spec says software should also time the 381 387 * completion of the Command Abort operation. If CRR is not negated in 5 ··· 565 559 struct xhci_ring *ep_ring; 566 560 struct xhci_command *cmd; 567 561 struct xhci_segment *new_seg; 562 + struct xhci_segment *halted_seg = NULL; 568 563 union xhci_trb *new_deq; 569 564 int new_cycle; 565 + union xhci_trb *halted_trb; 566 + int index = 0; 570 567 dma_addr_t addr; 571 568 u64 hw_dequeue; 572 569 bool cycle_found = false; ··· 607 598 hw_dequeue = xhci_get_hw_deq(xhci, dev, ep_index, stream_id); 608 599 new_seg = ep_ring->deq_seg; 609 600 new_deq = ep_ring->dequeue; 610 - new_cycle = hw_dequeue & 0x1; 601 + 602 + /* 603 + * Quirk: xHC write-back of the DCS field in the hardware dequeue 604 + * pointer is wrong - use the cycle state of the TRB pointed to by 605 + * the dequeue pointer. 
606 + */ 607 + if (xhci->quirks & XHCI_EP_CTX_BROKEN_DCS && 608 + !(ep->ep_state & EP_HAS_STREAMS)) 609 + halted_seg = trb_in_td(xhci, td->start_seg, 610 + td->first_trb, td->last_trb, 611 + hw_dequeue & ~0xf, false); 612 + if (halted_seg) { 613 + index = ((dma_addr_t)(hw_dequeue & ~0xf) - halted_seg->dma) / 614 + sizeof(*halted_trb); 615 + halted_trb = &halted_seg->trbs[index]; 616 + new_cycle = halted_trb->generic.field[3] & 0x1; 617 + xhci_dbg(xhci, "Endpoint DCS = %d TRB index = %d cycle = %d\n", 618 + (u8)(hw_dequeue & 0x1), index, new_cycle); 619 + } else { 620 + new_cycle = hw_dequeue & 0x1; 621 + } 611 622 612 623 /* 613 624 * We want to find the pointer, segment and cycle state of the new trb
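The command-ring abort fix above narrows a 64-bit register write down to the lower dword, so a concurrent hardware update of the upper dword (part of the ring pointer) cannot be clobbered by the read-modify-write. A userspace sketch of the idea, assuming a little-endian host so the first four bytes are the low dword (illustrative bit value, not the driver's code):

```c
#include <stdint.h>
#include <string.h>

#define CMD_RING_ABORT (1u << 2)	/* illustrative control-bit position */

/* Set a control bit in the low 32 bits of a 64-bit register image
 * without ever touching the high 32 bits (little-endian layout). */
void set_abort_low_dword(uint64_t *reg)
{
	uint32_t lo;

	memcpy(&lo, reg, sizeof(lo));	/* read the low dword only */
	lo |= CMD_RING_ABORT;
	memcpy(reg, &lo, sizeof(lo));	/* write the low dword only */
}
```

The driver achieves the same effect with a 32-bit `readl`/`writel` pair on the register's low half instead of the original 64-bit `xhci_read_64`/`xhci_write_64`.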
+5
drivers/usb/host/xhci.c
··· 3214 3214 return; 3215 3215 3216 3216 /* Bail out if toggle is already being cleared by a endpoint reset */ 3217 + spin_lock_irqsave(&xhci->lock, flags); 3217 3218 if (ep->ep_state & EP_HARD_CLEAR_TOGGLE) { 3218 3219 ep->ep_state &= ~EP_HARD_CLEAR_TOGGLE; 3220 + spin_unlock_irqrestore(&xhci->lock, flags); 3219 3221 return; 3220 3222 } 3223 + spin_unlock_irqrestore(&xhci->lock, flags); 3221 3224 /* Only interrupt and bulk ep's use data toggle, USB2 spec 5.5.4-> */ 3222 3225 if (usb_endpoint_xfer_control(&host_ep->desc) || 3223 3226 usb_endpoint_xfer_isoc(&host_ep->desc)) ··· 3306 3303 xhci_free_command(xhci, cfg_cmd); 3307 3304 cleanup: 3308 3305 xhci_free_command(xhci, stop_cmd); 3306 + spin_lock_irqsave(&xhci->lock, flags); 3309 3307 if (ep->ep_state & EP_SOFT_CLEAR_TOGGLE) 3310 3308 ep->ep_state &= ~EP_SOFT_CLEAR_TOGGLE; 3309 + spin_unlock_irqrestore(&xhci->lock, flags); 3311 3310 } 3312 3311 3313 3312 static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,
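The xhci.c hunks above take `xhci->lock` around the check-and-clear of `EP_HARD_CLEAR_TOGGLE`, because an unprotected test-then-clear of a shared flag is racy. In userspace C11 the same single-flag handoff can be expressed as an atomic test-and-clear; this sketch shows the semantics being protected, not the driver's spinlock code:

```c
#include <stdatomic.h>

#define EP_HARD_CLEAR_TOGGLE 0x1u	/* illustrative flag bit */

atomic_uint ep_state;

/* Atomically test the flag and, if set, clear it. Returns 1 if this
 * caller observed the flag set (and cleared it), 0 otherwise. */
int test_and_clear_toggle(void)
{
	unsigned int old;

	/* fetch_and clears the bit and returns the previous value in
	 * one indivisible step, so two racing callers cannot both win. */
	old = atomic_fetch_and(&ep_state, ~EP_HARD_CLEAR_TOGGLE);
	return (old & EP_HARD_CLEAR_TOGGLE) ? 1 : 0;
}
```

Exactly one of any number of concurrent callers gets the return value 1, which is the guarantee the added locking provides in the driver.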
+1
drivers/usb/host/xhci.h
··· 1899 1899 #define XHCI_SG_TRB_CACHE_SIZE_QUIRK BIT_ULL(39) 1900 1900 #define XHCI_NO_SOFT_RETRY BIT_ULL(40) 1901 1901 #define XHCI_BROKEN_D3COLD BIT_ULL(41) 1902 + #define XHCI_EP_CTX_BROKEN_DCS BIT_ULL(42) 1902 1903 1903 1904 unsigned int num_active_eps; 1904 1905 unsigned int limit_active_eps;
+3 -1
drivers/usb/musb/musb_dsps.c
··· 899 899 if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) { 900 900 ret = dsps_setup_optional_vbus_irq(pdev, glue); 901 901 if (ret) 902 - goto err; 902 + goto unregister_pdev; 903 903 } 904 904 905 905 return 0; 906 906 907 + unregister_pdev: 908 + platform_device_unregister(glue->musb); 907 909 err: 908 910 pm_runtime_disable(&pdev->dev); 909 911 iounmap(glue->usbss_base);
+8
drivers/usb/serial/option.c
··· 246 246 /* These Quectel products use Quectel's vendor ID */ 247 247 #define QUECTEL_PRODUCT_EC21 0x0121 248 248 #define QUECTEL_PRODUCT_EC25 0x0125 249 + #define QUECTEL_PRODUCT_EG91 0x0191 249 250 #define QUECTEL_PRODUCT_EG95 0x0195 250 251 #define QUECTEL_PRODUCT_BG96 0x0296 251 252 #define QUECTEL_PRODUCT_EP06 0x0306 252 253 #define QUECTEL_PRODUCT_EM12 0x0512 253 254 #define QUECTEL_PRODUCT_RM500Q 0x0800 255 + #define QUECTEL_PRODUCT_EC200S_CN 0x6002 254 256 #define QUECTEL_PRODUCT_EC200T 0x6026 255 257 256 258 #define CMOTECH_VENDOR_ID 0x16d8 ··· 1113 1111 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25, 0xff, 0xff, 0xff), 1114 1112 .driver_info = NUMEP2 }, 1115 1113 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25, 0xff, 0, 0) }, 1114 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG91, 0xff, 0xff, 0xff), 1115 + .driver_info = NUMEP2 }, 1116 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG91, 0xff, 0, 0) }, 1116 1117 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0xff, 0xff), 1117 1118 .driver_info = NUMEP2 }, 1118 1119 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0, 0) }, ··· 1133 1128 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) }, 1134 1129 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10), 1135 1130 .driver_info = ZLP }, 1131 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) }, 1136 1132 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) }, 1137 1133 1138 1134 { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) }, ··· 1233 1227 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1234 1228 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1203, 0xff), /* Telit LE910Cx (RNDIS) */ 1235 1229 .driver_info = NCTRL(2) | RSVD(3) }, 1230 + { 
USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1204, 0xff), /* Telit LE910Cx (MBIM) */ 1231 + .driver_info = NCTRL(0) | RSVD(1) }, 1236 1232 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4), 1237 1233 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) }, 1238 1234 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920),
+1
drivers/usb/serial/qcserial.c
··· 165 165 {DEVICE_SWI(0x1199, 0x907b)}, /* Sierra Wireless EM74xx */ 166 166 {DEVICE_SWI(0x1199, 0x9090)}, /* Sierra Wireless EM7565 QDL */ 167 167 {DEVICE_SWI(0x1199, 0x9091)}, /* Sierra Wireless EM7565 */ 168 + {DEVICE_SWI(0x1199, 0x90d2)}, /* Sierra Wireless EM9191 QDL */ 168 169 {DEVICE_SWI(0x413c, 0x81a2)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */ 169 170 {DEVICE_SWI(0x413c, 0x81a3)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */ 170 171 {DEVICE_SWI(0x413c, 0x81a4)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */
+5 -5
drivers/vhost/vdpa.c
··· 173 173 if (status != 0 && (ops->get_status(vdpa) & ~status) != 0) 174 174 return -EINVAL; 175 175 176 + if ((status_old & VIRTIO_CONFIG_S_DRIVER_OK) && !(status & VIRTIO_CONFIG_S_DRIVER_OK)) 177 + for (i = 0; i < nvqs; i++) 178 + vhost_vdpa_unsetup_vq_irq(v, i); 179 + 176 180 if (status == 0) { 177 181 ret = ops->reset(vdpa); 178 182 if (ret) ··· 187 183 if ((status & VIRTIO_CONFIG_S_DRIVER_OK) && !(status_old & VIRTIO_CONFIG_S_DRIVER_OK)) 188 184 for (i = 0; i < nvqs; i++) 189 185 vhost_vdpa_setup_vq_irq(v, i); 190 - 191 - if ((status_old & VIRTIO_CONFIG_S_DRIVER_OK) && !(status & VIRTIO_CONFIG_S_DRIVER_OK)) 192 - for (i = 0; i < nvqs; i++) 193 - vhost_vdpa_unsetup_vq_irq(v, i); 194 186 195 187 return 0; 196 188 } ··· 322 322 struct eventfd_ctx *ctx; 323 323 324 324 cb.callback = vhost_vdpa_config_cb; 325 - cb.private = v->vdpa; 325 + cb.private = v; 326 326 if (copy_from_user(&fd, argp, sizeof(fd))) 327 327 return -EFAULT; 328 328
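The vhost/vdpa fix keys both actions on a status-bit transition: virtqueue IRQs are torn down when `DRIVER_OK` goes from set to clear (now done before the reset, where the old status is still meaningful) and set up when it goes from clear to set. The transition predicates are simple bit tests; a sketch using the status-bit value from the virtio spec (helper names are hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

#define VIRTIO_CONFIG_S_DRIVER_OK 4u	/* status bit per the virtio spec */

/* True when this status write clears DRIVER_OK (1 -> 0 transition). */
bool driver_ok_dropped(uint8_t status_old, uint8_t status_new)
{
	return (status_old & VIRTIO_CONFIG_S_DRIVER_OK) &&
	       !(status_new & VIRTIO_CONFIG_S_DRIVER_OK);
}

/* True when this status write sets DRIVER_OK (0 -> 1 transition). */
bool driver_ok_raised(uint8_t status_old, uint8_t status_new)
{
	return (status_new & VIRTIO_CONFIG_S_DRIVER_OK) &&
	       !(status_old & VIRTIO_CONFIG_S_DRIVER_OK);
}
```

Checking both the old and new value is what distinguishes an actual transition from a write that merely leaves the bit unchanged.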
+11
drivers/virtio/virtio.c
··· 239 239 driver_features_legacy = driver_features; 240 240 } 241 241 242 + /* 243 + * Some devices detect legacy solely via F_VERSION_1. Write 244 + * F_VERSION_1 to force LE config space accesses before FEATURES_OK for 245 + * these when needed. 246 + */ 247 + if (drv->validate && !virtio_legacy_is_little_endian() 248 + && device_features & BIT_ULL(VIRTIO_F_VERSION_1)) { 249 + dev->features = BIT_ULL(VIRTIO_F_VERSION_1); 250 + dev->config->finalize_features(dev); 251 + } 252 + 242 253 if (device_features & (1ULL << VIRTIO_F_VERSION_1)) 243 254 dev->features = driver_features & device_features; 244 255 else
+1 -1
fs/btrfs/ctree.h
··· 3030 3030 btrfs_lookup_dir_index_item(struct btrfs_trans_handle *trans, 3031 3031 struct btrfs_root *root, 3032 3032 struct btrfs_path *path, u64 dir, 3033 - u64 objectid, const char *name, int name_len, 3033 + u64 index, const char *name, int name_len, 3034 3034 int mod); 3035 3035 struct btrfs_dir_item * 3036 3036 btrfs_search_dir_index_item(struct btrfs_root *root,
+37 -11
fs/btrfs/dir-item.c
··· 190 190 } 191 191 192 192 /* 193 - * lookup a directory item based on name. 'dir' is the objectid 194 - * we're searching in, and 'mod' tells us if you plan on deleting the 195 - * item (use mod < 0) or changing the options (use mod > 0) 193 + * Lookup for a directory item by name. 194 + * 195 + * @trans: The transaction handle to use. Can be NULL if @mod is 0. 196 + * @root: The root of the target tree. 197 + * @path: Path to use for the search. 198 + * @dir: The inode number (objectid) of the directory. 199 + * @name: The name associated to the directory entry we are looking for. 200 + * @name_len: The length of the name. 201 + * @mod: Used to indicate if the tree search is meant for a read only 202 + * lookup, for a modification lookup or for a deletion lookup, so 203 + * its value should be 0, 1 or -1, respectively. 204 + * 205 + * Returns: NULL if the dir item does not exists, an error pointer if an error 206 + * happened, or a pointer to a dir item if a dir item exists for the given name. 196 207 */ 197 208 struct btrfs_dir_item *btrfs_lookup_dir_item(struct btrfs_trans_handle *trans, 198 209 struct btrfs_root *root, ··· 284 273 } 285 274 286 275 /* 287 - * lookup a directory item based on index. 'dir' is the objectid 288 - * we're searching in, and 'mod' tells us if you plan on deleting the 289 - * item (use mod < 0) or changing the options (use mod > 0) 276 + * Lookup for a directory index item by name and index number. 290 277 * 291 - * The name is used to make sure the index really points to the name you were 292 - * looking for. 278 + * @trans: The transaction handle to use. Can be NULL if @mod is 0. 279 + * @root: The root of the target tree. 280 + * @path: Path to use for the search. 281 + * @dir: The inode number (objectid) of the directory. 282 + * @index: The index number. 283 + * @name: The name associated to the directory entry we are looking for. 284 + * @name_len: The length of the name. 
285 + * @mod: Used to indicate if the tree search is meant for a read only 286 + * lookup, for a modification lookup or for a deletion lookup, so 287 + * its value should be 0, 1 or -1, respectively. 288 + * 289 + * Returns: NULL if the dir index item does not exists, an error pointer if an 290 + * error happened, or a pointer to a dir item if the dir index item exists and 291 + * matches the criteria (name and index number). 293 292 */ 294 293 struct btrfs_dir_item * 295 294 btrfs_lookup_dir_index_item(struct btrfs_trans_handle *trans, 296 295 struct btrfs_root *root, 297 296 struct btrfs_path *path, u64 dir, 298 - u64 objectid, const char *name, int name_len, 297 + u64 index, const char *name, int name_len, 299 298 int mod) 300 299 { 300 + struct btrfs_dir_item *di; 301 301 struct btrfs_key key; 302 302 303 303 key.objectid = dir; 304 304 key.type = BTRFS_DIR_INDEX_KEY; 305 - key.offset = objectid; 305 + key.offset = index; 306 306 307 - return btrfs_lookup_match_dir(trans, root, path, &key, name, name_len, mod); 307 + di = btrfs_lookup_match_dir(trans, root, path, &key, name, name_len, mod); 308 + if (di == ERR_PTR(-ENOENT)) 309 + return NULL; 310 + 311 + return di; 308 312 } 309 313 310 314 struct btrfs_dir_item *
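After the change above, `btrfs_lookup_dir_index_item()` maps the "not found" case to a plain NULL instead of `ERR_PTR(-ENOENT)`, so callers see three distinct outcomes: NULL (absent), an error pointer, or a valid item. The kernel's error-pointer encoding that makes this possible can be sketched in plain C; these are simplified stand-ins for the real `linux/err.h` macros:

```c
#include <errno.h>
#include <stddef.h>

/* Simplified kernel-style error pointers: small negative errno values
 * live in the top MAX_ERRNO bytes of the address space, which no valid
 * object pointer can occupy. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
	return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}

/* Fold the "not found" error into NULL, as the btrfs lookup now does;
 * all other errors still propagate as error pointers. */
static inline void *lookup_filter_enoent(void *di)
{
	if (di == ERR_PTR(-ENOENT))
		return NULL;
	return di;
}
```

This is why the callers in tree-log.c (below) can be simplified to an `IS_ERR(di)` / `di != NULL` pair instead of checking for `ERR_PTR(-ENOENT)` explicitly.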
+1
fs/btrfs/extent-tree.c
··· 4859 4859 out_free_delayed: 4860 4860 btrfs_free_delayed_extent_op(extent_op); 4861 4861 out_free_buf: 4862 + btrfs_tree_unlock(buf); 4862 4863 free_extent_buffer(buf); 4863 4864 out_free_reserved: 4864 4865 btrfs_free_reserved_extent(fs_info, ins.objectid, ins.offset, 0);
+10 -9
fs/btrfs/file.c
··· 734 734 if (args->start >= inode->disk_i_size && !args->replace_extent) 735 735 modify_tree = 0; 736 736 737 - update_refs = (test_bit(BTRFS_ROOT_SHAREABLE, &root->state) || 738 - root == fs_info->tree_root); 737 + update_refs = (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID); 739 738 while (1) { 740 739 recow = 0; 741 740 ret = btrfs_lookup_file_extent(trans, root, path, ino, ··· 2703 2704 drop_args.bytes_found); 2704 2705 if (ret != -ENOSPC) { 2705 2706 /* 2706 - * When cloning we want to avoid transaction aborts when 2707 - * nothing was done and we are attempting to clone parts 2708 - * of inline extents, in such cases -EOPNOTSUPP is 2709 - * returned by __btrfs_drop_extents() without having 2710 - * changed anything in the file. 2707 + * The only time we don't want to abort is if we are 2708 + * attempting to clone a partial inline extent, in which 2709 + * case we'll get EOPNOTSUPP. However if we aren't 2710 + * clone we need to abort no matter what, because if we 2711 + * got EOPNOTSUPP via prealloc then we messed up and 2712 + * need to abort. 2711 2713 */ 2712 - if (extent_info && !extent_info->is_new_extent && 2713 - ret && ret != -EOPNOTSUPP) 2714 + if (ret && 2715 + (ret != -EOPNOTSUPP || 2716 + (extent_info && extent_info->is_new_extent))) 2714 2717 btrfs_abort_transaction(trans, ret); 2715 2718 break; 2716 2719 }
+48 -31
fs/btrfs/tree-log.c
··· 939 939 } 940 940 941 941 /* 942 - * helper function to see if a given name and sequence number found 943 - * in an inode back reference are already in a directory and correctly 944 - * point to this inode 942 + * See if a given name and sequence number found in an inode back reference are 943 + * already in a directory and correctly point to this inode. 944 + * 945 + * Returns: < 0 on error, 0 if the directory entry does not exists and 1 if it 946 + * exists. 945 947 */ 946 948 static noinline int inode_in_dir(struct btrfs_root *root, 947 949 struct btrfs_path *path, ··· 952 950 { 953 951 struct btrfs_dir_item *di; 954 952 struct btrfs_key location; 955 - int match = 0; 953 + int ret = 0; 956 954 957 955 di = btrfs_lookup_dir_index_item(NULL, root, path, dirid, 958 956 index, name, name_len, 0); 959 - if (di && !IS_ERR(di)) { 957 + if (IS_ERR(di)) { 958 + ret = PTR_ERR(di); 959 + goto out; 960 + } else if (di) { 960 961 btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location); 961 962 if (location.objectid != objectid) 962 963 goto out; 963 - } else 964 + } else { 964 965 goto out; 965 - btrfs_release_path(path); 966 + } 966 967 968 + btrfs_release_path(path); 967 969 di = btrfs_lookup_dir_item(NULL, root, path, dirid, name, name_len, 0); 968 - if (di && !IS_ERR(di)) { 969 - btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location); 970 - if (location.objectid != objectid) 971 - goto out; 972 - } else 970 + if (IS_ERR(di)) { 971 + ret = PTR_ERR(di); 973 972 goto out; 974 - match = 1; 973 + } else if (di) { 974 + btrfs_dir_item_key_to_cpu(path->nodes[0], di, &location); 975 + if (location.objectid == objectid) 976 + ret = 1; 977 + } 975 978 out: 976 979 btrfs_release_path(path); 977 - return match; 980 + return ret; 978 981 } 979 982 980 983 /* ··· 1189 1182 /* look for a conflicting sequence number */ 1190 1183 di = btrfs_lookup_dir_index_item(trans, root, path, btrfs_ino(dir), 1191 1184 ref_index, name, namelen, 0); 1192 - if (di && !IS_ERR(di)) { 1185 + if 
(IS_ERR(di)) { 1186 + return PTR_ERR(di); 1187 + } else if (di) { 1193 1188 ret = drop_one_dir_item(trans, root, path, dir, di); 1194 1189 if (ret) 1195 1190 return ret; ··· 1201 1192 /* look for a conflicting name */ 1202 1193 di = btrfs_lookup_dir_item(trans, root, path, btrfs_ino(dir), 1203 1194 name, namelen, 0); 1204 - if (di && !IS_ERR(di)) { 1195 + if (IS_ERR(di)) { 1196 + return PTR_ERR(di); 1197 + } else if (di) { 1205 1198 ret = drop_one_dir_item(trans, root, path, dir, di); 1206 1199 if (ret) 1207 1200 return ret; ··· 1528 1517 if (ret) 1529 1518 goto out; 1530 1519 1531 - /* if we already have a perfect match, we're done */ 1532 - if (!inode_in_dir(root, path, btrfs_ino(BTRFS_I(dir)), 1533 - btrfs_ino(BTRFS_I(inode)), ref_index, 1534 - name, namelen)) { 1520 + ret = inode_in_dir(root, path, btrfs_ino(BTRFS_I(dir)), 1521 + btrfs_ino(BTRFS_I(inode)), ref_index, 1522 + name, namelen); 1523 + if (ret < 0) { 1524 + goto out; 1525 + } else if (ret == 0) { 1535 1526 /* 1536 1527 * look for a conflicting back reference in the 1537 1528 * metadata. if we find one we have to unlink that name ··· 1593 1580 if (ret) 1594 1581 goto out; 1595 1582 } 1583 + /* Else, ret == 1, we already have a perfect match, we're done. 
*/ 1596 1584 1597 1585 ref_ptr = (unsigned long)(ref_ptr + ref_struct_size) + namelen; 1598 1586 kfree(name); ··· 1950 1936 struct btrfs_key log_key; 1951 1937 struct inode *dir; 1952 1938 u8 log_type; 1953 - int exists; 1954 - int ret = 0; 1939 + bool exists; 1940 + int ret; 1955 1941 bool update_size = (key->type == BTRFS_DIR_INDEX_KEY); 1956 1942 bool name_added = false; 1957 1943 ··· 1971 1957 name_len); 1972 1958 1973 1959 btrfs_dir_item_key_to_cpu(eb, di, &log_key); 1974 - exists = btrfs_lookup_inode(trans, root, path, &log_key, 0); 1975 - if (exists == 0) 1976 - exists = 1; 1977 - else 1978 - exists = 0; 1960 + ret = btrfs_lookup_inode(trans, root, path, &log_key, 0); 1979 1961 btrfs_release_path(path); 1962 + if (ret < 0) 1963 + goto out; 1964 + exists = (ret == 0); 1965 + ret = 0; 1980 1966 1981 1967 if (key->type == BTRFS_DIR_ITEM_KEY) { 1982 1968 dst_di = btrfs_lookup_dir_item(trans, root, path, key->objectid, ··· 1991 1977 ret = -EINVAL; 1992 1978 goto out; 1993 1979 } 1994 - if (IS_ERR_OR_NULL(dst_di)) { 1980 + 1981 + if (IS_ERR(dst_di)) { 1982 + ret = PTR_ERR(dst_di); 1983 + goto out; 1984 + } else if (!dst_di) { 1995 1985 /* we need a sequence number to insert, so we only 1996 1986 * do inserts for the BTRFS_DIR_INDEX_KEY types 1997 1987 */ ··· 2299 2281 dir_key->offset, 2300 2282 name, name_len, 0); 2301 2283 } 2302 - if (!log_di || log_di == ERR_PTR(-ENOENT)) { 2284 + if (!log_di) { 2303 2285 btrfs_dir_item_key_to_cpu(eb, di, &location); 2304 2286 btrfs_release_path(path); 2305 2287 btrfs_release_path(log_path); ··· 3558 3540 if (err == -ENOSPC) { 3559 3541 btrfs_set_log_full_commit(trans); 3560 3542 err = 0; 3561 - } else if (err < 0 && err != -ENOENT) { 3562 - /* ENOENT can be returned if the entry hasn't been fsynced yet */ 3543 + } else if (err < 0) { 3563 3544 btrfs_abort_transaction(trans, err); 3564 3545 } 3565 3546
+1 -1
fs/io_uring.c
··· 2949 2949 struct io_ring_ctx *ctx = req->ctx; 2950 2950 2951 2951 req_set_fail(req); 2952 - if (issue_flags & IO_URING_F_NONBLOCK) { 2952 + if (!(issue_flags & IO_URING_F_NONBLOCK)) { 2953 2953 mutex_lock(&ctx->uring_lock); 2954 2954 __io_req_complete(req, issue_flags, ret, cflags); 2955 2955 mutex_unlock(&ctx->uring_lock);
+8 -1
fs/kernfs/dir.c
··· 1111 1111 1112 1112 kn = kernfs_find_ns(parent, dentry->d_name.name, ns); 1113 1113 /* attach dentry and inode */ 1114 - if (kn && kernfs_active(kn)) { 1114 + if (kn) { 1115 + /* Inactive nodes are invisible to the VFS so don't 1116 + * create a negative. 1117 + */ 1118 + if (!kernfs_active(kn)) { 1119 + up_read(&kernfs_rwsem); 1120 + return NULL; 1121 + } 1115 1122 inode = kernfs_get_inode(dir->i_sb, kn); 1116 1123 if (!inode) 1117 1124 inode = ERR_PTR(-ENOMEM);
+5 -15
fs/ntfs3/attrib.c
··· 6 6 * TODO: Merge attr_set_size/attr_data_get_block/attr_allocate_frame? 7 7 */ 8 8 9 - #include <linux/blkdev.h> 10 - #include <linux/buffer_head.h> 11 9 #include <linux/fs.h> 12 - #include <linux/hash.h> 13 - #include <linux/nls.h> 14 - #include <linux/ratelimit.h> 15 10 #include <linux/slab.h> 11 + #include <linux/kernel.h> 16 12 17 13 #include "debug.h" 18 14 #include "ntfs.h" ··· 287 291 if (!rsize) { 288 292 /* Empty resident -> Non empty nonresident. */ 289 293 } else if (!is_data) { 290 - err = ntfs_sb_write_run(sbi, run, 0, data, rsize); 294 + err = ntfs_sb_write_run(sbi, run, 0, data, rsize, 0); 291 295 if (err) 292 296 goto out2; 293 297 } else if (!page) { ··· 447 451 again_1: 448 452 align = sbi->cluster_size; 449 453 450 - if (is_ext) { 454 + if (is_ext) 451 455 align <<= attr_b->nres.c_unit; 452 - if (is_attr_sparsed(attr_b)) 453 - keep_prealloc = false; 454 - } 455 456 456 457 old_valid = le64_to_cpu(attr_b->nres.valid_size); 457 458 old_size = le64_to_cpu(attr_b->nres.data_size); ··· 457 464 458 465 new_alloc = (new_size + align - 1) & ~(u64)(align - 1); 459 466 new_alen = new_alloc >> cluster_bits; 460 - 461 - if (keep_prealloc && is_ext) 462 - keep_prealloc = false; 463 467 464 468 if (keep_prealloc && new_size < old_size) { 465 469 attr_b->nres.data_size = cpu_to_le64(new_size); ··· 519 529 } else if (pre_alloc == -1) { 520 530 pre_alloc = 0; 521 531 if (type == ATTR_DATA && !name_len && 522 - sbi->options.prealloc) { 532 + sbi->options->prealloc) { 523 533 CLST new_alen2 = bytes_to_cluster( 524 534 sbi, get_pre_allocated(new_size)); 525 535 pre_alloc = new_alen2 - new_alen; ··· 1956 1966 return 0; 1957 1967 1958 1968 from = vbo; 1959 - to = (vbo + bytes) < data_size ? (vbo + bytes) : data_size; 1969 + to = min_t(u64, vbo + bytes, data_size); 1960 1970 memset(Add2Ptr(resident_data(attr_b), from), 0, to - from); 1961 1971 return 0; 1962 1972 }
+3 -6
fs/ntfs3/attrlist.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 8 #include <linux/fs.h> 11 - #include <linux/nls.h> 12 9 13 10 #include "debug.h" 14 11 #include "ntfs.h" ··· 333 336 334 337 if (attr && attr->non_res) { 335 338 err = ntfs_sb_write_run(ni->mi.sbi, &al->run, 0, al->le, 336 - al->size); 339 + al->size, 0); 337 340 if (err) 338 341 return err; 339 342 al->dirty = false; ··· 420 423 return true; 421 424 } 422 425 423 - int al_update(struct ntfs_inode *ni) 426 + int al_update(struct ntfs_inode *ni, int sync) 424 427 { 425 428 int err; 426 429 struct ATTRIB *attr; ··· 442 445 memcpy(resident_data(attr), al->le, al->size); 443 446 } else { 444 447 err = ntfs_sb_write_run(ni->mi.sbi, &al->run, 0, al->le, 445 - al->size); 448 + al->size, sync); 446 449 if (err) 447 450 goto out; 448 451
+2 -8
fs/ntfs3/bitfunc.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 - #include <linux/fs.h> 11 - #include <linux/nls.h> 8 + #include <linux/types.h> 12 9 13 - #include "debug.h" 14 - #include "ntfs.h" 15 10 #include "ntfs_fs.h" 16 11 17 12 #define BITS_IN_SIZE_T (sizeof(size_t) * 8) ··· 119 124 120 125 pos = nbits & 7; 121 126 if (pos) { 122 - u8 mask = fill_mask[pos]; 123 - 127 + mask = fill_mask[pos]; 124 128 if ((*map & mask) != mask) 125 129 return false; 126 130 }
+6 -8
fs/ntfs3/bitmap.c
··· 10 10 * 11 11 */ 12 12 13 - #include <linux/blkdev.h> 14 13 #include <linux/buffer_head.h> 15 14 #include <linux/fs.h> 16 - #include <linux/nls.h> 15 + #include <linux/kernel.h> 17 16 18 - #include "debug.h" 19 17 #include "ntfs.h" 20 18 #include "ntfs_fs.h" 21 19 ··· 433 435 ; 434 436 } else { 435 437 n3 = rb_next(&e->count.node); 436 - max_new_len = len > new_len ? len : new_len; 438 + max_new_len = max(len, new_len); 437 439 if (!n3) { 438 440 wnd->extent_max = max_new_len; 439 441 } else { ··· 729 731 wbits = wnd->bits_last; 730 732 731 733 tail = wbits - wbit; 732 - op = tail < bits ? tail : bits; 734 + op = min_t(u32, tail, bits); 733 735 734 736 bh = wnd_map(wnd, iw); 735 737 if (IS_ERR(bh)) { ··· 782 784 wbits = wnd->bits_last; 783 785 784 786 tail = wbits - wbit; 785 - op = tail < bits ? tail : bits; 787 + op = min_t(u32, tail, bits); 786 788 787 789 bh = wnd_map(wnd, iw); 788 790 if (IS_ERR(bh)) { ··· 832 834 wbits = wnd->bits_last; 833 835 834 836 tail = wbits - wbit; 835 - op = tail < bits ? tail : bits; 837 + op = min_t(u32, tail, bits); 836 838 837 839 if (wbits != wnd->free_bits[iw]) { 838 840 bool ret; ··· 924 926 wbits = wnd->bits_last; 925 927 926 928 tail = wbits - wbit; 927 - op = tail < bits ? tail : bits; 929 + op = min_t(u32, tail, bits); 928 930 929 931 if (wnd->free_bits[iw]) { 930 932 bool ret;
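The ntfs3 cleanups above replace open-coded ternaries like `tail < bits ? tail : bits` with `min_t(u32, tail, bits)`, the kernel's typed minimum macro. A standalone approximation using a GNU C statement expression (the real macro lives in `linux/minmax.h` and adds stricter type checking):

```c
#include <stdint.h>

/* Simplified take on the kernel's min_t(): cast both operands to the
 * requested type before comparing, so mixed signed/unsigned callers
 * compare in one well-defined type. */
#define min_t(type, a, b) ({		\
	type _a = (type)(a);		\
	type _b = (type)(b);		\
	_a < _b ? _a : _b; })

/* How many bits of the current bitmap window one pass may touch:
 * never more than remain in the window, never more than requested. */
uint32_t clamp_op(uint32_t tail, uint32_t bits)
{
	return min_t(uint32_t, tail, bits);
}
```

Evaluating each operand once into a local also avoids the double evaluation a naive `MIN(a, b)` ternary macro would perform.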
+3
fs/ntfs3/debug.h
··· 11 11 #ifndef _LINUX_NTFS3_DEBUG_H 12 12 #define _LINUX_NTFS3_DEBUG_H 13 13 14 + struct super_block; 15 + struct inode; 16 + 14 17 #ifndef Add2Ptr 15 18 #define Add2Ptr(P, I) ((void *)((u8 *)(P) + (I))) 16 19 #define PtrOffset(B, O) ((size_t)((size_t)(O) - (size_t)(B)))
+12 -18
fs/ntfs3/dir.c
··· 7 7 * 8 8 */ 9 9 10 - #include <linux/blkdev.h> 11 - #include <linux/buffer_head.h> 12 10 #include <linux/fs.h> 13 - #include <linux/iversion.h> 14 11 #include <linux/nls.h> 15 12 16 13 #include "debug.h" ··· 15 18 #include "ntfs_fs.h" 16 19 17 20 /* Convert little endian UTF-16 to NLS string. */ 18 - int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const struct le_str *uni, 21 + int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const __le16 *name, u32 len, 19 22 u8 *buf, int buf_len) 20 23 { 21 - int ret, uni_len, warn; 22 - const __le16 *ip; 24 + int ret, warn; 23 25 u8 *op; 24 - struct nls_table *nls = sbi->options.nls; 26 + struct nls_table *nls = sbi->options->nls; 25 27 26 28 static_assert(sizeof(wchar_t) == sizeof(__le16)); 27 29 28 30 if (!nls) { 29 31 /* UTF-16 -> UTF-8 */ 30 - ret = utf16s_to_utf8s((wchar_t *)uni->name, uni->len, 31 - UTF16_LITTLE_ENDIAN, buf, buf_len); 32 + ret = utf16s_to_utf8s(name, len, UTF16_LITTLE_ENDIAN, buf, 33 + buf_len); 32 34 buf[ret] = '\0'; 33 35 return ret; 34 36 } 35 37 36 - ip = uni->name; 37 38 op = buf; 38 - uni_len = uni->len; 39 39 warn = 0; 40 40 41 - while (uni_len--) { 41 + while (len--) { 42 42 u16 ec; 43 43 int charlen; 44 44 char dump[5]; ··· 46 52 break; 47 53 } 48 54 49 - ec = le16_to_cpu(*ip++); 55 + ec = le16_to_cpu(*name++); 50 56 charlen = nls->uni2char(ec, op, buf_len); 51 57 52 58 if (charlen > 0) { ··· 180 186 { 181 187 int ret, slen; 182 188 const u8 *end; 183 - struct nls_table *nls = sbi->options.nls; 189 + struct nls_table *nls = sbi->options->nls; 184 190 u16 *uname = uni->name; 185 191 186 192 static_assert(sizeof(wchar_t) == sizeof(u16)); ··· 295 301 return 0; 296 302 297 303 /* Skip meta files. Unless option to show metafiles is set. 
*/ 298 - if (!sbi->options.showmeta && ntfs_is_meta_file(sbi, ino)) 304 + if (!sbi->options->showmeta && ntfs_is_meta_file(sbi, ino)) 299 305 return 0; 300 306 301 - if (sbi->options.nohidden && (fname->dup.fa & FILE_ATTRIBUTE_HIDDEN)) 307 + if (sbi->options->nohidden && (fname->dup.fa & FILE_ATTRIBUTE_HIDDEN)) 302 308 return 0; 303 309 304 - name_len = ntfs_utf16_to_nls(sbi, (struct le_str *)&fname->name_len, 305 - name, PATH_MAX); 310 + name_len = ntfs_utf16_to_nls(sbi, fname->name, fname->name_len, name, 311 + PATH_MAX); 306 312 if (name_len <= 0) { 307 313 ntfs_warn(sbi->sb, "failed to convert name for inode %lx.", 308 314 ino);
+7 -5
fs/ntfs3/file.c
··· 12 12 #include <linux/compat.h> 13 13 #include <linux/falloc.h> 14 14 #include <linux/fiemap.h> 15 - #include <linux/nls.h> 16 15 17 16 #include "debug.h" 18 17 #include "ntfs.h" ··· 587 588 truncate_pagecache(inode, vbo_down); 588 589 589 590 if (!is_sparsed(ni) && !is_compressed(ni)) { 590 - /* Normal file. */ 591 - err = ntfs_zero_range(inode, vbo, end); 591 + /* 592 + * Normal file, can't make hole. 593 + * TODO: Try to find way to save info about hole. 594 + */ 595 + err = -EOPNOTSUPP; 592 596 goto out; 593 597 } 594 598 ··· 739 737 umode_t mode = inode->i_mode; 740 738 int err; 741 739 742 - if (sbi->options.no_acs_rules) { 740 + if (sbi->options->noacsrules) { 743 741 /* "No access rules" - Force any changes of time etc. */ 744 742 attr->ia_valid |= ATTR_FORCE; 745 743 /* and disable for editing some attributes. */ ··· 1187 1185 int err = 0; 1188 1186 1189 1187 /* If we are last writer on the inode, drop the block reservation. */ 1190 - if (sbi->options.prealloc && ((file->f_mode & FMODE_WRITE) && 1188 + if (sbi->options->prealloc && ((file->f_mode & FMODE_WRITE) && 1191 1189 atomic_read(&inode->i_writecount) == 1)) { 1192 1190 ni_lock(ni); 1193 1191 down_write(&ni->file.run_lock);
+41 -14
fs/ntfs3/frecord.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 8 #include <linux/fiemap.h> 11 9 #include <linux/fs.h> 12 - #include <linux/nls.h> 13 10 #include <linux/vmalloc.h> 14 11 15 12 #include "debug.h" ··· 705 708 continue; 706 709 707 710 mi = ni_find_mi(ni, ino_get(&le->ref)); 711 + if (!mi) { 712 + /* Should never happened, 'cause already checked. */ 713 + goto bad; 714 + } 708 715 709 716 attr = mi_find_attr(mi, NULL, le->type, le_name(le), 710 717 le->name_len, &le->id); 718 + if (!attr) { 719 + /* Should never happened, 'cause already checked. */ 720 + goto bad; 721 + } 711 722 asize = le32_to_cpu(attr->size); 712 723 713 724 /* Insert into primary record. */ 714 725 attr_ins = mi_insert_attr(&ni->mi, le->type, le_name(le), 715 726 le->name_len, asize, 716 727 le16_to_cpu(attr->name_off)); 717 - id = attr_ins->id; 728 + if (!attr_ins) { 729 + /* 730 + * Internal error. 731 + * Either no space in primary record (already checked). 732 + * Either tried to insert another 733 + * non indexed attribute (logic error). 734 + */ 735 + goto bad; 736 + } 718 737 719 738 /* Copy all except id. */ 739 + id = attr_ins->id; 720 740 memcpy(attr_ins, attr, asize); 721 741 attr_ins->id = id; 722 742 ··· 749 735 ni->attr_list.dirty = false; 750 736 751 737 return 0; 738 + bad: 739 + ntfs_inode_err(&ni->vfs_inode, "Internal error"); 740 + make_bad_inode(&ni->vfs_inode); 741 + return -EINVAL; 752 742 } 753 743 754 744 /* ··· 973 955 /* Only indexed attributes can share same record. */ 974 956 continue; 975 957 } 958 + 959 + /* 960 + * Do not try to insert this attribute 961 + * if there is no room in record. 962 + */ 963 + if (le32_to_cpu(mi->mrec->used) + asize > sbi->record_size) 964 + continue; 976 965 977 966 /* Try to insert attribute into this subrecord. 
*/ 978 967 attr = ni_ins_new_attr(ni, mi, le, type, name, name_len, asize, ··· 1476 1451 attr->res.flags = RESIDENT_FLAG_INDEXED; 1477 1452 1478 1453 /* is_attr_indexed(attr)) == true */ 1479 - le16_add_cpu(&ni->mi.mrec->hard_links, +1); 1454 + le16_add_cpu(&ni->mi.mrec->hard_links, 1); 1480 1455 ni->mi.dirty = true; 1481 1456 } 1482 1457 attr->res.res = 0; ··· 1631 1606 1632 1607 *le = NULL; 1633 1608 1634 - if (FILE_NAME_POSIX == name_type) 1609 + if (name_type == FILE_NAME_POSIX) 1635 1610 return NULL; 1636 1611 1637 1612 /* Enumerate all names. */ ··· 1731 1706 /* 1732 1707 * ni_parse_reparse 1733 1708 * 1734 - * Buffer is at least 24 bytes. 1709 + * buffer - memory for reparse buffer header 1735 1710 */ 1736 1711 enum REPARSE_SIGN ni_parse_reparse(struct ntfs_inode *ni, struct ATTRIB *attr, 1737 - void *buffer) 1712 + struct REPARSE_DATA_BUFFER *buffer) 1738 1713 { 1739 1714 const struct REPARSE_DATA_BUFFER *rp = NULL; 1740 1715 u8 bits; 1741 1716 u16 len; 1742 1717 typeof(rp->CompressReparseBuffer) *cmpr; 1743 - 1744 - static_assert(sizeof(struct REPARSE_DATA_BUFFER) <= 24); 1745 1718 1746 1719 /* Try to estimate reparse point. */ 1747 1720 if (!attr->non_res) { ··· 1825 1802 1826 1803 return REPARSE_NONE; 1827 1804 } 1805 + 1806 + if (buffer != rp) 1807 + memcpy(buffer, rp, sizeof(struct REPARSE_DATA_BUFFER)); 1828 1808 1829 1809 /* Looks like normal symlink. */ 1830 1810 return REPARSE_LINK; ··· 2932 2906 memcpy(Add2Ptr(attr, SIZEOF_RESIDENT), de + 1, de_key_size); 2933 2907 mi_get_ref(&ni->mi, &de->ref); 2934 2908 2935 - if (indx_insert_entry(&dir_ni->dir, dir_ni, de, sbi, NULL, 1)) { 2909 + if (indx_insert_entry(&dir_ni->dir, dir_ni, de, sbi, NULL, 1)) 2936 2910 return false; 2937 - } 2938 2911 } 2939 2912 2940 2913 return true; ··· 3102 3077 const struct EA_INFO *info; 3103 3078 3104 3079 info = resident_data_ex(attr, sizeof(struct EA_INFO)); 3105 - dup->ea_size = info->size_pack; 3080 + /* If ATTR_EA_INFO exists 'info' can't be NULL. 
*/ 3081 + if (info) 3082 + dup->ea_size = info->size_pack; 3106 3083 } 3107 3084 } 3108 3085 ··· 3232 3205 goto out; 3233 3206 } 3234 3207 3235 - err = al_update(ni); 3208 + err = al_update(ni, sync); 3236 3209 if (err) 3237 3210 goto out; 3238 3211 }
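The frecord.c hunks above add `goto bad` unwinding around the `ni_find_mi`, `mi_find_attr`, and `mi_insert_attr` results that were previously dereferenced unchecked; on an internal inconsistency the whole operation now fails and the inode is marked bad. A minimal userspace sketch of that error-path shape (the lookup table and names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

static const int table[] = { 10, 20, 30 };

static const int *find_entry(int key)
{
	size_t i;

	for (i = 0; i < sizeof(table) / sizeof(table[0]); i++)
		if (table[i] == key)
			return &table[i];
	return NULL;
}

static int move_entry(int key1, int key2)
{
	const int *src = find_entry(key1);
	const int *dst;

	if (!src)
		goto bad;	/* "should never happen": caller already checked */
	dst = find_entry(key2);
	if (!dst)
		goto bad;
	(void)src;
	(void)dst;		/* the real code copies the attribute here */
	return 0;
bad:
	return -EINVAL;
}
```

In the actual hunks the `bad:` label additionally calls `ntfs_inode_err()` and `make_bad_inode()` before returning `-EINVAL`.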
+4 -8
fs/ntfs3/fslog.c
··· 6 6 */ 7 7 8 8 #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 9 #include <linux/fs.h> 11 - #include <linux/hash.h> 12 - #include <linux/nls.h> 13 10 #include <linux/random.h> 14 - #include <linux/ratelimit.h> 15 11 #include <linux/slab.h> 16 12 17 13 #include "debug.h" ··· 2215 2219 2216 2220 err = ntfs_sb_write_run(log->ni->mi.sbi, 2217 2221 &log->ni->file.run, off, page, 2218 - log->page_size); 2222 + log->page_size, 0); 2219 2223 2220 2224 if (err) 2221 2225 goto out; ··· 3706 3710 3707 3711 if (a_dirty) { 3708 3712 attr = oa->attr; 3709 - err = ntfs_sb_write_run(sbi, oa->run1, vbo, buffer_le, bytes); 3713 + err = ntfs_sb_write_run(sbi, oa->run1, vbo, buffer_le, bytes, 0); 3710 3714 if (err) 3711 3715 goto out; 3712 3716 } ··· 5148 5152 5149 5153 ntfs_fix_pre_write(&rh->rhdr, log->page_size); 5150 5154 5151 - err = ntfs_sb_write_run(sbi, &ni->file.run, 0, rh, log->page_size); 5155 + err = ntfs_sb_write_run(sbi, &ni->file.run, 0, rh, log->page_size, 0); 5152 5156 if (!err) 5153 5157 err = ntfs_sb_write_run(sbi, &log->ni->file.run, log->page_size, 5154 - rh, log->page_size); 5158 + rh, log->page_size, 0); 5155 5159 5156 5160 kfree(rh); 5157 5161 if (err)
+37 -40
fs/ntfs3/fsntfs.c
··· 8 8 #include <linux/blkdev.h> 9 9 #include <linux/buffer_head.h> 10 10 #include <linux/fs.h> 11 - #include <linux/nls.h> 11 + #include <linux/kernel.h> 12 12 13 13 #include "debug.h" 14 14 #include "ntfs.h" ··· 358 358 enum ALLOCATE_OPT opt) 359 359 { 360 360 int err; 361 - CLST alen = 0; 361 + CLST alen; 362 362 struct super_block *sb = sbi->sb; 363 363 size_t alcn, zlen, zeroes, zlcn, zlen2, ztrim, new_zlen; 364 364 struct wnd_bitmap *wnd = &sbi->used.bitmap; ··· 370 370 if (!zlen) { 371 371 err = ntfs_refresh_zone(sbi); 372 372 if (err) 373 - goto out; 373 + goto up_write; 374 + 374 375 zlen = wnd_zone_len(wnd); 375 376 } 376 377 377 378 if (!zlen) { 378 379 ntfs_err(sbi->sb, "no free space to extend mft"); 379 - goto out; 380 + err = -ENOSPC; 381 + goto up_write; 380 382 } 381 383 382 384 lcn = wnd_zone_bit(wnd); 383 - alen = zlen > len ? len : zlen; 385 + alen = min_t(CLST, len, zlen); 384 386 385 387 wnd_zone_set(wnd, lcn + alen, zlen - alen); 386 388 387 389 err = wnd_set_used(wnd, lcn, alen); 388 - if (err) { 389 - up_write(&wnd->rw_lock); 390 - return err; 391 - } 390 + if (err) 391 + goto up_write; 392 + 392 393 alcn = lcn; 393 - goto out; 394 + goto space_found; 394 395 } 395 396 /* 396 397 * 'Cause cluster 0 is always used this value means that we should use ··· 405 404 406 405 alen = wnd_find(wnd, len, lcn, BITMAP_FIND_MARK_AS_USED, &alcn); 407 406 if (alen) 408 - goto out; 407 + goto space_found; 409 408 410 409 /* Try to use clusters from MftZone. */ 411 410 zlen = wnd_zone_len(wnd); 412 411 zeroes = wnd_zeroes(wnd); 413 412 414 413 /* Check too big request */ 415 - if (len > zeroes + zlen || zlen <= NTFS_MIN_MFT_ZONE) 416 - goto out; 414 + if (len > zeroes + zlen || zlen <= NTFS_MIN_MFT_ZONE) { 415 + err = -ENOSPC; 416 + goto up_write; 417 + } 417 418 418 419 /* How many clusters to cat from zone. */ 419 420 zlcn = wnd_zone_bit(wnd); 420 421 zlen2 = zlen >> 1; 421 - ztrim = len > zlen ? zlen : (len > zlen2 ? 
len : zlen2); 422 - new_zlen = zlen - ztrim; 423 - 424 - if (new_zlen < NTFS_MIN_MFT_ZONE) { 425 - new_zlen = NTFS_MIN_MFT_ZONE; 426 - if (new_zlen > zlen) 427 - new_zlen = zlen; 428 - } 422 + ztrim = clamp_val(len, zlen2, zlen); 423 + new_zlen = max_t(size_t, zlen - ztrim, NTFS_MIN_MFT_ZONE); 429 424 430 425 wnd_zone_set(wnd, zlcn, new_zlen); 431 426 432 427 /* Allocate continues clusters. */ 433 428 alen = wnd_find(wnd, len, 0, 434 429 BITMAP_FIND_MARK_AS_USED | BITMAP_FIND_FULL, &alcn); 435 - 436 - out: 437 - if (alen) { 438 - err = 0; 439 - *new_len = alen; 440 - *new_lcn = alcn; 441 - 442 - ntfs_unmap_meta(sb, alcn, alen); 443 - 444 - /* Set hint for next requests. */ 445 - if (!(opt & ALLOCATE_MFT)) 446 - sbi->used.next_free_lcn = alcn + alen; 447 - } else { 430 + if (!alen) { 448 431 err = -ENOSPC; 432 + goto up_write; 449 433 } 450 434 435 + space_found: 436 + err = 0; 437 + *new_len = alen; 438 + *new_lcn = alcn; 439 + 440 + ntfs_unmap_meta(sb, alcn, alen); 441 + 442 + /* Set hint for next requests. */ 443 + if (!(opt & ALLOCATE_MFT)) 444 + sbi->used.next_free_lcn = alcn + alen; 445 + up_write: 451 446 up_write(&wnd->rw_lock); 452 447 return err; 453 448 } ··· 1077 1080 } 1078 1081 1079 1082 int ntfs_sb_write_run(struct ntfs_sb_info *sbi, const struct runs_tree *run, 1080 - u64 vbo, const void *buf, size_t bytes) 1083 + u64 vbo, const void *buf, size_t bytes, int sync) 1081 1084 { 1082 1085 struct super_block *sb = sbi->sb; 1083 1086 u8 cluster_bits = sbi->cluster_bits; ··· 1096 1099 len = ((u64)clen << cluster_bits) - off; 1097 1100 1098 1101 for (;;) { 1099 - u32 op = len < bytes ? len : bytes; 1100 - int err = ntfs_sb_write(sb, lbo, op, buf, 0); 1102 + u32 op = min_t(u64, len, bytes); 1103 + int err = ntfs_sb_write(sb, lbo, op, buf, sync); 1101 1104 1102 1105 if (err) 1103 1106 return err; ··· 1297 1300 nb->off = off = lbo & (blocksize - 1); 1298 1301 1299 1302 for (;;) { 1300 - u32 len32 = len < bytes ? 
len : bytes; 1303 + u32 len32 = min_t(u64, len, bytes); 1301 1304 sector_t block = lbo >> sb->s_blocksize_bits; 1302 1305 1303 1306 do { ··· 2172 2175 2173 2176 /* Write main SDS bucket. */ 2174 2177 err = ntfs_sb_write_run(sbi, &ni->file.run, sbi->security.next_off, 2175 - d_security, aligned_sec_size); 2178 + d_security, aligned_sec_size, 0); 2176 2179 2177 2180 if (err) 2178 2181 goto out; ··· 2190 2193 2191 2194 /* Write copy SDS bucket. */ 2192 2195 err = ntfs_sb_write_run(sbi, &ni->file.run, mirr_off, d_security, 2193 - aligned_sec_size); 2196 + aligned_sec_size, 0); 2194 2197 if (err) 2195 2198 goto out; 2196 2199
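Several open-coded ternaries in the fsntfs.c hunks above (`zlen > len ? len : zlen`, `len < bytes ? len : bytes`, and the nested `ztrim` expression) are replaced by the kernel's `min_t()` and `clamp_val()` helpers. A userspace illustration with simplified stand-in macros (these are not the kernel's actual definitions, which add stricter type handling):

```c
#include <assert.h>

/* Simplified stand-ins for the kernel helpers; illustrative only. */
#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))
#define clamp_val(val, lo, hi) \
	((val) < (lo) ? (lo) : ((val) > (hi) ? (hi) : (val)))

/*
 * The old ztrim = len > zlen ? zlen : (len > zlen2 ? len : zlen2)
 * collapses to clamp_val(len, zlen2, zlen): clamp len into the
 * range [zlen/2, zlen].
 */
static unsigned long ztrim_calc(unsigned long len, unsigned long zlen)
{
	unsigned long zlen2 = zlen >> 1;

	return clamp_val(len, zlen2, zlen);
}
```

Walking both forms through the three cases (below, inside, above the range) shows they agree term by term.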
+50 -116
fs/ntfs3/index.c
··· 8 8 #include <linux/blkdev.h> 9 9 #include <linux/buffer_head.h> 10 10 #include <linux/fs.h> 11 - #include <linux/nls.h> 11 + #include <linux/kernel.h> 12 12 13 13 #include "debug.h" 14 14 #include "ntfs.h" ··· 671 671 const struct INDEX_HDR *hdr, const void *key, 672 672 size_t key_len, const void *ctx, int *diff) 673 673 { 674 - struct NTFS_DE *e; 674 + struct NTFS_DE *e, *found = NULL; 675 675 NTFS_CMP_FUNC cmp = indx->cmp; 676 + int min_idx = 0, mid_idx, max_idx = 0; 677 + int diff2; 678 + int table_size = 8; 676 679 u32 e_size, e_key_len; 677 680 u32 end = le32_to_cpu(hdr->used); 678 681 u32 off = le32_to_cpu(hdr->de_off); 682 + u16 offs[128]; 679 683 680 - #ifdef NTFS3_INDEX_BINARY_SEARCH 681 - int max_idx = 0, fnd, min_idx; 682 - int nslots = 64; 683 - u16 *offs; 684 - 685 - if (end > 0x10000) 686 - goto next; 687 - 688 - offs = kmalloc(sizeof(u16) * nslots, GFP_NOFS); 689 - if (!offs) 690 - goto next; 691 - 692 - /* Use binary search algorithm. */ 693 - next1: 694 - if (off + sizeof(struct NTFS_DE) > end) { 695 - e = NULL; 696 - goto out1; 697 - } 698 - e = Add2Ptr(hdr, off); 699 - e_size = le16_to_cpu(e->size); 700 - 701 - if (e_size < sizeof(struct NTFS_DE) || off + e_size > end) { 702 - e = NULL; 703 - goto out1; 704 - } 705 - 706 - if (max_idx >= nslots) { 707 - u16 *ptr; 708 - int new_slots = ALIGN(2 * nslots, 8); 709 - 710 - ptr = kmalloc(sizeof(u16) * new_slots, GFP_NOFS); 711 - if (ptr) 712 - memcpy(ptr, offs, sizeof(u16) * max_idx); 713 - kfree(offs); 714 - offs = ptr; 715 - nslots = new_slots; 716 - if (!ptr) 717 - goto next; 718 - } 719 - 720 - /* Store entry table. */ 721 - offs[max_idx] = off; 722 - 723 - if (!de_is_last(e)) { 724 - off += e_size; 725 - max_idx += 1; 726 - goto next1; 727 - } 728 - 729 - /* 730 - * Table of pointers is created. 731 - * Use binary search to find entry that is <= to the search value. 
732 - */ 733 - fnd = -1; 734 - min_idx = 0; 735 - 736 - while (min_idx <= max_idx) { 737 - int mid_idx = min_idx + ((max_idx - min_idx) >> 1); 738 - int diff2; 739 - 740 - e = Add2Ptr(hdr, offs[mid_idx]); 741 - 742 - e_key_len = le16_to_cpu(e->key_size); 743 - 744 - diff2 = (*cmp)(key, key_len, e + 1, e_key_len, ctx); 745 - 746 - if (!diff2) { 747 - *diff = 0; 748 - goto out1; 749 - } 750 - 751 - if (diff2 < 0) { 752 - max_idx = mid_idx - 1; 753 - fnd = mid_idx; 754 - if (!fnd) 755 - break; 756 - } else { 757 - min_idx = mid_idx + 1; 758 - } 759 - } 760 - 761 - if (fnd == -1) { 762 - e = NULL; 763 - goto out1; 764 - } 765 - 766 - *diff = -1; 767 - e = Add2Ptr(hdr, offs[fnd]); 768 - 769 - out1: 770 - kfree(offs); 771 - 772 - return e; 773 - #endif 774 - 775 - next: 776 - /* 777 - * Entries index are sorted. 778 - * Enumerate all entries until we find entry 779 - * that is <= to the search value. 780 - */ 684 + fill_table: 781 685 if (off + sizeof(struct NTFS_DE) > end) 782 686 return NULL; 783 687 ··· 691 787 if (e_size < sizeof(struct NTFS_DE) || off + e_size > end) 692 788 return NULL; 693 789 694 - off += e_size; 790 + if (!de_is_last(e)) { 791 + offs[max_idx] = off; 792 + off += e_size; 695 793 794 + max_idx++; 795 + if (max_idx < table_size) 796 + goto fill_table; 797 + 798 + max_idx--; 799 + } 800 + 801 + binary_search: 696 802 e_key_len = le16_to_cpu(e->key_size); 697 803 698 - *diff = (*cmp)(key, key_len, e + 1, e_key_len, ctx); 699 - if (!*diff) 700 - return e; 804 + diff2 = (*cmp)(key, key_len, e + 1, e_key_len, ctx); 805 + if (diff2 > 0) { 806 + if (found) { 807 + min_idx = mid_idx + 1; 808 + } else { 809 + if (de_is_last(e)) 810 + return NULL; 701 811 702 - if (*diff <= 0) 703 - return e; 812 + max_idx = 0; 813 + table_size = min(table_size * 2, 814 + (int)ARRAY_SIZE(offs)); 815 + goto fill_table; 816 + } 817 + } else if (diff2 < 0) { 818 + if (found) 819 + max_idx = mid_idx - 1; 820 + else 821 + max_idx--; 704 822 705 - if (de_is_last(e)) { 706 - *diff 
= 1; 823 + found = e; 824 + } else { 825 + *diff = 0; 707 826 return e; 708 827 } 709 - goto next; 828 + 829 + if (min_idx > max_idx) { 830 + *diff = -1; 831 + return found; 832 + } 833 + 834 + mid_idx = (min_idx + max_idx) >> 1; 835 + e = Add2Ptr(hdr, offs[mid_idx]); 836 + 837 + goto binary_search; 710 838 } 711 839 712 840 /* ··· 1072 1136 if (!e) 1073 1137 return -EINVAL; 1074 1138 1075 - if (fnd) 1076 - fnd->root_de = e; 1077 - 1139 + fnd->root_de = e; 1078 1140 err = 0; 1079 1141 1080 1142 for (;;) { ··· 1335 1401 static int indx_create_allocate(struct ntfs_index *indx, struct ntfs_inode *ni, 1336 1402 CLST *vbn) 1337 1403 { 1338 - int err = -ENOMEM; 1404 + int err; 1339 1405 struct ntfs_sb_info *sbi = ni->mi.sbi; 1340 1406 struct ATTRIB *bitmap; 1341 1407 struct ATTRIB *alloc;
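The `hdr_find_e()` rewrite above drops the `NTFS3_INDEX_BINARY_SEARCH` ifdef and the heap-allocated offset table in favor of a fixed on-stack `offs[128]` table that is refilled with a doubled window when the key lies beyond the entries scanned so far. Stripped of that table refilling, the invariant it maintains appears to be an ordinary "first entry not below the key" binary search; a standalone sketch under that reading (array and names are illustrative):

```c
#include <assert.h>

/*
 * Return the index of the first element >= key (n if every element is
 * smaller); *diff is 0 on an exact match and -1 when the returned
 * element compares greater than the key.
 */
static int find_ge(const int *arr, int n, int key, int *diff)
{
	int lo = 0, hi = n - 1, found = n;

	while (lo <= hi) {
		int mid = (lo + hi) >> 1;

		if (arr[mid] == key) {
			*diff = 0;
			return mid;
		}
		if (arr[mid] < key) {
			lo = mid + 1;
		} else {
			found = mid;	/* candidate: first element > key */
			hi = mid - 1;
		}
	}
	*diff = -1;
	return found;
}
```

The kernel version layers the bounded `offs[]` table and `fill_table`/`binary_search` labels on top of this so it never allocates and never scans past `table_size` entries at a time.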
+81 -78
fs/ntfs3/inode.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 8 #include <linux/buffer_head.h> 10 9 #include <linux/fs.h> 11 - #include <linux/iversion.h> 12 10 #include <linux/mpage.h> 13 11 #include <linux/namei.h> 14 12 #include <linux/nls.h> ··· 47 49 48 50 inode->i_op = NULL; 49 51 /* Setup 'uid' and 'gid' */ 50 - inode->i_uid = sbi->options.fs_uid; 51 - inode->i_gid = sbi->options.fs_gid; 52 + inode->i_uid = sbi->options->fs_uid; 53 + inode->i_gid = sbi->options->fs_gid; 52 54 53 55 err = mi_init(&ni->mi, sbi, ino); 54 56 if (err) ··· 222 224 if (!attr->non_res) { 223 225 ni->i_valid = inode->i_size = rsize; 224 226 inode_set_bytes(inode, rsize); 225 - t32 = asize; 226 - } else { 227 - t32 = le16_to_cpu(attr->nres.run_off); 228 227 } 229 228 230 - mode = S_IFREG | (0777 & sbi->options.fs_fmask_inv); 229 + mode = S_IFREG | (0777 & sbi->options->fs_fmask_inv); 231 230 232 231 if (!attr->non_res) { 233 232 ni->ni_flags |= NI_FLAG_RESIDENT; ··· 267 272 goto out; 268 273 269 274 mode = sb->s_root 270 - ? (S_IFDIR | (0777 & sbi->options.fs_dmask_inv)) 275 + ? (S_IFDIR | (0777 & sbi->options->fs_dmask_inv)) 271 276 : (S_IFDIR | 0777); 272 277 goto next_attr; 273 278 ··· 310 315 rp_fa = ni_parse_reparse(ni, attr, &rp); 311 316 switch (rp_fa) { 312 317 case REPARSE_LINK: 313 - if (!attr->non_res) { 314 - inode->i_size = rsize; 315 - inode_set_bytes(inode, rsize); 316 - t32 = asize; 317 - } else { 318 - inode->i_size = 319 - le64_to_cpu(attr->nres.data_size); 320 - t32 = le16_to_cpu(attr->nres.run_off); 321 - } 318 + /* 319 + * Normal symlink. 320 + * Assume one unicode symbol == one utf8. 321 + */ 322 + inode->i_size = le16_to_cpu(rp.SymbolicLinkReparseBuffer 323 + .PrintNameLength) / 324 + sizeof(u16); 322 325 323 - /* Looks like normal symlink. */ 324 326 ni->i_valid = inode->i_size; 325 327 326 328 /* Clear directory bit. 
*/ ··· 414 422 ni->std_fa &= ~FILE_ATTRIBUTE_DIRECTORY; 415 423 inode->i_op = &ntfs_link_inode_operations; 416 424 inode->i_fop = NULL; 417 - inode_nohighmem(inode); // ?? 425 + inode_nohighmem(inode); 418 426 } else if (S_ISREG(mode)) { 419 427 ni->std_fa &= ~FILE_ATTRIBUTE_DIRECTORY; 420 428 inode->i_op = &ntfs_file_inode_operations; ··· 435 443 goto out; 436 444 } 437 445 438 - if ((sbi->options.sys_immutable && 446 + if ((sbi->options->sys_immutable && 439 447 (std5->fa & FILE_ATTRIBUTE_SYSTEM)) && 440 448 !S_ISFIFO(mode) && !S_ISSOCK(mode) && !S_ISLNK(mode)) { 441 449 inode->i_flags |= S_IMMUTABLE; ··· 1192 1200 struct REPARSE_DATA_BUFFER *rp = NULL; 1193 1201 bool rp_inserted = false; 1194 1202 1203 + ni_lock_dir(dir_ni); 1204 + 1195 1205 dir_root = indx_get_root(&dir_ni->dir, dir_ni, NULL, NULL); 1196 - if (!dir_root) 1197 - return ERR_PTR(-EINVAL); 1206 + if (!dir_root) { 1207 + err = -EINVAL; 1208 + goto out1; 1209 + } 1198 1210 1199 1211 if (S_ISDIR(mode)) { 1200 1212 /* Use parent's directory attributes. */ ··· 1240 1244 * } 1241 1245 */ 1242 1246 } else if (S_ISREG(mode)) { 1243 - if (sbi->options.sparse) { 1247 + if (sbi->options->sparse) { 1244 1248 /* Sparsed regular file, cause option 'sparse'. */ 1245 1249 fa = FILE_ATTRIBUTE_SPARSE_FILE | 1246 1250 FILE_ATTRIBUTE_ARCHIVE; ··· 1482 1486 asize = ALIGN(SIZEOF_RESIDENT + nsize, 8); 1483 1487 t16 = PtrOffset(rec, attr); 1484 1488 1485 - /* 0x78 - the size of EA + EAINFO to store WSL */ 1489 + /* 1490 + * Below function 'ntfs_save_wsl_perm' requires 0x78 bytes. 1491 + * It is good idea to keep extened attributes resident. 
1492 + */ 1486 1493 if (asize + t16 + 0x78 + 8 > sbi->record_size) { 1487 1494 CLST alen; 1488 1495 CLST clst = bytes_to_cluster(sbi, nsize); ··· 1520 1521 } 1521 1522 1522 1523 asize = SIZEOF_NONRESIDENT + ALIGN(err, 8); 1523 - inode->i_size = nsize; 1524 1524 } else { 1525 1525 attr->res.data_off = SIZEOF_RESIDENT_LE; 1526 1526 attr->res.data_size = cpu_to_le32(nsize); 1527 1527 memcpy(Add2Ptr(attr, SIZEOF_RESIDENT), rp, nsize); 1528 - inode->i_size = nsize; 1529 1528 nsize = 0; 1530 1529 } 1530 + /* Size of symlink equals the length of input string. */ 1531 + inode->i_size = size; 1531 1532 1532 1533 attr->size = cpu_to_le32(asize); 1533 1534 ··· 1550 1551 if (err) 1551 1552 goto out6; 1552 1553 1554 + /* Unlock parent directory before ntfs_init_acl. */ 1555 + ni_unlock(dir_ni); 1556 + 1553 1557 inode->i_generation = le16_to_cpu(rec->seq); 1554 1558 1555 1559 dir->i_mtime = dir->i_ctime = inode->i_atime; ··· 1564 1562 inode->i_op = &ntfs_link_inode_operations; 1565 1563 inode->i_fop = NULL; 1566 1564 inode->i_mapping->a_ops = &ntfs_aops; 1565 + inode->i_size = size; 1566 + inode_nohighmem(inode); 1567 1567 } else if (S_ISREG(mode)) { 1568 1568 inode->i_op = &ntfs_file_inode_operations; 1569 1569 inode->i_fop = &ntfs_file_operations; ··· 1581 1577 if (!S_ISLNK(mode) && (sb->s_flags & SB_POSIXACL)) { 1582 1578 err = ntfs_init_acl(mnt_userns, inode, dir); 1583 1579 if (err) 1584 - goto out6; 1580 + goto out7; 1585 1581 } else 1586 1582 #endif 1587 1583 { ··· 1590 1586 1591 1587 /* Write non resident data. */ 1592 1588 if (nsize) { 1593 - err = ntfs_sb_write_run(sbi, &ni->file.run, 0, rp, nsize); 1589 + err = ntfs_sb_write_run(sbi, &ni->file.run, 0, rp, nsize, 0); 1594 1590 if (err) 1595 1591 goto out7; 1596 1592 } ··· 1611 1607 out7: 1612 1608 1613 1609 /* Undo 'indx_insert_entry'. 
*/ 1610 + ni_lock_dir(dir_ni); 1614 1611 indx_delete_entry(&dir_ni->dir, dir_ni, new_de + 1, 1615 1612 le16_to_cpu(new_de->key_size), sbi); 1613 + /* ni_unlock(dir_ni); will be called later. */ 1616 1614 out6: 1617 1615 if (rp_inserted) 1618 1616 ntfs_remove_reparse(sbi, IO_REPARSE_TAG_SYMLINK, &new_de->ref); ··· 1638 1632 kfree(rp); 1639 1633 1640 1634 out1: 1641 - if (err) 1635 + if (err) { 1636 + ni_unlock(dir_ni); 1642 1637 return ERR_PTR(err); 1638 + } 1643 1639 1644 1640 unlock_new_inode(inode); 1645 1641 ··· 1762 1754 static noinline int ntfs_readlink_hlp(struct inode *inode, char *buffer, 1763 1755 int buflen) 1764 1756 { 1765 - int i, err = 0; 1757 + int i, err = -EINVAL; 1766 1758 struct ntfs_inode *ni = ntfs_i(inode); 1767 1759 struct super_block *sb = inode->i_sb; 1768 1760 struct ntfs_sb_info *sbi = sb->s_fs_info; 1769 - u64 i_size = inode->i_size; 1770 - u16 nlen = 0; 1761 + u64 size; 1762 + u16 ulen = 0; 1771 1763 void *to_free = NULL; 1772 1764 struct REPARSE_DATA_BUFFER *rp; 1773 - struct le_str *uni; 1765 + const __le16 *uname; 1774 1766 struct ATTRIB *attr; 1775 1767 1776 1768 /* Reparse data present. Try to parse it. */ ··· 1779 1771 1780 1772 *buffer = 0; 1781 1773 1782 - /* Read into temporal buffer. 
*/ 1783 - if (i_size > sbi->reparse.max_size || i_size <= sizeof(u32)) { 1784 - err = -EINVAL; 1785 - goto out; 1786 - } 1787 - 1788 1774 attr = ni_find_attr(ni, NULL, NULL, ATTR_REPARSE, NULL, 0, NULL, NULL); 1789 - if (!attr) { 1790 - err = -EINVAL; 1775 + if (!attr) 1791 1776 goto out; 1792 - } 1793 1777 1794 1778 if (!attr->non_res) { 1795 - rp = resident_data_ex(attr, i_size); 1796 - if (!rp) { 1797 - err = -EINVAL; 1779 + rp = resident_data_ex(attr, sizeof(struct REPARSE_DATA_BUFFER)); 1780 + if (!rp) 1798 1781 goto out; 1799 - } 1782 + size = le32_to_cpu(attr->res.data_size); 1800 1783 } else { 1801 - rp = kmalloc(i_size, GFP_NOFS); 1784 + size = le64_to_cpu(attr->nres.data_size); 1785 + rp = NULL; 1786 + } 1787 + 1788 + if (size > sbi->reparse.max_size || size <= sizeof(u32)) 1789 + goto out; 1790 + 1791 + if (!rp) { 1792 + rp = kmalloc(size, GFP_NOFS); 1802 1793 if (!rp) { 1803 1794 err = -ENOMEM; 1804 1795 goto out; 1805 1796 } 1806 1797 to_free = rp; 1807 - err = ntfs_read_run_nb(sbi, &ni->file.run, 0, rp, i_size, NULL); 1798 + /* Read into temporal buffer. */ 1799 + err = ntfs_read_run_nb(sbi, &ni->file.run, 0, rp, size, NULL); 1808 1800 if (err) 1809 1801 goto out; 1810 1802 } 1811 - 1812 - err = -EINVAL; 1813 1803 1814 1804 /* Microsoft Tag. */ 1815 1805 switch (rp->ReparseTag) { 1816 1806 case IO_REPARSE_TAG_MOUNT_POINT: 1817 1807 /* Mount points and junctions. */ 1818 1808 /* Can we use 'Rp->MountPointReparseBuffer.PrintNameLength'? 
*/ 1819 - if (i_size <= offsetof(struct REPARSE_DATA_BUFFER, 1820 - MountPointReparseBuffer.PathBuffer)) 1809 + if (size <= offsetof(struct REPARSE_DATA_BUFFER, 1810 + MountPointReparseBuffer.PathBuffer)) 1821 1811 goto out; 1822 - uni = Add2Ptr(rp, 1823 - offsetof(struct REPARSE_DATA_BUFFER, 1824 - MountPointReparseBuffer.PathBuffer) + 1825 - le16_to_cpu(rp->MountPointReparseBuffer 1826 - .PrintNameOffset) - 1827 - 2); 1828 - nlen = le16_to_cpu(rp->MountPointReparseBuffer.PrintNameLength); 1812 + uname = Add2Ptr(rp, 1813 + offsetof(struct REPARSE_DATA_BUFFER, 1814 + MountPointReparseBuffer.PathBuffer) + 1815 + le16_to_cpu(rp->MountPointReparseBuffer 1816 + .PrintNameOffset)); 1817 + ulen = le16_to_cpu(rp->MountPointReparseBuffer.PrintNameLength); 1829 1818 break; 1830 1819 1831 1820 case IO_REPARSE_TAG_SYMLINK: 1832 1821 /* FolderSymbolicLink */ 1833 1822 /* Can we use 'Rp->SymbolicLinkReparseBuffer.PrintNameLength'? */ 1834 - if (i_size <= offsetof(struct REPARSE_DATA_BUFFER, 1835 - SymbolicLinkReparseBuffer.PathBuffer)) 1823 + if (size <= offsetof(struct REPARSE_DATA_BUFFER, 1824 + SymbolicLinkReparseBuffer.PathBuffer)) 1836 1825 goto out; 1837 - uni = Add2Ptr(rp, 1838 - offsetof(struct REPARSE_DATA_BUFFER, 1839 - SymbolicLinkReparseBuffer.PathBuffer) + 1840 - le16_to_cpu(rp->SymbolicLinkReparseBuffer 1841 - .PrintNameOffset) - 1842 - 2); 1843 - nlen = le16_to_cpu( 1826 + uname = Add2Ptr( 1827 + rp, offsetof(struct REPARSE_DATA_BUFFER, 1828 + SymbolicLinkReparseBuffer.PathBuffer) + 1829 + le16_to_cpu(rp->SymbolicLinkReparseBuffer 1830 + .PrintNameOffset)); 1831 + ulen = le16_to_cpu( 1844 1832 rp->SymbolicLinkReparseBuffer.PrintNameLength); 1845 1833 break; 1846 1834 ··· 1868 1864 goto out; 1869 1865 } 1870 1866 if (!IsReparseTagNameSurrogate(rp->ReparseTag) || 1871 - i_size <= sizeof(struct REPARSE_POINT)) { 1867 + size <= sizeof(struct REPARSE_POINT)) { 1872 1868 goto out; 1873 1869 } 1874 1870 1875 1871 /* Users tag. 
*/ 1876 - uni = Add2Ptr(rp, sizeof(struct REPARSE_POINT) - 2); 1877 - nlen = le16_to_cpu(rp->ReparseDataLength) - 1872 + uname = Add2Ptr(rp, sizeof(struct REPARSE_POINT)); 1873 + ulen = le16_to_cpu(rp->ReparseDataLength) - 1878 1874 sizeof(struct REPARSE_POINT); 1879 1875 } 1880 1876 1881 1877 /* Convert nlen from bytes to UNICODE chars. */ 1882 - nlen >>= 1; 1878 + ulen >>= 1; 1883 1879 1884 1880 /* Check that name is available. */ 1885 - if (!nlen || &uni->name[nlen] > (__le16 *)Add2Ptr(rp, i_size)) 1881 + if (!ulen || uname + ulen > (__le16 *)Add2Ptr(rp, size)) 1886 1882 goto out; 1887 1883 1888 1884 /* If name is already zero terminated then truncate it now. */ 1889 - if (!uni->name[nlen - 1]) 1890 - nlen -= 1; 1891 - uni->len = nlen; 1885 + if (!uname[ulen - 1]) 1886 + ulen -= 1; 1892 1887 1893 - err = ntfs_utf16_to_nls(sbi, uni, buffer, buflen); 1888 + err = ntfs_utf16_to_nls(sbi, uname, ulen, buffer, buflen); 1894 1889 1895 1890 if (err < 0) 1896 1891 goto out;
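`ntfs_readlink_hlp()` above switches from a `struct le_str` (which carried its own length field and forced the `-2` offset adjustments) to a raw `__le16 *uname` plus a byte length. The length handling — converting `PrintNameLength` bytes to UTF-16 code units and truncating a trailing NUL — can be sketched in isolation (types simplified to host-endian `uint16_t`; the helper name is invented):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Convert a byte length to UTF-16 code units and drop a trailing NUL,
 * as the rewritten code does with "ulen >>= 1" and the
 * "!uname[ulen - 1]" check before calling ntfs_utf16_to_nls().
 */
static unsigned name_chars(const uint16_t *uname, unsigned len_bytes)
{
	unsigned ulen = len_bytes >> 1;	/* bytes -> UTF-16 units */

	if (ulen && !uname[ulen - 1])	/* already zero terminated */
		ulen -= 1;
	return ulen;
}
```

The real code additionally range-checks `uname + ulen` against the end of the reparse buffer before converting.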
+5
fs/ntfs3/lib/decompress_common.h
··· 5 5 * Copyright (C) 2015 Eric Biggers 6 6 */ 7 7 8 + #ifndef _LINUX_NTFS3_LIB_DECOMPRESS_COMMON_H 9 + #define _LINUX_NTFS3_LIB_DECOMPRESS_COMMON_H 10 + 8 11 #include <linux/string.h> 9 12 #include <linux/compiler.h> 10 13 #include <linux/types.h> ··· 339 336 340 337 return dst; 341 338 } 339 + 340 + #endif /* _LINUX_NTFS3_LIB_DECOMPRESS_COMMON_H */
+6
fs/ntfs3/lib/lib.h
··· 7 7 * - linux kernel code style 8 8 */ 9 9 10 + #ifndef _LINUX_NTFS3_LIB_LIB_H 11 + #define _LINUX_NTFS3_LIB_LIB_H 12 + 13 + #include <linux/types.h> 10 14 11 15 /* globals from xpress_decompress.c */ 12 16 struct xpress_decompressor *xpress_allocate_decompressor(void); ··· 28 24 const void *__restrict compressed_data, 29 25 size_t compressed_size, void *__restrict uncompressed_data, 30 26 size_t uncompressed_size); 27 + 28 + #endif /* _LINUX_NTFS3_LIB_LIB_H */
+6 -6
fs/ntfs3/lznt.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 - #include <linux/fs.h> 11 - #include <linux/nls.h> 8 + #include <linux/kernel.h> 9 + #include <linux/slab.h> 10 + #include <linux/stddef.h> 11 + #include <linux/string.h> 12 + #include <linux/types.h> 12 13 13 14 #include "debug.h" 14 - #include "ntfs.h" 15 15 #include "ntfs_fs.h" 16 16 17 17 // clang-format off ··· 292 292 /* 293 293 * get_lznt_ctx 294 294 * @level: 0 - Standard compression. 295 - * !0 - Best compression, requires a lot of cpu. 295 + * !0 - Best compression, requires a lot of cpu. 296 296 */ 297 297 struct lznt *get_lznt_ctx(int level) 298 298 {
-24
fs/ntfs3/namei.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 8 #include <linux/fs.h> 11 - #include <linux/iversion.h> 12 - #include <linux/namei.h> 13 9 #include <linux/nls.h> 14 10 15 11 #include "debug.h" ··· 95 99 static int ntfs_create(struct user_namespace *mnt_userns, struct inode *dir, 96 100 struct dentry *dentry, umode_t mode, bool excl) 97 101 { 98 - struct ntfs_inode *ni = ntfs_i(dir); 99 102 struct inode *inode; 100 - 101 - ni_lock_dir(ni); 102 103 103 104 inode = ntfs_create_inode(mnt_userns, dir, dentry, NULL, S_IFREG | mode, 104 105 0, NULL, 0, NULL); 105 - 106 - ni_unlock(ni); 107 106 108 107 return IS_ERR(inode) ? PTR_ERR(inode) : 0; 109 108 } ··· 111 120 static int ntfs_mknod(struct user_namespace *mnt_userns, struct inode *dir, 112 121 struct dentry *dentry, umode_t mode, dev_t rdev) 113 122 { 114 - struct ntfs_inode *ni = ntfs_i(dir); 115 123 struct inode *inode; 116 - 117 - ni_lock_dir(ni); 118 124 119 125 inode = ntfs_create_inode(mnt_userns, dir, dentry, NULL, mode, rdev, 120 126 NULL, 0, NULL); 121 - 122 - ni_unlock(ni); 123 127 124 128 return IS_ERR(inode) ? PTR_ERR(inode) : 0; 125 129 } ··· 186 200 { 187 201 u32 size = strlen(symname); 188 202 struct inode *inode; 189 - struct ntfs_inode *ni = ntfs_i(dir); 190 - 191 - ni_lock_dir(ni); 192 203 193 204 inode = ntfs_create_inode(mnt_userns, dir, dentry, NULL, S_IFLNK | 0777, 194 205 0, symname, size, NULL); 195 - 196 - ni_unlock(ni); 197 206 198 207 return IS_ERR(inode) ? PTR_ERR(inode) : 0; 199 208 } ··· 200 219 struct dentry *dentry, umode_t mode) 201 220 { 202 221 struct inode *inode; 203 - struct ntfs_inode *ni = ntfs_i(dir); 204 - 205 - ni_lock_dir(ni); 206 222 207 223 inode = ntfs_create_inode(mnt_userns, dir, dentry, NULL, S_IFDIR | mode, 208 224 0, NULL, 0, NULL); 209 - 210 - ni_unlock(ni); 211 225 212 226 return IS_ERR(inode) ? PTR_ERR(inode) : 0; 213 227 }
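The namei.c hunks delete the `ni_lock_dir()`/`ni_unlock()` pairs from `ntfs_create`, `ntfs_mknod`, `ntfs_symlink`, and `ntfs_mkdir` because `ntfs_create_inode()` (see the inode.c changes above) now takes the parent-directory lock itself. A toy sketch of moving locking from call sites into the callee (the "lock" here is just a flag; the real code uses the parent `ntfs_inode` lock):

```c
#include <assert.h>

static int dir_locked;
static int entries;

/* Callee owns the locking now, so every caller is covered. */
static int create_inode(void)
{
	assert(!dir_locked);
	dir_locked = 1;
	entries++;		/* directory modification under the lock */
	dir_locked = 0;
	return 0;
}

/* Call sites shrink to a plain call, as in the diff above. */
static int do_create(void)
{
	return create_inode();
}
```

Besides shrinking four call sites, this lets `ntfs_create_inode()` drop the lock before `ntfs_init_acl()`, which the inode.c hunks do explicitly.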
+14 -6
fs/ntfs3/ntfs.h
··· 10 10 #ifndef _LINUX_NTFS3_NTFS_H 11 11 #define _LINUX_NTFS3_NTFS_H 12 12 13 - /* TODO: Check 4K MFT record and 512 bytes cluster. */ 13 + #include <linux/blkdev.h> 14 + #include <linux/build_bug.h> 15 + #include <linux/kernel.h> 16 + #include <linux/stddef.h> 17 + #include <linux/string.h> 18 + #include <linux/types.h> 14 19 15 - /* Activate this define to use binary search in indexes. */ 16 - #define NTFS3_INDEX_BINARY_SEARCH 20 + #include "debug.h" 21 + 22 + /* TODO: Check 4K MFT record and 512 bytes cluster. */ 17 23 18 24 /* Check each run for marked clusters. */ 19 25 #define NTFS3_CHECK_FREE_CLST 20 26 21 27 #define NTFS_NAME_LEN 255 22 28 23 - /* ntfs.sys used 500 maximum links on-disk struct allows up to 0xffff. */ 24 - #define NTFS_LINK_MAX 0x400 25 - //#define NTFS_LINK_MAX 0xffff 29 + /* 30 + * ntfs.sys used 500 maximum links on-disk struct allows up to 0xffff. 31 + * xfstest generic/041 creates 3003 hardlinks. 32 + */ 33 + #define NTFS_LINK_MAX 4000 26 34 27 35 /* 28 36 * Activate to use 64 bit clusters instead of 32 bits in ntfs.sys.
+47 -20
fs/ntfs3/ntfs_fs.h
··· 9 9 #ifndef _LINUX_NTFS3_NTFS_FS_H 10 10 #define _LINUX_NTFS3_NTFS_FS_H 11 11 12 + #include <linux/blkdev.h> 13 + #include <linux/buffer_head.h> 14 + #include <linux/cleancache.h> 15 + #include <linux/fs.h> 16 + #include <linux/highmem.h> 17 + #include <linux/kernel.h> 18 + #include <linux/mm.h> 19 + #include <linux/mutex.h> 20 + #include <linux/page-flags.h> 21 + #include <linux/pagemap.h> 22 + #include <linux/rbtree.h> 23 + #include <linux/rwsem.h> 24 + #include <linux/slab.h> 25 + #include <linux/string.h> 26 + #include <linux/time64.h> 27 + #include <linux/types.h> 28 + #include <linux/uidgid.h> 29 + #include <asm/div64.h> 30 + #include <asm/page.h> 31 + 32 + #include "debug.h" 33 + #include "ntfs.h" 34 + 35 + struct dentry; 36 + struct fiemap_extent_info; 37 + struct user_namespace; 38 + struct page; 39 + struct writeback_control; 40 + enum utf16_endian; 41 + 42 + 12 43 #define MINUS_ONE_T ((size_t)(-1)) 13 44 /* Biggest MFT / smallest cluster */ 14 45 #define MAXIMUM_BYTES_PER_MFT 4096 ··· 83 52 // clang-format on 84 53 85 54 struct ntfs_mount_options { 55 + char *nls_name; 86 56 struct nls_table *nls; 87 57 88 58 kuid_t fs_uid; ··· 91 59 u16 fs_fmask_inv; 92 60 u16 fs_dmask_inv; 93 61 94 - unsigned uid : 1, /* uid was set. */ 95 - gid : 1, /* gid was set. */ 96 - fmask : 1, /* fmask was set. */ 97 - dmask : 1, /* dmask was set. */ 98 - sys_immutable : 1, /* Immutable system files. */ 99 - discard : 1, /* Issue discard requests on deletions. */ 100 - sparse : 1, /* Create sparse files. */ 101 - showmeta : 1, /* Show meta files. */ 102 - nohidden : 1, /* Do not show hidden files. */ 103 - force : 1, /* Rw mount dirty volume. */ 104 - no_acs_rules : 1, /*Exclude acs rules. */ 105 - prealloc : 1 /* Preallocate space when file is growing. */ 106 - ; 62 + unsigned fmask : 1; /* fmask was set. */ 63 + unsigned dmask : 1; /*dmask was set. */ 64 + unsigned sys_immutable : 1; /* Immutable system files. 
*/ 65 + unsigned discard : 1; /* Issue discard requests on deletions. */ 66 + unsigned sparse : 1; /* Create sparse files. */ 67 + unsigned showmeta : 1; /* Show meta files. */ 68 + unsigned nohidden : 1; /* Do not show hidden files. */ 69 + unsigned force : 1; /* RW mount dirty volume. */ 70 + unsigned noacsrules : 1; /* Exclude acs rules. */ 71 + unsigned prealloc : 1; /* Preallocate space when file is growing. */ 107 72 }; 108 73 109 74 /* Special value to unpack and deallocate. */ ··· 211 182 u32 blocks_per_cluster; // cluster_size / sb->s_blocksize 212 183 213 184 u32 record_size; 214 - u32 sector_size; 215 185 u32 index_size; 216 186 217 - u8 sector_bits; 218 187 u8 cluster_bits; 219 188 u8 record_bits; 220 189 ··· 306 279 #endif 307 280 } compress; 308 281 309 - struct ntfs_mount_options options; 282 + struct ntfs_mount_options *options; 310 283 struct ratelimit_state msg_ratelimit; 311 284 }; 312 285 ··· 463 436 bool al_delete_le(struct ntfs_inode *ni, enum ATTR_TYPE type, CLST vcn, 464 437 const __le16 *name, size_t name_len, 465 438 const struct MFT_REF *ref); 466 - int al_update(struct ntfs_inode *ni); 439 + int al_update(struct ntfs_inode *ni, int sync); 467 440 static inline size_t al_aligned(size_t size) 468 441 { 469 442 return (size + 1023) & ~(size_t)1023; ··· 475 448 size_t get_set_bits_ex(const ulong *map, size_t bit, size_t nbits); 476 449 477 450 /* Globals from dir.c */ 478 - int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const struct le_str *uni, 451 + int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const __le16 *name, u32 len, 479 452 u8 *buf, int buf_len); 480 453 int ntfs_nls_to_utf16(struct ntfs_sb_info *sbi, const u8 *name, u32 name_len, 481 454 struct cpu_str *uni, u32 max_ulen, ··· 547 520 struct ATTR_LIST_ENTRY **entry); 548 521 int ni_new_attr_flags(struct ntfs_inode *ni, enum FILE_ATTRIBUTE new_fa); 549 522 enum REPARSE_SIGN ni_parse_reparse(struct ntfs_inode *ni, struct ATTRIB *attr, 550 - void *buffer); 523 + struct 
REPARSE_DATA_BUFFER *buffer); 551 524 int ni_write_inode(struct inode *inode, int sync, const char *hint); 552 525 #define _ni_write_inode(i, w) ni_write_inode(i, w, __func__) 553 526 int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo, ··· 604 577 int ntfs_sb_write(struct super_block *sb, u64 lbo, size_t bytes, 605 578 const void *buffer, int wait); 606 579 int ntfs_sb_write_run(struct ntfs_sb_info *sbi, const struct runs_tree *run, 607 - u64 vbo, const void *buf, size_t bytes); 580 + u64 vbo, const void *buf, size_t bytes, int sync); 608 581 struct buffer_head *ntfs_bread_run(struct ntfs_sb_info *sbi, 609 582 const struct runs_tree *run, u64 vbo); 610 583 int ntfs_read_run_nb(struct ntfs_sb_info *sbi, const struct runs_tree *run,
-3
fs/ntfs3/record.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 8 #include <linux/fs.h> 11 - #include <linux/nls.h> 12 9 13 10 #include "debug.h" 14 11 #include "ntfs.h"
-2
fs/ntfs3/run.c
··· 7 7 */ 8 8 9 9 #include <linux/blkdev.h> 10 - #include <linux/buffer_head.h> 11 10 #include <linux/fs.h> 12 11 #include <linux/log2.h> 13 - #include <linux/nls.h> 14 12 15 13 #include "debug.h" 16 14 #include "ntfs.h"
+323 -328
fs/ntfs3/super.c
··· 23 23 * 24 24 */ 25 25 26 - #include <linux/backing-dev.h> 27 26 #include <linux/blkdev.h> 28 27 #include <linux/buffer_head.h> 29 28 #include <linux/exportfs.h> 30 29 #include <linux/fs.h> 31 - #include <linux/iversion.h> 30 + #include <linux/fs_context.h> 31 + #include <linux/fs_parser.h> 32 32 #include <linux/log2.h> 33 33 #include <linux/module.h> 34 34 #include <linux/nls.h> 35 - #include <linux/parser.h> 36 35 #include <linux/seq_file.h> 37 36 #include <linux/statfs.h> 38 37 ··· 204 205 return ret; 205 206 } 206 207 207 - static inline void clear_mount_options(struct ntfs_mount_options *options) 208 + static inline void put_mount_options(struct ntfs_mount_options *options) 208 209 { 210 + kfree(options->nls_name); 209 211 unload_nls(options->nls); 212 + kfree(options); 210 213 } 211 214 212 215 enum Opt { ··· 224 223 Opt_nohidden, 225 224 Opt_showmeta, 226 225 Opt_acl, 227 - Opt_noatime, 228 - Opt_nls, 226 + Opt_iocharset, 229 227 Opt_prealloc, 230 - Opt_no_acs_rules, 228 + Opt_noacsrules, 231 229 Opt_err, 232 230 }; 233 231 234 - static const match_table_t ntfs_tokens = { 235 - { Opt_uid, "uid=%u" }, 236 - { Opt_gid, "gid=%u" }, 237 - { Opt_umask, "umask=%o" }, 238 - { Opt_dmask, "dmask=%o" }, 239 - { Opt_fmask, "fmask=%o" }, 240 - { Opt_immutable, "sys_immutable" }, 241 - { Opt_discard, "discard" }, 242 - { Opt_force, "force" }, 243 - { Opt_sparse, "sparse" }, 244 - { Opt_nohidden, "nohidden" }, 245 - { Opt_acl, "acl" }, 246 - { Opt_noatime, "noatime" }, 247 - { Opt_showmeta, "showmeta" }, 248 - { Opt_nls, "nls=%s" }, 249 - { Opt_prealloc, "prealloc" }, 250 - { Opt_no_acs_rules, "no_acs_rules" }, 251 - { Opt_err, NULL }, 232 + static const struct fs_parameter_spec ntfs_fs_parameters[] = { 233 + fsparam_u32("uid", Opt_uid), 234 + fsparam_u32("gid", Opt_gid), 235 + fsparam_u32oct("umask", Opt_umask), 236 + fsparam_u32oct("dmask", Opt_dmask), 237 + fsparam_u32oct("fmask", Opt_fmask), 238 + fsparam_flag_no("sys_immutable", Opt_immutable), 239 + 
fsparam_flag_no("discard", Opt_discard), 240 + fsparam_flag_no("force", Opt_force), 241 + fsparam_flag_no("sparse", Opt_sparse), 242 + fsparam_flag_no("hidden", Opt_nohidden), 243 + fsparam_flag_no("acl", Opt_acl), 244 + fsparam_flag_no("showmeta", Opt_showmeta), 245 + fsparam_flag_no("prealloc", Opt_prealloc), 246 + fsparam_flag_no("acsrules", Opt_noacsrules), 247 + fsparam_string("iocharset", Opt_iocharset), 248 + {} 252 249 }; 253 250 254 - static noinline int ntfs_parse_options(struct super_block *sb, char *options, 255 - int silent, 256 - struct ntfs_mount_options *opts) 251 + /* 252 + * Load nls table or if @nls is utf8 then return NULL. 253 + */ 254 + static struct nls_table *ntfs_load_nls(char *nls) 257 255 { 258 - char *p; 259 - substring_t args[MAX_OPT_ARGS]; 260 - int option; 261 - char nls_name[30]; 262 - struct nls_table *nls; 256 + struct nls_table *ret; 263 257 264 - opts->fs_uid = current_uid(); 265 - opts->fs_gid = current_gid(); 266 - opts->fs_fmask_inv = opts->fs_dmask_inv = ~current_umask(); 267 - nls_name[0] = 0; 258 + if (!nls) 259 + nls = CONFIG_NLS_DEFAULT; 268 260 269 - if (!options) 270 - goto out; 261 + if (strcmp(nls, "utf8") == 0) 262 + return NULL; 271 263 272 - while ((p = strsep(&options, ","))) { 273 - int token; 264 + if (strcmp(nls, CONFIG_NLS_DEFAULT) == 0) 265 + return load_nls_default(); 274 266 275 - if (!*p) 276 - continue; 267 + ret = load_nls(nls); 268 + if (ret) 269 + return ret; 277 270 278 - token = match_token(p, ntfs_tokens, args); 279 - switch (token) { 280 - case Opt_immutable: 281 - opts->sys_immutable = 1; 282 - break; 283 - case Opt_uid: 284 - if (match_int(&args[0], &option)) 285 - return -EINVAL; 286 - opts->fs_uid = make_kuid(current_user_ns(), option); 287 - if (!uid_valid(opts->fs_uid)) 288 - return -EINVAL; 289 - opts->uid = 1; 290 - break; 291 - case Opt_gid: 292 - if (match_int(&args[0], &option)) 293 - return -EINVAL; 294 - opts->fs_gid = make_kgid(current_user_ns(), option); 295 - if 
(!gid_valid(opts->fs_gid)) 296 - return -EINVAL; 297 - opts->gid = 1; 298 - break; 299 - case Opt_umask: 300 - if (match_octal(&args[0], &option)) 301 - return -EINVAL; 302 - opts->fs_fmask_inv = opts->fs_dmask_inv = ~option; 303 - opts->fmask = opts->dmask = 1; 304 - break; 305 - case Opt_dmask: 306 - if (match_octal(&args[0], &option)) 307 - return -EINVAL; 308 - opts->fs_dmask_inv = ~option; 309 - opts->dmask = 1; 310 - break; 311 - case Opt_fmask: 312 - if (match_octal(&args[0], &option)) 313 - return -EINVAL; 314 - opts->fs_fmask_inv = ~option; 315 - opts->fmask = 1; 316 - break; 317 - case Opt_discard: 318 - opts->discard = 1; 319 - break; 320 - case Opt_force: 321 - opts->force = 1; 322 - break; 323 - case Opt_sparse: 324 - opts->sparse = 1; 325 - break; 326 - case Opt_nohidden: 327 - opts->nohidden = 1; 328 - break; 329 - case Opt_acl: 271 + return ERR_PTR(-EINVAL); 272 + } 273 + 274 + static int ntfs_fs_parse_param(struct fs_context *fc, 275 + struct fs_parameter *param) 276 + { 277 + struct ntfs_mount_options *opts = fc->fs_private; 278 + struct fs_parse_result result; 279 + int opt; 280 + 281 + opt = fs_parse(fc, ntfs_fs_parameters, param, &result); 282 + if (opt < 0) 283 + return opt; 284 + 285 + switch (opt) { 286 + case Opt_uid: 287 + opts->fs_uid = make_kuid(current_user_ns(), result.uint_32); 288 + if (!uid_valid(opts->fs_uid)) 289 + return invalf(fc, "ntfs3: Invalid value for uid."); 290 + break; 291 + case Opt_gid: 292 + opts->fs_gid = make_kgid(current_user_ns(), result.uint_32); 293 + if (!gid_valid(opts->fs_gid)) 294 + return invalf(fc, "ntfs3: Invalid value for gid."); 295 + break; 296 + case Opt_umask: 297 + if (result.uint_32 & ~07777) 298 + return invalf(fc, "ntfs3: Invalid value for umask."); 299 + opts->fs_fmask_inv = ~result.uint_32; 300 + opts->fs_dmask_inv = ~result.uint_32; 301 + opts->fmask = 1; 302 + opts->dmask = 1; 303 + break; 304 + case Opt_dmask: 305 + if (result.uint_32 & ~07777) 306 + return invalf(fc, "ntfs3: Invalid value 
for dmask."); 307 + opts->fs_dmask_inv = ~result.uint_32; 308 + opts->dmask = 1; 309 + break; 310 + case Opt_fmask: 311 + if (result.uint_32 & ~07777) 312 + return invalf(fc, "ntfs3: Invalid value for fmask."); 313 + opts->fs_fmask_inv = ~result.uint_32; 314 + opts->fmask = 1; 315 + break; 316 + case Opt_immutable: 317 + opts->sys_immutable = result.negated ? 0 : 1; 318 + break; 319 + case Opt_discard: 320 + opts->discard = result.negated ? 0 : 1; 321 + break; 322 + case Opt_force: 323 + opts->force = result.negated ? 0 : 1; 324 + break; 325 + case Opt_sparse: 326 + opts->sparse = result.negated ? 0 : 1; 327 + break; 328 + case Opt_nohidden: 329 + opts->nohidden = result.negated ? 1 : 0; 330 + break; 331 + case Opt_acl: 332 + if (!result.negated) 330 333 #ifdef CONFIG_NTFS3_FS_POSIX_ACL 331 - sb->s_flags |= SB_POSIXACL; 332 - break; 334 + fc->sb_flags |= SB_POSIXACL; 333 335 #else 334 - ntfs_err(sb, "support for ACL not compiled in!"); 335 - return -EINVAL; 336 + return invalf(fc, "ntfs3: Support for ACL not compiled in!"); 336 337 #endif 337 - case Opt_noatime: 338 - sb->s_flags |= SB_NOATIME; 339 - break; 340 - case Opt_showmeta: 341 - opts->showmeta = 1; 342 - break; 343 - case Opt_nls: 344 - match_strlcpy(nls_name, &args[0], sizeof(nls_name)); 345 - break; 346 - case Opt_prealloc: 347 - opts->prealloc = 1; 348 - break; 349 - case Opt_no_acs_rules: 350 - opts->no_acs_rules = 1; 351 - break; 352 - default: 353 - if (!silent) 354 - ntfs_err( 355 - sb, 356 - "Unrecognized mount option \"%s\" or missing value", 357 - p); 358 - //return -EINVAL; 359 - } 338 + else 339 + fc->sb_flags &= ~SB_POSIXACL; 340 + break; 341 + case Opt_showmeta: 342 + opts->showmeta = result.negated ? 0 : 1; 343 + break; 344 + case Opt_iocharset: 345 + kfree(opts->nls_name); 346 + opts->nls_name = param->string; 347 + param->string = NULL; 348 + break; 349 + case Opt_prealloc: 350 + opts->prealloc = result.negated ? 
0 : 1; 351 + break; 352 + case Opt_noacsrules: 353 + opts->noacsrules = result.negated ? 1 : 0; 354 + break; 355 + default: 356 + /* Should not be here unless we forget to add a case. */ 357 + return -EINVAL; 360 358 } 361 - 362 - out: 363 - if (!strcmp(nls_name[0] ? nls_name : CONFIG_NLS_DEFAULT, "utf8")) { 364 - /* 365 - * For UTF-8 use utf16s_to_utf8s()/utf8s_to_utf16s() 366 - * instead of NLS. 367 - */ 368 - nls = NULL; 369 - } else if (nls_name[0]) { 370 - nls = load_nls(nls_name); 371 - if (!nls) { 372 - ntfs_err(sb, "failed to load \"%s\"", nls_name); 373 - return -EINVAL; 374 - } 375 - } else { 376 - nls = load_nls_default(); 377 - if (!nls) { 378 - ntfs_err(sb, "failed to load default nls"); 379 - return -EINVAL; 380 - } 381 - } 382 - opts->nls = nls; 383 - 384 359 return 0; 385 360 } 386 361 387 - static int ntfs_remount(struct super_block *sb, int *flags, char *data) 362 + static int ntfs_fs_reconfigure(struct fs_context *fc) 388 363 { 389 - int err, ro_rw; 364 + struct super_block *sb = fc->root->d_sb; 390 365 struct ntfs_sb_info *sbi = sb->s_fs_info; 391 - struct ntfs_mount_options old_opts; 392 - char *orig_data = kstrdup(data, GFP_KERNEL); 366 + struct ntfs_mount_options *new_opts = fc->fs_private; 367 + int ro_rw; 393 368 394 - if (data && !orig_data) 395 - return -ENOMEM; 396 - 397 - /* Store original options. */ 398 - memcpy(&old_opts, &sbi->options, sizeof(old_opts)); 399 - clear_mount_options(&sbi->options); 400 - memset(&sbi->options, 0, sizeof(sbi->options)); 401 - 402 - err = ntfs_parse_options(sb, data, 0, &sbi->options); 403 - if (err) 404 - goto restore_opts; 405 - 406 - ro_rw = sb_rdonly(sb) && !(*flags & SB_RDONLY); 369 + ro_rw = sb_rdonly(sb) && !(fc->sb_flags & SB_RDONLY); 407 370 if (ro_rw && (sbi->flags & NTFS_FLAGS_NEED_REPLAY)) { 408 - ntfs_warn( 409 - sb, 410 - "Couldn't remount rw because journal is not replayed. 
Please umount/remount instead\n"); 411 - err = -EINVAL; 412 - goto restore_opts; 371 + errorf(fc, "ntfs3: Couldn't remount rw because journal is not replayed. Please umount/remount instead\n"); 372 + return -EINVAL; 413 373 } 374 + 375 + new_opts->nls = ntfs_load_nls(new_opts->nls_name); 376 + if (IS_ERR(new_opts->nls)) { 377 + new_opts->nls = NULL; 378 + errorf(fc, "ntfs3: Cannot load iocharset %s", new_opts->nls_name); 379 + return -EINVAL; 380 + } 381 + if (new_opts->nls != sbi->options->nls) 382 + return invalf(fc, "ntfs3: Cannot use different iocharset when remounting!"); 414 383 415 384 sync_filesystem(sb); 416 385 417 386 if (ro_rw && (sbi->volume.flags & VOLUME_FLAG_DIRTY) && 418 - !sbi->options.force) { 419 - ntfs_warn(sb, "volume is dirty and \"force\" flag is not set!"); 420 - err = -EINVAL; 421 - goto restore_opts; 387 + !new_opts->force) { 388 + errorf(fc, "ntfs3: Volume is dirty and \"force\" flag is not set!"); 389 + return -EINVAL; 422 390 } 423 391 424 - clear_mount_options(&old_opts); 392 + memcpy(sbi->options, new_opts, sizeof(*new_opts)); 425 393 426 - *flags = (*flags & ~SB_LAZYTIME) | (sb->s_flags & SB_LAZYTIME) | 427 - SB_NODIRATIME | SB_NOATIME; 428 - ntfs_info(sb, "re-mounted. Opts: %s", orig_data); 429 - err = 0; 430 - goto out; 431 - 432 - restore_opts: 433 - clear_mount_options(&sbi->options); 434 - memcpy(&sbi->options, &old_opts, sizeof(old_opts)); 435 - 436 - out: 437 - kfree(orig_data); 438 - return err; 394 + return 0; 439 395 } 440 396 441 397 static struct kmem_cache *ntfs_inode_cachep; ··· 471 513 xpress_free_decompressor(sbi->compress.xpress); 472 514 lzx_free_decompressor(sbi->compress.lzx); 473 515 #endif 474 - clear_mount_options(&sbi->options); 475 - 476 516 kfree(sbi); 477 517 } 478 518 ··· 481 525 /* Mark rw ntfs as clear, if possible. 
*/ 482 526 ntfs_set_state(sbi, NTFS_DIRTY_CLEAR); 483 527 528 + put_mount_options(sbi->options); 484 529 put_ntfs(sbi); 530 + sb->s_fs_info = NULL; 485 531 486 532 sync_blockdev(sb->s_bdev); 487 533 } ··· 510 552 { 511 553 struct super_block *sb = root->d_sb; 512 554 struct ntfs_sb_info *sbi = sb->s_fs_info; 513 - struct ntfs_mount_options *opts = &sbi->options; 555 + struct ntfs_mount_options *opts = sbi->options; 514 556 struct user_namespace *user_ns = seq_user_ns(m); 515 557 516 - if (opts->uid) 517 - seq_printf(m, ",uid=%u", 518 - from_kuid_munged(user_ns, opts->fs_uid)); 519 - if (opts->gid) 520 - seq_printf(m, ",gid=%u", 521 - from_kgid_munged(user_ns, opts->fs_gid)); 558 + seq_printf(m, ",uid=%u", 559 + from_kuid_munged(user_ns, opts->fs_uid)); 560 + seq_printf(m, ",gid=%u", 561 + from_kgid_munged(user_ns, opts->fs_gid)); 522 562 if (opts->fmask) 523 563 seq_printf(m, ",fmask=%04o", ~opts->fs_fmask_inv); 524 564 if (opts->dmask) 525 565 seq_printf(m, ",dmask=%04o", ~opts->fs_dmask_inv); 526 566 if (opts->nls) 527 - seq_printf(m, ",nls=%s", opts->nls->charset); 567 + seq_printf(m, ",iocharset=%s", opts->nls->charset); 528 568 else 529 - seq_puts(m, ",nls=utf8"); 569 + seq_puts(m, ",iocharset=utf8"); 530 570 if (opts->sys_immutable) 531 571 seq_puts(m, ",sys_immutable"); 532 572 if (opts->discard) ··· 537 581 seq_puts(m, ",nohidden"); 538 582 if (opts->force) 539 583 seq_puts(m, ",force"); 540 - if (opts->no_acs_rules) 541 - seq_puts(m, ",no_acs_rules"); 584 + if (opts->noacsrules) 585 + seq_puts(m, ",noacsrules"); 542 586 if (opts->prealloc) 543 587 seq_puts(m, ",prealloc"); 544 588 if (sb->s_flags & SB_POSIXACL) 545 589 seq_puts(m, ",acl"); 546 - if (sb->s_flags & SB_NOATIME) 547 - seq_puts(m, ",noatime"); 548 590 549 591 return 0; 550 592 } ··· 597 643 .statfs = ntfs_statfs, 598 644 .show_options = ntfs_show_options, 599 645 .sync_fs = ntfs_sync_fs, 600 - .remount_fs = ntfs_remount, 601 646 .write_inode = ntfs3_write_inode, 602 647 }; 603 648 ··· 682 729 
struct ntfs_sb_info *sbi = sb->s_fs_info; 683 730 int err; 684 731 u32 mb, gb, boot_sector_size, sct_per_clst, record_size; 685 - u64 sectors, clusters, fs_size, mlcn, mlcn2; 732 + u64 sectors, clusters, mlcn, mlcn2; 686 733 struct NTFS_BOOT *boot; 687 734 struct buffer_head *bh; 688 735 struct MFT_REC *rec; ··· 740 787 goto out; 741 788 } 742 789 743 - sbi->sector_size = boot_sector_size; 744 - sbi->sector_bits = blksize_bits(boot_sector_size); 745 - fs_size = (sectors + 1) << sbi->sector_bits; 790 + sbi->volume.size = sectors * boot_sector_size; 746 791 747 - gb = format_size_gb(fs_size, &mb); 792 + gb = format_size_gb(sbi->volume.size + boot_sector_size, &mb); 748 793 749 794 /* 750 795 * - Volume formatted and mounted with the same sector size. 751 796 * - Volume formatted 4K and mounted as 512. 752 797 * - Volume formatted 512 and mounted as 4K. 753 798 */ 754 - if (sbi->sector_size != sector_size) { 755 - ntfs_warn(sb, 756 - "Different NTFS' sector size and media sector size"); 799 + if (boot_sector_size != sector_size) { 800 + ntfs_warn( 801 + sb, 802 + "Different NTFS' sector size (%u) and media sector size (%u)", 803 + boot_sector_size, sector_size); 757 804 dev_size += sector_size - 1; 758 805 } 759 806 ··· 763 810 sbi->mft.lbo = mlcn << sbi->cluster_bits; 764 811 sbi->mft.lbo2 = mlcn2 << sbi->cluster_bits; 765 812 766 - if (sbi->cluster_size < sbi->sector_size) 813 + /* Compare boot's cluster and sector. */ 814 + if (sbi->cluster_size < boot_sector_size) 767 815 goto out; 816 + 817 + /* Compare boot's cluster and media sector. */ 818 + if (sbi->cluster_size < sector_size) { 819 + /* No way to use ntfs_get_block in this case. 
*/ 820 + ntfs_err( 821 + sb, 822 + "Failed to mount 'cause NTFS's cluster size (%u) is less than media sector size (%u)", 823 + sbi->cluster_size, sector_size); 824 + goto out; 825 + } 768 826 769 827 sbi->cluster_mask = sbi->cluster_size - 1; 770 828 sbi->cluster_mask_inv = ~(u64)sbi->cluster_mask; ··· 800 836 : (u32)boot->index_size << sbi->cluster_bits; 801 837 802 838 sbi->volume.ser_num = le64_to_cpu(boot->serial_num); 803 - sbi->volume.size = sectors << sbi->sector_bits; 804 839 805 840 /* Warning if RAW volume. */ 806 - if (dev_size < fs_size) { 841 + if (dev_size < sbi->volume.size + boot_sector_size) { 807 842 u32 mb0, gb0; 808 843 809 844 gb0 = format_size_gb(dev_size, &mb0); ··· 846 883 rec->total = cpu_to_le32(sbi->record_size); 847 884 ((struct ATTRIB *)Add2Ptr(rec, ao))->type = ATTR_END; 848 885 849 - if (sbi->cluster_size < PAGE_SIZE) 850 - sb_set_blocksize(sb, sbi->cluster_size); 886 + sb_set_blocksize(sb, min_t(u32, sbi->cluster_size, PAGE_SIZE)); 851 887 852 888 sbi->block_mask = sb->s_blocksize - 1; 853 889 sbi->blocks_per_cluster = sbi->cluster_size >> sb->s_blocksize_bits; ··· 859 897 if (clusters >= (1ull << (64 - sbi->cluster_bits))) 860 898 sbi->maxbytes = -1; 861 899 sbi->maxbytes_sparse = -1; 900 + sb->s_maxbytes = MAX_LFS_FILESIZE; 862 901 #else 863 902 /* Maximum size for sparse file. */ 864 903 sbi->maxbytes_sparse = (1ull << (sbi->cluster_bits + 32)) - 1; 904 + sb->s_maxbytes = 0xFFFFFFFFull << sbi->cluster_bits; 865 905 #endif 866 906 867 907 err = 0; ··· 877 913 /* 878 914 * ntfs_fill_super - Try to mount. 
879 915 */ 880 - static int ntfs_fill_super(struct super_block *sb, void *data, int silent) 916 + static int ntfs_fill_super(struct super_block *sb, struct fs_context *fc) 881 917 { 882 918 int err; 883 - struct ntfs_sb_info *sbi; 919 + struct ntfs_sb_info *sbi = sb->s_fs_info; 884 920 struct block_device *bdev = sb->s_bdev; 885 - struct inode *bd_inode = bdev->bd_inode; 886 - struct request_queue *rq = bdev_get_queue(bdev); 887 - struct inode *inode = NULL; 921 + struct request_queue *rq; 922 + struct inode *inode; 888 923 struct ntfs_inode *ni; 889 924 size_t i, tt; 890 925 CLST vcn, lcn, len; ··· 891 928 const struct VOLUME_INFO *info; 892 929 u32 idx, done, bytes; 893 930 struct ATTR_DEF_ENTRY *t; 894 - u16 *upcase = NULL; 895 931 u16 *shared; 896 - bool is_ro; 897 932 struct MFT_REF ref; 898 933 899 934 ref.high = 0; 900 935 901 - sbi = kzalloc(sizeof(struct ntfs_sb_info), GFP_NOFS); 902 - if (!sbi) 903 - return -ENOMEM; 904 - 905 - sb->s_fs_info = sbi; 906 936 sbi->sb = sb; 907 937 sb->s_flags |= SB_NODIRATIME; 908 938 sb->s_magic = 0x7366746e; // "ntfs" ··· 904 948 sb->s_time_gran = NTFS_TIME_GRAN; // 100 nsec 905 949 sb->s_xattr = ntfs_xattr_handlers; 906 950 907 - ratelimit_state_init(&sbi->msg_ratelimit, DEFAULT_RATELIMIT_INTERVAL, 908 - DEFAULT_RATELIMIT_BURST); 909 - 910 - err = ntfs_parse_options(sb, data, silent, &sbi->options); 911 - if (err) 951 + sbi->options->nls = ntfs_load_nls(sbi->options->nls_name); 952 + if (IS_ERR(sbi->options->nls)) { 953 + sbi->options->nls = NULL; 954 + errorf(fc, "Cannot load nls %s", sbi->options->nls_name); 955 + err = -EINVAL; 912 956 goto out; 957 + } 913 958 914 - if (!rq || !blk_queue_discard(rq) || !rq->limits.discard_granularity) { 915 - ; 916 - } else { 959 + rq = bdev_get_queue(bdev); 960 + if (blk_queue_discard(rq) && rq->limits.discard_granularity) { 917 961 sbi->discard_granularity = rq->limits.discard_granularity; 918 962 sbi->discard_granularity_mask_inv = 919 963 ~(u64)(sbi->discard_granularity - 1); 920 
964 } 921 965 922 - sb_set_blocksize(sb, PAGE_SIZE); 923 - 924 966 /* Parse boot. */ 925 967 err = ntfs_init_from_boot(sb, rq ? queue_logical_block_size(rq) : 512, 926 - bd_inode->i_size); 968 + bdev->bd_inode->i_size); 927 969 if (err) 928 970 goto out; 929 - 930 - #ifdef CONFIG_NTFS3_64BIT_CLUSTER 931 - sb->s_maxbytes = MAX_LFS_FILESIZE; 932 - #else 933 - sb->s_maxbytes = 0xFFFFFFFFull << sbi->cluster_bits; 934 - #endif 935 - 936 - mutex_init(&sbi->compress.mtx_lznt); 937 - #ifdef CONFIG_NTFS3_LZX_XPRESS 938 - mutex_init(&sbi->compress.mtx_xpress); 939 - mutex_init(&sbi->compress.mtx_lzx); 940 - #endif 941 971 942 972 /* 943 973 * Load $Volume. This should be done before $LogFile ··· 933 991 ref.seq = cpu_to_le16(MFT_REC_VOL); 934 992 inode = ntfs_iget5(sb, &ref, &NAME_VOLUME); 935 993 if (IS_ERR(inode)) { 936 - err = PTR_ERR(inode); 937 994 ntfs_err(sb, "Failed to load $Volume."); 938 - inode = NULL; 995 + err = PTR_ERR(inode); 939 996 goto out; 940 997 } 941 998 ··· 956 1015 } else { 957 1016 /* Should we break mounting here? */ 958 1017 //err = -EINVAL; 959 - //goto out; 1018 + //goto put_inode_out; 960 1019 } 961 1020 962 1021 attr = ni_find_attr(ni, attr, NULL, ATTR_VOL_INFO, NULL, 0, NULL, NULL); 963 1022 if (!attr || is_attr_ext(attr)) { 964 1023 err = -EINVAL; 965 - goto out; 1024 + goto put_inode_out; 966 1025 } 967 1026 968 1027 info = resident_data_ex(attr, SIZEOF_ATTRIBUTE_VOLUME_INFO); 969 1028 if (!info) { 970 1029 err = -EINVAL; 971 - goto out; 1030 + goto put_inode_out; 972 1031 } 973 1032 974 1033 sbi->volume.major_ver = info->major_ver; 975 1034 sbi->volume.minor_ver = info->minor_ver; 976 1035 sbi->volume.flags = info->flags; 977 - 978 1036 sbi->volume.ni = ni; 979 - inode = NULL; 980 1037 981 1038 /* Load $MFTMirr to estimate recs_mirr. 
*/ 982 1039 ref.low = cpu_to_le32(MFT_REC_MIRR); 983 1040 ref.seq = cpu_to_le16(MFT_REC_MIRR); 984 1041 inode = ntfs_iget5(sb, &ref, &NAME_MIRROR); 985 1042 if (IS_ERR(inode)) { 986 - err = PTR_ERR(inode); 987 1043 ntfs_err(sb, "Failed to load $MFTMirr."); 988 - inode = NULL; 1044 + err = PTR_ERR(inode); 989 1045 goto out; 990 1046 } 991 1047 ··· 996 1058 ref.seq = cpu_to_le16(MFT_REC_LOG); 997 1059 inode = ntfs_iget5(sb, &ref, &NAME_LOGFILE); 998 1060 if (IS_ERR(inode)) { 999 - err = PTR_ERR(inode); 1000 1061 ntfs_err(sb, "Failed to load \x24LogFile."); 1001 - inode = NULL; 1062 + err = PTR_ERR(inode); 1002 1063 goto out; 1003 1064 } 1004 1065 ··· 1005 1068 1006 1069 err = ntfs_loadlog_and_replay(ni, sbi); 1007 1070 if (err) 1008 - goto out; 1071 + goto put_inode_out; 1009 1072 1010 1073 iput(inode); 1011 - inode = NULL; 1012 - 1013 - is_ro = sb_rdonly(sbi->sb); 1014 1074 1015 1075 if (sbi->flags & NTFS_FLAGS_NEED_REPLAY) { 1016 - if (!is_ro) { 1076 + if (!sb_rdonly(sb)) { 1017 1077 ntfs_warn(sb, 1018 1078 "failed to replay log file. 
Can't mount rw!"); 1019 1079 err = -EINVAL; 1020 1080 goto out; 1021 1081 } 1022 1082 } else if (sbi->volume.flags & VOLUME_FLAG_DIRTY) { 1023 - if (!is_ro && !sbi->options.force) { 1083 + if (!sb_rdonly(sb) && !sbi->options->force) { 1024 1084 ntfs_warn( 1025 1085 sb, 1026 1086 "volume is dirty and \"force\" flag is not set!"); ··· 1032 1098 1033 1099 inode = ntfs_iget5(sb, &ref, &NAME_MFT); 1034 1100 if (IS_ERR(inode)) { 1035 - err = PTR_ERR(inode); 1036 1101 ntfs_err(sb, "Failed to load $MFT."); 1037 - inode = NULL; 1102 + err = PTR_ERR(inode); 1038 1103 goto out; 1039 1104 } 1040 1105 ··· 1045 1112 1046 1113 err = wnd_init(&sbi->mft.bitmap, sb, tt); 1047 1114 if (err) 1048 - goto out; 1115 + goto put_inode_out; 1049 1116 1050 1117 err = ni_load_all_mi(ni); 1051 1118 if (err) 1052 - goto out; 1119 + goto put_inode_out; 1053 1120 1054 1121 sbi->mft.ni = ni; 1055 1122 ··· 1058 1125 ref.seq = cpu_to_le16(MFT_REC_BADCLUST); 1059 1126 inode = ntfs_iget5(sb, &ref, &NAME_BADCLUS); 1060 1127 if (IS_ERR(inode)) { 1061 - err = PTR_ERR(inode); 1062 1128 ntfs_err(sb, "Failed to load $BadClus."); 1063 - inode = NULL; 1129 + err = PTR_ERR(inode); 1064 1130 goto out; 1065 1131 } 1066 1132 ··· 1082 1150 ref.seq = cpu_to_le16(MFT_REC_BITMAP); 1083 1151 inode = ntfs_iget5(sb, &ref, &NAME_BITMAP); 1084 1152 if (IS_ERR(inode)) { 1085 - err = PTR_ERR(inode); 1086 1153 ntfs_err(sb, "Failed to load $Bitmap."); 1087 - inode = NULL; 1154 + err = PTR_ERR(inode); 1088 1155 goto out; 1089 1156 } 1090 - 1091 - ni = ntfs_i(inode); 1092 1157 1093 1158 #ifndef CONFIG_NTFS3_64BIT_CLUSTER 1094 1159 if (inode->i_size >> 32) { 1095 1160 err = -EINVAL; 1096 - goto out; 1161 + goto put_inode_out; 1097 1162 } 1098 1163 #endif 1099 1164 ··· 1098 1169 tt = sbi->used.bitmap.nbits; 1099 1170 if (inode->i_size < bitmap_size(tt)) { 1100 1171 err = -EINVAL; 1101 - goto out; 1172 + goto put_inode_out; 1102 1173 } 1103 1174 1104 1175 /* Not necessary. 
*/ 1105 1176 sbi->used.bitmap.set_tail = true; 1106 - err = wnd_init(&sbi->used.bitmap, sbi->sb, tt); 1177 + err = wnd_init(&sbi->used.bitmap, sb, tt); 1107 1178 if (err) 1108 - goto out; 1179 + goto put_inode_out; 1109 1180 1110 1181 iput(inode); 1111 1182 ··· 1117 1188 /* Load $AttrDef. */ 1118 1189 ref.low = cpu_to_le32(MFT_REC_ATTR); 1119 1190 ref.seq = cpu_to_le16(MFT_REC_ATTR); 1120 - inode = ntfs_iget5(sbi->sb, &ref, &NAME_ATTRDEF); 1191 + inode = ntfs_iget5(sb, &ref, &NAME_ATTRDEF); 1121 1192 if (IS_ERR(inode)) { 1122 - err = PTR_ERR(inode); 1123 1193 ntfs_err(sb, "Failed to load $AttrDef -> %d", err); 1124 - inode = NULL; 1194 + err = PTR_ERR(inode); 1125 1195 goto out; 1126 1196 } 1127 1197 1128 1198 if (inode->i_size < sizeof(struct ATTR_DEF_ENTRY)) { 1129 1199 err = -EINVAL; 1130 - goto out; 1200 + goto put_inode_out; 1131 1201 } 1132 1202 bytes = inode->i_size; 1133 1203 sbi->def_table = t = kmalloc(bytes, GFP_NOFS); 1134 1204 if (!t) { 1135 1205 err = -ENOMEM; 1136 - goto out; 1206 + goto put_inode_out; 1137 1207 } 1138 1208 1139 1209 for (done = idx = 0; done < bytes; done += PAGE_SIZE, idx++) { ··· 1141 1213 1142 1214 if (IS_ERR(page)) { 1143 1215 err = PTR_ERR(page); 1144 - goto out; 1216 + goto put_inode_out; 1145 1217 } 1146 1218 memcpy(Add2Ptr(t, done), page_address(page), 1147 1219 min(PAGE_SIZE, tail)); ··· 1149 1221 1150 1222 if (!idx && ATTR_STD != t->type) { 1151 1223 err = -EINVAL; 1152 - goto out; 1224 + goto put_inode_out; 1153 1225 } 1154 1226 } 1155 1227 ··· 1182 1254 ref.seq = cpu_to_le16(MFT_REC_UPCASE); 1183 1255 inode = ntfs_iget5(sb, &ref, &NAME_UPCASE); 1184 1256 if (IS_ERR(inode)) { 1257 + ntfs_err(sb, "Failed to load $UpCase."); 1185 1258 err = PTR_ERR(inode); 1186 - ntfs_err(sb, "Failed to load \x24LogFile."); 1187 - inode = NULL; 1188 1259 goto out; 1189 1260 } 1190 - 1191 - ni = ntfs_i(inode); 1192 1261 1193 1262 if (inode->i_size != 0x10000 * sizeof(short)) { 1194 1263 err = -EINVAL; 1195 - goto out; 1196 - } 1197 - 1198 - 
sbi->upcase = upcase = kvmalloc(0x10000 * sizeof(short), GFP_KERNEL); 1199 - if (!upcase) { 1200 - err = -ENOMEM; 1201 - goto out; 1264 + goto put_inode_out; 1202 1265 } 1203 1266 1204 1267 for (idx = 0; idx < (0x10000 * sizeof(short) >> PAGE_SHIFT); idx++) { 1205 1268 const __le16 *src; 1206 - u16 *dst = Add2Ptr(upcase, idx << PAGE_SHIFT); 1269 + u16 *dst = Add2Ptr(sbi->upcase, idx << PAGE_SHIFT); 1207 1270 struct page *page = ntfs_map_page(inode->i_mapping, idx); 1208 1271 1209 1272 if (IS_ERR(page)) { 1210 1273 err = PTR_ERR(page); 1211 - goto out; 1274 + goto put_inode_out; 1212 1275 } 1213 1276 1214 1277 src = page_address(page); ··· 1213 1294 ntfs_unmap_page(page); 1214 1295 } 1215 1296 1216 - shared = ntfs_set_shared(upcase, 0x10000 * sizeof(short)); 1217 - if (shared && upcase != shared) { 1297 + shared = ntfs_set_shared(sbi->upcase, 0x10000 * sizeof(short)); 1298 + if (shared && sbi->upcase != shared) { 1299 + kvfree(sbi->upcase); 1218 1300 sbi->upcase = shared; 1219 - kvfree(upcase); 1220 1301 } 1221 1302 1222 1303 iput(inode); 1223 - inode = NULL; 1224 1304 1225 1305 if (is_ntfs3(sbi)) { 1226 1306 /* Load $Secure. */ ··· 1249 1331 ref.seq = cpu_to_le16(MFT_REC_ROOT); 1250 1332 inode = ntfs_iget5(sb, &ref, &NAME_ROOT); 1251 1333 if (IS_ERR(inode)) { 1252 - err = PTR_ERR(inode); 1253 1334 ntfs_err(sb, "Failed to load root."); 1254 - inode = NULL; 1335 + err = PTR_ERR(inode); 1255 1336 goto out; 1256 1337 } 1257 - 1258 - ni = ntfs_i(inode); 1259 1338 1260 1339 sb->s_root = d_make_root(inode); 1261 - 1262 1340 if (!sb->s_root) { 1263 - err = -EINVAL; 1264 - goto out; 1341 + err = -ENOMEM; 1342 + goto put_inode_out; 1265 1343 } 1344 + 1345 + fc->fs_private = NULL; 1266 1346 1267 1347 return 0; 1268 1348 1269 - out: 1349 + put_inode_out: 1270 1350 iput(inode); 1271 - 1272 - if (sb->s_root) { 1273 - d_drop(sb->s_root); 1274 - sb->s_root = NULL; 1275 - } 1276 - 1351 + out: 1352 + /* 1353 + * Free resources here. 
1354 + * ntfs_fs_free will be called with fc->s_fs_info = NULL 1355 + */ 1277 1356 put_ntfs(sbi); 1278 - 1279 1357 sb->s_fs_info = NULL; 1358 + 1280 1359 return err; 1281 1360 } 1282 1361 ··· 1318 1403 if (sbi->flags & NTFS_FLAGS_NODISCARD) 1319 1404 return -EOPNOTSUPP; 1320 1405 1321 - if (!sbi->options.discard) 1406 + if (!sbi->options->discard) 1322 1407 return -EOPNOTSUPP; 1323 1408 1324 1409 lbo = (u64)lcn << sbi->cluster_bits; ··· 1343 1428 return err; 1344 1429 } 1345 1430 1346 - static struct dentry *ntfs_mount(struct file_system_type *fs_type, int flags, 1347 - const char *dev_name, void *data) 1431 + static int ntfs_fs_get_tree(struct fs_context *fc) 1348 1432 { 1349 - return mount_bdev(fs_type, flags, dev_name, data, ntfs_fill_super); 1433 + return get_tree_bdev(fc, ntfs_fill_super); 1434 + } 1435 + 1436 + /* 1437 + * ntfs_fs_free - Free fs_context. 1438 + * 1439 + * Note that this will be called after fill_super and reconfigure 1440 + * even when they succeed, so on success they must take over the pointers they keep. 1441 + */ 1442 + static void ntfs_fs_free(struct fs_context *fc) 1443 + { 1444 + struct ntfs_mount_options *opts = fc->fs_private; 1445 + struct ntfs_sb_info *sbi = fc->s_fs_info; 1446 + 1447 + if (sbi) 1448 + put_ntfs(sbi); 1449 + 1450 + if (opts) 1451 + put_mount_options(opts); 1452 + } 1453 + 1454 + static const struct fs_context_operations ntfs_context_ops = { 1455 + .parse_param = ntfs_fs_parse_param, 1456 + .get_tree = ntfs_fs_get_tree, 1457 + .reconfigure = ntfs_fs_reconfigure, 1458 + .free = ntfs_fs_free, 1459 + }; 1460 + 1461 + /* 1462 + * ntfs_init_fs_context - Initialize sbi and opts 1463 + * 1464 + * This will be called on mount/remount. We initialize the 1465 + * options first so that on remount we can reuse just them. 
1466 + */ 1467 + static int ntfs_init_fs_context(struct fs_context *fc) 1468 + { 1469 + struct ntfs_mount_options *opts; 1470 + struct ntfs_sb_info *sbi; 1471 + 1472 + opts = kzalloc(sizeof(struct ntfs_mount_options), GFP_NOFS); 1473 + if (!opts) 1474 + return -ENOMEM; 1475 + 1476 + /* Default options. */ 1477 + opts->fs_uid = current_uid(); 1478 + opts->fs_gid = current_gid(); 1479 + opts->fs_fmask_inv = ~current_umask(); 1480 + opts->fs_dmask_inv = ~current_umask(); 1481 + 1482 + if (fc->purpose == FS_CONTEXT_FOR_RECONFIGURE) 1483 + goto ok; 1484 + 1485 + sbi = kzalloc(sizeof(struct ntfs_sb_info), GFP_NOFS); 1486 + if (!sbi) 1487 + goto free_opts; 1488 + 1489 + sbi->upcase = kvmalloc(0x10000 * sizeof(short), GFP_KERNEL); 1490 + if (!sbi->upcase) 1491 + goto free_sbi; 1492 + 1493 + ratelimit_state_init(&sbi->msg_ratelimit, DEFAULT_RATELIMIT_INTERVAL, 1494 + DEFAULT_RATELIMIT_BURST); 1495 + 1496 + mutex_init(&sbi->compress.mtx_lznt); 1497 + #ifdef CONFIG_NTFS3_LZX_XPRESS 1498 + mutex_init(&sbi->compress.mtx_xpress); 1499 + mutex_init(&sbi->compress.mtx_lzx); 1500 + #endif 1501 + 1502 + sbi->options = opts; 1503 + fc->s_fs_info = sbi; 1504 + ok: 1505 + fc->fs_private = opts; 1506 + fc->ops = &ntfs_context_ops; 1507 + 1508 + return 0; 1509 + free_sbi: 1510 + kfree(sbi); 1511 + free_opts: 1512 + kfree(opts); 1513 + return -ENOMEM; 1350 1514 } 1351 1515 1352 1516 // clang-format off 1353 1517 static struct file_system_type ntfs_fs_type = { 1354 - .owner = THIS_MODULE, 1355 - .name = "ntfs3", 1356 - .mount = ntfs_mount, 1357 - .kill_sb = kill_block_super, 1358 - .fs_flags = FS_REQUIRES_DEV | FS_ALLOW_IDMAP, 1518 + .owner = THIS_MODULE, 1519 + .name = "ntfs3", 1520 + .init_fs_context = ntfs_init_fs_context, 1521 + .parameters = ntfs_fs_parameters, 1522 + .kill_sb = kill_block_super, 1523 + .fs_flags = FS_REQUIRES_DEV | FS_ALLOW_IDMAP, 1359 1524 }; 1360 1525 // clang-format on 1361 1526
+2 -6
fs/ntfs3/upcase.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 - #include <linux/module.h> 11 - #include <linux/nls.h> 8 + #include <linux/kernel.h> 9 + #include <linux/types.h> 12 10 13 - #include "debug.h" 14 - #include "ntfs.h" 15 11 #include "ntfs_fs.h" 16 12 17 13 static inline u16 upcase_unicode_char(const u16 *upcase, u16 chr)
+62 -189
fs/ntfs3/xattr.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 8 #include <linux/fs.h> 11 - #include <linux/nls.h> 12 9 #include <linux/posix_acl.h> 13 10 #include <linux/posix_acl_xattr.h> 14 11 #include <linux/xattr.h> ··· 75 78 size_t add_bytes, const struct EA_INFO **info) 76 79 { 77 80 int err; 81 + struct ntfs_sb_info *sbi = ni->mi.sbi; 78 82 struct ATTR_LIST_ENTRY *le = NULL; 79 83 struct ATTRIB *attr_info, *attr_ea; 80 84 void *ea_p; ··· 100 102 101 103 /* Check Ea limit. */ 102 104 size = le32_to_cpu((*info)->size); 103 - if (size > ni->mi.sbi->ea_max_size) 105 + if (size > sbi->ea_max_size) 104 106 return -EFBIG; 105 107 106 - if (attr_size(attr_ea) > ni->mi.sbi->ea_max_size) 108 + if (attr_size(attr_ea) > sbi->ea_max_size) 107 109 return -EFBIG; 108 110 109 111 /* Allocate memory for packed Ea. */ ··· 111 113 if (!ea_p) 112 114 return -ENOMEM; 113 115 114 - if (attr_ea->non_res) { 116 + if (!size) { 117 + ; 118 + } else if (attr_ea->non_res) { 115 119 struct runs_tree run; 116 120 117 121 run_init(&run); 118 122 119 123 err = attr_load_runs(attr_ea, ni, &run, NULL); 120 124 if (!err) 121 - err = ntfs_read_run_nb(ni->mi.sbi, &run, 0, ea_p, size, 122 - NULL); 125 + err = ntfs_read_run_nb(sbi, &run, 0, ea_p, size, NULL); 123 126 run_close(&run); 124 127 125 128 if (err) ··· 259 260 260 261 static noinline int ntfs_set_ea(struct inode *inode, const char *name, 261 262 size_t name_len, const void *value, 262 - size_t val_size, int flags, int locked) 263 + size_t val_size, int flags) 263 264 { 264 265 struct ntfs_inode *ni = ntfs_i(inode); 265 266 struct ntfs_sb_info *sbi = ni->mi.sbi; ··· 278 279 u64 new_sz; 279 280 void *p; 280 281 281 - if (!locked) 282 - ni_lock(ni); 282 + ni_lock(ni); 283 283 284 284 run_init(&ea_run); 285 285 ··· 368 370 new_ea->name[name_len] = 0; 369 371 memcpy(new_ea->name + name_len + 1, value, val_size); 370 372 new_pack = le16_to_cpu(ea_info.size_pack) + packed_ea_size(new_ea); 371 - 372 - /* Should fit 
into 16 bits. */ 373 - if (new_pack > 0xffff) { 374 - err = -EFBIG; // -EINVAL? 375 - goto out; 376 - } 377 373 ea_info.size_pack = cpu_to_le16(new_pack); 378 - 379 374 /* New size of ATTR_EA. */ 380 375 size += add; 381 - if (size > sbi->ea_max_size) { 376 + ea_info.size = cpu_to_le32(size); 377 + 378 + /* 379 + * 1. Check ea_info.size_pack for overflow. 380 + * 2. New attibute size must fit value from $AttrDef 381 + */ 382 + if (new_pack > 0xffff || size > sbi->ea_max_size) { 383 + ntfs_inode_warn( 384 + inode, 385 + "The size of extended attributes must not exceed 64KiB"); 382 386 err = -EFBIG; // -EINVAL? 383 387 goto out; 384 388 } 385 - ea_info.size = cpu_to_le32(size); 386 389 387 390 update_ea: 388 391 ··· 443 444 /* Delete xattr, ATTR_EA */ 444 445 ni_remove_attr_le(ni, attr, mi, le); 445 446 } else if (attr->non_res) { 446 - err = ntfs_sb_write_run(sbi, &ea_run, 0, ea_all, size); 447 + err = ntfs_sb_write_run(sbi, &ea_run, 0, ea_all, size, 0); 447 448 if (err) 448 449 goto out; 449 450 } else { ··· 467 468 mark_inode_dirty(&ni->vfs_inode); 468 469 469 470 out: 470 - if (!locked) 471 - ni_unlock(ni); 471 + ni_unlock(ni); 472 472 473 473 run_close(&ea_run); 474 474 kfree(ea_all); ··· 476 478 } 477 479 478 480 #ifdef CONFIG_NTFS3_FS_POSIX_ACL 479 - static inline void ntfs_posix_acl_release(struct posix_acl *acl) 480 - { 481 - if (acl && refcount_dec_and_test(&acl->a_refcount)) 482 - kfree(acl); 483 - } 484 - 485 481 static struct posix_acl *ntfs_get_acl_ex(struct user_namespace *mnt_userns, 486 482 struct inode *inode, int type, 487 483 int locked) ··· 513 521 /* Translate extended attribute to acl. */ 514 522 if (err >= 0) { 515 523 acl = posix_acl_from_xattr(mnt_userns, buf, err); 516 - if (!IS_ERR(acl)) 517 - set_cached_acl(inode, type, acl); 524 + } else if (err == -ENODATA) { 525 + acl = NULL; 518 526 } else { 519 - acl = err == -ENODATA ? 
NULL : ERR_PTR(err); 527 + acl = ERR_PTR(err); 520 528 } 529 + 530 + if (!IS_ERR(acl)) 531 + set_cached_acl(inode, type, acl); 521 532 522 533 __putname(buf); 523 534 ··· 541 546 542 547 static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns, 543 548 struct inode *inode, struct posix_acl *acl, 544 - int type, int locked) 549 + int type) 545 550 { 546 551 const char *name; 547 552 size_t size, name_len; 548 553 void *value = NULL; 549 554 int err = 0; 555 + int flags; 550 556 551 557 if (S_ISLNK(inode->i_mode)) 552 558 return -EOPNOTSUPP; ··· 557 561 if (acl) { 558 562 umode_t mode = inode->i_mode; 559 563 560 - err = posix_acl_equiv_mode(acl, &mode); 561 - if (err < 0) 562 - return err; 564 + err = posix_acl_update_mode(mnt_userns, inode, &mode, 565 + &acl); 566 + if (err) 567 + goto out; 563 568 564 569 if (inode->i_mode != mode) { 565 570 inode->i_mode = mode; 566 571 mark_inode_dirty(inode); 567 - } 568 - 569 - if (!err) { 570 - /* 571 - * ACL can be exactly represented in the 572 - * traditional file mode permission bits. 573 - */ 574 - acl = NULL; 575 572 } 576 573 } 577 574 name = XATTR_NAME_POSIX_ACL_ACCESS; ··· 583 594 } 584 595 585 596 if (!acl) { 597 + /* Remove xattr if it can be presented via mode. */ 586 598 size = 0; 587 599 value = NULL; 600 + flags = XATTR_REPLACE; 588 601 } else { 589 602 size = posix_acl_xattr_size(acl->a_count); 590 603 value = kmalloc(size, GFP_NOFS); 591 604 if (!value) 592 605 return -ENOMEM; 593 - 594 606 err = posix_acl_to_xattr(mnt_userns, acl, value, size); 595 607 if (err < 0) 596 608 goto out; 609 + flags = 0; 597 610 } 598 611 599 - err = ntfs_set_ea(inode, name, name_len, value, size, 0, locked); 612 + err = ntfs_set_ea(inode, name, name_len, value, size, flags); 613 + if (err == -ENODATA && !size) 614 + err = 0; /* Removing non existed xattr. 
*/ 600 615 if (!err) 601 616 set_cached_acl(inode, type, acl); 602 617 ··· 616 623 int ntfs_set_acl(struct user_namespace *mnt_userns, struct inode *inode, 617 624 struct posix_acl *acl, int type) 618 625 { 619 - return ntfs_set_acl_ex(mnt_userns, inode, acl, type, 0); 620 - } 621 - 622 - static int ntfs_xattr_get_acl(struct user_namespace *mnt_userns, 623 - struct inode *inode, int type, void *buffer, 624 - size_t size) 625 - { 626 - struct posix_acl *acl; 627 - int err; 628 - 629 - if (!(inode->i_sb->s_flags & SB_POSIXACL)) { 630 - ntfs_inode_warn(inode, "add mount option \"acl\" to use acl"); 631 - return -EOPNOTSUPP; 632 - } 633 - 634 - acl = ntfs_get_acl(inode, type, false); 635 - if (IS_ERR(acl)) 636 - return PTR_ERR(acl); 637 - 638 - if (!acl) 639 - return -ENODATA; 640 - 641 - err = posix_acl_to_xattr(mnt_userns, acl, buffer, size); 642 - ntfs_posix_acl_release(acl); 643 - 644 - return err; 645 - } 646 - 647 - static int ntfs_xattr_set_acl(struct user_namespace *mnt_userns, 648 - struct inode *inode, int type, const void *value, 649 - size_t size) 650 - { 651 - struct posix_acl *acl; 652 - int err; 653 - 654 - if (!(inode->i_sb->s_flags & SB_POSIXACL)) { 655 - ntfs_inode_warn(inode, "add mount option \"acl\" to use acl"); 656 - return -EOPNOTSUPP; 657 - } 658 - 659 - if (!inode_owner_or_capable(mnt_userns, inode)) 660 - return -EPERM; 661 - 662 - if (!value) { 663 - acl = NULL; 664 - } else { 665 - acl = posix_acl_from_xattr(mnt_userns, value, size); 666 - if (IS_ERR(acl)) 667 - return PTR_ERR(acl); 668 - 669 - if (acl) { 670 - err = posix_acl_valid(mnt_userns, acl); 671 - if (err) 672 - goto release_and_out; 673 - } 674 - } 675 - 676 - err = ntfs_set_acl(mnt_userns, inode, acl, type); 677 - 678 - release_and_out: 679 - ntfs_posix_acl_release(acl); 680 - return err; 626 + return ntfs_set_acl_ex(mnt_userns, inode, acl, type); 681 627 } 682 628 683 629 /* ··· 630 698 struct posix_acl *default_acl, *acl; 631 699 int err; 632 700 633 - /* 634 - * TODO: 
Refactoring lock. 635 - * ni_lock(dir) ... -> posix_acl_create(dir,...) -> ntfs_get_acl -> ni_lock(dir) 636 - */ 637 - inode->i_default_acl = NULL; 701 + err = posix_acl_create(dir, &inode->i_mode, &default_acl, &acl); 702 + if (err) 703 + return err; 638 704 639 - default_acl = ntfs_get_acl_ex(mnt_userns, dir, ACL_TYPE_DEFAULT, 1); 640 - 641 - if (!default_acl || default_acl == ERR_PTR(-EOPNOTSUPP)) { 642 - inode->i_mode &= ~current_umask(); 643 - err = 0; 644 - goto out; 645 - } 646 - 647 - if (IS_ERR(default_acl)) { 648 - err = PTR_ERR(default_acl); 649 - goto out; 650 - } 651 - 652 - acl = default_acl; 653 - err = __posix_acl_create(&acl, GFP_NOFS, &inode->i_mode); 654 - if (err < 0) 655 - goto out1; 656 - if (!err) { 657 - posix_acl_release(acl); 658 - acl = NULL; 659 - } 660 - 661 - if (!S_ISDIR(inode->i_mode)) { 662 - posix_acl_release(default_acl); 663 - default_acl = NULL; 664 - } 665 - 666 - if (default_acl) 705 + if (default_acl) { 667 706 err = ntfs_set_acl_ex(mnt_userns, inode, default_acl, 668 - ACL_TYPE_DEFAULT, 1); 707 + ACL_TYPE_DEFAULT); 708 + posix_acl_release(default_acl); 709 + } else { 710 + inode->i_default_acl = NULL; 711 + } 669 712 670 713 if (!acl) 671 714 inode->i_acl = NULL; 672 - else if (!err) 673 - err = ntfs_set_acl_ex(mnt_userns, inode, acl, ACL_TYPE_ACCESS, 674 - 1); 715 + else { 716 + if (!err) 717 + err = ntfs_set_acl_ex(mnt_userns, inode, acl, 718 + ACL_TYPE_ACCESS); 719 + posix_acl_release(acl); 720 + } 675 721 676 - posix_acl_release(acl); 677 - out1: 678 - posix_acl_release(default_acl); 679 - 680 - out: 681 722 return err; 682 723 } 683 724 #endif ··· 677 772 int ntfs_permission(struct user_namespace *mnt_userns, struct inode *inode, 678 773 int mask) 679 774 { 680 - if (ntfs_sb(inode->i_sb)->options.no_acs_rules) { 775 + if (ntfs_sb(inode->i_sb)->options->noacsrules) { 681 776 /* "No access rules" mode - Allow all changes. 
*/ 682 777 return 0; 683 778 } ··· 785 880 goto out; 786 881 } 787 882 788 - #ifdef CONFIG_NTFS3_FS_POSIX_ACL 789 - if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 && 790 - !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS, 791 - sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) || 792 - (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 && 793 - !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT, 794 - sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) { 795 - /* TODO: init_user_ns? */ 796 - err = ntfs_xattr_get_acl( 797 - &init_user_ns, inode, 798 - name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 799 - ? ACL_TYPE_ACCESS 800 - : ACL_TYPE_DEFAULT, 801 - buffer, size); 802 - goto out; 803 - } 804 - #endif 805 883 /* Deal with NTFS extended attribute. */ 806 884 err = ntfs_get_ea(inode, name, name_len, buffer, size, NULL); 807 885 ··· 897 1009 goto out; 898 1010 } 899 1011 900 - #ifdef CONFIG_NTFS3_FS_POSIX_ACL 901 - if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 && 902 - !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS, 903 - sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) || 904 - (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 && 905 - !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT, 906 - sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) { 907 - err = ntfs_xattr_set_acl( 908 - mnt_userns, inode, 909 - name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 910 - ? ACL_TYPE_ACCESS 911 - : ACL_TYPE_DEFAULT, 912 - value, size); 913 - goto out; 914 - } 915 - #endif 916 1012 /* Deal with NTFS extended attribute. 
*/ 917 - err = ntfs_set_ea(inode, name, name_len, value, size, flags, 0); 1013 + err = ntfs_set_ea(inode, name, name_len, value, size, flags); 918 1014 919 1015 out: 920 1016 return err; ··· 914 1042 int err; 915 1043 __le32 value; 916 1044 1045 + /* TODO: refactor this, so we don't lock 4 times in ntfs_set_ea */ 917 1046 value = cpu_to_le32(i_uid_read(inode)); 918 1047 err = ntfs_set_ea(inode, "$LXUID", sizeof("$LXUID") - 1, &value, 919 - sizeof(value), 0, 0); 1048 + sizeof(value), 0); 920 1049 if (err) 921 1050 goto out; 922 1051 923 1052 value = cpu_to_le32(i_gid_read(inode)); 924 1053 err = ntfs_set_ea(inode, "$LXGID", sizeof("$LXGID") - 1, &value, 925 - sizeof(value), 0, 0); 1054 + sizeof(value), 0); 926 1055 if (err) 927 1056 goto out; 928 1057 929 1058 value = cpu_to_le32(inode->i_mode); 930 1059 err = ntfs_set_ea(inode, "$LXMOD", sizeof("$LXMOD") - 1, &value, 931 - sizeof(value), 0, 0); 1060 + sizeof(value), 0); 932 1061 if (err) 933 1062 goto out; 934 1063 935 1064 if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) { 936 1065 value = cpu_to_le32(inode->i_rdev); 937 1066 err = ntfs_set_ea(inode, "$LXDEV", sizeof("$LXDEV") - 1, &value, 938 - sizeof(value), 0, 0); 1067 + sizeof(value), 0); 939 1068 if (err) 940 1069 goto out; 941 1070 }
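Among the xattr.c hunks above, the ntfs_set_ea() change consolidates the two size limits into a single check: the packed EA size is stored in a little-endian 16-bit field, and the unpacked size must stay under the $AttrDef cap (the new "must not exceed 64KiB" warning). A standalone C sketch of that combined check; EA_MAX_SIZE and ea_size_ok are illustrative names, not the kernel's:

```c
#include <stdint.h>

/* Hypothetical stand-in for sbi->ea_max_size taken from $AttrDef (64 KiB). */
#define EA_MAX_SIZE 0x10000u

/*
 * Mirror of the consolidated check: anything above 0xffff overflows the
 * __le16 size_pack field, and the unpacked size must not exceed the
 * $AttrDef limit. Returns 1 if the sizes are acceptable, 0 if the caller
 * would fail with -EFBIG.
 */
static int ea_size_ok(uint32_t new_pack, uint32_t size)
{
	if (new_pack > 0xffff || size > EA_MAX_SIZE)
		return 0;
	return 1;
}
```

Doing both comparisons in one branch, as the hunk does, also lets the driver emit a single user-visible warning for either failure mode.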
+3 -3
include/kunit/test.h
··· 613 613 * and is automatically cleaned up after the test case concludes. See &struct 614 614 * kunit_resource for more information. 615 615 */ 616 - void *kunit_kmalloc_array(struct kunit *test, size_t n, size_t size, gfp_t flags); 616 + void *kunit_kmalloc_array(struct kunit *test, size_t n, size_t size, gfp_t gfp); 617 617 618 618 /** 619 619 * kunit_kmalloc() - Like kmalloc() except the allocation is *test managed*. ··· 657 657 * 658 658 * See kcalloc() and kunit_kmalloc_array() for more information. 659 659 */ 660 - static inline void *kunit_kcalloc(struct kunit *test, size_t n, size_t size, gfp_t flags) 660 + static inline void *kunit_kcalloc(struct kunit *test, size_t n, size_t size, gfp_t gfp) 661 661 { 662 - return kunit_kmalloc_array(test, n, size, flags | __GFP_ZERO); 662 + return kunit_kmalloc_array(test, n, size, gfp | __GFP_ZERO); 663 663 } 664 664 665 665 void kunit_cleanup(struct kunit *test);
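The kunit_kcalloc() hunk above is a parameter rename (flags to gfp), but it also shows the pattern: the zeroing variant just forwards to the array allocator with __GFP_ZERO OR-ed into the gfp mask. A userspace analogue of that kcalloc()-style wrapper, including the n * size overflow guard the kernel allocator performs internally (alloc_array_zeroed is an illustrative name, not the KUnit API):

```c
#include <stdint.h>
#include <stdlib.h>

/*
 * Zeroing array allocation: reject n * size overflow up front, then
 * delegate to an allocator that returns zeroed memory (calloc here
 * standing in for kmalloc_array + __GFP_ZERO).
 */
static void *alloc_array_zeroed(size_t n, size_t size)
{
	if (size != 0 && n > SIZE_MAX / size)
		return NULL;	/* n * size would overflow */
	return calloc(n, size);	/* zeroed, like __GFP_ZERO */
}
```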
+13
include/linux/dsa/mv88e6xxx.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * Copyright 2021 NXP 3 + */ 4 + 5 + #ifndef _NET_DSA_TAG_MV88E6XXX_H 6 + #define _NET_DSA_TAG_MV88E6XXX_H 7 + 8 + #include <linux/if_vlan.h> 9 + 10 + #define MV88E6XXX_VID_STANDALONE 0 11 + #define MV88E6XXX_VID_BRIDGED (VLAN_N_VID - 1) 12 + 13 + #endif
+49
include/linux/dsa/ocelot.h
··· 5 5 #ifndef _NET_DSA_TAG_OCELOT_H 6 6 #define _NET_DSA_TAG_OCELOT_H 7 7 8 + #include <linux/kthread.h> 8 9 #include <linux/packing.h> 10 + #include <linux/skbuff.h> 11 + 12 + struct ocelot_skb_cb { 13 + struct sk_buff *clone; 14 + unsigned int ptp_class; /* valid only for clones */ 15 + u8 ptp_cmd; 16 + u8 ts_id; 17 + }; 18 + 19 + #define OCELOT_SKB_CB(skb) \ 20 + ((struct ocelot_skb_cb *)((skb)->cb)) 21 + 22 + #define IFH_TAG_TYPE_C 0 23 + #define IFH_TAG_TYPE_S 1 24 + 25 + #define IFH_REW_OP_NOOP 0x0 26 + #define IFH_REW_OP_DSCP 0x1 27 + #define IFH_REW_OP_ONE_STEP_PTP 0x2 28 + #define IFH_REW_OP_TWO_STEP_PTP 0x3 29 + #define IFH_REW_OP_ORIGIN_PTP 0x5 9 30 10 31 #define OCELOT_TAG_LEN 16 11 32 #define OCELOT_SHORT_PREFIX_LEN 4 ··· 161 140 * +------+------+------+------+------+------+------+------+ 162 141 */ 163 142 143 + struct felix_deferred_xmit_work { 144 + struct dsa_port *dp; 145 + struct sk_buff *skb; 146 + struct kthread_work work; 147 + }; 148 + 149 + struct felix_port { 150 + void (*xmit_work_fn)(struct kthread_work *work); 151 + struct kthread_worker *xmit_worker; 152 + }; 153 + 164 154 static inline void ocelot_xfh_get_rew_val(void *extraction, u64 *rew_val) 165 155 { 166 156 packing(extraction, rew_val, 116, 85, OCELOT_TAG_LEN, UNPACK, 0); ··· 245 213 static inline void ocelot_ifh_set_vid(void *injection, u64 vid) 246 214 { 247 215 packing(injection, &vid, 11, 0, OCELOT_TAG_LEN, PACK, 0); 216 + } 217 + 218 + /* Determine the PTP REW_OP to use for injecting the given skb */ 219 + static inline u32 ocelot_ptp_rew_op(struct sk_buff *skb) 220 + { 221 + struct sk_buff *clone = OCELOT_SKB_CB(skb)->clone; 222 + u8 ptp_cmd = OCELOT_SKB_CB(skb)->ptp_cmd; 223 + u32 rew_op = 0; 224 + 225 + if (ptp_cmd == IFH_REW_OP_TWO_STEP_PTP && clone) { 226 + rew_op = ptp_cmd; 227 + rew_op |= OCELOT_SKB_CB(clone)->ts_id << 3; 228 + } else if (ptp_cmd == IFH_REW_OP_ORIGIN_PTP) { 229 + rew_op = ptp_cmd; 230 + } 231 + 232 + return rew_op; 248 233 } 249 234 250 235 #endif
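The ocelot_ptp_rew_op() helper moved into this header encodes the rewriter opcode for injected frames: for two-step timestamping the clone's timestamp ID is packed above the 3-bit PTP command, while one-step (origin) PTP carries only the command. A simplified userspace sketch of the same encoding (skb/clone plumbing replaced by plain parameters):

```c
#include <stdint.h>

/* REW_OP command values from the header. */
#define IFH_REW_OP_NOOP         0x0
#define IFH_REW_OP_TWO_STEP_PTP 0x3
#define IFH_REW_OP_ORIGIN_PTP   0x5

/*
 * Two-step PTP with a pending clone: command in bits 2:0, timestamp ID
 * shifted into bits above it. Origin (one-step) PTP: command only.
 * Anything else injects with a no-op rewrite.
 */
static uint32_t ptp_rew_op(uint8_t ptp_cmd, int has_clone, uint8_t ts_id)
{
	uint32_t rew_op = 0;

	if (ptp_cmd == IFH_REW_OP_TWO_STEP_PTP && has_clone)
		rew_op = ptp_cmd | (uint32_t)ts_id << 3;
	else if (ptp_cmd == IFH_REW_OP_ORIGIN_PTP)
		rew_op = ptp_cmd;

	return rew_op;
}
```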
+15 -29
include/linux/dsa/sja1105.h
··· 48 48 spinlock_t meta_lock; 49 49 unsigned long state; 50 50 u8 ts_id; 51 + /* Used on SJA1110 where meta frames are generated only for 52 + * 2-step TX timestamps 53 + */ 54 + struct sk_buff_head skb_txtstamp_queue; 51 55 }; 52 56 53 57 struct sja1105_skb_cb { ··· 73 69 bool hwts_tx_en; 74 70 }; 75 71 76 - enum sja1110_meta_tstamp { 77 - SJA1110_META_TSTAMP_TX = 0, 78 - SJA1110_META_TSTAMP_RX = 1, 79 - }; 72 + /* Timestamps are in units of 8 ns clock ticks (equivalent to 73 + * a fixed 125 MHz clock). 74 + */ 75 + #define SJA1105_TICK_NS 8 80 76 81 - #if IS_ENABLED(CONFIG_NET_DSA_SJA1105_PTP) 82 - 83 - void sja1110_process_meta_tstamp(struct dsa_switch *ds, int port, u8 ts_id, 84 - enum sja1110_meta_tstamp dir, u64 tstamp); 85 - 86 - #else 87 - 88 - static inline void sja1110_process_meta_tstamp(struct dsa_switch *ds, int port, 89 - u8 ts_id, enum sja1110_meta_tstamp dir, 90 - u64 tstamp) 77 + static inline s64 ns_to_sja1105_ticks(s64 ns) 91 78 { 79 + return ns / SJA1105_TICK_NS; 92 80 } 93 81 94 - #endif /* IS_ENABLED(CONFIG_NET_DSA_SJA1105_PTP) */ 95 - 96 - #if IS_ENABLED(CONFIG_NET_DSA_SJA1105) 97 - 98 - extern const struct dsa_switch_ops sja1105_switch_ops; 82 + static inline s64 sja1105_ticks_to_ns(s64 ticks) 83 + { 84 + return ticks * SJA1105_TICK_NS; 85 + } 99 86 100 87 static inline bool dsa_port_is_sja1105(struct dsa_port *dp) 101 88 { 102 - return dp->ds->ops == &sja1105_switch_ops; 89 + return true; 103 90 } 104 - 105 - #else 106 - 107 - static inline bool dsa_port_is_sja1105(struct dsa_port *dp) 108 - { 109 - return false; 110 - } 111 - 112 - #endif 113 91 114 92 #endif /* _NET_DSA_SJA1105_H */
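The sja1105.h hunk replaces the meta-timestamp prototypes with inline tick conversion helpers: the switch timestamps in units of a fixed 125 MHz clock, so one tick is 8 ns. The helpers are simple enough to exercise standalone (int64_t standing in for the kernel's s64):

```c
#include <stdint.h>

/* 125 MHz timestamping clock: one tick is 8 ns. */
#define SJA1105_TICK_NS 8

static int64_t ns_to_sja1105_ticks(int64_t ns)
{
	return ns / SJA1105_TICK_NS;	/* truncating division */
}

static int64_t sja1105_ticks_to_ns(int64_t ticks)
{
	return ticks * SJA1105_TICK_NS;
}
```

Note the ns-to-ticks direction truncates, so a round trip through both helpers loses up to 7 ns of sub-tick precision.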
+1
include/linux/genhd.h
··· 149 149 unsigned long state; 150 150 #define GD_NEED_PART_SCAN 0 151 151 #define GD_READ_ONLY 1 152 + #define GD_DEAD 2 152 153 153 154 struct mutex open_mutex; /* open/close mutex */ 154 155 unsigned open_partitions; /* number of open partitions */
+8 -2
include/linux/mlx5/mlx5_ifc.h
··· 9475 9475 u8 reserved_at_0[0x8]; 9476 9476 u8 local_port[0x8]; 9477 9477 u8 reserved_at_10[0x10]; 9478 + 9478 9479 u8 entropy_force_cap[0x1]; 9479 9480 u8 entropy_calc_cap[0x1]; 9480 9481 u8 entropy_gre_calc_cap[0x1]; 9481 - u8 reserved_at_23[0x1b]; 9482 + u8 reserved_at_23[0xf]; 9483 + u8 rx_ts_over_crc_cap[0x1]; 9484 + u8 reserved_at_33[0xb]; 9482 9485 u8 fcs_cap[0x1]; 9483 9486 u8 reserved_at_3f[0x1]; 9487 + 9484 9488 u8 entropy_force[0x1]; 9485 9489 u8 entropy_calc[0x1]; 9486 9490 u8 entropy_gre_calc[0x1]; 9487 - u8 reserved_at_43[0x1b]; 9491 + u8 reserved_at_43[0xf]; 9492 + u8 rx_ts_over_crc[0x1]; 9493 + u8 reserved_at_53[0xb]; 9488 9494 u8 fcs_chk[0x1]; 9489 9495 u8 reserved_at_5f[0x1]; 9490 9496 };
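The mlx5_ifc.h hunk carves the rx_ts_over_crc bits out of reserved ranges. In these bit-exact register layouts, a split like reserved_at_23[0x1b] becoming reserved_at_23[0xf] + rx_ts_over_crc_cap[0x1] + reserved_at_33[0xb] must preserve the total width, or every later field would shift. A small C check of that arithmetic (names mirror the diff; this is a layout sanity check, not driver code):

```c
/* Field widths in bits, before and after the split. */
enum {
	OLD_RESERVED_AT_23 = 0x1b,
	NEW_RESERVED_AT_23 = 0xf,
	NEW_RX_TS_OVER_CRC = 0x1,
	NEW_RESERVED_AT_33 = 0xb,
};

/*
 * Returns 1 when the new fields cover exactly the old reserved range,
 * so fcs_cap stays at bit 0x3e and the rest of the register is intact.
 */
static int layout_preserved(void)
{
	return NEW_RESERVED_AT_23 + NEW_RX_TS_OVER_CRC + NEW_RESERVED_AT_33
	       == OLD_RESERVED_AT_23;
}
```

The same arithmetic explains the new field's name: the capability bit lands at offset 0x23 + 0xf = 0x32, so the following reserved field starts at 0x33.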
+3
include/linux/spi/spi.h
··· 531 531 /* I/O mutex */ 532 532 struct mutex io_mutex; 533 533 534 + /* Used to avoid adding the same CS twice */ 535 + struct mutex add_lock; 536 + 534 537 /* lock and mutex for SPI bus locking */ 535 538 spinlock_t bus_lock_spinlock; 536 539 struct mutex bus_lock_mutex;
+2 -3
include/linux/workqueue.h
··· 399 399 * RETURNS: 400 400 * Pointer to the allocated workqueue on success, %NULL on failure. 401 401 */ 402 - struct workqueue_struct *alloc_workqueue(const char *fmt, 403 - unsigned int flags, 404 - int max_active, ...); 402 + __printf(1, 4) struct workqueue_struct * 403 + alloc_workqueue(const char *fmt, unsigned int flags, int max_active, ...); 405 404 406 405 /** 407 406 * alloc_ordered_workqueue - allocate an ordered workqueue
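The workqueue.h hunk adds __printf(1, 4) to alloc_workqueue(): argument 1 is a printf-style format and the variadic arguments begin at position 4, so mismatched callers become compile-time -Wformat warnings. A userspace sketch of the same annotation using the underlying GCC attribute (fake_alloc_workqueue is a hypothetical stand-in, not the kernel function):

```c
#include <stdarg.h>
#include <stdio.h>

/*
 * format(printf, 1, 4): format string is parameter 1, the "..." starts
 * at parameter 4 (after flags and max_active), matching __printf(1, 4).
 * Returns the length of the formatted name, as vsnprintf reports it.
 */
__attribute__((format(printf, 1, 4)))
static int fake_alloc_workqueue(const char *fmt, unsigned int flags,
				int max_active, ...)
{
	char name[32];
	va_list ap;
	int len;

	(void)flags;
	(void)max_active;
	va_start(ap, max_active);
	len = vsnprintf(name, sizeof(name), fmt, ap);
	va_end(ap);
	return len;
}
```

With the attribute in place, a call such as `fake_alloc_workqueue("wq-%d", 0, 1, "oops")` is flagged at compile time instead of producing garbage at runtime.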
+4 -51
include/soc/mscc/ocelot.h
··· 89 89 /* Source PGIDs, one per physical port */ 90 90 #define PGID_SRC 80 91 91 92 - #define IFH_TAG_TYPE_C 0 93 - #define IFH_TAG_TYPE_S 1 94 - 95 - #define IFH_REW_OP_NOOP 0x0 96 - #define IFH_REW_OP_DSCP 0x1 97 - #define IFH_REW_OP_ONE_STEP_PTP 0x2 98 - #define IFH_REW_OP_TWO_STEP_PTP 0x3 99 - #define IFH_REW_OP_ORIGIN_PTP 0x5 100 - 101 92 #define OCELOT_NUM_TC 8 102 93 103 94 #define OCELOT_SPEED_2500 0 ··· 594 603 /* The VLAN ID that will be transmitted as untagged, on egress */ 595 604 struct ocelot_vlan native_vlan; 596 605 606 + unsigned int ptp_skbs_in_flight; 597 607 u8 ptp_cmd; 598 608 struct sk_buff_head tx_skbs; 599 609 u8 ts_id; 600 - spinlock_t ts_id_lock; 601 610 602 611 phy_interface_t phy_mode; 603 612 ··· 671 680 struct ptp_clock *ptp_clock; 672 681 struct ptp_clock_info ptp_info; 673 682 struct hwtstamp_config hwtstamp_config; 683 + unsigned int ptp_skbs_in_flight; 684 + /* Protects the 2-step TX timestamp ID logic */ 685 + spinlock_t ts_id_lock; 674 686 /* Protects the PTP interface state */ 675 687 struct mutex ptp_lock; 676 688 /* Protects the PTP clock */ ··· 685 691 u32 rate; /* kilobit per second */ 686 692 u32 burst; /* bytes */ 687 693 }; 688 - 689 - struct ocelot_skb_cb { 690 - struct sk_buff *clone; 691 - u8 ptp_cmd; 692 - u8 ts_id; 693 - }; 694 - 695 - #define OCELOT_SKB_CB(skb) \ 696 - ((struct ocelot_skb_cb *)((skb)->cb)) 697 694 698 695 #define ocelot_read_ix(ocelot, reg, gi, ri) __ocelot_read_ix(ocelot, reg, reg##_GSZ * (gi) + reg##_RSZ * (ri)) 699 696 #define ocelot_read_gix(ocelot, reg, gi) __ocelot_read_ix(ocelot, reg, reg##_GSZ * (gi)) ··· 737 752 void __ocelot_target_write_ix(struct ocelot *ocelot, enum ocelot_target target, 738 753 u32 val, u32 reg, u32 offset); 739 754 740 - #if IS_ENABLED(CONFIG_MSCC_OCELOT_SWITCH_LIB) 741 - 742 755 /* Packet I/O */ 743 756 bool ocelot_can_inject(struct ocelot *ocelot, int grp); 744 757 void ocelot_port_inject_frame(struct ocelot *ocelot, int port, int grp, 745 758 u32 rew_op, struct 
sk_buff *skb); 746 759 int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp, struct sk_buff **skb); 747 760 void ocelot_drain_cpu_queue(struct ocelot *ocelot, int grp); 748 - 749 - u32 ocelot_ptp_rew_op(struct sk_buff *skb); 750 - #else 751 - 752 - static inline bool ocelot_can_inject(struct ocelot *ocelot, int grp) 753 - { 754 - return false; 755 - } 756 - 757 - static inline void ocelot_port_inject_frame(struct ocelot *ocelot, int port, 758 - int grp, u32 rew_op, 759 - struct sk_buff *skb) 760 - { 761 - } 762 - 763 - static inline int ocelot_xtr_poll_frame(struct ocelot *ocelot, int grp, 764 - struct sk_buff **skb) 765 - { 766 - return -EIO; 767 - } 768 - 769 - static inline void ocelot_drain_cpu_queue(struct ocelot *ocelot, int grp) 770 - { 771 - } 772 - 773 - static inline u32 ocelot_ptp_rew_op(struct sk_buff *skb) 774 - { 775 - return 0; 776 - } 777 - #endif 778 761 779 762 /* Hardware initialization */ 780 763 int ocelot_regfields_init(struct ocelot *ocelot,
+3
include/soc/mscc/ocelot_ptp.h
··· 13 13 #include <linux/ptp_clock_kernel.h> 14 14 #include <soc/mscc/ocelot.h> 15 15 16 + #define OCELOT_MAX_PTP_ID 63 17 + #define OCELOT_PTP_FIFO_SIZE 128 18 + 16 19 #define PTP_PIN_CFG_RSZ 0x20 17 20 #define PTP_PIN_TOD_SEC_MSB_RSZ PTP_PIN_CFG_RSZ 18 21 #define PTP_PIN_TOD_SEC_LSB_RSZ PTP_PIN_CFG_RSZ
+1
include/sound/hda_codec.h
··· 224 224 #endif 225 225 226 226 /* misc flags */ 227 + unsigned int configured:1; /* codec was configured */ 227 228 unsigned int in_freeing:1; /* being released */ 228 229 unsigned int registered:1; /* codec was registered */ 229 230 unsigned int display_power_control:1; /* needs display power */
+9 -10
include/trace/events/kyber.h
··· 13 13 14 14 TRACE_EVENT(kyber_latency, 15 15 16 - TP_PROTO(struct request_queue *q, const char *domain, const char *type, 16 + TP_PROTO(dev_t dev, const char *domain, const char *type, 17 17 unsigned int percentile, unsigned int numerator, 18 18 unsigned int denominator, unsigned int samples), 19 19 20 - TP_ARGS(q, domain, type, percentile, numerator, denominator, samples), 20 + TP_ARGS(dev, domain, type, percentile, numerator, denominator, samples), 21 21 22 22 TP_STRUCT__entry( 23 23 __field( dev_t, dev ) ··· 30 30 ), 31 31 32 32 TP_fast_assign( 33 - __entry->dev = disk_devt(q->disk); 33 + __entry->dev = dev; 34 34 strlcpy(__entry->domain, domain, sizeof(__entry->domain)); 35 35 strlcpy(__entry->type, type, sizeof(__entry->type)); 36 36 __entry->percentile = percentile; ··· 47 47 48 48 TRACE_EVENT(kyber_adjust, 49 49 50 - TP_PROTO(struct request_queue *q, const char *domain, 51 - unsigned int depth), 50 + TP_PROTO(dev_t dev, const char *domain, unsigned int depth), 52 51 53 - TP_ARGS(q, domain, depth), 52 + TP_ARGS(dev, domain, depth), 54 53 55 54 TP_STRUCT__entry( 56 55 __field( dev_t, dev ) ··· 58 59 ), 59 60 60 61 TP_fast_assign( 61 - __entry->dev = disk_devt(q->disk); 62 + __entry->dev = dev; 62 63 strlcpy(__entry->domain, domain, sizeof(__entry->domain)); 63 64 __entry->depth = depth; 64 65 ), ··· 70 71 71 72 TRACE_EVENT(kyber_throttled, 72 73 73 - TP_PROTO(struct request_queue *q, const char *domain), 74 + TP_PROTO(dev_t dev, const char *domain), 74 75 75 - TP_ARGS(q, domain), 76 + TP_ARGS(dev, domain), 76 77 77 78 TP_STRUCT__entry( 78 79 __field( dev_t, dev ) ··· 80 81 ), 81 82 82 83 TP_fast_assign( 83 - __entry->dev = disk_devt(q->disk); 84 + __entry->dev = dev; 84 85 strlcpy(__entry->domain, domain, sizeof(__entry->domain)); 85 86 ), 86 87
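The kyber tracepoint hunks switch from taking a request_queue pointer (and dereferencing q->disk in TP_fast_assign) to taking a plain dev_t resolved once by the caller. A dev_t is just a packed major/minor pair; userspace exposes the same encoding through makedev()/major()/minor(), as this sketch illustrates (kyber_trace_dev is an illustrative wrapper, not a kernel symbol):

```c
#include <sys/sysmacros.h>
#include <sys/types.h>

/*
 * Resolve the device number once, up front, the way the updated
 * tracepoint callers now do, instead of chasing q->disk at trace time.
 */
static dev_t kyber_trace_dev(unsigned int major_no, unsigned int minor_no)
{
	return makedev(major_no, minor_no);
}
```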
+2 -4
include/uapi/misc/habanalabs.h
··· 917 917 #define HL_WAIT_CS_STATUS_BUSY 1 918 918 #define HL_WAIT_CS_STATUS_TIMEDOUT 2 919 919 #define HL_WAIT_CS_STATUS_ABORTED 3 920 - #define HL_WAIT_CS_STATUS_INTERRUPTED 4 921 920 922 921 #define HL_WAIT_CS_STATUS_FLAG_GONE 0x1 923 922 #define HL_WAIT_CS_STATUS_FLAG_TIMESTAMP_VLD 0x2 ··· 1285 1286 * EIO - The CS was aborted (usually because the device was reset) 1286 1287 * ENODEV - The device wants to do hard-reset (so user need to close FD) 1287 1288 * 1288 - * The driver also returns a custom define inside the IOCTL which can be: 1289 + * The driver also returns a custom define in case the IOCTL call returned 0. 1290 + * The define can be one of the following: 1289 1291 * 1290 1292 * HL_WAIT_CS_STATUS_COMPLETED - The CS has been completed successfully (0) 1291 1293 * HL_WAIT_CS_STATUS_BUSY - The CS is still executing (0) ··· 1294 1294 * (ETIMEDOUT) 1295 1295 * HL_WAIT_CS_STATUS_ABORTED - The CS was aborted, usually because the 1296 1296 * device was reset (EIO) 1297 - * HL_WAIT_CS_STATUS_INTERRUPTED - Waiting for the CS was interrupted (EINTR) 1298 - * 1299 1297 */ 1300 1298 1301 1299 #define HL_IOCTL_WAIT_CS \
+1
init/main.c
··· 382 382 ret = xbc_snprint_cmdline(new_cmdline, len + 1, root); 383 383 if (ret < 0 || ret > len) { 384 384 pr_err("Failed to print extra kernel cmdline.\n"); 385 + memblock_free_ptr(new_cmdline, len + 1); 385 386 return NULL; 386 387 } 387 388
+29 -27
kernel/cgroup/cpuset.c
··· 311 311 if (is_cpuset_online(((des_cs) = css_cs((pos_css))))) 312 312 313 313 /* 314 - * There are two global locks guarding cpuset structures - cpuset_mutex and 314 + * There are two global locks guarding cpuset structures - cpuset_rwsem and 315 315 * callback_lock. We also require taking task_lock() when dereferencing a 316 316 * task's cpuset pointer. See "The task_lock() exception", at the end of this 317 - * comment. 317 + * comment. The cpuset code uses only cpuset_rwsem write lock. Other 318 + * kernel subsystems can use cpuset_read_lock()/cpuset_read_unlock() to 319 + * prevent change to cpuset structures. 318 320 * 319 321 * A task must hold both locks to modify cpusets. If a task holds 320 - * cpuset_mutex, then it blocks others wanting that mutex, ensuring that it 322 + * cpuset_rwsem, it blocks others wanting that rwsem, ensuring that it 321 323 * is the only task able to also acquire callback_lock and be able to 322 324 * modify cpusets. It can perform various checks on the cpuset structure 323 325 * first, knowing nothing will change. It can also allocate memory while 324 - * just holding cpuset_mutex. While it is performing these checks, various 326 + * just holding cpuset_rwsem. While it is performing these checks, various 325 327 * callback routines can briefly acquire callback_lock to query cpusets. 326 328 * Once it is ready to make the changes, it takes callback_lock, blocking 327 329 * everyone else. ··· 395 393 * One way or another, we guarantee to return some non-empty subset 396 394 * of cpu_online_mask. 397 395 * 398 - * Call with callback_lock or cpuset_mutex held. 396 + * Call with callback_lock or cpuset_rwsem held. 399 397 */ 400 398 static void guarantee_online_cpus(struct task_struct *tsk, 401 399 struct cpumask *pmask) ··· 437 435 * One way or another, we guarantee to return some non-empty subset 438 436 * of node_states[N_MEMORY]. 439 437 * 440 - * Call with callback_lock or cpuset_mutex held. 
438 + * Call with callback_lock or cpuset_rwsem held. 441 439 */ 442 440 static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask) 443 441 { ··· 449 447 /* 450 448 * update task's spread flag if cpuset's page/slab spread flag is set 451 449 * 452 - * Call with callback_lock or cpuset_mutex held. 450 + * Call with callback_lock or cpuset_rwsem held. 453 451 */ 454 452 static void cpuset_update_task_spread_flag(struct cpuset *cs, 455 453 struct task_struct *tsk) ··· 470 468 * 471 469 * One cpuset is a subset of another if all its allowed CPUs and 472 470 * Memory Nodes are a subset of the other, and its exclusive flags 473 - * are only set if the other's are set. Call holding cpuset_mutex. 471 + * are only set if the other's are set. Call holding cpuset_rwsem. 474 472 */ 475 473 476 474 static int is_cpuset_subset(const struct cpuset *p, const struct cpuset *q) ··· 579 577 * If we replaced the flag and mask values of the current cpuset 580 578 * (cur) with those values in the trial cpuset (trial), would 581 579 * our various subset and exclusive rules still be valid? Presumes 582 - * cpuset_mutex held. 580 + * cpuset_rwsem held. 583 581 * 584 582 * 'cur' is the address of an actual, in-use cpuset. Operations 585 583 * such as list traversal that depend on the actual address of the ··· 702 700 rcu_read_unlock(); 703 701 } 704 702 705 - /* Must be called with cpuset_mutex held. */ 703 + /* Must be called with cpuset_rwsem held. */ 706 704 static inline int nr_cpusets(void) 707 705 { 708 706 /* jump label reference count + the top-level cpuset */ ··· 728 726 * domains when operating in the severe memory shortage situations 729 727 * that could cause allocation failures below. 730 728 * 731 - * Must be called with cpuset_mutex held. 729 + * Must be called with cpuset_rwsem held. 
732 730 * 733 731 * The three key local variables below are: 734 732 * cp - cpuset pointer, used (together with pos_css) to perform a ··· 1007 1005 * 'cpus' is removed, then call this routine to rebuild the 1008 1006 * scheduler's dynamic sched domains. 1009 1007 * 1010 - * Call with cpuset_mutex held. Takes cpus_read_lock(). 1008 + * Call with cpuset_rwsem held. Takes cpus_read_lock(). 1011 1009 */ 1012 1010 static void rebuild_sched_domains_locked(void) 1013 1011 { ··· 1080 1078 * @cs: the cpuset in which each task's cpus_allowed mask needs to be changed 1081 1079 * 1082 1080 * Iterate through each task of @cs updating its cpus_allowed to the 1083 - * effective cpuset's. As this function is called with cpuset_mutex held, 1081 + * effective cpuset's. As this function is called with cpuset_rwsem held, 1084 1082 * cpuset membership stays stable. 1085 1083 */ 1086 1084 static void update_tasks_cpumask(struct cpuset *cs) ··· 1349 1347 * 1350 1348 * On legacy hierarchy, effective_cpus will be the same with cpu_allowed. 1351 1349 * 1352 - * Called with cpuset_mutex held 1350 + * Called with cpuset_rwsem held 1353 1351 */ 1354 1352 static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp) 1355 1353 { ··· 1706 1704 * @cs: the cpuset in which each task's mems_allowed mask needs to be changed 1707 1705 * 1708 1706 * Iterate through each task of @cs updating its mems_allowed to the 1709 - * effective cpuset's. As this function is called with cpuset_mutex held, 1707 + * effective cpuset's. As this function is called with cpuset_rwsem held, 1710 1708 * cpuset membership stays stable. 1711 1709 */ 1712 1710 static void update_tasks_nodemask(struct cpuset *cs) 1713 1711 { 1714 - static nodemask_t newmems; /* protected by cpuset_mutex */ 1712 + static nodemask_t newmems; /* protected by cpuset_rwsem */ 1715 1713 struct css_task_iter it; 1716 1714 struct task_struct *task; 1717 1715 ··· 1724 1722 * take while holding tasklist_lock. 
Forks can happen - the 1725 1723 * mpol_dup() cpuset_being_rebound check will catch such forks, 1726 1724 * and rebind their vma mempolicies too. Because we still hold 1727 - * the global cpuset_mutex, we know that no other rebind effort 1725 + * the global cpuset_rwsem, we know that no other rebind effort 1728 1726 * will be contending for the global variable cpuset_being_rebound. 1729 1727 * It's ok if we rebind the same mm twice; mpol_rebind_mm() 1730 1728 * is idempotent. Also migrate pages in each mm to new nodes. ··· 1770 1768 * 1771 1769 * On legacy hierarchy, effective_mems will be the same with mems_allowed. 1772 1770 * 1773 - * Called with cpuset_mutex held 1771 + * Called with cpuset_rwsem held 1774 1772 */ 1775 1773 static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems) 1776 1774 { ··· 1823 1821 * mempolicies and if the cpuset is marked 'memory_migrate', 1824 1822 * migrate the tasks pages to the new memory. 1825 1823 * 1826 - * Call with cpuset_mutex held. May take callback_lock during call. 1824 + * Call with cpuset_rwsem held. May take callback_lock during call. 1827 1825 * Will take tasklist_lock, scan tasklist for tasks in cpuset cs, 1828 1826 * lock each such tasks mm->mmap_lock, scan its vma's and rebind 1829 1827 * their mempolicies to the cpusets new mems_allowed. ··· 1913 1911 * @cs: the cpuset in which each task's spread flags needs to be changed 1914 1912 * 1915 1913 * Iterate through each task of @cs updating its spread flags. As this 1916 - * function is called with cpuset_mutex held, cpuset membership stays 1914 + * function is called with cpuset_rwsem held, cpuset membership stays 1917 1915 * stable. 1918 1916 */ 1919 1917 static void update_tasks_flags(struct cpuset *cs) ··· 1933 1931 * cs: the cpuset to update 1934 1932 * turning_on: whether the flag is being set or cleared 1935 1933 * 1936 - * Call with cpuset_mutex held. 1934 + * Call with cpuset_rwsem held. 
1937 1935 */ 1938 1936 1939 1937 static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs, ··· 1982 1980 * cs: the cpuset to update 1983 1981 * new_prs: new partition root state 1984 1982 * 1985 - * Call with cpuset_mutex held. 1983 + * Call with cpuset_rwsem held. 1986 1984 */ 1987 1985 static int update_prstate(struct cpuset *cs, int new_prs) 1988 1986 { ··· 2169 2167 2170 2168 static struct cpuset *cpuset_attach_old_cs; 2171 2169 2172 - /* Called by cgroups to determine if a cpuset is usable; cpuset_mutex held */ 2170 + /* Called by cgroups to determine if a cpuset is usable; cpuset_rwsem held */ 2173 2171 static int cpuset_can_attach(struct cgroup_taskset *tset) 2174 2172 { 2175 2173 struct cgroup_subsys_state *css; ··· 2221 2219 } 2222 2220 2223 2221 /* 2224 - * Protected by cpuset_mutex. cpus_attach is used only by cpuset_attach() 2222 + * Protected by cpuset_rwsem. cpus_attach is used only by cpuset_attach() 2225 2223 * but we can't allocate it dynamically there. Define it global and 2226 2224 * allocate from cpuset_init(). 2227 2225 */ ··· 2229 2227 2230 2228 static void cpuset_attach(struct cgroup_taskset *tset) 2231 2229 { 2232 - /* static buf protected by cpuset_mutex */ 2230 + /* static buf protected by cpuset_rwsem */ 2233 2231 static nodemask_t cpuset_attach_nodemask_to; 2234 2232 struct task_struct *task; 2235 2233 struct task_struct *leader; ··· 2419 2417 * operation like this one can lead to a deadlock through kernfs 2420 2418 * active_ref protection. Let's break the protection. Losing the 2421 2419 * protection is okay as we check whether @cs is online after 2422 - * grabbing cpuset_mutex anyway. This only happens on the legacy 2420 + * grabbing cpuset_rwsem anyway. This only happens on the legacy 2423 2421 * hierarchies. 2424 2422 */ 2425 2423 css_get(&cs->css); ··· 3674 3672 * - Used for /proc/<pid>/cpuset. 
3675 3673 * - No need to task_lock(tsk) on this tsk->cpuset reference, as it 3676 3674 * doesn't really matter if tsk->cpuset changes after we read it, 3677 - * and we take cpuset_mutex, keeping cpuset_attach() from changing it 3675 + * and we take cpuset_rwsem, keeping cpuset_attach() from changing it 3678 3676 * anyway. 3679 3677 */ 3680 3678 int proc_cpuset_show(struct seq_file *m, struct pid_namespace *ns,
+2
kernel/module.c
··· 4489 4489 /* Fix init/exit functions to point to the CFI jump table */ 4490 4490 if (init) 4491 4491 mod->init = *init; 4492 + #ifdef CONFIG_MODULE_UNLOAD 4492 4493 if (exit) 4493 4494 mod->exit = *exit; 4495 + #endif 4494 4496 4495 4497 cfi_module_add(mod, module_addr_min); 4496 4498 #endif
+4 -7
kernel/trace/trace.c
··· 1744 1744 irq_work_queue(&tr->fsnotify_irqwork); 1745 1745 } 1746 1746 1747 - /* 1748 - * (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)) && \ 1749 - * defined(CONFIG_FSNOTIFY) 1750 - */ 1751 - #else 1747 + #elif defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER) \ 1748 + || defined(CONFIG_OSNOISE_TRACER) 1752 1749 1753 1750 #define trace_create_maxlat_file(tr, d_tracer) \ 1754 1751 trace_create_file("tracing_max_latency", 0644, d_tracer, \ 1755 1752 &tr->max_latency, &tracing_max_lat_fops) 1756 1753 1754 + #else 1755 + #define trace_create_maxlat_file(tr, d_tracer) do { } while (0) 1757 1756 #endif 1758 1757 1759 1758 #ifdef CONFIG_TRACER_MAX_TRACE ··· 9472 9473 9473 9474 create_trace_options_dir(tr); 9474 9475 9475 - #if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER) 9476 9476 trace_create_maxlat_file(tr, d_tracer); 9477 - #endif 9478 9477 9479 9478 if (ftrace_create_function_files(tr, d_tracer)) 9480 9479 MEM_FAIL(1, "Could not allocate function filter files");
+58 -3
kernel/trace/trace_eprobe.c
··· 119 119 int argc, const char **argv, struct dyn_event *ev) 120 120 { 121 121 struct trace_eprobe *ep = to_trace_eprobe(ev); 122 + const char *slash; 122 123 123 - return strcmp(trace_probe_name(&ep->tp), event) == 0 && 124 - (!system || strcmp(trace_probe_group_name(&ep->tp), system) == 0) && 125 - trace_probe_match_command_args(&ep->tp, argc, argv); 124 + /* 125 + * We match the following: 126 + * event only - match all eprobes with event name 127 + * system and event only - match all system/event probes 128 + * 129 + * The below has the above satisfied with more arguments: 130 + * 131 + * attached system/event - If the arg has the system and event 132 + * the probe is attached to, match 133 + * probes with the attachment. 134 + * 135 + * If any more args are given, then it requires a full match. 136 + */ 137 + 138 + /* 139 + * If system exists, but this probe is not part of that system 140 + * do not match. 141 + */ 142 + if (system && strcmp(trace_probe_group_name(&ep->tp), system) != 0) 143 + return false; 144 + 145 + /* Must match the event name */ 146 + if (strcmp(trace_probe_name(&ep->tp), event) != 0) 147 + return false; 148 + 149 + /* No arguments match all */ 150 + if (argc < 1) 151 + return true; 152 + 153 + /* First argument is the system/event the probe is attached to */ 154 + 155 + slash = strchr(argv[0], '/'); 156 + if (!slash) 157 + slash = strchr(argv[0], '.'); 158 + if (!slash) 159 + return false; 160 + 161 + if (strncmp(ep->event_system, argv[0], slash - argv[0])) 162 + return false; 163 + if (strcmp(ep->event_name, slash + 1)) 164 + return false; 165 + 166 + argc--; 167 + argv++; 168 + 169 + /* If there are no other args, then match */ 170 + if (argc < 1) 171 + return true; 172 + 173 + return trace_probe_match_command_args(&ep->tp, argc, argv); 126 174 } 127 175 128 176 static struct dyn_event_operations eprobe_dyn_event_ops = { ··· 680 632 681 633 trace_event_trigger_enable_disable(file, 0); 682 634 update_cond_flag(file); 635 + 636 + /* 
Make sure nothing is using the edata or trigger */ 637 + tracepoint_synchronize_unregister(); 638 + 639 + kfree(edata); 640 + kfree(trigger); 641 + 683 642 return 0; 684 643 } 685 644
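The new `eprobe_dyn_event_match()` above splits its first argument at the first `/` (or `.`) into an attached system and event name and compares the halves separately. A standalone sketch of that split-and-compare follows; it is a stricter illustrative variant (it also checks the prefix length, which the kernel's bare `strncmp` does not), and `match_attach_arg` is a hypothetical name, not the kernel API:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Split "system/event" (or "system.event") at the first separator and
 * compare each half against the expected strings, mirroring the
 * strncmp/strcmp pair in eprobe_dyn_event_match(). */
static bool match_attach_arg(const char *arg, const char *system,
			     const char *event)
{
	const char *slash = strchr(arg, '/');

	if (!slash)
		slash = strchr(arg, '.');
	if (!slash)
		return false;

	/* Prefix before the separator must be exactly the system name */
	if (strncmp(system, arg, slash - arg) != 0)
		return false;
	if (strlen(system) != (size_t)(slash - arg))
		return false;

	/* Remainder after the separator must be the event name */
	return strcmp(event, slash + 1) == 0;
}
```

The extra `strlen` check guards against a longer system name whose first `slash - arg` bytes happen to match the prefix.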
+1 -1
kernel/trace/trace_events_hist.c
··· 2506 2506 * events. However, for convenience, users are allowed to directly 2507 2507 * specify an event field in an action, which will be automatically 2508 2508 * converted into a variable on their behalf. 2509 - 2509 + * 2510 2510 * If a user specifies a field on an event that isn't the event the 2511 2511 * histogram currently being defined (the target event histogram), the 2512 2512 * only way that can be accomplished is if a new hist trigger is
+16 -2
kernel/workqueue.c
··· 4830 4830 4831 4831 for_each_pwq(pwq, wq) { 4832 4832 raw_spin_lock_irqsave(&pwq->pool->lock, flags); 4833 - if (pwq->nr_active || !list_empty(&pwq->inactive_works)) 4833 + if (pwq->nr_active || !list_empty(&pwq->inactive_works)) { 4834 + /* 4835 + * Defer printing to avoid deadlocks in console 4836 + * drivers that queue work while holding locks 4837 + * also taken in their write paths. 4838 + */ 4839 + printk_deferred_enter(); 4834 4840 show_pwq(pwq); 4841 + printk_deferred_exit(); 4842 + } 4835 4843 raw_spin_unlock_irqrestore(&pwq->pool->lock, flags); 4836 4844 /* 4837 4845 * We could be printing a lot from atomic context, e.g. ··· 4857 4849 raw_spin_lock_irqsave(&pool->lock, flags); 4858 4850 if (pool->nr_workers == pool->nr_idle) 4859 4851 goto next_pool; 4860 - 4852 + /* 4853 + * Defer printing to avoid deadlocks in console drivers that 4854 + * queue work while holding locks also taken in their write 4855 + * paths. 4856 + */ 4857 + printk_deferred_enter(); 4861 4858 pr_info("pool %d:", pool->id); 4862 4859 pr_cont_pool_info(pool); 4863 4860 pr_cont(" hung=%us workers=%d", ··· 4877 4864 first = false; 4878 4865 } 4879 4866 pr_cont("\n"); 4867 + printk_deferred_exit(); 4880 4868 next_pool: 4881 4869 raw_spin_unlock_irqrestore(&pool->lock, flags); 4882 4870 /*
+1 -1
lib/Makefile
··· 351 351 obj-$(CONFIG_PLDMFW) += pldmfw/ 352 352 353 353 # KUnit tests 354 - CFLAGS_bitfield_kunit.o := $(call cc-option,-Wframe-larger-than=10240) 354 + CFLAGS_bitfield_kunit.o := $(DISABLE_STRUCTLEAK_PLUGIN) 355 355 obj-$(CONFIG_BITFIELD_KUNIT) += bitfield_kunit.o 356 356 obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o 357 357 obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o
+2 -2
lib/kunit/executor_test.c
··· 116 116 /* kfree() handles NULL already, but avoid allocating a no-op cleanup. */ 117 117 if (IS_ERR_OR_NULL(to_free)) 118 118 return; 119 - kunit_alloc_and_get_resource(test, NULL, kfree_res_free, GFP_KERNEL, 120 - (void *)to_free); 119 + kunit_alloc_resource(test, NULL, kfree_res_free, GFP_KERNEL, 120 + (void *)to_free); 121 121 } 122 122 123 123 static struct kunit_suite *alloc_fake_suite(struct kunit *test,
+6 -1
mm/memblock.c
··· 936 936 */ 937 937 int __init_memblock memblock_mark_nomap(phys_addr_t base, phys_addr_t size) 938 938 { 939 - return memblock_setclr_flag(base, size, 1, MEMBLOCK_NOMAP); 939 + int ret = memblock_setclr_flag(base, size, 1, MEMBLOCK_NOMAP); 940 + 941 + if (!ret) 942 + kmemleak_free_part_phys(base, size); 943 + 944 + return ret; 940 945 } 941 946 942 947 /**
+11 -13
net/core/net-procfs.c
··· 77 77 struct rtnl_link_stats64 temp; 78 78 const struct rtnl_link_stats64 *stats = dev_get_stats(dev, &temp); 79 79 80 - seq_printf(seq, "%9s: %16llu %12llu %4llu %6llu %4llu %5llu %10llu %9llu " 81 - "%16llu %12llu %4llu %6llu %4llu %5llu %7llu %10llu\n", 80 + seq_printf(seq, "%6s: %7llu %7llu %4llu %4llu %4llu %5llu %10llu %9llu " 81 + "%8llu %7llu %4llu %4llu %4llu %5llu %7llu %10llu\n", 82 82 dev->name, stats->rx_bytes, stats->rx_packets, 83 83 stats->rx_errors, 84 84 stats->rx_dropped + stats->rx_missed_errors, ··· 103 103 static int dev_seq_show(struct seq_file *seq, void *v) 104 104 { 105 105 if (v == SEQ_START_TOKEN) 106 - seq_puts(seq, "Interface| Receive " 107 - " | Transmit\n" 108 - " | bytes packets errs drop fifo frame " 109 - "compressed multicast| bytes packets errs " 110 - " drop fifo colls carrier compressed\n"); 106 + seq_puts(seq, "Inter-| Receive " 107 + " | Transmit\n" 108 + " face |bytes packets errs drop fifo frame " 109 + "compressed multicast|bytes packets errs " 110 + "drop fifo colls carrier compressed\n"); 111 111 else 112 112 dev_seq_printf_stats(seq, v); 113 113 return 0; ··· 259 259 struct packet_type *pt = v; 260 260 261 261 if (v == SEQ_START_TOKEN) 262 - seq_puts(seq, "Type Device Function\n"); 262 + seq_puts(seq, "Type Device Function\n"); 263 263 else if (pt->dev == NULL || dev_net(pt->dev) == seq_file_net(seq)) { 264 264 if (pt->type == htons(ETH_P_ALL)) 265 265 seq_puts(seq, "ALL "); 266 266 else 267 267 seq_printf(seq, "%04x", ntohs(pt->type)); 268 268 269 - seq_printf(seq, " %-9s %ps\n", 269 + seq_printf(seq, " %-8s %ps\n", 270 270 pt->dev ? 
pt->dev->name : "", pt->func); 271 271 } 272 272 ··· 327 327 struct netdev_hw_addr *ha; 328 328 struct net_device *dev = v; 329 329 330 - if (v == SEQ_START_TOKEN) { 331 - seq_puts(seq, "Ifindex Interface Refcount Global_use Address\n"); 330 + if (v == SEQ_START_TOKEN) 332 331 return 0; 333 - } 334 332 335 333 netif_addr_lock_bh(dev); 336 334 netdev_for_each_mc_addr(ha, dev) { 337 - seq_printf(seq, "%-7d %-9s %-8d %-10d %*phN\n", 335 + seq_printf(seq, "%-4d %-15s %-5d %-5d %*phN\n", 338 336 dev->ifindex, dev->name, 339 337 ha->refcount, ha->global_use, 340 338 (int)dev->addr_len, ha->addr);
-5
net/dsa/Kconfig
··· 101 101 102 102 config NET_DSA_TAG_OCELOT 103 103 tristate "Tag driver for Ocelot family of switches, using NPI port" 104 - depends on MSCC_OCELOT_SWITCH_LIB || \ 105 - (MSCC_OCELOT_SWITCH_LIB=n && COMPILE_TEST) 106 104 select PACKING 107 105 help 108 106 Say Y or M if you want to enable NPI tagging for the Ocelot switches ··· 112 114 113 115 config NET_DSA_TAG_OCELOT_8021Q 114 116 tristate "Tag driver for Ocelot family of switches, using VLAN" 115 - depends on MSCC_OCELOT_SWITCH_LIB || \ 116 - (MSCC_OCELOT_SWITCH_LIB=n && COMPILE_TEST) 117 117 help 118 118 Say Y or M if you want to enable support for tagging frames with a 119 119 custom VLAN-based header. Frames that require timestamping, such as ··· 134 138 135 139 config NET_DSA_TAG_SJA1105 136 140 tristate "Tag driver for NXP SJA1105 switches" 137 - depends on NET_DSA_SJA1105 || !NET_DSA_SJA1105 138 141 select PACKING 139 142 help 140 143 Say Y or M if you want to enable support for tagging frames with the
+3 -1
net/dsa/dsa2.c
··· 170 170 /* Check if the bridge is still in use, otherwise it is time 171 171 * to clean it up so we can reuse this bridge_num later. 172 172 */ 173 - if (!dsa_bridge_num_find(bridge_dev)) 173 + if (dsa_bridge_num_find(bridge_dev) < 0) 174 174 clear_bit(bridge_num, &dsa_fwd_offloading_bridges); 175 175 } 176 176 ··· 811 811 if (!dsa_is_cpu_port(ds, port)) 812 812 continue; 813 813 814 + rtnl_lock(); 814 815 err = ds->ops->change_tag_protocol(ds, port, tag_ops->proto); 816 + rtnl_unlock(); 815 817 if (err) { 816 818 dev_err(ds->dev, "Unable to use tag protocol \"%s\": %pe\n", 817 819 tag_ops->name, ERR_PTR(err));
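The dsa2.c hunk fixes a classic misuse of a lookup that returns an index-or-negative: `!dsa_bridge_num_find(...)` misreads bridge number 0 (a perfectly valid result) as "not found". A minimal sketch of the pattern, with hypothetical helper names:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical lookup: returns a non-negative slot index when found,
 * -1 when absent (the kernel helper signals absence with a negative
 * value the same way). */
static int slot_find(const int *slots, int n, int val)
{
	for (int i = 0; i < n; i++)
		if (slots[i] == val)
			return i;
	return -1;
}

/* Correct "in use" test: compare against 0, never logical-not.
 * "!slot_find(...)" would treat slot 0 as "not found". */
static bool slot_in_use(const int *slots, int n, int val)
{
	return slot_find(slots, n, val) >= 0;
}
```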
+1 -1
net/dsa/switch.c
··· 168 168 if (extack._msg) 169 169 dev_err(ds->dev, "port %d: %s\n", info->port, 170 170 extack._msg); 171 - if (err && err != EOPNOTSUPP) 171 + if (err && err != -EOPNOTSUPP) 172 172 return err; 173 173 } 174 174
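The switch.c one-liner fixes a sign bug: kernel callees return *negative* errno values, so comparing against the positive constant `EOPNOTSUPP` never matches and the "not supported" case was escalated into a hard failure. A self-contained illustration (stub function, not the DSA API):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Kernel-style callee: failure is reported as a negative errno. */
static int op_unsupported(void)
{
	return -EOPNOTSUPP;
}

/* Fixed predicate: "not supported" is tolerated, anything else
 * negative is a real error. The buggy form compared err against the
 * positive EOPNOTSUPP, which a negative return can never equal. */
static bool is_real_error(int err)
{
	return err && err != -EOPNOTSUPP;
}
```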
+9 -19
net/dsa/tag_dsa.c
··· 45 45 * 6 6 2 2 4 2 N 46 46 */ 47 47 48 + #include <linux/dsa/mv88e6xxx.h> 48 49 #include <linux/etherdevice.h> 49 50 #include <linux/list.h> 50 51 #include <linux/slab.h> ··· 130 129 u8 tag_dev, tag_port; 131 130 enum dsa_cmd cmd; 132 131 u8 *dsa_header; 133 - u16 pvid = 0; 134 - int err; 135 132 136 133 if (skb->offload_fwd_mark) { 137 134 struct dsa_switch_tree *dst = dp->ds->dst; 138 - struct net_device *br = dp->bridge_dev; 139 135 140 136 cmd = DSA_CMD_FORWARD; 141 137 ··· 142 144 */ 143 145 tag_dev = dst->last_switch + 1 + dp->bridge_num; 144 146 tag_port = 0; 145 - 146 - /* If we are offloading forwarding for a VLAN-unaware bridge, 147 - * inject packets to hardware using the bridge's pvid, since 148 - * that's where the packets ingressed from. 149 - */ 150 - if (!br_vlan_enabled(br)) { 151 - /* Safe because __dev_queue_xmit() runs under 152 - * rcu_read_lock_bh() 153 - */ 154 - err = br_vlan_get_pvid_rcu(br, &pvid); 155 - if (err) 156 - return NULL; 157 - } 158 147 } else { 159 148 cmd = DSA_CMD_FROM_CPU; 160 149 tag_dev = dp->ds->index; ··· 165 180 dsa_header[2] &= ~0x10; 166 181 } 167 182 } else { 183 + struct net_device *br = dp->bridge_dev; 184 + u16 vid; 185 + 186 + vid = br ? MV88E6XXX_VID_BRIDGED : MV88E6XXX_VID_STANDALONE; 187 + 168 188 skb_push(skb, DSA_HLEN + extra); 169 189 dsa_alloc_etype_header(skb, DSA_HLEN + extra); 170 190 171 - /* Construct untagged DSA tag. */ 191 + /* Construct DSA header from untagged frame. */ 172 192 dsa_header = dsa_etype_header_pos_tx(skb) + extra; 173 193 174 194 dsa_header[0] = (cmd << 6) | tag_dev; 175 195 dsa_header[1] = tag_port << 3; 176 - dsa_header[2] = pvid >> 8; 177 - dsa_header[3] = pvid & 0xff; 196 + dsa_header[2] = vid >> 8; 197 + dsa_header[3] = vid & 0xff; 178 198 } 179 199 180 200 return skb;
-1
net/dsa/tag_ocelot.c
··· 2 2 /* Copyright 2019 NXP 3 3 */ 4 4 #include <linux/dsa/ocelot.h> 5 - #include <soc/mscc/ocelot.h> 6 5 #include "dsa_priv.h" 7 6 8 7 static void ocelot_xmit_common(struct sk_buff *skb, struct net_device *netdev,
+27 -13
net/dsa/tag_ocelot_8021q.c
··· 9 9 * that on egress 10 10 */ 11 11 #include <linux/dsa/8021q.h> 12 - #include <soc/mscc/ocelot.h> 13 - #include <soc/mscc/ocelot_ptp.h> 12 + #include <linux/dsa/ocelot.h> 14 13 #include "dsa_priv.h" 14 + 15 + static struct sk_buff *ocelot_defer_xmit(struct dsa_port *dp, 16 + struct sk_buff *skb) 17 + { 18 + struct felix_deferred_xmit_work *xmit_work; 19 + struct felix_port *felix_port = dp->priv; 20 + 21 + xmit_work = kzalloc(sizeof(*xmit_work), GFP_ATOMIC); 22 + if (!xmit_work) 23 + return NULL; 24 + 25 + /* Calls felix_port_deferred_xmit in felix.c */ 26 + kthread_init_work(&xmit_work->work, felix_port->xmit_work_fn); 27 + /* Increase refcount so the kfree_skb in dsa_slave_xmit 28 + * won't really free the packet. 29 + */ 30 + xmit_work->dp = dp; 31 + xmit_work->skb = skb_get(skb); 32 + 33 + kthread_queue_work(felix_port->xmit_worker, &xmit_work->work); 34 + 35 + return NULL; 36 + } 15 37 16 38 static struct sk_buff *ocelot_xmit(struct sk_buff *skb, 17 39 struct net_device *netdev) ··· 42 20 u16 tx_vid = dsa_8021q_tx_vid(dp->ds, dp->index); 43 21 u16 queue_mapping = skb_get_queue_mapping(skb); 44 22 u8 pcp = netdev_txq_to_tc(netdev, queue_mapping); 45 - struct ocelot *ocelot = dp->ds->priv; 46 - int port = dp->index; 47 - u32 rew_op = 0; 23 + struct ethhdr *hdr = eth_hdr(skb); 48 24 49 - rew_op = ocelot_ptp_rew_op(skb); 50 - if (rew_op) { 51 - if (!ocelot_can_inject(ocelot, 0)) 52 - return NULL; 53 - 54 - ocelot_port_inject_frame(ocelot, port, 0, rew_op, skb); 55 - return NULL; 56 - } 25 + if (ocelot_ptp_rew_op(skb) || is_link_local_ether_addr(hdr->h_dest)) 26 + return ocelot_defer_xmit(dp, skb); 57 27 58 28 return dsa_8021q_xmit(skb, netdev, ETH_P_8021Q, 59 29 ((pcp << VLAN_PRIO_SHIFT) | tx_vid));
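`ocelot_defer_xmit()` above takes an extra reference with `skb_get()` before queueing the packet to a worker, so the caller's unconditional `kfree_skb()` does not free it out from under the deferred work. The same ownership pattern on a toy refcounted object (hypothetical types, not `sk_buff`):

```c
#include <assert.h>

/* Toy refcounted buffer standing in for an sk_buff. */
struct buf {
	int refs;
	int freed;
};

static struct buf *buf_get(struct buf *b)
{
	b->refs++;
	return b;
}

static void buf_put(struct buf *b)
{
	if (--b->refs == 0)
		b->freed = 1;	/* stand-in for the actual free */
}

/* Deferred-transmit pattern: take a reference before handing the
 * buffer to the worker, so the caller's put cannot be the last one. */
static struct buf *defer_xmit(struct buf *b)
{
	return buf_get(b);	/* the worker now owns this reference */
}
```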
+43
net/dsa/tag_sja1105.c
··· 4 4 #include <linux/if_vlan.h> 5 5 #include <linux/dsa/sja1105.h> 6 6 #include <linux/dsa/8021q.h> 7 + #include <linux/skbuff.h> 7 8 #include <linux/packing.h> 8 9 #include "dsa_priv.h" 9 10 ··· 53 52 #define SJA1110_RX_TRAILER_LEN 13 54 53 #define SJA1110_TX_TRAILER_LEN 4 55 54 #define SJA1110_MAX_PADDING_LEN 15 55 + 56 + enum sja1110_meta_tstamp { 57 + SJA1110_META_TSTAMP_TX = 0, 58 + SJA1110_META_TSTAMP_RX = 1, 59 + }; 56 60 57 61 /* Similar to is_link_local_ether_addr(hdr->h_dest) but also covers PTP */ 58 62 static inline bool sja1105_is_link_local(const struct sk_buff *skb) ··· 524 518 525 519 return sja1105_rcv_meta_state_machine(skb, &meta, is_link_local, 526 520 is_meta); 521 + } 522 + 523 + static void sja1110_process_meta_tstamp(struct dsa_switch *ds, int port, 524 + u8 ts_id, enum sja1110_meta_tstamp dir, 525 + u64 tstamp) 526 + { 527 + struct sk_buff *skb, *skb_tmp, *skb_match = NULL; 528 + struct dsa_port *dp = dsa_to_port(ds, port); 529 + struct skb_shared_hwtstamps shwt = {0}; 530 + struct sja1105_port *sp = dp->priv; 531 + 532 + if (!dsa_port_is_sja1105(dp)) 533 + return; 534 + 535 + /* We don't care about RX timestamps on the CPU port */ 536 + if (dir == SJA1110_META_TSTAMP_RX) 537 + return; 538 + 539 + spin_lock(&sp->data->skb_txtstamp_queue.lock); 540 + 541 + skb_queue_walk_safe(&sp->data->skb_txtstamp_queue, skb, skb_tmp) { 542 + if (SJA1105_SKB_CB(skb)->ts_id != ts_id) 543 + continue; 544 + 545 + __skb_unlink(skb, &sp->data->skb_txtstamp_queue); 546 + skb_match = skb; 547 + 548 + break; 549 + } 550 + 551 + spin_unlock(&sp->data->skb_txtstamp_queue.lock); 552 + 553 + if (WARN_ON(!skb_match)) 554 + return; 555 + 556 + shwt.hwtstamp = ns_to_ktime(sja1105_ticks_to_ns(tstamp)); 557 + skb_complete_tx_timestamp(skb_match, &shwt); 527 558 } 528 559 529 560 static struct sk_buff *sja1110_rcv_meta(struct sk_buff *skb, u16 rx_header)
+11 -12
net/ipv4/icmp.c
··· 1054 1054 iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(iio->extobj_hdr), &_iio); 1055 1055 if (!ext_hdr || !iio) 1056 1056 goto send_mal_query; 1057 - if (ntohs(iio->extobj_hdr.length) <= sizeof(iio->extobj_hdr)) 1057 + if (ntohs(iio->extobj_hdr.length) <= sizeof(iio->extobj_hdr) || 1058 + ntohs(iio->extobj_hdr.length) > sizeof(_iio)) 1058 1059 goto send_mal_query; 1059 1060 ident_len = ntohs(iio->extobj_hdr.length) - sizeof(iio->extobj_hdr); 1061 + iio = skb_header_pointer(skb, sizeof(_ext_hdr), 1062 + sizeof(iio->extobj_hdr) + ident_len, &_iio); 1063 + if (!iio) 1064 + goto send_mal_query; 1065 + 1060 1066 status = 0; 1061 1067 dev = NULL; 1062 1068 switch (iio->extobj_hdr.class_type) { 1063 1069 case ICMP_EXT_ECHO_CTYPE_NAME: 1064 - iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(_iio), &_iio); 1065 1070 if (ident_len >= IFNAMSIZ) 1066 1071 goto send_mal_query; 1067 1072 memset(buff, 0, sizeof(buff)); ··· 1074 1069 dev = dev_get_by_name(net, buff); 1075 1070 break; 1076 1071 case ICMP_EXT_ECHO_CTYPE_INDEX: 1077 - iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(iio->extobj_hdr) + 1078 - sizeof(iio->ident.ifindex), &_iio); 1079 1072 if (ident_len != sizeof(iio->ident.ifindex)) 1080 1073 goto send_mal_query; 1081 1074 dev = dev_get_by_index(net, ntohl(iio->ident.ifindex)); 1082 1075 break; 1083 1076 case ICMP_EXT_ECHO_CTYPE_ADDR: 1084 - if (ident_len != sizeof(iio->ident.addr.ctype3_hdr) + 1077 + if (ident_len < sizeof(iio->ident.addr.ctype3_hdr) || 1078 + ident_len != sizeof(iio->ident.addr.ctype3_hdr) + 1085 1079 iio->ident.addr.ctype3_hdr.addrlen) 1086 1080 goto send_mal_query; 1087 1081 switch (ntohs(iio->ident.addr.ctype3_hdr.afi)) { 1088 1082 case ICMP_AFI_IP: 1089 - iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(iio->extobj_hdr) + 1090 - sizeof(struct in_addr), &_iio); 1091 - if (ident_len != sizeof(iio->ident.addr.ctype3_hdr) + 1092 - sizeof(struct in_addr)) 1083 + if (iio->ident.addr.ctype3_hdr.addrlen != sizeof(struct 
in_addr)) 1093 1084 goto send_mal_query; 1094 1085 dev = ip_dev_find(net, iio->ident.addr.ip_addr.ipv4_addr); 1095 1086 break; 1096 1087 #if IS_ENABLED(CONFIG_IPV6) 1097 1088 case ICMP_AFI_IP6: 1098 - iio = skb_header_pointer(skb, sizeof(_ext_hdr), sizeof(_iio), &_iio); 1099 - if (ident_len != sizeof(iio->ident.addr.ctype3_hdr) + 1100 - sizeof(struct in6_addr)) 1089 + if (iio->ident.addr.ctype3_hdr.addrlen != sizeof(struct in6_addr)) 1101 1090 goto send_mal_query; 1102 1091 dev = ipv6_stub->ipv6_dev_find(net, &iio->ident.addr.ip_addr.ipv6_addr, dev); 1103 1092 dev_hold(dev);
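The icmp.c hunk hardens `icmp_build_probe()` by bounding a packet-supplied length both below (it must cover the fixed object header) and above (it must fit the local copy) before any variable-length read. The shape of that check, with illustrative sizes rather than the real ICMP structure sizes:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define OBJ_HDR_LEN	4u	/* fixed extension-object header */
#define LOCAL_BUF_LEN	20u	/* size of the on-stack copy */

/* A wire-carried length is attacker-controlled: reject it unless it
 * leaves room for an identifier beyond the header and still fits the
 * buffer it will be copied into. */
static bool ident_len_ok(size_t wire_len)
{
	if (wire_len <= OBJ_HDR_LEN)	/* no identifier present */
		return false;
	if (wire_len > LOCAL_BUF_LEN)	/* would overrun the copy */
		return false;
	return true;
}
```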
+62 -8
net/ipv6/ioam6.c
··· 770 770 data += sizeof(__be32); 771 771 } 772 772 773 + /* bit12 undefined: filled with empty value */ 774 + if (trace->type.bit12) { 775 + *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE); 776 + data += sizeof(__be32); 777 + } 778 + 779 + /* bit13 undefined: filled with empty value */ 780 + if (trace->type.bit13) { 781 + *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE); 782 + data += sizeof(__be32); 783 + } 784 + 785 + /* bit14 undefined: filled with empty value */ 786 + if (trace->type.bit14) { 787 + *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE); 788 + data += sizeof(__be32); 789 + } 790 + 791 + /* bit15 undefined: filled with empty value */ 792 + if (trace->type.bit15) { 793 + *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE); 794 + data += sizeof(__be32); 795 + } 796 + 797 + /* bit16 undefined: filled with empty value */ 798 + if (trace->type.bit16) { 799 + *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE); 800 + data += sizeof(__be32); 801 + } 802 + 803 + /* bit17 undefined: filled with empty value */ 804 + if (trace->type.bit17) { 805 + *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE); 806 + data += sizeof(__be32); 807 + } 808 + 809 + /* bit18 undefined: filled with empty value */ 810 + if (trace->type.bit18) { 811 + *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE); 812 + data += sizeof(__be32); 813 + } 814 + 815 + /* bit19 undefined: filled with empty value */ 816 + if (trace->type.bit19) { 817 + *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE); 818 + data += sizeof(__be32); 819 + } 820 + 821 + /* bit20 undefined: filled with empty value */ 822 + if (trace->type.bit20) { 823 + *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE); 824 + data += sizeof(__be32); 825 + } 826 + 827 + /* bit21 undefined: filled with empty value */ 828 + if (trace->type.bit21) { 829 + *(__be32 *)data = cpu_to_be32(IOAM6_U32_UNAVAILABLE); 830 + data += sizeof(__be32); 831 + } 832 + 773 833 /* opaque state snapshot */ 774 834 if 
(trace->type.bit22) { 775 835 if (!sc) { ··· 851 791 struct ioam6_schema *sc; 852 792 u8 sclen = 0; 853 793 854 - /* Skip if Overflow flag is set OR 855 - * if an unknown type (bit 12-21) is set 794 + /* Skip if Overflow flag is set 856 795 */ 857 - if (trace->overflow || 858 - trace->type.bit12 | trace->type.bit13 | trace->type.bit14 | 859 - trace->type.bit15 | trace->type.bit16 | trace->type.bit17 | 860 - trace->type.bit18 | trace->type.bit19 | trace->type.bit20 | 861 - trace->type.bit21) { 796 + if (trace->overflow) 862 797 return; 863 - } 864 798 865 799 /* NodeLen does not include Opaque State Snapshot length. We need to 866 800 * take it into account if the corresponding bit is set (bit 22) and
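The ioam6.c addition unrolls ten identical blocks, one per undefined trace-type bit (12..21), each reserving a 4-byte slot filled with the "unavailable" sentinel. Expressed as a loop over a bitmask, purely for illustration (the kernel keeps the bits as discrete struct fields, hence the unrolled form):

```c
#include <assert.h>
#include <stdint.h>

#define U32_UNAVAILABLE 0xffffffffu	/* mirrors IOAM6_U32_UNAVAILABLE */

/* Each set bit in 'undefined_mask' reserves one 4-byte slot in the
 * trace data, filled with the sentinel. Returns slots written. */
static int fill_undefined(uint32_t *data, uint32_t undefined_mask)
{
	int n = 0;

	for (int bit = 0; bit < 32; bit++) {
		if (undefined_mask & (1u << bit))
			data[n++] = U32_UNAVAILABLE;
	}
	return n;
}
```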
+5 -1
net/ipv6/ioam6_iptunnel.c
··· 75 75 u32 fields; 76 76 77 77 if (!trace->type_be32 || !trace->remlen || 78 - trace->remlen > IOAM6_TRACE_DATA_SIZE_MAX / 4) 78 + trace->remlen > IOAM6_TRACE_DATA_SIZE_MAX / 4 || 79 + trace->type.bit12 | trace->type.bit13 | trace->type.bit14 | 80 + trace->type.bit15 | trace->type.bit16 | trace->type.bit17 | 81 + trace->type.bit18 | trace->type.bit19 | trace->type.bit20 | 82 + trace->type.bit21) 79 83 return false; 80 84 81 85 trace->nodelen = 0;
+15 -40
net/mptcp/protocol.c
··· 528 528 529 529 sk->sk_shutdown |= RCV_SHUTDOWN; 530 530 smp_mb__before_atomic(); /* SHUTDOWN must be visible first */ 531 - set_bit(MPTCP_DATA_READY, &msk->flags); 532 531 533 532 switch (sk->sk_state) { 534 533 case TCP_ESTABLISHED: ··· 741 742 742 743 /* Wake-up the reader only for in-sequence data */ 743 744 mptcp_data_lock(sk); 744 - if (move_skbs_to_msk(msk, ssk)) { 745 - set_bit(MPTCP_DATA_READY, &msk->flags); 745 + if (move_skbs_to_msk(msk, ssk)) 746 746 sk->sk_data_ready(sk); 747 - } 747 + 748 748 mptcp_data_unlock(sk); 749 749 } 750 750 ··· 845 847 sk->sk_shutdown |= RCV_SHUTDOWN; 846 848 847 849 smp_mb__before_atomic(); /* SHUTDOWN must be visible first */ 848 - set_bit(MPTCP_DATA_READY, &msk->flags); 849 850 sk->sk_data_ready(sk); 850 851 } 851 852 ··· 1756 1759 return copied ? : ret; 1757 1760 } 1758 1761 1759 - static void mptcp_wait_data(struct sock *sk, long *timeo) 1760 - { 1761 - DEFINE_WAIT_FUNC(wait, woken_wake_function); 1762 - struct mptcp_sock *msk = mptcp_sk(sk); 1763 - 1764 - add_wait_queue(sk_sleep(sk), &wait); 1765 - sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk); 1766 - 1767 - sk_wait_event(sk, timeo, 1768 - test_bit(MPTCP_DATA_READY, &msk->flags), &wait); 1769 - 1770 - sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk); 1771 - remove_wait_queue(sk_sleep(sk), &wait); 1772 - } 1773 - 1774 1762 static int __mptcp_recvmsg_mskq(struct mptcp_sock *msk, 1775 1763 struct msghdr *msg, 1776 1764 size_t len, int flags, ··· 2059 2077 } 2060 2078 2061 2079 pr_debug("block timeout %ld", timeo); 2062 - mptcp_wait_data(sk, &timeo); 2063 - } 2064 - 2065 - if (skb_queue_empty_lockless(&sk->sk_receive_queue) && 2066 - skb_queue_empty(&msk->receive_queue)) { 2067 - /* entire backlog drained, clear DATA_READY. */ 2068 - clear_bit(MPTCP_DATA_READY, &msk->flags); 2069 - 2070 - /* .. race-breaker: ssk might have gotten new data 2071 - * after last __mptcp_move_skbs() returned false. 
2072 - */ 2073 - if (unlikely(__mptcp_move_skbs(msk))) 2074 - set_bit(MPTCP_DATA_READY, &msk->flags); 2080 + sk_wait_data(sk, &timeo, NULL); 2075 2081 } 2076 2082 2077 2083 out_err: ··· 2068 2098 tcp_recv_timestamp(msg, sk, &tss); 2069 2099 } 2070 2100 2071 - pr_debug("msk=%p data_ready=%d rx queue empty=%d copied=%d", 2072 - msk, test_bit(MPTCP_DATA_READY, &msk->flags), 2073 - skb_queue_empty_lockless(&sk->sk_receive_queue), copied); 2101 + pr_debug("msk=%p rx queue empty=%d:%d copied=%d", 2102 + msk, skb_queue_empty_lockless(&sk->sk_receive_queue), 2103 + skb_queue_empty(&msk->receive_queue), copied); 2074 2104 if (!(flags & MSG_PEEK)) 2075 2105 mptcp_rcv_space_adjust(msk, copied); 2076 2106 ··· 2338 2368 inet_sk_state_store(sk, TCP_CLOSE); 2339 2369 sk->sk_shutdown = SHUTDOWN_MASK; 2340 2370 smp_mb__before_atomic(); /* SHUTDOWN must be visible first */ 2341 - set_bit(MPTCP_DATA_READY, &msk->flags); 2342 2371 set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags); 2343 2372 2344 2373 mptcp_close_wake_up(sk); ··· 3354 3385 3355 3386 static __poll_t mptcp_check_readable(struct mptcp_sock *msk) 3356 3387 { 3357 - return test_bit(MPTCP_DATA_READY, &msk->flags) ? EPOLLIN | EPOLLRDNORM : 3358 - 0; 3388 + /* Concurrent splices from sk_receive_queue into receive_queue will 3389 + * always show at least one non-empty queue when checked in this order. 3390 + */ 3391 + if (skb_queue_empty_lockless(&((struct sock *)msk)->sk_receive_queue) && 3392 + skb_queue_empty_lockless(&msk->receive_queue)) 3393 + return 0; 3394 + 3395 + return EPOLLIN | EPOLLRDNORM; 3359 3396 } 3360 3397 3361 3398 static __poll_t mptcp_check_writeable(struct mptcp_sock *msk) ··· 3396 3421 state = inet_sk_state_load(sk); 3397 3422 pr_debug("msk=%p state=%d flags=%lx", msk, state, msk->flags); 3398 3423 if (state == TCP_LISTEN) 3399 - return mptcp_check_readable(msk); 3424 + return test_bit(MPTCP_DATA_READY, &msk->flags) ? 
EPOLLIN | EPOLLRDNORM : 0; 3400 3425 3401 3426 if (state != TCP_SYN_SENT && state != TCP_SYN_RECV) { 3402 3427 mask |= mptcp_check_readable(msk);
+3
net/nfc/af_nfc.c
··· 60 60 proto_tab[nfc_proto->id] = nfc_proto; 61 61 write_unlock(&proto_tab_lock); 62 62 63 + if (rc) 64 + proto_unregister(nfc_proto->proto); 65 + 63 66 return rc; 64 67 } 65 68 EXPORT_SYMBOL(nfc_proto_register);
+7 -2
net/nfc/digital_core.c
··· 277 277 static int digital_tg_listen_mdaa(struct nfc_digital_dev *ddev, u8 rf_tech) 278 278 { 279 279 struct digital_tg_mdaa_params *params; 280 + int rc; 280 281 281 282 params = kzalloc(sizeof(*params), GFP_KERNEL); 282 283 if (!params) ··· 292 291 get_random_bytes(params->nfcid2 + 2, NFC_NFCID2_MAXSIZE - 2); 293 292 params->sc = DIGITAL_SENSF_FELICA_SC; 294 293 295 - return digital_send_cmd(ddev, DIGITAL_CMD_TG_LISTEN_MDAA, NULL, params, 296 - 500, digital_tg_recv_atr_req, NULL); 294 + rc = digital_send_cmd(ddev, DIGITAL_CMD_TG_LISTEN_MDAA, NULL, params, 295 + 500, digital_tg_recv_atr_req, NULL); 296 + if (rc) 297 + kfree(params); 298 + 299 + return rc; 297 300 } 298 301 299 302 static int digital_tg_listen_md(struct nfc_digital_dev *ddev, u8 rf_tech)
+6 -2
net/nfc/digital_technology.c
··· 465 465 skb_put_u8(skb, sel_cmd); 466 466 skb_put_u8(skb, DIGITAL_SDD_REQ_SEL_PAR); 467 467 468 - return digital_in_send_cmd(ddev, skb, 30, digital_in_recv_sdd_res, 469 - target); 468 + rc = digital_in_send_cmd(ddev, skb, 30, digital_in_recv_sdd_res, 469 + target); 470 + if (rc) 471 + kfree_skb(skb); 472 + 473 + return rc; 470 474 } 471 475 472 476 static void digital_in_recv_sens_res(struct nfc_digital_dev *ddev, void *arg,
+2
net/nfc/nci/rsp.c
··· 334 334 ndev->cur_conn_id); 335 335 if (conn_info) { 336 336 list_del(&conn_info->list); 337 + if (conn_info == ndev->rf_conn_info) 338 + ndev->rf_conn_info = NULL; 337 339 devm_kfree(&ndev->nfc_dev->dev, conn_info); 338 340 } 339 341 }
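The nci/rsp.c hunk clears a cached alias (`ndev->rf_conn_info`) before freeing the object it points at, so later code cannot dereference freed memory. A hedged user-space sketch of the same stale-pointer guard; the struct and function names below are illustrative, not the NCI API:

```c
#include <stdlib.h>

struct conn_info {
	int conn_id;
	struct conn_info *next;
};

struct dev {
	struct conn_info *list;		/* linked list of connections */
	struct conn_info *rf_conn_info;	/* cached alias into the list */
};

/* Remove a connection by id; NULL the cached alias if it points at the
 * entry being freed, mirroring the rsp.c fix. */
static void remove_conn(struct dev *d, int conn_id)
{
	struct conn_info **pp = &d->list;

	while (*pp) {
		struct conn_info *ci = *pp;

		if (ci->conn_id == conn_id) {
			*pp = ci->next;
			if (ci == d->rf_conn_info)
				d->rf_conn_info = NULL; /* no dangling alias */
			free(ci);
			return;
		}
		pp = &ci->next;
	}
}
```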
+19 -13
net/sched/sch_mqprio.c
··· 529 529 for (i = tc.offset; i < tc.offset + tc.count; i++) { 530 530 struct netdev_queue *q = netdev_get_tx_queue(dev, i); 531 531 struct Qdisc *qdisc = rtnl_dereference(q->qdisc); 532 - struct gnet_stats_basic_cpu __percpu *cpu_bstats = NULL; 533 - struct gnet_stats_queue __percpu *cpu_qstats = NULL; 534 532 535 533 spin_lock_bh(qdisc_lock(qdisc)); 536 - if (qdisc_is_percpu_stats(qdisc)) { 537 - cpu_bstats = qdisc->cpu_bstats; 538 - cpu_qstats = qdisc->cpu_qstats; 539 - } 540 534 541 - qlen = qdisc_qlen_sum(qdisc); 542 - __gnet_stats_copy_basic(NULL, &sch->bstats, 543 - cpu_bstats, &qdisc->bstats); 544 - __gnet_stats_copy_queue(&sch->qstats, 545 - cpu_qstats, 546 - &qdisc->qstats, 547 - qlen); 535 + if (qdisc_is_percpu_stats(qdisc)) { 536 + qlen = qdisc_qlen_sum(qdisc); 537 + 538 + __gnet_stats_copy_basic(NULL, &bstats, 539 + qdisc->cpu_bstats, 540 + &qdisc->bstats); 541 + __gnet_stats_copy_queue(&qstats, 542 + qdisc->cpu_qstats, 543 + &qdisc->qstats, 544 + qlen); 545 + } else { 546 + qlen += qdisc->q.qlen; 547 + bstats.bytes += qdisc->bstats.bytes; 548 + bstats.packets += qdisc->bstats.packets; 549 + qstats.backlog += qdisc->qstats.backlog; 550 + qstats.drops += qdisc->qstats.drops; 551 + qstats.requeues += qdisc->qstats.requeues; 552 + qstats.overlimits += qdisc->qstats.overlimits; 553 + } 548 554 spin_unlock_bh(qdisc_lock(qdisc)); 549 555 } 550 556
+1 -1
net/sctp/sm_make_chunk.c
··· 3697 3697 outlen = (sizeof(outreq) + stream_len) * out; 3698 3698 inlen = (sizeof(inreq) + stream_len) * in; 3699 3699 3700 - retval = sctp_make_reconf(asoc, outlen + inlen); 3700 + retval = sctp_make_reconf(asoc, SCTP_PAD4(outlen) + SCTP_PAD4(inlen)); 3701 3701 if (!retval) 3702 3702 return NULL; 3703 3703
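The sm_make_chunk.c fix sizes the reconf chunk with `SCTP_PAD4()` because each parameter is padded to a 4-byte boundary on the wire, so the raw `outlen + inlen` can undersize the buffer. The rounding is the conventional round-up idiom; restated here for illustration (macro and helper names are ours, not the kernel's):

```c
/* Round a length up to the next multiple of 4, as SCTP chunk parameters
 * are 4-byte aligned on the wire (the idiom behind SCTP_PAD4). */
#define PAD4(len) (((len) + 3U) & ~3U)

/* Buffer size for the reconf chunk: pad each parameter block *before*
 * summing, as the fix above does. */
static unsigned int reconf_size(unsigned int outlen, unsigned int inlen)
{
	return PAD4(outlen) + PAD4(inlen);
}
```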
+6 -1
net/smc/smc_cdc.c
··· 150 150 151 151 again: 152 152 link = conn->lnk; 153 + if (!smc_wr_tx_link_hold(link)) 154 + return -ENOLINK; 153 155 rc = smc_cdc_get_free_slot(conn, link, &wr_buf, NULL, &pend); 154 156 if (rc) 155 - return rc; 157 + goto put_out; 156 158 157 159 spin_lock_bh(&conn->send_lock); 158 160 if (link != conn->lnk) { ··· 162 160 spin_unlock_bh(&conn->send_lock); 163 161 smc_wr_tx_put_slot(link, 164 162 (struct smc_wr_tx_pend_priv *)pend); 163 + smc_wr_tx_link_put(link); 165 164 if (again) 166 165 return -ENOLINK; 167 166 again = true; ··· 170 167 } 171 168 rc = smc_cdc_msg_send(conn, wr_buf, pend); 172 169 spin_unlock_bh(&conn->send_lock); 170 + put_out: 171 + smc_wr_tx_link_put(link); 173 172 return rc; 174 173 } 175 174
+11 -9
net/smc/smc_core.c
··· 949 949 to_lnk = &lgr->lnk[i]; 950 950 break; 951 951 } 952 - if (!to_lnk) { 952 + if (!to_lnk || !smc_wr_tx_link_hold(to_lnk)) { 953 953 smc_lgr_terminate_sched(lgr); 954 954 return NULL; 955 955 } ··· 981 981 read_unlock_bh(&lgr->conns_lock); 982 982 /* pre-fetch buffer outside of send_lock, might sleep */ 983 983 rc = smc_cdc_get_free_slot(conn, to_lnk, &wr_buf, NULL, &pend); 984 - if (rc) { 985 - smcr_link_down_cond_sched(to_lnk); 986 - return NULL; 987 - } 984 + if (rc) 985 + goto err_out; 988 986 /* avoid race with smcr_tx_sndbuf_nonempty() */ 989 987 spin_lock_bh(&conn->send_lock); 990 988 smc_switch_link_and_count(conn, to_lnk); 991 989 rc = smc_switch_cursor(smc, pend, wr_buf); 992 990 spin_unlock_bh(&conn->send_lock); 993 991 sock_put(&smc->sk); 994 - if (rc) { 995 - smcr_link_down_cond_sched(to_lnk); 996 - return NULL; 997 - } 992 + if (rc) 993 + goto err_out; 998 994 goto again; 999 995 } 1000 996 read_unlock_bh(&lgr->conns_lock); 997 + smc_wr_tx_link_put(to_lnk); 1001 998 return to_lnk; 999 + 1000 + err_out: 1001 + smcr_link_down_cond_sched(to_lnk); 1002 + smc_wr_tx_link_put(to_lnk); 1003 + return NULL; 1002 1004 } 1003 1005 1004 1006 static void smcr_buf_unuse(struct smc_buf_desc *rmb_desc,
+49 -14
net/smc/smc_llc.c
··· 383 383 struct smc_wr_buf *wr_buf; 384 384 int rc; 385 385 386 + if (!smc_wr_tx_link_hold(link)) 387 + return -ENOLINK; 386 388 rc = smc_llc_add_pending_send(link, &wr_buf, &pend); 387 389 if (rc) 388 - return rc; 390 + goto put_out; 389 391 confllc = (struct smc_llc_msg_confirm_link *)wr_buf; 390 392 memset(confllc, 0, sizeof(*confllc)); 391 393 confllc->hd.common.type = SMC_LLC_CONFIRM_LINK; ··· 404 402 confllc->max_links = SMC_LLC_ADD_LNK_MAX_LINKS; 405 403 /* send llc message */ 406 404 rc = smc_wr_tx_send(link, pend); 405 + put_out: 406 + smc_wr_tx_link_put(link); 407 407 return rc; 408 408 } 409 409 ··· 419 415 struct smc_link *link; 420 416 int i, rc, rtok_ix; 421 417 418 + if (!smc_wr_tx_link_hold(send_link)) 419 + return -ENOLINK; 422 420 rc = smc_llc_add_pending_send(send_link, &wr_buf, &pend); 423 421 if (rc) 424 - return rc; 422 + goto put_out; 425 423 rkeyllc = (struct smc_llc_msg_confirm_rkey *)wr_buf; 426 424 memset(rkeyllc, 0, sizeof(*rkeyllc)); 427 425 rkeyllc->hd.common.type = SMC_LLC_CONFIRM_RKEY; ··· 450 444 (u64)sg_dma_address(rmb_desc->sgt[send_link->link_idx].sgl)); 451 445 /* send llc message */ 452 446 rc = smc_wr_tx_send(send_link, pend); 447 + put_out: 448 + smc_wr_tx_link_put(send_link); 453 449 return rc; 454 450 } 455 451 ··· 464 456 struct smc_wr_buf *wr_buf; 465 457 int rc; 466 458 459 + if (!smc_wr_tx_link_hold(link)) 460 + return -ENOLINK; 467 461 rc = smc_llc_add_pending_send(link, &wr_buf, &pend); 468 462 if (rc) 469 - return rc; 463 + goto put_out; 470 464 rkeyllc = (struct smc_llc_msg_delete_rkey *)wr_buf; 471 465 memset(rkeyllc, 0, sizeof(*rkeyllc)); 472 466 rkeyllc->hd.common.type = SMC_LLC_DELETE_RKEY; ··· 477 467 rkeyllc->rkey[0] = htonl(rmb_desc->mr_rx[link->link_idx]->rkey); 478 468 /* send llc message */ 479 469 rc = smc_wr_tx_send(link, pend); 470 + put_out: 471 + smc_wr_tx_link_put(link); 480 472 return rc; 481 473 } 482 474 ··· 492 480 struct smc_wr_buf *wr_buf; 493 481 int rc; 494 482 483 + if 
(!smc_wr_tx_link_hold(link)) 484 + return -ENOLINK; 495 485 rc = smc_llc_add_pending_send(link, &wr_buf, &pend); 496 486 if (rc) 497 - return rc; 487 + goto put_out; 498 488 addllc = (struct smc_llc_msg_add_link *)wr_buf; 499 489 500 490 memset(addllc, 0, sizeof(*addllc)); ··· 518 504 } 519 505 /* send llc message */ 520 506 rc = smc_wr_tx_send(link, pend); 507 + put_out: 508 + smc_wr_tx_link_put(link); 521 509 return rc; 522 510 } 523 511 ··· 533 517 struct smc_wr_buf *wr_buf; 534 518 int rc; 535 519 520 + if (!smc_wr_tx_link_hold(link)) 521 + return -ENOLINK; 536 522 rc = smc_llc_add_pending_send(link, &wr_buf, &pend); 537 523 if (rc) 538 - return rc; 524 + goto put_out; 539 525 delllc = (struct smc_llc_msg_del_link *)wr_buf; 540 526 541 527 memset(delllc, 0, sizeof(*delllc)); ··· 554 536 delllc->reason = htonl(reason); 555 537 /* send llc message */ 556 538 rc = smc_wr_tx_send(link, pend); 539 + put_out: 540 + smc_wr_tx_link_put(link); 557 541 return rc; 558 542 } 559 543 ··· 567 547 struct smc_wr_buf *wr_buf; 568 548 int rc; 569 549 550 + if (!smc_wr_tx_link_hold(link)) 551 + return -ENOLINK; 570 552 rc = smc_llc_add_pending_send(link, &wr_buf, &pend); 571 553 if (rc) 572 - return rc; 554 + goto put_out; 573 555 testllc = (struct smc_llc_msg_test_link *)wr_buf; 574 556 memset(testllc, 0, sizeof(*testllc)); 575 557 testllc->hd.common.type = SMC_LLC_TEST_LINK; ··· 579 557 memcpy(testllc->user_data, user_data, sizeof(testllc->user_data)); 580 558 /* send llc message */ 581 559 rc = smc_wr_tx_send(link, pend); 560 + put_out: 561 + smc_wr_tx_link_put(link); 582 562 return rc; 583 563 } 584 564 ··· 591 567 struct smc_wr_buf *wr_buf; 592 568 int rc; 593 569 594 - if (!smc_link_usable(link)) 570 + if (!smc_wr_tx_link_hold(link)) 595 571 return -ENOLINK; 596 572 rc = smc_llc_add_pending_send(link, &wr_buf, &pend); 597 573 if (rc) 598 - return rc; 574 + goto put_out; 599 575 memcpy(wr_buf, llcbuf, sizeof(union smc_llc_msg)); 600 - return smc_wr_tx_send(link, pend); 576 + 
rc = smc_wr_tx_send(link, pend); 577 + put_out: 578 + smc_wr_tx_link_put(link); 579 + return rc; 601 580 } 602 581 603 582 /* schedule an llc send on link, may wait for buffers, ··· 613 586 struct smc_wr_buf *wr_buf; 614 587 int rc; 615 588 616 - if (!smc_link_usable(link)) 589 + if (!smc_wr_tx_link_hold(link)) 617 590 return -ENOLINK; 618 591 rc = smc_llc_add_pending_send(link, &wr_buf, &pend); 619 592 if (rc) 620 - return rc; 593 + goto put_out; 621 594 memcpy(wr_buf, llcbuf, sizeof(union smc_llc_msg)); 622 - return smc_wr_tx_send_wait(link, pend, SMC_LLC_WAIT_TIME); 595 + rc = smc_wr_tx_send_wait(link, pend, SMC_LLC_WAIT_TIME); 596 + put_out: 597 + smc_wr_tx_link_put(link); 598 + return rc; 623 599 } 624 600 625 601 /********************************* receive ***********************************/ ··· 702 672 struct smc_buf_desc *rmb; 703 673 u8 n; 704 674 675 + if (!smc_wr_tx_link_hold(link)) 676 + return -ENOLINK; 705 677 rc = smc_llc_add_pending_send(link, &wr_buf, &pend); 706 678 if (rc) 707 - return rc; 679 + goto put_out; 708 680 addc_llc = (struct smc_llc_msg_add_link_cont *)wr_buf; 709 681 memset(addc_llc, 0, sizeof(*addc_llc)); 710 682 ··· 738 706 addc_llc->hd.length = sizeof(struct smc_llc_msg_add_link_cont); 739 707 if (lgr->role == SMC_CLNT) 740 708 addc_llc->hd.flags |= SMC_LLC_FLAG_RESP; 741 - return smc_wr_tx_send(link, pend); 709 + rc = smc_wr_tx_send(link, pend); 710 + put_out: 711 + smc_wr_tx_link_put(link); 712 + return rc; 742 713 } 743 714 744 715 static int smc_llc_cli_rkey_exchange(struct smc_link *link,
+5 -17
net/smc/smc_tx.c
··· 496 496 /* Wakeup sndbuf consumers from any context (IRQ or process) 497 497 * since there is more data to transmit; usable snd_wnd as max transmit 498 498 */ 499 - static int _smcr_tx_sndbuf_nonempty(struct smc_connection *conn) 499 + static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn) 500 500 { 501 501 struct smc_cdc_producer_flags *pflags = &conn->local_tx_ctrl.prod_flags; 502 502 struct smc_link *link = conn->lnk; ··· 505 505 struct smc_wr_buf *wr_buf; 506 506 int rc; 507 507 508 + if (!link || !smc_wr_tx_link_hold(link)) 509 + return -ENOLINK; 508 510 rc = smc_cdc_get_free_slot(conn, link, &wr_buf, &wr_rdma_buf, &pend); 509 511 if (rc < 0) { 512 + smc_wr_tx_link_put(link); 510 513 if (rc == -EBUSY) { 511 514 struct smc_sock *smc = 512 515 container_of(conn, struct smc_sock, conn); ··· 550 547 551 548 out_unlock: 552 549 spin_unlock_bh(&conn->send_lock); 553 - return rc; 554 - } 555 - 556 - static int smcr_tx_sndbuf_nonempty(struct smc_connection *conn) 557 - { 558 - struct smc_link *link = conn->lnk; 559 - int rc = -ENOLINK; 560 - 561 - if (!link) 562 - return rc; 563 - 564 - atomic_inc(&link->wr_tx_refcnt); 565 - if (smc_link_usable(link)) 566 - rc = _smcr_tx_sndbuf_nonempty(conn); 567 - if (atomic_dec_and_test(&link->wr_tx_refcnt)) 568 - wake_up_all(&link->wr_tx_wait); 550 + smc_wr_tx_link_put(link); 569 551 return rc; 570 552 } 571 553
+14
net/smc/smc_wr.h
··· 60 60 atomic_long_set(wr_tx_id, val); 61 61 } 62 62 63 + static inline bool smc_wr_tx_link_hold(struct smc_link *link) 64 + { 65 + if (!smc_link_usable(link)) 66 + return false; 67 + atomic_inc(&link->wr_tx_refcnt); 68 + return true; 69 + } 70 + 71 + static inline void smc_wr_tx_link_put(struct smc_link *link) 72 + { 73 + if (atomic_dec_and_test(&link->wr_tx_refcnt)) 74 + wake_up_all(&link->wr_tx_wait); 75 + } 76 + 63 77 static inline void smc_wr_wakeup_tx_wait(struct smc_link *lnk) 64 78 { 65 79 wake_up_all(&lnk->wr_tx_wait);
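The new `smc_wr_tx_link_hold()`/`smc_wr_tx_link_put()` helpers pin a link for the duration of a send and wake waiters when the last reference drops; every SMC send path in the surrounding diffs is converted to bail out with `-ENOLINK` when the hold fails. A simplified user-space sketch of the hold/put discipline (the real helpers additionally rely on the SMC link state machine to make the usable check safe):

```c
#include <stdatomic.h>
#include <stdbool.h>

struct link {
	atomic_int wr_tx_refcnt;
	bool usable;
	int wake_ups;	/* stand-in for wake_up_all(&link->wr_tx_wait) */
};

static bool link_hold(struct link *lnk)
{
	if (!lnk->usable)
		return false;	/* caller must bail out with -ENOLINK */
	atomic_fetch_add(&lnk->wr_tx_refcnt, 1);
	return true;
}

static void link_put(struct link *lnk)
{
	/* the last holder wakes anyone waiting for the link to quiesce */
	if (atomic_fetch_sub(&lnk->wr_tx_refcnt, 1) == 1)
		lnk->wake_ups++;
}
```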
+1 -1
net/unix/af_unix.c
··· 828 828 } 829 829 830 830 struct proto unix_dgram_proto = { 831 - .name = "UNIX-DGRAM", 831 + .name = "UNIX", 832 832 .owner = THIS_MODULE, 833 833 .obj_size = sizeof(struct unix_sock), 834 834 .close = unix_close,
+4
scripts/Makefile.gcc-plugins
··· 19 19 += -fplugin-arg-structleak_plugin-byref 20 20 gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL) \ 21 21 += -fplugin-arg-structleak_plugin-byref-all 22 + ifdef CONFIG_GCC_PLUGIN_STRUCTLEAK 23 + DISABLE_STRUCTLEAK_PLUGIN += -fplugin-arg-structleak_plugin-disable 24 + endif 25 + export DISABLE_STRUCTLEAK_PLUGIN 22 26 gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_STRUCTLEAK) \ 23 27 += -DSTRUCTLEAK_PLUGIN 24 28
+1 -1
scripts/recordmcount.pl
··· 189 189 $local_regex = "^[0-9a-fA-F]+\\s+t\\s+(\\S+)"; 190 190 $weak_regex = "^[0-9a-fA-F]+\\s+([wW])\\s+(\\S+)"; 191 191 $section_regex = "Disassembly of section\\s+(\\S+):"; 192 - $function_regex = "^([0-9a-fA-F]+)\\s+<(.*?)>:"; 192 + $function_regex = "^([0-9a-fA-F]+)\\s+<([^^]*?)>:"; 193 193 $mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s(mcount|__fentry__)\$"; 194 194 $section_type = '@progbits'; 195 195 $mcount_adjust = 0;
+71 -1
sound/core/pcm_compat.c
··· 468 468 } 469 469 #endif /* CONFIG_X86_X32 */ 470 470 471 + #ifdef __BIG_ENDIAN 472 + typedef char __pad_before_u32[4]; 473 + typedef char __pad_after_u32[0]; 474 + #else 475 + typedef char __pad_before_u32[0]; 476 + typedef char __pad_after_u32[4]; 477 + #endif 478 + 479 + /* PCM 2.0.15 API definition had a bug in mmap control; it puts the avail_min 480 + * at the wrong offset due to a typo in padding type. 481 + * The bug hits only 32bit. 482 + * A workaround for incorrect read/write is needed only in 32bit compat mode. 483 + */ 484 + struct __snd_pcm_mmap_control64_buggy { 485 + __pad_before_u32 __pad1; 486 + __u32 appl_ptr; 487 + __pad_before_u32 __pad2; /* SiC! here is the bug */ 488 + __pad_before_u32 __pad3; 489 + __u32 avail_min; 490 + __pad_after_uframe __pad4; 491 + }; 492 + 493 + static int snd_pcm_ioctl_sync_ptr_buggy(struct snd_pcm_substream *substream, 494 + struct snd_pcm_sync_ptr __user *_sync_ptr) 495 + { 496 + struct snd_pcm_runtime *runtime = substream->runtime; 497 + struct snd_pcm_sync_ptr sync_ptr; 498 + struct __snd_pcm_mmap_control64_buggy *sync_cp; 499 + volatile struct snd_pcm_mmap_status *status; 500 + volatile struct snd_pcm_mmap_control *control; 501 + int err; 502 + 503 + memset(&sync_ptr, 0, sizeof(sync_ptr)); 504 + sync_cp = (struct __snd_pcm_mmap_control64_buggy *)&sync_ptr.c.control; 505 + if (get_user(sync_ptr.flags, (unsigned __user *)&(_sync_ptr->flags))) 506 + return -EFAULT; 507 + if (copy_from_user(sync_cp, &(_sync_ptr->c.control), sizeof(*sync_cp))) 508 + return -EFAULT; 509 + status = runtime->status; 510 + control = runtime->control; 511 + if (sync_ptr.flags & SNDRV_PCM_SYNC_PTR_HWSYNC) { 512 + err = snd_pcm_hwsync(substream); 513 + if (err < 0) 514 + return err; 515 + } 516 + snd_pcm_stream_lock_irq(substream); 517 + if (!(sync_ptr.flags & SNDRV_PCM_SYNC_PTR_APPL)) { 518 + err = pcm_lib_apply_appl_ptr(substream, sync_cp->appl_ptr); 519 + if (err < 0) { 520 + snd_pcm_stream_unlock_irq(substream); 521 + return err; 522 
+ } 523 + } else { 524 + sync_cp->appl_ptr = control->appl_ptr; 525 + } 526 + if (!(sync_ptr.flags & SNDRV_PCM_SYNC_PTR_AVAIL_MIN)) 527 + control->avail_min = sync_cp->avail_min; 528 + else 529 + sync_cp->avail_min = control->avail_min; 530 + sync_ptr.s.status.state = status->state; 531 + sync_ptr.s.status.hw_ptr = status->hw_ptr; 532 + sync_ptr.s.status.tstamp = status->tstamp; 533 + sync_ptr.s.status.suspended_state = status->suspended_state; 534 + sync_ptr.s.status.audio_tstamp = status->audio_tstamp; 535 + snd_pcm_stream_unlock_irq(substream); 536 + if (copy_to_user(_sync_ptr, &sync_ptr, sizeof(sync_ptr))) 537 + return -EFAULT; 538 + return 0; 539 + } 540 + 471 541 /* 472 542 */ 473 543 enum { ··· 607 537 if (in_x32_syscall()) 608 538 return snd_pcm_ioctl_sync_ptr_x32(substream, argp); 609 539 #endif /* CONFIG_X86_X32 */ 610 - return snd_pcm_common_ioctl(file, substream, cmd, argp); 540 + return snd_pcm_ioctl_sync_ptr_buggy(substream, argp); 611 541 case SNDRV_PCM_IOCTL_HW_REFINE32: 612 542 return snd_pcm_ioctl_hw_params_compat(substream, 1, argp); 613 543 case SNDRV_PCM_IOCTL_HW_PARAMS32:
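The pcm_compat.c workaround exists because a one-character typo (`__pad_before_u32` where `__pad_after_u32` was meant) shifts `avail_min` in the 32-bit layout. Assuming a little-endian build, where the "before" pad is zero bytes and the "after" pad is four, the offset shift can be demonstrated with `offsetof`. The struct names below are ours, and the zero-length arrays are a GNU C extension just as in the kernel headers, so this sketch assumes gcc or clang:

```c
#include <stddef.h>
#include <stdint.h>

/* Little-endian variants of the padding typedefs. */
typedef char pad_before_u32[0];
typedef char pad_after_u32[4];

/* Intended layout: each u32 sits in the low half of a 64-bit slot. */
struct mmap_control64_ok {
	pad_before_u32 pad1;
	uint32_t appl_ptr;
	pad_after_u32 pad2;	/* correct: pad *after* appl_ptr */
	pad_before_u32 pad3;
	uint32_t avail_min;
	pad_after_u32 pad4;
};

/* Buggy layout from the old UAPI header: pad2 uses the "before" type,
 * collapsing to zero bytes and pulling avail_min four bytes too early. */
struct mmap_control64_buggy {
	pad_before_u32 pad1;
	uint32_t appl_ptr;
	pad_before_u32 pad2;	/* the typo */
	pad_before_u32 pad3;
	uint32_t avail_min;
	pad_after_u32 pad4;
};
```

With these layouts, `avail_min` lands at offset 8 in the intended struct but offset 4 in the buggy one, which is why the compat handler above must read and write through the buggy shape.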
+3 -5
sound/core/seq_device.c
··· 156 156 struct snd_seq_device *dev = device->device_data; 157 157 158 158 cancel_autoload_drivers(); 159 + if (dev->private_free) 160 + dev->private_free(dev); 159 161 put_device(&dev->dev); 160 162 return 0; 161 163 } ··· 185 183 186 184 static void snd_seq_dev_release(struct device *dev) 187 185 { 188 - struct snd_seq_device *sdev = to_seq_dev(dev); 189 - 190 - if (sdev->private_free) 191 - sdev->private_free(sdev); 192 - kfree(sdev); 186 + kfree(to_seq_dev(dev)); 193 187 } 194 188 195 189 /*
+3 -2
sound/hda/hdac_controller.c
··· 421 421 if (!full_reset) 422 422 goto skip_reset; 423 423 424 - /* clear STATESTS */ 425 - snd_hdac_chip_writew(bus, STATESTS, STATESTS_INT_MASK); 424 + /* clear STATESTS if not in reset */ 425 + if (snd_hdac_chip_readb(bus, GCTL) & AZX_GCTL_RESET) 426 + snd_hdac_chip_writew(bus, STATESTS, STATESTS_INT_MASK); 426 427 427 428 /* reset controller */ 428 429 snd_hdac_bus_enter_link_reset(bus);
+11 -9
sound/pci/hda/hda_bind.c
··· 298 298 { 299 299 int err; 300 300 301 + if (codec->configured) 302 + return 0; 303 + 301 304 if (is_generic_config(codec)) 302 305 codec->probe_id = HDA_CODEC_ID_GENERIC; 303 306 else 304 307 codec->probe_id = 0; 305 308 306 - err = snd_hdac_device_register(&codec->core); 307 - if (err < 0) 308 - return err; 309 + if (!device_is_registered(&codec->core.dev)) { 310 + err = snd_hdac_device_register(&codec->core); 311 + if (err < 0) 312 + return err; 313 + } 309 314 310 315 if (!codec->preset) 311 316 codec_bind_module(codec); 312 317 if (!codec->preset) { 313 318 err = codec_bind_generic(codec); 314 319 if (err < 0) { 315 - codec_err(codec, "Unable to bind the codec\n"); 316 - goto error; 320 + codec_dbg(codec, "Unable to bind the codec\n"); 321 + return err; 317 322 } 318 323 } 319 324 325 + codec->configured = 1; 320 326 return 0; 321 - 322 - error: 323 - snd_hdac_device_unregister(&codec->core); 324 - return err; 325 327 } 326 328 EXPORT_SYMBOL_GPL(snd_hda_codec_configure);
+1
sound/pci/hda/hda_codec.c
··· 791 791 snd_array_free(&codec->nids); 792 792 remove_conn_list(codec); 793 793 snd_hdac_regmap_exit(&codec->core); 794 + codec->configured = 0; 794 795 } 795 796 EXPORT_SYMBOL_GPL(snd_hda_codec_cleanup_for_unbind); 796 797
+16 -8
sound/pci/hda/hda_controller.c
··· 25 25 #include <sound/core.h> 26 26 #include <sound/initval.h> 27 27 #include "hda_controller.h" 28 + #include "hda_local.h" 28 29 29 30 #define CREATE_TRACE_POINTS 30 31 #include "hda_controller_trace.h" ··· 1249 1248 int azx_codec_configure(struct azx *chip) 1250 1249 { 1251 1250 struct hda_codec *codec, *next; 1251 + int success = 0; 1252 1252 1253 - /* use _safe version here since snd_hda_codec_configure() deregisters 1254 - * the device upon error and deletes itself from the bus list. 1255 - */ 1256 - list_for_each_codec_safe(codec, next, &chip->bus) { 1257 - snd_hda_codec_configure(codec); 1253 + list_for_each_codec(codec, &chip->bus) { 1254 + if (!snd_hda_codec_configure(codec)) 1255 + success++; 1258 1256 } 1259 1257 1260 - if (!azx_bus(chip)->num_codecs) 1261 - return -ENODEV; 1262 - return 0; 1258 + if (success) { 1259 + /* unregister failed codecs if any codec has been probed */ 1260 + list_for_each_codec_safe(codec, next, &chip->bus) { 1261 + if (!codec->configured) { 1262 + codec_err(codec, "Unable to configure, disabling\n"); 1263 + snd_hdac_device_unregister(&codec->core); 1264 + } 1265 + } 1266 + } 1267 + 1268 + return success ? 0 : -ENODEV; 1263 1269 } 1264 1270 EXPORT_SYMBOL_GPL(azx_codec_configure); 1265 1271
+1 -1
sound/pci/hda/hda_controller.h
··· 41 41 /* 24 unused */ 42 42 #define AZX_DCAPS_COUNT_LPIB_DELAY (1 << 25) /* Take LPIB as delay */ 43 43 #define AZX_DCAPS_PM_RUNTIME (1 << 26) /* runtime PM support */ 44 - /* 27 unused */ 44 + #define AZX_DCAPS_RETRY_PROBE (1 << 27) /* retry probe if no codec is configured */ 45 45 #define AZX_DCAPS_CORBRP_SELF_CLEAR (1 << 28) /* CORBRP clears itself after reset */ 46 46 #define AZX_DCAPS_NO_MSI64 (1 << 29) /* Stick to 32-bit MSIs */ 47 47 #define AZX_DCAPS_SEPARATE_STREAM_TAG (1 << 30) /* capture and playback use separate stream tag */
+23 -6
sound/pci/hda/hda_intel.c
··· 307 307 /* quirks for AMD SB */ 308 308 #define AZX_DCAPS_PRESET_AMD_SB \ 309 309 (AZX_DCAPS_NO_TCSEL | AZX_DCAPS_AMD_WORKAROUND |\ 310 - AZX_DCAPS_SNOOP_TYPE(ATI) | AZX_DCAPS_PM_RUNTIME) 310 + AZX_DCAPS_SNOOP_TYPE(ATI) | AZX_DCAPS_PM_RUNTIME |\ 311 + AZX_DCAPS_RETRY_PROBE) 311 312 312 313 /* quirks for Nvidia */ 313 314 #define AZX_DCAPS_PRESET_NVIDIA \ ··· 1724 1723 1725 1724 static void azx_probe_work(struct work_struct *work) 1726 1725 { 1727 - struct hda_intel *hda = container_of(work, struct hda_intel, probe_work); 1726 + struct hda_intel *hda = container_of(work, struct hda_intel, probe_work.work); 1728 1727 azx_probe_continue(&hda->chip); 1729 1728 } 1730 1729 ··· 1829 1828 } 1830 1829 1831 1830 /* continue probing in work context as may trigger request module */ 1832 - INIT_WORK(&hda->probe_work, azx_probe_work); 1831 + INIT_DELAYED_WORK(&hda->probe_work, azx_probe_work); 1833 1832 1834 1833 *rchip = chip; 1835 1834 ··· 2143 2142 #endif 2144 2143 2145 2144 if (schedule_probe) 2146 - schedule_work(&hda->probe_work); 2145 + schedule_delayed_work(&hda->probe_work, 0); 2147 2146 2148 2147 dev++; 2149 2148 if (chip->disabled) ··· 2229 2228 int dev = chip->dev_index; 2230 2229 int err; 2231 2230 2231 + if (chip->disabled || hda->init_failed) 2232 + return -EIO; 2233 + if (hda->probe_retry) 2234 + goto probe_retry; 2235 + 2232 2236 to_hda_bus(bus)->bus_probing = 1; 2233 2237 hda->probe_continued = 1; 2234 2238 ··· 2295 2289 #endif 2296 2290 } 2297 2291 #endif 2292 + 2293 + probe_retry: 2298 2294 if (bus->codec_mask && !(probe_only[dev] & 1)) { 2299 2295 err = azx_codec_configure(chip); 2300 - if (err < 0) 2296 + if (err) { 2297 + if ((chip->driver_caps & AZX_DCAPS_RETRY_PROBE) && 2298 + ++hda->probe_retry < 60) { 2299 + schedule_delayed_work(&hda->probe_work, 2300 + msecs_to_jiffies(1000)); 2301 + return 0; /* keep things up */ 2302 + } 2303 + dev_err(chip->card->dev, "Cannot probe codecs, giving up\n"); 2301 2304 goto out_free; 2305 + } 2302 2306 } 2303 2307 
2304 2308 err = snd_card_register(chip->card); ··· 2338 2322 display_power(chip, false); 2339 2323 complete_all(&hda->probe_wait); 2340 2324 to_hda_bus(bus)->bus_probing = 0; 2325 + hda->probe_retry = 0; 2341 2326 return 0; 2342 2327 } 2343 2328 ··· 2364 2347 * device during cancel_work_sync() call. 2365 2348 */ 2366 2349 device_unlock(&pci->dev); 2367 - cancel_work_sync(&hda->probe_work); 2350 + cancel_delayed_work_sync(&hda->probe_work); 2368 2351 device_lock(&pci->dev); 2369 2352 2370 2353 snd_card_free(card);
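The hda_intel.c change converts the probe work to a delayed work so that a probe finding no configurable codec can be retried (up to 60 attempts at one-second intervals, per the constants in the diff) before giving up. A simplified, synchronous sketch of that bounded-retry policy; the real driver reschedules `probe_work` instead of looping, and the callback here is a hypothetical test double:

```c
/* Retry a probe callback up to `max_tries` times; return 0 on the first
 * success, or the last error once the budget is exhausted. */
static int probe_with_retry(int (*try_probe)(void *ctx), void *ctx,
			    int max_tries)
{
	int rc = -1;

	for (int tries = 0; tries < max_tries; tries++) {
		rc = try_probe(ctx);
		if (rc == 0)
			return 0;
		/* the driver sleeps here via schedule_delayed_work(..., 1s) */
	}
	return rc;
}

/* Test double: fails until the attempt counter reaches three. */
static int succeed_on_third(void *ctx)
{
	int *attempts = ctx;

	return ++(*attempts) >= 3 ? 0 : -5;
}
```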
+3 -1
sound/pci/hda/hda_intel.h
··· 14 14 15 15 /* sync probing */ 16 16 struct completion probe_wait; 17 - struct work_struct probe_work; 17 + struct delayed_work probe_work; 18 18 19 19 /* card list (for power_save trigger) */ 20 20 struct list_head list; ··· 30 30 unsigned int freed:1; /* resources already released */ 31 31 32 32 bool need_i915_power:1; /* the hda controller needs i915 power */ 33 + 34 + int probe_retry; /* being probe-retry */ 33 35 }; 34 36 35 37 #endif
+62 -4
sound/pci/hda/patch_realtek.c
··· 526 526 struct alc_spec *spec = codec->spec; 527 527 528 528 switch (codec->core.vendor_id) { 529 + case 0x10ec0236: 530 + case 0x10ec0256: 529 531 case 0x10ec0283: 530 532 case 0x10ec0286: 531 533 case 0x10ec0288: ··· 2539 2537 SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2540 2538 SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2541 2539 SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2542 - SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2540 + SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170SM", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2541 + SND_PCI_QUIRK(0x1558, 0x7715, "Clevo X170KM-G", ALC1220_FIXUP_CLEVO_PB51ED), 2543 2542 SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950), 2544 2543 SND_PCI_QUIRK(0x1558, 0x9506, "Clevo P955HQ", ALC1220_FIXUP_CLEVO_P950), 2545 2544 SND_PCI_QUIRK(0x1558, 0x950a, "Clevo P955H[PR]", ALC1220_FIXUP_CLEVO_P950), ··· 3531 3528 /* If disable 3k pulldown control for alc257, the Mic detection will not work correctly 3532 3529 * when booting with headset plugged. So skip setting it for the codec alc257 3533 3530 */ 3534 - if (codec->core.vendor_id != 0x10ec0257) 3531 + if (spec->codec_variant != ALC269_TYPE_ALC257 && 3532 + spec->codec_variant != ALC269_TYPE_ALC256) 3535 3533 alc_update_coef_idx(codec, 0x46, 0, 3 << 12); 3536 3534 3537 3535 if (!spec->no_shutup_pins) ··· 6453 6449 /* for alc285_fixup_ideapad_s740_coef() */ 6454 6450 #include "ideapad_s740_helper.c" 6455 6451 6452 + static void alc256_fixup_tongfang_reset_persistent_settings(struct hda_codec *codec, 6453 + const struct hda_fixup *fix, 6454 + int action) 6455 + { 6456 + /* 6457 + * A certain other OS sets these coeffs to different values. On at least one TongFang 6458 + * barebone these settings might survive even a cold reboot. 
So to restore a clean slate the 6459 + * values are explicitly reset to default here. Without this, the external microphone is 6460 + * always in a plugged-in state, while the internal microphone is always in an unplugged 6461 + * state, breaking the ability to use the internal microphone. 6462 + */ 6463 + alc_write_coef_idx(codec, 0x24, 0x0000); 6464 + alc_write_coef_idx(codec, 0x26, 0x0000); 6465 + alc_write_coef_idx(codec, 0x29, 0x3000); 6466 + alc_write_coef_idx(codec, 0x37, 0xfe05); 6467 + alc_write_coef_idx(codec, 0x45, 0x5089); 6468 + } 6469 + 6456 6470 enum { 6457 6471 ALC269_FIXUP_GPIO2, 6458 6472 ALC269_FIXUP_SONY_VAIO, ··· 6685 6663 ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS, 6686 6664 ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE, 6687 6665 ALC287_FIXUP_YOGA7_14ITL_SPEAKERS, 6688 - ALC287_FIXUP_13S_GEN2_SPEAKERS 6666 + ALC287_FIXUP_13S_GEN2_SPEAKERS, 6667 + ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS, 6689 6668 }; 6690 6669 6691 6670 static const struct hda_fixup alc269_fixups[] = { ··· 8367 8344 .v.verbs = (const struct hda_verb[]) { 8368 8345 { 0x20, AC_VERB_SET_COEF_INDEX, 0x24 }, 8369 8346 { 0x20, AC_VERB_SET_PROC_COEF, 0x41 }, 8370 - { 0x20, AC_VERB_SET_PROC_COEF, 0xb020 }, 8347 + { 0x20, AC_VERB_SET_COEF_INDEX, 0x26 }, 8371 8348 { 0x20, AC_VERB_SET_PROC_COEF, 0x2 }, 8372 8349 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 }, 8373 8350 { 0x20, AC_VERB_SET_PROC_COEF, 0x0 }, ··· 8383 8360 }, 8384 8361 .chained = true, 8385 8362 .chain_id = ALC269_FIXUP_HEADSET_MODE, 8363 + }, 8364 + [ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS] = { 8365 + .type = HDA_FIXUP_FUNC, 8366 + .v.func = alc256_fixup_tongfang_reset_persistent_settings, 8386 8367 }, 8387 8368 }; 8388 8369 ··· 8479 8452 SND_PCI_QUIRK(0x1028, 0x0a30, "Dell", ALC236_FIXUP_DELL_AIO_HEADSET_MIC), 8480 8453 SND_PCI_QUIRK(0x1028, 0x0a58, "Dell", ALC255_FIXUP_DELL_HEADSET_MIC), 8481 8454 SND_PCI_QUIRK(0x1028, 0x0a61, "Dell XPS 15 9510", ALC289_FIXUP_DUAL_SPK), 8455 + SND_PCI_QUIRK(0x1028, 0x0a62, "Dell Precision 
5560", ALC289_FIXUP_DUAL_SPK), 8456 + SND_PCI_QUIRK(0x1028, 0x0a9d, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE), 8457 + SND_PCI_QUIRK(0x1028, 0x0a9e, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE), 8482 8458 SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), 8483 8459 SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), 8484 8460 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2), ··· 8819 8789 SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */ 8820 8790 SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802), 8821 8791 SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X), 8792 + SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_TONGFANG_RESET_PERSISTENT_SETTINGS), 8822 8793 SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC), 8823 8794 SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE), 8824 8795 SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC), ··· 10197 10166 ALC671_FIXUP_HP_HEADSET_MIC2, 10198 10167 ALC662_FIXUP_ACER_X2660G_HEADSET_MODE, 10199 10168 ALC662_FIXUP_ACER_NITRO_HEADSET_MODE, 10169 + ALC668_FIXUP_ASUS_NO_HEADSET_MIC, 10170 + ALC668_FIXUP_HEADSET_MIC, 10171 + ALC668_FIXUP_MIC_DET_COEF, 10200 10172 }; 10201 10173 10202 10174 static const struct hda_fixup alc662_fixups[] = { ··· 10583 10549 .chained = true, 10584 10550 .chain_id = ALC662_FIXUP_USI_FUNC 10585 10551 }, 10552 + [ALC668_FIXUP_ASUS_NO_HEADSET_MIC] = { 10553 + .type = HDA_FIXUP_PINS, 10554 + .v.pins = (const struct hda_pintbl[]) { 10555 + { 0x1b, 0x04a1112c }, 10556 + { } 10557 + }, 10558 + .chained = true, 10559 + .chain_id = ALC668_FIXUP_HEADSET_MIC 10560 + }, 10561 + [ALC668_FIXUP_HEADSET_MIC] = { 10562 + .type = HDA_FIXUP_FUNC, 10563 + .v.func = alc269_fixup_headset_mic, 10564 + .chained = true, 10565 
+ .chain_id = ALC668_FIXUP_MIC_DET_COEF 10566 + }, 10567 + [ALC668_FIXUP_MIC_DET_COEF] = { 10568 + .type = HDA_FIXUP_VERBS, 10569 + .v.verbs = (const struct hda_verb[]) { 10570 + { 0x20, AC_VERB_SET_COEF_INDEX, 0x15 }, 10571 + { 0x20, AC_VERB_SET_PROC_COEF, 0x0d60 }, 10572 + {} 10573 + }, 10574 + }, 10586 10575 }; 10587 10576 10588 10577 static const struct snd_pci_quirk alc662_fixup_tbl[] = { ··· 10641 10584 SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16), 10642 10585 SND_PCI_QUIRK(0x1043, 0x177d, "ASUS N551", ALC668_FIXUP_ASUS_Nx51), 10643 10586 SND_PCI_QUIRK(0x1043, 0x17bd, "ASUS N751", ALC668_FIXUP_ASUS_Nx51), 10587 + SND_PCI_QUIRK(0x1043, 0x185d, "ASUS G551JW", ALC668_FIXUP_ASUS_NO_HEADSET_MIC), 10644 10588 SND_PCI_QUIRK(0x1043, 0x1963, "ASUS X71SL", ALC662_FIXUP_ASUS_MODE8), 10645 10589 SND_PCI_QUIRK(0x1043, 0x1b73, "ASUS N55SF", ALC662_FIXUP_BASS_16), 10646 10590 SND_PCI_QUIRK(0x1043, 0x1bf3, "ASUS N76VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
+2
sound/usb/mixer_scarlett_gen2.c
··· 2450 2450 err = scarlett2_usb_get_config(mixer, 2451 2451 SCARLETT2_CONFIG_TALKBACK_MAP, 2452 2452 1, &bitmap); 2453 + if (err < 0) 2454 + return err; 2453 2455 for (i = 0; i < num_mixes; i++, bitmap >>= 1) 2454 2456 private->talkback_map[i] = bitmap & 1; 2455 2457 }
+42
sound/usb/quirks-table.h
··· 78 78 { USB_DEVICE_VENDOR_SPEC(0x041e, 0x3f19) }, 79 79 80 80 /* 81 + * Creative Technology, Ltd Live! Cam Sync HD [VF0770] 82 + * The device advertises 8 formats, but only a rate of 48kHz is honored by the 83 + * hardware and 24 bits give chopped audio, so only report the one working 84 + * combination. 85 + */ 86 + { 87 + USB_DEVICE(0x041e, 0x4095), 88 + .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { 89 + .ifnum = QUIRK_ANY_INTERFACE, 90 + .type = QUIRK_COMPOSITE, 91 + .data = &(const struct snd_usb_audio_quirk[]) { 92 + { 93 + .ifnum = 2, 94 + .type = QUIRK_AUDIO_STANDARD_MIXER, 95 + }, 96 + { 97 + .ifnum = 3, 98 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 99 + .data = &(const struct audioformat) { 100 + .formats = SNDRV_PCM_FMTBIT_S16_LE, 101 + .channels = 2, 102 + .fmt_bits = 16, 103 + .iface = 3, 104 + .altsetting = 4, 105 + .altset_idx = 4, 106 + .endpoint = 0x82, 107 + .ep_attr = 0x05, 108 + .rates = SNDRV_PCM_RATE_48000, 109 + .rate_min = 48000, 110 + .rate_max = 48000, 111 + .nr_rates = 1, 112 + .rate_table = (unsigned int[]) { 48000 }, 113 + }, 114 + }, 115 + { 116 + .ifnum = -1 117 + }, 118 + }, 119 + }, 120 + }, 121 + 122 + /* 81 123 * HP Wireless Audio 82 124 * When not ignored, causes instability issues for some users, forcing them to 83 125 * skip the entire module.
+2
sound/usb/quirks.c
··· 1900 1900 QUIRK_FLAG_CTL_MSG_DELAY | QUIRK_FLAG_IFACE_DELAY), 1901 1901 VENDOR_FLG(0x07fd, /* MOTU */ 1902 1902 QUIRK_FLAG_VALIDATE_RATES), 1903 + VENDOR_FLG(0x1235, /* Focusrite Novation */ 1904 + QUIRK_FLAG_VALIDATE_RATES), 1903 1905 VENDOR_FLG(0x152a, /* Thesycon devices */ 1904 1906 QUIRK_FLAG_DSD_RAW), 1905 1907 VENDOR_FLG(0x1de7, /* Phoenix Audio */
+3 -3
tools/lib/perf/tests/test-evlist.c
··· 40 40 .type = PERF_TYPE_SOFTWARE, 41 41 .config = PERF_COUNT_SW_TASK_CLOCK, 42 42 }; 43 - int err, cpu, tmp; 43 + int err, idx; 44 44 45 45 cpus = perf_cpu_map__new(NULL); 46 46 __T("failed to create cpus", cpus); ··· 70 70 perf_evlist__for_each_evsel(evlist, evsel) { 71 71 cpus = perf_evsel__cpus(evsel); 72 72 73 - perf_cpu_map__for_each_cpu(cpu, tmp, cpus) { 73 + for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++) { 74 74 struct perf_counts_values counts = { .val = 0 }; 75 75 76 - perf_evsel__read(evsel, cpu, 0, &counts); 76 + perf_evsel__read(evsel, idx, 0, &counts); 77 77 __T("failed to read value for evsel", counts.val != 0); 78 78 } 79 79 }
+4 -3
tools/lib/perf/tests/test-evsel.c
··· 22 22 .type = PERF_TYPE_SOFTWARE, 23 23 .config = PERF_COUNT_SW_CPU_CLOCK, 24 24 }; 25 - int err, cpu, tmp; 25 + int err, idx; 26 26 27 27 cpus = perf_cpu_map__new(NULL); 28 28 __T("failed to create cpus", cpus); ··· 33 33 err = perf_evsel__open(evsel, cpus, NULL); 34 34 __T("failed to open evsel", err == 0); 35 35 36 - perf_cpu_map__for_each_cpu(cpu, tmp, cpus) { 36 + for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++) { 37 37 struct perf_counts_values counts = { .val = 0 }; 38 38 39 - perf_evsel__read(evsel, cpu, 0, &counts); 39 + perf_evsel__read(evsel, idx, 0, &counts); 40 40 __T("failed to read value for evsel", counts.val != 0); 41 41 } 42 42 ··· 148 148 __T("failed to mmap evsel", err == 0); 149 149 150 150 pc = perf_evsel__mmap_base(evsel, 0, 0); 151 + __T("failed to get mmapped address", pc); 151 152 152 153 #if defined(__i386__) || defined(__x86_64__) 153 154 __T("userspace counter access not supported", pc->cap_user_rdpmc);
+25 -31
tools/objtool/elf.c
··· 508 508 list_add_tail(&reloc->list, &sec->reloc->reloc_list); 509 509 elf_hash_add(reloc, &reloc->hash, reloc_hash(reloc)); 510 510 511 + sec->reloc->sh.sh_size += sec->reloc->sh.sh_entsize; 511 512 sec->reloc->changed = true; 512 513 513 514 return 0;
··· 978 977 } 979 978 } 980 979 981 - static int elf_rebuild_rel_reloc_section(struct section *sec, int nr) 980 + static int elf_rebuild_rel_reloc_section(struct section *sec) 982 981 { 983 982 struct reloc *reloc; 984 - int idx = 0, size; 983 + int idx = 0; 985 984 void *buf; 986 985 987 986 /* Allocate a buffer for relocations */ 988 - size = nr * sizeof(GElf_Rel); 989 - buf = malloc(size); 987 + buf = malloc(sec->sh.sh_size); 990 988 if (!buf) { 991 989 perror("malloc"); 992 990 return -1; 993 991 } 994 992 995 993 sec->data->d_buf = buf; 996 - sec->data->d_size = size; 994 + sec->data->d_size = sec->sh.sh_size; 997 995 sec->data->d_type = ELF_T_REL; 998 - 999 - sec->sh.sh_size = size; 1000 996 1001 997 idx = 0; 1002 998 list_for_each_entry(reloc, &sec->reloc_list, list) { 1003 999 reloc->rel.r_offset = reloc->offset; 1004 1000 reloc->rel.r_info = GELF_R_INFO(reloc->sym->idx, reloc->type); 1005 - gelf_update_rel(sec->data, idx, &reloc->rel); 1001 + if (!gelf_update_rel(sec->data, idx, &reloc->rel)) { 1002 + WARN_ELF("gelf_update_rel"); 1003 + return -1; 1004 + } 1006 1005 idx++; 1007 1006 } 1008 1007 1009 1008 return 0; 1010 1009 } 1011 1010 1012 - static int elf_rebuild_rela_reloc_section(struct section *sec, int nr) 1011 + static int elf_rebuild_rela_reloc_section(struct section *sec) 1013 1012 { 1014 1013 struct reloc *reloc; 1015 - int idx = 0, size; 1014 + int idx = 0; 1016 1015 void *buf; 1017 1016 1018 1017 /* Allocate a buffer for relocations with addends */ 1019 - size = nr * sizeof(GElf_Rela); 1020 - buf = malloc(size); 1018 + buf = malloc(sec->sh.sh_size); 1021 1019 if (!buf) { 1022 1020 perror("malloc"); 1023 1021 return -1; 1024 1022 } 1025 1023 1026 1024 sec->data->d_buf = buf; 1027 - sec->data->d_size = size; 1025 + sec->data->d_size = sec->sh.sh_size; 1028 1026 sec->data->d_type = ELF_T_RELA; 1029 - 1030 - sec->sh.sh_size = size; 1031 1027 1032 1028 idx = 0; 1033 1029 list_for_each_entry(reloc, &sec->reloc_list, list) { 1034 1030 reloc->rela.r_offset = reloc->offset; 1035 1031 reloc->rela.r_addend = reloc->addend; 1036 1032 reloc->rela.r_info = GELF_R_INFO(reloc->sym->idx, reloc->type); 1037 - gelf_update_rela(sec->data, idx, &reloc->rela); 1033 + if (!gelf_update_rela(sec->data, idx, &reloc->rela)) { 1034 + WARN_ELF("gelf_update_rela"); 1035 + return -1; 1036 + } 1038 1037 idx++; 1039 1038 }
··· 1043 1042 1044 1043 static int elf_rebuild_reloc_section(struct elf *elf, struct section *sec) 1045 1044 { 1046 - struct reloc *reloc; 1047 - int nr; 1048 - 1049 - nr = 0; 1050 - list_for_each_entry(reloc, &sec->reloc_list, list) 1051 - nr++; 1052 - 1053 1045 switch (sec->sh.sh_type) { 1054 - case SHT_REL: return elf_rebuild_rel_reloc_section(sec, nr); 1055 - case SHT_RELA: return elf_rebuild_rela_reloc_section(sec, nr); 1046 + case SHT_REL: return elf_rebuild_rel_reloc_section(sec); 1047 + case SHT_RELA: return elf_rebuild_rela_reloc_section(sec); 1056 1048 default: return -1; 1057 1049 } 1058 1050 }
··· 1105 1111 /* Update changed relocation sections and section headers: */ 1106 1112 list_for_each_entry(sec, &elf->sections, list) { 1107 1113 if (sec->changed) { 1108 - if (sec->base && 1109 - elf_rebuild_reloc_section(elf, sec)) { 1110 - WARN("elf_rebuild_reloc_section"); 1111 - return -1; 1112 - } 1113 - 1114 1114 s = elf_getscn(elf->elf, sec->idx); 1115 1115 if (!s) { 1116 1116 WARN_ELF("elf_getscn");
··· 1112 1124 } 1113 1125 if (!gelf_update_shdr(s, &sec->sh)) { 1114 1126 WARN_ELF("gelf_update_shdr"); 1127 + return -1; 1128 + } 1129 + 1130 + if (sec->base && 1131 + elf_rebuild_reloc_section(elf, sec)) { 1132 + WARN("elf_rebuild_reloc_section"); 1115 1133 return -1; 1116 1134 } 1117 1135
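The objtool hunks above change the bookkeeping model: instead of counting relocs and recomputing the section size at rebuild time, `sh_size` is bumped by `sh_entsize` each time a reloc is added, so the rebuild path can size its buffer from the section header directly. A toy Python model of that accounting (an illustrative sketch only; `RelocSection` and its methods are invented names, not the objtool code):

```python
# Toy model of the reloc-section size bookkeeping in the objtool change above.
class RelocSection:
    def __init__(self, sh_entsize):
        self.sh_entsize = sh_entsize  # bytes per relocation entry
        self.sh_size = 0              # kept current as relocs are added
        self.relocs = []

    def add_reloc(self, reloc):
        # mirrors: sec->reloc->sh.sh_size += sec->reloc->sh.sh_entsize;
        self.relocs.append(reloc)
        self.sh_size += self.sh_entsize

    def rebuild_buffer_size(self):
        # was: nr * sizeof(GElf_Rela) after walking the whole reloc list;
        # now the section header size is already authoritative.
        return self.sh_size
```

Keeping `sh_size` current also means `gelf_update_shdr()` can run before the reloc data is rebuilt, which is the ordering the hunk above switches to.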
+2 -2
tools/perf/util/session.c
··· 2116 2116 static int __perf_session__process_decomp_events(struct perf_session *session) 2117 2117 { 2118 2118 s64 skip; 2119 - u64 size, file_pos = 0; 2119 + u64 size; 2120 2120 struct decomp *decomp = session->decomp_last; 2121 2121 2122 2122 if (!decomp) ··· 2132 2132 size = event->header.size; 2133 2133 2134 2134 if (size < sizeof(struct perf_event_header) || 2135 - (skip = perf_session__process_event(session, event, file_pos)) < 0) { 2135 + (skip = perf_session__process_event(session, event, decomp->file_pos)) < 0) { 2136 2136 pr_err("%#" PRIx64 " [%#x]: failed to process type: %d\n", 2137 2137 decomp->file_pos + decomp->head, event->header.size, event->header.type); 2138 2138 return -EINVAL;
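The session.c fix above drops a local `file_pos` that stayed 0 and uses the per-buffer `decomp->file_pos` instead. A minimal Python sketch of why that matters, under the simplifying assumption that each decompressed buffer records the file offset of its source record (`Decomp` and `process_decomp_events` are invented names for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Decomp:
    file_pos: int                  # offset of the compressed record in the file
    events: list = field(default_factory=list)

def process_decomp_events(decomps):
    """Yield (file_pos, event) pairs as the processor should see them."""
    for d in decomps:
        for event in d.events:
            # the buggy code passed a local file_pos that was always 0 here
            yield d.file_pos, event
```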
+22 -2
tools/testing/kunit/kunit.py
··· 16 16 17 17 from collections import namedtuple 18 18 from enum import Enum, auto 19 - from typing import Iterable 19 + from typing import Iterable, Sequence 20 20 21 21 import kunit_config 22 22 import kunit_json ··· 186 186 exec_result.elapsed_time)) 187 187 return parse_result 188 188 189 + # Problem: 190 + # $ kunit.py run --json 191 + # works as one would expect and prints the parsed test results as JSON. 192 + # $ kunit.py run --json suite_name 193 + # would *not* pass suite_name as the filter_glob and print as json. 194 + # argparse will consider it to be another way of writing 195 + # $ kunit.py run --json=suite_name 196 + # i.e. it would run all tests, and dump the json to a `suite_name` file. 197 + # So we hackily automatically rewrite --json => --json=stdout 198 + pseudo_bool_flag_defaults = { 199 + '--json': 'stdout', 200 + '--raw_output': 'kunit', 201 + } 202 + def massage_argv(argv: Sequence[str]) -> Sequence[str]: 203 + def massage_arg(arg: str) -> str: 204 + if arg not in pseudo_bool_flag_defaults: 205 + return arg 206 + return f'{arg}={pseudo_bool_flag_defaults[arg]}' 207 + return list(map(massage_arg, argv)) 208 + 189 209 def add_common_opts(parser) -> None: 190 210 parser.add_argument('--build_dir', 191 211 help='As in the make command, it specifies the build ' ··· 323 303 help='Specifies the file to read results from.', 324 304 type=str, nargs='?', metavar='input_file') 325 305 326 - cli_args = parser.parse_args(argv) 306 + cli_args = parser.parse_args(massage_argv(argv)) 327 307 328 308 if get_kernel_root_path(): 329 309 os.chdir(get_kernel_root_path())
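The `massage_argv` hook added above works around argparse treating `--json suite_name` as `--json=suite_name`. A standalone sketch of the same rewrite (the parser here is a minimal stand-in for kunit.py's real one, which has many more options):

```python
import argparse
from typing import Sequence

pseudo_bool_flag_defaults = {
    '--json': 'stdout',
    '--raw_output': 'kunit',
}

def massage_argv(argv: Sequence[str]) -> Sequence[str]:
    # Rewrite bare --json / --raw_output to --json=stdout / --raw_output=kunit
    # so argparse does not consume the following positional argument.
    return [f'{a}={pseudo_bool_flag_defaults[a]}'
            if a in pseudo_bool_flag_defaults else a
            for a in argv]

# Minimal stand-in parser: one optional-value flag and one positional.
parser = argparse.ArgumentParser()
parser.add_argument('--json', nargs='?', const='stdout')
parser.add_argument('filter_glob', nargs='?')
```

With the rewrite, `parser.parse_args(massage_argv(['--json', 'mysuite']))` yields `json='stdout'` and `filter_glob='mysuite'`, instead of swallowing `mysuite` as the flag's value.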
+8
tools/testing/kunit/kunit_tool_test.py
··· 408 408 self.assertNotEqual(call, mock.call(StrContains('Testing complete.'))) 409 409 self.assertNotEqual(call, mock.call(StrContains(' 0 tests run'))) 410 410 411 + def test_run_raw_output_does_not_take_positional_args(self): 412 + # --raw_output is a string flag, but we don't want it to consume 413 + # any positional arguments, only ones after an '=' 414 + self.linux_source_mock.run_kernel = mock.Mock(return_value=[]) 415 + kunit.main(['run', '--raw_output', 'filter_glob'], self.linux_source_mock) 416 + self.linux_source_mock.run_kernel.assert_called_once_with( 417 + args=None, build_dir='.kunit', filter_glob='filter_glob', timeout=300) 418 + 411 419 def test_exec_timeout(self): 412 420 timeout = 3453 413 421 kunit.main(['exec', '--timeout', str(timeout)], self.linux_source_mock)
+52 -2
tools/testing/selftests/ftrace/test.d/dynevent/add_remove_eprobe.tc
··· 11 11 EVENT="sys_enter_openat" 12 12 FIELD="filename" 13 13 EPROBE="eprobe_open" 14 - 15 - echo "e:$EPROBE $SYSTEM/$EVENT file=+0(\$filename):ustring" >> dynamic_events 14 + OPTIONS="file=+0(\$filename):ustring" 15 + echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 16 16 17 17 grep -q "$EPROBE" dynamic_events 18 18 test -d events/eprobes/$EPROBE
··· 34 34 35 35 echo "-:$EPROBE" >> dynamic_events 36 36 37 + ! grep -q "$EPROBE" dynamic_events 38 + ! test -d events/eprobes/$EPROBE 39 + 40 + # test various ways to remove the probe (already tested with just event name) 41 + 42 + # With group name 43 + echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 44 + grep -q "$EPROBE" dynamic_events 45 + test -d events/eprobes/$EPROBE 46 + echo "-:eprobes/$EPROBE" >> dynamic_events 47 + ! grep -q "$EPROBE" dynamic_events 48 + ! test -d events/eprobes/$EPROBE 49 + 50 + # With group name and system/event 51 + echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 52 + grep -q "$EPROBE" dynamic_events 53 + test -d events/eprobes/$EPROBE 54 + echo "-:eprobes/$EPROBE $SYSTEM/$EVENT" >> dynamic_events 55 + ! grep -q "$EPROBE" dynamic_events 56 + ! test -d events/eprobes/$EPROBE 57 + 58 + # With just event name and system/event 59 + echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 60 + grep -q "$EPROBE" dynamic_events 61 + test -d events/eprobes/$EPROBE 62 + echo "-:$EPROBE $SYSTEM/$EVENT" >> dynamic_events 63 + ! grep -q "$EPROBE" dynamic_events 64 + ! test -d events/eprobes/$EPROBE 65 + 66 + # With just event name and system/event and options 67 + echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 68 + grep -q "$EPROBE" dynamic_events 69 + test -d events/eprobes/$EPROBE 70 + echo "-:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 71 + ! grep -q "$EPROBE" dynamic_events 72 + ! test -d events/eprobes/$EPROBE 73 + 74 + # With group name and system/event and options 75 + echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 76 + grep -q "$EPROBE" dynamic_events 77 + test -d events/eprobes/$EPROBE 78 + echo "-:eprobes/$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 79 + ! grep -q "$EPROBE" dynamic_events 80 + ! test -d events/eprobes/$EPROBE 81 + 82 + # Finally make sure what is in the dynamic_events file clears it too 83 + echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 84 + LINE=`sed -e '/$EPROBE/s/^e/-/' < dynamic_events` 85 + test -d events/eprobes/$EPROBE 86 + echo "$LINE" >> dynamic_events 37 87 ! grep -q "$EPROBE" dynamic_events 38 88 ! test -d events/eprobes/$EPROBE 39 89
+20 -4
tools/testing/selftests/net/ioam6.sh
··· 468 468 for i in {0..22} 469 469 do 470 470 ip -netns ioam-node-alpha route change db01::/64 encap ioam6 trace \ 471 - prealloc type ${bit2type[$i]} ns 123 size ${bit2size[$i]} dev veth0 471 + prealloc type ${bit2type[$i]} ns 123 size ${bit2size[$i]} \ 472 + dev veth0 &>/dev/null 472 473 473 - run_test "out_bit$i" "${desc/<n>/$i}" ioam-node-alpha ioam-node-beta \ 474 - db01::2 db01::1 veth0 ${bit2type[$i]} 123 474 + local cmd_res=$? 475 + local descr="${desc/<n>/$i}" 476 + 477 + if [[ $i -ge 12 && $i -le 21 ]] 478 + then 479 + if [ $cmd_res != 0 ] 480 + then 481 + npassed=$((npassed+1)) 482 + log_test_passed "$descr" 483 + else 484 + nfailed=$((nfailed+1)) 485 + log_test_failed "$descr" 486 + fi 487 + else 488 + run_test "out_bit$i" "$descr" ioam-node-alpha ioam-node-beta \ 489 + db01::2 db01::1 veth0 ${bit2type[$i]} 123 490 + fi 475 491 done 476 492 477 493 bit2size[22]=$tmp ··· 560 544 local tmp=${bit2size[22]} 561 545 bit2size[22]=$(( $tmp + ${#BETA[9]} + ((4 - (${#BETA[9]} % 4)) % 4) )) 562 546 563 - for i in {0..22} 547 + for i in {0..11} {22..22} 564 548 do 565 549 ip -netns ioam-node-alpha route change db01::/64 encap ioam6 trace \ 566 550 prealloc type ${bit2type[$i]} ns 123 size ${bit2size[$i]} dev veth0
+60 -104
tools/testing/selftests/net/ioam6_parser.c
··· 94 94 TEST_OUT_BIT9, 95 95 TEST_OUT_BIT10, 96 96 TEST_OUT_BIT11, 97 - TEST_OUT_BIT12, 98 - TEST_OUT_BIT13, 99 - TEST_OUT_BIT14, 100 - TEST_OUT_BIT15, 101 - TEST_OUT_BIT16, 102 - TEST_OUT_BIT17, 103 - TEST_OUT_BIT18, 104 - TEST_OUT_BIT19, 105 - TEST_OUT_BIT20, 106 - TEST_OUT_BIT21, 107 97 TEST_OUT_BIT22, 108 98 TEST_OUT_FULL_SUPP_TRACE, 109 99
··· 115 125 TEST_IN_BIT9, 116 126 TEST_IN_BIT10, 117 127 TEST_IN_BIT11, 118 - TEST_IN_BIT12, 119 - TEST_IN_BIT13, 120 - TEST_IN_BIT14, 121 - TEST_IN_BIT15, 122 - TEST_IN_BIT16, 123 - TEST_IN_BIT17, 124 - TEST_IN_BIT18, 125 - TEST_IN_BIT19, 126 - TEST_IN_BIT20, 127 - TEST_IN_BIT21, 128 128 TEST_IN_BIT22, 129 129 TEST_IN_FULL_SUPP_TRACE, 130 130
··· 178 198 return ioam6h->overflow || 179 199 ioam6h->nodelen != 2 || 180 200 ioam6h->remlen; 181 - 182 - case TEST_OUT_BIT12: 183 - case TEST_IN_BIT12: 184 - case TEST_OUT_BIT13: 185 - case TEST_IN_BIT13: 186 - case TEST_OUT_BIT14: 187 - case TEST_IN_BIT14: 188 - case TEST_OUT_BIT15: 189 - case TEST_IN_BIT15: 190 - case TEST_OUT_BIT16: 191 - case TEST_IN_BIT16: 192 - case TEST_OUT_BIT17: 193 - case TEST_IN_BIT17: 194 - case TEST_OUT_BIT18: 195 - case TEST_IN_BIT18: 196 - case TEST_OUT_BIT19: 197 - case TEST_IN_BIT19: 198 - case TEST_OUT_BIT20: 199 - case TEST_IN_BIT20: 200 - case TEST_OUT_BIT21: 201 - case TEST_IN_BIT21: 202 - return ioam6h->overflow || 203 - ioam6h->nodelen || 204 - ioam6h->remlen != 1; 205 201 206 202 case TEST_OUT_BIT22: 207 203 case TEST_IN_BIT22:
··· 277 321 } 278 322 279 323 if (ioam6h->type.bit11) { 324 + if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff) 325 + return 1; 326 + *p += sizeof(__u32); 327 + } 328 + 329 + if (ioam6h->type.bit12) { 330 + if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff) 331 + return 1; 332 + *p += sizeof(__u32); 333 + } 334 + 335 + if (ioam6h->type.bit13) { 336 + if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff) 337 + return 1; 338 + *p += sizeof(__u32); 339 + } 340 + 341 + if (ioam6h->type.bit14) { 342 + if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff) 343 + return 1; 344 + *p += sizeof(__u32); 345 + } 346 + 347 + if (ioam6h->type.bit15) { 348 + if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff) 349 + return 1; 350 + *p += sizeof(__u32); 351 + } 352 + 353 + if (ioam6h->type.bit16) { 354 + if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff) 355 + return 1; 356 + *p += sizeof(__u32); 357 + } 358 + 359 + if (ioam6h->type.bit17) { 360 + if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff) 361 + return 1; 362 + *p += sizeof(__u32); 363 + } 364 + 365 + if (ioam6h->type.bit18) { 366 + if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff) 367 + return 1; 368 + *p += sizeof(__u32); 369 + } 370 + 371 + if (ioam6h->type.bit19) { 372 + if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff) 373 + return 1; 374 + *p += sizeof(__u32); 375 + } 376 + 377 + if (ioam6h->type.bit20) { 378 + if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff) 379 + return 1; 380 + *p += sizeof(__u32); 381 + } 382 + 383 + if (ioam6h->type.bit21) { 280 384 if (__be32_to_cpu(*((__u32 *)*p)) != 0xffffffff) 281 385 return 1; 282 386 *p += sizeof(__u32);
··· 471 455 return TEST_OUT_BIT10; 472 456 if (!strcmp("out_bit11", tname)) 473 457 return TEST_OUT_BIT11; 474 - if (!strcmp("out_bit12", tname)) 475 - return TEST_OUT_BIT12; 476 - if (!strcmp("out_bit13", tname)) 477 - return TEST_OUT_BIT13; 478 - if (!strcmp("out_bit14", tname)) 479 - return TEST_OUT_BIT14; 480 - if (!strcmp("out_bit15", tname)) 481 - return TEST_OUT_BIT15; 482 - if (!strcmp("out_bit16", tname)) 483 - return TEST_OUT_BIT16; 484 - if (!strcmp("out_bit17", tname)) 485 - return TEST_OUT_BIT17; 486 - if (!strcmp("out_bit18", tname)) 487 - return TEST_OUT_BIT18; 488 - if (!strcmp("out_bit19", tname)) 489 - return TEST_OUT_BIT19; 490 - if (!strcmp("out_bit20", tname)) 491 - return TEST_OUT_BIT20; 492 - if (!strcmp("out_bit21", tname)) 493 - return TEST_OUT_BIT21; 494 458 if (!strcmp("out_bit22", tname)) 495 459 return TEST_OUT_BIT22; 496 460 if (!strcmp("out_full_supp_trace", tname))
··· 505 509 return TEST_IN_BIT10; 506 510 if (!strcmp("in_bit11", tname)) 507 511 return TEST_IN_BIT11; 508 - if (!strcmp("in_bit12", tname)) 509 - return TEST_IN_BIT12; 510 - if (!strcmp("in_bit13", tname)) 511 - return TEST_IN_BIT13; 512 - if (!strcmp("in_bit14", tname)) 513 - return TEST_IN_BIT14; 514 - if (!strcmp("in_bit15", tname)) 515 - return TEST_IN_BIT15; 516 - if (!strcmp("in_bit16", tname)) 517 - return TEST_IN_BIT16; 518 - if (!strcmp("in_bit17", tname)) 519 - return TEST_IN_BIT17; 520 - if (!strcmp("in_bit18", tname)) 521 - return TEST_IN_BIT18; 522 - if (!strcmp("in_bit19", tname)) 523 - return TEST_IN_BIT19; 524 - if (!strcmp("in_bit20", tname)) 525 - return TEST_IN_BIT20; 526 - if (!strcmp("in_bit21", tname)) 527 - return TEST_IN_BIT21; 528 512 if (!strcmp("in_bit22", tname)) 529 513 return TEST_IN_BIT22; 530 514 if (!strcmp("in_full_supp_trace", tname))
··· 582 606 [TEST_OUT_BIT9] = check_ioam_header_and_data, 583 607 [TEST_OUT_BIT10] = check_ioam_header_and_data, 584 608 [TEST_OUT_BIT11] = check_ioam_header_and_data, 585 - [TEST_OUT_BIT12] = check_ioam_header, 586 - [TEST_OUT_BIT13] = check_ioam_header, 587 - [TEST_OUT_BIT14] = check_ioam_header, 588 - [TEST_OUT_BIT15] = check_ioam_header, 589 - [TEST_OUT_BIT16] = check_ioam_header, 590 - [TEST_OUT_BIT17] = check_ioam_header, 591 - [TEST_OUT_BIT18] = check_ioam_header, 592 - [TEST_OUT_BIT19] = check_ioam_header, 593 - [TEST_OUT_BIT20] = check_ioam_header, 594 - [TEST_OUT_BIT21] = check_ioam_header, 595 609 [TEST_OUT_BIT22] = check_ioam_header_and_data, 596 610 [TEST_OUT_FULL_SUPP_TRACE] = check_ioam_header_and_data, 597 611 [TEST_IN_UNDEF_NS] = check_ioam_header,
··· 599 633 [TEST_IN_BIT9] = check_ioam_header_and_data, 600 634 [TEST_IN_BIT10] = check_ioam_header_and_data, 601 635 [TEST_IN_BIT11] = check_ioam_header_and_data, 602 - [TEST_IN_BIT12] = check_ioam_header, 603 - [TEST_IN_BIT13] = check_ioam_header, 604 - [TEST_IN_BIT14] = check_ioam_header, 605 - [TEST_IN_BIT15] = check_ioam_header, 606 - [TEST_IN_BIT16] = check_ioam_header, 607 - [TEST_IN_BIT17] = check_ioam_header, 608 - [TEST_IN_BIT18] = check_ioam_header, 609 - [TEST_IN_BIT19] = check_ioam_header, 610 - [TEST_IN_BIT20] = check_ioam_header, 611 - [TEST_IN_BIT21] = check_ioam_header, 612 636 [TEST_IN_BIT22] = check_ioam_header_and_data, 613 637 [TEST_IN_FULL_SUPP_TRACE] = check_ioam_header_and_data, 614 638 [TEST_FWD_FULL_SUPP_TRACE] = check_ioam_header_and_data,