Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Lots of simple overlapping additions.

With a build fix from Stephen Rothwell.

Signed-off-by: David S. Miller <davem@davemloft.net>

+3812 -2436
+2
.mailmap
···
33 33 Andi Kleen <ak@linux.intel.com> <ak@suse.de>
34 34 Andi Shyti <andi@etezian.org> <andi.shyti@samsung.com>
35 35 Andreas Herrmann <aherrman@de.ibm.com>
36 + Andrej Shadura <andrew.shadura@collabora.co.uk>
37 + Andrej Shadura <andrew@shadura.me> <andrew@beldisplaytech.com>
36 38 Andrew Morton <akpm@linux-foundation.org>
37 39 Andrew Murray <amurray@thegoodpenguin.co.uk> <amurray@embedded-bits.co.uk>
38 40 Andrew Murray <amurray@thegoodpenguin.co.uk> <andrew.murray@arm.com>
+1 -1
Documentation/devicetree/bindings/spi/snps,dw-apb-ssi.yaml
···
171 171 cs-gpios = <&gpio0 13 0>,
172 172 <&gpio0 14 0>;
173 173 rx-sample-delay-ns = <3>;
174 - spi-flash@1 {
174 + flash@1 {
175 175 compatible = "spi-nand";
176 176 reg = <1>;
177 177 rx-sample-delay-ns = <7>;
+76 -67
Documentation/filesystems/ntfs3.rst
···
4 4 NTFS3
5 5 =====
6 6
7 -
8 7 Summary and Features
9 8 ====================
10 9
11 - NTFS3 is fully functional NTFS Read-Write driver. The driver works with
12 - NTFS versions up to 3.1, normal/compressed/sparse files
13 - and journal replaying. File system type to use on mount is 'ntfs3'.
10 + NTFS3 is fully functional NTFS Read-Write driver. The driver works with NTFS
11 + versions up to 3.1. File system type to use on mount is *ntfs3*.
14 12
15 13 - This driver implements NTFS read/write support for normal, sparse and
16 14 compressed files.
17 - - Supports native journal replaying;
18 - - Supports extended attributes
19 - Predefined extended attributes:
20 - - 'system.ntfs_security' gets/sets security
21 - descriptor (SECURITY_DESCRIPTOR_RELATIVE)
22 - - 'system.ntfs_attrib' gets/sets ntfs file/dir attributes.
23 - Note: applied to empty files, this allows to switch type between
24 - sparse(0x200), compressed(0x800) and normal;
15 + - Supports native journal replaying.
25 16 - Supports NFS export of mounted NTFS volumes.
17 + - Supports extended attributes. Predefined extended attributes:
18 +
19 + - *system.ntfs_security* gets/sets security
20 +
21 + Descriptor: SECURITY_DESCRIPTOR_RELATIVE
22 +
23 + - *system.ntfs_attrib* gets/sets ntfs file/dir attributes.
24 +
25 + Note: Applied to empty files, this allows to switch type between
26 + sparse(0x200), compressed(0x800) and normal.
26 27
27 28 Mount Options
28 29 =============
29 30
30 31 The list below describes mount options supported by NTFS3 driver in addition to
31 - generic ones.
32 + generic ones. You can use every mount option with **no** option. If it is in
33 + this table marked with no it means default is without **no**.
32 34
33 - ===============================================================================
35 + .. flat-table::
36 + :widths: 1 5
37 + :fill-cells:
34 38
35 - nls=name This option informs the driver how to interpret path
36 - strings and translate them to Unicode and back. If
37 - this option is not set, the default codepage will be
38 - used (CONFIG_NLS_DEFAULT).
39 - Examples:
40 - 'nls=utf8'
39 + * - iocharset=name
40 + - This option informs the driver how to interpret path strings and
41 + translate them to Unicode and back. If this option is not set, the
42 + default codepage will be used (CONFIG_NLS_DEFAULT).
41 43
42 - uid=
43 - gid=
44 - umask= Controls the default permissions for files/directories created
45 - after the NTFS volume is mounted.
44 + Example: iocharset=utf8
46 45
47 - fmask=
48 - dmask= Instead of specifying umask which applies both to
49 - files and directories, fmask applies only to files and
50 - dmask only to directories.
46 + * - uid=
47 + - :rspan:`1`
48 + * - gid=
51 49
52 - nohidden Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN)
53 - attribute will not be shown under Linux.
50 + * - umask=
51 + - Controls the default permissions for files/directories created after
52 + the NTFS volume is mounted.
54 53
55 - sys_immutable Files with the Windows-specific SYSTEM
56 - (FILE_ATTRIBUTE_SYSTEM) attribute will be marked as system
57 - immutable files.
54 + * - dmask=
55 + - :rspan:`1` Instead of specifying umask which applies both to files and
56 + directories, fmask applies only to files and dmask only to directories.
57 + * - fmask=
58 58
59 - discard Enable support of the TRIM command for improved performance
60 - on delete operations, which is recommended for use with the
61 - solid-state drives (SSD).
59 + * - noacsrules
60 + - "No access rules" mount option sets access rights for files/folders to
61 + 777 and owner/group to root. This mount option absorbs all other
62 + permissions.
62 63
63 - force Forces the driver to mount partitions even if 'dirty' flag
64 - (volume dirty) is set. Not recommended for use.
64 + - Permissions change for files/folders will be reported as successful,
65 + but they will remain 777.
65 66
66 - sparse Create new files as "sparse".
67 + - Owner/group change will be reported as successful, butthey will stay
68 + as root.
67 69
68 - showmeta Use this parameter to show all meta-files (System Files) on
69 - a mounted NTFS partition.
70 - By default, all meta-files are hidden.
70 + * - nohidden
71 + - Files with the Windows-specific HIDDEN (FILE_ATTRIBUTE_HIDDEN) attribute
72 + will not be shown under Linux.
71 73
72 - prealloc Preallocate space for files excessively when file size is
73 - increasing on writes. Decreases fragmentation in case of
74 - parallel write operations to different files.
74 + * - sys_immutable
75 + - Files with the Windows-specific SYSTEM (FILE_ATTRIBUTE_SYSTEM) attribute
76 + will be marked as system immutable files.
75 77
76 - no_acs_rules "No access rules" mount option sets access rights for
77 - files/folders to 777 and owner/group to root. This mount
78 - option absorbs all other permissions:
79 - - permissions change for files/folders will be reported
80 - as successful, but they will remain 777;
81 - - owner/group change will be reported as successful, but
82 - they will stay as root
78 + * - discard
79 + - Enable support of the TRIM command for improved performance on delete
80 + operations, which is recommended for use with the solid-state drives
81 + (SSD).
83 82
84 - acl Support POSIX ACLs (Access Control Lists). Effective if
85 - supported by Kernel. Not to be confused with NTFS ACLs.
86 - The option specified as acl enables support for POSIX ACLs.
83 + * - force
84 + - Forces the driver to mount partitions even if volume is marked dirty.
85 + Not recommended for use.
87 86
88 - noatime All files and directories will not update their last access
89 - time attribute if a partition is mounted with this parameter.
90 - This option can speed up file system operation.
87 + * - sparse
88 + - Create new files as sparse.
91 89
92 - ===============================================================================
90 + * - showmeta
91 + - Use this parameter to show all meta-files (System Files) on a mounted
92 + NTFS partition. By default, all meta-files are hidden.
93 93
94 - ToDo list
94 + * - prealloc
95 + - Preallocate space for files excessively when file size is increasing on
96 + writes. Decreases fragmentation in case of parallel write operations to
97 + different files.
98 +
99 + * - acl
100 + - Support POSIX ACLs (Access Control Lists). Effective if supported by
101 + Kernel. Not to be confused with NTFS ACLs. The option specified as acl
102 + enables support for POSIX ACLs.
103 +
104 + Todo list
95 105 =========
96 -
97 - - Full journaling support (currently journal replaying is supported) over JBD.
98 -
106 + - Full journaling support over JBD. Currently journal replaying is supported
107 + which is not necessarily as effectice as JBD would be.
99 108
100 109 References
101 110 ==========
102 - https://www.paragon-software.com/home/ntfs-linux-professional/
103 - - Commercial version of the NTFS driver for Linux.
111 + - Commercial version of the NTFS driver for Linux.
112 + https://www.paragon-software.com/home/ntfs-linux-professional/
104 113
105 - almaz.alexandrovich@paragon-software.com
106 - - Direct e-mail address for feedback and requests on the NTFS3 implementation.
114 + - Direct e-mail address for feedback and requests on the NTFS3 implementation.
115 + almaz.alexandrovich@paragon-software.com
+5 -4
Documentation/networking/devlink/ice.rst
···
30 30 PHY, link, etc.
31 31 * - ``fw.mgmt.api``
32 32 - running
33 - - 1.5
34 - - 2-digit version number of the API exported over the AdminQ by the
35 - management firmware. Used by the driver to identify what commands
36 - are supported.
33 + - 1.5.1
34 + - 3-digit version number (major.minor.patch) of the API exported over
35 + the AdminQ by the management firmware. Used by the driver to
36 + identify what commands are supported. Historical versions of the
37 + kernel only displayed a 2-digit version number (major.minor).
37 38 * - ``fw.mgmt.build``
38 39 - running
39 40 - 0x305d955f
+5 -5
Documentation/networking/mctp.rst
···
59 59 };
60 60
61 61 struct sockaddr_mctp {
62 - unsigned short int smctp_family;
63 - int smctp_network;
64 - struct mctp_addr smctp_addr;
65 - __u8 smctp_type;
66 - __u8 smctp_tag;
62 + __kernel_sa_family_t smctp_family;
63 + unsigned int smctp_network;
64 + struct mctp_addr smctp_addr;
65 + __u8 smctp_type;
66 + __u8 smctp_tag;
67 67 };
68 68
69 69 #define MCTP_NET_ANY 0x0
+1 -1
Documentation/userspace-api/vduse.rst
···
18 18 is clarified or fixed in the future.
19 19
20 20 Create/Destroy VDUSE devices
21 - ------------------------
21 + ----------------------------
22 22
23 23 VDUSE devices are created as follows:
24 24
+4 -4
MAINTAINERS
···
7349 7349
7350 7350 FPGA MANAGER FRAMEWORK
7351 7351 M: Moritz Fischer <mdf@kernel.org>
7352 + M: Wu Hao <hao.wu@intel.com>
7353 + M: Xu Yilun <yilun.xu@intel.com>
7352 7354 R: Tom Rix <trix@redhat.com>
7353 7355 L: linux-fpga@vger.kernel.org
7354 7356 S: Maintained
7355 - W: http://www.rocketboards.org
7356 7357 Q: http://patchwork.kernel.org/project/linux-fpga/list/
7357 7358 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mdf/linux-fpga.git
7358 7359 F: Documentation/devicetree/bindings/fpga/
···
10286 10285 M: Christian Borntraeger <borntraeger@de.ibm.com>
10287 10286 M: Janosch Frank <frankja@linux.ibm.com>
10288 10287 R: David Hildenbrand <david@redhat.com>
10289 - R: Cornelia Huck <cohuck@redhat.com>
10290 10288 R: Claudio Imbrenda <imbrenda@linux.ibm.com>
10291 10289 L: kvm@vger.kernel.org
10292 10290 S: Supported
···
16308 16308 M: Heiko Carstens <hca@linux.ibm.com>
16309 16309 M: Vasily Gorbik <gor@linux.ibm.com>
16310 16310 M: Christian Borntraeger <borntraeger@de.ibm.com>
16311 + R: Alexander Gordeev <agordeev@linux.ibm.com>
16311 16312 L: linux-s390@vger.kernel.org
16312 16313 S: Supported
16313 16314 W: http://www.ibm.com/developerworks/linux/linux390/
···
16387 16386 F: drivers/s390/crypto/vfio_ap_private.h
16388 16387
16389 16388 S390 VFIO-CCW DRIVER
16390 - M: Cornelia Huck <cohuck@redhat.com>
16391 16389 M: Eric Farman <farman@linux.ibm.com>
16392 16390 M: Matthew Rosato <mjrosato@linux.ibm.com>
16393 16391 R: Halil Pasic <pasic@linux.ibm.com>
···
17993 17993 SY8106A REGULATOR DRIVER
17994 17994 M: Icenowy Zheng <icenowy@aosc.io>
17995 17995 S: Maintained
17996 - F: Documentation/devicetree/bindings/regulator/sy8106a-regulator.txt
17996 + F: Documentation/devicetree/bindings/regulator/silergy,sy8106a.yaml
17997 17997 F: drivers/regulator/sy8106a-regulator.c
17998 17998
17999 17999 SYNC FILE FRAMEWORK
+1 -1
Makefile
···
2 2 VERSION = 5
3 3 PATCHLEVEL = 15
4 4 SUBLEVEL = 0
5 - EXTRAVERSION = -rc5
5 + EXTRAVERSION = -rc6
6 6 NAME = Opossums on Parade
7 7
8 8 # *DOCUMENTATION*
-5
arch/arc/include/asm/pgtable.h
···
26 26
27 27 extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);
28 28
29 - /* Macro to mark a page protection as uncacheable */
30 - #define pgprot_noncached(prot) (__pgprot(pgprot_val(prot) & ~_PAGE_CACHEABLE))
31 -
32 - extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);
33 -
34 29 /* to cope with aliasing VIPT cache */
35 30 #define HAVE_ARCH_UNMAPPED_AREA
36 31
+6 -5
arch/arm/boot/dts/bcm2711-rpi-4-b.dts
···
40 40 regulator-always-on;
41 41 regulator-settling-time-us = <5000>;
42 42 gpios = <&expgpio 4 GPIO_ACTIVE_HIGH>;
43 - states = <1800000 0x1
44 - 3300000 0x0>;
43 + states = <1800000 0x1>,
44 + <3300000 0x0>;
45 45 status = "okay";
46 46 };
···
217 217 };
218 218
219 219 &pcie0 {
220 - pci@1,0 {
220 + pci@0,0 {
221 + device_type = "pci";
221 222 #address-cells = <3>;
222 223 #size-cells = <2>;
223 224 ranges;
224 225
225 226 reg = <0 0 0 0 0>;
226 227
227 - usb@1,0 {
228 - reg = <0x10000 0 0 0 0>;
228 + usb@0,0 {
229 + reg = <0 0 0 0 0>;
229 230 resets = <&reset RASPBERRYPI_FIRMWARE_RESET_ID_USB>;
230 231 };
231 232 };
+10 -2
arch/arm/boot/dts/bcm2711.dtsi
···
300 300 status = "disabled";
301 301 };
302 302
303 + vec: vec@7ec13000 {
304 + compatible = "brcm,bcm2711-vec";
305 + reg = <0x7ec13000 0x1000>;
306 + clocks = <&clocks BCM2835_CLOCK_VEC>;
307 + interrupts = <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>;
308 + status = "disabled";
309 + };
310 +
303 311 dvp: clock@7ef00000 {
304 312 compatible = "brcm,brcm2711-dvp";
305 313 reg = <0x7ef00000 0x10>;
···
540 532 compatible = "brcm,genet-mdio-v5";
541 533 reg = <0xe14 0x8>;
542 534 reg-names = "mdio";
543 - #address-cells = <0x0>;
544 - #size-cells = <0x1>;
535 + #address-cells = <0x1>;
536 + #size-cells = <0x0>;
545 537 };
546 538 };
547 539 };
+8
arch/arm/boot/dts/bcm2835-common.dtsi
···
106 106 status = "okay";
107 107 };
108 108
109 + vec: vec@7e806000 {
110 + compatible = "brcm,bcm2835-vec";
111 + reg = <0x7e806000 0x1000>;
112 + clocks = <&clocks BCM2835_CLOCK_VEC>;
113 + interrupts = <2 27>;
114 + status = "disabled";
115 + };
116 +
109 117 pixelvalve@7e807000 {
110 118 compatible = "brcm,bcm2835-pixelvalve2";
111 119 reg = <0x7e807000 0x100>;
-8
arch/arm/boot/dts/bcm283x.dtsi
···
464 464 status = "disabled";
465 465 };
466 466
467 - vec: vec@7e806000 {
468 - compatible = "brcm,bcm2835-vec";
469 - reg = <0x7e806000 0x1000>;
470 - clocks = <&clocks BCM2835_CLOCK_VEC>;
471 - interrupts = <2 27>;
472 - status = "disabled";
473 - };
474 -
475 467 usb: usb@7e980000 {
476 468 compatible = "brcm,bcm2835-usb";
477 469 reg = <0x7e980000 0x10000>;
-1
arch/arm/configs/multi_v7_defconfig
···
197 197 CONFIG_DEVTMPFS=y
198 198 CONFIG_DEVTMPFS_MOUNT=y
199 199 CONFIG_OMAP_OCP2SCP=y
200 - CONFIG_SIMPLE_PM_BUS=y
201 200 CONFIG_MTD=y
202 201 CONFIG_MTD_CMDLINE_PARTS=y
203 202 CONFIG_MTD_BLOCK=y
-1
arch/arm/configs/oxnas_v6_defconfig
···
46 46 CONFIG_DEVTMPFS_MOUNT=y
47 47 CONFIG_DMA_CMA=y
48 48 CONFIG_CMA_SIZE_MBYTES=64
49 - CONFIG_SIMPLE_PM_BUS=y
50 49 CONFIG_MTD=y
51 50 CONFIG_MTD_CMDLINE_PARTS=y
52 51 CONFIG_MTD_BLOCK=y
-1
arch/arm/configs/shmobile_defconfig
···
40 40 CONFIG_PCIE_RCAR_HOST=y
41 41 CONFIG_DEVTMPFS=y
42 42 CONFIG_DEVTMPFS_MOUNT=y
43 - CONFIG_SIMPLE_PM_BUS=y
44 43 CONFIG_MTD=y
45 44 CONFIG_MTD_BLOCK=y
46 45 CONFIG_MTD_CFI=y
+31 -9
arch/arm/mach-imx/src.c
···
9 9 #include <linux/iopoll.h>
10 10 #include <linux/of.h>
11 11 #include <linux/of_address.h>
12 + #include <linux/platform_device.h>
12 13 #include <linux/reset-controller.h>
13 14 #include <linux/smp.h>
14 15 #include <asm/smp_plat.h>
···
80 79
81 80 static const struct reset_control_ops imx_src_ops = {
82 81 .reset = imx_src_reset_module,
83 - };
84 -
85 - static struct reset_controller_dev imx_reset_controller = {
86 - .ops = &imx_src_ops,
87 - .nr_resets = ARRAY_SIZE(sw_reset_bits),
88 82 };
89 83
90 84 static void imx_gpcv2_set_m_core_pgc(bool enable, u32 offset)
···
173 177 src_base = of_iomap(np, 0);
174 178 WARN_ON(!src_base);
175 179
176 - imx_reset_controller.of_node = np;
177 - if (IS_ENABLED(CONFIG_RESET_CONTROLLER))
178 - reset_controller_register(&imx_reset_controller);
179 -
180 180 /*
181 181 * force warm reset sources to generate cold reset
182 182 * for a more reliable restart
···
206 214 if (!gpc_base)
207 215 return;
208 216 }
217 +
218 + static const struct of_device_id imx_src_dt_ids[] = {
219 + { .compatible = "fsl,imx51-src" },
220 + { /* sentinel */ }
221 + };
222 +
223 + static int imx_src_probe(struct platform_device *pdev)
224 + {
225 + struct reset_controller_dev *rcdev;
226 +
227 + rcdev = devm_kzalloc(&pdev->dev, sizeof(*rcdev), GFP_KERNEL);
228 + if (!rcdev)
229 + return -ENOMEM;
230 +
231 + rcdev->ops = &imx_src_ops;
232 + rcdev->dev = &pdev->dev;
233 + rcdev->of_node = pdev->dev.of_node;
234 + rcdev->nr_resets = ARRAY_SIZE(sw_reset_bits);
235 +
236 + return devm_reset_controller_register(&pdev->dev, rcdev);
237 + }
238 +
239 + static struct platform_driver imx_src_driver = {
240 + .driver = {
241 + .name = "imx-src",
242 + .of_match_table = imx_src_dt_ids,
243 + },
244 + .probe = imx_src_probe,
245 + };
246 + builtin_platform_driver(imx_src_driver);
-1
arch/arm/mach-omap2/Kconfig
···
112 112 select PM_GENERIC_DOMAINS
113 113 select PM_GENERIC_DOMAINS_OF
114 114 select RESET_CONTROLLER
115 - select SIMPLE_PM_BUS
116 115 select SOC_BUS
117 116 select TI_SYSC
118 117 select OMAP_IRQCHIP
-1
arch/arm64/configs/defconfig
···
245 245 CONFIG_FW_LOADER_USER_HELPER=y
246 246 CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
247 247 CONFIG_HISILICON_LPC=y
248 - CONFIG_SIMPLE_PM_BUS=y
249 248 CONFIG_FSL_MC_BUS=y
250 249 CONFIG_TEGRA_ACONNECT=m
251 250 CONFIG_GNSS=m
+1
arch/arm64/kvm/hyp/include/nvhe/gfp.h
···
24 24
25 25 /* Allocation */
26 26 void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
27 + void hyp_split_page(struct hyp_page *page);
27 28 void hyp_get_page(struct hyp_pool *pool, void *addr);
28 29 void hyp_put_page(struct hyp_pool *pool, void *addr);
29 30
+12 -1
arch/arm64/kvm/hyp/nvhe/mem_protect.c
···
35 35
36 36 static void *host_s2_zalloc_pages_exact(size_t size)
37 37 {
38 - return hyp_alloc_pages(&host_s2_pool, get_order(size));
38 + void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
39 +
40 + hyp_split_page(hyp_virt_to_page(addr));
41 +
42 + /*
43 + * The size of concatenated PGDs is always a power of two of PAGE_SIZE,
44 + * so there should be no need to free any of the tail pages to make the
45 + * allocation exact.
46 + */
47 + WARN_ON(size != (PAGE_SIZE << get_order(size)));
48 +
49 + return addr;
39 50 }
40 51
41 52 static void *host_s2_zalloc_page(void *pool)
+15
arch/arm64/kvm/hyp/nvhe/page_alloc.c
···
152 152
153 153 static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
154 154 {
155 + BUG_ON(!p->refcount);
155 156 p->refcount--;
156 157 return (p->refcount == 0);
157 158 }
···
192 191 hyp_spin_lock(&pool->lock);
193 192 hyp_page_ref_inc(p);
194 193 hyp_spin_unlock(&pool->lock);
194 + }
195 +
196 + void hyp_split_page(struct hyp_page *p)
197 + {
198 + unsigned short order = p->order;
199 + unsigned int i;
200 +
201 + p->order = 0;
202 + for (i = 1; i < (1 << order); i++) {
203 + struct hyp_page *tail = p + i;
204 +
205 + tail->order = 0;
206 + hyp_set_page_refcounted(tail);
207 + }
195 208 }
196 209
197 210 void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
+4 -2
arch/arm64/kvm/mmu.c
···
1529 1529 * when updating the PG_mte_tagged page flag, see
1530 1530 * sanitise_mte_tags for more details.
1531 1531 */
1532 - if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED)
1533 - return -EINVAL;
1532 + if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED) {
1533 + ret = -EINVAL;
1534 + break;
1535 + }
1534 1536
1535 1537 if (vma->vm_flags & VM_PFNMAP) {
1536 1538 /* IO region dirty page logging not allowed */
+2 -1
arch/csky/Kconfig
···
8 8 select ARCH_HAS_SYNC_DMA_FOR_DEVICE
9 9 select ARCH_USE_BUILTIN_BSWAP
10 10 select ARCH_USE_QUEUED_RWLOCKS
11 - select ARCH_WANT_FRAME_POINTERS if !CPU_CK610
11 + select ARCH_WANT_FRAME_POINTERS if !CPU_CK610 && $(cc-option,-mbacktrace)
12 12 select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
13 13 select COMMON_CLK
14 14 select CLKSRC_MMIO
···
241 241
242 242 menuconfig HAVE_TCM
243 243 bool "Tightly-Coupled/Sram Memory"
244 + depends on !COMPILE_TEST
244 245 help
245 246 The implementation are not only used by TCM (Tightly-Coupled Meory)
246 247 but also used by sram on SOC bus. It follow existed linux tcm
-1
arch/csky/include/asm/bitops.h
···
74 74 * bug fix, why only could use atomic!!!!
75 75 */
76 76 #include <asm-generic/bitops/non-atomic.h>
77 - #define __clear_bit(nr, vaddr) clear_bit(nr, vaddr)
78 77
79 78 #include <asm-generic/bitops/le.h>
80 79 #include <asm-generic/bitops/ext2-atomic.h>
+2 -1
arch/csky/kernel/ptrace.c
···
99 99 if (ret)
100 100 return ret;
101 101
102 - regs.sr = task_pt_regs(target)->sr;
102 + /* BIT(0) of regs.sr is Condition Code/Carry bit */
103 + regs.sr = (regs.sr & BIT(0)) | (task_pt_regs(target)->sr & ~BIT(0));
103 104 #ifdef CONFIG_CPU_HAS_HILO
104 105 regs.dcsr = task_pt_regs(target)->dcsr;
105 106 #endif
+4
arch/csky/kernel/signal.c
···
52 52 struct sigcontext __user *sc)
53 53 {
54 54 int err = 0;
55 + unsigned long sr = regs->sr;
55 56
56 57 /* sc_pt_regs is structured the same as the start of pt_regs */
57 58 err |= __copy_from_user(regs, &sc->sc_pt_regs, sizeof(struct pt_regs));
59 +
60 + /* BIT(0) of regs->sr is Condition Code/Carry bit */
61 + regs->sr = (sr & ~1) | (regs->sr & 1);
58 62
59 63 /* Restore the floating-point state. */
60 64 err |= restore_fpu_state(sc);
+2 -2
arch/nios2/include/asm/irqflags.h
···
9 9
10 10 static inline unsigned long arch_local_save_flags(void)
11 11 {
12 - return RDCTL(CTL_STATUS);
12 + return RDCTL(CTL_FSTATUS);
13 13 }
14 14
15 15 /*
···
18 18 */
19 19 static inline void arch_local_irq_restore(unsigned long flags)
20 20 {
21 - WRCTL(CTL_STATUS, flags);
21 + WRCTL(CTL_FSTATUS, flags);
22 22 }
23 23
24 24 static inline void arch_local_irq_disable(void)
+1 -1
arch/nios2/include/asm/registers.h
···
11 11 #endif
12 12
13 13 /* control register numbers */
14 - #define CTL_STATUS 0
14 + #define CTL_FSTATUS 0
15 15 #define CTL_ESTATUS 1
16 16 #define CTL_BSTATUS 2
17 17 #define CTL_IENABLE 3
+6 -4
arch/powerpc/kernel/idle_book3s.S
···
126 126 /*
127 127 * This is the sequence required to execute idle instructions, as
128 128 * specified in ISA v2.07 (and earlier). MSR[IR] and MSR[DR] must be 0.
129 - *
130 - * The 0(r1) slot is used to save r2 in isa206, so use that here.
129 + * We have to store a GPR somewhere, ptesync, then reload it, and create
130 + * a false dependency on the result of the load. It doesn't matter which
131 + * GPR we store, or where we store it. We have already stored r2 to the
132 + * stack at -8(r1) in isa206_idle_insn_mayloss, so use that.
131 133 */
132 134 #define IDLE_STATE_ENTER_SEQ_NORET(IDLE_INST) \
133 135 /* Magic NAP/SLEEP/WINKLE mode enter sequence */ \
134 - std r2,0(r1); \
136 + std r2,-8(r1); \
135 137 ptesync; \
136 - ld r2,0(r1); \
138 + ld r2,-8(r1); \
137 139 236: cmpd cr0,r2,r2; \
138 140 bne 236b; \
139 141 IDLE_INST; \
-2
arch/powerpc/kernel/smp.c
···
1730 1730
1731 1731 void arch_cpu_idle_dead(void)
1732 1732 {
1733 - sched_preempt_enable_no_resched();
1734 -
1735 1733 /*
1736 1734 * Disable on the down path. This will be re-enabled by
1737 1735 * start_secondary() via start_secondary_resume() below
+17 -11
arch/powerpc/kvm/book3s_hv_rmhandlers.S
···
255 255 * r3 contains the SRR1 wakeup value, SRR1 is trashed.
256 256 */
257 257 _GLOBAL(idle_kvm_start_guest)
258 - ld r4,PACAEMERGSP(r13)
259 258 mfcr r5
260 259 mflr r0
261 - std r1,0(r4)
262 - std r5,8(r4)
263 - std r0,16(r4)
264 - subi r1,r4,STACK_FRAME_OVERHEAD
260 + std r5, 8(r1) // Save CR in caller's frame
261 + std r0, 16(r1) // Save LR in caller's frame
262 + // Create frame on emergency stack
263 + ld r4, PACAEMERGSP(r13)
264 + stdu r1, -SWITCH_FRAME_SIZE(r4)
265 + // Switch to new frame on emergency stack
266 + mr r1, r4
267 + std r3, 32(r1) // Save SRR1 wakeup value
265 268 SAVE_NVGPRS(r1)
266 269
267 270 /*
···
315 312 beq kvm_no_guest
316 313
317 314 kvm_secondary_got_guest:
315 +
316 + // About to go to guest, clear saved SRR1
317 + li r0, 0
318 + std r0, 32(r1)
318 319
319 320 /* Set HSTATE_DSCR(r13) to something sensible */
320 321 ld r6, PACA_DSCR_DEFAULT(r13)
···
399 392 mfspr r4, SPRN_LPCR
400 393 rlwimi r4, r3, 0, LPCR_PECE0 | LPCR_PECE1
401 394 mtspr SPRN_LPCR, r4
402 - /* set up r3 for return */
403 - mfspr r3,SPRN_SRR1
395 + // Return SRR1 wakeup value, or 0 if we went into the guest
396 + ld r3, 32(r1)
404 397 REST_NVGPRS(r1)
405 - addi r1, r1, STACK_FRAME_OVERHEAD
406 - ld r0, 16(r1)
407 - ld r5, 8(r1)
408 - ld r1, 0(r1)
398 + ld r1, 0(r1) // Switch back to caller stack
399 + ld r0, 16(r1) // Reload LR
400 + ld r5, 8(r1) // Reload CR
409 401 mtlr r0
410 402 mtcr r5
411 403 blr
+2 -1
arch/powerpc/sysdev/xive/common.c
···
945 945 * interrupt to be inactive in that case.
946 946 */
947 947 *state = (pq != XIVE_ESB_INVALID) && !xd->stale_p &&
948 - (xd->saved_p || !!(pq & XIVE_ESB_VAL_P));
948 + (xd->saved_p || (!!(pq & XIVE_ESB_VAL_P) &&
949 + !irqd_irq_disabled(data)));
949 950 return 0;
950 951 default:
951 952 return -EINVAL;
+12
arch/s390/kvm/gaccess.c
···
894 894
895 895 /**
896 896 * guest_translate_address - translate guest logical into guest absolute address
897 + * @vcpu: virtual cpu
898 + * @gva: Guest virtual address
899 + * @ar: Access register
900 + * @gpa: Guest physical address
901 + * @mode: Translation access mode
897 902 *
898 903 * Parameter semantics are the same as the ones from guest_translate.
899 904 * The memory contents at the guest address are not changed.
···
939 934
940 935 /**
941 936 * check_gva_range - test a range of guest virtual addresses for accessibility
937 + * @vcpu: virtual cpu
938 + * @gva: Guest virtual address
939 + * @ar: Access register
940 + * @length: Length of test range
941 + * @mode: Translation access mode
942 942 */
943 943 int check_gva_range(struct kvm_vcpu *vcpu, unsigned long gva, u8 ar,
944 944 unsigned long length, enum gacc_mode mode)
···
966 956
967 957 /**
968 958 * kvm_s390_check_low_addr_prot_real - check for low-address protection
959 + * @vcpu: virtual cpu
969 960 * @gra: Guest real address
970 961 *
971 962 * Checks whether an address is subject to low-address protection and set
···
990 979 * @pgt: pointer to the beginning of the page table for the given address if
991 980 * successful (return value 0), or to the first invalid DAT entry in
992 981 * case of exceptions (return value > 0)
982 + * @dat_protection: referenced memory is write protected
993 983 * @fake: pgt references contiguous guest memory block, not a pgtable
994 984 static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
+3 -1
arch/s390/kvm/intercept.c
···
269 269
270 270 /**
271 271 * handle_external_interrupt - used for external interruption interceptions
272 + * @vcpu: virtual cpu
272 273 *
273 274 * This interception only occurs if the CPUSTAT_EXT_INT bit was set, or if
274 275 * the new PSW does not have external interrupts disabled. In the first case,
···
316 315 }
317 316
318 317 /**
319 - * Handle MOVE PAGE partial execution interception.
318 + * handle_mvpg_pei - Handle MOVE PAGE partial execution interception.
319 + * @vcpu: virtual cpu
320 320 *
321 321 * This interception can only happen for guests with DAT disabled and
322 322 * addresses that are currently not mapped in the host. Thus we try to
+6 -7
arch/s390/lib/string.c
···
259 259 #ifdef __HAVE_ARCH_STRRCHR
260 260 char *strrchr(const char *s, int c)
261 261 {
262 - size_t len = __strend(s) - s;
262 + ssize_t len = __strend(s) - s;
263 263
264 - if (len)
265 - do {
266 - if (s[len] == (char) c)
267 - return (char *) s + len;
268 - } while (--len > 0);
269 - return NULL;
264 + do {
265 + if (s[len] == (char)c)
266 + return (char *)s + len;
267 + } while (--len >= 0);
268 + return NULL;
270 269 }
271 270 EXPORT_SYMBOL(strrchr);
272 271 #endif
-1
arch/x86/Kconfig
···
1525 1525
1526 1526 config AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
1527 1527 bool "Activate AMD Secure Memory Encryption (SME) by default"
1528 - default y
1529 1528 depends on AMD_MEM_ENCRYPT
1530 1529 help
1531 1530 Say yes to have system memory encrypted by default if running on
+1
arch/x86/events/msr.c
···
68 68 case INTEL_FAM6_BROADWELL_D:
69 69 case INTEL_FAM6_BROADWELL_G:
70 70 case INTEL_FAM6_BROADWELL_X:
71 + case INTEL_FAM6_SAPPHIRERAPIDS_X:
71 72
72 73 case INTEL_FAM6_ATOM_SILVERMONT:
73 74 case INTEL_FAM6_ATOM_SILVERMONT_D:
+1 -1
arch/x86/kernel/fpu/signal.c
···
385 385 return -EINVAL;
386 386 } else {
387 387 /* Mask invalid bits out for historical reasons (broken hardware). */
388 - fpu->state.fxsave.mxcsr &= ~mxcsr_feature_mask;
388 + fpu->state.fxsave.mxcsr &= mxcsr_feature_mask;
389 389
390 390 /* Enforce XFEATURE_MASK_FPSSE when XSAVE is enabled */
+13 -7
arch/x86/kvm/lapic.c
···
2321 2321 void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
2322 2322 {
2323 2323 struct kvm_lapic *apic = vcpu->arch.apic;
2324 + u64 msr_val;
2324 2325 int i;
2325 2326
2326 2327 if (!init_event) {
2327 - vcpu->arch.apic_base = APIC_DEFAULT_PHYS_BASE |
2328 - MSR_IA32_APICBASE_ENABLE;
2328 + msr_val = APIC_DEFAULT_PHYS_BASE | MSR_IA32_APICBASE_ENABLE;
2329 2329 if (kvm_vcpu_is_reset_bsp(vcpu))
2330 - vcpu->arch.apic_base |= MSR_IA32_APICBASE_BSP;
2330 + msr_val |= MSR_IA32_APICBASE_BSP;
2331 + kvm_lapic_set_base(vcpu, msr_val);
2331 2332 }
2332 2333
2333 2334 if (!apic)
···
2337 2336 /* Stop the timer in case it's a reset to an active apic */
2338 2337 hrtimer_cancel(&apic->lapic_timer.timer);
2339 2338
2340 - if (!init_event) {
2341 - apic->base_address = APIC_DEFAULT_PHYS_BASE;
2342 -
2339 + /* The xAPIC ID is set at RESET even if the APIC was already enabled. */
2340 + if (!init_event)
2343 2341 kvm_apic_set_xapic_id(apic, vcpu->vcpu_id);
2344 - }
2345 2342 kvm_apic_set_version(apic->vcpu);
2346 2343
2347 2344 for (i = 0; i < KVM_APIC_LVT_NUM; i++)
···
2480 2481 lapic_timer_advance_dynamic = false;
2481 2482 }
2482 2483
2484 + /*
2485 + * Stuff the APIC ENABLE bit in lieu of temporarily incrementing
2486 + * apic_hw_disabled; the full RESET value is set by kvm_lapic_reset().
2487 + */
2488 + vcpu->arch.apic_base = MSR_IA32_APICBASE_ENABLE;
2483 2489 static_branch_inc(&apic_sw_disabled.key); /* sw disabled at reset */
2484 2490 kvm_iodevice_init(&apic->dev, &apic_mmio_ops);
2485 2491
···
2946 2942 void kvm_lapic_exit(void)
2947 2943 {
2948 2944 static_key_deferred_flush(&apic_hw_disabled);
2945 + WARN_ON(static_branch_unlikely(&apic_hw_disabled.key));
2949 2946 static_key_deferred_flush(&apic_sw_disabled);
2947 + WARN_ON(static_branch_unlikely(&apic_sw_disabled.key));
2950 2948 }
+7 -2
arch/x86/kvm/svm/sev.c
···
618 618 vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
619 619 vmsa.address = __sme_pa(svm->vmsa);
620 620 vmsa.len = PAGE_SIZE;
621 - return sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
621 + ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
622 + if (ret)
623 + return ret;
624 +
625 + vcpu->arch.guest_state_protected = true;
626 + return 0;
622 627 }
623 628
624 629 static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
···
2588 2583 return -EINVAL;
2589 2584
2590 2585 return kvm_sev_es_string_io(&svm->vcpu, size, port,
2591 - svm->ghcb_sa, svm->ghcb_sa_len, in);
2586 + svm->ghcb_sa, svm->ghcb_sa_len / size, in);
2592 2587 }
2593 2588
2594 2589 void sev_es_init_vmcb(struct vcpu_svm *svm)
+1 -1
arch/x86/kvm/svm/svm.h
···
191 191
192 192 /* SEV-ES scratch area support */
193 193 void *ghcb_sa;
194 - u64 ghcb_sa_len;
194 + u32 ghcb_sa_len;
195 195 bool ghcb_sa_sync;
196 196 bool ghcb_sa_free;
197 197
+9 -6
arch/x86/kvm/vmx/vmx.c
···
5562 5562
5563 5563 static int handle_bus_lock_vmexit(struct kvm_vcpu *vcpu)
5564 5564 {
5565 - vcpu->run->exit_reason = KVM_EXIT_X86_BUS_LOCK;
5566 - vcpu->run->flags |= KVM_RUN_X86_BUS_LOCK;
5567 - return 0;
5565 + /*
5566 + * Hardware may or may not set the BUS_LOCK_DETECTED flag on BUS_LOCK
5567 + * VM-Exits. Unconditionally set the flag here and leave the handling to
5568 + * vmx_handle_exit().
5569 + */
5570 + to_vmx(vcpu)->exit_reason.bus_lock_detected = true;
5571 + return 1;
5568 5572 }
5569 5573
5570 5574 /*
···
6055 6051 int ret = __vmx_handle_exit(vcpu, exit_fastpath);
6056 6052
6057 6053 /*
6058 - * Even when current exit reason is handled by KVM internally, we
6059 - * still need to exit to user space when bus lock detected to inform
6060 - * that there is a bus lock in guest.
6054 + * Exit to user space when bus lock detected to inform that there is
6055 + * a bus lock in guest.
6061 6056 */
6062 6057 if (to_vmx(vcpu)->exit_reason.bus_lock_detected) {
6063 6058 if (ret > 0)
+2 -1
arch/x86/kvm/x86.c
··· 11392 11392 int level = i + 1; 11393 11393 int lpages = __kvm_mmu_slot_lpages(slot, npages, level); 11394 11394 11395 - WARN_ON(slot->arch.rmap[i]); 11395 + if (slot->arch.rmap[i]) 11396 + continue; 11396 11397 11397 11398 slot->arch.rmap[i] = kvcalloc(lpages, sz, GFP_KERNEL_ACCOUNT); 11398 11399 if (!slot->arch.rmap[i]) {
+6
block/bfq-cgroup.c
··· 666 666 bfq_put_idle_entity(bfq_entity_service_tree(entity), entity); 667 667 bfqg_and_blkg_put(bfqq_group(bfqq)); 668 668 669 + if (entity->parent && 670 + entity->parent->last_bfqq_created == bfqq) 671 + entity->parent->last_bfqq_created = NULL; 672 + else if (bfqd->last_bfqq_created == bfqq) 673 + bfqd->last_bfqq_created = NULL; 674 + 669 675 entity->parent = bfqg->my_entity; 670 676 entity->sched_data = &bfqg->sched_data; 671 677 /* pin down bfqg and its associated blkg */
+78 -70
block/blk-core.c
··· 49 49 #include "blk-mq.h" 50 50 #include "blk-mq-sched.h" 51 51 #include "blk-pm.h" 52 - #include "blk-rq-qos.h" 53 52 54 53 struct dentry *blk_debugfs_root; 55 54 ··· 336 337 } 337 338 EXPORT_SYMBOL(blk_put_queue); 338 339 339 - void blk_set_queue_dying(struct request_queue *q) 340 + void blk_queue_start_drain(struct request_queue *q) 340 341 { 341 - blk_queue_flag_set(QUEUE_FLAG_DYING, q); 342 - 343 342 /* 344 343 * When queue DYING flag is set, we need to block new req 345 344 * entering queue, so we call blk_freeze_queue_start() to 346 345 * prevent I/O from crossing blk_queue_enter(). 347 346 */ 348 347 blk_freeze_queue_start(q); 349 - 350 348 if (queue_is_mq(q)) 351 349 blk_mq_wake_waiters(q); 352 - 353 350 /* Make blk_queue_enter() reexamine the DYING flag. */ 354 351 wake_up_all(&q->mq_freeze_wq); 352 + } 353 + 354 + void blk_set_queue_dying(struct request_queue *q) 355 + { 356 + blk_queue_flag_set(QUEUE_FLAG_DYING, q); 357 + blk_queue_start_drain(q); 355 358 } 356 359 EXPORT_SYMBOL_GPL(blk_set_queue_dying); 357 360 ··· 386 385 */ 387 386 blk_freeze_queue(q); 388 387 389 - rq_qos_exit(q); 390 - 391 388 blk_queue_flag_set(QUEUE_FLAG_DEAD, q); 392 - 393 - /* for synchronous bio-based driver finish in-flight integrity i/o */ 394 - blk_flush_integrity(); 395 389 396 390 blk_sync_queue(q); 397 391 if (queue_is_mq(q)) ··· 412 416 } 413 417 EXPORT_SYMBOL(blk_cleanup_queue); 414 418 419 + static bool blk_try_enter_queue(struct request_queue *q, bool pm) 420 + { 421 + rcu_read_lock(); 422 + if (!percpu_ref_tryget_live(&q->q_usage_counter)) 423 + goto fail; 424 + 425 + /* 426 + * The code that increments the pm_only counter must ensure that the 427 + * counter is globally visible before the queue is unfrozen. 
428 + */ 429 + if (blk_queue_pm_only(q) && 430 + (!pm || queue_rpm_status(q) == RPM_SUSPENDED)) 431 + goto fail_put; 432 + 433 + rcu_read_unlock(); 434 + return true; 435 + 436 + fail_put: 437 + percpu_ref_put(&q->q_usage_counter); 438 + fail: 439 + rcu_read_unlock(); 440 + return false; 441 + } 442 + 415 443 /** 416 444 * blk_queue_enter() - try to increase q->q_usage_counter 417 445 * @q: request queue pointer ··· 445 425 { 446 426 const bool pm = flags & BLK_MQ_REQ_PM; 447 427 448 - while (true) { 449 - bool success = false; 450 - 451 - rcu_read_lock(); 452 - if (percpu_ref_tryget_live(&q->q_usage_counter)) { 453 - /* 454 - * The code that increments the pm_only counter is 455 - * responsible for ensuring that that counter is 456 - * globally visible before the queue is unfrozen. 457 - */ 458 - if ((pm && queue_rpm_status(q) != RPM_SUSPENDED) || 459 - !blk_queue_pm_only(q)) { 460 - success = true; 461 - } else { 462 - percpu_ref_put(&q->q_usage_counter); 463 - } 464 - } 465 - rcu_read_unlock(); 466 - 467 - if (success) 468 - return 0; 469 - 428 + while (!blk_try_enter_queue(q, pm)) { 470 429 if (flags & BLK_MQ_REQ_NOWAIT) 471 430 return -EBUSY; 472 431 473 432 /* 474 - * read pair of barrier in blk_freeze_queue_start(), 475 - * we need to order reading __PERCPU_REF_DEAD flag of 476 - * .q_usage_counter and reading .mq_freeze_depth or 477 - * queue dying flag, otherwise the following wait may 478 - * never return if the two reads are reordered. 433 + * read pair of barrier in blk_freeze_queue_start(), we need to 434 + * order reading __PERCPU_REF_DEAD flag of .q_usage_counter and 435 + * reading .mq_freeze_depth or queue dying flag, otherwise the 436 + * following wait may never return if the two reads are 437 + * reordered. 
479 438 */ 480 439 smp_rmb(); 481 - 482 440 wait_event(q->mq_freeze_wq, 483 441 (!q->mq_freeze_depth && 484 442 blk_pm_resume_queue(pm, q)) || ··· 464 466 if (blk_queue_dying(q)) 465 467 return -ENODEV; 466 468 } 469 + 470 + return 0; 467 471 } 468 472 469 473 static inline int bio_queue_enter(struct bio *bio) 470 474 { 471 - struct request_queue *q = bio->bi_bdev->bd_disk->queue; 472 - bool nowait = bio->bi_opf & REQ_NOWAIT; 473 - int ret; 475 + struct gendisk *disk = bio->bi_bdev->bd_disk; 476 + struct request_queue *q = disk->queue; 474 477 475 - ret = blk_queue_enter(q, nowait ? BLK_MQ_REQ_NOWAIT : 0); 476 - if (unlikely(ret)) { 477 - if (nowait && !blk_queue_dying(q)) 478 + while (!blk_try_enter_queue(q, false)) { 479 + if (bio->bi_opf & REQ_NOWAIT) { 480 + if (test_bit(GD_DEAD, &disk->state)) 481 + goto dead; 478 482 bio_wouldblock_error(bio); 479 - else 480 - bio_io_error(bio); 483 + return -EBUSY; 484 + } 485 + 486 + /* 487 + * read pair of barrier in blk_freeze_queue_start(), we need to 488 + * order reading __PERCPU_REF_DEAD flag of .q_usage_counter and 489 + * reading .mq_freeze_depth or queue dying flag, otherwise the 490 + * following wait may never return if the two reads are 491 + * reordered. 
492 + */ 493 + smp_rmb(); 494 + wait_event(q->mq_freeze_wq, 495 + (!q->mq_freeze_depth && 496 + blk_pm_resume_queue(false, q)) || 497 + test_bit(GD_DEAD, &disk->state)); 498 + if (test_bit(GD_DEAD, &disk->state)) 499 + goto dead; 481 500 } 482 501 483 - return ret; 502 + return 0; 503 + dead: 504 + bio_io_error(bio); 505 + return -ENODEV; 484 506 } 485 507 486 508 void blk_queue_exit(struct request_queue *q) ··· 917 899 struct gendisk *disk = bio->bi_bdev->bd_disk; 918 900 blk_qc_t ret = BLK_QC_T_NONE; 919 901 920 - if (blk_crypto_bio_prep(&bio)) { 921 - if (!disk->fops->submit_bio) 922 - return blk_mq_submit_bio(bio); 902 + if (unlikely(bio_queue_enter(bio) != 0)) 903 + return BLK_QC_T_NONE; 904 + 905 + if (!submit_bio_checks(bio) || !blk_crypto_bio_prep(&bio)) 906 + goto queue_exit; 907 + if (disk->fops->submit_bio) { 923 908 ret = disk->fops->submit_bio(bio); 909 + goto queue_exit; 924 910 } 911 + return blk_mq_submit_bio(bio); 912 + 913 + queue_exit: 925 914 blk_queue_exit(disk->queue); 926 915 return ret; 927 916 } ··· 966 941 struct request_queue *q = bio->bi_bdev->bd_disk->queue; 967 942 struct bio_list lower, same; 968 943 969 - if (unlikely(bio_queue_enter(bio) != 0)) 970 - continue; 971 - 972 944 /* 973 945 * Create a fresh bio_list for all subordinate requests. 
974 946 */ ··· 1001 979 static blk_qc_t __submit_bio_noacct_mq(struct bio *bio) 1002 980 { 1003 981 struct bio_list bio_list[2] = { }; 1004 - blk_qc_t ret = BLK_QC_T_NONE; 982 + blk_qc_t ret; 1005 983 1006 984 current->bio_list = bio_list; 1007 985 1008 986 do { 1009 - struct gendisk *disk = bio->bi_bdev->bd_disk; 1010 - 1011 - if (unlikely(bio_queue_enter(bio) != 0)) 1012 - continue; 1013 - 1014 - if (!blk_crypto_bio_prep(&bio)) { 1015 - blk_queue_exit(disk->queue); 1016 - ret = BLK_QC_T_NONE; 1017 - continue; 1018 - } 1019 - 1020 - ret = blk_mq_submit_bio(bio); 987 + ret = __submit_bio(bio); 1021 988 } while ((bio = bio_list_pop(&bio_list[0]))); 1022 989 1023 990 current->bio_list = NULL; ··· 1024 1013 */ 1025 1014 blk_qc_t submit_bio_noacct(struct bio *bio) 1026 1015 { 1027 - if (!submit_bio_checks(bio)) 1028 - return BLK_QC_T_NONE; 1029 - 1030 1016 /* 1031 1017 * We only want one ->submit_bio to be active at a time, else stack 1032 1018 * usage with stacked devices could be a problem. Use current->bio_list
+8 -1
block/blk-mq.c
··· 188 188 } 189 189 EXPORT_SYMBOL_GPL(blk_mq_freeze_queue); 190 190 191 - void blk_mq_unfreeze_queue(struct request_queue *q) 191 + void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic) 192 192 { 193 193 mutex_lock(&q->mq_freeze_lock); 194 + if (force_atomic) 195 + q->q_usage_counter.data->force_atomic = true; 194 196 q->mq_freeze_depth--; 195 197 WARN_ON_ONCE(q->mq_freeze_depth < 0); 196 198 if (!q->mq_freeze_depth) { ··· 200 198 wake_up_all(&q->mq_freeze_wq); 201 199 } 202 200 mutex_unlock(&q->mq_freeze_lock); 201 + } 202 + 203 + void blk_mq_unfreeze_queue(struct request_queue *q) 204 + { 205 + __blk_mq_unfreeze_queue(q, false); 203 206 } 204 207 EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue); 205 208
+2
block/blk.h
··· 51 51 void blk_free_flush_queue(struct blk_flush_queue *q); 52 52 53 53 void blk_freeze_queue(struct request_queue *q); 54 + void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic); 55 + void blk_queue_start_drain(struct request_queue *q); 54 56 55 57 #define BIO_INLINE_VECS 4 56 58 struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
+23
block/genhd.c
··· 26 26 #include <linux/badblocks.h> 27 27 28 28 #include "blk.h" 29 + #include "blk-rq-qos.h" 29 30 30 31 static struct kobject *block_depr; 31 32 ··· 560 559 */ 561 560 void del_gendisk(struct gendisk *disk) 562 561 { 562 + struct request_queue *q = disk->queue; 563 + 563 564 might_sleep(); 564 565 565 566 if (WARN_ON_ONCE(!disk_live(disk) && !(disk->flags & GENHD_FL_HIDDEN))) ··· 578 575 fsync_bdev(disk->part0); 579 576 __invalidate_device(disk->part0, true); 580 577 578 + /* 579 + * Fail any new I/O. 580 + */ 581 + set_bit(GD_DEAD, &disk->state); 581 582 set_capacity(disk, 0); 583 + 584 + /* 585 + * Prevent new I/O from crossing bio_queue_enter(). 586 + */ 587 + blk_queue_start_drain(q); 588 + blk_mq_freeze_queue_wait(q); 589 + 590 + rq_qos_exit(q); 591 + blk_sync_queue(q); 592 + blk_flush_integrity(); 593 + /* 594 + * Allow using passthrough request again after the queue is torn down. 595 + */ 596 + blk_queue_flag_clear(QUEUE_FLAG_INIT_DONE, q); 597 + __blk_mq_unfreeze_queue(q, true); 582 598 583 599 if (!(disk->flags & GENHD_FL_HIDDEN)) { 584 600 sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi"); ··· 1078 1056 struct gendisk *disk = dev_to_disk(dev); 1079 1057 1080 1058 might_sleep(); 1059 + WARN_ON_ONCE(disk_live(disk)); 1081 1060 1082 1061 disk_release_events(disk); 1083 1062 kfree(disk->random);
+6 -4
block/kyber-iosched.c
··· 151 151 152 152 struct kyber_queue_data { 153 153 struct request_queue *q; 154 + dev_t dev; 154 155 155 156 /* 156 157 * Each scheduling domain has a limited number of in-flight requests ··· 258 257 } 259 258 memset(buckets, 0, sizeof(kqd->latency_buckets[sched_domain][type])); 260 259 261 - trace_kyber_latency(kqd->q, kyber_domain_names[sched_domain], 260 + trace_kyber_latency(kqd->dev, kyber_domain_names[sched_domain], 262 261 kyber_latency_type_names[type], percentile, 263 262 bucket + 1, 1 << KYBER_LATENCY_SHIFT, samples); 264 263 ··· 271 270 depth = clamp(depth, 1U, kyber_depth[sched_domain]); 272 271 if (depth != kqd->domain_tokens[sched_domain].sb.depth) { 273 272 sbitmap_queue_resize(&kqd->domain_tokens[sched_domain], depth); 274 - trace_kyber_adjust(kqd->q, kyber_domain_names[sched_domain], 273 + trace_kyber_adjust(kqd->dev, kyber_domain_names[sched_domain], 275 274 depth); 276 275 } 277 276 } ··· 367 366 goto err; 368 367 369 368 kqd->q = q; 369 + kqd->dev = disk_devt(q->disk); 370 370 371 371 kqd->cpu_latency = alloc_percpu_gfp(struct kyber_cpu_latency, 372 372 GFP_KERNEL | __GFP_ZERO); ··· 776 774 list_del_init(&rq->queuelist); 777 775 return rq; 778 776 } else { 779 - trace_kyber_throttled(kqd->q, 777 + trace_kyber_throttled(kqd->dev, 780 778 kyber_domain_names[khd->cur_domain]); 781 779 } 782 780 } else if (sbitmap_any_bit_set(&khd->kcq_map[khd->cur_domain])) { ··· 789 787 list_del_init(&rq->queuelist); 790 788 return rq; 791 789 } else { 792 - trace_kyber_throttled(kqd->q, 790 + trace_kyber_throttled(kqd->dev, 793 791 kyber_domain_names[khd->cur_domain]); 794 792 } 795 793 }
+3
drivers/acpi/tables.c
··· 21 21 #include <linux/earlycpio.h> 22 22 #include <linux/initrd.h> 23 23 #include <linux/security.h> 24 + #include <linux/kmemleak.h> 24 25 #include "internal.h" 25 26 26 27 #ifdef CONFIG_ACPI_CUSTOM_DSDT ··· 601 600 * works fine. 602 601 */ 603 602 arch_reserve_mem_area(acpi_tables_addr, all_tables_size); 603 + 604 + kmemleak_ignore_phys(acpi_tables_addr); 604 605 605 606 /* 606 607 * early_ioremap only can remap 256k one time. If we map all
+2 -1
drivers/acpi/x86/s2idle.c
··· 371 371 return 0; 372 372 373 373 if (acpi_s2idle_vendor_amd()) { 374 - /* AMD0004, AMDI0005: 374 + /* AMD0004, AMD0005, AMDI0005: 375 375 * - Should use rev_id 0x0 376 376 * - function mask > 0x3: Should use AMD method, but has off by one bug 377 377 * - function mask = 0x3: Should use Microsoft method ··· 390 390 ACPI_LPS0_DSM_UUID_MICROSOFT, 0, 391 391 &lps0_dsm_guid_microsoft); 392 392 if (lps0_dsm_func_mask > 0x3 && (!strcmp(hid, "AMD0004") || 393 + !strcmp(hid, "AMD0005") || 393 394 !strcmp(hid, "AMDI0005"))) { 394 395 lps0_dsm_func_mask = (lps0_dsm_func_mask << 1) | 0x1; 395 396 acpi_handle_debug(adev->handle, "_DSM UUID %s: Adjusted function mask: 0x%x\n",
+1 -4
drivers/ata/libahci_platform.c
··· 440 440 hpriv->phy_regulator = devm_regulator_get(dev, "phy"); 441 441 if (IS_ERR(hpriv->phy_regulator)) { 442 442 rc = PTR_ERR(hpriv->phy_regulator); 443 - if (rc == -EPROBE_DEFER) 444 - goto err_out; 445 - rc = 0; 446 - hpriv->phy_regulator = NULL; 443 + goto err_out; 447 444 } 448 445 449 446 if (flags & AHCI_PLATFORM_GET_RESETS) {
+4 -2
drivers/ata/pata_legacy.c
··· 352 352 iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2); 353 353 354 354 if (unlikely(slop)) { 355 - __le32 pad; 355 + __le32 pad = 0; 356 + 356 357 if (rw == READ) { 357 358 pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr)); 358 359 memcpy(buf + buflen - slop, &pad, slop); ··· 743 742 ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2); 744 743 745 744 if (unlikely(slop)) { 746 - __le32 pad; 745 + __le32 pad = 0; 746 + 747 747 if (rw == WRITE) { 748 748 memcpy(&pad, buf + buflen - slop, slop); 749 749 iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
+2 -1
drivers/base/core.c
··· 687 687 { 688 688 struct device_link *link; 689 689 690 - if (!consumer || !supplier || flags & ~DL_ADD_VALID_FLAGS || 690 + if (!consumer || !supplier || consumer == supplier || 691 + flags & ~DL_ADD_VALID_FLAGS || 691 692 (flags & DL_FLAG_STATELESS && flags & DL_MANAGED_LINK_FLAGS) || 692 693 (flags & DL_FLAG_SYNC_STATE_ONLY && 693 694 (flags & ~DL_FLAG_INFERRED) != DL_FLAG_SYNC_STATE_ONLY) ||
+22 -22
drivers/block/brd.c
··· 373 373 struct gendisk *disk; 374 374 char buf[DISK_NAME_LEN]; 375 375 376 + mutex_lock(&brd_devices_mutex); 377 + list_for_each_entry(brd, &brd_devices, brd_list) { 378 + if (brd->brd_number == i) { 379 + mutex_unlock(&brd_devices_mutex); 380 + return -EEXIST; 381 + } 382 + } 376 383 brd = kzalloc(sizeof(*brd), GFP_KERNEL); 377 - if (!brd) 384 + if (!brd) { 385 + mutex_unlock(&brd_devices_mutex); 378 386 return -ENOMEM; 387 + } 379 388 brd->brd_number = i; 389 + list_add_tail(&brd->brd_list, &brd_devices); 390 + mutex_unlock(&brd_devices_mutex); 391 + 380 392 spin_lock_init(&brd->brd_lock); 381 393 INIT_RADIX_TREE(&brd->brd_pages, GFP_ATOMIC); 382 394 ··· 423 411 blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue); 424 412 blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, disk->queue); 425 413 add_disk(disk); 426 - list_add_tail(&brd->brd_list, &brd_devices); 427 414 428 415 return 0; 429 416 430 417 out_free_dev: 418 + mutex_lock(&brd_devices_mutex); 419 + list_del(&brd->brd_list); 420 + mutex_unlock(&brd_devices_mutex); 431 421 kfree(brd); 432 422 return -ENOMEM; 433 423 } 434 424 435 425 static void brd_probe(dev_t dev) 436 426 { 437 - int i = MINOR(dev) / max_part; 438 - struct brd_device *brd; 439 - 440 - mutex_lock(&brd_devices_mutex); 441 - list_for_each_entry(brd, &brd_devices, brd_list) { 442 - if (brd->brd_number == i) 443 - goto out_unlock; 444 - } 445 - 446 - brd_alloc(i); 447 - out_unlock: 448 - mutex_unlock(&brd_devices_mutex); 427 + brd_alloc(MINOR(dev) / max_part); 449 428 } 450 429 451 430 static void brd_del_one(struct brd_device *brd) 452 431 { 453 - list_del(&brd->brd_list); 454 432 del_gendisk(brd->brd_disk); 455 433 blk_cleanup_disk(brd->brd_disk); 456 434 brd_free_pages(brd); 435 + mutex_lock(&brd_devices_mutex); 436 + list_del(&brd->brd_list); 437 + mutex_unlock(&brd_devices_mutex); 457 438 kfree(brd); 458 439 } 459 440 ··· 496 491 497 492 brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL); 498 493 499 - 
mutex_lock(&brd_devices_mutex); 500 494 for (i = 0; i < rd_nr; i++) { 501 495 err = brd_alloc(i); 502 496 if (err) 503 497 goto out_free; 504 498 } 505 499 506 - mutex_unlock(&brd_devices_mutex); 507 - 508 500 pr_info("brd: module loaded\n"); 509 501 return 0; 510 502 511 503 out_free: 504 + unregister_blkdev(RAMDISK_MAJOR, "ramdisk"); 512 505 debugfs_remove_recursive(brd_debugfs_dir); 513 506 514 507 list_for_each_entry_safe(brd, next, &brd_devices, brd_list) 515 508 brd_del_one(brd); 516 - mutex_unlock(&brd_devices_mutex); 517 - unregister_blkdev(RAMDISK_MAJOR, "ramdisk"); 518 509 519 510 pr_info("brd: module NOT loaded !!!\n"); 520 511 return err; ··· 520 519 { 521 520 struct brd_device *brd, *next; 522 521 522 + unregister_blkdev(RAMDISK_MAJOR, "ramdisk"); 523 523 debugfs_remove_recursive(brd_debugfs_dir); 524 524 525 525 list_for_each_entry_safe(brd, next, &brd_devices, brd_list) 526 526 brd_del_one(brd); 527 - 528 - unregister_blkdev(RAMDISK_MAJOR, "ramdisk"); 529 527 530 528 pr_info("brd: module unloaded\n"); 531 529 }
+3 -1
drivers/block/rnbd/rnbd-clt-sysfs.c
··· 71 71 int opt_mask = 0; 72 72 int token; 73 73 int ret = -EINVAL; 74 - int i, dest_port, nr_poll_queues; 74 + int nr_poll_queues = 0; 75 + int dest_port = 0; 75 76 int p_cnt = 0; 77 + int i; 76 78 77 79 options = kstrdup(buf, GFP_KERNEL); 78 80 if (!options)
+6 -31
drivers/block/virtio_blk.c
··· 689 689 static unsigned int virtblk_queue_depth; 690 690 module_param_named(queue_depth, virtblk_queue_depth, uint, 0444); 691 691 692 - static int virtblk_validate(struct virtio_device *vdev) 693 - { 694 - u32 blk_size; 695 - 696 - if (!vdev->config->get) { 697 - dev_err(&vdev->dev, "%s failure: config access disabled\n", 698 - __func__); 699 - return -EINVAL; 700 - } 701 - 702 - if (!virtio_has_feature(vdev, VIRTIO_BLK_F_BLK_SIZE)) 703 - return 0; 704 - 705 - blk_size = virtio_cread32(vdev, 706 - offsetof(struct virtio_blk_config, blk_size)); 707 - 708 - if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE) 709 - __virtio_clear_bit(vdev, VIRTIO_BLK_F_BLK_SIZE); 710 - 711 - return 0; 712 - } 713 - 714 692 static int virtblk_probe(struct virtio_device *vdev) 715 693 { 716 694 struct virtio_blk *vblk; ··· 699 721 u16 min_io_size; 700 722 u8 physical_block_exp, alignment_offset; 701 723 unsigned int queue_depth; 724 + 725 + if (!vdev->config->get) { 726 + dev_err(&vdev->dev, "%s failure: config access disabled\n", 727 + __func__); 728 + return -EINVAL; 729 + } 702 730 703 731 err = ida_simple_get(&vd_index_ida, 0, minor_to_index(1 << MINORBITS), 704 732 GFP_KERNEL); ··· 819 835 blk_queue_logical_block_size(q, blk_size); 820 836 else 821 837 blk_size = queue_logical_block_size(q); 822 - 823 - if (blk_size < SECTOR_SIZE || blk_size > PAGE_SIZE) { 824 - dev_err(&vdev->dev, 825 - "block size is changed unexpectedly, now is %u\n", 826 - blk_size); 827 - err = -EINVAL; 828 - goto out_cleanup_disk; 829 - } 830 838 831 839 /* Use topology information if available */ 832 840 err = virtio_cread_feature(vdev, VIRTIO_BLK_F_TOPOLOGY, ··· 985 1009 .driver.name = KBUILD_MODNAME, 986 1010 .driver.owner = THIS_MODULE, 987 1011 .id_table = id_table, 988 - .validate = virtblk_validate, 989 1012 .probe = virtblk_probe, 990 1013 .remove = virtblk_remove, 991 1014 .config_changed = virtblk_config_changed,
-12
drivers/bus/Kconfig
··· 152 152 Interface 2, which can be used to connect things like NAND Flash, 153 153 SRAM, ethernet adapters, FPGAs and LCD displays. 154 154 155 - config SIMPLE_PM_BUS 156 - tristate "Simple Power-Managed Bus Driver" 157 - depends on OF && PM 158 - help 159 - Driver for transparent busses that don't need a real driver, but 160 - where the bus controller is part of a PM domain, or under the control 161 - of a functional clock, and thus relies on runtime PM for managing 162 - this PM domain and/or clock. 163 - An example of such a bus controller is the Renesas Bus State 164 - Controller (BSC, sometimes called "LBSC within Bus Bridge", or 165 - "External Bus Interface") as found on several Renesas ARM SoCs. 166 - 167 155 config SUN50I_DE2_BUS 168 156 bool "Allwinner A64 DE2 Bus Driver" 169 157 default ARM64
+1 -1
drivers/bus/Makefile
··· 27 27 obj-$(CONFIG_QCOM_EBI2) += qcom-ebi2.o 28 28 obj-$(CONFIG_SUN50I_DE2_BUS) += sun50i-de2.o 29 29 obj-$(CONFIG_SUNXI_RSB) += sunxi-rsb.o 30 - obj-$(CONFIG_SIMPLE_PM_BUS) += simple-pm-bus.o 30 + obj-$(CONFIG_OF) += simple-pm-bus.o 31 31 obj-$(CONFIG_TEGRA_ACONNECT) += tegra-aconnect.o 32 32 obj-$(CONFIG_TEGRA_GMI) += tegra-gmi.o 33 33 obj-$(CONFIG_TI_PWMSS) += ti-pwmss.o
+39 -3
drivers/bus/simple-pm-bus.c
··· 13 13 #include <linux/platform_device.h> 14 14 #include <linux/pm_runtime.h> 15 15 16 - 17 16 static int simple_pm_bus_probe(struct platform_device *pdev) 18 17 { 19 - const struct of_dev_auxdata *lookup = dev_get_platdata(&pdev->dev); 20 - struct device_node *np = pdev->dev.of_node; 18 + const struct device *dev = &pdev->dev; 19 + const struct of_dev_auxdata *lookup = dev_get_platdata(dev); 20 + struct device_node *np = dev->of_node; 21 + const struct of_device_id *match; 22 + 23 + /* 24 + * Allow user to use driver_override to bind this driver to a 25 + * transparent bus device which has a different compatible string 26 + * that's not listed in simple_pm_bus_of_match. We don't want to do any 27 + * of the simple-pm-bus tasks for these devices, so return early. 28 + */ 29 + if (pdev->driver_override) 30 + return 0; 31 + 32 + match = of_match_device(dev->driver->of_match_table, dev); 33 + /* 34 + * These are transparent bus devices (not simple-pm-bus matches) that 35 + * have their child nodes populated automatically. So, don't need to 36 + * do anything more. We only match with the device if this driver is 37 + * the most specific match because we don't want to incorrectly bind to 38 + * a device that has a more specific driver. 39 + */ 40 + if (match && match->data) { 41 + if (of_property_match_string(np, "compatible", match->compatible) == 0) 42 + return 0; 43 + else 44 + return -ENODEV; 45 + } 21 46 22 47 dev_dbg(&pdev->dev, "%s\n", __func__); 23 48 ··· 56 31 57 32 static int simple_pm_bus_remove(struct platform_device *pdev) 58 33 { 34 + const void *data = of_device_get_match_data(&pdev->dev); 35 + 36 + if (pdev->driver_override || data) 37 + return 0; 38 + 59 39 dev_dbg(&pdev->dev, "%s\n", __func__); 60 40 61 41 pm_runtime_disable(&pdev->dev); 62 42 return 0; 63 43 } 64 44 45 + #define ONLY_BUS ((void *) 1) /* Match if the device is only a bus. 
*/ 46 + 65 47 static const struct of_device_id simple_pm_bus_of_match[] = { 66 48 { .compatible = "simple-pm-bus", }, 49 + { .compatible = "simple-bus", .data = ONLY_BUS }, 50 + { .compatible = "simple-mfd", .data = ONLY_BUS }, 51 + { .compatible = "isa", .data = ONLY_BUS }, 52 + { .compatible = "arm,amba-bus", .data = ONLY_BUS }, 67 53 { /* sentinel */ } 68 54 }; 69 55 MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);
+1
drivers/clk/qcom/Kconfig
··· 564 564 565 565 config SM_GCC_6350 566 566 tristate "SM6350 Global Clock Controller" 567 + select QCOM_GDSC 567 568 help 568 569 Support for the global clock controller on SM6350 devices. 569 570 Say Y if you want to use peripheral devices such as UART,
+1 -1
drivers/clk/qcom/gcc-sm6115.c
··· 3242 3242 }; 3243 3243 3244 3244 static struct gdsc hlos1_vote_turing_mmu_tbu0_gdsc = { 3245 - .gdscr = 0x7d060, 3245 + .gdscr = 0x7d07c, 3246 3246 .pd = { 3247 3247 .name = "hlos1_vote_turing_mmu_tbu0", 3248 3248 },
+2
drivers/clk/renesas/r9a07g044-cpg.c
··· 186 186 187 187 static const unsigned int r9a07g044_crit_mod_clks[] __initconst = { 188 188 MOD_CLK_BASE + R9A07G044_GIC600_GICCLK, 189 + MOD_CLK_BASE + R9A07G044_IA55_CLK, 190 + MOD_CLK_BASE + R9A07G044_DMAC_ACLK, 189 191 }; 190 192 191 193 const struct rzg2l_cpg_info r9a07g044_cpg_info = {
+1 -1
drivers/clk/renesas/rzg2l-cpg.c
··· 391 391 392 392 value = readl(priv->base + CLK_MON_R(clock->off)); 393 393 394 - return !(value & bitmask); 394 + return value & bitmask; 395 395 } 396 396 397 397 static const struct clk_ops rzg2l_mod_clock_ops = {
-9
drivers/clk/socfpga/clk-agilex.c
··· 165 165 .name = "boot_clk", }, 166 166 }; 167 167 168 - static const struct clk_parent_data s2f_usr0_mux[] = { 169 - { .fw_name = "f2s-free-clk", 170 - .name = "f2s-free-clk", }, 171 - { .fw_name = "boot_clk", 172 - .name = "boot_clk", }, 173 - }; 174 - 175 168 static const struct clk_parent_data emac_mux[] = { 176 169 { .fw_name = "emaca_free_clk", 177 170 .name = "emaca_free_clk", }, ··· 305 312 4, 0x44, 28, 1, 0, 0, 0}, 306 313 { AGILEX_CS_TIMER_CLK, "cs_timer_clk", NULL, noc_mux, ARRAY_SIZE(noc_mux), 0, 0x24, 307 314 5, 0, 0, 0, 0x30, 1, 0}, 308 - { AGILEX_S2F_USER0_CLK, "s2f_user0_clk", NULL, s2f_usr0_mux, ARRAY_SIZE(s2f_usr0_mux), 0, 0x24, 309 - 6, 0, 0, 0, 0, 0, 0}, 310 315 { AGILEX_EMAC0_CLK, "emac0_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C, 311 316 0, 0, 0, 0, 0x94, 26, 0}, 312 317 { AGILEX_EMAC1_CLK, "emac1_clk", NULL, emac_mux, ARRAY_SIZE(emac_mux), 0, 0x7C,
+1 -1
drivers/edac/armada_xp_edac.c
··· 178 178 "details unavailable (multiple errors)"); 179 179 if (cnt_dbe) 180 180 edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 181 - cnt_sbe, /* error count */ 181 + cnt_dbe, /* error count */ 182 182 0, 0, 0, /* pfn, offset, syndrome */ 183 183 -1, -1, -1, /* top, mid, low layer */ 184 184 mci->ctl_name,
+9 -1
drivers/firmware/arm_ffa/bus.c
··· 49 49 return ffa_drv->probe(ffa_dev); 50 50 } 51 51 52 + static void ffa_device_remove(struct device *dev) 53 + { 54 + struct ffa_driver *ffa_drv = to_ffa_driver(dev->driver); 55 + 56 + ffa_drv->remove(to_ffa_dev(dev)); 57 + } 58 + 52 59 static int ffa_device_uevent(struct device *dev, struct kobj_uevent_env *env) 53 60 { 54 61 struct ffa_device *ffa_dev = to_ffa_dev(dev); ··· 93 86 .name = "arm_ffa", 94 87 .match = ffa_device_match, 95 88 .probe = ffa_device_probe, 89 + .remove = ffa_device_remove, 96 90 .uevent = ffa_device_uevent, 97 91 .dev_groups = ffa_device_attributes_groups, 98 92 }; ··· 135 127 136 128 static int __ffa_devices_unregister(struct device *dev, void *data) 137 129 { 138 - ffa_release_device(dev); 130 + device_unregister(dev); 139 131 140 132 return 0; 141 133 }
+2 -2
drivers/firmware/efi/cper.c
··· 25 25 #include <acpi/ghes.h> 26 26 #include <ras/ras_event.h> 27 27 28 - static char rcd_decode_str[CPER_REC_LEN]; 29 - 30 28 /* 31 29 * CPER record ID need to be unique even after reboot, because record 32 30 * ID is used as index for ERST storage, while CPER records from ··· 310 312 struct cper_mem_err_compact *cmem) 311 313 { 312 314 const char *ret = trace_seq_buffer_ptr(p); 315 + char rcd_decode_str[CPER_REC_LEN]; 313 316 314 317 if (cper_mem_err_location(cmem, rcd_decode_str)) 315 318 trace_seq_printf(p, "%s", rcd_decode_str); ··· 325 326 int len) 326 327 { 327 328 struct cper_mem_err_compact cmem; 329 + char rcd_decode_str[CPER_REC_LEN]; 328 330 329 331 /* Don't trust UEFI 2.1/2.2 structure with bad validation bits */ 330 332 if (len == sizeof(struct cper_sec_mem_err_old) &&
+1 -1
drivers/firmware/efi/libstub/fdt.c
··· 271 271 return status; 272 272 } 273 273 274 - efi_info("Exiting boot services and installing virtual address map...\n"); 274 + efi_info("Exiting boot services...\n"); 275 275 276 276 map.map = &memory_map; 277 277 status = efi_allocate_pages(MAX_FDT_SIZE, new_fdt_addr, ULONG_MAX);
+1 -1
drivers/firmware/efi/runtime-wrappers.c
··· 414 414 unsigned long data_size, 415 415 efi_char16_t *data) 416 416 { 417 - if (down_interruptible(&efi_runtime_lock)) { 417 + if (down_trylock(&efi_runtime_lock)) { 418 418 pr_warn("failed to invoke the reset_system() runtime service:\n" 419 419 "could not get exclusive access to the firmware\n"); 420 420 return;
+7
drivers/fpga/ice40-spi.c
··· 192 192 }; 193 193 MODULE_DEVICE_TABLE(of, ice40_fpga_of_match); 194 194 195 + static const struct spi_device_id ice40_fpga_spi_ids[] = { 196 + { .name = "ice40-fpga-mgr", }, 197 + {}, 198 + }; 199 + MODULE_DEVICE_TABLE(spi, ice40_fpga_spi_ids); 200 + 195 201 static struct spi_driver ice40_fpga_driver = { 196 202 .probe = ice40_fpga_probe, 197 203 .driver = { 198 204 .name = "ice40spi", 199 205 .of_match_table = of_match_ptr(ice40_fpga_of_match), 200 206 }, 207 + .id_table = ice40_fpga_spi_ids, 201 208 }; 202 209 203 210 module_spi_driver(ice40_fpga_driver);
+8
drivers/gpio/gpio-74x164.c
··· 174 174 return 0; 175 175 } 176 176 177 + static const struct spi_device_id gen_74x164_spi_ids[] = { 178 + { .name = "74hc595" }, 179 + { .name = "74lvc594" }, 180 + {}, 181 + }; 182 + MODULE_DEVICE_TABLE(spi, gen_74x164_spi_ids); 183 + 177 184 static const struct of_device_id gen_74x164_dt_ids[] = { 178 185 { .compatible = "fairchild,74hc595" }, 179 186 { .compatible = "nxp,74lvc594" }, ··· 195 188 }, 196 189 .probe = gen_74x164_probe, 197 190 .remove = gen_74x164_remove, 191 + .id_table = gen_74x164_spi_ids, 198 192 }; 199 193 module_spi_driver(gen_74x164_driver); 200 194
+18 -3
drivers/gpio/gpio-mockup.c
··· 476 476 477 477 static void gpio_mockup_unregister_pdevs(void) 478 478 { 479 + struct platform_device *pdev; 480 + struct fwnode_handle *fwnode; 479 481 int i; 480 482 481 - for (i = 0; i < GPIO_MOCKUP_MAX_GC; i++) 482 - platform_device_unregister(gpio_mockup_pdevs[i]); 483 + for (i = 0; i < GPIO_MOCKUP_MAX_GC; i++) { 484 + pdev = gpio_mockup_pdevs[i]; 485 + if (!pdev) 486 + continue; 487 + 488 + fwnode = dev_fwnode(&pdev->dev); 489 + platform_device_unregister(pdev); 490 + fwnode_remove_software_node(fwnode); 491 + } 483 492 } 484 493 485 494 static __init char **gpio_mockup_make_line_names(const char *label, ··· 517 508 struct property_entry properties[GPIO_MOCKUP_MAX_PROP]; 518 509 struct platform_device_info pdevinfo; 519 510 struct platform_device *pdev; 511 + struct fwnode_handle *fwnode; 520 512 char **line_names = NULL; 521 513 char chip_label[32]; 522 514 int prop = 0, base; ··· 546 536 "gpio-line-names", line_names, ngpio); 547 537 } 548 538 539 + fwnode = fwnode_create_software_node(properties, NULL); 540 + if (IS_ERR(fwnode)) 541 + return PTR_ERR(fwnode); 542 + 549 543 pdevinfo.name = "gpio-mockup"; 550 544 pdevinfo.id = idx; 551 - pdevinfo.properties = properties; 545 + pdevinfo.fwnode = fwnode; 552 546 553 547 pdev = platform_device_register_full(&pdevinfo); 554 548 kfree_strarray(line_names, ngpio); 555 549 if (IS_ERR(pdev)) { 550 + fwnode_remove_software_node(fwnode); 556 551 pr_err("error registering device"); 557 552 return PTR_ERR(pdev); 558 553 }
+9 -7
drivers/gpio/gpio-pca953x.c
··· 559 559 560 560 mutex_lock(&chip->i2c_lock); 561 561 562 - /* Disable pull-up/pull-down */ 563 - ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0); 564 - if (ret) 565 - goto exit; 566 - 567 562 /* Configure pull-up/pull-down */ 568 563 if (config == PIN_CONFIG_BIAS_PULL_UP) 569 564 ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, bit); 570 565 else if (config == PIN_CONFIG_BIAS_PULL_DOWN) 571 566 ret = regmap_write_bits(chip->regmap, pull_sel_reg, bit, 0); 567 + else 568 + ret = 0; 572 569 if (ret) 573 570 goto exit; 574 571 575 - /* Enable pull-up/pull-down */ 576 - ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit); 572 + /* Disable/Enable pull-up/pull-down */ 573 + if (config == PIN_CONFIG_BIAS_DISABLE) 574 + ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, 0); 575 + else 576 + ret = regmap_write_bits(chip->regmap, pull_en_reg, bit, bit); 577 577 578 578 exit: 579 579 mutex_unlock(&chip->i2c_lock); ··· 587 587 588 588 switch (pinconf_to_config_param(config)) { 589 589 case PIN_CONFIG_BIAS_PULL_UP: 590 + case PIN_CONFIG_BIAS_PULL_PIN_DEFAULT: 590 591 case PIN_CONFIG_BIAS_PULL_DOWN: 592 + case PIN_CONFIG_BIAS_DISABLE: 591 593 return pca953x_gpio_set_pull_up_down(chip, offset, config); 592 594 default: 593 595 return -ENOTSUPP;
+1 -17
drivers/gpu/drm/ast/ast_mode.c
··· 1300 1300 return flags; 1301 1301 } 1302 1302 1303 - static enum drm_connector_status ast_connector_detect(struct drm_connector 1304 - *connector, bool force) 1305 - { 1306 - int r; 1307 - 1308 - r = ast_get_modes(connector); 1309 - if (r <= 0) 1310 - return connector_status_disconnected; 1311 - 1312 - return connector_status_connected; 1313 - } 1314 - 1315 1303 static void ast_connector_destroy(struct drm_connector *connector) 1316 1304 { 1317 1305 struct ast_connector *ast_connector = to_ast_connector(connector); ··· 1315 1327 1316 1328 static const struct drm_connector_funcs ast_connector_funcs = { 1317 1329 .reset = drm_atomic_helper_connector_reset, 1318 - .detect = ast_connector_detect, 1319 1330 .fill_modes = drm_helper_probe_single_connector_modes, 1320 1331 .destroy = ast_connector_destroy, 1321 1332 .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state, ··· 1342 1355 connector->interlace_allowed = 0; 1343 1356 connector->doublescan_allowed = 0; 1344 1357 1345 - connector->polled = DRM_CONNECTOR_POLL_CONNECT | 1346 - DRM_CONNECTOR_POLL_DISCONNECT; 1358 + connector->polled = DRM_CONNECTOR_POLL_CONNECT; 1347 1359 1348 1360 drm_connector_attach_encoder(connector, encoder); 1349 1361 ··· 1410 1424 ast_connector_init(dev); 1411 1425 1412 1426 drm_mode_config_reset(dev); 1413 - 1414 - drm_kms_helper_poll_init(dev); 1415 1427 1416 1428 return 0; 1417 1429 }
+12 -3
drivers/gpu/drm/drm_edid.c
··· 1834 1834 u8 *edid, int num_blocks) 1835 1835 { 1836 1836 int i; 1837 - u8 num_of_ext = edid[0x7e]; 1837 + u8 last_block; 1838 + 1839 + /* 1840 + * 0x7e in the EDID is the number of extension blocks. The EDID 1841 + * is 1 (base block) + num_ext_blocks big. That means we can think 1842 + * of 0x7e in the EDID of the _index_ of the last block in the 1843 + * combined chunk of memory. 1844 + */ 1845 + last_block = edid[0x7e]; 1838 1846 1839 1847 /* Calculate real checksum for the last edid extension block data */ 1840 - connector->real_edid_checksum = 1841 - drm_edid_block_checksum(edid + num_of_ext * EDID_LENGTH); 1848 + if (last_block < num_blocks) 1849 + connector->real_edid_checksum = 1850 + drm_edid_block_checksum(edid + last_block * EDID_LENGTH); 1842 1851 1843 1852 if (connector->bad_edid_counter++ && !drm_debug_enabled(DRM_UT_KMS)) 1844 1853 return;
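As the new comment in the drm_edid.c hunk explains, byte 0x7e of the base block is the extension-block count, which makes it the index of the last block in the combined buffer; the fix refuses to compute a checksum past the number of blocks actually read. A userspace sketch of the same indexing and guard (standard EDID layout; the helper names are illustrative, not the DRM API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define EDID_LENGTH 128

/* Sum of all 128 bytes of an EDID block; 0 (mod 256) for a valid block. */
static uint8_t edid_block_checksum(const uint8_t *block)
{
    uint8_t csum = 0;

    for (int i = 0; i < EDID_LENGTH; i++)
        csum += block[i];
    return csum;
}

/* Byte 0x7e of the base block counts extension blocks, so it is also
 * the index of the last block in the buffer. Refuse to index past what
 * was actually read — the out-of-bounds access the patch guards against. */
static int last_block_checksum(const uint8_t *edid, int num_blocks, uint8_t *out)
{
    uint8_t last = edid[0x7e];

    if (last >= num_blocks)
        return -1;
    *out = edid_block_checksum(edid + (size_t)last * EDID_LENGTH);
    return 0;
}
```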
+6
drivers/gpu/drm/drm_fb_helper.c
··· 1506 1506 { 1507 1507 struct drm_client_dev *client = &fb_helper->client; 1508 1508 struct drm_device *dev = fb_helper->dev; 1509 + struct drm_mode_config *config = &dev->mode_config; 1509 1510 int ret = 0; 1510 1511 int crtc_count = 0; 1511 1512 struct drm_connector_list_iter conn_iter; ··· 1664 1663 /* Handle our overallocation */ 1665 1664 sizes.surface_height *= drm_fbdev_overalloc; 1666 1665 sizes.surface_height /= 100; 1666 + if (sizes.surface_height > config->max_height) { 1667 + drm_dbg_kms(dev, "Fbdev over-allocation too large; clamping height to %d\n", 1668 + config->max_height); 1669 + sizes.surface_height = config->max_height; 1670 + } 1667 1671 1668 1672 /* push down into drivers */ 1669 1673 ret = (*fb_helper->funcs->fb_probe)(fb_helper, &sizes);
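The fbdev helper change scales the surface height by the overallocation percentage and then clamps it to the device's `mode_config.max_height`. The arithmetic is small enough to state in isolation (a sketch, not the helper's actual signature):

```c
#include <assert.h>

/* fbdev over-allocation: scale the surface height by a percentage
 * (drm_fbdev_overalloc, default 100), then clamp to the mode_config
 * height limit so the allocation never exceeds what the hardware
 * can scan out. */
static unsigned int fbdev_surface_height(unsigned int vdisplay,
                                         unsigned int overalloc_pct,
                                         unsigned int max_height)
{
    unsigned int h = vdisplay * overalloc_pct / 100;

    return h > max_height ? max_height : h;
}
```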
+1
drivers/gpu/drm/hyperv/hyperv_drm.h
··· 46 46 int hyperv_update_vram_location(struct hv_device *hdev, phys_addr_t vram_pp); 47 47 int hyperv_update_situation(struct hv_device *hdev, u8 active, u32 bpp, 48 48 u32 w, u32 h, u32 pitch); 49 + int hyperv_hide_hw_ptr(struct hv_device *hdev); 49 50 int hyperv_update_dirt(struct hv_device *hdev, struct drm_rect *rect); 50 51 int hyperv_connect_vsp(struct hv_device *hdev); 51 52
+1
drivers/gpu/drm/hyperv/hyperv_drm_modeset.c
··· 101 101 struct hyperv_drm_device *hv = to_hv(pipe->crtc.dev); 102 102 struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state); 103 103 104 + hyperv_hide_hw_ptr(hv->hdev); 104 105 hyperv_update_situation(hv->hdev, 1, hv->screen_depth, 105 106 crtc_state->mode.hdisplay, 106 107 crtc_state->mode.vdisplay,
+53 -1
drivers/gpu/drm/hyperv/hyperv_drm_proto.c
··· 299 299 return 0; 300 300 } 301 301 302 + /* 303 + * Hyper-V supports a hardware cursor feature. It's not used by Linux VM, 304 + * but the Hyper-V host still draws a point as an extra mouse pointer, 305 + * which is unwanted, especially when Xorg is running. 306 + * 307 + * The hyperv_fb driver uses synthvid_send_ptr() to hide the unwanted 308 + * pointer, by setting msg.ptr_pos.is_visible = 1 and setting the 309 + * msg.ptr_shape.data. Note: setting msg.ptr_pos.is_visible to 0 doesn't 310 + * work in tests. 311 + * 312 + * Copy synthvid_send_ptr() to hyperv_drm and rename it to 313 + * hyperv_hide_hw_ptr(). Note: hyperv_hide_hw_ptr() is also called in the 314 + * handler of the SYNTHVID_FEATURE_CHANGE event, otherwise the host still 315 + * draws an extra unwanted mouse pointer after the VM Connection window is 316 + * closed and reopened. 317 + */ 318 + int hyperv_hide_hw_ptr(struct hv_device *hdev) 319 + { 320 + struct synthvid_msg msg; 321 + 322 + memset(&msg, 0, sizeof(struct synthvid_msg)); 323 + msg.vid_hdr.type = SYNTHVID_POINTER_POSITION; 324 + msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) + 325 + sizeof(struct synthvid_pointer_position); 326 + msg.ptr_pos.is_visible = 1; 327 + msg.ptr_pos.video_output = 0; 328 + msg.ptr_pos.image_x = 0; 329 + msg.ptr_pos.image_y = 0; 330 + hyperv_sendpacket(hdev, &msg); 331 + 332 + memset(&msg, 0, sizeof(struct synthvid_msg)); 333 + msg.vid_hdr.type = SYNTHVID_POINTER_SHAPE; 334 + msg.vid_hdr.size = sizeof(struct synthvid_msg_hdr) + 335 + sizeof(struct synthvid_pointer_shape); 336 + msg.ptr_shape.part_idx = SYNTHVID_CURSOR_COMPLETE; 337 + msg.ptr_shape.is_argb = 1; 338 + msg.ptr_shape.width = 1; 339 + msg.ptr_shape.height = 1; 340 + msg.ptr_shape.hot_x = 0; 341 + msg.ptr_shape.hot_y = 0; 342 + msg.ptr_shape.data[0] = 0; 343 + msg.ptr_shape.data[1] = 1; 344 + msg.ptr_shape.data[2] = 1; 345 + msg.ptr_shape.data[3] = 1; 346 + hyperv_sendpacket(hdev, &msg); 347 + 348 + return 0; 349 + } 350 + 302 351 int 
hyperv_update_dirt(struct hv_device *hdev, struct drm_rect *rect) 303 352 { 304 353 struct hyperv_drm_device *hv = hv_get_drvdata(hdev); ··· 441 392 return; 442 393 } 443 394 444 - if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE) 395 + if (msg->vid_hdr.type == SYNTHVID_FEATURE_CHANGE) { 445 396 hv->dirt_needed = msg->feature_chg.is_dirt_needed; 397 + if (hv->dirt_needed) 398 + hyperv_hide_hw_ptr(hv->hdev); 399 + } 446 400 } 447 401 448 402 static void hyperv_receive(void *ctx)
+5 -2
drivers/gpu/drm/i915/display/intel_acpi.c
··· 186 186 { 187 187 struct pci_dev *pdev = to_pci_dev(i915->drm.dev); 188 188 acpi_handle dhandle; 189 + union acpi_object *obj; 189 190 190 191 dhandle = ACPI_HANDLE(&pdev->dev); 191 192 if (!dhandle) 192 193 return; 193 194 194 - acpi_evaluate_dsm(dhandle, &intel_dsm_guid2, INTEL_DSM_REVISION_ID, 195 - INTEL_DSM_FN_GET_BIOS_DATA_FUNCS_SUPPORTED, NULL); 195 + obj = acpi_evaluate_dsm(dhandle, &intel_dsm_guid2, INTEL_DSM_REVISION_ID, 196 + INTEL_DSM_FN_GET_BIOS_DATA_FUNCS_SUPPORTED, NULL); 197 + if (obj) 198 + ACPI_FREE(obj); 196 199 } 197 200 198 201 /*
+4 -1
drivers/gpu/drm/i915/gem/i915_gem_context.c
··· 937 937 unsigned int n; 938 938 939 939 e = alloc_engines(num_engines); 940 + if (!e) 941 + return ERR_PTR(-ENOMEM); 942 + e->num_engines = num_engines; 943 + 940 944 for (n = 0; n < num_engines; n++) { 941 945 struct intel_context *ce; 942 946 int ret; ··· 974 970 goto free_engines; 975 971 } 976 972 } 977 - e->num_engines = num_engines; 978 973 979 974 return e; 980 975
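The i915_gem_context.c fix checks the allocation for NULL and records `num_engines` before the populate loop, so the `free_engines` error path knows how many slots to walk even when population fails partway. The shape of that pattern, as a hedged userspace sketch with hypothetical types:

```c
#include <assert.h>
#include <stdlib.h>

struct engines {
    unsigned int num_engines;
    void *engines[8];        /* slots, NULL until populated */
};

/* Record the capacity up front (as the patch moves the num_engines
 * assignment before the populate loop) and zero the slots, so a
 * cleanup walking [0, num_engines) only ever sees NULL or a live
 * pointer — never uninitialized memory. */
static struct engines *alloc_engines(unsigned int n)
{
    struct engines *e = calloc(1, sizeof(*e));

    if (!e)
        return NULL;
    e->num_engines = n;
    return e;
}

static void free_engines(struct engines *e)
{
    for (unsigned int i = 0; i < e->num_engines; i++)
        free(e->engines[i]);  /* free(NULL) is a no-op */
    free(e);
}
```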
+1
drivers/gpu/drm/i915/gt/intel_context.c
··· 421 421 422 422 mutex_destroy(&ce->pin_mutex); 423 423 i915_active_fini(&ce->active); 424 + i915_sw_fence_fini(&ce->guc_blocked); 424 425 } 425 426 426 427 void i915_context_module_exit(void)
+38 -3
drivers/gpu/drm/kmb/kmb_crtc.c
··· 66 66 .disable_vblank = kmb_crtc_disable_vblank, 67 67 }; 68 68 69 - static void kmb_crtc_set_mode(struct drm_crtc *crtc) 69 + static void kmb_crtc_set_mode(struct drm_crtc *crtc, 70 + struct drm_atomic_state *old_state) 70 71 { 71 72 struct drm_device *dev = crtc->dev; 72 73 struct drm_display_mode *m = &crtc->state->adjusted_mode; ··· 76 75 unsigned int val = 0; 77 76 78 77 /* Initialize mipi */ 79 - kmb_dsi_mode_set(kmb->kmb_dsi, m, kmb->sys_clk_mhz); 78 + kmb_dsi_mode_set(kmb->kmb_dsi, m, kmb->sys_clk_mhz, old_state); 80 79 drm_info(dev, 81 80 "vfp= %d vbp= %d vsync_len=%d hfp=%d hbp=%d hsync_len=%d\n", 82 81 m->crtc_vsync_start - m->crtc_vdisplay, ··· 139 138 struct kmb_drm_private *kmb = crtc_to_kmb_priv(crtc); 140 139 141 140 clk_prepare_enable(kmb->kmb_clk.clk_lcd); 142 - kmb_crtc_set_mode(crtc); 141 + kmb_crtc_set_mode(crtc, state); 143 142 drm_crtc_vblank_on(crtc); 144 143 } 145 144 ··· 186 185 spin_unlock_irq(&crtc->dev->event_lock); 187 186 } 188 187 188 + static enum drm_mode_status 189 + kmb_crtc_mode_valid(struct drm_crtc *crtc, 190 + const struct drm_display_mode *mode) 191 + { 192 + int refresh; 193 + struct drm_device *dev = crtc->dev; 194 + int vfp = mode->vsync_start - mode->vdisplay; 195 + 196 + if (mode->vdisplay < KMB_CRTC_MAX_HEIGHT) { 197 + drm_dbg(dev, "height = %d less than %d", 198 + mode->vdisplay, KMB_CRTC_MAX_HEIGHT); 199 + return MODE_BAD_VVALUE; 200 + } 201 + if (mode->hdisplay < KMB_CRTC_MAX_WIDTH) { 202 + drm_dbg(dev, "width = %d less than %d", 203 + mode->hdisplay, KMB_CRTC_MAX_WIDTH); 204 + return MODE_BAD_HVALUE; 205 + } 206 + refresh = drm_mode_vrefresh(mode); 207 + if (refresh < KMB_MIN_VREFRESH || refresh > KMB_MAX_VREFRESH) { 208 + drm_dbg(dev, "refresh = %d less than %d or greater than %d", 209 + refresh, KMB_MIN_VREFRESH, KMB_MAX_VREFRESH); 210 + return MODE_BAD; 211 + } 212 + 213 + if (vfp < KMB_CRTC_MIN_VFP) { 214 + drm_dbg(dev, "vfp = %d less than %d", vfp, KMB_CRTC_MIN_VFP); 215 + return MODE_BAD; 216 + } 217 + 
218 + return MODE_OK; 219 + } 220 + 189 221 static const struct drm_crtc_helper_funcs kmb_crtc_helper_funcs = { 190 222 .atomic_begin = kmb_crtc_atomic_begin, 191 223 .atomic_enable = kmb_crtc_atomic_enable, 192 224 .atomic_disable = kmb_crtc_atomic_disable, 193 225 .atomic_flush = kmb_crtc_atomic_flush, 226 + .mode_valid = kmb_crtc_mode_valid, 194 227 }; 195 228 196 229 int kmb_setup_crtc(struct drm_device *drm)
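The new `kmb_crtc_mode_valid` above rejects everything except the fixed 1920x1080 scanout at 59-60 Hz with a minimum vertical front porch. The same checks can be written as a standalone function (constants copied from the kmb_drv.h hunk in this series; the flattened parameter list is a simplification of `struct drm_display_mode`):

```c
#include <assert.h>

enum mode_status { MODE_OK, MODE_BAD_HVALUE, MODE_BAD_VVALUE, MODE_BAD };

#define KMB_CRTC_MAX_WIDTH  1920
#define KMB_CRTC_MAX_HEIGHT 1080
#define KMB_MIN_VREFRESH    59
#define KMB_MAX_VREFRESH    60
#define KMB_CRTC_MIN_VFP    4

/* The hardware only scans out a fixed 1920x1080 mode, so anything
 * smaller is rejected, along with out-of-range refresh rates and a
 * too-small vertical front porch (vsync_start - vdisplay). */
static enum mode_status kmb_mode_valid(int hdisplay, int vdisplay,
                                       int vsync_start, int refresh)
{
    if (vdisplay < KMB_CRTC_MAX_HEIGHT)
        return MODE_BAD_VVALUE;
    if (hdisplay < KMB_CRTC_MAX_WIDTH)
        return MODE_BAD_HVALUE;
    if (refresh < KMB_MIN_VREFRESH || refresh > KMB_MAX_VREFRESH)
        return MODE_BAD;
    if (vsync_start - vdisplay < KMB_CRTC_MIN_VFP)
        return MODE_BAD;
    return MODE_OK;
}
```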
+1 -1
drivers/gpu/drm/kmb/kmb_drv.c
··· 380 380 if (val & LAYER3_DMA_FIFO_UNDERFLOW) 381 381 drm_dbg(&kmb->drm, 382 382 "LAYER3:GL1 DMA UNDERFLOW val = 0x%lx", val); 383 - if (val & LAYER3_DMA_FIFO_UNDERFLOW) 383 + if (val & LAYER3_DMA_FIFO_OVERFLOW) 384 384 drm_dbg(&kmb->drm, 385 385 "LAYER3:GL1 DMA OVERFLOW val = 0x%lx", val); 386 386 }
+9 -1
drivers/gpu/drm/kmb/kmb_drv.h
··· 20 20 #define DRIVER_MAJOR 1 21 21 #define DRIVER_MINOR 1 22 22 23 + /* Platform definitions */ 24 + #define KMB_CRTC_MIN_VFP 4 25 + #define KMB_CRTC_MAX_WIDTH 1920 /* max width in pixels */ 26 + #define KMB_CRTC_MAX_HEIGHT 1080 /* max height in pixels */ 27 + #define KMB_CRTC_MIN_WIDTH 1920 28 + #define KMB_CRTC_MIN_HEIGHT 1080 23 29 #define KMB_FB_MAX_WIDTH 1920 24 30 #define KMB_FB_MAX_HEIGHT 1080 25 31 #define KMB_FB_MIN_WIDTH 1 26 32 #define KMB_FB_MIN_HEIGHT 1 27 - 33 + #define KMB_MIN_VREFRESH 59 /*vertical refresh in Hz */ 34 + #define KMB_MAX_VREFRESH 60 /*vertical refresh in Hz */ 28 35 #define KMB_LCD_DEFAULT_CLK 200000000 29 36 #define KMB_SYS_CLK_MHZ 500 30 37 ··· 57 50 spinlock_t irq_lock; 58 51 int irq_lcd; 59 52 int sys_clk_mhz; 53 + struct disp_cfg init_disp_cfg[KMB_MAX_PLANES]; 60 54 struct layer_status plane_status[KMB_MAX_PLANES]; 61 55 int kmb_under_flow; 62 56 int kmb_flush_done;
+15 -10
drivers/gpu/drm/kmb/kmb_dsi.c
··· 482 482 return 0; 483 483 } 484 484 485 + #define CLK_DIFF_LOW 50 486 + #define CLK_DIFF_HI 60 487 + #define SYSCLK_500 500 488 + 485 489 static void mipi_tx_fg_cfg_regs(struct kmb_dsi *kmb_dsi, u8 frame_gen, 486 490 struct mipi_tx_frame_timing_cfg *fg_cfg) 487 491 { ··· 496 492 /* 500 Mhz system clock minus 50 to account for the difference in 497 493 * MIPI clock speed in RTL tests 498 494 */ 499 - sysclk = kmb_dsi->sys_clk_mhz - 50; 495 + if (kmb_dsi->sys_clk_mhz == SYSCLK_500) { 496 + sysclk = kmb_dsi->sys_clk_mhz - CLK_DIFF_LOW; 497 + } else { 498 + /* 700 Mhz clk*/ 499 + sysclk = kmb_dsi->sys_clk_mhz - CLK_DIFF_HI; 500 + } 500 501 501 502 /* PPL-Pixel Packing Layer, LLP-Low Level Protocol 502 503 * Frame genartor timing parameters are clocked on the system clock, ··· 1331 1322 return 0; 1332 1323 } 1333 1324 1334 - static void connect_lcd_to_mipi(struct kmb_dsi *kmb_dsi) 1325 + static void connect_lcd_to_mipi(struct kmb_dsi *kmb_dsi, 1326 + struct drm_atomic_state *old_state) 1335 1327 { 1336 1328 struct regmap *msscam; 1337 1329 ··· 1341 1331 dev_dbg(kmb_dsi->dev, "failed to get msscam syscon"); 1342 1332 return; 1343 1333 } 1344 - 1334 + drm_atomic_bridge_chain_enable(adv_bridge, old_state); 1345 1335 /* DISABLE MIPI->CIF CONNECTION */ 1346 1336 regmap_write(msscam, MSS_MIPI_CIF_CFG, 0); 1347 1337 ··· 1352 1342 } 1353 1343 1354 1344 int kmb_dsi_mode_set(struct kmb_dsi *kmb_dsi, struct drm_display_mode *mode, 1355 - int sys_clk_mhz) 1345 + int sys_clk_mhz, struct drm_atomic_state *old_state) 1356 1346 { 1357 1347 u64 data_rate; 1358 1348 ··· 1394 1384 mipi_tx_init_cfg.lane_rate_mbps = data_rate; 1395 1385 } 1396 1386 1397 - kmb_write_mipi(kmb_dsi, DPHY_ENABLE, 0); 1398 - kmb_write_mipi(kmb_dsi, DPHY_INIT_CTRL0, 0); 1399 - kmb_write_mipi(kmb_dsi, DPHY_INIT_CTRL1, 0); 1400 - kmb_write_mipi(kmb_dsi, DPHY_INIT_CTRL2, 0); 1401 - 1402 1387 /* Initialize mipi controller */ 1403 1388 mipi_tx_init_cntrl(kmb_dsi, &mipi_tx_init_cfg); 1404 1389 1405 1390 /* Dphy 
initialization */ 1406 1391 mipi_tx_init_dphy(kmb_dsi, &mipi_tx_init_cfg); 1407 1392 1408 - connect_lcd_to_mipi(kmb_dsi); 1393 + connect_lcd_to_mipi(kmb_dsi, old_state); 1409 1394 dev_info(kmb_dsi->dev, "mipi hw initialized"); 1410 1395 1411 1396 return 0;
+1 -1
drivers/gpu/drm/kmb/kmb_dsi.h
··· 380 380 struct kmb_dsi *kmb_dsi_init(struct platform_device *pdev); 381 381 void kmb_dsi_host_unregister(struct kmb_dsi *kmb_dsi); 382 382 int kmb_dsi_mode_set(struct kmb_dsi *kmb_dsi, struct drm_display_mode *mode, 383 - int sys_clk_mhz); 383 + int sys_clk_mhz, struct drm_atomic_state *old_state); 384 384 int kmb_dsi_map_mmio(struct kmb_dsi *kmb_dsi); 385 385 int kmb_dsi_clk_init(struct kmb_dsi *kmb_dsi); 386 386 int kmb_dsi_encoder_init(struct drm_device *dev, struct kmb_dsi *kmb_dsi);
+42 -1
drivers/gpu/drm/kmb/kmb_plane.c
··· 67 67 68 68 static unsigned int check_pixel_format(struct drm_plane *plane, u32 format) 69 69 { 70 + struct kmb_drm_private *kmb; 71 + struct kmb_plane *kmb_plane = to_kmb_plane(plane); 70 72 int i; 73 + int plane_id = kmb_plane->id; 74 + struct disp_cfg init_disp_cfg; 71 75 76 + kmb = to_kmb(plane->dev); 77 + init_disp_cfg = kmb->init_disp_cfg[plane_id]; 78 + /* Due to HW limitations, changing pixel format after initial 79 + * plane configuration is not supported. 80 + */ 81 + if (init_disp_cfg.format && init_disp_cfg.format != format) { 82 + drm_dbg(&kmb->drm, "Cannot change format after initial plane configuration"); 83 + return -EINVAL; 84 + } 72 85 for (i = 0; i < plane->format_count; i++) { 73 86 if (plane->format_types[i] == format) 74 87 return 0; ··· 94 81 { 95 82 struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 96 83 plane); 84 + struct kmb_drm_private *kmb; 85 + struct kmb_plane *kmb_plane = to_kmb_plane(plane); 86 + int plane_id = kmb_plane->id; 87 + struct disp_cfg init_disp_cfg; 97 88 struct drm_framebuffer *fb; 98 89 int ret; 99 90 struct drm_crtc_state *crtc_state; 100 91 bool can_position; 101 92 93 + kmb = to_kmb(plane->dev); 94 + init_disp_cfg = kmb->init_disp_cfg[plane_id]; 102 95 fb = new_plane_state->fb; 103 96 if (!fb || !new_plane_state->crtc) 104 97 return 0; ··· 118 99 new_plane_state->crtc_w < KMB_FB_MIN_WIDTH || 119 100 new_plane_state->crtc_h < KMB_FB_MIN_HEIGHT) 120 101 return -EINVAL; 102 + 103 + /* Due to HW limitations, changing plane height or width after 104 + * initial plane configuration is not supported. 
105 + */ 106 + if ((init_disp_cfg.width && init_disp_cfg.height) && 107 + (init_disp_cfg.width != fb->width || 108 + init_disp_cfg.height != fb->height)) { 109 + drm_dbg(&kmb->drm, "Cannot change plane height or width after initial configuration"); 110 + return -EINVAL; 111 + } 121 112 can_position = (plane->type == DRM_PLANE_TYPE_OVERLAY); 122 113 crtc_state = 123 114 drm_atomic_get_existing_crtc_state(state, ··· 364 335 unsigned char plane_id; 365 336 int num_planes; 366 337 static dma_addr_t addr[MAX_SUB_PLANES]; 338 + struct disp_cfg *init_disp_cfg; 367 339 368 340 if (!plane || !new_plane_state || !old_plane_state) 369 341 return; ··· 387 357 } 388 358 spin_unlock_irq(&kmb->irq_lock); 389 359 390 - src_w = (new_plane_state->src_w >> 16); 360 + init_disp_cfg = &kmb->init_disp_cfg[plane_id]; 361 + src_w = new_plane_state->src_w >> 16; 391 362 src_h = new_plane_state->src_h >> 16; 392 363 crtc_x = new_plane_state->crtc_x; 393 364 crtc_y = new_plane_state->crtc_y; ··· 531 500 532 501 /* Enable DMA */ 533 502 kmb_write_lcd(kmb, LCD_LAYERn_DMA_CFG(plane_id), dma_cfg); 503 + 504 + /* Save initial display config */ 505 + if (!init_disp_cfg->width || 506 + !init_disp_cfg->height || 507 + !init_disp_cfg->format) { 508 + init_disp_cfg->width = width; 509 + init_disp_cfg->height = height; 510 + init_disp_cfg->format = fb->format->format; 511 + } 512 + 534 513 drm_dbg(&kmb->drm, "dma_cfg=0x%x LCD_DMA_CFG=0x%x\n", dma_cfg, 535 514 kmb_read_lcd(kmb, LCD_LAYERn_DMA_CFG(plane_id))); 536 515
+6
drivers/gpu/drm/kmb/kmb_plane.h
··· 63 63 u32 ctrl; 64 64 }; 65 65 66 + struct disp_cfg { 67 + unsigned int width; 68 + unsigned int height; 69 + unsigned int format; 70 + }; 71 + 66 72 struct kmb_plane *kmb_plane_init(struct drm_device *drm); 67 73 void kmb_plane_destroy(struct drm_plane *plane); 68 74 #endif /* __KMB_PLANE_H__ */
+24 -133
drivers/gpu/drm/mediatek/mtk_drm_crtc.c
··· 4 4 */ 5 5 6 6 #include <linux/clk.h> 7 - #include <linux/dma-mapping.h> 8 - #include <linux/mailbox_controller.h> 9 7 #include <linux/pm_runtime.h> 10 8 #include <linux/soc/mediatek/mtk-cmdq.h> 11 9 #include <linux/soc/mediatek/mtk-mmsys.h> ··· 50 52 bool pending_async_planes; 51 53 52 54 #if IS_REACHABLE(CONFIG_MTK_CMDQ) 53 - struct mbox_client cmdq_cl; 54 - struct mbox_chan *cmdq_chan; 55 - struct cmdq_pkt cmdq_handle; 55 + struct cmdq_client *cmdq_client; 56 56 u32 cmdq_event; 57 - u32 cmdq_vblank_cnt; 58 57 #endif 59 58 60 59 struct device *mmsys_dev; ··· 222 227 } 223 228 224 229 #if IS_REACHABLE(CONFIG_MTK_CMDQ) 225 - static int mtk_drm_cmdq_pkt_create(struct mbox_chan *chan, struct cmdq_pkt *pkt, 226 - size_t size) 230 + static void ddp_cmdq_cb(struct cmdq_cb_data data) 227 231 { 228 - struct device *dev; 229 - dma_addr_t dma_addr; 230 - 231 - pkt->va_base = kzalloc(size, GFP_KERNEL); 232 - if (!pkt->va_base) { 233 - kfree(pkt); 234 - return -ENOMEM; 235 - } 236 - pkt->buf_size = size; 237 - 238 - dev = chan->mbox->dev; 239 - dma_addr = dma_map_single(dev, pkt->va_base, pkt->buf_size, 240 - DMA_TO_DEVICE); 241 - if (dma_mapping_error(dev, dma_addr)) { 242 - dev_err(dev, "dma map failed, size=%u\n", (u32)(u64)size); 243 - kfree(pkt->va_base); 244 - kfree(pkt); 245 - return -ENOMEM; 246 - } 247 - 248 - pkt->pa_base = dma_addr; 249 - 250 - return 0; 251 - } 252 - 253 - static void mtk_drm_cmdq_pkt_destroy(struct mbox_chan *chan, struct cmdq_pkt *pkt) 254 - { 255 - dma_unmap_single(chan->mbox->dev, pkt->pa_base, pkt->buf_size, 256 - DMA_TO_DEVICE); 257 - kfree(pkt->va_base); 258 - kfree(pkt); 259 - } 260 - 261 - static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg) 262 - { 263 - struct mtk_drm_crtc *mtk_crtc = container_of(cl, struct mtk_drm_crtc, cmdq_cl); 264 - struct cmdq_cb_data *data = mssg; 265 - struct mtk_crtc_state *state; 266 - unsigned int i; 267 - 268 - state = to_mtk_crtc_state(mtk_crtc->base.state); 269 - 270 - state->pending_config = 
false; 271 - 272 - if (mtk_crtc->pending_planes) { 273 - for (i = 0; i < mtk_crtc->layer_nr; i++) { 274 - struct drm_plane *plane = &mtk_crtc->planes[i]; 275 - struct mtk_plane_state *plane_state; 276 - 277 - plane_state = to_mtk_plane_state(plane->state); 278 - 279 - plane_state->pending.config = false; 280 - } 281 - mtk_crtc->pending_planes = false; 282 - } 283 - 284 - if (mtk_crtc->pending_async_planes) { 285 - for (i = 0; i < mtk_crtc->layer_nr; i++) { 286 - struct drm_plane *plane = &mtk_crtc->planes[i]; 287 - struct mtk_plane_state *plane_state; 288 - 289 - plane_state = to_mtk_plane_state(plane->state); 290 - 291 - plane_state->pending.async_config = false; 292 - } 293 - mtk_crtc->pending_async_planes = false; 294 - } 295 - 296 - mtk_crtc->cmdq_vblank_cnt = 0; 297 - mtk_drm_cmdq_pkt_destroy(mtk_crtc->cmdq_chan, data->pkt); 232 + cmdq_pkt_destroy(data.data); 298 233 } 299 234 #endif 300 235 ··· 378 453 state->pending_vrefresh, 0, 379 454 cmdq_handle); 380 455 381 - if (!cmdq_handle) 382 - state->pending_config = false; 456 + state->pending_config = false; 383 457 } 384 458 385 459 if (mtk_crtc->pending_planes) { ··· 398 474 mtk_ddp_comp_layer_config(comp, local_layer, 399 475 plane_state, 400 476 cmdq_handle); 401 - if (!cmdq_handle) 402 - plane_state->pending.config = false; 477 + plane_state->pending.config = false; 403 478 } 404 - 405 - if (!cmdq_handle) 406 - mtk_crtc->pending_planes = false; 479 + mtk_crtc->pending_planes = false; 407 480 } 408 481 409 482 if (mtk_crtc->pending_async_planes) { ··· 420 499 mtk_ddp_comp_layer_config(comp, local_layer, 421 500 plane_state, 422 501 cmdq_handle); 423 - if (!cmdq_handle) 424 - plane_state->pending.async_config = false; 502 + plane_state->pending.async_config = false; 425 503 } 426 - 427 - if (!cmdq_handle) 428 - mtk_crtc->pending_async_planes = false; 504 + mtk_crtc->pending_async_planes = false; 429 505 } 430 506 } 431 507 ··· 430 512 bool needs_vblank) 431 513 { 432 514 #if IS_REACHABLE(CONFIG_MTK_CMDQ) 433 
- struct cmdq_pkt *cmdq_handle = &mtk_crtc->cmdq_handle; 515 + struct cmdq_pkt *cmdq_handle; 434 516 #endif 435 517 struct drm_crtc *crtc = &mtk_crtc->base; 436 518 struct mtk_drm_private *priv = crtc->dev->dev_private; ··· 468 550 mtk_mutex_release(mtk_crtc->mutex); 469 551 } 470 552 #if IS_REACHABLE(CONFIG_MTK_CMDQ) 471 - if (mtk_crtc->cmdq_chan) { 472 - mbox_flush(mtk_crtc->cmdq_chan, 2000); 473 - cmdq_handle->cmd_buf_size = 0; 553 + if (mtk_crtc->cmdq_client) { 554 + mbox_flush(mtk_crtc->cmdq_client->chan, 2000); 555 + cmdq_handle = cmdq_pkt_create(mtk_crtc->cmdq_client, PAGE_SIZE); 474 556 cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event); 475 557 cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event, false); 476 558 mtk_crtc_ddp_config(crtc, cmdq_handle); 477 559 cmdq_pkt_finalize(cmdq_handle); 478 - dma_sync_single_for_device(mtk_crtc->cmdq_chan->mbox->dev, 479 - cmdq_handle->pa_base, 480 - cmdq_handle->cmd_buf_size, 481 - DMA_TO_DEVICE); 482 - /* 483 - * CMDQ command should execute in next vblank, 484 - * If it fail to execute in next 2 vblank, timeout happen. 
485 - */ 486 - mtk_crtc->cmdq_vblank_cnt = 2; 487 - mbox_send_message(mtk_crtc->cmdq_chan, cmdq_handle); 488 - mbox_client_txdone(mtk_crtc->cmdq_chan, 0); 560 + cmdq_pkt_flush_async(cmdq_handle, ddp_cmdq_cb, cmdq_handle); 489 561 } 490 562 #endif 491 563 mtk_crtc->config_updating = false; ··· 489 581 struct mtk_drm_private *priv = crtc->dev->dev_private; 490 582 491 583 #if IS_REACHABLE(CONFIG_MTK_CMDQ) 492 - if (!priv->data->shadow_register && !mtk_crtc->cmdq_chan) 493 - mtk_crtc_ddp_config(crtc, NULL); 494 - else if (mtk_crtc->cmdq_vblank_cnt > 0 && --mtk_crtc->cmdq_vblank_cnt == 0) 495 - DRM_ERROR("mtk_crtc %d CMDQ execute command timeout!\n", 496 - drm_crtc_index(&mtk_crtc->base)); 584 + if (!priv->data->shadow_register && !mtk_crtc->cmdq_client) 497 585 #else 498 586 if (!priv->data->shadow_register) 499 - mtk_crtc_ddp_config(crtc, NULL); 500 587 #endif 588 + mtk_crtc_ddp_config(crtc, NULL); 589 + 501 590 mtk_drm_finish_page_flip(mtk_crtc); 502 591 } 503 592 ··· 829 924 mutex_init(&mtk_crtc->hw_lock); 830 925 831 926 #if IS_REACHABLE(CONFIG_MTK_CMDQ) 832 - mtk_crtc->cmdq_cl.dev = mtk_crtc->mmsys_dev; 833 - mtk_crtc->cmdq_cl.tx_block = false; 834 - mtk_crtc->cmdq_cl.knows_txdone = true; 835 - mtk_crtc->cmdq_cl.rx_callback = ddp_cmdq_cb; 836 - mtk_crtc->cmdq_chan = 837 - mbox_request_channel(&mtk_crtc->cmdq_cl, 838 - drm_crtc_index(&mtk_crtc->base)); 839 - if (IS_ERR(mtk_crtc->cmdq_chan)) { 927 + mtk_crtc->cmdq_client = 928 + cmdq_mbox_create(mtk_crtc->mmsys_dev, 929 + drm_crtc_index(&mtk_crtc->base)); 930 + if (IS_ERR(mtk_crtc->cmdq_client)) { 840 931 dev_dbg(dev, "mtk_crtc %d failed to create mailbox client, writing register by CPU now\n", 841 932 drm_crtc_index(&mtk_crtc->base)); 842 - mtk_crtc->cmdq_chan = NULL; 933 + mtk_crtc->cmdq_client = NULL; 843 934 } 844 935 845 - if (mtk_crtc->cmdq_chan) { 936 + if (mtk_crtc->cmdq_client) { 846 937 ret = of_property_read_u32_index(priv->mutex_node, 847 938 "mediatek,gce-events", 848 939 
drm_crtc_index(&mtk_crtc->base), ··· 846 945 if (ret) { 847 946 dev_dbg(dev, "mtk_crtc %d failed to get mediatek,gce-events property\n", 848 947 drm_crtc_index(&mtk_crtc->base)); 849 - mbox_free_channel(mtk_crtc->cmdq_chan); 850 - mtk_crtc->cmdq_chan = NULL; 851 - } else { 852 - ret = mtk_drm_cmdq_pkt_create(mtk_crtc->cmdq_chan, 853 - &mtk_crtc->cmdq_handle, 854 - PAGE_SIZE); 855 - if (ret) { 856 - dev_dbg(dev, "mtk_crtc %d failed to create cmdq packet\n", 857 - drm_crtc_index(&mtk_crtc->base)); 858 - mbox_free_channel(mtk_crtc->cmdq_chan); 859 - mtk_crtc->cmdq_chan = NULL; 860 - } 948 + cmdq_mbox_destroy(mtk_crtc->cmdq_client); 949 + mtk_crtc->cmdq_client = NULL; 861 950 } 862 951 } 863 952 #endif
+5 -4
drivers/gpu/drm/msm/adreno/a3xx_gpu.c
··· 571 571 } 572 572 573 573 icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem"); 574 - ret = IS_ERR(icc_path); 575 - if (ret) 574 + if (IS_ERR(icc_path)) { 575 + ret = PTR_ERR(icc_path); 576 576 goto fail; 577 + } 577 578 578 579 ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem"); 579 - ret = IS_ERR(ocmem_icc_path); 580 - if (ret) { 580 + if (IS_ERR(ocmem_icc_path)) { 581 + ret = PTR_ERR(ocmem_icc_path); 581 582 /* allow -ENODATA, ocmem icc is optional */ 582 583 if (ret != -ENODATA) 583 584 goto fail;
+5 -4
drivers/gpu/drm/msm/adreno/a4xx_gpu.c
··· 699 699 } 700 700 701 701 icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem"); 702 - ret = IS_ERR(icc_path); 703 - if (ret) 702 + if (IS_ERR(icc_path)) { 703 + ret = PTR_ERR(icc_path); 704 704 goto fail; 705 + } 705 706 706 707 ocmem_icc_path = devm_of_icc_get(&pdev->dev, "ocmem"); 707 - ret = IS_ERR(ocmem_icc_path); 708 - if (ret) { 708 + if (IS_ERR(ocmem_icc_path)) { 709 + ret = PTR_ERR(ocmem_icc_path); 709 710 /* allow -ENODATA, ocmem icc is optional */ 710 711 if (ret != -ENODATA) 711 712 goto fail;
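Both the a3xx and a4xx hunks fix the same bug: `ret = IS_ERR(icc_path)` stores the boolean 1 rather than the encoded errno, so the later `ret != -ENODATA` check (which makes the ocmem path optional) could never match. The kernel's error-pointer encoding can be reproduced in userspace to show why; this is a sketch of the convention (errnos live in the top `MAX_ERRNO` addresses), not the kernel's err.h verbatim.

```c
#include <assert.h>

/* Kernel-style error-pointer encoding: the top MAX_ERRNO addresses of
 * the pointer space carry a negative errno value. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

With this encoding, `IS_ERR()` only answers yes/no; the actual errno must be recovered with `PTR_ERR()`, which is exactly what the fixed code does before comparing against `-ENODATA`.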
+6
drivers/gpu/drm/msm/adreno/a6xx_gmu.c
··· 296 296 u32 val; 297 297 int request, ack; 298 298 299 + WARN_ON_ONCE(!mutex_is_locked(&gmu->lock)); 300 + 299 301 if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits)) 300 302 return -EINVAL; 301 303 ··· 338 336 void a6xx_gmu_clear_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state) 339 337 { 340 338 int bit; 339 + 340 + WARN_ON_ONCE(!mutex_is_locked(&gmu->lock)); 341 341 342 342 if (state >= ARRAY_SIZE(a6xx_gmu_oob_bits)) 343 343 return; ··· 1485 1481 1486 1482 if (!pdev) 1487 1483 return -ENODEV; 1484 + 1485 + mutex_init(&gmu->lock); 1488 1486 1489 1487 gmu->dev = &pdev->dev; 1490 1488
+3
drivers/gpu/drm/msm/adreno/a6xx_gmu.h
··· 44 44 struct a6xx_gmu { 45 45 struct device *dev; 46 46 47 + /* For serializing communication with the GMU: */ 48 + struct mutex lock; 49 + 47 50 struct msm_gem_address_space *aspace; 48 51 49 52 void * __iomem mmio;
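The new `gmu->lock` serializes all communication with the GMU, and the OOB helpers in the a6xx_gmu.c hunk assert the caller holds it via `WARN_ON_ONCE(!mutex_is_locked(&gmu->lock))`. A toy model of that locking contract, with a plain flag standing in for a real mutex (pthreads has no portable `mutex_is_locked()`, and the point here is the asserted precondition, not the locking itself):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the gmu->lock contract: every OOB helper asserts that
 * the caller already took the lock, mirroring the patch's
 * WARN_ON_ONCE(!mutex_is_locked(&gmu->lock)). */
struct gmu {
    bool lock_held;
    unsigned int oob_bits;
};

static void gmu_lock(struct gmu *g)   { assert(!g->lock_held); g->lock_held = true; }
static void gmu_unlock(struct gmu *g) { assert(g->lock_held);  g->lock_held = false; }

static void gmu_set_oob(struct gmu *g, unsigned int bit)
{
    assert(g->lock_held);            /* caller must hold gmu->lock */
    g->oob_bits |= 1u << bit;
}

static void gmu_clear_oob(struct gmu *g, unsigned int bit)
{
    assert(g->lock_held);
    g->oob_bits &= ~(1u << bit);
}
```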
+44 -9
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 106 106 u32 asid; 107 107 u64 memptr = rbmemptr(ring, ttbr0); 108 108 109 - if (ctx == a6xx_gpu->cur_ctx) 109 + if (ctx->seqno == a6xx_gpu->cur_ctx_seqno) 110 110 return; 111 111 112 112 if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid)) ··· 139 139 OUT_PKT7(ring, CP_EVENT_WRITE, 1); 140 140 OUT_RING(ring, 0x31); 141 141 142 - a6xx_gpu->cur_ctx = ctx; 142 + a6xx_gpu->cur_ctx_seqno = ctx->seqno; 143 143 } 144 144 145 145 static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) ··· 881 881 A6XX_RBBM_INT_0_MASK_UCHE_OOB_ACCESS | \ 882 882 A6XX_RBBM_INT_0_MASK_UCHE_TRAP_INTR) 883 883 884 - static int a6xx_hw_init(struct msm_gpu *gpu) 884 + static int hw_init(struct msm_gpu *gpu) 885 885 { 886 886 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 887 887 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); ··· 1081 1081 /* Always come up on rb 0 */ 1082 1082 a6xx_gpu->cur_ring = gpu->rb[0]; 1083 1083 1084 - a6xx_gpu->cur_ctx = NULL; 1084 + a6xx_gpu->cur_ctx_seqno = 0; 1085 1085 1086 1086 /* Enable the SQE_to start the CP engine */ 1087 1087 gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 1); ··· 1131 1131 /* Take the GMU out of its special boot mode */ 1132 1132 a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_BOOT_SLUMBER); 1133 1133 } 1134 + 1135 + return ret; 1136 + } 1137 + 1138 + static int a6xx_hw_init(struct msm_gpu *gpu) 1139 + { 1140 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 1141 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 1142 + int ret; 1143 + 1144 + mutex_lock(&a6xx_gpu->gmu.lock); 1145 + ret = hw_init(gpu); 1146 + mutex_unlock(&a6xx_gpu->gmu.lock); 1134 1147 1135 1148 return ret; 1136 1149 } ··· 1522 1509 1523 1510 trace_msm_gpu_resume(0); 1524 1511 1512 + mutex_lock(&a6xx_gpu->gmu.lock); 1525 1513 ret = a6xx_gmu_resume(a6xx_gpu); 1514 + mutex_unlock(&a6xx_gpu->gmu.lock); 1526 1515 if (ret) 1527 1516 return ret; 1528 1517 ··· 1547 1532 1548 1533 msm_devfreq_suspend(gpu); 1549 1534 1535 + 
mutex_lock(&a6xx_gpu->gmu.lock); 1550 1536 ret = a6xx_gmu_stop(a6xx_gpu); 1537 + mutex_unlock(&a6xx_gpu->gmu.lock); 1551 1538 if (ret) 1552 1539 return ret; 1553 1540 ··· 1564 1547 { 1565 1548 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 1566 1549 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 1567 - static DEFINE_MUTEX(perfcounter_oob); 1568 1550 1569 - mutex_lock(&perfcounter_oob); 1551 + mutex_lock(&a6xx_gpu->gmu.lock); 1570 1552 1571 1553 /* Force the GPU power on so we can read this register */ 1572 1554 a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET); 1573 1555 1574 1556 *value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO, 1575 - REG_A6XX_CP_ALWAYS_ON_COUNTER_HI); 1557 + REG_A6XX_CP_ALWAYS_ON_COUNTER_HI); 1576 1558 1577 1559 a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET); 1578 - mutex_unlock(&perfcounter_oob); 1560 + 1561 + mutex_unlock(&a6xx_gpu->gmu.lock); 1562 + 1579 1563 return 0; 1580 1564 } 1581 1565 ··· 1638 1620 return ~0LU; 1639 1621 1640 1622 return (unsigned long)busy_time; 1623 + } 1624 + 1625 + void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp) 1626 + { 1627 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 1628 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 1629 + 1630 + mutex_lock(&a6xx_gpu->gmu.lock); 1631 + a6xx_gmu_set_freq(gpu, opp); 1632 + mutex_unlock(&a6xx_gpu->gmu.lock); 1641 1633 } 1642 1634 1643 1635 static struct msm_gem_address_space * ··· 1794 1766 #endif 1795 1767 .gpu_busy = a6xx_gpu_busy, 1796 1768 .gpu_get_freq = a6xx_gmu_get_freq, 1797 - .gpu_set_freq = a6xx_gmu_set_freq, 1769 + .gpu_set_freq = a6xx_gpu_set_freq, 1798 1770 #if defined(CONFIG_DRM_MSM_GPU_STATE) 1799 1771 .gpu_state_get = a6xx_gpu_state_get, 1800 1772 .gpu_state_put = a6xx_gpu_state_put, ··· 1837 1809 if (info && (info->revn == 650 || info->revn == 660 || 1838 1810 adreno_cmp_rev(ADRENO_REV(6, 3, 5, ANY_ID), info->rev))) 1839 1811 adreno_gpu->base.hw_apriv = true; 1812 + 1813 + /* 1814 + * 
For now only clamp to idle freq for devices where this is known not 1815 + * to cause power supply issues: 1816 + */ 1817 + if (info && (info->revn == 618)) 1818 + gpu->clamp_to_idle = true; 1840 1819 1841 1820 a6xx_llc_slices_init(pdev, a6xx_gpu); 1842 1821
+10 -1
drivers/gpu/drm/msm/adreno/a6xx_gpu.h
··· 19 19 uint64_t sqe_iova; 20 20 21 21 struct msm_ringbuffer *cur_ring; 22 - struct msm_file_private *cur_ctx; 22 + 23 + /** 24 + * cur_ctx_seqno: 25 + * 26 + * The ctx->seqno value of the context with current pgtables 27 + * installed. Tracked by seqno rather than pointer value to 28 + * avoid dangling pointers, and cases where a ctx can be freed 29 + * and a new one created with the same address. 30 + */ 31 + int cur_ctx_seqno; 23 32 24 33 struct a6xx_gmu gmu; 25 34
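The new kernel-doc comment above spells out why pagetable switching is now keyed on `ctx->seqno` rather than the context pointer: a freed context's address can be reused by a new one, making a pointer comparison wrongly skip the switch. A sketch of that ABA hazard and the seqno fix, with hypothetical names:

```c
#include <assert.h>
#include <stdlib.h>

/* Contexts get a monotonically increasing seqno at creation. Even if a
 * freed context's address is later reused for a new context, the
 * seqnos differ, so comparing seqnos never skips a needed switch. */
struct file_ctx {
    int seqno;
};

static int next_seqno = 1;

static struct file_ctx *ctx_create(void)
{
    struct file_ctx *ctx = malloc(sizeof(*ctx));

    if (ctx)
        ctx->seqno = next_seqno++;
    return ctx;
}

/* Returns 1 if a pagetable switch is needed, recording the new current
 * seqno; returns 0 when the same context is already installed. */
static int maybe_switch_pagetables(int *cur_ctx_seqno, const struct file_ctx *ctx)
{
    if (ctx->seqno == *cur_ctx_seqno)
        return 0;
    *cur_ctx_seqno = ctx->seqno;
    return 1;
}
```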
+1 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
··· 794 794 DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30), 795 795 -1), 796 796 PP_BLK("pingpong_5", PINGPONG_5, 0x72800, MERGE_3D_2, sdm845_pp_sblk, 797 - DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 30), 797 + DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 31), 798 798 -1), 799 799 }; 800 800
+16
drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
··· 1125 1125 __drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base); 1126 1126 } 1127 1127 1128 + static const struct drm_crtc_funcs mdp5_crtc_no_lm_cursor_funcs = { 1129 + .set_config = drm_atomic_helper_set_config, 1130 + .destroy = mdp5_crtc_destroy, 1131 + .page_flip = drm_atomic_helper_page_flip, 1132 + .reset = mdp5_crtc_reset, 1133 + .atomic_duplicate_state = mdp5_crtc_duplicate_state, 1134 + .atomic_destroy_state = mdp5_crtc_destroy_state, 1135 + .atomic_print_state = mdp5_crtc_atomic_print_state, 1136 + .get_vblank_counter = mdp5_crtc_get_vblank_counter, 1137 + .enable_vblank = msm_crtc_enable_vblank, 1138 + .disable_vblank = msm_crtc_disable_vblank, 1139 + .get_vblank_timestamp = drm_crtc_vblank_helper_get_vblank_timestamp, 1140 + }; 1141 + 1128 1142 static const struct drm_crtc_funcs mdp5_crtc_funcs = { 1129 1143 .set_config = drm_atomic_helper_set_config, 1130 1144 .destroy = mdp5_crtc_destroy, ··· 1327 1313 mdp5_crtc->lm_cursor_enabled = cursor_plane ? false : true; 1328 1314 1329 1315 drm_crtc_init_with_planes(dev, crtc, plane, cursor_plane, 1316 + cursor_plane ? 1317 + &mdp5_crtc_no_lm_cursor_funcs : 1330 1318 &mdp5_crtc_funcs, NULL); 1331 1319 1332 1320 drm_flip_work_init(&mdp5_crtc->unref_cursor_work,
+5 -5
drivers/gpu/drm/msm/dp/dp_display.c
··· 1309 1309 * can not declared display is connected unless 1310 1310 * HDMI cable is plugged in and sink_count of 1311 1311 * dongle become 1 1312 + * also only signal audio when disconnected 1312 1313 */ 1313 - if (dp->link->sink_count) 1314 + if (dp->link->sink_count) { 1314 1315 dp->dp_display.is_connected = true; 1315 - else 1316 + } else { 1316 1317 dp->dp_display.is_connected = false; 1317 - 1318 - dp_display_handle_plugged_change(g_dp_display, 1319 - dp->dp_display.is_connected); 1318 + dp_display_handle_plugged_change(g_dp_display, false); 1319 + } 1320 1320 1321 1321 DRM_DEBUG_DP("After, sink_count=%d is_connected=%d core_inited=%d power_on=%d\n", 1322 1322 dp->link->sink_count, dp->dp_display.is_connected,
+3 -1
drivers/gpu/drm/msm/dsi/dsi.c
··· 215 215 goto fail; 216 216 } 217 217 218 - if (!msm_dsi_manager_validate_current_config(msm_dsi->id)) 218 + if (!msm_dsi_manager_validate_current_config(msm_dsi->id)) { 219 + ret = -EINVAL; 219 220 goto fail; 221 + } 220 222 221 223 msm_dsi->encoder = encoder; 222 224
+1 -1
drivers/gpu/drm/msm/dsi/dsi_host.c
··· 451 451 452 452 return 0; 453 453 err: 454 - for (; i > 0; i--) 454 + while (--i >= 0) 455 455 clk_disable_unprepare(msm_host->bus_clks[i]); 456 456 457 457 return ret;
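The one-line loop change in this hunk fixes a classic unwind off-by-one: when enabling clock `i` fails, clocks `0..i-1` are the ones that need disabling, but `for (; i > 0; i--)` starts at the failed index and never reaches index 0. A standalone sketch of the corrected pattern — hypothetical `enable`/`disable` helpers standing in for `clk_prepare_enable()`/`clk_disable_unprepare()`, not the msm driver's API:

```c
#include <assert.h>
#include <stdbool.h>

#define N 4

static bool enabled[N];

/* Pretend-enable: fails at the given index to simulate an enable error. */
static int enable(int i, int fail_at)
{
	if (i == fail_at)
		return -1;
	enabled[i] = true;
	return 0;
}

static void disable(int i)
{
	enabled[i] = false;
}

/* Enable all N resources; on failure, unwind the ones already enabled. */
static int enable_all(int fail_at)
{
	int i, ret = 0;

	for (i = 0; i < N; i++) {
		ret = enable(i, fail_at);
		if (ret)
			goto err;
	}
	return 0;
err:
	/* while (--i >= 0) visits i-1 down to 0 -- exactly the enabled set.
	 * The buggy form, for (; i > 0; i--), would "disable" the resource
	 * that never got enabled and would skip index 0 entirely. */
	while (--i >= 0)
		disable(i);
	return ret;
}
```

The same shape applies to any array of resources brought up in order and torn down on partial failure.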
+15 -15
drivers/gpu/drm/msm/dsi/phy/dsi_phy_14nm.c
··· 110 110 static bool pll_14nm_poll_for_ready(struct dsi_pll_14nm *pll_14nm, 111 111 u32 nb_tries, u32 timeout_us) 112 112 { 113 - bool pll_locked = false; 113 + bool pll_locked = false, pll_ready = false; 114 114 void __iomem *base = pll_14nm->phy->pll_base; 115 115 u32 tries, val; 116 116 117 117 tries = nb_tries; 118 118 while (tries--) { 119 - val = dsi_phy_read(base + 120 - REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS); 119 + val = dsi_phy_read(base + REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS); 121 120 pll_locked = !!(val & BIT(5)); 122 121 123 122 if (pll_locked) ··· 125 126 udelay(timeout_us); 126 127 } 127 128 128 - if (!pll_locked) { 129 - tries = nb_tries; 130 - while (tries--) { 131 - val = dsi_phy_read(base + 132 - REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS); 133 - pll_locked = !!(val & BIT(0)); 129 + if (!pll_locked) 130 + goto out; 134 131 135 - if (pll_locked) 136 - break; 132 + tries = nb_tries; 133 + while (tries--) { 134 + val = dsi_phy_read(base + REG_DSI_14nm_PHY_PLL_RESET_SM_READY_STATUS); 135 + pll_ready = !!(val & BIT(0)); 137 136 138 - udelay(timeout_us); 139 - } 137 + if (pll_ready) 138 + break; 139 + 140 + udelay(timeout_us); 140 141 } 141 142 142 - DBG("DSI PLL is %slocked", pll_locked ? "" : "*not* "); 143 + out: 144 + DBG("DSI PLL is %slocked, %sready", pll_locked ? "" : "*not* ", pll_ready ? "" : "*not* "); 143 145 144 - return pll_locked; 146 + return pll_locked && pll_ready; 145 147 } 146 148 147 149 static void dsi_pll_14nm_config_init(struct dsi_pll_config *pconf)
+2 -2
drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c
··· 428 428 bytediv->reg = pll_28nm->phy->pll_base + REG_DSI_28nm_8960_PHY_PLL_CTRL_9; 429 429 430 430 snprintf(parent_name, 32, "dsi%dvco_clk", pll_28nm->phy->id); 431 - snprintf(clk_name, 32, "dsi%dpllbyte", pll_28nm->phy->id); 431 + snprintf(clk_name, 32, "dsi%dpllbyte", pll_28nm->phy->id + 1); 432 432 433 433 bytediv_init.name = clk_name; 434 434 bytediv_init.ops = &clk_bytediv_ops; ··· 442 442 return ret; 443 443 provided_clocks[DSI_BYTE_PLL_CLK] = &bytediv->hw; 444 444 445 - snprintf(clk_name, 32, "dsi%dpll", pll_28nm->phy->id); 445 + snprintf(clk_name, 32, "dsi%dpll", pll_28nm->phy->id + 1); 446 446 /* DIV3 */ 447 447 hw = devm_clk_hw_register_divider(dev, clk_name, 448 448 parent_name, 0, pll_28nm->phy->pll_base +
+2 -1
drivers/gpu/drm/msm/edp/edp_ctrl.c
··· 1116 1116 int msm_edp_ctrl_init(struct msm_edp *edp) 1117 1117 { 1118 1118 struct edp_ctrl *ctrl = NULL; 1119 - struct device *dev = &edp->pdev->dev; 1119 + struct device *dev; 1120 1120 int ret; 1121 1121 1122 1122 if (!edp) { ··· 1124 1124 return -EINVAL; 1125 1125 } 1126 1126 1127 + dev = &edp->pdev->dev; 1127 1128 ctrl = devm_kzalloc(dev, sizeof(*ctrl), GFP_KERNEL); 1128 1129 if (!ctrl) 1129 1130 return -ENOMEM;
+11 -4
drivers/gpu/drm/msm/msm_drv.c
··· 630 630 if (ret) 631 631 goto err_msm_uninit; 632 632 633 - ret = msm_disp_snapshot_init(ddev); 634 - if (ret) 635 - DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret); 636 - 633 + if (kms) { 634 + ret = msm_disp_snapshot_init(ddev); 635 + if (ret) 636 + DRM_DEV_ERROR(dev, "msm_disp_snapshot_init failed ret = %d\n", ret); 637 + } 637 638 drm_mode_config_reset(ddev); 638 639 639 640 #ifdef CONFIG_DRM_FBDEV_EMULATION ··· 683 682 684 683 static int context_init(struct drm_device *dev, struct drm_file *file) 685 684 { 685 + static atomic_t ident = ATOMIC_INIT(0); 686 686 struct msm_drm_private *priv = dev->dev_private; 687 687 struct msm_file_private *ctx; 688 688 ··· 691 689 if (!ctx) 692 690 return -ENOMEM; 693 691 692 + INIT_LIST_HEAD(&ctx->submitqueues); 693 + rwlock_init(&ctx->queuelock); 694 + 694 695 kref_init(&ctx->ref); 695 696 msm_submitqueue_init(dev, ctx); 696 697 697 698 ctx->aspace = msm_gpu_create_private_address_space(priv->gpu, current); 698 699 file->driver_priv = ctx; 700 + 701 + ctx->seqno = atomic_inc_return(&ident); 699 702 700 703 return 0; 701 704 }
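The `ctx->seqno = atomic_inc_return(&ident)` line added above is a common kernel pattern: identify an object by a monotonically increasing sequence number rather than by its pointer, so a context that is freed and reallocated at the same address can never be mistaken for the old one (the motivation spelled out in the `cur_ctx_seqno` comment in a6xx_gpu.h). A userspace sketch of the idea, with hypothetical names, not the driver code:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

struct ctx {
	int seqno;	/* unique per-context identity */
};

/* Global counter, analogous to the static atomic_t in context_init(). */
static atomic_int ident;

static struct ctx *ctx_create(void)
{
	struct ctx *ctx = calloc(1, sizeof(*ctx));

	if (!ctx)
		return NULL;
	/* atomic_inc_return() equivalent: fetch-then-add, plus one */
	ctx->seqno = atomic_fetch_add(&ident, 1) + 1;
	return ctx;
}

/* Comparing seqnos is safe even if one ctx was freed and its slot
 * reused: a new context always gets a fresh seqno, whereas malloc
 * may hand a new allocation back at the same address. */
static int ctx_same(int cur_ctx_seqno, const struct ctx *ctx)
{
	return cur_ctx_seqno == ctx->seqno;
}
```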
+2 -45
drivers/gpu/drm/msm/msm_drv.h
··· 53 53 54 54 #define FRAC_16_16(mult, div) (((mult) << 16) / (div)) 55 55 56 - struct msm_file_private { 57 - rwlock_t queuelock; 58 - struct list_head submitqueues; 59 - int queueid; 60 - struct msm_gem_address_space *aspace; 61 - struct kref ref; 62 - }; 63 - 64 56 enum msm_mdp_plane_property { 65 57 PLANE_PROP_ZPOS, 66 58 PLANE_PROP_ALPHA, ··· 480 488 u32 msm_readl(const void __iomem *addr); 481 489 void msm_rmw(void __iomem *addr, u32 mask, u32 or); 482 490 483 - struct msm_gpu_submitqueue; 484 - int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx); 485 - struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx, 486 - u32 id); 487 - int msm_submitqueue_create(struct drm_device *drm, 488 - struct msm_file_private *ctx, 489 - u32 prio, u32 flags, u32 *id); 490 - int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx, 491 - struct drm_msm_submitqueue_query *args); 492 - int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id); 493 - void msm_submitqueue_close(struct msm_file_private *ctx); 494 - 495 - void msm_submitqueue_destroy(struct kref *kref); 496 - 497 - static inline void __msm_file_private_destroy(struct kref *kref) 498 - { 499 - struct msm_file_private *ctx = container_of(kref, 500 - struct msm_file_private, ref); 501 - 502 - msm_gem_address_space_put(ctx->aspace); 503 - kfree(ctx); 504 - } 505 - 506 - static inline void msm_file_private_put(struct msm_file_private *ctx) 507 - { 508 - kref_put(&ctx->ref, __msm_file_private_destroy); 509 - } 510 - 511 - static inline struct msm_file_private *msm_file_private_get( 512 - struct msm_file_private *ctx) 513 - { 514 - kref_get(&ctx->ref); 515 - return ctx; 516 - } 517 - 518 491 #define DBG(fmt, ...) DRM_DEBUG_DRIVER(fmt"\n", ##__VA_ARGS__) 519 492 #define VERB(fmt, ...) 
if (0) DRM_DEBUG_DRIVER(fmt"\n", ##__VA_ARGS__) 520 493 ··· 504 547 static inline unsigned long timeout_to_jiffies(const ktime_t *timeout) 505 548 { 506 549 ktime_t now = ktime_get(); 507 - unsigned long remaining_jiffies; 550 + s64 remaining_jiffies; 508 551 509 552 if (ktime_compare(*timeout, now) < 0) { 510 553 remaining_jiffies = 0; ··· 513 556 remaining_jiffies = ktime_divns(rem, NSEC_PER_SEC / HZ); 514 557 } 515 558 516 - return remaining_jiffies; 559 + return clamp(remaining_jiffies, 0LL, (s64)INT_MAX); 517 560 } 518 561 519 562 #endif /* __MSM_DRV_H__ */
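The `timeout_to_jiffies()` change above widens the intermediate to `s64` and clamps the result to `INT_MAX`: a userspace-supplied far-future timeout could otherwise overflow when the jiffies count is later narrowed. A minimal sketch of the clamping arithmetic, with kernel types replaced by stdint equivalents:

```c
#include <assert.h>
#include <stdint.h>

/* Clamp a remaining-time-in-jiffies value the way the fixed
 * timeout_to_jiffies() does: negative means "already expired",
 * and anything above INT_MAX is capped so later narrowing to an
 * int-sized timeout cannot wrap to a negative value. */
static int64_t clamp_jiffies(int64_t remaining)
{
	if (remaining < 0)
		return 0;
	if (remaining > INT32_MAX)
		return INT32_MAX;
	return remaining;
}
```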
+4 -3
drivers/gpu/drm/msm/msm_gem_submit.c
··· 46 46 if (!submit) 47 47 return ERR_PTR(-ENOMEM); 48 48 49 - ret = drm_sched_job_init(&submit->base, &queue->entity, queue); 49 + ret = drm_sched_job_init(&submit->base, queue->entity, queue); 50 50 if (ret) { 51 51 kfree(submit); 52 52 return ERR_PTR(ret); ··· 171 171 static int submit_lookup_cmds(struct msm_gem_submit *submit, 172 172 struct drm_msm_gem_submit *args, struct drm_file *file) 173 173 { 174 - unsigned i, sz; 174 + unsigned i; 175 + size_t sz; 175 176 int ret = 0; 176 177 177 178 for (i = 0; i < args->nr_cmds; i++) { ··· 908 907 /* The scheduler owns a ref now: */ 909 908 msm_gem_submit_get(submit); 910 909 911 - drm_sched_entity_push_job(&submit->base, &queue->entity); 910 + drm_sched_entity_push_job(&submit->base, queue->entity); 912 911 913 912 args->fence = submit->fence_id; 914 913
+68 -2
drivers/gpu/drm/msm/msm_gpu.h
··· 203 203 uint32_t suspend_count; 204 204 205 205 struct msm_gpu_state *crashstate; 206 + 207 + /* Enable clamping to idle freq when inactive: */ 208 + bool clamp_to_idle; 209 + 206 210 /* True if the hardware supports expanded apriv (a650 and newer) */ 207 211 bool hw_apriv; 208 212 ··· 262 258 #define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_MIN) 263 259 264 260 /** 261 + * struct msm_file_private - per-drm_file context 262 + * 263 + * @queuelock: synchronizes access to submitqueues list 264 + * @submitqueues: list of &msm_gpu_submitqueue created by userspace 265 + * @queueid: counter incremented each time a submitqueue is created, 266 + * used to assign &msm_gpu_submitqueue.id 267 + * @aspace: the per-process GPU address-space 268 + * @ref: reference count 269 + * @seqno: unique per process seqno 270 + */ 271 + struct msm_file_private { 272 + rwlock_t queuelock; 273 + struct list_head submitqueues; 274 + int queueid; 275 + struct msm_gem_address_space *aspace; 276 + struct kref ref; 277 + int seqno; 278 + 279 + /** 280 + * entities: 281 + * 282 + * Table of per-priority-level sched entities used by submitqueues 283 + * associated with this &drm_file. Because some userspace apps 284 + * make assumptions about rendering from multiple gl contexts 285 + * (of the same priority) within the process happening in FIFO 286 + * order without requiring any fencing beyond MakeCurrent(), we 287 + * create at most one &drm_sched_entity per-process per-priority- 288 + * level. 289 + */ 290 + struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS]; 291 + }; 292 + 293 + /** 265 294 * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority 266 295 * 267 296 * @gpu: the gpu instance ··· 341 304 } 342 305 343 306 /** 307 + * struct msm_gpu_submitqueues - Userspace created context. 308 + * 344 309 * A submitqueue is associated with a gl context or vk queue (or equiv) 345 310 * in userspace. 
346 311 * ··· 360 321 * seqno, protected by submitqueue lock 361 322 * @lock: submitqueue lock 362 323 * @ref: reference count 363 - * @entity: the submit job-queue 324 + * @entity: the submit job-queue 364 325 */ 365 326 struct msm_gpu_submitqueue { 366 327 int id; ··· 372 333 struct idr fence_idr; 373 334 struct mutex lock; 374 335 struct kref ref; 375 - struct drm_sched_entity entity; 336 + struct drm_sched_entity *entity; 376 337 }; 377 338 378 339 struct msm_gpu_state_bo { ··· 459 420 460 421 int msm_gpu_pm_suspend(struct msm_gpu *gpu); 461 422 int msm_gpu_pm_resume(struct msm_gpu *gpu); 423 + 424 + int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx); 425 + struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx, 426 + u32 id); 427 + int msm_submitqueue_create(struct drm_device *drm, 428 + struct msm_file_private *ctx, 429 + u32 prio, u32 flags, u32 *id); 430 + int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx, 431 + struct drm_msm_submitqueue_query *args); 432 + int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id); 433 + void msm_submitqueue_close(struct msm_file_private *ctx); 434 + 435 + void msm_submitqueue_destroy(struct kref *kref); 436 + 437 + void __msm_file_private_destroy(struct kref *kref); 438 + 439 + static inline void msm_file_private_put(struct msm_file_private *ctx) 440 + { 441 + kref_put(&ctx->ref, __msm_file_private_destroy); 442 + } 443 + 444 + static inline struct msm_file_private *msm_file_private_get( 445 + struct msm_file_private *ctx) 446 + { 447 + kref_get(&ctx->ref); 448 + return ctx; 449 + } 462 450 463 451 void msm_devfreq_init(struct msm_gpu *gpu); 464 452 void msm_devfreq_cleanup(struct msm_gpu *gpu);
+8 -1
drivers/gpu/drm/msm/msm_gpu_devfreq.c
··· 151 151 unsigned int idle_time; 152 152 unsigned long target_freq = df->idle_freq; 153 153 154 + if (!df->devfreq) 155 + return; 156 + 154 157 /* 155 158 * Hold devfreq lock to synchronize with get_dev_status()/ 156 159 * target() callbacks ··· 189 186 struct msm_gpu_devfreq *df = &gpu->devfreq; 190 187 unsigned long idle_freq, target_freq = 0; 191 188 189 + if (!df->devfreq) 190 + return; 191 + 192 192 /* 193 193 * Hold devfreq lock to synchronize with get_dev_status()/ 194 194 * target() callbacks ··· 200 194 201 195 idle_freq = get_freq(gpu); 202 196 203 - msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0); 197 + if (gpu->clamp_to_idle) 198 + msm_devfreq_target(&gpu->pdev->dev, &target_freq, 0); 204 199 205 200 df->idle_time = ktime_get(); 206 201 df->idle_freq = idle_freq;
+58 -14
drivers/gpu/drm/msm/msm_submitqueue.c
··· 7 7 8 8 #include "msm_gpu.h" 9 9 10 + void __msm_file_private_destroy(struct kref *kref) 11 + { 12 + struct msm_file_private *ctx = container_of(kref, 13 + struct msm_file_private, ref); 14 + int i; 15 + 16 + for (i = 0; i < ARRAY_SIZE(ctx->entities); i++) { 17 + if (!ctx->entities[i]) 18 + continue; 19 + 20 + drm_sched_entity_destroy(ctx->entities[i]); 21 + kfree(ctx->entities[i]); 22 + } 23 + 24 + msm_gem_address_space_put(ctx->aspace); 25 + kfree(ctx); 26 + } 27 + 10 28 void msm_submitqueue_destroy(struct kref *kref) 11 29 { 12 30 struct msm_gpu_submitqueue *queue = container_of(kref, 13 31 struct msm_gpu_submitqueue, ref); 14 32 15 33 idr_destroy(&queue->fence_idr); 16 - 17 - drm_sched_entity_destroy(&queue->entity); 18 34 19 35 msm_file_private_put(queue->ctx); 20 36 ··· 77 61 } 78 62 } 79 63 64 + static struct drm_sched_entity * 65 + get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring, 66 + unsigned ring_nr, enum drm_sched_priority sched_prio) 67 + { 68 + static DEFINE_MUTEX(entity_lock); 69 + unsigned idx = (ring_nr * NR_SCHED_PRIORITIES) + sched_prio; 70 + 71 + /* We should have already validated that the requested priority is 72 + * valid by the time we get here. 
73 + */ 74 + if (WARN_ON(idx >= ARRAY_SIZE(ctx->entities))) 75 + return ERR_PTR(-EINVAL); 76 + 77 + mutex_lock(&entity_lock); 78 + 79 + if (!ctx->entities[idx]) { 80 + struct drm_sched_entity *entity; 81 + struct drm_gpu_scheduler *sched = &ring->sched; 82 + int ret; 83 + 84 + entity = kzalloc(sizeof(*ctx->entities[idx]), GFP_KERNEL); 85 + 86 + ret = drm_sched_entity_init(entity, sched_prio, &sched, 1, NULL); 87 + if (ret) { 88 + kfree(entity); 89 + return ERR_PTR(ret); 90 + } 91 + 92 + ctx->entities[idx] = entity; 93 + } 94 + 95 + mutex_unlock(&entity_lock); 96 + 97 + return ctx->entities[idx]; 98 + } 99 + 80 100 int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx, 81 101 u32 prio, u32 flags, u32 *id) 82 102 { 83 103 struct msm_drm_private *priv = drm->dev_private; 84 104 struct msm_gpu_submitqueue *queue; 85 - struct msm_ringbuffer *ring; 86 - struct drm_gpu_scheduler *sched; 87 105 enum drm_sched_priority sched_prio; 88 106 unsigned ring_nr; 89 107 int ret; ··· 141 91 queue->flags = flags; 142 92 queue->ring_nr = ring_nr; 143 93 144 - ring = priv->gpu->rb[ring_nr]; 145 - sched = &ring->sched; 146 - 147 - ret = drm_sched_entity_init(&queue->entity, 148 - sched_prio, &sched, 1, NULL); 149 - if (ret) { 94 + queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr], 95 + ring_nr, sched_prio); 96 + if (IS_ERR(queue->entity)) { 97 + ret = PTR_ERR(queue->entity); 150 98 kfree(queue); 151 99 return ret; 152 100 } ··· 187 139 * than the middle priority level. 188 140 */ 189 141 default_prio = DIV_ROUND_UP(max_priority, 2); 190 - 191 - INIT_LIST_HEAD(&ctx->submitqueues); 192 - 193 - rwlock_init(&ctx->queuelock); 194 142 195 143 return msm_submitqueue_create(drm, ctx, default_prio, 0, NULL); 196 144 }
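`get_sched_entity()` above replaces the per-submitqueue entity with a lazily created, per-process table indexed by (ring, priority), so every queue at the same priority level shares one FIFO entity. The indexing and create-under-lock shape can be sketched as follows — toy types and a plain pthread mutex, no DRM scheduler involved:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

#define NR_RINGS 4
#define NR_PRIOS 3

struct entity { int ring; int prio; };

static struct entity *entities[NR_RINGS * NR_PRIOS];
static pthread_mutex_t entity_lock = PTHREAD_MUTEX_INITIALIZER;

/* Return the shared entity for (ring, prio), creating it on first use.
 * Mirrors get_sched_entity(): one entity per slot, reused afterwards,
 * with creation serialized by a lock. */
static struct entity *get_entity(int ring, int prio)
{
	unsigned idx = ring * NR_PRIOS + prio;
	struct entity *e;

	if (idx >= NR_RINGS * NR_PRIOS)
		return NULL;		/* invalid slot, like the WARN_ON */

	pthread_mutex_lock(&entity_lock);
	if (!entities[idx]) {
		e = calloc(1, sizeof(*e));
		if (e) {
			e->ring = ring;
			e->prio = prio;
			entities[idx] = e;
		}
	}
	e = entities[idx];
	pthread_mutex_unlock(&entity_lock);
	return e;
}
```

Sharing one entity per priority level is what preserves FIFO submission order across multiple queues of the same priority within a process, per the comment added to `struct msm_file_private`.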
+5 -1
drivers/gpu/drm/mxsfb/mxsfb_drv.c
··· 173 173 struct mxsfb_drm_private *mxsfb = drm->dev_private; 174 174 175 175 mxsfb_enable_axi_clk(mxsfb); 176 - mxsfb->crtc.funcs->disable_vblank(&mxsfb->crtc); 176 + 177 + /* Disable and clear VBLANK IRQ */ 178 + writel(CTRL1_CUR_FRAME_DONE_IRQ_EN, mxsfb->base + LCDC_CTRL1 + REG_CLR); 179 + writel(CTRL1_CUR_FRAME_DONE_IRQ, mxsfb->base + LCDC_CTRL1 + REG_CLR); 180 + 177 181 mxsfb_disable_axi_clk(mxsfb); 178 182 } 179 183
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/fifo/chang84.c
··· 82 82 if (offset < 0) 83 83 return 0; 84 84 85 - engn = fifo->base.func->engine_id(&fifo->base, engine); 85 + engn = fifo->base.func->engine_id(&fifo->base, engine) - 1; 86 86 save = nvkm_mask(device, 0x002520, 0x0000003f, 1 << engn); 87 87 nvkm_wr32(device, 0x0032fc, chan->base.inst->addr >> 12); 88 88 done = nvkm_msec(device, 2000,
+1
drivers/gpu/drm/panel/Kconfig
··· 295 295 depends on OF 296 296 depends on I2C 297 297 depends on BACKLIGHT_CLASS_DEVICE 298 + select CRC32 298 299 help 299 300 The panel is used with different sizes LCDs, from 480x272 to 300 301 1280x800, and 24 bit per pixel.
+6 -6
drivers/gpu/drm/panel/panel-ilitek-ili9881c.c
··· 590 590 .clock = 69700, 591 591 592 592 .hdisplay = 800, 593 - .hsync_start = 800 + 6, 594 - .hsync_end = 800 + 6 + 15, 595 - .htotal = 800 + 6 + 15 + 16, 593 + .hsync_start = 800 + 52, 594 + .hsync_end = 800 + 52 + 8, 595 + .htotal = 800 + 52 + 8 + 48, 596 596 597 597 .vdisplay = 1280, 598 - .vsync_start = 1280 + 8, 599 - .vsync_end = 1280 + 8 + 48, 600 - .vtotal = 1280 + 8 + 48 + 52, 598 + .vsync_start = 1280 + 16, 599 + .vsync_end = 1280 + 16 + 6, 600 + .vtotal = 1280 + 16 + 6 + 15, 601 601 602 602 .width_mm = 135, 603 603 .height_mm = 217,
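The corrected timings above follow the usual `drm_display_mode` layout: each value builds on the previous one (active, then front porch, sync width, back porch). With the new numbers and the 69700 kHz pixel clock from the hunk, the totals and the resulting integer refresh rate work out as checked below:

```c
#include <assert.h>

/* Mode totals per the drm_display_mode convention:
 * total = active + front porch + sync width + back porch. */
static int total(int disp, int front, int sync, int back)
{
	return disp + front + sync + back;
}

/* Refresh in Hz, truncated: clock is in kHz, so scale by 1000. */
static int refresh_hz(int clock_khz, int ht, int vt)
{
	return (int)((long long)clock_khz * 1000 / ((long long)ht * vt));
}
```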
+1 -1
drivers/gpu/drm/r128/ati_pcigart.c
··· 214 214 } 215 215 ret = 0; 216 216 217 - #if defined(__i386__) || defined(__x86_64__) 217 + #ifdef CONFIG_X86 218 218 wbinvd(); 219 219 #else 220 220 mb();
+12 -4
drivers/gpu/drm/rcar-du/rcar_du_encoder.c
··· 86 86 } 87 87 88 88 /* 89 - * Create and initialize the encoder. On Gen3 skip the LVDS1 output if 89 + * Create and initialize the encoder. On Gen3, skip the LVDS1 output if 90 90 * the LVDS1 encoder is used as a companion for LVDS0 in dual-link 91 - * mode. 91 + * mode, or any LVDS output if it isn't connected. The latter may happen 92 + * on D3 or E3 as the LVDS encoders are needed to provide the pixel 93 + * clock to the DU, even when the LVDS outputs are not used. 92 94 */ 93 - if (rcdu->info->gen >= 3 && output == RCAR_DU_OUTPUT_LVDS1) { 94 - if (rcar_lvds_dual_link(bridge)) 95 + if (rcdu->info->gen >= 3) { 96 + if (output == RCAR_DU_OUTPUT_LVDS1 && 97 + rcar_lvds_dual_link(bridge)) 98 + return -ENOLINK; 99 + 100 + if ((output == RCAR_DU_OUTPUT_LVDS0 || 101 + output == RCAR_DU_OUTPUT_LVDS1) && 102 + !rcar_lvds_is_connected(bridge)) 95 103 return -ENOLINK; 96 104 } 97 105
+11
drivers/gpu/drm/rcar-du/rcar_lvds.c
··· 576 576 { 577 577 struct rcar_lvds *lvds = bridge_to_rcar_lvds(bridge); 578 578 579 + if (!lvds->next_bridge) 580 + return 0; 581 + 579 582 return drm_bridge_attach(bridge->encoder, lvds->next_bridge, bridge, 580 583 flags); 581 584 } ··· 600 597 return lvds->link_type != RCAR_LVDS_SINGLE_LINK; 601 598 } 602 599 EXPORT_SYMBOL_GPL(rcar_lvds_dual_link); 600 + 601 + bool rcar_lvds_is_connected(struct drm_bridge *bridge) 602 + { 603 + struct rcar_lvds *lvds = bridge_to_rcar_lvds(bridge); 604 + 605 + return lvds->next_bridge != NULL; 606 + } 607 + EXPORT_SYMBOL_GPL(rcar_lvds_is_connected); 603 608 604 609 /* ----------------------------------------------------------------------------- 605 610 * Probe & Remove
+5
drivers/gpu/drm/rcar-du/rcar_lvds.h
··· 16 16 int rcar_lvds_clk_enable(struct drm_bridge *bridge, unsigned long freq); 17 17 void rcar_lvds_clk_disable(struct drm_bridge *bridge); 18 18 bool rcar_lvds_dual_link(struct drm_bridge *bridge); 19 + bool rcar_lvds_is_connected(struct drm_bridge *bridge); 19 20 #else 20 21 static inline int rcar_lvds_clk_enable(struct drm_bridge *bridge, 21 22 unsigned long freq) ··· 25 24 } 26 25 static inline void rcar_lvds_clk_disable(struct drm_bridge *bridge) { } 27 26 static inline bool rcar_lvds_dual_link(struct drm_bridge *bridge) 27 + { 28 + return false; 29 + } 30 + static inline bool rcar_lvds_is_connected(struct drm_bridge *bridge) 28 31 { 29 32 return false; 30 33 }
+1 -1
drivers/iio/accel/fxls8962af-core.c
··· 738 738 739 739 if (reg & FXLS8962AF_INT_STATUS_SRC_BUF) { 740 740 ret = fxls8962af_fifo_flush(indio_dev); 741 - if (ret) 741 + if (ret < 0) 742 742 return IRQ_NONE; 743 743 744 744 return IRQ_HANDLED;
+1
drivers/iio/adc/ad7192.c
··· 293 293 .has_registers = true, 294 294 .addr_shift = 3, 295 295 .read_mask = BIT(6), 296 + .irq_flags = IRQF_TRIGGER_FALLING, 296 297 }; 297 298 298 299 static const struct ad_sd_calib_data ad7192_calib_arr[8] = {
+1 -1
drivers/iio/adc/ad7780.c
··· 203 203 .set_mode = ad7780_set_mode, 204 204 .postprocess_sample = ad7780_postprocess_sample, 205 205 .has_registers = false, 206 - .irq_flags = IRQF_TRIGGER_LOW, 206 + .irq_flags = IRQF_TRIGGER_FALLING, 207 207 }; 208 208 209 209 #define _AD7780_CHANNEL(_bits, _wordsize, _mask_all) \
+1 -1
drivers/iio/adc/ad7793.c
··· 206 206 .has_registers = true, 207 207 .addr_shift = 3, 208 208 .read_mask = BIT(6), 209 - .irq_flags = IRQF_TRIGGER_LOW, 209 + .irq_flags = IRQF_TRIGGER_FALLING, 210 210 }; 211 211 212 212 static const struct ad_sd_calib_data ad7793_calib_arr[6] = {
+1
drivers/iio/adc/aspeed_adc.c
··· 183 183 184 184 data = iio_priv(indio_dev); 185 185 data->dev = &pdev->dev; 186 + platform_set_drvdata(pdev, indio_dev); 186 187 187 188 data->base = devm_platform_ioremap_resource(pdev, 0); 188 189 if (IS_ERR(data->base))
+1 -2
drivers/iio/adc/max1027.c
··· 103 103 .sign = 'u', \ 104 104 .realbits = depth, \ 105 105 .storagebits = 16, \ 106 - .shift = 2, \ 106 + .shift = (depth == 10) ? 2 : 0, \ 107 107 .endianness = IIO_BE, \ 108 108 }, \ 109 109 } ··· 142 142 MAX1027_V_CHAN(11, depth) 143 143 144 144 #define MAX1X31_CHANNELS(depth) \ 145 - MAX1X27_CHANNELS(depth), \ 146 145 MAX1X29_CHANNELS(depth), \ 147 146 MAX1027_V_CHAN(12, depth), \ 148 147 MAX1027_V_CHAN(13, depth), \
+8
drivers/iio/adc/mt6577_auxadc.c
··· 82 82 MT6577_AUXADC_CHANNEL(15), 83 83 }; 84 84 85 + /* For Voltage calculation */ 86 + #define VOLTAGE_FULL_RANGE 1500 /* VA voltage */ 87 + #define AUXADC_PRECISE 4096 /* 12 bits */ 88 + 85 89 static int mt_auxadc_get_cali_data(int rawdata, bool enable_cali) 86 90 { 87 91 return rawdata; ··· 195 191 } 196 192 if (adc_dev->dev_comp->sample_data_cali) 197 193 *val = mt_auxadc_get_cali_data(*val, true); 194 + 195 + /* Convert adc raw data to voltage: 0 - 1500 mV */ 196 + *val = *val * VOLTAGE_FULL_RANGE / AUXADC_PRECISE; 197 + 198 198 return IIO_VAL_INT; 199 199 200 200 default:
+4 -2
drivers/iio/adc/rzg2l_adc.c
··· 401 401 exit_hw_init: 402 402 clk_disable_unprepare(adc->pclk); 403 403 404 - return 0; 404 + return ret; 405 405 } 406 406 407 407 static void rzg2l_adc_pm_runtime_disable(void *data) ··· 570 570 return ret; 571 571 572 572 ret = clk_prepare_enable(adc->adclk); 573 - if (ret) 573 + if (ret) { 574 + clk_disable_unprepare(adc->pclk); 574 575 return ret; 576 + } 575 577 576 578 rzg2l_adc_pwr(adc, true); 577 579
+6
drivers/iio/adc/ti-adc128s052.c
··· 171 171 mutex_init(&adc->lock); 172 172 173 173 ret = iio_device_register(indio_dev); 174 + if (ret) 175 + goto err_disable_regulator; 174 176 177 + return 0; 178 + 179 + err_disable_regulator: 180 + regulator_disable(adc->reg); 175 181 return ret; 176 182 } 177 183
+9 -2
drivers/iio/common/ssp_sensors/ssp_spi.c
··· 137 137 if (length > received_len - *data_index || length <= 0) { 138 138 ssp_dbg("[SSP]: MSG From MCU-invalid debug length(%d/%d)\n", 139 139 length, received_len); 140 - return length ? length : -EPROTO; 140 + return -EPROTO; 141 141 } 142 142 143 143 ssp_dbg("[SSP]: MSG From MCU - %s\n", &data_frame[*data_index]); ··· 273 273 for (idx = 0; idx < len;) { 274 274 switch (dataframe[idx++]) { 275 275 case SSP_MSG2AP_INST_BYPASS_DATA: 276 + if (idx >= len) 277 + return -EPROTO; 276 278 sd = dataframe[idx++]; 277 279 if (sd < 0 || sd >= SSP_SENSOR_MAX) { 278 280 dev_err(SSP_DEV, ··· 284 282 285 283 if (indio_devs[sd]) { 286 284 spd = iio_priv(indio_devs[sd]); 287 - if (spd->process_data) 285 + if (spd->process_data) { 286 + if (idx >= len) 287 + return -EPROTO; 288 288 spd->process_data(indio_devs[sd], 289 289 &dataframe[idx], 290 290 data->timestamp); 291 + } 291 292 } else { 292 293 dev_err(SSP_DEV, "no client for frame\n"); 293 294 } ··· 298 293 idx += ssp_offset_map[sd]; 299 294 break; 300 295 case SSP_MSG2AP_INST_DEBUG_DATA: 296 + if (idx >= len) 297 + return -EPROTO; 301 298 sd = ssp_print_mcu_debug(dataframe, &idx, len); 302 299 if (sd) { 303 300 dev_err(SSP_DEV,
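The `if (idx >= len) return -EPROTO;` checks added above guard every read that follows an opcode byte, so a truncated frame can no longer walk past the end of the buffer. The shape of the fix, reduced to a toy opcode/payload parser — hypothetical opcodes, not the SSP protocol:

```c
#include <assert.h>
#include <stddef.h>

#define OP_DATA		1
#define OP_DEBUG	2
#define EPROTO_ERR	(-71)

/* Walk a frame of (opcode, one-byte payload) pairs. Every payload
 * read is preceded by a bounds check, as in the fixed ssp_spi.c:
 * an opcode sitting at the very end of the buffer is a protocol
 * error, not an out-of-bounds read. */
static int parse_frame(const unsigned char *buf, size_t len)
{
	size_t idx = 0;
	int count = 0;

	while (idx < len) {
		switch (buf[idx++]) {
		case OP_DATA:
		case OP_DEBUG:
			if (idx >= len)
				return EPROTO_ERR;	/* truncated payload */
			idx++;				/* consume payload byte */
			count++;
			break;
		default:
			return EPROTO_ERR;		/* unknown opcode */
		}
	}
	return count;
}
```

The same discipline applies before the `spd->process_data()` call in the hunk: any variable-length read driven by frame contents needs its own check against `len`.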
+1
drivers/iio/dac/ti-dac5571.c
··· 350 350 data->dac5571_pwrdwn = dac5571_pwrdwn_quad; 351 351 break; 352 352 default: 353 + ret = -EINVAL; 353 354 goto err; 354 355 } 355 356
+2 -1
drivers/iio/imu/adis16475.c
··· 353 353 if (dec > st->info->max_dec) 354 354 dec = st->info->max_dec; 355 355 356 - ret = adis_write_reg_16(&st->adis, ADIS16475_REG_DEC_RATE, dec); 356 + ret = __adis_write_reg_16(&st->adis, ADIS16475_REG_DEC_RATE, dec); 357 357 if (ret) 358 358 goto error; 359 359 360 + adis_dev_unlock(&st->adis); 360 361 /* 361 362 * If decimation is used, then gyro and accel data will have meaningful 362 363 * bits on the LSB registers. This info is used on the trigger handler.
+11 -3
drivers/iio/imu/adis16480.c
··· 144 144 unsigned int max_dec_rate; 145 145 const unsigned int *filter_freqs; 146 146 bool has_pps_clk_mode; 147 + bool has_sleep_cnt; 147 148 const struct adis_data adis_data; 148 149 }; 149 150 ··· 940 939 .temp_scale = 5650, /* 5.65 milli degree Celsius */ 941 940 .int_clk = 2460000, 942 941 .max_dec_rate = 2048, 942 + .has_sleep_cnt = true, 943 943 .filter_freqs = adis16480_def_filter_freqs, 944 944 .adis_data = ADIS16480_DATA(16375, &adis16485_timeouts, 0), 945 945 }, ··· 954 952 .temp_scale = 5650, /* 5.65 milli degree Celsius */ 955 953 .int_clk = 2460000, 956 954 .max_dec_rate = 2048, 955 + .has_sleep_cnt = true, 957 956 .filter_freqs = adis16480_def_filter_freqs, 958 957 .adis_data = ADIS16480_DATA(16480, &adis16480_timeouts, 0), 959 958 }, ··· 968 965 .temp_scale = 5650, /* 5.65 milli degree Celsius */ 969 966 .int_clk = 2460000, 970 967 .max_dec_rate = 2048, 968 + .has_sleep_cnt = true, 971 969 .filter_freqs = adis16480_def_filter_freqs, 972 970 .adis_data = ADIS16480_DATA(16485, &adis16485_timeouts, 0), 973 971 }, ··· 982 978 .temp_scale = 5650, /* 5.65 milli degree Celsius */ 983 979 .int_clk = 2460000, 984 980 .max_dec_rate = 2048, 981 + .has_sleep_cnt = true, 985 982 .filter_freqs = adis16480_def_filter_freqs, 986 983 .adis_data = ADIS16480_DATA(16488, &adis16485_timeouts, 0), 987 984 }, ··· 1430 1425 if (ret) 1431 1426 return ret; 1432 1427 1433 - ret = devm_add_action_or_reset(&spi->dev, adis16480_stop, indio_dev); 1434 - if (ret) 1435 - return ret; 1428 + if (st->chip_info->has_sleep_cnt) { 1429 + ret = devm_add_action_or_reset(&spi->dev, adis16480_stop, 1430 + indio_dev); 1431 + if (ret) 1432 + return ret; 1433 + } 1436 1434 1437 1435 ret = adis16480_config_irq_pin(spi->dev.of_node, st); 1438 1436 if (ret)
+3 -3
drivers/iio/light/opt3001.c
··· 276 276 ret = wait_event_timeout(opt->result_ready_queue, 277 277 opt->result_ready, 278 278 msecs_to_jiffies(OPT3001_RESULT_READY_LONG)); 279 + if (ret == 0) 280 + return -ETIMEDOUT; 279 281 } else { 280 282 /* Sleep for result ready time */ 281 283 timeout = (opt->int_time == OPT3001_INT_TIME_SHORT) ? ··· 314 312 /* Disallow IRQ to access the device while lock is active */ 315 313 opt->ok_to_ignore_lock = false; 316 314 317 - if (ret == 0) 318 - return -ETIMEDOUT; 319 - else if (ret < 0) 315 + if (ret < 0) 320 316 return ret; 321 317 322 318 if (opt->use_irq) {
+2
drivers/input/joystick/xpad.c
··· 334 334 { 0x24c6, 0x5b03, "Thrustmaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 }, 335 335 { 0x24c6, 0x5d04, "Razer Sabertooth", 0, XTYPE_XBOX360 }, 336 336 { 0x24c6, 0xfafe, "Rock Candy Gamepad for Xbox 360", 0, XTYPE_XBOX360 }, 337 + { 0x3285, 0x0607, "Nacon GC-100", 0, XTYPE_XBOX360 }, 337 338 { 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX }, 338 339 { 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX }, 339 340 { 0x0000, 0x0000, "Generic X-Box pad", 0, XTYPE_UNKNOWN } ··· 452 451 XPAD_XBOXONE_VENDOR(0x24c6), /* PowerA Controllers */ 453 452 XPAD_XBOXONE_VENDOR(0x2e24), /* Hyperkin Duke X-Box One pad */ 454 453 XPAD_XBOX360_VENDOR(0x2f24), /* GameSir Controllers */ 454 + XPAD_XBOX360_VENDOR(0x3285), /* Nacon GC-100 */ 455 455 { } 456 456 }; 457 457
+29
drivers/input/keyboard/snvs_pwrkey.c
··· 3 3 // Driver for the IMX SNVS ON/OFF Power Key 4 4 // Copyright (C) 2015 Freescale Semiconductor, Inc. All Rights Reserved. 5 5 6 + #include <linux/clk.h> 6 7 #include <linux/device.h> 7 8 #include <linux/err.h> 8 9 #include <linux/init.h> ··· 100 99 return IRQ_HANDLED; 101 100 } 102 101 102 + static void imx_snvs_pwrkey_disable_clk(void *data) 103 + { 104 + clk_disable_unprepare(data); 105 + } 106 + 103 107 static void imx_snvs_pwrkey_act(void *pdata) 104 108 { 105 109 struct pwrkey_drv_data *pd = pdata; ··· 117 111 struct pwrkey_drv_data *pdata; 118 112 struct input_dev *input; 119 113 struct device_node *np; 114 + struct clk *clk; 120 115 int error; 121 116 u32 vid; 122 117 ··· 139 132 if (of_property_read_u32(np, "linux,keycode", &pdata->keycode)) { 140 133 pdata->keycode = KEY_POWER; 141 134 dev_warn(&pdev->dev, "KEY_POWER without setting in dts\n"); 135 + } 136 + 137 + clk = devm_clk_get_optional(&pdev->dev, NULL); 138 + if (IS_ERR(clk)) { 139 + dev_err(&pdev->dev, "Failed to get snvs clock (%pe)\n", clk); 140 + return PTR_ERR(clk); 141 + } 142 + 143 + error = clk_prepare_enable(clk); 144 + if (error) { 145 + dev_err(&pdev->dev, "Failed to enable snvs clock (%pe)\n", 146 + ERR_PTR(error)); 147 + return error; 148 + } 149 + 150 + error = devm_add_action_or_reset(&pdev->dev, 151 + imx_snvs_pwrkey_disable_clk, clk); 152 + if (error) { 153 + dev_err(&pdev->dev, 154 + "Failed to register clock cleanup handler (%pe)\n", 155 + ERR_PTR(error)); 156 + return error; 142 157 } 143 158 144 159 pdata->wakeup = of_property_read_bool(np, "wakeup-source");
+21 -21
drivers/input/touchscreen.c
··· 80 80 81 81 data_present = touchscreen_get_prop_u32(dev, "touchscreen-min-x", 82 82 input_abs_get_min(input, axis_x), 83 - &minimum) | 84 - touchscreen_get_prop_u32(dev, "touchscreen-size-x", 85 - input_abs_get_max(input, 86 - axis_x) + 1, 87 - &maximum) | 88 - touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x", 89 - input_abs_get_fuzz(input, axis_x), 90 - &fuzz); 83 + &minimum); 84 + data_present |= touchscreen_get_prop_u32(dev, "touchscreen-size-x", 85 + input_abs_get_max(input, 86 + axis_x) + 1, 87 + &maximum); 88 + data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x", 89 + input_abs_get_fuzz(input, axis_x), 90 + &fuzz); 91 91 if (data_present) 92 92 touchscreen_set_params(input, axis_x, minimum, maximum - 1, fuzz); 93 93 94 94 data_present = touchscreen_get_prop_u32(dev, "touchscreen-min-y", 95 95 input_abs_get_min(input, axis_y), 96 - &minimum) | 97 - touchscreen_get_prop_u32(dev, "touchscreen-size-y", 98 - input_abs_get_max(input, 99 - axis_y) + 1, 100 - &maximum) | 101 - touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y", 102 - input_abs_get_fuzz(input, axis_y), 103 - &fuzz); 96 + &minimum); 97 + data_present |= touchscreen_get_prop_u32(dev, "touchscreen-size-y", 98 + input_abs_get_max(input, 99 + axis_y) + 1, 100 + &maximum); 101 + data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y", 102 + input_abs_get_fuzz(input, axis_y), 103 + &fuzz); 104 104 if (data_present) 105 105 touchscreen_set_params(input, axis_y, minimum, maximum - 1, fuzz); 106 106 ··· 108 108 data_present = touchscreen_get_prop_u32(dev, 109 109 "touchscreen-max-pressure", 110 110 input_abs_get_max(input, axis), 111 - &maximum) | 112 - touchscreen_get_prop_u32(dev, 113 - "touchscreen-fuzz-pressure", 114 - input_abs_get_fuzz(input, axis), 115 - &fuzz); 111 + &maximum); 112 + data_present |= touchscreen_get_prop_u32(dev, 113 + "touchscreen-fuzz-pressure", 114 + input_abs_get_fuzz(input, axis), 115 + &fuzz); 116 116 if (data_present) 117 117 
touchscreen_set_params(input, axis, 0, maximum, fuzz); 118 118
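The touchscreen.c hunk replaces one big `a | b | c` expression with three `|=` statements. Bitwise `|` (unlike `||`) evaluates every operand, which is the behaviour the code needs, but it leaves the evaluation order unspecified and can trip clang's warning about bitwise operators on boolean operands; sequential `|=` keeps the "call all three" semantics while making the ordering explicit. A small illustration with a hypothetical stand-in helper:

```c
#include <assert.h>
#include <stdbool.h>

static int calls;

/* Stand-in for touchscreen_get_prop_u32(): records that it ran,
 * returns whether the property was present. */
static bool get_prop(bool present) { calls++; return present; }

/* Bitwise |: every operand is evaluated, order unspecified. */
static bool probe_bitwise(void)
{
    calls = 0;
    return get_prop(true) | get_prop(false) | get_prop(false);
}

/* The style the patch adopts: one statement per lookup,
 * explicit left-to-right sequencing, all lookups still run. */
static bool probe_sequential(void)
{
    bool present;

    calls = 0;
    present  = get_prop(true);
    present |= get_prop(false);
    present |= get_prop(false);
    return present;
}

/* Logical ||: would wrongly skip the later lookups. */
static bool probe_logical(void)
{
    calls = 0;
    return get_prop(true) || get_prop(false) || get_prop(false);
}
```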
+16 -13
drivers/input/touchscreen/resistive-adc-touch.c
··· 71 71 unsigned int z2 = touch_info[st->ch_map[GRTS_CH_Z2]]; 72 72 unsigned int Rt; 73 73 74 - Rt = z2; 75 - Rt -= z1; 76 - Rt *= st->x_plate_ohms; 77 - Rt = DIV_ROUND_CLOSEST(Rt, 16); 78 - Rt *= x; 79 - Rt /= z1; 80 - Rt = DIV_ROUND_CLOSEST(Rt, 256); 81 - /* 82 - * On increased pressure the resistance (Rt) is decreasing 83 - * so, convert values to make it looks as real pressure. 84 - */ 85 - if (Rt < GRTS_DEFAULT_PRESSURE_MAX) 86 - press = GRTS_DEFAULT_PRESSURE_MAX - Rt; 74 + if (likely(x && z1)) { 75 + Rt = z2; 76 + Rt -= z1; 77 + Rt *= st->x_plate_ohms; 78 + Rt = DIV_ROUND_CLOSEST(Rt, 16); 79 + Rt *= x; 80 + Rt /= z1; 81 + Rt = DIV_ROUND_CLOSEST(Rt, 256); 82 + /* 83 + * On increased pressure the resistance (Rt) is 84 + * decreasing so, convert values to make it looks as 85 + * real pressure. 86 + */ 87 + if (Rt < GRTS_DEFAULT_PRESSURE_MAX) 88 + press = GRTS_DEFAULT_PRESSURE_MAX - Rt; 89 + } 87 90 } 88 91 89 92 if ((!x && !y) || (st->pressure && (press < st->pressure_min))) {
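The resistive-adc-touch hunk wraps the four-wire pressure formula in `if (likely(x && z1))` because the old code divided by `z1` unconditionally. A userspace replay of the arithmetic, with `DIV_ROUND_CLOSEST` defined as in the kernel for non-negative values and an illustrative pressure ceiling:

```c
#include <assert.h>

#define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))
#define PRESSURE_MAX 255U       /* stand-in for GRTS_DEFAULT_PRESSURE_MAX */

/* Four-wire resistive pressure: Rt falls as pressure rises, so the
 * result is inverted to "max - Rt". Returns 0 when the sample would
 * divide by zero, which is exactly the case the patch guards. */
static unsigned int pressure(unsigned int x, unsigned int z1,
                             unsigned int z2, unsigned int x_plate_ohms)
{
    unsigned int Rt, press = 0;

    if (x && z1) {              /* guard added by the patch */
        Rt = z2 - z1;
        Rt *= x_plate_ohms;
        Rt = DIV_ROUND_CLOSEST(Rt, 16);
        Rt *= x;
        Rt /= z1;
        Rt = DIV_ROUND_CLOSEST(Rt, 256);
        if (Rt < PRESSURE_MAX)
            press = PRESSURE_MAX - Rt;
    }
    return press;
}
```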
+8
drivers/iommu/Kconfig
··· 355 355 'arm-smmu.disable_bypass' will continue to override this 356 356 config. 357 357 358 + config ARM_SMMU_QCOM 359 + def_tristate y 360 + depends on ARM_SMMU && ARCH_QCOM 361 + select QCOM_SCM 362 + help 363 + When running on a Qualcomm platform that has the custom variant 364 + of the ARM SMMU, this needs to be built into the SMMU driver. 365 + 358 366 config ARM_SMMU_V3 359 367 tristate "ARM Ltd. System MMU Version 3 (SMMUv3) Support" 360 368 depends on ARM64
+4 -4
drivers/isdn/hardware/mISDN/hfcpci.c
··· 1994 1994 pci_set_master(hc->pdev); 1995 1995 if (!hc->irq) { 1996 1996 printk(KERN_WARNING "HFC-PCI: No IRQ for PCI card found\n"); 1997 - return 1; 1997 + return -EINVAL; 1998 1998 } 1999 1999 hc->hw.pci_io = 2000 2000 (char __iomem *)(unsigned long)hc->pdev->resource[1].start; 2001 2001 2002 2002 if (!hc->hw.pci_io) { 2003 2003 printk(KERN_WARNING "HFC-PCI: No IO-Mem for PCI card found\n"); 2004 - return 1; 2004 + return -ENOMEM; 2005 2005 } 2006 2006 /* Allocate memory for FIFOS */ 2007 2007 /* the memory needs to be on a 32k boundary within the first 4G */ ··· 2012 2012 if (!buffer) { 2013 2013 printk(KERN_WARNING 2014 2014 "HFC-PCI: Error allocating memory for FIFO!\n"); 2015 - return 1; 2015 + return -ENOMEM; 2016 2016 } 2017 2017 hc->hw.fifos = buffer; 2018 2018 pci_write_config_dword(hc->pdev, 0x80, hc->hw.dmahandle); ··· 2022 2022 "HFC-PCI: Error in ioremap for PCI!\n"); 2023 2023 dma_free_coherent(&hc->pdev->dev, 0x8000, hc->hw.fifos, 2024 2024 hc->hw.dmahandle); 2025 - return 1; 2025 + return -ENOMEM; 2026 2026 } 2027 2027 2028 2028 printk(KERN_INFO
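The hfcpci.c hunk swaps bare `return 1` for proper negative errno codes, the kernel convention callers actually test for and propagate. A sketch of why that matters; `setup_hw()` and `probe()` here are illustrative stand-ins, not the driver's functions:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Returns 0 on success or a negative errno describing *why*
 * setup failed, instead of an opaque "1". */
static int setup_hw(bool have_irq, bool have_iomem)
{
    if (!have_irq)
        return -EINVAL;         /* bad/missing resource description */
    if (!have_iomem)
        return -ENOMEM;         /* no memory region available */
    return 0;
}

/* Callers can propagate the code unchanged; "return 1" would have
 * lost the failure reason and reads as a positive success value. */
static int probe(bool have_irq, bool have_iomem)
{
    int ret = setup_hw(have_irq, have_iomem);

    if (ret < 0)
        return ret;
    return 0;
}
```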
+1 -1
drivers/md/dm-clone-target.c
··· 161 161 162 162 static void __set_clone_mode(struct clone *clone, enum clone_metadata_mode new_mode) 163 163 { 164 - const char *descs[] = { 164 + static const char * const descs[] = { 165 165 "read-write", 166 166 "read-only", 167 167 "fail"
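The one-liner in dm-clone-target.c turns a function-local `const char *descs[]` into `static const char * const descs[]`: `static` stops the pointer array being rebuilt on the stack on every call, and the second `const` makes the pointers themselves immutable, so the whole table can live in read-only data. The pattern in miniature:

```c
#include <assert.h>
#include <string.h>

/* Fully immutable lookup table: built once, pointers cannot be
 * reassigned, eligible for the read-only data section. */
static const char * const descs[] = {
    "read-write",
    "read-only",
    "fail",
};

static const char *mode_name(unsigned int mode)
{
    if (mode >= sizeof(descs) / sizeof(descs[0]))
        return "unknown";
    return descs[mode];
}
```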
+8
drivers/md/dm-rq.c
··· 490 490 struct mapped_device *md = tio->md; 491 491 struct dm_target *ti = md->immutable_target; 492 492 493 + /* 494 + * blk-mq's unquiesce may come from outside events, such as 495 + * elevator switch, updating nr_requests or others, and request may 496 + * come during suspend, so simply ask for blk-mq to requeue it. 497 + */ 498 + if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags))) 499 + return BLK_STS_RESOURCE; 500 + 493 501 if (unlikely(!ti)) { 494 502 int srcu_idx; 495 503 struct dm_table *map = dm_get_live_table(md, &srcu_idx);
+12 -3
drivers/md/dm-verity-target.c
··· 475 475 struct bvec_iter start; 476 476 unsigned b; 477 477 struct crypto_wait wait; 478 + struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size); 478 479 479 480 for (b = 0; b < io->n_blocks; b++) { 480 481 int r; ··· 530 529 else if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA, 531 530 cur_block, NULL, &start) == 0) 532 531 continue; 533 - else if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA, 534 - cur_block)) 535 - return -EIO; 532 + else { 533 + if (bio->bi_status) { 534 + /* 535 + * Error correction failed; Just return error 536 + */ 537 + return -EIO; 538 + } 539 + if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_DATA, 540 + cur_block)) 541 + return -EIO; 542 + } 536 543 } 537 544 538 545 return 0;
+10 -7
drivers/md/dm.c
··· 496 496 false, 0, &io->stats_aux); 497 497 } 498 498 499 - static void end_io_acct(struct dm_io *io) 499 + static void end_io_acct(struct mapped_device *md, struct bio *bio, 500 + unsigned long start_time, struct dm_stats_aux *stats_aux) 500 501 { 501 - struct mapped_device *md = io->md; 502 - struct bio *bio = io->orig_bio; 503 - unsigned long duration = jiffies - io->start_time; 502 + unsigned long duration = jiffies - start_time; 504 503 505 - bio_end_io_acct(bio, io->start_time); 504 + bio_end_io_acct(bio, start_time); 506 505 507 506 if (unlikely(dm_stats_used(&md->stats))) 508 507 dm_stats_account_io(&md->stats, bio_data_dir(bio), 509 508 bio->bi_iter.bi_sector, bio_sectors(bio), 510 - true, duration, &io->stats_aux); 509 + true, duration, stats_aux); 511 510 512 511 /* nudge anyone waiting on suspend queue */ 513 512 if (unlikely(wq_has_sleeper(&md->wait))) ··· 789 790 blk_status_t io_error; 790 791 struct bio *bio; 791 792 struct mapped_device *md = io->md; 793 + unsigned long start_time = 0; 794 + struct dm_stats_aux stats_aux; 792 795 793 796 /* Push-back supersedes any I/O errors */ 794 797 if (unlikely(error)) { ··· 822 821 } 823 822 824 823 io_error = io->status; 825 - end_io_acct(io); 824 + start_time = io->start_time; 825 + stats_aux = io->stats_aux; 826 826 free_io(md, io); 827 + end_io_acct(md, bio, start_time, &stats_aux); 827 828 828 829 if (io_error == BLK_STS_DM_REQUEUE) 829 830 return;
+1
drivers/misc/Kconfig
··· 224 224 tristate "HiSilicon Hi6421v600 IRQ and powerkey" 225 225 depends on OF 226 226 depends on SPMI 227 + depends on HAS_IOMEM 227 228 select MFD_CORE 228 229 select REGMAP_SPMI 229 230 help
+1 -1
drivers/misc/cb710/sgbuf2.c
··· 47 47 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 48 48 return false; 49 49 #else 50 - return ((ptr - NULL) & 3) != 0; 50 + return ((uintptr_t)ptr & 3) != 0; 51 51 #endif 52 52 } 53 53
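The cb710 fix replaces `(ptr - NULL) & 3` — pointer arithmetic involving a null pointer, which is undefined behaviour — with a cast through `uintptr_t`, the portable way to inspect a pointer's low address bits:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* True if ptr is not 4-byte aligned. Casting to uintptr_t is the
 * defined way to look at address bits; subtracting NULL is UB. */
static bool needs_unaligned_copy(const void *ptr)
{
    return ((uintptr_t)ptr & 3) != 0;
}

/* Integer-address wrapper so the check can be exercised without
 * manufacturing real pointers. */
static bool addr_unaligned(uintptr_t addr)
{
    return needs_unaligned_copy((const void *)addr);
}
```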
+8
drivers/misc/eeprom/at25.c
··· 366 366 }; 367 367 MODULE_DEVICE_TABLE(of, at25_of_match); 368 368 369 + static const struct spi_device_id at25_spi_ids[] = { 370 + { .name = "at25",}, 371 + { .name = "fm25",}, 372 + { } 373 + }; 374 + MODULE_DEVICE_TABLE(spi, at25_spi_ids); 375 + 369 376 static int at25_probe(struct spi_device *spi) 370 377 { 371 378 struct at25_data *at25 = NULL; ··· 498 491 .dev_groups = sernum_groups, 499 492 }, 500 493 .probe = at25_probe, 494 + .id_table = at25_spi_ids, 501 495 }; 502 496 503 497 module_spi_driver(at25_driver);
+18
drivers/misc/eeprom/eeprom_93xx46.c
··· 406 406 }; 407 407 MODULE_DEVICE_TABLE(of, eeprom_93xx46_of_table); 408 408 409 + static const struct spi_device_id eeprom_93xx46_spi_ids[] = { 410 + { .name = "eeprom-93xx46", 411 + .driver_data = (kernel_ulong_t)&at93c46_data, }, 412 + { .name = "at93c46", 413 + .driver_data = (kernel_ulong_t)&at93c46_data, }, 414 + { .name = "at93c46d", 415 + .driver_data = (kernel_ulong_t)&atmel_at93c46d_data, }, 416 + { .name = "at93c56", 417 + .driver_data = (kernel_ulong_t)&at93c56_data, }, 418 + { .name = "at93c66", 419 + .driver_data = (kernel_ulong_t)&at93c66_data, }, 420 + { .name = "93lc46b", 421 + .driver_data = (kernel_ulong_t)&microchip_93lc46b_data, }, 422 + {} 423 + }; 424 + MODULE_DEVICE_TABLE(spi, eeprom_93xx46_spi_ids); 425 + 409 426 static int eeprom_93xx46_probe_dt(struct spi_device *spi) 410 427 { 411 428 const struct of_device_id *of_id = ··· 572 555 }, 573 556 .probe = eeprom_93xx46_probe, 574 557 .remove = eeprom_93xx46_remove, 558 + .id_table = eeprom_93xx46_spi_ids, 575 559 }; 576 560 577 561 module_spi_driver(eeprom_93xx46_driver);
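Both EEPROM hunks add a `spi_device_id` table with `MODULE_DEVICE_TABLE(spi, ...)` so the driver also binds (and autoloads) for devices instantiated by bare name rather than via a DT `compatible` match, with `driver_data` carrying the per-chip parameters. The core's lookup boils down to a name scan over a terminated table; a simplified sketch (not the kernel's actual structures — the real `spi_device_id.name` is a fixed-size char array):

```c
#include <assert.h>
#include <string.h>

/* Simplified shape of struct spi_device_id. */
struct dev_id {
    const char *name;
    unsigned long driver_data;  /* per-chip data in the real driver */
};

static const struct dev_id eeprom_ids[] = {
    { "at93c46", 46 },
    { "at93c56", 56 },
    { "at93c66", 66 },
    { NULL, 0 }                 /* table terminator */
};

/* Mirrors the id-table walk: first name match wins. */
static const struct dev_id *match_id(const struct dev_id *ids,
                                     const char *name)
{
    for (; ids->name; ids++)
        if (strcmp(ids->name, name) == 0)
            return ids;
    return NULL;
}
```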
+2
drivers/misc/fastrpc.c
··· 814 814 rpra[i].pv = (u64) ctx->args[i].ptr; 815 815 pages[i].addr = ctx->maps[i]->phys; 816 816 817 + mmap_read_lock(current->mm); 817 818 vma = find_vma(current->mm, ctx->args[i].ptr); 818 819 if (vma) 819 820 pages[i].addr += ctx->args[i].ptr - 820 821 vma->vm_start; 822 + mmap_read_unlock(current->mm); 821 823 822 824 pg_start = (ctx->args[i].ptr & PAGE_MASK) >> PAGE_SHIFT; 823 825 pg_end = ((ctx->args[i].ptr + len - 1) & PAGE_MASK) >>
+1
drivers/misc/gehc-achc.c
··· 539 539 540 540 static const struct spi_device_id gehc_achc_id[] = { 541 541 { "ge,achc", 0 }, 542 + { "achc", 0 }, 542 543 { } 543 544 }; 544 545 MODULE_DEVICE_TABLE(spi, gehc_achc_id);
+19 -14
drivers/misc/habanalabs/common/command_submission.c
··· 2649 2649 free_seq_arr: 2650 2650 kfree(cs_seq_arr); 2651 2651 2652 - /* update output args */ 2653 - memset(args, 0, sizeof(*args)); 2654 2652 if (rc) 2655 2653 return rc; 2654 + 2655 + if (mcs_data.wait_status == -ERESTARTSYS) { 2656 + dev_err_ratelimited(hdev->dev, 2657 + "user process got signal while waiting for Multi-CS\n"); 2658 + return -EINTR; 2659 + } 2660 + 2661 + /* update output args */ 2662 + memset(args, 0, sizeof(*args)); 2656 2663 2657 2664 if (mcs_data.completion_bitmap) { 2658 2665 args->out.status = HL_WAIT_CS_STATUS_COMPLETED; ··· 2674 2667 /* update if some CS was gone */ 2675 2668 if (mcs_data.timestamp) 2676 2669 args->out.flags |= HL_WAIT_CS_STATUS_FLAG_GONE; 2677 - } else if (mcs_data.wait_status == -ERESTARTSYS) { 2678 - args->out.status = HL_WAIT_CS_STATUS_INTERRUPTED; 2679 2670 } else { 2680 2671 args->out.status = HL_WAIT_CS_STATUS_BUSY; 2681 2672 } ··· 2693 2688 rc = _hl_cs_wait_ioctl(hdev, hpriv->ctx, args->in.timeout_us, seq, 2694 2689 &status, &timestamp); 2695 2690 2691 + if (rc == -ERESTARTSYS) { 2692 + dev_err_ratelimited(hdev->dev, 2693 + "user process got signal while waiting for CS handle %llu\n", 2694 + seq); 2695 + return -EINTR; 2696 + } 2697 + 2696 2698 memset(args, 0, sizeof(*args)); 2697 2699 2698 2700 if (rc) { 2699 - if (rc == -ERESTARTSYS) { 2700 - dev_err_ratelimited(hdev->dev, 2701 - "user process got signal while waiting for CS handle %llu\n", 2702 - seq); 2703 - args->out.status = HL_WAIT_CS_STATUS_INTERRUPTED; 2704 - rc = -EINTR; 2705 - } else if (rc == -ETIMEDOUT) { 2701 + if (rc == -ETIMEDOUT) { 2706 2702 dev_err_ratelimited(hdev->dev, 2707 2703 "CS %llu has timed-out while user process is waiting for it\n", 2708 2704 seq); ··· 2829 2823 dev_err_ratelimited(hdev->dev, 2830 2824 "user process got signal while waiting for interrupt ID %d\n", 2831 2825 interrupt->interrupt_id); 2832 - *status = HL_WAIT_CS_STATUS_INTERRUPTED; 2833 2826 rc = -EINTR; 2834 2827 } else { 2835 2828 *status = CS_WAIT_STATUS_BUSY; 
··· 2883 2878 args->in.interrupt_timeout_us, args->in.addr, 2884 2879 args->in.target, interrupt_offset, &status); 2885 2880 2886 - memset(args, 0, sizeof(*args)); 2887 - 2888 2881 if (rc) { 2889 2882 if (rc != -EINTR) 2890 2883 dev_err_ratelimited(hdev->dev, ··· 2890 2887 2891 2888 return rc; 2892 2889 } 2890 + 2891 + memset(args, 0, sizeof(*args)); 2893 2892 2894 2893 switch (status) { 2895 2894 case CS_WAIT_STATUS_COMPLETED:
+8 -4
drivers/misc/mei/hbm.c
··· 1298 1298 1299 1299 if (dev->dev_state != MEI_DEV_INIT_CLIENTS || 1300 1300 dev->hbm_state != MEI_HBM_STARTING) { 1301 - if (dev->dev_state == MEI_DEV_POWER_DOWN) { 1301 + if (dev->dev_state == MEI_DEV_POWER_DOWN || 1302 + dev->dev_state == MEI_DEV_POWERING_DOWN) { 1302 1303 dev_dbg(dev->dev, "hbm: start: on shutdown, ignoring\n"); 1303 1304 return 0; 1304 1305 } ··· 1382 1381 1383 1382 if (dev->dev_state != MEI_DEV_INIT_CLIENTS || 1384 1383 dev->hbm_state != MEI_HBM_DR_SETUP) { 1385 - if (dev->dev_state == MEI_DEV_POWER_DOWN) { 1384 + if (dev->dev_state == MEI_DEV_POWER_DOWN || 1385 + dev->dev_state == MEI_DEV_POWERING_DOWN) { 1386 1386 dev_dbg(dev->dev, "hbm: dma setup response: on shutdown, ignoring\n"); 1387 1387 return 0; 1388 1388 } ··· 1450 1448 1451 1449 if (dev->dev_state != MEI_DEV_INIT_CLIENTS || 1452 1450 dev->hbm_state != MEI_HBM_CLIENT_PROPERTIES) { 1453 - if (dev->dev_state == MEI_DEV_POWER_DOWN) { 1451 + if (dev->dev_state == MEI_DEV_POWER_DOWN || 1452 + dev->dev_state == MEI_DEV_POWERING_DOWN) { 1454 1453 dev_dbg(dev->dev, "hbm: properties response: on shutdown, ignoring\n"); 1455 1454 return 0; 1456 1455 } ··· 1493 1490 1494 1491 if (dev->dev_state != MEI_DEV_INIT_CLIENTS || 1495 1492 dev->hbm_state != MEI_HBM_ENUM_CLIENTS) { 1496 - if (dev->dev_state == MEI_DEV_POWER_DOWN) { 1493 + if (dev->dev_state == MEI_DEV_POWER_DOWN || 1494 + dev->dev_state == MEI_DEV_POWERING_DOWN) { 1497 1495 dev_dbg(dev->dev, "hbm: enumeration response: on shutdown, ignoring\n"); 1498 1496 return 0; 1499 1497 }
+1
drivers/misc/mei/hw-me-regs.h
··· 92 92 #define MEI_DEV_ID_CDF 0x18D3 /* Cedar Fork */ 93 93 94 94 #define MEI_DEV_ID_ICP_LP 0x34E0 /* Ice Lake Point LP */ 95 + #define MEI_DEV_ID_ICP_N 0x38E0 /* Ice Lake Point N */ 95 96 96 97 #define MEI_DEV_ID_JSP_N 0x4DE0 /* Jasper Lake Point N */ 97 98
+1
drivers/misc/mei/pci-me.c
··· 96 96 {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_H_3, MEI_ME_PCH8_ITOUCH_CFG)}, 97 97 98 98 {MEI_PCI_DEVICE(MEI_DEV_ID_ICP_LP, MEI_ME_PCH12_CFG)}, 99 + {MEI_PCI_DEVICE(MEI_DEV_ID_ICP_N, MEI_ME_PCH12_CFG)}, 99 100 100 101 {MEI_PCI_DEVICE(MEI_DEV_ID_TGP_LP, MEI_ME_PCH15_CFG)}, 101 102 {MEI_PCI_DEVICE(MEI_DEV_ID_TGP_H, MEI_ME_PCH15_SPS_CFG)},
+6 -2
drivers/mtd/nand/raw/qcom_nandc.c
··· 1676 1676 struct nand_ecc_ctrl *ecc = &chip->ecc; 1677 1677 int data_size1, data_size2, oob_size1, oob_size2; 1678 1678 int ret, reg_off = FLASH_BUF_ACC, read_loc = 0; 1679 + int raw_cw = cw; 1679 1680 1680 1681 nand_read_page_op(chip, page, 0, NULL, 0); 1681 1682 host->use_ecc = false; 1682 1683 1684 + if (nandc->props->qpic_v2) 1685 + raw_cw = ecc->steps - 1; 1686 + 1683 1687 clear_bam_transaction(nandc); 1684 1688 set_address(host, host->cw_size * cw, page); 1685 - update_rw_regs(host, 1, true, cw); 1689 + update_rw_regs(host, 1, true, raw_cw); 1686 1690 config_nand_page_read(chip); 1687 1691 1688 1692 data_size1 = mtd->writesize - host->cw_size * (ecc->steps - 1); ··· 1715 1711 nandc_set_read_loc(chip, cw, 3, read_loc, oob_size2, 1); 1716 1712 } 1717 1713 1718 - config_nand_cw_read(chip, false, cw); 1714 + config_nand_cw_read(chip, false, raw_cw); 1719 1715 1720 1716 read_data_dma(nandc, reg_off, data_buf, data_size1, 0); 1721 1717 reg_off += data_size1;
+12 -2
drivers/net/can/m_can/m_can_platform.c
··· 32 32 static int iomap_read_fifo(struct m_can_classdev *cdev, int offset, void *val, size_t val_count) 33 33 { 34 34 struct m_can_plat_priv *priv = cdev_to_priv(cdev); 35 + void __iomem *src = priv->mram_base + offset; 35 36 36 - ioread32_rep(priv->mram_base + offset, val, val_count); 37 + while (val_count--) { 38 + *(unsigned int *)val = ioread32(src); 39 + val += 4; 40 + src += 4; 41 + } 37 42 38 43 return 0; 39 44 } ··· 56 51 const void *val, size_t val_count) 57 52 { 58 53 struct m_can_plat_priv *priv = cdev_to_priv(cdev); 54 + void __iomem *dst = priv->mram_base + offset; 59 55 60 - iowrite32_rep(priv->base + offset, val, val_count); 56 + while (val_count--) { 57 + iowrite32(*(unsigned int *)val, dst); 58 + val += 4; 59 + dst += 4; 60 + } 61 61 62 62 return 0; 63 63 }
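Note the m_can write hunk also fixes a latent bug visible in the removed line: it wrote through `priv->base` instead of `priv->mram_base`. Stripped of the MMIO accessors, the replacement loops are a plain one-word-at-a-time 32-bit block copy; a userspace sketch of that shape:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Copy val_count 32-bit words from src to dst, one word per
 * iteration, mirroring the shape of the new FIFO read/write loops
 * (memcpy stands in for the ioread32/iowrite32 accessors). */
static void copy_words(void *dst, const void *src, size_t val_count)
{
    while (val_count--) {
        uint32_t w;

        memcpy(&w, src, 4);         /* one 32-bit load */
        memcpy(dst, &w, 4);         /* one 32-bit store */
        src = (const char *)src + 4;
        dst = (char *)dst + 4;
    }
}

static int copy_words_roundtrip(void)
{
    uint32_t src[3] = { 1, 2, 3 }, dst[3] = { 0, 0, 0 };

    copy_words(dst, src, 3);
    return dst[0] == 1 && dst[1] == 2 && dst[2] == 3;
}
```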
+12 -8
drivers/net/can/rcar/rcar_can.c
··· 846 846 struct rcar_can_priv *priv = netdev_priv(ndev); 847 847 u16 ctlr; 848 848 849 - if (netif_running(ndev)) { 850 - netif_stop_queue(ndev); 851 - netif_device_detach(ndev); 852 - } 849 + if (!netif_running(ndev)) 850 + return 0; 851 + 852 + netif_stop_queue(ndev); 853 + netif_device_detach(ndev); 854 + 853 855 ctlr = readw(&priv->regs->ctlr); 854 856 ctlr |= RCAR_CAN_CTLR_CANM_HALT; 855 857 writew(ctlr, &priv->regs->ctlr); ··· 870 868 u16 ctlr; 871 869 int err; 872 870 871 + if (!netif_running(ndev)) 872 + return 0; 873 + 873 874 err = clk_enable(priv->clk); 874 875 if (err) { 875 876 netdev_err(ndev, "clk_enable() failed, error %d\n", err); ··· 886 881 writew(ctlr, &priv->regs->ctlr); 887 882 priv->can.state = CAN_STATE_ERROR_ACTIVE; 888 883 889 - if (netif_running(ndev)) { 890 - netif_device_attach(ndev); 891 - netif_start_queue(ndev); 892 - } 884 + netif_device_attach(ndev); 885 + netif_start_queue(ndev); 886 + 893 887 return 0; 894 888 } 895 889
+4 -5
drivers/net/can/sja1000/peak_pci.c
··· 752 752 struct net_device *prev_dev = chan->prev_dev; 753 753 754 754 dev_info(&pdev->dev, "removing device %s\n", dev->name); 755 + /* do that only for first channel */ 756 + if (!prev_dev && chan->pciec_card) 757 + peak_pciec_remove(chan->pciec_card); 755 758 unregister_sja1000dev(dev); 756 759 free_sja1000dev(dev); 757 760 dev = prev_dev; 758 761 759 - if (!dev) { 760 - /* do that only for first channel */ 761 - if (chan->pciec_card) 762 - peak_pciec_remove(chan->pciec_card); 762 + if (!dev) 763 763 break; 764 - } 765 764 priv = netdev_priv(dev); 766 765 chan = priv->priv; 767 766 }
+3 -5
drivers/net/can/usb/peak_usb/pcan_usb_fd.c
··· 551 551 } else if (sm->channel_p_w_b & PUCAN_BUS_WARNING) { 552 552 new_state = CAN_STATE_ERROR_WARNING; 553 553 } else { 554 - /* no error bit (so, no error skb, back to active state) */ 555 - dev->can.state = CAN_STATE_ERROR_ACTIVE; 554 + /* back to (or still in) ERROR_ACTIVE state */ 555 + new_state = CAN_STATE_ERROR_ACTIVE; 556 556 pdev->bec.txerr = 0; 557 557 pdev->bec.rxerr = 0; 558 - return 0; 559 558 } 560 559 561 560 /* state hasn't changed */ ··· 567 568 568 569 /* allocate an skb to store the error frame */ 569 570 skb = alloc_can_err_skb(netdev, &cf); 570 - if (skb) 571 - can_change_state(netdev, cf, tx_state, rx_state); 571 + can_change_state(netdev, cf, tx_state, rx_state); 572 572 573 573 /* things must be done even in case of OOM */ 574 574 if (new_state == CAN_STATE_BUS_OFF)
+1 -1
drivers/net/dsa/lantiq_gswip.c
··· 230 230 #define GSWIP_SDMA_PCTRLp(p) (0xBC0 + ((p) * 0x6)) 231 231 #define GSWIP_SDMA_PCTRL_EN BIT(0) /* SDMA Port Enable */ 232 232 #define GSWIP_SDMA_PCTRL_FCEN BIT(1) /* Flow Control Enable */ 233 - #define GSWIP_SDMA_PCTRL_PAUFWD BIT(1) /* Pause Frame Forwarding */ 233 + #define GSWIP_SDMA_PCTRL_PAUFWD BIT(3) /* Pause Frame Forwarding */ 234 234 235 235 #define GSWIP_TABLE_ACTIVE_VLAN 0x01 236 236 #define GSWIP_TABLE_VLAN_MAPPING 0x02
+1 -7
drivers/net/dsa/mt7530.c
··· 1035 1035 { 1036 1036 struct mt7530_priv *priv = ds->priv; 1037 1037 1038 - if (!dsa_is_user_port(ds, port)) 1039 - return 0; 1040 - 1041 1038 mutex_lock(&priv->reg_mutex); 1042 1039 1043 1040 /* Allow the user port gets connected to the cpu port and also ··· 1056 1059 mt7530_port_disable(struct dsa_switch *ds, int port) 1057 1060 { 1058 1061 struct mt7530_priv *priv = ds->priv; 1059 - 1060 - if (!dsa_is_user_port(ds, port)) 1061 - return; 1062 1062 1063 1063 mutex_lock(&priv->reg_mutex); 1064 1064 ··· 3205 3211 return -ENOMEM; 3206 3212 3207 3213 priv->ds->dev = &mdiodev->dev; 3208 - priv->ds->num_ports = DSA_MAX_PORTS; 3214 + priv->ds->num_ports = MT7530_NUM_PORTS; 3209 3215 3210 3216 /* Use medatek,mcm property to distinguish hardware type that would 3211 3217 * casues a little bit differences on power-on sequence.
+1 -1
drivers/net/ethernet/cavium/thunder/nic_main.c
··· 1193 1193 dev_err(&nic->pdev->dev, 1194 1194 "Request for #%d msix vectors failed, returned %d\n", 1195 1195 nic->num_vec, ret); 1196 - return 1; 1196 + return ret; 1197 1197 } 1198 1198 1199 1199 /* Register mailbox interrupt handler */
+2 -2
drivers/net/ethernet/cavium/thunder/nicvf_main.c
··· 1223 1223 if (ret < 0) { 1224 1224 netdev_err(nic->netdev, 1225 1225 "Req for #%d msix vectors failed\n", nic->num_vec); 1226 - return 1; 1226 + return ret; 1227 1227 } 1228 1228 1229 1229 sprintf(nic->irq_name[irq], "%s Mbox", "NICVF"); ··· 1242 1242 if (!nicvf_check_pf_ready(nic)) { 1243 1243 nicvf_disable_intr(nic, NICVF_INTR_MBOX, 0); 1244 1244 nicvf_unregister_interrupts(nic); 1245 - return 1; 1245 + return -EIO; 1246 1246 } 1247 1247 1248 1248 return 0;
+1 -1
drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
··· 157 157 { ENETC_PM0_TFRM, "MAC tx frames" }, 158 158 { ENETC_PM0_TFCS, "MAC tx fcs errors" }, 159 159 { ENETC_PM0_TVLAN, "MAC tx VLAN frames" }, 160 - { ENETC_PM0_TERR, "MAC tx frames" }, 160 + { ENETC_PM0_TERR, "MAC tx frame errors" }, 161 161 { ENETC_PM0_TUCA, "MAC tx unicast frames" }, 162 162 { ENETC_PM0_TMCA, "MAC tx multicast frames" }, 163 163 { ENETC_PM0_TBCA, "MAC tx broadcast frames" },
+4 -1
drivers/net/ethernet/freescale/enetc/enetc_pf.c
··· 517 517 518 518 static void enetc_configure_port_mac(struct enetc_hw *hw) 519 519 { 520 + int tc; 521 + 520 522 enetc_port_wr(hw, ENETC_PM0_MAXFRM, 521 523 ENETC_SET_MAXFRM(ENETC_RX_MAXFRM_SIZE)); 522 524 523 - enetc_port_wr(hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE); 525 + for (tc = 0; tc < 8; tc++) 526 + enetc_port_wr(hw, ENETC_PTCMSDUR(tc), ENETC_MAC_MAXFRM_SIZE); 524 527 525 528 enetc_port_wr(hw, ENETC_PM0_CMD_CFG, ENETC_PM0_CMD_PHY_TX_EN | 526 529 ENETC_PM0_CMD_TXP | ENETC_PM0_PROMISC);
+21
drivers/net/ethernet/hisilicon/hns3/hnae3.c
··· 10 10 static LIST_HEAD(hnae3_client_list); 11 11 static LIST_HEAD(hnae3_ae_dev_list); 12 12 13 + void hnae3_unregister_ae_algo_prepare(struct hnae3_ae_algo *ae_algo) 14 + { 15 + const struct pci_device_id *pci_id; 16 + struct hnae3_ae_dev *ae_dev; 17 + 18 + if (!ae_algo) 19 + return; 20 + 21 + list_for_each_entry(ae_dev, &hnae3_ae_dev_list, node) { 22 + if (!hnae3_get_bit(ae_dev->flag, HNAE3_DEV_INITED_B)) 23 + continue; 24 + 25 + pci_id = pci_match_id(ae_algo->pdev_id_table, ae_dev->pdev); 26 + if (!pci_id) 27 + continue; 28 + if (IS_ENABLED(CONFIG_PCI_IOV)) 29 + pci_disable_sriov(ae_dev->pdev); 30 + } 31 + } 32 + EXPORT_SYMBOL(hnae3_unregister_ae_algo_prepare); 33 + 13 34 /* we are keeping things simple and using single lock for all the 14 35 * list. This is a non-critical code so other updations, if happen 15 36 * in parallel, can wait.
+1
drivers/net/ethernet/hisilicon/hns3/hnae3.h
··· 860 860 int hnae3_register_ae_dev(struct hnae3_ae_dev *ae_dev); 861 861 void hnae3_unregister_ae_dev(struct hnae3_ae_dev *ae_dev); 862 862 863 + void hnae3_unregister_ae_algo_prepare(struct hnae3_ae_algo *ae_algo); 863 864 void hnae3_unregister_ae_algo(struct hnae3_ae_algo *ae_algo); 864 865 void hnae3_register_ae_algo(struct hnae3_ae_algo *ae_algo); 865 866
+22 -15
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 1847 1847 1848 1848 static int hns3_skb_linearize(struct hns3_enet_ring *ring, 1849 1849 struct sk_buff *skb, 1850 - u8 max_non_tso_bd_num, 1851 1850 unsigned int bd_num) 1852 1851 { 1853 1852 /* 'bd_num == UINT_MAX' means the skb' fraglist has a ··· 1863 1864 * will not help. 1864 1865 */ 1865 1866 if (skb->len > HNS3_MAX_TSO_SIZE || 1866 - (!skb_is_gso(skb) && skb->len > 1867 - HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num))) { 1867 + (!skb_is_gso(skb) && skb->len > HNS3_MAX_NON_TSO_SIZE)) { 1868 1868 u64_stats_update_begin(&ring->syncp); 1869 1869 ring->stats.hw_limitation++; 1870 1870 u64_stats_update_end(&ring->syncp); ··· 1898 1900 goto out; 1899 1901 } 1900 1902 1901 - if (hns3_skb_linearize(ring, skb, max_non_tso_bd_num, 1902 - bd_num)) 1903 + if (hns3_skb_linearize(ring, skb, bd_num)) 1903 1904 return -ENOMEM; 1904 1905 1905 1906 bd_num = hns3_tx_bd_count(skb->len); ··· 3255 3258 { 3256 3259 hns3_unmap_buffer(ring, &ring->desc_cb[i]); 3257 3260 ring->desc[i].addr = 0; 3261 + ring->desc_cb[i].refill = 0; 3258 3262 } 3259 3263 3260 3264 static void hns3_free_buffer_detach(struct hns3_enet_ring *ring, int i, ··· 3334 3336 3335 3337 ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma + 3336 3338 ring->desc_cb[i].page_offset); 3339 + ring->desc_cb[i].refill = 1; 3337 3340 3338 3341 return 0; 3339 3342 } ··· 3364 3365 { 3365 3366 hns3_unmap_buffer(ring, &ring->desc_cb[i]); 3366 3367 ring->desc_cb[i] = *res_cb; 3368 + ring->desc_cb[i].refill = 1; 3367 3369 ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma + 3368 3370 ring->desc_cb[i].page_offset); 3369 3371 ring->desc[i].rx.bd_base_info = 0; ··· 3373 3373 static void hns3_reuse_buffer(struct hns3_enet_ring *ring, int i) 3374 3374 { 3375 3375 ring->desc_cb[i].reuse_flag = 0; 3376 + ring->desc_cb[i].refill = 1; 3376 3377 ring->desc[i].addr = cpu_to_le64(ring->desc_cb[i].dma + 3377 3378 ring->desc_cb[i].page_offset); 3378 3379 ring->desc[i].rx.bd_base_info = 0; ··· 3480 3479 int ntc = ring->next_to_clean; 3481 
3480 int ntu = ring->next_to_use; 3482 3481 3482 + if (unlikely(ntc == ntu && !ring->desc_cb[ntc].refill)) 3483 + return ring->desc_num; 3484 + 3483 3485 return ((ntc >= ntu) ? 0 : ring->desc_num) + ntc - ntu; 3484 3486 } 3485 3487 3486 - static void hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring, 3488 + /* Return true if there is any allocation failure */ 3489 + static bool hns3_nic_alloc_rx_buffers(struct hns3_enet_ring *ring, 3487 3490 int cleand_count) 3488 3491 { 3489 3492 struct hns3_desc_cb *desc_cb; ··· 3512 3507 hns3_rl_err(ring_to_netdev(ring), 3513 3508 "alloc rx buffer failed: %d\n", 3514 3509 ret); 3515 - break; 3510 + 3511 + writel(i, ring->tqp->io_base + 3512 + HNS3_RING_RX_RING_HEAD_REG); 3513 + return true; 3516 3514 } 3517 3515 hns3_replace_buffer(ring, ring->next_to_use, &res_cbs); 3518 3516 ··· 3528 3520 } 3529 3521 3530 3522 writel(i, ring->tqp->io_base + HNS3_RING_RX_RING_HEAD_REG); 3523 + return false; 3531 3524 } 3532 3525 3533 3526 static bool hns3_can_reuse_page(struct hns3_desc_cb *cb) ··· 3833 3824 { 3834 3825 ring->desc[ring->next_to_clean].rx.bd_base_info &= 3835 3826 cpu_to_le32(~BIT(HNS3_RXD_VLD_B)); 3827 + ring->desc_cb[ring->next_to_clean].refill = 0; 3836 3828 ring->next_to_clean += 1; 3837 3829 3838 3830 if (unlikely(ring->next_to_clean == ring->desc_num)) ··· 4180 4170 { 4181 4171 #define RCB_NOF_ALLOC_RX_BUFF_ONCE 16 4182 4172 int unused_count = hns3_desc_unused(ring); 4173 + bool failure = false; 4183 4174 int recv_pkts = 0; 4184 4175 int err; 4185 4176 ··· 4189 4178 while (recv_pkts < budget) { 4190 4179 /* Reuse or realloc buffers */ 4191 4180 if (unused_count >= RCB_NOF_ALLOC_RX_BUFF_ONCE) { 4192 - hns3_nic_alloc_rx_buffers(ring, unused_count); 4193 - unused_count = hns3_desc_unused(ring) - 4194 - ring->pending_buf; 4181 + failure = failure || 4182 + hns3_nic_alloc_rx_buffers(ring, unused_count); 4183 + unused_count = 0; 4195 4184 } 4196 4185 4197 4186 /* Poll one pkt */ ··· 4210 4199 } 4211 4200 4212 4201 out: 4213 
- /* Make all data has been write before submit */ 4214 - if (unused_count > 0) 4215 - hns3_nic_alloc_rx_buffers(ring, unused_count); 4216 - 4217 - return recv_pkts; 4202 + return failure ? budget : recv_pkts; 4218 4203 } 4219 4204 4220 4205 static void hns3_update_rx_int_coalesce(struct hns3_enet_tqp_vector *tqp_vector)
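The new `refill` bit in the hns3 hunks resolves a classic ring-buffer ambiguity: with only head and tail indices, `ntc == ntu` can mean either "empty" or "completely full", and `hns3_desc_unused()` previously always read it as empty. A minimal ring that disambiguates the same way, with a single flag standing in for the per-descriptor refill state:

```c
#include <assert.h>
#include <stdbool.h>

#define RING_SIZE 4U

/* ntc == ntu is ambiguous; track "full" explicitly, like the
 * per-descriptor refill bit in the patch. */
struct ring {
    unsigned int ntc;   /* next to clean (consumer) */
    unsigned int ntu;   /* next to use (producer) */
    bool full;
};

static unsigned int ring_unused(const struct ring *r)
{
    if (r->ntc == r->ntu)
        return r->full ? 0 : RING_SIZE;
    return ((r->ntc >= r->ntu) ? 0 : RING_SIZE) + r->ntc - r->ntu;
}

static void ring_produce(struct ring *r)
{
    r->ntu = (r->ntu + 1) % RING_SIZE;
    if (r->ntu == r->ntc)
        r->full = true;
}

static void ring_consume(struct ring *r)
{
    r->ntc = (r->ntc + 1) % RING_SIZE;
    r->full = false;
}

/* Fresh ring, produce then consume the given counts, report slack. */
static unsigned int unused_after(unsigned int produce, unsigned int consume)
{
    struct ring r = { 0, 0, false };
    unsigned int i;

    for (i = 0; i < produce; i++)
        ring_produce(&r);
    for (i = 0; i < consume; i++)
        ring_consume(&r);
    return ring_unused(&r);
}
```

Without the flag, `unused_after(RING_SIZE, 0)` would wrongly report a full ring as entirely free, which is the bug the hunk closes.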
+3 -4
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
··· 186 186 187 187 #define HNS3_MAX_BD_SIZE 65535 188 188 #define HNS3_MAX_TSO_BD_NUM 63U 189 - #define HNS3_MAX_TSO_SIZE \ 190 - (HNS3_MAX_BD_SIZE * HNS3_MAX_TSO_BD_NUM) 189 + #define HNS3_MAX_TSO_SIZE 1048576U 190 + #define HNS3_MAX_NON_TSO_SIZE 9728U 191 191 192 - #define HNS3_MAX_NON_TSO_SIZE(max_non_tso_bd_num) \ 193 - (HNS3_MAX_BD_SIZE * (max_non_tso_bd_num)) 194 192 195 193 #define HNS3_VECTOR_GL0_OFFSET 0x100 196 194 #define HNS3_VECTOR_GL1_OFFSET 0x200 ··· 330 332 u32 length; /* length of the buffer */ 331 333 332 334 u16 reuse_flag; 335 + u16 refill; 333 336 334 337 /* desc type, used by the ring user to mark the type of the priv data */ 335 338 u16 type;
+9
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
··· 137 137 *changed = true; 138 138 break; 139 139 case IEEE_8021QAZ_TSA_ETS: 140 + /* The hardware will switch to sp mode if bandwidth is 141 + * 0, so limit ets bandwidth must be greater than 0. 142 + */ 143 + if (!ets->tc_tx_bw[i]) { 144 + dev_err(&hdev->pdev->dev, 145 + "tc%u ets bw cannot be 0\n", i); 146 + return -EINVAL; 147 + } 148 + 140 149 if (hdev->tm_info.tc_info[i].tc_sch_mode != 141 150 HCLGE_SCH_MODE_DWRR) 142 151 *changed = true;
+4 -1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
··· 1560 1560 1561 1561 /* configure TM QCN hw errors */ 1562 1562 hclge_cmd_setup_basic_desc(&desc, HCLGE_TM_QCN_MEM_INT_CFG, false); 1563 - if (en) 1563 + desc.data[0] = cpu_to_le32(HCLGE_TM_QCN_ERR_INT_TYPE); 1564 + if (en) { 1565 + desc.data[0] |= cpu_to_le32(HCLGE_TM_QCN_FIFO_INT_EN); 1564 1566 desc.data[1] = cpu_to_le32(HCLGE_TM_QCN_MEM_ERR_INT_EN); 1567 + } 1565 1568 1566 1569 ret = hclge_cmd_send(&hdev->hw, &desc, 1); 1567 1570 if (ret)
+2
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
··· 50 50 #define HCLGE_PPP_MPF_ECC_ERR_INT3_EN 0x003F 51 51 #define HCLGE_PPP_MPF_ECC_ERR_INT3_EN_MASK 0x003F 52 52 #define HCLGE_TM_SCH_ECC_ERR_INT_EN 0x3 53 + #define HCLGE_TM_QCN_ERR_INT_TYPE 0x29 54 + #define HCLGE_TM_QCN_FIFO_INT_EN 0xFFFF00 53 55 #define HCLGE_TM_QCN_MEM_ERR_INT_EN 0xFFFFFF 54 56 #define HCLGE_NCSI_ERR_INT_EN 0x3 55 57 #define HCLGE_NCSI_ERR_INT_TYPE 0x9
+1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 13093 13093 13094 13094 static void hclge_exit(void) 13095 13095 { 13096 + hnae3_unregister_ae_algo_prepare(&ae_algo); 13096 13097 hnae3_unregister_ae_algo(&ae_algo); 13097 13098 destroy_workqueue(hclge_wq); 13098 13099 }
+2
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_tm.c
··· 752 752 hdev->tm_info.pg_info[i].tc_bit_map = hdev->hw_tc_map; 753 753 for (k = 0; k < hdev->tm_info.num_tc; k++) 754 754 hdev->tm_info.pg_info[i].tc_dwrr[k] = BW_PERCENT; 755 + for (; k < HNAE3_MAX_TC; k++) 756 + hdev->tm_info.pg_info[i].tc_dwrr[k] = 0; 755 757 } 756 758 } 757 759
+3 -3
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
··· 2273 2273 hdev->reset_attempts = 0; 2274 2274 2275 2275 hdev->last_reset_time = jiffies; 2276 - while ((hdev->reset_type = 2277 - hclgevf_get_reset_level(hdev, &hdev->reset_pending)) 2278 - != HNAE3_NONE_RESET) 2276 + hdev->reset_type = 2277 + hclgevf_get_reset_level(hdev, &hdev->reset_pending); 2278 + if (hdev->reset_type != HNAE3_NONE_RESET) 2279 2279 hclgevf_reset(hdev); 2280 2280 } else if (test_and_clear_bit(HCLGEVF_RESET_REQUESTED, 2281 2281 &hdev->reset_state)) {
+3 -1
drivers/net/ethernet/intel/e1000e/e1000.h
··· 114 114 board_pch2lan, 115 115 board_pch_lpt, 116 116 board_pch_spt, 117 - board_pch_cnp 117 + board_pch_cnp, 118 + board_pch_tgp 118 119 }; 119 120 120 121 struct e1000_ps_page { ··· 501 500 extern const struct e1000_info e1000_pch_lpt_info; 502 501 extern const struct e1000_info e1000_pch_spt_info; 503 502 extern const struct e1000_info e1000_pch_cnp_info; 503 + extern const struct e1000_info e1000_pch_tgp_info; 504 504 extern const struct e1000_info e1000_es2_info; 505 505 506 506 void e1000e_ptp_init(struct e1000_adapter *adapter);
+30 -1
drivers/net/ethernet/intel/e1000e/ich8lan.c
··· 4813 4813 static s32 e1000_init_hw_ich8lan(struct e1000_hw *hw) 4814 4814 { 4815 4815 struct e1000_mac_info *mac = &hw->mac; 4816 - u32 ctrl_ext, txdctl, snoop; 4816 + u32 ctrl_ext, txdctl, snoop, fflt_dbg; 4817 4817 s32 ret_val; 4818 4818 u16 i; 4819 4819 ··· 4871 4871 else 4872 4872 snoop = (u32)~(PCIE_NO_SNOOP_ALL); 4873 4873 e1000e_set_pcie_no_snoop(hw, snoop); 4874 + 4875 + /* Enable workaround for packet loss issue on TGP PCH 4876 + * Do not gate DMA clock from the modPHY block 4877 + */ 4878 + if (mac->type >= e1000_pch_tgp) { 4879 + fflt_dbg = er32(FFLT_DBG); 4880 + fflt_dbg |= E1000_FFLT_DBG_DONT_GATE_WAKE_DMA_CLK; 4881 + ew32(FFLT_DBG, fflt_dbg); 4882 + } 4874 4883 4875 4884 ctrl_ext = er32(CTRL_EXT); 4876 4885 ctrl_ext |= E1000_CTRL_EXT_RO_DIS; ··· 5984 5975 5985 5976 const struct e1000_info e1000_pch_cnp_info = { 5986 5977 .mac = e1000_pch_cnp, 5978 + .flags = FLAG_IS_ICH 5979 + | FLAG_HAS_WOL 5980 + | FLAG_HAS_HW_TIMESTAMP 5981 + | FLAG_HAS_CTRLEXT_ON_LOAD 5982 + | FLAG_HAS_AMT 5983 + | FLAG_HAS_FLASH 5984 + | FLAG_HAS_JUMBO_FRAMES 5985 + | FLAG_APME_IN_WUC, 5986 + .flags2 = FLAG2_HAS_PHY_STATS 5987 + | FLAG2_HAS_EEE, 5988 + .pba = 26, 5989 + .max_hw_frame_size = 9022, 5990 + .get_variants = e1000_get_variants_ich8lan, 5991 + .mac_ops = &ich8_mac_ops, 5992 + .phy_ops = &ich8_phy_ops, 5993 + .nvm_ops = &spt_nvm_ops, 5994 + }; 5995 + 5996 + const struct e1000_info e1000_pch_tgp_info = { 5997 + .mac = e1000_pch_tgp, 5987 5998 .flags = FLAG_IS_ICH 5988 5999 | FLAG_HAS_WOL 5989 6000 | FLAG_HAS_HW_TIMESTAMP
+3
drivers/net/ethernet/intel/e1000e/ich8lan.h
··· 289 289 /* Proprietary Latency Tolerance Reporting PCI Capability */ 290 290 #define E1000_PCI_LTR_CAP_LPT 0xA8 291 291 292 + /* Don't gate wake DMA clock */ 293 + #define E1000_FFLT_DBG_DONT_GATE_WAKE_DMA_CLK 0x1000 294 + 292 295 void e1000e_write_protect_nvm_ich8lan(struct e1000_hw *hw); 293 296 void e1000e_set_kmrn_lock_loss_workaround_ich8lan(struct e1000_hw *hw, 294 297 bool state);
+23 -22
drivers/net/ethernet/intel/e1000e/netdev.c
··· 51 51 [board_pch_lpt] = &e1000_pch_lpt_info, 52 52 [board_pch_spt] = &e1000_pch_spt_info, 53 53 [board_pch_cnp] = &e1000_pch_cnp_info, 54 + [board_pch_tgp] = &e1000_pch_tgp_info, 54 55 }; 55 56 56 57 struct e1000_reg_info { ··· 7896 7895 { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_CMP_I219_V11), board_pch_cnp }, 7897 7896 { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_CMP_I219_LM12), board_pch_spt }, 7898 7897 { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_CMP_I219_V12), board_pch_spt }, 7899 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM13), board_pch_cnp }, 7900 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V13), board_pch_cnp }, 7901 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM14), board_pch_cnp }, 7902 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V14), board_pch_cnp }, 7903 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM15), board_pch_cnp }, 7904 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V15), board_pch_cnp }, 7905 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM23), board_pch_cnp }, 7906 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V23), board_pch_cnp }, 7907 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM16), board_pch_cnp }, 7908 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V16), board_pch_cnp }, 7909 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM17), board_pch_cnp }, 7910 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_cnp }, 7911 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM22), board_pch_cnp }, 7912 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V22), board_pch_cnp }, 7913 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_cnp }, 7914 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_cnp }, 7915 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_cnp }, 7916 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_cnp }, 7917 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM20), board_pch_cnp }, 7918 - { PCI_VDEVICE(INTEL, 
E1000_DEV_ID_PCH_LNP_I219_V20), board_pch_cnp }, 7919 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM21), board_pch_cnp }, 7920 - { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V21), board_pch_cnp }, 7898 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM13), board_pch_tgp }, 7899 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V13), board_pch_tgp }, 7900 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM14), board_pch_tgp }, 7901 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V14), board_pch_tgp }, 7902 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_LM15), board_pch_tgp }, 7903 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_TGP_I219_V15), board_pch_tgp }, 7904 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM23), board_pch_tgp }, 7905 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V23), board_pch_tgp }, 7906 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM16), board_pch_tgp }, 7907 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V16), board_pch_tgp }, 7908 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_LM17), board_pch_tgp }, 7909 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_ADP_I219_V17), board_pch_tgp }, 7910 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_LM22), board_pch_tgp }, 7911 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_RPL_I219_V22), board_pch_tgp }, 7912 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM18), board_pch_tgp }, 7913 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V18), board_pch_tgp }, 7914 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_LM19), board_pch_tgp }, 7915 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_MTP_I219_V19), board_pch_tgp }, 7916 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM20), board_pch_tgp }, 7917 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V20), board_pch_tgp }, 7918 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_LM21), board_pch_tgp }, 7919 + { PCI_VDEVICE(INTEL, E1000_DEV_ID_PCH_LNP_I219_V21), board_pch_tgp }, 7921 7920 7922 7921 { 0, 0, 0, 0, 0, 0, 0 } /* terminate list */ 7923 7922 };
+2
drivers/net/ethernet/intel/ice/ice_common.c
··· 25 25 case ICE_DEV_ID_E810C_BACKPLANE: 26 26 case ICE_DEV_ID_E810C_QSFP: 27 27 case ICE_DEV_ID_E810C_SFP: 28 + case ICE_DEV_ID_E810_XXV_BACKPLANE: 29 + case ICE_DEV_ID_E810_XXV_QSFP: 28 30 case ICE_DEV_ID_E810_XXV_SFP: 29 31 hw->mac_type = ICE_MAC_E810; 30 32 break;
+4
drivers/net/ethernet/intel/ice/ice_devids.h
··· 23 23 #define ICE_DEV_ID_E810C_SFP 0x1593 24 24 #define ICE_SUBDEV_ID_E810T 0x000E 25 25 #define ICE_SUBDEV_ID_E810T2 0x000F 26 + /* Intel(R) Ethernet Controller E810-XXV for backplane */ 27 + #define ICE_DEV_ID_E810_XXV_BACKPLANE 0x1599 28 + /* Intel(R) Ethernet Controller E810-XXV for QSFP */ 29 + #define ICE_DEV_ID_E810_XXV_QSFP 0x159A 26 30 /* Intel(R) Ethernet Controller E810-XXV for SFP */ 27 31 #define ICE_DEV_ID_E810_XXV_SFP 0x159B 28 32 /* Intel(R) Ethernet Connection E823-C for backplane */
+4 -2
drivers/net/ethernet/intel/ice/ice_devlink.c
··· 60 60 { 61 61 struct ice_hw *hw = &pf->hw; 62 62 63 - snprintf(ctx->buf, sizeof(ctx->buf), "%u.%u", 64 - hw->api_maj_ver, hw->api_min_ver); 63 + snprintf(ctx->buf, sizeof(ctx->buf), "%u.%u.%u", hw->api_maj_ver, 64 + hw->api_min_ver, hw->api_patch); 65 + 66 + return 0; 65 67 } 66 68 67 69 static void ice_info_fw_build(struct ice_pf *pf, struct ice_info_ctx *ctx)
+2 -2
drivers/net/ethernet/intel/ice/ice_flex_pipe.c
··· 1910 1910 for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++) 1911 1911 if (hw->tnl.tbl[i].valid && 1912 1912 hw->tnl.tbl[i].type == type && 1913 - idx--) 1913 + idx-- == 0) 1914 1914 return i; 1915 1915 1916 1916 WARN_ON_ONCE(1); ··· 2070 2070 u16 index; 2071 2071 2072 2072 tnl_type = ti->type == UDP_TUNNEL_TYPE_VXLAN ? TNL_VXLAN : TNL_GENEVE; 2073 - index = ice_tunnel_idx_to_entry(&pf->hw, idx, tnl_type); 2073 + index = ice_tunnel_idx_to_entry(&pf->hw, tnl_type, idx); 2074 2074 2075 2075 status = ice_create_tunnel(&pf->hw, index, tnl_type, ntohs(ti->port)); 2076 2076 if (status) {
+9
drivers/net/ethernet/intel/ice/ice_lib.c
··· 3033 3033 */ 3034 3034 int ice_vsi_release(struct ice_vsi *vsi) 3035 3035 { 3036 + enum ice_status err; 3036 3037 struct ice_pf *pf; 3037 3038 3038 3039 if (!vsi->back) ··· 3106 3105 3107 3106 ice_fltr_remove_all(vsi); 3108 3107 ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx); 3108 + err = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx); 3109 + if (err) 3110 + dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n", 3111 + vsi->vsi_num, err); 3109 3112 ice_vsi_delete(vsi); 3110 3113 ice_vsi_free_q_vectors(vsi); 3111 3114 ··· 3290 3285 prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce); 3291 3286 3292 3287 ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx); 3288 + ret = ice_rm_vsi_rdma_cfg(vsi->port_info, vsi->idx); 3289 + if (ret) 3290 + dev_err(ice_pf_to_dev(vsi->back), "Failed to remove RDMA scheduler config for VSI %u, err %d\n", 3291 + vsi->vsi_num, ret); 3293 3292 ice_vsi_free_q_vectors(vsi); 3294 3293 3295 3294 /* SR-IOV determines needed MSIX resources all at once instead of per
+7 -1
drivers/net/ethernet/intel/ice/ice_main.c
··· 4342 4342 if (!pf) 4343 4343 return -ENOMEM; 4344 4344 4345 + /* initialize Auxiliary index to invalid value */ 4346 + pf->aux_idx = -1; 4347 + 4345 4348 /* set up for high or low DMA */ 4346 4349 err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)); 4347 4350 if (err) ··· 4730 4727 4731 4728 ice_aq_cancel_waiting_tasks(pf); 4732 4729 ice_unplug_aux_dev(pf); 4733 - ida_free(&ice_aux_ida, pf->aux_idx); 4730 + if (pf->aux_idx >= 0) 4731 + ida_free(&ice_aux_ida, pf->aux_idx); 4734 4732 set_bit(ICE_DOWN, pf->state); 4735 4733 4736 4734 mutex_destroy(&(&pf->hw)->fdir_fltr_lock); ··· 5131 5127 { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_BACKPLANE), 0 }, 5132 5128 { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_QSFP), 0 }, 5133 5129 { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_SFP), 0 }, 5130 + { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_BACKPLANE), 0 }, 5131 + { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_QSFP), 0 }, 5134 5132 { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_SFP), 0 }, 5135 5133 { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_BACKPLANE), 0 }, 5136 5134 { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_QSFP), 0 },
+13
drivers/net/ethernet/intel/ice/ice_sched.c
··· 2071 2071 } 2072 2072 2073 2073 /** 2074 + * ice_rm_vsi_rdma_cfg - remove VSI and its RDMA children nodes 2075 + * @pi: port information structure 2076 + * @vsi_handle: software VSI handle 2077 + * 2078 + * This function clears the VSI and its RDMA children nodes from scheduler tree 2079 + * for all TCs. 2080 + */ 2081 + enum ice_status ice_rm_vsi_rdma_cfg(struct ice_port_info *pi, u16 vsi_handle) 2082 + { 2083 + return ice_sched_rm_vsi_cfg(pi, vsi_handle, ICE_SCHED_NODE_OWNER_RDMA); 2084 + } 2085 + 2086 + /** 2074 2087 * ice_get_agg_info - get the aggregator ID 2075 2088 * @hw: pointer to the hardware structure 2076 2089 * @agg_id: aggregator ID
+1
drivers/net/ethernet/intel/ice/ice_sched.h
··· 91 91 ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs, 92 92 u8 owner, bool enable); 93 93 enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle); 94 + enum ice_status ice_rm_vsi_rdma_cfg(struct ice_port_info *pi, u16 vsi_handle); 94 95 95 96 /* Tx scheduler rate limiter functions */ 96 97 enum ice_status
+1 -1
drivers/net/ethernet/intel/igc/igc_hw.h
··· 22 22 #define IGC_DEV_ID_I220_V 0x15F7 23 23 #define IGC_DEV_ID_I225_K 0x3100 24 24 #define IGC_DEV_ID_I225_K2 0x3101 25 + #define IGC_DEV_ID_I226_K 0x3102 25 26 #define IGC_DEV_ID_I225_LMVP 0x5502 26 - #define IGC_DEV_ID_I226_K 0x5504 27 27 #define IGC_DEV_ID_I225_IT 0x0D9F 28 28 #define IGC_DEV_ID_I226_LM 0x125B 29 29 #define IGC_DEV_ID_I226_V 0x125C
+3
drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
··· 199 199 int mlx5e_create_flow_steering(struct mlx5e_priv *priv); 200 200 void mlx5e_destroy_flow_steering(struct mlx5e_priv *priv); 201 201 202 + int mlx5e_fs_init(struct mlx5e_priv *priv); 203 + void mlx5e_fs_cleanup(struct mlx5e_priv *priv); 204 + 202 205 int mlx5e_add_vlan_trap(struct mlx5e_priv *priv, int trap_id, int tir_num); 203 206 void mlx5e_remove_vlan_trap(struct mlx5e_priv *priv); 204 207 int mlx5e_add_mac_trap(struct mlx5e_priv *priv, int trap_id, int tir_num);
+2
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
··· 10 10 #include "en_tc.h" 11 11 #include "rep/tc.h" 12 12 #include "rep/neigh.h" 13 + #include "lag.h" 14 + #include "lag_mp.h" 13 15 14 16 struct mlx5e_tc_tun_route_attr { 15 17 struct net_device *out_dev;
+27 -24
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
··· 141 141 * Pkt: MAC IP ESP IP L4 142 142 * 143 143 * Transport Mode: 144 - * SWP: OutL3 InL4 145 - * InL3 144 + * SWP: OutL3 OutL4 146 145 * Pkt: MAC IP ESP L4 147 146 * 148 147 * Tunnel(VXLAN TCP/UDP) over Transport Mode ··· 170 171 return; 171 172 172 173 if (!xo->inner_ipproto) { 173 - eseg->swp_inner_l3_offset = skb_network_offset(skb) / 2; 174 - eseg->swp_inner_l4_offset = skb_inner_transport_offset(skb) / 2; 175 - if (skb->protocol == htons(ETH_P_IPV6)) 176 - eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6; 177 - if (xo->proto == IPPROTO_UDP) 174 + switch (xo->proto) { 175 + case IPPROTO_UDP: 176 + eseg->swp_flags |= MLX5_ETH_WQE_SWP_OUTER_L4_UDP; 177 + fallthrough; 178 + case IPPROTO_TCP: 179 + /* IP | ESP | TCP */ 180 + eseg->swp_outer_l4_offset = skb_inner_transport_offset(skb) / 2; 181 + break; 182 + default: 183 + break; 184 + } 185 + } else { 186 + /* Tunnel(VXLAN TCP/UDP) over Transport Mode */ 187 + switch (xo->inner_ipproto) { 188 + case IPPROTO_UDP: 178 189 eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L4_UDP; 179 - return; 190 + fallthrough; 191 + case IPPROTO_TCP: 192 + eseg->swp_inner_l3_offset = skb_inner_network_offset(skb) / 2; 193 + eseg->swp_inner_l4_offset = 194 + (skb->csum_start + skb->head - skb->data) / 2; 195 + if (skb->protocol == htons(ETH_P_IPV6)) 196 + eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6; 197 + break; 198 + default: 199 + break; 200 + } 180 201 } 181 202 182 - /* Tunnel(VXLAN TCP/UDP) over Transport Mode */ 183 - switch (xo->inner_ipproto) { 184 - case IPPROTO_UDP: 185 - eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L4_UDP; 186 - fallthrough; 187 - case IPPROTO_TCP: 188 - eseg->swp_inner_l3_offset = skb_inner_network_offset(skb) / 2; 189 - eseg->swp_inner_l4_offset = (skb->csum_start + skb->head - skb->data) / 2; 190 - if (skb->protocol == htons(ETH_P_IPV6)) 191 - eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6; 192 - break; 193 - default: 194 - break; 195 - } 196 - 197 - return; 198 203 } 199 204 200 205 void 
mlx5e_ipsec_set_iv_esn(struct sk_buff *skb, struct xfrm_state *x,
+16 -12
drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
··· 1186 1186 struct mlx5e_flow_table *ft; 1187 1187 int err; 1188 1188 1189 - priv->fs.vlan = kvzalloc(sizeof(*priv->fs.vlan), GFP_KERNEL); 1190 - if (!priv->fs.vlan) 1191 - return -ENOMEM; 1192 - 1193 1189 ft = &priv->fs.vlan->ft; 1194 1190 ft->num_groups = 0; 1195 1191 ··· 1194 1198 ft_attr.prio = MLX5E_NIC_PRIO; 1195 1199 1196 1200 ft->t = mlx5_create_flow_table(priv->fs.ns, &ft_attr); 1197 - if (IS_ERR(ft->t)) { 1198 - err = PTR_ERR(ft->t); 1199 - goto err_free_t; 1200 - } 1201 + if (IS_ERR(ft->t)) 1202 + return PTR_ERR(ft->t); 1201 1203 1202 1204 ft->g = kcalloc(MLX5E_NUM_VLAN_GROUPS, sizeof(*ft->g), GFP_KERNEL); 1203 1205 if (!ft->g) { ··· 1215 1221 kfree(ft->g); 1216 1222 err_destroy_vlan_table: 1217 1223 mlx5_destroy_flow_table(ft->t); 1218 - err_free_t: 1219 - kvfree(priv->fs.vlan); 1220 - priv->fs.vlan = NULL; 1221 1224 1222 1225 return err; 1223 1226 } ··· 1223 1232 { 1224 1233 mlx5e_del_vlan_rules(priv); 1225 1234 mlx5e_destroy_flow_table(&priv->fs.vlan->ft); 1226 - kvfree(priv->fs.vlan); 1227 1235 } 1228 1236 1229 1237 static void mlx5e_destroy_inner_ttc_table(struct mlx5e_priv *priv) ··· 1340 1350 mlx5e_destroy_inner_ttc_table(priv); 1341 1351 mlx5e_arfs_destroy_tables(priv); 1342 1352 mlx5e_ethtool_cleanup_steering(priv); 1353 + } 1354 + 1355 + int mlx5e_fs_init(struct mlx5e_priv *priv) 1356 + { 1357 + priv->fs.vlan = kvzalloc(sizeof(*priv->fs.vlan), GFP_KERNEL); 1358 + if (!priv->fs.vlan) 1359 + return -ENOMEM; 1360 + return 0; 1361 + } 1362 + 1363 + void mlx5e_fs_cleanup(struct mlx5e_priv *priv) 1364 + { 1365 + kvfree(priv->fs.vlan); 1366 + priv->fs.vlan = NULL; 1343 1367 }
+7
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 4659 4659 4660 4660 mlx5e_timestamp_init(priv); 4661 4661 4662 + err = mlx5e_fs_init(priv); 4663 + if (err) { 4664 + mlx5_core_err(mdev, "FS initialization failed, %d\n", err); 4665 + return err; 4666 + } 4667 + 4662 4668 err = mlx5e_ipsec_init(priv); 4663 4669 if (err) 4664 4670 mlx5_core_err(mdev, "IPSec initialization failed, %d\n", err); ··· 4682 4676 mlx5e_health_destroy_reporters(priv); 4683 4677 mlx5e_tls_cleanup(priv); 4684 4678 mlx5e_ipsec_cleanup(priv); 4679 + mlx5e_fs_cleanup(priv); 4685 4680 } 4686 4681 4687 4682 static int mlx5e_init_nic_rx(struct mlx5e_priv *priv)
+2
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 68 68 #include "lib/fs_chains.h" 69 69 #include "diag/en_tc_tracepoint.h" 70 70 #include <asm/div64.h> 71 + #include "lag.h" 72 + #include "lag_mp.h" 71 73 72 74 #define nic_chains(priv) ((priv)->fs.tc.chains) 73 75 #define MLX5_MH_ACT_SZ MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto)
+11 -9
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
··· 213 213 memcpy(&vhdr->h_vlan_encapsulated_proto, skb->data + cpy1_sz, cpy2_sz); 214 214 } 215 215 216 - /* If packet is not IP's CHECKSUM_PARTIAL (e.g. icmd packet), 217 - * need to set L3 checksum flag for IPsec 218 - */ 219 216 static void 220 217 ipsec_txwqe_build_eseg_csum(struct mlx5e_txqsq *sq, struct sk_buff *skb, 221 218 struct mlx5_wqe_eth_seg *eseg) 222 219 { 220 + struct xfrm_offload *xo = xfrm_offload(skb); 221 + 223 222 eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM; 224 - if (skb->encapsulation) { 225 - eseg->cs_flags |= MLX5_ETH_WQE_L3_INNER_CSUM; 223 + if (xo->inner_ipproto) { 224 + eseg->cs_flags |= MLX5_ETH_WQE_L4_INNER_CSUM | MLX5_ETH_WQE_L3_INNER_CSUM; 225 + } else if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) { 226 + eseg->cs_flags |= MLX5_ETH_WQE_L4_CSUM; 226 227 sq->stats->csum_partial_inner++; 227 - } else { 228 - sq->stats->csum_partial++; 229 228 } 230 229 } 231 230 ··· 233 234 struct mlx5e_accel_tx_state *accel, 234 235 struct mlx5_wqe_eth_seg *eseg) 235 236 { 237 + if (unlikely(mlx5e_ipsec_eseg_meta(eseg))) { 238 + ipsec_txwqe_build_eseg_csum(sq, skb, eseg); 239 + return; 240 + } 241 + 236 242 if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) { 237 243 eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM; 238 244 if (skb->encapsulation) { ··· 253 249 eseg->cs_flags = MLX5_ETH_WQE_L3_CSUM | MLX5_ETH_WQE_L4_CSUM; 254 250 sq->stats->csum_partial++; 255 251 #endif 256 - } else if (unlikely(mlx5e_ipsec_eseg_meta(eseg))) { 257 - ipsec_txwqe_build_eseg_csum(sq, skb, eseg); 258 252 } else 259 253 sq->stats->csum_none++; 260 254 }
+3 -4
drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
··· 473 473 474 474 err_min_rate: 475 475 list_del(&group->list); 476 - err = mlx5_destroy_scheduling_element_cmd(esw->dev, 477 - SCHEDULING_HIERARCHY_E_SWITCH, 478 - group->tsar_ix); 479 - if (err) 476 + if (mlx5_destroy_scheduling_element_cmd(esw->dev, 477 + SCHEDULING_HIERARCHY_E_SWITCH, 478 + group->tsar_ix)) 480 479 NL_SET_ERR_MSG_MOD(extack, "E-Switch destroy TSAR for group failed"); 481 480 err_sched_elem: 482 481 kfree(group);
+4
drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
··· 492 492 if (!mlx5_lag_is_ready(ldev)) { 493 493 do_bond = false; 494 494 } else { 495 + /* VF LAG is in multipath mode, ignore bond change requests */ 496 + if (mlx5_lag_is_multipath(dev0)) 497 + return; 498 + 495 499 tracker = ldev->tracker; 496 500 497 501 do_bond = tracker.is_bonded && mlx5_lag_check_prereq(ldev);
+8 -5
drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
··· 9 9 #include "eswitch.h" 10 10 #include "lib/mlx5.h" 11 11 12 + static bool __mlx5_lag_is_multipath(struct mlx5_lag *ldev) 13 + { 14 + return !!(ldev->flags & MLX5_LAG_FLAG_MULTIPATH); 15 + } 16 + 12 17 static bool mlx5_lag_multipath_check_prereq(struct mlx5_lag *ldev) 13 18 { 14 19 if (!mlx5_lag_is_ready(ldev)) 15 20 return false; 16 21 22 + if (__mlx5_lag_is_active(ldev) && !__mlx5_lag_is_multipath(ldev)) 23 + return false; 24 + 17 25 return mlx5_esw_multipath_prereq(ldev->pf[MLX5_LAG_P1].dev, 18 26 ldev->pf[MLX5_LAG_P2].dev); 19 - } 20 - 21 - static bool __mlx5_lag_is_multipath(struct mlx5_lag *ldev) 22 - { 23 - return !!(ldev->flags & MLX5_LAG_FLAG_MULTIPATH); 24 27 } 25 28 26 29 bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev)
+2
drivers/net/ethernet/mellanox/mlx5/core/lag/mp.h
··· 24 24 void mlx5_lag_mp_reset(struct mlx5_lag *ldev); 25 25 int mlx5_lag_mp_init(struct mlx5_lag *ldev); 26 26 void mlx5_lag_mp_cleanup(struct mlx5_lag *ldev); 27 + bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev); 27 28 28 29 #else /* CONFIG_MLX5_ESWITCH */ 29 30 30 31 static inline void mlx5_lag_mp_reset(struct mlx5_lag *ldev) {}; 31 32 static inline int mlx5_lag_mp_init(struct mlx5_lag *ldev) { return 0; } 32 33 static inline void mlx5_lag_mp_cleanup(struct mlx5_lag *ldev) {} 34 + bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev) { return false; } 33 35 34 36 #endif /* CONFIG_MLX5_ESWITCH */ 35 37 #endif /* __MLX5_LAG_MP_H__ */
+1
drivers/net/ethernet/microchip/sparx5/sparx5_main.c
··· 757 757 err = dev_err_probe(sparx5->dev, PTR_ERR(serdes), 758 758 "port %u: missing serdes\n", 759 759 portno); 760 + of_node_put(portnp); 760 761 goto cleanup_config; 761 762 } 762 763 config->portno = portno;
+1
drivers/net/ethernet/mscc/ocelot_vsc7514.c
··· 969 969 target = ocelot_regmap_init(ocelot, res); 970 970 if (IS_ERR(target)) { 971 971 err = PTR_ERR(target); 972 + of_node_put(portnp); 972 973 goto out_teardown; 973 974 } 974 975
+2 -2
drivers/net/ethernet/netronome/nfp/nfp_asm.c
··· 196 196 } 197 197 198 198 reg->dst_lmextn = swreg_lmextn(dst); 199 - reg->src_lmextn = swreg_lmextn(lreg) | swreg_lmextn(rreg); 199 + reg->src_lmextn = swreg_lmextn(lreg) || swreg_lmextn(rreg); 200 200 201 201 return 0; 202 202 } ··· 277 277 } 278 278 279 279 reg->dst_lmextn = swreg_lmextn(dst); 280 - reg->src_lmextn = swreg_lmextn(lreg) | swreg_lmextn(rreg); 280 + reg->src_lmextn = swreg_lmextn(lreg) || swreg_lmextn(rreg); 281 281 282 282 return 0; 283 283 }
+26 -11
drivers/net/ethernet/sfc/mcdi_port_common.c
··· 132 132 case MC_CMD_MEDIA_SFP_PLUS: 133 133 case MC_CMD_MEDIA_QSFP_PLUS: 134 134 SET_BIT(FIBRE); 135 - if (cap & (1 << MC_CMD_PHY_CAP_1000FDX_LBN)) 135 + if (cap & (1 << MC_CMD_PHY_CAP_1000FDX_LBN)) { 136 136 SET_BIT(1000baseT_Full); 137 - if (cap & (1 << MC_CMD_PHY_CAP_10000FDX_LBN)) 138 - SET_BIT(10000baseT_Full); 139 - if (cap & (1 << MC_CMD_PHY_CAP_40000FDX_LBN)) 137 + SET_BIT(1000baseX_Full); 138 + } 139 + if (cap & (1 << MC_CMD_PHY_CAP_10000FDX_LBN)) { 140 + SET_BIT(10000baseCR_Full); 141 + SET_BIT(10000baseLR_Full); 142 + SET_BIT(10000baseSR_Full); 143 + } 144 + if (cap & (1 << MC_CMD_PHY_CAP_40000FDX_LBN)) { 140 145 SET_BIT(40000baseCR4_Full); 141 - if (cap & (1 << MC_CMD_PHY_CAP_100000FDX_LBN)) 146 + SET_BIT(40000baseSR4_Full); 147 + } 148 + if (cap & (1 << MC_CMD_PHY_CAP_100000FDX_LBN)) { 142 149 SET_BIT(100000baseCR4_Full); 143 - if (cap & (1 << MC_CMD_PHY_CAP_25000FDX_LBN)) 150 + SET_BIT(100000baseSR4_Full); 151 + } 152 + if (cap & (1 << MC_CMD_PHY_CAP_25000FDX_LBN)) { 144 153 SET_BIT(25000baseCR_Full); 154 + SET_BIT(25000baseSR_Full); 155 + } 145 156 if (cap & (1 << MC_CMD_PHY_CAP_50000FDX_LBN)) 146 157 SET_BIT(50000baseCR2_Full); 147 158 break; ··· 203 192 result |= (1 << MC_CMD_PHY_CAP_100FDX_LBN); 204 193 if (TEST_BIT(1000baseT_Half)) 205 194 result |= (1 << MC_CMD_PHY_CAP_1000HDX_LBN); 206 - if (TEST_BIT(1000baseT_Full) || TEST_BIT(1000baseKX_Full)) 195 + if (TEST_BIT(1000baseT_Full) || TEST_BIT(1000baseKX_Full) || 196 + TEST_BIT(1000baseX_Full)) 207 197 result |= (1 << MC_CMD_PHY_CAP_1000FDX_LBN); 208 - if (TEST_BIT(10000baseT_Full) || TEST_BIT(10000baseKX4_Full)) 198 + if (TEST_BIT(10000baseT_Full) || TEST_BIT(10000baseKX4_Full) || 199 + TEST_BIT(10000baseCR_Full) || TEST_BIT(10000baseLR_Full) || 200 + TEST_BIT(10000baseSR_Full)) 209 201 result |= (1 << MC_CMD_PHY_CAP_10000FDX_LBN); 210 - if (TEST_BIT(40000baseCR4_Full) || TEST_BIT(40000baseKR4_Full)) 202 + if (TEST_BIT(40000baseCR4_Full) || TEST_BIT(40000baseKR4_Full) || 203 + 
TEST_BIT(40000baseSR4_Full)) 211 204 result |= (1 << MC_CMD_PHY_CAP_40000FDX_LBN); 212 - if (TEST_BIT(100000baseCR4_Full)) 205 + if (TEST_BIT(100000baseCR4_Full) || TEST_BIT(100000baseSR4_Full)) 213 206 result |= (1 << MC_CMD_PHY_CAP_100000FDX_LBN); 214 - if (TEST_BIT(25000baseCR_Full)) 207 + if (TEST_BIT(25000baseCR_Full) || TEST_BIT(25000baseSR_Full)) 215 208 result |= (1 << MC_CMD_PHY_CAP_25000FDX_LBN); 216 209 if (TEST_BIT(50000baseCR2_Full)) 217 210 result |= (1 << MC_CMD_PHY_CAP_50000FDX_LBN);
+2 -2
drivers/net/ethernet/sfc/ptp.c
··· 648 648 } else if (rc == -EINVAL) { 649 649 fmt = MC_CMD_PTP_OUT_GET_ATTRIBUTES_SECONDS_NANOSECONDS; 650 650 } else if (rc == -EPERM) { 651 - netif_info(efx, probe, efx->net_dev, "no PTP support\n"); 651 + pci_info(efx->pci_dev, "no PTP support\n"); 652 652 return rc; 653 653 } else { 654 654 efx_mcdi_display_error(efx, MC_CMD_PTP, sizeof(inbuf), ··· 824 824 * should only have been called during probe. 825 825 */ 826 826 if (rc == -ENOSYS || rc == -EPERM) 827 - netif_info(efx, probe, efx->net_dev, "no PTP support\n"); 827 + pci_info(efx->pci_dev, "no PTP support\n"); 828 828 else if (rc) 829 829 efx_mcdi_display_error(efx, MC_CMD_PTP, 830 830 MC_CMD_PTP_IN_DISABLE_LEN,
+1 -1
drivers/net/ethernet/sfc/siena_sriov.c
··· 1057 1057 return; 1058 1058 1059 1059 if (efx_siena_sriov_cmd(efx, false, &efx->vi_scale, &count)) { 1060 - netif_info(efx, probe, efx->net_dev, "no SR-IOV VFs probed\n"); 1060 + pci_info(efx->pci_dev, "no SR-IOV VFs probed\n"); 1061 1061 return; 1062 1062 } 1063 1063 if (count > 0 && count > max_vfs)
+1 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 736 736 config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT; 737 737 ptp_v2 = PTP_TCR_TSVER2ENA; 738 738 snap_type_sel = PTP_TCR_SNAPTYPSEL_1; 739 - if (priv->synopsys_id != DWMAC_CORE_5_10) 739 + if (priv->synopsys_id < DWMAC_CORE_4_10) 740 740 ts_event_en = PTP_TCR_TSEVNTENA; 741 741 ptp_over_ipv4_udp = PTP_TCR_TSIPV4ENA; 742 742 ptp_over_ipv6_udp = PTP_TCR_TSIPV6ENA;
+3 -3
drivers/net/hamradio/baycom_epp.c
··· 623 623 624 624 /* --------------------------------------------------------------------- */ 625 625 626 - #ifdef __i386__ 626 + #if defined(__i386__) && !defined(CONFIG_UML) 627 627 #include <asm/msr.h> 628 628 #define GETTICK(x) \ 629 629 ({ \ 630 630 if (boot_cpu_has(X86_FEATURE_TSC)) \ 631 631 x = (unsigned int)rdtsc(); \ 632 632 }) 633 - #else /* __i386__ */ 633 + #else /* __i386__ && !CONFIG_UML */ 634 634 #define GETTICK(x) 635 - #endif /* __i386__ */ 635 + #endif /* __i386__ && !CONFIG_UML */ 636 636 637 637 static void epp_bh(struct work_struct *work) 638 638 {
+1
drivers/net/usb/Kconfig
··· 117 117 select PHYLIB 118 118 select MICROCHIP_PHY 119 119 select FIXED_PHY 120 + select CRC32 120 121 help 121 122 This option adds support for Microchip LAN78XX based USB 2 122 123 & USB 3 10/100/1000 Ethernet adapters.
+4
drivers/net/usb/usbnet.c
··· 1788 1788 if (!dev->rx_urb_size) 1789 1789 dev->rx_urb_size = dev->hard_mtu; 1790 1790 dev->maxpacket = usb_maxpacket (dev->udev, dev->out, 1); 1791 + if (dev->maxpacket == 0) { 1792 + /* that is a broken device */ 1793 + goto out4; 1794 + } 1791 1795 1792 1796 /* let userspace know we have a random address */ 1793 1797 if (ether_addr_equal(net->dev_addr, node_id))
-4
drivers/net/vrf.c
··· 1360 1360 bool need_strict = rt6_need_strict(&ipv6_hdr(skb)->daddr); 1361 1361 bool is_ndisc = ipv6_ndisc_frame(skb); 1362 1362 1363 - nf_reset_ct(skb); 1364 - 1365 1363 /* loopback, multicast & non-ND link-local traffic; do not push through 1366 1364 * packet taps again. Reset pkt_type for upper layers to process skb. 1367 1365 * For strict packets with a source LLA, determine the dst using the ··· 1421 1423 skb->dev = vrf_dev; 1422 1424 skb->skb_iif = vrf_dev->ifindex; 1423 1425 IPCB(skb)->flags |= IPSKB_L3SLAVE; 1424 - 1425 - nf_reset_ct(skb); 1426 1426 1427 1427 if (ipv4_is_multicast(ip_hdr(skb)->daddr)) 1428 1428 goto out;
+2 -4
drivers/nfc/st95hf/core.c
··· 1226 1226 &reset_cmd, 1227 1227 ST95HF_RESET_CMD_LEN, 1228 1228 ASYNC); 1229 - if (result) { 1229 + if (result) 1230 1230 dev_err(&spictx->spidev->dev, 1231 1231 "ST95HF reset failed in remove() err = %d\n", result); 1232 - return result; 1233 - } 1234 1232 1235 1233 /* wait for 3 ms to complete the controller reset process */ 1236 1234 usleep_range(3000, 4000); ··· 1237 1239 if (stcontext->st95hf_supply) 1238 1240 regulator_disable(stcontext->st95hf_supply); 1239 1241 1240 - return result; 1242 + return 0; 1241 1243 } 1242 1244 1243 1245 /* Register as SPI protocol driver */
+12 -9
drivers/nvme/host/core.c
··· 3550 3550 return 0; 3551 3551 } 3552 3552 3553 + static void nvme_cdev_rel(struct device *dev) 3554 + { 3555 + ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(dev->devt)); 3556 + } 3557 + 3553 3558 void nvme_cdev_del(struct cdev *cdev, struct device *cdev_device) 3554 3559 { 3555 3560 cdev_device_del(cdev, cdev_device); 3556 - ida_simple_remove(&nvme_ns_chr_minor_ida, MINOR(cdev_device->devt)); 3561 + put_device(cdev_device); 3557 3562 } 3558 3563 3559 3564 int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device, ··· 3571 3566 return minor; 3572 3567 cdev_device->devt = MKDEV(MAJOR(nvme_ns_chr_devt), minor); 3573 3568 cdev_device->class = nvme_ns_chr_class; 3569 + cdev_device->release = nvme_cdev_rel; 3574 3570 device_initialize(cdev_device); 3575 3571 cdev_init(cdev, fops); 3576 3572 cdev->owner = owner; 3577 3573 ret = cdev_device_add(cdev, cdev_device); 3578 - if (ret) { 3574 + if (ret) 3579 3575 put_device(cdev_device); 3580 - ida_simple_remove(&nvme_ns_chr_minor_ida, minor); 3581 - } 3576 + 3582 3577 return ret; 3583 3578 } 3584 3579 ··· 3610 3605 ns->ctrl->instance, ns->head->instance); 3611 3606 if (ret) 3612 3607 return ret; 3613 - ret = nvme_cdev_add(&ns->cdev, &ns->cdev_device, &nvme_ns_chr_fops, 3614 - ns->ctrl->ops->module); 3615 - if (ret) 3616 - kfree_const(ns->cdev_device.kobj.name); 3617 - return ret; 3608 + 3609 + return nvme_cdev_add(&ns->cdev, &ns->cdev_device, &nvme_ns_chr_fops, 3610 + ns->ctrl->ops->module); 3618 3611 } 3619 3612 3620 3613 static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
-2
drivers/nvme/host/multipath.c
··· 431 431 return ret; 432 432 ret = nvme_cdev_add(&head->cdev, &head->cdev_device, 433 433 &nvme_ns_head_chr_fops, THIS_MODULE); 434 - if (ret) 435 - kfree_const(head->cdev_device.kobj.name); 436 434 return ret; 437 435 } 438 436
+1 -1
drivers/nvme/host/pci.c
··· 1330 1330 iod->aborted = 1; 1331 1331 1332 1332 cmd.abort.opcode = nvme_admin_abort_cmd; 1333 - cmd.abort.cid = req->tag; 1333 + cmd.abort.cid = nvme_cid(req); 1334 1334 cmd.abort.sqid = cpu_to_le16(nvmeq->qid); 1335 1335 1336 1336 dev_warn(nvmeq->dev->ctrl.device,
+2 -1
drivers/nvmem/core.c
··· 1383 1383 *p-- = 0; 1384 1384 1385 1385 /* clear msb bits if any leftover in the last byte */ 1386 - *p &= GENMASK((cell->nbits%BITS_PER_BYTE) - 1, 0); 1386 + if (cell->nbits % BITS_PER_BYTE) 1387 + *p &= GENMASK((cell->nbits % BITS_PER_BYTE) - 1, 0); 1387 1388 } 1388 1389 1389 1390 static int __nvmem_cell_read(struct nvmem_device *nvmem,
+2
drivers/of/of_reserved_mem.c
··· 21 21 #include <linux/sort.h> 22 22 #include <linux/slab.h> 23 23 #include <linux/memblock.h> 24 + #include <linux/kmemleak.h> 24 25 25 26 #include "of_private.h" 26 27 ··· 47 46 err = memblock_mark_nomap(base, size); 48 47 if (err) 49 48 memblock_free(base, size); 49 + kmemleak_ignore_phys(base); 50 50 } 51 51 52 52 return err;
+12 -6
drivers/pci/msi.c
··· 535 535 static int msi_capability_init(struct pci_dev *dev, int nvec, 536 536 struct irq_affinity *affd) 537 537 { 538 + const struct attribute_group **groups; 538 539 struct msi_desc *entry; 539 540 int ret; 540 541 ··· 559 558 if (ret) 560 559 goto err; 561 560 562 - dev->msi_irq_groups = msi_populate_sysfs(&dev->dev); 563 - if (IS_ERR(dev->msi_irq_groups)) { 564 - ret = PTR_ERR(dev->msi_irq_groups); 561 + groups = msi_populate_sysfs(&dev->dev); 562 + if (IS_ERR(groups)) { 563 + ret = PTR_ERR(groups); 565 564 goto err; 566 565 } 566 + 567 + dev->msi_irq_groups = groups; 567 568 568 569 /* Set MSI enabled bits */ 569 570 pci_intx_for_msi(dev, 0); ··· 694 691 static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries, 695 692 int nvec, struct irq_affinity *affd) 696 693 { 694 + const struct attribute_group **groups; 697 695 void __iomem *base; 698 696 int ret, tsize; 699 697 u16 control; ··· 734 730 735 731 msix_update_entries(dev, entries); 736 732 737 - dev->msi_irq_groups = msi_populate_sysfs(&dev->dev); 738 - if (IS_ERR(dev->msi_irq_groups)) { 739 - ret = PTR_ERR(dev->msi_irq_groups); 733 + groups = msi_populate_sysfs(&dev->dev); 734 + if (IS_ERR(groups)) { 735 + ret = PTR_ERR(groups); 740 736 goto out_free; 741 737 } 738 + 739 + dev->msi_irq_groups = groups; 742 740 743 741 /* Set MSI-X enabled bits and unmask the function */ 744 742 pci_intx_for_msi(dev, 0);
+10 -6
drivers/ptp/ptp_clock.c
··· 170 170 struct ptp_clock *ptp = container_of(dev, struct ptp_clock, dev); 171 171 172 172 ptp_cleanup_pin_groups(ptp); 173 + kfree(ptp->vclock_index); 173 174 mutex_destroy(&ptp->tsevq_mux); 174 175 mutex_destroy(&ptp->pincfg_mux); 175 176 mutex_destroy(&ptp->n_vclocks_mux); ··· 284 283 /* Create a posix clock and link it to the device. */ 285 284 err = posix_clock_register(&ptp->clock, &ptp->dev); 286 285 if (err) { 286 + if (ptp->pps_source) 287 + pps_unregister_source(ptp->pps_source); 288 + 289 + if (ptp->kworker) 290 + kthread_destroy_worker(ptp->kworker); 291 + 292 + put_device(&ptp->dev); 293 + 287 294 pr_err("failed to create posix clock\n"); 288 - goto no_clock; 295 + return ERR_PTR(err); 289 296 } 290 297 291 298 return ptp; 292 299 293 - no_clock: 294 - if (ptp->pps_source) 295 - pps_unregister_source(ptp->pps_source); 296 300 no_pps: 297 301 ptp_cleanup_pin_groups(ptp); 298 302 no_pin_groups: ··· 326 320 327 321 ptp->defunct = 1; 328 322 wake_up_interruptible(&ptp->tsev_wq); 329 - 330 - kfree(ptp->vclock_index); 331 323 332 324 if (ptp->kworker) { 333 325 kthread_cancel_delayed_work_sync(&ptp->aux_work);
+2 -2
drivers/ptp/ptp_kvm_x86.c
··· 31 31 32 32 ret = kvm_hypercall2(KVM_HC_CLOCK_PAIRING, clock_pair_gpa, 33 33 KVM_CLOCK_PAIRING_WALLCLOCK); 34 - if (ret == -KVM_ENOSYS || ret == -KVM_EOPNOTSUPP) 34 + if (ret == -KVM_ENOSYS) 35 35 return -ENODEV; 36 36 37 - return 0; 37 + return ret; 38 38 } 39 39 40 40 int kvm_arch_ptp_get_clock(struct timespec64 *ts)
-1
drivers/soc/canaan/Kconfig
··· 5 5 depends on RISCV && SOC_CANAAN && OF 6 6 default SOC_CANAAN 7 7 select PM 8 - select SIMPLE_PM_BUS 9 8 select SYSCON 10 9 select MFD_SYSCON 11 10 help
+2 -2
drivers/spi/spi-atmel.c
··· 1301 1301 * DMA map early, for performance (empties dcache ASAP) and 1302 1302 * better fault reporting. 1303 1303 */ 1304 - if ((!master->cur_msg_mapped) 1304 + if ((!master->cur_msg->is_dma_mapped) 1305 1305 && as->use_pdc) { 1306 1306 if (atmel_spi_dma_map_xfer(as, xfer) < 0) 1307 1307 return -ENOMEM; ··· 1381 1381 } 1382 1382 } 1383 1383 1384 - if (!master->cur_msg_mapped 1384 + if (!master->cur_msg->is_dma_mapped 1385 1385 && as->use_pdc) 1386 1386 atmel_spi_dma_unmap_xfer(master, xfer); 1387 1387
+45 -32
drivers/spi/spi-bcm-qspi.c
··· 1250 1250 1251 1251 static void bcm_qspi_hw_uninit(struct bcm_qspi *qspi) 1252 1252 { 1253 + u32 status = bcm_qspi_read(qspi, MSPI, MSPI_MSPI_STATUS); 1254 + 1253 1255 bcm_qspi_write(qspi, MSPI, MSPI_SPCR2, 0); 1254 1256 if (has_bspi(qspi)) 1255 1257 bcm_qspi_write(qspi, MSPI, MSPI_WRITE_LOCK, 0); 1256 1258 1259 + /* clear interrupt */ 1260 + bcm_qspi_write(qspi, MSPI, MSPI_MSPI_STATUS, status & ~1); 1257 1261 } 1258 1262 1259 1263 static const struct spi_controller_mem_ops bcm_qspi_mem_ops = { ··· 1401 1397 if (!qspi->dev_ids) 1402 1398 return -ENOMEM; 1403 1399 1400 + /* 1401 + * Some SoCs integrate spi controller (e.g., its interrupt bits) 1402 + * in specific ways 1403 + */ 1404 + if (soc_intc) { 1405 + qspi->soc_intc = soc_intc; 1406 + soc_intc->bcm_qspi_int_set(soc_intc, MSPI_DONE, true); 1407 + } else { 1408 + qspi->soc_intc = NULL; 1409 + } 1410 + 1411 + if (qspi->clk) { 1412 + ret = clk_prepare_enable(qspi->clk); 1413 + if (ret) { 1414 + dev_err(dev, "failed to prepare clock\n"); 1415 + goto qspi_probe_err; 1416 + } 1417 + qspi->base_clk = clk_get_rate(qspi->clk); 1418 + } else { 1419 + qspi->base_clk = MSPI_BASE_FREQ; 1420 + } 1421 + 1422 + if (data->has_mspi_rev) { 1423 + rev = bcm_qspi_read(qspi, MSPI, MSPI_REV); 1424 + /* some older revs do not have a MSPI_REV register */ 1425 + if ((rev & 0xff) == 0xff) 1426 + rev = 0; 1427 + } 1428 + 1429 + qspi->mspi_maj_rev = (rev >> 4) & 0xf; 1430 + qspi->mspi_min_rev = rev & 0xf; 1431 + qspi->mspi_spcr3_sysclk = data->has_spcr3_sysclk; 1432 + 1433 + qspi->max_speed_hz = qspi->base_clk / (bcm_qspi_spbr_min(qspi) * 2); 1434 + 1435 + /* 1436 + * On SW resets it is possible to have the mask still enabled 1437 + * Need to disable the mask and clear the status while we init 1438 + */ 1439 + bcm_qspi_hw_uninit(qspi); 1440 + 1404 1441 for (val = 0; val < num_irqs; val++) { 1405 1442 irq = -1; 1406 1443 name = qspi_irq_tab[val].irq_name; ··· 1477 1432 ret = -EINVAL; 1478 1433 goto qspi_probe_err; 1479 1434 } 1480 - 1481 - /* 1482 - * Some SoCs integrate spi controller (e.g., its interrupt bits) 1483 - * in specific ways 1484 - */ 1485 - if (soc_intc) { 1486 - qspi->soc_intc = soc_intc; 1487 - soc_intc->bcm_qspi_int_set(soc_intc, MSPI_DONE, true); 1488 - } else { 1489 - qspi->soc_intc = NULL; 1490 - } 1491 - 1492 - ret = clk_prepare_enable(qspi->clk); 1493 - if (ret) { 1494 - dev_err(dev, "failed to prepare clock\n"); 1495 - goto qspi_probe_err; 1496 - } 1497 - 1498 - qspi->base_clk = clk_get_rate(qspi->clk); 1499 - 1500 - if (data->has_mspi_rev) { 1501 - rev = bcm_qspi_read(qspi, MSPI, MSPI_REV); 1502 - /* some older revs do not have a MSPI_REV register */ 1503 - if ((rev & 0xff) == 0xff) 1504 - rev = 0; 1505 - } 1506 - 1507 - qspi->mspi_maj_rev = (rev >> 4) & 0xf; 1508 - qspi->mspi_min_rev = rev & 0xf; 1509 - qspi->mspi_spcr3_sysclk = data->has_spcr3_sysclk; 1510 - 1511 - qspi->max_speed_hz = qspi->base_clk / (bcm_qspi_spbr_min(qspi) * 2); 1512 1435 1513 1436 bcm_qspi_hw_init(qspi); 1514 1437 init_completion(&qspi->mspi_done);
+36 -28
drivers/spi/spi-mt65xx.c
··· 233 233 return delay; 234 234 inactive = (delay * DIV_ROUND_UP(mdata->spi_clk_hz, 1000000)) / 1000; 235 235 236 - setup = setup ? setup : 1; 237 - hold = hold ? hold : 1; 238 - inactive = inactive ? inactive : 1; 239 - 240 - reg_val = readl(mdata->base + SPI_CFG0_REG); 241 - if (mdata->dev_comp->enhance_timing) { 242 - hold = min_t(u32, hold, 0x10000); 243 - setup = min_t(u32, setup, 0x10000); 244 - reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_HOLD_OFFSET); 245 - reg_val |= (((hold - 1) & 0xffff) 246 - << SPI_ADJUST_CFG0_CS_HOLD_OFFSET); 247 - reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_SETUP_OFFSET); 248 - reg_val |= (((setup - 1) & 0xffff) 249 - << SPI_ADJUST_CFG0_CS_SETUP_OFFSET); 250 - } else { 251 - hold = min_t(u32, hold, 0x100); 252 - setup = min_t(u32, setup, 0x100); 253 - reg_val &= ~(0xff << SPI_CFG0_CS_HOLD_OFFSET); 254 - reg_val |= (((hold - 1) & 0xff) << SPI_CFG0_CS_HOLD_OFFSET); 255 - reg_val &= ~(0xff << SPI_CFG0_CS_SETUP_OFFSET); 256 - reg_val |= (((setup - 1) & 0xff) 257 - << SPI_CFG0_CS_SETUP_OFFSET); 236 + if (hold || setup) { 237 + reg_val = readl(mdata->base + SPI_CFG0_REG); 238 + if (mdata->dev_comp->enhance_timing) { 239 + if (hold) { 240 + hold = min_t(u32, hold, 0x10000); 241 + reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_HOLD_OFFSET); 242 + reg_val |= (((hold - 1) & 0xffff) 243 + << SPI_ADJUST_CFG0_CS_HOLD_OFFSET); 244 + } 245 + if (setup) { 246 + setup = min_t(u32, setup, 0x10000); 247 + reg_val &= ~(0xffff << SPI_ADJUST_CFG0_CS_SETUP_OFFSET); 248 + reg_val |= (((setup - 1) & 0xffff) 249 + << SPI_ADJUST_CFG0_CS_SETUP_OFFSET); 250 + } 251 + } else { 252 + if (hold) { 253 + hold = min_t(u32, hold, 0x100); 254 + reg_val &= ~(0xff << SPI_CFG0_CS_HOLD_OFFSET); 255 + reg_val |= (((hold - 1) & 0xff) << SPI_CFG0_CS_HOLD_OFFSET); 256 + } 257 + if (setup) { 258 + setup = min_t(u32, setup, 0x100); 259 + reg_val &= ~(0xff << SPI_CFG0_CS_SETUP_OFFSET); 260 + reg_val |= (((setup - 1) & 0xff) 261 + << SPI_CFG0_CS_SETUP_OFFSET); 262 + } 263 + } 264 + writel(reg_val, mdata->base + SPI_CFG0_REG); 258 265 } 259 - writel(reg_val, mdata->base + SPI_CFG0_REG); 260 266 261 - inactive = min_t(u32, inactive, 0x100); 262 - reg_val = readl(mdata->base + SPI_CFG1_REG); 263 - reg_val &= ~SPI_CFG1_CS_IDLE_MASK; 264 - reg_val |= (((inactive - 1) & 0xff) << SPI_CFG1_CS_IDLE_OFFSET); 265 - writel(reg_val, mdata->base + SPI_CFG1_REG); 267 + if (inactive) { 268 + inactive = min_t(u32, inactive, 0x100); 269 + reg_val = readl(mdata->base + SPI_CFG1_REG); 270 + reg_val &= ~SPI_CFG1_CS_IDLE_MASK; 271 + reg_val |= (((inactive - 1) & 0xff) << SPI_CFG1_CS_IDLE_OFFSET); 272 + writel(reg_val, mdata->base + SPI_CFG1_REG); 273 + } 266 274 267 275 return 0; 268 276 }
+7
drivers/spi/spi-mux.c
··· 137 137 priv = spi_controller_get_devdata(ctlr); 138 138 priv->spi = spi; 139 139 140 + /* 141 + * Increase lockdep class as these lock are taken while the parent bus 142 + * already holds their instance's lock. 143 + */ 144 + lockdep_set_subclass(&ctlr->io_mutex, 1); 145 + lockdep_set_subclass(&ctlr->add_lock, 1); 146 + 140 147 priv->mux = devm_mux_control_get(&spi->dev, NULL); 141 148 if (IS_ERR(priv->mux)) { 142 149 ret = dev_err_probe(&spi->dev, PTR_ERR(priv->mux),
+7 -19
drivers/spi/spi-nxp-fspi.c
··· 33 33 34 34 #include <linux/acpi.h> 35 35 #include <linux/bitops.h> 36 + #include <linux/bitfield.h> 36 37 #include <linux/clk.h> 37 38 #include <linux/completion.h> 38 39 #include <linux/delay.h> ··· 316 315 #define NXP_FSPI_MIN_IOMAP SZ_4M 317 316 318 317 #define DCFG_RCWSR1 0x100 318 + #define SYS_PLL_RAT GENMASK(6, 2) 319 319 320 320 /* Access flash memory using IP bus only */ 321 321 #define FSPI_QUIRK_USE_IP_ONLY BIT(0) ··· 928 926 { .family = "QorIQ LS1028A" }, 929 927 { /* sentinel */ } 930 928 }; 931 - struct device_node *np; 932 929 struct regmap *map; 933 - u32 val = 0, sysclk = 0; 930 + u32 val, sys_pll_ratio; 934 931 int ret; 935 932 936 933 /* Check for LS1028A family */ ··· 938 937 return; 939 938 } 940 939 941 - /* Compute system clock frequency multiplier ratio */ 942 940 map = syscon_regmap_lookup_by_compatible("fsl,ls1028a-dcfg"); 943 941 if (IS_ERR(map)) { 944 942 dev_err(f->dev, "No syscon regmap\n"); ··· 948 948 if (ret < 0) 949 949 goto err; 950 950 951 - /* Strap bits 6:2 define SYS_PLL_RAT i.e frequency multiplier ratio */ 952 - val = (val >> 2) & 0x1F; 953 - WARN(val == 0, "Strapping is zero: Cannot determine ratio"); 951 + sys_pll_ratio = FIELD_GET(SYS_PLL_RAT, val); 952 + dev_dbg(f->dev, "val: 0x%08x, sys_pll_ratio: %d\n", val, sys_pll_ratio); 954 953 955 - /* Compute system clock frequency */ 956 - np = of_find_node_by_name(NULL, "clock-sysclk"); 957 - if (!np) 958 - goto err; 959 - 960 - if (of_property_read_u32(np, "clock-frequency", &sysclk)) 961 - goto err; 962 - 963 - sysclk = (sysclk * val) / 1000000; /* Convert sysclk to Mhz */ 964 - dev_dbg(f->dev, "val: 0x%08x, sysclk: %dMhz\n", val, sysclk); 965 - 966 - /* Use IP bus only if PLL is 300MHz */ 967 - if (sysclk == 300) 954 + /* Use IP bus only if platform clock is 300MHz */ 955 + if (sys_pll_ratio == 3) 968 956 f->devtype_data->quirks |= FSPI_QUIRK_USE_IP_ONLY; 969 957 970 958 return;
+1 -3
drivers/spi/spi-tegra20-slink.c
··· 1182 1182 } 1183 1183 #endif 1184 1184 1185 - #ifdef CONFIG_PM 1186 - static int tegra_slink_runtime_suspend(struct device *dev) 1185 + static int __maybe_unused tegra_slink_runtime_suspend(struct device *dev) 1187 1186 { 1188 1187 struct spi_master *master = dev_get_drvdata(dev); 1189 1188 struct tegra_slink_data *tspi = spi_master_get_devdata(master); ··· 1207 1208 } 1208 1209 return 0; 1209 1210 } 1210 - #endif /* CONFIG_PM */ 1211 1211 1212 1212 static const struct dev_pm_ops slink_pm_ops = { 1213 1213 SET_RUNTIME_PM_OPS(tegra_slink_runtime_suspend,
+11 -16
drivers/spi/spi.c
··· 478 478 */ 479 479 static DEFINE_MUTEX(board_lock); 480 480 481 - /* 482 - * Prevents addition of devices with same chip select and 483 - * addition of devices below an unregistering controller. 484 - */ 485 - static DEFINE_MUTEX(spi_add_lock); 486 - 487 481 /** 488 482 * spi_alloc_device - Allocate a new SPI device 489 483 * @ctlr: Controller to which device is connected ··· 630 636 * chipselect **BEFORE** we call setup(), else we'll trash 631 637 * its configuration. Lock against concurrent add() calls. 632 638 */ 633 - mutex_lock(&spi_add_lock); 639 + mutex_lock(&ctlr->add_lock); 634 640 status = __spi_add_device(spi); 635 - mutex_unlock(&spi_add_lock); 641 + mutex_unlock(&ctlr->add_lock); 636 642 return status; 637 643 } 638 644 EXPORT_SYMBOL_GPL(spi_add_device); ··· 652 658 /* Set the bus ID string */ 653 659 spi_dev_set_name(spi); 654 660 655 - WARN_ON(!mutex_is_locked(&spi_add_lock)); 661 + WARN_ON(!mutex_is_locked(&ctlr->add_lock)); 656 662 return __spi_add_device(spi); 657 663 } 658 664 ··· 2547 2553 return NULL; 2548 2554 2549 2555 device_initialize(&ctlr->dev); 2556 + INIT_LIST_HEAD(&ctlr->queue); 2557 + spin_lock_init(&ctlr->queue_lock); 2558 + spin_lock_init(&ctlr->bus_lock_spinlock); 2559 + mutex_init(&ctlr->bus_lock_mutex); 2560 + mutex_init(&ctlr->io_mutex); 2561 + mutex_init(&ctlr->add_lock); 2550 2562 ctlr->bus_num = -1; 2551 2563 ctlr->num_chipselect = 1; 2552 2564 ctlr->slave = slave; ··· 2825 2825 return id; 2826 2826 ctlr->bus_num = id; 2827 2827 } 2828 - INIT_LIST_HEAD(&ctlr->queue); 2829 - spin_lock_init(&ctlr->queue_lock); 2830 - spin_lock_init(&ctlr->bus_lock_spinlock); 2831 - mutex_init(&ctlr->bus_lock_mutex); 2832 - mutex_init(&ctlr->io_mutex); 2833 2828 ctlr->bus_lock_flag = 0; 2834 2829 init_completion(&ctlr->xfer_completion); 2835 2830 if (!ctlr->max_dma_len) ··· 2961 2966 2962 2967 /* Prevent addition of new devices, unregister existing ones */ 2963 2968 if (IS_ENABLED(CONFIG_SPI_DYNAMIC)) 2964 - mutex_lock(&spi_add_lock); 2969 + mutex_lock(&ctlr->add_lock); 2965 2970 2966 2971 device_for_each_child(&ctlr->dev, NULL, __unregister); 2967 2972 ··· 2992 2997 mutex_unlock(&board_lock); 2993 2998 2994 2999 if (IS_ENABLED(CONFIG_SPI_DYNAMIC)) 2995 - mutex_unlock(&spi_add_lock); 3000 + mutex_unlock(&ctlr->add_lock); 2996 3001 } 2997 3002 EXPORT_SYMBOL_GPL(spi_unregister_controller); 2998 3003
+14
drivers/spi/spidev.c
··· 673 673 674 674 static struct class *spidev_class; 675 675 676 + static const struct spi_device_id spidev_spi_ids[] = { 677 + { .name = "dh2228fv" }, 678 + { .name = "ltc2488" }, 679 + { .name = "sx1301" }, 680 + { .name = "bk4" }, 681 + { .name = "dhcom-board" }, 682 + { .name = "m53cpld" }, 683 + { .name = "spi-petra" }, 684 + { .name = "spi-authenta" }, 685 + {}, 686 + }; 687 + MODULE_DEVICE_TABLE(spi, spidev_spi_ids); 688 + 676 689 #ifdef CONFIG_OF 677 690 static const struct of_device_id spidev_dt_ids[] = { 678 691 { .compatible = "rohm,dh2228fv" }, ··· 831 818 }, 832 819 .probe = spidev_probe, 833 820 .remove = spidev_remove, 821 + .id_table = spidev_spi_ids, 834 822 835 823 /* NOTE: suspend/resume methods are not necessary here. 836 824 * We don't do anything except pass the requests to/from
+1 -1
drivers/staging/r8188eu/hal/hal_intf.c
··· 248 248 #ifdef CONFIG_88EU_AP_MODE 249 249 struct sta_info *psta = NULL; 250 250 struct sta_priv *pstapriv = &adapt->stapriv; 251 - if ((mac_id - 1) > 0) 251 + if (mac_id >= 2) 252 252 psta = pstapriv->sta_aid[(mac_id - 1) - 1]; 253 253 if (psta) 254 254 add_RATid(adapt, psta, 0);/* todo: based on rssi_level*/
+1 -1
drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
··· 182 182 offset = (uintptr_t)ubuf & (PAGE_SIZE - 1); 183 183 num_pages = DIV_ROUND_UP(count + offset, PAGE_SIZE); 184 184 185 - if (num_pages > (SIZE_MAX - sizeof(struct pagelist) - 185 + if ((size_t)num_pages > (SIZE_MAX - sizeof(struct pagelist) - 186 186 sizeof(struct vchiq_pagelist_info)) / 187 187 (sizeof(u32) + sizeof(pages[0]) + 188 188 sizeof(struct scatterlist)))
+3
drivers/tee/optee/core.c
··· 585 585 { 586 586 struct optee *optee = platform_get_drvdata(pdev); 587 587 588 + /* Unregister OP-TEE specific client devices on TEE bus */ 589 + optee_unregister_devices(); 590 + 588 591 /* 589 592 * Ask OP-TEE to free all cached shared memory objects to decrease 590 593 * reference counters and also avoid wild pointers in secure world
+22
drivers/tee/optee/device.c
··· 53 53 return 0; 54 54 } 55 55 56 + static void optee_release_device(struct device *dev) 57 + { 58 + struct tee_client_device *optee_device = to_tee_client_device(dev); 59 + 60 + kfree(optee_device); 61 + } 62 + 56 63 static int optee_register_device(const uuid_t *device_uuid) 57 64 { 58 65 struct tee_client_device *optee_device = NULL; ··· 70 63 return -ENOMEM; 71 64 72 65 optee_device->dev.bus = &tee_bus_type; 66 + optee_device->dev.release = optee_release_device; 73 67 if (dev_set_name(&optee_device->dev, "optee-ta-%pUb", device_uuid)) { 74 68 kfree(optee_device); 75 69 return -ENOMEM; ··· 161 153 int optee_enumerate_devices(u32 func) 162 154 { 163 155 return __optee_enumerate_devices(func); 156 + } 157 + 158 + static int __optee_unregister_device(struct device *dev, void *data) 159 + { 160 + if (!strncmp(dev_name(dev), "optee-ta", strlen("optee-ta"))) 161 + device_unregister(dev); 162 + 163 + return 0; 164 + } 165 + 166 + void optee_unregister_devices(void) 167 + { 168 + bus_for_each_dev(&tee_bus_type, NULL, NULL, 169 + __optee_unregister_device); 164 170 }
+1
drivers/tee/optee/optee_private.h
··· 184 184 #define PTA_CMD_GET_DEVICES 0x0 185 185 #define PTA_CMD_GET_DEVICES_SUPP 0x1 186 186 int optee_enumerate_devices(u32 func); 187 + void optee_unregister_devices(void); 187 188 188 189 /* 189 190 * Small helpers
+6 -2
drivers/tty/serial/8250/Kconfig
··· 361 361 If unsure, say N. 362 362 363 363 config SERIAL_8250_FSL 364 - bool 364 + bool "Freescale 16550 UART support" if COMPILE_TEST && !(PPC || ARM || ARM64) 365 365 depends on SERIAL_8250_CONSOLE 366 - default PPC || ARM || ARM64 || COMPILE_TEST 366 + default PPC || ARM || ARM64 367 + help 368 + Selecting this option enables a workaround for a break-detection 369 + erratum for Freescale 16550 UARTs in the 8250 driver. It also 370 + enables support for ACPI enumeration. 367 371 368 372 config SERIAL_8250_DW 369 373 tristate "Support for Synopsys DesignWare 8250 quirks"
+13 -15
drivers/usb/host/xhci-dbgtty.c
··· 408 408 return -EBUSY; 409 409 410 410 xhci_dbc_tty_init_port(dbc, port); 411 - tty_dev = tty_port_register_device(&port->port, 412 - dbc_tty_driver, 0, NULL); 413 - if (IS_ERR(tty_dev)) { 414 - ret = PTR_ERR(tty_dev); 415 - goto register_fail; 416 - } 417 411 418 412 ret = kfifo_alloc(&port->write_fifo, DBC_WRITE_BUF_SIZE, GFP_KERNEL); 419 413 if (ret) 420 - goto buf_alloc_fail; 414 + goto err_exit_port; 421 415 422 416 ret = xhci_dbc_alloc_requests(dbc, BULK_IN, &port->read_pool, 423 417 dbc_read_complete); 424 418 if (ret) 425 - goto request_fail; 419 + goto err_free_fifo; 426 420 427 421 ret = xhci_dbc_alloc_requests(dbc, BULK_OUT, &port->write_pool, 428 422 dbc_write_complete); 429 423 if (ret) 430 - goto request_fail; 424 + goto err_free_requests; 425 + 426 + tty_dev = tty_port_register_device(&port->port, 427 + dbc_tty_driver, 0, NULL); 428 + if (IS_ERR(tty_dev)) { 429 + ret = PTR_ERR(tty_dev); 430 + goto err_free_requests; 431 + } 431 432 432 433 port->registered = true; 433 434 434 435 return 0; 435 436 436 - request_fail: 437 + err_free_requests: 437 438 xhci_dbc_free_requests(&port->read_pool); 438 439 xhci_dbc_free_requests(&port->write_pool); 440 + err_free_fifo: 439 441 kfifo_free(&port->write_fifo); 440 - 441 - buf_alloc_fail: 442 - tty_unregister_device(dbc_tty_driver, 0); 443 - 444 - register_fail: 442 + err_exit_port: 445 443 xhci_dbc_tty_exit_port(port); 446 444 447 445 dev_err(dbc->dev, "can't register tty port, err %d\n", ret);
+5 -1
drivers/usb/host/xhci-pci.c
··· 30 30 #define PCI_VENDOR_ID_FRESCO_LOGIC 0x1b73 31 31 #define PCI_DEVICE_ID_FRESCO_LOGIC_PDK 0x1000 32 32 #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1009 0x1009 33 + #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1100 0x1100 33 34 #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1400 0x1400 34 35 35 36 #define PCI_VENDOR_ID_ETRON 0x1b6f ··· 114 113 /* Look for vendor-specific quirks */ 115 114 if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC && 116 115 (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK || 116 + pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1100 || 117 117 pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1400)) { 118 118 if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK && 119 119 pdev->revision == 0x0) { ··· 281 279 pdev->device == 0x3432) 282 280 xhci->quirks |= XHCI_BROKEN_STREAMS; 283 281 284 - if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483) 282 + if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483) { 285 283 xhci->quirks |= XHCI_LPM_SUPPORT; 284 + xhci->quirks |= XHCI_EP_CTX_BROKEN_DCS; 285 + } 286 286 287 287 if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && 288 288 pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI)
+34 -5
drivers/usb/host/xhci-ring.c
··· 366 366 /* Must be called with xhci->lock held, releases and aquires lock back */ 367 367 static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags) 368 368 { 369 - u64 temp_64; 369 + u32 temp_32; 370 370 int ret; 371 371 372 372 xhci_dbg(xhci, "Abort command ring\n"); 373 373 374 374 reinit_completion(&xhci->cmd_ring_stop_completion); 375 375 376 - temp_64 = xhci_read_64(xhci, &xhci->op_regs->cmd_ring); 377 - xhci_write_64(xhci, temp_64 | CMD_RING_ABORT, 378 - &xhci->op_regs->cmd_ring); 376 + /* 377 + * The control bits like command stop, abort are located in lower 378 + * dword of the command ring control register. Limit the write 379 + * to the lower dword to avoid corrupting the command ring pointer 380 + * in case if the command ring is stopped by the time upper dword 381 + * is written. 382 + */ 383 + temp_32 = readl(&xhci->op_regs->cmd_ring); 384 + writel(temp_32 | CMD_RING_ABORT, &xhci->op_regs->cmd_ring); 379 385 380 386 /* Section 4.6.1.2 of xHCI 1.0 spec says software should also time the 381 387 * completion of the Command Abort operation. If CRR is not negated in 5 ··· 565 559 struct xhci_ring *ep_ring; 566 560 struct xhci_command *cmd; 567 561 struct xhci_segment *new_seg; 562 + struct xhci_segment *halted_seg = NULL; 568 563 union xhci_trb *new_deq; 569 564 int new_cycle; 565 + union xhci_trb *halted_trb; 566 + int index = 0; 570 567 dma_addr_t addr; 571 568 u64 hw_dequeue; 572 569 bool cycle_found = false; ··· 607 598 hw_dequeue = xhci_get_hw_deq(xhci, dev, ep_index, stream_id); 608 599 new_seg = ep_ring->deq_seg; 609 600 new_deq = ep_ring->dequeue; 610 - new_cycle = hw_dequeue & 0x1; 601 + 602 + /* 603 + * Quirk: xHC write-back of the DCS field in the hardware dequeue 604 + * pointer is wrong - use the cycle state of the TRB pointed to by 605 + * the dequeue pointer. 606 + */ 607 + if (xhci->quirks & XHCI_EP_CTX_BROKEN_DCS && 608 + !(ep->ep_state & EP_HAS_STREAMS)) 609 + halted_seg = trb_in_td(xhci, td->start_seg, 610 + td->first_trb, td->last_trb, 611 + hw_dequeue & ~0xf, false); 612 + if (halted_seg) { 613 + index = ((dma_addr_t)(hw_dequeue & ~0xf) - halted_seg->dma) / 614 + sizeof(*halted_trb); 615 + halted_trb = &halted_seg->trbs[index]; 616 + new_cycle = halted_trb->generic.field[3] & 0x1; 617 + xhci_dbg(xhci, "Endpoint DCS = %d TRB index = %d cycle = %d\n", 618 + (u8)(hw_dequeue & 0x1), index, new_cycle); 619 + } else { 620 + new_cycle = hw_dequeue & 0x1; 621 + } 611 622 612 623 /* 613 624 * We want to find the pointer, segment and cycle state of the new trb
+5
drivers/usb/host/xhci.c
··· 3214 3214 return; 3215 3215 3216 3216 /* Bail out if toggle is already being cleared by a endpoint reset */ 3217 + spin_lock_irqsave(&xhci->lock, flags); 3217 3218 if (ep->ep_state & EP_HARD_CLEAR_TOGGLE) { 3218 3219 ep->ep_state &= ~EP_HARD_CLEAR_TOGGLE; 3220 + spin_unlock_irqrestore(&xhci->lock, flags); 3219 3221 return; 3220 3222 } 3223 + spin_unlock_irqrestore(&xhci->lock, flags); 3221 3224 /* Only interrupt and bulk ep's use data toggle, USB2 spec 5.5.4-> */ 3222 3225 if (usb_endpoint_xfer_control(&host_ep->desc) || 3223 3226 usb_endpoint_xfer_isoc(&host_ep->desc)) ··· 3306 3303 xhci_free_command(xhci, cfg_cmd); 3307 3304 cleanup: 3308 3305 xhci_free_command(xhci, stop_cmd); 3306 + spin_lock_irqsave(&xhci->lock, flags); 3309 3307 if (ep->ep_state & EP_SOFT_CLEAR_TOGGLE) 3310 3308 ep->ep_state &= ~EP_SOFT_CLEAR_TOGGLE; 3309 + spin_unlock_irqrestore(&xhci->lock, flags); 3311 3310 } 3312 3311 3313 3312 static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,
+1
drivers/usb/host/xhci.h
··· 1899 1899 #define XHCI_SG_TRB_CACHE_SIZE_QUIRK BIT_ULL(39) 1900 1900 #define XHCI_NO_SOFT_RETRY BIT_ULL(40) 1901 1901 #define XHCI_BROKEN_D3COLD BIT_ULL(41) 1902 + #define XHCI_EP_CTX_BROKEN_DCS BIT_ULL(42) 1902 1903 1903 1904 unsigned int num_active_eps; 1904 1905 unsigned int limit_active_eps;
+3 -1
drivers/usb/musb/musb_dsps.c
··· 899 899 if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) { 900 900 ret = dsps_setup_optional_vbus_irq(pdev, glue); 901 901 if (ret) 902 - goto err; 902 + goto unregister_pdev; 903 903 } 904 904 905 905 return 0; 906 906 907 + unregister_pdev: 908 + platform_device_unregister(glue->musb); 907 909 err: 908 910 pm_runtime_disable(&pdev->dev); 909 911 iounmap(glue->usbss_base);
+8
drivers/usb/serial/option.c
··· 246 246 /* These Quectel products use Quectel's vendor ID */ 247 247 #define QUECTEL_PRODUCT_EC21 0x0121 248 248 #define QUECTEL_PRODUCT_EC25 0x0125 249 + #define QUECTEL_PRODUCT_EG91 0x0191 249 250 #define QUECTEL_PRODUCT_EG95 0x0195 250 251 #define QUECTEL_PRODUCT_BG96 0x0296 251 252 #define QUECTEL_PRODUCT_EP06 0x0306 252 253 #define QUECTEL_PRODUCT_EM12 0x0512 253 254 #define QUECTEL_PRODUCT_RM500Q 0x0800 255 + #define QUECTEL_PRODUCT_EC200S_CN 0x6002 254 256 #define QUECTEL_PRODUCT_EC200T 0x6026 255 257 256 258 #define CMOTECH_VENDOR_ID 0x16d8 ··· 1113 1111 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25, 0xff, 0xff, 0xff), 1114 1112 .driver_info = NUMEP2 }, 1115 1113 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25, 0xff, 0, 0) }, 1114 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG91, 0xff, 0xff, 0xff), 1115 + .driver_info = NUMEP2 }, 1116 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG91, 0xff, 0, 0) }, 1116 1117 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0xff, 0xff), 1117 1118 .driver_info = NUMEP2 }, 1118 1119 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0, 0) }, ··· 1133 1128 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) }, 1134 1129 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10), 1135 1130 .driver_info = ZLP }, 1131 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) }, 1136 1132 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) }, 1137 1133 1138 1134 { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) }, ··· 1233 1227 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1234 1228 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1203, 0xff), /* Telit LE910Cx (RNDIS) */ 1235 1229 .driver_info = NCTRL(2) | RSVD(3) }, 1230 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1204, 0xff), /* Telit LE910Cx (MBIM) */ 1231 + .driver_info = NCTRL(0) | RSVD(1) }, 1236 1232 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4), 1237 1233 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) }, 1238 1234 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920),
+1
drivers/usb/serial/qcserial.c
··· 165 165 {DEVICE_SWI(0x1199, 0x907b)}, /* Sierra Wireless EM74xx */ 166 166 {DEVICE_SWI(0x1199, 0x9090)}, /* Sierra Wireless EM7565 QDL */ 167 167 {DEVICE_SWI(0x1199, 0x9091)}, /* Sierra Wireless EM7565 */ 168 + {DEVICE_SWI(0x1199, 0x90d2)}, /* Sierra Wireless EM9191 QDL */ 168 169 {DEVICE_SWI(0x413c, 0x81a2)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */ 169 170 {DEVICE_SWI(0x413c, 0x81a3)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */ 170 171 {DEVICE_SWI(0x413c, 0x81a4)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */
+5 -5
drivers/vhost/vdpa.c
··· 173 173 if (status != 0 && (ops->get_status(vdpa) & ~status) != 0) 174 174 return -EINVAL; 175 175 176 + if ((status_old & VIRTIO_CONFIG_S_DRIVER_OK) && !(status & VIRTIO_CONFIG_S_DRIVER_OK)) 177 + for (i = 0; i < nvqs; i++) 178 + vhost_vdpa_unsetup_vq_irq(v, i); 179 + 176 180 if (status == 0) { 177 181 ret = ops->reset(vdpa); 178 182 if (ret) ··· 187 183 if ((status & VIRTIO_CONFIG_S_DRIVER_OK) && !(status_old & VIRTIO_CONFIG_S_DRIVER_OK)) 188 184 for (i = 0; i < nvqs; i++) 189 185 vhost_vdpa_setup_vq_irq(v, i); 190 - 191 - if ((status_old & VIRTIO_CONFIG_S_DRIVER_OK) && !(status & VIRTIO_CONFIG_S_DRIVER_OK)) 192 - for (i = 0; i < nvqs; i++) 193 - vhost_vdpa_unsetup_vq_irq(v, i); 194 186 195 187 return 0; 196 188 } ··· 322 322 struct eventfd_ctx *ctx; 323 323 324 324 cb.callback = vhost_vdpa_config_cb; 325 - cb.private = v->vdpa; 325 + cb.private = v; 326 326 if (copy_from_user(&fd, argp, sizeof(fd))) 327 327 return -EFAULT; 328 328
+11
drivers/virtio/virtio.c
··· 239 239 driver_features_legacy = driver_features; 240 240 } 241 241 242 + /* 243 + * Some devices detect legacy solely via F_VERSION_1. Write 244 + * F_VERSION_1 to force LE config space accesses before FEATURES_OK for 245 + * these when needed. 246 + */ 247 + if (drv->validate && !virtio_legacy_is_little_endian() 248 + && device_features & BIT_ULL(VIRTIO_F_VERSION_1)) { 249 + dev->features = BIT_ULL(VIRTIO_F_VERSION_1); 250 + dev->config->finalize_features(dev); 251 + } 252 + 242 253 if (device_features & (1ULL << VIRTIO_F_VERSION_1)) 243 254 dev->features = driver_features & device_features; 244 255 else
+3 -9
fs/ceph/caps.c
··· 2330 2330 2331 2331 int ceph_fsync(struct file *file, loff_t start, loff_t end, int datasync) 2332 2332 { 2333 - struct ceph_file_info *fi = file->private_data; 2334 2333 struct inode *inode = file->f_mapping->host; 2335 2334 struct ceph_inode_info *ci = ceph_inode(inode); 2336 2335 u64 flush_tid; ··· 2364 2365 if (err < 0) 2365 2366 ret = err; 2366 2367 2367 - if (errseq_check(&ci->i_meta_err, READ_ONCE(fi->meta_err))) { 2368 - spin_lock(&file->f_lock); 2369 - err = errseq_check_and_advance(&ci->i_meta_err, 2370 - &fi->meta_err); 2371 - spin_unlock(&file->f_lock); 2372 - if (err < 0) 2373 - ret = err; 2374 - } 2368 + err = file_check_and_advance_wb_err(file); 2369 + if (err < 0) 2370 + ret = err; 2375 2371 out: 2376 2372 dout("fsync %p%s result=%d\n", inode, datasync ? " datasync" : "", ret); 2377 2373 return ret;
-1
fs/ceph/file.c
··· 233 233 234 234 spin_lock_init(&fi->rw_contexts_lock); 235 235 INIT_LIST_HEAD(&fi->rw_contexts); 236 - fi->meta_err = errseq_sample(&ci->i_meta_err); 237 236 fi->filp_gen = READ_ONCE(ceph_inode_to_client(inode)->filp_gen); 238 237 239 238 return 0;
-2
fs/ceph/inode.c
··· 541 541 542 542 ceph_fscache_inode_init(ci); 543 543 544 - ci->i_meta_err = 0; 545 - 546 544 return &ci->vfs_inode; 547 545 } 548 546
+5 -12
fs/ceph/mds_client.c
··· 1493 1493 { 1494 1494 struct ceph_mds_request *req; 1495 1495 struct rb_node *p; 1496 - struct ceph_inode_info *ci; 1497 1496 1498 1497 dout("cleanup_session_requests mds%d\n", session->s_mds); 1499 1498 mutex_lock(&mdsc->mutex); ··· 1501 1502 struct ceph_mds_request, r_unsafe_item); 1502 1503 pr_warn_ratelimited(" dropping unsafe request %llu\n", 1503 1504 req->r_tid); 1504 - if (req->r_target_inode) { 1505 - /* dropping unsafe change of inode's attributes */ 1506 - ci = ceph_inode(req->r_target_inode); 1507 - errseq_set(&ci->i_meta_err, -EIO); 1508 - } 1509 - if (req->r_unsafe_dir) { 1510 - /* dropping unsafe directory operation */ 1511 - ci = ceph_inode(req->r_unsafe_dir); 1512 - errseq_set(&ci->i_meta_err, -EIO); 1513 - } 1505 + if (req->r_target_inode) 1506 + mapping_set_error(req->r_target_inode->i_mapping, -EIO); 1507 + if (req->r_unsafe_dir) 1508 + mapping_set_error(req->r_unsafe_dir->i_mapping, -EIO); 1514 1509 __unregister_request(mdsc, req); 1515 1510 } 1516 1511 /* zero r_attempts, so kick_requests() will re-send requests */ ··· 1671 1678 spin_unlock(&mdsc->cap_dirty_lock); 1672 1679 1673 1680 if (dirty_dropped) { 1674 - errseq_set(&ci->i_meta_err, -EIO); 1681 + mapping_set_error(inode->i_mapping, -EIO); 1675 1682 1676 1683 if (ci->i_wrbuffer_ref_head == 0 && 1677 1684 ci->i_wr_ref == 0 &&
+14 -3
fs/ceph/super.c
··· 1002 1002 struct ceph_fs_client *new = fc->s_fs_info; 1003 1003 struct ceph_mount_options *fsopt = new->mount_options; 1004 1004 struct ceph_options *opt = new->client->options; 1005 - struct ceph_fs_client *other = ceph_sb_to_client(sb); 1005 + struct ceph_fs_client *fsc = ceph_sb_to_client(sb); 1006 1006 1007 1007 dout("ceph_compare_super %p\n", sb); 1008 1008 1009 - if (compare_mount_options(fsopt, opt, other)) { 1009 + if (compare_mount_options(fsopt, opt, fsc)) { 1010 1010 dout("monitor(s)/mount options don't match\n"); 1011 1011 return 0; 1012 1012 } 1013 1013 if ((opt->flags & CEPH_OPT_FSID) && 1014 - ceph_fsid_compare(&opt->fsid, &other->client->fsid)) { 1014 + ceph_fsid_compare(&opt->fsid, &fsc->client->fsid)) { 1015 1015 dout("fsid doesn't match\n"); 1016 1016 return 0; 1017 1017 } ··· 1019 1019 dout("flags differ\n"); 1020 1020 return 0; 1021 1021 } 1022 + 1023 + if (fsc->blocklisted && !ceph_test_mount_opt(fsc, CLEANRECOVER)) { 1024 + dout("client is blocklisted (and CLEANRECOVER is not set)\n"); 1025 + return 0; 1026 + } 1027 + 1028 + if (fsc->mount_state == CEPH_MOUNT_SHUTDOWN) { 1029 + dout("client has been forcibly unmounted\n"); 1030 + return 0; 1031 + } 1032 + 1022 1033 return 1; 1023 1034 } 1024 1035
-3
fs/ceph/super.h
··· 429 429 #ifdef CONFIG_CEPH_FSCACHE 430 430 struct fscache_cookie *fscache; 431 431 #endif 432 - errseq_t i_meta_err; 433 - 434 432 struct inode vfs_inode; /* at end */ 435 433 }; 436 434 ··· 772 774 spinlock_t rw_contexts_lock; 773 775 struct list_head rw_contexts; 774 776 775 - errseq_t meta_err; 776 777 u32 filp_gen; 777 778 atomic_t num_locks; 778 779 };
+1 -1
fs/io_uring.c
··· 2949 2949 struct io_ring_ctx *ctx = req->ctx; 2950 2950 2951 2951 req_set_fail(req); 2952 - if (issue_flags & IO_URING_F_NONBLOCK) { 2952 + if (!(issue_flags & IO_URING_F_NONBLOCK)) { 2953 2953 mutex_lock(&ctx->uring_lock); 2954 2954 __io_req_complete(req, issue_flags, ret, cflags); 2955 2955 mutex_unlock(&ctx->uring_lock);
+1 -1
fs/kernel_read_file.c
··· 178 178 struct fd f = fdget(fd); 179 179 int ret = -EBADF; 180 180 181 - if (!f.file) 181 + if (!f.file || !(f.file->f_mode & FMODE_READ)) 182 182 goto out; 183 183 184 184 ret = kernel_read_file(f.file, offset, buf, buf_size, file_size, id);
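The one-line kernel_read_file change rejects descriptors opened without FMODE_READ instead of only bad descriptors. A rough userspace analogue of the same check goes through `fcntl(F_GETFL)` (POSIX only; a sketch, not the kernel path):

```c
#include <assert.h>
#include <fcntl.h>
#include <stdbool.h>

/* Userspace analogue of "!f.file || !(f.file->f_mode & FMODE_READ)":
 * bad descriptors and write-only descriptors both fail. */
static bool fd_is_readable(int fd)
{
	int flags = fcntl(fd, F_GETFL);

	if (flags < 0)
		return false;	/* EBADF and friends */
	flags &= O_ACCMODE;
	return flags == O_RDONLY || flags == O_RDWR;
}
```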
+8 -1
fs/kernfs/dir.c
··· 1111 1111 1112 1112 kn = kernfs_find_ns(parent, dentry->d_name.name, ns); 1113 1113 /* attach dentry and inode */ 1114 - if (kn && kernfs_active(kn)) { 1114 + if (kn) { 1115 + /* Inactive nodes are invisible to the VFS so don't 1116 + * create a negative. 1117 + */ 1118 + if (!kernfs_active(kn)) { 1119 + up_read(&kernfs_rwsem); 1120 + return NULL; 1121 + } 1115 1122 inode = kernfs_get_inode(dir->i_sb, kn); 1116 1123 if (!inode) 1117 1124 inode = ERR_PTR(-ENOMEM);
+5 -15
fs/ntfs3/attrib.c
··· 6 6 * TODO: Merge attr_set_size/attr_data_get_block/attr_allocate_frame? 7 7 */ 8 8 9 - #include <linux/blkdev.h> 10 - #include <linux/buffer_head.h> 11 9 #include <linux/fs.h> 12 - #include <linux/hash.h> 13 - #include <linux/nls.h> 14 - #include <linux/ratelimit.h> 15 10 #include <linux/slab.h> 11 + #include <linux/kernel.h> 16 12 17 13 #include "debug.h" 18 14 #include "ntfs.h" ··· 287 291 if (!rsize) { 288 292 /* Empty resident -> Non empty nonresident. */ 289 293 } else if (!is_data) { 290 - err = ntfs_sb_write_run(sbi, run, 0, data, rsize); 294 + err = ntfs_sb_write_run(sbi, run, 0, data, rsize, 0); 291 295 if (err) 292 296 goto out2; 293 297 } else if (!page) { ··· 447 451 again_1: 448 452 align = sbi->cluster_size; 449 453 450 - if (is_ext) { 454 + if (is_ext) 451 455 align <<= attr_b->nres.c_unit; 452 - if (is_attr_sparsed(attr_b)) 453 - keep_prealloc = false; 454 - } 455 456 456 457 old_valid = le64_to_cpu(attr_b->nres.valid_size); 457 458 old_size = le64_to_cpu(attr_b->nres.data_size); ··· 457 464 458 465 new_alloc = (new_size + align - 1) & ~(u64)(align - 1); 459 466 new_alen = new_alloc >> cluster_bits; 460 - 461 - if (keep_prealloc && is_ext) 462 - keep_prealloc = false; 463 467 464 468 if (keep_prealloc && new_size < old_size) { 465 469 attr_b->nres.data_size = cpu_to_le64(new_size); ··· 519 529 } else if (pre_alloc == -1) { 520 530 pre_alloc = 0; 521 531 if (type == ATTR_DATA && !name_len && 522 - sbi->options.prealloc) { 532 + sbi->options->prealloc) { 523 533 CLST new_alen2 = bytes_to_cluster( 524 534 sbi, get_pre_allocated(new_size)); 525 535 pre_alloc = new_alen2 - new_alen; ··· 1956 1966 return 0; 1957 1967 1958 1968 from = vbo; 1959 - to = (vbo + bytes) < data_size ? (vbo + bytes) : data_size; 1969 + to = min_t(u64, vbo + bytes, data_size); 1960 1970 memset(Add2Ptr(resident_data(attr_b), from), 0, to - from); 1961 1971 return 0; 1962 1972 }
+3 -6
fs/ntfs3/attrlist.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 8 #include <linux/fs.h> 11 - #include <linux/nls.h> 12 9 13 10 #include "debug.h" 14 11 #include "ntfs.h" ··· 333 336 334 337 if (attr && attr->non_res) { 335 338 err = ntfs_sb_write_run(ni->mi.sbi, &al->run, 0, al->le, 336 - al->size); 339 + al->size, 0); 337 340 if (err) 338 341 return err; 339 342 al->dirty = false; ··· 420 423 return true; 421 424 } 422 425 423 - int al_update(struct ntfs_inode *ni) 426 + int al_update(struct ntfs_inode *ni, int sync) 424 427 { 425 428 int err; 426 429 struct ATTRIB *attr; ··· 442 445 memcpy(resident_data(attr), al->le, al->size); 443 446 } else { 444 447 err = ntfs_sb_write_run(ni->mi.sbi, &al->run, 0, al->le, 445 - al->size); 448 + al->size, sync); 446 449 if (err) 447 450 goto out; 448 451
+2 -8
fs/ntfs3/bitfunc.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 - #include <linux/fs.h> 11 - #include <linux/nls.h> 8 + #include <linux/types.h> 12 9 13 - #include "debug.h" 14 - #include "ntfs.h" 15 10 #include "ntfs_fs.h" 16 11 17 12 #define BITS_IN_SIZE_T (sizeof(size_t) * 8) ··· 119 124 120 125 pos = nbits & 7; 121 126 if (pos) { 122 - u8 mask = fill_mask[pos]; 123 - 127 + mask = fill_mask[pos]; 124 128 if ((*map & mask) != mask) 125 129 return false; 126 130 }
+6 -8
fs/ntfs3/bitmap.c
··· 10 10 * 11 11 */ 12 12 13 - #include <linux/blkdev.h> 14 13 #include <linux/buffer_head.h> 15 14 #include <linux/fs.h> 16 - #include <linux/nls.h> 15 + #include <linux/kernel.h> 17 16 18 - #include "debug.h" 19 17 #include "ntfs.h" 20 18 #include "ntfs_fs.h" 21 19 ··· 433 435 ; 434 436 } else { 435 437 n3 = rb_next(&e->count.node); 436 - max_new_len = len > new_len ? len : new_len; 438 + max_new_len = max(len, new_len); 437 439 if (!n3) { 438 440 wnd->extent_max = max_new_len; 439 441 } else { ··· 729 731 wbits = wnd->bits_last; 730 732 731 733 tail = wbits - wbit; 732 - op = tail < bits ? tail : bits; 734 + op = min_t(u32, tail, bits); 733 735 734 736 bh = wnd_map(wnd, iw); 735 737 if (IS_ERR(bh)) { ··· 782 784 wbits = wnd->bits_last; 783 785 784 786 tail = wbits - wbit; 785 - op = tail < bits ? tail : bits; 787 + op = min_t(u32, tail, bits); 786 788 787 789 bh = wnd_map(wnd, iw); 788 790 if (IS_ERR(bh)) { ··· 832 834 wbits = wnd->bits_last; 833 835 834 836 tail = wbits - wbit; 835 - op = tail < bits ? tail : bits; 837 + op = min_t(u32, tail, bits); 836 838 837 839 if (wbits != wnd->free_bits[iw]) { 838 840 bool ret; ··· 924 926 wbits = wnd->bits_last; 925 927 926 928 tail = wbits - wbit; 927 - op = tail < bits ? tail : bits; 929 + op = min_t(u32, tail, bits); 928 930 929 931 if (wnd->free_bits[iw]) { 930 932 bool ret;
+3
fs/ntfs3/debug.h
··· 11 11 #ifndef _LINUX_NTFS3_DEBUG_H 12 12 #define _LINUX_NTFS3_DEBUG_H 13 13 14 + struct super_block; 15 + struct inode; 16 + 14 17 #ifndef Add2Ptr 15 18 #define Add2Ptr(P, I) ((void *)((u8 *)(P) + (I))) 16 19 #define PtrOffset(B, O) ((size_t)((size_t)(O) - (size_t)(B)))
+12 -18
fs/ntfs3/dir.c
··· 7 7 * 8 8 */ 9 9 10 - #include <linux/blkdev.h> 11 - #include <linux/buffer_head.h> 12 10 #include <linux/fs.h> 13 - #include <linux/iversion.h> 14 11 #include <linux/nls.h> 15 12 16 13 #include "debug.h" ··· 15 18 #include "ntfs_fs.h" 16 19 17 20 /* Convert little endian UTF-16 to NLS string. */ 18 - int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const struct le_str *uni, 21 + int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const __le16 *name, u32 len, 19 22 u8 *buf, int buf_len) 20 23 { 21 - int ret, uni_len, warn; 22 - const __le16 *ip; 24 + int ret, warn; 23 25 u8 *op; 24 - struct nls_table *nls = sbi->options.nls; 26 + struct nls_table *nls = sbi->options->nls; 25 27 26 28 static_assert(sizeof(wchar_t) == sizeof(__le16)); 27 29 28 30 if (!nls) { 29 31 /* UTF-16 -> UTF-8 */ 30 - ret = utf16s_to_utf8s((wchar_t *)uni->name, uni->len, 31 - UTF16_LITTLE_ENDIAN, buf, buf_len); 32 + ret = utf16s_to_utf8s(name, len, UTF16_LITTLE_ENDIAN, buf, 33 + buf_len); 32 34 buf[ret] = '\0'; 33 35 return ret; 34 36 } 35 37 36 - ip = uni->name; 37 38 op = buf; 38 - uni_len = uni->len; 39 39 warn = 0; 40 40 41 - while (uni_len--) { 41 + while (len--) { 42 42 u16 ec; 43 43 int charlen; 44 44 char dump[5]; ··· 46 52 break; 47 53 } 48 54 49 - ec = le16_to_cpu(*ip++); 55 + ec = le16_to_cpu(*name++); 50 56 charlen = nls->uni2char(ec, op, buf_len); 51 57 52 58 if (charlen > 0) { ··· 180 186 { 181 187 int ret, slen; 182 188 const u8 *end; 183 - struct nls_table *nls = sbi->options.nls; 189 + struct nls_table *nls = sbi->options->nls; 184 190 u16 *uname = uni->name; 185 191 186 192 static_assert(sizeof(wchar_t) == sizeof(u16)); ··· 295 301 return 0; 296 302 297 303 /* Skip meta files. Unless option to show metafiles is set. 
*/ 298 - if (!sbi->options.showmeta && ntfs_is_meta_file(sbi, ino)) 304 + if (!sbi->options->showmeta && ntfs_is_meta_file(sbi, ino)) 299 305 return 0; 300 306 301 - if (sbi->options.nohidden && (fname->dup.fa & FILE_ATTRIBUTE_HIDDEN)) 307 + if (sbi->options->nohidden && (fname->dup.fa & FILE_ATTRIBUTE_HIDDEN)) 302 308 return 0; 303 309 304 - name_len = ntfs_utf16_to_nls(sbi, (struct le_str *)&fname->name_len, 305 - name, PATH_MAX); 310 + name_len = ntfs_utf16_to_nls(sbi, fname->name, fname->name_len, name, 311 + PATH_MAX); 306 312 if (name_len <= 0) { 307 313 ntfs_warn(sbi->sb, "failed to convert name for inode %lx.", 308 314 ino);
+7 -5
fs/ntfs3/file.c
··· 12 12 #include <linux/compat.h> 13 13 #include <linux/falloc.h> 14 14 #include <linux/fiemap.h> 15 - #include <linux/nls.h> 16 15 17 16 #include "debug.h" 18 17 #include "ntfs.h" ··· 587 588 truncate_pagecache(inode, vbo_down); 588 589 589 590 if (!is_sparsed(ni) && !is_compressed(ni)) { 590 - /* Normal file. */ 591 - err = ntfs_zero_range(inode, vbo, end); 591 + /* 592 + * Normal file, can't make hole. 593 + * TODO: Try to find way to save info about hole. 594 + */ 595 + err = -EOPNOTSUPP; 592 596 goto out; 593 597 } 594 598 ··· 739 737 umode_t mode = inode->i_mode; 740 738 int err; 741 739 742 - if (sbi->options.no_acs_rules) { 740 + if (sbi->options->noacsrules) { 743 741 /* "No access rules" - Force any changes of time etc. */ 744 742 attr->ia_valid |= ATTR_FORCE; 745 743 /* and disable for editing some attributes. */ ··· 1187 1185 int err = 0; 1188 1186 1189 1187 /* If we are last writer on the inode, drop the block reservation. */ 1190 - if (sbi->options.prealloc && ((file->f_mode & FMODE_WRITE) && 1188 + if (sbi->options->prealloc && ((file->f_mode & FMODE_WRITE) && 1191 1189 atomic_read(&inode->i_writecount) == 1)) { 1192 1190 ni_lock(ni); 1193 1191 down_write(&ni->file.run_lock);
+41 -14
fs/ntfs3/frecord.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 8 #include <linux/fiemap.h> 11 9 #include <linux/fs.h> 12 - #include <linux/nls.h> 13 10 #include <linux/vmalloc.h> 14 11 15 12 #include "debug.h" ··· 705 708 continue; 706 709 707 710 mi = ni_find_mi(ni, ino_get(&le->ref)); 711 + if (!mi) { 712 + /* Should never happened, 'cause already checked. */ 713 + goto bad; 714 + } 708 715 709 716 attr = mi_find_attr(mi, NULL, le->type, le_name(le), 710 717 le->name_len, &le->id); 718 + if (!attr) { 719 + /* Should never happened, 'cause already checked. */ 720 + goto bad; 721 + } 711 722 asize = le32_to_cpu(attr->size); 712 723 713 724 /* Insert into primary record. */ 714 725 attr_ins = mi_insert_attr(&ni->mi, le->type, le_name(le), 715 726 le->name_len, asize, 716 727 le16_to_cpu(attr->name_off)); 717 - id = attr_ins->id; 728 + if (!attr_ins) { 729 + /* 730 + * Internal error. 731 + * Either no space in primary record (already checked). 732 + * Either tried to insert another 733 + * non indexed attribute (logic error). 734 + */ 735 + goto bad; 736 + } 718 737 719 738 /* Copy all except id. */ 739 + id = attr_ins->id; 720 740 memcpy(attr_ins, attr, asize); 721 741 attr_ins->id = id; 722 742 ··· 749 735 ni->attr_list.dirty = false; 750 736 751 737 return 0; 738 + bad: 739 + ntfs_inode_err(&ni->vfs_inode, "Internal error"); 740 + make_bad_inode(&ni->vfs_inode); 741 + return -EINVAL; 752 742 } 753 743 754 744 /* ··· 973 955 /* Only indexed attributes can share same record. */ 974 956 continue; 975 957 } 958 + 959 + /* 960 + * Do not try to insert this attribute 961 + * if there is no room in record. 962 + */ 963 + if (le32_to_cpu(mi->mrec->used) + asize > sbi->record_size) 964 + continue; 976 965 977 966 /* Try to insert attribute into this subrecord. 
*/ 978 967 attr = ni_ins_new_attr(ni, mi, le, type, name, name_len, asize, ··· 1476 1451 attr->res.flags = RESIDENT_FLAG_INDEXED; 1477 1452 1478 1453 /* is_attr_indexed(attr)) == true */ 1479 - le16_add_cpu(&ni->mi.mrec->hard_links, +1); 1454 + le16_add_cpu(&ni->mi.mrec->hard_links, 1); 1480 1455 ni->mi.dirty = true; 1481 1456 } 1482 1457 attr->res.res = 0; ··· 1631 1606 1632 1607 *le = NULL; 1633 1608 1634 - if (FILE_NAME_POSIX == name_type) 1609 + if (name_type == FILE_NAME_POSIX) 1635 1610 return NULL; 1636 1611 1637 1612 /* Enumerate all names. */ ··· 1731 1706 /* 1732 1707 * ni_parse_reparse 1733 1708 * 1734 - * Buffer is at least 24 bytes. 1709 + * buffer - memory for reparse buffer header 1735 1710 */ 1736 1711 enum REPARSE_SIGN ni_parse_reparse(struct ntfs_inode *ni, struct ATTRIB *attr, 1737 - void *buffer) 1712 + struct REPARSE_DATA_BUFFER *buffer) 1738 1713 { 1739 1714 const struct REPARSE_DATA_BUFFER *rp = NULL; 1740 1715 u8 bits; 1741 1716 u16 len; 1742 1717 typeof(rp->CompressReparseBuffer) *cmpr; 1743 - 1744 - static_assert(sizeof(struct REPARSE_DATA_BUFFER) <= 24); 1745 1718 1746 1719 /* Try to estimate reparse point. */ 1747 1720 if (!attr->non_res) { ··· 1825 1802 1826 1803 return REPARSE_NONE; 1827 1804 } 1805 + 1806 + if (buffer != rp) 1807 + memcpy(buffer, rp, sizeof(struct REPARSE_DATA_BUFFER)); 1828 1808 1829 1809 /* Looks like normal symlink. */ 1830 1810 return REPARSE_LINK; ··· 2932 2906 memcpy(Add2Ptr(attr, SIZEOF_RESIDENT), de + 1, de_key_size); 2933 2907 mi_get_ref(&ni->mi, &de->ref); 2934 2908 2935 - if (indx_insert_entry(&dir_ni->dir, dir_ni, de, sbi, NULL, 1)) { 2909 + if (indx_insert_entry(&dir_ni->dir, dir_ni, de, sbi, NULL, 1)) 2936 2910 return false; 2937 - } 2938 2911 } 2939 2912 2940 2913 return true; ··· 3102 3077 const struct EA_INFO *info; 3103 3078 3104 3079 info = resident_data_ex(attr, sizeof(struct EA_INFO)); 3105 - dup->ea_size = info->size_pack; 3080 + /* If ATTR_EA_INFO exists 'info' can't be NULL. 
*/ 3081 + if (info) 3082 + dup->ea_size = info->size_pack; 3106 3083 } 3107 3084 } 3108 3085 ··· 3232 3205 goto out; 3233 3206 } 3234 3207 3235 - err = al_update(ni); 3208 + err = al_update(ni, sync); 3236 3209 if (err) 3237 3210 goto out; 3238 3211 }
+4 -8
fs/ntfs3/fslog.c
··· 6 6 */ 7 7 8 8 #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 9 #include <linux/fs.h> 11 - #include <linux/hash.h> 12 - #include <linux/nls.h> 13 10 #include <linux/random.h> 14 - #include <linux/ratelimit.h> 15 11 #include <linux/slab.h> 16 12 17 13 #include "debug.h" ··· 2215 2219 2216 2220 err = ntfs_sb_write_run(log->ni->mi.sbi, 2217 2221 &log->ni->file.run, off, page, 2218 - log->page_size); 2222 + log->page_size, 0); 2219 2223 2220 2224 if (err) 2221 2225 goto out; ··· 3706 3710 3707 3711 if (a_dirty) { 3708 3712 attr = oa->attr; 3709 - err = ntfs_sb_write_run(sbi, oa->run1, vbo, buffer_le, bytes); 3713 + err = ntfs_sb_write_run(sbi, oa->run1, vbo, buffer_le, bytes, 0); 3710 3714 if (err) 3711 3715 goto out; 3712 3716 } ··· 5148 5152 5149 5153 ntfs_fix_pre_write(&rh->rhdr, log->page_size); 5150 5154 5151 - err = ntfs_sb_write_run(sbi, &ni->file.run, 0, rh, log->page_size); 5155 + err = ntfs_sb_write_run(sbi, &ni->file.run, 0, rh, log->page_size, 0); 5152 5156 if (!err) 5153 5157 err = ntfs_sb_write_run(sbi, &log->ni->file.run, log->page_size, 5154 - rh, log->page_size); 5158 + rh, log->page_size, 0); 5155 5159 5156 5160 kfree(rh); 5157 5161 if (err)
+37 -40
fs/ntfs3/fsntfs.c
··· 8 8 #include <linux/blkdev.h> 9 9 #include <linux/buffer_head.h> 10 10 #include <linux/fs.h> 11 - #include <linux/nls.h> 11 + #include <linux/kernel.h> 12 12 13 13 #include "debug.h" 14 14 #include "ntfs.h" ··· 358 358 enum ALLOCATE_OPT opt) 359 359 { 360 360 int err; 361 - CLST alen = 0; 361 + CLST alen; 362 362 struct super_block *sb = sbi->sb; 363 363 size_t alcn, zlen, zeroes, zlcn, zlen2, ztrim, new_zlen; 364 364 struct wnd_bitmap *wnd = &sbi->used.bitmap; ··· 370 370 if (!zlen) { 371 371 err = ntfs_refresh_zone(sbi); 372 372 if (err) 373 - goto out; 373 + goto up_write; 374 + 374 375 zlen = wnd_zone_len(wnd); 375 376 } 376 377 377 378 if (!zlen) { 378 379 ntfs_err(sbi->sb, "no free space to extend mft"); 379 - goto out; 380 + err = -ENOSPC; 381 + goto up_write; 380 382 } 381 383 382 384 lcn = wnd_zone_bit(wnd); 383 - alen = zlen > len ? len : zlen; 385 + alen = min_t(CLST, len, zlen); 384 386 385 387 wnd_zone_set(wnd, lcn + alen, zlen - alen); 386 388 387 389 err = wnd_set_used(wnd, lcn, alen); 388 - if (err) { 389 - up_write(&wnd->rw_lock); 390 - return err; 391 - } 390 + if (err) 391 + goto up_write; 392 + 392 393 alcn = lcn; 393 - goto out; 394 + goto space_found; 394 395 } 395 396 /* 396 397 * 'Cause cluster 0 is always used this value means that we should use ··· 405 404 406 405 alen = wnd_find(wnd, len, lcn, BITMAP_FIND_MARK_AS_USED, &alcn); 407 406 if (alen) 408 - goto out; 407 + goto space_found; 409 408 410 409 /* Try to use clusters from MftZone. */ 411 410 zlen = wnd_zone_len(wnd); 412 411 zeroes = wnd_zeroes(wnd); 413 412 414 413 /* Check too big request */ 415 - if (len > zeroes + zlen || zlen <= NTFS_MIN_MFT_ZONE) 416 - goto out; 414 + if (len > zeroes + zlen || zlen <= NTFS_MIN_MFT_ZONE) { 415 + err = -ENOSPC; 416 + goto up_write; 417 + } 417 418 418 419 /* How many clusters to cat from zone. */ 419 420 zlcn = wnd_zone_bit(wnd); 420 421 zlen2 = zlen >> 1; 421 - ztrim = len > zlen ? zlen : (len > zlen2 ? 
len : zlen2); 422 - new_zlen = zlen - ztrim; 423 - 424 - if (new_zlen < NTFS_MIN_MFT_ZONE) { 425 - new_zlen = NTFS_MIN_MFT_ZONE; 426 - if (new_zlen > zlen) 427 - new_zlen = zlen; 428 - } 422 + ztrim = clamp_val(len, zlen2, zlen); 423 + new_zlen = max_t(size_t, zlen - ztrim, NTFS_MIN_MFT_ZONE); 429 424 430 425 wnd_zone_set(wnd, zlcn, new_zlen); 431 426 432 427 /* Allocate continues clusters. */ 433 428 alen = wnd_find(wnd, len, 0, 434 429 BITMAP_FIND_MARK_AS_USED | BITMAP_FIND_FULL, &alcn); 435 - 436 - out: 437 - if (alen) { 438 - err = 0; 439 - *new_len = alen; 440 - *new_lcn = alcn; 441 - 442 - ntfs_unmap_meta(sb, alcn, alen); 443 - 444 - /* Set hint for next requests. */ 445 - if (!(opt & ALLOCATE_MFT)) 446 - sbi->used.next_free_lcn = alcn + alen; 447 - } else { 430 + if (!alen) { 448 431 err = -ENOSPC; 432 + goto up_write; 449 433 } 450 434 435 + space_found: 436 + err = 0; 437 + *new_len = alen; 438 + *new_lcn = alcn; 439 + 440 + ntfs_unmap_meta(sb, alcn, alen); 441 + 442 + /* Set hint for next requests. */ 443 + if (!(opt & ALLOCATE_MFT)) 444 + sbi->used.next_free_lcn = alcn + alen; 445 + up_write: 451 446 up_write(&wnd->rw_lock); 452 447 return err; 453 448 } ··· 1077 1080 } 1078 1081 1079 1082 int ntfs_sb_write_run(struct ntfs_sb_info *sbi, const struct runs_tree *run, 1080 - u64 vbo, const void *buf, size_t bytes) 1083 + u64 vbo, const void *buf, size_t bytes, int sync) 1081 1084 { 1082 1085 struct super_block *sb = sbi->sb; 1083 1086 u8 cluster_bits = sbi->cluster_bits; ··· 1096 1099 len = ((u64)clen << cluster_bits) - off; 1097 1100 1098 1101 for (;;) { 1099 - u32 op = len < bytes ? len : bytes; 1100 - int err = ntfs_sb_write(sb, lbo, op, buf, 0); 1102 + u32 op = min_t(u64, len, bytes); 1103 + int err = ntfs_sb_write(sb, lbo, op, buf, sync); 1101 1104 1102 1105 if (err) 1103 1106 return err; ··· 1297 1300 nb->off = off = lbo & (blocksize - 1); 1298 1301 1299 1302 for (;;) { 1300 - u32 len32 = len < bytes ? 
len : bytes; 1303 + u32 len32 = min_t(u64, len, bytes); 1301 1304 sector_t block = lbo >> sb->s_blocksize_bits; 1302 1305 1303 1306 do { ··· 2172 2175 2173 2176 /* Write main SDS bucket. */ 2174 2177 err = ntfs_sb_write_run(sbi, &ni->file.run, sbi->security.next_off, 2175 - d_security, aligned_sec_size); 2178 + d_security, aligned_sec_size, 0); 2176 2179 2177 2180 if (err) 2178 2181 goto out; ··· 2190 2193 2191 2194 /* Write copy SDS bucket. */ 2192 2195 err = ntfs_sb_write_run(sbi, &ni->file.run, mirr_off, d_security, 2193 - aligned_sec_size); 2196 + aligned_sec_size, 0); 2194 2197 if (err) 2195 2198 goto out; 2196 2199
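The `ztrim` rewrite in fsntfs.c replaces a nested ternary with `clamp_val(len, zlen2, zlen)`. The two forms agree whenever `zlen2 <= zlen`, which always holds here since `zlen2 = zlen >> 1`. A quick userspace check of that equivalence (`clamp_val` is a kernel macro, re-stated below):

```c
#include <assert.h>
#include <stddef.h>

/* clamp_val() re-stated for userspace. */
static size_t clamp_size(size_t v, size_t lo, size_t hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/* The ternary chain the patch removes. */
static size_t old_ztrim(size_t len, size_t zlen)
{
	size_t zlen2 = zlen >> 1;

	return len > zlen ? zlen : (len > zlen2 ? len : zlen2);
}
```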
+50 -116
fs/ntfs3/index.c
··· 8 8 #include <linux/blkdev.h> 9 9 #include <linux/buffer_head.h> 10 10 #include <linux/fs.h> 11 - #include <linux/nls.h> 11 + #include <linux/kernel.h> 12 12 13 13 #include "debug.h" 14 14 #include "ntfs.h" ··· 671 671 const struct INDEX_HDR *hdr, const void *key, 672 672 size_t key_len, const void *ctx, int *diff) 673 673 { 674 - struct NTFS_DE *e; 674 + struct NTFS_DE *e, *found = NULL; 675 675 NTFS_CMP_FUNC cmp = indx->cmp; 676 + int min_idx = 0, mid_idx, max_idx = 0; 677 + int diff2; 678 + int table_size = 8; 676 679 u32 e_size, e_key_len; 677 680 u32 end = le32_to_cpu(hdr->used); 678 681 u32 off = le32_to_cpu(hdr->de_off); 682 + u16 offs[128]; 679 683 680 - #ifdef NTFS3_INDEX_BINARY_SEARCH 681 - int max_idx = 0, fnd, min_idx; 682 - int nslots = 64; 683 - u16 *offs; 684 - 685 - if (end > 0x10000) 686 - goto next; 687 - 688 - offs = kmalloc(sizeof(u16) * nslots, GFP_NOFS); 689 - if (!offs) 690 - goto next; 691 - 692 - /* Use binary search algorithm. */ 693 - next1: 694 - if (off + sizeof(struct NTFS_DE) > end) { 695 - e = NULL; 696 - goto out1; 697 - } 698 - e = Add2Ptr(hdr, off); 699 - e_size = le16_to_cpu(e->size); 700 - 701 - if (e_size < sizeof(struct NTFS_DE) || off + e_size > end) { 702 - e = NULL; 703 - goto out1; 704 - } 705 - 706 - if (max_idx >= nslots) { 707 - u16 *ptr; 708 - int new_slots = ALIGN(2 * nslots, 8); 709 - 710 - ptr = kmalloc(sizeof(u16) * new_slots, GFP_NOFS); 711 - if (ptr) 712 - memcpy(ptr, offs, sizeof(u16) * max_idx); 713 - kfree(offs); 714 - offs = ptr; 715 - nslots = new_slots; 716 - if (!ptr) 717 - goto next; 718 - } 719 - 720 - /* Store entry table. */ 721 - offs[max_idx] = off; 722 - 723 - if (!de_is_last(e)) { 724 - off += e_size; 725 - max_idx += 1; 726 - goto next1; 727 - } 728 - 729 - /* 730 - * Table of pointers is created. 731 - * Use binary search to find entry that is <= to the search value. 
732 - */ 733 - fnd = -1; 734 - min_idx = 0; 735 - 736 - while (min_idx <= max_idx) { 737 - int mid_idx = min_idx + ((max_idx - min_idx) >> 1); 738 - int diff2; 739 - 740 - e = Add2Ptr(hdr, offs[mid_idx]); 741 - 742 - e_key_len = le16_to_cpu(e->key_size); 743 - 744 - diff2 = (*cmp)(key, key_len, e + 1, e_key_len, ctx); 745 - 746 - if (!diff2) { 747 - *diff = 0; 748 - goto out1; 749 - } 750 - 751 - if (diff2 < 0) { 752 - max_idx = mid_idx - 1; 753 - fnd = mid_idx; 754 - if (!fnd) 755 - break; 756 - } else { 757 - min_idx = mid_idx + 1; 758 - } 759 - } 760 - 761 - if (fnd == -1) { 762 - e = NULL; 763 - goto out1; 764 - } 765 - 766 - *diff = -1; 767 - e = Add2Ptr(hdr, offs[fnd]); 768 - 769 - out1: 770 - kfree(offs); 771 - 772 - return e; 773 - #endif 774 - 775 - next: 776 - /* 777 - * Entries index are sorted. 778 - * Enumerate all entries until we find entry 779 - * that is <= to the search value. 780 - */ 684 + fill_table: 781 685 if (off + sizeof(struct NTFS_DE) > end) 782 686 return NULL; 783 687 ··· 691 787 if (e_size < sizeof(struct NTFS_DE) || off + e_size > end) 692 788 return NULL; 693 789 694 - off += e_size; 790 + if (!de_is_last(e)) { 791 + offs[max_idx] = off; 792 + off += e_size; 695 793 794 + max_idx++; 795 + if (max_idx < table_size) 796 + goto fill_table; 797 + 798 + max_idx--; 799 + } 800 + 801 + binary_search: 696 802 e_key_len = le16_to_cpu(e->key_size); 697 803 698 - *diff = (*cmp)(key, key_len, e + 1, e_key_len, ctx); 699 - if (!*diff) 700 - return e; 804 + diff2 = (*cmp)(key, key_len, e + 1, e_key_len, ctx); 805 + if (diff2 > 0) { 806 + if (found) { 807 + min_idx = mid_idx + 1; 808 + } else { 809 + if (de_is_last(e)) 810 + return NULL; 701 811 702 - if (*diff <= 0) 703 - return e; 812 + max_idx = 0; 813 + table_size = min(table_size * 2, 814 + (int)ARRAY_SIZE(offs)); 815 + goto fill_table; 816 + } 817 + } else if (diff2 < 0) { 818 + if (found) 819 + max_idx = mid_idx - 1; 820 + else 821 + max_idx--; 704 822 705 - if (de_is_last(e)) { 706 - *diff 
= 1; 823 + found = e; 824 + } else { 825 + *diff = 0; 707 826 return e; 708 827 } 709 - goto next; 828 + 829 + if (min_idx > max_idx) { 830 + *diff = -1; 831 + return found; 832 + } 833 + 834 + mid_idx = (min_idx + max_idx) >> 1; 835 + e = Add2Ptr(hdr, offs[mid_idx]); 836 + 837 + goto binary_search; 710 838 } 711 839 712 840 /* ··· 1072 1136 if (!e) 1073 1137 return -EINVAL; 1074 1138 1075 - if (fnd) 1076 - fnd->root_de = e; 1077 - 1139 + fnd->root_de = e; 1078 1140 err = 0; 1079 1141 1080 1142 for (;;) { ··· 1335 1401 static int indx_create_allocate(struct ntfs_index *indx, struct ntfs_inode *ni, 1336 1402 CLST *vbn) 1337 1403 { 1338 - int err = -ENOMEM; 1404 + int err; 1339 1405 struct ntfs_sb_info *sbi = ni->mi.sbi; 1340 1406 struct ATTRIB *bitmap; 1341 1407 struct ATTRIB *alloc;
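The index.c rewrite above drops the kmalloc'd offset array and the NTFS3_INDEX_BINARY_SEARCH ifdef in favor of a fixed on-stack table that is refilled and grown up to 128 slots. Stripped of the on-disk NTFS_DE layout and table refill, the core lookup is a "last entry <= key" binary search; a self-contained sketch over plain ints (hypothetical names, not the driver's code):

```c
#include <assert.h>

/* Return the index of target if present (*diff = 0), else the index
 * of the last key below it (*diff = -1), else -1 -- mirroring the
 * found/min_idx/max_idx bookkeeping in the new hdr_find_e(). */
static int find_le(const int *keys, int n, int target, int *diff)
{
	int lo = 0, hi = n - 1, found = -1;

	while (lo <= hi) {
		int mid = lo + ((hi - lo) >> 1);

		if (keys[mid] == target) {
			*diff = 0;
			return mid;
		}
		if (keys[mid] < target) {
			found = mid;	/* candidate: keys[mid] < target */
			lo = mid + 1;
		} else {
			hi = mid - 1;
		}
	}
	*diff = -1;
	return found;
}
```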
+81 -78
fs/ntfs3/inode.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 8 #include <linux/buffer_head.h> 10 9 #include <linux/fs.h> 11 - #include <linux/iversion.h> 12 10 #include <linux/mpage.h> 13 11 #include <linux/namei.h> 14 12 #include <linux/nls.h> ··· 47 49 48 50 inode->i_op = NULL; 49 51 /* Setup 'uid' and 'gid' */ 50 - inode->i_uid = sbi->options.fs_uid; 51 - inode->i_gid = sbi->options.fs_gid; 52 + inode->i_uid = sbi->options->fs_uid; 53 + inode->i_gid = sbi->options->fs_gid; 52 54 53 55 err = mi_init(&ni->mi, sbi, ino); 54 56 if (err) ··· 222 224 if (!attr->non_res) { 223 225 ni->i_valid = inode->i_size = rsize; 224 226 inode_set_bytes(inode, rsize); 225 - t32 = asize; 226 - } else { 227 - t32 = le16_to_cpu(attr->nres.run_off); 228 227 } 229 228 230 - mode = S_IFREG | (0777 & sbi->options.fs_fmask_inv); 229 + mode = S_IFREG | (0777 & sbi->options->fs_fmask_inv); 231 230 232 231 if (!attr->non_res) { 233 232 ni->ni_flags |= NI_FLAG_RESIDENT; ··· 267 272 goto out; 268 273 269 274 mode = sb->s_root 270 - ? (S_IFDIR | (0777 & sbi->options.fs_dmask_inv)) 275 + ? (S_IFDIR | (0777 & sbi->options->fs_dmask_inv)) 271 276 : (S_IFDIR | 0777); 272 277 goto next_attr; 273 278 ··· 310 315 rp_fa = ni_parse_reparse(ni, attr, &rp); 311 316 switch (rp_fa) { 312 317 case REPARSE_LINK: 313 - if (!attr->non_res) { 314 - inode->i_size = rsize; 315 - inode_set_bytes(inode, rsize); 316 - t32 = asize; 317 - } else { 318 - inode->i_size = 319 - le64_to_cpu(attr->nres.data_size); 320 - t32 = le16_to_cpu(attr->nres.run_off); 321 - } 318 + /* 319 + * Normal symlink. 320 + * Assume one unicode symbol == one utf8. 321 + */ 322 + inode->i_size = le16_to_cpu(rp.SymbolicLinkReparseBuffer 323 + .PrintNameLength) / 324 + sizeof(u16); 322 325 323 - /* Looks like normal symlink. */ 324 326 ni->i_valid = inode->i_size; 325 327 326 328 /* Clear directory bit. 
*/ ··· 414 422 ni->std_fa &= ~FILE_ATTRIBUTE_DIRECTORY; 415 423 inode->i_op = &ntfs_link_inode_operations; 416 424 inode->i_fop = NULL; 417 - inode_nohighmem(inode); // ?? 425 + inode_nohighmem(inode); 418 426 } else if (S_ISREG(mode)) { 419 427 ni->std_fa &= ~FILE_ATTRIBUTE_DIRECTORY; 420 428 inode->i_op = &ntfs_file_inode_operations; ··· 435 443 goto out; 436 444 } 437 445 438 - if ((sbi->options.sys_immutable && 446 + if ((sbi->options->sys_immutable && 439 447 (std5->fa & FILE_ATTRIBUTE_SYSTEM)) && 440 448 !S_ISFIFO(mode) && !S_ISSOCK(mode) && !S_ISLNK(mode)) { 441 449 inode->i_flags |= S_IMMUTABLE; ··· 1192 1200 struct REPARSE_DATA_BUFFER *rp = NULL; 1193 1201 bool rp_inserted = false; 1194 1202 1203 + ni_lock_dir(dir_ni); 1204 + 1195 1205 dir_root = indx_get_root(&dir_ni->dir, dir_ni, NULL, NULL); 1196 - if (!dir_root) 1197 - return ERR_PTR(-EINVAL); 1206 + if (!dir_root) { 1207 + err = -EINVAL; 1208 + goto out1; 1209 + } 1198 1210 1199 1211 if (S_ISDIR(mode)) { 1200 1212 /* Use parent's directory attributes. */ ··· 1240 1244 * } 1241 1245 */ 1242 1246 } else if (S_ISREG(mode)) { 1243 - if (sbi->options.sparse) { 1247 + if (sbi->options->sparse) { 1244 1248 /* Sparsed regular file, cause option 'sparse'. */ 1245 1249 fa = FILE_ATTRIBUTE_SPARSE_FILE | 1246 1250 FILE_ATTRIBUTE_ARCHIVE; ··· 1482 1486 asize = ALIGN(SIZEOF_RESIDENT + nsize, 8); 1483 1487 t16 = PtrOffset(rec, attr); 1484 1488 1485 - /* 0x78 - the size of EA + EAINFO to store WSL */ 1489 + /* 1490 + * Below function 'ntfs_save_wsl_perm' requires 0x78 bytes. 1491 + * It is good idea to keep extened attributes resident. 
1492 + */ 1486 1493 if (asize + t16 + 0x78 + 8 > sbi->record_size) { 1487 1494 CLST alen; 1488 1495 CLST clst = bytes_to_cluster(sbi, nsize); ··· 1520 1521 } 1521 1522 1522 1523 asize = SIZEOF_NONRESIDENT + ALIGN(err, 8); 1523 - inode->i_size = nsize; 1524 1524 } else { 1525 1525 attr->res.data_off = SIZEOF_RESIDENT_LE; 1526 1526 attr->res.data_size = cpu_to_le32(nsize); 1527 1527 memcpy(Add2Ptr(attr, SIZEOF_RESIDENT), rp, nsize); 1528 - inode->i_size = nsize; 1529 1528 nsize = 0; 1530 1529 } 1530 + /* Size of symlink equals the length of input string. */ 1531 + inode->i_size = size; 1531 1532 1532 1533 attr->size = cpu_to_le32(asize); 1533 1534 ··· 1550 1551 if (err) 1551 1552 goto out6; 1552 1553 1554 + /* Unlock parent directory before ntfs_init_acl. */ 1555 + ni_unlock(dir_ni); 1556 + 1553 1557 inode->i_generation = le16_to_cpu(rec->seq); 1554 1558 1555 1559 dir->i_mtime = dir->i_ctime = inode->i_atime; ··· 1564 1562 inode->i_op = &ntfs_link_inode_operations; 1565 1563 inode->i_fop = NULL; 1566 1564 inode->i_mapping->a_ops = &ntfs_aops; 1565 + inode->i_size = size; 1566 + inode_nohighmem(inode); 1567 1567 } else if (S_ISREG(mode)) { 1568 1568 inode->i_op = &ntfs_file_inode_operations; 1569 1569 inode->i_fop = &ntfs_file_operations; ··· 1581 1577 if (!S_ISLNK(mode) && (sb->s_flags & SB_POSIXACL)) { 1582 1578 err = ntfs_init_acl(mnt_userns, inode, dir); 1583 1579 if (err) 1584 - goto out6; 1580 + goto out7; 1585 1581 } else 1586 1582 #endif 1587 1583 { ··· 1590 1586 1591 1587 /* Write non resident data. */ 1592 1588 if (nsize) { 1593 - err = ntfs_sb_write_run(sbi, &ni->file.run, 0, rp, nsize); 1589 + err = ntfs_sb_write_run(sbi, &ni->file.run, 0, rp, nsize, 0); 1594 1590 if (err) 1595 1591 goto out7; 1596 1592 } ··· 1611 1607 out7: 1612 1608 1613 1609 /* Undo 'indx_insert_entry'. 
*/ 1610 + ni_lock_dir(dir_ni); 1614 1611 indx_delete_entry(&dir_ni->dir, dir_ni, new_de + 1, 1615 1612 le16_to_cpu(new_de->key_size), sbi); 1613 + /* ni_unlock(dir_ni); will be called later. */ 1616 1614 out6: 1617 1615 if (rp_inserted) 1618 1616 ntfs_remove_reparse(sbi, IO_REPARSE_TAG_SYMLINK, &new_de->ref); ··· 1638 1632 kfree(rp); 1639 1633 1640 1634 out1: 1641 - if (err) 1635 + if (err) { 1636 + ni_unlock(dir_ni); 1642 1637 return ERR_PTR(err); 1638 + } 1643 1639 1644 1640 unlock_new_inode(inode); 1645 1641 ··· 1762 1754 static noinline int ntfs_readlink_hlp(struct inode *inode, char *buffer, 1763 1755 int buflen) 1764 1756 { 1765 - int i, err = 0; 1757 + int i, err = -EINVAL; 1766 1758 struct ntfs_inode *ni = ntfs_i(inode); 1767 1759 struct super_block *sb = inode->i_sb; 1768 1760 struct ntfs_sb_info *sbi = sb->s_fs_info; 1769 - u64 i_size = inode->i_size; 1770 - u16 nlen = 0; 1761 + u64 size; 1762 + u16 ulen = 0; 1771 1763 void *to_free = NULL; 1772 1764 struct REPARSE_DATA_BUFFER *rp; 1773 - struct le_str *uni; 1765 + const __le16 *uname; 1774 1766 struct ATTRIB *attr; 1775 1767 1776 1768 /* Reparse data present. Try to parse it. */ ··· 1779 1771 1780 1772 *buffer = 0; 1781 1773 1782 - /* Read into temporal buffer. 
*/ 1783 - if (i_size > sbi->reparse.max_size || i_size <= sizeof(u32)) { 1784 - err = -EINVAL; 1785 - goto out; 1786 - } 1787 - 1788 1774 attr = ni_find_attr(ni, NULL, NULL, ATTR_REPARSE, NULL, 0, NULL, NULL); 1789 - if (!attr) { 1790 - err = -EINVAL; 1775 + if (!attr) 1791 1776 goto out; 1792 - } 1793 1777 1794 1778 if (!attr->non_res) { 1795 - rp = resident_data_ex(attr, i_size); 1796 - if (!rp) { 1797 - err = -EINVAL; 1779 + rp = resident_data_ex(attr, sizeof(struct REPARSE_DATA_BUFFER)); 1780 + if (!rp) 1798 1781 goto out; 1799 - } 1782 + size = le32_to_cpu(attr->res.data_size); 1800 1783 } else { 1801 - rp = kmalloc(i_size, GFP_NOFS); 1784 + size = le64_to_cpu(attr->nres.data_size); 1785 + rp = NULL; 1786 + } 1787 + 1788 + if (size > sbi->reparse.max_size || size <= sizeof(u32)) 1789 + goto out; 1790 + 1791 + if (!rp) { 1792 + rp = kmalloc(size, GFP_NOFS); 1802 1793 if (!rp) { 1803 1794 err = -ENOMEM; 1804 1795 goto out; 1805 1796 } 1806 1797 to_free = rp; 1807 - err = ntfs_read_run_nb(sbi, &ni->file.run, 0, rp, i_size, NULL); 1798 + /* Read into temporal buffer. */ 1799 + err = ntfs_read_run_nb(sbi, &ni->file.run, 0, rp, size, NULL); 1808 1800 if (err) 1809 1801 goto out; 1810 1802 } 1811 - 1812 - err = -EINVAL; 1813 1803 1814 1804 /* Microsoft Tag. */ 1815 1805 switch (rp->ReparseTag) { 1816 1806 case IO_REPARSE_TAG_MOUNT_POINT: 1817 1807 /* Mount points and junctions. */ 1818 1808 /* Can we use 'Rp->MountPointReparseBuffer.PrintNameLength'? 
*/ 1819 - if (i_size <= offsetof(struct REPARSE_DATA_BUFFER, 1820 - MountPointReparseBuffer.PathBuffer)) 1809 + if (size <= offsetof(struct REPARSE_DATA_BUFFER, 1810 + MountPointReparseBuffer.PathBuffer)) 1821 1811 goto out; 1822 - uni = Add2Ptr(rp, 1823 - offsetof(struct REPARSE_DATA_BUFFER, 1824 - MountPointReparseBuffer.PathBuffer) + 1825 - le16_to_cpu(rp->MountPointReparseBuffer 1826 - .PrintNameOffset) - 1827 - 2); 1828 - nlen = le16_to_cpu(rp->MountPointReparseBuffer.PrintNameLength); 1812 + uname = Add2Ptr(rp, 1813 + offsetof(struct REPARSE_DATA_BUFFER, 1814 + MountPointReparseBuffer.PathBuffer) + 1815 + le16_to_cpu(rp->MountPointReparseBuffer 1816 + .PrintNameOffset)); 1817 + ulen = le16_to_cpu(rp->MountPointReparseBuffer.PrintNameLength); 1829 1818 break; 1830 1819 1831 1820 case IO_REPARSE_TAG_SYMLINK: 1832 1821 /* FolderSymbolicLink */ 1833 1822 /* Can we use 'Rp->SymbolicLinkReparseBuffer.PrintNameLength'? */ 1834 - if (i_size <= offsetof(struct REPARSE_DATA_BUFFER, 1835 - SymbolicLinkReparseBuffer.PathBuffer)) 1823 + if (size <= offsetof(struct REPARSE_DATA_BUFFER, 1824 + SymbolicLinkReparseBuffer.PathBuffer)) 1836 1825 goto out; 1837 - uni = Add2Ptr(rp, 1838 - offsetof(struct REPARSE_DATA_BUFFER, 1839 - SymbolicLinkReparseBuffer.PathBuffer) + 1840 - le16_to_cpu(rp->SymbolicLinkReparseBuffer 1841 - .PrintNameOffset) - 1842 - 2); 1843 - nlen = le16_to_cpu( 1826 + uname = Add2Ptr( 1827 + rp, offsetof(struct REPARSE_DATA_BUFFER, 1828 + SymbolicLinkReparseBuffer.PathBuffer) + 1829 + le16_to_cpu(rp->SymbolicLinkReparseBuffer 1830 + .PrintNameOffset)); 1831 + ulen = le16_to_cpu( 1844 1832 rp->SymbolicLinkReparseBuffer.PrintNameLength); 1845 1833 break; 1846 1834 ··· 1868 1864 goto out; 1869 1865 } 1870 1866 if (!IsReparseTagNameSurrogate(rp->ReparseTag) || 1871 - i_size <= sizeof(struct REPARSE_POINT)) { 1867 + size <= sizeof(struct REPARSE_POINT)) { 1872 1868 goto out; 1873 1869 } 1874 1870 1875 1871 /* Users tag. 
*/ 1876 - uni = Add2Ptr(rp, sizeof(struct REPARSE_POINT) - 2); 1877 - nlen = le16_to_cpu(rp->ReparseDataLength) - 1872 + uname = Add2Ptr(rp, sizeof(struct REPARSE_POINT)); 1873 + ulen = le16_to_cpu(rp->ReparseDataLength) - 1878 1874 sizeof(struct REPARSE_POINT); 1879 1875 } 1880 1876 1881 1877 /* Convert nlen from bytes to UNICODE chars. */ 1882 - nlen >>= 1; 1878 + ulen >>= 1; 1883 1879 1884 1880 /* Check that name is available. */ 1885 - if (!nlen || &uni->name[nlen] > (__le16 *)Add2Ptr(rp, i_size)) 1881 + if (!ulen || uname + ulen > (__le16 *)Add2Ptr(rp, size)) 1886 1882 goto out; 1887 1883 1888 1884 /* If name is already zero terminated then truncate it now. */ 1889 - if (!uni->name[nlen - 1]) 1890 - nlen -= 1; 1891 - uni->len = nlen; 1885 + if (!uname[ulen - 1]) 1886 + ulen -= 1; 1892 1887 1893 - err = ntfs_utf16_to_nls(sbi, uni, buffer, buflen); 1888 + err = ntfs_utf16_to_nls(sbi, uname, ulen, buffer, buflen); 1894 1889 1895 1890 if (err < 0) 1896 1891 goto out;
+5
fs/ntfs3/lib/decompress_common.h
··· 5 5 * Copyright (C) 2015 Eric Biggers 6 6 */ 7 7 8 + #ifndef _LINUX_NTFS3_LIB_DECOMPRESS_COMMON_H 9 + #define _LINUX_NTFS3_LIB_DECOMPRESS_COMMON_H 10 + 8 11 #include <linux/string.h> 9 12 #include <linux/compiler.h> 10 13 #include <linux/types.h> ··· 339 336 340 337 return dst; 341 338 } 339 + 340 + #endif /* _LINUX_NTFS3_LIB_DECOMPRESS_COMMON_H */
+6
fs/ntfs3/lib/lib.h
··· 7 7 * - linux kernel code style 8 8 */ 9 9 10 + #ifndef _LINUX_NTFS3_LIB_LIB_H 11 + #define _LINUX_NTFS3_LIB_LIB_H 12 + 13 + #include <linux/types.h> 10 14 11 15 /* globals from xpress_decompress.c */ 12 16 struct xpress_decompressor *xpress_allocate_decompressor(void); ··· 28 24 const void *__restrict compressed_data, 29 25 size_t compressed_size, void *__restrict uncompressed_data, 30 26 size_t uncompressed_size); 27 + 28 + #endif /* _LINUX_NTFS3_LIB_LIB_H */
+6 -6
fs/ntfs3/lznt.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 - #include <linux/fs.h> 11 - #include <linux/nls.h> 8 + #include <linux/kernel.h> 9 + #include <linux/slab.h> 10 + #include <linux/stddef.h> 11 + #include <linux/string.h> 12 + #include <linux/types.h> 12 13 13 14 #include "debug.h" 14 - #include "ntfs.h" 15 15 #include "ntfs_fs.h" 16 16 17 17 // clang-format off ··· 292 292 /* 293 293 * get_lznt_ctx 294 294 * @level: 0 - Standard compression. 295 - * !0 - Best compression, requires a lot of cpu. 295 + * !0 - Best compression, requires a lot of cpu. 296 296 */ 297 297 struct lznt *get_lznt_ctx(int level) 298 298 {
-24
fs/ntfs3/namei.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 8 #include <linux/fs.h> 11 - #include <linux/iversion.h> 12 - #include <linux/namei.h> 13 9 #include <linux/nls.h> 14 10 15 11 #include "debug.h" ··· 95 99 static int ntfs_create(struct user_namespace *mnt_userns, struct inode *dir, 96 100 struct dentry *dentry, umode_t mode, bool excl) 97 101 { 98 - struct ntfs_inode *ni = ntfs_i(dir); 99 102 struct inode *inode; 100 - 101 - ni_lock_dir(ni); 102 103 103 104 inode = ntfs_create_inode(mnt_userns, dir, dentry, NULL, S_IFREG | mode, 104 105 0, NULL, 0, NULL); 105 - 106 - ni_unlock(ni); 107 106 108 107 return IS_ERR(inode) ? PTR_ERR(inode) : 0; 109 108 } ··· 111 120 static int ntfs_mknod(struct user_namespace *mnt_userns, struct inode *dir, 112 121 struct dentry *dentry, umode_t mode, dev_t rdev) 113 122 { 114 - struct ntfs_inode *ni = ntfs_i(dir); 115 123 struct inode *inode; 116 - 117 - ni_lock_dir(ni); 118 124 119 125 inode = ntfs_create_inode(mnt_userns, dir, dentry, NULL, mode, rdev, 120 126 NULL, 0, NULL); 121 - 122 - ni_unlock(ni); 123 127 124 128 return IS_ERR(inode) ? PTR_ERR(inode) : 0; 125 129 } ··· 186 200 { 187 201 u32 size = strlen(symname); 188 202 struct inode *inode; 189 - struct ntfs_inode *ni = ntfs_i(dir); 190 - 191 - ni_lock_dir(ni); 192 203 193 204 inode = ntfs_create_inode(mnt_userns, dir, dentry, NULL, S_IFLNK | 0777, 194 205 0, symname, size, NULL); 195 - 196 - ni_unlock(ni); 197 206 198 207 return IS_ERR(inode) ? PTR_ERR(inode) : 0; 199 208 } ··· 200 219 struct dentry *dentry, umode_t mode) 201 220 { 202 221 struct inode *inode; 203 - struct ntfs_inode *ni = ntfs_i(dir); 204 - 205 - ni_lock_dir(ni); 206 222 207 223 inode = ntfs_create_inode(mnt_userns, dir, dentry, NULL, S_IFDIR | mode, 208 224 0, NULL, 0, NULL); 209 - 210 - ni_unlock(ni); 211 225 212 226 return IS_ERR(inode) ? PTR_ERR(inode) : 0; 213 227 }
+14 -6
fs/ntfs3/ntfs.h
··· 10 10 #ifndef _LINUX_NTFS3_NTFS_H 11 11 #define _LINUX_NTFS3_NTFS_H 12 12 13 - /* TODO: Check 4K MFT record and 512 bytes cluster. */ 13 + #include <linux/blkdev.h> 14 + #include <linux/build_bug.h> 15 + #include <linux/kernel.h> 16 + #include <linux/stddef.h> 17 + #include <linux/string.h> 18 + #include <linux/types.h> 14 19 15 - /* Activate this define to use binary search in indexes. */ 16 - #define NTFS3_INDEX_BINARY_SEARCH 20 + #include "debug.h" 21 + 22 + /* TODO: Check 4K MFT record and 512 bytes cluster. */ 17 23 18 24 /* Check each run for marked clusters. */ 19 25 #define NTFS3_CHECK_FREE_CLST 20 26 21 27 #define NTFS_NAME_LEN 255 22 28 23 - /* ntfs.sys used 500 maximum links on-disk struct allows up to 0xffff. */ 24 - #define NTFS_LINK_MAX 0x400 25 - //#define NTFS_LINK_MAX 0xffff 29 + /* 30 + * ntfs.sys used 500 maximum links on-disk struct allows up to 0xffff. 31 + * xfstest generic/041 creates 3003 hardlinks. 32 + */ 33 + #define NTFS_LINK_MAX 4000 26 34 27 35 /* 28 36 * Activate to use 64 bit clusters instead of 32 bits in ntfs.sys.
+47 -20
fs/ntfs3/ntfs_fs.h
··· 9 9 #ifndef _LINUX_NTFS3_NTFS_FS_H 10 10 #define _LINUX_NTFS3_NTFS_FS_H 11 11 12 + #include <linux/blkdev.h> 13 + #include <linux/buffer_head.h> 14 + #include <linux/cleancache.h> 15 + #include <linux/fs.h> 16 + #include <linux/highmem.h> 17 + #include <linux/kernel.h> 18 + #include <linux/mm.h> 19 + #include <linux/mutex.h> 20 + #include <linux/page-flags.h> 21 + #include <linux/pagemap.h> 22 + #include <linux/rbtree.h> 23 + #include <linux/rwsem.h> 24 + #include <linux/slab.h> 25 + #include <linux/string.h> 26 + #include <linux/time64.h> 27 + #include <linux/types.h> 28 + #include <linux/uidgid.h> 29 + #include <asm/div64.h> 30 + #include <asm/page.h> 31 + 32 + #include "debug.h" 33 + #include "ntfs.h" 34 + 35 + struct dentry; 36 + struct fiemap_extent_info; 37 + struct user_namespace; 38 + struct page; 39 + struct writeback_control; 40 + enum utf16_endian; 41 + 42 + 12 43 #define MINUS_ONE_T ((size_t)(-1)) 13 44 /* Biggest MFT / smallest cluster */ 14 45 #define MAXIMUM_BYTES_PER_MFT 4096 ··· 83 52 // clang-format on 84 53 85 54 struct ntfs_mount_options { 55 + char *nls_name; 86 56 struct nls_table *nls; 87 57 88 58 kuid_t fs_uid; ··· 91 59 u16 fs_fmask_inv; 92 60 u16 fs_dmask_inv; 93 61 94 - unsigned uid : 1, /* uid was set. */ 95 - gid : 1, /* gid was set. */ 96 - fmask : 1, /* fmask was set. */ 97 - dmask : 1, /* dmask was set. */ 98 - sys_immutable : 1, /* Immutable system files. */ 99 - discard : 1, /* Issue discard requests on deletions. */ 100 - sparse : 1, /* Create sparse files. */ 101 - showmeta : 1, /* Show meta files. */ 102 - nohidden : 1, /* Do not show hidden files. */ 103 - force : 1, /* Rw mount dirty volume. */ 104 - no_acs_rules : 1, /*Exclude acs rules. */ 105 - prealloc : 1 /* Preallocate space when file is growing. */ 106 - ; 62 + unsigned fmask : 1; /* fmask was set. */ 63 + unsigned dmask : 1; /*dmask was set. */ 64 + unsigned sys_immutable : 1; /* Immutable system files. 
*/ 65 + unsigned discard : 1; /* Issue discard requests on deletions. */ 66 + unsigned sparse : 1; /* Create sparse files. */ 67 + unsigned showmeta : 1; /* Show meta files. */ 68 + unsigned nohidden : 1; /* Do not show hidden files. */ 69 + unsigned force : 1; /* RW mount dirty volume. */ 70 + unsigned noacsrules : 1; /* Exclude acs rules. */ 71 + unsigned prealloc : 1; /* Preallocate space when file is growing. */ 107 72 }; 108 73 109 74 /* Special value to unpack and deallocate. */ ··· 211 182 u32 blocks_per_cluster; // cluster_size / sb->s_blocksize 212 183 213 184 u32 record_size; 214 - u32 sector_size; 215 185 u32 index_size; 216 186 217 - u8 sector_bits; 218 187 u8 cluster_bits; 219 188 u8 record_bits; 220 189 ··· 306 279 #endif 307 280 } compress; 308 281 309 - struct ntfs_mount_options options; 282 + struct ntfs_mount_options *options; 310 283 struct ratelimit_state msg_ratelimit; 311 284 }; 312 285 ··· 463 436 bool al_delete_le(struct ntfs_inode *ni, enum ATTR_TYPE type, CLST vcn, 464 437 const __le16 *name, size_t name_len, 465 438 const struct MFT_REF *ref); 466 - int al_update(struct ntfs_inode *ni); 439 + int al_update(struct ntfs_inode *ni, int sync); 467 440 static inline size_t al_aligned(size_t size) 468 441 { 469 442 return (size + 1023) & ~(size_t)1023; ··· 475 448 size_t get_set_bits_ex(const ulong *map, size_t bit, size_t nbits); 476 449 477 450 /* Globals from dir.c */ 478 - int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const struct le_str *uni, 451 + int ntfs_utf16_to_nls(struct ntfs_sb_info *sbi, const __le16 *name, u32 len, 479 452 u8 *buf, int buf_len); 480 453 int ntfs_nls_to_utf16(struct ntfs_sb_info *sbi, const u8 *name, u32 name_len, 481 454 struct cpu_str *uni, u32 max_ulen, ··· 547 520 struct ATTR_LIST_ENTRY **entry); 548 521 int ni_new_attr_flags(struct ntfs_inode *ni, enum FILE_ATTRIBUTE new_fa); 549 522 enum REPARSE_SIGN ni_parse_reparse(struct ntfs_inode *ni, struct ATTRIB *attr, 550 - void *buffer); 523 + struct 
REPARSE_DATA_BUFFER *buffer); 551 524 int ni_write_inode(struct inode *inode, int sync, const char *hint); 552 525 #define _ni_write_inode(i, w) ni_write_inode(i, w, __func__) 553 526 int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo, ··· 604 577 int ntfs_sb_write(struct super_block *sb, u64 lbo, size_t bytes, 605 578 const void *buffer, int wait); 606 579 int ntfs_sb_write_run(struct ntfs_sb_info *sbi, const struct runs_tree *run, 607 - u64 vbo, const void *buf, size_t bytes); 580 + u64 vbo, const void *buf, size_t bytes, int sync); 608 581 struct buffer_head *ntfs_bread_run(struct ntfs_sb_info *sbi, 609 582 const struct runs_tree *run, u64 vbo); 610 583 int ntfs_read_run_nb(struct ntfs_sb_info *sbi, const struct runs_tree *run,
-3
fs/ntfs3/record.c
··· 5 5 * 6 6 */ 7 7 8 - #include <linux/blkdev.h> 9 - #include <linux/buffer_head.h> 10 8 #include <linux/fs.h> 11 - #include <linux/nls.h> 12 9 13 10 #include "debug.h" 14 11 #include "ntfs.h"
-2
fs/ntfs3/run.c
··· 7 7 */ 8 8 9 9 #include <linux/blkdev.h> 10 - #include <linux/buffer_head.h> 11 10 #include <linux/fs.h> 12 11 #include <linux/log2.h> 13 - #include <linux/nls.h> 14 12 15 13 #include "debug.h" 16 14 #include "ntfs.h"
+323 -328
fs/ntfs3/super.c
··· 23 23 * 24 24 */ 25 25 26 - #include <linux/backing-dev.h> 27 26 #include <linux/blkdev.h> 28 27 #include <linux/buffer_head.h> 29 28 #include <linux/exportfs.h> 30 29 #include <linux/fs.h> 31 - #include <linux/iversion.h> 30 + #include <linux/fs_context.h> 31 + #include <linux/fs_parser.h> 32 32 #include <linux/log2.h> 33 33 #include <linux/module.h> 34 34 #include <linux/nls.h> 35 - #include <linux/parser.h> 36 35 #include <linux/seq_file.h> 37 36 #include <linux/statfs.h> 38 37 ··· 204 205 return ret; 205 206 } 206 207 207 - static inline void clear_mount_options(struct ntfs_mount_options *options) 208 + static inline void put_mount_options(struct ntfs_mount_options *options) 208 209 { 210 + kfree(options->nls_name); 209 211 unload_nls(options->nls); 212 + kfree(options); 210 213 } 211 214 212 215 enum Opt { ··· 224 223 Opt_nohidden, 225 224 Opt_showmeta, 226 225 Opt_acl, 227 - Opt_noatime, 228 - Opt_nls, 226 + Opt_iocharset, 229 227 Opt_prealloc, 230 - Opt_no_acs_rules, 228 + Opt_noacsrules, 231 229 Opt_err, 232 230 }; 233 231 234 - static const match_table_t ntfs_tokens = { 235 - { Opt_uid, "uid=%u" }, 236 - { Opt_gid, "gid=%u" }, 237 - { Opt_umask, "umask=%o" }, 238 - { Opt_dmask, "dmask=%o" }, 239 - { Opt_fmask, "fmask=%o" }, 240 - { Opt_immutable, "sys_immutable" }, 241 - { Opt_discard, "discard" }, 242 - { Opt_force, "force" }, 243 - { Opt_sparse, "sparse" }, 244 - { Opt_nohidden, "nohidden" }, 245 - { Opt_acl, "acl" }, 246 - { Opt_noatime, "noatime" }, 247 - { Opt_showmeta, "showmeta" }, 248 - { Opt_nls, "nls=%s" }, 249 - { Opt_prealloc, "prealloc" }, 250 - { Opt_no_acs_rules, "no_acs_rules" }, 251 - { Opt_err, NULL }, 232 + static const struct fs_parameter_spec ntfs_fs_parameters[] = { 233 + fsparam_u32("uid", Opt_uid), 234 + fsparam_u32("gid", Opt_gid), 235 + fsparam_u32oct("umask", Opt_umask), 236 + fsparam_u32oct("dmask", Opt_dmask), 237 + fsparam_u32oct("fmask", Opt_fmask), 238 + fsparam_flag_no("sys_immutable", Opt_immutable), 239 + 
fsparam_flag_no("discard", Opt_discard), 240 + fsparam_flag_no("force", Opt_force), 241 + fsparam_flag_no("sparse", Opt_sparse), 242 + fsparam_flag_no("hidden", Opt_nohidden), 243 + fsparam_flag_no("acl", Opt_acl), 244 + fsparam_flag_no("showmeta", Opt_showmeta), 245 + fsparam_flag_no("prealloc", Opt_prealloc), 246 + fsparam_flag_no("acsrules", Opt_noacsrules), 247 + fsparam_string("iocharset", Opt_iocharset), 248 + {} 252 249 }; 253 250 254 - static noinline int ntfs_parse_options(struct super_block *sb, char *options, 255 - int silent, 256 - struct ntfs_mount_options *opts) 251 + /* 252 + * Load nls table or if @nls is utf8 then return NULL. 253 + */ 254 + static struct nls_table *ntfs_load_nls(char *nls) 257 255 { 258 - char *p; 259 - substring_t args[MAX_OPT_ARGS]; 260 - int option; 261 - char nls_name[30]; 262 - struct nls_table *nls; 256 + struct nls_table *ret; 263 257 264 - opts->fs_uid = current_uid(); 265 - opts->fs_gid = current_gid(); 266 - opts->fs_fmask_inv = opts->fs_dmask_inv = ~current_umask(); 267 - nls_name[0] = 0; 258 + if (!nls) 259 + nls = CONFIG_NLS_DEFAULT; 268 260 269 - if (!options) 270 - goto out; 261 + if (strcmp(nls, "utf8") == 0) 262 + return NULL; 271 263 272 - while ((p = strsep(&options, ","))) { 273 - int token; 264 + if (strcmp(nls, CONFIG_NLS_DEFAULT) == 0) 265 + return load_nls_default(); 274 266 275 - if (!*p) 276 - continue; 267 + ret = load_nls(nls); 268 + if (ret) 269 + return ret; 277 270 278 - token = match_token(p, ntfs_tokens, args); 279 - switch (token) { 280 - case Opt_immutable: 281 - opts->sys_immutable = 1; 282 - break; 283 - case Opt_uid: 284 - if (match_int(&args[0], &option)) 285 - return -EINVAL; 286 - opts->fs_uid = make_kuid(current_user_ns(), option); 287 - if (!uid_valid(opts->fs_uid)) 288 - return -EINVAL; 289 - opts->uid = 1; 290 - break; 291 - case Opt_gid: 292 - if (match_int(&args[0], &option)) 293 - return -EINVAL; 294 - opts->fs_gid = make_kgid(current_user_ns(), option); 295 - if 
(!gid_valid(opts->fs_gid)) 296 - return -EINVAL; 297 - opts->gid = 1; 298 - break; 299 - case Opt_umask: 300 - if (match_octal(&args[0], &option)) 301 - return -EINVAL; 302 - opts->fs_fmask_inv = opts->fs_dmask_inv = ~option; 303 - opts->fmask = opts->dmask = 1; 304 - break; 305 - case Opt_dmask: 306 - if (match_octal(&args[0], &option)) 307 - return -EINVAL; 308 - opts->fs_dmask_inv = ~option; 309 - opts->dmask = 1; 310 - break; 311 - case Opt_fmask: 312 - if (match_octal(&args[0], &option)) 313 - return -EINVAL; 314 - opts->fs_fmask_inv = ~option; 315 - opts->fmask = 1; 316 - break; 317 - case Opt_discard: 318 - opts->discard = 1; 319 - break; 320 - case Opt_force: 321 - opts->force = 1; 322 - break; 323 - case Opt_sparse: 324 - opts->sparse = 1; 325 - break; 326 - case Opt_nohidden: 327 - opts->nohidden = 1; 328 - break; 329 - case Opt_acl: 271 + return ERR_PTR(-EINVAL); 272 + } 273 + 274 + static int ntfs_fs_parse_param(struct fs_context *fc, 275 + struct fs_parameter *param) 276 + { 277 + struct ntfs_mount_options *opts = fc->fs_private; 278 + struct fs_parse_result result; 279 + int opt; 280 + 281 + opt = fs_parse(fc, ntfs_fs_parameters, param, &result); 282 + if (opt < 0) 283 + return opt; 284 + 285 + switch (opt) { 286 + case Opt_uid: 287 + opts->fs_uid = make_kuid(current_user_ns(), result.uint_32); 288 + if (!uid_valid(opts->fs_uid)) 289 + return invalf(fc, "ntfs3: Invalid value for uid."); 290 + break; 291 + case Opt_gid: 292 + opts->fs_gid = make_kgid(current_user_ns(), result.uint_32); 293 + if (!gid_valid(opts->fs_gid)) 294 + return invalf(fc, "ntfs3: Invalid value for gid."); 295 + break; 296 + case Opt_umask: 297 + if (result.uint_32 & ~07777) 298 + return invalf(fc, "ntfs3: Invalid value for umask."); 299 + opts->fs_fmask_inv = ~result.uint_32; 300 + opts->fs_dmask_inv = ~result.uint_32; 301 + opts->fmask = 1; 302 + opts->dmask = 1; 303 + break; 304 + case Opt_dmask: 305 + if (result.uint_32 & ~07777) 306 + return invalf(fc, "ntfs3: Invalid value 
for dmask."); 307 + opts->fs_dmask_inv = ~result.uint_32; 308 + opts->dmask = 1; 309 + break; 310 + case Opt_fmask: 311 + if (result.uint_32 & ~07777) 312 + return invalf(fc, "ntfs3: Invalid value for fmask."); 313 + opts->fs_fmask_inv = ~result.uint_32; 314 + opts->fmask = 1; 315 + break; 316 + case Opt_immutable: 317 + opts->sys_immutable = result.negated ? 0 : 1; 318 + break; 319 + case Opt_discard: 320 + opts->discard = result.negated ? 0 : 1; 321 + break; 322 + case Opt_force: 323 + opts->force = result.negated ? 0 : 1; 324 + break; 325 + case Opt_sparse: 326 + opts->sparse = result.negated ? 0 : 1; 327 + break; 328 + case Opt_nohidden: 329 + opts->nohidden = result.negated ? 1 : 0; 330 + break; 331 + case Opt_acl: 332 + if (!result.negated) 330 333 #ifdef CONFIG_NTFS3_FS_POSIX_ACL 331 - sb->s_flags |= SB_POSIXACL; 332 - break; 334 + fc->sb_flags |= SB_POSIXACL; 333 335 #else 334 - ntfs_err(sb, "support for ACL not compiled in!"); 335 - return -EINVAL; 336 + return invalf(fc, "ntfs3: Support for ACL not compiled in!"); 336 337 #endif 337 - case Opt_noatime: 338 - sb->s_flags |= SB_NOATIME; 339 - break; 340 - case Opt_showmeta: 341 - opts->showmeta = 1; 342 - break; 343 - case Opt_nls: 344 - match_strlcpy(nls_name, &args[0], sizeof(nls_name)); 345 - break; 346 - case Opt_prealloc: 347 - opts->prealloc = 1; 348 - break; 349 - case Opt_no_acs_rules: 350 - opts->no_acs_rules = 1; 351 - break; 352 - default: 353 - if (!silent) 354 - ntfs_err( 355 - sb, 356 - "Unrecognized mount option \"%s\" or missing value", 357 - p); 358 - //return -EINVAL; 359 - } 338 + else 339 + fc->sb_flags &= ~SB_POSIXACL; 340 + break; 341 + case Opt_showmeta: 342 + opts->showmeta = result.negated ? 0 : 1; 343 + break; 344 + case Opt_iocharset: 345 + kfree(opts->nls_name); 346 + opts->nls_name = param->string; 347 + param->string = NULL; 348 + break; 349 + case Opt_prealloc: 350 + opts->prealloc = result.negated ? 
0 : 1; 351 + break; 352 + case Opt_noacsrules: 353 + opts->noacsrules = result.negated ? 1 : 0; 354 + break; 355 + default: 356 + /* Should not be here unless we forget add case. */ 357 + return -EINVAL; 360 358 } 361 - 362 - out: 363 - if (!strcmp(nls_name[0] ? nls_name : CONFIG_NLS_DEFAULT, "utf8")) { 364 - /* 365 - * For UTF-8 use utf16s_to_utf8s()/utf8s_to_utf16s() 366 - * instead of NLS. 367 - */ 368 - nls = NULL; 369 - } else if (nls_name[0]) { 370 - nls = load_nls(nls_name); 371 - if (!nls) { 372 - ntfs_err(sb, "failed to load \"%s\"", nls_name); 373 - return -EINVAL; 374 - } 375 - } else { 376 - nls = load_nls_default(); 377 - if (!nls) { 378 - ntfs_err(sb, "failed to load default nls"); 379 - return -EINVAL; 380 - } 381 - } 382 - opts->nls = nls; 383 - 384 359 return 0; 385 360 } 386 361 387 - static int ntfs_remount(struct super_block *sb, int *flags, char *data) 362 + static int ntfs_fs_reconfigure(struct fs_context *fc) 388 363 { 389 - int err, ro_rw; 364 + struct super_block *sb = fc->root->d_sb; 390 365 struct ntfs_sb_info *sbi = sb->s_fs_info; 391 - struct ntfs_mount_options old_opts; 392 - char *orig_data = kstrdup(data, GFP_KERNEL); 366 + struct ntfs_mount_options *new_opts = fc->fs_private; 367 + int ro_rw; 393 368 394 - if (data && !orig_data) 395 - return -ENOMEM; 396 - 397 - /* Store original options. */ 398 - memcpy(&old_opts, &sbi->options, sizeof(old_opts)); 399 - clear_mount_options(&sbi->options); 400 - memset(&sbi->options, 0, sizeof(sbi->options)); 401 - 402 - err = ntfs_parse_options(sb, data, 0, &sbi->options); 403 - if (err) 404 - goto restore_opts; 405 - 406 - ro_rw = sb_rdonly(sb) && !(*flags & SB_RDONLY); 369 + ro_rw = sb_rdonly(sb) && !(fc->sb_flags & SB_RDONLY); 407 370 if (ro_rw && (sbi->flags & NTFS_FLAGS_NEED_REPLAY)) { 408 - ntfs_warn( 409 - sb, 410 - "Couldn't remount rw because journal is not replayed. 
Please umount/remount instead\n"); 411 - err = -EINVAL; 412 - goto restore_opts; 371 + errorf(fc, "ntfs3: Couldn't remount rw because journal is not replayed. Please umount/remount instead\n"); 372 + return -EINVAL; 413 373 } 374 + 375 + new_opts->nls = ntfs_load_nls(new_opts->nls_name); 376 + if (IS_ERR(new_opts->nls)) { 377 + new_opts->nls = NULL; 378 + errorf(fc, "ntfs3: Cannot load iocharset %s", new_opts->nls_name); 379 + return -EINVAL; 380 + } 381 + if (new_opts->nls != sbi->options->nls) 382 + return invalf(fc, "ntfs3: Cannot use different iocharset when remounting!"); 414 383 415 384 sync_filesystem(sb); 416 385 417 386 if (ro_rw && (sbi->volume.flags & VOLUME_FLAG_DIRTY) && 418 - !sbi->options.force) { 419 - ntfs_warn(sb, "volume is dirty and \"force\" flag is not set!"); 420 - err = -EINVAL; 421 - goto restore_opts; 387 + !new_opts->force) { 388 + errorf(fc, "ntfs3: Volume is dirty and \"force\" flag is not set!"); 389 + return -EINVAL; 422 390 } 423 391 424 - clear_mount_options(&old_opts); 392 + memcpy(sbi->options, new_opts, sizeof(*new_opts)); 425 393 426 - *flags = (*flags & ~SB_LAZYTIME) | (sb->s_flags & SB_LAZYTIME) | 427 - SB_NODIRATIME | SB_NOATIME; 428 - ntfs_info(sb, "re-mounted. Opts: %s", orig_data); 429 - err = 0; 430 - goto out; 431 - 432 - restore_opts: 433 - clear_mount_options(&sbi->options); 434 - memcpy(&sbi->options, &old_opts, sizeof(old_opts)); 435 - 436 - out: 437 - kfree(orig_data); 438 - return err; 394 + return 0; 439 395 } 440 396 441 397 static struct kmem_cache *ntfs_inode_cachep; ··· 471 513 xpress_free_decompressor(sbi->compress.xpress); 472 514 lzx_free_decompressor(sbi->compress.lzx); 473 515 #endif 474 - clear_mount_options(&sbi->options); 475 - 476 516 kfree(sbi); 477 517 } 478 518 ··· 481 525 /* Mark rw ntfs as clear, if possible. 
*/ 482 526 ntfs_set_state(sbi, NTFS_DIRTY_CLEAR); 483 527 528 + put_mount_options(sbi->options); 484 529 put_ntfs(sbi); 530 + sb->s_fs_info = NULL; 485 531 486 532 sync_blockdev(sb->s_bdev); 487 533 } ··· 510 552 { 511 553 struct super_block *sb = root->d_sb; 512 554 struct ntfs_sb_info *sbi = sb->s_fs_info; 513 - struct ntfs_mount_options *opts = &sbi->options; 555 + struct ntfs_mount_options *opts = sbi->options; 514 556 struct user_namespace *user_ns = seq_user_ns(m); 515 557 516 - if (opts->uid) 517 - seq_printf(m, ",uid=%u", 518 - from_kuid_munged(user_ns, opts->fs_uid)); 519 - if (opts->gid) 520 - seq_printf(m, ",gid=%u", 521 - from_kgid_munged(user_ns, opts->fs_gid)); 558 + seq_printf(m, ",uid=%u", 559 + from_kuid_munged(user_ns, opts->fs_uid)); 560 + seq_printf(m, ",gid=%u", 561 + from_kgid_munged(user_ns, opts->fs_gid)); 522 562 if (opts->fmask) 523 563 seq_printf(m, ",fmask=%04o", ~opts->fs_fmask_inv); 524 564 if (opts->dmask) 525 565 seq_printf(m, ",dmask=%04o", ~opts->fs_dmask_inv); 526 566 if (opts->nls) 527 - seq_printf(m, ",nls=%s", opts->nls->charset); 567 + seq_printf(m, ",iocharset=%s", opts->nls->charset); 528 568 else 529 - seq_puts(m, ",nls=utf8"); 569 + seq_puts(m, ",iocharset=utf8"); 530 570 if (opts->sys_immutable) 531 571 seq_puts(m, ",sys_immutable"); 532 572 if (opts->discard) ··· 537 581 seq_puts(m, ",nohidden"); 538 582 if (opts->force) 539 583 seq_puts(m, ",force"); 540 - if (opts->no_acs_rules) 541 - seq_puts(m, ",no_acs_rules"); 584 + if (opts->noacsrules) 585 + seq_puts(m, ",noacsrules"); 542 586 if (opts->prealloc) 543 587 seq_puts(m, ",prealloc"); 544 588 if (sb->s_flags & SB_POSIXACL) 545 589 seq_puts(m, ",acl"); 546 - if (sb->s_flags & SB_NOATIME) 547 - seq_puts(m, ",noatime"); 548 590 549 591 return 0; 550 592 } ··· 597 643 .statfs = ntfs_statfs, 598 644 .show_options = ntfs_show_options, 599 645 .sync_fs = ntfs_sync_fs, 600 - .remount_fs = ntfs_remount, 601 646 .write_inode = ntfs3_write_inode, 602 647 }; 603 648 ··· 682 729 
struct ntfs_sb_info *sbi = sb->s_fs_info;
683 730 int err;
684 731 u32 mb, gb, boot_sector_size, sct_per_clst, record_size;
685 - u64 sectors, clusters, fs_size, mlcn, mlcn2;
732 + u64 sectors, clusters, mlcn, mlcn2;
686 733 struct NTFS_BOOT *boot;
687 734 struct buffer_head *bh;
688 735 struct MFT_REC *rec;
···
740 787 goto out;
741 788 }
742 789
743 - sbi->sector_size = boot_sector_size;
744 - sbi->sector_bits = blksize_bits(boot_sector_size);
745 - fs_size = (sectors + 1) << sbi->sector_bits;
790 + sbi->volume.size = sectors * boot_sector_size;
746 791
747 - gb = format_size_gb(fs_size, &mb);
792 + gb = format_size_gb(sbi->volume.size + boot_sector_size, &mb);
748 793
749 794 /*
750 795 * - Volume formatted and mounted with the same sector size.
751 796 * - Volume formatted 4K and mounted as 512.
752 797 * - Volume formatted 512 and mounted as 4K.
753 798 */
754 - if (sbi->sector_size != sector_size) {
755 - ntfs_warn(sb,
756 - "Different NTFS' sector size and media sector size");
799 + if (boot_sector_size != sector_size) {
800 + ntfs_warn(
801 + sb,
802 + "Different NTFS' sector size (%u) and media sector size (%u)",
803 + boot_sector_size, sector_size);
757 804 dev_size += sector_size - 1;
758 805 }
759 806
···
763 810 sbi->mft.lbo = mlcn << sbi->cluster_bits;
764 811 sbi->mft.lbo2 = mlcn2 << sbi->cluster_bits;
765 812
766 - if (sbi->cluster_size < sbi->sector_size)
813 + /* Compare boot's cluster and sector. */
814 + if (sbi->cluster_size < boot_sector_size)
767 815 goto out;
816 +
817 + /* Compare boot's cluster and media sector. */
818 + if (sbi->cluster_size < sector_size) {
819 + /* No way to use ntfs_get_block in this case. */
820 + ntfs_err(
821 + sb,
822 + "Failed to mount 'cause NTFS's cluster size (%u) is less than media sector size (%u)",
823 + sbi->cluster_size, sector_size);
824 + goto out;
825 + }
768 826
769 827 sbi->cluster_mask = sbi->cluster_size - 1;
770 828 sbi->cluster_mask_inv = ~(u64)sbi->cluster_mask;
···
800 836 : (u32)boot->index_size << sbi->cluster_bits;
801 837
802 838 sbi->volume.ser_num = le64_to_cpu(boot->serial_num);
803 - sbi->volume.size = sectors << sbi->sector_bits;
804 839
805 840 /* Warning if RAW volume. */
806 - if (dev_size < fs_size) {
841 + if (dev_size < sbi->volume.size + boot_sector_size) {
807 842 u32 mb0, gb0;
808 843
809 844 gb0 = format_size_gb(dev_size, &mb0);
···
846 883 rec->total = cpu_to_le32(sbi->record_size);
847 884 ((struct ATTRIB *)Add2Ptr(rec, ao))->type = ATTR_END;
848 885
849 - if (sbi->cluster_size < PAGE_SIZE)
850 - sb_set_blocksize(sb, sbi->cluster_size);
886 + sb_set_blocksize(sb, min_t(u32, sbi->cluster_size, PAGE_SIZE));
851 887
852 888 sbi->block_mask = sb->s_blocksize - 1;
853 889 sbi->blocks_per_cluster = sbi->cluster_size >> sb->s_blocksize_bits;
···
859 897 if (clusters >= (1ull << (64 - sbi->cluster_bits)))
860 898 sbi->maxbytes = -1;
861 899 sbi->maxbytes_sparse = -1;
900 + sb->s_maxbytes = MAX_LFS_FILESIZE;
862 901 #else
863 902 /* Maximum size for sparse file. */
864 903 sbi->maxbytes_sparse = (1ull << (sbi->cluster_bits + 32)) - 1;
904 + sb->s_maxbytes = 0xFFFFFFFFull << sbi->cluster_bits;
865 905 #endif
866 906
867 907 err = 0;
···
877 913 /*
878 914 * ntfs_fill_super - Try to mount.
879 915 */
880 - static int ntfs_fill_super(struct super_block *sb, void *data, int silent)
916 + static int ntfs_fill_super(struct super_block *sb, struct fs_context *fc)
881 917 {
882 918 int err;
883 - struct ntfs_sb_info *sbi;
919 + struct ntfs_sb_info *sbi = sb->s_fs_info;
884 920 struct block_device *bdev = sb->s_bdev;
885 - struct inode *bd_inode = bdev->bd_inode;
886 - struct request_queue *rq = bdev_get_queue(bdev);
887 - struct inode *inode = NULL;
921 + struct request_queue *rq;
922 + struct inode *inode;
888 923 struct ntfs_inode *ni;
889 924 size_t i, tt;
890 925 CLST vcn, lcn, len;
···
891 928 const struct VOLUME_INFO *info;
892 929 u32 idx, done, bytes;
893 930 struct ATTR_DEF_ENTRY *t;
894 - u16 *upcase = NULL;
895 931 u16 *shared;
896 - bool is_ro;
897 932 struct MFT_REF ref;
898 933
899 934 ref.high = 0;
900 935
901 - sbi = kzalloc(sizeof(struct ntfs_sb_info), GFP_NOFS);
902 - if (!sbi)
903 - return -ENOMEM;
904 -
905 - sb->s_fs_info = sbi;
906 936 sbi->sb = sb;
907 937 sb->s_flags |= SB_NODIRATIME;
908 938 sb->s_magic = 0x7366746e; // "ntfs"
···
904 948 sb->s_time_gran = NTFS_TIME_GRAN; // 100 nsec
905 949 sb->s_xattr = ntfs_xattr_handlers;
906 950
907 - ratelimit_state_init(&sbi->msg_ratelimit, DEFAULT_RATELIMIT_INTERVAL,
908 - DEFAULT_RATELIMIT_BURST);
909 -
910 - err = ntfs_parse_options(sb, data, silent, &sbi->options);
911 - if (err)
951 + sbi->options->nls = ntfs_load_nls(sbi->options->nls_name);
952 + if (IS_ERR(sbi->options->nls)) {
953 + sbi->options->nls = NULL;
954 + errorf(fc, "Cannot load nls %s", sbi->options->nls_name);
955 + err = -EINVAL;
912 956 goto out;
957 + }
913 958
914 - if (!rq || !blk_queue_discard(rq) || !rq->limits.discard_granularity) {
915 - ;
916 - } else {
959 + rq = bdev_get_queue(bdev);
960 + if (blk_queue_discard(rq) && rq->limits.discard_granularity) {
917 961 sbi->discard_granularity = rq->limits.discard_granularity;
918 962 sbi->discard_granularity_mask_inv =
919 963 ~(u64)(sbi->discard_granularity - 1);
920 964 }
921 965
922 - sb_set_blocksize(sb, PAGE_SIZE);
923 -
924 966 /* Parse boot. */
925 967 err = ntfs_init_from_boot(sb, rq ? queue_logical_block_size(rq) : 512,
926 - bd_inode->i_size);
968 + bdev->bd_inode->i_size);
927 969 if (err)
928 970 goto out;
929 -
930 - #ifdef CONFIG_NTFS3_64BIT_CLUSTER
931 - sb->s_maxbytes = MAX_LFS_FILESIZE;
932 - #else
933 - sb->s_maxbytes = 0xFFFFFFFFull << sbi->cluster_bits;
934 - #endif
935 -
936 - mutex_init(&sbi->compress.mtx_lznt);
937 - #ifdef CONFIG_NTFS3_LZX_XPRESS
938 - mutex_init(&sbi->compress.mtx_xpress);
939 - mutex_init(&sbi->compress.mtx_lzx);
940 - #endif
941 971
942 972 /*
943 973 * Load $Volume. This should be done before $LogFile
···
933 991 ref.seq = cpu_to_le16(MFT_REC_VOL);
934 992 inode = ntfs_iget5(sb, &ref, &NAME_VOLUME);
935 993 if (IS_ERR(inode)) {
936 - err = PTR_ERR(inode);
937 994 ntfs_err(sb, "Failed to load $Volume.");
938 - inode = NULL;
995 + err = PTR_ERR(inode);
939 996 goto out;
940 997 }
941 998
···
956 1015 } else {
957 1016 /* Should we break mounting here? */
958 1017 //err = -EINVAL;
959 - //goto out;
1018 + //goto put_inode_out;
960 1019 }
961 1020
962 1021 attr = ni_find_attr(ni, attr, NULL, ATTR_VOL_INFO, NULL, 0, NULL, NULL);
963 1022 if (!attr || is_attr_ext(attr)) {
964 1023 err = -EINVAL;
965 - goto out;
1024 + goto put_inode_out;
966 1025 }
967 1026
968 1027 info = resident_data_ex(attr, SIZEOF_ATTRIBUTE_VOLUME_INFO);
969 1028 if (!info) {
970 1029 err = -EINVAL;
971 - goto out;
1030 + goto put_inode_out;
972 1031 }
973 1032
974 1033 sbi->volume.major_ver = info->major_ver;
975 1034 sbi->volume.minor_ver = info->minor_ver;
976 1035 sbi->volume.flags = info->flags;
977 -
978 1036 sbi->volume.ni = ni;
979 - inode = NULL;
980 1037
981 1038 /* Load $MFTMirr to estimate recs_mirr. */
982 1039 ref.low = cpu_to_le32(MFT_REC_MIRR);
983 1040 ref.seq = cpu_to_le16(MFT_REC_MIRR);
984 1041 inode = ntfs_iget5(sb, &ref, &NAME_MIRROR);
985 1042 if (IS_ERR(inode)) {
986 - err = PTR_ERR(inode);
987 1043 ntfs_err(sb, "Failed to load $MFTMirr.");
988 - inode = NULL;
1044 + err = PTR_ERR(inode);
989 1045 goto out;
990 1046 }
991 1047
···
996 1058 ref.seq = cpu_to_le16(MFT_REC_LOG);
997 1059 inode = ntfs_iget5(sb, &ref, &NAME_LOGFILE);
998 1060 if (IS_ERR(inode)) {
999 - err = PTR_ERR(inode);
1000 1061 ntfs_err(sb, "Failed to load \x24LogFile.");
1001 - inode = NULL;
1062 + err = PTR_ERR(inode);
1002 1063 goto out;
1003 1064 }
1004 1065
···
1005 1068
1006 1069 err = ntfs_loadlog_and_replay(ni, sbi);
1007 1070 if (err)
1008 - goto out;
1071 + goto put_inode_out;
1009 1072
1010 1073 iput(inode);
1011 - inode = NULL;
1012 -
1013 - is_ro = sb_rdonly(sbi->sb);
1014 1074
1015 1075 if (sbi->flags & NTFS_FLAGS_NEED_REPLAY) {
1016 - if (!is_ro) {
1076 + if (!sb_rdonly(sb)) {
1017 1077 ntfs_warn(sb,
1018 1078 "failed to replay log file. Can't mount rw!");
1019 1079 err = -EINVAL;
1020 1080 goto out;
1021 1081 }
1022 1082 } else if (sbi->volume.flags & VOLUME_FLAG_DIRTY) {
1023 - if (!is_ro && !sbi->options.force) {
1083 + if (!sb_rdonly(sb) && !sbi->options->force) {
1024 1084 ntfs_warn(
1025 1085 sb,
1026 1086 "volume is dirty and \"force\" flag is not set!");
···
1032 1098
1033 1099 inode = ntfs_iget5(sb, &ref, &NAME_MFT);
1034 1100 if (IS_ERR(inode)) {
1035 - err = PTR_ERR(inode);
1036 1101 ntfs_err(sb, "Failed to load $MFT.");
1037 - inode = NULL;
1102 + err = PTR_ERR(inode);
1038 1103 goto out;
1039 1104 }
1040 1105
···
1045 1112
1046 1113 err = wnd_init(&sbi->mft.bitmap, sb, tt);
1047 1114 if (err)
1048 - goto out;
1115 + goto put_inode_out;
1049 1116
1050 1117 err = ni_load_all_mi(ni);
1051 1118 if (err)
1052 - goto out;
1119 + goto put_inode_out;
1053 1120
1054 1121 sbi->mft.ni = ni;
1055 1122
···
1058 1125 ref.seq = cpu_to_le16(MFT_REC_BADCLUST);
1059 1126 inode = ntfs_iget5(sb, &ref, &NAME_BADCLUS);
1060 1127 if (IS_ERR(inode)) {
1061 - err = PTR_ERR(inode);
1062 1128 ntfs_err(sb, "Failed to load $BadClus.");
1063 - inode = NULL;
1129 + err = PTR_ERR(inode);
1064 1130 goto out;
1065 1131 }
1066 1132
···
1082 1150 ref.seq = cpu_to_le16(MFT_REC_BITMAP);
1083 1151 inode = ntfs_iget5(sb, &ref, &NAME_BITMAP);
1084 1152 if (IS_ERR(inode)) {
1085 - err = PTR_ERR(inode);
1086 1153 ntfs_err(sb, "Failed to load $Bitmap.");
1087 - inode = NULL;
1154 + err = PTR_ERR(inode);
1088 1155 goto out;
1089 1156 }
1090 -
1091 - ni = ntfs_i(inode);
1092 1157
1093 1158 #ifndef CONFIG_NTFS3_64BIT_CLUSTER
1094 1159 if (inode->i_size >> 32) {
1095 1160 err = -EINVAL;
1096 - goto out;
1161 + goto put_inode_out;
1097 1162 }
1098 1163 #endif
1099 1164
···
1098 1169 tt = sbi->used.bitmap.nbits;
1099 1170 if (inode->i_size < bitmap_size(tt)) {
1100 1171 err = -EINVAL;
1101 - goto out;
1172 + goto put_inode_out;
1102 1173 }
1103 1174
1104 1175 /* Not necessary. */
1105 1176 sbi->used.bitmap.set_tail = true;
1106 - err = wnd_init(&sbi->used.bitmap, sbi->sb, tt);
1177 + err = wnd_init(&sbi->used.bitmap, sb, tt);
1107 1178 if (err)
1108 - goto out;
1179 + goto put_inode_out;
1109 1180
1110 1181 iput(inode);
1111 1182
···
1117 1188 /* Load $AttrDef. */
1118 1189 ref.low = cpu_to_le32(MFT_REC_ATTR);
1119 1190 ref.seq = cpu_to_le16(MFT_REC_ATTR);
1120 - inode = ntfs_iget5(sbi->sb, &ref, &NAME_ATTRDEF);
1191 + inode = ntfs_iget5(sb, &ref, &NAME_ATTRDEF);
1121 1192 if (IS_ERR(inode)) {
1122 - err = PTR_ERR(inode);
1123 1193 ntfs_err(sb, "Failed to load $AttrDef -> %d", err);
1124 - inode = NULL;
1194 + err = PTR_ERR(inode);
1125 1195 goto out;
1126 1196 }
1127 1197
1128 1198 if (inode->i_size < sizeof(struct ATTR_DEF_ENTRY)) {
1129 1199 err = -EINVAL;
1130 - goto out;
1200 + goto put_inode_out;
1131 1201 }
1132 1202 bytes = inode->i_size;
1133 1203 sbi->def_table = t = kmalloc(bytes, GFP_NOFS);
1134 1204 if (!t) {
1135 1205 err = -ENOMEM;
1136 - goto out;
1206 + goto put_inode_out;
1137 1207 }
1138 1208
1139 1209 for (done = idx = 0; done < bytes; done += PAGE_SIZE, idx++) {
···
1141 1213
1142 1214 if (IS_ERR(page)) {
1143 1215 err = PTR_ERR(page);
1144 - goto out;
1216 + goto put_inode_out;
1145 1217 }
1146 1218 memcpy(Add2Ptr(t, done), page_address(page),
1147 1219 min(PAGE_SIZE, tail));
···
1149 1221
1150 1222 if (!idx && ATTR_STD != t->type) {
1151 1223 err = -EINVAL;
1152 - goto out;
1224 + goto put_inode_out;
1153 1225 }
1154 1226 }
1155 1227
···
1182 1254 ref.seq = cpu_to_le16(MFT_REC_UPCASE);
1183 1255 inode = ntfs_iget5(sb, &ref, &NAME_UPCASE);
1184 1256 if (IS_ERR(inode)) {
1257 + ntfs_err(sb, "Failed to load $UpCase.");
1185 1258 err = PTR_ERR(inode);
1186 - ntfs_err(sb, "Failed to load \x24LogFile.");
1187 - inode = NULL;
1188 1259 goto out;
1189 1260 }
1190 -
1191 - ni = ntfs_i(inode);
1192 1261
1193 1262 if (inode->i_size != 0x10000 * sizeof(short)) {
1194 1263 err = -EINVAL;
1195 - goto out;
1196 - }
1197 -
1198 - sbi->upcase = upcase = kvmalloc(0x10000 * sizeof(short), GFP_KERNEL);
1199 - if (!upcase) {
1200 - err = -ENOMEM;
1201 - goto out;
1264 + goto put_inode_out;
1202 1265 }
1203 1266
1204 1267 for (idx = 0; idx < (0x10000 * sizeof(short) >> PAGE_SHIFT); idx++) {
1205 1268 const __le16 *src;
1206 - u16 *dst = Add2Ptr(upcase, idx << PAGE_SHIFT);
1269 + u16 *dst = Add2Ptr(sbi->upcase, idx << PAGE_SHIFT);
1207 1270 struct page *page = ntfs_map_page(inode->i_mapping, idx);
1208 1271
1209 1272 if (IS_ERR(page)) {
1210 1273 err = PTR_ERR(page);
1211 - goto out;
1274 + goto put_inode_out;
1212 1275 }
1213 1276
1214 1277 src = page_address(page);
···
1213 1294 ntfs_unmap_page(page);
1214 1295 }
1215 1296
1216 - shared = ntfs_set_shared(upcase, 0x10000 * sizeof(short));
1217 - if (shared && upcase != shared) {
1297 + shared = ntfs_set_shared(sbi->upcase, 0x10000 * sizeof(short));
1298 + if (shared && sbi->upcase != shared) {
1299 + kvfree(sbi->upcase);
1218 1300 sbi->upcase = shared;
1219 - kvfree(upcase);
1220 1301 }
1221 1302
1222 1303 iput(inode);
1223 - inode = NULL;
1224 1304
1225 1305 if (is_ntfs3(sbi)) {
1226 1306 /* Load $Secure. */
···
1249 1331 ref.seq = cpu_to_le16(MFT_REC_ROOT);
1250 1332 inode = ntfs_iget5(sb, &ref, &NAME_ROOT);
1251 1333 if (IS_ERR(inode)) {
1252 - err = PTR_ERR(inode);
1253 1334 ntfs_err(sb, "Failed to load root.");
1254 - inode = NULL;
1335 + err = PTR_ERR(inode);
1255 1336 goto out;
1256 1337 }
1257 -
1258 - ni = ntfs_i(inode);
1259 1338
1260 1339 sb->s_root = d_make_root(inode);
1261 -
1262 1340 if (!sb->s_root) {
1263 - err = -EINVAL;
1264 - goto out;
1341 + err = -ENOMEM;
1342 + goto put_inode_out;
1265 1343 }
1344 +
1345 + fc->fs_private = NULL;
1266 1346
1267 1347 return 0;
1268 1348
1269 - out:
1349 + put_inode_out:
1270 1350 iput(inode);
1271 -
1272 - if (sb->s_root) {
1273 - d_drop(sb->s_root);
1274 - sb->s_root = NULL;
1275 - }
1276 -
1351 + out:
1352 + /*
1353 + * Free resources here.
1354 + * ntfs_fs_free will be called with fc->s_fs_info = NULL
1355 + */
1277 1356 put_ntfs(sbi);
1278 -
1279 1357 sb->s_fs_info = NULL;
1358 +
1280 1359 return err;
1281 1360 }
1282 1361
···
1318 1403 if (sbi->flags & NTFS_FLAGS_NODISCARD)
1319 1404 return -EOPNOTSUPP;
1320 1405
1321 - if (!sbi->options.discard)
1406 + if (!sbi->options->discard)
1322 1407 return -EOPNOTSUPP;
1323 1408
1324 1409 lbo = (u64)lcn << sbi->cluster_bits;
···
1343 1428 return err;
1344 1429 }
1345 1430
1346 - static struct dentry *ntfs_mount(struct file_system_type *fs_type, int flags,
1347 - const char *dev_name, void *data)
1431 + static int ntfs_fs_get_tree(struct fs_context *fc)
1348 1432 {
1349 - return mount_bdev(fs_type, flags, dev_name, data, ntfs_fill_super);
1433 + return get_tree_bdev(fc, ntfs_fill_super);
1434 + }
1435 +
1436 + /*
1437 + * ntfs_fs_free - Free fs_context.
1438 + *
1439 + * Note that this will be called after fill_super and reconfigure
1440 + * even when they pass. So they have to take pointers if they pass.
1441 + */
1442 + static void ntfs_fs_free(struct fs_context *fc)
1443 + {
1444 + struct ntfs_mount_options *opts = fc->fs_private;
1445 + struct ntfs_sb_info *sbi = fc->s_fs_info;
1446 +
1447 + if (sbi)
1448 + put_ntfs(sbi);
1449 +
1450 + if (opts)
1451 + put_mount_options(opts);
1452 + }
1453 +
1454 + static const struct fs_context_operations ntfs_context_ops = {
1455 + .parse_param = ntfs_fs_parse_param,
1456 + .get_tree = ntfs_fs_get_tree,
1457 + .reconfigure = ntfs_fs_reconfigure,
1458 + .free = ntfs_fs_free,
1459 + };
1460 +
1461 + /*
1462 + * ntfs_init_fs_context - Initialize spi and opts
1463 + *
1464 + * This will called when mount/remount. We will first initiliaze
1465 + * options so that if remount we can use just that.
1466 + */
1467 + static int ntfs_init_fs_context(struct fs_context *fc)
1468 + {
1469 + struct ntfs_mount_options *opts;
1470 + struct ntfs_sb_info *sbi;
1471 +
1472 + opts = kzalloc(sizeof(struct ntfs_mount_options), GFP_NOFS);
1473 + if (!opts)
1474 + return -ENOMEM;
1475 +
1476 + /* Default options. */
1477 + opts->fs_uid = current_uid();
1478 + opts->fs_gid = current_gid();
1479 + opts->fs_fmask_inv = ~current_umask();
1480 + opts->fs_dmask_inv = ~current_umask();
1481 +
1482 + if (fc->purpose == FS_CONTEXT_FOR_RECONFIGURE)
1483 + goto ok;
1484 +
1485 + sbi = kzalloc(sizeof(struct ntfs_sb_info), GFP_NOFS);
1486 + if (!sbi)
1487 + goto free_opts;
1488 +
1489 + sbi->upcase = kvmalloc(0x10000 * sizeof(short), GFP_KERNEL);
1490 + if (!sbi->upcase)
1491 + goto free_sbi;
1492 +
1493 + ratelimit_state_init(&sbi->msg_ratelimit, DEFAULT_RATELIMIT_INTERVAL,
1494 + DEFAULT_RATELIMIT_BURST);
1495 +
1496 + mutex_init(&sbi->compress.mtx_lznt);
1497 + #ifdef CONFIG_NTFS3_LZX_XPRESS
1498 + mutex_init(&sbi->compress.mtx_xpress);
1499 + mutex_init(&sbi->compress.mtx_lzx);
1500 + #endif
1501 +
1502 + sbi->options = opts;
1503 + fc->s_fs_info = sbi;
1504 + ok:
1505 + fc->fs_private = opts;
1506 + fc->ops = &ntfs_context_ops;
1507 +
1508 + return 0;
1509 + free_sbi:
1510 + kfree(sbi);
1511 + free_opts:
1512 + kfree(opts);
1513 + return -ENOMEM;
1350 1514 }
1351 1515
1352 1516 // clang-format off
1353 1517 static struct file_system_type ntfs_fs_type = {
1354 - .owner = THIS_MODULE,
1355 - .name = "ntfs3",
1356 - .mount = ntfs_mount,
1357 - .kill_sb = kill_block_super,
1358 - .fs_flags = FS_REQUIRES_DEV | FS_ALLOW_IDMAP,
1518 + .owner = THIS_MODULE,
1519 + .name = "ntfs3",
1520 + .init_fs_context = ntfs_init_fs_context,
1521 + .parameters = ntfs_fs_parameters,
1522 + .kill_sb = kill_block_super,
1523 + .fs_flags = FS_REQUIRES_DEV | FS_ALLOW_IDMAP,
1359 1524 };
1360 1525 // clang-format on
1361 1526
+2 -6
fs/ntfs3/upcase.c
···
5 5 *
6 6 */
7 7
8 - #include <linux/blkdev.h>
9 - #include <linux/buffer_head.h>
10 - #include <linux/module.h>
11 - #include <linux/nls.h>
8 + #include <linux/kernel.h>
9 + #include <linux/types.h>
12 10
13 - #include "debug.h"
14 - #include "ntfs.h"
15 11 #include "ntfs_fs.h"
16 12
17 13 static inline u16 upcase_unicode_char(const u16 *upcase, u16 chr)
+62 -189
fs/ntfs3/xattr.c
···
5 5 *
6 6 */
7 7
8 - #include <linux/blkdev.h>
9 - #include <linux/buffer_head.h>
10 8 #include <linux/fs.h>
11 - #include <linux/nls.h>
12 9 #include <linux/posix_acl.h>
13 10 #include <linux/posix_acl_xattr.h>
14 11 #include <linux/xattr.h>
···
75 78 size_t add_bytes, const struct EA_INFO **info)
76 79 {
77 80 int err;
81 + struct ntfs_sb_info *sbi = ni->mi.sbi;
78 82 struct ATTR_LIST_ENTRY *le = NULL;
79 83 struct ATTRIB *attr_info, *attr_ea;
80 84 void *ea_p;
···
100 102
101 103 /* Check Ea limit. */
102 104 size = le32_to_cpu((*info)->size);
103 - if (size > ni->mi.sbi->ea_max_size)
105 + if (size > sbi->ea_max_size)
104 106 return -EFBIG;
105 107
106 - if (attr_size(attr_ea) > ni->mi.sbi->ea_max_size)
108 + if (attr_size(attr_ea) > sbi->ea_max_size)
107 109 return -EFBIG;
108 110
109 111 /* Allocate memory for packed Ea. */
···
111 113 if (!ea_p)
112 114 return -ENOMEM;
113 115
114 - if (attr_ea->non_res) {
116 + if (!size) {
117 + ;
118 + } else if (attr_ea->non_res) {
115 119 struct runs_tree run;
116 120
117 121 run_init(&run);
118 122
119 123 err = attr_load_runs(attr_ea, ni, &run, NULL);
120 124 if (!err)
121 - err = ntfs_read_run_nb(ni->mi.sbi, &run, 0, ea_p, size,
122 - NULL);
125 + err = ntfs_read_run_nb(sbi, &run, 0, ea_p, size, NULL);
123 126 run_close(&run);
124 127
125 128 if (err)
···
259 260
260 261 static noinline int ntfs_set_ea(struct inode *inode, const char *name,
261 262 size_t name_len, const void *value,
262 - size_t val_size, int flags, int locked)
263 + size_t val_size, int flags)
263 264 {
264 265 struct ntfs_inode *ni = ntfs_i(inode);
265 266 struct ntfs_sb_info *sbi = ni->mi.sbi;
···
278 279 u64 new_sz;
279 280 void *p;
280 281
281 - if (!locked)
282 - ni_lock(ni);
282 + ni_lock(ni);
283 283
284 284 run_init(&ea_run);
285 285
···
368 370 new_ea->name[name_len] = 0;
369 371 memcpy(new_ea->name + name_len + 1, value, val_size);
370 372 new_pack = le16_to_cpu(ea_info.size_pack) + packed_ea_size(new_ea);
371 -
372 - /* Should fit into 16 bits. */
373 - if (new_pack > 0xffff) {
374 - err = -EFBIG; // -EINVAL?
375 - goto out;
376 - }
377 373 ea_info.size_pack = cpu_to_le16(new_pack);
378 -
379 374 /* New size of ATTR_EA. */
380 375 size += add;
381 - if (size > sbi->ea_max_size) {
376 + ea_info.size = cpu_to_le32(size);
377 +
378 + /*
379 + * 1. Check ea_info.size_pack for overflow.
380 + * 2. New attibute size must fit value from $AttrDef
381 + */
382 + if (new_pack > 0xffff || size > sbi->ea_max_size) {
383 + ntfs_inode_warn(
384 + inode,
385 + "The size of extended attributes must not exceed 64KiB");
382 386 err = -EFBIG; // -EINVAL?
383 387 goto out;
384 388 }
385 - ea_info.size = cpu_to_le32(size);
386 389
387 390 update_ea:
388 391
···
443 444 /* Delete xattr, ATTR_EA */
444 445 ni_remove_attr_le(ni, attr, mi, le);
445 446 } else if (attr->non_res) {
446 - err = ntfs_sb_write_run(sbi, &ea_run, 0, ea_all, size);
447 + err = ntfs_sb_write_run(sbi, &ea_run, 0, ea_all, size, 0);
447 448 if (err)
448 449 goto out;
449 450 } else {
···
467 468 mark_inode_dirty(&ni->vfs_inode);
468 469
469 470 out:
470 - if (!locked)
471 - ni_unlock(ni);
471 + ni_unlock(ni);
472 472
473 473 run_close(&ea_run);
474 474 kfree(ea_all);
···
476 478 }
477 479
478 480 #ifdef CONFIG_NTFS3_FS_POSIX_ACL
479 - static inline void ntfs_posix_acl_release(struct posix_acl *acl)
480 - {
481 - if (acl && refcount_dec_and_test(&acl->a_refcount))
482 - kfree(acl);
483 - }
484 -
485 481 static struct posix_acl *ntfs_get_acl_ex(struct user_namespace *mnt_userns,
486 482 struct inode *inode, int type,
487 483 int locked)
···
513 521 /* Translate extended attribute to acl. */
514 522 if (err >= 0) {
515 523 acl = posix_acl_from_xattr(mnt_userns, buf, err);
516 - if (!IS_ERR(acl))
517 - set_cached_acl(inode, type, acl);
524 + } else if (err == -ENODATA) {
525 + acl = NULL;
518 526 } else {
519 - acl = err == -ENODATA ? NULL : ERR_PTR(err);
527 + acl = ERR_PTR(err);
520 528 }
529 +
530 + if (!IS_ERR(acl))
531 + set_cached_acl(inode, type, acl);
521 532
522 533 __putname(buf);
523 534
···
541 546
542 547 static noinline int ntfs_set_acl_ex(struct user_namespace *mnt_userns,
543 548 struct inode *inode, struct posix_acl *acl,
544 - int type, int locked)
549 + int type)
545 550 {
546 551 const char *name;
547 552 size_t size, name_len;
548 553 void *value = NULL;
549 554 int err = 0;
555 + int flags;
550 556
551 557 if (S_ISLNK(inode->i_mode))
552 558 return -EOPNOTSUPP;
···
557 561 if (acl) {
558 562 umode_t mode = inode->i_mode;
559 563
560 - err = posix_acl_equiv_mode(acl, &mode);
561 - if (err < 0)
562 - return err;
564 + err = posix_acl_update_mode(mnt_userns, inode, &mode,
565 + &acl);
566 + if (err)
567 + goto out;
563 568
564 569 if (inode->i_mode != mode) {
565 570 inode->i_mode = mode;
566 571 mark_inode_dirty(inode);
567 - }
568 -
569 - if (!err) {
570 - /*
571 - * ACL can be exactly represented in the
572 - * traditional file mode permission bits.
573 - */
574 - acl = NULL;
575 572 }
576 573 }
577 574 name = XATTR_NAME_POSIX_ACL_ACCESS;
···
583 594 }
584 595
585 596 if (!acl) {
597 + /* Remove xattr if it can be presented via mode. */
586 598 size = 0;
587 599 value = NULL;
600 + flags = XATTR_REPLACE;
588 601 } else {
589 602 size = posix_acl_xattr_size(acl->a_count);
590 603 value = kmalloc(size, GFP_NOFS);
591 604 if (!value)
592 605 return -ENOMEM;
593 -
594 606 err = posix_acl_to_xattr(mnt_userns, acl, value, size);
595 607 if (err < 0)
596 608 goto out;
609 + flags = 0;
597 610 }
598 611
599 - err = ntfs_set_ea(inode, name, name_len, value, size, 0, locked);
612 + err = ntfs_set_ea(inode, name, name_len, value, size, flags);
613 + if (err == -ENODATA && !size)
614 + err = 0; /* Removing non existed xattr. */
600 615 if (!err)
601 616 set_cached_acl(inode, type, acl);
602 617
···
616 623 int ntfs_set_acl(struct user_namespace *mnt_userns, struct inode *inode,
617 624 struct posix_acl *acl, int type)
618 625 {
619 - return ntfs_set_acl_ex(mnt_userns, inode, acl, type, 0);
620 - }
621 -
622 - static int ntfs_xattr_get_acl(struct user_namespace *mnt_userns,
623 - struct inode *inode, int type, void *buffer,
624 - size_t size)
625 - {
626 - struct posix_acl *acl;
627 - int err;
628 -
629 - if (!(inode->i_sb->s_flags & SB_POSIXACL)) {
630 - ntfs_inode_warn(inode, "add mount option \"acl\" to use acl");
631 - return -EOPNOTSUPP;
632 - }
633 -
634 - acl = ntfs_get_acl(inode, type, false);
635 - if (IS_ERR(acl))
636 - return PTR_ERR(acl);
637 -
638 - if (!acl)
639 - return -ENODATA;
640 -
641 - err = posix_acl_to_xattr(mnt_userns, acl, buffer, size);
642 - ntfs_posix_acl_release(acl);
643 -
644 - return err;
645 - }
646 -
647 - static int ntfs_xattr_set_acl(struct user_namespace *mnt_userns,
648 - struct inode *inode, int type, const void *value,
649 - size_t size)
650 - {
651 - struct posix_acl *acl;
652 - int err;
653 -
654 - if (!(inode->i_sb->s_flags & SB_POSIXACL)) {
655 - ntfs_inode_warn(inode, "add mount option \"acl\" to use acl");
656 - return -EOPNOTSUPP;
657 - }
658 -
659 - if (!inode_owner_or_capable(mnt_userns, inode))
660 - return -EPERM;
661 -
662 - if (!value) {
663 - acl = NULL;
664 - } else {
665 - acl = posix_acl_from_xattr(mnt_userns, value, size);
666 - if (IS_ERR(acl))
667 - return PTR_ERR(acl);
668 -
669 - if (acl) {
670 - err = posix_acl_valid(mnt_userns, acl);
671 - if (err)
672 - goto release_and_out;
673 - }
674 - }
675 -
676 - err = ntfs_set_acl(mnt_userns, inode, acl, type);
677 -
678 - release_and_out:
679 - ntfs_posix_acl_release(acl);
680 - return err;
626 + return ntfs_set_acl_ex(mnt_userns, inode, acl, type);
681 627 }
682 628
683 629 /*
···
630 698 struct posix_acl *default_acl, *acl;
631 699 int err;
632 700
633 - /*
634 - * TODO: Refactoring lock.
635 - * ni_lock(dir) ... -> posix_acl_create(dir,...) -> ntfs_get_acl -> ni_lock(dir)
636 - */
637 - inode->i_default_acl = NULL;
701 + err = posix_acl_create(dir, &inode->i_mode, &default_acl, &acl);
702 + if (err)
703 + return err;
638 704
639 - default_acl = ntfs_get_acl_ex(mnt_userns, dir, ACL_TYPE_DEFAULT, 1);
640 -
641 - if (!default_acl || default_acl == ERR_PTR(-EOPNOTSUPP)) {
642 - inode->i_mode &= ~current_umask();
643 - err = 0;
644 - goto out;
645 - }
646 -
647 - if (IS_ERR(default_acl)) {
648 - err = PTR_ERR(default_acl);
649 - goto out;
650 - }
651 -
652 - acl = default_acl;
653 - err = __posix_acl_create(&acl, GFP_NOFS, &inode->i_mode);
654 - if (err < 0)
655 - goto out1;
656 - if (!err) {
657 - posix_acl_release(acl);
658 - acl = NULL;
659 - }
660 -
661 - if (!S_ISDIR(inode->i_mode)) {
662 - posix_acl_release(default_acl);
663 - default_acl = NULL;
664 - }
665 -
666 - if (default_acl)
705 + if (default_acl) {
667 706 err = ntfs_set_acl_ex(mnt_userns, inode, default_acl,
668 - ACL_TYPE_DEFAULT, 1);
707 + ACL_TYPE_DEFAULT);
708 + posix_acl_release(default_acl);
709 + } else {
710 + inode->i_default_acl = NULL;
711 + }
669 712
670 713 if (!acl)
671 714 inode->i_acl = NULL;
672 - else if (!err)
673 - err = ntfs_set_acl_ex(mnt_userns, inode, acl, ACL_TYPE_ACCESS,
674 - 1);
715 + else {
716 + if (!err)
717 + err = ntfs_set_acl_ex(mnt_userns, inode, acl,
718 + ACL_TYPE_ACCESS);
719 + posix_acl_release(acl);
720 + }
675 721
676 - posix_acl_release(acl);
677 - out1:
678 - posix_acl_release(default_acl);
679 -
680 - out:
681 722 return err;
682 723 }
683 724 #endif
···
677 772 int ntfs_permission(struct user_namespace *mnt_userns, struct inode *inode,
678 773 int mask)
679 774 {
680 - if (ntfs_sb(inode->i_sb)->options.no_acs_rules) {
775 + if (ntfs_sb(inode->i_sb)->options->noacsrules) {
681 776 /* "No access rules" mode - Allow all changes. */
682 777 return 0;
683 778 }
···
785 880 goto out;
786 881 }
787 882
788 - #ifdef CONFIG_NTFS3_FS_POSIX_ACL
789 - if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
790 - !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
791 - sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
792 - (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
793 - !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
794 - sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
795 - /* TODO: init_user_ns? */
796 - err = ntfs_xattr_get_acl(
797 - &init_user_ns, inode,
798 - name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1
799 - ? ACL_TYPE_ACCESS
800 - : ACL_TYPE_DEFAULT,
801 - buffer, size);
802 - goto out;
803 - }
804 - #endif
805 883 /* Deal with NTFS extended attribute. */
806 884 err = ntfs_get_ea(inode, name, name_len, buffer, size, NULL);
807 885
···
897 1009 goto out;
898 1010 }
899 1011
900 - #ifdef CONFIG_NTFS3_FS_POSIX_ACL
901 - if ((name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1 &&
902 - !memcmp(name, XATTR_NAME_POSIX_ACL_ACCESS,
903 - sizeof(XATTR_NAME_POSIX_ACL_ACCESS))) ||
904 - (name_len == sizeof(XATTR_NAME_POSIX_ACL_DEFAULT) - 1 &&
905 - !memcmp(name, XATTR_NAME_POSIX_ACL_DEFAULT,
906 - sizeof(XATTR_NAME_POSIX_ACL_DEFAULT)))) {
907 - err = ntfs_xattr_set_acl(
908 - mnt_userns, inode,
909 - name_len == sizeof(XATTR_NAME_POSIX_ACL_ACCESS) - 1
910 - ? ACL_TYPE_ACCESS
911 - : ACL_TYPE_DEFAULT,
912 - value, size);
913 - goto out;
914 - }
915 - #endif
916 1012 /* Deal with NTFS extended attribute. */
917 - err = ntfs_set_ea(inode, name, name_len, value, size, flags, 0);
1013 + err = ntfs_set_ea(inode, name, name_len, value, size, flags);
918 1014
919 1015 out:
920 1016 return err;
···
914 1042 int err;
915 1043 __le32 value;
916 1044
1045 + /* TODO: refactor this, so we don't lock 4 times in ntfs_set_ea */
917 1046 value = cpu_to_le32(i_uid_read(inode));
918 1047 err = ntfs_set_ea(inode, "$LXUID", sizeof("$LXUID") - 1, &value,
919 - sizeof(value), 0, 0);
1048 + sizeof(value), 0);
920 1049 if (err)
921 1050 goto out;
922 1051
923 1052 value = cpu_to_le32(i_gid_read(inode));
924 1053 err = ntfs_set_ea(inode, "$LXGID", sizeof("$LXGID") - 1, &value,
925 - sizeof(value), 0, 0);
1054 + sizeof(value), 0);
926 1055 if (err)
927 1056 goto out;
928 1057
929 1058 value = cpu_to_le32(inode->i_mode);
930 1059 err = ntfs_set_ea(inode, "$LXMOD", sizeof("$LXMOD") - 1, &value,
931 - sizeof(value), 0, 0);
1060 + sizeof(value), 0);
932 1061 if (err)
933 1062 goto out;
934 1063
935 1064 if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
936 1065 value = cpu_to_le32(inode->i_rdev);
937 1066 err = ntfs_set_ea(inode, "$LXDEV", sizeof("$LXDEV") - 1, &value,
938 - sizeof(value), 0, 0);
1067 + sizeof(value), 0);
939 1068 if (err)
940 1069 goto out;
941 1070 }
+12 -34
fs/ocfs2/alloc.c
···
7045 7045 int ocfs2_convert_inline_data_to_extents(struct inode *inode,
7046 7046 struct buffer_head *di_bh)
7047 7047 {
7048 - int ret, i, has_data, num_pages = 0;
7048 + int ret, has_data, num_pages = 0;
7049 7049 int need_free = 0;
7050 7050 u32 bit_off, num;
7051 7051 handle_t *handle;
···
7054 7054 struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
7055 7055 struct ocfs2_dinode *di = (struct ocfs2_dinode *)di_bh->b_data;
7056 7056 struct ocfs2_alloc_context *data_ac = NULL;
7057 - struct page **pages = NULL;
7058 - loff_t end = osb->s_clustersize;
7057 + struct page *page = NULL;
7059 7058 struct ocfs2_extent_tree et;
7060 7059 int did_quota = 0;
7061 7060
7062 7061 has_data = i_size_read(inode) ? 1 : 0;
7063 7062
7064 7063 if (has_data) {
7065 - pages = kcalloc(ocfs2_pages_per_cluster(osb->sb),
7066 - sizeof(struct page *), GFP_NOFS);
7067 - if (pages == NULL) {
7068 - ret = -ENOMEM;
7069 - mlog_errno(ret);
7070 - return ret;
7071 - }
7072 -
7073 7064 ret = ocfs2_reserve_clusters(osb, 1, &data_ac);
7074 7065 if (ret) {
7075 7066 mlog_errno(ret);
7076 - goto free_pages;
7067 + goto out;
7077 7068 }
7078 7069 }
7079 7070
···
7084 7093 }
7085 7094
7086 7095 if (has_data) {
7087 - unsigned int page_end;
7096 + unsigned int page_end = min_t(unsigned, PAGE_SIZE,
7097 + osb->s_clustersize);
7088 7098 u64 phys;
7089 7099
7090 7100 ret = dquot_alloc_space_nodirty(inode,
···
7109 7117 */
7110 7118 block = phys = ocfs2_clusters_to_blocks(inode->i_sb, bit_off);
7111 7119
7112 - /*
7113 - * Non sparse file systems zero on extend, so no need
7114 - * to do that now.
7115 - */
7116 - if (!ocfs2_sparse_alloc(osb) &&
7117 - PAGE_SIZE < osb->s_clustersize)
7118 - end = PAGE_SIZE;
7119 -
7120 - ret = ocfs2_grab_eof_pages(inode, 0, end, pages, &num_pages);
7120 + ret = ocfs2_grab_eof_pages(inode, 0, page_end, &page,
7121 + &num_pages);
7121 7122 if (ret) {
7122 7123 mlog_errno(ret);
7123 7124 need_free = 1;
···
7121 7136 * This should populate the 1st page for us and mark
7122 7137 * it up to date.
7123 7138 */
7124 - ret = ocfs2_read_inline_data(inode, pages[0], di_bh);
7139 + ret = ocfs2_read_inline_data(inode, page, di_bh);
7125 7140 if (ret) {
7126 7141 mlog_errno(ret);
7127 7142 need_free = 1;
7128 7143 goto out_unlock;
7129 7144 }
7130 7145
7131 - page_end = PAGE_SIZE;
7132 - if (PAGE_SIZE > osb->s_clustersize)
7133 - page_end = osb->s_clustersize;
7134 -
7135 - for (i = 0; i < num_pages; i++)
7136 - ocfs2_map_and_dirty_page(inode, handle, 0, page_end,
7137 - pages[i], i > 0, &phys);
7146 + ocfs2_map_and_dirty_page(inode, handle, 0, page_end, page, 0,
7147 + &phys);
7138 7148 }
7139 7149
7140 7150 spin_lock(&oi->ip_lock);
···
7160 7180 }
7161 7181
7162 7182 out_unlock:
7163 - if (pages)
7164 - ocfs2_unlock_and_free_pages(pages, num_pages);
7183 + if (page)
7184 + ocfs2_unlock_and_free_pages(&page, num_pages);
7165 7185
7166 7186 out_commit:
7167 7187 if (ret < 0 && did_quota)
···
7185 7205 out:
7186 7206 if (data_ac)
7187 7207 ocfs2_free_alloc_context(data_ac);
7188 - free_pages:
7189 - kfree(pages);
7190 7208 return ret;
7191 7209 }
7192 7210
+10 -4
fs/ocfs2/super.c
···
2167 2167 }
2168 2168
2169 2169 if (ocfs2_clusterinfo_valid(osb)) {
2170 + /*
2171 + * ci_stack and ci_cluster in ocfs2_cluster_info may not be null
2172 + * terminated, so make sure no overflow happens here by using
2173 + * memcpy. Destination strings will always be null terminated
2174 + * because osb is allocated using kzalloc.
2175 + */
2170 2176 osb->osb_stackflags =
2171 2177 OCFS2_RAW_SB(di)->s_cluster_info.ci_stackflags;
2172 - strlcpy(osb->osb_cluster_stack,
2178 + memcpy(osb->osb_cluster_stack,
2173 2179 OCFS2_RAW_SB(di)->s_cluster_info.ci_stack,
2174 - OCFS2_STACK_LABEL_LEN + 1);
2180 + OCFS2_STACK_LABEL_LEN);
2175 2181 if (strlen(osb->osb_cluster_stack) != OCFS2_STACK_LABEL_LEN) {
2176 2182 mlog(ML_ERROR,
2177 2183 "couldn't mount because of an invalid "
···
2186 2180 status = -EINVAL;
2187 2181 goto bail;
2188 2182 }
2189 - strlcpy(osb->osb_cluster_name,
2183 + memcpy(osb->osb_cluster_name,
2190 2184 OCFS2_RAW_SB(di)->s_cluster_info.ci_cluster,
2191 - OCFS2_CLUSTER_NAME_LEN + 1);
2185 + OCFS2_CLUSTER_NAME_LEN);
2192 2186 } else {
2193 2187 /* The empty string is identical with classic tools that
2194 2188 * don't know about s_cluster_info. */
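The ocfs2 hunk above replaces strlcpy() with a width-bounded memcpy() because the on-disk ci_stack/ci_cluster fields are fixed-width and may not be NUL-terminated, while strlcpy() keeps reading the source until it finds a NUL. The sketch below (illustration only, not kernel code; LABEL_LEN stands in for OCFS2_STACK_LABEL_LEN and the zeroed destination mirrors the kzalloc'd osb) shows the safe pattern:

```c
#include <string.h>

#define LABEL_LEN 4	/* stands in for OCFS2_STACK_LABEL_LEN (illustration) */

/*
 * Copy a fixed-width, possibly unterminated on-disk label into a
 * zero-initialized buffer of LABEL_LEN + 1 bytes. memcpy() is bounded
 * by the field width, so it never reads past the source; the zeroed
 * tail of dst supplies the terminator. strlcpy(dst, src, LABEL_LEN + 1)
 * would instead scan src for a NUL that may not exist.
 */
static inline void copy_label(char dst[LABEL_LEN + 1], const char src[LABEL_LEN])
{
	memset(dst, 0, LABEL_LEN + 1);	/* mirrors the kzalloc'd destination */
	memcpy(dst, src, LABEL_LEN);	/* bounded by the field width */
}
```

The subsequent strlen() check in the hunk still works, because the copied label is terminated by the zeroed byte after the field.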
+9 -3
fs/userfaultfd.c
···
1827 1827 if (mode_wp && mode_dontwake)
1828 1828 return -EINVAL;
1829 1829
1830 - ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
1831 - uffdio_wp.range.len, mode_wp,
1832 - &ctx->mmap_changing);
1830 + if (mmget_not_zero(ctx->mm)) {
1831 + ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
1832 + uffdio_wp.range.len, mode_wp,
1833 + &ctx->mmap_changing);
1834 + mmput(ctx->mm);
1835 + } else {
1836 + return -ESRCH;
1837 + }
1838 +
1833 1839 if (ret)
1834 1840 return ret;
1835 1841
+4
include/linux/cpuhotplug.h
··· 72 72 CPUHP_SLUB_DEAD, 73 73 CPUHP_DEBUG_OBJ_DEAD, 74 74 CPUHP_MM_WRITEBACK_DEAD, 75 + /* Must be after CPUHP_MM_VMSTAT_DEAD */ 76 + CPUHP_MM_DEMOTION_DEAD, 75 77 CPUHP_MM_VMSTAT_DEAD, 76 78 CPUHP_SOFTIRQ_DEAD, 77 79 CPUHP_NET_MVNETA_DEAD, ··· 242 240 CPUHP_AP_BASE_CACHEINFO_ONLINE, 243 241 CPUHP_AP_ONLINE_DYN, 244 242 CPUHP_AP_ONLINE_DYN_END = CPUHP_AP_ONLINE_DYN + 30, 243 + /* Must be after CPUHP_AP_ONLINE_DYN for node_states[N_CPU] update */ 244 + CPUHP_AP_MM_DEMOTION_ONLINE, 245 245 CPUHP_AP_X86_HPET_ONLINE, 246 246 CPUHP_AP_X86_KVM_CLK_ONLINE, 247 247 CPUHP_AP_DTPM_CPU_ONLINE,
+1 -1
include/linux/elfcore.h
··· 109 109 #endif 110 110 } 111 111 112 - #if defined(CONFIG_UM) || defined(CONFIG_IA64) 112 + #if (defined(CONFIG_UML) && defined(CONFIG_X86_32)) || defined(CONFIG_IA64) 113 113 /* 114 114 * These functions parameterize elf_core_dump in fs/binfmt_elf.c to write out 115 115 * extra segments containing the gate DSO contents. Dumping its
+1
include/linux/genhd.h
··· 149 149 unsigned long state; 150 150 #define GD_NEED_PART_SCAN 0 151 151 #define GD_READ_ONLY 1 152 + #define GD_DEAD 2 152 153 153 154 struct mutex open_mutex; /* open/close mutex */ 154 155 unsigned open_partitions; /* number of open partitions */
+4 -1
include/linux/memory.h
··· 160 160 #define register_hotmemory_notifier(nb) register_memory_notifier(nb) 161 161 #define unregister_hotmemory_notifier(nb) unregister_memory_notifier(nb) 162 162 #else 163 - #define hotplug_memory_notifier(fn, pri) ({ 0; }) 163 + static inline int hotplug_memory_notifier(notifier_fn_t fn, int pri) 164 + { 165 + return 0; 166 + } 164 167 /* These aren't inline functions due to a GCC bug. */ 165 168 #define register_hotmemory_notifier(nb) ({ (void)(nb); 0; }) 166 169 #define unregister_hotmemory_notifier(nb) ({ (void)(nb); })
-1
include/linux/mlx5/driver.h
··· 1138 1138 int mlx5_cmd_destroy_vport_lag(struct mlx5_core_dev *dev); 1139 1139 bool mlx5_lag_is_roce(struct mlx5_core_dev *dev); 1140 1140 bool mlx5_lag_is_sriov(struct mlx5_core_dev *dev); 1141 - bool mlx5_lag_is_multipath(struct mlx5_core_dev *dev); 1142 1141 bool mlx5_lag_is_active(struct mlx5_core_dev *dev); 1143 1142 bool mlx5_lag_is_master(struct mlx5_core_dev *dev); 1144 1143 bool mlx5_lag_is_shared_fdb(struct mlx5_core_dev *dev);
+1 -1
include/linux/secretmem.h
··· 23 23 mapping = (struct address_space *) 24 24 ((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS); 25 25 26 - if (mapping != page->mapping) 26 + if (!mapping || mapping != page->mapping) 27 27 return false; 28 28 29 29 return mapping->a_ops == &secretmem_aops;
+3
include/linux/spi/spi.h
··· 531 531 /* I/O mutex */ 532 532 struct mutex io_mutex; 533 533 534 + /* Used to avoid adding the same CS twice */ 535 + struct mutex add_lock; 536 + 534 537 /* lock and mutex for SPI bus locking */ 535 538 spinlock_t bus_lock_spinlock; 536 539 struct mutex bus_lock_mutex;
+9 -40
include/linux/trace_recursion.h
··· 16 16 * When function tracing occurs, the following steps are made: 17 17 * If arch does not support a ftrace feature: 18 18 * call internal function (uses INTERNAL bits) which calls... 19 - * If callback is registered to the "global" list, the list 20 - * function is called and recursion checks the GLOBAL bits. 21 - * then this function calls... 22 19 * The function callback, which can use the FTRACE bits to 23 20 * check for recursion. 24 - * 25 - * Now if the arch does not support a feature, and it calls 26 - * the global list function which calls the ftrace callback 27 - * all three of these steps will do a recursion protection. 28 - * There's no reason to do one if the previous caller already 29 - * did. The recursion that we are protecting against will 30 - * go through the same steps again. 31 - * 32 - * To prevent the multiple recursion checks, if a recursion 33 - * bit is set that is higher than the MAX bit of the current 34 - * check, then we know that the check was made by the previous 35 - * caller, and we can skip the current check. 36 21 */ 37 22 enum { 38 23 /* Function recursion bits */ ··· 25 40 TRACE_FTRACE_NMI_BIT, 26 41 TRACE_FTRACE_IRQ_BIT, 27 42 TRACE_FTRACE_SIRQ_BIT, 43 + TRACE_FTRACE_TRANSITION_BIT, 28 44 29 - /* INTERNAL_BITs must be greater than FTRACE_BITs */ 45 + /* Internal use recursion bits */ 30 46 TRACE_INTERNAL_BIT, 31 47 TRACE_INTERNAL_NMI_BIT, 32 48 TRACE_INTERNAL_IRQ_BIT, 33 49 TRACE_INTERNAL_SIRQ_BIT, 50 + TRACE_INTERNAL_TRANSITION_BIT, 34 51 35 52 TRACE_BRANCH_BIT, 36 53 /* ··· 73 86 */ 74 87 TRACE_GRAPH_NOTRACE_BIT, 75 88 76 - /* 77 - * When transitioning between context, the preempt_count() may 78 - * not be correct. Allow for a single recursion to cover this case. 79 - */ 80 - TRACE_TRANSITION_BIT, 81 - 82 89 /* Used to prevent recursion recording from recursing. 
*/ 83 90 TRACE_RECORD_RECURSION_BIT, 84 91 }; ··· 94 113 #define TRACE_CONTEXT_BITS 4 95 114 96 115 #define TRACE_FTRACE_START TRACE_FTRACE_BIT 97 - #define TRACE_FTRACE_MAX ((1 << (TRACE_FTRACE_START + TRACE_CONTEXT_BITS)) - 1) 98 116 99 117 #define TRACE_LIST_START TRACE_INTERNAL_BIT 100 - #define TRACE_LIST_MAX ((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1) 101 118 102 - #define TRACE_CONTEXT_MASK TRACE_LIST_MAX 119 + #define TRACE_CONTEXT_MASK ((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1) 103 120 104 121 /* 105 122 * Used for setting context ··· 111 132 TRACE_CTX_IRQ, 112 133 TRACE_CTX_SOFTIRQ, 113 134 TRACE_CTX_NORMAL, 135 + TRACE_CTX_TRANSITION, 114 136 }; 115 137 116 138 static __always_inline int trace_get_context_bit(void) ··· 140 160 #endif 141 161 142 162 static __always_inline int trace_test_and_set_recursion(unsigned long ip, unsigned long pip, 143 - int start, int max) 163 + int start) 144 164 { 145 165 unsigned int val = READ_ONCE(current->trace_recursion); 146 166 int bit; 147 - 148 - /* A previous recursion check was made */ 149 - if ((val & TRACE_CONTEXT_MASK) > max) 150 - return 0; 151 167 152 168 bit = trace_get_context_bit() + start; 153 169 if (unlikely(val & (1 << bit))) { ··· 151 175 * It could be that preempt_count has not been updated during 152 176 * a switch between contexts. Allow for a single recursion. 
153 177 */ 154 - bit = TRACE_TRANSITION_BIT; 178 + bit = TRACE_CTX_TRANSITION + start; 155 179 if (val & (1 << bit)) { 156 180 do_ftrace_record_recursion(ip, pip); 157 181 return -1; 158 182 } 159 - } else { 160 - /* Normal check passed, clear the transition to allow it again */ 161 - val &= ~(1 << TRACE_TRANSITION_BIT); 162 183 } 163 184 164 185 val |= 1 << bit; 165 186 current->trace_recursion = val; 166 187 barrier(); 167 188 168 - return bit + 1; 189 + return bit; 169 190 } 170 191 171 192 static __always_inline void trace_clear_recursion(int bit) 172 193 { 173 - if (!bit) 174 - return; 175 - 176 194 barrier(); 177 - bit--; 178 195 trace_recursion_clear(bit); 179 196 } 180 197 ··· 183 214 static __always_inline int ftrace_test_recursion_trylock(unsigned long ip, 184 215 unsigned long parent_ip) 185 216 { 186 - return trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START, TRACE_FTRACE_MAX); 217 + return trace_test_and_set_recursion(ip, parent_ip, TRACE_FTRACE_START); 187 218 } 188 219 189 220 /**
+2
include/linux/user_namespace.h
··· 127 127 128 128 long inc_rlimit_ucounts(struct ucounts *ucounts, enum ucount_type type, long v); 129 129 bool dec_rlimit_ucounts(struct ucounts *ucounts, enum ucount_type type, long v); 130 + long inc_rlimit_get_ucounts(struct ucounts *ucounts, enum ucount_type type); 131 + void dec_rlimit_put_ucounts(struct ucounts *ucounts, enum ucount_type type); 130 132 bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long max); 131 133 132 134 static inline void set_rlimit_ucount_max(struct user_namespace *ns,
+1 -1
include/net/mctp.h
··· 54 54 struct sock sk; 55 55 56 56 /* bind() params */ 57 - int bind_net; 57 + unsigned int bind_net; 58 58 mctp_eid_t bind_addr; 59 59 __u8 bind_type; 60 60
+3 -3
include/net/sctp/sm.h
··· 384 384 * Verification Tag value does not match the receiver's own 385 385 * tag value, the receiver shall silently discard the packet... 386 386 */ 387 - if (ntohl(chunk->sctp_hdr->vtag) == asoc->c.my_vtag) 388 - return 1; 387 + if (ntohl(chunk->sctp_hdr->vtag) != asoc->c.my_vtag) 388 + return 0; 389 389 390 390 chunk->transport->encap_port = SCTP_INPUT_CB(chunk->skb)->encap_port; 391 - return 0; 391 + return 1; 392 392 } 393 393 394 394 /* Check VTAG of the packet matches the sender's own tag and the T bit is
+3 -2
include/net/tcp.h
··· 1579 1579 u8 keylen; 1580 1580 u8 family; /* AF_INET or AF_INET6 */ 1581 1581 u8 prefixlen; 1582 + u8 flags; 1582 1583 union tcp_md5_addr addr; 1583 1584 int l3index; /* set if key added with L3 scope */ 1584 1585 u8 key[TCP_MD5SIG_MAXKEYLEN]; ··· 1625 1624 int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key, 1626 1625 const struct sock *sk, const struct sk_buff *skb); 1627 1626 int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, 1628 - int family, u8 prefixlen, int l3index, 1627 + int family, u8 prefixlen, int l3index, u8 flags, 1629 1628 const u8 *newkey, u8 newkeylen, gfp_t gfp); 1630 1629 int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr, 1631 - int family, u8 prefixlen, int l3index); 1630 + int family, u8 prefixlen, int l3index, u8 flags); 1632 1631 struct tcp_md5sig_key *tcp_v4_md5_lookup(const struct sock *sk, 1633 1632 const struct sock *addr_sk); 1634 1633
+9 -10
include/trace/events/kyber.h
··· 13 13 14 14 TRACE_EVENT(kyber_latency, 15 15 16 - TP_PROTO(struct request_queue *q, const char *domain, const char *type, 16 + TP_PROTO(dev_t dev, const char *domain, const char *type, 17 17 unsigned int percentile, unsigned int numerator, 18 18 unsigned int denominator, unsigned int samples), 19 19 20 - TP_ARGS(q, domain, type, percentile, numerator, denominator, samples), 20 + TP_ARGS(dev, domain, type, percentile, numerator, denominator, samples), 21 21 22 22 TP_STRUCT__entry( 23 23 __field( dev_t, dev ) ··· 30 30 ), 31 31 32 32 TP_fast_assign( 33 - __entry->dev = disk_devt(q->disk); 33 + __entry->dev = dev; 34 34 strlcpy(__entry->domain, domain, sizeof(__entry->domain)); 35 35 strlcpy(__entry->type, type, sizeof(__entry->type)); 36 36 __entry->percentile = percentile; ··· 47 47 48 48 TRACE_EVENT(kyber_adjust, 49 49 50 - TP_PROTO(struct request_queue *q, const char *domain, 51 - unsigned int depth), 50 + TP_PROTO(dev_t dev, const char *domain, unsigned int depth), 52 51 53 - TP_ARGS(q, domain, depth), 52 + TP_ARGS(dev, domain, depth), 54 53 55 54 TP_STRUCT__entry( 56 55 __field( dev_t, dev ) ··· 58 59 ), 59 60 60 61 TP_fast_assign( 61 - __entry->dev = disk_devt(q->disk); 62 + __entry->dev = dev; 62 63 strlcpy(__entry->domain, domain, sizeof(__entry->domain)); 63 64 __entry->depth = depth; 64 65 ), ··· 70 71 71 72 TRACE_EVENT(kyber_throttled, 72 73 73 - TP_PROTO(struct request_queue *q, const char *domain), 74 + TP_PROTO(dev_t dev, const char *domain), 74 75 75 - TP_ARGS(q, domain), 76 + TP_ARGS(dev, domain), 76 77 77 78 TP_STRUCT__entry( 78 79 __field( dev_t, dev ) ··· 80 81 ), 81 82 82 83 TP_fast_assign( 83 - __entry->dev = disk_devt(q->disk); 84 + __entry->dev = dev; 84 85 strlcpy(__entry->domain, domain, sizeof(__entry->domain)); 85 86 ), 86 87
+5 -2
include/uapi/linux/mctp.h
··· 10 10 #define __UAPI_MCTP_H 11 11 12 12 #include <linux/types.h> 13 + #include <linux/socket.h> 13 14 14 15 typedef __u8 mctp_eid_t; 15 16 ··· 19 18 }; 20 19 21 20 struct sockaddr_mctp { 22 - unsigned short int smctp_family; 23 - int smctp_network; 21 + __kernel_sa_family_t smctp_family; 22 + __u16 __smctp_pad0; 23 + unsigned int smctp_network; 24 24 struct mctp_addr smctp_addr; 25 25 __u8 smctp_type; 26 26 __u8 smctp_tag; 27 + __u8 __smctp_pad1; 27 28 }; 28 29 29 30 #define MCTP_NET_ANY 0x0
+2 -4
include/uapi/misc/habanalabs.h
··· 917 917 #define HL_WAIT_CS_STATUS_BUSY 1 918 918 #define HL_WAIT_CS_STATUS_TIMEDOUT 2 919 919 #define HL_WAIT_CS_STATUS_ABORTED 3 920 - #define HL_WAIT_CS_STATUS_INTERRUPTED 4 921 920 922 921 #define HL_WAIT_CS_STATUS_FLAG_GONE 0x1 923 922 #define HL_WAIT_CS_STATUS_FLAG_TIMESTAMP_VLD 0x2 ··· 1285 1286 * EIO - The CS was aborted (usually because the device was reset) 1286 1287 * ENODEV - The device wants to do hard-reset (so user need to close FD) 1287 1288 * 1288 - * The driver also returns a custom define inside the IOCTL which can be: 1289 + * The driver also returns a custom define in case the IOCTL call returned 0. 1290 + * The define can be one of the following: 1289 1291 * 1290 1292 * HL_WAIT_CS_STATUS_COMPLETED - The CS has been completed successfully (0) 1291 1293 * HL_WAIT_CS_STATUS_BUSY - The CS is still executing (0) ··· 1294 1294 * (ETIMEDOUT) 1295 1295 * HL_WAIT_CS_STATUS_ABORTED - The CS was aborted, usually because the 1296 1296 * device was reset (EIO) 1297 - * HL_WAIT_CS_STATUS_INTERRUPTED - Waiting for the CS was interrupted (EINTR) 1298 - * 1299 1297 */ 1300 1298 1301 1299 #define HL_IOCTL_WAIT_CS \
+1
init/main.c
··· 382 382 ret = xbc_snprint_cmdline(new_cmdline, len + 1, root); 383 383 if (ret < 0 || ret > len) { 384 384 pr_err("Failed to print extra kernel cmdline.\n"); 385 + memblock_free_ptr(new_cmdline, len + 1); 385 386 return NULL; 386 387 } 387 388
+1 -1
kernel/auditsc.c
··· 657 657 result = audit_comparator(audit_loginuid_set(tsk), f->op, f->val); 658 658 break; 659 659 case AUDIT_SADDR_FAM: 660 - if (ctx->sockaddr) 660 + if (ctx && ctx->sockaddr) 661 661 result = audit_comparator(ctx->sockaddr->ss_family, 662 662 f->op, f->val); 663 663 break;
+4 -5
kernel/cred.c
··· 225 225 #ifdef CONFIG_DEBUG_CREDENTIALS 226 226 new->magic = CRED_MAGIC; 227 227 #endif 228 - new->ucounts = get_ucounts(&init_ucounts); 229 - 230 228 if (security_cred_alloc_blank(new, GFP_KERNEL_ACCOUNT) < 0) 231 229 goto error; 232 230 ··· 499 501 inc_rlimit_ucounts(new->ucounts, UCOUNT_RLIMIT_NPROC, 1); 500 502 rcu_assign_pointer(task->real_cred, new); 501 503 rcu_assign_pointer(task->cred, new); 502 - if (new->user != old->user) 504 + if (new->user != old->user || new->user_ns != old->user_ns) 503 505 dec_rlimit_ucounts(old->ucounts, UCOUNT_RLIMIT_NPROC, 1); 504 506 alter_cred_subscribers(old, -2); 505 507 ··· 667 669 { 668 670 struct task_struct *task = current; 669 671 const struct cred *old = task->real_cred; 670 - struct ucounts *old_ucounts = new->ucounts; 672 + struct ucounts *new_ucounts, *old_ucounts = new->ucounts; 671 673 672 674 if (new->user == old->user && new->user_ns == old->user_ns) 673 675 return 0; ··· 679 681 if (old_ucounts && old_ucounts->ns == new->user_ns && uid_eq(old_ucounts->uid, new->euid)) 680 682 return 0; 681 683 682 - if (!(new->ucounts = alloc_ucounts(new->user_ns, new->euid))) 684 + if (!(new_ucounts = alloc_ucounts(new->user_ns, new->euid))) 683 685 return -EAGAIN; 684 686 687 + new->ucounts = new_ucounts; 685 688 if (old_ucounts) 686 689 put_ucounts(old_ucounts); 687 690
+20 -16
kernel/dma/debug.c
··· 552 552 * Wrapper function for adding an entry to the hash. 553 553 * This function takes care of locking itself. 554 554 */ 555 - static void add_dma_entry(struct dma_debug_entry *entry) 555 + static void add_dma_entry(struct dma_debug_entry *entry, unsigned long attrs) 556 556 { 557 557 struct hash_bucket *bucket; 558 558 unsigned long flags; ··· 566 566 if (rc == -ENOMEM) { 567 567 pr_err("cacheline tracking ENOMEM, dma-debug disabled\n"); 568 568 global_disable = true; 569 - } else if (rc == -EEXIST) { 569 + } else if (rc == -EEXIST && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) { 570 570 err_printk(entry->dev, entry, 571 571 "cacheline tracking EEXIST, overlapping mappings aren't supported\n"); 572 572 } ··· 1191 1191 EXPORT_SYMBOL(debug_dma_map_single); 1192 1192 1193 1193 void debug_dma_map_page(struct device *dev, struct page *page, size_t offset, 1194 - size_t size, int direction, dma_addr_t dma_addr) 1194 + size_t size, int direction, dma_addr_t dma_addr, 1195 + unsigned long attrs) 1195 1196 { 1196 1197 struct dma_debug_entry *entry; 1197 1198 ··· 1223 1222 check_for_illegal_area(dev, addr, size); 1224 1223 } 1225 1224 1226 - add_dma_entry(entry); 1225 + add_dma_entry(entry, attrs); 1227 1226 } 1228 1227 1229 1228 void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr) ··· 1281 1280 } 1282 1281 1283 1282 void debug_dma_map_sg(struct device *dev, struct scatterlist *sg, 1284 - int nents, int mapped_ents, int direction) 1283 + int nents, int mapped_ents, int direction, 1284 + unsigned long attrs) 1285 1285 { 1286 1286 struct dma_debug_entry *entry; 1287 1287 struct scatterlist *s; ··· 1290 1288 1291 1289 if (unlikely(dma_debug_disabled())) 1292 1290 return; 1291 + 1292 + for_each_sg(sg, s, nents, i) { 1293 + check_for_stack(dev, sg_page(s), s->offset); 1294 + if (!PageHighMem(sg_page(s))) 1295 + check_for_illegal_area(dev, sg_virt(s), s->length); 1296 + } 1293 1297 1294 1298 for_each_sg(sg, s, mapped_ents, i) { 1295 1299 entry = dma_entry_alloc(); 
··· 1312 1304 entry->sg_call_ents = nents; 1313 1305 entry->sg_mapped_ents = mapped_ents; 1314 1306 1315 - check_for_stack(dev, sg_page(s), s->offset); 1316 - 1317 - if (!PageHighMem(sg_page(s))) { 1318 - check_for_illegal_area(dev, sg_virt(s), sg_dma_len(s)); 1319 - } 1320 - 1321 1307 check_sg_segment(dev, s); 1322 1308 1323 - add_dma_entry(entry); 1309 + add_dma_entry(entry, attrs); 1324 1310 } 1325 1311 } 1326 1312 ··· 1370 1368 } 1371 1369 1372 1370 void debug_dma_alloc_coherent(struct device *dev, size_t size, 1373 - dma_addr_t dma_addr, void *virt) 1371 + dma_addr_t dma_addr, void *virt, 1372 + unsigned long attrs) 1374 1373 { 1375 1374 struct dma_debug_entry *entry; 1376 1375 ··· 1401 1398 else 1402 1399 entry->pfn = page_to_pfn(virt_to_page(virt)); 1403 1400 1404 - add_dma_entry(entry); 1401 + add_dma_entry(entry, attrs); 1405 1402 } 1406 1403 1407 1404 void debug_dma_free_coherent(struct device *dev, size_t size, ··· 1432 1429 } 1433 1430 1434 1431 void debug_dma_map_resource(struct device *dev, phys_addr_t addr, size_t size, 1435 - int direction, dma_addr_t dma_addr) 1432 + int direction, dma_addr_t dma_addr, 1433 + unsigned long attrs) 1436 1434 { 1437 1435 struct dma_debug_entry *entry; 1438 1436 ··· 1453 1449 entry->direction = direction; 1454 1450 entry->map_err_type = MAP_ERR_NOT_CHECKED; 1455 1451 1456 - add_dma_entry(entry); 1452 + add_dma_entry(entry, attrs); 1457 1453 } 1458 1454 1459 1455 void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_addr,
+16 -8
kernel/dma/debug.h
··· 11 11 #ifdef CONFIG_DMA_API_DEBUG 12 12 extern void debug_dma_map_page(struct device *dev, struct page *page, 13 13 size_t offset, size_t size, 14 - int direction, dma_addr_t dma_addr); 14 + int direction, dma_addr_t dma_addr, 15 + unsigned long attrs); 15 16 16 17 extern void debug_dma_unmap_page(struct device *dev, dma_addr_t addr, 17 18 size_t size, int direction); 18 19 19 20 extern void debug_dma_map_sg(struct device *dev, struct scatterlist *sg, 20 - int nents, int mapped_ents, int direction); 21 + int nents, int mapped_ents, int direction, 22 + unsigned long attrs); 21 23 22 24 extern void debug_dma_unmap_sg(struct device *dev, struct scatterlist *sglist, 23 25 int nelems, int dir); 24 26 25 27 extern void debug_dma_alloc_coherent(struct device *dev, size_t size, 26 - dma_addr_t dma_addr, void *virt); 28 + dma_addr_t dma_addr, void *virt, 29 + unsigned long attrs); 27 30 28 31 extern void debug_dma_free_coherent(struct device *dev, size_t size, 29 32 void *virt, dma_addr_t addr); 30 33 31 34 extern void debug_dma_map_resource(struct device *dev, phys_addr_t addr, 32 35 size_t size, int direction, 33 - dma_addr_t dma_addr); 36 + dma_addr_t dma_addr, 37 + unsigned long attrs); 34 38 35 39 extern void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_addr, 36 40 size_t size, int direction); ··· 57 53 #else /* CONFIG_DMA_API_DEBUG */ 58 54 static inline void debug_dma_map_page(struct device *dev, struct page *page, 59 55 size_t offset, size_t size, 60 - int direction, dma_addr_t dma_addr) 56 + int direction, dma_addr_t dma_addr, 57 + unsigned long attrs) 61 58 { 62 59 } 63 60 ··· 68 63 } 69 64 70 65 static inline void debug_dma_map_sg(struct device *dev, struct scatterlist *sg, 71 - int nents, int mapped_ents, int direction) 66 + int nents, int mapped_ents, int direction, 67 + unsigned long attrs) 72 68 { 73 69 } 74 70 ··· 80 74 } 81 75 82 76 static inline void debug_dma_alloc_coherent(struct device *dev, size_t size, 83 - dma_addr_t dma_addr, void 
*virt) 77 + dma_addr_t dma_addr, void *virt, 78 + unsigned long attrs) 84 79 { 85 80 } 86 81 ··· 92 85 93 86 static inline void debug_dma_map_resource(struct device *dev, phys_addr_t addr, 94 87 size_t size, int direction, 95 - dma_addr_t dma_addr) 88 + dma_addr_t dma_addr, 89 + unsigned long attrs) 96 90 { 97 91 } 98 92
+12 -12
kernel/dma/mapping.c
··· 156 156 addr = dma_direct_map_page(dev, page, offset, size, dir, attrs); 157 157 else 158 158 addr = ops->map_page(dev, page, offset, size, dir, attrs); 159 - debug_dma_map_page(dev, page, offset, size, dir, addr); 159 + debug_dma_map_page(dev, page, offset, size, dir, addr, attrs); 160 160 161 161 return addr; 162 162 } ··· 195 195 ents = ops->map_sg(dev, sg, nents, dir, attrs); 196 196 197 197 if (ents > 0) 198 - debug_dma_map_sg(dev, sg, nents, ents, dir); 198 + debug_dma_map_sg(dev, sg, nents, ents, dir, attrs); 199 199 else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM && 200 200 ents != -EIO)) 201 201 return -EIO; ··· 249 249 * Returns 0 on success or a negative error code on error. The following 250 250 * error codes are supported with the given meaning: 251 251 * 252 - * -EINVAL - An invalid argument, unaligned access or other error 253 - * in usage. Will not succeed if retried. 254 - * -ENOMEM - Insufficient resources (like memory or IOVA space) to 255 - * complete the mapping. Should succeed if retried later. 256 - * -EIO - Legacy error code with an unknown meaning. eg. this is 257 - * returned if a lower level call returned DMA_MAPPING_ERROR. 252 + * -EINVAL An invalid argument, unaligned access or other error 253 + * in usage. Will not succeed if retried. 254 + * -ENOMEM Insufficient resources (like memory or IOVA space) to 255 + * complete the mapping. Should succeed if retried later. 256 + * -EIO Legacy error code with an unknown meaning. eg. this is 257 + * returned if a lower level call returned DMA_MAPPING_ERROR. 
258 258 */ 259 259 int dma_map_sgtable(struct device *dev, struct sg_table *sgt, 260 260 enum dma_data_direction dir, unsigned long attrs) ··· 305 305 else if (ops->map_resource) 306 306 addr = ops->map_resource(dev, phys_addr, size, dir, attrs); 307 307 308 - debug_dma_map_resource(dev, phys_addr, size, dir, addr); 308 + debug_dma_map_resource(dev, phys_addr, size, dir, addr, attrs); 309 309 return addr; 310 310 } 311 311 EXPORT_SYMBOL(dma_map_resource); ··· 510 510 else 511 511 return NULL; 512 512 513 - debug_dma_alloc_coherent(dev, size, *dma_handle, cpu_addr); 513 + debug_dma_alloc_coherent(dev, size, *dma_handle, cpu_addr, attrs); 514 514 return cpu_addr; 515 515 } 516 516 EXPORT_SYMBOL(dma_alloc_attrs); ··· 566 566 struct page *page = __dma_alloc_pages(dev, size, dma_handle, dir, gfp); 567 567 568 568 if (page) 569 - debug_dma_map_page(dev, page, 0, size, dir, *dma_handle); 569 + debug_dma_map_page(dev, page, 0, size, dir, *dma_handle, 0); 570 570 return page; 571 571 } 572 572 EXPORT_SYMBOL_GPL(dma_alloc_pages); ··· 644 644 645 645 if (sgt) { 646 646 sgt->nents = 1; 647 - debug_dma_map_sg(dev, sgt->sgl, sgt->orig_nents, 1, dir); 647 + debug_dma_map_sg(dev, sgt->sgl, sgt->orig_nents, 1, dir, attrs); 648 648 } 649 649 return sgt; 650 650 }
+6 -19
kernel/signal.c
··· 426 426 */ 427 427 rcu_read_lock(); 428 428 ucounts = task_ucounts(t); 429 - sigpending = inc_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1); 430 - switch (sigpending) { 431 - case 1: 432 - if (likely(get_ucounts(ucounts))) 433 - break; 434 - fallthrough; 435 - case LONG_MAX: 436 - /* 437 - * we need to decrease the ucount in the userns tree on any 438 - * failure to avoid counts leaking. 439 - */ 440 - dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1); 441 - rcu_read_unlock(); 442 - return NULL; 443 - } 429 + sigpending = inc_rlimit_get_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING); 444 430 rcu_read_unlock(); 431 + if (!sigpending) 432 + return NULL; 445 433 446 434 if (override_rlimit || likely(sigpending <= task_rlimit(t, RLIMIT_SIGPENDING))) { 447 435 q = kmem_cache_alloc(sigqueue_cachep, gfp_flags); ··· 438 450 } 439 451 440 452 if (unlikely(q == NULL)) { 441 - if (dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING, 1)) 442 - put_ucounts(ucounts); 453 + dec_rlimit_put_ucounts(ucounts, UCOUNT_RLIMIT_SIGPENDING); 443 454 } else { 444 455 INIT_LIST_HEAD(&q->list); 445 456 q->flags = sigqueue_flags; ··· 451 464 { 452 465 if (q->flags & SIGQUEUE_PREALLOC) 453 466 return; 454 - if (q->ucounts && dec_rlimit_ucounts(q->ucounts, UCOUNT_RLIMIT_SIGPENDING, 1)) { 455 - put_ucounts(q->ucounts); 467 + if (q->ucounts) { 468 + dec_rlimit_put_ucounts(q->ucounts, UCOUNT_RLIMIT_SIGPENDING); 456 469 q->ucounts = NULL; 457 470 } 458 471 kmem_cache_free(sigqueue_cachep, q);
+2 -2
kernel/trace/ftrace.c
··· 6977 6977 struct ftrace_ops *op; 6978 6978 int bit; 6979 6979 6980 - bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START, TRACE_LIST_MAX); 6980 + bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START); 6981 6981 if (bit < 0) 6982 6982 return; 6983 6983 ··· 7052 7052 { 7053 7053 int bit; 7054 7054 7055 - bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START, TRACE_LIST_MAX); 7055 + bit = trace_test_and_set_recursion(ip, parent_ip, TRACE_LIST_START); 7056 7056 if (bit < 0) 7057 7057 return; 7058 7058
+4 -7
kernel/trace/trace.c
··· 1744 1744 irq_work_queue(&tr->fsnotify_irqwork); 1745 1745 } 1746 1746 1747 - /* 1748 - * (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)) && \ 1749 - * defined(CONFIG_FSNOTIFY) 1750 - */ 1751 - #else 1747 + #elif defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER) \ 1748 + || defined(CONFIG_OSNOISE_TRACER) 1752 1749 1753 1750 #define trace_create_maxlat_file(tr, d_tracer) \ 1754 1751 trace_create_file("tracing_max_latency", 0644, d_tracer, \ 1755 1752 &tr->max_latency, &tracing_max_lat_fops) 1756 1753 1754 + #else 1755 + #define trace_create_maxlat_file(tr, d_tracer) do { } while (0) 1757 1756 #endif 1758 1757 1759 1758 #ifdef CONFIG_TRACER_MAX_TRACE ··· 9472 9473 9473 9474 create_trace_options_dir(tr); 9474 9475 9475 - #if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER) 9476 9476 trace_create_maxlat_file(tr, d_tracer); 9477 - #endif 9478 9477 9479 9478 if (ftrace_create_function_files(tr, d_tracer)) 9480 9479 MEM_FAIL(1, "Could not allocate function filter files");
+58 -3
kernel/trace/trace_eprobe.c
··· 119 119 int argc, const char **argv, struct dyn_event *ev) 120 120 { 121 121 struct trace_eprobe *ep = to_trace_eprobe(ev); 122 + const char *slash; 122 123 123 - return strcmp(trace_probe_name(&ep->tp), event) == 0 && 124 - (!system || strcmp(trace_probe_group_name(&ep->tp), system) == 0) && 125 - trace_probe_match_command_args(&ep->tp, argc, argv); 124 + /* 125 + * We match the following: 126 + * event only - match all eprobes with event name 127 + * system and event only - match all system/event probes 128 + * 129 + * The below has the above satisfied with more arguments: 130 + * 131 + * attached system/event - If the arg has the system and event 132 + * the probe is attached to, match 133 + * probes with the attachment. 134 + * 135 + * If any more args are given, then it requires a full match. 136 + */ 137 + 138 + /* 139 + * If system exists, but this probe is not part of that system 140 + * do not match. 141 + */ 142 + if (system && strcmp(trace_probe_group_name(&ep->tp), system) != 0) 143 + return false; 144 + 145 + /* Must match the event name */ 146 + if (strcmp(trace_probe_name(&ep->tp), event) != 0) 147 + return false; 148 + 149 + /* No arguments match all */ 150 + if (argc < 1) 151 + return true; 152 + 153 + /* First argument is the system/event the probe is attached to */ 154 + 155 + slash = strchr(argv[0], '/'); 156 + if (!slash) 157 + slash = strchr(argv[0], '.'); 158 + if (!slash) 159 + return false; 160 + 161 + if (strncmp(ep->event_system, argv[0], slash - argv[0])) 162 + return false; 163 + if (strcmp(ep->event_name, slash + 1)) 164 + return false; 165 + 166 + argc--; 167 + argv++; 168 + 169 + /* If there are no other args, then match */ 170 + if (argc < 1) 171 + return true; 172 + 173 + return trace_probe_match_command_args(&ep->tp, argc, argv); 126 174 } 127 175 128 176 static struct dyn_event_operations eprobe_dyn_event_ops = { ··· 680 632 681 633 trace_event_trigger_enable_disable(file, 0); 682 634 update_cond_flag(file); 635 + 636 + /* 
Make sure nothing is using the edata or trigger */ 637 + tracepoint_synchronize_unregister(); 638 + 639 + kfree(edata); 640 + kfree(trigger); 641 + 683 642 return 0; 684 643 } 685 644
+1 -1
kernel/trace/trace_events_hist.c
··· 2506 2506 * events. However, for convenience, users are allowed to directly 2507 2507 * specify an event field in an action, which will be automatically 2508 2508 * converted into a variable on their behalf. 2509 - 2509 + * 2510 2510 * If a user specifies a field on an event that isn't the event the 2511 2511 * histogram currently being defined (the target event histogram), the 2512 2512 * only way that can be accomplished is if a new hist trigger is
+49
kernel/ucount.c
··· 284 284 return (new == 0); 285 285 } 286 286 287 + static void do_dec_rlimit_put_ucounts(struct ucounts *ucounts, 288 + struct ucounts *last, enum ucount_type type) 289 + { 290 + struct ucounts *iter, *next; 291 + for (iter = ucounts; iter != last; iter = next) { 292 + long dec = atomic_long_add_return(-1, &iter->ucount[type]); 293 + WARN_ON_ONCE(dec < 0); 294 + next = iter->ns->ucounts; 295 + if (dec == 0) 296 + put_ucounts(iter); 297 + } 298 + } 299 + 300 + void dec_rlimit_put_ucounts(struct ucounts *ucounts, enum ucount_type type) 301 + { 302 + do_dec_rlimit_put_ucounts(ucounts, NULL, type); 303 + } 304 + 305 + long inc_rlimit_get_ucounts(struct ucounts *ucounts, enum ucount_type type) 306 + { 307 + /* Caller must hold a reference to ucounts */ 308 + struct ucounts *iter; 309 + long dec, ret = 0; 310 + 311 + for (iter = ucounts; iter; iter = iter->ns->ucounts) { 312 + long max = READ_ONCE(iter->ns->ucount_max[type]); 313 + long new = atomic_long_add_return(1, &iter->ucount[type]); 314 + if (new < 0 || new > max) 315 + goto unwind; 316 + if (iter == ucounts) 317 + ret = new; 318 + /* 319 + * Grab an extra ucount reference for the caller when 320 + * the rlimit count was previously 0. 321 + */ 322 + if (new != 1) 323 + continue; 324 + if (!get_ucounts(iter)) 325 + goto dec_unwind; 326 + } 327 + return ret; 328 + dec_unwind: 329 + dec = atomic_long_add_return(-1, &iter->ucount[type]); 330 + WARN_ON_ONCE(dec < 0); 331 + unwind: 332 + do_dec_rlimit_put_ucounts(ucounts, iter, type); 333 + return 0; 334 + } 335 + 287 336 bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long max) 288 337 { 289 338 struct ucounts *iter;
+4 -2
mm/huge_memory.c
··· 2700 2700 if (mapping) { 2701 2701 int nr = thp_nr_pages(head); 2702 2702 2703 - if (PageSwapBacked(head)) 2703 + if (PageSwapBacked(head)) { 2704 2704 __mod_lruvec_page_state(head, NR_SHMEM_THPS, 2705 2705 -nr); 2706 - else 2706 + } else { 2707 2707 __mod_lruvec_page_state(head, NR_FILE_THPS, 2708 2708 -nr); 2709 + filemap_nr_thps_dec(mapping); 2710 + } 2709 2711 } 2710 2712 2711 2713 __split_huge_page(page, list, end);
+4 -1
mm/memblock.c
··· 932 932 * covered by the memory map. The struct page representing NOMAP memory 933 933 * frames in the memory map will be PageReserved() 934 934 * 935 + * Note: if the memory being marked %MEMBLOCK_NOMAP was allocated from 936 + * memblock, the caller must inform kmemleak to ignore that memory 937 + * 935 938 * Return: 0 on success, -errno on failure. 936 939 */ 937 940 int __init_memblock memblock_mark_nomap(phys_addr_t base, phys_addr_t size) ··· 1690 1687 if (!size) 1691 1688 return; 1692 1689 1693 - if (memblock.memory.cnt <= 1) { 1690 + if (!memblock_memory->total_size) { 1694 1691 pr_warn("%s: No memory registered yet\n", __func__); 1695 1692 return; 1696 1693 }
+5 -11
mm/mempolicy.c
··· 856 856 goto out; 857 857 } 858 858 859 - if (flags & MPOL_F_NUMA_BALANCING) { 860 - if (new && new->mode == MPOL_BIND) { 861 - new->flags |= (MPOL_F_MOF | MPOL_F_MORON); 862 - } else { 863 - ret = -EINVAL; 864 - mpol_put(new); 865 - goto out; 866 - } 867 - } 868 - 869 859 ret = mpol_set_nodemask(new, nodes, scratch); 870 860 if (ret) { 871 861 mpol_put(new); ··· 1448 1458 return -EINVAL; 1449 1459 if ((*flags & MPOL_F_STATIC_NODES) && (*flags & MPOL_F_RELATIVE_NODES)) 1450 1460 return -EINVAL; 1451 - 1461 + if (*flags & MPOL_F_NUMA_BALANCING) { 1462 + if (*mode != MPOL_BIND) 1463 + return -EINVAL; 1464 + *flags |= (MPOL_F_MOF | MPOL_F_MORON); 1465 + } 1452 1466 return 0; 1453 1467 } 1454 1468
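The mm/mempolicy.c hunk above moves the MPOL_F_NUMA_BALANCING check out of the allocation path and into flag sanitization, so an invalid mode/flag combination is rejected before any policy object exists. A minimal userspace sketch of that validate-then-expand pattern, using illustrative flag values rather than the kernel's actual bit positions:

```c
#include <assert.h>
#include <errno.h>

/* Illustrative values only -- not the kernel's real constants */
#define MPOL_BIND              2
#define MPOL_F_STATIC_NODES    (1 << 0)
#define MPOL_F_RELATIVE_NODES  (1 << 1)
#define MPOL_F_NUMA_BALANCING  (1 << 2)
#define MPOL_F_MOF             (1 << 3)
#define MPOL_F_MORON           (1 << 4)

/* Mirrors the patched sanitize_mpol_flags(): reject contradictory
 * flags up front, and expand MPOL_F_NUMA_BALANCING into its internal
 * MOF/MORON bits only when the mode is MPOL_BIND. */
static int sanitize_mpol_flags(int *mode, unsigned short *flags)
{
	if ((*flags & MPOL_F_STATIC_NODES) && (*flags & MPOL_F_RELATIVE_NODES))
		return -EINVAL;
	if (*flags & MPOL_F_NUMA_BALANCING) {
		if (*mode != MPOL_BIND)
			return -EINVAL;
		*flags |= (MPOL_F_MOF | MPOL_F_MORON);
	}
	return 0;
}
```

The design point of the patch is exactly this ordering: by the time a policy is allocated, the flags are already known-good, so the error path no longer needs the `mpol_put()` unwind that the removed hunk carried.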
+37 -25
mm/migrate.c
··· 3066 3066 EXPORT_SYMBOL(migrate_vma_finalize); 3067 3067 #endif /* CONFIG_DEVICE_PRIVATE */ 3068 3068 3069 - #if defined(CONFIG_MEMORY_HOTPLUG) 3069 + #if defined(CONFIG_HOTPLUG_CPU) 3070 3070 /* Disable reclaim-based migration. */ 3071 3071 static void __disable_all_migrate_targets(void) 3072 3072 { ··· 3209 3209 } 3210 3210 3211 3211 /* 3212 - * React to hotplug events that might affect the migration targets 3213 - * like events that online or offline NUMA nodes. 3214 - * 3215 - * The ordering is also currently dependent on which nodes have 3216 - * CPUs. That means we need CPU on/offline notification too. 3217 - */ 3218 - static int migration_online_cpu(unsigned int cpu) 3219 - { 3220 - set_migration_target_nodes(); 3221 - return 0; 3222 - } 3223 - 3224 - static int migration_offline_cpu(unsigned int cpu) 3225 - { 3226 - set_migration_target_nodes(); 3227 - return 0; 3228 - } 3229 - 3230 - /* 3231 3212 * This leaves migrate-on-reclaim transiently disabled between 3232 3213 * the MEM_GOING_OFFLINE and MEM_OFFLINE events. This runs 3233 3214 * whether reclaim-based migration is enabled or not, which ··· 3220 3239 * set_migration_target_nodes(). 3221 3240 */ 3222 3241 static int __meminit migrate_on_reclaim_callback(struct notifier_block *self, 3223 - unsigned long action, void *arg) 3242 + unsigned long action, void *_arg) 3224 3243 { 3244 + struct memory_notify *arg = _arg; 3245 + 3246 + /* 3247 + * Only update the node migration order when a node is 3248 + * changing status, like online->offline. This avoids 3249 + * the overhead of synchronize_rcu() in most cases. 3250 + */ 3251 + if (arg->status_change_nid < 0) 3252 + return notifier_from_errno(0); 3253 + 3225 3254 switch (action) { 3226 3255 case MEM_GOING_OFFLINE: 3227 3256 /* ··· 3265 3274 return notifier_from_errno(0); 3266 3275 } 3267 3276 3277 + /* 3278 + * React to hotplug events that might affect the migration targets 3279 + * like events that online or offline NUMA nodes. 
3280 + * 3281 + * The ordering is also currently dependent on which nodes have 3282 + * CPUs. That means we need CPU on/offline notification too. 3283 + */ 3284 + static int migration_online_cpu(unsigned int cpu) 3285 + { 3286 + set_migration_target_nodes(); 3287 + return 0; 3288 + } 3289 + 3290 + static int migration_offline_cpu(unsigned int cpu) 3291 + { 3292 + set_migration_target_nodes(); 3293 + return 0; 3294 + } 3295 + 3268 3296 static int __init migrate_on_reclaim_init(void) 3269 3297 { 3270 3298 int ret; 3271 3299 3272 - ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim", 3273 - migration_online_cpu, 3274 - migration_offline_cpu); 3300 + ret = cpuhp_setup_state_nocalls(CPUHP_MM_DEMOTION_DEAD, "mm/demotion:offline", 3301 + NULL, migration_offline_cpu); 3275 3302 /* 3276 3303 * In the unlikely case that this fails, the automatic 3277 3304 * migration targets may become suboptimal for nodes ··· 3297 3288 * rare case, do not bother trying to do anything special. 3298 3289 */ 3299 3290 WARN_ON(ret < 0); 3291 + ret = cpuhp_setup_state(CPUHP_AP_MM_DEMOTION_ONLINE, "mm/demotion:online", 3292 + migration_online_cpu, NULL); 3293 + WARN_ON(ret < 0); 3300 3294 3301 3295 hotplug_memory_notifier(migrate_on_reclaim_callback, 100); 3302 3296 return 0; 3303 3297 } 3304 3298 late_initcall(migrate_on_reclaim_init); 3305 - #endif /* CONFIG_MEMORY_HOTPLUG */ 3299 + #endif /* CONFIG_HOTPLUG_CPU */
+1 -3
mm/page_ext.c
··· 269 269 total_usage += table_size; 270 270 return 0; 271 271 } 272 - #ifdef CONFIG_MEMORY_HOTPLUG 272 + 273 273 static void free_page_ext(void *addr) 274 274 { 275 275 if (is_vmalloc_addr(addr)) { ··· 373 373 374 374 return notifier_from_errno(ret); 375 375 } 376 - 377 - #endif 378 376 379 377 void __init page_ext_init(void) 380 378 {
+2 -2
mm/slab.c
··· 1095 1095 return 0; 1096 1096 } 1097 1097 1098 - #if defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG) 1098 + #if defined(CONFIG_NUMA) 1099 1099 /* 1100 1100 * Drains freelist for a node on each slab cache, used for memory hot-remove. 1101 1101 * Returns -EBUSY if all objects cannot be drained so that the node is not ··· 1157 1157 out: 1158 1158 return notifier_from_errno(ret); 1159 1159 } 1160 - #endif /* CONFIG_NUMA && CONFIG_MEMORY_HOTPLUG */ 1160 + #endif /* CONFIG_NUMA */ 1161 1161 1162 1162 /* 1163 1163 * swap the static kmem_cache_node with kmalloced memory
+25 -8
mm/slub.c
··· 1701 1701 } 1702 1702 1703 1703 static inline bool slab_free_freelist_hook(struct kmem_cache *s, 1704 - void **head, void **tail) 1704 + void **head, void **tail, 1705 + int *cnt) 1705 1706 { 1706 1707 1707 1708 void *object; ··· 1729 1728 *head = object; 1730 1729 if (!*tail) 1731 1730 *tail = object; 1731 + } else { 1732 + /* 1733 + * Adjust the reconstructed freelist depth 1734 + * accordingly if object's reuse is delayed. 1735 + */ 1736 + --(*cnt); 1732 1737 } 1733 1738 } while (object != old_tail); 1734 1739 ··· 3420 3413 struct kmem_cache_cpu *c; 3421 3414 unsigned long tid; 3422 3415 3423 - memcg_slab_free_hook(s, &head, 1); 3416 + /* memcg_slab_free_hook() is already called for bulk free. */ 3417 + if (!tail) 3418 + memcg_slab_free_hook(s, &head, 1); 3424 3419 redo: 3425 3420 /* 3426 3421 * Determine the currently cpus per cpu slab. ··· 3489 3480 * With KASAN enabled slab_free_freelist_hook modifies the freelist 3490 3481 * to remove objects, whose reuse must be delayed. 3491 3482 */ 3492 - if (slab_free_freelist_hook(s, &head, &tail)) 3483 + if (slab_free_freelist_hook(s, &head, &tail, &cnt)) 3493 3484 do_slab_free(s, page, head, tail, cnt, addr); 3494 3485 } 3495 3486 ··· 4212 4203 if (alloc_kmem_cache_cpus(s)) 4213 4204 return 0; 4214 4205 4215 - free_kmem_cache_nodes(s); 4216 4206 error: 4207 + __kmem_cache_release(s); 4217 4208 return -EINVAL; 4218 4209 } 4219 4210 ··· 4889 4880 return 0; 4890 4881 4891 4882 err = sysfs_slab_add(s); 4892 - if (err) 4883 + if (err) { 4893 4884 __kmem_cache_release(s); 4885 + return err; 4886 + } 4894 4887 4895 4888 if (s->flags & SLAB_STORE_USER) 4896 4889 debugfs_slab_add(s); 4897 4890 4898 - return err; 4891 + return 0; 4899 4892 } 4900 4893 4901 4894 void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller) ··· 6119 6108 struct kmem_cache *s = file_inode(filep)->i_private; 6120 6109 unsigned long *obj_map; 6121 6110 6122 - obj_map = bitmap_alloc(oo_objects(s->oo), GFP_KERNEL); 6123 - if 
(!obj_map) 6111 + if (!t) 6124 6112 return -ENOMEM; 6113 + 6114 + obj_map = bitmap_alloc(oo_objects(s->oo), GFP_KERNEL); 6115 + if (!obj_map) { 6116 + seq_release_private(inode, filep); 6117 + return -ENOMEM; 6118 + } 6125 6119 6126 6120 if (strcmp(filep->f_path.dentry->d_name.name, "alloc_traces") == 0) 6127 6121 alloc = TRACK_ALLOC; ··· 6135 6119 6136 6120 if (!alloc_loc_track(t, PAGE_SIZE / sizeof(struct location), GFP_KERNEL)) { 6137 6121 bitmap_free(obj_map); 6122 + seq_release_private(inode, filep); 6138 6123 return -ENOMEM; 6139 6124 } 6140 6125
+1 -3
net/bridge/br_private.h
··· 1125 1125 1126 1126 static inline unsigned long br_multicast_gmi(const struct net_bridge_mcast *brmctx) 1127 1127 { 1128 - /* use the RFC default of 2 for QRV */ 1129 - return 2 * brmctx->multicast_query_interval + 1130 - brmctx->multicast_query_response_interval; 1128 + return brmctx->multicast_membership_interval; 1131 1129 } 1132 1130 1133 1131 static inline bool
+3 -1
net/bridge/netfilter/ebtables.c
··· 926 926 return -ENOMEM; 927 927 for_each_possible_cpu(i) { 928 928 newinfo->chainstack[i] = 929 - vmalloc(array_size(udc_cnt, sizeof(*(newinfo->chainstack[0])))); 929 + vmalloc_node(array_size(udc_cnt, 930 + sizeof(*(newinfo->chainstack[0]))), 931 + cpu_to_node(i)); 930 932 if (!newinfo->chainstack[i]) { 931 933 while (i) 932 934 vfree(newinfo->chainstack[--i]);
+36 -15
net/can/isotp.c
··· 121 121 struct tpcon { 122 122 int idx; 123 123 int len; 124 - u8 state; 124 + u32 state; 125 125 u8 bs; 126 126 u8 sn; 127 127 u8 ll_dl; ··· 848 848 { 849 849 struct sock *sk = sock->sk; 850 850 struct isotp_sock *so = isotp_sk(sk); 851 + u32 old_state = so->tx.state; 851 852 struct sk_buff *skb; 852 853 struct net_device *dev; 853 854 struct canfd_frame *cf; ··· 861 860 return -EADDRNOTAVAIL; 862 861 863 862 /* we do not support multiple buffers - for now */ 864 - if (so->tx.state != ISOTP_IDLE || wq_has_sleeper(&so->wait)) { 865 - if (msg->msg_flags & MSG_DONTWAIT) 866 - return -EAGAIN; 863 + if (cmpxchg(&so->tx.state, ISOTP_IDLE, ISOTP_SENDING) != ISOTP_IDLE || 864 + wq_has_sleeper(&so->wait)) { 865 + if (msg->msg_flags & MSG_DONTWAIT) { 866 + err = -EAGAIN; 867 + goto err_out; 868 + } 867 869 868 870 /* wait for complete transmission of current pdu */ 869 - wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE); 871 + err = wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE); 872 + if (err) 873 + goto err_out; 870 874 } 871 875 872 - if (!size || size > MAX_MSG_LENGTH) 873 - return -EINVAL; 876 + if (!size || size > MAX_MSG_LENGTH) { 877 + err = -EINVAL; 878 + goto err_out; 879 + } 874 880 875 881 /* take care of a potential SF_DL ESC offset for TX_DL > 8 */ 876 882 off = (so->tx.ll_dl > CAN_MAX_DLEN) ? 1 : 0; 877 883 878 884 /* does the given data fit into a single frame for SF_BROADCAST? 
*/ 879 885 if ((so->opt.flags & CAN_ISOTP_SF_BROADCAST) && 880 - (size > so->tx.ll_dl - SF_PCI_SZ4 - ae - off)) 881 - return -EINVAL; 886 + (size > so->tx.ll_dl - SF_PCI_SZ4 - ae - off)) { 887 + err = -EINVAL; 888 + goto err_out; 889 + } 882 890 883 891 err = memcpy_from_msg(so->tx.buf, msg, size); 884 892 if (err < 0) 885 - return err; 893 + goto err_out; 886 894 887 895 dev = dev_get_by_index(sock_net(sk), so->ifindex); 888 - if (!dev) 889 - return -ENXIO; 896 + if (!dev) { 897 + err = -ENXIO; 898 + goto err_out; 899 + } 890 900 891 901 skb = sock_alloc_send_skb(sk, so->ll.mtu + sizeof(struct can_skb_priv), 892 902 msg->msg_flags & MSG_DONTWAIT, &err); 893 903 if (!skb) { 894 904 dev_put(dev); 895 - return err; 905 + goto err_out; 896 906 } 897 907 898 908 can_skb_reserve(skb); 899 909 can_skb_prv(skb)->ifindex = dev->ifindex; 900 910 can_skb_prv(skb)->skbcnt = 0; 901 911 902 - so->tx.state = ISOTP_SENDING; 903 912 so->tx.len = size; 904 913 so->tx.idx = 0; 905 914 ··· 965 954 if (err) { 966 955 pr_notice_once("can-isotp: %s: can_send_ret %pe\n", 967 956 __func__, ERR_PTR(err)); 968 - return err; 957 + goto err_out; 969 958 } 970 959 971 960 if (wait_tx_done) { 972 961 /* wait for complete transmission of current pdu */ 973 962 wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE); 963 + 964 + if (sk->sk_err) 965 + return -sk->sk_err; 974 966 } 975 967 976 968 return size; 969 + 970 + err_out: 971 + so->tx.state = old_state; 972 + if (so->tx.state == ISOTP_IDLE) 973 + wake_up_interruptible(&so->wait); 974 + 975 + return err; 977 976 } 978 977 979 978 static int isotp_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
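The isotp.c change above replaces a plain `so->tx.state != ISOTP_IDLE` test with `cmpxchg()`, so only one sender can win the IDLE-to-SENDING transition and every error path restores the old state. A small C11 sketch of that claim/release pattern (a single file-scope state word stands in for the per-socket `so->tx.state`; names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdatomic.h>

enum tx_state { ISOTP_IDLE, ISOTP_SENDING };

/* Stand-in for the per-socket so->tx.state word */
static _Atomic int tx_state = ISOTP_IDLE;

/* Equivalent of cmpxchg(&so->tx.state, ISOTP_IDLE, ISOTP_SENDING):
 * returns nonzero only for the caller that actually performed the
 * IDLE -> SENDING transition; a concurrent second caller loses. */
static int try_claim_tx(void)
{
	int expected = ISOTP_IDLE;
	return atomic_compare_exchange_strong(&tx_state, &expected,
					      ISOTP_SENDING);
}

/* Error/completion path: drop back to IDLE so waiters can proceed,
 * matching the patch's err_out unwind to old_state. */
static void release_tx(void)
{
	atomic_store(&tx_state, ISOTP_IDLE);
}
```

The point of doing the claim atomically is visible in the patch's new `err_out:` label: because the state was taken before any allocation, every failure after the claim must put it back and wake sleepers, which the non-atomic version silently got wrong.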
+1
net/can/j1939/j1939-priv.h
··· 330 330 void j1939_tp_schedule_txtimer(struct j1939_session *session, int msec); 331 331 void j1939_session_timers_cancel(struct j1939_session *session); 332 332 333 + #define J1939_MIN_TP_PACKET_SIZE 9 333 334 #define J1939_MAX_TP_PACKET_SIZE (7 * 0xff) 334 335 #define J1939_MAX_ETP_PACKET_SIZE (7 * 0x00ffffff) 335 336
+5 -2
net/can/j1939/main.c
··· 249 249 struct j1939_priv *priv, *priv_new; 250 250 int ret; 251 251 252 - priv = j1939_priv_get_by_ndev(ndev); 252 + spin_lock(&j1939_netdev_lock); 253 + priv = j1939_priv_get_by_ndev_locked(ndev); 253 254 if (priv) { 254 255 kref_get(&priv->rx_kref); 256 + spin_unlock(&j1939_netdev_lock); 255 257 return priv; 256 258 } 259 + spin_unlock(&j1939_netdev_lock); 257 260 258 261 priv = j1939_priv_create(ndev); 259 262 if (!priv) ··· 272 269 /* Someone was faster than us, use their priv and roll 273 270 * back our's. 274 271 */ 272 + kref_get(&priv_new->rx_kref); 275 273 spin_unlock(&j1939_netdev_lock); 276 274 dev_put(ndev); 277 275 kfree(priv); 278 - kref_get(&priv_new->rx_kref); 279 276 return priv_new; 280 277 } 281 278 j1939_priv_set(ndev, priv);
+9 -5
net/can/j1939/transport.c
··· 1237 1237 session->err = -ETIME; 1238 1238 j1939_session_deactivate(session); 1239 1239 } else { 1240 - netdev_alert(priv->ndev, "%s: 0x%p: rx timeout, send abort\n", 1241 - __func__, session); 1242 - 1243 1240 j1939_session_list_lock(session->priv); 1244 1241 if (session->state >= J1939_SESSION_ACTIVE && 1245 1242 session->state < J1939_SESSION_ACTIVE_MAX) { 1243 + netdev_alert(priv->ndev, "%s: 0x%p: rx timeout, send abort\n", 1244 + __func__, session); 1246 1245 j1939_session_get(session); 1247 1246 hrtimer_start(&session->rxtimer, 1248 1247 ms_to_ktime(J1939_XTP_ABORT_TIMEOUT_MS), ··· 1608 1609 abort = J1939_XTP_ABORT_FAULT; 1609 1610 else if (len > priv->tp_max_packet_size) 1610 1611 abort = J1939_XTP_ABORT_RESOURCE; 1612 + else if (len < J1939_MIN_TP_PACKET_SIZE) 1613 + abort = J1939_XTP_ABORT_FAULT; 1611 1614 } 1612 1615 1613 1616 if (abort != J1939_XTP_NO_ABORT) { ··· 1790 1789 static void j1939_xtp_rx_dat_one(struct j1939_session *session, 1791 1790 struct sk_buff *skb) 1792 1791 { 1792 + enum j1939_xtp_abort abort = J1939_XTP_ABORT_FAULT; 1793 1793 struct j1939_priv *priv = session->priv; 1794 1794 struct j1939_sk_buff_cb *skcb, *se_skcb; 1795 1795 struct sk_buff *se_skb = NULL; ··· 1805 1803 1806 1804 skcb = j1939_skb_to_cb(skb); 1807 1805 dat = skb->data; 1808 - if (skb->len <= 1) 1806 + if (skb->len != 8) { 1809 1807 /* makes no sense */ 1808 + abort = J1939_XTP_ABORT_UNEXPECTED_DATA; 1810 1809 goto out_session_cancel; 1810 + } 1811 1811 1812 1812 switch (session->last_cmd) { 1813 1813 case 0xff: ··· 1908 1904 out_session_cancel: 1909 1905 kfree_skb(se_skb); 1910 1906 j1939_session_timers_cancel(session); 1911 - j1939_session_cancel(session, J1939_XTP_ABORT_FAULT); 1907 + j1939_session_cancel(session, abort); 1912 1908 j1939_session_put(session); 1913 1909 } 1914 1910
+7 -2
net/dsa/dsa2.c
··· 1359 1359 1360 1360 for_each_available_child_of_node(ports, port) { 1361 1361 err = of_property_read_u32(port, "reg", &reg); 1362 - if (err) 1362 + if (err) { 1363 + of_node_put(port); 1363 1364 goto out_put_node; 1365 + } 1364 1366 1365 1367 if (reg >= ds->num_ports) { 1366 1368 dev_err(ds->dev, "port %pOF index %u exceeds num_ports (%zu)\n", 1367 1369 port, reg, ds->num_ports); 1370 + of_node_put(port); 1368 1371 err = -EINVAL; 1369 1372 goto out_put_node; 1370 1373 } ··· 1375 1372 dp = dsa_to_port(ds, reg); 1376 1373 1377 1374 err = dsa_port_parse_of(dp, port); 1378 - if (err) 1375 + if (err) { 1376 + of_node_put(port); 1379 1377 goto out_put_node; 1378 + } 1380 1379 } 1381 1380 1382 1381 out_put_node:
+32 -13
net/ipv4/tcp_ipv4.c
··· 1037 1037 DEFINE_STATIC_KEY_FALSE(tcp_md5_needed); 1038 1038 EXPORT_SYMBOL(tcp_md5_needed); 1039 1039 1040 + static bool better_md5_match(struct tcp_md5sig_key *old, struct tcp_md5sig_key *new) 1041 + { 1042 + if (!old) 1043 + return true; 1044 + 1045 + /* l3index always overrides non-l3index */ 1046 + if (old->l3index && new->l3index == 0) 1047 + return false; 1048 + if (old->l3index == 0 && new->l3index) 1049 + return true; 1050 + 1051 + return old->prefixlen < new->prefixlen; 1052 + } 1053 + 1040 1054 /* Find the Key structure for an address. */ 1041 1055 struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, int l3index, 1042 1056 const union tcp_md5_addr *addr, ··· 1073 1059 lockdep_sock_is_held(sk)) { 1074 1060 if (key->family != family) 1075 1061 continue; 1076 - if (key->l3index && key->l3index != l3index) 1062 + if (key->flags & TCP_MD5SIG_FLAG_IFINDEX && key->l3index != l3index) 1077 1063 continue; 1078 1064 if (family == AF_INET) { 1079 1065 mask = inet_make_mask(key->prefixlen); ··· 1088 1074 match = false; 1089 1075 } 1090 1076 1091 - if (match && (!best_match || 1092 - key->prefixlen > best_match->prefixlen)) 1077 + if (match && better_md5_match(best_match, key)) 1093 1078 best_match = key; 1094 1079 } 1095 1080 return best_match; ··· 1098 1085 static struct tcp_md5sig_key *tcp_md5_do_lookup_exact(const struct sock *sk, 1099 1086 const union tcp_md5_addr *addr, 1100 1087 int family, u8 prefixlen, 1101 - int l3index) 1088 + int l3index, u8 flags) 1102 1089 { 1103 1090 const struct tcp_sock *tp = tcp_sk(sk); 1104 1091 struct tcp_md5sig_key *key; ··· 1118 1105 lockdep_sock_is_held(sk)) { 1119 1106 if (key->family != family) 1120 1107 continue; 1121 - if (key->l3index && key->l3index != l3index) 1108 + if ((key->flags & TCP_MD5SIG_FLAG_IFINDEX) != (flags & TCP_MD5SIG_FLAG_IFINDEX)) 1109 + continue; 1110 + if (key->l3index != l3index) 1122 1111 continue; 1123 1112 if (!memcmp(&key->addr, addr, size) && 1124 1113 key->prefixlen == prefixlen) 
··· 1144 1129 1145 1130 /* This can be called on a newly created socket, from other files */ 1146 1131 int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr, 1147 - int family, u8 prefixlen, int l3index, 1132 + int family, u8 prefixlen, int l3index, u8 flags, 1148 1133 const u8 *newkey, u8 newkeylen, gfp_t gfp) 1149 1134 { 1150 1135 /* Add Key to the list */ ··· 1152 1137 struct tcp_sock *tp = tcp_sk(sk); 1153 1138 struct tcp_md5sig_info *md5sig; 1154 1139 1155 - key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen, l3index); 1140 + key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen, l3index, flags); 1156 1141 if (key) { 1157 1142 /* Pre-existing entry - just update that one. 1158 1143 * Note that the key might be used concurrently. ··· 1197 1182 key->family = family; 1198 1183 key->prefixlen = prefixlen; 1199 1184 key->l3index = l3index; 1185 + key->flags = flags; 1200 1186 memcpy(&key->addr, addr, 1201 1187 (family == AF_INET6) ? sizeof(struct in6_addr) : 1202 1188 sizeof(struct in_addr)); ··· 1207 1191 EXPORT_SYMBOL(tcp_md5_do_add); 1208 1192 1209 1193 int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr, int family, 1210 - u8 prefixlen, int l3index) 1194 + u8 prefixlen, int l3index, u8 flags) 1211 1195 { 1212 1196 struct tcp_md5sig_key *key; 1213 1197 1214 - key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen, l3index); 1198 + key = tcp_md5_do_lookup_exact(sk, addr, family, prefixlen, l3index, flags); 1215 1199 if (!key) 1216 1200 return -ENOENT; 1217 1201 hlist_del_rcu(&key->node); ··· 1245 1229 const union tcp_md5_addr *addr; 1246 1230 u8 prefixlen = 32; 1247 1231 int l3index = 0; 1232 + u8 flags; 1248 1233 1249 1234 if (optlen < sizeof(cmd)) 1250 1235 return -EINVAL; ··· 1256 1239 if (sin->sin_family != AF_INET) 1257 1240 return -EINVAL; 1258 1241 1242 + flags = cmd.tcpm_flags & TCP_MD5SIG_FLAG_IFINDEX; 1243 + 1259 1244 if (optname == TCP_MD5SIG_EXT && 1260 1245 cmd.tcpm_flags & TCP_MD5SIG_FLAG_PREFIX) { 1261 1246 
prefixlen = cmd.tcpm_prefixlen; ··· 1265 1246 return -EINVAL; 1266 1247 } 1267 1248 1268 - if (optname == TCP_MD5SIG_EXT && 1249 + if (optname == TCP_MD5SIG_EXT && cmd.tcpm_ifindex && 1269 1250 cmd.tcpm_flags & TCP_MD5SIG_FLAG_IFINDEX) { 1270 1251 struct net_device *dev; 1271 1252 ··· 1286 1267 addr = (union tcp_md5_addr *)&sin->sin_addr.s_addr; 1287 1268 1288 1269 if (!cmd.tcpm_keylen) 1289 - return tcp_md5_do_del(sk, addr, AF_INET, prefixlen, l3index); 1270 + return tcp_md5_do_del(sk, addr, AF_INET, prefixlen, l3index, flags); 1290 1271 1291 1272 if (cmd.tcpm_keylen > TCP_MD5SIG_MAXKEYLEN) 1292 1273 return -EINVAL; 1293 1274 1294 - return tcp_md5_do_add(sk, addr, AF_INET, prefixlen, l3index, 1275 + return tcp_md5_do_add(sk, addr, AF_INET, prefixlen, l3index, flags, 1295 1276 cmd.tcpm_key, cmd.tcpm_keylen, GFP_KERNEL); 1296 1277 } 1297 1278 ··· 1615 1596 * memory, then we end up not copying the key 1616 1597 * across. Shucks. 1617 1598 */ 1618 - tcp_md5_do_add(newsk, addr, AF_INET, 32, l3index, 1599 + tcp_md5_do_add(newsk, addr, AF_INET, 32, l3index, key->flags, 1619 1600 key->key, key->keylen, GFP_ATOMIC); 1620 1601 sk_nocaps_add(newsk, NETIF_F_GSO_MASK); 1621 1602 }
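The tcp_ipv4.c hunk introduces `better_md5_match()` so that key lookup ranks an L3-device-bound key above any unbound key before falling back to longest-prefix order. A reduced stand-alone version of that comparator (the struct here is a simplified stand-in for `struct tcp_md5sig_key`, keeping only the two fields the ranking uses):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Reduced stand-in for struct tcp_md5sig_key */
struct md5_key {
	int l3index;            /* 0 = not bound to an L3 device */
	unsigned char prefixlen;
};

/* Mirrors the patch's better_md5_match(): an l3index-bound key
 * always outranks an unbound one, regardless of prefix length;
 * only between keys of equal binding does the longer prefix win. */
static bool better_md5_match(const struct md5_key *old,
			     const struct md5_key *new)
{
	if (!old)
		return true;
	/* l3index always overrides non-l3index */
	if (old->l3index && new->l3index == 0)
		return false;
	if (old->l3index == 0 && new->l3index)
		return true;
	return old->prefixlen < new->prefixlen;
}
```

This is why the lookup loop in the patch also switches its skip condition to `key->flags & TCP_MD5SIG_FLAG_IFINDEX`: device binding is now an explicit property of the key rather than being inferred from a nonzero `l3index`.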
+2 -1
net/ipv6/ip6_output.c
··· 464 464 465 465 int ip6_forward(struct sk_buff *skb) 466 466 { 467 - struct inet6_dev *idev = __in6_dev_get_safely(skb->dev); 468 467 struct dst_entry *dst = skb_dst(skb); 469 468 struct ipv6hdr *hdr = ipv6_hdr(skb); 470 469 struct inet6_skb_parm *opt = IP6CB(skb); 471 470 struct net *net = dev_net(dst->dev); 471 + struct inet6_dev *idev; 472 472 u32 mtu; 473 473 474 + idev = __in6_dev_get_safely(dev_get_by_index_rcu(net, IP6CB(skb)->iif)); 474 475 if (net->ipv6.devconf_all->forwarding == 0) 475 476 goto error; 476 477
+6 -42
net/ipv6/netfilter/ip6t_rt.c
··· 25 25 static inline bool 26 26 segsleft_match(u_int32_t min, u_int32_t max, u_int32_t id, bool invert) 27 27 { 28 - bool r; 29 - pr_debug("segsleft_match:%c 0x%x <= 0x%x <= 0x%x\n", 30 - invert ? '!' : ' ', min, id, max); 31 - r = (id >= min && id <= max) ^ invert; 32 - pr_debug(" result %s\n", r ? "PASS" : "FAILED"); 33 - return r; 28 + return (id >= min && id <= max) ^ invert; 34 29 } 35 30 36 31 static bool rt_mt6(const struct sk_buff *skb, struct xt_action_param *par) ··· 60 65 return false; 61 66 } 62 67 63 - pr_debug("IPv6 RT LEN %u %u ", hdrlen, rh->hdrlen); 64 - pr_debug("TYPE %04X ", rh->type); 65 - pr_debug("SGS_LEFT %u %02X\n", rh->segments_left, rh->segments_left); 66 - 67 - pr_debug("IPv6 RT segsleft %02X ", 68 - segsleft_match(rtinfo->segsleft[0], rtinfo->segsleft[1], 69 - rh->segments_left, 70 - !!(rtinfo->invflags & IP6T_RT_INV_SGS))); 71 - pr_debug("type %02X %02X %02X ", 72 - rtinfo->rt_type, rh->type, 73 - (!(rtinfo->flags & IP6T_RT_TYP) || 74 - ((rtinfo->rt_type == rh->type) ^ 75 - !!(rtinfo->invflags & IP6T_RT_INV_TYP)))); 76 - pr_debug("len %02X %04X %02X ", 77 - rtinfo->hdrlen, hdrlen, 78 - !(rtinfo->flags & IP6T_RT_LEN) || 79 - ((rtinfo->hdrlen == hdrlen) ^ 80 - !!(rtinfo->invflags & IP6T_RT_INV_LEN))); 81 - pr_debug("res %02X %02X %02X ", 82 - rtinfo->flags & IP6T_RT_RES, 83 - ((const struct rt0_hdr *)rh)->reserved, 84 - !((rtinfo->flags & IP6T_RT_RES) && 85 - (((const struct rt0_hdr *)rh)->reserved))); 86 - 87 68 ret = (segsleft_match(rtinfo->segsleft[0], rtinfo->segsleft[1], 88 69 rh->segments_left, 89 70 !!(rtinfo->invflags & IP6T_RT_INV_SGS))) && ··· 78 107 reserved), 79 108 sizeof(_reserved), 80 109 &_reserved); 110 + if (!rp) { 111 + par->hotdrop = true; 112 + return false; 113 + } 81 114 82 115 ret = (*rp == 0); 83 116 } 84 117 85 - pr_debug("#%d ", rtinfo->addrnr); 86 118 if (!(rtinfo->flags & IP6T_RT_FST)) { 87 119 return ret; 88 120 } else if (rtinfo->flags & IP6T_RT_FST_NSTRICT) { 89 - pr_debug("Not strict "); 90 121 if 
(rtinfo->addrnr > (unsigned int)((hdrlen - 8) / 16)) { 91 - pr_debug("There isn't enough space\n"); 92 122 return false; 93 123 } else { 94 124 unsigned int i = 0; 95 125 96 - pr_debug("#%d ", rtinfo->addrnr); 97 126 for (temp = 0; 98 127 temp < (unsigned int)((hdrlen - 8) / 16); 99 128 temp++) { ··· 109 138 return false; 110 139 } 111 140 112 - if (ipv6_addr_equal(ap, &rtinfo->addrs[i])) { 113 - pr_debug("i=%d temp=%d;\n", i, temp); 141 + if (ipv6_addr_equal(ap, &rtinfo->addrs[i])) 114 142 i++; 115 - } 116 143 if (i == rtinfo->addrnr) 117 144 break; 118 145 } 119 - pr_debug("i=%d #%d\n", i, rtinfo->addrnr); 120 146 if (i == rtinfo->addrnr) 121 147 return ret; 122 148 else 123 149 return false; 124 150 } 125 151 } else { 126 - pr_debug("Strict "); 127 152 if (rtinfo->addrnr > (unsigned int)((hdrlen - 8) / 16)) { 128 - pr_debug("There isn't enough space\n"); 129 153 return false; 130 154 } else { 131 - pr_debug("#%d ", rtinfo->addrnr); 132 155 for (temp = 0; temp < rtinfo->addrnr; temp++) { 133 156 ap = skb_header_pointer(skb, 134 157 ptr ··· 138 173 if (!ipv6_addr_equal(ap, &rtinfo->addrs[temp])) 139 174 break; 140 175 } 141 - pr_debug("temp=%d #%d\n", temp, rtinfo->addrnr); 142 176 if (temp == rtinfo->addrnr && 143 177 temp == (unsigned int)((hdrlen - 8) / 16)) 144 178 return ret;
+9 -6
net/ipv6/tcp_ipv6.c
··· 599 599 struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)&cmd.tcpm_addr; 600 600 int l3index = 0; 601 601 u8 prefixlen; 602 + u8 flags; 602 603 603 604 if (optlen < sizeof(cmd)) 604 605 return -EINVAL; ··· 609 608 610 609 if (sin6->sin6_family != AF_INET6) 611 610 return -EINVAL; 611 + 612 + flags = cmd.tcpm_flags & TCP_MD5SIG_FLAG_IFINDEX; 612 613 613 614 if (optname == TCP_MD5SIG_EXT && 614 615 cmd.tcpm_flags & TCP_MD5SIG_FLAG_PREFIX) { ··· 622 619 prefixlen = ipv6_addr_v4mapped(&sin6->sin6_addr) ? 32 : 128; 623 620 } 624 621 625 - if (optname == TCP_MD5SIG_EXT && 622 + if (optname == TCP_MD5SIG_EXT && cmd.tcpm_ifindex && 626 623 cmd.tcpm_flags & TCP_MD5SIG_FLAG_IFINDEX) { 627 624 struct net_device *dev; 628 625 ··· 643 640 if (ipv6_addr_v4mapped(&sin6->sin6_addr)) 644 641 return tcp_md5_do_del(sk, (union tcp_md5_addr *)&sin6->sin6_addr.s6_addr32[3], 645 642 AF_INET, prefixlen, 646 - l3index); 643 + l3index, flags); 647 644 return tcp_md5_do_del(sk, (union tcp_md5_addr *)&sin6->sin6_addr, 648 - AF_INET6, prefixlen, l3index); 645 + AF_INET6, prefixlen, l3index, flags); 649 646 } 650 647 651 648 if (cmd.tcpm_keylen > TCP_MD5SIG_MAXKEYLEN) ··· 653 650 654 651 if (ipv6_addr_v4mapped(&sin6->sin6_addr)) 655 652 return tcp_md5_do_add(sk, (union tcp_md5_addr *)&sin6->sin6_addr.s6_addr32[3], 656 - AF_INET, prefixlen, l3index, 653 + AF_INET, prefixlen, l3index, flags, 657 654 cmd.tcpm_key, cmd.tcpm_keylen, 658 655 GFP_KERNEL); 659 656 660 657 return tcp_md5_do_add(sk, (union tcp_md5_addr *)&sin6->sin6_addr, 661 - AF_INET6, prefixlen, l3index, 658 + AF_INET6, prefixlen, l3index, flags, 662 659 cmd.tcpm_key, cmd.tcpm_keylen, GFP_KERNEL); 663 660 } 664 661 ··· 1407 1404 * across. Shucks. 1408 1405 */ 1409 1406 tcp_md5_do_add(newsk, (union tcp_md5_addr *)&newsk->sk_v6_daddr, 1410 - AF_INET6, 128, l3index, key->key, key->keylen, 1407 + AF_INET6, 128, l3index, key->flags, key->key, key->keylen, 1411 1408 sk_gfp_mask(sk, GFP_ATOMIC)); 1412 1409 } 1413 1410 #endif
+1 -1
net/netfilter/Kconfig
··· 120 120 config NF_CONNTRACK_SECMARK 121 121 bool 'Connection tracking security mark support' 122 122 depends on NETWORK_SECMARK 123 - default m if NETFILTER_ADVANCED=n 123 + default y if NETFILTER_ADVANCED=n 124 124 help 125 125 This option enables security markings to be applied to 126 126 connections. Typically they are copied to connections from
+5
net/netfilter/ipvs/ip_vs_ctl.c
··· 4098 4098 tbl[idx++].data = &ipvs->sysctl_ignore_tunneled; 4099 4099 ipvs->sysctl_run_estimation = 1; 4100 4100 tbl[idx++].data = &ipvs->sysctl_run_estimation; 4101 + #ifdef CONFIG_IP_VS_DEBUG 4102 + /* Global sysctls must be ro in non-init netns */ 4103 + if (!net_eq(net, &init_net)) 4104 + tbl[idx++].mode = 0444; 4105 + #endif 4101 4106 4102 4107 ipvs->sysctl_hdr = register_net_sysctl(net, "net/ipv4/vs", tbl); 4103 4108 if (ipvs->sysctl_hdr == NULL) {
+3 -6
net/netfilter/nft_chain_filter.c
··· 344 344 return; 345 345 } 346 346 347 - /* UNREGISTER events are also happening on netns exit. 348 - * 349 - * Although nf_tables core releases all tables/chains, only this event 350 - * handler provides guarantee that hook->ops.dev is still accessible, 351 - * so we cannot skip exiting net namespaces. 352 - */ 353 347 __nft_release_basechain(ctx); 354 348 } 355 349 ··· 360 366 361 367 if (event != NETDEV_UNREGISTER && 362 368 event != NETDEV_CHANGENAME) 369 + return NOTIFY_DONE; 370 + 371 + if (!check_net(ctx.net)) 363 372 return NOTIFY_DONE; 364 373 365 374 nft_net = nft_pernet(ctx.net);
+1 -1
net/netfilter/xt_IDLETIMER.c
··· 137 137 { 138 138 int ret; 139 139 140 - info->timer = kmalloc(sizeof(*info->timer), GFP_KERNEL); 140 + info->timer = kzalloc(sizeof(*info->timer), GFP_KERNEL); 141 141 if (!info->timer) { 142 142 ret = -ENOMEM; 143 143 goto out;
+1 -1
net/sched/act_ct.c
··· 960 960 tmpl = p->tmpl; 961 961 962 962 tcf_lastuse_update(&c->tcf_tm); 963 + tcf_action_update_bstats(&c->common, skb); 963 964 964 965 if (clear) { 965 966 qdisc_skb_cb(skb)->post_ct = false; ··· 1050 1049 1051 1050 qdisc_skb_cb(skb)->post_ct = true; 1052 1051 out_clear: 1053 - tcf_action_update_bstats(&c->common, skb); 1054 1052 if (defrag) 1055 1053 qdisc_skb_cb(skb)->pkt_len = skb->len; 1056 1054 return retval;
+1 -1
scripts/recordmcount.pl
··· 189 189 $local_regex = "^[0-9a-fA-F]+\\s+t\\s+(\\S+)"; 190 190 $weak_regex = "^[0-9a-fA-F]+\\s+([wW])\\s+(\\S+)"; 191 191 $section_regex = "Disassembly of section\\s+(\\S+):"; 192 - $function_regex = "^([0-9a-fA-F]+)\\s+<(.*?)>:"; 192 + $function_regex = "^([0-9a-fA-F]+)\\s+<([^^]*?)>:"; 193 193 $mcount_regex = "^\\s*([0-9a-fA-F]+):.*\\s(mcount|__fentry__)\$"; 194 194 $section_type = '@progbits'; 195 195 $mcount_adjust = 0;
+8
security/keys/process_keys.c
··· 918 918 return; 919 919 } 920 920 921 + /* If get_ucounts fails more bits are needed in the refcount */ 922 + if (unlikely(!get_ucounts(old->ucounts))) { 923 + WARN_ONCE(1, "In %s get_ucounts failed\n", __func__); 924 + put_cred(new); 925 + return; 926 + } 927 + 921 928 new-> uid = old-> uid; 922 929 new-> euid = old-> euid; 923 930 new-> suid = old-> suid; ··· 934 927 new-> sgid = old-> sgid; 935 928 new->fsgid = old->fsgid; 936 929 new->user = get_uid(old->user); 930 + new->ucounts = old->ucounts; 937 931 new->user_ns = get_user_ns(old->user_ns); 938 932 new->group_info = get_group_info(old->group_info); 939 933
+47
sound/pci/hda/patch_realtek.c
··· 2535 2535 SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2536 2536 SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2537 2537 SND_PCI_QUIRK(0x1558, 0x65e5, "Clevo PC50D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2538 + SND_PCI_QUIRK(0x1558, 0x65f1, "Clevo PC50HS", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2538 2539 SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2539 2540 SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2540 2541 SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
··· 6406 6405 } 6407 6406 } 6408 6407 6408 + /* GPIO1 = amplifier on/off 6409 + * GPIO3 = mic mute LED 6410 + */ 6411 + static void alc285_fixup_hp_spectre_x360_eb1(struct hda_codec *codec, 6412 + const struct hda_fixup *fix, int action) 6413 + { 6414 + static const hda_nid_t conn[] = { 0x02 }; 6415 + 6416 + struct alc_spec *spec = codec->spec; 6417 + static const struct hda_pintbl pincfgs[] = { 6418 + { 0x14, 0x90170110 }, /* front/high speakers */ 6419 + { 0x17, 0x90170130 }, /* back/bass speakers */ 6420 + { } 6421 + }; 6422 + 6423 + //enable micmute led 6424 + alc_fixup_hp_gpio_led(codec, action, 0x00, 0x04); 6425 + 6426 + switch (action) { 6427 + case HDA_FIXUP_ACT_PRE_PROBE: 6428 + spec->micmute_led_polarity = 1; 6429 + /* needed for amp of back speakers */ 6430 + spec->gpio_mask |= 0x01; 6431 + spec->gpio_dir |= 0x01; 6432 + snd_hda_apply_pincfgs(codec, pincfgs); 6433 + /* share DAC to have unified volume control */ 6434 + snd_hda_override_conn_list(codec, 0x14, ARRAY_SIZE(conn), conn); 6435 + snd_hda_override_conn_list(codec, 0x17, ARRAY_SIZE(conn), conn); 6436 + break; 6437 + case HDA_FIXUP_ACT_INIT: 6438 + /* need to toggle GPIO to enable the amp of back speakers */ 6439 + alc_update_gpio_data(codec, 0x01, true); 6440 + msleep(100); 6441 + alc_update_gpio_data(codec, 0x01, false); 6442 + break; 6443 + } 6444 + } 6445 + 6409 6446 static void alc285_fixup_hp_spectre_x360(struct hda_codec *codec, 6410 6447 const struct hda_fixup *fix, int action) 6411 6448 {
··· 6596 6557 ALC269_FIXUP_HP_DOCK_GPIO_MIC1_LED, 6597 6558 ALC280_FIXUP_HP_9480M, 6598 6559 ALC245_FIXUP_HP_X360_AMP, 6560 + ALC285_FIXUP_HP_SPECTRE_X360_EB1, 6599 6561 ALC288_FIXUP_DELL_HEADSET_MODE, 6600 6562 ALC288_FIXUP_DELL1_MIC_NO_PRESENCE, 6601 6563 ALC288_FIXUP_DELL_XPS_13,
··· 8290 8250 .type = HDA_FIXUP_FUNC, 8291 8251 .v.func = alc285_fixup_hp_spectre_x360, 8292 8252 }, 8253 + [ALC285_FIXUP_HP_SPECTRE_X360_EB1] = { 8254 + .type = HDA_FIXUP_FUNC, 8255 + .v.func = alc285_fixup_hp_spectre_x360_eb1 8256 + }, 8293 8257 [ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP] = { 8294 8258 .type = HDA_FIXUP_FUNC, 8295 8259 .v.func = alc285_fixup_ideapad_s740_coef,
··· 8628 8584 SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP), 8629 8585 SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED), 8630 8586 SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), 8587 + SND_PCI_QUIRK(0x103c, 0x8811, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1), 8588 + SND_PCI_QUIRK(0x103c, 0x8812, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1), 8631 8589 SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), 8632 8590 SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), 8633 8591 SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
··· 9051 9005 {.id = ALC245_FIXUP_HP_X360_AMP, .name = "alc245-hp-x360-amp"}, 9052 9006 {.id = ALC295_FIXUP_HP_OMEN, .name = "alc295-hp-omen"}, 9053 9007 {.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"}, 9008 + {.id = ALC285_FIXUP_HP_SPECTRE_X360_EB1, .name = "alc285-hp-spectre-x360-eb1"}, 9054 9009 {.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"}, 9055 9010 {.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"}, 9056 9011 {.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"},
+1
sound/soc/codecs/Kconfig
··· 1583 1583 tristate "WCD9380/WCD9385 Codec - SDW" 1584 1584 select SND_SOC_WCD938X 1585 1585 select SND_SOC_WCD_MBHC 1586 + select REGMAP_IRQ 1586 1587 depends on SOUNDWIRE 1587 1588 select REGMAP_SOUNDWIRE 1588 1589 help
+3 -13
sound/soc/codecs/cs42l42.c
··· 922 922 struct snd_soc_component *component = dai->component; 923 923 struct cs42l42_private *cs42l42 = snd_soc_component_get_drvdata(component); 924 924 unsigned int regval; 925 - u8 fullScaleVol; 926 925 int ret; 927 926 928 927 if (mute) { ··· 992 993 cs42l42->stream_use |= 1 << stream; 993 994 994 995 if (stream == SNDRV_PCM_STREAM_PLAYBACK) { 995 - /* Read the headphone load */ 996 - regval = snd_soc_component_read(component, CS42L42_LOAD_DET_RCSTAT); 997 - if (((regval & CS42L42_RLA_STAT_MASK) >> CS42L42_RLA_STAT_SHIFT) == 998 - CS42L42_RLA_STAT_15_OHM) { 999 - fullScaleVol = CS42L42_HP_FULL_SCALE_VOL_MASK; 1000 - } else { 1001 - fullScaleVol = 0; 1002 - } 1003 - 1004 - /* Un-mute the headphone, set the full scale volume flag */ 996 + /* Un-mute the headphone */ 1005 997 snd_soc_component_update_bits(component, CS42L42_HP_CTL, 1006 998 CS42L42_HP_ANA_AMUTE_MASK | 1007 - CS42L42_HP_ANA_BMUTE_MASK | 1008 - CS42L42_HP_FULL_SCALE_VOL_MASK, fullScaleVol); 999 + CS42L42_HP_ANA_BMUTE_MASK, 1000 + 0); 1009 1001 } 1010 1002 } 1011 1003
+7
sound/soc/codecs/cs4341.c
··· 305 305 return cs4341_probe(&spi->dev); 306 306 } 307 307 308 + static const struct spi_device_id cs4341_spi_ids[] = { 309 + { "cs4341a" }, 310 + { } 311 + }; 312 + MODULE_DEVICE_TABLE(spi, cs4341_spi_ids); 313 + 308 314 static struct spi_driver cs4341_spi_driver = { 309 315 .driver = { 310 316 .name = "cs4341-spi", 311 317 .of_match_table = of_match_ptr(cs4341_dt_ids), 312 318 }, 313 319 .probe = cs4341_spi_probe, 320 + .id_table = cs4341_spi_ids, 314 321 }; 315 322 #endif 316 323
+2 -2
sound/soc/codecs/nau8824.c
··· 867 867 struct regmap *regmap = nau8824->regmap; 868 868 int adc_value, event = 0, event_mask = 0; 869 869 870 - snd_soc_dapm_enable_pin(dapm, "MICBIAS"); 871 - snd_soc_dapm_enable_pin(dapm, "SAR"); 870 + snd_soc_dapm_force_enable_pin(dapm, "MICBIAS"); 871 + snd_soc_dapm_force_enable_pin(dapm, "SAR"); 872 872 snd_soc_dapm_sync(dapm); 873 873 874 874 msleep(100);
+1
sound/soc/codecs/pcm179x-spi.c
··· 36 36 MODULE_DEVICE_TABLE(of, pcm179x_of_match); 37 37 38 38 static const struct spi_device_id pcm179x_spi_ids[] = { 39 + { "pcm1792a", 0 }, 39 40 { "pcm179x", 0 }, 40 41 { }, 41 42 };
+2
sound/soc/codecs/pcm512x.c
··· 116 116 { PCM512x_FS_SPEED_MODE, 0x00 }, 117 117 { PCM512x_IDAC_1, 0x01 }, 118 118 { PCM512x_IDAC_2, 0x00 }, 119 + { PCM512x_I2S_1, 0x02 }, 120 + { PCM512x_I2S_2, 0x00 }, 119 121 }; 120 122 121 123 static bool pcm512x_readable(struct device *dev, unsigned int reg)
+3 -3
sound/soc/codecs/wcd938x.c
··· 4144 4144 { 4145 4145 struct wcd938x_priv *wcd = dev_get_drvdata(comp->dev); 4146 4146 4147 - if (!jack) 4147 + if (jack) 4148 4148 return wcd_mbhc_start(wcd->wcd_mbhc, &wcd->mbhc_cfg, jack); 4149 - 4150 - wcd_mbhc_stop(wcd->wcd_mbhc); 4149 + else 4150 + wcd_mbhc_stop(wcd->wcd_mbhc); 4151 4151 4152 4152 return 0; 4153 4153 }
+10 -3
sound/soc/codecs/wm8960.c
··· 742 742 int i, j, k; 743 743 int ret; 744 744 745 - if (!(iface1 & (1<<6))) { 746 - dev_dbg(component->dev, 747 - "Codec is slave mode, no need to configure clock\n"); 745 + /* 746 + * For Slave mode clocking should still be configured, 747 + * so this if statement should be removed, but some platform 748 + * may not work if the sysclk is not configured, to avoid such 749 + * compatible issue, just add '!wm8960->sysclk' condition in 750 + * this if statement. 751 + */ 752 + if (!(iface1 & (1 << 6)) && !wm8960->sysclk) { 753 + dev_warn(component->dev, 754 + "slave mode, but proceeding with no clock configuration\n"); 748 755 return 0; 749 756 } 750 757
+12 -5
sound/soc/fsl/fsl_xcvr.c
··· 487 487 return ret; 488 488 } 489 489 490 - /* clear DPATH RESET */ 490 + /* set DPATH RESET */ 491 491 m_ctl |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx); 492 + v_ctl |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx); 492 493 ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL, m_ctl, v_ctl); 493 494 if (ret < 0) { 494 495 dev_err(dai->dev, "Error while setting EXT_CTRL: %d\n", ret); ··· 591 590 val |= FSL_XCVR_EXT_CTRL_CMDC_RESET(tx); 592 591 } 593 592 594 - /* set DPATH RESET */ 595 - mask |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx); 596 - val |= FSL_XCVR_EXT_CTRL_DPTH_RESET(tx); 597 - 598 593 ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL, mask, val); 599 594 if (ret < 0) { 600 595 dev_err(dai->dev, "Err setting DPATH RESET: %d\n", ret); ··· 640 643 dev_err(dai->dev, "Failed to enable DMA: %d\n", ret); 641 644 return ret; 642 645 } 646 + 647 + /* clear DPATH RESET */ 648 + ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_CTRL, 649 + FSL_XCVR_EXT_CTRL_DPTH_RESET(tx), 650 + 0); 651 + if (ret < 0) { 652 + dev_err(dai->dev, "Failed to clear DPATH RESET: %d\n", ret); 653 + return ret; 654 + } 655 + 643 656 break; 644 657 case SNDRV_PCM_TRIGGER_STOP: 645 658 case SNDRV_PCM_TRIGGER_SUSPEND:
+12 -25
sound/soc/intel/boards/bytcht_es8316.c
··· 456 456 457 457 static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev) 458 458 { 459 + struct device *dev = &pdev->dev; 459 460 static const char * const mic_name[] = { "in1", "in2" }; 461 + struct snd_soc_acpi_mach *mach = dev_get_platdata(dev); 460 462 struct property_entry props[MAX_NO_PROPS] = {}; 461 463 struct byt_cht_es8316_private *priv; 462 464 const struct dmi_system_id *dmi_id; 463 - struct device *dev = &pdev->dev; 464 - struct snd_soc_acpi_mach *mach; 465 465 struct fwnode_handle *fwnode; 466 466 const char *platform_name; 467 467 struct acpi_device *adev;
··· 476 476 if (!priv) 477 477 return -ENOMEM; 478 478 479 - mach = dev->platform_data; 480 479 /* fix index of codec dai */ 481 480 for (i = 0; i < ARRAY_SIZE(byt_cht_es8316_dais); i++) { 482 481 if (!strcmp(byt_cht_es8316_dais[i].codecs->name,
··· 493 494 put_device(&adev->dev); 494 495 byt_cht_es8316_dais[dai_index].codecs->name = codec_name; 495 496 } else { 496 - dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id); 497 + dev_err(dev, "Error cannot find '%s' dev\n", mach->id); 497 498 return -ENXIO; 498 499 }
··· 532 533 533 534 /* get the clock */ 534 535 priv->mclk = devm_clk_get(dev, "pmc_plt_clk_3"); 535 - if (IS_ERR(priv->mclk)) { 536 - ret = PTR_ERR(priv->mclk); 537 - dev_err(dev, "clk_get pmc_plt_clk_3 failed: %d\n", ret); 538 - return ret; 539 - } 536 + if (IS_ERR(priv->mclk)) 537 + return dev_err_probe(dev, PTR_ERR(priv->mclk), "clk_get pmc_plt_clk_3 failed\n"); 540 538 541 539 /* get speaker enable GPIO */ 542 540 codec_dev = acpi_get_first_physical_node(adev);
··· 563 567 564 568 devm_acpi_dev_add_driver_gpios(codec_dev, byt_cht_es8316_gpios); 565 569 priv->speaker_en_gpio = 566 - gpiod_get_index(codec_dev, "speaker-enable", 0, 567 - /* see comment in byt_cht_es8316_resume */ 568 - GPIOD_OUT_LOW | GPIOD_FLAGS_BIT_NONEXCLUSIVE); 569 - 570 + gpiod_get_optional(codec_dev, "speaker-enable", 571 + /* see comment in byt_cht_es8316_resume() */ 572 + GPIOD_OUT_LOW | GPIOD_FLAGS_BIT_NONEXCLUSIVE); 570 573 if (IS_ERR(priv->speaker_en_gpio)) { 571 - ret = PTR_ERR(priv->speaker_en_gpio); 572 - switch (ret) { 573 - case -ENOENT: 574 - priv->speaker_en_gpio = NULL; 575 - break; 576 - default: 577 - dev_err(dev, "get speaker GPIO failed: %d\n", ret); 578 - fallthrough; 579 - case -EPROBE_DEFER: 580 - goto err_put_codec; 581 - } 574 + ret = dev_err_probe(dev, PTR_ERR(priv->speaker_en_gpio), 575 + "get speaker GPIO failed\n"); 576 + goto err_put_codec; 582 577 } 583 578 584 579 snprintf(components_string, sizeof(components_string),
··· 584 597 byt_cht_es8316_card.long_name = long_name; 585 598 #endif 586 599 587 - sof_parent = snd_soc_acpi_sof_parent(&pdev->dev); 600 + sof_parent = snd_soc_acpi_sof_parent(dev); 588 601 589 602 /* set card and driver name */ 590 603 if (sof_parent) {
+1
sound/soc/soc-core.c
··· 2599 2599 INIT_LIST_HEAD(&component->dai_list); 2600 2600 INIT_LIST_HEAD(&component->dobj_list); 2601 2601 INIT_LIST_HEAD(&component->card_list); 2602 + INIT_LIST_HEAD(&component->list); 2602 2603 mutex_init(&component->io_mutex); 2603 2604 2604 2605 component->name = fmt_single_name(dev, &component->id);
+8 -5
sound/soc/soc-dapm.c
··· 2561 2561 const char *pin, int status) 2562 2562 { 2563 2563 struct snd_soc_dapm_widget *w = dapm_find_widget(dapm, pin, true); 2564 + int ret = 0; 2564 2565 2565 2566 dapm_assert_locked(dapm); 2566 2567 ··· 2574 2573 dapm_mark_dirty(w, "pin configuration"); 2575 2574 dapm_widget_invalidate_input_paths(w); 2576 2575 dapm_widget_invalidate_output_paths(w); 2576 + ret = 1; 2577 2577 } 2578 2578 2579 2579 w->connected = status; 2580 2580 if (status == 0) 2581 2581 w->force = 0; 2582 2582 2583 - return 0; 2583 + return ret; 2584 2584 } 2585 2585 2586 2586 /** ··· 3585 3583 { 3586 3584 struct snd_soc_card *card = snd_kcontrol_chip(kcontrol); 3587 3585 const char *pin = (const char *)kcontrol->private_value; 3586 + int ret; 3588 3587 3589 3588 if (ucontrol->value.integer.value[0]) 3590 - snd_soc_dapm_enable_pin(&card->dapm, pin); 3589 + ret = snd_soc_dapm_enable_pin(&card->dapm, pin); 3591 3590 else 3592 - snd_soc_dapm_disable_pin(&card->dapm, pin); 3591 + ret = snd_soc_dapm_disable_pin(&card->dapm, pin); 3593 3592 3594 3593 snd_soc_dapm_sync(&card->dapm); 3595 - return 0; 3594 + return ret; 3596 3595 } 3597 3596 EXPORT_SYMBOL_GPL(snd_soc_dapm_put_pin_switch); 3598 3597 ··· 4026 4023 4027 4024 rtd->params_select = ucontrol->value.enumerated.item[0]; 4028 4025 4029 - return 0; 4026 + return 1; 4030 4027 } 4031 4028 4032 4029 static void
+7
sound/usb/mixer.c
··· 1198 1198 cval->res = 1; 1199 1199 } 1200 1200 break; 1201 + case USB_ID(0x1224, 0x2a25): /* Jieli Technology USB PHY 2.0 */ 1202 + if (!strcmp(kctl->id.name, "Mic Capture Volume")) { 1203 + usb_audio_info(chip, 1204 + "set resolution quirk: cval->res = 16\n"); 1205 + cval->res = 16; 1206 + } 1207 + break; 1201 1208 } 1202 1209 } 1203 1210
+32
sound/usb/quirks-table.h
··· 4012 4012 } 4013 4013 } 4014 4014 }, 4015 + { 4016 + /* 4017 + * Sennheiser GSP670 4018 + * Change order of interfaces loaded 4019 + */ 4020 + USB_DEVICE(0x1395, 0x0300), 4021 + .bInterfaceClass = USB_CLASS_PER_INTERFACE, 4022 + .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { 4023 + .ifnum = QUIRK_ANY_INTERFACE, 4024 + .type = QUIRK_COMPOSITE, 4025 + .data = &(const struct snd_usb_audio_quirk[]) { 4026 + // Communication 4027 + { 4028 + .ifnum = 3, 4029 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 4030 + }, 4031 + // Recording 4032 + { 4033 + .ifnum = 4, 4034 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 4035 + }, 4036 + // Main 4037 + { 4038 + .ifnum = 1, 4039 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 4040 + }, 4041 + { 4042 + .ifnum = -1 4043 + } 4044 + } 4045 + } 4046 + }, 4015 4047 4016 4048 #undef USB_DEVICE_VENDOR_SPEC 4017 4049 #undef USB_AUDIO_DEVICE
+9
sound/usb/quirks.c
··· 1719 1719 */ 1720 1720 fp->attributes &= ~UAC_EP_CS_ATTR_FILL_MAX; 1721 1721 break; 1722 + case USB_ID(0x1224, 0x2a25): /* Jieli Technology USB PHY 2.0 */ 1723 + /* mic works only when ep packet size is set to wMaxPacketSize */ 1724 + fp->attributes |= UAC_EP_CS_ATTR_FILL_MAX; 1725 + break; 1726 + 1722 1727 } 1723 1728 } 1724 1729 ··· 1889 1884 QUIRK_FLAG_GET_SAMPLE_RATE), 1890 1885 DEVICE_FLG(0x2912, 0x30c8, /* Audioengine D1 */ 1891 1886 QUIRK_FLAG_GET_SAMPLE_RATE), 1887 + DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */ 1888 + QUIRK_FLAG_IGNORE_CTL_ERROR), 1892 1889 DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */ 1893 1890 QUIRK_FLAG_GET_SAMPLE_RATE), 1894 1891 DEVICE_FLG(0x534d, 0x2109, /* MacroSilicon MS2109 */ 1895 1892 QUIRK_FLAG_ALIGN_TRANSFER), 1893 + DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */ 1894 + QUIRK_FLAG_GET_SAMPLE_RATE), 1896 1895 1897 1896 /* Vendor matches */ 1898 1897 VENDOR_FLG(0x045e, /* MS Lifecam */
+1 -1
tools/kvm/kvm_stat/kvm_stat
··· 742 742 The fields are all available KVM debugfs files 743 743 744 744 """ 745 - exempt_list = ['halt_poll_fail_ns', 'halt_poll_success_ns'] 745 + exempt_list = ['halt_poll_fail_ns', 'halt_poll_success_ns', 'halt_wait_ns'] 746 746 fields = [field for field in self.walkdir(PATH_DEBUGFS_KVM)[2] 747 747 if field not in exempt_list] 748 748
+3 -3
tools/lib/perf/tests/test-evlist.c
··· 40 40 .type = PERF_TYPE_SOFTWARE, 41 41 .config = PERF_COUNT_SW_TASK_CLOCK, 42 42 }; 43 - int err, cpu, tmp; 43 + int err, idx; 44 44 45 45 cpus = perf_cpu_map__new(NULL); 46 46 __T("failed to create cpus", cpus); ··· 70 70 perf_evlist__for_each_evsel(evlist, evsel) { 71 71 cpus = perf_evsel__cpus(evsel); 72 72 73 - perf_cpu_map__for_each_cpu(cpu, tmp, cpus) { 73 + for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++) { 74 74 struct perf_counts_values counts = { .val = 0 }; 75 75 76 - perf_evsel__read(evsel, cpu, 0, &counts); 76 + perf_evsel__read(evsel, idx, 0, &counts); 77 77 __T("failed to read value for evsel", counts.val != 0); 78 78 } 79 79 }
+4 -3
tools/lib/perf/tests/test-evsel.c
··· 22 22 .type = PERF_TYPE_SOFTWARE, 23 23 .config = PERF_COUNT_SW_CPU_CLOCK, 24 24 }; 25 - int err, cpu, tmp; 25 + int err, idx; 26 26 27 27 cpus = perf_cpu_map__new(NULL); 28 28 __T("failed to create cpus", cpus); ··· 33 33 err = perf_evsel__open(evsel, cpus, NULL); 34 34 __T("failed to open evsel", err == 0); 35 35 36 - perf_cpu_map__for_each_cpu(cpu, tmp, cpus) { 36 + for (idx = 0; idx < perf_cpu_map__nr(cpus); idx++) { 37 37 struct perf_counts_values counts = { .val = 0 }; 38 38 39 - perf_evsel__read(evsel, cpu, 0, &counts); 39 + perf_evsel__read(evsel, idx, 0, &counts); 40 40 __T("failed to read value for evsel", counts.val != 0); 41 41 } 42 42 ··· 148 148 __T("failed to mmap evsel", err == 0); 149 149 150 150 pc = perf_evsel__mmap_base(evsel, 0, 0); 151 + __T("failed to get mmapped address", pc); 151 152 152 153 #if defined(__i386__) || defined(__x86_64__) 153 154 __T("userspace counter access not supported", pc->cap_user_rdpmc);
+25 -31
tools/objtool/elf.c
··· 508 508 list_add_tail(&reloc->list, &sec->reloc->reloc_list); 509 509 elf_hash_add(reloc, &reloc->hash, reloc_hash(reloc)); 510 510 511 + sec->reloc->sh.sh_size += sec->reloc->sh.sh_entsize; 511 512 sec->reloc->changed = true; 512 513 513 514 return 0;
··· 978 977 } 979 978 } 980 979 981 - static int elf_rebuild_rel_reloc_section(struct section *sec, int nr) 980 + static int elf_rebuild_rel_reloc_section(struct section *sec) 982 981 { 983 982 struct reloc *reloc; 984 - int idx = 0, size; 983 + int idx = 0; 985 984 void *buf; 986 985 987 986 /* Allocate a buffer for relocations */ 988 - size = nr * sizeof(GElf_Rel); 989 - buf = malloc(size); 987 + buf = malloc(sec->sh.sh_size); 990 988 if (!buf) { 991 989 perror("malloc"); 992 990 return -1; 993 991 } 994 992 995 993 sec->data->d_buf = buf; 996 994 sec->data->d_size = size; 994 + sec->data->d_size = sec->sh.sh_size; 997 995 sec->data->d_type = ELF_T_REL; 998 - 999 - sec->sh.sh_size = size; 1000 996 1001 997 idx = 0; 1002 998 list_for_each_entry(reloc, &sec->reloc_list, list) { 1003 999 reloc->rel.r_offset = reloc->offset; 1004 1000 reloc->rel.r_info = GELF_R_INFO(reloc->sym->idx, reloc->type); 1005 - gelf_update_rel(sec->data, idx, &reloc->rel); 1001 + if (!gelf_update_rel(sec->data, idx, &reloc->rel)) { 1002 + WARN_ELF("gelf_update_rel"); 1003 + return -1; 1004 + } 1006 1005 idx++; 1007 1006 } 1008 1007 1009 1008 return 0; 1010 1009 } 1011 1010 1012 - static int elf_rebuild_rela_reloc_section(struct section *sec, int nr) 1011 + static int elf_rebuild_rela_reloc_section(struct section *sec) 1013 1012 { 1014 1013 struct reloc *reloc; 1015 - int idx = 0, size; 1014 + int idx = 0; 1016 1015 void *buf; 1017 1016 1018 1017 /* Allocate a buffer for relocations with addends */ 1019 - size = nr * sizeof(GElf_Rela); 1020 - buf = malloc(size); 1018 + buf = malloc(sec->sh.sh_size); 1021 1019 if (!buf) { 1022 1020 perror("malloc"); 1023 1021 return -1; 1024 1022 } 1025 1023 1026 1024 sec->data->d_buf = buf; 1027 - sec->data->d_size = size; 1025 + sec->data->d_size = sec->sh.sh_size; 1028 1026 sec->data->d_type = ELF_T_RELA; 1029 - 1030 - sec->sh.sh_size = size; 1031 1027 1032 1028 idx = 0; 1033 1029 list_for_each_entry(reloc, &sec->reloc_list, list) { 1034 1030 reloc->rela.r_offset = reloc->offset; 1035 1031 reloc->rela.r_addend = reloc->addend; 1036 1032 reloc->rela.r_info = GELF_R_INFO(reloc->sym->idx, reloc->type); 1037 - gelf_update_rela(sec->data, idx, &reloc->rela); 1033 + if (!gelf_update_rela(sec->data, idx, &reloc->rela)) { 1034 + WARN_ELF("gelf_update_rela"); 1035 + return -1; 1036 + } 1038 1037 idx++; 1039 1038 }
··· 1043 1042 1044 1043 static int elf_rebuild_reloc_section(struct elf *elf, struct section *sec) 1045 1044 { 1046 - struct reloc *reloc; 1047 - int nr; 1048 - 1049 - nr = 0; 1050 - list_for_each_entry(reloc, &sec->reloc_list, list) 1051 - nr++; 1052 - 1053 1045 switch (sec->sh.sh_type) { 1054 - case SHT_REL: return elf_rebuild_rel_reloc_section(sec, nr); 1055 - case SHT_RELA: return elf_rebuild_rela_reloc_section(sec, nr); 1046 + case SHT_REL: return elf_rebuild_rel_reloc_section(sec); 1047 + case SHT_RELA: return elf_rebuild_rela_reloc_section(sec); 1056 1048 default: return -1; 1057 1049 } 1058 1050 }
··· 1105 1111 /* Update changed relocation sections and section headers: */ 1106 1112 list_for_each_entry(sec, &elf->sections, list) { 1107 1113 if (sec->changed) { 1108 - if (sec->base && 1109 - elf_rebuild_reloc_section(elf, sec)) { 1110 - WARN("elf_rebuild_reloc_section"); 1111 - return -1; 1112 - } 1113 - 1114 1114 s = elf_getscn(elf->elf, sec->idx); 1115 1115 if (!s) { 1116 1116 WARN_ELF("elf_getscn");
··· 1112 1124 } 1113 1125 if (!gelf_update_shdr(s, &sec->sh)) { 1114 1126 WARN_ELF("gelf_update_shdr"); 1127 + return -1; 1128 + } 1129 + 1130 + if (sec->base && 1131 + elf_rebuild_reloc_section(elf, sec)) { 1132 + WARN("elf_rebuild_reloc_section"); 1115 1133 return -1; 1116 1134 } 1117 1135
+2 -2
tools/perf/util/session.c
··· 2116 2116 static int __perf_session__process_decomp_events(struct perf_session *session) 2117 2117 { 2118 2118 s64 skip; 2119 - u64 size, file_pos = 0; 2119 + u64 size; 2120 2120 struct decomp *decomp = session->decomp_last; 2121 2121 2122 2122 if (!decomp) ··· 2132 2132 size = event->header.size; 2133 2133 2134 2134 if (size < sizeof(struct perf_event_header) || 2135 - (skip = perf_session__process_event(session, event, file_pos)) < 0) { 2135 + (skip = perf_session__process_event(session, event, decomp->file_pos)) < 0) { 2136 2136 pr_err("%#" PRIx64 " [%#x]: failed to process type: %d\n", 2137 2137 decomp->file_pos + decomp->head, event->header.size, event->header.type); 2138 2138 return -EINVAL;
+52 -2
tools/testing/selftests/ftrace/test.d/dynevent/add_remove_eprobe.tc
··· 11 11 EVENT="sys_enter_openat" 12 12 FIELD="filename" 13 13 EPROBE="eprobe_open" 14 - 15 - echo "e:$EPROBE $SYSTEM/$EVENT file=+0(\$filename):ustring" >> dynamic_events 14 + OPTIONS="file=+0(\$filename):ustring" 15 + echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 16 16 17 17 grep -q "$EPROBE" dynamic_events 18 18 test -d events/eprobes/$EPROBE
··· 34 34 35 35 echo "-:$EPROBE" >> dynamic_events 36 36 37 + ! grep -q "$EPROBE" dynamic_events 38 + ! test -d events/eprobes/$EPROBE 39 + 40 + # test various ways to remove the probe (already tested with just event name) 41 + 42 + # With group name 43 + echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 44 + grep -q "$EPROBE" dynamic_events 45 + test -d events/eprobes/$EPROBE 46 + echo "-:eprobes/$EPROBE" >> dynamic_events 47 + ! grep -q "$EPROBE" dynamic_events 48 + ! test -d events/eprobes/$EPROBE 49 + 50 + # With group name and system/event 51 + echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 52 + grep -q "$EPROBE" dynamic_events 53 + test -d events/eprobes/$EPROBE 54 + echo "-:eprobes/$EPROBE $SYSTEM/$EVENT" >> dynamic_events 55 + ! grep -q "$EPROBE" dynamic_events 56 + ! test -d events/eprobes/$EPROBE 57 + 58 + # With just event name and system/event 59 + echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 60 + grep -q "$EPROBE" dynamic_events 61 + test -d events/eprobes/$EPROBE 62 + echo "-:$EPROBE $SYSTEM/$EVENT" >> dynamic_events 63 + ! grep -q "$EPROBE" dynamic_events 64 + ! test -d events/eprobes/$EPROBE 65 + 66 + # With just event name and system/event and options 67 + echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 68 + grep -q "$EPROBE" dynamic_events 69 + test -d events/eprobes/$EPROBE 70 + echo "-:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 71 + ! grep -q "$EPROBE" dynamic_events 72 + ! test -d events/eprobes/$EPROBE 73 + 74 + # With group name and system/event and options 75 + echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 76 + grep -q "$EPROBE" dynamic_events 77 + test -d events/eprobes/$EPROBE 78 + echo "-:eprobes/$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 79 + ! grep -q "$EPROBE" dynamic_events 80 + ! test -d events/eprobes/$EPROBE 81 + 82 + # Finally make sure what is in the dynamic_events file clears it too 83 + echo "e:$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 84 + LINE=`sed -e '/$EPROBE/s/^e/-/' < dynamic_events` 85 + test -d events/eprobes/$EPROBE 86 + echo "-:eprobes/$EPROBE $SYSTEM/$EVENT $OPTIONS" >> dynamic_events 37 87 ! grep -q "$EPROBE" dynamic_events 38 88 ! test -d events/eprobes/$EPROBE 39 89
+1
tools/testing/selftests/net/config
··· 43 43 CONFIG_NET_ACT_MIRRED=m 44 44 CONFIG_BAREUDP=m 45 45 CONFIG_IPV6_IOAM6_LWTUNNEL=y 46 + CONFIG_CRYPTO_SM4=y
+60
tools/testing/selftests/net/fcnal-test.sh
··· 289 289 run_cmd sysctl -q -w $* 290 290 } 291 291 292 + # get sysctl values in NS-A 293 + get_sysctl() 294 + { 295 + ${NSA_CMD} sysctl -n $* 296 + } 297 + 292 298 ################################################################################ 293 299 # Setup for tests 294 300
··· 1009 1003 run_cmd nettest -s -I ${NSA_DEV} -M ${MD5_PW} -m ${NS_NET} 1010 1004 log_test $? 1 "MD5: VRF: Device must be a VRF - prefix" 1011 1005 1006 + test_ipv4_md5_vrf__vrf_server__no_bind_ifindex 1007 + test_ipv4_md5_vrf__global_server__bind_ifindex0 1008 + } 1009 + 1010 + test_ipv4_md5_vrf__vrf_server__no_bind_ifindex() 1011 + { 1012 + log_start 1013 + show_hint "Simulates applications using VRF without TCP_MD5SIG_FLAG_IFINDEX" 1014 + run_cmd nettest -s -I ${VRF} -M ${MD5_PW} -m ${NS_NET} --no-bind-key-ifindex & 1015 + sleep 1 1016 + run_cmd_nsb nettest -r ${NSA_IP} -X ${MD5_PW} 1017 + log_test $? 0 "MD5: VRF: VRF-bound server, unbound key accepts connection" 1018 + 1019 + log_start 1020 + show_hint "Binding both the socket and the key is not required but it works" 1021 + run_cmd nettest -s -I ${VRF} -M ${MD5_PW} -m ${NS_NET} --force-bind-key-ifindex & 1022 + sleep 1 1023 + run_cmd_nsb nettest -r ${NSA_IP} -X ${MD5_PW} 1024 + log_test $? 0 "MD5: VRF: VRF-bound server, bound key accepts connection" 1025 + } 1026 + 1027 + test_ipv4_md5_vrf__global_server__bind_ifindex0() 1028 + { 1029 + # This particular test needs tcp_l3mdev_accept=1 for Global server to accept VRF connections 1030 + local old_tcp_l3mdev_accept 1031 + old_tcp_l3mdev_accept=$(get_sysctl net.ipv4.tcp_l3mdev_accept) 1032 + set_sysctl net.ipv4.tcp_l3mdev_accept=1 1033 + 1034 + log_start 1035 + run_cmd nettest -s -M ${MD5_PW} -m ${NS_NET} --force-bind-key-ifindex & 1036 + sleep 1 1037 + run_cmd_nsb nettest -r ${NSA_IP} -X ${MD5_PW} 1038 + log_test $? 2 "MD5: VRF: Global server, Key bound to ifindex=0 rejects VRF connection" 1039 + 1040 + log_start 1041 + run_cmd nettest -s -M ${MD5_PW} -m ${NS_NET} --force-bind-key-ifindex & 1042 + sleep 1 1043 + run_cmd_nsc nettest -r ${NSA_IP} -X ${MD5_PW} 1044 + log_test $? 0 "MD5: VRF: Global server, key bound to ifindex=0 accepts non-VRF connection" 1045 + log_start 1046 + 1047 + run_cmd nettest -s -M ${MD5_PW} -m ${NS_NET} --no-bind-key-ifindex & 1048 + sleep 1 1049 + run_cmd_nsb nettest -r ${NSA_IP} -X ${MD5_PW} 1050 + log_test $? 0 "MD5: VRF: Global server, key not bound to ifindex accepts VRF connection" 1051 + 1052 + log_start 1053 + run_cmd nettest -s -M ${MD5_PW} -m ${NS_NET} --no-bind-key-ifindex & 1054 + sleep 1 1055 + run_cmd_nsc nettest -r ${NSA_IP} -X ${MD5_PW} 1056 + log_test $? 0 "MD5: VRF: Global server, key not bound to ifindex accepts non-VRF connection" 1057 + 1058 + # restore value 1059 + set_sysctl net.ipv4.tcp_l3mdev_accept="$old_tcp_l3mdev_accept" 1012 1060 } 1013 1061 1014 1062 ipv4_tcp_novrf()
+1
tools/testing/selftests/net/forwarding/Makefile
··· 9 9 gre_inner_v4_multipath.sh \ 10 10 gre_inner_v6_multipath.sh \ 11 11 gre_multipath.sh \ 12 + ip6_forward_instats_vrf.sh \ 12 13 ip6gre_inner_v4_multipath.sh \ 13 14 ip6gre_inner_v6_multipath.sh \ 14 15 ipip_flat_gre_key.sh \
+3
tools/testing/selftests/net/forwarding/forwarding.config.sample
··· 42 42 # Flag for tc match, supposed to be skip_sw/skip_hw which means do not process 43 43 # filter by software/hardware 44 44 TC_FLAG=skip_hw 45 + # IPv6 traceroute utility name. 46 + TROUTE6=traceroute6 47 +
+172
tools/testing/selftests/net/forwarding/ip6_forward_instats_vrf.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + # Test ipv6 stats on the incoming if when forwarding with VRF 5 + 6 + ALL_TESTS=" 7 + ipv6_ping 8 + ipv6_in_too_big_err 9 + ipv6_in_hdr_err 10 + ipv6_in_addr_err 11 + ipv6_in_discard 12 + " 13 + 14 + NUM_NETIFS=4 15 + source lib.sh 16 + 17 + h1_create() 18 + { 19 + simple_if_init $h1 2001:1:1::2/64 20 + ip -6 route add vrf v$h1 2001:1:2::/64 via 2001:1:1::1 21 + } 22 + 23 + h1_destroy() 24 + { 25 + ip -6 route del vrf v$h1 2001:1:2::/64 via 2001:1:1::1 26 + simple_if_fini $h1 2001:1:1::2/64 27 + } 28 + 29 + router_create() 30 + { 31 + vrf_create router 32 + __simple_if_init $rtr1 router 2001:1:1::1/64 33 + __simple_if_init $rtr2 router 2001:1:2::1/64 34 + mtu_set $rtr2 1280 35 + } 36 + 37 + router_destroy() 38 + { 39 + mtu_restore $rtr2 40 + __simple_if_fini $rtr2 2001:1:2::1/64 41 + __simple_if_fini $rtr1 2001:1:1::1/64 42 + vrf_destroy router 43 + } 44 + 45 + h2_create() 46 + { 47 + simple_if_init $h2 2001:1:2::2/64 48 + ip -6 route add vrf v$h2 2001:1:1::/64 via 2001:1:2::1 49 + mtu_set $h2 1280 50 + } 51 + 52 + h2_destroy() 53 + { 54 + mtu_restore $h2 55 + ip -6 route del vrf v$h2 2001:1:1::/64 via 2001:1:2::1 56 + simple_if_fini $h2 2001:1:2::2/64 57 + } 58 + 59 + setup_prepare() 60 + { 61 + h1=${NETIFS[p1]} 62 + rtr1=${NETIFS[p2]} 63 + 64 + rtr2=${NETIFS[p3]} 65 + h2=${NETIFS[p4]} 66 + 67 + vrf_prepare 68 + h1_create 69 + router_create 70 + h2_create 71 + 72 + forwarding_enable 73 + } 74 + 75 + cleanup() 76 + { 77 + pre_cleanup 78 + 79 + forwarding_restore 80 + 81 + h2_destroy 82 + router_destroy 83 + h1_destroy 84 + vrf_cleanup 85 + } 86 + 87 + ipv6_in_too_big_err() 88 + { 89 + RET=0 90 + 91 + local t0=$(ipv6_stats_get $rtr1 Ip6InTooBigErrors) 92 + local vrf_name=$(master_name_get $h1) 93 + 94 + # Send too big packets 95 + ip vrf exec $vrf_name \ 96 + $PING6 -s 1300 2001:1:2::2 -c 1 -w $PING_TIMEOUT &> /dev/null 97 + 98 + local t1=$(ipv6_stats_get $rtr1 Ip6InTooBigErrors) 99 + test "$((t1 - t0))" -ne 0 100 + check_err $? 101 + log_test "Ip6InTooBigErrors" 102 + } 103 + 104 + ipv6_in_hdr_err() 105 + { 106 + RET=0 107 + 108 + local t0=$(ipv6_stats_get $rtr1 Ip6InHdrErrors) 109 + local vrf_name=$(master_name_get $h1) 110 + 111 + # Send packets with hop limit 1, easiest with traceroute6 as some ping6 112 + # doesn't allow hop limit to be specified 113 + ip vrf exec $vrf_name \ 114 + $TROUTE6 2001:1:2::2 &> /dev/null 115 + 116 + local t1=$(ipv6_stats_get $rtr1 Ip6InHdrErrors) 117 + test "$((t1 - t0))" -ne 0 118 + check_err $? 119 + log_test "Ip6InHdrErrors" 120 + } 121 + 122 + ipv6_in_addr_err() 123 + { 124 + RET=0 125 + 126 + local t0=$(ipv6_stats_get $rtr1 Ip6InAddrErrors) 127 + local vrf_name=$(master_name_get $h1) 128 + 129 + # Disable forwarding temporary while sending the packet 130 + sysctl -qw net.ipv6.conf.all.forwarding=0 131 + ip vrf exec $vrf_name \ 132 + $PING6 2001:1:2::2 -c 1 -w $PING_TIMEOUT &> /dev/null 133 + sysctl -qw net.ipv6.conf.all.forwarding=1 134 + 135 + local t1=$(ipv6_stats_get $rtr1 Ip6InAddrErrors) 136 + test "$((t1 - t0))" -ne 0 137 + check_err $? 138 + log_test "Ip6InAddrErrors" 139 + } 140 + 141 + ipv6_in_discard() 142 + { 143 + RET=0 144 + 145 + local t0=$(ipv6_stats_get $rtr1 Ip6InDiscards) 146 + local vrf_name=$(master_name_get $h1) 147 + 148 + # Add a policy to discard 149 + ip xfrm policy add dst 2001:1:2::2/128 dir fwd action block 150 + ip vrf exec $vrf_name \ 151 + $PING6 2001:1:2::2 -c 1 -w $PING_TIMEOUT &> /dev/null 152 + ip xfrm policy del dst 2001:1:2::2/128 dir fwd 153 + 154 + local t1=$(ipv6_stats_get $rtr1 Ip6InDiscards) 155 + test "$((t1 - t0))" -ne 0 156 + check_err $? 157 + log_test "Ip6InDiscards" 158 + } 159 + ipv6_ping() 160 + { 161 + RET=0 162 + 163 + ping6_test $h1 2001:1:2::2 164 + } 165 + 166 + trap cleanup EXIT 167 + 168 + setup_prepare 169 + setup_wait 170 + tests_run 171 + 172 + exit $EXIT_STATUS
+8
tools/testing/selftests/net/forwarding/lib.sh
··· 751 751 | jq '.[] | select(.parent == "'"$parent"'") | '"$selector" 752 752 } 753 753 754 + ipv6_stats_get() 755 + { 756 + local dev=$1; shift 757 + local stat=$1; shift 758 + 759 + cat /proc/net/dev_snmp6/$dev | grep "^$stat" | cut -f2 760 + } 761 + 754 762 humanize() 755 763 { 756 764 local speed=$1; shift
+26 -2
tools/testing/selftests/net/nettest.c
··· 28 28 #include <unistd.h> 29 29 #include <time.h> 30 30 #include <errno.h> 31 + #include <getopt.h> 31 32 32 33 #include <linux/xfrm.h> 33 34 #include <linux/ipsec.h>
··· 102 101 struct sockaddr_in6 v6; 103 102 } md5_prefix; 104 103 unsigned int prefix_len; 104 + /* 0: default, -1: force off, +1: force on */ 105 + int bind_key_ifindex; 105 106 106 107 /* expected addresses and device index for connection */ 107 108 const char *expected_dev;
··· 274 271 } 275 272 memcpy(&md5sig.tcpm_addr, addr, alen); 276 273 277 - if (args->ifindex) { 274 + if ((args->ifindex && args->bind_key_ifindex >= 0) || args->bind_key_ifindex >= 1) { 278 275 opt = TCP_MD5SIG_EXT; 279 276 md5sig.tcpm_flags |= TCP_MD5SIG_FLAG_IFINDEX; 280 277 281 278 md5sig.tcpm_ifindex = args->ifindex; 279 + log_msg("TCP_MD5SIG_FLAG_IFINDEX set tcpm_ifindex=%d\n", md5sig.tcpm_ifindex); 280 + } else { 281 + log_msg("TCP_MD5SIG_FLAG_IFINDEX off\n", md5sig.tcpm_ifindex); 282 282 } 283 283 284 284 rc = setsockopt(sd, IPPROTO_TCP, opt, &md5sig, sizeof(md5sig));
··· 1828 1822 } 1829 1823 1830 1824 #define GETOPT_STR "sr:l:c:p:t:g:P:DRn:M:X:m:d:I:BN:O:SCi6xL:0:1:2:3:Fbq" 1825 + #define OPT_FORCE_BIND_KEY_IFINDEX 1001 1826 + #define OPT_NO_BIND_KEY_IFINDEX 1002 1827 + 1828 + static struct option long_opts[] = { 1829 + {"force-bind-key-ifindex", 0, 0, OPT_FORCE_BIND_KEY_IFINDEX}, 1830 + {"no-bind-key-ifindex", 0, 0, OPT_NO_BIND_KEY_IFINDEX}, 1831 + {0, 0, 0, 0} 1832 + }; 1831 1833 1832 1834 static void print_usage(char *prog) 1833 1835 {
··· 1872 1858 " -M password use MD5 sum protection\n" 1873 1859 " -X password MD5 password for client mode\n" 1874 1860 " -m prefix/len prefix and length to use for MD5 key\n" 1861 + " --no-bind-key-ifindex: Force TCP_MD5SIG_FLAG_IFINDEX off\n" 1862 + " --force-bind-key-ifindex: Force TCP_MD5SIG_FLAG_IFINDEX on\n" 1863 + " (default: only if -I is passed)\n" 1864 + "\n" 1875 1865 " -g grp multicast group (e.g., 239.1.1.1)\n" 1876 1866 " -i interactive mode (default is echo and terminate)\n" 1877 1867 "\n"
··· 1911 1893 * process input args 1912 1894 */ 1913 1895 1914 - while ((rc = getopt(argc, argv, GETOPT_STR)) != -1) { 1896 + while ((rc = getopt_long(argc, argv, GETOPT_STR, long_opts, NULL)) != -1) { 1915 1897 switch (rc) { 1916 1898 case 'B': 1917 1899 both_mode = 1;
··· 1983 1965 break; 1984 1966 case 'M': 1985 1967 args.password = optarg; 1968 + break; 1969 + case OPT_FORCE_BIND_KEY_IFINDEX: 1970 + args.bind_key_ifindex = 1; 1971 + break; 1972 + case OPT_NO_BIND_KEY_IFINDEX: 1973 + args.bind_key_ifindex = -1; 1986 1974 break; 1987 1975 case 'X': 1988 1976 args.client_pw = optarg;
-1
tools/testing/selftests/netfilter/nft_flowtable.sh
··· 199 199 # test basic connectivity 200 200 if ! ip netns exec ns1 ping -c 1 -q 10.0.2.99 > /dev/null; then 201 201 echo "ERROR: ns1 cannot reach ns2" 1>&2 202 - bash 203 202 exit 1 204 203 fi 205 204
+145
tools/testing/selftests/netfilter/nft_nat.sh
··· 741 741 return $lret 742 742 } 743 743 744 + # test port shadowing. 745 + # create two listening services, one on router (ns0), one 746 + # on client (ns2), which is masqueraded from ns1 point of view. 747 + # ns2 sends udp packet coming from service port to ns1, on a highport. 748 + # Later, if ns1 uses same highport to connect to ns0:service, packet 749 + # might be port-forwarded to ns2 instead. 750 + 751 + # second argument tells if we expect the 'fake-entry' to take effect 752 + # (CLIENT) or not (ROUTER). 753 + test_port_shadow() 754 + { 755 + local test=$1 756 + local expect=$2 757 + local daddrc="10.0.1.99" 758 + local daddrs="10.0.1.1" 759 + local result="" 760 + local logmsg="" 761 + 762 + echo ROUTER | ip netns exec "$ns0" nc -w 5 -u -l -p 1405 >/dev/null 2>&1 & 763 + nc_r=$! 764 + 765 + echo CLIENT | ip netns exec "$ns2" nc -w 5 -u -l -p 1405 >/dev/null 2>&1 & 766 + nc_c=$! 767 + 768 + # make shadow entry, from client (ns2), going to (ns1), port 41404, sport 1405. 769 + echo "fake-entry" | ip netns exec "$ns2" nc -w 1 -p 1405 -u "$daddrc" 41404 > /dev/null 770 + 771 + # ns1 tries to connect to ns0:1405. With default settings this should connect 772 + # to client, as it matches the conntrack entry created above. 773 + 774 + result=$(echo "" | ip netns exec "$ns1" nc -w 1 -p 41404 -u "$daddrs" 1405) 775 + 776 + if [ "$result" = "$expect" ] ;then 777 + echo "PASS: portshadow test $test: got reply from ${expect}${logmsg}" 778 + else 779 + echo "ERROR: portshadow test $test: got reply from \"$result\", not $expect as intended" 780 + ret=1 781 + fi 782 + 783 + kill $nc_r $nc_c 2>/dev/null 784 + 785 + # flush udp entries for next test round, if any 786 + ip netns exec "$ns0" conntrack -F >/dev/null 2>&1 787 + } 788 + 789 + # This prevents port shadow of router service via packet filter, 790 + # packets claiming to originate from service port from internal 791 + # network are dropped. 
792 + test_port_shadow_filter() 793 + { 794 + local family=$1 795 + 796 + ip netns exec "$ns0" nft -f /dev/stdin <<EOF 797 + table $family filter { 798 + chain forward { 799 + type filter hook forward priority 0; policy accept; 800 + meta iif veth1 udp sport 1405 drop 801 + } 802 + } 803 + EOF 804 + test_port_shadow "port-filter" "ROUTER" 805 + 806 + ip netns exec "$ns0" nft delete table $family filter 807 + } 808 + 809 + # This prevents port shadow of router service via notrack. 810 + test_port_shadow_notrack() 811 + { 812 + local family=$1 813 + 814 + ip netns exec "$ns0" nft -f /dev/stdin <<EOF 815 + table $family raw { 816 + chain prerouting { 817 + type filter hook prerouting priority -300; policy accept; 818 + meta iif veth0 udp dport 1405 notrack 819 + udp dport 1405 notrack 820 + } 821 + chain output { 822 + type filter hook output priority -300; policy accept; 823 + udp sport 1405 notrack 824 + } 825 + } 826 + EOF 827 + test_port_shadow "port-notrack" "ROUTER" 828 + 829 + ip netns exec "$ns0" nft delete table $family raw 830 + } 831 + 832 + # This prevents port shadow of router service via sport remap. 
833 + test_port_shadow_pat() 834 + { 835 + local family=$1 836 + 837 + ip netns exec "$ns0" nft -f /dev/stdin <<EOF 838 + table $family pat { 839 + chain postrouting { 840 + type nat hook postrouting priority -1; policy accept; 841 + meta iif veth1 udp sport <= 1405 masquerade to : 1406-65535 random 842 + } 843 + } 844 + EOF 845 + test_port_shadow "pat" "ROUTER" 846 + 847 + ip netns exec "$ns0" nft delete table $family pat 848 + } 849 + 850 + test_port_shadowing() 851 + { 852 + local family="ip" 853 + 854 + ip netns exec "$ns0" sysctl net.ipv4.conf.veth0.forwarding=1 > /dev/null 855 + ip netns exec "$ns0" sysctl net.ipv4.conf.veth1.forwarding=1 > /dev/null 856 + 857 + ip netns exec "$ns0" nft -f /dev/stdin <<EOF 858 + table $family nat { 859 + chain postrouting { 860 + type nat hook postrouting priority 0; policy accept; 861 + meta oif veth0 masquerade 862 + } 863 + } 864 + EOF 865 + if [ $? -ne 0 ]; then 866 + echo "SKIP: Could not add $family masquerade hook" 867 + return $ksft_skip 868 + fi 869 + 870 + # test default behaviour. Packet from ns1 to ns0 is redirected to ns2. 871 + test_port_shadow "default" "CLIENT" 872 + 873 + # test packet filter based mitigation: prevent forwarding of 874 + # packets claiming to come from the service port. 875 + test_port_shadow_filter "$family" 876 + 877 + # test conntrack based mitigation: connections going or coming 878 + # from router:service bypass connection tracking. 879 + test_port_shadow_notrack "$family" 880 + 881 + # test nat based mitigation: forwarded packets coming from service port 882 + # are masqueraded with random highport. 
883 + test_port_shadow_pat "$family" 884 + 885 + ip netns exec "$ns0" nft delete table $family nat 886 + } 744 887 745 888 # ip netns exec "$ns0" ping -c 1 -q 10.0.$i.99 746 889 for i in 0 1 2; do ··· 1003 860 reset_counters 1004 861 $test_inet_nat && test_redirect inet 1005 862 $test_inet_nat && test_redirect6 inet 863 + 864 + test_port_shadowing 1006 865 1007 866 if [ $ret -ne 0 ];then 1008 867 echo -n "FAIL: "
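The mitigations the new tests exercise can be condensed into one illustrative standalone ruleset; this is a sketch only, not part of the selftest (the table and chain names are made up), and it assumes the same topology as above: ns0 is the router, with veth0 facing ns1 and veth1 facing the masqueraded client. Either rule alone is enough to keep the shadow conntrack entry from hijacking the router's service port:

```nft
# Sketch: packet-filter and sport-remap defenses against port shadowing.
table ip mitigations {
	chain forward {
		type filter hook forward priority 0; policy accept;
		# drop internal packets claiming to come from the service port
		meta iif veth1 udp sport 1405 drop
	}
	chain postrouting {
		type nat hook postrouting priority -1; policy accept;
		# or: remap low source ports to a random highport on masquerade,
		# so no masqueraded flow can occupy the service port
		meta iif veth1 udp sport <= 1405 masquerade to : 1406-65535 random
	}
}
```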
+20 -3
tools/testing/selftests/vm/userfaultfd.c
··· 414 414 uffd_test_ops->allocate_area((void **)&area_src); 415 415 uffd_test_ops->allocate_area((void **)&area_dst); 416 416 417 - uffd_test_ops->release_pages(area_src); 418 - uffd_test_ops->release_pages(area_dst); 419 - 420 417 userfaultfd_open(features); 421 418 422 419 count_verify = malloc(nr_pages * sizeof(unsigned long long)); ··· 433 436 */ 434 437 *(area_count(area_src, nr) + 1) = 1; 435 438 } 439 + 440 + /* 441 + * After initialization of area_src, we must explicitly release pages 442 + * for area_dst to make sure it's fully empty. Otherwise we could have 443 + * some area_dst pages be erroneously initialized with zero pages, 444 + * hence we could hit memory corruption later in the test. 445 + * 446 + * One example is when THP is globally enabled, above allocate_area() 447 + * calls could have the two areas merged into a single VMA (as they 448 + * will have the same VMA flags so they're mergeable). When we 449 + * initialize the area_src above, it's possible that some part of 450 + * area_dst could have been faulted in via one huge THP that will be 451 + * shared between area_src and area_dst. It could cause some of the 452 + * area_dst pages to not be trapped by missing userfaults. 453 + * 454 + * This release_pages() will guarantee that, even if that happened, 455 + * we'll proactively split the thp and drop any accidentally initialized 456 + * pages within area_dst. 457 + */ 458 + uffd_test_ops->release_pages(area_dst); 436 459 437 460 pipefd = malloc(sizeof(int) * nr_cpus * 2); 438 461 if (!pipefd)
-2
tools/testing/vsock/vsock_diag_test.c
··· 332 332 read_vsock_stat(&sockets); 333 333 334 334 check_no_sockets(&sockets); 335 - 336 - free_sock_stat(&sockets); 337 335 } 338 336 339 337 static void test_listen_socket_server(const struct test_opts *opts)