Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.16-rc5).

No conflicts.

No adjacent changes.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>

+3353 -2033
+5
.mailmap
··· 223 223 Dmitry Safonov <0x7f454c46@gmail.com> <dsafonov@virtuozzo.com> 224 224 Domen Puncer <domen@coderock.org> 225 225 Douglas Gilbert <dougg@torque.net> 226 + Drew Fustini <fustini@kernel.org> <drew@pdp7.com> 227 + <duje@dujemihanovic.xyz> <duje.mihanovic@skole.hr> 226 228 Ed L. Cashin <ecashin@coraid.com> 227 229 Elliot Berman <quic_eberman@quicinc.com> <eberman@codeaurora.org> 228 230 Enric Balletbo i Serra <eballetbo@kernel.org> <enric.balletbo@collabora.com> ··· 832 830 Yusuke Goda <goda.yusuke@renesas.com> 833 831 Zack Rusin <zack.rusin@broadcom.com> <zackr@vmware.com> 834 832 Zhu Yanjun <zyjzyj2000@gmail.com> <yanjunz@nvidia.com> 833 + Zijun Hu <zijun.hu@oss.qualcomm.com> <quic_zijuhu@quicinc.com> 834 + Zijun Hu <zijun.hu@oss.qualcomm.com> <zijuhu@codeaurora.org> 835 + Zijun Hu <zijun_hu@htc.com>
+16
Documentation/ABI/testing/sysfs-edac-scrub
··· 49 49 (RO) Supported minimum scrub cycle duration in seconds 50 50 by the memory scrubber. 51 51 52 + Device-based scrub: returns the minimum scrub cycle 53 + supported by the memory device. 54 + 55 + Region-based scrub: returns the max of minimum scrub cycles 56 + supported by individual memory devices that back the region. 57 + 52 58 What: /sys/bus/edac/devices/<dev-name>/scrubX/max_cycle_duration 53 59 Date: March 2025 54 60 KernelVersion: 6.15 ··· 62 56 Description: 63 57 (RO) Supported maximum scrub cycle duration in seconds 64 58 by the memory scrubber. 59 + 60 + Device-based scrub: returns the maximum scrub cycle supported 61 + by the memory device. 62 + 63 + Region-based scrub: returns the min of maximum scrub cycles 64 + supported by individual memory devices that back the region. 65 + 66 + If the memory device does not provide maximum scrub cycle 67 + information, return the maximum supported value of the scrub 68 + cycle field. 65 69 66 70 What: /sys/bus/edac/devices/<dev-name>/scrubX/current_cycle_duration 67 71 Date: March 2025
-4
Documentation/devicetree/bindings/display/bridge/ti,sn65dsi83.yaml
··· 118 118 ti,lvds-vod-swing-clock-microvolt: 119 119 description: LVDS diferential output voltage <min max> for clock 120 120 lanes in microvolts. 121 - $ref: /schemas/types.yaml#/definitions/uint32-array 122 - minItems: 2 123 121 maxItems: 2 124 122 125 123 ti,lvds-vod-swing-data-microvolt: 126 124 description: LVDS diferential output voltage <min max> for data 127 125 lanes in microvolts. 128 - $ref: /schemas/types.yaml#/definitions/uint32-array 129 - minItems: 2 130 126 maxItems: 2 131 127 132 128 allOf:
+2 -1
Documentation/devicetree/bindings/net/sophgo,sg2044-dwmac.yaml
··· 80 80 interrupt-parent = <&intc>; 81 81 interrupts = <296 IRQ_TYPE_LEVEL_HIGH>; 82 82 interrupt-names = "macirq"; 83 + phy-handle = <&phy0>; 84 + phy-mode = "rgmii-id"; 83 85 resets = <&rst 30>; 84 86 reset-names = "stmmaceth"; 85 87 snps,multicast-filter-bins = <0>; ··· 93 91 snps,mtl-rx-config = <&gmac0_mtl_rx_setup>; 94 92 snps,mtl-tx-config = <&gmac0_mtl_tx_setup>; 95 93 snps,axi-config = <&gmac0_stmmac_axi_setup>; 96 - status = "disabled"; 97 94 98 95 gmac0_mtl_rx_setup: rx-queues-config { 99 96 snps,rx-queues-to-use = <8>;
+1 -1
Documentation/devicetree/bindings/serial/8250.yaml
··· 45 45 - ns16550 46 46 - ns16550a 47 47 then: 48 - anyOf: 48 + oneOf: 49 49 - required: [ clock-frequency ] 50 50 - required: [ clocks ] 51 51
-5
Documentation/devicetree/bindings/serial/altera_jtaguart.txt
··· 1 - Altera JTAG UART 2 - 3 - Required properties: 4 - - compatible : should be "ALTR,juart-1.0" <DEPRECATED> 5 - - compatible : should be "altr,juart-1.0"
-8
Documentation/devicetree/bindings/serial/altera_uart.txt
··· 1 - Altera UART 2 - 3 - Required properties: 4 - - compatible : should be "ALTR,uart-1.0" <DEPRECATED> 5 - - compatible : should be "altr,uart-1.0" 6 - 7 - Optional properties: 8 - - clock-frequency : frequency of the clock input to the UART
+19
Documentation/devicetree/bindings/serial/altr,juart-1.0.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/serial/altr,juart-1.0.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Altera JTAG UART 8 + 9 + maintainers: 10 + - Dinh Nguyen <dinguyen@kernel.org> 11 + 12 + properties: 13 + compatible: 14 + const: altr,juart-1.0 15 + 16 + required: 17 + - compatible 18 + 19 + additionalProperties: false
+25
Documentation/devicetree/bindings/serial/altr,uart-1.0.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/serial/altr,uart-1.0.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Altera UART 8 + 9 + maintainers: 10 + - Dinh Nguyen <dinguyen@kernel.org> 11 + 12 + allOf: 13 + - $ref: /schemas/serial/serial.yaml# 14 + 15 + properties: 16 + compatible: 17 + const: altr,uart-1.0 18 + 19 + clock-frequency: 20 + description: Frequency of the clock input to the UART. 21 + 22 + required: 23 + - compatible 24 + 25 + unevaluatedProperties: false
+1 -1
Documentation/devicetree/bindings/soc/fsl/fsl,ls1028a-reset.yaml
··· 1 1 # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 2 %YAML 1.2 3 3 --- 4 - $id: http://devicetree.org/schemas//soc/fsl/fsl,ls1028a-reset.yaml# 4 + $id: http://devicetree.org/schemas/soc/fsl/fsl,ls1028a-reset.yaml# 5 5 $schema: http://devicetree.org/meta-schemas/core.yaml# 6 6 7 7 title: Freescale Layerscape Reset Registers Module
+3 -1
Documentation/networking/tls.rst
··· 16 16 Creating a TLS connection 17 17 ------------------------- 18 18 19 - First create a new TCP socket and set the TLS ULP. 19 + First create a new TCP socket and once the connection is established set the 20 + TLS ULP. 20 21 21 22 .. code-block:: c 22 23 23 24 sock = socket(AF_INET, SOCK_STREAM, 0); 25 + connect(sock, addr, addrlen); 24 26 setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")); 25 27 26 28 Setting the TLS ULP allows us to set/get TLS socket options. Currently
+1 -1
Documentation/process/maintainer-netdev.rst
··· 312 312 (as of patchwork 2.2.2). 313 313 314 314 Co-posting selftests 315 - -------------------- 315 + ~~~~~~~~~~~~~~~~~~~~ 316 316 317 317 Selftests should be part of the same series as the code changes. 318 318 Specifically for fixes both code change and related test should go into
+32 -5
MAINTAINERS
··· 15555 15555 MELLANOX ETHERNET DRIVER (mlx5e) 15556 15556 M: Saeed Mahameed <saeedm@nvidia.com> 15557 15557 M: Tariq Toukan <tariqt@nvidia.com> 15558 + M: Mark Bloch <mbloch@nvidia.com> 15558 15559 L: netdev@vger.kernel.org 15559 15560 S: Maintained 15560 15561 W: https://www.nvidia.com/networking/ ··· 15625 15624 M: Saeed Mahameed <saeedm@nvidia.com> 15626 15625 M: Leon Romanovsky <leonro@nvidia.com> 15627 15626 M: Tariq Toukan <tariqt@nvidia.com> 15627 + M: Mark Bloch <mbloch@nvidia.com> 15628 15628 L: netdev@vger.kernel.org 15629 15629 L: linux-rdma@vger.kernel.org 15630 15630 S: Maintained ··· 15683 15681 M: Mike Rapoport <rppt@kernel.org> 15684 15682 L: linux-mm@kvack.org 15685 15683 S: Maintained 15684 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git for-next 15685 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git fixes 15686 15686 F: Documentation/core-api/boot-time-mm.rst 15687 15687 F: Documentation/core-api/kho/bindings/memblock/* 15688 15688 F: include/linux/memblock.h ··· 15857 15853 F: mm/numa_emulation.c 15858 15854 F: mm/numa_memblks.c 15859 15855 15856 + MEMORY MANAGEMENT - OOM KILLER 15857 + M: Michal Hocko <mhocko@suse.com> 15858 + R: David Rientjes <rientjes@google.com> 15859 + R: Shakeel Butt <shakeel.butt@linux.dev> 15860 + L: linux-mm@kvack.org 15861 + S: Maintained 15862 + F: include/linux/oom.h 15863 + F: include/trace/events/oom.h 15864 + F: include/uapi/linux/oom.h 15865 + F: mm/oom_kill.c 15866 + 15860 15867 MEMORY MANAGEMENT - PAGE ALLOCATOR 15861 15868 M: Andrew Morton <akpm@linux-foundation.org> 15862 15869 M: Vlastimil Babka <vbabka@suse.cz> ··· 15882 15867 F: include/linux/gfp.h 15883 15868 F: include/linux/page-isolation.h 15884 15869 F: mm/compaction.c 15870 + F: mm/debug_page_alloc.c 15871 + F: mm/fail_page_alloc.c 15885 15872 F: mm/page_alloc.c 15873 + F: mm/page_ext.c 15874 + F: mm/page_frag_cache.c 15886 15875 F: mm/page_isolation.c 15876 + F: mm/page_owner.c 15877 + F: mm/page_poison.c 15878 + F: mm/page_reporting.c 15879 + F: mm/show_mem.c 15880 + F: mm/shuffle.c 15887 15881 15888 15882 MEMORY MANAGEMENT - RECLAIM 15889 15883 M: Andrew Morton <akpm@linux-foundation.org> ··· 15952 15928 MEMORY MANAGEMENT - THP (TRANSPARENT HUGE PAGE) 15953 15929 M: Andrew Morton <akpm@linux-foundation.org> 15954 15930 M: David Hildenbrand <david@redhat.com> 15931 + M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 15955 15932 R: Zi Yan <ziy@nvidia.com> 15956 15933 R: Baolin Wang <baolin.wang@linux.alibaba.com> 15957 - R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 15958 15934 R: Liam R. Howlett <Liam.Howlett@oracle.com> 15959 15935 R: Nico Pache <npache@redhat.com> 15960 15936 R: Ryan Roberts <ryan.roberts@arm.com> ··· 21205 21181 L: netdev@vger.kernel.org 21206 21182 L: linux-renesas-soc@vger.kernel.org 21207 21183 S: Maintained 21208 - F: Documentation/devicetree/bindings/net/renesas,r9a09g057-gbeth.yaml 21184 + F: Documentation/devicetree/bindings/net/renesas,rzv2h-gbeth.yaml 21209 21185 F: drivers/net/ethernet/stmicro/stmmac/dwmac-renesas-gbeth.c 21210 21186 21211 21187 RENESAS RZ/V2H(P) USB2PHY PORT RESET DRIVER ··· 21417 21393 K: spacemit 21418 21394 21419 21395 RISC-V THEAD SoC SUPPORT 21420 - M: Drew Fustini <drew@pdp7.com> 21396 + M: Drew Fustini <fustini@kernel.org> 21421 21397 M: Guo Ren <guoren@kernel.org> 21422 21398 M: Fu Wei <wefu@redhat.com> 21423 21399 L: linux-riscv@lists.infradead.org ··· 22593 22569 F: drivers/misc/sgi-xp/ 22594 22570 22595 22571 SHARED MEMORY COMMUNICATIONS (SMC) SOCKETS 22572 + M: D. Wythe <alibuda@linux.alibaba.com> 22573 + M: Dust Li <dust.li@linux.alibaba.com> 22574 + M: Sidraya Jayagond <sidraya@linux.ibm.com> 22596 22575 M: Wenjia Zhang <wenjia@linux.ibm.com> 22597 - M: Jan Karcher <jaka@linux.ibm.com> 22598 - R: D. Wythe <alibuda@linux.alibaba.com> 22576 + R: Mahanta Jambigi <mjambigi@linux.ibm.com> 22599 22577 R: Tony Lu <tonylu@linux.alibaba.com> 22600 22578 R: Wen Gu <guwen@linux.alibaba.com> 22601 22579 L: linux-rdma@vger.kernel.org ··· 24108 24082 L: linux-i2c@vger.kernel.org 24109 24083 S: Maintained 24110 24084 F: drivers/i2c/busses/i2c-designware-amdisp.c 24085 + F: include/linux/soc/amd/isp4_misc.h 24111 24086 24112 24087 SYNOPSYS DESIGNWARE MMC/SD/SDIO DRIVER 24113 24088 M: Jaehoon Chung <jh80.chung@samsung.com>
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 16 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc3 5 + EXTRAVERSION = -rc4 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+4 -4
arch/loongarch/include/asm/addrspace.h
··· 18 18 /* 19 19 * This gives the physical RAM offset. 20 20 */ 21 - #ifndef __ASSEMBLY__ 21 + #ifndef __ASSEMBLER__ 22 22 #ifndef PHYS_OFFSET 23 23 #define PHYS_OFFSET _UL(0) 24 24 #endif 25 25 extern unsigned long vm_map_base; 26 - #endif /* __ASSEMBLY__ */ 26 + #endif /* __ASSEMBLER__ */ 27 27 28 28 #ifndef IO_BASE 29 29 #define IO_BASE CSR_DMW0_BASE ··· 66 66 #define FIXADDR_TOP ((unsigned long)(long)(int)0xfffe0000) 67 67 #endif 68 68 69 - #ifdef __ASSEMBLY__ 69 + #ifdef __ASSEMBLER__ 70 70 #define _ATYPE_ 71 71 #define _ATYPE32_ 72 72 #define _ATYPE64_ ··· 85 85 /* 86 86 * 32/64-bit LoongArch address spaces 87 87 */ 88 - #ifdef __ASSEMBLY__ 88 + #ifdef __ASSEMBLER__ 89 89 #define _ACAST32_ 90 90 #define _ACAST64_ 91 91 #else
+2 -2
arch/loongarch/include/asm/alternative-asm.h
··· 2 2 #ifndef _ASM_ALTERNATIVE_ASM_H 3 3 #define _ASM_ALTERNATIVE_ASM_H 4 4 5 - #ifdef __ASSEMBLY__ 5 + #ifdef __ASSEMBLER__ 6 6 7 7 #include <asm/asm.h> 8 8 ··· 77 77 .previous 78 78 .endm 79 79 80 - #endif /* __ASSEMBLY__ */ 80 + #endif /* __ASSEMBLER__ */ 81 81 82 82 #endif /* _ASM_ALTERNATIVE_ASM_H */
+2 -2
arch/loongarch/include/asm/alternative.h
··· 2 2 #ifndef _ASM_ALTERNATIVE_H 3 3 #define _ASM_ALTERNATIVE_H 4 4 5 - #ifndef __ASSEMBLY__ 5 + #ifndef __ASSEMBLER__ 6 6 7 7 #include <linux/types.h> 8 8 #include <linux/stddef.h> ··· 106 106 #define alternative_2(oldinstr, newinstr1, feature1, newinstr2, feature2) \ 107 107 (asm volatile(ALTERNATIVE_2(oldinstr, newinstr1, feature1, newinstr2, feature2) ::: "memory")) 108 108 109 - #endif /* __ASSEMBLY__ */ 109 + #endif /* __ASSEMBLER__ */ 110 110 111 111 #endif /* _ASM_ALTERNATIVE_H */
+3 -3
arch/loongarch/include/asm/asm-extable.h
··· 7 7 #define EX_TYPE_UACCESS_ERR_ZERO 2 8 8 #define EX_TYPE_BPF 3 9 9 10 - #ifdef __ASSEMBLY__ 10 + #ifdef __ASSEMBLER__ 11 11 12 12 #define __ASM_EXTABLE_RAW(insn, fixup, type, data) \ 13 13 .pushsection __ex_table, "a"; \ ··· 22 22 __ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_FIXUP, 0) 23 23 .endm 24 24 25 - #else /* __ASSEMBLY__ */ 25 + #else /* __ASSEMBLER__ */ 26 26 27 27 #include <linux/bits.h> 28 28 #include <linux/stringify.h> ··· 60 60 #define _ASM_EXTABLE_UACCESS_ERR(insn, fixup, err) \ 61 61 _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, zero) 62 62 63 - #endif /* __ASSEMBLY__ */ 63 + #endif /* __ASSEMBLER__ */ 64 64 65 65 #endif /* __ASM_ASM_EXTABLE_H */
+4 -4
arch/loongarch/include/asm/asm.h
··· 110 110 #define LONG_SRA srai.w 111 111 #define LONG_SRAV sra.w 112 112 113 - #ifdef __ASSEMBLY__ 113 + #ifdef __ASSEMBLER__ 114 114 #define LONG .word 115 115 #endif 116 116 #define LONGSIZE 4 ··· 131 131 #define LONG_SRA srai.d 132 132 #define LONG_SRAV sra.d 133 133 134 - #ifdef __ASSEMBLY__ 134 + #ifdef __ASSEMBLER__ 135 135 #define LONG .dword 136 136 #endif 137 137 #define LONGSIZE 8 ··· 158 158 159 159 #define PTR_SCALESHIFT 2 160 160 161 - #ifdef __ASSEMBLY__ 161 + #ifdef __ASSEMBLER__ 162 162 #define PTR .word 163 163 #endif 164 164 #define PTRSIZE 4 ··· 181 181 182 182 #define PTR_SCALESHIFT 3 183 183 184 - #ifdef __ASSEMBLY__ 184 + #ifdef __ASSEMBLER__ 185 185 #define PTR .dword 186 186 #endif 187 187 #define PTRSIZE 8
+2 -2
arch/loongarch/include/asm/cpu.h
··· 46 46 47 47 #define PRID_PRODUCT_MASK 0x0fff 48 48 49 - #if !defined(__ASSEMBLY__) 49 + #if !defined(__ASSEMBLER__) 50 50 51 51 enum cpu_type_enum { 52 52 CPU_UNKNOWN, ··· 55 55 CPU_LAST 56 56 }; 57 57 58 - #endif /* !__ASSEMBLY */ 58 + #endif /* !__ASSEMBLER__ */ 59 59 60 60 /* 61 61 * ISA Level encodings
+2 -2
arch/loongarch/include/asm/ftrace.h
··· 14 14 15 15 #define MCOUNT_INSN_SIZE 4 /* sizeof mcount call */ 16 16 17 - #ifndef __ASSEMBLY__ 17 + #ifndef __ASSEMBLER__ 18 18 19 19 #ifndef CONFIG_DYNAMIC_FTRACE 20 20 ··· 84 84 85 85 #endif 86 86 87 - #endif /* __ASSEMBLY__ */ 87 + #endif /* __ASSEMBLER__ */ 88 88 89 89 #endif /* CONFIG_FUNCTION_TRACER */ 90 90
+3 -3
arch/loongarch/include/asm/gpr-num.h
··· 2 2 #ifndef __ASM_GPR_NUM_H 3 3 #define __ASM_GPR_NUM_H 4 4 5 - #ifdef __ASSEMBLY__ 5 + #ifdef __ASSEMBLER__ 6 6 7 7 .equ .L__gpr_num_zero, 0 8 8 .irp num,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31 ··· 25 25 .equ .L__gpr_num_$s\num, 23 + \num 26 26 .endr 27 27 28 - #else /* __ASSEMBLY__ */ 28 + #else /* __ASSEMBLER__ */ 29 29 30 30 #define __DEFINE_ASM_GPR_NUMS \ 31 31 " .equ .L__gpr_num_zero, 0\n" \ ··· 47 47 " .equ .L__gpr_num_$s\\num, 23 + \\num\n" \ 48 48 " .endr\n" \ 49 49 50 - #endif /* __ASSEMBLY__ */ 50 + #endif /* __ASSEMBLER__ */ 51 51 52 52 #endif /* __ASM_GPR_NUM_H */
+2 -2
arch/loongarch/include/asm/irqflags.h
··· 5 5 #ifndef _ASM_IRQFLAGS_H 6 6 #define _ASM_IRQFLAGS_H 7 7 8 - #ifndef __ASSEMBLY__ 8 + #ifndef __ASSEMBLER__ 9 9 10 10 #include <linux/compiler.h> 11 11 #include <linux/stringify.h> ··· 80 80 return arch_irqs_disabled_flags(arch_local_save_flags()); 81 81 } 82 82 83 - #endif /* #ifndef __ASSEMBLY__ */ 83 + #endif /* #ifndef __ASSEMBLER__ */ 84 84 85 85 #endif /* _ASM_IRQFLAGS_H */
+2 -2
arch/loongarch/include/asm/jump_label.h
··· 7 7 #ifndef __ASM_JUMP_LABEL_H 8 8 #define __ASM_JUMP_LABEL_H 9 9 10 - #ifndef __ASSEMBLY__ 10 + #ifndef __ASSEMBLER__ 11 11 12 12 #include <linux/types.h> 13 13 ··· 50 50 return true; 51 51 } 52 52 53 - #endif /* __ASSEMBLY__ */ 53 + #endif /* __ASSEMBLER__ */ 54 54 #endif /* __ASM_JUMP_LABEL_H */
+1 -1
arch/loongarch/include/asm/kasan.h
··· 2 2 #ifndef __ASM_KASAN_H 3 3 #define __ASM_KASAN_H 4 4 5 - #ifndef __ASSEMBLY__ 5 + #ifndef __ASSEMBLER__ 6 6 7 7 #include <linux/linkage.h> 8 8 #include <linux/mmzone.h>
+8 -8
arch/loongarch/include/asm/loongarch.h
··· 9 9 #include <linux/linkage.h> 10 10 #include <linux/types.h> 11 11 12 - #ifndef __ASSEMBLY__ 12 + #ifndef __ASSEMBLER__ 13 13 #include <larchintrin.h> 14 14 15 15 /* CPUCFG */ 16 16 #define read_cpucfg(reg) __cpucfg(reg) 17 17 18 - #endif /* !__ASSEMBLY__ */ 18 + #endif /* !__ASSEMBLER__ */ 19 19 20 - #ifdef __ASSEMBLY__ 20 + #ifdef __ASSEMBLER__ 21 21 22 22 /* LoongArch Registers */ 23 23 #define REG_ZERO 0x0 ··· 53 53 #define REG_S7 0x1e 54 54 #define REG_S8 0x1f 55 55 56 - #endif /* __ASSEMBLY__ */ 56 + #endif /* __ASSEMBLER__ */ 57 57 58 58 /* Bit fields for CPUCFG registers */ 59 59 #define LOONGARCH_CPUCFG0 0x0 ··· 171 171 * SW emulation for KVM hypervirsor, see arch/loongarch/include/uapi/asm/kvm_para.h 172 172 */ 173 173 174 - #ifndef __ASSEMBLY__ 174 + #ifndef __ASSEMBLER__ 175 175 176 176 /* CSR */ 177 177 #define csr_read32(reg) __csrrd_w(reg) ··· 187 187 #define iocsr_write32(val, reg) __iocsrwr_w(val, reg) 188 188 #define iocsr_write64(val, reg) __iocsrwr_d(val, reg) 189 189 190 - #endif /* !__ASSEMBLY__ */ 190 + #endif /* !__ASSEMBLER__ */ 191 191 192 192 /* CSR register number */ 193 193 ··· 1195 1195 #define LOONGARCH_IOCSR_EXTIOI_ROUTE_BASE 0x1c00 1196 1196 #define IOCSR_EXTIOI_VECTOR_NUM 256 1197 1197 1198 - #ifndef __ASSEMBLY__ 1198 + #ifndef __ASSEMBLER__ 1199 1199 1200 1200 static __always_inline u64 drdtime(void) 1201 1201 { ··· 1357 1357 #define clear_csr_estat(val) \ 1358 1358 csr_xchg32(~(val), val, LOONGARCH_CSR_ESTAT) 1359 1359 1360 - #endif /* __ASSEMBLY__ */ 1360 + #endif /* __ASSEMBLER__ */ 1361 1361 1362 1362 /* Generic EntryLo bit definitions */ 1363 1363 #define ENTRYLO_V (_ULCAST_(1) << 0)
+2 -2
arch/loongarch/include/asm/orc_types.h
··· 34 34 #define ORC_TYPE_REGS 3 35 35 #define ORC_TYPE_REGS_PARTIAL 4 36 36 37 - #ifndef __ASSEMBLY__ 37 + #ifndef __ASSEMBLER__ 38 38 /* 39 39 * This struct is more or less a vastly simplified version of the DWARF Call 40 40 * Frame Information standard. It contains only the necessary parts of DWARF ··· 53 53 unsigned int type:3; 54 54 unsigned int signal:1; 55 55 }; 56 - #endif /* __ASSEMBLY__ */ 56 + #endif /* __ASSEMBLER__ */ 57 57 58 58 #endif /* _ORC_TYPES_H */
+2 -2
arch/loongarch/include/asm/page.h
··· 15 15 #define HPAGE_MASK (~(HPAGE_SIZE - 1)) 16 16 #define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT) 17 17 18 - #ifndef __ASSEMBLY__ 18 + #ifndef __ASSEMBLER__ 19 19 20 20 #include <linux/kernel.h> 21 21 #include <linux/pfn.h> ··· 110 110 #include <asm-generic/memory_model.h> 111 111 #include <asm-generic/getorder.h> 112 112 113 - #endif /* !__ASSEMBLY__ */ 113 + #endif /* !__ASSEMBLER__ */ 114 114 115 115 #endif /* _ASM_PAGE_H */
+2 -2
arch/loongarch/include/asm/pgtable-bits.h
··· 92 92 #define PAGE_KERNEL_WUC __pgprot(_PAGE_PRESENT | __READABLE | __WRITEABLE | \ 93 93 _PAGE_GLOBAL | _PAGE_KERN | _CACHE_WUC) 94 94 95 - #ifndef __ASSEMBLY__ 95 + #ifndef __ASSEMBLER__ 96 96 97 97 #define _PAGE_IOREMAP pgprot_val(PAGE_KERNEL_SUC) 98 98 ··· 127 127 return __pgprot(prot); 128 128 } 129 129 130 - #endif /* !__ASSEMBLY__ */ 130 + #endif /* !__ASSEMBLER__ */ 131 131 132 132 #endif /* _ASM_PGTABLE_BITS_H */
+2 -2
arch/loongarch/include/asm/pgtable.h
··· 55 55 56 56 #define USER_PTRS_PER_PGD ((TASK_SIZE64 / PGDIR_SIZE)?(TASK_SIZE64 / PGDIR_SIZE):1) 57 57 58 - #ifndef __ASSEMBLY__ 58 + #ifndef __ASSEMBLER__ 59 59 60 60 #include <linux/mm_types.h> 61 61 #include <linux/mmzone.h> ··· 618 618 #define HAVE_ARCH_UNMAPPED_AREA 619 619 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN 620 620 621 - #endif /* !__ASSEMBLY__ */ 621 + #endif /* !__ASSEMBLER__ */ 622 622 623 623 #endif /* _ASM_PGTABLE_H */
+1 -1
arch/loongarch/include/asm/prefetch.h
··· 8 8 #define Pref_Load 0 9 9 #define Pref_Store 8 10 10 11 - #ifdef __ASSEMBLY__ 11 + #ifdef __ASSEMBLER__ 12 12 13 13 .macro __pref hint addr 14 14 #ifdef CONFIG_CPU_HAS_PREFETCH
+1 -1
arch/loongarch/include/asm/smp.h
··· 39 39 void loongson_cpu_die(unsigned int cpu); 40 40 #endif 41 41 42 - static inline void plat_smp_setup(void) 42 + static inline void __init plat_smp_setup(void) 43 43 { 44 44 loongson_smp_setup(); 45 45 }
+2 -2
arch/loongarch/include/asm/thread_info.h
··· 10 10 11 11 #ifdef __KERNEL__ 12 12 13 - #ifndef __ASSEMBLY__ 13 + #ifndef __ASSEMBLER__ 14 14 15 15 #include <asm/processor.h> 16 16 ··· 53 53 54 54 register unsigned long current_stack_pointer __asm__("$sp"); 55 55 56 - #endif /* !__ASSEMBLY__ */ 56 + #endif /* !__ASSEMBLER__ */ 57 57 58 58 /* thread information allocation */ 59 59 #define THREAD_SIZE SZ_16K
+1 -1
arch/loongarch/include/asm/types.h
··· 8 8 #include <asm-generic/int-ll64.h> 9 9 #include <uapi/asm/types.h> 10 10 11 - #ifdef __ASSEMBLY__ 11 + #ifdef __ASSEMBLER__ 12 12 #define _ULCAST_ 13 13 #define _U64CAST_ 14 14 #else
+3 -3
arch/loongarch/include/asm/unwind_hints.h
··· 5 5 #include <linux/objtool.h> 6 6 #include <asm/orc_types.h> 7 7 8 - #ifdef __ASSEMBLY__ 8 + #ifdef __ASSEMBLER__ 9 9 10 10 .macro UNWIND_HINT_UNDEFINED 11 11 UNWIND_HINT type=UNWIND_HINT_TYPE_UNDEFINED ··· 23 23 UNWIND_HINT sp_reg=ORC_REG_SP type=UNWIND_HINT_TYPE_CALL 24 24 .endm 25 25 26 - #else /* !__ASSEMBLY__ */ 26 + #else /* !__ASSEMBLER__ */ 27 27 28 28 #define UNWIND_HINT_SAVE \ 29 29 UNWIND_HINT(UNWIND_HINT_TYPE_SAVE, 0, 0, 0) ··· 31 31 #define UNWIND_HINT_RESTORE \ 32 32 UNWIND_HINT(UNWIND_HINT_TYPE_RESTORE, 0, 0, 0) 33 33 34 - #endif /* !__ASSEMBLY__ */ 34 + #endif /* !__ASSEMBLER__ */ 35 35 36 36 #endif /* _ASM_LOONGARCH_UNWIND_HINTS_H */
+2 -2
arch/loongarch/include/asm/vdso/arch_data.h
··· 7 7 #ifndef _VDSO_ARCH_DATA_H 8 8 #define _VDSO_ARCH_DATA_H 9 9 10 - #ifndef __ASSEMBLY__ 10 + #ifndef __ASSEMBLER__ 11 11 12 12 #include <asm/asm.h> 13 13 #include <asm/vdso.h> ··· 20 20 struct vdso_pcpu_data pdata[NR_CPUS]; 21 21 }; 22 22 23 - #endif /* __ASSEMBLY__ */ 23 + #endif /* __ASSEMBLER__ */ 24 24 25 25 #endif
+2 -2
arch/loongarch/include/asm/vdso/getrandom.h
··· 5 5 #ifndef __ASM_VDSO_GETRANDOM_H 6 6 #define __ASM_VDSO_GETRANDOM_H 7 7 8 - #ifndef __ASSEMBLY__ 8 + #ifndef __ASSEMBLER__ 9 9 10 10 #include <asm/unistd.h> 11 11 #include <asm/vdso/vdso.h> ··· 28 28 return ret; 29 29 } 30 30 31 - #endif /* !__ASSEMBLY__ */ 31 + #endif /* !__ASSEMBLER__ */ 32 32 33 33 #endif /* __ASM_VDSO_GETRANDOM_H */
+2 -2
arch/loongarch/include/asm/vdso/gettimeofday.h
··· 7 7 #ifndef __ASM_VDSO_GETTIMEOFDAY_H 8 8 #define __ASM_VDSO_GETTIMEOFDAY_H 9 9 10 - #ifndef __ASSEMBLY__ 10 + #ifndef __ASSEMBLER__ 11 11 12 12 #include <asm/unistd.h> 13 13 #include <asm/vdso/vdso.h> ··· 89 89 } 90 90 #define __arch_vdso_hres_capable loongarch_vdso_hres_capable 91 91 92 - #endif /* !__ASSEMBLY__ */ 92 + #endif /* !__ASSEMBLER__ */ 93 93 94 94 #endif /* __ASM_VDSO_GETTIMEOFDAY_H */
+2 -2
arch/loongarch/include/asm/vdso/processor.h
··· 5 5 #ifndef __ASM_VDSO_PROCESSOR_H 6 6 #define __ASM_VDSO_PROCESSOR_H 7 7 8 - #ifndef __ASSEMBLY__ 8 + #ifndef __ASSEMBLER__ 9 9 10 10 #define cpu_relax() barrier() 11 11 12 - #endif /* __ASSEMBLY__ */ 12 + #endif /* __ASSEMBLER__ */ 13 13 14 14 #endif /* __ASM_VDSO_PROCESSOR_H */
+2 -2
arch/loongarch/include/asm/vdso/vdso.h
··· 7 7 #ifndef _ASM_VDSO_VDSO_H 8 8 #define _ASM_VDSO_VDSO_H 9 9 10 - #ifndef __ASSEMBLY__ 10 + #ifndef __ASSEMBLER__ 11 11 12 12 #include <asm/asm.h> 13 13 #include <asm/page.h> ··· 16 16 17 17 #define VVAR_SIZE (VDSO_NR_PAGES << PAGE_SHIFT) 18 18 19 - #endif /* __ASSEMBLY__ */ 19 + #endif /* __ASSEMBLER__ */ 20 20 21 21 #endif
+2 -2
arch/loongarch/include/asm/vdso/vsyscall.h
··· 2 2 #ifndef __ASM_VDSO_VSYSCALL_H 3 3 #define __ASM_VDSO_VSYSCALL_H 4 4 5 - #ifndef __ASSEMBLY__ 5 + #ifndef __ASSEMBLER__ 6 6 7 7 #include <vdso/datapage.h> 8 8 9 9 /* The asm-generic header needs to be included after the definitions above */ 10 10 #include <asm-generic/vdso/vsyscall.h> 11 11 12 - #endif /* !__ASSEMBLY__ */ 12 + #endif /* !__ASSEMBLER__ */ 13 13 14 14 #endif /* __ASM_VDSO_VSYSCALL_H */
+1
arch/loongarch/kernel/acpi.c
··· 10 10 #include <linux/init.h> 11 11 #include <linux/acpi.h> 12 12 #include <linux/efi-bgrt.h> 13 + #include <linux/export.h> 13 14 #include <linux/irq.h> 14 15 #include <linux/irqdomain.h> 15 16 #include <linux/memblock.h>
+1
arch/loongarch/kernel/alternative.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 + #include <linux/export.h> 2 3 #include <linux/mm.h> 3 4 #include <linux/module.h> 4 5 #include <asm/alternative.h>
+12
arch/loongarch/kernel/efi.c
··· 144 144 if (efi_memmap_init_early(&data) < 0) 145 145 panic("Unable to map EFI memory map.\n"); 146 146 147 + /* 148 + * Reserve the physical memory region occupied by the EFI 149 + * memory map table (header + descriptors). This is crucial 150 + * for kdump, as the kdump kernel relies on this original 151 + * memmap passed by the bootloader. Without reservation, 152 + * this region could be overwritten by the primary kernel. 153 + * Also, set the EFI_PRESERVE_BS_REGIONS flag to indicate that 154 + * critical boot services code/data regions like this are preserved. 155 + */ 156 + memblock_reserve((phys_addr_t)boot_memmap, sizeof(*tbl) + data.size); 157 + set_bit(EFI_PRESERVE_BS_REGIONS, &efi.flags); 158 + 147 159 early_memunmap(tbl, sizeof(*tbl)); 148 160 } 149 161
-1
arch/loongarch/kernel/elf.c
··· 6 6 7 7 #include <linux/binfmts.h> 8 8 #include <linux/elf.h> 9 - #include <linux/export.h> 10 9 #include <linux/sched.h> 11 10 12 11 #include <asm/cpu-features.h>
+1
arch/loongarch/kernel/kfpu.c
··· 4 4 */ 5 5 6 6 #include <linux/cpu.h> 7 + #include <linux/export.h> 7 8 #include <linux/init.h> 8 9 #include <asm/fpu.h> 9 10 #include <asm/smp.h>
-1
arch/loongarch/kernel/paravirt.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - #include <linux/export.h> 3 2 #include <linux/types.h> 4 3 #include <linux/interrupt.h> 5 4 #include <linux/irq_work.h>
+1 -1
arch/loongarch/kernel/time.c
··· 102 102 return 0; 103 103 } 104 104 105 - static unsigned long __init get_loops_per_jiffy(void) 105 + static unsigned long get_loops_per_jiffy(void) 106 106 { 107 107 unsigned long lpj = (unsigned long)const_clock_freq; 108 108
+1
arch/loongarch/kernel/traps.c
··· 13 13 #include <linux/kernel.h> 14 14 #include <linux/kexec.h> 15 15 #include <linux/module.h> 16 + #include <linux/export.h> 16 17 #include <linux/extable.h> 17 18 #include <linux/mm.h> 18 19 #include <linux/sched/mm.h>
+1
arch/loongarch/kernel/unwind_guess.c
··· 3 3 * Copyright (C) 2022 Loongson Technology Corporation Limited 4 4 */ 5 5 #include <asm/unwind.h> 6 + #include <linux/export.h> 6 7 7 8 unsigned long unwind_get_return_address(struct unwind_state *state) 8 9 {
+2 -1
arch/loongarch/kernel/unwind_orc.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 - #include <linux/objtool.h> 2 + #include <linux/export.h> 3 3 #include <linux/module.h> 4 + #include <linux/objtool.h> 4 5 #include <linux/sort.h> 5 6 #include <asm/exception.h> 6 7 #include <asm/orc_header.h>
+1
arch/loongarch/kernel/unwind_prologue.c
··· 3 3 * Copyright (C) 2022 Loongson Technology Corporation Limited 4 4 */ 5 5 #include <linux/cpumask.h> 6 + #include <linux/export.h> 6 7 #include <linux/ftrace.h> 7 8 #include <linux/kallsyms.h> 8 9
+61 -28
arch/loongarch/kvm/intc/eiointc.c
··· 9 9 10 10 static void eiointc_set_sw_coreisr(struct loongarch_eiointc *s) 11 11 { 12 - int ipnum, cpu, irq_index, irq_mask, irq; 12 + int ipnum, cpu, cpuid, irq_index, irq_mask, irq; 13 + struct kvm_vcpu *vcpu; 13 14 14 15 for (irq = 0; irq < EIOINTC_IRQS; irq++) { 15 16 ipnum = s->ipmap.reg_u8[irq / 32]; ··· 21 20 irq_index = irq / 32; 22 21 irq_mask = BIT(irq & 0x1f); 23 22 24 - cpu = s->coremap.reg_u8[irq]; 23 + cpuid = s->coremap.reg_u8[irq]; 24 + vcpu = kvm_get_vcpu_by_cpuid(s->kvm, cpuid); 25 + if (!vcpu) 26 + continue; 27 + 28 + cpu = vcpu->vcpu_id; 25 29 if (!!(s->coreisr.reg_u32[cpu][irq_index] & irq_mask)) 26 30 set_bit(irq, s->sw_coreisr[cpu][ipnum]); 27 31 else ··· 72 66 } 73 67 74 68 static inline void eiointc_update_sw_coremap(struct loongarch_eiointc *s, 75 - int irq, void *pvalue, u32 len, bool notify) 69 + int irq, u64 val, u32 len, bool notify) 76 70 { 77 - int i, cpu; 78 - u64 val = *(u64 *)pvalue; 71 + int i, cpu, cpuid; 72 + struct kvm_vcpu *vcpu; 79 73 80 74 for (i = 0; i < len; i++) { 81 - cpu = val & 0xff; 75 + cpuid = val & 0xff; 82 76 val = val >> 8; 83 77 84 78 if (!(s->status & BIT(EIOINTC_ENABLE_CPU_ENCODE))) { 85 - cpu = ffs(cpu) - 1; 86 - cpu = (cpu >= 4) ? 0 : cpuid; 79 + cpuid = ffs(cpuid) - 1; 80 + cpuid = (cpuid >= 4) ? 0 : cpuid; 87 81 } 88 82 83 + vcpu = kvm_get_vcpu_by_cpuid(s->kvm, cpuid); 84 + if (!vcpu) 85 + continue; 86 + 87 + cpu = vcpu->vcpu_id; 89 88 if (s->sw_coremap[irq + i] == cpu) 90 89 continue; 91 90 ··· 316 305 return -EINVAL; 317 306 } 318 307 308 + if (addr & (len - 1)) { 309 + kvm_err("%s: eiointc not aligned addr %llx len %d\n", __func__, addr, len); 310 + return -EINVAL; 311 + } 312 + 319 313 vcpu->kvm->stat.eiointc_read_exits++; 320 314 spin_lock_irqsave(&eiointc->lock, flags); 321 315 switch (len) { ··· 414 398 irq = offset - EIOINTC_COREMAP_START; 415 399 index = irq; 416 400 s->coremap.reg_u8[index] = data; 417 - eiointc_update_sw_coremap(s, irq, (void *)&data, sizeof(data), true); 401 + eiointc_update_sw_coremap(s, irq, data, sizeof(data), true); 418 402 break; 419 403 default: 420 404 ret = -EINVAL; ··· 452 436 break; 453 437 case EIOINTC_ENABLE_START ... EIOINTC_ENABLE_END: 454 438 index = (offset - EIOINTC_ENABLE_START) >> 1; 455 - old_data = s->enable.reg_u32[index]; 439 + old_data = s->enable.reg_u16[index]; 456 440 s->enable.reg_u16[index] = data; 457 441 /* 458 442 * 1: enable irq. 459 443 * update irq when isr is set. 460 444 */ 461 445 data = s->enable.reg_u16[index] & ~old_data & s->isr.reg_u16[index]; 462 - index = index << 1; 463 446 for (i = 0; i < sizeof(data); i++) { 464 447 u8 mask = (data >> (i * 8)) & 0xff; 465 - eiointc_enable_irq(vcpu, s, index + i, mask, 1); 448 + eiointc_enable_irq(vcpu, s, index * 2 + i, mask, 1); 466 449 } 467 450 /* 468 451 * 0: disable irq. ··· 470 455 data = ~s->enable.reg_u16[index] & old_data & s->isr.reg_u16[index]; 471 456 for (i = 0; i < sizeof(data); i++) { 472 457 u8 mask = (data >> (i * 8)) & 0xff; 473 - eiointc_enable_irq(vcpu, s, index, mask, 0); 458 + eiointc_enable_irq(vcpu, s, index * 2 + i, mask, 0); 474 459 } 475 460 break; 476 461 case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END: ··· 499 484 irq = offset - EIOINTC_COREMAP_START; 500 485 index = irq >> 1; 501 486 s->coremap.reg_u16[index] = data; 502 - eiointc_update_sw_coremap(s, irq, (void *)&data, sizeof(data), true); 487 + eiointc_update_sw_coremap(s, irq, data, sizeof(data), true); 503 488 break; 504 489 default: 505 490 ret = -EINVAL; ··· 544 529 * update irq when isr is set. 545 530 */ 546 531 data = s->enable.reg_u32[index] & ~old_data & s->isr.reg_u32[index]; 547 - index = index << 2; 548 532 for (i = 0; i < sizeof(data); i++) { 549 533 u8 mask = (data >> (i * 8)) & 0xff; 550 - eiointc_enable_irq(vcpu, s, index + i, mask, 1); 534 + eiointc_enable_irq(vcpu, s, index * 4 + i, mask, 1); 551 535 } 552 536 /* 553 537 * 0: disable irq. ··· 555 541 data = ~s->enable.reg_u32[index] & old_data & s->isr.reg_u32[index]; 556 542 for (i = 0; i < sizeof(data); i++) { 557 543 u8 mask = (data >> (i * 8)) & 0xff; 558 - eiointc_enable_irq(vcpu, s, index, mask, 0); 544 + eiointc_enable_irq(vcpu, s, index * 4 + i, mask, 0); 559 545 } 560 546 break; 561 547 case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END: ··· 584 570 irq = offset - EIOINTC_COREMAP_START; 585 571 index = irq >> 2; 586 572 s->coremap.reg_u32[index] = data; 587 - eiointc_update_sw_coremap(s, irq, (void *)&data, sizeof(data), true); 573 + eiointc_update_sw_coremap(s, irq, data, sizeof(data), true); 588 574 break; 589 575 default: 590 576 ret = -EINVAL; ··· 629 615 * update irq when isr is set. 630 616 */ 631 617 data = s->enable.reg_u64[index] & ~old_data & s->isr.reg_u64[index]; 632 - index = index << 3; 633 618 for (i = 0; i < sizeof(data); i++) { 634 619 u8 mask = (data >> (i * 8)) & 0xff; 635 - eiointc_enable_irq(vcpu, s, index + i, mask, 1); 620 + eiointc_enable_irq(vcpu, s, index * 8 + i, mask, 1); 636 621 } 637 622 /* 638 623 * 0: disable irq. ··· 640 627 data = ~s->enable.reg_u64[index] & old_data & s->isr.reg_u64[index]; 641 628 for (i = 0; i < sizeof(data); i++) { 642 629 u8 mask = (data >> (i * 8)) & 0xff; 643 - eiointc_enable_irq(vcpu, s, index, mask, 0); 630 + eiointc_enable_irq(vcpu, s, index * 8 + i, mask, 0); 644 631 } 645 632 break; 646 633 case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END: ··· 669 656 irq = offset - EIOINTC_COREMAP_START; 670 657 index = irq >> 3; 671 658 s->coremap.reg_u64[index] = data; 672 - eiointc_update_sw_coremap(s, irq, (void *)&data, sizeof(data), true); 659 + eiointc_update_sw_coremap(s, irq, data, sizeof(data), true); 673 660 break; 674 661 default: 675 662 ret = -EINVAL; ··· 689 676 690 677 if (!eiointc) { 691 678 kvm_err("%s: eiointc irqchip not valid!\n", __func__); 679 + return -EINVAL; 680 + } 681 + 682 + if (addr & (len - 1)) { 683 + kvm_err("%s: eiointc not aligned addr %llx len %d\n", __func__, addr, len); 692 684 return -EINVAL; 693 685 } 694 686 ··· 805 787 int ret = 0; 806 788 unsigned long flags; 807 789 unsigned long type = (unsigned long)attr->attr; 808 - u32 i, start_irq; 790 + u32 i, start_irq, val; 809 791 void __user *data; 810 792 struct loongarch_eiointc *s = dev->kvm->arch.eiointc; ··· 813 795 spin_lock_irqsave(&s->lock, flags); 814 796 switch (type) { 815 797 case KVM_DEV_LOONGARCH_EXTIOI_CTRL_INIT_NUM_CPU: 816 - if (copy_from_user(&s->num_cpu, data, 4)) 798 + if (copy_from_user(&val, data, 4)) 817 799 ret = -EFAULT; 800 + else { 801 + if (val >= EIOINTC_ROUTE_MAX_VCPUS) 802 + ret = -EINVAL; 803 + else 804 + s->num_cpu = val; 805 + } 818 806 break; 819 807 case KVM_DEV_LOONGARCH_EXTIOI_CTRL_INIT_FEATURE: 820 808 if (copy_from_user(&s->features, data, 4)) ··· 833 809 for (i = 0; i < (EIOINTC_IRQS / 4); i++) { 834 810 start_irq = i * 4; 835 811 eiointc_update_sw_coremap(s, start_irq, 836 - (void *)&s->coremap.reg_u32[i], sizeof(u32), false); 812 + s->coremap.reg_u32[i], sizeof(u32), false); 837 813 break; 838 814 default: ··· 848
824 struct kvm_device_attr *attr, 849 825 bool is_write) 850 826 { 851 - int addr, cpuid, offset, ret = 0; 827 + int addr, cpu, offset, ret = 0; 852 828 unsigned long flags; 853 829 void *p = NULL; 854 830 void __user *data; ··· 856 832 857 833 s = dev->kvm->arch.eiointc; 858 834 addr = attr->attr; 859 - cpuid = addr >> 16; 835 + cpu = addr >> 16; 860 836 addr &= 0xffff; 861 837 data = (void __user *)attr->addr; 862 838 switch (addr) { ··· 881 857 p = &s->isr.reg_u32[offset]; 882 858 break; 883 859 case EIOINTC_COREISR_START ... EIOINTC_COREISR_END: 860 + if (cpu >= s->num_cpu) 861 + return -EINVAL; 862 + 884 863 offset = (addr - EIOINTC_COREISR_START) / 4; 885 - p = &s->coreisr.reg_u32[cpuid][offset]; 864 + p = &s->coreisr.reg_u32[cpu][offset]; 886 865 break; 887 866 case EIOINTC_COREMAP_START ... EIOINTC_COREMAP_END: 888 867 offset = (addr - EIOINTC_COREMAP_START) / 4; ··· 926 899 data = (void __user *)attr->addr; 927 900 switch (addr) { 928 901 case KVM_DEV_LOONGARCH_EXTIOI_SW_STATUS_NUM_CPU: 902 + if (is_write) 903 + return ret; 904 + 929 905 p = &s->num_cpu; 930 906 break; 931 907 case KVM_DEV_LOONGARCH_EXTIOI_SW_STATUS_FEATURE: 908 + if (is_write) 909 + return ret; 910 + 932 911 p = &s->features; 933 912 break; 934 913 case KVM_DEV_LOONGARCH_EXTIOI_SW_STATUS_STATE:
+1
arch/loongarch/lib/crc32-loongarch.c
··· 11 11 12 12 #include <asm/cpu-features.h> 13 13 #include <linux/crc32.h> 14 + #include <linux/export.h> 14 15 #include <linux/module.h> 15 16 #include <linux/unaligned.h> 16 17
+1
arch/loongarch/lib/csum.c
··· 2 2 // Copyright (C) 2019-2020 Arm Ltd. 3 3 4 4 #include <linux/compiler.h> 5 + #include <linux/export.h> 5 6 #include <linux/kasan-checks.h> 6 7 #include <linux/kernel.h> 7 8
+2 -2
arch/loongarch/mm/ioremap.c
··· 16 16 17 17 } 18 18 19 - void *early_memremap_ro(resource_size_t phys_addr, unsigned long size) 19 + void * __init early_memremap_ro(resource_size_t phys_addr, unsigned long size) 20 20 { 21 21 return early_memremap(phys_addr, size); 22 22 } 23 23 24 - void *early_memremap_prot(resource_size_t phys_addr, unsigned long size, 24 + void * __init early_memremap_prot(resource_size_t phys_addr, unsigned long size, 25 25 unsigned long prot_val) 26 26 { 27 27 return early_memremap(phys_addr, size);
-1
arch/loongarch/pci/pci.c
··· 3 3 * Copyright (C) 2020-2022 Loongson Technology Corporation Limited 4 4 */ 5 5 #include <linux/kernel.h> 6 - #include <linux/export.h> 7 6 #include <linux/init.h> 8 7 #include <linux/acpi.h> 9 8 #include <linux/types.h>
-1
arch/riscv/include/asm/pgtable.h
··· 1075 1075 */ 1076 1076 #ifdef CONFIG_64BIT 1077 1077 #define TASK_SIZE_64 (PGDIR_SIZE * PTRS_PER_PGD / 2) 1078 - #define TASK_SIZE_MAX LONG_MAX 1079 1078 1080 1079 #ifdef CONFIG_COMPAT 1081 1080 #define TASK_SIZE_32 (_AC(0x80000000, UL) - PAGE_SIZE)
+1 -1
arch/riscv/include/asm/runtime-const.h
··· 206 206 addi_insn_mask &= 0x07fff; 207 207 } 208 208 209 - if (lower_immediate & 0x00000fff) { 209 + if (lower_immediate & 0x00000fff || lui_insn == RISCV_INSN_NOP4) { 210 210 /* replace upper 12 bits of addi with lower 12 bits of val */ 211 211 addi_insn &= addi_insn_mask; 212 212 addi_insn |= (lower_immediate & 0x00000fff) << 20;
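The runtime-const fix above patches the `addi` even when `lower_immediate & 0xfff` is zero, because a `lui` that was already replaced with a NOP leaves `addi` as the only instruction materializing the value. A standalone sketch of the `lui`/`addi` split the patcher relies on (helper names are illustrative, not kernel API): since `addi`'s 12-bit immediate is sign-extended, the upper part is rounded by `+0x800` first.

```c
#include <stdint.h>

/* Split a 32-bit value into a lui immediate and a sign-extended addi
 * immediate, the standard RISC-V materialization pair. */
static void split_imm32(uint32_t val, uint32_t *hi20, int32_t *lo12)
{
	*hi20 = (val + 0x800u) & 0xfffff000u;	/* goes into lui */
	*lo12 = (int32_t)(val - *hi20);		/* in [-2048, 2047], goes into addi */
}

/* Reassemble the value the way the CPU does: lui result plus the
 * sign-extended addi immediate. */
static uint32_t roundtrip(uint32_t val)
{
	uint32_t hi;
	int32_t lo;

	split_imm32(val, &hi, &lo);
	return hi + (uint32_t)lo;
}
```

Note the `val == 0x1000` case: the low 12 bits come out as 0, yet the `addi` must still be written, which is exactly the `lui_insn == RISCV_INSN_NOP4` condition added above.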
+2 -1
arch/riscv/include/asm/uaccess.h
··· 127 127 128 128 #ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT 129 129 #define __get_user_8(x, ptr, label) \ 130 + do { \ 130 131 u32 __user *__ptr = (u32 __user *)(ptr); \ 131 132 u32 __lo, __hi; \ 132 133 asm_goto_output( \ ··· 142 141 : : label); \ 143 142 (x) = (__typeof__(x))((__typeof__((x) - (x)))( \ 144 143 (((u64)__hi << 32) | __lo))); \ 145 - 144 + } while (0) 146 145 #else /* !CONFIG_CC_HAS_ASM_GOTO_OUTPUT */ 147 146 #define __get_user_8(x, ptr, label) \ 148 147 do { \
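The `__get_user_8` change wraps the macro body in `do { ... } while (0)`, the standard idiom that turns a multi-statement macro into a single statement. A minimal illustration of why the wrapper matters (hypothetical macros, not kernel code):

```c
/* Without the wrapper, e.g.
 *   #define SWAP_BROKEN(a, b) int tmp = (a); (a) = (b); (b) = tmp
 * a use under an unbraced if/else does not even parse: the declaration
 * is not a valid if-body, and the 'else' loses its 'if'. */
#define SWAP_SAFE(a, b) do { \
	int tmp = (a); (a) = (b); (b) = tmp; \
} while (0)

static int demo(int cond)
{
	int x = 1, y = 2;

	if (cond)
		SWAP_SAFE(x, y);	/* expands to one statement, so the else binds correctly */
	else
		x = 0;
	return x;
}
```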
+1 -1
arch/riscv/include/asm/vdso/getrandom.h
··· 18 18 register unsigned int flags asm("a2") = _flags; 19 19 20 20 asm volatile ("ecall\n" 21 - : "+r" (ret) 21 + : "=r" (ret) 22 22 : "r" (nr), "r" (buffer), "r" (len), "r" (flags) 23 23 : "memory"); 24 24
+6 -6
arch/riscv/include/asm/vector.h
··· 205 205 THEAD_VSETVLI_T4X0E8M8D1 206 206 THEAD_VSB_V_V0T0 207 207 "add t0, t0, t4\n\t" 208 - THEAD_VSB_V_V0T0 208 + THEAD_VSB_V_V8T0 209 209 "add t0, t0, t4\n\t" 210 - THEAD_VSB_V_V0T0 210 + THEAD_VSB_V_V16T0 211 211 "add t0, t0, t4\n\t" 212 - THEAD_VSB_V_V0T0 212 + THEAD_VSB_V_V24T0 213 213 : : "r" (datap) : "memory", "t0", "t4"); 214 214 } else { 215 215 asm volatile ( ··· 241 241 THEAD_VSETVLI_T4X0E8M8D1 242 242 THEAD_VLB_V_V0T0 243 243 "add t0, t0, t4\n\t" 244 - THEAD_VLB_V_V0T0 244 + THEAD_VLB_V_V8T0 245 245 "add t0, t0, t4\n\t" 246 - THEAD_VLB_V_V0T0 246 + THEAD_VLB_V_V16T0 247 247 "add t0, t0, t4\n\t" 248 - THEAD_VLB_V_V0T0 248 + THEAD_VLB_V_V24T0 249 249 : : "r" (datap) : "memory", "t0", "t4"); 250 250 } else { 251 251 asm volatile (
+1
arch/riscv/kernel/setup.c
··· 50 50 #endif 51 51 ; 52 52 unsigned long boot_cpu_hartid; 53 + EXPORT_SYMBOL_GPL(boot_cpu_hartid); 53 54 54 55 /* 55 56 * Place kernel memory regions on the resource tree so that
+2 -2
arch/riscv/kernel/traps_misaligned.c
··· 454 454 455 455 val.data_u64 = 0; 456 456 if (user_mode(regs)) { 457 - if (copy_from_user_nofault(&val, (u8 __user *)addr, len)) 457 + if (copy_from_user(&val, (u8 __user *)addr, len)) 458 458 return -1; 459 459 } else { 460 460 memcpy(&val, (u8 *)addr, len); ··· 555 555 return -EOPNOTSUPP; 556 556 557 557 if (user_mode(regs)) { 558 - if (copy_to_user_nofault((u8 __user *)addr, &val, len)) 558 + if (copy_to_user((u8 __user *)addr, &val, len)) 559 559 return -1; 560 560 } else { 561 561 memcpy((u8 *)addr, &val, len);
+1 -1
arch/riscv/kernel/vdso/vdso.lds.S
··· 30 30 *(.data .data.* .gnu.linkonce.d.*) 31 31 *(.dynbss) 32 32 *(.bss .bss.* .gnu.linkonce.b.*) 33 - } 33 + } :text 34 34 35 35 .note : { *(.note.*) } :text :note 36 36
+1 -1
arch/riscv/kernel/vendor_extensions/sifive.c
··· 8 8 #include <linux/types.h> 9 9 10 10 /* All SiFive vendor extensions supported in Linux */ 11 - const struct riscv_isa_ext_data riscv_isa_vendor_ext_sifive[] = { 11 + static const struct riscv_isa_ext_data riscv_isa_vendor_ext_sifive[] = { 12 12 __RISCV_ISA_EXT_DATA(xsfvfnrclipxfqf, RISCV_ISA_VENDOR_EXT_XSFVFNRCLIPXFQF), 13 13 __RISCV_ISA_EXT_DATA(xsfvfwmaccqqq, RISCV_ISA_VENDOR_EXT_XSFVFWMACCQQQ), 14 14 __RISCV_ISA_EXT_DATA(xsfvqmaccdod, RISCV_ISA_VENDOR_EXT_XSFVQMACCDOD),
+1 -1
arch/s390/include/asm/ptrace.h
··· 265 265 addr = kernel_stack_pointer(regs) + n * sizeof(long); 266 266 if (!regs_within_kernel_stack(regs, addr)) 267 267 return 0; 268 - return READ_ONCE_NOCHECK(addr); 268 + return READ_ONCE_NOCHECK(*(unsigned long *)addr); 269 269 } 270 270 271 271 /**
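The s390 fix replaces `READ_ONCE_NOCHECK(addr)` with `READ_ONCE_NOCHECK(*(unsigned long *)addr)`: the old form returned the stack address itself rather than the word stored there. A standalone sketch, with a plain volatile access standing in for `READ_ONCE_NOCHECK`:

```c
/* Read the long stored *at* addr; the buggy form would have returned
 * the numeric value of addr itself. */
static unsigned long read_stack_word(unsigned long addr)
{
	return *(volatile unsigned long *)addr;
}

static unsigned long demo(void)
{
	unsigned long word = 0x1234;

	return read_stack_word((unsigned long)&word);
}
```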
+44 -15
arch/s390/pci/pci_event.c
··· 54 54 case PCI_ERS_RESULT_CAN_RECOVER: 55 55 case PCI_ERS_RESULT_RECOVERED: 56 56 case PCI_ERS_RESULT_NEED_RESET: 57 + case PCI_ERS_RESULT_NONE: 57 58 return false; 58 59 default: 59 60 return true; ··· 78 77 if (!driver || !driver->err_handler) 79 78 return false; 80 79 if (!driver->err_handler->error_detected) 81 - return false; 82 - if (!driver->err_handler->slot_reset) 83 - return false; 84 - if (!driver->err_handler->resume) 85 80 return false; 86 81 return true; 87 82 } ··· 103 106 struct zpci_dev *zdev = to_zpci(pdev); 104 107 int rc; 105 108 109 + /* The underlying device may have been disabled by the event */ 110 + if (!zdev_enabled(zdev)) 111 + return PCI_ERS_RESULT_NEED_RESET; 112 + 106 113 pr_info("%s: Unblocking device access for examination\n", pci_name(pdev)); 107 114 rc = zpci_reset_load_store_blocked(zdev); 108 115 if (rc) { ··· 115 114 return PCI_ERS_RESULT_NEED_RESET; 116 115 } 117 116 118 - if (driver->err_handler->mmio_enabled) { 117 + if (driver->err_handler->mmio_enabled) 119 118 ers_res = driver->err_handler->mmio_enabled(pdev); 120 - if (ers_result_indicates_abort(ers_res)) { 121 - pr_info("%s: Automatic recovery failed after MMIO re-enable\n", 122 - pci_name(pdev)); 123 - return ers_res; 124 - } else if (ers_res == PCI_ERS_RESULT_NEED_RESET) { 125 - pr_debug("%s: Driver needs reset to recover\n", pci_name(pdev)); 126 - return ers_res; 127 - } 119 + else 120 + ers_res = PCI_ERS_RESULT_NONE; 121 + 122 + if (ers_result_indicates_abort(ers_res)) { 123 + pr_info("%s: Automatic recovery failed after MMIO re-enable\n", 124 + pci_name(pdev)); 125 + return ers_res; 126 + } else if (ers_res == PCI_ERS_RESULT_NEED_RESET) { 127 + pr_debug("%s: Driver needs reset to recover\n", pci_name(pdev)); 128 + return ers_res; 128 129 } 129 130 130 131 pr_debug("%s: Unblocking DMA\n", pci_name(pdev)); ··· 153 150 return ers_res; 154 151 } 155 152 pdev->error_state = pci_channel_io_normal; 156 - ers_res = driver->err_handler->slot_reset(pdev); 153 + 154 + if 
(driver->err_handler->slot_reset) 155 + ers_res = driver->err_handler->slot_reset(pdev); 156 + else 157 + ers_res = PCI_ERS_RESULT_NONE; 158 + 157 159 if (ers_result_indicates_abort(ers_res)) { 158 160 pr_info("%s: Automatic recovery failed after slot reset\n", pci_name(pdev)); 159 161 return ers_res; ··· 222 214 goto out_unlock; 223 215 } 224 216 225 - if (ers_res == PCI_ERS_RESULT_CAN_RECOVER) { 217 + if (ers_res != PCI_ERS_RESULT_NEED_RESET) { 226 218 ers_res = zpci_event_do_error_state_clear(pdev, driver); 227 219 if (ers_result_indicates_abort(ers_res)) { 228 220 status_str = "failed (abort on MMIO enable)"; ··· 232 224 233 225 if (ers_res == PCI_ERS_RESULT_NEED_RESET) 234 226 ers_res = zpci_event_do_reset(pdev, driver); 227 + 228 + /* 229 + * ers_res can be PCI_ERS_RESULT_NONE either because the driver 230 + * decided to return it, indicating that it abstains from voting 231 + * on how to recover, or because it didn't implement the callback. 232 + * Both cases assume, that if there is nothing else causing a 233 + * disconnect, we recovered successfully. 
234 + */ 235 + if (ers_res == PCI_ERS_RESULT_NONE) 236 + ers_res = PCI_ERS_RESULT_RECOVERED; 235 237 236 238 if (ers_res != PCI_ERS_RESULT_RECOVERED) { 237 239 pr_err("%s: Automatic recovery failed; operator intervention is required\n", ··· 291 273 struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid); 292 274 struct pci_dev *pdev = NULL; 293 275 pci_ers_result_t ers_res; 276 + u32 fh = 0; 277 + int rc; 294 278 295 279 zpci_dbg(3, "err fid:%x, fh:%x, pec:%x\n", 296 280 ccdf->fid, ccdf->fh, ccdf->pec); ··· 301 281 302 282 if (zdev) { 303 283 mutex_lock(&zdev->state_lock); 284 + rc = clp_refresh_fh(zdev->fid, &fh); 285 + if (rc) 286 + goto no_pdev; 287 + if (!fh || ccdf->fh != fh) { 288 + /* Ignore events with stale handles */ 289 + zpci_dbg(3, "err fid:%x, fh:%x (stale %x)\n", 290 + ccdf->fid, fh, ccdf->fh); 291 + goto no_pdev; 292 + } 304 293 zpci_update_fh(zdev, ccdf->fh); 305 294 if (zdev->zbus->bus) 306 295 pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn);
+15 -4
arch/x86/include/asm/debugreg.h
··· 9 9 #include <asm/cpufeature.h> 10 10 #include <asm/msr.h> 11 11 12 + /* 13 + * Define bits that are always set to 1 in DR7, only bit 10 is 14 + * architecturally reserved to '1'. 15 + * 16 + * This is also the init/reset value for DR7. 17 + */ 18 + #define DR7_FIXED_1 0x00000400 19 + 12 20 DECLARE_PER_CPU(unsigned long, cpu_dr7); 13 21 14 22 #ifndef CONFIG_PARAVIRT_XXL ··· 108 100 109 101 static inline void hw_breakpoint_disable(void) 110 102 { 111 - /* Zero the control register for HW Breakpoint */ 112 - set_debugreg(0UL, 7); 103 + /* Reset the control register for HW Breakpoint */ 104 + set_debugreg(DR7_FIXED_1, 7); 113 105 114 106 /* Zero-out the individual HW breakpoint address registers */ 115 107 set_debugreg(0UL, 0); ··· 133 125 return 0; 134 126 135 127 get_debugreg(dr7, 7); 136 - dr7 &= ~0x400; /* architecturally set bit */ 128 + 129 + /* Architecturally set bit */ 130 + dr7 &= ~DR7_FIXED_1; 137 131 if (dr7) 138 - set_debugreg(0, 7); 132 + set_debugreg(DR7_FIXED_1, 7); 133 + 139 134 /* 140 135 * Ensure the compiler doesn't lower the above statements into 141 136 * the critical section; disabling breakpoints late would not
+1 -1
arch/x86/include/asm/kvm_host.h
··· 31 31 32 32 #include <asm/apic.h> 33 33 #include <asm/pvclock-abi.h> 34 + #include <asm/debugreg.h> 34 35 #include <asm/desc.h> 35 36 #include <asm/mtrr.h> 36 37 #include <asm/msr-index.h> ··· 250 249 #define DR7_BP_EN_MASK 0x000000ff 251 250 #define DR7_GE (1 << 9) 252 251 #define DR7_GD (1 << 13) 253 - #define DR7_FIXED_1 0x00000400 254 252 #define DR7_VOLATILE 0xffff2bff 255 253 256 254 #define KVM_GUESTDBG_VALID_MASK \
+20 -1
arch/x86/include/uapi/asm/debugreg.h
··· 15 15 which debugging register was responsible for the trap. The other bits 16 16 are either reserved or not of interest to us. */ 17 17 18 - /* Define reserved bits in DR6 which are always set to 1 */ 18 + /* 19 + * Define bits in DR6 which are set to 1 by default. 20 + * 21 + * This is also the DR6 architectural value following Power-up, Reset or INIT. 22 + * 23 + * Note, with the introduction of Bus Lock Detection (BLD) and Restricted 24 + * Transactional Memory (RTM), the DR6 register has been modified: 25 + * 26 + * 1) BLD flag (bit 11) is no longer reserved to 1 if the CPU supports 27 + * Bus Lock Detection. The assertion of a bus lock could clear it. 28 + * 29 + * 2) RTM flag (bit 16) is no longer reserved to 1 if the CPU supports 30 + * restricted transactional memory. #DB occurred inside an RTM region 31 + * could clear it. 32 + * 33 + * Apparently, DR6.BLD and DR6.RTM are active low bits. 34 + * 35 + * As a result, DR6_RESERVED is an incorrect name now, but it is kept for 36 + * compatibility. 37 + */ 19 38 #define DR6_RESERVED (0xFFFF0FF0) 20 39 21 40 #define DR_TRAP0 (0x1) /* db0 */
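Because DR6.BLD (bit 11) and DR6.RTM (bit 16) are active-low, the handler XORs the raw register with `DR6_RESERVED` so every cause bit reads as 1-when-asserted, as `debug_read_reset_dr6()` in the traps.c hunk does. A minimal sketch of that normalization:

```c
#include <stdint.h>

#define DR6_RESERVED	0xFFFF0FF0u	/* architectural reset value */
#define DR6_BLD		(1u << 11)	/* active-low: cleared on a bus-lock #DB */
#define DR6_RTM		(1u << 16)	/* active-low: cleared on an RTM #DB */

/* Flip the active-low bits to positive polarity: after this, a set bit
 * always means "this cause fired". */
static uint32_t dr6_normalize(uint32_t hw_dr6)
{
	return hw_dr6 ^ DR6_RESERVED;
}
```

With this, a hardware value equal to the reset value normalizes to 0 (no cause), a cleared BLD bit normalizes to a set BLD bit, and a set DR0 trap bit passes through unchanged.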
+10 -14
arch/x86/kernel/cpu/common.c
··· 2243 2243 #endif 2244 2244 #endif 2245 2245 2246 - /* 2247 - * Clear all 6 debug registers: 2248 - */ 2249 - static void clear_all_debug_regs(void) 2246 + static void initialize_debug_regs(void) 2250 2247 { 2251 - int i; 2252 - 2253 - for (i = 0; i < 8; i++) { 2254 - /* Ignore db4, db5 */ 2255 - if ((i == 4) || (i == 5)) 2256 - continue; 2257 - 2258 - set_debugreg(0, i); 2259 - } 2248 + /* Control register first -- to make sure everything is disabled. */ 2249 + set_debugreg(DR7_FIXED_1, 7); 2250 + set_debugreg(DR6_RESERVED, 6); 2251 + /* dr5 and dr4 don't exist */ 2252 + set_debugreg(0, 3); 2253 + set_debugreg(0, 2); 2254 + set_debugreg(0, 1); 2255 + set_debugreg(0, 0); 2260 2256 } 2261 2257 2262 2258 #ifdef CONFIG_KGDB ··· 2413 2417 2414 2418 load_mm_ldt(&init_mm); 2415 2419 2416 - clear_all_debug_regs(); 2420 + initialize_debug_regs(); 2417 2421 dbg_restore_debug_regs(); 2418 2422 2419 2423 doublefault_init_cpu_tss();
+1 -1
arch/x86/kernel/kgdb.c
··· 385 385 struct perf_event *bp; 386 386 387 387 /* Disable hardware debugging while we are in kgdb: */ 388 - set_debugreg(0UL, 7); 388 + set_debugreg(DR7_FIXED_1, 7); 389 389 for (i = 0; i < HBP_NUM; i++) { 390 390 if (!breakinfo[i].enabled) 391 391 continue;
+1 -1
arch/x86/kernel/process_32.c
··· 93 93 94 94 /* Only print out debug registers if they are in their non-default state. */ 95 95 if ((d0 == 0) && (d1 == 0) && (d2 == 0) && (d3 == 0) && 96 - (d6 == DR6_RESERVED) && (d7 == 0x400)) 96 + (d6 == DR6_RESERVED) && (d7 == DR7_FIXED_1)) 97 97 return; 98 98 99 99 printk("%sDR0: %08lx DR1: %08lx DR2: %08lx DR3: %08lx\n",
+1 -1
arch/x86/kernel/process_64.c
··· 133 133 134 134 /* Only print out debug registers if they are in their non-default state. */ 135 135 if (!((d0 == 0) && (d1 == 0) && (d2 == 0) && (d3 == 0) && 136 - (d6 == DR6_RESERVED) && (d7 == 0x400))) { 136 + (d6 == DR6_RESERVED) && (d7 == DR7_FIXED_1))) { 137 137 printk("%sDR0: %016lx DR1: %016lx DR2: %016lx\n", 138 138 log_lvl, d0, d1, d2); 139 139 printk("%sDR3: %016lx DR6: %016lx DR7: %016lx\n",
+21 -13
arch/x86/kernel/traps.c
··· 1022 1022 #endif 1023 1023 } 1024 1024 1025 - static __always_inline unsigned long debug_read_clear_dr6(void) 1025 + static __always_inline unsigned long debug_read_reset_dr6(void) 1026 1026 { 1027 1027 unsigned long dr6; 1028 + 1029 + get_debugreg(dr6, 6); 1030 + dr6 ^= DR6_RESERVED; /* Flip to positive polarity */ 1028 1031 1029 1032 /* 1030 1033 * The Intel SDM says: 1031 1034 * 1032 - * Certain debug exceptions may clear bits 0-3. The remaining 1033 - * contents of the DR6 register are never cleared by the 1034 - * processor. To avoid confusion in identifying debug 1035 - * exceptions, debug handlers should clear the register before 1036 - * returning to the interrupted task. 1035 + * Certain debug exceptions may clear bits 0-3 of DR6. 1037 1036 * 1038 - * Keep it simple: clear DR6 immediately. 1037 + * BLD induced #DB clears DR6.BLD and any other debug 1038 + * exception doesn't modify DR6.BLD. 1039 + * 1040 + * RTM induced #DB clears DR6.RTM and any other debug 1041 + * exception sets DR6.RTM. 1042 + * 1043 + * To avoid confusion in identifying debug exceptions, 1044 + * debug handlers should set DR6.BLD and DR6.RTM, and 1045 + * clear other DR6 bits before returning. 1046 + * 1047 + * Keep it simple: write DR6 with its architectural reset 1048 + * value 0xFFFF0FF0, defined as DR6_RESERVED, immediately. 
1039 1049 */ 1040 - get_debugreg(dr6, 6); 1041 1050 set_debugreg(DR6_RESERVED, 6); 1042 - dr6 ^= DR6_RESERVED; /* Flip to positive polarity */ 1043 1051 1044 1052 return dr6; 1045 1053 } ··· 1247 1239 /* IST stack entry */ 1248 1240 DEFINE_IDTENTRY_DEBUG(exc_debug) 1249 1241 { 1250 - exc_debug_kernel(regs, debug_read_clear_dr6()); 1242 + exc_debug_kernel(regs, debug_read_reset_dr6()); 1251 1243 } 1252 1244 1253 1245 /* User entry, runs on regular task stack */ 1254 1246 DEFINE_IDTENTRY_DEBUG_USER(exc_debug) 1255 1247 { 1256 - exc_debug_user(regs, debug_read_clear_dr6()); 1248 + exc_debug_user(regs, debug_read_reset_dr6()); 1257 1249 } 1258 1250 1259 1251 #ifdef CONFIG_X86_FRED ··· 1272 1264 { 1273 1265 /* 1274 1266 * FRED #DB stores DR6 on the stack in the format which 1275 - * debug_read_clear_dr6() returns for the IDT entry points. 1267 + * debug_read_reset_dr6() returns for the IDT entry points. 1276 1268 */ 1277 1269 unsigned long dr6 = fred_event_data(regs); 1278 1270 ··· 1287 1279 /* 32 bit does not have separate entry points. */ 1288 1280 DEFINE_IDTENTRY_RAW(exc_debug) 1289 1281 { 1290 - unsigned long dr6 = debug_read_clear_dr6(); 1282 + unsigned long dr6 = debug_read_reset_dr6(); 1291 1283 1292 1284 if (user_mode(regs)) 1293 1285 exc_debug_user(regs, dr6);
+2 -2
arch/x86/kvm/x86.c
··· 11035 11035 11036 11036 if (unlikely(vcpu->arch.switch_db_regs && 11037 11037 !(vcpu->arch.switch_db_regs & KVM_DEBUGREG_AUTO_SWITCH))) { 11038 - set_debugreg(0, 7); 11038 + set_debugreg(DR7_FIXED_1, 7); 11039 11039 set_debugreg(vcpu->arch.eff_db[0], 0); 11040 11040 set_debugreg(vcpu->arch.eff_db[1], 1); 11041 11041 set_debugreg(vcpu->arch.eff_db[2], 2); ··· 11044 11044 if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)) 11045 11045 kvm_x86_call(set_dr6)(vcpu, vcpu->arch.dr6); 11046 11046 } else if (unlikely(hw_breakpoint_active())) { 11047 - set_debugreg(0, 7); 11047 + set_debugreg(DR7_FIXED_1, 7); 11048 11048 } 11049 11049 11050 11050 vcpu->arch.host_debugctl = get_debugctlmsr();
+15 -11
block/genhd.c
··· 128 128 static void bdev_count_inflight_rw(struct block_device *part, 129 129 unsigned int inflight[2], bool mq_driver) 130 130 { 131 + int write = 0; 132 + int read = 0; 131 133 int cpu; 132 134 133 135 if (mq_driver) { 134 136 blk_mq_in_driver_rw(part, inflight); 135 - } else { 136 - for_each_possible_cpu(cpu) { 137 - inflight[READ] += part_stat_local_read_cpu( 138 - part, in_flight[READ], cpu); 139 - inflight[WRITE] += part_stat_local_read_cpu( 140 - part, in_flight[WRITE], cpu); 141 - } 137 + return; 142 138 } 143 139 144 - if (WARN_ON_ONCE((int)inflight[READ] < 0)) 145 - inflight[READ] = 0; 146 - if (WARN_ON_ONCE((int)inflight[WRITE] < 0)) 147 - inflight[WRITE] = 0; 140 + for_each_possible_cpu(cpu) { 141 + read += part_stat_local_read_cpu(part, in_flight[READ], cpu); 142 + write += part_stat_local_read_cpu(part, in_flight[WRITE], cpu); 143 + } 144 + 145 + /* 146 + * While iterating all CPUs, some IOs may be issued from a CPU already 147 + * traversed and complete on a CPU that has not yet been traversed, 148 + * causing the inflight number to be negative. 149 + */ 150 + inflight[READ] = read > 0 ? read : 0; 151 + inflight[WRITE] = write > 0 ? write : 0; 148 152 } 149 153 150 154 /**
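The reworked `bdev_count_inflight_rw()` sums signed per-CPU issue/completion deltas and clamps at zero: an IO issued on an already-visited CPU and completed on a not-yet-visited one can make the running total transiently negative. A standalone sketch of that summation (array stands in for the per-CPU counters):

```c
/* Sum per-CPU inflight deltas; a transient negative total from racing
 * issue/complete on different CPUs is reported as 0, not warned about. */
static unsigned int sum_inflight(const int *per_cpu, int ncpu)
{
	int total = 0;

	for (int i = 0; i < ncpu; i++)
		total += per_cpu[i];
	return total > 0 ? total : 0;
}
```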
+47 -78
crypto/wp512.c
··· 21 21 */ 22 22 #include <crypto/internal/hash.h> 23 23 #include <linux/init.h> 24 + #include <linux/kernel.h> 24 25 #include <linux/module.h> 25 - #include <linux/mm.h> 26 - #include <asm/byteorder.h> 27 - #include <linux/types.h> 26 + #include <linux/string.h> 27 + #include <linux/unaligned.h> 28 28 29 29 #define WP512_DIGEST_SIZE 64 30 30 #define WP384_DIGEST_SIZE 48 ··· 37 37 38 38 struct wp512_ctx { 39 39 u8 bitLength[WP512_LENGTHBYTES]; 40 - u8 buffer[WP512_BLOCK_SIZE]; 41 - int bufferBits; 42 - int bufferPos; 43 40 u64 hash[WP512_DIGEST_SIZE/8]; 44 41 }; 45 42 ··· 776 779 * The core Whirlpool transform. 777 780 */ 778 781 779 - static __no_kmsan_checks void wp512_process_buffer(struct wp512_ctx *wctx) { 782 + static __no_kmsan_checks void wp512_process_buffer(struct wp512_ctx *wctx, 783 + const u8 *buffer) { 780 784 int i, r; 781 785 u64 K[8]; /* the round key */ 782 786 u64 block[8]; /* mu(buffer) */ 783 787 u64 state[8]; /* the cipher state */ 784 788 u64 L[8]; 785 - const __be64 *buffer = (const __be64 *)wctx->buffer; 786 789 787 790 for (i = 0; i < 8; i++) 788 - block[i] = be64_to_cpu(buffer[i]); 791 + block[i] = get_unaligned_be64(buffer + i * 8); 789 792 790 793 state[0] = block[0] ^ (K[0] = wctx->hash[0]); 791 794 state[1] = block[1] ^ (K[1] = wctx->hash[1]); ··· 988 991 int i; 989 992 990 993 memset(wctx->bitLength, 0, 32); 991 - wctx->bufferBits = wctx->bufferPos = 0; 992 - wctx->buffer[0] = 0; 993 994 for (i = 0; i < 8; i++) { 994 995 wctx->hash[i] = 0L; 995 996 } ··· 995 1000 return 0; 996 1001 } 997 1002 998 - static int wp512_update(struct shash_desc *desc, const u8 *source, 999 - unsigned int len) 1003 + static void wp512_add_length(u8 *bitLength, u64 value) 1000 1004 { 1001 - struct wp512_ctx *wctx = shash_desc_ctx(desc); 1002 - int sourcePos = 0; 1003 - unsigned int bits_len = len * 8; // convert to number of bits 1004 - int sourceGap = (8 - ((int)bits_len & 7)) & 7; 1005 - int bufferRem = wctx->bufferBits & 7; 1005 + u32 carry; 1006 1006 
int i; 1007 - u32 b, carry; 1008 - u8 *buffer = wctx->buffer; 1009 - u8 *bitLength = wctx->bitLength; 1010 - int bufferBits = wctx->bufferBits; 1011 - int bufferPos = wctx->bufferPos; 1012 1007 1013 - u64 value = bits_len; 1014 1008 for (i = 31, carry = 0; i >= 0 && (carry != 0 || value != 0ULL); i--) { 1015 1009 carry += bitLength[i] + ((u32)value & 0xff); 1016 1010 bitLength[i] = (u8)carry; 1017 1011 carry >>= 8; 1018 1012 value >>= 8; 1019 1013 } 1020 - while (bits_len > 8) { 1021 - b = ((source[sourcePos] << sourceGap) & 0xff) | 1022 - ((source[sourcePos + 1] & 0xff) >> (8 - sourceGap)); 1023 - buffer[bufferPos++] |= (u8)(b >> bufferRem); 1024 - bufferBits += 8 - bufferRem; 1025 - if (bufferBits == WP512_BLOCK_SIZE * 8) { 1026 - wp512_process_buffer(wctx); 1027 - bufferBits = bufferPos = 0; 1028 - } 1029 - buffer[bufferPos] = b << (8 - bufferRem); 1030 - bufferBits += bufferRem; 1031 - bits_len -= 8; 1032 - sourcePos++; 1033 - } 1034 - if (bits_len > 0) { 1035 - b = (source[sourcePos] << sourceGap) & 0xff; 1036 - buffer[bufferPos] |= b >> bufferRem; 1037 - } else { 1038 - b = 0; 1039 - } 1040 - if (bufferRem + bits_len < 8) { 1041 - bufferBits += bits_len; 1042 - } else { 1043 - bufferPos++; 1044 - bufferBits += 8 - bufferRem; 1045 - bits_len -= 8 - bufferRem; 1046 - if (bufferBits == WP512_BLOCK_SIZE * 8) { 1047 - wp512_process_buffer(wctx); 1048 - bufferBits = bufferPos = 0; 1049 - } 1050 - buffer[bufferPos] = b << (8 - bufferRem); 1051 - bufferBits += (int)bits_len; 1052 - } 1053 - 1054 - wctx->bufferBits = bufferBits; 1055 - wctx->bufferPos = bufferPos; 1056 - 1057 - return 0; 1058 1014 } 1059 1015 1060 - static int wp512_final(struct shash_desc *desc, u8 *out) 1016 + static int wp512_update(struct shash_desc *desc, const u8 *source, 1017 + unsigned int len) 1018 + { 1019 + struct wp512_ctx *wctx = shash_desc_ctx(desc); 1020 + unsigned int remain = len % WP512_BLOCK_SIZE; 1021 + u64 bits_len = (len - remain) * 8ull; 1022 + u8 *bitLength = wctx->bitLength; 
1023 + 1024 + wp512_add_length(bitLength, bits_len); 1025 + do { 1026 + wp512_process_buffer(wctx, source); 1027 + source += WP512_BLOCK_SIZE; 1028 + bits_len -= WP512_BLOCK_SIZE * 8; 1029 + } while (bits_len); 1030 + 1031 + return remain; 1032 + } 1033 + 1034 + static int wp512_finup(struct shash_desc *desc, const u8 *src, 1035 + unsigned int bufferPos, u8 *out) 1061 1036 { 1062 1037 struct wp512_ctx *wctx = shash_desc_ctx(desc); 1063 1038 int i; 1064 - u8 *buffer = wctx->buffer; 1065 1039 u8 *bitLength = wctx->bitLength; 1066 - int bufferBits = wctx->bufferBits; 1067 - int bufferPos = wctx->bufferPos; 1068 1040 __be64 *digest = (__be64 *)out; 1041 + u8 buffer[WP512_BLOCK_SIZE]; 1069 1042 1070 - buffer[bufferPos] |= 0x80U >> (bufferBits & 7); 1043 + wp512_add_length(bitLength, bufferPos * 8); 1044 + memcpy(buffer, src, bufferPos); 1045 + buffer[bufferPos] = 0x80U; 1071 1046 bufferPos++; 1072 1047 if (bufferPos > WP512_BLOCK_SIZE - WP512_LENGTHBYTES) { 1073 1048 if (bufferPos < WP512_BLOCK_SIZE) 1074 1049 memset(&buffer[bufferPos], 0, WP512_BLOCK_SIZE - bufferPos); 1075 - wp512_process_buffer(wctx); 1050 + wp512_process_buffer(wctx, buffer); 1076 1051 bufferPos = 0; 1077 1052 } 1078 1053 if (bufferPos < WP512_BLOCK_SIZE - WP512_LENGTHBYTES) ··· 1051 1086 bufferPos = WP512_BLOCK_SIZE - WP512_LENGTHBYTES; 1052 1087 memcpy(&buffer[WP512_BLOCK_SIZE - WP512_LENGTHBYTES], 1053 1088 bitLength, WP512_LENGTHBYTES); 1054 - wp512_process_buffer(wctx); 1089 + wp512_process_buffer(wctx, buffer); 1090 + memzero_explicit(buffer, sizeof(buffer)); 1055 1091 for (i = 0; i < WP512_DIGEST_SIZE/8; i++) 1056 1092 digest[i] = cpu_to_be64(wctx->hash[i]); 1057 - wctx->bufferBits = bufferBits; 1058 - wctx->bufferPos = bufferPos; 1059 1093 1060 1094 return 0; 1061 1095 } 1062 1096 1063 - static int wp384_final(struct shash_desc *desc, u8 *out) 1097 + static int wp384_finup(struct shash_desc *desc, const u8 *src, 1098 + unsigned int len, u8 *out) 1064 1099 { 1065 1100 u8 D[64]; 1066 1101 1067 
- wp512_final(desc, D); 1102 + wp512_finup(desc, src, len, D); 1068 1103 memcpy(out, D, WP384_DIGEST_SIZE); 1069 1104 memzero_explicit(D, WP512_DIGEST_SIZE); 1070 1105 1071 1106 return 0; 1072 1107 } 1073 1108 1074 - static int wp256_final(struct shash_desc *desc, u8 *out) 1109 + static int wp256_finup(struct shash_desc *desc, const u8 *src, 1110 + unsigned int len, u8 *out) 1075 1111 { 1076 1112 u8 D[64]; 1077 1113 1078 - wp512_final(desc, D); 1114 + wp512_finup(desc, src, len, D); 1079 1115 memcpy(out, D, WP256_DIGEST_SIZE); 1080 1116 memzero_explicit(D, WP512_DIGEST_SIZE); 1081 1117 ··· 1087 1121 .digestsize = WP512_DIGEST_SIZE, 1088 1122 .init = wp512_init, 1089 1123 .update = wp512_update, 1090 - .final = wp512_final, 1124 + .finup = wp512_finup, 1091 1125 .descsize = sizeof(struct wp512_ctx), 1092 1126 .base = { 1093 1127 .cra_name = "wp512", 1094 1128 .cra_driver_name = "wp512-generic", 1129 + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, 1095 1130 .cra_blocksize = WP512_BLOCK_SIZE, 1096 1131 .cra_module = THIS_MODULE, 1097 1132 } ··· 1100 1133 .digestsize = WP384_DIGEST_SIZE, 1101 1134 .init = wp512_init, 1102 1135 .update = wp512_update, 1103 - .final = wp384_final, 1136 + .finup = wp384_finup, 1104 1137 .descsize = sizeof(struct wp512_ctx), 1105 1138 .base = { 1106 1139 .cra_name = "wp384", 1107 1140 .cra_driver_name = "wp384-generic", 1141 + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, 1108 1142 .cra_blocksize = WP512_BLOCK_SIZE, 1109 1143 .cra_module = THIS_MODULE, 1110 1144 } ··· 1113 1145 .digestsize = WP256_DIGEST_SIZE, 1114 1146 .init = wp512_init, 1115 1147 .update = wp512_update, 1116 - .final = wp256_final, 1148 + .finup = wp256_finup, 1117 1149 .descsize = sizeof(struct wp512_ctx), 1118 1150 .base = { 1119 1151 .cra_name = "wp256", 1120 1152 .cra_driver_name = "wp256-generic", 1153 + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, 1121 1154 .cra_blocksize = WP512_BLOCK_SIZE, 1122 1155 .cra_module = THIS_MODULE, 1123 1156 }
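`wp512_add_length()` factored out above adds a 64-bit bit count into the 32-byte big-endian length counter with byte-wise carry propagation. A standalone version of the same loop (plain C types instead of the kernel's):

```c
/* Add 'value' (a bit count) into a 32-byte big-endian counter,
 * propagating carries from the least significant byte upward. */
static void add_length(unsigned char cnt[32], unsigned long long value)
{
	unsigned int carry = 0;
	int i;

	for (i = 31; i >= 0 && (carry != 0 || value != 0); i--) {
		carry += cnt[i] + ((unsigned int)value & 0xff);
		cnt[i] = (unsigned char)carry;
		carry >>= 8;
		value >>= 8;
	}
}

/* Accumulate two 64-byte blocks' worth of bits and read back the low
 * 16 bits of the counter. */
static unsigned int demo_bits(void)
{
	unsigned char cnt[32] = { 0 };

	add_length(cnt, 512);
	add_length(cnt, 512);
	return ((unsigned int)cnt[30] << 8) | cnt[31];
}
```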
+1 -1
drivers/ata/ahci.c
··· 1450 1450 { 1451 1451 .matches = { 1452 1452 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 1453 - DMI_MATCH(DMI_PRODUCT_VERSION, "ASUSPRO D840MB_M840SA"), 1453 + DMI_MATCH(DMI_PRODUCT_NAME, "ASUSPRO D840MB_M840SA"), 1454 1454 }, 1455 1455 /* 320 is broken, there is no known good version. */ 1456 1456 },
+37 -12
drivers/block/ublk_drv.c
··· 1148 1148 blk_mq_end_request(req, res); 1149 1149 } 1150 1150 1151 - static void ublk_complete_io_cmd(struct ublk_io *io, struct request *req, 1152 - int res, unsigned issue_flags) 1151 + static struct io_uring_cmd *__ublk_prep_compl_io_cmd(struct ublk_io *io, 1152 + struct request *req) 1153 1153 { 1154 1154 /* read cmd first because req will overwrite it */ 1155 1155 struct io_uring_cmd *cmd = io->cmd; ··· 1164 1164 io->flags &= ~UBLK_IO_FLAG_ACTIVE; 1165 1165 1166 1166 io->req = req; 1167 + return cmd; 1168 + } 1169 + 1170 + static void ublk_complete_io_cmd(struct ublk_io *io, struct request *req, 1171 + int res, unsigned issue_flags) 1172 + { 1173 + struct io_uring_cmd *cmd = __ublk_prep_compl_io_cmd(io, req); 1167 1174 1168 1175 /* tell ublksrv one io request is coming */ 1169 1176 io_uring_cmd_done(cmd, res, 0, issue_flags); ··· 1423 1416 return BLK_STS_OK; 1424 1417 } 1425 1418 1419 + static inline bool ublk_belong_to_same_batch(const struct ublk_io *io, 1420 + const struct ublk_io *io2) 1421 + { 1422 + return (io_uring_cmd_ctx_handle(io->cmd) == 1423 + io_uring_cmd_ctx_handle(io2->cmd)) && 1424 + (io->task == io2->task); 1425 + } 1426 + 1426 1427 static void ublk_queue_rqs(struct rq_list *rqlist) 1427 1428 { 1428 1429 struct rq_list requeue_list = { }; ··· 1442 1427 struct ublk_queue *this_q = req->mq_hctx->driver_data; 1443 1428 struct ublk_io *this_io = &this_q->ios[req->tag]; 1444 1429 1445 - if (io && io->task != this_io->task && !rq_list_empty(&submit_list)) 1430 + if (io && !ublk_belong_to_same_batch(io, this_io) && 1431 + !rq_list_empty(&submit_list)) 1446 1432 ublk_queue_cmd_list(io, &submit_list); 1447 1433 io = this_io; 1448 1434 ··· 2164 2148 return 0; 2165 2149 } 2166 2150 2167 - static bool ublk_get_data(const struct ublk_queue *ubq, struct ublk_io *io) 2151 + static bool ublk_get_data(const struct ublk_queue *ubq, struct ublk_io *io, 2152 + struct request *req) 2168 2153 { 2169 - struct request *req = io->req; 2170 - 2171 2154 /* 2172 2155 * We have handled UBLK_IO_NEED_GET_DATA command, 2173 2156 * so clear UBLK_IO_FLAG_NEED_GET_DATA now and just ··· 2193 2178 u32 cmd_op = cmd->cmd_op; 2194 2179 unsigned tag = ub_cmd->tag; 2195 2180 int ret = -EINVAL; 2181 + struct request *req; 2196 2182 2197 2183 pr_devel("%s: received: cmd op %d queue %d tag %d result %d\n", 2198 2184 __func__, cmd->cmd_op, ub_cmd->q_id, tag, ··· 2252 2236 goto out; 2253 2237 break; 2254 2238 case UBLK_IO_NEED_GET_DATA: 2255 - io->addr = ub_cmd->addr; 2256 - if (!ublk_get_data(ubq, io)) 2257 - return -EIOCBQUEUED; 2258 - 2259 - return UBLK_IO_RES_OK; 2239 + /* 2240 + * ublk_get_data() may fail and fallback to requeue, so keep 2241 + * uring_cmd active first and prepare for handling new requeued 2242 + * request 2243 + */ 2244 + req = io->req; 2245 + ublk_fill_io_cmd(io, cmd, ub_cmd->addr); 2246 + io->flags &= ~UBLK_IO_FLAG_OWNED_BY_SRV; 2247 + if (likely(ublk_get_data(ubq, io, req))) { 2248 + __ublk_prep_compl_io_cmd(io, req); 2249 + return UBLK_IO_RES_OK; 2250 + } 2251 + break; 2260 2252 default: 2261 2253 goto out; 2262 2254 } ··· 2849 2825 if (copy_from_user(&info, argp, sizeof(info))) 2850 2826 return -EFAULT; 2851 2827 2852 - if (info.queue_depth > UBLK_MAX_QUEUE_DEPTH || info.nr_hw_queues > UBLK_MAX_NR_QUEUES) 2828 + if (info.queue_depth > UBLK_MAX_QUEUE_DEPTH || !info.queue_depth || 2829 + info.nr_hw_queues > UBLK_MAX_NR_QUEUES || !info.nr_hw_queues) 2853 2830 return -EINVAL; 2854 2831 2855 2832 if (capable(CAP_SYS_ADMIN))
+13 -5
drivers/cxl/core/edac.c
··· 103 103 u8 *cap, u16 *cycle, u8 *flags, u8 *min_cycle) 104 104 { 105 105 struct cxl_mailbox *cxl_mbox; 106 - u8 min_scrub_cycle = U8_MAX; 107 106 struct cxl_region_params *p; 108 107 struct cxl_memdev *cxlmd; 109 108 struct cxl_region *cxlr; 109 + u8 min_scrub_cycle = 0; 110 110 int i, ret; 111 111 112 112 if (!cxl_ps_ctx->cxlr) { ··· 133 133 if (ret) 134 134 return ret; 135 135 136 + /* 137 + * The min_scrub_cycle of a region is the max of minimum scrub 138 + * cycles supported by memdevs that back the region. 139 + */ 136 140 if (min_cycle) 137 - min_scrub_cycle = min(*min_cycle, min_scrub_cycle); 141 + min_scrub_cycle = max(*min_cycle, min_scrub_cycle); 138 142 } 139 143 140 144 if (min_cycle) ··· 1103 1099 old_rec = xa_store(&array_rec->rec_gen_media, 1104 1100 le64_to_cpu(rec->media_hdr.phys_addr), rec, 1105 1101 GFP_KERNEL); 1106 - if (xa_is_err(old_rec)) 1102 + if (xa_is_err(old_rec)) { 1103 + kfree(rec); 1107 1104 return xa_err(old_rec); 1105 + } 1108 1106 1109 1107 kfree(old_rec); 1110 1108 ··· 1133 1127 old_rec = xa_store(&array_rec->rec_dram, 1134 1128 le64_to_cpu(rec->media_hdr.phys_addr), rec, 1135 1129 GFP_KERNEL); 1136 - if (xa_is_err(old_rec)) 1130 + if (xa_is_err(old_rec)) { 1131 + kfree(rec); 1137 1132 return xa_err(old_rec); 1133 + } 1138 1134 1139 1135 kfree(old_rec); 1140 1136 ··· 1323 1315 attrbs.bank = ctx->bank; 1324 1316 break; 1325 1317 case EDAC_REPAIR_RANK_SPARING: 1326 - attrbs.repair_type = CXL_BANK_SPARING; 1318 + attrbs.repair_type = CXL_RANK_SPARING; 1327 1319 break; 1328 1320 default: 1329 1321 return NULL;
+1 -1
drivers/cxl/core/features.c
··· 544 544 u32 flags; 545 545 546 546 if (rpc_in->op_size < sizeof(uuid_t)) 547 - return ERR_PTR(-EINVAL); 547 + return false; 548 548 549 549 feat = cxl_feature_info(cxlfs, &rpc_in->set_feat_in.uuid); 550 550 if (IS_ERR(feat))
+27 -20
drivers/cxl/core/ras.c
··· 31 31 ras_cap.header_log); 32 32 } 33 33 34 - static void cxl_cper_trace_corr_prot_err(struct pci_dev *pdev, 35 - struct cxl_ras_capability_regs ras_cap) 34 + static void cxl_cper_trace_corr_prot_err(struct cxl_memdev *cxlmd, 35 + struct cxl_ras_capability_regs ras_cap) 36 36 { 37 37 u32 status = ras_cap.cor_status & ~ras_cap.cor_mask; 38 - struct cxl_dev_state *cxlds; 39 38 40 - cxlds = pci_get_drvdata(pdev); 41 - if (!cxlds) 42 - return; 43 - 44 - trace_cxl_aer_correctable_error(cxlds->cxlmd, status); 39 + trace_cxl_aer_correctable_error(cxlmd, status); 45 40 } 46 41 47 - static void cxl_cper_trace_uncorr_prot_err(struct pci_dev *pdev, 48 - struct cxl_ras_capability_regs ras_cap) 42 + static void 43 + cxl_cper_trace_uncorr_prot_err(struct cxl_memdev *cxlmd, 44 + struct cxl_ras_capability_regs ras_cap) 49 45 { 50 46 u32 status = ras_cap.uncor_status & ~ras_cap.uncor_mask; 51 - struct cxl_dev_state *cxlds; 52 47 u32 fe; 53 - 54 - cxlds = pci_get_drvdata(pdev); 55 - if (!cxlds) 56 - return; 57 48 58 49 if (hweight32(status) > 1) 59 50 fe = BIT(FIELD_GET(CXL_RAS_CAP_CONTROL_FE_MASK, ··· 52 61 else 53 62 fe = status; 54 63 55 - trace_cxl_aer_uncorrectable_error(cxlds->cxlmd, status, fe, 64 + trace_cxl_aer_uncorrectable_error(cxlmd, status, fe, 56 65 ras_cap.header_log); 66 + } 67 + 68 + static int match_memdev_by_parent(struct device *dev, const void *uport) 69 + { 70 + if (is_cxl_memdev(dev) && dev->parent == uport) 71 + return 1; 72 + return 0; 57 73 } 58 74 59 75 static void cxl_cper_handle_prot_err(struct cxl_cper_prot_err_work_data *data) ··· 71 73 pci_get_domain_bus_and_slot(data->prot_err.agent_addr.segment, 72 74 data->prot_err.agent_addr.bus, 73 75 devfn); 76 + struct cxl_memdev *cxlmd; 74 77 int port_type; 75 78 76 79 if (!pdev) 77 80 return; 78 - 79 - guard(device)(&pdev->dev); 80 81 81 82 port_type = pci_pcie_type(pdev); 82 83 if (port_type == PCI_EXP_TYPE_ROOT_PORT || ··· 89 92 return; 90 93 } 91 94 95 + guard(device)(&pdev->dev); 96 + if (!pdev->dev.driver) 97 + return; 98 + 99 + struct device *mem_dev __free(put_device) = bus_find_device( 100 + &cxl_bus_type, NULL, pdev, match_memdev_by_parent); 101 + if (!mem_dev) 102 + return; 103 + 104 + cxlmd = to_cxl_memdev(mem_dev); 92 105 if (data->severity == AER_CORRECTABLE) 93 - cxl_cper_trace_corr_prot_err(pdev, data->ras_cap); 106 + cxl_cper_trace_corr_prot_err(cxlmd, data->ras_cap); 94 107 else 95 - cxl_cper_trace_uncorr_prot_err(pdev, data->ras_cap); 108 + cxl_cper_trace_uncorr_prot_err(cxlmd, data->ras_cap); 96 109 } 97 110 98 111 static void cxl_cper_prot_err_work_fn(struct work_struct *work)
+36 -21
drivers/edac/amd64_edac.c
··· 1209 1209 if (csrow_enabled(2 * dimm + 1, ctrl, pvt)) 1210 1210 cs_mode |= CS_ODD_PRIMARY; 1211 1211 1212 - /* Asymmetric dual-rank DIMM support. */ 1212 + if (csrow_sec_enabled(2 * dimm, ctrl, pvt)) 1213 + cs_mode |= CS_EVEN_SECONDARY; 1214 + 1213 1215 if (csrow_sec_enabled(2 * dimm + 1, ctrl, pvt)) 1214 1216 cs_mode |= CS_ODD_SECONDARY; 1215 1217 ··· 1232 1230 return cs_mode; 1233 1231 } 1234 1232 1235 - static int __addr_mask_to_cs_size(u32 addr_mask_orig, unsigned int cs_mode, 1236 - int csrow_nr, int dimm) 1233 + static int calculate_cs_size(u32 mask, unsigned int cs_mode) 1237 1234 { 1238 - u32 msb, weight, num_zero_bits; 1239 - u32 addr_mask_deinterleaved; 1240 - int size = 0; 1235 + int msb, weight, num_zero_bits; 1236 + u32 deinterleaved_mask; 1237 + 1238 + if (!mask) 1239 + return 0; 1241 1240 1242 1241 /* 1243 1242 * The number of zero bits in the mask is equal to the number of bits ··· 1251 1248 * without swapping with the most significant bit. This can be handled 1252 1249 * by keeping the MSB where it is and ignoring the single zero bit. 1253 1250 */ 1254 - msb = fls(addr_mask_orig) - 1; 1255 - weight = hweight_long(addr_mask_orig); 1251 msb = fls(mask) - 1; 1252 weight = hweight_long(mask); 1256 1253 num_zero_bits = msb - weight - !!(cs_mode & CS_3R_INTERLEAVE); 1257 1254 1258 1255 /* Take the number of zero bits off from the top of the mask. */ 1259 - addr_mask_deinterleaved = GENMASK_ULL(msb - num_zero_bits, 1); 1256 + deinterleaved_mask = GENMASK(msb - num_zero_bits, 1); 1257 + edac_dbg(1, " Deinterleaved AddrMask: 0x%x\n", deinterleaved_mask); 1258 + 1259 + return (deinterleaved_mask >> 2) + 1; 1260 + } 1261 + 1262 + static int __addr_mask_to_cs_size(u32 addr_mask, u32 addr_mask_sec, 1263 + unsigned int cs_mode, int csrow_nr, int dimm) 1264 + { 1265 + int size; 1260 1266 1261 1267 edac_dbg(1, "CS%d DIMM%d AddrMasks:\n", csrow_nr, dimm); 1262 - edac_dbg(1, " Original AddrMask: 0x%x\n", addr_mask_orig); 1263 - edac_dbg(1, " Deinterleaved AddrMask: 0x%x\n", addr_mask_deinterleaved); 1268 + edac_dbg(1, " Primary AddrMask: 0x%x\n", addr_mask); 1264 1269 1265 1270 /* Register [31:1] = Address [39:9]. Size is in kBs here. */ 1266 - size = (addr_mask_deinterleaved >> 2) + 1; 1271 + size = calculate_cs_size(addr_mask, cs_mode); 1272 + 1273 + edac_dbg(1, " Secondary AddrMask: 0x%x\n", addr_mask_sec); 1274 + size += calculate_cs_size(addr_mask_sec, cs_mode); 1267 1275 1268 1276 /* Return size in MBs. */ 1269 1277 return size >> 10; ··· 1283 1269 static int umc_addr_mask_to_cs_size(struct amd64_pvt *pvt, u8 umc, 1284 1270 unsigned int cs_mode, int csrow_nr) 1285 1271 { 1272 + u32 addr_mask = 0, addr_mask_sec = 0; 1286 1273 int cs_mask_nr = csrow_nr; 1287 - u32 addr_mask_orig; 1288 1274 int dimm, size = 0; 1289 1275 1290 1276 /* No Chip Selects are enabled. */ ··· 1322 1308 if (!pvt->flags.zn_regs_v2) 1323 1309 cs_mask_nr >>= 1; 1324 1310 1325 - /* Asymmetric dual-rank DIMM support. */ 1326 - if ((csrow_nr & 1) && (cs_mode & CS_ODD_SECONDARY)) 1327 - addr_mask_orig = pvt->csels[umc].csmasks_sec[cs_mask_nr]; 1328 - else 1329 - addr_mask_orig = pvt->csels[umc].csmasks[cs_mask_nr]; 1311 + if (cs_mode & (CS_EVEN_PRIMARY | CS_ODD_PRIMARY)) 1312 + addr_mask = pvt->csels[umc].csmasks[cs_mask_nr]; 1330 1313 1331 - return __addr_mask_to_cs_size(addr_mask_orig, cs_mode, csrow_nr, dimm); 1314 + if (cs_mode & (CS_EVEN_SECONDARY | CS_ODD_SECONDARY)) 1315 + addr_mask_sec = pvt->csels[umc].csmasks_sec[cs_mask_nr]; 1316 + 1317 + return __addr_mask_to_cs_size(addr_mask, addr_mask_sec, cs_mode, csrow_nr, dimm); 1332 1318 } 1333 1319 1334 1320 static void umc_debug_display_dimm_sizes(struct amd64_pvt *pvt, u8 ctrl) ··· 3526 3512 static int gpu_addr_mask_to_cs_size(struct amd64_pvt *pvt, u8 umc, 3527 3513 unsigned int cs_mode, int csrow_nr) 3528 3514 { 3529 - u32 addr_mask_orig = pvt->csels[umc].csmasks[csrow_nr]; 3515 + u32 addr_mask = pvt->csels[umc].csmasks[csrow_nr]; 3516 + u32 addr_mask_sec = pvt->csels[umc].csmasks_sec[csrow_nr]; 3530 3517 3531 - return __addr_mask_to_cs_size(addr_mask_orig, cs_mode, csrow_nr, csrow_nr >> 1); 3518 + return __addr_mask_to_cs_size(addr_mask, addr_mask_sec, cs_mode, csrow_nr, csrow_nr >> 1); 3532 3519 } 3533 3520 3534 3521 static void gpu_debug_display_dimm_sizes(struct amd64_pvt *pvt, u8 ctrl)
+13 -15
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
··· 321 321 const struct firmware *fw; 322 322 int r; 323 323 324 - r = request_firmware(&fw, fw_name, adev->dev); 324 + r = firmware_request_nowarn(&fw, fw_name, adev->dev); 325 325 if (r) { 326 - dev_err(adev->dev, "can't load firmware \"%s\"\n", 327 - fw_name); 326 + if (amdgpu_discovery == 2) 327 + dev_err(adev->dev, "can't load firmware \"%s\"\n", fw_name); 328 + else 329 + drm_info(&adev->ddev, "Optional firmware \"%s\" was not found\n", fw_name); 328 330 return r; 329 331 } 330 332 ··· 461 459 /* Read from file if it is the preferred option */ 462 460 fw_name = amdgpu_discovery_get_fw_name(adev); 463 461 if (fw_name != NULL) { 464 - dev_info(adev->dev, "use ip discovery information from file"); 462 + drm_dbg(&adev->ddev, "use ip discovery information from file"); 465 463 r = amdgpu_discovery_read_binary_from_file(adev, adev->mman.discovery_bin, fw_name); 466 - 467 - if (r) { 468 - dev_err(adev->dev, "failed to read ip discovery binary from file\n"); 469 - r = -EINVAL; 464 + if (r) 470 465 goto out; 471 - } 472 - 473 466 } else { 467 + drm_dbg(&adev->ddev, "use ip discovery information from memory"); 474 468 r = amdgpu_discovery_read_binary_from_mem( 475 469 adev, adev->mman.discovery_bin); 476 470 if (r) ··· 1336 1338 int r; 1337 1339 1338 1340 r = amdgpu_discovery_init(adev); 1339 - if (r) { 1340 - DRM_ERROR("amdgpu_discovery_init failed\n"); 1341 + if (r) 1341 1342 return r; 1342 - } 1343 1343 1344 1344 wafl_ver = 0; 1345 1345 adev->gfx.xcc_mask = 0; ··· 2575 2579 break; 2576 2580 default: 2577 2581 r = amdgpu_discovery_reg_base_init(adev); 2578 - if (r) 2579 - return -EINVAL; 2582 + if (r) { 2583 + drm_err(&adev->ddev, "discovery failed: %d\n", r); 2584 + return r; 2585 + } 2580 2586 2581 2587 amdgpu_discovery_harvest_ip(adev); 2582 2588 amdgpu_discovery_get_gfx_info(adev);
+19
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 2235 2235 } 2236 2236 2237 2237 switch (amdgpu_ip_version(adev, GC_HWIP, 0)) { 2238 + case IP_VERSION(9, 0, 1): 2239 + case IP_VERSION(9, 2, 1): 2240 + case IP_VERSION(9, 4, 0): 2241 + case IP_VERSION(9, 2, 2): 2242 + case IP_VERSION(9, 1, 0): 2243 + case IP_VERSION(9, 3, 0): 2244 + adev->gfx.cleaner_shader_ptr = gfx_9_4_2_cleaner_shader_hex; 2245 + adev->gfx.cleaner_shader_size = sizeof(gfx_9_4_2_cleaner_shader_hex); 2246 + if (adev->gfx.me_fw_version >= 167 && 2247 + adev->gfx.pfp_fw_version >= 196 && 2248 + adev->gfx.mec_fw_version >= 474) { 2249 + adev->gfx.enable_cleaner_shader = true; 2250 + r = amdgpu_gfx_cleaner_shader_sw_init(adev, adev->gfx.cleaner_shader_size); 2251 + if (r) { 2252 + adev->gfx.enable_cleaner_shader = false; 2253 + dev_err(adev->dev, "Failed to initialize cleaner shader\n"); 2254 + } 2255 + } 2256 + break; 2238 2257 case IP_VERSION(9, 4, 2): 2239 2258 adev->gfx.cleaner_shader_ptr = gfx_9_4_2_cleaner_shader_hex; 2240 2259 adev->gfx.cleaner_shader_size = sizeof(gfx_9_4_2_cleaner_shader_hex);
+6 -4
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
··· 1630 1630 if (r) 1631 1631 goto failure; 1632 1632 1633 - r = mes_v11_0_set_hw_resources_1(&adev->mes); 1634 - if (r) { 1635 - DRM_ERROR("failed mes_v11_0_set_hw_resources_1, r=%d\n", r); 1636 - goto failure; 1633 + if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x50) { 1634 + r = mes_v11_0_set_hw_resources_1(&adev->mes); 1635 + if (r) { 1636 + DRM_ERROR("failed mes_v11_0_set_hw_resources_1, r=%d\n", r); 1637 + goto failure; 1638 + } 1637 1639 } 1638 1640 1639 1641 r = mes_v11_0_query_sched_status(&adev->mes);
+2 -1
drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
··· 1742 1742 if (r) 1743 1743 goto failure; 1744 1744 1745 - mes_v12_0_set_hw_resources_1(&adev->mes, AMDGPU_MES_SCHED_PIPE); 1745 + if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x4b) 1746 + mes_v12_0_set_hw_resources_1(&adev->mes, AMDGPU_MES_SCHED_PIPE); 1746 1747 1747 1748 mes_v12_0_init_aggregated_doorbell(&adev->mes); 1748 1749
+16 -3
drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
··· 1374 1374 else 1375 1375 DRM_ERROR("Failed to allocated memory for SDMA IP Dump\n"); 1376 1376 1377 - /* add firmware version checks here */ 1378 - if (0 && !adev->sdma.disable_uq) 1379 - adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1377 + switch (amdgpu_ip_version(adev, SDMA0_HWIP, 0)) { 1378 + case IP_VERSION(6, 0, 0): 1379 + if ((adev->sdma.instance[0].fw_version >= 24) && !adev->sdma.disable_uq) 1380 + adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1381 + break; 1382 + case IP_VERSION(6, 0, 2): 1383 + if ((adev->sdma.instance[0].fw_version >= 21) && !adev->sdma.disable_uq) 1384 + adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1385 + break; 1386 + case IP_VERSION(6, 0, 3): 1387 + if ((adev->sdma.instance[0].fw_version >= 25) && !adev->sdma.disable_uq) 1388 + adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1389 + break; 1390 + default: 1391 + break; 1392 + } 1380 1393 1381 1394 r = amdgpu_sdma_sysfs_reset_mask_init(adev); 1382 1395 if (r)
+9 -3
drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
··· 1349 1349 else 1350 1350 DRM_ERROR("Failed to allocated memory for SDMA IP Dump\n"); 1351 1351 1352 - /* add firmware version checks here */ 1353 - if (0 && !adev->sdma.disable_uq) 1354 - adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1352 + switch (amdgpu_ip_version(adev, SDMA0_HWIP, 0)) { 1353 + case IP_VERSION(7, 0, 0): 1354 + case IP_VERSION(7, 0, 1): 1355 + if ((adev->sdma.instance[0].fw_version >= 7836028) && !adev->sdma.disable_uq) 1356 + adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1357 + break; 1358 + default: 1359 + break; 1360 + } 1355 1361 1356 1362 return r; 1357 1363 }
+5 -5
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 4718 4718 return 1; 4719 4719 } 4720 4720 4721 - /* Rescale from [min..max] to [0..AMDGPU_MAX_BL_LEVEL] */ 4721 + /* Rescale from [min..max] to [0..MAX_BACKLIGHT_LEVEL] */ 4722 4722 static inline u32 scale_input_to_fw(int min, int max, u64 input) 4723 4723 { 4724 - return DIV_ROUND_CLOSEST_ULL(input * AMDGPU_MAX_BL_LEVEL, max - min); 4724 + return DIV_ROUND_CLOSEST_ULL(input * MAX_BACKLIGHT_LEVEL, max - min); 4725 4725 } 4726 4726 4727 - /* Rescale from [0..AMDGPU_MAX_BL_LEVEL] to [min..max] */ 4727 + /* Rescale from [0..MAX_BACKLIGHT_LEVEL] to [min..max] */ 4728 4728 static inline u32 scale_fw_to_input(int min, int max, u64 input) 4729 4729 { 4730 - return min + DIV_ROUND_CLOSEST_ULL(input * (max - min), AMDGPU_MAX_BL_LEVEL); 4730 + return min + DIV_ROUND_CLOSEST_ULL(input * (max - min), MAX_BACKLIGHT_LEVEL); 4731 4731 } 4732 4732 4733 4733 static void convert_custom_brightness(const struct amdgpu_dm_backlight_caps *caps, ··· 4947 4947 drm_dbg(drm, "Backlight caps: min: %d, max: %d, ac %d, dc %d\n", min, max, 4948 4948 caps->ac_level, caps->dc_level); 4949 4949 } else 4950 - props.brightness = props.max_brightness = AMDGPU_MAX_BL_LEVEL; 4950 + props.brightness = props.max_brightness = MAX_BACKLIGHT_LEVEL; 4951 4951 4952 4952 if (caps->data_points && !(amdgpu_dc_debug_mask & DC_DISABLE_CUSTOM_BRIGHTNESS_CURVE)) 4953 4953 drm_info(drm, "Using custom brightness curve\n");
+4
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
··· 1029 1029 return EDID_NO_RESPONSE; 1030 1030 1031 1031 edid = drm_edid_raw(drm_edid); // FIXME: Get rid of drm_edid_raw() 1032 + if (!edid || 1033 + edid->extensions >= sizeof(sink->dc_edid.raw_edid) / EDID_LENGTH) 1034 + return EDID_BAD_INPUT; 1035 + 1032 1036 sink->dc_edid.length = EDID_LENGTH * (edid->extensions + 1); 1033 1037 memmove(sink->dc_edid.raw_edid, (uint8_t *)edid, sink->dc_edid.length); 1034 1038
+60 -9
drivers/gpu/drm/bridge/ti-sn65dsi86.c
··· 348 348 * 200 ms. We'll assume that the panel driver will have the hardcoded 349 349 * delay in its prepare and always disable HPD. 350 350 * 351 - * If HPD somehow makes sense on some future panel we'll have to 352 - * change this to be conditional on someone specifying that HPD should 353 - * be used. 351 + * For DisplayPort bridge type, we need HPD. So we use the bridge type 352 + * to conditionally disable HPD. 353 + * NOTE: The bridge type is set in ti_sn_bridge_probe() but enable_comms() 354 + * can be called before. So for DisplayPort, HPD will be enabled once 355 + * bridge type is set. We are using bridge type instead of "no-hpd" 356 + * property because it is not used properly in devicetree description 357 + * and hence is unreliable. 354 358 */ 355 359 356 - regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG, HPD_DISABLE, 357 - HPD_DISABLE); 360 + if (pdata->bridge.type != DRM_MODE_CONNECTOR_DisplayPort) 361 + regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG, HPD_DISABLE, 362 + HPD_DISABLE); 358 363 359 364 pdata->comms_enabled = true; 360 365 ··· 1201 1195 struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge); 1202 1196 int val = 0; 1203 1197 1204 - pm_runtime_get_sync(pdata->dev); 1198 + /* 1199 + * Runtime reference is grabbed in ti_sn_bridge_hpd_enable() 1200 + * as the chip won't report HPD just after being powered on. 1201 + * HPD_DEBOUNCED_STATE reflects correct state only after the 1202 + * debounce time (~100-400 ms). 1203 + */ 1204 + 1205 1205 regmap_read(pdata->regmap, SN_HPD_DISABLE_REG, &val); 1206 - pm_runtime_put_autosuspend(pdata->dev); 1207 1206 1208 1207 return val & HPD_DEBOUNCED_STATE ? connector_status_connected 1209 1208 : connector_status_disconnected; ··· 1231 1220 debugfs_create_file("status", 0600, debugfs, pdata, &status_fops); 1232 1221 } 1233 1222 1223 + static void ti_sn_bridge_hpd_enable(struct drm_bridge *bridge) 1224 + { 1225 + struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge); 1226 + 1227 + /* 1228 + * Device needs to be powered on before reading the HPD state 1229 + * for reliable hpd detection in ti_sn_bridge_detect() due to 1230 + * the high debounce time. 1231 + */ 1232 + 1233 + pm_runtime_get_sync(pdata->dev); 1234 + } 1235 + 1236 + static void ti_sn_bridge_hpd_disable(struct drm_bridge *bridge) 1237 + { 1238 + struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge); 1239 + 1240 + pm_runtime_put_autosuspend(pdata->dev); 1241 + } 1242 + 1234 1243 static const struct drm_bridge_funcs ti_sn_bridge_funcs = { 1235 1244 .attach = ti_sn_bridge_attach, 1236 1245 .detach = ti_sn_bridge_detach, ··· 1265 1234 .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 1266 1235 .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, 1267 1236 .debugfs_init = ti_sn65dsi86_debugfs_init, 1237 + .hpd_enable = ti_sn_bridge_hpd_enable, 1238 + .hpd_disable = ti_sn_bridge_hpd_disable, 1268 1239 }; 1269 1240 1270 1241 static void ti_sn_bridge_parse_lanes(struct ti_sn65dsi86 *pdata, ··· 1354 1321 pdata->bridge.type = pdata->next_bridge->type == DRM_MODE_CONNECTOR_DisplayPort 1355 1322 ? DRM_MODE_CONNECTOR_DisplayPort : DRM_MODE_CONNECTOR_eDP; 1356 1323 1357 - if (pdata->bridge.type == DRM_MODE_CONNECTOR_DisplayPort) 1358 - pdata->bridge.ops = DRM_BRIDGE_OP_EDID | DRM_BRIDGE_OP_DETECT; 1324 + if (pdata->bridge.type == DRM_MODE_CONNECTOR_DisplayPort) { 1325 + pdata->bridge.ops = DRM_BRIDGE_OP_EDID | DRM_BRIDGE_OP_DETECT | 1326 + DRM_BRIDGE_OP_HPD; 1327 + /* 1328 + * If comms were already enabled they would have been enabled 1329 + * with the wrong value of HPD_DISABLE. Update it now. Comms 1330 + * could be enabled if anyone is holding a pm_runtime reference 1331 + * (like if a GPIO is in use). Note that in most cases nobody 1332 + * is doing AUX channel xfers before the bridge is added so 1333 + * HPD doesn't _really_ matter then. The only exception is in 1334 + * the eDP case where the panel wants to read the EDID before 1335 + * the bridge is added. We always consistently have HPD disabled 1336 + * for eDP. 1337 + */ 1338 + mutex_lock(&pdata->comms_mutex); 1339 + if (pdata->comms_enabled) 1340 + regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG, 1341 + HPD_DISABLE, 0); 1342 + mutex_unlock(&pdata->comms_mutex); 1343 + }; 1359 1344 1360 1345 drm_bridge_add(&pdata->bridge); 1361 1346
+5 -2
drivers/gpu/drm/display/drm_bridge_connector.c
··· 708 708 if (bridge_connector->bridge_hdmi_audio || 709 709 bridge_connector->bridge_dp_audio) { 710 710 struct device *dev; 711 + struct drm_bridge *bridge; 711 712 712 713 if (bridge_connector->bridge_hdmi_audio) 713 - dev = bridge_connector->bridge_hdmi_audio->hdmi_audio_dev; 714 + bridge = bridge_connector->bridge_hdmi_audio; 714 715 else 715 - dev = bridge_connector->bridge_dp_audio->hdmi_audio_dev; 716 + bridge = bridge_connector->bridge_dp_audio; 717 + 718 + dev = bridge->hdmi_audio_dev; 716 719 717 720 ret = drm_connector_hdmi_audio_init(connector, dev, 718 721 &drm_bridge_connector_hdmi_audio_funcs,
+1 -1
drivers/gpu/drm/display/drm_dp_helper.c
··· 725 725 * monitor doesn't power down exactly after the throw away read. 726 726 */ 727 727 if (!aux->is_remote) { 728 - ret = drm_dp_dpcd_probe(aux, DP_DPCD_REV); 728 + ret = drm_dp_dpcd_probe(aux, DP_LANE0_1_STATUS); 729 729 if (ret < 0) 730 730 return ret; 731 731 }
+4 -3
drivers/gpu/drm/drm_writeback.c
··· 343 343 /** 344 344 * drm_writeback_connector_cleanup - Cleanup the writeback connector 345 345 * @dev: DRM device 346 - * @wb_connector: Pointer to the writeback connector to clean up 346 + * @data: Pointer to the writeback connector to clean up 347 347 * 348 348 * This will decrement the reference counter of blobs and destroy properties. It 349 349 * will also clean the remaining jobs in this writeback connector. Caution: This helper will not 350 350 * clean up the attached encoder and the drm_connector. 351 351 */ 352 352 static void drm_writeback_connector_cleanup(struct drm_device *dev, 353 - struct drm_writeback_connector *wb_connector) 353 + void *data) 354 354 { 355 355 unsigned long flags; 356 356 struct drm_writeback_job *pos, *n; 357 + struct drm_writeback_connector *wb_connector = data; 357 358 358 359 delete_writeback_properties(dev); 359 360 drm_property_blob_put(wb_connector->pixel_formats_blob_ptr); ··· 406 405 if (ret) 407 406 return ret; 408 407 409 - ret = drmm_add_action_or_reset(dev, (void *)drm_writeback_connector_cleanup, 408 + ret = drmm_add_action_or_reset(dev, drm_writeback_connector_cleanup, 410 409 wb_connector); 411 410 if (ret) 412 411 return ret;
+2 -2
drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c
··· 103 103 DIV_ROUND_DOWN_ULL(curve_1_interpolated, CURVE0_MULTIPLIER))); 104 104 105 105 ana_cp_int_temp = 106 - DIV_ROUND_CLOSEST_ULL(DIV_ROUND_DOWN_ULL(adjusted_vco_clk1, curve_2_scaled1), 107 - CURVE2_MULTIPLIER); 106 + DIV64_U64_ROUND_CLOSEST(DIV_ROUND_DOWN_ULL(adjusted_vco_clk1, curve_2_scaled1), 107 + CURVE2_MULTIPLIER); 108 108 109 109 *ana_cp_int = max(1, min(ana_cp_int_temp, 127)); 110 110
+1 -1
drivers/gpu/drm/i915/i915_pmu.c
··· 108 108 return other_bit(config); 109 109 } 110 110 111 - static u32 config_mask(const u64 config) 111 + static __always_inline u32 config_mask(const u64 config) 112 112 { 113 113 unsigned int bit = config_bit(config); 114 114
+2
drivers/gpu/drm/xe/display/xe_display.c
··· 104 104 spin_lock_init(&xe->display.fb_tracking.lock); 105 105 106 106 xe->display.hotplug.dp_wq = alloc_ordered_workqueue("xe-dp", 0); 107 + if (!xe->display.hotplug.dp_wq) 108 + return -ENOMEM; 107 109 108 110 return drmm_add_action_or_reset(&xe->drm, display_destroy, NULL); 109 111 }
+4 -7
drivers/gpu/drm/xe/display/xe_dsb_buffer.c
··· 17 17 18 18 void intel_dsb_buffer_write(struct intel_dsb_buffer *dsb_buf, u32 idx, u32 val) 19 19 { 20 - struct xe_device *xe = dsb_buf->vma->bo->tile->xe; 21 - 22 20 iosys_map_wr(&dsb_buf->vma->bo->vmap, idx * 4, u32, val); 23 - xe_device_l2_flush(xe); 24 21 } 25 22 26 23 u32 intel_dsb_buffer_read(struct intel_dsb_buffer *dsb_buf, u32 idx) ··· 27 30 28 31 void intel_dsb_buffer_memset(struct intel_dsb_buffer *dsb_buf, u32 idx, u32 val, size_t size) 29 32 { 30 - struct xe_device *xe = dsb_buf->vma->bo->tile->xe; 31 - 32 33 WARN_ON(idx > (dsb_buf->buf_size - size) / sizeof(*dsb_buf->cmd_buf)); 33 34 34 35 iosys_map_memset(&dsb_buf->vma->bo->vmap, idx * 4, val, size); 35 - xe_device_l2_flush(xe); 36 36 } 37 37 38 38 bool intel_dsb_buffer_create(struct intel_crtc *crtc, struct intel_dsb_buffer *dsb_buf, size_t size) ··· 68 74 69 75 void intel_dsb_buffer_flush_map(struct intel_dsb_buffer *dsb_buf) 70 76 { 77 + struct xe_device *xe = dsb_buf->vma->bo->tile->xe; 78 + 71 79 /* 72 80 * The memory barrier here is to ensure coherency of DSB vs MMIO, 73 81 * both for weak ordering archs and discrete cards. 74 82 */ 75 - xe_device_wmb(dsb_buf->vma->bo->tile->xe); 83 + xe_device_wmb(xe); 84 + xe_device_l2_flush(xe); 76 85 }
+3 -2
drivers/gpu/drm/xe/display/xe_fb_pin.c
··· 164 164 165 165 vma->dpt = dpt; 166 166 vma->node = dpt->ggtt_node[tile0->id]; 167 + 168 + /* Ensure DPT writes are flushed */ 169 + xe_device_l2_flush(xe); 167 170 return 0; 168 171 } 169 172 ··· 336 333 if (ret) 337 334 goto err_unpin; 338 335 339 - /* Ensure DPT writes are flushed */ 340 - xe_device_l2_flush(xe); 341 336 return vma; 342 337 343 338 err_unpin:
+1
drivers/gpu/drm/xe/regs/xe_mchbar_regs.h
··· 40 40 #define PCU_CR_PACKAGE_RAPL_LIMIT XE_REG(MCHBAR_MIRROR_BASE_SNB + 0x59a0) 41 41 #define PWR_LIM_VAL REG_GENMASK(14, 0) 42 42 #define PWR_LIM_EN REG_BIT(15) 43 + #define PWR_LIM REG_GENMASK(15, 0) 43 44 #define PWR_LIM_TIME REG_GENMASK(23, 17) 44 45 #define PWR_LIM_TIME_X REG_GENMASK(23, 22) 45 46 #define PWR_LIM_TIME_Y REG_GENMASK(21, 17)
+11
drivers/gpu/drm/xe/xe_ggtt.c
··· 201 201 .ggtt_set_pte = xe_ggtt_set_pte_and_flush, 202 202 }; 203 203 204 + static void dev_fini_ggtt(void *arg) 205 + { 206 + struct xe_ggtt *ggtt = arg; 207 + 208 + drain_workqueue(ggtt->wq); 209 + } 210 + 204 211 /** 205 212 * xe_ggtt_init_early - Early GGTT initialization 206 213 * @ggtt: the &xe_ggtt to be initialized ··· 261 254 primelockdep(ggtt); 262 255 263 256 err = drmm_add_action_or_reset(&xe->drm, ggtt_fini_early, ggtt); 257 + if (err) 258 + return err; 259 + 260 + err = devm_add_action_or_reset(xe->drm.dev, dev_fini_ggtt, ggtt); 264 261 if (err) 265 262 return err; 266 263
+6 -4
drivers/gpu/drm/xe/xe_guc_ct.c
··· 34 34 #include "xe_pm.h" 35 35 #include "xe_trace_guc.h" 36 36 37 + static void receive_g2h(struct xe_guc_ct *ct); 38 + static void g2h_worker_func(struct work_struct *w); 39 + static void safe_mode_worker_func(struct work_struct *w); 40 + static void ct_exit_safe_mode(struct xe_guc_ct *ct); 41 + 37 42 #if IS_ENABLED(CONFIG_DRM_XE_DEBUG) 38 43 enum { 39 44 /* Internal states, not error conditions */ ··· 191 186 { 192 187 struct xe_guc_ct *ct = arg; 193 188 189 + ct_exit_safe_mode(ct); 194 190 destroy_workqueue(ct->g2h_wq); 195 191 xa_destroy(&ct->fence_lookup); 196 192 } 197 - 198 - static void receive_g2h(struct xe_guc_ct *ct); 199 - static void g2h_worker_func(struct work_struct *w); 200 - static void safe_mode_worker_func(struct work_struct *w); 201 193 202 194 static void primelockdep(struct xe_guc_ct *ct) 203 195 {
+15 -19
drivers/gpu/drm/xe/xe_hwmon.c
··· 159 159 return ret; 160 160 } 161 161 162 - static int xe_hwmon_pcode_write_power_limit(const struct xe_hwmon *hwmon, u32 attr, u8 channel, 163 - u32 uval) 162 + static int xe_hwmon_pcode_rmw_power_limit(const struct xe_hwmon *hwmon, u32 attr, u8 channel, 163 + u32 clr, u32 set) 164 164 { 165 165 struct xe_tile *root_tile = xe_device_get_root_tile(hwmon->xe); 166 166 u32 val0, val1; ··· 179 179 channel, val0, val1, ret); 180 180 181 181 if (attr == PL1_HWMON_ATTR) 182 - val0 = uval; 182 + val0 = (val0 & ~clr) | set; 183 183 else 184 184 return -EIO; 185 185 ··· 339 339 if (hwmon->xe->info.has_mbx_power_limits) { 340 340 drm_dbg(&hwmon->xe->drm, "disabling %s on channel %d\n", 341 341 PWR_ATTR_TO_STR(attr), channel); 342 - xe_hwmon_pcode_write_power_limit(hwmon, attr, channel, 0); 342 + xe_hwmon_pcode_rmw_power_limit(hwmon, attr, channel, PWR_LIM_EN, 0); 343 343 xe_hwmon_pcode_read_power_limit(hwmon, attr, channel, &reg_val); 344 344 } else { 345 345 reg_val = xe_mmio_rmw32(mmio, rapl_limit, PWR_LIM_EN, 0); ··· 370 370 } 371 371 372 372 if (hwmon->xe->info.has_mbx_power_limits) 373 - ret = xe_hwmon_pcode_write_power_limit(hwmon, attr, channel, reg_val); 373 + ret = xe_hwmon_pcode_rmw_power_limit(hwmon, attr, channel, PWR_LIM, reg_val); 374 374 else 375 - reg_val = xe_mmio_rmw32(mmio, rapl_limit, PWR_LIM_EN | PWR_LIM_VAL, 376 - reg_val); 375 + reg_val = xe_mmio_rmw32(mmio, rapl_limit, PWR_LIM, reg_val); 377 376 unlock: 378 377 mutex_unlock(&hwmon->hwmon_lock); 379 378 return ret; ··· 562 563 563 564 mutex_lock(&hwmon->hwmon_lock); 564 565 565 - if (hwmon->xe->info.has_mbx_power_limits) { 566 - ret = xe_hwmon_pcode_read_power_limit(hwmon, power_attr, channel, (u32 *)&r); 567 - r = (r & ~PWR_LIM_TIME) | rxy; 568 - xe_hwmon_pcode_write_power_limit(hwmon, power_attr, channel, r); 569 - } else { 566 + if (hwmon->xe->info.has_mbx_power_limits) 567 + xe_hwmon_pcode_rmw_power_limit(hwmon, power_attr, channel, PWR_LIM_TIME, rxy); 568 + else 570 569 r = xe_mmio_rmw32(mmio, xe_hwmon_get_reg(hwmon, REG_PKG_RAPL_LIMIT, channel), 571 570 PWR_LIM_TIME, rxy); 572 - } 573 571 574 572 mutex_unlock(&hwmon->hwmon_lock); 575 573 ··· 1134 1138 } else { 1135 1139 drm_info(&hwmon->xe->drm, "Using mailbox commands for power limits\n"); 1136 1140 /* Write default limits to read from pcode from now on. */ 1137 - xe_hwmon_pcode_write_power_limit(hwmon, PL1_HWMON_ATTR, 1138 - CHANNEL_CARD, 1139 - hwmon->pl1_on_boot[CHANNEL_CARD]); 1140 - xe_hwmon_pcode_write_power_limit(hwmon, PL1_HWMON_ATTR, 1141 - CHANNEL_PKG, 1142 - hwmon->pl1_on_boot[CHANNEL_PKG]); 1141 + xe_hwmon_pcode_rmw_power_limit(hwmon, PL1_HWMON_ATTR, 1142 + CHANNEL_CARD, PWR_LIM | PWR_LIM_TIME, 1143 + hwmon->pl1_on_boot[CHANNEL_CARD]); 1144 + xe_hwmon_pcode_rmw_power_limit(hwmon, PL1_HWMON_ATTR, 1145 + CHANNEL_PKG, PWR_LIM | PWR_LIM_TIME, 1146 + hwmon->pl1_on_boot[CHANNEL_PKG]); 1143 1147 hwmon->scl_shift_power = PWR_UNIT; 1144 1148 hwmon->scl_shift_energy = ENERGY_UNIT; 1145 1149 hwmon->scl_shift_time = TIME_UNIT;
+5
drivers/hid/hid-appletb-kbd.c
··· 438 438 return 0; 439 439 440 440 close_hw: 441 + if (kbd->backlight_dev) 442 + put_device(&kbd->backlight_dev->dev); 441 443 hid_hw_close(hdev); 442 444 stop_hw: 443 445 hid_hw_stop(hdev); ··· 454 452 455 453 input_unregister_handler(&kbd->inp_handler); 456 454 timer_delete_sync(&kbd->inactivity_timer); 455 + 456 + if (kbd->backlight_dev) 457 + put_device(&kbd->backlight_dev->dev); 457 458 458 459 hid_hw_close(hdev); 459 460 hid_hw_stop(hdev);
+6
drivers/hid/hid-ids.h
··· 312 312 #define USB_DEVICE_ID_ASUS_AK1D 0x1125 313 313 #define USB_DEVICE_ID_CHICONY_TOSHIBA_WT10A 0x1408 314 314 #define USB_DEVICE_ID_CHICONY_ACER_SWITCH12 0x1421 315 + #define USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA 0xb824 316 + #define USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA2 0xb82c 315 317 316 318 #define USB_VENDOR_ID_CHUNGHWAT 0x2247 317 319 #define USB_DEVICE_ID_CHUNGHWAT_MULTITOUCH 0x0001 ··· 821 819 #define USB_DEVICE_ID_LENOVO_TPPRODOCK 0x6067 822 820 #define USB_DEVICE_ID_LENOVO_X1_COVER 0x6085 823 821 #define USB_DEVICE_ID_LENOVO_X1_TAB 0x60a3 822 + #define USB_DEVICE_ID_LENOVO_X1_TAB2 0x60a4 824 823 #define USB_DEVICE_ID_LENOVO_X1_TAB3 0x60b5 825 824 #define USB_DEVICE_ID_LENOVO_X12_TAB 0x60fe 826 825 #define USB_DEVICE_ID_LENOVO_X12_TAB2 0x61ae ··· 1527 1524 1528 1525 #define USB_VENDOR_ID_SIGNOTEC 0x2133 1529 1526 #define USB_DEVICE_ID_SIGNOTEC_VIEWSONIC_PD1011 0x0018 1527 + 1528 + #define USB_VENDOR_ID_SMARTLINKTECHNOLOGY 0x4c4a 1529 + #define USB_DEVICE_ID_SMARTLINKTECHNOLOGY_4155 0x4155 1530 1530 1531 1531 #endif
+1 -1
drivers/hid/hid-input.c
··· 2343 2343 } 2344 2344 2345 2345 if (list_empty(&hid->inputs)) { 2346 - hid_err(hid, "No inputs registered, leaving\n"); 2346 + hid_dbg(hid, "No inputs registered, leaving\n"); 2347 2347 goto out_unwind; 2348 2348 } 2349 2349
+15 -4
drivers/hid/hid-lenovo.c
··· 492 492 case USB_DEVICE_ID_LENOVO_X12_TAB: 493 493 case USB_DEVICE_ID_LENOVO_X12_TAB2: 494 494 case USB_DEVICE_ID_LENOVO_X1_TAB: 495 + case USB_DEVICE_ID_LENOVO_X1_TAB2: 495 496 case USB_DEVICE_ID_LENOVO_X1_TAB3: 496 497 return lenovo_input_mapping_x1_tab_kbd(hdev, hi, field, usage, bit, max); 497 498 default: ··· 549 548 550 549 /* 551 550 * Tell the keyboard a driver understands it, and turn F7, F9, F11 into 552 - * regular keys 551 + * regular keys (Compact only) 553 552 */ 554 - ret = lenovo_send_cmd_cptkbd(hdev, 0x01, 0x03); 555 - if (ret) 556 - hid_warn(hdev, "Failed to switch F7/9/11 mode: %d\n", ret); 553 + if (hdev->product == USB_DEVICE_ID_LENOVO_CUSBKBD || 554 + hdev->product == USB_DEVICE_ID_LENOVO_CBTKBD) { 555 + ret = lenovo_send_cmd_cptkbd(hdev, 0x01, 0x03); 556 + if (ret) 557 + hid_warn(hdev, "Failed to switch F7/9/11 mode: %d\n", ret); 558 + } 557 559 558 560 /* Switch middle button to native mode */ 559 561 ret = lenovo_send_cmd_cptkbd(hdev, 0x09, 0x01); ··· 609 605 case USB_DEVICE_ID_LENOVO_X12_TAB2: 610 606 case USB_DEVICE_ID_LENOVO_TP10UBKBD: 611 607 case USB_DEVICE_ID_LENOVO_X1_TAB: 608 + case USB_DEVICE_ID_LENOVO_X1_TAB2: 612 609 case USB_DEVICE_ID_LENOVO_X1_TAB3: 613 610 ret = lenovo_led_set_tp10ubkbd(hdev, TP10UBKBD_FN_LOCK_LED, value); 614 611 if (ret) ··· 866 861 case USB_DEVICE_ID_LENOVO_X12_TAB2: 867 862 case USB_DEVICE_ID_LENOVO_TP10UBKBD: 868 863 case USB_DEVICE_ID_LENOVO_X1_TAB: 864 + case USB_DEVICE_ID_LENOVO_X1_TAB2: 869 865 case USB_DEVICE_ID_LENOVO_X1_TAB3: 870 866 return lenovo_event_tp10ubkbd(hdev, field, usage, value); 871 867 default: ··· 1150 1144 case USB_DEVICE_ID_LENOVO_X12_TAB2: 1151 1145 case USB_DEVICE_ID_LENOVO_TP10UBKBD: 1152 1146 case USB_DEVICE_ID_LENOVO_X1_TAB: 1147 + case USB_DEVICE_ID_LENOVO_X1_TAB2: 1153 1148 case USB_DEVICE_ID_LENOVO_X1_TAB3: 1154 1149 ret = lenovo_led_set_tp10ubkbd(hdev, tp10ubkbd_led[led_nr], value); 1155 1150 break; ··· 1391 1384 case USB_DEVICE_ID_LENOVO_X12_TAB2: 1392 1385 case USB_DEVICE_ID_LENOVO_TP10UBKBD: 1393 1386 case USB_DEVICE_ID_LENOVO_X1_TAB: 1387 + case USB_DEVICE_ID_LENOVO_X1_TAB2: 1394 1388 case USB_DEVICE_ID_LENOVO_X1_TAB3: 1395 1389 ret = lenovo_probe_tp10ubkbd(hdev); 1396 1390 break; ··· 1481 1473 case USB_DEVICE_ID_LENOVO_X12_TAB2: 1482 1474 case USB_DEVICE_ID_LENOVO_TP10UBKBD: 1483 1475 case USB_DEVICE_ID_LENOVO_X1_TAB: 1476 + case USB_DEVICE_ID_LENOVO_X1_TAB2: 1484 1477 case USB_DEVICE_ID_LENOVO_X1_TAB3: 1485 1478 lenovo_remove_tp10ubkbd(hdev); 1486 1479 break; ··· 1532 1523 */ 1533 1524 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 1534 1525 USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB) }, 1526 + { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 1527 + USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB2) }, 1535 1528 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 1536 1529 USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB3) }, 1537 1530 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+7 -1
drivers/hid/hid-multitouch.c
··· 2132 2132 HID_DEVICE(BUS_I2C, HID_GROUP_GENERIC, 2133 2133 USB_VENDOR_ID_LG, I2C_DEVICE_ID_LG_7010) }, 2134 2134 2135 - /* Lenovo X1 TAB Gen 2 */ 2135 + /* Lenovo X1 TAB Gen 1 */ 2136 2136 { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT, 2137 2137 HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8, 2138 2138 USB_VENDOR_ID_LENOVO, 2139 2139 USB_DEVICE_ID_LENOVO_X1_TAB) }, 2140 + 2141 + /* Lenovo X1 TAB Gen 2 */ 2142 + { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT, 2143 + HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8, 2144 + USB_VENDOR_ID_LENOVO, 2145 + USB_DEVICE_ID_LENOVO_X1_TAB2) }, 2140 2146 2141 2147 /* Lenovo X1 TAB Gen 3 */ 2142 2148 { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
+36 -2
drivers/hid/hid-nintendo.c
··· 308 308 JOYCON_CTLR_STATE_INIT, 309 309 JOYCON_CTLR_STATE_READ, 310 310 JOYCON_CTLR_STATE_REMOVED, 311 + JOYCON_CTLR_STATE_SUSPENDED, 311 312 }; 312 313 313 314 /* Controller type received as part of device info */ ··· 2751 2750 2752 2751 static int nintendo_hid_resume(struct hid_device *hdev) 2753 2752 { 2754 - int ret = joycon_init(hdev); 2753 + struct joycon_ctlr *ctlr = hid_get_drvdata(hdev); 2754 + int ret; 2755 2755 2756 + hid_dbg(hdev, "resume\n"); 2757 + if (!joycon_using_usb(ctlr)) { 2758 + hid_dbg(hdev, "no-op resume for bt ctlr\n"); 2759 + ctlr->ctlr_state = JOYCON_CTLR_STATE_READ; 2760 + return 0; 2761 + } 2762 + 2763 + ret = joycon_init(hdev); 2756 2764 if (ret) 2757 - hid_err(hdev, "Failed to restore controller after resume"); 2765 + hid_err(hdev, 2766 + "Failed to restore controller after resume: %d\n", 2767 + ret); 2768 + else 2769 + ctlr->ctlr_state = JOYCON_CTLR_STATE_READ; 2758 2770 2759 2771 return ret; 2772 + } 2773 + 2774 + static int nintendo_hid_suspend(struct hid_device *hdev, pm_message_t message) 2775 + { 2776 + struct joycon_ctlr *ctlr = hid_get_drvdata(hdev); 2777 + 2778 + hid_dbg(hdev, "suspend: %d\n", message.event); 2779 + /* 2780 + * Avoid any blocking loops in suspend/resume transitions. 2781 + * 2782 + * joycon_enforce_subcmd_rate() can result in repeated retries if for 2783 + * whatever reason the controller stops providing input reports. 2784 + * 2785 + * This has been observed with bluetooth controllers which lose 2786 + * connectivity prior to suspend (but not long enough to result in 2787 + * complete disconnection). 2788 + */ 2789 + ctlr->ctlr_state = JOYCON_CTLR_STATE_SUSPENDED; 2790 + return 0; 2760 2791 } 2761 2792 2762 2793 #endif ··· 2829 2796 2830 2797 #ifdef CONFIG_PM 2831 2798 .resume = nintendo_hid_resume, 2799 + .suspend = nintendo_hid_suspend, 2832 2800 #endif 2833 2801 }; 2834 2802 static int __init nintendo_init(void)
+3
drivers/hid/hid-quirks.c
··· 757 757 { HID_USB_DEVICE(USB_VENDOR_ID_AVERMEDIA, USB_DEVICE_ID_AVER_FM_MR800) }, 758 758 { HID_USB_DEVICE(USB_VENDOR_ID_AXENTIA, USB_DEVICE_ID_AXENTIA_FM_RADIO) }, 759 759 { HID_USB_DEVICE(USB_VENDOR_ID_BERKSHIRE, USB_DEVICE_ID_BERKSHIRE_PCWD) }, 760 + { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA) }, 761 + { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA2) }, 760 762 { HID_USB_DEVICE(USB_VENDOR_ID_CIDC, 0x0103) }, 761 763 { HID_USB_DEVICE(USB_VENDOR_ID_CYGNAL, USB_DEVICE_ID_CYGNAL_RADIO_SI470X) }, 762 764 { HID_USB_DEVICE(USB_VENDOR_ID_CYGNAL, USB_DEVICE_ID_CYGNAL_RADIO_SI4713) }, ··· 906 904 #endif 907 905 { HID_USB_DEVICE(USB_VENDOR_ID_YEALINK, USB_DEVICE_ID_YEALINK_P1K_P4K_B2K) }, 908 906 { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_HP_5MP_CAMERA_5473) }, 907 + { HID_USB_DEVICE(USB_VENDOR_ID_SMARTLINKTECHNOLOGY, USB_DEVICE_ID_SMARTLINKTECHNOLOGY_4155) }, 909 908 { } 910 909 }; 911 910
+1
drivers/hid/intel-ish-hid/ipc/hw-ish.h
··· 38 38 #define PCI_DEVICE_ID_INTEL_ISH_LNL_M 0xA845 39 39 #define PCI_DEVICE_ID_INTEL_ISH_PTL_H 0xE345 40 40 #define PCI_DEVICE_ID_INTEL_ISH_PTL_P 0xE445 41 + #define PCI_DEVICE_ID_INTEL_ISH_WCL 0x4D45 41 42 42 43 #define REVISION_ID_CHT_A0 0x6 43 44 #define REVISION_ID_CHT_Ax_SI 0x0
+9 -3
drivers/hid/intel-ish-hid/ipc/pci-ish.c
··· 27 27 ISHTP_DRIVER_DATA_NONE, 28 28 ISHTP_DRIVER_DATA_LNL_M, 29 29 ISHTP_DRIVER_DATA_PTL, 30 + ISHTP_DRIVER_DATA_WCL, 30 31 }; 31 32 32 33 #define ISH_FW_GEN_LNL_M "lnlm" 33 34 #define ISH_FW_GEN_PTL "ptl" 35 + #define ISH_FW_GEN_WCL "wcl" 34 36 35 37 #define ISH_FIRMWARE_PATH(gen) "intel/ish/ish_" gen ".bin" 36 38 #define ISH_FIRMWARE_PATH_ALL "intel/ish/ish_*.bin" ··· 43 41 }, 44 42 [ISHTP_DRIVER_DATA_PTL] = { 45 43 .fw_generation = ISH_FW_GEN_PTL, 44 + }, 45 + [ISHTP_DRIVER_DATA_WCL] = { 46 + .fw_generation = ISH_FW_GEN_WCL, 46 47 }, 47 48 }; 48 49 ··· 72 67 {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_MTL_P)}, 73 68 {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_ARL_H)}, 74 69 {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_ARL_S)}, 75 - {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_LNL_M), .driver_data = ISHTP_DRIVER_DATA_LNL_M}, 76 - {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_PTL_H), .driver_data = ISHTP_DRIVER_DATA_PTL}, 77 - {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_PTL_P), .driver_data = ISHTP_DRIVER_DATA_PTL}, 70 + {PCI_DEVICE_DATA(INTEL, ISH_LNL_M, ISHTP_DRIVER_DATA_LNL_M)}, 71 + {PCI_DEVICE_DATA(INTEL, ISH_PTL_H, ISHTP_DRIVER_DATA_PTL)}, 72 + {PCI_DEVICE_DATA(INTEL, ISH_PTL_P, ISHTP_DRIVER_DATA_PTL)}, 73 + {PCI_DEVICE_DATA(INTEL, ISH_WCL, ISHTP_DRIVER_DATA_WCL)}, 78 74 {} 79 75 }; 80 76 MODULE_DEVICE_TABLE(pci, ish_pci_tbl);
+25 -1
drivers/hid/intel-thc-hid/intel-quicki2c/quicki2c-protocol.c
··· 4 4 #include <linux/bitfield.h> 5 5 #include <linux/hid.h> 6 6 #include <linux/hid-over-i2c.h> 7 + #include <linux/unaligned.h> 7 8 8 9 #include "intel-thc-dev.h" 9 10 #include "intel-thc-dma.h" ··· 201 200 202 201 int quicki2c_reset(struct quicki2c_device *qcdev) 203 202 { 203 + u16 input_reg = le16_to_cpu(qcdev->dev_desc.input_reg); 204 + size_t read_len = HIDI2C_LENGTH_LEN; 205 + u32 prd_len = read_len; 204 206 int ret; 205 207 206 208 qcdev->reset_ack = false; ··· 217 213 218 214 ret = wait_event_interruptible_timeout(qcdev->reset_ack_wq, qcdev->reset_ack, 219 215 HIDI2C_RESET_TIMEOUT * HZ); 220 - if (ret <= 0 || !qcdev->reset_ack) { 216 + if (qcdev->reset_ack) 217 + return 0; 218 + 219 + /* 220 + * Manually read reset response if it wasn't received, in case reset interrupt 221 + * was missed by touch device or THC hardware. 222 + */ 223 + ret = thc_tic_pio_read(qcdev->thc_hw, input_reg, read_len, &prd_len, 224 + (u32 *)qcdev->input_buf); 225 + if (ret) { 226 + dev_err_once(qcdev->dev, "Read Reset Response failed, ret %d\n", ret); 227 + return ret; 228 + } 229 + 230 + /* 231 + * Check response packet length, it's first 16 bits of packet. 232 + * If response packet length is zero, it's reset response, otherwise not. 233 + */ 234 + if (get_unaligned_le16(qcdev->input_buf)) { 221 235 dev_err_once(qcdev->dev, 222 236 "Wait reset response timed out ret:%d timeout:%ds\n", 223 237 ret, HIDI2C_RESET_TIMEOUT); 224 238 return -ETIMEDOUT; 225 239 } 240 + 241 + qcdev->reset_ack = true; 226 242 227 243 return 0; 228 244 }
+6 -1
drivers/hid/wacom_sys.c
··· 2048 2048 2049 2049 remote->remote_dir = kobject_create_and_add("wacom_remote", 2050 2050 &wacom->hdev->dev.kobj); 2051 - if (!remote->remote_dir) 2051 + if (!remote->remote_dir) { 2052 + kfifo_free(&remote->remote_fifo); 2052 2053 return -ENOMEM; 2054 + } 2053 2055 2054 2056 error = sysfs_create_files(remote->remote_dir, remote_unpair_attrs); 2055 2057 2056 2058 if (error) { 2057 2059 hid_err(wacom->hdev, 2058 2060 "cannot create sysfs group err: %d\n", error); 2061 + kfifo_free(&remote->remote_fifo); 2062 + kobject_put(remote->remote_dir); 2059 2063 return error; 2060 2064 } 2061 2065 ··· 2905 2901 hid_hw_stop(hdev); 2906 2902 2907 2903 cancel_delayed_work_sync(&wacom->init_work); 2904 + cancel_delayed_work_sync(&wacom->aes_battery_work); 2908 2905 cancel_work_sync(&wacom->wireless_work); 2909 2906 cancel_work_sync(&wacom->battery_work); 2910 2907 cancel_work_sync(&wacom->remote_work);
+1 -1
drivers/i2c/busses/Kconfig
··· 1530 1530 1531 1531 config SCx200_ACB 1532 1532 tristate "Geode ACCESS.bus support" 1533 - depends on X86_32 && PCI 1533 + depends on X86_32 && PCI && HAS_IOPORT 1534 1534 help 1535 1535 Enable the use of the ACCESS.bus controllers on the Geode SCx200 and 1536 1536 SC1100 processors and the CS5535 and CS5536 Geode companion devices.
+2
drivers/i2c/busses/i2c-designware-amdisp.c
··· 8 8 #include <linux/module.h> 9 9 #include <linux/platform_device.h> 10 10 #include <linux/pm_runtime.h> 11 + #include <linux/soc/amd/isp4_misc.h> 11 12 12 13 #include "i2c-designware-core.h" 13 14 ··· 63 62 64 63 adap = &isp_i2c_dev->adapter; 65 64 adap->owner = THIS_MODULE; 65 + scnprintf(adap->name, sizeof(adap->name), AMDISP_I2C_ADAP_NAME); 66 66 ACPI_COMPANION_SET(&adap->dev, ACPI_COMPANION(&pdev->dev)); 67 67 adap->dev.of_node = pdev->dev.of_node; 68 68 /* use dynamically allocated adapter id */
+3 -2
drivers/i2c/busses/i2c-designware-master.c
··· 1042 1042 if (ret) 1043 1043 return ret; 1044 1044 1045 - snprintf(adap->name, sizeof(adap->name), 1046 - "Synopsys DesignWare I2C adapter"); 1045 + if (!adap->name[0]) 1046 + scnprintf(adap->name, sizeof(adap->name), 1047 + "Synopsys DesignWare I2C adapter"); 1047 1048 adap->retries = 3; 1048 1049 adap->algo = &i2c_dw_algo; 1049 1050 adap->quirks = &i2c_dw_quirks;
+2 -1
drivers/i2c/busses/i2c-imx.c
··· 1008 1008 /* setup bus to read data */ 1009 1009 temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2CR); 1010 1010 temp &= ~I2CR_MTX; 1011 - if (i2c_imx->msg->len - 1) 1011 + if ((i2c_imx->msg->len - 1) || (i2c_imx->msg->flags & I2C_M_RECV_LEN)) 1012 1012 temp &= ~I2CR_TXAK; 1013 1013 1014 1014 imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR); ··· 1063 1063 wake_up(&i2c_imx->queue); 1064 1064 } 1065 1065 i2c_imx->msg->len += len; 1066 + i2c_imx->msg->buf[i2c_imx->msg_buf_idx++] = len; 1066 1067 } 1067 1068 1068 1069 static irqreturn_t i2c_imx_master_isr(struct imx_i2c_struct *i2c_imx, unsigned int status)
+5 -2
drivers/i2c/busses/i2c-omap.c
··· 1461 1461 if (IS_ERR(mux_state)) { 1462 1462 r = PTR_ERR(mux_state); 1463 1463 dev_dbg(&pdev->dev, "failed to get I2C mux: %d\n", r); 1464 - goto err_disable_pm; 1464 + goto err_put_pm; 1465 1465 } 1466 1466 omap->mux_state = mux_state; 1467 1467 r = mux_state_select(omap->mux_state); 1468 1468 if (r) { 1469 1469 dev_err(&pdev->dev, "failed to select I2C mux: %d\n", r); 1470 - goto err_disable_pm; 1470 + goto err_put_pm; 1471 1471 } 1472 1472 } 1473 1473 ··· 1515 1515 1516 1516 err_unuse_clocks: 1517 1517 omap_i2c_write_reg(omap, OMAP_I2C_CON_REG, 0); 1518 + if (omap->mux_state) 1519 + mux_state_deselect(omap->mux_state); 1520 + err_put_pm: 1518 1521 pm_runtime_dont_use_autosuspend(omap->dev); 1519 1522 pm_runtime_put_sync(omap->dev); 1520 1523 err_disable_pm:
+6
drivers/i2c/busses/i2c-robotfuzz-osif.c
··· 111 111 return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL; 112 112 } 113 113 114 + /* prevent invalid 0-length usb_control_msg */ 115 + static const struct i2c_adapter_quirks osif_quirks = { 116 + .flags = I2C_AQ_NO_ZERO_LEN_READ, 117 + }; 118 + 114 119 static const struct i2c_algorithm osif_algorithm = { 115 120 .xfer = osif_xfer, 116 121 .functionality = osif_func, ··· 148 143 149 144 priv->adapter.owner = THIS_MODULE; 150 145 priv->adapter.class = I2C_CLASS_HWMON; 146 + priv->adapter.quirks = &osif_quirks; 151 147 priv->adapter.algo = &osif_algorithm; 152 148 priv->adapter.algo_data = priv; 153 149 snprintf(priv->adapter.name, sizeof(priv->adapter.name),
+6
drivers/i2c/busses/i2c-tiny-usb.c
··· 139 139 return ret; 140 140 } 141 141 142 + /* prevent invalid 0-length usb_control_msg */ 143 + static const struct i2c_adapter_quirks usb_quirks = { 144 + .flags = I2C_AQ_NO_ZERO_LEN_READ, 145 + }; 146 + 142 147 /* This is the actual algorithm we define */ 143 148 static const struct i2c_algorithm usb_algorithm = { 144 149 .xfer = usb_xfer, ··· 252 247 /* setup i2c adapter description */ 253 248 dev->adapter.owner = THIS_MODULE; 254 249 dev->adapter.class = I2C_CLASS_HWMON; 250 + dev->adapter.quirks = &usb_quirks; 255 251 dev->adapter.algo = &usb_algorithm; 256 252 dev->adapter.algo_data = dev; 257 253 snprintf(dev->adapter.name, sizeof(dev->adapter.name),
+2 -2
drivers/infiniband/core/cache.c
··· 582 582 out_unlock: 583 583 mutex_unlock(&table->lock); 584 584 if (ret) 585 - pr_warn("%s: unable to add gid %pI6 error=%d\n", 586 - __func__, gid->raw, ret); 585 + pr_warn_ratelimited("%s: unable to add gid %pI6 error=%d\n", 586 + __func__, gid->raw, ret); 587 587 return ret; 588 588 } 589 589
+11
drivers/infiniband/core/umem_odp.c
··· 76 76 end = ALIGN(end, page_size); 77 77 if (unlikely(end < page_size)) 78 78 return -EOVERFLOW; 79 + /* 80 + * The mmu notifier can be called within reclaim contexts and takes the 81 + * umem_mutex. This is rare to trigger in testing, teach lockdep about 82 + * it. 83 + */ 84 + if (IS_ENABLED(CONFIG_LOCKDEP)) { 85 + fs_reclaim_acquire(GFP_KERNEL); 86 + mutex_lock(&umem_odp->umem_mutex); 87 + mutex_unlock(&umem_odp->umem_mutex); 88 + fs_reclaim_release(GFP_KERNEL); 89 + } 79 90 80 91 nr_entries = (end - start) >> PAGE_SHIFT; 81 92 if (!(nr_entries * PAGE_SIZE / page_size))
+2 -2
drivers/infiniband/hw/mlx5/counters.c
··· 398 398 return ret; 399 399 400 400 /* We don't expose device counters over Vports */ 401 - if (is_mdev_switchdev_mode(dev->mdev) && port_num != 0) 401 + if (is_mdev_switchdev_mode(dev->mdev) && dev->is_rep && port_num != 0) 402 402 goto done; 403 403 404 404 if (MLX5_CAP_PCAM_FEATURE(dev->mdev, rx_icrc_encapsulated_counter)) { ··· 418 418 */ 419 419 goto done; 420 420 } 421 - ret = mlx5_lag_query_cong_counters(dev->mdev, 421 + ret = mlx5_lag_query_cong_counters(mdev, 422 422 stats->value + 423 423 cnts->num_q_counters, 424 424 cnts->num_cong_counters,
+8 -2
drivers/infiniband/hw/mlx5/devx.c
··· 1958 1958 /* Level1 is valid for future use, no need to free */ 1959 1959 return -ENOMEM; 1960 1960 1961 + INIT_LIST_HEAD(&obj_event->obj_sub_list); 1961 1962 err = xa_insert(&event->object_ids, 1962 1963 key_level2, 1963 1964 obj_event, ··· 1967 1966 kfree(obj_event); 1968 1967 return err; 1969 1968 } 1970 - INIT_LIST_HEAD(&obj_event->obj_sub_list); 1971 1969 } 1972 1970 1973 1971 return 0; ··· 2669 2669 2670 2670 void mlx5_ib_ufile_hw_cleanup(struct ib_uverbs_file *ufile) 2671 2671 { 2672 - struct mlx5_async_cmd async_cmd[MAX_ASYNC_CMDS]; 2672 + struct mlx5_async_cmd *async_cmd; 2673 2673 struct ib_ucontext *ucontext = ufile->ucontext; 2674 2674 struct ib_device *device = ucontext->device; 2675 2675 struct mlx5_ib_dev *dev = to_mdev(device); ··· 2677 2677 struct devx_obj *obj; 2678 2678 int head = 0; 2679 2679 int tail = 0; 2680 + 2681 + async_cmd = kcalloc(MAX_ASYNC_CMDS, sizeof(*async_cmd), GFP_KERNEL); 2682 + if (!async_cmd) 2683 + return; 2680 2684 2681 2685 list_for_each_entry(uobject, &ufile->uobjects, list) { 2682 2686 WARN_ON(uverbs_try_lock_object(uobject, UVERBS_LOOKUP_WRITE)); ··· 2717 2713 devx_wait_async_destroy(&async_cmd[head % MAX_ASYNC_CMDS]); 2718 2714 head++; 2719 2715 } 2716 + 2717 + kfree(async_cmd); 2720 2718 } 2721 2719 2722 2720 static ssize_t devx_async_cmd_event_read(struct file *filp, char __user *buf,
+33
drivers/infiniband/hw/mlx5/main.c
··· 1791 1791 context->devx_uid); 1792 1792 } 1793 1793 1794 + static int mlx5_ib_enable_lb_mp(struct mlx5_core_dev *master, 1795 + struct mlx5_core_dev *slave) 1796 + { 1797 + int err; 1798 + 1799 + err = mlx5_nic_vport_update_local_lb(master, true); 1800 + if (err) 1801 + return err; 1802 + 1803 + err = mlx5_nic_vport_update_local_lb(slave, true); 1804 + if (err) 1805 + goto out; 1806 + 1807 + return 0; 1808 + 1809 + out: 1810 + mlx5_nic_vport_update_local_lb(master, false); 1811 + return err; 1812 + } 1813 + 1814 + static void mlx5_ib_disable_lb_mp(struct mlx5_core_dev *master, 1815 + struct mlx5_core_dev *slave) 1816 + { 1817 + mlx5_nic_vport_update_local_lb(slave, false); 1818 + mlx5_nic_vport_update_local_lb(master, false); 1819 + } 1820 + 1794 1821 int mlx5_ib_enable_lb(struct mlx5_ib_dev *dev, bool td, bool qp) 1795 1822 { 1796 1823 int err = 0; ··· 3522 3495 3523 3496 lockdep_assert_held(&mlx5_ib_multiport_mutex); 3524 3497 3498 + mlx5_ib_disable_lb_mp(ibdev->mdev, mpi->mdev); 3499 + 3525 3500 mlx5_core_mp_event_replay(ibdev->mdev, 3526 3501 MLX5_DRIVER_EVENT_AFFILIATION_REMOVED, 3527 3502 NULL); ··· 3618 3589 mlx5_core_mp_event_replay(ibdev->mdev, 3619 3590 MLX5_DRIVER_EVENT_AFFILIATION_DONE, 3620 3591 &key); 3592 + 3593 + err = mlx5_ib_enable_lb_mp(ibdev->mdev, mpi->mdev); 3594 + if (err) 3595 + goto unbind; 3621 3596 3622 3597 return true; 3623 3598
+47 -14
drivers/infiniband/hw/mlx5/mr.c
··· 2027 2027 } 2028 2028 2029 2029 2030 - static int mlx5_revoke_mr(struct mlx5_ib_mr *mr) 2030 + static int mlx5_umr_revoke_mr_with_lock(struct mlx5_ib_mr *mr) 2031 2031 { 2032 - struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device); 2033 - struct mlx5_cache_ent *ent = mr->mmkey.cache_ent; 2034 - bool is_odp = is_odp_mr(mr); 2035 2032 bool is_odp_dma_buf = is_dmabuf_mr(mr) && 2036 - !to_ib_umem_dmabuf(mr->umem)->pinned; 2037 - bool from_cache = !!ent; 2038 - int ret = 0; 2033 + !to_ib_umem_dmabuf(mr->umem)->pinned; 2034 + bool is_odp = is_odp_mr(mr); 2035 + int ret; 2039 2036 2040 2037 if (is_odp) 2041 2038 mutex_lock(&to_ib_umem_odp(mr->umem)->umem_mutex); 2042 2039 2043 2040 if (is_odp_dma_buf) 2044 - dma_resv_lock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv, NULL); 2041 + dma_resv_lock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv, 2042 + NULL); 2045 2043 2046 - if (mr->mmkey.cacheable && !mlx5r_umr_revoke_mr(mr) && !cache_ent_find_and_store(dev, mr)) { 2044 + ret = mlx5r_umr_revoke_mr(mr); 2045 + 2046 + if (is_odp) { 2047 + if (!ret) 2048 + to_ib_umem_odp(mr->umem)->private = NULL; 2049 + mutex_unlock(&to_ib_umem_odp(mr->umem)->umem_mutex); 2050 + } 2051 + 2052 + if (is_odp_dma_buf) { 2053 + if (!ret) 2054 + to_ib_umem_dmabuf(mr->umem)->private = NULL; 2055 + dma_resv_unlock( 2056 + to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv); 2057 + } 2058 + 2059 + return ret; 2060 + } 2061 + 2062 + static int mlx5r_handle_mkey_cleanup(struct mlx5_ib_mr *mr) 2063 + { 2064 + bool is_odp_dma_buf = is_dmabuf_mr(mr) && 2065 + !to_ib_umem_dmabuf(mr->umem)->pinned; 2066 + struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device); 2067 + struct mlx5_cache_ent *ent = mr->mmkey.cache_ent; 2068 + bool is_odp = is_odp_mr(mr); 2069 + bool from_cache = !!ent; 2070 + int ret; 2071 + 2072 + if (mr->mmkey.cacheable && !mlx5_umr_revoke_mr_with_lock(mr) && 2073 + !cache_ent_find_and_store(dev, mr)) { 2047 2074 ent = mr->mmkey.cache_ent; 2048 2075 /* upon storing to a clean temp entry - schedule its cleanup */ 2049 2076 spin_lock_irq(&ent->mkeys_queue.lock); ··· 2082 2055 ent->tmp_cleanup_scheduled = true; 2083 2056 } 2084 2057 spin_unlock_irq(&ent->mkeys_queue.lock); 2085 - goto out; 2058 + return 0; 2086 2059 } 2087 2060 2088 2061 if (ent) { ··· 2091 2064 mr->mmkey.cache_ent = NULL; 2092 2065 spin_unlock_irq(&ent->mkeys_queue.lock); 2093 2066 } 2067 + 2068 + if (is_odp) 2069 + mutex_lock(&to_ib_umem_odp(mr->umem)->umem_mutex); 2070 + 2071 + if (is_odp_dma_buf) 2072 + dma_resv_lock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv, 2073 + NULL); 2094 2074 ret = destroy_mkey(dev, mr); 2095 - out: 2096 2075 if (is_odp) { 2097 2076 if (!ret) 2098 2077 to_ib_umem_odp(mr->umem)->private = NULL; ··· 2108 2075 if (is_odp_dma_buf) { 2109 2076 if (!ret) 2110 2077 to_ib_umem_dmabuf(mr->umem)->private = NULL; 2111 - dma_resv_unlock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv); 2078 + dma_resv_unlock( 2079 + to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv); 2112 2080 } 2113 - 2114 2081 return ret; 2115 2082 } ··· 2159 2126 } 2160 2127 2161 2128 /* Stop DMA */ 2162 - rc = mlx5_revoke_mr(mr); 2129 + rc = mlx5r_handle_mkey_cleanup(mr); 2163 2130 if (rc) 2164 2131 return rc; 2165 2132
+4 -4
drivers/infiniband/hw/mlx5/odp.c
··· 259 259 } 260 260 261 261 if (MLX5_CAP_ODP(mr_to_mdev(mr)->mdev, mem_page_fault)) 262 - __xa_erase(&mr_to_mdev(mr)->odp_mkeys, 263 - mlx5_base_mkey(mr->mmkey.key)); 262 + xa_erase(&mr_to_mdev(mr)->odp_mkeys, 263 + mlx5_base_mkey(mr->mmkey.key)); 264 264 xa_unlock(&imr->implicit_children); 265 265 266 266 /* Freeing a MR is a sleeping operation, so bounce to a work queue */ ··· 532 532 } 533 533 534 534 if (MLX5_CAP_ODP(dev->mdev, mem_page_fault)) { 535 - ret = __xa_store(&dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key), 536 - &mr->mmkey, GFP_KERNEL); 535 + ret = xa_store(&dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key), 536 + &mr->mmkey, GFP_KERNEL); 537 537 if (xa_is_err(ret)) { 538 538 ret = ERR_PTR(xa_err(ret)); 539 539 __xa_erase(&imr->implicit_children, idx);
+1 -2
drivers/mfd/88pm860x-core.c
··· 573 573 unsigned long flags = IRQF_TRIGGER_FALLING | IRQF_ONESHOT; 574 574 int data, mask, ret = -EINVAL; 575 575 int nr_irqs, irq_base = -1; 576 - struct device_node *node = i2c->dev.of_node; 577 576 578 577 mask = PM8607_B0_MISC1_INV_INT | PM8607_B0_MISC1_INT_CLEAR 579 578 | PM8607_B0_MISC1_INT_MASK; ··· 623 624 ret = -EBUSY; 624 625 goto out; 625 626 } 626 - irq_domain_create_legacy(of_fwnode_handle(node), nr_irqs, chip->irq_base, 0, 627 + irq_domain_create_legacy(dev_fwnode(&i2c->dev), nr_irqs, chip->irq_base, 0, 627 628 &pm860x_irq_domain_ops, chip); 628 629 chip->core_irq = i2c->irq; 629 630 if (!chip->core_irq)
+3 -3
drivers/mfd/max8925-core.c
··· 656 656 { 657 657 unsigned long flags = IRQF_TRIGGER_FALLING | IRQF_ONESHOT; 658 658 int ret; 659 - struct device_node *node = chip->dev->of_node; 660 659 661 660 /* clear all interrupts */ 662 661 max8925_reg_read(chip->i2c, MAX8925_CHG_IRQ1); ··· 681 682 return -EBUSY; 682 683 } 683 684 684 - irq_domain_create_legacy(of_fwnode_handle(node), MAX8925_NR_IRQS, chip->irq_base, 0, 685 - &max8925_irq_domain_ops, chip); 685 + irq_domain_create_legacy(dev_fwnode(chip->dev), MAX8925_NR_IRQS, 686 + chip->irq_base, 0, &max8925_irq_domain_ops, 687 + chip); 686 688 687 689 /* request irq handler for pmic main irq*/ 688 690 chip->core_irq = irq;
+1 -2
drivers/mfd/twl4030-irq.c
··· 676 676 static struct irq_chip twl4030_irq_chip; 677 677 int status, i; 678 678 int irq_base, irq_end, nr_irqs; 679 - struct device_node *node = dev->of_node; 680 679 681 680 /* 682 681 * TWL core and pwr interrupts must be contiguous because ··· 690 691 return irq_base; 691 692 } 692 693 693 - irq_domain_create_legacy(of_fwnode_handle(node), nr_irqs, irq_base, 0, 694 + irq_domain_create_legacy(dev_fwnode(dev), nr_irqs, irq_base, 0, 694 695 &irq_domain_simple_ops, NULL); 695 696 696 697 irq_end = irq_base + TWL4030_CORE_NR_IRQS;
+6 -6
drivers/mmc/core/quirks.h
··· 44 44 0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd, 45 45 MMC_QUIRK_NO_UHS_DDR50_TUNING, EXT_CSD_REV_ANY), 46 46 47 + /* 48 + * Some SD cards reports discard support while they don't 49 + */ 50 + MMC_FIXUP(CID_NAME_ANY, CID_MANFID_SANDISK_SD, 0x5344, add_quirk_sd, 51 + MMC_QUIRK_BROKEN_SD_DISCARD), 52 + 47 53 END_FIXUP 48 54 }; 49 55 ··· 152 146 */ 153 147 MMC_FIXUP("M62704", CID_MANFID_KINGSTON, 0x0100, add_quirk_mmc, 154 148 MMC_QUIRK_TRIM_BROKEN), 155 - 156 - /* 157 - * Some SD cards reports discard support while they don't 158 - */ 159 - MMC_FIXUP(CID_NAME_ANY, CID_MANFID_SANDISK_SD, 0x5344, add_quirk_sd, 160 - MMC_QUIRK_BROKEN_SD_DISCARD), 161 149 162 150 END_FIXUP 163 151 };
+2 -2
drivers/mmc/core/sd_uhs2.c
··· 91 91 92 92 err = host->ops->uhs2_control(host, UHS2_PHY_INIT); 93 93 if (err) { 94 - pr_err("%s: failed to initial phy for UHS-II!\n", 95 - mmc_hostname(host)); 94 + pr_debug("%s: failed to initial phy for UHS-II!\n", 95 + mmc_hostname(host)); 96 96 } 97 97 98 98 return err;
+19 -2
drivers/mmc/host/mtk-sd.c
··· 846 846 static void msdc_prepare_data(struct msdc_host *host, struct mmc_data *data) 847 847 { 848 848 if (!(data->host_cookie & MSDC_PREPARE_FLAG)) { 849 - data->host_cookie |= MSDC_PREPARE_FLAG; 850 849 data->sg_count = dma_map_sg(host->dev, data->sg, data->sg_len, 851 850 mmc_get_dma_dir(data)); 851 + if (data->sg_count) 852 + data->host_cookie |= MSDC_PREPARE_FLAG; 852 853 } 854 + } 855 + 856 + static bool msdc_data_prepared(struct mmc_data *data) 857 + { 858 + return data->host_cookie & MSDC_PREPARE_FLAG; 853 859 } 854 860 855 861 static void msdc_unprepare_data(struct msdc_host *host, struct mmc_data *data) ··· 1489 1483 WARN_ON(!host->hsq_en && host->mrq); 1490 1484 host->mrq = mrq; 1491 1485 1492 - if (mrq->data) 1486 + if (mrq->data) { 1493 1487 msdc_prepare_data(host, mrq->data); 1488 + if (!msdc_data_prepared(mrq->data)) { 1489 + host->mrq = NULL; 1490 + /* 1491 + * Failed to prepare DMA area, fail fast before 1492 + * starting any commands. 1493 + */ 1494 + mrq->cmd->error = -ENOSPC; 1495 + mmc_request_done(mmc_from_priv(host), mrq); 1496 + return; 1497 + } 1498 + } 1494 1499 1495 1500 /* if SBC is required, we have HW option and SW option. 1496 1501 * if HW option is enabled, and SBC does not have "special" flags,
+2 -1
drivers/mmc/host/sdhci-of-k1.c
··· 276 276 277 277 host->mmc->caps |= MMC_CAP_NEED_RSP_BUSY; 278 278 279 - if (spacemit_sdhci_get_clocks(dev, pltfm_host)) 279 + ret = spacemit_sdhci_get_clocks(dev, pltfm_host); 280 + if (ret) 280 281 goto err_pltfm; 281 282 282 283 ret = sdhci_add_host(host);
+10 -10
drivers/mmc/host/sdhci-uhs2.c
··· 99 99 /* hw clears the bit when it's done */ 100 100 if (read_poll_timeout_atomic(sdhci_readw, val, !(val & mask), 10, 101 101 UHS2_RESET_TIMEOUT_100MS, true, host, SDHCI_UHS2_SW_RESET)) { 102 - pr_warn("%s: %s: Reset 0x%x never completed. %s: clean reset bit.\n", __func__, 103 - mmc_hostname(host->mmc), (int)mask, mmc_hostname(host->mmc)); 102 + pr_debug("%s: %s: Reset 0x%x never completed. %s: clean reset bit.\n", __func__, 103 + mmc_hostname(host->mmc), (int)mask, mmc_hostname(host->mmc)); 104 104 sdhci_writeb(host, 0, SDHCI_UHS2_SW_RESET); 105 105 return; 106 106 } ··· 335 335 if (read_poll_timeout(sdhci_readl, val, (val & SDHCI_UHS2_IF_DETECT), 336 336 100, UHS2_INTERFACE_DETECT_TIMEOUT_100MS, true, 337 337 host, SDHCI_PRESENT_STATE)) { 338 - pr_warn("%s: not detect UHS2 interface in 100ms.\n", mmc_hostname(host->mmc)); 339 - sdhci_dumpregs(host); 338 + pr_debug("%s: not detect UHS2 interface in 100ms.\n", mmc_hostname(host->mmc)); 339 + sdhci_dbg_dumpregs(host, "UHS2 interface detect timeout in 100ms"); 340 340 return -EIO; 341 341 } ··· 345 345 346 346 if (read_poll_timeout(sdhci_readl, val, (val & SDHCI_UHS2_LANE_SYNC), 347 347 100, UHS2_LANE_SYNC_TIMEOUT_150MS, true, host, SDHCI_PRESENT_STATE)) { 348 - pr_warn("%s: UHS2 Lane sync fail in 150ms.\n", mmc_hostname(host->mmc)); 349 - sdhci_dumpregs(host); 348 + pr_debug("%s: UHS2 Lane sync fail in 150ms.\n", mmc_hostname(host->mmc)); 349 + sdhci_dbg_dumpregs(host, "UHS2 Lane sync fail in 150ms"); 350 350 return -EIO; 351 351 } ··· 417 417 host->ops->uhs2_pre_detect_init(host); 418 418 419 419 if (sdhci_uhs2_interface_detect(host)) { 420 - pr_warn("%s: cannot detect UHS2 interface.\n", mmc_hostname(host->mmc)); 420 + pr_debug("%s: cannot detect UHS2 interface.\n", mmc_hostname(host->mmc)); 421 421 return -EIO; 422 422 } 423 423 424 424 if (sdhci_uhs2_init(host)) { 425 - pr_warn("%s: UHS2 init fail.\n", mmc_hostname(host->mmc)); 425 + pr_debug("%s: UHS2 init fail.\n", mmc_hostname(host->mmc)); 426 426 return -EIO; 427 427 } 428 428 ··· 504 504 if (read_poll_timeout(sdhci_readl, val, (val & SDHCI_UHS2_IN_DORMANT_STATE), 505 505 100, UHS2_CHECK_DORMANT_TIMEOUT_100MS, true, host, 506 506 SDHCI_PRESENT_STATE)) { 507 - pr_warn("%s: UHS2 IN_DORMANT fail in 100ms.\n", mmc_hostname(host->mmc)); 508 - sdhci_dumpregs(host); 507 + pr_debug("%s: UHS2 IN_DORMANT fail in 100ms.\n", mmc_hostname(host->mmc)); 508 + sdhci_dbg_dumpregs(host, "UHS2 IN_DORMANT fail in 100ms"); 509 509 return -EIO; 510 510 } 511 511 return 0;
+2 -7
drivers/mmc/host/sdhci.c
··· 2065 2065 2066 2066 host->mmc->actual_clock = 0; 2067 2067 2068 - clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL); 2069 - if (clk & SDHCI_CLOCK_CARD_EN) 2070 - sdhci_writew(host, clk & ~SDHCI_CLOCK_CARD_EN, 2071 - SDHCI_CLOCK_CONTROL); 2068 + sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL); 2072 2069 2073 - if (clock == 0) { 2074 - sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL); 2070 + if (clock == 0) 2075 2071 return; 2076 - } 2077 2072 2078 2073 clk = sdhci_calc_clk(host, clock, &host->mmc->actual_clock); 2079 2074 sdhci_enable_clk(host, clk);
+16
drivers/mmc/host/sdhci.h
··· 900 900 void sdhci_set_data_timeout_irq(struct sdhci_host *host, bool enable); 901 901 void __sdhci_set_timeout(struct sdhci_host *host, struct mmc_command *cmd); 902 902 903 + #if defined(CONFIG_DYNAMIC_DEBUG) || \ 904 + (defined(CONFIG_DYNAMIC_DEBUG_CORE) && defined(DYNAMIC_DEBUG_MODULE)) 905 + #define SDHCI_DBG_ANYWAY 0 906 + #elif defined(DEBUG) 907 + #define SDHCI_DBG_ANYWAY 1 908 + #else 909 + #define SDHCI_DBG_ANYWAY 0 910 + #endif 911 + 912 + #define sdhci_dbg_dumpregs(host, fmt) \ 913 + do { \ 914 + DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, fmt); \ 915 + if (DYNAMIC_DEBUG_BRANCH(descriptor) || SDHCI_DBG_ANYWAY) \ 916 + sdhci_dumpregs(host); \ 917 + } while (0) 918 + 903 919 #endif /* __SDHCI_HW_H */
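The sdhci.h hunk above introduces sdhci_dbg_dumpregs(), which runs the expensive register dump only when the matching dynamic-debug site is enabled (or the driver was built with DEBUG). A minimal user-space sketch of that gating pattern, where a plain flag and counter stand in for the kernel's DYNAMIC_DEBUG_BRANCH() machinery (they are illustrative stand-ins, not real kernel APIs):

```c
#include <assert.h>

/* Stand-in for DYNAMIC_DEBUG_BRANCH(descriptor): in the kernel this is
 * true only when the corresponding debug site is enabled at runtime. */
static int dynamic_debug_enabled;
static int dump_count;

/* Stand-in for the DEBUG-build fallback when dynamic debug is absent. */
#define SDHCI_DBG_ANYWAY 0

static void sdhci_dumpregs(void)
{
	dump_count++;	/* the expensive register dump is elided here */
}

/* Only pay for the dump when someone asked for debug output. */
#define sdhci_dbg_dumpregs()					\
	do {							\
		if (dynamic_debug_enabled || SDHCI_DBG_ANYWAY)	\
			sdhci_dumpregs();			\
	} while (0)
```

This mirrors why the pr_warn()/sdhci_dumpregs() pairs in the UHS2 timeout paths were demoted to pr_debug()/sdhci_dbg_dumpregs(): both the message and the dump now cost nothing unless debugging is enabled.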
+2
drivers/net/ethernet/amd/xgbe/xgbe-common.h
··· 1277 1277 #define MDIO_VEND2_CTRL1_SS13 BIT(13) 1278 1278 #endif 1279 1279 1280 + #define XGBE_VEND2_MAC_AUTO_SW BIT(9) 1281 + 1280 1282 /* MDIO mask values */ 1281 1283 #define XGBE_AN_CL73_INT_CMPLT BIT(0) 1282 1284 #define XGBE_AN_CL73_INC_LINK BIT(1)
+13
drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
··· 266 266 reg |= MDIO_VEND2_CTRL1_AN_RESTART; 267 267 268 268 XMDIO_WRITE(pdata, MDIO_MMD_VEND2, MDIO_CTRL1, reg); 269 + 270 + reg = XMDIO_READ(pdata, MDIO_MMD_VEND2, MDIO_PCS_DIG_CTRL); 271 + reg |= XGBE_VEND2_MAC_AUTO_SW; 272 + XMDIO_WRITE(pdata, MDIO_MMD_VEND2, MDIO_PCS_DIG_CTRL, reg); 269 273 } 270 274 271 275 static void xgbe_an37_restart(struct xgbe_prv_data *pdata) ··· 898 894 899 895 netif_dbg(pdata, link, pdata->netdev, "CL37 AN (%s) initialized\n", 900 896 (pdata->an_mode == XGBE_AN_MODE_CL37) ? "BaseX" : "SGMII"); 897 + 898 + reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_CTRL1); 899 + reg &= ~MDIO_AN_CTRL1_ENABLE; 900 + XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_CTRL1, reg); 901 + 901 902 } 902 903 903 904 static void xgbe_an73_init(struct xgbe_prv_data *pdata) ··· 1304 1295 1305 1296 pdata->phy.link = pdata->phy_if.phy_impl.link_status(pdata, 1306 1297 &an_restart); 1298 + /* bail out if the link status register read fails */ 1299 + if (pdata->phy.link < 0) 1300 + return; 1301 + 1307 1302 if (an_restart) { 1308 1303 xgbe_phy_config_aneg(pdata); 1309 1304 goto adjust_link;
+15 -9
drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
··· 2746 2746 static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart) 2747 2747 { 2748 2748 struct xgbe_phy_data *phy_data = pdata->phy_data; 2749 - unsigned int reg; 2750 - int ret; 2749 + int reg, ret; 2751 2750 2752 2751 *an_restart = 0; 2753 2752 ··· 2780 2781 return 0; 2781 2782 } 2782 2783 2783 - /* Link status is latched low, so read once to clear 2784 - * and then read again to get current state 2784 + reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); 2785 + if (reg < 0) 2786 + return reg; 2787 + 2788 + /* Link status is latched low so that momentary link drops 2789 + * can be detected. If link was already down read again 2790 + * to get the latest state. 2785 2791 */ 2786 - reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); 2787 - reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); 2792 + 2793 + if (!pdata->phy.link && !(reg & MDIO_STAT1_LSTATUS)) { 2794 + reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); 2795 + if (reg < 0) 2796 + return reg; 2797 + } 2788 2798 2789 2799 if (pdata->en_rx_adap) { 2790 2800 /* if the link is available and adaptation is done, ··· 2812 2804 xgbe_phy_set_mode(pdata, phy_data->cur_mode); 2813 2805 } 2814 2806 2815 - /* check again for the link and adaptation status */ 2816 - reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); 2817 - if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done) 2807 + if (pdata->rx_adapt_done) 2818 2808 return 1; 2819 2809 } else if (reg & MDIO_STAT1_LSTATUS) 2820 2810 return 1;
+2 -2
drivers/net/ethernet/amd/xgbe/xgbe.h
··· 185 185 #define XGBE_LINK_TIMEOUT 5 186 186 #define XGBE_KR_TRAINING_WAIT_ITER 50 187 187 188 - #define XGBE_SGMII_AN_LINK_STATUS BIT(1) 188 + #define XGBE_SGMII_AN_LINK_DUPLEX BIT(1) 189 189 #define XGBE_SGMII_AN_LINK_SPEED (BIT(2) | BIT(3)) 190 190 #define XGBE_SGMII_AN_LINK_SPEED_10 0x00 191 191 #define XGBE_SGMII_AN_LINK_SPEED_100 0x04 192 192 #define XGBE_SGMII_AN_LINK_SPEED_1000 0x08 193 - #define XGBE_SGMII_AN_LINK_DUPLEX BIT(4) 193 + #define XGBE_SGMII_AN_LINK_STATUS BIT(4) 194 194 195 195 /* ECC correctable error notification window (seconds) */ 196 196 #define XGBE_ECC_LIMIT 60
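The xgbe.h hunk swaps two single-bit SGMII AN fields that had been defined backwards: duplex is bit 1 and link status is bit 4, with speed in bits 2-3. A small sketch of decoding a status word with the corrected layout (the helper names are illustrative, not driver functions):

```c
#include <assert.h>
#include <stdint.h>

/* Corrected SGMII auto-negotiation status layout from the hunk above. */
#define XGBE_SGMII_AN_LINK_DUPLEX	(1u << 1)
#define XGBE_SGMII_AN_LINK_SPEED	((1u << 2) | (1u << 3))
#define XGBE_SGMII_AN_LINK_SPEED_1000	0x08
#define XGBE_SGMII_AN_LINK_STATUS	(1u << 4)

static int sgmii_link_up(uint32_t status)
{
	return !!(status & XGBE_SGMII_AN_LINK_STATUS);
}

static int sgmii_speed_is_1000(uint32_t status)
{
	return (status & XGBE_SGMII_AN_LINK_SPEED) ==
	       XGBE_SGMII_AN_LINK_SPEED_1000;
}
```

With the old (swapped) definitions, a full-duplex report would have been misread as link-up and vice versa; the speed field was unaffected.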
+57 -22
drivers/net/ethernet/atheros/atlx/atl1.c
··· 1861 1861 break; 1862 1862 } 1863 1863 1864 - buffer_info->alloced = 1; 1865 - buffer_info->skb = skb; 1866 - buffer_info->length = (u16) adapter->rx_buffer_len; 1867 1864 page = virt_to_page(skb->data); 1868 1865 offset = offset_in_page(skb->data); 1869 1866 buffer_info->dma = dma_map_page(&pdev->dev, page, offset, 1870 1867 adapter->rx_buffer_len, 1871 1868 DMA_FROM_DEVICE); 1869 + if (dma_mapping_error(&pdev->dev, buffer_info->dma)) { 1870 + kfree_skb(skb); 1871 + adapter->soft_stats.rx_dropped++; 1872 + break; 1873 + } 1874 + 1875 + buffer_info->alloced = 1; 1876 + buffer_info->skb = skb; 1877 + buffer_info->length = (u16)adapter->rx_buffer_len; 1878 + 1872 1879 rfd_desc->buffer_addr = cpu_to_le64(buffer_info->dma); 1873 1880 rfd_desc->buf_len = cpu_to_le16(adapter->rx_buffer_len); 1874 1881 rfd_desc->coalese = 0; ··· 2190 2183 return 0; 2191 2184 } 2192 2185 2193 - static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb, 2194 - struct tx_packet_desc *ptpd) 2186 + static bool atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb, 2187 + struct tx_packet_desc *ptpd) 2195 2188 { 2196 2189 struct atl1_tpd_ring *tpd_ring = &adapter->tpd_ring; 2197 2190 struct atl1_buffer *buffer_info; ··· 2201 2194 unsigned int nr_frags; 2202 2195 unsigned int f; 2203 2196 int retval; 2197 + u16 first_mapped; 2204 2198 u16 next_to_use; 2205 2199 u16 data_len; 2206 2200 u8 hdr_len; ··· 2209 2201 buf_len -= skb->data_len; 2210 2202 nr_frags = skb_shinfo(skb)->nr_frags; 2211 2203 next_to_use = atomic_read(&tpd_ring->next_to_use); 2204 + first_mapped = next_to_use; 2212 2205 buffer_info = &tpd_ring->buffer_info[next_to_use]; 2213 2206 BUG_ON(buffer_info->skb); 2214 2207 /* put skb in last TPD */ ··· 2225 2216 buffer_info->dma = dma_map_page(&adapter->pdev->dev, page, 2226 2217 offset, hdr_len, 2227 2218 DMA_TO_DEVICE); 2219 + if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma)) 2220 + goto dma_err; 2228 2221 2229 2222 if (++next_to_use == 
tpd_ring->count) 2230 2223 next_to_use = 0; ··· 2253 2242 page, offset, 2254 2243 buffer_info->length, 2255 2244 DMA_TO_DEVICE); 2245 + if (dma_mapping_error(&adapter->pdev->dev, 2246 + buffer_info->dma)) 2247 + goto dma_err; 2256 2248 if (++next_to_use == tpd_ring->count) 2257 2249 next_to_use = 0; 2258 2250 } ··· 2268 2254 buffer_info->dma = dma_map_page(&adapter->pdev->dev, page, 2269 2255 offset, buf_len, 2270 2256 DMA_TO_DEVICE); 2257 + if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma)) 2258 + goto dma_err; 2271 2259 if (++next_to_use == tpd_ring->count) 2272 2260 next_to_use = 0; 2273 2261 } ··· 2293 2277 buffer_info->dma = skb_frag_dma_map(&adapter->pdev->dev, 2294 2278 frag, i * ATL1_MAX_TX_BUF_LEN, 2295 2279 buffer_info->length, DMA_TO_DEVICE); 2280 + if (dma_mapping_error(&adapter->pdev->dev, 2281 + buffer_info->dma)) 2282 + goto dma_err; 2296 2283 2297 2284 if (++next_to_use == tpd_ring->count) 2298 2285 next_to_use = 0; ··· 2304 2285 2305 2286 /* last tpd's buffer-info */ 2306 2287 buffer_info->skb = skb; 2288 + 2289 + return true; 2290 + 2291 + dma_err: 2292 + while (first_mapped != next_to_use) { 2293 + buffer_info = &tpd_ring->buffer_info[first_mapped]; 2294 + dma_unmap_page(&adapter->pdev->dev, 2295 + buffer_info->dma, 2296 + buffer_info->length, 2297 + DMA_TO_DEVICE); 2298 + buffer_info->dma = 0; 2299 + 2300 + if (++first_mapped == tpd_ring->count) 2301 + first_mapped = 0; 2302 + } 2303 + return false; 2307 2304 } 2308 2305 2309 2306 static void atl1_tx_queue(struct atl1_adapter *adapter, u16 count, ··· 2390 2355 2391 2356 len = skb_headlen(skb); 2392 2357 2393 - if (unlikely(skb->len <= 0)) { 2394 - dev_kfree_skb_any(skb); 2395 - return NETDEV_TX_OK; 2396 - } 2358 + if (unlikely(skb->len <= 0)) 2359 + goto drop_packet; 2397 2360 2398 2361 nr_frags = skb_shinfo(skb)->nr_frags; 2399 2362 for (f = 0; f < nr_frags; f++) { ··· 2404 2371 if (mss) { 2405 2372 if (skb->protocol == htons(ETH_P_IP)) { 2406 2373 proto_hdr_len = 
skb_tcp_all_headers(skb); 2407 - if (unlikely(proto_hdr_len > len)) { 2408 - dev_kfree_skb_any(skb); 2409 - return NETDEV_TX_OK; 2410 - } 2374 + if (unlikely(proto_hdr_len > len)) 2375 + goto drop_packet; 2376 + 2411 2377 /* need additional TPD ? */ 2412 2378 if (proto_hdr_len != len) 2413 2379 count += (len - proto_hdr_len + ··· 2438 2406 } 2439 2407 2440 2408 tso = atl1_tso(adapter, skb, ptpd); 2441 - if (tso < 0) { 2442 - dev_kfree_skb_any(skb); 2443 - return NETDEV_TX_OK; 2444 - } 2409 + if (tso < 0) 2410 + goto drop_packet; 2445 2411 2446 2412 if (!tso) { 2447 2413 ret_val = atl1_tx_csum(adapter, skb, ptpd); 2448 - if (ret_val < 0) { 2449 - dev_kfree_skb_any(skb); 2450 - return NETDEV_TX_OK; 2451 - } 2414 + if (ret_val < 0) 2415 + goto drop_packet; 2452 2416 } 2453 2417 2454 - atl1_tx_map(adapter, skb, ptpd); 2418 + if (!atl1_tx_map(adapter, skb, ptpd)) 2419 + goto drop_packet; 2420 + 2455 2421 atl1_tx_queue(adapter, count, ptpd); 2456 2422 atl1_update_mailbox(adapter); 2423 + return NETDEV_TX_OK; 2424 + 2425 + drop_packet: 2426 + adapter->soft_stats.tx_errors++; 2427 + dev_kfree_skb_any(skb); 2457 2428 return NETDEV_TX_OK; 2458 2429 } 2459 2430
+2 -2
drivers/net/ethernet/cisco/enic/enic_main.c
··· 1864 1864 if (enic_is_dynamic(enic) || enic_is_sriov_vf(enic)) 1865 1865 return -EOPNOTSUPP; 1866 1866 1867 - if (netdev->mtu > enic->port_mtu) 1867 + if (new_mtu > enic->port_mtu) 1868 1868 netdev_warn(netdev, 1869 1869 "interface MTU (%d) set higher than port MTU (%d)\n", 1870 - netdev->mtu, enic->port_mtu); 1870 + new_mtu, enic->port_mtu); 1871 1871 1872 1872 return _enic_change_mtu(netdev, new_mtu); 1873 1873 }
+24 -2
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
··· 3939 3939 MEM_TYPE_PAGE_ORDER0, NULL); 3940 3940 if (err) { 3941 3941 dev_err(dev, "xdp_rxq_info_reg_mem_model failed\n"); 3942 + xdp_rxq_info_unreg(&fq->channel->xdp_rxq); 3942 3943 return err; 3943 3944 } 3944 3945 ··· 4433 4432 return -EINVAL; 4434 4433 } 4435 4434 if (err) 4436 - return err; 4435 + goto out; 4437 4436 } 4438 4437 4439 4438 err = dpni_get_qdid(priv->mc_io, 0, priv->mc_token, 4440 4439 DPNI_QUEUE_TX, &priv->tx_qdid); 4441 4440 if (err) { 4442 4441 dev_err(dev, "dpni_get_qdid() failed\n"); 4443 - return err; 4442 + goto out; 4444 4443 } 4445 4444 4446 4445 return 0; 4446 + 4447 + out: 4448 + while (i--) { 4449 + if (priv->fq[i].type == DPAA2_RX_FQ && 4450 + xdp_rxq_info_is_reg(&priv->fq[i].channel->xdp_rxq)) 4451 + xdp_rxq_info_unreg(&priv->fq[i].channel->xdp_rxq); 4452 + } 4453 + return err; 4447 4454 } 4448 4455 4449 4456 /* Allocate rings for storing incoming frame descriptors */ ··· 4834 4825 } 4835 4826 } 4836 4827 4828 + static void dpaa2_eth_free_rx_xdp_rxq(struct dpaa2_eth_priv *priv) 4829 + { 4830 + int i; 4831 + 4832 + for (i = 0; i < priv->num_fqs; i++) { 4833 + if (priv->fq[i].type == DPAA2_RX_FQ && 4834 + xdp_rxq_info_is_reg(&priv->fq[i].channel->xdp_rxq)) 4835 + xdp_rxq_info_unreg(&priv->fq[i].channel->xdp_rxq); 4836 + } 4837 + } 4838 + 4837 4839 static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev) 4838 4840 { 4839 4841 struct device *dev; ··· 5048 5028 free_percpu(priv->percpu_stats); 5049 5029 err_alloc_percpu_stats: 5050 5030 dpaa2_eth_del_ch_napi(priv); 5031 + dpaa2_eth_free_rx_xdp_rxq(priv); 5051 5032 err_bind: 5052 5033 dpaa2_eth_free_dpbps(priv); 5053 5034 err_dpbp_setup: ··· 5101 5080 free_percpu(priv->percpu_extras); 5102 5081 5103 5082 dpaa2_eth_del_ch_napi(priv); 5083 + dpaa2_eth_free_rx_xdp_rxq(priv); 5104 5084 dpaa2_eth_free_dpbps(priv); 5105 5085 dpaa2_eth_free_dpio(priv); 5106 5086 dpaa2_eth_free_dpni(priv);
+11 -12
drivers/net/ethernet/intel/idpf/idpf_controlq.c
··· 96 96 */ 97 97 static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq) 98 98 { 99 - mutex_lock(&cq->cq_lock); 99 + spin_lock(&cq->cq_lock); 100 100 101 101 /* free ring buffers and the ring itself */ 102 102 idpf_ctlq_dealloc_ring_res(hw, cq); ··· 104 104 /* Set ring_size to 0 to indicate uninitialized queue */ 105 105 cq->ring_size = 0; 106 106 107 - mutex_unlock(&cq->cq_lock); 108 - mutex_destroy(&cq->cq_lock); 107 + spin_unlock(&cq->cq_lock); 109 108 } 110 109 111 110 /** ··· 172 173 173 174 idpf_ctlq_init_regs(hw, cq, is_rxq); 174 175 175 - mutex_init(&cq->cq_lock); 176 + spin_lock_init(&cq->cq_lock); 176 177 177 178 list_add(&cq->cq_list, &hw->cq_list_head); 178 179 ··· 271 272 int err = 0; 272 273 int i; 273 274 274 - mutex_lock(&cq->cq_lock); 275 + spin_lock(&cq->cq_lock); 275 276 276 277 /* Ensure there are enough descriptors to send all messages */ 277 278 num_desc_avail = IDPF_CTLQ_DESC_UNUSED(cq); ··· 331 332 wr32(hw, cq->reg.tail, cq->next_to_use); 332 333 333 334 err_unlock: 334 - mutex_unlock(&cq->cq_lock); 335 + spin_unlock(&cq->cq_lock); 335 336 336 337 return err; 337 338 } ··· 363 364 if (*clean_count > cq->ring_size) 364 365 return -EBADR; 365 366 366 - mutex_lock(&cq->cq_lock); 367 + spin_lock(&cq->cq_lock); 367 368 368 369 ntc = cq->next_to_clean; 369 370 ··· 396 397 397 398 cq->next_to_clean = ntc; 398 399 399 - mutex_unlock(&cq->cq_lock); 400 + spin_unlock(&cq->cq_lock); 400 401 401 402 /* Return number of descriptors actually cleaned */ 402 403 *clean_count = i; ··· 434 435 if (*buff_count > 0) 435 436 buffs_avail = true; 436 437 437 - mutex_lock(&cq->cq_lock); 438 + spin_lock(&cq->cq_lock); 438 439 439 440 if (tbp >= cq->ring_size) 440 441 tbp = 0; ··· 523 524 wr32(hw, cq->reg.tail, cq->next_to_post); 524 525 } 525 526 526 - mutex_unlock(&cq->cq_lock); 527 + spin_unlock(&cq->cq_lock); 527 528 528 529 /* return the number of buffers that were not posted */ 529 530 *buff_count = *buff_count - i; ··· 551 552 u16 i; 
552 553 553 554 /* take the lock before we start messing with the ring */ 554 - mutex_lock(&cq->cq_lock); 555 + spin_lock(&cq->cq_lock); 555 556 556 557 ntc = cq->next_to_clean; 557 558 ··· 613 614 614 615 cq->next_to_clean = ntc; 615 616 616 - mutex_unlock(&cq->cq_lock); 617 + spin_unlock(&cq->cq_lock); 617 618 618 619 *num_q_msg = i; 619 620 if (*num_q_msg == 0)
+1 -1
drivers/net/ethernet/intel/idpf/idpf_controlq_api.h
··· 99 99 100 100 enum idpf_ctlq_type cq_type; 101 101 int q_id; 102 - struct mutex cq_lock; /* control queue lock */ 102 + spinlock_t cq_lock; /* control queue lock */ 103 103 /* used for interrupt processing */ 104 104 u16 next_to_use; 105 105 u16 next_to_clean;
+2 -2
drivers/net/ethernet/intel/idpf/idpf_ethtool.c
··· 47 47 struct idpf_vport_user_config_data *user_config; 48 48 49 49 if (!idpf_is_cap_ena_all(np->adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS)) 50 - return -EOPNOTSUPP; 50 + return 0; 51 51 52 52 user_config = &np->adapter->vport_config[np->vport_idx]->user_config; 53 53 ··· 66 66 struct idpf_vport_user_config_data *user_config; 67 67 68 68 if (!idpf_is_cap_ena_all(np->adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS)) 69 - return -EOPNOTSUPP; 69 + return 0; 70 70 71 71 user_config = &np->adapter->vport_config[np->vport_idx]->user_config; 72 72
+8 -4
drivers/net/ethernet/intel/idpf/idpf_lib.c
··· 2314 2314 struct idpf_adapter *adapter = hw->back; 2315 2315 size_t sz = ALIGN(size, 4096); 2316 2316 2317 - mem->va = dma_alloc_coherent(&adapter->pdev->dev, sz, 2318 - &mem->pa, GFP_KERNEL); 2317 + /* The control queue resources are freed under a spinlock; contiguous 2318 + * pages avoid IOMMU remapping and the use of vmap (and vunmap) in the 2319 + * dma_free_*() path. 2320 + */ 2321 + mem->va = dma_alloc_attrs(&adapter->pdev->dev, sz, &mem->pa, 2322 + GFP_KERNEL, DMA_ATTR_FORCE_CONTIGUOUS); 2319 2323 mem->size = sz; 2320 2324 2321 2325 return mem->va; ··· 2334 2330 { 2335 2331 struct idpf_adapter *adapter = hw->back; 2336 2332 2337 - dma_free_coherent(&adapter->pdev->dev, mem->size, 2338 - mem->va, mem->pa); 2333 + dma_free_attrs(&adapter->pdev->dev, mem->size, 2334 + mem->va, mem->pa, DMA_ATTR_FORCE_CONTIGUOUS); 2339 2335 mem->size = 0; 2340 2336 mem->va = NULL; 2341 2337 mem->pa = 0;
+10
drivers/net/ethernet/intel/igc/igc_main.c
··· 7144 7144 adapter->port_num = hw->bus.func; 7145 7145 adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE); 7146 7146 7147 + /* Disable ASPM L1.2 on I226 devices to avoid packet loss */ 7148 + if (igc_is_device_id_i226(hw)) 7149 + pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2); 7150 + 7147 7151 err = pci_save_state(pdev); 7148 7152 if (err) 7149 7153 goto err_ioremap; ··· 7533 7529 pci_enable_wake(pdev, PCI_D3hot, 0); 7534 7530 pci_enable_wake(pdev, PCI_D3cold, 0); 7535 7531 7532 + if (igc_is_device_id_i226(hw)) 7533 + pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2); 7534 + 7536 7535 if (igc_init_interrupt_scheme(adapter, true)) { 7537 7536 netdev_err(netdev, "Unable to allocate memory for queues\n"); 7538 7537 return -ENOMEM; ··· 7660 7653 7661 7654 pci_enable_wake(pdev, PCI_D3hot, 0); 7662 7655 pci_enable_wake(pdev, PCI_D3cold, 0); 7656 + 7657 + if (igc_is_device_id_i226(hw)) 7658 + pci_disable_link_state_locked(pdev, PCIE_LINK_STATE_L1_2); 7663 7659 7664 7660 /* In case of PCI error, adapter loses its HW address 7665 7661 * so we should re-assign it here.
+30 -1
drivers/net/ethernet/sun/niu.c
··· 3336 3336 3337 3337 addr = np->ops->map_page(np->device, page, 0, 3338 3338 PAGE_SIZE, DMA_FROM_DEVICE); 3339 - if (!addr) { 3339 + if (np->ops->mapping_error(np->device, addr)) { 3340 3340 __free_page(page); 3341 3341 return -ENOMEM; 3342 3342 } ··· 6676 6676 len = skb_headlen(skb); 6677 6677 mapping = np->ops->map_single(np->device, skb->data, 6678 6678 len, DMA_TO_DEVICE); 6679 + if (np->ops->mapping_error(np->device, mapping)) 6680 + goto out_drop; 6679 6681 6680 6682 prod = rp->prod; 6681 6683 ··· 6719 6717 mapping = np->ops->map_page(np->device, skb_frag_page(frag), 6720 6718 skb_frag_off(frag), len, 6721 6719 DMA_TO_DEVICE); 6720 + if (np->ops->mapping_error(np->device, mapping)) 6721 + goto out_unmap; 6722 6722 6723 6723 rp->tx_buffs[prod].skb = NULL; 6724 6724 rp->tx_buffs[prod].mapping = mapping; ··· 6744 6740 6745 6741 out: 6746 6742 return NETDEV_TX_OK; 6743 + 6744 + out_unmap: 6745 + while (i--) { 6746 + const skb_frag_t *frag; 6747 + 6748 + prod = PREVIOUS_TX(rp, prod); 6749 + frag = &skb_shinfo(skb)->frags[i]; 6750 + np->ops->unmap_page(np->device, rp->tx_buffs[prod].mapping, 6751 + skb_frag_size(frag), DMA_TO_DEVICE); 6752 + } 6753 + 6754 + np->ops->unmap_single(np->device, rp->tx_buffs[rp->prod].mapping, 6755 + skb_headlen(skb), DMA_TO_DEVICE); 6747 6756 6748 6757 out_drop: 6749 6758 rp->tx_errors++; ··· 9662 9645 dma_unmap_single(dev, dma_address, size, direction); 9663 9646 } 9664 9647 9648 + static int niu_pci_mapping_error(struct device *dev, u64 addr) 9649 + { 9650 + return dma_mapping_error(dev, addr); 9651 + } 9652 + 9665 9653 static const struct niu_ops niu_pci_ops = { 9666 9654 .alloc_coherent = niu_pci_alloc_coherent, 9667 9655 .free_coherent = niu_pci_free_coherent, ··· 9674 9652 .unmap_page = niu_pci_unmap_page, 9675 9653 .map_single = niu_pci_map_single, 9676 9654 .unmap_single = niu_pci_unmap_single, 9655 + .mapping_error = niu_pci_mapping_error, 9677 9656 }; 9678 9657 9679 9658 static void niu_driver_version(void) ··· 10043 10020 
/* Nothing to do. */ 10044 10021 } 10045 10022 10023 + static int niu_phys_mapping_error(struct device *dev, u64 dma_address) 10024 + { 10025 + return false; 10026 + } 10027 + 10046 10028 static const struct niu_ops niu_phys_ops = { 10047 10029 .alloc_coherent = niu_phys_alloc_coherent, 10048 10030 .free_coherent = niu_phys_free_coherent, ··· 10055 10027 .unmap_page = niu_phys_unmap_page, 10056 10028 .map_single = niu_phys_map_single, 10057 10029 .unmap_single = niu_phys_unmap_single, 10030 + .mapping_error = niu_phys_mapping_error, 10058 10031 }; 10059 10032 10060 10033 static int niu_of_probe(struct platform_device *op)
+4
drivers/net/ethernet/sun/niu.h
··· 2879 2879 #define NEXT_TX(tp, index) \ 2880 2880 (((index) + 1) < (tp)->pending ? ((index) + 1) : 0) 2881 2881 2882 + #define PREVIOUS_TX(tp, index) \ 2883 + (((index) - 1) >= 0 ? ((index) - 1) : (((tp)->pending) - 1)) 2884 + 2882 2885 static inline u32 niu_tx_avail(struct tx_ring_info *tp) 2883 2886 { 2884 2887 return (tp->pending - ··· 3143 3140 enum dma_data_direction direction); 3144 3141 void (*unmap_single)(struct device *dev, u64 dma_address, 3145 3142 size_t size, enum dma_data_direction direction); 3143 + int (*mapping_error)(struct device *dev, u64 dma_address); 3146 3144 }; 3147 3145 3148 3146 struct niu_link_config {
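The niu.h hunk adds PREVIOUS_TX as the counterpart of NEXT_TX so the new out_unmap path in niu_start_xmit() can walk backwards over already-mapped fragments. The wraparound logic, rewritten as plain functions for clarity (a sketch, not the driver macros themselves):

```c
#include <assert.h>

/* Ring index helpers mirroring NEXT_TX/PREVIOUS_TX: indices stay in
 * [0, pending), wrapping at both ends. */
struct tx_ring {
	int pending;	/* number of descriptors in the ring */
};

static int next_tx(const struct tx_ring *tp, int index)
{
	return (index + 1) < tp->pending ? index + 1 : 0;
}

/* Used by the DMA-mapping error path to step back and unmap each
 * fragment that was mapped before the failure. */
static int previous_tx(const struct tx_ring *tp, int index)
{
	return (index - 1) >= 0 ? index - 1 : tp->pending - 1;
}
```

The same step-backwards-and-unmap shape appears in the atl1 dma_err loop earlier in this merge: on a mapping failure, everything mapped so far must be unmapped in reverse before dropping the skb, or the IOMMU leaks entries.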
+18 -9
drivers/net/ethernet/wangxun/libwx/wx_lib.c
··· 1705 1705 1706 1706 clear_bit(WX_FLAG_FDIR_HASH, wx->flags); 1707 1707 1708 + wx->ring_feature[RING_F_FDIR].indices = 1; 1708 1709 /* Use Flow Director in addition to RSS to ensure the best 1709 1710 * distribution of flows across cores, even when an FDIR flow 1710 1711 * isn't matched. ··· 1747 1746 */ 1748 1747 static int wx_acquire_msix_vectors(struct wx *wx) 1749 1748 { 1750 - struct irq_affinity affd = { .pre_vectors = 1 }; 1749 + struct irq_affinity affd = { .post_vectors = 1 }; 1751 1750 int nvecs, i; 1752 1751 1753 1752 /* We start by asking for one vector per queue pair */ ··· 1784 1783 return nvecs; 1785 1784 } 1786 1785 1787 - wx->msix_entry->entry = 0; 1788 - wx->msix_entry->vector = pci_irq_vector(wx->pdev, 0); 1789 1786 nvecs -= 1; 1790 1787 for (i = 0; i < nvecs; i++) { 1791 1788 wx->msix_q_entries[i].entry = i; 1792 - wx->msix_q_entries[i].vector = pci_irq_vector(wx->pdev, i + 1); 1789 + wx->msix_q_entries[i].vector = pci_irq_vector(wx->pdev, i); 1793 1790 } 1794 1791 1795 1792 wx->num_q_vectors = nvecs; 1793 + 1794 + wx->msix_entry->entry = nvecs; 1795 + wx->msix_entry->vector = pci_irq_vector(wx->pdev, nvecs); 1796 + 1797 + if (test_bit(WX_FLAG_IRQ_VECTOR_SHARED, wx->flags)) { 1798 + wx->msix_entry->entry = 0; 1799 + wx->msix_entry->vector = pci_irq_vector(wx->pdev, 0); 1800 + wx->msix_q_entries[0].entry = 0; 1801 + wx->msix_q_entries[0].vector = pci_irq_vector(wx->pdev, 1); 1802 + } 1796 1803 1797 1804 return 0; 1798 1805 } ··· 2300 2291 2301 2292 if (direction == -1) { 2302 2293 /* other causes */ 2294 + if (test_bit(WX_FLAG_IRQ_VECTOR_SHARED, wx->flags)) 2295 + msix_vector = 0; 2303 2296 msix_vector |= WX_PX_IVAR_ALLOC_VAL; 2304 2297 index = 0; 2305 2298 ivar = rd32(wx, WX_PX_MISC_IVAR); ··· 2310 2299 wr32(wx, WX_PX_MISC_IVAR, ivar); 2311 2300 } else { 2312 2301 /* tx or rx causes */ 2313 - if (!(wx->mac.type == wx_mac_em && wx->num_vfs == 7)) 2314 - msix_vector += 1; /* offset for queue vectors */ 2315 2302 msix_vector |= 
WX_PX_IVAR_ALLOC_VAL; 2316 2303 index = ((16 * (queue & 1)) + (8 * direction)); 2317 2304 ivar = rd32(wx, WX_PX_IVAR(queue >> 1)); ··· 2348 2339 2349 2340 itr_reg |= WX_PX_ITR_CNT_WDIS; 2350 2341 2351 - wr32(wx, WX_PX_ITR(v_idx + 1), itr_reg); 2342 + wr32(wx, WX_PX_ITR(v_idx), itr_reg); 2352 2343 } 2353 2344 2354 2345 /** ··· 2401 2392 wx_write_eitr(q_vector); 2402 2393 } 2403 2394 2404 - wx_set_ivar(wx, -1, 0, 0); 2395 + wx_set_ivar(wx, -1, 0, v_idx); 2405 2396 if (pdev->msix_enabled) 2406 - wr32(wx, WX_PX_ITR(0), 1950); 2397 + wr32(wx, WX_PX_ITR(v_idx), 1950); 2407 2398 } 2408 2399 EXPORT_SYMBOL(wx_configure_vectors); 2409 2400
+4
drivers/net/ethernet/wangxun/libwx/wx_sriov.c
··· 64 64 wr32m(wx, WX_PSR_VM_CTL, WX_PSR_VM_CTL_POOL_MASK, 0); 65 65 wx->ring_feature[RING_F_VMDQ].offset = 0; 66 66 67 + clear_bit(WX_FLAG_IRQ_VECTOR_SHARED, wx->flags); 67 68 clear_bit(WX_FLAG_SRIOV_ENABLED, wx->flags); 68 69 /* Disable VMDq flag so device will be set in NM mode */ 69 70 if (wx->ring_feature[RING_F_VMDQ].limit == 1) ··· 78 77 79 78 set_bit(WX_FLAG_SRIOV_ENABLED, wx->flags); 80 79 dev_info(&wx->pdev->dev, "SR-IOV enabled with %d VFs\n", num_vfs); 80 + 81 + if (num_vfs == 7 && wx->mac.type == wx_mac_em) 82 + set_bit(WX_FLAG_IRQ_VECTOR_SHARED, wx->flags); 81 83 82 84 /* Enable VMDq flag so device will be set in VM mode */ 83 85 set_bit(WX_FLAG_VMDQ_ENABLED, wx->flags);
+2 -1
drivers/net/ethernet/wangxun/libwx/wx_type.h
··· 1191 1191 WX_FLAG_VMDQ_ENABLED, 1192 1192 WX_FLAG_VLAN_PROMISC, 1193 1193 WX_FLAG_SRIOV_ENABLED, 1194 + WX_FLAG_IRQ_VECTOR_SHARED, 1194 1195 WX_FLAG_FDIR_CAPABLE, 1195 1196 WX_FLAG_FDIR_HASH, 1196 1197 WX_FLAG_FDIR_PERFECT, ··· 1344 1343 }; 1345 1344 1346 1345 #define WX_INTR_ALL (~0ULL) 1347 - #define WX_INTR_Q(i) BIT((i) + 1) 1346 + #define WX_INTR_Q(i) BIT((i)) 1348 1347 1349 1348 /* register operations */ 1350 1349 #define wr32(a, reg, value) writel((value), ((a)->hw_addr + (reg)))
+2 -2
drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
··· 161 161 if (queues) 162 162 wx_intr_enable(wx, NGBE_INTR_ALL); 163 163 else 164 - wx_intr_enable(wx, NGBE_INTR_MISC); 164 + wx_intr_enable(wx, NGBE_INTR_MISC(wx)); 165 165 } 166 166 167 167 /** ··· 286 286 * for queue. But when num_vfs == 7, vector[1] is assigned to vf6. 287 287 * Misc and queue should reuse interrupt vector[0]. 288 288 */ 289 - if (wx->num_vfs == 7) 289 + if (test_bit(WX_FLAG_IRQ_VECTOR_SHARED, wx->flags)) 290 290 err = request_irq(wx->msix_entry->vector, 291 291 ngbe_misc_and_queue, 0, netdev->name, wx); 292 292 else
+1 -1
drivers/net/ethernet/wangxun/ngbe/ngbe_type.h
··· 87 87 #define NGBE_PX_MISC_IC_TIMESYNC BIT(11) /* time sync */ 88 88 89 89 #define NGBE_INTR_ALL 0x1FF 90 - #define NGBE_INTR_MISC BIT(0) 90 + #define NGBE_INTR_MISC(A) BIT((A)->msix_entry->entry) 91 91 92 92 #define NGBE_PHY_CONFIG(reg_offset) (0x14000 + ((reg_offset) * 4)) 93 93 #define NGBE_CFG_LAN_SPEED 0x14440
+1
drivers/net/ethernet/wangxun/txgbe/txgbe_aml.c
··· 294 294 wx_fc_enable(wx, tx_pause, rx_pause); 295 295 296 296 txgbe_reconfig_mac(wx); 297 + txgbe_enable_sec_tx_path(wx); 297 298 298 299 txcfg = rd32(wx, TXGBE_AML_MAC_TX_CFG); 299 300 txcfg &= ~TXGBE_AML_MAC_TX_CFG_SPEED_MASK;
+4 -4
drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
··· 31 31 wr32(wx, WX_PX_MISC_IEN, misc_ien); 32 32 33 33 /* unmask interrupt */ 34 - wx_intr_enable(wx, TXGBE_INTR_MISC); 34 + wx_intr_enable(wx, TXGBE_INTR_MISC(wx)); 35 35 if (queues) 36 36 wx_intr_enable(wx, TXGBE_INTR_QALL(wx)); 37 37 } ··· 78 78 free_irq(wx->msix_q_entries[vector].vector, 79 79 wx->q_vector[vector]); 80 80 } 81 - wx_reset_interrupt_capability(wx); 82 81 return err; 83 82 } 84 83 ··· 131 132 txgbe->eicr = eicr; 132 133 if (eicr & TXGBE_PX_MISC_IC_VF_MBOX) { 133 134 wx_msg_task(txgbe->wx); 134 - wx_intr_enable(wx, TXGBE_INTR_MISC); 135 + wx_intr_enable(wx, TXGBE_INTR_MISC(wx)); 135 136 } 136 137 return IRQ_WAKE_THREAD; 137 138 } ··· 183 184 nhandled++; 184 185 } 185 186 186 - wx_intr_enable(wx, TXGBE_INTR_MISC); 187 + wx_intr_enable(wx, TXGBE_INTR_MISC(wx)); 187 188 return (nhandled > 0 ? IRQ_HANDLED : IRQ_NONE); 188 189 } 189 190 ··· 210 211 free_irq(txgbe->link_irq, txgbe); 211 212 free_irq(txgbe->misc.irq, txgbe); 212 213 txgbe_del_irq_domain(txgbe); 214 + txgbe->wx->misc_irq_domain = false; 213 215 } 214 216 215 217 int txgbe_setup_misc_irq(struct txgbe *txgbe)
+10 -12
drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
··· 458 458 459 459 wx_configure(wx); 460 460 461 - err = txgbe_request_queue_irqs(wx); 461 + err = txgbe_setup_misc_irq(wx->priv); 462 462 if (err) 463 463 goto err_free_resources; 464 + 465 + err = txgbe_request_queue_irqs(wx); 466 + if (err) 467 + goto err_free_misc_irq; 464 468 465 469 /* Notify the stack of the actual queue counts. */ 466 470 err = netif_set_real_num_tx_queues(netdev, wx->num_tx_queues); ··· 483 479 484 480 err_free_irq: 485 481 wx_free_irq(wx); 482 + err_free_misc_irq: 483 + txgbe_free_misc_irq(wx->priv); 484 + wx_reset_interrupt_capability(wx); 486 485 err_free_resources: 487 486 wx_free_resources(wx); 488 487 err_reset: ··· 526 519 wx_ptp_stop(wx); 527 520 txgbe_down(wx); 528 521 wx_free_irq(wx); 522 + txgbe_free_misc_irq(wx->priv); 529 523 wx_free_resources(wx); 530 524 txgbe_fdir_filter_exit(wx); 531 525 wx_control_hw(wx, false); ··· 572 564 int txgbe_setup_tc(struct net_device *dev, u8 tc) 573 565 { 574 566 struct wx *wx = netdev_priv(dev); 575 - struct txgbe *txgbe = wx->priv; 576 567 577 568 /* Hardware has to reinitialize queues and interrupts to 578 569 * match packet buffer alignment. 
Unfortunately, the ··· 582 575 else 583 576 txgbe_reset(wx); 584 577 585 - txgbe_free_misc_irq(txgbe); 586 578 wx_clear_interrupt_scheme(wx); 587 579 588 580 if (tc) ··· 590 584 netdev_reset_tc(dev); 591 585 592 586 wx_init_interrupt_scheme(wx); 593 - txgbe_setup_misc_irq(txgbe); 594 587 595 588 if (netif_running(dev)) 596 589 txgbe_open(dev); ··· 887 882 888 883 txgbe_init_fdir(txgbe); 889 884 890 - err = txgbe_setup_misc_irq(txgbe); 891 - if (err) 892 - goto err_release_hw; 893 - 894 885 err = txgbe_init_phy(txgbe); 895 886 if (err) 896 - goto err_free_misc_irq; 887 + goto err_release_hw; 897 888 898 889 err = register_netdev(netdev); 899 890 if (err) ··· 917 916 918 917 err_remove_phy: 919 918 txgbe_remove_phy(txgbe); 920 - err_free_misc_irq: 921 - txgbe_free_misc_irq(txgbe); 922 919 err_release_hw: 923 920 wx_clear_interrupt_scheme(wx); 924 921 wx_control_hw(wx, false); ··· 956 957 unregister_netdev(netdev); 957 958 958 959 txgbe_remove_phy(txgbe); 959 - txgbe_free_misc_irq(txgbe); 960 960 wx_free_isb_resources(wx); 961 961 962 962 pci_release_selected_regions(pdev,
+2 -2
drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
··· 302 302 #define TXGBE_DEFAULT_RX_WORK 128 303 303 #endif 304 304 305 - #define TXGBE_INTR_MISC BIT(0) 306 - #define TXGBE_INTR_QALL(A) GENMASK((A)->num_q_vectors, 1) 305 + #define TXGBE_INTR_MISC(A) BIT((A)->num_q_vectors) 306 + #define TXGBE_INTR_QALL(A) (TXGBE_INTR_MISC(A) - 1) 307 307 308 308 #define TXGBE_MAX_EITR GENMASK(11, 3) 309 309
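The txgbe_type.h hunk reflects the vector reshuffle in wx_lib.c: the misc interrupt now uses the last MSI-X entry, so queue vectors occupy bits 0..n-1 and the all-queues mask is just BIT(n) - 1. A sketch of that derivation (struct and helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

struct adapter {
	int num_q_vectors;	/* queue vectors use bits 0..n-1 */
};

/* Misc cause sits one bit above the last queue vector. */
static uint64_t intr_misc(const struct adapter *a)
{
	return 1ull << a->num_q_vectors;
}

/* All queue bits: everything below the misc bit. */
static uint64_t intr_qall(const struct adapter *a)
{
	return intr_misc(a) - 1;
}
```

Before the change, misc was BIT(0) and queues were offset by one (GENMASK(n, 1)); moving misc last lets WX_PX_ITR and IVAR programming index queue vectors directly without the "+1" offset removed elsewhere in this series.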
+70 -41
drivers/net/virtio_net.c
··· 778 778 return (unsigned long)mrg_ctx & ((1 << MRG_CTX_HEADER_SHIFT) - 1); 779 779 } 780 780 781 + static int check_mergeable_len(struct net_device *dev, void *mrg_ctx, 782 + unsigned int len) 783 + { 784 + unsigned int headroom, tailroom, room, truesize; 785 + 786 + truesize = mergeable_ctx_to_truesize(mrg_ctx); 787 + headroom = mergeable_ctx_to_headroom(mrg_ctx); 788 + tailroom = headroom ? sizeof(struct skb_shared_info) : 0; 789 + room = SKB_DATA_ALIGN(headroom + tailroom); 790 + 791 + if (len > truesize - room) { 792 + pr_debug("%s: rx error: len %u exceeds truesize %lu\n", 793 + dev->name, len, (unsigned long)(truesize - room)); 794 + DEV_STATS_INC(dev, rx_length_errors); 795 + return -1; 796 + } 797 + 798 + return 0; 799 + } 800 + 781 801 static struct sk_buff *virtnet_build_skb(void *buf, unsigned int buflen, 782 802 unsigned int headroom, 783 803 unsigned int len) ··· 1104 1084 * Since most packets only take 1 or 2 ring slots, stopping the queue 1105 1085 * early means 16 slots are typically wasted. 1106 1086 */ 1107 - if (sq->vq->num_free < 2+MAX_SKB_FRAGS) { 1087 + if (sq->vq->num_free < MAX_SKB_FRAGS + 2) { 1108 1088 struct netdev_queue *txq = netdev_get_tx_queue(dev, qnum); 1109 1089 1110 1090 netif_tx_stop_queue(txq); ··· 1136 1116 } else if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) { 1137 1117 /* More just got used, free them then recheck. 
*/ 1138 1118 free_old_xmit(sq, txq, false); 1139 - if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) { 1119 + if (sq->vq->num_free >= MAX_SKB_FRAGS + 2) { 1140 1120 netif_start_subqueue(dev, qnum); 1141 1121 u64_stats_update_begin(&sq->stats.syncp); 1142 1122 u64_stats_inc(&sq->stats.wake); ··· 1147 1127 } 1148 1128 } 1149 1129 1130 + /* Note that @len is the length of received data without virtio header */ 1150 1131 static struct xdp_buff *buf_to_xdp(struct virtnet_info *vi, 1151 - struct receive_queue *rq, void *buf, u32 len) 1132 + struct receive_queue *rq, void *buf, 1133 + u32 len, bool first_buf) 1152 1134 { 1153 1135 struct xdp_buff *xdp; 1154 1136 u32 bufsize; 1155 1137 1156 1138 xdp = (struct xdp_buff *)buf; 1157 1139 1158 - bufsize = xsk_pool_get_rx_frame_size(rq->xsk_pool) + vi->hdr_len; 1140 + /* In virtnet_add_recvbuf_xsk, we use part of XDP_PACKET_HEADROOM for 1141 + * virtio header and ask the vhost to fill data from 1142 + * hard_start + XDP_PACKET_HEADROOM - vi->hdr_len 1143 + * The first buffer has virtio header so the remaining region for frame 1144 + * data is 1145 + * xsk_pool_get_rx_frame_size() 1146 + * While other buffers than the first one do not have virtio header, so 1147 + * the maximum frame data's length can be 1148 + * xsk_pool_get_rx_frame_size() + vi->hdr_len 1149 + */ 1150 + bufsize = xsk_pool_get_rx_frame_size(rq->xsk_pool); 1151 + if (!first_buf) 1152 + bufsize += vi->hdr_len; 1159 1153 1160 1154 if (unlikely(len > bufsize)) { 1161 1155 pr_debug("%s: rx error: len %u exceeds truesize %u\n", ··· 1294 1260 1295 1261 u64_stats_add(&stats->bytes, len); 1296 1262 1297 - xdp = buf_to_xdp(vi, rq, buf, len); 1263 + xdp = buf_to_xdp(vi, rq, buf, len, false); 1298 1264 if (!xdp) 1299 1265 goto err; 1300 1266 ··· 1392 1358 1393 1359 u64_stats_add(&stats->bytes, len); 1394 1360 1395 - xdp = buf_to_xdp(vi, rq, buf, len); 1361 + xdp = buf_to_xdp(vi, rq, buf, len, true); 1396 1362 if (!xdp) 1397 1363 return; 1398 1364 ··· 1831 1797 * across multiple 
buffers (num_buf > 1), and we make sure buffers 1832 1798 * have enough headroom. 1833 1799 */ 1834 - static struct page *xdp_linearize_page(struct receive_queue *rq, 1800 + static struct page *xdp_linearize_page(struct net_device *dev, 1801 + struct receive_queue *rq, 1835 1802 int *num_buf, 1836 1803 struct page *p, 1837 1804 int offset, ··· 1852 1817 memcpy(page_address(page) + page_off, page_address(p) + offset, *len); 1853 1818 page_off += *len; 1854 1819 1820 + /* Only mergeable mode can go inside this while loop. In small mode, 1821 + * *num_buf == 1, so it cannot go inside. 1822 + */ 1855 1823 while (--*num_buf) { 1856 1824 unsigned int buflen; 1857 1825 void *buf; 1826 + void *ctx; 1858 1827 int off; 1859 1828 1860 - buf = virtnet_rq_get_buf(rq, &buflen, NULL); 1829 + buf = virtnet_rq_get_buf(rq, &buflen, &ctx); 1861 1830 if (unlikely(!buf)) 1862 1831 goto err_buf; 1863 1832 1864 1833 p = virt_to_head_page(buf); 1865 1834 off = buf - page_address(p); 1835 + 1836 + if (check_mergeable_len(dev, ctx, buflen)) { 1837 + put_page(p); 1838 + goto err_buf; 1839 + } 1866 1840 1867 1841 /* guard against a misconfigured or uncooperative backend that 1868 1842 * is sending packet larger than the MTU. 
··· 1961 1917 headroom = vi->hdr_len + header_offset; 1962 1918 buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + 1963 1919 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); 1964 - xdp_page = xdp_linearize_page(rq, &num_buf, page, 1920 + xdp_page = xdp_linearize_page(dev, rq, &num_buf, page, 1965 1921 offset, header_offset, 1966 1922 &tlen); 1967 1923 if (!xdp_page) ··· 2170 2126 struct virtnet_rq_stats *stats) 2171 2127 { 2172 2128 struct virtio_net_hdr_mrg_rxbuf *hdr = buf; 2173 - unsigned int headroom, tailroom, room; 2174 - unsigned int truesize, cur_frag_size; 2175 2129 struct skb_shared_info *shinfo; 2176 2130 unsigned int xdp_frags_truesz = 0; 2131 + unsigned int truesize; 2177 2132 struct page *page; 2178 2133 skb_frag_t *frag; 2179 2134 int offset; ··· 2215 2172 page = virt_to_head_page(buf); 2216 2173 offset = buf - page_address(page); 2217 2174 2218 - truesize = mergeable_ctx_to_truesize(ctx); 2219 - headroom = mergeable_ctx_to_headroom(ctx); 2220 - tailroom = headroom ? sizeof(struct skb_shared_info) : 0; 2221 - room = SKB_DATA_ALIGN(headroom + tailroom); 2222 - 2223 - cur_frag_size = truesize; 2224 - xdp_frags_truesz += cur_frag_size; 2225 - if (unlikely(len > truesize - room || cur_frag_size > PAGE_SIZE)) { 2175 + if (check_mergeable_len(dev, ctx, len)) { 2226 2176 put_page(page); 2227 - pr_debug("%s: rx error: len %u exceeds truesize %lu\n", 2228 - dev->name, len, (unsigned long)(truesize - room)); 2229 - DEV_STATS_INC(dev, rx_length_errors); 2230 2177 goto err; 2231 2178 } 2179 + 2180 + truesize = mergeable_ctx_to_truesize(ctx); 2181 + xdp_frags_truesz += truesize; 2232 2182 2233 2183 frag = &shinfo->frags[shinfo->nr_frags++]; 2234 2184 skb_frag_fill_page_desc(frag, page, offset, len); ··· 2288 2252 */ 2289 2253 if (!xdp_prog->aux->xdp_has_frags) { 2290 2254 /* linearize data for XDP */ 2291 - xdp_page = xdp_linearize_page(rq, num_buf, 2255 + xdp_page = xdp_linearize_page(vi->dev, rq, num_buf, 2292 2256 *page, offset, 2293 2257 
XDP_PACKET_HEADROOM, 2294 2258 len); ··· 2436 2400 struct sk_buff *head_skb, *curr_skb; 2437 2401 unsigned int truesize = mergeable_ctx_to_truesize(ctx); 2438 2402 unsigned int headroom = mergeable_ctx_to_headroom(ctx); 2439 - unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0; 2440 - unsigned int room = SKB_DATA_ALIGN(headroom + tailroom); 2441 2403 2442 2404 head_skb = NULL; 2443 2405 u64_stats_add(&stats->bytes, len - vi->hdr_len); 2444 2406 2445 - if (unlikely(len > truesize - room)) { 2446 - pr_debug("%s: rx error: len %u exceeds truesize %lu\n", 2447 - dev->name, len, (unsigned long)(truesize - room)); 2448 - DEV_STATS_INC(dev, rx_length_errors); 2407 + if (check_mergeable_len(dev, ctx, len)) 2449 2408 goto err_skb; 2450 - } 2451 2409 2452 2410 if (unlikely(vi->xdp_enabled)) { 2453 2411 struct bpf_prog *xdp_prog; ··· 2476 2446 u64_stats_add(&stats->bytes, len); 2477 2447 page = virt_to_head_page(buf); 2478 2448 2479 - truesize = mergeable_ctx_to_truesize(ctx); 2480 - headroom = mergeable_ctx_to_headroom(ctx); 2481 - tailroom = headroom ? 
sizeof(struct skb_shared_info) : 0; 2482 - room = SKB_DATA_ALIGN(headroom + tailroom); 2483 - if (unlikely(len > truesize - room)) { 2484 - pr_debug("%s: rx error: len %u exceeds truesize %lu\n", 2485 - dev->name, len, (unsigned long)(truesize - room)); 2486 - DEV_STATS_INC(dev, rx_length_errors); 2449 + if (check_mergeable_len(dev, ctx, len)) 2487 2450 goto err_skb; 2488 - } 2489 2451 2452 + truesize = mergeable_ctx_to_truesize(ctx); 2490 2453 curr_skb = virtnet_skb_append_frag(head_skb, curr_skb, page, 2491 2454 buf, len, truesize); 2492 2455 if (!curr_skb) ··· 3021 2998 free_old_xmit(sq, txq, !!budget); 3022 2999 } while (unlikely(!virtqueue_enable_cb_delayed(sq->vq))); 3023 3000 3024 - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) { 3001 + if (sq->vq->num_free >= MAX_SKB_FRAGS + 2) { 3025 3002 if (netif_tx_queue_stopped(txq)) { 3026 3003 u64_stats_update_begin(&sq->stats.syncp); 3027 3004 u64_stats_inc(&sq->stats.wake); ··· 3218 3195 else 3219 3196 free_old_xmit(sq, txq, !!budget); 3220 3197 3221 - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) { 3198 + if (sq->vq->num_free >= MAX_SKB_FRAGS + 2) { 3222 3199 if (netif_tx_queue_stopped(txq)) { 3223 3200 u64_stats_update_begin(&sq->stats.syncp); 3224 3201 u64_stats_inc(&sq->stats.wake); ··· 3503 3480 u32 ring_num) 3504 3481 { 3505 3482 int qindex, err; 3483 + 3484 + if (ring_num <= MAX_SKB_FRAGS + 2) { 3485 + netdev_err(vi->dev, "tx size (%d) cannot be smaller than %d\n", 3486 + ring_num, MAX_SKB_FRAGS + 2); 3487 + return -EINVAL; 3488 + } 3506 3489 3507 3490 qindex = sq - vi->sq; 3508 3491
+42 -45
drivers/nvme/host/core.c
··· 2015 2015 } 2016 2016 2017 2017 2018 - static void nvme_update_atomic_write_disk_info(struct nvme_ns *ns, 2019 - struct nvme_id_ns *id, struct queue_limits *lim, 2020 - u32 bs, u32 atomic_bs) 2018 + static u32 nvme_configure_atomic_write(struct nvme_ns *ns, 2019 + struct nvme_id_ns *id, struct queue_limits *lim, u32 bs) 2021 2020 { 2022 - unsigned int boundary = 0; 2021 + u32 atomic_bs, boundary = 0; 2023 2022 2024 - if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf) { 2025 - if (le16_to_cpu(id->nabspf)) 2023 + /* 2024 + * We do not support an offset for the atomic boundaries. 2025 + */ 2026 + if (id->nabo) 2027 + return bs; 2028 + 2029 + if ((id->nsfeat & NVME_NS_FEAT_ATOMICS) && id->nawupf) { 2030 + /* 2031 + * Use the per-namespace atomic write unit when available. 2032 + */ 2033 + atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs; 2034 + if (id->nabspf) 2026 2035 boundary = (le16_to_cpu(id->nabspf) + 1) * bs; 2036 + } else { 2037 + /* 2038 + * Use the controller wide atomic write unit. This sucks 2039 + * because the limit is defined in terms of logical blocks while 2040 + * namespaces can have different formats, and because there is 2041 + * no clear language in the specification prohibiting different 2042 + * values for different controllers in the subsystem. 
2043 + */ 2044 + atomic_bs = (1 + ns->ctrl->subsys->awupf) * bs; 2027 2045 } 2046 + 2028 2047 lim->atomic_write_hw_max = atomic_bs; 2029 2048 lim->atomic_write_hw_boundary = boundary; 2030 2049 lim->atomic_write_hw_unit_min = bs; 2031 2050 lim->atomic_write_hw_unit_max = rounddown_pow_of_two(atomic_bs); 2032 2051 lim->features |= BLK_FEAT_ATOMIC_WRITES; 2052 + return atomic_bs; 2033 2053 } 2034 2054 2035 2055 static u32 nvme_max_drv_segments(struct nvme_ctrl *ctrl) ··· 2087 2067 valid = false; 2088 2068 } 2089 2069 2090 - atomic_bs = phys_bs = bs; 2091 - if (id->nabo == 0) { 2092 - /* 2093 - * Bit 1 indicates whether NAWUPF is defined for this namespace 2094 - * and whether it should be used instead of AWUPF. If NAWUPF == 2095 - * 0 then AWUPF must be used instead. 2096 - */ 2097 - if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf) 2098 - atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs; 2099 - else 2100 - atomic_bs = (1 + ns->ctrl->awupf) * bs; 2101 - 2102 - /* 2103 - * Set subsystem atomic bs. 2104 - */ 2105 - if (ns->ctrl->subsys->atomic_bs) { 2106 - if (atomic_bs != ns->ctrl->subsys->atomic_bs) { 2107 - dev_err_ratelimited(ns->ctrl->device, 2108 - "%s: Inconsistent Atomic Write Size, Namespace will not be added: Subsystem=%d bytes, Controller/Namespace=%d bytes\n", 2109 - ns->disk ? 
ns->disk->disk_name : "?", 2110 - ns->ctrl->subsys->atomic_bs, 2111 - atomic_bs); 2112 - } 2113 - } else 2114 - ns->ctrl->subsys->atomic_bs = atomic_bs; 2115 - 2116 - nvme_update_atomic_write_disk_info(ns, id, lim, bs, atomic_bs); 2117 - } 2070 + phys_bs = bs; 2071 + atomic_bs = nvme_configure_atomic_write(ns, id, lim, bs); 2118 2072 2119 2073 if (id->nsfeat & NVME_NS_FEAT_IO_OPT) { 2120 2074 /* NPWG = Namespace Preferred Write Granularity */ ··· 2375 2381 nvme_set_chunk_sectors(ns, id, &lim); 2376 2382 if (!nvme_update_disk_info(ns, id, &lim)) 2377 2383 capacity = 0; 2378 - 2379 - /* 2380 - * Validate the max atomic write size fits within the subsystem's 2381 - * atomic write capabilities. 2382 - */ 2383 - if (lim.atomic_write_hw_max > ns->ctrl->subsys->atomic_bs) { 2384 - blk_mq_unfreeze_queue(ns->disk->queue, memflags); 2385 - ret = -ENXIO; 2386 - goto out; 2387 - } 2388 2384 2389 2385 nvme_config_discard(ns, &lim); 2390 2386 if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) && ··· 3199 3215 memcpy(subsys->model, id->mn, sizeof(subsys->model)); 3200 3216 subsys->vendor_id = le16_to_cpu(id->vid); 3201 3217 subsys->cmic = id->cmic; 3218 + subsys->awupf = le16_to_cpu(id->awupf); 3202 3219 3203 3220 /* Versions prior to 1.4 don't necessarily report a valid type */ 3204 3221 if (id->cntrltype == NVME_CTRL_DISC || ··· 3537 3552 if (ret) 3538 3553 goto out_free; 3539 3554 } 3555 + 3556 + if (le16_to_cpu(id->awupf) != ctrl->subsys->awupf) { 3557 + dev_err_ratelimited(ctrl->device, 3558 + "inconsistent AWUPF, controller not added (%u/%u).\n", 3559 + le16_to_cpu(id->awupf), ctrl->subsys->awupf); 3560 + ret = -EINVAL; 3561 + goto out_free; 3562 + } 3563 + 3540 3564 memcpy(ctrl->subsys->firmware_rev, id->fr, 3541 3565 sizeof(ctrl->subsys->firmware_rev)); 3542 3566 ··· 3641 3647 dev_pm_qos_expose_latency_tolerance(ctrl->device); 3642 3648 else if (!ctrl->apst_enabled && prev_apst_enabled) 3643 3649 dev_pm_qos_hide_latency_tolerance(ctrl->device); 3644 - ctrl->awupf = 
le16_to_cpu(id->awupf); 3645 3650 out_free: 3646 3651 kfree(id); 3647 3652 return ret; ··· 4029 4036 list_add_tail_rcu(&ns->siblings, &head->list); 4030 4037 ns->head = head; 4031 4038 mutex_unlock(&ctrl->subsys->lock); 4039 + 4040 + #ifdef CONFIG_NVME_MULTIPATH 4041 + cancel_delayed_work(&head->remove_work); 4042 + #endif 4032 4043 return 0; 4033 4044 4034 4045 out_put_ns_head:
+1 -1
drivers/nvme/host/multipath.c
··· 1311 1311 */ 1312 1312 if (!try_module_get(THIS_MODULE)) 1313 1313 goto out; 1314 - queue_delayed_work(nvme_wq, &head->remove_work, 1314 + mod_delayed_work(nvme_wq, &head->remove_work, 1315 1315 head->delayed_removal_secs * HZ); 1316 1316 } else { 1317 1317 list_del_init(&head->entry);
+1 -2
drivers/nvme/host/nvme.h
··· 410 410 411 411 enum nvme_ctrl_type cntrltype; 412 412 enum nvme_dctype dctype; 413 - u16 awupf; /* 0's based value. */ 414 413 }; 415 414 416 415 static inline enum nvme_ctrl_state nvme_ctrl_state(struct nvme_ctrl *ctrl) ··· 442 443 u8 cmic; 443 444 enum nvme_subsys_type subtype; 444 445 u16 vendor_id; 446 + u16 awupf; /* 0's based value. */ 445 447 struct ida ns_ida; 446 448 #ifdef CONFIG_NVME_MULTIPATH 447 449 enum nvme_iopolicy iopolicy; 448 450 #endif 449 - u32 atomic_bs; 450 451 }; 451 452 452 453 /*
+10 -13
drivers/pci/pci-acpi.c
··· 1676 1676 return NULL; 1677 1677 1678 1678 root_ops = kzalloc(sizeof(*root_ops), GFP_KERNEL); 1679 - if (!root_ops) 1680 - goto free_ri; 1679 + if (!root_ops) { 1680 + kfree(ri); 1681 + return NULL; 1682 + } 1681 1683 1682 1684 ri->cfg = pci_acpi_setup_ecam_mapping(root); 1683 - if (!ri->cfg) 1684 - goto free_root_ops; 1685 + if (!ri->cfg) { 1686 + kfree(ri); 1687 + kfree(root_ops); 1688 + return NULL; 1689 + } 1685 1690 1686 1691 root_ops->release_info = pci_acpi_generic_release_info; 1687 1692 root_ops->prepare_resources = pci_acpi_root_prepare_resources; 1688 1693 root_ops->pci_ops = (struct pci_ops *)&ri->cfg->ops->pci_ops; 1689 1694 bus = acpi_pci_root_create(root, root_ops, &ri->common, ri->cfg); 1690 1695 if (!bus) 1691 - goto free_cfg; 1696 + return NULL; 1692 1697 1693 1698 /* If we must preserve the resource configuration, claim now */ 1694 1699 host = pci_find_host_bridge(bus); ··· 1710 1705 pcie_bus_configure_settings(child); 1711 1706 1712 1707 return bus; 1713 - 1714 - free_cfg: 1715 - pci_ecam_free(ri->cfg); 1716 - free_root_ops: 1717 - kfree(root_ops); 1718 - free_ri: 1719 - kfree(ri); 1720 - return NULL; 1721 1708 } 1722 1709 1723 1710 void pcibios_add_bus(struct pci_bus *bus)
+2
drivers/pci/pcie/ptm.c
··· 254 254 } 255 255 EXPORT_SYMBOL(pcie_ptm_enabled); 256 256 257 + #if IS_ENABLED(CONFIG_DEBUG_FS) 257 258 static ssize_t context_update_write(struct file *file, const char __user *ubuf, 258 259 size_t count, loff_t *ppos) 259 260 { ··· 553 552 debugfs_remove_recursive(ptm_debugfs->debugfs); 554 553 } 555 554 EXPORT_SYMBOL_GPL(pcie_ptm_destroy_debugfs); 555 + #endif
+2 -1
drivers/platform/x86/amd/amd_isp4.c
··· 11 11 #include <linux/mutex.h> 12 12 #include <linux/platform_device.h> 13 13 #include <linux/property.h> 14 + #include <linux/soc/amd/isp4_misc.h> 14 15 #include <linux/string.h> 15 16 #include <linux/types.h> 16 17 #include <linux/units.h> ··· 152 151 153 152 static inline bool is_isp_i2c_adapter(struct i2c_adapter *adap) 154 153 { 155 - return !strcmp(adap->owner->name, "i2c_designware_amdisp"); 154 + return !strcmp(adap->name, AMDISP_I2C_ADAP_NAME); 156 155 } 157 156 158 157 static void instantiate_isp_i2c_client(struct amdisp_platform *isp4_platform,
+6 -4
drivers/rtc/rtc-cmos.c
··· 692 692 { 693 693 u8 irqstat; 694 694 u8 rtc_control; 695 + unsigned long flags; 695 696 696 - spin_lock(&rtc_lock); 697 + /* We cannot use spin_lock() here, as cmos_interrupt() is also called 698 + * in a non-irq context. 699 + */ 700 + spin_lock_irqsave(&rtc_lock, flags); 697 701 698 702 /* When the HPET interrupt handler calls us, the interrupt 699 703 * status is passed as arg1 instead of the irq number. But ··· 731 727 hpet_mask_rtc_irq_bit(RTC_AIE); 732 728 CMOS_READ(RTC_INTR_FLAGS); 733 729 } 734 - spin_unlock(&rtc_lock); 730 + spin_unlock_irqrestore(&rtc_lock, flags); 735 731 736 732 if (is_intr(irqstat)) { 737 733 rtc_update_irq(p, 1, irqstat); ··· 1299 1295 * ACK the rtc irq here 1300 1296 */ 1301 1297 if (t_now >= cmos->alarm_expires && cmos_use_acpi_alarm()) { 1302 - local_irq_disable(); 1303 1298 cmos_interrupt(0, (void *)cmos->rtc); 1304 - local_irq_enable(); 1305 1299 return; 1306 1300 } 1307 1301
+6 -1
drivers/rtc/rtc-pcf2127.c
··· 1538 1538 variant = &pcf21xx_cfg[type]; 1539 1539 } 1540 1540 1541 - config.max_register = variant->max_register, 1541 + if (variant->type == PCF2131) { 1542 + config.read_flag_mask = 0x0; 1543 + config.write_flag_mask = 0x0; 1544 + } 1545 + 1546 + config.max_register = variant->max_register; 1542 1547 1543 1548 regmap = devm_regmap_init_spi(spi, &config); 1544 1549 if (IS_ERR(regmap)) {
+128 -69
drivers/rtc/rtc-s5m.c
··· 10 10 #include <linux/module.h> 11 11 #include <linux/i2c.h> 12 12 #include <linux/bcd.h> 13 + #include <linux/reboot.h> 13 14 #include <linux/regmap.h> 14 15 #include <linux/rtc.h> 15 16 #include <linux/platform_device.h> ··· 54 53 * Device | Write time | Read time | Write alarm 55 54 * ================================================= 56 55 * S5M8767 | UDR + TIME | | UDR 56 + * S2MPG10 | WUDR | RUDR | AUDR 57 57 * S2MPS11/14 | WUDR | RUDR | WUDR + RUDR 58 58 * S2MPS13 | WUDR | RUDR | WUDR + AUDR 59 59 * S2MPS15 | WUDR | RUDR | AUDR ··· 99 97 .read_time_udr_mask = 0, /* Not needed */ 100 98 .write_time_udr_mask = S5M_RTC_UDR_MASK | S5M_RTC_TIME_EN_MASK, 101 99 .write_alarm_udr_mask = S5M_RTC_UDR_MASK, 100 + }; 101 + 102 + /* Register map for S2MPG10 */ 103 + static const struct s5m_rtc_reg_config s2mpg10_rtc_regs = { 104 + .regs_count = 7, 105 + .time = S2MPG10_RTC_SEC, 106 + .ctrl = S2MPG10_RTC_CTRL, 107 + .alarm0 = S2MPG10_RTC_A0SEC, 108 + .alarm1 = S2MPG10_RTC_A1SEC, 109 + .udr_update = S2MPG10_RTC_UPDATE, 110 + .autoclear_udr_mask = S2MPS15_RTC_WUDR_MASK | S2MPS15_RTC_AUDR_MASK, 111 + .read_time_udr_mask = S2MPS_RTC_RUDR_MASK, 112 + .write_time_udr_mask = S2MPS15_RTC_WUDR_MASK, 113 + .write_alarm_udr_mask = S2MPS15_RTC_AUDR_MASK, 102 114 }; 103 115 104 116 /* Register map for S2MPS13 */ ··· 243 227 return ret; 244 228 } 245 229 246 - static int s5m_check_peding_alarm_interrupt(struct s5m_rtc_info *info, 247 - struct rtc_wkalrm *alarm) 230 + static int s5m_check_pending_alarm_interrupt(struct s5m_rtc_info *info, 231 + struct rtc_wkalrm *alarm) 248 232 { 249 233 int ret; 250 234 unsigned int val; ··· 254 238 ret = regmap_read(info->regmap, S5M_RTC_STATUS, &val); 255 239 val &= S5M_ALARM0_STATUS; 256 240 break; 241 + case S2MPG10: 257 242 case S2MPS15X: 258 243 case S2MPS14X: 259 244 case S2MPS13X: ··· 279 262 static int s5m8767_rtc_set_time_reg(struct s5m_rtc_info *info) 280 263 { 281 264 int ret; 282 - unsigned int data; 283 265 284 - ret = 
regmap_read(info->regmap, info->regs->udr_update, &data); 285 - if (ret < 0) { 286 - dev_err(info->dev, "failed to read update reg(%d)\n", ret); 287 - return ret; 288 - } 289 - 290 - data |= info->regs->write_time_udr_mask; 291 - 292 - ret = regmap_write(info->regmap, info->regs->udr_update, data); 266 + ret = regmap_set_bits(info->regmap, info->regs->udr_update, 267 + info->regs->write_time_udr_mask); 293 268 if (ret < 0) { 294 269 dev_err(info->dev, "failed to write update reg(%d)\n", ret); 295 270 return ret; ··· 295 286 static int s5m8767_rtc_set_alarm_reg(struct s5m_rtc_info *info) 296 287 { 297 288 int ret; 298 - unsigned int data; 289 + unsigned int udr_mask; 299 290 300 - ret = regmap_read(info->regmap, info->regs->udr_update, &data); 301 - if (ret < 0) { 302 - dev_err(info->dev, "%s: fail to read update reg(%d)\n", 303 - __func__, ret); 304 - return ret; 305 - } 306 - 307 - data |= info->regs->write_alarm_udr_mask; 291 + udr_mask = info->regs->write_alarm_udr_mask; 308 292 switch (info->device_type) { 309 293 case S5M8767X: 310 - data &= ~S5M_RTC_TIME_EN_MASK; 294 + udr_mask |= S5M_RTC_TIME_EN_MASK; 311 295 break; 296 + case S2MPG10: 312 297 case S2MPS15X: 313 298 case S2MPS14X: 314 299 case S2MPS13X: ··· 312 309 return -EINVAL; 313 310 } 314 311 315 - ret = regmap_write(info->regmap, info->regs->udr_update, data); 312 + ret = regmap_update_bits(info->regmap, info->regs->udr_update, 313 + udr_mask, info->regs->write_alarm_udr_mask); 316 314 if (ret < 0) { 317 315 dev_err(info->dev, "%s: fail to write update reg(%d)\n", 318 316 __func__, ret); ··· 324 320 325 321 /* On S2MPS13 the AUDR is not auto-cleared */ 326 322 if (info->device_type == S2MPS13X) 327 - regmap_update_bits(info->regmap, info->regs->udr_update, 328 - S2MPS13_RTC_AUDR_MASK, 0); 323 + regmap_clear_bits(info->regmap, info->regs->udr_update, 324 + S2MPS13_RTC_AUDR_MASK); 329 325 330 326 return ret; 331 327 } ··· 337 333 int ret; 338 334 339 335 if (info->regs->read_time_udr_mask) { 340 - ret = 
regmap_update_bits(info->regmap, 341 - info->regs->udr_update, 342 - info->regs->read_time_udr_mask, 343 - info->regs->read_time_udr_mask); 336 + ret = regmap_set_bits(info->regmap, info->regs->udr_update, 337 + info->regs->read_time_udr_mask); 344 338 if (ret) { 345 339 dev_err(dev, 346 340 "Failed to prepare registers for time reading: %d\n", ··· 353 351 354 352 switch (info->device_type) { 355 353 case S5M8767X: 354 + case S2MPG10: 356 355 case S2MPS15X: 357 356 case S2MPS14X: 358 357 case S2MPS13X: ··· 377 374 378 375 switch (info->device_type) { 379 376 case S5M8767X: 377 + case S2MPG10: 380 378 case S2MPS15X: 381 379 case S2MPS14X: 382 380 case S2MPS13X: ··· 415 411 416 412 switch (info->device_type) { 417 413 case S5M8767X: 414 + case S2MPG10: 418 415 case S2MPS15X: 419 416 case S2MPS14X: 420 417 case S2MPS13X: ··· 435 430 436 431 dev_dbg(dev, "%s: %ptR(%d)\n", __func__, &alrm->time, alrm->time.tm_wday); 437 432 438 - return s5m_check_peding_alarm_interrupt(info, alrm); 433 + return s5m_check_pending_alarm_interrupt(info, alrm); 439 434 } 440 435 441 436 static int s5m_rtc_stop_alarm(struct s5m_rtc_info *info) ··· 454 449 455 450 switch (info->device_type) { 456 451 case S5M8767X: 452 + case S2MPG10: 457 453 case S2MPS15X: 458 454 case S2MPS14X: 459 455 case S2MPS13X: ··· 493 487 494 488 switch (info->device_type) { 495 489 case S5M8767X: 490 + case S2MPG10: 496 491 case S2MPS15X: 497 492 case S2MPS14X: 498 493 case S2MPS13X: ··· 531 524 532 525 switch (info->device_type) { 533 526 case S5M8767X: 527 + case S2MPG10: 534 528 case S2MPS15X: 535 529 case S2MPS14X: 536 530 case S2MPS13X: ··· 612 604 ret = regmap_raw_write(info->regmap, S5M_ALARM0_CONF, data, 2); 613 605 break; 614 606 607 + case S2MPG10: 615 608 case S2MPS15X: 616 609 case S2MPS14X: 617 610 case S2MPS13X: ··· 643 634 return ret; 644 635 } 645 636 637 + static int s5m_rtc_restart_s2mpg10(struct sys_off_data *data) 638 + { 639 + struct s5m_rtc_info *info = data->cb_data; 640 + int ret; 641 + 642 + 
if (data->mode != REBOOT_COLD && data->mode != REBOOT_HARD) 643 + return NOTIFY_DONE; 644 + 645 + /* 646 + * Arm watchdog with maximum timeout (2 seconds), and perform full reset 647 + * on expiry. 648 + */ 649 + ret = regmap_set_bits(info->regmap, S2MPG10_RTC_WTSR, 650 + (S2MPG10_WTSR_COLDTIMER | S2MPG10_WTSR_COLDRST 651 + | S2MPG10_WTSR_WTSRT | S2MPG10_WTSR_WTSR_EN)); 652 + 653 + return ret ? NOTIFY_BAD : NOTIFY_DONE; 654 + } 655 + 646 656 static int s5m_rtc_probe(struct platform_device *pdev) 647 657 { 648 658 struct sec_pmic_dev *s5m87xx = dev_get_drvdata(pdev->dev.parent); 659 + enum sec_device_type device_type = 660 + platform_get_device_id(pdev)->driver_data; 649 661 struct s5m_rtc_info *info; 650 - struct i2c_client *i2c; 651 - const struct regmap_config *regmap_cfg; 652 662 int ret, alarm_irq; 653 663 654 664 info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL); 655 665 if (!info) 656 666 return -ENOMEM; 657 667 658 - switch (platform_get_device_id(pdev)->driver_data) { 659 - case S2MPS15X: 660 - regmap_cfg = &s2mps14_rtc_regmap_config; 661 - info->regs = &s2mps15_rtc_regs; 662 - alarm_irq = S2MPS14_IRQ_RTCA0; 663 - break; 664 - case S2MPS14X: 665 - regmap_cfg = &s2mps14_rtc_regmap_config; 666 - info->regs = &s2mps14_rtc_regs; 667 - alarm_irq = S2MPS14_IRQ_RTCA0; 668 - break; 669 - case S2MPS13X: 670 - regmap_cfg = &s2mps14_rtc_regmap_config; 671 - info->regs = &s2mps13_rtc_regs; 672 - alarm_irq = S2MPS14_IRQ_RTCA0; 673 - break; 674 - case S5M8767X: 675 - regmap_cfg = &s5m_rtc_regmap_config; 676 - info->regs = &s5m_rtc_regs; 677 - alarm_irq = S5M8767_IRQ_RTCA1; 678 - break; 679 - default: 668 + info->regmap = dev_get_regmap(pdev->dev.parent, "rtc"); 669 + if (!info->regmap) { 670 + const struct regmap_config *regmap_cfg; 671 + struct i2c_client *i2c; 672 + 673 + switch (device_type) { 674 + case S2MPS15X: 675 + regmap_cfg = &s2mps14_rtc_regmap_config; 676 + info->regs = &s2mps15_rtc_regs; 677 + alarm_irq = S2MPS14_IRQ_RTCA0; 678 + break; 679 + case 
S2MPS14X: 680 + regmap_cfg = &s2mps14_rtc_regmap_config; 681 + info->regs = &s2mps14_rtc_regs; 682 + alarm_irq = S2MPS14_IRQ_RTCA0; 683 + break; 684 + case S2MPS13X: 685 + regmap_cfg = &s2mps14_rtc_regmap_config; 686 + info->regs = &s2mps13_rtc_regs; 687 + alarm_irq = S2MPS14_IRQ_RTCA0; 688 + break; 689 + case S5M8767X: 690 + regmap_cfg = &s5m_rtc_regmap_config; 691 + info->regs = &s5m_rtc_regs; 692 + alarm_irq = S5M8767_IRQ_RTCA1; 693 + break; 694 + default: 695 + return dev_err_probe(&pdev->dev, -ENODEV, 696 + "Unsupported device type %d\n", 697 + device_type); 698 + } 699 + 700 + i2c = devm_i2c_new_dummy_device(&pdev->dev, 701 + s5m87xx->i2c->adapter, 702 + RTC_I2C_ADDR); 703 + if (IS_ERR(i2c)) 704 + return dev_err_probe(&pdev->dev, PTR_ERR(i2c), 705 + "Failed to allocate I2C\n"); 706 + 707 + info->regmap = devm_regmap_init_i2c(i2c, regmap_cfg); 708 + if (IS_ERR(info->regmap)) 709 + return dev_err_probe(&pdev->dev, PTR_ERR(info->regmap), 710 + "Failed to allocate regmap\n"); 711 + } else if (device_type == S2MPG10) { 712 + info->regs = &s2mpg10_rtc_regs; 713 + alarm_irq = S2MPG10_IRQ_RTCA0; 714 + } else { 680 715 return dev_err_probe(&pdev->dev, -ENODEV, 681 - "Device type %lu is not supported by RTC driver\n", 682 - platform_get_device_id(pdev)->driver_data); 716 + "Unsupported device type %d\n", 717 + device_type); 683 718 } 684 - 685 - i2c = devm_i2c_new_dummy_device(&pdev->dev, s5m87xx->i2c->adapter, 686 - RTC_I2C_ADDR); 687 - if (IS_ERR(i2c)) 688 - return dev_err_probe(&pdev->dev, PTR_ERR(i2c), 689 - "Failed to allocate I2C for RTC\n"); 690 - 691 - info->regmap = devm_regmap_init_i2c(i2c, regmap_cfg); 692 - if (IS_ERR(info->regmap)) 693 - return dev_err_probe(&pdev->dev, PTR_ERR(info->regmap), 694 - "Failed to allocate RTC register map\n"); 695 719 696 720 info->dev = &pdev->dev; 697 721 info->s5m87xx = s5m87xx; 698 - info->device_type = platform_get_device_id(pdev)->driver_data; 722 + info->device_type = device_type; 699 723 700 724 if (s5m87xx->irq_data) 
{ 701 725 info->irq = regmap_irq_get_virq(s5m87xx->irq_data, alarm_irq); ··· 763 721 return dev_err_probe(&pdev->dev, ret, 764 722 "Failed to request alarm IRQ %d\n", 765 723 info->irq); 766 - device_init_wakeup(&pdev->dev, true); 724 + 725 + ret = devm_device_init_wakeup(&pdev->dev); 726 + if (ret < 0) 727 + return dev_err_probe(&pdev->dev, ret, 728 + "Failed to init wakeup\n"); 729 + } 730 + 731 + if (of_device_is_system_power_controller(pdev->dev.parent->of_node) && 732 + info->device_type == S2MPG10) { 733 + ret = devm_register_sys_off_handler(&pdev->dev, 734 + SYS_OFF_MODE_RESTART, 735 + SYS_OFF_PRIO_HIGH + 1, 736 + s5m_rtc_restart_s2mpg10, 737 + info); 738 + if (ret) 739 + return dev_err_probe(&pdev->dev, ret, 740 + "Failed to register restart handler\n"); 767 741 } 768 742 769 743 return devm_rtc_register_device(info->rtc_dev); ··· 813 755 814 756 static const struct platform_device_id s5m_rtc_id[] = { 815 757 { "s5m-rtc", S5M8767X }, 758 + { "s2mpg10-rtc", S2MPG10 }, 816 759 { "s2mps13-rtc", S2MPS13X }, 817 760 { "s2mps14-rtc", S2MPS14X }, 818 761 { "s2mps15-rtc", S2MPS15X },
+1 -1
drivers/s390/crypto/pkey_api.c
··· 86 86 if (!uapqns || nr_apqns == 0) 87 87 return NULL; 88 88 89 - return memdup_user(uapqns, nr_apqns * sizeof(struct pkey_apqn)); 89 + return memdup_array_user(uapqns, nr_apqns, sizeof(struct pkey_apqn)); 90 90 } 91 91 92 92 static int pkey_ioctl_genseck(struct pkey_genseck __user *ugs)
+14 -30
drivers/staging/rtl8723bs/core/rtw_security.c
··· 868 868 num_blocks, payload_index; 869 869 870 870 u8 pn_vector[6]; 871 - u8 mic_iv[16]; 872 - u8 mic_header1[16]; 873 - u8 mic_header2[16]; 874 - u8 ctr_preload[16]; 871 + u8 mic_iv[16] = {}; 872 + u8 mic_header1[16] = {}; 873 + u8 mic_header2[16] = {}; 874 + u8 ctr_preload[16] = {}; 875 875 876 876 /* Intermediate Buffers */ 877 - u8 chain_buffer[16]; 878 - u8 aes_out[16]; 879 - u8 padded_buffer[16]; 877 + u8 chain_buffer[16] = {}; 878 + u8 aes_out[16] = {}; 879 + u8 padded_buffer[16] = {}; 880 880 u8 mic[8]; 881 881 uint frtype = GetFrameType(pframe); 882 882 uint frsubtype = GetFrameSubType(pframe); 883 883 884 884 frsubtype = frsubtype>>4; 885 - 886 - memset((void *)mic_iv, 0, 16); 887 - memset((void *)mic_header1, 0, 16); 888 - memset((void *)mic_header2, 0, 16); 889 - memset((void *)ctr_preload, 0, 16); 890 - memset((void *)chain_buffer, 0, 16); 891 - memset((void *)aes_out, 0, 16); 892 - memset((void *)padded_buffer, 0, 16); 893 885 894 886 if ((hdrlen == WLAN_HDR_A3_LEN) || (hdrlen == WLAN_HDR_A3_QOS_LEN)) 895 887 a4_exists = 0; ··· 1072 1080 num_blocks, payload_index; 1073 1081 signed int res = _SUCCESS; 1074 1082 u8 pn_vector[6]; 1075 - u8 mic_iv[16]; 1076 - u8 mic_header1[16]; 1077 - u8 mic_header2[16]; 1078 - u8 ctr_preload[16]; 1083 + u8 mic_iv[16] = {}; 1084 + u8 mic_header1[16] = {}; 1085 + u8 mic_header2[16] = {}; 1086 + u8 ctr_preload[16] = {}; 1079 1087 1080 1088 /* Intermediate Buffers */ 1081 - u8 chain_buffer[16]; 1082 - u8 aes_out[16]; 1083 - u8 padded_buffer[16]; 1089 + u8 chain_buffer[16] = {}; 1090 + u8 aes_out[16] = {}; 1091 + u8 padded_buffer[16] = {}; 1084 1092 u8 mic[8]; 1085 1093 1086 1094 uint frtype = GetFrameType(pframe); 1087 1095 uint frsubtype = GetFrameSubType(pframe); 1088 1096 1089 1097 frsubtype = frsubtype>>4; 1090 - 1091 - memset((void *)mic_iv, 0, 16); 1092 - memset((void *)mic_header1, 0, 16); 1093 - memset((void *)mic_header2, 0, 16); 1094 - memset((void *)ctr_preload, 0, 16); 1095 - memset((void *)chain_buffer, 0, 
16); 1096 - memset((void *)aes_out, 0, 16); 1097 - memset((void *)padded_buffer, 0, 16); 1098 1098 1099 1099 /* start to decrypt the payload */ 1100 1100
+12 -5
drivers/tty/serial/imx.c
··· 235 235 enum imx_tx_state tx_state; 236 236 struct hrtimer trigger_start_tx; 237 237 struct hrtimer trigger_stop_tx; 238 + unsigned int rxtl; 238 239 }; 239 240 240 241 struct imx_port_ucrs { ··· 1340 1339 1341 1340 #define TXTL_DEFAULT 8 1342 1341 #define RXTL_DEFAULT 8 /* 8 characters or aging timer */ 1342 + #define RXTL_CONSOLE_DEFAULT 1 1343 1343 #define TXTL_DMA 8 /* DMA burst setting */ 1344 1344 #define RXTL_DMA 9 /* DMA burst setting */ 1345 1345 ··· 1459 1457 ucr1 &= ~(UCR1_RXDMAEN | UCR1_TXDMAEN | UCR1_ATDMAEN); 1460 1458 imx_uart_writel(sport, ucr1, UCR1); 1461 1459 1462 - imx_uart_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT); 1460 + imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl); 1463 1461 1464 1462 sport->dma_is_enabled = 0; 1465 1463 } ··· 1484 1482 return retval; 1485 1483 } 1486 1484 1487 - imx_uart_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT); 1485 + if (uart_console(&sport->port)) 1486 + sport->rxtl = RXTL_CONSOLE_DEFAULT; 1487 + else 1488 + sport->rxtl = RXTL_DEFAULT; 1489 + 1490 + imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl); 1488 1491 1489 1492 /* disable the DREN bit (Data Ready interrupt enable) before 1490 1493 * requesting IRQs ··· 1955 1948 if (retval) 1956 1949 clk_disable_unprepare(sport->clk_ipg); 1957 1950 1958 - imx_uart_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT); 1951 + imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl); 1959 1952 1960 1953 uart_port_lock_irqsave(&sport->port, &flags); 1961 1954 ··· 2047 2040 /* If the receiver trigger is 0, set it to a default value */ 2048 2041 ufcr = imx_uart_readl(sport, UFCR); 2049 2042 if ((ufcr & UFCR_RXTL_MASK) == 0) 2050 - imx_uart_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT); 2043 + imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl); 2051 2044 imx_uart_start_rx(port); 2052 2045 } 2053 2046 ··· 2309 2302 else 2310 2303 imx_uart_console_get_options(sport, &baud, &parity, &bits); 2311 2304 2312 - imx_uart_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT); 2305 + 
imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl); 2313 2306 2314 2307 retval = uart_set_options(&sport->port, co, baud, parity, bits, flow); 2315 2308
+1
drivers/tty/serial/serial_base_bus.c
··· 72 72 dev->parent = parent_dev; 73 73 dev->bus = &serial_base_bus_type; 74 74 dev->release = release; 75 + device_set_of_node_from_dev(dev, parent_dev); 75 76 76 77 if (!serial_base_initialized) { 77 78 dev_dbg(port->dev, "uart_add_one_port() called before arch_initcall()?\n");
+1 -1
drivers/tty/vt/ucs.c
··· 206 206 207 207 /** 208 208 * ucs_get_fallback() - Get a substitution for the provided Unicode character 209 - * @base: Base Unicode code point (UCS-4) 209 + * @cp: Unicode code point (UCS-4) 210 210 * 211 211 * Get a simpler fallback character for the provided Unicode character. 212 212 * This is used for terminal display when corresponding glyph is unavailable.
+1
drivers/tty/vt/vt.c
··· 4650 4650 set_palette(vc); 4651 4651 set_cursor(vc); 4652 4652 vt_event_post(VT_EVENT_UNBLANK, vc->vc_num, vc->vc_num); 4653 + notify_update(vc); 4653 4654 } 4654 4655 EXPORT_SYMBOL(do_unblank_screen); 4655 4656
+6 -2
drivers/virtio/virtio_ring.c
··· 2797 2797 void (*recycle_done)(struct virtqueue *vq))
2798 2798 {
2799 2799 struct vring_virtqueue *vq = to_vvq(_vq);
2800 - int err;
2800 + int err, err_reset;
2801 2801
2802 2802 if (num > vq->vq.num_max)
2803 2803 return -E2BIG;
··· 2819 2819 else
2820 2820 err = virtqueue_resize_split(_vq, num);
2821 2821
2822 - return virtqueue_enable_after_reset(_vq);
2822 + err_reset = virtqueue_enable_after_reset(_vq);
2823 + if (err_reset)
2824 + return err_reset;
2825 +
2826 + return err;
2823 2827 }
2824 2828 EXPORT_SYMBOL_GPL(virtqueue_resize);
2825 2829
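The virtio_ring.c fix above stops `virtqueue_resize()` from silently discarding the resize error: previously the return value of `virtqueue_enable_after_reset()` overwrote it. The resulting precedence rule can be sketched in isolation (illustrative only; the errno values in the test are arbitrary):

```c
#include <assert.h>

/*
 * Sketch of the error-propagation rule from the diff above:
 * a failure to re-enable the queue after reset takes precedence,
 * otherwise the original resize result (success or error) is returned.
 */
static int resize_result(int err, int err_reset)
{
	if (err_reset)
		return err_reset;
	return err;
}
```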
+9 -4
fs/bcachefs/alloc_background.c
··· 1406 1406 : BCH_DATA_free;
1407 1407 struct printbuf buf = PRINTBUF;
1408 1408
1409 + unsigned fsck_flags = (async_repair ? FSCK_ERR_NO_LOG : 0)|
1410 + FSCK_CAN_FIX|FSCK_CAN_IGNORE;
1411 +
1409 1412 struct bpos bucket = iter->pos;
1410 1413 bucket.offset &= ~(~0ULL << 56);
1411 1414 u64 genbits = iter->pos.offset & (~0ULL << 56);
··· 1422 1419 return ret;
1423 1420
1424 1421 if (!bch2_dev_bucket_exists(c, bucket)) {
1425 - if (fsck_err(trans, need_discard_freespace_key_to_invalid_dev_bucket,
1426 - "entry in %s btree for nonexistant dev:bucket %llu:%llu",
1427 - bch2_btree_id_str(iter->btree_id), bucket.inode, bucket.offset))
1422 + if (__fsck_err(trans, fsck_flags,
1423 + need_discard_freespace_key_to_invalid_dev_bucket,
1424 + "entry in %s btree for nonexistant dev:bucket %llu:%llu",
1425 + bch2_btree_id_str(iter->btree_id), bucket.inode, bucket.offset))
1428 1426 goto delete;
1429 1427 ret = 1;
1430 1428 goto out;
··· 1437 1433 if (a->data_type != state ||
1438 1434 (state == BCH_DATA_free &&
1439 1435 genbits != alloc_freespace_genbits(*a))) {
1440 - if (fsck_err(trans, need_discard_freespace_key_bad,
1436 + if (__fsck_err(trans, fsck_flags,
1437 + need_discard_freespace_key_bad,
1441 1438 "%s\nincorrectly set at %s:%llu:%llu:0 (free %u, genbits %llu should be %llu)",
1442 1439 (bch2_bkey_val_to_text(&buf, c, alloc_k), buf.buf),
1443 1440 bch2_btree_id_str(iter->btree_id),
+1 -1
fs/bcachefs/backpointers.c
··· 353 353 return ret ? bkey_s_c_err(ret) : bkey_s_c_null;
354 354 } else {
355 355 struct btree *b = __bch2_backpointer_get_node(trans, bp, iter, last_flushed, commit);
356 - if (b == ERR_PTR(bch_err_throw(c, backpointer_to_overwritten_btree_node)))
356 + if (b == ERR_PTR(-BCH_ERR_backpointer_to_overwritten_btree_node))
357 357 return bkey_s_c_null;
358 358 if (IS_ERR_OR_NULL(b))
359 359 return ((struct bkey_s_c) { .k = ERR_CAST(b) });
+2 -1
fs/bcachefs/bcachefs.h
··· 767 767 x(sysfs) \
768 768 x(btree_write_buffer) \
769 769 x(btree_node_scrub) \
770 - x(async_recovery_passes)
770 + x(async_recovery_passes) \
771 + x(ioctl_data)
771 772
772 773 enum bch_write_ref {
773 774 #define x(n) BCH_WRITE_REF_##n,
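The bcachefs.h hunk above extends an x-macro list that is later expanded into `enum bch_write_ref`, so adding one `x(...)` entry automatically grows the enum. A self-contained sketch of that pattern (the names here are illustrative, not the kernel's full list):

```c
#include <assert.h>

/* Illustrative x-macro list, mimicking the BCH_WRITE_REFS style above */
#define EXAMPLE_REFS() \
	x(sysfs) \
	x(async_recovery_passes) \
	x(ioctl_data)

enum example_ref {
#define x(n) EXAMPLE_REF_##n,
	EXAMPLE_REFS()
#undef x
	EXAMPLE_REF_NR	/* count falls out of the expansion for free */
};
```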
+25 -12
fs/bcachefs/btree_gc.c
··· 503 503 prt_newline(&buf);
504 504 bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(&b->key));
505 505
506 + /*
507 + * XXX: we're not passing the trans object here because we're not set up
508 + * to handle a transaction restart - this code needs to be rewritten
509 + * when we start doing online topology repair
510 + */
511 + bch2_trans_unlock_long(trans);
506 512 if (mustfix_fsck_err_on(!have_child,
507 - trans, btree_node_topology_interior_node_empty,
513 + c, btree_node_topology_interior_node_empty,
508 514 "empty interior btree node at %s", buf.buf))
509 515 ret = DROP_THIS_NODE;
510 516 err:
··· 534 528 return ret;
535 529 }
536 530
537 - static int bch2_check_root(struct btree_trans *trans, enum btree_id i,
531 + static int bch2_check_root(struct btree_trans *trans, enum btree_id btree,
538 532 bool *reconstructed_root)
539 533 {
540 534 struct bch_fs *c = trans->c;
541 - struct btree_root *r = bch2_btree_id_root(c, i);
535 + struct btree_root *r = bch2_btree_id_root(c, btree);
542 536 struct printbuf buf = PRINTBUF;
543 537 int ret = 0;
544 538
545 - bch2_btree_id_to_text(&buf, i);
539 + bch2_btree_id_to_text(&buf, btree);
546 540
547 541 if (r->error) {
548 542 bch_info(c, "btree root %s unreadable, must recover from scan", buf.buf);
549 543
550 - r->alive = false;
551 - r->error = 0;
544 + ret = bch2_btree_has_scanned_nodes(c, btree);
545 + if (ret < 0)
546 + goto err;
552 547
553 - if (!bch2_btree_has_scanned_nodes(c, i)) {
548 + if (!ret) {
554 549 __fsck_err(trans,
555 - FSCK_CAN_FIX|(!btree_id_important(i) ? FSCK_AUTOFIX : 0),
550 + FSCK_CAN_FIX|(!btree_id_important(btree) ? FSCK_AUTOFIX : 0),
556 551 btree_root_unreadable_and_scan_found_nothing,
557 552 "no nodes found for btree %s, continue?", buf.buf);
558 - bch2_btree_root_alloc_fake_trans(trans, i, 0);
553 +
554 + r->alive = false;
555 + r->error = 0;
556 + bch2_btree_root_alloc_fake_trans(trans, btree, 0);
559 557 } else {
560 - bch2_btree_root_alloc_fake_trans(trans, i, 1);
561 - bch2_shoot_down_journal_keys(c, i, 1, BTREE_MAX_DEPTH, POS_MIN, SPOS_MAX);
562 - ret = bch2_get_scanned_nodes(c, i, 0, POS_MIN, SPOS_MAX);
558 + r->alive = false;
559 + r->error = 0;
560 + bch2_btree_root_alloc_fake_trans(trans, btree, 1);
561 +
562 + bch2_shoot_down_journal_keys(c, btree, 1, BTREE_MAX_DEPTH, POS_MIN, SPOS_MAX);
563 + ret = bch2_get_scanned_nodes(c, btree, 0, POS_MIN, SPOS_MAX);
563 564 if (ret)
564 565 goto err;
565 566 }
+31 -43
fs/bcachefs/btree_io.c
··· 557 557 const char *fmt, ...)
558 558 {
559 559 if (c->recovery.curr_pass == BCH_RECOVERY_PASS_scan_for_btree_nodes)
560 - return bch_err_throw(c, fsck_fix);
560 + return ret == -BCH_ERR_btree_node_read_err_fixable
561 + ? bch_err_throw(c, fsck_fix)
562 + : ret;
561 563
562 564 bool have_retry = false;
563 565 int ret2;
··· 725 723
726 724 static int validate_bset(struct bch_fs *c, struct bch_dev *ca,
727 725 struct btree *b, struct bset *i,
728 - unsigned offset, unsigned sectors, int write,
726 + unsigned offset, int write,
729 727 struct bch_io_failures *failed,
730 728 struct printbuf *err_msg)
731 729 {
732 730 unsigned version = le16_to_cpu(i->version);
733 - unsigned ptr_written = btree_ptr_sectors_written(bkey_i_to_s_c(&b->key));
734 731 struct printbuf buf1 = PRINTBUF;
735 732 struct printbuf buf2 = PRINTBUF;
736 733 int ret = 0;
··· 778 777 c, ca, b, i, NULL,
779 778 btree_node_unsupported_version,
780 779 "BSET_SEPARATE_WHITEOUTS no longer supported");
781 -
782 - if (!write &&
783 - btree_err_on(offset + sectors > (ptr_written ?: btree_sectors(c)),
784 - -BCH_ERR_btree_node_read_err_fixable,
785 - c, ca, b, i, NULL,
786 - bset_past_end_of_btree_node,
787 - "bset past end of btree node (offset %u len %u but written %zu)",
788 - offset, sectors, ptr_written ?: btree_sectors(c)))
789 - i->u64s = 0;
790 780
791 781 btree_err_on(offset && !i->u64s,
792 782 -BCH_ERR_btree_node_read_err_fixable,
··· 1143 1151 "unknown checksum type %llu", BSET_CSUM_TYPE(i));
1144 1152
1145 1153 if (first) {
1154 + sectors = vstruct_sectors(b->data, c->block_bits);
1155 + if (btree_err_on(b->written + sectors > (ptr_written ?: btree_sectors(c)),
1156 + -BCH_ERR_btree_node_read_err_fixable,
1157 + c, ca, b, i, NULL,
1158 + bset_past_end_of_btree_node,
1159 + "bset past end of btree node (offset %u len %u but written %zu)",
1160 + b->written, sectors, ptr_written ?: btree_sectors(c)))
1161 + i->u64s = 0;
1146 1162 if (good_csum_type) {
1147 1163 struct bch_csum csum = csum_vstruct(c, BSET_CSUM_TYPE(i), nonce, b->data);
1148 1164 bool csum_bad = bch2_crc_cmp(b->data->csum, csum);
··· 1178 1178 c, NULL, b, NULL, NULL,
1179 1179 btree_node_unsupported_version,
1180 1180 "btree node does not have NEW_EXTENT_OVERWRITE set");
1181 -
1182 - sectors = vstruct_sectors(b->data, c->block_bits);
1183 1181 } else {
1182 + sectors = vstruct_sectors(bne, c->block_bits);
1183 + if (btree_err_on(b->written + sectors > (ptr_written ?: btree_sectors(c)),
1184 + -BCH_ERR_btree_node_read_err_fixable,
1185 + c, ca, b, i, NULL,
1186 + bset_past_end_of_btree_node,
1187 + "bset past end of btree node (offset %u len %u but written %zu)",
1188 + b->written, sectors, ptr_written ?: btree_sectors(c)))
1189 + i->u64s = 0;
1184 1190 if (good_csum_type) {
1185 1191 struct bch_csum csum = csum_vstruct(c, BSET_CSUM_TYPE(i), nonce, bne);
1186 1192 bool csum_bad = bch2_crc_cmp(bne->csum, csum);
··· 1207 1201 "decrypting btree node: %s", bch2_err_str(ret)))
1208 1202 goto fsck_err;
1209 1203 }
1210 -
1211 - sectors = vstruct_sectors(bne, c->block_bits);
1212 1204 }
1213 1205
1214 1206 b->version_ondisk = min(b->version_ondisk,
1215 1207 le16_to_cpu(i->version));
1216 1208
1217 - ret = validate_bset(c, ca, b, i, b->written, sectors, READ, failed, err_msg);
1209 + ret = validate_bset(c, ca, b, i, b->written, READ, failed, err_msg);
1218 1210 if (ret)
1219 1211 goto fsck_err;
1220 1212
··· 1986 1982 prt_newline(&err);
1987 1983
1988 1984 if (!btree_node_scrub_check(c, scrub->buf, scrub->written, &err)) {
1989 - struct btree_trans *trans = bch2_trans_get(c);
1990 -
1991 - struct btree_iter iter;
1992 - bch2_trans_node_iter_init(trans, &iter, scrub->btree,
1993 - scrub->key.k->k.p, 0, scrub->level - 1, 0);
1994 -
1995 - struct btree *b;
1996 - int ret = lockrestart_do(trans,
1997 - PTR_ERR_OR_ZERO(b = bch2_btree_iter_peek_node(trans, &iter)));
1998 - if (ret)
1999 - goto err;
2000 -
2001 - if (bkey_i_to_btree_ptr_v2(&b->key)->v.seq == scrub->seq) {
2002 - bch_err(c, "error validating btree node during scrub on %s at btree %s",
2003 - scrub->ca->name, err.buf);
2004 -
2005 - ret = bch2_btree_node_rewrite(trans, &iter, b, 0, 0);
2006 - }
2007 - err:
2008 - bch2_trans_iter_exit(trans, &iter);
2009 - bch2_trans_begin(trans);
2010 - bch2_trans_put(trans);
1985 + int ret = bch2_trans_do(c,
1986 + bch2_btree_node_rewrite_key(trans, scrub->btree, scrub->level - 1,
1987 + scrub->key.k, 0));
1988 + if (!bch2_err_matches(ret, ENOENT) &&
1989 + !bch2_err_matches(ret, EROFS))
1990 + bch_err_fn_ratelimited(c, ret);
2011 1991 }
2012 1992
2013 1993 printbuf_exit(&err);
··· 2255 2267 }
2256 2268
2257 2269 static int validate_bset_for_write(struct bch_fs *c, struct btree *b,
2258 - struct bset *i, unsigned sectors)
2270 + struct bset *i)
2259 2271 {
2260 2272 int ret = bch2_bkey_validate(c, bkey_i_to_s_c(&b->key),
2261 2273 (struct bkey_validate_context) {
··· 2270 2282 }
2271 2283
2272 2284 ret = validate_bset_keys(c, b, i, WRITE, NULL, NULL) ?:
2273 - validate_bset(c, NULL, b, i, b->written, sectors, WRITE, NULL, NULL);
2285 + validate_bset(c, NULL, b, i, b->written, WRITE, NULL, NULL);
2274 2286 if (ret) {
2275 2287 bch2_inconsistent_error(c);
2276 2288 dump_stack();
··· 2463 2475
2464 2476 /* if we're going to be encrypting, check metadata validity first: */
2465 2477 if (validate_before_checksum &&
2466 - validate_bset_for_write(c, b, i, sectors_to_write))
2478 + validate_bset_for_write(c, b, i))
2467 2479 goto err;
2468 2480
2469 2481 ret = bset_encrypt(c, i, b->written << 9);
··· 2480 2492
2481 2493 /* if we're not encrypting, check metadata after checksumming: */
2482 2494 if (!validate_before_checksum &&
2483 - validate_bset_for_write(c, b, i, sectors_to_write))
2495 + validate_bset_for_write(c, b, i))
2484 2496 goto err;
2485 2497
2486 2498 /*
+120 -55
fs/bcachefs/btree_iter.c
··· 2076 2076
2077 2077 static noinline
2078 2078 void bch2_btree_trans_peek_prev_updates(struct btree_trans *trans, struct btree_iter *iter,
2079 - struct bkey_s_c *k)
2079 + struct bpos search_key, struct bkey_s_c *k)
2080 2080 {
2081 2081 struct bpos end = path_l(btree_iter_path(trans, iter))->b->data->min_key;
2082 2082
2083 2083 trans_for_each_update(trans, i)
2084 2084 if (!i->key_cache_already_flushed &&
2085 2085 i->btree_id == iter->btree_id &&
2086 - bpos_le(i->k->k.p, iter->pos) &&
2086 + bpos_le(i->k->k.p, search_key) &&
2087 2087 bpos_ge(i->k->k.p, k->k ? k->k->p : end)) {
2088 2088 iter->k = i->k->k;
2089 2089 *k = bkey_i_to_s_c(i->k);
··· 2092 2092
2093 2093 static noinline
2094 2094 void bch2_btree_trans_peek_updates(struct btree_trans *trans, struct btree_iter *iter,
2095 + struct bpos search_key,
2095 2096 struct bkey_s_c *k)
2096 2097 {
2097 2098 struct btree_path *path = btree_iter_path(trans, iter);
··· 2101 2100 trans_for_each_update(trans, i)
2102 2101 if (!i->key_cache_already_flushed &&
2103 2102 i->btree_id == iter->btree_id &&
2104 - bpos_ge(i->k->k.p, path->pos) &&
2103 + bpos_ge(i->k->k.p, search_key) &&
2105 2104 bpos_le(i->k->k.p, k->k ? k->k->p : end)) {
2106 2105 iter->k = i->k->k;
2107 2106 *k = bkey_i_to_s_c(i->k);
··· 2123 2122
2124 2123 static struct bkey_i *bch2_btree_journal_peek(struct btree_trans *trans,
2125 2124 struct btree_iter *iter,
2125 + struct bpos search_pos,
2126 2126 struct bpos end_pos)
2127 2127 {
2128 2128 struct btree_path *path = btree_iter_path(trans, iter);
2129 2129
2130 2130 return bch2_journal_keys_peek_max(trans->c, iter->btree_id,
2131 2131 path->level,
2132 - path->pos,
2132 + search_pos,
2133 2133 end_pos,
2134 2134 &iter->journal_idx);
2135 2135 }
··· 2140 2138 struct btree_iter *iter)
2141 2139 {
2142 2140 struct btree_path *path = btree_iter_path(trans, iter);
2143 - struct bkey_i *k = bch2_btree_journal_peek(trans, iter, path->pos);
2141 + struct bkey_i *k = bch2_btree_journal_peek(trans, iter, path->pos, path->pos);
2144 2142
2145 2143 if (k) {
2146 2144 iter->k = k->k;
··· 2153 2151 static noinline
2154 2152 void btree_trans_peek_journal(struct btree_trans *trans,
2155 2153 struct btree_iter *iter,
2154 + struct bpos search_key,
2156 2155 struct bkey_s_c *k)
2157 2156 {
2158 2157 struct btree_path *path = btree_iter_path(trans, iter);
2159 2158 struct bkey_i *next_journal =
2160 - bch2_btree_journal_peek(trans, iter,
2159 + bch2_btree_journal_peek(trans, iter, search_key,
2161 2160 k->k ? k->k->p : path_l(path)->b->key.k.p);
2162 2161 if (next_journal) {
2163 2162 iter->k = next_journal->k;
··· 2168 2165
2169 2166 static struct bkey_i *bch2_btree_journal_peek_prev(struct btree_trans *trans,
2170 2167 struct btree_iter *iter,
2168 + struct bpos search_key,
2171 2169 struct bpos end_pos)
2172 2170 {
2173 2171 struct btree_path *path = btree_iter_path(trans, iter);
2174 2172
2175 2173 return bch2_journal_keys_peek_prev_min(trans->c, iter->btree_id,
2176 2174 path->level,
2177 - path->pos,
2175 + search_key,
2178 2176 end_pos,
2179 2177 &iter->journal_idx);
2180 2178 }
··· 2183 2179 static noinline
2184 2180 void btree_trans_peek_prev_journal(struct btree_trans *trans,
2185 2181 struct btree_iter *iter,
2182 + struct bpos search_key,
2186 2183 struct bkey_s_c *k)
2187 2184 {
2188 2185 struct btree_path *path = btree_iter_path(trans, iter);
2189 2186 struct bkey_i *next_journal =
2190 - bch2_btree_journal_peek_prev(trans, iter,
2187 + bch2_btree_journal_peek_prev(trans, iter, search_key,
2191 2188 k->k ? k->k->p : path_l(path)->b->key.k.p);
2192 2189
2193 2190 if (next_journal) {
··· 2297 2292 }
2298 2293
2299 2294 if (unlikely(iter->flags & BTREE_ITER_with_journal))
2300 - btree_trans_peek_journal(trans, iter, &k);
2295 + btree_trans_peek_journal(trans, iter, search_key, &k);
2301 2296
2302 2297 if (unlikely((iter->flags & BTREE_ITER_with_updates) &&
2303 2298 trans->nr_updates))
2304 - bch2_btree_trans_peek_updates(trans, iter, &k);
2299 + bch2_btree_trans_peek_updates(trans, iter, search_key, &k);
2305 2300
2306 2301 if (k.k && bkey_deleted(k.k)) {
2307 2302 /*
··· 2331 2326 }
2332 2327
2333 2328 bch2_btree_iter_verify(trans, iter);
2329 +
2330 + if (trace___btree_iter_peek_enabled()) {
2331 + CLASS(printbuf, buf)();
2332 +
2333 + int ret = bkey_err(k);
2334 + if (ret)
2335 + prt_str(&buf, bch2_err_str(ret));
2336 + else if (k.k)
2337 + bch2_bkey_val_to_text(&buf, trans->c, k);
2338 + else
2339 + prt_str(&buf, "(null)");
2340 + trace___btree_iter_peek(trans->c, buf.buf);
2341 + }
2342 +
2334 2343 return k;
2335 2344 }
··· 2503 2484
2504 2485 bch2_btree_iter_verify_entry_exit(iter);
2505 2486
2487 + if (trace_btree_iter_peek_max_enabled()) {
2488 + CLASS(printbuf, buf)();
2489 +
2490 + int ret = bkey_err(k);
2491 + if (ret)
2492 + prt_str(&buf, bch2_err_str(ret));
2493 + else if (k.k)
2494 + bch2_bkey_val_to_text(&buf, trans->c, k);
2495 + else
2496 + prt_str(&buf, "(null)");
2497 + trace_btree_iter_peek_max(trans->c, buf.buf);
2498 + }
2499 +
2506 2500 return k;
2507 2501 end:
2508 2502 bch2_btree_iter_set_pos(trans, iter, end);
··· 2589 2557 }
2590 2558
2591 2559 if (unlikely(iter->flags & BTREE_ITER_with_journal))
2592 - btree_trans_peek_prev_journal(trans, iter, &k);
2560 + btree_trans_peek_prev_journal(trans, iter, search_key, &k);
2593 2561
2594 2562 if (unlikely((iter->flags & BTREE_ITER_with_updates) &&
2595 2563 trans->nr_updates))
2596 - bch2_btree_trans_peek_prev_updates(trans, iter, &k);
2564 + bch2_btree_trans_peek_prev_updates(trans, iter, search_key, &k);
2597 2565
2598 2566 if (likely(k.k && !bkey_deleted(k.k))) {
2599 2567 break;
··· 2756 2724
2757 2725 bch2_btree_iter_verify_entry_exit(iter);
2758 2726 bch2_btree_iter_verify(trans, iter);
2727 +
2728 + if (trace_btree_iter_peek_prev_min_enabled()) {
2729 + CLASS(printbuf, buf)();
2730 +
2731 + int ret = bkey_err(k);
2732 + if (ret)
2733 + prt_str(&buf, bch2_err_str(ret));
2734 + else if (k.k)
2735 + bch2_bkey_val_to_text(&buf, trans->c, k);
2736 + else
2737 + prt_str(&buf, "(null)");
2738 + trace_btree_iter_peek_prev_min(trans->c, buf.buf);
2739 + }
2759 2740 return k;
2760 2741 end:
2761 2742 bch2_btree_iter_set_pos(trans, iter, end);
··· 2812 2767 /* extents can't span inode numbers: */
2813 2768 if ((iter->flags & BTREE_ITER_is_extents) &&
2814 2769 unlikely(iter->pos.offset == KEY_OFFSET_MAX)) {
2815 - if (iter->pos.inode == KEY_INODE_MAX)
2816 - return bkey_s_c_null;
2770 + if (iter->pos.inode == KEY_INODE_MAX) {
2771 + k = bkey_s_c_null;
2772 + goto out2;
2773 + }
2817 2774
2818 2775 bch2_btree_iter_set_pos(trans, iter, bpos_nosnap_successor(iter->pos));
2819 2776 }
··· 2832 2785 }
2833 2786
2834 2787 struct btree_path *path = btree_iter_path(trans, iter);
2835 - if (unlikely(!btree_path_node(path, path->level)))
2836 - return bkey_s_c_null;
2788 + if (unlikely(!btree_path_node(path, path->level))) {
2789 + k = bkey_s_c_null;
2790 + goto out2;
2791 + }
2837 2792
2838 2793 btree_path_set_should_be_locked(trans, path);
2839 2794
··· 2928 2879 bch2_btree_iter_verify(trans, iter);
2929 2880 ret = bch2_btree_iter_verify_ret(trans, iter, k);
2930 2881 if (unlikely(ret))
2931 - return bkey_s_c_err(ret);
2882 + k = bkey_s_c_err(ret);
2883 + out2:
2884 + if (trace_btree_iter_peek_slot_enabled()) {
2885 + CLASS(printbuf, buf)();
2886 +
2887 + int ret = bkey_err(k);
2888 + if (ret)
2889 + prt_str(&buf, bch2_err_str(ret));
2890 + else if (k.k)
2891 + bch2_bkey_val_to_text(&buf, trans->c, k);
2892 + else
2893 + prt_str(&buf, "(null)");
2894 + trace_btree_iter_peek_slot(trans->c, buf.buf);
2895 + }
2932 2896
2933 2897 return k;
2934 2898 }
··· 3194 3132 if (WARN_ON_ONCE(new_bytes > BTREE_TRANS_MEM_MAX)) {
3195 3133 #ifdef CONFIG_BCACHEFS_TRANS_KMALLOC_TRACE
3196 3134 struct printbuf buf = PRINTBUF;
3135 + bch2_log_msg_start(c, &buf);
3136 + prt_printf(&buf, "bump allocator exceeded BTREE_TRANS_MEM_MAX (%u)\n",
3137 + BTREE_TRANS_MEM_MAX);
3138 +
3197 3139 bch2_trans_kmalloc_trace_to_text(&buf, &trans->trans_kmalloc_trace);
3198 3140 bch2_print_str(c, KERN_ERR, buf.buf);
3199 3141 printbuf_exit(&buf);
··· 3225 3159 mutex_unlock(&s->lock);
3226 3160 }
3227 3161
3228 - if (trans->used_mempool) {
3229 - if (trans->mem_bytes >= new_bytes)
3230 - goto out_change_top;
3231 -
3232 - /* No more space from mempool item, need malloc new one */
3233 - new_mem = kmalloc(new_bytes, GFP_NOWAIT|__GFP_NOWARN);
3234 - if (unlikely(!new_mem)) {
3235 - bch2_trans_unlock(trans);
3236 -
3237 - new_mem = kmalloc(new_bytes, GFP_KERNEL);
3238 - if (!new_mem)
3239 - return ERR_PTR(-BCH_ERR_ENOMEM_trans_kmalloc);
3240 -
3241 - ret = bch2_trans_relock(trans);
3242 - if (ret) {
3243 - kfree(new_mem);
3244 - return ERR_PTR(ret);
3245 - }
3246 - }
3247 - memcpy(new_mem, trans->mem, trans->mem_top);
3248 - trans->used_mempool = false;
3249 - mempool_free(trans->mem, &c->btree_trans_mem_pool);
3250 - goto out_new_mem;
3162 + if (trans->used_mempool || new_bytes > BTREE_TRANS_MEM_MAX) {
3163 + EBUG_ON(trans->mem_bytes >= new_bytes);
3164 + return ERR_PTR(-BCH_ERR_ENOMEM_trans_kmalloc);
3251 3165 }
3252 3166
3253 - new_mem = krealloc(trans->mem, new_bytes, GFP_NOWAIT|__GFP_NOWARN);
3167 + if (old_bytes) {
3168 + trans->realloc_bytes_required = new_bytes;
3169 + trace_and_count(c, trans_restart_mem_realloced, trans, _RET_IP_, new_bytes);
3170 + return ERR_PTR(btree_trans_restart_ip(trans,
3171 + BCH_ERR_transaction_restart_mem_realloced, _RET_IP_));
3172 + }
3173 +
3174 + EBUG_ON(trans->mem);
3175 +
3176 + new_mem = kmalloc(new_bytes, GFP_NOWAIT|__GFP_NOWARN);
3254 3177 if (unlikely(!new_mem)) {
3255 3178 bch2_trans_unlock(trans);
3256 3179
3257 - new_mem = krealloc(trans->mem, new_bytes, GFP_KERNEL);
3180 + new_mem = kmalloc(new_bytes, GFP_KERNEL);
3258 3181 if (!new_mem && new_bytes <= BTREE_TRANS_MEM_MAX) {
3259 3182 new_mem = mempool_alloc(&c->btree_trans_mem_pool, GFP_KERNEL);
3260 3183 new_bytes = BTREE_TRANS_MEM_MAX;
3261 - memcpy(new_mem, trans->mem, trans->mem_top);
3262 3184 trans->used_mempool = true;
3263 - kfree(trans->mem);
3264 3185 }
3265 3186
3266 - if (!new_mem)
3267 - return ERR_PTR(-BCH_ERR_ENOMEM_trans_kmalloc);
3187 + EBUG_ON(!new_mem);
3268 3188
3269 3189 trans->mem = new_mem;
3270 3190 trans->mem_bytes = new_bytes;
··· 3259 3207 if (ret)
3260 3208 return ERR_PTR(ret);
3261 3209 }
3262 - out_new_mem:
3210 +
3263 3211 trans->mem = new_mem;
3264 3212 trans->mem_bytes = new_bytes;
3265 3213
3266 - if (old_bytes) {
3267 - trace_and_count(c, trans_restart_mem_realloced, trans, _RET_IP_, new_bytes);
3268 - return ERR_PTR(btree_trans_restart_ip(trans,
3269 - BCH_ERR_transaction_restart_mem_realloced, _RET_IP_));
3270 - }
3271 - out_change_top:
3272 - bch2_trans_kmalloc_trace(trans, size, ip);
3273 3214
3274 3215 p = trans->mem + trans->mem_top;
3275 3216 trans->mem_top += size;
··· 3322 3278
3323 3279 trans->restart_count++;
3324 3280 trans->mem_top = 0;
3281 +
3282 + if (trans->restarted == BCH_ERR_transaction_restart_mem_realloced) {
3283 + EBUG_ON(!trans->mem || !trans->mem_bytes);
3284 + unsigned new_bytes = trans->realloc_bytes_required;
3285 + void *new_mem = krealloc(trans->mem, new_bytes, GFP_NOWAIT|__GFP_NOWARN);
3286 + if (unlikely(!new_mem)) {
3287 + bch2_trans_unlock(trans);
3288 + new_mem = krealloc(trans->mem, new_bytes, GFP_KERNEL);
3289 +
3290 + EBUG_ON(new_bytes > BTREE_TRANS_MEM_MAX);
3291 +
3292 + if (!new_mem) {
3293 + new_mem = mempool_alloc(&trans->c->btree_trans_mem_pool, GFP_KERNEL);
3294 + new_bytes = BTREE_TRANS_MEM_MAX;
3295 + trans->used_mempool = true;
3296 + kfree(trans->mem);
3297 + }
3298 + }
3299 + trans->mem = new_mem;
3300 + trans->mem_bytes = new_bytes;
3301 + }
3325 3302
3326 3303 trans_for_each_path(trans, path, i) {
3327 3304 path->should_be_locked = false;
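The btree_iter.c change above stops growing the transaction's bump allocator in place: once pointers into `trans->mem` have been handed out, a realloc would leave them dangling, so the allocator now records `realloc_bytes_required` and restarts the transaction, and the buffer is actually regrown in `bch2_trans_begin()` when `mem_top` is reset and no pointers are live. A userspace sketch of that strategy (illustrative only; all names here are stand-ins, not the kernel's types):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical bump allocator that only grows at restart time */
struct bump {
	char *mem;
	size_t top, bytes;
	size_t realloc_bytes_required;
};

static void *bump_alloc(struct bump *b, size_t size)
{
	if (b->top + size > b->bytes) {
		/* record what we need; caller must restart */
		b->realloc_bytes_required = b->top + size;
		return NULL;
	}
	void *p = b->mem + b->top;
	b->top += size;
	return p;
}

static void bump_restart(struct bump *b)
{
	b->top = 0;	/* all outstanding pointers are now dead */
	if (b->realloc_bytes_required > b->bytes) {
		b->mem = realloc(b->mem, b->realloc_bytes_required);
		b->bytes = b->realloc_bytes_required;
	}
	b->realloc_bytes_required = 0;
}
```

The design choice is the same as in the diff: the allocation fast path never moves the buffer, so previously returned pointers stay valid until the explicit restart point.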
+53 -25
fs/bcachefs/btree_journal_iter.c
··· 137 137 struct journal_key *k;
138 138
139 139 BUG_ON(*idx > keys->nr);
140 +
141 + if (!keys->nr)
142 + return NULL;
140 143 search:
141 144 if (!*idx)
142 145 *idx = __bch2_journal_key_search(keys, btree_id, level, pos);
143 146
144 - while (*idx &&
145 - __journal_key_cmp(btree_id, level, end_pos, idx_to_key(keys, *idx - 1)) <= 0) {
147 + while (*idx < keys->nr &&
148 + __journal_key_cmp(btree_id, level, end_pos, idx_to_key(keys, *idx)) >= 0) {
146 149 (*idx)++;
147 150 iters++;
148 151 if (iters == 10) {
··· 154 151 }
155 152 }
156 153
154 + if (*idx == keys->nr)
155 + --(*idx);
156 +
157 157 struct bkey_i *ret = NULL;
158 158 rcu_read_lock(); /* for overwritten_ranges */
159 159
160 - while ((k = *idx < keys->nr ? idx_to_key(keys, *idx) : NULL)) {
160 + while (true) {
161 + k = idx_to_key(keys, *idx);
161 162 if (__journal_key_cmp(btree_id, level, end_pos, k) > 0)
162 163 break;
163 164
164 165 if (k->overwritten) {
165 166 if (k->overwritten_range)
166 - *idx = rcu_dereference(k->overwritten_range)->start - 1;
167 - else
168 - *idx -= 1;
167 + *idx = rcu_dereference(k->overwritten_range)->start;
168 + if (!*idx)
169 + break;
170 + --(*idx);
169 171 continue;
170 172 }
171 173
··· 179 171 break;
180 172 }
181 173
174 + if (!*idx)
175 + break;
182 176 --(*idx);
183 177 iters++;
184 178 if (iters == 10) {
··· 651 641 {
652 642 const struct journal_key *l = _l;
653 643 const struct journal_key *r = _r;
644 + int rewind = l->rewind && r->rewind ? -1 : 1;
654 645
655 646 return journal_key_cmp(l, r) ?:
656 - cmp_int(l->journal_seq, r->journal_seq) ?:
657 - cmp_int(l->journal_offset, r->journal_offset);
647 + ((cmp_int(l->journal_seq, r->journal_seq) ?:
648 + cmp_int(l->journal_offset, r->journal_offset)) * rewind);
658 649 }
659 650
660 651 void bch2_journal_keys_put(struct bch_fs *c)
··· 724 713 struct journal_keys *keys = &c->journal_keys;
725 714 size_t nr_read = 0;
726 715
716 + u64 rewind_seq = c->opts.journal_rewind ?: U64_MAX;
717 +
727 718 genradix_for_each(&c->journal_entries, iter, _i) {
728 719 i = *_i;
729 720
··· 734 721
735 722 cond_resched();
736 723
737 - for_each_jset_key(k, entry, &i->j) {
738 - struct journal_key n = (struct journal_key) {
739 - .btree_id = entry->btree_id,
740 - .level = entry->level,
741 - .k = k,
742 - .journal_seq = le64_to_cpu(i->j.seq),
743 - .journal_offset = k->_data - i->j._data,
744 - };
724 + vstruct_for_each(&i->j, entry) {
725 + bool rewind = !entry->level &&
726 + !btree_id_is_alloc(entry->btree_id) &&
727 + le64_to_cpu(i->j.seq) >= rewind_seq;
745 728
746 - if (darray_push(keys, n)) {
747 - __journal_keys_sort(keys);
729 + if (entry->type != (rewind
730 + ? BCH_JSET_ENTRY_overwrite
731 + : BCH_JSET_ENTRY_btree_keys))
732 + continue;
748 733
749 - if (keys->nr * 8 > keys->size * 7) {
750 - bch_err(c, "Too many journal keys for slowpath; have %zu compacted, buf size %zu, processed %zu keys at seq %llu",
751 - keys->nr, keys->size, nr_read, le64_to_cpu(i->j.seq));
752 - return bch_err_throw(c, ENOMEM_journal_keys_sort);
734 + if (!rewind && le64_to_cpu(i->j.seq) < c->journal_replay_seq_start)
735 + continue;
736 +
737 + jset_entry_for_each_key(entry, k) {
738 + struct journal_key n = (struct journal_key) {
739 + .btree_id = entry->btree_id,
740 + .level = entry->level,
741 + .rewind = rewind,
742 + .k = k,
743 + .journal_seq = le64_to_cpu(i->j.seq),
744 + .journal_offset = k->_data - i->j._data,
745 + };
746 +
747 + if (darray_push(keys, n)) {
748 + __journal_keys_sort(keys);
749 +
750 + if (keys->nr * 8 > keys->size * 7) {
751 + bch_err(c, "Too many journal keys for slowpath; have %zu compacted, buf size %zu, processed %zu keys at seq %llu",
752 + keys->nr, keys->size, nr_read, le64_to_cpu(i->j.seq));
753 + return bch_err_throw(c, ENOMEM_journal_keys_sort);
754 + }
755 +
756 + BUG_ON(darray_push(keys, n));
753 757 }
754 758
755 - nr_read++;
759 + nr_read++;
756 760 }
757 -
758 -
759 761 }
760 762 }
761 763
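The comparator change in btree_journal_iter.c above inverts the sequence-number ordering when both keys come from rewound entries, so that during a journal rewind the oldest version of a key sorts last-wins the other way around. A standalone sketch with simplified types (illustrative; `struct jkey` and `journal_sort_key_cmp_sketch` are stand-ins for the kernel's `journal_key` and `journal_sort_key_cmp`):

```c
#include <assert.h>
#include <stdlib.h>

struct jkey {
	int btree_id;
	int rewind;	/* key taken from a rewound (overwrite) entry */
	long seq;	/* journal sequence number */
};

static int cmp_long(long l, long r)
{
	return (l > r) - (l < r);
}

/* same-btree keys: normal order by seq, reversed when both are rewind keys */
static int journal_sort_key_cmp_sketch(const void *_l, const void *_r)
{
	const struct jkey *l = _l, *r = _r;
	int rewind = l->rewind && r->rewind ? -1 : 1;
	int c = cmp_long(l->btree_id, r->btree_id);

	return c ? c : cmp_long(l->seq, r->seq) * rewind;
}
```

Because later replay steps keep the first key they see for a position, reversing the sort for rewind keys effectively makes the older journal entry win.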
+3 -2
fs/bcachefs/btree_journal_iter_types.h
··· 11 11 u32 journal_offset;
12 12 enum btree_id btree_id:8;
13 13 unsigned level:8;
14 - bool allocated;
15 - bool overwritten;
14 + bool allocated:1;
15 + bool overwritten:1;
16 + bool rewind:1;
16 17 struct journal_key_range_overwritten __rcu *
17 18 overwritten_range;
18 19 struct bkey_i *k;
+6 -6
fs/bcachefs/btree_locking.c
··· 771 771 }
772 772
773 773 static noinline __cold void bch2_trans_relock_fail(struct btree_trans *trans, struct btree_path *path,
774 - struct get_locks_fail *f, bool trace)
774 + struct get_locks_fail *f, bool trace, ulong ip)
775 775 {
776 776 if (!trace)
777 777 goto out;
··· 796 796 prt_printf(&buf, " total locked %u.%u.%u", c.n[0], c.n[1], c.n[2]);
797 797 }
798 798
799 - trace_trans_restart_relock(trans, _RET_IP_, buf.buf);
799 + trace_trans_restart_relock(trans, ip, buf.buf);
800 800 printbuf_exit(&buf);
801 801 }
802 802
··· 806 806 bch2_trans_verify_locks(trans);
807 807 }
808 808
809 - static inline int __bch2_trans_relock(struct btree_trans *trans, bool trace)
809 + static inline int __bch2_trans_relock(struct btree_trans *trans, bool trace, ulong ip)
810 810 {
811 811 bch2_trans_verify_locks(trans);
812 812
··· 825 825 if (path->should_be_locked &&
826 826 (ret = btree_path_get_locks(trans, path, false, &f,
827 827 BCH_ERR_transaction_restart_relock))) {
828 - bch2_trans_relock_fail(trans, path, &f, trace);
828 + bch2_trans_relock_fail(trans, path, &f, trace, ip);
829 829 return ret;
830 830 }
831 831 }
··· 838 838
839 839 int bch2_trans_relock(struct btree_trans *trans)
840 840 {
841 - return __bch2_trans_relock(trans, true);
841 + return __bch2_trans_relock(trans, true, _RET_IP_);
842 842 }
843 843
844 844 int bch2_trans_relock_notrace(struct btree_trans *trans)
845 845 {
846 - return __bch2_trans_relock(trans, false);
846 + return __bch2_trans_relock(trans, false, _RET_IP_);
847 847 }
848 848
849 849 void bch2_trans_unlock(struct btree_trans *trans)
+5 -1
fs/bcachefs/btree_node_scan.c
··· 521 521 return false;
522 522 }
523 523
524 - bool bch2_btree_has_scanned_nodes(struct bch_fs *c, enum btree_id btree)
524 + int bch2_btree_has_scanned_nodes(struct bch_fs *c, enum btree_id btree)
525 525 {
526 + int ret = bch2_run_print_explicit_recovery_pass(c, BCH_RECOVERY_PASS_scan_for_btree_nodes);
527 + if (ret)
528 + return ret;
529 +
526 530 struct found_btree_node search = {
527 531 .btree_id = btree,
528 532 .level = 0,
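The btree_node_scan.c change above turns a `bool` API into an `int` so that a failure of the scan pass itself can be reported distinctly from a clean "nothing found" answer, following the usual kernel tri-state convention. A minimal sketch of the convention the new callers (e.g. `bch2_check_root()` in the btree_gc.c hunk) now rely on (`has_scanned_nodes_sketch` is a hypothetical stand-in):

```c
#include <assert.h>

/* tri-state return: <0 = error, 0 = "no", >0 = "yes" */
static int has_scanned_nodes_sketch(int pass_err, int nr_found)
{
	if (pass_err < 0)
		return pass_err;	/* e.g. a negative errno from the scan pass */
	return nr_found > 0;
}
```

Callers then write `ret = f(...); if (ret < 0) goto err; if (!ret) ...`, as the updated `bch2_check_root()` does.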
+1 -1
fs/bcachefs/btree_node_scan.h
··· 4 4
5 5 int bch2_scan_for_btree_nodes(struct bch_fs *);
6 6 bool bch2_btree_node_is_stale(struct bch_fs *, struct btree *);
7 - bool bch2_btree_has_scanned_nodes(struct bch_fs *, enum btree_id);
7 + int bch2_btree_has_scanned_nodes(struct bch_fs *, enum btree_id);
8 8 int bch2_get_scanned_nodes(struct bch_fs *, enum btree_id, unsigned, struct bpos, struct bpos);
9 9 void bch2_find_btree_nodes_exit(struct find_btree_nodes *);
10 10
+12 -6
fs/bcachefs/btree_trans_commit.c
··· 595 595 int ret = 0;
596 596
597 597 bch2_trans_verify_not_unlocked_or_in_restart(trans);
598 -
598 + #if 0
599 + /* todo: bring back dynamic fault injection */
599 600 if (race_fault()) {
600 601 trace_and_count(c, trans_restart_fault_inject, trans, trace_ip);
601 602 return btree_trans_restart(trans, BCH_ERR_transaction_restart_fault_inject);
602 603 }
603 -
604 + #endif
604 605 /*
605 606 * Check if the insert will fit in the leaf node with the write lock
606 607 * held, otherwise another thread could write the node changing the
··· 757 756 memcpy_u64s_small(journal_res_entry(&c->journal, &trans->journal_res),
758 757 btree_trans_journal_entries_start(trans),
759 758 trans->journal_entries.u64s);
759 +
760 + EBUG_ON(trans->journal_res.u64s < trans->journal_entries.u64s);
760 761
761 762 trans->journal_res.offset += trans->journal_entries.u64s;
762 763 trans->journal_res.u64s -= trans->journal_entries.u64s;
··· 1006 1003 {
1007 1004 struct btree_insert_entry *errored_at = NULL;
1008 1005 struct bch_fs *c = trans->c;
1006 + unsigned journal_u64s = 0;
1009 1007 int ret = 0;
1010 1008
1011 1009 bch2_trans_verify_not_unlocked_or_in_restart(trans);
··· 1035 1031
1036 1032 EBUG_ON(test_bit(BCH_FS_clean_shutdown, &c->flags));
1037 1033
1038 - trans->journal_u64s = trans->journal_entries.u64s + jset_u64s(trans->accounting.u64s);
1034 + journal_u64s = jset_u64s(trans->accounting.u64s);
1039 1035 trans->journal_transaction_names = READ_ONCE(c->opts.journal_transaction_names);
1040 1036 if (trans->journal_transaction_names)
1041 - trans->journal_u64s += jset_u64s(JSET_ENTRY_LOG_U64s);
1037 + journal_u64s += jset_u64s(JSET_ENTRY_LOG_U64s);
1042 1038
1043 1039 trans_for_each_update(trans, i) {
1044 1040 struct btree_path *path = trans->paths + i->path;
··· 1058 1054 continue;
1059 1055
1060 1056 /* we're going to journal the key being updated: */
1061 - trans->journal_u64s += jset_u64s(i->k->k.u64s);
1057 + journal_u64s += jset_u64s(i->k->k.u64s);
1062 1058
1063 1059 /* and we're also going to log the overwrite: */
1064 1060 if (trans->journal_transaction_names)
1065 - trans->journal_u64s += jset_u64s(i->old_k.u64s);
1061 + journal_u64s += jset_u64s(i->old_k.u64s);
1066 1062
1067 1063 if (trans->extra_disk_res) {
··· 1079 1075 if (likely(!(flags & BCH_TRANS_COMMIT_no_journal_res)))
1080 1076 memset(&trans->journal_res, 0, sizeof(trans->journal_res));
1081 1077 memset(&trans->fs_usage_delta, 0, sizeof(trans->fs_usage_delta));
1078 +
1079 + trans->journal_u64s = journal_u64s + trans->journal_entries.u64s;
1082 1080
1083 1081 ret = do_bch2_trans_commit(trans, flags, &errored_at, _RET_IP_);
1084 1082
+1
fs/bcachefs/btree_types.h
··· 497 497 void *mem; 498 498 unsigned mem_top; 499 499 unsigned mem_bytes; 500 + unsigned realloc_bytes_required; 500 501 #ifdef CONFIG_BCACHEFS_TRANS_KMALLOC_TRACE 501 502 darray_trans_kmalloc_trace trans_kmalloc_trace; 502 503 #endif
+11 -5
fs/bcachefs/btree_update.c
··· 549 549 unsigned u64s) 550 550 { 551 551 unsigned new_top = buf->u64s + u64s; 552 - unsigned old_size = buf->size; 552 + unsigned new_size = buf->size; 553 553 554 - if (new_top > buf->size) 555 - buf->size = roundup_pow_of_two(new_top); 554 + BUG_ON(roundup_pow_of_two(new_top) > U16_MAX); 556 555 557 - void *n = bch2_trans_kmalloc_nomemzero(trans, buf->size * sizeof(u64)); 556 + if (new_top > new_size) 557 + new_size = roundup_pow_of_two(new_top); 558 + 559 + void *n = bch2_trans_kmalloc_nomemzero(trans, new_size * sizeof(u64)); 558 560 if (IS_ERR(n)) 559 561 return n; 562 + 563 + unsigned offset = (u64 *) n - (u64 *) trans->mem; 564 + BUG_ON(offset > U16_MAX); 560 565 561 566 if (buf->u64s) 562 567 memcpy(n, 563 568 btree_trans_subbuf_base(trans, buf), 564 - old_size * sizeof(u64)); 569 + buf->size * sizeof(u64)); 565 570 buf->base = (u64 *) n - (u64 *) trans->mem; 571 + buf->size = new_size; 566 572 567 573 void *p = btree_trans_subbuf_top(trans, buf); 568 574 buf->u64s = new_top;
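The resize logic in the `btree_update.c` hunk above now checks the rounded-up size against `U16_MAX` before growing, since the subbuf fields are 16-bit. A standalone sketch of that grow policy, with a portable `roundup_pow_of_two()` standing in for the kernel helper:

```c
#include <assert.h>
#include <stdint.h>

/* Portable stand-in for the kernel's roundup_pow_of_two(). */
static uint32_t roundup_pow_of_two(uint32_t n)
{
	uint32_t p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/* Grow policy from the patched subbuf alloc: keep the current size
 * unless the new top spills past it, then round the new top up to a
 * power of two.  Sizes live in 16-bit fields, hence the bound check
 * before growing rather than after. */
static uint32_t subbuf_new_size(uint32_t cur_size, uint32_t new_top)
{
	assert(roundup_pow_of_two(new_top) <= UINT16_MAX);
	return new_top > cur_size ? roundup_pow_of_two(new_top) : cur_size;
}
```

Note the patch also fixed the memcpy to use the old `buf->size` for the copy length and only commit the new size afterwards.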
+2 -3
fs/bcachefs/btree_update.h
··· 170 170 171 171 int bch2_btree_insert_clone_trans(struct btree_trans *, enum btree_id, struct bkey_i *); 172 172 173 - int bch2_btree_write_buffer_insert_err(struct btree_trans *, 174 - enum btree_id, struct bkey_i *); 173 + int bch2_btree_write_buffer_insert_err(struct bch_fs *, enum btree_id, struct bkey_i *); 175 174 176 175 static inline int __must_check bch2_trans_update_buffered(struct btree_trans *trans, 177 176 enum btree_id btree, ··· 181 182 EBUG_ON(k->k.u64s > BTREE_WRITE_BUFERED_U64s_MAX); 182 183 183 184 if (unlikely(!btree_type_uses_write_buffer(btree))) { 184 - int ret = bch2_btree_write_buffer_insert_err(trans, btree, k); 185 + int ret = bch2_btree_write_buffer_insert_err(trans->c, btree, k); 185 186 dump_stack(); 186 187 return ret; 187 188 }
+8 -8
fs/bcachefs/btree_update_interior.c
··· 1287 1287 1288 1288 do { 1289 1289 ret = bch2_btree_reserve_get(trans, as, nr_nodes, target, flags, &cl); 1290 - 1290 + if (!bch2_err_matches(ret, BCH_ERR_operation_blocked)) 1291 + break; 1291 1292 bch2_trans_unlock(trans); 1292 1293 bch2_wait_on_allocator(c, &cl); 1293 - } while (bch2_err_matches(ret, BCH_ERR_operation_blocked)); 1294 + } while (1); 1294 1295 } 1295 1296 1296 1297 if (ret) { ··· 2294 2293 goto out; 2295 2294 } 2296 2295 2297 - static int bch2_btree_node_rewrite_key(struct btree_trans *trans, 2298 - enum btree_id btree, unsigned level, 2299 - struct bkey_i *k, unsigned flags) 2296 + int bch2_btree_node_rewrite_key(struct btree_trans *trans, 2297 + enum btree_id btree, unsigned level, 2298 + struct bkey_i *k, unsigned flags) 2300 2299 { 2301 2300 struct btree_iter iter; 2302 2301 bch2_trans_node_iter_init(trans, &iter, ··· 2368 2367 2369 2368 int ret = bch2_trans_do(c, bch2_btree_node_rewrite_key(trans, 2370 2369 a->btree_id, a->level, a->key.k, 0)); 2371 - if (ret != -ENOENT && 2372 - !bch2_err_matches(ret, EROFS) && 2373 - ret != -BCH_ERR_journal_shutdown) 2370 + if (!bch2_err_matches(ret, ENOENT) && 2371 + !bch2_err_matches(ret, EROFS)) 2374 2372 bch_err_fn_ratelimited(c, ret); 2375 2373 2376 2374 spin_lock(&c->btree_node_rewrites_lock);
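The reserve loop in the hunk above was restructured so the error is tested before unlocking and waiting on the allocator, rather than in the `while` condition after an unconditional wait. The shape of that loop, with toy stand-ins for `bch2_btree_reserve_get()` and the wait:

```c
#include <assert.h>

#define ERR_BLOCKED	-100

static int attempts_left;

/* Toy allocation that blocks a few times before succeeding; stands in
 * for bch2_btree_reserve_get() in this sketch. */
static int reserve_get(void)
{
	return attempts_left-- > 0 ? ERR_BLOCKED : 0;
}

/* Loop shape from the patched code: test the error *before* sleeping,
 * so success or a hard failure exits without an extra unlock/wait
 * round trip.  *waits counts how many times we'd have slept. */
static int reserve_get_blocking(int blocked_rounds, int *waits)
{
	int ret;

	attempts_left = blocked_rounds;
	*waits = 0;
	do {
		ret = reserve_get();
		if (ret != ERR_BLOCKED)
			break;
		(*waits)++;	/* trans_unlock + wait_on_allocator here */
	} while (1);
	return ret;
}
```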
+3
fs/bcachefs/btree_update_interior.h
··· 176 176 177 177 int bch2_btree_node_rewrite(struct btree_trans *, struct btree_iter *, 178 178 struct btree *, unsigned, unsigned); 179 + int bch2_btree_node_rewrite_key(struct btree_trans *, 180 + enum btree_id, unsigned, 181 + struct bkey_i *, unsigned); 179 182 int bch2_btree_node_rewrite_pos(struct btree_trans *, 180 183 enum btree_id, unsigned, 181 184 struct bpos, unsigned, unsigned);
+5 -3
fs/bcachefs/btree_write_buffer.c
··· 267 267 BUG_ON(wb->sorted.size < wb->flushing.keys.nr); 268 268 } 269 269 270 - int bch2_btree_write_buffer_insert_err(struct btree_trans *trans, 270 + int bch2_btree_write_buffer_insert_err(struct bch_fs *c, 271 271 enum btree_id btree, struct bkey_i *k) 272 272 { 273 - struct bch_fs *c = trans->c; 274 273 struct printbuf buf = PRINTBUF; 275 274 276 275 prt_printf(&buf, "attempting to do write buffer update on non wb btree="); ··· 331 332 struct btree_write_buffered_key *k = &wb->flushing.keys.data[i->idx]; 332 333 333 334 if (unlikely(!btree_type_uses_write_buffer(k->btree))) { 334 - ret = bch2_btree_write_buffer_insert_err(trans, k->btree, &k->k); 335 + ret = bch2_btree_write_buffer_insert_err(trans->c, k->btree, &k->k); 335 336 goto err; 336 337 } 337 338 ··· 675 676 goto err; 676 677 677 678 bch2_bkey_buf_copy(last_flushed, c, tmp.k); 679 + 680 + /* can we avoid the unconditional restart? */ 681 + trace_and_count(c, trans_restart_write_buffer_flush, trans, _RET_IP_); 678 682 ret = bch_err_throw(c, transaction_restart_write_buffer_flush); 679 683 } 680 684 err:
+6
fs/bcachefs/btree_write_buffer.h
··· 89 89 struct journal_keys_to_wb *dst, 90 90 enum btree_id btree, struct bkey_i *k) 91 91 { 92 + if (unlikely(!btree_type_uses_write_buffer(btree))) { 93 + int ret = bch2_btree_write_buffer_insert_err(c, btree, k); 94 + dump_stack(); 95 + return ret; 96 + } 97 + 92 98 EBUG_ON(!dst->seq); 93 99 94 100 return k->k.type == KEY_TYPE_accounting
+22 -7
fs/bcachefs/chardev.c
··· 319 319 ctx->stats.ret = BCH_IOCTL_DATA_EVENT_RET_done; 320 320 ctx->stats.data_type = (int) DATA_PROGRESS_DATA_TYPE_done; 321 321 } 322 + enumerated_ref_put(&ctx->c->writes, BCH_WRITE_REF_ioctl_data); 322 323 return 0; 323 324 } 324 325 ··· 379 378 struct bch_data_ctx *ctx; 380 379 int ret; 381 380 382 - if (!capable(CAP_SYS_ADMIN)) 383 - return -EPERM; 381 + if (!enumerated_ref_tryget(&c->writes, BCH_WRITE_REF_ioctl_data)) 382 + return -EROFS; 384 383 385 - if (arg.op >= BCH_DATA_OP_NR || arg.flags) 386 - return -EINVAL; 384 + if (!capable(CAP_SYS_ADMIN)) { 385 + ret = -EPERM; 386 + goto put_ref; 387 + } 388 + 389 + if (arg.op >= BCH_DATA_OP_NR || arg.flags) { 390 + ret = -EINVAL; 391 + goto put_ref; 392 + } 387 393 388 394 ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); 389 - if (!ctx) 390 - return -ENOMEM; 395 + if (!ctx) { 396 + ret = -ENOMEM; 397 + goto put_ref; 398 + } 391 399 392 400 ctx->c = c; 393 401 ctx->arg = arg; ··· 405 395 &bcachefs_data_ops, 406 396 bch2_data_thread); 407 397 if (ret < 0) 408 - kfree(ctx); 398 + goto cleanup; 399 + return ret; 400 + cleanup: 401 + kfree(ctx); 402 + put_ref: 403 + enumerated_ref_put(&c->writes, BCH_WRITE_REF_ioctl_data); 409 404 return ret; 410 405 } 411 406
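The `chardev.c` hunk above threads every failure in `bch2_ioctl_data()` through `put_ref`, so the write reference taken at entry is always dropped on error. A self-contained sketch of that goto-unwind shape — the refcount and the knob parameters are invented for illustration, not kernel interfaces:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy refcount standing in for the enumerated_ref on c->writes. */
static int refcount = 1;

static int ref_tryget(void)
{
	if (refcount <= 0)
		return 0;
	refcount++;
	return 1;
}

static void ref_put(void)
{
	refcount--;
}

/* Error-unwind shape of the patched ioctl: take the ref first, then
 * funnel every failure through labels that undo exactly what was done.
 * On success the real code hands ctx (and the ref) to a worker thread;
 * this sketch just releases both. */
static int ioctl_data(int is_admin, int alloc_fails)
{
	int ret;
	void *ctx;

	if (!ref_tryget())
		return -30;		/* -EROFS */

	if (!is_admin) {
		ret = -1;		/* -EPERM */
		goto put_ref;
	}

	ctx = alloc_fails ? NULL : malloc(16);
	if (!ctx) {
		ret = -12;		/* -ENOMEM */
		goto put_ref;
	}

	ret = 0;
	free(ctx);			/* sketch only: see comment above */
put_ref:
	ref_put();
	return ret;
}
```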
+1
fs/bcachefs/data_update.c
··· 249 249 bch2_bkey_val_to_text(&buf, c, k); 250 250 prt_str(&buf, "\nnew: "); 251 251 bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(insert)); 252 + prt_newline(&buf); 252 253 253 254 bch2_fs_emergency_read_only2(c, &buf); 254 255
-5
fs/bcachefs/errcode.h
··· 137 137 x(BCH_ERR_transaction_restart, transaction_restart_relock) \ 138 138 x(BCH_ERR_transaction_restart, transaction_restart_relock_path) \ 139 139 x(BCH_ERR_transaction_restart, transaction_restart_relock_path_intent) \ 140 - x(BCH_ERR_transaction_restart, transaction_restart_relock_after_fill) \ 141 140 x(BCH_ERR_transaction_restart, transaction_restart_too_many_iters) \ 142 141 x(BCH_ERR_transaction_restart, transaction_restart_lock_node_reused) \ 143 142 x(BCH_ERR_transaction_restart, transaction_restart_fill_relock) \ ··· 147 148 x(BCH_ERR_transaction_restart, transaction_restart_would_deadlock_write)\ 148 149 x(BCH_ERR_transaction_restart, transaction_restart_deadlock_recursion_limit)\ 149 150 x(BCH_ERR_transaction_restart, transaction_restart_upgrade) \ 150 - x(BCH_ERR_transaction_restart, transaction_restart_key_cache_upgrade) \ 151 151 x(BCH_ERR_transaction_restart, transaction_restart_key_cache_fill) \ 152 152 x(BCH_ERR_transaction_restart, transaction_restart_key_cache_raced) \ 153 - x(BCH_ERR_transaction_restart, transaction_restart_key_cache_realloced)\ 154 - x(BCH_ERR_transaction_restart, transaction_restart_journal_preres_get) \ 155 153 x(BCH_ERR_transaction_restart, transaction_restart_split_race) \ 156 154 x(BCH_ERR_transaction_restart, transaction_restart_write_buffer_flush) \ 157 155 x(BCH_ERR_transaction_restart, transaction_restart_nested) \ ··· 237 241 x(BCH_ERR_journal_res_blocked, journal_buf_enomem) \ 238 242 x(BCH_ERR_journal_res_blocked, journal_stuck) \ 239 243 x(BCH_ERR_journal_res_blocked, journal_retry_open) \ 240 - x(BCH_ERR_journal_res_blocked, journal_preres_get_blocked) \ 241 244 x(BCH_ERR_journal_res_blocked, bucket_alloc_blocked) \ 242 245 x(BCH_ERR_journal_res_blocked, stripe_alloc_blocked) \ 243 246 x(BCH_ERR_invalid, invalid_sb) \
+3 -1
fs/bcachefs/error.c
··· 621 621 if (s) 622 622 s->ret = ret; 623 623 624 - if (trans) 624 + if (trans && 625 + !(flags & FSCK_ERR_NO_LOG) && 626 + ret == -BCH_ERR_fsck_fix) 625 627 ret = bch2_trans_log_str(trans, bch2_sb_error_strs[err]) ?: ret; 626 628 err_unlock: 627 629 mutex_unlock(&c->fsck_error_msgs_lock);
+12 -1
fs/bcachefs/extent_update.c
··· 139 139 if (ret) 140 140 return ret; 141 141 142 - bch2_cut_back(end, k); 142 + /* tracepoint */ 143 + 144 + if (bpos_lt(end, k->k.p)) { 145 + if (trace_extent_trim_atomic_enabled()) { 146 + CLASS(printbuf, buf)(); 147 + bch2_bpos_to_text(&buf, end); 148 + prt_newline(&buf); 149 + bch2_bkey_val_to_text(&buf, trans->c, bkey_i_to_s_c(k)); 150 + trace_extent_trim_atomic(trans->c, buf.buf); 151 + } 152 + bch2_cut_back(end, k); 153 + } 143 154 return 0; 144 155 }
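The `extent_update.c` hunk above makes the trim conditional: the key is only cut back (and the new tracepoint only fires) when the boundary actually lands inside it. With positions reduced to plain integers, the logic is:

```c
#include <assert.h>

/* Extent with half-open-ish integer positions, standing in for bkeys. */
struct extent {
	unsigned start, end;
};

/* Sketch of the patched trim: only shorten the extent (and report that
 * the tracepoint would fire) when the boundary is strictly before the
 * extent's end position — the bpos_lt(end, k->k.p) test above. */
static int extent_trim_atomic(struct extent *e, unsigned boundary)
{
	if (boundary < e->end) {
		e->end = boundary;	/* bch2_cut_back() */
		return 1;		/* trimmed */
	}
	return 0;			/* nothing to do, no trace */
}
```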
+2 -1
fs/bcachefs/fs.c
··· 1732 1732 bch2_write_inode(c, inode, fssetxattr_inode_update_fn, &s, 1733 1733 ATTR_CTIME); 1734 1734 mutex_unlock(&inode->ei_update_lock); 1735 - return ret; 1735 + 1736 + return bch2_err_class(ret); 1736 1737 } 1737 1738 1738 1739 static const struct file_operations bch_file_operations = {
+218 -99
fs/bcachefs/fsck.c
··· 327 327 (inode->bi_flags & BCH_INODE_has_child_snapshot)) 328 328 return false; 329 329 330 - return !inode->bi_dir && !(inode->bi_flags & BCH_INODE_unlinked); 330 + return !bch2_inode_has_backpointer(inode) && 331 + !(inode->bi_flags & BCH_INODE_unlinked); 331 332 } 332 333 333 334 static int maybe_delete_dirent(struct btree_trans *trans, struct bpos d_pos, u32 snapshot) ··· 373 372 if (inode->bi_subvol) { 374 373 inode->bi_parent_subvol = BCACHEFS_ROOT_SUBVOL; 375 374 375 + struct btree_iter subvol_iter; 376 + struct bkey_i_subvolume *subvol = 377 + bch2_bkey_get_mut_typed(trans, &subvol_iter, 378 + BTREE_ID_subvolumes, POS(0, inode->bi_subvol), 379 + 0, subvolume); 380 + ret = PTR_ERR_OR_ZERO(subvol); 381 + if (ret) 382 + return ret; 383 + 384 + subvol->v.fs_path_parent = BCACHEFS_ROOT_SUBVOL; 385 + bch2_trans_iter_exit(trans, &subvol_iter); 386 + 376 387 u64 root_inum; 377 388 ret = subvol_lookup(trans, inode->bi_parent_subvol, 378 389 &dirent_snapshot, &root_inum); ··· 399 386 ret = lookup_lostfound(trans, dirent_snapshot, &lostfound, inode->bi_inum); 400 387 if (ret) 401 388 return ret; 389 + 390 + bch_verbose(c, "got lostfound inum %llu", lostfound.bi_inum); 402 391 403 392 lostfound.bi_nlink += S_ISDIR(inode->bi_mode); 404 393 ··· 437 422 ret = __bch2_fsck_write_inode(trans, inode); 438 423 if (ret) 439 424 return ret; 425 + 426 + { 427 + CLASS(printbuf, buf)(); 428 + ret = bch2_inum_snapshot_to_path(trans, inode->bi_inum, 429 + inode->bi_snapshot, NULL, &buf); 430 + if (ret) 431 + return ret; 432 + 433 + bch_info(c, "reattached at %s", buf.buf); 434 + } 440 435 441 436 /* 442 437 * Fix up inodes in child snapshots: if they should also be reattached ··· 515 490 static int remove_backpointer(struct btree_trans *trans, 516 491 struct bch_inode_unpacked *inode) 517 492 { 518 - if (!inode->bi_dir) 493 + if (!bch2_inode_has_backpointer(inode)) 519 494 return 0; 495 + 496 + u32 snapshot = inode->bi_snapshot; 497 + 498 + if (inode->bi_parent_subvol) { 499 + 
int ret = bch2_subvolume_get_snapshot(trans, inode->bi_parent_subvol, &snapshot); 500 + if (ret) 501 + return ret; 502 + } 520 503 521 504 struct bch_fs *c = trans->c; 522 505 struct btree_iter iter; 523 506 struct bkey_s_c_dirent d = dirent_get_by_pos(trans, &iter, 524 - SPOS(inode->bi_dir, inode->bi_dir_offset, inode->bi_snapshot)); 507 + SPOS(inode->bi_dir, inode->bi_dir_offset, snapshot)); 525 508 int ret = bkey_err(d) ?: 526 509 dirent_points_to_inode(c, d, inode) ?: 527 510 bch2_fsck_remove_dirent(trans, d.k->p); ··· 728 695 static bool key_visible_in_snapshot(struct bch_fs *c, struct snapshots_seen *seen, 729 696 u32 id, u32 ancestor) 730 697 { 731 - ssize_t i; 732 - 733 698 EBUG_ON(id > ancestor); 734 - 735 - /* @ancestor should be the snapshot most recently added to @seen */ 736 - EBUG_ON(ancestor != seen->pos.snapshot); 737 - EBUG_ON(ancestor != darray_last(seen->ids)); 738 699 739 700 if (id == ancestor) 740 701 return true; ··· 745 718 * numerically, since snapshot ID lists are kept sorted, so if we find 746 719 * an id that's an ancestor of @id we're done: 747 720 */ 748 - 749 - for (i = seen->ids.nr - 2; 750 - i >= 0 && seen->ids.data[i] >= id; 751 - --i) 752 - if (bch2_snapshot_is_ancestor(c, id, seen->ids.data[i])) 721 + darray_for_each_reverse(seen->ids, i) 722 + if (*i != ancestor && bch2_snapshot_is_ancestor(c, id, *i)) 753 723 return false; 754 724 755 725 return true; ··· 830 806 if (!n->whiteout) { 831 807 return bch2_inode_unpack(inode, &n->inode); 832 808 } else { 833 - n->inode.bi_inum = inode.k->p.inode; 809 + n->inode.bi_inum = inode.k->p.offset; 834 810 n->inode.bi_snapshot = inode.k->p.snapshot; 835 811 return 0; 836 812 } ··· 927 903 w->last_pos.inode, k.k->p.snapshot, i->inode.bi_snapshot, 928 904 (bch2_bkey_val_to_text(&buf, c, k), 929 905 buf.buf))) { 930 - struct bch_inode_unpacked new = i->inode; 931 - struct bkey_i whiteout; 932 - 933 - new.bi_snapshot = k.k->p.snapshot; 934 - 935 906 if (!i->whiteout) { 907 + struct 
bch_inode_unpacked new = i->inode; 908 + new.bi_snapshot = k.k->p.snapshot; 936 909 ret = __bch2_fsck_write_inode(trans, &new); 937 910 } else { 911 + struct bkey_i whiteout; 938 912 bkey_init(&whiteout.k); 939 913 whiteout.k.type = KEY_TYPE_whiteout; 940 - whiteout.k.p = SPOS(0, i->inode.bi_inum, i->inode.bi_snapshot); 914 + whiteout.k.p = SPOS(0, i->inode.bi_inum, k.k->p.snapshot); 941 915 ret = bch2_btree_insert_nonextent(trans, BTREE_ID_inodes, 942 916 &whiteout, 943 917 BTREE_UPDATE_internal_snapshot_node); ··· 1157 1135 if (ret) 1158 1136 goto err; 1159 1137 1160 - if (u.bi_dir || u.bi_dir_offset) { 1138 + if (bch2_inode_has_backpointer(&u)) { 1161 1139 ret = check_inode_dirent_inode(trans, &u, &do_update); 1162 1140 if (ret) 1163 1141 goto err; 1164 1142 } 1165 1143 1166 - if (fsck_err_on(u.bi_dir && (u.bi_flags & BCH_INODE_unlinked), 1144 + if (fsck_err_on(bch2_inode_has_backpointer(&u) && 1145 + (u.bi_flags & BCH_INODE_unlinked), 1167 1146 trans, inode_unlinked_but_has_dirent, 1168 1147 "inode unlinked but has dirent\n%s", 1169 1148 (printbuf_reset(&buf), ··· 1461 1438 { 1462 1439 struct bch_fs *c = trans->c; 1463 1440 struct printbuf buf = PRINTBUF; 1441 + struct btree_iter iter2 = {}; 1464 1442 int ret = PTR_ERR_OR_ZERO(i); 1465 1443 if (ret) 1466 1444 return ret; ··· 1471 1447 1472 1448 bool have_inode = i && !i->whiteout; 1473 1449 1474 - if (!have_inode && (c->sb.btrees_lost_data & BIT_ULL(BTREE_ID_inodes))) { 1475 - ret = reconstruct_inode(trans, iter->btree_id, k.k->p.snapshot, k.k->p.inode) ?: 1476 - bch2_trans_commit(trans, NULL, NULL, BCH_TRANS_COMMIT_no_enospc); 1477 - if (ret) 1478 - goto err; 1450 + if (!have_inode && (c->sb.btrees_lost_data & BIT_ULL(BTREE_ID_inodes))) 1451 + goto reconstruct; 1479 1452 1480 - inode->last_pos.inode--; 1481 - ret = bch_err_throw(c, transaction_restart_nested); 1482 - goto err; 1453 + if (have_inode && btree_matches_i_mode(iter->btree_id, i->inode.bi_mode)) 1454 + goto out; 1455 + 1456 + prt_printf(&buf, ", "); 
1457 + 1458 + bool have_old_inode = false; 1459 + darray_for_each(inode->inodes, i2) 1460 + if (!i2->whiteout && 1461 + bch2_snapshot_is_ancestor(c, k.k->p.snapshot, i2->inode.bi_snapshot) && 1462 + btree_matches_i_mode(iter->btree_id, i2->inode.bi_mode)) { 1463 + prt_printf(&buf, "but found good inode in older snapshot\n"); 1464 + bch2_inode_unpacked_to_text(&buf, &i2->inode); 1465 + prt_newline(&buf); 1466 + have_old_inode = true; 1467 + break; 1468 + } 1469 + 1470 + struct bkey_s_c k2; 1471 + unsigned nr_keys = 0; 1472 + 1473 + prt_printf(&buf, "found keys:\n"); 1474 + 1475 + for_each_btree_key_max_norestart(trans, iter2, iter->btree_id, 1476 + SPOS(k.k->p.inode, 0, k.k->p.snapshot), 1477 + POS(k.k->p.inode, U64_MAX), 1478 + 0, k2, ret) { 1479 + nr_keys++; 1480 + if (nr_keys <= 10) { 1481 + bch2_bkey_val_to_text(&buf, c, k2); 1482 + prt_newline(&buf); 1483 + } 1484 + if (nr_keys >= 100) 1485 + break; 1483 1486 } 1484 1487 1485 - if (fsck_err_on(!have_inode, 1486 - trans, key_in_missing_inode, 1487 - "key in missing inode:\n%s", 1488 - (printbuf_reset(&buf), 1489 - bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 1490 - goto delete; 1488 + if (ret) 1489 + goto err; 1491 1490 1492 - if (fsck_err_on(have_inode && !btree_matches_i_mode(iter->btree_id, i->inode.bi_mode), 1493 - trans, key_in_wrong_inode_type, 1494 - "key for wrong inode mode %o:\n%s", 1495 - i->inode.bi_mode, 1496 - (printbuf_reset(&buf), 1497 - bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 1498 - goto delete; 1491 + if (nr_keys > 100) 1492 + prt_printf(&buf, "found > %u keys for this missing inode\n", nr_keys); 1493 + else if (nr_keys > 10) 1494 + prt_printf(&buf, "found %u keys for this missing inode\n", nr_keys); 1495 + 1496 + if (!have_inode) { 1497 + if (fsck_err_on(!have_inode, 1498 + trans, key_in_missing_inode, 1499 + "key in missing inode%s", buf.buf)) { 1500 + /* 1501 + * Maybe a deletion that raced with data move, or something 1502 + * weird like that? 
But if we know the inode was deleted, or 1503 + * it's just a few keys, we can safely delete them. 1504 + * 1505 + * If it's many keys, we should probably recreate the inode 1506 + */ 1507 + if (have_old_inode || nr_keys <= 2) 1508 + goto delete; 1509 + else 1510 + goto reconstruct; 1511 + } 1512 + } else { 1513 + /* 1514 + * not autofix, this one would be a giant wtf - bit error in the 1515 + * inode corrupting i_mode? 1516 + * 1517 + * may want to try repairing inode instead of deleting 1518 + */ 1519 + if (fsck_err_on(!btree_matches_i_mode(iter->btree_id, i->inode.bi_mode), 1520 + trans, key_in_wrong_inode_type, 1521 + "key for wrong inode mode %o%s", 1522 + i->inode.bi_mode, buf.buf)) 1523 + goto delete; 1524 + } 1499 1525 out: 1500 1526 err: 1501 1527 fsck_err: 1528 + bch2_trans_iter_exit(trans, &iter2); 1502 1529 printbuf_exit(&buf); 1503 1530 bch_err_fn(c, ret); 1504 1531 return ret; 1505 1532 delete: 1533 + /* 1534 + * XXX: print out more info 1535 + * count up extents for this inode, check if we have different inode in 1536 + * an older snapshot version, perhaps decide if we want to reconstitute 1537 + */ 1506 1538 ret = bch2_btree_delete_at(trans, iter, BTREE_UPDATE_internal_snapshot_node); 1539 + goto out; 1540 + reconstruct: 1541 + ret = reconstruct_inode(trans, iter->btree_id, k.k->p.snapshot, k.k->p.inode) ?: 1542 + bch2_trans_commit(trans, NULL, NULL, BCH_TRANS_COMMIT_no_enospc); 1543 + if (ret) 1544 + goto err; 1545 + 1546 + inode->last_pos.inode--; 1547 + ret = bch_err_throw(c, transaction_restart_nested); 1507 1548 goto out; 1508 1549 } 1509 1550 ··· 1911 1822 !key_visible_in_snapshot(c, s, i->inode.bi_snapshot, k.k->p.snapshot)) 1912 1823 continue; 1913 1824 1914 - if (fsck_err_on(k.k->p.offset > round_up(i->inode.bi_size, block_bytes(c)) >> 9 && 1825 + u64 last_block = round_up(i->inode.bi_size, block_bytes(c)) >> 9; 1826 + 1827 + if (fsck_err_on(k.k->p.offset > last_block && 1915 1828 !bkey_extent_is_reservation(k), 1916 1829 trans, 
extent_past_end_of_inode, 1917 1830 "extent type past end of inode %llu:%u, i_size %llu\n%s", 1918 1831 i->inode.bi_inum, i->inode.bi_snapshot, i->inode.bi_size, 1919 1832 (bch2_bkey_val_to_text(&buf, c, k), buf.buf))) { 1920 - struct btree_iter iter2; 1833 + struct bkey_i *whiteout = bch2_trans_kmalloc(trans, sizeof(*whiteout)); 1834 + ret = PTR_ERR_OR_ZERO(whiteout); 1835 + if (ret) 1836 + goto err; 1921 1837 1922 - bch2_trans_copy_iter(trans, &iter2, iter); 1923 - bch2_btree_iter_set_snapshot(trans, &iter2, i->inode.bi_snapshot); 1838 + bkey_init(&whiteout->k); 1839 + whiteout->k.p = SPOS(k.k->p.inode, 1840 + last_block, 1841 + i->inode.bi_snapshot); 1842 + bch2_key_resize(&whiteout->k, 1843 + min(KEY_SIZE_MAX & (~0 << c->block_bits), 1844 + U64_MAX - whiteout->k.p.offset)); 1845 + 1846 + 1847 + /* 1848 + * Need a normal (not BTREE_ITER_all_snapshots) 1849 + * iterator, if we're deleting in a different 1850 + * snapshot and need to emit a whiteout 1851 + */ 1852 + struct btree_iter iter2; 1853 + bch2_trans_iter_init(trans, &iter2, BTREE_ID_extents, 1854 + bkey_start_pos(&whiteout->k), 1855 + BTREE_ITER_intent); 1924 1856 ret = bch2_btree_iter_traverse(trans, &iter2) ?: 1925 - bch2_btree_delete_at(trans, &iter2, 1857 + bch2_trans_update(trans, &iter2, whiteout, 1926 1858 BTREE_UPDATE_internal_snapshot_node); 1927 1859 bch2_trans_iter_exit(trans, &iter2); 1928 1860 if (ret) ··· 2059 1949 continue; 2060 1950 } 2061 1951 2062 - if (fsck_err_on(i->inode.bi_nlink != i->count, 2063 - trans, inode_dir_wrong_nlink, 2064 - "directory %llu:%u with wrong i_nlink: got %u, should be %llu", 2065 - w->last_pos.inode, i->inode.bi_snapshot, i->inode.bi_nlink, i->count)) { 2066 - i->inode.bi_nlink = i->count; 2067 - ret = bch2_fsck_write_inode(trans, &i->inode); 2068 - if (ret) 2069 - break; 1952 + if (i->inode.bi_nlink != i->count) { 1953 + CLASS(printbuf, buf)(); 1954 + 1955 + lockrestart_do(trans, 1956 + bch2_inum_snapshot_to_path(trans, w->last_pos.inode, 1957 + 
i->inode.bi_snapshot, NULL, &buf)); 1958 + 1959 + if (fsck_err_on(i->inode.bi_nlink != i->count, 1960 + trans, inode_dir_wrong_nlink, 1961 + "directory with wrong i_nlink: got %u, should be %llu\n%s", 1962 + i->inode.bi_nlink, i->count, buf.buf)) { 1963 + i->inode.bi_nlink = i->count; 1964 + ret = bch2_fsck_write_inode(trans, &i->inode); 1965 + if (ret) 1966 + break; 1967 + } 2070 1968 } 2071 1969 } 2072 1970 fsck_err: ··· 2611 2493 if (k.k->type != KEY_TYPE_subvolume) 2612 2494 return 0; 2613 2495 2496 + subvol_inum start = { 2497 + .subvol = k.k->p.offset, 2498 + .inum = le64_to_cpu(bkey_s_c_to_subvolume(k).v->inode), 2499 + }; 2500 + 2614 2501 while (k.k->p.offset != BCACHEFS_ROOT_SUBVOL) { 2615 2502 ret = darray_push(&subvol_path, k.k->p.offset); 2616 2503 if (ret) ··· 2634 2511 2635 2512 if (darray_u32_has(&subvol_path, parent)) { 2636 2513 printbuf_reset(&buf); 2637 - prt_printf(&buf, "subvolume loop:\n"); 2514 + prt_printf(&buf, "subvolume loop: "); 2638 2515 2639 - darray_for_each_reverse(subvol_path, i) 2640 - prt_printf(&buf, "%u ", *i); 2641 - prt_printf(&buf, "%u", parent); 2516 + ret = bch2_inum_to_path(trans, start, &buf); 2517 + if (ret) 2518 + goto err; 2642 2519 2643 2520 if (fsck_err(trans, subvol_loop, "%s", buf.buf)) 2644 2521 ret = reattach_subvol(trans, s); ··· 2682 2559 return ret; 2683 2560 } 2684 2561 2685 - struct pathbuf_entry { 2686 - u64 inum; 2687 - u32 snapshot; 2688 - }; 2689 - 2690 - typedef DARRAY(struct pathbuf_entry) pathbuf; 2691 - 2692 - static int bch2_bi_depth_renumber_one(struct btree_trans *trans, struct pathbuf_entry *p, 2562 + static int bch2_bi_depth_renumber_one(struct btree_trans *trans, 2563 + u64 inum, u32 snapshot, 2693 2564 u32 new_depth) 2694 2565 { 2695 2566 struct btree_iter iter; 2696 2567 struct bkey_s_c k = bch2_bkey_get_iter(trans, &iter, BTREE_ID_inodes, 2697 - SPOS(0, p->inum, p->snapshot), 0); 2568 + SPOS(0, inum, snapshot), 0); 2698 2569 2699 2570 struct bch_inode_unpacked inode; 2700 2571 int ret = 
bkey_err(k) ?: ··· 2707 2590 return ret; 2708 2591 } 2709 2592 2710 - static int bch2_bi_depth_renumber(struct btree_trans *trans, pathbuf *path, u32 new_bi_depth) 2593 + static int bch2_bi_depth_renumber(struct btree_trans *trans, darray_u64 *path, 2594 + u32 snapshot, u32 new_bi_depth) 2711 2595 { 2712 2596 u32 restart_count = trans->restart_count; 2713 2597 int ret = 0; 2714 2598 2715 2599 darray_for_each_reverse(*path, i) { 2716 2600 ret = nested_lockrestart_do(trans, 2717 - bch2_bi_depth_renumber_one(trans, i, new_bi_depth)); 2601 + bch2_bi_depth_renumber_one(trans, *i, snapshot, new_bi_depth)); 2718 2602 bch_err_fn(trans->c, ret); 2719 2603 if (ret) 2720 2604 break; ··· 2726 2608 return ret ?: trans_was_restarted(trans, restart_count); 2727 2609 } 2728 2610 2729 - static bool path_is_dup(pathbuf *p, u64 inum, u32 snapshot) 2730 - { 2731 - darray_for_each(*p, i) 2732 - if (i->inum == inum && 2733 - i->snapshot == snapshot) 2734 - return true; 2735 - return false; 2736 - } 2737 - 2738 2611 static int check_path_loop(struct btree_trans *trans, struct bkey_s_c inode_k) 2739 2612 { 2740 2613 struct bch_fs *c = trans->c; 2741 2614 struct btree_iter inode_iter = {}; 2742 - pathbuf path = {}; 2615 + darray_u64 path = {}; 2743 2616 struct printbuf buf = PRINTBUF; 2744 2617 u32 snapshot = inode_k.k->p.snapshot; 2745 2618 bool redo_bi_depth = false; 2746 2619 u32 min_bi_depth = U32_MAX; 2747 2620 int ret = 0; 2748 2621 2622 + struct bpos start = inode_k.k->p; 2623 + 2749 2624 struct bch_inode_unpacked inode; 2750 2625 ret = bch2_inode_unpack(inode_k, &inode); 2751 2626 if (ret) 2752 2627 return ret; 2753 2628 2754 - while (!inode.bi_subvol) { 2629 + /* 2630 + * If we're running full fsck, check_dirents() will have already ran, 2631 + * and we shouldn't see any missing backpointers here - otherwise that's 2632 + * handled separately, by check_unreachable_inodes 2633 + */ 2634 + while (!inode.bi_subvol && 2635 + bch2_inode_has_backpointer(&inode)) { 2755 2636 struct 
btree_iter dirent_iter; 2756 2637 struct bkey_s_c_dirent d; 2757 - u32 parent_snapshot = snapshot; 2758 2638 2759 - d = inode_get_dirent(trans, &dirent_iter, &inode, &parent_snapshot); 2639 + d = dirent_get_by_pos(trans, &dirent_iter, 2640 + SPOS(inode.bi_dir, inode.bi_dir_offset, snapshot)); 2760 2641 ret = bkey_err(d.s_c); 2761 2642 if (ret && !bch2_err_matches(ret, ENOENT)) 2762 2643 goto out; ··· 2773 2656 2774 2657 bch2_trans_iter_exit(trans, &dirent_iter); 2775 2658 2776 - ret = darray_push(&path, ((struct pathbuf_entry) { 2777 - .inum = inode.bi_inum, 2778 - .snapshot = snapshot, 2779 - })); 2659 + ret = darray_push(&path, inode.bi_inum); 2780 2660 if (ret) 2781 2661 return ret; 2782 - 2783 - snapshot = parent_snapshot; 2784 2662 2785 2663 bch2_trans_iter_exit(trans, &inode_iter); 2786 2664 inode_k = bch2_bkey_get_iter(trans, &inode_iter, BTREE_ID_inodes, ··· 2798 2686 break; 2799 2687 2800 2688 inode = parent_inode; 2801 - snapshot = inode_k.k->p.snapshot; 2802 2689 redo_bi_depth = true; 2803 2690 2804 - if (path_is_dup(&path, inode.bi_inum, snapshot)) { 2691 + if (darray_find(path, inode.bi_inum)) { 2805 2692 printbuf_reset(&buf); 2806 - prt_printf(&buf, "directory structure loop:\n"); 2807 - darray_for_each_reverse(path, i) 2808 - prt_printf(&buf, "%llu:%u ", i->inum, i->snapshot); 2809 - prt_printf(&buf, "%llu:%u", inode.bi_inum, snapshot); 2693 + prt_printf(&buf, "directory structure loop in snapshot %u: ", 2694 + snapshot); 2695 + 2696 + ret = bch2_inum_snapshot_to_path(trans, start.offset, start.snapshot, NULL, &buf); 2697 + if (ret) 2698 + goto out; 2699 + 2700 + if (c->opts.verbose) { 2701 + prt_newline(&buf); 2702 + darray_for_each(path, i) 2703 + prt_printf(&buf, "%llu ", *i); 2704 + } 2810 2705 2811 2706 if (fsck_err(trans, dir_loop, "%s", buf.buf)) { 2812 2707 ret = remove_backpointer(trans, &inode); ··· 2833 2714 min_bi_depth = 0; 2834 2715 2835 2716 if (redo_bi_depth) 2836 - ret = bch2_bi_depth_renumber(trans, &path, min_bi_depth); 2717 + ret 
= bch2_bi_depth_renumber(trans, &path, snapshot, min_bi_depth); 2837 2718 out: 2838 2719 fsck_err: 2839 2720 bch2_trans_iter_exit(trans, &inode_iter); ··· 2850 2731 int bch2_check_directory_structure(struct bch_fs *c) 2851 2732 { 2852 2733 int ret = bch2_trans_run(c, 2853 - for_each_btree_key_commit(trans, iter, BTREE_ID_inodes, POS_MIN, 2734 + for_each_btree_key_reverse_commit(trans, iter, BTREE_ID_inodes, POS_MIN, 2854 2735 BTREE_ITER_intent| 2855 2736 BTREE_ITER_prefetch| 2856 2737 BTREE_ITER_all_snapshots, k,
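The `fsck.c` changes above replace the `pathbuf_entry` list with a plain `darray_u64` of inode numbers and use `darray_find()` to spot a directory loop while walking backpointers toward the root. The core of that check, with a toy parent map in place of btree lookups:

```c
#include <assert.h>

#define ROOT		0
#define MAX_DEPTH	64

/* Sketch of the loop check in check_path_loop(): walk parent pointers
 * from an inode toward the root, remembering every inode number seen;
 * meeting one twice means a directory structure loop.  parent[] is a
 * toy parent map indexed by inode number, not a kernel structure. */
static int has_dir_loop(const unsigned *parent, unsigned inum)
{
	unsigned seen[MAX_DEPTH];
	unsigned nr = 0;

	while (inum != ROOT) {
		for (unsigned i = 0; i < nr; i++)
			if (seen[i] == inum)
				return 1;	/* already on the path */
		if (nr == MAX_DEPTH)
			return 1;		/* absurd depth: treat as loop */
		seen[nr++] = inum;
		inum = parent[inum];
	}
	return 0;
}
```

Dropping the per-entry snapshot (the walk stays within one snapshot) is what lets the path shrink to a bare `darray_u64`.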
+5
fs/bcachefs/inode.h
··· 254 254 : c->opts.casefold; 255 255 } 256 256 257 + static inline bool bch2_inode_has_backpointer(const struct bch_inode_unpacked *bi) 258 + { 259 + return bi->bi_dir || bi->bi_dir_offset; 260 + } 261 + 257 262 /* i_nlink: */ 258 263 259 264 static inline unsigned nlink_bias(umode_t mode)
+6 -1
fs/bcachefs/io_read.c
··· 1491 1491 prt_printf(out, "have_ioref:\t%u\n", rbio->have_ioref); 1492 1492 prt_printf(out, "narrow_crcs:\t%u\n", rbio->narrow_crcs); 1493 1493 prt_printf(out, "context:\t%u\n", rbio->context); 1494 - prt_printf(out, "ret:\t%s\n", bch2_err_str(rbio->ret)); 1494 + 1495 + int ret = READ_ONCE(rbio->ret); 1496 + if (ret < 0) 1497 + prt_printf(out, "ret:\t%s\n", bch2_err_str(ret)); 1498 + else 1499 + prt_printf(out, "ret:\t%i\n", ret); 1495 1500 1496 1501 prt_printf(out, "flags:\t"); 1497 1502 bch2_prt_bitflags(out, bch2_read_bio_flags, rbio->flags);
+7 -13
fs/bcachefs/journal.c
··· 1283 1283 ret = 0; /* wait and retry */ 1284 1284 1285 1285 bch2_disk_reservation_put(c, &disk_res); 1286 - closure_sync(&cl); 1286 + bch2_wait_on_allocator(c, &cl); 1287 1287 } 1288 1288 1289 1289 return ret; ··· 1474 1474 clear_bit(JOURNAL_running, &j->flags); 1475 1475 } 1476 1476 1477 - int bch2_fs_journal_start(struct journal *j, u64 cur_seq) 1477 + int bch2_fs_journal_start(struct journal *j, u64 last_seq, u64 cur_seq) 1478 1478 { 1479 1479 struct bch_fs *c = container_of(j, struct bch_fs, journal); 1480 1480 struct journal_entry_pin_list *p; 1481 1481 struct journal_replay *i, **_i; 1482 1482 struct genradix_iter iter; 1483 1483 bool had_entries = false; 1484 - u64 last_seq = cur_seq, nr, seq; 1485 1484 1486 1485 /* 1487 1486 * ··· 1494 1495 return -EINVAL; 1495 1496 } 1496 1497 1497 - genradix_for_each_reverse(&c->journal_entries, iter, _i) { 1498 - i = *_i; 1498 + /* Clean filesystem? */ 1499 + if (!last_seq) 1500 + last_seq = cur_seq; 1499 1501 1500 - if (journal_replay_ignore(i)) 1501 - continue; 1502 - 1503 - last_seq = le64_to_cpu(i->j.last_seq); 1504 - break; 1505 - } 1506 - 1507 - nr = cur_seq - last_seq; 1502 + u64 nr = cur_seq - last_seq; 1508 1503 1509 1504 /* 1510 1505 * Extra fudge factor, in case we crashed when the journal pin fifo was ··· 1525 1532 j->pin.back = cur_seq; 1526 1533 atomic64_set(&j->seq, cur_seq - 1); 1527 1534 1535 + u64 seq; 1528 1536 fifo_for_each_entry_ptr(p, &j->pin, seq) 1529 1537 journal_pin_list_init(p, 1); 1530 1538
+1 -1
fs/bcachefs/journal.h
··· 453 453 void bch2_dev_journal_stop(struct journal *, struct bch_dev *); 454 454 455 455 void bch2_fs_journal_stop(struct journal *); 456 - int bch2_fs_journal_start(struct journal *, u64); 456 + int bch2_fs_journal_start(struct journal *, u64, u64); 457 457 void bch2_journal_set_replay_done(struct journal *); 458 458 459 459 void bch2_dev_journal_exit(struct bch_dev *);
+20 -6
fs/bcachefs/journal_io.c
··· 160 160 struct printbuf buf = PRINTBUF; 161 161 int ret = JOURNAL_ENTRY_ADD_OK; 162 162 163 + if (last_seq && c->opts.journal_rewind) 164 + last_seq = min(last_seq, c->opts.journal_rewind); 165 + 163 166 if (!c->journal.oldest_seq_found_ondisk || 164 167 le64_to_cpu(j->seq) < c->journal.oldest_seq_found_ondisk) 165 168 c->journal.oldest_seq_found_ondisk = le64_to_cpu(j->seq); ··· 1433 1430 printbuf_reset(&buf); 1434 1431 prt_printf(&buf, "journal read done, replaying entries %llu-%llu", 1435 1432 *last_seq, *blacklist_seq - 1); 1433 + 1434 + /* 1435 + * Drop blacklisted entries and entries older than last_seq (or start of 1436 + * journal rewind: 1437 + */ 1438 + u64 drop_before = *last_seq; 1439 + if (c->opts.journal_rewind) { 1440 + drop_before = min(drop_before, c->opts.journal_rewind); 1441 + prt_printf(&buf, " (rewinding from %llu)", c->opts.journal_rewind); 1442 + } 1443 + 1444 + *last_seq = drop_before; 1436 1445 if (*start_seq != *blacklist_seq) 1437 1446 prt_printf(&buf, " (unflushed %llu-%llu)", *blacklist_seq, *start_seq - 1); 1438 1447 bch_info(c, "%s", buf.buf); 1439 - 1440 - /* Drop blacklisted entries and entries older than last_seq: */ 1441 1448 genradix_for_each(&c->journal_entries, radix_iter, _i) { 1442 1449 i = *_i; 1443 1450 ··· 1455 1442 continue; 1456 1443 1457 1444 seq = le64_to_cpu(i->j.seq); 1458 - if (seq < *last_seq) { 1445 + if (seq < drop_before) { 1459 1446 journal_replay_free(c, i, false); 1460 1447 continue; 1461 1448 } ··· 1468 1455 } 1469 1456 } 1470 1457 1471 - ret = bch2_journal_check_for_missing(c, *last_seq, *blacklist_seq - 1); 1458 + ret = bch2_journal_check_for_missing(c, drop_before, *blacklist_seq - 1); 1472 1459 if (ret) 1473 1460 goto err; 1474 1461 ··· 1716 1703 bch2_log_msg_start(c, &buf); 1717 1704 1718 1705 if (err == -BCH_ERR_journal_write_err) 1719 - prt_printf(&buf, "unable to write journal to sufficient devices"); 1706 + prt_printf(&buf, "unable to write journal to sufficient devices\n"); 1720 1707 else 1721 
- prt_printf(&buf, "journal write error marking replicas: %s", bch2_err_str(err)); 1708 + prt_printf(&buf, "journal write error marking replicas: %s\n", 1709 + bch2_err_str(err)); 1722 1710 1723 1711 bch2_fs_emergency_read_only2(c, &buf); 1724 1712
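The journal_io.c hunk above clamps the replay start: with `journal_rewind` set, entries are kept back to `min(last_seq, rewind)` rather than `last_seq`. A hedged userspace sketch of that clamp (names are illustrative, not the kernel's):

```c
#include <stdint.h>

/*
 * Sketch of the rewind clamp: a nonzero rewind point pulls the drop
 * threshold back so older journal entries survive for replay.
 */
static uint64_t journal_drop_before(uint64_t last_seq, uint64_t rewind)
{
        return rewind ? (rewind < last_seq ? rewind : last_seq) : last_seq;
}

/* An entry survives the sweep iff its seq is not older than drop_before. */
static int journal_entry_kept(uint64_t seq, uint64_t last_seq, uint64_t rewind)
{
        return seq >= journal_drop_before(last_seq, rewind);
}
```

With rewind disabled (0) the behaviour is unchanged; with an earlier rewind point, entries between it and `last_seq` are retained instead of freed.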
+23 -7
fs/bcachefs/namei.c
··· 625 625 { 626 626 unsigned orig_pos = path->pos; 627 627 int ret = 0; 628 + DARRAY(subvol_inum) inums = {}; 629 + 630 + if (!snapshot) { 631 + ret = bch2_subvolume_get_snapshot(trans, subvol, &snapshot); 632 + if (ret) 633 + goto disconnected; 634 + } 628 635 629 636 while (true) { 630 - if (!snapshot) { 631 - ret = bch2_subvolume_get_snapshot(trans, subvol, &snapshot); 632 - if (ret) 633 - goto disconnected; 637 + subvol_inum n = (subvol_inum) { subvol ?: snapshot, inum }; 638 + 639 + if (darray_find_p(inums, i, i->subvol == n.subvol && i->inum == n.inum)) { 640 + prt_str_reversed(path, "(loop)"); 641 + break; 634 642 } 643 + 644 + ret = darray_push(&inums, n); 645 + if (ret) 646 + goto err; 635 647 636 648 struct bch_inode_unpacked inode; 637 649 ret = bch2_inode_find_by_inum_snapshot(trans, inum, snapshot, &inode, 0); ··· 662 650 inum = inode.bi_dir; 663 651 if (inode.bi_parent_subvol) { 664 652 subvol = inode.bi_parent_subvol; 665 - snapshot = 0; 653 + ret = bch2_subvolume_get_snapshot(trans, inode.bi_parent_subvol, &snapshot); 654 + if (ret) 655 + goto disconnected; 666 656 } 667 657 668 658 struct btree_iter d_iter; ··· 676 662 goto disconnected; 677 663 678 664 struct qstr dirent_name = bch2_dirent_get_name(d); 665 + 679 666 prt_bytes_reversed(path, dirent_name.name, dirent_name.len); 680 667 681 668 prt_char(path, '/'); ··· 692 677 goto err; 693 678 694 679 reverse_bytes(path->buf + orig_pos, path->pos - orig_pos); 680 + darray_exit(&inums); 695 681 return 0; 696 682 err: 683 + darray_exit(&inums); 697 684 return ret; 698 685 disconnected: 699 686 if (bch2_err_matches(ret, BCH_ERR_transaction_restart)) ··· 734 717 if (inode_points_to_dirent(target, d)) 735 718 return 0; 736 719 737 - if (!target->bi_dir && 738 - !target->bi_dir_offset) { 720 + if (!bch2_inode_has_backpointer(target)) { 739 721 fsck_err_on(S_ISDIR(target->bi_mode), 740 722 trans, inode_dir_missing_backpointer, 741 723 "directory with missing backpointer\n%s",
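The namei.c hunk above adds loop detection to the reverse path walk: every (subvol, inum) pair visited is pushed to a darray, and revisiting a pair means the parent chain is cyclic, so the walk emits "(loop)" and stops. A hedged sketch of that guard, with a flat array standing in for the kernel darray:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for the (subvol, inum) pairs tracked above. */
struct visited { uint32_t subvol; uint64_t inum; };

/*
 * Walk a parent chain, remembering each node seen; return 1 as soon as
 * a node repeats (a cycle), 0 if the chain is acyclic.
 */
static int walk_hits_loop(const struct visited *chain, size_t n)
{
        struct visited seen[64];
        size_t nr = 0;

        for (size_t i = 0; i < n; i++) {
                for (size_t j = 0; j < nr; j++)
                        if (seen[j].subvol == chain[i].subvol &&
                            seen[j].inum == chain[i].inum)
                                return 1;      /* cycle: stop the walk */
                if (nr < 64)
                        seen[nr++] = chain[i];
        }
        return 0;
}
```

Without this check, a corrupted `bi_dir` backpointer forming a cycle would make the path reconstruction loop forever.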
+5
fs/bcachefs/opts.h
··· 379 379 OPT_BOOL(), \ 380 380 BCH2_NO_SB_OPT, false, \ 381 381 NULL, "Exit recovery immediately prior to journal replay")\ 382 + x(journal_rewind, u64, \ 383 + OPT_FS|OPT_MOUNT, \ 384 + OPT_UINT(0, U64_MAX), \ 385 + BCH2_NO_SB_OPT, 0, \ 386 + NULL, "Rewind journal") \ 382 387 x(recovery_passes, u64, \ 383 388 OPT_FS|OPT_MOUNT, \ 384 389 OPT_BITFIELD(bch2_recovery_passes), \
+20 -4
fs/bcachefs/recovery.c
··· 607 607 buf.buf, bch2_err_str(ret))) { 608 608 if (btree_id_is_alloc(i)) 609 609 r->error = 0; 610 + ret = 0; 610 611 } 611 612 } 612 613 ··· 693 692 ret = true; 694 693 } 695 694 696 - if (new_version > c->sb.version_incompat && 695 + if (new_version > c->sb.version_incompat_allowed && 697 696 c->opts.version_upgrade == BCH_VERSION_UPGRADE_incompatible) { 698 697 struct printbuf buf = PRINTBUF; 699 698 ··· 757 756 758 757 if (c->opts.nochanges) 759 758 c->opts.read_only = true; 759 + 760 + if (c->opts.journal_rewind) { 761 + bch_info(c, "rewinding journal, fsck required"); 762 + c->opts.fsck = true; 763 + } 764 + 765 + if (go_rw_in_recovery(c)) { 766 + /* 767 + * start workqueues/kworkers early - kthread creation checks for 768 + * pending signals, which is _very_ annoying 769 + */ 770 + ret = bch2_fs_init_rw(c); 771 + if (ret) 772 + goto err; 773 + } 760 774 761 775 mutex_lock(&c->sb_lock); 762 776 struct bch_sb_field_ext *ext = bch2_sb_field_get(c->disk_sb.sb, ext); ··· 981 965 982 966 ret = bch2_journal_log_msg(c, "starting journal at entry %llu, replaying %llu-%llu", 983 967 journal_seq, last_seq, blacklist_seq - 1) ?: 984 - bch2_fs_journal_start(&c->journal, journal_seq); 968 + bch2_fs_journal_start(&c->journal, last_seq, journal_seq); 985 969 if (ret) 986 970 goto err; 987 971 ··· 1142 1126 struct printbuf buf = PRINTBUF; 1143 1127 bch2_log_msg_start(c, &buf); 1144 1128 1145 - prt_printf(&buf, "error in recovery: %s", bch2_err_str(ret)); 1129 + prt_printf(&buf, "error in recovery: %s\n", bch2_err_str(ret)); 1146 1130 bch2_fs_emergency_read_only2(c, &buf); 1147 1131 1148 1132 bch2_print_str(c, KERN_ERR, buf.buf); ··· 1197 1181 * journal_res_get() will crash if called before this has 1198 1182 * set up the journal.pin FIFO and journal.cur pointer: 1199 1183 */ 1200 - ret = bch2_fs_journal_start(&c->journal, 1); 1184 + ret = bch2_fs_journal_start(&c->journal, 1, 1); 1201 1185 if (ret) 1202 1186 goto err; 1203 1187
+9 -10
fs/bcachefs/recovery_passes.c
··· 217 217 218 218 set_bit(BCH_FS_may_go_rw, &c->flags); 219 219 220 - if (keys->nr || 221 - !c->opts.read_only || 222 - !c->sb.clean || 223 - c->opts.recovery_passes || 224 - (c->opts.fsck && !(c->sb.features & BIT_ULL(BCH_FEATURE_no_alloc_info)))) { 220 + if (go_rw_in_recovery(c)) { 225 221 if (c->sb.features & BIT_ULL(BCH_FEATURE_no_alloc_info)) { 226 222 bch_info(c, "mounting a filesystem with no alloc info read-write; will recreate"); 227 223 bch2_reconstruct_alloc(c); ··· 313 317 */ 314 318 bool in_recovery = test_bit(BCH_FS_in_recovery, &c->flags); 315 319 bool persistent = !in_recovery || !(*flags & RUN_RECOVERY_PASS_nopersistent); 320 + bool rewind = in_recovery && 321 + r->curr_pass > pass && 322 + !(r->passes_complete & BIT_ULL(pass)); 316 323 317 324 if (persistent 318 325 ? !(c->sb.recovery_passes_required & BIT_ULL(pass)) ··· 324 325 325 326 if (!(*flags & RUN_RECOVERY_PASS_ratelimit) && 326 327 (r->passes_ratelimiting & BIT_ULL(pass))) 328 + return true; 329 + 330 + if (rewind) 327 331 return true; 328 332 329 333 return false; ··· 342 340 { 343 341 struct bch_fs_recovery *r = &c->recovery; 344 342 int ret = 0; 345 - 346 343 347 344 lockdep_assert_held(&c->sb_lock); 348 345 ··· 413 412 { 414 413 int ret = 0; 415 414 416 - scoped_guard(mutex, &c->sb_lock) { 417 - if (!recovery_pass_needs_set(c, pass, &flags)) 418 - return 0; 419 - 415 + if (recovery_pass_needs_set(c, pass, &flags)) { 416 + guard(mutex)(&c->sb_lock); 420 417 ret = __bch2_run_explicit_recovery_pass(c, out, pass, flags); 421 418 bch2_write_super(c); 422 419 }
+9
fs/bcachefs/recovery_passes.h
··· 17 17 RUN_RECOVERY_PASS_ratelimit = BIT(1), 18 18 }; 19 19 20 + static inline bool go_rw_in_recovery(struct bch_fs *c) 21 + { 22 + return (c->journal_keys.nr || 23 + !c->opts.read_only || 24 + !c->sb.clean || 25 + c->opts.recovery_passes || 26 + (c->opts.fsck && !(c->sb.features & BIT_ULL(BCH_FEATURE_no_alloc_info)))); 27 + } 28 + 20 29 int bch2_run_print_explicit_recovery_pass(struct bch_fs *, enum bch_recovery_pass); 21 30 22 31 int __bch2_run_explicit_recovery_pass(struct bch_fs *, struct printbuf *,
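The recovery_passes.h hunk above hoists the go-read-write decision into a shared inline predicate so recovery.c and super.c test the same conditions. A hedged sketch of its OR structure, with illustrative field names:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative subset of the filesystem state the predicate consults. */
struct fs_state {
        uint64_t journal_keys_nr;
        bool read_only, sb_clean, extra_passes, fsck, no_alloc_info;
};

/*
 * Recovery must go read-write when there are journal keys to replay,
 * the mount is not read-only, the superblock is unclean, extra recovery
 * passes were requested, or fsck will run with alloc info present.
 */
static bool needs_rw_in_recovery(const struct fs_state *c)
{
        return c->journal_keys_nr ||
               !c->read_only ||
               !c->sb_clean ||
               c->extra_passes ||
               (c->fsck && !c->no_alloc_info);
}
```

Centralizing the predicate is what lets super.c start the rw workqueues early with the same condition recovery_passes.c already used.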
+7 -5
fs/bcachefs/reflink.c
··· 64 64 REFLINK_P_IDX(p.v), 65 65 le32_to_cpu(p.v->front_pad), 66 66 le32_to_cpu(p.v->back_pad)); 67 + 68 + if (REFLINK_P_ERROR(p.v)) 69 + prt_str(out, " error"); 67 70 } 68 71 69 72 bool bch2_reflink_p_merge(struct bch_fs *c, struct bkey_s _l, struct bkey_s_c _r) ··· 272 269 return k; 273 270 274 271 if (unlikely(!bkey_extent_is_reflink_data(k.k))) { 275 - unsigned size = min((u64) k.k->size, 276 - REFLINK_P_IDX(p.v) + p.k->size + le32_to_cpu(p.v->back_pad) - 277 - reflink_offset); 278 - bch2_key_resize(&iter->k, size); 272 + u64 missing_end = min(k.k->p.offset, 273 + REFLINK_P_IDX(p.v) + p.k->size + le32_to_cpu(p.v->back_pad)); 274 + BUG_ON(reflink_offset == missing_end); 279 275 280 276 int ret = bch2_indirect_extent_missing_error(trans, p, reflink_offset, 281 - k.k->p.offset, should_commit); 277 + missing_end, should_commit); 282 278 if (ret) { 283 279 bch2_trans_iter_exit(trans, iter); 284 280 return bkey_s_c_err(ret);
+10 -9
fs/bcachefs/sb-errors_format.h
··· 3 3 #define _BCACHEFS_SB_ERRORS_FORMAT_H 4 4 5 5 enum bch_fsck_flags { 6 - FSCK_CAN_FIX = 1 << 0, 7 - FSCK_CAN_IGNORE = 1 << 1, 8 - FSCK_AUTOFIX = 1 << 2, 6 + FSCK_CAN_FIX = BIT(0), 7 + FSCK_CAN_IGNORE = BIT(1), 8 + FSCK_AUTOFIX = BIT(2), 9 + FSCK_ERR_NO_LOG = BIT(3), 9 10 }; 10 11 11 12 #define BCH_SB_ERRS() \ ··· 218 217 x(inode_str_hash_invalid, 194, 0) \ 219 218 x(inode_v3_fields_start_bad, 195, 0) \ 220 219 x(inode_snapshot_mismatch, 196, 0) \ 221 - x(snapshot_key_missing_inode_snapshot, 314, 0) \ 220 + x(snapshot_key_missing_inode_snapshot, 314, FSCK_AUTOFIX) \ 222 221 x(inode_unlinked_but_clean, 197, 0) \ 223 222 x(inode_unlinked_but_nlink_nonzero, 198, 0) \ 224 223 x(inode_unlinked_and_not_open, 281, 0) \ ··· 252 251 x(deleted_inode_not_unlinked, 214, FSCK_AUTOFIX) \ 253 252 x(deleted_inode_has_child_snapshots, 288, FSCK_AUTOFIX) \ 254 253 x(extent_overlapping, 215, 0) \ 255 - x(key_in_missing_inode, 216, 0) \ 254 + x(key_in_missing_inode, 216, FSCK_AUTOFIX) \ 256 255 x(key_in_wrong_inode_type, 217, 0) \ 257 - x(extent_past_end_of_inode, 218, 0) \ 256 + x(extent_past_end_of_inode, 218, FSCK_AUTOFIX) \ 258 257 x(dirent_empty_name, 219, 0) \ 259 258 x(dirent_val_too_big, 220, 0) \ 260 259 x(dirent_name_too_long, 221, 0) \ 261 260 x(dirent_name_embedded_nul, 222, 0) \ 262 261 x(dirent_name_dot_or_dotdot, 223, 0) \ 263 262 x(dirent_name_has_slash, 224, 0) \ 264 - x(dirent_d_type_wrong, 225, 0) \ 263 + x(dirent_d_type_wrong, 225, FSCK_AUTOFIX) \ 265 264 x(inode_bi_parent_wrong, 226, 0) \ 266 265 x(dirent_in_missing_dir_inode, 227, 0) \ 267 266 x(dirent_in_non_dir_inode, 228, 0) \ 268 - x(dirent_to_missing_inode, 229, 0) \ 267 + x(dirent_to_missing_inode, 229, FSCK_AUTOFIX) \ 269 268 x(dirent_to_overwritten_inode, 302, 0) \ 270 269 x(dirent_to_missing_subvol, 230, 0) \ 271 270 x(dirent_to_itself, 231, 0) \ ··· 301 300 x(btree_node_bkey_bad_u64s, 260, 0) \ 302 301 x(btree_node_topology_empty_interior_node, 261, 0) \ 303 302 x(btree_ptr_v2_min_key_bad, 262, 0) 
\ 304 - x(btree_root_unreadable_and_scan_found_nothing, 263, FSCK_AUTOFIX) \ 303 + x(btree_root_unreadable_and_scan_found_nothing, 263, 0) \ 305 304 x(snapshot_node_missing, 264, FSCK_AUTOFIX) \ 306 305 x(dup_backpointer_to_bad_csum_extent, 265, 0) \ 307 306 x(btree_bitmap_not_marked, 266, FSCK_AUTOFIX) \
+9 -5
fs/bcachefs/snapshot.c
··· 135 135 136 136 bool __bch2_snapshot_is_ancestor(struct bch_fs *c, u32 id, u32 ancestor) 137 137 { 138 - bool ret; 138 + #ifdef CONFIG_BCACHEFS_DEBUG 139 + u32 orig_id = id; 140 + #endif 139 141 140 142 guard(rcu)(); 141 143 struct snapshot_table *t = rcu_dereference(c->snapshots); ··· 149 147 while (id && id < ancestor - IS_ANCESTOR_BITMAP) 150 148 id = get_ancestor_below(t, id, ancestor); 151 149 152 - ret = id && id < ancestor 150 + bool ret = id && id < ancestor 153 151 ? test_ancestor_bitmap(t, id, ancestor) 154 152 : id == ancestor; 155 153 156 - EBUG_ON(ret != __bch2_snapshot_is_ancestor_early(t, id, ancestor)); 154 + EBUG_ON(ret != __bch2_snapshot_is_ancestor_early(t, orig_id, ancestor)); 157 155 return ret; 158 156 } 159 157 ··· 871 869 872 870 for_each_btree_key_norestart(trans, iter, BTREE_ID_snapshot_trees, POS_MIN, 873 871 0, k, ret) { 874 - if (le32_to_cpu(bkey_s_c_to_snapshot_tree(k).v->root_snapshot) == id) { 872 + if (k.k->type == KEY_TYPE_snapshot_tree && 873 + le32_to_cpu(bkey_s_c_to_snapshot_tree(k).v->root_snapshot) == id) { 875 874 tree_id = k.k->p.offset; 876 875 break; 877 876 } ··· 900 897 901 898 for_each_btree_key_norestart(trans, iter, BTREE_ID_subvolumes, POS_MIN, 902 899 0, k, ret) { 903 - if (le32_to_cpu(bkey_s_c_to_subvolume(k).v->snapshot) == id) { 900 + if (k.k->type == KEY_TYPE_subvolume && 901 + le32_to_cpu(bkey_s_c_to_subvolume(k).v->snapshot) == id) { 904 902 snapshot->v.subvol = cpu_to_le32(k.k->p.offset); 905 903 SET_BCH_SNAPSHOT_SUBVOL(&snapshot->v, true); 906 904 break;
+11 -2
fs/bcachefs/super.c
··· 210 210 static int bch2_dev_sysfs_online(struct bch_fs *, struct bch_dev *); 211 211 static void bch2_dev_io_ref_stop(struct bch_dev *, int); 212 212 static void __bch2_dev_read_only(struct bch_fs *, struct bch_dev *); 213 - static int bch2_fs_init_rw(struct bch_fs *); 214 213 215 214 struct bch_fs *bch2_dev_to_fs(dev_t dev) 216 215 { ··· 793 794 return ret; 794 795 } 795 796 796 - static int bch2_fs_init_rw(struct bch_fs *c) 797 + int bch2_fs_init_rw(struct bch_fs *c) 797 798 { 798 799 if (test_bit(BCH_FS_rw_init_done, &c->flags)) 799 800 return 0; ··· 1013 1014 bch2_fs_vfs_init(c); 1014 1015 if (ret) 1015 1016 goto err; 1017 + 1018 + if (go_rw_in_recovery(c)) { 1019 + /* 1020 + * start workqueues/kworkers early - kthread creation checks for 1021 + * pending signals, which is _very_ annoying 1022 + */ 1023 + ret = bch2_fs_init_rw(c); 1024 + if (ret) 1025 + goto err; 1026 + } 1016 1027 1017 1028 #ifdef CONFIG_UNICODE 1018 1029 /* Default encoding until we can potentially have more as an option. */
+1
fs/bcachefs/super.h
··· 46 46 void bch2_fs_free(struct bch_fs *); 47 47 void bch2_fs_stop(struct bch_fs *); 48 48 49 + int bch2_fs_init_rw(struct bch_fs *); 49 50 int bch2_fs_start(struct bch_fs *); 50 51 struct bch_fs *bch2_fs_open(darray_const_str *, struct bch_opts *); 51 52
+28 -97
fs/bcachefs/trace.h
··· 1080 1080 __entry->must_wait) 1081 1081 ); 1082 1082 1083 - TRACE_EVENT(trans_restart_journal_preres_get, 1084 - TP_PROTO(struct btree_trans *trans, 1085 - unsigned long caller_ip, 1086 - unsigned flags), 1087 - TP_ARGS(trans, caller_ip, flags), 1088 - 1089 - TP_STRUCT__entry( 1090 - __array(char, trans_fn, 32 ) 1091 - __field(unsigned long, caller_ip ) 1092 - __field(unsigned, flags ) 1093 - ), 1094 - 1095 - TP_fast_assign( 1096 - strscpy(__entry->trans_fn, trans->fn, sizeof(__entry->trans_fn)); 1097 - __entry->caller_ip = caller_ip; 1098 - __entry->flags = flags; 1099 - ), 1100 - 1101 - TP_printk("%s %pS %x", __entry->trans_fn, 1102 - (void *) __entry->caller_ip, 1103 - __entry->flags) 1104 - ); 1105 - 1083 + #if 0 1084 + /* todo: bring back dynamic fault injection */ 1106 1085 DEFINE_EVENT(transaction_event, trans_restart_fault_inject, 1107 1086 TP_PROTO(struct btree_trans *trans, 1108 1087 unsigned long caller_ip), 1109 1088 TP_ARGS(trans, caller_ip) 1110 1089 ); 1090 + #endif 1111 1091 1112 1092 DEFINE_EVENT(transaction_event, trans_traverse_all, 1113 1093 TP_PROTO(struct btree_trans *trans, ··· 1175 1195 TP_ARGS(trans, caller_ip, path) 1176 1196 ); 1177 1197 1178 - DEFINE_EVENT(transaction_restart_iter, trans_restart_relock_after_fill, 1179 - TP_PROTO(struct btree_trans *trans, 1180 - unsigned long caller_ip, 1181 - struct btree_path *path), 1182 - TP_ARGS(trans, caller_ip, path) 1183 - ); 1184 - 1185 - DEFINE_EVENT(transaction_event, trans_restart_key_cache_upgrade, 1186 - TP_PROTO(struct btree_trans *trans, 1187 - unsigned long caller_ip), 1188 - TP_ARGS(trans, caller_ip) 1189 - ); 1190 - 1191 1198 DEFINE_EVENT(transaction_restart_iter, trans_restart_relock_key_cache_fill, 1192 1199 TP_PROTO(struct btree_trans *trans, 1193 1200 unsigned long caller_ip, ··· 1190 1223 ); 1191 1224 1192 1225 DEFINE_EVENT(transaction_restart_iter, trans_restart_relock_path_intent, 1193 - TP_PROTO(struct btree_trans *trans, 1194 - unsigned long caller_ip, 1195 - struct 
btree_path *path), 1196 - TP_ARGS(trans, caller_ip, path) 1197 - ); 1198 - 1199 - DEFINE_EVENT(transaction_restart_iter, trans_restart_traverse, 1200 1226 TP_PROTO(struct btree_trans *trans, 1201 1227 unsigned long caller_ip, 1202 1228 struct btree_path *path), ··· 1252 1292 __entry->trans_fn, 1253 1293 (void *) __entry->caller_ip, 1254 1294 __entry->bytes) 1255 - ); 1256 - 1257 - TRACE_EVENT(trans_restart_key_cache_key_realloced, 1258 - TP_PROTO(struct btree_trans *trans, 1259 - unsigned long caller_ip, 1260 - struct btree_path *path, 1261 - unsigned old_u64s, 1262 - unsigned new_u64s), 1263 - TP_ARGS(trans, caller_ip, path, old_u64s, new_u64s), 1264 - 1265 - TP_STRUCT__entry( 1266 - __array(char, trans_fn, 32 ) 1267 - __field(unsigned long, caller_ip ) 1268 - __field(enum btree_id, btree_id ) 1269 - TRACE_BPOS_entries(pos) 1270 - __field(u32, old_u64s ) 1271 - __field(u32, new_u64s ) 1272 - ), 1273 - 1274 - TP_fast_assign( 1275 - strscpy(__entry->trans_fn, trans->fn, sizeof(__entry->trans_fn)); 1276 - __entry->caller_ip = caller_ip; 1277 - 1278 - __entry->btree_id = path->btree_id; 1279 - TRACE_BPOS_assign(pos, path->pos); 1280 - __entry->old_u64s = old_u64s; 1281 - __entry->new_u64s = new_u64s; 1282 - ), 1283 - 1284 - TP_printk("%s %pS btree %s pos %llu:%llu:%u old_u64s %u new_u64s %u", 1285 - __entry->trans_fn, 1286 - (void *) __entry->caller_ip, 1287 - bch2_btree_id_str(__entry->btree_id), 1288 - __entry->pos_inode, 1289 - __entry->pos_offset, 1290 - __entry->pos_snapshot, 1291 - __entry->old_u64s, 1292 - __entry->new_u64s) 1293 1295 ); 1294 1296 1295 1297 DEFINE_EVENT(transaction_event, trans_restart_write_buffer_flush, ··· 1408 1486 ); 1409 1487 1410 1488 DEFINE_EVENT(fs_str, io_move_evacuate_bucket, 1489 + TP_PROTO(struct bch_fs *c, const char *str), 1490 + TP_ARGS(c, str) 1491 + ); 1492 + 1493 + DEFINE_EVENT(fs_str, extent_trim_atomic, 1494 + TP_PROTO(struct bch_fs *c, const char *str), 1495 + TP_ARGS(c, str) 1496 + ); 1497 + 1498 + DEFINE_EVENT(fs_str, 
btree_iter_peek_slot, 1499 + TP_PROTO(struct bch_fs *c, const char *str), 1500 + TP_ARGS(c, str) 1501 + ); 1502 + 1503 + DEFINE_EVENT(fs_str, __btree_iter_peek, 1504 + TP_PROTO(struct bch_fs *c, const char *str), 1505 + TP_ARGS(c, str) 1506 + ); 1507 + 1508 + DEFINE_EVENT(fs_str, btree_iter_peek_max, 1509 + TP_PROTO(struct bch_fs *c, const char *str), 1510 + TP_ARGS(c, str) 1511 + ); 1512 + 1513 + DEFINE_EVENT(fs_str, btree_iter_peek_prev_min, 1411 1514 TP_PROTO(struct bch_fs *c, const char *str), 1412 1515 TP_ARGS(c, str) 1413 1516 ); ··· 1849 1902 __entry->dup_locked) 1850 1903 ); 1851 1904 1852 - TRACE_EVENT(btree_path_free_trans_begin, 1853 - TP_PROTO(btree_path_idx_t path), 1854 - TP_ARGS(path), 1855 - 1856 - TP_STRUCT__entry( 1857 - __field(btree_path_idx_t, idx ) 1858 - ), 1859 - 1860 - TP_fast_assign( 1861 - __entry->idx = path; 1862 - ), 1863 - 1864 - TP_printk(" path %3u", __entry->idx) 1865 - ); 1866 - 1867 1905 #else /* CONFIG_BCACHEFS_PATH_TRACEPOINTS */ 1868 1906 #ifndef _TRACE_BCACHEFS_H 1869 1907 ··· 1866 1934 static inline void trace_btree_path_traverse_end(struct btree_trans *trans, struct btree_path *path) {} 1867 1935 static inline void trace_btree_path_set_pos(struct btree_trans *trans, struct btree_path *path, struct bpos *new_pos) {} 1868 1936 static inline void trace_btree_path_free(struct btree_trans *trans, btree_path_idx_t path, struct btree_path *dup) {} 1869 - static inline void trace_btree_path_free_trans_begin(btree_path_idx_t path) {} 1870 1937 1871 1938 #endif 1872 1939 #endif /* CONFIG_BCACHEFS_PATH_TRACEPOINTS */
+4
fs/fuse/inode.c
··· 9 9 #include "fuse_i.h" 10 10 #include "dev_uring_i.h" 11 11 12 + #include <linux/dax.h> 12 13 #include <linux/pagemap.h> 13 14 #include <linux/slab.h> 14 15 #include <linux/file.h> ··· 162 161 163 162 /* Will write inode on close/munmap and in all other dirtiers */ 164 163 WARN_ON(inode->i_state & I_DIRTY_INODE); 164 + 165 + if (FUSE_IS_DAX(inode)) 166 + dax_break_layout_final(inode); 165 167 166 168 truncate_inode_pages_final(&inode->i_data); 167 169 clear_inode(inode);
+84 -34
fs/nfs/flexfilelayout/flexfilelayout.c
··· 1105 1105 } 1106 1106 1107 1107 static int ff_layout_async_handle_error_v4(struct rpc_task *task, 1108 + u32 op_status, 1108 1109 struct nfs4_state *state, 1109 1110 struct nfs_client *clp, 1110 1111 struct pnfs_layout_segment *lseg, ··· 1116 1115 struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx); 1117 1116 struct nfs4_slot_table *tbl = &clp->cl_session->fc_slot_table; 1118 1117 1119 - switch (task->tk_status) { 1120 - case -NFS4ERR_BADSESSION: 1121 - case -NFS4ERR_BADSLOT: 1122 - case -NFS4ERR_BAD_HIGH_SLOT: 1123 - case -NFS4ERR_DEADSESSION: 1124 - case -NFS4ERR_CONN_NOT_BOUND_TO_SESSION: 1125 - case -NFS4ERR_SEQ_FALSE_RETRY: 1126 - case -NFS4ERR_SEQ_MISORDERED: 1118 + switch (op_status) { 1119 + case NFS4_OK: 1120 + case NFS4ERR_NXIO: 1121 + break; 1122 + case NFSERR_PERM: 1123 + if (!task->tk_xprt) 1124 + break; 1125 + xprt_force_disconnect(task->tk_xprt); 1126 + goto out_retry; 1127 + case NFS4ERR_BADSESSION: 1128 + case NFS4ERR_BADSLOT: 1129 + case NFS4ERR_BAD_HIGH_SLOT: 1130 + case NFS4ERR_DEADSESSION: 1131 + case NFS4ERR_CONN_NOT_BOUND_TO_SESSION: 1132 + case NFS4ERR_SEQ_FALSE_RETRY: 1133 + case NFS4ERR_SEQ_MISORDERED: 1127 1134 dprintk("%s ERROR %d, Reset session. 
Exchangeid " 1128 1135 "flags 0x%x\n", __func__, task->tk_status, 1129 1136 clp->cl_exchange_flags); 1130 1137 nfs4_schedule_session_recovery(clp->cl_session, task->tk_status); 1131 - break; 1132 - case -NFS4ERR_DELAY: 1138 + goto out_retry; 1139 + case NFS4ERR_DELAY: 1133 1140 nfs_inc_stats(lseg->pls_layout->plh_inode, NFSIOS_DELAY); 1134 1141 fallthrough; 1135 - case -NFS4ERR_GRACE: 1142 + case NFS4ERR_GRACE: 1136 1143 rpc_delay(task, FF_LAYOUT_POLL_RETRY_MAX); 1137 - break; 1138 - case -NFS4ERR_RETRY_UNCACHED_REP: 1139 - break; 1144 + goto out_retry; 1145 + case NFS4ERR_RETRY_UNCACHED_REP: 1146 + goto out_retry; 1140 1147 /* Invalidate Layout errors */ 1141 - case -NFS4ERR_PNFS_NO_LAYOUT: 1142 - case -ESTALE: /* mapped NFS4ERR_STALE */ 1143 - case -EBADHANDLE: /* mapped NFS4ERR_BADHANDLE */ 1144 - case -EISDIR: /* mapped NFS4ERR_ISDIR */ 1145 - case -NFS4ERR_FHEXPIRED: 1146 - case -NFS4ERR_WRONG_TYPE: 1148 + case NFS4ERR_PNFS_NO_LAYOUT: 1149 + case NFS4ERR_STALE: 1150 + case NFS4ERR_BADHANDLE: 1151 + case NFS4ERR_ISDIR: 1152 + case NFS4ERR_FHEXPIRED: 1153 + case NFS4ERR_WRONG_TYPE: 1147 1154 dprintk("%s Invalid layout error %d\n", __func__, 1148 1155 task->tk_status); 1149 1156 /* ··· 1164 1155 pnfs_destroy_layout(NFS_I(inode)); 1165 1156 rpc_wake_up(&tbl->slot_tbl_waitq); 1166 1157 goto reset; 1158 + default: 1159 + break; 1160 + } 1161 + 1162 + switch (task->tk_status) { 1167 1163 /* RPC connection errors */ 1168 1164 case -ENETDOWN: 1169 1165 case -ENETUNREACH: ··· 1188 1174 nfs4_delete_deviceid(devid->ld, devid->nfs_client, 1189 1175 &devid->deviceid); 1190 1176 rpc_wake_up(&tbl->slot_tbl_waitq); 1191 - fallthrough; 1177 + break; 1192 1178 default: 1193 - if (ff_layout_avoid_mds_available_ds(lseg)) 1194 - return -NFS4ERR_RESET_TO_PNFS; 1195 - reset: 1196 - dprintk("%s Retry through MDS. 
Error %d\n", __func__, 1197 - task->tk_status); 1198 - return -NFS4ERR_RESET_TO_MDS; 1179 + break; 1199 1180 } 1181 + 1182 + if (ff_layout_avoid_mds_available_ds(lseg)) 1183 + return -NFS4ERR_RESET_TO_PNFS; 1184 + reset: 1185 + dprintk("%s Retry through MDS. Error %d\n", __func__, 1186 + task->tk_status); 1187 + return -NFS4ERR_RESET_TO_MDS; 1188 + 1189 + out_retry: 1200 1190 task->tk_status = 0; 1201 1191 return -EAGAIN; 1202 1192 } 1203 1193 1204 1194 /* Retry all errors through either pNFS or MDS except for -EJUKEBOX */ 1205 1195 static int ff_layout_async_handle_error_v3(struct rpc_task *task, 1196 + u32 op_status, 1206 1197 struct nfs_client *clp, 1207 1198 struct pnfs_layout_segment *lseg, 1208 1199 u32 idx) 1209 1200 { 1210 1201 struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx); 1202 + 1203 + switch (op_status) { 1204 + case NFS_OK: 1205 + case NFSERR_NXIO: 1206 + break; 1207 + case NFSERR_PERM: 1208 + if (!task->tk_xprt) 1209 + break; 1210 + xprt_force_disconnect(task->tk_xprt); 1211 + goto out_retry; 1212 + case NFSERR_ACCES: 1213 + case NFSERR_BADHANDLE: 1214 + case NFSERR_FBIG: 1215 + case NFSERR_IO: 1216 + case NFSERR_NOSPC: 1217 + case NFSERR_ROFS: 1218 + case NFSERR_STALE: 1219 + goto out_reset_to_pnfs; 1220 + case NFSERR_JUKEBOX: 1221 + nfs_inc_stats(lseg->pls_layout->plh_inode, NFSIOS_DELAY); 1222 + goto out_retry; 1223 + default: 1224 + break; 1225 + } 1211 1226 1212 1227 switch (task->tk_status) { 1213 1228 /* File access problems. Don't mark the device as unavailable */ ··· 1261 1218 nfs4_delete_deviceid(devid->ld, devid->nfs_client, 1262 1219 &devid->deviceid); 1263 1220 } 1221 + out_reset_to_pnfs: 1264 1222 /* FIXME: Need to prevent infinite looping here. 
*/ 1265 1223 return -NFS4ERR_RESET_TO_PNFS; 1266 1224 out_retry: ··· 1272 1228 } 1273 1229 1274 1230 static int ff_layout_async_handle_error(struct rpc_task *task, 1231 + u32 op_status, 1275 1232 struct nfs4_state *state, 1276 1233 struct nfs_client *clp, 1277 1234 struct pnfs_layout_segment *lseg, ··· 1291 1246 1292 1247 switch (vers) { 1293 1248 case 3: 1294 - return ff_layout_async_handle_error_v3(task, clp, lseg, idx); 1295 - case 4: 1296 - return ff_layout_async_handle_error_v4(task, state, clp, 1249 + return ff_layout_async_handle_error_v3(task, op_status, clp, 1297 1250 lseg, idx); 1251 + case 4: 1252 + return ff_layout_async_handle_error_v4(task, op_status, state, 1253 + clp, lseg, idx); 1298 1254 default: 1299 1255 /* should never happen */ 1300 1256 WARN_ON_ONCE(1); ··· 1348 1302 switch (status) { 1349 1303 case NFS4ERR_DELAY: 1350 1304 case NFS4ERR_GRACE: 1305 + case NFS4ERR_PERM: 1351 1306 break; 1352 1307 case NFS4ERR_NXIO: 1353 1308 ff_layout_mark_ds_unreachable(lseg, idx); ··· 1381 1334 trace_ff_layout_read_error(hdr, task->tk_status); 1382 1335 } 1383 1336 1384 - err = ff_layout_async_handle_error(task, hdr->args.context->state, 1337 + err = ff_layout_async_handle_error(task, hdr->res.op_status, 1338 + hdr->args.context->state, 1385 1339 hdr->ds_clp, hdr->lseg, 1386 1340 hdr->pgio_mirror_idx); 1387 1341 ··· 1555 1507 trace_ff_layout_write_error(hdr, task->tk_status); 1556 1508 } 1557 1509 1558 - err = ff_layout_async_handle_error(task, hdr->args.context->state, 1510 + err = ff_layout_async_handle_error(task, hdr->res.op_status, 1511 + hdr->args.context->state, 1559 1512 hdr->ds_clp, hdr->lseg, 1560 1513 hdr->pgio_mirror_idx); 1561 1514 ··· 1605 1556 trace_ff_layout_commit_error(data, task->tk_status); 1606 1557 } 1607 1558 1608 - err = ff_layout_async_handle_error(task, NULL, data->ds_clp, 1609 - data->lseg, data->ds_commit_index); 1559 + err = ff_layout_async_handle_error(task, data->res.op_status, 1560 + NULL, data->ds_clp, data->lseg, 1561 + 
data->ds_commit_index); 1610 1562 1611 1563 trace_nfs4_pnfs_commit_ds(data, err); 1612 1564 switch (err) {
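The flexfilelayout.c hunks above split error handling into two stages: the per-operation NFS `op_status` is examined first, and only statuses it does not resolve fall through to the RPC transport status. A hedged sketch of that demultiplexing; the constants and outcomes here are illustrative placeholders, not the kernel's actual tables:

```c
/* Illustrative outcomes of the layered classification. */
enum outcome { RETRY, RESET_TO_PNFS, RESET_TO_MDS, FALL_THROUGH };

/* Stage 1: the NFS-level status of the failed operation. */
static enum outcome classify_op_status(int op_status)
{
        switch (op_status) {
        case 0:                       /* OK: nothing to handle here */
                return FALL_THROUGH;
        case 70:                      /* e.g. a stale-handle class error */
        case 5:                       /* e.g. an I/O class error */
                return RESET_TO_PNFS;
        case 10008:                   /* e.g. a delay class error */
                return RETRY;
        default:
                return FALL_THROUGH;  /* defer to the RPC status */
        }
}

/* Stage 2: only consulted when stage 1 declined to decide. */
static enum outcome classify_error(int op_status, int rpc_status)
{
        enum outcome o = classify_op_status(op_status);

        if (o != FALL_THROUGH)
                return o;
        /* negative rpc_status models a transport-level failure */
        return rpc_status < 0 ? RETRY : RESET_TO_MDS;
}
```

The point of the split is that a server can return an NFS error on a successful RPC; keying everything off `task->tk_status` alone conflated the two layers.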
+14 -3
fs/nfs/inode.c
··· 2589 2589 static int nfs_net_init(struct net *net) 2590 2590 { 2591 2591 struct nfs_net *nn = net_generic(net, nfs_net_id); 2592 + int err; 2592 2593 2593 2594 nfs_clients_init(net); 2594 2595 2595 2596 if (!rpc_proc_register(net, &nn->rpcstats)) { 2596 - nfs_clients_exit(net); 2597 - return -ENOMEM; 2597 + err = -ENOMEM; 2598 + goto err_proc_rpc; 2598 2599 } 2599 2600 2600 - return nfs_fs_proc_net_init(net); 2601 + err = nfs_fs_proc_net_init(net); 2602 + if (err) 2603 + goto err_proc_nfs; 2604 + 2605 + return 0; 2606 + 2607 + err_proc_nfs: 2608 + rpc_proc_unregister(net, "nfs"); 2609 + err_proc_rpc: 2610 + nfs_clients_exit(net); 2611 + return err; 2601 2612 } 2602 2613 2603 2614 static void nfs_net_exit(struct net *net)
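The nfs/inode.c hunk above fixes the unwind ordering in `nfs_net_init`: if the later proc step fails, the already-registered rpc proc entry is now unregistered instead of leaked. A hedged sketch of the goto-ladder pattern, with stand-in step/undo helpers in place of the real registration calls:

```c
/* Test scaffolding: which step fails (0 = none), and cleanup counters. */
static int fail_at;
static int proc_undone, clients_undone;

static int step(int n)        { return fail_at == n ? -1 : 0; }
static void undo_proc(void)   { proc_undone++; }
static void undo_clients(void){ clients_undone++; }

/*
 * Each successful step gains a matching cleanup label; on failure, the
 * labels run in reverse order of initialization.
 */
static int net_init(void)
{
        int err;

        /* the client list is set up unconditionally first */
        err = step(1);             /* like the rpc proc registration */
        if (err)
                goto err_clients;

        err = step(2);             /* like the nfs proc_net setup */
        if (err)
                goto err_proc_rpc;

        return 0;

err_proc_rpc:
        undo_proc();               /* unregister the rpc proc entry */
err_clients:
        undo_clients();            /* tear down the client list */
        return err;
}
```

Failure at step 2 now runs both cleanups; failure at step 1 runs only the client teardown, matching the order the resources were acquired.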
+3 -1
fs/nfs/pnfs.c
··· 2059 2059 static void nfs_layoutget_end(struct pnfs_layout_hdr *lo) 2060 2060 { 2061 2061 if (atomic_dec_and_test(&lo->plh_outstanding) && 2062 - test_and_clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags)) 2062 + test_and_clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags)) { 2063 + smp_mb__after_atomic(); 2063 2064 wake_up_bit(&lo->plh_flags, NFS_LAYOUT_DRAIN); 2065 + } 2064 2066 } 2065 2067 2066 2068 static bool pnfs_is_first_layoutget(struct pnfs_layout_hdr *lo)
+1 -1
fs/proc/task_mmu.c
··· 2182 2182 categories |= PAGE_IS_FILE; 2183 2183 } 2184 2184 2185 - if (is_zero_pfn(pmd_pfn(pmd))) 2185 + if (is_huge_zero_pmd(pmd)) 2186 2186 categories |= PAGE_IS_PFNZERO; 2187 2187 if (pmd_soft_dirty(pmd)) 2188 2188 categories |= PAGE_IS_SOFT_DIRTY;
+1
fs/smb/client/cifsglob.h
··· 709 709 struct TCP_Server_Info { 710 710 struct list_head tcp_ses_list; 711 711 struct list_head smb_ses_list; 712 + struct list_head rlist; /* reconnect list */ 712 713 spinlock_t srv_lock; /* protect anything here that is not protected */ 713 714 __u64 conn_id; /* connection identifier (useful for debugging) */ 714 715 int srv_count; /* reference counter */
+36 -22
fs/smb/client/connect.c
··· 124 124 (SMB_INTERFACE_POLL_INTERVAL * HZ)); 125 125 } 126 126 127 + #define set_need_reco(server) \ 128 + do { \ 129 + spin_lock(&server->srv_lock); \ 130 + if (server->tcpStatus != CifsExiting) \ 131 + server->tcpStatus = CifsNeedReconnect; \ 132 + spin_unlock(&server->srv_lock); \ 133 + } while (0) 134 + 127 135 /* 128 136 * Update the tcpStatus for the server. 129 137 * This is used to signal the cifsd thread to call cifs_reconnect ··· 145 137 cifs_signal_cifsd_for_reconnect(struct TCP_Server_Info *server, 146 138 bool all_channels) 147 139 { 148 - struct TCP_Server_Info *pserver; 140 + struct TCP_Server_Info *nserver; 149 141 struct cifs_ses *ses; 142 + LIST_HEAD(reco); 150 143 int i; 151 - 152 - /* If server is a channel, select the primary channel */ 153 - pserver = SERVER_IS_CHAN(server) ? server->primary_server : server; 154 144 155 145 /* if we need to signal just this channel */ 156 146 if (!all_channels) { 157 - spin_lock(&server->srv_lock); 158 - if (server->tcpStatus != CifsExiting) 159 - server->tcpStatus = CifsNeedReconnect; 160 - spin_unlock(&server->srv_lock); 147 + set_need_reco(server); 161 148 return; 162 149 } 163 150 164 - spin_lock(&cifs_tcp_ses_lock); 165 - list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) { 166 - if (cifs_ses_exiting(ses)) 167 - continue; 168 - spin_lock(&ses->chan_lock); 169 - for (i = 0; i < ses->chan_count; i++) { 170 - if (!ses->chans[i].server) 151 + if (SERVER_IS_CHAN(server)) 152 + server = server->primary_server; 153 + scoped_guard(spinlock, &cifs_tcp_ses_lock) { 154 + set_need_reco(server); 155 + list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) { 156 + spin_lock(&ses->ses_lock); 157 + if (ses->ses_status == SES_EXITING) { 158 + spin_unlock(&ses->ses_lock); 171 159 continue; 172 - 173 - spin_lock(&ses->chans[i].server->srv_lock); 174 - if (ses->chans[i].server->tcpStatus != CifsExiting) 175 - ses->chans[i].server->tcpStatus = CifsNeedReconnect; 176 - 
spin_unlock(&ses->chans[i].server->srv_lock); 160 + } 161 + spin_lock(&ses->chan_lock); 162 + for (i = 1; i < ses->chan_count; i++) { 163 + nserver = ses->chans[i].server; 164 + if (!nserver) 165 + continue; 166 + nserver->srv_count++; 167 + list_add(&nserver->rlist, &reco); 168 + } 169 + spin_unlock(&ses->chan_lock); 170 + spin_unlock(&ses->ses_lock); 177 171 } 178 - spin_unlock(&ses->chan_lock); 179 172 } 180 - spin_unlock(&cifs_tcp_ses_lock); 173 + 174 + list_for_each_entry_safe(server, nserver, &reco, rlist) { 175 + list_del_init(&server->rlist); 176 + set_need_reco(server); 177 + cifs_put_tcp_session(server, 0); 178 + } 181 179 } 182 180 183 181 /*
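The cifs connect.c hunk above restructures reconnect signalling into two phases: under the global session-list lock, each channel's server is only referenced and collected onto a private list; marking for reconnect and dropping the reference happen after the lock is released. A hedged sketch of that collect-then-act pattern, with plain counters modelling the refcounting:

```c
#include <stddef.h>

/* Illustrative server: a refcount and a reconnect flag. */
struct srv { int refs; int need_reco; };

static void collect_and_mark(struct srv **chans, size_t n)
{
        struct srv *batch[16];
        size_t nr = 0;

        /* phase 1: "under the lock", only grab references */
        for (size_t i = 0; i < n && nr < 16; i++) {
                if (!chans[i])
                        continue;
                chans[i]->refs++;
                batch[nr++] = chans[i];
        }

        /* phase 2: "lock dropped", now mark and put each reference */
        for (size_t i = 0; i < nr; i++) {
                batch[i]->need_reco = 1;
                batch[i]->refs--;
        }
}
```

The reference taken in phase 1 is what keeps each server alive between dropping the global lock and acting on it in phase 2.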
+4 -16
fs/smb/client/reparse.c
··· 875 875 abs_path += sizeof("\\DosDevices\\")-1; 876 876 else if (strstarts(abs_path, "\\GLOBAL??\\")) 877 877 abs_path += sizeof("\\GLOBAL??\\")-1; 878 - else { 879 - /* Unhandled absolute symlink, points outside of DOS/Win32 */ 880 - cifs_dbg(VFS, 881 - "absolute symlink '%s' cannot be converted from NT format " 882 - "because points to unknown target\n", 883 - smb_target); 884 - rc = -EIO; 885 - goto out; 886 - } 878 + else 879 + goto out_unhandled_target; 887 880 888 881 /* Sometimes path separator after \?? is double backslash */ 889 882 if (abs_path[0] == '\\') ··· 903 910 abs_path++; 904 911 abs_path[0] = drive_letter; 905 912 } else { 906 - /* Unhandled absolute symlink. Report an error. */ 907 - cifs_dbg(VFS, 908 - "absolute symlink '%s' cannot be converted from NT format " 909 - "because points to unknown target\n", 910 - smb_target); 911 - rc = -EIO; 912 - goto out; 913 + goto out_unhandled_target; 913 914 } 914 915 915 916 abs_path_len = strlen(abs_path)+1; ··· 953 966 * These paths have same format as Linux symlinks, so no 954 967 * conversion is needed. 955 968 */ 969 + out_unhandled_target: 956 970 linux_target = smb_target; 957 971 smb_target = NULL; 958 972 }
+57 -104
fs/smb/client/smbdirect.c
··· 907 907 .local_dma_lkey = sc->ib.pd->local_dma_lkey, 908 908 .direction = DMA_TO_DEVICE, 909 909 }; 910 + size_t payload_len = umin(*_remaining_data_length, 911 + sp->max_send_size - sizeof(*packet)); 910 912 911 - rc = smb_extract_iter_to_rdma(iter, *_remaining_data_length, 913 + rc = smb_extract_iter_to_rdma(iter, payload_len, 912 914 &extract); 913 915 if (rc < 0) 914 916 goto err_dma; ··· 1013 1011 1014 1012 info->count_send_empty++; 1015 1013 return smbd_post_send_iter(info, NULL, &remaining_data_length); 1014 + } 1015 + 1016 + static int smbd_post_send_full_iter(struct smbd_connection *info, 1017 + struct iov_iter *iter, 1018 + int *_remaining_data_length) 1019 + { 1020 + int rc = 0; 1021 + 1022 + /* 1023 + * smbd_post_send_iter() respects the 1024 + * negotiated max_send_size, so we need to 1025 + * loop until the full iter is posted 1026 + */ 1027 + 1028 + while (iov_iter_count(iter) > 0) { 1029 + rc = smbd_post_send_iter(info, iter, _remaining_data_length); 1030 + if (rc < 0) 1031 + break; 1032 + } 1033 + 1034 + return rc; 1016 1035 } 1017 1036 1018 1037 /* ··· 1475 1452 char name[MAX_NAME_LEN]; 1476 1453 int rc; 1477 1454 1455 + if (WARN_ON_ONCE(sp->max_recv_size < sizeof(struct smbdirect_data_transfer))) 1456 + return -ENOMEM; 1457 + 1478 1458 scnprintf(name, MAX_NAME_LEN, "smbd_request_%p", info); 1479 1459 info->request_cache = 1480 1460 kmem_cache_create( ··· 1495 1469 goto out1; 1496 1470 1497 1471 scnprintf(name, MAX_NAME_LEN, "smbd_response_%p", info); 1472 + 1473 + struct kmem_cache_args response_args = { 1474 + .align = __alignof__(struct smbd_response), 1475 + .useroffset = (offsetof(struct smbd_response, packet) + 1476 + sizeof(struct smbdirect_data_transfer)), 1477 + .usersize = sp->max_recv_size - sizeof(struct smbdirect_data_transfer), 1478 + }; 1498 1479 info->response_cache = 1499 - kmem_cache_create( 1500 - name, 1501 - sizeof(struct smbd_response) + 1502 - sp->max_recv_size, 1503 - 0, SLAB_HWCACHE_ALIGN, NULL); 1480 + 
kmem_cache_create(name, 1481 + sizeof(struct smbd_response) + sp->max_recv_size, 1482 + &response_args, SLAB_HWCACHE_ALIGN); 1504 1483 if (!info->response_cache) 1505 1484 goto out2; 1506 1485 ··· 1778 1747 } 1779 1748 1780 1749 /* 1781 - * Receive data from receive reassembly queue 1750 + * Receive data from the transport's receive reassembly queue 1782 1751 * All the incoming data packets are placed in reassembly queue 1783 - * buf: the buffer to read data into 1752 + * iter: the buffer to read data into 1784 1753 * size: the length of data to read 1785 1754 * return value: actual data read 1786 - * Note: this implementation copies the data from reassebmly queue to receive 1755 + * 1756 + * Note: this implementation copies the data from reassembly queue to receive 1787 1757 * buffers used by upper layer. This is not the optimal code path. A better way 1788 1758 * to do it is to not have upper layer allocate its receive buffers but rather 1789 1759 * borrow the buffer from reassembly queue, and return it after data is 1790 1760 * consumed. But this will require more changes to upper layer code, and also 1791 1761 * need to consider packet boundaries while they still being reassembled. 1792 1762 */ 1793 - static int smbd_recv_buf(struct smbd_connection *info, char *buf, 1794 - unsigned int size) 1763 + int smbd_recv(struct smbd_connection *info, struct msghdr *msg) 1795 1764 { 1796 1765 struct smbdirect_socket *sc = &info->socket; 1797 1766 struct smbd_response *response; 1798 1767 struct smbdirect_data_transfer *data_transfer; 1768 + size_t size = iov_iter_count(&msg->msg_iter); 1799 1769 int to_copy, to_read, data_read, offset; 1800 1770 u32 data_length, remaining_data_length, data_offset; 1801 1771 int rc; 1772 + 1773 + if (WARN_ON_ONCE(iov_iter_rw(&msg->msg_iter) == WRITE)) 1774 + return -EINVAL; /* It's a bug in upper layer to get there */ 1802 1775 1803 1776 again: 1804 1777 /* ··· 1810 1775 * the only one reading from the front of the queue. 
The transport 1811 1776 * may add more entries to the back of the queue at the same time 1812 1777 */ 1813 - log_read(INFO, "size=%d info->reassembly_data_length=%d\n", size, 1778 + log_read(INFO, "size=%zd info->reassembly_data_length=%d\n", size, 1814 1779 info->reassembly_data_length); 1815 1780 if (info->reassembly_data_length >= size) { 1816 1781 int queue_length; ··· 1848 1813 if (response->first_segment && size == 4) { 1849 1814 unsigned int rfc1002_len = 1850 1815 data_length + remaining_data_length; 1851 - *((__be32 *)buf) = cpu_to_be32(rfc1002_len); 1816 + __be32 rfc1002_hdr = cpu_to_be32(rfc1002_len); 1817 + if (copy_to_iter(&rfc1002_hdr, sizeof(rfc1002_hdr), 1818 + &msg->msg_iter) != sizeof(rfc1002_hdr)) 1819 + return -EFAULT; 1852 1820 data_read = 4; 1853 1821 response->first_segment = false; 1854 1822 log_read(INFO, "returning rfc1002 length %d\n", ··· 1860 1822 } 1861 1823 1862 1824 to_copy = min_t(int, data_length - offset, to_read); 1863 - memcpy( 1864 - buf + data_read, 1865 - (char *)data_transfer + data_offset + offset, 1866 - to_copy); 1825 + if (copy_to_iter((char *)data_transfer + data_offset + offset, 1826 + to_copy, &msg->msg_iter) != to_copy) 1827 + return -EFAULT; 1867 1828 1868 1829 /* move on to the next buffer? 
*/ 1869 1830 if (to_copy == data_length - offset) { ··· 1928 1891 } 1929 1892 1930 1893 /* 1931 - * Receive a page from receive reassembly queue 1932 - * page: the page to read data into 1933 - * to_read: the length of data to read 1934 - * return value: actual data read 1935 - */ 1936 - static int smbd_recv_page(struct smbd_connection *info, 1937 - struct page *page, unsigned int page_offset, 1938 - unsigned int to_read) 1939 - { 1940 - struct smbdirect_socket *sc = &info->socket; 1941 - int ret; 1942 - char *to_address; 1943 - void *page_address; 1944 - 1945 - /* make sure we have the page ready for read */ 1946 - ret = wait_event_interruptible( 1947 - info->wait_reassembly_queue, 1948 - info->reassembly_data_length >= to_read || 1949 - sc->status != SMBDIRECT_SOCKET_CONNECTED); 1950 - if (ret) 1951 - return ret; 1952 - 1953 - /* now we can read from reassembly queue and not sleep */ 1954 - page_address = kmap_atomic(page); 1955 - to_address = (char *) page_address + page_offset; 1956 - 1957 - log_read(INFO, "reading from page=%p address=%p to_read=%d\n", 1958 - page, to_address, to_read); 1959 - 1960 - ret = smbd_recv_buf(info, to_address, to_read); 1961 - kunmap_atomic(page_address); 1962 - 1963 - return ret; 1964 - } 1965 - 1966 - /* 1967 - * Receive data from transport 1968 - * msg: a msghdr point to the buffer, can be ITER_KVEC or ITER_BVEC 1969 - * return: total bytes read, or 0. SMB Direct will not do partial read. 
1970 - */ 1971 - int smbd_recv(struct smbd_connection *info, struct msghdr *msg) 1972 - { 1973 - char *buf; 1974 - struct page *page; 1975 - unsigned int to_read, page_offset; 1976 - int rc; 1977 - 1978 - if (iov_iter_rw(&msg->msg_iter) == WRITE) { 1979 - /* It's a bug in upper layer to get there */ 1980 - cifs_dbg(VFS, "Invalid msg iter dir %u\n", 1981 - iov_iter_rw(&msg->msg_iter)); 1982 - rc = -EINVAL; 1983 - goto out; 1984 - } 1985 - 1986 - switch (iov_iter_type(&msg->msg_iter)) { 1987 - case ITER_KVEC: 1988 - buf = msg->msg_iter.kvec->iov_base; 1989 - to_read = msg->msg_iter.kvec->iov_len; 1990 - rc = smbd_recv_buf(info, buf, to_read); 1991 - break; 1992 - 1993 - case ITER_BVEC: 1994 - page = msg->msg_iter.bvec->bv_page; 1995 - page_offset = msg->msg_iter.bvec->bv_offset; 1996 - to_read = msg->msg_iter.bvec->bv_len; 1997 - rc = smbd_recv_page(info, page, page_offset, to_read); 1998 - break; 1999 - 2000 - default: 2001 - /* It's a bug in upper layer to get there */ 2002 - cifs_dbg(VFS, "Invalid msg type %d\n", 2003 - iov_iter_type(&msg->msg_iter)); 2004 - rc = -EINVAL; 2005 - } 2006 - 2007 - out: 2008 - /* SMBDirect will read it all or nothing */ 2009 - if (rc > 0) 2010 - msg->msg_iter.count = 0; 2011 - return rc; 2012 - } 2013 - 2014 - /* 2015 1894 * Send data to transport 2016 1895 * Each rqst is transported as a SMBDirect payload 2017 1896 * rqst: the data to write ··· 1985 2032 klen += rqst->rq_iov[i].iov_len; 1986 2033 iov_iter_kvec(&iter, ITER_SOURCE, rqst->rq_iov, rqst->rq_nvec, klen); 1987 2034 1988 - rc = smbd_post_send_iter(info, &iter, &remaining_data_length); 2035 + rc = smbd_post_send_full_iter(info, &iter, &remaining_data_length); 1989 2036 if (rc < 0) 1990 2037 break; 1991 2038 1992 2039 if (iov_iter_count(&rqst->rq_iter) > 0) { 1993 2040 /* And then the data pages if there are any */ 1994 - rc = smbd_post_send_iter(info, &rqst->rq_iter, 1995 - &remaining_data_length); 2041 + rc = smbd_post_send_full_iter(info, &rqst->rq_iter, 2042 + 
&remaining_data_length); 1996 2043 if (rc < 0) 1997 2044 break; 1998 2045 }
+12 -12
fs/smb/client/trace.h
··· 140 140 __entry->len = len; 141 141 __entry->rc = rc; 142 142 ), 143 - TP_printk("\tR=%08x[%x] xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d", 143 + TP_printk("R=%08x[%x] xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d", 144 144 __entry->rreq_debug_id, __entry->rreq_debug_index, 145 145 __entry->xid, __entry->sesid, __entry->tid, __entry->fid, 146 146 __entry->offset, __entry->len, __entry->rc) ··· 190 190 __entry->len = len; 191 191 __entry->rc = rc; 192 192 ), 193 - TP_printk("\txid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d", 193 + TP_printk("xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d", 194 194 __entry->xid, __entry->sesid, __entry->tid, __entry->fid, 195 195 __entry->offset, __entry->len, __entry->rc) 196 196 ) ··· 247 247 __entry->len = len; 248 248 __entry->rc = rc; 249 249 ), 250 - TP_printk("\txid=%u sid=0x%llx tid=0x%x source fid=0x%llx source offset=0x%llx target fid=0x%llx target offset=0x%llx len=0x%x rc=%d", 250 + TP_printk("xid=%u sid=0x%llx tid=0x%x source fid=0x%llx source offset=0x%llx target fid=0x%llx target offset=0x%llx len=0x%x rc=%d", 251 251 __entry->xid, __entry->sesid, __entry->tid, __entry->target_fid, 252 252 __entry->src_offset, __entry->target_fid, __entry->target_offset, __entry->len, __entry->rc) 253 253 ) ··· 298 298 __entry->target_offset = target_offset; 299 299 __entry->len = len; 300 300 ), 301 - TP_printk("\txid=%u sid=0x%llx tid=0x%x source fid=0x%llx source offset=0x%llx target fid=0x%llx target offset=0x%llx len=0x%x", 301 + TP_printk("xid=%u sid=0x%llx tid=0x%x source fid=0x%llx source offset=0x%llx target fid=0x%llx target offset=0x%llx len=0x%x", 302 302 __entry->xid, __entry->sesid, __entry->tid, __entry->target_fid, 303 303 __entry->src_offset, __entry->target_fid, __entry->target_offset, __entry->len) 304 304 ) ··· 482 482 __entry->tid = tid; 483 483 __entry->sesid = sesid; 484 484 ), 485 - TP_printk("\txid=%u sid=0x%llx tid=0x%x 
fid=0x%llx", 485 + TP_printk("xid=%u sid=0x%llx tid=0x%x fid=0x%llx", 486 486 __entry->xid, __entry->sesid, __entry->tid, __entry->fid) 487 487 ) 488 488 ··· 521 521 __entry->sesid = sesid; 522 522 __entry->rc = rc; 523 523 ), 524 - TP_printk("\txid=%u sid=0x%llx tid=0x%x fid=0x%llx rc=%d", 524 + TP_printk("xid=%u sid=0x%llx tid=0x%x fid=0x%llx rc=%d", 525 525 __entry->xid, __entry->sesid, __entry->tid, __entry->fid, 526 526 __entry->rc) 527 527 ) ··· 794 794 __entry->status = status; 795 795 __entry->rc = rc; 796 796 ), 797 - TP_printk("\tsid=0x%llx tid=0x%x cmd=%u mid=%llu status=0x%x rc=%d", 797 + TP_printk("sid=0x%llx tid=0x%x cmd=%u mid=%llu status=0x%x rc=%d", 798 798 __entry->sesid, __entry->tid, __entry->cmd, __entry->mid, 799 799 __entry->status, __entry->rc) 800 800 ) ··· 829 829 __entry->cmd = cmd; 830 830 __entry->mid = mid; 831 831 ), 832 - TP_printk("\tsid=0x%llx tid=0x%x cmd=%u mid=%llu", 832 + TP_printk("sid=0x%llx tid=0x%x cmd=%u mid=%llu", 833 833 __entry->sesid, __entry->tid, 834 834 __entry->cmd, __entry->mid) 835 835 ) ··· 867 867 __entry->when_sent = when_sent; 868 868 __entry->when_received = when_received; 869 869 ), 870 - TP_printk("\tcmd=%u mid=%llu pid=%u, when_sent=%lu when_rcv=%lu", 870 + TP_printk("cmd=%u mid=%llu pid=%u, when_sent=%lu when_rcv=%lu", 871 871 __entry->cmd, __entry->mid, __entry->pid, __entry->when_sent, 872 872 __entry->when_received) 873 873 ) ··· 898 898 __assign_str(func_name); 899 899 __entry->rc = rc; 900 900 ), 901 - TP_printk("\t%s: xid=%u rc=%d", 901 + TP_printk("%s: xid=%u rc=%d", 902 902 __get_str(func_name), __entry->xid, __entry->rc) 903 903 ) 904 904 ··· 924 924 __entry->ino = ino; 925 925 __entry->rc = rc; 926 926 ), 927 - TP_printk("\tino=%lu rc=%d", 927 + TP_printk("ino=%lu rc=%d", 928 928 __entry->ino, __entry->rc) 929 929 ) 930 930 ··· 950 950 __entry->xid = xid; 951 951 __assign_str(func_name); 952 952 ), 953 - TP_printk("\t%s: xid=%u", 953 + TP_printk("%s: xid=%u", 954 954 __get_str(func_name), 
__entry->xid) 955 955 ) 956 956
+33 -8
fs/xfs/libxfs/xfs_alloc.c
··· 3444 3444 3445 3445 set_bit(XFS_AGSTATE_AGF_INIT, &pag->pag_opstate); 3446 3446 } 3447 + 3447 3448 #ifdef DEBUG 3448 - else if (!xfs_is_shutdown(mp)) { 3449 - ASSERT(pag->pagf_freeblks == be32_to_cpu(agf->agf_freeblks)); 3450 - ASSERT(pag->pagf_btreeblks == be32_to_cpu(agf->agf_btreeblks)); 3451 - ASSERT(pag->pagf_flcount == be32_to_cpu(agf->agf_flcount)); 3452 - ASSERT(pag->pagf_longest == be32_to_cpu(agf->agf_longest)); 3453 - ASSERT(pag->pagf_bno_level == be32_to_cpu(agf->agf_bno_level)); 3454 - ASSERT(pag->pagf_cnt_level == be32_to_cpu(agf->agf_cnt_level)); 3449 + /* 3450 + * It's possible for the AGF to be out of sync if the block device is 3451 + * silently dropping writes. This can happen in fstests with dmflakey 3452 + * enabled, which allows the buffer to be cleaned and reclaimed by 3453 + * memory pressure and then re-read from disk here. We will get a 3454 + * stale version of the AGF from disk, and nothing good can happen from 3455 + * here. Hence if we detect this situation, immediately shut down the 3456 + * filesystem. 3457 + * 3458 + * This can also happen if we are already in the middle of a forced 3459 + * shutdown, so don't bother checking if we are already shut down. 
3460 + */ 3461 + if (!xfs_is_shutdown(pag_mount(pag))) { 3462 + bool ok = true; 3463 + 3464 + ok &= pag->pagf_freeblks == be32_to_cpu(agf->agf_freeblks); 3465 + ok &= pag->pagf_freeblks == be32_to_cpu(agf->agf_freeblks); 3466 + ok &= pag->pagf_btreeblks == be32_to_cpu(agf->agf_btreeblks); 3467 + ok &= pag->pagf_flcount == be32_to_cpu(agf->agf_flcount); 3468 + ok &= pag->pagf_longest == be32_to_cpu(agf->agf_longest); 3469 + ok &= pag->pagf_bno_level == be32_to_cpu(agf->agf_bno_level); 3470 + ok &= pag->pagf_cnt_level == be32_to_cpu(agf->agf_cnt_level); 3471 + 3472 + if (XFS_IS_CORRUPT(pag_mount(pag), !ok)) { 3473 + xfs_ag_mark_sick(pag, XFS_SICK_AG_AGF); 3474 + xfs_trans_brelse(tp, agfbp); 3475 + xfs_force_shutdown(pag_mount(pag), 3476 + SHUTDOWN_CORRUPT_ONDISK); 3477 + return -EFSCORRUPTED; 3478 + } 3455 3479 } 3456 - #endif 3480 + #endif /* DEBUG */ 3481 + 3457 3482 if (agfbpp) 3458 3483 *agfbpp = agfbp; 3459 3484 else
+27 -4
fs/xfs/libxfs/xfs_ialloc.c
··· 2801 2801 set_bit(XFS_AGSTATE_AGI_INIT, &pag->pag_opstate); 2802 2802 } 2803 2803 2804 + #ifdef DEBUG 2804 2805 /* 2805 - * It's possible for these to be out of sync if 2806 - * we are in the middle of a forced shutdown. 2806 + * It's possible for the AGF to be out of sync if the block device is 2807 + * silently dropping writes. This can happen in fstests with dmflakey 2808 + * enabled, which allows the buffer to be cleaned and reclaimed by 2809 + * memory pressure and then re-read from disk here. We will get a 2810 + * stale version of the AGF from disk, and nothing good can happen from 2811 + * here. Hence if we detect this situation, immediately shut down the 2812 + * filesystem. 2813 + * 2814 + * This can also happen if we are already in the middle of a forced 2815 + * shutdown, so don't bother checking if we are already shut down. 2807 2816 */ 2808 - ASSERT(pag->pagi_freecount == be32_to_cpu(agi->agi_freecount) || 2809 - xfs_is_shutdown(pag_mount(pag))); 2817 + if (!xfs_is_shutdown(pag_mount(pag))) { 2818 + bool ok = true; 2819 + 2820 + ok &= pag->pagi_freecount == be32_to_cpu(agi->agi_freecount); 2821 + ok &= pag->pagi_count == be32_to_cpu(agi->agi_count); 2822 + 2823 + if (XFS_IS_CORRUPT(pag_mount(pag), !ok)) { 2824 + xfs_ag_mark_sick(pag, XFS_SICK_AG_AGI); 2825 + xfs_trans_brelse(tp, agibp); 2826 + xfs_force_shutdown(pag_mount(pag), 2827 + SHUTDOWN_CORRUPT_ONDISK); 2828 + return -EFSCORRUPTED; 2829 + } 2830 + } 2831 + #endif /* DEBUG */ 2832 + 2810 2833 if (agibpp) 2811 2834 *agibpp = agibp; 2812 2835 else
-38
fs/xfs/xfs_buf.c
··· 2082 2082 return error; 2083 2083 } 2084 2084 2085 - /* 2086 - * Push a single buffer on a delwri queue. 2087 - * 2088 - * The purpose of this function is to submit a single buffer of a delwri queue 2089 - * and return with the buffer still on the original queue. 2090 - * 2091 - * The buffer locking and queue management logic between _delwri_pushbuf() and 2092 - * _delwri_queue() guarantee that the buffer cannot be queued to another list 2093 - * before returning. 2094 - */ 2095 - int 2096 - xfs_buf_delwri_pushbuf( 2097 - struct xfs_buf *bp, 2098 - struct list_head *buffer_list) 2099 - { 2100 - int error; 2101 - 2102 - ASSERT(bp->b_flags & _XBF_DELWRI_Q); 2103 - 2104 - trace_xfs_buf_delwri_pushbuf(bp, _RET_IP_); 2105 - 2106 - xfs_buf_lock(bp); 2107 - bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC); 2108 - bp->b_flags |= XBF_WRITE; 2109 - xfs_buf_submit(bp); 2110 - 2111 - /* 2112 - * The buffer is now locked, under I/O but still on the original delwri 2113 - * queue. Wait for I/O completion, restore the DELWRI_Q flag and 2114 - * return with the buffer unlocked and still on the original queue. 2115 - */ 2116 - error = xfs_buf_iowait(bp); 2117 - bp->b_flags |= _XBF_DELWRI_Q; 2118 - xfs_buf_unlock(bp); 2119 - 2120 - return error; 2121 - } 2122 - 2123 2085 void xfs_buf_set_ref(struct xfs_buf *bp, int lru_ref) 2124 2086 { 2125 2087 /*
-1
fs/xfs/xfs_buf.h
··· 326 326 void xfs_buf_delwri_queue_here(struct xfs_buf *bp, struct list_head *bl); 327 327 extern int xfs_buf_delwri_submit(struct list_head *); 328 328 extern int xfs_buf_delwri_submit_nowait(struct list_head *); 329 - extern int xfs_buf_delwri_pushbuf(struct xfs_buf *, struct list_head *); 330 329 331 330 static inline xfs_daddr_t xfs_buf_daddr(struct xfs_buf *bp) 332 331 {
+180 -117
fs/xfs/xfs_buf_item.c
··· 32 32 return container_of(lip, struct xfs_buf_log_item, bli_item); 33 33 } 34 34 35 + static void 36 + xfs_buf_item_get_format( 37 + struct xfs_buf_log_item *bip, 38 + int count) 39 + { 40 + ASSERT(bip->bli_formats == NULL); 41 + bip->bli_format_count = count; 42 + 43 + if (count == 1) { 44 + bip->bli_formats = &bip->__bli_format; 45 + return; 46 + } 47 + 48 + bip->bli_formats = kzalloc(count * sizeof(struct xfs_buf_log_format), 49 + GFP_KERNEL | __GFP_NOFAIL); 50 + } 51 + 52 + static void 53 + xfs_buf_item_free_format( 54 + struct xfs_buf_log_item *bip) 55 + { 56 + if (bip->bli_formats != &bip->__bli_format) { 57 + kfree(bip->bli_formats); 58 + bip->bli_formats = NULL; 59 + } 60 + } 61 + 62 + static void 63 + xfs_buf_item_free( 64 + struct xfs_buf_log_item *bip) 65 + { 66 + xfs_buf_item_free_format(bip); 67 + kvfree(bip->bli_item.li_lv_shadow); 68 + kmem_cache_free(xfs_buf_item_cache, bip); 69 + } 70 + 71 + /* 72 + * xfs_buf_item_relse() is called when the buf log item is no longer needed. 73 + */ 74 + static void 75 + xfs_buf_item_relse( 76 + struct xfs_buf_log_item *bip) 77 + { 78 + struct xfs_buf *bp = bip->bli_buf; 79 + 80 + trace_xfs_buf_item_relse(bp, _RET_IP_); 81 + 82 + ASSERT(!test_bit(XFS_LI_IN_AIL, &bip->bli_item.li_flags)); 83 + ASSERT(atomic_read(&bip->bli_refcount) == 0); 84 + 85 + bp->b_log_item = NULL; 86 + xfs_buf_rele(bp); 87 + xfs_buf_item_free(bip); 88 + } 89 + 35 90 /* Is this log iovec plausibly large enough to contain the buffer log format? */ 36 91 bool 37 92 xfs_buf_log_check_iovec( ··· 445 390 } 446 391 447 392 /* 393 + * For a stale BLI, process all the necessary completions that must be 394 + * performed when the final BLI reference goes away. The buffer will be 395 + * referenced and locked here - we return to the caller with the buffer still 396 + * referenced and locked for them to finalise processing of the buffer. 
397 + */ 398 + static void 399 + xfs_buf_item_finish_stale( 400 + struct xfs_buf_log_item *bip) 401 + { 402 + struct xfs_buf *bp = bip->bli_buf; 403 + struct xfs_log_item *lip = &bip->bli_item; 404 + 405 + ASSERT(bip->bli_flags & XFS_BLI_STALE); 406 + ASSERT(xfs_buf_islocked(bp)); 407 + ASSERT(bp->b_flags & XBF_STALE); 408 + ASSERT(bip->__bli_format.blf_flags & XFS_BLF_CANCEL); 409 + ASSERT(list_empty(&lip->li_trans)); 410 + ASSERT(!bp->b_transp); 411 + 412 + if (bip->bli_flags & XFS_BLI_STALE_INODE) { 413 + xfs_buf_item_done(bp); 414 + xfs_buf_inode_iodone(bp); 415 + ASSERT(list_empty(&bp->b_li_list)); 416 + return; 417 + } 418 + 419 + /* 420 + * We may or may not be on the AIL here, xfs_trans_ail_delete() will do 421 + * the right thing regardless of the situation in which we are called. 422 + */ 423 + xfs_trans_ail_delete(lip, SHUTDOWN_LOG_IO_ERROR); 424 + xfs_buf_item_relse(bip); 425 + ASSERT(bp->b_log_item == NULL); 426 + } 427 + 428 + /* 448 429 * This is called to unpin the buffer associated with the buf log item which was 449 430 * previously pinned with a call to xfs_buf_item_pin(). We enter this function 450 431 * with a buffer pin count, a buffer reference and a BLI reference. ··· 529 438 } 530 439 531 440 if (stale) { 532 - ASSERT(bip->bli_flags & XFS_BLI_STALE); 533 - ASSERT(xfs_buf_islocked(bp)); 534 - ASSERT(bp->b_flags & XBF_STALE); 535 - ASSERT(bip->__bli_format.blf_flags & XFS_BLF_CANCEL); 536 - ASSERT(list_empty(&lip->li_trans)); 537 - ASSERT(!bp->b_transp); 538 - 539 441 trace_xfs_buf_item_unpin_stale(bip); 540 442 541 443 /* ··· 539 455 * processing is complete. 540 456 */ 541 457 xfs_buf_rele(bp); 542 - 543 - /* 544 - * If we get called here because of an IO error, we may or may 545 - * not have the item on the AIL. xfs_trans_ail_delete() will 546 - * take care of that situation. xfs_trans_ail_delete() drops 547 - * the AIL lock. 
548 - */ 549 - if (bip->bli_flags & XFS_BLI_STALE_INODE) { 550 - xfs_buf_item_done(bp); 551 - xfs_buf_inode_iodone(bp); 552 - ASSERT(list_empty(&bp->b_li_list)); 553 - } else { 554 - xfs_trans_ail_delete(lip, SHUTDOWN_LOG_IO_ERROR); 555 - xfs_buf_item_relse(bp); 556 - ASSERT(bp->b_log_item == NULL); 557 - } 458 + xfs_buf_item_finish_stale(bip); 558 459 xfs_buf_relse(bp); 559 460 return; 560 461 } ··· 612 543 * Drop the buffer log item refcount and take appropriate action. This helper 613 544 * determines whether the bli must be freed or not, since a decrement to zero 614 545 * does not necessarily mean the bli is unused. 615 - * 616 - * Return true if the bli is freed, false otherwise. 617 546 */ 618 - bool 547 + void 619 548 xfs_buf_item_put( 620 549 struct xfs_buf_log_item *bip) 621 550 { 622 - struct xfs_log_item *lip = &bip->bli_item; 623 - bool aborted; 624 - bool dirty; 551 + 552 + ASSERT(xfs_buf_islocked(bip->bli_buf)); 625 553 626 554 /* drop the bli ref and return if it wasn't the last one */ 627 555 if (!atomic_dec_and_test(&bip->bli_refcount)) 628 - return false; 556 + return; 557 + 558 + /* If the BLI is in the AIL, then it is still dirty and in use */ 559 + if (test_bit(XFS_LI_IN_AIL, &bip->bli_item.li_flags)) { 560 + ASSERT(bip->bli_flags & XFS_BLI_DIRTY); 561 + return; 562 + } 629 563 630 564 /* 631 - * We dropped the last ref and must free the item if clean or aborted. 632 - * If the bli is dirty and non-aborted, the buffer was clean in the 633 - * transaction but still awaiting writeback from previous changes. In 634 - * that case, the bli is freed on buffer writeback completion. 565 + * In shutdown conditions, we can be asked to free a dirty BLI that 566 + * isn't in the AIL. This can occur due to a checkpoint aborting a BLI 567 + * instead of inserting it into the AIL at checkpoint IO completion. If 568 + * there's another bli reference (e.g. 
a btree cursor holds a clean 569 + * reference) and it is released via xfs_trans_brelse(), we can get here 570 + * with that aborted, dirty BLI. In this case, it is safe to free the 571 + * dirty BLI immediately, as it is not in the AIL and there are no 572 + * other references to it. 573 + * 574 + * We should never get here with a stale BLI via that path as 575 + * xfs_trans_brelse() specifically holds onto stale buffers rather than 576 + * releasing them. 635 577 */ 636 - aborted = test_bit(XFS_LI_ABORTED, &lip->li_flags) || 637 - xlog_is_shutdown(lip->li_log); 638 - dirty = bip->bli_flags & XFS_BLI_DIRTY; 639 - if (dirty && !aborted) 640 - return false; 641 - 642 - /* 643 - * The bli is aborted or clean. An aborted item may be in the AIL 644 - * regardless of dirty state. For example, consider an aborted 645 - * transaction that invalidated a dirty bli and cleared the dirty 646 - * state. 647 - */ 648 - if (aborted) 649 - xfs_trans_ail_delete(lip, 0); 650 - xfs_buf_item_relse(bip->bli_buf); 651 - return true; 578 + ASSERT(!(bip->bli_flags & XFS_BLI_DIRTY) || 579 + test_bit(XFS_LI_ABORTED, &bip->bli_item.li_flags)); 580 + ASSERT(!(bip->bli_flags & XFS_BLI_STALE)); 581 + xfs_buf_item_relse(bip); 652 582 } 653 583 654 584 /* ··· 668 600 * if necessary but do not unlock the buffer. This is for support of 669 601 * xfs_trans_bhold(). Make sure the XFS_BLI_HOLD field is cleared if we don't 670 602 * free the item. 603 + * 604 + * If the XFS_BLI_STALE flag is set, the last reference to the BLI *must* 605 + * perform a completion abort of any objects attached to the buffer for IO 606 + * tracking purposes. This generally only happens in shutdown situations, 607 + * normally xfs_buf_item_unpin() will drop the last BLI reference and perform 608 + * completion processing. 
However, because transaction completion can race with 609 + * checkpoint completion during a shutdown, this release context may end up 610 + * being the last active reference to the BLI and so needs to perform this 611 + * cleanup. 671 612 */ 672 613 STATIC void 673 614 xfs_buf_item_release( ··· 684 607 { 685 608 struct xfs_buf_log_item *bip = BUF_ITEM(lip); 686 609 struct xfs_buf *bp = bip->bli_buf; 687 - bool released; 688 610 bool hold = bip->bli_flags & XFS_BLI_HOLD; 689 611 bool stale = bip->bli_flags & XFS_BLI_STALE; 690 - #if defined(DEBUG) || defined(XFS_WARN) 691 - bool ordered = bip->bli_flags & XFS_BLI_ORDERED; 692 - bool dirty = bip->bli_flags & XFS_BLI_DIRTY; 693 612 bool aborted = test_bit(XFS_LI_ABORTED, 694 613 &lip->li_flags); 614 + bool dirty = bip->bli_flags & XFS_BLI_DIRTY; 615 + #if defined(DEBUG) || defined(XFS_WARN) 616 + bool ordered = bip->bli_flags & XFS_BLI_ORDERED; 695 617 #endif 696 618 697 619 trace_xfs_buf_item_release(bip); 620 + 621 + ASSERT(xfs_buf_islocked(bp)); 698 622 699 623 /* 700 624 * The bli dirty state should match whether the blf has logged segments ··· 712 634 bp->b_transp = NULL; 713 635 bip->bli_flags &= ~(XFS_BLI_LOGGED | XFS_BLI_HOLD | XFS_BLI_ORDERED); 714 636 637 + /* If there are other references, then we have nothing to do. */ 638 + if (!atomic_dec_and_test(&bip->bli_refcount)) 639 + goto out_release; 640 + 715 641 /* 716 - * Unref the item and unlock the buffer unless held or stale. Stale 717 - * buffers remain locked until final unpin unless the bli is freed by 718 - * the unref call. The latter implies shutdown because buffer 719 - * invalidation dirties the bli and transaction. 642 + * Stale buffer completion frees the BLI, unlocks and releases the 643 + * buffer. Neither the BLI or buffer are safe to reference after this 644 + * call, so there's nothing more we need to do here. 
645 + * 646 + * If we get here with a stale buffer and references to the BLI remain, 647 + * we must not unlock the buffer as the last BLI reference owns lock 648 + * context, not us. 720 649 */ 721 - released = xfs_buf_item_put(bip); 722 - if (hold || (stale && !released)) 650 + if (stale) { 651 + xfs_buf_item_finish_stale(bip); 652 + xfs_buf_relse(bp); 653 + ASSERT(!hold); 723 654 return; 724 - ASSERT(!stale || aborted); 655 + } 656 + 657 + /* 658 + * Dirty or clean, aborted items are done and need to be removed from 659 + * the AIL and released. This frees the BLI, but leaves the buffer 660 + * locked and referenced. 661 + */ 662 + if (aborted || xlog_is_shutdown(lip->li_log)) { 663 + ASSERT(list_empty(&bip->bli_buf->b_li_list)); 664 + xfs_buf_item_done(bp); 665 + goto out_release; 666 + } 667 + 668 + /* 669 + * Clean, unreferenced BLIs can be immediately freed, leaving the buffer 670 + * locked and referenced. 671 + * 672 + * Dirty, unreferenced BLIs *must* be in the AIL awaiting writeback. 673 + */ 674 + if (!dirty) 675 + xfs_buf_item_relse(bip); 676 + else 677 + ASSERT(test_bit(XFS_LI_IN_AIL, &lip->li_flags)); 678 + 679 + /* Not safe to reference the BLI from here */ 680 + out_release: 681 + /* 682 + * If we get here with a stale buffer, we must not unlock the 683 + * buffer as the last BLI reference owns lock context, not us. 
684 + */ 685 + if (stale || hold) 686 + return; 725 687 xfs_buf_relse(bp); 726 688 } 727 689 ··· 846 728 .iop_committed = xfs_buf_item_committed, 847 729 .iop_push = xfs_buf_item_push, 848 730 }; 849 - 850 - STATIC void 851 - xfs_buf_item_get_format( 852 - struct xfs_buf_log_item *bip, 853 - int count) 854 - { 855 - ASSERT(bip->bli_formats == NULL); 856 - bip->bli_format_count = count; 857 - 858 - if (count == 1) { 859 - bip->bli_formats = &bip->__bli_format; 860 - return; 861 - } 862 - 863 - bip->bli_formats = kzalloc(count * sizeof(struct xfs_buf_log_format), 864 - GFP_KERNEL | __GFP_NOFAIL); 865 - } 866 - 867 - STATIC void 868 - xfs_buf_item_free_format( 869 - struct xfs_buf_log_item *bip) 870 - { 871 - if (bip->bli_formats != &bip->__bli_format) { 872 - kfree(bip->bli_formats); 873 - bip->bli_formats = NULL; 874 - } 875 - } 876 731 877 732 /* 878 733 * Allocate a new buf log item to go with the given buffer. ··· 1067 976 return false; 1068 977 } 1069 978 1070 - STATIC void 1071 - xfs_buf_item_free( 1072 - struct xfs_buf_log_item *bip) 1073 - { 1074 - xfs_buf_item_free_format(bip); 1075 - kvfree(bip->bli_item.li_lv_shadow); 1076 - kmem_cache_free(xfs_buf_item_cache, bip); 1077 - } 1078 - 1079 - /* 1080 - * xfs_buf_item_relse() is called when the buf log item is no longer needed. 1081 - */ 1082 - void 1083 - xfs_buf_item_relse( 1084 - struct xfs_buf *bp) 1085 - { 1086 - struct xfs_buf_log_item *bip = bp->b_log_item; 1087 - 1088 - trace_xfs_buf_item_relse(bp, _RET_IP_); 1089 - ASSERT(!test_bit(XFS_LI_IN_AIL, &bip->bli_item.li_flags)); 1090 - 1091 - if (atomic_read(&bip->bli_refcount)) 1092 - return; 1093 - bp->b_log_item = NULL; 1094 - xfs_buf_rele(bp); 1095 - xfs_buf_item_free(bip); 1096 - } 1097 - 1098 979 void 1099 980 xfs_buf_item_done( 1100 981 struct xfs_buf *bp) ··· 1086 1023 xfs_trans_ail_delete(&bp->b_log_item->bli_item, 1087 1024 (bp->b_flags & _XBF_LOGRECOVERY) ? 
0 : 1088 1025 SHUTDOWN_CORRUPT_INCORE); 1089 - xfs_buf_item_relse(bp); 1026 + xfs_buf_item_relse(bp->b_log_item); 1090 1027 }
+1 -2
fs/xfs/xfs_buf_item.h
··· 49 49 50 50 int xfs_buf_item_init(struct xfs_buf *, struct xfs_mount *); 51 51 void xfs_buf_item_done(struct xfs_buf *bp); 52 - void xfs_buf_item_relse(struct xfs_buf *); 53 - bool xfs_buf_item_put(struct xfs_buf_log_item *); 52 + void xfs_buf_item_put(struct xfs_buf_log_item *bip); 54 53 void xfs_buf_item_log(struct xfs_buf_log_item *, uint, uint); 55 54 bool xfs_buf_item_dirty_format(struct xfs_buf_log_item *); 56 55 void xfs_buf_inode_iodone(struct xfs_buf *);
+1 -3
fs/xfs/xfs_dquot.c
··· 1398 1398 1399 1399 ASSERT(XFS_DQ_IS_LOCKED(dqp)); 1400 1400 ASSERT(!completion_done(&dqp->q_flush)); 1401 + ASSERT(atomic_read(&dqp->q_pincount) == 0); 1401 1402 1402 1403 trace_xfs_dqflush(dqp); 1403 - 1404 - xfs_qm_dqunpin_wait(dqp); 1405 - 1406 1404 fa = xfs_qm_dqflush_check(dqp); 1407 1405 if (fa) { 1408 1406 xfs_alert(mp, "corrupt dquot ID 0x%x in memory at %pS",
+4 -3
fs/xfs/xfs_file.c
··· 1335 1335 } 1336 1336 1337 1337 #define XFS_FALLOC_FL_SUPPORTED \ 1338 - (FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE | \ 1339 - FALLOC_FL_COLLAPSE_RANGE | FALLOC_FL_ZERO_RANGE | \ 1340 - FALLOC_FL_INSERT_RANGE | FALLOC_FL_UNSHARE_RANGE) 1338 + (FALLOC_FL_ALLOCATE_RANGE | FALLOC_FL_KEEP_SIZE | \ 1339 + FALLOC_FL_PUNCH_HOLE | FALLOC_FL_COLLAPSE_RANGE | \ 1340 + FALLOC_FL_ZERO_RANGE | FALLOC_FL_INSERT_RANGE | \ 1341 + FALLOC_FL_UNSHARE_RANGE) 1341 1342 1342 1343 STATIC long 1343 1344 __xfs_file_fallocate(
+8
fs/xfs/xfs_icache.c
··· 979 979 */ 980 980 if (xlog_is_shutdown(ip->i_mount->m_log)) { 981 981 xfs_iunpin_wait(ip); 982 + /* 983 + * Avoid a ABBA deadlock on the inode cluster buffer vs 984 + * concurrent xfs_ifree_cluster() trying to mark the inode 985 + * stale. We don't need the inode locked to run the flush abort 986 + * code, but the flush abort needs to lock the cluster buffer. 987 + */ 988 + xfs_iunlock(ip, XFS_ILOCK_EXCL); 982 989 xfs_iflush_shutdown_abort(ip); 990 + xfs_ilock(ip, XFS_ILOCK_EXCL); 983 991 goto reclaim; 984 992 } 985 993 if (xfs_ipincount(ip))
+1 -1
fs/xfs/xfs_inode.c
··· 1635 1635 iip = ip->i_itemp; 1636 1636 if (__xfs_iflags_test(ip, XFS_IFLUSHING)) { 1637 1637 ASSERT(!list_empty(&iip->ili_item.li_bio_list)); 1638 - ASSERT(iip->ili_last_fields); 1638 + ASSERT(iip->ili_last_fields || xlog_is_shutdown(mp->m_log)); 1639 1639 goto out_iunlock; 1640 1640 } 1641 1641
+4 -1
fs/xfs/xfs_inode_item.c
··· 758 758 * completed and items removed from the AIL before the next push 759 759 * attempt. 760 760 */ 761 + trace_xfs_inode_push_stale(ip, _RET_IP_); 761 762 return XFS_ITEM_PINNED; 762 763 } 763 764 764 - if (xfs_ipincount(ip) > 0 || xfs_buf_ispinned(bp)) 765 + if (xfs_ipincount(ip) > 0 || xfs_buf_ispinned(bp)) { 766 + trace_xfs_inode_push_pinned(ip, _RET_IP_); 765 767 return XFS_ITEM_PINNED; 768 + } 766 769 767 770 if (xfs_iflags_test(ip, XFS_IFLUSHING)) 768 771 return XFS_ITEM_FLUSHING;
+3 -1
fs/xfs/xfs_log_cil.c
··· 793 793 struct xfs_log_item *lip = lv->lv_item; 794 794 xfs_lsn_t item_lsn; 795 795 796 - if (aborted) 796 + if (aborted) { 797 + trace_xlog_ail_insert_abort(lip); 797 798 set_bit(XFS_LI_ABORTED, &lip->li_flags); 799 + } 798 800 799 801 if (lip->li_ops->flags & XFS_ITEM_RELEASE_WHEN_COMMITTED) { 800 802 lip->li_ops->iop_release(lip);
+4 -15
fs/xfs/xfs_mru_cache.c
··· 320 320 xfs_mru_cache_free_func_t free_func) 321 321 { 322 322 struct xfs_mru_cache *mru = NULL; 323 - int err = 0, grp; 323 + int grp; 324 324 unsigned int grp_time; 325 325 326 326 if (mrup) ··· 341 341 mru->lists = kzalloc(mru->grp_count * sizeof(*mru->lists), 342 342 GFP_KERNEL | __GFP_NOFAIL); 343 343 if (!mru->lists) { 344 - err = -ENOMEM; 345 - goto exit; 344 + kfree(mru); 345 + return -ENOMEM; 346 346 } 347 347 348 348 for (grp = 0; grp < mru->grp_count; grp++) ··· 361 361 mru->free_func = free_func; 362 362 mru->data = data; 363 363 *mrup = mru; 364 - 365 - exit: 366 - if (err && mru && mru->lists) 367 - kfree(mru->lists); 368 - if (err && mru) 369 - kfree(mru); 370 - 371 - return err; 364 + return 0; 372 365 } 373 366 374 367 /* ··· 417 424 struct xfs_mru_cache_elem *elem) 418 425 { 419 426 int error = -EINVAL; 420 - 421 - ASSERT(mru && mru->lists); 422 - if (!mru || !mru->lists) 423 - goto out_free; 424 427 425 428 error = -ENOMEM; 426 429 if (radix_tree_preload(GFP_KERNEL))
+19 -67
fs/xfs/xfs_qm.c
··· 134 134 135 135 dqp->q_flags |= XFS_DQFLAG_FREEING; 136 136 137 + xfs_qm_dqunpin_wait(dqp); 137 138 xfs_dqflock(dqp); 138 139 139 140 /* ··· 466 465 struct xfs_dquot *dqp = container_of(item, 467 466 struct xfs_dquot, q_lru); 468 467 struct xfs_qm_isolate *isol = arg; 468 + enum lru_status ret = LRU_SKIP; 469 469 470 470 if (!xfs_dqlock_nowait(dqp)) 471 471 goto out_miss_busy; ··· 478 476 */ 479 477 if (dqp->q_flags & XFS_DQFLAG_FREEING) 480 478 goto out_miss_unlock; 479 + 480 + /* 481 + * If the dquot is pinned or dirty, rotate it to the end of the LRU to 482 + * give some time for it to be cleaned before we try to isolate it 483 + * again. 484 + */ 485 + ret = LRU_ROTATE; 486 + if (XFS_DQ_IS_DIRTY(dqp) || atomic_read(&dqp->q_pincount) > 0) { 487 + goto out_miss_unlock; 488 + } 481 489 482 490 /* 483 491 * This dquot has acquired a reference in the meantime remove it from ··· 504 492 } 505 493 506 494 /* 507 - * If the dquot is dirty, flush it. If it's already being flushed, just 508 - * skip it so there is time for the IO to complete before we try to 509 - * reclaim it again on the next LRU pass. 495 + * The dquot may still be under IO, in which case the flush lock will be 496 + * held. If we can't get the flush lock now, just skip over the dquot as 497 + * if it was dirty. 510 498 */ 511 499 if (!xfs_dqflock_nowait(dqp)) 512 500 goto out_miss_unlock; 513 501 514 - if (XFS_DQ_IS_DIRTY(dqp)) { 515 - struct xfs_buf *bp = NULL; 516 - int error; 517 - 518 - trace_xfs_dqreclaim_dirty(dqp); 519 - 520 - /* we have to drop the LRU lock to flush the dquot */ 521 - spin_unlock(&lru->lock); 522 - 523 - error = xfs_dquot_use_attached_buf(dqp, &bp); 524 - if (!bp || error == -EAGAIN) { 525 - xfs_dqfunlock(dqp); 526 - goto out_unlock_dirty; 527 - } 528 - 529 - /* 530 - * dqflush completes dqflock on error, and the delwri ioend 531 - * does it on success. 
532 - */ 533 - error = xfs_qm_dqflush(dqp, bp); 534 - if (error) 535 - goto out_unlock_dirty; 536 - 537 - xfs_buf_delwri_queue(bp, &isol->buffers); 538 - xfs_buf_relse(bp); 539 - goto out_unlock_dirty; 540 - } 541 - 502 + ASSERT(!XFS_DQ_IS_DIRTY(dqp)); 542 503 xfs_dquot_detach_buf(dqp); 543 504 xfs_dqfunlock(dqp); 544 505 ··· 533 548 out_miss_busy: 534 549 trace_xfs_dqreclaim_busy(dqp); 535 550 XFS_STATS_INC(dqp->q_mount, xs_qm_dqreclaim_misses); 536 - return LRU_SKIP; 537 - 538 - out_unlock_dirty: 539 - trace_xfs_dqreclaim_busy(dqp); 540 - XFS_STATS_INC(dqp->q_mount, xs_qm_dqreclaim_misses); 541 - xfs_dqunlock(dqp); 542 - return LRU_RETRY; 551 + return ret; 543 552 } 544 553 545 554 static unsigned long ··· 1465 1486 struct xfs_dquot *dqp, 1466 1487 void *data) 1467 1488 { 1468 - struct xfs_mount *mp = dqp->q_mount; 1469 1489 struct list_head *buffer_list = data; 1470 1490 struct xfs_buf *bp = NULL; 1471 1491 int error = 0; ··· 1475 1497 if (!XFS_DQ_IS_DIRTY(dqp)) 1476 1498 goto out_unlock; 1477 1499 1478 - /* 1479 - * The only way the dquot is already flush locked by the time quotacheck 1480 - * gets here is if reclaim flushed it before the dqadjust walk dirtied 1481 - * it for the final time. Quotacheck collects all dquot bufs in the 1482 - * local delwri queue before dquots are dirtied, so reclaim can't have 1483 - * possibly queued it for I/O. The only way out is to push the buffer to 1484 - * cycle the flush lock. 
1485 - */ 1486 - if (!xfs_dqflock_nowait(dqp)) { 1487 - /* buf is pinned in-core by delwri list */ 1488 - error = xfs_buf_incore(mp->m_ddev_targp, dqp->q_blkno, 1489 - mp->m_quotainfo->qi_dqchunklen, 0, &bp); 1490 - if (error) 1491 - goto out_unlock; 1492 - 1493 - if (!(bp->b_flags & _XBF_DELWRI_Q)) { 1494 - error = -EAGAIN; 1495 - xfs_buf_relse(bp); 1496 - goto out_unlock; 1497 - } 1498 - xfs_buf_unlock(bp); 1499 - 1500 - xfs_buf_delwri_pushbuf(bp, buffer_list); 1501 - xfs_buf_rele(bp); 1502 - 1503 - error = -EAGAIN; 1504 - goto out_unlock; 1505 - } 1500 + xfs_qm_dqunpin_wait(dqp); 1501 + xfs_dqflock(dqp); 1506 1502 1507 1503 error = xfs_dquot_use_attached_buf(dqp, &bp); 1508 1504 if (error)
+2
fs/xfs/xfs_rtalloc.c
··· 1259 1259 1260 1260 kfree(nmp); 1261 1261 1262 + trace_xfs_growfs_check_rtgeom(mp, min_logfsbs); 1263 + 1262 1264 if (min_logfsbs > mp->m_sb.sb_logblocks) 1263 1265 return -EINVAL; 1264 1266
+2 -3
fs/xfs/xfs_super.c
··· 2020 2020 int error; 2021 2021 2022 2022 if (mp->m_logdev_targp && mp->m_logdev_targp != mp->m_ddev_targp && 2023 - bdev_read_only(mp->m_logdev_targp->bt_bdev)) { 2023 + xfs_readonly_buftarg(mp->m_logdev_targp)) { 2024 2024 xfs_warn(mp, 2025 2025 "ro->rw transition prohibited by read-only logdev"); 2026 2026 return -EACCES; 2027 2027 } 2028 2028 2029 - if (mp->m_rtdev_targp && 2030 - bdev_read_only(mp->m_rtdev_targp->bt_bdev)) { 2029 + if (mp->m_rtdev_targp && xfs_readonly_buftarg(mp->m_rtdev_targp)) { 2031 2030 xfs_warn(mp, 2032 2031 "ro->rw transition prohibited by read-only rtdev"); 2033 2032 return -EACCES;
+8 -2
fs/xfs/xfs_trace.h
··· 778 778 DEFINE_BUF_EVENT(xfs_buf_delwri_queue); 779 779 DEFINE_BUF_EVENT(xfs_buf_delwri_queued); 780 780 DEFINE_BUF_EVENT(xfs_buf_delwri_split); 781 - DEFINE_BUF_EVENT(xfs_buf_delwri_pushbuf); 782 781 DEFINE_BUF_EVENT(xfs_buf_get_uncached); 783 782 DEFINE_BUF_EVENT(xfs_buf_item_relse); 784 783 DEFINE_BUF_EVENT(xfs_buf_iodone_async); ··· 1146 1147 __field(xfs_ino_t, ino) 1147 1148 __field(int, count) 1148 1149 __field(int, pincount) 1150 + __field(unsigned long, iflags) 1149 1151 __field(unsigned long, caller_ip) 1150 1152 ), 1151 1153 TP_fast_assign( ··· 1154 1154 __entry->ino = ip->i_ino; 1155 1155 __entry->count = atomic_read(&VFS_I(ip)->i_count); 1156 1156 __entry->pincount = atomic_read(&ip->i_pincount); 1157 + __entry->iflags = ip->i_flags; 1157 1158 __entry->caller_ip = caller_ip; 1158 1159 ), 1159 - TP_printk("dev %d:%d ino 0x%llx count %d pincount %d caller %pS", 1160 + TP_printk("dev %d:%d ino 0x%llx count %d pincount %d iflags 0x%lx caller %pS", 1160 1161 MAJOR(__entry->dev), MINOR(__entry->dev), 1161 1162 __entry->ino, 1162 1163 __entry->count, 1163 1164 __entry->pincount, 1165 + __entry->iflags, 1164 1166 (char *)__entry->caller_ip) 1165 1167 ) 1166 1168 ··· 1252 1250 DEFINE_IREF_EVENT(xfs_inode_pin); 1253 1251 DEFINE_IREF_EVENT(xfs_inode_unpin); 1254 1252 DEFINE_IREF_EVENT(xfs_inode_unpin_nowait); 1253 + DEFINE_IREF_EVENT(xfs_inode_push_pinned); 1254 + DEFINE_IREF_EVENT(xfs_inode_push_stale); 1255 1255 1256 1256 DECLARE_EVENT_CLASS(xfs_namespace_class, 1257 1257 TP_PROTO(struct xfs_inode *dp, const struct xfs_name *name), ··· 1658 1654 DEFINE_LOG_ITEM_EVENT(xfs_cil_whiteout_mark); 1659 1655 DEFINE_LOG_ITEM_EVENT(xfs_cil_whiteout_skip); 1660 1656 DEFINE_LOG_ITEM_EVENT(xfs_cil_whiteout_unpin); 1657 + DEFINE_LOG_ITEM_EVENT(xlog_ail_insert_abort); 1658 + DEFINE_LOG_ITEM_EVENT(xfs_trans_free_abort); 1661 1659 1662 1660 DECLARE_EVENT_CLASS(xfs_ail_class, 1663 1661 TP_PROTO(struct xfs_log_item *lip, xfs_lsn_t old_lsn, xfs_lsn_t new_lsn),
+3 -1
fs/xfs/xfs_trans.c
··· 742 742 743 743 list_for_each_entry_safe(lip, next, &tp->t_items, li_trans) { 744 744 xfs_trans_del_item(lip); 745 - if (abort) 745 + if (abort) { 746 + trace_xfs_trans_free_abort(lip); 746 747 set_bit(XFS_LI_ABORTED, &lip->li_flags); 748 + } 747 749 if (lip->li_ops->iop_release) 748 750 lip->li_ops->iop_release(lip); 749 751 }
+21 -21
fs/xfs/xfs_zone_alloc.c
··· 727 727 for (;;) { 728 728 prepare_to_wait(&zi->zi_zone_wait, &wait, TASK_UNINTERRUPTIBLE); 729 729 oz = xfs_select_zone_nowait(mp, write_hint, pack_tight); 730 - if (oz) 730 + if (oz || xfs_is_shutdown(mp)) 731 731 break; 732 732 schedule(); 733 733 } ··· 775 775 776 776 if (xfs_rtb_to_rgbno(mp, xfs_daddr_to_rtb(mp, sector)) == 0) 777 777 ioend->io_flags |= IOMAP_IOEND_BOUNDARY; 778 - } 779 - 780 - static void 781 - xfs_submit_zoned_bio( 782 - struct iomap_ioend *ioend, 783 - struct xfs_open_zone *oz, 784 - bool is_seq) 785 - { 786 - ioend->io_bio.bi_iter.bi_sector = ioend->io_sector; 787 - ioend->io_private = oz; 788 - atomic_inc(&oz->oz_ref); /* for xfs_zoned_end_io */ 789 - 790 - if (is_seq) { 791 - ioend->io_bio.bi_opf &= ~REQ_OP_WRITE; 792 - ioend->io_bio.bi_opf |= REQ_OP_ZONE_APPEND; 793 - } else { 794 - xfs_mark_rtg_boundary(ioend); 795 - } 796 - 797 - submit_bio(&ioend->io_bio); 798 778 } 799 779 800 780 /* ··· 869 889 } 870 890 item->oz = oz; 871 891 xfs_mru_cache_insert(mp->m_zone_cache, ip->i_ino, &item->mru); 892 + } 893 + 894 + static void 895 + xfs_submit_zoned_bio( 896 + struct iomap_ioend *ioend, 897 + struct xfs_open_zone *oz, 898 + bool is_seq) 899 + { 900 + ioend->io_bio.bi_iter.bi_sector = ioend->io_sector; 901 + ioend->io_private = oz; 902 + atomic_inc(&oz->oz_ref); /* for xfs_zoned_end_io */ 903 + 904 + if (is_seq) { 905 + ioend->io_bio.bi_opf &= ~REQ_OP_WRITE; 906 + ioend->io_bio.bi_opf |= REQ_OP_ZONE_APPEND; 907 + } else { 908 + xfs_mark_rtg_boundary(ioend); 909 + } 910 + 911 + submit_bio(&ioend->io_bio); 872 912 } 873 913 874 914 void
+1 -1
include/crypto/internal/sha2.h
··· 25 25 void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS], 26 26 const u8 *data, size_t nblocks); 27 27 28 - static inline void sha256_choose_blocks( 28 + static __always_inline void sha256_choose_blocks( 29 29 u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks, 30 30 bool force_generic, bool force_simd) 31 31 {
+1
include/linux/futex.h
··· 89 89 static inline void futex_mm_init(struct mm_struct *mm) 90 90 { 91 91 RCU_INIT_POINTER(mm->futex_phash, NULL); 92 + mm->futex_phash_new = NULL; 92 93 mutex_init(&mm->futex_hash_lock); 93 94 } 94 95
+4
include/linux/kmemleak.h
··· 28 28 extern void kmemleak_not_leak(const void *ptr) __ref; 29 29 extern void kmemleak_transient_leak(const void *ptr) __ref; 30 30 extern void kmemleak_ignore(const void *ptr) __ref; 31 + extern void kmemleak_ignore_percpu(const void __percpu *ptr) __ref; 31 32 extern void kmemleak_scan_area(const void *ptr, size_t size, gfp_t gfp) __ref; 32 33 extern void kmemleak_no_scan(const void *ptr) __ref; 33 34 extern void kmemleak_alloc_phys(phys_addr_t phys, size_t size, ··· 96 95 { 97 96 } 98 97 static inline void kmemleak_transient_leak(const void *ptr) 98 + { 99 + } 100 + static inline void kmemleak_ignore_percpu(const void __percpu *ptr) 99 101 { 100 102 } 101 103 static inline void kmemleak_ignore(const void *ptr)
+12
include/linux/soc/amd/isp4_misc.h
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + 3 + /* 4 + * Copyright (C) 2025 Advanced Micro Devices, Inc. 5 + */ 6 + 7 + #ifndef __SOC_ISP4_MISC_H 8 + #define __SOC_ISP4_MISC_H 9 + 10 + #define AMDISP_I2C_ADAP_NAME "AMDISP DesignWare I2C adapter" 11 + 12 + #endif
+26 -6
include/uapi/linux/ublk_cmd.h
··· 135 135 #define UBLKSRV_IO_BUF_TOTAL_SIZE (1ULL << UBLKSRV_IO_BUF_TOTAL_BITS) 136 136 137 137 /* 138 - * zero copy requires 4k block size, and can remap ublk driver's io 139 - * request into ublksrv's vm space 138 + * ublk server can register data buffers for incoming I/O requests with a sparse 139 + * io_uring buffer table. The request buffer can then be used as the data buffer 140 + * for io_uring operations via the fixed buffer index. 141 + * Note that the ublk server can never directly access the request data memory. 142 + * 143 + * To use this feature, the ublk server must first register a sparse buffer 144 + * table on an io_uring instance. 145 + * When an incoming ublk request is received, the ublk server submits a 146 + * UBLK_U_IO_REGISTER_IO_BUF command to that io_uring instance. The 147 + * ublksrv_io_cmd's q_id and tag specify the request whose buffer to register 148 + * and addr is the index in the io_uring's buffer table to install the buffer. 149 + * SQEs can now be submitted to the io_uring to read/write the request's buffer 150 + * by enabling fixed buffers (e.g. using IORING_OP_{READ,WRITE}_FIXED or 151 + * IORING_URING_CMD_FIXED) and passing the registered buffer index in buf_index. 152 + * Once the last io_uring operation using the request's buffer has completed, 153 + * the ublk server submits a UBLK_U_IO_UNREGISTER_IO_BUF command with q_id, tag, 154 + * and addr again specifying the request buffer to unregister. 155 + * The ublk request is completed when its buffer is unregistered from all 156 + * io_uring instances and the ublk server issues UBLK_U_IO_COMMIT_AND_FETCH_REQ. 157 + * 158 + * Not available for UBLK_F_UNPRIVILEGED_DEV, as a ublk server can leak 159 + * uninitialized kernel memory by not reading into the full request buffer. 
140 160 */ 141 161 #define UBLK_F_SUPPORT_ZERO_COPY (1ULL << 0) 142 162 ··· 470 450 __u64 sqe_addr) 471 451 { 472 452 struct ublk_auto_buf_reg reg = { 473 - .index = sqe_addr & 0xffff, 474 - .flags = (sqe_addr >> 16) & 0xff, 475 - .reserved0 = (sqe_addr >> 24) & 0xff, 476 - .reserved1 = sqe_addr >> 32, 453 + .index = (__u16)sqe_addr, 454 + .flags = (__u8)(sqe_addr >> 16), 455 + .reserved0 = (__u8)(sqe_addr >> 24), 456 + .reserved1 = (__u32)(sqe_addr >> 32), 477 457 }; 478 458 479 459 return reg;
+2 -1
io_uring/io_uring.c
··· 1666 1666 1667 1667 io_req_flags_t io_file_get_flags(struct file *file) 1668 1668 { 1669 + struct inode *inode = file_inode(file); 1669 1670 io_req_flags_t res = 0; 1670 1671 1671 1672 BUILD_BUG_ON(REQ_F_ISREG_BIT != REQ_F_SUPPORT_NOWAIT_BIT + 1); 1672 1673 1673 - if (S_ISREG(file_inode(file)->i_mode)) 1674 + if (S_ISREG(inode->i_mode) && !(inode->i_flags & S_ANON_INODE)) 1674 1675 res |= REQ_F_ISREG; 1675 1676 if ((file->f_flags & O_NONBLOCK) || (file->f_mode & FMODE_NOWAIT)) 1676 1677 res |= REQ_F_SUPPORT_NOWAIT;
+1
io_uring/kbuf.c
··· 271 271 if (len > arg->max_len) { 272 272 len = arg->max_len; 273 273 if (!(bl->flags & IOBL_INC)) { 274 + arg->partial_map = 1; 274 275 if (iov != arg->iovs) 275 276 break; 276 277 buf->len = len;
+2 -1
io_uring/kbuf.h
··· 58 58 size_t max_len; 59 59 unsigned short nr_iovs; 60 60 unsigned short mode; 61 - unsigned buf_group; 61 + unsigned short buf_group; 62 + unsigned short partial_map; 62 63 }; 63 64 64 65 void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
+21 -13
io_uring/net.c
··· 75 75 u16 flags; 76 76 /* initialised and used only by !msg send variants */ 77 77 u16 buf_group; 78 - bool retry; 78 + unsigned short retry_flags; 79 79 void __user *msg_control; 80 80 /* used only for send zerocopy */ 81 81 struct io_kiocb *notif; 82 + }; 83 + 84 + enum sr_retry_flags { 85 + IO_SR_MSG_RETRY = 1, 86 + IO_SR_MSG_PARTIAL_MAP = 2, 82 87 }; 83 88 84 89 /* ··· 192 187 193 188 req->flags &= ~REQ_F_BL_EMPTY; 194 189 sr->done_io = 0; 195 - sr->retry = false; 190 + sr->retry_flags = 0; 196 191 sr->len = 0; /* get from the provided buffer */ 197 192 } 198 193 ··· 402 397 struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg); 403 398 404 399 sr->done_io = 0; 405 - sr->retry = false; 400 + sr->retry_flags = 0; 406 401 sr->len = READ_ONCE(sqe->len); 407 402 sr->flags = READ_ONCE(sqe->ioprio); 408 403 if (sr->flags & ~SENDMSG_FLAGS) ··· 756 751 struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg); 757 752 758 753 sr->done_io = 0; 759 - sr->retry = false; 754 + sr->retry_flags = 0; 760 755 761 756 if (unlikely(sqe->file_index || sqe->addr2)) 762 757 return -EINVAL; ··· 828 823 829 824 cflags |= io_put_kbufs(req, this_ret, io_bundle_nbufs(kmsg, this_ret), 830 825 issue_flags); 831 - if (sr->retry) 826 + if (sr->retry_flags & IO_SR_MSG_RETRY) 832 827 cflags = req->cqe.flags | (cflags & CQE_F_MASK); 833 828 /* bundle with no more immediate buffers, we're done */ 834 829 if (req->flags & REQ_F_BL_EMPTY) ··· 837 832 * If more is available AND it was a full transfer, retry and 838 833 * append to this one 839 834 */ 840 - if (!sr->retry && kmsg->msg.msg_inq > 1 && this_ret > 0 && 835 + if (!sr->retry_flags && kmsg->msg.msg_inq > 1 && this_ret > 0 && 841 836 !iov_iter_count(&kmsg->msg.msg_iter)) { 842 837 req->cqe.flags = cflags & ~CQE_F_MASK; 843 838 sr->len = kmsg->msg.msg_inq; 844 839 sr->done_io += this_ret; 845 - sr->retry = true; 840 + sr->retry_flags |= IO_SR_MSG_RETRY; 846 841 return false; 847 842 } 848 843 } else { ··· 1082 1077 if 
(unlikely(ret < 0)) 1083 1078 return ret; 1084 1079 1080 + if (arg.iovs != &kmsg->fast_iov && arg.iovs != kmsg->vec.iovec) { 1081 + kmsg->vec.nr = ret; 1082 + kmsg->vec.iovec = arg.iovs; 1083 + req->flags |= REQ_F_NEED_CLEANUP; 1084 + } 1085 + if (arg.partial_map) 1086 + sr->retry_flags |= IO_SR_MSG_PARTIAL_MAP; 1087 + 1085 1088 /* special case 1 vec, can be a fast path */ 1086 1089 if (ret == 1) { 1087 1090 sr->buf = arg.iovs[0].iov_base; ··· 1098 1085 } 1099 1086 iov_iter_init(&kmsg->msg.msg_iter, ITER_DEST, arg.iovs, ret, 1100 1087 arg.out_len); 1101 - if (arg.iovs != &kmsg->fast_iov && arg.iovs != kmsg->vec.iovec) { 1102 - kmsg->vec.nr = ret; 1103 - kmsg->vec.iovec = arg.iovs; 1104 - req->flags |= REQ_F_NEED_CLEANUP; 1105 - } 1106 1088 } else { 1107 1089 void __user *buf; 1108 1090 ··· 1283 1275 int ret; 1284 1276 1285 1277 zc->done_io = 0; 1286 - zc->retry = false; 1278 + zc->retry_flags = 0; 1287 1279 1288 1280 if (unlikely(READ_ONCE(sqe->__pad2[0]) || READ_ONCE(sqe->addr3))) 1289 1281 return -EINVAL;
+1
io_uring/opdef.c
··· 216 216 }, 217 217 [IORING_OP_FALLOCATE] = { 218 218 .needs_file = 1, 219 + .hash_reg_file = 1, 219 220 .prep = io_fallocate_prep, 220 221 .issue = io_fallocate, 221 222 },
+22 -8
io_uring/rsrc.c
··· 112 112 struct io_mapped_ubuf *imu = priv; 113 113 unsigned int i; 114 114 115 - for (i = 0; i < imu->nr_bvecs; i++) 116 - unpin_user_page(imu->bvec[i].bv_page); 115 + for (i = 0; i < imu->nr_bvecs; i++) { 116 + struct folio *folio = page_folio(imu->bvec[i].bv_page); 117 + 118 + unpin_user_folio(folio, 1); 119 + } 117 120 } 118 121 119 122 static struct io_mapped_ubuf *io_alloc_imu(struct io_ring_ctx *ctx, ··· 734 731 735 732 data->nr_pages_mid = folio_nr_pages(folio); 736 733 data->folio_shift = folio_shift(folio); 734 + data->first_folio_page_idx = folio_page_idx(folio, page_array[0]); 737 735 738 736 /* 739 737 * Check if pages are contiguous inside a folio, and all folios have ··· 828 824 if (coalesced) 829 825 imu->folio_shift = data.folio_shift; 830 826 refcount_set(&imu->refs, 1); 831 - off = (unsigned long) iov->iov_base & ((1UL << imu->folio_shift) - 1); 827 + 828 + off = (unsigned long)iov->iov_base & ~PAGE_MASK; 829 + if (coalesced) 830 + off += data.first_folio_page_idx << PAGE_SHIFT; 831 + 832 832 node->buf = imu; 833 833 ret = 0; 834 834 ··· 848 840 if (ret) { 849 841 if (imu) 850 842 io_free_imu(ctx, imu); 851 - if (pages) 852 - unpin_user_pages(pages, nr_pages); 843 + if (pages) { 844 + for (i = 0; i < nr_pages; i++) 845 + unpin_user_folio(page_folio(pages[i]), 1); 846 + } 853 847 io_cache_free(&ctx->node_cache, node); 854 848 node = ERR_PTR(ret); 855 849 } ··· 1339 1329 { 1340 1330 unsigned long folio_size = 1 << imu->folio_shift; 1341 1331 unsigned long folio_mask = folio_size - 1; 1342 - u64 folio_addr = imu->ubuf & ~folio_mask; 1343 1332 struct bio_vec *res_bvec = vec->bvec; 1344 1333 size_t total_len = 0; 1345 1334 unsigned bvec_idx = 0; ··· 1360 1351 if (unlikely(check_add_overflow(total_len, iov_len, &total_len))) 1361 1352 return -EOVERFLOW; 1362 1353 1363 - /* by using folio address it also accounts for bvec offset */ 1364 - offset = buf_addr - folio_addr; 1354 + offset = buf_addr - imu->ubuf; 1355 + /* 1356 + * Only the first bvec can 
have non zero bv_offset, account it 1357 + * here and work with full folios below. 1358 + */ 1359 + offset += imu->bvec[0].bv_offset; 1360 + 1365 1361 src_bvec = imu->bvec + (offset >> imu->folio_shift); 1366 1362 offset &= folio_mask; 1367 1363
+1
io_uring/rsrc.h
··· 49 49 unsigned int nr_pages_mid; 50 50 unsigned int folio_shift; 51 51 unsigned int nr_folios; 52 + unsigned long first_folio_page_idx; 52 53 }; 53 54 54 55 bool io_rsrc_cache_init(struct io_ring_ctx *ctx);
+4 -2
io_uring/zcrx.c
··· 106 106 for_each_sgtable_dma_sg(mem->sgt, sg, i) 107 107 total_size += sg_dma_len(sg); 108 108 109 - if (total_size < off + len) 110 - return -EINVAL; 109 + if (total_size < off + len) { 110 + ret = -EINVAL; 111 + goto err; 112 + } 111 113 112 114 mem->dmabuf_offset = off; 113 115 mem->size = len;
+1
kernel/Kconfig.kexec
··· 134 134 depends on KEXEC_FILE 135 135 depends on CRASH_DUMP 136 136 depends on DM_CRYPT 137 + depends on KEYS 137 138 help 138 139 With this option enabled, user space can intereact with 139 140 /sys/kernel/config/crash_dm_crypt_keys to make the dm crypt keys
+3 -3
kernel/events/core.c
··· 7251 7251 * CPU-A CPU-B 7252 7252 * 7253 7253 * perf_event_disable_inatomic() 7254 - * @pending_disable = CPU-A; 7254 + * @pending_disable = 1; 7255 7255 * irq_work_queue(); 7256 7256 * 7257 7257 * sched-out 7258 - * @pending_disable = -1; 7258 + * @pending_disable = 0; 7259 7259 * 7260 7260 * sched-in 7261 7261 * perf_event_disable_inatomic() 7262 - * @pending_disable = CPU-B; 7262 + * @pending_disable = 1; 7263 7263 * irq_work_queue(); // FAILS 7264 7264 * 7265 7265 * irq_work_run()
+2 -2
kernel/events/ring_buffer.c
··· 441 441 * store that will be enabled on successful return 442 442 */ 443 443 if (!handle->size) { /* A, matches D */ 444 - event->pending_disable = smp_processor_id(); 444 + perf_event_disable_inatomic(handle->event); 445 445 perf_output_wakeup(handle); 446 446 WRITE_ONCE(rb->aux_nest, 0); 447 447 goto err_put; ··· 526 526 527 527 if (wakeup) { 528 528 if (handle->aux_flags & PERF_AUX_FLAG_TRUNCATED) 529 - handle->event->pending_disable = smp_processor_id(); 529 + perf_event_disable_inatomic(handle->event); 530 530 perf_output_wakeup(handle); 531 531 } 532 532
+7 -7
kernel/trace/trace_events_filter.c
··· 1436 1436 1437 1437 INIT_LIST_HEAD(&head->list); 1438 1438 1439 - item = kmalloc(sizeof(*item), GFP_KERNEL); 1440 - if (!item) 1441 - goto free_now; 1442 - 1443 - item->filter = filter; 1444 - list_add_tail(&item->list, &head->list); 1445 - 1446 1439 list_for_each_entry(file, &tr->events, list) { 1447 1440 if (file->system != dir) 1448 1441 continue; ··· 1446 1453 list_add_tail(&item->list, &head->list); 1447 1454 event_clear_filter(file); 1448 1455 } 1456 + 1457 + item = kmalloc(sizeof(*item), GFP_KERNEL); 1458 + if (!item) 1459 + goto free_now; 1460 + 1461 + item->filter = filter; 1462 + list_add_tail(&item->list, &head->list); 1449 1463 1450 1464 delay_free_filter(head); 1451 1465 return;
+7 -1
lib/alloc_tag.c
··· 10 10 #include <linux/seq_buf.h> 11 11 #include <linux/seq_file.h> 12 12 #include <linux/vmalloc.h> 13 + #include <linux/kmemleak.h> 13 14 14 15 #define ALLOCINFO_FILE_NAME "allocinfo" 15 16 #define MODULE_ALLOC_TAG_VMAP_SIZE (100000UL * sizeof(struct alloc_tag)) ··· 633 632 mod->name); 634 633 return -ENOMEM; 635 634 } 636 - } 637 635 636 + /* 637 + * Avoid a kmemleak false positive. The pointer to the counters is stored 638 + * in the alloc_tag section of the module and cannot be directly accessed. 639 + */ 640 + kmemleak_ignore_percpu(tag->counters); 641 + } 638 642 return 0; 639 643 } 640 644
+8 -1
lib/group_cpus.c
··· 352 352 int ret = -ENOMEM; 353 353 struct cpumask *masks = NULL; 354 354 355 + if (numgrps == 0) 356 + return NULL; 357 + 355 358 if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL)) 356 359 return NULL; 357 360 ··· 429 426 #else /* CONFIG_SMP */ 430 427 struct cpumask *group_cpus_evenly(unsigned int numgrps) 431 428 { 432 - struct cpumask *masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL); 429 + struct cpumask *masks; 433 430 431 + if (numgrps == 0) 432 + return NULL; 433 + 434 + masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL); 434 435 if (!masks) 435 436 return NULL; 436 437
+28 -20
lib/raid6/rvv.c
··· 26 26 static void raid6_rvv1_gen_syndrome_real(int disks, unsigned long bytes, void **ptrs) 27 27 { 28 28 u8 **dptr = (u8 **)ptrs; 29 - unsigned long d; 30 - int z, z0; 31 29 u8 *p, *q; 30 + unsigned long vl, d; 31 + int z, z0; 32 32 33 33 z0 = disks - 3; /* Highest data disk */ 34 34 p = dptr[z0 + 1]; /* XOR parity */ ··· 36 36 37 37 asm volatile (".option push\n" 38 38 ".option arch,+v\n" 39 - "vsetvli t0, x0, e8, m1, ta, ma\n" 39 + "vsetvli %0, x0, e8, m1, ta, ma\n" 40 40 ".option pop\n" 41 + : "=&r" (vl) 41 42 ); 42 43 43 44 /* v0:wp0, v1:wq0, v2:wd0/w20, v3:w10 */ ··· 100 99 { 101 100 u8 **dptr = (u8 **)ptrs; 102 101 u8 *p, *q; 103 - unsigned long d; 102 + unsigned long vl, d; 104 103 int z, z0; 105 104 106 105 z0 = stop; /* P/Q right side optimization */ ··· 109 108 110 109 asm volatile (".option push\n" 111 110 ".option arch,+v\n" 112 - "vsetvli t0, x0, e8, m1, ta, ma\n" 111 + "vsetvli %0, x0, e8, m1, ta, ma\n" 113 112 ".option pop\n" 113 + : "=&r" (vl) 114 114 ); 115 115 116 116 /* v0:wp0, v1:wq0, v2:wd0/w20, v3:w10 */ ··· 197 195 static void raid6_rvv2_gen_syndrome_real(int disks, unsigned long bytes, void **ptrs) 198 196 { 199 197 u8 **dptr = (u8 **)ptrs; 200 - unsigned long d; 201 - int z, z0; 202 198 u8 *p, *q; 199 + unsigned long vl, d; 200 + int z, z0; 203 201 204 202 z0 = disks - 3; /* Highest data disk */ 205 203 p = dptr[z0 + 1]; /* XOR parity */ ··· 207 205 208 206 asm volatile (".option push\n" 209 207 ".option arch,+v\n" 210 - "vsetvli t0, x0, e8, m1, ta, ma\n" 208 + "vsetvli %0, x0, e8, m1, ta, ma\n" 211 209 ".option pop\n" 210 + : "=&r" (vl) 212 211 ); 213 212 214 213 /* ··· 290 287 { 291 288 u8 **dptr = (u8 **)ptrs; 292 289 u8 *p, *q; 293 - unsigned long d; 290 + unsigned long vl, d; 294 291 int z, z0; 295 292 296 293 z0 = stop; /* P/Q right side optimization */ ··· 299 296 300 297 asm volatile (".option push\n" 301 298 ".option arch,+v\n" 302 - "vsetvli t0, x0, e8, m1, ta, ma\n" 299 + "vsetvli %0, x0, e8, m1, ta, ma\n" 303 300 ".option 
pop\n" 301 + : "=&r" (vl) 304 302 ); 305 303 306 304 /* ··· 417 413 static void raid6_rvv4_gen_syndrome_real(int disks, unsigned long bytes, void **ptrs) 418 414 { 419 415 u8 **dptr = (u8 **)ptrs; 420 - unsigned long d; 421 - int z, z0; 422 416 u8 *p, *q; 417 + unsigned long vl, d; 418 + int z, z0; 423 419 424 420 z0 = disks - 3; /* Highest data disk */ 425 421 p = dptr[z0 + 1]; /* XOR parity */ ··· 427 423 428 424 asm volatile (".option push\n" 429 425 ".option arch,+v\n" 430 - "vsetvli t0, x0, e8, m1, ta, ma\n" 426 + "vsetvli %0, x0, e8, m1, ta, ma\n" 431 427 ".option pop\n" 428 + : "=&r" (vl) 432 429 ); 433 430 434 431 /* ··· 544 539 { 545 540 u8 **dptr = (u8 **)ptrs; 546 541 u8 *p, *q; 547 - unsigned long d; 542 + unsigned long vl, d; 548 543 int z, z0; 549 544 550 545 z0 = stop; /* P/Q right side optimization */ ··· 553 548 554 549 asm volatile (".option push\n" 555 550 ".option arch,+v\n" 556 - "vsetvli t0, x0, e8, m1, ta, ma\n" 551 + "vsetvli %0, x0, e8, m1, ta, ma\n" 557 552 ".option pop\n" 553 + : "=&r" (vl) 558 554 ); 559 555 560 556 /* ··· 727 721 static void raid6_rvv8_gen_syndrome_real(int disks, unsigned long bytes, void **ptrs) 728 722 { 729 723 u8 **dptr = (u8 **)ptrs; 730 - unsigned long d; 731 - int z, z0; 732 724 u8 *p, *q; 725 + unsigned long vl, d; 726 + int z, z0; 733 727 734 728 z0 = disks - 3; /* Highest data disk */ 735 729 p = dptr[z0 + 1]; /* XOR parity */ ··· 737 731 738 732 asm volatile (".option push\n" 739 733 ".option arch,+v\n" 740 - "vsetvli t0, x0, e8, m1, ta, ma\n" 734 + "vsetvli %0, x0, e8, m1, ta, ma\n" 741 735 ".option pop\n" 736 + : "=&r" (vl) 742 737 ); 743 738 744 739 /* ··· 922 915 { 923 916 u8 **dptr = (u8 **)ptrs; 924 917 u8 *p, *q; 925 - unsigned long d; 918 + unsigned long vl, d; 926 919 int z, z0; 927 920 928 921 z0 = stop; /* P/Q right side optimization */ ··· 931 924 932 925 asm volatile (".option push\n" 933 926 ".option arch,+v\n" 934 - "vsetvli t0, x0, e8, m1, ta, ma\n" 927 + "vsetvli %0, x0, e8, m1, ta, ma\n" 
935 928 ".option pop\n" 929 + : "=&r" (vl) 936 930 ); 937 931 938 932 /*
+3 -1
lib/test_objagg.c
··· 899 899 int err; 900 900 901 901 stats = objagg_hints_stats_get(objagg_hints); 902 - if (IS_ERR(stats)) 902 + if (IS_ERR(stats)) { 903 + *errmsg = "objagg_hints_stats_get() failed."; 903 904 return PTR_ERR(stats); 905 + } 904 906 err = __check_expect_stats(stats, expect_stats, errmsg); 905 907 objagg_stats_put(stats); 906 908 return err;
+1
mm/damon/sysfs-schemes.c
··· 472 472 return -ENOMEM; 473 473 474 474 strscpy(path, buf, count + 1); 475 + kfree(filter->memcg_path); 475 476 filter->memcg_path = path; 476 477 return count; 477 478 }
+17 -37
mm/hugetlb.c
··· 2787 2787 /* 2788 2788 * alloc_and_dissolve_hugetlb_folio - Allocate a new folio and dissolve 2789 2789 * the old one 2790 - * @h: struct hstate old page belongs to 2791 2790 * @old_folio: Old folio to dissolve 2792 2791 * @list: List to isolate the page in case we need to 2793 2792 * Returns 0 on success, otherwise negated error. 2794 2793 */ 2795 - static int alloc_and_dissolve_hugetlb_folio(struct hstate *h, 2796 - struct folio *old_folio, struct list_head *list) 2794 + static int alloc_and_dissolve_hugetlb_folio(struct folio *old_folio, 2795 + struct list_head *list) 2797 2796 { 2798 - gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; 2797 + gfp_t gfp_mask; 2798 + struct hstate *h; 2799 2799 int nid = folio_nid(old_folio); 2800 2800 struct folio *new_folio = NULL; 2801 2801 int ret = 0; 2802 2802 2803 2803 retry: 2804 + /* 2805 + * The old_folio might have been dissolved from under our feet, so make sure 2806 + * to carefully check the state under the lock. 2807 + */ 2804 2808 spin_lock_irq(&hugetlb_lock); 2805 2809 if (!folio_test_hugetlb(old_folio)) { 2806 2810 /* ··· 2833 2829 cond_resched(); 2834 2830 goto retry; 2835 2831 } else { 2832 + h = folio_hstate(old_folio); 2836 2833 if (!new_folio) { 2837 2834 spin_unlock_irq(&hugetlb_lock); 2835 + gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; 2838 2836 new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, 2839 2837 NULL, NULL); 2840 2838 if (!new_folio) ··· 2880 2874 2881 2875 int isolate_or_dissolve_huge_folio(struct folio *folio, struct list_head *list) 2882 2876 { 2883 - struct hstate *h; 2884 2877 int ret = -EBUSY; 2885 2878 2886 - /* 2887 - * The page might have been dissolved from under our feet, so make sure 2888 - * to carefully check the state under the lock. 2889 - * Return success when racing as if we dissolved the page ourselves. 
2890 - */ 2891 - spin_lock_irq(&hugetlb_lock); 2892 - if (folio_test_hugetlb(folio)) { 2893 - h = folio_hstate(folio); 2894 - } else { 2895 - spin_unlock_irq(&hugetlb_lock); 2879 + /* Not to disrupt normal path by vainly holding hugetlb_lock */ 2880 + if (!folio_test_hugetlb(folio)) 2896 2881 return 0; 2897 - } 2898 - spin_unlock_irq(&hugetlb_lock); 2899 2882 2900 2883 /* 2901 2884 * Fence off gigantic pages as there is a cyclic dependency between 2902 2885 * alloc_contig_range and them. Return -ENOMEM as this has the effect 2903 2886 * of bailing out right away without further retrying. 2904 2887 */ 2905 - if (hstate_is_gigantic(h)) 2888 + if (folio_order(folio) > MAX_PAGE_ORDER) 2906 2889 return -ENOMEM; 2907 2890 2908 2891 if (folio_ref_count(folio) && folio_isolate_hugetlb(folio, list)) 2909 2892 ret = 0; 2910 2893 else if (!folio_ref_count(folio)) 2911 - ret = alloc_and_dissolve_hugetlb_folio(h, folio, list); 2894 + ret = alloc_and_dissolve_hugetlb_folio(folio, list); 2912 2895 2913 2896 return ret; 2914 2897 } ··· 2911 2916 */ 2912 2917 int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn) 2913 2918 { 2914 - struct hstate *h; 2915 2919 struct folio *folio; 2916 2920 int ret = 0; 2917 2921 ··· 2919 2925 while (start_pfn < end_pfn) { 2920 2926 folio = pfn_folio(start_pfn); 2921 2927 2922 - /* 2923 - * The folio might have been dissolved from under our feet, so make sure 2924 - * to carefully check the state under the lock. 
2925 - */ 2926 - spin_lock_irq(&hugetlb_lock); 2927 - if (folio_test_hugetlb(folio)) { 2928 - h = folio_hstate(folio); 2929 - } else { 2930 - spin_unlock_irq(&hugetlb_lock); 2931 - start_pfn++; 2932 - continue; 2933 - } 2934 - spin_unlock_irq(&hugetlb_lock); 2935 - 2936 - if (!folio_ref_count(folio)) { 2937 - ret = alloc_and_dissolve_hugetlb_folio(h, folio, 2938 - &isolate_list); 2928 + /* Not to disrupt normal path by vainly holding hugetlb_lock */ 2929 + if (folio_test_hugetlb(folio) && !folio_ref_count(folio)) { 2930 + ret = alloc_and_dissolve_hugetlb_folio(folio, &isolate_list); 2939 2931 if (ret) 2940 2932 break; 2941 2933
+14
mm/kmemleak.c
··· 1247 1247 EXPORT_SYMBOL(kmemleak_transient_leak); 1248 1248 1249 1249 /** 1250 + * kmemleak_ignore_percpu - similar to kmemleak_ignore but taking a percpu 1251 + * address argument 1252 + * @ptr: percpu address of the object 1253 + */ 1254 + void __ref kmemleak_ignore_percpu(const void __percpu *ptr) 1255 + { 1256 + pr_debug("%s(0x%px)\n", __func__, ptr); 1257 + 1258 + if (kmemleak_enabled && ptr && !IS_ERR_PCPU(ptr)) 1259 + make_black_object((unsigned long)ptr, OBJECT_PERCPU); 1260 + } 1261 + EXPORT_SYMBOL_GPL(kmemleak_ignore_percpu); 1262 + 1263 + /** 1250 1264 * kmemleak_ignore - ignore an allocated object 1251 1265 * @ptr: pointer to beginning of the object 1252 1266 *
-36
net/bluetooth/hci_event.c
··· 2150 2150 return rp->status; 2151 2151 } 2152 2152 2153 - static u8 hci_cc_set_ext_adv_param(struct hci_dev *hdev, void *data, 2154 - struct sk_buff *skb) 2155 - { 2156 - struct hci_rp_le_set_ext_adv_params *rp = data; 2157 - struct hci_cp_le_set_ext_adv_params *cp; 2158 - struct adv_info *adv_instance; 2159 - 2160 - bt_dev_dbg(hdev, "status 0x%2.2x", rp->status); 2161 - 2162 - if (rp->status) 2163 - return rp->status; 2164 - 2165 - cp = hci_sent_cmd_data(hdev, HCI_OP_LE_SET_EXT_ADV_PARAMS); 2166 - if (!cp) 2167 - return rp->status; 2168 - 2169 - hci_dev_lock(hdev); 2170 - hdev->adv_addr_type = cp->own_addr_type; 2171 - if (!cp->handle) { 2172 - /* Store in hdev for instance 0 */ 2173 - hdev->adv_tx_power = rp->tx_power; 2174 - } else { 2175 - adv_instance = hci_find_adv_instance(hdev, cp->handle); 2176 - if (adv_instance) 2177 - adv_instance->tx_power = rp->tx_power; 2178 - } 2179 - /* Update adv data as tx power is known now */ 2180 - hci_update_adv_data(hdev, cp->handle); 2181 - 2182 - hci_dev_unlock(hdev); 2183 - 2184 - return rp->status; 2185 - } 2186 - 2187 2153 static u8 hci_cc_read_rssi(struct hci_dev *hdev, void *data, 2188 2154 struct sk_buff *skb) 2189 2155 { ··· 4130 4164 HCI_CC(HCI_OP_LE_READ_NUM_SUPPORTED_ADV_SETS, 4131 4165 hci_cc_le_read_num_adv_sets, 4132 4166 sizeof(struct hci_rp_le_read_num_supported_adv_sets)), 4133 - HCI_CC(HCI_OP_LE_SET_EXT_ADV_PARAMS, hci_cc_set_ext_adv_param, 4134 - sizeof(struct hci_rp_le_set_ext_adv_params)), 4135 4167 HCI_CC_STATUS(HCI_OP_LE_SET_EXT_ADV_ENABLE, 4136 4168 hci_cc_le_set_ext_adv_enable), 4137 4169 HCI_CC_STATUS(HCI_OP_LE_SET_ADV_SET_RAND_ADDR,
+138 -89
net/bluetooth/hci_sync.c
··· 1205 1205 sizeof(cp), &cp, HCI_CMD_TIMEOUT); 1206 1206 } 1207 1207 1208 + static int 1209 + hci_set_ext_adv_params_sync(struct hci_dev *hdev, struct adv_info *adv, 1210 + const struct hci_cp_le_set_ext_adv_params *cp, 1211 + struct hci_rp_le_set_ext_adv_params *rp) 1212 + { 1213 + struct sk_buff *skb; 1214 + 1215 + skb = __hci_cmd_sync(hdev, HCI_OP_LE_SET_EXT_ADV_PARAMS, sizeof(*cp), 1216 + cp, HCI_CMD_TIMEOUT); 1217 + 1218 + /* If command return a status event, skb will be set to -ENODATA */ 1219 + if (skb == ERR_PTR(-ENODATA)) 1220 + return 0; 1221 + 1222 + if (IS_ERR(skb)) { 1223 + bt_dev_err(hdev, "Opcode 0x%4.4x failed: %ld", 1224 + HCI_OP_LE_SET_EXT_ADV_PARAMS, PTR_ERR(skb)); 1225 + return PTR_ERR(skb); 1226 + } 1227 + 1228 + if (skb->len != sizeof(*rp)) { 1229 + bt_dev_err(hdev, "Invalid response length for 0x%4.4x: %u", 1230 + HCI_OP_LE_SET_EXT_ADV_PARAMS, skb->len); 1231 + kfree_skb(skb); 1232 + return -EIO; 1233 + } 1234 + 1235 + memcpy(rp, skb->data, sizeof(*rp)); 1236 + kfree_skb(skb); 1237 + 1238 + if (!rp->status) { 1239 + hdev->adv_addr_type = cp->own_addr_type; 1240 + if (!cp->handle) { 1241 + /* Store in hdev for instance 0 */ 1242 + hdev->adv_tx_power = rp->tx_power; 1243 + } else if (adv) { 1244 + adv->tx_power = rp->tx_power; 1245 + } 1246 + } 1247 + 1248 + return rp->status; 1249 + } 1250 + 1251 + static int hci_set_ext_adv_data_sync(struct hci_dev *hdev, u8 instance) 1252 + { 1253 + DEFINE_FLEX(struct hci_cp_le_set_ext_adv_data, pdu, data, length, 1254 + HCI_MAX_EXT_AD_LENGTH); 1255 + u8 len; 1256 + struct adv_info *adv = NULL; 1257 + int err; 1258 + 1259 + if (instance) { 1260 + adv = hci_find_adv_instance(hdev, instance); 1261 + if (!adv || !adv->adv_data_changed) 1262 + return 0; 1263 + } 1264 + 1265 + len = eir_create_adv_data(hdev, instance, pdu->data, 1266 + HCI_MAX_EXT_AD_LENGTH); 1267 + 1268 + pdu->length = len; 1269 + pdu->handle = adv ? 
adv->handle : instance; 1270 + pdu->operation = LE_SET_ADV_DATA_OP_COMPLETE; 1271 + pdu->frag_pref = LE_SET_ADV_DATA_NO_FRAG; 1272 + 1273 + err = __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_EXT_ADV_DATA, 1274 + struct_size(pdu, data, len), pdu, 1275 + HCI_CMD_TIMEOUT); 1276 + if (err) 1277 + return err; 1278 + 1279 + /* Update data if the command succeed */ 1280 + if (adv) { 1281 + adv->adv_data_changed = false; 1282 + } else { 1283 + memcpy(hdev->adv_data, pdu->data, len); 1284 + hdev->adv_data_len = len; 1285 + } 1286 + 1287 + return 0; 1288 + } 1289 + 1290 + static int hci_set_adv_data_sync(struct hci_dev *hdev, u8 instance) 1291 + { 1292 + struct hci_cp_le_set_adv_data cp; 1293 + u8 len; 1294 + 1295 + memset(&cp, 0, sizeof(cp)); 1296 + 1297 + len = eir_create_adv_data(hdev, instance, cp.data, sizeof(cp.data)); 1298 + 1299 + /* There's nothing to do if the data hasn't changed */ 1300 + if (hdev->adv_data_len == len && 1301 + memcmp(cp.data, hdev->adv_data, len) == 0) 1302 + return 0; 1303 + 1304 + memcpy(hdev->adv_data, cp.data, sizeof(cp.data)); 1305 + hdev->adv_data_len = len; 1306 + 1307 + cp.length = len; 1308 + 1309 + return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_ADV_DATA, 1310 + sizeof(cp), &cp, HCI_CMD_TIMEOUT); 1311 + } 1312 + 1313 + int hci_update_adv_data_sync(struct hci_dev *hdev, u8 instance) 1314 + { 1315 + if (!hci_dev_test_flag(hdev, HCI_LE_ENABLED)) 1316 + return 0; 1317 + 1318 + if (ext_adv_capable(hdev)) 1319 + return hci_set_ext_adv_data_sync(hdev, instance); 1320 + 1321 + return hci_set_adv_data_sync(hdev, instance); 1322 + } 1323 + 1208 1324 int hci_setup_ext_adv_instance_sync(struct hci_dev *hdev, u8 instance) 1209 1325 { 1210 1326 struct hci_cp_le_set_ext_adv_params cp; 1327 + struct hci_rp_le_set_ext_adv_params rp; 1211 1328 bool connectable; 1212 1329 u32 flags; 1213 1330 bdaddr_t random_addr; ··· 1433 1316 cp.secondary_phy = HCI_ADV_PHY_1M; 1434 1317 } 1435 1318 1436 - err = __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_EXT_ADV_PARAMS, 
1437 - sizeof(cp), &cp, HCI_CMD_TIMEOUT); 1319 + err = hci_set_ext_adv_params_sync(hdev, adv, &cp, &rp); 1320 + if (err) 1321 + return err; 1322 + 1323 + /* Update adv data as tx power is known now */ 1324 + err = hci_set_ext_adv_data_sync(hdev, cp.handle); 1438 1325 if (err) 1439 1326 return err; 1440 1327 ··· 1943 1822 sizeof(cp), &cp, HCI_CMD_TIMEOUT); 1944 1823 } 1945 1824 1946 - static int hci_set_ext_adv_data_sync(struct hci_dev *hdev, u8 instance) 1947 - { 1948 - DEFINE_FLEX(struct hci_cp_le_set_ext_adv_data, pdu, data, length, 1949 - HCI_MAX_EXT_AD_LENGTH); 1950 - u8 len; 1951 - struct adv_info *adv = NULL; 1952 - int err; 1953 - 1954 - if (instance) { 1955 - adv = hci_find_adv_instance(hdev, instance); 1956 - if (!adv || !adv->adv_data_changed) 1957 - return 0; 1958 - } 1959 - 1960 - len = eir_create_adv_data(hdev, instance, pdu->data, 1961 - HCI_MAX_EXT_AD_LENGTH); 1962 - 1963 - pdu->length = len; 1964 - pdu->handle = adv ? adv->handle : instance; 1965 - pdu->operation = LE_SET_ADV_DATA_OP_COMPLETE; 1966 - pdu->frag_pref = LE_SET_ADV_DATA_NO_FRAG; 1967 - 1968 - err = __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_EXT_ADV_DATA, 1969 - struct_size(pdu, data, len), pdu, 1970 - HCI_CMD_TIMEOUT); 1971 - if (err) 1972 - return err; 1973 - 1974 - /* Update data if the command succeed */ 1975 - if (adv) { 1976 - adv->adv_data_changed = false; 1977 - } else { 1978 - memcpy(hdev->adv_data, pdu->data, len); 1979 - hdev->adv_data_len = len; 1980 - } 1981 - 1982 - return 0; 1983 - } 1984 - 1985 - static int hci_set_adv_data_sync(struct hci_dev *hdev, u8 instance) 1986 - { 1987 - struct hci_cp_le_set_adv_data cp; 1988 - u8 len; 1989 - 1990 - memset(&cp, 0, sizeof(cp)); 1991 - 1992 - len = eir_create_adv_data(hdev, instance, cp.data, sizeof(cp.data)); 1993 - 1994 - /* There's nothing to do if the data hasn't changed */ 1995 - if (hdev->adv_data_len == len && 1996 - memcmp(cp.data, hdev->adv_data, len) == 0) 1997 - return 0; 1998 - 1999 - memcpy(hdev->adv_data, cp.data, 
sizeof(cp.data)); 2000 - hdev->adv_data_len = len; 2001 - 2002 - cp.length = len; 2003 - 2004 - return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_ADV_DATA, 2005 - sizeof(cp), &cp, HCI_CMD_TIMEOUT); 2006 - } 2007 - 2008 - int hci_update_adv_data_sync(struct hci_dev *hdev, u8 instance) 2009 - { 2010 - if (!hci_dev_test_flag(hdev, HCI_LE_ENABLED)) 2011 - return 0; 2012 - 2013 - if (ext_adv_capable(hdev)) 2014 - return hci_set_ext_adv_data_sync(hdev, instance); 2015 - 2016 - return hci_set_adv_data_sync(hdev, instance); 2017 - } 2018 - 2019 1825 int hci_schedule_adv_instance_sync(struct hci_dev *hdev, u8 instance, 2020 1826 bool force) 2021 1827 { ··· 2018 1970 static int hci_clear_adv_sync(struct hci_dev *hdev, struct sock *sk, bool force) 2019 1971 { 2020 1972 struct adv_info *adv, *n; 2021 - int err = 0; 2022 1973 2023 1974 if (ext_adv_capable(hdev)) 2024 1975 /* Remove all existing sets */ 2025 - err = hci_clear_adv_sets_sync(hdev, sk); 2026 - if (ext_adv_capable(hdev)) 2027 - return err; 1976 + return hci_clear_adv_sets_sync(hdev, sk); 2028 1977 2029 1978 /* This is safe as long as there is no command send while the lock is 2030 1979 * held. ··· 2049 2004 static int hci_remove_adv_sync(struct hci_dev *hdev, u8 instance, 2050 2005 struct sock *sk) 2051 2006 { 2052 - int err = 0; 2007 + int err; 2053 2008 2054 2009 /* If we use extended advertising, instance has to be removed first. */ 2055 2010 if (ext_adv_capable(hdev)) 2056 - err = hci_remove_ext_adv_instance_sync(hdev, instance, sk); 2057 - if (ext_adv_capable(hdev)) 2058 - return err; 2011 + return hci_remove_ext_adv_instance_sync(hdev, instance, sk); 2059 2012 2060 2013 /* This is safe as long as there is no command send while the lock is 2061 2014 * held. ··· 2152 2109 int hci_disable_advertising_sync(struct hci_dev *hdev) 2153 2110 { 2154 2111 u8 enable = 0x00; 2155 - int err = 0; 2156 2112 2157 2113 /* If controller is not advertising we are done. 
*/ 2158 2114 if (!hci_dev_test_flag(hdev, HCI_LE_ADV)) 2159 2115 return 0; 2160 2116 2161 2117 if (ext_adv_capable(hdev)) 2162 - err = hci_disable_ext_adv_instance_sync(hdev, 0x00); 2163 - if (ext_adv_capable(hdev)) 2164 - return err; 2118 + return hci_disable_ext_adv_instance_sync(hdev, 0x00); 2165 2119 2166 2120 return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_ADV_ENABLE, 2167 2121 sizeof(enable), &enable, HCI_CMD_TIMEOUT); ··· 2520 2480 { 2521 2481 int err; 2522 2482 int old_state; 2483 + 2484 + /* If controller is not advertising we are done. */ 2485 + if (!hci_dev_test_flag(hdev, HCI_LE_ADV)) 2486 + return 0; 2523 2487 2524 2488 /* If already been paused there is nothing to do. */ 2525 2489 if (hdev->advertising_paused) ··· 6321 6277 struct hci_conn *conn) 6322 6278 { 6323 6279 struct hci_cp_le_set_ext_adv_params cp; 6280 + struct hci_rp_le_set_ext_adv_params rp; 6324 6281 int err; 6325 6282 bdaddr_t random_addr; 6326 6283 u8 own_addr_type; ··· 6363 6318 if (err) 6364 6319 return err; 6365 6320 6366 - err = __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_EXT_ADV_PARAMS, 6367 - sizeof(cp), &cp, HCI_CMD_TIMEOUT); 6321 + err = hci_set_ext_adv_params_sync(hdev, NULL, &cp, &rp); 6322 + if (err) 6323 + return err; 6324 + 6325 + /* Update adv data as tx power is known now */ 6326 + err = hci_set_ext_adv_data_sync(hdev, cp.handle); 6368 6327 if (err) 6369 6328 return err; 6370 6329
+24 -1
net/bluetooth/mgmt.c
··· 1080 1080 struct mgmt_mesh_tx *mesh_tx; 1081 1081 1082 1082 hci_dev_clear_flag(hdev, HCI_MESH_SENDING); 1083 - hci_disable_advertising_sync(hdev); 1083 + if (list_empty(&hdev->adv_instances)) 1084 + hci_disable_advertising_sync(hdev); 1084 1085 mesh_tx = mgmt_mesh_next(hdev, NULL); 1085 1086 1086 1087 if (mesh_tx) ··· 2154 2153 else 2155 2154 hci_dev_clear_flag(hdev, HCI_MESH); 2156 2155 2156 + hdev->le_scan_interval = __le16_to_cpu(cp->period); 2157 + hdev->le_scan_window = __le16_to_cpu(cp->window); 2158 + 2157 2159 len -= sizeof(*cp); 2158 2160 2159 2161 /* If filters don't fit, forward all adv pkts */ ··· 2171 2167 { 2172 2168 struct mgmt_cp_set_mesh *cp = data; 2173 2169 struct mgmt_pending_cmd *cmd; 2170 + __u16 period, window; 2174 2171 int err = 0; 2175 2172 2176 2173 bt_dev_dbg(hdev, "sock %p", sk); ··· 2182 2177 MGMT_STATUS_NOT_SUPPORTED); 2183 2178 2184 2179 if (cp->enable != 0x00 && cp->enable != 0x01) 2180 + return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER, 2181 + MGMT_STATUS_INVALID_PARAMS); 2182 + 2183 + /* Keep allowed ranges in sync with set_scan_params() */ 2184 + period = __le16_to_cpu(cp->period); 2185 + 2186 + if (period < 0x0004 || period > 0x4000) 2187 + return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER, 2188 + MGMT_STATUS_INVALID_PARAMS); 2189 + 2190 + window = __le16_to_cpu(cp->window); 2191 + 2192 + if (window < 0x0004 || window > 0x4000) 2193 + return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER, 2194 + MGMT_STATUS_INVALID_PARAMS); 2195 + 2196 + if (window > period) 2185 2197 return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER, 2186 2198 MGMT_STATUS_INVALID_PARAMS); 2187 2199 ··· 6454 6432 return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_SCAN_PARAMS, 6455 6433 MGMT_STATUS_NOT_SUPPORTED); 6456 6434 6435 + /* Keep allowed ranges in sync with set_mesh() */ 6457 6436 interval = __le16_to_cpu(cp->interval); 6458 6437 6459 6438 if (interval < 0x0004 || interval > 0x4000)
+4 -3
net/ipv4/ip_input.c
··· 325 325 const struct sk_buff *hint) 326 326 { 327 327 const struct iphdr *iph = ip_hdr(skb); 328 - int err, drop_reason; 329 328 struct rtable *rt; 329 + int drop_reason; 330 330 331 331 if (ip_can_use_hint(skb, iph, hint)) { 332 332 drop_reason = ip_route_use_hint(skb, iph->daddr, iph->saddr, ··· 351 351 break; 352 352 case IPPROTO_UDP: 353 353 if (READ_ONCE(net->ipv4.sysctl_udp_early_demux)) { 354 - err = udp_v4_early_demux(skb); 355 - if (unlikely(err)) 354 + drop_reason = udp_v4_early_demux(skb); 355 + if (unlikely(drop_reason)) 356 356 goto drop_error; 357 + drop_reason = SKB_DROP_REASON_NOT_SPECIFIED; 357 358 358 359 /* must reload iph, skb->head might have changed */ 359 360 iph = ip_hdr(skb);
+4 -11
net/rose/rose_route.c
··· 497 497 t = rose_node; 498 498 rose_node = rose_node->next; 499 499 500 - for (i = 0; i < t->count; i++) { 500 + for (i = t->count - 1; i >= 0; i--) { 501 501 if (t->neighbour[i] != s) 502 502 continue; 503 503 504 504 t->count--; 505 505 506 - switch (i) { 507 - case 0: 508 - t->neighbour[0] = t->neighbour[1]; 509 - fallthrough; 510 - case 1: 511 - t->neighbour[1] = t->neighbour[2]; 512 - break; 513 - case 2: 514 - break; 515 - } 506 + memmove(&t->neighbour[i], &t->neighbour[i + 1], 507 + sizeof(t->neighbour[0]) * 508 + (t->count - i)); 516 509 } 517 510 518 511 if (t->count <= 0)
+5 -14
net/sched/sch_api.c
··· 780 780 781 781 void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len) 782 782 { 783 - bool qdisc_is_offloaded = sch->flags & TCQ_F_OFFLOADED; 784 783 const struct Qdisc_class_ops *cops; 785 784 unsigned long cl; 786 785 u32 parentid; 787 786 bool notify; 788 787 int drops; 789 788 790 - if (n == 0 && len == 0) 791 - return; 792 789 drops = max_t(int, n, 0); 793 790 rcu_read_lock(); 794 791 while ((parentid = sch->parent)) { ··· 794 797 795 798 if (sch->flags & TCQ_F_NOPARENT) 796 799 break; 797 - /* Notify parent qdisc only if child qdisc becomes empty. 798 - * 799 - * If child was empty even before update then backlog 800 - * counter is screwed and we skip notification because 801 - * parent class is already passive. 802 - * 803 - * If the original child was offloaded then it is allowed 804 - * to be seem as empty, so the parent is notified anyway. 805 - */ 806 - notify = !sch->q.qlen && !WARN_ON_ONCE(!n && 807 - !qdisc_is_offloaded); 800 + /* Notify parent qdisc only if child qdisc becomes empty. */ 801 + notify = !sch->q.qlen; 808 802 /* TODO: perform the search on a per txq basis */ 809 803 sch = qdisc_lookup_rcu(qdisc_dev(sch), TC_H_MAJ(parentid)); 810 804 if (sch == NULL) { ··· 804 816 } 805 817 cops = sch->ops->cl_ops; 806 818 if (notify && cops->qlen_notify) { 819 + /* Note that qlen_notify must be idempotent as it may get called 820 + * multiple times. 821 + */ 807 822 cl = cops->find(sch, parentid); 808 823 cops->qlen_notify(sch, cl); 809 824 }
+1 -1
net/sunrpc/auth_gss/auth_gss.c
··· 1724 1724 maj_stat = gss_validate_seqno_mic(ctx, task->tk_rqstp->rq_seqnos[0], seq, p, len); 1725 1725 /* RFC 2203 5.3.3.1 - compute the checksum of each sequence number in the cache */ 1726 1726 while (unlikely(maj_stat == GSS_S_BAD_SIG && i < task->tk_rqstp->rq_seqno_count)) 1727 - maj_stat = gss_validate_seqno_mic(ctx, task->tk_rqstp->rq_seqnos[i], seq, p, len); 1727 + maj_stat = gss_validate_seqno_mic(ctx, task->tk_rqstp->rq_seqnos[i++], seq, p, len); 1728 1728 if (maj_stat == GSS_S_CONTEXT_EXPIRED) 1729 1729 clear_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags); 1730 1730 if (maj_stat)
+2 -2
net/vmw_vsock/vmci_transport.c
··· 119 119 u16 proto, 120 120 struct vmci_handle handle) 121 121 { 122 + memset(pkt, 0, sizeof(*pkt)); 123 + 122 124 /* We register the stream control handler as an any cid handle so we 123 125 * must always send from a source address of VMADDR_CID_ANY 124 126 */ ··· 133 131 pkt->type = type; 134 132 pkt->src_port = src->svm_port; 135 133 pkt->dst_port = dst->svm_port; 136 - memset(&pkt->proto, 0, sizeof(pkt->proto)); 137 - memset(&pkt->_reserved2, 0, sizeof(pkt->_reserved2)); 138 134 139 135 switch (pkt->type) { 140 136 case VMCI_TRANSPORT_PACKET_TYPE_INVALID:
+1 -1
scripts/gdb/linux/vfs.py
··· 22 22 if parent == d or parent == 0: 23 23 return "" 24 24 p = dentry_name(d['d_parent']) + "/" 25 - return p + d['d_iname'].string() 25 + return p + d['d_shortname']['string'].string() 26 26 27 27 class DentryName(gdb.Function): 28 28 """Return string of the full path of a dentry.
+10
sound/pci/hda/patch_realtek.c
··· 2656 2656 SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX), 2657 2657 SND_PCI_QUIRK(0x1558, 0x3702, "Clevo X370SN[VW]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2658 2658 SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2659 + SND_PCI_QUIRK(0x1558, 0x5802, "Clevo X58[05]WN[RST]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2659 2660 SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2660 2661 SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2661 2662 SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), ··· 6610 6609 if (action == HDA_FIXUP_ACT_PRE_PROBE) { 6611 6610 static const hda_nid_t conn[] = { 0x02, 0x03 }; 6612 6611 snd_hda_override_conn_list(codec, 0x15, ARRAY_SIZE(conn), conn); 6612 + snd_hda_gen_add_micmute_led_cdev(codec, NULL); 6613 6613 } 6614 6614 } 6615 6615 ··· 10739 10737 SND_PCI_QUIRK(0x103c, 0x8975, "HP EliteBook x360 840 Aero G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 10740 10738 SND_PCI_QUIRK(0x103c, 0x897d, "HP mt440 Mobile Thin Client U74", ALC236_FIXUP_HP_GPIO_LED), 10741 10739 SND_PCI_QUIRK(0x103c, 0x8981, "HP Elite Dragonfly G3", ALC245_FIXUP_CS35L41_SPI_4), 10740 + SND_PCI_QUIRK(0x103c, 0x898a, "HP Pavilion 15-eg100", ALC287_FIXUP_HP_GPIO_LED), 10742 10741 SND_PCI_QUIRK(0x103c, 0x898e, "HP EliteBook 835 G9", ALC287_FIXUP_CS35L41_I2C_2), 10743 10742 SND_PCI_QUIRK(0x103c, 0x898f, "HP EliteBook 835 G9", ALC287_FIXUP_CS35L41_I2C_2), 10744 10743 SND_PCI_QUIRK(0x103c, 0x8991, "HP EliteBook 845 G9", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED), ··· 10910 10907 SND_PCI_QUIRK(0x103c, 0x8def, "HP EliteBook 660 G12", ALC236_FIXUP_HP_GPIO_LED), 10911 10908 SND_PCI_QUIRK(0x103c, 0x8df0, "HP EliteBook 630 G12", ALC236_FIXUP_HP_GPIO_LED), 10912 10909 SND_PCI_QUIRK(0x103c, 0x8df1, "HP EliteBook 630 G12", ALC236_FIXUP_HP_GPIO_LED), 10910 + SND_PCI_QUIRK(0x103c, 0x8dfb, "HP EliteBook 6 G1a 14", 
ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 10913 10911 SND_PCI_QUIRK(0x103c, 0x8dfc, "HP EliteBook 645 G12", ALC236_FIXUP_HP_GPIO_LED), 10912 + SND_PCI_QUIRK(0x103c, 0x8dfd, "HP EliteBook 6 G1a 16", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 10914 10913 SND_PCI_QUIRK(0x103c, 0x8dfe, "HP EliteBook 665 G12", ALC236_FIXUP_HP_GPIO_LED), 10915 10914 SND_PCI_QUIRK(0x103c, 0x8e11, "HP Trekker", ALC287_FIXUP_CS35L41_I2C_2), 10916 10915 SND_PCI_QUIRK(0x103c, 0x8e12, "HP Trekker", ALC287_FIXUP_CS35L41_I2C_2), ··· 11031 11026 SND_PCI_QUIRK(0x1043, 0x1df3, "ASUS UM5606WA", ALC294_FIXUP_BASS_SPEAKER_15), 11032 11027 SND_PCI_QUIRK(0x1043, 0x1264, "ASUS UM5606KA", ALC294_FIXUP_BASS_SPEAKER_15), 11033 11028 SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402ZA", ALC245_FIXUP_CS35L41_SPI_2), 11029 + SND_PCI_QUIRK(0x1043, 0x1e10, "ASUS VivoBook X507UAR", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 11034 11030 SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502), 11035 11031 SND_PCI_QUIRK(0x1043, 0x1e12, "ASUS UM3402", ALC287_FIXUP_CS35L41_I2C_2), 11036 11032 SND_PCI_QUIRK(0x1043, 0x1e1f, "ASUS Vivobook 15 X1504VAP", ALC2XX_FIXUP_HEADSET_MIC), ··· 11141 11135 SND_PCI_QUIRK(0x1558, 0x14a1, "Clevo L141MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11142 11136 SND_PCI_QUIRK(0x1558, 0x2624, "Clevo L240TU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11143 11137 SND_PCI_QUIRK(0x1558, 0x28c1, "Clevo V370VND", ALC2XX_FIXUP_HEADSET_MIC), 11138 + SND_PCI_QUIRK(0x1558, 0x35a1, "Clevo V3[56]0EN[CDE]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11139 + SND_PCI_QUIRK(0x1558, 0x35b1, "Clevo V3[57]0WN[MNP]Q", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11144 11140 SND_PCI_QUIRK(0x1558, 0x4018, "Clevo NV40M[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11145 11141 SND_PCI_QUIRK(0x1558, 0x4019, "Clevo NV40MZ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11146 11142 SND_PCI_QUIRK(0x1558, 0x4020, "Clevo NV40MB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 11170 11162 SND_PCI_QUIRK(0x1558, 0x51b1, "Clevo NS50AU", 
ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11171 11163 SND_PCI_QUIRK(0x1558, 0x51b3, "Clevo NS70AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11172 11164 SND_PCI_QUIRK(0x1558, 0x5630, "Clevo NP50RNJS", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11165 + SND_PCI_QUIRK(0x1558, 0x5700, "Clevo X560WN[RST]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11173 11166 SND_PCI_QUIRK(0x1558, 0x70a1, "Clevo NB70T[HJK]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11174 11167 SND_PCI_QUIRK(0x1558, 0x70b3, "Clevo NK70SB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11175 11168 SND_PCI_QUIRK(0x1558, 0x70f2, "Clevo NH79EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 11210 11201 SND_PCI_QUIRK(0x1558, 0xa650, "Clevo NP[567]0SN[CD]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11211 11202 SND_PCI_QUIRK(0x1558, 0xa671, "Clevo NP70SN[CDE]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11212 11203 SND_PCI_QUIRK(0x1558, 0xa741, "Clevo V54x_6x_TNE", ALC245_FIXUP_CLEVO_NOISY_MIC), 11204 + SND_PCI_QUIRK(0x1558, 0xa743, "Clevo V54x_6x_TU", ALC245_FIXUP_CLEVO_NOISY_MIC), 11213 11205 SND_PCI_QUIRK(0x1558, 0xa763, "Clevo V54x_6x_TU", ALC245_FIXUP_CLEVO_NOISY_MIC), 11214 11206 SND_PCI_QUIRK(0x1558, 0xb018, "Clevo NP50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11215 11207 SND_PCI_QUIRK(0x1558, 0xb019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+4
sound/soc/amd/ps/acp63.h
··· 334 334 * @addr: pci ioremap address 335 335 * @reg_range: ACP reigister range 336 336 * @acp_rev: ACP PCI revision id 337 + * @acp_sw_pad_keeper_en: store acp SoundWire pad keeper enable register value 338 + * @acp_pad_pulldown_ctrl: store acp pad pulldown control register value 337 339 * @acp63_sdw0-dma_intr_stat: DMA interrupt status array for ACP6.3 platform SoundWire 338 340 * manager-SW0 instance 339 341 * @acp63_sdw_dma_intr_stat: DMA interrupt status array for ACP6.3 platform SoundWire ··· 369 367 u32 addr; 370 368 u32 reg_range; 371 369 u32 acp_rev; 370 + u32 acp_sw_pad_keeper_en; 371 + u32 acp_pad_pulldown_ctrl; 372 372 u16 acp63_sdw0_dma_intr_stat[ACP63_SDW0_DMA_MAX_STREAMS]; 373 373 u16 acp63_sdw1_dma_intr_stat[ACP63_SDW1_DMA_MAX_STREAMS]; 374 374 u16 acp70_sdw0_dma_intr_stat[ACP70_SDW0_DMA_MAX_STREAMS];
+18
sound/soc/amd/ps/ps-common.c
··· 160 160 161 161 adata = dev_get_drvdata(dev); 162 162 if (adata->is_sdw_dev) { 163 + adata->acp_sw_pad_keeper_en = readl(adata->acp63_base + ACP_SW0_PAD_KEEPER_EN); 164 + adata->acp_pad_pulldown_ctrl = readl(adata->acp63_base + ACP_PAD_PULLDOWN_CTRL); 163 165 adata->sdw_en_stat = check_acp_sdw_enable_status(adata); 164 166 if (adata->sdw_en_stat) { 165 167 writel(1, adata->acp63_base + ACP_ZSC_DSP_CTRL); ··· 199 197 static int __maybe_unused snd_acp63_resume(struct device *dev) 200 198 { 201 199 struct acp63_dev_data *adata; 200 + u32 acp_sw_pad_keeper_en; 202 201 int ret; 203 202 204 203 adata = dev_get_drvdata(dev); ··· 212 209 if (ret) 213 210 dev_err(dev, "ACP init failed\n"); 214 211 212 + acp_sw_pad_keeper_en = readl(adata->acp63_base + ACP_SW0_PAD_KEEPER_EN); 213 + dev_dbg(dev, "ACP_SW0_PAD_KEEPER_EN:0x%x\n", acp_sw_pad_keeper_en); 214 + if (!acp_sw_pad_keeper_en) { 215 + writel(adata->acp_sw_pad_keeper_en, adata->acp63_base + ACP_SW0_PAD_KEEPER_EN); 216 + writel(adata->acp_pad_pulldown_ctrl, adata->acp63_base + ACP_PAD_PULLDOWN_CTRL); 217 + } 215 218 return ret; 216 219 } 217 220 ··· 417 408 418 409 adata = dev_get_drvdata(dev); 419 410 if (adata->is_sdw_dev) { 411 + adata->acp_sw_pad_keeper_en = readl(adata->acp63_base + ACP_SW0_PAD_KEEPER_EN); 412 + adata->acp_pad_pulldown_ctrl = readl(adata->acp63_base + ACP_PAD_PULLDOWN_CTRL); 420 413 adata->sdw_en_stat = check_acp_sdw_enable_status(adata); 421 414 if (adata->sdw_en_stat) { 422 415 writel(1, adata->acp63_base + ACP_ZSC_DSP_CTRL); ··· 456 445 static int __maybe_unused snd_acp70_resume(struct device *dev) 457 446 { 458 447 struct acp63_dev_data *adata; 448 + u32 acp_sw_pad_keeper_en; 459 449 int ret; 460 450 461 451 adata = dev_get_drvdata(dev); ··· 471 459 if (ret) 472 460 dev_err(dev, "ACP init failed\n"); 473 461 462 + acp_sw_pad_keeper_en = readl(adata->acp63_base + ACP_SW0_PAD_KEEPER_EN); 463 + dev_dbg(dev, "ACP_SW0_PAD_KEEPER_EN:0x%x\n", acp_sw_pad_keeper_en); 464 + if (!acp_sw_pad_keeper_en) { 
465 + writel(adata->acp_sw_pad_keeper_en, adata->acp63_base + ACP_SW0_PAD_KEEPER_EN); 466 + writel(adata->acp_pad_pulldown_ctrl, adata->acp63_base + ACP_PAD_PULLDOWN_CTRL); 467 + } 474 468 return ret; 475 469 } 476 470
+14
sound/soc/amd/yc/acp6x-mach.c
··· 356 356 { 357 357 .driver_data = &acp6x_card, 358 358 .matches = { 359 + DMI_MATCH(DMI_BOARD_VENDOR, "RB"), 360 + DMI_MATCH(DMI_PRODUCT_NAME, "Nitro ANV15-41"), 361 + } 362 + }, 363 + { 364 + .driver_data = &acp6x_card, 365 + .matches = { 359 366 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 360 367 DMI_MATCH(DMI_PRODUCT_NAME, "83J2"), 368 + } 369 + }, 370 + { 371 + .driver_data = &acp6x_card, 372 + .matches = { 373 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 374 + DMI_MATCH(DMI_PRODUCT_NAME, "83J3"), 361 375 } 362 376 }, 363 377 {
+19 -4
sound/soc/codecs/rt721-sdca.c
··· 430 430 unsigned int read_l, read_r, ctl_l = 0, ctl_r = 0; 431 431 unsigned int adc_vol_flag = 0; 432 432 const unsigned int interval_offset = 0xc0; 433 + const unsigned int tendA = 0x200; 433 434 const unsigned int tendB = 0xa00; 434 435 435 436 if (strstr(ucontrol->id.name, "FU1E Capture Volume") || ··· 440 439 regmap_read(rt721->mbq_regmap, mc->reg, &read_l); 441 440 regmap_read(rt721->mbq_regmap, mc->rreg, &read_r); 442 441 443 - if (mc->shift == 8) /* boost gain */ 442 + if (mc->shift == 8) { 443 + /* boost gain */ 444 444 ctl_l = read_l / tendB; 445 - else { 445 + } else if (mc->shift == 1) { 446 + /* FU33 boost gain */ 447 + if (read_l == 0x8000 || read_l == 0xfe00) 448 + ctl_l = 0; 449 + else 450 + ctl_l = read_l / tendA + 1; 451 + } else { 446 452 if (adc_vol_flag) 447 453 ctl_l = mc->max - (((0x1e00 - read_l) & 0xffff) / interval_offset); 448 454 else ··· 457 449 } 458 450 459 451 if (read_l != read_r) { 460 - if (mc->shift == 8) /* boost gain */ 452 + if (mc->shift == 8) { 453 + /* boost gain */ 461 454 ctl_r = read_r / tendB; 462 - else { /* ADC/DAC gain */ 455 + } else if (mc->shift == 1) { 456 + /* FU33 boost gain */ 457 + if (read_r == 0x8000 || read_r == 0xfe00) 458 + ctl_r = 0; 459 + else 460 + ctl_r = read_r / tendA + 1; 461 + } else { /* ADC/DAC gain */ 463 462 if (adc_vol_flag) 464 463 ctl_r = mc->max - (((0x1e00 - read_r) & 0xffff) / interval_offset); 465 464 else
+1
sound/soc/qcom/Kconfig
··· 186 186 tristate "SoC Machine driver for SM8250 boards" 187 187 depends on QCOM_APR && SOUNDWIRE 188 188 depends on COMMON_CLK 189 + depends on SND_SOC_QCOM_OFFLOAD_UTILS || !SND_SOC_QCOM_OFFLOAD_UTILS 189 190 select SND_SOC_QDSP6 190 191 select SND_SOC_QCOM_COMMON 191 192 select SND_SOC_QCOM_SDW
+3 -3
sound/soc/sof/intel/hda.c
··· 1257 1257 return 0; 1258 1258 } 1259 1259 1260 - static char *remove_file_ext(const char *tplg_filename) 1260 + static char *remove_file_ext(struct device *dev, const char *tplg_filename) 1261 1261 { 1262 1262 char *filename, *tmp; 1263 1263 1264 - filename = kstrdup(tplg_filename, GFP_KERNEL); 1264 + filename = devm_kstrdup(dev, tplg_filename, GFP_KERNEL); 1265 1265 if (!filename) 1266 1266 return NULL; 1267 1267 ··· 1345 1345 */ 1346 1346 if (!sof_pdata->tplg_filename) { 1347 1347 /* remove file extension if it exists */ 1348 - tplg_filename = remove_file_ext(mach->sof_tplg_filename); 1348 + tplg_filename = remove_file_ext(sdev->dev, mach->sof_tplg_filename); 1349 1349 if (!tplg_filename) 1350 1350 return NULL; 1351 1351
+8 -8
sound/usb/qcom/qc_audio_offload.c
··· 759 759 subs = find_substream(pcm_card_num, info->pcm_dev_num, 760 760 info->direction); 761 761 if (!subs || !chip || atomic_read(&chip->shutdown)) { 762 - dev_err(&subs->dev->dev, 762 + dev_err(&uadev[idx].udev->dev, 763 763 "no sub for c#%u dev#%u dir%u\n", 764 764 info->pcm_card_num, 765 765 info->pcm_dev_num, ··· 1360 1360 1361 1361 if (!uadev[card_num].ctrl_intf) { 1362 1362 dev_err(&subs->dev->dev, "audio ctrl intf info not cached\n"); 1363 - ret = -ENODEV; 1364 - goto err; 1363 + return -ENODEV; 1365 1364 } 1366 1365 1367 1366 ret = uaudio_populate_uac_desc(subs, resp); 1368 1367 if (ret < 0) 1369 - goto err; 1368 + return ret; 1370 1369 1371 1370 resp->slot_id = subs->dev->slot_id; 1372 1371 resp->slot_id_valid = 1; 1373 1372 1374 1373 data = snd_soc_usb_find_priv_data(uaudio_qdev->auxdev->dev.parent); 1375 - if (!data) 1376 - goto err; 1374 + if (!data) { 1375 + dev_err(&subs->dev->dev, "No private data found\n"); 1376 + return -ENODEV; 1377 + } 1377 1378 1378 1379 uaudio_qdev->data = data; 1379 1380 ··· 1383 1382 &resp->xhci_mem_info.tr_data, 1384 1383 &resp->std_as_data_ep_desc); 1385 1384 if (ret < 0) 1386 - goto err; 1385 + return ret; 1387 1386 1388 1387 resp->std_as_data_ep_desc_valid = 1; 1389 1388 ··· 1501 1500 xhci_sideband_remove_endpoint(uadev[card_num].sb, 1502 1501 usb_pipe_endpoint(subs->dev, subs->data_endpoint->pipe)); 1503 1502 1504 - err: 1505 1503 return ret; 1506 1504 } 1507 1505
+2
sound/usb/stream.c
··· 987 987 * and request Cluster Descriptor 988 988 */ 989 989 wLength = le16_to_cpu(hc_header.wLength); 990 + if (wLength < sizeof(cluster)) 991 + return NULL; 990 992 cluster = kzalloc(wLength, GFP_KERNEL); 991 993 if (!cluster) 992 994 return ERR_PTR(-ENOMEM);
+2 -2
tools/arch/loongarch/include/asm/orc_types.h
··· 34 34 #define ORC_TYPE_REGS 3 35 35 #define ORC_TYPE_REGS_PARTIAL 4 36 36 37 - #ifndef __ASSEMBLY__ 37 + #ifndef __ASSEMBLER__ 38 38 /* 39 39 * This struct is more or less a vastly simplified version of the DWARF Call 40 40 * Frame Information standard. It contains only the necessary parts of DWARF ··· 53 53 unsigned int type:3; 54 54 unsigned int signal:1; 55 55 }; 56 - #endif /* __ASSEMBLY__ */ 56 + #endif /* __ASSEMBLER__ */ 57 57 58 58 #endif /* _ORC_TYPES_H */
+29 -11
tools/testing/selftests/iommu/iommufd.c
··· 54 54 55 55 mfd_buffer = memfd_mmap(BUFFER_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, 56 56 &mfd); 57 + assert(mfd_buffer != MAP_FAILED); 58 + assert(mfd > 0); 57 59 } 58 60 59 61 FIXTURE(iommufd) ··· 1748 1746 unsigned int end; 1749 1747 uint8_t *buf; 1750 1748 int prot = PROT_READ | PROT_WRITE; 1751 - int mfd; 1749 + int mfd = -1; 1752 1750 1753 1751 if (variant->file) 1754 1752 buf = memfd_mmap(buf_size, prot, MAP_SHARED, &mfd); 1755 1753 else 1756 1754 buf = mmap(0, buf_size, prot, self->mmap_flags, -1, 0); 1757 1755 ASSERT_NE(MAP_FAILED, buf); 1756 + if (variant->file) 1757 + ASSERT_GT(mfd, 0); 1758 1758 check_refs(buf, buf_size, 0); 1759 1759 1760 1760 /* ··· 1802 1798 unsigned int end; 1803 1799 uint8_t *buf; 1804 1800 int prot = PROT_READ | PROT_WRITE; 1805 - int mfd; 1801 + int mfd = -1; 1806 1802 1807 1803 if (variant->file) 1808 1804 buf = memfd_mmap(buf_size, prot, MAP_SHARED, &mfd); 1809 1805 else 1810 1806 buf = mmap(0, buf_size, prot, self->mmap_flags, -1, 0); 1811 1807 ASSERT_NE(MAP_FAILED, buf); 1808 + if (variant->file) 1809 + ASSERT_GT(mfd, 0); 1812 1810 check_refs(buf, buf_size, 0); 1813 1811 1814 1812 /* ··· 2014 2008 2015 2009 FIXTURE_SETUP(iommufd_dirty_tracking) 2016 2010 { 2011 + size_t mmap_buffer_size; 2017 2012 unsigned long size; 2018 2013 int mmap_flags; 2019 2014 void *vrc; ··· 2029 2022 self->fd = open("/dev/iommu", O_RDWR); 2030 2023 ASSERT_NE(-1, self->fd); 2031 2024 2032 - rc = posix_memalign(&self->buffer, HUGEPAGE_SIZE, variant->buffer_size); 2033 - if (rc || !self->buffer) { 2034 - SKIP(return, "Skipping buffer_size=%lu due to errno=%d", 2035 - variant->buffer_size, rc); 2036 - } 2037 - 2038 2025 mmap_flags = MAP_SHARED | MAP_ANONYMOUS | MAP_FIXED; 2026 + mmap_buffer_size = variant->buffer_size; 2039 2027 if (variant->hugepages) { 2040 2028 /* 2041 2029 * MAP_POPULATE will cause the kernel to fail mmap if THPs are 2042 2030 * not available. 
2043 2031 */ 2044 2032 mmap_flags |= MAP_HUGETLB | MAP_POPULATE; 2033 + 2034 + /* 2035 + * Allocation must be aligned to the HUGEPAGE_SIZE, because the 2036 + * following mmap() will automatically align the length to be a 2037 + * multiple of the underlying huge page size. Failing to do the 2038 + * same at this allocation will result in a memory overwrite by 2039 + * the mmap(). 2040 + */ 2041 + if (mmap_buffer_size < HUGEPAGE_SIZE) 2042 + mmap_buffer_size = HUGEPAGE_SIZE; 2043 + } 2044 + 2045 + rc = posix_memalign(&self->buffer, HUGEPAGE_SIZE, mmap_buffer_size); 2046 + if (rc || !self->buffer) { 2047 + SKIP(return, "Skipping buffer_size=%lu due to errno=%d", 2048 + mmap_buffer_size, rc); 2045 2049 } 2046 2050 assert((uintptr_t)self->buffer % HUGEPAGE_SIZE == 0); 2047 - vrc = mmap(self->buffer, variant->buffer_size, PROT_READ | PROT_WRITE, 2051 + vrc = mmap(self->buffer, mmap_buffer_size, PROT_READ | PROT_WRITE, 2048 2052 mmap_flags, -1, 0); 2049 2053 assert(vrc == self->buffer); 2050 2054 ··· 2084 2066 2085 2067 FIXTURE_TEARDOWN(iommufd_dirty_tracking) 2086 2068 { 2087 - munmap(self->buffer, variant->buffer_size); 2088 - munmap(self->bitmap, DIV_ROUND_UP(self->bitmap_size, BITS_PER_BYTE)); 2069 + free(self->buffer); 2070 + free(self->bitmap); 2089 2071 teardown_iommufd(self->fd, _metadata); 2090 2072 } 2091 2073
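The fixture hunks above make two related corrections: the backing allocation is rounded up to a full huge page before a `MAP_HUGETLB` mmap() is placed over it, and teardown uses `free()` to match `posix_memalign()` instead of `munmap()`. A hedged userspace sketch of both points; `HIGH`-level names and the 2 MiB `HUGEPAGE_SIZE` value are illustrative:

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative value; the selftest derives this from the system. */
#define HUGEPAGE_SIZE (2UL * 1024 * 1024)

/* A MAP_HUGETLB mmap() rounds its length up to the huge page size, so the
 * backing allocation must be at least that large or the mapping scribbles
 * past the end of it. */
static size_t hugetlb_safe_size(size_t buffer_size)
{
	return buffer_size < HUGEPAGE_SIZE ? HUGEPAGE_SIZE : buffer_size;
}

/* The caller must release this with free(): free() pairs with
 * posix_memalign(), munmap() does not. */
static void *alloc_dirty_tracking_buffer(size_t buffer_size)
{
	void *buffer = NULL;

	if (posix_memalign(&buffer, HUGEPAGE_SIZE,
			   hugetlb_safe_size(buffer_size)))
		return NULL;
	return buffer;
}
```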
+7 -2
tools/testing/selftests/iommu/iommufd_utils.h
··· 60 60 { 61 61 int mfd_flags = (flags & MAP_HUGETLB) ? MFD_HUGETLB : 0; 62 62 int mfd = memfd_create("buffer", mfd_flags); 63 + void *buf = MAP_FAILED; 63 64 64 65 if (mfd <= 0) 65 66 return MAP_FAILED; 66 67 if (ftruncate(mfd, length)) 67 - return MAP_FAILED; 68 + goto out; 68 69 *mfd_p = mfd; 69 - return mmap(0, length, prot, flags, mfd, 0); 70 + buf = mmap(0, length, prot, flags, mfd, 0); 71 + out: 72 + if (buf == MAP_FAILED) 73 + close(mfd); 74 + return buf; 70 75 } 71 76 72 77 /*
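The hunk above closes the memfd on every failure path after `memfd_create()` succeeds, rather than leaking it when `ftruncate()` or `mmap()` fails. The fixed helper is small enough to reproduce as a self-contained sketch (Linux-only; `memfd_create()` needs `_GNU_SOURCE` and glibc 2.27+):

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>

/* Standalone version of the fixed memfd_mmap() above: any exit that does
 * not hand a valid mapping back to the caller closes the fd first. */
static void *memfd_mmap(size_t length, int prot, int flags, int *mfd_p)
{
	int mfd_flags = (flags & MAP_HUGETLB) ? MFD_HUGETLB : 0;
	int mfd = memfd_create("buffer", mfd_flags);
	void *buf = MAP_FAILED;

	if (mfd <= 0)
		return MAP_FAILED;
	if (ftruncate(mfd, length))
		goto out;
	*mfd_p = mfd;
	buf = mmap(0, length, prot, flags, mfd, 0);
out:
	if (buf == MAP_FAILED)
		close(mfd);	/* do not leak the memfd on error */
	return buf;
}
```

Note the single `out:` label serving both the `ftruncate()` and `mmap()` failures: `buf` is initialized to `MAP_FAILED`, so falling through from a failed `ftruncate()` also triggers the `close()`.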
+5 -2
tools/testing/selftests/mm/virtual_address_range.c
··· 77 77 { 78 78 unsigned long addr = (unsigned long) ptr; 79 79 80 - if (high_addr && addr < HIGH_ADDR_MARK) 81 - ksft_exit_fail_msg("Bad address %lx\n", addr); 80 + if (high_addr) { 81 + if (addr < HIGH_ADDR_MARK) 82 + ksft_exit_fail_msg("Bad address %lx\n", addr); 83 + return; 84 + } 82 85 83 86 if (addr > HIGH_ADDR_MARK) 84 87 ksft_exit_fail_msg("Bad address %lx\n", addr);
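The hunk above adds an early `return` so that a high-VA allocation is only checked against the lower bound; previously a legitimate high address fell through into the `addr > HIGH_ADDR_MARK` failure path as well. The predicate can be sketched standalone (the 47-bit mark is illustrative; it is arch-dependent in the selftest):

```c
#include <stdbool.h>

#define HIGH_ADDR_MARK (1UL << 47)	/* illustrative; arch-dependent */

/* Mirrors the fixed validate_addr(): a high-VA allocation must land at or
 * above the mark, a default allocation at or below it, and the two checks
 * are mutually exclusive rather than falling through. */
static bool addr_is_valid(unsigned long addr, int high_addr)
{
	if (high_addr)
		return addr >= HIGH_ADDR_MARK;
	return addr <= HIGH_ADDR_MARK;
}
```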
+3 -2
tools/testing/selftests/ublk/test_stress_03.sh
··· 32 32 ublk_io_and_remove 8G -t null -q 4 -z & 33 33 ublk_io_and_remove 256M -t loop -q 4 -z "${UBLK_BACKFILES[0]}" & 34 34 ublk_io_and_remove 256M -t stripe -q 4 -z "${UBLK_BACKFILES[1]}" "${UBLK_BACKFILES[2]}" & 35 + wait 35 36 36 37 if _have_feature "AUTO_BUF_REG"; then 37 38 ublk_io_and_remove 8G -t null -q 4 --auto_zc & 38 39 ublk_io_and_remove 256M -t loop -q 4 --auto_zc "${UBLK_BACKFILES[0]}" & 39 40 ublk_io_and_remove 256M -t stripe -q 4 --auto_zc "${UBLK_BACKFILES[1]}" "${UBLK_BACKFILES[2]}" & 40 41 ublk_io_and_remove 8G -t null -q 4 -z --auto_zc --auto_zc_fallback & 42 + wait 41 43 fi 42 - wait 43 44 44 45 if _have_feature "PER_IO_DAEMON"; then 45 46 ublk_io_and_remove 8G -t null -q 4 --auto_zc --nthreads 8 --per_io_tasks & 46 47 ublk_io_and_remove 256M -t loop -q 4 --auto_zc --nthreads 8 --per_io_tasks "${UBLK_BACKFILES[0]}" & 47 48 ublk_io_and_remove 256M -t stripe -q 4 --auto_zc --nthreads 8 --per_io_tasks "${UBLK_BACKFILES[1]}" "${UBLK_BACKFILES[2]}" & 48 49 ublk_io_and_remove 8G -t null -q 4 -z --auto_zc --auto_zc_fallback --nthreads 8 --per_io_tasks & 50 + wait 49 51 fi 50 - wait 51 52 52 53 _cleanup_test "stress" 53 54 _show_result $TID $ERR_CODE
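The script change above moves `wait` inside each feature block, so every batch of background `ublk_io_and_remove` jobs is reaped before the next batch launches, instead of one trailing `wait` letting all batches overlap. The batching pattern in isolation, with `sleep` standing in for the ublk jobs:

```shell
#!/bin/sh
# Each batch of background jobs gets its own `wait`, so the batches run
# one after another rather than all concurrently.
run_batch() {
	sleep 0.1 &
	sleep 0.1 &
	wait	# reap this batch before the caller starts the next one
}

run_batch
run_batch
echo "all batches done"
```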