Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge v6.7-rc3 into drm-next

Thomas Zimmermann needs 8d6ef26501 ("drm/ast: Disconnect BMC if
physical connector is connected") for further ast work in -next.

Minor conflicts in ivpu between 3de6d9597892 ("accel/ivpu: Pass D0i3
residency time to the VPU firmware") and 3f7c0634926d
("accel/ivpu/37xx: Fix hangs related to MMIO reset") changing adjacent
lines.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

+2825 -3938
+2 -2
Documentation/arch/loongarch/introduction.rst
··· 375 375 376 376 Documentation of LoongArch ISA: 377 377 378 - https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.02-CN.pdf (in Chinese) 378 + https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.10-CN.pdf (in Chinese) 379 379 380 - https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.02-EN.pdf (in English) 380 + https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.10-EN.pdf (in English) 381 381 382 382 Documentation of LoongArch ELF psABI: 383 383
+6 -1
Documentation/devicetree/bindings/usb/microchip,usb5744.yaml
··· 36 36 37 37 vdd-supply: 38 38 description: 39 - VDD power supply to the hub 39 + 3V3 power supply to the hub 40 + 41 + vdd2-supply: 42 + description: 43 + 1V2 power supply to the hub 40 44 41 45 peer-hub: 42 46 $ref: /schemas/types.yaml#/definitions/phandle ··· 66 62 properties: 67 63 reset-gpios: false 68 64 vdd-supply: false 65 + vdd2-supply: false 69 66 peer-hub: false 70 67 i2c-bus: false 71 68 else:
+2 -2
Documentation/devicetree/bindings/usb/qcom,dwc3.yaml
··· 521 521 522 522 interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>, 523 523 <GIC_SPI 486 IRQ_TYPE_LEVEL_HIGH>, 524 - <GIC_SPI 488 IRQ_TYPE_LEVEL_HIGH>, 525 - <GIC_SPI 489 IRQ_TYPE_LEVEL_HIGH>; 524 + <GIC_SPI 488 IRQ_TYPE_EDGE_BOTH>, 525 + <GIC_SPI 489 IRQ_TYPE_EDGE_BOTH>; 526 526 interrupt-names = "hs_phy_irq", "ss_phy_irq", 527 527 "dm_hs_phy_irq", "dp_hs_phy_irq"; 528 528
+1 -1
Documentation/devicetree/bindings/usb/usb-hcd.yaml
··· 41 41 - | 42 42 usb { 43 43 phys = <&usb2_phy1>, <&usb3_phy1>; 44 - phy-names = "usb"; 44 + phy-names = "usb2", "usb3"; 45 45 #address-cells = <1>; 46 46 #size-cells = <0>; 47 47
+4
Documentation/filesystems/erofs.rst
··· 91 91 92 92 - git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git 93 93 94 + For more information, please also refer to the documentation site: 95 + 96 + - https://erofs.docs.kernel.org 97 + 94 98 Bugs and patches are welcome, please kindly help us and send to the following 95 99 linux-erofs mailing list: 96 100
+17 -3
Documentation/process/maintainer-netdev.rst
··· 193 193 Generally speaking, the patches get triaged quickly (in less than 194 194 48h). But be patient, if your patch is active in patchwork (i.e. it's 195 195 listed on the project's patch list) the chances it was missed are close to zero. 196 - Asking the maintainer for status updates on your 197 - patch is a good way to ensure your patch is ignored or pushed to the 198 - bottom of the priority list. 196 + 197 + The high volume of development on netdev makes reviewers move on 198 + from discussions relatively quickly. New comments and replies 199 + are very unlikely to arrive after a week of silence. If a patch 200 + is no longer active in patchwork and the thread went idle for more 201 + than a week - clarify the next steps and/or post the next version. 202 + 203 + For RFC postings specifically, if nobody responded in a week - reviewers 204 + either missed the posting or have no strong opinions. If the code is ready, 205 + repost as a PATCH. 206 + 207 + Emails saying just "ping" or "bump" are considered rude. If you can't figure 208 + out the status of the patch from patchwork or where the discussion has 209 + landed - describe your best guess and ask if it's correct. For example:: 210 + 211 + I don't understand what the next steps are. Person X seems to be unhappy 212 + with A, should I do B and repost the patches? 199 213 200 214 .. _Changes requested: 201 215
+2 -2
Documentation/translations/zh_CN/arch/loongarch/introduction.rst
··· 338 338 339 339 LoongArch指令集架构的文档: 340 340 341 - https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.02-CN.pdf (中文版) 341 + https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.10-CN.pdf (中文版) 342 342 343 - https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.02-EN.pdf (英文版) 343 + https://github.com/loongson/LoongArch-Documentation/releases/latest/download/LoongArch-Vol1-v1.10-EN.pdf (英文版) 344 344 345 345 LoongArch的ELF psABI文档: 346 346
+5 -4
MAINTAINERS
··· 7852 7852 R: Jeffle Xu <jefflexu@linux.alibaba.com> 7853 7853 L: linux-erofs@lists.ozlabs.org 7854 7854 S: Maintained 7855 + W: https://erofs.docs.kernel.org 7855 7856 T: git git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs.git 7856 7857 F: Documentation/ABI/testing/sysfs-fs-erofs 7857 7858 F: Documentation/filesystems/erofs.rst ··· 11032 11031 11033 11032 INTEL WMI SLIM BOOTLOADER (SBL) FIRMWARE UPDATE DRIVER 11034 11033 M: Jithu Joseph <jithu.joseph@intel.com> 11035 - R: Maurice Ma <maurice.ma@intel.com> 11036 11034 S: Maintained 11037 11035 W: https://slimbootloader.github.io/security/firmware-update.html 11038 11036 F: drivers/platform/x86/intel/wmi/sbl-fw-update.c ··· 13785 13785 MELLANOX HARDWARE PLATFORM SUPPORT 13786 13786 M: Hans de Goede <hdegoede@redhat.com> 13787 13787 M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> 13788 - M: Mark Gross <markgross@kernel.org> 13789 13788 M: Vadim Pasternak <vadimp@nvidia.com> 13790 13789 L: platform-driver-x86@vger.kernel.org 13791 13790 S: Supported ··· 14393 14394 MICROSOFT SURFACE HARDWARE PLATFORM SUPPORT 14394 14395 M: Hans de Goede <hdegoede@redhat.com> 14395 14396 M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> 14396 - M: Mark Gross <markgross@kernel.org> 14397 14397 M: Maximilian Luz <luzmaximilian@gmail.com> 14398 14398 L: platform-driver-x86@vger.kernel.org 14399 14399 S: Maintained ··· 14999 15001 M: Paolo Abeni <pabeni@redhat.com> 15000 15002 L: netdev@vger.kernel.org 15001 15003 S: Maintained 15004 + P: Documentation/process/maintainer-netdev.rst 15002 15005 Q: https://patchwork.kernel.org/project/netdevbpf/list/ 15003 15006 T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git 15004 15007 T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git ··· 15051 15052 M: Paolo Abeni <pabeni@redhat.com> 15052 15053 L: netdev@vger.kernel.org 15053 15054 S: Maintained 15055 + P: Documentation/process/maintainer-netdev.rst 15054 15056 Q: 
https://patchwork.kernel.org/project/netdevbpf/list/ 15055 15057 B: mailto:netdev@vger.kernel.org 15056 15058 T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git ··· 15062 15062 F: Documentation/process/maintainer-netdev.rst 15063 15063 F: Documentation/userspace-api/netlink/ 15064 15064 F: include/linux/in.h 15065 + F: include/linux/indirect_call_wrapper.h 15065 15066 F: include/linux/net.h 15066 15067 F: include/linux/netdevice.h 15067 15068 F: include/net/ ··· 22086 22085 TRACING 22087 22086 M: Steven Rostedt <rostedt@goodmis.org> 22088 22087 M: Masami Hiramatsu <mhiramat@kernel.org> 22088 + R: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> 22089 22089 L: linux-kernel@vger.kernel.org 22090 22090 L: linux-trace-kernel@vger.kernel.org 22091 22091 S: Maintained ··· 23673 23671 X86 PLATFORM DRIVERS 23674 23672 M: Hans de Goede <hdegoede@redhat.com> 23675 23673 M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> 23676 - M: Mark Gross <markgross@kernel.org> 23677 23674 L: platform-driver-x86@vger.kernel.org 23678 23675 S: Maintained 23679 23676 Q: https://patchwork.kernel.org/project/platform-driver-x86/list/
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 7 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc2 5 + EXTRAVERSION = -rc3 6 6 NAME = Hurr durr I'ma ninja sloth 7 7 8 8 # *DOCUMENTATION*
+2 -1
arch/arm/xen/enlighten.c
··· 484 484 * for secondary CPUs as they are brought up. 485 485 * For uniformity we use VCPUOP_register_vcpu_info even on cpu0. 486 486 */ 487 - xen_vcpu_info = alloc_percpu(struct vcpu_info); 487 + xen_vcpu_info = __alloc_percpu(sizeof(struct vcpu_info), 488 + 1 << fls(sizeof(struct vcpu_info) - 1)); 488 489 if (xen_vcpu_info == NULL) 489 490 return -ENOMEM; 490 491
+1 -1
arch/arm64/Makefile
··· 158 158 159 159 all: $(notdir $(KBUILD_IMAGE)) 160 160 161 - 161 + vmlinuz.efi: Image 162 162 Image vmlinuz.efi: vmlinux 163 163 $(Q)$(MAKE) $(build)=$(boot) $(boot)/$@ 164 164
+15 -2
arch/arm64/include/asm/setup.h
··· 21 21 extern bool rodata_enabled; 22 22 extern bool rodata_full; 23 23 24 - if (arg && !strcmp(arg, "full")) { 24 + if (!arg) 25 + return false; 26 + 27 + if (!strcmp(arg, "full")) { 28 + rodata_enabled = rodata_full = true; 29 + return true; 30 + } 31 + 32 + if (!strcmp(arg, "off")) { 33 + rodata_enabled = rodata_full = false; 34 + return true; 35 + } 36 + 37 + if (!strcmp(arg, "on")) { 25 38 rodata_enabled = true; 26 - rodata_full = true; 39 + rodata_full = false; 27 40 return true; 28 41 } 29 42
+3 -4
arch/arm64/mm/pageattr.c
··· 29 29 * 30 30 * KFENCE pool requires page-granular mapping if initialized late. 31 31 */ 32 - return (rodata_enabled && rodata_full) || debug_pagealloc_enabled() || 33 - arm64_kfence_can_set_direct_map(); 32 + return rodata_full || debug_pagealloc_enabled() || 33 + arm64_kfence_can_set_direct_map(); 34 34 } 35 35 36 36 static int change_page_range(pte_t *ptep, unsigned long addr, void *data) ··· 105 105 * If we are manipulating read-only permissions, apply the same 106 106 * change to the linear mapping of the pages that back this VM area. 107 107 */ 108 - if (rodata_enabled && 109 - rodata_full && (pgprot_val(set_mask) == PTE_RDONLY || 108 + if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY || 110 109 pgprot_val(clear_mask) == PTE_RDONLY)) { 111 110 for (i = 0; i < area->nr_pages; i++) { 112 111 __change_memory_common((u64)page_address(area->pages[i]),
+3
arch/loongarch/Makefile
··· 68 68 ifdef CONFIG_AS_HAS_EXPLICIT_RELOCS 69 69 cflags-y += $(call cc-option,-mexplicit-relocs) 70 70 KBUILD_CFLAGS_KERNEL += $(call cc-option,-mdirect-extern-access) 71 + KBUILD_CFLAGS_KERNEL += $(call cc-option,-fdirect-access-external-data) 71 72 KBUILD_AFLAGS_MODULE += $(call cc-option,-fno-direct-access-external-data) 72 73 KBUILD_CFLAGS_MODULE += $(call cc-option,-fno-direct-access-external-data) 73 74 KBUILD_AFLAGS_MODULE += $(call cc-option,-mno-relax) $(call cc-option,-Wa$(comma)-mno-relax) ··· 142 141 vdso-install-y += arch/loongarch/vdso/vdso.so.dbg 143 142 144 143 all: $(notdir $(KBUILD_IMAGE)) 144 + 145 + vmlinuz.efi: vmlinux.efi 145 146 146 147 vmlinux.elf vmlinux.efi vmlinuz.efi: vmlinux 147 148 $(Q)$(MAKE) $(build)=$(boot) $(bootvars-y) $(boot)/$@
+1 -2
arch/loongarch/include/asm/asmmacro.h
··· 609 609 lu32i.d \reg, 0 610 610 lu52i.d \reg, \reg, 0 611 611 .pushsection ".la_abs", "aw", %progbits 612 - 768: 613 - .dword 768b-766b 612 + .dword 766b 614 613 .dword \sym 615 614 .popsection 616 615 #endif
+5 -6
arch/loongarch/include/asm/percpu.h
··· 40 40 switch (size) { \ 41 41 case 4: \ 42 42 __asm__ __volatile__( \ 43 - "am"#asm_op".w" " %[ret], %[val], %[ptr] \n" \ 43 + "am"#asm_op".w" " %[ret], %[val], %[ptr] \n" \ 44 44 : [ret] "=&r" (ret), [ptr] "+ZB"(*(u32 *)ptr) \ 45 45 : [val] "r" (val)); \ 46 46 break; \ 47 47 case 8: \ 48 48 __asm__ __volatile__( \ 49 - "am"#asm_op".d" " %[ret], %[val], %[ptr] \n" \ 49 + "am"#asm_op".d" " %[ret], %[val], %[ptr] \n" \ 50 50 : [ret] "=&r" (ret), [ptr] "+ZB"(*(u64 *)ptr) \ 51 51 : [val] "r" (val)); \ 52 52 break; \ ··· 63 63 PERCPU_OP(or, or, |) 64 64 #undef PERCPU_OP 65 65 66 - static __always_inline unsigned long __percpu_read(void *ptr, int size) 66 + static __always_inline unsigned long __percpu_read(void __percpu *ptr, int size) 67 67 { 68 68 unsigned long ret; 69 69 ··· 100 100 return ret; 101 101 } 102 102 103 - static __always_inline void __percpu_write(void *ptr, unsigned long val, int size) 103 + static __always_inline void __percpu_write(void __percpu *ptr, unsigned long val, int size) 104 104 { 105 105 switch (size) { 106 106 case 1: ··· 132 132 } 133 133 } 134 134 135 - static __always_inline unsigned long __percpu_xchg(void *ptr, unsigned long val, 136 - int size) 135 + static __always_inline unsigned long __percpu_xchg(void *ptr, unsigned long val, int size) 137 136 { 138 137 switch (size) { 139 138 case 1:
+1 -1
arch/loongarch/include/asm/setup.h
··· 25 25 #ifdef CONFIG_RELOCATABLE 26 26 27 27 struct rela_la_abs { 28 - long offset; 28 + long pc; 29 29 long symvalue; 30 30 }; 31 31
+9 -1
arch/loongarch/kernel/relocate.c
··· 52 52 for (p = begin; (void *)p < end; p++) { 53 53 long v = p->symvalue; 54 54 uint32_t lu12iw, ori, lu32id, lu52id; 55 - union loongarch_instruction *insn = (void *)p - p->offset; 55 + union loongarch_instruction *insn = (void *)p->pc; 56 56 57 57 lu12iw = (v >> 12) & 0xfffff; 58 58 ori = v & 0xfff; ··· 101 101 102 102 return hash; 103 103 } 104 + 105 + static int __init nokaslr(char *p) 106 + { 107 + pr_info("KASLR is disabled.\n"); 108 + 109 + return 0; /* Print a notice and silence the boot warning */ 110 + } 111 + early_param("nokaslr", nokaslr); 104 112 105 113 static inline __init bool kaslr_disabled(void) 106 114 {
+11 -16
arch/loongarch/kernel/time.c
··· 58 58 return 0; 59 59 } 60 60 61 - static int constant_set_state_oneshot_stopped(struct clock_event_device *evt) 62 - { 63 - unsigned long timer_config; 64 - 65 - raw_spin_lock(&state_lock); 66 - 67 - timer_config = csr_read64(LOONGARCH_CSR_TCFG); 68 - timer_config &= ~CSR_TCFG_EN; 69 - csr_write64(timer_config, LOONGARCH_CSR_TCFG); 70 - 71 - raw_spin_unlock(&state_lock); 72 - 73 - return 0; 74 - } 75 - 76 61 static int constant_set_state_periodic(struct clock_event_device *evt) 77 62 { 78 63 unsigned long period; ··· 77 92 78 93 static int constant_set_state_shutdown(struct clock_event_device *evt) 79 94 { 95 + unsigned long timer_config; 96 + 97 + raw_spin_lock(&state_lock); 98 + 99 + timer_config = csr_read64(LOONGARCH_CSR_TCFG); 100 + timer_config &= ~CSR_TCFG_EN; 101 + csr_write64(timer_config, LOONGARCH_CSR_TCFG); 102 + 103 + raw_spin_unlock(&state_lock); 104 + 80 105 return 0; 81 106 } 82 107 ··· 156 161 cd->rating = 320; 157 162 cd->cpumask = cpumask_of(cpu); 158 163 cd->set_state_oneshot = constant_set_state_oneshot; 159 - cd->set_state_oneshot_stopped = constant_set_state_oneshot_stopped; 164 + cd->set_state_oneshot_stopped = constant_set_state_shutdown; 160 165 cd->set_state_periodic = constant_set_state_periodic; 161 166 cd->set_state_shutdown = constant_set_state_shutdown; 162 167 cd->set_next_event = constant_timer_next_event;
+2 -2
arch/loongarch/mm/pgtable.c
··· 13 13 { 14 14 return pfn_to_page(virt_to_pfn(kaddr)); 15 15 } 16 - EXPORT_SYMBOL_GPL(dmw_virt_to_page); 16 + EXPORT_SYMBOL(dmw_virt_to_page); 17 17 18 18 struct page *tlb_virt_to_page(unsigned long kaddr) 19 19 { 20 20 return pfn_to_page(pte_pfn(*virt_to_kpte(kaddr))); 21 21 } 22 - EXPORT_SYMBOL_GPL(tlb_virt_to_page); 22 + EXPORT_SYMBOL(tlb_virt_to_page); 23 23 24 24 pgd_t *pgd_alloc(struct mm_struct *mm) 25 25 {
+5 -2
arch/parisc/Kconfig
··· 115 115 default n 116 116 117 117 config GENERIC_BUG 118 - bool 119 - default y 118 + def_bool y 120 119 depends on BUG 120 + select GENERIC_BUG_RELATIVE_POINTERS if 64BIT 121 + 122 + config GENERIC_BUG_RELATIVE_POINTERS 123 + bool 121 124 122 125 config GENERIC_HWEIGHT 123 126 bool
+6 -3
arch/parisc/include/asm/alternative.h
··· 34 34 35 35 /* Alternative SMP implementation. */ 36 36 #define ALTERNATIVE(cond, replacement) "!0:" \ 37 - ".section .altinstructions, \"aw\" !" \ 37 + ".section .altinstructions, \"a\" !" \ 38 + ".align 4 !" \ 38 39 ".word (0b-4-.) !" \ 39 40 ".hword 1, " __stringify(cond) " !" \ 40 41 ".word " __stringify(replacement) " !" \ ··· 45 44 46 45 /* to replace one single instructions by a new instruction */ 47 46 #define ALTERNATIVE(from, to, cond, replacement)\ 48 - .section .altinstructions, "aw" ! \ 47 + .section .altinstructions, "a" ! \ 48 + .align 4 ! \ 49 49 .word (from - .) ! \ 50 50 .hword (to - from)/4, cond ! \ 51 51 .word replacement ! \ ··· 54 52 55 53 /* to replace multiple instructions by new code */ 56 54 #define ALTERNATIVE_CODE(from, num_instructions, cond, new_instr_ptr)\ 57 - .section .altinstructions, "aw" ! \ 55 + .section .altinstructions, "a" ! \ 56 + .align 4 ! \ 58 57 .word (from - .) ! \ 59 58 .hword -num_instructions, cond ! \ 60 59 .word (new_instr_ptr - .) ! \
+1
arch/parisc/include/asm/assembly.h
··· 574 574 */ 575 575 #define ASM_EXCEPTIONTABLE_ENTRY(fault_addr, except_addr) \ 576 576 .section __ex_table,"aw" ! \ 577 + .align 4 ! \ 577 578 .word (fault_addr - .), (except_addr - .) ! \ 578 579 .previous 579 580
+22 -16
arch/parisc/include/asm/bug.h
··· 17 17 #define PARISC_BUG_BREAK_ASM "break 0x1f, 0x1fff" 18 18 #define PARISC_BUG_BREAK_INSN 0x03ffe01f /* PARISC_BUG_BREAK_ASM */ 19 19 20 - #if defined(CONFIG_64BIT) 21 - #define ASM_WORD_INSN ".dword\t" 20 + #ifdef CONFIG_GENERIC_BUG_RELATIVE_POINTERS 21 + # define __BUG_REL(val) ".word " __stringify(val) " - ." 22 22 #else 23 - #define ASM_WORD_INSN ".word\t" 23 + # define __BUG_REL(val) ".word " __stringify(val) 24 24 #endif 25 + 25 26 26 27 #ifdef CONFIG_DEBUG_BUGVERBOSE 27 28 #define BUG() \ 28 29 do { \ 29 30 asm volatile("\n" \ 30 31 "1:\t" PARISC_BUG_BREAK_ASM "\n" \ 31 - "\t.pushsection __bug_table,\"aw\"\n" \ 32 - "2:\t" ASM_WORD_INSN "1b, %c0\n" \ 33 - "\t.short %c1, %c2\n" \ 34 - "\t.org 2b+%c3\n" \ 32 + "\t.pushsection __bug_table,\"a\"\n" \ 33 + "\t.align 4\n" \ 34 + "2:\t" __BUG_REL(1b) "\n" \ 35 + "\t" __BUG_REL(%c0) "\n" \ 36 + "\t.short %1, %2\n" \ 37 + "\t.blockz %3-2*4-2*2\n" \ 35 38 "\t.popsection" \ 36 39 : : "i" (__FILE__), "i" (__LINE__), \ 37 - "i" (0), "i" (sizeof(struct bug_entry)) ); \ 40 + "i" (0), "i" (sizeof(struct bug_entry)) ); \ 38 41 unreachable(); \ 39 42 } while(0) 40 43 ··· 54 51 do { \ 55 52 asm volatile("\n" \ 56 53 "1:\t" PARISC_BUG_BREAK_ASM "\n" \ 57 - "\t.pushsection __bug_table,\"aw\"\n" \ 58 - "2:\t" ASM_WORD_INSN "1b, %c0\n" \ 59 - "\t.short %c1, %c2\n" \ 60 - "\t.org 2b+%c3\n" \ 54 + "\t.pushsection __bug_table,\"a\"\n" \ 55 + "\t.align 4\n" \ 56 + "2:\t" __BUG_REL(1b) "\n" \ 57 + "\t" __BUG_REL(%c0) "\n" \ 58 + "\t.short %1, %2\n" \ 59 + "\t.blockz %3-2*4-2*2\n" \ 61 60 "\t.popsection" \ 62 61 : : "i" (__FILE__), "i" (__LINE__), \ 63 62 "i" (BUGFLAG_WARNING|(flags)), \ ··· 70 65 do { \ 71 66 asm volatile("\n" \ 72 67 "1:\t" PARISC_BUG_BREAK_ASM "\n" \ 73 - "\t.pushsection __bug_table,\"aw\"\n" \ 74 - "2:\t" ASM_WORD_INSN "1b\n" \ 75 - "\t.short %c0\n" \ 76 - "\t.org 2b+%c1\n" \ 68 + "\t.pushsection __bug_table,\"a\"\n" \ 69 + "\t.align %2\n" \ 70 + "2:\t" __BUG_REL(1b) "\n" \ 71 + "\t.short %0\n" \ 72 + 
"\t.blockz %1-4-2\n" \ 77 73 "\t.popsection" \ 78 74 : : "i" (BUGFLAG_WARNING|(flags)), \ 79 75 "i" (sizeof(struct bug_entry)) ); \
+6 -2
arch/parisc/include/asm/jump_label.h
··· 15 15 asm_volatile_goto("1:\n\t" 16 16 "nop\n\t" 17 17 ".pushsection __jump_table, \"aw\"\n\t" 18 + ".align %1\n\t" 18 19 ".word 1b - ., %l[l_yes] - .\n\t" 19 20 __stringify(ASM_ULONG_INSN) " %c0 - .\n\t" 20 21 ".popsection\n\t" 21 - : : "i" (&((char *)key)[branch]) : : l_yes); 22 + : : "i" (&((char *)key)[branch]), "i" (sizeof(long)) 23 + : : l_yes); 22 24 23 25 return false; 24 26 l_yes: ··· 32 30 asm_volatile_goto("1:\n\t" 33 31 "b,n %l[l_yes]\n\t" 34 32 ".pushsection __jump_table, \"aw\"\n\t" 33 + ".align %1\n\t" 35 34 ".word 1b - ., %l[l_yes] - .\n\t" 36 35 __stringify(ASM_ULONG_INSN) " %c0 - .\n\t" 37 36 ".popsection\n\t" 38 - : : "i" (&((char *)key)[branch]) : : l_yes); 37 + : : "i" (&((char *)key)[branch]), "i" (sizeof(long)) 38 + : : l_yes); 39 39 40 40 return false; 41 41 l_yes:
+1 -1
arch/parisc/include/asm/ldcw.h
··· 55 55 }) 56 56 57 57 #ifdef CONFIG_SMP 58 - # define __lock_aligned __section(".data..lock_aligned") 58 + # define __lock_aligned __section(".data..lock_aligned") __aligned(16) 59 59 #endif 60 60 61 61 #endif /* __PARISC_LDCW_H */
+1
arch/parisc/include/asm/uaccess.h
··· 41 41 42 42 #define ASM_EXCEPTIONTABLE_ENTRY( fault_addr, except_addr )\ 43 43 ".section __ex_table,\"aw\"\n" \ 44 + ".align 4\n" \ 44 45 ".word (" #fault_addr " - .), (" #except_addr " - .)\n\t" \ 45 46 ".previous\n" 46 47
-2
arch/parisc/include/uapi/asm/errno.h
··· 75 75 76 76 /* We now return you to your regularly scheduled HPUX. */ 77 77 78 - #define ENOSYM 215 /* symbol does not exist in executable */ 79 78 #define ENOTSOCK 216 /* Socket operation on non-socket */ 80 79 #define EDESTADDRREQ 217 /* Destination address required */ 81 80 #define EMSGSIZE 218 /* Message too long */ ··· 100 101 #define ETIMEDOUT 238 /* Connection timed out */ 101 102 #define ECONNREFUSED 239 /* Connection refused */ 102 103 #define EREFUSED ECONNREFUSED /* for HP's NFS apparently */ 103 - #define EREMOTERELEASE 240 /* Remote peer released connection */ 104 104 #define EHOSTDOWN 241 /* Host is down */ 105 105 #define EHOSTUNREACH 242 /* No route to host */ 106 106
+1
arch/parisc/kernel/vmlinux.lds.S
··· 130 130 RO_DATA(8) 131 131 132 132 /* unwind info */ 133 + . = ALIGN(4); 133 134 .PARISC.unwind : { 134 135 __start___unwind = .; 135 136 *(.PARISC.unwind)
-1
arch/s390/include/asm/processor.h
··· 228 228 execve_tail(); \ 229 229 } while (0) 230 230 231 - /* Forward declaration, a strange C thing */ 232 231 struct task_struct; 233 232 struct mm_struct; 234 233 struct seq_file;
+1
arch/s390/kernel/ipl.c
··· 666 666 &ipl_ccw_attr_group_lpar); 667 667 break; 668 668 case IPL_TYPE_ECKD: 669 + case IPL_TYPE_ECKD_DUMP: 669 670 rc = sysfs_create_group(&ipl_kset->kobj, &ipl_eckd_attr_group); 670 671 break; 671 672 case IPL_TYPE_FCP:
+5 -6
arch/s390/kernel/perf_pai_crypto.c
··· 279 279 if (IS_ERR(cpump)) 280 280 return PTR_ERR(cpump); 281 281 282 - /* Event initialization sets last_tag to 0. When later on the events 283 - * are deleted and re-added, do not reset the event count value to zero. 284 - * Events are added, deleted and re-added when 2 or more events 285 - * are active at the same time. 286 - */ 287 - event->hw.last_tag = 0; 288 282 event->destroy = paicrypt_event_destroy; 289 283 290 284 if (a->sample_period) { ··· 312 318 { 313 319 u64 sum; 314 320 321 + /* Event initialization sets last_tag to 0. When later on the events 322 + * are deleted and re-added, do not reset the event count value to zero. 323 + * Events are added, deleted and re-added when 2 or more events 324 + * are active at the same time. 325 + */ 315 326 if (!event->hw.last_tag) { 316 327 event->hw.last_tag = 1; 317 328 sum = paicrypt_getall(event); /* Get current value */
-1
arch/s390/kernel/perf_pai_ext.c
··· 260 260 rc = paiext_alloc(a, event); 261 261 if (rc) 262 262 return rc; 263 - event->hw.last_tag = 0; 264 263 event->destroy = paiext_event_destroy; 265 264 266 265 if (a->sample_period) {
+1 -1
arch/x86/events/intel/core.c
··· 4660 4660 if (pmu->intel_cap.pebs_output_pt_available) 4661 4661 pmu->pmu.capabilities |= PERF_PMU_CAP_AUX_OUTPUT; 4662 4662 else 4663 - pmu->pmu.capabilities |= ~PERF_PMU_CAP_AUX_OUTPUT; 4663 + pmu->pmu.capabilities &= ~PERF_PMU_CAP_AUX_OUTPUT; 4664 4664 4665 4665 intel_pmu_check_event_constraints(pmu->event_constraints, 4666 4666 pmu->num_counters,
+21 -4
arch/x86/hyperv/hv_init.c
··· 15 15 #include <linux/io.h> 16 16 #include <asm/apic.h> 17 17 #include <asm/desc.h> 18 + #include <asm/e820/api.h> 18 19 #include <asm/sev.h> 19 20 #include <asm/ibt.h> 20 21 #include <asm/hypervisor.h> ··· 287 286 288 287 static int __init hv_pci_init(void) 289 288 { 290 - int gen2vm = efi_enabled(EFI_BOOT); 289 + bool gen2vm = efi_enabled(EFI_BOOT); 291 290 292 291 /* 293 - * For Generation-2 VM, we exit from pci_arch_init() by returning 0. 294 - * The purpose is to suppress the harmless warning: 292 + * A Generation-2 VM doesn't support legacy PCI/PCIe, so both 293 + * raw_pci_ops and raw_pci_ext_ops are NULL, and pci_subsys_init() -> 294 + * pcibios_init() doesn't call pcibios_resource_survey() -> 295 + * e820__reserve_resources_late(); as a result, any emulated persistent 296 + * memory of E820_TYPE_PRAM (12) via the kernel parameter 297 + * memmap=nn[KMG]!ss is not added into iomem_resource and hence can't be 298 + * detected by register_e820_pmem(). Fix this by directly calling 299 + * e820__reserve_resources_late() here: e820__reserve_resources_late() 300 + * depends on e820__reserve_resources(), which has been called earlier 301 + * from setup_arch(). Note: e820__reserve_resources_late() also adds 302 + * any memory of E820_TYPE_PMEM (7) into iomem_resource, and 303 + * acpi_nfit_register_region() -> acpi_nfit_insert_resource() -> 304 + * region_intersects() returns REGION_INTERSECTS, so the memory of 305 + * E820_TYPE_PMEM won't get added twice. 306 + * 307 + * We return 0 here so that pci_arch_init() won't print the warning: 295 308 * "PCI: Fatal: No config space access function found" 296 309 */ 297 - if (gen2vm) 310 + if (gen2vm) { 311 + e820__reserve_resources_late(); 298 312 return 0; 313 + } 299 314 300 315 /* For Generation-1 VM, we'll proceed in pci_arch_init(). */ 301 316 return 1;
+11 -28
arch/x86/kernel/cpu/microcode/amd.c
··· 104 104 size_t size; 105 105 }; 106 106 107 - static u32 ucode_new_rev; 108 - 109 107 /* 110 108 * Microcode patch container file is prepended to the initrd in cpio 111 109 * format. See Documentation/arch/x86/microcode.rst ··· 440 442 * 441 443 * Returns true if container found (sets @desc), false otherwise. 442 444 */ 443 - static bool early_apply_microcode(u32 cpuid_1_eax, void *ucode, size_t size) 445 + static bool early_apply_microcode(u32 cpuid_1_eax, u32 old_rev, void *ucode, size_t size) 444 446 { 445 447 struct cont_desc desc = { 0 }; 446 448 struct microcode_amd *mc; 447 449 bool ret = false; 448 - u32 rev, dummy; 449 450 450 451 desc.cpuid_1_eax = cpuid_1_eax; 451 452 ··· 454 457 if (!mc) 455 458 return ret; 456 459 457 - native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy); 458 - 459 460 /* 460 461 * Allow application of the same revision to pick up SMT-specific 461 462 * changes even if the revision of the other SMT thread is already 462 463 * up-to-date. 463 464 */ 464 - if (rev > mc->hdr.patch_id) 465 + if (old_rev > mc->hdr.patch_id) 465 466 return ret; 466 467 467 - if (!__apply_microcode_amd(mc)) { 468 - ucode_new_rev = mc->hdr.patch_id; 469 - ret = true; 470 - } 471 - 472 - return ret; 468 + return !__apply_microcode_amd(mc); 473 469 } 474 470 475 471 static bool get_builtin_microcode(struct cpio_data *cp, unsigned int family) ··· 496 506 *ret = cp; 497 507 } 498 508 499 - void __init load_ucode_amd_bsp(unsigned int cpuid_1_eax) 509 + void __init load_ucode_amd_bsp(struct early_load_data *ed, unsigned int cpuid_1_eax) 500 510 { 501 511 struct cpio_data cp = { }; 512 + u32 dummy; 513 + 514 + native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->old_rev, dummy); 502 515 503 516 /* Needed in load_microcode_amd() */ 504 517 ucode_cpu_info[0].cpu_sig.sig = cpuid_1_eax; ··· 510 517 if (!(cp.data && cp.size)) 511 518 return; 512 519 513 - early_apply_microcode(cpuid_1_eax, cp.data, cp.size); 520 + if (early_apply_microcode(cpuid_1_eax, ed->old_rev, cp.data, cp.size)) 
521 + native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->new_rev, dummy); 514 522 } 515 523 516 524 static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size); ··· 619 625 rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy); 620 626 621 627 if (rev < mc->hdr.patch_id) { 622 - if (!__apply_microcode_amd(mc)) { 623 - ucode_new_rev = mc->hdr.patch_id; 624 - pr_info("reload patch_level=0x%08x\n", ucode_new_rev); 625 - } 628 + if (!__apply_microcode_amd(mc)) 629 + pr_info_once("reload revision: 0x%08x\n", mc->hdr.patch_id); 626 630 } 627 631 } 628 632 ··· 640 648 p = find_patch(cpu); 641 649 if (p && (p->patch_id == csig->rev)) 642 650 uci->mc = p->data; 643 - 644 - pr_info("CPU%d: patch_level=0x%08x\n", cpu, csig->rev); 645 651 646 652 return 0; 647 653 } ··· 680 690 681 691 rev = mc_amd->hdr.patch_id; 682 692 ret = UCODE_UPDATED; 683 - 684 - pr_info("CPU%d: new patch_level=0x%08x\n", cpu, rev); 685 693 686 694 out: 687 695 uci->cpu_sig.rev = rev; ··· 923 935 pr_warn("AMD CPU family 0x%x not supported\n", c->x86); 924 936 return NULL; 925 937 } 926 - 927 - if (ucode_new_rev) 928 - pr_info_once("microcode updated early to new patch_level=0x%08x\n", 929 - ucode_new_rev); 930 - 931 938 return &microcode_amd_ops; 932 939 } 933 940
+9 -6
arch/x86/kernel/cpu/microcode/core.c
··· 41 41 42 42 #include "internal.h" 43 43 44 - #define DRIVER_VERSION "2.2" 45 - 46 44 static struct microcode_ops *microcode_ops; 47 45 bool dis_ucode_ldr = true; 48 46 ··· 74 76 0x010000af, 75 77 0, /* T-101 terminator */ 76 78 }; 79 + 80 + struct early_load_data early_data; 77 81 78 82 /* 79 83 * Check the current patch level on this CPU. ··· 155 155 return; 156 156 157 157 if (intel) 158 - load_ucode_intel_bsp(); 158 + load_ucode_intel_bsp(&early_data); 159 159 else 160 - load_ucode_amd_bsp(cpuid_1_eax); 160 + load_ucode_amd_bsp(&early_data, cpuid_1_eax); 161 161 } 162 162 163 163 void load_ucode_ap(void) ··· 828 828 if (!microcode_ops) 829 829 return -ENODEV; 830 830 831 + pr_info_once("Current revision: 0x%08x\n", (early_data.new_rev ?: early_data.old_rev)); 832 + 833 + if (early_data.new_rev) 834 + pr_info_once("Updated early from: 0x%08x\n", early_data.old_rev); 835 + 831 836 microcode_pdev = platform_device_register_simple("microcode", -1, NULL, 0); 832 837 if (IS_ERR(microcode_pdev)) 833 838 return PTR_ERR(microcode_pdev); ··· 850 845 register_syscore_ops(&mc_syscore_ops); 851 846 cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/microcode:online", 852 847 mc_cpu_online, mc_cpu_down_prep); 853 - 854 - pr_info("Microcode Update Driver: v%s.", DRIVER_VERSION); 855 848 856 849 return 0; 857 850
+7 -10
arch/x86/kernel/cpu/microcode/intel.c
··· 339 339 static enum ucode_state apply_microcode_early(struct ucode_cpu_info *uci) 340 340 { 341 341 struct microcode_intel *mc = uci->mc; 342 - enum ucode_state ret; 343 - u32 cur_rev, date; 342 + u32 cur_rev; 344 343 345 - ret = __apply_microcode(uci, mc, &cur_rev); 346 - if (ret == UCODE_UPDATED) { 347 - date = mc->hdr.date; 348 - pr_info_once("updated early: 0x%x -> 0x%x, date = %04x-%02x-%02x\n", 349 - cur_rev, mc->hdr.rev, date & 0xffff, date >> 24, (date >> 16) & 0xff); 350 - } 351 - return ret; 344 + return __apply_microcode(uci, mc, &cur_rev); 352 345 } 353 346 354 347 static __init bool load_builtin_intel_microcode(struct cpio_data *cp) ··· 406 413 early_initcall(save_builtin_microcode); 407 414 408 415 /* Load microcode on BSP from initrd or builtin blobs */ 409 - void __init load_ucode_intel_bsp(void) 416 + void __init load_ucode_intel_bsp(struct early_load_data *ed) 410 417 { 411 418 struct ucode_cpu_info uci; 419 + 420 + ed->old_rev = intel_get_microcode_revision(); 412 421 413 422 uci.mc = get_microcode_blob(&uci, false); 414 423 if (uci.mc && apply_microcode_early(&uci) == UCODE_UPDATED) 415 424 ucode_patch_va = UCODE_BSP_LOADED; 425 + 426 + ed->new_rev = uci.cpu_sig.rev; 416 427 } 417 428 418 429 void load_ucode_intel_ap(void)
+10 -4
arch/x86/kernel/cpu/microcode/internal.h
··· 37 37 use_nmi : 1; 38 38 }; 39 39 40 + struct early_load_data { 41 + u32 old_rev; 42 + u32 new_rev; 43 + }; 44 + 45 + extern struct early_load_data early_data; 40 46 extern struct ucode_cpu_info ucode_cpu_info[]; 41 47 struct cpio_data find_microcode_in_initrd(const char *path); 42 48 ··· 98 92 extern bool force_minrev; 99 93 100 94 #ifdef CONFIG_CPU_SUP_AMD 101 - void load_ucode_amd_bsp(unsigned int family); 95 + void load_ucode_amd_bsp(struct early_load_data *ed, unsigned int family); 102 96 void load_ucode_amd_ap(unsigned int family); 103 97 int save_microcode_in_initrd_amd(unsigned int family); 104 98 void reload_ucode_amd(unsigned int cpu); 105 99 struct microcode_ops *init_amd_microcode(void); 106 100 void exit_amd_microcode(void); 107 101 #else /* CONFIG_CPU_SUP_AMD */ 108 - static inline void load_ucode_amd_bsp(unsigned int family) { } 102 + static inline void load_ucode_amd_bsp(struct early_load_data *ed, unsigned int family) { } 109 103 static inline void load_ucode_amd_ap(unsigned int family) { } 110 104 static inline int save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; } 111 105 static inline void reload_ucode_amd(unsigned int cpu) { } ··· 114 108 #endif /* !CONFIG_CPU_SUP_AMD */ 115 109 116 110 #ifdef CONFIG_CPU_SUP_INTEL 117 - void load_ucode_intel_bsp(void); 111 + void load_ucode_intel_bsp(struct early_load_data *ed); 118 112 void load_ucode_intel_ap(void); 119 113 void reload_ucode_intel(void); 120 114 struct microcode_ops *init_intel_microcode(void); 121 115 #else /* CONFIG_CPU_SUP_INTEL */ 122 - static inline void load_ucode_intel_bsp(void) { } 116 + static inline void load_ucode_intel_bsp(struct early_load_data *ed) { } 123 117 static inline void load_ucode_intel_ap(void) { } 124 118 static inline void reload_ucode_intel(void) { } 125 119 static inline struct microcode_ops *init_intel_microcode(void) { return NULL; }
+4 -1
arch/x86/kernel/cpu/mshyperv.c
··· 262 262 static int hv_nmi_unknown(unsigned int val, struct pt_regs *regs) 263 263 { 264 264 static atomic_t nmi_cpu = ATOMIC_INIT(-1); 265 + unsigned int old_cpu, this_cpu; 265 266 266 267 if (!unknown_nmi_panic) 267 268 return NMI_DONE; 268 269 269 - if (atomic_cmpxchg(&nmi_cpu, -1, raw_smp_processor_id()) != -1) 270 + old_cpu = -1; 271 + this_cpu = raw_smp_processor_id(); 272 + if (!atomic_try_cmpxchg(&nmi_cpu, &old_cpu, this_cpu)) 270 273 return NMI_HANDLED; 271 274 272 275 return NMI_DONE;
+2
block/bdev.c
··· 425 425 426 426 void bdev_add(struct block_device *bdev, dev_t dev) 427 427 { 428 + if (bdev_stable_writes(bdev)) 429 + mapping_set_stable_writes(bdev->bd_inode->i_mapping); 428 430 bdev->bd_dev = dev; 429 431 bdev->bd_inode->i_rdev = dev; 430 432 bdev->bd_inode->i_ino = dev;
+13
block/blk-cgroup.c
··· 577 577 struct request_queue *q = disk->queue; 578 578 struct blkcg_gq *blkg, *n; 579 579 int count = BLKG_DESTROY_BATCH_SIZE; 580 + int i; 580 581 581 582 restart: 582 583 spin_lock_irq(&q->queue_lock); ··· 601 600 cond_resched(); 602 601 goto restart; 603 602 } 603 + } 604 + 605 + /* 606 + * Mark policy deactivated since policy offline has been done, and 607 + * the free is scheduled, so future blkcg_deactivate_policy() can 608 + * be bypassed 609 + */ 610 + for (i = 0; i < BLKCG_MAX_POLS; i++) { 611 + struct blkcg_policy *pol = blkcg_policy[i]; 612 + 613 + if (pol) 614 + __clear_bit(pol->plid, q->blkcg_pols); 604 615 } 605 616 606 617 q->root_blkg = NULL;
-2
block/blk-cgroup.h
··· 249 249 { 250 250 struct blkcg_gq *blkg; 251 251 252 - WARN_ON_ONCE(!rcu_read_lock_held()); 253 - 254 252 if (blkcg == &blkcg_root) 255 253 return q->root_blkg; 256 254
+5 -28
block/blk-pm.c
··· 163 163 * @q: the queue of the device 164 164 * 165 165 * Description: 166 - * For historical reasons, this routine merely calls blk_set_runtime_active() 167 - * to do the real work of restarting the queue. It does this regardless of 168 - * whether the device's runtime-resume succeeded; even if it failed the 166 + * Restart the queue of a runtime suspended device. It does this regardless 167 + * of whether the device's runtime-resume succeeded; even if it failed the 169 168 * driver or error handler will need to communicate with the device. 170 169 * 171 170 * This function should be called near the end of the device's 172 - * runtime_resume callback. 171 + * runtime_resume callback to correct queue runtime PM status and re-enable 172 + * peeking requests from the queue. 173 173 */ 174 174 void blk_post_runtime_resume(struct request_queue *q) 175 - { 176 - blk_set_runtime_active(q); 177 - } 178 - EXPORT_SYMBOL(blk_post_runtime_resume); 179 - 180 - /** 181 - * blk_set_runtime_active - Force runtime status of the queue to be active 182 - * @q: the queue of the device 183 - * 184 - * If the device is left runtime suspended during system suspend the resume 185 - * hook typically resumes the device and corrects runtime status 186 - * accordingly. However, that does not affect the queue runtime PM status 187 - * which is still "suspended". This prevents processing requests from the 188 - * queue. 189 - * 190 - * This function can be used in driver's resume hook to correct queue 191 - * runtime PM status and re-enable peeking requests from the queue. It 192 - * should be called before first request is added to the queue. 193 - * 194 - * This function is also called by blk_post_runtime_resume() for 195 - * runtime resumes. It does everything necessary to restart the queue. 
196 - */ 197 - void blk_set_runtime_active(struct request_queue *q) 198 175 { 199 176 int old_status; 200 177 ··· 188 211 if (old_status != RPM_ACTIVE) 189 212 blk_clear_pm_only(q); 190 213 } 191 - EXPORT_SYMBOL(blk_set_runtime_active); 214 + EXPORT_SYMBOL(blk_post_runtime_resume);
+2
block/blk-throttle.c
··· 1320 1320 tg_bps_limit(tg, READ), tg_bps_limit(tg, WRITE), 1321 1321 tg_iops_limit(tg, READ), tg_iops_limit(tg, WRITE)); 1322 1322 1323 + rcu_read_lock(); 1323 1324 /* 1324 1325 * Update has_rules[] flags for the updated tg's subtree. A tg is 1325 1326 * considered to have rules if either the tg itself or any of its ··· 1348 1347 this_tg->latency_target = max(this_tg->latency_target, 1349 1348 parent_tg->latency_target); 1350 1349 } 1350 + rcu_read_unlock(); 1351 1351 1352 1352 /* 1353 1353 * We're already holding queue_lock and know @tg is valid. Let's
+21 -22
drivers/accel/ivpu/ivpu_hw_37xx.c
··· 504 504 return ret; 505 505 } 506 506 507 + static int ivpu_boot_pwr_domain_disable(struct ivpu_device *vdev) 508 + { 509 + ivpu_boot_dpu_active_drive(vdev, false); 510 + ivpu_boot_pwr_island_isolation_drive(vdev, true); 511 + ivpu_boot_pwr_island_trickle_drive(vdev, false); 512 + ivpu_boot_pwr_island_drive(vdev, false); 513 + 514 + return ivpu_boot_wait_for_pwr_island_status(vdev, 0x0); 515 + } 516 + 507 517 static void ivpu_boot_no_snoop_enable(struct ivpu_device *vdev) 508 518 { 509 519 u32 val = REGV_RD32(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES); ··· 612 602 613 603 static int ivpu_hw_37xx_reset(struct ivpu_device *vdev) 614 604 { 615 - int ret; 616 - u32 val; 605 + int ret = 0; 617 606 618 - if (IVPU_WA(punit_disabled)) 619 - return 0; 620 - 621 - ret = REGB_POLL_FLD(VPU_37XX_BUTTRESS_VPU_IP_RESET, TRIGGER, 0, TIMEOUT_US); 622 - if (ret) { 623 - ivpu_err(vdev, "Timed out waiting for TRIGGER bit\n"); 624 - return ret; 607 + if (ivpu_boot_pwr_domain_disable(vdev)) { 608 + ivpu_err(vdev, "Failed to disable power domain\n"); 609 + ret = -EIO; 625 610 } 626 611 627 - val = REGB_RD32(VPU_37XX_BUTTRESS_VPU_IP_RESET); 628 - val = REG_SET_FLD(VPU_37XX_BUTTRESS_VPU_IP_RESET, TRIGGER, val); 629 - REGB_WR32(VPU_37XX_BUTTRESS_VPU_IP_RESET, val); 630 - 631 - ret = REGB_POLL_FLD(VPU_37XX_BUTTRESS_VPU_IP_RESET, TRIGGER, 0, TIMEOUT_US); 632 - if (ret) 633 - ivpu_err(vdev, "Timed out waiting for RESET completion\n"); 612 + if (ivpu_pll_disable(vdev)) { 613 + ivpu_err(vdev, "Failed to disable PLL\n"); 614 + ret = -EIO; 615 + } 634 616 635 617 return ret; 636 618 } ··· 735 733 736 734 ivpu_hw_37xx_save_d0i3_entry_timestamp(vdev); 737 735 738 - if (!ivpu_hw_37xx_is_idle(vdev)) { 736 + if (!ivpu_hw_37xx_is_idle(vdev)) 739 737 ivpu_warn(vdev, "VPU not idle during power down\n"); 740 - if (ivpu_hw_37xx_reset(vdev)) 741 - ivpu_warn(vdev, "Failed to reset the VPU\n"); 742 - } 743 738 744 - if (ivpu_pll_disable(vdev)) { 745 - ivpu_err(vdev, "Failed to disable PLL\n"); 739 + if 
(ivpu_hw_37xx_reset(vdev)) { 740 + ivpu_err(vdev, "Failed to reset VPU\n"); 746 741 ret = -EIO; 747 742 } 748 743
+1 -1
drivers/acpi/acpi_video.c
··· 2031 2031 * HP ZBook Fury 16 G10 requires ACPI video's child devices have _PS0 2032 2032 * evaluated to have functional panel brightness control. 2033 2033 */ 2034 - acpi_device_fix_up_power_extended(device); 2034 + acpi_device_fix_up_power_children(device); 2035 2035 2036 2036 pr_info("%s [%s] (multi-head: %s rom: %s post: %s)\n", 2037 2037 ACPI_VIDEO_DEVICE_NAME, acpi_device_bid(device),
+13
drivers/acpi/device_pm.c
··· 397 397 } 398 398 EXPORT_SYMBOL_GPL(acpi_device_fix_up_power_extended); 399 399 400 + /** 401 + * acpi_device_fix_up_power_children - Force a device's children into D0. 402 + * @adev: Parent device object whose children's power state is to be fixed up. 403 + * 404 + * Call acpi_device_fix_up_power() for @adev's children so long as they 405 + * are reported as present and enabled. 406 + */ 407 + void acpi_device_fix_up_power_children(struct acpi_device *adev) 408 + { 409 + acpi_dev_for_each_child(adev, fix_up_power_if_applicable, NULL); 410 + } 411 + EXPORT_SYMBOL_GPL(acpi_device_fix_up_power_children); 412 + 400 413 int acpi_device_update_power(struct acpi_device *device, int *state_p) 401 414 { 402 415 int state;
+1 -1
drivers/acpi/processor_idle.c
··· 592 592 while (1) { 593 593 594 594 if (cx->entry_method == ACPI_CSTATE_HALT) 595 - safe_halt(); 595 + raw_safe_halt(); 596 596 else if (cx->entry_method == ACPI_CSTATE_SYSTEMIO) { 597 597 io_idle(cx->address); 598 598 } else
+7
drivers/acpi/resource.c
··· 448 448 }, 449 449 }, 450 450 { 451 + /* Asus ExpertBook B1402CVA */ 452 + .matches = { 453 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 454 + DMI_MATCH(DMI_BOARD_NAME, "B1402CVA"), 455 + }, 456 + }, 457 + { 451 458 /* Asus ExpertBook B1502CBA */ 452 459 .matches = { 453 460 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+3
drivers/ata/pata_isapnp.c
··· 82 82 if (pnp_port_valid(idev, 1)) { 83 83 ctl_addr = devm_ioport_map(&idev->dev, 84 84 pnp_port_start(idev, 1), 1); 85 + if (!ctl_addr) 86 + return -ENOMEM; 87 + 85 88 ap->ioaddr.altstatus_addr = ctl_addr; 86 89 ap->ioaddr.ctl_addr = ctl_addr; 87 90 ap->ops = &isapnp_port_ops;
+75 -42
drivers/block/nbd.c
··· 67 67 struct recv_thread_args { 68 68 struct work_struct work; 69 69 struct nbd_device *nbd; 70 + struct nbd_sock *nsock; 70 71 int index; 71 72 }; 72 73 ··· 396 395 } 397 396 } 398 397 398 + static struct nbd_config *nbd_get_config_unlocked(struct nbd_device *nbd) 399 + { 400 + if (refcount_inc_not_zero(&nbd->config_refs)) { 401 + /* 402 + * Add smp_mb__after_atomic to ensure that reading nbd->config_refs 403 + * and reading nbd->config is ordered. The pair is the barrier in 404 + * nbd_alloc_and_init_config(), avoid nbd->config_refs is set 405 + * before nbd->config. 406 + */ 407 + smp_mb__after_atomic(); 408 + return nbd->config; 409 + } 410 + 411 + return NULL; 412 + } 413 + 399 414 static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req) 400 415 { 401 416 struct nbd_cmd *cmd = blk_mq_rq_to_pdu(req); ··· 426 409 return BLK_EH_DONE; 427 410 } 428 411 429 - if (!refcount_inc_not_zero(&nbd->config_refs)) { 412 + config = nbd_get_config_unlocked(nbd); 413 + if (!config) { 430 414 cmd->status = BLK_STS_TIMEOUT; 431 415 __clear_bit(NBD_CMD_INFLIGHT, &cmd->flags); 432 416 mutex_unlock(&cmd->lock); 433 417 goto done; 434 418 } 435 - config = nbd->config; 436 419 437 420 if (config->num_connections > 1 || 438 421 (config->num_connections == 1 && nbd->tag_set.timeout)) { ··· 506 489 return BLK_EH_DONE; 507 490 } 508 491 509 - /* 510 - * Send or receive packet. Return a positive value on success and 511 - * negtive value on failue, and never return 0. 
512 - */ 513 - static int sock_xmit(struct nbd_device *nbd, int index, int send, 514 - struct iov_iter *iter, int msg_flags, int *sent) 492 + static int __sock_xmit(struct nbd_device *nbd, struct socket *sock, int send, 493 + struct iov_iter *iter, int msg_flags, int *sent) 515 494 { 516 - struct nbd_config *config = nbd->config; 517 - struct socket *sock = config->socks[index]->sock; 518 495 int result; 519 496 struct msghdr msg; 520 497 unsigned int noreclaim_flag; ··· 549 538 memalloc_noreclaim_restore(noreclaim_flag); 550 539 551 540 return result; 541 + } 542 + 543 + /* 544 + * Send or receive packet. Return a positive value on success and 545 + * negtive value on failure, and never return 0. 546 + */ 547 + static int sock_xmit(struct nbd_device *nbd, int index, int send, 548 + struct iov_iter *iter, int msg_flags, int *sent) 549 + { 550 + struct nbd_config *config = nbd->config; 551 + struct socket *sock = config->socks[index]->sock; 552 + 553 + return __sock_xmit(nbd, sock, send, iter, msg_flags, sent); 552 554 } 553 555 554 556 /* ··· 720 696 return 0; 721 697 } 722 698 723 - static int nbd_read_reply(struct nbd_device *nbd, int index, 699 + static int nbd_read_reply(struct nbd_device *nbd, struct socket *sock, 724 700 struct nbd_reply *reply) 725 701 { 726 702 struct kvec iov = {.iov_base = reply, .iov_len = sizeof(*reply)}; ··· 729 705 730 706 reply->magic = 0; 731 707 iov_iter_kvec(&to, ITER_DEST, &iov, 1, sizeof(*reply)); 732 - result = sock_xmit(nbd, index, 0, &to, MSG_WAITALL, NULL); 708 + result = __sock_xmit(nbd, sock, 0, &to, MSG_WAITALL, NULL); 733 709 if (result < 0) { 734 710 if (!nbd_disconnected(nbd->config)) 735 711 dev_err(disk_to_dev(nbd->disk), ··· 853 829 struct nbd_device *nbd = args->nbd; 854 830 struct nbd_config *config = nbd->config; 855 831 struct request_queue *q = nbd->disk->queue; 856 - struct nbd_sock *nsock; 832 + struct nbd_sock *nsock = args->nsock; 857 833 struct nbd_cmd *cmd; 858 834 struct request *rq; 859 835 860 836 
while (1) { 861 837 struct nbd_reply reply; 862 838 863 - if (nbd_read_reply(nbd, args->index, &reply)) 839 + if (nbd_read_reply(nbd, nsock->sock, &reply)) 864 840 break; 865 841 866 842 /* ··· 895 871 percpu_ref_put(&q->q_usage_counter); 896 872 } 897 873 898 - nsock = config->socks[args->index]; 899 874 mutex_lock(&nsock->tx_lock); 900 875 nbd_mark_nsock_dead(nbd, nsock, 1); 901 876 mutex_unlock(&nsock->tx_lock); ··· 1000 977 struct nbd_sock *nsock; 1001 978 int ret; 1002 979 1003 - if (!refcount_inc_not_zero(&nbd->config_refs)) { 980 + config = nbd_get_config_unlocked(nbd); 981 + if (!config) { 1004 982 dev_err_ratelimited(disk_to_dev(nbd->disk), 1005 983 "Socks array is empty\n"); 1006 984 return -EINVAL; 1007 985 } 1008 - config = nbd->config; 1009 986 1010 987 if (index >= config->num_connections) { 1011 988 dev_err_ratelimited(disk_to_dev(nbd->disk), ··· 1238 1215 INIT_WORK(&args->work, recv_work); 1239 1216 args->index = i; 1240 1217 args->nbd = nbd; 1218 + args->nsock = nsock; 1241 1219 nsock->cookie++; 1242 1220 mutex_unlock(&nsock->tx_lock); 1243 1221 sockfd_put(old); ··· 1421 1397 refcount_inc(&nbd->config_refs); 1422 1398 INIT_WORK(&args->work, recv_work); 1423 1399 args->nbd = nbd; 1400 + args->nsock = config->socks[i]; 1424 1401 args->index = i; 1425 1402 queue_work(nbd->recv_workq, &args->work); 1426 1403 } ··· 1555 1530 return error; 1556 1531 } 1557 1532 1558 - static struct nbd_config *nbd_alloc_config(void) 1533 + static int nbd_alloc_and_init_config(struct nbd_device *nbd) 1559 1534 { 1560 1535 struct nbd_config *config; 1561 1536 1537 + if (WARN_ON(nbd->config)) 1538 + return -EINVAL; 1539 + 1562 1540 if (!try_module_get(THIS_MODULE)) 1563 - return ERR_PTR(-ENODEV); 1541 + return -ENODEV; 1564 1542 1565 1543 config = kzalloc(sizeof(struct nbd_config), GFP_NOFS); 1566 1544 if (!config) { 1567 1545 module_put(THIS_MODULE); 1568 - return ERR_PTR(-ENOMEM); 1546 + return -ENOMEM; 1569 1547 } 1570 1548 1571 1549 atomic_set(&config->recv_threads, 0); 
··· 1576 1548 init_waitqueue_head(&config->conn_wait); 1577 1549 config->blksize_bits = NBD_DEF_BLKSIZE_BITS; 1578 1550 atomic_set(&config->live_connections, 0); 1579 - return config; 1551 + 1552 + nbd->config = config; 1553 + /* 1554 + * Order refcount_set(&nbd->config_refs, 1) and nbd->config assignment, 1555 + * its pair is the barrier in nbd_get_config_unlocked(). 1556 + * So nbd_get_config_unlocked() won't see nbd->config as null after 1557 + * refcount_inc_not_zero() succeed. 1558 + */ 1559 + smp_mb__before_atomic(); 1560 + refcount_set(&nbd->config_refs, 1); 1561 + 1562 + return 0; 1580 1563 } 1581 1564 1582 1565 static int nbd_open(struct gendisk *disk, blk_mode_t mode) 1583 1566 { 1584 1567 struct nbd_device *nbd; 1568 + struct nbd_config *config; 1585 1569 int ret = 0; 1586 1570 1587 1571 mutex_lock(&nbd_index_mutex); ··· 1606 1566 ret = -ENXIO; 1607 1567 goto out; 1608 1568 } 1609 - if (!refcount_inc_not_zero(&nbd->config_refs)) { 1610 - struct nbd_config *config; 1611 1569 1570 + config = nbd_get_config_unlocked(nbd); 1571 + if (!config) { 1612 1572 mutex_lock(&nbd->config_lock); 1613 1573 if (refcount_inc_not_zero(&nbd->config_refs)) { 1614 1574 mutex_unlock(&nbd->config_lock); 1615 1575 goto out; 1616 1576 } 1617 - config = nbd_alloc_config(); 1618 - if (IS_ERR(config)) { 1619 - ret = PTR_ERR(config); 1577 + ret = nbd_alloc_and_init_config(nbd); 1578 + if (ret) { 1620 1579 mutex_unlock(&nbd->config_lock); 1621 1580 goto out; 1622 1581 } 1623 - nbd->config = config; 1624 - refcount_set(&nbd->config_refs, 1); 1582 + 1625 1583 refcount_inc(&nbd->refs); 1626 1584 mutex_unlock(&nbd->config_lock); 1627 1585 if (max_part) 1628 1586 set_bit(GD_NEED_PART_SCAN, &disk->state); 1629 - } else if (nbd_disconnected(nbd->config)) { 1587 + } else if (nbd_disconnected(config)) { 1630 1588 if (max_part) 1631 1589 set_bit(GD_NEED_PART_SCAN, &disk->state); 1632 1590 } ··· 2028 1990 pr_err("nbd%d already in use\n", index); 2029 1991 return -EBUSY; 2030 1992 } 2031 - if 
(WARN_ON(nbd->config)) { 2032 - mutex_unlock(&nbd->config_lock); 2033 - nbd_put(nbd); 2034 - return -EINVAL; 2035 - } 2036 - config = nbd_alloc_config(); 2037 - if (IS_ERR(config)) { 1993 + 1994 + ret = nbd_alloc_and_init_config(nbd); 1995 + if (ret) { 2038 1996 mutex_unlock(&nbd->config_lock); 2039 1997 nbd_put(nbd); 2040 1998 pr_err("couldn't allocate config\n"); 2041 - return PTR_ERR(config); 1999 + return ret; 2042 2000 } 2043 - nbd->config = config; 2044 - refcount_set(&nbd->config_refs, 1); 2045 - set_bit(NBD_RT_BOUND, &config->runtime_flags); 2046 2001 2002 + config = nbd->config; 2003 + set_bit(NBD_RT_BOUND, &config->runtime_flags); 2047 2004 ret = nbd_genl_size_set(info, nbd); 2048 2005 if (ret) 2049 2006 goto out; ··· 2241 2208 } 2242 2209 mutex_unlock(&nbd_index_mutex); 2243 2210 2244 - if (!refcount_inc_not_zero(&nbd->config_refs)) { 2211 + config = nbd_get_config_unlocked(nbd); 2212 + if (!config) { 2245 2213 dev_err(nbd_to_dev(nbd), 2246 2214 "not configured, cannot reconfigure\n"); 2247 2215 nbd_put(nbd); ··· 2250 2216 } 2251 2217 2252 2218 mutex_lock(&nbd->config_lock); 2253 - config = nbd->config; 2254 2219 if (!test_bit(NBD_RT_BOUND, &config->runtime_flags) || 2255 2220 !nbd->pid) { 2256 2221 dev_err(nbd_to_dev(nbd),
+13 -12
drivers/block/null_blk/main.c
··· 1464 1464 return BLK_STS_OK; 1465 1465 } 1466 1466 1467 - static blk_status_t null_handle_cmd(struct nullb_cmd *cmd, sector_t sector, 1468 - sector_t nr_sectors, enum req_op op) 1467 + static void null_handle_cmd(struct nullb_cmd *cmd, sector_t sector, 1468 + sector_t nr_sectors, enum req_op op) 1469 1469 { 1470 1470 struct nullb_device *dev = cmd->nq->dev; 1471 1471 struct nullb *nullb = dev->nullb; 1472 1472 blk_status_t sts; 1473 - 1474 - if (test_bit(NULLB_DEV_FL_THROTTLED, &dev->flags)) { 1475 - sts = null_handle_throttled(cmd); 1476 - if (sts != BLK_STS_OK) 1477 - return sts; 1478 - } 1479 1473 1480 1474 if (op == REQ_OP_FLUSH) { 1481 1475 cmd->error = errno_to_blk_status(null_handle_flush(nullb)); ··· 1487 1493 1488 1494 out: 1489 1495 nullb_complete_cmd(cmd); 1490 - return BLK_STS_OK; 1491 1496 } 1492 1497 1493 1498 static enum hrtimer_restart nullb_bwtimer_fn(struct hrtimer *timer) ··· 1717 1724 cmd->fake_timeout = should_timeout_request(rq) || 1718 1725 blk_should_fake_timeout(rq->q); 1719 1726 1720 - blk_mq_start_request(rq); 1721 - 1722 1727 if (should_requeue_request(rq)) { 1723 1728 /* 1724 1729 * Alternate between hitting the core BUSY path, and the ··· 1729 1738 return BLK_STS_OK; 1730 1739 } 1731 1740 1741 + if (test_bit(NULLB_DEV_FL_THROTTLED, &nq->dev->flags)) { 1742 + blk_status_t sts = null_handle_throttled(cmd); 1743 + 1744 + if (sts != BLK_STS_OK) 1745 + return sts; 1746 + } 1747 + 1748 + blk_mq_start_request(rq); 1749 + 1732 1750 if (is_poll) { 1733 1751 spin_lock(&nq->poll_lock); 1734 1752 list_add_tail(&rq->queuelist, &nq->poll_list); ··· 1747 1747 if (cmd->fake_timeout) 1748 1748 return BLK_STS_OK; 1749 1749 1750 - return null_handle_cmd(cmd, sector, nr_sectors, req_op(rq)); 1750 + null_handle_cmd(cmd, sector, nr_sectors, req_op(rq)); 1751 + return BLK_STS_OK; 1751 1752 } 1752 1753 1753 1754 static void null_queue_rqs(struct request **rqlist)
+12 -5
drivers/dpll/dpll_netlink.c
··· 1093 1093 return -ENOMEM; 1094 1094 hdr = genlmsg_put_reply(msg, info, &dpll_nl_family, 0, 1095 1095 DPLL_CMD_PIN_ID_GET); 1096 - if (!hdr) 1096 + if (!hdr) { 1097 + nlmsg_free(msg); 1097 1098 return -EMSGSIZE; 1098 - 1099 + } 1099 1100 pin = dpll_pin_find_from_nlattr(info); 1100 1101 if (!IS_ERR(pin)) { 1101 1102 ret = dpll_msg_add_pin_handle(msg, pin); ··· 1124 1123 return -ENOMEM; 1125 1124 hdr = genlmsg_put_reply(msg, info, &dpll_nl_family, 0, 1126 1125 DPLL_CMD_PIN_GET); 1127 - if (!hdr) 1126 + if (!hdr) { 1127 + nlmsg_free(msg); 1128 1128 return -EMSGSIZE; 1129 + } 1129 1130 ret = dpll_cmd_pin_get_one(msg, pin, info->extack); 1130 1131 if (ret) { 1131 1132 nlmsg_free(msg); ··· 1259 1256 return -ENOMEM; 1260 1257 hdr = genlmsg_put_reply(msg, info, &dpll_nl_family, 0, 1261 1258 DPLL_CMD_DEVICE_ID_GET); 1262 - if (!hdr) 1259 + if (!hdr) { 1260 + nlmsg_free(msg); 1263 1261 return -EMSGSIZE; 1262 + } 1264 1263 1265 1264 dpll = dpll_device_find_from_nlattr(info); 1266 1265 if (!IS_ERR(dpll)) { ··· 1289 1284 return -ENOMEM; 1290 1285 hdr = genlmsg_put_reply(msg, info, &dpll_nl_family, 0, 1291 1286 DPLL_CMD_DEVICE_GET); 1292 - if (!hdr) 1287 + if (!hdr) { 1288 + nlmsg_free(msg); 1293 1289 return -EMSGSIZE; 1290 + } 1294 1291 1295 1292 ret = dpll_device_get_one(dpll, msg, info->extack); 1296 1293 if (ret) {
+12 -1
drivers/gpu/drm/ast/ast_drv.h
··· 174 174 return container_of(connector, struct ast_sil164_connector, base); 175 175 } 176 176 177 + struct ast_bmc_connector { 178 + struct drm_connector base; 179 + struct drm_connector *physical_connector; 180 + }; 181 + 182 + static inline struct ast_bmc_connector * 183 + to_ast_bmc_connector(struct drm_connector *connector) 184 + { 185 + return container_of(connector, struct ast_bmc_connector, base); 186 + } 187 + 177 188 /* 178 189 * Device 179 190 */ ··· 229 218 } astdp; 230 219 struct { 231 220 struct drm_encoder encoder; 232 - struct drm_connector connector; 221 + struct ast_bmc_connector bmc_connector; 233 222 } bmc; 234 223 } output; 235 224
+55 -7
drivers/gpu/drm/ast/ast_mode.c
··· 1767 1767 .destroy = drm_encoder_cleanup, 1768 1768 }; 1769 1769 1770 + static int ast_bmc_connector_helper_detect_ctx(struct drm_connector *connector, 1771 + struct drm_modeset_acquire_ctx *ctx, 1772 + bool force) 1773 + { 1774 + struct ast_bmc_connector *bmc_connector = to_ast_bmc_connector(connector); 1775 + struct drm_connector *physical_connector = bmc_connector->physical_connector; 1776 + 1777 + /* 1778 + * Most user-space compositors cannot handle more than one connected 1779 + * connector per CRTC. Hence, we only mark the BMC as connected if the 1780 + * physical connector is disconnected. If the physical connector's status 1781 + * is connected or unknown, the BMC remains disconnected. This has no 1782 + * effect on the output of the BMC. 1783 + * 1784 + * FIXME: Remove this logic once user-space compositors can handle more 1785 + * than one connector per CRTC. The BMC should always be connected. 1786 + */ 1787 + 1788 + if (physical_connector && physical_connector->status == connector_status_disconnected) 1789 + return connector_status_connected; 1790 + 1791 + return connector_status_disconnected; 1792 + } 1793 + 1770 1794 static int ast_bmc_connector_helper_get_modes(struct drm_connector *connector) 1771 1795 { 1772 1796 return drm_add_modes_noedid(connector, 4096, 4096); ··· 1798 1774 1799 1775 static const struct drm_connector_helper_funcs ast_bmc_connector_helper_funcs = { 1800 1776 .get_modes = ast_bmc_connector_helper_get_modes, 1777 + .detect_ctx = ast_bmc_connector_helper_detect_ctx, 1801 1778 }; 1802 1779 1803 1780 static const struct drm_connector_funcs ast_bmc_connector_funcs = { ··· 1809 1784 .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 1810 1785 }; 1811 1786 1812 - static int ast_bmc_output_init(struct ast_device *ast) 1787 + static int ast_bmc_connector_init(struct drm_device *dev, 1788 + struct ast_bmc_connector *bmc_connector, 1789 + struct drm_connector *physical_connector) 1790 + { 1791 + struct drm_connector 
*connector = &bmc_connector->base; 1792 + int ret; 1793 + 1794 + ret = drm_connector_init(dev, connector, &ast_bmc_connector_funcs, 1795 + DRM_MODE_CONNECTOR_VIRTUAL); 1796 + if (ret) 1797 + return ret; 1798 + 1799 + drm_connector_helper_add(connector, &ast_bmc_connector_helper_funcs); 1800 + 1801 + bmc_connector->physical_connector = physical_connector; 1802 + 1803 + return 0; 1804 + } 1805 + 1806 + static int ast_bmc_output_init(struct ast_device *ast, 1807 + struct drm_connector *physical_connector) 1813 1808 { 1814 1809 struct drm_device *dev = &ast->base; 1815 1810 struct drm_crtc *crtc = &ast->crtc; 1816 1811 struct drm_encoder *encoder = &ast->output.bmc.encoder; 1817 - struct drm_connector *connector = &ast->output.bmc.connector; 1812 + struct ast_bmc_connector *bmc_connector = &ast->output.bmc.bmc_connector; 1813 + struct drm_connector *connector = &bmc_connector->base; 1818 1814 int ret; 1819 1815 1820 1816 ret = drm_encoder_init(dev, encoder, ··· 1845 1799 return ret; 1846 1800 encoder->possible_crtcs = drm_crtc_mask(crtc); 1847 1801 1848 - ret = drm_connector_init(dev, connector, &ast_bmc_connector_funcs, 1849 - DRM_MODE_CONNECTOR_VIRTUAL); 1802 + ret = ast_bmc_connector_init(dev, bmc_connector, physical_connector); 1850 1803 if (ret) 1851 1804 return ret; 1852 - 1853 - drm_connector_helper_add(connector, &ast_bmc_connector_helper_funcs); 1854 1805 1855 1806 ret = drm_connector_attach_encoder(connector, encoder); 1856 1807 if (ret) ··· 1907 1864 int ast_mode_config_init(struct ast_device *ast) 1908 1865 { 1909 1866 struct drm_device *dev = &ast->base; 1867 + struct drm_connector *physical_connector = NULL; 1910 1868 int ret; 1911 1869 1912 1870 ret = drmm_mode_config_init(dev); ··· 1948 1904 ret = ast_vga_output_init(ast); 1949 1905 if (ret) 1950 1906 return ret; 1907 + physical_connector = &ast->output.vga.vga_connector.base; 1951 1908 } 1952 1909 if (ast->tx_chip_types & AST_TX_SIL164_BIT) { 1953 1910 ret = ast_sil164_output_init(ast); 1954 1911 if 
(ret) 1955 1912 return ret; 1913 + physical_connector = &ast->output.sil164.sil164_connector.base; 1956 1914 } 1957 1915 if (ast->tx_chip_types & AST_TX_DP501_BIT) { 1958 1916 ret = ast_dp501_output_init(ast); 1959 1917 if (ret) 1960 1918 return ret; 1919 + physical_connector = &ast->output.dp501.connector; 1961 1920 } 1962 1921 if (ast->tx_chip_types & AST_TX_ASTDP_BIT) { 1963 1922 ret = ast_astdp_output_init(ast); 1964 1923 if (ret) 1965 1924 return ret; 1925 + physical_connector = &ast->output.astdp.connector; 1966 1926 } 1967 - ret = ast_bmc_output_init(ast); 1927 + ret = ast_bmc_output_init(ast, physical_connector); 1968 1928 if (ret) 1969 1929 return ret; 1970 1930
-11
drivers/gpu/drm/i915/gt/intel_gt.c
··· 982 982 983 983 err: 984 984 i915_probe_error(i915, "Failed to initialize %s! (%d)\n", gtdef->name, ret); 985 - intel_gt_release_all(i915); 986 - 987 985 return ret; 988 986 } 989 987 ··· 998 1000 } 999 1001 1000 1002 return 0; 1001 - } 1002 - 1003 - void intel_gt_release_all(struct drm_i915_private *i915) 1004 - { 1005 - struct intel_gt *gt; 1006 - unsigned int id; 1007 - 1008 - for_each_gt(gt, i915, id) 1009 - i915->gt[id] = NULL; 1010 1003 } 1011 1004 1012 1005 void intel_gt_info_print(const struct intel_gt_info *info,
+1 -3
drivers/gpu/drm/i915/i915_driver.c
··· 776 776 777 777 ret = i915_driver_mmio_probe(i915); 778 778 if (ret < 0) 779 - goto out_tiles_cleanup; 779 + goto out_runtime_pm_put; 780 780 781 781 ret = i915_driver_hw_probe(i915); 782 782 if (ret < 0) ··· 836 836 i915_ggtt_driver_late_release(i915); 837 837 out_cleanup_mmio: 838 838 i915_driver_mmio_release(i915); 839 - out_tiles_cleanup: 840 - intel_gt_release_all(i915); 841 839 out_runtime_pm_put: 842 840 enable_rpm_wakeref_asserts(&i915->runtime_pm); 843 841 i915_driver_late_release(i915);
+1
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h
··· 406 406 .min_llcc_ib = 0, 407 407 .min_dram_ib = 800000, 408 408 .danger_lut_tbl = {0xf, 0xffff, 0x0}, 409 + .safe_lut_tbl = {0xfe00, 0xfe00, 0xffff}, 409 410 .qos_lut_tbl = { 410 411 {.nentry = ARRAY_SIZE(sc8180x_qos_linear), 411 412 .entries = sc8180x_qos_linear
+1 -2
drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
··· 844 844 845 845 return 0; 846 846 fail: 847 - if (mdp5_kms) 848 - mdp5_destroy(mdp5_kms); 847 + mdp5_destroy(mdp5_kms); 849 848 return ret; 850 849 } 851 850
+10 -5
drivers/gpu/drm/msm/dp/dp_display.c
··· 365 365 /* reset video pattern flag on disconnect */ 366 366 if (!hpd) { 367 367 dp->panel->video_test = false; 368 - drm_dp_set_subconnector_property(dp->dp_display.connector, 369 - connector_status_disconnected, 370 - dp->panel->dpcd, dp->panel->downstream_ports); 368 + if (!dp->dp_display.is_edp) 369 + drm_dp_set_subconnector_property(dp->dp_display.connector, 370 + connector_status_disconnected, 371 + dp->panel->dpcd, 372 + dp->panel->downstream_ports); 371 373 } 372 374 373 375 dp->dp_display.is_connected = hpd; ··· 398 396 399 397 dp_link_process_request(dp->link); 400 398 401 - drm_dp_set_subconnector_property(dp->dp_display.connector, connector_status_connected, 402 - dp->panel->dpcd, dp->panel->downstream_ports); 399 + if (!dp->dp_display.is_edp) 400 + drm_dp_set_subconnector_property(dp->dp_display.connector, 401 + connector_status_connected, 402 + dp->panel->dpcd, 403 + dp->panel->downstream_ports); 403 404 404 405 edid = dp->panel->edid; 405 406
+3
drivers/gpu/drm/msm/dp/dp_drm.c
··· 345 345 if (IS_ERR(connector)) 346 346 return connector; 347 347 348 + if (!dp_display->is_edp) 349 + drm_connector_attach_dp_subconnector_property(connector); 350 + 348 351 drm_connector_attach_encoder(connector, encoder); 349 352 350 353 return connector;
+1 -1
drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
··· 918 918 if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V5_2)) { 919 919 if (phy->cphy_mode) { 920 920 vreg_ctrl_0 = 0x45; 921 - vreg_ctrl_1 = 0x45; 921 + vreg_ctrl_1 = 0x41; 922 922 glbl_rescode_top_ctrl = 0x00; 923 923 glbl_rescode_bot_ctrl = 0x00; 924 924 } else {
-2
drivers/gpu/drm/msm/msm_drv.c
··· 288 288 if (ret) 289 289 goto err_msm_uninit; 290 290 291 - drm_kms_helper_poll_init(ddev); 292 - 293 291 if (priv->kms_init) { 294 292 drm_kms_helper_poll_init(ddev); 295 293 msm_fbdev_setup(ddev);
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/fifo/r535.c
··· 539 539 struct nvkm_runl *runl; 540 540 struct nvkm_engn *engn; 541 541 u32 cgids = 2048; 542 - u32 chids = 2048 / CHID_PER_USERD; 542 + u32 chids = 2048; 543 543 int ret; 544 544 NV2080_CTRL_FIFO_GET_DEVICE_INFO_TABLE_PARAMS *ctrl; 545 545
+5 -4
drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
··· 1709 1709 .mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_SYNC_PULSE | 1710 1710 MIPI_DSI_MODE_LPM, 1711 1711 .init_cmds = auo_b101uan08_3_init_cmd, 1712 + .lp11_before_reset = true, 1712 1713 }; 1713 1714 1714 1715 static const struct drm_display_mode boe_tv105wum_nw0_default_mode = { ··· 1767 1766 }; 1768 1767 1769 1768 static const struct drm_display_mode starry_himax83102_j02_default_mode = { 1770 - .clock = 161600, 1769 + .clock = 162850, 1771 1770 .hdisplay = 1200, 1772 - .hsync_start = 1200 + 40, 1773 - .hsync_end = 1200 + 40 + 20, 1774 - .htotal = 1200 + 40 + 20 + 40, 1771 + .hsync_start = 1200 + 50, 1772 + .hsync_end = 1200 + 50 + 20, 1773 + .htotal = 1200 + 50 + 20 + 50, 1775 1774 .vdisplay = 1920, 1776 1775 .vsync_start = 1920 + 116, 1777 1776 .vsync_end = 1920 + 116 + 8,
+7 -6
drivers/gpu/drm/panel/panel-simple.c
··· 2379 2379 static const struct display_timing innolux_g101ice_l01_timing = { 2380 2380 .pixelclock = { 60400000, 71100000, 74700000 }, 2381 2381 .hactive = { 1280, 1280, 1280 }, 2382 - .hfront_porch = { 41, 80, 100 }, 2383 - .hback_porch = { 40, 79, 99 }, 2384 - .hsync_len = { 1, 1, 1 }, 2382 + .hfront_porch = { 30, 60, 70 }, 2383 + .hback_porch = { 30, 60, 70 }, 2384 + .hsync_len = { 22, 40, 60 }, 2385 2385 .vactive = { 800, 800, 800 }, 2386 - .vfront_porch = { 5, 11, 14 }, 2387 - .vback_porch = { 4, 11, 14 }, 2388 - .vsync_len = { 1, 1, 1 }, 2386 + .vfront_porch = { 3, 8, 14 }, 2387 + .vback_porch = { 3, 8, 14 }, 2388 + .vsync_len = { 4, 7, 12 }, 2389 2389 .flags = DISPLAY_FLAGS_DE_HIGH, 2390 2390 }; 2391 2391 ··· 2402 2402 .disable = 200, 2403 2403 }, 2404 2404 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 2405 + .bus_flags = DRM_BUS_FLAG_DE_HIGH, 2405 2406 .connector_type = DRM_MODE_CONNECTOR_LVDS, 2406 2407 }; 2407 2408
+11 -3
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
··· 247 247 VOP_REG_SET(vop, common, cfg_done, 1); 248 248 } 249 249 250 - static bool has_rb_swapped(uint32_t format) 250 + static bool has_rb_swapped(uint32_t version, uint32_t format) 251 251 { 252 252 switch (format) { 253 253 case DRM_FORMAT_XBGR8888: 254 254 case DRM_FORMAT_ABGR8888: 255 - case DRM_FORMAT_BGR888: 256 255 case DRM_FORMAT_BGR565: 257 256 return true; 257 + /* 258 + * full framework (IP version 3.x) only need rb swapped for RGB888 and 259 + * little framework (IP version 2.x) only need rb swapped for BGR888, 260 + * check for 3.x to also only rb swap BGR888 for unknown vop version 261 + */ 262 + case DRM_FORMAT_RGB888: 263 + return VOP_MAJOR(version) == 3; 264 + case DRM_FORMAT_BGR888: 265 + return VOP_MAJOR(version) != 3; 258 266 default: 259 267 return false; 260 268 } ··· 1038 1030 VOP_WIN_SET(vop, win, dsp_info, dsp_info); 1039 1031 VOP_WIN_SET(vop, win, dsp_st, dsp_st); 1040 1032 1041 - rb_swap = has_rb_swapped(fb->format->format); 1033 + rb_swap = has_rb_swapped(vop->data->version, fb->format->format); 1042 1034 VOP_WIN_SET(vop, win, rb_swap, rb_swap); 1043 1035 1044 1036 /*
+2
drivers/hid/hid-apple.c
··· 345 345 { "AONE" }, 346 346 { "GANSS" }, 347 347 { "Hailuck" }, 348 + { "Jamesdonkey" }, 349 + { "A3R" }, 348 350 }; 349 351 350 352 static bool apple_is_non_apple_keyboard(struct hid_device *hdev)
+23 -4
drivers/hid/hid-asus.c
··· 381 381 return 0; 382 382 } 383 383 384 - static int asus_kbd_set_report(struct hid_device *hdev, u8 *buf, size_t buf_size) 384 + static int asus_kbd_set_report(struct hid_device *hdev, const u8 *buf, size_t buf_size) 385 385 { 386 386 unsigned char *dmabuf; 387 387 int ret; ··· 404 404 405 405 static int asus_kbd_init(struct hid_device *hdev) 406 406 { 407 - u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x41, 0x53, 0x55, 0x53, 0x20, 0x54, 407 + const u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x41, 0x53, 0x55, 0x53, 0x20, 0x54, 408 408 0x65, 0x63, 0x68, 0x2e, 0x49, 0x6e, 0x63, 0x2e, 0x00 }; 409 409 int ret; 410 410 ··· 418 418 static int asus_kbd_get_functions(struct hid_device *hdev, 419 419 unsigned char *kbd_func) 420 420 { 421 - u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x05, 0x20, 0x31, 0x00, 0x08 }; 421 + const u8 buf[] = { FEATURE_KBD_REPORT_ID, 0x05, 0x20, 0x31, 0x00, 0x08 }; 422 422 u8 *readbuf; 423 423 int ret; 424 424 ··· 449 449 450 450 static int rog_nkey_led_init(struct hid_device *hdev) 451 451 { 452 - u8 buf_init_start[] = { FEATURE_KBD_LED_REPORT_ID1, 0xB9 }; 452 + const u8 buf_init_start[] = { FEATURE_KBD_LED_REPORT_ID1, 0xB9 }; 453 453 u8 buf_init2[] = { FEATURE_KBD_LED_REPORT_ID1, 0x41, 0x53, 0x55, 0x53, 0x20, 454 454 0x54, 0x65, 0x63, 0x68, 0x2e, 0x49, 0x6e, 0x63, 0x2e, 0x00 }; 455 455 u8 buf_init3[] = { FEATURE_KBD_LED_REPORT_ID1, ··· 1000 1000 return 0; 1001 1001 } 1002 1002 1003 + static int __maybe_unused asus_resume(struct hid_device *hdev) { 1004 + struct asus_drvdata *drvdata = hid_get_drvdata(hdev); 1005 + int ret = 0; 1006 + 1007 + if (drvdata->kbd_backlight) { 1008 + const u8 buf[] = { FEATURE_KBD_REPORT_ID, 0xba, 0xc5, 0xc4, 1009 + drvdata->kbd_backlight->cdev.brightness }; 1010 + ret = asus_kbd_set_report(hdev, buf, sizeof(buf)); 1011 + if (ret < 0) { 1012 + hid_err(hdev, "Asus failed to set keyboard backlight: %d\n", ret); 1013 + goto asus_resume_err; 1014 + } 1015 + } 1016 + 1017 + asus_resume_err: 1018 + return ret; 1019 + } 1020 + 1003 1021 static int __maybe_unused asus_reset_resume(struct hid_device *hdev) 1004 1022 { 1005 1023 struct asus_drvdata *drvdata = hid_get_drvdata(hdev); ··· 1312 1294 .input_configured = asus_input_configured, 1313 1295 #ifdef CONFIG_PM 1314 1296 .reset_resume = asus_reset_resume, 1297 + .resume = asus_resume, 1315 1298 #endif 1316 1299 .event = asus_event, 1317 1300 .raw_event = asus_raw_event
+10 -2
drivers/hid/hid-core.c
··· 702 702 * Free a device structure, all reports, and all fields. 703 703 */ 704 704 705 - static void hid_device_release(struct device *dev) 705 + void hiddev_free(struct kref *ref) 706 706 { 707 - struct hid_device *hid = to_hid_device(dev); 707 + struct hid_device *hid = container_of(ref, struct hid_device, ref); 708 708 709 709 hid_close_report(hid); 710 710 kfree(hid->dev_rdesc); 711 711 kfree(hid); 712 + } 713 + 714 + static void hid_device_release(struct device *dev) 715 + { 716 + struct hid_device *hid = to_hid_device(dev); 717 + 718 + kref_put(&hid->ref, hiddev_free); 712 719 } 713 720 714 721 /* ··· 2853 2846 spin_lock_init(&hdev->debug_list_lock); 2854 2847 sema_init(&hdev->driver_input_lock, 1); 2855 2848 mutex_init(&hdev->ll_open_lock); 2849 + kref_init(&hdev->ref); 2856 2850 2857 2851 hid_bpf_device_init(hdev); 2858 2852
+3
drivers/hid/hid-debug.c
··· 1135 1135 goto out; 1136 1136 } 1137 1137 list->hdev = (struct hid_device *) inode->i_private; 1138 + kref_get(&list->hdev->ref); 1138 1139 file->private_data = list; 1139 1140 mutex_init(&list->read_mutex); 1140 1141 ··· 1228 1227 list_del(&list->node); 1229 1228 spin_unlock_irqrestore(&list->hdev->debug_list_lock, flags); 1230 1229 kfifo_free(&list->hid_debug_fifo); 1230 + 1231 + kref_put(&list->hdev->ref, hiddev_free); 1231 1232 kfree(list); 1232 1233 1233 1234 return 0;
+14 -2
drivers/hid/hid-glorious.c
··· 21 21 * Glorious Model O and O- specify the const flag in the consumer input 22 22 * report descriptor, which leads to inputs being ignored. Fix this 23 23 * by patching the descriptor. 24 + * 25 + * Glorious Model I incorrectly specifes the Usage Minimum for its 26 + * keyboard HID report, causing keycodes to be misinterpreted. 27 + * Fix this by setting Usage Minimum to 0 in that report. 24 28 */ 25 29 static __u8 *glorious_report_fixup(struct hid_device *hdev, __u8 *rdesc, 26 30 unsigned int *rsize) ··· 35 31 hid_info(hdev, "patching Glorious Model O consumer control report descriptor\n"); 36 32 rdesc[85] = rdesc[113] = rdesc[141] = \ 37 33 HID_MAIN_ITEM_VARIABLE | HID_MAIN_ITEM_RELATIVE; 34 + } 35 + if (*rsize == 156 && rdesc[41] == 1) { 36 + hid_info(hdev, "patching Glorious Model I keyboard report descriptor\n"); 37 + rdesc[41] = 0; 38 38 } 39 39 return rdesc; 40 40 } ··· 52 44 model = "Model O"; break; 53 45 case USB_DEVICE_ID_GLORIOUS_MODEL_D: 54 46 model = "Model D"; break; 47 + case USB_DEVICE_ID_GLORIOUS_MODEL_I: 48 + model = "Model I"; break; 55 49 } 56 50 57 51 snprintf(hdev->name, sizeof(hdev->name), "%s %s", "Glorious", model); ··· 76 66 } 77 67 78 68 static const struct hid_device_id glorious_devices[] = { 79 - { HID_USB_DEVICE(USB_VENDOR_ID_GLORIOUS, 69 + { HID_USB_DEVICE(USB_VENDOR_ID_SINOWEALTH, 80 70 USB_DEVICE_ID_GLORIOUS_MODEL_O) }, 81 - { HID_USB_DEVICE(USB_VENDOR_ID_GLORIOUS, 71 + { HID_USB_DEVICE(USB_VENDOR_ID_SINOWEALTH, 82 72 USB_DEVICE_ID_GLORIOUS_MODEL_D) }, 73 + { HID_USB_DEVICE(USB_VENDOR_ID_LAVIEW, 74 + USB_DEVICE_ID_GLORIOUS_MODEL_I) }, 83 75 { } 84 76 }; 85 77 MODULE_DEVICE_TABLE(hid, glorious_devices);
+7 -5
drivers/hid/hid-ids.h
··· 511 511 #define USB_DEVICE_ID_GENERAL_TOUCH_WIN8_PIT_010A 0x010a 512 512 #define USB_DEVICE_ID_GENERAL_TOUCH_WIN8_PIT_E100 0xe100 513 513 514 - #define USB_VENDOR_ID_GLORIOUS 0x258a 515 - #define USB_DEVICE_ID_GLORIOUS_MODEL_D 0x0033 516 - #define USB_DEVICE_ID_GLORIOUS_MODEL_O 0x0036 517 - 518 514 #define I2C_VENDOR_ID_GOODIX 0x27c6 519 515 #define I2C_DEVICE_ID_GOODIX_01F0 0x01f0 520 516 ··· 741 745 #define USB_VENDOR_ID_LABTEC 0x1020 742 746 #define USB_DEVICE_ID_LABTEC_WIRELESS_KEYBOARD 0x0006 743 747 748 + #define USB_VENDOR_ID_LAVIEW 0x22D4 749 + #define USB_DEVICE_ID_GLORIOUS_MODEL_I 0x1503 750 + 744 751 #define USB_VENDOR_ID_LCPOWER 0x1241 745 752 #define USB_DEVICE_ID_LCPOWER_LC1000 0xf767 746 753 ··· 868 869 #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_2 0xc534 869 870 #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1 0xc539 870 871 #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1_1 0xc53f 871 - #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1_2 0xc547 872 872 #define USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_POWERPLAY 0xc53a 873 873 #define USB_DEVICE_ID_SPACETRAVELLER 0xc623 874 874 #define USB_DEVICE_ID_SPACENAVIGATOR 0xc626 ··· 1157 1159 1158 1160 #define USB_VENDOR_ID_SIGMATEL 0x066F 1159 1161 #define USB_DEVICE_ID_SIGMATEL_STMP3780 0x3780 1162 + 1163 + #define USB_VENDOR_ID_SINOWEALTH 0x258a 1164 + #define USB_DEVICE_ID_GLORIOUS_MODEL_D 0x0033 1165 + #define USB_DEVICE_ID_GLORIOUS_MODEL_O 0x0036 1160 1166 1161 1167 #define USB_VENDOR_ID_SIS_TOUCH 0x0457 1162 1168 #define USB_DEVICE_ID_SIS9200_TOUCH 0x9200
+3 -8
drivers/hid/hid-logitech-dj.c
··· 1695 1695 } 1696 1696 /* 1697 1697 * Mouse-only receivers send unnumbered mouse data. The 27 MHz 1698 - * receiver uses 6 byte packets, the nano receiver 8 bytes, 1699 - * the lightspeed receiver (Pro X Superlight) 13 bytes. 1698 + * receiver uses 6 byte packets, the nano receiver 8 bytes. 1700 1699 */ 1701 1700 if (djrcv_dev->unnumbered_application == HID_GD_MOUSE && 1702 - size <= 13){ 1703 - u8 mouse_report[14]; 1701 + size <= 8) { 1702 + u8 mouse_report[9]; 1704 1703 1705 1704 /* Prepend report id */ 1706 1705 mouse_report[0] = REPORT_TYPE_MOUSE; ··· 1982 1983 { /* Logitech lightspeed receiver (0xc53f) */ 1983 1984 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 1984 1985 USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1_1), 1985 - .driver_data = recvr_type_gaming_hidpp}, 1986 - { /* Logitech lightspeed receiver (0xc547) */ 1987 - HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 1988 - USB_DEVICE_ID_LOGITECH_NANO_RECEIVER_LIGHTSPEED_1_2), 1989 1986 .driver_data = recvr_type_gaming_hidpp}, 1990 1987 1991 1988 { /* Logitech 27 MHz HID++ 1.0 receiver (0xc513) */
+3 -1
drivers/hid/hid-mcp2221.c
··· 1142 1142 if (ret) 1143 1143 return ret; 1144 1144 1145 + hid_device_io_start(hdev); 1146 + 1145 1147 /* Set I2C bus clock diviser */ 1146 1148 if (i2c_clk_freq > 400) 1147 1149 i2c_clk_freq = 400; ··· 1159 1157 snprintf(mcp->adapter.name, sizeof(mcp->adapter.name), 1160 1158 "MCP2221 usb-i2c bridge"); 1161 1159 1160 + i2c_set_adapdata(&mcp->adapter, mcp); 1162 1161 ret = devm_i2c_add_adapter(&hdev->dev, &mcp->adapter); 1163 1162 if (ret) { 1164 1163 hid_err(hdev, "can't add usb-i2c adapter: %d\n", ret); 1165 1164 return ret; 1166 1165 } 1167 - i2c_set_adapdata(&mcp->adapter, mcp); 1168 1166 1169 1167 #if IS_REACHABLE(CONFIG_GPIOLIB) 1170 1168 /* Setup GPIO chip */
+5
drivers/hid/hid-multitouch.c
··· 2046 2046 MT_USB_DEVICE(USB_VENDOR_ID_HANVON_ALT, 2047 2047 USB_DEVICE_ID_HANVON_ALT_MULTITOUCH) }, 2048 2048 2049 + /* HONOR GLO-GXXX panel */ 2050 + { .driver_data = MT_CLS_VTL, 2051 + HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, 2052 + 0x347d, 0x7853) }, 2053 + 2049 2054 /* Ilitek dual touch panel */ 2050 2055 { .driver_data = MT_CLS_NSMU, 2051 2056 MT_USB_DEVICE(USB_VENDOR_ID_ILITEK,
+1
drivers/hid/hid-quirks.c
··· 33 33 { HID_USB_DEVICE(USB_VENDOR_ID_AKAI, USB_DEVICE_ID_AKAI_MPKMINI2), HID_QUIRK_NO_INIT_REPORTS }, 34 34 { HID_USB_DEVICE(USB_VENDOR_ID_ALPS, USB_DEVICE_ID_IBM_GAMEPAD), HID_QUIRK_BADPAD }, 35 35 { HID_USB_DEVICE(USB_VENDOR_ID_AMI, USB_DEVICE_ID_AMI_VIRT_KEYBOARD_AND_MOUSE), HID_QUIRK_ALWAYS_POLL }, 36 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_REVB_ANSI), HID_QUIRK_ALWAYS_POLL }, 36 37 { HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_2PORTKVM), HID_QUIRK_NOGET }, 37 38 { HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVMC), HID_QUIRK_NOGET }, 38 39 { HID_USB_DEVICE(USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVM), HID_QUIRK_NOGET },
+1
drivers/md/bcache/bcache.h
··· 265 265 #define BCACHE_DEV_WB_RUNNING 3 266 266 #define BCACHE_DEV_RATE_DW_RUNNING 4 267 267 int nr_stripes; 268 + #define BCH_MIN_STRIPE_SZ ((4 << 20) >> SECTOR_SHIFT) 268 269 unsigned int stripe_size; 269 270 atomic_t *stripe_sectors_dirty; 270 271 unsigned long *full_dirty_stripes;
+10 -1
drivers/md/bcache/btree.c
··· 1000 1000 * 1001 1001 * The btree node will have either a read or a write lock held, depending on 1002 1002 * level and op->lock. 1003 + * 1004 + * Note: Only error code or btree pointer will be returned, it is unncessary 1005 + * for callers to check NULL pointer. 1003 1006 */ 1004 1007 struct btree *bch_btree_node_get(struct cache_set *c, struct btree_op *op, 1005 1008 struct bkey *k, int level, bool write, ··· 1114 1111 mutex_unlock(&b->c->bucket_lock); 1115 1112 } 1116 1113 1114 + /* 1115 + * Only error code or btree pointer will be returned, it is unncessary for 1116 + * callers to check NULL pointer. 1117 + */ 1117 1118 struct btree *__bch_btree_node_alloc(struct cache_set *c, struct btree_op *op, 1118 1119 int level, bool wait, 1119 1120 struct btree *parent) ··· 1375 1368 memset(new_nodes, 0, sizeof(new_nodes)); 1376 1369 closure_init_stack(&cl); 1377 1370 1378 - while (nodes < GC_MERGE_NODES && !IS_ERR(r[nodes].b)) 1371 + while (nodes < GC_MERGE_NODES && !IS_ERR_OR_NULL(r[nodes].b)) 1379 1372 keys += r[nodes++].keys; 1380 1373 1381 1374 blocks = btree_default_blocks(b->c) * 2 / 3; ··· 1539 1532 return 0; 1540 1533 1541 1534 n = btree_node_alloc_replacement(replace, NULL); 1535 + if (IS_ERR(n)) 1536 + return 0; 1542 1537 1543 1538 /* recheck reserve after allocating replacement node */ 1544 1539 if (btree_check_reserve(b, NULL)) {
+3 -1
drivers/md/bcache/super.c
··· 905 905 906 906 if (!d->stripe_size) 907 907 d->stripe_size = 1 << 31; 908 + else if (d->stripe_size < BCH_MIN_STRIPE_SZ) 909 + d->stripe_size = roundup(BCH_MIN_STRIPE_SZ, d->stripe_size); 908 910 909 911 n = DIV_ROUND_UP_ULL(sectors, d->stripe_size); 910 912 if (!n || n > max_stripes) { ··· 2018 2016 c->root = bch_btree_node_get(c, NULL, k, 2019 2017 j->btree_level, 2020 2018 true, NULL); 2021 - if (IS_ERR_OR_NULL(c->root)) 2019 + if (IS_ERR(c->root)) 2022 2020 goto err; 2023 2021 2024 2022 list_del_init(&c->root->list);
+1 -1
drivers/md/bcache/sysfs.c
··· 1104 1104 sum += INITIAL_PRIO - cached[i]; 1105 1105 1106 1106 if (n) 1107 - do_div(sum, n); 1107 + sum = div64_u64(sum, n); 1108 1108 1109 1109 for (i = 0; i < ARRAY_SIZE(q); i++) 1110 1110 q[i] = INITIAL_PRIO - cached[n * (i + 1) /
+18 -6
drivers/md/bcache/writeback.c
··· 913 913 int cur_idx, prev_idx, skip_nr; 914 914 915 915 k = p = NULL; 916 - cur_idx = prev_idx = 0; 916 + prev_idx = 0; 917 917 918 918 bch_btree_iter_init(&c->root->keys, &iter, NULL); 919 919 k = bch_btree_iter_next_filter(&iter, &c->root->keys, bch_ptr_bad); ··· 977 977 void bch_sectors_dirty_init(struct bcache_device *d) 978 978 { 979 979 int i; 980 + struct btree *b = NULL; 980 981 struct bkey *k = NULL; 981 982 struct btree_iter iter; 982 983 struct sectors_dirty_init op; 983 984 struct cache_set *c = d->c; 984 985 struct bch_dirty_init_state state; 985 986 987 + retry_lock: 988 + b = c->root; 989 + rw_lock(0, b, b->level); 990 + if (b != c->root) { 991 + rw_unlock(0, b); 992 + goto retry_lock; 993 + } 994 + 986 995 /* Just count root keys if no leaf node */ 987 - rw_lock(0, c->root, c->root->level); 988 996 if (c->root->level == 0) { 989 997 bch_btree_op_init(&op.op, -1); 990 998 op.inode = d->id; 991 999 op.count = 0; 992 1000 993 1001 for_each_key_filter(&c->root->keys, 994 1002 k, &iter, bch_ptr_invalid) { 1003 + if (KEY_INODE(k) != op.inode) 1004 + continue; 995 1005 sectors_dirty_init_fn(&op.op, c->root, k); 1006 + } 996 1007 997 - rw_unlock(0, c->root); 1008 + rw_unlock(0, b); 998 1009 return; 999 1010 } ··· 1025 1014 if (atomic_read(&state.enough)) 1026 1015 break; 1027 1016 1017 + atomic_inc(&state.started); 1028 1018 state.infos[i].state = &state; 1029 1019 state.infos[i].thread = 1030 1020 kthread_run(bch_dirty_init_thread, &state.infos[i], 1031 1021 "bch_dirtcnt[%d]", i); 1032 1022 if (IS_ERR(state.infos[i].thread)) { 1033 1023 pr_err("fails to run thread bch_dirty_init[%d]\n", i); 1024 + atomic_dec(&state.started); 1034 1025 for (--i; i >= 0; i--) 1035 1026 kthread_stop(state.infos[i].thread); 1036 1027 goto out; 1037 1028 } 1038 - atomic_inc(&state.started); 1039 1029 } 1040 1030 1041 1031 out: 1042 1032 /* Must wait for all threads to stop. */ 1043 1033 wait_event(state.wait, atomic_read(&state.started) == 0); 1044 - rw_unlock(0, c->root); 1034 + rw_unlock(0, b); 1045 1035 } 1046 1036 1047 1037 void bch_cached_dev_writeback_init(struct cached_dev *dc)
+2 -1
drivers/md/md.c
··· 8666 8666 struct bio *orig_bio = md_io_clone->orig_bio; 8667 8667 struct mddev *mddev = md_io_clone->mddev; 8668 8668 8669 - orig_bio->bi_status = bio->bi_status; 8669 + if (bio->bi_status && !orig_bio->bi_status) 8670 + orig_bio->bi_status = bio->bi_status; 8670 8671 8671 8672 if (md_io_clone->start_time) 8672 8673 bio_end_io_acct(orig_bio, md_io_clone->start_time);
+14
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
··· 682 682 static void xgbe_service_timer(struct timer_list *t) 683 683 { 684 684 struct xgbe_prv_data *pdata = from_timer(pdata, t, service_timer); 685 + struct xgbe_channel *channel; 686 + unsigned int i; 685 687 686 688 queue_work(pdata->dev_workqueue, &pdata->service_work); 687 689 688 690 mod_timer(&pdata->service_timer, jiffies + HZ); 691 + 692 + if (!pdata->tx_usecs) 693 + return; 694 + 695 + for (i = 0; i < pdata->channel_count; i++) { 696 + channel = pdata->channel[i]; 697 + if (!channel->tx_ring || channel->tx_timer_active) 698 + break; 699 + channel->tx_timer_active = 1; 700 + mod_timer(&channel->tx_timer, 701 + jiffies + usecs_to_jiffies(pdata->tx_usecs)); 702 + } 689 703 } 690 704 691 705 static void xgbe_init_timers(struct xgbe_prv_data *pdata)
+8 -3
drivers/net/ethernet/amd/xgbe/xgbe-ethtool.c
··· 314 314 315 315 cmd->base.phy_address = pdata->phy.address; 316 316 317 - cmd->base.autoneg = pdata->phy.autoneg; 318 - cmd->base.speed = pdata->phy.speed; 319 - cmd->base.duplex = pdata->phy.duplex; 317 + if (netif_carrier_ok(netdev)) { 318 + cmd->base.speed = pdata->phy.speed; 319 + cmd->base.duplex = pdata->phy.duplex; 320 + } else { 321 + cmd->base.speed = SPEED_UNKNOWN; 322 + cmd->base.duplex = DUPLEX_UNKNOWN; 323 + } 320 324 325 + cmd->base.autoneg = pdata->phy.autoneg; 321 326 cmd->base.port = PORT_NONE; 322 327 323 328 XGBE_LM_COPY(cmd, supported, lks, supported);
+13 -1
drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
··· 1193 1193 if (pdata->phy.duplex != DUPLEX_FULL) 1194 1194 return -EINVAL; 1195 1195 1196 - xgbe_set_mode(pdata, mode); 1196 + /* Force the mode change for SFI in Fixed PHY config. 1197 + * Fixed PHY configs needs PLL to be enabled while doing mode set. 1198 + * When the SFP module isn't connected during boot, driver assumes 1199 + * AN is ON and attempts autonegotiation. However, if the connected 1200 + * SFP comes up in Fixed PHY config, the link will not come up as 1201 + * PLL isn't enabled while the initial mode set command is issued. 1202 + * So, force the mode change for SFI in Fixed PHY configuration to 1203 + * fix link issues. 1204 + */ 1205 + if (mode == XGBE_MODE_SFI) 1206 + xgbe_change_mode(pdata, mode); 1207 + else 1208 + xgbe_set_mode(pdata, mode); 1197 1209 1198 1210 return 0; 1199 1211 }
+9 -7
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 3844 3844 struct i40e_pf *pf = vf->pf; 3845 3845 struct i40e_vsi *vsi = NULL; 3846 3846 int aq_ret = 0; 3847 - int i, ret; 3847 + int i; 3848 3848 3849 3849 if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 3850 3850 aq_ret = -EINVAL; ··· 3868 3868 } 3869 3869 3870 3870 cfilter = kzalloc(sizeof(*cfilter), GFP_KERNEL); 3871 - if (!cfilter) 3872 - return -ENOMEM; 3871 + if (!cfilter) { 3872 + aq_ret = -ENOMEM; 3873 + goto err_out; 3874 + } 3873 3875 3874 3876 /* parse destination mac address */ 3875 3877 for (i = 0; i < ETH_ALEN; i++) ··· 3919 3917 3920 3918 /* Adding cloud filter programmed as TC filter */ 3921 3919 if (tcf.dst_port) 3922 - ret = i40e_add_del_cloud_filter_big_buf(vsi, cfilter, true); 3920 + aq_ret = i40e_add_del_cloud_filter_big_buf(vsi, cfilter, true); 3923 3921 else 3924 - ret = i40e_add_del_cloud_filter(vsi, cfilter, true); 3925 - if (ret) { 3922 + aq_ret = i40e_add_del_cloud_filter(vsi, cfilter, true); 3923 + if (aq_ret) { 3926 3924 dev_err(&pf->pdev->dev, 3927 3925 "VF %d: Failed to add cloud filter, err %pe aq_err %s\n", 3928 - vf->vf_id, ERR_PTR(ret), 3926 + vf->vf_id, ERR_PTR(aq_ret), 3929 3927 i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); 3930 3928 goto err_free; 3931 3929 }
+3 -9
drivers/net/ethernet/intel/ice/ice_main.c
··· 7401 7401 goto err_vsi_rebuild; 7402 7402 } 7403 7403 7404 - /* configure PTP timestamping after VSI rebuild */ 7405 - if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags)) { 7406 - if (pf->ptp.tx_interrupt_mode == ICE_PTP_TX_INTERRUPT_SELF) 7407 - ice_ptp_cfg_timestamp(pf, false); 7408 - else if (pf->ptp.tx_interrupt_mode == ICE_PTP_TX_INTERRUPT_ALL) 7409 - /* for E82x PHC owner always need to have interrupts */ 7410 - ice_ptp_cfg_timestamp(pf, true); 7411 - } 7412 - 7413 7404 err = ice_vsi_rebuild_by_type(pf, ICE_VSI_SWITCHDEV_CTRL); 7414 7405 if (err) { 7415 7406 dev_err(dev, "Switchdev CTRL VSI rebuild failed: %d\n", err); ··· 7452 7461 ice_plug_aux_dev(pf); 7453 7462 if (ice_is_feature_supported(pf, ICE_F_SRIOV_LAG)) 7454 7463 ice_lag_rebuild(pf); 7464 + 7465 + /* Restore timestamp mode settings after VSI rebuild */ 7466 + ice_ptp_restore_timestamp_mode(pf); 7455 7467 return; 7456 7468 7457 7469 err_vsi_rebuild:
+79 -67
drivers/net/ethernet/intel/ice/ice_ptp.c
··· 256 256 } 257 257 258 258 /** 259 - * ice_ptp_configure_tx_tstamp - Enable or disable Tx timestamp interrupt 260 - * @pf: The PF pointer to search in 261 - * @on: bool value for whether timestamp interrupt is enabled or disabled 259 + * ice_ptp_cfg_tx_interrupt - Configure Tx timestamp interrupt for the device 260 + * @pf: Board private structure 261 + * 262 + * Program the device to respond appropriately to the Tx timestamp interrupt 263 + * cause. 262 264 */ 263 - static void ice_ptp_configure_tx_tstamp(struct ice_pf *pf, bool on) 265 + static void ice_ptp_cfg_tx_interrupt(struct ice_pf *pf) 264 266 { 267 + struct ice_hw *hw = &pf->hw; 268 + bool enable; 265 269 u32 val; 266 270 271 + switch (pf->ptp.tx_interrupt_mode) { 272 + case ICE_PTP_TX_INTERRUPT_ALL: 273 + /* React to interrupts across all quads. */ 274 + wr32(hw, PFINT_TSYN_MSK + (0x4 * hw->pf_id), (u32)0x1f); 275 + enable = true; 276 + break; 277 + case ICE_PTP_TX_INTERRUPT_NONE: 278 + /* Do not react to interrupts on any quad. */ 279 + wr32(hw, PFINT_TSYN_MSK + (0x4 * hw->pf_id), (u32)0x0); 280 + enable = false; 281 + break; 282 + case ICE_PTP_TX_INTERRUPT_SELF: 283 + default: 284 + enable = pf->ptp.tstamp_config.tx_type == HWTSTAMP_TX_ON; 285 + break; 286 + } 287 + 267 288 /* Configure the Tx timestamp interrupt */ 268 - val = rd32(&pf->hw, PFINT_OICR_ENA); 269 - if (on) 289 + val = rd32(hw, PFINT_OICR_ENA); 290 + if (enable) 270 291 val |= PFINT_OICR_TSYN_TX_M; 271 292 else 272 293 val &= ~PFINT_OICR_TSYN_TX_M; 273 - wr32(&pf->hw, PFINT_OICR_ENA, val); 274 - } 275 - 276 - /** 277 - * ice_set_tx_tstamp - Enable or disable Tx timestamping 278 - * @pf: The PF pointer to search in 279 - * @on: bool value for whether timestamps are enabled or disabled 280 - */ 281 - static void ice_set_tx_tstamp(struct ice_pf *pf, bool on) 282 - { 283 - struct ice_vsi *vsi; 284 - u16 i; 285 - 286 - vsi = ice_get_main_vsi(pf); 287 - if (!vsi) 288 - return; 289 - 290 - /* Set the timestamp enable flag for all the Tx rings */ 291 - ice_for_each_txq(vsi, i) { 292 - if (!vsi->tx_rings[i]) 293 - continue; 294 - vsi->tx_rings[i]->ptp_tx = on; 295 - } 296 - 297 - if (pf->ptp.tx_interrupt_mode == ICE_PTP_TX_INTERRUPT_SELF) 298 - ice_ptp_configure_tx_tstamp(pf, on); 299 - 300 - pf->ptp.tstamp_config.tx_type = on ? HWTSTAMP_TX_ON : HWTSTAMP_TX_OFF; 294 + wr32(hw, PFINT_OICR_ENA, val); 301 295 } 302 296 303 297 /** ··· 305 311 u16 i; 306 312 307 313 vsi = ice_get_main_vsi(pf); 308 - if (!vsi) 314 + if (!vsi || !vsi->rx_rings) 309 315 return; 310 316 311 317 /* Set the timestamp flag for all the Rx rings */ ··· 314 320 continue; 315 321 vsi->rx_rings[i]->ptp_rx = on; 316 322 } 317 - 318 - pf->ptp.tstamp_config.rx_filter = on ? HWTSTAMP_FILTER_ALL : 319 - HWTSTAMP_FILTER_NONE; 320 323 } 321 324 322 325 /** 323 - * ice_ptp_cfg_timestamp - Configure timestamp for init/deinit 326 + * ice_ptp_disable_timestamp_mode - Disable current timestamp mode 324 327 * @pf: Board private structure 325 - * @ena: bool value to enable or disable time stamp 326 328 * 327 - * This function will configure timestamping during PTP initialization 328 - * and deinitialization 329 + * Called during preparation for reset to temporarily disable timestamping on 330 + * the device. Called during remove to disable timestamping while cleaning up 331 + * driver resources. 329 332 */ 330 - void ice_ptp_cfg_timestamp(struct ice_pf *pf, bool ena) 333 + static void ice_ptp_disable_timestamp_mode(struct ice_pf *pf) 331 334 { 332 - ice_set_tx_tstamp(pf, ena); 333 - ice_set_rx_tstamp(pf, ena); 335 + struct ice_hw *hw = &pf->hw; 336 + u32 val; 337 + 338 + val = rd32(hw, PFINT_OICR_ENA); 339 + val &= ~PFINT_OICR_TSYN_TX_M; 340 + wr32(hw, PFINT_OICR_ENA, val); 341 + 342 + ice_set_rx_tstamp(pf, false); 343 + } 344 + 345 + /** 346 + * ice_ptp_restore_timestamp_mode - Restore timestamp configuration 347 + * @pf: Board private structure 348 + * 349 + * Called at the end of rebuild to restore timestamp configuration after 350 + * a device reset. 351 + */ 352 + void ice_ptp_restore_timestamp_mode(struct ice_pf *pf) 353 + { 354 + struct ice_hw *hw = &pf->hw; 355 + bool enable_rx; 356 + 357 + ice_ptp_cfg_tx_interrupt(pf); 358 + 359 + enable_rx = pf->ptp.tstamp_config.rx_filter == HWTSTAMP_FILTER_ALL; 360 + ice_set_rx_tstamp(pf, enable_rx); 361 + 362 + /* Trigger an immediate software interrupt to ensure that timestamps 363 + * which occurred during reset are handled now. 364 + */ 365 + wr32(hw, PFINT_OICR, PFINT_OICR_TSYN_TX_M); 366 + ice_flush(hw); 334 367 } 335 368 336 369 /** ··· 2058 2037 { 2059 2038 switch (config->tx_type) { 2060 2039 case HWTSTAMP_TX_OFF: 2061 - ice_set_tx_tstamp(pf, false); 2040 + pf->ptp.tstamp_config.tx_type = HWTSTAMP_TX_OFF; 2062 2041 break; 2063 2042 case HWTSTAMP_TX_ON: 2064 - ice_set_tx_tstamp(pf, true); 2043 + pf->ptp.tstamp_config.tx_type = HWTSTAMP_TX_ON; 2065 2044 break; 2066 2045 default: 2067 2046 return -ERANGE; ··· 2069 2048 2070 2049 switch (config->rx_filter) { 2071 2050 case HWTSTAMP_FILTER_NONE: 2072 - ice_set_rx_tstamp(pf, false); 2051 + pf->ptp.tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE; 2073 2052 break; 2074 2053 case HWTSTAMP_FILTER_PTP_V1_L4_EVENT: 2075 2054 case HWTSTAMP_FILTER_PTP_V1_L4_SYNC: ··· 2085 2064 case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ: 2086 2065 case HWTSTAMP_FILTER_NTP_ALL: 2087 2066 case HWTSTAMP_FILTER_ALL: 2088 - ice_set_rx_tstamp(pf, true); 2067 + pf->ptp.tstamp_config.rx_filter = HWTSTAMP_FILTER_ALL; 2089 2068 break; 2090 2069 default: 2091 2070 return -ERANGE; 2092 2071 } 2072 + 2073 + /* Immediately update the device timestamping mode */ 2074 + ice_ptp_restore_timestamp_mode(pf); 2093 2075 2094 2076 return 0; 2095 2077 } ··· 2761 2737 clear_bit(ICE_FLAG_PTP, pf->flags); 2762 2738 2763 2739 /* Disable timestamping for both Tx and Rx */ 2764 - ice_ptp_cfg_timestamp(pf, false); 2740 + ice_ptp_disable_timestamp_mode(pf); 2765 2741 2766 2742 kthread_cancel_delayed_work_sync(&ptp->work); 2767 2743 ··· 2827 2803 /* Release the global hardware lock */ 2828 2804 ice_ptp_unlock(hw); 2829 2805 2830 - if (pf->ptp.tx_interrupt_mode == ICE_PTP_TX_INTERRUPT_ALL) { 2831 - /* The clock owner for this device type handles the timestamp 2832 - * interrupt for all ports. 2833 - */ 2834 - ice_ptp_configure_tx_tstamp(pf, true); 2835 - 2836 - /* React on all quads interrupts for E82x */ 2837 - wr32(hw, PFINT_TSYN_MSK + (0x4 * hw->pf_id), (u32)0x1f); 2838 - 2806 + if (!ice_is_e810(hw)) { 2839 2807 /* Enable quad interrupts */ 2840 2808 err = ice_ptp_tx_ena_intr(pf, true, itr); 2841 2809 if (err) ··· 2897 2881 case ICE_PHY_E810: 2898 2882 return ice_ptp_init_tx_e810(pf, &ptp_port->tx); 2899 2883 case ICE_PHY_E822: 2900 - /* Non-owner PFs don't react to any interrupts on E82x, 2901 - * neither on own quad nor on others 2902 - */ 2903 - if (!ice_ptp_pf_handles_tx_interrupt(pf)) { 2904 - ice_ptp_configure_tx_tstamp(pf, false); 2905 - wr32(hw, PFINT_TSYN_MSK + (0x4 * hw->pf_id), (u32)0x0); 2906 - } 2907 2884 kthread_init_delayed_work(&ptp_port->ov_work, 2908 2885 ice_ptp_wait_for_offsets); ··· 3041 3032 /* Start the PHY timestamping block */ 3042 3033 ice_ptp_reset_phy_timestamping(pf); 3043 3034 3035 + /* Configure initial Tx interrupt settings */ 3036 + ice_ptp_cfg_tx_interrupt(pf); 3037 + 3044 3038 set_bit(ICE_FLAG_PTP, pf->flags); 3045 3039 err = ice_ptp_init_work(pf, ptp); 3046 3040 if (err) ··· 3079 3067 return; 3080 3068 3081 3069 /* Disable timestamping for both Tx and Rx */ 3082 - ice_ptp_cfg_timestamp(pf, false); 3070 + ice_ptp_disable_timestamp_mode(pf); 3083 3071 3084 3072 ice_ptp_remove_auxbus_device(pf);
+2 -3
drivers/net/ethernet/intel/ice/ice_ptp.h
··· 292 292 struct ice_pf; 293 293 int ice_ptp_set_ts_config(struct ice_pf *pf, struct ifreq *ifr); 294 294 int ice_ptp_get_ts_config(struct ice_pf *pf, struct ifreq *ifr); 295 - void ice_ptp_cfg_timestamp(struct ice_pf *pf, bool ena); 295 + void ice_ptp_restore_timestamp_mode(struct ice_pf *pf); 296 296 297 297 void ice_ptp_extts_event(struct ice_pf *pf); 298 298 s8 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb); ··· 317 317 return -EOPNOTSUPP; 318 318 } 319 319 320 - static inline void ice_ptp_cfg_timestamp(struct ice_pf *pf, bool ena) { } 321 - 320 + static inline void ice_ptp_restore_timestamp_mode(struct ice_pf *pf) { } 322 321 static inline void ice_ptp_extts_event(struct ice_pf *pf) { } 323 322 static inline s8 324 323 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb)
-3
drivers/net/ethernet/intel/ice/ice_txrx.c
··· 2306 2306 if (likely(!(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))) 2307 2307 return; 2308 2308 2309 - if (!tx_ring->ptp_tx) 2310 - return; 2311 - 2312 2309 /* Tx timestamps cannot be sampled when doing TSO */ 2313 2310 if (first->tx_flags & ICE_TX_FLAGS_TSO) 2314 2311 return;
-1
drivers/net/ethernet/intel/ice/ice_txrx.h
··· 380 380 #define ICE_TX_FLAGS_RING_VLAN_L2TAG2 BIT(2) 381 381 u8 flags; 382 382 u8 dcb_tc; /* Traffic class of ring */ 383 - u8 ptp_tx; 384 383 } ____cacheline_internodealigned_in_smp; 385 384 386 385 static inline bool ice_ring_uses_build_skb(struct ice_rx_ring *ring)
+19 -1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
··· 1088 1088 struct ethhdr *eth_hdr; 1089 1089 bool new = false; 1090 1090 int err = 0; 1091 + u64 vf_num; 1091 1092 u32 ring; 1092 1093 1093 1094 if (!flow_cfg->max_flows) { ··· 1101 1100 if (!(pfvf->flags & OTX2_FLAG_NTUPLE_SUPPORT)) 1102 1101 return -ENOMEM; 1103 1102 1104 - if (ring >= pfvf->hw.rx_queues && fsp->ring_cookie != RX_CLS_FLOW_DISC) 1103 + /* Number of queues on a VF can be greater or less than 1104 + * the PF's queue. Hence no need to check for the 1105 + * queue count. Hence no need to check queue count if PF 1106 + * is installing for its VF. Below is the expected vf_num value 1107 + * based on the ethtool commands. 1108 + * 1109 + * e.g. 1110 + * 1. ethtool -U <netdev> ... action -1 ==> vf_num:255 1111 + * 2. ethtool -U <netdev> ... action <queue_num> ==> vf_num:0 1112 + * 3. ethtool -U <netdev> ... vf <vf_idx> queue <queue_num> ==> 1113 + * vf_num:vf_idx+1 1114 + */ 1115 + vf_num = ethtool_get_flow_spec_ring_vf(fsp->ring_cookie); 1116 + if (!is_otx2_vf(pfvf->pcifunc) && !vf_num && 1117 + ring >= pfvf->hw.rx_queues && fsp->ring_cookie != RX_CLS_FLOW_DISC) 1105 1118 return -EINVAL; 1106 1119 1107 1120 if (fsp->location >= otx2_get_maxflows(flow_cfg)) ··· 1197 1182 flow_cfg->nr_flows++; 1198 1183 } 1199 1184 1185 + if (flow->is_vf) 1186 + netdev_info(pfvf->netdev, 1187 + "Make sure that VF's queue number is within its queue limit\n"); 1200 1188 return 0; 1201 1189 } 1202 1190
+2
drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
··· 1934 1934 /* Clear RSS enable flag */ 1935 1935 rss = &pf->hw.rss_info; 1936 1936 rss->enable = false; 1937 + if (!netif_is_rxfh_configured(netdev)) 1938 + kfree(rss->rss_ctx[DEFAULT_RSS_CONTEXT_GROUP]); 1937 1939 1938 1940 /* Cleanup Queue IRQ */ 1939 1941 vec = pci_irq_vector(pf->pdev,
+1 -3
drivers/net/ethernet/realtek/r8169_main.c
··· 2599 2599 rx_mode &= ~AcceptMulticast; 2600 2600 } else if (netdev_mc_count(dev) > MC_FILTER_LIMIT || 2601 2601 dev->flags & IFF_ALLMULTI || 2602 - tp->mac_version == RTL_GIGA_MAC_VER_35 || 2603 - tp->mac_version == RTL_GIGA_MAC_VER_46 || 2604 - tp->mac_version == RTL_GIGA_MAC_VER_48) { 2602 + tp->mac_version == RTL_GIGA_MAC_VER_35) { 2605 2603 /* accept all multicasts */ 2606 2604 } else if (netdev_mc_empty(dev)) { 2607 2605 rx_mode &= ~AcceptMulticast;
+1 -1
drivers/net/ethernet/stmicro/stmmac/Kconfig
··· 280 280 config DWMAC_LOONGSON 281 281 tristate "Loongson PCI DWMAC support" 282 282 default MACH_LOONGSON64 283 - depends on STMMAC_ETH && PCI 283 + depends on (MACH_LOONGSON64 || COMPILE_TEST) && STMMAC_ETH && PCI 284 284 depends on COMMON_CLK 285 285 help 286 286 This selects the LOONGSON PCI bus support for the stmmac driver,
+5 -3
drivers/net/ethernet/wangxun/libwx/wx_hw.c
··· 1769 1769 wx->subsystem_device_id = pdev->subsystem_device; 1770 1770 } else { 1771 1771 err = wx_flash_read_dword(wx, 0xfffdc, &ssid); 1772 - if (!err) 1773 - wx->subsystem_device_id = swab16((u16)ssid); 1772 + if (err < 0) { 1773 + wx_err(wx, "read of internal subsystem device id failed\n"); 1774 + return err; 1775 + } 1774 1776 1775 - return err; 1777 + wx->subsystem_device_id = swab16((u16)ssid); 1776 1778 } 1777 1779 1778 1780 wx->mac_table = kcalloc(wx->mac.num_rar_entries,
+1 -3
drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
··· 121 121 122 122 /* PCI config space info */ 123 123 err = wx_sw_init(wx); 124 - if (err < 0) { 125 - wx_err(wx, "read of internal subsystem device id failed\n"); 124 + if (err < 0) 126 125 return err; 127 - } 128 126 129 127 /* mac type, phy type , oem type */ 130 128 ngbe_init_type_code(wx);
+1 -3
drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
··· 364 364 365 365 /* PCI config space info */ 366 366 err = wx_sw_init(wx); 367 - if (err < 0) { 368 - wx_err(wx, "read of internal subsystem device id failed\n"); 367 + if (err < 0) 369 368 return err; 370 - } 371 369 372 370 txgbe_init_type_code(wx); 373 371
+1 -1
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 822 822 if (lp->features & XAE_FEATURE_FULL_TX_CSUM) { 823 823 /* Tx Full Checksum Offload Enabled */ 824 824 cur_p->app0 |= 2; 825 - } else if (lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) { 825 + } else if (lp->features & XAE_FEATURE_PARTIAL_TX_CSUM) { 826 826 csum_start_off = skb_transport_offset(skb); 827 827 csum_index_off = csum_start_off + skb->csum_offset; 828 828 /* Tx Partial Checksum Offload Enabled */
+46 -22
drivers/net/hyperv/netvsc_drv.c
··· 2206 2206 goto upper_link_failed; 2207 2207 } 2208 2208 2209 - /* set slave flag before open to prevent IPv6 addrconf */ 2210 - vf_netdev->flags |= IFF_SLAVE; 2211 - 2212 2209 schedule_delayed_work(&ndev_ctx->vf_takeover, VF_TAKEOVER_INT); 2213 2210 2214 2211 call_netdevice_notifiers(NETDEV_JOIN, vf_netdev); ··· 2312 2315 2313 2316 } 2314 2317 2315 - /* Fallback path to check synthetic vf with 2316 - * help of mac addr 2318 + /* Fallback path to check synthetic vf with help of mac addr. 2319 + * Because this function can be called before vf_netdev is 2320 + * initialized (NETDEV_POST_INIT) when its perm_addr has not been copied 2321 + * from dev_addr, also try to match to its dev_addr. 2322 + * Note: On Hyper-V and Azure, it's not possible to set a MAC address 2323 + * on a VF that matches to the MAC of a unrelated NETVSC device. 2317 2324 */ 2318 2325 list_for_each_entry(ndev_ctx, &netvsc_dev_list, list) { 2319 2326 ndev = hv_get_drvdata(ndev_ctx->device_ctx); 2320 - if (ether_addr_equal(vf_netdev->perm_addr, ndev->perm_addr)) { 2321 - netdev_notice(vf_netdev, 2322 - "falling back to mac addr based matching\n"); 2327 + if (ether_addr_equal(vf_netdev->perm_addr, ndev->perm_addr) || 2328 + ether_addr_equal(vf_netdev->dev_addr, ndev->perm_addr)) 2323 2329 return ndev; 2324 - } 2325 2330 } 2326 2331 2327 2332 netdev_notice(vf_netdev, 2328 2333 "no netdev found for vf serial:%u\n", serial); 2329 2334 return NULL; 2335 + } 2336 + 2337 + static int netvsc_prepare_bonding(struct net_device *vf_netdev) 2338 + { 2339 + struct net_device *ndev; 2340 + 2341 + ndev = get_netvsc_byslot(vf_netdev); 2342 + if (!ndev) 2343 + return NOTIFY_DONE; 2344 + 2345 + /* set slave flag before open to prevent IPv6 addrconf */ 2346 + vf_netdev->flags |= IFF_SLAVE; 2347 + return NOTIFY_DONE; 2330 2348 } 2331 2349 2332 2350 static int netvsc_register_vf(struct net_device *vf_netdev) ··· 2543 2531 goto devinfo_failed; 2544 2532 } 2545 2533 2534 + /* We must get rtnl lock before scheduling 
nvdev->subchan_work, 2535 + * otherwise netvsc_subchan_work() can get rtnl lock first and wait 2536 + * all subchannels to show up, but that may not happen because 2537 + * netvsc_probe() can't get rtnl lock and as a result vmbus_onoffer() 2538 + * -> ... -> device_add() -> ... -> __device_attach() can't get 2539 + * the device lock, so all the subchannels can't be processed -- 2540 + * finally netvsc_subchan_work() hangs forever. 2541 + * 2542 + * The rtnl lock also needs to be held before rndis_filter_device_add() 2543 + * which advertises nvsp_2_vsc_capability / sriov bit, and triggers 2544 + * VF NIC offering and registering. If VF NIC finished register_netdev() 2545 + * earlier it may cause name based config failure. 2546 + */ 2547 + rtnl_lock(); 2548 + 2546 2549 nvdev = rndis_filter_device_add(dev, device_info); 2547 2550 if (IS_ERR(nvdev)) { 2548 2551 ret = PTR_ERR(nvdev); ··· 2566 2539 } 2567 2540 2568 2541 eth_hw_addr_set(net, device_info->mac_adr); 2569 - 2570 - /* We must get rtnl lock before scheduling nvdev->subchan_work, 2571 - * otherwise netvsc_subchan_work() can get rtnl lock first and wait 2572 - * all subchannels to show up, but that may not happen because 2573 - * netvsc_probe() can't get rtnl lock and as a result vmbus_onoffer() 2574 - * -> ... -> device_add() -> ... -> __device_attach() can't get 2575 - * the device lock, so all the subchannels can't be processed -- 2576 - * finally netvsc_subchan_work() hangs forever. 
2577 - */ 2578 - rtnl_lock(); 2579 2542 2580 2543 if (nvdev->num_chn > 1) 2581 2544 schedule_work(&nvdev->subchan_work); ··· 2603 2586 return 0; 2604 2587 2605 2588 register_failed: 2606 - rtnl_unlock(); 2607 2589 rndis_filter_device_remove(dev, nvdev); 2608 2590 rndis_failed: 2591 + rtnl_unlock(); 2609 2592 netvsc_devinfo_put(device_info); 2610 2593 devinfo_failed: 2611 2594 free_percpu(net_device_ctx->vf_stats); ··· 2770 2753 return NOTIFY_DONE; 2771 2754 2772 2755 switch (event) { 2756 + case NETDEV_POST_INIT: 2757 + return netvsc_prepare_bonding(event_dev); 2773 2758 case NETDEV_REGISTER: 2774 2759 return netvsc_register_vf(event_dev); 2775 2760 case NETDEV_UNREGISTER: ··· 2807 2788 } 2808 2789 netvsc_ring_bytes = ring_size * PAGE_SIZE; 2809 2790 2791 + register_netdevice_notifier(&netvsc_netdev_notifier); 2792 + 2810 2793 ret = vmbus_driver_register(&netvsc_drv); 2811 2794 if (ret) 2812 - return ret; 2795 + goto err_vmbus_reg; 2813 2796 2814 - register_netdevice_notifier(&netvsc_netdev_notifier); 2815 2797 return 0; 2798 + 2799 + err_vmbus_reg: 2800 + unregister_netdevice_notifier(&netvsc_netdev_notifier); 2801 + return ret; 2816 2802 } 2817 2803 2818 2804 MODULE_LICENSE("GPL");
+1 -1
drivers/net/ipa/reg/gsi_reg-v5.0.c
··· 78 78 0x0001c000 + 0x12000 * GSI_EE_AP, 0x80); 79 79 80 80 static const u32 reg_ev_ch_e_cntxt_1_fmask[] = { 81 - [R_LENGTH] = GENMASK(19, 0), 81 + [R_LENGTH] = GENMASK(23, 0), 82 82 }; 83 83 84 84 REG_STRIDE_FIELDS(EV_CH_E_CNTXT_1, ev_ch_e_cntxt_1,
+20 -2
drivers/net/netkit.c
··· 7 7 #include <linux/filter.h> 8 8 #include <linux/netfilter_netdev.h> 9 9 #include <linux/bpf_mprog.h> 10 + #include <linux/indirect_call_wrapper.h> 10 11 11 12 #include <net/netkit.h> 12 13 #include <net/dst.h> ··· 69 68 netdev_tx_t ret_dev = NET_XMIT_SUCCESS; 70 69 const struct bpf_mprog_entry *entry; 71 70 struct net_device *peer; 71 + int len = skb->len; 72 72 73 73 rcu_read_lock(); 74 74 peer = rcu_dereference(nk->peer); ··· 87 85 case NETKIT_PASS: 88 86 skb->protocol = eth_type_trans(skb, skb->dev); 89 87 skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN); 90 - __netif_rx(skb); 88 + if (likely(__netif_rx(skb) == NET_RX_SUCCESS)) { 89 + dev_sw_netstats_tx_add(dev, 1, len); 90 + dev_sw_netstats_rx_add(peer, len); 91 + } else { 92 + goto drop_stats; 93 + } 91 94 break; 92 95 case NETKIT_REDIRECT: 96 + dev_sw_netstats_tx_add(dev, 1, len); 93 97 skb_do_redirect(skb); 94 98 break; 95 99 case NETKIT_DROP: 96 100 default: 97 101 drop: 98 102 kfree_skb(skb); 103 + drop_stats: 99 104 dev_core_stats_tx_dropped_inc(dev); 100 105 ret_dev = NET_XMIT_DROP; 101 106 break; ··· 178 169 rcu_read_unlock(); 179 170 } 180 171 181 - static struct net_device *netkit_peer_dev(struct net_device *dev) 172 + INDIRECT_CALLABLE_SCOPE struct net_device *netkit_peer_dev(struct net_device *dev) 182 173 { 183 174 return rcu_dereference(netkit_priv(dev)->peer); 175 + } 176 + 177 + static void netkit_get_stats(struct net_device *dev, 178 + struct rtnl_link_stats64 *stats) 179 + { 180 + dev_fetch_sw_netstats(stats, dev->tstats); 181 + stats->tx_dropped = DEV_STATS_READ(dev, tx_dropped); 184 182 } 185 183 186 184 static void netkit_uninit(struct net_device *dev); ··· 200 184 .ndo_set_rx_headroom = netkit_set_headroom, 201 185 .ndo_get_iflink = netkit_get_iflink, 202 186 .ndo_get_peer_dev = netkit_peer_dev, 187 + .ndo_get_stats64 = netkit_get_stats, 203 188 .ndo_uninit = netkit_uninit, 204 189 .ndo_features_check = passthru_features_check, 205 190 }; ··· 235 218 236 219 ether_setup(dev); 237 220 
dev->max_mtu = ETH_MAX_MTU; 221 + dev->pcpu_stat_type = NETDEV_PCPU_STAT_TSTATS; 238 222 239 223 dev->flags |= IFF_NOARP; 240 224 dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+4 -4
drivers/net/usb/aqc111.c
··· 1079 1079 u16 pkt_count = 0; 1080 1080 u64 desc_hdr = 0; 1081 1081 u16 vlan_tag = 0; 1082 - u32 skb_len = 0; 1082 + u32 skb_len; 1083 1083 1084 1084 if (!skb) 1085 1085 goto err; 1086 1086 1087 - if (skb->len == 0) 1087 + skb_len = skb->len; 1088 + if (skb_len < sizeof(desc_hdr)) 1088 1089 goto err; 1089 1090 1090 - skb_len = skb->len; 1091 1091 /* RX Descriptor Header */ 1092 - skb_trim(skb, skb->len - sizeof(desc_hdr)); 1092 + skb_trim(skb, skb_len - sizeof(desc_hdr)); 1093 1093 desc_hdr = le64_to_cpup((u64 *)skb_tail_pointer(skb)); 1094 1094 1095 1095 /* Check these packets */
+2 -2
drivers/net/usb/ax88179_178a.c
··· 1583 1583 1584 1584 *tmp16 = AX_PHYPWR_RSTCTL_IPRL; 1585 1585 ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_PHYPWR_RSTCTL, 2, 2, tmp16); 1586 - msleep(200); 1586 + msleep(500); 1587 1587 1588 1588 *tmp = AX_CLK_SELECT_ACS | AX_CLK_SELECT_BCS; 1589 1589 ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_CLK_SELECT, 1, 1, tmp); 1590 - msleep(100); 1590 + msleep(200); 1591 1591 1592 1592 /* Ethernet PHY Auto Detach*/ 1593 1593 ax88179_auto_detach(dev);
+1
drivers/net/usb/qmi_wwan.c
··· 1289 1289 {QMI_FIXED_INTF(0x19d2, 0x0168, 4)}, 1290 1290 {QMI_FIXED_INTF(0x19d2, 0x0176, 3)}, 1291 1291 {QMI_FIXED_INTF(0x19d2, 0x0178, 3)}, 1292 + {QMI_FIXED_INTF(0x19d2, 0x0189, 4)}, /* ZTE MF290 */ 1292 1293 {QMI_FIXED_INTF(0x19d2, 0x0191, 4)}, /* ZTE EuFi890 */ 1293 1294 {QMI_FIXED_INTF(0x19d2, 0x0199, 1)}, /* ZTE MF820S */ 1294 1295 {QMI_FIXED_INTF(0x19d2, 0x0200, 1)},
+13 -33
drivers/net/veth.c
··· 236 236 data[tx_idx + j] += *(u64 *)(base + offset); 237 237 } 238 238 } while (u64_stats_fetch_retry(&rq_stats->syncp, start)); 239 - pp_idx = tx_idx + VETH_TQ_STATS_LEN; 240 239 } 240 + pp_idx = idx + dev->real_num_tx_queues * VETH_TQ_STATS_LEN; 241 241 242 242 page_pool_stats: 243 243 veth_get_page_pool_stats(dev, &data[pp_idx]); ··· 373 373 skb_tx_timestamp(skb); 374 374 if (likely(veth_forward_skb(rcv, skb, rq, use_napi) == NET_RX_SUCCESS)) { 375 375 if (!use_napi) 376 - dev_lstats_add(dev, length); 376 + dev_sw_netstats_tx_add(dev, 1, length); 377 377 else 378 378 __veth_xdp_flush(rq); 379 379 } else { ··· 385 385 rcu_read_unlock(); 386 386 387 387 return ret; 388 - } 389 - 390 - static u64 veth_stats_tx(struct net_device *dev, u64 *packets, u64 *bytes) 391 - { 392 - struct veth_priv *priv = netdev_priv(dev); 393 - 394 - dev_lstats_read(dev, packets, bytes); 395 - return atomic64_read(&priv->dropped); 396 388 } 397 389 398 390 static void veth_stats_rx(struct veth_stats *result, struct net_device *dev) ··· 424 432 struct veth_priv *priv = netdev_priv(dev); 425 433 struct net_device *peer; 426 434 struct veth_stats rx; 427 - u64 packets, bytes; 428 435 429 - tot->tx_dropped = veth_stats_tx(dev, &packets, &bytes); 430 - tot->tx_bytes = bytes; 431 - tot->tx_packets = packets; 436 + tot->tx_dropped = atomic64_read(&priv->dropped); 437 + dev_fetch_sw_netstats(tot, dev->tstats); 432 438 433 439 veth_stats_rx(&rx, dev); 434 440 tot->tx_dropped += rx.xdp_tx_err; 435 441 tot->rx_dropped = rx.rx_drops + rx.peer_tq_xdp_xmit_err; 436 - tot->rx_bytes = rx.xdp_bytes; 437 - tot->rx_packets = rx.xdp_packets; 442 + tot->rx_bytes += rx.xdp_bytes; 443 + tot->rx_packets += rx.xdp_packets; 438 444 439 445 rcu_read_lock(); 440 446 peer = rcu_dereference(priv->peer); 441 447 if (peer) { 442 - veth_stats_tx(peer, &packets, &bytes); 443 - tot->rx_bytes += bytes; 444 - tot->rx_packets += packets; 448 + struct rtnl_link_stats64 tot_peer = {}; 449 + 450 + 
dev_fetch_sw_netstats(&tot_peer, peer->tstats); 451 + tot->rx_bytes += tot_peer.tx_bytes; 452 + tot->rx_packets += tot_peer.tx_packets; 445 453 446 454 veth_stats_rx(&rx, peer); 447 455 tot->tx_dropped += rx.peer_tq_xdp_xmit_err; ··· 1498 1506 1499 1507 static int veth_dev_init(struct net_device *dev) 1500 1508 { 1501 - int err; 1502 - 1503 - dev->lstats = netdev_alloc_pcpu_stats(struct pcpu_lstats); 1504 - if (!dev->lstats) 1505 - return -ENOMEM; 1506 - 1507 - err = veth_alloc_queues(dev); 1508 - if (err) { 1509 - free_percpu(dev->lstats); 1510 - return err; 1511 - } 1512 - 1513 - return 0; 1509 + return veth_alloc_queues(dev); 1514 1510 } 1515 1511 1516 1512 static void veth_dev_free(struct net_device *dev) 1517 1513 { 1518 1514 veth_free_queues(dev); 1519 - free_percpu(dev->lstats); 1520 1515 } 1521 1516 1522 1517 #ifdef CONFIG_NET_POLL_CONTROLLER ··· 1775 1796 NETIF_F_HW_VLAN_STAG_RX); 1776 1797 dev->needs_free_netdev = true; 1777 1798 dev->priv_destructor = veth_dev_free; 1799 + dev->pcpu_stat_type = NETDEV_PCPU_STAT_TSTATS; 1778 1800 dev->max_mtu = ETH_MAX_MTU; 1779 1801 1780 1802 dev->hw_features = VETH_FEATURES;
+10 -28
drivers/net/vrf.c
··· 121 121 int ifindex; 122 122 }; 123 123 124 - struct pcpu_dstats { 125 - u64 tx_pkts; 126 - u64 tx_bytes; 127 - u64 tx_drps; 128 - u64 rx_pkts; 129 - u64 rx_bytes; 130 - u64 rx_drps; 131 - struct u64_stats_sync syncp; 132 - }; 133 - 134 124 static void vrf_rx_stats(struct net_device *dev, int len) 135 125 { 136 126 struct pcpu_dstats *dstats = this_cpu_ptr(dev->dstats); 137 127 138 128 u64_stats_update_begin(&dstats->syncp); 139 - dstats->rx_pkts++; 129 + dstats->rx_packets++; 140 130 dstats->rx_bytes += len; 141 131 u64_stats_update_end(&dstats->syncp); 142 132 } ··· 151 161 do { 152 162 start = u64_stats_fetch_begin(&dstats->syncp); 153 163 tbytes = dstats->tx_bytes; 154 - tpkts = dstats->tx_pkts; 155 - tdrops = dstats->tx_drps; 164 + tpkts = dstats->tx_packets; 165 + tdrops = dstats->tx_drops; 156 166 rbytes = dstats->rx_bytes; 157 - rpkts = dstats->rx_pkts; 167 + rpkts = dstats->rx_packets; 158 168 } while (u64_stats_fetch_retry(&dstats->syncp, start)); 159 169 stats->tx_bytes += tbytes; 160 170 stats->tx_packets += tpkts; ··· 411 421 if (likely(__netif_rx(skb) == NET_RX_SUCCESS)) 412 422 vrf_rx_stats(dev, len); 413 423 else 414 - this_cpu_inc(dev->dstats->rx_drps); 424 + this_cpu_inc(dev->dstats->rx_drops); 415 425 416 426 return NETDEV_TX_OK; 417 427 } ··· 606 616 struct pcpu_dstats *dstats = this_cpu_ptr(dev->dstats); 607 617 608 618 u64_stats_update_begin(&dstats->syncp); 609 - dstats->tx_pkts++; 619 + dstats->tx_packets++; 610 620 dstats->tx_bytes += len; 611 621 u64_stats_update_end(&dstats->syncp); 612 622 } else { 613 - this_cpu_inc(dev->dstats->tx_drps); 623 + this_cpu_inc(dev->dstats->tx_drops); 614 624 } 615 625 616 626 return ret; ··· 1164 1174 1165 1175 vrf_rtable_release(dev, vrf); 1166 1176 vrf_rt6_release(dev, vrf); 1167 - 1168 - free_percpu(dev->dstats); 1169 - dev->dstats = NULL; 1170 1177 } 1171 1178 1172 1179 static int vrf_dev_init(struct net_device *dev) 1173 1180 { 1174 1181 struct net_vrf *vrf = netdev_priv(dev); 1175 1182 1176 - 
dev->dstats = netdev_alloc_pcpu_stats(struct pcpu_dstats); 1177 - if (!dev->dstats) 1178 - goto out_nomem; 1179 - 1180 1183 /* create the default dst which points back to us */ 1181 1184 if (vrf_rtable_create(dev) != 0) 1182 - goto out_stats; 1185 + goto out_nomem; 1183 1186 1184 1187 if (vrf_rt6_create(dev) != 0) 1185 1188 goto out_rth; ··· 1186 1203 1187 1204 out_rth: 1188 1205 vrf_rtable_release(dev, vrf); 1189 - out_stats: 1190 - free_percpu(dev->dstats); 1191 - dev->dstats = NULL; 1192 1206 out_nomem: 1193 1207 return -ENOMEM; 1194 1208 } ··· 1684 1704 dev->min_mtu = IPV6_MIN_MTU; 1685 1705 dev->max_mtu = IP6_MAX_MTU; 1686 1706 dev->mtu = dev->max_mtu; 1707 + 1708 + dev->pcpu_stat_type = NETDEV_PCPU_STAT_DSTATS; 1687 1709 } 1688 1710 1689 1711 static int vrf_validate(struct nlattr *tb[], struct nlattr *data[],
+2 -2
drivers/net/wireguard/device.c
··· 210 210 */ 211 211 while (skb_queue_len(&peer->staged_packet_queue) > MAX_STAGED_PACKETS) { 212 212 dev_kfree_skb(__skb_dequeue(&peer->staged_packet_queue)); 213 - ++dev->stats.tx_dropped; 213 + DEV_STATS_INC(dev, tx_dropped); 214 214 } 215 215 skb_queue_splice_tail(&packets, &peer->staged_packet_queue); 216 216 spin_unlock_bh(&peer->staged_packet_queue.lock); ··· 228 228 else if (skb->protocol == htons(ETH_P_IPV6)) 229 229 icmpv6_ndo_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0); 230 230 err: 231 - ++dev->stats.tx_errors; 231 + DEV_STATS_INC(dev, tx_errors); 232 232 kfree_skb(skb); 233 233 return ret; 234 234 }
+6 -6
drivers/net/wireguard/receive.c
··· 416 416 net_dbg_skb_ratelimited("%s: Packet has unallowed src IP (%pISc) from peer %llu (%pISpfsc)\n", 417 417 dev->name, skb, peer->internal_id, 418 418 &peer->endpoint.addr); 419 - ++dev->stats.rx_errors; 420 - ++dev->stats.rx_frame_errors; 419 + DEV_STATS_INC(dev, rx_errors); 420 + DEV_STATS_INC(dev, rx_frame_errors); 421 421 goto packet_processed; 422 422 dishonest_packet_type: 423 423 net_dbg_ratelimited("%s: Packet is neither ipv4 nor ipv6 from peer %llu (%pISpfsc)\n", 424 424 dev->name, peer->internal_id, &peer->endpoint.addr); 425 - ++dev->stats.rx_errors; 426 - ++dev->stats.rx_frame_errors; 425 + DEV_STATS_INC(dev, rx_errors); 426 + DEV_STATS_INC(dev, rx_frame_errors); 427 427 goto packet_processed; 428 428 dishonest_packet_size: 429 429 net_dbg_ratelimited("%s: Packet has incorrect size from peer %llu (%pISpfsc)\n", 430 430 dev->name, peer->internal_id, &peer->endpoint.addr); 431 - ++dev->stats.rx_errors; 432 - ++dev->stats.rx_length_errors; 431 + DEV_STATS_INC(dev, rx_errors); 432 + DEV_STATS_INC(dev, rx_length_errors); 433 433 goto packet_processed; 434 434 packet_processed: 435 435 dev_kfree_skb(skb);
+2 -1
drivers/net/wireguard/send.c
··· 333 333 void wg_packet_purge_staged_packets(struct wg_peer *peer) 334 334 { 335 335 spin_lock_bh(&peer->staged_packet_queue.lock); 336 - peer->device->dev->stats.tx_dropped += peer->staged_packet_queue.qlen; 336 + DEV_STATS_ADD(peer->device->dev, tx_dropped, 337 + peer->staged_packet_queue.qlen); 337 338 __skb_queue_purge(&peer->staged_packet_queue); 338 339 spin_unlock_bh(&peer->staged_packet_queue.lock); 339 340 }
+6 -1
drivers/nfc/virtual_ncidev.c
··· 26 26 struct mutex mtx; 27 27 struct sk_buff *send_buff; 28 28 struct wait_queue_head wq; 29 + bool running; 29 30 }; 30 31 31 32 static int virtual_nci_open(struct nci_dev *ndev) 32 33 { 34 + struct virtual_nci_dev *vdev = nci_get_drvdata(ndev); 35 + 36 + vdev->running = true; 33 37 return 0; 34 38 } 35 39 ··· 44 40 mutex_lock(&vdev->mtx); 45 41 kfree_skb(vdev->send_buff); 46 42 vdev->send_buff = NULL; 43 + vdev->running = false; 47 44 mutex_unlock(&vdev->mtx); 48 45 49 46 return 0; ··· 55 50 struct virtual_nci_dev *vdev = nci_get_drvdata(ndev); 56 51 57 52 mutex_lock(&vdev->mtx); 58 - if (vdev->send_buff) { 53 + if (vdev->send_buff || !vdev->running) { 59 54 mutex_unlock(&vdev->mtx); 60 55 kfree_skb(skb); 61 56 return -1;
+3 -2
drivers/nvme/host/auth.c
··· 757 757 __func__, chap->qid); 758 758 mutex_lock(&ctrl->dhchap_auth_mutex); 759 759 ret = nvme_auth_dhchap_setup_host_response(ctrl, chap); 760 + mutex_unlock(&ctrl->dhchap_auth_mutex); 760 761 if (ret) { 761 - mutex_unlock(&ctrl->dhchap_auth_mutex); 762 762 chap->error = ret; 763 763 goto fail2; 764 764 } 765 - mutex_unlock(&ctrl->dhchap_auth_mutex); 766 765 767 766 /* DH-HMAC-CHAP Step 3: send reply */ 768 767 dev_dbg(ctrl->device, "%s: qid %d send reply\n", ··· 838 839 } 839 840 840 841 fail2: 842 + if (chap->status == 0) 843 + chap->status = NVME_AUTH_DHCHAP_FAILURE_FAILED; 841 844 dev_dbg(ctrl->device, "%s: qid %d send failure2, status %x\n", 842 845 __func__, chap->qid, chap->status); 843 846 tl = nvme_auth_set_dhchap_failure2_data(ctrl, chap);
+14 -7
drivers/nvme/host/core.c
··· 482 482 483 483 void nvme_cancel_admin_tagset(struct nvme_ctrl *ctrl) 484 484 { 485 - nvme_stop_keep_alive(ctrl); 486 485 if (ctrl->admin_tagset) { 487 486 blk_mq_tagset_busy_iter(ctrl->admin_tagset, 488 487 nvme_cancel_request, ctrl); ··· 1813 1814 return ret; 1814 1815 } 1815 1816 1816 - static void nvme_configure_metadata(struct nvme_ns *ns, struct nvme_id_ns *id) 1817 + static int nvme_configure_metadata(struct nvme_ns *ns, struct nvme_id_ns *id) 1817 1818 { 1818 1819 struct nvme_ctrl *ctrl = ns->ctrl; 1820 + int ret; 1819 1821 1820 - if (nvme_init_ms(ns, id)) 1821 - return; 1822 + ret = nvme_init_ms(ns, id); 1823 + if (ret) 1824 + return ret; 1822 1825 1823 1826 ns->features &= ~(NVME_NS_METADATA_SUPPORTED | NVME_NS_EXT_LBAS); 1824 1827 if (!ns->ms || !(ctrl->ops->flags & NVME_F_METADATA_SUPPORTED)) 1825 - return; 1828 + return 0; 1826 1829 1827 1830 if (ctrl->ops->flags & NVME_F_FABRICS) { 1828 1831 /* ··· 1833 1832 * remap the separate metadata buffer from the block layer. 1834 1833 */ 1835 1834 if (WARN_ON_ONCE(!(id->flbas & NVME_NS_FLBAS_META_EXT))) 1836 - return; 1835 + return 0; 1837 1836 1838 1837 ns->features |= NVME_NS_EXT_LBAS; 1839 1838 ··· 1860 1859 else 1861 1860 ns->features |= NVME_NS_METADATA_SUPPORTED; 1862 1861 } 1862 + return 0; 1863 1863 } 1864 1864 1865 1865 static void nvme_set_queue_limits(struct nvme_ctrl *ctrl, ··· 2034 2032 ns->lba_shift = id->lbaf[lbaf].ds; 2035 2033 nvme_set_queue_limits(ns->ctrl, ns->queue); 2036 2034 2037 - nvme_configure_metadata(ns, id); 2035 + ret = nvme_configure_metadata(ns, id); 2036 + if (ret < 0) { 2037 + blk_mq_unfreeze_queue(ns->disk->queue); 2038 + goto out; 2039 + } 2038 2040 nvme_set_chunk_sectors(ns, id); 2039 2041 nvme_update_disk_info(ns->disk, ns, id); 2040 2042 ··· 4354 4348 { 4355 4349 nvme_mpath_stop(ctrl); 4356 4350 nvme_auth_stop(ctrl); 4351 + nvme_stop_keep_alive(ctrl); 4357 4352 nvme_stop_failfast_work(ctrl); 4358 4353 flush_work(&ctrl->async_event_work); 4359 4354 
cancel_work_sync(&ctrl->fw_act_work);
+2
drivers/nvme/host/fabrics.c
··· 667 667 #endif 668 668 { NVMF_OPT_FAIL_FAST_TMO, "fast_io_fail_tmo=%d" }, 669 669 { NVMF_OPT_DISCOVERY, "discovery" }, 670 + #ifdef CONFIG_NVME_HOST_AUTH 670 671 { NVMF_OPT_DHCHAP_SECRET, "dhchap_secret=%s" }, 671 672 { NVMF_OPT_DHCHAP_CTRL_SECRET, "dhchap_ctrl_secret=%s" }, 673 + #endif 672 674 #ifdef CONFIG_NVME_TCP_TLS 673 675 { NVMF_OPT_TLS, "tls" }, 674 676 #endif
+8 -11
drivers/nvme/host/fc.c
··· 2530 2530 * clean up the admin queue. Same thing as above. 2531 2531 */ 2532 2532 nvme_quiesce_admin_queue(&ctrl->ctrl); 2533 - 2534 - /* 2535 - * Open-coding nvme_cancel_admin_tagset() as fc 2536 - * is not using nvme_cancel_request(). 2537 - */ 2538 - nvme_stop_keep_alive(&ctrl->ctrl); 2539 2533 blk_sync_queue(ctrl->ctrl.admin_q); 2540 2534 blk_mq_tagset_busy_iter(&ctrl->admin_tag_set, 2541 2535 nvme_fc_terminate_exchange, &ctrl->ctrl); ··· 3132 3138 nvme_unquiesce_admin_queue(&ctrl->ctrl); 3133 3139 3134 3140 ret = nvme_init_ctrl_finish(&ctrl->ctrl, false); 3135 - if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags)) 3136 - ret = -EIO; 3137 3141 if (ret) 3138 3142 goto out_disconnect_admin_queue; 3139 - 3143 + if (test_bit(ASSOC_FAILED, &ctrl->flags)) { 3144 + ret = -EIO; 3145 + goto out_stop_keep_alive; 3146 + } 3140 3147 /* sanity checks */ 3141 3148 3142 3149 /* FC-NVME does not have other data in the capsule */ ··· 3145 3150 dev_err(ctrl->ctrl.device, "icdoff %d is not supported!\n", 3146 3151 ctrl->ctrl.icdoff); 3147 3152 ret = NVME_SC_INVALID_FIELD | NVME_SC_DNR; 3148 - goto out_disconnect_admin_queue; 3153 + goto out_stop_keep_alive; 3149 3154 } 3150 3155 3151 3156 /* FC-NVME supports normal SGL Data Block Descriptors */ ··· 3153 3158 dev_err(ctrl->ctrl.device, 3154 3159 "Mandatory sgls are not supported!\n"); 3155 3160 ret = NVME_SC_INVALID_FIELD | NVME_SC_DNR; 3156 - goto out_disconnect_admin_queue; 3161 + goto out_stop_keep_alive; 3157 3162 } 3158 3163 3159 3164 if (opts->queue_size > ctrl->ctrl.maxcmd) { ··· 3200 3205 3201 3206 out_term_aen_ops: 3202 3207 nvme_fc_term_aen_ops(ctrl); 3208 + out_stop_keep_alive: 3209 + nvme_stop_keep_alive(&ctrl->ctrl); 3203 3210 out_disconnect_admin_queue: 3204 3211 dev_warn(ctrl->ctrl.device, 3205 3212 "NVME-FC{%d}: create_assoc failed, assoc_id %llx ret %d\n",
+1
drivers/nvme/host/rdma.c
··· 1080 1080 nvme_rdma_free_io_queues(ctrl); 1081 1081 } 1082 1082 destroy_admin: 1083 + nvme_stop_keep_alive(&ctrl->ctrl); 1083 1084 nvme_quiesce_admin_queue(&ctrl->ctrl); 1084 1085 blk_sync_queue(ctrl->ctrl.admin_q); 1085 1086 nvme_rdma_stop_queue(&ctrl->queues[0]);
+15 -17
drivers/nvme/host/tcp.c
··· 36 36 module_param(so_priority, int, 0644); 37 37 MODULE_PARM_DESC(so_priority, "nvme tcp socket optimize priority"); 38 38 39 - #ifdef CONFIG_NVME_TCP_TLS 40 39 /* 41 40 * TLS handshake timeout 42 41 */ 43 42 static int tls_handshake_timeout = 10; 43 + #ifdef CONFIG_NVME_TCP_TLS 44 44 module_param(tls_handshake_timeout, int, 0644); 45 45 MODULE_PARM_DESC(tls_handshake_timeout, 46 46 "nvme TLS handshake timeout in seconds (default 10)"); ··· 161 161 struct ahash_request *snd_hash; 162 162 __le32 exp_ddgst; 163 163 __le32 recv_ddgst; 164 - #ifdef CONFIG_NVME_TCP_TLS 165 164 struct completion tls_complete; 166 165 int tls_err; 167 - #endif 168 166 struct page_frag_cache pf_cache; 169 167 170 168 void (*state_change)(struct sock *); ··· 203 205 static inline int nvme_tcp_queue_id(struct nvme_tcp_queue *queue) 204 206 { 205 207 return queue - queue->ctrl->queues; 208 + } 209 + 210 + static inline bool nvme_tcp_tls(struct nvme_ctrl *ctrl) 211 + { 212 + if (!IS_ENABLED(CONFIG_NVME_TCP_TLS)) 213 + return 0; 214 + 215 + return ctrl->opts->tls; 206 216 } 207 217 208 218 static inline struct blk_mq_tags *nvme_tcp_tagset(struct nvme_tcp_queue *queue) ··· 1418 1412 memset(&msg, 0, sizeof(msg)); 1419 1413 iov.iov_base = icresp; 1420 1414 iov.iov_len = sizeof(*icresp); 1421 - if (queue->ctrl->ctrl.opts->tls) { 1415 + if (nvme_tcp_tls(&queue->ctrl->ctrl)) { 1422 1416 msg.msg_control = cbuf; 1423 1417 msg.msg_controllen = sizeof(cbuf); 1424 1418 } ··· 1430 1424 goto free_icresp; 1431 1425 } 1432 1426 ret = -ENOTCONN; 1433 - if (queue->ctrl->ctrl.opts->tls) { 1427 + if (nvme_tcp_tls(&queue->ctrl->ctrl)) { 1434 1428 ctype = tls_get_record_type(queue->sock->sk, 1435 1429 (struct cmsghdr *)cbuf); 1436 1430 if (ctype != TLS_RECORD_TYPE_DATA) { ··· 1554 1548 queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask, -1, false); 1555 1549 } 1556 1550 1557 - #ifdef CONFIG_NVME_TCP_TLS 1558 1551 static void nvme_tcp_tls_done(void *data, int status, key_serial_t pskid) 1559 1552 { 1560 
1553 struct nvme_tcp_queue *queue = data; ··· 1630 1625 } 1631 1626 return ret; 1632 1627 } 1633 - #else 1634 - static int nvme_tcp_start_tls(struct nvme_ctrl *nctrl, 1635 - struct nvme_tcp_queue *queue, 1636 - key_serial_t pskid) 1637 - { 1638 - return -EPROTONOSUPPORT; 1639 - } 1640 - #endif 1641 1628 1642 1629 static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, int qid, 1643 1630 key_serial_t pskid) ··· 1756 1759 } 1757 1760 1758 1761 /* If PSKs are configured try to start TLS */ 1759 - if (pskid) { 1762 + if (IS_ENABLED(CONFIG_NVME_TCP_TLS) && pskid) { 1760 1763 ret = nvme_tcp_start_tls(nctrl, queue, pskid); 1761 1764 if (ret) 1762 1765 goto err_init_connect; ··· 1913 1916 int ret; 1914 1917 key_serial_t pskid = 0; 1915 1918 1916 - if (ctrl->opts->tls) { 1919 + if (nvme_tcp_tls(ctrl)) { 1917 1920 if (ctrl->opts->tls_key) 1918 1921 pskid = key_serial(ctrl->opts->tls_key); 1919 1922 else ··· 1946 1949 { 1947 1950 int i, ret; 1948 1951 1949 - if (ctrl->opts->tls && !ctrl->tls_key) { 1952 + if (nvme_tcp_tls(ctrl) && !ctrl->tls_key) { 1950 1953 dev_err(ctrl->device, "no PSK negotiated\n"); 1951 1954 return -ENOKEY; 1952 1955 } ··· 2234 2237 nvme_tcp_destroy_io_queues(ctrl, new); 2235 2238 } 2236 2239 destroy_admin: 2240 + nvme_stop_keep_alive(ctrl); 2237 2241 nvme_tcp_teardown_admin_queue(ctrl, false); 2238 2242 return ret; 2239 2243 }
+2 -2
drivers/nvme/target/Kconfig
··· 4 4 tristate "NVMe Target support" 5 5 depends on BLOCK 6 6 depends on CONFIGFS_FS 7 + select NVME_KEYRING if NVME_TARGET_TCP_TLS 8 + select KEYS if NVME_TARGET_TCP_TLS 7 9 select BLK_DEV_INTEGRITY_T10 if BLK_DEV_INTEGRITY 8 10 select SGL_ALLOC 9 11 help ··· 89 87 config NVME_TARGET_TCP_TLS 90 88 bool "NVMe over Fabrics TCP target TLS encryption support" 91 89 depends on NVME_TARGET_TCP 92 - select NVME_KEYRING 93 90 select NET_HANDSHAKE 94 - select KEYS 95 91 help 96 92 Enables TLS encryption for the NVMe TCP target using the netlink handshake API. 97 93
+1 -1
drivers/nvme/target/configfs.c
··· 1893 1893 return ERR_PTR(-ENOMEM); 1894 1894 } 1895 1895 1896 - if (nvme_keyring_id()) { 1896 + if (IS_ENABLED(CONFIG_NVME_TARGET_TCP_TLS) && nvme_keyring_id()) { 1897 1897 port->keyring = key_lookup(nvme_keyring_id()); 1898 1898 if (IS_ERR(port->keyring)) { 1899 1899 pr_warn("NVMe keyring not available, disabling TLS\n");
+4
drivers/nvme/target/fabrics-cmd.c
··· 244 244 goto out; 245 245 } 246 246 247 + d->subsysnqn[NVMF_NQN_FIELD_LEN - 1] = '\0'; 248 + d->hostnqn[NVMF_NQN_FIELD_LEN - 1] = '\0'; 247 249 status = nvmet_alloc_ctrl(d->subsysnqn, d->hostnqn, req, 248 250 le32_to_cpu(c->kato), &ctrl); 249 251 if (status) ··· 315 313 goto out; 316 314 } 317 315 316 + d->subsysnqn[NVMF_NQN_FIELD_LEN - 1] = '\0'; 317 + d->hostnqn[NVMF_NQN_FIELD_LEN - 1] = '\0'; 318 318 ctrl = nvmet_ctrl_find_get(d->subsysnqn, d->hostnqn, 319 319 le16_to_cpu(d->cntlid), req); 320 320 if (!ctrl) {
+3 -1
drivers/nvme/target/tcp.c
··· 1854 1854 } 1855 1855 return ret; 1856 1856 } 1857 + #else 1858 + static void nvmet_tcp_tls_handshake_timeout(struct work_struct *w) {} 1857 1859 #endif 1858 1860 1859 1861 static void nvmet_tcp_alloc_queue(struct nvmet_tcp_port *port, ··· 1913 1911 list_add_tail(&queue->queue_list, &nvmet_tcp_queue_list); 1914 1912 mutex_unlock(&nvmet_tcp_queue_mutex); 1915 1913 1916 - #ifdef CONFIG_NVME_TARGET_TCP_TLS 1917 1914 INIT_DELAYED_WORK(&queue->tls_handshake_tmo_work, 1918 1915 nvmet_tcp_tls_handshake_timeout); 1916 + #ifdef CONFIG_NVME_TARGET_TCP_TLS 1919 1917 if (queue->state == NVMET_TCP_Q_TLS_HANDSHAKE) { 1920 1918 struct sock *sk = queue->sock->sk; 1921 1919
-1
drivers/phy/Kconfig
··· 87 87 source "drivers/phy/mscc/Kconfig" 88 88 source "drivers/phy/qualcomm/Kconfig" 89 89 source "drivers/phy/ralink/Kconfig" 90 - source "drivers/phy/realtek/Kconfig" 91 90 source "drivers/phy/renesas/Kconfig" 92 91 source "drivers/phy/rockchip/Kconfig" 93 92 source "drivers/phy/samsung/Kconfig"
-1
drivers/phy/Makefile
··· 26 26 mscc/ \ 27 27 qualcomm/ \ 28 28 ralink/ \ 29 - realtek/ \ 30 29 renesas/ \ 31 30 rockchip/ \ 32 31 samsung/ \
-32
drivers/phy/realtek/Kconfig
··· 1 - # SPDX-License-Identifier: GPL-2.0 2 - # 3 - # Phy drivers for Realtek platforms 4 - # 5 - 6 - if ARCH_REALTEK || COMPILE_TEST 7 - 8 - config PHY_RTK_RTD_USB2PHY 9 - tristate "Realtek RTD USB2 PHY Transceiver Driver" 10 - depends on USB_SUPPORT 11 - select GENERIC_PHY 12 - select USB_PHY 13 - select USB_COMMON 14 - help 15 - Enable this to support Realtek SoC USB2 phy transceiver. 16 - The DHC (digital home center) RTD series SoCs used the Synopsys 17 - DWC3 USB IP. This driver will do the PHY initialization 18 - of the parameters. 19 - 20 - config PHY_RTK_RTD_USB3PHY 21 - tristate "Realtek RTD USB3 PHY Transceiver Driver" 22 - depends on USB_SUPPORT 23 - select GENERIC_PHY 24 - select USB_PHY 25 - select USB_COMMON 26 - help 27 - Enable this to support Realtek SoC USB3 phy transceiver. 28 - The DHC (digital home center) RTD series SoCs used the Synopsys 29 - DWC3 USB IP. This driver will do the PHY initialization 30 - of the parameters. 31 - 32 - endif # ARCH_REALTEK || COMPILE_TEST
-3
drivers/phy/realtek/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0 2 - obj-$(CONFIG_PHY_RTK_RTD_USB2PHY) += phy-rtk-usb2.o 3 - obj-$(CONFIG_PHY_RTK_RTD_USB3PHY) += phy-rtk-usb3.o
-1325
drivers/phy/realtek/phy-rtk-usb2.c
···
   1 - // SPDX-License-Identifier: GPL-2.0
   2 - /*
   3 -  *  phy-rtk-usb2.c RTK usb2.0 PHY driver
   4 -  *
   5 -  * Copyright (C) 2023 Realtek Semiconductor Corporation
   6 -  *
   7 -  */
   8 -
   9 - #include <linux/module.h>
  10 - #include <linux/of.h>
  11 - #include <linux/of_address.h>
  12 - #include <linux/platform_device.h>
  13 - #include <linux/uaccess.h>
  14 - #include <linux/debugfs.h>
  15 - #include <linux/nvmem-consumer.h>
  16 - #include <linux/regmap.h>
  17 - #include <linux/sys_soc.h>
  18 - #include <linux/mfd/syscon.h>
  19 - #include <linux/phy/phy.h>
  20 - #include <linux/usb.h>
  21 - #include <linux/usb/phy.h>
  22 - #include <linux/usb/hcd.h>
  23 -
  24 - /* GUSB2PHYACCn register */
  25 - #define PHY_NEW_REG_REQ BIT(25)
  26 - #define PHY_VSTS_BUSY BIT(23)
  27 - #define PHY_VCTRL_SHIFT 8
  28 - #define PHY_REG_DATA_MASK 0xff
  29 -
  30 - #define GET_LOW_NIBBLE(addr) ((addr) & 0x0f)
  31 - #define GET_HIGH_NIBBLE(addr) (((addr) & 0xf0) >> 4)
  32 -
  33 - #define EFUS_USB_DC_CAL_RATE 2
  34 - #define EFUS_USB_DC_CAL_MAX 7
  35 -
  36 - #define EFUS_USB_DC_DIS_RATE 1
  37 - #define EFUS_USB_DC_DIS_MAX 7
  38 -
  39 - #define MAX_PHY_DATA_SIZE 20
  40 - #define OFFEST_PHY_READ 0x20
  41 -
  42 - #define MAX_USB_PHY_NUM 4
  43 - #define MAX_USB_PHY_PAGE0_DATA_SIZE 16
  44 - #define MAX_USB_PHY_PAGE1_DATA_SIZE 16
  45 - #define MAX_USB_PHY_PAGE2_DATA_SIZE 8
  46 -
  47 - #define SET_PAGE_OFFSET 0xf4
  48 - #define SET_PAGE_0 0x9b
  49 - #define SET_PAGE_1 0xbb
  50 - #define SET_PAGE_2 0xdb
  51 -
  52 - #define PAGE_START 0xe0
  53 - #define PAGE0_0XE4 0xe4
  54 - #define PAGE0_0XE6 0xe6
  55 - #define PAGE0_0XE7 0xe7
  56 - #define PAGE1_0XE0 0xe0
  57 - #define PAGE1_0XE2 0xe2
  58 -
  59 - #define SENSITIVITY_CTRL (BIT(4) | BIT(5) | BIT(6))
  60 - #define ENABLE_AUTO_SENSITIVITY_CALIBRATION BIT(2)
  61 - #define DEFAULT_DC_DRIVING_VALUE (0x8)
  62 - #define DEFAULT_DC_DISCONNECTION_VALUE (0x6)
  63 - #define HS_CLK_SELECT BIT(6)
  64 -
  65 - struct phy_reg {
  66 - 	void __iomem *reg_wrap_vstatus;
  67 - 	void __iomem *reg_gusb2phyacc0;
  68 - 	int vstatus_index;
  69 - };
  70 -
  71 - struct phy_data {
  72 - 	u8 addr;
  73 - 	u8 data;
  74 - };
  75 -
  76 - struct phy_cfg {
  77 - 	int page0_size;
  78 - 	struct phy_data page0[MAX_USB_PHY_PAGE0_DATA_SIZE];
  79 - 	int page1_size;
  80 - 	struct phy_data page1[MAX_USB_PHY_PAGE1_DATA_SIZE];
  81 - 	int page2_size;
  82 - 	struct phy_data page2[MAX_USB_PHY_PAGE2_DATA_SIZE];
  83 -
  84 - 	int num_phy;
  85 -
  86 - 	bool check_efuse;
  87 - 	int check_efuse_version;
  88 - #define CHECK_EFUSE_V1 1
  89 - #define CHECK_EFUSE_V2 2
  90 - 	int efuse_dc_driving_rate;
  91 - 	int efuse_dc_disconnect_rate;
  92 - 	int dc_driving_mask;
  93 - 	int dc_disconnect_mask;
  94 - 	bool usb_dc_disconnect_at_page0;
  95 - 	int driving_updated_for_dev_dis;
  96 -
  97 - 	bool do_toggle;
  98 - 	bool do_toggle_driving;
  99 - 	bool use_default_parameter;
 100 - 	bool is_double_sensitivity_mode;
 101 - };
 102 -
 103 - struct phy_parameter {
 104 - 	struct phy_reg phy_reg;
 105 -
 106 - 	/* Get from efuse */
 107 - 	s8 efuse_usb_dc_cal;
 108 - 	s8 efuse_usb_dc_dis;
 109 -
 110 - 	/* Get from dts */
 111 - 	bool inverse_hstx_sync_clock;
 112 - 	u32 driving_level;
 113 - 	s32 driving_level_compensate;
 114 - 	s32 disconnection_compensate;
 115 - };
 116 -
 117 - struct rtk_phy {
 118 - 	struct usb_phy phy;
 119 - 	struct device *dev;
 120 -
 121 - 	struct phy_cfg *phy_cfg;
 122 - 	int num_phy;
 123 - 	struct phy_parameter *phy_parameter;
 124 -
 125 - 	struct dentry *debug_dir;
 126 - };
 127 -
 128 - /* mapping 0xE0 to 0 ... 0xE7 to 7, 0xF0 to 8 ,,, 0xF7 to 15 */
 129 - static inline int page_addr_to_array_index(u8 addr)
 130 - {
 131 - 	return (int)((((addr) - PAGE_START) & 0x7) +
 132 - 		((((addr) - PAGE_START) & 0x10) >> 1));
 133 - }
 134 -
 135 - static inline u8 array_index_to_page_addr(int index)
 136 - {
 137 - 	return ((((index) + PAGE_START) & 0x7) +
 138 - 		((((index) & 0x8) << 1) + PAGE_START));
 139 - }
 140 -
 141 - #define PHY_IO_TIMEOUT_USEC (50000)
 142 - #define PHY_IO_DELAY_US (100)
 143 -
 144 - static inline int utmi_wait_register(void __iomem *reg, u32 mask, u32 result)
 145 - {
 146 - 	int ret;
 147 - 	unsigned int val;
 148 -
 149 - 	ret = read_poll_timeout(readl, val, ((val & mask) == result),
 150 - 		PHY_IO_DELAY_US, PHY_IO_TIMEOUT_USEC, false, reg);
 151 - 	if (ret) {
 152 - 		pr_err("%s can't program USB phy\n", __func__);
 153 - 		return -ETIMEDOUT;
 154 - 	}
 155 -
 156 - 	return 0;
 157 - }
 158 -
 159 - static char rtk_phy_read(struct phy_reg *phy_reg, char addr)
 160 - {
 161 - 	void __iomem *reg_gusb2phyacc0 = phy_reg->reg_gusb2phyacc0;
 162 - 	unsigned int val;
 163 - 	int ret = 0;
 164 -
 165 - 	addr -= OFFEST_PHY_READ;
 166 -
 167 - 	/* polling until VBusy == 0 */
 168 - 	ret = utmi_wait_register(reg_gusb2phyacc0, PHY_VSTS_BUSY, 0);
 169 - 	if (ret)
 170 - 		return (char)ret;
 171 -
 172 - 	/* VCtrl = low nibble of addr, and set PHY_NEW_REG_REQ */
 173 - 	val = PHY_NEW_REG_REQ | (GET_LOW_NIBBLE(addr) << PHY_VCTRL_SHIFT);
 174 - 	writel(val, reg_gusb2phyacc0);
 175 - 	ret = utmi_wait_register(reg_gusb2phyacc0, PHY_VSTS_BUSY, 0);
 176 - 	if (ret)
 177 - 		return (char)ret;
 178 -
 179 - 	/* VCtrl = high nibble of addr, and set PHY_NEW_REG_REQ */
 180 - 	val = PHY_NEW_REG_REQ | (GET_HIGH_NIBBLE(addr) << PHY_VCTRL_SHIFT);
 181 - 	writel(val, reg_gusb2phyacc0);
 182 - 	ret = utmi_wait_register(reg_gusb2phyacc0, PHY_VSTS_BUSY, 0);
 183 - 	if (ret)
 184 - 		return (char)ret;
 185 -
 186 - 	val = readl(reg_gusb2phyacc0);
 187 -
 188 - 	return (char)(val & PHY_REG_DATA_MASK);
 189 - }
 190 -
 191 - static int rtk_phy_write(struct phy_reg *phy_reg, char addr, char data)
 192 - {
 193 - 	unsigned int val;
 194 - 	void __iomem *reg_wrap_vstatus = phy_reg->reg_wrap_vstatus;
 195 - 	void __iomem *reg_gusb2phyacc0 = phy_reg->reg_gusb2phyacc0;
 196 - 	int shift_bits = phy_reg->vstatus_index * 8;
 197 - 	int ret = 0;
 198 -
 199 - 	/* write data to VStatusOut2 (data output to phy) */
 200 - 	writel((u32)data << shift_bits, reg_wrap_vstatus);
 201 -
 202 - 	ret = utmi_wait_register(reg_gusb2phyacc0, PHY_VSTS_BUSY, 0);
 203 - 	if (ret)
 204 - 		return ret;
 205 -
 206 - 	/* VCtrl = low nibble of addr, set PHY_NEW_REG_REQ */
 207 - 	val = PHY_NEW_REG_REQ | (GET_LOW_NIBBLE(addr) << PHY_VCTRL_SHIFT);
 208 -
 209 - 	writel(val, reg_gusb2phyacc0);
 210 - 	ret = utmi_wait_register(reg_gusb2phyacc0, PHY_VSTS_BUSY, 0);
 211 - 	if (ret)
 212 - 		return ret;
 213 -
 214 - 	/* VCtrl = high nibble of addr, set PHY_NEW_REG_REQ */
 215 - 	val = PHY_NEW_REG_REQ | (GET_HIGH_NIBBLE(addr) << PHY_VCTRL_SHIFT);
 216 -
 217 - 	writel(val, reg_gusb2phyacc0);
 218 - 	ret = utmi_wait_register(reg_gusb2phyacc0, PHY_VSTS_BUSY, 0);
 219 - 	if (ret)
 220 - 		return ret;
 221 -
 222 - 	return 0;
 223 - }
 224 -
 225 - static int rtk_phy_set_page(struct phy_reg *phy_reg, int page)
 226 - {
 227 - 	switch (page) {
 228 - 	case 0:
 229 - 		return rtk_phy_write(phy_reg, SET_PAGE_OFFSET, SET_PAGE_0);
 230 - 	case 1:
 231 - 		return rtk_phy_write(phy_reg, SET_PAGE_OFFSET, SET_PAGE_1);
 232 - 	case 2:
 233 - 		return rtk_phy_write(phy_reg, SET_PAGE_OFFSET, SET_PAGE_2);
 234 - 	default:
 235 - 		pr_err("%s error page=%d\n", __func__, page);
 236 - 	}
 237 -
 238 - 	return -EINVAL;
 239 - }
 240 -
 241 - static u8 __updated_dc_disconnect_level_page0_0xe4(struct phy_cfg *phy_cfg,
 242 - 	struct phy_parameter *phy_parameter, u8 data)
 243 - {
 244 - 	u8 ret;
 245 - 	s32 val;
 246 - 	s32 dc_disconnect_mask = phy_cfg->dc_disconnect_mask;
 247 - 	int offset = 4;
 248 -
 249 - 	val = (s32)((data >> offset) & dc_disconnect_mask)
 250 - 		+ phy_parameter->efuse_usb_dc_dis
 251 - 		+ phy_parameter->disconnection_compensate;
 252 -
 253 - 	if (val > dc_disconnect_mask)
 254 - 		val = dc_disconnect_mask;
 255 - 	else if (val < 0)
 256 - 		val = 0;
 257 -
 258 - 	ret = (data & (~(dc_disconnect_mask << offset))) |
 259 - 		(val & dc_disconnect_mask) << offset;
 260 -
 261 - 	return ret;
 262 - }
 263 -
 264 - /* updated disconnect level at page0 */
 265 - static void update_dc_disconnect_level_at_page0(struct rtk_phy *rtk_phy,
 266 - 	struct phy_parameter *phy_parameter, bool update)
 267 - {
 268 - 	struct phy_cfg *phy_cfg;
 269 - 	struct phy_reg *phy_reg;
 270 - 	struct phy_data *phy_data_page;
 271 - 	struct phy_data *phy_data;
 272 - 	u8 addr, data;
 273 - 	int offset = 4;
 274 - 	s32 dc_disconnect_mask;
 275 - 	int i;
 276 -
 277 - 	phy_cfg = rtk_phy->phy_cfg;
 278 - 	phy_reg = &phy_parameter->phy_reg;
 279 -
 280 - 	/* Set page 0 */
 281 - 	phy_data_page = phy_cfg->page0;
 282 - 	rtk_phy_set_page(phy_reg, 0);
 283 -
 284 - 	i = page_addr_to_array_index(PAGE0_0XE4);
 285 - 	phy_data = phy_data_page + i;
 286 - 	if (!phy_data->addr) {
 287 - 		phy_data->addr = PAGE0_0XE4;
 288 - 		phy_data->data = rtk_phy_read(phy_reg, PAGE0_0XE4);
 289 - 	}
 290 -
 291 - 	addr = phy_data->addr;
 292 - 	data = phy_data->data;
 293 - 	dc_disconnect_mask = phy_cfg->dc_disconnect_mask;
 294 -
 295 - 	if (update)
 296 - 		data = __updated_dc_disconnect_level_page0_0xe4(phy_cfg, phy_parameter, data);
 297 - 	else
 298 - 		data = (data & ~(dc_disconnect_mask << offset)) |
 299 - 			(DEFAULT_DC_DISCONNECTION_VALUE << offset);
 300 -
 301 - 	if (rtk_phy_write(phy_reg, addr, data))
 302 - 		dev_err(rtk_phy->dev,
 303 - 			"%s: Error to set page1 parameter addr=0x%x value=0x%x\n",
 304 - 			__func__, addr, data);
 305 - }
 306 -
 307 - static u8 __updated_dc_disconnect_level_page1_0xe2(struct phy_cfg *phy_cfg,
 308 - 	struct phy_parameter *phy_parameter, u8 data)
 309 - {
 310 - 	u8 ret;
 311 - 	s32 val;
 312 - 	s32 dc_disconnect_mask = phy_cfg->dc_disconnect_mask;
 313 -
 314 - 	if (phy_cfg->check_efuse_version == CHECK_EFUSE_V1) {
 315 - 		val = (s32)(data & dc_disconnect_mask)
 316 - 			+ phy_parameter->efuse_usb_dc_dis
 317 - 			+ phy_parameter->disconnection_compensate;
 318 - 	} else { /* for CHECK_EFUSE_V2 or no efuse */
 319 - 		if (phy_parameter->efuse_usb_dc_dis)
 320 - 			val = (s32)(phy_parameter->efuse_usb_dc_dis +
 321 - 				phy_parameter->disconnection_compensate);
 322 - 		else
 323 - 			val = (s32)((data & dc_disconnect_mask) +
 324 - 				phy_parameter->disconnection_compensate);
 325 - 	}
 326 -
 327 - 	if (val > dc_disconnect_mask)
 328 - 		val = dc_disconnect_mask;
 329 - 	else if (val < 0)
 330 - 		val = 0;
 331 -
 332 - 	ret = (data & (~dc_disconnect_mask)) | (val & dc_disconnect_mask);
 333 -
 334 - 	return ret;
 335 - }
 336 -
 337 - /* updated disconnect level at page1 */
 338 - static void update_dc_disconnect_level_at_page1(struct rtk_phy *rtk_phy,
 339 - 	struct phy_parameter *phy_parameter, bool update)
 340 - {
 341 - 	struct phy_cfg *phy_cfg;
 342 - 	struct phy_data *phy_data_page;
 343 - 	struct phy_data *phy_data;
 344 - 	struct phy_reg *phy_reg;
 345 - 	u8 addr, data;
 346 - 	s32 dc_disconnect_mask;
 347 - 	int i;
 348 -
 349 - 	phy_cfg = rtk_phy->phy_cfg;
 350 - 	phy_reg = &phy_parameter->phy_reg;
 351 -
 352 - 	/* Set page 1 */
 353 - 	phy_data_page = phy_cfg->page1;
 354 - 	rtk_phy_set_page(phy_reg, 1);
 355 -
 356 - 	i = page_addr_to_array_index(PAGE1_0XE2);
 357 - 	phy_data = phy_data_page + i;
 358 - 	if (!phy_data->addr) {
 359 - 		phy_data->addr = PAGE1_0XE2;
 360 - 		phy_data->data = rtk_phy_read(phy_reg, PAGE1_0XE2);
 361 - 	}
 362 -
 363 - 	addr = phy_data->addr;
 364 - 	data = phy_data->data;
 365 - 	dc_disconnect_mask = phy_cfg->dc_disconnect_mask;
 366 -
 367 - 	if (update)
 368 - 		data = __updated_dc_disconnect_level_page1_0xe2(phy_cfg, phy_parameter, data);
 369 - 	else
 370 - 		data = (data & ~dc_disconnect_mask) | DEFAULT_DC_DISCONNECTION_VALUE;
 371 -
 372 - 	if (rtk_phy_write(phy_reg, addr, data))
 373 - 		dev_err(rtk_phy->dev,
 374 - 			"%s: Error to set page1 parameter addr=0x%x value=0x%x\n",
 375 - 			__func__, addr, data);
 376 - }
 377 -
 378 - static void update_dc_disconnect_level(struct rtk_phy *rtk_phy,
 379 - 	struct phy_parameter *phy_parameter, bool update)
 380 - {
 381 - 	struct phy_cfg *phy_cfg = rtk_phy->phy_cfg;
 382 -
 383 - 	if (phy_cfg->usb_dc_disconnect_at_page0)
 384 - 		update_dc_disconnect_level_at_page0(rtk_phy, phy_parameter, update);
 385 - 	else
 386 - 		update_dc_disconnect_level_at_page1(rtk_phy, phy_parameter, update);
 387 - }
 388 -
 389 - static u8 __update_dc_driving_page0_0xe4(struct phy_cfg *phy_cfg,
 390 - 	struct phy_parameter *phy_parameter, u8 data)
 391 - {
 392 - 	s32 driving_level_compensate = phy_parameter->driving_level_compensate;
 393 - 	s32 dc_driving_mask = phy_cfg->dc_driving_mask;
 394 - 	s32 val;
 395 - 	u8 ret;
 396 -
 397 - 	if (phy_cfg->check_efuse_version == CHECK_EFUSE_V1) {
 398 - 		val = (s32)(data & dc_driving_mask) + driving_level_compensate
 399 - 			+ phy_parameter->efuse_usb_dc_cal;
 400 - 	} else { /* for CHECK_EFUSE_V2 or no efuse */
 401 - 		if (phy_parameter->efuse_usb_dc_cal)
 402 - 			val = (s32)((phy_parameter->efuse_usb_dc_cal & dc_driving_mask)
 403 - 				+ driving_level_compensate);
 404 - 		else
 405 - 			val = (s32)(data & dc_driving_mask);
 406 - 	}
 407 -
 408 - 	if (val > dc_driving_mask)
 409 - 		val = dc_driving_mask;
 410 - 	else if (val < 0)
 411 - 		val = 0;
 412 -
 413 - 	ret = (data & (~dc_driving_mask)) | (val & dc_driving_mask);
 414 -
 415 - 	return ret;
 416 - }
 417 -
 418 - static void update_dc_driving_level(struct rtk_phy *rtk_phy,
 419 - 	struct phy_parameter *phy_parameter)
 420 - {
 421 - 	struct phy_cfg *phy_cfg;
 422 - 	struct phy_reg *phy_reg;
 423 -
 424 - 	phy_reg = &phy_parameter->phy_reg;
 425 - 	phy_cfg = rtk_phy->phy_cfg;
 426 - 	if (!phy_cfg->page0[4].addr) {
 427 - 		rtk_phy_set_page(phy_reg, 0);
 428 - 		phy_cfg->page0[4].addr = PAGE0_0XE4;
 429 - 		phy_cfg->page0[4].data = rtk_phy_read(phy_reg, PAGE0_0XE4);
 430 - 	}
 431 -
 432 - 	if (phy_parameter->driving_level != DEFAULT_DC_DRIVING_VALUE) {
 433 - 		u32 dc_driving_mask;
 434 - 		u8 driving_level;
 435 - 		u8 data;
 436 -
 437 - 		data = phy_cfg->page0[4].data;
 438 - 		dc_driving_mask = phy_cfg->dc_driving_mask;
 439 - 		driving_level = data & dc_driving_mask;
 440 -
 441 - 		dev_dbg(rtk_phy->dev, "%s driving_level=%d => dts driving_level=%d\n",
 442 - 			__func__, driving_level, phy_parameter->driving_level);
 443 -
 444 - 		phy_cfg->page0[4].data = (data & (~dc_driving_mask)) |
 445 - 			(phy_parameter->driving_level & dc_driving_mask);
 446 - 	}
 447 -
 448 - 	phy_cfg->page0[4].data = __update_dc_driving_page0_0xe4(phy_cfg,
 449 - 		phy_parameter,
 450 - 		phy_cfg->page0[4].data);
 451 - }
 452 -
 453 - static void update_hs_clk_select(struct rtk_phy *rtk_phy,
 454 - 	struct phy_parameter *phy_parameter)
 455 - {
 456 - 	struct phy_cfg *phy_cfg;
 457 - 	struct phy_reg *phy_reg;
 458 -
 459 - 	phy_cfg = rtk_phy->phy_cfg;
 460 - 	phy_reg = &phy_parameter->phy_reg;
 461 -
 462 - 	if (phy_parameter->inverse_hstx_sync_clock) {
 463 - 		if (!phy_cfg->page0[6].addr) {
 464 - 			rtk_phy_set_page(phy_reg, 0);
 465 - 			phy_cfg->page0[6].addr = PAGE0_0XE6;
 466 - 			phy_cfg->page0[6].data = rtk_phy_read(phy_reg, PAGE0_0XE6);
 467 - 		}
 468 -
 469 - 		phy_cfg->page0[6].data = phy_cfg->page0[6].data | HS_CLK_SELECT;
 470 - 	}
 471 - }
 472 -
 473 - static void do_rtk_phy_toggle(struct rtk_phy *rtk_phy,
 474 - 	int index, bool connect)
 475 - {
 476 - 	struct phy_parameter *phy_parameter;
 477 - 	struct phy_cfg *phy_cfg;
 478 - 	struct phy_reg *phy_reg;
 479 - 	struct phy_data *phy_data_page;
 480 - 	u8 addr, data;
 481 - 	int i;
 482 -
 483 - 	phy_cfg = rtk_phy->phy_cfg;
 484 - 	phy_parameter = &((struct phy_parameter *)rtk_phy->phy_parameter)[index];
 485 - 	phy_reg = &phy_parameter->phy_reg;
 486 -
 487 - 	if (!phy_cfg->do_toggle)
 488 - 		goto out;
 489 -
 490 - 	if (phy_cfg->is_double_sensitivity_mode)
 491 - 		goto do_toggle_driving;
 492 -
 493 - 	/* Set page 0 */
 494 - 	rtk_phy_set_page(phy_reg, 0);
 495 -
 496 - 	addr = PAGE0_0XE7;
 497 - 	data = rtk_phy_read(phy_reg, addr);
 498 -
 499 - 	if (connect)
 500 - 		rtk_phy_write(phy_reg, addr, data & (~SENSITIVITY_CTRL));
 501 - 	else
 502 - 		rtk_phy_write(phy_reg, addr, data | (SENSITIVITY_CTRL));
 503 -
 504 - do_toggle_driving:
 505 -
 506 - 	if (!phy_cfg->do_toggle_driving)
 507 - 		goto do_toggle;
 508 -
 509 - 	/* Page 0 addr 0xE4 driving capability */
 510 -
 511 - 	/* Set page 0 */
 512 - 	phy_data_page = phy_cfg->page0;
 513 - 	rtk_phy_set_page(phy_reg, 0);
 514 -
 515 - 	i = page_addr_to_array_index(PAGE0_0XE4);
 516 - 	addr = phy_data_page[i].addr;
 517 - 	data = phy_data_page[i].data;
 518 -
 519 - 	if (connect) {
 520 - 		rtk_phy_write(phy_reg, addr, data);
 521 - 	} else {
 522 - 		u8 value;
 523 - 		s32 tmp;
 524 - 		s32 driving_updated =
 525 - 			phy_cfg->driving_updated_for_dev_dis;
 526 - 		s32 dc_driving_mask = phy_cfg->dc_driving_mask;
 527 -
 528 - 		tmp = (s32)(data & dc_driving_mask) + driving_updated;
 529 -
 530 - 		if (tmp > dc_driving_mask)
 531 - 			tmp = dc_driving_mask;
 532 - 		else if (tmp < 0)
 533 - 			tmp = 0;
 534 -
 535 - 		value = (data & (~dc_driving_mask)) | (tmp & dc_driving_mask);
 536 -
 537 - 		rtk_phy_write(phy_reg, addr, value);
 538 - 	}
 539 -
 540 - do_toggle:
 541 - 	/* restore dc disconnect level before toggle */
 542 - 	update_dc_disconnect_level(rtk_phy, phy_parameter, false);
 543 -
 544 - 	/* Set page 1 */
 545 - 	rtk_phy_set_page(phy_reg, 1);
 546 -
 547 - 	addr = PAGE1_0XE0;
 548 - 	data = rtk_phy_read(phy_reg, addr);
 549 -
 550 - 	rtk_phy_write(phy_reg, addr, data &
 551 - 		(~ENABLE_AUTO_SENSITIVITY_CALIBRATION));
 552 - 	mdelay(1);
 553 - 	rtk_phy_write(phy_reg, addr, data |
 554 - 		(ENABLE_AUTO_SENSITIVITY_CALIBRATION));
 555 -
 556 - 	/* update dc disconnect level after toggle */
 557 - 	update_dc_disconnect_level(rtk_phy, phy_parameter, true);
 558 -
 559 - out:
 560 - 	return;
 561 - }
 562 -
 563 - static int do_rtk_phy_init(struct rtk_phy *rtk_phy, int index)
 564 - {
 565 - 	struct phy_parameter *phy_parameter;
 566 - 	struct phy_cfg *phy_cfg;
 567 - 	struct phy_data *phy_data_page;
 568 - 	struct phy_reg *phy_reg;
 569 - 	int i;
 570 -
 571 - 	phy_cfg = rtk_phy->phy_cfg;
 572 - 	phy_parameter = &((struct phy_parameter *)rtk_phy->phy_parameter)[index];
 573 - 	phy_reg = &phy_parameter->phy_reg;
 574 -
 575 - 	if (phy_cfg->use_default_parameter) {
 576 - 		dev_dbg(rtk_phy->dev, "%s phy#%d use default parameter\n",
 577 - 			__func__, index);
 578 - 		goto do_toggle;
 579 - 	}
 580 -
 581 - 	/* Set page 0 */
 582 - 	phy_data_page = phy_cfg->page0;
 583 - 	rtk_phy_set_page(phy_reg, 0);
 584 -
 585 - 	for (i = 0; i < phy_cfg->page0_size; i++) {
 586 - 		struct phy_data *phy_data = phy_data_page + i;
 587 - 		u8 addr = phy_data->addr;
 588 - 		u8 data = phy_data->data;
 589 -
 590 - 		if (!addr)
 591 - 			continue;
 592 -
 593 - 		if (rtk_phy_write(phy_reg, addr, data)) {
 594 - 			dev_err(rtk_phy->dev,
 595 - 				"%s: Error to set page0 parameter addr=0x%x value=0x%x\n",
 596 - 				__func__, addr, data);
 597 - 			return -EINVAL;
 598 - 		}
 599 - 	}
 600 -
 601 - 	/* Set page 1 */
 602 - 	phy_data_page = phy_cfg->page1;
 603 - 	rtk_phy_set_page(phy_reg, 1);
 604 -
 605 - 	for (i = 0; i < phy_cfg->page1_size; i++) {
 606 - 		struct phy_data *phy_data = phy_data_page + i;
 607 - 		u8 addr = phy_data->addr;
 608 - 		u8 data = phy_data->data;
 609 -
 610 - 		if (!addr)
 611 - 			continue;
 612 -
 613 - 		if (rtk_phy_write(phy_reg, addr, data)) {
 614 - 			dev_err(rtk_phy->dev,
 615 - 				"%s: Error to set page1 parameter addr=0x%x value=0x%x\n",
 616 - 				__func__, addr, data);
 617 - 			return -EINVAL;
 618 - 		}
 619 - 	}
 620 -
 621 - 	if (phy_cfg->page2_size == 0)
 622 - 		goto do_toggle;
 623 -
 624 - 	/* Set page 2 */
 625 - 	phy_data_page = phy_cfg->page2;
 626 - 	rtk_phy_set_page(phy_reg, 2);
 627 -
 628 - 	for (i = 0; i < phy_cfg->page2_size; i++) {
 629 - 		struct phy_data *phy_data = phy_data_page + i;
 630 - 		u8 addr = phy_data->addr;
 631 - 		u8 data = phy_data->data;
 632 -
 633 - 		if (!addr)
 634 - 			continue;
 635 -
 636 - 		if (rtk_phy_write(phy_reg, addr, data)) {
 637 - 			dev_err(rtk_phy->dev,
 638 - 				"%s: Error to set page2 parameter addr=0x%x value=0x%x\n",
 639 - 				__func__, addr, data);
 640 - 			return -EINVAL;
 641 - 		}
 642 - 	}
 643 -
 644 - do_toggle:
 645 - 	do_rtk_phy_toggle(rtk_phy, index, false);
 646 -
 647 - 	return 0;
 648 - }
 649 -
 650 - static int rtk_phy_init(struct phy *phy)
 651 - {
 652 - 	struct rtk_phy *rtk_phy = phy_get_drvdata(phy);
 653 - 	unsigned long phy_init_time = jiffies;
 654 - 	int i, ret = 0;
 655 -
 656 - 	if (!rtk_phy)
 657 - 		return -EINVAL;
 658 -
 659 - 	for (i = 0; i < rtk_phy->num_phy; i++)
 660 - 		ret = do_rtk_phy_init(rtk_phy, i);
 661 -
 662 - 	dev_dbg(rtk_phy->dev, "Initialized RTK USB 2.0 PHY (take %dms)\n",
 663 - 		jiffies_to_msecs(jiffies - phy_init_time));
 664 - 	return ret;
 665 - }
 666 -
 667 - static int rtk_phy_exit(struct phy *phy)
 668 - {
 669 - 	return 0;
 670 - }
 671 -
 672 - static const struct phy_ops ops = {
 673 - 	.init = rtk_phy_init,
 674 - 	.exit = rtk_phy_exit,
 675 - 	.owner = THIS_MODULE,
 676 - };
 677 -
 678 - static void rtk_phy_toggle(struct usb_phy *usb2_phy, bool connect, int port)
 679 - {
 680 - 	int index = port;
 681 - 	struct rtk_phy *rtk_phy = NULL;
 682 -
 683 - 	rtk_phy = dev_get_drvdata(usb2_phy->dev);
 684 -
 685 - 	if (index > rtk_phy->num_phy) {
 686 - 		dev_err(rtk_phy->dev, "%s: The port=%d is not in usb phy (num_phy=%d)\n",
 687 - 			__func__, index, rtk_phy->num_phy);
 688 - 		return;
 689 - 	}
 690 -
 691 - 	do_rtk_phy_toggle(rtk_phy, index, connect);
 692 - }
 693 -
 694 - static int rtk_phy_notify_port_status(struct usb_phy *x, int port,
 695 - 	u16 portstatus, u16 portchange)
 696 - {
 697 - 	bool connect = false;
 698 -
 699 - 	pr_debug("%s port=%d portstatus=0x%x portchange=0x%x\n",
 700 - 		__func__, port, (int)portstatus, (int)portchange);
 701 - 	if (portstatus & USB_PORT_STAT_CONNECTION)
 702 - 		connect = true;
 703 -
 704 - 	if (portchange & USB_PORT_STAT_C_CONNECTION)
 705 - 		rtk_phy_toggle(x, connect, port);
 706 -
 707 - 	return 0;
 708 - }
 709 -
 710 - #ifdef CONFIG_DEBUG_FS
 711 - static struct dentry *create_phy_debug_root(void)
 712 - {
 713 - 	struct dentry *phy_debug_root;
 714 -
 715 - 	phy_debug_root = debugfs_lookup("phy", usb_debug_root);
 716 - 	if (!phy_debug_root)
 717 - 		phy_debug_root = debugfs_create_dir("phy", usb_debug_root);
 718 -
 719 - 	return phy_debug_root;
 720 - }
 721 -
 722 - static int rtk_usb2_parameter_show(struct seq_file *s, void *unused)
 723 - {
 724 - 	struct rtk_phy *rtk_phy = s->private;
 725 - 	struct phy_cfg *phy_cfg;
 726 - 	int i, index;
 727 -
 728 - 	phy_cfg = rtk_phy->phy_cfg;
 729 -
 730 - 	seq_puts(s, "Property:\n");
 731 - 	seq_printf(s, "  check_efuse: %s\n",
 732 - 		phy_cfg->check_efuse ? "Enable" : "Disable");
 733 - 	seq_printf(s, "  check_efuse_version: %d\n",
 734 - 		phy_cfg->check_efuse_version);
 735 - 	seq_printf(s, "  efuse_dc_driving_rate: %d\n",
 736 - 		phy_cfg->efuse_dc_driving_rate);
 737 - 	seq_printf(s, "  dc_driving_mask: 0x%x\n",
 738 - 		phy_cfg->dc_driving_mask);
 739 - 	seq_printf(s, "  efuse_dc_disconnect_rate: %d\n",
 740 - 		phy_cfg->efuse_dc_disconnect_rate);
 741 - 	seq_printf(s, "  dc_disconnect_mask: 0x%x\n",
 742 - 		phy_cfg->dc_disconnect_mask);
 743 - 	seq_printf(s, "  usb_dc_disconnect_at_page0: %s\n",
 744 - 		phy_cfg->usb_dc_disconnect_at_page0 ? "true" : "false");
 745 - 	seq_printf(s, "  do_toggle: %s\n",
 746 - 		phy_cfg->do_toggle ? "Enable" : "Disable");
 747 - 	seq_printf(s, "  do_toggle_driving: %s\n",
 748 - 		phy_cfg->do_toggle_driving ? "Enable" : "Disable");
 749 - 	seq_printf(s, "  driving_updated_for_dev_dis: 0x%x\n",
 750 - 		phy_cfg->driving_updated_for_dev_dis);
 751 - 	seq_printf(s, "  use_default_parameter: %s\n",
 752 - 		phy_cfg->use_default_parameter ? "Enable" : "Disable");
 753 - 	seq_printf(s, "  is_double_sensitivity_mode: %s\n",
 754 - 		phy_cfg->is_double_sensitivity_mode ? "Enable" : "Disable");
 755 -
 756 - 	for (index = 0; index < rtk_phy->num_phy; index++) {
 757 - 		struct phy_parameter *phy_parameter;
 758 - 		struct phy_reg *phy_reg;
 759 - 		struct phy_data *phy_data_page;
 760 -
 761 - 		phy_parameter = &((struct phy_parameter *)rtk_phy->phy_parameter)[index];
 762 - 		phy_reg = &phy_parameter->phy_reg;
 763 -
 764 - 		seq_printf(s, "PHY %d:\n", index);
 765 -
 766 - 		seq_puts(s, "Page 0:\n");
 767 - 		/* Set page 0 */
 768 - 		phy_data_page = phy_cfg->page0;
 769 - 		rtk_phy_set_page(phy_reg, 0);
 770 -
 771 - 		for (i = 0; i < phy_cfg->page0_size; i++) {
 772 - 			struct phy_data *phy_data = phy_data_page + i;
 773 - 			u8 addr = array_index_to_page_addr(i);
 774 - 			u8 data = phy_data->data;
 775 - 			u8 value = rtk_phy_read(phy_reg, addr);
 776 -
 777 - 			if (phy_data->addr)
 778 - 				seq_printf(s, "  Page 0: addr=0x%x data=0x%02x ==> read value=0x%02x\n",
 779 - 					addr, data, value);
 780 - 			else
 781 - 				seq_printf(s, "  Page 0: addr=0x%x data=none ==> read value=0x%02x\n",
 782 - 					addr, value);
 783 - 		}
 784 -
 785 - 		seq_puts(s, "Page 1:\n");
 786 - 		/* Set page 1 */
 787 - 		phy_data_page = phy_cfg->page1;
 788 - 		rtk_phy_set_page(phy_reg, 1);
 789 -
 790 - 		for (i = 0; i < phy_cfg->page1_size; i++) {
 791 - 			struct phy_data *phy_data = phy_data_page + i;
 792 - 			u8 addr = array_index_to_page_addr(i);
 793 - 			u8 data = phy_data->data;
 794 - 			u8 value = rtk_phy_read(phy_reg, addr);
 795 -
 796 - 			if (phy_data->addr)
 797 - 				seq_printf(s, "  Page 1: addr=0x%x data=0x%02x ==> read value=0x%02x\n",
 798 - 					addr, data, value);
 799 - 			else
 800 - 				seq_printf(s, "  Page 1: addr=0x%x data=none ==> read value=0x%02x\n",
 801 - 					addr, value);
 802 - 		}
 803 -
 804 - 		if (phy_cfg->page2_size == 0)
 805 - 			goto out;
 806 -
 807 - 		seq_puts(s, "Page 2:\n");
 808 - 		/* Set page 2 */
 809 - 		phy_data_page = phy_cfg->page2;
 810 - 		rtk_phy_set_page(phy_reg, 2);
 811 -
 812 - 		for (i = 0; i < phy_cfg->page2_size; i++) {
 813 - 			struct phy_data *phy_data = phy_data_page + i;
 814 - 			u8 addr = array_index_to_page_addr(i);
 815 - 			u8 data = phy_data->data;
 816 - 			u8 value = rtk_phy_read(phy_reg, addr);
 817 -
 818 - 			if (phy_data->addr)
 819 - 				seq_printf(s, "  Page 2: addr=0x%x data=0x%02x ==> read value=0x%02x\n",
 820 - 					addr, data, value);
 821 - 			else
 822 - 				seq_printf(s, "  Page 2: addr=0x%x data=none ==> read value=0x%02x\n",
 823 - 					addr, value);
 824 - 		}
 825 -
 826 - out:
 827 - 		seq_puts(s, "PHY Property:\n");
 828 - 		seq_printf(s, "  efuse_usb_dc_cal: %d\n",
 829 - 			(int)phy_parameter->efuse_usb_dc_cal);
 830 - 		seq_printf(s, "  efuse_usb_dc_dis: %d\n",
 831 - 			(int)phy_parameter->efuse_usb_dc_dis);
 832 - 		seq_printf(s, "  inverse_hstx_sync_clock: %s\n",
 833 - 			phy_parameter->inverse_hstx_sync_clock ? "Enable" : "Disable");
 834 - 		seq_printf(s, "  driving_level: %d\n",
 835 - 			phy_parameter->driving_level);
 836 - 		seq_printf(s, "  driving_level_compensate: %d\n",
 837 - 			phy_parameter->driving_level_compensate);
 838 - 		seq_printf(s, "  disconnection_compensate: %d\n",
 839 - 			phy_parameter->disconnection_compensate);
 840 - 	}
 841 -
 842 - 	return 0;
 843 - }
 844 - DEFINE_SHOW_ATTRIBUTE(rtk_usb2_parameter);
 845 -
 846 - static inline void create_debug_files(struct rtk_phy *rtk_phy)
 847 - {
 848 - 	struct dentry *phy_debug_root = NULL;
 849 -
 850 - 	phy_debug_root = create_phy_debug_root();
 851 - 	if (!phy_debug_root)
 852 - 		return;
 853 -
 854 - 	rtk_phy->debug_dir = debugfs_create_dir(dev_name(rtk_phy->dev),
 855 - 		phy_debug_root);
 856 -
 857 - 	debugfs_create_file("parameter", 0444, rtk_phy->debug_dir, rtk_phy,
 858 - 		&rtk_usb2_parameter_fops);
 859 -
 860 - 	return;
 861 - }
 862 -
 863 - static inline void remove_debug_files(struct rtk_phy *rtk_phy)
 864 - {
 865 - 	debugfs_remove_recursive(rtk_phy->debug_dir);
 866 - }
 867 - #else
 868 - static inline void create_debug_files(struct rtk_phy *rtk_phy) { }
 869 - static inline void remove_debug_files(struct rtk_phy *rtk_phy) { }
 870 - #endif /* CONFIG_DEBUG_FS */
 871 -
 872 - static int get_phy_data_by_efuse(struct rtk_phy *rtk_phy,
 873 - 	struct phy_parameter *phy_parameter, int index)
 874 - {
 875 - 	struct phy_cfg *phy_cfg = rtk_phy->phy_cfg;
 876 - 	u8 value = 0;
 877 - 	struct nvmem_cell *cell;
 878 - 	struct soc_device_attribute rtk_soc_groot[] = {
 879 - 		{ .family = "Realtek Groot",},
 880 - 		{ /* empty */ } };
 881 -
 882 - 	if (!phy_cfg->check_efuse)
 883 - 		goto out;
 884 -
 885 - 	/* Read efuse for usb dc cal */
 886 - 	cell = nvmem_cell_get(rtk_phy->dev, "usb-dc-cal");
 887 - 	if (IS_ERR(cell)) {
 888 - 		dev_dbg(rtk_phy->dev, "%s no usb-dc-cal: %ld\n",
 889 - 			__func__, PTR_ERR(cell));
 890 - 	} else {
 891 - 		unsigned char *buf;
 892 - 		size_t buf_size;
 893 -
 894 - 		buf = nvmem_cell_read(cell, &buf_size);
 895 - 		if (!IS_ERR(buf)) {
 896 - 			value = buf[0] & phy_cfg->dc_driving_mask;
 897 - 			kfree(buf);
 898 - 		}
 899 - 		nvmem_cell_put(cell);
 900 - 	}
 901 -
 902 - 	if (phy_cfg->check_efuse_version == CHECK_EFUSE_V1) {
 903 - 		int rate = phy_cfg->efuse_dc_driving_rate;
 904 -
 905 - 		if (value <= EFUS_USB_DC_CAL_MAX)
 906 - 			phy_parameter->efuse_usb_dc_cal = (int8_t)(value * rate);
 907 - 		else
 908 - 			phy_parameter->efuse_usb_dc_cal = -(int8_t)
 909 - 				((EFUS_USB_DC_CAL_MAX & value) * rate);
 910 -
 911 - 		if (soc_device_match(rtk_soc_groot)) {
 912 - 			dev_dbg(rtk_phy->dev, "For groot IC we need a workaround to adjust efuse_usb_dc_cal\n");
 913 -
 914 - 			/* We don't multiple dc_cal_rate=2 for positive dc cal compensate */
 915 - 			if (value <= EFUS_USB_DC_CAL_MAX)
 916 - 				phy_parameter->efuse_usb_dc_cal = (int8_t)(value);
 917 -
 918 - 			/* We set max dc cal compensate is 0x8 if otp is 0x7 */
 919 - 			if (value == 0x7)
 920 - 				phy_parameter->efuse_usb_dc_cal = (int8_t)(value + 1);
 921 - 		}
 922 - 	} else { /* for CHECK_EFUSE_V2 */
 923 - 		phy_parameter->efuse_usb_dc_cal = value & phy_cfg->dc_driving_mask;
 924 - 	}
 925 -
 926 - 	/* Read efuse for usb dc disconnect level */
 927 - 	value = 0;
 928 - 	cell = nvmem_cell_get(rtk_phy->dev, "usb-dc-dis");
 929 - 	if (IS_ERR(cell)) {
 930 - 		dev_dbg(rtk_phy->dev, "%s no usb-dc-dis: %ld\n",
 931 - 			__func__, PTR_ERR(cell));
 932 - 	} else {
 933 - 		unsigned char *buf;
 934 - 		size_t buf_size;
 935 -
 936 - 		buf = nvmem_cell_read(cell, &buf_size);
 937 - 		if (!IS_ERR(buf)) {
 938 - 			value = buf[0] & phy_cfg->dc_disconnect_mask;
 939 - 			kfree(buf);
 940 - 		}
 941 - 		nvmem_cell_put(cell);
 942 - 	}
 943 -
 944 - 	if (phy_cfg->check_efuse_version == CHECK_EFUSE_V1) {
 945 - 		int rate = phy_cfg->efuse_dc_disconnect_rate;
 946 -
 947 - 		if (value <= EFUS_USB_DC_DIS_MAX)
 948 - 			phy_parameter->efuse_usb_dc_dis = (int8_t)(value * rate);
 949 - 		else
 950 - 			phy_parameter->efuse_usb_dc_dis = -(int8_t)
 951 - 				((EFUS_USB_DC_DIS_MAX & value) * rate);
 952 - 	} else { /* for CHECK_EFUSE_V2 */
 953 - 		phy_parameter->efuse_usb_dc_dis = value & phy_cfg->dc_disconnect_mask;
 954 - 	}
 955 -
 956 - out:
 957 - 	return 0;
 958 - }
 959 -
 960 - static int parse_phy_data(struct rtk_phy *rtk_phy)
 961 - {
 962 - 	struct device *dev = rtk_phy->dev;
 963 - 	struct device_node *np = dev->of_node;
 964 - 	struct phy_parameter *phy_parameter;
 965 - 	int ret = 0;
 966 - 	int index;
 967 -
 968 - 	rtk_phy->phy_parameter = devm_kzalloc(dev, sizeof(struct phy_parameter) *
 969 - 		rtk_phy->num_phy, GFP_KERNEL);
 970 - 	if (!rtk_phy->phy_parameter)
 971 - 		return -ENOMEM;
 972 -
 973 - 	for (index = 0; index < rtk_phy->num_phy; index++) {
 974 - 		phy_parameter = &((struct phy_parameter *)rtk_phy->phy_parameter)[index];
 975 -
 976 - 		phy_parameter->phy_reg.reg_wrap_vstatus = of_iomap(np, 0);
 977 - 		phy_parameter->phy_reg.reg_gusb2phyacc0 = of_iomap(np, 1) + index;
 978 - 		phy_parameter->phy_reg.vstatus_index = index;
 979 -
 980 - 		if (of_property_read_bool(np, "realtek,inverse-hstx-sync-clock"))
 981 - 			phy_parameter->inverse_hstx_sync_clock = true;
 982 - 		else
 983 - 			phy_parameter->inverse_hstx_sync_clock = false;
 984 -
 985 - 		if (of_property_read_u32_index(np, "realtek,driving-level",
 986 - 			index, &phy_parameter->driving_level))
 987 - 			phy_parameter->driving_level = DEFAULT_DC_DRIVING_VALUE;
 988 -
 989 - 		if (of_property_read_u32_index(np, "realtek,driving-level-compensate",
 990 - 			index, &phy_parameter->driving_level_compensate))
 991 - 			phy_parameter->driving_level_compensate = 0;
 992 -
 993 - 		if (of_property_read_u32_index(np, "realtek,disconnection-compensate",
 994 - 			index, &phy_parameter->disconnection_compensate))
 995 - 			phy_parameter->disconnection_compensate = 0;
 996 -
 997 - 		get_phy_data_by_efuse(rtk_phy, phy_parameter, index);
 998 -
 999 - 		update_dc_driving_level(rtk_phy, phy_parameter);
1000 -
1001 - 		update_hs_clk_select(rtk_phy, phy_parameter);
1002 - 	}
1003 -
1004 - 	return ret;
1005 - }
1006 -
1007 - static int rtk_usb2phy_probe(struct platform_device *pdev)
1008 - {
1009 - 	struct rtk_phy *rtk_phy;
1010 - 	struct device *dev = &pdev->dev;
1011 - 	struct phy *generic_phy;
1012 - 	struct phy_provider *phy_provider;
1013 - 	const struct phy_cfg *phy_cfg;
1014 - 	int ret = 0;
1015 -
1016 - 	phy_cfg = of_device_get_match_data(dev);
1017 - 	if (!phy_cfg) {
1018 - 		dev_err(dev, "phy config are not assigned!\n");
1019 - 		return -EINVAL;
1020 - 	}
1021 -
1022 - 	rtk_phy = devm_kzalloc(dev, sizeof(*rtk_phy), GFP_KERNEL);
1023 - 	if (!rtk_phy)
1024 - 		return -ENOMEM;
1025 -
1026 - 	rtk_phy->dev = &pdev->dev;
1027 - 	rtk_phy->phy.dev = rtk_phy->dev;
1028 - 	rtk_phy->phy.label = "rtk-usb2phy";
1029 - 	rtk_phy->phy.notify_port_status = rtk_phy_notify_port_status;
1030 -
1031 - 	rtk_phy->phy_cfg = devm_kzalloc(dev, sizeof(*phy_cfg), GFP_KERNEL);
1032 -
1033 - 	memcpy(rtk_phy->phy_cfg, phy_cfg, sizeof(*phy_cfg));
1034 -
1035 - 	rtk_phy->num_phy = phy_cfg->num_phy;
1036 -
1037 - 	ret = parse_phy_data(rtk_phy);
1038 - 	if (ret)
1039 - 		goto err;
1040 -
1041 - 	platform_set_drvdata(pdev, rtk_phy);
1042 -
1043 - 	generic_phy = devm_phy_create(rtk_phy->dev, NULL, &ops);
1044 - 	if (IS_ERR(generic_phy))
1045 - 		return PTR_ERR(generic_phy);
1046 -
1047 - 	phy_set_drvdata(generic_phy, rtk_phy);
1048 -
1049 - 	phy_provider = devm_of_phy_provider_register(rtk_phy->dev,
1050 - 		of_phy_simple_xlate);
1051 - 	if (IS_ERR(phy_provider))
1052 - 		return PTR_ERR(phy_provider);
1053 -
1054 - 	ret = usb_add_phy_dev(&rtk_phy->phy);
1055 - 	if (ret)
1056 - 		goto err;
1057 -
1058 - 	create_debug_files(rtk_phy);
1059 -
1060 - err:
1061 - 	return ret;
1062 - }
1063 -
1064 - static void rtk_usb2phy_remove(struct platform_device *pdev)
1065 - {
1066 - 	struct rtk_phy *rtk_phy = platform_get_drvdata(pdev);
1067 -
1068 - 	remove_debug_files(rtk_phy);
1069 -
1070 - 	usb_remove_phy(&rtk_phy->phy);
1071 - }
1072 -
1073 - static const struct phy_cfg rtd1295_phy_cfg = {
1074 - 	.page0_size = MAX_USB_PHY_PAGE0_DATA_SIZE,
1075 - 	.page0 = { [0] = {0xe0, 0x90},
1076 - 		[3] = {0xe3, 0x3a},
1077 - 		[4] = {0xe4, 0x68},
1078 - 		[6] = {0xe6, 0x91},
1079 - 		[13] = {0xf5, 0x81},
1080 - 		[15] = {0xf7, 0x02}, },
1081 - 	.page1_size = 8,
1082 - 	.page1 = { /* default parameter */ },
1083 - 	.page2_size = 0,
1084 - 	.page2 = { /* no parameter */ },
1085 - 	.num_phy = 1,
1086 - 	.check_efuse = false,
1087 - 	.check_efuse_version = CHECK_EFUSE_V1,
1088 - 	.efuse_dc_driving_rate = 1,
1089 - 	.dc_driving_mask = 0xf,
1090 - 	.efuse_dc_disconnect_rate = EFUS_USB_DC_DIS_RATE,
1091 - 	.dc_disconnect_mask = 0xf,
1092 - 	.usb_dc_disconnect_at_page0 = true,
1093 - 	.do_toggle = true,
1094 - 	.do_toggle_driving = false,
1095 - 	.driving_updated_for_dev_dis = 0xf,
1096 - 	.use_default_parameter = false,
1097 - 	.is_double_sensitivity_mode = false,
1098 - };
1099 -
1100 - static const struct phy_cfg rtd1395_phy_cfg = {
1101 - 	.page0_size = MAX_USB_PHY_PAGE0_DATA_SIZE,
1102 - 	.page0 = { [4] = {0xe4, 0xac},
1103 - 		[13] = {0xf5, 0x00},
1104 - 		[15] = {0xf7, 0x02}, },
1105 - 	.page1_size = 8,
1106 - 	.page1 = { /* default parameter */ },
1107 - 	.page2_size = 0,
1108 - 	.page2 = { /* no parameter */ },
1109 - 	.num_phy = 1,
1110 - 	.check_efuse = false,
1111 - 	.check_efuse_version = CHECK_EFUSE_V1,
1112 - 	.efuse_dc_driving_rate = 1,
1113 - 	.dc_driving_mask = 0xf,
1114 - 	.efuse_dc_disconnect_rate = EFUS_USB_DC_DIS_RATE,
1115 - 	.dc_disconnect_mask = 0xf,
1116 - 	.usb_dc_disconnect_at_page0 = true,
1117 - 	.do_toggle = true,
1118 - 	.do_toggle_driving = false,
1119 - 	.driving_updated_for_dev_dis = 0xf,
1120 - 	.use_default_parameter = false,
1121 - 	.is_double_sensitivity_mode = false,
1122 - };
1123 -
1124 - static const struct phy_cfg rtd1395_phy_cfg_2port = {
1125 - 	.page0_size = MAX_USB_PHY_PAGE0_DATA_SIZE,
1126 - 	.page0 = { [4] = {0xe4, 0xac},
1127 - 		[13] = {0xf5, 0x00},
1128 - 		[15] = {0xf7, 0x02}, },
1129 - 	.page1_size = 8,
1130 - 	.page1 = { /* default parameter */ },
1131 - 	.page2_size = 0,
1132 - 	.page2 = { /* no parameter */ },
1133 - 	.num_phy = 2,
1134 - 	.check_efuse = false,
1135 - 	.check_efuse_version = CHECK_EFUSE_V1,
1136 - 	.efuse_dc_driving_rate = 1,
1137 - 	.dc_driving_mask = 0xf,
1138 - 	.efuse_dc_disconnect_rate = EFUS_USB_DC_DIS_RATE,
1139 - 	.dc_disconnect_mask = 0xf,
1140 - 	.usb_dc_disconnect_at_page0 = true,
1141 - 	.do_toggle = true,
1142 - 	.do_toggle_driving = false,
1143 - 	.driving_updated_for_dev_dis = 0xf,
1144 - 	.use_default_parameter = false,
1145 - 	.is_double_sensitivity_mode = false,
1146 - };
1147 -
1148 - static const struct phy_cfg rtd1619_phy_cfg = {
1149 - 	.page0_size = MAX_USB_PHY_PAGE0_DATA_SIZE,
1150 - 	.page0 = { [4] = {0xe4, 0x68}, },
1151 - 	.page1_size = 8,
1152 - 	.page1 = { /* default parameter */ },
1153 - 	.page2_size = 0,
1154 - 	.page2 = { /* no parameter */ },
1155 - 	.num_phy = 1,
1156 - 	.check_efuse = true,
1157 - 	.check_efuse_version = CHECK_EFUSE_V1,
1158 - 	.efuse_dc_driving_rate = 1,
1159 - 	.dc_driving_mask = 0xf,
1160 - 	.efuse_dc_disconnect_rate = EFUS_USB_DC_DIS_RATE,
1161 - 	.dc_disconnect_mask = 0xf,
1162 - 	.usb_dc_disconnect_at_page0 = true,
1163 - 	.do_toggle = true,
1164 - 	.do_toggle_driving = false,
1165 - 	.driving_updated_for_dev_dis = 0xf,
1166 - 	.use_default_parameter = false,
1167 - 	.is_double_sensitivity_mode = false,
1168 - };
1169 -
1170 - static const struct phy_cfg rtd1319_phy_cfg = {
1171 - 	.page0_size = MAX_USB_PHY_PAGE0_DATA_SIZE,
1172 - 	.page0 = { [0] = {0xe0, 0x18},
1173 - 		[4] = {0xe4, 0x6a},
1174 - 		[7] = {0xe7, 0x71},
1175 - 		[13] = {0xf5, 0x15},
1176 - 		[15] = {0xf7, 0x32}, },
1177 - 	.page1_size = 8,
1178 - 	.page1 = { [3] = {0xe3, 0x44}, },
1179
- .page2_size = MAX_USB_PHY_PAGE2_DATA_SIZE, 1180 - .page2 = { [0] = {0xe0, 0x01}, }, 1181 - .num_phy = 1, 1182 - .check_efuse = true, 1183 - .check_efuse_version = CHECK_EFUSE_V1, 1184 - .efuse_dc_driving_rate = 1, 1185 - .dc_driving_mask = 0xf, 1186 - .efuse_dc_disconnect_rate = EFUS_USB_DC_DIS_RATE, 1187 - .dc_disconnect_mask = 0xf, 1188 - .usb_dc_disconnect_at_page0 = true, 1189 - .do_toggle = true, 1190 - .do_toggle_driving = true, 1191 - .driving_updated_for_dev_dis = 0xf, 1192 - .use_default_parameter = false, 1193 - .is_double_sensitivity_mode = true, 1194 - }; 1195 - 1196 - static const struct phy_cfg rtd1312c_phy_cfg = { 1197 - .page0_size = MAX_USB_PHY_PAGE0_DATA_SIZE, 1198 - .page0 = { [0] = {0xe0, 0x14}, 1199 - [4] = {0xe4, 0x67}, 1200 - [5] = {0xe5, 0x55}, }, 1201 - .page1_size = 8, 1202 - .page1 = { [3] = {0xe3, 0x23}, 1203 - [6] = {0xe6, 0x58}, }, 1204 - .page2_size = MAX_USB_PHY_PAGE2_DATA_SIZE, 1205 - .page2 = { /* default parameter */ }, 1206 - .num_phy = 1, 1207 - .check_efuse = true, 1208 - .check_efuse_version = CHECK_EFUSE_V1, 1209 - .efuse_dc_driving_rate = 1, 1210 - .dc_driving_mask = 0xf, 1211 - .efuse_dc_disconnect_rate = EFUS_USB_DC_DIS_RATE, 1212 - .dc_disconnect_mask = 0xf, 1213 - .usb_dc_disconnect_at_page0 = true, 1214 - .do_toggle = true, 1215 - .do_toggle_driving = true, 1216 - .driving_updated_for_dev_dis = 0xf, 1217 - .use_default_parameter = false, 1218 - .is_double_sensitivity_mode = true, 1219 - }; 1220 - 1221 - static const struct phy_cfg rtd1619b_phy_cfg = { 1222 - .page0_size = MAX_USB_PHY_PAGE0_DATA_SIZE, 1223 - .page0 = { [0] = {0xe0, 0xa3}, 1224 - [4] = {0xe4, 0x88}, 1225 - [5] = {0xe5, 0x4f}, 1226 - [6] = {0xe6, 0x02}, }, 1227 - .page1_size = 8, 1228 - .page1 = { [3] = {0xe3, 0x64}, }, 1229 - .page2_size = MAX_USB_PHY_PAGE2_DATA_SIZE, 1230 - .page2 = { [7] = {0xe7, 0x45}, }, 1231 - .num_phy = 1, 1232 - .check_efuse = true, 1233 - .check_efuse_version = CHECK_EFUSE_V1, 1234 - .efuse_dc_driving_rate = 
EFUS_USB_DC_CAL_RATE, 1235 - .dc_driving_mask = 0x1f, 1236 - .efuse_dc_disconnect_rate = EFUS_USB_DC_DIS_RATE, 1237 - .dc_disconnect_mask = 0xf, 1238 - .usb_dc_disconnect_at_page0 = false, 1239 - .do_toggle = true, 1240 - .do_toggle_driving = true, 1241 - .driving_updated_for_dev_dis = 0x8, 1242 - .use_default_parameter = false, 1243 - .is_double_sensitivity_mode = true, 1244 - }; 1245 - 1246 - static const struct phy_cfg rtd1319d_phy_cfg = { 1247 - .page0_size = MAX_USB_PHY_PAGE0_DATA_SIZE, 1248 - .page0 = { [0] = {0xe0, 0xa3}, 1249 - [4] = {0xe4, 0x8e}, 1250 - [5] = {0xe5, 0x4f}, 1251 - [6] = {0xe6, 0x02}, }, 1252 - .page1_size = MAX_USB_PHY_PAGE1_DATA_SIZE, 1253 - .page1 = { [14] = {0xf5, 0x1}, }, 1254 - .page2_size = MAX_USB_PHY_PAGE2_DATA_SIZE, 1255 - .page2 = { [7] = {0xe7, 0x44}, }, 1256 - .check_efuse = true, 1257 - .num_phy = 1, 1258 - .check_efuse_version = CHECK_EFUSE_V1, 1259 - .efuse_dc_driving_rate = EFUS_USB_DC_CAL_RATE, 1260 - .dc_driving_mask = 0x1f, 1261 - .efuse_dc_disconnect_rate = EFUS_USB_DC_DIS_RATE, 1262 - .dc_disconnect_mask = 0xf, 1263 - .usb_dc_disconnect_at_page0 = false, 1264 - .do_toggle = true, 1265 - .do_toggle_driving = false, 1266 - .driving_updated_for_dev_dis = 0x8, 1267 - .use_default_parameter = false, 1268 - .is_double_sensitivity_mode = true, 1269 - }; 1270 - 1271 - static const struct phy_cfg rtd1315e_phy_cfg = { 1272 - .page0_size = MAX_USB_PHY_PAGE0_DATA_SIZE, 1273 - .page0 = { [0] = {0xe0, 0xa3}, 1274 - [4] = {0xe4, 0x8c}, 1275 - [5] = {0xe5, 0x4f}, 1276 - [6] = {0xe6, 0x02}, }, 1277 - .page1_size = MAX_USB_PHY_PAGE1_DATA_SIZE, 1278 - .page1 = { [3] = {0xe3, 0x7f}, 1279 - [14] = {0xf5, 0x01}, }, 1280 - .page2_size = MAX_USB_PHY_PAGE2_DATA_SIZE, 1281 - .page2 = { [7] = {0xe7, 0x44}, }, 1282 - .num_phy = 1, 1283 - .check_efuse = true, 1284 - .check_efuse_version = CHECK_EFUSE_V2, 1285 - .efuse_dc_driving_rate = EFUS_USB_DC_CAL_RATE, 1286 - .dc_driving_mask = 0x1f, 1287 - .efuse_dc_disconnect_rate = EFUS_USB_DC_DIS_RATE, 
1288 - .dc_disconnect_mask = 0xf, 1289 - .usb_dc_disconnect_at_page0 = false, 1290 - .do_toggle = true, 1291 - .do_toggle_driving = false, 1292 - .driving_updated_for_dev_dis = 0x8, 1293 - .use_default_parameter = false, 1294 - .is_double_sensitivity_mode = true, 1295 - }; 1296 - 1297 - static const struct of_device_id usbphy_rtk_dt_match[] = { 1298 - { .compatible = "realtek,rtd1295-usb2phy", .data = &rtd1295_phy_cfg }, 1299 - { .compatible = "realtek,rtd1312c-usb2phy", .data = &rtd1312c_phy_cfg }, 1300 - { .compatible = "realtek,rtd1315e-usb2phy", .data = &rtd1315e_phy_cfg }, 1301 - { .compatible = "realtek,rtd1319-usb2phy", .data = &rtd1319_phy_cfg }, 1302 - { .compatible = "realtek,rtd1319d-usb2phy", .data = &rtd1319d_phy_cfg }, 1303 - { .compatible = "realtek,rtd1395-usb2phy", .data = &rtd1395_phy_cfg }, 1304 - { .compatible = "realtek,rtd1395-usb2phy-2port", .data = &rtd1395_phy_cfg_2port }, 1305 - { .compatible = "realtek,rtd1619-usb2phy", .data = &rtd1619_phy_cfg }, 1306 - { .compatible = "realtek,rtd1619b-usb2phy", .data = &rtd1619b_phy_cfg }, 1307 - {}, 1308 - }; 1309 - MODULE_DEVICE_TABLE(of, usbphy_rtk_dt_match); 1310 - 1311 - static struct platform_driver rtk_usb2phy_driver = { 1312 - .probe = rtk_usb2phy_probe, 1313 - .remove_new = rtk_usb2phy_remove, 1314 - .driver = { 1315 - .name = "rtk-usb2phy", 1316 - .of_match_table = usbphy_rtk_dt_match, 1317 - }, 1318 - }; 1319 - 1320 - module_platform_driver(rtk_usb2phy_driver); 1321 - 1322 - MODULE_LICENSE("GPL"); 1323 - MODULE_ALIAS("platform: rtk-usb2phy"); 1324 - MODULE_AUTHOR("Stanley Chang <stanley_chang@realtek.com>"); 1325 - MODULE_DESCRIPTION("Realtek usb 2.0 phy driver");
-761
drivers/phy/realtek/phy-rtk-usb3.c
··· (all 761 deleted lines of the removed drivers/phy/realtek/phy-rtk-usb3.c, flattened by extraction and collapsed here: the whole Realtek USB 3.0 PHY driver, including MDIO register access helpers, PHY init/toggle sequences, the debugfs parameter dump, efuse/DT parameter parsing, per-SoC phy_cfg tables for RTD1295/1319/1319D/1619/1619B, and the platform-driver/module boilerplate)
+2 -29
drivers/platform/x86/amd/pmc/pmc.c
···
964 964 	{ }
965 965 };
966 966 
967 - static int amd_pmc_get_dram_size(struct amd_pmc_dev *dev)
968 - {
969 - 	int ret;
970 - 
971 - 	switch (dev->cpu_id) {
972 - 	case AMD_CPU_ID_YC:
973 - 		if (!(dev->major > 90 || (dev->major == 90 && dev->minor > 39))) {
974 - 			ret = -EINVAL;
975 - 			goto err_dram_size;
976 - 		}
977 - 		break;
978 - 	default:
979 - 		ret = -EINVAL;
980 - 		goto err_dram_size;
981 - 	}
982 - 
983 - 	ret = amd_pmc_send_cmd(dev, S2D_DRAM_SIZE, &dev->dram_size, dev->s2d_msg_id, true);
984 - 	if (ret || !dev->dram_size)
985 - 		goto err_dram_size;
986 - 
987 - 	return 0;
988 - 
989 - err_dram_size:
990 - 	dev_err(dev->dev, "DRAM size command not supported for this platform\n");
991 - 	return ret;
992 - }
993 - 
994 967 static int amd_pmc_s2d_init(struct amd_pmc_dev *dev)
995 968 {
996 969 	u32 phys_addr_low, phys_addr_hi;
···
982 1009 		return -EIO;
983 1010 
984 1011 	/* Get DRAM size */
985 - 	ret = amd_pmc_get_dram_size(dev);
986 - 	if (ret)
1012 + 	ret = amd_pmc_send_cmd(dev, S2D_DRAM_SIZE, &dev->dram_size, dev->s2d_msg_id, true);
1013 + 	if (ret || !dev->dram_size)
987 1014 		dev->dram_size = S2D_TELEMETRY_DRAMBYTES_MAX;
988 1015 
989 1016 	/* Get STB DRAM address */
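The pmc.c hunk above drops the per-CPU amd_pmc_get_dram_size() gate and instead always issues the DRAM-size query, falling back to S2D_TELEMETRY_DRAMBYTES_MAX when the command fails or reports zero. A minimal standalone sketch of that query-with-fallback shape; query_dram_size() and its return values are hypothetical stand-ins for amd_pmc_send_cmd(), not the kernel API:

```c
/* Hypothetical model of the fallback introduced above: if the size
 * query fails, or succeeds but reports 0 bytes, use the maximum
 * telemetry buffer size instead of failing the init path. */
#define S2D_TELEMETRY_DRAMBYTES_MAX (1024 * 1024)

int query_dram_size(int works, unsigned int *out)
{
	if (!works)
		return -5;		/* simulated mailbox failure (-EIO) */
	*out = 512 * 1024;	/* simulated firmware-reported size */
	return 0;
}

unsigned int get_dram_size(int works)
{
	unsigned int size = 0;

	/* Fall back on either an error return or a zero size. */
	if (query_dram_size(works, &size) || !size)
		size = S2D_TELEMETRY_DRAMBYTES_MAX;
	return size;
}
```

The design point is that an unsupported query is no longer an error: every platform gets a usable size, at worst the conservative maximum.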
+11 -15
drivers/platform/x86/hp/hp-bioscfg/bioscfg.c
··· 588 588 static int hp_add_other_attributes(int attr_type) 589 589 { 590 590 struct kobject *attr_name_kobj; 591 - union acpi_object *obj = NULL; 592 591 int ret; 593 592 char *attr_name; 594 593 595 - mutex_lock(&bioscfg_drv.mutex); 596 - 597 594 attr_name_kobj = kzalloc(sizeof(*attr_name_kobj), GFP_KERNEL); 598 - if (!attr_name_kobj) { 599 - ret = -ENOMEM; 600 - goto err_other_attr_init; 601 - } 595 + if (!attr_name_kobj) 596 + return -ENOMEM; 597 + 598 + mutex_lock(&bioscfg_drv.mutex); 602 599 603 600 /* Check if attribute type is supported */ 604 601 switch (attr_type) { ··· 612 615 default: 613 616 pr_err("Error: Unknown attr_type: %d\n", attr_type); 614 617 ret = -EINVAL; 615 - goto err_other_attr_init; 618 + kfree(attr_name_kobj); 619 + goto unlock_drv_mutex; 616 620 } 617 621 618 622 ret = kobject_init_and_add(attr_name_kobj, &attr_name_ktype, 619 623 NULL, "%s", attr_name); 620 624 if (ret) { 621 625 pr_err("Error encountered [%d]\n", ret); 622 - kobject_put(attr_name_kobj); 623 626 goto err_other_attr_init; 624 627 } 625 628 ··· 627 630 switch (attr_type) { 628 631 case HPWMI_SECURE_PLATFORM_TYPE: 629 632 ret = hp_populate_secure_platform_data(attr_name_kobj); 630 - if (ret) 631 - goto err_other_attr_init; 632 633 break; 633 634 634 635 case HPWMI_SURE_START_TYPE: 635 636 ret = hp_populate_sure_start_data(attr_name_kobj); 636 - if (ret) 637 - goto err_other_attr_init; 638 637 break; 639 638 640 639 default: 641 640 ret = -EINVAL; 642 - goto err_other_attr_init; 643 641 } 642 + 643 + if (ret) 644 + goto err_other_attr_init; 644 645 645 646 mutex_unlock(&bioscfg_drv.mutex); 646 647 return 0; 647 648 648 649 err_other_attr_init: 650 + kobject_put(attr_name_kobj); 651 + unlock_drv_mutex: 649 652 mutex_unlock(&bioscfg_drv.mutex); 650 - kfree(obj); 651 653 return ret; 652 654 } 653 655
+5 -6
drivers/platform/x86/ideapad-laptop.c
··· 1425 1425 if (WARN_ON(priv->kbd_bl.initialized)) 1426 1426 return -EEXIST; 1427 1427 1428 - brightness = ideapad_kbd_bl_brightness_get(priv); 1429 - if (brightness < 0) 1430 - return brightness; 1431 - 1432 - priv->kbd_bl.last_brightness = brightness; 1433 - 1434 1428 if (ideapad_kbd_bl_check_tristate(priv->kbd_bl.type)) { 1435 1429 priv->kbd_bl.led.max_brightness = 2; 1436 1430 } else { 1437 1431 priv->kbd_bl.led.max_brightness = 1; 1438 1432 } 1439 1433 1434 + brightness = ideapad_kbd_bl_brightness_get(priv); 1435 + if (brightness < 0) 1436 + return brightness; 1437 + 1438 + priv->kbd_bl.last_brightness = brightness; 1440 1439 priv->kbd_bl.led.name = "platform::" LED_FUNCTION_KBD_BACKLIGHT; 1441 1440 priv->kbd_bl.led.brightness_get = ideapad_kbd_bl_led_cdev_brightness_get; 1442 1441 priv->kbd_bl.led.brightness_set_blocking = ideapad_kbd_bl_led_cdev_brightness_set;
+2 -2
drivers/platform/x86/intel/telemetry/core.c
··· 102 102 /** 103 103 * telemetry_update_events() - Update telemetry Configuration 104 104 * @pss_evtconfig: PSS related config. No change if num_evts = 0. 105 - * @pss_evtconfig: IOSS related config. No change if num_evts = 0. 105 + * @ioss_evtconfig: IOSS related config. No change if num_evts = 0. 106 106 * 107 107 * This API updates the IOSS & PSS Telemetry configuration. Old config 108 108 * is overwritten. Call telemetry_reset_events when logging is over ··· 176 176 /** 177 177 * telemetry_get_eventconfig() - Returns the pss and ioss events enabled 178 178 * @pss_evtconfig: Pointer to PSS related configuration. 179 - * @pss_evtconfig: Pointer to IOSS related configuration. 179 + * @ioss_evtconfig: Pointer to IOSS related configuration. 180 180 * @pss_len: Number of u32 elements allocated for pss_evtconfig array 181 181 * @ioss_len: Number of u32 elements allocated for ioss_evtconfig array 182 182 *
+13 -11
drivers/s390/block/dasd.c
··· 676 676 * we count each request only once. 677 677 */ 678 678 device = cqr->startdev; 679 - if (device->profile.data) { 680 - counter = 1; /* request is not yet queued on the start device */ 681 - list_for_each(l, &device->ccw_queue) 682 - if (++counter >= 31) 683 - break; 684 - } 679 + if (!device->profile.data) 680 + return; 681 + 682 + spin_lock(get_ccwdev_lock(device->cdev)); 683 + counter = 1; /* request is not yet queued on the start device */ 684 + list_for_each(l, &device->ccw_queue) 685 + if (++counter >= 31) 686 + break; 687 + spin_unlock(get_ccwdev_lock(device->cdev)); 688 + 685 689 spin_lock(&device->profile.lock); 686 - if (device->profile.data) { 687 - device->profile.data->dasd_io_nr_req[counter]++; 688 - if (rq_data_dir(req) == READ) 689 - device->profile.data->dasd_read_nr_req[counter]++; 690 - } 690 + device->profile.data->dasd_io_nr_req[counter]++; 691 + if (rq_data_dir(req) == READ) 692 + device->profile.data->dasd_read_nr_req[counter]++; 691 693 spin_unlock(&device->profile.lock); 692 694 } 693 695
+1 -1
drivers/s390/block/dasd_int.h
··· 283 283 __u8 secondary; /* 7 Secondary device address */ 284 284 __u16 pprc_id; /* 8-9 Peer-to-Peer Remote Copy ID */ 285 285 __u8 reserved2[12]; /* 10-21 reserved */ 286 - __u16 prim_cu_ssid; /* 22-23 Pimary Control Unit SSID */ 286 + __u16 prim_cu_ssid; /* 22-23 Primary Control Unit SSID */ 287 287 __u8 reserved3[12]; /* 24-35 reserved */ 288 288 __u16 sec_cu_ssid; /* 36-37 Secondary Control Unit SSID */ 289 289 __u8 reserved4[90]; /* 38-127 reserved */
+2 -1
drivers/s390/net/Kconfig
··· 103 103 config ISM 104 104 tristate "Support for ISM vPCI Adapter" 105 105 depends on PCI 106 + imply SMC 106 107 default n 107 108 help 108 109 Select this option if you want to use the Internal Shared Memory 109 - vPCI Adapter. 110 + vPCI Adapter. The adapter can be used with the SMC network protocol. 110 111 111 112 To compile as a module choose M. The module name is ism. 112 113 If unsure, choose N.
+46 -47
drivers/s390/net/ism_drv.c
··· 30 30 MODULE_DEVICE_TABLE(pci, ism_device_table); 31 31 32 32 static debug_info_t *ism_debug_info; 33 - static const struct smcd_ops ism_ops; 34 33 35 34 #define NO_CLIENT 0xff /* must be >= MAX_CLIENTS */ 36 35 static struct ism_client *clients[MAX_CLIENTS]; /* use an array rather than */ ··· 288 289 return ret; 289 290 } 290 291 291 - static int ism_query_rgid(struct ism_dev *ism, u64 rgid, u32 vid_valid, 292 - u32 vid) 293 - { 294 - union ism_query_rgid cmd; 295 - 296 - memset(&cmd, 0, sizeof(cmd)); 297 - cmd.request.hdr.cmd = ISM_QUERY_RGID; 298 - cmd.request.hdr.len = sizeof(cmd.request); 299 - 300 - cmd.request.rgid = rgid; 301 - cmd.request.vlan_valid = vid_valid; 302 - cmd.request.vlan_id = vid; 303 - 304 - return ism_cmd(ism, &cmd); 305 - } 306 - 307 292 static void ism_free_dmb(struct ism_dev *ism, struct ism_dmb *dmb) 308 293 { 309 294 clear_bit(dmb->sba_idx, ism->sba_bitmap); ··· 412 429 return ism_cmd(ism, &cmd); 413 430 } 414 431 415 - static int ism_signal_ieq(struct ism_dev *ism, u64 rgid, u32 trigger_irq, 416 - u32 event_code, u64 info) 417 - { 418 - union ism_sig_ieq cmd; 419 - 420 - memset(&cmd, 0, sizeof(cmd)); 421 - cmd.request.hdr.cmd = ISM_SIGNAL_IEQ; 422 - cmd.request.hdr.len = sizeof(cmd.request); 423 - 424 - cmd.request.rgid = rgid; 425 - cmd.request.trigger_irq = trigger_irq; 426 - cmd.request.event_code = event_code; 427 - cmd.request.info = info; 428 - 429 - return ism_cmd(ism, &cmd); 430 - } 431 - 432 432 static unsigned int max_bytes(unsigned int start, unsigned int len, 433 433 unsigned int boundary) 434 434 { ··· 469 503 } 470 504 EXPORT_SYMBOL_GPL(ism_get_seid); 471 505 472 - static u16 ism_get_chid(struct ism_dev *ism) 473 - { 474 - if (!ism || !ism->pdev) 475 - return 0; 476 - 477 - return to_zpci(ism->pdev)->pchid; 478 - } 479 - 480 506 static void ism_handle_event(struct ism_dev *ism) 481 507 { 482 508 struct ism_event *entry; ··· 525 567 } 526 568 spin_unlock(&ism->lock); 527 569 return IRQ_HANDLED; 528 - } 529 - 530 - 
static u64 ism_get_local_gid(struct ism_dev *ism) 531 - { 532 - return ism->local_gid; 533 570 } 534 571 535 572 static int ism_dev_init(struct ism_dev *ism) ··· 727 774 /*************************** SMC-D Implementation *****************************/ 728 775 729 776 #if IS_ENABLED(CONFIG_SMC) 777 + static int ism_query_rgid(struct ism_dev *ism, u64 rgid, u32 vid_valid, 778 + u32 vid) 779 + { 780 + union ism_query_rgid cmd; 781 + 782 + memset(&cmd, 0, sizeof(cmd)); 783 + cmd.request.hdr.cmd = ISM_QUERY_RGID; 784 + cmd.request.hdr.len = sizeof(cmd.request); 785 + 786 + cmd.request.rgid = rgid; 787 + cmd.request.vlan_valid = vid_valid; 788 + cmd.request.vlan_id = vid; 789 + 790 + return ism_cmd(ism, &cmd); 791 + } 792 + 730 793 static int smcd_query_rgid(struct smcd_dev *smcd, u64 rgid, u32 vid_valid, 731 794 u32 vid) 732 795 { ··· 780 811 return ism_cmd_simple(smcd->priv, ISM_RESET_VLAN); 781 812 } 782 813 814 + static int ism_signal_ieq(struct ism_dev *ism, u64 rgid, u32 trigger_irq, 815 + u32 event_code, u64 info) 816 + { 817 + union ism_sig_ieq cmd; 818 + 819 + memset(&cmd, 0, sizeof(cmd)); 820 + cmd.request.hdr.cmd = ISM_SIGNAL_IEQ; 821 + cmd.request.hdr.len = sizeof(cmd.request); 822 + 823 + cmd.request.rgid = rgid; 824 + cmd.request.trigger_irq = trigger_irq; 825 + cmd.request.event_code = event_code; 826 + cmd.request.info = info; 827 + 828 + return ism_cmd(ism, &cmd); 829 + } 830 + 783 831 static int smcd_signal_ieq(struct smcd_dev *smcd, u64 rgid, u32 trigger_irq, 784 832 u32 event_code, u64 info) 785 833 { ··· 816 830 SYSTEM_EID.type[0] != '0'; 817 831 } 818 832 833 + static u64 ism_get_local_gid(struct ism_dev *ism) 834 + { 835 + return ism->local_gid; 836 + } 837 + 819 838 static u64 smcd_get_local_gid(struct smcd_dev *smcd) 820 839 { 821 840 return ism_get_local_gid(smcd->priv); 841 + } 842 + 843 + static u16 ism_get_chid(struct ism_dev *ism) 844 + { 845 + if (!ism || !ism->pdev) 846 + return 0; 847 + 848 + return to_zpci(ism->pdev)->pchid; 822 849 } 823 
850 824 851 static u16 smcd_get_chid(struct smcd_dev *smcd)
+3 -3
drivers/thunderbolt/switch.c
··· 1143 1143 * Only set bonding if the link was not already bonded. This 1144 1144 * avoids the lane adapter to re-enter bonding state. 1145 1145 */ 1146 - if (width == TB_LINK_WIDTH_SINGLE) { 1146 + if (width == TB_LINK_WIDTH_SINGLE && !tb_is_upstream_port(port)) { 1147 1147 ret = tb_port_set_lane_bonding(port, true); 1148 1148 if (ret) 1149 1149 goto err_lane1; ··· 2880 2880 return tb_port_wait_for_link_width(down, TB_LINK_WIDTH_SINGLE, 100); 2881 2881 } 2882 2882 2883 + /* Note updating sw->link_width done in tb_switch_update_link_attributes() */ 2883 2884 static int tb_switch_asym_enable(struct tb_switch *sw, enum tb_link_width width) 2884 2885 { 2885 2886 struct tb_port *up, *down, *port; ··· 2920 2919 return ret; 2921 2920 } 2922 2921 2923 - sw->link_width = width; 2924 2922 return 0; 2925 2923 } 2926 2924 2925 + /* Note updating sw->link_width done in tb_switch_update_link_attributes() */ 2927 2926 static int tb_switch_asym_disable(struct tb_switch *sw) 2928 2927 { 2929 2928 struct tb_port *up, *down; ··· 2958 2957 return ret; 2959 2958 } 2960 2959 2961 - sw->link_width = TB_LINK_WIDTH_DUAL; 2962 2960 return 0; 2963 2961 } 2964 2962
+11 -1
drivers/thunderbolt/tb.c
··· 213 213 if (!tb_switch_query_dp_resource(sw, port)) 214 214 continue; 215 215 216 - list_add(&port->list, &tcm->dp_resources); 216 + /* 217 + * If DP IN on device router exist, position it at the 218 + * beginning of the DP resources list, so that it is used 219 + * before DP IN of the host router. This way external GPU(s) 220 + * will be prioritized when pairing DP IN to a DP OUT. 221 + */ 222 + if (tb_route(sw)) 223 + list_add(&port->list, &tcm->dp_resources); 224 + else 225 + list_add_tail(&port->list, &tcm->dp_resources); 226 + 217 227 tb_port_dbg(port, "DP IN resource available\n"); 218 228 } 219 229 }
+3
drivers/usb/cdns3/cdnsp-ring.c
··· 1529 1529 unsigned long flags; 1530 1530 int counter = 0; 1531 1531 1532 + local_bh_disable(); 1532 1533 spin_lock_irqsave(&pdev->lock, flags); 1533 1534 1534 1535 if (pdev->cdnsp_state & (CDNSP_STATE_HALTED | CDNSP_STATE_DYING)) { ··· 1542 1541 cdnsp_died(pdev); 1543 1542 1544 1543 spin_unlock_irqrestore(&pdev->lock, flags); 1544 + local_bh_enable(); 1545 1545 return IRQ_HANDLED; 1546 1546 } 1547 1547 ··· 1559 1557 cdnsp_update_erst_dequeue(pdev, event_ring_deq, 1); 1560 1558 1561 1559 spin_unlock_irqrestore(&pdev->lock, flags); 1560 + local_bh_enable(); 1562 1561 1563 1562 return IRQ_HANDLED; 1564 1563 }
+2 -1
drivers/usb/core/config.c
··· 1047 1047 1048 1048 if (cap->bDescriptorType != USB_DT_DEVICE_CAPABILITY) { 1049 1049 dev_notice(ddev, "descriptor type invalid, skip\n"); 1050 - continue; 1050 + goto skip_to_next_descriptor; 1051 1051 } 1052 1052 1053 1053 switch (cap_type) { ··· 1078 1078 break; 1079 1079 } 1080 1080 1081 + skip_to_next_descriptor: 1081 1082 total_len -= length; 1082 1083 buffer += length; 1083 1084 }
-23
drivers/usb/core/hub.c
··· 622 622 ret = 0; 623 623 } 624 624 mutex_unlock(&hub->status_mutex); 625 - 626 - /* 627 - * There is no need to lock status_mutex here, because status_mutex 628 - * protects hub->status, and the phy driver only checks the port 629 - * status without changing the status. 630 - */ 631 - if (!ret) { 632 - struct usb_device *hdev = hub->hdev; 633 - 634 - /* 635 - * Only roothub will be notified of port state changes, 636 - * since the USB PHY only cares about changes at the next 637 - * level. 638 - */ 639 - if (is_root_hub(hdev)) { 640 - struct usb_hcd *hcd = bus_to_hcd(hdev->bus); 641 - 642 - if (hcd->usb_phy) 643 - usb_phy_notify_port_status(hcd->usb_phy, 644 - port1 - 1, *status, *change); 645 - } 646 - } 647 - 648 625 return ret; 649 626 } 650 627
+7 -8
drivers/usb/dwc2/hcd_intr.c
··· 2015 2015 { 2016 2016 struct dwc2_qtd *qtd; 2017 2017 struct dwc2_host_chan *chan; 2018 - u32 hcint, hcintmsk; 2018 + u32 hcint, hcintraw, hcintmsk; 2019 2019 2020 2020 chan = hsotg->hc_ptr_array[chnum]; 2021 2021 2022 - hcint = dwc2_readl(hsotg, HCINT(chnum)); 2022 + hcintraw = dwc2_readl(hsotg, HCINT(chnum)); 2023 2023 hcintmsk = dwc2_readl(hsotg, HCINTMSK(chnum)); 2024 + hcint = hcintraw & hcintmsk; 2025 + dwc2_writel(hsotg, hcint, HCINT(chnum)); 2026 + 2024 2027 if (!chan) { 2025 2028 dev_err(hsotg->dev, "## hc_ptr_array for channel is NULL ##\n"); 2026 - dwc2_writel(hsotg, hcint, HCINT(chnum)); 2027 2029 return; 2028 2030 } 2029 2031 ··· 2034 2032 chnum); 2035 2033 dev_vdbg(hsotg->dev, 2036 2034 " hcint 0x%08x, hcintmsk 0x%08x, hcint&hcintmsk 0x%08x\n", 2037 - hcint, hcintmsk, hcint & hcintmsk); 2035 + hcintraw, hcintmsk, hcint); 2038 2036 } 2039 - 2040 - dwc2_writel(hsotg, hcint, HCINT(chnum)); 2041 2037 2042 2038 /* 2043 2039 * If we got an interrupt after someone called ··· 2046 2046 return; 2047 2047 } 2048 2048 2049 - chan->hcint = hcint; 2050 - hcint &= hcintmsk; 2049 + chan->hcint = hcintraw; 2051 2050 2052 2051 /* 2053 2052 * If the channel was halted due to a dequeue, the qtd list might
+2
drivers/usb/dwc3/core.c
··· 2034 2034 2035 2035 pm_runtime_put(dev); 2036 2036 2037 + dma_set_max_seg_size(dev, UINT_MAX); 2038 + 2037 2039 return 0; 2038 2040 2039 2041 err_exit_debugfs:
+1 -1
drivers/usb/dwc3/drd.c
··· 505 505 dwc->role_switch_default_mode = USB_DR_MODE_PERIPHERAL; 506 506 mode = DWC3_GCTL_PRTCAP_DEVICE; 507 507 } 508 + dwc3_set_mode(dwc, mode); 508 509 509 510 dwc3_role_switch.fwnode = dev_fwnode(dwc->dev); 510 511 dwc3_role_switch.set = dwc3_usb_role_switch_set; ··· 527 526 } 528 527 } 529 528 530 - dwc3_set_mode(dwc, mode); 531 529 return 0; 532 530 } 533 531 #else
+47 -22
drivers/usb/dwc3/dwc3-qcom.c
··· 546 546 pdata ? pdata->hs_phy_irq_index : -1); 547 547 if (irq > 0) { 548 548 /* Keep wakeup interrupts disabled until suspend */ 549 - irq_set_status_flags(irq, IRQ_NOAUTOEN); 550 549 ret = devm_request_threaded_irq(qcom->dev, irq, NULL, 551 550 qcom_dwc3_resume_irq, 552 - IRQF_TRIGGER_HIGH | IRQF_ONESHOT, 551 + IRQF_ONESHOT | IRQF_NO_AUTOEN, 553 552 "qcom_dwc3 HS", qcom); 554 553 if (ret) { 555 554 dev_err(qcom->dev, "hs_phy_irq failed: %d\n", ret); ··· 560 561 irq = dwc3_qcom_get_irq(pdev, "dp_hs_phy_irq", 561 562 pdata ? pdata->dp_hs_phy_irq_index : -1); 562 563 if (irq > 0) { 563 - irq_set_status_flags(irq, IRQ_NOAUTOEN); 564 564 ret = devm_request_threaded_irq(qcom->dev, irq, NULL, 565 565 qcom_dwc3_resume_irq, 566 - IRQF_TRIGGER_HIGH | IRQF_ONESHOT, 566 + IRQF_ONESHOT | IRQF_NO_AUTOEN, 567 567 "qcom_dwc3 DP_HS", qcom); 568 568 if (ret) { 569 569 dev_err(qcom->dev, "dp_hs_phy_irq failed: %d\n", ret); ··· 574 576 irq = dwc3_qcom_get_irq(pdev, "dm_hs_phy_irq", 575 577 pdata ? pdata->dm_hs_phy_irq_index : -1); 576 578 if (irq > 0) { 577 - irq_set_status_flags(irq, IRQ_NOAUTOEN); 578 579 ret = devm_request_threaded_irq(qcom->dev, irq, NULL, 579 580 qcom_dwc3_resume_irq, 580 - IRQF_TRIGGER_HIGH | IRQF_ONESHOT, 581 + IRQF_ONESHOT | IRQF_NO_AUTOEN, 581 582 "qcom_dwc3 DM_HS", qcom); 582 583 if (ret) { 583 584 dev_err(qcom->dev, "dm_hs_phy_irq failed: %d\n", ret); ··· 588 591 irq = dwc3_qcom_get_irq(pdev, "ss_phy_irq", 589 592 pdata ? pdata->ss_phy_irq_index : -1); 590 593 if (irq > 0) { 591 - irq_set_status_flags(irq, IRQ_NOAUTOEN); 592 594 ret = devm_request_threaded_irq(qcom->dev, irq, NULL, 593 595 qcom_dwc3_resume_irq, 594 - IRQF_TRIGGER_HIGH | IRQF_ONESHOT, 596 + IRQF_ONESHOT | IRQF_NO_AUTOEN, 595 597 "qcom_dwc3 SS", qcom); 596 598 if (ret) { 597 599 dev_err(qcom->dev, "ss_phy_irq failed: %d\n", ret); ··· 754 758 if (!qcom->dwc3) { 755 759 ret = -ENODEV; 756 760 dev_err(dev, "failed to get dwc3 platform device\n"); 761 + of_platform_depopulate(dev); 757 762 } 758 763 759 764 node_put: ··· 763 766 return ret; 764 767 } 765 768 766 - static struct platform_device * 767 - dwc3_qcom_create_urs_usb_platdev(struct device *dev) 769 + static struct platform_device *dwc3_qcom_create_urs_usb_platdev(struct device *dev) 768 770 { 771 + struct platform_device *urs_usb = NULL; 769 772 struct fwnode_handle *fwh; 770 773 struct acpi_device *adev; 771 774 char name[8]; ··· 785 788 786 789 adev = to_acpi_device_node(fwh); 787 790 if (!adev) 788 - return NULL; 791 + goto err_put_handle; 789 792 790 - return acpi_create_platform_device(adev, NULL); 793 + urs_usb = acpi_create_platform_device(adev, NULL); 794 + if (IS_ERR_OR_NULL(urs_usb)) 795 + goto err_put_handle; 796 + 797 + return urs_usb; 798 + 799 + err_put_handle: 800 + fwnode_handle_put(fwh); 801 + 802 + return urs_usb; 803 + } 804 + 805 + static void dwc3_qcom_destroy_urs_usb_platdev(struct platform_device *urs_usb) 806 + { 807 + struct fwnode_handle *fwh = urs_usb->dev.fwnode; 808 + 809 + platform_device_unregister(urs_usb); 810 + fwnode_handle_put(fwh); 791 811 } 792 812 793 813 static int dwc3_qcom_probe(struct platform_device *pdev) ··· 888 874 qcom->qscratch_base = devm_ioremap_resource(dev, parent_res); 889 875 if (IS_ERR(qcom->qscratch_base)) { 890 876 ret = PTR_ERR(qcom->qscratch_base); 891 - goto clk_disable; 877 + goto free_urs; 892 878 } 893 879 894 880 ret = dwc3_qcom_setup_irq(pdev); 895 881 if (ret) { 896 882 dev_err(dev, "failed to setup IRQs, err=%d\n", ret); 897 - goto clk_disable; 883 + goto free_urs; 898 884 } 899 885 900 886 /* ··· 913 899 914 900 if (ret) { 915 901 dev_err(dev, "failed to register DWC3 Core, err=%d\n", ret); 916 - goto depopulate; 902 + goto free_urs; 917 903 } 918 904 919 905 ret = dwc3_qcom_interconnect_init(qcom); ··· 945 931 interconnect_exit: 946 932 dwc3_qcom_interconnect_exit(qcom); 947 933 depopulate: 948 - if (np) 934 + if (np) { 949 935 of_platform_depopulate(&pdev->dev); 950 - else 951 - platform_device_put(pdev); 936 + } else { 937 + device_remove_software_node(&qcom->dwc3->dev); 938 + platform_device_del(qcom->dwc3); 939 + } 940 + platform_device_put(qcom->dwc3); 941 + free_urs: 942 + if (qcom->urs_usb) 943 + dwc3_qcom_destroy_urs_usb_platdev(qcom->urs_usb); 952 944 clk_disable: 953 945 for (i = qcom->num_clocks - 1; i >= 0; i--) { 954 946 clk_disable_unprepare(qcom->clks[i]); ··· 973 953 struct device *dev = &pdev->dev; 974 954 int i; 975 955 976 - device_remove_software_node(&qcom->dwc3->dev); 977 - if (np) 956 + if (np) { 978 957 of_platform_depopulate(&pdev->dev); 979 - else 980 - platform_device_put(pdev); 958 + } else { 959 + device_remove_software_node(&qcom->dwc3->dev); 960 + platform_device_del(qcom->dwc3); 961 + } 962 + platform_device_put(qcom->dwc3); 963 + 964 + if (qcom->urs_usb) 965 + dwc3_qcom_destroy_urs_usb_platdev(qcom->urs_usb); 981 966 982 967 for (i = qcom->num_clocks - 1; i >= 0; i--) { 983 968 clk_disable_unprepare(qcom->clks[i]);
+7 -1
drivers/usb/dwc3/dwc3-rtk.c
··· 183 183 184 184 ret = of_property_read_string(dwc3_np, "maximum-speed", &maximum_speed); 185 185 if (ret < 0) 186 - return USB_SPEED_UNKNOWN; 186 + goto out; 187 187 188 188 ret = match_string(speed_names, ARRAY_SIZE(speed_names), maximum_speed); 189 + 190 + out: 191 + of_node_put(dwc3_np); 189 192 190 193 return (ret < 0) ? USB_SPEED_UNKNOWN : ret; 191 194 } ··· 341 338 } 342 339 343 340 switch_usb2_role(rtk, rtk->cur_role); 341 + 342 + platform_device_put(dwc3_pdev); 343 + of_node_put(dwc3_node); 344 344 345 345 return 0; 346 346
+10 -3
drivers/usb/host/xhci-mtk-sch.c
··· 650 650 651 651 if (sch_ep->ep_type == ISOC_OUT_EP) { 652 652 for (j = 0; j < sch_ep->num_budget_microframes; j++) { 653 - k = XHCI_MTK_BW_INDEX(base + j + CS_OFFSET); 654 - /* use cs to indicate existence of in-ss @(base+j) */ 655 - if (tt->fs_bus_bw_in[k]) 653 + k = XHCI_MTK_BW_INDEX(base + j); 654 + if (tt->in_ss_cnt[k]) 656 655 return -ESCH_SS_OVERLAP; 657 656 } 658 657 } else if (sch_ep->ep_type == ISOC_IN_EP || sch_ep->ep_type == INT_IN_EP) { ··· 767 768 fs_bus_bw[k] -= (u16)sch_ep->bw_budget_table[j]; 768 769 tt->fs_frame_bw[f] -= (u16)sch_ep->bw_budget_table[j]; 769 770 } 771 + } 772 + 773 + if (sch_ep->ep_type == ISOC_IN_EP || sch_ep->ep_type == INT_IN_EP) { 774 + k = XHCI_MTK_BW_INDEX(base); 775 + if (used) 776 + tt->in_ss_cnt[k]++; 777 + else 778 + tt->in_ss_cnt[k]--; 770 779 } 771 780 } 772 781
+2
drivers/usb/host/xhci-mtk.h
··· 38 38 * @fs_bus_bw_in: save bandwidth used by FS/LS IN eps in each uframes 39 39 * @ls_bus_bw: save bandwidth used by LS eps in each uframes 40 40 * @fs_frame_bw: save bandwidth used by FS/LS eps in each FS frames 41 + * @in_ss_cnt: the count of Start-Split for IN eps 41 42 * @ep_list: Endpoints using this TT 42 43 */ 43 44 struct mu3h_sch_tt { ··· 46 45 u16 fs_bus_bw_in[XHCI_MTK_MAX_ESIT]; 47 46 u8 ls_bus_bw[XHCI_MTK_MAX_ESIT]; 48 47 u16 fs_frame_bw[XHCI_MTK_FRAMES_CNT]; 48 + u8 in_ss_cnt[XHCI_MTK_MAX_ESIT]; 49 49 struct list_head ep_list; 50 50 }; 51 51
+30 -20
drivers/usb/host/xhci-plat.c
··· 13 13 #include <linux/module.h> 14 14 #include <linux/pci.h> 15 15 #include <linux/of.h> 16 + #include <linux/of_device.h> 16 17 #include <linux/platform_device.h> 17 18 #include <linux/usb/phy.h> 18 19 #include <linux/slab.h> ··· 149 148 int ret; 150 149 int irq; 151 150 struct xhci_plat_priv *priv = NULL; 152 - 151 + bool of_match; 153 152 154 153 if (usb_disabled()) 155 154 return -ENODEV; ··· 254 253 &xhci->imod_interval); 255 254 } 256 255 257 - hcd->usb_phy = devm_usb_get_phy_by_phandle(sysdev, "usb-phy", 0); 258 - if (IS_ERR(hcd->usb_phy)) { 259 - ret = PTR_ERR(hcd->usb_phy); 260 - if (ret == -EPROBE_DEFER) 261 - goto disable_clk; 262 - hcd->usb_phy = NULL; 263 - } else { 264 - ret = usb_phy_init(hcd->usb_phy); 265 - if (ret) 266 - goto disable_clk; 256 + /* 257 + * Drivers such as dwc3 manages PHYs themself (and rely on driver name 258 + * matching for the xhci platform device). 259 + */ 260 + of_match = of_match_device(pdev->dev.driver->of_match_table, &pdev->dev); 261 + if (of_match) { 262 + hcd->usb_phy = devm_usb_get_phy_by_phandle(sysdev, "usb-phy", 0); 263 + if (IS_ERR(hcd->usb_phy)) { 264 + ret = PTR_ERR(hcd->usb_phy); 265 + if (ret == -EPROBE_DEFER) 266 + goto disable_clk; 267 + hcd->usb_phy = NULL; 268 + } else { 269 + ret = usb_phy_init(hcd->usb_phy); 270 + if (ret) 271 + goto disable_clk; 272 + } 267 273 } 268 274 269 275 hcd->tpl_support = of_usb_host_tpl_support(sysdev->of_node); ··· 293 285 goto dealloc_usb2_hcd; 294 286 } 295 287 296 - xhci->shared_hcd->usb_phy = devm_usb_get_phy_by_phandle(sysdev, 297 - "usb-phy", 1); 298 - if (IS_ERR(xhci->shared_hcd->usb_phy)) { 299 - xhci->shared_hcd->usb_phy = NULL; 300 - } else { 301 - ret = usb_phy_init(xhci->shared_hcd->usb_phy); 302 - if (ret) 303 - dev_err(sysdev, "%s init usb3phy fail (ret=%d)\n", 304 - __func__, ret); 288 + if (of_match) { 289 + xhci->shared_hcd->usb_phy = devm_usb_get_phy_by_phandle(sysdev, 290 + "usb-phy", 1); 291 + if (IS_ERR(xhci->shared_hcd->usb_phy)) { 292 + 
xhci->shared_hcd->usb_phy = NULL; 293 + } else { 294 + ret = usb_phy_init(xhci->shared_hcd->usb_phy); 295 + if (ret) 296 + dev_err(sysdev, "%s init usb3phy fail (ret=%d)\n", 297 + __func__, ret); 298 + } 305 299 } 306 300 307 301 xhci->shared_hcd->tpl_support = hcd->tpl_support;
+2
drivers/usb/misc/onboard_usb_hub.c
··· 432 432 { USB_DEVICE(VENDOR_ID_MICROCHIP, 0x2412) }, /* USB2412 USB 2.0 */ 433 433 { USB_DEVICE(VENDOR_ID_MICROCHIP, 0x2514) }, /* USB2514B USB 2.0 */ 434 434 { USB_DEVICE(VENDOR_ID_MICROCHIP, 0x2517) }, /* USB2517 USB 2.0 */ 435 + { USB_DEVICE(VENDOR_ID_MICROCHIP, 0x2744) }, /* USB5744 USB 2.0 */ 436 + { USB_DEVICE(VENDOR_ID_MICROCHIP, 0x5744) }, /* USB5744 USB 3.0 */ 435 437 { USB_DEVICE(VENDOR_ID_REALTEK, 0x0411) }, /* RTS5411 USB 3.1 */ 436 438 { USB_DEVICE(VENDOR_ID_REALTEK, 0x5411) }, /* RTS5411 USB 2.1 */ 437 439 { USB_DEVICE(VENDOR_ID_REALTEK, 0x0414) }, /* RTS5414 USB 3.2 */
+7
drivers/usb/misc/onboard_usb_hub.h
··· 16 16 .num_supplies = 1, 17 17 }; 18 18 19 + static const struct onboard_hub_pdata microchip_usb5744_data = { 20 + .reset_us = 0, 21 + .num_supplies = 2, 22 + }; 23 + 19 24 static const struct onboard_hub_pdata realtek_rts5411_data = { 20 25 .reset_us = 0, 21 26 .num_supplies = 1, ··· 55 50 { .compatible = "usb424,2412", .data = &microchip_usb424_data, }, 56 51 { .compatible = "usb424,2514", .data = &microchip_usb424_data, }, 57 52 { .compatible = "usb424,2517", .data = &microchip_usb424_data, }, 53 + { .compatible = "usb424,2744", .data = &microchip_usb5744_data, }, 54 + { .compatible = "usb424,5744", .data = &microchip_usb5744_data, }, 58 55 { .compatible = "usb451,8140", .data = &ti_tusb8041_data, }, 59 56 { .compatible = "usb451,8142", .data = &ti_tusb8041_data, }, 60 57 { .compatible = "usb4b4,6504", .data = &cypress_hx3_data, },
+4 -13
drivers/usb/misc/usb-ljca.c
··· 457 457 u64 adr, u8 id) 458 458 { 459 459 struct ljca_match_ids_walk_data wd = { 0 }; 460 - struct acpi_device *parent, *adev; 461 460 struct device *dev = adap->dev; 461 + struct acpi_device *parent; 462 462 char uid[4]; 463 463 464 464 parent = ACPI_COMPANION(dev); ··· 466 466 return; 467 467 468 468 /* 469 - * get auxdev ACPI handle from the ACPI device directly 470 - * under the parent that matches _ADR. 471 - */ 472 - adev = acpi_find_child_device(parent, adr, false); 473 - if (adev) { 474 - ACPI_COMPANION_SET(&auxdev->dev, adev); 475 - return; 476 - } 477 - 478 - /* 479 - * _ADR is a grey area in the ACPI specification, some 469 + * Currently LJCA hw doesn't use _ADR instead the shipped 480 470 * platforms use _HID to distinguish children devices. 481 471 */ 482 472 switch (adr) { ··· 646 656 unsigned int i; 647 657 int ret; 648 658 659 + /* Not all LJCA chips implement SPI, a timeout reading the descriptors is normal */ 649 660 ret = ljca_send(adap, LJCA_CLIENT_MNG, LJCA_MNG_ENUM_SPI, NULL, 0, buf, 650 661 sizeof(buf), true, LJCA_ENUM_CLIENT_TIMEOUT_MS); 651 662 if (ret < 0) 652 - return ret; 663 + return (ret == -ETIMEDOUT) ? 0 : ret; 653 664 654 665 /* check firmware response */ 655 666 desc = (struct ljca_spi_descriptor *)buf;
+8 -3
drivers/usb/serial/option.c
··· 203 203 #define DELL_PRODUCT_5829E_ESIM 0x81e4 204 204 #define DELL_PRODUCT_5829E 0x81e6 205 205 206 - #define DELL_PRODUCT_FM101R 0x8213 207 - #define DELL_PRODUCT_FM101R_ESIM 0x8215 206 + #define DELL_PRODUCT_FM101R_ESIM 0x8213 207 + #define DELL_PRODUCT_FM101R 0x8215 208 208 209 209 #define KYOCERA_VENDOR_ID 0x0c88 210 210 #define KYOCERA_PRODUCT_KPC650 0x17da ··· 609 609 #define UNISOC_VENDOR_ID 0x1782 610 610 /* TOZED LT70-C based on UNISOC SL8563 uses UNISOC's vendor ID */ 611 611 #define TOZED_PRODUCT_LT70C 0x4055 612 + /* Luat Air72*U series based on UNISOC UIS8910 uses UNISOC's vendor ID */ 613 + #define LUAT_PRODUCT_AIR720U 0x4e00 612 614 613 615 /* Device flags */ 614 616 ··· 1548 1546 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0165, 0xff, 0xff, 0xff) }, 1549 1547 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0167, 0xff, 0xff, 0xff), 1550 1548 .driver_info = RSVD(4) }, 1551 - { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0189, 0xff, 0xff, 0xff) }, 1549 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0189, 0xff, 0xff, 0xff), 1550 + .driver_info = RSVD(4) }, 1552 1551 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0191, 0xff, 0xff, 0xff), /* ZTE EuFi890 */ 1553 1552 .driver_info = RSVD(4) }, 1554 1553 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0196, 0xff, 0xff, 0xff) }, ··· 2252 2249 .driver_info = RSVD(4) | RSVD(5) | RSVD(6) }, 2253 2250 { USB_DEVICE(0x1782, 0x4d10) }, /* Fibocom L610 (AT mode) */ 2254 2251 { USB_DEVICE_INTERFACE_CLASS(0x1782, 0x4d11, 0xff) }, /* Fibocom L610 (ECM/RNDIS mode) */ 2252 + { USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x0001, 0xff, 0xff, 0xff) }, /* Fibocom L716-EU (ECM/RNDIS mode) */ 2255 2253 { USB_DEVICE(0x2cb7, 0x0104), /* Fibocom NL678 series */ 2256 2254 .driver_info = RSVD(4) | RSVD(5) }, 2257 2255 { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0105, 0xff), /* Fibocom NL678 series */ ··· 2275 2271 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) }, 2276 2272 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) }, 2277 2273 { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) }, 2274 + { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) }, 2278 2275 { } /* Terminating entry */ 2279 2276 }; 2280 2277 MODULE_DEVICE_TABLE(usb, option_ids);
+11 -1
drivers/usb/typec/tcpm/tcpm.c
··· 4273 4273 current_lim = PD_P_SNK_STDBY_MW / 5; 4274 4274 tcpm_set_current_limit(port, current_lim, 5000); 4275 4275 /* Not sink vbus if operational current is 0mA */ 4276 - tcpm_set_charge(port, !!pdo_max_current(port->snk_pdo[0])); 4276 + tcpm_set_charge(port, !port->pd_supported || 4277 + pdo_max_current(port->snk_pdo[0])); 4277 4278 4278 4279 if (!port->pd_supported) 4279 4280 tcpm_set_state(port, SNK_READY, 0); ··· 5391 5390 tcpm_log_force(port, "Received hard reset"); 5392 5391 if (port->bist_request == BDO_MODE_TESTDATA && port->tcpc->set_bist_data) 5393 5392 port->tcpc->set_bist_data(port->tcpc, false); 5393 + 5394 + switch (port->state) { 5395 + case ERROR_RECOVERY: 5396 + case PORT_RESET: 5397 + case PORT_RESET_WAIT_OFF: 5398 + return; 5399 + default: 5400 + break; 5401 + } 5394 5402 5395 5403 if (port->ams != NONE_AMS) 5396 5404 port->ams = NONE_AMS;
+9 -5
drivers/usb/typec/tipd/core.c
··· 968 968 ret = of_property_match_string(np, "reg-names", "patch-address"); 969 969 if (ret < 0) { 970 970 dev_err(tps->dev, "failed to get patch-address %d\n", ret); 971 - return ret; 971 + goto release_fw; 972 972 } 973 973 974 974 ret = of_property_read_u32_index(np, "reg", ret, &addr); 975 975 if (ret) 976 - return ret; 976 + goto release_fw; 977 977 978 978 if (addr == 0 || (addr >= 0x20 && addr <= 0x23)) { 979 979 dev_err(tps->dev, "wrong patch address %u\n", addr); 980 - return -EINVAL; 980 + ret = -EINVAL; 981 + goto release_fw; 981 982 } 982 983 983 984 bpms_data.addr = (u8)addr; ··· 1227 1226 TPS_REG_INT_PLUG_EVENT; 1228 1227 } 1229 1228 1230 - tps->data = device_get_match_data(tps->dev); 1229 + if (dev_fwnode(tps->dev)) 1230 + tps->data = device_get_match_data(tps->dev); 1231 + else 1232 + tps->data = i2c_get_match_data(client); 1231 1233 if (!tps->data) 1232 1234 return -EINVAL; 1233 1235 ··· 1429 1425 MODULE_DEVICE_TABLE(of, tps6598x_of_match); 1430 1426 1431 1427 static const struct i2c_device_id tps6598x_id[] = { 1432 - { "tps6598x" }, 1428 + { "tps6598x", (kernel_ulong_t)&tps6598x_data }, 1433 1429 { } 1434 1430 }; 1435 1431 MODULE_DEVICE_TABLE(i2c, tps6598x_id);
+1 -1
drivers/xen/privcmd.c
··· 1115 1115 spinlock_t lock; /* Protects ioeventfds list */ 1116 1116 struct list_head ioeventfds; 1117 1117 struct list_head list; 1118 - struct ioreq_port ports[0]; 1118 + struct ioreq_port ports[] __counted_by(vcpus); 1119 1119 }; 1120 1120 1121 1121 static irqreturn_t ioeventfd_interrupt(int irq, void *dev_id)
+1
drivers/xen/swiotlb-xen.c
··· 405 405 .get_sgtable = dma_common_get_sgtable, 406 406 .alloc_pages = dma_common_alloc_pages, 407 407 .free_pages = dma_common_free_pages, 408 + .max_mapping_size = swiotlb_max_mapping_size, 408 409 };
+2 -2
fs/afs/dynroot.c
··· 132 132 133 133 ret = dns_query(net->net, "afsdb", name, len, "srv=1", 134 134 NULL, NULL, false); 135 - if (ret == -ENODATA) 136 - ret = -EDESTADDRREQ; 135 + if (ret == -ENODATA || ret == -ENOKEY) 136 + ret = -ENOENT; 137 137 return ret; 138 138 } 139 139
+1
fs/afs/internal.h
··· 553 553 }; 554 554 555 555 struct afs_server_list { 556 + struct rcu_head rcu; 556 557 afs_volid_t vids[AFS_MAXTYPES]; /* Volume IDs */ 557 558 refcount_t usage; 558 559 unsigned char nr_servers;
+1 -1
fs/afs/server_list.c
··· 17 17 for (i = 0; i < slist->nr_servers; i++) 18 18 afs_unuse_server(net, slist->servers[i].server, 19 19 afs_server_trace_put_slist); 20 - kfree(slist); 20 + kfree_rcu(slist, rcu); 21 21 } 22 22 } 23 23
+4
fs/afs/super.c
··· 407 407 return PTR_ERR(volume); 408 408 409 409 ctx->volume = volume; 410 + if (volume->type != AFSVL_RWVOL) { 411 + ctx->flock_mode = afs_flock_mode_local; 412 + fc->sb_flags |= SB_RDONLY; 413 + } 410 414 } 411 415 412 416 return 0;
+10
fs/afs/vl_rotate.c
··· 58 58 } 59 59 60 60 /* Status load is ordered after lookup counter load */ 61 + if (cell->dns_status == DNS_LOOKUP_GOT_NOT_FOUND) { 62 + pr_warn("No record of cell %s\n", cell->name); 63 + vc->error = -ENOENT; 64 + return false; 65 + } 66 + 61 67 if (cell->dns_source == DNS_RECORD_UNAVAILABLE) { 62 68 vc->error = -EDESTADDRREQ; 63 69 return false; ··· 291 285 */ 292 286 static void afs_vl_dump_edestaddrreq(const struct afs_vl_cursor *vc) 293 287 { 288 + struct afs_cell *cell = vc->cell; 294 289 static int count; 295 290 int i; 296 291 ··· 301 294 302 295 rcu_read_lock(); 303 296 pr_notice("EDESTADDR occurred\n"); 297 + pr_notice("CELL: %s err=%d\n", cell->name, cell->error); 298 + pr_notice("DNS: src=%u st=%u lc=%x\n", 299 + cell->dns_source, cell->dns_status, cell->dns_lookup_count); 304 300 pr_notice("VC: ut=%lx ix=%u ni=%hu fl=%hx err=%hd\n", 305 301 vc->untried, vc->index, vc->nr_iterations, vc->flags, vc->error); 306 302
+21 -35
fs/autofs/inode.c
··· 309 309 struct autofs_fs_context *ctx = fc->fs_private; 310 310 struct autofs_sb_info *sbi = s->s_fs_info; 311 311 struct inode *root_inode; 312 - struct dentry *root; 313 312 struct autofs_info *ino; 314 - int ret = -ENOMEM; 315 313 316 314 pr_debug("starting up, sbi = %p\n", sbi); 317 315 ··· 326 328 */ 327 329 ino = autofs_new_ino(sbi); 328 330 if (!ino) 329 - goto fail; 331 + return -ENOMEM; 330 332 331 333 root_inode = autofs_get_inode(s, S_IFDIR | 0755); 334 + if (!root_inode) 335 + return -ENOMEM; 336 + 332 337 root_inode->i_uid = ctx->uid; 333 338 root_inode->i_gid = ctx->gid; 339 + root_inode->i_fop = &autofs_root_operations; 340 + root_inode->i_op = &autofs_dir_inode_operations; 334 341 335 - root = d_make_root(root_inode); 336 - if (!root) 337 - goto fail_ino; 338 - 339 - root->d_fsdata = ino; 342 + s->s_root = d_make_root(root_inode); 343 + if (unlikely(!s->s_root)) { 344 + autofs_free_ino(ino); 345 + return -ENOMEM; 346 + } 347 + s->s_root->d_fsdata = ino; 340 348 341 349 if (ctx->pgrp_set) { 342 350 sbi->oz_pgrp = find_get_pid(ctx->pgrp); 343 - if (!sbi->oz_pgrp) { 344 - ret = invalf(fc, "Could not find process group %d", 345 - ctx->pgrp); 346 - goto fail_dput; 347 - } 348 - } else { 351 + if (!sbi->oz_pgrp) 352 + return invalf(fc, "Could not find process group %d", 353 + ctx->pgrp); 354 + } else 349 355 sbi->oz_pgrp = get_task_pid(current, PIDTYPE_PGID); 350 - } 351 356 352 357 if (autofs_type_trigger(sbi->type)) 353 - __managed_dentry_set_managed(root); 354 - 355 - root_inode->i_fop = &autofs_root_operations; 356 - root_inode->i_op = &autofs_dir_inode_operations; 358 + /* s->s_root won't be contended so there's little to 359 + * be gained by not taking the d_lock when setting 360 + * d_flags, even when a lot mounts are being done. 
361 + */ 362 + managed_dentry_set_managed(s->s_root); 357 363 358 364 pr_debug("pipe fd = %d, pgrp = %u\n", 359 365 sbi->pipefd, pid_nr(sbi->oz_pgrp)); 360 366 361 367 sbi->flags &= ~AUTOFS_SBI_CATATONIC; 362 - 363 - /* 364 - * Success! Install the root dentry now to indicate completion. 365 - */ 366 - s->s_root = root; 367 368 return 0; 368 - 369 - /* 370 - * Failure ... clean up. 371 - */ 372 - fail_dput: 373 - dput(root); 374 - goto fail; 375 - fail_ino: 376 - autofs_free_ino(ino); 377 - fail: 378 - return ret; 379 369 } 380 370 381 371 /*
+10 -2
fs/ecryptfs/inode.c
··· 998 998 return rc; 999 999 } 1000 1000 1001 + static int ecryptfs_do_getattr(const struct path *path, struct kstat *stat, 1002 + u32 request_mask, unsigned int flags) 1003 + { 1004 + if (flags & AT_GETATTR_NOSEC) 1005 + return vfs_getattr_nosec(path, stat, request_mask, flags); 1006 + return vfs_getattr(path, stat, request_mask, flags); 1007 + } 1008 + 1001 1009 static int ecryptfs_getattr(struct mnt_idmap *idmap, 1002 1010 const struct path *path, struct kstat *stat, 1003 1011 u32 request_mask, unsigned int flags) ··· 1014 1006 struct kstat lower_stat; 1015 1007 int rc; 1016 1008 1017 - rc = vfs_getattr(ecryptfs_dentry_to_lower_path(dentry), &lower_stat, 1018 - request_mask, flags); 1009 + rc = ecryptfs_do_getattr(ecryptfs_dentry_to_lower_path(dentry), 1010 + &lower_stat, request_mask, flags); 1019 1011 if (!rc) { 1020 1012 fsstack_copy_attr_all(d_inode(dentry), 1021 1013 ecryptfs_inode_to_lower(d_inode(dentry)));
+1 -1
fs/erofs/Kconfig
··· 21 21 performance under extremely memory pressure without extra cost. 22 22 23 23 See the documentation at <file:Documentation/filesystems/erofs.rst> 24 - for more details. 24 + and the web pages at <https://erofs.docs.kernel.org> for more details. 25 25 26 26 If unsure, say N. 27 27
+3 -2
fs/erofs/data.c
··· 220 220 up_read(&devs->rwsem); 221 221 return 0; 222 222 } 223 - map->m_bdev = dif->bdev_handle->bdev; 223 + map->m_bdev = dif->bdev_handle ? dif->bdev_handle->bdev : NULL; 224 224 map->m_daxdev = dif->dax_dev; 225 225 map->m_dax_part_off = dif->dax_part_off; 226 226 map->m_fscache = dif->fscache; ··· 238 238 if (map->m_pa >= startoff && 239 239 map->m_pa < startoff + length) { 240 240 map->m_pa -= startoff; 241 - map->m_bdev = dif->bdev_handle->bdev; 241 + map->m_bdev = dif->bdev_handle ? 242 + dif->bdev_handle->bdev : NULL; 242 243 map->m_daxdev = dif->dax_dev; 243 244 map->m_dax_part_off = dif->dax_part_off; 244 245 map->m_fscache = dif->fscache;
+35 -63
fs/erofs/inode.c
··· 15 15 struct erofs_sb_info *sbi = EROFS_SB(sb); 16 16 struct erofs_inode *vi = EROFS_I(inode); 17 17 const erofs_off_t inode_loc = erofs_iloc(inode); 18 - 19 18 erofs_blk_t blkaddr, nblks = 0; 20 19 void *kaddr; 21 20 struct erofs_inode_compact *dic; 22 21 struct erofs_inode_extended *die, *copied = NULL; 22 + union erofs_inode_i_u iu; 23 23 unsigned int ifmt; 24 24 int err; 25 25 ··· 35 35 36 36 dic = kaddr + *ofs; 37 37 ifmt = le16_to_cpu(dic->i_format); 38 - 39 38 if (ifmt & ~EROFS_I_ALL) { 40 - erofs_err(inode->i_sb, "unsupported i_format %u of nid %llu", 39 + erofs_err(sb, "unsupported i_format %u of nid %llu", 41 40 ifmt, vi->nid); 42 41 err = -EOPNOTSUPP; 43 42 goto err_out; ··· 44 45 45 46 vi->datalayout = erofs_inode_datalayout(ifmt); 46 47 if (vi->datalayout >= EROFS_INODE_DATALAYOUT_MAX) { 47 - erofs_err(inode->i_sb, "unsupported datalayout %u of nid %llu", 48 + erofs_err(sb, "unsupported datalayout %u of nid %llu", 48 49 vi->datalayout, vi->nid); 49 50 err = -EOPNOTSUPP; 50 51 goto err_out; ··· 81 82 vi->xattr_isize = erofs_xattr_ibody_size(die->i_xattr_icount); 82 83 83 84 inode->i_mode = le16_to_cpu(die->i_mode); 84 - switch (inode->i_mode & S_IFMT) { 85 - case S_IFREG: 86 - case S_IFDIR: 87 - case S_IFLNK: 88 - vi->raw_blkaddr = le32_to_cpu(die->i_u.raw_blkaddr); 89 - break; 90 - case S_IFCHR: 91 - case S_IFBLK: 92 - inode->i_rdev = 93 - new_decode_dev(le32_to_cpu(die->i_u.rdev)); 94 - break; 95 - case S_IFIFO: 96 - case S_IFSOCK: 97 - inode->i_rdev = 0; 98 - break; 99 - default: 100 - goto bogusimode; 101 - } 85 + iu = die->i_u; 102 86 i_uid_write(inode, le32_to_cpu(die->i_uid)); 103 87 i_gid_write(inode, le32_to_cpu(die->i_gid)); 104 88 set_nlink(inode, le32_to_cpu(die->i_nlink)); 105 - 106 - /* extended inode has its own timestamp */ 89 + /* each extended inode has its own timestamp */ 107 90 inode_set_ctime(inode, le64_to_cpu(die->i_mtime), 108 91 le32_to_cpu(die->i_mtime_nsec)); 109 92 110 93 inode->i_size = le64_to_cpu(die->i_size); 111 - 
112 - /* total blocks for compressed files */ 113 - if (erofs_inode_is_data_compressed(vi->datalayout)) 114 - nblks = le32_to_cpu(die->i_u.compressed_blocks); 115 - else if (vi->datalayout == EROFS_INODE_CHUNK_BASED) 116 - /* fill chunked inode summary info */ 117 - vi->chunkformat = le16_to_cpu(die->i_u.c.format); 118 94 kfree(copied); 119 95 copied = NULL; 120 96 break; ··· 99 125 vi->xattr_isize = erofs_xattr_ibody_size(dic->i_xattr_icount); 100 126 101 127 inode->i_mode = le16_to_cpu(dic->i_mode); 102 - switch (inode->i_mode & S_IFMT) { 103 - case S_IFREG: 104 - case S_IFDIR: 105 - case S_IFLNK: 106 - vi->raw_blkaddr = le32_to_cpu(dic->i_u.raw_blkaddr); 107 - break; 108 - case S_IFCHR: 109 - case S_IFBLK: 110 - inode->i_rdev = 111 - new_decode_dev(le32_to_cpu(dic->i_u.rdev)); 112 - break; 113 - case S_IFIFO: 114 - case S_IFSOCK: 115 - inode->i_rdev = 0; 116 - break; 117 - default: 118 - goto bogusimode; 119 - } 128 + iu = dic->i_u; 120 129 i_uid_write(inode, le16_to_cpu(dic->i_uid)); 121 130 i_gid_write(inode, le16_to_cpu(dic->i_gid)); 122 131 set_nlink(inode, le16_to_cpu(dic->i_nlink)); 123 - 124 132 /* use build time for compact inodes */ 125 133 inode_set_ctime(inode, sbi->build_time, sbi->build_time_nsec); 126 134 127 135 inode->i_size = le32_to_cpu(dic->i_size); 128 - if (erofs_inode_is_data_compressed(vi->datalayout)) 129 - nblks = le32_to_cpu(dic->i_u.compressed_blocks); 130 - else if (vi->datalayout == EROFS_INODE_CHUNK_BASED) 131 - vi->chunkformat = le16_to_cpu(dic->i_u.c.format); 132 136 break; 133 137 default: 134 - erofs_err(inode->i_sb, 135 - "unsupported on-disk inode version %u of nid %llu", 138 + erofs_err(sb, "unsupported on-disk inode version %u of nid %llu", 136 139 erofs_inode_version(ifmt), vi->nid); 137 140 err = -EOPNOTSUPP; 138 141 goto err_out; 139 142 } 140 143 141 - if (vi->datalayout == EROFS_INODE_CHUNK_BASED) { 144 + switch (inode->i_mode & S_IFMT) { 145 + case S_IFREG: 146 + case S_IFDIR: 147 + case S_IFLNK: 148 + vi->raw_blkaddr 
= le32_to_cpu(iu.raw_blkaddr); 149 + break; 150 + case S_IFCHR: 151 + case S_IFBLK: 152 + inode->i_rdev = new_decode_dev(le32_to_cpu(iu.rdev)); 153 + break; 154 + case S_IFIFO: 155 + case S_IFSOCK: 156 + inode->i_rdev = 0; 157 + break; 158 + default: 159 + erofs_err(sb, "bogus i_mode (%o) @ nid %llu", inode->i_mode, 160 + vi->nid); 161 + err = -EFSCORRUPTED; 162 + goto err_out; 163 + } 164 + 165 + /* total blocks for compressed files */ 166 + if (erofs_inode_is_data_compressed(vi->datalayout)) { 167 + nblks = le32_to_cpu(iu.compressed_blocks); 168 + } else if (vi->datalayout == EROFS_INODE_CHUNK_BASED) { 169 + /* fill chunked inode summary info */ 170 + vi->chunkformat = le16_to_cpu(iu.c.format); 142 171 if (vi->chunkformat & ~EROFS_CHUNK_FORMAT_ALL) { 143 - erofs_err(inode->i_sb, 144 - "unsupported chunk format %x of nid %llu", 172 + erofs_err(sb, "unsupported chunk format %x of nid %llu", 145 173 vi->chunkformat, vi->nid); 146 174 err = -EOPNOTSUPP; 147 175 goto err_out; ··· 167 191 inode->i_blocks = nblks << (sb->s_blocksize_bits - 9); 168 192 return kaddr; 169 193 170 - bogusimode: 171 - erofs_err(inode->i_sb, "bogus i_mode (%o) @ nid %llu", 172 - inode->i_mode, vi->nid); 173 - err = -EFSCORRUPTED; 174 194 err_out: 175 195 DBG_BUGON(1); 176 196 kfree(copied);
+2
fs/inode.c
··· 215 215 lockdep_set_class_and_name(&mapping->invalidate_lock, 216 216 &sb->s_type->invalidate_lock_key, 217 217 "mapping.invalidate_lock"); 218 + if (sb->s_iflags & SB_I_STABLE_WRITES) 219 + mapping_set_stable_writes(mapping); 218 220 inode->i_private = NULL; 219 221 inode->i_mapping = mapping; 220 222 INIT_HLIST_HEAD(&inode->i_dentry); /* buggered by rcu freeing */
+11 -3
fs/libfs.c
··· 399 399 return -EINVAL; 400 400 } 401 401 402 + /* In this case, ->private_data is protected by f_pos_lock */ 403 + file->private_data = NULL; 402 404 return vfs_setpos(file, offset, U32_MAX); 403 405 } 404 406 ··· 430 428 inode->i_ino, fs_umode_to_dtype(inode->i_mode)); 431 429 } 432 430 433 - static void offset_iterate_dir(struct inode *inode, struct dir_context *ctx) 431 + static void *offset_iterate_dir(struct inode *inode, struct dir_context *ctx) 434 432 { 435 433 struct offset_ctx *so_ctx = inode->i_op->get_offset_ctx(inode); 436 434 XA_STATE(xas, &so_ctx->xa, ctx->pos); ··· 439 437 while (true) { 440 438 dentry = offset_find_next(&xas); 441 439 if (!dentry) 442 - break; 440 + return ERR_PTR(-ENOENT); 443 441 444 442 if (!offset_dir_emit(ctx, dentry)) { 445 443 dput(dentry); ··· 449 447 dput(dentry); 450 448 ctx->pos = xas.xa_index + 1; 451 449 } 450 + return NULL; 452 451 } 453 452 454 453 /** ··· 482 479 if (!dir_emit_dots(file, ctx)) 483 480 return 0; 484 481 485 - offset_iterate_dir(d_inode(dir), ctx); 482 + /* In this case, ->private_data is protected by f_pos_lock */ 483 + if (ctx->pos == 2) 484 + file->private_data = NULL; 485 + else if (file->private_data == ERR_PTR(-ENOENT)) 486 + return 0; 487 + file->private_data = offset_iterate_dir(d_inode(dir), ctx); 486 488 return 0; 487 489 } 488 490
+5 -5
fs/overlayfs/inode.c
··· 171 171 172 172 type = ovl_path_real(dentry, &realpath); 173 173 old_cred = ovl_override_creds(dentry->d_sb); 174 - err = vfs_getattr(&realpath, stat, request_mask, flags); 174 + err = ovl_do_getattr(&realpath, stat, request_mask, flags); 175 175 if (err) 176 176 goto out; 177 177 ··· 196 196 (!is_dir ? STATX_NLINK : 0); 197 197 198 198 ovl_path_lower(dentry, &realpath); 199 - err = vfs_getattr(&realpath, &lowerstat, 200 - lowermask, flags); 199 + err = ovl_do_getattr(&realpath, &lowerstat, lowermask, 200 + flags); 201 201 if (err) 202 202 goto out; 203 203 ··· 249 249 250 250 ovl_path_lowerdata(dentry, &realpath); 251 251 if (realpath.dentry) { 252 - err = vfs_getattr(&realpath, &lowerdatastat, 253 - lowermask, flags); 252 + err = ovl_do_getattr(&realpath, &lowerdatastat, 253 + lowermask, flags); 254 254 if (err) 255 255 goto out; 256 256 } else {
+8
fs/overlayfs/overlayfs.h
··· 408 408 return ((OPEN_FMODE(flags) & FMODE_WRITE) || (flags & O_TRUNC)); 409 409 } 410 410 411 + static inline int ovl_do_getattr(const struct path *path, struct kstat *stat, 412 + u32 request_mask, unsigned int flags) 413 + { 414 + if (flags & AT_GETATTR_NOSEC) 415 + return vfs_getattr_nosec(path, stat, request_mask, flags); 416 + return vfs_getattr(path, stat, request_mask, flags); 417 + } 418 + 411 419 /* util.c */ 412 420 int ovl_get_write_access(struct dentry *dentry); 413 421 void ovl_put_write_access(struct dentry *dentry);
+11 -3
fs/smb/client/cifsglob.h
··· 191 191 bool reparse_point; 192 192 bool symlink; 193 193 }; 194 - __u32 reparse_tag; 194 + struct { 195 + __u32 tag; 196 + union { 197 + struct reparse_data_buffer *buf; 198 + struct reparse_posix_data *posix; 199 + }; 200 + } reparse; 195 201 char *symlink_target; 196 202 union { 197 203 struct smb2_file_all_info fi; ··· 401 395 struct cifs_tcon *tcon, 402 396 struct cifs_sb_info *cifs_sb, 403 397 const char *full_path, 404 - char **target_path, 405 - struct kvec *rsp_iov); 398 + char **target_path); 406 399 /* open a file for non-posix mounts */ 407 400 int (*open)(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock, 408 401 void *buf); ··· 556 551 bool (*is_status_io_timeout)(char *buf); 557 552 /* Check for STATUS_NETWORK_NAME_DELETED */ 558 553 bool (*is_network_name_deleted)(char *buf, struct TCP_Server_Info *srv); 554 + int (*parse_reparse_point)(struct cifs_sb_info *cifs_sb, 555 + struct kvec *rsp_iov, 556 + struct cifs_open_info_data *data); 559 557 }; 560 558 561 559 struct smb_version_values {
+2 -2
fs/smb/client/cifspdu.h
··· 1356 1356 __le32 DataDisplacement; 1357 1357 __u8 SetupCount; /* 1 */ 1358 1358 __le16 ReturnedDataLen; 1359 - __u16 ByteCount; 1359 + __le16 ByteCount; 1360 1360 } __attribute__((packed)) TRANSACT_IOCTL_RSP; 1361 1361 1362 1362 #define CIFS_ACL_OWNER 1 ··· 1509 1509 __le16 ReparseDataLength; 1510 1510 __u16 Reserved; 1511 1511 __le64 InodeType; /* LNK, FIFO, CHR etc. */ 1512 - char PathBuffer[]; 1512 + __u8 DataBuffer[]; 1513 1513 } __attribute__((packed)); 1514 1514 1515 1515 struct cifs_quota_data {
+13 -1
fs/smb/client/cifsproto.h
··· 210 210 const struct cifs_fid *fid); 211 211 bool cifs_reparse_point_to_fattr(struct cifs_sb_info *cifs_sb, 212 212 struct cifs_fattr *fattr, 213 - u32 tag); 213 + struct cifs_open_info_data *data); 214 214 extern int smb311_posix_get_inode_info(struct inode **pinode, const char *search_path, 215 215 struct super_block *sb, unsigned int xid); 216 216 extern int cifs_get_inode_info_unix(struct inode **pinode, ··· 458 458 struct cifs_tcon *tcon, 459 459 const unsigned char *searchName, char **syminfo, 460 460 const struct nls_table *nls_codepage, int remap); 461 + extern int cifs_query_reparse_point(const unsigned int xid, 462 + struct cifs_tcon *tcon, 463 + struct cifs_sb_info *cifs_sb, 464 + const char *full_path, 465 + u32 *tag, struct kvec *rsp, 466 + int *rsp_buftype); 461 467 extern int CIFSSMBQuerySymLink(const unsigned int xid, struct cifs_tcon *tcon, 462 468 __u16 fid, char **symlinkinfo, 463 469 const struct nls_table *nls_codepage); ··· 665 659 int cifs_update_super_prepath(struct cifs_sb_info *cifs_sb, char *prefix); 666 660 char *extract_hostname(const char *unc); 667 661 char *extract_sharename(const char *unc); 662 + int parse_reparse_point(struct reparse_data_buffer *buf, 663 + u32 plen, struct cifs_sb_info *cifs_sb, 664 + bool unicode, struct cifs_open_info_data *data); 665 + int cifs_sfu_make_node(unsigned int xid, struct inode *inode, 666 + struct dentry *dentry, struct cifs_tcon *tcon, 667 + const char *full_path, umode_t mode, dev_t dev); 668 668 669 669 #ifdef CONFIG_CIFS_DFS_UPCALL 670 670 static inline int get_dfs_path(const unsigned int xid, struct cifs_ses *ses,
+76 -115
fs/smb/client/cifssmb.c
··· 2690 2690 return rc; 2691 2691 } 2692 2692 2693 - /* 2694 - * Recent Windows versions now create symlinks more frequently 2695 - * and they use the "reparse point" mechanism below. We can of course 2696 - * do symlinks nicely to Samba and other servers which support the 2697 - * CIFS Unix Extensions and we can also do SFU symlinks and "client only" 2698 - * "MF" symlinks optionally, but for recent Windows we really need to 2699 - * reenable the code below and fix the cifs_symlink callers to handle this. 2700 - * In the interim this code has been moved to its own config option so 2701 - * it is not compiled in by default until callers fixed up and more tested. 2702 - */ 2703 - int 2704 - CIFSSMBQuerySymLink(const unsigned int xid, struct cifs_tcon *tcon, 2705 - __u16 fid, char **symlinkinfo, 2706 - const struct nls_table *nls_codepage) 2693 + int cifs_query_reparse_point(const unsigned int xid, 2694 + struct cifs_tcon *tcon, 2695 + struct cifs_sb_info *cifs_sb, 2696 + const char *full_path, 2697 + u32 *tag, struct kvec *rsp, 2698 + int *rsp_buftype) 2707 2699 { 2708 - int rc = 0; 2709 - int bytes_returned; 2710 - struct smb_com_transaction_ioctl_req *pSMB; 2711 - struct smb_com_transaction_ioctl_rsp *pSMBr; 2712 - bool is_unicode; 2713 - unsigned int sub_len; 2714 - char *sub_start; 2715 - struct reparse_symlink_data *reparse_buf; 2716 - struct reparse_posix_data *posix_buf; 2700 + struct cifs_open_parms oparms; 2701 + TRANSACT_IOCTL_REQ *io_req = NULL; 2702 + TRANSACT_IOCTL_RSP *io_rsp = NULL; 2703 + struct cifs_fid fid; 2717 2704 __u32 data_offset, data_count; 2718 - char *end_of_smb; 2705 + __u8 *start, *end; 2706 + int io_rsp_len; 2707 + int oplock = 0; 2708 + int rc; 2719 2709 2720 - cifs_dbg(FYI, "In Windows reparse style QueryLink for fid %u\n", fid); 2721 - rc = smb_init(SMB_COM_NT_TRANSACT, 23, tcon, (void **) &pSMB, 2722 - (void **) &pSMBr); 2710 + cifs_tcon_dbg(FYI, "%s: path=%s\n", __func__, full_path); 2711 + 2712 + if (cap_unix(tcon->ses)) 2713 + 
return -EOPNOTSUPP; 2714 + 2715 + oparms = (struct cifs_open_parms) { 2716 + .tcon = tcon, 2717 + .cifs_sb = cifs_sb, 2718 + .desired_access = FILE_READ_ATTRIBUTES, 2719 + .create_options = cifs_create_options(cifs_sb, 2720 + OPEN_REPARSE_POINT), 2721 + .disposition = FILE_OPEN, 2722 + .path = full_path, 2723 + .fid = &fid, 2724 + }; 2725 + 2726 + rc = CIFS_open(xid, &oparms, &oplock, NULL); 2723 2727 if (rc) 2724 2728 return rc; 2725 2729 2726 - pSMB->TotalParameterCount = 0 ; 2727 - pSMB->TotalDataCount = 0; 2728 - pSMB->MaxParameterCount = cpu_to_le32(2); 2730 + rc = smb_init(SMB_COM_NT_TRANSACT, 23, tcon, 2731 + (void **)&io_req, (void **)&io_rsp); 2732 + if (rc) 2733 + goto error; 2734 + 2735 + io_req->TotalParameterCount = 0; 2736 + io_req->TotalDataCount = 0; 2737 + io_req->MaxParameterCount = cpu_to_le32(2); 2729 2738 /* BB find exact data count max from sess structure BB */ 2730 - pSMB->MaxDataCount = cpu_to_le32(CIFSMaxBufSize & 0xFFFFFF00); 2731 - pSMB->MaxSetupCount = 4; 2732 - pSMB->Reserved = 0; 2733 - pSMB->ParameterOffset = 0; 2734 - pSMB->DataCount = 0; 2735 - pSMB->DataOffset = 0; 2736 - pSMB->SetupCount = 4; 2737 - pSMB->SubCommand = cpu_to_le16(NT_TRANSACT_IOCTL); 2738 - pSMB->ParameterCount = pSMB->TotalParameterCount; 2739 - pSMB->FunctionCode = cpu_to_le32(FSCTL_GET_REPARSE_POINT); 2740 - pSMB->IsFsctl = 1; /* FSCTL */ 2741 - pSMB->IsRootFlag = 0; 2742 - pSMB->Fid = fid; /* file handle always le */ 2743 - pSMB->ByteCount = 0; 2739 + io_req->MaxDataCount = cpu_to_le32(CIFSMaxBufSize & 0xFFFFFF00); 2740 + io_req->MaxSetupCount = 4; 2741 + io_req->Reserved = 0; 2742 + io_req->ParameterOffset = 0; 2743 + io_req->DataCount = 0; 2744 + io_req->DataOffset = 0; 2745 + io_req->SetupCount = 4; 2746 + io_req->SubCommand = cpu_to_le16(NT_TRANSACT_IOCTL); 2747 + io_req->ParameterCount = io_req->TotalParameterCount; 2748 + io_req->FunctionCode = cpu_to_le32(FSCTL_GET_REPARSE_POINT); 2749 + io_req->IsFsctl = 1; 2750 + io_req->IsRootFlag = 0; 2751 + 
io_req->Fid = fid.netfid; 2752 + io_req->ByteCount = 0; 2744 2753 2745 - rc = SendReceive(xid, tcon->ses, (struct smb_hdr *) pSMB, 2746 - (struct smb_hdr *) pSMBr, &bytes_returned, 0); 2747 - if (rc) { 2748 - cifs_dbg(FYI, "Send error in QueryReparseLinkInfo = %d\n", rc); 2749 - goto qreparse_out; 2750 - } 2754 + rc = SendReceive(xid, tcon->ses, (struct smb_hdr *)io_req, 2755 + (struct smb_hdr *)io_rsp, &io_rsp_len, 0); 2756 + if (rc) 2757 + goto error; 2751 2758 2752 - data_offset = le32_to_cpu(pSMBr->DataOffset); 2753 - data_count = le32_to_cpu(pSMBr->DataCount); 2754 - if (get_bcc(&pSMBr->hdr) < 2 || data_offset > 512) { 2755 - /* BB also check enough total bytes returned */ 2756 - rc = -EIO; /* bad smb */ 2757 - goto qreparse_out; 2758 - } 2759 - if (!data_count || (data_count > 2048)) { 2759 + data_offset = le32_to_cpu(io_rsp->DataOffset); 2760 + data_count = le32_to_cpu(io_rsp->DataCount); 2761 + if (get_bcc(&io_rsp->hdr) < 2 || data_offset > 512 || 2762 + !data_count || data_count > 2048) { 2760 2763 rc = -EIO; 2761 - cifs_dbg(FYI, "Invalid return data count on get reparse info ioctl\n"); 2762 - goto qreparse_out; 2764 + goto error; 2763 2765 } 2764 - end_of_smb = 2 + get_bcc(&pSMBr->hdr) + (char *)&pSMBr->ByteCount; 2765 - reparse_buf = (struct reparse_symlink_data *) 2766 - ((char *)&pSMBr->hdr.Protocol + data_offset); 2767 - if ((char *)reparse_buf >= end_of_smb) { 2766 + 2767 + end = 2 + get_bcc(&io_rsp->hdr) + (__u8 *)&io_rsp->ByteCount; 2768 + start = (__u8 *)&io_rsp->hdr.Protocol + data_offset; 2769 + if (start >= end) { 2768 2770 rc = -EIO; 2769 - goto qreparse_out; 2770 - } 2771 - if (reparse_buf->ReparseTag == cpu_to_le32(IO_REPARSE_TAG_NFS)) { 2772 - cifs_dbg(FYI, "NFS style reparse tag\n"); 2773 - posix_buf = (struct reparse_posix_data *)reparse_buf; 2774 - 2775 - if (posix_buf->InodeType != cpu_to_le64(NFS_SPECFILE_LNK)) { 2776 - cifs_dbg(FYI, "unsupported file type 0x%llx\n", 2777 - le64_to_cpu(posix_buf->InodeType)); 2778 - rc = -EOPNOTSUPP; 
2779 - goto qreparse_out; 2780 - } 2781 - is_unicode = true; 2782 - sub_len = le16_to_cpu(reparse_buf->ReparseDataLength); 2783 - if (posix_buf->PathBuffer + sub_len > end_of_smb) { 2784 - cifs_dbg(FYI, "reparse buf beyond SMB\n"); 2785 - rc = -EIO; 2786 - goto qreparse_out; 2787 - } 2788 - *symlinkinfo = cifs_strndup_from_utf16(posix_buf->PathBuffer, 2789 - sub_len, is_unicode, nls_codepage); 2790 - goto qreparse_out; 2791 - } else if (reparse_buf->ReparseTag != 2792 - cpu_to_le32(IO_REPARSE_TAG_SYMLINK)) { 2793 - rc = -EOPNOTSUPP; 2794 - goto qreparse_out; 2771 + goto error; 2795 2772 } 2796 2773 2797 - /* Reparse tag is NTFS symlink */ 2798 - sub_start = le16_to_cpu(reparse_buf->SubstituteNameOffset) + 2799 - reparse_buf->PathBuffer; 2800 - sub_len = le16_to_cpu(reparse_buf->SubstituteNameLength); 2801 - if (sub_start + sub_len > end_of_smb) { 2802 - cifs_dbg(FYI, "reparse buf beyond SMB\n"); 2803 - rc = -EIO; 2804 - goto qreparse_out; 2805 - } 2806 - if (pSMBr->hdr.Flags2 & SMBFLG2_UNICODE) 2807 - is_unicode = true; 2808 - else 2809 - is_unicode = false; 2774 + *tag = le32_to_cpu(((struct reparse_data_buffer *)start)->ReparseTag); 2775 + rsp->iov_base = io_rsp; 2776 + rsp->iov_len = io_rsp_len; 2777 + *rsp_buftype = CIFS_LARGE_BUFFER; 2778 + CIFSSMBClose(xid, tcon, fid.netfid); 2779 + return 0; 2810 2780 2811 - /* BB FIXME investigate remapping reserved chars here */ 2812 - *symlinkinfo = cifs_strndup_from_utf16(sub_start, sub_len, is_unicode, 2813 - nls_codepage); 2814 - if (!*symlinkinfo) 2815 - rc = -ENOMEM; 2816 - qreparse_out: 2817 - cifs_buf_release(pSMB); 2818 - 2819 - /* 2820 - * Note: On -EAGAIN error only caller can retry on handle based calls 2821 - * since file handle passed in no longer valid. 2822 - */ 2781 + error: 2782 + cifs_buf_release(io_req); 2783 + CIFSSMBClose(xid, tcon, fid.netfid); 2823 2784 return rc; 2824 2785 } 2825 2786
+60 -14
fs/smb/client/inode.c
··· 459 459 return -EOPNOTSUPP; 460 460 rc = server->ops->query_symlink(xid, tcon, 461 461 cifs_sb, full_path, 462 - &fattr->cf_symlink_target, 463 - NULL); 462 + &fattr->cf_symlink_target); 464 463 cifs_dbg(FYI, "%s: query_symlink: %d\n", __func__, rc); 465 464 } 466 465 return rc; ··· 721 722 fattr->cf_mode, fattr->cf_uniqueid, fattr->cf_nlink); 722 723 } 723 724 725 + static inline dev_t nfs_mkdev(struct reparse_posix_data *buf) 726 + { 727 + u64 v = le64_to_cpu(*(__le64 *)buf->DataBuffer); 728 + 729 + return MKDEV(v >> 32, v & 0xffffffff); 730 + } 731 + 724 732 bool cifs_reparse_point_to_fattr(struct cifs_sb_info *cifs_sb, 725 733 struct cifs_fattr *fattr, 726 - u32 tag) 734 + struct cifs_open_info_data *data) 727 735 { 736 + struct reparse_posix_data *buf = data->reparse.posix; 737 + u32 tag = data->reparse.tag; 738 + 739 + if (tag == IO_REPARSE_TAG_NFS && buf) { 740 + switch (le64_to_cpu(buf->InodeType)) { 741 + case NFS_SPECFILE_CHR: 742 + fattr->cf_mode |= S_IFCHR | cifs_sb->ctx->file_mode; 743 + fattr->cf_dtype = DT_CHR; 744 + fattr->cf_rdev = nfs_mkdev(buf); 745 + break; 746 + case NFS_SPECFILE_BLK: 747 + fattr->cf_mode |= S_IFBLK | cifs_sb->ctx->file_mode; 748 + fattr->cf_dtype = DT_BLK; 749 + fattr->cf_rdev = nfs_mkdev(buf); 750 + break; 751 + case NFS_SPECFILE_FIFO: 752 + fattr->cf_mode |= S_IFIFO | cifs_sb->ctx->file_mode; 753 + fattr->cf_dtype = DT_FIFO; 754 + break; 755 + case NFS_SPECFILE_SOCK: 756 + fattr->cf_mode |= S_IFSOCK | cifs_sb->ctx->file_mode; 757 + fattr->cf_dtype = DT_SOCK; 758 + break; 759 + case NFS_SPECFILE_LNK: 760 + fattr->cf_mode = S_IFLNK | cifs_sb->ctx->file_mode; 761 + fattr->cf_dtype = DT_LNK; 762 + break; 763 + default: 764 + WARN_ON_ONCE(1); 765 + return false; 766 + } 767 + return true; 768 + } 769 + 728 770 switch (tag) { 729 771 case IO_REPARSE_TAG_LX_SYMLINK: 730 772 fattr->cf_mode |= S_IFLNK | cifs_sb->ctx->file_mode; ··· 831 791 fattr->cf_nlink = le32_to_cpu(info->NumberOfLinks); 832 792 833 793 if 
(cifs_open_data_reparse(data) && 834 - cifs_reparse_point_to_fattr(cifs_sb, fattr, data->reparse_tag)) 794 + cifs_reparse_point_to_fattr(cifs_sb, fattr, data)) 835 795 goto out_reparse; 836 796 837 797 if (fattr->cf_cifsattrs & ATTR_DIRECTORY) { ··· 896 856 data.adjust_tz = false; 897 857 if (data.symlink_target) { 898 858 data.symlink = true; 899 - data.reparse_tag = IO_REPARSE_TAG_SYMLINK; 859 + data.reparse.tag = IO_REPARSE_TAG_SYMLINK; 900 860 } 901 861 cifs_open_info_to_fattr(&fattr, &data, inode->i_sb); 902 862 break; ··· 1065 1025 struct cifs_sb_info *cifs_sb = CIFS_SB(sb); 1066 1026 struct kvec rsp_iov, *iov = NULL; 1067 1027 int rsp_buftype = CIFS_NO_BUFFER; 1068 - u32 tag = data->reparse_tag; 1028 + u32 tag = data->reparse.tag; 1069 1029 int rc = 0; 1070 1030 1071 1031 if (!tag && server->ops->query_reparse_point) { ··· 1075 1035 if (!rc) 1076 1036 iov = &rsp_iov; 1077 1037 } 1078 - switch ((data->reparse_tag = tag)) { 1038 + 1039 + rc = -EOPNOTSUPP; 1040 + switch ((data->reparse.tag = tag)) { 1079 1041 case 0: /* SMB1 symlink */ 1080 - iov = NULL; 1081 - fallthrough; 1082 - case IO_REPARSE_TAG_NFS: 1083 - case IO_REPARSE_TAG_SYMLINK: 1084 - if (!data->symlink_target && server->ops->query_symlink) { 1042 + if (server->ops->query_symlink) { 1085 1043 rc = server->ops->query_symlink(xid, tcon, 1086 1044 cifs_sb, full_path, 1087 - &data->symlink_target, 1088 - iov); 1045 + &data->symlink_target); 1089 1046 } 1090 1047 break; 1091 1048 case IO_REPARSE_TAG_MOUNT_POINT: 1092 1049 cifs_create_junction_fattr(fattr, sb); 1050 + rc = 0; 1093 1051 goto out; 1052 + default: 1053 + if (data->symlink_target) { 1054 + rc = 0; 1055 + } else if (server->ops->parse_reparse_point) { 1056 + rc = server->ops->parse_reparse_point(cifs_sb, 1057 + iov, data); 1058 + } 1059 + break; 1094 1060 } 1095 1061 1096 1062 cifs_open_info_to_fattr(fattr, data, sb);
+5 -1
fs/smb/client/readdir.c
··· 153 153 static void 154 154 cifs_fill_common_info(struct cifs_fattr *fattr, struct cifs_sb_info *cifs_sb) 155 155 { 156 + struct cifs_open_info_data data = { 157 + .reparse = { .tag = fattr->cf_cifstag, }, 158 + }; 159 + 156 160 fattr->cf_uid = cifs_sb->ctx->linux_uid; 157 161 fattr->cf_gid = cifs_sb->ctx->linux_gid; 158 162 ··· 169 165 * reasonably map some of them to directories vs. files vs. symlinks 170 166 */ 171 167 if ((fattr->cf_cifsattrs & ATTR_REPARSE) && 172 - cifs_reparse_point_to_fattr(cifs_sb, fattr, fattr->cf_cifstag)) 168 + cifs_reparse_point_to_fattr(cifs_sb, fattr, &data)) 173 169 goto out_reparse; 174 170 175 171 if (fattr->cf_cifsattrs & ATTR_DIRECTORY) {
+1 -1
fs/smb/client/sess.c
··· 332 332 333 333 if (iface) { 334 334 spin_lock(&ses->iface_lock); 335 - kref_put(&iface->refcount, release_iface); 336 335 iface->num_channels--; 337 336 if (iface->weight_fulfilled) 338 337 iface->weight_fulfilled--; 338 + kref_put(&iface->refcount, release_iface); 339 339 spin_unlock(&ses->iface_lock); 340 340 } 341 341
+32 -121
fs/smb/client/smb1ops.c
···
976 976 			struct cifs_tcon *tcon,
977 977 			struct cifs_sb_info *cifs_sb,
978 978 			const char *full_path,
979 -			char **target_path,
980 -			struct kvec *rsp_iov)
979 +			char **target_path)
981 980 {
982 981 	int rc;
983 -	int oplock = 0;
984 -	bool is_reparse_point = !!rsp_iov;
985 -	struct cifs_fid fid;
986 -	struct cifs_open_parms oparms;
987 982 
988 -	cifs_dbg(FYI, "%s: path: %s\n", __func__, full_path);
983 +	cifs_tcon_dbg(FYI, "%s: path=%s\n", __func__, full_path);
989 984 
990 -	if (is_reparse_point) {
991 -		cifs_dbg(VFS, "reparse points not handled for SMB1 symlinks\n");
985 +	if (!cap_unix(tcon->ses))
992 986 		return -EOPNOTSUPP;
993 -	}
994 987 
995 -	/* Check for unix extensions */
996 -	if (cap_unix(tcon->ses)) {
997 -		rc = CIFSSMBUnixQuerySymLink(xid, tcon, full_path, target_path,
998 -					     cifs_sb->local_nls,
999 -					     cifs_remap(cifs_sb));
1000 -		if (rc == -EREMOTE)
1001 -			rc = cifs_unix_dfs_readlink(xid, tcon, full_path,
1002 -						    target_path,
1003 -						    cifs_sb->local_nls);
1004 -
1005 -		goto out;
1006 -	}
1007 -
1008 -	oparms = (struct cifs_open_parms) {
1009 -		.tcon = tcon,
1010 -		.cifs_sb = cifs_sb,
1011 -		.desired_access = FILE_READ_ATTRIBUTES,
1012 -		.create_options = cifs_create_options(cifs_sb,
1013 -						      OPEN_REPARSE_POINT),
1014 -		.disposition = FILE_OPEN,
1015 -		.path = full_path,
1016 -		.fid = &fid,
1017 -	};
1018 -
1019 -	rc = CIFS_open(xid, &oparms, &oplock, NULL);
1020 -	if (rc)
1021 -		goto out;
1022 -
1023 -	rc = CIFSSMBQuerySymLink(xid, tcon, fid.netfid, target_path,
1024 -				 cifs_sb->local_nls);
1025 -	if (rc)
1026 -		goto out_close;
1027 -
1028 -	convert_delimiter(*target_path, '/');
1029 -out_close:
1030 -	CIFSSMBClose(xid, tcon, fid.netfid);
1031 -out:
1032 -	if (!rc)
1033 -		cifs_dbg(FYI, "%s: target path: %s\n", __func__, *target_path);
988 +	rc = CIFSSMBUnixQuerySymLink(xid, tcon, full_path, target_path,
989 +				     cifs_sb->local_nls, cifs_remap(cifs_sb));
990 +	if (rc == -EREMOTE)
991 +		rc = cifs_unix_dfs_readlink(xid, tcon, full_path,
992 +					    target_path, cifs_sb->local_nls);
1034 993 	return rc;
994 +}
995 +
996 +static int cifs_parse_reparse_point(struct cifs_sb_info *cifs_sb,
997 +				    struct kvec *rsp_iov,
998 +				    struct cifs_open_info_data *data)
999 +{
1000 +	struct reparse_data_buffer *buf;
1001 +	TRANSACT_IOCTL_RSP *io = rsp_iov->iov_base;
1002 +	bool unicode = !!(io->hdr.Flags2 & SMBFLG2_UNICODE);
1003 +	u32 plen = le16_to_cpu(io->ByteCount);
1004 +
1005 +	buf = (struct reparse_data_buffer *)((__u8 *)&io->hdr.Protocol +
1006 +					     le32_to_cpu(io->DataOffset));
1007 +	return parse_reparse_point(buf, plen, cifs_sb, unicode, data);
1035 1008 }
1036 1009 
1037 1010 static bool
···
1041 1068 {
1042 1069 	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
1043 1070 	struct inode *newinode = NULL;
1044 -	int rc = -EPERM;
1045 -	struct cifs_open_info_data buf = {};
1046 -	struct cifs_io_parms io_parms;
1047 -	__u32 oplock = 0;
1048 -	struct cifs_fid fid;
1049 -	struct cifs_open_parms oparms;
1050 -	unsigned int bytes_written;
1051 -	struct win_dev *pdev;
1052 -	struct kvec iov[2];
1071 +	int rc;
1053 1072 
1054 1073 	if (tcon->unix_ext) {
1055 1074 		/*
···
1075 1110 		d_instantiate(dentry, newinode);
1076 1111 		return rc;
1077 1112 	}
1078 -
1079 1113 	/*
1080 -	 * SMB1 SFU emulation: should work with all servers, but only
1081 -	 * support block and char device (no socket & fifo)
1114 +	 * Check if mounted with mount parm 'sfu' mount parm.
1115 +	 * SFU emulation should work with all servers, but only
1116 +	 * supports block and char device (no socket & fifo),
1117 +	 * and was used by default in earlier versions of Windows
1082 1118 	 */
1083 1119 	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL))
1084 -		return rc;
1085 -
1086 -	if (!S_ISCHR(mode) && !S_ISBLK(mode))
1087 -		return rc;
1088 -
1089 -	cifs_dbg(FYI, "sfu compat create special file\n");
1090 -
1091 -	oparms = (struct cifs_open_parms) {
1092 -		.tcon = tcon,
1093 -		.cifs_sb = cifs_sb,
1094 -		.desired_access = GENERIC_WRITE,
1095 -		.create_options = cifs_create_options(cifs_sb, CREATE_NOT_DIR |
1096 -						      CREATE_OPTION_SPECIAL),
1097 -		.disposition = FILE_CREATE,
1098 -		.path = full_path,
1099 -		.fid = &fid,
1100 -	};
1101 -
1102 -	if (tcon->ses->server->oplocks)
1103 -		oplock = REQ_OPLOCK;
1104 -	else
1105 -		oplock = 0;
1106 -	rc = tcon->ses->server->ops->open(xid, &oparms, &oplock, &buf);
1107 -	if (rc)
1108 -		return rc;
1109 -
1110 -	/*
1111 -	 * BB Do not bother to decode buf since no local inode yet to put
1112 -	 * timestamps in, but we can reuse it safely.
1113 -	 */
1114 -
1115 -	pdev = (struct win_dev *)&buf.fi;
1116 -	io_parms.pid = current->tgid;
1117 -	io_parms.tcon = tcon;
1118 -	io_parms.offset = 0;
1119 -	io_parms.length = sizeof(struct win_dev);
1120 -	iov[1].iov_base = &buf.fi;
1121 -	iov[1].iov_len = sizeof(struct win_dev);
1122 -	if (S_ISCHR(mode)) {
1123 -		memcpy(pdev->type, "IntxCHR", 8);
1124 -		pdev->major = cpu_to_le64(MAJOR(dev));
1125 -		pdev->minor = cpu_to_le64(MINOR(dev));
1126 -		rc = tcon->ses->server->ops->sync_write(xid, &fid, &io_parms,
1127 -							&bytes_written, iov, 1);
1128 -	} else if (S_ISBLK(mode)) {
1129 -		memcpy(pdev->type, "IntxBLK", 8);
1130 -		pdev->major = cpu_to_le64(MAJOR(dev));
1131 -		pdev->minor = cpu_to_le64(MINOR(dev));
1132 -		rc = tcon->ses->server->ops->sync_write(xid, &fid, &io_parms,
1133 -							&bytes_written, iov, 1);
1134 -	}
1135 -	tcon->ses->server->ops->close(xid, tcon, &fid);
1136 -	d_drop(dentry);
1137 -
1138 -	/* FIXME: add code here to set EAs */
1139 -
1140 -	cifs_free_open_info(&buf);
1141 -	return rc;
1120 +		return -EPERM;
1121 +	return cifs_sfu_make_node(xid, inode, dentry, tcon,
1122 +				  full_path, mode, dev);
1142 1123 }
1143 -
1144 -
1145 1124 
1146 1125 struct smb_version_operations smb1_operations = {
1147 1126 	.send_cancel = send_nt_cancel,
···
1123 1214 	.is_path_accessible = cifs_is_path_accessible,
1124 1215 	.can_echo = cifs_can_echo,
1125 1216 	.query_path_info = cifs_query_path_info,
1217 +	.query_reparse_point = cifs_query_reparse_point,
1126 1218 	.query_file_info = cifs_query_file_info,
1127 1219 	.get_srv_inum = cifs_get_srv_inum,
1128 1220 	.set_path_size = CIFSSMBSetEOF,
···
1139 1229 	.rename = CIFSSMBRename,
1140 1230 	.create_hardlink = CIFSCreateHardLink,
1141 1231 	.query_symlink = cifs_query_symlink,
1232 +	.parse_reparse_point = cifs_parse_reparse_point,
1142 1233 	.open = cifs_open_file,
1143 1234 	.set_fid = cifs_set_fid,
1144 1235 	.close = cifs_close_file,
+1 -1
fs/smb/client/smb2inode.c
···
555 555 		break;
556 556 	}
557 557 	data->reparse_point = reparse_point;
558 -	data->reparse_tag = tag;
558 +	data->reparse.tag = tag;
559 559 	return rc;
560 560 
561 561 
+111 -114
fs/smb/client/smb2ops.c
···
2866 2866 	return rc;
2867 2867 }
2868 2868 
2869 -static int
2870 -parse_reparse_posix(struct reparse_posix_data *symlink_buf,
2871 -		    u32 plen, char **target_path,
2872 -		    struct cifs_sb_info *cifs_sb)
2869 +/* See MS-FSCC 2.1.2.6 for the 'NFS' style reparse tags */
2870 +static int parse_reparse_posix(struct reparse_posix_data *buf,
2871 +			       struct cifs_sb_info *cifs_sb,
2872 +			       struct cifs_open_info_data *data)
2873 2873 {
2874 2874 	unsigned int len;
2875 +	u64 type;
2875 2876 
2876 -	/* See MS-FSCC 2.1.2.6 for the 'NFS' style reparse tags */
2877 -	len = le16_to_cpu(symlink_buf->ReparseDataLength);
2878 -
2879 -	if (le64_to_cpu(symlink_buf->InodeType) != NFS_SPECFILE_LNK) {
2880 -		cifs_dbg(VFS, "%lld not a supported symlink type\n",
2881 -			 le64_to_cpu(symlink_buf->InodeType));
2877 +	switch ((type = le64_to_cpu(buf->InodeType))) {
2878 +	case NFS_SPECFILE_LNK:
2879 +		len = le16_to_cpu(buf->ReparseDataLength);
2880 +		data->symlink_target = cifs_strndup_from_utf16(buf->DataBuffer,
2881 +							       len, true,
2882 +							       cifs_sb->local_nls);
2883 +		if (!data->symlink_target)
2884 +			return -ENOMEM;
2885 +		convert_delimiter(data->symlink_target, '/');
2886 +		cifs_dbg(FYI, "%s: target path: %s\n",
2887 +			 __func__, data->symlink_target);
2888 +		break;
2889 +	case NFS_SPECFILE_CHR:
2890 +	case NFS_SPECFILE_BLK:
2891 +	case NFS_SPECFILE_FIFO:
2892 +	case NFS_SPECFILE_SOCK:
2893 +		break;
2894 +	default:
2895 +		cifs_dbg(VFS, "%s: unhandled inode type: 0x%llx\n",
2896 +			 __func__, type);
2882 2897 		return -EOPNOTSUPP;
2883 2898 	}
2884 -
2885 -	*target_path = cifs_strndup_from_utf16(
2886 -			symlink_buf->PathBuffer,
2887 -			len, true, cifs_sb->local_nls);
2888 -	if (!(*target_path))
2889 -		return -ENOMEM;
2890 -
2891 -	convert_delimiter(*target_path, '/');
2892 -	cifs_dbg(FYI, "%s: target path: %s\n", __func__, *target_path);
2893 -
2894 2899 	return 0;
2895 2900 }
2896 2901 
2897 -static int
2898 -parse_reparse_symlink(struct reparse_symlink_data_buffer *symlink_buf,
2899 -		      u32 plen, char **target_path,
2900 -		      struct cifs_sb_info *cifs_sb)
2902 +static int parse_reparse_symlink(struct reparse_symlink_data_buffer *sym,
2903 +				 u32 plen, bool unicode,
2904 +				 struct cifs_sb_info *cifs_sb,
2905 +				 struct cifs_open_info_data *data)
2901 2906 {
2902 -	unsigned int sub_len;
2903 -	unsigned int sub_offset;
2907 +	unsigned int len;
2908 +	unsigned int offs;
2904 2909 
2905 2910 	/* We handle Symbolic Link reparse tag here. See: MS-FSCC 2.1.2.4 */
2906 2911 
2907 -	sub_offset = le16_to_cpu(symlink_buf->SubstituteNameOffset);
2908 -	sub_len = le16_to_cpu(symlink_buf->SubstituteNameLength);
2909 -	if (sub_offset + 20 > plen ||
2910 -	    sub_offset + sub_len + 20 > plen) {
2912 +	offs = le16_to_cpu(sym->SubstituteNameOffset);
2913 +	len = le16_to_cpu(sym->SubstituteNameLength);
2914 +	if (offs + 20 > plen || offs + len + 20 > plen) {
2911 2915 		cifs_dbg(VFS, "srv returned malformed symlink buffer\n");
2912 2916 		return -EIO;
2913 2917 	}
2914 2918 
2915 -	*target_path = cifs_strndup_from_utf16(
2916 -			symlink_buf->PathBuffer + sub_offset,
2917 -			sub_len, true, cifs_sb->local_nls);
2918 -	if (!(*target_path))
2919 +	data->symlink_target = cifs_strndup_from_utf16(sym->PathBuffer + offs,
2920 +						       len, unicode,
2921 +						       cifs_sb->local_nls);
2922 +	if (!data->symlink_target)
2919 2923 		return -ENOMEM;
2920 2924 
2921 -	convert_delimiter(*target_path, '/');
2922 -	cifs_dbg(FYI, "%s: target path: %s\n", __func__, *target_path);
2925 +	convert_delimiter(data->symlink_target, '/');
2926 +	cifs_dbg(FYI, "%s: target path: %s\n", __func__, data->symlink_target);
2923 2927 
2924 2928 	return 0;
2925 2929 }
2926 2930 
2927 -static int
2928 -parse_reparse_point(struct reparse_data_buffer *buf,
2929 -		    u32 plen, char **target_path,
2930 -		    struct cifs_sb_info *cifs_sb)
2931 +int parse_reparse_point(struct reparse_data_buffer *buf,
2932 +			u32 plen, struct cifs_sb_info *cifs_sb,
2933 +			bool unicode, struct cifs_open_info_data *data)
2931 2934 {
2932 -	if (plen < sizeof(struct reparse_data_buffer)) {
2933 -		cifs_dbg(VFS, "reparse buffer is too small. Must be at least 8 bytes but was %d\n",
2934 -			 plen);
2935 +	if (plen < sizeof(*buf)) {
2936 +		cifs_dbg(VFS, "%s: reparse buffer is too small. Must be at least 8 bytes but was %d\n",
2937 +			 __func__, plen);
2935 2938 		return -EIO;
2936 2939 	}
2937 2940 
2938 -	if (plen < le16_to_cpu(buf->ReparseDataLength) +
2939 -	    sizeof(struct reparse_data_buffer)) {
2940 -		cifs_dbg(VFS, "srv returned invalid reparse buf length: %d\n",
2941 -			 plen);
2941 +	if (plen < le16_to_cpu(buf->ReparseDataLength) + sizeof(*buf)) {
2942 +		cifs_dbg(VFS, "%s: invalid reparse buf length: %d\n",
2943 +			 __func__, plen);
2942 2944 		return -EIO;
2943 2945 	}
2946 +
2947 +	data->reparse.buf = buf;
2944 2948 
2945 2949 	/* See MS-FSCC 2.1.2 */
2946 2950 	switch (le32_to_cpu(buf->ReparseTag)) {
2947 2951 	case IO_REPARSE_TAG_NFS:
2948 -		return parse_reparse_posix(
2949 -			(struct reparse_posix_data *)buf,
2950 -			plen, target_path, cifs_sb);
2952 +		return parse_reparse_posix((struct reparse_posix_data *)buf,
2953 +					   cifs_sb, data);
2951 2954 	case IO_REPARSE_TAG_SYMLINK:
2952 2955 		return parse_reparse_symlink(
2953 2956 			(struct reparse_symlink_data_buffer *)buf,
2954 -			plen, target_path, cifs_sb);
2957 +			plen, unicode, cifs_sb, data);
2958 +	case IO_REPARSE_TAG_LX_SYMLINK:
2959 +	case IO_REPARSE_TAG_AF_UNIX:
2960 +	case IO_REPARSE_TAG_LX_FIFO:
2961 +	case IO_REPARSE_TAG_LX_CHR:
2962 +	case IO_REPARSE_TAG_LX_BLK:
2963 +		return 0;
2955 2964 	default:
2956 -		cifs_dbg(VFS, "srv returned unknown symlink buffer tag:0x%08x\n",
2957 -			 le32_to_cpu(buf->ReparseTag));
2965 +		cifs_dbg(VFS, "%s: unhandled reparse tag: 0x%08x\n",
2966 +			 __func__, le32_to_cpu(buf->ReparseTag));
2958 2967 		return -EOPNOTSUPP;
2959 2968 	}
2960 2969 }
2961 2970 
2962 -static int smb2_query_symlink(const unsigned int xid,
2963 -			      struct cifs_tcon *tcon,
2964 -			      struct cifs_sb_info *cifs_sb,
2965 -			      const char *full_path,
2966 -			      char **target_path,
2967 -			      struct kvec *rsp_iov)
2971 +static int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
2972 +				    struct kvec *rsp_iov,
2973 +				    struct cifs_open_info_data *data)
2968 2974 {
2969 2975 	struct reparse_data_buffer *buf;
2970 2976 	struct smb2_ioctl_rsp *io = rsp_iov->iov_base;
2971 2977 	u32 plen = le32_to_cpu(io->OutputCount);
2972 2978 
2973 -	cifs_dbg(FYI, "%s: path: %s\n", __func__, full_path);
2974 -
2975 2979 	buf = (struct reparse_data_buffer *)((u8 *)io +
2976 2980 					     le32_to_cpu(io->OutputOffset));
2977 -	return parse_reparse_point(buf, plen, target_path, cifs_sb);
2981 +	return parse_reparse_point(buf, plen, cifs_sb, true, data);
2978 2982 }
2979 2983 
2980 2984 static int smb2_query_reparse_point(const unsigned int xid,
···
5068 5064 	return le32_to_cpu(hdr->NextCommand);
5069 5065 }
5070 5066 
5071 -static int
5072 -smb2_make_node(unsigned int xid, struct inode *inode,
5073 -	       struct dentry *dentry, struct cifs_tcon *tcon,
5074 -	       const char *full_path, umode_t mode, dev_t dev)
5067 +int cifs_sfu_make_node(unsigned int xid, struct inode *inode,
5068 +		       struct dentry *dentry, struct cifs_tcon *tcon,
5069 +		       const char *full_path, umode_t mode, dev_t dev)
5075 5070 {
5076 -	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
5077 -	int rc = -EPERM;
5078 5071 	struct cifs_open_info_data buf = {};
5079 -	struct cifs_io_parms io_parms = {0};
5080 -	__u32 oplock = 0;
5081 -	struct cifs_fid fid;
5072 +	struct TCP_Server_Info *server = tcon->ses->server;
5082 5073 	struct cifs_open_parms oparms;
5074 +	struct cifs_io_parms io_parms = {};
5075 +	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
5076 +	struct cifs_fid fid;
5083 5077 	unsigned int bytes_written;
5084 5078 	struct win_dev *pdev;
5085 5079 	struct kvec iov[2];
5086 -
5087 -	/*
5088 -	 * Check if mounted with mount parm 'sfu' mount parm.
5089 -	 * SFU emulation should work with all servers, but only
5090 -	 * supports block and char device (no socket & fifo),
5091 -	 * and was used by default in earlier versions of Windows
5092 -	 */
5093 -	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL))
5094 -		return rc;
5095 -
5096 -	/*
5097 -	 * TODO: Add ability to create instead via reparse point. Windows (e.g.
5098 -	 * their current NFS server) uses this approach to expose special files
5099 -	 * over SMB2/SMB3 and Samba will do this with SMB3.1.1 POSIX Extensions
5100 -	 */
5080 +	__u32 oplock = server->oplocks ? REQ_OPLOCK : 0;
5081 +	int rc;
5101 5082 
5102 5083 	if (!S_ISCHR(mode) && !S_ISBLK(mode) && !S_ISFIFO(mode))
5103 -		return rc;
5104 -
5105 -	cifs_dbg(FYI, "sfu compat create special file\n");
5084 +		return -EPERM;
5106 5085 
5107 5086 	oparms = (struct cifs_open_parms) {
5108 5087 		.tcon = tcon,
···
5098 5111 		.fid = &fid,
5099 5112 	};
5100 5113 
5101 -	if (tcon->ses->server->oplocks)
5102 -		oplock = REQ_OPLOCK;
5103 -	else
5104 -		oplock = 0;
5105 -	rc = tcon->ses->server->ops->open(xid, &oparms, &oplock, &buf);
5114 +	rc = server->ops->open(xid, &oparms, &oplock, &buf);
5106 5115 	if (rc)
5107 5116 		return rc;
5108 5117 
···
5106 5123 	 * BB Do not bother to decode buf since no local inode yet to put
5107 5124 	 * timestamps in, but we can reuse it safely.
5108 5125 	 */
5109 -
5110 5126 	pdev = (struct win_dev *)&buf.fi;
5111 5127 	io_parms.pid = current->tgid;
5112 5128 	io_parms.tcon = tcon;
5113 -	io_parms.offset = 0;
5114 -	io_parms.length = sizeof(struct win_dev);
5115 -	iov[1].iov_base = &buf.fi;
5116 -	iov[1].iov_len = sizeof(struct win_dev);
5129 +	io_parms.length = sizeof(*pdev);
5130 +	iov[1].iov_base = pdev;
5131 +	iov[1].iov_len = sizeof(*pdev);
5117 5132 	if (S_ISCHR(mode)) {
5118 5133 		memcpy(pdev->type, "IntxCHR", 8);
5119 5134 		pdev->major = cpu_to_le64(MAJOR(dev));
5120 5135 		pdev->minor = cpu_to_le64(MINOR(dev));
5121 -		rc = tcon->ses->server->ops->sync_write(xid, &fid, &io_parms,
5122 -							&bytes_written, iov, 1);
5123 5136 	} else if (S_ISBLK(mode)) {
5124 5137 		memcpy(pdev->type, "IntxBLK", 8);
5125 5138 		pdev->major = cpu_to_le64(MAJOR(dev));
5126 5139 		pdev->minor = cpu_to_le64(MINOR(dev));
5127 -		rc = tcon->ses->server->ops->sync_write(xid, &fid, &io_parms,
5128 -							&bytes_written, iov, 1);
5129 5140 	} else if (S_ISFIFO(mode)) {
5130 5141 		memcpy(pdev->type, "LnxFIFO", 8);
5131 -		pdev->major = 0;
5132 -		pdev->minor = 0;
5133 -		rc = tcon->ses->server->ops->sync_write(xid, &fid, &io_parms,
5134 -							&bytes_written, iov, 1);
5135 5142 	}
5136 -	tcon->ses->server->ops->close(xid, tcon, &fid);
5143 +
5144 +	rc = server->ops->sync_write(xid, &fid, &io_parms,
5145 +				     &bytes_written, iov, 1);
5146 +	server->ops->close(xid, tcon, &fid);
5137 5147 	d_drop(dentry);
5138 -
5139 5148 	/* FIXME: add code here to set EAs */
5140 -
5141 5149 	cifs_free_open_info(&buf);
5142 5150 	return rc;
5151 +}
5152 +
5153 +static int smb2_make_node(unsigned int xid, struct inode *inode,
5154 +			  struct dentry *dentry, struct cifs_tcon *tcon,
5155 +			  const char *full_path, umode_t mode, dev_t dev)
5156 +{
5157 +	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
5158 +
5159 +	/*
5160 +	 * Check if mounted with mount parm 'sfu' mount parm.
5161 +	 * SFU emulation should work with all servers, but only
5162 +	 * supports block and char device (no socket & fifo),
5163 +	 * and was used by default in earlier versions of Windows
5164 +	 */
5165 +	if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL))
5166 +		return -EPERM;
5167 +	/*
5168 +	 * TODO: Add ability to create instead via reparse point. Windows (e.g.
5169 +	 * their current NFS server) uses this approach to expose special files
5170 +	 * over SMB2/SMB3 and Samba will do this with SMB3.1.1 POSIX Extensions
5171 +	 */
5172 +	return cifs_sfu_make_node(xid, inode, dentry, tcon,
5173 +				  full_path, mode, dev);
5143 5174 }
5144 5175 
5145 5176 #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
···
5206 5209 	.unlink = smb2_unlink,
5207 5210 	.rename = smb2_rename_path,
5208 5211 	.create_hardlink = smb2_create_hardlink,
5209 -	.query_symlink = smb2_query_symlink,
5212 +	.parse_reparse_point = smb2_parse_reparse_point,
5210 5213 	.query_mf_symlink = smb3_query_mf_symlink,
5211 5214 	.create_mf_symlink = smb3_create_mf_symlink,
5212 5215 	.open = smb2_open_file,
···
5308 5311 	.unlink = smb2_unlink,
5309 5312 	.rename = smb2_rename_path,
5310 5313 	.create_hardlink = smb2_create_hardlink,
5311 -	.query_symlink = smb2_query_symlink,
5314 +	.parse_reparse_point = smb2_parse_reparse_point,
5312 5315 	.query_mf_symlink = smb3_query_mf_symlink,
5313 5316 	.create_mf_symlink = smb3_create_mf_symlink,
5314 5317 	.open = smb2_open_file,
···
5413 5416 	.unlink = smb2_unlink,
5414 5417 	.rename = smb2_rename_path,
5415 5418 	.create_hardlink = smb2_create_hardlink,
5416 -	.query_symlink = smb2_query_symlink,
5419 +	.parse_reparse_point = smb2_parse_reparse_point,
5417 5420 	.query_mf_symlink = smb3_query_mf_symlink,
5418 5421 	.create_mf_symlink = smb3_create_mf_symlink,
5419 5422 	.open = smb2_open_file,
···
5527 5530 	.unlink = smb2_unlink,
5528 5531 	.rename = smb2_rename_path,
5529 5532 	.create_hardlink = smb2_create_hardlink,
5530 -	.query_symlink = smb2_query_symlink,
5533 +	.parse_reparse_point = smb2_parse_reparse_point,
5531 5534 	.query_mf_symlink = smb3_query_mf_symlink,
5532 5535 	.create_mf_symlink = smb3_create_mf_symlink,
5533 5536 	.open = smb2_open_file,
+5 -1
fs/stat.c
···
133 133 	idmap = mnt_idmap(path->mnt);
134 134 	if (inode->i_op->getattr)
135 135 		return inode->i_op->getattr(idmap, path, stat,
136 -					    request_mask, query_flags);
136 +					    request_mask,
137 +					    query_flags | AT_GETATTR_NOSEC);
137 138 
138 139 	generic_fillattr(idmap, request_mask, inode, stat);
139 140 	return 0;
···
166 165 		  u32 request_mask, unsigned int query_flags)
167 166 {
168 167 	int retval;
168 +
169 +	if (WARN_ON_ONCE(query_flags & AT_GETATTR_NOSEC))
170 +		return -EPERM;
169 171 
170 172 	retval = security_inode_getattr(path);
171 173 	if (retval)
+26 -39
fs/tracefs/event_inode.c
···
27 27 /*
28 28  * eventfs_mutex protects the eventfs_inode (ei) dentry. Any access
29 29  * to the ei->dentry must be done under this mutex and after checking
30 -  * if ei->is_freed is not set. The ei->dentry is released under the
31 -  * mutex at the same time ei->is_freed is set. If ei->is_freed is set
32 -  * then the ei->dentry is invalid.
30 +  * if ei->is_freed is not set. When ei->is_freed is set, the dentry
31 +  * is on its way to being freed after the last dput() is made on it.
33 32  */
34 33 static DEFINE_MUTEX(eventfs_mutex);
35 34 
36 35 /*
37 36  * The eventfs_inode (ei) itself is protected by SRCU. It is released from
38 37  * its parent's list and will have is_freed set (under eventfs_mutex).
39 -  * After the SRCU grace period is over, the ei may be freed.
38 +  * After the SRCU grace period is over and the last dput() is called
39 +  * the ei is freed.
40 40  */
41 41 DEFINE_STATIC_SRCU(eventfs_srcu);
42 42 
···
95 95 	if (!(dentry->d_inode->i_mode & S_IFDIR)) {
96 96 		if (!ei->entry_attrs) {
97 97 			ei->entry_attrs = kzalloc(sizeof(*ei->entry_attrs) * ei->nr_entries,
98 -						  GFP_KERNEL);
98 +						  GFP_NOFS);
99 99 			if (!ei->entry_attrs) {
100 100 				ret = -ENOMEM;
101 101 				goto out;
···
326 326 	struct eventfs_attr *attr = NULL;
327 327 	struct dentry **e_dentry = &ei->d_children[idx];
328 328 	struct dentry *dentry;
329 -	bool invalidate = false;
329 +
330 +	WARN_ON_ONCE(!inode_is_locked(parent->d_inode));
330 331 
331 332 	mutex_lock(&eventfs_mutex);
332 333 	if (ei->is_freed) {
···
349 348 
350 349 	mutex_unlock(&eventfs_mutex);
351 350 
352 -	/* The lookup already has the parent->d_inode locked */
353 -	if (!lookup)
354 -		inode_lock(parent->d_inode);
355 351 
356 352 	dentry = create_file(name, mode, attr, parent, data, fops);
357 -
358 -	if (!lookup)
359 -		inode_unlock(parent->d_inode);
360 353 
361 354 	mutex_lock(&eventfs_mutex);
362 355 
···
359 365 	 * created the dentry for this e_dentry. In which case
360 366 	 * use that one.
361 367 	 *
362 -	 * Note, with the mutex held, the e_dentry cannot have content
363 -	 * and the ei->is_freed be true at the same time.
368 +	 * If ei->is_freed is set, the e_dentry is currently on its
369 +	 * way to being freed, don't return it. If e_dentry is NULL
370 +	 * it means it was already freed.
364 371 	 */
365 -	dentry = *e_dentry;
366 -	if (WARN_ON_ONCE(dentry && ei->is_freed))
372 +	if (ei->is_freed)
367 373 		dentry = NULL;
374 +	else
375 +		dentry = *e_dentry;
368 376 	/* The lookup does not need to up the dentry refcount */
369 377 	if (dentry && !lookup)
370 378 		dget(dentry);
···
383 387 	 * Otherwise it means two dentries exist with the same name.
384 388 	 */
385 389 	WARN_ON_ONCE(!ei->is_freed);
386 -	invalidate = true;
390 +	dentry = NULL;
387 391 	}
388 392 	mutex_unlock(&eventfs_mutex);
389 393 
390 -	if (invalidate)
391 -		d_invalidate(dentry);
392 -
393 -	if (lookup || invalidate)
394 +	if (lookup)
394 395 		dput(dentry);
395 396 
396 -	return invalidate ? NULL : dentry;
397 +	return dentry;
397 398 }
398 399 
399 400 /**
···
430 437 create_dir_dentry(struct eventfs_inode *pei, struct eventfs_inode *ei,
431 438 		  struct dentry *parent, bool lookup)
432 439 {
433 -	bool invalidate = false;
434 440 	struct dentry *dentry = NULL;
441 +
442 +	WARN_ON_ONCE(!inode_is_locked(parent->d_inode));
435 443 
436 444 	mutex_lock(&eventfs_mutex);
437 445 	if (pei->is_freed || ei->is_freed) {
···
450 456 	}
451 457 	mutex_unlock(&eventfs_mutex);
452 458 
453 -	/* The lookup already has the parent->d_inode locked */
454 -	if (!lookup)
455 -		inode_lock(parent->d_inode);
456 459 
457 459 	dentry = create_dir(ei, parent);
458 -
459 -	if (!lookup)
460 -		inode_unlock(parent->d_inode);
461 460 
462 461 	mutex_lock(&eventfs_mutex);
463 462 
···
460 473 	 * created the dentry for this e_dentry. In which case
461 474 	 * use that one.
462 475 	 *
476 +	 * If ei->is_freed is set, the e_dentry is currently on its
477 +	 * way to being freed.
465 478 	 */
466 479 	dentry = ei->dentry;
467 480 	if (dentry && !lookup)
···
480 493 	 * Otherwise it means two dentries exist with the same name.
481 494 	 */
482 495 	WARN_ON_ONCE(!ei->is_freed);
483 -	invalidate = true;
496 +	dentry = NULL;
484 497 	}
485 498 	mutex_unlock(&eventfs_mutex);
486 499 
487 -	if (invalidate)
488 -		d_invalidate(dentry);
489 500 
490 -	if (lookup || invalidate)
500 +	if (lookup)
490 501 		dput(dentry);
491 502 
492 -	return invalidate ? NULL : dentry;
503 +	return dentry;
493 504 }
494 505 
495 506 /**
···
617 632 {
618 633 	struct dentry **tmp;
619 634 
620 -	tmp = krealloc(*dentries, sizeof(d) * (cnt + 2), GFP_KERNEL);
635 +	tmp = krealloc(*dentries, sizeof(d) * (cnt + 2), GFP_NOFS);
621 636 	if (!tmp)
622 637 		return -1;
623 638 	tmp[cnt] = d;
···
683 698 		return -ENOMEM;
684 699 	}
685 700 
701 +	inode_lock(parent->d_inode);
686 702 	list_for_each_entry_srcu(ei_child, &ei->children, list,
687 703 				 srcu_read_lock_held(&eventfs_srcu)) {
688 704 		d = create_dir_dentry(ei, ei_child, parent, false);
···
716 730 			cnt++;
717 731 		}
718 732 	}
733 +	inode_unlock(parent->d_inode);
719 734 	srcu_read_unlock(&eventfs_srcu, idx);
720 735 	ret = dcache_dir_open(inode, file);
721 736 
+4 -9
fs/tracefs/inode.c
···
509 509 	struct dentry *dentry;
510 510 	int error;
511 511 
512 +	/* Must always have a parent. */
513 +	if (WARN_ON_ONCE(!parent))
514 +		return ERR_PTR(-EINVAL);
515 +
512 516 	error = simple_pin_fs(&trace_fs_type, &tracefs_mount,
513 517 			      &tracefs_mount_count);
514 518 	if (error)
515 519 		return ERR_PTR(error);
516 -
517 -	/*
518 -	 * If the parent is not specified, we create it in the root.
519 -	 * We need the root dentry to do this, which is in the super
520 -	 * block. A pointer to that is in the struct vfsmount that we
521 -	 * have around.
522 -	 */
523 -	if (!parent)
524 -		parent = tracefs_mount->mnt_root;
525 520 
526 521 	if (unlikely(IS_DEADDIR(parent->d_inode)))
527 522 		dentry = ERR_PTR(-ENOENT);
+3 -2
fs/xfs/xfs_dquot.c
···
562 562 	struct xfs_dquot	*dqp,
563 563 	struct xfs_buf		*bp)
564 564 {
565 -	struct xfs_disk_dquot	*ddqp = bp->b_addr + dqp->q_bufoffset;
565 +	struct xfs_dqblk	*dqb = xfs_buf_offset(bp, dqp->q_bufoffset);
566 +	struct xfs_disk_dquot	*ddqp = &dqb->dd_diskdq;
566 567 
567 568 	/*
568 569 	 * Ensure that we got the type and ID we were looking for.
···
1251 1250 	}
1252 1251 
1253 1252 	/* Flush the incore dquot to the ondisk buffer. */
1254 -	dqblk = bp->b_addr + dqp->q_bufoffset;
1253 +	dqblk = xfs_buf_offset(bp, dqp->q_bufoffset);
1255 1254 	xfs_dquot_to_disk(&dqblk->dd_diskdq, dqp);
1256 1255 
1257 1256 	/*
+18 -3
fs/xfs/xfs_dquot_item_recover.c
···
19 19 #include "xfs_log.h"
20 20 #include "xfs_log_priv.h"
21 21 #include "xfs_log_recover.h"
22 +#include "xfs_error.h"
22 23 
23 24 STATIC void
24 25 xlog_recover_dquot_ra_pass2(
···
66 65 {
67 66 	struct xfs_mount	*mp = log->l_mp;
68 67 	struct xfs_buf		*bp;
68 +	struct xfs_dqblk	*dqb;
69 69 	struct xfs_disk_dquot	*ddq, *recddq;
70 70 	struct xfs_dq_logformat	*dq_f;
71 71 	xfs_failaddr_t		fa;
···
132 130 		return error;
133 131 
134 132 	ASSERT(bp);
135 -	ddq = xfs_buf_offset(bp, dq_f->qlf_boffset);
133 +	dqb = xfs_buf_offset(bp, dq_f->qlf_boffset);
134 +	ddq = &dqb->dd_diskdq;
136 135 
137 136 	/*
138 137 	 * If the dquot has an LSN in it, recover the dquot only if it's less
139 138 	 * than the lsn of the transaction we are replaying.
140 139 	 */
141 140 	if (xfs_has_crc(mp)) {
142 -		struct xfs_dqblk *dqb = (struct xfs_dqblk *)ddq;
143 141 		xfs_lsn_t	lsn = be64_to_cpu(dqb->dd_lsn);
144 142 
145 143 		if (lsn && lsn != -1 && XFS_LSN_CMP(lsn, current_lsn) >= 0) {
···
149 147 
150 148 	memcpy(ddq, recddq, item->ri_buf[1].i_len);
151 149 	if (xfs_has_crc(mp)) {
152 -		xfs_update_cksum((char *)ddq, sizeof(struct xfs_dqblk),
150 +		xfs_update_cksum((char *)dqb, sizeof(struct xfs_dqblk),
153 151 				 XFS_DQUOT_CRC_OFF);
152 +	}
153 +
154 +	/* Validate the recovered dquot. */
155 +	fa = xfs_dqblk_verify(log->l_mp, dqb, dq_f->qlf_id);
156 +	if (fa) {
157 +		XFS_CORRUPTION_ERROR("Bad dquot after recovery",
158 +				     XFS_ERRLEVEL_LOW, mp, dqb,
159 +				     sizeof(struct xfs_dqblk));
160 +		xfs_alert(mp,
161 +			  "Metadata corruption detected at %pS, dquot 0x%x",
162 +			  fa, dq_f->qlf_id);
163 +		error = -EFSCORRUPTED;
164 +		goto out_release;
154 165 
155 166 	ASSERT(dq_f->qlf_size == 2);
+8
fs/xfs/xfs_inode.h
···
569 569 extern void xfs_setup_iops(struct xfs_inode *ip);
570 570 extern void xfs_diflags_to_iflags(struct xfs_inode *ip, bool init);
571 571 
572 +static inline void xfs_update_stable_writes(struct xfs_inode *ip)
573 +{
574 +	if (bdev_stable_writes(xfs_inode_buftarg(ip)->bt_bdev))
575 +		mapping_set_stable_writes(VFS_I(ip)->i_mapping);
576 +	else
577 +		mapping_clear_stable_writes(VFS_I(ip)->i_mapping);
578 +}
579 +
572 580 /*
573 581  * When setting up a newly allocated inode, we need to call
574 582  * xfs_finish_inode_setup() once the inode is fully instantiated at
+22 -12
fs/xfs/xfs_ioctl.c
···
1121 1121 	struct fileattr		*fa)
1122 1122 {
1123 1123 	struct xfs_mount	*mp = ip->i_mount;
1124 +	bool			rtflag = (fa->fsx_xflags & FS_XFLAG_REALTIME);
1124 1125 	uint64_t		i_flags2;
1125 1126 
1126 -	/* Can't change realtime flag if any extents are allocated. */
1127 -	if ((ip->i_df.if_nextents || ip->i_delayed_blks) &&
1128 -	    XFS_IS_REALTIME_INODE(ip) != (fa->fsx_xflags & FS_XFLAG_REALTIME))
1129 -		return -EINVAL;
1130 -
1131 -	/* If realtime flag is set then must have realtime device */
1132 -	if (fa->fsx_xflags & FS_XFLAG_REALTIME) {
1133 -		if (mp->m_sb.sb_rblocks == 0 || mp->m_sb.sb_rextsize == 0 ||
1134 -		    xfs_extlen_to_rtxmod(mp, ip->i_extsize))
1127 +	if (rtflag != XFS_IS_REALTIME_INODE(ip)) {
1128 +		/* Can't change realtime flag if any extents are allocated. */
1129 +		if (ip->i_df.if_nextents || ip->i_delayed_blks)
1135 1130 			return -EINVAL;
1136 1131 	}
1137 1132 
1138 -	/* Clear reflink if we are actually able to set the rt flag. */
1139 -	if ((fa->fsx_xflags & FS_XFLAG_REALTIME) && xfs_is_reflink_inode(ip))
1140 -		ip->i_diflags2 &= ~XFS_DIFLAG2_REFLINK;
1133 +	if (rtflag) {
1134 +		/* If realtime flag is set then must have realtime device */
1135 +		if (mp->m_sb.sb_rblocks == 0 || mp->m_sb.sb_rextsize == 0 ||
1136 +		    xfs_extlen_to_rtxmod(mp, ip->i_extsize))
1137 +			return -EINVAL;
1138 +
1139 +		/* Clear reflink if we are actually able to set the rt flag. */
1140 +		if (xfs_is_reflink_inode(ip))
1141 +			ip->i_diflags2 &= ~XFS_DIFLAG2_REFLINK;
1142 +	}
1141 1143 
1142 1144 	/* diflags2 only valid for v3 inodes. */
1143 1145 	i_flags2 = xfs_flags2diflags2(ip, fa->fsx_xflags);
···
1150 1148 	ip->i_diflags2 = i_flags2;
1151 1149 
1152 1150 	xfs_diflags_to_iflags(ip, false);
1151 +
1152 +	/*
1153 +	 * Make the stable writes flag match that of the device the inode
1154 +	 * resides on when flipping the RT flag.
1155 +	 */
1156 +	if (rtflag != XFS_IS_REALTIME_INODE(ip) && S_ISREG(VFS_I(ip)->i_mode))
1157 +		xfs_update_stable_writes(ip);
1158 +
1153 1159 	xfs_trans_ichgtime(tp, ip, XFS_ICHGTIME_CHG);
1154 1160 	xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
1155 1161 	XFS_STATS_INC(mp, xs_ig_attrchg);
+7
fs/xfs/xfs_iops.c
···
1299 1299 	mapping_set_gfp_mask(inode->i_mapping, (gfp_mask & ~(__GFP_FS)));
1300 1300 
1301 1301 	/*
1302 +	 * For real-time inodes update the stable write flags to that of the RT
1303 +	 * device instead of the data device.
1304 +	 */
1305 +	if (S_ISREG(inode->i_mode) && XFS_IS_REALTIME_INODE(ip))
1306 +		xfs_update_stable_writes(ip);
1307 +
1308 +	/*
1302 1309 	 * If there is no attribute fork no ACL can exist on this inode,
1303 1310 	 * and it can't have any file capabilities attached to it either.
1304 1311 	 */
+1
include/acpi/acpi_bus.h
···
542 542 int acpi_bus_init_power(struct acpi_device *device);
543 543 int acpi_device_fix_up_power(struct acpi_device *device);
544 544 void acpi_device_fix_up_power_extended(struct acpi_device *adev);
545 +void acpi_device_fix_up_power_children(struct acpi_device *adev);
545 546 int acpi_bus_update_power(acpi_handle handle, int *state_p);
546 547 int acpi_device_update_power(struct acpi_device *device, int *state_p);
547 548 bool acpi_bus_power_manageable(acpi_handle handle);
+1 -1
include/asm-generic/qspinlock.h
···
70 70  */
71 71 static __always_inline int queued_spin_value_unlocked(struct qspinlock lock)
72 72 {
73 -	return !atomic_read(&lock.val);
73 +	return !lock.val.counter;
74 74 }
75 75 
76 76 /**
-1
include/linux/blk-pm.h
···
15 15 extern void blk_post_runtime_suspend(struct request_queue *q, int err);
16 16 extern void blk_pre_runtime_resume(struct request_queue *q);
17 17 extern void blk_post_runtime_resume(struct request_queue *q);
18 -extern void blk_set_runtime_active(struct request_queue *q);
19 18 #else
20 19 static inline void blk_pm_runtime_init(struct request_queue *q,
21 20 				       struct device *dev) {}
+16
include/linux/bpf_verifier.h
···
301 301 	struct tnum callback_ret_range;
302 302 	bool in_async_callback_fn;
303 303 	bool in_exception_callback_fn;
304 +	/* For callback calling functions that limit number of possible
305 +	 * callback executions (e.g. bpf_loop) keeps track of current
306 +	 * simulated iteration number.
307 +	 * Value in frame N refers to number of times callback with frame
308 +	 * N+1 was simulated, e.g. for the following call:
309 +	 *
310 +	 * bpf_loop(..., fn, ...); | suppose current frame is N
311 +	 *                         | fn would be simulated in frame N+1
312 +	 *                         | number of simulations is tracked in frame N
313 +	 */
314 +	u32 callback_depth;
304 315 
305 316 	/* The following fields should be last. See copy_func_state() */
306 317 	int acquired_refs;
···
411 400 	struct bpf_idx_pair *jmp_history;
412 401 	u32 jmp_history_cnt;
413 402 	u32 dfs_depth;
403 +	u32 callback_unroll_depth;
414 404 
415 405 #define bpf_get_spilled_reg(slot, frame, mask) \
···
523 511 	 * this instruction, regardless of any heuristics
524 512 	 */
525 513 	bool force_checkpoint;
514 +	/* true if instruction is a call to a helper function that
515 +	 * accepts callback function as a parameter.
516 +	 */
517 +	bool calls_callback;
526 518 
527 519 #define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */
+3
include/linux/hid.h
··· 679 679 struct list_head debug_list; 680 680 spinlock_t debug_list_lock; 681 681 wait_queue_head_t debug_wait; 682 + struct kref ref; 682 683 683 684 unsigned int id; /* system unique id */ 684 685 ··· 687 686 struct hid_bpf bpf; /* hid-bpf data */ 688 687 #endif /* CONFIG_BPF */ 689 688 }; 689 + 690 + void hiddev_free(struct kref *ref); 690 691 691 692 #define to_hid_device(pdev) \ 692 693 container_of(pdev, struct hid_device, dev)
+26 -4
include/linux/netdevice.h
··· 1797 1797 ML_PRIV_CAN, 1798 1798 }; 1799 1799 1800 + enum netdev_stat_type { 1801 + NETDEV_PCPU_STAT_NONE, 1802 + NETDEV_PCPU_STAT_LSTATS, /* struct pcpu_lstats */ 1803 + NETDEV_PCPU_STAT_TSTATS, /* struct pcpu_sw_netstats */ 1804 + NETDEV_PCPU_STAT_DSTATS, /* struct pcpu_dstats */ 1805 + }; 1806 + 1800 1807 /** 1801 1808 * struct net_device - The DEVICE structure. 1802 1809 * ··· 1998 1991 * 1999 1992 * @ml_priv: Mid-layer private 2000 1993 * @ml_priv_type: Mid-layer private type 2001 - * @lstats: Loopback statistics 2002 - * @tstats: Tunnel statistics 2003 - * @dstats: Dummy statistics 2004 - * @vstats: Virtual ethernet statistics 1994 + * 1995 + * @pcpu_stat_type: Type of device statistics which the core should 1996 + * allocate/free: none, lstats, tstats, dstats. none 1997 + * means the driver is handling statistics allocation/ 1998 + * freeing internally. 1999 + * @lstats: Loopback statistics: packets, bytes 2000 + * @tstats: Tunnel statistics: RX/TX packets, RX/TX bytes 2001 + * @dstats: Dummy statistics: RX/TX/drop packets, RX/TX bytes 2005 2002 * 2006 2003 * @garp_port: GARP 2007 2004 * @mrp_port: MRP ··· 2365 2354 void *ml_priv; 2366 2355 enum netdev_ml_priv_type ml_priv_type; 2367 2356 2357 + enum netdev_stat_type pcpu_stat_type:8; 2368 2358 union { 2369 2359 struct pcpu_lstats __percpu *lstats; 2370 2360 struct pcpu_sw_netstats __percpu *tstats; ··· 2766 2754 u64_stats_t tx_bytes; 2767 2755 struct u64_stats_sync syncp; 2768 2756 } __aligned(4 * sizeof(u64)); 2757 + 2758 + struct pcpu_dstats { 2759 + u64 rx_packets; 2760 + u64 rx_bytes; 2761 + u64 rx_drops; 2762 + u64 tx_packets; 2763 + u64 tx_bytes; 2764 + u64 tx_drops; 2765 + struct u64_stats_sync syncp; 2766 + } __aligned(8 * sizeof(u64)); 2769 2767 2770 2768 struct pcpu_lstats { 2771 2769 u64_stats_t packets;
+17
include/linux/pagemap.h
··· 204 204 AS_NO_WRITEBACK_TAGS = 5, 205 205 AS_LARGE_FOLIO_SUPPORT = 6, 206 206 AS_RELEASE_ALWAYS, /* Call ->release_folio(), even if no private data */ 207 + AS_STABLE_WRITES, /* must wait for writeback before modifying 208 + folio contents */ 207 209 }; 208 210 209 211 /** ··· 289 287 static inline void mapping_clear_release_always(struct address_space *mapping) 290 288 { 291 289 clear_bit(AS_RELEASE_ALWAYS, &mapping->flags); 290 + } 291 + 292 + static inline bool mapping_stable_writes(const struct address_space *mapping) 293 + { 294 + return test_bit(AS_STABLE_WRITES, &mapping->flags); 295 + } 296 + 297 + static inline void mapping_set_stable_writes(struct address_space *mapping) 298 + { 299 + set_bit(AS_STABLE_WRITES, &mapping->flags); 300 + } 301 + 302 + static inline void mapping_clear_stable_writes(struct address_space *mapping) 303 + { 304 + clear_bit(AS_STABLE_WRITES, &mapping->flags); 292 305 } 293 306 294 307 static inline gfp_t mapping_gfp_mask(struct address_space * mapping)
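The pagemap.h hunk adds the usual set/clear/test helper trio around a new address_space flag bit. A plain-integer sketch of that helper shape (the kernel versions use atomic `set_bit()`/`clear_bit()`/`test_bit()` on `mapping->flags`; the bit index below is a made-up demo value):

```c
#include <assert.h>

/* User-space sketch of the helper trio added for AS_STABLE_WRITES:
 * one flags word, one bit index, and set/clear/test wrappers.
 * DEMO_STABLE_WRITES is a hypothetical bit position. */
enum { DEMO_STABLE_WRITES = 7 };

struct demo_mapping { unsigned long flags; };

static void demo_set_stable_writes(struct demo_mapping *m)
{
    m->flags |= 1UL << DEMO_STABLE_WRITES;
}

static void demo_clear_stable_writes(struct demo_mapping *m)
{
    m->flags &= ~(1UL << DEMO_STABLE_WRITES);
}

static int demo_stable_writes(const struct demo_mapping *m)
{
    return !!(m->flags & (1UL << DEMO_STABLE_WRITES));
}
```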
-13
include/linux/usb/phy.h
··· 144 144 */ 145 145 int (*set_wakeup)(struct usb_phy *x, bool enabled); 146 146 147 - /* notify phy port status change */ 148 - int (*notify_port_status)(struct usb_phy *x, int port, 149 - u16 portstatus, u16 portchange); 150 - 151 147 /* notify phy connect status change */ 152 148 int (*notify_connect)(struct usb_phy *x, 153 149 enum usb_device_speed speed); ··· 312 316 { 313 317 if (x && x->set_wakeup) 314 318 return x->set_wakeup(x, enabled); 315 - else 316 - return 0; 317 - } 318 - 319 - static inline int 320 - usb_phy_notify_port_status(struct usb_phy *x, int port, u16 portstatus, u16 portchange) 321 - { 322 - if (x && x->notify_port_status) 323 - return x->notify_port_status(x, port, portstatus, portchange); 324 319 else 325 320 return 0; 326 321 }
+6
include/net/netkit.h
··· 10 10 int netkit_link_attach(const union bpf_attr *attr, struct bpf_prog *prog); 11 11 int netkit_prog_detach(const union bpf_attr *attr, struct bpf_prog *prog); 12 12 int netkit_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr); 13 + INDIRECT_CALLABLE_DECLARE(struct net_device *netkit_peer_dev(struct net_device *dev)); 13 14 #else 14 15 static inline int netkit_prog_attach(const union bpf_attr *attr, 15 16 struct bpf_prog *prog) ··· 34 33 union bpf_attr __user *uattr) 35 34 { 36 35 return -EINVAL; 36 + } 37 + 38 + static inline struct net_device *netkit_peer_dev(struct net_device *dev) 39 + { 40 + return NULL; 37 41 } 38 42 #endif /* CONFIG_NETKIT */ 39 43 #endif /* __NET_NETKIT_H */
+1 -1
include/trace/events/rxrpc.h
··· 328 328 E_(rxrpc_rtt_tx_ping, "PING") 329 329 330 330 #define rxrpc_rtt_rx_traces \ 331 - EM(rxrpc_rtt_rx_cancel, "CNCL") \ 331 + EM(rxrpc_rtt_rx_other_ack, "OACK") \ 332 332 EM(rxrpc_rtt_rx_obsolete, "OBSL") \ 333 333 EM(rxrpc_rtt_rx_lost, "LOST") \ 334 334 EM(rxrpc_rtt_rx_ping_response, "PONG") \
+3
include/uapi/linux/fcntl.h
··· 116 116 #define AT_HANDLE_FID AT_REMOVEDIR /* file handle is needed to 117 117 compare object identity and may not 118 118 be usable to open_by_handle_at(2) */ 119 + #if defined(__KERNEL__) 120 + #define AT_GETATTR_NOSEC 0x80000000 121 + #endif 119 122 120 123 #endif /* _UAPI_LINUX_FCNTL_H */
+1 -1
io_uring/fs.c
··· 254 254 newf = u64_to_user_ptr(READ_ONCE(sqe->addr2)); 255 255 lnk->flags = READ_ONCE(sqe->hardlink_flags); 256 256 257 - lnk->oldpath = getname(oldf); 257 + lnk->oldpath = getname_uflags(oldf, lnk->flags); 258 258 if (IS_ERR(lnk->oldpath)) 259 259 return PTR_ERR(lnk->oldpath); 260 260
+1 -1
io_uring/rsrc.c
··· 1258 1258 */ 1259 1259 const struct bio_vec *bvec = imu->bvec; 1260 1260 1261 - if (offset <= bvec->bv_len) { 1261 + if (offset < bvec->bv_len) { 1262 1262 /* 1263 1263 * Note, huge pages buffers consists of one large 1264 1264 * bvec entry and should always go this way. The other
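The `<=` to `<` change in io_uring/rsrc.c is a boundary fix: an offset exactly equal to the first bvec's length has consumed that segment entirely, so the lookup must take the multi-segment path rather than map a zero-length slice of segment 0. A minimal sketch of just that predicate (segment lengths are arbitrary demo values):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the boundary fixed in the rsrc.c hunk: offset == seg0_len
 * means the buffer starts in a later segment, so the check must be
 * strictly less-than. The old `<=` misclassified that edge case. */
static int starts_in_first_segment(size_t offset, size_t seg0_len)
{
    return offset < seg0_len;
}
```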
+284 -154
kernel/bpf/verifier.c
··· 547 547 return func_id == BPF_FUNC_dynptr_data; 548 548 } 549 549 550 - static bool is_callback_calling_kfunc(u32 btf_id); 550 + static bool is_sync_callback_calling_kfunc(u32 btf_id); 551 551 static bool is_bpf_throw_kfunc(struct bpf_insn *insn); 552 552 553 - static bool is_callback_calling_function(enum bpf_func_id func_id) 553 + static bool is_sync_callback_calling_function(enum bpf_func_id func_id) 554 554 { 555 555 return func_id == BPF_FUNC_for_each_map_elem || 556 - func_id == BPF_FUNC_timer_set_callback || 557 556 func_id == BPF_FUNC_find_vma || 558 557 func_id == BPF_FUNC_loop || 559 558 func_id == BPF_FUNC_user_ringbuf_drain; ··· 561 562 static bool is_async_callback_calling_function(enum bpf_func_id func_id) 562 563 { 563 564 return func_id == BPF_FUNC_timer_set_callback; 565 + } 566 + 567 + static bool is_callback_calling_function(enum bpf_func_id func_id) 568 + { 569 + return is_sync_callback_calling_function(func_id) || 570 + is_async_callback_calling_function(func_id); 571 + } 572 + 573 + static bool is_sync_callback_calling_insn(struct bpf_insn *insn) 574 + { 575 + return (bpf_helper_call(insn) && is_sync_callback_calling_function(insn->imm)) || 576 + (bpf_pseudo_kfunc_call(insn) && is_sync_callback_calling_kfunc(insn->imm)); 564 577 } 565 578 566 579 static bool is_storage_get_function(enum bpf_func_id func_id) ··· 1819 1808 dst_state->first_insn_idx = src->first_insn_idx; 1820 1809 dst_state->last_insn_idx = src->last_insn_idx; 1821 1810 dst_state->dfs_depth = src->dfs_depth; 1811 + dst_state->callback_unroll_depth = src->callback_unroll_depth; 1822 1812 dst_state->used_as_loop_entry = src->used_as_loop_entry; 1823 1813 for (i = 0; i <= src->curframe; i++) { 1824 1814 dst = dst_state->frame[i]; ··· 3451 3439 reg->subreg_def = DEF_NOT_SUBREG; 3452 3440 } 3453 3441 3454 - static int check_reg_arg(struct bpf_verifier_env *env, u32 regno, 3455 - enum reg_arg_type t) 3442 + static int __check_reg_arg(struct bpf_verifier_env *env, struct bpf_reg_state *regs, u32 regno, 3443 + enum reg_arg_type t) 3456 3444 { 3457 - struct bpf_verifier_state *vstate = env->cur_state; 3458 - struct bpf_func_state *state = vstate->frame[vstate->curframe]; 3459 3445 struct bpf_insn *insn = env->prog->insnsi + env->insn_idx; 3460 - struct bpf_reg_state *reg, *regs = state->regs; 3446 + struct bpf_reg_state *reg; 3461 3447 bool rw64; 3462 3448 3463 3449 if (regno >= MAX_BPF_REG) { ··· 3494 3484 mark_reg_unknown(env, regs, regno); 3495 3485 } 3496 3486 return 0; 3487 + } 3488 + 3489 + static int check_reg_arg(struct bpf_verifier_env *env, u32 regno, 3490 + enum reg_arg_type t) 3491 + { 3492 + struct bpf_verifier_state *vstate = env->cur_state; 3493 + struct bpf_func_state *state = vstate->frame[vstate->curframe]; 3494 + 3495 + return __check_reg_arg(env, state->regs, regno, t); 3497 3496 } 3498 3497 3499 3498 static void mark_jmp_point(struct bpf_verifier_env *env, int idx) ··· 3743 3724 } 3744 3725 } 3745 3726 3727 + static bool calls_callback(struct bpf_verifier_env *env, int insn_idx); 3728 + 3746 3729 /* For given verifier state backtrack_insn() is called from the last insn to 3747 3730 * the first insn. Its purpose is to compute a bitmask of registers and 3748 3731 * stack slots that needs precision in the parent verifier state.
··· 3920 3899 return -EFAULT; 3921 3900 return 0; 3922 3901 } 3923 - } else if ((bpf_helper_call(insn) && 3924 - is_callback_calling_function(insn->imm) && 3925 - !is_async_callback_calling_function(insn->imm)) || 3926 - (bpf_pseudo_kfunc_call(insn) && is_callback_calling_kfunc(insn->imm))) { 3927 - /* callback-calling helper or kfunc call, which means 3928 - * we are exiting from subprog, but unlike the subprog 3929 - * call handling above, we shouldn't propagate 3930 - * precision of r1-r5 (if any requested), as they are 3931 - * not actually arguments passed directly to callback 3932 - * subprogs 3902 + } else if (is_sync_callback_calling_insn(insn) && idx != subseq_idx - 1) { 3903 + /* exit from callback subprog to callback-calling helper or 3904 + * kfunc call. Use idx/subseq_idx check to discern it from 3905 + * straight line code backtracking. 3906 + * Unlike the subprog call handling above, we shouldn't 3907 + * propagate precision of r1-r5 (if any requested), as they are 3908 + * not actually arguments passed directly to callback subprogs 3933 3909 */ 3934 3910 if (bt_reg_mask(bt) & ~BPF_REGMASK_ARGS) { 3935 3911 verbose(env, "BUG regs %x\n", bt_reg_mask(bt)); ··· 3961 3943 } else if (opcode == BPF_EXIT) { 3962 3944 bool r0_precise; 3963 3945 3946 + /* Backtracking to a nested function call, 'idx' is a part of 3947 + * the inner frame 'subseq_idx' is a part of the outer frame. 3948 + * In case of a regular function call, instructions giving 3949 + * precision to registers R1-R5 should have been found already. 3950 + * In case of a callback, it is ok to have R1-R5 marked for 3951 + * backtracking, as these registers are set by the function 3952 + * invoking callback. 
3953 + */ 3954 + if (subseq_idx >= 0 && calls_callback(env, subseq_idx)) 3955 + for (i = BPF_REG_1; i <= BPF_REG_5; i++) 3956 + bt_clear_reg(bt, i); 3964 3957 if (bt_reg_mask(bt) & BPF_REGMASK_ARGS) { 3965 - /* if backtracing was looking for registers R1-R5 3966 - * they should have been found already. 3967 - */ 3968 3958 verbose(env, "BUG regs %x\n", bt_reg_mask(bt)); 3969 3959 WARN_ONCE(1, "verifier backtracking bug"); 3970 3960 return -EFAULT; ··· 9376 9350 /* after the call registers r0 - r5 were scratched */ 9377 9351 for (i = 0; i < CALLER_SAVED_REGS; i++) { 9378 9352 mark_reg_not_init(env, regs, caller_saved[i]); 9379 - check_reg_arg(env, caller_saved[i], DST_OP_NO_MARK); 9353 + __check_reg_arg(env, regs, caller_saved[i], DST_OP_NO_MARK); 9380 9354 } 9381 9355 } 9382 9356 ··· 9389 9363 struct bpf_func_state *caller, 9390 9364 struct bpf_func_state *callee, int insn_idx); 9391 9365 9392 - static int __check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn, 9393 - int *insn_idx, int subprog, 9394 - set_callee_state_fn set_callee_state_cb) 9366 + static int setup_func_entry(struct bpf_verifier_env *env, int subprog, int callsite, 9367 + set_callee_state_fn set_callee_state_cb, 9368 + struct bpf_verifier_state *state) 9395 9369 { 9396 - struct bpf_verifier_state *state = env->cur_state; 9397 9370 struct bpf_func_state *caller, *callee; 9398 9371 int err; 9399 9372 ··· 9402 9377 return -E2BIG; 9403 9378 } 9404 9379 9405 - caller = state->frame[state->curframe]; 9406 9380 if (state->frame[state->curframe + 1]) { 9407 9381 verbose(env, "verifier bug. 
Frame %d already allocated\n", 9408 9382 state->curframe + 1); 9409 9383 return -EFAULT; 9410 9384 } 9411 9385 9412 - err = btf_check_subprog_call(env, subprog, caller->regs); 9413 - if (err == -EFAULT) 9414 - return err; 9415 - if (subprog_is_global(env, subprog)) { 9416 - if (err) { 9417 - verbose(env, "Caller passes invalid args into func#%d\n", 9418 - subprog); 9419 - return err; 9420 - } else { 9421 - if (env->log.level & BPF_LOG_LEVEL) 9422 - verbose(env, 9423 - "Func#%d is global and valid. Skipping.\n", 9424 - subprog); 9425 - clear_caller_saved_regs(env, caller->regs); 9426 - 9427 - /* All global functions return a 64-bit SCALAR_VALUE */ 9428 - mark_reg_unknown(env, caller->regs, BPF_REG_0); 9429 - caller->regs[BPF_REG_0].subreg_def = DEF_NOT_SUBREG; 9430 - 9431 - /* continue with next insn after call */ 9432 - return 0; 9433 - } 9434 - } 9435 - 9436 - /* set_callee_state is used for direct subprog calls, but we are 9437 - * interested in validating only BPF helpers that can call subprogs as 9438 - * callbacks 9439 - */ 9440 - if (set_callee_state_cb != set_callee_state) { 9441 - env->subprog_info[subprog].is_cb = true; 9442 - if (bpf_pseudo_kfunc_call(insn) && 9443 - !is_callback_calling_kfunc(insn->imm)) { 9444 - verbose(env, "verifier bug: kfunc %s#%d not marked as callback-calling\n", 9445 - func_id_name(insn->imm), insn->imm); 9446 - return -EFAULT; 9447 - } else if (!bpf_pseudo_kfunc_call(insn) && 9448 - !is_callback_calling_function(insn->imm)) { /* helper */ 9449 - verbose(env, "verifier bug: helper %s#%d not marked as callback-calling\n", 9450 - func_id_name(insn->imm), insn->imm); 9451 - return -EFAULT; 9452 - } 9453 - } 9454 - 9455 - if (insn->code == (BPF_JMP | BPF_CALL) && 9456 - insn->src_reg == 0 && 9457 - insn->imm == BPF_FUNC_timer_set_callback) { 9458 - struct bpf_verifier_state *async_cb; 9459 - 9460 - /* there is no real recursion here. 
timer callbacks are async */ 9461 - env->subprog_info[subprog].is_async_cb = true; 9462 - async_cb = push_async_cb(env, env->subprog_info[subprog].start, 9463 - *insn_idx, subprog); 9464 - if (!async_cb) 9465 - return -EFAULT; 9466 - callee = async_cb->frame[0]; 9467 - callee->async_entry_cnt = caller->async_entry_cnt + 1; 9468 - 9469 - /* Convert bpf_timer_set_callback() args into timer callback args */ 9470 - err = set_callee_state_cb(env, caller, callee, *insn_idx); 9471 - if (err) 9472 - return err; 9473 - 9474 - clear_caller_saved_regs(env, caller->regs); 9475 - mark_reg_unknown(env, caller->regs, BPF_REG_0); 9476 - caller->regs[BPF_REG_0].subreg_def = DEF_NOT_SUBREG; 9477 - /* continue with next insn after call */ 9478 - return 0; 9479 - } 9480 - 9386 + caller = state->frame[state->curframe]; 9481 9387 callee = kzalloc(sizeof(*callee), GFP_KERNEL); 9482 9388 if (!callee) 9483 9389 return -ENOMEM; ··· 9420 9464 */ 9421 9465 init_func_state(env, callee, 9422 9466 /* remember the callsite, it will be used by bpf_exit */ 9423 - *insn_idx /* callsite */, 9467 + callsite, 9424 9468 state->curframe + 1 /* frameno within this callchain */, 9425 9469 subprog /* subprog number within this prog */); 9426 - 9427 9470 /* Transfer references to the callee */ 9428 9471 err = copy_reference_state(callee, caller); 9472 + err = err ?: set_callee_state_cb(env, caller, callee, callsite); 9429 9473 if (err) 9430 9474 goto err_out; 9431 - 9432 - err = set_callee_state_cb(env, caller, callee, *insn_idx); 9433 - if (err) 9434 - goto err_out; 9435 - 9436 - clear_caller_saved_regs(env, caller->regs); 9437 9475 9438 9476 /* only increment it after check_reg_arg() finished */ 9439 9477 state->curframe++; 9478 + 9479 + return 0; 9480 + 9481 + err_out: 9482 + free_func_state(callee); 9483 + state->frame[state->curframe + 1] = NULL; 9484 + return err; 9485 + } 9486 + 9487 + static int push_callback_call(struct bpf_verifier_env *env, struct bpf_insn *insn, 9488 + int insn_idx, int subprog, 
9489 + set_callee_state_fn set_callee_state_cb) 9490 + { 9491 + struct bpf_verifier_state *state = env->cur_state, *callback_state; 9492 + struct bpf_func_state *caller, *callee; 9493 + int err; 9494 + 9495 + caller = state->frame[state->curframe]; 9496 + err = btf_check_subprog_call(env, subprog, caller->regs); 9497 + if (err == -EFAULT) 9498 + return err; 9499 + 9500 + /* set_callee_state is used for direct subprog calls, but we are 9501 + * interested in validating only BPF helpers that can call subprogs as 9502 + * callbacks 9503 + */ 9504 + env->subprog_info[subprog].is_cb = true; 9505 + if (bpf_pseudo_kfunc_call(insn) && 9506 + !is_sync_callback_calling_kfunc(insn->imm)) { 9507 + verbose(env, "verifier bug: kfunc %s#%d not marked as callback-calling\n", 9508 + func_id_name(insn->imm), insn->imm); 9509 + return -EFAULT; 9510 + } else if (!bpf_pseudo_kfunc_call(insn) && 9511 + !is_callback_calling_function(insn->imm)) { /* helper */ 9512 + verbose(env, "verifier bug: helper %s#%d not marked as callback-calling\n", 9513 + func_id_name(insn->imm), insn->imm); 9514 + return -EFAULT; 9515 + } 9516 + 9517 + if (insn->code == (BPF_JMP | BPF_CALL) && 9518 + insn->src_reg == 0 && 9519 + insn->imm == BPF_FUNC_timer_set_callback) { 9520 + struct bpf_verifier_state *async_cb; 9521 + 9522 + /* there is no real recursion here. 
timer callbacks are async */ 9523 + env->subprog_info[subprog].is_async_cb = true; 9524 + async_cb = push_async_cb(env, env->subprog_info[subprog].start, 9525 + insn_idx, subprog); 9526 + if (!async_cb) 9527 + return -EFAULT; 9528 + callee = async_cb->frame[0]; 9529 + callee->async_entry_cnt = caller->async_entry_cnt + 1; 9530 + 9531 + /* Convert bpf_timer_set_callback() args into timer callback args */ 9532 + err = set_callee_state_cb(env, caller, callee, insn_idx); 9533 + if (err) 9534 + return err; 9535 + 9536 + return 0; 9537 + } 9538 + 9539 + /* for callback functions enqueue entry to callback and 9540 + * proceed with next instruction within current frame. 9541 + */ 9542 + callback_state = push_stack(env, env->subprog_info[subprog].start, insn_idx, false); 9543 + if (!callback_state) 9544 + return -ENOMEM; 9545 + 9546 + err = setup_func_entry(env, subprog, insn_idx, set_callee_state_cb, 9547 + callback_state); 9548 + if (err) 9549 + return err; 9550 + 9551 + callback_state->callback_unroll_depth++; 9552 + callback_state->frame[callback_state->curframe - 1]->callback_depth++; 9553 + caller->callback_depth = 0; 9554 + return 0; 9555 + } 9556 + 9557 + static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn, 9558 + int *insn_idx) 9559 + { 9560 + struct bpf_verifier_state *state = env->cur_state; 9561 + struct bpf_func_state *caller; 9562 + int err, subprog, target_insn; 9563 + 9564 + target_insn = *insn_idx + insn->imm + 1; 9565 + subprog = find_subprog(env, target_insn); 9566 + if (subprog < 0) { 9567 + verbose(env, "verifier bug. 
No program starts at insn %d\n", target_insn); 9568 + return -EFAULT; 9569 + } 9570 + 9571 + caller = state->frame[state->curframe]; 9572 + err = btf_check_subprog_call(env, subprog, caller->regs); 9573 + if (err == -EFAULT) 9574 + return err; 9575 + if (subprog_is_global(env, subprog)) { 9576 + if (err) { 9577 + verbose(env, "Caller passes invalid args into func#%d\n", subprog); 9578 + return err; 9579 + } 9580 + 9581 + if (env->log.level & BPF_LOG_LEVEL) 9582 + verbose(env, "Func#%d is global and valid. Skipping.\n", subprog); 9583 + clear_caller_saved_regs(env, caller->regs); 9584 + 9585 + /* All global functions return a 64-bit SCALAR_VALUE */ 9586 + mark_reg_unknown(env, caller->regs, BPF_REG_0); 9587 + caller->regs[BPF_REG_0].subreg_def = DEF_NOT_SUBREG; 9588 + 9589 + /* continue with next insn after call */ 9590 + return 0; 9591 + } 9592 + 9593 + /* for regular function entry setup new frame and continue 9594 + * from that frame. 9595 + */ 9596 + err = setup_func_entry(env, subprog, *insn_idx, set_callee_state, state); 9597 + if (err) 9598 + return err; 9599 + 9600 + clear_caller_saved_regs(env, caller->regs); 9440 9601 9441 9602 /* and go analyze first insn of the callee */ 9442 9603 *insn_idx = env->subprog_info[subprog].start - 1; ··· 9562 9489 verbose(env, "caller:\n"); 9563 9490 print_verifier_state(env, caller, true); 9564 9491 verbose(env, "callee:\n"); 9565 - print_verifier_state(env, callee, true); 9492 + print_verifier_state(env, state->frame[state->curframe], true); 9566 9493 } 9567 - return 0; 9568 9494 9569 - err_out: 9570 - free_func_state(callee); 9571 - state->frame[state->curframe + 1] = NULL; 9572 - return err; 9495 + return 0; 9573 9496 } 9574 9497 9575 9498 int map_set_for_each_callback_args(struct bpf_verifier_env *env, ··· 9607 9538 for (i = BPF_REG_1; i <= BPF_REG_5; i++) 9608 9539 callee->regs[i] = caller->regs[i]; 9609 9540 return 0; 9610 - } 9611 - 9612 - static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn 
*insn, 9613 - int *insn_idx) 9614 - { 9615 - int subprog, target_insn; 9616 - 9617 - target_insn = *insn_idx + insn->imm + 1; 9618 - subprog = find_subprog(env, target_insn); 9619 - if (subprog < 0) { 9620 - verbose(env, "verifier bug. No program starts at insn %d\n", 9621 - target_insn); 9622 - return -EFAULT; 9623 - } 9624 - 9625 - return __check_func_call(env, insn, insn_idx, subprog, set_callee_state); 9626 9541 } 9627 9542 9628 9543 static int set_map_elem_callback_state(struct bpf_verifier_env *env, ··· 9801 9748 9802 9749 static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx) 9803 9750 { 9804 - struct bpf_verifier_state *state = env->cur_state; 9751 + struct bpf_verifier_state *state = env->cur_state, *prev_st; 9805 9752 struct bpf_func_state *caller, *callee; 9806 9753 struct bpf_reg_state *r0; 9754 + bool in_callback_fn; 9807 9755 int err; 9808 9756 9809 9757 callee = state->frame[state->curframe]; ··· 9833 9779 verbose_invalid_scalar(env, r0, &range, "callback return", "R0"); 9834 9780 return -EINVAL; 9835 9781 } 9782 + if (!calls_callback(env, callee->callsite)) { 9783 + verbose(env, "BUG: in callback at %d, callsite %d !calls_callback\n", 9784 + *insn_idx, callee->callsite); 9785 + return -EFAULT; 9786 + } 9836 9787 } else { 9837 9788 /* return to the caller whatever r0 had in the callee */ 9838 9789 caller->regs[BPF_REG_0] = *r0; ··· 9855 9796 return err; 9856 9797 } 9857 9798 9858 - *insn_idx = callee->callsite + 1; 9799 + /* for callbacks like bpf_loop or bpf_for_each_map_elem go back to callsite, 9800 + * there function call logic would reschedule callback visit. If iteration 9801 + * converges is_state_visited() would prune that visit eventually. 
9802 + */ 9803 + in_callback_fn = callee->in_callback_fn; 9804 + if (in_callback_fn) 9805 + *insn_idx = callee->callsite; 9806 + else 9807 + *insn_idx = callee->callsite + 1; 9808 + 9859 9809 if (env->log.level & BPF_LOG_LEVEL) { 9860 9810 verbose(env, "returning from callee:\n"); 9861 9811 print_verifier_state(env, callee, true); ··· 9875 9807 * bpf_throw, this will be done by copy_verifier_state for extra frames. */ 9876 9808 free_func_state(callee); 9877 9809 state->frame[state->curframe--] = NULL; 9810 + 9811 + /* for callbacks widen imprecise scalars to make programs like below verify: 9812 + * 9813 + * struct ctx { int i; } 9814 + * void cb(int idx, struct ctx *ctx) { ctx->i++; ... } 9815 + * ... 9816 + * struct ctx = { .i = 0; } 9817 + * bpf_loop(100, cb, &ctx, 0); 9818 + * 9819 + * This is similar to what is done in process_iter_next_call() for open 9820 + * coded iterators. 9821 + */ 9822 + prev_st = in_callback_fn ? find_prev_entry(env, state, *insn_idx) : NULL; 9823 + if (prev_st) { 9824 + err = widen_imprecise_scalars(env, prev_st, state); 9825 + if (err) 9826 + return err; 9827 + } 9878 9828 return 0; 9879 9829 } 9880 9830 ··· 10295 10209 } 10296 10210 break; 10297 10211 case BPF_FUNC_for_each_map_elem: 10298 - err = __check_func_call(env, insn, insn_idx_p, meta.subprogno, 10299 - set_map_elem_callback_state); 10212 + err = push_callback_call(env, insn, insn_idx, meta.subprogno, 10213 + set_map_elem_callback_state); 10300 10214 break; 10301 10215 case BPF_FUNC_timer_set_callback: 10302 - err = __check_func_call(env, insn, insn_idx_p, meta.subprogno, 10303 - set_timer_callback_state); 10216 + err = push_callback_call(env, insn, insn_idx, meta.subprogno, 10217 + set_timer_callback_state); 10304 10218 break; 10305 10219 case BPF_FUNC_find_vma: 10306 - err = __check_func_call(env, insn, insn_idx_p, meta.subprogno, 10307 - set_find_vma_callback_state); 10220 + err = push_callback_call(env, insn, insn_idx, meta.subprogno, 10221 + 
set_find_vma_callback_state); 10308 10222 break; 10309 10223 case BPF_FUNC_snprintf: 10310 10224 err = check_bpf_snprintf_call(env, regs); 10311 10225 break; 10312 10226 case BPF_FUNC_loop: 10313 10227 update_loop_inline_state(env, meta.subprogno); 10314 - err = __check_func_call(env, insn, insn_idx_p, meta.subprogno, 10315 - set_loop_callback_state); 10228 + /* Verifier relies on R1 value to determine if bpf_loop() iteration 10229 + * is finished, thus mark it precise. 10230 + */ 10231 + err = mark_chain_precision(env, BPF_REG_1); 10232 + if (err) 10233 + return err; 10234 + if (cur_func(env)->callback_depth < regs[BPF_REG_1].umax_value) { 10235 + err = push_callback_call(env, insn, insn_idx, meta.subprogno, 10236 + set_loop_callback_state); 10237 + } else { 10238 + cur_func(env)->callback_depth = 0; 10239 + if (env->log.level & BPF_LOG_LEVEL2) 10240 + verbose(env, "frame%d bpf_loop iteration limit reached\n", 10241 + env->cur_state->curframe); 10242 + } 10316 10243 break; 10317 10244 case BPF_FUNC_dynptr_from_mem: 10318 10245 if (regs[BPF_REG_1].type != PTR_TO_MAP_VALUE) { ··· 10421 10322 break; 10422 10323 } 10423 10324 case BPF_FUNC_user_ringbuf_drain: 10424 - err = __check_func_call(env, insn, insn_idx_p, meta.subprogno, 10425 - set_user_ringbuf_callback_state); 10325 + err = push_callback_call(env, insn, insn_idx, meta.subprogno, 10326 + set_user_ringbuf_callback_state); 10426 10327 break; 10427 10328 } 10428 10329 ··· 11310 11211 btf_id == special_kfunc_list[KF_bpf_refcount_acquire_impl]; 11311 11212 } 11312 11213 11313 - static bool is_callback_calling_kfunc(u32 btf_id) 11214 + static bool is_sync_callback_calling_kfunc(u32 btf_id) 11314 11215 { 11315 11216 return btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl]; 11316 11217 } ··· 12062 11963 return -EACCES; 12063 11964 } 12064 11965 11966 + /* Check the arguments */ 11967 + err = check_kfunc_args(env, &meta, insn_idx); 11968 + if (err < 0) 11969 + return err; 11970 + 11971 + if (meta.func_id == 
special_kfunc_list[KF_bpf_rbtree_add_impl]) { 11972 + err = push_callback_call(env, insn, insn_idx, meta.subprogno, 11973 + set_rbtree_add_callback_state); 11974 + if (err) { 11975 + verbose(env, "kfunc %s#%d failed callback verification\n", 11976 + func_name, meta.func_id); 11977 + return err; 11978 + } 11979 + } 11980 + 12065 11981 rcu_lock = is_kfunc_bpf_rcu_read_lock(&meta); 12066 11982 rcu_unlock = is_kfunc_bpf_rcu_read_unlock(&meta); 12067 11983 ··· 12112 11998 return -EINVAL; 12113 11999 } 12114 12000 12115 - /* Check the arguments */ 12116 - err = check_kfunc_args(env, &meta, insn_idx); 12117 - if (err < 0) 12118 - return err; 12119 12001 /* In case of release function, we get register number of refcounted 12120 12002 * PTR_TO_BTF_ID in bpf_kfunc_arg_meta, do the release now. 12121 12003 */ ··· 12140 12030 err = release_reference(env, release_ref_obj_id); 12141 12031 if (err) { 12142 12032 verbose(env, "kfunc %s#%d reference has not been acquired before\n", 12143 - func_name, meta.func_id); 12144 - return err; 12145 - } 12146 - } 12147 - 12148 - if (meta.func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) { 12149 - err = __check_func_call(env, insn, insn_idx_p, meta.subprogno, 12150 - set_rbtree_add_callback_state); 12151 - if (err) { 12152 - verbose(env, "kfunc %s#%d failed callback verification\n", 12153 12033 func_name, meta.func_id); 12154 12034 return err; 12155 12035 } ··· 15508 15408 return env->insn_aux_data[insn_idx].force_checkpoint; 15509 15409 } 15510 15410 15411 + static void mark_calls_callback(struct bpf_verifier_env *env, int idx) 15412 + { 15413 + env->insn_aux_data[idx].calls_callback = true; 15414 + } 15415 + 15416 + static bool calls_callback(struct bpf_verifier_env *env, int insn_idx) 15417 + { 15418 + return env->insn_aux_data[insn_idx].calls_callback; 15419 + } 15511 15420 15512 15421 enum { 15513 15422 DONE_EXPLORING = 0, ··· 15630 15521 * async state will be pushed for further exploration. 
15631 15522 */ 15632 15523 mark_prune_point(env, t); 15524 + /* For functions that invoke callbacks it is not known how many times 15525 + * callback would be called. Verifier models callback calling functions 15526 + * by repeatedly visiting callback bodies and returning to origin call 15527 + * instruction. 15528 + * In order to stop such iteration verifier needs to identify when a 15529 + * state identical some state from a previous iteration is reached. 15530 + * Check below forces creation of checkpoint before callback calling 15531 + * instruction to allow search for such identical states. 15532 + */ 15533 + if (is_sync_callback_calling_insn(insn)) { 15534 + mark_calls_callback(env, t); 15535 + mark_force_checkpoint(env, t); 15536 + mark_prune_point(env, t); 15537 + mark_jmp_point(env, t); 15538 + } 15633 15539 if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) { 15634 15540 struct bpf_kfunc_call_arg_meta meta; 15635 15541 ··· 17114 16990 } 17115 16991 goto skip_inf_loop_check; 17116 16992 } 16993 + if (calls_callback(env, insn_idx)) { 16994 + if (states_equal(env, &sl->state, cur, true)) 16995 + goto hit; 16996 + goto skip_inf_loop_check; 16997 + } 17117 16998 /* attempt to detect infinite loop to avoid unnecessary doomed work */ 17118 16999 if (states_maybe_looping(&sl->state, cur) && 17119 17000 states_equal(env, &sl->state, cur, false) && 17120 - !iter_active_depths_differ(&sl->state, cur)) { 17001 + !iter_active_depths_differ(&sl->state, cur) && 17002 + sl->state.callback_unroll_depth == cur->callback_unroll_depth) { 17121 17003 verbose_linfo(env, insn_idx, "; "); 17122 17004 verbose(env, "infinite loop detected at insn %d\n", insn_idx); 17123 17005 verbose(env, "cur state:");
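The verifier.c changes above model callback-calling helpers such as `bpf_loop()` by repeatedly revisiting the callback body and returning to the call site, stopping either when a checkpointed state repeats (`states_equal()` pruning) or when the iteration bound tracked in `callback_depth` is reached. A toy model of that convergence loop, with a single integer standing in for the abstract verifier state (no relation to the real verifier data structures):

```c
#include <assert.h>

/* Demo transfer function: saturates at 10, like a callback whose
 * abstract state converges after a few simulated iterations. */
static int simulate_callback(int state)
{
    return state < 10 ? state + 1 : state;
}

/* Toy model of the bounded-unroll idea: revisit the callback body
 * until the state stops changing (identical-state pruning) or the
 * depth bound is hit, and report how many iterations were simulated. */
static int simulate_bpf_loop(int state, int max_depth, int *out_iters)
{
    int iters = 0;

    while (iters < max_depth) {
        int next = simulate_callback(state);

        if (next == state)      /* identical state reached: prune */
            break;
        state = next;
        iters++;
    }
    *out_iters = iters;
    return state;
}
```

In the real verifier the "state" is the whole register/stack abstraction and widening of imprecise scalars (as in the `prepare_func_exit()` hunk) is what lets programs like the `bpf_loop(100, cb, &ctx, 0)` example converge.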
+2 -1
kernel/locking/lockdep.c
··· 3497 3497 size = chain_block_size(curr); 3498 3498 if (likely(size >= req)) { 3499 3499 del_chain_block(0, size, chain_block_next(curr)); 3500 - add_chain_block(curr + req, size - req); 3500 + if (size > req) 3501 + add_chain_block(curr + req, size - req); 3501 3502 return curr; 3502 3503 } 3503 3504 }
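The lockdep fix above stops a zero-size remainder from being returned to the free list when the found chain block exactly matches the request. The same split-only-if-remainder rule, sketched as a standalone allocator step (the output-parameter shape is illustrative, not lockdep's actual interface):

```c
#include <assert.h>
#include <stddef.h>

struct split_result {
    size_t rem_start;   /* start of the remainder block, if any */
    size_t rem_size;    /* 0 means no remainder was published */
};

/* Sketch of the fixed split: after taking `req` entries from a free
 * block of `size` starting at `start`, only publish a remainder when
 * it is non-empty, so a zero-size block never enters the free list. */
static size_t take_from_block(size_t start, size_t size, size_t req,
                              struct split_result *out)
{
    out->rem_size = 0;
    if (size > req) {           /* strictly greater: the fix */
        out->rem_start = start + req;
        out->rem_size = size - req;
    }
    return start;
}
```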
-6
lib/errname.c
··· 111 111 E(ENOSPC), 112 112 E(ENOSR), 113 113 E(ENOSTR), 114 - #ifdef ENOSYM 115 - E(ENOSYM), 116 - #endif 117 114 E(ENOSYS), 118 115 E(ENOTBLK), 119 116 E(ENOTCONN), ··· 141 144 #endif 142 145 E(EREMOTE), 143 146 E(EREMOTEIO), 144 - #ifdef EREMOTERELEASE 145 - E(EREMOTERELEASE), 146 - #endif 147 147 E(ERESTART), 148 148 E(ERFKILL), 149 149 E(EROFS),
+1 -1
lib/iov_iter.c
··· 409 409 void *kaddr = kmap_local_page(page); 410 410 size_t n = min(bytes, (size_t)PAGE_SIZE - offset); 411 411 412 - n = iterate_and_advance(i, bytes, kaddr, 412 + n = iterate_and_advance(i, n, kaddr + offset, 413 413 copy_to_user_iter_nofault, 414 414 memcpy_to_iter); 415 415 kunmap_local(kaddr);
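The iov_iter hunk corrects two things at once: the copy length must be clamped to what remains of the page after `offset` (the clamped `n`, not the raw `bytes`), and the source pointer must start at `kaddr + offset` rather than the page start. A sketch of just that clamping step, with the pointer arithmetic reduced to a plain offset:

```c
#include <assert.h>
#include <stddef.h>

#define DEMO_PAGE_SIZE 4096u    /* stand-in for PAGE_SIZE */

struct page_copy {
    size_t src_off;   /* where in the mapped page the copy starts */
    size_t len;       /* how much may be copied from this page */
};

/* Sketch of the two corrections in the iov_iter hunk: length clamped
 * to the page remainder after `offset`, and the source advanced by
 * `offset` (modeled here as src_off instead of kaddr + offset). */
static struct page_copy clamp_page_copy(size_t offset, size_t bytes)
{
    struct page_copy c;
    size_t remain = DEMO_PAGE_SIZE - offset;

    c.src_off = offset;
    c.len = bytes < remain ? bytes : remain;
    return c;
}
```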
+1 -1
mm/page-writeback.c
··· 3107 3107 */ 3108 3108 void folio_wait_stable(struct folio *folio) 3109 3109 { 3110 - if (folio_inode(folio)->i_sb->s_iflags & SB_I_STABLE_WRITES) 3110 + if (mapping_stable_writes(folio_mapping(folio))) 3111 3111 folio_wait_writeback(folio); 3112 3112 } 3113 3113 EXPORT_SYMBOL_GPL(folio_wait_stable);
+56 -1
net/core/dev.c
··· 10051 10051 } 10052 10052 EXPORT_SYMBOL(netif_tx_stop_all_queues); 10053 10053 10054 + static int netdev_do_alloc_pcpu_stats(struct net_device *dev) 10055 + { 10056 + void __percpu *v; 10057 + 10058 + /* Drivers implementing ndo_get_peer_dev must support tstat 10059 + * accounting, so that skb_do_redirect() can bump the dev's 10060 + * RX stats upon network namespace switch. 10061 + */ 10062 + if (dev->netdev_ops->ndo_get_peer_dev && 10063 + dev->pcpu_stat_type != NETDEV_PCPU_STAT_TSTATS) 10064 + return -EOPNOTSUPP; 10065 + 10066 + switch (dev->pcpu_stat_type) { 10067 + case NETDEV_PCPU_STAT_NONE: 10068 + return 0; 10069 + case NETDEV_PCPU_STAT_LSTATS: 10070 + v = dev->lstats = netdev_alloc_pcpu_stats(struct pcpu_lstats); 10071 + break; 10072 + case NETDEV_PCPU_STAT_TSTATS: 10073 + v = dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats); 10074 + break; 10075 + case NETDEV_PCPU_STAT_DSTATS: 10076 + v = dev->dstats = netdev_alloc_pcpu_stats(struct pcpu_dstats); 10077 + break; 10078 + default: 10079 + return -EINVAL; 10080 + } 10081 + 10082 + return v ? 
0 : -ENOMEM; 10083 + } 10084 + 10085 + static void netdev_do_free_pcpu_stats(struct net_device *dev) 10086 + { 10087 + switch (dev->pcpu_stat_type) { 10088 + case NETDEV_PCPU_STAT_NONE: 10089 + return; 10090 + case NETDEV_PCPU_STAT_LSTATS: 10091 + free_percpu(dev->lstats); 10092 + break; 10093 + case NETDEV_PCPU_STAT_TSTATS: 10094 + free_percpu(dev->tstats); 10095 + break; 10096 + case NETDEV_PCPU_STAT_DSTATS: 10097 + free_percpu(dev->dstats); 10098 + break; 10099 + } 10100 + } 10101 + 10054 10102 /** 10055 10103 * register_netdevice() - register a network device 10056 10104 * @dev: device to register ··· 10159 10111 goto err_uninit; 10160 10112 } 10161 10113 10114 + ret = netdev_do_alloc_pcpu_stats(dev); 10115 + if (ret) 10116 + goto err_uninit; 10117 + 10162 10118 ret = dev_index_reserve(net, dev->ifindex); 10163 10119 if (ret < 0) 10164 - goto err_uninit; 10120 + goto err_free_pcpu; 10165 10121 dev->ifindex = ret; 10166 10122 10167 10123 /* Transfer changeable features to wanted_features and enable ··· 10271 10219 call_netdevice_notifiers(NETDEV_PRE_UNINIT, dev); 10272 10220 err_ifindex_release: 10273 10221 dev_index_release(net, dev->ifindex); 10222 + err_free_pcpu: 10223 + netdev_do_free_pcpu_stats(dev); 10274 10224 err_uninit: 10275 10225 if (dev->netdev_ops->ndo_uninit) 10276 10226 dev->netdev_ops->ndo_uninit(dev); ··· 10525 10471 WARN_ON(rcu_access_pointer(dev->ip_ptr)); 10526 10472 WARN_ON(rcu_access_pointer(dev->ip6_ptr)); 10527 10473 10474 + netdev_do_free_pcpu_stats(dev); 10528 10475 if (dev->priv_destructor) 10529 10476 dev->priv_destructor(dev); 10530 10477 if (dev->needs_free_netdev)
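The register_netdevice() change inserts a new setup step and, with it, a new unwind label (`err_free_pcpu`) so later failures release the per-CPU stats in reverse order of acquisition. A minimal userspace sketch of that goto-unwind idiom (names and resources illustrative):

```c
#include <assert.h>
#include <stdlib.h>

static int a_freed, b_freed;

static int do_register(int fail_late)
{
	void *a, *b;

	a = malloc(16);
	if (!a)
		goto err;
	b = malloc(16);		/* the newly inserted setup step */
	if (!b)
		goto err_free_a;
	if (fail_late)
		goto err_free_b;	/* later step failed: unwind both */
	free(b);
	free(a);
	return 0;

err_free_b:				/* labels run in reverse order */
	free(b);
	b_freed = 1;
err_free_a:
	free(a);
	a_freed = 1;
err:
	return -1;
}
```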
+14 -5
net/core/filter.c
··· 81 81 #include <net/xdp.h> 82 82 #include <net/mptcp.h> 83 83 #include <net/netfilter/nf_conntrack_bpf.h> 84 + #include <net/netkit.h> 84 85 #include <linux/un.h> 85 86 86 87 #include "dev.h" ··· 2469 2468 DEFINE_PER_CPU(struct bpf_redirect_info, bpf_redirect_info); 2470 2469 EXPORT_PER_CPU_SYMBOL_GPL(bpf_redirect_info); 2471 2470 2471 + static struct net_device *skb_get_peer_dev(struct net_device *dev) 2472 + { 2473 + const struct net_device_ops *ops = dev->netdev_ops; 2474 + 2475 + if (likely(ops->ndo_get_peer_dev)) 2476 + return INDIRECT_CALL_1(ops->ndo_get_peer_dev, 2477 + netkit_peer_dev, dev); 2478 + return NULL; 2479 + } 2480 + 2472 2481 int skb_do_redirect(struct sk_buff *skb) 2473 2482 { 2474 2483 struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); ··· 2492 2481 if (unlikely(!dev)) 2493 2482 goto out_drop; 2494 2483 if (flags & BPF_F_PEER) { 2495 - const struct net_device_ops *ops = dev->netdev_ops; 2496 - 2497 - if (unlikely(!ops->ndo_get_peer_dev || 2498 - !skb_at_tc_ingress(skb))) 2484 + if (unlikely(!skb_at_tc_ingress(skb))) 2499 2485 goto out_drop; 2500 - dev = ops->ndo_get_peer_dev(dev); 2486 + dev = skb_get_peer_dev(dev); 2501 2487 if (unlikely(!dev || 2502 2488 !(dev->flags & IFF_UP) || 2503 2489 net_eq(net, dev_net(dev)))) 2504 2490 goto out_drop; 2505 2491 skb->dev = dev; 2492 + dev_sw_netstats_rx_add(dev, skb->len); 2506 2493 return -EAGAIN; 2507 2494 } 2508 2495 return flags & BPF_F_NEIGH ?
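skb_get_peer_dev() in the hunk above wraps the optional ndo_get_peer_dev callback so the caller is left with a single NULL check. The same shape with illustrative userspace types:

```c
#include <assert.h>
#include <stddef.h>

struct demo_dev;

struct demo_ops {
	struct demo_dev *(*get_peer)(struct demo_dev *dev);
};

struct demo_dev {
	const struct demo_ops *ops;
	struct demo_dev *peer;
};

static struct demo_dev *peer_cb(struct demo_dev *dev)
{
	return dev->peer;
}

static const struct demo_ops peer_ops = { .get_peer = peer_cb };

/* One place checks whether the callback exists; callers just test NULL. */
static struct demo_dev *get_peer_dev(struct demo_dev *dev)
{
	if (dev->ops && dev->ops->get_peer)
		return dev->ops->get_peer(dev);
	return NULL;		/* no peer: caller drops the packet */
}
```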
+1
net/ipv4/inet_diag.c
··· 1481 1481 module_init(inet_diag_init); 1482 1482 module_exit(inet_diag_exit); 1483 1483 MODULE_LICENSE("GPL"); 1484 + MODULE_DESCRIPTION("INET/INET6: socket monitoring via SOCK_DIAG"); 1484 1485 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, 2 /* AF_INET */); 1485 1486 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, 10 /* AF_INET6 */);
+1
net/ipv4/raw_diag.c
··· 257 257 module_init(raw_diag_init); 258 258 module_exit(raw_diag_exit); 259 259 MODULE_LICENSE("GPL"); 260 + MODULE_DESCRIPTION("RAW socket monitoring via SOCK_DIAG"); 260 261 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, 2-255 /* AF_INET - IPPROTO_RAW */); 261 262 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, 10-255 /* AF_INET6 - IPPROTO_RAW */);
+1 -1
net/ipv4/route.c
··· 780 780 goto reject_redirect; 781 781 } 782 782 783 - n = __ipv4_neigh_lookup(rt->dst.dev, new_gw); 783 + n = __ipv4_neigh_lookup(rt->dst.dev, (__force u32)new_gw); 784 784 if (!n) 785 785 n = neigh_create(&arp_tbl, &new_gw, rt->dst.dev); 786 786 if (!IS_ERR(n)) {
+1
net/ipv4/tcp_diag.c
··· 247 247 module_init(tcp_diag_init); 248 248 module_exit(tcp_diag_exit); 249 249 MODULE_LICENSE("GPL"); 250 + MODULE_DESCRIPTION("TCP socket monitoring via SOCK_DIAG"); 250 251 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, 2-6 /* AF_INET - IPPROTO_TCP */);
+1
net/ipv4/udp_diag.c
··· 296 296 module_init(udp_diag_init); 297 297 module_exit(udp_diag_exit); 298 298 MODULE_LICENSE("GPL"); 299 + MODULE_DESCRIPTION("UDP socket monitoring via SOCK_DIAG"); 299 300 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, 2-17 /* AF_INET - IPPROTO_UDP */); 300 301 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, 2-136 /* AF_INET - IPPROTO_UDPLITE */);
+1
net/mptcp/mptcp_diag.c
··· 245 245 module_init(mptcp_diag_init); 246 246 module_exit(mptcp_diag_exit); 247 247 MODULE_LICENSE("GPL"); 248 + MODULE_DESCRIPTION("MPTCP socket monitoring via SOCK_DIAG"); 248 249 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, 2-262 /* AF_INET - IPPROTO_MPTCP */);
+1
net/packet/diag.c
··· 262 262 module_init(packet_diag_init); 263 263 module_exit(packet_diag_exit); 264 264 MODULE_LICENSE("GPL"); 265 + MODULE_DESCRIPTION("PACKET socket monitoring via SOCK_DIAG"); 265 266 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, 17 /* AF_PACKET */);
+4 -3
net/rxrpc/conn_client.c
··· 73 73 static struct rxrpc_bundle *rxrpc_alloc_bundle(struct rxrpc_call *call, 74 74 gfp_t gfp) 75 75 { 76 + static atomic_t rxrpc_bundle_id; 76 77 struct rxrpc_bundle *bundle; 77 78 78 79 bundle = kzalloc(sizeof(*bundle), gfp); ··· 86 85 bundle->upgrade = test_bit(RXRPC_CALL_UPGRADE, &call->flags); 87 86 bundle->service_id = call->dest_srx.srx_service; 88 87 bundle->security_level = call->security_level; 88 + bundle->debug_id = atomic_inc_return(&rxrpc_bundle_id); 89 89 refcount_set(&bundle->ref, 1); 90 90 atomic_set(&bundle->active, 1); 91 91 INIT_LIST_HEAD(&bundle->waiting_calls); ··· 107 105 108 106 static void rxrpc_free_bundle(struct rxrpc_bundle *bundle) 109 107 { 110 - trace_rxrpc_bundle(bundle->debug_id, 1, rxrpc_bundle_free); 108 + trace_rxrpc_bundle(bundle->debug_id, refcount_read(&bundle->ref), 109 + rxrpc_bundle_free); 111 110 rxrpc_put_peer(bundle->peer, rxrpc_peer_put_bundle); 112 111 key_put(bundle->key); 113 112 kfree(bundle); ··· 242 239 */ 243 240 int rxrpc_look_up_bundle(struct rxrpc_call *call, gfp_t gfp) 244 241 { 245 - static atomic_t rxrpc_bundle_id; 246 242 struct rxrpc_bundle *bundle, *candidate; 247 243 struct rxrpc_local *local = call->local; 248 244 struct rb_node *p, **pp, *parent; ··· 308 306 } 309 307 310 308 _debug("new bundle"); 311 - candidate->debug_id = atomic_inc_return(&rxrpc_bundle_id); 312 309 rb_link_node(&candidate->local_node, parent, pp); 313 310 rb_insert_color(&candidate->local_node, &local->client_bundles); 314 311 call->bundle = rxrpc_get_bundle(candidate, rxrpc_bundle_get_client_call);
+29 -32
net/rxrpc/input.c
··· 643 643 clear_bit(i + RXRPC_CALL_RTT_PEND_SHIFT, &call->rtt_avail); 644 644 smp_mb(); /* Read data before setting avail bit */ 645 645 set_bit(i, &call->rtt_avail); 646 - if (type != rxrpc_rtt_rx_cancel) 647 - rxrpc_peer_add_rtt(call, type, i, acked_serial, ack_serial, 648 - sent_at, resp_time); 649 - else 650 - trace_rxrpc_rtt_rx(call, rxrpc_rtt_rx_cancel, i, 651 - orig_serial, acked_serial, 0, 0); 646 + rxrpc_peer_add_rtt(call, type, i, acked_serial, ack_serial, 647 + sent_at, resp_time); 652 648 matched = true; 653 649 } 654 650 ··· 797 801 summary.ack_reason, nr_acks); 798 802 rxrpc_inc_stat(call->rxnet, stat_rx_acks[ack.reason]); 799 803 800 - switch (ack.reason) { 801 - case RXRPC_ACK_PING_RESPONSE: 802 - rxrpc_complete_rtt_probe(call, skb->tstamp, acked_serial, ack_serial, 803 - rxrpc_rtt_rx_ping_response); 804 - break; 805 - case RXRPC_ACK_REQUESTED: 806 - rxrpc_complete_rtt_probe(call, skb->tstamp, acked_serial, ack_serial, 807 - rxrpc_rtt_rx_requested_ack); 808 - break; 809 - default: 810 - if (acked_serial != 0) 804 + if (acked_serial != 0) { 805 + switch (ack.reason) { 806 + case RXRPC_ACK_PING_RESPONSE: 811 807 rxrpc_complete_rtt_probe(call, skb->tstamp, acked_serial, ack_serial, 812 - rxrpc_rtt_rx_cancel); 813 - break; 814 - } 815 - 816 - if (ack.reason == RXRPC_ACK_PING) { 817 - rxrpc_send_ACK(call, RXRPC_ACK_PING_RESPONSE, ack_serial, 818 - rxrpc_propose_ack_respond_to_ping); 819 - } else if (sp->hdr.flags & RXRPC_REQUEST_ACK) { 820 - rxrpc_send_ACK(call, RXRPC_ACK_REQUESTED, ack_serial, 821 - rxrpc_propose_ack_respond_to_ack); 808 + rxrpc_rtt_rx_ping_response); 809 + break; 810 + case RXRPC_ACK_REQUESTED: 811 + rxrpc_complete_rtt_probe(call, skb->tstamp, acked_serial, ack_serial, 812 + rxrpc_rtt_rx_requested_ack); 813 + break; 814 + default: 815 + rxrpc_complete_rtt_probe(call, skb->tstamp, acked_serial, ack_serial, 816 + rxrpc_rtt_rx_other_ack); 817 + break; 818 + } 822 819 } 823 820 824 821 /* If we get an EXCEEDS_WINDOW ACK from the server, 
it probably ··· 824 835 rxrpc_is_client_call(call)) { 825 836 rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED, 826 837 0, -ENETRESET); 827 - return; 838 + goto send_response; 828 839 } 829 840 830 841 /* If we get an OUT_OF_SEQUENCE ACK from the server, that can also ··· 838 849 rxrpc_is_client_call(call)) { 839 850 rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED, 840 851 0, -ENETRESET); 841 - return; 852 + goto send_response; 842 853 } 843 854 844 855 /* Discard any out-of-order or duplicate ACKs (outside lock). */ ··· 846 857 trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial, 847 858 first_soft_ack, call->acks_first_seq, 848 859 prev_pkt, call->acks_prev_seq); 849 - return; 860 + goto send_response; 850 861 } 851 862 852 863 info.rxMTU = 0; ··· 886 897 case RXRPC_CALL_SERVER_AWAIT_ACK: 887 898 break; 888 899 default: 889 - return; 900 + goto send_response; 890 901 } 891 902 892 903 if (before(hard_ack, call->acks_hard_ack) || ··· 898 909 if (after(hard_ack, call->acks_hard_ack)) { 899 910 if (rxrpc_rotate_tx_window(call, hard_ack, &summary)) { 900 911 rxrpc_end_tx_phase(call, false, rxrpc_eproto_unexpected_ack); 901 - return; 912 + goto send_response; 902 913 } 903 914 } 904 915 ··· 916 927 rxrpc_propose_ack_ping_for_lost_reply); 917 928 918 929 rxrpc_congestion_management(call, skb, &summary, acked_serial); 930 + 931 + send_response: 932 + if (ack.reason == RXRPC_ACK_PING) 933 + rxrpc_send_ACK(call, RXRPC_ACK_PING_RESPONSE, ack_serial, 934 + rxrpc_propose_ack_respond_to_ping); 935 + else if (sp->hdr.flags & RXRPC_REQUEST_ACK) 936 + rxrpc_send_ACK(call, RXRPC_ACK_REQUESTED, ack_serial, 937 + rxrpc_propose_ack_respond_to_ack); 919 938 } 920 939 921 940 /*
+1
net/sctp/diag.c
··· 527 527 module_init(sctp_diag_init); 528 528 module_exit(sctp_diag_exit); 529 529 MODULE_LICENSE("GPL"); 530 + MODULE_DESCRIPTION("SCTP socket monitoring via SOCK_DIAG"); 530 531 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, 2-132);
+6 -2
net/smc/af_smc.c
··· 598 598 struct smc_llc_qentry *qentry; 599 599 int rc; 600 600 601 - /* receive CONFIRM LINK request from server over RoCE fabric */ 602 - qentry = smc_llc_wait(link->lgr, NULL, SMC_LLC_WAIT_TIME, 601 + /* Receive CONFIRM LINK request from server over RoCE fabric. 602 + * Increasing the client's timeout by twice as much as the server's 603 + * timeout by default can temporarily avoid decline messages of 604 + * both sides crossing or colliding 605 + */ 606 + qentry = smc_llc_wait(link->lgr, NULL, 2 * SMC_LLC_WAIT_TIME, 603 607 SMC_LLC_CONFIRM_LINK); 604 608 if (!qentry) { 605 609 struct smc_clc_msg_decline dclc;
+1
net/smc/smc_diag.c
··· 268 268 module_init(smc_diag_init); 269 269 module_exit(smc_diag_exit); 270 270 MODULE_LICENSE("GPL"); 271 + MODULE_DESCRIPTION("SMC socket monitoring via SOCK_DIAG"); 271 272 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, 43 /* AF_SMC */); 272 273 MODULE_ALIAS_GENL_FAMILY(SMCR_GENL_FAMILY_NAME);
+1
net/tipc/diag.c
··· 113 113 module_exit(tipc_diag_exit); 114 114 115 115 MODULE_LICENSE("Dual BSD/GPL"); 116 + MODULE_DESCRIPTION("TIPC socket monitoring via SOCK_DIAG"); 116 117 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, AF_TIPC);
+3
net/tls/tls_sw.c
··· 1232 1232 lock_sock(sk); 1233 1233 1234 1234 retry: 1235 + /* same checks as in tls_sw_push_pending_record() */ 1235 1236 rec = ctx->open_rec; 1236 1237 if (!rec) 1237 1238 goto unlock; 1238 1239 1239 1240 msg_pl = &rec->msg_plaintext; 1241 + if (msg_pl->sg.size == 0) 1242 + goto unlock; 1240 1243 1241 1244 /* Check the BPF advisor and perform transmission. */ 1242 1245 ret = bpf_exec_tx_verdict(msg_pl, sk, false, TLS_RECORD_TYPE_DATA,
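The tls_sw hunk bails out when the open record holds no plaintext, mirroring the checks in tls_sw_push_pending_record(). A guard sketch with an illustrative record type:

```c
#include <assert.h>
#include <stddef.h>

struct demo_rec {
	size_t sg_size;		/* bytes of queued plaintext */
};

/* Same checks as the patched path: no record, or an empty one, means
 * there is nothing to transmit. */
static int should_push(const struct demo_rec *rec)
{
	if (!rec)
		return 0;	/* no open record */
	if (rec->sg_size == 0)
		return 0;	/* empty: pushing would emit a 0-byte record */
	return 1;
}
```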
+1
net/unix/diag.c
··· 339 339 module_init(unix_diag_init); 340 340 module_exit(unix_diag_exit); 341 341 MODULE_LICENSE("GPL"); 342 + MODULE_DESCRIPTION("UNIX socket monitoring via SOCK_DIAG"); 342 343 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, 1 /* AF_LOCAL */);
+1
net/vmw_vsock/diag.c
··· 174 174 module_init(vsock_diag_init); 175 175 module_exit(vsock_diag_exit); 176 176 MODULE_LICENSE("GPL"); 177 + MODULE_DESCRIPTION("VMware Virtual Sockets monitoring via SOCK_DIAG"); 177 178 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, 178 179 40 /* AF_VSOCK */);
+1
net/xdp/xsk_diag.c
··· 211 211 module_init(xsk_diag_init); 212 212 module_exit(xsk_diag_exit); 213 213 MODULE_LICENSE("GPL"); 214 + MODULE_DESCRIPTION("XDP socket monitoring via SOCK_DIAG"); 214 215 MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, AF_XDP);
+1 -2
scripts/checkstack.pl
··· 97 97 # 11160: a7 fb ff 60 aghi %r15,-160 98 98 # or 99 99 # 100092: e3 f0 ff c8 ff 71 lay %r15,-56(%r15) 100 - $re = qr/.*(?:lay|ag?hi).*\%r15,-(([0-9]{2}|[3-9])[0-9]{2}) 101 - (?:\(\%r15\))?$/ox; 100 + $re = qr/.*(?:lay|ag?hi).*\%r15,-([0-9]+)(?:\(\%r15\))?$/o; 102 101 } elsif ($arch eq 'sparc' || $arch eq 'sparc64') { 103 102 # f0019d10: 9d e3 bf 90 save %sp, -112, %sp 104 103 $re = qr/.*save.*%sp, -(([0-9]{2}|[3-9])[0-9]{2}), %sp/o;
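The checkstack.pl change drops the old "at least three digits" constraint so small s390 stack adjustments (e.g. -56 in the `lay` example) are counted too. A POSIX-regex sketch of the relaxed pattern; the helper name is illustrative:

```c
#include <assert.h>
#include <regex.h>
#include <stdlib.h>

/* Extract the stack adjustment from an s390 disassembly line,
 * accepting any number of digits after "%r15,-". */
static long s390_stack_adj(const char *line)
{
	regex_t re;
	regmatch_t m[2];
	long val = -1;

	if (regcomp(&re, "%r15,-([0-9]+)", REG_EXTENDED))
		return -1;
	if (regexec(&re, line, 2, m, 0) == 0)
		val = strtol(line + m[1].rm_so, NULL, 10);
	regfree(&re);
	return val;
}
```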
-2
tools/arch/parisc/include/uapi/asm/errno.h
··· 75 75 76 76 /* We now return you to your regularly scheduled HPUX. */ 77 77 78 - #define ENOSYM 215 /* symbol does not exist in executable */ 79 78 #define ENOTSOCK 216 /* Socket operation on non-socket */ 80 79 #define EDESTADDRREQ 217 /* Destination address required */ 81 80 #define EMSGSIZE 218 /* Message too long */ ··· 100 101 #define ETIMEDOUT 238 /* Connection timed out */ 101 102 #define ECONNREFUSED 239 /* Connection refused */ 102 103 #define EREFUSED ECONNREFUSED /* for HP's NFS apparently */ 103 - #define EREMOTERELEASE 240 /* Remote peer released connection */ 104 104 #define EHOSTDOWN 241 /* Host is down */ 105 105 #define EHOSTUNREACH 242 /* No route to host */ 106 106
+12 -8
tools/hv/hv_kvp_daemon.c
··· 1421 1421 if (error) 1422 1422 goto setval_error; 1423 1423 1424 - if (new_val->addr_family == ADDR_FAMILY_IPV6) { 1424 + if (new_val->addr_family & ADDR_FAMILY_IPV6) { 1425 1425 error = fprintf(nmfile, "\n[ipv6]\n"); 1426 1426 if (error < 0) 1427 1427 goto setval_error; ··· 1455 1455 if (error < 0) 1456 1456 goto setval_error; 1457 1457 1458 - error = fprintf(nmfile, "gateway=%s\n", (char *)new_val->gate_way); 1459 - if (error < 0) 1460 - goto setval_error; 1458 + /* we do not want ipv4 addresses in ipv6 section and vice versa */ 1459 + if (is_ipv6 != is_ipv4((char *)new_val->gate_way)) { 1460 + error = fprintf(nmfile, "gateway=%s\n", (char *)new_val->gate_way); 1461 + if (error < 0) 1462 + goto setval_error; 1463 + } 1461 1464 1462 - error = fprintf(nmfile, "dns=%s\n", (char *)new_val->dns_addr); 1463 - if (error < 0) 1464 - goto setval_error; 1465 - 1465 + if (is_ipv6 != is_ipv4((char *)new_val->dns_addr)) { 1466 + error = fprintf(nmfile, "dns=%s\n", (char *)new_val->dns_addr); 1467 + if (error < 0) 1468 + goto setval_error; 1469 + } 1466 1470 fclose(nmfile); 1467 1471 fclose(ifcfg_file); 1468 1472
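The daemon fix tests addr_family as a bitmask with `&` rather than `==`, since an interface can carry IPv4 and IPv6 at once. The flag values below are illustrative, not the daemon's:

```c
#include <assert.h>

#define DEMO_FAM_IPV4 0x01
#define DEMO_FAM_IPV6 0x02

/* Bitmask membership test: true whenever the IPv6 bit is set,
 * regardless of what other family bits accompany it. */
static int wants_ipv6_section(unsigned int family)
{
	return (family & DEMO_FAM_IPV6) != 0;
}
```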
+2 -2
tools/hv/hv_set_ifconfig.sh
··· 53 53 # or "manual" if no boot-time protocol should be used) 54 54 # 55 55 # address1=ipaddr1/plen 56 - # address=ipaddr2/plen 56 + # address2=ipaddr2/plen 57 57 # 58 58 # gateway=gateway1;gateway2 59 59 # ··· 61 61 # 62 62 # [ipv6] 63 63 # address1=ipaddr1/plen 64 - # address2=ipaddr1/plen 64 + # address2=ipaddr2/plen 65 65 # 66 66 # gateway=gateway1;gateway2 67 67 #
+1 -1
tools/net/ynl/Makefile.deps
··· 18 18 CFLAGS_ethtool:=$(call get_hdr_inc,_LINUX_ETHTOOL_NETLINK_H_,ethtool_netlink.h) 19 19 CFLAGS_handshake:=$(call get_hdr_inc,_LINUX_HANDSHAKE_H,handshake.h) 20 20 CFLAGS_netdev:=$(call get_hdr_inc,_LINUX_NETDEV_H,netdev.h) 21 - CFLAGS_nfsd:=$(call get_hdr_inc,_LINUX_NFSD_H,nfsd.h) 21 + CFLAGS_nfsd:=$(call get_hdr_inc,_LINUX_NFSD_NETLINK_H,nfsd_netlink.h)
+6
tools/net/ynl/ynl-gen-c.py
··· 1505 1505 cw.block_start(line=f"static const char * const {map_name}[] =") 1506 1506 for op_name, op in family.msgs.items(): 1507 1507 if op.rsp_value: 1508 + # Make sure we don't add duplicated entries, if multiple commands 1509 + # produce the same response in legacy families. 1510 + if family.rsp_by_value[op.rsp_value] != op: 1511 + cw.p(f'// skip "{op_name}", duplicate reply value') 1512 + continue 1513 + 1508 1514 if op.req_value == op.rsp_value: 1509 1515 cw.p(f'[{op.enum_name}] = "{op_name}",') 1510 1516 else:
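The generator now emits one name per reply value, skipping ops whose value was already taken, so the C array it produces has no duplicate initializers. The same first-entry-wins guard expressed directly in C (array size illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define NVALS 8

static const char *names[NVALS];

/* Record a name for a reply value; keep the first, skip duplicates.
 * Returns 1 if recorded, 0 if skipped, -1 if out of range. */
static int record_name(unsigned int rsp_value, const char *name)
{
	if (rsp_value >= NVALS)
		return -1;
	if (names[rsp_value])
		return 0;	/* duplicate reply value: skip */
	names[rsp_value] = name;
	return 1;
}
```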
+1 -1
tools/power/pm-graph/sleepgraph.py
··· 4151 4151 elif(re.match('Enabling non-boot CPUs .*', msg)): 4152 4152 # start of first cpu resume 4153 4153 cpu_start = ktime 4154 - elif(re.match('smpboot: CPU (?P<cpu>[0-9]*) is now offline', msg)) \ 4154 + elif(re.match('smpboot: CPU (?P<cpu>[0-9]*) is now offline', msg) \ 4155 4155 or re.match('psci: CPU(?P<cpu>[0-9]*) killed.*', msg)): 4156 4156 # end of a cpu suspend, start of the next 4157 4157 m = re.match('smpboot: CPU (?P<cpu>[0-9]*) is now offline', msg)
+1 -1
tools/testing/selftests/arm64/fp/za-fork.c
··· 85 85 */ 86 86 ret = open("/proc/sys/abi/sme_default_vector_length", O_RDONLY, 0); 87 87 if (ret >= 0) { 88 - ksft_test_result(fork_test(), "fork_test"); 88 + ksft_test_result(fork_test(), "fork_test\n"); 89 89 90 90 } else { 91 91 ksft_print_msg("SME not supported\n");
+189 -126
tools/testing/selftests/bpf/prog_tests/tc_redirect.c
··· 24 24 25 25 #include "test_progs.h" 26 26 #include "network_helpers.h" 27 + #include "netlink_helpers.h" 27 28 #include "test_tc_neigh_fib.skel.h" 28 29 #include "test_tc_neigh.skel.h" 29 30 #include "test_tc_peer.skel.h" ··· 111 110 } 112 111 } 113 112 113 + enum dev_mode { 114 + MODE_VETH, 115 + MODE_NETKIT, 116 + }; 117 + 114 118 struct netns_setup_result { 115 - int ifindex_veth_src; 116 - int ifindex_veth_src_fwd; 117 - int ifindex_veth_dst; 118 - int ifindex_veth_dst_fwd; 119 + enum dev_mode dev_mode; 120 + int ifindex_src; 121 + int ifindex_src_fwd; 122 + int ifindex_dst; 123 + int ifindex_dst_fwd; 119 124 }; 120 125 121 126 static int get_ifaddr(const char *name, char *ifaddr) ··· 144 137 return 0; 145 138 } 146 139 140 + static int create_netkit(int mode, char *prim, char *peer) 141 + { 142 + struct rtattr *linkinfo, *data, *peer_info; 143 + struct rtnl_handle rth = { .fd = -1 }; 144 + const char *type = "netkit"; 145 + struct { 146 + struct nlmsghdr n; 147 + struct ifinfomsg i; 148 + char buf[1024]; 149 + } req = {}; 150 + int err; 151 + 152 + err = rtnl_open(&rth, 0); 153 + if (!ASSERT_OK(err, "open_rtnetlink")) 154 + return err; 155 + 156 + memset(&req, 0, sizeof(req)); 157 + req.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg)); 158 + req.n.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_EXCL; 159 + req.n.nlmsg_type = RTM_NEWLINK; 160 + req.i.ifi_family = AF_UNSPEC; 161 + 162 + addattr_l(&req.n, sizeof(req), IFLA_IFNAME, prim, strlen(prim)); 163 + linkinfo = addattr_nest(&req.n, sizeof(req), IFLA_LINKINFO); 164 + addattr_l(&req.n, sizeof(req), IFLA_INFO_KIND, type, strlen(type)); 165 + data = addattr_nest(&req.n, sizeof(req), IFLA_INFO_DATA); 166 + addattr32(&req.n, sizeof(req), IFLA_NETKIT_MODE, mode); 167 + peer_info = addattr_nest(&req.n, sizeof(req), IFLA_NETKIT_PEER_INFO); 168 + req.n.nlmsg_len += sizeof(struct ifinfomsg); 169 + addattr_l(&req.n, sizeof(req), IFLA_IFNAME, peer, strlen(peer)); 170 + addattr_nest_end(&req.n, peer_info); 
171 + addattr_nest_end(&req.n, data); 172 + addattr_nest_end(&req.n, linkinfo); 173 + 174 + err = rtnl_talk(&rth, &req.n, NULL); 175 + ASSERT_OK(err, "talk_rtnetlink"); 176 + rtnl_close(&rth); 177 + return err; 178 + } 179 + 147 180 static int netns_setup_links_and_routes(struct netns_setup_result *result) 148 181 { 149 182 struct nstoken *nstoken = NULL; 150 - char veth_src_fwd_addr[IFADDR_STR_LEN+1] = {}; 183 + char src_fwd_addr[IFADDR_STR_LEN+1] = {}; 184 + int err; 151 185 152 - SYS(fail, "ip link add veth_src type veth peer name veth_src_fwd"); 153 - SYS(fail, "ip link add veth_dst type veth peer name veth_dst_fwd"); 186 + if (result->dev_mode == MODE_VETH) { 187 + SYS(fail, "ip link add src type veth peer name src_fwd"); 188 + SYS(fail, "ip link add dst type veth peer name dst_fwd"); 154 189 155 - SYS(fail, "ip link set veth_dst_fwd address " MAC_DST_FWD); 156 - SYS(fail, "ip link set veth_dst address " MAC_DST); 190 + SYS(fail, "ip link set dst_fwd address " MAC_DST_FWD); 191 + SYS(fail, "ip link set dst address " MAC_DST); 192 + } else if (result->dev_mode == MODE_NETKIT) { 193 + err = create_netkit(NETKIT_L3, "src", "src_fwd"); 194 + if (!ASSERT_OK(err, "create_ifindex_src")) 195 + goto fail; 196 + err = create_netkit(NETKIT_L3, "dst", "dst_fwd"); 197 + if (!ASSERT_OK(err, "create_ifindex_dst")) 198 + goto fail; 199 + } 157 200 158 - if (get_ifaddr("veth_src_fwd", veth_src_fwd_addr)) 201 + if (get_ifaddr("src_fwd", src_fwd_addr)) 159 202 goto fail; 160 203 161 - result->ifindex_veth_src = if_nametoindex("veth_src"); 162 - if (!ASSERT_GT(result->ifindex_veth_src, 0, "ifindex_veth_src")) 204 + result->ifindex_src = if_nametoindex("src"); 205 + if (!ASSERT_GT(result->ifindex_src, 0, "ifindex_src")) 163 206 goto fail; 164 207 165 - result->ifindex_veth_src_fwd = if_nametoindex("veth_src_fwd"); 166 - if (!ASSERT_GT(result->ifindex_veth_src_fwd, 0, "ifindex_veth_src_fwd")) 208 + result->ifindex_src_fwd = if_nametoindex("src_fwd"); 209 + if 
(!ASSERT_GT(result->ifindex_src_fwd, 0, "ifindex_src_fwd")) 167 210 goto fail; 168 211 169 - result->ifindex_veth_dst = if_nametoindex("veth_dst"); 170 - if (!ASSERT_GT(result->ifindex_veth_dst, 0, "ifindex_veth_dst")) 212 + result->ifindex_dst = if_nametoindex("dst"); 213 + if (!ASSERT_GT(result->ifindex_dst, 0, "ifindex_dst")) 171 214 goto fail; 172 215 173 - result->ifindex_veth_dst_fwd = if_nametoindex("veth_dst_fwd"); 174 - if (!ASSERT_GT(result->ifindex_veth_dst_fwd, 0, "ifindex_veth_dst_fwd")) 216 + result->ifindex_dst_fwd = if_nametoindex("dst_fwd"); 217 + if (!ASSERT_GT(result->ifindex_dst_fwd, 0, "ifindex_dst_fwd")) 175 218 goto fail; 176 219 177 - SYS(fail, "ip link set veth_src netns " NS_SRC); 178 - SYS(fail, "ip link set veth_src_fwd netns " NS_FWD); 179 - SYS(fail, "ip link set veth_dst_fwd netns " NS_FWD); 180 - SYS(fail, "ip link set veth_dst netns " NS_DST); 220 + SYS(fail, "ip link set src netns " NS_SRC); 221 + SYS(fail, "ip link set src_fwd netns " NS_FWD); 222 + SYS(fail, "ip link set dst_fwd netns " NS_FWD); 223 + SYS(fail, "ip link set dst netns " NS_DST); 181 224 182 225 /** setup in 'src' namespace */ 183 226 nstoken = open_netns(NS_SRC); 184 227 if (!ASSERT_OK_PTR(nstoken, "setns src")) 185 228 goto fail; 186 229 187 - SYS(fail, "ip addr add " IP4_SRC "/32 dev veth_src"); 188 - SYS(fail, "ip addr add " IP6_SRC "/128 dev veth_src nodad"); 189 - SYS(fail, "ip link set dev veth_src up"); 230 + SYS(fail, "ip addr add " IP4_SRC "/32 dev src"); 231 + SYS(fail, "ip addr add " IP6_SRC "/128 dev src nodad"); 232 + SYS(fail, "ip link set dev src up"); 190 233 191 - SYS(fail, "ip route add " IP4_DST "/32 dev veth_src scope global"); 192 - SYS(fail, "ip route add " IP4_NET "/16 dev veth_src scope global"); 193 - SYS(fail, "ip route add " IP6_DST "/128 dev veth_src scope global"); 234 + SYS(fail, "ip route add " IP4_DST "/32 dev src scope global"); 235 + SYS(fail, "ip route add " IP4_NET "/16 dev src scope global"); 236 + SYS(fail, "ip route add " 
IP6_DST "/128 dev src scope global"); 194 237 195 - SYS(fail, "ip neigh add " IP4_DST " dev veth_src lladdr %s", 196 - veth_src_fwd_addr); 197 - SYS(fail, "ip neigh add " IP6_DST " dev veth_src lladdr %s", 198 - veth_src_fwd_addr); 238 + if (result->dev_mode == MODE_VETH) { 239 + SYS(fail, "ip neigh add " IP4_DST " dev src lladdr %s", 240 + src_fwd_addr); 241 + SYS(fail, "ip neigh add " IP6_DST " dev src lladdr %s", 242 + src_fwd_addr); 243 + } 199 244 200 245 close_netns(nstoken); 201 246 ··· 260 201 * needs v4 one in order to start ARP probing. IP4_NET route is added 261 202 * to the endpoints so that the ARP processing will reply. 262 203 */ 263 - SYS(fail, "ip addr add " IP4_SLL "/32 dev veth_src_fwd"); 264 - SYS(fail, "ip addr add " IP4_DLL "/32 dev veth_dst_fwd"); 265 - SYS(fail, "ip link set dev veth_src_fwd up"); 266 - SYS(fail, "ip link set dev veth_dst_fwd up"); 204 + SYS(fail, "ip addr add " IP4_SLL "/32 dev src_fwd"); 205 + SYS(fail, "ip addr add " IP4_DLL "/32 dev dst_fwd"); 206 + SYS(fail, "ip link set dev src_fwd up"); 207 + SYS(fail, "ip link set dev dst_fwd up"); 267 208 268 - SYS(fail, "ip route add " IP4_SRC "/32 dev veth_src_fwd scope global"); 269 - SYS(fail, "ip route add " IP6_SRC "/128 dev veth_src_fwd scope global"); 270 - SYS(fail, "ip route add " IP4_DST "/32 dev veth_dst_fwd scope global"); 271 - SYS(fail, "ip route add " IP6_DST "/128 dev veth_dst_fwd scope global"); 209 + SYS(fail, "ip route add " IP4_SRC "/32 dev src_fwd scope global"); 210 + SYS(fail, "ip route add " IP6_SRC "/128 dev src_fwd scope global"); 211 + SYS(fail, "ip route add " IP4_DST "/32 dev dst_fwd scope global"); 212 + SYS(fail, "ip route add " IP6_DST "/128 dev dst_fwd scope global"); 272 213 273 214 close_netns(nstoken); 274 215 ··· 277 218 if (!ASSERT_OK_PTR(nstoken, "setns dst")) 278 219 goto fail; 279 220 280 - SYS(fail, "ip addr add " IP4_DST "/32 dev veth_dst"); 281 - SYS(fail, "ip addr add " IP6_DST "/128 dev veth_dst nodad"); 282 - SYS(fail, "ip link set dev 
veth_dst up"); 221 + SYS(fail, "ip addr add " IP4_DST "/32 dev dst"); 222 + SYS(fail, "ip addr add " IP6_DST "/128 dev dst nodad"); 223 + SYS(fail, "ip link set dev dst up"); 283 224 284 - SYS(fail, "ip route add " IP4_SRC "/32 dev veth_dst scope global"); 285 - SYS(fail, "ip route add " IP4_NET "/16 dev veth_dst scope global"); 286 - SYS(fail, "ip route add " IP6_SRC "/128 dev veth_dst scope global"); 225 + SYS(fail, "ip route add " IP4_SRC "/32 dev dst scope global"); 226 + SYS(fail, "ip route add " IP4_NET "/16 dev dst scope global"); 227 + SYS(fail, "ip route add " IP6_SRC "/128 dev dst scope global"); 287 228 288 - SYS(fail, "ip neigh add " IP4_SRC " dev veth_dst lladdr " MAC_DST_FWD); 289 - SYS(fail, "ip neigh add " IP6_SRC " dev veth_dst lladdr " MAC_DST_FWD); 229 + if (result->dev_mode == MODE_VETH) { 230 + SYS(fail, "ip neigh add " IP4_SRC " dev dst lladdr " MAC_DST_FWD); 231 + SYS(fail, "ip neigh add " IP6_SRC " dev dst lladdr " MAC_DST_FWD); 232 + } 290 233 291 234 close_netns(nstoken); 292 235 ··· 354 293 const struct bpf_program *chk_prog, 355 294 const struct netns_setup_result *setup_result) 356 295 { 357 - LIBBPF_OPTS(bpf_tc_hook, qdisc_veth_src_fwd); 358 - LIBBPF_OPTS(bpf_tc_hook, qdisc_veth_dst_fwd); 296 + LIBBPF_OPTS(bpf_tc_hook, qdisc_src_fwd); 297 + LIBBPF_OPTS(bpf_tc_hook, qdisc_dst_fwd); 359 298 int err; 360 299 361 - /* tc qdisc add dev veth_src_fwd clsact */ 362 - QDISC_CLSACT_CREATE(&qdisc_veth_src_fwd, setup_result->ifindex_veth_src_fwd); 363 - /* tc filter add dev veth_src_fwd ingress bpf da src_prog */ 364 - XGRESS_FILTER_ADD(&qdisc_veth_src_fwd, BPF_TC_INGRESS, src_prog, 0); 365 - /* tc filter add dev veth_src_fwd egress bpf da chk_prog */ 366 - XGRESS_FILTER_ADD(&qdisc_veth_src_fwd, BPF_TC_EGRESS, chk_prog, 0); 300 + /* tc qdisc add dev src_fwd clsact */ 301 + QDISC_CLSACT_CREATE(&qdisc_src_fwd, setup_result->ifindex_src_fwd); 302 + /* tc filter add dev src_fwd ingress bpf da src_prog */ 303 + XGRESS_FILTER_ADD(&qdisc_src_fwd, 
BPF_TC_INGRESS, src_prog, 0); 304 + /* tc filter add dev src_fwd egress bpf da chk_prog */ 305 + XGRESS_FILTER_ADD(&qdisc_src_fwd, BPF_TC_EGRESS, chk_prog, 0); 367 306 368 - /* tc qdisc add dev veth_dst_fwd clsact */ 369 - QDISC_CLSACT_CREATE(&qdisc_veth_dst_fwd, setup_result->ifindex_veth_dst_fwd); 370 - /* tc filter add dev veth_dst_fwd ingress bpf da dst_prog */ 371 - XGRESS_FILTER_ADD(&qdisc_veth_dst_fwd, BPF_TC_INGRESS, dst_prog, 0); 372 - /* tc filter add dev veth_dst_fwd egress bpf da chk_prog */ 373 - XGRESS_FILTER_ADD(&qdisc_veth_dst_fwd, BPF_TC_EGRESS, chk_prog, 0); 307 + /* tc qdisc add dev dst_fwd clsact */ 308 + QDISC_CLSACT_CREATE(&qdisc_dst_fwd, setup_result->ifindex_dst_fwd); 309 + /* tc filter add dev dst_fwd ingress bpf da dst_prog */ 310 + XGRESS_FILTER_ADD(&qdisc_dst_fwd, BPF_TC_INGRESS, dst_prog, 0); 311 + /* tc filter add dev dst_fwd egress bpf da chk_prog */ 312 + XGRESS_FILTER_ADD(&qdisc_dst_fwd, BPF_TC_EGRESS, chk_prog, 0); 374 313 375 314 return 0; 376 315 fail: ··· 600 539 static int netns_load_dtime_bpf(struct test_tc_dtime *skel, 601 540 const struct netns_setup_result *setup_result) 602 541 { 603 - LIBBPF_OPTS(bpf_tc_hook, qdisc_veth_src_fwd); 604 - LIBBPF_OPTS(bpf_tc_hook, qdisc_veth_dst_fwd); 605 - LIBBPF_OPTS(bpf_tc_hook, qdisc_veth_src); 606 - LIBBPF_OPTS(bpf_tc_hook, qdisc_veth_dst); 542 + LIBBPF_OPTS(bpf_tc_hook, qdisc_src_fwd); 543 + LIBBPF_OPTS(bpf_tc_hook, qdisc_dst_fwd); 544 + LIBBPF_OPTS(bpf_tc_hook, qdisc_src); 545 + LIBBPF_OPTS(bpf_tc_hook, qdisc_dst); 607 546 struct nstoken *nstoken; 608 547 int err; 609 548 ··· 611 550 nstoken = open_netns(NS_SRC); 612 551 if (!ASSERT_OK_PTR(nstoken, "setns " NS_SRC)) 613 552 return -1; 614 - /* tc qdisc add dev veth_src clsact */ 615 - QDISC_CLSACT_CREATE(&qdisc_veth_src, setup_result->ifindex_veth_src); 616 - /* tc filter add dev veth_src ingress bpf da ingress_host */ 617 - XGRESS_FILTER_ADD(&qdisc_veth_src, BPF_TC_INGRESS, skel->progs.ingress_host, 0); 618 - /* tc filter add dev 
veth_src egress bpf da egress_host */ 619 - XGRESS_FILTER_ADD(&qdisc_veth_src, BPF_TC_EGRESS, skel->progs.egress_host, 0); 553 + /* tc qdisc add dev src clsact */ 554 + QDISC_CLSACT_CREATE(&qdisc_src, setup_result->ifindex_src); 555 + /* tc filter add dev src ingress bpf da ingress_host */ 556 + XGRESS_FILTER_ADD(&qdisc_src, BPF_TC_INGRESS, skel->progs.ingress_host, 0); 557 + /* tc filter add dev src egress bpf da egress_host */ 558 + XGRESS_FILTER_ADD(&qdisc_src, BPF_TC_EGRESS, skel->progs.egress_host, 0); 620 559 close_netns(nstoken); 621 560 622 561 /* setup ns_dst tc progs */ 623 562 nstoken = open_netns(NS_DST); 624 563 if (!ASSERT_OK_PTR(nstoken, "setns " NS_DST)) 625 564 return -1; 626 - /* tc qdisc add dev veth_dst clsact */ 627 - QDISC_CLSACT_CREATE(&qdisc_veth_dst, setup_result->ifindex_veth_dst); 628 - /* tc filter add dev veth_dst ingress bpf da ingress_host */ 629 - XGRESS_FILTER_ADD(&qdisc_veth_dst, BPF_TC_INGRESS, skel->progs.ingress_host, 0); 630 - /* tc filter add dev veth_dst egress bpf da egress_host */ 631 - XGRESS_FILTER_ADD(&qdisc_veth_dst, BPF_TC_EGRESS, skel->progs.egress_host, 0); 565 + /* tc qdisc add dev dst clsact */ 566 + QDISC_CLSACT_CREATE(&qdisc_dst, setup_result->ifindex_dst); 567 + /* tc filter add dev dst ingress bpf da ingress_host */ 568 + XGRESS_FILTER_ADD(&qdisc_dst, BPF_TC_INGRESS, skel->progs.ingress_host, 0); 569 + /* tc filter add dev dst egress bpf da egress_host */ 570 + XGRESS_FILTER_ADD(&qdisc_dst, BPF_TC_EGRESS, skel->progs.egress_host, 0); 632 571 close_netns(nstoken); 633 572 634 573 /* setup ns_fwd tc progs */ 635 574 nstoken = open_netns(NS_FWD); 636 575 if (!ASSERT_OK_PTR(nstoken, "setns " NS_FWD)) 637 576 return -1; 638 - /* tc qdisc add dev veth_dst_fwd clsact */ 639 - QDISC_CLSACT_CREATE(&qdisc_veth_dst_fwd, setup_result->ifindex_veth_dst_fwd); 640 - /* tc filter add dev veth_dst_fwd ingress prio 100 bpf da ingress_fwdns_prio100 */ 641 - XGRESS_FILTER_ADD(&qdisc_veth_dst_fwd, BPF_TC_INGRESS, 577 + /* tc qdisc 
add dev dst_fwd clsact */ 578 + QDISC_CLSACT_CREATE(&qdisc_dst_fwd, setup_result->ifindex_dst_fwd); 579 + /* tc filter add dev dst_fwd ingress prio 100 bpf da ingress_fwdns_prio100 */ 580 + XGRESS_FILTER_ADD(&qdisc_dst_fwd, BPF_TC_INGRESS, 642 581 skel->progs.ingress_fwdns_prio100, 100); 643 - /* tc filter add dev veth_dst_fwd ingress prio 101 bpf da ingress_fwdns_prio101 */ 644 - XGRESS_FILTER_ADD(&qdisc_veth_dst_fwd, BPF_TC_INGRESS, 582 + /* tc filter add dev dst_fwd ingress prio 101 bpf da ingress_fwdns_prio101 */ 583 + XGRESS_FILTER_ADD(&qdisc_dst_fwd, BPF_TC_INGRESS, 645 584 skel->progs.ingress_fwdns_prio101, 101); 646 - /* tc filter add dev veth_dst_fwd egress prio 100 bpf da egress_fwdns_prio100 */ 647 - XGRESS_FILTER_ADD(&qdisc_veth_dst_fwd, BPF_TC_EGRESS, 585 + /* tc filter add dev dst_fwd egress prio 100 bpf da egress_fwdns_prio100 */ 586 + XGRESS_FILTER_ADD(&qdisc_dst_fwd, BPF_TC_EGRESS, 648 587 skel->progs.egress_fwdns_prio100, 100); 649 - /* tc filter add dev veth_dst_fwd egress prio 101 bpf da egress_fwdns_prio101 */ 650 - XGRESS_FILTER_ADD(&qdisc_veth_dst_fwd, BPF_TC_EGRESS, 588 + /* tc filter add dev dst_fwd egress prio 101 bpf da egress_fwdns_prio101 */ 589 + XGRESS_FILTER_ADD(&qdisc_dst_fwd, BPF_TC_EGRESS, 651 590 skel->progs.egress_fwdns_prio101, 101); 652 591 653 - /* tc qdisc add dev veth_src_fwd clsact */ 654 - QDISC_CLSACT_CREATE(&qdisc_veth_src_fwd, setup_result->ifindex_veth_src_fwd); 655 - /* tc filter add dev veth_src_fwd ingress prio 100 bpf da ingress_fwdns_prio100 */ 656 - XGRESS_FILTER_ADD(&qdisc_veth_src_fwd, BPF_TC_INGRESS, 592 + /* tc qdisc add dev src_fwd clsact */ 593 + QDISC_CLSACT_CREATE(&qdisc_src_fwd, setup_result->ifindex_src_fwd); 594 + /* tc filter add dev src_fwd ingress prio 100 bpf da ingress_fwdns_prio100 */ 595 + XGRESS_FILTER_ADD(&qdisc_src_fwd, BPF_TC_INGRESS, 657 596 skel->progs.ingress_fwdns_prio100, 100); 658 - /* tc filter add dev veth_src_fwd ingress prio 101 bpf da ingress_fwdns_prio101 */ 659 - 
XGRESS_FILTER_ADD(&qdisc_veth_src_fwd, BPF_TC_INGRESS, 597 + /* tc filter add dev src_fwd ingress prio 101 bpf da ingress_fwdns_prio101 */ 598 + XGRESS_FILTER_ADD(&qdisc_src_fwd, BPF_TC_INGRESS, 660 599 skel->progs.ingress_fwdns_prio101, 101); 661 - /* tc filter add dev veth_src_fwd egress prio 100 bpf da egress_fwdns_prio100 */ 662 - XGRESS_FILTER_ADD(&qdisc_veth_src_fwd, BPF_TC_EGRESS, 600 + /* tc filter add dev src_fwd egress prio 100 bpf da egress_fwdns_prio100 */ 601 + XGRESS_FILTER_ADD(&qdisc_src_fwd, BPF_TC_EGRESS, 663 602 skel->progs.egress_fwdns_prio100, 100); 664 - /* tc filter add dev veth_src_fwd egress prio 101 bpf da egress_fwdns_prio101 */ 665 - XGRESS_FILTER_ADD(&qdisc_veth_src_fwd, BPF_TC_EGRESS, 603 + /* tc filter add dev src_fwd egress prio 101 bpf da egress_fwdns_prio101 */ 604 + XGRESS_FILTER_ADD(&qdisc_src_fwd, BPF_TC_EGRESS, 666 605 skel->progs.egress_fwdns_prio101, 101); 667 606 close_netns(nstoken); 668 607 return 0; ··· 838 777 if (!ASSERT_OK_PTR(skel, "test_tc_dtime__open")) 839 778 return; 840 779 841 - skel->rodata->IFINDEX_SRC = setup_result->ifindex_veth_src_fwd; 842 - skel->rodata->IFINDEX_DST = setup_result->ifindex_veth_dst_fwd; 780 + skel->rodata->IFINDEX_SRC = setup_result->ifindex_src_fwd; 781 + skel->rodata->IFINDEX_DST = setup_result->ifindex_dst_fwd; 843 782 844 783 err = test_tc_dtime__load(skel); 845 784 if (!ASSERT_OK(err, "test_tc_dtime__load")) ··· 929 868 if (!ASSERT_OK_PTR(skel, "test_tc_neigh__open")) 930 869 goto done; 931 870 932 - skel->rodata->IFINDEX_SRC = setup_result->ifindex_veth_src_fwd; 933 - skel->rodata->IFINDEX_DST = setup_result->ifindex_veth_dst_fwd; 871 + skel->rodata->IFINDEX_SRC = setup_result->ifindex_src_fwd; 872 + skel->rodata->IFINDEX_DST = setup_result->ifindex_dst_fwd; 934 873 935 874 err = test_tc_neigh__load(skel); 936 875 if (!ASSERT_OK(err, "test_tc_neigh__load")) ··· 965 904 if (!ASSERT_OK_PTR(skel, "test_tc_peer__open")) 966 905 goto done; 967 906 968 - skel->rodata->IFINDEX_SRC = 
setup_result->ifindex_veth_src_fwd; 969 - skel->rodata->IFINDEX_DST = setup_result->ifindex_veth_dst_fwd; 907 + skel->rodata->IFINDEX_SRC = setup_result->ifindex_src_fwd; 908 + skel->rodata->IFINDEX_DST = setup_result->ifindex_dst_fwd; 970 909 971 910 err = test_tc_peer__load(skel); 972 911 if (!ASSERT_OK(err, "test_tc_peer__load")) ··· 1057 996 static void test_tc_redirect_peer_l3(struct netns_setup_result *setup_result) 1058 997 { 1059 998 LIBBPF_OPTS(bpf_tc_hook, qdisc_tun_fwd); 1060 - LIBBPF_OPTS(bpf_tc_hook, qdisc_veth_dst_fwd); 999 + LIBBPF_OPTS(bpf_tc_hook, qdisc_dst_fwd); 1061 1000 struct test_tc_peer *skel = NULL; 1062 1001 struct nstoken *nstoken = NULL; 1063 1002 int err; ··· 1106 1045 goto fail; 1107 1046 1108 1047 skel->rodata->IFINDEX_SRC = ifindex; 1109 - skel->rodata->IFINDEX_DST = setup_result->ifindex_veth_dst_fwd; 1048 + skel->rodata->IFINDEX_DST = setup_result->ifindex_dst_fwd; 1110 1049 1111 1050 err = test_tc_peer__load(skel); 1112 1051 if (!ASSERT_OK(err, "test_tc_peer__load")) ··· 1114 1053 1115 1054 /* Load "tc_src_l3" to the tun_fwd interface to redirect packets 1116 1055 * towards dst, and "tc_dst" to redirect packets 1117 - * and "tc_chk" on veth_dst_fwd to drop non-redirected packets. 1056 + * and "tc_chk" on dst_fwd to drop non-redirected packets. 
1118 1057 */ 1119 1058 /* tc qdisc add dev tun_fwd clsact */ 1120 1059 QDISC_CLSACT_CREATE(&qdisc_tun_fwd, ifindex); 1121 1060 /* tc filter add dev tun_fwd ingress bpf da tc_src_l3 */ 1122 1061 XGRESS_FILTER_ADD(&qdisc_tun_fwd, BPF_TC_INGRESS, skel->progs.tc_src_l3, 0); 1123 1062 1124 - /* tc qdisc add dev veth_dst_fwd clsact */ 1125 - QDISC_CLSACT_CREATE(&qdisc_veth_dst_fwd, setup_result->ifindex_veth_dst_fwd); 1126 - /* tc filter add dev veth_dst_fwd ingress bpf da tc_dst_l3 */ 1127 - XGRESS_FILTER_ADD(&qdisc_veth_dst_fwd, BPF_TC_INGRESS, skel->progs.tc_dst_l3, 0); 1128 - /* tc filter add dev veth_dst_fwd egress bpf da tc_chk */ 1129 - XGRESS_FILTER_ADD(&qdisc_veth_dst_fwd, BPF_TC_EGRESS, skel->progs.tc_chk, 0); 1063 + /* tc qdisc add dev dst_fwd clsact */ 1064 + QDISC_CLSACT_CREATE(&qdisc_dst_fwd, setup_result->ifindex_dst_fwd); 1065 + /* tc filter add dev dst_fwd ingress bpf da tc_dst_l3 */ 1066 + XGRESS_FILTER_ADD(&qdisc_dst_fwd, BPF_TC_INGRESS, skel->progs.tc_dst_l3, 0); 1067 + /* tc filter add dev dst_fwd egress bpf da tc_chk */ 1068 + XGRESS_FILTER_ADD(&qdisc_dst_fwd, BPF_TC_EGRESS, skel->progs.tc_chk, 0); 1130 1069 1131 1070 /* Setup route and neigh tables */ 1132 1071 SYS(fail, "ip -netns " NS_SRC " addr add dev tun_src " IP4_TUN_SRC "/24"); ··· 1135 1074 SYS(fail, "ip -netns " NS_SRC " addr add dev tun_src " IP6_TUN_SRC "/64 nodad"); 1136 1075 SYS(fail, "ip -netns " NS_FWD " addr add dev tun_fwd " IP6_TUN_FWD "/64 nodad"); 1137 1076 1138 - SYS(fail, "ip -netns " NS_SRC " route del " IP4_DST "/32 dev veth_src scope global"); 1077 + SYS(fail, "ip -netns " NS_SRC " route del " IP4_DST "/32 dev src scope global"); 1139 1078 SYS(fail, "ip -netns " NS_SRC " route add " IP4_DST "/32 via " IP4_TUN_FWD 1140 1079 " dev tun_src scope global"); 1141 - SYS(fail, "ip -netns " NS_DST " route add " IP4_TUN_SRC "/32 dev veth_dst scope global"); 1142 - SYS(fail, "ip -netns " NS_SRC " route del " IP6_DST "/128 dev veth_src scope global"); 1080 + SYS(fail, "ip -netns " 
NS_DST " route add " IP4_TUN_SRC "/32 dev dst scope global"); 1081 + SYS(fail, "ip -netns " NS_SRC " route del " IP6_DST "/128 dev src scope global"); 1143 1082 SYS(fail, "ip -netns " NS_SRC " route add " IP6_DST "/128 via " IP6_TUN_FWD 1144 1083 " dev tun_src scope global"); 1145 - SYS(fail, "ip -netns " NS_DST " route add " IP6_TUN_SRC "/128 dev veth_dst scope global"); 1084 + SYS(fail, "ip -netns " NS_DST " route add " IP6_TUN_SRC "/128 dev dst scope global"); 1146 1085 1147 - SYS(fail, "ip -netns " NS_DST " neigh add " IP4_TUN_SRC " dev veth_dst lladdr " MAC_DST_FWD); 1148 - SYS(fail, "ip -netns " NS_DST " neigh add " IP6_TUN_SRC " dev veth_dst lladdr " MAC_DST_FWD); 1086 + SYS(fail, "ip -netns " NS_DST " neigh add " IP4_TUN_SRC " dev dst lladdr " MAC_DST_FWD); 1087 + SYS(fail, "ip -netns " NS_DST " neigh add " IP6_TUN_SRC " dev dst lladdr " MAC_DST_FWD); 1149 1088 1150 1089 if (!ASSERT_OK(set_forwarding(false), "disable forwarding")) 1151 1090 goto fail; ··· 1167 1106 close_netns(nstoken); 1168 1107 } 1169 1108 1170 - #define RUN_TEST(name) \ 1109 + #define RUN_TEST(name, mode) \ 1171 1110 ({ \ 1172 - struct netns_setup_result setup_result; \ 1111 + struct netns_setup_result setup_result = { .dev_mode = mode, }; \ 1173 1112 if (test__start_subtest(#name)) \ 1174 1113 if (ASSERT_OK(netns_setup_namespaces("add"), "setup namespaces")) { \ 1175 1114 if (ASSERT_OK(netns_setup_links_and_routes(&setup_result), \ ··· 1183 1122 { 1184 1123 netns_setup_namespaces_nofail("delete"); 1185 1124 1186 - RUN_TEST(tc_redirect_peer); 1187 - RUN_TEST(tc_redirect_peer_l3); 1188 - RUN_TEST(tc_redirect_neigh); 1189 - RUN_TEST(tc_redirect_neigh_fib); 1190 - RUN_TEST(tc_redirect_dtime); 1125 + RUN_TEST(tc_redirect_peer, MODE_VETH); 1126 + RUN_TEST(tc_redirect_peer, MODE_NETKIT); 1127 + RUN_TEST(tc_redirect_peer_l3, MODE_VETH); 1128 + RUN_TEST(tc_redirect_peer_l3, MODE_NETKIT); 1129 + RUN_TEST(tc_redirect_neigh, MODE_VETH); 1130 + RUN_TEST(tc_redirect_neigh_fib, MODE_VETH); 1131 + 
RUN_TEST(tc_redirect_dtime, MODE_VETH); 1191 1132 return NULL; 1192 1133 } 1193 1134
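The tc_redirect changes above rename the `veth_*` devices to generic `src`/`dst` names and thread a `dev_mode` through `RUN_TEST(name, mode)` so each subtest can run over either a veth or a netkit pair. A minimal userspace sketch of that parameterization, with names (`setup_result`, `link_kind`) that are ours and only mirror the patch's shape:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical analog of the patch's netns_setup_result: the device
 * mode is part of the setup, so one test body covers both link types.
 */
enum dev_mode { MODE_VETH, MODE_NETKIT };

struct setup_result {
	enum dev_mode dev_mode;
	int ifindex_src_fwd;	/* names lost their veth_ prefix */
	int ifindex_dst_fwd;
};

/* Sketch of a setup step that branches on the mode, as the renamed
 * link/route setup must when creating the device pairs. */
static const char *link_kind(const struct setup_result *r)
{
	return r->dev_mode == MODE_NETKIT ? "netkit" : "veth";
}
```

The point of the rename is exactly this: once nothing in the test body spells out "veth", adding `RUN_TEST(tc_redirect_peer, MODE_NETKIT)` needs no duplicated code.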
+2
tools/testing/selftests/bpf/prog_tests/verifier.c
··· 31 31 #include "verifier_helper_restricted.skel.h" 32 32 #include "verifier_helper_value_access.skel.h" 33 33 #include "verifier_int_ptr.skel.h" 34 + #include "verifier_iterating_callbacks.skel.h" 34 35 #include "verifier_jeq_infer_not_null.skel.h" 35 36 #include "verifier_ld_ind.skel.h" 36 37 #include "verifier_ldsx.skel.h" ··· 140 139 void test_verifier_helper_restricted(void) { RUN(verifier_helper_restricted); } 141 140 void test_verifier_helper_value_access(void) { RUN(verifier_helper_value_access); } 142 141 void test_verifier_int_ptr(void) { RUN(verifier_int_ptr); } 142 + void test_verifier_iterating_callbacks(void) { RUN(verifier_iterating_callbacks); } 143 143 void test_verifier_jeq_infer_not_null(void) { RUN(verifier_jeq_infer_not_null); } 144 144 void test_verifier_ld_ind(void) { RUN(verifier_ld_ind); } 145 145 void test_verifier_ldsx(void) { RUN(verifier_ldsx); }
+8 -5
tools/testing/selftests/bpf/progs/bpf_loop_bench.c
··· 15 15 return 0; 16 16 } 17 17 18 + static int outer_loop(__u32 index, void *data) 19 + { 20 + bpf_loop(nr_loops, empty_callback, NULL, 0); 21 + __sync_add_and_fetch(&hits, nr_loops); 22 + return 0; 23 + } 24 + 18 25 SEC("fentry/" SYS_PREFIX "sys_getpgid") 19 26 int benchmark(void *ctx) 20 27 { 21 - for (int i = 0; i < 1000; i++) { 22 - bpf_loop(nr_loops, empty_callback, NULL, 0); 23 - 24 - __sync_add_and_fetch(&hits, nr_loops); 25 - } 28 + bpf_loop(1000, outer_loop, NULL, 0); 26 29 return 0; 27 30 }
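The bpf_loop_bench change replaces an open-coded `for (i = 0; i < 1000; i++)` around `bpf_loop()` with an outer `bpf_loop(1000, outer_loop, ...)`, so the verifier sees two small callback bodies instead of a 1000-way unrolled loop. A plain-C model of the helper's semantics (call the callback up to `nr` times, stop early on nonzero return) shows why the hit count is unchanged; `model_bpf_loop` is our stand-in, not the kernel helper:

```c
#include <assert.h>

typedef int (*loop_cb)(unsigned idx, void *ctx);

/* Userspace model of bpf_loop(): returns the number of iterations
 * actually performed. */
static long model_bpf_loop(unsigned nr, loop_cb cb, void *ctx)
{
	unsigned i;

	for (i = 0; i < nr; i++)
		if (cb(i, ctx))
			break;
	return i;
}

static int count_cb(unsigned idx, void *ctx)
{
	(void)idx;
	(*(long *)ctx)++;	/* analog of __sync_add_and_fetch(&hits, ...) */
	return 0;
}

static int outer_cb(unsigned idx, void *ctx)
{
	(void)idx;
	/* as in the patch, each outer step runs a full inner loop */
	model_bpf_loop(10, count_cb, ctx);
	return 0;
}
```

Nesting the loops trades unrolled instructions for two callback invocations per level, which keeps the benchmark within the verifier's complexity budget while doing the same total work.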
+1
tools/testing/selftests/bpf/progs/cb_refs.c
··· 33 33 if (!p) 34 34 return 0; 35 35 bpf_for_each_map_elem(&array_map, cb1, &p, 0); 36 + bpf_kfunc_call_test_release(p); 36 37 return 0; 37 38 } 38 39
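The cb_refs.c hunk adds the `bpf_kfunc_call_test_release(p)` that was missing after the map-elem iteration: the acquired pointer must be released on every path that keeps ownership. A toy refcount in plain C illustrates the invariant the verifier enforces; `acquire`/`release` are stand-ins, not kernel kfuncs:

```c
#include <assert.h>

static int live_refs;	/* outstanding references, must end at 0 */

static int *acquire(int *obj) { live_refs++; return obj; }
static void release(int *obj) { (void)obj; live_refs--; }

static int use_obj(void)
{
	int storage = 42;
	int *p = acquire(&storage);

	if (!p)
		return 0;
	/* ... hand p to a callback iterator, as the test does ... */
	release(p);	/* the line the fix adds: without it, p leaks */
	return storage;
}
```

The exceptions_fail.c hunks in this series apply the same discipline to locks and allocations: the added `bpf_spin_unlock()` and `bpf_obj_drop()` close resource pairs the verifier would otherwise flag.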
+2
tools/testing/selftests/bpf/progs/exceptions_fail.c
··· 171 171 return 0; 172 172 bpf_spin_lock(&lock); 173 173 bpf_rbtree_add(&rbtree, &f->node, rbless); 174 + bpf_spin_unlock(&lock); 174 175 return 0; 175 176 } 176 177 ··· 215 214 if (!f) 216 215 return 0; 217 216 bpf_loop(5, subprog_cb_ref, NULL, 0); 217 + bpf_obj_drop(f); 218 218 return 0; 219 219 } 220 220
+48 -30
tools/testing/selftests/bpf/progs/strobemeta.h
··· 24 24 #define STACK_TABLE_EPOCH_SHIFT 20 25 25 #define STROBE_MAX_STR_LEN 1 26 26 #define STROBE_MAX_CFGS 32 27 + #define READ_MAP_VAR_PAYLOAD_CAP \ 28 + ((1 + STROBE_MAX_MAP_ENTRIES * 2) * STROBE_MAX_STR_LEN) 27 29 #define STROBE_MAX_PAYLOAD \ 28 30 (STROBE_MAX_STRS * STROBE_MAX_STR_LEN + \ 29 - STROBE_MAX_MAPS * (1 + STROBE_MAX_MAP_ENTRIES * 2) * STROBE_MAX_STR_LEN) 31 + STROBE_MAX_MAPS * READ_MAP_VAR_PAYLOAD_CAP) 30 32 31 33 struct strobe_value_header { 32 34 /* ··· 357 355 size_t idx, void *tls_base, 358 356 struct strobe_value_generic *value, 359 357 struct strobemeta_payload *data, 360 - void *payload) 358 + size_t off) 361 359 { 362 360 void *location; 363 361 uint64_t len; ··· 368 366 return 0; 369 367 370 368 bpf_probe_read_user(value, sizeof(struct strobe_value_generic), location); 371 - len = bpf_probe_read_user_str(payload, STROBE_MAX_STR_LEN, value->ptr); 369 + len = bpf_probe_read_user_str(&data->payload[off], STROBE_MAX_STR_LEN, value->ptr); 372 370 /* 373 371 * if bpf_probe_read_user_str returns error (<0), due to casting to 374 372 * unsinged int, it will become big number, so next check is ··· 380 378 return 0; 381 379 382 380 data->str_lens[idx] = len; 383 - return len; 381 + return off + len; 384 382 } 385 383 386 - static __always_inline void *read_map_var(struct strobemeta_cfg *cfg, 387 - size_t idx, void *tls_base, 388 - struct strobe_value_generic *value, 389 - struct strobemeta_payload *data, 390 - void *payload) 384 + static __always_inline uint64_t read_map_var(struct strobemeta_cfg *cfg, 385 + size_t idx, void *tls_base, 386 + struct strobe_value_generic *value, 387 + struct strobemeta_payload *data, 388 + size_t off) 391 389 { 392 390 struct strobe_map_descr* descr = &data->map_descrs[idx]; 393 391 struct strobe_map_raw map; ··· 399 397 400 398 location = calc_location(&cfg->map_locs[idx], tls_base); 401 399 if (!location) 402 - return payload; 400 + return off; 403 401 404 402 bpf_probe_read_user(value, sizeof(struct 
strobe_value_generic), location); 405 403 if (bpf_probe_read_user(&map, sizeof(struct strobe_map_raw), value->ptr)) 406 - return payload; 404 + return off; 407 405 408 406 descr->id = map.id; 409 407 descr->cnt = map.cnt; ··· 412 410 data->req_meta_valid = 1; 413 411 } 414 412 415 - len = bpf_probe_read_user_str(payload, STROBE_MAX_STR_LEN, map.tag); 413 + len = bpf_probe_read_user_str(&data->payload[off], STROBE_MAX_STR_LEN, map.tag); 416 414 if (len <= STROBE_MAX_STR_LEN) { 417 415 descr->tag_len = len; 418 - payload += len; 416 + off += len; 419 417 } 420 418 421 419 #ifdef NO_UNROLL ··· 428 426 break; 429 427 430 428 descr->key_lens[i] = 0; 431 - len = bpf_probe_read_user_str(payload, STROBE_MAX_STR_LEN, 429 + len = bpf_probe_read_user_str(&data->payload[off], STROBE_MAX_STR_LEN, 432 430 map.entries[i].key); 433 431 if (len <= STROBE_MAX_STR_LEN) { 434 432 descr->key_lens[i] = len; 435 - payload += len; 433 + off += len; 436 434 } 437 435 descr->val_lens[i] = 0; 438 - len = bpf_probe_read_user_str(payload, STROBE_MAX_STR_LEN, 436 + len = bpf_probe_read_user_str(&data->payload[off], STROBE_MAX_STR_LEN, 439 437 map.entries[i].val); 440 438 if (len <= STROBE_MAX_STR_LEN) { 441 439 descr->val_lens[i] = len; 442 - payload += len; 440 + off += len; 443 441 } 444 442 } 445 443 446 - return payload; 444 + return off; 447 445 } 448 446 449 447 #ifdef USE_BPF_LOOP ··· 457 455 struct strobemeta_payload *data; 458 456 void *tls_base; 459 457 struct strobemeta_cfg *cfg; 460 - void *payload; 458 + size_t payload_off; 461 459 /* value gets mutated */ 462 460 struct strobe_value_generic *value; 463 461 enum read_type type; 464 462 }; 465 463 466 - static int read_var_callback(__u32 index, struct read_var_ctx *ctx) 464 + static int read_var_callback(__u64 index, struct read_var_ctx *ctx) 467 465 { 466 + /* lose precision info for ctx->payload_off, verifier won't track 467 + * double xor, barrier_var() is needed to force clang keep both xors. 
468 + */ 469 + ctx->payload_off ^= index; 470 + barrier_var(ctx->payload_off); 471 + ctx->payload_off ^= index; 468 472 switch (ctx->type) { 469 473 case READ_INT_VAR: 470 474 if (index >= STROBE_MAX_INTS) ··· 480 472 case READ_MAP_VAR: 481 473 if (index >= STROBE_MAX_MAPS) 482 474 return 1; 483 - ctx->payload = read_map_var(ctx->cfg, index, ctx->tls_base, 484 - ctx->value, ctx->data, ctx->payload); 475 + if (ctx->payload_off > sizeof(ctx->data->payload) - READ_MAP_VAR_PAYLOAD_CAP) 476 + return 1; 477 + ctx->payload_off = read_map_var(ctx->cfg, index, ctx->tls_base, 478 + ctx->value, ctx->data, ctx->payload_off); 485 479 break; 486 480 case READ_STR_VAR: 487 481 if (index >= STROBE_MAX_STRS) 488 482 return 1; 489 - ctx->payload += read_str_var(ctx->cfg, index, ctx->tls_base, 490 - ctx->value, ctx->data, ctx->payload); 483 + if (ctx->payload_off > sizeof(ctx->data->payload) - STROBE_MAX_STR_LEN) 484 + return 1; 485 + ctx->payload_off = read_str_var(ctx->cfg, index, ctx->tls_base, 486 + ctx->value, ctx->data, ctx->payload_off); 491 487 break; 492 488 } 493 489 return 0; ··· 513 501 pid_t pid = bpf_get_current_pid_tgid() >> 32; 514 502 struct strobe_value_generic value = {0}; 515 503 struct strobemeta_cfg *cfg; 516 - void *tls_base, *payload; 504 + size_t payload_off; 505 + void *tls_base; 517 506 518 507 cfg = bpf_map_lookup_elem(&strobemeta_cfgs, &pid); 519 508 if (!cfg) ··· 522 509 523 510 data->int_vals_set_mask = 0; 524 511 data->req_meta_valid = 0; 525 - payload = data->payload; 512 + payload_off = 0; 526 513 /* 527 514 * we don't have struct task_struct definition, it should be: 528 515 * tls_base = (void *)task->thread.fsbase; ··· 535 522 .tls_base = tls_base, 536 523 .value = &value, 537 524 .data = data, 538 - .payload = payload, 525 + .payload_off = 0, 539 526 }; 540 527 int err; 541 528 ··· 553 540 err = bpf_loop(STROBE_MAX_MAPS, read_var_callback, &ctx, 0); 554 541 if (err != STROBE_MAX_MAPS) 555 542 return NULL; 543 + 544 + payload_off = ctx.payload_off; 
545 + /* this should not really happen, here only to satisfy verifer */ 546 + if (payload_off > sizeof(data->payload)) 547 + payload_off = sizeof(data->payload); 556 548 #else 557 549 #ifdef NO_UNROLL 558 550 #pragma clang loop unroll(disable) ··· 573 555 #pragma unroll 574 556 #endif /* NO_UNROLL */ 575 557 for (int i = 0; i < STROBE_MAX_STRS; ++i) { 576 - payload += read_str_var(cfg, i, tls_base, &value, data, payload); 558 + payload_off = read_str_var(cfg, i, tls_base, &value, data, payload_off); 577 559 } 578 560 #ifdef NO_UNROLL 579 561 #pragma clang loop unroll(disable) ··· 581 563 #pragma unroll 582 564 #endif /* NO_UNROLL */ 583 565 for (int i = 0; i < STROBE_MAX_MAPS; ++i) { 584 - payload = read_map_var(cfg, i, tls_base, &value, data, payload); 566 + payload_off = read_map_var(cfg, i, tls_base, &value, data, payload_off); 585 567 } 586 568 #endif /* USE_BPF_LOOP */ 587 569 ··· 589 571 * return pointer right after end of payload, so it's possible to 590 572 * calculate exact amount of useful data that needs to be sent 591 573 */ 592 - return payload; 574 + return &data->payload[payload_off]; 593 575 } 594 576 595 577 SEC("raw_tracepoint/kfree_skb")
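The strobemeta.h rework swaps a moving `void *payload` pointer for a `size_t` offset into `data->payload`, with an explicit capacity check before each variable is read; bounded offsets are something the verifier can track where arbitrary pointer arithmetic is not. A plain-C sketch of the pattern, with our own names and sizes (`append_str` stands in for the `bpf_probe_read_user_str()`-based readers):

```c
#include <assert.h>
#include <string.h>

#define PAYLOAD_SZ 64	/* analog of sizeof(data->payload) */
#define MAX_STR    16	/* analog of STROBE_MAX_STR_LEN */

struct payload {
	char buf[PAYLOAD_SZ];
};

/* Copy s into buf at off and return the new offset.  The cap check
 * runs before the copy, so every access is provably within
 * buf[0 .. PAYLOAD_SZ - 1] -- the bound the verifier needs. */
static size_t append_str(struct payload *p, size_t off, const char *s)
{
	size_t len;

	if (off > sizeof(p->buf) - MAX_STR)	/* room for a full string? */
		return off;
	len = strlen(s) + 1;	/* include NUL, like *_str() helpers */
	if (len > MAX_STR)
		return off;
	memcpy(&p->buf[off], s, len);
	return off + len;	/* offset out, never a raw pointer */
}
```

Returning the offset (rather than advancing a pointer) is also what lets the callback-based `read_var_callback` carry its position across `bpf_loop()` iterations through the context struct.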
+242
tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/bpf.h> 4 + #include <bpf/bpf_helpers.h> 5 + #include "bpf_misc.h" 6 + 7 + struct { 8 + __uint(type, BPF_MAP_TYPE_ARRAY); 9 + __uint(max_entries, 8); 10 + __type(key, __u32); 11 + __type(value, __u64); 12 + } map SEC(".maps"); 13 + 14 + struct { 15 + __uint(type, BPF_MAP_TYPE_USER_RINGBUF); 16 + __uint(max_entries, 8); 17 + } ringbuf SEC(".maps"); 18 + 19 + struct vm_area_struct; 20 + struct bpf_map; 21 + 22 + struct buf_context { 23 + char *buf; 24 + }; 25 + 26 + struct num_context { 27 + __u64 i; 28 + __u64 j; 29 + }; 30 + 31 + __u8 choice_arr[2] = { 0, 1 }; 32 + 33 + static int unsafe_on_2nd_iter_cb(__u32 idx, struct buf_context *ctx) 34 + { 35 + if (idx == 0) { 36 + ctx->buf = (char *)(0xDEAD); 37 + return 0; 38 + } 39 + 40 + if (bpf_probe_read_user(ctx->buf, 8, (void *)(0xBADC0FFEE))) 41 + return 1; 42 + 43 + return 0; 44 + } 45 + 46 + SEC("?raw_tp") 47 + __failure __msg("R1 type=scalar expected=fp") 48 + int unsafe_on_2nd_iter(void *unused) 49 + { 50 + char buf[4]; 51 + struct buf_context loop_ctx = { .buf = buf }; 52 + 53 + bpf_loop(100, unsafe_on_2nd_iter_cb, &loop_ctx, 0); 54 + return 0; 55 + } 56 + 57 + static int unsafe_on_zero_iter_cb(__u32 idx, struct num_context *ctx) 58 + { 59 + ctx->i = 0; 60 + return 0; 61 + } 62 + 63 + SEC("?raw_tp") 64 + __failure __msg("invalid access to map value, value_size=2 off=32 size=1") 65 + int unsafe_on_zero_iter(void *unused) 66 + { 67 + struct num_context loop_ctx = { .i = 32 }; 68 + 69 + bpf_loop(100, unsafe_on_zero_iter_cb, &loop_ctx, 0); 70 + return choice_arr[loop_ctx.i]; 71 + } 72 + 73 + static int widening_cb(__u32 idx, struct num_context *ctx) 74 + { 75 + ++ctx->i; 76 + return 0; 77 + } 78 + 79 + SEC("?raw_tp") 80 + __success 81 + int widening(void *unused) 82 + { 83 + struct num_context loop_ctx = { .i = 0, .j = 1 }; 84 + 85 + bpf_loop(100, widening_cb, &loop_ctx, 0); 86 + /* loop_ctx.j is not changed during callback iteration, 87 + * verifier 
should not apply widening to it. 88 + */ 89 + return choice_arr[loop_ctx.j]; 90 + } 91 + 92 + static int loop_detection_cb(__u32 idx, struct num_context *ctx) 93 + { 94 + for (;;) {} 95 + return 0; 96 + } 97 + 98 + SEC("?raw_tp") 99 + __failure __msg("infinite loop detected") 100 + int loop_detection(void *unused) 101 + { 102 + struct num_context loop_ctx = { .i = 0 }; 103 + 104 + bpf_loop(100, loop_detection_cb, &loop_ctx, 0); 105 + return 0; 106 + } 107 + 108 + static __always_inline __u64 oob_state_machine(struct num_context *ctx) 109 + { 110 + switch (ctx->i) { 111 + case 0: 112 + ctx->i = 1; 113 + break; 114 + case 1: 115 + ctx->i = 32; 116 + break; 117 + } 118 + return 0; 119 + } 120 + 121 + static __u64 for_each_map_elem_cb(struct bpf_map *map, __u32 *key, __u64 *val, void *data) 122 + { 123 + return oob_state_machine(data); 124 + } 125 + 126 + SEC("?raw_tp") 127 + __failure __msg("invalid access to map value, value_size=2 off=32 size=1") 128 + int unsafe_for_each_map_elem(void *unused) 129 + { 130 + struct num_context loop_ctx = { .i = 0 }; 131 + 132 + bpf_for_each_map_elem(&map, for_each_map_elem_cb, &loop_ctx, 0); 133 + return choice_arr[loop_ctx.i]; 134 + } 135 + 136 + static __u64 ringbuf_drain_cb(struct bpf_dynptr *dynptr, void *data) 137 + { 138 + return oob_state_machine(data); 139 + } 140 + 141 + SEC("?raw_tp") 142 + __failure __msg("invalid access to map value, value_size=2 off=32 size=1") 143 + int unsafe_ringbuf_drain(void *unused) 144 + { 145 + struct num_context loop_ctx = { .i = 0 }; 146 + 147 + bpf_user_ringbuf_drain(&ringbuf, ringbuf_drain_cb, &loop_ctx, 0); 148 + return choice_arr[loop_ctx.i]; 149 + } 150 + 151 + static __u64 find_vma_cb(struct task_struct *task, struct vm_area_struct *vma, void *data) 152 + { 153 + return oob_state_machine(data); 154 + } 155 + 156 + SEC("?raw_tp") 157 + __failure __msg("invalid access to map value, value_size=2 off=32 size=1") 158 + int unsafe_find_vma(void *unused) 159 + { 160 + struct task_struct *task = 
bpf_get_current_task_btf(); 161 + struct num_context loop_ctx = { .i = 0 }; 162 + 163 + bpf_find_vma(task, 0, find_vma_cb, &loop_ctx, 0); 164 + return choice_arr[loop_ctx.i]; 165 + } 166 + 167 + static int iter_limit_cb(__u32 idx, struct num_context *ctx) 168 + { 169 + ctx->i++; 170 + return 0; 171 + } 172 + 173 + SEC("?raw_tp") 174 + __success 175 + int bpf_loop_iter_limit_ok(void *unused) 176 + { 177 + struct num_context ctx = { .i = 0 }; 178 + 179 + bpf_loop(1, iter_limit_cb, &ctx, 0); 180 + return choice_arr[ctx.i]; 181 + } 182 + 183 + SEC("?raw_tp") 184 + __failure __msg("invalid access to map value, value_size=2 off=2 size=1") 185 + int bpf_loop_iter_limit_overflow(void *unused) 186 + { 187 + struct num_context ctx = { .i = 0 }; 188 + 189 + bpf_loop(2, iter_limit_cb, &ctx, 0); 190 + return choice_arr[ctx.i]; 191 + } 192 + 193 + static int iter_limit_level2a_cb(__u32 idx, struct num_context *ctx) 194 + { 195 + ctx->i += 100; 196 + return 0; 197 + } 198 + 199 + static int iter_limit_level2b_cb(__u32 idx, struct num_context *ctx) 200 + { 201 + ctx->i += 10; 202 + return 0; 203 + } 204 + 205 + static int iter_limit_level1_cb(__u32 idx, struct num_context *ctx) 206 + { 207 + ctx->i += 1; 208 + bpf_loop(1, iter_limit_level2a_cb, ctx, 0); 209 + bpf_loop(1, iter_limit_level2b_cb, ctx, 0); 210 + return 0; 211 + } 212 + 213 + /* Check that path visiting every callback function once had been 214 + * reached by verifier. Variables 'ctx{1,2}i' below serve as flags, 215 + * with each decimal digit corresponding to a callback visit marker. 216 + */ 217 + SEC("socket") 218 + __success __retval(111111) 219 + int bpf_loop_iter_limit_nested(void *unused) 220 + { 221 + struct num_context ctx1 = { .i = 0 }; 222 + struct num_context ctx2 = { .i = 0 }; 223 + __u64 a, b, c; 224 + 225 + bpf_loop(1, iter_limit_level1_cb, &ctx1, 0); 226 + bpf_loop(1, iter_limit_level1_cb, &ctx2, 0); 227 + a = ctx1.i; 228 + b = ctx2.i; 229 + /* Force 'ctx1.i' and 'ctx2.i' precise. 
*/ 230 + c = choice_arr[(a + b) % 2]; 231 + /* This makes 'c' zero, but neither clang nor verifier know it. */ 232 + c /= 10; 233 + /* Make sure that verifier does not visit 'impossible' states: 234 + * enumerate all possible callback visit masks. 235 + */ 236 + if (a != 0 && a != 1 && a != 11 && a != 101 && a != 111 && 237 + b != 0 && b != 1 && b != 11 && b != 101 && b != 111) 238 + asm volatile ("r0 /= 0;" ::: "r0"); 239 + return 1000 * a + b + c; 240 + } 241 + 242 + char _license[] SEC("license") = "GPL";
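Several of the new verifier_iterating_callbacks tests hinge on one fact: a callback may run zero, one, or many times, so the verifier must not assume exactly one invocation. The file's `oob_state_machine()` only reaches the out-of-bounds index 32 on the *second* call. Running a plain-C copy of it a varying number of times makes the hazard concrete:

```c
#include <assert.h>

struct num_context { unsigned long long i; };

/* Copy of the test's oob_state_machine(): 0 -> 1 -> 32, then sticks. */
static void state_step(struct num_context *ctx)
{
	switch (ctx->i) {
	case 0: ctx->i = 1;  break;
	case 1: ctx->i = 32; break;
	}
}

static unsigned long long run_steps(int n)
{
	struct num_context ctx = { .i = 0 };

	while (n-- > 0)
		state_step(&ctx);
	return ctx.i;
}
```

A verifier that simulated only one iteration would conclude `ctx.i <= 1` and accept `choice_arr[ctx.i]` (a 2-byte array); after two iterations the index is 32, hence the expected `"invalid access to map value, value_size=2 off=32 size=1"` failures across `bpf_loop`, `bpf_for_each_map_elem`, `bpf_user_ringbuf_drain`, and `bpf_find_vma`.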
+72 -14
tools/testing/selftests/bpf/progs/verifier_subprog_precision.c
··· 119 119 120 120 SEC("?raw_tp") 121 121 __success __log_level(2) 122 + /* First simulated path does not include callback body, 123 + * r1 and r4 are always precise for bpf_loop() calls. 124 + */ 125 + __msg("9: (85) call bpf_loop#181") 126 + __msg("mark_precise: frame0: last_idx 9 first_idx 9 subseq_idx -1") 127 + __msg("mark_precise: frame0: parent state regs=r4 stack=:") 128 + __msg("mark_precise: frame0: last_idx 8 first_idx 0 subseq_idx 9") 129 + __msg("mark_precise: frame0: regs=r4 stack= before 8: (b7) r4 = 0") 130 + __msg("mark_precise: frame0: last_idx 9 first_idx 9 subseq_idx -1") 131 + __msg("mark_precise: frame0: parent state regs=r1 stack=:") 132 + __msg("mark_precise: frame0: last_idx 8 first_idx 0 subseq_idx 9") 133 + __msg("mark_precise: frame0: regs=r1 stack= before 8: (b7) r4 = 0") 134 + __msg("mark_precise: frame0: regs=r1 stack= before 7: (b7) r3 = 0") 135 + __msg("mark_precise: frame0: regs=r1 stack= before 6: (bf) r2 = r8") 136 + __msg("mark_precise: frame0: regs=r1 stack= before 5: (bf) r1 = r6") 137 + __msg("mark_precise: frame0: regs=r6 stack= before 4: (b7) r6 = 3") 138 + /* r6 precision propagation */ 122 139 __msg("14: (0f) r1 += r6") 123 - __msg("mark_precise: frame0: last_idx 14 first_idx 10") 140 + __msg("mark_precise: frame0: last_idx 14 first_idx 9") 124 141 __msg("mark_precise: frame0: regs=r6 stack= before 13: (bf) r1 = r7") 125 142 __msg("mark_precise: frame0: regs=r6 stack= before 12: (27) r6 *= 4") 126 143 __msg("mark_precise: frame0: regs=r6 stack= before 11: (25) if r6 > 0x3 goto pc+4") 127 144 __msg("mark_precise: frame0: regs=r6 stack= before 10: (bf) r6 = r0") 128 - __msg("mark_precise: frame0: parent state regs=r0 stack=:") 129 - __msg("mark_precise: frame0: last_idx 18 first_idx 0") 130 - __msg("mark_precise: frame0: regs=r0 stack= before 18: (95) exit") 145 + __msg("mark_precise: frame0: regs=r0 stack= before 9: (85) call bpf_loop") 146 + /* State entering callback body popped from states stack */ 147 + __msg("from 9 
to 17: frame1:") 148 + __msg("17: frame1: R1=scalar() R2=0 R10=fp0 cb") 149 + __msg("17: (b7) r0 = 0") 150 + __msg("18: (95) exit") 151 + __msg("returning from callee:") 152 + __msg("to caller at 9:") 153 + __msg("frame 0: propagating r1,r4") 154 + __msg("mark_precise: frame0: last_idx 9 first_idx 9 subseq_idx -1") 155 + __msg("mark_precise: frame0: regs=r1,r4 stack= before 18: (95) exit") 156 + __msg("from 18 to 9: safe") 131 157 __naked int callback_result_precise(void) 132 158 { 133 159 asm volatile ( ··· 259 233 260 234 SEC("?raw_tp") 261 235 __success __log_level(2) 236 + /* First simulated path does not include callback body */ 262 237 __msg("12: (0f) r1 += r6") 263 - __msg("mark_precise: frame0: last_idx 12 first_idx 10") 238 + __msg("mark_precise: frame0: last_idx 12 first_idx 9") 264 239 __msg("mark_precise: frame0: regs=r6 stack= before 11: (bf) r1 = r7") 265 240 __msg("mark_precise: frame0: regs=r6 stack= before 10: (27) r6 *= 4") 241 + __msg("mark_precise: frame0: regs=r6 stack= before 9: (85) call bpf_loop") 266 242 __msg("mark_precise: frame0: parent state regs=r6 stack=:") 267 - __msg("mark_precise: frame0: last_idx 16 first_idx 0") 268 - __msg("mark_precise: frame0: regs=r6 stack= before 16: (95) exit") 269 - __msg("mark_precise: frame1: regs= stack= before 15: (b7) r0 = 0") 270 - __msg("mark_precise: frame1: regs= stack= before 9: (85) call bpf_loop#181") 243 + __msg("mark_precise: frame0: last_idx 8 first_idx 0 subseq_idx 9") 271 244 __msg("mark_precise: frame0: regs=r6 stack= before 8: (b7) r4 = 0") 272 245 __msg("mark_precise: frame0: regs=r6 stack= before 7: (b7) r3 = 0") 273 246 __msg("mark_precise: frame0: regs=r6 stack= before 6: (bf) r2 = r8") 274 247 __msg("mark_precise: frame0: regs=r6 stack= before 5: (b7) r1 = 1") 275 248 __msg("mark_precise: frame0: regs=r6 stack= before 4: (b7) r6 = 3") 249 + /* State entering callback body popped from states stack */ 250 + __msg("from 9 to 15: frame1:") 251 + __msg("15: frame1: R1=scalar() R2=0 
R10=fp0 cb") 252 + __msg("15: (b7) r0 = 0") 253 + __msg("16: (95) exit") 254 + __msg("returning from callee:") 255 + __msg("to caller at 9:") 256 + /* r1, r4 are always precise for bpf_loop(), 257 + * r6 was marked before backtracking to callback body. 258 + */ 259 + __msg("frame 0: propagating r1,r4,r6") 260 + __msg("mark_precise: frame0: last_idx 9 first_idx 9 subseq_idx -1") 261 + __msg("mark_precise: frame0: regs=r1,r4,r6 stack= before 16: (95) exit") 262 + __msg("mark_precise: frame1: regs= stack= before 15: (b7) r0 = 0") 263 + __msg("mark_precise: frame1: regs= stack= before 9: (85) call bpf_loop") 264 + __msg("mark_precise: frame0: parent state regs= stack=:") 265 + __msg("from 16 to 9: safe") 276 266 __naked int parent_callee_saved_reg_precise_with_callback(void) 277 267 { 278 268 asm volatile ( ··· 415 373 416 374 SEC("?raw_tp") 417 375 __success __log_level(2) 376 + /* First simulated path does not include callback body */ 418 377 __msg("14: (0f) r1 += r6") 419 - __msg("mark_precise: frame0: last_idx 14 first_idx 11") 378 + __msg("mark_precise: frame0: last_idx 14 first_idx 10") 420 379 __msg("mark_precise: frame0: regs=r6 stack= before 13: (bf) r1 = r7") 421 380 __msg("mark_precise: frame0: regs=r6 stack= before 12: (27) r6 *= 4") 422 381 __msg("mark_precise: frame0: regs=r6 stack= before 11: (79) r6 = *(u64 *)(r10 -8)") 382 + __msg("mark_precise: frame0: regs= stack=-8 before 10: (85) call bpf_loop") 423 383 __msg("mark_precise: frame0: parent state regs= stack=-8:") 424 - __msg("mark_precise: frame0: last_idx 18 first_idx 0") 425 - __msg("mark_precise: frame0: regs= stack=-8 before 18: (95) exit") 426 - __msg("mark_precise: frame1: regs= stack= before 17: (b7) r0 = 0") 427 - __msg("mark_precise: frame1: regs= stack= before 10: (85) call bpf_loop#181") 384 + __msg("mark_precise: frame0: last_idx 9 first_idx 0 subseq_idx 10") 428 385 __msg("mark_precise: frame0: regs= stack=-8 before 9: (b7) r4 = 0") 429 386 __msg("mark_precise: frame0: regs= stack=-8 
before 8: (b7) r3 = 0") 430 387 __msg("mark_precise: frame0: regs= stack=-8 before 7: (bf) r2 = r8") 431 388 __msg("mark_precise: frame0: regs= stack=-8 before 6: (bf) r1 = r6") 432 389 __msg("mark_precise: frame0: regs= stack=-8 before 5: (7b) *(u64 *)(r10 -8) = r6") 433 390 __msg("mark_precise: frame0: regs=r6 stack= before 4: (b7) r6 = 3") 391 + /* State entering callback body popped from states stack */ 392 + __msg("from 10 to 17: frame1:") 393 + __msg("17: frame1: R1=scalar() R2=0 R10=fp0 cb") 394 + __msg("17: (b7) r0 = 0") 395 + __msg("18: (95) exit") 396 + __msg("returning from callee:") 397 + __msg("to caller at 10:") 398 + /* r1, r4 are always precise for bpf_loop(), 399 + * fp-8 was marked before backtracking to callback body. 400 + */ 401 + __msg("frame 0: propagating r1,r4,fp-8") 402 + __msg("mark_precise: frame0: last_idx 10 first_idx 10 subseq_idx -1") 403 + __msg("mark_precise: frame0: regs=r1,r4 stack=-8 before 18: (95) exit") 404 + __msg("mark_precise: frame1: regs= stack= before 17: (b7) r0 = 0") 405 + __msg("mark_precise: frame1: regs= stack= before 10: (85) call bpf_loop#181") 406 + __msg("mark_precise: frame0: parent state regs= stack=:") 407 + __msg("from 18 to 10: safe") 434 408 __naked int parent_stack_slot_precise_with_callback(void) 435 409 { 436 410 asm volatile (
+52 -32
tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c
··· 53 53 #define DEFAULT_TTL 64
 54 54 #define MAX_ALLOWED_PORTS 8
 55 55 
 56 + #define MAX_PACKET_OFF 0xffff
 57 + 
 56 58 #define swap(a, b) \
 57 59 do { typeof(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0)
 58 60 
··· 185 183 }
 186 184 
 187 185 struct tcpopt_context {
 188 - __u8 *ptr;
 189 - __u8 *end;
 186 + void *data;
 190 187 void *data_end;
 191 188 __be32 *tsecr;
 192 189 __u8 wscale;
 193 190 bool option_timestamp;
 194 191 bool option_sack;
 192 + __u32 off;
 195 193 };
 194 + 
 195 + static __always_inline u8 *next(struct tcpopt_context *ctx, __u32 sz)
 196 + {
 197 + __u64 off = ctx->off;
 198 + __u8 *data;
 199 + 
 200 + /* Verifier forbids access to packet when offset exceeds MAX_PACKET_OFF */
 201 + if (off > MAX_PACKET_OFF - sz)
 202 + return NULL;
 203 + 
 204 + data = ctx->data + off;
 205 + barrier_var(data);
 206 + if (data + sz >= ctx->data_end)
 207 + return NULL;
 208 + 
 209 + ctx->off += sz;
 210 + return data;
 211 + }
 196 212 
 197 213 static int tscookie_tcpopt_parse(struct tcpopt_context *ctx)
 198 214 {
 199 - __u8 opcode, opsize;
 215 + __u8 *opcode, *opsize, *wscale, *tsecr;
 216 + __u32 off = ctx->off;
 200 217 
 201 - if (ctx->ptr >= ctx->end)
 202 - return 1;
 203 - if (ctx->ptr >= ctx->data_end)
 218 + opcode = next(ctx, 1);
 219 + if (!opcode)
 204 220 return 1;
 205 221 
 206 - opcode = ctx->ptr[0];
 207 - 
 208 - if (opcode == TCPOPT_EOL)
 222 + if (*opcode == TCPOPT_EOL)
 209 223 return 1;
 210 - if (opcode == TCPOPT_NOP) {
 211 - ++ctx->ptr;
 224 + if (*opcode == TCPOPT_NOP)
 212 225 return 0;
 213 - }
 214 226 
 215 - if (ctx->ptr + 1 >= ctx->end)
 216 - return 1;
 217 - if (ctx->ptr + 1 >= ctx->data_end)
 218 - return 1;
 219 - opsize = ctx->ptr[1];
 220 - if (opsize < 2)
 227 + opsize = next(ctx, 1);
 228 + if (!opsize || *opsize < 2)
 221 229 return 1;
 222 230 
 223 - if (ctx->ptr + opsize > ctx->end)
 224 - return 1;
 225 - 
 226 - switch (opcode) {
 231 + switch (*opcode) {
 227 232 case TCPOPT_WINDOW:
 228 - if (opsize == TCPOLEN_WINDOW && ctx->ptr + TCPOLEN_WINDOW <= ctx->data_end)
 229 - ctx->wscale = ctx->ptr[2] < TCP_MAX_WSCALE ? ctx->ptr[2] : TCP_MAX_WSCALE;
 233 + wscale = next(ctx, 1);
 234 + if (!wscale)
 235 + return 1;
 236 + if (*opsize == TCPOLEN_WINDOW)
 237 + ctx->wscale = *wscale < TCP_MAX_WSCALE ? *wscale : TCP_MAX_WSCALE;
 230 238 break;
 231 239 case TCPOPT_TIMESTAMP:
 232 - if (opsize == TCPOLEN_TIMESTAMP && ctx->ptr + TCPOLEN_TIMESTAMP <= ctx->data_end) {
 240 + tsecr = next(ctx, 4);
 241 + if (!tsecr)
 242 + return 1;
 243 + if (*opsize == TCPOLEN_TIMESTAMP) {
 233 244 ctx->option_timestamp = true;
 234 245 /* Client's tsval becomes our tsecr. */
 235 - *ctx->tsecr = get_unaligned((__be32 *)(ctx->ptr + 2));
 246 + *ctx->tsecr = get_unaligned((__be32 *)tsecr);
 236 247 }
 237 248 break;
 238 249 case TCPOPT_SACK_PERM:
 239 - if (opsize == TCPOLEN_SACK_PERM)
 250 + if (*opsize == TCPOLEN_SACK_PERM)
 240 251 ctx->option_sack = true;
 241 252 break;
 242 253 }
 243 254 
 244 - ctx->ptr += opsize;
 255 + ctx->off = off + *opsize;
 245 256 
 246 257 return 0;
 247 258 }
··· 271 256 
 272 257 static __always_inline bool tscookie_init(struct tcphdr *tcp_header,
 273 258 __u16 tcp_len, __be32 *tsval,
 274 - __be32 *tsecr, void *data_end)
 259 + __be32 *tsecr, void *data, void *data_end)
 275 260 {
 276 261 struct tcpopt_context loop_ctx = {
 277 - .ptr = (__u8 *)(tcp_header + 1),
 278 - .end = (__u8 *)tcp_header + tcp_len,
 262 + .data = data,
 279 263 .data_end = data_end,
 280 264 .tsecr = tsecr,
 281 265 .wscale = TS_OPT_WSCALE_MASK,
 282 266 .option_timestamp = false,
 283 267 .option_sack = false,
 268 + /* Note: currently verifier would track .off as unbound scalar.
 269 + * In case if verifier would at some point get smarter and
 270 + * compute bounded value for this var, beware that it might
 271 + * hinder bpf_loop() convergence validation.
 272 + */
 273 + .off = (__u8 *)(tcp_header + 1) - (__u8 *)data,
 284 274 };
 285 275 u32 cookie;
 286 276 
··· 655 635 cookie = (__u32)value;
 656 636 
 657 637 if (tscookie_init((void *)hdr->tcp, hdr->tcp_len,
 658 - &tsopt_buf[0], &tsopt_buf[1], data_end))
 638 + &tsopt_buf[0], &tsopt_buf[1], data, data_end))
 659 639 tsopt = tsopt_buf;
 660 640 
 661 641 /* Check that there is enough space for a SYNACK. It also covers
+1 -1
tools/testing/selftests/net/rtnetlink.sh
··· 859 859 
 860 860 
 861 861 run_cmd ip -netns "$testns" addr add dev "$DEV_NS" 10.1.1.100/24
 862 - run_cmd ip -netns "$testns" link set dev $DEV_NS ups
 862 + run_cmd ip -netns "$testns" link set dev $DEV_NS up
 863 863 run_cmd ip -netns "$testns" link del "$DEV_NS"
 864 864 
 865 865 # test external mode
+13 -6
tools/testing/vsock/vsock_test.c
··· 353 353 }
 354 354 
 355 355 #define SOCK_BUF_SIZE (2 * 1024 * 1024)
 356 - #define MAX_MSG_SIZE (32 * 1024)
 356 + #define MAX_MSG_PAGES 4
 357 357 
 358 358 static void test_seqpacket_msg_bounds_client(const struct test_opts *opts)
 359 359 {
 360 360 unsigned long curr_hash;
 361 + size_t max_msg_size;
 361 362 int page_size;
 362 363 int msg_count;
 363 364 int fd;
··· 374 373 
 375 374 curr_hash = 0;
 376 375 page_size = getpagesize();
 377 - msg_count = SOCK_BUF_SIZE / MAX_MSG_SIZE;
 376 + max_msg_size = MAX_MSG_PAGES * page_size;
 377 + msg_count = SOCK_BUF_SIZE / max_msg_size;
 378 378 
 379 379 for (int i = 0; i < msg_count; i++) {
 380 380 size_t buf_size;
··· 385 383 /* Use "small" buffers and "big" buffers. */
 386 384 if (i & 1)
 387 385 buf_size = page_size +
 388 - (rand() % (MAX_MSG_SIZE - page_size));
 386 + (rand() % (max_msg_size - page_size));
 389 387 else
 390 388 buf_size = 1 + (rand() % page_size);
··· 431 429 unsigned long remote_hash;
 432 430 unsigned long curr_hash;
 433 431 int fd;
 434 - char buf[MAX_MSG_SIZE];
 435 432 struct msghdr msg = {0};
 436 433 struct iovec iov = {0};
··· 458 457 control_writeln("SRVREADY");
 459 458 /* Wait, until peer sends whole data. */
 460 459 control_expectln("SENDDONE");
 461 - iov.iov_base = buf;
 462 - iov.iov_len = sizeof(buf);
 460 + iov.iov_len = MAX_MSG_PAGES * getpagesize();
 461 + iov.iov_base = malloc(iov.iov_len);
 462 + if (!iov.iov_base) {
 463 + perror("malloc");
 464 + exit(EXIT_FAILURE);
 465 + }
 466 + 
 463 467 msg.msg_iov = &iov;
 464 468 msg.msg_iovlen = 1;
 465 469 
··· 489 483 curr_hash += hash_djb2(msg.msg_iov[0].iov_base, recv_size);
 490 484 }
 491 485 
 486 + free(iov.iov_base);
 492 487 close(fd);
 493 488 remote_hash = control_readulong();
 494 489 