Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.17-rc4 into usb-next

We want the USB fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+2200 -1244
+9 -1
Documentation/bpf/bpf_devel_QA.txt
···
 pulls in some header files containing file scope host assembly codes.
 - You can add "-fno-jump-tables" to work around the switch table issue.

-Otherwise, you can use bpf target.
+Otherwise, you can use bpf target. Additionally, you _must_ use bpf target
+when:
+
+- Your program uses data structures with pointer or long / unsigned long
+  types that interface with BPF helpers or context data structures. Access
+  into these structures is verified by the BPF verifier and may result
+  in verification failures if the native architecture is not aligned with
+  the BPF architecture, e.g. 64-bit. An example of this is
+  BPF_PROG_TYPE_SK_MSG require '-target bpf'

 Happy BPF hacking!
+7
Documentation/devicetree/bindings/input/atmel,maxtouch.txt
···
 - compatible:
     atmel,maxtouch

+    The following compatibles have been used in various products but are
+    deprecated:
+      atmel,qt602240_ts
+      atmel,atmel_mxt_ts
+      atmel,atmel_mxt_tp
+      atmel,mXT224
+
 - reg: The I2C address of the device

 - interrupts: The sink for the touchpad's IRQ output
+2 -2
Documentation/doc-guide/parse-headers.rst
···
 ****


-Report bugs to Mauro Carvalho Chehab <mchehab@s-opensource.com>
+Report bugs to Mauro Carvalho Chehab <mchehab@kernel.org>


 COPYRIGHT
 *********


-Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab@s-opensource.com>.
+Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab+samsung@kernel.org>.

 License GPLv2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>.
+1 -1
Documentation/media/uapi/rc/keytable.c.rst
···

 /* keytable.c - This program allows checking/replacing keys at IR

-   Copyright (C) 2006-2009 Mauro Carvalho Chehab <mchehab@infradead.org>
+   Copyright (C) 2006-2009 Mauro Carvalho Chehab <mchehab@kernel.org>

    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
+1 -1
Documentation/media/uapi/v4l/v4l2grab.c.rst
···
 .. code-block:: c

     /* V4L2 video picture grabber
-       Copyright (C) 2009 Mauro Carvalho Chehab <mchehab@infradead.org>
+       Copyright (C) 2009 Mauro Carvalho Chehab <mchehab@kernel.org>

        This program is free software; you can redistribute it and/or modify
        it under the terms of the GNU General Public License as published by
+2 -2
Documentation/sphinx/parse-headers.pl
···

 =head1 BUGS

-Report bugs to Mauro Carvalho Chehab <mchehab@s-opensource.com>
+Report bugs to Mauro Carvalho Chehab <mchehab@kernel.org>

 =head1 COPYRIGHT

-Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab@s-opensource.com>.
+Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab+samsung@kernel.org>.

 License GPLv2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>.
+2 -2
Documentation/translations/zh_CN/video4linux/v4l2-framework.txt
···
 help.  Contact the Chinese maintainer if this translation is outdated
 or if there is a problem with the translation.

-Maintainer: Mauro Carvalho Chehab <mchehab@infradead.org>
+Maintainer: Mauro Carvalho Chehab <mchehab@kernel.org>
 Chinese maintainer: Fu Wei <tekkamanninja@gmail.com>
 ---------------------------------------------------------------------
 Documentation/video4linux/v4l2-framework.txt 的中文翻译
···
 如果想评论或更新本文的内容,请直接联系原文档的维护者。如果你使用英文
 交流有困难的话,也可以向中文版维护者求助。如果本翻译更新不及时或者翻
 译存在问题,请联系中文版维护者。
-英文版维护者: Mauro Carvalho Chehab <mchehab@infradead.org>
+英文版维护者: Mauro Carvalho Chehab <mchehab@kernel.org>
 中文版维护者: 傅炜 Fu Wei <tekkamanninja@gmail.com>
 中文版翻译者: 傅炜 Fu Wei <tekkamanninja@gmail.com>
 中文版校译者: 傅炜 Fu Wei <tekkamanninja@gmail.com>
+5 -19
MAINTAINERS
···
 F: sound/soc/atmel/tse850-pcm5142.c

 AZ6007 DVB DRIVER
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-media@vger.kernel.org
 W: https://linuxtv.org
···
 F: include/uapi/linux/btrfs*

 BTTV VIDEO4LINUX DRIVER
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-media@vger.kernel.org
 W: https://linuxtv.org
···
 F: drivers/media/dvb-frontends/cx24120*

 CX88 VIDEO4LINUX DRIVER
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-media@vger.kernel.org
 W: https://linuxtv.org
···

 EDAC-CORE
 M: Borislav Petkov <bp@alien8.de>
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-edac@vger.kernel.org
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/bp/bp.git for-next
···
 F: drivers/edac/fsl_ddr_edac.*

 EDAC-GHES
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-edac@vger.kernel.org
 S: Maintained
···
 F: drivers/edac/i5000_edac.c

 EDAC-I5400
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-edac@vger.kernel.org
 S: Maintained
 F: drivers/edac/i5400_edac.c

 EDAC-I7300
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-edac@vger.kernel.org
 S: Maintained
 F: drivers/edac/i7300_edac.c

 EDAC-I7CORE
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-edac@vger.kernel.org
 S: Maintained
···
 F: drivers/edac/r82600_edac.c

 EDAC-SBRIDGE
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-edac@vger.kernel.org
 S: Maintained
···
 F: drivers/net/ethernet/ibm/ehea/

 EM28XX VIDEO4LINUX DRIVER
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-media@vger.kernel.org
 W: https://linuxtv.org
···
 S: Maintained
 F: Documentation/kbuild/
 F: Makefile
-F: scripts/Makefile.*
+F: scripts/Kbuild*
+F: scripts/Makefile*
 F: scripts/basic/
 F: scripts/mk*
+F: scripts/mod/
 F: scripts/package/

 KERNEL JANITORS
···
 F: drivers/staging/media/tegra-vde/

 MEDIA INPUT INFRASTRUCTURE (V4L/DVB)
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 P: LinuxTV.org Project
 L: linux-media@vger.kernel.org
···
 F: net/core/drop_monitor.c

 NETWORKING DRIVERS
+M: "David S. Miller" <davem@davemloft.net>
 L: netdev@vger.kernel.org
 W: http://www.linuxfoundation.org/en/Net
 Q: http://patchwork.ozlabs.org/project/netdev/list/
···
 F: drivers/media/i2c/saa6588*

 SAA7134 VIDEO4LINUX DRIVER
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-media@vger.kernel.org
 W: https://linuxtv.org
···
 SCTP PROTOCOL
 M: Vlad Yasevich <vyasevich@gmail.com>
 M: Neil Horman <nhorman@tuxdriver.com>
+M: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
 L: linux-sctp@vger.kernel.org
 W: http://lksctp.sourceforge.net
 S: Maintained
···
 F: drivers/media/radio/si4713/radio-usb-si4713.c

 SIANO DVB DRIVER
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-media@vger.kernel.org
 W: https://linuxtv.org
···
 F: drivers/media/i2c/tda9840*

 TEA5761 TUNER DRIVER
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-media@vger.kernel.org
 W: https://linuxtv.org
···
 F: drivers/media/tuners/tea5761.*

 TEA5767 TUNER DRIVER
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-media@vger.kernel.org
 W: https://linuxtv.org
···
 F: drivers/iommu/tegra*

 TEGRA KBC DRIVER
-M: Rakesh Iyer <riyer@nvidia.com>
 M: Laxman Dewangan <ldewangan@nvidia.com>
 S: Supported
 F: drivers/input/keyboard/tegra-kbc.c
···
 F: drivers/net/ethernet/ti/tlan.*

 TM6000 VIDEO4LINUX DRIVER
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-media@vger.kernel.org
 W: https://linuxtv.org
···
 F: arch/x86/entry/vdso/

 XC2028/3028 TUNER DRIVER
-M: Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 L: linux-media@vger.kernel.org
 W: https://linuxtv.org
+2 -2
Makefile
···
 VERSION = 4
 PATCHLEVEL = 17
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
-NAME = Fearless Coyote
+EXTRAVERSION = -rc4
+NAME = Merciless Moray

 # *DOCUMENTATION*
 # To see a list of typical targets execute "make help"
+1 -1
arch/arm64/include/asm/kvm_emulate.h
···
 	} else {
 		u64 sctlr = vcpu_read_sys_reg(vcpu, SCTLR_EL1);
 		sctlr |= (1 << 25);
-		vcpu_write_sys_reg(vcpu, SCTLR_EL1, sctlr);
+		vcpu_write_sys_reg(vcpu, sctlr, SCTLR_EL1);
 	}
 }
+19 -5
arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
···
 #include <linux/compiler.h>
 #include <linux/irqchip/arm-gic.h>
 #include <linux/kvm_host.h>
+#include <linux/swab.h>

 #include <asm/kvm_emulate.h>
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
+
+static bool __hyp_text __is_be(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_mode_is_32bit(vcpu))
+		return !!(read_sysreg_el2(spsr) & COMPAT_PSR_E_BIT);
+
+	return !!(read_sysreg(SCTLR_EL1) & SCTLR_ELx_EE);
+}

 /*
  * __vgic_v2_perform_cpuif_access -- perform a GICV access on behalf of the
···
 	addr += fault_ipa - vgic->vgic_cpu_base;

 	if (kvm_vcpu_dabt_iswrite(vcpu)) {
-		u32 data = vcpu_data_guest_to_host(vcpu,
-						   vcpu_get_reg(vcpu, rd),
-						   sizeof(u32));
+		u32 data = vcpu_get_reg(vcpu, rd);
+		if (__is_be(vcpu)) {
+			/* guest pre-swabbed data, undo this for writel() */
+			data = swab32(data);
+		}
 		writel_relaxed(data, addr);
 	} else {
 		u32 data = readl_relaxed(addr);
-		vcpu_set_reg(vcpu, rd, vcpu_data_host_to_guest(vcpu, data,
-							       sizeof(u32)));
+		if (__is_be(vcpu)) {
+			/* guest expects swabbed data */
+			data = swab32(data);
+		}
+		vcpu_set_reg(vcpu, rd, data);
 	}

 	return 1;
+6
arch/hexagon/include/asm/io.h
···
 	memcpy((void *) dst, src, count);
 }

+static inline void memset_io(volatile void __iomem *addr, int value,
+			     size_t size)
+{
+	memset((void __force *)addr, value, size);
+}
+
 #define PCI_IO_ADDR (volatile void __iomem *)

 /*
+1
arch/hexagon/lib/checksum.c
···
 	memcpy(dst, src, len);
 	return csum_partial(dst, len, sum);
 }
+EXPORT_SYMBOL(csum_partial_copy_nocheck);
+3
arch/parisc/Makefile
···

 PHONY += bzImage $(BOOT_TARGETS) $(INSTALL_TARGETS)

+# Default kernel to build
+all: bzImage
+
 zImage: vmlinuz
 Image: vmlinux
+4 -3
arch/parisc/kernel/drivers.c
···
  * Checks all the children of @parent for a matching @id. If none
  * found, it allocates a new device and returns it.
  */
-static struct parisc_device * alloc_tree_node(struct device *parent, char id)
+static struct parisc_device * __init alloc_tree_node(
+			struct device *parent, char id)
 {
 	struct match_id_data d = {
 		.id = id,
···
  * devices which are not physically connected (such as extra serial &
  * keyboard ports). This problem is not yet solved.
  */
-static void walk_native_bus(unsigned long io_io_low, unsigned long io_io_high,
-			    struct device *parent)
+static void __init walk_native_bus(unsigned long io_io_low,
+	unsigned long io_io_high, struct device *parent)
 {
 	int i, devices_found = 0;
 	unsigned long hpa = io_io_low;
+1 -1
arch/parisc/kernel/pci.c
···
  * pcibios_init_bridge() initializes cache line and default latency
  * for pci controllers and pci-pci bridges
  */
-void __init pcibios_init_bridge(struct pci_dev *dev)
+void __ref pcibios_init_bridge(struct pci_dev *dev)
 {
 	unsigned short bridge_ctl, bridge_ctl_new;
+1 -1
arch/parisc/kernel/time.c
···
 device_initcall(rtc_init);
 #endif

-void read_persistent_clock(struct timespec *ts)
+void read_persistent_clock64(struct timespec64 *ts)
 {
 	static struct pdc_tod tod_data;
 	if (pdc_tod_read(&tod_data) == 0) {
+11
arch/parisc/kernel/traps.c
···
 	if (pdc_instr(&instr) == PDC_OK)
 		ivap[0] = instr;

+	/*
+	 * Rules for the checksum of the HPMC handler:
+	 * 1. The IVA does not point to PDC/PDH space (ie: the OS has installed
+	 *    its own IVA).
+	 * 2. The word at IVA + 32 is nonzero.
+	 * 3. If Length (IVA + 60) is not zero, then Length (IVA + 60) and
+	 *    Address (IVA + 56) are word-aligned.
+	 * 4. The checksum of the 8 words starting at IVA + 32 plus the sum of
+	 *    the Length/4 words starting at Address is zero.
+	 */
+
 	/* Compute Checksum for HPMC handler */
 	length = os_hpmc_size;
 	ivap[7] = length;
+1 -1
arch/parisc/mm/init.c
···
 	}
 }

-void free_initmem(void)
+void __ref free_initmem(void)
 {
 	unsigned long init_begin = (unsigned long)__init_begin;
 	unsigned long init_end = (unsigned long)__init_end;
+1 -1
arch/sparc/include/uapi/asm/oradax.h
···
  *
  * This program is free software: you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation, either version 3 of the License, or
+ * the Free Software Foundation, either version 2 of the License, or
  * (at your option) any later version.
  *
  * This program is distributed in the hope that it will be useful,
+1 -1
arch/sparc/kernel/vio.c
···
 	if (err) {
 		printk(KERN_ERR "VIO: Could not register device %s, err=%d\n",
 		       dev_name(&vdev->dev), err);
-		kfree(vdev);
+		put_device(&vdev->dev);
 		return NULL;
 	}
 	if (vdev->dp)
+5 -1
arch/x86/kernel/cpu/common.c
···
 		c->x86_power = edx;
 	}

+	if (c->extended_cpuid_level >= 0x80000008) {
+		cpuid(0x80000008, &eax, &ebx, &ecx, &edx);
+		c->x86_capability[CPUID_8000_0008_EBX] = ebx;
+	}
+
 	if (c->extended_cpuid_level >= 0x8000000a)
 		c->x86_capability[CPUID_8000_000A_EDX] = cpuid_edx(0x8000000a);

···
 		c->x86_virt_bits = (eax >> 8) & 0xff;
 		c->x86_phys_bits = eax & 0xff;
-		c->x86_capability[CPUID_8000_0008_EBX] = ebx;
 	}
 #ifdef CONFIG_X86_32
 	else if (cpu_has(c, X86_FEATURE_PAE) || cpu_has(c, X86_FEATURE_PSE36))
+11 -11
arch/x86/kernel/tsc.c
···
 	.resume			= tsc_resume,
 	.mark_unstable		= tsc_cs_mark_unstable,
 	.tick_stable		= tsc_cs_tick_stable,
+	.list			= LIST_HEAD_INIT(clocksource_tsc_early.list),
 };

 /*
···
 	.resume			= tsc_resume,
 	.mark_unstable		= tsc_cs_mark_unstable,
 	.tick_stable		= tsc_cs_tick_stable,
+	.list			= LIST_HEAD_INIT(clocksource_tsc.list),
 };

 void mark_tsc_unstable(char *reason)
···
 	clear_sched_clock_stable();
 	disable_sched_clock_irqtime();
 	pr_info("Marking TSC unstable due to %s\n", reason);
-	/* Change only the rating, when not registered */
-	if (clocksource_tsc.mult) {
-		clocksource_mark_unstable(&clocksource_tsc);
-	} else {
-		clocksource_tsc.flags |= CLOCK_SOURCE_UNSTABLE;
-		clocksource_tsc.rating = 0;
-	}
+
+	clocksource_mark_unstable(&clocksource_tsc_early);
+	clocksource_mark_unstable(&clocksource_tsc);
 }

 EXPORT_SYMBOL_GPL(mark_tsc_unstable);
···

 	/* Don't bother refining TSC on unstable systems */
 	if (tsc_unstable)
-		return;
+		goto unreg;

 	/*
 	 * Since the work is started early in boot, we may be
···

 out:
 	if (tsc_unstable)
-		return;
+		goto unreg;

 	if (boot_cpu_has(X86_FEATURE_ART))
 		art_related_clocksource = &clocksource_tsc;
 	clocksource_register_khz(&clocksource_tsc, tsc_khz);
+unreg:
 	clocksource_unregister(&clocksource_tsc_early);
 }

···
 	if (!boot_cpu_has(X86_FEATURE_TSC) || tsc_disabled > 0 || !tsc_khz)
 		return 0;

-	if (check_tsc_unstable())
-		return 0;
+	if (tsc_unstable)
+		goto unreg;

 	if (tsc_clocksource_reliable)
 		clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
···
 	if (boot_cpu_has(X86_FEATURE_ART))
 		art_related_clocksource = &clocksource_tsc;
 	clocksource_register_khz(&clocksource_tsc, tsc_khz);
+unreg:
 	clocksource_unregister(&clocksource_tsc_early);
 	return 0;
 }
+20 -17
arch/x86/kvm/lapic.c
···
 	local_irq_restore(flags);
 }

-static void start_sw_period(struct kvm_lapic *apic)
-{
-	if (!apic->lapic_timer.period)
-		return;
-
-	if (apic_lvtt_oneshot(apic) &&
-	    ktime_after(ktime_get(),
-			apic->lapic_timer.target_expiration)) {
-		apic_timer_expired(apic);
-		return;
-	}
-
-	hrtimer_start(&apic->lapic_timer.timer,
-		apic->lapic_timer.target_expiration,
-		HRTIMER_MODE_ABS_PINNED);
-}
-
 static void update_target_expiration(struct kvm_lapic *apic, uint32_t old_divisor)
 {
 	ktime_t now, remaining;
···
 	apic->lapic_timer.target_expiration =
 		ktime_add_ns(apic->lapic_timer.target_expiration,
 				apic->lapic_timer.period);
+}
+
+static void start_sw_period(struct kvm_lapic *apic)
+{
+	if (!apic->lapic_timer.period)
+		return;
+
+	if (ktime_after(ktime_get(),
+			apic->lapic_timer.target_expiration)) {
+		apic_timer_expired(apic);
+
+		if (apic_lvtt_oneshot(apic))
+			return;
+
+		advance_periodic_target_expiration(apic);
+	}
+
+	hrtimer_start(&apic->lapic_timer.timer,
+		apic->lapic_timer.target_expiration,
+		HRTIMER_MODE_ABS_PINNED);
 }

 bool kvm_lapic_hv_timer_in_use(struct kvm_vcpu *vcpu)
+14 -4
arch/x86/net/bpf_jit_comp.c
···
 			break;

 		case BPF_JMP | BPF_JA:
-			jmp_offset = addrs[i + insn->off] - addrs[i];
+			if (insn->off == -1)
+				/* -1 jmp instructions will always jump
+				 * backwards two bytes. Explicitly handling
+				 * this case avoids wasting too many passes
+				 * when there are long sequences of replaced
+				 * dead code.
+				 */
+				jmp_offset = -2;
+			else
+				jmp_offset = addrs[i + insn->off] - addrs[i];
+
 			if (!jmp_offset)
 				/* optimize out nop jumps */
 				break;
···
 	for (pass = 0; pass < 20 || image; pass++) {
 		proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
 		if (proglen <= 0) {
+out_image:
 			image = NULL;
 			if (header)
 				bpf_jit_binary_free(header);
···
 			if (proglen != oldproglen) {
 				pr_err("bpf_jit: proglen=%d != oldproglen=%d\n",
 				       proglen, oldproglen);
-				prog = orig_prog;
-				goto out_addrs;
+				goto out_image;
 			}
 			break;
 		}
···
 		prog = orig_prog;
 	}

-	if (!prog->is_func || extra_pass) {
+	if (!image || !prog->is_func || extra_pass) {
 out_addrs:
 		kfree(addrs);
 		kfree(jit_data);
+31 -55
arch/x86/xen/enlighten_pv.c
···
 {
 	unsigned long va = dtr->address;
 	unsigned int size = dtr->size + 1;
-	unsigned pages = DIV_ROUND_UP(size, PAGE_SIZE);
-	unsigned long frames[pages];
-	int f;
+	unsigned long pfn, mfn;
+	int level;
+	pte_t *ptep;
+	void *virt;

-	/*
-	 * A GDT can be up to 64k in size, which corresponds to 8192
-	 * 8-byte entries, or 16 4k pages..
-	 */
-
-	BUG_ON(size > 65536);
+	/* @size should be at most GDT_SIZE which is smaller than PAGE_SIZE. */
+	BUG_ON(size > PAGE_SIZE);
 	BUG_ON(va & ~PAGE_MASK);

-	for (f = 0; va < dtr->address + size; va += PAGE_SIZE, f++) {
-		int level;
-		pte_t *ptep;
-		unsigned long pfn, mfn;
-		void *virt;
-
-		/*
-		 * The GDT is per-cpu and is in the percpu data area.
-		 * That can be virtually mapped, so we need to do a
-		 * page-walk to get the underlying MFN for the
-		 * hypercall. The page can also be in the kernel's
-		 * linear range, so we need to RO that mapping too.
-		 */
-		ptep = lookup_address(va, &level);
-		BUG_ON(ptep == NULL);
-
-		pfn = pte_pfn(*ptep);
-		mfn = pfn_to_mfn(pfn);
-		virt = __va(PFN_PHYS(pfn));
-
-		frames[f] = mfn;
-
-		make_lowmem_page_readonly((void *)va);
-		make_lowmem_page_readonly(virt);
-	}
-
-	if (HYPERVISOR_set_gdt(frames, size / sizeof(struct desc_struct)))
+	/*
+	 * The GDT is per-cpu and is in the percpu data area.
+	 * That can be virtually mapped, so we need to do a
+	 * page-walk to get the underlying MFN for the
+	 * hypercall. The page can also be in the kernel's
+	 * linear range, so we need to RO that mapping too.
+	 */
+	ptep = lookup_address(va, &level);
+	BUG_ON(ptep == NULL);
+
+	pfn = pte_pfn(*ptep);
+	mfn = pfn_to_mfn(pfn);
+	virt = __va(PFN_PHYS(pfn));
+
+	make_lowmem_page_readonly((void *)va);
+	make_lowmem_page_readonly(virt);
+
+	if (HYPERVISOR_set_gdt(&mfn, size / sizeof(struct desc_struct)))
 		BUG();
 }

···
 {
 	unsigned long va = dtr->address;
 	unsigned int size = dtr->size + 1;
-	unsigned pages = DIV_ROUND_UP(size, PAGE_SIZE);
-	unsigned long frames[pages];
-	int f;
+	unsigned long pfn, mfn;
+	pte_t pte;

-	/*
-	 * A GDT can be up to 64k in size, which corresponds to 8192
-	 * 8-byte entries, or 16 4k pages..
-	 */
-
-	BUG_ON(size > 65536);
+	/* @size should be at most GDT_SIZE which is smaller than PAGE_SIZE. */
+	BUG_ON(size > PAGE_SIZE);
 	BUG_ON(va & ~PAGE_MASK);

-	for (f = 0; va < dtr->address + size; va += PAGE_SIZE, f++) {
-		pte_t pte;
-		unsigned long pfn, mfn;
-
-		pfn = virt_to_pfn(va);
-		mfn = pfn_to_mfn(pfn);
-
-		pte = pfn_pte(pfn, PAGE_KERNEL_RO);
-
-		if (HYPERVISOR_update_va_mapping((unsigned long)va, pte, 0))
-			BUG();
-
-		frames[f] = mfn;
-	}
-
-	if (HYPERVISOR_set_gdt(frames, size / sizeof(struct desc_struct)))
+	pfn = virt_to_pfn(va);
+	mfn = pfn_to_mfn(pfn);
+
+	pte = pfn_pte(pfn, PAGE_KERNEL_RO);
+
+	if (HYPERVISOR_update_va_mapping((unsigned long)va, pte, 0))
+		BUG();
+
+	if (HYPERVISOR_set_gdt(&mfn, size / sizeof(struct desc_struct)))
 		BUG();
 }
+28 -12
block/blk-mq.c
···
 {
 	struct mq_inflight *mi = priv;

-	if (blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT) {
-		/*
-		 * index[0] counts the specific partition that was asked
-		 * for. index[1] counts the ones that are active on the
-		 * whole device, so increment that if mi->part is indeed
-		 * a partition, and not a whole device.
-		 */
-		if (rq->part == mi->part)
-			mi->inflight[0]++;
-		if (mi->part->partno)
-			mi->inflight[1]++;
-	}
+	/*
+	 * index[0] counts the specific partition that was asked for. index[1]
+	 * counts the ones that are active on the whole device, so increment
+	 * that if mi->part is indeed a partition, and not a whole device.
+	 */
+	if (rq->part == mi->part)
+		mi->inflight[0]++;
+	if (mi->part->partno)
+		mi->inflight[1]++;
 }

 void blk_mq_in_flight(struct request_queue *q, struct hd_struct *part,
···
 	inflight[0] = inflight[1] = 0;
 	blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight, &mi);
+}
+
+static void blk_mq_check_inflight_rw(struct blk_mq_hw_ctx *hctx,
+				     struct request *rq, void *priv,
+				     bool reserved)
+{
+	struct mq_inflight *mi = priv;
+
+	if (rq->part == mi->part)
+		mi->inflight[rq_data_dir(rq)]++;
+}
+
+void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
+			 unsigned int inflight[2])
+{
+	struct mq_inflight mi = { .part = part, .inflight = inflight, };
+
+	inflight[0] = inflight[1] = 0;
+	blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight_rw, &mi);
 }

 void blk_freeze_queue_start(struct request_queue *q)
+3 -1
block/blk-mq.h
···
 }

 void blk_mq_in_flight(struct request_queue *q, struct hd_struct *part,
-		unsigned int inflight[2]);
+		      unsigned int inflight[2]);
+void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
+			 unsigned int inflight[2]);

 static inline void blk_mq_put_dispatch_budget(struct blk_mq_hw_ctx *hctx)
 {
+12
block/genhd.c
···
 	}
 }

+void part_in_flight_rw(struct request_queue *q, struct hd_struct *part,
+		       unsigned int inflight[2])
+{
+	if (q->mq_ops) {
+		blk_mq_in_flight_rw(q, part, inflight);
+		return;
+	}
+
+	inflight[0] = atomic_read(&part->in_flight[0]);
+	inflight[1] = atomic_read(&part->in_flight[1]);
+}
+
 struct hd_struct *__disk_get_part(struct gendisk *disk, int partno)
 {
 	struct disk_part_tbl *ptbl = rcu_dereference(disk->part_tbl);
+6 -4
block/partition-generic.c
···
 		jiffies_to_msecs(part_stat_read(p, time_in_queue)));
 }

-ssize_t part_inflight_show(struct device *dev,
-			struct device_attribute *attr, char *buf)
+ssize_t part_inflight_show(struct device *dev, struct device_attribute *attr,
+			   char *buf)
 {
 	struct hd_struct *p = dev_to_part(dev);
+	struct request_queue *q = part_to_disk(p)->queue;
+	unsigned int inflight[2];

-	return sprintf(buf, "%8u %8u\n", atomic_read(&p->in_flight[0]),
-		atomic_read(&p->in_flight[1]));
+	part_in_flight_rw(q, p, inflight);
+	return sprintf(buf, "%8u %8u\n", inflight[0], inflight[1]);
 }

 #ifdef CONFIG_FAIL_MAKE_REQUEST
+1 -1
drivers/clk/clk-cs2000-cp.c
···
 	return ret;
 }

-static int cs2000_resume(struct device *dev)
+static int __maybe_unused cs2000_resume(struct device *dev)
 {
 	struct cs2000_priv *priv = dev_get_drvdata(dev);
+9 -1
drivers/clk/clk-mux.c
···
 	return 0;
 }

+static int clk_mux_determine_rate(struct clk_hw *hw,
+				  struct clk_rate_request *req)
+{
+	struct clk_mux *mux = to_clk_mux(hw);
+
+	return clk_mux_determine_rate_flags(hw, req, mux->flags);
+}
+
 const struct clk_ops clk_mux_ops = {
 	.get_parent = clk_mux_get_parent,
 	.set_parent = clk_mux_set_parent,
-	.determine_rate = __clk_mux_determine_rate,
+	.determine_rate = clk_mux_determine_rate,
 };
 EXPORT_SYMBOL_GPL(clk_mux_ops);
+23 -31
drivers/clk/clk-stm32mp1.c
··· 216 216 "pclk5", "pll3_q", "ck_hsi", "ck_csi", "pll4_q", "ck_hse" 217 217 }; 218 218 219 - const char * const usart234578_src[] = { 219 + static const char * const usart234578_src[] = { 220 220 "pclk1", "pll4_q", "ck_hsi", "ck_csi", "ck_hse" 221 221 }; 222 222 223 223 static const char * const usart6_src[] = { 224 224 "pclk2", "pll4_q", "ck_hsi", "ck_csi", "ck_hse" 225 - }; 226 - 227 - static const char * const dfsdm_src[] = { 228 - "pclk2", "ck_mcu" 229 225 }; 230 226 231 227 static const char * const fdcan_src[] = { ··· 312 316 struct clock_config { 313 317 u32 id; 314 318 const char *name; 315 - union { 316 - const char *parent_name; 317 - const char * const *parent_names; 318 - }; 319 + const char *parent_name; 320 + const char * const *parent_names; 319 321 int num_parents; 320 322 unsigned long flags; 321 323 void *cfg; ··· 463 469 } 464 470 } 465 471 466 - const struct clk_ops mp1_gate_clk_ops = { 472 + static const struct clk_ops mp1_gate_clk_ops = { 467 473 .enable = mp1_gate_clk_enable, 468 474 .disable = mp1_gate_clk_disable, 469 475 .is_enabled = clk_gate_is_enabled, ··· 692 698 mp1_gate_clk_disable(hw); 693 699 } 694 700 695 - const struct clk_ops mp1_mgate_clk_ops = { 701 + static const struct clk_ops mp1_mgate_clk_ops = { 696 702 .enable = mp1_mgate_clk_enable, 697 703 .disable = mp1_mgate_clk_disable, 698 704 .is_enabled = clk_gate_is_enabled, ··· 726 732 return 0; 727 733 } 728 734 729 - const struct clk_ops clk_mmux_ops = { 735 + static const struct clk_ops clk_mmux_ops = { 730 736 .get_parent = clk_mmux_get_parent, 731 737 .set_parent = clk_mmux_set_parent, 732 738 .determine_rate = __clk_mux_determine_rate, ··· 1042 1048 u32 offset; 1043 1049 }; 1044 1050 1045 - struct clk_hw *_clk_register_pll(struct device *dev, 1046 - struct clk_hw_onecell_data *clk_data, 1047 - void __iomem *base, spinlock_t *lock, 1048 - const struct clock_config *cfg) 1051 + static struct clk_hw *_clk_register_pll(struct device *dev, 1052 + struct clk_hw_onecell_data 
*clk_data, 1053 + void __iomem *base, spinlock_t *lock, 1054 + const struct clock_config *cfg) 1049 1055 { 1050 1056 struct stm32_pll_cfg *stm_pll_cfg = cfg->cfg; 1051 1057 ··· 1399 1405 G_USBH, 1400 1406 G_ETHSTP, 1401 1407 G_RTCAPB, 1402 - G_TZC, 1408 + G_TZC1, 1409 + G_TZC2, 1403 1410 G_TZPC, 1404 1411 G_IWDG1, 1405 1412 G_BSEC, ··· 1412 1417 G_LAST 1413 1418 }; 1414 1419 1415 - struct stm32_mgate mp1_mgate[G_LAST]; 1420 + static struct stm32_mgate mp1_mgate[G_LAST]; 1416 1421 1417 1422 #define _K_GATE(_id, _gate_offset, _gate_bit_idx, _gate_flags,\ 1418 1423 _mgate, _ops)\ ··· 1435 1440 &mp1_mgate[_id], &mp1_mgate_clk_ops) 1436 1441 1437 1442 /* Peripheral gates */ 1438 - struct stm32_gate_cfg per_gate_cfg[G_LAST] = { 1443 + static struct stm32_gate_cfg per_gate_cfg[G_LAST] = { 1439 1444 /* Multi gates */ 1440 1445 K_GATE(G_MDIO, RCC_APB1ENSETR, 31, 0), 1441 1446 K_MGATE(G_DAC12, RCC_APB1ENSETR, 29, 0), ··· 1501 1506 K_GATE(G_BSEC, RCC_APB5ENSETR, 16, 0), 1502 1507 K_GATE(G_IWDG1, RCC_APB5ENSETR, 15, 0), 1503 1508 K_GATE(G_TZPC, RCC_APB5ENSETR, 13, 0), 1504 - K_GATE(G_TZC, RCC_APB5ENSETR, 12, 0), 1509 + K_GATE(G_TZC2, RCC_APB5ENSETR, 12, 0), 1510 + K_GATE(G_TZC1, RCC_APB5ENSETR, 11, 0), 1505 1511 K_GATE(G_RTCAPB, RCC_APB5ENSETR, 8, 0), 1506 1512 K_MGATE(G_USART1, RCC_APB5ENSETR, 4, 0), 1507 1513 K_MGATE(G_I2C6, RCC_APB5ENSETR, 3, 0), ··· 1596 1600 M_LAST 1597 1601 }; 1598 1602 1599 - struct stm32_mmux ker_mux[M_LAST]; 1603 + static struct stm32_mmux ker_mux[M_LAST]; 1600 1604 1601 1605 #define _K_MUX(_id, _offset, _shift, _width, _mux_flags, _mmux, _ops)\ 1602 1606 [_id] = {\ ··· 1619 1623 _K_MUX(_id, _offset, _shift, _width, _mux_flags,\ 1620 1624 &ker_mux[_id], &clk_mmux_ops) 1621 1625 1622 - const struct stm32_mux_cfg ker_mux_cfg[M_LAST] = { 1626 + static const struct stm32_mux_cfg ker_mux_cfg[M_LAST] = { 1623 1627 /* Kernel multi mux */ 1624 1628 K_MMUX(M_SDMMC12, RCC_SDMMC12CKSELR, 0, 3, 0), 1625 1629 K_MMUX(M_SPI23, RCC_SPI2S23CKSELR, 0, 3, 0), ··· 1856 
1860 PCLK(USART1, "usart1", "pclk5", 0, G_USART1),
1857 1861 PCLK(RTCAPB, "rtcapb", "pclk5", CLK_IGNORE_UNUSED |
1858 1862 CLK_IS_CRITICAL, G_RTCAPB),
1859 - PCLK(TZC, "tzc", "pclk5", CLK_IGNORE_UNUSED, G_TZC),
1863 + PCLK(TZC1, "tzc1", "ck_axi", CLK_IGNORE_UNUSED, G_TZC1),
1864 + PCLK(TZC2, "tzc2", "ck_axi", CLK_IGNORE_UNUSED, G_TZC2),
1860 1865 PCLK(TZPC, "tzpc", "pclk5", CLK_IGNORE_UNUSED, G_TZPC),
1861 1866 PCLK(IWDG1, "iwdg1", "pclk5", 0, G_IWDG1),
1862 1867 PCLK(BSEC, "bsec", "pclk5", CLK_IGNORE_UNUSED, G_BSEC),
··· 1913 1916 KCLK(RNG1_K, "rng1_k", rng_src, 0, G_RNG1, M_RNG1),
1914 1917 KCLK(RNG2_K, "rng2_k", rng_src, 0, G_RNG2, M_RNG2),
1915 1918 KCLK(USBPHY_K, "usbphy_k", usbphy_src, 0, G_USBPHY, M_USBPHY),
1916 - KCLK(STGEN_K, "stgen_k", stgen_src, CLK_IGNORE_UNUSED,
1917 - G_STGEN, M_STGEN),
1919 + KCLK(STGEN_K, "stgen_k", stgen_src, CLK_IS_CRITICAL, G_STGEN, M_STGEN),
1918 1920 KCLK(SPDIF_K, "spdif_k", spdif_src, 0, G_SPDIF, M_SPDIF),
1919 1921 KCLK(SPI1_K, "spi1_k", spi123_src, 0, G_SPI1, M_SPI1),
1920 1922 KCLK(SPI2_K, "spi2_k", spi123_src, 0, G_SPI2, M_SPI23),
··· 1944 1948 KCLK(FDCAN_K, "fdcan_k", fdcan_src, 0, G_FDCAN, M_FDCAN),
1945 1949 KCLK(SAI1_K, "sai1_k", sai_src, 0, G_SAI1, M_SAI1),
1946 1950 KCLK(SAI2_K, "sai2_k", sai2_src, 0, G_SAI2, M_SAI2),
1947 - KCLK(SAI3_K, "sai3_k", sai_src, 0, G_SAI2, M_SAI3),
1948 - KCLK(SAI4_K, "sai4_k", sai_src, 0, G_SAI2, M_SAI4),
1951 + KCLK(SAI3_K, "sai3_k", sai_src, 0, G_SAI3, M_SAI3),
1952 + KCLK(SAI4_K, "sai4_k", sai_src, 0, G_SAI4, M_SAI4),
1949 1953 KCLK(ADC12_K, "adc12_k", adc12_src, 0, G_ADC12, M_ADC12),
1950 1954 KCLK(DSI_K, "dsi_k", dsi_src, 0, G_DSI, M_DSI),
1951 1955 KCLK(ADFSDM_K, "adfsdm_k", sai_src, 0, G_ADFSDM, M_SAI1),
··· 1988 1992 _DIV(RCC_MCO2CFGR, 4, 4, 0, NULL)),
1989 1993
1990 1994 /* Debug clocks */
1991 - FIXED_FACTOR(NO_ID, "ck_axi_div2", "ck_axi", 0, 1, 2),
1992 -
1993 - GATE(DBG, "ck_apb_dbg", "ck_axi_div2", 0, RCC_DBGCFGR, 8, 0),
1994 -
1995 1995 GATE(CK_DBG, "ck_sys_dbg", "ck_axi", 0, RCC_DBGCFGR, 8, 0),
1996 1996
1997 1997 COMPOSITE(CK_TRACE, "ck_trace", ck_trace_src, CLK_OPS_PARENT_ENABLE,
+4 -3
drivers/clk/clk.c
··· 426 426 return now <= rate && now > best; 427 427 } 428 428 429 - static int 430 - clk_mux_determine_rate_flags(struct clk_hw *hw, struct clk_rate_request *req, 431 - unsigned long flags) 429 + int clk_mux_determine_rate_flags(struct clk_hw *hw, 430 + struct clk_rate_request *req, 431 + unsigned long flags) 432 432 { 433 433 struct clk_core *core = hw->core, *parent, *best_parent = NULL; 434 434 int i, num_parents, ret; ··· 488 488 489 489 return 0; 490 490 } 491 + EXPORT_SYMBOL_GPL(clk_mux_determine_rate_flags); 491 492 492 493 struct clk *__clk_lookup(const char *name) 493 494 {
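The clk.c hunk above un-statics and exports clk_mux_determine_rate_flags() so that mux drivers (such as the meson clk-regmap change below) can feed their own mux flags into the common parent-selection logic. As a rough userspace sketch of that selection — illustrative names and a made-up MODEL_ROUND_CLOSEST flag, not the kernel's API — mirroring the "now <= rate && now > best" test visible in the context lines:

```c
/* Model of mux parent selection: with MODEL_ROUND_CLOSEST the nearest
 * parent rate wins; otherwise only rates <= the request are considered
 * and the highest such rate is chosen (round-down behaviour). */
#define MODEL_ROUND_CLOSEST 0x1UL

static int model_is_better(unsigned long now, unsigned long best,
			   unsigned long rate, unsigned long flags)
{
	if (flags & MODEL_ROUND_CLOSEST) {
		unsigned long d_now = now > rate ? now - rate : rate - now;
		unsigned long d_best = best > rate ? best - rate : rate - best;

		return d_now < d_best;
	}
	return now > best; /* the caller already filtered out now > rate */
}

/* Returns the index of the chosen parent, or -1 if none qualifies. */
int model_mux_determine_rate(const unsigned long *parent_rates,
			     int num_parents, unsigned long rate,
			     unsigned long flags)
{
	unsigned long best = 0;
	int i, best_i = -1;

	for (i = 0; i < num_parents; i++) {
		unsigned long now = parent_rates[i];

		if (!(flags & MODEL_ROUND_CLOSEST) && now > rate)
			continue; /* round-down mode: never overshoot */
		if (best_i < 0 || model_is_better(now, best, rate, flags)) {
			best_i = i;
			best = now;
		}
	}
	return best_i;
}
```

The point of exporting the helper is exactly this flags parameter: `__clk_mux_determine_rate` hardcodes default behaviour, while a regmap-backed mux may carry per-instance flags that change which parent is "better".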
+10 -1
drivers/clk/meson/clk-regmap.c
··· 153 153 val << mux->shift); 154 154 } 155 155 156 + static int clk_regmap_mux_determine_rate(struct clk_hw *hw, 157 + struct clk_rate_request *req) 158 + { 159 + struct clk_regmap *clk = to_clk_regmap(hw); 160 + struct clk_regmap_mux_data *mux = clk_get_regmap_mux_data(clk); 161 + 162 + return clk_mux_determine_rate_flags(hw, req, mux->flags); 163 + } 164 + 156 165 const struct clk_ops clk_regmap_mux_ops = { 157 166 .get_parent = clk_regmap_mux_get_parent, 158 167 .set_parent = clk_regmap_mux_set_parent, 159 - .determine_rate = __clk_mux_determine_rate, 168 + .determine_rate = clk_regmap_mux_determine_rate, 160 169 }; 161 170 EXPORT_SYMBOL_GPL(clk_regmap_mux_ops); 162 171
-2
drivers/clk/meson/gxbb-aoclk.h
··· 17 17 #define AO_RTC_ALT_CLK_CNTL0 0x94 18 18 #define AO_RTC_ALT_CLK_CNTL1 0x98 19 19 20 - extern const struct clk_ops meson_aoclk_gate_regmap_ops; 21 - 22 20 struct aoclk_cec_32k { 23 21 struct clk_hw hw; 24 22 struct regmap *regmap;
+3 -2
drivers/clk/meson/meson8b.c
··· 253 253 .mult = 1, 254 254 .div = 3, 255 255 .hw.init = &(struct clk_init_data){ 256 - .name = "fclk_div_div3", 256 + .name = "fclk_div3_div", 257 257 .ops = &clk_fixed_factor_ops, 258 258 .parent_names = (const char *[]){ "fixed_pll" }, 259 259 .num_parents = 1, ··· 632 632 .hw.init = &(struct clk_init_data){ 633 633 .name = "cpu_clk", 634 634 .ops = &clk_regmap_mux_ro_ops, 635 - .parent_names = (const char *[]){ "xtal", "cpu_out_sel" }, 635 + .parent_names = (const char *[]){ "xtal", 636 + "cpu_scale_out_sel" }, 636 637 .num_parents = 2, 637 638 .flags = (CLK_SET_RATE_PARENT | 638 639 CLK_SET_RATE_NO_REPARENT),
+44 -2
drivers/cpufreq/cppc_cpufreq.c
··· 126 126 cpu->perf_caps.lowest_perf, cpu_num, ret); 127 127 } 128 128 129 + /* 130 + * The PCC subspace describes the rate at which platform can accept commands 131 + * on the shared PCC channel (including READs which do not count towards freq 132 + * trasition requests), so ideally we need to use the PCC values as a fallback 133 + * if we don't have a platform specific transition_delay_us 134 + */ 135 + #ifdef CONFIG_ARM64 136 + #include <asm/cputype.h> 137 + 138 + static unsigned int cppc_cpufreq_get_transition_delay_us(int cpu) 139 + { 140 + unsigned long implementor = read_cpuid_implementor(); 141 + unsigned long part_num = read_cpuid_part_number(); 142 + unsigned int delay_us = 0; 143 + 144 + switch (implementor) { 145 + case ARM_CPU_IMP_QCOM: 146 + switch (part_num) { 147 + case QCOM_CPU_PART_FALKOR_V1: 148 + case QCOM_CPU_PART_FALKOR: 149 + delay_us = 10000; 150 + break; 151 + default: 152 + delay_us = cppc_get_transition_latency(cpu) / NSEC_PER_USEC; 153 + break; 154 + } 155 + break; 156 + default: 157 + delay_us = cppc_get_transition_latency(cpu) / NSEC_PER_USEC; 158 + break; 159 + } 160 + 161 + return delay_us; 162 + } 163 + 164 + #else 165 + 166 + static unsigned int cppc_cpufreq_get_transition_delay_us(int cpu) 167 + { 168 + return cppc_get_transition_latency(cpu) / NSEC_PER_USEC; 169 + } 170 + #endif 171 + 129 172 static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy) 130 173 { 131 174 struct cppc_cpudata *cpu; ··· 205 162 cpu->perf_caps.highest_perf; 206 163 policy->cpuinfo.max_freq = cppc_dmi_max_khz; 207 164 208 - policy->transition_delay_us = cppc_get_transition_latency(cpu_num) / 209 - NSEC_PER_USEC; 165 + policy->transition_delay_us = cppc_cpufreq_get_transition_delay_us(cpu_num); 210 166 policy->shared_type = cpu->shared_type; 211 167 212 168 if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) {
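The cppc_cpufreq hunk keeps the PCC-derived latency as the default but special-cases Qualcomm Falkor parts with a fixed 10 ms transition delay. The decision itself is a small pure function; a hedged sketch with stand-in ID values (the real code reads MIDR fields via read_cpuid_implementor()/read_cpuid_part_number()):

```c
/* Stand-in constants, not the real ARM/Qualcomm MIDR encodings. */
#define MODEL_NSEC_PER_USEC	1000UL
#define MODEL_IMP_QCOM		0x51UL
#define MODEL_PART_FALKOR	0xC00UL

/* Pick the cpufreq transition delay: the quirky part gets a fixed 10 ms,
 * everyone else falls back to the firmware-reported latency (ns -> us). */
unsigned int model_transition_delay_us(unsigned long implementor,
				       unsigned long part_num,
				       unsigned long latency_ns)
{
	if (implementor == MODEL_IMP_QCOM && part_num == MODEL_PART_FALKOR)
		return 10000;
	return (unsigned int)(latency_ns / MODEL_NSEC_PER_USEC);
}
```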
+3 -1
drivers/gpu/drm/bridge/dumb-vga-dac.c
··· 56 56 } 57 57 58 58 drm_mode_connector_update_edid_property(connector, edid); 59 - return drm_add_edid_modes(connector, edid); 59 + ret = drm_add_edid_modes(connector, edid); 60 + kfree(edid); 61 + return ret; 60 62 61 63 fallback: 62 64 /*
+1
drivers/gpu/drm/i915/intel_csr.c
··· 35 35 */ 36 36 37 37 #define I915_CSR_GLK "i915/glk_dmc_ver1_04.bin" 38 + MODULE_FIRMWARE(I915_CSR_GLK); 38 39 #define GLK_CSR_VERSION_REQUIRED CSR_VERSION(1, 4) 39 40 40 41 #define I915_CSR_CNL "i915/cnl_dmc_ver1_07.bin"
+45 -1
drivers/gpu/drm/vc4/vc4_crtc.c
··· 760 760 struct vc4_async_flip_state { 761 761 struct drm_crtc *crtc; 762 762 struct drm_framebuffer *fb; 763 + struct drm_framebuffer *old_fb; 763 764 struct drm_pending_vblank_event *event; 764 765 765 766 struct vc4_seqno_cb cb; ··· 790 789 791 790 drm_crtc_vblank_put(crtc); 792 791 drm_framebuffer_put(flip_state->fb); 792 + 793 + /* Decrement the BO usecnt in order to keep the inc/dec calls balanced 794 + * when the planes are updated through the async update path. 795 + * FIXME: we should move to generic async-page-flip when it's 796 + * available, so that we can get rid of this hand-made cleanup_fb() 797 + * logic. 798 + */ 799 + if (flip_state->old_fb) { 800 + struct drm_gem_cma_object *cma_bo; 801 + struct vc4_bo *bo; 802 + 803 + cma_bo = drm_fb_cma_get_gem_obj(flip_state->old_fb, 0); 804 + bo = to_vc4_bo(&cma_bo->base); 805 + vc4_bo_dec_usecnt(bo); 806 + drm_framebuffer_put(flip_state->old_fb); 807 + } 808 + 793 809 kfree(flip_state); 794 810 795 811 up(&vc4->async_modeset); ··· 831 813 struct drm_gem_cma_object *cma_bo = drm_fb_cma_get_gem_obj(fb, 0); 832 814 struct vc4_bo *bo = to_vc4_bo(&cma_bo->base); 833 815 816 + /* Increment the BO usecnt here, so that we never end up with an 817 + * unbalanced number of vc4_bo_{dec,inc}_usecnt() calls when the 818 + * plane is later updated through the non-async path. 819 + * FIXME: we should move to generic async-page-flip when it's 820 + * available, so that we can get rid of this hand-made prepare_fb() 821 + * logic. 
822 + */ 823 + ret = vc4_bo_inc_usecnt(bo); 824 + if (ret) 825 + return ret; 826 + 834 827 flip_state = kzalloc(sizeof(*flip_state), GFP_KERNEL); 835 - if (!flip_state) 828 + if (!flip_state) { 829 + vc4_bo_dec_usecnt(bo); 836 830 return -ENOMEM; 831 + } 837 832 838 833 drm_framebuffer_get(fb); 839 834 flip_state->fb = fb; ··· 857 826 ret = down_interruptible(&vc4->async_modeset); 858 827 if (ret) { 859 828 drm_framebuffer_put(fb); 829 + vc4_bo_dec_usecnt(bo); 860 830 kfree(flip_state); 861 831 return ret; 862 832 } 833 + 834 + /* Save the current FB before it's replaced by the new one in 835 + * drm_atomic_set_fb_for_plane(). We'll need the old FB in 836 + * vc4_async_page_flip_complete() to decrement the BO usecnt and keep 837 + * it consistent. 838 + * FIXME: we should move to generic async-page-flip when it's 839 + * available, so that we can get rid of this hand-made cleanup_fb() 840 + * logic. 841 + */ 842 + flip_state->old_fb = plane->state->fb; 843 + if (flip_state->old_fb) 844 + drm_framebuffer_get(flip_state->old_fb); 863 845 864 846 WARN_ON(drm_crtc_vblank_get(crtc) != 0); 865 847
+10 -21
drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
··· 441 441 struct drm_crtc *crtc = set->crtc; 442 442 struct drm_framebuffer *fb; 443 443 struct drm_crtc *tmp; 444 - struct drm_modeset_acquire_ctx *ctx; 445 444 struct drm_device *dev = set->crtc->dev; 445 + struct drm_modeset_acquire_ctx ctx; 446 446 int ret; 447 447 448 - ctx = dev->mode_config.acquire_ctx; 448 + drm_modeset_acquire_init(&ctx, 0); 449 449 450 450 restart: 451 451 /* ··· 458 458 459 459 fb = set->fb; 460 460 461 - ret = crtc->funcs->set_config(set, ctx); 461 + ret = crtc->funcs->set_config(set, &ctx); 462 462 if (ret == 0) { 463 463 crtc->primary->crtc = crtc; 464 464 crtc->primary->fb = fb; ··· 473 473 } 474 474 475 475 if (ret == -EDEADLK) { 476 - dev->mode_config.acquire_ctx = NULL; 477 - 478 - retry_locking: 479 - drm_modeset_backoff(ctx); 480 - 481 - ret = drm_modeset_lock_all_ctx(dev, ctx); 482 - if (ret) 483 - goto retry_locking; 484 - 485 - dev->mode_config.acquire_ctx = ctx; 486 - 476 + drm_modeset_backoff(&ctx); 487 477 goto restart; 488 478 } 479 + 480 + drm_modeset_drop_locks(&ctx); 481 + drm_modeset_acquire_fini(&ctx); 489 482 490 483 return ret; 491 484 } ··· 617 624 } 618 625 619 626 mutex_lock(&par->bo_mutex); 620 - drm_modeset_lock_all(vmw_priv->dev); 621 627 ret = vmw_fb_kms_framebuffer(info); 622 628 if (ret) 623 629 goto out_unlock; ··· 649 657 drm_mode_destroy(vmw_priv->dev, old_mode); 650 658 par->set_mode = mode; 651 659 652 - drm_modeset_unlock_all(vmw_priv->dev); 653 660 mutex_unlock(&par->bo_mutex); 654 661 655 662 return ret; ··· 704 713 par->max_width = fb_width; 705 714 par->max_height = fb_height; 706 715 707 - drm_modeset_lock_all(vmw_priv->dev); 708 716 ret = vmw_kms_fbdev_init_data(vmw_priv, 0, par->max_width, 709 717 par->max_height, &par->con, 710 718 &par->crtc, &init_mode); 711 - if (ret) { 712 - drm_modeset_unlock_all(vmw_priv->dev); 719 + if (ret) 713 720 goto err_kms; 714 - } 715 721 716 722 info->var.xres = init_mode->hdisplay; 717 723 info->var.yres = init_mode->vdisplay; 718 - 
drm_modeset_unlock_all(vmw_priv->dev); 719 724 720 725 /* 721 726 * Create buffers and alloc memory ··· 819 832 cancel_delayed_work_sync(&par->local_work); 820 833 unregister_framebuffer(info); 821 834 835 + mutex_lock(&par->bo_mutex); 822 836 (void) vmw_fb_kms_detach(par, true, true); 837 + mutex_unlock(&par->bo_mutex); 823 838 824 839 vfree(par->vmalloc); 825 840 framebuffer_release(info);
+11 -3
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 2595 2595 vmw_kms_helper_buffer_finish(res->dev_priv, NULL, ctx->buf, 2596 2596 out_fence, NULL); 2597 2597 2598 + vmw_dmabuf_unreference(&ctx->buf); 2598 2599 vmw_resource_unreserve(res, false, NULL, 0); 2599 2600 mutex_unlock(&res->dev_priv->cmdbuf_mutex); 2600 2601 } ··· 2681 2680 struct vmw_display_unit *du; 2682 2681 struct drm_display_mode *mode; 2683 2682 int i = 0; 2683 + int ret = 0; 2684 2684 2685 + mutex_lock(&dev_priv->dev->mode_config.mutex); 2685 2686 list_for_each_entry(con, &dev_priv->dev->mode_config.connector_list, 2686 2687 head) { 2687 2688 if (i == unit) ··· 2694 2691 2695 2692 if (i != unit) { 2696 2693 DRM_ERROR("Could not find initial display unit.\n"); 2697 - return -EINVAL; 2694 + ret = -EINVAL; 2695 + goto out_unlock; 2698 2696 } 2699 2697 2700 2698 if (list_empty(&con->modes)) ··· 2703 2699 2704 2700 if (list_empty(&con->modes)) { 2705 2701 DRM_ERROR("Could not find initial display mode.\n"); 2706 - return -EINVAL; 2702 + ret = -EINVAL; 2703 + goto out_unlock; 2707 2704 } 2708 2705 2709 2706 du = vmw_connector_to_du(con); ··· 2725 2720 head); 2726 2721 } 2727 2722 2728 - return 0; 2723 + out_unlock: 2724 + mutex_unlock(&dev_priv->dev->mode_config.mutex); 2725 + 2726 + return ret; 2729 2727 } 2730 2728 2731 2729 /**
+4 -1
drivers/infiniband/Kconfig
··· 61 61 pages on demand instead. 62 62 63 63 config INFINIBAND_ADDR_TRANS 64 - bool 64 + bool "RDMA/CM" 65 65 depends on INFINIBAND 66 66 default y 67 + ---help--- 68 + Support for RDMA communication manager (CM). 69 + This allows for a generic connection abstraction over RDMA. 67 70 68 71 config INFINIBAND_ADDR_TRANS_CONFIGFS 69 72 bool
+35 -20
drivers/infiniband/core/cache.c
··· 291 291 * so lookup free slot only if requested. 292 292 */ 293 293 if (pempty && empty < 0) { 294 - if (data->props & GID_TABLE_ENTRY_INVALID) { 295 - /* Found an invalid (free) entry; allocate it */ 296 - if (data->props & GID_TABLE_ENTRY_DEFAULT) { 297 - if (default_gid) 298 - empty = curr_index; 299 - } else { 300 - empty = curr_index; 301 - } 294 + if (data->props & GID_TABLE_ENTRY_INVALID && 295 + (default_gid == 296 + !!(data->props & GID_TABLE_ENTRY_DEFAULT))) { 297 + /* 298 + * Found an invalid (free) entry; allocate it. 299 + * If default GID is requested, then our 300 + * found slot must be one of the DEFAULT 301 + * reserved slots or we fail. 302 + * This ensures that only DEFAULT reserved 303 + * slots are used for default property GIDs. 304 + */ 305 + empty = curr_index; 302 306 } 303 307 } 304 308 ··· 424 420 return ret; 425 421 } 426 422 427 - int ib_cache_gid_del(struct ib_device *ib_dev, u8 port, 428 - union ib_gid *gid, struct ib_gid_attr *attr) 423 + static int 424 + _ib_cache_gid_del(struct ib_device *ib_dev, u8 port, 425 + union ib_gid *gid, struct ib_gid_attr *attr, 426 + unsigned long mask, bool default_gid) 429 427 { 430 428 struct ib_gid_table *table; 431 429 int ret = 0; ··· 437 431 438 432 mutex_lock(&table->lock); 439 433 440 - ix = find_gid(table, gid, attr, false, 441 - GID_ATTR_FIND_MASK_GID | 442 - GID_ATTR_FIND_MASK_GID_TYPE | 443 - GID_ATTR_FIND_MASK_NETDEV, 444 - NULL); 434 + ix = find_gid(table, gid, attr, default_gid, mask, NULL); 445 435 if (ix < 0) { 446 436 ret = -EINVAL; 447 437 goto out_unlock; ··· 452 450 pr_debug("%s: can't delete gid %pI6 error=%d\n", 453 451 __func__, gid->raw, ret); 454 452 return ret; 453 + } 454 + 455 + int ib_cache_gid_del(struct ib_device *ib_dev, u8 port, 456 + union ib_gid *gid, struct ib_gid_attr *attr) 457 + { 458 + unsigned long mask = GID_ATTR_FIND_MASK_GID | 459 + GID_ATTR_FIND_MASK_GID_TYPE | 460 + GID_ATTR_FIND_MASK_DEFAULT | 461 + GID_ATTR_FIND_MASK_NETDEV; 462 + 463 + return 
_ib_cache_gid_del(ib_dev, port, gid, attr, mask, false); 455 464 } 456 465 457 466 int ib_cache_gid_del_all_netdev_gids(struct ib_device *ib_dev, u8 port, ··· 741 728 unsigned long gid_type_mask, 742 729 enum ib_cache_gid_default_mode mode) 743 730 { 744 - union ib_gid gid; 731 + union ib_gid gid = { }; 745 732 struct ib_gid_attr gid_attr; 746 733 struct ib_gid_table *table; 747 734 unsigned int gid_type; ··· 749 736 750 737 table = ib_dev->cache.ports[port - rdma_start_port(ib_dev)].gid; 751 738 752 - make_default_gid(ndev, &gid); 739 + mask = GID_ATTR_FIND_MASK_GID_TYPE | 740 + GID_ATTR_FIND_MASK_DEFAULT | 741 + GID_ATTR_FIND_MASK_NETDEV; 753 742 memset(&gid_attr, 0, sizeof(gid_attr)); 754 743 gid_attr.ndev = ndev; 755 744 ··· 762 747 gid_attr.gid_type = gid_type; 763 748 764 749 if (mode == IB_CACHE_GID_DEFAULT_MODE_SET) { 765 - mask = GID_ATTR_FIND_MASK_GID_TYPE | 766 - GID_ATTR_FIND_MASK_DEFAULT; 750 + make_default_gid(ndev, &gid); 767 751 __ib_cache_gid_add(ib_dev, port, &gid, 768 752 &gid_attr, mask, true); 769 753 } else if (mode == IB_CACHE_GID_DEFAULT_MODE_DELETE) { 770 - ib_cache_gid_del(ib_dev, port, &gid, &gid_attr); 754 + _ib_cache_gid_del(ib_dev, port, &gid, 755 + &gid_attr, mask, true); 771 756 } 772 757 } 773 758 }
+43 -17
drivers/infiniband/core/cma.c
··· 382 382 #define CMA_VERSION 0x00 383 383 384 384 struct cma_req_info { 385 + struct sockaddr_storage listen_addr_storage; 386 + struct sockaddr_storage src_addr_storage; 385 387 struct ib_device *device; 386 388 int port; 387 389 union ib_gid local_gid; ··· 868 866 { 869 867 struct ib_qp_attr qp_attr; 870 868 int qp_attr_mask, ret; 871 - union ib_gid sgid; 872 869 873 870 mutex_lock(&id_priv->qp_mutex); 874 871 if (!id_priv->id.qp) { ··· 887 886 888 887 qp_attr.qp_state = IB_QPS_RTR; 889 888 ret = rdma_init_qp_attr(&id_priv->id, &qp_attr, &qp_attr_mask); 890 - if (ret) 891 - goto out; 892 - 893 - ret = ib_query_gid(id_priv->id.device, id_priv->id.port_num, 894 - rdma_ah_read_grh(&qp_attr.ah_attr)->sgid_index, 895 - &sgid, NULL); 896 889 if (ret) 897 890 goto out; 898 891 ··· 1335 1340 } 1336 1341 1337 1342 static struct net_device *cma_get_net_dev(struct ib_cm_event *ib_event, 1338 - const struct cma_req_info *req) 1343 + struct cma_req_info *req) 1339 1344 { 1340 - struct sockaddr_storage listen_addr_storage, src_addr_storage; 1341 - struct sockaddr *listen_addr = (struct sockaddr *)&listen_addr_storage, 1342 - *src_addr = (struct sockaddr *)&src_addr_storage; 1345 + struct sockaddr *listen_addr = 1346 + (struct sockaddr *)&req->listen_addr_storage; 1347 + struct sockaddr *src_addr = (struct sockaddr *)&req->src_addr_storage; 1343 1348 struct net_device *net_dev; 1344 1349 const union ib_gid *gid = req->has_gid ? &req->local_gid : NULL; 1345 1350 int err; ··· 1353 1358 gid, listen_addr); 1354 1359 if (!net_dev) 1355 1360 return ERR_PTR(-ENODEV); 1356 - 1357 - if (!validate_net_dev(net_dev, listen_addr, src_addr)) { 1358 - dev_put(net_dev); 1359 - return ERR_PTR(-EHOSTUNREACH); 1360 - } 1361 1361 1362 1362 return net_dev; 1363 1363 } ··· 1480 1490 } 1481 1491 } 1482 1492 1493 + /* 1494 + * Net namespace might be getting deleted while route lookup, 1495 + * cm_id lookup is in progress. 
Therefore, perform netdevice 1496 + * validation, cm_id lookup under rcu lock. 1497 + * RCU lock along with netdevice state check, synchronizes with 1498 + * netdevice migrating to different net namespace and also avoids 1499 + * case where net namespace doesn't get deleted while lookup is in 1500 + * progress. 1501 + * If the device state is not IFF_UP, its properties such as ifindex 1502 + * and nd_net cannot be trusted to remain valid without rcu lock. 1503 + * net/core/dev.c change_net_namespace() ensures to synchronize with 1504 + * ongoing operations on net device after device is closed using 1505 + * synchronize_net(). 1506 + */ 1507 + rcu_read_lock(); 1508 + if (*net_dev) { 1509 + /* 1510 + * If netdevice is down, it is likely that it is administratively 1511 + * down or it might be migrating to different namespace. 1512 + * In that case avoid further processing, as the net namespace 1513 + * or ifindex may change. 1514 + */ 1515 + if (((*net_dev)->flags & IFF_UP) == 0) { 1516 + id_priv = ERR_PTR(-EHOSTUNREACH); 1517 + goto err; 1518 + } 1519 + 1520 + if (!validate_net_dev(*net_dev, 1521 + (struct sockaddr *)&req.listen_addr_storage, 1522 + (struct sockaddr *)&req.src_addr_storage)) { 1523 + id_priv = ERR_PTR(-EHOSTUNREACH); 1524 + goto err; 1525 + } 1526 + } 1527 + 1483 1528 bind_list = cma_ps_find(*net_dev ? dev_net(*net_dev) : &init_net, 1484 1529 rdma_ps_from_service_id(req.service_id), 1485 1530 cma_port_from_service_id(req.service_id)); 1486 1531 id_priv = cma_find_listener(bind_list, cm_id, ib_event, &req, *net_dev); 1532 + err: 1533 + rcu_read_unlock(); 1487 1534 if (IS_ERR(id_priv) && *net_dev) { 1488 1535 dev_put(*net_dev); 1489 1536 *net_dev = NULL; 1490 1537 } 1491 - 1492 1538 return id_priv; 1493 1539 } 1494 1540
+4 -1
drivers/infiniband/core/iwpm_util.c
··· 114 114 struct sockaddr_storage *mapped_sockaddr, 115 115 u8 nl_client) 116 116 { 117 - struct hlist_head *hash_bucket_head; 117 + struct hlist_head *hash_bucket_head = NULL; 118 118 struct iwpm_mapping_info *map_info; 119 119 unsigned long flags; 120 120 int ret = -EINVAL; ··· 142 142 } 143 143 } 144 144 spin_unlock_irqrestore(&iwpm_mapinfo_lock, flags); 145 + 146 + if (!hash_bucket_head) 147 + kfree(map_info); 145 148 return ret; 146 149 } 147 150
+2 -2
drivers/infiniband/core/mad.c
··· 59 59 MODULE_PARM_DESC(recv_queue_size, "Size of receive queue in number of work requests"); 60 60 61 61 static struct list_head ib_mad_port_list; 62 - static u32 ib_mad_client_id = 0; 62 + static atomic_t ib_mad_client_id = ATOMIC_INIT(0); 63 63 64 64 /* Port list lock */ 65 65 static DEFINE_SPINLOCK(ib_mad_port_list_lock); ··· 377 377 } 378 378 379 379 spin_lock_irqsave(&port_priv->reg_lock, flags); 380 - mad_agent_priv->agent.hi_tid = ++ib_mad_client_id; 380 + mad_agent_priv->agent.hi_tid = atomic_inc_return(&ib_mad_client_id); 381 381 382 382 /* 383 383 * Make sure MAD registration (if supplied)
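The mad.c hunk replaces a plain `++ib_mad_client_id` — racy when two agents register concurrently, since the increment is not atomic — with atomic_inc_return(). The same guarantee in C11 terms looks like this (model code, not the kernel's atomic_t API):

```c
#include <stdatomic.h>

/* One shared counter; atomic_fetch_add returns the previous value, so
 * adding 1 reproduces atomic_inc_return() semantics: every caller gets a
 * unique, monotonically increasing id without holding any lock. */
static _Atomic unsigned int model_client_id;

unsigned int model_next_client_id(void)
{
	return atomic_fetch_add(&model_client_id, 1) + 1;
}
```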
+15 -13
drivers/infiniband/core/roce_gid_mgmt.c
··· 255 255 struct net_device *rdma_ndev) 256 256 { 257 257 struct net_device *real_dev = rdma_vlan_dev_real_dev(event_ndev); 258 + unsigned long gid_type_mask; 258 259 259 260 if (!rdma_ndev) 260 261 return; ··· 265 264 266 265 rcu_read_lock(); 267 266 268 - if (rdma_is_upper_dev_rcu(rdma_ndev, event_ndev) && 269 - is_eth_active_slave_of_bonding_rcu(rdma_ndev, real_dev) == 270 - BONDING_SLAVE_STATE_INACTIVE) { 271 - unsigned long gid_type_mask; 272 - 267 + if (((rdma_ndev != event_ndev && 268 + !rdma_is_upper_dev_rcu(rdma_ndev, event_ndev)) || 269 + is_eth_active_slave_of_bonding_rcu(rdma_ndev, real_dev) 270 + == 271 + BONDING_SLAVE_STATE_INACTIVE)) { 273 272 rcu_read_unlock(); 274 - 275 - gid_type_mask = roce_gid_type_mask_support(ib_dev, port); 276 - 277 - ib_cache_gid_set_default_gid(ib_dev, port, rdma_ndev, 278 - gid_type_mask, 279 - IB_CACHE_GID_DEFAULT_MODE_DELETE); 280 - } else { 281 - rcu_read_unlock(); 273 + return; 282 274 } 275 + 276 + rcu_read_unlock(); 277 + 278 + gid_type_mask = roce_gid_type_mask_support(ib_dev, port); 279 + 280 + ib_cache_gid_set_default_gid(ib_dev, port, rdma_ndev, 281 + gid_type_mask, 282 + IB_CACHE_GID_DEFAULT_MODE_DELETE); 283 283 } 284 284 285 285 static void enum_netdev_ipv4_ips(struct ib_device *ib_dev,
+28 -16
drivers/infiniband/core/ucma.c
··· 159 159 complete(&ctx->comp); 160 160 } 161 161 162 + /* 163 + * Same as ucm_get_ctx but requires that ->cm_id->device is valid, eg that the 164 + * CM_ID is bound. 165 + */ 166 + static struct ucma_context *ucma_get_ctx_dev(struct ucma_file *file, int id) 167 + { 168 + struct ucma_context *ctx = ucma_get_ctx(file, id); 169 + 170 + if (IS_ERR(ctx)) 171 + return ctx; 172 + if (!ctx->cm_id->device) { 173 + ucma_put_ctx(ctx); 174 + return ERR_PTR(-EINVAL); 175 + } 176 + return ctx; 177 + } 178 + 162 179 static void ucma_close_event_id(struct work_struct *work) 163 180 { 164 181 struct ucma_event *uevent_close = container_of(work, struct ucma_event, close_work); ··· 700 683 if (copy_from_user(&cmd, inbuf, sizeof(cmd))) 701 684 return -EFAULT; 702 685 703 - if (!rdma_addr_size_in6(&cmd.src_addr) || 686 + if ((cmd.src_addr.sin6_family && !rdma_addr_size_in6(&cmd.src_addr)) || 704 687 !rdma_addr_size_in6(&cmd.dst_addr)) 705 688 return -EINVAL; 706 689 ··· 751 734 if (copy_from_user(&cmd, inbuf, sizeof(cmd))) 752 735 return -EFAULT; 753 736 754 - ctx = ucma_get_ctx(file, cmd.id); 737 + ctx = ucma_get_ctx_dev(file, cmd.id); 755 738 if (IS_ERR(ctx)) 756 739 return PTR_ERR(ctx); 757 740 ··· 1067 1050 if (!cmd.conn_param.valid) 1068 1051 return -EINVAL; 1069 1052 1070 - ctx = ucma_get_ctx(file, cmd.id); 1053 + ctx = ucma_get_ctx_dev(file, cmd.id); 1071 1054 if (IS_ERR(ctx)) 1072 1055 return PTR_ERR(ctx); 1073 1056 ··· 1109 1092 if (copy_from_user(&cmd, inbuf, sizeof(cmd))) 1110 1093 return -EFAULT; 1111 1094 1112 - ctx = ucma_get_ctx(file, cmd.id); 1095 + ctx = ucma_get_ctx_dev(file, cmd.id); 1113 1096 if (IS_ERR(ctx)) 1114 1097 return PTR_ERR(ctx); 1115 1098 ··· 1137 1120 if (copy_from_user(&cmd, inbuf, sizeof(cmd))) 1138 1121 return -EFAULT; 1139 1122 1140 - ctx = ucma_get_ctx(file, cmd.id); 1123 + ctx = ucma_get_ctx_dev(file, cmd.id); 1141 1124 if (IS_ERR(ctx)) 1142 1125 return PTR_ERR(ctx); 1143 1126 ··· 1156 1139 if (copy_from_user(&cmd, inbuf, sizeof(cmd))) 1157 1140 
return -EFAULT; 1158 1141 1159 - ctx = ucma_get_ctx(file, cmd.id); 1142 + ctx = ucma_get_ctx_dev(file, cmd.id); 1160 1143 if (IS_ERR(ctx)) 1161 1144 return PTR_ERR(ctx); 1162 1145 ··· 1184 1167 if (cmd.qp_state > IB_QPS_ERR) 1185 1168 return -EINVAL; 1186 1169 1187 - ctx = ucma_get_ctx(file, cmd.id); 1170 + ctx = ucma_get_ctx_dev(file, cmd.id); 1188 1171 if (IS_ERR(ctx)) 1189 1172 return PTR_ERR(ctx); 1190 - 1191 - if (!ctx->cm_id->device) { 1192 - ret = -EINVAL; 1193 - goto out; 1194 - } 1195 1173 1196 1174 resp.qp_attr_mask = 0; 1197 1175 memset(&qp_attr, 0, sizeof qp_attr); ··· 1328 1316 if (copy_from_user(&cmd, inbuf, sizeof(cmd))) 1329 1317 return -EFAULT; 1330 1318 1319 + if (unlikely(cmd.optlen > KMALLOC_MAX_SIZE)) 1320 + return -EINVAL; 1321 + 1331 1322 ctx = ucma_get_ctx(file, cmd.id); 1332 1323 if (IS_ERR(ctx)) 1333 1324 return PTR_ERR(ctx); 1334 - 1335 - if (unlikely(cmd.optlen > KMALLOC_MAX_SIZE)) 1336 - return -EINVAL; 1337 1325 1338 1326 optval = memdup_user(u64_to_user_ptr(cmd.optval), 1339 1327 cmd.optlen); ··· 1396 1384 else 1397 1385 return -EINVAL; 1398 1386 1399 - ctx = ucma_get_ctx(file, cmd->id); 1387 + ctx = ucma_get_ctx_dev(file, cmd->id); 1400 1388 if (IS_ERR(ctx)) 1401 1389 return PTR_ERR(ctx); 1402 1390
+6
drivers/infiniband/core/uverbs_cmd.c
··· 691 691 692 692 mr->device = pd->device; 693 693 mr->pd = pd; 694 + mr->dm = NULL; 694 695 mr->uobject = uobj; 695 696 atomic_inc(&pd->usecnt); 696 697 mr->res.type = RDMA_RESTRACK_MR; ··· 765 764 return PTR_ERR(uobj); 766 765 767 766 mr = uobj->object; 767 + 768 + if (mr->dm) { 769 + ret = -EINVAL; 770 + goto put_uobjs; 771 + } 768 772 769 773 if (cmd.flags & IB_MR_REREG_ACCESS) { 770 774 ret = ib_check_mr_access(cmd.access_flags);
+9
drivers/infiniband/core/uverbs_ioctl.c
··· 234 234 return -EINVAL; 235 235 } 236 236 237 + for (; i < method_spec->num_buckets; i++) { 238 + struct uverbs_attr_spec_hash *attr_spec_bucket = 239 + method_spec->attr_buckets[i]; 240 + 241 + if (!bitmap_empty(attr_spec_bucket->mandatory_attrs_bitmask, 242 + attr_spec_bucket->num_attrs)) 243 + return -EINVAL; 244 + } 245 + 237 246 return 0; 238 247 } 239 248
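The uverbs_ioctl.c addition closes a validation gap: spec buckets beyond those the user supplied were never checked, so a method could "satisfy" mandatory attributes simply by omitting the whole bucket. Modeled with plain bitmasks (illustrative layout, not the kernel's uverbs_attr_spec_hash):

```c
#include <stdint.h>

/* Each bucket's mandatory attributes as a bitmask.  After walking the
 * user-supplied buckets [0, first_unsupplied), reject the request if any
 * remaining bucket still demands attributes the user never sent. */
int model_check_remaining_buckets(const uint64_t *mandatory_mask,
				  int first_unsupplied, int num_buckets)
{
	int i;

	for (i = first_unsupplied; i < num_buckets; i++)
		if (mandatory_mask[i])
			return -22; /* -EINVAL */
	return 0;
}
```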
+6 -6
drivers/infiniband/core/uverbs_std_types_flow_action.c
··· 363 363 364 364 static const struct uverbs_attr_spec uverbs_flow_action_esp_keymat[] = { 365 365 [IB_UVERBS_FLOW_ACTION_ESP_KEYMAT_AES_GCM] = { 366 - .ptr = { 366 + { .ptr = { 367 367 .type = UVERBS_ATTR_TYPE_PTR_IN, 368 368 UVERBS_ATTR_TYPE(struct ib_uverbs_flow_action_esp_keymat_aes_gcm), 369 369 .flags = UVERBS_ATTR_SPEC_F_MIN_SZ_OR_ZERO, 370 - }, 370 + } }, 371 371 }, 372 372 }; 373 373 374 374 static const struct uverbs_attr_spec uverbs_flow_action_esp_replay[] = { 375 375 [IB_UVERBS_FLOW_ACTION_ESP_REPLAY_NONE] = { 376 - .ptr = { 376 + { .ptr = { 377 377 .type = UVERBS_ATTR_TYPE_PTR_IN, 378 378 /* No need to specify any data */ 379 379 .len = 0, 380 - } 380 + } } 381 381 }, 382 382 [IB_UVERBS_FLOW_ACTION_ESP_REPLAY_BMP] = { 383 - .ptr = { 383 + { .ptr = { 384 384 .type = UVERBS_ATTR_TYPE_PTR_IN, 385 385 UVERBS_ATTR_STRUCT(struct ib_uverbs_flow_action_esp_replay_bmp, size), 386 386 .flags = UVERBS_ATTR_SPEC_F_MIN_SZ_OR_ZERO, 387 - } 387 + } } 388 388 }, 389 389 }; 390 390
+1
drivers/infiniband/core/verbs.c
··· 1656 1656 if (!IS_ERR(mr)) { 1657 1657 mr->device = pd->device; 1658 1658 mr->pd = pd; 1659 + mr->dm = NULL; 1659 1660 mr->uobject = NULL; 1660 1661 atomic_inc(&pd->usecnt); 1661 1662 mr->need_inval = false;
+10 -1
drivers/infiniband/hw/cxgb4/cq.c
··· 315 315 * Deal with out-of-order and/or completions that complete 316 316 * prior unsignalled WRs. 317 317 */ 318 - void c4iw_flush_hw_cq(struct c4iw_cq *chp) 318 + void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp) 319 319 { 320 320 struct t4_cqe *hw_cqe, *swcqe, read_cqe; 321 321 struct c4iw_qp *qhp; ··· 338 338 */ 339 339 if (qhp == NULL) 340 340 goto next_cqe; 341 + 342 + if (flush_qhp != qhp) { 343 + spin_lock(&qhp->lock); 344 + 345 + if (qhp->wq.flushed == 1) 346 + goto next_cqe; 347 + } 341 348 342 349 if (CQE_OPCODE(hw_cqe) == FW_RI_TERMINATE) 343 350 goto next_cqe; ··· 397 390 next_cqe: 398 391 t4_hwcq_consume(&chp->cq); 399 392 ret = t4_next_hw_cqe(&chp->cq, &hw_cqe); 393 + if (qhp && flush_qhp != qhp) 394 + spin_unlock(&qhp->lock); 400 395 } 401 396 } 402 397
+8 -1
drivers/infiniband/hw/cxgb4/device.c
··· 875 875 876 876 rdev->status_page->db_off = 0; 877 877 878 + init_completion(&rdev->rqt_compl); 879 + init_completion(&rdev->pbl_compl); 880 + kref_init(&rdev->rqt_kref); 881 + kref_init(&rdev->pbl_kref); 882 + 878 883 return 0; 879 884 err_free_status_page_and_wr_log: 880 885 if (c4iw_wr_log && rdev->wr_log) ··· 898 893 899 894 static void c4iw_rdev_close(struct c4iw_rdev *rdev) 900 895 { 901 - destroy_workqueue(rdev->free_workq); 902 896 kfree(rdev->wr_log); 903 897 c4iw_release_dev_ucontext(rdev, &rdev->uctx); 904 898 free_page((unsigned long)rdev->status_page); 905 899 c4iw_pblpool_destroy(rdev); 906 900 c4iw_rqtpool_destroy(rdev); 901 + wait_for_completion(&rdev->pbl_compl); 902 + wait_for_completion(&rdev->rqt_compl); 907 903 c4iw_ocqp_pool_destroy(rdev); 904 + destroy_workqueue(rdev->free_workq); 908 905 c4iw_destroy_resource(&rdev->resource); 909 906 } 910 907
+5 -1
drivers/infiniband/hw/cxgb4/iw_cxgb4.h
··· 185 185 struct wr_log_entry *wr_log; 186 186 int wr_log_size; 187 187 struct workqueue_struct *free_workq; 188 + struct completion rqt_compl; 189 + struct completion pbl_compl; 190 + struct kref rqt_kref; 191 + struct kref pbl_kref; 188 192 }; 189 193 190 194 static inline int c4iw_fatal_error(struct c4iw_rdev *rdev) ··· 1053 1049 void c4iw_pblpool_free(struct c4iw_rdev *rdev, u32 addr, int size); 1054 1050 u32 c4iw_ocqp_pool_alloc(struct c4iw_rdev *rdev, int size); 1055 1051 void c4iw_ocqp_pool_free(struct c4iw_rdev *rdev, u32 addr, int size); 1056 - void c4iw_flush_hw_cq(struct c4iw_cq *chp); 1052 + void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp); 1057 1053 void c4iw_count_rcqes(struct t4_cq *cq, struct t4_wq *wq, int *count); 1058 1054 int c4iw_ep_disconnect(struct c4iw_ep *ep, int abrupt, gfp_t gfp); 1059 1055 int c4iw_flush_rq(struct t4_wq *wq, struct t4_cq *cq, int count);
+2 -2
drivers/infiniband/hw/cxgb4/qp.c
··· 1343 1343 qhp->wq.flushed = 1; 1344 1344 t4_set_wq_in_error(&qhp->wq); 1345 1345 1346 - c4iw_flush_hw_cq(rchp); 1346 + c4iw_flush_hw_cq(rchp, qhp); 1347 1347 c4iw_count_rcqes(&rchp->cq, &qhp->wq, &count); 1348 1348 rq_flushed = c4iw_flush_rq(&qhp->wq, &rchp->cq, count); 1349 1349 1350 1350 if (schp != rchp) 1351 - c4iw_flush_hw_cq(schp); 1351 + c4iw_flush_hw_cq(schp, qhp); 1352 1352 sq_flushed = c4iw_flush_sq(qhp); 1353 1353 1354 1354 spin_unlock(&qhp->lock);
+24 -2
drivers/infiniband/hw/cxgb4/resource.c
··· 260 260 rdev->stats.pbl.cur += roundup(size, 1 << MIN_PBL_SHIFT);
261 261 if (rdev->stats.pbl.cur > rdev->stats.pbl.max)
262 262 rdev->stats.pbl.max = rdev->stats.pbl.cur;
263 + kref_get(&rdev->pbl_kref);
263 264 } else
264 265 rdev->stats.pbl.fail++;
265 266 mutex_unlock(&rdev->stats.lock);
266 267 return (u32)addr;
268 + }
269 +
270 + static void destroy_pblpool(struct kref *kref)
271 + {
272 + struct c4iw_rdev *rdev;
273 +
274 + rdev = container_of(kref, struct c4iw_rdev, pbl_kref);
275 + gen_pool_destroy(rdev->pbl_pool);
276 + complete(&rdev->pbl_compl);
267 277 }
268 278
269 279 void c4iw_pblpool_free(struct c4iw_rdev *rdev, u32 addr, int size)
··· 283 273 rdev->stats.pbl.cur -= roundup(size, 1 << MIN_PBL_SHIFT);
284 274 mutex_unlock(&rdev->stats.lock);
285 275 gen_pool_free(rdev->pbl_pool, (unsigned long)addr, size);
276 + kref_put(&rdev->pbl_kref, destroy_pblpool);
286 277 }
287 278
288 279 int c4iw_pblpool_create(struct c4iw_rdev *rdev)
··· 321 310
322 311 void c4iw_pblpool_destroy(struct c4iw_rdev *rdev)
323 312 {
324 - gen_pool_destroy(rdev->pbl_pool);
313 + kref_put(&rdev->pbl_kref, destroy_pblpool);
325 314 }
326 315
327 316 /*
··· 342 331 rdev->stats.rqt.cur += roundup(size << 6, 1 << MIN_RQT_SHIFT);
343 332 if (rdev->stats.rqt.cur > rdev->stats.rqt.max)
344 333 rdev->stats.rqt.max = rdev->stats.rqt.cur;
334 + kref_get(&rdev->rqt_kref);
345 335 } else
346 336 rdev->stats.rqt.fail++;
347 337 mutex_unlock(&rdev->stats.lock);
348 338 return (u32)addr;
339 + }
340 +
341 + static void destroy_rqtpool(struct kref *kref)
342 + {
343 + struct c4iw_rdev *rdev;
344 +
345 + rdev = container_of(kref, struct c4iw_rdev, rqt_kref);
346 + gen_pool_destroy(rdev->rqt_pool);
347 + complete(&rdev->rqt_compl);
349 348 }
350 349
351 350 void c4iw_rqtpool_free(struct c4iw_rdev *rdev, u32 addr, int size)
··· 365 344 rdev->stats.rqt.cur -= roundup(size << 6, 1 << MIN_RQT_SHIFT);
366 345 mutex_unlock(&rdev->stats.lock);
367 346 gen_pool_free(rdev->rqt_pool, (unsigned long)addr, size << 6);
347 + kref_put(&rdev->rqt_kref, destroy_rqtpool);
368 348 }
369 349
370 350 int c4iw_rqtpool_create(struct c4iw_rdev *rdev)
··· 402 380
403 381 void c4iw_rqtpool_destroy(struct c4iw_rdev *rdev)
404 382 {
405 - gen_pool_destroy(rdev->rqt_pool);
383 + kref_put(&rdev->rqt_kref, destroy_rqtpool);
406 384 }
407 385
408 386 /*
+5 -6
drivers/infiniband/hw/hfi1/affinity.c
··· 412 412 static int get_irq_affinity(struct hfi1_devdata *dd, 413 413 struct hfi1_msix_entry *msix) 414 414 { 415 - int ret; 416 415 cpumask_var_t diff; 417 416 struct hfi1_affinity_node *entry; 418 417 struct cpu_mask_set *set = NULL; ··· 422 423 423 424 extra[0] = '\0'; 424 425 cpumask_clear(&msix->mask); 425 - 426 - ret = zalloc_cpumask_var(&diff, GFP_KERNEL); 427 - if (!ret) 428 - return -ENOMEM; 429 426 430 427 entry = node_affinity_lookup(dd->node); 431 428 ··· 453 458 * finds its CPU here. 454 459 */ 455 460 if (cpu == -1 && set) { 461 + if (!zalloc_cpumask_var(&diff, GFP_KERNEL)) 462 + return -ENOMEM; 463 + 456 464 if (cpumask_equal(&set->mask, &set->used)) { 457 465 /* 458 466 * We've used up all the CPUs, bump up the generation ··· 467 469 cpumask_andnot(diff, &set->mask, &set->used); 468 470 cpu = cpumask_first(diff); 469 471 cpumask_set_cpu(cpu, &set->used); 472 + 473 + free_cpumask_var(diff); 470 474 } 471 475 472 476 cpumask_set_cpu(cpu, &msix->mask); ··· 482 482 hfi1_setup_sdma_notifier(msix); 483 483 } 484 484 485 - free_cpumask_var(diff); 486 485 return 0; 487 486 } 488 487
+15 -4
drivers/infiniband/hw/hfi1/driver.c
··· 433 433 bool do_cnp) 434 434 { 435 435 struct hfi1_ibport *ibp = to_iport(qp->ibqp.device, qp->port_num); 436 + struct hfi1_pportdata *ppd = ppd_from_ibp(ibp); 436 437 struct ib_other_headers *ohdr = pkt->ohdr; 437 438 struct ib_grh *grh = pkt->grh; 438 439 u32 rqpn = 0, bth1; 439 - u16 pkey, rlid, dlid = ib_get_dlid(pkt->hdr); 440 + u16 pkey; 441 + u32 rlid, slid, dlid = 0; 440 442 u8 hdr_type, sc, svc_type; 441 443 bool is_mcast = false; 442 444 445 + /* can be called from prescan */ 443 446 if (pkt->etype == RHF_RCV_TYPE_BYPASS) { 444 447 is_mcast = hfi1_is_16B_mcast(dlid); 445 448 pkey = hfi1_16B_get_pkey(pkt->hdr); 446 449 sc = hfi1_16B_get_sc(pkt->hdr); 450 + dlid = hfi1_16B_get_dlid(pkt->hdr); 451 + slid = hfi1_16B_get_slid(pkt->hdr); 447 452 hdr_type = HFI1_PKT_TYPE_16B; 448 453 } else { 449 454 is_mcast = (dlid > be16_to_cpu(IB_MULTICAST_LID_BASE)) && 450 455 (dlid != be16_to_cpu(IB_LID_PERMISSIVE)); 451 456 pkey = ib_bth_get_pkey(ohdr); 452 457 sc = hfi1_9B_get_sc5(pkt->hdr, pkt->rhf); 458 + dlid = ib_get_dlid(pkt->hdr); 459 + slid = ib_get_slid(pkt->hdr); 453 460 hdr_type = HFI1_PKT_TYPE_9B; 454 461 } 455 462 456 463 switch (qp->ibqp.qp_type) { 464 + case IB_QPT_UD: 465 + dlid = ppd->lid; 466 + rlid = slid; 467 + rqpn = ib_get_sqpn(pkt->ohdr); 468 + svc_type = IB_CC_SVCTYPE_UD; 469 + break; 457 470 case IB_QPT_SMI: 458 471 case IB_QPT_GSI: 459 - case IB_QPT_UD: 460 - rlid = ib_get_slid(pkt->hdr); 472 + rlid = slid; 461 473 rqpn = ib_get_sqpn(pkt->ohdr); 462 474 svc_type = IB_CC_SVCTYPE_UD; 463 475 break; ··· 494 482 dlid, rlid, sc, grh); 495 483 496 484 if (!is_mcast && (bth1 & IB_BECN_SMASK)) { 497 - struct hfi1_pportdata *ppd = ppd_from_ibp(ibp); 498 485 u32 lqpn = bth1 & RVT_QPN_MASK; 499 486 u8 sl = ibp->sc_to_sl[sc]; 500 487
+4 -4
drivers/infiniband/hw/hfi1/hfi.h
··· 1537 1537 void process_becn(struct hfi1_pportdata *ppd, u8 sl, u32 rlid, u32 lqpn, 1538 1538 u32 rqpn, u8 svc_type); 1539 1539 void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn, 1540 - u32 pkey, u32 slid, u32 dlid, u8 sc5, 1540 + u16 pkey, u32 slid, u32 dlid, u8 sc5, 1541 1541 const struct ib_grh *old_grh); 1542 1542 void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp, 1543 - u32 remote_qpn, u32 pkey, u32 slid, u32 dlid, 1543 + u32 remote_qpn, u16 pkey, u32 slid, u32 dlid, 1544 1544 u8 sc5, const struct ib_grh *old_grh); 1545 1545 typedef void (*hfi1_handle_cnp)(struct hfi1_ibport *ibp, struct rvt_qp *qp, 1546 - u32 remote_qpn, u32 pkey, u32 slid, u32 dlid, 1546 + u32 remote_qpn, u16 pkey, u32 slid, u32 dlid, 1547 1547 u8 sc5, const struct ib_grh *old_grh); 1548 1548 1549 1549 #define PKEY_CHECK_INVALID -1 ··· 2437 2437 ((slid >> OPA_16B_SLID_SHIFT) << OPA_16B_SLID_HIGH_SHIFT); 2438 2438 lrh2 = (lrh2 & ~OPA_16B_DLID_MASK) | 2439 2439 ((dlid >> OPA_16B_DLID_SHIFT) << OPA_16B_DLID_HIGH_SHIFT); 2440 - lrh2 = (lrh2 & ~OPA_16B_PKEY_MASK) | (pkey << OPA_16B_PKEY_SHIFT); 2440 + lrh2 = (lrh2 & ~OPA_16B_PKEY_MASK) | ((u32)pkey << OPA_16B_PKEY_SHIFT); 2441 2441 lrh2 = (lrh2 & ~OPA_16B_L4_MASK) | l4; 2442 2442 2443 2443 hdr->lrh[0] = lrh0;
+31 -12
drivers/infiniband/hw/hfi1/init.c
··· 88 88 * pio buffers per ctxt, etc.) Zero means use one user context per CPU. 89 89 */ 90 90 int num_user_contexts = -1; 91 - module_param_named(num_user_contexts, num_user_contexts, uint, S_IRUGO); 91 + module_param_named(num_user_contexts, num_user_contexts, int, 0444); 92 92 MODULE_PARM_DESC( 93 - num_user_contexts, "Set max number of user contexts to use"); 93 + num_user_contexts, "Set max number of user contexts to use (default: -1 will use the real (non-HT) CPU count)"); 94 94 95 95 uint krcvqs[RXE_NUM_DATA_VL]; 96 96 int krcvqsset; ··· 1209 1209 kfree(ad); 1210 1210 } 1211 1211 1212 - static void __hfi1_free_devdata(struct kobject *kobj) 1212 + /** 1213 + * hfi1_clean_devdata - cleans up per-unit data structure 1214 + * @dd: pointer to a valid devdata structure 1215 + * 1216 + * It cleans up all data structures set up by 1217 + * by hfi1_alloc_devdata(). 1218 + */ 1219 + static void hfi1_clean_devdata(struct hfi1_devdata *dd) 1213 1220 { 1214 - struct hfi1_devdata *dd = 1215 - container_of(kobj, struct hfi1_devdata, kobj); 1216 1221 struct hfi1_asic_data *ad; 1217 1222 unsigned long flags; 1218 1223 1219 1224 spin_lock_irqsave(&hfi1_devs_lock, flags); 1220 - idr_remove(&hfi1_unit_table, dd->unit); 1221 - list_del(&dd->list); 1225 + if (!list_empty(&dd->list)) { 1226 + idr_remove(&hfi1_unit_table, dd->unit); 1227 + list_del_init(&dd->list); 1228 + } 1222 1229 ad = release_asic_data(dd); 1223 1230 spin_unlock_irqrestore(&hfi1_devs_lock, flags); 1224 - if (ad) 1225 - finalize_asic_data(dd, ad); 1231 + 1232 + finalize_asic_data(dd, ad); 1226 1233 free_platform_config(dd); 1227 1234 rcu_barrier(); /* wait for rcu callbacks to complete */ 1228 1235 free_percpu(dd->int_counter); 1229 1236 free_percpu(dd->rcv_limit); 1230 1237 free_percpu(dd->send_schedule); 1231 1238 free_percpu(dd->tx_opstats); 1239 + dd->int_counter = NULL; 1240 + dd->rcv_limit = NULL; 1241 + dd->send_schedule = NULL; 1242 + dd->tx_opstats = NULL; 1232 1243 sdma_clean(dd, dd->num_sdma); 1233 
1244 rvt_dealloc_device(&dd->verbs_dev.rdi); 1245 + } 1246 + 1247 + static void __hfi1_free_devdata(struct kobject *kobj) 1248 + { 1249 + struct hfi1_devdata *dd = 1250 + container_of(kobj, struct hfi1_devdata, kobj); 1251 + 1252 + hfi1_clean_devdata(dd); 1234 1253 } 1235 1254 1236 1255 static struct kobj_type hfi1_devdata_type = { ··· 1284 1265 return ERR_PTR(-ENOMEM); 1285 1266 dd->num_pports = nports; 1286 1267 dd->pport = (struct hfi1_pportdata *)(dd + 1); 1268 + dd->pcidev = pdev; 1269 + pci_set_drvdata(pdev, dd); 1287 1270 1288 1271 INIT_LIST_HEAD(&dd->list); 1289 1272 idr_preload(GFP_KERNEL); ··· 1352 1331 return dd; 1353 1332 1354 1333 bail: 1355 - if (!list_empty(&dd->list)) 1356 - list_del_init(&dd->list); 1357 - rvt_dealloc_device(&dd->verbs_dev.rdi); 1334 + hfi1_clean_devdata(dd); 1358 1335 return ERR_PTR(ret); 1359 1336 } 1360 1337
-3
drivers/infiniband/hw/hfi1/pcie.c
··· 163 163 resource_size_t addr; 164 164 int ret = 0; 165 165 166 - dd->pcidev = pdev; 167 - pci_set_drvdata(pdev, dd); 168 - 169 166 addr = pci_resource_start(pdev, 0); 170 167 len = pci_resource_len(pdev, 0); 171 168
+1
drivers/infiniband/hw/hfi1/platform.c
··· 199 199 { 200 200 /* Release memory allocated for eprom or fallback file read. */ 201 201 kfree(dd->platform_config.data); 202 + dd->platform_config.data = NULL; 202 203 } 203 204 204 205 void get_port_type(struct hfi1_pportdata *ppd)
+2
drivers/infiniband/hw/hfi1/qsfp.c
··· 204 204 205 205 void clean_up_i2c(struct hfi1_devdata *dd, struct hfi1_asic_data *ad) 206 206 { 207 + if (!ad) 208 + return; 207 209 clean_i2c_bus(ad->i2c_bus0); 208 210 ad->i2c_bus0 = NULL; 209 211 clean_i2c_bus(ad->i2c_bus1);
+40 -10
drivers/infiniband/hw/hfi1/ruc.c
··· 733 733 ohdr->bth[2] = cpu_to_be32(bth2); 734 734 } 735 735 736 + /** 737 + * hfi1_make_ruc_header_16B - build a 16B header 738 + * @qp: the queue pair 739 + * @ohdr: a pointer to the destination header memory 740 + * @bth0: bth0 passed in from the RC/UC builder 741 + * @bth2: bth2 passed in from the RC/UC builder 742 + * @middle: non zero implies indicates ahg "could" be used 743 + * @ps: the current packet state 744 + * 745 + * This routine may disarm ahg under these situations: 746 + * - packet needs a GRH 747 + * - BECN needed 748 + * - migration state not IB_MIG_MIGRATED 749 + */ 736 750 static inline void hfi1_make_ruc_header_16B(struct rvt_qp *qp, 737 751 struct ib_other_headers *ohdr, 738 752 u32 bth0, u32 bth2, int middle, ··· 791 777 else 792 778 middle = 0; 793 779 780 + if (qp->s_flags & RVT_S_ECN) { 781 + qp->s_flags &= ~RVT_S_ECN; 782 + /* we recently received a FECN, so return a BECN */ 783 + becn = true; 784 + middle = 0; 785 + } 794 786 if (middle) 795 787 build_ahg(qp, bth2); 796 788 else ··· 804 784 805 785 bth0 |= pkey; 806 786 bth0 |= extra_bytes << 20; 807 - if (qp->s_flags & RVT_S_ECN) { 808 - qp->s_flags &= ~RVT_S_ECN; 809 - /* we recently received a FECN, so return a BECN */ 810 - becn = true; 811 - } 812 787 hfi1_make_ruc_bth(qp, ohdr, bth0, bth1, bth2); 813 788 814 789 if (!ppd->lid) ··· 821 806 pkey, becn, 0, l4, priv->s_sc); 822 807 } 823 808 809 + /** 810 + * hfi1_make_ruc_header_9B - build a 9B header 811 + * @qp: the queue pair 812 + * @ohdr: a pointer to the destination header memory 813 + * @bth0: bth0 passed in from the RC/UC builder 814 + * @bth2: bth2 passed in from the RC/UC builder 815 + * @middle: non zero implies indicates ahg "could" be used 816 + * @ps: the current packet state 817 + * 818 + * This routine may disarm ahg under these situations: 819 + * - packet needs a GRH 820 + * - BECN needed 821 + * - migration state not IB_MIG_MIGRATED 822 + */ 824 823 static inline void hfi1_make_ruc_header_9B(struct rvt_qp *qp, 
825 824 struct ib_other_headers *ohdr, 826 825 u32 bth0, u32 bth2, int middle, ··· 868 839 else 869 840 middle = 0; 870 841 842 + if (qp->s_flags & RVT_S_ECN) { 843 + qp->s_flags &= ~RVT_S_ECN; 844 + /* we recently received a FECN, so return a BECN */ 845 + bth1 |= (IB_BECN_MASK << IB_BECN_SHIFT); 846 + middle = 0; 847 + } 871 848 if (middle) 872 849 build_ahg(qp, bth2); 873 850 else ··· 881 846 882 847 bth0 |= pkey; 883 848 bth0 |= extra_bytes << 20; 884 - if (qp->s_flags & RVT_S_ECN) { 885 - qp->s_flags &= ~RVT_S_ECN; 886 - /* we recently received a FECN, so return a BECN */ 887 - bth1 |= (IB_BECN_MASK << IB_BECN_SHIFT); 888 - } 889 849 hfi1_make_ruc_bth(qp, ohdr, bth0, bth1, bth2); 890 850 hfi1_make_ib_hdr(&ps->s_txreq->phdr.hdr.ibh, 891 851 lrh0,
+2 -2
drivers/infiniband/hw/hfi1/ud.c
··· 628 628 } 629 629 630 630 void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp, 631 - u32 remote_qpn, u32 pkey, u32 slid, u32 dlid, 631 + u32 remote_qpn, u16 pkey, u32 slid, u32 dlid, 632 632 u8 sc5, const struct ib_grh *old_grh) 633 633 { 634 634 u64 pbc, pbc_flags = 0; ··· 687 687 } 688 688 689 689 void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn, 690 - u32 pkey, u32 slid, u32 dlid, u8 sc5, 690 + u16 pkey, u32 slid, u32 dlid, u8 sc5, 691 691 const struct ib_grh *old_grh) 692 692 { 693 693 u64 pbc, pbc_flags = 0;
+6 -6
drivers/infiniband/hw/hns/hns_roce_hem.c
··· 912 912 obj_per_chunk = buf_chunk_size / obj_size; 913 913 num_hem = (nobj + obj_per_chunk - 1) / obj_per_chunk; 914 914 bt_chunk_num = bt_chunk_size / 8; 915 - if (table->type >= HEM_TYPE_MTT) 915 + if (type >= HEM_TYPE_MTT) 916 916 num_bt_l0 = bt_chunk_num; 917 917 918 918 table->hem = kcalloc(num_hem, sizeof(*table->hem), ··· 920 920 if (!table->hem) 921 921 goto err_kcalloc_hem_buf; 922 922 923 - if (check_whether_bt_num_3(table->type, hop_num)) { 923 + if (check_whether_bt_num_3(type, hop_num)) { 924 924 unsigned long num_bt_l1; 925 925 926 926 num_bt_l1 = (num_hem + bt_chunk_num - 1) / ··· 939 939 goto err_kcalloc_l1_dma; 940 940 } 941 941 942 - if (check_whether_bt_num_2(table->type, hop_num) || 943 - check_whether_bt_num_3(table->type, hop_num)) { 942 + if (check_whether_bt_num_2(type, hop_num) || 943 + check_whether_bt_num_3(type, hop_num)) { 944 944 table->bt_l0 = kcalloc(num_bt_l0, sizeof(*table->bt_l0), 945 945 GFP_KERNEL); 946 946 if (!table->bt_l0) ··· 1039 1039 void hns_roce_cleanup_hem(struct hns_roce_dev *hr_dev) 1040 1040 { 1041 1041 hns_roce_cleanup_hem_table(hr_dev, &hr_dev->cq_table.table); 1042 - hns_roce_cleanup_hem_table(hr_dev, &hr_dev->qp_table.irrl_table); 1043 1042 if (hr_dev->caps.trrl_entry_sz) 1044 1043 hns_roce_cleanup_hem_table(hr_dev, 1045 1044 &hr_dev->qp_table.trrl_table); 1045 + hns_roce_cleanup_hem_table(hr_dev, &hr_dev->qp_table.irrl_table); 1046 1046 hns_roce_cleanup_hem_table(hr_dev, &hr_dev->qp_table.qp_table); 1047 1047 hns_roce_cleanup_hem_table(hr_dev, &hr_dev->mr_table.mtpt_table); 1048 - hns_roce_cleanup_hem_table(hr_dev, &hr_dev->mr_table.mtt_table); 1049 1048 if (hns_roce_check_whether_mhop(hr_dev, HEM_TYPE_CQE)) 1050 1049 hns_roce_cleanup_hem_table(hr_dev, 1051 1050 &hr_dev->mr_table.mtt_cqe_table); 1051 + hns_roce_cleanup_hem_table(hr_dev, &hr_dev->mr_table.mtt_table); 1052 1052 }
+29 -20
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 71 71 return -EINVAL; 72 72 } 73 73 74 + if (wr->opcode == IB_WR_RDMA_READ) { 75 + dev_err(hr_dev->dev, "Not support inline data!\n"); 76 + return -EINVAL; 77 + } 78 + 74 79 for (i = 0; i < wr->num_sge; i++) { 75 80 memcpy(wqe, ((void *)wr->sg_list[i].addr), 76 81 wr->sg_list[i].length); ··· 153 148 ibqp->qp_type != IB_QPT_GSI && 154 149 ibqp->qp_type != IB_QPT_UD)) { 155 150 dev_err(dev, "Not supported QP(0x%x)type!\n", ibqp->qp_type); 156 - *bad_wr = NULL; 151 + *bad_wr = wr; 157 152 return -EOPNOTSUPP; 158 153 } 159 154 ··· 187 182 qp->sq.wrid[(qp->sq.head + nreq) & (qp->sq.wqe_cnt - 1)] = 188 183 wr->wr_id; 189 184 190 - owner_bit = ~(qp->sq.head >> ilog2(qp->sq.wqe_cnt)) & 0x1; 185 + owner_bit = 186 + ~(((qp->sq.head + nreq) >> ilog2(qp->sq.wqe_cnt)) & 0x1); 191 187 192 188 /* Corresponding to the QP type, wqe process separately */ 193 189 if (ibqp->qp_type == IB_QPT_GSI) { ··· 462 456 } else { 463 457 dev_err(dev, "Illegal qp_type(0x%x)\n", ibqp->qp_type); 464 458 spin_unlock_irqrestore(&qp->sq.lock, flags); 459 + *bad_wr = wr; 465 460 return -EOPNOTSUPP; 466 461 } 467 462 } ··· 2599 2592 roce_set_field(qpc_mask->byte_4_sqpn_tst, V2_QPC_BYTE_4_SQPN_M, 2600 2593 V2_QPC_BYTE_4_SQPN_S, 0); 2601 2594 2602 - roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M, 2603 - V2_QPC_BYTE_56_DQPN_S, hr_qp->qpn); 2604 - roce_set_field(qpc_mask->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M, 2605 - V2_QPC_BYTE_56_DQPN_S, 0); 2595 + if (attr_mask & IB_QP_DEST_QPN) { 2596 + roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M, 2597 + V2_QPC_BYTE_56_DQPN_S, hr_qp->qpn); 2598 + roce_set_field(qpc_mask->byte_56_dqpn_err, 2599 + V2_QPC_BYTE_56_DQPN_M, V2_QPC_BYTE_56_DQPN_S, 0); 2600 + } 2606 2601 roce_set_field(context->byte_168_irrl_idx, 2607 2602 V2_QPC_BYTE_168_SQ_SHIFT_BAK_M, 2608 2603 V2_QPC_BYTE_168_SQ_SHIFT_BAK_S, ··· 2659 2650 return -EINVAL; 2660 2651 } 2661 2652 2662 - if ((attr_mask & IB_QP_ALT_PATH) || (attr_mask & IB_QP_ACCESS_FLAGS) || 2663 - 
(attr_mask & IB_QP_PKEY_INDEX) || (attr_mask & IB_QP_QKEY)) { 2653 + if (attr_mask & IB_QP_ALT_PATH) { 2664 2654 dev_err(dev, "INIT2RTR attr_mask (0x%x) error\n", attr_mask); 2665 2655 return -EINVAL; 2666 2656 } ··· 2808 2800 V2_QPC_BYTE_140_RR_MAX_S, 0); 2809 2801 } 2810 2802 2811 - roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M, 2812 - V2_QPC_BYTE_56_DQPN_S, attr->dest_qp_num); 2813 - roce_set_field(qpc_mask->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M, 2814 - V2_QPC_BYTE_56_DQPN_S, 0); 2803 + if (attr_mask & IB_QP_DEST_QPN) { 2804 + roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M, 2805 + V2_QPC_BYTE_56_DQPN_S, attr->dest_qp_num); 2806 + roce_set_field(qpc_mask->byte_56_dqpn_err, 2807 + V2_QPC_BYTE_56_DQPN_M, V2_QPC_BYTE_56_DQPN_S, 0); 2808 + } 2815 2809 2816 2810 /* Configure GID index */ 2817 2811 port_num = rdma_ah_get_port_num(&attr->ah_attr); ··· 2855 2845 if (ibqp->qp_type == IB_QPT_GSI || ibqp->qp_type == IB_QPT_UD) 2856 2846 roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M, 2857 2847 V2_QPC_BYTE_24_MTU_S, IB_MTU_4096); 2858 - else 2848 + else if (attr_mask & IB_QP_PATH_MTU) 2859 2849 roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M, 2860 2850 V2_QPC_BYTE_24_MTU_S, attr->path_mtu); 2861 2851 ··· 2932 2922 return -EINVAL; 2933 2923 } 2934 2924 2935 - /* If exist optional param, return error */ 2936 - if ((attr_mask & IB_QP_ALT_PATH) || (attr_mask & IB_QP_ACCESS_FLAGS) || 2937 - (attr_mask & IB_QP_QKEY) || (attr_mask & IB_QP_PATH_MIG_STATE) || 2938 - (attr_mask & IB_QP_CUR_STATE) || 2939 - (attr_mask & IB_QP_MIN_RNR_TIMER)) { 2925 + /* Not support alternate path and path migration */ 2926 + if ((attr_mask & IB_QP_ALT_PATH) || 2927 + (attr_mask & IB_QP_PATH_MIG_STATE)) { 2940 2928 dev_err(dev, "RTR2RTS attr_mask (0x%x)error\n", attr_mask); 2941 2929 return -EINVAL; 2942 2930 } ··· 3169 3161 (cur_state == IB_QPS_RTR && new_state == IB_QPS_ERR) || 3170 3162 (cur_state == IB_QPS_RTS && new_state == 
IB_QPS_ERR) || 3171 3163 (cur_state == IB_QPS_SQD && new_state == IB_QPS_ERR) || 3172 - (cur_state == IB_QPS_SQE && new_state == IB_QPS_ERR)) { 3164 + (cur_state == IB_QPS_SQE && new_state == IB_QPS_ERR) || 3165 + (cur_state == IB_QPS_ERR && new_state == IB_QPS_ERR)) { 3173 3166 /* Nothing */ 3174 3167 ; 3175 3168 } else { ··· 4487 4478 ret = hns_roce_cmd_mbox(hr_dev, mailbox->dma, 0, eq->eqn, 0, 4488 4479 eq_cmd, HNS_ROCE_CMD_TIMEOUT_MSECS); 4489 4480 if (ret) { 4490 - dev_err(dev, "[mailbox cmd] creat eqc failed.\n"); 4481 + dev_err(dev, "[mailbox cmd] create eqc failed.\n"); 4491 4482 goto err_cmd_mbox; 4492 4483 } 4493 4484
+1 -1
drivers/infiniband/hw/hns/hns_roce_qp.c
··· 620 620 to_hr_ucontext(ib_pd->uobject->context), 621 621 ucmd.db_addr, &hr_qp->rdb); 622 622 if (ret) { 623 - dev_err(dev, "rp record doorbell map failed!\n"); 623 + dev_err(dev, "rq record doorbell map failed!\n"); 624 624 goto err_mtt; 625 625 } 626 626 }
+1 -1
drivers/infiniband/hw/mlx4/mr.c
··· 346 346 /* Add to the first block the misalignment that it suffers from. */ 347 347 total_len += (first_block_start & ((1ULL << block_shift) - 1ULL)); 348 348 last_block_end = current_block_start + current_block_len; 349 - last_block_aligned_end = round_up(last_block_end, 1 << block_shift); 349 + last_block_aligned_end = round_up(last_block_end, 1ULL << block_shift); 350 350 total_len += (last_block_aligned_end - last_block_end); 351 351 352 352 if (total_len & ((1ULL << block_shift) - 1ULL))
+2 -1
drivers/infiniband/hw/mlx4/qp.c
··· 673 673 MLX4_IB_RX_HASH_SRC_PORT_TCP | 674 674 MLX4_IB_RX_HASH_DST_PORT_TCP | 675 675 MLX4_IB_RX_HASH_SRC_PORT_UDP | 676 - MLX4_IB_RX_HASH_DST_PORT_UDP)) { 676 + MLX4_IB_RX_HASH_DST_PORT_UDP | 677 + MLX4_IB_RX_HASH_INNER)) { 677 678 pr_debug("RX Hash fields_mask has unsupported mask (0x%llx)\n", 678 679 ucmd->rx_hash_fields_mask); 679 680 return (-EOPNOTSUPP);
+1
drivers/infiniband/hw/mlx5/Kconfig
··· 1 1 config MLX5_INFINIBAND 2 2 tristate "Mellanox Connect-IB HCA support" 3 3 depends on NETDEVICES && ETHERNET && PCI && MLX5_CORE 4 + depends on INFINIBAND_USER_ACCESS || INFINIBAND_USER_ACCESS=n 4 5 ---help--- 5 6 This driver provides low-level InfiniBand support for 6 7 Mellanox Connect-IB PCI Express host channel adapters (HCAs).
+3 -6
drivers/infiniband/hw/mlx5/main.c
··· 52 52 #include <linux/mlx5/port.h> 53 53 #include <linux/mlx5/vport.h> 54 54 #include <linux/mlx5/fs.h> 55 - #include <linux/mlx5/fs_helpers.h> 56 55 #include <linux/list.h> 57 56 #include <rdma/ib_smi.h> 58 57 #include <rdma/ib_umem.h> ··· 179 180 if (rep_ndev == ndev) 180 181 roce->netdev = (event == NETDEV_UNREGISTER) ? 181 182 NULL : ndev; 182 - } else if (ndev->dev.parent == &ibdev->mdev->pdev->dev) { 183 + } else if (ndev->dev.parent == &mdev->pdev->dev) { 183 184 roce->netdev = (event == NETDEV_UNREGISTER) ? 184 185 NULL : ndev; 185 186 } ··· 4756 4757 { 4757 4758 struct mlx5_ib_dev *dev = to_mdev(ibdev); 4758 4759 4759 - return mlx5_get_vector_affinity(dev->mdev, comp_vector); 4760 + return mlx5_get_vector_affinity_hint(dev->mdev, comp_vector); 4760 4761 } 4761 4762 4762 4763 /* The mlx5_ib_multiport_mutex should be held when calling this function */ ··· 5426 5427 static int mlx5_ib_stage_uar_init(struct mlx5_ib_dev *dev) 5427 5428 { 5428 5429 dev->mdev->priv.uar = mlx5_get_uars_page(dev->mdev); 5429 - if (!dev->mdev->priv.uar) 5430 - return -ENOMEM; 5431 - return 0; 5430 + return PTR_ERR_OR_ZERO(dev->mdev->priv.uar); 5432 5431 } 5433 5432 5434 5433 static void mlx5_ib_stage_uar_cleanup(struct mlx5_ib_dev *dev)
+23 -9
drivers/infiniband/hw/mlx5/mr.c
··· 866 866 int *order) 867 867 { 868 868 struct mlx5_ib_dev *dev = to_mdev(pd->device); 869 + struct ib_umem *u; 869 870 int err; 870 871 871 - *umem = ib_umem_get(pd->uobject->context, start, length, 872 - access_flags, 0); 873 - err = PTR_ERR_OR_ZERO(*umem); 872 + *umem = NULL; 873 + 874 + u = ib_umem_get(pd->uobject->context, start, length, access_flags, 0); 875 + err = PTR_ERR_OR_ZERO(u); 874 876 if (err) { 875 - *umem = NULL; 876 - mlx5_ib_err(dev, "umem get failed (%d)\n", err); 877 + mlx5_ib_dbg(dev, "umem get failed (%d)\n", err); 877 878 return err; 878 879 } 879 880 880 - mlx5_ib_cont_pages(*umem, start, MLX5_MKEY_PAGE_SHIFT_MASK, npages, 881 + mlx5_ib_cont_pages(u, start, MLX5_MKEY_PAGE_SHIFT_MASK, npages, 881 882 page_shift, ncont, order); 882 883 if (!*npages) { 883 884 mlx5_ib_warn(dev, "avoid zero region\n"); 884 - ib_umem_release(*umem); 885 + ib_umem_release(u); 885 886 return -EINVAL; 886 887 } 888 + 889 + *umem = u; 887 890 888 891 mlx5_ib_dbg(dev, "npages %d, ncont %d, order %d, page_shift %d\n", 889 892 *npages, *ncont, *order, *page_shift); ··· 1461 1458 int access_flags = flags & IB_MR_REREG_ACCESS ? 1462 1459 new_access_flags : 1463 1460 mr->access_flags; 1464 - u64 addr = (flags & IB_MR_REREG_TRANS) ? virt_addr : mr->umem->address; 1465 - u64 len = (flags & IB_MR_REREG_TRANS) ? 
length : mr->umem->length; 1466 1461 int page_shift = 0; 1467 1462 int upd_flags = 0; 1468 1463 int npages = 0; 1469 1464 int ncont = 0; 1470 1465 int order = 0; 1466 + u64 addr, len; 1471 1467 int err; 1472 1468 1473 1469 mlx5_ib_dbg(dev, "start 0x%llx, virt_addr 0x%llx, length 0x%llx, access_flags 0x%x\n", 1474 1470 start, virt_addr, length, access_flags); 1475 1471 1476 1472 atomic_sub(mr->npages, &dev->mdev->priv.reg_pages); 1473 + 1474 + if (!mr->umem) 1475 + return -EINVAL; 1476 + 1477 + if (flags & IB_MR_REREG_TRANS) { 1478 + addr = virt_addr; 1479 + len = length; 1480 + } else { 1481 + addr = mr->umem->address; 1482 + len = mr->umem->length; 1483 + } 1477 1484 1478 1485 if (flags != IB_MR_REREG_PD) { 1479 1486 /* ··· 1492 1479 */ 1493 1480 flags |= IB_MR_REREG_TRANS; 1494 1481 ib_umem_release(mr->umem); 1482 + mr->umem = NULL; 1495 1483 err = mr_umem_get(pd, addr, len, access_flags, &mr->umem, 1496 1484 &npages, &page_shift, &ncont, &order); 1497 1485 if (err)
+14 -10
drivers/infiniband/hw/mlx5/qp.c
··· 259 259 } else { 260 260 if (ucmd) { 261 261 qp->rq.wqe_cnt = ucmd->rq_wqe_count; 262 + if (ucmd->rq_wqe_shift > BITS_PER_BYTE * sizeof(ucmd->rq_wqe_shift)) 263 + return -EINVAL; 262 264 qp->rq.wqe_shift = ucmd->rq_wqe_shift; 265 + if ((1 << qp->rq.wqe_shift) / sizeof(struct mlx5_wqe_data_seg) < qp->wq_sig) 266 + return -EINVAL; 263 267 qp->rq.max_gs = (1 << qp->rq.wqe_shift) / sizeof(struct mlx5_wqe_data_seg) - qp->wq_sig; 264 268 qp->rq.max_post = qp->rq.wqe_cnt; 265 269 } else { ··· 2455 2451 2456 2452 static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate) 2457 2453 { 2458 - if (rate == IB_RATE_PORT_CURRENT) { 2454 + if (rate == IB_RATE_PORT_CURRENT) 2459 2455 return 0; 2460 - } else if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_300_GBPS) { 2461 - return -EINVAL; 2462 - } else { 2463 - while (rate != IB_RATE_2_5_GBPS && 2464 - !(1 << (rate + MLX5_STAT_RATE_OFFSET) & 2465 - MLX5_CAP_GEN(dev->mdev, stat_rate_support))) 2466 - --rate; 2467 - } 2468 2456 2469 - return rate + MLX5_STAT_RATE_OFFSET; 2457 + if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_300_GBPS) 2458 + return -EINVAL; 2459 + 2460 + while (rate != IB_RATE_PORT_CURRENT && 2461 + !(1 << (rate + MLX5_STAT_RATE_OFFSET) & 2462 + MLX5_CAP_GEN(dev->mdev, stat_rate_support))) 2463 + --rate; 2464 + 2465 + return rate ? rate + MLX5_STAT_RATE_OFFSET : rate; 2470 2466 } 2471 2467 2472 2468 static int modify_raw_packet_eth_prio(struct mlx5_core_dev *dev,
+1 -1
drivers/infiniband/hw/nes/nes_nic.c
··· 461 461 /** 462 462 * nes_netdev_start_xmit 463 463 */ 464 - static int nes_netdev_start_xmit(struct sk_buff *skb, struct net_device *netdev) 464 + static netdev_tx_t nes_netdev_start_xmit(struct sk_buff *skb, struct net_device *netdev) 465 465 { 466 466 struct nes_vnic *nesvnic = netdev_priv(netdev); 467 467 struct nes_device *nesdev = nesvnic->nesdev;
+1 -1
drivers/infiniband/sw/rxe/rxe_opcode.c
··· 390 390 .name = "IB_OPCODE_RC_SEND_ONLY_INV", 391 391 .mask = RXE_IETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK 392 392 | RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK 393 - | RXE_END_MASK, 393 + | RXE_END_MASK | RXE_START_MASK, 394 394 .length = RXE_BTH_BYTES + RXE_IETH_BYTES, 395 395 .offset = { 396 396 [RXE_BTH] = 0,
-1
drivers/infiniband/sw/rxe/rxe_req.c
··· 728 728 rollback_state(wqe, qp, &rollback_wqe, rollback_psn); 729 729 730 730 if (ret == -EAGAIN) { 731 - kfree_skb(skb); 732 731 rxe_run_task(&qp->req.task, 1); 733 732 goto exit; 734 733 }
+1 -5
drivers/infiniband/sw/rxe/rxe_resp.c
··· 742 742 err = rxe_xmit_packet(rxe, qp, &ack_pkt, skb); 743 743 if (err) { 744 744 pr_err("Failed sending RDMA reply.\n"); 745 - kfree_skb(skb); 746 745 return RESPST_ERR_RNR; 747 746 } 748 747 ··· 953 954 } 954 955 955 956 err = rxe_xmit_packet(rxe, qp, &ack_pkt, skb); 956 - if (err) { 957 + if (err) 957 958 pr_err_ratelimited("Failed sending ack\n"); 958 - kfree_skb(skb); 959 - } 960 959 961 960 err1: 962 961 return err; ··· 1138 1141 if (rc) { 1139 1142 pr_err("Failed resending result. This flow is not handled - skb ignored\n"); 1140 1143 rxe_drop_ref(qp); 1141 - kfree_skb(skb_copy); 1142 1144 rc = RESPST_CLEANUP; 1143 1145 goto out; 1144 1146 }
+1 -1
drivers/infiniband/ulp/ipoib/ipoib_main.c
··· 1094 1094 spin_unlock_irqrestore(&priv->lock, flags); 1095 1095 } 1096 1096 1097 - static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev) 1097 + static netdev_tx_t ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev) 1098 1098 { 1099 1099 struct ipoib_dev_priv *priv = ipoib_priv(dev); 1100 1100 struct rdma_netdev *rn = netdev_priv(dev);
+1 -1
drivers/infiniband/ulp/srp/Kconfig
··· 1 1 config INFINIBAND_SRP 2 2 tristate "InfiniBand SCSI RDMA Protocol" 3 - depends on SCSI 3 + depends on SCSI && INFINIBAND_ADDR_TRANS 4 4 select SCSI_SRP_ATTRS 5 5 ---help--- 6 6 Support for the SCSI RDMA Protocol over InfiniBand. This
+1 -1
drivers/infiniband/ulp/srpt/Kconfig
··· 1 1 config INFINIBAND_SRPT 2 2 tristate "InfiniBand SCSI RDMA Protocol target support" 3 - depends on INFINIBAND && TARGET_CORE 3 + depends on INFINIBAND && INFINIBAND_ADDR_TRANS && TARGET_CORE 4 4 ---help--- 5 5 6 6 Support for the SCSI RDMA Protocol (SRP) Target driver. The
+5 -5
drivers/input/input-leds.c
··· 88 88 const struct input_device_id *id) 89 89 { 90 90 struct input_leds *leds; 91 + struct input_led *led; 91 92 unsigned int num_leds; 92 93 unsigned int led_code; 93 94 int led_no; ··· 120 119 121 120 led_no = 0; 122 121 for_each_set_bit(led_code, dev->ledbit, LED_CNT) { 123 - struct input_led *led = &leds->leds[led_no]; 124 - 125 - led->handle = &leds->handle; 126 - led->code = led_code; 127 - 128 122 if (!input_led_info[led_code].name) 129 123 continue; 124 + 125 + led = &leds->leds[led_no]; 126 + led->handle = &leds->handle; 127 + led->code = led_code; 130 128 131 129 led->cdev.name = kasprintf(GFP_KERNEL, "%s::%s", 132 130 dev_name(&dev->dev),
+1 -1
drivers/input/mouse/alps.c
··· 583 583 584 584 x = (s8)(((packet[0] & 0x20) << 2) | (packet[1] & 0x7f)); 585 585 y = (s8)(((packet[0] & 0x10) << 3) | (packet[2] & 0x7f)); 586 - z = packet[4] & 0x7c; 586 + z = packet[4] & 0x7f; 587 587 588 588 /* 589 589 * The x and y values tend to be quite large, and when used
+5 -2
drivers/input/rmi4/rmi_spi.c
··· 147 147 if (len > RMI_SPI_XFER_SIZE_LIMIT) 148 148 return -EINVAL; 149 149 150 - if (rmi_spi->xfer_buf_size < len) 151 - rmi_spi_manage_pools(rmi_spi, len); 150 + if (rmi_spi->xfer_buf_size < len) { 151 + ret = rmi_spi_manage_pools(rmi_spi, len); 152 + if (ret < 0) 153 + return ret; 154 + } 152 155 153 156 if (addr == 0) 154 157 /*
+1 -1
drivers/input/touchscreen/Kconfig
··· 362 362 363 363 If unsure, say N. 364 364 365 - To compile this driver as a moudle, choose M here : the 365 + To compile this driver as a module, choose M here : the 366 366 module will be called hideep_ts. 367 367 368 368 config TOUCHSCREEN_ILI210X
+124 -76
drivers/input/touchscreen/atmel_mxt_ts.c
··· 280 280 struct input_dev *input_dev; 281 281 char phys[64]; /* device physical location */ 282 282 struct mxt_object *object_table; 283 - struct mxt_info info; 283 + struct mxt_info *info; 284 + void *raw_info_block; 284 285 unsigned int irq; 285 286 unsigned int max_x; 286 287 unsigned int max_y; ··· 461 460 { 462 461 u8 appmode = data->client->addr; 463 462 u8 bootloader; 463 + u8 family_id = data->info ? data->info->family_id : 0; 464 464 465 465 switch (appmode) { 466 466 case 0x4a: 467 467 case 0x4b: 468 468 /* Chips after 1664S use different scheme */ 469 - if (retry || data->info.family_id >= 0xa2) { 469 + if (retry || family_id >= 0xa2) { 470 470 bootloader = appmode - 0x24; 471 471 break; 472 472 } ··· 694 692 struct mxt_object *object; 695 693 int i; 696 694 697 - for (i = 0; i < data->info.object_num; i++) { 695 + for (i = 0; i < data->info->object_num; i++) { 698 696 object = data->object_table + i; 699 697 if (object->type == type) 700 698 return object; ··· 1464 1462 data_pos += offset; 1465 1463 } 1466 1464 1467 - if (cfg_info.family_id != data->info.family_id) { 1465 + if (cfg_info.family_id != data->info->family_id) { 1468 1466 dev_err(dev, "Family ID mismatch!\n"); 1469 1467 return -EINVAL; 1470 1468 } 1471 1469 1472 - if (cfg_info.variant_id != data->info.variant_id) { 1470 + if (cfg_info.variant_id != data->info->variant_id) { 1473 1471 dev_err(dev, "Variant ID mismatch!\n"); 1474 1472 return -EINVAL; 1475 1473 } ··· 1514 1512 1515 1513 /* Malloc memory to store configuration */ 1516 1514 cfg_start_ofs = MXT_OBJECT_START + 1517 - data->info.object_num * sizeof(struct mxt_object) + 1515 + data->info->object_num * sizeof(struct mxt_object) + 1518 1516 MXT_INFO_CHECKSUM_SIZE; 1519 1517 config_mem_size = data->mem_size - cfg_start_ofs; 1520 1518 config_mem = kzalloc(config_mem_size, GFP_KERNEL); ··· 1565 1563 return ret; 1566 1564 } 1567 1565 1568 - static int mxt_get_info(struct mxt_data *data) 1569 - { 1570 - struct i2c_client *client = 
data->client; 1571 - struct mxt_info *info = &data->info; 1572 - int error; 1573 - 1574 - /* Read 7-byte info block starting at address 0 */ 1575 - error = __mxt_read_reg(client, 0, sizeof(*info), info); 1576 - if (error) 1577 - return error; 1578 - 1579 - return 0; 1580 - } 1581 - 1582 1566 static void mxt_free_input_device(struct mxt_data *data) 1583 1567 { 1584 1568 if (data->input_dev) { ··· 1579 1591 video_unregister_device(&data->dbg.vdev); 1580 1592 v4l2_device_unregister(&data->dbg.v4l2); 1581 1593 #endif 1582 - 1583 - kfree(data->object_table); 1584 1594 data->object_table = NULL; 1595 + data->info = NULL; 1596 + kfree(data->raw_info_block); 1597 + data->raw_info_block = NULL; 1585 1598 kfree(data->msg_buf); 1586 1599 data->msg_buf = NULL; 1587 1600 data->T5_address = 0; ··· 1598 1609 data->max_reportid = 0; 1599 1610 } 1600 1611 1601 - static int mxt_get_object_table(struct mxt_data *data) 1612 + static int mxt_parse_object_table(struct mxt_data *data, 1613 + struct mxt_object *object_table) 1602 1614 { 1603 1615 struct i2c_client *client = data->client; 1604 - size_t table_size; 1605 - struct mxt_object *object_table; 1606 - int error; 1607 1616 int i; 1608 1617 u8 reportid; 1609 1618 u16 end_address; 1610 1619 1611 - table_size = data->info.object_num * sizeof(struct mxt_object); 1612 - object_table = kzalloc(table_size, GFP_KERNEL); 1613 - if (!object_table) { 1614 - dev_err(&data->client->dev, "Failed to allocate memory\n"); 1615 - return -ENOMEM; 1616 - } 1617 - 1618 - error = __mxt_read_reg(client, MXT_OBJECT_START, table_size, 1619 - object_table); 1620 - if (error) { 1621 - kfree(object_table); 1622 - return error; 1623 - } 1624 - 1625 1620 /* Valid Report IDs start counting from 1 */ 1626 1621 reportid = 1; 1627 1622 data->mem_size = 0; 1628 - for (i = 0; i < data->info.object_num; i++) { 1623 + for (i = 0; i < data->info->object_num; i++) { 1629 1624 struct mxt_object *object = object_table + i; 1630 1625 u8 min_id, max_id; 1631 1626 ··· 1633 
1660 1634 1661 switch (object->type) { 1635 1662 case MXT_GEN_MESSAGE_T5: 1636 - if (data->info.family_id == 0x80 && 1637 - data->info.version < 0x20) { 1663 + if (data->info->family_id == 0x80 && 1664 + data->info->version < 0x20) { 1638 1665 /* 1639 1666 * On mXT224 firmware versions prior to V2.0 1640 1667 * read and discard unused CRC byte otherwise ··· 1689 1716 /* If T44 exists, T5 position has to be directly after */ 1690 1717 if (data->T44_address && (data->T5_address != data->T44_address + 1)) { 1691 1718 dev_err(&client->dev, "Invalid T44 position\n"); 1692 - error = -EINVAL; 1693 - goto free_object_table; 1719 + return -EINVAL; 1694 1720 } 1695 1721 1696 1722 data->msg_buf = kcalloc(data->max_reportid, 1697 1723 data->T5_msg_size, GFP_KERNEL); 1698 - if (!data->msg_buf) { 1699 - dev_err(&client->dev, "Failed to allocate message buffer\n"); 1724 + if (!data->msg_buf) 1725 + return -ENOMEM; 1726 + 1727 + return 0; 1728 + } 1729 + 1730 + static int mxt_read_info_block(struct mxt_data *data) 1731 + { 1732 + struct i2c_client *client = data->client; 1733 + int error; 1734 + size_t size; 1735 + void *id_buf, *buf; 1736 + uint8_t num_objects; 1737 + u32 calculated_crc; 1738 + u8 *crc_ptr; 1739 + 1740 + /* If info block already allocated, free it */ 1741 + if (data->raw_info_block) 1742 + mxt_free_object_table(data); 1743 + 1744 + /* Read 7-byte ID information block starting at address 0 */ 1745 + size = sizeof(struct mxt_info); 1746 + id_buf = kzalloc(size, GFP_KERNEL); 1747 + if (!id_buf) 1748 + return -ENOMEM; 1749 + 1750 + error = __mxt_read_reg(client, 0, size, id_buf); 1751 + if (error) 1752 + goto err_free_mem; 1753 + 1754 + /* Resize buffer to give space for rest of info block */ 1755 + num_objects = ((struct mxt_info *)id_buf)->object_num; 1756 + size += (num_objects * sizeof(struct mxt_object)) 1757 + + MXT_INFO_CHECKSUM_SIZE; 1758 + 1759 + buf = krealloc(id_buf, size, GFP_KERNEL); 1760 + if (!buf) { 1700 1761 error = -ENOMEM; 1701 - goto 
free_object_table; 1762 + goto err_free_mem; 1763 + } 1764 + id_buf = buf; 1765 + 1766 + /* Read rest of info block */ 1767 + error = __mxt_read_reg(client, MXT_OBJECT_START, 1768 + size - MXT_OBJECT_START, 1769 + id_buf + MXT_OBJECT_START); 1770 + if (error) 1771 + goto err_free_mem; 1772 + 1773 + /* Extract & calculate checksum */ 1774 + crc_ptr = id_buf + size - MXT_INFO_CHECKSUM_SIZE; 1775 + data->info_crc = crc_ptr[0] | (crc_ptr[1] << 8) | (crc_ptr[2] << 16); 1776 + 1777 + calculated_crc = mxt_calculate_crc(id_buf, 0, 1778 + size - MXT_INFO_CHECKSUM_SIZE); 1779 + 1780 + /* 1781 + * CRC mismatch can be caused by data corruption due to I2C comms 1782 + * issue or else device is not using Object Based Protocol (eg i2c-hid) 1783 + */ 1784 + if ((data->info_crc == 0) || (data->info_crc != calculated_crc)) { 1785 + dev_err(&client->dev, 1786 + "Info Block CRC error calculated=0x%06X read=0x%06X\n", 1787 + calculated_crc, data->info_crc); 1788 + error = -EIO; 1789 + goto err_free_mem; 1702 1790 } 1703 1791 1704 - data->object_table = object_table; 1792 + data->raw_info_block = id_buf; 1793 + data->info = (struct mxt_info *)id_buf; 1794 + 1795 + dev_info(&client->dev, 1796 + "Family: %u Variant: %u Firmware V%u.%u.%02X Objects: %u\n", 1797 + data->info->family_id, data->info->variant_id, 1798 + data->info->version >> 4, data->info->version & 0xf, 1799 + data->info->build, data->info->object_num); 1800 + 1801 + /* Parse object table information */ 1802 + error = mxt_parse_object_table(data, id_buf + MXT_OBJECT_START); 1803 + if (error) { 1804 + dev_err(&client->dev, "Error %d parsing object table\n", error); 1805 + mxt_free_object_table(data); 1806 + goto err_free_mem; 1807 + } 1808 + 1809 + data->object_table = (struct mxt_object *)(id_buf + MXT_OBJECT_START); 1705 1810 1706 1811 return 0; 1707 1812 1708 - free_object_table: 1709 - mxt_free_object_table(data); 1813 + err_free_mem: 1814 + kfree(id_buf); 1710 1815 return error; 1711 1816 } 1712 1817 ··· 2097 2046 int 
error; 2098 2047 2099 2048 while (1) { 2100 - error = mxt_get_info(data); 2049 + error = mxt_read_info_block(data); 2101 2050 if (!error) 2102 2051 break; 2103 2052 ··· 2128 2077 msleep(MXT_FW_RESET_TIME); 2129 2078 } 2130 2079 2131 - /* Get object table information */ 2132 - error = mxt_get_object_table(data); 2133 - if (error) { 2134 - dev_err(&client->dev, "Error %d reading object table\n", error); 2135 - return error; 2136 - } 2137 - 2138 2080 error = mxt_acquire_irq(data); 2139 2081 if (error) 2140 - goto err_free_object_table; 2082 + return error; 2141 2083 2142 2084 error = request_firmware_nowait(THIS_MODULE, true, MXT_CFG_NAME, 2143 2085 &client->dev, GFP_KERNEL, data, ··· 2138 2094 if (error) { 2139 2095 dev_err(&client->dev, "Failed to invoke firmware loader: %d\n", 2140 2096 error); 2141 - goto err_free_object_table; 2097 + return error; 2142 2098 } 2143 2099 2144 2100 return 0; 2145 - 2146 - err_free_object_table: 2147 - mxt_free_object_table(data); 2148 - return error; 2149 2101 } 2150 2102 2151 2103 static int mxt_set_t7_power_cfg(struct mxt_data *data, u8 sleep) ··· 2202 2162 static u16 mxt_get_debug_value(struct mxt_data *data, unsigned int x, 2203 2163 unsigned int y) 2204 2164 { 2205 - struct mxt_info *info = &data->info; 2165 + struct mxt_info *info = data->info; 2206 2166 struct mxt_dbg *dbg = &data->dbg; 2207 2167 unsigned int ofs, page; 2208 2168 unsigned int col = 0; ··· 2530 2490 2531 2491 static void mxt_debug_init(struct mxt_data *data) 2532 2492 { 2533 - struct mxt_info *info = &data->info; 2493 + struct mxt_info *info = data->info; 2534 2494 struct mxt_dbg *dbg = &data->dbg; 2535 2495 struct mxt_object *object; 2536 2496 int error; ··· 2616 2576 const struct firmware *cfg) 2617 2577 { 2618 2578 struct device *dev = &data->client->dev; 2619 - struct mxt_info *info = &data->info; 2620 2579 int error; 2621 2580 2622 2581 error = mxt_init_t7_power_cfg(data); ··· 2640 2601 2641 2602 mxt_debug_init(data); 2642 2603 2643 - dev_info(dev, 2644 - 
"Family: %u Variant: %u Firmware V%u.%u.%02X Objects: %u\n", 2645 - info->family_id, info->variant_id, info->version >> 4, 2646 - info->version & 0xf, info->build, info->object_num); 2647 - 2648 2604 return 0; 2649 2605 } 2650 2606 ··· 2648 2614 struct device_attribute *attr, char *buf) 2649 2615 { 2650 2616 struct mxt_data *data = dev_get_drvdata(dev); 2651 - struct mxt_info *info = &data->info; 2617 + struct mxt_info *info = data->info; 2652 2618 return scnprintf(buf, PAGE_SIZE, "%u.%u.%02X\n", 2653 2619 info->version >> 4, info->version & 0xf, info->build); 2654 2620 } ··· 2658 2624 struct device_attribute *attr, char *buf) 2659 2625 { 2660 2626 struct mxt_data *data = dev_get_drvdata(dev); 2661 - struct mxt_info *info = &data->info; 2627 + struct mxt_info *info = data->info; 2662 2628 return scnprintf(buf, PAGE_SIZE, "%u.%u\n", 2663 2629 info->family_id, info->variant_id); 2664 2630 } ··· 2697 2663 return -ENOMEM; 2698 2664 2699 2665 error = 0; 2700 - for (i = 0; i < data->info.object_num; i++) { 2666 + for (i = 0; i < data->info->object_num; i++) { 2701 2667 object = data->object_table + i; 2702 2668 2703 2669 if (!mxt_object_readable(object->type)) ··· 3069 3035 .driver_data = samus_platform_data, 3070 3036 }, 3071 3037 { 3038 + /* Samsung Chromebook Pro */ 3039 + .ident = "Samsung Chromebook Pro", 3040 + .matches = { 3041 + DMI_MATCH(DMI_SYS_VENDOR, "Google"), 3042 + DMI_MATCH(DMI_PRODUCT_NAME, "Caroline"), 3043 + }, 3044 + .driver_data = samus_platform_data, 3045 + }, 3046 + { 3072 3047 /* Other Google Chromebooks */ 3073 3048 .ident = "Chromebook", 3074 3049 .matches = { ··· 3297 3254 3298 3255 static const struct of_device_id mxt_of_match[] = { 3299 3256 { .compatible = "atmel,maxtouch", }, 3257 + /* Compatibles listed below are deprecated */ 3258 + { .compatible = "atmel,qt602240_ts", }, 3259 + { .compatible = "atmel,atmel_mxt_ts", }, 3260 + { .compatible = "atmel,atmel_mxt_tp", }, 3261 + { .compatible = "atmel,mXT224", }, 3300 3262 {}, 3301 3263 }; 3302 
3264 MODULE_DEVICE_TABLE(of, mxt_of_match);
+1 -1
drivers/iommu/amd_iommu.c
··· 83 83 84 84 static DEFINE_SPINLOCK(amd_iommu_devtable_lock); 85 85 static DEFINE_SPINLOCK(pd_bitmap_lock); 86 - static DEFINE_SPINLOCK(iommu_table_lock); 87 86 88 87 /* List of all available dev_data structures */ 89 88 static LLIST_HEAD(dev_data_list); ··· 3561 3562 *****************************************************************************/ 3562 3563 3563 3564 static struct irq_chip amd_ir_chip; 3565 + static DEFINE_SPINLOCK(iommu_table_lock); 3564 3566 3565 3567 static void set_dte_irq_entry(u16 devid, struct irq_remap_table *table) 3566 3568 {
+25 -29
drivers/iommu/dma-iommu.c
··· 167 167 * @list: Reserved region list from iommu_get_resv_regions() 168 168 * 169 169 * IOMMU drivers can use this to implement their .get_resv_regions callback 170 - * for general non-IOMMU-specific reservations. Currently, this covers host 171 - * bridge windows for PCI devices and GICv3 ITS region reservation on ACPI 172 - * based ARM platforms that may require HW MSI reservation. 170 + * for general non-IOMMU-specific reservations. Currently, this covers GICv3 171 + * ITS region reservation on ACPI based ARM platforms that may require HW MSI 172 + * reservation. 173 173 */ 174 174 void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list) 175 175 { 176 - struct pci_host_bridge *bridge; 177 - struct resource_entry *window; 178 176 179 - if (!is_of_node(dev->iommu_fwspec->iommu_fwnode) && 180 - iort_iommu_msi_get_resv_regions(dev, list) < 0) 181 - return; 177 + if (!is_of_node(dev->iommu_fwspec->iommu_fwnode)) 178 + iort_iommu_msi_get_resv_regions(dev, list); 182 179 183 - if (!dev_is_pci(dev)) 184 - return; 185 - 186 - bridge = pci_find_host_bridge(to_pci_dev(dev)->bus); 187 - resource_list_for_each_entry(window, &bridge->windows) { 188 - struct iommu_resv_region *region; 189 - phys_addr_t start; 190 - size_t length; 191 - 192 - if (resource_type(window->res) != IORESOURCE_MEM) 193 - continue; 194 - 195 - start = window->res->start - window->offset; 196 - length = window->res->end - window->res->start + 1; 197 - region = iommu_alloc_resv_region(start, length, 0, 198 - IOMMU_RESV_RESERVED); 199 - if (!region) 200 - return; 201 - 202 - list_add_tail(&region->list, list); 203 - } 204 180 } 205 181 EXPORT_SYMBOL(iommu_dma_get_resv_regions); 206 182 ··· 205 229 return 0; 206 230 } 207 231 232 + static void iova_reserve_pci_windows(struct pci_dev *dev, 233 + struct iova_domain *iovad) 234 + { 235 + struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus); 236 + struct resource_entry *window; 237 + unsigned long lo, hi; 238 + 239 + 
resource_list_for_each_entry(window, &bridge->windows) { 240 + if (resource_type(window->res) != IORESOURCE_MEM) 241 + continue; 242 + 243 + lo = iova_pfn(iovad, window->res->start - window->offset); 244 + hi = iova_pfn(iovad, window->res->end - window->offset); 245 + reserve_iova(iovad, lo, hi); 246 + } 247 + } 248 + 208 249 static int iova_reserve_iommu_regions(struct device *dev, 209 250 struct iommu_domain *domain) 210 251 { ··· 230 237 struct iommu_resv_region *region; 231 238 LIST_HEAD(resv_regions); 232 239 int ret = 0; 240 + 241 + if (dev_is_pci(dev)) 242 + iova_reserve_pci_windows(to_pci_dev(dev), iovad); 233 243 234 244 iommu_get_resv_regions(dev, &resv_regions); 235 245 list_for_each_entry(region, &resv_regions, list) {
+1 -1
drivers/iommu/dmar.c
··· 1345 1345 struct qi_desc desc; 1346 1346 1347 1347 if (mask) { 1348 - BUG_ON(addr & ((1 << (VTD_PAGE_SHIFT + mask)) - 1)); 1348 + WARN_ON_ONCE(addr & ((1ULL << (VTD_PAGE_SHIFT + mask)) - 1)); 1349 1349 addr |= (1ULL << (VTD_PAGE_SHIFT + mask - 1)) - 1; 1350 1350 desc.high = QI_DEV_IOTLB_ADDR(addr) | QI_DEV_IOTLB_SIZE; 1351 1351 } else
+1 -1
drivers/iommu/intel_irq_remapping.c
··· 1136 1136 irte->dest_id = IRTE_DEST(cfg->dest_apicid); 1137 1137 1138 1138 /* Update the hardware only if the interrupt is in remapped mode. */ 1139 - if (!force || ir_data->irq_2_iommu.mode == IRQ_REMAPPING) 1139 + if (force || ir_data->irq_2_iommu.mode == IRQ_REMAPPING) 1140 1140 modify_irte(&ir_data->irq_2_iommu, irte); 1141 1141 } 1142 1142
+9 -2
drivers/iommu/rockchip-iommu.c
··· 1098 1098 data->iommu = platform_get_drvdata(iommu_dev); 1099 1099 dev->archdata.iommu = data; 1100 1100 1101 - of_dev_put(iommu_dev); 1101 + platform_device_put(iommu_dev); 1102 1102 1103 1103 return 0; 1104 1104 } ··· 1175 1175 for (i = 0; i < iommu->num_clocks; ++i) 1176 1176 iommu->clocks[i].id = rk_iommu_clocks[i]; 1177 1177 1178 + /* 1179 + * iommu clocks should be present for all new devices and devicetrees 1180 + * but there are older devicetrees without clocks out in the wild. 1181 + * So clocks as optional for the time being. 1182 + */ 1178 1183 err = devm_clk_bulk_get(iommu->dev, iommu->num_clocks, iommu->clocks); 1179 - if (err) 1184 + if (err == -ENOENT) 1185 + iommu->num_clocks = 0; 1186 + else if (err) 1180 1187 return err; 1181 1188 1182 1189 err = clk_bulk_prepare(iommu->num_clocks, iommu->clocks);
+2 -2
drivers/irqchip/qcom-irq-combiner.c
··· 1 - /* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved. 1 + /* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. 2 2 * 3 3 * This program is free software; you can redistribute it and/or modify 4 4 * it under the terms of the GNU General Public License version 2 and ··· 68 68 69 69 bit = readl_relaxed(combiner->regs[reg].addr); 70 70 status = bit & combiner->regs[reg].enabled; 71 - if (!status) 71 + if (bit && !status) 72 72 pr_warn_ratelimited("Unexpected IRQ on CPU%d: (%08x %08lx %p)\n", 73 73 smp_processor_id(), bit, 74 74 combiner->regs[reg].enabled,
+4 -1
drivers/md/bcache/alloc.c
··· 290 290 if (kthread_should_stop() || \ 291 291 test_bit(CACHE_SET_IO_DISABLE, &ca->set->flags)) { \ 292 292 set_current_state(TASK_RUNNING); \ 293 - return 0; \ 293 + goto out; \ 294 294 } \ 295 295 \ 296 296 schedule(); \ ··· 378 378 bch_prio_write(ca); 379 379 } 380 380 } 381 + out: 382 + wait_for_kthread_stop(); 383 + return 0; 381 384 } 382 385 383 386 /* Allocation */
+4
drivers/md/bcache/bcache.h
··· 392 392 #define DEFAULT_CACHED_DEV_ERROR_LIMIT 64 393 393 atomic_t io_errors; 394 394 unsigned error_limit; 395 + 396 + char backing_dev_name[BDEVNAME_SIZE]; 395 397 }; 396 398 397 399 enum alloc_reserve { ··· 466 464 atomic_long_t meta_sectors_written; 467 465 atomic_long_t btree_sectors_written; 468 466 atomic_long_t sectors_written; 467 + 468 + char cache_dev_name[BDEVNAME_SIZE]; 469 469 }; 470 470 471 471 struct gc_stat {
+1 -2
drivers/md/bcache/debug.c
··· 106 106 107 107 void bch_data_verify(struct cached_dev *dc, struct bio *bio) 108 108 { 109 - char name[BDEVNAME_SIZE]; 110 109 struct bio *check; 111 110 struct bio_vec bv, cbv; 112 111 struct bvec_iter iter, citer = { 0 }; ··· 133 134 bv.bv_len), 134 135 dc->disk.c, 135 136 "verify failed at dev %s sector %llu", 136 - bdevname(dc->bdev, name), 137 + dc->backing_dev_name, 137 138 (uint64_t) bio->bi_iter.bi_sector); 138 139 139 140 kunmap_atomic(p1);
+3 -5
drivers/md/bcache/io.c
··· 52 52 /* IO errors */ 53 53 void bch_count_backing_io_errors(struct cached_dev *dc, struct bio *bio) 54 54 { 55 - char buf[BDEVNAME_SIZE]; 56 55 unsigned errors; 57 56 58 57 WARN_ONCE(!dc, "NULL pointer of struct cached_dev"); ··· 59 60 errors = atomic_add_return(1, &dc->io_errors); 60 61 if (errors < dc->error_limit) 61 62 pr_err("%s: IO error on backing device, unrecoverable", 62 - bio_devname(bio, buf)); 63 + dc->backing_dev_name); 63 64 else 64 65 bch_cached_dev_error(dc); 65 66 } ··· 104 105 } 105 106 106 107 if (error) { 107 - char buf[BDEVNAME_SIZE]; 108 108 unsigned errors = atomic_add_return(1 << IO_ERROR_SHIFT, 109 109 &ca->io_errors); 110 110 errors >>= IO_ERROR_SHIFT; 111 111 112 112 if (errors < ca->set->error_limit) 113 113 pr_err("%s: IO error on %s%s", 114 - bdevname(ca->bdev, buf), m, 114 + ca->cache_dev_name, m, 115 115 is_read ? ", recovering." : "."); 116 116 else 117 117 bch_cache_set_error(ca->set, 118 118 "%s: too many IO errors %s", 119 - bdevname(ca->bdev, buf), m); 119 + ca->cache_dev_name, m); 120 120 } 121 121 } 122 122
+1 -4
drivers/md/bcache/request.c
··· 649 649 */ 650 650 if (unlikely(s->iop.writeback && 651 651 bio->bi_opf & REQ_PREFLUSH)) { 652 - char buf[BDEVNAME_SIZE]; 653 - 654 - bio_devname(bio, buf); 655 652 pr_err("Can't flush %s: returned bi_status %i", 656 - buf, bio->bi_status); 653 + dc->backing_dev_name, bio->bi_status); 657 654 } else { 658 655 /* set to orig_bio->bi_status in bio_complete() */ 659 656 s->iop.status = bio->bi_status;
+52 -23
drivers/md/bcache/super.c
··· 936 936 static void cached_dev_detach_finish(struct work_struct *w) 937 937 { 938 938 struct cached_dev *dc = container_of(w, struct cached_dev, detach); 939 - char buf[BDEVNAME_SIZE]; 940 939 struct closure cl; 941 940 closure_init_stack(&cl); 942 941 ··· 966 967 967 968 mutex_unlock(&bch_register_lock); 968 969 969 - pr_info("Caching disabled for %s", bdevname(dc->bdev, buf)); 970 + pr_info("Caching disabled for %s", dc->backing_dev_name); 970 971 971 972 /* Drop ref we took in cached_dev_detach() */ 972 973 closure_put(&dc->disk.cl); ··· 998 999 { 999 1000 uint32_t rtime = cpu_to_le32(get_seconds()); 1000 1001 struct uuid_entry *u; 1001 - char buf[BDEVNAME_SIZE]; 1002 1002 struct cached_dev *exist_dc, *t; 1003 - 1004 - bdevname(dc->bdev, buf); 1005 1003 1006 1004 if ((set_uuid && memcmp(set_uuid, c->sb.set_uuid, 16)) || 1007 1005 (!set_uuid && memcmp(dc->sb.set_uuid, c->sb.set_uuid, 16))) 1008 1006 return -ENOENT; 1009 1007 1010 1008 if (dc->disk.c) { 1011 - pr_err("Can't attach %s: already attached", buf); 1009 + pr_err("Can't attach %s: already attached", 1010 + dc->backing_dev_name); 1012 1011 return -EINVAL; 1013 1012 } 1014 1013 1015 1014 if (test_bit(CACHE_SET_STOPPING, &c->flags)) { 1016 - pr_err("Can't attach %s: shutting down", buf); 1015 + pr_err("Can't attach %s: shutting down", 1016 + dc->backing_dev_name); 1017 1017 return -EINVAL; 1018 1018 } 1019 1019 1020 1020 if (dc->sb.block_size < c->sb.block_size) { 1021 1021 /* Will die */ 1022 1022 pr_err("Couldn't attach %s: block size less than set's block size", 1023 - buf); 1023 + dc->backing_dev_name); 1024 1024 return -EINVAL; 1025 1025 } 1026 1026 ··· 1027 1029 list_for_each_entry_safe(exist_dc, t, &c->cached_devs, list) { 1028 1030 if (!memcmp(dc->sb.uuid, exist_dc->sb.uuid, 16)) { 1029 1031 pr_err("Tried to attach %s but duplicate UUID already attached", 1030 - buf); 1032 + dc->backing_dev_name); 1031 1033 1032 1034 return -EINVAL; 1033 1035 } ··· 1045 1047 1046 1048 if (!u) { 1047 1049 if 
(BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY) { 1048 - pr_err("Couldn't find uuid for %s in set", buf); 1050 + pr_err("Couldn't find uuid for %s in set", 1051 + dc->backing_dev_name); 1049 1052 return -ENOENT; 1050 1053 } 1051 1054 1052 1055 u = uuid_find_empty(c); 1053 1056 if (!u) { 1054 - pr_err("Not caching %s, no room for UUID", buf); 1057 + pr_err("Not caching %s, no room for UUID", 1058 + dc->backing_dev_name); 1055 1059 return -EINVAL; 1056 1060 } 1057 1061 } ··· 1112 1112 up_write(&dc->writeback_lock); 1113 1113 1114 1114 pr_info("Caching %s as %s on set %pU", 1115 - bdevname(dc->bdev, buf), dc->disk.disk->disk_name, 1115 + dc->backing_dev_name, 1116 + dc->disk.disk->disk_name, 1116 1117 dc->disk.c->sb.set_uuid); 1117 1118 return 0; 1118 1119 } ··· 1226 1225 struct block_device *bdev, 1227 1226 struct cached_dev *dc) 1228 1227 { 1229 - char name[BDEVNAME_SIZE]; 1230 1228 const char *err = "cannot allocate memory"; 1231 1229 struct cache_set *c; 1232 1230 1231 + bdevname(bdev, dc->backing_dev_name); 1233 1232 memcpy(&dc->sb, sb, sizeof(struct cache_sb)); 1234 1233 dc->bdev = bdev; 1235 1234 dc->bdev->bd_holder = dc; ··· 1237 1236 bio_init(&dc->sb_bio, dc->sb_bio.bi_inline_vecs, 1); 1238 1237 bio_first_bvec_all(&dc->sb_bio)->bv_page = sb_page; 1239 1238 get_page(sb_page); 1239 + 1240 1240 1241 1241 if (cached_dev_init(dc, sb->block_size << 9)) 1242 1242 goto err; ··· 1249 1247 if (bch_cache_accounting_add_kobjs(&dc->accounting, &dc->disk.kobj)) 1250 1248 goto err; 1251 1249 1252 - pr_info("registered backing device %s", bdevname(bdev, name)); 1250 + pr_info("registered backing device %s", dc->backing_dev_name); 1253 1251 1254 1252 list_add(&dc->list, &uncached_devices); 1255 1253 list_for_each_entry(c, &bch_cache_sets, list) ··· 1261 1259 1262 1260 return; 1263 1261 err: 1264 - pr_notice("error %s: %s", bdevname(bdev, name), err); 1262 + pr_notice("error %s: %s", dc->backing_dev_name, err); 1265 1263 bcache_device_stop(&dc->disk); 1266 1264 } 1267 1265 ··· 1369 
1367 1370 1368 bool bch_cached_dev_error(struct cached_dev *dc) 1371 1369 { 1372 - char name[BDEVNAME_SIZE]; 1370 + struct cache_set *c; 1373 1371 1374 1372 if (!dc || test_bit(BCACHE_DEV_CLOSING, &dc->disk.flags)) 1375 1373 return false; ··· 1379 1377 smp_mb(); 1380 1378 1381 1379 pr_err("stop %s: too many IO errors on backing device %s\n", 1382 - dc->disk.disk->disk_name, bdevname(dc->bdev, name)); 1380 + dc->disk.disk->disk_name, dc->backing_dev_name); 1381 + 1382 + /* 1383 + * If the cached device is still attached to a cache set, 1384 + * even dc->io_disable is true and no more I/O requests 1385 + * accepted, cache device internal I/O (writeback scan or 1386 + * garbage collection) may still prevent bcache device from 1387 + * being stopped. So here CACHE_SET_IO_DISABLE should be 1388 + * set to c->flags too, to make the internal I/O to cache 1389 + * device rejected and stopped immediately. 1390 + * If c is NULL, that means the bcache device is not attached 1391 + * to any cache set, then no CACHE_SET_IO_DISABLE bit to set. 1392 + */ 1393 + c = dc->disk.c; 1394 + if (c && test_and_set_bit(CACHE_SET_IO_DISABLE, &c->flags)) 1395 + pr_info("CACHE_SET_IO_DISABLE already set"); 1383 1396 1384 1397 bcache_device_stop(&dc->disk); 1385 1398 return true; ··· 1412 1395 return false; 1413 1396 1414 1397 if (test_and_set_bit(CACHE_SET_IO_DISABLE, &c->flags)) 1415 - pr_warn("CACHE_SET_IO_DISABLE already set"); 1398 + pr_info("CACHE_SET_IO_DISABLE already set"); 1416 1399 1417 1400 /* XXX: we can be called from atomic context 1418 1401 acquire_console_sem(); ··· 1556 1539 */ 1557 1540 pr_warn("stop_when_cache_set_failed of %s is \"auto\" and cache is dirty, stop it to avoid potential data corruption.", 1558 1541 d->disk->disk_name); 1542 + /* 1543 + * There might be a small time gap that cache set is 1544 + * released but bcache device is not. Inside this time 1545 + * gap, regular I/O requests will directly go into 1546 + * backing device as no cache set attached to. 
This 1547 + * behavior may also introduce potential inconsistence 1548 + * data in writeback mode while cache is dirty. 1549 + * Therefore before calling bcache_device_stop() due 1550 + * to a broken cache device, dc->io_disable should be 1551 + * explicitly set to true. 1552 + */ 1553 + dc->io_disable = true; 1554 + /* make others know io_disable is true earlier */ 1555 + smp_mb(); 1559 1556 bcache_device_stop(d); 1560 1557 } else { 1561 1558 /* ··· 2034 2003 static int register_cache(struct cache_sb *sb, struct page *sb_page, 2035 2004 struct block_device *bdev, struct cache *ca) 2036 2005 { 2037 - char name[BDEVNAME_SIZE]; 2038 2006 const char *err = NULL; /* must be set for any error case */ 2039 2007 int ret = 0; 2040 2008 2041 - bdevname(bdev, name); 2042 - 2009 + bdevname(bdev, ca->cache_dev_name); 2043 2010 memcpy(&ca->sb, sb, sizeof(struct cache_sb)); 2044 2011 ca->bdev = bdev; 2045 2012 ca->bdev->bd_holder = ca; ··· 2074 2045 goto out; 2075 2046 } 2076 2047 2077 - pr_info("registered cache device %s", name); 2048 + pr_info("registered cache device %s", ca->cache_dev_name); 2078 2049 2079 2050 out: 2080 2051 kobject_put(&ca->kobj); 2081 2052 2082 2053 err: 2083 2054 if (err) 2084 - pr_notice("error %s: %s", name, err); 2055 + pr_notice("error %s: %s", ca->cache_dev_name, err); 2085 2056 2086 2057 return ret; 2087 2058 }
+3 -1
drivers/md/bcache/writeback.c
··· 244 244 struct keybuf_key *w = bio->bi_private; 245 245 struct dirty_io *io = w->private; 246 246 247 - if (bio->bi_status) 247 + if (bio->bi_status) { 248 248 SET_KEY_DIRTY(&w->key, false); 249 + bch_count_backing_io_errors(io->dc, bio); 250 + } 249 251 250 252 closure_put(&io->cl); 251 253 }
+1 -1
drivers/media/i2c/saa7115.c
··· 20 20 // 21 21 // VBI support (2004) and cleanups (2005) by Hans Verkuil <hverkuil@xs4all.nl> 22 22 // 23 - // Copyright (c) 2005-2006 Mauro Carvalho Chehab <mchehab@infradead.org> 23 + // Copyright (c) 2005-2006 Mauro Carvalho Chehab <mchehab@kernel.org> 24 24 // SAA7111, SAA7113 and SAA7118 support 25 25 26 26 #include "saa711x_regs.h"
+1 -1
drivers/media/i2c/saa711x_regs.h
··· 2 2 * SPDX-License-Identifier: GPL-2.0+ 3 3 * saa711x - Philips SAA711x video decoder register specifications 4 4 * 5 - * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@infradead.org> 5 + * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 */ 7 7 8 8 #define R_00_CHIP_VERSION 0x00
+1 -1
drivers/media/i2c/tda7432.c
··· 8 8 * Muting and tone control by Jonathan Isom <jisom@ematic.com> 9 9 * 10 10 * Copyright (c) 2000 Eric Sandeen <eric_sandeen@bigfoot.com> 11 - * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@infradead.org> 11 + * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@kernel.org> 12 12 * This code is placed under the terms of the GNU General Public License 13 13 * Based on tda9855.c by Steve VanDeBogart (vandebo@uclink.berkeley.edu) 14 14 * Which was based on tda8425.c by Greg Alexander (c) 1998
+1 -1
drivers/media/i2c/tvp5150.c
··· 2 2 // 3 3 // tvp5150 - Texas Instruments TVP5150A/AM1 and TVP5151 video decoder driver 4 4 // 5 - // Copyright (c) 2005,2006 Mauro Carvalho Chehab <mchehab@infradead.org> 5 + // Copyright (c) 2005,2006 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 7 7 #include <dt-bindings/media/tvp5150.h> 8 8 #include <linux/i2c.h>
+1 -1
drivers/media/i2c/tvp5150_reg.h
··· 3 3 * 4 4 * tvp5150 - Texas Instruments TVP5150A/AM1 video decoder registers 5 5 * 6 - * Copyright (c) 2005,2006 Mauro Carvalho Chehab <mchehab@infradead.org> 6 + * Copyright (c) 2005,2006 Mauro Carvalho Chehab <mchehab@kernel.org> 7 7 */ 8 8 9 9 #define TVP5150_VD_IN_SRC_SEL_1 0x00 /* Video input source selection #1 */
+1 -1
drivers/media/i2c/tvp7002.c
··· 5 5 * Author: Santiago Nunez-Corrales <santiago.nunez@ridgerun.com> 6 6 * 7 7 * This code is partially based upon the TVP5150 driver 8 - * written by Mauro Carvalho Chehab (mchehab@infradead.org), 8 + * written by Mauro Carvalho Chehab <mchehab@kernel.org>, 9 9 * the TVP514x driver written by Vaibhav Hiremath <hvaibhav@ti.com> 10 10 * and the TVP7002 driver in the TI LSP 2.10.00.14. Revisions by 11 11 * Muralidharan Karicheri and Snehaprabha Narnakaje (TI).
+1 -1
drivers/media/i2c/tvp7002_reg.h
··· 5 5 * Author: Santiago Nunez-Corrales <santiago.nunez@ridgerun.com> 6 6 * 7 7 * This code is partially based upon the TVP5150 driver 8 - * written by Mauro Carvalho Chehab (mchehab@infradead.org), 8 + * written by Mauro Carvalho Chehab <mchehab@kernel.org>, 9 9 * the TVP514x driver written by Vaibhav Hiremath <hvaibhav@ti.com> 10 10 * and the TVP7002 driver in the TI LSP 2.10.00.14 11 11 *
+1 -1
drivers/media/media-devnode.c
··· 4 4 * Copyright (C) 2010 Nokia Corporation 5 5 * 6 6 * Based on drivers/media/video/v4l2_dev.c code authored by 7 - * Mauro Carvalho Chehab <mchehab@infradead.org> (version 2) 7 + * Mauro Carvalho Chehab <mchehab@kernel.org> (version 2) 8 8 * Alan Cox, <alan@lxorguk.ukuu.org.uk> (version 1) 9 9 * 10 10 * Contacts: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+1 -1
drivers/media/pci/bt8xx/bttv-audio-hook.c
··· 1 1 /* 2 2 * Handlers for board audio hooks, splitted from bttv-cards 3 3 * 4 - * Copyright (c) 2006 Mauro Carvalho Chehab (mchehab@infradead.org) 4 + * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 * This code is placed under the terms of the GNU General Public License 6 6 */ 7 7
+1 -1
drivers/media/pci/bt8xx/bttv-audio-hook.h
··· 1 1 /* 2 2 * Handlers for board audio hooks, splitted from bttv-cards 3 3 * 4 - * Copyright (c) 2006 Mauro Carvalho Chehab (mchehab@infradead.org) 4 + * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 * This code is placed under the terms of the GNU General Public License 6 6 */ 7 7
+2 -2
drivers/media/pci/bt8xx/bttv-cards.c
··· 2447 2447 }, 2448 2448 /* ---- card 0x88---------------------------------- */ 2449 2449 [BTTV_BOARD_ACORP_Y878F] = { 2450 - /* Mauro Carvalho Chehab <mchehab@infradead.org> */ 2450 + /* Mauro Carvalho Chehab <mchehab@kernel.org> */ 2451 2451 .name = "Acorp Y878F", 2452 2452 .video_inputs = 3, 2453 2453 /* .audio_inputs= 1, */ ··· 2688 2688 }, 2689 2689 [BTTV_BOARD_ENLTV_FM_2] = { 2690 2690 /* Encore TV Tuner Pro ENL TV-FM-2 2691 - Mauro Carvalho Chehab <mchehab@infradead.org */ 2691 + Mauro Carvalho Chehab <mchehab@kernel.org> */ 2692 2692 .name = "Encore ENL TV-FM-2", 2693 2693 .video_inputs = 3, 2694 2694 /* .audio_inputs= 1, */
+1 -1
drivers/media/pci/bt8xx/bttv-driver.c
··· 13 13 (c) 2005-2006 Nickolay V. Shmyrev <nshmyrev@yandex.ru> 14 14 15 15 Fixes to be fully V4L2 compliant by 16 - (c) 2006 Mauro Carvalho Chehab <mchehab@infradead.org> 16 + (c) 2006 Mauro Carvalho Chehab <mchehab@kernel.org> 17 17 18 18 Cropping and overscan support 19 19 Copyright (C) 2005, 2006 Michael H. Schimek <mschimek@gmx.at>
+1 -1
drivers/media/pci/bt8xx/bttv-i2c.c
··· 8 8 & Marcus Metzler (mocm@thp.uni-koeln.de) 9 9 (c) 1999-2003 Gerd Knorr <kraxel@bytesex.org> 10 10 11 - (c) 2005 Mauro Carvalho Chehab <mchehab@infradead.org> 11 + (c) 2005 Mauro Carvalho Chehab <mchehab@kernel.org> 12 12 - Multituner support and i2c address binding 13 13 14 14 This program is free software; you can redistribute it and/or modify
+1 -1
drivers/media/pci/cx23885/cx23885-input.c
··· 13 13 * Copyright (C) 2008 <srinivasa.deevi at conexant dot com> 14 14 * Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it> 15 15 * Markus Rechberger <mrechberger@gmail.com> 16 - * Mauro Carvalho Chehab <mchehab@infradead.org> 16 + * Mauro Carvalho Chehab <mchehab@kernel.org> 17 17 * Sascha Sommer <saschasommer@freenet.de> 18 18 * Copyright (C) 2004, 2005 Chris Pascoe 19 19 * Copyright (C) 2003, 2004 Gerd Knorr
+2 -2
drivers/media/pci/cx88/cx88-alsa.c
··· 4 4 * 5 5 * (c) 2007 Trent Piepho <xyzzy@speakeasy.org> 6 6 * (c) 2005,2006 Ricardo Cerqueira <v4l@cerqueira.org> 7 - * (c) 2005 Mauro Carvalho Chehab <mchehab@infradead.org> 7 + * (c) 2005 Mauro Carvalho Chehab <mchehab@kernel.org> 8 8 * Based on a dummy cx88 module by Gerd Knorr <kraxel@bytesex.org> 9 9 * Based on dummy.c by Jaroslav Kysela <perex@perex.cz> 10 10 * ··· 103 103 104 104 MODULE_DESCRIPTION("ALSA driver module for cx2388x based TV cards"); 105 105 MODULE_AUTHOR("Ricardo Cerqueira"); 106 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 106 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 107 107 MODULE_LICENSE("GPL"); 108 108 MODULE_VERSION(CX88_VERSION); 109 109
+1 -1
drivers/media/pci/cx88/cx88-blackbird.c
··· 5 5 * (c) 2004 Jelle Foks <jelle@foks.us> 6 6 * (c) 2004 Gerd Knorr <kraxel@bytesex.org> 7 7 * 8 - * (c) 2005-2006 Mauro Carvalho Chehab <mchehab@infradead.org> 8 + * (c) 2005-2006 Mauro Carvalho Chehab <mchehab@kernel.org> 9 9 * - video_ioctl2 conversion 10 10 * 11 11 * Includes parts from the ivtv driver <http://sourceforge.net/projects/ivtv/>
+1 -1
drivers/media/pci/cx88/cx88-core.c
··· 4 4 * 5 5 * (c) 2003 Gerd Knorr <kraxel@bytesex.org> [SuSE Labs] 6 6 * 7 - * (c) 2005-2006 Mauro Carvalho Chehab <mchehab@infradead.org> 7 + * (c) 2005-2006 Mauro Carvalho Chehab <mchehab@kernel.org> 8 8 * - Multituner support 9 9 * - video_ioctl2 conversion 10 10 * - PAL/M fixes
+1 -1
drivers/media/pci/cx88/cx88-i2c.c
··· 8 8 * (c) 2002 Yurij Sysoev <yurij@naturesoft.net> 9 9 * (c) 1999-2003 Gerd Knorr <kraxel@bytesex.org> 10 10 * 11 - * (c) 2005 Mauro Carvalho Chehab <mchehab@infradead.org> 11 + * (c) 2005 Mauro Carvalho Chehab <mchehab@kernel.org> 12 12 * - Multituner support and i2c address binding 13 13 * 14 14 * This program is free software; you can redistribute it and/or modify
+1 -1
drivers/media/pci/cx88/cx88-video.c
··· 5 5 * 6 6 * (c) 2003-04 Gerd Knorr <kraxel@bytesex.org> [SuSE Labs] 7 7 * 8 - * (c) 2005-2006 Mauro Carvalho Chehab <mchehab@infradead.org> 8 + * (c) 2005-2006 Mauro Carvalho Chehab <mchehab@kernel.org> 9 9 * - Multituner support 10 10 * - video_ioctl2 conversion 11 11 * - PAL/M fixes
+1 -1
drivers/media/radio/radio-aimslab.c
··· 4 4 * Copyright 1997 M. Kirkwood 5 5 * 6 6 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com> 7 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 7 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 8 8 * Converted to new API by Alan Cox <alan@lxorguk.ukuu.org.uk> 9 9 * Various bugfixes and enhancements by Russell Kroll <rkroll@exploits.org> 10 10 *
+1 -1
drivers/media/radio/radio-aztech.c
··· 2 2 * radio-aztech.c - Aztech radio card driver 3 3 * 4 4 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@xs4all.nl> 5 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 5 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 * Adapted to support the Video for Linux API by 7 7 * Russell Kroll <rkroll@exploits.org>. Based on original tuner code by: 8 8 *
+1 -1
drivers/media/radio/radio-gemtek.c
··· 15 15 * Various bugfixes and enhancements by Russell Kroll <rkroll@exploits.org> 16 16 * 17 17 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com> 18 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 18 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 19 19 * 20 20 * Note: this card seems to swap the left and right audio channels! 21 21 *
+1 -1
drivers/media/radio/radio-maxiradio.c
··· 27 27 * BUGS: 28 28 * - card unmutes if you change frequency 29 29 * 30 - * (c) 2006, 2007 by Mauro Carvalho Chehab <mchehab@infradead.org>: 30 + * (c) 2006, 2007 by Mauro Carvalho Chehab <mchehab@kernel.org>: 31 31 * - Conversion to V4L2 API 32 32 * - Uses video_ioctl2 for parsing and to add debug support 33 33 */
+1 -1
drivers/media/radio/radio-rtrack2.c
··· 7 7 * Various bugfixes and enhancements by Russell Kroll <rkroll@exploits.org> 8 8 * 9 9 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com> 10 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 10 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 11 11 * 12 12 * Fully tested with actual hardware and the v4l2-compliance tool. 13 13 */
+1 -1
drivers/media/radio/radio-sf16fmi.c
··· 13 13 * No volume control - only mute/unmute - you have to use line volume 14 14 * control on SB-part of SF16-FMI/SF16-FMP/SF16-FMD 15 15 * 16 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 16 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 17 17 */ 18 18 19 19 #include <linux/kernel.h> /* __setup */
+1 -1
drivers/media/radio/radio-terratec.c
··· 17 17 * Volume Control is done digitally 18 18 * 19 19 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com> 20 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 20 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 21 21 */ 22 22 23 23 #include <linux/module.h> /* Modules */
+1 -1
drivers/media/radio/radio-trust.c
··· 12 12 * Scott McGrath (smcgrath@twilight.vtc.vsc.edu) 13 13 * William McGrath (wmcgrath@twilight.vtc.vsc.edu) 14 14 * 15 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 15 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 16 16 */ 17 17 18 18 #include <stdarg.h>
+1 -1
drivers/media/radio/radio-typhoon.c
··· 25 25 * The frequency change is necessary since the card never seems to be 26 26 * completely silent. 27 27 * 28 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 28 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 29 29 */ 30 30 31 31 #include <linux/module.h> /* Modules */
+1 -1
drivers/media/radio/radio-zoltrix.c
··· 27 27 * 2002-07-15 - Fix Stereo typo 28 28 * 29 29 * 2006-07-24 - Converted to V4L2 API 30 - * by Mauro Carvalho Chehab <mchehab@infradead.org> 30 + * by Mauro Carvalho Chehab <mchehab@kernel.org> 31 31 * 32 32 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com> 33 33 *
+1 -1
drivers/media/rc/keymaps/rc-avermedia-m135a.c
··· 12 12 * 13 13 * On Avermedia M135A with IR model RM-JX, the same codes exist on both 14 14 * Positivo (BR) and original IR, initial version and remote control codes 15 - * added by Mauro Carvalho Chehab <mchehab@infradead.org> 15 + * added by Mauro Carvalho Chehab <mchehab@kernel.org> 16 16 * 17 17 * Positivo also ships Avermedia M135A with model RM-K6, extra control 18 18 * codes added by Herton Ronaldo Krzesinski <herton@mandriva.com.br>
+1 -1
drivers/media/rc/keymaps/rc-encore-enltv-fm53.c
··· 9 9 #include <linux/module.h> 10 10 11 11 /* Encore ENLTV-FM v5.3 12 - Mauro Carvalho Chehab <mchehab@infradead.org> 12 + Mauro Carvalho Chehab <mchehab@kernel.org> 13 13 */ 14 14 15 15 static struct rc_map_table encore_enltv_fm53[] = {
+1 -1
drivers/media/rc/keymaps/rc-encore-enltv2.c
··· 9 9 #include <linux/module.h> 10 10 11 11 /* Encore ENLTV2-FM - silver plastic - "Wand Media" written at the botton 12 - Mauro Carvalho Chehab <mchehab@infradead.org> */ 12 + Mauro Carvalho Chehab <mchehab@kernel.org> */ 13 13 14 14 static struct rc_map_table encore_enltv2[] = { 15 15 { 0x4c, KEY_POWER2 },
+1 -1
drivers/media/rc/keymaps/rc-kaiomy.c
··· 9 9 #include <linux/module.h> 10 10 11 11 /* Kaiomy TVnPC U2 12 - Mauro Carvalho Chehab <mchehab@infradead.org> 12 + Mauro Carvalho Chehab <mchehab@kernel.org> 13 13 */ 14 14 15 15 static struct rc_map_table kaiomy[] = {
+1 -1
drivers/media/rc/keymaps/rc-kworld-plus-tv-analog.c
··· 9 9 #include <linux/module.h> 10 10 11 11 /* Kworld Plus TV Analog Lite PCI IR 12 - Mauro Carvalho Chehab <mchehab@infradead.org> 12 + Mauro Carvalho Chehab <mchehab@kernel.org> 13 13 */ 14 14 15 15 static struct rc_map_table kworld_plus_tv_analog[] = {
+1 -1
drivers/media/rc/keymaps/rc-pixelview-new.c
··· 9 9 #include <linux/module.h> 10 10 11 11 /* 12 - Mauro Carvalho Chehab <mchehab@infradead.org> 12 + Mauro Carvalho Chehab <mchehab@kernel.org> 13 13 present on PV MPEG 8000GT 14 14 */ 15 15
+2 -2
drivers/media/tuners/tea5761.c
··· 2 2 // For Philips TEA5761 FM Chip 3 3 // I2C address is always 0x20 (0x10 at 7-bit mode). 4 4 // 5 - // Copyright (c) 2005-2007 Mauro Carvalho Chehab (mchehab@infradead.org) 5 + // Copyright (c) 2005-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 7 7 #include <linux/i2c.h> 8 8 #include <linux/slab.h> ··· 337 337 EXPORT_SYMBOL_GPL(tea5761_autodetection); 338 338 339 339 MODULE_DESCRIPTION("Philips TEA5761 FM tuner driver"); 340 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 340 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 341 341 MODULE_LICENSE("GPL v2");
+2 -2
drivers/media/tuners/tea5767.c
··· 2 2 // For Philips TEA5767 FM Chip used on some TV Cards like Prolink Pixelview 3 3 // I2C address is always 0xC0. 4 4 // 5 - // Copyright (c) 2005 Mauro Carvalho Chehab (mchehab@infradead.org) 5 + // Copyright (c) 2005 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 // 7 7 // tea5767 autodetection thanks to Torsten Seeboth and Atsushi Nakagawa 8 8 // from their contributions on DScaler. ··· 469 469 EXPORT_SYMBOL_GPL(tea5767_autodetection); 470 470 471 471 MODULE_DESCRIPTION("Philips TEA5767 FM tuner driver"); 472 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 472 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 473 473 MODULE_LICENSE("GPL v2");
+1 -1
drivers/media/tuners/tuner-xc2028-types.h
··· 5 5 * This file includes internal tipes to be used inside tuner-xc2028. 6 6 * Shouldn't be included outside tuner-xc2028 7 7 * 8 - * Copyright (c) 2007-2008 Mauro Carvalho Chehab (mchehab@infradead.org) 8 + * Copyright (c) 2007-2008 Mauro Carvalho Chehab <mchehab@kernel.org> 9 9 */ 10 10 11 11 /* xc3028 firmware types */
+2 -2
drivers/media/tuners/tuner-xc2028.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // tuner-xc2028 3 3 // 4 - // Copyright (c) 2007-2008 Mauro Carvalho Chehab (mchehab@infradead.org) 4 + // Copyright (c) 2007-2008 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 // 6 6 // Copyright (c) 2007 Michel Ludwig (michel.ludwig@gmail.com) 7 7 // - frontend interface ··· 1518 1518 1519 1519 MODULE_DESCRIPTION("Xceive xc2028/xc3028 tuner driver"); 1520 1520 MODULE_AUTHOR("Michel Ludwig <michel.ludwig@gmail.com>"); 1521 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 1521 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 1522 1522 MODULE_LICENSE("GPL v2"); 1523 1523 MODULE_FIRMWARE(XC2028_DEFAULT_FIRMWARE); 1524 1524 MODULE_FIRMWARE(XC3028L_DEFAULT_FIRMWARE);
+1 -1
drivers/media/tuners/tuner-xc2028.h
··· 2 2 * SPDX-License-Identifier: GPL-2.0 3 3 * tuner-xc2028 4 4 * 5 - * Copyright (c) 2007-2008 Mauro Carvalho Chehab (mchehab@infradead.org) 5 + * Copyright (c) 2007-2008 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 */ 7 7 8 8 #ifndef __TUNER_XC2028_H__
+1 -1
drivers/media/usb/em28xx/em28xx-camera.c
··· 2 2 // 3 3 // em28xx-camera.c - driver for Empia EM25xx/27xx/28xx USB video capture devices 4 4 // 5 - // Copyright (C) 2009 Mauro Carvalho Chehab <mchehab@infradead.org> 5 + // Copyright (C) 2009 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 // Copyright (C) 2013 Frank Schäfer <fschaefer.oss@googlemail.com> 7 7 // 8 8 // This program is free software; you can redistribute it and/or modify
+1 -1
drivers/media/usb/em28xx/em28xx-cards.c
··· 5 5 // 6 6 // Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it> 7 7 // Markus Rechberger <mrechberger@gmail.com> 8 - // Mauro Carvalho Chehab <mchehab@infradead.org> 8 + // Mauro Carvalho Chehab <mchehab@kernel.org> 9 9 // Sascha Sommer <saschasommer@freenet.de> 10 10 // Copyright (C) 2012 Frank Schäfer <fschaefer.oss@googlemail.com> 11 11 //
+2 -2
drivers/media/usb/em28xx/em28xx-core.c
··· 4 4 // 5 5 // Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it> 6 6 // Markus Rechberger <mrechberger@gmail.com> 7 - // Mauro Carvalho Chehab <mchehab@infradead.org> 7 + // Mauro Carvalho Chehab <mchehab@kernel.org> 8 8 // Sascha Sommer <saschasommer@freenet.de> 9 9 // Copyright (C) 2012 Frank Schäfer <fschaefer.oss@googlemail.com> 10 10 // ··· 32 32 33 33 #define DRIVER_AUTHOR "Ludovico Cavedon <cavedon@sssup.it>, " \ 34 34 "Markus Rechberger <mrechberger@gmail.com>, " \ 35 - "Mauro Carvalho Chehab <mchehab@infradead.org>, " \ 35 + "Mauro Carvalho Chehab <mchehab@kernel.org>, " \ 36 36 "Sascha Sommer <saschasommer@freenet.de>" 37 37 38 38 MODULE_AUTHOR(DRIVER_AUTHOR);
+2 -2
drivers/media/usb/em28xx/em28xx-dvb.c
··· 2 2 // 3 3 // DVB device driver for em28xx 4 4 // 5 - // (c) 2008-2011 Mauro Carvalho Chehab <mchehab@infradead.org> 5 + // (c) 2008-2011 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 // 7 7 // (c) 2008 Devin Heitmueller <devin.heitmueller@gmail.com> 8 8 // - Fixes for the driver to properly work with HVR-950 ··· 63 63 #include "tc90522.h" 64 64 #include "qm1d1c0042.h" 65 65 66 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 66 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 67 67 MODULE_LICENSE("GPL v2"); 68 68 MODULE_DESCRIPTION(DRIVER_DESC " - digital TV interface"); 69 69 MODULE_VERSION(EM28XX_VERSION);
+1 -1
drivers/media/usb/em28xx/em28xx-i2c.c
··· 4 4 // 5 5 // Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it> 6 6 // Markus Rechberger <mrechberger@gmail.com> 7 - // Mauro Carvalho Chehab <mchehab@infradead.org> 7 + // Mauro Carvalho Chehab <mchehab@kernel.org> 8 8 // Sascha Sommer <saschasommer@freenet.de> 9 9 // Copyright (C) 2013 Frank Schäfer <fschaefer.oss@googlemail.com> 10 10 //
+1 -1
drivers/media/usb/em28xx/em28xx-input.c
··· 4 4 // 5 5 // Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it> 6 6 // Markus Rechberger <mrechberger@gmail.com> 7 - // Mauro Carvalho Chehab <mchehab@infradead.org> 7 + // Mauro Carvalho Chehab <mchehab@kernel.org> 8 8 // Sascha Sommer <saschasommer@freenet.de> 9 9 // 10 10 // This program is free software; you can redistribute it and/or modify
+2 -2
drivers/media/usb/em28xx/em28xx-video.c
··· 5 5 // 6 6 // Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it> 7 7 // Markus Rechberger <mrechberger@gmail.com> 8 - // Mauro Carvalho Chehab <mchehab@infradead.org> 8 + // Mauro Carvalho Chehab <mchehab@kernel.org> 9 9 // Sascha Sommer <saschasommer@freenet.de> 10 10 // Copyright (C) 2012 Frank Schäfer <fschaefer.oss@googlemail.com> 11 11 // ··· 44 44 45 45 #define DRIVER_AUTHOR "Ludovico Cavedon <cavedon@sssup.it>, " \ 46 46 "Markus Rechberger <mrechberger@gmail.com>, " \ 47 - "Mauro Carvalho Chehab <mchehab@infradead.org>, " \ 47 + "Mauro Carvalho Chehab <mchehab@kernel.org>, " \ 48 48 "Sascha Sommer <saschasommer@freenet.de>" 49 49 50 50 static unsigned int isoc_debug;
+1 -1
drivers/media/usb/em28xx/em28xx.h
··· 4 4 * 5 5 * Copyright (C) 2005 Markus Rechberger <mrechberger@gmail.com> 6 6 * Ludovico Cavedon <cavedon@sssup.it> 7 - * Mauro Carvalho Chehab <mchehab@infradead.org> 7 + * Mauro Carvalho Chehab <mchehab@kernel.org> 8 8 * Copyright (C) 2012 Frank Schäfer <fschaefer.oss@googlemail.com> 9 9 * 10 10 * Based on the em2800 driver from Sascha Sommer <saschasommer@freenet.de>
+1 -1
drivers/media/usb/gspca/zc3xx-reg.h
··· 1 1 /* 2 2 * zc030x registers 3 3 * 4 - * Copyright (c) 2008 Mauro Carvalho Chehab <mchehab@infradead.org> 4 + * Copyright (c) 2008 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 * 6 6 * The register aliases used here came from this driver: 7 7 * http://zc0302.sourceforge.net/zc0302.php
+1 -1
drivers/media/usb/tm6000/tm6000-cards.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // tm6000-cards.c - driver for TM5600/TM6000/TM6010 USB video capture devices 3 3 // 4 - // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org> 4 + // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 6 6 #include <linux/init.h> 7 7 #include <linux/module.h>
+1 -1
drivers/media/usb/tm6000/tm6000-core.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // tm6000-core.c - driver for TM5600/TM6000/TM6010 USB video capture devices 3 3 // 4 - // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org> 4 + // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 // 6 6 // Copyright (c) 2007 Michel Ludwig <michel.ludwig@gmail.com> 7 7 // - DVB-T support
+1 -1
drivers/media/usb/tm6000/tm6000-i2c.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // tm6000-i2c.c - driver for TM5600/TM6000/TM6010 USB video capture devices 3 3 // 4 - // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org> 4 + // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 // 6 6 // Copyright (c) 2007 Michel Ludwig <michel.ludwig@gmail.com> 7 7 // - Fix SMBus Read Byte command
+1 -1
drivers/media/usb/tm6000/tm6000-regs.h
··· 2 2 * SPDX-License-Identifier: GPL-2.0 3 3 * tm6000-regs.h - driver for TM5600/TM6000/TM6010 USB video capture devices 4 4 * 5 - * Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org> 5 + * Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 */ 7 7 8 8 /*
+1 -1
drivers/media/usb/tm6000/tm6000-usb-isoc.h
··· 2 2 * SPDX-License-Identifier: GPL-2.0 3 3 * tm6000-buf.c - driver for TM5600/TM6000/TM6010 USB video capture devices 4 4 * 5 - * Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org> 5 + * Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 */ 7 7 8 8 #include <linux/videodev2.h>
+1 -1
drivers/media/usb/tm6000/tm6000-video.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // tm6000-video.c - driver for TM5600/TM6000/TM6010 USB video capture devices 3 3 // 4 - // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org> 4 + // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 // 6 6 // Copyright (c) 2007 Michel Ludwig <michel.ludwig@gmail.com> 7 7 // - Fixed module load/unload
+1 -1
drivers/media/usb/tm6000/tm6000.h
··· 2 2 * SPDX-License-Identifier: GPL-2.0 3 3 * tm6000.h - driver for TM5600/TM6000/TM6010 USB video capture devices 4 4 * 5 - * Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org> 5 + * Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 * 7 7 * Copyright (c) 2007 Michel Ludwig <michel.ludwig@gmail.com> 8 8 * - DVB-T support
+2 -2
drivers/media/v4l2-core/v4l2-dev.c
··· 10 10 * 2 of the License, or (at your option) any later version. 11 11 * 12 12 * Authors: Alan Cox, <alan@lxorguk.ukuu.org.uk> (version 1) 13 - * Mauro Carvalho Chehab <mchehab@infradead.org> (version 2) 13 + * Mauro Carvalho Chehab <mchehab@kernel.org> (version 2) 14 14 * 15 15 * Fixes: 20000516 Claudio Matsuoka <claudio@conectiva.com> 16 16 * - Added procfs support ··· 1072 1072 subsys_initcall(videodev_init); 1073 1073 module_exit(videodev_exit) 1074 1074 1075 - MODULE_AUTHOR("Alan Cox, Mauro Carvalho Chehab <mchehab@infradead.org>"); 1075 + MODULE_AUTHOR("Alan Cox, Mauro Carvalho Chehab <mchehab@kernel.org>"); 1076 1076 MODULE_DESCRIPTION("Device registrar for Video4Linux drivers v2"); 1077 1077 MODULE_LICENSE("GPL"); 1078 1078 MODULE_ALIAS_CHARDEV_MAJOR(VIDEO_MAJOR);
+1 -1
drivers/media/v4l2-core/v4l2-ioctl.c
··· 9 9 * 2 of the License, or (at your option) any later version. 10 10 * 11 11 * Authors: Alan Cox, <alan@lxorguk.ukuu.org.uk> (version 1) 12 - * Mauro Carvalho Chehab <mchehab@infradead.org> (version 2) 12 + * Mauro Carvalho Chehab <mchehab@kernel.org> (version 2) 13 13 */ 14 14 15 15 #include <linux/mm.h>
+3 -3
drivers/media/v4l2-core/videobuf-core.c
··· 1 1 /* 2 2 * generic helper functions for handling video4linux capture buffers 3 3 * 4 - * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org> 4 + * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org> 5 5 * 6 6 * Highly based on video-buf written originally by: 7 7 * (c) 2001,02 Gerd Knorr <kraxel@bytesex.org> 8 - * (c) 2006 Mauro Carvalho Chehab, <mchehab@infradead.org> 8 + * (c) 2006 Mauro Carvalho Chehab, <mchehab@kernel.org> 9 9 * (c) 2006 Ted Walther and John Sokol 10 10 * 11 11 * This program is free software; you can redistribute it and/or modify ··· 38 38 module_param(debug, int, 0644); 39 39 40 40 MODULE_DESCRIPTION("helper module to manage video4linux buffers"); 41 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 41 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 42 42 MODULE_LICENSE("GPL"); 43 43 44 44 #define dprintk(level, fmt, arg...) \
+1 -1
drivers/media/v4l2-core/videobuf-dma-contig.c
··· 7 7 * Copyright (c) 2008 Magnus Damm 8 8 * 9 9 * Based on videobuf-vmalloc.c, 10 - * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org> 10 + * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org> 11 11 * 12 12 * This program is free software; you can redistribute it and/or modify 13 13 * it under the terms of the GNU General Public License as published by
+3 -3
drivers/media/v4l2-core/videobuf-dma-sg.c
··· 6 6 * into PAGE_SIZE chunks). They also assume the driver does not need 7 7 * to touch the video data. 8 8 * 9 - * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org> 9 + * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org> 10 10 * 11 11 * Highly based on video-buf written originally by: 12 12 * (c) 2001,02 Gerd Knorr <kraxel@bytesex.org> 13 - * (c) 2006 Mauro Carvalho Chehab, <mchehab@infradead.org> 13 + * (c) 2006 Mauro Carvalho Chehab, <mchehab@kernel.org> 14 14 * (c) 2006 Ted Walther and John Sokol 15 15 * 16 16 * This program is free software; you can redistribute it and/or modify ··· 48 48 module_param(debug, int, 0644); 49 49 50 50 MODULE_DESCRIPTION("helper module to manage video4linux dma sg buffers"); 51 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 51 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 52 52 MODULE_LICENSE("GPL"); 53 53 54 54 #define dprintk(level, fmt, arg...) \
+2 -2
drivers/media/v4l2-core/videobuf-vmalloc.c
··· 6 6 * into PAGE_SIZE chunks). They also assume the driver does not need 7 7 * to touch the video data. 8 8 * 9 - * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org> 9 + * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org> 10 10 * 11 11 * This program is free software; you can redistribute it and/or modify 12 12 * it under the terms of the GNU General Public License as published by ··· 41 41 module_param(debug, int, 0644); 42 42 43 43 MODULE_DESCRIPTION("helper module to manage video4linux vmalloc buffers"); 44 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 44 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 45 45 MODULE_LICENSE("GPL"); 46 46 47 47 #define dprintk(level, fmt, arg...) \
+13 -5
drivers/net/ethernet/broadcom/bcmsysport.c
··· 2144 2144 .ndo_select_queue = bcm_sysport_select_queue, 2145 2145 }; 2146 2146 2147 - static int bcm_sysport_map_queues(struct net_device *dev, 2147 + static int bcm_sysport_map_queues(struct notifier_block *nb, 2148 2148 struct dsa_notifier_register_info *info) 2149 2149 { 2150 - struct bcm_sysport_priv *priv = netdev_priv(dev); 2151 2150 struct bcm_sysport_tx_ring *ring; 2151 + struct bcm_sysport_priv *priv; 2152 2152 struct net_device *slave_dev; 2153 2153 unsigned int num_tx_queues; 2154 2154 unsigned int q, start, port; 2155 + struct net_device *dev; 2156 + 2157 + priv = container_of(nb, struct bcm_sysport_priv, dsa_notifier); 2158 + if (priv->netdev != info->master) 2159 + return 0; 2160 + 2161 + dev = info->master; 2155 2162 2156 2163 /* We can't be setting up queue inspection for non directly attached 2157 2164 * switches ··· 2181 2174 if (priv->is_lite) 2182 2175 netif_set_real_num_tx_queues(slave_dev, 2183 2176 slave_dev->num_tx_queues / 2); 2177 + 2184 2178 num_tx_queues = slave_dev->real_num_tx_queues; 2185 2179 2186 2180 if (priv->per_port_num_tx_queues && 2187 2181 priv->per_port_num_tx_queues != num_tx_queues) 2188 - netdev_warn(slave_dev, "asymetric number of per-port queues\n"); 2182 + netdev_warn(slave_dev, "asymmetric number of per-port queues\n"); 2189 2183 2190 2184 priv->per_port_num_tx_queues = num_tx_queues; 2191 2185 ··· 2209 2201 return 0; 2210 2202 } 2211 2203 2212 - static int bcm_sysport_dsa_notifier(struct notifier_block *unused, 2204 + static int bcm_sysport_dsa_notifier(struct notifier_block *nb, 2213 2205 unsigned long event, void *ptr) 2214 2206 { 2215 2207 struct dsa_notifier_register_info *info; ··· 2219 2211 2220 2212 info = ptr; 2221 2213 2222 - return notifier_from_errno(bcm_sysport_map_queues(info->master, info)); 2214 + return notifier_from_errno(bcm_sysport_map_queues(nb, info)); 2223 2215 } 2224 2216 2225 2217 #define REV_FMT "v%2x.%02x"
+1 -1
drivers/net/ethernet/freescale/ucc_geth_ethtool.c
··· 61 61 static const char tx_fw_stat_gstrings[][ETH_GSTRING_LEN] = { 62 62 "tx-single-collision", 63 63 "tx-multiple-collision", 64 - "tx-late-collsion", 64 + "tx-late-collision", 65 65 "tx-aborted-frames", 66 66 "tx-lost-frames", 67 67 "tx-carrier-sense-errors",
+23 -7
drivers/net/ethernet/marvell/mvpp2.c
··· 942 942 struct clk *pp_clk; 943 943 struct clk *gop_clk; 944 944 struct clk *mg_clk; 945 + struct clk *mg_core_clk; 945 946 struct clk *axi_clk; 946 947 947 948 /* List of pointers to port structures */ ··· 8769 8768 err = clk_prepare_enable(priv->mg_clk); 8770 8769 if (err < 0) 8771 8770 goto err_gop_clk; 8771 + 8772 + priv->mg_core_clk = devm_clk_get(&pdev->dev, "mg_core_clk"); 8773 + if (IS_ERR(priv->mg_core_clk)) { 8774 + priv->mg_core_clk = NULL; 8775 + } else { 8776 + err = clk_prepare_enable(priv->mg_core_clk); 8777 + if (err < 0) 8778 + goto err_mg_clk; 8779 + } 8772 8780 } 8773 8781 8774 8782 priv->axi_clk = devm_clk_get(&pdev->dev, "axi_clk"); 8775 8783 if (IS_ERR(priv->axi_clk)) { 8776 8784 err = PTR_ERR(priv->axi_clk); 8777 8785 if (err == -EPROBE_DEFER) 8778 - goto err_gop_clk; 8786 + goto err_mg_core_clk; 8779 8787 priv->axi_clk = NULL; 8780 8788 } else { 8781 8789 err = clk_prepare_enable(priv->axi_clk); 8782 8790 if (err < 0) 8783 - goto err_mg_core_clk; 8784 8792 } 8785 8793 8786 8794 /* Get system's tclk rate */ ··· 8803 8793 if (priv->hw_version == MVPP22) { 8804 8794 err = dma_set_mask(&pdev->dev, MVPP2_DESC_DMA_MASK); 8805 8795 if (err) 8806 - goto err_mg_clk; 8796 + goto err_axi_clk; 8807 8797 /* Sadly, the BM pools all share the same register to 8808 8798 * store the high 32 bits of their address. So they 8809 8799 * must all have the same high 32 bits, which forces ··· 8811 8801 */ 8812 8802 err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); 8813 8803 if (err) 8814 - goto err_mg_clk; 8804 + goto err_axi_clk; 8815 8805 } 8816 8806 8817 8807 /* Initialize network controller */ 8818 8808 err = mvpp2_init(pdev, priv); 8819 8809 if (err < 0) { 8820 8810 dev_err(&pdev->dev, "failed to initialize controller\n"); 8821 - goto err_mg_clk; 8811 + goto err_axi_clk; 8822 8812 } 8823 8813 8824 8814 /* Initialize ports */ ··· 8831 8821 if (priv->port_count == 0) { 8832 8822 dev_err(&pdev->dev, "no ports enabled\n"); 8833 8823 err = -ENODEV; 8834 - goto err_mg_clk; 8824 + goto err_axi_clk; 8835 8825 } 8836 8826 8837 8827 /* Statistics must be gathered regularly because some of them (like ··· 8859 8849 mvpp2_port_remove(priv->port_list[i]); 8860 8850 i++; 8861 8851 } 8862 - err_mg_clk: 8852 + err_axi_clk: 8863 8853 clk_disable_unprepare(priv->axi_clk); 8854 + 8855 + err_mg_core_clk: 8856 + if (priv->hw_version == MVPP22) 8857 + clk_disable_unprepare(priv->mg_core_clk); 8858 + err_mg_clk: 8864 8859 if (priv->hw_version == MVPP22) 8865 8860 clk_disable_unprepare(priv->mg_clk); 8866 8861 err_gop_clk: ··· 8912 8897 return 0; 8913 8898 8914 8899 clk_disable_unprepare(priv->axi_clk); 8900 + clk_disable_unprepare(priv->mg_core_clk); 8915 8901 clk_disable_unprepare(priv->mg_clk); 8916 8902 clk_disable_unprepare(priv->pp_clk); 8917 8903 clk_disable_unprepare(priv->gop_clk);
+1 -1
drivers/net/ethernet/mellanox/mlx4/main.c
··· 1317 1317 1318 1318 ret = mlx4_unbond_fs_rules(dev); 1319 1319 if (ret) 1320 - mlx4_warn(dev, "multifunction unbond for flow rules failedi (%d)\n", ret); 1320 + mlx4_warn(dev, "multifunction unbond for flow rules failed (%d)\n", ret); 1321 1321 ret1 = mlx4_unbond_mac_table(dev); 1322 1322 if (ret1) { 1323 1323 mlx4_warn(dev, "multifunction unbond for MAC table failed (%d)\n", ret1);
+5 -3
drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
··· 1007 1007 1008 1008 mutex_lock(&priv->state_lock); 1009 1009 1010 - if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) 1011 - goto out; 1012 - 1013 1010 new_channels.params = priv->channels.params; 1014 1011 mlx5e_trust_update_tx_min_inline_mode(priv, &new_channels.params); 1012 + 1013 + if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { 1014 + priv->channels.params = new_channels.params; 1015 + goto out; 1016 + } 1015 1017 1016 1018 /* Skip if tx_min_inline is the same */ 1017 1019 if (new_channels.params.tx_min_inline_mode ==
+3 -2
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 877 877 }; 878 878 879 879 static void mlx5e_build_rep_params(struct mlx5_core_dev *mdev, 880 - struct mlx5e_params *params) 880 + struct mlx5e_params *params, u16 mtu) 881 881 { 882 882 u8 cq_period_mode = MLX5_CAP_GEN(mdev, cq_period_start_from_cqe) ? 883 883 MLX5_CQ_PERIOD_MODE_START_FROM_CQE : 884 884 MLX5_CQ_PERIOD_MODE_START_FROM_EQE; 885 885 886 886 params->hard_mtu = MLX5E_ETH_HARD_MTU; 887 + params->sw_mtu = mtu; 887 888 params->log_sq_size = MLX5E_REP_PARAMS_LOG_SQ_SIZE; 888 889 params->rq_wq_type = MLX5_WQ_TYPE_LINKED_LIST; 889 890 params->log_rq_mtu_frames = MLX5E_REP_PARAMS_LOG_RQ_SIZE; ··· 932 931 933 932 priv->channels.params.num_channels = profile->max_nch(mdev); 934 933 935 - mlx5e_build_rep_params(mdev, &priv->channels.params); 934 + mlx5e_build_rep_params(mdev, &priv->channels.params, netdev->mtu); 936 935 mlx5e_build_rep_netdev(netdev); 937 936 938 937 mlx5e_timestamp_init(priv);
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
··· 290 290 291 291 if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { 292 292 netdev_err(priv->netdev, 293 - "\tCan't perform loobpack test while device is down\n"); 293 + "\tCan't perform loopback test while device is down\n"); 294 294 return -ENODEV; 295 295 } 296 296
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1864 1864 } 1865 1865 1866 1866 ip_proto = MLX5_GET(fte_match_set_lyr_2_4, headers_v, ip_protocol); 1867 - if (modify_ip_header && ip_proto != IPPROTO_TCP && ip_proto != IPPROTO_UDP) { 1867 + if (modify_ip_header && ip_proto != IPPROTO_TCP && 1868 + ip_proto != IPPROTO_UDP && ip_proto != IPPROTO_ICMP) { 1868 1869 pr_info("can't offload re-write of ip proto %d\n", ip_proto); 1869 1870 return false; 1870 1871 }
+10 -10
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
··· 255 255 dma_addr = dma_map_single(sq->pdev, skb_data, headlen, 256 256 DMA_TO_DEVICE); 257 257 if (unlikely(dma_mapping_error(sq->pdev, dma_addr))) 258 - return -ENOMEM; 258 + goto dma_unmap_wqe_err; 259 259 260 260 dseg->addr = cpu_to_be64(dma_addr); 261 261 dseg->lkey = sq->mkey_be; ··· 273 273 dma_addr = skb_frag_dma_map(sq->pdev, frag, 0, fsz, 274 274 DMA_TO_DEVICE); 275 275 if (unlikely(dma_mapping_error(sq->pdev, dma_addr))) 276 - return -ENOMEM; 276 + goto dma_unmap_wqe_err; 277 277 278 278 dseg->addr = cpu_to_be64(dma_addr); 279 279 dseg->lkey = sq->mkey_be; ··· 285 285 } 286 286 287 287 return num_dma; 288 + 289 + dma_unmap_wqe_err: 290 + mlx5e_dma_unmap_wqe_err(sq, num_dma); 291 + return -ENOMEM; 288 292 } 289 293 290 294 static inline void ··· 384 380 num_dma = mlx5e_txwqe_build_dsegs(sq, skb, skb_data, headlen, 385 381 (struct mlx5_wqe_data_seg *)cseg + ds_cnt); 386 382 if (unlikely(num_dma < 0)) 387 - goto dma_unmap_wqe_err; 383 + goto err_drop; 388 384 389 385 mlx5e_txwqe_complete(sq, skb, opcode, ds_cnt + num_dma, 390 386 num_bytes, num_dma, wi, cseg); 391 387 392 388 return NETDEV_TX_OK; 393 389 394 - dma_unmap_wqe_err: 390 + err_drop: 395 391 sq->stats.dropped++; 396 - mlx5e_dma_unmap_wqe_err(sq, wi->num_dma); 397 - 398 392 dev_kfree_skb_any(skb); 399 393 400 394 return NETDEV_TX_OK; ··· 647 645 num_dma = mlx5e_txwqe_build_dsegs(sq, skb, skb_data, headlen, 648 646 (struct mlx5_wqe_data_seg *)cseg + ds_cnt); 649 647 if (unlikely(num_dma < 0)) 650 - goto dma_unmap_wqe_err; 648 + goto err_drop; 651 649 652 650 mlx5e_txwqe_complete(sq, skb, opcode, ds_cnt + num_dma, 653 651 num_bytes, num_dma, wi, cseg); 654 652 655 653 return NETDEV_TX_OK; 656 654 657 - dma_unmap_wqe_err: 655 + err_drop: 658 656 sq->stats.dropped++; 659 - mlx5e_dma_unmap_wqe_err(sq, wi->num_dma); 660 - 661 657 dev_kfree_skb_any(skb); 662 658 663 659 return NETDEV_TX_OK;
+16 -10
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
··· 187 187 static void del_sw_hw_rule(struct fs_node *node); 188 188 static bool mlx5_flow_dests_cmp(struct mlx5_flow_destination *d1, 189 189 struct mlx5_flow_destination *d2); 190 + static void cleanup_root_ns(struct mlx5_flow_root_namespace *root_ns); 190 191 static struct mlx5_flow_rule * 191 192 find_flow_rule(struct fs_fte *fte, 192 193 struct mlx5_flow_destination *dest); ··· 482 481 483 482 if (rule->dest_attr.type == MLX5_FLOW_DESTINATION_TYPE_COUNTER && 484 483 --fte->dests_size) { 485 - modify_mask = BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_ACTION); 484 + modify_mask = BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_ACTION) | 485 + BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_FLOW_COUNTERS); 486 486 fte->action.action &= ~MLX5_FLOW_CONTEXT_ACTION_COUNT; 487 487 update_fte = true; 488 488 goto out; ··· 2353 2351 2354 2352 static int init_root_ns(struct mlx5_flow_steering *steering) 2355 2353 { 2354 + int err; 2355 + 2356 2356 steering->root_ns = create_root_ns(steering, FS_FT_NIC_RX); 2357 2357 if (!steering->root_ns) 2358 - goto cleanup; 2358 + return -ENOMEM; 2359 2359 2360 - if (init_root_tree(steering, &root_fs, &steering->root_ns->ns.node)) 2361 - goto cleanup; 2360 + err = init_root_tree(steering, &root_fs, &steering->root_ns->ns.node); 2361 + if (err) 2362 + goto out_err; 2362 2363 2363 2364 set_prio_attrs(steering->root_ns); 2364 - 2365 - if (create_anchor_flow_table(steering)) 2366 - goto cleanup; 2365 + err = create_anchor_flow_table(steering); 2366 + if (err) 2367 + goto out_err; 2367 2368 2368 2369 return 0; 2369 2370 2370 - cleanup: 2371 - mlx5_cleanup_fs(steering->dev); 2372 - return -ENOMEM; 2371 + out_err: 2372 + cleanup_root_ns(steering->root_ns); 2373 + steering->root_ns = NULL; 2374 + return err; 2373 2375 } 2374 2376 2375 2377 static void clean_tree(struct fs_node *node)
+5 -7
drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
··· 1718 1718 struct net_device *dev = mlxsw_sp_port->dev; 1719 1719 int err; 1720 1720 1721 - if (bridge_port->bridge_device->multicast_enabled) { 1722 - if (bridge_port->bridge_device->multicast_enabled) { 1723 - err = mlxsw_sp_port_smid_set(mlxsw_sp_port, mid->mid, 1724 - false); 1725 - if (err) 1726 - netdev_err(dev, "Unable to remove port from SMID\n"); 1727 - } 1721 + if (bridge_port->bridge_device->multicast_enabled && 1722 + !bridge_port->mrouter) { 1723 + err = mlxsw_sp_port_smid_set(mlxsw_sp_port, mid->mid, false); 1724 + if (err) 1725 + netdev_err(dev, "Unable to remove port from SMID\n"); 1728 1726 } 1729 1727 1730 1728 err = mlxsw_sp_port_remove_from_mid(mlxsw_sp_port, mid);
+8 -2
drivers/net/ethernet/netronome/nfp/flower/action.c
··· 183 183 nfp_fl_set_ipv4_udp_tun(struct nfp_fl_set_ipv4_udp_tun *set_tun, 184 184 const struct tc_action *action, 185 185 struct nfp_fl_pre_tunnel *pre_tun, 186 - enum nfp_flower_tun_type tun_type) 186 + enum nfp_flower_tun_type tun_type, 187 + struct net_device *netdev) 187 188 { 188 189 size_t act_size = sizeof(struct nfp_fl_set_ipv4_udp_tun); 189 190 struct ip_tunnel_info *ip_tun = tcf_tunnel_info(action); 190 191 u32 tmp_set_ip_tun_type_index = 0; 191 192 /* Currently support one pre-tunnel so index is always 0. */ 192 193 int pretun_idx = 0; 194 + struct net *net; 193 195 194 196 if (ip_tun->options_len) 195 197 return -EOPNOTSUPP; 198 + 199 + net = dev_net(netdev); 196 200 197 201 set_tun->head.jump_id = NFP_FL_ACTION_OPCODE_SET_IPV4_TUNNEL; 198 202 set_tun->head.len_lw = act_size >> NFP_FL_LW_SIZ; ··· 208 204 209 205 set_tun->tun_type_index = cpu_to_be32(tmp_set_ip_tun_type_index); 210 206 set_tun->tun_id = ip_tun->key.tun_id; 207 + set_tun->ttl = net->ipv4.sysctl_ip_default_ttl; 211 208 212 209 /* Complete pre_tunnel action. */ 213 210 pre_tun->ipv4_dst = ip_tun->key.u.ipv4.dst; ··· 516 511 *a_len += sizeof(struct nfp_fl_pre_tunnel); 517 512 518 513 set_tun = (void *)&nfp_fl->action_data[*a_len]; 519 - err = nfp_fl_set_ipv4_udp_tun(set_tun, a, pre_tun, *tun_type); 514 + err = nfp_fl_set_ipv4_udp_tun(set_tun, a, pre_tun, *tun_type, 515 + netdev); 520 516 if (err) 521 517 return err; 522 518 *a_len += sizeof(struct nfp_fl_set_ipv4_udp_tun);
+4 -1
drivers/net/ethernet/netronome/nfp/flower/cmsg.h
··· 190 190 __be16 reserved; 191 191 __be64 tun_id __packed; 192 192 __be32 tun_type_index; 193 - __be32 extra[3]; 193 + __be16 reserved2; 194 + u8 ttl; 195 + u8 reserved3; 196 + __be32 extra[2]; 194 197 }; 195 198 196 199 /* Metadata with L2 (1W/4B)
+1 -1
drivers/net/ethernet/netronome/nfp/flower/main.c
··· 360 360 } 361 361 362 362 SET_NETDEV_DEV(repr, &priv->nn->pdev->dev); 363 - nfp_net_get_mac_addr(app->pf, port); 363 + nfp_net_get_mac_addr(app->pf, repr, port); 364 364 365 365 cmsg_port_id = nfp_flower_cmsg_phys_port(phys_port); 366 366 err = nfp_repr_init(app, repr,
+1 -1
drivers/net/ethernet/netronome/nfp/nfp_app_nic.c
··· 69 69 if (err) 70 70 return err < 0 ? err : 0; 71 71 72 - nfp_net_get_mac_addr(app->pf, nn->port); 72 + nfp_net_get_mac_addr(app->pf, nn->dp.netdev, nn->port); 73 73 74 74 return 0; 75 75 }
+3 -1
drivers/net/ethernet/netronome/nfp/nfp_main.h
··· 171 171 int nfp_hwmon_register(struct nfp_pf *pf); 172 172 void nfp_hwmon_unregister(struct nfp_pf *pf); 173 173 174 - void nfp_net_get_mac_addr(struct nfp_pf *pf, struct nfp_port *port); 174 + void 175 + nfp_net_get_mac_addr(struct nfp_pf *pf, struct net_device *netdev, 176 + struct nfp_port *port); 175 177 176 178 bool nfp_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb); 177 179
+18 -13
drivers/net/ethernet/netronome/nfp/nfp_net_main.c
··· 67 67 /** 68 68 * nfp_net_get_mac_addr() - Get the MAC address. 69 69 * @pf: NFP PF handle 70 + * @netdev: net_device to set MAC address on 70 71 * @port: NFP port structure 71 72 * 72 73 * First try to get the MAC address from NSP ETH table. If that 73 74 * fails generate a random address. 74 75 */ 75 - void nfp_net_get_mac_addr(struct nfp_pf *pf, struct nfp_port *port) 76 + void 77 + nfp_net_get_mac_addr(struct nfp_pf *pf, struct net_device *netdev, 78 + struct nfp_port *port) 76 79 { 77 80 struct nfp_eth_table_port *eth_port; 78 81 79 82 eth_port = __nfp_port_get_eth_port(port); 80 83 if (!eth_port) { 81 - eth_hw_addr_random(port->netdev); 84 + eth_hw_addr_random(netdev); 82 85 return; 83 86 } 84 87 85 - ether_addr_copy(port->netdev->dev_addr, eth_port->mac_addr); 86 - ether_addr_copy(port->netdev->perm_addr, eth_port->mac_addr); 88 + ether_addr_copy(netdev->dev_addr, eth_port->mac_addr); 89 + ether_addr_copy(netdev->perm_addr, eth_port->mac_addr); 87 90 } 88 91 89 92 static struct nfp_eth_table_port * ··· 514 511 return PTR_ERR(mem); 515 512 } 516 513 517 - min_size = NFP_MAC_STATS_SIZE * (pf->eth_tbl->max_index + 1); 518 - pf->mac_stats_mem = nfp_rtsym_map(pf->rtbl, "_mac_stats", 519 - "net.macstats", min_size, 520 - &pf->mac_stats_bar); 521 - if (IS_ERR(pf->mac_stats_mem)) { 522 - if (PTR_ERR(pf->mac_stats_mem) != -ENOENT) { 523 - err = PTR_ERR(pf->mac_stats_mem); 524 - goto err_unmap_ctrl; 514 + if (pf->eth_tbl) { 515 + min_size = NFP_MAC_STATS_SIZE * (pf->eth_tbl->max_index + 1); 516 + pf->mac_stats_mem = nfp_rtsym_map(pf->rtbl, "_mac_stats", 517 + "net.macstats", min_size, 518 + &pf->mac_stats_bar); 519 + if (IS_ERR(pf->mac_stats_mem)) { 520 + if (PTR_ERR(pf->mac_stats_mem) != -ENOENT) { 521 + err = PTR_ERR(pf->mac_stats_mem); 522 + goto err_unmap_ctrl; 523 + } 524 + pf->mac_stats_mem = NULL; 525 525 } 526 - pf->mac_stats_mem = NULL; 527 526 } 528 527 529 528 pf->vf_cfg_mem = nfp_net_pf_map_rtsym(pf, "net.vfcfg",
+1 -1
drivers/net/ethernet/qlogic/qed/qed_ll2.c
··· 2370 2370 u8 flags = 0; 2371 2371 2372 2372 if (unlikely(skb->ip_summed != CHECKSUM_NONE)) { 2373 - DP_INFO(cdev, "Cannot transmit a checksumed packet\n"); 2373 + DP_INFO(cdev, "Cannot transmit a checksummed packet\n"); 2374 2374 return -EINVAL; 2375 2375 } 2376 2376
+1 -1
drivers/net/ethernet/qlogic/qed/qed_roce.c
··· 848 848 849 849 if (!(qp->resp_offloaded)) { 850 850 DP_NOTICE(p_hwfn, 851 - "The responder's qp should be offloded before requester's\n"); 851 + "The responder's qp should be offloaded before requester's\n"); 852 852 return -EINVAL; 853 853 } 854 854
+1 -1
drivers/net/ethernet/realtek/8139too.c
··· 2224 2224 struct rtl8139_private *tp = netdev_priv(dev); 2225 2225 const int irq = tp->pci_dev->irq; 2226 2226 2227 - disable_irq(irq); 2227 + disable_irq_nosync(irq); 2228 2228 rtl8139_interrupt(irq, dev); 2229 2229 enable_irq(irq); 2230 2230 }
+3 -2
drivers/net/ethernet/sfc/ef10.c
··· 4784 4784 * will set rule->filter_id to EFX_ARFS_FILTER_ID_PENDING, meaning that 4785 4785 * the rule is not removed by efx_rps_hash_del() below. 4786 4786 */ 4787 - ret = efx_ef10_filter_remove_internal(efx, 1U << spec->priority, 4788 - filter_idx, true) == 0; 4787 + if (ret) 4788 + ret = efx_ef10_filter_remove_internal(efx, 1U << spec->priority, 4789 + filter_idx, true) == 0; 4789 4790 /* While we can't safely dereference rule (we dropped the lock), we can 4790 4791 * still test it for NULL. 4791 4792 */
+2
drivers/net/ethernet/sfc/rx.c
··· 839 839 int rc; 840 840 841 841 rc = efx->type->filter_insert(efx, &req->spec, true); 842 + if (rc >= 0) 843 + rc %= efx->type->max_rx_ip_filters; 842 844 if (efx->rps_hash_table) { 843 845 spin_lock_bh(&efx->rps_hash_lock); 844 846 rule = efx_rps_hash_find(efx, &req->spec);
+2
drivers/net/ethernet/ti/cpsw.c
··· 1340 1340 cpsw_ale_add_ucast(cpsw->ale, priv->mac_addr, 1341 1341 HOST_PORT_NUM, ALE_VLAN | 1342 1342 ALE_SECURE, slave->port_vlan); 1343 + cpsw_ale_control_set(cpsw->ale, slave_port, 1344 + ALE_PORT_DROP_UNKNOWN_VLAN, 1); 1343 1345 } 1344 1346 1345 1347 static void soft_reset_slave(struct cpsw_slave *slave)
+10 -1
drivers/net/phy/phy_device.c
··· 535 535 536 536 /* Grab the bits from PHYIR1, and put them in the upper half */ 537 537 phy_reg = mdiobus_read(bus, addr, MII_PHYSID1); 538 - if (phy_reg < 0) 538 + if (phy_reg < 0) { 539 + /* if there is no device, return without an error so scanning 540 + * the bus works properly 541 + */ 542 + if (phy_reg == -EIO || phy_reg == -ENODEV) { 543 + *phy_id = 0xffffffff; 544 + return 0; 545 + } 546 + 539 547 return -EIO; 548 + } 540 549 541 550 *phy_id = (phy_reg & 0xffff) << 16; 542 551
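The phy_device.c change makes a bus read that fails with -EIO or -ENODEV report "no device here" (ID 0xffffffff) and success, so scanning continues past empty addresses instead of aborting. A minimal userspace model of that flow (the stub read and the 0x0141 register value are invented for illustration):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hypothetical bus-read stub: one address answers with a PHYSID1
 * value, everything else looks like an absent device. */
static int mdio_read_stub(int addr)
{
	return (addr == 1) ? 0x0141 : -ENODEV;
}

/* Mirrors the fixed get_phy_id() flow: "no device" style failures
 * yield id 0xffffffff and success; other failures stay -EIO. */
static int get_phy_id_sketch(int addr, uint32_t *phy_id)
{
	int reg = mdio_read_stub(addr);

	if (reg < 0) {
		if (reg == -EIO || reg == -ENODEV) {
			*phy_id = 0xffffffff;
			return 0;
		}
		return -EIO;
	}
	/* bits from PHYSID1 go in the upper half */
	*phy_id = ((uint32_t)reg & 0xffff) << 16;
	return 0;
}

static uint32_t id_at(int addr)
{
	uint32_t id = 0;

	get_phy_id_sketch(addr, &id);
	return id;
}
```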
+13
drivers/net/usb/qmi_wwan.c
··· 1098 1098 {QMI_FIXED_INTF(0x05c6, 0x9080, 8)}, 1099 1099 {QMI_FIXED_INTF(0x05c6, 0x9083, 3)}, 1100 1100 {QMI_FIXED_INTF(0x05c6, 0x9084, 4)}, 1101 + {QMI_FIXED_INTF(0x05c6, 0x90b2, 3)}, /* ublox R410M */ 1101 1102 {QMI_FIXED_INTF(0x05c6, 0x920d, 0)}, 1102 1103 {QMI_FIXED_INTF(0x05c6, 0x920d, 5)}, 1103 1104 {QMI_QUIRK_SET_DTR(0x05c6, 0x9625, 4)}, /* YUGA CLM920-NC5 */ ··· 1342 1341 if (!id->driver_info) { 1343 1342 dev_dbg(&intf->dev, "setting defaults for dynamic device id\n"); 1344 1343 id->driver_info = (unsigned long)&qmi_wwan_info; 1344 + } 1345 + 1346 + /* There are devices where the same interface number can be 1347 + * configured as different functions. We should only bind to 1348 + * vendor specific functions when matching on interface number 1349 + */ 1350 + if (id->match_flags & USB_DEVICE_ID_MATCH_INT_NUMBER && 1351 + desc->bInterfaceClass != USB_CLASS_VENDOR_SPEC) { 1352 + dev_dbg(&intf->dev, 1353 + "Rejecting interface number match for class %02x\n", 1354 + desc->bInterfaceClass); 1355 + return -ENODEV; 1345 1356 } 1346 1357 1347 1358 /* Quectel EC20 quirk where we've QMI on interface 4 instead of 0 */
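The qmi_wwan hunk refuses interface-number matches unless the interface class is vendor specific, because the same interface number can carry a different function in another device configuration. The gate reduces to (the `MATCH_INT_NUMBER` constant below is an illustrative stand-in for `USB_DEVICE_ID_MATCH_INT_NUMBER`):

```c
#include <assert.h>

#define USB_CLASS_VENDOR_SPEC	0xff
#define MATCH_INT_NUMBER	0x0400	/* illustrative stand-in */

/* A match made purely on interface number is only trusted when the
 * interface class is vendor specific; otherwise bind is refused
 * (-ENODEV in the driver). */
static int accept_interface(unsigned int match_flags,
			    unsigned char if_class)
{
	if ((match_flags & MATCH_INT_NUMBER) &&
	    if_class != USB_CLASS_VENDOR_SPEC)
		return 0;
	return 1;
}
```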
+20 -16
drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
··· 459 459 kfree(req); 460 460 } 461 461 462 - static void brcmf_fw_request_nvram_done(const struct firmware *fw, void *ctx) 462 + static int brcmf_fw_request_nvram_done(const struct firmware *fw, void *ctx) 463 463 { 464 464 struct brcmf_fw *fwctx = ctx; 465 465 struct brcmf_fw_item *cur; ··· 498 498 brcmf_dbg(TRACE, "nvram %p len %d\n", nvram, nvram_length); 499 499 cur->nv_data.data = nvram; 500 500 cur->nv_data.len = nvram_length; 501 - return; 501 + return 0; 502 502 503 503 fail: 504 - brcmf_dbg(TRACE, "failed: dev=%s\n", dev_name(fwctx->dev)); 505 - fwctx->done(fwctx->dev, -ENOENT, NULL); 506 - brcmf_fw_free_request(fwctx->req); 507 - kfree(fwctx); 504 + return -ENOENT; 508 505 } 509 506 510 507 static int brcmf_fw_request_next_item(struct brcmf_fw *fwctx, bool async) ··· 550 553 brcmf_dbg(TRACE, "enter: firmware %s %sfound\n", cur->path, 551 554 fw ? "" : "not "); 552 555 553 - if (fw) { 554 - if (cur->type == BRCMF_FW_TYPE_BINARY) 555 - cur->binary = fw; 556 - else if (cur->type == BRCMF_FW_TYPE_NVRAM) 557 - brcmf_fw_request_nvram_done(fw, fwctx); 558 - else 559 - release_firmware(fw); 560 - } else if (cur->type == BRCMF_FW_TYPE_NVRAM) { 561 - brcmf_fw_request_nvram_done(NULL, fwctx); 562 - } else if (!(cur->flags & BRCMF_FW_REQF_OPTIONAL)) { 556 + if (!fw) 563 557 ret = -ENOENT; 558 + 559 + switch (cur->type) { 560 + case BRCMF_FW_TYPE_NVRAM: 561 + ret = brcmf_fw_request_nvram_done(fw, fwctx); 562 + break; 563 + case BRCMF_FW_TYPE_BINARY: 564 + cur->binary = fw; 565 + break; 566 + default: 567 + /* something fishy here so bail out early */ 568 + brcmf_err("unknown fw type: %d\n", cur->type); 569 + release_firmware(fw); 570 + ret = -EINVAL; 564 571 goto fail; 565 572 } 573 + 574 + if (ret < 0 && !(cur->flags & BRCMF_FW_REQF_OPTIONAL)) 575 + goto fail; 566 576 567 577 do { 568 578 if (++fwctx->curpos == fwctx->req->n_items) {
+5 -8
drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
··· 8 8 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. 9 9 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 10 10 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 11 + * Copyright(c) 2018 Intel Corporation 11 12 * 12 13 * This program is free software; you can redistribute it and/or modify 13 14 * it under the terms of version 2 of the GNU General Public License as ··· 31 30 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. 32 31 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 33 32 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 34 - * Copyright(c) 2018 Intel Corporation 33 + * Copyright(c) 2018 Intel Corporation 35 34 * All rights reserved. 36 35 * 37 36 * Redistribution and use in source and binary forms, with or without ··· 750 749 } __packed; 751 750 752 751 #define IWL_SCAN_REQ_UMAC_SIZE_V8 sizeof(struct iwl_scan_req_umac) 753 - #define IWL_SCAN_REQ_UMAC_SIZE_V7 (sizeof(struct iwl_scan_req_umac) - \ 754 - 4 * sizeof(u8)) 755 - #define IWL_SCAN_REQ_UMAC_SIZE_V6 (sizeof(struct iwl_scan_req_umac) - \ 756 - 2 * sizeof(u8) - sizeof(__le16)) 757 - #define IWL_SCAN_REQ_UMAC_SIZE_V1 (sizeof(struct iwl_scan_req_umac) - \ 758 - 2 * sizeof(__le32) - 2 * sizeof(u8) - \ 759 - sizeof(__le16)) 752 + #define IWL_SCAN_REQ_UMAC_SIZE_V7 48 753 + #define IWL_SCAN_REQ_UMAC_SIZE_V6 44 754 + #define IWL_SCAN_REQ_UMAC_SIZE_V1 36 760 755 761 756 /** 762 757 * struct iwl_umac_scan_abort
+95 -16
drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
··· 76 76 #include "iwl-io.h" 77 77 #include "iwl-csr.h" 78 78 #include "fw/acpi.h" 79 + #include "fw/api/nvm-reg.h" 79 80 80 81 /* NVM offsets (in words) definitions */ 81 82 enum nvm_offsets { ··· 147 146 149, 153, 157, 161, 165, 169, 173, 177, 181 148 147 }; 149 148 150 - #define IWL_NUM_CHANNELS ARRAY_SIZE(iwl_nvm_channels) 151 - #define IWL_NUM_CHANNELS_EXT ARRAY_SIZE(iwl_ext_nvm_channels) 149 + #define IWL_NVM_NUM_CHANNELS ARRAY_SIZE(iwl_nvm_channels) 150 + #define IWL_NVM_NUM_CHANNELS_EXT ARRAY_SIZE(iwl_ext_nvm_channels) 152 151 #define NUM_2GHZ_CHANNELS 14 153 152 #define NUM_2GHZ_CHANNELS_EXT 14 154 153 #define FIRST_2GHZ_HT_MINUS 5 ··· 302 301 const u8 *nvm_chan; 303 302 304 303 if (cfg->nvm_type != IWL_NVM_EXT) { 305 - num_of_ch = IWL_NUM_CHANNELS; 304 + num_of_ch = IWL_NVM_NUM_CHANNELS; 306 305 nvm_chan = &iwl_nvm_channels[0]; 307 306 num_2ghz_channels = NUM_2GHZ_CHANNELS; 308 307 } else { 309 - num_of_ch = IWL_NUM_CHANNELS_EXT; 308 + num_of_ch = IWL_NVM_NUM_CHANNELS_EXT; 310 309 nvm_chan = &iwl_ext_nvm_channels[0]; 311 310 num_2ghz_channels = NUM_2GHZ_CHANNELS_EXT; 312 311 } ··· 721 720 if (cfg->nvm_type != IWL_NVM_EXT) 722 721 data = kzalloc(sizeof(*data) + 723 722 sizeof(struct ieee80211_channel) * 724 - IWL_NUM_CHANNELS, 723 + IWL_NVM_NUM_CHANNELS, 725 724 GFP_KERNEL); 726 725 else 727 726 data = kzalloc(sizeof(*data) + 728 727 sizeof(struct ieee80211_channel) * 729 - IWL_NUM_CHANNELS_EXT, 728 + IWL_NVM_NUM_CHANNELS_EXT, 730 729 GFP_KERNEL); 731 730 if (!data) 732 731 return NULL; ··· 843 842 return flags; 844 843 } 845 844 845 + struct regdb_ptrs { 846 + struct ieee80211_wmm_rule *rule; 847 + u32 token; 848 + }; 849 + 846 850 struct ieee80211_regdomain * 847 851 iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, 848 - int num_of_ch, __le32 *channels, u16 fw_mcc) 852 + int num_of_ch, __le32 *channels, u16 fw_mcc, 853 + u16 geo_info) 849 854 { 850 855 int ch_idx; 851 856 u16 ch_flags; 852 857 u32 reg_rule_flags, prev_reg_rule_flags = 0; 853 858 const u8 *nvm_chan = cfg->nvm_type == IWL_NVM_EXT ? 854 859 iwl_ext_nvm_channels : iwl_nvm_channels; 855 - struct ieee80211_regdomain *regd; 856 - int size_of_regd; 860 + struct ieee80211_regdomain *regd, *copy_rd; 861 + int size_of_regd, regd_to_copy, wmms_to_copy; 862 + int size_of_wmms = 0; 857 863 struct ieee80211_reg_rule *rule; 864 + struct ieee80211_wmm_rule *wmm_rule, *d_wmm, *s_wmm; 865 + struct regdb_ptrs *regdb_ptrs; 858 866 enum nl80211_band band; 859 867 int center_freq, prev_center_freq = 0; 860 - int valid_rules = 0; 868 + int valid_rules = 0, n_wmms = 0; 869 + int i; 861 870 bool new_rule; 862 871 int max_num_ch = cfg->nvm_type == IWL_NVM_EXT ? 863 - IWL_NUM_CHANNELS_EXT : IWL_NUM_CHANNELS; 872 + IWL_NVM_NUM_CHANNELS_EXT : IWL_NVM_NUM_CHANNELS; 864 873 865 874 if (WARN_ON_ONCE(num_of_ch > NL80211_MAX_SUPP_REG_RULES)) 866 875 return ERR_PTR(-EINVAL); ··· 886 875 sizeof(struct ieee80211_regdomain) + 887 876 num_of_ch * sizeof(struct ieee80211_reg_rule); 888 877 889 - regd = kzalloc(size_of_regd, GFP_KERNEL); 878 + if (geo_info & GEO_WMM_ETSI_5GHZ_INFO) 879 + size_of_wmms = 880 + num_of_ch * sizeof(struct ieee80211_wmm_rule); 881 + 882 + regd = kzalloc(size_of_regd + size_of_wmms, GFP_KERNEL); 890 883 if (!regd) 891 884 return ERR_PTR(-ENOMEM); 885 + 886 + regdb_ptrs = kcalloc(num_of_ch, sizeof(*regdb_ptrs), GFP_KERNEL); 887 + if (!regdb_ptrs) { 888 + copy_rd = ERR_PTR(-ENOMEM); 889 + goto out; 890 + } 891 + 892 + /* set alpha2 from FW.
*/ 893 + regd->alpha2[0] = fw_mcc >> 8; 894 + regd->alpha2[1] = fw_mcc & 0xff; 895 + 896 + wmm_rule = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd); 892 897 893 898 for (ch_idx = 0; ch_idx < num_of_ch; ch_idx++) { 894 899 ch_flags = (u16)__le32_to_cpup(channels + ch_idx); ··· 954 927 955 928 iwl_nvm_print_channel_flags(dev, IWL_DL_LAR, 956 929 nvm_chan[ch_idx], ch_flags); 930 + 931 + if (!(geo_info & GEO_WMM_ETSI_5GHZ_INFO) || 932 + band == NL80211_BAND_2GHZ) 933 + continue; 934 + 935 + if (!reg_query_regdb_wmm(regd->alpha2, center_freq, 936 + &regdb_ptrs[n_wmms].token, wmm_rule)) { 937 + /* Add only new rules */ 938 + for (i = 0; i < n_wmms; i++) { 939 + if (regdb_ptrs[i].token == 940 + regdb_ptrs[n_wmms].token) { 941 + rule->wmm_rule = regdb_ptrs[i].rule; 942 + break; 943 + } 944 + } 945 + if (i == n_wmms) { 946 + rule->wmm_rule = wmm_rule; 947 + regdb_ptrs[n_wmms++].rule = wmm_rule; 948 + wmm_rule++; 949 + } 950 + } 957 951 } 958 952 959 953 regd->n_reg_rules = valid_rules; 954 + regd->n_wmm_rules = n_wmms; 960 955 961 - /* set alpha2 from FW. */ 962 - regd->alpha2[0] = fw_mcc >> 8; 963 - regd->alpha2[1] = fw_mcc & 0xff; 956 + /* 957 + * Narrow down regdom for unused regulatory rules to prevent hole 958 + * between reg rules to wmm rules.
959 + */ 960 + regd_to_copy = sizeof(struct ieee80211_regdomain) + 961 + valid_rules * sizeof(struct ieee80211_reg_rule); 964 962 965 - return regd; 963 + wmms_to_copy = sizeof(struct ieee80211_wmm_rule) * n_wmms; 964 + 965 + copy_rd = kzalloc(regd_to_copy + wmms_to_copy, GFP_KERNEL); 966 + if (!copy_rd) { 967 + copy_rd = ERR_PTR(-ENOMEM); 968 + goto out; 969 + } 970 + 971 + memcpy(copy_rd, regd, regd_to_copy); 972 + memcpy((u8 *)copy_rd + regd_to_copy, (u8 *)regd + size_of_regd, 973 + wmms_to_copy); 974 + 975 + d_wmm = (struct ieee80211_wmm_rule *)((u8 *)copy_rd + regd_to_copy); 976 + s_wmm = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd); 977 + 978 + for (i = 0; i < regd->n_reg_rules; i++) { 979 + if (!regd->reg_rules[i].wmm_rule) 980 + continue; 981 + 982 + copy_rd->reg_rules[i].wmm_rule = d_wmm + 983 + (regd->reg_rules[i].wmm_rule - s_wmm) / 984 + sizeof(struct ieee80211_wmm_rule); 985 + } 986 + 987 + out: 988 + kfree(regdb_ptrs); 989 + kfree(regd); 990 + return copy_rd; 966 991 } 967 992 IWL_EXPORT_SYMBOL(iwl_parse_nvm_mcc_info);
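The iwl-nvm-parse.c rework above deduplicates WMM rules by the token returned from `reg_query_regdb_wmm()`: a rule is stored only when its token has not been seen before, otherwise the earlier rule is reused. The dedup loop in isolation (simplified, hypothetical data, no iwlwifi types):

```c
#include <assert.h>

#define MAX_RULES 16

/* Append each token's rule only once; a repeated token reuses the
 * slot recorded the first time, like regdb_ptrs[] in the patch. */
static int dedup_tokens(const int *tokens, int n, int *unique_out)
{
	int n_unique = 0;

	for (int i = 0; i < n; i++) {
		int j;

		for (j = 0; j < n_unique; j++)
			if (unique_out[j] == tokens[i])
				break;	/* already have a rule: reuse */
		if (j == n_unique)
			unique_out[n_unique++] = tokens[i];
	}
	return n_unique;
}

static int demo_count(void)
{
	int tokens[] = { 1, 2, 1, 3, 2 };
	int unique[MAX_RULES];

	return dedup_tokens(tokens, 5, unique);
}
```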
+4 -2
drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.h
··· 101 101 * 102 102 * This function parses the regulatory channel data received as a 103 103 * MCC_UPDATE_CMD command. It returns a newly allocation regulatory domain, 104 - * to be fed into the regulatory core. An ERR_PTR is returned on error. 104 + * to be fed into the regulatory core. In case the geo_info is set handle 105 + * accordingly. An ERR_PTR is returned on error. 105 106 * If not given to the regulatory core, the user is responsible for freeing 106 107 * the regdomain returned here with kfree. 107 108 */ 108 109 struct ieee80211_regdomain * 109 110 iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, 110 - int num_of_ch, __le32 *channels, u16 fw_mcc); 111 + int num_of_ch, __le32 *channels, u16 fw_mcc, 112 + u16 geo_info); 111 113 112 114 #endif /* __iwl_nvm_parse_h__ */
+2 -1
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 311 311 regd = iwl_parse_nvm_mcc_info(mvm->trans->dev, mvm->cfg, 312 312 __le32_to_cpu(resp->n_channels), 313 313 resp->channels, 314 - __le16_to_cpu(resp->mcc)); 314 + __le16_to_cpu(resp->mcc), 315 + __le16_to_cpu(resp->geo_info)); 315 316 /* Store the return source id */ 316 317 src_id = resp->source_id; 317 318 kfree(resp);
-15
drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c
··· 158 158 159 159 static u8 rtl_get_hwpg_single_ant_path(struct rtl_priv *rtlpriv) 160 160 { 161 - struct rtl_mod_params *mod_params = rtlpriv->cfg->mod_params; 162 - 163 - /* override ant_num / ant_path */ 164 - if (mod_params->ant_sel) { 165 - rtlpriv->btcoexist.btc_info.ant_num = 166 - (mod_params->ant_sel == 1 ? ANT_X2 : ANT_X1); 167 - 168 - rtlpriv->btcoexist.btc_info.single_ant_path = 169 - (mod_params->ant_sel == 1 ? 0 : 1); 170 - } 171 161 return rtlpriv->btcoexist.btc_info.single_ant_path; 172 162 } 173 163 ··· 168 178 169 179 static u8 rtl_get_hwpg_ant_num(struct rtl_priv *rtlpriv) 170 180 { 171 - struct rtl_mod_params *mod_params = rtlpriv->cfg->mod_params; 172 181 u8 num; 173 182 174 183 if (rtlpriv->btcoexist.btc_info.ant_num == ANT_X2) 175 184 num = 2; 176 185 else 177 186 num = 1; 178 - 179 - /* override ant_num / ant_path */ 180 - if (mod_params->ant_sel) 181 - num = (mod_params->ant_sel == 1 ? ANT_X2 : ANT_X1) + 1; 182 187 183 188 return num; 184 189 }
+7 -4
drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
··· 848 848 return false; 849 849 } 850 850 851 + if (rtlpriv->cfg->ops->get_btc_status()) 852 + rtlpriv->btcoexist.btc_ops->btc_power_on_setting(rtlpriv); 853 + 851 854 bytetmp = rtl_read_byte(rtlpriv, REG_MULTI_FUNC_CTRL); 852 855 rtl_write_byte(rtlpriv, REG_MULTI_FUNC_CTRL, bytetmp | BIT(3)); 853 856 ··· 2699 2696 rtlpriv->btcoexist.btc_info.bt_type = BT_RTL8723B; 2700 2697 rtlpriv->btcoexist.btc_info.ant_num = (value & 0x1); 2701 2698 rtlpriv->btcoexist.btc_info.single_ant_path = 2702 - (value & 0x40); /*0xc3[6]*/ 2699 + (value & 0x40 ? ANT_AUX : ANT_MAIN); /*0xc3[6]*/ 2703 2700 } else { 2704 2701 rtlpriv->btcoexist.btc_info.btcoexist = 0; 2705 2702 rtlpriv->btcoexist.btc_info.bt_type = BT_RTL8723B; 2706 2703 rtlpriv->btcoexist.btc_info.ant_num = ANT_X2; 2707 - rtlpriv->btcoexist.btc_info.single_ant_path = 0; 2704 + rtlpriv->btcoexist.btc_info.single_ant_path = ANT_MAIN; 2708 2705 } 2709 2706 2710 2707 /* override ant_num / ant_path */ 2711 2708 if (mod_params->ant_sel) { 2712 2709 rtlpriv->btcoexist.btc_info.ant_num = 2713 - (mod_params->ant_sel == 1 ? ANT_X2 : ANT_X1); 2710 + (mod_params->ant_sel == 1 ? ANT_X1 : ANT_X2); 2714 2711 2715 2712 rtlpriv->btcoexist.btc_info.single_ant_path = 2716 - (mod_params->ant_sel == 1 ? 0 : 1); 2713 + (mod_params->ant_sel == 1 ? ANT_AUX : ANT_MAIN); 2717 2714 } 2718 2715 } 2719 2716
+5
drivers/net/wireless/realtek/rtlwifi/wifi.h
··· 2823 2823 ANT_X1 = 1, 2824 2824 }; 2825 2825 2826 + enum bt_ant_path { 2827 + ANT_MAIN = 0, 2828 + ANT_AUX = 1, 2829 + }; 2830 + 2826 2831 enum bt_co_type { 2827 2832 BT_2WIRE = 0, 2828 2833 BT_ISSC_3WIRE = 1,
+1 -1
drivers/nvme/host/Kconfig
··· 27 27 28 28 config NVME_RDMA 29 29 tristate "NVM Express over Fabrics RDMA host driver" 30 - depends on INFINIBAND && BLOCK 30 + depends on INFINIBAND && INFINIBAND_ADDR_TRANS && BLOCK 31 31 select NVME_CORE 32 32 select NVME_FABRICS 33 33 select SG_POOL
+2 -25
drivers/nvme/host/core.c
··· 764 764 ret = PTR_ERR(meta); 765 765 goto out_unmap; 766 766 } 767 + req->cmd_flags |= REQ_INTEGRITY; 767 768 } 768 769 } 769 770 ··· 2998 2997 if (nvme_init_ns_head(ns, nsid, id)) 2999 2998 goto out_free_id; 3000 2999 nvme_setup_streams_ns(ctrl, ns); 3001 - 3002 - #ifdef CONFIG_NVME_MULTIPATH 3003 - /* 3004 - * If multipathing is enabled we need to always use the subsystem 3005 - * instance number for numbering our devices to avoid conflicts 3006 - * between subsystems that have multiple controllers and thus use 3007 - * the multipath-aware subsystem node and those that have a single 3008 - * controller and use the controller node directly. 3009 - */ 3010 - if (ns->head->disk) { 3011 - sprintf(disk_name, "nvme%dc%dn%d", ctrl->subsys->instance, 3012 - ctrl->cntlid, ns->head->instance); 3013 - flags = GENHD_FL_HIDDEN; 3014 - } else { 3015 - sprintf(disk_name, "nvme%dn%d", ctrl->subsys->instance, 3016 - ns->head->instance); 3017 - } 3018 - #else 3019 - /* 3020 - * But without the multipath code enabled, multiple controller per 3021 - * subsystems are visible as devices and thus we cannot use the 3022 - * subsystem instance. 3023 - */ 3024 - sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->head->instance); 3025 - #endif 3000 + nvme_set_disk_name(disk_name, ns, ctrl, &flags); 3026 3001 3027 3002 if ((ctrl->quirks & NVME_QUIRK_LIGHTNVM) && id->vs[0] == 0x1) { 3028 3003 if (nvme_nvm_register(ns, disk_name, node)) {
+6
drivers/nvme/host/fabrics.c
··· 668 668 ret = -ENOMEM; 669 669 goto out; 670 670 } 671 + kfree(opts->transport); 671 672 opts->transport = p; 672 673 break; 673 674 case NVMF_OPT_NQN: ··· 677 676 ret = -ENOMEM; 678 677 goto out; 679 678 } 679 + kfree(opts->subsysnqn); 680 680 opts->subsysnqn = p; 681 681 nqnlen = strlen(opts->subsysnqn); 682 682 if (nqnlen >= NVMF_NQN_SIZE) { ··· 700 698 ret = -ENOMEM; 701 699 goto out; 702 700 } 701 + kfree(opts->traddr); 703 702 opts->traddr = p; 704 703 break; 705 704 case NVMF_OPT_TRSVCID: ··· 709 706 ret = -ENOMEM; 710 707 goto out; 711 708 } 709 + kfree(opts->trsvcid); 712 710 opts->trsvcid = p; 713 711 break; 714 712 case NVMF_OPT_QUEUE_SIZE: ··· 796 792 ret = -EINVAL; 797 793 goto out; 798 794 } 795 + nvmf_host_put(opts->host); 799 796 opts->host = nvmf_host_add(p); 800 797 kfree(p); 801 798 if (!opts->host) { ··· 822 817 ret = -ENOMEM; 823 818 goto out; 824 819 } 820 + kfree(opts->host_traddr); 825 821 opts->host_traddr = p; 826 822 break; 827 823 case NVMF_OPT_HOST_ID:
+23 -1
drivers/nvme/host/multipath.c
··· 15 15 #include "nvme.h" 16 16 17 17 static bool multipath = true; 18 - module_param(multipath, bool, 0644); 18 + module_param(multipath, bool, 0444); 19 19 MODULE_PARM_DESC(multipath, 20 20 "turn on native support for multiple controllers per subsystem"); 21 + 22 + /* 23 + * If multipathing is enabled we need to always use the subsystem instance 24 + * number for numbering our devices to avoid conflicts between subsystems that 25 + * have multiple controllers and thus use the multipath-aware subsystem node 26 + * and those that have a single controller and use the controller node 27 + * directly. 28 + */ 29 + void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns, 30 + struct nvme_ctrl *ctrl, int *flags) 31 + { 32 + if (!multipath) { 33 + sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->head->instance); 34 + } else if (ns->head->disk) { 35 + sprintf(disk_name, "nvme%dc%dn%d", ctrl->subsys->instance, 36 + ctrl->cntlid, ns->head->instance); 37 + *flags = GENHD_FL_HIDDEN; 38 + } else { 39 + sprintf(disk_name, "nvme%dn%d", ctrl->subsys->instance, 40 + ns->head->instance); 41 + } 42 + } 21 43 22 44 void nvme_failover_req(struct request *req) 23 45 {
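The new `nvme_set_disk_name()` centralizes the three naming cases that core.c previously open-coded behind `#ifdef CONFIG_NVME_MULTIPATH`. Its decision table can be modeled directly (the struct below is a simplified stand-in, not the real nvme types):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

struct ns_info {
	int subsys_instance, ctrl_instance, cntlid, head_instance;
	int multipath, head_has_disk;
};

static void set_disk_name(char *name, const struct ns_info *ns,
			  int *hidden)
{
	if (!ns->multipath) {
		/* no multipath: each controller is its own device */
		sprintf(name, "nvme%dn%d", ns->ctrl_instance,
			ns->head_instance);
	} else if (ns->head_has_disk) {
		/* per-controller path under a multipath head: hidden */
		sprintf(name, "nvme%dc%dn%d", ns->subsys_instance,
			ns->cntlid, ns->head_instance);
		*hidden = 1;
	} else {
		sprintf(name, "nvme%dn%d", ns->subsys_instance,
			ns->head_instance);
	}
}

static const char *name_for(int multipath, int head_has_disk)
{
	static char buf[32];
	struct ns_info ns = {
		.subsys_instance = 0, .ctrl_instance = 1,
		.cntlid = 3, .head_instance = 2,
		.multipath = multipath, .head_has_disk = head_has_disk,
	};
	int hidden = 0;

	set_disk_name(buf, &ns, &hidden);
	return buf;
}
```

Using the subsystem instance in the multipath cases avoids name collisions between subsystems with many controllers and single-controller ones.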
+12
drivers/nvme/host/nvme.h
··· 436 436 extern const struct block_device_operations nvme_ns_head_ops; 437 437 438 438 #ifdef CONFIG_NVME_MULTIPATH 439 + void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns, 440 + struct nvme_ctrl *ctrl, int *flags); 439 441 void nvme_failover_req(struct request *req); 440 442 bool nvme_req_needs_failover(struct request *req, blk_status_t error); 441 443 void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl); ··· 463 461 } 464 462 465 463 #else 464 + /* 465 + * Without the multipath code enabled, multiple controller per subsystems are 466 + * visible as devices and thus we cannot use the subsystem instance. 467 + */ 468 + static inline void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns, 469 + struct nvme_ctrl *ctrl, int *flags) 470 + { 471 + sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->head->instance); 472 + } 473 + 466 474 static inline void nvme_failover_req(struct request *req) 467 475 { 468 476 }
+1 -1
drivers/nvme/target/Kconfig
··· 27 27 28 28 config NVME_TARGET_RDMA 29 29 tristate "NVMe over Fabrics RDMA target support" 30 - depends on INFINIBAND 30 + depends on INFINIBAND && INFINIBAND_ADDR_TRANS 31 31 depends on NVME_TARGET 32 32 select SGL_ALLOC 33 33 help
+6
drivers/nvme/target/loop.c
··· 469 469 nvme_stop_ctrl(&ctrl->ctrl); 470 470 nvme_loop_shutdown_ctrl(ctrl); 471 471 472 + if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) { 473 + /* state change failure should never happen */ 474 + WARN_ON_ONCE(1); 475 + return; 476 + } 477 + 472 478 ret = nvme_loop_configure_admin_queue(ctrl); 473 479 if (ret) 474 480 goto out_disable;
+1 -1
drivers/parisc/ccio-dma.c
··· 1263 1263 * I/O Page Directory, the resource map, and initalizing the 1264 1264 * U2/Uturn chip into virtual mode. 1265 1265 */ 1266 - static void 1266 + static void __init 1267 1267 ccio_ioc_init(struct ioc *ioc) 1268 1268 { 1269 1269 int i;
+1 -1
drivers/platform/x86/Kconfig
··· 154 154 depends on ACPI_VIDEO || ACPI_VIDEO = n 155 155 depends on RFKILL || RFKILL = n 156 156 depends on SERIO_I8042 157 - select DELL_SMBIOS 157 + depends on DELL_SMBIOS 158 158 select POWER_SUPPLY 159 159 select LEDS_CLASS 160 160 select NEW_LEDS
+3 -1
drivers/platform/x86/asus-wireless.c
··· 178 178 { 179 179 struct asus_wireless_data *data = acpi_driver_data(adev); 180 180 181 - if (data->wq) 181 + if (data->wq) { 182 + devm_led_classdev_unregister(&adev->dev, &data->led); 182 183 destroy_workqueue(data->wq); 184 + } 183 185 return 0; 184 186 } 185 187
+2
drivers/remoteproc/qcom_q6v5_pil.c
··· 1083 1083 dev_err(qproc->dev, "unable to resolve mba region\n"); 1084 1084 return ret; 1085 1085 } 1086 + of_node_put(node); 1086 1087 1087 1088 qproc->mba_phys = r.start; 1088 1089 qproc->mba_size = resource_size(&r); ··· 1101 1100 dev_err(qproc->dev, "unable to resolve mpss region\n"); 1102 1101 return ret; 1103 1102 } 1103 + of_node_put(node); 1104 1104 1105 1105 qproc->mpss_phys = qproc->mpss_reloc = r.start; 1106 1106 qproc->mpss_size = resource_size(&r);
+2 -2
drivers/remoteproc/remoteproc_core.c
··· 1163 1163 if (ret) 1164 1164 return ret; 1165 1165 1166 - ret = rproc_stop(rproc, false); 1166 + ret = rproc_stop(rproc, true); 1167 1167 if (ret) 1168 1168 goto unlock_mutex; 1169 1169 ··· 1316 1316 if (!atomic_dec_and_test(&rproc->power)) 1317 1317 goto out; 1318 1318 1319 - ret = rproc_stop(rproc, true); 1319 + ret = rproc_stop(rproc, false); 1320 1320 if (ret) { 1321 1321 atomic_inc(&rproc->power); 1322 1322 goto out;
+2
drivers/rpmsg/rpmsg_char.c
··· 581 581 unregister_chrdev_region(rpmsg_major, RPMSG_DEV_MAX); 582 582 } 583 583 module_exit(rpmsg_chrdev_exit); 584 + 585 + MODULE_ALIAS("rpmsg:rpmsg_chrdev"); 584 586 MODULE_LICENSE("GPL v2");
+1 -1
drivers/sbus/char/oradax.c
··· 3 3 * 4 4 * This program is free software: you can redistribute it and/or modify 5 5 * it under the terms of the GNU General Public License as published by 6 - * the Free Software Foundation, either version 3 of the License, or 6 + * the Free Software Foundation, either version 2 of the License, or 7 7 * (at your option) any later version. 8 8 * 9 9 * This program is distributed in the hope that it will be useful,
+1 -2
drivers/scsi/isci/port_config.c
··· 291 291 * Note: We have not moved the current phy_index so we will actually 292 292 * compare the startting phy with itself. 293 293 * This is expected and required to add the phy to the port. */ 294 - while (phy_index < SCI_MAX_PHYS) { 294 + for (; phy_index < SCI_MAX_PHYS; phy_index++) { 295 295 if ((phy_mask & (1 << phy_index)) == 0) 296 296 continue; 297 297 sci_phy_get_sas_address(&ihost->phys[phy_index], ··· 311 311 &ihost->phys[phy_index]); 312 312 313 313 assigned_phy_mask |= (1 << phy_index); 314 - phy_index++; 315 314 } 316 315 317 316 }
+5 -2
drivers/scsi/storvsc_drv.c
··· 1722 1722 max_targets = STORVSC_MAX_TARGETS; 1723 1723 max_channels = STORVSC_MAX_CHANNELS; 1724 1724 /* 1725 - * On Windows8 and above, we support sub-channels for storage. 1725 + * On Windows8 and above, we support sub-channels for storage 1726 + * on SCSI and FC controllers. 1726 1727 * The number of sub-channels offerred is based on the number of 1727 1728 * VCPUs in the guest. 1728 1729 */ 1729 - max_sub_channels = (num_cpus / storvsc_vcpus_per_sub_channel); 1730 + if (!dev_is_ide) 1731 + max_sub_channels = 1732 + (num_cpus - 1) / storvsc_vcpus_per_sub_channel; 1730 1733 } 1731 1734 1732 1735 scsi_driver.can_queue = (max_outstanding_req_per_channel *
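The storvsc change both restricts sub-channels to non-IDE devices and changes the budget from `num_cpus / N` to `(num_cpus - 1) / N`, where N is `storvsc_vcpus_per_sub_channel`. As arithmetic (the IDE early-return is a simplification of the driver's flow):

```c
#include <assert.h>

/* Sub-channel budget after the fix: non-IDE devices get
 * (num_cpus - 1) / vcpus_per_sub_channel, IDE devices get none. */
static int max_sub_channels(int num_cpus, int vcpus_per_sub,
			    int dev_is_ide)
{
	if (dev_is_ide)
		return 0;
	return (num_cpus - 1) / vcpus_per_sub;
}
```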
+1 -1
drivers/staging/media/imx/imx-media-csi.c
··· 1799 1799 priv->dev->of_node = pdata->of_node; 1800 1800 pinctrl = devm_pinctrl_get_select_default(priv->dev); 1801 1801 if (IS_ERR(pinctrl)) { 1802 - ret = PTR_ERR(priv->vdev); 1802 + ret = PTR_ERR(pinctrl); 1803 1803 dev_dbg(priv->dev, 1804 1804 "devm_pinctrl_get_select_default() failed: %d\n", ret); 1805 1805 if (ret != -ENODEV)
+4 -4
drivers/target/target_core_iblock.c
··· 427 427 { 428 428 struct se_device *dev = cmd->se_dev; 429 429 struct scatterlist *sg = &cmd->t_data_sg[0]; 430 - unsigned char *buf, zero = 0x00, *p = &zero; 431 - int rc, ret; 430 + unsigned char *buf, *not_zero; 431 + int ret; 432 432 433 433 buf = kmap(sg_page(sg)) + sg->offset; 434 434 if (!buf) ··· 437 437 * Fall back to block_execute_write_same() slow-path if 438 438 * incoming WRITE_SAME payload does not contain zeros. 439 439 */ 440 - rc = memcmp(buf, p, cmd->data_length); 440 + not_zero = memchr_inv(buf, 0x00, cmd->data_length); 441 441 kunmap(sg_page(sg)); 442 442 443 - if (rc) 443 + if (not_zero) 444 444 return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 445 445 446 446 ret = blkdev_issue_zeroout(bdev,
+3 -1
drivers/usb/core/config.c
··· 191 191 static const unsigned short high_speed_maxpacket_maxes[4] = { 192 192 [USB_ENDPOINT_XFER_CONTROL] = 64, 193 193 [USB_ENDPOINT_XFER_ISOC] = 1024, 194 - [USB_ENDPOINT_XFER_BULK] = 512, 194 + 195 + /* Bulk should be 512, but some devices use 1024: we will warn below */ 196 + [USB_ENDPOINT_XFER_BULK] = 1024, 195 197 [USB_ENDPOINT_XFER_INT] = 1024, 196 198 }; 197 199 static const unsigned short super_speed_maxpacket_maxes[4] = {
+2
drivers/usb/dwc2/core.h
··· 985 985 986 986 /* DWC OTG HW Release versions */ 987 987 #define DWC2_CORE_REV_2_71a 0x4f54271a 988 + #define DWC2_CORE_REV_2_72a 0x4f54272a 988 989 #define DWC2_CORE_REV_2_80a 0x4f54280a 989 990 #define DWC2_CORE_REV_2_90a 0x4f54290a 990 991 #define DWC2_CORE_REV_2_91a 0x4f54291a ··· 993 992 #define DWC2_CORE_REV_2_94a 0x4f54294a 994 993 #define DWC2_CORE_REV_3_00a 0x4f54300a 995 994 #define DWC2_CORE_REV_3_10a 0x4f54310a 995 + #define DWC2_CORE_REV_4_00a 0x4f54400a 996 996 #define DWC2_FS_IOT_REV_1_00a 0x5531100a 997 997 #define DWC2_HS_IOT_REV_1_00a 0x5532100a 998 998
+21
drivers/usb/dwc2/gadget.c
··· 3928 3928 if (index && !hs_ep->isochronous) 3929 3929 epctrl |= DXEPCTL_SETD0PID; 3930 3930 3931 + /* WA for Full speed ISOC IN in DDMA mode. 3932 + * By Clear NAK status of EP, core will send ZLP 3933 + * to IN token and assert NAK interrupt relying 3934 + * on TxFIFO status only 3935 + */ 3936 + 3937 + if (hsotg->gadget.speed == USB_SPEED_FULL && 3938 + hs_ep->isochronous && dir_in) { 3939 + /* The WA applies only to core versions from 2.72a 3940 + * to 4.00a (including both). Also for FS_IOT_1.00a 3941 + * and HS_IOT_1.00a. 3942 + */ 3943 + u32 gsnpsid = dwc2_readl(hsotg->regs + GSNPSID); 3944 + 3945 + if ((gsnpsid >= DWC2_CORE_REV_2_72a && 3946 + gsnpsid <= DWC2_CORE_REV_4_00a) || 3947 + gsnpsid == DWC2_FS_IOT_REV_1_00a || 3948 + gsnpsid == DWC2_HS_IOT_REV_1_00a) 3949 + epctrl |= DXEPCTL_CNAK; 3950 + } 3951 + 3931 3952 dev_dbg(hsotg->dev, "%s: write DxEPCTL=0x%08x\n", 3932 3953 __func__, epctrl); 3933 3954
+8 -5
drivers/usb/dwc2/hcd.c
··· 358 358 359 359 static int dwc2_vbus_supply_init(struct dwc2_hsotg *hsotg) 360 360 { 361 + int ret; 362 + 361 363 hsotg->vbus_supply = devm_regulator_get_optional(hsotg->dev, "vbus"); 362 - if (IS_ERR(hsotg->vbus_supply)) 363 - return 0; 364 + if (IS_ERR(hsotg->vbus_supply)) { 365 + ret = PTR_ERR(hsotg->vbus_supply); 366 + hsotg->vbus_supply = NULL; 367 + return ret == -ENODEV ? 0 : ret; 368 + } 364 369 365 370 return regulator_enable(hsotg->vbus_supply); 366 371 } ··· 4347 4342 4348 4343 spin_unlock_irqrestore(&hsotg->lock, flags); 4349 4344 4350 - dwc2_vbus_supply_init(hsotg); 4351 - 4352 - return 0; 4345 + return dwc2_vbus_supply_init(hsotg); 4353 4346 } 4354 4347 4355 4348 /*
+3 -1
drivers/usb/dwc2/pci.c
··· 141 141 goto err; 142 142 143 143 glue = devm_kzalloc(dev, sizeof(*glue), GFP_KERNEL); 144 - if (!glue) 144 + if (!glue) { 145 + ret = -ENOMEM; 145 146 goto err; 147 + } 146 148 147 149 ret = platform_device_add(dwc2); 148 150 if (ret) {
+2 -2
drivers/usb/dwc3/gadget.c
··· 166 166 dwc3_ep_inc_trb(&dep->trb_dequeue); 167 167 } 168 168 169 - void dwc3_gadget_del_and_unmap_request(struct dwc3_ep *dep, 169 + static void dwc3_gadget_del_and_unmap_request(struct dwc3_ep *dep, 170 170 struct dwc3_request *req, int status) 171 171 { 172 172 struct dwc3 *dwc = dep->dwc; ··· 1424 1424 dwc->lock); 1425 1425 1426 1426 if (!r->trb) 1427 - goto out1; 1427 + goto out0; 1428 1428 1429 1429 if (r->num_pending_sgs) { 1430 1430 struct dwc3_trb *trb;
+1 -1
drivers/usb/gadget/function/f_phonet.c
··· 221 221 netif_wake_queue(dev); 222 222 } 223 223 224 - static int pn_net_xmit(struct sk_buff *skb, struct net_device *dev) 224 + static netdev_tx_t pn_net_xmit(struct sk_buff *skb, struct net_device *dev) 225 225 { 226 226 struct phonet_port *port = netdev_priv(dev); 227 227 struct f_phonet *fp;
+2 -1
drivers/usb/host/ehci-mem.c
··· 73 73 if (!qh) 74 74 goto done; 75 75 qh->hw = (struct ehci_qh_hw *) 76 - dma_pool_zalloc(ehci->qh_pool, flags, &dma); 76 + dma_pool_alloc(ehci->qh_pool, flags, &dma); 77 77 if (!qh->hw) 78 78 goto fail; 79 + memset(qh->hw, 0, sizeof *qh->hw); 79 80 qh->qh_dma = dma; 80 81 // INIT_LIST_HEAD (&qh->qh_list); 81 82 INIT_LIST_HEAD (&qh->qtd_list);
+4 -2
drivers/usb/host/ehci-sched.c
··· 1287 1287 } else { 1288 1288 alloc_itd: 1289 1289 spin_unlock_irqrestore(&ehci->lock, flags); 1290 - itd = dma_pool_zalloc(ehci->itd_pool, mem_flags, 1290 + itd = dma_pool_alloc(ehci->itd_pool, mem_flags, 1291 1291 &itd_dma); 1292 1292 spin_lock_irqsave(&ehci->lock, flags); 1293 1293 if (!itd) { ··· 1297 1297 } 1298 1298 } 1299 1299 1300 + memset(itd, 0, sizeof(*itd)); 1300 1301 itd->itd_dma = itd_dma; 1301 1302 itd->frame = NO_FRAME; 1302 1303 list_add(&itd->itd_list, &sched->td_list); ··· 2081 2080 } else { 2082 2081 alloc_sitd: 2083 2082 spin_unlock_irqrestore(&ehci->lock, flags); 2084 - sitd = dma_pool_zalloc(ehci->sitd_pool, mem_flags, 2083 + sitd = dma_pool_alloc(ehci->sitd_pool, mem_flags, 2085 2084 &sitd_dma); 2086 2085 spin_lock_irqsave(&ehci->lock, flags); 2087 2086 if (!sitd) { ··· 2091 2090 } 2092 2091 } 2093 2092 2093 + memset(sitd, 0, sizeof(*sitd)); 2094 2094 sitd->sitd_dma = sitd_dma; 2095 2095 sitd->frame = NO_FRAME; 2096 2096 list_add(&sitd->sitd_list, &iso_sched->td_list);
+1
drivers/usb/host/xhci.c
··· 3621 3621 del_timer_sync(&virt_dev->eps[i].stop_cmd_timer); 3622 3622 } 3623 3623 xhci_debugfs_remove_slot(xhci, udev->slot_id); 3624 + virt_dev->udev = NULL; 3624 3625 ret = xhci_disable_slot(xhci, udev->slot_id); 3625 3626 if (ret) 3626 3627 xhci_free_virt_device(xhci, udev->slot_id);
+2 -1
drivers/usb/musb/musb_gadget.c
··· 417 417 req = next_request(musb_ep); 418 418 request = &req->request; 419 419 420 - trace_musb_req_tx(req); 421 420 csr = musb_readw(epio, MUSB_TXCSR); 422 421 musb_dbg(musb, "<== %s, txcsr %04x", musb_ep->end_point.name, csr); 423 422 ··· 454 455 if (request) { 455 456 u8 is_dma = 0; 456 457 bool short_packet = false; 458 + 459 + trace_musb_req_tx(req); 457 460 458 461 if (dma && (csr & MUSB_TXCSR_DMAENAB)) { 459 462 is_dma = 1;
+3 -1
drivers/usb/musb/musb_host.c
··· 990 990 /* set tx_reinit and schedule the next qh */ 991 991 ep->tx_reinit = 1; 992 992 } 993 - musb_start_urb(musb, is_in, next_qh); 993 + 994 + if (next_qh) 995 + musb_start_urb(musb, is_in, next_qh); 994 996 } 995 997 } 996 998
+5
drivers/usb/serial/option.c
··· 233 233 /* These Quectel products use Qualcomm's vendor ID */ 234 234 #define QUECTEL_PRODUCT_UC20 0x9003 235 235 #define QUECTEL_PRODUCT_UC15 0x9090 236 + /* These u-blox products use Qualcomm's vendor ID */ 237 + #define UBLOX_PRODUCT_R410M 0x90b2 236 238 /* These Yuga products use Qualcomm's vendor ID */ 237 239 #define YUGA_PRODUCT_CLM920_NC5 0x9625 238 240 ··· 1067 1065 /* Yuga products use Qualcomm vendor ID */ 1068 1066 { USB_DEVICE(QUALCOMM_VENDOR_ID, YUGA_PRODUCT_CLM920_NC5), 1069 1067 .driver_info = RSVD(1) | RSVD(4) }, 1068 + /* u-blox products using Qualcomm vendor ID */ 1069 + { USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R410M), 1070 + .driver_info = RSVD(1) | RSVD(3) }, 1070 1071 /* Quectel products using Quectel vendor ID */ 1071 1072 { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21), 1072 1073 .driver_info = RSVD(4) },
+35 -34
drivers/usb/serial/visor.c
··· 335 335 goto exit; 336 336 } 337 337 338 - if (retval == sizeof(*connection_info)) { 339 - connection_info = (struct visor_connection_info *) 340 - transfer_buffer; 341 - 342 - num_ports = le16_to_cpu(connection_info->num_ports); 343 - for (i = 0; i < num_ports; ++i) { 344 - switch ( 345 - connection_info->connections[i].port_function_id) { 346 - case VISOR_FUNCTION_GENERIC: 347 - string = "Generic"; 348 - break; 349 - case VISOR_FUNCTION_DEBUGGER: 350 - string = "Debugger"; 351 - break; 352 - case VISOR_FUNCTION_HOTSYNC: 353 - string = "HotSync"; 354 - break; 355 - case VISOR_FUNCTION_CONSOLE: 356 - string = "Console"; 357 - break; 358 - case VISOR_FUNCTION_REMOTE_FILE_SYS: 359 - string = "Remote File System"; 360 - break; 361 - default: 362 - string = "unknown"; 363 - break; 364 - } 365 - dev_info(dev, "%s: port %d, is for %s use\n", 366 - serial->type->description, 367 - connection_info->connections[i].port, string); 368 - } 338 + if (retval != sizeof(*connection_info)) { 339 + dev_err(dev, "Invalid connection information received from device\n"); 340 + retval = -ENODEV; 341 + goto exit; 369 342 } 370 - /* 371 - * Handle devices that report invalid stuff here. 372 - */ 343 + 344 + connection_info = (struct visor_connection_info *)transfer_buffer; 345 + 346 + num_ports = le16_to_cpu(connection_info->num_ports); 347 + 348 + /* Handle devices that report invalid stuff here. */ 373 349 if (num_ports == 0 || num_ports > 2) { 374 350 dev_warn(dev, "%s: No valid connect info available\n", 375 351 serial->type->description); 376 352 num_ports = 2; 377 353 } 378 354 355 + for (i = 0; i < num_ports; ++i) { 356 + switch (connection_info->connections[i].port_function_id) { 357 + case VISOR_FUNCTION_GENERIC: 358 + string = "Generic"; 359 + break; 360 + case VISOR_FUNCTION_DEBUGGER: 361 + string = "Debugger"; 362 + break; 363 + case VISOR_FUNCTION_HOTSYNC: 364 + string = "HotSync"; 365 + break; 366 + case VISOR_FUNCTION_CONSOLE: 367 + string = "Console"; 368 + break; 369 + case VISOR_FUNCTION_REMOTE_FILE_SYS: 370 + string = "Remote File System"; 371 + break; 372 + default: 373 + string = "unknown"; 374 + break; 375 + } 376 + dev_info(dev, "%s: port %d, is for %s use\n", 377 + serial->type->description, 378 + connection_info->connections[i].port, string); 379 + } 379 380 dev_info(dev, "%s: Number of ports: %d\n", serial->type->description, 380 381 num_ports); 381 382
+1
drivers/usb/typec/tcpm.c
··· 4652 4652 for (i = 0; i < ARRAY_SIZE(port->port_altmode); i++) 4653 4653 typec_unregister_altmode(port->port_altmode[i]); 4654 4654 typec_unregister_port(port->typec_port); 4655 + usb_role_switch_put(port->role_sw); 4655 4656 tcpm_debugfs_exit(port); 4656 4657 destroy_workqueue(port->wq); 4657 4658 }
+39 -8
drivers/usb/typec/tps6598x.c
··· 73 73 struct device *dev; 74 74 struct regmap *regmap; 75 75 struct mutex lock; /* device lock */ 76 + u8 i2c_protocol:1; 76 77 77 78 struct typec_port *port; 78 79 struct typec_partner *partner; ··· 81 80 struct typec_capability typec_cap; 82 81 }; 83 82 83 + static int 84 + tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len) 85 + { 86 + u8 data[len + 1]; 87 + int ret; 88 + 89 + if (!tps->i2c_protocol) 90 + return regmap_raw_read(tps->regmap, reg, val, len); 91 + 92 + ret = regmap_raw_read(tps->regmap, reg, data, sizeof(data)); 93 + if (ret) 94 + return ret; 95 + 96 + if (data[0] < len) 97 + return -EIO; 98 + 99 + memcpy(val, &data[1], len); 100 + return 0; 101 + } 102 + 84 103 static inline int tps6598x_read16(struct tps6598x *tps, u8 reg, u16 *val) 85 104 { 86 - return regmap_raw_read(tps->regmap, reg, val, sizeof(u16)); 105 + return tps6598x_block_read(tps, reg, val, sizeof(u16)); 87 106 } 88 107 89 108 static inline int tps6598x_read32(struct tps6598x *tps, u8 reg, u32 *val) 90 109 { 91 - return regmap_raw_read(tps->regmap, reg, val, sizeof(u32)); 110 + return tps6598x_block_read(tps, reg, val, sizeof(u32)); 92 111 } 93 112 94 113 static inline int tps6598x_read64(struct tps6598x *tps, u8 reg, u64 *val) 95 114 { 96 - return regmap_raw_read(tps->regmap, reg, val, sizeof(u64)); 115 + return tps6598x_block_read(tps, reg, val, sizeof(u64)); 97 116 } 98 117 99 118 static inline int tps6598x_write16(struct tps6598x *tps, u8 reg, u16 val) ··· 142 121 struct tps6598x_rx_identity_reg id; 143 122 int ret; 144 123 145 - ret = regmap_raw_read(tps->regmap, TPS_REG_RX_IDENTITY_SOP, 146 - &id, sizeof(id)); 124 + ret = tps6598x_block_read(tps, TPS_REG_RX_IDENTITY_SOP, 125 + &id, sizeof(id)); 147 126 if (ret) 148 127 return ret; 149 128 ··· 245 224 } while (val); 246 225 247 226 if (out_len) { 248 - ret = regmap_raw_read(tps->regmap, TPS_REG_DATA1, 249 - out_data, out_len); 227 + ret = tps6598x_block_read(tps, TPS_REG_DATA1, 228 + out_data, out_len); 250 229 if (ret) 251 230 return ret; 252 231 val = out_data[0]; 253 232 } else { 254 - ret = regmap_read(tps->regmap, TPS_REG_DATA1, &val); 233 + ret = tps6598x_block_read(tps, TPS_REG_DATA1, &val, sizeof(u8)); 255 234 if (ret) 256 235 return ret; 257 236 } ··· 405 384 return ret; 406 385 if (!vid) 407 386 return -ENODEV; 387 + 388 + /* 389 + * Checking can the adapter handle SMBus protocol. If it can not, the 390 + * driver needs to take care of block reads separately. 391 + * 392 + * FIXME: Testing with I2C_FUNC_I2C. regmap-i2c uses I2C protocol 393 + * unconditionally if the adapter has I2C_FUNC_I2C set. 394 + */ 395 + if (i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) 396 + tps->i2c_protocol = true; 408 397 409 398 ret = tps6598x_read32(tps, TPS_REG_STATUS, &status); 410 399 if (ret < 0)
+7
fs/btrfs/extent-tree.c
··· 3142 3142 struct rb_node *node; 3143 3143 int ret = 0; 3144 3144 3145 + spin_lock(&root->fs_info->trans_lock); 3145 3146 cur_trans = root->fs_info->running_transaction; 3147 + if (cur_trans) 3148 + refcount_inc(&cur_trans->use_count); 3149 + spin_unlock(&root->fs_info->trans_lock); 3146 3150 if (!cur_trans) 3147 3151 return 0; 3148 3152 ··· 3155 3151 head = btrfs_find_delayed_ref_head(delayed_refs, bytenr); 3156 3152 if (!head) { 3157 3153 spin_unlock(&delayed_refs->lock); 3154 + btrfs_put_transaction(cur_trans); 3158 3155 return 0; 3159 3156 } 3160 3157 ··· 3172 3167 mutex_lock(&head->mutex); 3173 3168 mutex_unlock(&head->mutex); 3174 3169 btrfs_put_delayed_ref_head(head); 3170 + btrfs_put_transaction(cur_trans); 3175 3171 return -EAGAIN; 3176 3172 } 3177 3173 spin_unlock(&delayed_refs->lock); ··· 3205 3199 } 3206 3200 spin_unlock(&head->lock); 3207 3201 mutex_unlock(&head->mutex); 3202 + btrfs_put_transaction(cur_trans); 3208 3203 return ret; 3209 3204 } 3210 3205
+1 -1
fs/btrfs/relocation.c
··· 1841 1841 old_bytenr = btrfs_node_blockptr(parent, slot); 1842 1842 blocksize = fs_info->nodesize; 1843 1843 old_ptr_gen = btrfs_node_ptr_generation(parent, slot); 1844 - btrfs_node_key_to_cpu(parent, &key, slot); 1844 + btrfs_node_key_to_cpu(parent, &first_key, slot); 1845 1845 1846 1846 if (level <= max_level) { 1847 1847 eb = path->nodes[level];
+4
fs/btrfs/send.c
··· 5236 5236 len = btrfs_file_extent_num_bytes(path->nodes[0], ei); 5237 5237 } 5238 5238 5239 + if (offset >= sctx->cur_inode_size) { 5240 + ret = 0; 5241 + goto out; 5242 + } 5239 5243 if (offset + len > sctx->cur_inode_size) 5240 5244 len = sctx->cur_inode_size - offset; 5241 5245 if (len == 0) {
+1 -1
fs/cifs/Kconfig
··· 197 197 198 198 config CIFS_SMB_DIRECT 199 199 bool "SMB Direct support (Experimental)" 200 - depends on CIFS=m && INFINIBAND || CIFS=y && INFINIBAND=y 200 + depends on CIFS=m && INFINIBAND && INFINIBAND_ADDR_TRANS || CIFS=y && INFINIBAND=y && INFINIBAND_ADDR_TRANS=y 201 201 help 202 202 Enables SMB Direct experimental support for SMB 3.0, 3.02 and 3.1.1. 203 203 SMB Direct allows transferring SMB packets over RDMA. If unsure,
+1 -1
fs/fs-writeback.c
··· 1961 1961 } 1962 1962 1963 1963 if (!list_empty(&wb->work_list)) 1964 - mod_delayed_work(bdi_wq, &wb->dwork, 0); 1964 + wb_wakeup(wb); 1965 1965 else if (wb_has_dirty_io(wb) && dirty_writeback_interval) 1966 1966 wb_wakeup_delayed(wb); 1967 1967
+8 -1
fs/xfs/libxfs/xfs_attr.c
··· 511 511 if (args->flags & ATTR_CREATE) 512 512 return retval; 513 513 retval = xfs_attr_shortform_remove(args); 514 - ASSERT(retval == 0); 514 + if (retval) 515 + return retval; 516 + /* 517 + * Since we have removed the old attr, clear ATTR_REPLACE so 518 + * that the leaf format add routine won't trip over the attr 519 + * not being around. 520 + */ 521 + args->flags &= ~ATTR_REPLACE; 515 522 } 516 523 517 524 if (args->namelen >= XFS_ATTR_SF_ENTSIZE_MAX ||
+4
fs/xfs/libxfs/xfs_bmap.c
··· 725 725 *logflagsp = 0; 726 726 if ((error = xfs_alloc_vextent(&args))) { 727 727 xfs_iroot_realloc(ip, -1, whichfork); 728 + ASSERT(ifp->if_broot == NULL); 729 + XFS_IFORK_FMT_SET(ip, whichfork, XFS_DINODE_FMT_EXTENTS); 728 730 xfs_btree_del_cursor(cur, XFS_BTREE_ERROR); 729 731 return error; 730 732 } 731 733 732 734 if (WARN_ON_ONCE(args.fsbno == NULLFSBLOCK)) { 733 735 xfs_iroot_realloc(ip, -1, whichfork); 736 + ASSERT(ifp->if_broot == NULL); 737 + XFS_IFORK_FMT_SET(ip, whichfork, XFS_DINODE_FMT_EXTENTS); 734 738 xfs_btree_del_cursor(cur, XFS_BTREE_ERROR); 735 739 return -ENOSPC; 736 740 }
+21
fs/xfs/libxfs/xfs_inode_buf.c
··· 466 466 return __this_address; 467 467 if (di_size > XFS_DFORK_DSIZE(dip, mp)) 468 468 return __this_address; 469 + if (dip->di_nextents) 470 + return __this_address; 469 471 /* fall through */ 470 472 case XFS_DINODE_FMT_EXTENTS: 471 473 case XFS_DINODE_FMT_BTREE: ··· 486 484 if (XFS_DFORK_Q(dip)) { 487 485 switch (dip->di_aformat) { 488 486 case XFS_DINODE_FMT_LOCAL: 487 + if (dip->di_anextents) 488 + return __this_address; 489 + /* fall through */ 489 490 case XFS_DINODE_FMT_EXTENTS: 490 491 case XFS_DINODE_FMT_BTREE: 491 492 break; 492 493 default: 493 494 return __this_address; 494 495 } 496 + } else { 497 + /* 498 + * If there is no fork offset, this may be a freshly-made inode 499 + * in a new disk cluster, in which case di_aformat is zeroed. 500 + * Otherwise, such an inode must be in EXTENTS format; this goes 501 + * for freed inodes as well. 502 + */ 503 + switch (dip->di_aformat) { 504 + case 0: 505 + case XFS_DINODE_FMT_EXTENTS: 506 + break; 507 + default: 508 + return __this_address; 509 + } 510 + if (dip->di_anextents) 511 + return __this_address; 495 512 } 496 513 497 514 /* only version 3 or greater inodes are extensively verified here */
+19 -5
fs/xfs/xfs_file.c
··· 778 778 if (error) 779 779 goto out_unlock; 780 780 } else if (mode & FALLOC_FL_INSERT_RANGE) { 781 - unsigned int blksize_mask = i_blocksize(inode) - 1; 781 + unsigned int blksize_mask = i_blocksize(inode) - 1; 782 + loff_t isize = i_size_read(inode); 782 783 783 - new_size = i_size_read(inode) + len; 784 784 if (offset & blksize_mask || len & blksize_mask) { 785 785 error = -EINVAL; 786 786 goto out_unlock; 787 787 } 788 788 789 - /* check the new inode size does not wrap through zero */ 790 - if (new_size > inode->i_sb->s_maxbytes) { 789 + /* 790 + * New inode size must not exceed ->s_maxbytes, accounting for 791 + * possible signed overflow. 792 + */ 793 + if (inode->i_sb->s_maxbytes - isize < len) { 791 794 error = -EFBIG; 792 795 goto out_unlock; 793 796 } 797 + new_size = isize + len; 794 798 795 799 /* Offset should be less than i_size */ 796 - if (offset >= i_size_read(inode)) { 800 + if (offset >= isize) { 797 801 error = -EINVAL; 798 802 goto out_unlock; 799 803 } ··· 880 876 struct file *dst_file, 881 877 u64 dst_loff) 882 878 { 879 + struct inode *srci = file_inode(src_file); 880 + u64 max_dedupe; 883 881 int error; 884 882 883 + /* 884 + * Since we have to read all these pages in to compare them, cut 885 + * it off at MAX_RW_COUNT/2 rounded down to the nearest block. 886 + * That means we won't do more than MAX_RW_COUNT IO per request. 887 + */ 888 + max_dedupe = (MAX_RW_COUNT >> 1) & ~(i_blocksize(srci) - 1); 889 + if (len > max_dedupe) 890 + len = max_dedupe; 885 891 error = xfs_reflink_remap_range(src_file, loff, dst_file, dst_loff, 886 892 len, true); 887 893 if (error)
+2 -2
include/dt-bindings/clock/stm32mp1-clks.h
··· 76 76 #define I2C6 63 77 77 #define USART1 64 78 78 #define RTCAPB 65 79 - #define TZC 66 79 + #define TZC1 66 80 80 #define TZPC 67 81 81 #define IWDG1 68 82 82 #define BSEC 69 ··· 123 123 #define CRC1 110 124 124 #define USBH 111 125 125 #define ETHSTP 112 126 + #define TZC2 113 126 127 127 128 /* Kernel clocks */ 128 129 #define SDMMC1_K 118 ··· 229 228 #define CK_MCO2 212 230 229 231 230 /* TRACE & DEBUG clocks */ 232 - #define DBG 213 233 231 #define CK_DBG 214 234 232 #define CK_TRACE 215 235 233
+1
include/kvm/arm_vgic.h
··· 131 131 u32 mpidr; /* GICv3 target VCPU */ 132 132 }; 133 133 u8 source; /* GICv2 SGIs only */ 134 + u8 active_source; /* GICv2 SGIs only */ 134 135 u8 priority; 135 136 enum vgic_irq_config config; /* Level or edge */ 136 137
+3 -1
include/linux/bpf.h
··· 31 31 void (*map_release)(struct bpf_map *map, struct file *map_file); 32 32 void (*map_free)(struct bpf_map *map); 33 33 int (*map_get_next_key)(struct bpf_map *map, void *key, void *next_key); 34 + void (*map_release_uref)(struct bpf_map *map); 34 35 35 36 /* funcs callable from userspace and from eBPF programs */ 36 37 void *(*map_lookup_elem)(struct bpf_map *map, void *key); ··· 352 351 struct bpf_prog **_prog, *__prog; \ 353 352 struct bpf_prog_array *_array; \ 354 353 u32 _ret = 1; \ 354 + preempt_disable(); \ 355 355 rcu_read_lock(); \ 356 356 _array = rcu_dereference(array); \ 357 357 if (unlikely(check_non_null && !_array))\ ··· 364 362 } \ 365 363 _out: \ 366 364 rcu_read_unlock(); \ 365 + preempt_enable_no_resched(); \ 367 366 _ret; \ 368 367 }) 369 368 ··· 437 434 int bpf_fd_array_map_update_elem(struct bpf_map *map, struct file *map_file, 438 435 void *key, void *value, u64 map_flags); 439 436 int bpf_fd_array_map_lookup_elem(struct bpf_map *map, void *key, u32 *value); 440 - void bpf_fd_array_map_clear(struct bpf_map *map); 441 437 int bpf_fd_htab_map_update_elem(struct bpf_map *map, struct file *map_file, 442 438 void *key, void *value, u64 map_flags); 443 439 int bpf_fd_htab_map_lookup_elem(struct bpf_map *map, void *key, u32 *value);
+3
include/linux/clk-provider.h
··· 765 765 int __clk_determine_rate(struct clk_hw *core, struct clk_rate_request *req); 766 766 int __clk_mux_determine_rate_closest(struct clk_hw *hw, 767 767 struct clk_rate_request *req); 768 + int clk_mux_determine_rate_flags(struct clk_hw *hw, 769 + struct clk_rate_request *req, 770 + unsigned long flags); 768 771 void clk_hw_reparent(struct clk_hw *hw, struct clk_hw *new_parent); 769 772 void clk_hw_set_rate_range(struct clk_hw *hw, unsigned long min_rate, 770 773 unsigned long max_rate);
+3 -1
include/linux/genhd.h
··· 368 368 part_stat_add(cpu, gendiskp, field, -subnd) 369 369 370 370 void part_in_flight(struct request_queue *q, struct hd_struct *part, 371 - unsigned int inflight[2]); 371 + unsigned int inflight[2]); 372 + void part_in_flight_rw(struct request_queue *q, struct hd_struct *part, 373 + unsigned int inflight[2]); 372 374 void part_dec_in_flight(struct request_queue *q, struct hd_struct *part, 373 375 int rw); 374 376 void part_inc_in_flight(struct request_queue *q, struct hd_struct *part,
+3 -9
include/linux/mlx5/driver.h
··· 1284 1284 }; 1285 1285 1286 1286 static inline const struct cpumask * 1287 - mlx5_get_vector_affinity(struct mlx5_core_dev *dev, int vector) 1287 + mlx5_get_vector_affinity_hint(struct mlx5_core_dev *dev, int vector) 1288 1288 { 1289 - const struct cpumask *mask; 1290 1289 struct irq_desc *desc; 1291 1290 unsigned int irq; 1292 1291 int eqn; 1293 1292 int err; 1294 1293 1295 - err = mlx5_vector2eqn(dev, MLX5_EQ_VEC_COMP_BASE + vector, &eqn, &irq); 1294 + err = mlx5_vector2eqn(dev, vector, &eqn, &irq); 1296 1295 if (err) 1297 1296 return NULL; 1298 1297 1299 1298 desc = irq_to_desc(irq); 1300 - #ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK 1301 - mask = irq_data_get_effective_affinity_mask(&desc->irq_data); 1302 - #else 1303 - mask = desc->irq_common_data.affinity; 1304 - #endif 1305 - return mask; 1299 + return desc->affinity_hint; 1306 1300 } 1307 1301 1308 1302 #endif /* MLX5_DRIVER_H */
+1 -1
include/linux/remoteproc.h
··· 569 569 void rproc_add_subdev(struct rproc *rproc, 570 570 struct rproc_subdev *subdev, 571 571 int (*probe)(struct rproc_subdev *subdev), 572 - void (*remove)(struct rproc_subdev *subdev, bool graceful)); 572 + void (*remove)(struct rproc_subdev *subdev, bool crashed)); 573 573 574 574 void rproc_remove_subdev(struct rproc *rproc, struct rproc_subdev *subdev); 575 575
+1 -1
include/linux/usb/composite.h
··· 52 52 #define USB_GADGET_DELAYED_STATUS 0x7fff /* Impossibly large value */ 53 53 54 54 /* big enough to hold our biggest descriptor */ 55 - #define USB_COMP_EP0_BUFSIZ 1024 55 + #define USB_COMP_EP0_BUFSIZ 4096 56 56 57 57 /* OS feature descriptor length <= 4kB */ 58 58 #define USB_COMP_EP0_OS_DESC_BUFSIZ 4096
+17
include/linux/wait_bit.h
··· 305 305 __ret; \ 306 306 }) 307 307 308 + /** 309 + * clear_and_wake_up_bit - clear a bit and wake up anyone waiting on that bit 310 + * 311 + * @bit: the bit of the word being waited on 312 + * @word: the word being waited on, a kernel virtual address 313 + * 314 + * You can use this helper if bitflags are manipulated atomically rather than 315 + * non-atomically under a lock. 316 + */ 317 + static inline void clear_and_wake_up_bit(int bit, void *word) 318 + { 319 + clear_bit_unlock(bit, word); 320 + /* See wake_up_bit() for which memory barrier you need to use. */ 321 + smp_mb__after_atomic(); 322 + wake_up_bit(word, bit); 323 + } 324 + 308 325 #endif /* _LINUX_WAIT_BIT_H */
+1 -1
include/media/i2c/tvp7002.h
··· 5 5 * Author: Santiago Nunez-Corrales <santiago.nunez@ridgerun.com> 6 6 * 7 7 * This code is partially based upon the TVP5150 driver 8 - * written by Mauro Carvalho Chehab (mchehab@infradead.org), 8 + * written by Mauro Carvalho Chehab <mchehab@kernel.org>, 9 9 * the TVP514x driver written by Vaibhav Hiremath <hvaibhav@ti.com> 10 10 * and the TVP7002 driver in the TI LSP 2.10.00.14 11 11 *
+2 -2
include/media/videobuf-core.h
··· 1 1 /* 2 2 * generic helper functions for handling video4linux capture buffers 3 3 * 4 - * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org> 4 + * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org> 5 5 * 6 6 * Highly based on video-buf written originally by: 7 7 * (c) 2001,02 Gerd Knorr <kraxel@bytesex.org> 8 - * (c) 2006 Mauro Carvalho Chehab, <mchehab@infradead.org> 8 + * (c) 2006 Mauro Carvalho Chehab, <mchehab@kernel.org> 9 9 * (c) 2006 Ted Walther and John Sokol 10 10 * 11 11 * This program is free software; you can redistribute it and/or modify
+2 -2
include/media/videobuf-dma-sg.h
··· 6 6 * into PAGE_SIZE chunks). They also assume the driver does not need 7 7 * to touch the video data. 8 8 * 9 - * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org> 9 + * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org> 10 10 * 11 11 * Highly based on video-buf written originally by: 12 12 * (c) 2001,02 Gerd Knorr <kraxel@bytesex.org> 13 - * (c) 2006 Mauro Carvalho Chehab, <mchehab@infradead.org> 13 + * (c) 2006 Mauro Carvalho Chehab, <mchehab@kernel.org> 14 14 * (c) 2006 Ted Walther and John Sokol 15 15 * 16 16 * This program is free software; you can redistribute it and/or modify
+1 -1
include/media/videobuf-vmalloc.h
··· 6 6 * into PAGE_SIZE chunks). They also assume the driver does not need 7 7 * to touch the video data. 8 8 * 9 - * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org> 9 + * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org> 10 10 * 11 11 * This program is free software; you can redistribute it and/or modify 12 12 * it under the terms of the GNU General Public License as published by
+1
include/net/tls.h
··· 148 148 struct scatterlist *partially_sent_record; 149 149 u16 partially_sent_offset; 150 150 unsigned long flags; 151 + bool in_tcp_sendpages; 151 152 152 153 u16 pending_open_record_frags; 153 154 int (*push_pending_record)(struct sock *sk, int flags);
+11 -3
include/trace/events/initcall.h
··· 31 31 TP_ARGS(func), 32 32 33 33 TP_STRUCT__entry( 34 - __field(initcall_t, func) 34 + /* 35 + * Use field_struct to avoid is_signed_type() 36 + * comparison of a function pointer 37 + */ 38 + __field_struct(initcall_t, func) 35 39 ), 36 40 37 41 TP_fast_assign( ··· 52 48 TP_ARGS(func, ret), 53 49 54 50 TP_STRUCT__entry( 55 - __field(initcall_t, func) 56 - __field(int, ret) 51 + /* 52 + * Use field_struct to avoid is_signed_type() 53 + * comparison of a function pointer 54 + */ 55 + __field_struct(initcall_t, func) 56 + __field(int, ret) 57 57 ), 58 58 59 59 TP_fast_assign(
+1 -1
include/uapi/linux/if_infiniband.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 2 2 /* 3 3 * This software is available to you under a choice of one of two 4 4 * licenses. You may choose to be licensed under the terms of the GNU
+1 -1
include/uapi/linux/rds.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2008 Oracle. All rights reserved. 4 4 *
+1 -1
include/uapi/linux/tls.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2016-2017, Mellanox Technologies. All rights reserved. 4 4 *
+1 -1
include/uapi/rdma/cxgb3-abi.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2006 Chelsio, Inc. All rights reserved. 4 4 *
+1 -1
include/uapi/rdma/cxgb4-abi.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2009-2010 Chelsio, Inc. All rights reserved. 4 4 *
+1 -1
include/uapi/rdma/hns-abi.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2016 Hisilicon Limited. 4 4 *
+1 -1
include/uapi/rdma/ib_user_cm.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2005 Topspin Communications. All rights reserved. 4 4 * Copyright (c) 2005 Intel Corporation. All rights reserved.
+1 -1
include/uapi/rdma/ib_user_ioctl_verbs.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2017-2018, Mellanox Technologies inc. All rights reserved. 4 4 *
+1 -1
include/uapi/rdma/ib_user_mad.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2004 Topspin Communications. All rights reserved. 4 4 * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
+1 -1
include/uapi/rdma/ib_user_sa.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2005 Intel Corporation. All rights reserved. 4 4 *
+1 -1
include/uapi/rdma/ib_user_verbs.h
···
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2005 Topspin Communications. All rights reserved.
  * Copyright (c) 2005, 2006 Cisco Systems. All rights reserved.
+1 -1
include/uapi/rdma/mlx4-abi.h
···
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2007 Cisco Systems, Inc. All rights reserved.
  * Copyright (c) 2007, 2008 Mellanox Technologies. All rights reserved.
+1 -1
include/uapi/rdma/mlx5-abi.h
···
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.
  *
+1 -1
include/uapi/rdma/mthca-abi.h
···
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2005 Topspin Communications. All rights reserved.
  * Copyright (c) 2005, 2006 Cisco Systems. All rights reserved.
+1 -1
include/uapi/rdma/nes-abi.h
···
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
  * Copyright (c) 2005 Topspin Communications. All rights reserved.
+1 -1
include/uapi/rdma/qedr-abi.h
···
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /* QLogic qedr NIC Driver
  * Copyright (c) 2015-2016 QLogic Corporation
  *
+1 -1
include/uapi/rdma/rdma_user_cm.h
···
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2005-2006 Intel Corporation. All rights reserved.
  *
+1 -1
include/uapi/rdma/rdma_user_ioctl.h
···
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2016 Mellanox Technologies, LTD. All rights reserved.
  *
+1 -1
include/uapi/rdma/rdma_user_rxe.h
···
-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */
 /*
  * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.
  *
+2 -1
kernel/bpf/arraymap.c
···
 }

 /* decrement refcnt of all bpf_progs that are stored in this map */
-void bpf_fd_array_map_clear(struct bpf_map *map)
+static void bpf_fd_array_map_clear(struct bpf_map *map)
 {
     struct bpf_array *array = container_of(map, struct bpf_array, map);
     int i;
···
     .map_fd_get_ptr = prog_fd_array_get_ptr,
     .map_fd_put_ptr = prog_fd_array_put_ptr,
     .map_fd_sys_lookup_elem = prog_fd_array_sys_lookup_elem,
+    .map_release_uref = bpf_fd_array_map_clear,
 };

 static struct bpf_event_entry *bpf_event_entry_gen(struct file *perf_file,
+73 -26
kernel/bpf/sockmap.c
···
 #include <net/tcp.h>
 #include <linux/ptr_ring.h>
 #include <net/inet_common.h>
+#include <linux/sched/signal.h>

 #define SOCK_CREATE_FLAG_MASK \
     (BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY)
···
     if (ret > 0) {
         if (apply)
             apply_bytes -= ret;
+
+        sg->offset += ret;
+        sg->length -= ret;
         size -= ret;
         offset += ret;
         if (uncharge)
···
             goto retry;
     }

-    sg->length = size;
-    sg->offset = offset;
     return ret;
 }
···
     } while (i != md->sg_end);
 }

-static void free_bytes_sg(struct sock *sk, int bytes, struct sk_msg_buff *md)
+static void free_bytes_sg(struct sock *sk, int bytes,
+                          struct sk_msg_buff *md, bool charge)
 {
     struct scatterlist *sg = md->sg_data;
     int i = md->sg_start, free;
···
         if (bytes < free) {
             sg[i].length -= bytes;
             sg[i].offset += bytes;
-            sk_mem_uncharge(sk, bytes);
+            if (charge)
+                sk_mem_uncharge(sk, bytes);
             break;
         }

-        sk_mem_uncharge(sk, sg[i].length);
+        if (charge)
+            sk_mem_uncharge(sk, sg[i].length);
         put_page(sg_page(&sg[i]));
         bytes -= sg[i].length;
         sg[i].length = 0;
···
         if (i == MAX_SKB_FRAGS)
             i = 0;
     }
+    md->sg_start = i;
 }

 static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md)
···
     i = md->sg_start;

     do {
-        r->sg_data[i] = md->sg_data[i];
-
         size = (apply && apply_bytes < md->sg_data[i].length) ?
                apply_bytes : md->sg_data[i].length;
···
         }

         sk_mem_charge(sk, size);
+        r->sg_data[i] = md->sg_data[i];
         r->sg_data[i].length = size;
         md->sg_data[i].length -= size;
         md->sg_data[i].offset += size;
···
                                        struct sk_msg_buff *md,
                                        int flags)
 {
+    bool ingress = !!(md->flags & BPF_F_INGRESS);
     struct smap_psock *psock;
     struct scatterlist *sg;
-    int i, err, free = 0;
-    bool ingress = !!(md->flags & BPF_F_INGRESS);
+    int err = 0;

     sg = md->sg_data;
···
 out_rcu:
     rcu_read_unlock();
 out:
-    i = md->sg_start;
-    while (sg[i].length) {
-        free += sg[i].length;
-        put_page(sg_page(&sg[i]));
-        sg[i].length = 0;
-        i++;
-        if (i == MAX_SKB_FRAGS)
-            i = 0;
-    }
-    return free;
+    free_bytes_sg(NULL, send, md, false);
+    return err;
 }

 static inline void bpf_md_init(struct smap_psock *psock)
···
         err = bpf_tcp_sendmsg_do_redirect(redir, send, m, flags);
         lock_sock(sk);

+        if (unlikely(err < 0)) {
+            free_start_sg(sk, m);
+            psock->sg_size = 0;
+            if (!cork)
+                *copied -= send;
+        } else {
+            psock->sg_size -= send;
+        }
+
         if (cork) {
             free_start_sg(sk, m);
+            psock->sg_size = 0;
             kfree(m);
             m = NULL;
+            err = 0;
         }
-        if (unlikely(err))
-            *copied -= err;
-        else
-            psock->sg_size -= send;
         break;
     case __SK_DROP:
     default:
-        free_bytes_sg(sk, send, m);
+        free_bytes_sg(sk, send, m, true);
         apply_bytes_dec(psock, send);
         *copied -= send;
         psock->sg_size -= send;
···

 out_err:
     return err;
+}
+
+static int bpf_wait_data(struct sock *sk,
+                         struct smap_psock *psk, int flags,
+                         long timeo, int *err)
+{
+    int rc;
+
+    DEFINE_WAIT_FUNC(wait, woken_wake_function);
+
+    add_wait_queue(sk_sleep(sk), &wait);
+    sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+    rc = sk_wait_event(sk, &timeo,
+                       !list_empty(&psk->ingress) ||
+                       !skb_queue_empty(&sk->sk_receive_queue),
+                       &wait);
+    sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+    remove_wait_queue(sk_sleep(sk), &wait);
+
+    return rc;
 }

 static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
···
         return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);

     lock_sock(sk);
+bytes_ready:
     while (copied != len) {
         struct scatterlist *sg;
         struct sk_msg_buff *md;
···
             consume_skb(md->skb);
             kfree(md);
         }
+    }
+
+    if (!copied) {
+        long timeo;
+        int data;
+        int err = 0;
+
+        timeo = sock_rcvtimeo(sk, nonblock);
+        data = bpf_wait_data(sk, psock, flags, timeo, &err);
+
+        if (data) {
+            if (!skb_queue_empty(&sk->sk_receive_queue)) {
+                release_sock(sk);
+                smap_release_sock(psock, sk);
+                copied = tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
+                return copied;
+            }
+            goto bytes_ready;
+        }
+
+        if (err)
+            copied = err;
     }

     release_sock(sk);
···
     return err;
 }

-static void sock_map_release(struct bpf_map *map, struct file *map_file)
+static void sock_map_release(struct bpf_map *map)
 {
     struct bpf_stab *stab = container_of(map, struct bpf_stab, map);
     struct bpf_prog *orig;
···
     .map_get_next_key = sock_map_get_next_key,
     .map_update_elem = sock_map_update_elem,
     .map_delete_elem = sock_map_delete_elem,
-    .map_release = sock_map_release,
+    .map_release_uref = sock_map_release,
 };

 BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock,
+2 -2
kernel/bpf/syscall.c
···
 static void bpf_map_put_uref(struct bpf_map *map)
 {
     if (atomic_dec_and_test(&map->usercnt)) {
-        if (map->map_type == BPF_MAP_TYPE_PROG_ARRAY)
-            bpf_fd_array_map_clear(map);
+        if (map->ops->map_release_uref)
+            map->ops->map_release_uref(map);
     }
 }
+3 -4
kernel/events/uprobes.c
···
     if (!uprobe)
         return NULL;

-    uprobe->inode = igrab(inode);
+    uprobe->inode = inode;
     uprobe->offset = offset;
     init_rwsem(&uprobe->register_rwsem);
     init_rwsem(&uprobe->consumer_rwsem);
···
     if (cur_uprobe) {
         kfree(uprobe);
         uprobe = cur_uprobe;
-        iput(inode);
     }

     return uprobe;
···
     rb_erase(&uprobe->rb_node, &uprobes_tree);
     spin_unlock(&uprobes_treelock);
     RB_CLEAR_NODE(&uprobe->rb_node); /* for uprobe_is_active() */
-    iput(uprobe->inode);
     put_uprobe(uprobe);
 }
···
  * tuple). Creation refcount stops uprobe_unregister from freeing the
  * @uprobe even before the register operation is complete. Creation
  * refcount is released when the last @uc for the @uprobe
- * unregisters.
+ * unregisters. Caller of uprobe_register() is required to keep @inode
+ * (and the containing mount) referenced.
  *
  * Return errno if it cannot successully install probes
  * else return 0 (success)
+44 -19
kernel/time/clocksource.c
···
 static int watchdog_running;
 static atomic_t watchdog_reset_pending;

+static void inline clocksource_watchdog_lock(unsigned long *flags)
+{
+    spin_lock_irqsave(&watchdog_lock, *flags);
+}
+
+static void inline clocksource_watchdog_unlock(unsigned long *flags)
+{
+    spin_unlock_irqrestore(&watchdog_lock, *flags);
+}
+
 static int clocksource_watchdog_kthread(void *data);
 static void __clocksource_change_rating(struct clocksource *cs, int rating);

···
     cs->flags &= ~(CLOCK_SOURCE_VALID_FOR_HRES | CLOCK_SOURCE_WATCHDOG);
     cs->flags |= CLOCK_SOURCE_UNSTABLE;

+    /*
+     * If the clocksource is registered clocksource_watchdog_kthread() will
+     * re-rate and re-select.
+     */
+    if (list_empty(&cs->list)) {
+        cs->rating = 0;
+        return;
+    }
+
     if (cs->mark_unstable)
         cs->mark_unstable(cs);

+    /* kick clocksource_watchdog_kthread() */
     if (finished_booting)
         schedule_work(&watchdog_work);
 }
···
 * clocksource_mark_unstable - mark clocksource unstable via watchdog
 * @cs: clocksource to be marked unstable
 *
- * This function is called instead of clocksource_change_rating from
- * cpu hotplug code to avoid a deadlock between the clocksource mutex
- * and the cpu hotplug mutex. It defers the update of the clocksource
- * to the watchdog thread.
+ * This function is called by the x86 TSC code to mark clocksources as unstable;
+ * it defers demotion and re-selection to a kthread.
 */
 void clocksource_mark_unstable(struct clocksource *cs)
 {
···

     spin_lock_irqsave(&watchdog_lock, flags);
     if (!(cs->flags & CLOCK_SOURCE_UNSTABLE)) {
-        if (list_empty(&cs->wd_list))
+        if (!list_empty(&cs->list) && list_empty(&cs->wd_list))
             list_add(&cs->wd_list, &watchdog_list);
         __clocksource_unstable(cs);
     }
···

 static void clocksource_enqueue_watchdog(struct clocksource *cs)
 {
-    unsigned long flags;
+    INIT_LIST_HEAD(&cs->wd_list);

-    spin_lock_irqsave(&watchdog_lock, flags);
     if (cs->flags & CLOCK_SOURCE_MUST_VERIFY) {
         /* cs is a clocksource to be watched. */
         list_add(&cs->wd_list, &watchdog_list);
···
         if (cs->flags & CLOCK_SOURCE_IS_CONTINUOUS)
             cs->flags |= CLOCK_SOURCE_VALID_FOR_HRES;
     }
-    spin_unlock_irqrestore(&watchdog_lock, flags);
 }

 static void clocksource_select_watchdog(bool fallback)
···

 static void clocksource_dequeue_watchdog(struct clocksource *cs)
 {
-    unsigned long flags;
-
-    spin_lock_irqsave(&watchdog_lock, flags);
     if (cs != watchdog) {
         if (cs->flags & CLOCK_SOURCE_MUST_VERIFY) {
             /* cs is a watched clocksource. */
···
             clocksource_stop_watchdog();
         }
     }
-    spin_unlock_irqrestore(&watchdog_lock, flags);
 }

 static int __clocksource_watchdog_kthread(void)
 {
     struct clocksource *cs, *tmp;
     unsigned long flags;
-    LIST_HEAD(unstable);
     int select = 0;

     spin_lock_irqsave(&watchdog_lock, flags);
     list_for_each_entry_safe(cs, tmp, &watchdog_list, wd_list) {
         if (cs->flags & CLOCK_SOURCE_UNSTABLE) {
             list_del_init(&cs->wd_list);
-            list_add(&cs->wd_list, &unstable);
+            __clocksource_change_rating(cs, 0);
             select = 1;
         }
         if (cs->flags & CLOCK_SOURCE_RESELECT) {
···
     clocksource_stop_watchdog();
     spin_unlock_irqrestore(&watchdog_lock, flags);

-    /* Needs to be done outside of watchdog lock */
-    list_for_each_entry_safe(cs, tmp, &unstable, wd_list) {
-        list_del_init(&cs->wd_list);
-        __clocksource_change_rating(cs, 0);
-    }
     return select;
 }
···
 static inline int __clocksource_watchdog_kthread(void) { return 0; }
 static bool clocksource_is_watchdog(struct clocksource *cs) { return false; }
 void clocksource_mark_unstable(struct clocksource *cs) { }
+
+static void inline clocksource_watchdog_lock(unsigned long *flags) { }
+static void inline clocksource_watchdog_unlock(unsigned long *flags) { }

 #endif /* CONFIG_CLOCKSOURCE_WATCHDOG */

···
 */
 int __clocksource_register_scale(struct clocksource *cs, u32 scale, u32 freq)
 {
+    unsigned long flags;

     /* Initialize mult/shift and max_idle_ns */
     __clocksource_update_freq_scale(cs, scale, freq);

     /* Add clocksource to the clocksource list */
     mutex_lock(&clocksource_mutex);
+
+    clocksource_watchdog_lock(&flags);
     clocksource_enqueue(cs);
     clocksource_enqueue_watchdog(cs);
+    clocksource_watchdog_unlock(&flags);
+
     clocksource_select();
     clocksource_select_watchdog(false);
     mutex_unlock(&clocksource_mutex);
···
 */
 void clocksource_change_rating(struct clocksource *cs, int rating)
 {
+    unsigned long flags;
+
     mutex_lock(&clocksource_mutex);
+    clocksource_watchdog_lock(&flags);
     __clocksource_change_rating(cs, rating);
+    clocksource_watchdog_unlock(&flags);
+
     clocksource_select();
     clocksource_select_watchdog(false);
     mutex_unlock(&clocksource_mutex);
···
 */
 static int clocksource_unbind(struct clocksource *cs)
 {
+    unsigned long flags;
+
     if (clocksource_is_watchdog(cs)) {
         /* Select and try to install a replacement watchdog. */
         clocksource_select_watchdog(true);
···
     if (curr_clocksource == cs)
         return -EBUSY;
     }
+
+    clocksource_watchdog_lock(&flags);
     clocksource_dequeue_watchdog(cs);
     list_del_init(&cs->list);
+    clocksource_watchdog_unlock(&flags);
+
     return 0;
 }

+2 -2
kernel/trace/ftrace.c
···
     ftrace_create_filter_files(&global_ops, d_tracer);

 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-    trace_create_file("set_graph_function", 0444, d_tracer,
+    trace_create_file("set_graph_function", 0644, d_tracer,
                       NULL,
                       &ftrace_graph_fops);
-    trace_create_file("set_graph_notrace", 0444, d_tracer,
+    trace_create_file("set_graph_notrace", 0644, d_tracer,
                       NULL,
                       &ftrace_graph_notrace_fops);
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+12
kernel/trace/trace_events_hist.c
···
     else if (strcmp(modifier, "usecs") == 0)
         *flags |= HIST_FIELD_FL_TIMESTAMP_USECS;
     else {
+        hist_err("Invalid field modifier: ", modifier);
         field = ERR_PTR(-EINVAL);
         goto out;
     }
···
     else {
         field = trace_find_event_field(file->event_call, field_name);
         if (!field || !field->size) {
+            hist_err("Couldn't find field: ", field_name);
             field = ERR_PTR(-EINVAL);
             goto out;
         }
···
         seq_printf(m, "%s", field_name);
     } else if (hist_field->flags & HIST_FIELD_FL_TIMESTAMP)
         seq_puts(m, "common_timestamp");
+
+    if (hist_field->flags) {
+        if (!(hist_field->flags & HIST_FIELD_FL_VAR_REF) &&
+            !(hist_field->flags & HIST_FIELD_FL_EXPR)) {
+            const char *flags = get_hist_field_flags(hist_field);
+
+            if (flags)
+                seq_printf(m, ".%s", flags);
+        }
+    }
 }

 static int event_hist_trigger_print(struct seq_file *m,
+1 -1
kernel/trace/trace_stack.c
···
                      NULL, &stack_trace_fops);

 #ifdef CONFIG_DYNAMIC_FTRACE
-    trace_create_file("stack_trace_filter", 0444, d_tracer,
+    trace_create_file("stack_trace_filter", 0644, d_tracer,
                       &trace_ops, &stack_trace_filter_fops);
 #endif

+14 -21
kernel/trace/trace_uprobe.c
···
     struct list_head list;
     struct trace_uprobe_filter filter;
     struct uprobe_consumer consumer;
+    struct path path;
     struct inode *inode;
     char *filename;
     unsigned long offset;
···
     for (i = 0; i < tu->tp.nr_args; i++)
         traceprobe_free_probe_arg(&tu->tp.args[i]);

-    iput(tu->inode);
+    path_put(&tu->path);
     kfree(tu->tp.call.class->system);
     kfree(tu->tp.call.name);
     kfree(tu->filename);
···
 static int create_trace_uprobe(int argc, char **argv)
 {
     struct trace_uprobe *tu;
-    struct inode *inode;
     char *arg, *event, *group, *filename;
     char buf[MAX_EVENT_NAME_LEN];
     struct path path;
···
     bool is_delete, is_return;
     int i, ret;

-    inode = NULL;
     ret = 0;
     is_delete = false;
     is_return = false;
···
     }
     /* Find the last occurrence, in case the path contains ':' too. */
     arg = strrchr(argv[1], ':');
-    if (!arg) {
-        ret = -EINVAL;
-        goto fail_address_parse;
-    }
+    if (!arg)
+        return -EINVAL;

     *arg++ = '\0';
     filename = argv[1];
     ret = kern_path(filename, LOOKUP_FOLLOW, &path);
     if (ret)
-        goto fail_address_parse;
+        return ret;

-    inode = igrab(d_real_inode(path.dentry));
-    path_put(&path);
-
-    if (!inode || !S_ISREG(inode->i_mode)) {
+    if (!d_is_reg(path.dentry)) {
         ret = -EINVAL;
         goto fail_address_parse;
     }
···
         goto fail_address_parse;
     }
     tu->offset = offset;
-    tu->inode = inode;
+    tu->path = path;
     tu->filename = kstrdup(filename, GFP_KERNEL);

     if (!tu->filename) {
···
     return ret;

 fail_address_parse:
-    iput(inode);
+    path_put(&path);

     pr_info("Failed to parse address or file.\n");

···
         goto err_flags;

     tu->consumer.filter = filter;
+    tu->inode = d_real_inode(tu->path.dentry);
     ret = uprobe_register(tu->inode, tu->offset, &tu->consumer);
     if (ret)
         goto err_buffer;
···
     WARN_ON(!uprobe_filter_is_empty(&tu->filter));

     uprobe_unregister(tu->inode, tu->offset, &tu->consumer);
+    tu->inode = NULL;
     tu->tp.flags &= file ? ~TP_FLAG_TRACE : ~TP_FLAG_PROFILE;

     uprobe_buffer_disable();
···
 create_local_trace_uprobe(char *name, unsigned long offs, bool is_return)
 {
     struct trace_uprobe *tu;
-    struct inode *inode;
     struct path path;
     int ret;
···
     if (ret)
         return ERR_PTR(ret);

-    inode = igrab(d_inode(path.dentry));
-    path_put(&path);
-
-    if (!inode || !S_ISREG(inode->i_mode)) {
-        iput(inode);
+    if (!d_is_reg(path.dentry)) {
+        path_put(&path);
         return ERR_PTR(-EINVAL);
     }
···
     if (IS_ERR(tu)) {
         pr_info("Failed to allocate trace_uprobe.(%d)\n",
                 (int)PTR_ERR(tu));
+        path_put(&path);
         return ERR_CAST(tu);
     }

     tu->offset = offs;
-    tu->inode = inode;
+    tu->path = path;
     tu->filename = kstrdup(name, GFP_KERNEL);
     init_trace_event_call(tu, &tu->tp.call);

+2 -2
kernel/tracepoint.c
···
                  lockdep_is_held(&tracepoints_mutex));
     old = func_add(&tp_funcs, func, prio);
     if (IS_ERR(old)) {
-        WARN_ON_ONCE(1);
+        WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM);
         return PTR_ERR(old);
     }

···
                  lockdep_is_held(&tracepoints_mutex));
     old = func_remove(&tp_funcs, func);
     if (IS_ERR(old)) {
-        WARN_ON_ONCE(1);
+        WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM);
        return PTR_ERR(old);
     }

+9 -14
lib/errseq.c
···
 * errseq_sample() - Grab current errseq_t value.
 * @eseq: Pointer to errseq_t to be sampled.
 *
- * This function allows callers to sample an errseq_t value, marking it as
- * "seen" if required.
+ * This function allows callers to initialise their errseq_t variable.
+ * If the error has been "seen", new callers will not see an old error.
+ * If there is an unseen error in @eseq, the caller of this function will
+ * see it the next time it checks for an error.
 *
+ * Context: Any context.
 * Return: The current errseq value.
 */
 errseq_t errseq_sample(errseq_t *eseq)
 {
     errseq_t old = READ_ONCE(*eseq);
-    errseq_t new = old;

-    /*
-     * For the common case of no errors ever having been set, we can skip
-     * marking the SEEN bit. Once an error has been set, the value will
-     * never go back to zero.
-     */
-    if (old != 0) {
-        new |= ERRSEQ_SEEN;
-        if (old != new)
-            cmpxchg(eseq, old, new);
-    }
-    return new;
+    /* If nobody has seen this error yet, then we can be the first. */
+    if (!(old & ERRSEQ_SEEN))
+        old = 0;
+    return old;
 }
 EXPORT_SYMBOL(errseq_sample);

+1 -1
lib/swiotlb.c
···
     swiotlb_tbl_unmap_single(dev, phys_addr, size, DMA_TO_DEVICE,
                              DMA_ATTR_SKIP_CPU_SYNC);
 out_warn:
-    if ((attrs & DMA_ATTR_NO_WARN) && printk_ratelimit()) {
+    if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit()) {
         dev_warn(dev,
                  "swiotlb: coherent allocation failed, size=%zu\n",
                  size);
+2 -1
mm/backing-dev.c
···
                bdi, &bdi_debug_stats_fops);
     if (!bdi->debug_stats) {
         debugfs_remove(bdi->debug_dir);
+        bdi->debug_dir = NULL;
         return -ENOMEM;
     }

···
     * the barrier provided by test_and_clear_bit() above.
     */
     smp_wmb();
-    clear_bit(WB_shutting_down, &wb->state);
+    clear_and_wake_up_bit(WB_shutting_down, &wb->state);
 }

 static void wb_exit(struct bdi_writeback *wb)
+2 -2
net/bridge/br_if.c
···
         return -ELOOP;
     }

-    /* Device is already being bridged */
-    if (br_port_exists(dev))
+    /* Device has master upper dev */
+    if (netdev_master_upper_dev_get(dev))
         return -EBUSY;

     /* No bridging devices that dislike that (e.g. wireless) */
+4 -2
net/compat.c
···
         optname == SO_ATTACH_REUSEPORT_CBPF)
         return do_set_attach_filter(sock, level, optname,
                                     optval, optlen);
-    if (optname == SO_RCVTIMEO || optname == SO_SNDTIMEO)
+    if (!COMPAT_USE_64BIT_TIME &&
+        (optname == SO_RCVTIMEO || optname == SO_SNDTIMEO))
         return do_set_sock_timeout(sock, level, optname, optval, optlen);

     return sock_setsockopt(sock, level, optname, optval, optlen);
···
 static int compat_sock_getsockopt(struct socket *sock, int level, int optname,
                                   char __user *optval, int __user *optlen)
 {
-    if (optname == SO_RCVTIMEO || optname == SO_SNDTIMEO)
+    if (!COMPAT_USE_64BIT_TIME &&
+        (optname == SO_RCVTIMEO || optname == SO_SNDTIMEO))
         return do_get_sock_timeout(sock, level, optname, optval, optlen);
     return sock_getsockopt(sock, level, optname, optval, optlen);
 }
+5
net/core/ethtool.c
···
         info_size = sizeof(info);
         if (copy_from_user(&info, useraddr, info_size))
             return -EFAULT;
+        /* Since malicious users may modify the original data,
+         * we need to check whether FLOW_RSS is still requested.
+         */
+        if (!(info.flow_type & FLOW_RSS))
+            return -EINVAL;
     }

     if (info.cmd == ETHTOOL_GRXCLSRLALL) {
+1
net/core/filter.c
···
     skb_dst_set(skb, (struct dst_entry *) md);

     info = &md->u.tun_info;
+    memset(info, 0, sizeof(*info));
     info->mode = IP_TUNNEL_INFO_TX;

     info->key.tun_flags = TUNNEL_KEY | TUNNEL_CSUM | TUNNEL_NOCACHE;
+12 -2
net/dccp/ccids/ccid2.c
···
                       DCCPF_SEQ_WMAX));
 }

+static void dccp_tasklet_schedule(struct sock *sk)
+{
+    struct tasklet_struct *t = &dccp_sk(sk)->dccps_xmitlet;
+
+    if (!test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
+        sock_hold(sk);
+        __tasklet_schedule(t);
+    }
+}
+
 static void ccid2_hc_tx_rto_expire(struct timer_list *t)
 {
     struct ccid2_hc_tx_sock *hc = from_timer(hc, t, tx_rtotimer);
···
     /* if we were blocked before, we may now send cwnd=1 packet */
     if (sender_was_blocked)
-        tasklet_schedule(&dccp_sk(sk)->dccps_xmitlet);
+        dccp_tasklet_schedule(sk);
     /* restart backed-off timer */
     sk_reset_timer(sk, &hc->tx_rtotimer, jiffies + hc->tx_rto);
 out:
···
 done:
     /* check if incoming Acks allow pending packets to be sent */
     if (sender_was_blocked && !ccid2_cwnd_network_limited(hc))
-        tasklet_schedule(&dccp_sk(sk)->dccps_xmitlet);
+        dccp_tasklet_schedule(sk);
     dccp_ackvec_parsed_cleanup(&hc->tx_av_chunks);
 }

+1 -1
net/dccp/timer.c
···
     else
         dccp_write_xmit(sk);
     bh_unlock_sock(sk);
+    sock_put(sk);
 }

 static void dccp_write_xmit_timer(struct timer_list *t)
···
     struct sock *sk = &dp->dccps_inet_connection.icsk_inet.sk;

     dccp_write_xmitlet((unsigned long)sk);
-    sock_put(sk);
 }

 void dccp_init_xmit_timers(struct sock *sk)
+53 -65
net/ipv4/route.c
···
     fnhe->fnhe_gw = gw;
     fnhe->fnhe_pmtu = pmtu;
     fnhe->fnhe_mtu_locked = lock;
-    fnhe->fnhe_expires = expires;
+    fnhe->fnhe_expires = max(1UL, expires);

     /* Exception created; mark the cached routes for the nexthop
      * stale, so anyone caching it rechecks if this exception
···
     return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
 }

+static void ip_del_fnhe(struct fib_nh *nh, __be32 daddr)
+{
+    struct fnhe_hash_bucket *hash;
+    struct fib_nh_exception *fnhe, __rcu **fnhe_p;
+    u32 hval = fnhe_hashfun(daddr);
+
+    spin_lock_bh(&fnhe_lock);
+
+    hash = rcu_dereference_protected(nh->nh_exceptions,
+                                     lockdep_is_held(&fnhe_lock));
+    hash += hval;
+
+    fnhe_p = &hash->chain;
+    fnhe = rcu_dereference_protected(*fnhe_p, lockdep_is_held(&fnhe_lock));
+    while (fnhe) {
+        if (fnhe->fnhe_daddr == daddr) {
+            rcu_assign_pointer(*fnhe_p, rcu_dereference_protected(
+                fnhe->fnhe_next, lockdep_is_held(&fnhe_lock)));
+            fnhe_flush_routes(fnhe);
+            kfree_rcu(fnhe, rcu);
+            break;
+        }
+        fnhe_p = &fnhe->fnhe_next;
+        fnhe = rcu_dereference_protected(fnhe->fnhe_next,
+                                         lockdep_is_held(&fnhe_lock));
+    }
+
+    spin_unlock_bh(&fnhe_lock);
+}
+
 static struct fib_nh_exception *find_exception(struct fib_nh *nh, __be32 daddr)
 {
     struct fnhe_hash_bucket *hash = rcu_dereference(nh->nh_exceptions);
···

     for (fnhe = rcu_dereference(hash[hval].chain); fnhe;
          fnhe = rcu_dereference(fnhe->fnhe_next)) {
-        if (fnhe->fnhe_daddr == daddr)
+        if (fnhe->fnhe_daddr == daddr) {
+            if (fnhe->fnhe_expires &&
+                time_after(jiffies, fnhe->fnhe_expires)) {
+                ip_del_fnhe(nh, daddr);
+                break;
+            }
             return fnhe;
+        }
     }
     return NULL;
 }
···
 #endif
 }

-static void ip_del_fnhe(struct fib_nh *nh, __be32 daddr)
-{
-    struct fnhe_hash_bucket *hash;
-    struct fib_nh_exception *fnhe, __rcu **fnhe_p;
-    u32 hval = fnhe_hashfun(daddr);
-
-    spin_lock_bh(&fnhe_lock);
-
-    hash = rcu_dereference_protected(nh->nh_exceptions,
-                                     lockdep_is_held(&fnhe_lock));
-    hash += hval;
-
-    fnhe_p = &hash->chain;
-    fnhe = rcu_dereference_protected(*fnhe_p, lockdep_is_held(&fnhe_lock));
-    while (fnhe) {
-        if (fnhe->fnhe_daddr == daddr) {
-            rcu_assign_pointer(*fnhe_p, rcu_dereference_protected(
-                fnhe->fnhe_next, lockdep_is_held(&fnhe_lock)));
-            fnhe_flush_routes(fnhe);
-            kfree_rcu(fnhe, rcu);
-            break;
-        }
-        fnhe_p = &fnhe->fnhe_next;
-        fnhe = rcu_dereference_protected(fnhe->fnhe_next,
-                                         lockdep_is_held(&fnhe_lock));
-    }
-
-    spin_unlock_bh(&fnhe_lock);
-}
-
 /* called in rcu_read_lock() section */
 static int __mkroute_input(struct sk_buff *skb,
                            const struct fib_result *res,
···

     fnhe = find_exception(&FIB_RES_NH(*res), daddr);
     if (do_cache) {
-        if (fnhe) {
+        if (fnhe)
             rth = rcu_dereference(fnhe->fnhe_rth_input);
-            if (rth && rth->dst.expires &&
-                time_after(jiffies, rth->dst.expires)) {
-                ip_del_fnhe(&FIB_RES_NH(*res), daddr);
-                fnhe = NULL;
-            } else {
-                goto rt_cache;
-            }
-        }
-
-        rth = rcu_dereference(FIB_RES_NH(*res).nh_rth_input);
-
-rt_cache:
+        else
+            rth = rcu_dereference(FIB_RES_NH(*res).nh_rth_input);
         if (rt_cache_valid(rth)) {
             skb_dst_set_noref(skb, &rth->dst);
             goto out;
···
         * the loopback interface and the IP_PKTINFO ipi_ifindex will
         * be set to the loopback interface as well.
         */
-        fi = NULL;
+        do_cache = false;
     }

     fnhe = NULL;
     do_cache &= fi != NULL;
-    if (do_cache) {
+    if (fi) {
         struct rtable __rcu **prth;
         struct fib_nh *nh = &FIB_RES_NH(*res);

         fnhe = find_exception(nh, fl4->daddr);
+        if (!do_cache)
+            goto add;
         if (fnhe) {
             prth = &fnhe->fnhe_rth_output;
-            rth = rcu_dereference(*prth);
-            if (rth && rth->dst.expires &&
-                time_after(jiffies, rth->dst.expires)) {
-                ip_del_fnhe(nh, fl4->daddr);
-                fnhe = NULL;
-            } else {
-                goto rt_cache;
+        } else {
+            if (unlikely(fl4->flowi4_flags &
+                         FLOWI_FLAG_KNOWN_NH &&
+                         !(nh->nh_gw &&
+                           nh->nh_scope == RT_SCOPE_LINK))) {
+                do_cache = false;
+                goto add;
             }
+            prth = raw_cpu_ptr(nh->nh_pcpu_rth_output);
         }
-
-        if (unlikely(fl4->flowi4_flags &
-                     FLOWI_FLAG_KNOWN_NH &&
-                     !(nh->nh_gw &&
-                       nh->nh_scope == RT_SCOPE_LINK))) {
-            do_cache = false;
-            goto add;
-        }
-        prth = raw_cpu_ptr(nh->nh_pcpu_rth_output);
         rth = rcu_dereference(*prth);
-
-rt_cache:
         if (rt_cache_valid(rth) && dst_hold_safe(&rth->dst))
             return rth;
     }
+4 -3
net/ipv4/tcp.c
··· 697 697 { 698 698 return skb->len < size_goal && 699 699 sock_net(sk)->ipv4.sysctl_tcp_autocorking && 700 - skb != tcp_write_queue_head(sk) && 700 + !tcp_rtx_queue_empty(sk) && 701 701 refcount_read(&sk->sk_wmem_alloc) > skb->truesize; 702 702 } 703 703 ··· 1204 1204 uarg->zerocopy = 0; 1205 1205 } 1206 1206 1207 - if (unlikely(flags & MSG_FASTOPEN || inet_sk(sk)->defer_connect)) { 1207 + if (unlikely(flags & MSG_FASTOPEN || inet_sk(sk)->defer_connect) && 1208 + !tp->repair) { 1208 1209 err = tcp_sendmsg_fastopen(sk, msg, &copied_syn, size); 1209 1210 if (err == -EINPROGRESS && copied_syn > 0) 1210 1211 goto out; ··· 2674 2673 case TCP_REPAIR_QUEUE: 2675 2674 if (!tp->repair) 2676 2675 err = -EPERM; 2677 - else if (val < TCP_QUEUES_NR) 2676 + else if ((unsigned int)val < TCP_QUEUES_NR) 2678 2677 tp->repair_queue = val; 2679 2678 else 2680 2679 err = -EINVAL;
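The `TCP_REPAIR_QUEUE` hunk above casts `val` to `unsigned int` before the bounds check. Without the cast, a negative `val` passes the signed comparison `val < TCP_QUEUES_NR` and becomes an out-of-range queue index. A minimal standalone sketch of the two comparisons (the constant value here is illustrative):

```c
#include <assert.h>

#define TCP_QUEUES_NR 3  /* illustrative; mirrors the kernel's queue count */

/* Signed comparison: a negative queue id slips through the check. */
static int queue_ok_signed(int val)
{
    return val < TCP_QUEUES_NR;
}

/* The patched form: casting to unsigned rejects negatives, because
 * (unsigned int)-1 converts to a very large value. */
static int queue_ok_unsigned(int val)
{
    return (unsigned int)val < TCP_QUEUES_NR;
}
```

The single cast turns the check into a combined "not negative and below the limit" test, a common kernel idiom for validating user-supplied indices.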
+3 -1
net/ipv4/tcp_bbr.c
··· 806 806 } 807 807 } 808 808 } 809 - bbr->idle_restart = 0; 809 + /* Restart after idle ends only once we process a new S/ACK for data */ 810 + if (rs->delivered > 0) 811 + bbr->idle_restart = 0; 810 812 } 811 813 812 814 static void bbr_update_model(struct sock *sk, const struct rate_sample *rs)
+6 -1
net/ipv6/route.c
··· 1835 1835 const struct ipv6hdr *inner_iph; 1836 1836 const struct icmp6hdr *icmph; 1837 1837 struct ipv6hdr _inner_iph; 1838 + struct icmp6hdr _icmph; 1838 1839 1839 1840 if (likely(outer_iph->nexthdr != IPPROTO_ICMPV6)) 1840 1841 goto out; 1841 1842 1842 - icmph = icmp6_hdr(skb); 1843 + icmph = skb_header_pointer(skb, skb_transport_offset(skb), 1844 + sizeof(_icmph), &_icmph); 1845 + if (!icmph) 1846 + goto out; 1847 + 1843 1848 if (icmph->icmp6_type != ICMPV6_DEST_UNREACH && 1844 1849 icmph->icmp6_type != ICMPV6_PKT_TOOBIG && 1845 1850 icmph->icmp6_type != ICMPV6_TIME_EXCEED &&
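The route.c fix above replaces a direct `icmp6_hdr(skb)` dereference with `skb_header_pointer()`, which refuses to read a header that does not fit inside the packet. A simplified sketch of that bounds-checked-copy pattern (a flat buffer stands in for the skb; the real helper can also return a pointer into linear skb data):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy 'hlen' bytes at 'offset' out of 'buf' only if they lie entirely
 * within the buffer; otherwise return NULL so the caller bails out
 * instead of reading past the end of the packet. */
static void *header_pointer(const void *buf, size_t buflen,
                            size_t offset, size_t hlen, void *copy)
{
    if (offset + hlen > buflen)
        return NULL;
    memcpy(copy, (const char *)buf + offset, hlen);
    return copy;
}
```

Checking the return value for NULL, as the patch does with `if (!icmph) goto out;`, is what prevents the out-of-bounds read on a truncated ICMPv6 packet.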
+2 -1
net/rds/ib_cm.c
··· 547 547 rdsdebug("conn %p pd %p cq %p %p\n", conn, ic->i_pd, 548 548 ic->i_send_cq, ic->i_recv_cq); 549 549 550 - return ret; 550 + goto out; 551 551 552 552 sends_out: 553 553 vfree(ic->i_sends); ··· 572 572 ic->i_send_cq = NULL; 573 573 rds_ibdev_out: 574 574 rds_ib_remove_conn(rds_ibdev, conn); 575 + out: 575 576 rds_ib_dev_put(rds_ibdev); 576 577 577 578 return ret;
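The ib_cm.c change converts an early `return ret` on the success path into `goto out`, so the device reference taken earlier is dropped in every case. A toy sketch of that common-exit-label pattern, with a counter standing in for the refcount (names are illustrative, not the rds API):

```c
#include <assert.h>

static int refs;

static void dev_get(void) { refs++; }
static void dev_put(void) { refs--; }

/* Before the fix, the success path returned directly and leaked the
 * reference; routing both paths through 'out' balances get/put. */
static int setup(int fail)
{
    int ret = 0;

    dev_get();
    if (fail) {
        ret = -1;
        goto out;
    }
    /* success also falls through to the common cleanup */
out:
    dev_put();
    return ret;
}
```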
+1
net/rds/recv.c
··· 558 558 struct rds_cmsg_rx_trace t; 559 559 int i, j; 560 560 561 + memset(&t, 0, sizeof(t)); 561 562 inc->i_rx_lat_trace[RDS_MSG_RX_CMSG] = local_clock(); 562 563 t.rx_traces = rs->rs_rx_traces; 563 564 for (i = 0; i < rs->rs_rx_traces; i++) {
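The one-line `memset(&t, 0, sizeof(t))` above plugs a kernel-to-user infoleak: any struct member (or compiler-inserted padding) the code does not explicitly assign would otherwise carry stale stack bytes into the cmsg copied to userspace. A standalone sketch of the zero-then-fill pattern (the struct here is an illustrative stand-in, not the real `rds_cmsg_rx_trace` layout):

```c
#include <assert.h>
#include <string.h>

/* On most ABIs the compiler pads between the u8 and the u64 array,
 * and unassigned members keep stack garbage unless zeroed first. */
struct rx_trace {
    unsigned char rx_traces;       /* typically followed by 7 pad bytes */
    unsigned long long rx_trace[3];
};

static void fill_trace(struct rx_trace *t, unsigned char n)
{
    memset(t, 0, sizeof(*t));      /* the fix: zero everything first */
    t->rx_traces = n;              /* then set only the known fields */
}
```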
+25 -12
net/sched/sch_fq.c
··· 128 128 return f->next == &detached; 129 129 } 130 130 131 + static bool fq_flow_is_throttled(const struct fq_flow *f) 132 + { 133 + return f->next == &throttled; 134 + } 135 + 136 + static void fq_flow_add_tail(struct fq_flow_head *head, struct fq_flow *flow) 137 + { 138 + if (head->first) 139 + head->last->next = flow; 140 + else 141 + head->first = flow; 142 + head->last = flow; 143 + flow->next = NULL; 144 + } 145 + 146 + static void fq_flow_unset_throttled(struct fq_sched_data *q, struct fq_flow *f) 147 + { 148 + rb_erase(&f->rate_node, &q->delayed); 149 + q->throttled_flows--; 150 + fq_flow_add_tail(&q->old_flows, f); 151 + } 152 + 131 153 static void fq_flow_set_throttled(struct fq_sched_data *q, struct fq_flow *f) 132 154 { 133 155 struct rb_node **p = &q->delayed.rb_node, *parent = NULL; ··· 177 155 178 156 static struct kmem_cache *fq_flow_cachep __read_mostly; 179 157 180 - static void fq_flow_add_tail(struct fq_flow_head *head, struct fq_flow *flow) 181 - { 182 - if (head->first) 183 - head->last->next = flow; 184 - else 185 - head->first = flow; 186 - head->last = flow; 187 - flow->next = NULL; 188 - } 189 158 190 159 /* limit number of collected flows per round */ 191 160 #define FQ_GC_MAX 8 ··· 280 267 f->socket_hash != sk->sk_hash)) { 281 268 f->credit = q->initial_quantum; 282 269 f->socket_hash = sk->sk_hash; 270 + if (fq_flow_is_throttled(f)) 271 + fq_flow_unset_throttled(q, f); 283 272 f->time_next_packet = 0ULL; 284 273 } 285 274 return f; ··· 453 438 q->time_next_delayed_flow = f->time_next_packet; 454 439 break; 455 440 } 456 - rb_erase(p, &q->delayed); 457 - q->throttled_flows--; 458 - fq_flow_add_tail(&q->old_flows, f); 441 + fq_flow_unset_throttled(q, f); 459 442 } 460 443 } 461 444
+1 -1
net/sctp/inqueue.c
··· 217 217 skb_pull(chunk->skb, sizeof(*ch)); 218 218 chunk->subh.v = NULL; /* Subheader is no longer valid. */ 219 219 220 - if (chunk->chunk_end + sizeof(*ch) < skb_tail_pointer(chunk->skb)) { 220 + if (chunk->chunk_end + sizeof(*ch) <= skb_tail_pointer(chunk->skb)) { 221 221 /* This is not a singleton */ 222 222 chunk->singleton = 0; 223 223 } else if (chunk->chunk_end > skb_tail_pointer(chunk->skb)) {
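The inqueue.c change is a one-character off-by-one: with the strict `<`, a following chunk whose header ends exactly at the skb tail was wrongly treated as absent, so the packet was misclassified as a singleton. A minimal sketch of the boundary condition (the header size is illustrative):

```c
#include <assert.h>

#define CH_SIZE 4  /* illustrative chunk-header size */

/* Before the fix: a next chunk header ending exactly at the tail
 * pointer fails the strict '<' test. */
static int has_next_chunk_buggy(long chunk_end, long tail)
{
    return chunk_end + CH_SIZE < tail;
}

/* After the fix: '<=' accepts a header that ends exactly at the tail. */
static int has_next_chunk_fixed(long chunk_end, long tail)
{
    return chunk_end + CH_SIZE <= tail;
}
```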
+3
net/sctp/ipv6.c
··· 895 895 if (sctp_is_any(sk, addr1) || sctp_is_any(sk, addr2)) 896 896 return 1; 897 897 898 + if (addr1->sa.sa_family == AF_INET && addr2->sa.sa_family == AF_INET) 899 + return addr1->v4.sin_addr.s_addr == addr2->v4.sin_addr.s_addr; 900 + 898 901 return __sctp_v6_cmp_addr(addr1, addr2); 899 902 } 900 903
+7 -1
net/sctp/sm_statefuns.c
··· 1794 1794 GFP_ATOMIC)) 1795 1795 goto nomem; 1796 1796 1797 + if (sctp_auth_asoc_init_active_key(new_asoc, GFP_ATOMIC)) 1798 + goto nomem; 1799 + 1797 1800 /* Make sure no new addresses are being added during the 1798 1801 * restart. Though this is a pretty complicated attack 1799 1802 * since you'd have to get inside the cookie. ··· 1907 1904 peer_init = &chunk->subh.cookie_hdr->c.peer_init[0]; 1908 1905 if (!sctp_process_init(new_asoc, chunk, sctp_source(chunk), peer_init, 1909 1906 GFP_ATOMIC)) 1907 + goto nomem; 1908 + 1909 + if (sctp_auth_asoc_init_active_key(new_asoc, GFP_ATOMIC)) 1910 1910 goto nomem; 1911 1911 1912 1912 /* Update the content of current association. */ ··· 2056 2050 } 2057 2051 } 2058 2052 2059 - repl = sctp_make_cookie_ack(new_asoc, chunk); 2053 + repl = sctp_make_cookie_ack(asoc, chunk); 2060 2054 if (!repl) 2061 2055 goto nomem; 2062 2056
+2
net/sctp/stream.c
··· 240 240 241 241 new->out = NULL; 242 242 new->in = NULL; 243 + new->outcnt = 0; 244 + new->incnt = 0; 243 245 } 244 246 245 247 static int sctp_send_reconf(struct sctp_association *asoc,
+29 -32
net/smc/af_smc.c
··· 292 292 smc_copy_sock_settings(&smc->sk, smc->clcsock->sk, SK_FLAGS_CLC_TO_SMC); 293 293 } 294 294 295 + /* register a new rmb */ 296 + static int smc_reg_rmb(struct smc_link *link, struct smc_buf_desc *rmb_desc) 297 + { 298 + /* register memory region for new rmb */ 299 + if (smc_wr_reg_send(link, rmb_desc->mr_rx[SMC_SINGLE_LINK])) { 300 + rmb_desc->regerr = 1; 301 + return -EFAULT; 302 + } 303 + return 0; 304 + } 305 + 295 306 static int smc_clnt_conf_first_link(struct smc_sock *smc) 296 307 { 297 308 struct smc_link_group *lgr = smc->conn.lgr; ··· 332 321 333 322 smc_wr_remember_qp_attr(link); 334 323 335 - rc = smc_wr_reg_send(link, 336 - smc->conn.rmb_desc->mr_rx[SMC_SINGLE_LINK]); 337 - if (rc) 324 + if (smc_reg_rmb(link, smc->conn.rmb_desc)) 338 325 return SMC_CLC_DECL_INTERR; 339 326 340 327 /* send CONFIRM LINK response over RoCE fabric */ ··· 482 473 goto decline_rdma_unlock; 483 474 } 484 475 } else { 485 - struct smc_buf_desc *buf_desc = smc->conn.rmb_desc; 486 - 487 - if (!buf_desc->reused) { 488 - /* register memory region for new rmb */ 489 - rc = smc_wr_reg_send(link, 490 - buf_desc->mr_rx[SMC_SINGLE_LINK]); 491 - if (rc) { 476 + if (!smc->conn.rmb_desc->reused) { 477 + if (smc_reg_rmb(link, smc->conn.rmb_desc)) { 492 478 reason_code = SMC_CLC_DECL_INTERR; 493 479 goto decline_rdma_unlock; 494 480 } ··· 723 719 724 720 link = &lgr->lnk[SMC_SINGLE_LINK]; 725 721 726 - rc = smc_wr_reg_send(link, 727 - smc->conn.rmb_desc->mr_rx[SMC_SINGLE_LINK]); 728 - if (rc) 722 + if (smc_reg_rmb(link, smc->conn.rmb_desc)) 729 723 return SMC_CLC_DECL_INTERR; 730 724 731 725 /* send CONFIRM LINK request to client over the RoCE fabric */ ··· 856 854 smc_rx_init(new_smc); 857 855 858 856 if (local_contact != SMC_FIRST_CONTACT) { 859 - struct smc_buf_desc *buf_desc = new_smc->conn.rmb_desc; 860 - 861 - if (!buf_desc->reused) { 862 - /* register memory region for new rmb */ 863 - rc = smc_wr_reg_send(link, 864 - buf_desc->mr_rx[SMC_SINGLE_LINK]); 865 - if (rc) { 857 + 
if (!new_smc->conn.rmb_desc->reused) { 858 + if (smc_reg_rmb(link, new_smc->conn.rmb_desc)) { 866 859 reason_code = SMC_CLC_DECL_INTERR; 867 860 goto decline_rdma_unlock; 868 861 } ··· 975 978 } 976 979 977 980 out: 978 - if (lsmc->clcsock) { 979 - sock_release(lsmc->clcsock); 980 - lsmc->clcsock = NULL; 981 - } 982 981 release_sock(lsk); 983 982 sock_put(&lsmc->sk); /* sock_hold in smc_listen */ 984 983 } ··· 1163 1170 /* delegate to CLC child sock */ 1164 1171 release_sock(sk); 1165 1172 mask = smc->clcsock->ops->poll(file, smc->clcsock, wait); 1166 - /* if non-blocking connect finished ... */ 1167 1173 lock_sock(sk); 1168 - if ((sk->sk_state == SMC_INIT) && (mask & EPOLLOUT)) { 1169 - sk->sk_err = smc->clcsock->sk->sk_err; 1170 - if (sk->sk_err) { 1171 - mask |= EPOLLERR; 1172 - } else { 1174 + sk->sk_err = smc->clcsock->sk->sk_err; 1175 + if (sk->sk_err) { 1176 + mask |= EPOLLERR; 1177 + } else { 1178 + /* if non-blocking connect finished ... */ 1179 + if (sk->sk_state == SMC_INIT && 1180 + mask & EPOLLOUT && 1181 + smc->clcsock->sk->sk_state != TCP_CLOSE) { 1173 1182 rc = smc_connect_rdma(smc); 1174 1183 if (rc < 0) 1175 1184 mask |= EPOLLERR; ··· 1315 1320 1316 1321 smc = smc_sk(sk); 1317 1322 lock_sock(sk); 1318 - if (sk->sk_state != SMC_ACTIVE) 1323 + if (sk->sk_state != SMC_ACTIVE) { 1324 + release_sock(sk); 1319 1325 goto out; 1326 + } 1327 + release_sock(sk); 1320 1328 if (smc->use_fallback) 1321 1329 rc = kernel_sendpage(smc->clcsock, page, offset, 1322 1330 size, flags); ··· 1327 1329 rc = sock_no_sendpage(sock, page, offset, size, flags); 1328 1330 1329 1331 out: 1330 - release_sock(sk); 1331 1332 return rc; 1332 1333 } 1333 1334
+19 -3
net/smc/smc_core.c
··· 32 32 33 33 static u32 smc_lgr_num; /* unique link group number */ 34 34 35 + static void smc_buf_free(struct smc_buf_desc *buf_desc, struct smc_link *lnk, 36 + bool is_rmb); 37 + 35 38 static void smc_lgr_schedule_free_work(struct smc_link_group *lgr) 36 39 { 37 40 /* client link group creation always follows the server link group ··· 237 234 conn->sndbuf_size = 0; 238 235 } 239 236 if (conn->rmb_desc) { 240 - conn->rmb_desc->reused = true; 241 - conn->rmb_desc->used = 0; 242 - conn->rmbe_size = 0; 237 + if (!conn->rmb_desc->regerr) { 238 + conn->rmb_desc->reused = 1; 239 + conn->rmb_desc->used = 0; 240 + conn->rmbe_size = 0; 241 + } else { 242 + /* buf registration failed, reuse not possible */ 243 + struct smc_link_group *lgr = conn->lgr; 244 + struct smc_link *lnk; 245 + 246 + write_lock_bh(&lgr->rmbs_lock); 247 + list_del(&conn->rmb_desc->list); 248 + write_unlock_bh(&lgr->rmbs_lock); 249 + 250 + lnk = &lgr->lnk[SMC_SINGLE_LINK]; 251 + smc_buf_free(conn->rmb_desc, lnk, true); 252 + } 243 253 } 244 254 } 245 255
+2 -1
net/smc/smc_core.h
··· 123 123 */ 124 124 u32 order; /* allocation order */ 125 125 u32 used; /* currently used / unused */ 126 - bool reused; /* new created / reused */ 126 + u8 reused : 1; /* new created / reused */ 127 + u8 regerr : 1; /* err during registration */ 127 128 }; 128 129 129 130 struct smc_rtoken { /* address/key of remote RMB */
+1 -1
net/tipc/node.c
··· 2244 2244 2245 2245 rtnl_lock(); 2246 2246 for (bearer_id = prev_bearer; bearer_id < MAX_BEARERS; bearer_id++) { 2247 - err = __tipc_nl_add_monitor(net, &msg, prev_bearer); 2247 + err = __tipc_nl_add_monitor(net, &msg, bearer_id); 2248 2248 if (err) 2249 2249 break; 2250 2250 }
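The tipc fix above is a classic stale-loop-variable bug: the loop body passed the starting index `prev_bearer` to the dump helper on every iteration instead of the loop variable `bearer_id`, so the same bearer was dumped repeatedly. A toy reproduction (names and the bearer count are illustrative):

```c
#include <assert.h>

#define MAX_BEARERS 3

/* Bug: records the starting index on every iteration. */
static void dump_buggy(int prev, int *visited)
{
    int id;
    for (id = prev; id < MAX_BEARERS; id++)
        visited[id] = prev;      /* should have been 'id' */
}

/* Fix: records the loop variable, visiting each bearer once. */
static void dump_fixed(int prev, int *visited)
{
    int id;
    for (id = prev; id < MAX_BEARERS; id++)
        visited[id] = id;
}
```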
+7
net/tls/tls_main.c
··· 114 114 size = sg->length - offset; 115 115 offset += sg->offset; 116 116 117 + ctx->in_tcp_sendpages = true; 117 118 while (1) { 118 119 if (sg_is_last(sg)) 119 120 sendpage_flags = flags; ··· 149 148 } 150 149 151 150 clear_bit(TLS_PENDING_CLOSED_RECORD, &ctx->flags); 151 + ctx->in_tcp_sendpages = false; 152 + ctx->sk_write_space(sk); 152 153 153 154 return 0; 154 155 } ··· 219 216 static void tls_write_space(struct sock *sk) 220 217 { 221 218 struct tls_context *ctx = tls_get_ctx(sk); 219 + 220 + /* We are already sending pages, ignore notification */ 221 + if (ctx->in_tcp_sendpages) 222 + return; 222 223 223 224 if (!sk->sk_write_pending && tls_is_pending_closed_record(ctx)) { 224 225 gfp_t sk_allocation = sk->sk_allocation;
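The tls_main.c change adds an `in_tcp_sendpages` flag so that write-space notifications arriving while the push loop is running do not recurse back into the sender; the queued work is picked up by the explicit `ctx->sk_write_space(sk)` call after the loop. A minimal sketch of that reentrancy guard (struct and function names are illustrative, not the kernel TLS API):

```c
#include <assert.h>

struct tls_ctx {
    int in_tcp_sendpages;   /* guard: set while the push loop runs */
    int write_space_calls;  /* counts notifications actually handled */
};

static void write_space(struct tls_ctx *ctx)
{
    if (ctx->in_tcp_sendpages)  /* already sending: ignore notification */
        return;
    ctx->write_space_calls++;
}

static void push_pending(struct tls_ctx *ctx)
{
    ctx->in_tcp_sendpages = 1;
    write_space(ctx);           /* re-entrant call is suppressed */
    ctx->in_tcp_sendpages = 0;
    write_space(ctx);           /* after the loop, notifications flow again */
}
```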
+5 -2
samples/sockmap/Makefile
··· 65 65 # asm/sysreg.h - inline assembly used by it is incompatible with llvm. 66 66 # But, there is no easy way to fix it, so just exclude it since it is 67 67 # useless for BPF samples. 68 + # 69 + # -target bpf option required with SK_MSG programs, this is to ensure 70 + # reading 'void *' data types for data and data_end are __u64 reads. 68 71 $(obj)/%.o: $(src)/%.c 69 72 $(CLANG) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) -I$(obj) \ 70 73 -D__KERNEL__ -D__ASM_SYSREG_H -Wno-unused-value -Wno-pointer-sign \ 71 74 -Wno-compare-distinct-pointer-types \ 72 75 -Wno-gnu-variable-sized-type-not-at-end \ 73 76 -Wno-address-of-packed-member -Wno-tautological-compare \ 74 - -Wno-unknown-warning-option \ 75 - -O2 -emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=obj -o $@ 77 + -Wno-unknown-warning-option -O2 -target bpf \ 78 + -emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=obj -o $@
+1 -1
scripts/Makefile.gcc-plugins
··· 14 14 endif 15 15 16 16 ifdef CONFIG_GCC_PLUGIN_SANCOV 17 - ifeq ($(CFLAGS_KCOV),) 17 + ifeq ($(strip $(CFLAGS_KCOV)),) 18 18 # It is needed because of the gcc-plugin.sh and gcc version checks. 19 19 gcc-plugin-$(CONFIG_GCC_PLUGIN_SANCOV) += sancov_plugin.so 20 20
+1 -1
scripts/Makefile.lib
··· 196 196 $(call if_changed,bison) 197 197 198 198 quiet_cmd_bison_h = YACC $@ 199 - cmd_bison_h = bison -o/dev/null --defines=$@ -t -l $< 199 + cmd_bison_h = $(YACC) -o/dev/null --defines=$@ -t -l $< 200 200 201 201 $(obj)/%.tab.h: $(src)/%.y FORCE 202 202 $(call if_changed,bison_h)
+1 -1
scripts/extract_xc3028.pl
··· 1 1 #!/usr/bin/env perl 2 2 3 - # Copyright (c) Mauro Carvalho Chehab <mchehab@infradead.org> 3 + # Copyright (c) Mauro Carvalho Chehab <mchehab@kernel.org> 4 4 # Released under GPLv2 5 5 # 6 6 # In order to use, you need to:
+2 -2
scripts/genksyms/Makefile
··· 14 14 # so that 'bison: not found' will be displayed if it is missing. 15 15 ifeq ($(findstring 1,$(KBUILD_ENABLE_EXTRA_GCC_CHECKS)),) 16 16 17 - quiet_cmd_bison_no_warn = $(quet_cmd_bison) 17 + quiet_cmd_bison_no_warn = $(quiet_cmd_bison) 18 18 cmd_bison_no_warn = $(YACC) --version >/dev/null; \ 19 19 $(cmd_bison) 2>/dev/null 20 20 21 21 $(obj)/parse.tab.c: $(src)/parse.y FORCE 22 22 $(call if_changed,bison_no_warn) 23 23 24 - quiet_cmd_bison_h_no_warn = $(quet_cmd_bison_h) 24 + quiet_cmd_bison_h_no_warn = $(quiet_cmd_bison_h) 25 25 cmd_bison_h_no_warn = $(YACC) --version >/dev/null; \ 26 26 $(cmd_bison_h) 2>/dev/null 27 27
+1 -8
scripts/mod/sumversion.c
··· 330 330 goto out; 331 331 } 332 332 333 - /* There will be a line like so: 334 - deps_drivers/net/dummy.o := \ 335 - drivers/net/dummy.c \ 336 - $(wildcard include/config/net/fastroute.h) \ 337 - include/linux/module.h \ 338 - 339 - Sum all files in the same dir or subdirs. 340 - */ 333 + /* Sum all files in the same dir or subdirs. */ 341 334 while ((line = get_next_line(&pos, file, flen)) != NULL) { 342 335 char* p = line; 343 336
+1 -1
scripts/split-man.pl
··· 1 1 #!/usr/bin/perl 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # 4 - # Author: Mauro Carvalho Chehab <mchehab@s-opensource.com> 4 + # Author: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> 5 5 # 6 6 # Produce manpages from kernel-doc. 7 7 # See Documentation/doc-guide/kernel-doc.rst for instructions
+2
sound/core/pcm_compat.c
··· 423 423 return -ENOTTY; 424 424 if (substream->stream != dir) 425 425 return -EINVAL; 426 + if (substream->runtime->status->state == SNDRV_PCM_STATE_OPEN) 427 + return -EBADFD; 426 428 427 429 if ((ch = substream->runtime->channels) > 128) 428 430 return -EINVAL;
+2 -2
sound/core/seq/seq_virmidi.c
··· 174 174 } 175 175 return; 176 176 } 177 + spin_lock_irqsave(&substream->runtime->lock, flags); 177 178 if (vmidi->event.type != SNDRV_SEQ_EVENT_NONE) { 178 179 if (snd_seq_kernel_client_dispatch(vmidi->client, &vmidi->event, in_atomic(), 0) < 0) 179 - return; 180 + goto out; 180 181 vmidi->event.type = SNDRV_SEQ_EVENT_NONE; 181 182 } 182 - spin_lock_irqsave(&substream->runtime->lock, flags); 183 183 while (1) { 184 184 count = __snd_rawmidi_transmit_peek(substream, buf, sizeof(buf)); 185 185 if (count <= 0)
+15 -2
sound/drivers/aloop.c
··· 831 831 { 832 832 struct loopback *loopback = snd_kcontrol_chip(kcontrol); 833 833 834 + mutex_lock(&loopback->cable_lock); 834 835 ucontrol->value.integer.value[0] = 835 836 loopback->setup[kcontrol->id.subdevice] 836 837 [kcontrol->id.device].rate_shift; 838 + mutex_unlock(&loopback->cable_lock); 837 839 return 0; 838 840 } 839 841 ··· 867 865 { 868 866 struct loopback *loopback = snd_kcontrol_chip(kcontrol); 869 867 868 + mutex_lock(&loopback->cable_lock); 870 869 ucontrol->value.integer.value[0] = 871 870 loopback->setup[kcontrol->id.subdevice] 872 871 [kcontrol->id.device].notify; 872 + mutex_unlock(&loopback->cable_lock); 873 873 return 0; 874 874 } 875 875 ··· 883 879 int change = 0; 884 880 885 881 val = ucontrol->value.integer.value[0] ? 1 : 0; 882 + mutex_lock(&loopback->cable_lock); 886 883 if (val != loopback->setup[kcontrol->id.subdevice] 887 884 [kcontrol->id.device].notify) { 888 885 loopback->setup[kcontrol->id.subdevice] 889 886 [kcontrol->id.device].notify = val; 890 887 change = 1; 891 888 } 889 + mutex_unlock(&loopback->cable_lock); 892 890 return change; 893 891 } 894 892 ··· 898 892 struct snd_ctl_elem_value *ucontrol) 899 893 { 900 894 struct loopback *loopback = snd_kcontrol_chip(kcontrol); 901 - struct loopback_cable *cable = loopback->cables 902 - [kcontrol->id.subdevice][kcontrol->id.device ^ 1]; 895 + struct loopback_cable *cable; 896 + 903 897 unsigned int val = 0; 904 898 899 + mutex_lock(&loopback->cable_lock); 900 + cable = loopback->cables[kcontrol->id.subdevice][kcontrol->id.device ^ 1]; 905 901 if (cable != NULL) { 906 902 unsigned int running = cable->running ^ cable->pause; 907 903 908 904 val = (running & (1 << SNDRV_PCM_STREAM_PLAYBACK)) ? 
1 : 0; 909 905 } 906 + mutex_unlock(&loopback->cable_lock); 910 907 ucontrol->value.integer.value[0] = val; 911 908 return 0; 912 909 } ··· 952 943 { 953 944 struct loopback *loopback = snd_kcontrol_chip(kcontrol); 954 945 946 + mutex_lock(&loopback->cable_lock); 955 947 ucontrol->value.integer.value[0] = 956 948 loopback->setup[kcontrol->id.subdevice] 957 949 [kcontrol->id.device].rate; 950 + mutex_unlock(&loopback->cable_lock); 958 951 return 0; 959 952 } 960 953 ··· 976 965 { 977 966 struct loopback *loopback = snd_kcontrol_chip(kcontrol); 978 967 968 + mutex_lock(&loopback->cable_lock); 979 969 ucontrol->value.integer.value[0] = 980 970 loopback->setup[kcontrol->id.subdevice] 981 971 [kcontrol->id.device].channels; 972 + mutex_unlock(&loopback->cable_lock); 982 973 return 0; 983 974 } 984 975
+3 -2
sound/firewire/amdtp-stream.c
··· 773 773 u32 cycle; 774 774 unsigned int packets; 775 775 776 - s->max_payload_length = amdtp_stream_get_max_payload(s); 777 - 778 776 /* 779 777 * For in-stream, first packet has come. 780 778 * For out-stream, prepared to transmit first packet ··· 876 878 } 877 879 878 880 amdtp_stream_update(s); 881 + 882 + if (s->direction == AMDTP_IN_STREAM) 883 + s->max_payload_length = amdtp_stream_get_max_payload(s); 879 884 880 885 if (s->flags & CIP_NO_HEADER) 881 886 s->tag = TAG_NO_CIP_HEADER;
+1 -1
sound/pci/hda/patch_realtek.c
··· 3832 3832 } 3833 3833 } 3834 3834 3835 - #if IS_REACHABLE(INPUT) 3835 + #if IS_REACHABLE(CONFIG_INPUT) 3836 3836 static void gpio2_mic_hotkey_event(struct hda_codec *codec, 3837 3837 struct hda_jack_callback *event) 3838 3838 {
+2
tools/bpf/Makefile
··· 76 76 $(QUIET_LINK)$(CC) $(CFLAGS) -o $@ $^ 77 77 78 78 $(OUTPUT)bpf_exp.lex.c: $(OUTPUT)bpf_exp.yacc.c 79 + $(OUTPUT)bpf_exp.yacc.o: $(OUTPUT)bpf_exp.yacc.c 80 + $(OUTPUT)bpf_exp.lex.o: $(OUTPUT)bpf_exp.lex.c 79 81 80 82 clean: bpftool_clean 81 83 $(call QUIET_CLEAN, bpf-progs)
+5 -2
tools/bpf/bpf_dbg.c
··· 1063 1063 1064 1064 static int cmd_load(char *arg) 1065 1065 { 1066 - char *subcmd, *cont, *tmp = strdup(arg); 1066 + char *subcmd, *cont = NULL, *tmp = strdup(arg); 1067 1067 int ret = CMD_OK; 1068 1068 1069 1069 subcmd = strtok_r(tmp, " ", &cont); ··· 1073 1073 bpf_reset(); 1074 1074 bpf_reset_breakpoints(); 1075 1075 1076 - ret = cmd_load_bpf(cont); 1076 + if (!cont) 1077 + ret = CMD_ERR; 1078 + else 1079 + ret = cmd_load_bpf(cont); 1077 1080 } else if (matches(subcmd, "pcap") == 0) { 1078 1081 ret = cmd_load_pcap(cont); 1079 1082 } else {
+1
tools/power/acpi/Makefile.config
··· 56 56 # to compile vs uClibc, that can be done here as well. 57 57 CROSS = #/usr/i386-linux-uclibc/usr/bin/i386-uclibc- 58 58 CROSS_COMPILE ?= $(CROSS) 59 + LD = $(CC) 59 60 HOSTCC = gcc 60 61 61 62 # check if compiler option is supported
+2 -2
tools/testing/selftests/bpf/test_progs.c
··· 1108 1108 1109 1109 assert(system("dd if=/dev/urandom of=/dev/zero count=4 2> /dev/null") 1110 1110 == 0); 1111 - assert(system("./urandom_read if=/dev/urandom of=/dev/zero count=4 2> /dev/null") == 0); 1111 + assert(system("./urandom_read") == 0); 1112 1112 /* disable stack trace collection */ 1113 1113 key = 0; 1114 1114 val = 1; ··· 1158 1158 } while (bpf_map_get_next_key(stackmap_fd, &previous_key, &key) == 0); 1159 1159 1160 1160 CHECK(build_id_matches < 1, "build id match", 1161 - "Didn't find expected build ID from the map"); 1161 + "Didn't find expected build ID from the map\n"); 1162 1162 1163 1163 disable_pmu: 1164 1164 ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
+4 -4
tools/testing/selftests/lib.mk
··· 20 20 21 21 .ONESHELL: 22 22 define RUN_TESTS 23 - @export KSFT_TAP_LEVEL=`echo 1`; 24 - @test_num=`echo 0`; 25 - @echo "TAP version 13"; 26 - @for TEST in $(1); do \ 23 + @export KSFT_TAP_LEVEL=`echo 1`; \ 24 + test_num=`echo 0`; \ 25 + echo "TAP version 13"; \ 26 + for TEST in $(1); do \ 27 27 BASENAME_TEST=`basename $$TEST`; \ 28 28 test_num=`echo $$test_num+1 | bc`; \ 29 29 echo "selftests: $$BASENAME_TEST"; \
+2 -1
tools/testing/selftests/net/Makefile
··· 5 5 CFLAGS += -I../../../../usr/include/ 6 6 7 7 TEST_PROGS := run_netsocktests run_afpackettests test_bpf.sh netdevice.sh rtnetlink.sh 8 - TEST_PROGS += fib_tests.sh fib-onlink-tests.sh in_netns.sh pmtu.sh 8 + TEST_PROGS += fib_tests.sh fib-onlink-tests.sh pmtu.sh 9 + TEST_GEN_PROGS_EXTENDED := in_netns.sh 9 10 TEST_GEN_FILES = socket 10 11 TEST_GEN_FILES += psock_fanout psock_tpacket msg_zerocopy 11 12 TEST_GEN_PROGS = reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa
+1 -1
virt/kvm/arm/vgic/vgic-init.c
··· 423 423 * We cannot rely on the vgic maintenance interrupt to be 424 424 * delivered synchronously. This means we can only use it to 425 425 * exit the VM, and we perform the handling of EOIed 426 - * interrupts on the exit path (see vgic_process_maintenance). 426 + * interrupts on the exit path (see vgic_fold_lr_state). 427 427 */ 428 428 return IRQ_HANDLED; 429 429 }
+8 -2
virt/kvm/arm/vgic/vgic-mmio.c
··· 289 289 irq->vcpu->cpu != -1) /* VCPU thread is running */ 290 290 cond_resched_lock(&irq->irq_lock); 291 291 292 - if (irq->hw) 292 + if (irq->hw) { 293 293 vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu); 294 - else 294 + } else { 295 + u32 model = vcpu->kvm->arch.vgic.vgic_model; 296 + 295 297 irq->active = active; 298 + if (model == KVM_DEV_TYPE_ARM_VGIC_V2 && 299 + active && vgic_irq_is_sgi(irq->intid)) 300 + irq->active_source = requester_vcpu->vcpu_id; 301 + } 296 302 297 303 if (irq->active) 298 304 vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+22 -16
virt/kvm/arm/vgic/vgic-v2.c
··· 37 37 vgic_v2_write_lr(i, 0); 38 38 } 39 39 40 - void vgic_v2_set_npie(struct kvm_vcpu *vcpu) 41 - { 42 - struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2; 43 - 44 - cpuif->vgic_hcr |= GICH_HCR_NPIE; 45 - } 46 - 47 40 void vgic_v2_set_underflow(struct kvm_vcpu *vcpu) 48 41 { 49 42 struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2; ··· 64 71 int lr; 65 72 unsigned long flags; 66 73 67 - cpuif->vgic_hcr &= ~(GICH_HCR_UIE | GICH_HCR_NPIE); 74 + cpuif->vgic_hcr &= ~GICH_HCR_UIE; 68 75 69 76 for (lr = 0; lr < vgic_cpu->used_lrs; lr++) { 70 77 u32 val = cpuif->vgic_lr[lr]; 71 - u32 intid = val & GICH_LR_VIRTUALID; 78 + u32 cpuid, intid = val & GICH_LR_VIRTUALID; 72 79 struct vgic_irq *irq; 80 + 81 + /* Extract the source vCPU id from the LR */ 82 + cpuid = val & GICH_LR_PHYSID_CPUID; 83 + cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT; 84 + cpuid &= 7; 73 85 74 86 /* Notify fds when the guest EOI'ed a level-triggered SPI */ 75 87 if (lr_signals_eoi_mi(val) && vgic_valid_spi(vcpu->kvm, intid)) ··· 88 90 /* Always preserve the active bit */ 89 91 irq->active = !!(val & GICH_LR_ACTIVE_BIT); 90 92 93 + if (irq->active && vgic_irq_is_sgi(intid)) 94 + irq->active_source = cpuid; 95 + 91 96 /* Edge is the only case where we preserve the pending bit */ 92 97 if (irq->config == VGIC_CONFIG_EDGE && 93 98 (val & GICH_LR_PENDING_BIT)) { 94 99 irq->pending_latch = true; 95 100 96 - if (vgic_irq_is_sgi(intid)) { 97 - u32 cpuid = val & GICH_LR_PHYSID_CPUID; 98 - 99 - cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT; 101 + if (vgic_irq_is_sgi(intid)) 100 102 irq->source |= (1 << cpuid); 101 - } 102 103 } 103 104 104 105 /* ··· 149 152 u32 val = irq->intid; 150 153 bool allow_pending = true; 151 154 152 - if (irq->active) 155 + if (irq->active) { 153 156 val |= GICH_LR_ACTIVE_BIT; 157 + if (vgic_irq_is_sgi(irq->intid)) 158 + val |= irq->active_source << GICH_LR_PHYSID_CPUID_SHIFT; 159 + if (vgic_irq_is_multi_sgi(irq)) { 160 + allow_pending = false; 161 + val |= GICH_LR_EOI; 162 + } 
163 + } 154 164 155 165 if (irq->hw) { 156 166 val |= GICH_LR_HW; ··· 194 190 BUG_ON(!src); 195 191 val |= (src - 1) << GICH_LR_PHYSID_CPUID_SHIFT; 196 192 irq->source &= ~(1 << (src - 1)); 197 - if (irq->source) 193 + if (irq->source) { 198 194 irq->pending_latch = true; 195 + val |= GICH_LR_EOI; 196 + } 199 197 } 200 198 } 201 199
+29 -20
virt/kvm/arm/vgic/vgic-v3.c
··· 27 27 static bool common_trap; 28 28 static bool gicv4_enable; 29 29 30 - void vgic_v3_set_npie(struct kvm_vcpu *vcpu) 31 - { 32 - struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3; 33 - 34 - cpuif->vgic_hcr |= ICH_HCR_NPIE; 35 - } 36 - 37 30 void vgic_v3_set_underflow(struct kvm_vcpu *vcpu) 38 31 { 39 32 struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3; ··· 48 55 int lr; 49 56 unsigned long flags; 50 57 51 - cpuif->vgic_hcr &= ~(ICH_HCR_UIE | ICH_HCR_NPIE); 58 + cpuif->vgic_hcr &= ~ICH_HCR_UIE; 52 59 53 60 for (lr = 0; lr < vgic_cpu->used_lrs; lr++) { 54 61 u64 val = cpuif->vgic_lr[lr]; 55 - u32 intid; 62 + u32 intid, cpuid; 56 63 struct vgic_irq *irq; 64 + bool is_v2_sgi = false; 57 65 58 - if (model == KVM_DEV_TYPE_ARM_VGIC_V3) 66 + cpuid = val & GICH_LR_PHYSID_CPUID; 67 + cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT; 68 + 69 + if (model == KVM_DEV_TYPE_ARM_VGIC_V3) { 59 70 intid = val & ICH_LR_VIRTUAL_ID_MASK; 60 - else 71 + } else { 61 72 intid = val & GICH_LR_VIRTUALID; 73 + is_v2_sgi = vgic_irq_is_sgi(intid); 74 + } 62 75 63 76 /* Notify fds when the guest EOI'ed a level-triggered IRQ */ 64 77 if (lr_signals_eoi_mi(val) && vgic_valid_spi(vcpu->kvm, intid)) ··· 80 81 /* Always preserve the active bit */ 81 82 irq->active = !!(val & ICH_LR_ACTIVE_BIT); 82 83 84 + if (irq->active && is_v2_sgi) 85 + irq->active_source = cpuid; 86 + 83 87 /* Edge is the only case where we preserve the pending bit */ 84 88 if (irq->config == VGIC_CONFIG_EDGE && 85 89 (val & ICH_LR_PENDING_BIT)) { 86 90 irq->pending_latch = true; 87 91 88 - if (vgic_irq_is_sgi(intid) && 89 - model == KVM_DEV_TYPE_ARM_VGIC_V2) { 90 - u32 cpuid = val & GICH_LR_PHYSID_CPUID; 91 - 92 - cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT; 92 + if (is_v2_sgi) 93 93 irq->source |= (1 << cpuid); 94 - } 95 94 } 96 95 97 96 /* ··· 130 133 { 131 134 u32 model = vcpu->kvm->arch.vgic.vgic_model; 132 135 u64 val = irq->intid; 133 - bool allow_pending = true; 136 + bool allow_pending = true, is_v2_sgi; 134 137 
135 - if (irq->active) 138 + is_v2_sgi = (vgic_irq_is_sgi(irq->intid) && 139 + model == KVM_DEV_TYPE_ARM_VGIC_V2); 140 + 141 + if (irq->active) { 136 142 val |= ICH_LR_ACTIVE_BIT; 143 + if (is_v2_sgi) 144 + val |= irq->active_source << GICH_LR_PHYSID_CPUID_SHIFT; 145 + if (vgic_irq_is_multi_sgi(irq)) { 146 + allow_pending = false; 147 + val |= ICH_LR_EOI; 148 + } 149 + } 137 150 138 151 if (irq->hw) { 139 152 val |= ICH_LR_HW; ··· 181 174 BUG_ON(!src); 182 175 val |= (src - 1) << GICH_LR_PHYSID_CPUID_SHIFT; 183 176 irq->source &= ~(1 << (src - 1)); 184 - if (irq->source) 177 + if (irq->source) { 185 178 irq->pending_latch = true; 179 + val |= ICH_LR_EOI; 180 + } 186 181 } 187 182 } 188 183
+7 -23
virt/kvm/arm/vgic/vgic.c
··· 725 725 vgic_v3_set_underflow(vcpu); 726 726 } 727 727 728 - static inline void vgic_set_npie(struct kvm_vcpu *vcpu) 729 - { 730 - if (kvm_vgic_global_state.type == VGIC_V2) 731 - vgic_v2_set_npie(vcpu); 732 - else 733 - vgic_v3_set_npie(vcpu); 734 - } 735 - 736 728 /* Requires the ap_list_lock to be held. */ 737 729 static int compute_ap_list_depth(struct kvm_vcpu *vcpu, 738 730 bool *multi_sgi) ··· 738 746 DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock)); 739 747 740 748 list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) { 749 + int w; 750 + 741 751 spin_lock(&irq->irq_lock); 742 752 /* GICv2 SGIs can count for more than one... */ 743 - if (vgic_irq_is_sgi(irq->intid) && irq->source) { 744 - int w = hweight8(irq->source); 745 - 746 - count += w; 747 - *multi_sgi |= (w > 1); 748 - } else { 749 - count++; 750 - } 753 + w = vgic_irq_get_lr_count(irq); 751 754 spin_unlock(&irq->irq_lock); 755 + 756 + count += w; 757 + *multi_sgi |= (w > 1); 752 758 } 753 759 return count; 754 760 } ··· 757 767 struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; 758 768 struct vgic_irq *irq; 759 769 int count; 760 - bool npie = false; 761 770 bool multi_sgi; 762 771 u8 prio = 0xff; 763 772 ··· 786 797 if (likely(vgic_target_oracle(irq) == vcpu)) { 787 798 vgic_populate_lr(vcpu, irq, count++); 788 799 789 - if (irq->source) { 790 - npie = true; 800 + if (irq->source) 791 801 prio = irq->priority; 792 - } 793 802 } 794 803 795 804 spin_unlock(&irq->irq_lock); ··· 799 812 break; 800 813 } 801 814 } 802 - 803 - if (npie) 804 - vgic_set_npie(vcpu); 805 815 806 816 vcpu->arch.vgic_cpu.used_lrs = count; 807 817
+14
virt/kvm/arm/vgic/vgic.h
··· 110 110 return irq->config == VGIC_CONFIG_LEVEL && irq->hw; 111 111 } 112 112 113 + static inline int vgic_irq_get_lr_count(struct vgic_irq *irq) 114 + { 115 + /* Account for the active state as an interrupt */ 116 + if (vgic_irq_is_sgi(irq->intid) && irq->source) 117 + return hweight8(irq->source) + irq->active; 118 + 119 + return irq_is_pending(irq) || irq->active; 120 + } 121 + 122 + static inline bool vgic_irq_is_multi_sgi(struct vgic_irq *irq) 123 + { 124 + return vgic_irq_get_lr_count(irq) > 1; 125 + } 126 + 113 127 /* 114 128 * This struct provides an intermediate representation of the fields contained 115 129 * in the GICH_VMCR and ICH_VMCR registers, such that code exporting the GIC
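The new `vgic_irq_get_lr_count()` helper above counts how many list registers an interrupt needs: a GICv2 SGI consumes one LR per pending source CPU (one bit per CPU in `source`), plus one more if the interrupt is also active. A standalone sketch of that accounting, with an explicit popcount in place of the kernel's `hweight8()`:

```c
#include <assert.h>

/* Count set bits in an 8-bit source mask (stand-in for hweight8). */
static int popcount8(unsigned char v)
{
    int n = 0;
    while (v) {
        n += v & 1;
        v >>= 1;
    }
    return n;
}

/* LRs needed: one per pending SGI source, plus one for the active
 * state; a non-SGI interrupt needs at most one LR. */
static int lr_count(unsigned char source, int active, int pending)
{
    if (source)  /* multi-source SGI */
        return popcount8(source) + (active ? 1 : 0);
    return (pending || active) ? 1 : 0;
}
```

An interrupt with `lr_count > 1` is what `vgic_irq_is_multi_sgi()` flags, which in turn forces the EOI maintenance interrupt so the remaining sources get injected later.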