Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Several conflicts here.

The NFP driver bug fix that adds an nfp_netdev_is_nfp_repr() check to
nfp_fl_output() needed some adjustments because the code block now
sits inside an else block.

There were parallel additions to net/pkt_cls.h and net/sch_generic.h.

A bug fix in __tcp_retransmit_skb() conflicted with some of
the rbtree changes in net-next.

The tc action RCU callback fixes in 'net' had some overlap with some
of the recent tcf_block reworking.

Signed-off-by: David S. Miller <davem@davemloft.net>

+2333 -2094
+8
Documentation/ABI/testing/sysfs-bus-iio-proximity-as3935
··· 14 14 Show or set the gain boost of the amp, from 0-31 range. 15 15 18 = indoors (default) 16 16 14 = outdoors 17 + 18 + What /sys/bus/iio/devices/iio:deviceX/noise_level_tripped 19 + Date: May 2017 20 + KernelVersion: 4.13 21 + Contact: Matt Ranostay <matt.ranostay@konsulko.com> 22 + Description: 23 + When 1 the noise level is over the trip level and not reporting 24 + valid data
+3 -1
Documentation/ABI/testing/sysfs-devices-power
··· 211 211 device, after it has been suspended at run time, from a resume 212 212 request to the moment the device will be ready to process I/O, 213 213 in microseconds. If it is equal to 0, however, this means that 214 - the PM QoS resume latency may be arbitrary. 214 + the PM QoS resume latency may be arbitrary and the special value 215 + "n/a" means that user space cannot accept any resume latency at 216 + all for the given device. 215 217 216 218 Not all drivers support this attribute. If it isn't supported, 217 219 it is not present.
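The new "n/a" special value gives the attribute three distinct meanings. A minimal C sketch of how user space might interpret the string; the helper name and the sentinel return convention are this sketch's own, not a kernel API:

```c
#include <stdlib.h>
#include <string.h>

/* Hedged sketch of interpreting pm_qos_resume_latency_us as documented
 * above: "n/a" means user space cannot accept any resume latency, 0
 * means the latency may be arbitrary, anything else is a bound in
 * microseconds. The sentinel values are this sketch's own convention. */
#define RESUME_LATENCY_ANY  (-1L)   /* "0": any latency is acceptable */
#define RESUME_LATENCY_NONE (-2L)   /* "n/a": no latency is acceptable */

static long parse_resume_latency_us(const char *s)
{
	if (strcmp(s, "n/a") == 0)
		return RESUME_LATENCY_NONE;

	long us = strtol(s, NULL, 10);
	return us == 0 ? RESUME_LATENCY_ANY : us;
}
```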
+5
Documentation/devicetree/bindings/iio/proximity/as3935.txt
··· 16 16 - ams,tuning-capacitor-pf: Calibration tuning capacitor stepping 17 17 value 0 - 120pF. This will require using the calibration data from 18 18 the manufacturer. 19 + - ams,nflwdth: Set the noise and watchdog threshold register on 20 + startup. This will need to set according to the noise from the 21 + MCU board, and possibly the local environment. Refer to the 22 + datasheet for the threshold settings. 19 23 20 24 Example: 21 25 ··· 31 27 interrupt-parent = <&gpio1>; 32 28 interrupts = <16 1>; 33 29 ams,tuning-capacitor-pf = <80>; 30 + ams,nflwdth = <0x44>; 34 31 };
+3 -3
Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.txt
··· 99 99 compatible = "arm,gic-v3-its"; 100 100 msi-controller; 101 101 #msi-cells = <1>; 102 - reg = <0x0 0x2c200000 0 0x200000>; 102 + reg = <0x0 0x2c200000 0 0x20000>; 103 103 }; 104 104 }; 105 105 ··· 124 124 compatible = "arm,gic-v3-its"; 125 125 msi-controller; 126 126 #msi-cells = <1>; 127 - reg = <0x0 0x2c200000 0 0x200000>; 127 + reg = <0x0 0x2c200000 0 0x20000>; 128 128 }; 129 129 130 130 gic-its@2c400000 { 131 131 compatible = "arm,gic-v3-its"; 132 132 msi-controller; 133 133 #msi-cells = <1>; 134 - reg = <0x0 0x2c400000 0 0x200000>; 134 + reg = <0x0 0x2c400000 0 0x20000>; 135 135 }; 136 136 137 137 ppi-partitions {
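The corrected reg size matches the actual ITS register frame; a one-line sanity check (the macro name is this sketch's own):

```c
/* Sanity check for the corrected ITS reg size above: each GICv3 ITS
 * register frame is 128 KiB (0x20000), not the 2 MiB (0x200000) the
 * old binding examples claimed. */
#define GITS_FRAME_SIZE 0x20000u
```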
+18 -13
Documentation/kbuild/makefiles.txt
··· 1108 1108 ld 1109 1109 Link target. Often, LDFLAGS_$@ is used to set specific options to ld. 1110 1110 1111 - objcopy 1112 - Copy binary. Uses OBJCOPYFLAGS usually specified in 1113 - arch/$(ARCH)/Makefile. 1114 - OBJCOPYFLAGS_$@ may be used to set additional options. 1115 - 1116 - gzip 1117 - Compress target. Use maximum compression to compress target. 1118 - 1119 1111 Example: 1120 1112 #arch/x86/boot/Makefile 1121 1113 LDFLAGS_bootsect := -Ttext 0x0 -s --oformat binary ··· 1130 1138 Note: It is a common mistake to forget the "targets :=" assignment, 1131 1139 resulting in the target file being recompiled for no 1132 1140 obvious reason. 1141 + 1142 + objcopy 1143 + Copy binary. Uses OBJCOPYFLAGS usually specified in 1144 + arch/$(ARCH)/Makefile. 1145 + OBJCOPYFLAGS_$@ may be used to set additional options. 1146 + 1147 + gzip 1148 + Compress target. Use maximum compression to compress target. 1149 + 1150 + Example: 1151 + #arch/x86/boot/compressed/Makefile 1152 + $(obj)/vmlinux.bin.gz: $(vmlinux.bin.all-y) FORCE 1153 + $(call if_changed,gzip) 1133 1154 1134 1155 dtc 1135 1156 Create flattened device tree blob object suitable for linking ··· 1224 1219 that may be shared between individual architectures. 1225 1220 The recommended approach how to use a generic header file is 1226 1221 to list the file in the Kbuild file. 1227 - See "7.3 generic-y" for further info on syntax etc. 1222 + See "7.2 generic-y" for further info on syntax etc. 1228 1223 1229 1224 --- 6.11 Post-link pass 1230 1225 ··· 1259 1254 arch/<arch>/include/asm/ to list asm files coming from asm-generic. 1260 1255 See subsequent chapter for the syntax of the Kbuild file. 1261 1256 1262 - --- 7.1 no-export-headers 1257 + --- 7.1 no-export-headers 1263 1258 1264 1259 no-export-headers is essentially used by include/uapi/linux/Kbuild to 1265 1260 avoid exporting specific headers (e.g. kvm.h) on architectures that do 1266 1261 not support it. It should be avoided as much as possible. 
1267 1262 1268 - --- 7.2 generic-y 1263 + --- 7.2 generic-y 1269 1264 1270 1265 If an architecture uses a verbatim copy of a header from 1271 1266 include/asm-generic then this is listed in the file ··· 1292 1287 Example: termios.h 1293 1288 #include <asm-generic/termios.h> 1294 1289 1295 - --- 7.3 generated-y 1290 + --- 7.3 generated-y 1296 1291 1297 1292 If an architecture generates other header files alongside generic-y 1298 1293 wrappers, generated-y specifies them. ··· 1304 1299 #arch/x86/include/asm/Kbuild 1305 1300 generated-y += syscalls_32.h 1306 1301 1307 - --- 7.5 mandatory-y 1302 + --- 7.4 mandatory-y 1308 1303 1309 1304 mandatory-y is essentially used by include/uapi/asm-generic/Kbuild.asm 1310 1305 to define the minimum set of headers that must be exported in
+2 -2
MAINTAINERS
··· 9220 9220 MUSB MULTIPOINT HIGH SPEED DUAL-ROLE CONTROLLER 9221 9221 M: Bin Liu <b-liu@ti.com> 9222 9222 L: linux-usb@vger.kernel.org 9223 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git 9224 9223 S: Maintained 9225 9224 F: drivers/usb/musb/ 9226 9225 ··· 10186 10187 10187 10188 PARAVIRT_OPS INTERFACE 10188 10189 M: Juergen Gross <jgross@suse.com> 10189 - M: Chris Wright <chrisw@sous-sol.org> 10190 10190 M: Alok Kataria <akataria@vmware.com> 10191 10191 M: Rusty Russell <rusty@rustcorp.com.au> 10192 10192 L: virtualization@lists.linux-foundation.org ··· 10565 10567 M: Ingo Molnar <mingo@redhat.com> 10566 10568 M: Arnaldo Carvalho de Melo <acme@kernel.org> 10567 10569 R: Alexander Shishkin <alexander.shishkin@linux.intel.com> 10570 + R: Jiri Olsa <jolsa@redhat.com> 10571 + R: Namhyung Kim <namhyung@kernel.org> 10568 10572 L: linux-kernel@vger.kernel.org 10569 10573 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core 10570 10574 S: Supported
+6 -6
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 14 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc5 4 + EXTRAVERSION = -rc7 5 5 NAME = Fearless Coyote 6 6 7 7 # *DOCUMENTATION* ··· 130 130 ifneq ($(KBUILD_OUTPUT),) 131 131 # check that the output directory actually exists 132 132 saved-output := $(KBUILD_OUTPUT) 133 - $(shell [ -d $(KBUILD_OUTPUT) ] || mkdir -p $(KBUILD_OUTPUT)) 134 - KBUILD_OUTPUT := $(realpath $(KBUILD_OUTPUT)) 133 + KBUILD_OUTPUT := $(shell mkdir -p $(KBUILD_OUTPUT) && cd $(KBUILD_OUTPUT) \ 134 + && /bin/pwd) 135 135 $(if $(KBUILD_OUTPUT),, \ 136 136 $(error failed to create output directory "$(saved-output)")) 137 137 ··· 697 697 698 698 ifeq ($(cc-name),clang) 699 699 ifneq ($(CROSS_COMPILE),) 700 - CLANG_TARGET := -target $(notdir $(CROSS_COMPILE:%-=%)) 700 + CLANG_TARGET := --target=$(notdir $(CROSS_COMPILE:%-=%)) 701 701 GCC_TOOLCHAIN := $(realpath $(dir $(shell which $(LD)))/..) 702 702 endif 703 703 ifneq ($(GCC_TOOLCHAIN),) 704 - CLANG_GCC_TC := -gcc-toolchain $(GCC_TOOLCHAIN) 704 + CLANG_GCC_TC := --gcc-toolchain=$(GCC_TOOLCHAIN) 705 705 endif 706 706 KBUILD_CFLAGS += $(CLANG_TARGET) $(CLANG_GCC_TC) 707 707 KBUILD_AFLAGS += $(CLANG_TARGET) $(CLANG_GCC_TC) ··· 1399 1399 @echo ' Build, install, and boot kernel before' 1400 1400 @echo ' running kselftest on it' 1401 1401 @echo ' kselftest-clean - Remove all generated kselftest files' 1402 - @echo ' kselftest-merge - Merge all the config dependencies of kselftest to existed' 1402 + @echo ' kselftest-merge - Merge all the config dependencies of kselftest to existing' 1403 1403 @echo ' .config.' 1404 1404 @echo '' 1405 1405 @echo 'Userspace tools targets:'
+2 -2
arch/alpha/kernel/sys_alcor.c
··· 181 181 * comes in on. This makes interrupt processing much easier. 182 182 */ 183 183 184 - static int __init 184 + static int 185 185 alcor_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 186 186 { 187 - static char irq_tab[7][5] __initdata = { 187 + static char irq_tab[7][5] = { 188 188 /*INT INTA INTB INTC INTD */ 189 189 /* note: IDSEL 17 is XLT only */ 190 190 {16+13, 16+13, 16+13, 16+13, 16+13}, /* IdSel 17, TULIP */
+6 -6
arch/alpha/kernel/sys_cabriolet.c
··· 173 173 * because it is the Saturn IO (SIO) PCI/ISA Bridge Chip. 174 174 */ 175 175 176 - static inline int __init 176 + static inline int 177 177 eb66p_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 178 178 { 179 - static char irq_tab[5][5] __initdata = { 179 + static char irq_tab[5][5] = { 180 180 /*INT INTA INTB INTC INTD */ 181 181 {16+0, 16+0, 16+5, 16+9, 16+13}, /* IdSel 6, slot 0, J25 */ 182 182 {16+1, 16+1, 16+6, 16+10, 16+14}, /* IdSel 7, slot 1, J26 */ ··· 203 203 * because it is the Saturn IO (SIO) PCI/ISA Bridge Chip. 204 204 */ 205 205 206 - static inline int __init 206 + static inline int 207 207 cabriolet_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 208 208 { 209 - static char irq_tab[5][5] __initdata = { 209 + static char irq_tab[5][5] = { 210 210 /*INT INTA INTB INTC INTD */ 211 211 { 16+2, 16+2, 16+7, 16+11, 16+15}, /* IdSel 5, slot 2, J21 */ 212 212 { 16+0, 16+0, 16+5, 16+9, 16+13}, /* IdSel 6, slot 0, J19 */ ··· 287 287 * 288 288 */ 289 289 290 - static inline int __init 290 + static inline int 291 291 alphapc164_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 292 292 { 293 - static char irq_tab[7][5] __initdata = { 293 + static char irq_tab[7][5] = { 294 294 /*INT INTA INTB INTC INTD */ 295 295 { 16+2, 16+2, 16+9, 16+13, 16+17}, /* IdSel 5, slot 2, J20 */ 296 296 { 16+0, 16+0, 16+7, 16+11, 16+15}, /* IdSel 6, slot 0, J29 */
+10 -10
arch/alpha/kernel/sys_dp264.c
··· 356 356 * 10 64 bit PCI option slot 3 (not bus 0) 357 357 */ 358 358 359 - static int __init 359 + static int 360 360 isa_irq_fixup(const struct pci_dev *dev, int irq) 361 361 { 362 362 u8 irq8; ··· 372 372 return irq8 & 0xf; 373 373 } 374 374 375 - static int __init 375 + static int 376 376 dp264_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 377 377 { 378 - static char irq_tab[6][5] __initdata = { 378 + static char irq_tab[6][5] = { 379 379 /*INT INTA INTB INTC INTD */ 380 380 { -1, -1, -1, -1, -1}, /* IdSel 5 ISA Bridge */ 381 381 { 16+ 3, 16+ 3, 16+ 2, 16+ 2, 16+ 2}, /* IdSel 6 SCSI builtin*/ ··· 394 394 return isa_irq_fixup(dev, irq); 395 395 } 396 396 397 - static int __init 397 + static int 398 398 monet_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 399 399 { 400 - static char irq_tab[13][5] __initdata = { 400 + static char irq_tab[13][5] = { 401 401 /*INT INTA INTB INTC INTD */ 402 402 { 45, 45, 45, 45, 45}, /* IdSel 3 21143 PCI1 */ 403 403 { -1, -1, -1, -1, -1}, /* IdSel 4 unused */ ··· 423 423 return isa_irq_fixup(dev, COMMON_TABLE_LOOKUP); 424 424 } 425 425 426 - static u8 __init 426 + static u8 427 427 monet_swizzle(struct pci_dev *dev, u8 *pinp) 428 428 { 429 429 struct pci_controller *hose = dev->sysdata; ··· 456 456 return slot; 457 457 } 458 458 459 - static int __init 459 + static int 460 460 webbrick_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 461 461 { 462 - static char irq_tab[13][5] __initdata = { 462 + static char irq_tab[13][5] = { 463 463 /*INT INTA INTB INTC INTD */ 464 464 { -1, -1, -1, -1, -1}, /* IdSel 7 ISA Bridge */ 465 465 { -1, -1, -1, -1, -1}, /* IdSel 8 unused */ ··· 478 478 return isa_irq_fixup(dev, COMMON_TABLE_LOOKUP); 479 479 } 480 480 481 - static int __init 481 + static int 482 482 clipper_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 483 483 { 484 - static char irq_tab[7][5] __initdata = { 484 + static char irq_tab[7][5] = { 485 485 /*INT INTA INTB INTC INTD */ 486 486 { 16+ 8, 16+ 8, 16+ 9, 
16+10, 16+11}, /* IdSel 1 slot 1 */ 487 487 { 16+12, 16+12, 16+13, 16+14, 16+15}, /* IdSel 2 slot 2 */
+2 -2
arch/alpha/kernel/sys_eb64p.c
··· 167 167 * comes in on. This makes interrupt processing much easier. 168 168 */ 169 169 170 - static int __init 170 + static int 171 171 eb64p_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 172 172 { 173 - static char irq_tab[5][5] __initdata = { 173 + static char irq_tab[5][5] = { 174 174 /*INT INTA INTB INTC INTD */ 175 175 {16+7, 16+7, 16+7, 16+7, 16+7}, /* IdSel 5, slot ?, ?? */ 176 176 {16+0, 16+0, 16+2, 16+4, 16+9}, /* IdSel 6, slot ?, ?? */
+2 -2
arch/alpha/kernel/sys_eiger.c
··· 141 141 } 142 142 } 143 143 144 - static int __init 144 + static int 145 145 eiger_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 146 146 { 147 147 u8 irq_orig; ··· 158 158 return irq_orig - 0x80; 159 159 } 160 160 161 - static u8 __init 161 + static u8 162 162 eiger_swizzle(struct pci_dev *dev, u8 *pinp) 163 163 { 164 164 struct pci_controller *hose = dev->sysdata;
+3 -3
arch/alpha/kernel/sys_miata.c
··· 149 149 * comes in on. This makes interrupt processing much easier. 150 150 */ 151 151 152 - static int __init 152 + static int 153 153 miata_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 154 154 { 155 - static char irq_tab[18][5] __initdata = { 155 + static char irq_tab[18][5] = { 156 156 /*INT INTA INTB INTC INTD */ 157 157 {16+ 8, 16+ 8, 16+ 8, 16+ 8, 16+ 8}, /* IdSel 14, DC21142 */ 158 158 { -1, -1, -1, -1, -1}, /* IdSel 15, EIDE */ ··· 196 196 return COMMON_TABLE_LOOKUP; 197 197 } 198 198 199 - static u8 __init 199 + static u8 200 200 miata_swizzle(struct pci_dev *dev, u8 *pinp) 201 201 { 202 202 int slot, pin = *pinp;
+2 -2
arch/alpha/kernel/sys_mikasa.c
··· 145 145 * comes in on. This makes interrupt processing much easier. 146 146 */ 147 147 148 - static int __init 148 + static int 149 149 mikasa_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 150 150 { 151 - static char irq_tab[8][5] __initdata = { 151 + static char irq_tab[8][5] = { 152 152 /*INT INTA INTB INTC INTD */ 153 153 {16+12, 16+12, 16+12, 16+12, 16+12}, /* IdSel 17, SCSI */ 154 154 { -1, -1, -1, -1, -1}, /* IdSel 18, PCEB */
+1 -1
arch/alpha/kernel/sys_nautilus.c
··· 62 62 common_init_isa_dma(); 63 63 } 64 64 65 - static int __init 65 + static int 66 66 nautilus_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 67 67 { 68 68 /* Preserve the IRQ set up by the console. */
+3 -3
arch/alpha/kernel/sys_noritake.c
··· 193 193 * comes in on. This makes interrupt processing much easier. 194 194 */ 195 195 196 - static int __init 196 + static int 197 197 noritake_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 198 198 { 199 - static char irq_tab[15][5] __initdata = { 199 + static char irq_tab[15][5] = { 200 200 /*INT INTA INTB INTC INTD */ 201 201 /* note: IDSELs 16, 17, and 25 are CORELLE only */ 202 202 { 16+1, 16+1, 16+1, 16+1, 16+1}, /* IdSel 16, QLOGIC */ ··· 221 221 return COMMON_TABLE_LOOKUP; 222 222 } 223 223 224 - static u8 __init 224 + static u8 225 225 noritake_swizzle(struct pci_dev *dev, u8 *pinp) 226 226 { 227 227 int slot, pin = *pinp;
+2 -2
arch/alpha/kernel/sys_rawhide.c
··· 221 221 * 222 222 */ 223 223 224 - static int __init 224 + static int 225 225 rawhide_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 226 226 { 227 - static char irq_tab[5][5] __initdata = { 227 + static char irq_tab[5][5] = { 228 228 /*INT INTA INTB INTC INTD */ 229 229 { 16+16, 16+16, 16+16, 16+16, 16+16}, /* IdSel 1 SCSI PCI 1 */ 230 230 { 16+ 0, 16+ 0, 16+ 1, 16+ 2, 16+ 3}, /* IdSel 2 slot 2 */
+3 -3
arch/alpha/kernel/sys_ruffian.c
··· 117 117 * 118 118 */ 119 119 120 - static int __init 120 + static int 121 121 ruffian_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 122 122 { 123 - static char irq_tab[11][5] __initdata = { 123 + static char irq_tab[11][5] = { 124 124 /*INT INTA INTB INTC INTD */ 125 125 {-1, -1, -1, -1, -1}, /* IdSel 13, 21052 */ 126 126 {-1, -1, -1, -1, -1}, /* IdSel 14, SIO */ ··· 139 139 return COMMON_TABLE_LOOKUP; 140 140 } 141 141 142 - static u8 __init 142 + static u8 143 143 ruffian_swizzle(struct pci_dev *dev, u8 *pinp) 144 144 { 145 145 int slot, pin = *pinp;
+2 -2
arch/alpha/kernel/sys_rx164.c
··· 142 142 * 143 143 */ 144 144 145 - static int __init 145 + static int 146 146 rx164_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 147 147 { 148 148 #if 0 ··· 156 156 { 16+1, 16+1, 16+6, 16+11, 16+16}, /* IdSel 10, slot 4 */ 157 157 }; 158 158 #else 159 - static char irq_tab[6][5] __initdata = { 159 + static char irq_tab[6][5] = { 160 160 /*INT INTA INTB INTC INTD */ 161 161 { 16+0, 16+0, 16+6, 16+11, 16+16}, /* IdSel 5, slot 0 */ 162 162 { 16+1, 16+1, 16+7, 16+12, 16+17}, /* IdSel 6, slot 1 */
+5 -5
arch/alpha/kernel/sys_sable.c
··· 192 192 * with the values in the irq swizzling tables above. 193 193 */ 194 194 195 - static int __init 195 + static int 196 196 sable_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 197 197 { 198 - static char irq_tab[9][5] __initdata = { 198 + static char irq_tab[9][5] = { 199 199 /*INT INTA INTB INTC INTD */ 200 200 { 32+0, 32+0, 32+0, 32+0, 32+0}, /* IdSel 0, TULIP */ 201 201 { 32+1, 32+1, 32+1, 32+1, 32+1}, /* IdSel 1, SCSI */ ··· 374 374 * with the values in the irq swizzling tables above. 375 375 */ 376 376 377 - static int __init 377 + static int 378 378 lynx_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 379 379 { 380 - static char irq_tab[19][5] __initdata = { 380 + static char irq_tab[19][5] = { 381 381 /*INT INTA INTB INTC INTD */ 382 382 { -1, -1, -1, -1, -1}, /* IdSel 13, PCEB */ 383 383 { -1, -1, -1, -1, -1}, /* IdSel 14, PPB */ ··· 404 404 return COMMON_TABLE_LOOKUP; 405 405 } 406 406 407 - static u8 __init 407 + static u8 408 408 lynx_swizzle(struct pci_dev *dev, u8 *pinp) 409 409 { 410 410 int slot, pin = *pinp;
+4 -4
arch/alpha/kernel/sys_sio.c
··· 144 144 outb((level_bits >> 8) & 0xff, 0x4d1); 145 145 } 146 146 147 - static inline int __init 147 + static inline int 148 148 noname_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 149 149 { 150 150 /* ··· 165 165 * that they use the default INTA line, if they are interrupt 166 166 * driven at all). 167 167 */ 168 - static char irq_tab[][5] __initdata = { 168 + static char irq_tab[][5] = { 169 169 /*INT A B C D */ 170 170 { 3, 3, 3, 3, 3}, /* idsel 6 (53c810) */ 171 171 {-1, -1, -1, -1, -1}, /* idsel 7 (SIO: PCI/ISA bridge) */ ··· 183 183 return irq >= 0 ? tmp : -1; 184 184 } 185 185 186 - static inline int __init 186 + static inline int 187 187 p2k_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 188 188 { 189 - static char irq_tab[][5] __initdata = { 189 + static char irq_tab[][5] = { 190 190 /*INT A B C D */ 191 191 { 0, 0, -1, -1, -1}, /* idsel 6 (53c810) */ 192 192 {-1, -1, -1, -1, -1}, /* idsel 7 (SIO: PCI/ISA bridge) */
+2 -2
arch/alpha/kernel/sys_sx164.c
··· 94 94 * 9 32 bit PCI option slot 3 95 95 */ 96 96 97 - static int __init 97 + static int 98 98 sx164_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 99 99 { 100 - static char irq_tab[5][5] __initdata = { 100 + static char irq_tab[5][5] = { 101 101 /*INT INTA INTB INTC INTD */ 102 102 { 16+ 9, 16+ 9, 16+13, 16+17, 16+21}, /* IdSel 5 slot 2 J17 */ 103 103 { 16+11, 16+11, 16+15, 16+19, 16+23}, /* IdSel 6 slot 0 J19 */
+3 -3
arch/alpha/kernel/sys_takara.c
··· 155 155 * assign it whatever the hell IRQ we like and it doesn't matter. 156 156 */ 157 157 158 - static int __init 158 + static int 159 159 takara_map_irq_srm(const struct pci_dev *dev, u8 slot, u8 pin) 160 160 { 161 - static char irq_tab[15][5] __initdata = { 161 + static char irq_tab[15][5] = { 162 162 { 16+3, 16+3, 16+3, 16+3, 16+3}, /* slot 6 == device 3 */ 163 163 { 16+2, 16+2, 16+2, 16+2, 16+2}, /* slot 7 == device 2 */ 164 164 { 16+1, 16+1, 16+1, 16+1, 16+1}, /* slot 8 == device 1 */ ··· 210 210 return COMMON_TABLE_LOOKUP; 211 211 } 212 212 213 - static u8 __init 213 + static u8 214 214 takara_swizzle(struct pci_dev *dev, u8 *pinp) 215 215 { 216 216 int slot = PCI_SLOT(dev->devfn);
+2 -2
arch/alpha/kernel/sys_wildfire.c
··· 288 288 * 7 64 bit PCI 1 option slot 7 289 289 */ 290 290 291 - static int __init 291 + static int 292 292 wildfire_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 293 293 { 294 - static char irq_tab[8][5] __initdata = { 294 + static char irq_tab[8][5] = { 295 295 /*INT INTA INTB INTC INTD */ 296 296 { -1, -1, -1, -1, -1}, /* IdSel 0 ISA Bridge */ 297 297 { 36, 36, 36+1, 36+2, 36+3}, /* IdSel 1 SCSI builtin */
+6 -5
arch/arc/boot/dts/hsdk.dts
··· 137 137 /* 138 138 * DW sdio controller has external ciu clock divider 139 139 * controlled via register in SDIO IP. Due to its 140 - * unexpected default value (it should devide by 1 141 - * but it devides by 8) SDIO IP uses wrong clock and 140 + * unexpected default value (it should divide by 1 141 + * but it divides by 8) SDIO IP uses wrong clock and 142 142 * works unstable (see STAR 9001204800) 143 + * We switched to the minimum possible value of the 144 + * divisor (div-by-2) in HSDK platform code. 143 145 * So add temporary fix and change clock frequency 144 - * from 100000000 to 12500000 Hz until we fix dw sdio 145 - * driver itself. 146 + * to 50000000 Hz until we fix dw sdio driver itself. 146 147 */ 147 - clock-frequency = <12500000>; 148 + clock-frequency = <50000000>; 148 149 #clock-cells = <0>; 149 150 }; 150 151
-1
arch/arc/configs/hsdk_defconfig
··· 63 63 CONFIG_MMC_SDHCI_PLTFM=y 64 64 CONFIG_MMC_DW=y 65 65 # CONFIG_IOMMU_SUPPORT is not set 66 - CONFIG_RESET_HSDK=y 67 66 CONFIG_EXT3_FS=y 68 67 CONFIG_VFAT_FS=y 69 68 CONFIG_TMPFS=y
+5
arch/arc/kernel/smp.c
··· 23 23 #include <linux/cpumask.h> 24 24 #include <linux/reboot.h> 25 25 #include <linux/irqdomain.h> 26 + #include <linux/export.h> 27 + 26 28 #include <asm/processor.h> 27 29 #include <asm/setup.h> 28 30 #include <asm/mach_desc.h> ··· 32 30 #ifndef CONFIG_ARC_HAS_LLSC 33 31 arch_spinlock_t smp_atomic_ops_lock = __ARCH_SPIN_LOCK_UNLOCKED; 34 32 arch_spinlock_t smp_bitops_lock = __ARCH_SPIN_LOCK_UNLOCKED; 33 + 34 + EXPORT_SYMBOL_GPL(smp_atomic_ops_lock); 35 + EXPORT_SYMBOL_GPL(smp_bitops_lock); 35 36 #endif 36 37 37 38 struct plat_smp_ops __weak plat_smp_ops;
+1
arch/arc/plat-hsdk/Kconfig
··· 8 8 menuconfig ARC_SOC_HSDK 9 9 bool "ARC HS Development Kit SOC" 10 10 select CLK_HSDK 11 + select RESET_HSDK
+10
arch/arc/plat-hsdk/platform.c
··· 74 74 pr_err("Failed to setup CPU frequency to 1GHz!"); 75 75 } 76 76 77 + #define SDIO_BASE (ARC_PERIPHERAL_BASE + 0xA000) 78 + #define SDIO_UHS_REG_EXT (SDIO_BASE + 0x108) 79 + #define SDIO_UHS_REG_EXT_DIV_2 (2 << 30) 80 + 77 81 static void __init hsdk_init_early(void) 78 82 { 79 83 /* ··· 92 88 93 89 /* Really apply settings made above */ 94 90 writel(1, (void __iomem *) CREG_PAE_UPDATE); 91 + 92 + /* 93 + * Switch SDIO external ciu clock divider from default div-by-8 to 94 + * minimum possible div-by-2. 95 + */ 96 + iowrite32(SDIO_UHS_REG_EXT_DIV_2, (void __iomem *) SDIO_UHS_REG_EXT); 95 97 96 98 /* 97 99 * Setup CPU frequency to 1GHz.
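The divisor encoding and the resulting clock can be sanity-checked in isolation. A hedged C sketch, with the field position taken from the diff above and the 100 MHz input clock from the hsdk.dts comment:

```c
#include <stdint.h>

/* Hedged sketch of the SDIO ciu divider change above: the divisor is
 * encoded in the top bits of UHS_REG_EXT (field position assumed from
 * the diff), and the ciu clock is the 100 MHz input divided by it. */
#define SDIO_UHS_REG_EXT_DIV(d)  ((uint32_t)(d) << 30)

static unsigned int ciu_clock_hz(unsigned int input_hz, unsigned int div)
{
	return input_hz / div;
}
```

Div-by-2 of the 100 MHz input yields the 50 MHz now declared in hsdk.dts; the old default div-by-8 gave the 12.5 MHz being removed.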
+1 -1
arch/arm/xen/p2m.c
··· 1 1 #include <linux/bootmem.h> 2 2 #include <linux/gfp.h> 3 3 #include <linux/export.h> 4 - #include <linux/rwlock.h> 4 + #include <linux/spinlock.h> 5 5 #include <linux/slab.h> 6 6 #include <linux/types.h> 7 7 #include <linux/dma-mapping.h>
+14 -9
arch/powerpc/kvm/book3s_64_vio.c
··· 478 478 return ret; 479 479 480 480 dir = iommu_tce_direction(tce); 481 + 482 + idx = srcu_read_lock(&vcpu->kvm->srcu); 483 + 481 484 if ((dir != DMA_NONE) && kvmppc_gpa_to_ua(vcpu->kvm, 482 - tce & ~(TCE_PCI_READ | TCE_PCI_WRITE), &ua, NULL)) 483 - return H_PARAMETER; 485 + tce & ~(TCE_PCI_READ | TCE_PCI_WRITE), &ua, NULL)) { 486 + ret = H_PARAMETER; 487 + goto unlock_exit; 488 + } 484 489 485 490 entry = ioba >> stt->page_shift; 486 491 487 492 list_for_each_entry_lockless(stit, &stt->iommu_tables, next) { 488 - if (dir == DMA_NONE) { 493 + if (dir == DMA_NONE) 489 494 ret = kvmppc_tce_iommu_unmap(vcpu->kvm, 490 495 stit->tbl, entry); 491 - } else { 492 - idx = srcu_read_lock(&vcpu->kvm->srcu); 496 + else 493 497 ret = kvmppc_tce_iommu_map(vcpu->kvm, stit->tbl, 494 498 entry, ua, dir); 495 - srcu_read_unlock(&vcpu->kvm->srcu, idx); 496 - } 497 499 498 500 if (ret == H_SUCCESS) 499 501 continue; 500 502 501 503 if (ret == H_TOO_HARD) 502 - return ret; 504 + goto unlock_exit; 503 505 504 506 WARN_ON_ONCE(1); 505 507 kvmppc_clear_tce(stit->tbl, entry); ··· 509 507 510 508 kvmppc_tce_put(stt, entry, tce); 511 509 512 - return H_SUCCESS; 510 + unlock_exit: 511 + srcu_read_unlock(&vcpu->kvm->srcu, idx); 512 + 513 + return ret; 513 514 } 514 515 EXPORT_SYMBOL_GPL(kvmppc_h_put_tce); 515 516
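The restructuring above is the classic single-exit lock pattern: take srcu_read_lock() once before any bailout is possible and funnel every path, including the error returns, through one unlock label. A toy C sketch of the pattern; the lock counter and error codes are stand-ins, not kernel API:

```c
#include <stdbool.h>

/* Toy stand-ins for srcu_read_lock/unlock: just count nesting. */
static int srcu_nesting;
static void toy_lock(void)   { ++srcu_nesting; }
static void toy_unlock(void) { --srcu_nesting; }

/* Sketch of the single-exit pattern: lock once up front, and every
 * return, success or error, goes through unlock_exit. */
static int toy_put_tce(bool translate_fails, bool map_fails)
{
	int ret = 0;

	toy_lock();

	if (translate_fails) {
		ret = -1;		/* H_PARAMETER in the real code */
		goto unlock_exit;
	}
	if (map_fails) {
		ret = -2;		/* H_TOO_HARD in the real code */
		goto unlock_exit;
	}

unlock_exit:
	toy_unlock();
	return ret;
}
```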
+10 -3
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 989 989 beq no_xive 990 990 ld r11, VCPU_XIVE_SAVED_STATE(r4) 991 991 li r9, TM_QW1_OS 992 - stdcix r11,r9,r10 993 992 eieio 993 + stdcix r11,r9,r10 994 994 lwz r11, VCPU_XIVE_CAM_WORD(r4) 995 995 li r9, TM_QW1_OS + TM_WORD2 996 996 stwcix r11,r9,r10 997 997 li r9, 1 998 998 stw r9, VCPU_XIVE_PUSHED(r4) 999 + eieio 999 1000 no_xive: 1000 1001 #endif /* CONFIG_KVM_XICS */ 1001 1002 ··· 1311 1310 bne 3f 1312 1311 BEGIN_FTR_SECTION 1313 1312 PPC_MSGSYNC 1313 + lwsync 1314 1314 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) 1315 1315 lbz r0, HSTATE_HOST_IPI(r13) 1316 1316 cmpwi r0, 0 ··· 1402 1400 cmpldi cr0, r10, 0 1403 1401 beq 1f 1404 1402 /* First load to pull the context, we ignore the value */ 1405 - lwzx r11, r7, r10 1406 1403 eieio 1404 + lwzx r11, r7, r10 1407 1405 /* Second load to recover the context state (Words 0 and 1) */ 1408 1406 ldx r11, r6, r10 1409 1407 b 3f ··· 1411 1409 cmpldi cr0, r10, 0 1412 1410 beq 1f 1413 1411 /* First load to pull the context, we ignore the value */ 1414 - lwzcix r11, r7, r10 1415 1412 eieio 1413 + lwzcix r11, r7, r10 1416 1414 /* Second load to recover the context state (Words 0 and 1) */ 1417 1415 ldcix r11, r6, r10 1418 1416 3: std r11, VCPU_XIVE_SAVED_STATE(r9) ··· 1422 1420 stw r10, VCPU_XIVE_PUSHED(r9) 1423 1421 stb r10, (VCPU_XIVE_SAVED_STATE+3)(r9) 1424 1422 stb r0, (VCPU_XIVE_SAVED_STATE+4)(r9) 1423 + eieio 1425 1424 1: 1426 1425 #endif /* CONFIG_KVM_XICS */ 1427 1426 /* Save more register state */ ··· 2791 2788 PPC_MSGCLR(6) 2792 2789 /* see if it's a host IPI */ 2793 2790 li r3, 1 2791 + BEGIN_FTR_SECTION 2792 + PPC_MSGSYNC 2793 + lwsync 2794 + END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) 2794 2795 lbz r0, HSTATE_HOST_IPI(r13) 2795 2796 cmpwi r0, 0 2796 2797 bnelr
+1 -2
arch/powerpc/kvm/powerpc.c
··· 644 644 break; 645 645 #endif 646 646 case KVM_CAP_PPC_HTM: 647 - r = cpu_has_feature(CPU_FTR_TM_COMP) && 648 - is_kvmppc_hv_enabled(kvm); 647 + r = cpu_has_feature(CPU_FTR_TM_COMP) && hv_enabled; 649 648 break; 650 649 default: 651 650 r = 0;
+5 -2
arch/s390/kernel/entry.S
··· 521 521 tmhh %r8,0x0001 # test problem state bit 522 522 jnz 2f # -> fault in user space 523 523 #if IS_ENABLED(CONFIG_KVM) 524 - # cleanup critical section for sie64a 524 + # cleanup critical section for program checks in sie64a 525 525 lgr %r14,%r9 526 526 slg %r14,BASED(.Lsie_critical_start) 527 527 clg %r14,BASED(.Lsie_critical_length) 528 528 jhe 0f 529 - brasl %r14,.Lcleanup_sie 529 + lg %r14,__SF_EMPTY(%r15) # get control block pointer 530 + ni __SIE_PROG0C+3(%r14),0xfe # no longer in SIE 531 + lctlg %c1,%c1,__LC_USER_ASCE # load primary asce 532 + larl %r9,sie_exit # skip forward to sie_exit 530 533 #endif 531 534 0: tmhh %r8,0x4000 # PER bit set in old PSW ? 532 535 jnz 1f # -> enabled, can't be a double fault
+1 -1
arch/x86/entry/entry_64.S
··· 808 808 809 809 .macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1 810 810 ENTRY(\sym) 811 - UNWIND_HINT_IRET_REGS offset=8 811 + UNWIND_HINT_IRET_REGS offset=\has_error_code*8 812 812 813 813 /* Sanity check */ 814 814 .if \shift_ist != -1 && \paranoid == 0
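The fix keys the unwind hint off whether the CPU pushed an error code for the vector: with no error code the IRET frame starts at the current stack pointer, with one it sits 8 bytes higher. A trivial sketch of the computation:

```c
/* The IRET frame sits 8 bytes further up the stack when the CPU pushed
 * an error code, which is what offset=\has_error_code*8 in the
 * idtentry macro encodes. */
static int iret_regs_offset(int has_error_code)
{
	return has_error_code * 8;
}
```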
+3 -3
arch/x86/events/intel/bts.c
··· 546 546 if (event->attr.type != bts_pmu.type) 547 547 return -ENOENT; 548 548 549 - if (x86_add_exclusive(x86_lbr_exclusive_bts)) 550 - return -EBUSY; 551 - 552 549 /* 553 550 * BTS leaks kernel addresses even when CPL0 tracing is 554 551 * disabled, so disallow intel_bts driver for unprivileged ··· 558 561 if (event->attr.exclude_kernel && perf_paranoid_kernel() && 559 562 !capable(CAP_SYS_ADMIN)) 560 563 return -EACCES; 564 + 565 + if (x86_add_exclusive(x86_lbr_exclusive_bts)) 566 + return -EBUSY; 561 567 562 568 ret = x86_reserve_hardware(); 563 569 if (ret) {
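The reorder matters because the permission check's early return would otherwise leak the x86_add_exclusive() reservation. A toy sketch of the validate-before-reserve ordering; the counter and error value are stand-ins:

```c
#include <stdbool.h>

/* Toy model of the bts_event_init() reorder above: validate
 * permissions before taking the exclusive reservation, so an -EACCES
 * exit can no longer leak the reservation. */
static int exclusive_taken;

static int toy_event_init(bool allowed)
{
	if (!allowed)
		return -13;		/* -EACCES */

	exclusive_taken++;		/* x86_add_exclusive() stand-in */
	return 0;
}
```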
+15 -6
arch/x86/include/asm/tlbflush.h
··· 82 82 #define __flush_tlb_single(addr) __native_flush_tlb_single(addr) 83 83 #endif 84 84 85 - /* 86 - * If tlb_use_lazy_mode is true, then we try to avoid switching CR3 to point 87 - * to init_mm when we switch to a kernel thread (e.g. the idle thread). If 88 - * it's false, then we immediately switch CR3 when entering a kernel thread. 89 - */ 90 - DECLARE_STATIC_KEY_TRUE(tlb_use_lazy_mode); 85 + static inline bool tlb_defer_switch_to_init_mm(void) 86 + { 87 + /* 88 + * If we have PCID, then switching to init_mm is reasonably 89 + * fast. If we don't have PCID, then switching to init_mm is 90 + * quite slow, so we try to defer it in the hopes that we can 91 + * avoid it entirely. The latter approach runs the risk of 92 + * receiving otherwise unnecessary IPIs. 93 + * 94 + * This choice is just a heuristic. The tlb code can handle this 95 + * function returning true or false regardless of whether we have 96 + * PCID. 97 + */ 98 + return !static_cpu_has(X86_FEATURE_PCID); 99 + } 91 100 92 101 /* 93 102 * 6 because 6 should be plenty and struct tlb_state will fit in
+41
arch/x86/kernel/amd_nb.c
··· 27 27 {} 28 28 }; 29 29 30 + #define PCI_DEVICE_ID_AMD_CNB17H_F4 0x1704 31 + 30 32 const struct pci_device_id amd_nb_misc_ids[] = { 31 33 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_K8_NB_MISC) }, 32 34 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_10H_NB_MISC) }, ··· 39 37 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_NB_F3) }, 40 38 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F3) }, 41 39 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_DF_F3) }, 40 + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F3) }, 42 41 {} 43 42 }; 44 43 EXPORT_SYMBOL_GPL(amd_nb_misc_ids); ··· 51 48 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_NB_F4) }, 52 49 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F4) }, 53 50 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_17H_DF_F4) }, 51 + { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CNB17H_F4) }, 54 52 {} 55 53 }; 56 54 ··· 406 402 } 407 403 EXPORT_SYMBOL_GPL(amd_flush_garts); 408 404 405 + static void __fix_erratum_688(void *info) 406 + { 407 + #define MSR_AMD64_IC_CFG 0xC0011021 408 + 409 + msr_set_bit(MSR_AMD64_IC_CFG, 3); 410 + msr_set_bit(MSR_AMD64_IC_CFG, 14); 411 + } 412 + 413 + /* Apply erratum 688 fix so machines without a BIOS fix work. */ 414 + static __init void fix_erratum_688(void) 415 + { 416 + struct pci_dev *F4; 417 + u32 val; 418 + 419 + if (boot_cpu_data.x86 != 0x14) 420 + return; 421 + 422 + if (!amd_northbridges.num) 423 + return; 424 + 425 + F4 = node_to_amd_nb(0)->link; 426 + if (!F4) 427 + return; 428 + 429 + if (pci_read_config_dword(F4, 0x164, &val)) 430 + return; 431 + 432 + if (val & BIT(2)) 433 + return; 434 + 435 + on_each_cpu(__fix_erratum_688, NULL, 0); 436 + 437 + pr_info("x86/cpu/AMD: CPU erratum 688 worked around\n"); 438 + } 439 + 409 440 static __init int init_amd_nbs(void) 410 441 { 411 442 amd_cache_northbridges(); 412 443 amd_cache_gart(); 444 + 445 + fix_erratum_688(); 413 446 414 447 return 0; 415 448 }
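The erratum 688 workaround sets two bits in MSR_AMD64_IC_CFG on every CPU. A sketch of the combined mask the two msr_set_bit() calls produce; the helper name is this sketch's own:

```c
#include <stdint.h>

/* What the two msr_set_bit() calls in __fix_erratum_688() amount to:
 * OR bits 3 and 14 into MSR_AMD64_IC_CFG. */
static uint64_t erratum_688_apply(uint64_t ic_cfg)
{
	return ic_cfg | (1ULL << 3) | (1ULL << 14);
}
```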
-1
arch/x86/kernel/cpu/intel_cacheinfo.c
··· 831 831 } else if (boot_cpu_has(X86_FEATURE_TOPOEXT)) { 832 832 unsigned int apicid, nshared, first, last; 833 833 834 - this_leaf = this_cpu_ci->info_list + index; 835 834 nshared = base->eax.split.num_threads_sharing + 1; 836 835 apicid = cpu_data(cpu).apicid; 837 836 first = apicid - (apicid % nshared);
+19
arch/x86/kernel/cpu/microcode/intel.c
··· 34 34 #include <linux/mm.h> 35 35 36 36 #include <asm/microcode_intel.h> 37 + #include <asm/intel-family.h> 37 38 #include <asm/processor.h> 38 39 #include <asm/tlbflush.h> 39 40 #include <asm/setup.h> ··· 919 918 return 0; 920 919 } 921 920 921 + static bool is_blacklisted(unsigned int cpu) 922 + { 923 + struct cpuinfo_x86 *c = &cpu_data(cpu); 924 + 925 + if (c->x86 == 6 && c->x86_model == INTEL_FAM6_BROADWELL_X) { 926 + pr_err_once("late loading on model 79 is disabled.\n"); 927 + return true; 928 + } 929 + 930 + return false; 931 + } 932 + 922 933 static enum ucode_state request_microcode_fw(int cpu, struct device *device, 923 934 bool refresh_fw) 924 935 { ··· 938 925 struct cpuinfo_x86 *c = &cpu_data(cpu); 939 926 const struct firmware *firmware; 940 927 enum ucode_state ret; 928 + 929 + if (is_blacklisted(cpu)) 930 + return UCODE_NFOUND; 941 931 942 932 sprintf(name, "intel-ucode/%02x-%02x-%02x", 943 933 c->x86, c->x86_model, c->x86_mask); ··· 966 950 static enum ucode_state 967 951 request_microcode_user(int cpu, const void __user *buf, size_t size) 968 952 { 953 + if (is_blacklisted(cpu)) 954 + return UCODE_NFOUND; 955 + 969 956 return generic_load_microcode(cpu, (void *)buf, size, &get_ucode_user); 970 957 } 971 958
+3 -2
arch/x86/kernel/head32.c
··· 30 30 31 31 asmlinkage __visible void __init i386_start_kernel(void) 32 32 { 33 - cr4_init_shadow(); 34 - 33 + /* Make sure IDT is set up before any exception happens */ 35 34 idt_setup_early_handler(); 35 + 36 + cr4_init_shadow(); 36 37 37 38 sanitize_boot_params(&boot_params); 38 39
+15 -14
arch/x86/kernel/unwind_orc.c
··· 86 86 idx = (ip - LOOKUP_START_IP) / LOOKUP_BLOCK_SIZE; 87 87 88 88 if (unlikely((idx >= lookup_num_blocks-1))) { 89 - orc_warn("WARNING: bad lookup idx: idx=%u num=%u ip=%lx\n", 90 - idx, lookup_num_blocks, ip); 89 + orc_warn("WARNING: bad lookup idx: idx=%u num=%u ip=%pB\n", 90 + idx, lookup_num_blocks, (void *)ip); 91 91 return NULL; 92 92 } 93 93 ··· 96 96 97 97 if (unlikely((__start_orc_unwind + start >= __stop_orc_unwind) || 98 98 (__start_orc_unwind + stop > __stop_orc_unwind))) { 99 - orc_warn("WARNING: bad lookup value: idx=%u num=%u start=%u stop=%u ip=%lx\n", 100 - idx, lookup_num_blocks, start, stop, ip); 99 + orc_warn("WARNING: bad lookup value: idx=%u num=%u start=%u stop=%u ip=%pB\n", 100 + idx, lookup_num_blocks, start, stop, (void *)ip); 101 101 return NULL; 102 102 } 103 103 ··· 373 373 374 374 case ORC_REG_R10: 375 375 if (!state->regs || !state->full_regs) { 376 - orc_warn("missing regs for base reg R10 at ip %p\n", 376 + orc_warn("missing regs for base reg R10 at ip %pB\n", 377 377 (void *)state->ip); 378 378 goto done; 379 379 } ··· 382 382 383 383 case ORC_REG_R13: 384 384 if (!state->regs || !state->full_regs) { 385 - orc_warn("missing regs for base reg R13 at ip %p\n", 385 + orc_warn("missing regs for base reg R13 at ip %pB\n", 386 386 (void *)state->ip); 387 387 goto done; 388 388 } ··· 391 391 392 392 case ORC_REG_DI: 393 393 if (!state->regs || !state->full_regs) { 394 - orc_warn("missing regs for base reg DI at ip %p\n", 394 + orc_warn("missing regs for base reg DI at ip %pB\n", 395 395 (void *)state->ip); 396 396 goto done; 397 397 } ··· 400 400 401 401 case ORC_REG_DX: 402 402 if (!state->regs || !state->full_regs) { 403 - orc_warn("missing regs for base reg DX at ip %p\n", 403 + orc_warn("missing regs for base reg DX at ip %pB\n", 404 404 (void *)state->ip); 405 405 goto done; 406 406 } ··· 408 408 break; 409 409 410 410 default: 411 - orc_warn("unknown SP base reg %d for ip %p\n", 411 + orc_warn("unknown SP base reg %d for ip %pB\n", 412 412 orc->sp_reg, (void *)state->ip); 413 413 goto done; 414 414 } ··· 436 436 437 437 case ORC_TYPE_REGS: 438 438 if (!deref_stack_regs(state, sp, &state->ip, &state->sp, true)) { 439 - orc_warn("can't dereference registers at %p for ip %p\n", 439 + orc_warn("can't dereference registers at %p for ip %pB\n", 440 440 (void *)sp, (void *)orig_ip); 441 441 goto done; 442 442 } ··· 448 448 449 449 case ORC_TYPE_REGS_IRET: 450 450 if (!deref_stack_regs(state, sp, &state->ip, &state->sp, false)) { 451 - orc_warn("can't dereference iret registers at %p for ip %p\n", 451 + orc_warn("can't dereference iret registers at %p for ip %pB\n", 452 452 (void *)sp, (void *)orig_ip); 453 453 goto done; 454 454 } ··· 465 465 break; 466 466 467 467 default: 468 - orc_warn("unknown .orc_unwind entry type %d\n", orc->type); 468 + orc_warn("unknown .orc_unwind entry type %d for ip %pB\n", 469 + orc->type, (void *)orig_ip); 469 470 break; 470 471 } ··· 488 487 break; 489 488 490 489 default: 491 - orc_warn("unknown BP base reg %d for ip %p\n", 490 + orc_warn("unknown BP base reg %d for ip %pB\n", 492 491 orc->bp_reg, (void *)orig_ip); 493 492 goto done; 494 493 } ··· 497 496 if (state->stack_info.type == prev_type && 498 497 on_stack(&state->stack_info, (void *)state->sp, sizeof(long)) && 499 498 state->sp <= prev_sp) { 500 - orc_warn("stack going in the wrong direction? ip=%p\n", 499 + orc_warn("stack going in the wrong direction? ip=%pB\n", 501 500 (void *)orig_ip); 502 501 goto done; 503 502 }
+6 -58
arch/x86/mm/tlb.c
··· 30 30 31 31 atomic64_t last_mm_ctx_id = ATOMIC64_INIT(1); 32 32 33 - DEFINE_STATIC_KEY_TRUE(tlb_use_lazy_mode); 34 33 35 34 static void choose_new_asid(struct mm_struct *next, u64 next_tlb_gen, 36 35 u16 *new_asid, bool *need_flush) ··· 146 147 this_cpu_write(cpu_tlbstate.is_lazy, false); 147 148 148 149 if (real_prev == next) { 149 - VM_BUG_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) != 150 - next->context.ctx_id); 150 + VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) != 151 + next->context.ctx_id); 151 152 152 153 /* 153 154 * We don't currently support having a real mm loaded without ··· 212 213 } 213 214 214 215 /* 216 + * Please ignore the name of this function. It should be called 217 + * switch_to_kernel_thread(). 218 + * 215 219 * enter_lazy_tlb() is a hint from the scheduler that we are entering a 216 220 * kernel thread or other context without an mm. Acceptable implementations 217 221 * include doing nothing whatsoever, switching to init_mm, or various clever ··· 229 227 if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm) 230 228 return; 231 229 232 - if (static_branch_unlikely(&tlb_use_lazy_mode)) { 230 + if (tlb_defer_switch_to_init_mm()) { 233 231 /* 234 232 * There's a significant optimization that may be possible 235 233 * here. We have accurate enough TLB flush tracking that we ··· 628 626 return 0; 629 627 } 630 628 late_initcall(create_tlb_single_page_flush_ceiling); 631 - 632 - static ssize_t tlblazy_read_file(struct file *file, char __user *user_buf, 633 - size_t count, loff_t *ppos) 634 - { 635 - char buf[2]; 636 - 637 - buf[0] = static_branch_likely(&tlb_use_lazy_mode) ? '1' : '0'; 638 - buf[1] = '\n'; 639 - 640 - return simple_read_from_buffer(user_buf, count, ppos, buf, 2); 641 - } 642 - 643 - static ssize_t tlblazy_write_file(struct file *file, 644 - const char __user *user_buf, size_t count, loff_t *ppos) 645 - { 646 - bool val; 647 - 648 - if (kstrtobool_from_user(user_buf, count, &val)) 649 - return -EINVAL; 650 - 651 - if (val) 652 - static_branch_enable(&tlb_use_lazy_mode); 653 - else 654 - static_branch_disable(&tlb_use_lazy_mode); 655 - 656 - return count; 657 - } 658 - 659 - static const struct file_operations fops_tlblazy = { 660 - .read = tlblazy_read_file, 661 - .write = tlblazy_write_file, 662 - .llseek = default_llseek, 663 - }; 664 - 665 - static int __init init_tlb_use_lazy_mode(void) 666 - { 667 - if (boot_cpu_has(X86_FEATURE_PCID)) { 668 - /* 669 - * Heuristic: with PCID on, switching to and from 670 - * init_mm is reasonably fast, but remote flush IPIs 671 - * as expensive as ever, so turn off lazy TLB mode. 672 - * 673 - * We can't do this in setup_pcid() because static keys 674 - * haven't been initialized yet, and it would blow up 675 - * badly. 676 - */ 677 - static_branch_disable(&tlb_use_lazy_mode); 678 - } 679 - 680 - debugfs_create_file("tlb_use_lazy_mode", S_IRUSR | S_IWUSR, 681 - arch_debugfs_dir, NULL, &fops_tlblazy); 682 - return 0; 683 - } 684 - late_initcall(init_tlb_use_lazy_mode);
+1 -10
drivers/android/binder.c
··· 3662 3662 } 3663 3663 } 3664 3664 3665 - static int binder_has_thread_work(struct binder_thread *thread) 3666 - { 3667 - return !binder_worklist_empty(thread->proc, &thread->todo) || 3668 - thread->looper_need_return; 3669 - } 3670 - 3671 3665 static int binder_put_node_cmd(struct binder_proc *proc, 3672 3666 struct binder_thread *thread, 3673 3667 void __user **ptrp, ··· 4291 4297 4292 4298 binder_inner_proc_unlock(thread->proc); 4293 4299 4294 - if (binder_has_work(thread, wait_for_proc_work)) 4295 - return POLLIN; 4296 - 4297 4300 poll_wait(filp, &thread->wait, wait); 4298 4301 4299 - if (binder_has_thread_work(thread)) 4302 + if (binder_has_work(thread, wait_for_proc_work)) 4300 4303 return POLLIN; 4301 4304 4302 4305 return 0;
+10 -14
drivers/android/binder_alloc.c
··· 215 215 } 216 216 } 217 217 218 - if (!vma && need_mm) 219 - mm = get_task_mm(alloc->tsk); 218 + if (!vma && need_mm && mmget_not_zero(alloc->vma_vm_mm)) 219 + mm = alloc->vma_vm_mm; 220 220 221 221 if (mm) { 222 222 down_write(&mm->mmap_sem); 223 223 vma = alloc->vma; 224 - if (vma && mm != alloc->vma_vm_mm) { 225 - pr_err("%d: vma mm and task mm mismatch\n", 226 - alloc->pid); 227 - vma = NULL; 228 - } 229 224 } 230 225 231 226 if (!vma && need_mm) { ··· 560 565 binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, 561 566 "%d: merge free, buffer %pK do not share page with %pK or %pK\n", 562 567 alloc->pid, buffer->data, 563 - prev->data, next->data); 568 + prev->data, next ? next->data : NULL); 564 569 binder_update_page_range(alloc, 0, buffer_start_page(buffer), 565 570 buffer_start_page(buffer) + PAGE_SIZE, 566 571 NULL); ··· 715 720 barrier(); 716 721 alloc->vma = vma; 717 722 alloc->vma_vm_mm = vma->vm_mm; 723 + mmgrab(alloc->vma_vm_mm); 718 724 719 725 return 0; 720 726 ··· 791 795 vfree(alloc->buffer); 792 796 } 793 797 mutex_unlock(&alloc->mutex); 798 + if (alloc->vma_vm_mm) 799 + mmdrop(alloc->vma_vm_mm); 794 800 795 801 binder_alloc_debug(BINDER_DEBUG_OPEN_CLOSE, 796 802 "%s: %d buffers %d, pages %d\n", ··· 887 889 void binder_alloc_vma_close(struct binder_alloc *alloc) 888 890 { 889 891 WRITE_ONCE(alloc->vma, NULL); 890 - WRITE_ONCE(alloc->vma_vm_mm, NULL); 891 892 } 892 893 893 894 /** ··· 923 926 page_addr = (uintptr_t)alloc->buffer + index * PAGE_SIZE; 924 927 vma = alloc->vma; 925 928 if (vma) { 926 - mm = get_task_mm(alloc->tsk); 927 - if (!mm) 928 - goto err_get_task_mm_failed; 929 + if (!mmget_not_zero(alloc->vma_vm_mm)) 930 + goto err_mmget; 931 + mm = alloc->vma_vm_mm; 929 932 if (!down_write_trylock(&mm->mmap_sem)) 930 933 goto err_down_write_mmap_sem_failed; 931 934 } ··· 960 963 961 964 err_down_write_mmap_sem_failed: 962 965 mmput_async(mm); 963 - err_get_task_mm_failed: 966 + err_mmget: 964 967 err_page_already_freed: 965 968 mutex_unlock(&alloc->mutex); 966 969 err_get_alloc_mutex_failed: ··· 999 1002 */ 1000 1003 void binder_alloc_init(struct binder_alloc *alloc) 1001 1004 { 1002 - alloc->tsk = current->group_leader; 1003 1005 alloc->pid = current->group_leader->pid; 1004 1006 mutex_init(&alloc->mutex); 1005 1007 INIT_LIST_HEAD(&alloc->buffers);
-1
drivers/android/binder_alloc.h
··· 100 100 */ 101 101 struct binder_alloc { 102 102 struct mutex mutex; 103 - struct task_struct *tsk; 104 103 struct vm_area_struct *vma; 105 104 struct mm_struct *vma_vm_mm; 106 105 void *buffer;
+2 -1
drivers/base/cpu.c
··· 377 377 378 378 per_cpu(cpu_sys_devices, num) = &cpu->dev; 379 379 register_cpu_under_node(num, cpu_to_node(num)); 380 - dev_pm_qos_expose_latency_limit(&cpu->dev, 0); 380 + dev_pm_qos_expose_latency_limit(&cpu->dev, 381 + PM_QOS_RESUME_LATENCY_NO_CONSTRAINT); 381 382 382 383 return 0; 383 384 }
+30 -23
drivers/base/power/domain_governor.c
··· 14 14 static int dev_update_qos_constraint(struct device *dev, void *data) 15 15 { 16 16 s64 *constraint_ns_p = data; 17 - s32 constraint_ns = -1; 17 + s64 constraint_ns = -1; 18 18 19 19 if (dev->power.subsys_data && dev->power.subsys_data->domain_data) 20 20 constraint_ns = dev_gpd_data(dev)->td.effective_constraint_ns; 21 21 22 - if (constraint_ns < 0) { 22 + if (constraint_ns < 0) 23 23 constraint_ns = dev_pm_qos_read_value(dev); 24 - constraint_ns *= NSEC_PER_USEC; 25 - } 26 - if (constraint_ns == 0) 24 + 25 + if (constraint_ns == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) 27 26 return 0; 28 27 29 - /* 30 - * constraint_ns cannot be negative here, because the device has been 31 - * suspended. 32 - */ 33 - if (constraint_ns < *constraint_ns_p || *constraint_ns_p == 0) 28 + constraint_ns *= NSEC_PER_USEC; 29 + 30 + if (constraint_ns < *constraint_ns_p || *constraint_ns_p < 0) 34 31 *constraint_ns_p = constraint_ns; 35 32 36 33 return 0; ··· 60 63 61 64 spin_unlock_irqrestore(&dev->power.lock, flags); 62 65 63 - if (constraint_ns < 0) 66 + if (constraint_ns == 0) 64 67 return false; 65 68 66 - constraint_ns *= NSEC_PER_USEC; 69 + if (constraint_ns == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) 70 + constraint_ns = -1; 71 + else 72 + constraint_ns *= NSEC_PER_USEC; 73 + 67 74 /* 68 75 * We can walk the children without any additional locking, because 69 76 * they all have been suspended at this point and their ··· 77 76 device_for_each_child(dev, &constraint_ns, 78 77 dev_update_qos_constraint); 79 78 80 - if (constraint_ns > 0) { 81 - constraint_ns -= td->suspend_latency_ns + 82 - td->resume_latency_ns; 83 - if (constraint_ns == 0) 84 - return false; 79 + if (constraint_ns < 0) { 80 + /* The children have no constraints. */ 81 + td->effective_constraint_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT; 82 + td->cached_suspend_ok = true; 83 + } else { 84 + constraint_ns -= td->suspend_latency_ns + td->resume_latency_ns; 85 + if (constraint_ns > 0) { 86 + td->effective_constraint_ns = constraint_ns; 87 + td->cached_suspend_ok = true; 88 + } else { 89 + td->effective_constraint_ns = 0; 90 + } 85 91 } 86 - td->effective_constraint_ns = constraint_ns; 87 - td->cached_suspend_ok = constraint_ns >= 0; 88 92 89 93 /* 90 94 * The children have been suspended already, so we don't need to take ··· 151 145 td = &to_gpd_data(pdd)->td; 152 146 constraint_ns = td->effective_constraint_ns; 153 147 /* default_suspend_ok() need not be called before us. */ 154 - if (constraint_ns < 0) { 148 + if (constraint_ns < 0) 155 149 constraint_ns = dev_pm_qos_read_value(pdd->dev); 156 - constraint_ns *= NSEC_PER_USEC; 157 - } 158 - if (constraint_ns == 0) 150 + 151 + if (constraint_ns == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) 159 152 continue; 153 + 154 + constraint_ns *= NSEC_PER_USEC; 160 155 161 156 /* 162 157 * constraint_ns cannot be negative here, because the device has
+1 -1
drivers/base/power/qos.c
··· 189 189 plist_head_init(&c->list); 190 190 c->target_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE; 191 191 c->default_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE; 192 - c->no_constraint_value = PM_QOS_RESUME_LATENCY_DEFAULT_VALUE; 192 + c->no_constraint_value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT; 193 193 c->type = PM_QOS_MIN; 194 194 c->notifiers = n; 195 195
+1 -1
drivers/base/power/runtime.c
··· 253 253 || (dev->power.request_pending 254 254 && dev->power.request == RPM_REQ_RESUME)) 255 255 retval = -EAGAIN; 256 - else if (__dev_pm_qos_read_value(dev) < 0) 256 + else if (__dev_pm_qos_read_value(dev) == 0) 257 257 retval = -EPERM; 258 258 else if (dev->power.runtime_status == RPM_SUSPENDED) 259 259 retval = 1;
+21 -4
drivers/base/power/sysfs.c
··· 218 218 struct device_attribute *attr, 219 219 char *buf) 220 220 { 221 - return sprintf(buf, "%d\n", dev_pm_qos_requested_resume_latency(dev)); 221 + s32 value = dev_pm_qos_requested_resume_latency(dev); 222 + 223 + if (value == 0) 224 + return sprintf(buf, "n/a\n"); 225 + else if (value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) 226 + value = 0; 227 + 228 + return sprintf(buf, "%d\n", value); 222 229 } 223 230 224 231 static ssize_t pm_qos_resume_latency_store(struct device *dev, ··· 235 228 s32 value; 236 229 int ret; 237 230 238 - if (kstrtos32(buf, 0, &value)) 239 - return -EINVAL; 231 + if (!kstrtos32(buf, 0, &value)) { 232 + /* 233 + * Prevent users from writing negative or "no constraint" values 234 + * directly. 235 + */ 236 + if (value < 0 || value == PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) 237 + return -EINVAL; 240 238 241 - if (value < 0) 239 + if (value == 0) 240 + value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT; 241 + } else if (!strcmp(buf, "n/a") || !strcmp(buf, "n/a\n")) { 242 + value = 0; 243 + } else { 242 244 return -EINVAL; 245 + } 243 246 244 247 ret = dev_pm_qos_update_request(dev->power.qos->resume_latency_req, 245 248 value);
+11 -2
drivers/block/nbd.c
··· 386 386 return result; 387 387 } 388 388 389 + /* 390 + * Different settings for sk->sk_sndtimeo can result in different return values 391 + * if there is a signal pending when we enter sendmsg, because reasons? 392 + */ 393 + static inline int was_interrupted(int result) 394 + { 395 + return result == -ERESTARTSYS || result == -EINTR; 396 + } 397 + 389 398 /* always call with the tx_lock held */ 390 399 static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index) 391 400 { ··· 467 458 result = sock_xmit(nbd, index, 1, &from, 468 459 (type == NBD_CMD_WRITE) ? MSG_MORE : 0, &sent); 469 460 if (result <= 0) { 470 - if (result == -ERESTARTSYS) { 461 + if (was_interrupted(result)) { 471 462 /* If we havne't sent anything we can just return BUSY, 472 463 * however if we have sent something we need to make 473 464 * sure we only allow this req to be sent until we are ··· 511 502 } 512 503 result = sock_xmit(nbd, index, 1, &from, flags, &sent); 513 504 if (result <= 0) { 514 - if (result == -ERESTARTSYS) { 505 + if (was_interrupted(result)) { 515 506 /* We've already sent the header, we 516 507 * have no choice but to set pending and 517 508 * return BUSY.
+2 -1
drivers/clocksource/cs5535-clockevt.c
··· 117 117 /* Turn off the clock (and clear the event) */ 118 118 disable_timer(cs5535_event_clock); 119 119 120 - if (clockevent_state_shutdown(&cs5535_clockevent)) 120 + if (clockevent_state_detached(&cs5535_clockevent) || 121 + clockevent_state_shutdown(&cs5535_clockevent)) 121 122 return IRQ_HANDLED; 122 123 123 124 /* Clear the counter */
+2 -2
drivers/cpuidle/governors/menu.c
··· 298 298 data->needs_update = 0; 299 299 } 300 300 301 - /* resume_latency is 0 means no restriction */ 302 - if (resume_latency && resume_latency < latency_req) 301 + if (resume_latency < latency_req && 302 + resume_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) 303 303 latency_req = resume_latency; 304 304 305 305 /* Special case when user has set very strict latency requirement */
+2 -1
drivers/firmware/efi/libstub/arm-stub.c
··· 238 238 239 239 efi_random_get_seed(sys_table); 240 240 241 - if (!nokaslr()) { 241 + /* hibernation expects the runtime regions to stay in the same place */ 242 + if (!IS_ENABLED(CONFIG_HIBERNATION) && !nokaslr()) { 242 243 /* 243 244 * Randomize the base of the UEFI runtime services region. 244 245 * Preserve the 2 MB alignment of the region by taking a
+3
drivers/firmware/efi/test/efi_test.c
··· 593 593 if (copy_from_user(&qcaps, qcaps_user, sizeof(qcaps))) 594 594 return -EFAULT; 595 595 596 + if (qcaps.capsule_count == ULONG_MAX) 597 + return -EINVAL; 598 + 596 599 capsules = kcalloc(qcaps.capsule_count + 1, 597 600 sizeof(efi_capsule_header_t), GFP_KERNEL); 598 601 if (!capsules)
+5 -11
drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
··· 225 225 if (r) 226 226 return r; 227 227 228 - /* Skip this for APU for now */ 229 - if (!(adev->flags & AMD_IS_APU)) 230 - r = amdgpu_uvd_suspend(adev); 231 - 232 - return r; 228 + return amdgpu_uvd_suspend(adev); 233 229 } 234 230 235 231 static int uvd_v6_0_resume(void *handle) ··· 233 237 int r; 234 238 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 235 239 236 - /* Skip this for APU for now */ 237 - if (!(adev->flags & AMD_IS_APU)) { 238 - r = amdgpu_uvd_resume(adev); 239 - if (r) 240 - return r; 241 - } 240 + r = amdgpu_uvd_resume(adev); 241 + if (r) 242 + return r; 243 + 242 244 return uvd_v6_0_hw_init(adev); 243 245 } 244 246
+3 -3
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
··· 830 830 { 831 831 uint32_t reference_clock, tmp; 832 832 struct cgs_display_info info = {0}; 833 - struct cgs_mode_info mode_info; 833 + struct cgs_mode_info mode_info = {0}; 834 834 835 835 info.mode_info = &mode_info; 836 836 ··· 3948 3948 uint32_t ref_clock; 3949 3949 uint32_t refresh_rate = 0; 3950 3950 struct cgs_display_info info = {0}; 3951 - struct cgs_mode_info mode_info; 3951 + struct cgs_mode_info mode_info = {0}; 3952 3952 3953 3953 info.mode_info = &mode_info; 3954 - 3955 3954 cgs_get_active_displays_info(hwmgr->device, &info); 3956 3955 num_active_displays = info.display_count; 3957 3956 ··· 3966 3967 frame_time_in_us = 1000000 / refresh_rate; 3967 3968 3968 3969 pre_vbi_time_in_us = frame_time_in_us - 200 - mode_info.vblank_time_us; 3970 + 3969 3971 data->frame_time_x2 = frame_time_in_us * 2 / 100; 3970 3972 3971 3973 display_gap2 = pre_vbi_time_in_us * (ref_clock / 100);
+3
drivers/gpu/drm/i915/gvt/cmd_parser.c
··· 2723 2723 uint32_t per_ctx_start[CACHELINE_DWORDS] = {0}; 2724 2724 unsigned char *bb_start_sva; 2725 2725 2726 + if (!wa_ctx->per_ctx.valid) 2727 + return 0; 2728 + 2726 2729 per_ctx_start[0] = 0x18800001; 2727 2730 per_ctx_start[1] = wa_ctx->per_ctx.guest_gma; 2728 2731
+1 -2
drivers/gpu/drm/i915/gvt/execlist.c
··· 701 701 CACHELINE_BYTES; 702 702 workload->wa_ctx.per_ctx.guest_gma = 703 703 per_ctx & PER_CTX_ADDR_MASK; 704 - 705 - WARN_ON(workload->wa_ctx.indirect_ctx.size && !(per_ctx & 0x1)); 704 + workload->wa_ctx.per_ctx.valid = per_ctx & 1; 706 705 } 707 706 708 707 if (emulate_schedule_in)
+10 -60
drivers/gpu/drm/i915/gvt/handlers.c
··· 1429 1429 return 0; 1430 1430 } 1431 1431 1432 - static int ring_timestamp_mmio_read(struct intel_vgpu *vgpu, 1433 - unsigned int offset, void *p_data, unsigned int bytes) 1434 - { 1435 - struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; 1436 - 1437 - mmio_hw_access_pre(dev_priv); 1438 - vgpu_vreg(vgpu, offset) = I915_READ(_MMIO(offset)); 1439 - mmio_hw_access_post(dev_priv); 1440 - return intel_vgpu_default_mmio_read(vgpu, offset, p_data, bytes); 1441 - } 1442 - 1443 - static int instdone_mmio_read(struct intel_vgpu *vgpu, 1432 + static int mmio_read_from_hw(struct intel_vgpu *vgpu, 1444 1433 unsigned int offset, void *p_data, unsigned int bytes) 1445 1434 { 1446 1435 struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; ··· 1578 1589 MMIO_F(prefix(BLT_RING_BASE), s, f, am, rm, d, r, w); \ 1579 1590 MMIO_F(prefix(GEN6_BSD_RING_BASE), s, f, am, rm, d, r, w); \ 1580 1591 MMIO_F(prefix(VEBOX_RING_BASE), s, f, am, rm, d, r, w); \ 1592 + if (HAS_BSD2(dev_priv)) \ 1593 + MMIO_F(prefix(GEN8_BSD2_RING_BASE), s, f, am, rm, d, r, w); \ 1581 1594 } while (0) 1582 1595 1583 1596 #define MMIO_RING_D(prefix, d) \ ··· 1626 1635 #undef RING_REG 1627 1636 1628 1637 #define RING_REG(base) (base + 0x6c) 1629 - MMIO_RING_DFH(RING_REG, D_ALL, 0, instdone_mmio_read, NULL); 1630 - MMIO_DH(RING_REG(GEN8_BSD2_RING_BASE), D_ALL, instdone_mmio_read, NULL); 1638 + MMIO_RING_DFH(RING_REG, D_ALL, 0, mmio_read_from_hw, NULL); 1631 1639 #undef RING_REG 1632 - MMIO_DH(GEN7_SC_INSTDONE, D_BDW_PLUS, instdone_mmio_read, NULL); 1640 + MMIO_DH(GEN7_SC_INSTDONE, D_BDW_PLUS, mmio_read_from_hw, NULL); 1634 1642 MMIO_GM_RDR(0x2148, D_ALL, NULL, NULL); 1635 1643 MMIO_GM_RDR(CCID, D_ALL, NULL, NULL); ··· 1638 1648 MMIO_RING_DFH(RING_TAIL, D_ALL, F_CMD_ACCESS, NULL, NULL); 1639 1649 MMIO_RING_DFH(RING_HEAD, D_ALL, F_CMD_ACCESS, NULL, NULL); 1640 1650 MMIO_RING_DFH(RING_CTL, D_ALL, F_CMD_ACCESS, NULL, NULL); 1641 - MMIO_RING_DFH(RING_ACTHD, D_ALL, F_CMD_ACCESS, NULL, NULL); 1651 + MMIO_RING_DFH(RING_ACTHD, D_ALL, F_CMD_ACCESS, mmio_read_from_hw, NULL); 1642 1652 MMIO_RING_GM_RDR(RING_START, D_ALL, NULL, NULL); 1643 1653 1644 1654 /* RING MODE */ ··· 1652 1662 MMIO_RING_DFH(RING_INSTPM, D_ALL, F_MODE_MASK | F_CMD_ACCESS, 1653 1663 NULL, NULL); 1654 1664 MMIO_RING_DFH(RING_TIMESTAMP, D_ALL, F_CMD_ACCESS, 1655 - ring_timestamp_mmio_read, NULL); 1665 + mmio_read_from_hw, NULL); 1656 1666 MMIO_RING_DFH(RING_TIMESTAMP_UDW, D_ALL, F_CMD_ACCESS, 1657 - ring_timestamp_mmio_read, NULL); 1667 + mmio_read_from_hw, NULL); 1658 1668 1659 1669 MMIO_DFH(GEN7_GT_MODE, D_ALL, F_MODE_MASK | F_CMD_ACCESS, NULL, NULL); 1660 1670 MMIO_DFH(CACHE_MODE_0_GEN7, D_ALL, F_MODE_MASK | F_CMD_ACCESS, ··· 2401 2411 struct drm_i915_private *dev_priv = gvt->dev_priv; 2402 2412 int ret; 2403 2413 2404 - MMIO_DFH(RING_IMR(GEN8_BSD2_RING_BASE), D_BDW_PLUS, F_CMD_ACCESS, NULL, 2405 - intel_vgpu_reg_imr_handler); 2406 - 2407 2414 MMIO_DH(GEN8_GT_IMR(0), D_BDW_PLUS, NULL, intel_vgpu_reg_imr_handler); 2408 2415 MMIO_DH(GEN8_GT_IER(0), D_BDW_PLUS, NULL, intel_vgpu_reg_ier_handler); 2409 2416 MMIO_DH(GEN8_GT_IIR(0), D_BDW_PLUS, NULL, intel_vgpu_reg_iir_handler); ··· 2463 2476 MMIO_DH(GEN8_MASTER_IRQ, D_BDW_PLUS, NULL, 2464 2477 intel_vgpu_reg_master_irq_handler); 2465 2478 2466 - MMIO_DFH(RING_HWSTAM(GEN8_BSD2_RING_BASE), D_BDW_PLUS, 2467 - F_CMD_ACCESS, NULL, NULL); 2468 - MMIO_DFH(0x1c134, D_BDW_PLUS, F_CMD_ACCESS, NULL, NULL); 2469 - 2470 - MMIO_DFH(RING_TAIL(GEN8_BSD2_RING_BASE), D_BDW_PLUS, F_CMD_ACCESS, 2471 - NULL, NULL); 2472 - MMIO_DFH(RING_HEAD(GEN8_BSD2_RING_BASE), D_BDW_PLUS, 2473 - F_CMD_ACCESS, NULL, NULL); 2474 - MMIO_GM_RDR(RING_START(GEN8_BSD2_RING_BASE), D_BDW_PLUS, NULL, NULL); 2475 - MMIO_DFH(RING_CTL(GEN8_BSD2_RING_BASE), D_BDW_PLUS, F_CMD_ACCESS, 2476 - NULL, NULL); 2477 - MMIO_DFH(RING_ACTHD(GEN8_BSD2_RING_BASE), D_BDW_PLUS, 2478 - F_CMD_ACCESS, NULL, NULL); 2479 - MMIO_DFH(RING_ACTHD_UDW(GEN8_BSD2_RING_BASE), D_BDW_PLUS, 2480 - F_CMD_ACCESS, NULL, NULL); 2481 - MMIO_DFH(0x1c29c, D_BDW_PLUS, F_MODE_MASK | F_CMD_ACCESS, NULL, 2482 - ring_mode_mmio_write); 2483 - MMIO_DFH(RING_MI_MODE(GEN8_BSD2_RING_BASE), D_BDW_PLUS, 2484 - F_MODE_MASK | F_CMD_ACCESS, NULL, NULL); 2485 - MMIO_DFH(RING_INSTPM(GEN8_BSD2_RING_BASE), D_BDW_PLUS, 2486 - F_MODE_MASK | F_CMD_ACCESS, NULL, NULL); 2487 - MMIO_DFH(RING_TIMESTAMP(GEN8_BSD2_RING_BASE), D_BDW_PLUS, F_CMD_ACCESS, 2488 - ring_timestamp_mmio_read, NULL); 2489 - 2490 - MMIO_RING_DFH(RING_ACTHD_UDW, D_BDW_PLUS, F_CMD_ACCESS, NULL, NULL); 2479 + MMIO_RING_DFH(RING_ACTHD_UDW, D_BDW_PLUS, F_CMD_ACCESS, 2480 + mmio_read_from_hw, NULL); 2491 2481 2492 2482 #define RING_REG(base) (base + 0xd0) 2493 2483 MMIO_RING_F(RING_REG, 4, F_RO, 0, 2494 - ~_MASKED_BIT_ENABLE(RESET_CTL_REQUEST_RESET), D_BDW_PLUS, NULL, 2495 - ring_reset_ctl_write); 2496 - MMIO_F(RING_REG(GEN8_BSD2_RING_BASE), 4, F_RO, 0, 2497 2484 ~_MASKED_BIT_ENABLE(RESET_CTL_REQUEST_RESET), D_BDW_PLUS, NULL, 2498 2485 ring_reset_ctl_write); 2499 2486 #undef RING_REG 2500 2487 2501 2488 #define RING_REG(base) (base + 0x230) 2502 2489 MMIO_RING_DFH(RING_REG, D_BDW_PLUS, 0, NULL, elsp_mmio_write); 2503 - MMIO_DH(RING_REG(GEN8_BSD2_RING_BASE), D_BDW_PLUS, NULL, elsp_mmio_write); 2504 2490 #undef RING_REG 2505 2491 2506 2492 #define RING_REG(base) (base + 0x234) 2507 2493 MMIO_RING_F(RING_REG, 8, F_RO | F_CMD_ACCESS, 0, ~0, D_BDW_PLUS, 2508 2494 NULL, NULL); 2509 - MMIO_F(RING_REG(GEN8_BSD2_RING_BASE), 4, F_RO | F_CMD_ACCESS, 0, 2510 - ~0LL, D_BDW_PLUS, NULL, NULL); 2511 2495 #undef RING_REG 2512 2496 2513 2497 #define RING_REG(base) (base + 0x244) 2514 2498 MMIO_RING_DFH(RING_REG, D_BDW_PLUS, F_CMD_ACCESS, NULL, NULL); 2515 - MMIO_DFH(RING_REG(GEN8_BSD2_RING_BASE), D_BDW_PLUS, F_CMD_ACCESS, 2516 - NULL, NULL); 2517 2499 #undef RING_REG 2518 2500 2519 2501 #define RING_REG(base) (base + 0x370) 2520 2502 MMIO_RING_F(RING_REG, 48, F_RO, 0, ~0, D_BDW_PLUS, NULL, NULL); 2521 - MMIO_F(RING_REG(GEN8_BSD2_RING_BASE), 48, F_RO, 0, ~0, D_BDW_PLUS, 2522 - NULL, NULL); 2523 2503 #undef RING_REG 2524 2504 2525 2505 #define RING_REG(base) (base + 0x3a0) 2526 2506 MMIO_RING_DFH(RING_REG, D_BDW_PLUS, F_MODE_MASK, NULL, NULL); 2527 - MMIO_DFH(RING_REG(GEN8_BSD2_RING_BASE), D_BDW_PLUS, F_MODE_MASK, NULL, NULL); 2528 2507 #undef RING_REG 2529 2508 2530 2509 MMIO_D(PIPEMISC(PIPE_A), D_BDW_PLUS); ··· 2510 2557 2511 2558 #define RING_REG(base) (base + 0x270) 2512 2559 MMIO_RING_F(RING_REG, 32, 0, 0, 0, D_BDW_PLUS, NULL, NULL); 2513 - MMIO_F(RING_REG(GEN8_BSD2_RING_BASE), 32, 0, 0, 0, D_BDW_PLUS, NULL, NULL); 2514 2560 #undef RING_REG 2515 2561 2516 2562 MMIO_RING_GM_RDR(RING_HWS_PGA, D_BDW_PLUS, NULL, NULL); 2517 - MMIO_GM_RDR(RING_HWS_PGA(GEN8_BSD2_RING_BASE), D_BDW_PLUS, NULL, NULL); 2518 2563 2519 2564 MMIO_DFH(HDC_CHICKEN0, D_BDW_PLUS, F_MODE_MASK | F_CMD_ACCESS, NULL, NULL); ··· 2800 2849 MMIO_D(0x65f08, D_SKL | D_KBL); 2801 2850 MMIO_D(0x320f0, D_SKL | D_KBL); 2802 2851 2803 - MMIO_DFH(_REG_VCS2_EXCC, D_SKL_PLUS, F_CMD_ACCESS, NULL, NULL); 2804 2852 MMIO_D(0x70034, D_SKL_PLUS); 2805 2853 MMIO_D(0x71034, D_SKL_PLUS); 2806 2854 MMIO_D(0x72034, D_SKL_PLUS);
-3
drivers/gpu/drm/i915/gvt/reg.h
··· 54 54 55 55 #define VGT_SPRSTRIDE(pipe) _PIPE(pipe, _SPRA_STRIDE, _PLANE_STRIDE_2_B) 56 56 57 - #define _REG_VECS_EXCC 0x1A028 58 - #define _REG_VCS2_EXCC 0x1c028 59 - 60 57 #define _REG_701C0(pipe, plane) (0x701c0 + pipe * 0x1000 + (plane - 1) * 0x100) 61 58 #define _REG_701C4(pipe, plane) (0x701c4 + pipe * 0x1000 + (plane - 1) * 0x100) 62 59
+1
drivers/gpu/drm/i915/gvt/scheduler.h
··· 68 68 struct shadow_per_ctx { 69 69 unsigned long guest_gma; 70 70 unsigned long shadow_gma; 71 + unsigned valid; 71 72 }; 72 73 73 74 struct intel_shadow_wa_ctx {
+4
drivers/gpu/drm/i915/i915_perf.c
··· 2537 2537 .poll = i915_perf_poll, 2538 2538 .read = i915_perf_read, 2539 2539 .unlocked_ioctl = i915_perf_ioctl, 2540 + /* Our ioctl have no arguments, so it's safe to use the same function 2541 + * to handle 32bits compatibility. 2542 + */ 2543 + .compat_ioctl = i915_perf_ioctl, 2540 2544 }; 2541 2545 2542 2546
+4 -1
drivers/hv/channel_mgmt.c
··· 937 937 { 938 938 BUG_ON(!is_hvsock_channel(channel)); 939 939 940 - channel->rescind = true; 940 + /* We always get a rescind msg when a connection is closed. */ 941 + while (!READ_ONCE(channel->probe_done) || !READ_ONCE(channel->rescind)) 942 + msleep(1); 943 + 941 944 vmbus_device_unregister(channel->device_obj); 942 945 } 943 946 EXPORT_SYMBOL_GPL(vmbus_hvsock_device_unregister);
+5
drivers/hwmon/da9052-hwmon.c
··· 477 477 /* disable touchscreen features */ 478 478 da9052_reg_write(hwmon->da9052, DA9052_TSI_CONT_A_REG, 0x00); 479 479 480 + /* Sample every 1ms */ 481 + da9052_reg_update(hwmon->da9052, DA9052_ADC_CONT_REG, 482 + DA9052_ADCCONT_ADCMODE, 483 + DA9052_ADCCONT_ADCMODE); 484 + 480 485 err = da9052_request_irq(hwmon->da9052, DA9052_IRQ_TSIREADY, 481 486 "tsiready-irq", da9052_tsi_datardy_irq, 482 487 hwmon);
+5 -8
drivers/hwmon/tmp102.c
··· 268 268 return err; 269 269 } 270 270 271 - tmp102->ready_time = jiffies; 272 - if (tmp102->config_orig & TMP102_CONF_SD) { 273 - /* 274 - * Mark that we are not ready with data until the first 275 - * conversion is complete 276 - */ 277 - tmp102->ready_time += msecs_to_jiffies(CONVERSION_TIME_MS); 278 - } 271 + /* 272 + * Mark that we are not ready with data until the first 273 + * conversion is complete 274 + */ 275 + tmp102->ready_time = jiffies + msecs_to_jiffies(CONVERSION_TIME_MS); 279 276 280 277 hwmon_dev = devm_hwmon_device_register_with_info(dev, client->name, 281 278 tmp102,
+2
drivers/iio/adc/Kconfig
··· 243 243 config DLN2_ADC 244 244 tristate "Diolan DLN-2 ADC driver support" 245 245 depends on MFD_DLN2 246 + select IIO_BUFFER 247 + select IIO_TRIGGERED_BUFFER 246 248 help 247 249 Say yes here to build support for Diolan DLN-2 ADC. 248 250
+29 -16
drivers/iio/adc/at91-sama5d2_adc.c
··· 225 225 char *name; 226 226 unsigned int trgmod_value; 227 227 unsigned int edge_type; 228 + bool hw_trig; 228 229 }; 229 230 230 231 struct at91_adc_state { ··· 255 254 .name = "external_rising", 256 255 .trgmod_value = AT91_SAMA5D2_TRGR_TRGMOD_EXT_TRIG_RISE, 257 256 .edge_type = IRQ_TYPE_EDGE_RISING, 257 + .hw_trig = true, 258 258 }, 259 259 { 260 260 .name = "external_falling", 261 261 .trgmod_value = AT91_SAMA5D2_TRGR_TRGMOD_EXT_TRIG_FALL, 262 262 .edge_type = IRQ_TYPE_EDGE_FALLING, 263 + .hw_trig = true, 263 264 }, 264 265 { 265 266 .name = "external_any", 266 267 .trgmod_value = AT91_SAMA5D2_TRGR_TRGMOD_EXT_TRIG_ANY, 267 268 .edge_type = IRQ_TYPE_EDGE_BOTH, 269 + .hw_trig = true, 270 + }, 271 + { 272 + .name = "software", 273 + .trgmod_value = AT91_SAMA5D2_TRGR_TRGMOD_NO_TRIGGER, 274 + .edge_type = IRQ_TYPE_NONE, 275 + .hw_trig = false, 268 276 }, 269 277 }; 270 278 ··· 607 597 struct at91_adc_state *st; 608 598 struct resource *res; 609 599 int ret, i; 610 - u32 edge_type; 600 + u32 edge_type = IRQ_TYPE_NONE; 611 601 612 602 indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*st)); 613 603 if (!indio_dev) ··· 651 641 ret = of_property_read_u32(pdev->dev.of_node, 652 642 "atmel,trigger-edge-type", &edge_type); 653 643 if (ret) { 654 - dev_err(&pdev->dev, 655 - "invalid or missing value for atmel,trigger-edge-type\n"); 656 - return ret; 644 + dev_dbg(&pdev->dev, 645 + "atmel,trigger-edge-type not specified, only software trigger available\n"); 657 646 } 658 647 659 648 st->selected_trig = NULL; 660 649 661 - for (i = 0; i < AT91_SAMA5D2_HW_TRIG_CNT; i++) 650 + /* find the right trigger, or no trigger at all */ 651 + for (i = 0; i < AT91_SAMA5D2_HW_TRIG_CNT + 1; i++) 662 652 if (at91_adc_trigger_list[i].edge_type == edge_type) { 663 653 st->selected_trig = &at91_adc_trigger_list[i]; 664 654 break; ··· 727 717 728 718 platform_set_drvdata(pdev, indio_dev); 729 719 730 - ret = at91_adc_buffer_init(indio_dev); 731 - if (ret < 0) { 732 - dev_err(&pdev->dev, "couldn't initialize the buffer.\n"); 733 - goto per_clk_disable_unprepare; 734 - } 720 + if (st->selected_trig->hw_trig) { 721 + ret = at91_adc_buffer_init(indio_dev); 722 + if (ret < 0) { 723 + dev_err(&pdev->dev, "couldn't initialize the buffer.\n"); 724 + goto per_clk_disable_unprepare; 725 + } 735 726 736 - ret = at91_adc_trigger_init(indio_dev); 737 - if (ret < 0) { 738 - dev_err(&pdev->dev, "couldn't setup the triggers.\n"); 739 - goto per_clk_disable_unprepare; 727 + ret = at91_adc_trigger_init(indio_dev); 728 + if (ret < 0) { 729 + dev_err(&pdev->dev, "couldn't setup the triggers.\n"); 730 + goto per_clk_disable_unprepare; 731 + } 740 732 } 741 733 742 734 ret = iio_device_register(indio_dev); 743 735 if (ret < 0) 744 736 goto per_clk_disable_unprepare; 745 737 746 - dev_info(&pdev->dev, "setting up trigger as %s\n", 747 - st->selected_trig->name); 738 + if (st->selected_trig->hw_trig) 739 + dev_info(&pdev->dev, "setting up trigger as %s\n", 740 + st->selected_trig->name); 748 741 749 742 dev_info(&pdev->dev, "version: %x\n", 750 743 readl_relaxed(st->base + AT91_SAMA5D2_VERSION));
+1
drivers/iio/dummy/iio_simple_dummy_events.c
··· 72 72 st->event_en = state; 73 73 else 74 74 return -EINVAL; 75 + break; 75 76 default: 76 77 return -EINVAL; 77 78 }
+3 -7
drivers/iio/pressure/zpa2326.c
··· 865 865 static int zpa2326_wait_oneshot_completion(const struct iio_dev *indio_dev, 866 866 struct zpa2326_private *private) 867 867 { 868 - int ret; 869 868 unsigned int val; 870 869 long timeout; 871 870 ··· 886 887 /* Timed out. */ 887 888 zpa2326_warn(indio_dev, "no one shot interrupt occurred (%ld)", 888 889 timeout); 889 - ret = -ETIME; 890 - } else if (timeout < 0) { 891 - zpa2326_warn(indio_dev, 892 - "wait for one shot interrupt cancelled"); 893 - ret = -ERESTARTSYS; 890 + return -ETIME; 894 891 } 895 892 896 - return ret; 893 + zpa2326_warn(indio_dev, "wait for one shot interrupt cancelled"); 894 + return -ERESTARTSYS; 897 895 } 898 896 899 897 static int zpa2326_init_managed_irq(struct device *parent,
+40 -3
drivers/iio/proximity/as3935.c
··· 39 39 #define AS3935_AFE_GAIN_MAX 0x1F 40 40 #define AS3935_AFE_PWR_BIT BIT(0) 41 41 42 + #define AS3935_NFLWDTH 0x01 43 + #define AS3935_NFLWDTH_MASK 0x7f 44 + 42 45 #define AS3935_INT 0x03 43 46 #define AS3935_INT_MASK 0x0f 47 + #define AS3935_DISTURB_INT BIT(2) 44 48 #define AS3935_EVENT_INT BIT(3) 45 49 #define AS3935_NOISE_INT BIT(0) 46 50 ··· 52 48 #define AS3935_DATA_MASK 0x3F 53 49 54 50 #define AS3935_TUNE_CAP 0x08 51 + #define AS3935_DEFAULTS 0x3C 55 52 #define AS3935_CALIBRATE 0x3D 56 53 57 54 #define AS3935_READ_DATA BIT(14) ··· 67 62 struct mutex lock; 68 63 struct delayed_work work; 69 64 65 + unsigned long noise_tripped; 70 66 u32 tune_cap; 67 + u32 nflwdth_reg; 71 68 u8 buffer[16]; /* 8-bit data + 56-bit padding + 64-bit timestamp */ 72 69 u8 buf[2] ____cacheline_aligned; 73 70 }; ··· 152 145 return len; 153 146 } 154 147 148 + static ssize_t as3935_noise_level_tripped_show(struct device *dev, 149 + struct device_attribute *attr, 150 + char *buf) 151 + { 152 + struct as3935_state *st = iio_priv(dev_to_iio_dev(dev)); 153 + int ret; 154 + 155 + mutex_lock(&st->lock); 156 + ret = sprintf(buf, "%d\n", !time_after(jiffies, st->noise_tripped + HZ)); 157 + mutex_unlock(&st->lock); 158 + 159 + return ret; 160 + } 161 + 155 162 static IIO_DEVICE_ATTR(sensor_sensitivity, S_IRUGO | S_IWUSR, 156 163 as3935_sensor_sensitivity_show, as3935_sensor_sensitivity_store, 0); 157 164 165 + static IIO_DEVICE_ATTR(noise_level_tripped, S_IRUGO, 166 + as3935_noise_level_tripped_show, NULL, 0); 158 167 159 168 static struct attribute *as3935_attributes[] = { 160 169 &iio_dev_attr_sensor_sensitivity.dev_attr.attr, 170 + &iio_dev_attr_noise_level_tripped.dev_attr.attr, 161 171 NULL, 162 172 }; 163 173 ··· 270 246 case AS3935_EVENT_INT: 271 247 iio_trigger_poll_chained(st->trig); 272 248 break; 249 + case AS3935_DISTURB_INT: 273 250 case AS3935_NOISE_INT: 251 + mutex_lock(&st->lock); 252 + st->noise_tripped = jiffies; 253 + mutex_unlock(&st->lock); 274 254 dev_warn(&st->spi->dev, "noise level is too high\n"); 275 255 break; 276 256 } ··· 297 269 298 270 static void calibrate_as3935(struct as3935_state *st) 299 271 { 300 - /* mask disturber interrupt bit */ 301 - as3935_write(st, AS3935_INT, BIT(5)); 302 - 272 + as3935_write(st, AS3935_DEFAULTS, 0x96); 303 273 as3935_write(st, AS3935_CALIBRATE, 0x96); 304 274 as3935_write(st, AS3935_TUNE_CAP, 305 275 BIT(5) | (st->tune_cap / TUNE_CAP_DIV)); 306 276 307 277 mdelay(2); 308 278 as3935_write(st, AS3935_TUNE_CAP, (st->tune_cap / TUNE_CAP_DIV)); 279 + as3935_write(st, AS3935_NFLWDTH, st->nflwdth_reg); 309 280 } 310 281 311 282 #ifdef CONFIG_PM_SLEEP ··· 397 370 return -EINVAL; 398 371 } 399 372 373 + ret = of_property_read_u32(np, 374 + "ams,nflwdth", &st->nflwdth_reg); 375 + if (!ret && st->nflwdth_reg > AS3935_NFLWDTH_MASK) { 376 + dev_err(&spi->dev, 377 + "invalid nflwdth setting of %d\n", 378 + st->nflwdth_reg); 379 + return -EINVAL; 380 + } 381 + 400 382 indio_dev->dev.parent = &spi->dev; 401 383 indio_dev->name = spi_get_device_id(spi)->name; 402 384 indio_dev->channels = as3935_channels; ··· 420 384 return -ENOMEM; 421 385 422 386 st->trig = trig; 387 + st->noise_tripped = jiffies - HZ; 423 388 trig->dev.parent = indio_dev->dev.parent; 424 389 iio_trigger_set_drvdata(trig, indio_dev); 425 390 trig->ops = &iio_interrupt_trigger_ops;
+12 -1
drivers/infiniband/core/netlink.c
··· 175 175 !netlink_capable(skb, CAP_NET_ADMIN)) 176 176 return -EPERM; 177 177 178 + /* 179 + * LS responses overload the 0x100 (NLM_F_ROOT) flag. Don't 180 + * mistakenly call the .dump() function. 181 + */ 182 + if (index == RDMA_NL_LS) { 183 + if (cb_table[op].doit) 184 + return cb_table[op].doit(skb, nlh, extack); 185 + return -EINVAL; 186 + } 178 187 /* FIXME: Convert IWCM to properly handle doit callbacks */ 179 188 if ((nlh->nlmsg_flags & NLM_F_DUMP) || index == RDMA_NL_RDMA_CM || 180 189 index == RDMA_NL_IWCM) { 181 190 struct netlink_dump_control c = { 182 191 .dump = cb_table[op].dump, 183 192 }; 184 - return netlink_dump_start(nls, skb, nlh, &c); 193 + if (c.dump) 194 + return netlink_dump_start(nls, skb, nlh, &c); 195 + return -EINVAL; 185 196 } 186 197 187 198 if (cb_table[op].doit)
+1
drivers/input/mouse/elan_i2c_core.c
··· 1258 1258 { "ELAN0605", 0 }, 1259 1259 { "ELAN0609", 0 }, 1260 1260 { "ELAN060B", 0 }, 1261 + { "ELAN0611", 0 }, 1261 1262 { "ELAN1000", 0 }, 1262 1263 { } 1263 1264 };
+3 -2
drivers/input/rmi4/rmi_f30.c
··· 232 232 unsigned int trackstick_button = BTN_LEFT; 233 233 bool button_mapped = false; 234 234 int i; 235 + int button_count = min_t(u8, f30->gpioled_count, TRACKSTICK_RANGE_END); 235 236 236 237 f30->gpioled_key_map = devm_kcalloc(&fn->dev, 237 - f30->gpioled_count, 238 + button_count, 238 239 sizeof(f30->gpioled_key_map[0]), 239 240 GFP_KERNEL); 240 241 if (!f30->gpioled_key_map) { ··· 243 242 return -ENOMEM; 244 243 } 245 244 246 - for (i = 0; i < f30->gpioled_count; i++) { 245 + for (i = 0; i < button_count; i++) { 247 246 if (!rmi_f30_is_valid_button(i, f30->ctrl)) 248 247 continue; 249 248
+10 -7
drivers/input/tablet/gtco.c
··· 230 230 231 231 /* Walk this report and pull out the info we need */ 232 232 while (i < length) { 233 - prefix = report[i]; 234 - 235 - /* Skip over prefix */ 236 - i++; 233 + prefix = report[i++]; 237 234 238 235 /* Determine data size and save the data in the proper variable */ 239 - size = PREF_SIZE(prefix); 236 + size = (1U << PREF_SIZE(prefix)) >> 1; 237 + if (i + size > length) { 238 + dev_err(ddev, 239 + "Not enough data (need %d, have %d)\n", 240 + i + size, length); 241 + break; 242 + } 243 + 240 244 switch (size) { 241 245 case 1: 242 246 data = report[i]; ··· 248 244 case 2: 249 245 data16 = get_unaligned_le16(&report[i]); 250 246 break; 251 - case 3: 252 - size = 4; 247 + case 4: 253 248 data32 = get_unaligned_le32(&report[i]); 254 249 break; 255 250 }
+33 -10
drivers/irqchip/irq-gic-v3-its.c
··· 107 107 108 108 #define ITS_ITT_ALIGN SZ_256 109 109 110 + /* The maximum number of VPEID bits supported by VLPI commands */ 111 + #define ITS_MAX_VPEID_BITS (16) 112 + #define ITS_MAX_VPEID (1 << (ITS_MAX_VPEID_BITS)) 113 + 110 114 /* Convert page order to size in bytes */ 111 115 #define PAGE_ORDER_TO_SIZE(o) (PAGE_SIZE << (o)) 112 116 ··· 312 308 313 309 static void its_encode_itt(struct its_cmd_block *cmd, u64 itt_addr) 314 310 { 315 - its_mask_encode(&cmd->raw_cmd[2], itt_addr >> 8, 50, 8); 311 + its_mask_encode(&cmd->raw_cmd[2], itt_addr >> 8, 51, 8); 316 312 } 317 313 318 314 static void its_encode_valid(struct its_cmd_block *cmd, int valid) ··· 322 318 323 319 static void its_encode_target(struct its_cmd_block *cmd, u64 target_addr) 324 320 { 325 - its_mask_encode(&cmd->raw_cmd[2], target_addr >> 16, 50, 16); 321 + its_mask_encode(&cmd->raw_cmd[2], target_addr >> 16, 51, 16); 326 322 } 327 323 328 324 static void its_encode_collection(struct its_cmd_block *cmd, u16 col) ··· 362 358 363 359 static void its_encode_vpt_addr(struct its_cmd_block *cmd, u64 vpt_pa) 364 360 { 365 - its_mask_encode(&cmd->raw_cmd[3], vpt_pa >> 16, 50, 16); 361 + its_mask_encode(&cmd->raw_cmd[3], vpt_pa >> 16, 51, 16); 366 362 } 367 363 368 364 static void its_encode_vpt_size(struct its_cmd_block *cmd, u8 vpt_size) ··· 1482 1478 u64 val = its_read_baser(its, baser); 1483 1479 u64 esz = GITS_BASER_ENTRY_SIZE(val); 1484 1480 u64 type = GITS_BASER_TYPE(val); 1481 + u64 baser_phys, tmp; 1485 1482 u32 alloc_pages; 1486 1483 void *base; 1487 - u64 tmp; 1488 1484 1489 1485 retry_alloc_baser: 1490 1486 alloc_pages = (PAGE_ORDER_TO_SIZE(order) / psz); ··· 1500 1496 if (!base) 1501 1497 return -ENOMEM; 1502 1498 1499 + baser_phys = virt_to_phys(base); 1500 + 1501 + /* Check if the physical address of the memory is above 48bits */ 1502 + if (IS_ENABLED(CONFIG_ARM64_64K_PAGES) && (baser_phys >> 48)) { 1503 + 1504 + /* 52bit PA is supported only when PageSize=64K */ 1505 + if (psz != SZ_64K) { 1506 + pr_err("ITS: no 52bit PA support when psz=%d\n", psz); 1507 + free_pages((unsigned long)base, order); 1508 + return -ENXIO; 1509 + } 1510 + 1511 + /* Convert 52bit PA to 48bit field */ 1512 + baser_phys = GITS_BASER_PHYS_52_to_48(baser_phys); 1513 + } 1514 + 1503 1515 retry_baser: 1504 - val = (virt_to_phys(base) | 1516 + val = (baser_phys | 1505 1517 (type << GITS_BASER_TYPE_SHIFT) | 1506 1518 ((esz - 1) << GITS_BASER_ENTRY_SIZE_SHIFT) | 1507 1519 ((alloc_pages - 1) << GITS_BASER_PAGES_SHIFT) | ··· 1602 1582 1603 1583 static bool its_parse_indirect_baser(struct its_node *its, 1604 1584 struct its_baser *baser, 1605 - u32 psz, u32 *order) 1585 + u32 psz, u32 *order, u32 ids) 1606 1586 { 1607 1587 u64 tmp = its_read_baser(its, baser); 1608 1588 u64 type = GITS_BASER_TYPE(tmp); 1609 1589 u64 esz = GITS_BASER_ENTRY_SIZE(tmp); 1610 1590 u64 val = GITS_BASER_InnerShareable | GITS_BASER_RaWaWb; 1611 - u32 ids = its->device_ids; 1612 1591 u32 new_order = *order; 1613 1592 bool indirect = false; 1614 1593 ··· 1699 1680 continue; 1700 1681 1701 1682 case GITS_BASER_TYPE_DEVICE: 1683 + indirect = its_parse_indirect_baser(its, baser, 1684 + psz, &order, 1685 + its->device_ids); 1702 1686 case GITS_BASER_TYPE_VCPU: 1703 1687 indirect = its_parse_indirect_baser(its, baser, 1704 - psz, &order); 1688 + psz, &order, 1689 + ITS_MAX_VPEID_BITS); 1705 1690 break; 1706 1691 } 1707 1692 ··· 2574 2551 2575 2552 static int its_vpe_id_alloc(void) 2576 2553 { 2577 - return ida_simple_get(&its_vpeid_ida, 0, 1 << 16, GFP_KERNEL); 2554 + return ida_simple_get(&its_vpeid_ida, 0, ITS_MAX_VPEID, GFP_KERNEL); 2578 2555 } 2579 2556 2580 2557 static void its_vpe_id_free(u16 id) ··· 2874 2851 return -ENOMEM; 2875 2852 } 2876 2853 2877 - BUG_ON(entries != vpe_proxy.dev->nr_ites); 2854 + BUG_ON(entries > vpe_proxy.dev->nr_ites); 2878 2855 2879 2856 raw_spin_lock_init(&vpe_proxy.lock); 2880 2857 vpe_proxy.next_victim = 0;
+1 -1
drivers/irqchip/irq-tango.c
··· 141 141 for (i = 0; i < 2; i++) { 142 142 ct[i].chip.irq_ack = irq_gc_ack_set_bit; 143 143 ct[i].chip.irq_mask = irq_gc_mask_disable_reg; 144 - ct[i].chip.irq_mask_ack = irq_gc_mask_disable_reg_and_ack; 144 + ct[i].chip.irq_mask_ack = irq_gc_mask_disable_and_ack_set; 145 145 ct[i].chip.irq_unmask = irq_gc_unmask_enable_reg; 146 146 ct[i].chip.irq_set_type = tangox_irq_set_type; 147 147 ct[i].chip.name = gc->domain->name;
+1 -2
drivers/net/can/sun4i_can.c
··· 342 342 343 343 /* enter the selected mode */ 344 344 mod_reg_val = readl(priv->base + SUN4I_REG_MSEL_ADDR); 345 - if (priv->can.ctrlmode & CAN_CTRLMODE_PRESUME_ACK) 345 + if (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK) 346 346 mod_reg_val |= SUN4I_MSEL_LOOPBACK_MODE; 347 347 else if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY) 348 348 mod_reg_val |= SUN4I_MSEL_LISTEN_ONLY_MODE; ··· 811 811 priv->can.ctrlmode_supported = CAN_CTRLMODE_BERR_REPORTING | 812 812 CAN_CTRLMODE_LISTENONLY | 813 813 CAN_CTRLMODE_LOOPBACK | 814 - CAN_CTRLMODE_PRESUME_ACK | 815 814 CAN_CTRLMODE_3_SAMPLES; 816 815 priv->base = addr; 817 816 priv->clk = clk;
+8 -1
drivers/net/can/usb/kvaser_usb.c
··· 137 137 #define CMD_RESET_ERROR_COUNTER 49 138 138 #define CMD_TX_ACKNOWLEDGE 50 139 139 #define CMD_CAN_ERROR_EVENT 51 140 + #define CMD_FLUSH_QUEUE_REPLY 68 140 141 141 142 #define CMD_LEAF_USB_THROTTLE 77 142 143 #define CMD_LEAF_LOG_MESSAGE 106 ··· 1302 1301 goto warn; 1303 1302 break; 1304 1303 1304 + case CMD_FLUSH_QUEUE_REPLY: 1305 + if (dev->family != KVASER_LEAF) 1306 + goto warn; 1307 + break; 1308 + 1305 1309 default: 1306 1310 warn: dev_warn(dev->udev->dev.parent, 1307 1311 "Unhandled message (%d)\n", msg->id); ··· 1615 1609 if (err) 1616 1610 netdev_warn(netdev, "Cannot flush queue, error %d\n", err); 1617 1611 1618 - if (kvaser_usb_send_simple_msg(dev, CMD_RESET_CHIP, priv->channel)) 1612 + err = kvaser_usb_send_simple_msg(dev, CMD_RESET_CHIP, priv->channel); 1613 + if (err) 1619 1614 netdev_warn(netdev, "Cannot reset card, error %d\n", err); 1620 1615 1621 1616 err = kvaser_usb_stop_chip(priv);
+4 -5
drivers/net/ethernet/intel/e1000/e1000_ethtool.c
··· 1824 1824 { 1825 1825 struct e1000_adapter *adapter = netdev_priv(netdev); 1826 1826 int i; 1827 - char *p = NULL; 1828 1827 const struct e1000_stats *stat = e1000_gstrings_stats; 1829 1828 1830 1829 e1000_update_stats(adapter); 1831 - for (i = 0; i < E1000_GLOBAL_STATS_LEN; i++) { 1830 + for (i = 0; i < E1000_GLOBAL_STATS_LEN; i++, stat++) { 1831 + char *p; 1832 + 1832 1833 switch (stat->type) { 1833 1834 case NETDEV_STATS: 1834 1835 p = (char *)netdev + stat->stat_offset; ··· 1840 1839 default: 1841 1840 WARN_ONCE(1, "Invalid E1000 stat type: %u index %d\n", 1842 1841 stat->type, i); 1843 - break; 1842 + continue; 1844 1843 } 1845 1844 1846 1845 if (stat->sizeof_stat == sizeof(u64)) 1847 1846 data[i] = *(u64 *)p; 1848 1847 else 1849 1848 data[i] = *(u32 *)p; 1850 - 1851 - stat++; 1852 1849 } 1853 1850 /* BUG_ON(i != E1000_STATS_LEN); */ 1854 1851 }
+9 -2
drivers/net/ethernet/intel/e1000/e1000_main.c
··· 520 520 struct net_device *netdev = adapter->netdev; 521 521 u32 rctl, tctl; 522 522 523 - netif_carrier_off(netdev); 524 - 525 523 /* disable receives in the hardware */ 526 524 rctl = er32(RCTL); 527 525 ew32(RCTL, rctl & ~E1000_RCTL_EN); ··· 534 536 /* flush both disables and wait for them to finish */ 535 537 E1000_WRITE_FLUSH(); 536 538 msleep(10); 539 + 540 + /* Set the carrier off after transmits have been disabled in the 541 + * hardware, to avoid race conditions with e1000_watchdog() (which 542 + * may be running concurrently to us, checking for the carrier 543 + * bit to decide whether it should enable transmits again). Such 544 + * a race condition would result into transmission being disabled 545 + * in the hardware until the next IFF_DOWN+IFF_UP cycle. 546 + */ 547 + netif_carrier_off(netdev); 537 548 538 549 napi_disable(&adapter->napi); 539 550
+2 -1
drivers/net/ethernet/intel/i40e/i40e_txrx.c
··· 2111 2111 2112 2112 if (unlikely(i40e_rx_is_programming_status(qword))) { 2113 2113 i40e_clean_programming_status(rx_ring, rx_desc, qword); 2114 + cleaned_count++; 2114 2115 continue; 2115 2116 } 2116 2117 size = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >> ··· 2278 2277 goto enable_int; 2279 2278 } 2280 2279 2281 - if (ITR_IS_DYNAMIC(tx_itr_setting)) { 2280 + if (ITR_IS_DYNAMIC(rx_itr_setting)) { 2282 2281 rx = i40e_set_new_dynamic_itr(&q_vector->rx); 2283 2282 rxval = i40e_buildreg_itr(I40E_RX_ITR, q_vector->rx.itr); 2284 2283 }
+1 -1
drivers/net/ethernet/intel/igb/igb_main.c
··· 5673 5673 DMA_TO_DEVICE); 5674 5674 dma_unmap_len_set(tx_buffer, len, 0); 5675 5675 5676 - if (i--) 5676 + if (i-- == 0) 5677 5677 i += tx_ring->count; 5678 5678 tx_buffer = &tx_ring->tx_buffer_info[i]; 5679 5679 }
+6 -12
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 8156 8156 return 0; 8157 8157 dma_error: 8158 8158 dev_err(tx_ring->dev, "TX DMA map failed\n"); 8159 - tx_buffer = &tx_ring->tx_buffer_info[i]; 8160 8159 8161 8160 /* clear dma mappings for failed tx_buffer_info map */ 8162 - while (tx_buffer != first) { 8161 + for (;;) { 8162 + tx_buffer = &tx_ring->tx_buffer_info[i]; 8163 8163 if (dma_unmap_len(tx_buffer, len)) 8164 8164 dma_unmap_page(tx_ring->dev, 8165 8165 dma_unmap_addr(tx_buffer, dma), 8166 8166 dma_unmap_len(tx_buffer, len), 8167 8167 DMA_TO_DEVICE); 8168 8168 dma_unmap_len_set(tx_buffer, len, 0); 8169 - 8170 - if (i--) 8169 + if (tx_buffer == first) 8170 + break; 8171 + if (i == 0) 8171 8172 i += tx_ring->count; 8172 - tx_buffer = &tx_ring->tx_buffer_info[i]; 8173 + i--; 8173 8174 } 8174 - 8175 - if (dma_unmap_len(tx_buffer, len)) 8176 - dma_unmap_single(tx_ring->dev, 8177 - dma_unmap_addr(tx_buffer, dma), 8178 - dma_unmap_len(tx_buffer, len), 8179 - DMA_TO_DEVICE); 8180 - dma_unmap_len_set(tx_buffer, len, 0); 8181 8175 8182 8176 dev_kfree_skb_any(first->skb); 8183 8177 first->skb = NULL;
+22 -13
drivers/net/ethernet/marvell/mvpp2.c
··· 1167 1167 u32 port_map; 1168 1168 }; 1169 1169 1170 + #define IS_TSO_HEADER(txq_pcpu, addr) \ 1171 + ((addr) >= (txq_pcpu)->tso_headers_dma && \ 1172 + (addr) < (txq_pcpu)->tso_headers_dma + \ 1173 + (txq_pcpu)->size * TSO_HEADER_SIZE) 1174 + 1170 1175 /* Queue modes */ 1171 1176 #define MVPP2_QDIST_SINGLE_MODE 0 1172 1177 #define MVPP2_QDIST_MULTI_MODE 1 ··· 1539 1534 int off = MVPP2_PRS_TCAM_DATA_BYTE(offs); 1540 1535 u16 tcam_data; 1541 1536 1542 - tcam_data = (8 << pe->tcam.byte[off + 1]) | pe->tcam.byte[off]; 1537 + tcam_data = (pe->tcam.byte[off + 1] << 8) | pe->tcam.byte[off]; 1543 1538 if (tcam_data != data) 1544 1539 return false; 1545 1540 return true; ··· 2614 2609 /* place holders only - no ports */ 2615 2610 mvpp2_prs_mac_drop_all_set(priv, 0, false); 2616 2611 mvpp2_prs_mac_promisc_set(priv, 0, false); 2617 - mvpp2_prs_mac_multi_set(priv, MVPP2_PE_MAC_MC_ALL, 0, false); 2618 - mvpp2_prs_mac_multi_set(priv, MVPP2_PE_MAC_MC_IP6, 0, false); 2612 + mvpp2_prs_mac_multi_set(priv, 0, MVPP2_PE_MAC_MC_ALL, false); 2613 + mvpp2_prs_mac_multi_set(priv, 0, MVPP2_PE_MAC_MC_IP6, false); 2619 2614 } 2620 2615 2621 2616 /* Set default entries for various types of dsa packets */ ··· 3396 3391 struct mvpp2_prs_entry *pe; 3397 3392 int tid; 3398 3393 3399 - pe = kzalloc(sizeof(*pe), GFP_KERNEL); 3394 + pe = kzalloc(sizeof(*pe), GFP_ATOMIC); 3400 3395 if (!pe) 3401 3396 return NULL; 3402 3397 mvpp2_prs_tcam_lu_set(pe, MVPP2_PRS_LU_MAC); ··· 3458 3453 if (tid < 0) 3459 3454 return tid; 3460 3455 3461 - pe = kzalloc(sizeof(*pe), GFP_KERNEL); 3456 + pe = kzalloc(sizeof(*pe), GFP_ATOMIC); 3462 3457 if (!pe) 3463 3458 return -ENOMEM; 3464 3459 mvpp2_prs_tcam_lu_set(pe, MVPP2_PRS_LU_MAC); ··· 5326 5321 struct mvpp2_txq_pcpu_buf *tx_buf = 5327 5322 txq_pcpu->buffs + txq_pcpu->txq_get_index; 5328 5323 5329 - dma_unmap_single(port->dev->dev.parent, tx_buf->dma, 5330 - tx_buf->size, DMA_TO_DEVICE); 5324 + if (!IS_TSO_HEADER(txq_pcpu, tx_buf->dma)) 5325 + dma_unmap_single(port->dev->dev.parent, tx_buf->dma, 5326 + tx_buf->size, DMA_TO_DEVICE); 5331 5327 if (tx_buf->skb) 5332 5328 dev_kfree_skb_any(tx_buf->skb); 5333 5329 ··· 5615 5609 5616 5610 txq_pcpu->tso_headers = 5617 5611 dma_alloc_coherent(port->dev->dev.parent, 5618 - MVPP2_AGGR_TXQ_SIZE * TSO_HEADER_SIZE, 5612 + txq_pcpu->size * TSO_HEADER_SIZE, 5619 5613 &txq_pcpu->tso_headers_dma, 5620 5614 GFP_KERNEL); 5621 5615 if (!txq_pcpu->tso_headers) ··· 5629 5623 kfree(txq_pcpu->buffs); 5630 5624 5631 5625 dma_free_coherent(port->dev->dev.parent, 5632 - MVPP2_AGGR_TXQ_SIZE * MVPP2_DESC_ALIGNED_SIZE, 5626 + txq_pcpu->size * TSO_HEADER_SIZE, 5633 5627 txq_pcpu->tso_headers, 5634 5628 txq_pcpu->tso_headers_dma); 5635 5629 } ··· 5653 5647 kfree(txq_pcpu->buffs); 5654 5648 5655 5649 dma_free_coherent(port->dev->dev.parent, 5656 - MVPP2_AGGR_TXQ_SIZE * MVPP2_DESC_ALIGNED_SIZE, 5650 + txq_pcpu->size * TSO_HEADER_SIZE, 5657 5651 txq_pcpu->tso_headers, 5658 5652 txq_pcpu->tso_headers_dma); 5659 5653 } ··· 6218 6212 tx_desc_unmap_put(struct mvpp2_port *port, struct mvpp2_tx_queue *txq, 6219 6213 struct mvpp2_tx_desc *desc) 6220 6214 { 6215 + struct mvpp2_txq_pcpu *txq_pcpu = this_cpu_ptr(txq->pcpu); 6216 + 6221 6217 dma_addr_t buf_dma_addr = 6222 6218 mvpp2_txdesc_dma_addr_get(port, desc); 6223 6219 size_t buf_sz = 6224 6220 mvpp2_txdesc_size_get(port, desc); 6225 - dma_unmap_single(port->dev->dev.parent, buf_dma_addr, 6226 - buf_sz, DMA_TO_DEVICE); 6221 + if (!IS_TSO_HEADER(txq_pcpu, buf_dma_addr)) 6222 + dma_unmap_single(port->dev->dev.parent, buf_dma_addr, 6223 + buf_sz, DMA_TO_DEVICE); 6227 6224 mvpp2_txq_desc_put(txq); 6228 6225 } ··· 6498 6489 } 6499 6490 6500 6491 /* Finalize TX processing */ 6501 - if (txq_pcpu->count >= txq->done_pkts_coal) 6492 + if (!port->has_tx_irqs && txq_pcpu->count >= txq->done_pkts_coal) 6502 6493 mvpp2_txq_done(port, txq, txq_pcpu); 6503 6494 6504 6495 /* Set the timer in case not all frags were processed */
+41 -29
drivers/net/ethernet/mellanox/mlx5/core/dev.c
··· 77 77 list_add_tail(&delayed_event->list, &priv->waiting_events_list); 78 78 } 79 79 80 - static void fire_delayed_event_locked(struct mlx5_device_context *dev_ctx, 81 - struct mlx5_core_dev *dev, 82 - struct mlx5_priv *priv) 80 + static void delayed_event_release(struct mlx5_device_context *dev_ctx, 81 + struct mlx5_priv *priv) 83 82 { 83 + struct mlx5_core_dev *dev = container_of(priv, struct mlx5_core_dev, priv); 84 84 struct mlx5_delayed_event *de; 85 85 struct mlx5_delayed_event *n; 86 + struct list_head temp; 86 87 87 - /* stop delaying events */ 88 + INIT_LIST_HEAD(&temp); 89 + 90 + spin_lock_irq(&priv->ctx_lock); 91 + 88 92 priv->is_accum_events = false; 89 - 90 - /* fire all accumulated events before new event comes */ 91 - list_for_each_entry_safe(de, n, &priv->waiting_events_list, list) { 93 + list_splice_init(&priv->waiting_events_list, &temp); 94 + if (!dev_ctx->context) 95 + goto out; 96 + list_for_each_entry_safe(de, n, &priv->waiting_events_list, list) 92 97 dev_ctx->intf->event(dev, dev_ctx->context, de->event, de->param); 98 + 99 + out: 100 + spin_unlock_irq(&priv->ctx_lock); 101 + 102 + list_for_each_entry_safe(de, n, &temp, list) { 93 103 list_del(&de->list); 94 104 kfree(de); 95 105 } 96 106 } 97 107 98 - static void cleanup_delayed_evets(struct mlx5_priv *priv) 108 + /* accumulating events that can come after mlx5_ib calls to 109 + * ib_register_device, till adding that interface to the events list. 
110 + */ 111 + static void delayed_event_start(struct mlx5_priv *priv) 99 112 { 100 - struct mlx5_delayed_event *de; 101 - struct mlx5_delayed_event *n; 102 - 103 113 spin_lock_irq(&priv->ctx_lock); 104 - priv->is_accum_events = false; 105 - list_for_each_entry_safe(de, n, &priv->waiting_events_list, list) { 106 - list_del(&de->list); 107 - kfree(de); 108 - } 114 + priv->is_accum_events = true; 109 115 spin_unlock_irq(&priv->ctx_lock); 110 116 } 111 117 ··· 128 122 return; 129 123 130 124 dev_ctx->intf = intf; 131 - /* accumulating events that can come after mlx5_ib calls to 132 - * ib_register_device, till adding that interface to the events list. 133 - */ 134 125 135 - priv->is_accum_events = true; 126 + delayed_event_start(priv); 136 127 137 128 dev_ctx->context = intf->add(dev); 138 129 set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state); ··· 139 136 if (dev_ctx->context) { 140 137 spin_lock_irq(&priv->ctx_lock); 141 138 list_add_tail(&dev_ctx->list, &priv->ctx_list); 142 - 143 - fire_delayed_event_locked(dev_ctx, dev, priv); 144 139 145 140 #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING 146 141 if (dev_ctx->intf->pfault) { ··· 151 150 } 152 151 #endif 153 152 spin_unlock_irq(&priv->ctx_lock); 154 - } else { 155 - kfree(dev_ctx); 156 - /* delete all accumulated events */ 157 - cleanup_delayed_evets(priv); 158 153 } 154 + 155 + delayed_event_release(dev_ctx, priv); 156 + 157 + if (!dev_ctx->context) 158 + kfree(dev_ctx); 159 159 } 160 160 161 161 static struct mlx5_device_context *mlx5_get_device(struct mlx5_interface *intf, ··· 207 205 if (!dev_ctx) 208 206 return; 209 207 208 + delayed_event_start(priv); 210 209 if (intf->attach) { 211 210 if (test_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state)) 212 - return; 211 + goto out; 213 212 intf->attach(dev, dev_ctx->context); 214 213 set_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state); 215 214 } else { 216 215 if (test_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state)) 217 - return; 216 + goto out; 218 217 dev_ctx->context = intf->add(dev); 219 218 set_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state); 220 219 } 220 + 221 + out: 222 + delayed_event_release(dev_ctx, priv); 221 223 } 222 224 223 225 void mlx5_attach_device(struct mlx5_core_dev *dev) ··· 420 414 if (priv->is_accum_events) 421 415 add_delayed_event(priv, dev, event, param); 422 416 417 + /* After mlx5_detach_device, the dev_ctx->intf is still set and dev_ctx is 418 + * still in priv->ctx_list. In this case, only notify the dev_ctx if its 419 + * ADDED or ATTACHED bit are set. 420 + */ 423 421 list_for_each_entry(dev_ctx, &priv->ctx_list, list) 424 - if (dev_ctx->intf->event) 422 + if (dev_ctx->intf->event && 423 + (test_bit(MLX5_INTERFACE_ADDED, &dev_ctx->state) || 424 + test_bit(MLX5_INTERFACE_ATTACHED, &dev_ctx->state))) 425 425 dev_ctx->intf->event(dev, dev_ctx->context, event, param); 426 426 427 427 spin_unlock_irqrestore(&priv->ctx_lock, flags);
+84 -31
drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
··· 41 41 #define MLX5E_CEE_STATE_UP 1 42 42 #define MLX5E_CEE_STATE_DOWN 0 43 43 44 + enum { 45 + MLX5E_VENDOR_TC_GROUP_NUM = 7, 46 + MLX5E_LOWEST_PRIO_GROUP = 0, 47 + }; 48 + 44 49 /* If dcbx mode is non-host set the dcbx mode to host. 45 50 */ 46 51 static int mlx5e_dcbnl_set_dcbx_mode(struct mlx5e_priv *priv, ··· 90 85 { 91 86 struct mlx5e_priv *priv = netdev_priv(netdev); 92 87 struct mlx5_core_dev *mdev = priv->mdev; 88 + u8 tc_group[IEEE_8021QAZ_MAX_TCS]; 89 + bool is_tc_group_6_exist = false; 90 + bool is_zero_bw_ets_tc = false; 93 91 int err = 0; 94 92 int i; 95 93 ··· 104 96 err = mlx5_query_port_prio_tc(mdev, i, &ets->prio_tc[i]); 105 97 if (err) 106 98 return err; 107 - } 108 99 109 - for (i = 0; i < ets->ets_cap; i++) { 100 + err = mlx5_query_port_tc_group(mdev, i, &tc_group[i]); 101 + if (err) 102 + return err; 103 + 110 104 err = mlx5_query_port_tc_bw_alloc(mdev, i, &ets->tc_tx_bw[i]); 111 105 if (err) 112 106 return err; 113 - if (ets->tc_tx_bw[i] < MLX5E_MAX_BW_ALLOC) 114 - priv->dcbx.tc_tsa[i] = IEEE_8021QAZ_TSA_ETS; 107 + 108 + if (ets->tc_tx_bw[i] < MLX5E_MAX_BW_ALLOC && 109 + tc_group[i] == (MLX5E_LOWEST_PRIO_GROUP + 1)) 110 + is_zero_bw_ets_tc = true; 111 + 112 + if (tc_group[i] == (MLX5E_VENDOR_TC_GROUP_NUM - 1)) 113 + is_tc_group_6_exist = true; 115 114 } 116 115 116 + /* Report 0% ets tc if exits*/ 117 + if (is_zero_bw_ets_tc) { 118 + for (i = 0; i < ets->ets_cap; i++) 119 + if (tc_group[i] == MLX5E_LOWEST_PRIO_GROUP) 120 + ets->tc_tx_bw[i] = 0; 121 + } 122 + 123 + /* Update tc_tsa based on fw setting*/ 124 + for (i = 0; i < ets->ets_cap; i++) { 125 + if (ets->tc_tx_bw[i] < MLX5E_MAX_BW_ALLOC) 126 + priv->dcbx.tc_tsa[i] = IEEE_8021QAZ_TSA_ETS; 127 + else if (tc_group[i] == MLX5E_VENDOR_TC_GROUP_NUM && 128 + !is_tc_group_6_exist) 129 + priv->dcbx.tc_tsa[i] = IEEE_8021QAZ_TSA_VENDOR; 130 + } 117 131 memcpy(ets->tc_tsa, priv->dcbx.tc_tsa, sizeof(ets->tc_tsa)); 118 132 119 133 return err; 120 134 } 121 135 122 - enum { 123 - MLX5E_VENDOR_TC_GROUP_NUM = 7, 124 - MLX5E_ETS_TC_GROUP_NUM = 0, 125 - }; 126 - 127 136 static void mlx5e_build_tc_group(struct ieee_ets *ets, u8 *tc_group, int max_tc) 128 137 { 129 138 bool any_tc_mapped_to_ets = false; 139 + bool ets_zero_bw = false; 130 140 int strict_group; 131 141 int i; 132 142 133 - for (i = 0; i <= max_tc; i++) 134 - if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS) 143 + for (i = 0; i <= max_tc; i++) { 144 + if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS) { 135 145 any_tc_mapped_to_ets = true; 146 + if (!ets->tc_tx_bw[i]) 147 + ets_zero_bw = true; 148 + } 149 + } 136 150 137 - strict_group = any_tc_mapped_to_ets ? 1 : 0; 151 + /* strict group has higher priority than ets group */ 152 + strict_group = MLX5E_LOWEST_PRIO_GROUP; 153 + if (any_tc_mapped_to_ets) 154 + strict_group++; 155 + if (ets_zero_bw) 156 + strict_group++; 138 157 139 158 for (i = 0; i <= max_tc; i++) { 140 159 switch (ets->tc_tsa[i]) { ··· 172 137 tc_group[i] = strict_group++; 173 138 break; 174 139 case IEEE_8021QAZ_TSA_ETS: 175 - tc_group[i] = MLX5E_ETS_TC_GROUP_NUM; 140 + tc_group[i] = MLX5E_LOWEST_PRIO_GROUP; 141 + if (ets->tc_tx_bw[i] && ets_zero_bw) 142 + tc_group[i] = MLX5E_LOWEST_PRIO_GROUP + 1; 176 143 break; 177 144 } 178 145 } ··· 183 146 static void mlx5e_build_tc_tx_bw(struct ieee_ets *ets, u8 *tc_tx_bw, 184 147 u8 *tc_group, int max_tc) 185 148 { 149 + int bw_for_ets_zero_bw_tc = 0; 150 + int last_ets_zero_bw_tc = -1; 151 + int num_ets_zero_bw = 0; 186 152 int i; 153 + 154 + for (i = 0; i <= max_tc; i++) { 155 + if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS && 156 + !ets->tc_tx_bw[i]) { 157 + num_ets_zero_bw++; 158 + last_ets_zero_bw_tc = i; 159 + } 160 + } 161 + 162 + if (num_ets_zero_bw) 163 + bw_for_ets_zero_bw_tc = MLX5E_MAX_BW_ALLOC / num_ets_zero_bw; 187 164 188 165 for (i = 0; i <= max_tc; i++) { 189 166 switch (ets->tc_tsa[i]) { ··· 208 157 tc_tx_bw[i] = MLX5E_MAX_BW_ALLOC; 209 158 break; 210 159 case IEEE_8021QAZ_TSA_ETS: 211 - tc_tx_bw[i] = ets->tc_tx_bw[i];
160 + tc_tx_bw[i] = ets->tc_tx_bw[i] ? 161 + ets->tc_tx_bw[i] : 162 + bw_for_ets_zero_bw_tc; 212 163 break; 213 164 } 214 165 } 166 + 167 + /* Make sure the total bw for ets zero bw group is 100% */ 168 + if (last_ets_zero_bw_tc != -1) 169 + tc_tx_bw[last_ets_zero_bw_tc] += 170 + MLX5E_MAX_BW_ALLOC % num_ets_zero_bw; 215 171 } 216 172 173 + /* If there are ETS BW 0, 174 + * Set ETS group # to 1 for all ETS non zero BW tcs. Their sum must be 100%. 175 + * Set group #0 to all the ETS BW 0 tcs and 176 + * equally splits the 100% BW between them 177 + * Report both group #0 and #1 as ETS type. 178 + * All the tcs in group #0 will be reported with 0% BW. 179 + */ 217 180 int mlx5e_dcbnl_ieee_setets_core(struct mlx5e_priv *priv, struct ieee_ets *ets) 218 181 { 219 182 struct mlx5_core_dev *mdev = priv->mdev; ··· 253 188 return err; 254 189 255 190 memcpy(priv->dcbx.tc_tsa, ets->tc_tsa, sizeof(ets->tc_tsa)); 256 - 257 191 return err; 258 192 } 259 193 ··· 273 209 } 274 210 275 211 /* Validate Bandwidth Sum */ 276 - for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) { 277 - if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS) { 278 - if (!ets->tc_tx_bw[i]) { 279 - netdev_err(netdev, 280 - "Failed to validate ETS: BW 0 is illegal\n"); 281 - return -EINVAL; 282 - } 283 - 212 + for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) 213 + if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS) 284 214 bw_sum += ets->tc_tx_bw[i]; 285 - } 286 - } 287 215 288 216 if (bw_sum != 0 && bw_sum != 100) { 289 217 netdev_err(netdev, ··· 589 533 static void mlx5e_dcbnl_getpgbwgcfgtx(struct net_device *netdev, 590 534 int pgid, u8 *bw_pct) 591 535 { 592 - struct mlx5e_priv *priv = netdev_priv(netdev); 593 - struct mlx5_core_dev *mdev = priv->mdev; 536 + struct ieee_ets ets; 594 537 595 538 if (pgid >= CEE_DCBX_MAX_PGS) { 596 539 netdev_err(netdev, ··· 597 542 return; 598 543 } 599 544 600 - if (mlx5_query_port_tc_bw_alloc(mdev, pgid, bw_pct)) 601 - *bw_pct = 0; 545 + mlx5e_dcbnl_ieee_getets(netdev, &ets); 546 + *bw_pct = 
ets.tc_tx_bw[pgid]; 602 547 } 603 548 604 549 static void mlx5e_dcbnl_setpfccfg(struct net_device *netdev, ··· 793 738 ets.tc_tsa[i] = IEEE_8021QAZ_TSA_VENDOR; 794 739 ets.prio_tc[i] = i; 795 740 } 796 - 797 - memcpy(priv->dcbx.tc_tsa, ets.tc_tsa, sizeof(ets.tc_tsa)); 798 741 799 742 /* tclass[prio=0]=1, tclass[prio=1]=0, tclass[prio=i]=i (for i>1) */ 800 743 ets.prio_tc[0] = 1;
+54 -35
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 78 78 }; 79 79 80 80 struct mlx5e_tc_flow_parse_attr { 81 + struct ip_tunnel_info tun_info; 81 82 struct mlx5_flow_spec spec; 82 83 int num_mod_hdr_actions; 83 84 void *mod_hdr_actions; 85 + int mirred_ifindex; 84 86 }; 85 87 86 88 enum { ··· 324 322 static void mlx5e_detach_encap(struct mlx5e_priv *priv, 325 323 struct mlx5e_tc_flow *flow); 326 324 325 + static int mlx5e_attach_encap(struct mlx5e_priv *priv, 326 + struct ip_tunnel_info *tun_info, 327 + struct net_device *mirred_dev, 328 + struct net_device **encap_dev, 329 + struct mlx5e_tc_flow *flow); 330 + 327 331 static struct mlx5_flow_handle * 328 332 mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv, 329 333 struct mlx5e_tc_flow_parse_attr *parse_attr, ··· 337 329 { 338 330 struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; 339 331 struct mlx5_esw_flow_attr *attr = flow->esw_attr; 340 - struct mlx5_flow_handle *rule; 332 + struct net_device *out_dev, *encap_dev = NULL; 333 + struct mlx5_flow_handle *rule = NULL; 334 + struct mlx5e_rep_priv *rpriv; 335 + struct mlx5e_priv *out_priv; 341 336 int err; 337 + 338 + if (attr->action & MLX5_FLOW_CONTEXT_ACTION_ENCAP) { 339 + out_dev = __dev_get_by_index(dev_net(priv->netdev), 340 + attr->parse_attr->mirred_ifindex); 341 + err = mlx5e_attach_encap(priv, &parse_attr->tun_info, 342 + out_dev, &encap_dev, flow); 343 + if (err) { 344 + rule = ERR_PTR(err); 345 + if (err != -EAGAIN) 346 + goto err_attach_encap; 347 + } 348 + out_priv = netdev_priv(encap_dev); 349 + rpriv = out_priv->ppriv; 350 + attr->out_rep = rpriv->rep; 351 + } 342 352 343 353 err = mlx5_eswitch_add_vlan_action(esw, attr); 344 354 if (err) { ··· 373 347 } 374 348 } 375 349 376 - rule = mlx5_eswitch_add_offloaded_rule(esw, &parse_attr->spec, attr); 377 - if (IS_ERR(rule)) 378 - goto err_add_rule; 379 - 350 + /* we get here if (1) there's no error (rule being null) or when 351 + * (2) there's an encap action and we're on -EAGAIN (no valid neigh) 352 + */ 353 + if (rule != ERR_PTR(-EAGAIN)) { 354 + 
rule = mlx5_eswitch_add_offloaded_rule(esw, &parse_attr->spec, attr); 355 + if (IS_ERR(rule)) 356 + goto err_add_rule; 357 + } 380 358 return rule; 381 359 382 360 err_add_rule: ··· 391 361 err_add_vlan: 392 362 if (attr->action & MLX5_FLOW_CONTEXT_ACTION_ENCAP) 393 363 mlx5e_detach_encap(priv, flow); 364 + err_attach_encap: 394 365 return rule; 395 366 } 396 367 ··· 420 389 void mlx5e_tc_encap_flows_add(struct mlx5e_priv *priv, 421 390 struct mlx5e_encap_entry *e) 422 391 { 392 + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; 393 + struct mlx5_esw_flow_attr *esw_attr; 423 394 struct mlx5e_tc_flow *flow; 424 395 int err; 425 396 ··· 437 404 mlx5e_rep_queue_neigh_stats_work(priv); 438 405 439 406 list_for_each_entry(flow, &e->flows, encap) { 440 - flow->esw_attr->encap_id = e->encap_id; 441 - flow->rule = mlx5e_tc_add_fdb_flow(priv, 442 - flow->esw_attr->parse_attr, 443 - flow); 407 + esw_attr = flow->esw_attr; 408 + esw_attr->encap_id = e->encap_id; 409 + flow->rule = mlx5_eswitch_add_offloaded_rule(esw, &esw_attr->parse_attr->spec, esw_attr); 444 410 if (IS_ERR(flow->rule)) { 445 411 err = PTR_ERR(flow->rule); 446 412 mlx5_core_warn(priv->mdev, "Failed to update cached encapsulation flow, %d\n", ··· 453 421 void mlx5e_tc_encap_flows_del(struct mlx5e_priv *priv, 454 422 struct mlx5e_encap_entry *e) 455 423 { 424 + struct mlx5_eswitch *esw = priv->mdev->priv.eswitch; 456 425 struct mlx5e_tc_flow *flow; 457 - struct mlx5_fc *counter; 458 426 459 427 list_for_each_entry(flow, &e->flows, encap) { 460 428 if (flow->flags & MLX5E_TC_FLOW_OFFLOADED) { 461 429 flow->flags &= ~MLX5E_TC_FLOW_OFFLOADED; 462 - counter = mlx5_flow_rule_counter(flow->rule); 463 - mlx5_del_flow_rules(flow->rule); 464 - mlx5_fc_destroy(priv->mdev, counter); 430 + mlx5_eswitch_del_offloaded_rule(esw, flow->rule, flow->esw_attr); 465 431 } 466 432 } 467 433 ··· 1972 1942 1973 1943 if (is_tcf_mirred_egress_redirect(a)) { 1974 1944 int ifindex = tcf_mirred_ifindex(a); 1975 - struct net_device 
*out_dev, *encap_dev = NULL; 1945 + struct net_device *out_dev; 1976 1946 struct mlx5e_priv *out_priv; 1977 1947 1978 1948 out_dev = __dev_get_by_index(dev_net(priv->netdev), ifindex); ··· 1985 1955 rpriv = out_priv->ppriv; 1986 1956 attr->out_rep = rpriv->rep; 1987 1957 } else if (encap) { 1988 - err = mlx5e_attach_encap(priv, info, 1989 - out_dev, &encap_dev, flow); 1990 - if (err && err != -EAGAIN) 1991 - return err; 1958 + parse_attr->mirred_ifindex = ifindex; 1959 + parse_attr->tun_info = *info; 1960 + attr->parse_attr = parse_attr; 1992 1961 attr->action |= MLX5_FLOW_CONTEXT_ACTION_ENCAP | 1993 1962 MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | 1994 1963 MLX5_FLOW_CONTEXT_ACTION_COUNT; 1995 - out_priv = netdev_priv(encap_dev); 1996 - rpriv = out_priv->ppriv; 1997 - attr->out_rep = rpriv->rep; 1998 - attr->parse_attr = parse_attr; 1964 + /* attr->out_rep is resolved when we handle encap */ 1999 1965 } else { 2000 1966 pr_err("devices %s %s not on same switch HW, can't offload forwarding\n", 2001 1967 priv->netdev->name, out_dev->name); ··· 2073 2047 if (flow->flags & MLX5E_TC_FLOW_ESWITCH) { 2074 2048 err = parse_tc_fdb_actions(priv, f->exts, parse_attr, flow); 2075 2049 if (err < 0) 2076 - goto err_handle_encap_flow; 2050 + goto err_free; 2077 2051 flow->rule = mlx5e_tc_add_fdb_flow(priv, parse_attr, flow); 2078 2052 } else { 2079 2053 err = parse_tc_nic_actions(priv, f->exts, parse_attr, flow); ··· 2084 2058 2085 2059 if (IS_ERR(flow->rule)) { 2086 2060 err = PTR_ERR(flow->rule); 2087 - goto err_free; 2061 + if (err != -EAGAIN) 2062 + goto err_free; 2088 2063 } 2089 2064 2090 - flow->flags |= MLX5E_TC_FLOW_OFFLOADED; 2065 + if (err != -EAGAIN) 2066 + flow->flags |= MLX5E_TC_FLOW_OFFLOADED; 2067 + 2091 2068 err = rhashtable_insert_fast(&tc->ht, &flow->node, 2092 2069 tc->ht_params); 2093 2070 if (err) ··· 2103 2074 2104 2075 err_del_rule: 2105 2076 mlx5e_tc_del_flow(priv, flow); 2106 - 2107 - err_handle_encap_flow: 2108 - if (err == -EAGAIN) { 2109 - err = 
rhashtable_insert_fast(&tc->ht, &flow->node, 2110 - tc->ht_params); 2111 - if (err) 2112 - mlx5e_tc_del_flow(priv, flow); 2113 - else 2114 - return 0; 2115 - } 2116 2077 2117 2078 err_free: 2118 2079 kvfree(parse_attr);
+3 -2
drivers/net/ethernet/mellanox/mlx5/core/health.c
··· 354 354 void mlx5_drain_health_recovery(struct mlx5_core_dev *dev) 355 355 { 356 356 struct mlx5_core_health *health = &dev->priv.health; 357 + unsigned long flags; 357 358 358 - spin_lock(&health->wq_lock); 359 + spin_lock_irqsave(&health->wq_lock, flags); 359 360 set_bit(MLX5_DROP_NEW_RECOVERY_WORK, &health->flags); 360 - spin_unlock(&health->wq_lock); 361 + spin_unlock_irqrestore(&health->wq_lock, flags); 361 362 cancel_delayed_work_sync(&dev->priv.health.recover_work); 362 363 } 363 364
+21
drivers/net/ethernet/mellanox/mlx5/core/port.c
··· 677 677 } 678 678 EXPORT_SYMBOL_GPL(mlx5_set_port_tc_group); 679 679 680 + int mlx5_query_port_tc_group(struct mlx5_core_dev *mdev, 681 + u8 tc, u8 *tc_group) 682 + { 683 + u32 out[MLX5_ST_SZ_DW(qetc_reg)]; 684 + void *ets_tcn_conf; 685 + int err; 686 + 687 + err = mlx5_query_port_qetcr_reg(mdev, out, sizeof(out)); 688 + if (err) 689 + return err; 690 + 691 + ets_tcn_conf = MLX5_ADDR_OF(qetc_reg, out, 692 + tc_configuration[tc]); 693 + 694 + *tc_group = MLX5_GET(ets_tcn_config_reg, ets_tcn_conf, 695 + group); 696 + 697 + return 0; 698 + } 699 + EXPORT_SYMBOL_GPL(mlx5_query_port_tc_group); 700 + 680 701 int mlx5_set_port_tc_bw_alloc(struct mlx5_core_dev *mdev, u8 *tc_bw) 681 702 { 682 703 u32 in[MLX5_ST_SZ_DW(qetc_reg)] = {0};
+2
drivers/net/ethernet/netronome/nfp/flower/action.c
··· 127 127 */ 128 128 if (!switchdev_port_same_parent_id(in_dev, out_dev)) 129 129 return -EOPNOTSUPP; 130 + if (!nfp_netdev_is_nfp_repr(out_dev)) 131 + return -EOPNOTSUPP; 130 132 131 133 output->port = cpu_to_be32(nfp_repr_get_port_id(out_dev)); 132 134 if (!output->port)
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c
··· 74 74 plat_dat->axi->axi_wr_osr_lmt--; 75 75 } 76 76 77 - if (of_property_read_u32(np, "read,read-requests", 77 + if (of_property_read_u32(np, "snps,read-requests", 78 78 &plat_dat->axi->axi_rd_osr_lmt)) { 79 79 /** 80 80 * Since the register has a reset value of 1, if property
+7
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 150 150 plat->rx_queues_to_use = 1; 151 151 plat->tx_queues_to_use = 1; 152 152 153 + /* First Queue must always be in DCB mode. As MTL_QUEUE_DCB = 1 we need 154 + * to always set this, otherwise Queue will be classified as AVB 155 + * (because MTL_QUEUE_AVB = 0). 156 + */ 157 + plat->rx_queues_cfg[0].mode_to_use = MTL_QUEUE_DCB; 158 + plat->tx_queues_cfg[0].mode_to_use = MTL_QUEUE_DCB; 159 + 153 160 rx_node = of_parse_phandle(pdev->dev.of_node, "snps,mtl-rx-config", 0); 154 161 if (!rx_node) 155 162 return;
+2 -2
drivers/net/ipvlan/ipvtap.c
··· 197 197 { 198 198 int err; 199 199 200 - err = tap_create_cdev(&ipvtap_cdev, &ipvtap_major, "ipvtap"); 201 - 200 + err = tap_create_cdev(&ipvtap_cdev, &ipvtap_major, "ipvtap", 201 + THIS_MODULE); 202 202 if (err) 203 203 goto out1; 204 204
+2 -2
drivers/net/macvtap.c
··· 204 204 { 205 205 int err; 206 206 207 - err = tap_create_cdev(&macvtap_cdev, &macvtap_major, "macvtap"); 208 - 207 + err = tap_create_cdev(&macvtap_cdev, &macvtap_major, "macvtap", 208 + THIS_MODULE); 209 209 if (err) 210 210 goto out1; 211 211
+12 -11
drivers/net/tap.c
··· 517 517 &tap_proto, 0); 518 518 if (!q) 519 519 goto err; 520 + if (skb_array_init(&q->skb_array, tap->dev->tx_queue_len, GFP_KERNEL)) { 521 + sk_free(&q->sk); 522 + goto err; 523 + } 520 524 521 525 RCU_INIT_POINTER(q->sock.wq, &q->wq); 522 526 init_waitqueue_head(&q->wq.wait); ··· 544 540 if ((tap->dev->features & NETIF_F_HIGHDMA) && (tap->dev->features & NETIF_F_SG)) 545 541 sock_set_flag(&q->sk, SOCK_ZEROCOPY); 546 542 547 - err = -ENOMEM; 548 - if (skb_array_init(&q->skb_array, tap->dev->tx_queue_len, GFP_KERNEL)) 549 - goto err_array; 550 - 551 543 err = tap_set_queue(tap, file, q); 552 - if (err) 553 - goto err_queue; 544 + if (err) { 545 + /* tap_sock_destruct() will take care of freeing skb_array */ 546 + goto err_put; 547 + } 554 548 555 549 dev_put(tap->dev); 556 550 557 551 rtnl_unlock(); 558 552 return err; 559 553 560 - err_queue: 561 - skb_array_cleanup(&q->skb_array); 562 - err_array: 554 + err_put: 563 555 sock_put(&q->sk); 564 556 err: 565 557 if (tap) ··· 1249 1249 return 0; 1250 1250 } 1251 1251 1252 - int tap_create_cdev(struct cdev *tap_cdev, 1253 - dev_t *tap_major, const char *device_name) 1252 + int tap_create_cdev(struct cdev *tap_cdev, dev_t *tap_major, 1253 + const char *device_name, struct module *module) 1254 1254 { 1255 1255 int err; 1256 1256 ··· 1259 1259 goto out1; 1260 1260 1261 1261 cdev_init(tap_cdev, &tap_fops); 1262 + tap_cdev->owner = module; 1262 1263 err = cdev_add(tap_cdev, *tap_major, TAP_NUM_DEVS); 1263 1264 if (err) 1264 1265 goto out2;
+2 -1
drivers/net/tun.c
··· 1444 1444 buflen += SKB_DATA_ALIGN(len + pad); 1445 1445 rcu_read_unlock(); 1446 1446 1447 + alloc_frag->offset = ALIGN((u64)alloc_frag->offset, SMP_CACHE_BYTES); 1447 1448 if (unlikely(!skb_page_frag_refill(buflen, alloc_frag, GFP_KERNEL))) 1448 1449 return ERR_PTR(-ENOMEM); 1449 1450 ··· 2254 2253 if (!dev) 2255 2254 return -ENOMEM; 2256 2255 err = dev_get_valid_name(net, dev, name); 2257 - if (err) 2256 + if (err < 0) 2258 2257 goto err_free_dev; 2259 2258 2260 2259 dev_net_set(dev, net);
+14
drivers/net/usb/cdc_ether.c
··· 561 561 #define HP_VENDOR_ID 0x03f0 562 562 #define MICROSOFT_VENDOR_ID 0x045e 563 563 #define UBLOX_VENDOR_ID 0x1546 564 + #define TPLINK_VENDOR_ID 0x2357 564 565 565 566 static const struct usb_device_id products[] = { 566 567 /* BLACKLIST !! ··· 814 813 .driver_info = 0, 815 814 }, 816 815 816 + /* TP-LINK UE300 USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */ 817 + { 818 + USB_DEVICE_AND_INTERFACE_INFO(TPLINK_VENDOR_ID, 0x0601, USB_CLASS_COMM, 819 + USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 820 + .driver_info = 0, 821 + }, 822 + 817 823 /* WHITELIST!!! 818 824 * 819 825 * CDC Ether uses two interfaces, not necessarily consecutive. ··· 871 863 USB_DEVICE_AND_INTERFACE_INFO(DELL_VENDOR_ID, 0x81ba, USB_CLASS_COMM, 872 864 USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 873 865 .driver_info = (kernel_ulong_t)&wwan_info, 866 + }, { 867 + /* Huawei ME906 and ME909 */ 868 + USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x15c1, USB_CLASS_COMM, 869 + USB_CDC_SUBCLASS_ETHERNET, 870 + USB_CDC_PROTO_NONE), 871 + .driver_info = (unsigned long)&wwan_info, 874 872 }, { 875 873 /* ZTE modules */ 876 874 USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, USB_CLASS_COMM,
+2
drivers/net/usb/r8152.c
··· 615 615 #define VENDOR_ID_LENOVO 0x17ef 616 616 #define VENDOR_ID_LINKSYS 0x13b1 617 617 #define VENDOR_ID_NVIDIA 0x0955 618 + #define VENDOR_ID_TPLINK 0x2357 618 619 619 620 #define MCU_TYPE_PLA 0x0100 620 621 #define MCU_TYPE_USB 0x0000 ··· 5320 5319 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7214)}, 5321 5320 {REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041)}, 5322 5321 {REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff)}, 5322 + {REALTEK_USB_DEVICE(VENDOR_ID_TPLINK, 0x0601)}, 5323 5323 {} 5324 5324 }; 5325 5325
+33 -4
drivers/nvme/host/fc.c
··· 2545 2545 nvme_fc_abort_aen_ops(ctrl); 2546 2546 2547 2547 /* wait for all io that had to be aborted */ 2548 - spin_lock_irqsave(&ctrl->lock, flags); 2548 + spin_lock_irq(&ctrl->lock); 2549 2549 wait_event_lock_irq(ctrl->ioabort_wait, ctrl->iocnt == 0, ctrl->lock); 2550 2550 ctrl->flags &= ~FCCTRL_TERMIO; 2551 - spin_unlock_irqrestore(&ctrl->lock, flags); 2551 + spin_unlock_irq(&ctrl->lock); 2552 2552 2553 2553 nvme_fc_term_aen_ops(ctrl); 2554 2554 ··· 2734 2734 { 2735 2735 struct nvme_fc_ctrl *ctrl; 2736 2736 unsigned long flags; 2737 - int ret, idx; 2737 + int ret, idx, retry; 2738 2738 2739 2739 if (!(rport->remoteport.port_role & 2740 2740 (FC_PORT_ROLE_NVME_DISCOVERY | FC_PORT_ROLE_NVME_TARGET))) { ··· 2760 2760 ctrl->rport = rport; 2761 2761 ctrl->dev = lport->dev; 2762 2762 ctrl->cnum = idx; 2763 + init_waitqueue_head(&ctrl->ioabort_wait); 2763 2764 2764 2765 get_device(ctrl->dev); 2765 2766 kref_init(&ctrl->ref); ··· 2826 2825 list_add_tail(&ctrl->ctrl_list, &rport->ctrl_list); 2827 2826 spin_unlock_irqrestore(&rport->lock, flags); 2828 2827 2829 - ret = nvme_fc_create_association(ctrl); 2828 + /* 2829 + * It's possible that transactions used to create the association 2830 + * may fail. Examples: CreateAssociation LS or CreateIOConnection 2831 + * LS gets dropped/corrupted/fails; or a frame gets dropped or a 2832 + * command times out for one of the actions to init the controller 2833 + * (Connect, Get/Set_Property, Set_Features, etc). Many of these 2834 + * transport errors (frame drop, LS failure) inherently must kill 2835 + * the association. The transport is coded so that any command used 2836 + * to create the association (prior to a LIVE state transition 2837 + * while NEW or RECONNECTING) will fail if it completes in error or 2838 + * times out. 
2839 + * 2840 + * As such: as the connect request was mostly likely due to a 2841 + * udev event that discovered the remote port, meaning there is 2842 + * not an admin or script there to restart if the connect 2843 + * request fails, retry the initial connection creation up to 2844 + * three times before giving up and declaring failure. 2845 + */ 2846 + for (retry = 0; retry < 3; retry++) { 2847 + ret = nvme_fc_create_association(ctrl); 2848 + if (!ret) 2849 + break; 2850 + } 2851 + 2830 2852 if (ret) { 2853 + /* couldn't schedule retry - fail out */ 2854 + dev_err(ctrl->ctrl.device, 2855 + "NVME-FC{%d}: Connect retry failed\n", ctrl->cnum); 2856 + 2831 2857 ctrl->ctrl.opts = NULL; 2858 + 2832 2859 /* initiate nvme ctrl ref counting teardown */ 2833 2860 nvme_uninit_ctrl(&ctrl->ctrl); 2834 2861 nvme_put_ctrl(&ctrl->ctrl);
+12 -4
drivers/nvme/host/rdma.c
··· 571 571 if (test_and_set_bit(NVME_RDMA_Q_DELETING, &queue->flags)) 572 572 return; 573 573 574 + if (nvme_rdma_queue_idx(queue) == 0) { 575 + nvme_rdma_free_qe(queue->device->dev, 576 + &queue->ctrl->async_event_sqe, 577 + sizeof(struct nvme_command), DMA_TO_DEVICE); 578 + } 579 + 574 580 nvme_rdma_destroy_queue_ib(queue); 575 581 rdma_destroy_id(queue->cm_id); 576 582 } ··· 745 739 static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl, 746 740 bool remove) 747 741 { 748 - nvme_rdma_free_qe(ctrl->queues[0].device->dev, &ctrl->async_event_sqe, 749 - sizeof(struct nvme_command), DMA_TO_DEVICE); 750 742 nvme_rdma_stop_queue(&ctrl->queues[0]); 751 743 if (remove) { 752 744 blk_cleanup_queue(ctrl->ctrl.admin_q); ··· 769 765 770 766 if (new) { 771 767 ctrl->ctrl.admin_tagset = nvme_rdma_alloc_tagset(&ctrl->ctrl, true); 772 - if (IS_ERR(ctrl->ctrl.admin_tagset)) 768 + if (IS_ERR(ctrl->ctrl.admin_tagset)) { 769 + error = PTR_ERR(ctrl->ctrl.admin_tagset); 773 770 goto out_free_queue; 771 + } 774 772 775 773 ctrl->ctrl.admin_q = blk_mq_init_queue(&ctrl->admin_tag_set); 776 774 if (IS_ERR(ctrl->ctrl.admin_q)) { ··· 852 846 853 847 if (new) { 854 848 ctrl->ctrl.tagset = nvme_rdma_alloc_tagset(&ctrl->ctrl, false); 855 - if (IS_ERR(ctrl->ctrl.tagset)) 849 + if (IS_ERR(ctrl->ctrl.tagset)) { 850 + ret = PTR_ERR(ctrl->ctrl.tagset); 856 851 goto out_free_io_queues; 852 + } 857 853 858 854 ctrl->ctrl.connect_q = blk_mq_init_queue(&ctrl->tag_set); 859 855 if (IS_ERR(ctrl->ctrl.connect_q)) {
+12 -3
drivers/nvme/target/core.c
··· 387 387 388 388 static void __nvmet_req_complete(struct nvmet_req *req, u16 status) 389 389 { 390 + u32 old_sqhd, new_sqhd; 391 + u16 sqhd; 392 + 390 393 if (status) 391 394 nvmet_set_status(req, status); 392 395 393 - if (req->sq->size) 394 - req->sq->sqhd = (req->sq->sqhd + 1) % req->sq->size; 395 - req->rsp->sq_head = cpu_to_le16(req->sq->sqhd); 396 + if (req->sq->size) { 397 + do { 398 + old_sqhd = req->sq->sqhd; 399 + new_sqhd = (old_sqhd + 1) % req->sq->size; 400 + } while (cmpxchg(&req->sq->sqhd, old_sqhd, new_sqhd) != 401 + old_sqhd); 402 + } 403 + sqhd = req->sq->sqhd & 0x0000FFFF; 404 + req->rsp->sq_head = cpu_to_le16(sqhd); 396 405 req->rsp->sq_id = cpu_to_le16(req->sq->qid); 397 406 req->rsp->command_id = req->cmd->common.command_id; 398 407
+1 -1
drivers/nvme/target/nvmet.h
··· 74 74 struct percpu_ref ref; 75 75 u16 qid; 76 76 u16 size; 77 - u16 sqhd; 77 + u32 sqhd; 78 78 struct completion free_done; 79 79 struct completion confirm_done; 80 80 };
+14 -4
drivers/phy/marvell/phy-mvebu-cp110-comphy.c
··· 111 111 #define MVEBU_COMPHY_CONF6_40B BIT(18) 112 112 #define MVEBU_COMPHY_SELECTOR 0x1140 113 113 #define MVEBU_COMPHY_SELECTOR_PHY(n) ((n) * 0x4) 114 + #define MVEBU_COMPHY_PIPE_SELECTOR 0x1144 115 + #define MVEBU_COMPHY_PIPE_SELECTOR_PIPE(n) ((n) * 0x4) 114 116 115 117 #define MVEBU_COMPHY_LANES 6 116 118 #define MVEBU_COMPHY_PORTS 3 ··· 470 468 { 471 469 struct mvebu_comphy_lane *lane = phy_get_drvdata(phy); 472 470 struct mvebu_comphy_priv *priv = lane->priv; 473 - int ret; 474 - u32 mux, val; 471 + int ret, mux; 472 + u32 val; 475 473 476 474 mux = mvebu_comphy_get_mux(lane->id, lane->port, lane->mode); 477 475 if (mux < 0) 478 476 return -ENOTSUPP; 477 + 478 + regmap_read(priv->regmap, MVEBU_COMPHY_PIPE_SELECTOR, &val); 479 + val &= ~(0xf << MVEBU_COMPHY_PIPE_SELECTOR_PIPE(lane->id)); 480 + regmap_write(priv->regmap, MVEBU_COMPHY_PIPE_SELECTOR, val); 479 481 480 482 regmap_read(priv->regmap, MVEBU_COMPHY_SELECTOR, &val); 481 483 val &= ~(0xf << MVEBU_COMPHY_SELECTOR_PHY(lane->id)); ··· 532 526 val &= ~(0xf << MVEBU_COMPHY_SELECTOR_PHY(lane->id)); 533 527 regmap_write(priv->regmap, MVEBU_COMPHY_SELECTOR, val); 534 528 529 + regmap_read(priv->regmap, MVEBU_COMPHY_PIPE_SELECTOR, &val); 530 + val &= ~(0xf << MVEBU_COMPHY_PIPE_SELECTOR_PIPE(lane->id)); 531 + regmap_write(priv->regmap, MVEBU_COMPHY_PIPE_SELECTOR, val); 532 + 535 533 return 0; 536 534 } 537 535 ··· 586 576 return PTR_ERR(priv->regmap); 587 577 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 588 578 priv->base = devm_ioremap_resource(&pdev->dev, res); 589 - if (!priv->base) 590 - return -ENOMEM; 579 + if (IS_ERR(priv->base)) 580 + return PTR_ERR(priv->base); 591 581 592 582 for_each_available_child_of_node(pdev->dev.of_node, child) { 593 583 struct mvebu_comphy_lane *lane;
+2 -1
drivers/phy/mediatek/phy-mtk-tphy.c
··· 27 27 /* banks shared by multiple phys */ 28 28 #define SSUSB_SIFSLV_V1_SPLLC 0x000 /* shared by u3 phys */ 29 29 #define SSUSB_SIFSLV_V1_U2FREQ 0x100 /* shared by u2 phys */ 30 + #define SSUSB_SIFSLV_V1_CHIP 0x300 /* shared by u3 phys */ 30 31 /* u2 phy bank */ 31 32 #define SSUSB_SIFSLV_V1_U2PHY_COM 0x000 32 33 /* u3/pcie/sata phy banks */ ··· 763 762 case PHY_TYPE_USB3: 764 763 case PHY_TYPE_PCIE: 765 764 u3_banks->spllc = tphy->sif_base + SSUSB_SIFSLV_V1_SPLLC; 766 - u3_banks->chip = NULL; 765 + u3_banks->chip = tphy->sif_base + SSUSB_SIFSLV_V1_CHIP; 767 766 u3_banks->phyd = instance->port_base + SSUSB_SIFSLV_V1_U3PHYD; 768 767 u3_banks->phya = instance->port_base + SSUSB_SIFSLV_V1_U3PHYA; 769 768 break;
+55 -27
drivers/phy/rockchip/phy-rockchip-typec.c
··· 443 443 return regmap_write(tcphy->grf_regs, reg->offset, val | mask); 444 444 } 445 445 446 + static void tcphy_dp_aux_set_flip(struct rockchip_typec_phy *tcphy) 447 + { 448 + u16 tx_ana_ctrl_reg_1; 449 + 450 + /* 451 + * Select the polarity of the xcvr: 452 + * 1, Reverses the polarity (If TYPEC, Pulls ups aux_p and pull 453 + * down aux_m) 454 + * 0, Normal polarity (if TYPEC, pulls up aux_m and pulls down 455 + * aux_p) 456 + */ 457 + tx_ana_ctrl_reg_1 = readl(tcphy->base + TX_ANA_CTRL_REG_1); 458 + if (!tcphy->flip) 459 + tx_ana_ctrl_reg_1 |= BIT(12); 460 + else 461 + tx_ana_ctrl_reg_1 &= ~BIT(12); 462 + writel(tx_ana_ctrl_reg_1, tcphy->base + TX_ANA_CTRL_REG_1); 463 + } 464 + 446 465 static void tcphy_dp_aux_calibration(struct rockchip_typec_phy *tcphy) 447 466 { 467 + u16 tx_ana_ctrl_reg_1; 448 468 u16 rdata, rdata2, val; 449 469 450 470 /* disable txda_cal_latch_en for rewrite the calibration values */ 451 - rdata = readl(tcphy->base + TX_ANA_CTRL_REG_1); 452 - val = rdata & 0xdfff; 453 - writel(val, tcphy->base + TX_ANA_CTRL_REG_1); 471 + tx_ana_ctrl_reg_1 = readl(tcphy->base + TX_ANA_CTRL_REG_1); 472 + tx_ana_ctrl_reg_1 &= ~BIT(13); 473 + writel(tx_ana_ctrl_reg_1, tcphy->base + TX_ANA_CTRL_REG_1); 454 474 455 475 /* 456 476 * read a resistor calibration code from CMN_TXPUCAL_CTRL[6:0] and ··· 492 472 * Activate this signal for 1 clock cycle to sample new calibration 493 473 * values. 
494 474 */ 495 - rdata = readl(tcphy->base + TX_ANA_CTRL_REG_1); 496 - val = rdata | 0x2000; 497 - writel(val, tcphy->base + TX_ANA_CTRL_REG_1); 475 + tx_ana_ctrl_reg_1 |= BIT(13); 476 + writel(tx_ana_ctrl_reg_1, tcphy->base + TX_ANA_CTRL_REG_1); 498 477 usleep_range(150, 200); 499 478 500 479 /* set TX Voltage Level and TX Deemphasis to 0 */ ··· 501 482 /* re-enable decap */ 502 483 writel(0x100, tcphy->base + TX_ANA_CTRL_REG_2); 503 484 writel(0x300, tcphy->base + TX_ANA_CTRL_REG_2); 504 - writel(0x2008, tcphy->base + TX_ANA_CTRL_REG_1); 505 - writel(0x2018, tcphy->base + TX_ANA_CTRL_REG_1); 485 + tx_ana_ctrl_reg_1 |= BIT(3); 486 + writel(tx_ana_ctrl_reg_1, tcphy->base + TX_ANA_CTRL_REG_1); 487 + tx_ana_ctrl_reg_1 |= BIT(4); 488 + writel(tx_ana_ctrl_reg_1, tcphy->base + TX_ANA_CTRL_REG_1); 506 489 507 490 writel(0, tcphy->base + TX_ANA_CTRL_REG_5); 508 491 ··· 515 494 writel(0x1001, tcphy->base + TX_ANA_CTRL_REG_4); 516 495 517 496 /* re-enables Bandgap reference for LDO */ 518 - writel(0x2098, tcphy->base + TX_ANA_CTRL_REG_1); 519 - writel(0x2198, tcphy->base + TX_ANA_CTRL_REG_1); 497 + tx_ana_ctrl_reg_1 |= BIT(7); 498 + writel(tx_ana_ctrl_reg_1, tcphy->base + TX_ANA_CTRL_REG_1); 499 + tx_ana_ctrl_reg_1 |= BIT(8); 500 + writel(tx_ana_ctrl_reg_1, tcphy->base + TX_ANA_CTRL_REG_1); 520 501 521 502 /* 522 503 * re-enables the transmitter pre-driver, driver data selection MUX, ··· 528 505 writel(0x303, tcphy->base + TX_ANA_CTRL_REG_2); 529 506 530 507 /* 531 - * BIT 12: Controls auxda_polarity, which selects the polarity of the 532 - * xcvr: 533 - * 1, Reverses the polarity (If TYPEC, Pulls ups aux_p and pull 534 - * down aux_m) 535 - * 0, Normal polarity (if TYPE_C, pulls up aux_m and pulls down 536 - * aux_p) 508 + * Do some magic undocumented stuff, some of which appears to 509 + * undo the "re-enables Bandgap reference for LDO" above. 
537 510 */ 538 - val = 0xa078; 539 - if (!tcphy->flip) 540 - val |= BIT(12); 541 - writel(val, tcphy->base + TX_ANA_CTRL_REG_1); 511 + tx_ana_ctrl_reg_1 |= BIT(15); 512 + tx_ana_ctrl_reg_1 &= ~BIT(8); 513 + tx_ana_ctrl_reg_1 &= ~BIT(7); 514 + tx_ana_ctrl_reg_1 |= BIT(6); 515 + tx_ana_ctrl_reg_1 |= BIT(5); 516 + writel(tx_ana_ctrl_reg_1, tcphy->base + TX_ANA_CTRL_REG_1); 542 517 543 518 writel(0, tcphy->base + TX_ANA_CTRL_REG_3); 544 519 writel(0, tcphy->base + TX_ANA_CTRL_REG_4); 545 520 writel(0, tcphy->base + TX_ANA_CTRL_REG_5); 546 521 547 522 /* 548 - * Controls low_power_swing_en, set the voltage swing of the driver 549 - * to 400mv. The values below are peak to peak (differential) values. 523 + * Controls low_power_swing_en, don't set the voltage swing of the 524 + * driver to 400mv. The values below are peak to peak (differential) 525 + * values. 550 526 */ 551 - writel(4, tcphy->base + TXDA_COEFF_CALC_CTRL); 527 + writel(0, tcphy->base + TXDA_COEFF_CALC_CTRL); 552 528 writel(0, tcphy->base + TXDA_CYA_AUXDA_CYA); 553 529 554 530 /* Controls tx_high_z_tm_en */ ··· 577 555 reset_control_deassert(tcphy->tcphy_rst); 578 556 579 557 property_enable(tcphy, &cfg->typec_conn_dir, tcphy->flip); 558 + tcphy_dp_aux_set_flip(tcphy); 580 559 581 560 tcphy_cfg_24m(tcphy); 582 561 ··· 708 685 if (tcphy->mode == new_mode) 709 686 goto unlock_ret; 710 687 711 - if (tcphy->mode == MODE_DISCONNECT) 712 - tcphy_phy_init(tcphy, new_mode); 688 + if (tcphy->mode == MODE_DISCONNECT) { 689 + ret = tcphy_phy_init(tcphy, new_mode); 690 + if (ret) 691 + goto unlock_ret; 692 + } 713 693 714 694 /* wait TCPHY for pipe ready */ 715 695 for (timeout = 0; timeout < 100; timeout++) { ··· 786 760 */ 787 761 if (new_mode == MODE_DFP_DP && tcphy->mode != MODE_DISCONNECT) { 788 762 tcphy_phy_deinit(tcphy); 789 - tcphy_phy_init(tcphy, new_mode); 763 + ret = tcphy_phy_init(tcphy, new_mode); 790 764 } else if (tcphy->mode == MODE_DISCONNECT) { 791 - tcphy_phy_init(tcphy, new_mode); 765 + ret = 
tcphy_phy_init(tcphy, new_mode); 792 766 } 767 + if (ret) 768 + goto unlock_ret; 793 769 794 770 ret = readx_poll_timeout(readl, tcphy->base + DP_MODE_CTL, 795 771 val, val & DP_MODE_A2, 1000,
+2
drivers/phy/tegra/xusb.c
··· 454 454 char *name; 455 455 456 456 name = kasprintf(GFP_KERNEL, "%s-%u", type, index); 457 + if (!name) 458 + return ERR_PTR(-ENOMEM); 457 459 np = of_find_node_by_name(np, name); 458 460 kfree(name); 459 461 }
+9 -1
drivers/pinctrl/pinctrl-amd.c
··· 534 534 continue; 535 535 irq = irq_find_mapping(gc->irqdomain, irqnr + i); 536 536 generic_handle_irq(irq); 537 - /* Clear interrupt */ 537 + 538 + /* Clear interrupt. 539 + * We must read the pin register again, in case the 540 + * value was changed while executing 541 + * generic_handle_irq() above. 542 + */ 543 + raw_spin_lock_irqsave(&gpio_dev->lock, flags); 544 + regval = readl(regs + i); 538 545 writel(regval, regs + i); 546 + raw_spin_unlock_irqrestore(&gpio_dev->lock, flags); 539 547 ret = IRQ_HANDLED; 540 548 } 541 549 }
+3 -3
drivers/pinctrl/pinctrl-mcp23s08.c
··· 407 407 ret = mcp_read(mcp, MCP_GPIO, &status); 408 408 if (ret < 0) 409 409 status = 0; 410 - else 410 + else { 411 + mcp->cached_gpio = status; 411 412 status = !!(status & (1 << offset)); 412 - 413 - mcp->cached_gpio = status; 413 + } 414 414 415 415 mutex_unlock(&mcp->lock); 416 416 return status;
+50 -83
drivers/platform/x86/intel_pmc_ipc.c
··· 33 33 #include <linux/suspend.h> 34 34 #include <linux/acpi.h> 35 35 #include <linux/io-64-nonatomic-lo-hi.h> 36 + #include <linux/spinlock.h> 36 37 37 38 #include <asm/intel_pmc_ipc.h> 38 39 ··· 132 131 /* gcr */ 133 132 void __iomem *gcr_mem_base; 134 133 bool has_gcr_regs; 134 + spinlock_t gcr_lock; 135 135 136 136 /* punit */ 137 137 struct platform_device *punit_dev; ··· 227 225 { 228 226 int ret; 229 227 230 - mutex_lock(&ipclock); 228 + spin_lock(&ipcdev.gcr_lock); 231 229 232 230 ret = is_gcr_valid(offset); 233 231 if (ret < 0) { 234 - mutex_unlock(&ipclock); 232 + spin_unlock(&ipcdev.gcr_lock); 235 233 return ret; 236 234 } 237 235 238 236 *data = readl(ipcdev.gcr_mem_base + offset); 239 237 240 - mutex_unlock(&ipclock); 238 + spin_unlock(&ipcdev.gcr_lock); 241 239 242 240 return 0; 243 241 } ··· 257 255 { 258 256 int ret; 259 257 260 - mutex_lock(&ipclock); 258 + spin_lock(&ipcdev.gcr_lock); 261 259 262 260 ret = is_gcr_valid(offset); 263 261 if (ret < 0) { 264 - mutex_unlock(&ipclock); 262 + spin_unlock(&ipcdev.gcr_lock); 265 263 return ret; 266 264 } 267 265 268 266 writel(data, ipcdev.gcr_mem_base + offset); 269 267 270 - mutex_unlock(&ipclock); 268 + spin_unlock(&ipcdev.gcr_lock); 271 269 272 270 return 0; 273 271 } ··· 289 287 u32 new_val; 290 288 int ret = 0; 291 289 292 - mutex_lock(&ipclock); 290 + spin_lock(&ipcdev.gcr_lock); 293 291 294 292 ret = is_gcr_valid(offset); 295 293 if (ret < 0) ··· 311 309 } 312 310 313 311 gcr_ipc_unlock: 314 - mutex_unlock(&ipclock); 312 + spin_unlock(&ipcdev.gcr_lock); 315 313 return ret; 316 314 } 317 315 EXPORT_SYMBOL_GPL(intel_pmc_gcr_update); ··· 482 480 483 481 static int ipc_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) 484 482 { 485 - resource_size_t pci_resource; 483 + struct intel_pmc_ipc_dev *pmc = &ipcdev; 486 484 int ret; 487 - int len; 488 485 489 - ipcdev.dev = &pci_dev_get(pdev)->dev; 490 - ipcdev.irq_mode = IPC_TRIGGER_MODE_IRQ; 491 - 492 - ret = pci_enable_device(pdev); 493 - 
if (ret) 494 - return ret; 495 - 496 - ret = pci_request_regions(pdev, "intel_pmc_ipc"); 497 - if (ret) 498 - return ret; 499 - 500 - pci_resource = pci_resource_start(pdev, 0); 501 - len = pci_resource_len(pdev, 0); 502 - if (!pci_resource || !len) { 503 - dev_err(&pdev->dev, "Failed to get resource\n"); 504 - return -ENOMEM; 505 - } 506 - 507 - init_completion(&ipcdev.cmd_complete); 508 - 509 - if (request_irq(pdev->irq, ioc, 0, "intel_pmc_ipc", &ipcdev)) { 510 - dev_err(&pdev->dev, "Failed to request irq\n"); 486 + /* Only one PMC is supported */ 487 + if (pmc->dev) 511 488 return -EBUSY; 489 + 490 + pmc->irq_mode = IPC_TRIGGER_MODE_IRQ; 491 + 492 + spin_lock_init(&ipcdev.gcr_lock); 493 + 494 + ret = pcim_enable_device(pdev); 495 + if (ret) 496 + return ret; 497 + 498 + ret = pcim_iomap_regions(pdev, 1 << 0, pci_name(pdev)); 499 + if (ret) 500 + return ret; 501 + 502 + init_completion(&pmc->cmd_complete); 503 + 504 + pmc->ipc_base = pcim_iomap_table(pdev)[0]; 505 + 506 + ret = devm_request_irq(&pdev->dev, pdev->irq, ioc, 0, "intel_pmc_ipc", 507 + pmc); 508 + if (ret) { 509 + dev_err(&pdev->dev, "Failed to request irq\n"); 510 + return ret; 512 511 } 513 512 514 - ipcdev.ipc_base = ioremap_nocache(pci_resource, len); 515 - if (!ipcdev.ipc_base) { 516 - dev_err(&pdev->dev, "Failed to ioremap ipc base\n"); 517 - free_irq(pdev->irq, &ipcdev); 518 - ret = -ENOMEM; 519 - } 513 + pmc->dev = &pdev->dev; 520 514 521 - return ret; 522 - } 515 + pci_set_drvdata(pdev, pmc); 523 516 524 - static void ipc_pci_remove(struct pci_dev *pdev) 525 - { 526 - free_irq(pdev->irq, &ipcdev); 527 - pci_release_regions(pdev); 528 - pci_dev_put(pdev); 529 - iounmap(ipcdev.ipc_base); 530 - ipcdev.dev = NULL; 517 + return 0; 531 518 } 532 519 533 520 static const struct pci_device_id ipc_pci_ids[] = { ··· 531 540 .name = "intel_pmc_ipc", 532 541 .id_table = ipc_pci_ids, 533 542 .probe = ipc_pci_probe, 534 - .remove = ipc_pci_remove, 535 543 }; 536 544 537 545 static ssize_t 
intel_pmc_ipc_simple_cmd_store(struct device *dev, ··· 840 850 return -ENXIO; 841 851 } 842 852 size = PLAT_RESOURCE_IPC_SIZE + PLAT_RESOURCE_GCR_SIZE; 853 + res->end = res->start + size - 1; 843 854 844 - if (!request_mem_region(res->start, size, pdev->name)) { 845 - dev_err(&pdev->dev, "Failed to request ipc resource\n"); 846 - return -EBUSY; 847 - } 848 - addr = ioremap_nocache(res->start, size); 849 - if (!addr) { 850 - dev_err(&pdev->dev, "I/O memory remapping failed\n"); 851 - release_mem_region(res->start, size); 852 - return -ENOMEM; 853 - } 855 + addr = devm_ioremap_resource(&pdev->dev, res); 856 + if (IS_ERR(addr)) 857 + return PTR_ERR(addr); 858 + 854 859 ipcdev.ipc_base = addr; 855 860 856 861 ipcdev.gcr_mem_base = addr + PLAT_RESOURCE_GCR_OFFSET; ··· 902 917 903 918 static int ipc_plat_probe(struct platform_device *pdev) 904 919 { 905 - struct resource *res; 906 920 int ret; 907 921 908 922 ipcdev.dev = &pdev->dev; 909 923 ipcdev.irq_mode = IPC_TRIGGER_MODE_IRQ; 910 924 init_completion(&ipcdev.cmd_complete); 925 + spin_lock_init(&ipcdev.gcr_lock); 911 926 912 927 ipcdev.irq = platform_get_irq(pdev, 0); 913 928 if (ipcdev.irq < 0) { ··· 924 939 ret = ipc_create_pmc_devices(); 925 940 if (ret) { 926 941 dev_err(&pdev->dev, "Failed to create pmc devices\n"); 927 - goto err_device; 942 + return ret; 928 943 } 929 944 930 - if (request_irq(ipcdev.irq, ioc, IRQF_NO_SUSPEND, 931 - "intel_pmc_ipc", &ipcdev)) { 945 + if (devm_request_irq(&pdev->dev, ipcdev.irq, ioc, IRQF_NO_SUSPEND, 946 + "intel_pmc_ipc", &ipcdev)) { 932 947 dev_err(&pdev->dev, "Failed to request irq\n"); 933 948 ret = -EBUSY; 934 949 goto err_irq; ··· 945 960 946 961 return 0; 947 962 err_sys: 948 - free_irq(ipcdev.irq, &ipcdev); 963 + devm_free_irq(&pdev->dev, ipcdev.irq, &ipcdev); 949 964 err_irq: 950 965 platform_device_unregister(ipcdev.tco_dev); 951 966 platform_device_unregister(ipcdev.punit_dev); 952 967 platform_device_unregister(ipcdev.telemetry_dev); 953 - err_device: 954 - 
iounmap(ipcdev.ipc_base); 955 - res = platform_get_resource(pdev, IORESOURCE_MEM, 956 - PLAT_RESOURCE_IPC_INDEX); 957 - if (res) { 958 - release_mem_region(res->start, 959 - PLAT_RESOURCE_IPC_SIZE + 960 - PLAT_RESOURCE_GCR_SIZE); 961 - } 968 + 962 969 return ret; 963 970 } 964 971 965 972 static int ipc_plat_remove(struct platform_device *pdev) 966 973 { 967 - struct resource *res; 968 - 969 974 sysfs_remove_group(&pdev->dev.kobj, &intel_ipc_group); 970 - free_irq(ipcdev.irq, &ipcdev); 975 + devm_free_irq(&pdev->dev, ipcdev.irq, &ipcdev); 971 976 platform_device_unregister(ipcdev.tco_dev); 972 977 platform_device_unregister(ipcdev.punit_dev); 973 978 platform_device_unregister(ipcdev.telemetry_dev); 974 - iounmap(ipcdev.ipc_base); 975 - res = platform_get_resource(pdev, IORESOURCE_MEM, 976 - PLAT_RESOURCE_IPC_INDEX); 977 - if (res) { 978 - release_mem_region(res->start, 979 - PLAT_RESOURCE_IPC_SIZE + 980 - PLAT_RESOURCE_GCR_SIZE); 981 - } 982 979 ipcdev.dev = NULL; 983 980 return 0; 984 981 }
+1 -1
drivers/regulator/axp20x-regulator.c
··· 590 590 case AXP803_DCDC3: 591 591 return !!(reg & BIT(6)); 592 592 case AXP803_DCDC6: 593 - return !!(reg & BIT(7)); 593 + return !!(reg & BIT(5)); 594 594 } 595 595 break; 596 596
+1 -1
drivers/regulator/rn5t618-regulator.c
··· 29 29 }; 30 30 31 31 #define REG(rid, ereg, emask, vreg, vmask, min, max, step) \ 32 - [RN5T618_##rid] = { \ 32 + { \ 33 33 .name = #rid, \ 34 34 .of_match = of_match_ptr(#rid), \ 35 35 .regulators_node = of_match_ptr("regulators"), \
+5
drivers/s390/scsi/zfcp_aux.c
··· 357 357 358 358 adapter->next_port_scan = jiffies; 359 359 360 + adapter->erp_action.adapter = adapter; 361 + 360 362 if (zfcp_qdio_setup(adapter)) 361 363 goto failed; 362 364 ··· 514 512 port->dev.parent = &adapter->ccw_device->dev; 515 513 port->dev.groups = zfcp_port_attr_groups; 516 514 port->dev.release = zfcp_port_release; 515 + 516 + port->erp_action.adapter = adapter; 517 + port->erp_action.port = port; 517 518 518 519 if (dev_set_name(&port->dev, "0x%016llx", (unsigned long long)wwpn)) { 519 520 kfree(port);
+11 -7
drivers/s390/scsi/zfcp_erp.c
··· 193 193 atomic_or(ZFCP_STATUS_COMMON_ERP_INUSE, 194 194 &zfcp_sdev->status); 195 195 erp_action = &zfcp_sdev->erp_action; 196 - memset(erp_action, 0, sizeof(struct zfcp_erp_action)); 197 - erp_action->port = port; 198 - erp_action->sdev = sdev; 196 + WARN_ON_ONCE(erp_action->port != port); 197 + WARN_ON_ONCE(erp_action->sdev != sdev); 199 198 if (!(atomic_read(&zfcp_sdev->status) & 200 199 ZFCP_STATUS_COMMON_RUNNING)) 201 200 act_status |= ZFCP_STATUS_ERP_CLOSE_ONLY; ··· 207 208 zfcp_erp_action_dismiss_port(port); 208 209 atomic_or(ZFCP_STATUS_COMMON_ERP_INUSE, &port->status); 209 210 erp_action = &port->erp_action; 210 - memset(erp_action, 0, sizeof(struct zfcp_erp_action)); 211 - erp_action->port = port; 211 + WARN_ON_ONCE(erp_action->port != port); 212 + WARN_ON_ONCE(erp_action->sdev != NULL); 212 213 if (!(atomic_read(&port->status) & ZFCP_STATUS_COMMON_RUNNING)) 213 214 act_status |= ZFCP_STATUS_ERP_CLOSE_ONLY; 214 215 break; ··· 218 219 zfcp_erp_action_dismiss_adapter(adapter); 219 220 atomic_or(ZFCP_STATUS_COMMON_ERP_INUSE, &adapter->status); 220 221 erp_action = &adapter->erp_action; 221 - memset(erp_action, 0, sizeof(struct zfcp_erp_action)); 222 + WARN_ON_ONCE(erp_action->port != NULL); 223 + WARN_ON_ONCE(erp_action->sdev != NULL); 222 224 if (!(atomic_read(&adapter->status) & 223 225 ZFCP_STATUS_COMMON_RUNNING)) 224 226 act_status |= ZFCP_STATUS_ERP_CLOSE_ONLY; ··· 229 229 return NULL; 230 230 } 231 231 232 - erp_action->adapter = adapter; 232 + WARN_ON_ONCE(erp_action->adapter != adapter); 233 + memset(&erp_action->list, 0, sizeof(erp_action->list)); 234 + memset(&erp_action->timer, 0, sizeof(erp_action->timer)); 235 + erp_action->step = ZFCP_ERP_STEP_UNINITIALIZED; 236 + erp_action->fsf_req_id = 0; 233 237 erp_action->action = need; 234 238 erp_action->status = act_status; 235 239
+5
drivers/s390/scsi/zfcp_scsi.c
··· 115 115 struct zfcp_unit *unit; 116 116 int npiv = adapter->connection_features & FSF_FEATURE_NPIV_MODE; 117 117 118 + zfcp_sdev->erp_action.adapter = adapter; 119 + zfcp_sdev->erp_action.sdev = sdev; 120 + 118 121 port = zfcp_get_port_by_wwpn(adapter, rport->port_name); 119 122 if (!port) 120 123 return -ENXIO; 124 + 125 + zfcp_sdev->erp_action.port = port; 121 126 122 127 unit = zfcp_unit_find(port, zfcp_scsi_dev_lun(sdev)); 123 128 if (unit)
+5 -3
drivers/scsi/aacraid/comminit.c
··· 302 302 return -ENOMEM; 303 303 aac_fib_init(fibctx); 304 304 305 - mutex_lock(&dev->ioctl_mutex); 306 - dev->adapter_shutdown = 1; 307 - mutex_unlock(&dev->ioctl_mutex); 305 + if (!dev->adapter_shutdown) { 306 + mutex_lock(&dev->ioctl_mutex); 307 + dev->adapter_shutdown = 1; 308 + mutex_unlock(&dev->ioctl_mutex); 309 + } 308 310 309 311 cmd = (struct aac_close *) fib_data(fibctx); 310 312 cmd->command = cpu_to_le32(VM_CloseAll);
+6 -1
drivers/scsi/aacraid/linit.c
··· 1551 1551 { 1552 1552 int i; 1553 1553 1554 + mutex_lock(&aac->ioctl_mutex); 1554 1555 aac->adapter_shutdown = 1; 1555 - aac_send_shutdown(aac); 1556 + mutex_unlock(&aac->ioctl_mutex); 1556 1557 1557 1558 if (aac->aif_thread) { 1558 1559 int i; ··· 1566 1565 } 1567 1566 kthread_stop(aac->thread); 1568 1567 } 1568 + 1569 + aac_send_shutdown(aac); 1570 + 1569 1571 aac_adapter_disable_int(aac); 1572 + 1570 1573 if (aac_is_src(aac)) { 1571 1574 if (aac->max_msix > 1) { 1572 1575 for (i = 0; i < aac->max_msix; i++) {
+1 -1
drivers/scsi/hpsa.c
··· 4091 4091 memset(id_ctlr, 0, sizeof(*id_ctlr)); 4092 4092 rc = hpsa_bmic_id_controller(h, id_ctlr, sizeof(*id_ctlr)); 4093 4093 if (!rc) 4094 - if (id_ctlr->configured_logical_drive_count < 256) 4094 + if (id_ctlr->configured_logical_drive_count < 255) 4095 4095 *nlocals = id_ctlr->configured_logical_drive_count; 4096 4096 else 4097 4097 *nlocals = le16_to_cpu(
+2 -2
drivers/scsi/qla2xxx/qla_os.c
··· 3061 3061 host->max_cmd_len, host->max_channel, host->max_lun, 3062 3062 host->transportt, sht->vendor_id); 3063 3063 3064 + INIT_WORK(&base_vha->iocb_work, qla2x00_iocb_work_fn); 3065 + 3064 3066 /* Set up the irqs */ 3065 3067 ret = qla2x00_request_irqs(ha, rsp); 3066 3068 if (ret) ··· 3176 3174 "can_queue=%d, req=%p, mgmt_svr_loop_id=%d, sg_tablesize=%d.\n", 3177 3175 host->can_queue, base_vha->req, 3178 3176 base_vha->mgmt_svr_loop_id, host->sg_tablesize); 3179 - 3180 - INIT_WORK(&base_vha->iocb_work, qla2x00_iocb_work_fn); 3181 3177 3182 3178 if (ha->mqenable) { 3183 3179 bool mq = false;
+1 -7
drivers/scsi/scsi_lib.c
··· 1379 1379 1380 1380 ret = scsi_setup_cmnd(sdev, req); 1381 1381 out: 1382 - if (ret != BLKPREP_OK) 1383 - cmd->flags &= ~SCMD_INITIALIZED; 1384 1382 return scsi_prep_return(q, req, ret); 1385 1383 } 1386 1384 ··· 1898 1900 struct scsi_device *sdev = req->q->queuedata; 1899 1901 struct Scsi_Host *shost = sdev->host; 1900 1902 struct scatterlist *sg; 1901 - int ret; 1902 1903 1903 1904 scsi_init_command(sdev, cmd); 1904 1905 ··· 1931 1934 1932 1935 blk_mq_start_request(req); 1933 1936 1934 - ret = scsi_setup_cmnd(sdev, req); 1935 - if (ret != BLK_STS_OK) 1936 - cmd->flags &= ~SCMD_INITIALIZED; 1937 - return ret; 1937 + return scsi_setup_cmnd(sdev, req); 1938 1938 } 1939 1939 1940 1940 static void scsi_mq_done(struct scsi_cmnd *cmd)
+1 -1
drivers/scsi/sg.c
··· 837 837 838 838 val = 0; 839 839 list_for_each_entry(srp, &sfp->rq_list, entry) { 840 - if (val > SG_MAX_QUEUE) 840 + if (val >= SG_MAX_QUEUE) 841 841 break; 842 842 rinfo[val].req_state = srp->done + 1; 843 843 rinfo[val].problem =
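The sg.c hunk above is a classic off-by-one: with `>`, the index is allowed to reach `SG_MAX_QUEUE`, one slot past the last valid `rinfo[]` entry, before the loop breaks. A minimal userspace sketch of the bug class (the `MAX_QUEUE` constant and counting helpers are stand-ins, not the driver's code):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_QUEUE 16 /* stand-in for SG_MAX_QUEUE */

/* Count slots written into an array of MAX_QUEUE entries when the
 * guard uses the pre-patch comparison: val == MAX_QUEUE slips through
 * and one element past the end gets written. */
static size_t slots_written_broken(size_t nitems)
{
	size_t val = 0, written = 0;

	for (size_t i = 0; i < nitems; i++) {
		if (val > MAX_QUEUE) /* wrong: permits val == MAX_QUEUE */
			break;
		written++; /* models the rinfo[val] store */
		val++;
	}
	return written;
}

/* Same loop with the patched guard: stops at the last valid index. */
static size_t slots_written_fixed(size_t nitems)
{
	size_t val = 0, written = 0;

	for (size_t i = 0; i < nitems; i++) {
		if (val >= MAX_QUEUE) /* correct */
			break;
		written++;
		val++;
	}
	return written;
}
```

With enough pending requests, the broken guard writes `MAX_QUEUE + 1` slots, which is exactly the out-of-bounds store the one-character change prevents.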
+48 -97
drivers/spi/spi-armada-3700.c
··· 99 99 /* A3700_SPI_IF_TIME_REG */ 100 100 #define A3700_SPI_CLK_CAPT_EDGE BIT(7) 101 101 102 - /* Flags and macros for struct a3700_spi */ 103 - #define A3700_INSTR_CNT 1 104 - #define A3700_ADDR_CNT 3 105 - #define A3700_DUMMY_CNT 1 106 - 107 102 struct a3700_spi { 108 103 struct spi_master *master; 109 104 void __iomem *base; ··· 112 117 u8 byte_len; 113 118 u32 wait_mask; 114 119 struct completion done; 115 - u32 addr_cnt; 116 - u32 instr_cnt; 117 - size_t hdr_cnt; 118 120 }; 119 121 120 122 static u32 spireg_read(struct a3700_spi *a3700_spi, u32 offset) ··· 153 161 } 154 162 155 163 static int a3700_spi_pin_mode_set(struct a3700_spi *a3700_spi, 156 - unsigned int pin_mode) 164 + unsigned int pin_mode, bool receiving) 157 165 { 158 166 u32 val; 159 167 ··· 169 177 break; 170 178 case SPI_NBITS_QUAD: 171 179 val |= A3700_SPI_DATA_PIN1; 180 + /* RX during address reception uses 4-pin */ 181 + if (receiving) 182 + val |= A3700_SPI_ADDR_PIN; 172 183 break; 173 184 default: 174 185 dev_err(&a3700_spi->master->dev, "wrong pin mode %u", pin_mode); ··· 387 392 388 393 spireg_write(a3700_spi, A3700_SPI_INT_MASK_REG, 0); 389 394 390 - return true; 395 + /* Timeout was reached */ 396 + return false; 391 397 } 392 398 393 399 static bool a3700_spi_transfer_wait(struct spi_device *spi, ··· 442 446 443 447 static void a3700_spi_header_set(struct a3700_spi *a3700_spi) 444 448 { 445 - u32 instr_cnt = 0, addr_cnt = 0, dummy_cnt = 0; 449 + unsigned int addr_cnt; 446 450 u32 val = 0; 447 451 448 452 /* Clear the header registers */ 449 453 spireg_write(a3700_spi, A3700_SPI_IF_INST_REG, 0); 450 454 spireg_write(a3700_spi, A3700_SPI_IF_ADDR_REG, 0); 451 455 spireg_write(a3700_spi, A3700_SPI_IF_RMODE_REG, 0); 456 + spireg_write(a3700_spi, A3700_SPI_IF_HDR_CNT_REG, 0); 452 457 453 458 /* Set header counters */ 454 459 if (a3700_spi->tx_buf) { 455 - if (a3700_spi->buf_len <= a3700_spi->instr_cnt) { 456 - instr_cnt = a3700_spi->buf_len; 457 - } else if (a3700_spi->buf_len <= 
(a3700_spi->instr_cnt + 458 - a3700_spi->addr_cnt)) { 459 - instr_cnt = a3700_spi->instr_cnt; 460 - addr_cnt = a3700_spi->buf_len - instr_cnt; 461 - } else if (a3700_spi->buf_len <= a3700_spi->hdr_cnt) { 462 - instr_cnt = a3700_spi->instr_cnt; 463 - addr_cnt = a3700_spi->addr_cnt; 464 - /* Need to handle the normal write case with 1 byte 465 - * data 466 - */ 467 - if (!a3700_spi->tx_buf[instr_cnt + addr_cnt]) 468 - dummy_cnt = a3700_spi->buf_len - instr_cnt - 469 - addr_cnt; 460 + /* 461 + * when tx data is not 4 bytes aligned, there will be unexpected 462 + * bytes out of SPI output register, since it always shifts out 463 + * as whole 4 bytes. This might cause incorrect transaction with 464 + * some devices. To avoid that, use SPI header count feature to 465 + * transfer up to 3 bytes of data first, and then make the rest 466 + * of data 4-byte aligned. 467 + */ 468 + addr_cnt = a3700_spi->buf_len % 4; 469 + if (addr_cnt) { 470 + val = (addr_cnt & A3700_SPI_ADDR_CNT_MASK) 471 + << A3700_SPI_ADDR_CNT_BIT; 472 + spireg_write(a3700_spi, A3700_SPI_IF_HDR_CNT_REG, val); 473 + 474 + /* Update the buffer length to be transferred */ 475 + a3700_spi->buf_len -= addr_cnt; 476 + 477 + /* transfer 1~3 bytes through address count */ 478 + val = 0; 479 + while (addr_cnt--) { 480 + val = (val << 8) | a3700_spi->tx_buf[0]; 481 + a3700_spi->tx_buf++; 482 + } 483 + spireg_write(a3700_spi, A3700_SPI_IF_ADDR_REG, val); 470 484 } 471 - val |= ((instr_cnt & A3700_SPI_INSTR_CNT_MASK) 472 - << A3700_SPI_INSTR_CNT_BIT); 473 - val |= ((addr_cnt & A3700_SPI_ADDR_CNT_MASK) 474 - << A3700_SPI_ADDR_CNT_BIT); 475 - val |= ((dummy_cnt & A3700_SPI_DUMMY_CNT_MASK) 476 - << A3700_SPI_DUMMY_CNT_BIT); 477 485 } 478 - spireg_write(a3700_spi, A3700_SPI_IF_HDR_CNT_REG, val); 479 - 480 - /* Update the buffer length to be transferred */ 481 - a3700_spi->buf_len -= (instr_cnt + addr_cnt + dummy_cnt); 482 - 483 - /* Set Instruction */ 484 - val = 0; 485 - while (instr_cnt--) { 486 - val = (val << 8) | 
a3700_spi->tx_buf[0]; 487 - a3700_spi->tx_buf++; 488 - } 489 - spireg_write(a3700_spi, A3700_SPI_IF_INST_REG, val); 490 - 491 - /* Set Address */ 492 - val = 0; 493 - while (addr_cnt--) { 494 - val = (val << 8) | a3700_spi->tx_buf[0]; 495 - a3700_spi->tx_buf++; 496 - } 497 - spireg_write(a3700_spi, A3700_SPI_IF_ADDR_REG, val); 498 486 } 499 487 500 488 static int a3700_is_wfifo_full(struct a3700_spi *a3700_spi) ··· 492 512 static int a3700_spi_fifo_write(struct a3700_spi *a3700_spi) 493 513 { 494 514 u32 val; 495 - int i = 0; 496 515 497 516 while (!a3700_is_wfifo_full(a3700_spi) && a3700_spi->buf_len) { 498 - val = 0; 499 - if (a3700_spi->buf_len >= 4) { 500 - val = cpu_to_le32(*(u32 *)a3700_spi->tx_buf); 501 - spireg_write(a3700_spi, A3700_SPI_DATA_OUT_REG, val); 502 - 503 - a3700_spi->buf_len -= 4; 504 - a3700_spi->tx_buf += 4; 505 - } else { 506 - /* 507 - * If the remained buffer length is less than 4-bytes, 508 - * we should pad the write buffer with all ones. So that 509 - * it avoids overwrite the unexpected bytes following 510 - * the last one. 511 - */ 512 - val = GENMASK(31, 0); 513 - while (a3700_spi->buf_len) { 514 - val &= ~(0xff << (8 * i)); 515 - val |= *a3700_spi->tx_buf++ << (8 * i); 516 - i++; 517 - a3700_spi->buf_len--; 518 - 519 - spireg_write(a3700_spi, A3700_SPI_DATA_OUT_REG, 520 - val); 521 - } 522 - break; 523 - } 517 + val = cpu_to_le32(*(u32 *)a3700_spi->tx_buf); 518 + spireg_write(a3700_spi, A3700_SPI_DATA_OUT_REG, val); 519 + a3700_spi->buf_len -= 4; 520 + a3700_spi->tx_buf += 4; 524 521 } 525 522 526 523 return 0; ··· 602 645 a3700_spi->rx_buf = xfer->rx_buf; 603 646 a3700_spi->buf_len = xfer->len; 604 647 605 - /* SPI transfer headers */ 606 - a3700_spi_header_set(a3700_spi); 607 - 608 648 if (xfer->tx_buf) 609 649 nbits = xfer->tx_nbits; 610 650 else if (xfer->rx_buf) 611 651 nbits = xfer->rx_nbits; 612 652 613 - a3700_spi_pin_mode_set(a3700_spi, nbits); 653 + a3700_spi_pin_mode_set(a3700_spi, nbits, xfer->rx_buf ? 
true : false); 654 + 655 + /* Flush the FIFOs */ 656 + a3700_spi_fifo_flush(a3700_spi); 657 + 658 + /* Transfer first bytes of data when buffer is not 4-byte aligned */ 659 + a3700_spi_header_set(a3700_spi); 614 660 615 661 if (xfer->rx_buf) { 616 662 /* Set read data length */ ··· 693 733 dev_err(&spi->dev, "wait wfifo empty timed out\n"); 694 734 return -ETIMEDOUT; 695 735 } 696 - } else { 697 - /* 698 - * If the instruction in SPI_INSTR does not require data 699 - * to be written to the SPI device, wait until SPI_RDY 700 - * is 1 for the SPI interface to be in idle. 701 - */ 702 - if (!a3700_spi_transfer_wait(spi, A3700_SPI_XFER_RDY)) { 703 - dev_err(&spi->dev, "wait xfer ready timed out\n"); 704 - return -ETIMEDOUT; 705 - } 736 + } 737 + 738 + if (!a3700_spi_transfer_wait(spi, A3700_SPI_XFER_RDY)) { 739 + dev_err(&spi->dev, "wait xfer ready timed out\n"); 740 + return -ETIMEDOUT; 706 741 } 707 742 708 743 val = spireg_read(a3700_spi, A3700_SPI_IF_CFG_REG); ··· 789 834 memset(spi, 0, sizeof(struct a3700_spi)); 790 835 791 836 spi->master = master; 792 - spi->instr_cnt = A3700_INSTR_CNT; 793 - spi->addr_cnt = A3700_ADDR_CNT; 794 - spi->hdr_cnt = A3700_INSTR_CNT + A3700_ADDR_CNT + 795 - A3700_DUMMY_CNT; 796 837 797 838 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 798 839 spi->base = devm_ioremap_resource(dev, res);
+5 -4
drivers/spi/spi-bcm-qspi.c
··· 1250 1250 goto qspi_probe_err; 1251 1251 } 1252 1252 } else { 1253 - goto qspi_probe_err; 1253 + goto qspi_resource_err; 1254 1254 } 1255 1255 1256 1256 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "bspi"); ··· 1272 1272 qspi->base[CHIP_SELECT] = devm_ioremap_resource(dev, res); 1273 1273 if (IS_ERR(qspi->base[CHIP_SELECT])) { 1274 1274 ret = PTR_ERR(qspi->base[CHIP_SELECT]); 1275 - goto qspi_probe_err; 1275 + goto qspi_resource_err; 1276 1276 } 1277 1277 } 1278 1278 ··· 1280 1280 GFP_KERNEL); 1281 1281 if (!qspi->dev_ids) { 1282 1282 ret = -ENOMEM; 1283 - goto qspi_probe_err; 1283 + goto qspi_resource_err; 1284 1284 } 1285 1285 1286 1286 for (val = 0; val < num_irqs; val++) { ··· 1369 1369 bcm_qspi_hw_uninit(qspi); 1370 1370 clk_disable_unprepare(qspi->clk); 1371 1371 qspi_probe_err: 1372 - spi_master_put(master); 1373 1372 kfree(qspi->dev_ids); 1373 + qspi_resource_err: 1374 + spi_master_put(master); 1374 1375 return ret; 1375 1376 } 1376 1377 /* probe function to be called by SoC specific platform driver probe */
+2 -2
drivers/spi/spi-stm32.c
··· 263 263 * no need to check it there. 264 264 * However, we need to ensure the following calculations. 265 265 */ 266 - if ((div < SPI_MBR_DIV_MIN) && 267 - (div > SPI_MBR_DIV_MAX)) 266 + if (div < SPI_MBR_DIV_MIN || 267 + div > SPI_MBR_DIV_MAX) 268 268 return -EINVAL; 269 269 270 270 /* Determine the first power of 2 greater than or equal to div */
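The spi-stm32 hunk swaps `&&` for `||` in a range check: no value can be simultaneously below the minimum and above the maximum, so the pre-patch condition was always false and never returned `-EINVAL`. A tiny sketch of the predicate (the `DIV_MIN`/`DIV_MAX` values are arbitrary stand-ins for the driver's constants):

```c
#include <assert.h>

#define DIV_MIN 2   /* stand-in for SPI_MBR_DIV_MIN */
#define DIV_MAX 256 /* stand-in for SPI_MBR_DIV_MAX */

/* Pre-patch check: (x < min) && (x > max) is a contradiction,
 * so out-of-range divisors were silently accepted. */
static int rejects_broken(int div)
{
	return (div < DIV_MIN) && (div > DIV_MAX);
}

/* Patched check: reject when the divisor falls outside either bound. */
static int rejects_fixed(int div)
{
	return div < DIV_MIN || div > DIV_MAX;
}
```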
+9 -4
drivers/spi/spi.c
··· 45 45 46 46 #define CREATE_TRACE_POINTS 47 47 #include <trace/events/spi.h> 48 - #define SPI_DYN_FIRST_BUS_NUM 0 49 48 50 49 static DEFINE_IDR(spi_master_idr); 51 50 ··· 2085 2086 struct device *dev = ctlr->dev.parent; 2086 2087 struct boardinfo *bi; 2087 2088 int status = -ENODEV; 2088 - int id; 2089 + int id, first_dynamic; 2089 2090 2090 2091 if (!dev) 2091 2092 return -ENODEV; ··· 2115 2116 } 2116 2117 } 2117 2118 if (ctlr->bus_num < 0) { 2119 + first_dynamic = of_alias_get_highest_id("spi"); 2120 + if (first_dynamic < 0) 2121 + first_dynamic = 0; 2122 + else 2123 + first_dynamic++; 2124 + 2118 2125 mutex_lock(&board_lock); 2119 - id = idr_alloc(&spi_master_idr, ctlr, SPI_DYN_FIRST_BUS_NUM, 0, 2120 - GFP_KERNEL); 2126 + id = idr_alloc(&spi_master_idr, ctlr, first_dynamic, 2127 + 0, GFP_KERNEL); 2121 2128 mutex_unlock(&board_lock); 2122 2129 if (WARN(id < 0, "couldn't get idr")) 2123 2130 return id;
+1 -1
drivers/staging/iio/meter/ade7759.c
··· 172 172 reg_address); 173 173 goto error_ret; 174 174 } 175 - *val = ((u64)st->rx[1] << 32) | (st->rx[2] << 24) | 175 + *val = ((u64)st->rx[1] << 32) | ((u64)st->rx[2] << 24) | 176 176 (st->rx[3] << 16) | (st->rx[4] << 8) | st->rx[5]; 177 177 178 178 error_ret:
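The ade7759 hunk adds a `(u64)` cast to the second byte as well: `st->rx[2] << 24` promotes the byte to a signed `int`, so a value with the top bit set lands in the sign bit and then sign-extends when widened into the 64-bit result, smearing ones across the upper bytes. A simplified stand-in (five bytes assembled from `rx[0]`..`rx[4]` rather than the driver's `rx[1]`..`rx[5]`):

```c
#include <assert.h>
#include <stdint.h>

/* Pre-patch style: rx[1] << 24 happens in signed int arithmetic; a
 * byte >= 0x80 produces a negative int that sign-extends when OR'd
 * into the 64-bit accumulator, corrupting bits 63..32. */
static uint64_t assemble_broken(const uint8_t *rx)
{
	return ((uint64_t)rx[0] << 32) | (rx[1] << 24) |
	       (rx[2] << 16) | (rx[3] << 8) | rx[4];
}

/* Patched style: widen before shifting so the high byte stays clean. */
static uint64_t assemble_fixed(const uint8_t *rx)
{
	return ((uint64_t)rx[0] << 32) | ((uint64_t)rx[1] << 24) |
	       (rx[2] << 16) | (rx[3] << 8) | rx[4];
}
```

For input bytes `01 80 00 00 00` the fixed version yields `0x180000000`; the unpatched expression turns the `0x80` into a sign extension and destroys the top word on common compilers.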
+7 -12
drivers/staging/vc04_services/bcm2835-audio/bcm2835-vchiq.c
··· 390 390 __func__, instance); 391 391 instance->alsa_stream = alsa_stream; 392 392 alsa_stream->instance = instance; 393 - ret = 0; // xxx todo -1; 394 - goto err_free_mem; 393 + return 0; 395 394 } 396 395 397 396 /* Initialize and create a VCHI connection */ ··· 400 401 LOG_ERR("%s: failed to initialise VCHI instance (ret=%d)\n", 401 402 __func__, ret); 402 403 403 - ret = -EIO; 404 - goto err_free_mem; 404 + return -EIO; 405 405 } 406 406 ret = vchi_connect(NULL, 0, vchi_instance); 407 407 if (ret) { 408 408 LOG_ERR("%s: failed to connect VCHI instance (ret=%d)\n", 409 409 __func__, ret); 410 410 411 - ret = -EIO; 412 - goto err_free_mem; 411 + kfree(vchi_instance); 412 + return -EIO; 413 413 } 414 414 initted = 1; 415 415 } ··· 419 421 if (IS_ERR(instance)) { 420 422 LOG_ERR("%s: failed to initialize audio service\n", __func__); 421 423 422 - ret = PTR_ERR(instance); 423 - goto err_free_mem; 424 + /* vchi_instance is retained for use the next time. */ 425 + return PTR_ERR(instance); 424 426 } 425 427 426 428 instance->alsa_stream = alsa_stream; 427 429 alsa_stream->instance = instance; 428 430 429 431 LOG_DBG(" success !\n"); 430 - ret = 0; 431 - err_free_mem: 432 - kfree(vchi_instance); 433 432 434 - return ret; 433 + return 0; 435 434 } 436 435 437 436 int bcm2835_audio_open(struct bcm2835_alsa_stream *alsa_stream)
+3
drivers/usb/class/cdc-acm.c
··· 1832 1832 { USB_DEVICE(0xfff0, 0x0100), /* DATECS FP-2000 */ 1833 1833 .driver_info = NO_UNION_NORMAL, /* reports zero length descriptor */ 1834 1834 }, 1835 + { USB_DEVICE(0x09d8, 0x0320), /* Elatec GmbH TWN3 */ 1836 + .driver_info = NO_UNION_NORMAL, /* has misplaced union descriptor */ 1837 + }, 1835 1838 1836 1839 { USB_DEVICE(0x2912, 0x0001), /* ATOL FPrint */ 1837 1840 .driver_info = CLEAR_HALT_CONDITIONS,
+4 -2
drivers/usb/core/config.c
··· 960 960 for (i = 0; i < num; i++) { 961 961 buffer += length; 962 962 cap = (struct usb_dev_cap_header *)buffer; 963 - length = cap->bLength; 964 963 965 - if (total_len < length) 964 + if (total_len < sizeof(*cap) || total_len < cap->bLength) { 965 + dev->bos->desc->bNumDeviceCaps = i; 966 966 break; 967 + } 968 + length = cap->bLength; 967 969 total_len -= length; 968 970 969 971 if (cap->bDescriptorType != USB_DT_DEVICE_CAPABILITY) {
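The usb/core/config.c hunk hardens the BOS capability walk: before consuming a descriptor it now checks both that the remaining buffer can hold a capability header at all and that it can hold the length the header claims, and it truncates `bNumDeviceCaps` to the count actually validated. A hedged userspace sketch of the same walk (the struct and helper are simplified stand-ins for `usb_dev_cap_header` and `usb_get_bos_descriptor()`, not the kernel code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for usb_dev_cap_header: length byte, then type byte. */
struct cap_header {
	uint8_t bLength;
	uint8_t bDescriptorType;
};

/* Walk up to num capability blobs, counting only those that fit
 * entirely inside the remaining buffer, mirroring how the patch
 * clamps bNumDeviceCaps instead of trusting device-supplied lengths. */
static int count_valid_caps(const uint8_t *buf, size_t total_len, int num)
{
	int i;

	for (i = 0; i < num; i++) {
		const struct cap_header *cap = (const void *)buf;

		/* Reject before reading: the buffer must hold the header
		 * itself and the bLength the header claims, otherwise a
		 * malicious device makes us read past the end. */
		if (total_len < sizeof(*cap) || total_len < cap->bLength)
			break;

		total_len -= cap->bLength;
		buf += cap->bLength;
	}
	return i; /* the patch writes this back into bNumDeviceCaps */
}
```

A device advertising two capabilities but supplying only one complete descriptor is truncated to one instead of letting the parser run off the end of the buffer.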
+1 -5
drivers/usb/core/devio.c
··· 1576 1576 totlen += isopkt[u].length; 1577 1577 } 1578 1578 u *= sizeof(struct usb_iso_packet_descriptor); 1579 - if (totlen <= uurb->buffer_length) 1580 - uurb->buffer_length = totlen; 1581 - else 1582 - WARN_ONCE(1, "uurb->buffer_length is too short %d vs %d", 1583 - totlen, uurb->buffer_length); 1579 + uurb->buffer_length = totlen; 1584 1580 break; 1585 1581 1586 1582 default:
+7 -4
drivers/usb/core/hub.c
··· 2710 2710 if (!(portstatus & USB_PORT_STAT_CONNECTION)) 2711 2711 return -ENOTCONN; 2712 2712 2713 - /* bomb out completely if the connection bounced. A USB 3.0 2714 - * connection may bounce if multiple warm resets were issued, 2713 + /* Retry if connect change is set but status is still connected. 2714 + * A USB 3.0 connection may bounce if multiple warm resets were issued, 2715 2715 * but the device may have successfully re-connected. Ignore it. 2716 2716 */ 2717 2717 if (!hub_is_superspeed(hub->hdev) && 2718 - (portchange & USB_PORT_STAT_C_CONNECTION)) 2719 - return -ENOTCONN; 2718 + (portchange & USB_PORT_STAT_C_CONNECTION)) { 2719 + usb_clear_port_feature(hub->hdev, port1, 2720 + USB_PORT_FEAT_C_CONNECTION); 2721 + return -EAGAIN; 2722 + } 2720 2723 2721 2724 if (!(portstatus & USB_PORT_STAT_ENABLE)) 2722 2725 return -EBUSY;
+4
drivers/usb/core/quirks.c
··· 221 221 /* Corsair Strafe RGB */ 222 222 { USB_DEVICE(0x1b1c, 0x1b20), .driver_info = USB_QUIRK_DELAY_INIT }, 223 223 224 + /* MIDI keyboard WORLDE MINI */ 225 + { USB_DEVICE(0x1c75, 0x0204), .driver_info = 226 + USB_QUIRK_CONFIG_INTF_STRINGS }, 227 + 224 228 /* Acer C120 LED Projector */ 225 229 { USB_DEVICE(0x1de1, 0xc102), .driver_info = USB_QUIRK_NO_LPM }, 226 230
+18 -5
drivers/usb/host/xhci-hub.c
··· 420 420 GFP_NOWAIT); 421 421 if (!command) { 422 422 spin_unlock_irqrestore(&xhci->lock, flags); 423 - xhci_free_command(xhci, cmd); 424 - return -ENOMEM; 423 + ret = -ENOMEM; 424 + goto cmd_cleanup; 425 425 } 426 - xhci_queue_stop_endpoint(xhci, command, slot_id, i, 427 - suspend); 426 + 427 + ret = xhci_queue_stop_endpoint(xhci, command, slot_id, 428 + i, suspend); 429 + if (ret) { 430 + spin_unlock_irqrestore(&xhci->lock, flags); 431 + xhci_free_command(xhci, command); 432 + goto cmd_cleanup; 433 + } 428 434 } 429 435 } 430 - xhci_queue_stop_endpoint(xhci, cmd, slot_id, 0, suspend); 436 + ret = xhci_queue_stop_endpoint(xhci, cmd, slot_id, 0, suspend); 437 + if (ret) { 438 + spin_unlock_irqrestore(&xhci->lock, flags); 439 + goto cmd_cleanup; 440 + } 441 + 431 442 xhci_ring_cmd_db(xhci); 432 443 spin_unlock_irqrestore(&xhci->lock, flags); 433 444 ··· 450 439 xhci_warn(xhci, "Timeout while waiting for stop endpoint command\n"); 451 440 ret = -ETIME; 452 441 } 442 + 443 + cmd_cleanup: 453 444 xhci_free_command(xhci, cmd); 454 445 return ret; 455 446 }
+14 -7
drivers/usb/host/xhci-ring.c
··· 1309 1309 void xhci_cleanup_command_queue(struct xhci_hcd *xhci) 1310 1310 { 1311 1311 struct xhci_command *cur_cmd, *tmp_cmd; 1312 + xhci->current_cmd = NULL; 1312 1313 list_for_each_entry_safe(cur_cmd, tmp_cmd, &xhci->cmd_list, cmd_list) 1313 1314 xhci_complete_del_and_free_cmd(cur_cmd, COMP_COMMAND_ABORTED); 1314 1315 } ··· 2580 2579 (struct xhci_generic_trb *) ep_trb); 2581 2580 2582 2581 /* 2583 - * No-op TRB should not trigger interrupts. 2584 - * If ep_trb is a no-op TRB, it means the 2585 - * corresponding TD has been cancelled. Just ignore 2586 - * the TD. 2582 + * No-op TRB could trigger interrupts in a case where 2583 + * a URB was killed and a STALL_ERROR happens right 2584 + * after the endpoint ring stopped. Reset the halted 2585 + * endpoint. Otherwise, the endpoint remains stalled 2586 + * indefinitely. 2587 2587 */ 2588 2588 if (trb_is_noop(ep_trb)) { 2589 - xhci_dbg(xhci, 2590 - "ep_trb is a no-op TRB. Skip it for slot %u ep %u\n", 2591 - slot_id, ep_index); 2589 + if (trb_comp_code == COMP_STALL_ERROR || 2590 + xhci_requires_manual_halt_cleanup(xhci, ep_ctx, 2591 + trb_comp_code)) 2592 + xhci_cleanup_halted_endpoint(xhci, slot_id, 2593 + ep_index, 2594 + ep_ring->stream_id, 2595 + td, ep_trb, 2596 + EP_HARD_RESET); 2592 2597 goto cleanup; 2593 2598 } 2594 2599
+2 -1
drivers/usb/host/xhci.c
··· 4805 4805 */ 4806 4806 hcd->has_tt = 1; 4807 4807 } else { 4808 - if (xhci->sbrn == 0x31) { 4808 + /* Some 3.1 hosts return sbrn 0x30, can't rely on sbrn alone */ 4809 + if (xhci->sbrn == 0x31 || xhci->usb3_rhub.min_rev >= 1) { 4809 4810 xhci_info(xhci, "Host supports USB 3.1 Enhanced SuperSpeed\n"); 4810 4811 hcd->speed = HCD_USB31; 4811 4812 hcd->self.root_hub->speed = USB_SPEED_SUPER_PLUS;
+13 -8
drivers/usb/musb/musb_core.c
··· 906 906 */ 907 907 if (int_usb & MUSB_INTR_RESET) { 908 908 handled = IRQ_HANDLED; 909 - if (devctl & MUSB_DEVCTL_HM) { 909 + if (is_host_active(musb)) { 910 910 /* 911 911 * When BABBLE happens what we can depends on which 912 912 * platform MUSB is running, because some platforms ··· 916 916 * drop the session. 917 917 */ 918 918 dev_err(musb->controller, "Babble\n"); 919 - 920 - if (is_host_active(musb)) 921 - musb_recover_from_babble(musb); 919 + musb_recover_from_babble(musb); 922 920 } else { 923 921 musb_dbg(musb, "BUS RESET as %s", 924 922 usb_otg_state_string(musb->xceiv->otg->state)); ··· 1859 1861 MUSB_DEVCTL_HR; 1860 1862 switch (devctl & ~s) { 1861 1863 case MUSB_QUIRK_B_INVALID_VBUS_91: 1862 - if (musb->quirk_retries--) { 1864 + if (musb->quirk_retries && !musb->flush_irq_work) { 1863 1865 musb_dbg(musb, 1864 1866 "Poll devctl on invalid vbus, assume no session"); 1865 1867 schedule_delayed_work(&musb->irq_work, 1866 1868 msecs_to_jiffies(1000)); 1867 - 1869 + musb->quirk_retries--; 1868 1870 return; 1869 1871 } 1870 1872 /* fall through */ 1871 1873 case MUSB_QUIRK_A_DISCONNECT_19: 1872 - if (musb->quirk_retries--) { 1874 + if (musb->quirk_retries && !musb->flush_irq_work) { 1873 1875 musb_dbg(musb, 1874 1876 "Poll devctl on possible host mode disconnect"); 1875 1877 schedule_delayed_work(&musb->irq_work, 1876 1878 msecs_to_jiffies(1000)); 1877 - 1879 + musb->quirk_retries--; 1878 1880 return; 1879 1881 } 1880 1882 if (!musb->session) ··· 2679 2681 2680 2682 musb_platform_disable(musb); 2681 2683 musb_disable_interrupts(musb); 2684 + 2685 + musb->flush_irq_work = true; 2686 + while (flush_delayed_work(&musb->irq_work)) 2687 + ; 2688 + musb->flush_irq_work = false; 2689 + 2682 2690 if (!(musb->io.quirks & MUSB_PRESERVE_SESSION)) 2683 2691 musb_writeb(musb->mregs, MUSB_DEVCTL, 0); 2692 + 2684 2693 WARN_ON(!list_empty(&musb->pending_list)); 2685 2694 2686 2695 spin_lock_irqsave(&musb->lock, flags);
+2
drivers/usb/musb/musb_core.h
··· 428 428 unsigned test_mode:1; 429 429 unsigned softconnect:1; 430 430 431 + unsigned flush_irq_work:1; 432 + 431 433 u8 address; 432 434 u8 test_mode_nr; 433 435 u16 ackpend; /* ep0 */
+82 -12
drivers/usb/musb/musb_cppi41.c
··· 26 26 27 27 #define MUSB_DMA_NUM_CHANNELS 15 28 28 29 + #define DA8XX_USB_MODE 0x10 30 + #define DA8XX_USB_AUTOREQ 0x14 31 + #define DA8XX_USB_TEARDOWN 0x1c 32 + 33 + #define DA8XX_DMA_NUM_CHANNELS 4 34 + 29 35 struct cppi41_dma_controller { 30 36 struct dma_controller controller; 31 - struct cppi41_dma_channel rx_channel[MUSB_DMA_NUM_CHANNELS]; 32 - struct cppi41_dma_channel tx_channel[MUSB_DMA_NUM_CHANNELS]; 37 + struct cppi41_dma_channel *rx_channel; 38 + struct cppi41_dma_channel *tx_channel; 33 39 struct hrtimer early_tx; 34 40 struct list_head early_tx_list; 35 41 u32 rx_mode; 36 42 u32 tx_mode; 37 43 u32 auto_req; 44 + 45 + u32 tdown_reg; 46 + u32 autoreq_reg; 47 + 48 + void (*set_dma_mode)(struct cppi41_dma_channel *cppi41_channel, 49 + unsigned int mode); 50 + u8 num_channels; 38 51 }; 39 52 40 53 static void save_rx_toggle(struct cppi41_dma_channel *cppi41_channel) ··· 362 349 } 363 350 } 364 351 352 + static void da8xx_set_dma_mode(struct cppi41_dma_channel *cppi41_channel, 353 + unsigned int mode) 354 + { 355 + struct cppi41_dma_controller *controller = cppi41_channel->controller; 356 + struct musb *musb = controller->controller.musb; 357 + unsigned int shift; 358 + u32 port; 359 + u32 new_mode; 360 + u32 old_mode; 361 + 362 + old_mode = controller->tx_mode; 363 + port = cppi41_channel->port_num; 364 + 365 + shift = (port - 1) * 4; 366 + if (!cppi41_channel->is_tx) 367 + shift += 16; 368 + new_mode = old_mode & ~(3 << shift); 369 + new_mode |= mode << shift; 370 + 371 + if (new_mode == old_mode) 372 + return; 373 + controller->tx_mode = new_mode; 374 + musb_writel(musb->ctrl_base, DA8XX_USB_MODE, new_mode); 375 + } 376 + 377 + 365 378 static void cppi41_set_autoreq_mode(struct cppi41_dma_channel *cppi41_channel, 366 379 unsigned mode) 367 380 { ··· 403 364 if (new_mode == old_mode) 404 365 return; 405 366 controller->auto_req = new_mode; 406 - musb_writel(controller->controller.musb->ctrl_base, USB_CTRL_AUTOREQ, 407 - new_mode); 367 + 
musb_writel(controller->controller.musb->ctrl_base, 368 + controller->autoreq_reg, new_mode); 408 369 } 409 370 410 371 static bool cppi41_configure_channel(struct dma_channel *channel, ··· 412 373 dma_addr_t dma_addr, u32 len) 413 374 { 414 375 struct cppi41_dma_channel *cppi41_channel = channel->private_data; 376 + struct cppi41_dma_controller *controller = cppi41_channel->controller; 415 377 struct dma_chan *dc = cppi41_channel->dc; 416 378 struct dma_async_tx_descriptor *dma_desc; 417 379 enum dma_transfer_direction direction; ··· 438 398 musb_writel(musb->ctrl_base, 439 399 RNDIS_REG(cppi41_channel->port_num), len); 440 400 /* gen rndis */ 441 - cppi41_set_dma_mode(cppi41_channel, 401 + controller->set_dma_mode(cppi41_channel, 442 402 EP_MODE_DMA_GEN_RNDIS); 443 403 444 404 /* auto req */ ··· 447 407 } else { 448 408 musb_writel(musb->ctrl_base, 449 409 RNDIS_REG(cppi41_channel->port_num), 0); 450 - cppi41_set_dma_mode(cppi41_channel, 410 + controller->set_dma_mode(cppi41_channel, 451 411 EP_MODE_DMA_TRANSPARENT); 452 412 cppi41_set_autoreq_mode(cppi41_channel, 453 413 EP_MODE_AUTOREQ_NONE); 454 414 } 455 415 } else { 456 416 /* fallback mode */ 457 - cppi41_set_dma_mode(cppi41_channel, EP_MODE_DMA_TRANSPARENT); 417 + controller->set_dma_mode(cppi41_channel, 418 + EP_MODE_DMA_TRANSPARENT); 458 419 cppi41_set_autoreq_mode(cppi41_channel, EP_MODE_AUTOREQ_NONE); 459 420 len = min_t(u32, packet_sz, len); 460 421 } ··· 486 445 struct cppi41_dma_channel *cppi41_channel = NULL; 487 446 u8 ch_num = hw_ep->epnum - 1; 488 447 489 - if (ch_num >= MUSB_DMA_NUM_CHANNELS) 448 + if (ch_num >= controller->num_channels) 490 449 return NULL; 491 450 492 451 if (is_tx) ··· 622 581 623 582 do { 624 583 if (is_tx) 625 - musb_writel(musb->ctrl_base, USB_TDOWN, tdbit); 584 + musb_writel(musb->ctrl_base, controller->tdown_reg, 585 + tdbit); 626 586 ret = dmaengine_terminate_all(cppi41_channel->dc); 627 587 } while (ret == -EAGAIN); 628 588 629 589 if (is_tx) { 630 - 
musb_writel(musb->ctrl_base, USB_TDOWN, tdbit); 590 + musb_writel(musb->ctrl_base, controller->tdown_reg, tdbit); 631 591 632 592 csr = musb_readw(epio, MUSB_TXCSR); 633 593 if (csr & MUSB_TXCSR_TXPKTRDY) { ··· 646 604 struct dma_chan *dc; 647 605 int i; 648 606 649 - for (i = 0; i < MUSB_DMA_NUM_CHANNELS; i++) { 607 + for (i = 0; i < ctrl->num_channels; i++) { 650 608 dc = ctrl->tx_channel[i].dc; 651 609 if (dc) 652 610 dma_release_channel(dc); ··· 698 656 goto err; 699 657 700 658 ret = -EINVAL; 701 - if (port > MUSB_DMA_NUM_CHANNELS || !port) 659 + if (port > controller->num_channels || !port) 702 660 goto err; 703 661 if (is_tx) 704 662 cppi41_channel = &controller->tx_channel[port - 1]; ··· 739 697 740 698 hrtimer_cancel(&controller->early_tx); 741 699 cppi41_dma_controller_stop(controller); 700 + kfree(controller->rx_channel); 701 + kfree(controller->tx_channel); 742 702 kfree(controller); 743 703 } 744 704 EXPORT_SYMBOL_GPL(cppi41_dma_controller_destroy); ··· 749 705 cppi41_dma_controller_create(struct musb *musb, void __iomem *base) 750 706 { 751 707 struct cppi41_dma_controller *controller; 708 + int channel_size; 752 709 int ret = 0; 753 710 754 711 if (!musb->controller->parent->of_node) { ··· 772 727 controller->controller.is_compatible = cppi41_is_compatible; 773 728 controller->controller.musb = musb; 774 729 730 + if (musb->io.quirks & MUSB_DA8XX) { 731 + controller->tdown_reg = DA8XX_USB_TEARDOWN; 732 + controller->autoreq_reg = DA8XX_USB_AUTOREQ; 733 + controller->set_dma_mode = da8xx_set_dma_mode; 734 + controller->num_channels = DA8XX_DMA_NUM_CHANNELS; 735 + } else { 736 + controller->tdown_reg = USB_TDOWN; 737 + controller->autoreq_reg = USB_CTRL_AUTOREQ; 738 + controller->set_dma_mode = cppi41_set_dma_mode; 739 + controller->num_channels = MUSB_DMA_NUM_CHANNELS; 740 + } 741 + 742 + channel_size = controller->num_channels * 743 + sizeof(struct cppi41_dma_channel); 744 + controller->rx_channel = kzalloc(channel_size, GFP_KERNEL); 745 + if 
(!controller->rx_channel) 746 + goto rx_channel_alloc_fail; 747 + controller->tx_channel = kzalloc(channel_size, GFP_KERNEL); 748 + if (!controller->tx_channel) 749 + goto tx_channel_alloc_fail; 750 + 775 751 ret = cppi41_dma_controller_start(controller); 776 752 if (ret) 777 753 goto plat_get_fail; 778 754 return &controller->controller; 779 755 780 756 plat_get_fail: 757 + kfree(controller->tx_channel); 758 + tx_channel_alloc_fail: 759 + kfree(controller->rx_channel); 760 + rx_channel_alloc_fail: 781 761 kfree(controller); 782 762 kzalloc_fail: 783 763 if (ret == -EPROBE_DEFER)
+2
drivers/usb/musb/sunxi.c
··· 297 297 if (test_bit(SUNXI_MUSB_FL_HAS_SRAM, &glue->flags)) 298 298 sunxi_sram_release(musb->controller->parent); 299 299 300 + devm_usb_put_phy(glue->dev, glue->xceiv); 301 + 300 302 return 0; 301 303 } 302 304
+1
drivers/usb/serial/metro-usb.c
··· 45 45 static const struct usb_device_id id_table[] = { 46 46 { USB_DEVICE(FOCUS_VENDOR_ID, FOCUS_PRODUCT_ID_BI) }, 47 47 { USB_DEVICE(FOCUS_VENDOR_ID, FOCUS_PRODUCT_ID_UNI) }, 48 + { USB_DEVICE_INTERFACE_CLASS(0x0c2e, 0x0730, 0xff) }, /* MS7820 */ 48 49 { }, /* Terminating entry. */ 49 50 }; 50 51 MODULE_DEVICE_TABLE(usb, id_table);
+1 -1
drivers/xen/gntdev.c
··· 1024 1024 mutex_unlock(&priv->lock); 1025 1025 1026 1026 if (use_ptemod) { 1027 + map->pages_vm_start = vma->vm_start; 1027 1028 err = apply_to_page_range(vma->vm_mm, vma->vm_start, 1028 1029 vma->vm_end - vma->vm_start, 1029 1030 find_grant_ptes, map); ··· 1062 1061 set_grant_ptes_as_special, NULL); 1063 1062 } 1064 1063 #endif 1065 - map->pages_vm_start = vma->vm_start; 1066 1064 } 1067 1065 1068 1066 return 0;
+13 -6
drivers/xen/xen-balloon.c
··· 57 57 static void watch_target(struct xenbus_watch *watch, 58 58 const char *path, const char *token) 59 59 { 60 - unsigned long long new_target; 60 + unsigned long long new_target, static_max; 61 61 int err; 62 62 static bool watch_fired; 63 63 static long target_diff; ··· 72 72 * pages. PAGE_SHIFT converts bytes to pages, hence PAGE_SHIFT - 10. 73 73 */ 74 74 new_target >>= PAGE_SHIFT - 10; 75 - if (watch_fired) { 76 - balloon_set_new_target(new_target - target_diff); 77 - return; 75 + 76 + if (!watch_fired) { 77 + watch_fired = true; 78 + err = xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu", 79 + &static_max); 80 + if (err != 1) 81 + static_max = new_target; 82 + else 83 + static_max >>= PAGE_SHIFT - 10; 84 + target_diff = xen_pv_domain() ? 0 85 + : static_max - balloon_stats.target_pages; 78 86 } 79 87 80 - watch_fired = true; 81 - target_diff = new_target - balloon_stats.target_pages; 88 + balloon_set_new_target(new_target - target_diff); 82 89 } 83 90 static struct xenbus_watch target_watch = { 84 91 .node = "memory/target",
+4 -1
fs/ceph/caps.c
··· 1991 1991 retry: 1992 1992 spin_lock(&ci->i_ceph_lock); 1993 1993 if (ci->i_ceph_flags & CEPH_I_NOFLUSH) { 1994 + spin_unlock(&ci->i_ceph_lock); 1994 1995 dout("try_flush_caps skipping %p I_NOFLUSH set\n", inode); 1995 1996 goto out; 1996 1997 } ··· 2009 2008 mutex_lock(&session->s_mutex); 2010 2009 goto retry; 2011 2010 } 2012 - if (cap->session->s_state < CEPH_MDS_SESSION_OPEN) 2011 + if (cap->session->s_state < CEPH_MDS_SESSION_OPEN) { 2012 + spin_unlock(&ci->i_ceph_lock); 2013 2013 goto out; 2014 + } 2014 2015 2015 2016 flushing = __mark_caps_flushing(inode, session, true, 2016 2017 &flush_tid, &oldest_flush_tid);
+5
fs/cifs/Kconfig
··· 5 5 select CRYPTO 6 6 select CRYPTO_MD4 7 7 select CRYPTO_MD5 8 + select CRYPTO_SHA256 9 + select CRYPTO_CMAC 8 10 select CRYPTO_HMAC 9 11 select CRYPTO_ARC4 12 + select CRYPTO_AEAD2 13 + select CRYPTO_CCM 10 14 select CRYPTO_ECB 15 + select CRYPTO_AES 11 16 select CRYPTO_DES 12 17 help 13 18 This is the client VFS module for the SMB3 family of NAS protocols,
+6 -2
fs/cifs/cifsglob.h
··· 661 661 #endif 662 662 unsigned int max_read; 663 663 unsigned int max_write; 664 - __u8 preauth_hash[512]; 664 + #ifdef CONFIG_CIFS_SMB311 665 + __u8 preauth_sha_hash[64]; /* save initital negprot hash */ 666 + #endif /* 3.1.1 */ 665 667 struct delayed_work reconnect; /* reconnect workqueue job */ 666 668 struct mutex reconnect_mutex; /* prevent simultaneous reconnects */ 667 669 unsigned long echo_interval; ··· 851 849 __u8 smb3signingkey[SMB3_SIGN_KEY_SIZE]; 852 850 __u8 smb3encryptionkey[SMB3_SIGN_KEY_SIZE]; 853 851 __u8 smb3decryptionkey[SMB3_SIGN_KEY_SIZE]; 854 - __u8 preauth_hash[512]; 852 + #ifdef CONFIG_CIFS_SMB311 853 + __u8 preauth_sha_hash[64]; 854 + #endif /* 3.1.1 */ 855 855 }; 856 856 857 857 static inline bool
+1 -1
fs/cifs/smb2maperror.c
··· 214 214 {STATUS_DATATYPE_MISALIGNMENT, -EIO, "STATUS_DATATYPE_MISALIGNMENT"}, 215 215 {STATUS_BREAKPOINT, -EIO, "STATUS_BREAKPOINT"}, 216 216 {STATUS_SINGLE_STEP, -EIO, "STATUS_SINGLE_STEP"}, 217 - {STATUS_BUFFER_OVERFLOW, -EIO, "STATUS_BUFFER_OVERFLOW"}, 217 + {STATUS_BUFFER_OVERFLOW, -E2BIG, "STATUS_BUFFER_OVERFLOW"}, 218 218 {STATUS_NO_MORE_FILES, -ENODATA, "STATUS_NO_MORE_FILES"}, 219 219 {STATUS_WAKE_SYSTEM_DEBUGGER, -EIO, "STATUS_WAKE_SYSTEM_DEBUGGER"}, 220 220 {STATUS_HANDLES_CLOSED, -EIO, "STATUS_HANDLES_CLOSED"},
+25 -6
fs/cifs/smb2ops.c
··· 522 522 struct cifs_open_parms oparms; 523 523 struct cifs_fid fid; 524 524 struct smb2_file_full_ea_info *smb2_data; 525 + int ea_buf_size = SMB2_MIN_EA_BUF; 525 526 526 527 utf16_path = cifs_convert_path_to_utf16(path, cifs_sb); 527 528 if (!utf16_path) ··· 542 541 return rc; 543 542 } 544 543 545 - smb2_data = kzalloc(SMB2_MAX_EA_BUF, GFP_KERNEL); 546 - if (smb2_data == NULL) { 547 - SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid); 548 - return -ENOMEM; 544 + while (1) { 545 + smb2_data = kzalloc(ea_buf_size, GFP_KERNEL); 546 + if (smb2_data == NULL) { 547 + SMB2_close(xid, tcon, fid.persistent_fid, 548 + fid.volatile_fid); 549 + return -ENOMEM; 550 + } 551 + 552 + rc = SMB2_query_eas(xid, tcon, fid.persistent_fid, 553 + fid.volatile_fid, 554 + ea_buf_size, smb2_data); 555 + 556 + if (rc != -E2BIG) 557 + break; 558 + 559 + kfree(smb2_data); 560 + ea_buf_size <<= 1; 561 + 562 + if (ea_buf_size > SMB2_MAX_EA_BUF) { 563 + cifs_dbg(VFS, "EA size is too large\n"); 564 + SMB2_close(xid, tcon, fid.persistent_fid, 565 + fid.volatile_fid); 566 + return -ENOMEM; 567 + } 549 568 } 550 569 551 - rc = SMB2_query_eas(xid, tcon, fid.persistent_fid, fid.volatile_fid, 552 - smb2_data); 553 570 SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid); 554 571 555 572 if (!rc)
+22 -11
fs/cifs/smb2pdu.c
··· 648 648 { 649 649 int rc = 0; 650 650 struct validate_negotiate_info_req vneg_inbuf; 651 - struct validate_negotiate_info_rsp *pneg_rsp; 651 + struct validate_negotiate_info_rsp *pneg_rsp = NULL; 652 652 u32 rsplen; 653 653 u32 inbuflen; /* max of 4 dialects */ 654 654 ··· 727 727 rsplen); 728 728 729 729 /* relax check since Mac returns max bufsize allowed on ioctl */ 730 - if (rsplen > CIFSMaxBufSize) 731 - return -EIO; 730 + if ((rsplen > CIFSMaxBufSize) 731 + || (rsplen < sizeof(struct validate_negotiate_info_rsp))) 732 + goto err_rsp_free; 732 733 } 733 734 734 735 /* check validate negotiate info response matches what we got earlier */ ··· 748 747 749 748 /* validate negotiate successful */ 750 749 cifs_dbg(FYI, "validate negotiate info successful\n"); 750 + kfree(pneg_rsp); 751 751 return 0; 752 752 753 753 vneg_out: 754 754 cifs_dbg(VFS, "protocol revalidation - security settings mismatch\n"); 755 + err_rsp_free: 756 + kfree(pneg_rsp); 755 757 return -EIO; 756 758 } 757 759 ··· 1259 1255 struct smb2_tree_connect_req *req; 1260 1256 struct smb2_tree_connect_rsp *rsp = NULL; 1261 1257 struct kvec iov[2]; 1262 - struct kvec rsp_iov; 1258 + struct kvec rsp_iov = { NULL, 0 }; 1263 1259 int rc = 0; 1264 1260 int resp_buftype; 1265 1261 int unc_path_len; ··· 1376 1372 return rc; 1377 1373 1378 1374 tcon_error_exit: 1379 - if (rsp->hdr.sync_hdr.Status == STATUS_BAD_NETWORK_NAME) { 1375 + if (rsp && rsp->hdr.sync_hdr.Status == STATUS_BAD_NETWORK_NAME) { 1380 1376 cifs_dbg(VFS, "BAD_NETWORK_NAME: %s\n", tree); 1381 1377 } 1382 1378 goto tcon_exit; ··· 1979 1975 } else 1980 1976 iov[0].iov_len = get_rfc1002_length(req) + 4; 1981 1977 1978 + /* validate negotiate request must be signed - see MS-SMB2 3.2.5.5 */ 1979 + if (opcode == FSCTL_VALIDATE_NEGOTIATE_INFO) 1980 + req->hdr.sync_hdr.Flags |= SMB2_FLAGS_SIGNED; 1982 1981 1983 1982 rc = SendReceive2(xid, ses, iov, n_iov, &resp_buftype, flags, &rsp_iov); 1984 1983 cifs_small_buf_release(req); ··· 2198 2191 
req->PersistentFileId = persistent_fid; 2199 2192 req->VolatileFileId = volatile_fid; 2200 2193 req->AdditionalInformation = cpu_to_le32(additional_info); 2201 - /* 4 for rfc1002 length field and 1 for Buffer */ 2202 - req->InputBufferOffset = 2203 - cpu_to_le16(sizeof(struct smb2_query_info_req) - 1 - 4); 2194 + 2195 + /* 2196 + * We do not use the input buffer (do not send extra byte) 2197 + */ 2198 + req->InputBufferOffset = 0; 2199 + inc_rfc1001_len(req, -1); 2200 + 2204 2201 req->OutputBufferLength = cpu_to_le32(output_len); 2205 2202 2206 2203 iov[0].iov_base = (char *)req; ··· 2244 2233 } 2245 2234 2246 2235 int SMB2_query_eas(const unsigned int xid, struct cifs_tcon *tcon, 2247 - u64 persistent_fid, u64 volatile_fid, 2248 - struct smb2_file_full_ea_info *data) 2236 + u64 persistent_fid, u64 volatile_fid, 2237 + int ea_buf_size, struct smb2_file_full_ea_info *data) 2249 2238 { 2250 2239 return query_info(xid, tcon, persistent_fid, volatile_fid, 2251 2240 FILE_FULL_EA_INFORMATION, SMB2_O_INFO_FILE, 0, 2252 - SMB2_MAX_EA_BUF, 2241 + ea_buf_size, 2253 2242 sizeof(struct smb2_file_full_ea_info), 2254 2243 (void **)&data, 2255 2244 NULL);
+3 -2
fs/cifs/smb2pdu.h
··· 832 832 /* Channel field for read and write: exactly one of following flags can be set*/ 833 833 #define SMB2_CHANNEL_NONE 0x00000000 834 834 #define SMB2_CHANNEL_RDMA_V1 0x00000001 /* SMB3 or later */ 835 - #define SMB2_CHANNEL_RDMA_V1_INVALIDATE 0x00000001 /* SMB3.02 or later */ 835 + #define SMB2_CHANNEL_RDMA_V1_INVALIDATE 0x00000002 /* SMB3.02 or later */ 836 836 837 837 /* SMB2 read request without RFC1001 length at the beginning */ 838 838 struct smb2_read_plain_req { ··· 1178 1178 char FileName[0]; /* Name to be assigned to new link */ 1179 1179 } __packed; /* level 11 Set */ 1180 1180 1181 - #define SMB2_MAX_EA_BUF 2048 1181 + #define SMB2_MIN_EA_BUF 2048 1182 + #define SMB2_MAX_EA_BUF 65536 1182 1183 1183 1184 struct smb2_file_full_ea_info { /* encoding of response for level 15 */ 1184 1185 __le32 next_entry_offset;
+1
fs/cifs/smb2proto.h
··· 134 134 u64 persistent_file_id, u64 volatile_file_id); 135 135 extern int SMB2_query_eas(const unsigned int xid, struct cifs_tcon *tcon, 136 136 u64 persistent_file_id, u64 volatile_file_id, 137 + int ea_buf_size, 137 138 struct smb2_file_full_ea_info *data); 138 139 extern int SMB2_query_info(const unsigned int xid, struct cifs_tcon *tcon, 139 140 u64 persistent_file_id, u64 volatile_file_id,
+14 -12
fs/cifs/smb2transport.c
··· 390 390 return generate_smb3signingkey(ses, &triplet); 391 391 } 392 392 393 + #ifdef CONFIG_CIFS_SMB311 393 394 int 394 395 generate_smb311signingkey(struct cifs_ses *ses) 395 396 ··· 399 398 struct derivation *d; 400 399 401 400 d = &triplet.signing; 402 - d->label.iov_base = "SMB2AESCMAC"; 403 - d->label.iov_len = 12; 404 - d->context.iov_base = "SmbSign"; 405 - d->context.iov_len = 8; 401 + d->label.iov_base = "SMBSigningKey"; 402 + d->label.iov_len = 14; 403 + d->context.iov_base = ses->preauth_sha_hash; 404 + d->context.iov_len = 64; 406 405 407 406 d = &triplet.encryption; 408 - d->label.iov_base = "SMB2AESCCM"; 409 - d->label.iov_len = 11; 410 - d->context.iov_base = "ServerIn "; 411 - d->context.iov_len = 10; 407 + d->label.iov_base = "SMBC2SCipherKey"; 408 + d->label.iov_len = 16; 409 + d->context.iov_base = ses->preauth_sha_hash; 410 + d->context.iov_len = 64; 412 411 413 412 d = &triplet.decryption; 414 - d->label.iov_base = "SMB2AESCCM"; 415 - d->label.iov_len = 11; 416 - d->context.iov_base = "ServerOut"; 417 - d->context.iov_len = 10; 413 + d->label.iov_base = "SMBS2CCipherKey"; 414 + d->label.iov_len = 16; 415 + d->context.iov_base = ses->preauth_sha_hash; 416 + d->context.iov_len = 64; 418 417 419 418 return generate_smb3signingkey(ses, &triplet); 420 419 } 420 + #endif /* 311 */ 421 421 422 422 int 423 423 smb3_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+2 -1
fs/fuse/dir.c
··· 1308 1308 */ 1309 1309 over = !dir_emit(ctx, dirent->name, dirent->namelen, 1310 1310 dirent->ino, dirent->type); 1311 - ctx->pos = dirent->off; 1311 + if (!over) 1312 + ctx->pos = dirent->off; 1312 1313 } 1313 1314 1314 1315 buf += reclen;
+16 -4
fs/overlayfs/inode.c
··· 598 598 return true; 599 599 } 600 600 601 - struct inode *ovl_get_inode(struct dentry *dentry, struct dentry *upperdentry) 601 + struct inode *ovl_get_inode(struct dentry *dentry, struct dentry *upperdentry, 602 + struct dentry *index) 602 603 { 603 604 struct dentry *lowerdentry = ovl_dentry_lower(dentry); 604 605 struct inode *realinode = upperdentry ? d_inode(upperdentry) : NULL; 605 606 struct inode *inode; 607 + /* Already indexed or could be indexed on copy up? */ 608 + bool indexed = (index || (ovl_indexdir(dentry->d_sb) && !upperdentry)); 609 + 610 + if (WARN_ON(upperdentry && indexed && !lowerdentry)) 611 + return ERR_PTR(-EIO); 606 612 607 613 if (!realinode) 608 614 realinode = d_inode(lowerdentry); 609 615 610 - if (!S_ISDIR(realinode->i_mode) && 611 - (upperdentry || (lowerdentry && ovl_indexdir(dentry->d_sb)))) { 612 - struct inode *key = d_inode(lowerdentry ?: upperdentry); 616 + /* 617 + * Copy up origin (lower) may exist for non-indexed upper, but we must 618 + * not use lower as hash key in that case. 619 + * Hash inodes that are or could be indexed by origin inode and 620 + * non-indexed upper inodes that could be hard linked by upper inode. 621 + */ 622 + if (!S_ISDIR(realinode->i_mode) && (upperdentry || indexed)) { 623 + struct inode *key = d_inode(indexed ? lowerdentry : 624 + upperdentry); 613 625 unsigned int nlink; 614 626 615 627 inode = iget5_locked(dentry->d_sb, (unsigned long) key,
+16 -16
fs/overlayfs/namei.c
··· 405 405 * be treated as stale (i.e. after unlink of the overlay inode). 406 406 * We don't know the verification rules for directory and whiteout 407 407 * index entries, because they have not been implemented yet, so return 408 - * EROFS if those entries are found to avoid corrupting an index that 409 - * was created by a newer kernel. 408 + * EINVAL if those entries are found to abort the mount to avoid 409 + * corrupting an index that was created by a newer kernel. 410 410 */ 411 - err = -EROFS; 411 + err = -EINVAL; 412 412 if (d_is_dir(index) || ovl_is_whiteout(index)) 413 413 goto fail; 414 414 415 - err = -EINVAL; 416 415 if (index->d_name.len < sizeof(struct ovl_fh)*2) 417 416 goto fail; 418 417 ··· 506 507 index = lookup_one_len_unlocked(name.name, ofs->indexdir, name.len); 507 508 if (IS_ERR(index)) { 508 509 err = PTR_ERR(index); 510 + if (err == -ENOENT) { 511 + index = NULL; 512 + goto out; 513 + } 509 514 pr_warn_ratelimited("overlayfs: failed inode index lookup (ino=%lu, key=%*s, err=%i);\n" 510 515 "overlayfs: mount with '-o index=off' to disable inodes index.\n", 511 516 d_inode(origin)->i_ino, name.len, name.name, ··· 519 516 520 517 inode = d_inode(index); 521 518 if (d_is_negative(index)) { 522 - if (upper && d_inode(origin)->i_nlink > 1) { 523 - pr_warn_ratelimited("overlayfs: hard link with origin but no index (ino=%lu).\n", 524 - d_inode(origin)->i_ino); 525 - goto fail; 526 - } 527 - 528 - dput(index); 529 - index = NULL; 519 + goto out_dput; 530 520 } else if (upper && d_inode(upper) != inode) { 531 - pr_warn_ratelimited("overlayfs: wrong index found (index=%pd2, ino=%lu, upper ino=%lu).\n", 532 - index, inode->i_ino, d_inode(upper)->i_ino); 533 - goto fail; 521 + goto out_dput; 534 522 } else if (ovl_dentry_weird(index) || ovl_is_whiteout(index) || 535 523 ((inode->i_mode ^ d_inode(origin)->i_mode) & S_IFMT)) { 536 524 /* ··· 540 546 out: 541 547 kfree(name.name); 542 548 return index; 549 + 550 + out_dput: 551 + dput(index); 552 + 
index = NULL; 553 + goto out; 543 554 544 555 fail: 545 556 dput(index); ··· 634 635 } 635 636 636 637 if (d.redirect) { 638 + err = -ENOMEM; 637 639 upperredirect = kstrdup(d.redirect, GFP_KERNEL); 638 640 if (!upperredirect) 639 641 goto out_put_upper; ··· 709 709 upperdentry = dget(index); 710 710 711 711 if (upperdentry || ctr) { 712 - inode = ovl_get_inode(dentry, upperdentry); 712 + inode = ovl_get_inode(dentry, upperdentry, index); 713 713 err = PTR_ERR(inode); 714 714 if (IS_ERR(inode)) 715 715 goto out_free_oe;
+2 -1
fs/overlayfs/overlayfs.h
··· 286 286 bool ovl_is_private_xattr(const char *name); 287 287 288 288 struct inode *ovl_new_inode(struct super_block *sb, umode_t mode, dev_t rdev); 289 - struct inode *ovl_get_inode(struct dentry *dentry, struct dentry *upperdentry); 289 + struct inode *ovl_get_inode(struct dentry *dentry, struct dentry *upperdentry, 290 + struct dentry *index); 290 291 static inline void ovl_copyattr(struct inode *from, struct inode *to) 291 292 { 292 293 to->i_uid = from->i_uid;
+5 -6
fs/overlayfs/readdir.c
··· 1021 1021 break; 1022 1022 } 1023 1023 err = ovl_verify_index(index, lowerstack, numlower); 1024 - if (err) { 1025 - if (err == -EROFS) 1026 - break; 1024 + /* Cleanup stale and orphan index entries */ 1025 + if (err && (err == -ESTALE || err == -ENOENT)) 1027 1026 err = ovl_cleanup(dir, index); 1028 - if (err) 1029 - break; 1030 - } 1027 + if (err) 1028 + break; 1029 + 1031 1030 dput(index); 1032 1031 index = NULL; 1033 1032 }
+3
fs/overlayfs/super.c
··· 174 174 { 175 175 struct ovl_inode *oi = kmem_cache_alloc(ovl_inode_cachep, GFP_KERNEL); 176 176 177 + if (!oi) 178 + return NULL; 179 + 177 180 oi->cache = NULL; 178 181 oi->redirect = NULL; 179 182 oi->version = 0;
+13 -8
fs/xfs/xfs_file.c
··· 237 237 if (!count) 238 238 return 0; /* skip atime */ 239 239 240 - if (!xfs_ilock_nowait(ip, XFS_IOLOCK_SHARED)) { 241 - if (iocb->ki_flags & IOCB_NOWAIT) 240 + if (iocb->ki_flags & IOCB_NOWAIT) { 241 + if (!xfs_ilock_nowait(ip, XFS_IOLOCK_SHARED)) 242 242 return -EAGAIN; 243 + } else { 243 244 xfs_ilock(ip, XFS_IOLOCK_SHARED); 244 245 } 246 + 245 247 ret = dax_iomap_rw(iocb, to, &xfs_iomap_ops); 246 248 xfs_iunlock(ip, XFS_IOLOCK_SHARED); 247 249 ··· 261 259 262 260 trace_xfs_file_buffered_read(ip, iov_iter_count(to), iocb->ki_pos); 263 261 264 - if (!xfs_ilock_nowait(ip, XFS_IOLOCK_SHARED)) { 265 - if (iocb->ki_flags & IOCB_NOWAIT) 262 + if (iocb->ki_flags & IOCB_NOWAIT) { 263 + if (!xfs_ilock_nowait(ip, XFS_IOLOCK_SHARED)) 266 264 return -EAGAIN; 265 + } else { 267 266 xfs_ilock(ip, XFS_IOLOCK_SHARED); 268 267 } 269 268 ret = generic_file_read_iter(iocb, to); ··· 555 552 iolock = XFS_IOLOCK_SHARED; 556 553 } 557 554 558 - if (!xfs_ilock_nowait(ip, iolock)) { 559 - if (iocb->ki_flags & IOCB_NOWAIT) 555 + if (iocb->ki_flags & IOCB_NOWAIT) { 556 + if (!xfs_ilock_nowait(ip, iolock)) 560 557 return -EAGAIN; 558 + } else { 561 559 xfs_ilock(ip, iolock); 562 560 } 563 561 ··· 610 606 size_t count; 611 607 loff_t pos; 612 608 613 - if (!xfs_ilock_nowait(ip, iolock)) { 614 - if (iocb->ki_flags & IOCB_NOWAIT) 609 + if (iocb->ki_flags & IOCB_NOWAIT) { 610 + if (!xfs_ilock_nowait(ip, iolock)) 615 611 return -EAGAIN; 612 + } else { 616 613 xfs_ilock(ip, iolock); 617 614 } 618 615
+2 -2
include/linux/if_tap.h
··· 77 77 int tap_get_minor(dev_t major, struct tap_dev *tap); 78 78 void tap_free_minor(dev_t major, struct tap_dev *tap); 79 79 int tap_queue_resize(struct tap_dev *tap); 80 - int tap_create_cdev(struct cdev *tap_cdev, 81 - dev_t *tap_major, const char *device_name); 80 + int tap_create_cdev(struct cdev *tap_cdev, dev_t *tap_major, 81 + const char *device_name, struct module *module); 82 82 void tap_destroy_cdev(dev_t major, struct cdev *tap_cdev); 83 83 84 84 #endif /*_LINUX_IF_TAP_H_*/
+1 -1
include/linux/irq.h
··· 1009 1009 void irq_gc_unmask_enable_reg(struct irq_data *d); 1010 1010 void irq_gc_ack_set_bit(struct irq_data *d); 1011 1011 void irq_gc_ack_clr_bit(struct irq_data *d); 1012 - void irq_gc_mask_disable_reg_and_ack(struct irq_data *d); 1012 + void irq_gc_mask_disable_and_ack_set(struct irq_data *d); 1013 1013 void irq_gc_eoi(struct irq_data *d); 1014 1014 int irq_gc_set_wake(struct irq_data *d, unsigned int on); 1015 1015
+2
include/linux/irqchip/arm-gic-v3.h
··· 372 372 #define GITS_BASER_ENTRY_SIZE_SHIFT (48) 373 373 #define GITS_BASER_ENTRY_SIZE(r) ((((r) >> GITS_BASER_ENTRY_SIZE_SHIFT) & 0x1f) + 1) 374 374 #define GITS_BASER_ENTRY_SIZE_MASK GENMASK_ULL(52, 48) 375 + #define GITS_BASER_PHYS_52_to_48(phys) \ 376 + (((phys) & GENMASK_ULL(47, 16)) | (((phys) >> 48) & 0xf) << 12) 375 377 #define GITS_BASER_SHAREABILITY_SHIFT (10) 376 378 #define GITS_BASER_InnerShareable \ 377 379 GIC_BASER_SHAREABILITY(GITS_BASER, InnerShareable)
+2
include/linux/mlx5/port.h
··· 157 157 int mlx5_query_port_prio_tc(struct mlx5_core_dev *mdev, 158 158 u8 prio, u8 *tc); 159 159 int mlx5_set_port_tc_group(struct mlx5_core_dev *mdev, u8 *tc_group); 160 + int mlx5_query_port_tc_group(struct mlx5_core_dev *mdev, 161 + u8 tc, u8 *tc_group); 160 162 int mlx5_set_port_tc_bw_alloc(struct mlx5_core_dev *mdev, u8 *tc_bw); 161 163 int mlx5_query_port_tc_bw_alloc(struct mlx5_core_dev *mdev, 162 164 u8 tc, u8 *bw_pct);
+3 -2
include/linux/pm_qos.h
··· 27 27 PM_QOS_FLAGS_ALL, 28 28 }; 29 29 30 - #define PM_QOS_DEFAULT_VALUE -1 30 + #define PM_QOS_DEFAULT_VALUE (-1) 31 + #define PM_QOS_LATENCY_ANY S32_MAX 31 32 32 33 #define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC) 33 34 #define PM_QOS_NETWORK_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC) 34 35 #define PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE 0 35 36 #define PM_QOS_MEMORY_BANDWIDTH_DEFAULT_VALUE 0 36 37 #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE 0 38 + #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT PM_QOS_LATENCY_ANY 37 39 #define PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE 0 38 40 #define PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT (-1) 39 - #define PM_QOS_LATENCY_ANY ((s32)(~(__u32)0 >> 1)) 40 41 41 42 #define PM_QOS_FLAG_NO_POWER_OFF (1 << 0) 42 43 #define PM_QOS_FLAG_REMOTE_WAKEUP (1 << 1)
+17 -17
include/linux/sctp.h
··· 231 231 __be32 tsn; 232 232 __be16 stream; 233 233 __be16 ssn; 234 - __be32 ppid; 234 + __u32 ppid; 235 235 __u8 payload[0]; 236 236 }; 237 237 ··· 716 716 717 717 struct sctp_strreset_outreq { 718 718 struct sctp_paramhdr param_hdr; 719 - __u32 request_seq; 720 - __u32 response_seq; 721 - __u32 send_reset_at_tsn; 722 - __u16 list_of_streams[0]; 719 + __be32 request_seq; 720 + __be32 response_seq; 721 + __be32 send_reset_at_tsn; 722 + __be16 list_of_streams[0]; 723 723 }; 724 724 725 725 struct sctp_strreset_inreq { 726 726 struct sctp_paramhdr param_hdr; 727 - __u32 request_seq; 728 - __u16 list_of_streams[0]; 727 + __be32 request_seq; 728 + __be16 list_of_streams[0]; 729 729 }; 730 730 731 731 struct sctp_strreset_tsnreq { 732 732 struct sctp_paramhdr param_hdr; 733 - __u32 request_seq; 733 + __be32 request_seq; 734 734 }; 735 735 736 736 struct sctp_strreset_addstrm { 737 737 struct sctp_paramhdr param_hdr; 738 - __u32 request_seq; 739 - __u16 number_of_streams; 740 - __u16 reserved; 738 + __be32 request_seq; 739 + __be16 number_of_streams; 740 + __be16 reserved; 741 741 }; 742 742 743 743 enum { ··· 752 752 753 753 struct sctp_strreset_resp { 754 754 struct sctp_paramhdr param_hdr; 755 - __u32 response_seq; 756 - __u32 result; 755 + __be32 response_seq; 756 + __be32 result; 757 757 }; 758 758 759 759 struct sctp_strreset_resptsn { 760 760 struct sctp_paramhdr param_hdr; 761 - __u32 response_seq; 762 - __u32 result; 763 - __u32 senders_next_tsn; 764 - __u32 receivers_next_tsn; 761 + __be32 response_seq; 762 + __be32 result; 763 + __be32 senders_next_tsn; 764 + __be32 receivers_next_tsn; 765 765 }; 766 766 767 767 #endif /* __LINUX_SCTP_H__ */
+16 -11
include/linux/swait.h
··· 9 9 /* 10 10 * Simple wait queues 11 11 * 12 - * While these are very similar to the other/complex wait queues (wait.h) the 13 - * most important difference is that the simple waitqueue allows for 14 - * deterministic behaviour -- IOW it has strictly bounded IRQ and lock hold 15 - * times. 12 + * While these are very similar to regular wait queues (wait.h) the most 13 + * important difference is that the simple waitqueue allows for deterministic 14 + * behaviour -- IOW it has strictly bounded IRQ and lock hold times. 16 15 * 17 - * In order to make this so, we had to drop a fair number of features of the 18 - * other waitqueue code; notably: 16 + * Mainly, this is accomplished by two things. Firstly not allowing swake_up_all 17 + * from IRQ disabled, and dropping the lock upon every wakeup, giving a higher 18 + * priority task a chance to run. 19 + * 20 + * Secondly, we had to drop a fair number of features of the other waitqueue 21 + * code; notably: 19 22 * 20 23 * - mixing INTERRUPTIBLE and UNINTERRUPTIBLE sleeps on the same waitqueue; 21 24 * all wakeups are TASK_NORMAL in order to avoid O(n) lookups for the right ··· 27 24 * - the exclusive mode; because this requires preserving the list order 28 25 * and this is hard. 29 26 * 30 - * - custom wake functions; because you cannot give any guarantees about 31 - * random code. 27 + * - custom wake callback functions; because you cannot give any guarantees 28 + * about random code. This also allows swait to be used in RT, such that 29 + * raw spinlock can be used for the swait queue head. 32 30 * 33 - * As a side effect of this; the data structures are slimmer. 34 - * 35 - * One would recommend using this wait queue where possible. 31 + * As a side effect of these; the data structures are slimmer albeit more ad-hoc. 32 + * For all the above, note that simple wait queues should _only_ be used under 33 + * very specific realtime constraints -- it is best to stick with the regular 34 + * wait queues in most cases. 
36 35 */ 37 36 38 37 struct task_struct;
+6 -3
include/net/fq_impl.h
··· 159 159 fq_flow_get_default_t get_default_func) 160 160 { 161 161 struct fq_flow *flow; 162 + bool oom; 162 163 163 164 lockdep_assert_held(&fq->lock); 164 165 ··· 181 180 } 182 181 183 182 __skb_queue_tail(&flow->queue, skb); 184 - 185 - if (fq->backlog > fq->limit || fq->memory_usage > fq->memory_limit) { 183 + oom = (fq->memory_usage > fq->memory_limit); 184 + while (fq->backlog > fq->limit || oom) { 186 185 flow = list_first_entry_or_null(&fq->backlogs, 187 186 struct fq_flow, 188 187 backlogchain); ··· 197 196 198 197 flow->tin->overlimit++; 199 198 fq->overlimit++; 200 - if (fq->memory_usage > fq->memory_limit) 199 + if (oom) { 201 200 fq->overmemory++; 201 + oom = (fq->memory_usage > fq->memory_limit); 202 + } 202 203 } 203 204 } 204 205
+6
include/net/inet_sock.h
··· 133 133 return sk->sk_bound_dev_if; 134 134 } 135 135 136 + static inline struct ip_options_rcu *ireq_opt_deref(const struct inet_request_sock *ireq) 137 + { 138 + return rcu_dereference_check(ireq->ireq_opt, 139 + refcount_read(&ireq->req.rsk_refcnt) > 0); 140 + } 141 + 136 142 struct inet_cork { 137 143 unsigned int flags; 138 144 __be32 addr;
+2
include/net/pkt_cls.h
··· 2 2 #define __NET_PKT_CLS_H 3 3 4 4 #include <linux/pkt_cls.h> 5 + #include <linux/workqueue.h> 5 6 #include <net/sch_generic.h> 6 7 #include <net/act_api.h> 7 8 ··· 29 28 }; 30 29 31 30 struct tcf_block_cb; 31 + bool tcf_queue_work(struct work_struct *work); 32 32 33 33 #ifdef CONFIG_NET_CLS 34 34 struct tcf_chain *tcf_chain_get(struct tcf_block *block, u32 chain_index,
+2
include/net/sch_generic.h
··· 10 10 #include <linux/dynamic_queue_limits.h> 11 11 #include <linux/list.h> 12 12 #include <linux/refcount.h> 13 + #include <linux/workqueue.h> 13 14 #include <net/gen_stats.h> 14 15 #include <net/rtnetlink.h> 15 16 ··· 274 273 struct net *net; 275 274 struct Qdisc *q; 276 275 struct list_head cb_list; 276 + struct work_struct work; 277 277 }; 278 278 279 279 static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
+1 -1
include/net/sctp/sm.h
··· 261 261 struct sctp_fwdtsn_skip *skiplist); 262 262 struct sctp_chunk *sctp_make_auth(const struct sctp_association *asoc); 263 263 struct sctp_chunk *sctp_make_strreset_req(const struct sctp_association *asoc, 264 - __u16 stream_num, __u16 *stream_list, 264 + __u16 stream_num, __be16 *stream_list, 265 265 bool out, bool in); 266 266 struct sctp_chunk *sctp_make_strreset_tsnreq( 267 267 const struct sctp_association *asoc);
+1 -1
include/net/sctp/ulpevent.h
··· 130 130 131 131 struct sctp_ulpevent *sctp_ulpevent_make_stream_reset_event( 132 132 const struct sctp_association *asoc, __u16 flags, 133 - __u16 stream_num, __u16 *stream_list, gfp_t gfp); 133 + __u16 stream_num, __be16 *stream_list, gfp_t gfp); 134 134 135 135 struct sctp_ulpevent *sctp_ulpevent_make_assoc_reset_event( 136 136 const struct sctp_association *asoc, __u16 flags,
+1 -2
include/net/strparser.h
··· 74 74 u32 unrecov_intr : 1; 75 75 76 76 struct sk_buff **skb_nextp; 77 - struct timer_list msg_timer; 78 77 struct sk_buff *skb_head; 79 78 unsigned int need_bytes; 80 - struct delayed_work delayed_work; 79 + struct delayed_work msg_timer_work; 81 80 struct work_struct work; 82 81 struct strp_stats stats; 83 82 struct strp_callbacks cb;
+1
include/net/tcp.h
··· 816 816 __u32 key; 817 817 __u32 flags; 818 818 struct bpf_map *map; 819 + void *data_end; 819 820 } bpf; 820 821 }; 821 822 };
+3 -3
include/uapi/linux/bpf.h
··· 645 645 * @map: pointer to sockmap 646 646 * @key: key to lookup sock in map 647 647 * @flags: reserved for future use 648 - * Return: SK_REDIRECT 648 + * Return: SK_PASS 649 649 * 650 650 * int bpf_sock_map_update(skops, map, key, flags) 651 651 * @skops: pointer to bpf_sock_ops ··· 887 887 }; 888 888 889 889 enum sk_action { 890 - SK_ABORTED = 0, 891 - SK_DROP, 890 + SK_DROP = 0, 891 + SK_PASS, 892 892 SK_REDIRECT, 893 893 }; 894 894
+1 -1
include/uapi/linux/sctp.h
··· 378 378 __u16 sre_type; 379 379 __u16 sre_flags; 380 380 __u32 sre_length; 381 - __u16 sre_error; 381 + __be16 sre_error; 382 382 sctp_assoc_t sre_assoc_id; 383 383 __u8 sre_data[0]; 384 384 };
+1
include/uapi/linux/spi/spidev.h
··· 23 23 #define SPIDEV_H 24 24 25 25 #include <linux/types.h> 26 + #include <linux/ioctl.h> 26 27 27 28 /* User space versions of kernel symbols for SPI clocking modes, 28 29 * matching <linux/spi/spi.h>
+1 -1
init/Kconfig
··· 1033 1033 1034 1034 choice 1035 1035 prompt "Compiler optimization level" 1036 - default CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE 1036 + default CC_OPTIMIZE_FOR_PERFORMANCE 1037 1037 1038 1038 config CC_OPTIMIZE_FOR_PERFORMANCE 1039 1039 bool "Optimize for performance"
+10 -1
kernel/bpf/sockmap.c
··· 96 96 return rcu_dereference_sk_user_data(sk); 97 97 } 98 98 99 + /* compute the linear packet data range [data, data_end) for skb when 100 + * sk_skb type programs are in use. 101 + */ 102 + static inline void bpf_compute_data_end_sk_skb(struct sk_buff *skb) 103 + { 104 + TCP_SKB_CB(skb)->bpf.data_end = skb->data + skb_headlen(skb); 105 + } 106 + 99 107 static int smap_verdict_func(struct smap_psock *psock, struct sk_buff *skb) 100 108 { 101 109 struct bpf_prog *prog = READ_ONCE(psock->bpf_verdict); ··· 125 117 preempt_enable(); 126 118 skb->sk = NULL; 127 119 128 - return rc; 120 + return rc == SK_PASS ? 121 + (TCP_SKB_CB(skb)->bpf.map ? SK_REDIRECT : SK_PASS) : SK_DROP; 129 122 } 130 123 131 124 static void smap_do_verdict(struct smap_psock *psock, struct sk_buff *skb)
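The sockmap hunk above stops returning the raw program result and instead normalizes it: with the renumbered `enum sk_action`, a program returns SK_PASS or SK_DROP, and a pass only becomes a redirect if the program also stashed a map via the redirect helper. A userspace sketch of that final expression (the `map_was_set` flag stands in for `TCP_SKB_CB(skb)->bpf.map` being non-NULL):

```c
#include <stdbool.h>

/* sk_action values after this merge: SK_DROP must be 0 so that an
 * aborted or garbage program result falls through to a drop. */
enum sk_action { SK_DROP = 0, SK_PASS, SK_REDIRECT };

/* Mirrors the tail of smap_verdict_func(): only an explicit SK_PASS
 * combined with a recorded redirect map yields SK_REDIRECT. */
static enum sk_action normalize_verdict(int rc, bool map_was_set)
{
	return rc == SK_PASS ? (map_was_set ? SK_REDIRECT : SK_PASS)
			     : SK_DROP;
}
```

Anything other than SK_PASS, including a stale SK_REDIRECT, maps to SK_DROP, which is what makes the renumbering in include/uapi/linux/bpf.h safe.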
+5
kernel/cpu.c
··· 632 632 __cpuhp_kick_ap(st); 633 633 } 634 634 635 + /* 636 + * Clean up the leftovers so the next hotplug operation wont use stale 637 + * data. 638 + */ 639 + st->node = st->last = NULL; 635 640 return ret; 636 641 } 637 642
+12 -3
kernel/irq/generic-chip.c
··· 135 135 } 136 136 137 137 /** 138 - * irq_gc_mask_disable_reg_and_ack - Mask and ack pending interrupt 138 + * irq_gc_mask_disable_and_ack_set - Mask and ack pending interrupt 139 139 * @d: irq_data 140 + * 141 + * This generic implementation of the irq_mask_ack method is for chips 142 + * with separate enable/disable registers instead of a single mask 143 + * register and where a pending interrupt is acknowledged by setting a 144 + * bit. 145 + * 146 + * Note: This is the only permutation currently used. Similar generic 147 + * functions should be added here if other permutations are required. 140 148 */ 141 - void irq_gc_mask_disable_reg_and_ack(struct irq_data *d) 149 + void irq_gc_mask_disable_and_ack_set(struct irq_data *d) 142 150 { 143 151 struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 144 152 struct irq_chip_type *ct = irq_data_get_chip_type(d); 145 153 u32 mask = d->mask; 146 154 147 155 irq_gc_lock(gc); 148 - irq_reg_writel(gc, mask, ct->regs.mask); 156 + irq_reg_writel(gc, mask, ct->regs.disable); 157 + *ct->mask_cache &= ~mask; 149 158 irq_reg_writel(gc, mask, ct->regs.ack); 150 159 irq_gc_unlock(gc); 151 160 }
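The generic-chip fix above changes which register is written (disable, not mask) and keeps the cached mask coherent. A toy model with registers as plain stores (field names invented; locking and the real `irq_reg_writel()` are elided):

```c
#include <stdint.h>

/* Chip with separate enable/disable registers that acks a pending
 * interrupt by *setting* a bit, per the renamed helper's kernel-doc. */
struct toy_gc {
	uint32_t disable;	/* write-1-to-disable register */
	uint32_t ack;		/* write-1-to-ack register */
	uint32_t mask_cache;	/* cached enable mask */
};

static void toy_mask_disable_and_ack_set(struct toy_gc *gc, uint32_t mask)
{
	gc->disable = mask;	 /* was ct->regs.mask before the fix */
	gc->mask_cache &= ~mask; /* cache update is also new in the fix */
	gc->ack = mask;		 /* acknowledge the pending interrupt */
}
```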
+15 -22
kernel/workqueue.c
··· 68 68 * attach_mutex to avoid changing binding state while 69 69 * worker_attach_to_pool() is in progress. 70 70 */ 71 + POOL_MANAGER_ACTIVE = 1 << 0, /* being managed */ 71 72 POOL_DISASSOCIATED = 1 << 2, /* cpu can't serve workers */ 72 73 73 74 /* worker flags */ ··· 166 165 /* L: hash of busy workers */ 167 166 168 167 /* see manage_workers() for details on the two manager mutexes */ 169 - struct mutex manager_arb; /* manager arbitration */ 170 168 struct worker *manager; /* L: purely informational */ 171 169 struct mutex attach_mutex; /* attach/detach exclusion */ 172 170 struct list_head workers; /* A: attached workers */ ··· 299 299 300 300 static DEFINE_MUTEX(wq_pool_mutex); /* protects pools and workqueues list */ 301 301 static DEFINE_SPINLOCK(wq_mayday_lock); /* protects wq->maydays list */ 302 + static DECLARE_WAIT_QUEUE_HEAD(wq_manager_wait); /* wait for manager to go away */ 302 303 303 304 static LIST_HEAD(workqueues); /* PR: list of all workqueues */ 304 305 static bool workqueue_freezing; /* PL: have wqs started freezing? */ ··· 802 801 /* Do we have too many workers and should some go away? */ 803 802 static bool too_many_workers(struct worker_pool *pool) 804 803 { 805 - bool managing = mutex_is_locked(&pool->manager_arb); 804 + bool managing = pool->flags & POOL_MANAGER_ACTIVE; 806 805 int nr_idle = pool->nr_idle + managing; /* manager is considered idle */ 807 806 int nr_busy = pool->nr_workers - nr_idle; 808 807 ··· 1981 1980 { 1982 1981 struct worker_pool *pool = worker->pool; 1983 1982 1984 - /* 1985 - * Anyone who successfully grabs manager_arb wins the arbitration 1986 - * and becomes the manager. mutex_trylock() on pool->manager_arb 1987 - * failure while holding pool->lock reliably indicates that someone 1988 - * else is managing the pool and the worker which failed trylock 1989 - * can proceed to executing work items. This means that anyone 1990 - * grabbing manager_arb is responsible for actually performing 1991 - * manager duties. 
If manager_arb is grabbed and released without 1992 - * actual management, the pool may stall indefinitely. 1993 - */ 1994 - if (!mutex_trylock(&pool->manager_arb)) 1983 + if (pool->flags & POOL_MANAGER_ACTIVE) 1995 1984 return false; 1985 + 1986 + pool->flags |= POOL_MANAGER_ACTIVE; 1996 1987 pool->manager = worker; 1997 1988 1998 1989 maybe_create_worker(pool); 1999 1990 2000 1991 pool->manager = NULL; 2001 - mutex_unlock(&pool->manager_arb); 1992 + pool->flags &= ~POOL_MANAGER_ACTIVE; 1993 + wake_up(&wq_manager_wait); 2002 1994 return true; 2003 1995 } 2004 1996 ··· 3242 3248 setup_timer(&pool->mayday_timer, pool_mayday_timeout, 3243 3249 (unsigned long)pool); 3244 3250 3245 - mutex_init(&pool->manager_arb); 3246 3251 mutex_init(&pool->attach_mutex); 3247 3252 INIT_LIST_HEAD(&pool->workers); 3248 3253 ··· 3311 3318 hash_del(&pool->hash_node); 3312 3319 3313 3320 /* 3314 - * Become the manager and destroy all workers. Grabbing 3315 - * manager_arb prevents @pool's workers from blocking on 3316 - * attach_mutex. 3321 + * Become the manager and destroy all workers. This prevents 3322 + * @pool's workers from blocking on attach_mutex. We're the last 3323 + * manager and @pool gets freed with the flag set. 3317 3324 */ 3318 - mutex_lock(&pool->manager_arb); 3319 - 3320 3325 spin_lock_irq(&pool->lock); 3326 + wait_event_lock_irq(wq_manager_wait, 3327 + !(pool->flags & POOL_MANAGER_ACTIVE), pool->lock); 3328 + pool->flags |= POOL_MANAGER_ACTIVE; 3329 + 3321 3330 while ((worker = first_idle_worker(pool))) 3322 3331 destroy_worker(worker); 3323 3332 WARN_ON(pool->nr_workers || pool->nr_idle); ··· 3332 3337 3333 3338 if (pool->detach_completion) 3334 3339 wait_for_completion(pool->detach_completion); 3335 - 3336 - mutex_unlock(&pool->manager_arb); 3337 3340 3338 3341 /* shut down the timers */ 3339 3342 del_timer_sync(&pool->idle_timer);
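The workqueue rework above replaces the `manager_arb` mutex with a POOL_MANAGER_ACTIVE flag tested and set under `pool->lock`, plus a waitqueue for teardown. A minimal single-threaded sketch of the arbitration (the spinlock is elided, and `created` stands in for `maybe_create_worker()`):

```c
#include <stdbool.h>

#define POOL_MANAGER_ACTIVE (1 << 0)

struct toy_pool {
	unsigned int flags;
	int created;
};

/* Mirrors the reworked manage_workers(): whoever sees the flag clear
 * (under pool->lock in the real code) becomes the manager; everyone
 * else backs off and keeps processing work items. */
static bool toy_manage_workers(struct toy_pool *pool)
{
	if (pool->flags & POOL_MANAGER_ACTIVE)
		return false;		/* someone else is the manager */

	pool->flags |= POOL_MANAGER_ACTIVE;
	pool->created++;		/* do the manager duties */
	pool->flags &= ~POOL_MANAGER_ACTIVE;
	/* the real code also does wake_up(&wq_manager_wait) here so
	 * put_unbound_pool() can take over as the last manager */
	return true;
}
```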
+17 -34
lib/assoc_array.c
··· 598 598 if ((edit->segment_cache[ASSOC_ARRAY_FAN_OUT] ^ base_seg) == 0) 599 599 goto all_leaves_cluster_together; 600 600 601 - /* Otherwise we can just insert a new node ahead of the old 602 - * one. 601 + /* Otherwise all the old leaves cluster in the same slot, but 602 + * the new leaf wants to go into a different slot - so we 603 + * create a new node (n0) to hold the new leaf and a pointer to 604 + * a new node (n1) holding all the old leaves. 605 + * 606 + * This can be done by falling through to the node splitting 607 + * path. 603 608 */ 604 - goto present_leaves_cluster_but_not_new_leaf; 609 + pr_devel("present leaves cluster but not new leaf\n"); 605 610 } 606 611 607 612 split_node: 608 613 pr_devel("split node\n"); 609 614 610 - /* We need to split the current node; we know that the node doesn't 611 - * simply contain a full set of leaves that cluster together (it 612 - * contains meta pointers and/or non-clustering leaves). 615 + /* We need to split the current node. The node must contain anything 616 + * from a single leaf (in the one leaf case, this leaf will cluster 617 + * with the new leaf) and the rest meta-pointers, to all leaves, some 618 + * of which may cluster. 619 + * 620 + * It won't contain the case in which all the current leaves plus the 621 + * new leaves want to cluster in the same slot. 613 622 * 614 623 * We need to expel at least two leaves out of a set consisting of the 615 - * leaves in the node and the new leaf. 624 + * leaves in the node and the new leaf. The current meta pointers can 625 + * just be copied as they shouldn't cluster with any of the leaves. 616 626 * 617 627 * We need a new node (n0) to replace the current one and a new node to 618 628 * take the expelled nodes (n1). 
··· 725 715 edit->set[0].ptr = &assoc_array_ptr_to_shortcut(ptr)->next_node; 726 716 edit->excised_meta[0] = assoc_array_node_to_ptr(node); 727 717 pr_devel("<--%s() = ok [split node]\n", __func__); 728 - return true; 729 - 730 - present_leaves_cluster_but_not_new_leaf: 731 - /* All the old leaves cluster in the same slot, but the new leaf wants 732 - * to go into a different slot, so we create a new node to hold the new 733 - * leaf and a pointer to a new node holding all the old leaves. 734 - */ 735 - pr_devel("present leaves cluster but not new leaf\n"); 736 - 737 - new_n0->back_pointer = node->back_pointer; 738 - new_n0->parent_slot = node->parent_slot; 739 - new_n0->nr_leaves_on_branch = node->nr_leaves_on_branch; 740 - new_n1->back_pointer = assoc_array_node_to_ptr(new_n0); 741 - new_n1->parent_slot = edit->segment_cache[0]; 742 - new_n1->nr_leaves_on_branch = node->nr_leaves_on_branch; 743 - edit->adjust_count_on = new_n0; 744 - 745 - for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++) 746 - new_n1->slots[i] = node->slots[i]; 747 - 748 - new_n0->slots[edit->segment_cache[0]] = assoc_array_node_to_ptr(new_n0); 749 - edit->leaf_p = &new_n0->slots[edit->segment_cache[ASSOC_ARRAY_FAN_OUT]]; 750 - 751 - edit->set[0].ptr = &assoc_array_ptr_to_node(node->back_pointer)->slots[node->parent_slot]; 752 - edit->set[0].to = assoc_array_node_to_ptr(new_n0); 753 - edit->excised_meta[0] = assoc_array_node_to_ptr(node); 754 - pr_devel("<--%s() = ok [insert node before]\n", __func__); 755 718 return true; 756 719 757 720 all_leaves_cluster_together:
+29 -3
net/core/filter.c
··· 1845 1845 { 1846 1846 struct tcp_skb_cb *tcb = TCP_SKB_CB(skb); 1847 1847 1848 + /* If user passes invalid input drop the packet. */ 1848 1849 if (unlikely(flags)) 1849 - return SK_ABORTED; 1850 + return SK_DROP; 1850 1851 1851 1852 tcb->bpf.key = key; 1852 1853 tcb->bpf.flags = flags; 1853 1854 tcb->bpf.map = map; 1854 1855 1855 - return SK_REDIRECT; 1856 + return SK_PASS; 1856 1857 } 1857 1858 1858 1859 struct sock *do_sk_redirect_map(struct sk_buff *skb) ··· 4476 4475 return insn - insn_buf; 4477 4476 } 4478 4477 4478 + static u32 sk_skb_convert_ctx_access(enum bpf_access_type type, 4479 + const struct bpf_insn *si, 4480 + struct bpf_insn *insn_buf, 4481 + struct bpf_prog *prog, u32 *target_size) 4482 + { 4483 + struct bpf_insn *insn = insn_buf; 4484 + int off; 4485 + 4486 + switch (si->off) { 4487 + case offsetof(struct __sk_buff, data_end): 4488 + off = si->off; 4489 + off -= offsetof(struct __sk_buff, data_end); 4490 + off += offsetof(struct sk_buff, cb); 4491 + off += offsetof(struct tcp_skb_cb, bpf.data_end); 4492 + *insn++ = BPF_LDX_MEM(BPF_SIZEOF(void *), si->dst_reg, 4493 + si->src_reg, off); 4494 + break; 4495 + default: 4496 + return bpf_convert_ctx_access(type, si, insn_buf, prog, 4497 + target_size); 4498 + } 4499 + 4500 + return insn - insn_buf; 4501 + } 4502 + 4479 4503 const struct bpf_verifier_ops sk_filter_verifier_ops = { 4480 4504 .get_func_proto = sk_filter_func_proto, 4481 4505 .is_valid_access = sk_filter_is_valid_access, ··· 4591 4565 const struct bpf_verifier_ops sk_skb_verifier_ops = { 4592 4566 .get_func_proto = sk_skb_func_proto, 4593 4567 .is_valid_access = sk_skb_is_valid_access, 4594 - .convert_ctx_access = bpf_convert_ctx_access, 4568 + .convert_ctx_access = sk_skb_convert_ctx_access, 4595 4569 .gen_prologue = sk_skb_prologue, 4596 4570 }; 4597 4571
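The new `sk_skb_convert_ctx_access()` above rewrites a load of `__sk_buff.data_end` into a load from the `data_end` pointer stashed in `skb->cb`. The offset arithmetic can be sketched with invented stand-in structs (only the offsetof chain mirrors the real rewriter; sizes and names are illustrative):

```c
#include <stddef.h>

struct toy_bpf_cb {		/* stand-in for tcp_skb_cb's bpf part */
	unsigned int key;
	void *data_end;
};

struct toy_skb {		/* stand-in for struct sk_buff */
	unsigned int len;
	char cb[48];		/* control buffer hosting toy_bpf_cb */
};

/* Translate an offset within the BPF-visible context struct into the
 * real offset inside toy_skb, the way the ctx rewriter emits it. */
static size_t toy_ctx_off_to_skb_off(size_t ctx_off, size_t ctx_data_end_off)
{
	size_t off = ctx_off;

	off -= ctx_data_end_off;		/* rebase to the field */
	off += offsetof(struct toy_skb, cb);	/* into skb->cb */
	off += offsetof(struct toy_bpf_cb, data_end); /* stashed pointer */
	return off;
}
```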
+1 -1
net/dccp/ipv4.c
··· 495 495 ireq->ir_rmt_addr); 496 496 err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr, 497 497 ireq->ir_rmt_addr, 498 - rcu_dereference(ireq->ireq_opt)); 498 + ireq_opt_deref(ireq)); 499 499 err = net_xmit_eval(err); 500 500 } 501 501
+4 -3
net/dsa/dsa2.c
··· 486 486 if (!ethernet) 487 487 return -EINVAL; 488 488 ethernet_dev = of_find_net_device_by_node(ethernet); 489 + if (!ethernet_dev) 490 + return -EPROBE_DEFER; 489 491 } else { 490 492 ethernet_dev = dsa_dev_to_net_device(ds->cd->netdev[index]); 493 + if (!ethernet_dev) 494 + return -EPROBE_DEFER; 491 495 dev_put(ethernet_dev); 492 496 } 493 - 494 - if (!ethernet_dev) 495 - return -EPROBE_DEFER; 496 497 497 498 if (!dst->cpu_dp) { 498 499 dst->cpu_dp = port;
+2 -1
net/ipv4/inet_connection_sock.c
··· 541 541 struct ip_options_rcu *opt; 542 542 struct rtable *rt; 543 543 544 - opt = rcu_dereference(ireq->ireq_opt); 544 + opt = ireq_opt_deref(ireq); 545 + 545 546 flowi4_init_output(fl4, ireq->ir_iif, ireq->ir_mark, 546 547 RT_CONN_FLAGS(sk), RT_SCOPE_UNIVERSE, 547 548 sk->sk_protocol, inet_sk_flowi_flags(sk),
+42 -17
net/ipv4/ipip.c
··· 128 128 129 129 static int ipip_err(struct sk_buff *skb, u32 info) 130 130 { 131 - 132 - /* All the routers (except for Linux) return only 133 - 8 bytes of packet payload. It means, that precise relaying of 134 - ICMP in the real Internet is absolutely infeasible. 135 - */ 131 + /* All the routers (except for Linux) return only 132 + * 8 bytes of packet payload. It means, that precise relaying of 133 + * ICMP in the real Internet is absolutely infeasible. 134 + */ 136 135 struct net *net = dev_net(skb->dev); 137 136 struct ip_tunnel_net *itn = net_generic(net, ipip_net_id); 138 137 const struct iphdr *iph = (const struct iphdr *)skb->data; 139 - struct ip_tunnel *t; 140 - int err; 141 138 const int type = icmp_hdr(skb)->type; 142 139 const int code = icmp_hdr(skb)->code; 140 + struct ip_tunnel *t; 141 + int err = 0; 143 142 144 - err = -ENOENT; 143 + switch (type) { 144 + case ICMP_DEST_UNREACH: 145 + switch (code) { 146 + case ICMP_SR_FAILED: 147 + /* Impossible event. */ 148 + goto out; 149 + default: 150 + /* All others are translated to HOST_UNREACH. 151 + * rfc2003 contains "deep thoughts" about NET_UNREACH, 152 + * I believe they are just ether pollution. 
--ANK 153 + */ 154 + break; 155 + } 156 + break; 157 + 158 + case ICMP_TIME_EXCEEDED: 159 + if (code != ICMP_EXC_TTL) 160 + goto out; 161 + break; 162 + 163 + case ICMP_REDIRECT: 164 + break; 165 + 166 + default: 167 + goto out; 168 + } 169 + 145 170 t = ip_tunnel_lookup(itn, skb->dev->ifindex, TUNNEL_NO_KEY, 146 171 iph->daddr, iph->saddr, 0); 147 - if (!t) 172 + if (!t) { 173 + err = -ENOENT; 148 174 goto out; 175 + } 149 176 150 177 if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) { 151 - ipv4_update_pmtu(skb, dev_net(skb->dev), info, 152 - t->parms.link, 0, iph->protocol, 0); 153 - err = 0; 178 + ipv4_update_pmtu(skb, net, info, t->parms.link, 0, 179 + iph->protocol, 0); 154 180 goto out; 155 181 } 156 182 157 183 if (type == ICMP_REDIRECT) { 158 - ipv4_redirect(skb, dev_net(skb->dev), t->parms.link, 0, 159 - iph->protocol, 0); 160 - err = 0; 184 + ipv4_redirect(skb, net, t->parms.link, 0, iph->protocol, 0); 161 185 goto out; 162 186 } 163 187 164 - if (t->parms.iph.daddr == 0) 188 + if (t->parms.iph.daddr == 0) { 189 + err = -ENOENT; 165 190 goto out; 191 + } 166 192 167 - err = 0; 168 193 if (t->parms.iph.ttl == 0 && type == ICMP_TIME_EXCEEDED) 169 194 goto out; 170 195
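The ipip_err() rework above moves ICMP triage ahead of the tunnel lookup, so uninteresting errors bail out early. A userspace sketch of that triage decision (constants copied from the ICMP definitions; the pmtu/redirect handling is omitted):

```c
#include <stdbool.h>

enum { ICMP_DEST_UNREACH = 3, ICMP_TIME_EXCEEDED = 11 };
enum { ICMP_REDIRECT = 5 };
enum { ICMP_SR_FAILED = 5, ICMP_EXC_TTL = 0 };

/* Returns whether the reworked ipip_err() would proceed to the tunnel
 * lookup for this ICMP type/code pair. */
static bool icmp_err_actionable(int type, int code)
{
	switch (type) {
	case ICMP_DEST_UNREACH:
		/* SR_FAILED is an impossible event; everything else is
		 * treated like HOST_UNREACH, per the comment above. */
		return code != ICMP_SR_FAILED;
	case ICMP_TIME_EXCEEDED:
		return code == ICMP_EXC_TTL;
	case ICMP_REDIRECT:
		return true;
	default:
		return false;
	}
}
```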
+1 -1
net/ipv4/tcp_ipv4.c
··· 881 881 882 882 err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr, 883 883 ireq->ir_rmt_addr, 884 - rcu_dereference(ireq->ireq_opt)); 884 + ireq_opt_deref(ireq)); 885 885 err = net_xmit_eval(err); 886 886 } 887 887
+7 -3
net/ipv4/tcp_output.c
··· 781 781 struct tcp_sock *tp = tcp_sk(sk); 782 782 783 783 if (tp->lost_out > tp->retrans_out && 784 - tp->snd_cwnd > tcp_packets_in_flight(tp)) 784 + tp->snd_cwnd > tcp_packets_in_flight(tp)) { 785 + tcp_mstamp_refresh(tp); 785 786 tcp_xmit_retransmit_queue(sk); 787 + } 786 788 787 789 tcp_write_xmit(sk, tcp_current_mss(sk), tp->nonagle, 788 790 0, GFP_ATOMIC); ··· 2309 2307 2310 2308 sent_pkts = 0; 2311 2309 2310 + tcp_mstamp_refresh(tp); 2312 2311 if (!push_one) { 2313 2312 /* Do MTU probing. */ 2314 2313 result = tcp_mtu_probe(sk); ··· 2321 2318 } 2322 2319 2323 2320 max_segs = tcp_tso_segs(sk, mss_now); 2324 - tcp_mstamp_refresh(tp); 2325 2321 while ((skb = tcp_send_head(sk))) { 2326 2322 unsigned int limit; 2327 2323 ··· 2913 2911 -ENOBUFS; 2914 2912 } tcp_skb_tsorted_restore(skb); 2915 2913 2916 - if (!err) 2914 + if (!err) { 2917 2915 tcp_update_skb_after_send(tp, skb); 2916 + tcp_rate_skb_sent(sk, skb); 2917 + } 2918 2918 } else { 2919 2919 err = tcp_transmit_skb(sk, skb, 1, GFP_ATOMIC); 2920 2920 }
+14 -6
net/ipv6/ip6_gre.c
··· 408 408 case ICMPV6_DEST_UNREACH: 409 409 net_dbg_ratelimited("%s: Path to destination invalid or inactive!\n", 410 410 t->parms.name); 411 - break; 411 + if (code != ICMPV6_PORT_UNREACH) 412 + break; 413 + return; 412 414 case ICMPV6_TIME_EXCEED: 413 415 if (code == ICMPV6_EXC_HOPLIMIT) { 414 416 net_dbg_ratelimited("%s: Too small hop limit or routing loop in tunnel!\n", 415 417 t->parms.name); 418 + break; 416 419 } 417 - break; 420 + return; 418 421 case ICMPV6_PARAMPROB: 419 422 teli = 0; 420 423 if (code == ICMPV6_HDR_FIELD) ··· 433 430 net_dbg_ratelimited("%s: Recipient unable to parse tunneled packet!\n", 434 431 t->parms.name); 435 432 } 436 - break; 433 + return; 437 434 case ICMPV6_PKT_TOOBIG: 438 435 mtu = be32_to_cpu(info) - offset - t->tun_hlen; 439 436 if (t->dev->type == ARPHRD_ETHER) ··· 441 438 if (mtu < IPV6_MIN_MTU) 442 439 mtu = IPV6_MIN_MTU; 443 440 t->dev->mtu = mtu; 444 - break; 441 + return; 445 442 } 446 443 447 444 if (time_before(jiffies, t->err_time + IP6TUNNEL_ERR_TIMEO)) ··· 503 500 __u32 *pmtu, __be16 proto) 504 501 { 505 502 struct ip6_tnl *tunnel = netdev_priv(dev); 506 - __be16 protocol = (dev->type == ARPHRD_ETHER) ? 507 - htons(ETH_P_TEB) : proto; 503 + struct dst_entry *dst = skb_dst(skb); 504 + __be16 protocol; 508 505 509 506 if (dev->type == ARPHRD_ETHER) 510 507 IPCB(skb)->flags = 0; ··· 518 515 tunnel->o_seqno++; 519 516 520 517 /* Push GRE header. */ 518 + protocol = (dev->type == ARPHRD_ETHER) ? htons(ETH_P_TEB) : proto; 521 519 gre_build_header(skb, tunnel->tun_hlen, tunnel->parms.o_flags, 522 520 protocol, tunnel->parms.o_key, htonl(tunnel->o_seqno)); 521 + 522 + /* TooBig packet may have updated dst->dev's mtu */ 523 + if (dst && dst_mtu(dst) > dst->dev->mtu) 524 + dst->ops->update_pmtu(dst, NULL, skb, dst->dev->mtu); 523 525 524 526 return ip6_tnl_xmit(skb, dev, dsfield, fl6, encap_limit, pmtu, 525 527 NEXTHDR_GRE);
+6 -6
net/mac80211/cfg.c
··· 2727 2727 if (!ieee80211_sdata_running(sdata)) 2728 2728 return -ENETDOWN; 2729 2729 2730 - if (ieee80211_hw_check(&local->hw, HAS_RATE_CONTROL)) { 2731 - ret = drv_set_bitrate_mask(local, sdata, mask); 2732 - if (ret) 2733 - return ret; 2734 - } 2735 - 2736 2730 /* 2737 2731 * If active validate the setting and reject it if it doesn't leave 2738 2732 * at least one basic rate usable, since we really have to be able ··· 2740 2746 2741 2747 if (!(mask->control[band].legacy & basic_rates)) 2742 2748 return -EINVAL; 2749 + } 2750 + 2751 + if (ieee80211_hw_check(&local->hw, HAS_RATE_CONTROL)) { 2752 + ret = drv_set_bitrate_mask(local, sdata, mask); 2753 + if (ret) 2754 + return ret; 2743 2755 } 2744 2756 2745 2757 for (i = 0; i < NUM_NL80211_BANDS; i++) {
+35 -2
net/mac80211/key.c
··· 19 19 #include <linux/slab.h> 20 20 #include <linux/export.h> 21 21 #include <net/mac80211.h> 22 + #include <crypto/algapi.h> 22 23 #include <asm/unaligned.h> 23 24 #include "ieee80211_i.h" 24 25 #include "driver-ops.h" ··· 610 609 ieee80211_key_free_common(key); 611 610 } 612 611 612 + static bool ieee80211_key_identical(struct ieee80211_sub_if_data *sdata, 613 + struct ieee80211_key *old, 614 + struct ieee80211_key *new) 615 + { 616 + u8 tkip_old[WLAN_KEY_LEN_TKIP], tkip_new[WLAN_KEY_LEN_TKIP]; 617 + u8 *tk_old, *tk_new; 618 + 619 + if (!old || new->conf.keylen != old->conf.keylen) 620 + return false; 621 + 622 + tk_old = old->conf.key; 623 + tk_new = new->conf.key; 624 + 625 + /* 626 + * In station mode, don't compare the TX MIC key, as it's never used 627 + * and offloaded rekeying may not care to send it to the host. This 628 + * is the case in iwlwifi, for example. 629 + */ 630 + if (sdata->vif.type == NL80211_IFTYPE_STATION && 631 + new->conf.cipher == WLAN_CIPHER_SUITE_TKIP && 632 + new->conf.keylen == WLAN_KEY_LEN_TKIP && 633 + !(new->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE)) { 634 + memcpy(tkip_old, tk_old, WLAN_KEY_LEN_TKIP); 635 + memcpy(tkip_new, tk_new, WLAN_KEY_LEN_TKIP); 636 + memset(tkip_old + NL80211_TKIP_DATA_OFFSET_TX_MIC_KEY, 0, 8); 637 + memset(tkip_new + NL80211_TKIP_DATA_OFFSET_TX_MIC_KEY, 0, 8); 638 + tk_old = tkip_old; 639 + tk_new = tkip_new; 640 + } 641 + 642 + return !crypto_memneq(tk_old, tk_new, new->conf.keylen); 643 + } 644 + 613 645 int ieee80211_key_link(struct ieee80211_key *key, 614 646 struct ieee80211_sub_if_data *sdata, 615 647 struct sta_info *sta) ··· 668 634 * Silently accept key re-installation without really installing the 669 635 * new version of the key to avoid nonce reuse or replay issues. 
670 636 */ 671 - if (old_key && key->conf.keylen == old_key->conf.keylen && 672 - !memcmp(key->conf.key, old_key->conf.key, key->conf.keylen)) { 637 + if (ieee80211_key_identical(sdata, old_key, key)) { 673 638 ieee80211_key_free_unused(key); 674 639 ret = 0; 675 640 goto out;
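The mac80211 change above compares a candidate key against the installed one in constant time, masking out the TKIP TX MIC bytes in station mode. A userspace sketch of the comparison (using a hand-rolled constant-time inequality as a stand-in for `crypto_memneq`; the TX MIC offset of 16 follows the nl80211 TKIP key layout):

```c
#include <string.h>
#include <stdbool.h>

#define WLAN_KEY_LEN_TKIP 32
#define TKIP_TX_MIC_OFF   16	/* NL80211_TKIP_DATA_OFFSET_TX_MIC_KEY */

/* Constant-time inequality test, standing in for crypto_memneq(). */
static bool ct_memneq(const void *a, const void *b, size_t n)
{
	const unsigned char *pa = a, *pb = b;
	unsigned char diff = 0;

	while (n--)
		diff |= *pa++ ^ *pb++;
	return diff != 0;
}

/* Station-mode TKIP comparison: ignore the 8 TX MIC bytes, which
 * offloaded rekeying may not bother to send to the host. */
static bool tkip_keys_identical_sta(const unsigned char *old_key,
				    const unsigned char *new_key)
{
	unsigned char a[WLAN_KEY_LEN_TKIP], b[WLAN_KEY_LEN_TKIP];

	memcpy(a, old_key, WLAN_KEY_LEN_TKIP);
	memcpy(b, new_key, WLAN_KEY_LEN_TKIP);
	memset(a + TKIP_TX_MIC_OFF, 0, 8);
	memset(b + TKIP_TX_MIC_OFF, 0, 8);
	return !ct_memneq(a, b, WLAN_KEY_LEN_TKIP);
}
```

The point of replacing the plain `memcmp()` is both the timing-safe compare and treating a key differing only in the unused TX MIC as "identical", so the re-installation is silently ignored.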
+8 -8
net/rds/ib_send.c
··· 661 661 } 662 662 } 663 663 664 - rds_ib_set_wr_signal_state(ic, send, 0); 664 + rds_ib_set_wr_signal_state(ic, send, false); 665 665 666 666 /* 667 667 * Always signal the last one if we're stopping due to flow control. 668 668 */ 669 - if (ic->i_flowctl && flow_controlled && i == (work_alloc-1)) 670 - send->s_wr.send_flags |= IB_SEND_SIGNALED | IB_SEND_SOLICITED; 669 + if (ic->i_flowctl && flow_controlled && i == (work_alloc - 1)) { 670 + rds_ib_set_wr_signal_state(ic, send, true); 671 + send->s_wr.send_flags |= IB_SEND_SOLICITED; 672 + } 671 673 672 674 if (send->s_wr.send_flags & IB_SEND_SIGNALED) 673 675 nr_sig++; ··· 707 705 if (scat == &rm->data.op_sg[rm->data.op_count]) { 708 706 prev->s_op = ic->i_data_op; 709 707 prev->s_wr.send_flags |= IB_SEND_SOLICITED; 710 - if (!(prev->s_wr.send_flags & IB_SEND_SIGNALED)) { 711 - ic->i_unsignaled_wrs = rds_ib_sysctl_max_unsig_wrs; 712 - prev->s_wr.send_flags |= IB_SEND_SIGNALED; 713 - nr_sig++; 714 - } 708 + if (!(prev->s_wr.send_flags & IB_SEND_SIGNALED)) 709 + nr_sig += rds_ib_set_wr_signal_state(ic, prev, true); 715 710 ic->i_data_op = NULL; 716 711 } 717 712 ··· 791 792 send->s_atomic_wr.compare_add_mask = op->op_m_fadd.nocarry_mask; 792 793 send->s_atomic_wr.swap_mask = 0; 793 794 } 795 + send->s_wr.send_flags = 0; 794 796 nr_sig = rds_ib_set_wr_signal_state(ic, send, op->op_notify); 795 797 send->s_atomic_wr.wr.num_sge = 1; 796 798 send->s_atomic_wr.wr.next = NULL;
+1
net/sched/act_sample.c
··· 264 264 265 265 static void __exit sample_cleanup_module(void) 266 266 { 267 + rcu_barrier(); 267 268 tcf_unregister_action(&act_sample_ops, &sample_net_ops); 268 269 } 269 270
+57 -21
net/sched/cls_api.c
··· 77 77 } 78 78 EXPORT_SYMBOL(register_tcf_proto_ops); 79 79 80 + static struct workqueue_struct *tc_filter_wq; 81 + 80 82 int unregister_tcf_proto_ops(struct tcf_proto_ops *ops) 81 83 { 82 84 struct tcf_proto_ops *t; ··· 88 86 * tcf_proto_ops's destroy() handler. 89 87 */ 90 88 rcu_barrier(); 89 + flush_workqueue(tc_filter_wq); 91 90 92 91 write_lock(&cls_mod_lock); 93 92 list_for_each_entry(t, &tcf_proto_base, head) { ··· 102 99 return rc; 103 100 } 104 101 EXPORT_SYMBOL(unregister_tcf_proto_ops); 102 + 103 + bool tcf_queue_work(struct work_struct *work) 104 + { 105 + return queue_work(tc_filter_wq, work); 106 + } 107 + EXPORT_SYMBOL(tcf_queue_work); 105 108 106 109 /* Select new prio value from the range, managed by kernel. */ 107 110 ··· 317 308 } 318 309 EXPORT_SYMBOL(tcf_block_get); 319 310 320 - void tcf_block_put_ext(struct tcf_block *block, 321 - struct tcf_proto __rcu **p_filter_chain, struct Qdisc *q, 322 - struct tcf_block_ext_info *ei) 311 + static void tcf_block_put_final(struct work_struct *work) 323 312 { 313 + struct tcf_block *block = container_of(work, struct tcf_block, work); 324 314 struct tcf_chain *chain, *tmp; 325 315 326 - if (!block) 327 - return; 316 + /* At this point, all the chains should have refcnt == 1. */ 317 + rtnl_lock(); 318 + list_for_each_entry_safe(chain, tmp, &block->chain_list, list) 319 + tcf_chain_put(chain); 320 + rtnl_unlock(); 321 + kfree(block); 322 + } 328 323 329 - tcf_block_offload_unbind(block, q, ei); 324 + /* XXX: Standalone actions are not allowed to jump to any chain, and bound 325 + * actions should be all removed after flushing. However, filters are destroyed 326 + * in RCU callbacks, we have to hold the chains first, otherwise we would 327 + * always race with RCU callbacks on this list without proper locking. 
328 + */ 329 + static void tcf_block_put_deferred(struct work_struct *work) 330 + { 331 + struct tcf_block *block = container_of(work, struct tcf_block, work); 332 + struct tcf_chain *chain; 330 333 331 - /* XXX: Standalone actions are not allowed to jump to any chain, and 332 - * bound actions should be all removed after flushing. However, 333 - * filters are destroyed in RCU callbacks, we have to hold the chains 334 - * first, otherwise we would always race with RCU callbacks on this list 335 - * without proper locking. 336 - */ 337 - 338 - /* Wait for existing RCU callbacks to cool down. */ 339 - rcu_barrier(); 340 - 334 + rtnl_lock(); 341 335 /* Hold a refcnt for all chains, except 0, in case they are gone. */ 342 336 list_for_each_entry(chain, &block->chain_list, list) 343 337 if (chain->index) ··· 350 338 list_for_each_entry(chain, &block->chain_list, list) 351 339 tcf_chain_flush(chain); 352 340 353 - /* Wait for RCU callbacks to release the reference count. */ 341 + INIT_WORK(&block->work, tcf_block_put_final); 342 + /* Wait for RCU callbacks to release the reference count and make 343 + * sure their works have been queued before this. 344 + */ 354 345 rcu_barrier(); 346 + tcf_queue_work(&block->work); 347 + rtnl_unlock(); 348 + } 355 349 356 - /* At this point, all the chains should have refcnt == 1. */ 357 - list_for_each_entry_safe(chain, tmp, &block->chain_list, list) 358 - tcf_chain_put(chain); 359 - kfree(block); 350 + void tcf_block_put_ext(struct tcf_block *block, 351 + struct tcf_proto __rcu **p_filter_chain, struct Qdisc *q, 352 + struct tcf_block_ext_info *ei) 353 + { 354 + if (!block) 355 + return; 356 + 357 + tcf_block_offload_unbind(block, q, ei); 358 + 359 + INIT_WORK(&block->work, tcf_block_put_deferred); 360 + /* Wait for existing RCU callbacks to cool down, make sure their works 361 + * have been queued before this. We can not flush pending works here 362 + * because we are holding the RTNL lock. 
363 + */ 364 + rcu_barrier(); 365 + tcf_queue_work(&block->work); 360 366 } 361 367 EXPORT_SYMBOL(tcf_block_put_ext); 362 368 ··· 384 354 385 355 tcf_block_put_ext(block, NULL, block->q, &ei); 386 356 } 357 + 387 358 EXPORT_SYMBOL(tcf_block_put); 388 359 389 360 struct tcf_block_cb { ··· 1082 1051 #ifdef CONFIG_NET_CLS_ACT 1083 1052 LIST_HEAD(actions); 1084 1053 1054 + ASSERT_RTNL(); 1085 1055 tcf_exts_to_list(exts, &actions); 1086 1056 tcf_action_destroy(&actions, TCA_ACT_UNBIND); 1087 1057 kfree(exts->actions); ··· 1261 1229 1262 1230 static int __init tc_filter_init(void) 1263 1231 { 1232 + tc_filter_wq = alloc_ordered_workqueue("tc_filter_workqueue", 0); 1233 + if (!tc_filter_wq) 1234 + return -ENOMEM; 1235 + 1264 1236 rtnl_register(PF_UNSPEC, RTM_NEWTFILTER, tc_ctl_tfilter, NULL, 0); 1265 1237 rtnl_register(PF_UNSPEC, RTM_DELTFILTER, tc_ctl_tfilter, NULL, 0); 1266 1238 rtnl_register(PF_UNSPEC, RTM_GETTFILTER, tc_ctl_tfilter,
+18 -4
net/sched/cls_basic.c
··· 35 35 struct tcf_result res; 36 36 struct tcf_proto *tp; 37 37 struct list_head link; 38 - struct rcu_head rcu; 38 + union { 39 + struct work_struct work; 40 + struct rcu_head rcu; 41 + }; 39 42 }; 40 43 41 44 static int basic_classify(struct sk_buff *skb, const struct tcf_proto *tp, ··· 87 84 return 0; 88 85 } 89 86 87 + static void basic_delete_filter_work(struct work_struct *work) 88 + { 89 + struct basic_filter *f = container_of(work, struct basic_filter, work); 90 + 91 + rtnl_lock(); 92 + tcf_exts_destroy(&f->exts); 93 + tcf_em_tree_destroy(&f->ematches); 94 + rtnl_unlock(); 95 + 96 + kfree(f); 97 + } 98 + 90 99 static void basic_delete_filter(struct rcu_head *head) 91 100 { 92 101 struct basic_filter *f = container_of(head, struct basic_filter, rcu); 93 102 94 - tcf_exts_destroy(&f->exts); 95 - tcf_em_tree_destroy(&f->ematches); 96 - kfree(f); 103 + INIT_WORK(&f->work, basic_delete_filter_work); 104 + tcf_queue_work(&f->work); 97 105 } 98 106 99 107 static void basic_destroy(struct tcf_proto *tp)
+17 -2
net/sched/cls_bpf.c
··· 50 50 struct sock_filter *bpf_ops; 51 51 const char *bpf_name; 52 52 struct tcf_proto *tp; 53 - struct rcu_head rcu; 53 + union { 54 + struct work_struct work; 55 + struct rcu_head rcu; 56 + }; 54 57 }; 55 58 56 59 static const struct nla_policy bpf_policy[TCA_BPF_MAX + 1] = { ··· 272 269 kfree(prog); 273 270 } 274 271 272 + static void cls_bpf_delete_prog_work(struct work_struct *work) 273 + { 274 + struct cls_bpf_prog *prog = container_of(work, struct cls_bpf_prog, work); 275 + 276 + rtnl_lock(); 277 + __cls_bpf_delete_prog(prog); 278 + rtnl_unlock(); 279 + } 280 + 275 281 static void cls_bpf_delete_prog_rcu(struct rcu_head *rcu) 276 282 { 277 - __cls_bpf_delete_prog(container_of(rcu, struct cls_bpf_prog, rcu)); 283 + struct cls_bpf_prog *prog = container_of(rcu, struct cls_bpf_prog, rcu); 284 + 285 + INIT_WORK(&prog->work, cls_bpf_delete_prog_work); 286 + tcf_queue_work(&prog->work); 278 287 } 279 288 280 289 static void __cls_bpf_delete(struct tcf_proto *tp, struct cls_bpf_prog *prog)
+18 -4
net/sched/cls_cgroup.c
··· 23 23 struct tcf_exts exts; 24 24 struct tcf_ematch_tree ematches; 25 25 struct tcf_proto *tp; 26 - struct rcu_head rcu; 26 + union { 27 + struct work_struct work; 28 + struct rcu_head rcu; 29 + }; 27 30 }; 28 31 29 32 static int cls_cgroup_classify(struct sk_buff *skb, const struct tcf_proto *tp, ··· 60 57 [TCA_CGROUP_EMATCHES] = { .type = NLA_NESTED }, 61 58 }; 62 59 60 + static void cls_cgroup_destroy_work(struct work_struct *work) 61 + { 62 + struct cls_cgroup_head *head = container_of(work, 63 + struct cls_cgroup_head, 64 + work); 65 + rtnl_lock(); 66 + tcf_exts_destroy(&head->exts); 67 + tcf_em_tree_destroy(&head->ematches); 68 + kfree(head); 69 + rtnl_unlock(); 70 + } 71 + 63 72 static void cls_cgroup_destroy_rcu(struct rcu_head *root) 64 73 { 65 74 struct cls_cgroup_head *head = container_of(root, 66 75 struct cls_cgroup_head, 67 76 rcu); 68 77 69 - tcf_exts_destroy(&head->exts); 70 - tcf_em_tree_destroy(&head->ematches); 71 - kfree(head); 78 + INIT_WORK(&head->work, cls_cgroup_destroy_work); 79 + tcf_queue_work(&head->work); 72 80 } 73 81 74 82 static int cls_cgroup_change(struct net *net, struct sk_buff *in_skb,
+16 -3
net/sched/cls_flow.c
··· 57 57 u32 divisor; 58 58 u32 baseclass; 59 59 u32 hashrnd; 60 - struct rcu_head rcu; 60 + union { 61 + struct work_struct work; 62 + struct rcu_head rcu; 63 + }; 61 64 }; 62 65 63 66 static inline u32 addr_fold(void *addr) ··· 372 369 [TCA_FLOW_PERTURB] = { .type = NLA_U32 }, 373 370 }; 374 371 375 - static void flow_destroy_filter(struct rcu_head *head) 372 + static void flow_destroy_filter_work(struct work_struct *work) 376 373 { 377 - struct flow_filter *f = container_of(head, struct flow_filter, rcu); 374 + struct flow_filter *f = container_of(work, struct flow_filter, work); 378 375 376 + rtnl_lock(); 379 377 del_timer_sync(&f->perturb_timer); 380 378 tcf_exts_destroy(&f->exts); 381 379 tcf_em_tree_destroy(&f->ematches); 382 380 kfree(f); 381 + rtnl_unlock(); 382 + } 383 + 384 + static void flow_destroy_filter(struct rcu_head *head) 385 + { 386 + struct flow_filter *f = container_of(head, struct flow_filter, rcu); 387 + 388 + INIT_WORK(&f->work, flow_destroy_filter_work); 389 + tcf_queue_work(&f->work); 383 390 } 384 391 385 392 static int flow_change(struct net *net, struct sk_buff *in_skb,
+17 -3
net/sched/cls_flower.c
··· 87 87 struct list_head list; 88 88 u32 handle; 89 89 u32 flags; 90 - struct rcu_head rcu; 90 + union { 91 + struct work_struct work; 92 + struct rcu_head rcu; 93 + }; 94 + struct net_device *hw_dev; 91 95 }; 92 96 93 97 static unsigned short int fl_mask_range(const struct fl_flow_mask *mask) ··· 193 189 return 0; 194 190 } 195 191 192 + static void fl_destroy_filter_work(struct work_struct *work) 193 + { 194 + struct cls_fl_filter *f = container_of(work, struct cls_fl_filter, work); 195 + 196 + rtnl_lock(); 197 + tcf_exts_destroy(&f->exts); 198 + kfree(f); 199 + rtnl_unlock(); 200 + } 201 + 196 202 static void fl_destroy_filter(struct rcu_head *head) 197 203 { 198 204 struct cls_fl_filter *f = container_of(head, struct cls_fl_filter, rcu); 199 205 200 - tcf_exts_destroy(&f->exts); 201 - kfree(f); 206 + INIT_WORK(&f->work, fl_destroy_filter_work); 207 + tcf_queue_work(&f->work); 202 208 } 203 209 204 210 static void fl_hw_destroy_filter(struct tcf_proto *tp, struct cls_fl_filter *f)
+16 -3
net/sched/cls_fw.c
··· 47 47 #endif /* CONFIG_NET_CLS_IND */ 48 48 struct tcf_exts exts; 49 49 struct tcf_proto *tp; 50 - struct rcu_head rcu; 50 + union { 51 + struct work_struct work; 52 + struct rcu_head rcu; 53 + }; 51 54 }; 52 55 53 56 static u32 fw_hash(u32 handle) ··· 125 122 return 0; 126 123 } 127 124 125 + static void fw_delete_filter_work(struct work_struct *work) 126 + { 127 + struct fw_filter *f = container_of(work, struct fw_filter, work); 128 + 129 + rtnl_lock(); 130 + tcf_exts_destroy(&f->exts); 131 + kfree(f); 132 + rtnl_unlock(); 133 + } 134 + 128 135 static void fw_delete_filter(struct rcu_head *head) 129 136 { 130 137 struct fw_filter *f = container_of(head, struct fw_filter, rcu); 131 138 132 - tcf_exts_destroy(&f->exts); 133 - kfree(f); 139 + INIT_WORK(&f->work, fw_delete_filter_work); 140 + tcf_queue_work(&f->work); 134 141 } 135 142 136 143 static void fw_destroy(struct tcf_proto *tp)
+16 -3
net/sched/cls_matchall.c
··· 21 21 struct tcf_result res; 22 22 u32 handle; 23 23 u32 flags; 24 - struct rcu_head rcu; 24 + union { 25 + struct work_struct work; 26 + struct rcu_head rcu; 27 + }; 25 28 }; 26 29 27 30 static int mall_classify(struct sk_buff *skb, const struct tcf_proto *tp, ··· 44 41 return 0; 45 42 } 46 43 44 + static void mall_destroy_work(struct work_struct *work) 45 + { 46 + struct cls_mall_head *head = container_of(work, struct cls_mall_head, 47 + work); 48 + rtnl_lock(); 49 + tcf_exts_destroy(&head->exts); 50 + kfree(head); 51 + rtnl_unlock(); 52 + } 53 + 47 54 static void mall_destroy_rcu(struct rcu_head *rcu) 48 55 { 49 56 struct cls_mall_head *head = container_of(rcu, struct cls_mall_head, 50 57 rcu); 51 58 52 - tcf_exts_destroy(&head->exts); 53 - kfree(head); 59 + INIT_WORK(&head->work, mall_destroy_work); 60 + tcf_queue_work(&head->work); 54 61 } 55 62 56 63 static void mall_destroy_hw_filter(struct tcf_proto *tp,
+16 -3
net/sched/cls_route.c
··· 57 57 u32 handle; 58 58 struct route4_bucket *bkt; 59 59 struct tcf_proto *tp; 60 - struct rcu_head rcu; 60 + union { 61 + struct work_struct work; 62 + struct rcu_head rcu; 63 + }; 61 64 }; 62 65 63 66 #define ROUTE4_FAILURE ((struct route4_filter *)(-1L)) ··· 257 254 return 0; 258 255 } 259 256 257 + static void route4_delete_filter_work(struct work_struct *work) 258 + { 259 + struct route4_filter *f = container_of(work, struct route4_filter, work); 260 + 261 + rtnl_lock(); 262 + tcf_exts_destroy(&f->exts); 263 + kfree(f); 264 + rtnl_unlock(); 265 + } 266 + 260 267 static void route4_delete_filter(struct rcu_head *head) 261 268 { 262 269 struct route4_filter *f = container_of(head, struct route4_filter, rcu); 263 270 264 - tcf_exts_destroy(&f->exts); 265 - kfree(f); 271 + INIT_WORK(&f->work, route4_delete_filter_work); 272 + tcf_queue_work(&f->work); 266 273 } 267 274 268 275 static void route4_destroy(struct tcf_proto *tp)
+16 -3
net/sched/cls_rsvp.h
··· 97 97 98 98 u32 handle; 99 99 struct rsvp_session *sess; 100 - struct rcu_head rcu; 100 + union { 101 + struct work_struct work; 102 + struct rcu_head rcu; 103 + }; 101 104 }; 102 105 103 106 static inline unsigned int hash_dst(__be32 *dst, u8 protocol, u8 tunnelid) ··· 285 282 return -ENOBUFS; 286 283 } 287 284 285 + static void rsvp_delete_filter_work(struct work_struct *work) 286 + { 287 + struct rsvp_filter *f = container_of(work, struct rsvp_filter, work); 288 + 289 + rtnl_lock(); 290 + tcf_exts_destroy(&f->exts); 291 + kfree(f); 292 + rtnl_unlock(); 293 + } 294 + 288 295 static void rsvp_delete_filter_rcu(struct rcu_head *head) 289 296 { 290 297 struct rsvp_filter *f = container_of(head, struct rsvp_filter, rcu); 291 298 292 - tcf_exts_destroy(&f->exts); 293 - kfree(f); 299 + INIT_WORK(&f->work, rsvp_delete_filter_work); 300 + tcf_queue_work(&f->work); 294 301 } 295 302 296 303 static void rsvp_delete_filter(struct tcf_proto *tp, struct rsvp_filter *f)
+33 -5
net/sched/cls_tcindex.c
··· 28 28 struct tcindex_filter_result { 29 29 struct tcf_exts exts; 30 30 struct tcf_result res; 31 - struct rcu_head rcu; 31 + union { 32 + struct work_struct work; 33 + struct rcu_head rcu; 34 + }; 32 35 }; 33 36 34 37 struct tcindex_filter { 35 38 u16 key; 36 39 struct tcindex_filter_result result; 37 40 struct tcindex_filter __rcu *next; 38 - struct rcu_head rcu; 41 + union { 42 + struct work_struct work; 43 + struct rcu_head rcu; 44 + }; 39 45 }; 40 46 41 47 ··· 142 136 return 0; 143 137 } 144 138 139 + static void tcindex_destroy_rexts_work(struct work_struct *work) 140 + { 141 + struct tcindex_filter_result *r; 142 + 143 + r = container_of(work, struct tcindex_filter_result, work); 144 + rtnl_lock(); 145 + tcf_exts_destroy(&r->exts); 146 + rtnl_unlock(); 147 + } 148 + 145 149 static void tcindex_destroy_rexts(struct rcu_head *head) 146 150 { 147 151 struct tcindex_filter_result *r; 148 152 149 153 r = container_of(head, struct tcindex_filter_result, rcu); 150 - tcf_exts_destroy(&r->exts); 154 + INIT_WORK(&r->work, tcindex_destroy_rexts_work); 155 + tcf_queue_work(&r->work); 156 + } 157 + 158 + static void tcindex_destroy_fexts_work(struct work_struct *work) 159 + { 160 + struct tcindex_filter *f = container_of(work, struct tcindex_filter, 161 + work); 162 + 163 + rtnl_lock(); 164 + tcf_exts_destroy(&f->result.exts); 165 + kfree(f); 166 + rtnl_unlock(); 151 167 } 152 168 153 169 static void tcindex_destroy_fexts(struct rcu_head *head) ··· 177 149 struct tcindex_filter *f = container_of(head, struct tcindex_filter, 178 150 rcu); 179 151 180 - tcf_exts_destroy(&f->result.exts); 181 - kfree(f); 152 + INIT_WORK(&f->work, tcindex_destroy_fexts_work); 153 + tcf_queue_work(&f->work); 182 154 } 183 155 184 156 static int tcindex_delete(struct tcf_proto *tp, void *arg, bool *last)
+26 -3
net/sched/cls_u32.c
··· 69 69 u32 __percpu *pcpu_success; 70 70 #endif 71 71 struct tcf_proto *tp; 72 - struct rcu_head rcu; 72 + union { 73 + struct work_struct work; 74 + struct rcu_head rcu; 75 + }; 73 76 /* The 'sel' field MUST be the last field in structure to allow for 74 77 * tc_u32_keys allocated at end of structure. 75 78 */ ··· 421 418 * this the u32_delete_key_rcu variant does not free the percpu 422 419 * statistics. 423 420 */ 421 + static void u32_delete_key_work(struct work_struct *work) 422 + { 423 + struct tc_u_knode *key = container_of(work, struct tc_u_knode, work); 424 + 425 + rtnl_lock(); 426 + u32_destroy_key(key->tp, key, false); 427 + rtnl_unlock(); 428 + } 429 + 424 430 static void u32_delete_key_rcu(struct rcu_head *rcu) 425 431 { 426 432 struct tc_u_knode *key = container_of(rcu, struct tc_u_knode, rcu); 427 433 428 - u32_destroy_key(key->tp, key, false); 434 + INIT_WORK(&key->work, u32_delete_key_work); 435 + tcf_queue_work(&key->work); 429 436 } 430 437 431 438 /* u32_delete_key_freepf_rcu is the rcu callback variant ··· 445 432 * for the variant that should be used with keys return from 446 433 * u32_init_knode() 447 434 */ 435 + static void u32_delete_key_freepf_work(struct work_struct *work) 436 + { 437 + struct tc_u_knode *key = container_of(work, struct tc_u_knode, work); 438 + 439 + rtnl_lock(); 440 + u32_destroy_key(key->tp, key, true); 441 + rtnl_unlock(); 442 + } 443 + 448 444 static void u32_delete_key_freepf_rcu(struct rcu_head *rcu) 449 445 { 450 446 struct tc_u_knode *key = container_of(rcu, struct tc_u_knode, rcu); 451 447 452 - u32_destroy_key(key->tp, key, true); 448 + INIT_WORK(&key->work, u32_delete_key_freepf_work); 449 + tcf_queue_work(&key->work); 453 450 } 454 451 455 452 static int u32_delete_key(struct tcf_proto *tp, struct tc_u_knode *key)
+2
net/sched/sch_api.c
··· 301 301 { 302 302 struct Qdisc *q; 303 303 304 + if (!handle) 305 + return NULL; 304 306 q = qdisc_match_from_root(dev->qdisc, handle); 305 307 if (q) 306 308 goto out;
+11 -11
net/sctp/input.c
··· 794 794 struct sctp_hash_cmp_arg { 795 795 const union sctp_addr *paddr; 796 796 const struct net *net; 797 - u16 lport; 797 + __be16 lport; 798 798 }; 799 799 800 800 static inline int sctp_hash_cmp(struct rhashtable_compare_arg *arg, ··· 820 820 return err; 821 821 } 822 822 823 - static inline u32 sctp_hash_obj(const void *data, u32 len, u32 seed) 823 + static inline __u32 sctp_hash_obj(const void *data, u32 len, u32 seed) 824 824 { 825 825 const struct sctp_transport *t = data; 826 826 const union sctp_addr *paddr = &t->ipaddr; 827 827 const struct net *net = sock_net(t->asoc->base.sk); 828 - u16 lport = htons(t->asoc->base.bind_addr.port); 829 - u32 addr; 828 + __be16 lport = htons(t->asoc->base.bind_addr.port); 829 + __u32 addr; 830 830 831 831 if (paddr->sa.sa_family == AF_INET6) 832 832 addr = jhash(&paddr->v6.sin6_addr, 16, seed); 833 833 else 834 - addr = paddr->v4.sin_addr.s_addr; 834 + addr = (__force __u32)paddr->v4.sin_addr.s_addr; 835 835 836 - return jhash_3words(addr, ((__u32)paddr->v4.sin_port) << 16 | 836 + return jhash_3words(addr, ((__force __u32)paddr->v4.sin_port) << 16 | 837 837 (__force __u32)lport, net_hash_mix(net), seed); 838 838 839 839 840 - static inline u32 sctp_hash_key(const void *data, u32 len, u32 seed) 840 + static inline __u32 sctp_hash_key(const void *data, u32 len, u32 seed) 841 841 { 842 842 const struct sctp_hash_cmp_arg *x = data; 843 843 const union sctp_addr *paddr = x->paddr; 844 844 const struct net *net = x->net; 845 - u16 lport = x->lport; 846 - u32 addr; 845 + __be16 lport = x->lport; 846 + __u32 addr; 847 847 848 848 if (paddr->sa.sa_family == AF_INET6) 849 849 addr = jhash(&paddr->v6.sin6_addr, 16, seed); 850 850 else 851 - addr = paddr->v4.sin_addr.s_addr; 851 + addr = (__force __u32)paddr->v4.sin_addr.s_addr; 852 852 853 - return jhash_3words(addr, ((__u32)paddr->v4.sin_port) << 16 | 853 + return jhash_3words(addr, ((__force __u32)paddr->v4.sin_port) << 16 | 854 854 (__force __u32)lport, net_hash_mix(net), seed); 855 855 856 856
+5 -3
net/sctp/ipv6.c
··· 738 738 /* Was this packet marked by Explicit Congestion Notification? */ 739 739 static int sctp_v6_is_ce(const struct sk_buff *skb) 740 740 { 741 - return *((__u32 *)(ipv6_hdr(skb))) & htonl(1 << 20); 741 + return *((__u32 *)(ipv6_hdr(skb))) & (__force __u32)htonl(1 << 20); 742 742 } 743 743 744 744 /* Dump the v6 addr to the seq file. */ ··· 882 882 net = sock_net(&opt->inet.sk); 883 883 rcu_read_lock(); 884 884 dev = dev_get_by_index_rcu(net, addr->v6.sin6_scope_id); 885 - if (!dev || 886 - !ipv6_chk_addr(net, &addr->v6.sin6_addr, dev, 0)) { 885 + if (!dev || !(opt->inet.freebind || 886 + net->ipv6.sysctl.ip_nonlocal_bind || 887 + ipv6_chk_addr(net, &addr->v6.sin6_addr, 888 + dev, 0))) { 887 889 rcu_read_unlock(); 888 890 return 0; 889 891 }
+5 -4
net/sctp/sm_make_chunk.c
··· 2854 2854 addr_param_len = af->to_addr_param(addr, &addr_param); 2855 2855 param.param_hdr.type = flags; 2856 2856 param.param_hdr.length = htons(paramlen + addr_param_len); 2857 - param.crr_id = i; 2857 + param.crr_id = htonl(i); 2858 2858 2859 2859 sctp_addto_chunk(retval, paramlen, &param); 2860 2860 sctp_addto_chunk(retval, addr_param_len, &addr_param); ··· 2867 2867 addr_param_len = af->to_addr_param(addr, &addr_param); 2868 2868 param.param_hdr.type = SCTP_PARAM_DEL_IP; 2869 2869 param.param_hdr.length = htons(paramlen + addr_param_len); 2870 - param.crr_id = i; 2870 + param.crr_id = htonl(i); 2871 2871 2872 2872 sctp_addto_chunk(retval, paramlen, &param); 2873 2873 sctp_addto_chunk(retval, addr_param_len, &addr_param); ··· 3591 3591 */ 3592 3592 struct sctp_chunk *sctp_make_strreset_req( 3593 3593 const struct sctp_association *asoc, 3594 - __u16 stream_num, __u16 *stream_list, 3594 + __u16 stream_num, __be16 *stream_list, 3595 3595 bool out, bool in) 3596 3596 { 3597 3597 struct sctp_strreset_outreq outreq; ··· 3788 3788 { 3789 3789 struct sctp_reconf_chunk *hdr; 3790 3790 union sctp_params param; 3791 - __u16 last = 0, cnt = 0; 3791 + __be16 last = 0; 3792 + __u16 cnt = 0; 3792 3793 3793 3794 hdr = (struct sctp_reconf_chunk *)chunk->chunk_hdr; 3794 3795 sctp_walk_params(param, hdr, params) {
+4 -4
net/sctp/sm_sideeffect.c
··· 1629 1629 break; 1630 1630 1631 1631 case SCTP_CMD_INIT_FAILED: 1632 - sctp_cmd_init_failed(commands, asoc, cmd->obj.err); 1632 + sctp_cmd_init_failed(commands, asoc, cmd->obj.u32); 1633 1633 break; 1634 1634 1635 1635 case SCTP_CMD_ASSOC_FAILED: 1636 1636 sctp_cmd_assoc_failed(commands, asoc, event_type, 1637 - subtype, chunk, cmd->obj.err); 1637 + subtype, chunk, cmd->obj.u32); 1638 1638 break; 1639 1639 1640 1640 case SCTP_CMD_INIT_COUNTER_INC: ··· 1702 1702 case SCTP_CMD_PROCESS_CTSN: 1703 1703 /* Dummy up a SACK for processing. */ 1704 1704 sackh.cum_tsn_ack = cmd->obj.be32; 1705 - sackh.a_rwnd = asoc->peer.rwnd + 1706 - asoc->outqueue.outstanding_bytes; 1705 + sackh.a_rwnd = htonl(asoc->peer.rwnd + 1706 + asoc->outqueue.outstanding_bytes); 1707 1707 sackh.num_gap_ack_blocks = 0; 1708 1708 sackh.num_dup_tsns = 0; 1709 1709 chunk->subh.sack_hdr = &sackh;
+32
net/sctp/socket.c
··· 171 171 sk_mem_charge(sk, chunk->skb->truesize); 172 172 } 173 173 174 + static void sctp_clear_owner_w(struct sctp_chunk *chunk) 175 + { 176 + skb_orphan(chunk->skb); 177 + } 178 + 179 + static void sctp_for_each_tx_datachunk(struct sctp_association *asoc, 180 + void (*cb)(struct sctp_chunk *)) 181 + 182 + { 183 + struct sctp_outq *q = &asoc->outqueue; 184 + struct sctp_transport *t; 185 + struct sctp_chunk *chunk; 186 + 187 + list_for_each_entry(t, &asoc->peer.transport_addr_list, transports) 188 + list_for_each_entry(chunk, &t->transmitted, transmitted_list) 189 + cb(chunk); 190 + 191 + list_for_each_entry(chunk, &q->retransmit, list) 192 + cb(chunk); 193 + 194 + list_for_each_entry(chunk, &q->sacked, list) 195 + cb(chunk); 196 + 197 + list_for_each_entry(chunk, &q->abandoned, list) 198 + cb(chunk); 199 + 200 + list_for_each_entry(chunk, &q->out_chunk_list, list) 201 + cb(chunk); 202 + } 203 + 174 204 /* Verify that this is a valid address. */ 175 205 static inline int sctp_verify_addr(struct sock *sk, union sctp_addr *addr, 176 206 int len) ··· 8409 8379 * paths won't try to lock it and then oldsk. 8410 8380 */ 8411 8381 lock_sock_nested(newsk, SINGLE_DEPTH_NESTING); 8382 + sctp_for_each_tx_datachunk(assoc, sctp_clear_owner_w); 8412 8383 sctp_assoc_migrate(assoc, newsk); 8384 + sctp_for_each_tx_datachunk(assoc, sctp_set_owner_w); 8413 8385 8414 8386 /* If the association on the newsk is already closed before accept() 8415 8387 * is called, set RCV_SHUTDOWN flag.
+18 -10
net/sctp/stream.c
··· 261 261 __u16 i, str_nums, *str_list; 262 262 struct sctp_chunk *chunk; 263 263 int retval = -EINVAL; 264 + __be16 *nstr_list; 264 265 bool out, in; 265 266 266 267 if (!asoc->peer.reconf_capable || ··· 292 291 if (str_list[i] >= stream->incnt) 293 292 goto out; 294 293 295 - for (i = 0; i < str_nums; i++) 296 - str_list[i] = htons(str_list[i]); 294 + nstr_list = kcalloc(str_nums, sizeof(__be16), GFP_KERNEL); 295 + if (!nstr_list) { 296 + retval = -ENOMEM; 297 + goto out; 298 + } 297 299 298 - chunk = sctp_make_strreset_req(asoc, str_nums, str_list, out, in); 299 - 300 300 for (i = 0; i < str_nums; i++) 301 - str_list[i] = ntohs(str_list[i]); 301 + nstr_list[i] = htons(str_list[i]); 302 + 303 + chunk = sctp_make_strreset_req(asoc, str_nums, nstr_list, out, in); 304 + 305 + kfree(nstr_list); 302 306 303 307 if (!chunk) { 304 308 retval = -ENOMEM; ··· 448 442 } 449 443 450 444 static struct sctp_paramhdr *sctp_chunk_lookup_strreset_param( 451 - struct sctp_association *asoc, __u32 resp_seq, 445 + struct sctp_association *asoc, __be32 resp_seq, 452 446 __be16 type) 453 447 { 454 448 struct sctp_chunk *chunk = asoc->strreset_chunk; ··· 488 482 { 489 483 struct sctp_strreset_outreq *outreq = param.v; 490 484 struct sctp_stream *stream = &asoc->stream; 491 - __u16 i, nums, flags = 0, *str_p = NULL; 492 485 __u32 result = SCTP_STRRESET_DENIED; 486 + __u16 i, nums, flags = 0; 487 + __be16 *str_p = NULL; 493 488 __u32 request_seq; 494 489 495 490 request_seq = ntohl(outreq->request_seq); ··· 583 576 struct sctp_stream *stream = &asoc->stream; 584 577 __u32 result = SCTP_STRRESET_DENIED; 585 578 struct sctp_chunk *chunk = NULL; 586 - __u16 i, nums, *str_p; 587 579 __u32 request_seq; 580 + __u16 i, nums; 581 + __be16 *str_p; 588 582 589 583 request_seq = ntohl(inreq->request_seq); 590 584 if (TSN_lt(asoc->strreset_inseq, request_seq) || ··· 905 897 906 898 if (req->type == SCTP_PARAM_RESET_OUT_REQUEST) { 907 899 struct sctp_strreset_outreq *outreq; 908 - __u16 *str_p; 900 + __be16 *str_p; 909 901 910 902 outreq = (struct sctp_strreset_outreq *)req; 911 903 str_p = outreq->list_of_streams; ··· 930 922 nums, str_p, GFP_ATOMIC); 931 923 } else if (req->type == SCTP_PARAM_RESET_IN_REQUEST) { 932 924 struct sctp_strreset_inreq *inreq; 933 - __u16 *str_p; 925 + __be16 *str_p; 934 926 935 927 /* if the result is performed, it's impossible for inreq */ 936 928 if (result == SCTP_STRRESET_PERFORMED)
+1 -1
net/sctp/ulpevent.c
··· 847 847 848 848 struct sctp_ulpevent *sctp_ulpevent_make_stream_reset_event( 849 849 const struct sctp_association *asoc, __u16 flags, __u16 stream_num, 850 - __u16 *stream_list, gfp_t gfp) 850 + __be16 *stream_list, gfp_t gfp) 851 851 { 852 852 struct sctp_stream_reset_event *sreset; 853 853 struct sctp_ulpevent *event;
+8 -9
net/strparser/strparser.c
··· 49 49 { 50 50 /* Unrecoverable error in receive */ 51 51 52 - del_timer(&strp->msg_timer); 52 + cancel_delayed_work(&strp->msg_timer_work); 53 53 54 54 if (strp->stopped) 55 55 return; ··· 68 68 static void strp_start_timer(struct strparser *strp, long timeo) 69 69 { 70 70 if (timeo) 71 - mod_timer(&strp->msg_timer, timeo); 71 + mod_delayed_work(strp_wq, &strp->msg_timer_work, timeo); 72 72 } 73 73 74 74 /* Lower lock held */ ··· 319 319 eaten += (cand_len - extra); 320 320 321 321 /* Hurray, we have a new message! */ 322 - del_timer(&strp->msg_timer); 322 + cancel_delayed_work(&strp->msg_timer_work); 323 323 strp->skb_head = NULL; 324 324 STRP_STATS_INCR(strp->stats.msgs); 325 325 ··· 450 450 do_strp_work(container_of(w, struct strparser, work)); 451 451 } 452 452 453 - static void strp_msg_timeout(unsigned long arg) 453 + static void strp_msg_timeout(struct work_struct *w) 454 454 { 455 - struct strparser *strp = (struct strparser *)arg; 455 + struct strparser *strp = container_of(w, struct strparser, 456 + msg_timer_work.work); 456 457 457 458 /* Message assembly timed out */ 458 459 STRP_STATS_INCR(strp->stats.msg_timeouts); ··· 506 505 strp->cb.read_sock_done = cb->read_sock_done ? : default_read_sock_done; 507 506 strp->cb.abort_parser = cb->abort_parser ? : strp_abort_strp; 508 507 509 - setup_timer(&strp->msg_timer, strp_msg_timeout, 510 - (unsigned long)strp); 511 - 508 + INIT_DELAYED_WORK(&strp->msg_timer_work, strp_msg_timeout); 512 509 INIT_WORK(&strp->work, strp_work); 513 510 514 511 return 0; ··· 531 532 { 532 533 WARN_ON(!strp->stopped); 533 534 534 - del_timer_sync(&strp->msg_timer); 535 + cancel_delayed_work_sync(&strp->msg_timer_work); 535 536 cancel_work_sync(&strp->work); 536 537 537 538 if (strp->skb_head) {
+25 -11
net/sunrpc/xprt.c
··· 1333 1333 rpc_count_iostats(task, task->tk_client->cl_metrics); 1334 1334 spin_lock(&xprt->recv_lock); 1335 1335 if (!list_empty(&req->rq_list)) { 1336 - list_del(&req->rq_list); 1336 + list_del_init(&req->rq_list); 1337 1337 xprt_wait_on_pinned_rqst(req); 1338 1338 } 1339 1339 spin_unlock(&xprt->recv_lock); ··· 1444 1444 return xprt; 1445 1445 } 1446 1446 1447 + static void xprt_destroy_cb(struct work_struct *work) 1448 + { 1449 + struct rpc_xprt *xprt = 1450 + container_of(work, struct rpc_xprt, task_cleanup); 1451 + 1452 + rpc_xprt_debugfs_unregister(xprt); 1453 + rpc_destroy_wait_queue(&xprt->binding); 1454 + rpc_destroy_wait_queue(&xprt->pending); 1455 + rpc_destroy_wait_queue(&xprt->sending); 1456 + rpc_destroy_wait_queue(&xprt->backlog); 1457 + kfree(xprt->servername); 1458 + /* 1459 + * Tear down transport state and free the rpc_xprt 1460 + */ 1461 + xprt->ops->destroy(xprt); 1462 + } 1463 + 1447 1464 /** 1448 1465 * xprt_destroy - destroy an RPC transport, killing off all requests. 1449 1466 * @xprt: transport to destroy ··· 1470 1453 { 1471 1454 dprintk("RPC: destroying transport %p\n", xprt); 1472 1455 1473 - /* Exclude transport connect/disconnect handlers */ 1456 + /*  1457 + * Exclude transport connect/disconnect handlers and autoclose 1458 + */ 1474 1459 wait_on_bit_lock(&xprt->state, XPRT_LOCKED, TASK_UNINTERRUPTIBLE); 1475 1460 1476 1461 del_timer_sync(&xprt->timer); 1477 1462 1478 - rpc_xprt_debugfs_unregister(xprt); 1479 - rpc_destroy_wait_queue(&xprt->binding); 1480 - rpc_destroy_wait_queue(&xprt->pending); 1481 - rpc_destroy_wait_queue(&xprt->sending); 1482 - rpc_destroy_wait_queue(&xprt->backlog); 1483 - cancel_work_sync(&xprt->task_cleanup); 1484 - kfree(xprt->servername); 1485 1463 /* 1486 - * Tear down transport state and free the rpc_xprt 1464 + * Destroy sockets etc from the system workqueue so they can 1465 + * safely flush receive work running on rpciod. 1487 1466 */ 1488 - xprt->ops->destroy(xprt); 1467 + INIT_WORK(&xprt->task_cleanup, xprt_destroy_cb); 1468 + schedule_work(&xprt->task_cleanup); 1489 1469 } 1490 1470 1491 1471 static void xprt_destroy_kref(struct kref *kref)
+2
net/unix/diag.c
··· 257 257 err = -ENOENT; 258 258 if (sk == NULL) 259 259 goto out_nosk; 260 + if (!net_eq(sock_net(sk), net)) 261 + goto out; 260 262 261 263 err = sock_diag_check_cookie(sk, req->udiag_cookie); 262 264 if (err)
+41 -9
net/wireless/sme.c
··· 522 522 return -EOPNOTSUPP; 523 523 524 524 if (wdev->current_bss) { 525 - if (!prev_bssid) 526 - return -EALREADY; 527 - if (prev_bssid && 528 - !ether_addr_equal(prev_bssid, wdev->current_bss->pub.bssid)) 529 - return -ENOTCONN; 530 525 cfg80211_unhold_bss(wdev->current_bss); 531 526 cfg80211_put_bss(wdev->wiphy, &wdev->current_bss->pub); 532 527 wdev->current_bss = NULL; ··· 1101 1106 1102 1107 ASSERT_WDEV_LOCK(wdev); 1103 1108 1104 - if (WARN_ON(wdev->connect_keys)) { 1105 - kzfree(wdev->connect_keys); 1106 - wdev->connect_keys = NULL; 1109 + /* 1110 + * If we have an ssid_len, we're trying to connect or are 1111 + * already connected, so reject a new SSID unless it's the 1112 + * same (which is the case for re-association.) 1113 + */ 1114 + if (wdev->ssid_len && 1115 + (wdev->ssid_len != connect->ssid_len || 1116 + memcmp(wdev->ssid, connect->ssid, wdev->ssid_len))) 1117 + return -EALREADY; 1118 + 1119 + /* 1120 + * If connected, reject (re-)association unless prev_bssid 1121 + * matches the current BSSID. 1122 + */ 1123 + if (wdev->current_bss) { 1124 + if (!prev_bssid) 1125 + return -EALREADY; 1126 + if (!ether_addr_equal(prev_bssid, wdev->current_bss->pub.bssid)) 1127 + return -ENOTCONN; 1107 1128 } 1129 + 1130 + /* 1131 + * Reject if we're in the process of connecting with WEP, 1132 + * this case isn't very interesting and trying to handle 1133 + * it would make the code much more complex. 1134 + */ 1135 + if (wdev->connect_keys) 1136 + return -EINPROGRESS; 1108 1137 1109 1138 cfg80211_oper_and_ht_capa(&connect->ht_capa_mask, 1110 1139 rdev->wiphy.ht_capa_mod_mask); ··· 1180 1161 1181 1162 if (err) { 1182 1163 wdev->connect_keys = NULL; 1183 - wdev->ssid_len = 0; 1164 + /* 1165 + * This could be reassoc getting refused, don't clear 1166 + * ssid_len in that case. 1167 + */ 1168 + if (!wdev->current_bss) 1169 + wdev->ssid_len = 0; 1184 1170 return err; 1185 1171 } 1186 1172 ··· 1211 1187 cfg80211_mlme_down(rdev, dev); 1212 1188 else if (wdev->ssid_len) 1213 1189 err = rdev_disconnect(rdev, dev, reason); 1190 + 1191 + /* 1192 + * Clear ssid_len unless we actually were fully connected, 1193 + * in which case cfg80211_disconnected() will take care of 1194 + * this later. 1195 + */ 1196 + if (!wdev->current_bss) 1197 + wdev->ssid_len = 0; 1214 1198 1215 1199 return err; 1216 1200 }
+8 -8
net/xfrm/xfrm_policy.c
··· 1572 1572 goto put_states; 1573 1573 } 1574 1574 1575 + if (!dst_prev) 1576 + dst0 = dst1; 1577 + else 1578 + /* Ref count is taken during xfrm_alloc_dst() 1579 + * No need to do dst_clone() on dst1 1580 + */ 1581 + dst_prev->child = dst1; 1582 + 1575 1583 if (xfrm[i]->sel.family == AF_UNSPEC) { 1576 1584 inner_mode = xfrm_ip2inner_mode(xfrm[i], 1577 1585 xfrm_af2proto(family)); ··· 1590 1582 } 1591 1583 } else 1592 1584 inner_mode = xfrm[i]->inner_mode; 1593 - 1594 - if (!dst_prev) 1595 - dst0 = dst1; 1596 - else 1597 - /* Ref count is taken during xfrm_alloc_dst() 1598 - * No need to do dst_clone() on dst1 1599 - */ 1600 - dst_prev->child = dst1; 1601 1585 1602 1586 xdst->route = dst; 1603 1587 dst_copy_metrics(dst1, dst);
+15 -10
net/xfrm/xfrm_user.c
··· 1693 1693 1694 1694 static int xfrm_dump_policy_done(struct netlink_callback *cb) 1695 1695 { 1696 - struct xfrm_policy_walk *walk = (struct xfrm_policy_walk *) &cb->args[1]; 1696 + struct xfrm_policy_walk *walk = (struct xfrm_policy_walk *)cb->args; 1697 1697 struct net *net = sock_net(cb->skb->sk); 1698 1698 1699 1699 xfrm_policy_walk_done(walk, net); 1700 1700 return 0; 1701 1701 } 1702 1702 1703 + static int xfrm_dump_policy_start(struct netlink_callback *cb) 1704 + { 1705 + struct xfrm_policy_walk *walk = (struct xfrm_policy_walk *)cb->args; 1706 + 1707 + BUILD_BUG_ON(sizeof(*walk) > sizeof(cb->args)); 1708 + 1709 + xfrm_policy_walk_init(walk, XFRM_POLICY_TYPE_ANY); 1710 + return 0; 1711 + } 1712 + 1703 1713 static int xfrm_dump_policy(struct sk_buff *skb, struct netlink_callback *cb) 1704 1714 { 1705 1715 struct net *net = sock_net(skb->sk); 1706 - struct xfrm_policy_walk *walk = (struct xfrm_policy_walk *) &cb->args[1]; 1716 + struct xfrm_policy_walk *walk = (struct xfrm_policy_walk *)cb->args; 1707 1717 struct xfrm_dump_info info; 1708 - 1709 - BUILD_BUG_ON(sizeof(struct xfrm_policy_walk) > 1710 - sizeof(cb->args) - sizeof(cb->args[0])); 1711 1718 1712 1719 info.in_skb = cb->skb; 1713 1720 info.out_skb = skb; 1714 1721 info.nlmsg_seq = cb->nlh->nlmsg_seq; 1715 1722 info.nlmsg_flags = NLM_F_MULTI; 1716 - 1717 - if (!cb->args[0]) { 1718 - cb->args[0] = 1; 1719 - xfrm_policy_walk_init(walk, XFRM_POLICY_TYPE_ANY); 1720 - } 1721 1723 1722 1724 (void) xfrm_policy_walk(net, walk, dump_one_policy, &info); 1723 1725 ··· 2476 2474 2477 2475 static const struct xfrm_link { 2478 2476 int (*doit)(struct sk_buff *, struct nlmsghdr *, struct nlattr **); 2477 + int (*start)(struct netlink_callback *); 2479 2478 int (*dump)(struct sk_buff *, struct netlink_callback *); 2480 2479 int (*done)(struct netlink_callback *); 2481 2480 const struct nla_policy *nla_pol; ··· 2490 2487 [XFRM_MSG_NEWPOLICY - XFRM_MSG_BASE] = { .doit = xfrm_add_policy }, 2491 2488 [XFRM_MSG_DELPOLICY - XFRM_MSG_BASE] = { .doit = xfrm_get_policy }, 2492 2489 [XFRM_MSG_GETPOLICY - XFRM_MSG_BASE] = { .doit = xfrm_get_policy, 2490 + .start = xfrm_dump_policy_start, 2493 2491 .dump = xfrm_dump_policy, 2494 2492 .done = xfrm_dump_policy_done }, 2495 2493 [XFRM_MSG_ALLOCSPI - XFRM_MSG_BASE] = { .doit = xfrm_alloc_userspi }, ··· 2543 2539 2544 2540 { 2545 2541 struct netlink_dump_control c = { 2542 + .start = link->start, 2546 2543 .dump = link->dump, 2547 2544 .done = link->done, 2548 2545 };
+1 -1
samples/trace_events/trace-events-sample.c
··· 78 78 } 79 79 80 80 static DEFINE_MUTEX(thread_mutex); 81 - static bool simple_thread_cnt; 81 + static int simple_thread_cnt; 82 82 83 83 int foo_bar_reg(void) 84 84 {
-1
scripts/Makefile.modpost
··· 97 97 $(call cmd,kernel-mod) 98 98 99 99 # Declare generated files as targets for modpost 100 - $(symverfile): __modpost ; 101 100 $(modules:.ko=.mod.c): __modpost ; 102 101 103 102
-1
scripts/mod/devicetable-offsets.c
··· 105 105 DEVID_FIELD(input_device_id, sndbit); 106 106 DEVID_FIELD(input_device_id, ffbit); 107 107 DEVID_FIELD(input_device_id, swbit); 108 - DEVID_FIELD(input_device_id, propbit); 109 108 110 109 DEVID(eisa_device_id); 111 110 DEVID_FIELD(eisa_device_id, sig);
+1 -5
scripts/mod/file2alias.c
··· 761 761 sprintf(alias + strlen(alias), "%X,*", i); 762 762 } 763 763 764 - /* input:b0v0p0e0-eXkXrXaXmXlXsXfXwXprX where X is comma-separated %02X. */ 764 + /* input:b0v0p0e0-eXkXrXaXmXlXsXfXwX where X is comma-separated %02X. */ 765 765 static int do_input_entry(const char *filename, void *symval, 766 766 char *alias) 767 767 { ··· 779 779 DEF_FIELD_ADDR(symval, input_device_id, sndbit); 780 780 DEF_FIELD_ADDR(symval, input_device_id, ffbit); 781 781 DEF_FIELD_ADDR(symval, input_device_id, swbit); 782 - DEF_FIELD_ADDR(symval, input_device_id, propbit); 783 782 784 783 sprintf(alias, "input:"); 785 784 ··· 816 817 sprintf(alias + strlen(alias), "w*"); 817 818 if (flags & INPUT_DEVICE_ID_MATCH_SWBIT) 818 819 do_input(alias, *swbit, 0, INPUT_DEVICE_ID_SW_MAX); 819 - sprintf(alias + strlen(alias), "pr*"); 820 - if (flags & INPUT_DEVICE_ID_MATCH_PROPBIT) 821 - do_input(alias, *propbit, 0, INPUT_DEVICE_ID_PROP_MAX); 822 820 return 1; 823 821 } 824 822 ADD_TO_DEVTABLE("input", input_device_id, do_input_entry);
-1
security/apparmor/.gitignore
··· 1 1 # 2 2 # Generated include files 3 3 # 4 - net_names.h 5 4 capability_names.h 6 5 rlim_names.h
+2 -41
security/apparmor/Makefile
··· 4 4 
5 5 apparmor-y := apparmorfs.o audit.o capability.o context.o ipc.o lib.o match.o \
6 6 path.o domain.o policy.o policy_unpack.o procattr.o lsm.o \
7 - resource.o secid.o file.o policy_ns.o label.o mount.o net.o
7 + resource.o secid.o file.o policy_ns.o label.o mount.o
8 8 apparmor-$(CONFIG_SECURITY_APPARMOR_HASH) += crypto.o
9 9 
10 - clean-files := capability_names.h rlim_names.h net_names.h
10 + clean-files := capability_names.h rlim_names.h
11 11 
12 - # Build a lower case string table of address family names
13 - # Transform lines from
14 - # #define AF_LOCAL 1 /* POSIX name for AF_UNIX */
15 - # #define AF_INET 2 /* Internet IP Protocol */
16 - # to
17 - # [1] = "local",
18 - # [2] = "inet",
19 - #
20 - # and build the securityfs entries for the mapping.
21 - # Transforms lines from
22 - # #define AF_INET 2 /* Internet IP Protocol */
23 - # to
24 - # #define AA_SFS_AF_MASK "local inet"
25 - quiet_cmd_make-af = GEN $@
26 - cmd_make-af = echo "static const char *address_family_names[] = {" > $@ ;\
27 - sed $< >>$@ -r -n -e "/AF_MAX/d" -e "/AF_LOCAL/d" -e "/AF_ROUTE/d" -e \
28 - 's/^\#define[ \t]+AF_([A-Z0-9_]+)[ \t]+([0-9]+)(.*)/[\2] = "\L\1",/p';\
29 - echo "};" >> $@ ;\
30 - printf '%s' '\#define AA_SFS_AF_MASK "' >> $@ ;\
31 - sed -r -n -e "/AF_MAX/d" -e "/AF_LOCAL/d" -e "/AF_ROUTE/d" -e \
32 - 's/^\#define[ \t]+AF_([A-Z0-9_]+)[ \t]+([0-9]+)(.*)/\L\1/p'\
33 - $< | tr '\n' ' ' | sed -e 's/ $$/"\n/' >> $@
34 - 
35 - # Build a lower case string table of sock type names
36 - # Transform lines from
37 - # SOCK_STREAM = 1,
38 - # to
39 - # [1] = "stream",
40 - quiet_cmd_make-sock = GEN $@
41 - cmd_make-sock = echo "static const char *sock_type_names[] = {" >> $@ ;\
42 - sed $^ >>$@ -r -n \
43 - -e 's/^\tSOCK_([A-Z0-9_]+)[\t]+=[ \t]+([0-9]+)(.*)/[\2] = "\L\1",/p';\
44 - echo "};" >> $@
45 12 
46 13 # Build a lower case string table of capability names
47 14 # Transforms lines from
··· 61 94 tr '\n' ' ' | sed -e 's/ $$/"\n/' >> $@
62 95 
63 96 $(obj)/capability.o : $(obj)/capability_names.h
64 - $(obj)/net.o : $(obj)/net_names.h
65 97 $(obj)/resource.o : $(obj)/rlim_names.h
66 98 $(obj)/capability_names.h : $(srctree)/include/uapi/linux/capability.h \
67 99 $(src)/Makefile
··· 68 102 $(obj)/rlim_names.h : $(srctree)/include/uapi/asm-generic/resource.h \
69 103 $(src)/Makefile
70 104 $(call cmd,make-rlim)
71 - $(obj)/net_names.h : $(srctree)/include/linux/socket.h \
72 - $(srctree)/include/linux/net.h \
73 - $(src)/Makefile
74 - $(call cmd,make-af)
75 - $(call cmd,make-sock)
-1
security/apparmor/apparmorfs.c
··· 2202 2202 AA_SFS_DIR("policy", aa_sfs_entry_policy), 2203 2203 AA_SFS_DIR("domain", aa_sfs_entry_domain), 2204 2204 AA_SFS_DIR("file", aa_sfs_entry_file), 2205 - AA_SFS_DIR("network", aa_sfs_entry_network), 2206 2205 AA_SFS_DIR("mount", aa_sfs_entry_mount), 2207 2206 AA_SFS_DIR("namespaces", aa_sfs_entry_ns), 2208 2207 AA_SFS_FILE_U64("capability", VFS_CAP_FLAGS_MASK),
-30
security/apparmor/file.c
··· 21 21 #include "include/context.h" 22 22 #include "include/file.h" 23 23 #include "include/match.h" 24 - #include "include/net.h" 25 24 #include "include/path.h" 26 25 #include "include/policy.h" 27 26 #include "include/label.h" ··· 566 567 return error; 567 568 } 568 569 569 - static int __file_sock_perm(const char *op, struct aa_label *label, 570 - struct aa_label *flabel, struct file *file, 571 - u32 request, u32 denied) 572 - { 573 - struct socket *sock = (struct socket *) file->private_data; 574 - int error; 575 - 576 - AA_BUG(!sock); 577 - 578 - /* revalidation due to label out of date. No revocation at this time */ 579 - if (!denied && aa_label_is_subset(flabel, label)) 580 - return 0; 581 - 582 - /* TODO: improve to skip profiles cached in flabel */ 583 - error = aa_sock_file_perm(label, op, request, sock); 584 - if (denied) { 585 - /* TODO: improve to skip profiles checked above */ 586 - /* check every profile in file label to is cached */ 587 - last_error(error, aa_sock_file_perm(flabel, op, request, sock)); 588 - } 589 - if (!error) 590 - update_file_ctx(file_ctx(file), label, request); 591 - 592 - return error; 593 - } 594 - 595 570 /** 596 571 * aa_file_perm - do permission revalidation check & audit for @file 597 572 * @op: operation being checked ··· 610 637 error = __file_path_perm(op, label, flabel, file, request, 611 638 denied); 612 639 613 - else if (S_ISSOCK(file_inode(file)->i_mode)) 614 - error = __file_sock_perm(op, label, flabel, file, request, 615 - denied); 616 640 done: 617 641 rcu_read_unlock(); 618 642
+9 -17
security/apparmor/include/audit.h
··· 121 121 /* these entries require a custom callback fn */ 122 122 struct { 123 123 struct aa_label *peer; 124 - union { 125 - struct { 126 - kuid_t ouid; 127 - const char *target; 128 - } fs; 129 - struct { 130 - int type, protocol; 131 - struct sock *peer_sk; 132 - void *addr; 133 - int addrlen; 134 - } net; 135 - int signal; 136 - struct { 137 - int rlim; 138 - unsigned long max; 139 - } rlim; 140 - }; 124 + struct { 125 + const char *target; 126 + kuid_t ouid; 127 + } fs; 141 128 }; 142 129 struct { 143 130 struct aa_profile *profile; 144 131 const char *ns; 145 132 long pos; 146 133 } iface; 134 + int signal; 135 + struct { 136 + int rlim; 137 + unsigned long max; 138 + } rlim; 147 139 struct { 148 140 const char *src_name; 149 141 const char *type;
-114
security/apparmor/include/net.h
··· 1 - /* 2 - * AppArmor security module 3 - * 4 - * This file contains AppArmor network mediation definitions. 5 - * 6 - * Copyright (C) 1998-2008 Novell/SUSE 7 - * Copyright 2009-2017 Canonical Ltd. 8 - * 9 - * This program is free software; you can redistribute it and/or 10 - * modify it under the terms of the GNU General Public License as 11 - * published by the Free Software Foundation, version 2 of the 12 - * License. 13 - */ 14 - 15 - #ifndef __AA_NET_H 16 - #define __AA_NET_H 17 - 18 - #include <net/sock.h> 19 - #include <linux/path.h> 20 - 21 - #include "apparmorfs.h" 22 - #include "label.h" 23 - #include "perms.h" 24 - #include "policy.h" 25 - 26 - #define AA_MAY_SEND AA_MAY_WRITE 27 - #define AA_MAY_RECEIVE AA_MAY_READ 28 - 29 - #define AA_MAY_SHUTDOWN AA_MAY_DELETE 30 - 31 - #define AA_MAY_CONNECT AA_MAY_OPEN 32 - #define AA_MAY_ACCEPT 0x00100000 33 - 34 - #define AA_MAY_BIND 0x00200000 35 - #define AA_MAY_LISTEN 0x00400000 36 - 37 - #define AA_MAY_SETOPT 0x01000000 38 - #define AA_MAY_GETOPT 0x02000000 39 - 40 - #define NET_PERMS_MASK (AA_MAY_SEND | AA_MAY_RECEIVE | AA_MAY_CREATE | \ 41 - AA_MAY_SHUTDOWN | AA_MAY_BIND | AA_MAY_LISTEN | \ 42 - AA_MAY_CONNECT | AA_MAY_ACCEPT | AA_MAY_SETATTR | \ 43 - AA_MAY_GETATTR | AA_MAY_SETOPT | AA_MAY_GETOPT) 44 - 45 - #define NET_FS_PERMS (AA_MAY_SEND | AA_MAY_RECEIVE | AA_MAY_CREATE | \ 46 - AA_MAY_SHUTDOWN | AA_MAY_CONNECT | AA_MAY_RENAME |\ 47 - AA_MAY_SETATTR | AA_MAY_GETATTR | AA_MAY_CHMOD | \ 48 - AA_MAY_CHOWN | AA_MAY_CHGRP | AA_MAY_LOCK | \ 49 - AA_MAY_MPROT) 50 - 51 - #define NET_PEER_MASK (AA_MAY_SEND | AA_MAY_RECEIVE | AA_MAY_CONNECT | \ 52 - AA_MAY_ACCEPT) 53 - struct aa_sk_ctx { 54 - struct aa_label *label; 55 - struct aa_label *peer; 56 - struct path path; 57 - }; 58 - 59 - #define SK_CTX(X) ((X)->sk_security) 60 - #define SOCK_ctx(X) SOCK_INODE(X)->i_security 61 - #define DEFINE_AUDIT_NET(NAME, OP, SK, F, T, P) \ 62 - struct lsm_network_audit NAME ## _net = { .sk = (SK), \ 63 - .family = (F)}; \ 64 
- DEFINE_AUDIT_DATA(NAME, \ 65 - ((SK) && (F) != AF_UNIX) ? LSM_AUDIT_DATA_NET : \ 66 - LSM_AUDIT_DATA_NONE, \ 67 - OP); \ 68 - NAME.u.net = &(NAME ## _net); \ 69 - aad(&NAME)->net.type = (T); \ 70 - aad(&NAME)->net.protocol = (P) 71 - 72 - #define DEFINE_AUDIT_SK(NAME, OP, SK) \ 73 - DEFINE_AUDIT_NET(NAME, OP, SK, (SK)->sk_family, (SK)->sk_type, \ 74 - (SK)->sk_protocol) 75 - 76 - /* struct aa_net - network confinement data 77 - * @allow: basic network families permissions 78 - * @audit: which network permissions to force audit 79 - * @quiet: which network permissions to quiet rejects 80 - */ 81 - struct aa_net { 82 - u16 allow[AF_MAX]; 83 - u16 audit[AF_MAX]; 84 - u16 quiet[AF_MAX]; 85 - }; 86 - 87 - 88 - extern struct aa_sfs_entry aa_sfs_entry_network[]; 89 - 90 - void audit_net_cb(struct audit_buffer *ab, void *va); 91 - int aa_profile_af_perm(struct aa_profile *profile, struct common_audit_data *sa, 92 - u32 request, u16 family, int type); 93 - int aa_af_perm(struct aa_label *label, const char *op, u32 request, u16 family, 94 - int type, int protocol); 95 - static inline int aa_profile_af_sk_perm(struct aa_profile *profile, 96 - struct common_audit_data *sa, 97 - u32 request, 98 - struct sock *sk) 99 - { 100 - return aa_profile_af_perm(profile, sa, request, sk->sk_family, 101 - sk->sk_type); 102 - } 103 - int aa_sk_perm(const char *op, u32 request, struct sock *sk); 104 - 105 - int aa_sock_file_perm(struct aa_label *label, const char *op, u32 request, 106 - struct socket *sock); 107 - 108 - 109 - static inline void aa_free_net_rules(struct aa_net *new) 110 - { 111 - /* NOP */ 112 - } 113 - 114 - #endif /* __AA_NET_H */
+2 -3
security/apparmor/include/perms.h
··· 135 135 136 136 137 137 void aa_perm_mask_to_str(char *str, const char *chrs, u32 mask); 138 - void aa_audit_perm_names(struct audit_buffer *ab, const char * const *names, 139 - u32 mask); 138 + void aa_audit_perm_names(struct audit_buffer *ab, const char **names, u32 mask); 140 139 void aa_audit_perm_mask(struct audit_buffer *ab, u32 mask, const char *chrs, 141 - u32 chrsmask, const char * const *names, u32 namesmask); 140 + u32 chrsmask, const char **names, u32 namesmask); 142 141 void aa_apply_modes_to_perms(struct aa_profile *profile, 143 142 struct aa_perms *perms); 144 143 void aa_compute_perms(struct aa_dfa *dfa, unsigned int state,
-13
security/apparmor/include/policy.h
··· 30 30 #include "file.h" 31 31 #include "lib.h" 32 32 #include "label.h" 33 - #include "net.h" 34 33 #include "perms.h" 35 34 #include "resource.h" 36 35 ··· 111 112 * @policy: general match rules governing policy 112 113 * @file: The set of rules governing basic file access and domain transitions 113 114 * @caps: capabilities for the profile 114 - * @net: network controls for the profile 115 115 * @rlimits: rlimits for the profile 116 116 * 117 117 * @dents: dentries for the profiles file entries in apparmorfs ··· 148 150 struct aa_policydb policy; 149 151 struct aa_file_rules file; 150 152 struct aa_caps caps; 151 - struct aa_net net; 152 153 struct aa_rlimit rlimits; 153 154 154 155 struct aa_loaddata *rawdata; ··· 218 221 return aa_dfa_match_len(profile->policy.dfa, 219 222 profile->policy.start[0], &class, 1); 220 223 return 0; 221 - } 222 - 223 - static inline unsigned int PROFILE_MEDIATES_AF(struct aa_profile *profile, 224 - u16 AF) { 225 - unsigned int state = PROFILE_MEDIATES(profile, AA_CLASS_NET); 226 - u16 be_af = cpu_to_be16(AF); 227 - 228 - if (!state) 229 - return 0; 230 - return aa_dfa_match_len(profile->policy.dfa, state, (char *) &be_af, 2); 231 224 } 232 225 233 226 /**
+2 -3
security/apparmor/lib.c
··· 211 211 *str = '\0'; 212 212 } 213 213 214 - void aa_audit_perm_names(struct audit_buffer *ab, const char * const *names, 215 - u32 mask) 214 + void aa_audit_perm_names(struct audit_buffer *ab, const char **names, u32 mask) 216 215 { 217 216 const char *fmt = "%s"; 218 217 unsigned int i, perm = 1; ··· 229 230 } 230 231 231 232 void aa_audit_perm_mask(struct audit_buffer *ab, u32 mask, const char *chrs, 232 - u32 chrsmask, const char * const *names, u32 namesmask) 233 + u32 chrsmask, const char **names, u32 namesmask) 233 234 { 234 235 char str[33]; 235 236
-387
security/apparmor/lsm.c
··· 33 33 #include "include/context.h" 34 34 #include "include/file.h" 35 35 #include "include/ipc.h" 36 - #include "include/net.h" 37 36 #include "include/path.h" 38 37 #include "include/label.h" 39 38 #include "include/policy.h" ··· 736 737 return error; 737 738 } 738 739 739 - /** 740 - * apparmor_sk_alloc_security - allocate and attach the sk_security field 741 - */ 742 - static int apparmor_sk_alloc_security(struct sock *sk, int family, gfp_t flags) 743 - { 744 - struct aa_sk_ctx *ctx; 745 - 746 - ctx = kzalloc(sizeof(*ctx), flags); 747 - if (!ctx) 748 - return -ENOMEM; 749 - 750 - SK_CTX(sk) = ctx; 751 - 752 - return 0; 753 - } 754 - 755 - /** 756 - * apparmor_sk_free_security - free the sk_security field 757 - */ 758 - static void apparmor_sk_free_security(struct sock *sk) 759 - { 760 - struct aa_sk_ctx *ctx = SK_CTX(sk); 761 - 762 - SK_CTX(sk) = NULL; 763 - aa_put_label(ctx->label); 764 - aa_put_label(ctx->peer); 765 - path_put(&ctx->path); 766 - kfree(ctx); 767 - } 768 - 769 - /** 770 - * apparmor_clone_security - clone the sk_security field 771 - */ 772 - static void apparmor_sk_clone_security(const struct sock *sk, 773 - struct sock *newsk) 774 - { 775 - struct aa_sk_ctx *ctx = SK_CTX(sk); 776 - struct aa_sk_ctx *new = SK_CTX(newsk); 777 - 778 - new->label = aa_get_label(ctx->label); 779 - new->peer = aa_get_label(ctx->peer); 780 - new->path = ctx->path; 781 - path_get(&new->path); 782 - } 783 - 784 - static int aa_sock_create_perm(struct aa_label *label, int family, int type, 785 - int protocol) 786 - { 787 - AA_BUG(!label); 788 - AA_BUG(in_interrupt()); 789 - 790 - return aa_af_perm(label, OP_CREATE, AA_MAY_CREATE, family, type, 791 - protocol); 792 - } 793 - 794 - 795 - /** 796 - * apparmor_socket_create - check perms before creating a new socket 797 - */ 798 - static int apparmor_socket_create(int family, int type, int protocol, int kern) 799 - { 800 - struct aa_label *label; 801 - int error = 0; 802 - 803 - label = 
begin_current_label_crit_section(); 804 - if (!(kern || unconfined(label))) 805 - error = aa_sock_create_perm(label, family, type, protocol); 806 - end_current_label_crit_section(label); 807 - 808 - return error; 809 - } 810 - 811 - /** 812 - * apparmor_socket_post_create - setup the per-socket security struct 813 - * 814 - * Note: 815 - * - kernel sockets currently labeled unconfined but we may want to 816 - * move to a special kernel label 817 - * - socket may not have sk here if created with sock_create_lite or 818 - * sock_alloc. These should be accept cases which will be handled in 819 - * sock_graft. 820 - */ 821 - static int apparmor_socket_post_create(struct socket *sock, int family, 822 - int type, int protocol, int kern) 823 - { 824 - struct aa_label *label; 825 - 826 - if (kern) { 827 - struct aa_ns *ns = aa_get_current_ns(); 828 - 829 - label = aa_get_label(ns_unconfined(ns)); 830 - aa_put_ns(ns); 831 - } else 832 - label = aa_get_current_label(); 833 - 834 - if (sock->sk) { 835 - struct aa_sk_ctx *ctx = SK_CTX(sock->sk); 836 - 837 - aa_put_label(ctx->label); 838 - ctx->label = aa_get_label(label); 839 - } 840 - aa_put_label(label); 841 - 842 - return 0; 843 - } 844 - 845 - /** 846 - * apparmor_socket_bind - check perms before bind addr to socket 847 - */ 848 - static int apparmor_socket_bind(struct socket *sock, 849 - struct sockaddr *address, int addrlen) 850 - { 851 - AA_BUG(!sock); 852 - AA_BUG(!sock->sk); 853 - AA_BUG(!address); 854 - AA_BUG(in_interrupt()); 855 - 856 - return aa_sk_perm(OP_BIND, AA_MAY_BIND, sock->sk); 857 - } 858 - 859 - /** 860 - * apparmor_socket_connect - check perms before connecting @sock to @address 861 - */ 862 - static int apparmor_socket_connect(struct socket *sock, 863 - struct sockaddr *address, int addrlen) 864 - { 865 - AA_BUG(!sock); 866 - AA_BUG(!sock->sk); 867 - AA_BUG(!address); 868 - AA_BUG(in_interrupt()); 869 - 870 - return aa_sk_perm(OP_CONNECT, AA_MAY_CONNECT, sock->sk); 871 - } 872 - 873 - /** 874 - * 
apparmor_socket_list - check perms before allowing listen 875 - */ 876 - static int apparmor_socket_listen(struct socket *sock, int backlog) 877 - { 878 - AA_BUG(!sock); 879 - AA_BUG(!sock->sk); 880 - AA_BUG(in_interrupt()); 881 - 882 - return aa_sk_perm(OP_LISTEN, AA_MAY_LISTEN, sock->sk); 883 - } 884 - 885 - /** 886 - * apparmor_socket_accept - check perms before accepting a new connection. 887 - * 888 - * Note: while @newsock is created and has some information, the accept 889 - * has not been done. 890 - */ 891 - static int apparmor_socket_accept(struct socket *sock, struct socket *newsock) 892 - { 893 - AA_BUG(!sock); 894 - AA_BUG(!sock->sk); 895 - AA_BUG(!newsock); 896 - AA_BUG(in_interrupt()); 897 - 898 - return aa_sk_perm(OP_ACCEPT, AA_MAY_ACCEPT, sock->sk); 899 - } 900 - 901 - static int aa_sock_msg_perm(const char *op, u32 request, struct socket *sock, 902 - struct msghdr *msg, int size) 903 - { 904 - AA_BUG(!sock); 905 - AA_BUG(!sock->sk); 906 - AA_BUG(!msg); 907 - AA_BUG(in_interrupt()); 908 - 909 - return aa_sk_perm(op, request, sock->sk); 910 - } 911 - 912 - /** 913 - * apparmor_socket_sendmsg - check perms before sending msg to another socket 914 - */ 915 - static int apparmor_socket_sendmsg(struct socket *sock, 916 - struct msghdr *msg, int size) 917 - { 918 - return aa_sock_msg_perm(OP_SENDMSG, AA_MAY_SEND, sock, msg, size); 919 - } 920 - 921 - /** 922 - * apparmor_socket_recvmsg - check perms before receiving a message 923 - */ 924 - static int apparmor_socket_recvmsg(struct socket *sock, 925 - struct msghdr *msg, int size, int flags) 926 - { 927 - return aa_sock_msg_perm(OP_RECVMSG, AA_MAY_RECEIVE, sock, msg, size); 928 - } 929 - 930 - /* revaliation, get/set attr, shutdown */ 931 - static int aa_sock_perm(const char *op, u32 request, struct socket *sock) 932 - { 933 - AA_BUG(!sock); 934 - AA_BUG(!sock->sk); 935 - AA_BUG(in_interrupt()); 936 - 937 - return aa_sk_perm(op, request, sock->sk); 938 - } 939 - 940 - /** 941 - * 
apparmor_socket_getsockname - check perms before getting the local address 942 - */ 943 - static int apparmor_socket_getsockname(struct socket *sock) 944 - { 945 - return aa_sock_perm(OP_GETSOCKNAME, AA_MAY_GETATTR, sock); 946 - } 947 - 948 - /** 949 - * apparmor_socket_getpeername - check perms before getting remote address 950 - */ 951 - static int apparmor_socket_getpeername(struct socket *sock) 952 - { 953 - return aa_sock_perm(OP_GETPEERNAME, AA_MAY_GETATTR, sock); 954 - } 955 - 956 - /* revaliation, get/set attr, opt */ 957 - static int aa_sock_opt_perm(const char *op, u32 request, struct socket *sock, 958 - int level, int optname) 959 - { 960 - AA_BUG(!sock); 961 - AA_BUG(!sock->sk); 962 - AA_BUG(in_interrupt()); 963 - 964 - return aa_sk_perm(op, request, sock->sk); 965 - } 966 - 967 - /** 968 - * apparmor_getsockopt - check perms before getting socket options 969 - */ 970 - static int apparmor_socket_getsockopt(struct socket *sock, int level, 971 - int optname) 972 - { 973 - return aa_sock_opt_perm(OP_GETSOCKOPT, AA_MAY_GETOPT, sock, 974 - level, optname); 975 - } 976 - 977 - /** 978 - * apparmor_setsockopt - check perms before setting socket options 979 - */ 980 - static int apparmor_socket_setsockopt(struct socket *sock, int level, 981 - int optname) 982 - { 983 - return aa_sock_opt_perm(OP_SETSOCKOPT, AA_MAY_SETOPT, sock, 984 - level, optname); 985 - } 986 - 987 - /** 988 - * apparmor_socket_shutdown - check perms before shutting down @sock conn 989 - */ 990 - static int apparmor_socket_shutdown(struct socket *sock, int how) 991 - { 992 - return aa_sock_perm(OP_SHUTDOWN, AA_MAY_SHUTDOWN, sock); 993 - } 994 - 995 - /** 996 - * apparmor_socket_sock_recv_skb - check perms before associating skb to sk 997 - * 998 - * Note: can not sleep may be called with locks held 999 - * 1000 - * dont want protocol specific in __skb_recv_datagram() 1001 - * to deny an incoming connection socket_sock_rcv_skb() 1002 - */ 1003 - static int apparmor_socket_sock_rcv_skb(struct 
sock *sk, struct sk_buff *skb) 1004 - { 1005 - return 0; 1006 - } 1007 - 1008 - 1009 - static struct aa_label *sk_peer_label(struct sock *sk) 1010 - { 1011 - struct aa_sk_ctx *ctx = SK_CTX(sk); 1012 - 1013 - if (ctx->peer) 1014 - return ctx->peer; 1015 - 1016 - return ERR_PTR(-ENOPROTOOPT); 1017 - } 1018 - 1019 - /** 1020 - * apparmor_socket_getpeersec_stream - get security context of peer 1021 - * 1022 - * Note: for tcp only valid if using ipsec or cipso on lan 1023 - */ 1024 - static int apparmor_socket_getpeersec_stream(struct socket *sock, 1025 - char __user *optval, 1026 - int __user *optlen, 1027 - unsigned int len) 1028 - { 1029 - char *name; 1030 - int slen, error = 0; 1031 - struct aa_label *label; 1032 - struct aa_label *peer; 1033 - 1034 - label = begin_current_label_crit_section(); 1035 - peer = sk_peer_label(sock->sk); 1036 - if (IS_ERR(peer)) { 1037 - error = PTR_ERR(peer); 1038 - goto done; 1039 - } 1040 - slen = aa_label_asxprint(&name, labels_ns(label), peer, 1041 - FLAG_SHOW_MODE | FLAG_VIEW_SUBNS | 1042 - FLAG_HIDDEN_UNCONFINED, GFP_KERNEL); 1043 - /* don't include terminating \0 in slen, it breaks some apps */ 1044 - if (slen < 0) { 1045 - error = -ENOMEM; 1046 - } else { 1047 - if (slen > len) { 1048 - error = -ERANGE; 1049 - } else if (copy_to_user(optval, name, slen)) { 1050 - error = -EFAULT; 1051 - goto out; 1052 - } 1053 - if (put_user(slen, optlen)) 1054 - error = -EFAULT; 1055 - out: 1056 - kfree(name); 1057 - 1058 - } 1059 - 1060 - done: 1061 - end_current_label_crit_section(label); 1062 - 1063 - return error; 1064 - } 1065 - 1066 - /** 1067 - * apparmor_socket_getpeersec_dgram - get security label of packet 1068 - * @sock: the peer socket 1069 - * @skb: packet data 1070 - * @secid: pointer to where to put the secid of the packet 1071 - * 1072 - * Sets the netlabel socket state on sk from parent 1073 - */ 1074 - static int apparmor_socket_getpeersec_dgram(struct socket *sock, 1075 - struct sk_buff *skb, u32 *secid) 1076 - 1077 - { 1078 
- /* TODO: requires secid support */ 1079 - return -ENOPROTOOPT; 1080 - } 1081 - 1082 - /** 1083 - * apparmor_sock_graft - Initialize newly created socket 1084 - * @sk: child sock 1085 - * @parent: parent socket 1086 - * 1087 - * Note: could set off of SOCK_CTX(parent) but need to track inode and we can 1088 - * just set sk security information off of current creating process label 1089 - * Labeling of sk for accept case - probably should be sock based 1090 - * instead of task, because of the case where an implicitly labeled 1091 - * socket is shared by different tasks. 1092 - */ 1093 - static void apparmor_sock_graft(struct sock *sk, struct socket *parent) 1094 - { 1095 - struct aa_sk_ctx *ctx = SK_CTX(sk); 1096 - 1097 - if (!ctx->label) 1098 - ctx->label = aa_get_current_label(); 1099 - } 1100 - 1101 740 static struct security_hook_list apparmor_hooks[] __lsm_ro_after_init = { 1102 741 LSM_HOOK_INIT(ptrace_access_check, apparmor_ptrace_access_check), 1103 742 LSM_HOOK_INIT(ptrace_traceme, apparmor_ptrace_traceme), ··· 769 1132 770 1133 LSM_HOOK_INIT(getprocattr, apparmor_getprocattr), 771 1134 LSM_HOOK_INIT(setprocattr, apparmor_setprocattr), 772 - 773 - LSM_HOOK_INIT(sk_alloc_security, apparmor_sk_alloc_security), 774 - LSM_HOOK_INIT(sk_free_security, apparmor_sk_free_security), 775 - LSM_HOOK_INIT(sk_clone_security, apparmor_sk_clone_security), 776 - 777 - LSM_HOOK_INIT(socket_create, apparmor_socket_create), 778 - LSM_HOOK_INIT(socket_post_create, apparmor_socket_post_create), 779 - LSM_HOOK_INIT(socket_bind, apparmor_socket_bind), 780 - LSM_HOOK_INIT(socket_connect, apparmor_socket_connect), 781 - LSM_HOOK_INIT(socket_listen, apparmor_socket_listen), 782 - LSM_HOOK_INIT(socket_accept, apparmor_socket_accept), 783 - LSM_HOOK_INIT(socket_sendmsg, apparmor_socket_sendmsg), 784 - LSM_HOOK_INIT(socket_recvmsg, apparmor_socket_recvmsg), 785 - LSM_HOOK_INIT(socket_getsockname, apparmor_socket_getsockname), 786 - LSM_HOOK_INIT(socket_getpeername, 
apparmor_socket_getpeername), 787 - LSM_HOOK_INIT(socket_getsockopt, apparmor_socket_getsockopt), 788 - LSM_HOOK_INIT(socket_setsockopt, apparmor_socket_setsockopt), 789 - LSM_HOOK_INIT(socket_shutdown, apparmor_socket_shutdown), 790 - LSM_HOOK_INIT(socket_sock_rcv_skb, apparmor_socket_sock_rcv_skb), 791 - LSM_HOOK_INIT(socket_getpeersec_stream, 792 - apparmor_socket_getpeersec_stream), 793 - LSM_HOOK_INIT(socket_getpeersec_dgram, 794 - apparmor_socket_getpeersec_dgram), 795 - LSM_HOOK_INIT(sock_graft, apparmor_sock_graft), 796 1135 797 1136 LSM_HOOK_INIT(cred_alloc_blank, apparmor_cred_alloc_blank), 798 1137 LSM_HOOK_INIT(cred_free, apparmor_cred_free),
-184
security/apparmor/net.c
··· 1 - /* 2 - * AppArmor security module 3 - * 4 - * This file contains AppArmor network mediation 5 - * 6 - * Copyright (C) 1998-2008 Novell/SUSE 7 - * Copyright 2009-2017 Canonical Ltd. 8 - * 9 - * This program is free software; you can redistribute it and/or 10 - * modify it under the terms of the GNU General Public License as 11 - * published by the Free Software Foundation, version 2 of the 12 - * License. 13 - */ 14 - 15 - #include "include/apparmor.h" 16 - #include "include/audit.h" 17 - #include "include/context.h" 18 - #include "include/label.h" 19 - #include "include/net.h" 20 - #include "include/policy.h" 21 - 22 - #include "net_names.h" 23 - 24 - 25 - struct aa_sfs_entry aa_sfs_entry_network[] = { 26 - AA_SFS_FILE_STRING("af_mask", AA_SFS_AF_MASK), 27 - { } 28 - }; 29 - 30 - static const char * const net_mask_names[] = { 31 - "unknown", 32 - "send", 33 - "receive", 34 - "unknown", 35 - 36 - "create", 37 - "shutdown", 38 - "connect", 39 - "unknown", 40 - 41 - "setattr", 42 - "getattr", 43 - "setcred", 44 - "getcred", 45 - 46 - "chmod", 47 - "chown", 48 - "chgrp", 49 - "lock", 50 - 51 - "mmap", 52 - "mprot", 53 - "unknown", 54 - "unknown", 55 - 56 - "accept", 57 - "bind", 58 - "listen", 59 - "unknown", 60 - 61 - "setopt", 62 - "getopt", 63 - "unknown", 64 - "unknown", 65 - 66 - "unknown", 67 - "unknown", 68 - "unknown", 69 - "unknown", 70 - }; 71 - 72 - 73 - /* audit callback for net specific fields */ 74 - void audit_net_cb(struct audit_buffer *ab, void *va) 75 - { 76 - struct common_audit_data *sa = va; 77 - 78 - audit_log_format(ab, " family="); 79 - if (address_family_names[sa->u.net->family]) 80 - audit_log_string(ab, address_family_names[sa->u.net->family]); 81 - else 82 - audit_log_format(ab, "\"unknown(%d)\"", sa->u.net->family); 83 - audit_log_format(ab, " sock_type="); 84 - if (sock_type_names[aad(sa)->net.type]) 85 - audit_log_string(ab, sock_type_names[aad(sa)->net.type]); 86 - else 87 - audit_log_format(ab, "\"unknown(%d)\"", 
aad(sa)->net.type); 88 - audit_log_format(ab, " protocol=%d", aad(sa)->net.protocol); 89 - 90 - if (aad(sa)->request & NET_PERMS_MASK) { 91 - audit_log_format(ab, " requested_mask="); 92 - aa_audit_perm_mask(ab, aad(sa)->request, NULL, 0, 93 - net_mask_names, NET_PERMS_MASK); 94 - 95 - if (aad(sa)->denied & NET_PERMS_MASK) { 96 - audit_log_format(ab, " denied_mask="); 97 - aa_audit_perm_mask(ab, aad(sa)->denied, NULL, 0, 98 - net_mask_names, NET_PERMS_MASK); 99 - } 100 - } 101 - if (aad(sa)->peer) { 102 - audit_log_format(ab, " peer="); 103 - aa_label_xaudit(ab, labels_ns(aad(sa)->label), aad(sa)->peer, 104 - FLAGS_NONE, GFP_ATOMIC); 105 - } 106 - } 107 - 108 - 109 - /* Generic af perm */ 110 - int aa_profile_af_perm(struct aa_profile *profile, struct common_audit_data *sa, 111 - u32 request, u16 family, int type) 112 - { 113 - struct aa_perms perms = { }; 114 - 115 - AA_BUG(family >= AF_MAX); 116 - AA_BUG(type < 0 || type >= SOCK_MAX); 117 - 118 - if (profile_unconfined(profile)) 119 - return 0; 120 - 121 - perms.allow = (profile->net.allow[family] & (1 << type)) ? 122 - ALL_PERMS_MASK : 0; 123 - perms.audit = (profile->net.audit[family] & (1 << type)) ? 124 - ALL_PERMS_MASK : 0; 125 - perms.quiet = (profile->net.quiet[family] & (1 << type)) ? 
126 - ALL_PERMS_MASK : 0; 127 - aa_apply_modes_to_perms(profile, &perms); 128 - 129 - return aa_check_perms(profile, &perms, request, sa, audit_net_cb); 130 - } 131 - 132 - int aa_af_perm(struct aa_label *label, const char *op, u32 request, u16 family, 133 - int type, int protocol) 134 - { 135 - struct aa_profile *profile; 136 - DEFINE_AUDIT_NET(sa, op, NULL, family, type, protocol); 137 - 138 - return fn_for_each_confined(label, profile, 139 - aa_profile_af_perm(profile, &sa, request, family, 140 - type)); 141 - } 142 - 143 - static int aa_label_sk_perm(struct aa_label *label, const char *op, u32 request, 144 - struct sock *sk) 145 - { 146 - struct aa_profile *profile; 147 - DEFINE_AUDIT_SK(sa, op, sk); 148 - 149 - AA_BUG(!label); 150 - AA_BUG(!sk); 151 - 152 - if (unconfined(label)) 153 - return 0; 154 - 155 - return fn_for_each_confined(label, profile, 156 - aa_profile_af_sk_perm(profile, &sa, request, sk)); 157 - } 158 - 159 - int aa_sk_perm(const char *op, u32 request, struct sock *sk) 160 - { 161 - struct aa_label *label; 162 - int error; 163 - 164 - AA_BUG(!sk); 165 - AA_BUG(in_interrupt()); 166 - 167 - /* TODO: switch to begin_current_label ???? */ 168 - label = begin_current_label_crit_section(); 169 - error = aa_label_sk_perm(label, op, request, sk); 170 - end_current_label_crit_section(label); 171 - 172 - return error; 173 - } 174 - 175 - 176 - int aa_sock_file_perm(struct aa_label *label, const char *op, u32 request, 177 - struct socket *sock) 178 - { 179 - AA_BUG(!label); 180 - AA_BUG(!sock); 181 - AA_BUG(!sock->sk); 182 - 183 - return aa_label_sk_perm(label, op, request, sock->sk); 184 - }
+1 -46
security/apparmor/policy_unpack.c
··· 275 275 return 0; 276 276 } 277 277 278 - static bool unpack_u16(struct aa_ext *e, u16 *data, const char *name) 279 - { 280 - if (unpack_nameX(e, AA_U16, name)) { 281 - if (!inbounds(e, sizeof(u16))) 282 - return 0; 283 - if (data) 284 - *data = le16_to_cpu(get_unaligned((__le16 *) e->pos)); 285 - e->pos += sizeof(u16); 286 - return 1; 287 - } 288 - return 0; 289 - } 290 - 291 278 static bool unpack_u32(struct aa_ext *e, u32 *data, const char *name) 292 279 { 293 280 if (unpack_nameX(e, AA_U32, name)) { ··· 584 597 struct aa_profile *profile = NULL; 585 598 const char *tmpname, *tmpns = NULL, *name = NULL; 586 599 const char *info = "failed to unpack profile"; 587 - size_t size = 0, ns_len; 600 + size_t ns_len; 588 601 struct rhashtable_params params = { 0 }; 589 602 char *key = NULL; 590 603 struct aa_data *data; ··· 715 728 if (!unpack_rlimits(e, profile)) { 716 729 info = "failed to unpack profile rlimits"; 717 730 goto fail; 718 - } 719 - 720 - size = unpack_array(e, "net_allowed_af"); 721 - if (size) { 722 - 723 - for (i = 0; i < size; i++) { 724 - /* discard extraneous rules that this kernel will 725 - * never request 726 - */ 727 - if (i >= AF_MAX) { 728 - u16 tmp; 729 - 730 - if (!unpack_u16(e, &tmp, NULL) || 731 - !unpack_u16(e, &tmp, NULL) || 732 - !unpack_u16(e, &tmp, NULL)) 733 - goto fail; 734 - continue; 735 - } 736 - if (!unpack_u16(e, &profile->net.allow[i], NULL)) 737 - goto fail; 738 - if (!unpack_u16(e, &profile->net.audit[i], NULL)) 739 - goto fail; 740 - if (!unpack_u16(e, &profile->net.quiet[i], NULL)) 741 - goto fail; 742 - } 743 - if (!unpack_nameX(e, AA_ARRAYEND, NULL)) 744 - goto fail; 745 - } 746 - if (VERSION_LT(e->version, v7)) { 747 - /* pre v7 policy always allowed these */ 748 - profile->net.allow[AF_UNIX] = 0xffff; 749 - profile->net.allow[AF_NETLINK] = 0xffff; 750 731 } 751 732 752 733 if (unpack_nameX(e, AA_STRUCT, "policydb")) {
+19
sound/pci/hda/patch_realtek.c
··· 327 327 case 0x10ec0215: 328 328 case 0x10ec0225: 329 329 case 0x10ec0233: 330 + case 0x10ec0236: 330 331 case 0x10ec0255: 331 332 case 0x10ec0256: 332 333 case 0x10ec0282: ··· 912 911 { 0x10ec0275, 0x1028, 0, "ALC3260" }, 913 912 { 0x10ec0899, 0x1028, 0, "ALC3861" }, 914 913 { 0x10ec0298, 0x1028, 0, "ALC3266" }, 914 + { 0x10ec0236, 0x1028, 0, "ALC3204" }, 915 915 { 0x10ec0256, 0x1028, 0, "ALC3246" }, 916 916 { 0x10ec0225, 0x1028, 0, "ALC3253" }, 917 917 { 0x10ec0295, 0x1028, 0, "ALC3254" }, ··· 3932 3930 alc_process_coef_fw(codec, coef0255_1); 3933 3931 alc_process_coef_fw(codec, coef0255); 3934 3932 break; 3933 + case 0x10ec0236: 3935 3934 case 0x10ec0256: 3936 3935 alc_process_coef_fw(codec, coef0256); 3937 3936 alc_process_coef_fw(codec, coef0255); ··· 4031 4028 }; 4032 4029 4033 4030 switch (codec->core.vendor_id) { 4031 + case 0x10ec0236: 4034 4032 case 0x10ec0255: 4035 4033 case 0x10ec0256: 4036 4034 alc_write_coef_idx(codec, 0x45, 0xc489); ··· 4164 4160 alc_process_coef_fw(codec, alc225_pre_hsmode); 4165 4161 alc_process_coef_fw(codec, coef0225); 4166 4162 break; 4163 + case 0x10ec0236: 4167 4164 case 0x10ec0255: 4168 4165 case 0x10ec0256: 4169 4166 alc_process_coef_fw(codec, coef0255); ··· 4261 4256 case 0x10ec0255: 4262 4257 alc_process_coef_fw(codec, coef0255); 4263 4258 break; 4259 + case 0x10ec0236: 4264 4260 case 0x10ec0256: 4265 4261 alc_process_coef_fw(codec, coef0256); 4266 4262 break; ··· 4372 4366 case 0x10ec0255: 4373 4367 alc_process_coef_fw(codec, coef0255); 4374 4368 break; 4369 + case 0x10ec0236: 4375 4370 case 0x10ec0256: 4376 4371 alc_process_coef_fw(codec, coef0256); 4377 4372 break; ··· 4458 4451 }; 4459 4452 4460 4453 switch (codec->core.vendor_id) { 4454 + case 0x10ec0236: 4461 4455 case 0x10ec0255: 4462 4456 case 0x10ec0256: 4463 4457 alc_process_coef_fw(codec, coef0255); ··· 4713 4705 case 0x10ec0255: 4714 4706 alc_process_coef_fw(codec, alc255fw); 4715 4707 break; 4708 + case 0x10ec0236: 4716 4709 case 0x10ec0256: 4717 4710 
alc_process_coef_fw(codec, alc256fw); 4718 4711 break; ··· 6428 6419 ALC225_STANDARD_PINS, 6429 6420 {0x12, 0xb7a60130}, 6430 6421 {0x1b, 0x90170110}), 6422 + SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 6423 + {0x12, 0x90a60140}, 6424 + {0x14, 0x90170110}, 6425 + {0x21, 0x02211020}), 6426 + SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 6427 + {0x12, 0x90a60140}, 6428 + {0x14, 0x90170150}, 6429 + {0x21, 0x02211020}), 6431 6430 SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL2_MIC_NO_PRESENCE, 6432 6431 {0x14, 0x90170110}, 6433 6432 {0x21, 0x02211020}), ··· 6823 6806 case 0x10ec0255: 6824 6807 spec->codec_variant = ALC269_TYPE_ALC255; 6825 6808 break; 6809 + case 0x10ec0236: 6826 6810 case 0x10ec0256: 6827 6811 spec->codec_variant = ALC269_TYPE_ALC256; 6828 6812 spec->shutup = alc256_shutup; ··· 7875 7857 HDA_CODEC_ENTRY(0x10ec0233, "ALC233", patch_alc269), 7876 7858 HDA_CODEC_ENTRY(0x10ec0234, "ALC234", patch_alc269), 7877 7859 HDA_CODEC_ENTRY(0x10ec0235, "ALC233", patch_alc269), 7860 + HDA_CODEC_ENTRY(0x10ec0236, "ALC236", patch_alc269), 7878 7861 HDA_CODEC_ENTRY(0x10ec0255, "ALC255", patch_alc269), 7879 7862 HDA_CODEC_ENTRY(0x10ec0256, "ALC256", patch_alc269), 7880 7863 HDA_CODEC_ENTRY(0x10ec0260, "ALC260", patch_alc260),
+2 -2
tools/include/uapi/linux/bpf.h
··· 887 887 }; 888 888 889 889 enum sk_action { 890 - SK_ABORTED = 0, 891 - SK_DROP, 890 + SK_DROP = 0, 891 + SK_PASS, 892 892 SK_REDIRECT, 893 893 }; 894 894
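The hunk above syncs the tools' copy of `enum sk_action` with the kernel header: `SK_DROP` becomes value 0 and `SK_PASS` replaces `SK_ABORTED`, so a zeroed or default verdict drops the packet rather than aborting. A minimal sketch of the corrected numbering (Python stand-in; `apply_verdict` is illustrative, not a real kernel or tools API):

```python
from enum import IntEnum

# Mirrors the corrected enum sk_action from the hunk above:
# SK_DROP must be 0 so a default/zeroed verdict drops the packet.
class SkAction(IntEnum):
    SK_DROP = 0
    SK_PASS = 1
    SK_REDIRECT = 2

def apply_verdict(verdict: int) -> str:
    """Illustrative dispatch on a BPF socket verdict value."""
    if verdict == SkAction.SK_DROP:
        return "drop"
    if verdict == SkAction.SK_PASS:
        return "pass"
    return "redirect"
```

The point of the fix is that the numeric values must agree byte-for-byte with the kernel's own header, since BPF programs return these as raw integers.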
+7 -2
tools/objtool/check.c
··· 267 267 &insn->immediate, 268 268 &insn->stack_op); 269 269 if (ret) 270 - return ret; 270 + goto err; 271 271 272 272 if (!insn->type || insn->type > INSN_LAST) { 273 273 WARN_FUNC("invalid instruction type %d", 274 274 insn->sec, insn->offset, insn->type); 275 - return -1; 275 + ret = -1; 276 + goto err; 276 277 } 277 278 278 279 hash_add(file->insn_hash, &insn->hash, insn->offset); ··· 297 296 } 298 297 299 298 return 0; 299 + 300 + err: 301 + free(insn); 302 + return ret; 300 303 } 301 304 302 305 /*
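The objtool change plugs a memory leak: both error paths after the instruction is allocated now funnel through a single `err:` label that frees it. A rough Python analogue of that single-exit cleanup pattern (names and the dict "allocation" are illustrative, not objtool's real API):

```python
def decode_instruction(raw: bytes):
    """Sketch of the single-exit error path the hunk introduces:
    every failure after allocation goes through one cleanup point,
    so the allocated object is always released."""
    insn = {"raw": raw, "type": None}    # stands in for the malloc'd insn
    try:
        if not raw:                      # decode failure -> ret != 0
            raise ValueError("decode failed")
        insn["type"] = raw[0]
        if insn["type"] == 0:            # invalid instruction type
            raise ValueError("invalid instruction type")
    except ValueError:
        insn = None                      # the "free(insn)" at the err: label
        return None
    return insn
```

In C the same structure is spelled with `goto err;` instead of an exception, which is why the hunk converts the early `return`s rather than duplicating the `free()` call.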
+2 -2
tools/perf/Documentation/perf-record.txt
··· 8 8 SYNOPSIS 9 9 -------- 10 10 [verse] 11 - 'perf record' [-e <EVENT> | --event=EVENT] [-l] [-a] <command> 12 - 'perf record' [-e <EVENT> | --event=EVENT] [-l] [-a] -- <command> [<options>] 11 + 'perf record' [-e <EVENT> | --event=EVENT] [-a] <command> 12 + 'perf record' [-e <EVENT> | --event=EVENT] [-a] -- <command> [<options>] 13 13 14 14 DESCRIPTION 15 15 -----------
+6 -3
tools/perf/tests/shell/trace+probe_libc_inet_pton.sh
··· 10 10 11 11 . $(dirname $0)/lib/probe.sh 12 12 13 + ld=$(realpath /lib64/ld*.so.* | uniq) 14 + libc=$(echo $ld | sed 's/ld/libc/g') 15 + 13 16 trace_libc_inet_pton_backtrace() { 14 17 idx=0 15 18 expected[0]="PING.*bytes" ··· 21 18 expected[3]=".*packets transmitted.*" 22 19 expected[4]="rtt min.*" 23 20 expected[5]="[0-9]+\.[0-9]+[[:space:]]+probe_libc:inet_pton:\([[:xdigit:]]+\)" 24 - expected[6]=".*inet_pton[[:space:]]\(/usr/lib.*/libc-[0-9]+\.[0-9]+\.so\)$" 25 - expected[7]="getaddrinfo[[:space:]]\(/usr/lib.*/libc-[0-9]+\.[0-9]+\.so\)$" 21 + expected[6]=".*inet_pton[[:space:]]\($libc\)$" 22 + expected[7]="getaddrinfo[[:space:]]\($libc\)$" 26 23 expected[8]=".*\(.*/bin/ping.*\)$" 27 24 28 25 perf trace --no-syscalls -e probe_libc:inet_pton/max-stack=3/ ping -6 -c 1 ::1 2>&1 | grep -v ^$ | while read line ; do ··· 38 35 } 39 36 40 37 skip_if_no_perf_probe && \ 41 - perf probe -q /lib64/libc-*.so inet_pton && \ 38 + perf probe -q $libc inet_pton && \ 42 39 trace_libc_inet_pton_backtrace 43 40 err=$? 44 41 rm -f ${file}
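The shell test stops hard-coding `/usr/lib.*/libc-*.so` and instead derives the libc path from the resolved dynamic loader via `sed 's/ld/libc/g'`. The core substitution can be sketched in Python (assumption, matching the script: the distro names the loader and libc with the same version suffix):

```python
def libc_from_ld(ld_path: str) -> str:
    """Mirror of the script's `sed 's/ld/libc/g'`: rewrite the resolved
    dynamic-loader path into the matching libc path."""
    return ld_path.replace("ld", "libc")
```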
+8 -1
tools/perf/ui/hist.c
··· 532 532 533 533 void perf_hpp__column_unregister(struct perf_hpp_fmt *format) 534 534 { 535 - list_del(&format->list); 535 + list_del_init(&format->list); 536 536 } 537 537 538 538 void perf_hpp__cancel_cumulate(void) ··· 606 606 607 607 static void fmt_free(struct perf_hpp_fmt *fmt) 608 608 { 609 + /* 610 + * At this point fmt should be completely 611 + * unhooked, if not it's a bug. 612 + */ 613 + BUG_ON(!list_empty(&fmt->list)); 614 + BUG_ON(!list_empty(&fmt->sort_list)); 615 + 609 616 if (fmt->free) 610 617 fmt->free(fmt); 611 618 }
+15 -2
tools/perf/util/parse-events.l
··· 8 8 9 9 %{ 10 10 #include <errno.h> 11 + #include <sys/types.h> 12 + #include <sys/stat.h> 13 + #include <unistd.h> 11 14 #include "../perf.h" 12 15 #include "parse-events.h" 13 16 #include "parse-events-bison.h" ··· 56 53 return token; 57 54 } 58 55 59 - static bool isbpf(yyscan_t scanner) 56 + static bool isbpf_suffix(char *text) 60 57 { 61 - char *text = parse_events_get_text(scanner); 62 58 int len = strlen(text); 63 59 64 60 if (len < 2) ··· 68 66 if (len > 4 && !strcmp(text + len - 4, ".obj")) 69 67 return true; 70 68 return false; 69 + } 70 + 71 + static bool isbpf(yyscan_t scanner) 72 + { 73 + char *text = parse_events_get_text(scanner); 74 + struct stat st; 75 + 76 + if (!isbpf_suffix(text)) 77 + return false; 78 + 79 + return stat(text, &st) == 0; 71 80 } 72 81 73 82 /*
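The lexer change splits the old `isbpf()` into a pure suffix test plus a `stat()` existence check, so an event name that merely ends in `.c`, `.o`, or `.obj` is no longer misparsed as a BPF object file. A compact Python sketch of the same two-stage check:

```python
import os

def isbpf_suffix(text: str) -> bool:
    """Suffix test only, as in the new isbpf_suffix()."""
    if len(text) < 2:
        return False
    return text.endswith((".c", ".o")) or (len(text) > 4 and text.endswith(".obj"))

def isbpf(text: str) -> bool:
    """Suffix test plus existence check, as in the reworked isbpf()."""
    return isbpf_suffix(text) and os.path.exists(text)
```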
+2
tools/perf/util/session.c
··· 374 374 tool->mmap2 = process_event_stub; 375 375 if (tool->comm == NULL) 376 376 tool->comm = process_event_stub; 377 + if (tool->namespaces == NULL) 378 + tool->namespaces = process_event_stub; 377 379 if (tool->fork == NULL) 378 380 tool->fork = process_event_stub; 379 381 if (tool->exit == NULL)
+2 -2
tools/perf/util/xyarray.h
··· 23 23 24 24 static inline int xyarray__max_y(struct xyarray *xy) 25 25 { 26 - return xy->max_x; 26 + return xy->max_y; 27 27 } 28 28 29 29 static inline int xyarray__max_x(struct xyarray *xy) 30 30 { 31 - return xy->max_y; 31 + return xy->max_x; 32 32 } 33 33 34 34 #endif /* _PERF_XYARRAY_H_ */
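The xyarray fix is a straight swap: `xyarray__max_y()` returned `max_x` and vice versa. A toy analogue showing the corrected accessors (class and method names are illustrative):

```python
class XYArray:
    """Toy stand-in for perf's xyarray: a fixed 2-D table.
    The bug was the two max accessors returning each other's field."""
    def __init__(self, max_x: int, max_y: int):
        self.max_x = max_x
        self.max_y = max_y
        self.data = [[None] * max_y for _ in range(max_x)]

    def xmax(self) -> int:
        return self.max_x    # fixed: report x, not y

    def ymax(self) -> int:
        return self.max_y    # fixed: report y, not x
```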
+1 -1
tools/power/cpupower/Makefile
··· 26 26 27 27 ifneq ($(OUTPUT),) 28 28 # check that the output directory actually exists 29 - OUTDIR := $(realpath $(OUTPUT)) 29 + OUTDIR := $(shell cd $(OUTPUT) && /bin/pwd) 30 30 $(if $(OUTDIR),, $(error output directory "$(OUTPUT)" does not exist)) 31 31 endif 32 32
+3 -3
tools/scripts/Makefile.include
··· 1 1 ifneq ($(O),) 2 2 ifeq ($(origin O), command line) 3 - ABSOLUTE_O := $(realpath $(O)) 4 - dummy := $(if $(ABSOLUTE_O),,$(error O=$(O) does not exist)) 3 + dummy := $(if $(shell test -d $(O) || echo $(O)),$(error O=$(O) does not exist),) 4 + ABSOLUTE_O := $(shell cd $(O) ; pwd) 5 5 OUTPUT := $(ABSOLUTE_O)/$(if $(subdir),$(subdir)/) 6 6 COMMAND_O := O=$(ABSOLUTE_O) 7 7 ifeq ($(objtree),) ··· 12 12 13 13 # check that the output directory actually exists 14 14 ifneq ($(OUTPUT),) 15 - OUTDIR := $(realpath $(OUTPUT)) 15 + OUTDIR := $(shell cd $(OUTPUT) && /bin/pwd) 16 16 $(if $(OUTDIR),, $(error output directory "$(OUTPUT)" does not exist)) 17 17 endif 18 18
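Both Makefile hunks replace `$(realpath …)` with a `$(shell cd … && pwd)` call when validating the output directory. A rough Python analogue of the intended semantics (assumption: the contract is "empty result iff the directory does not exist", which the surrounding `$(if …)` error check relies on):

```python
import os

def resolve_outdir(path: str) -> str:
    """Analogue of `$(shell cd $(OUTPUT) && /bin/pwd)`: absolute path
    if the directory exists, empty string so the caller can raise the
    'does not exist' error."""
    if not os.path.isdir(path):
        return ""
    return os.path.abspath(path)
```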
+21
tools/testing/selftests/tc-testing/tc-tests/filters/tests.json
··· 17 17 "teardown": [ 18 18 "$TC qdisc del dev $DEV1 ingress" 19 19 ] 20 + }, 21 + { 22 + "id": "d052", 23 + "name": "Add 1M filters with the same action", 24 + "category": [ 25 + "filter", 26 + "flower" 27 + ], 28 + "setup": [ 29 + "$TC qdisc add dev $DEV2 ingress", 30 + "./tdc_batch.py $DEV2 $BATCH_FILE --share_action -n 1000000" 31 + ], 32 + "cmdUnderTest": "$TC -b $BATCH_FILE", 33 + "expExitCode": "0", 34 + "verifyCmd": "$TC actions list action gact", 35 + "matchPattern": "action order 0: gact action drop.*index 1 ref 1000000 bind 1000000", 36 + "matchCount": "1", 37 + "teardown": [ 38 + "$TC qdisc del dev $DEV2 ingress", 39 + "/bin/rm $BATCH_FILE" 40 + ] 20 41 } 21 42 ]
+16 -4
tools/testing/selftests/tc-testing/tdc.py
··· 88 88 exit(1) 89 89 90 90 91 - def test_runner(filtered_tests): 91 + def test_runner(filtered_tests, args): 92 92 """ 93 93 Driver function for the unit tests. 94 94 ··· 105 105 for tidx in testlist: 106 106 result = True 107 107 tresult = "" 108 + if "flower" in tidx["category"] and args.device == None: 109 + continue 108 110 print("Test " + tidx["id"] + ": " + tidx["name"]) 109 111 prepare_env(tidx["setup"]) 110 112 (p, procout) = exec_cmd(tidx["cmdUnderTest"]) ··· 153 151 cmd = 'ip link set $DEV0 up' 154 152 exec_cmd(cmd, False) 155 153 cmd = 'ip -s $NS link set $DEV1 up' 154 + exec_cmd(cmd, False) 155 + cmd = 'ip link set $DEV2 netns $NS' 156 + exec_cmd(cmd, False) 157 + cmd = 'ip -s $NS link set $DEV2 up' 156 158 exec_cmd(cmd, False) 157 159 158 160 ··· 217 211 help='Execute the single test case with specified ID') 218 212 parser.add_argument('-i', '--id', action='store_true', dest='gen_id', 219 213 help='Generate ID numbers for new test cases') 220 - return parser 214 + parser.add_argument('-d', '--device', 215 + help='Execute the test case in flower category') 221 216 return parser 222 217 223 218 ··· 232 225 233 226 if args.path != None: 234 227 NAMES['TC'] = args.path 228 + if args.device != None: 229 + NAMES['DEV2'] = args.device 235 230 if not os.path.isfile(NAMES['TC']): 236 231 print("The specified tc path " + NAMES['TC'] + " does not exist.") 237 232 exit(1) ··· 390 381 if (len(alltests) == 0): 391 382 print("Cannot find a test case with ID matching " + target_id) 392 383 exit(1) 393 - catresults = test_runner(alltests) 384 + catresults = test_runner(alltests, args) 394 385 print("All test results: " + "\n\n" + catresults) 395 386 elif (len(target_category) > 0): 387 + if (target_category == "flower") and args.device == None: 388 + print("Please specify a NIC device (-d) to run category flower") 389 + exit(1) 396 390 if (target_category not in ucat): 397 391 print("Specified category is not present in this file.") 398 392 exit(1) 399 393 else: 400 
- catresults = test_runner(testcases[target_category]) 394 + catresults = test_runner(testcases[target_category], args) 401 395 print("Category " + target_category + "\n\n" + catresults) 402 396 403 397 ns_destroy()
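The tdc.py changes gate flower tests on the new `-d`/`--device` option: `test_runner()` now skips any test whose category includes `flower` when no NIC was supplied. The filter can be sketched as (function name is illustrative; tdc.py does this inline in its loop):

```python
def runnable_tests(tests, device):
    """Mirror of the gating the hunk adds to test_runner(): flower
    tests need a real NIC (-d), so skip them when none is given."""
    return [t for t in tests
            if device is not None or "flower" not in t["category"]]
```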
+62
tools/testing/selftests/tc-testing/tdc_batch.py
··· 1 + #!/usr/bin/python3 2 + 3 + """ 4 + tdc_batch.py - a script to generate TC batch file 5 + 6 + Copyright (C) 2017 Chris Mi <chrism@mellanox.com> 7 + """ 8 + 9 + import argparse 10 + 11 + parser = argparse.ArgumentParser(description='TC batch file generator') 12 + parser.add_argument("device", help="device name") 13 + parser.add_argument("file", help="batch file name") 14 + parser.add_argument("-n", "--number", type=int, 15 + help="how many lines in batch file") 16 + parser.add_argument("-o", "--skip_sw", 17 + help="skip_sw (offload), by default skip_hw", 18 + action="store_true") 19 + parser.add_argument("-s", "--share_action", 20 + help="all filters share the same action", 21 + action="store_true") 22 + parser.add_argument("-p", "--prio", 23 + help="all filters have different prio", 24 + action="store_true") 25 + args = parser.parse_args() 26 + 27 + device = args.device 28 + file = open(args.file, 'w') 29 + 30 + number = 1 31 + if args.number: 32 + number = args.number 33 + 34 + skip = "skip_hw" 35 + if args.skip_sw: 36 + skip = "skip_sw" 37 + 38 + share_action = "" 39 + if args.share_action: 40 + share_action = "index 1" 41 + 42 + prio = "prio 1" 43 + if args.prio: 44 + prio = "" 45 + if number > 0x4000: 46 + number = 0x4000 47 + 48 + index = 0 49 + for i in range(0x100): 50 + for j in range(0x100): 51 + for k in range(0x100): 52 + mac = ("%02x:%02x:%02x" % (i, j, k)) 53 + src_mac = "e4:11:00:" + mac 54 + dst_mac = "e4:12:00:" + mac 55 + cmd = ("filter add dev %s %s protocol ip parent ffff: flower %s " 56 + "src_mac %s dst_mac %s action drop %s" % 57 + (device, prio, skip, src_mac, dst_mac, share_action)) 58 + file.write("%s\n" % cmd) 59 + index += 1 60 + if index >= number: 61 + file.close() 62 + exit(0)
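The new generator walks a 24-bit space and stamps it into the low three octets of the source and destination MACs, one `tc filter` command per value. The nested loop above can be condensed into a generator with the same output (a sketch, same logic as the file, collapsed into one counter):

```python
def batch_lines(device, number, prio="prio 1", skip="skip_hw", share=""):
    """Yield the first `number` tc filter commands, mirroring
    tdc_batch.py's nested loop over a 24-bit MAC suffix."""
    index = 0
    for suffix in range(0x1000000):
        mac = "%02x:%02x:%02x" % (suffix >> 16, (suffix >> 8) & 0xff,
                                  suffix & 0xff)
        yield ("filter add dev %s %s protocol ip parent ffff: flower %s "
               "src_mac e4:11:00:%s dst_mac e4:12:00:%s action drop %s"
               % (device, prio, skip, mac, mac, share))
        index += 1
        if index >= number:
            return
```

The d052 test case above drives the real script as `./tdc_batch.py $DEV2 $BATCH_FILE --share_action -n 1000000`, where `--share_action` makes every filter reference the same gact instance (`index 1`), which is what the `ref 1000000 bind 1000000` match pattern verifies.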
+2
tools/testing/selftests/tc-testing/tdc_config.py
··· 12 12 # Name of veth devices to be created for the namespace 13 13 'DEV0': 'v0p0', 14 14 'DEV1': 'v0p1', 15 + 'DEV2': '', 16 + 'BATCH_FILE': './batch.txt', 15 17 # Name of the namespace to use 16 18 'NS': 'tcut' 17 19 }