Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge ra.kernel.org:/pub/scm/linux/kernel/git/davem/net

Simple overlapping changes in stmmac driver.

Adjust skb_gro_flush_final_remcsum function signature to make GRO list
changes in net-next, as per Stephen Rothwell's example merge
resolution.

Signed-off-by: David S. Miller <davem@davemloft.net>

+3829 -2461
+12 -2
Documentation/admin-guide/pm/intel_pstate.rst
··· 324 324 325 325 ``intel_pstate`` exposes several global attributes (files) in ``sysfs`` to 326 326 control its functionality at the system level. They are located in the 327 - ``/sys/devices/system/cpu/cpufreq/intel_pstate/`` directory and affect all 328 - CPUs. 327 + ``/sys/devices/system/cpu/intel_pstate/`` directory and affect all CPUs. 329 328 330 329 Some of them are not present if the ``intel_pstate=per_cpu_perf_limits`` 331 330 argument is passed to the kernel in the command line. ··· 377 378 supplied to the ``CPUFreq`` core and exposed via the policy interface, 378 379 but it affects the maximum possible value of per-policy P-state limits 379 380 (see `Interpretation of Policy Attributes`_ below for details). 381 + 382 + ``hwp_dynamic_boost`` 383 + This attribute is only present if ``intel_pstate`` works in the 384 + `active mode with the HWP feature enabled <Active Mode With HWP_>`_ in 385 + the processor. If set (equal to 1), it causes the minimum P-state limit 386 + to be increased dynamically for a short time whenever a task previously 387 + waiting on I/O is selected to run on a given logical CPU (the purpose 388 + of this mechanism is to improve performance). 389 + 390 + This setting has no effect on logical CPUs whose minimum P-state limit 391 + is directly set to the highest non-turbo P-state or above it. 380 392 381 393 .. _status_attr: 382 394
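The hunk above corrects the sysfs directory for the global ``intel_pstate`` attributes and documents the new ``hwp_dynamic_boost`` knob. A minimal shell sketch of reading such an attribute; the helper names are hypothetical, only the ``/sys/devices/system/cpu/intel_pstate/`` path and the attribute names come from the documentation:

```shell
# Directory documented in the updated intel_pstate admin guide.
INTEL_PSTATE_DIR=/sys/devices/system/cpu/intel_pstate

# Hypothetical helper: build the path of a global intel_pstate attribute.
pstate_attr_path() {
    printf '%s/%s' "$INTEL_PSTATE_DIR" "$1"
}

# Hypothetical helper: print an attribute such as hwp_dynamic_boost
# (0 or 1), but only if the running kernel actually exposes it.
pstate_attr_read() {
    path=$(pstate_attr_path "$1")
    [ -r "$path" ] && cat "$path"
}
```

For example, ``pstate_attr_read hwp_dynamic_boost`` prints the current setting on systems where ``intel_pstate`` runs in the active mode with HWP enabled; on other systems the attribute is simply absent and the helper prints nothing.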
+23
Documentation/devicetree/bindings/input/sprd,sc27xx-vibra.txt
··· 1 + Spreadtrum SC27xx PMIC Vibrator 2 + 3 + Required properties: 4 + - compatible: should be "sprd,sc2731-vibrator". 5 + - reg: address of vibrator control register. 6 + 7 + Example : 8 + 9 + sc2731_pmic: pmic@0 { 10 + compatible = "sprd,sc2731"; 11 + reg = <0>; 12 + spi-max-frequency = <26000000>; 13 + interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>; 14 + interrupt-controller; 15 + #interrupt-cells = <2>; 16 + #address-cells = <1>; 17 + #size-cells = <0>; 18 + 19 + vibrator@eb4 { 20 + compatible = "sprd,sc2731-vibrator"; 21 + reg = <0xeb4>; 22 + }; 23 + };
+1 -6
Documentation/filesystems/Locking
··· 441 441 int (*iterate) (struct file *, struct dir_context *); 442 442 int (*iterate_shared) (struct file *, struct dir_context *); 443 443 __poll_t (*poll) (struct file *, struct poll_table_struct *); 444 - struct wait_queue_head * (*get_poll_head)(struct file *, __poll_t); 445 - __poll_t (*poll_mask) (struct file *, __poll_t); 446 444 long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long); 447 445 long (*compat_ioctl) (struct file *, unsigned int, unsigned long); 448 446 int (*mmap) (struct file *, struct vm_area_struct *); ··· 471 473 }; 472 474 473 475 locking rules: 474 - All except for ->poll_mask may block. 476 + All may block. 475 477 476 478 ->llseek() locking has moved from llseek to the individual llseek 477 479 implementations. If your fs is not using generic_file_llseek, you ··· 502 504 ->setlease operations should call generic_setlease() before or after setting 503 505 the lease within the individual filesystem to record the result of the 504 506 operation 505 - 506 - ->poll_mask can be called with or without the waitqueue lock for the waitqueue 507 - returned from ->get_poll_head. 508 507 509 508 --------------------------- dquot_operations ------------------------------- 510 509 prototypes:
-13
Documentation/filesystems/vfs.txt
··· 857 857 ssize_t (*write_iter) (struct kiocb *, struct iov_iter *); 858 858 int (*iterate) (struct file *, struct dir_context *); 859 859 __poll_t (*poll) (struct file *, struct poll_table_struct *); 860 - struct wait_queue_head * (*get_poll_head)(struct file *, __poll_t); 861 - __poll_t (*poll_mask) (struct file *, __poll_t); 862 860 long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long); 863 861 long (*compat_ioctl) (struct file *, unsigned int, unsigned long); 864 862 int (*mmap) (struct file *, struct vm_area_struct *); ··· 900 902 poll: called by the VFS when a process wants to check if there is 901 903 activity on this file and (optionally) go to sleep until there 902 904 is activity. Called by the select(2) and poll(2) system calls 903 - 904 - get_poll_head: Returns the struct wait_queue_head that callers can 905 - wait on. Callers need to check the returned events using ->poll_mask 906 - once woken. Can return NULL to indicate polling is not supported, 907 - or any error code using the ERR_PTR convention to indicate that a 908 - grave error occured and ->poll_mask shall not be called. 909 - 910 - poll_mask: return the mask of EPOLL* values describing the file descriptor 911 - state. Called either before going to sleep on the waitqueue returned by 912 - get_poll_head, or after it has been woken. If ->get_poll_head and 913 - ->poll_mask are implemented ->poll does not need to be implement. 914 905 915 906 unlocked_ioctl: called by the ioctl(2) system call. 916 907
+6
Documentation/kbuild/kconfig-language.txt
··· 430 430 to use it. It should be placed at the top of the configuration, before any 431 431 other statement. 432 432 433 + '#' Kconfig source file comment: 434 + 435 + An unquoted '#' character anywhere in a source file line indicates 436 + the beginning of a source file comment. The remainder of that line 437 + is a comment. 438 + 433 439 434 440 Kconfig hints 435 441 -------------
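The comment rule added above can be illustrated with a tiny Kconfig fragment; the symbol name is made up for illustration:

```kconfig
# This entire line is a Kconfig source file comment.
config HYPOTHETICAL_OPTION
	bool "A hypothetical option"   # an unquoted '#' mid-line starts a comment too
```

A '#' inside a quoted string (for instance in a prompt text) is ordinary text rather than a comment, since the rule applies only to an unquoted '#'.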
+1 -1
Documentation/usb/gadget_configfs.txt
··· 226 226 where <config name>.<number> specify the configuration and <function> is 227 227 a symlink to a function being removed from the configuration, e.g.: 228 228 229 - $ rm configfs/c.1/ncm.usb0 229 + $ rm configs/c.1/ncm.usb0 230 230 231 231 ... 232 232 ...
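The corrected ``configs/`` path above lives under the gadget's configfs tree. A small sketch of how the function symlink's path is composed; the configfs mount point and the gadget name ``g1`` are assumptions, only the ``configs/<config name>/<function>`` layout comes from the documentation:

```shell
# Hypothetical helper: path of a function symlink inside a configuration,
# assuming a gadget named g1 under the usual configfs mount point.
gadget_function_link() {
    # $1 = config name (e.g. c.1), $2 = function instance (e.g. ncm.usb0)
    printf '/sys/kernel/config/usb_gadget/g1/configs/%s/%s' "$1" "$2"
}

# Removing that symlink detaches the function from the configuration:
#   rm "$(gadget_function_link c.1 ncm.usb0)"
```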
+19 -6
MAINTAINERS
··· 2971 2971 N: bcm586* 2972 2972 N: bcm88312 2973 2973 N: hr2 2974 - F: arch/arm64/boot/dts/broadcom/ns2* 2974 + N: stingray 2975 + F: arch/arm64/boot/dts/broadcom/northstar2/* 2976 + F: arch/arm64/boot/dts/broadcom/stingray/* 2975 2977 F: drivers/clk/bcm/clk-ns* 2978 + F: drivers/clk/bcm/clk-sr* 2976 2979 F: drivers/pinctrl/bcm/pinctrl-ns* 2980 + F: include/dt-bindings/clock/bcm-sr* 2977 2981 2978 2982 BROADCOM KONA GPIO DRIVER 2979 2983 M: Ray Jui <rjui@broadcom.com> ··· 5673 5669 F: Documentation/devicetree/bindings/crypto/fsl-sec4.txt 5674 5670 5675 5671 FREESCALE DIU FRAMEBUFFER DRIVER 5676 - M: Timur Tabi <timur@tabi.org> 5672 + M: Timur Tabi <timur@kernel.org> 5677 5673 L: linux-fbdev@vger.kernel.org 5678 5674 S: Maintained 5679 5675 F: drivers/video/fbdev/fsl-diu-fb.* ··· 5773 5769 F: drivers/net/wan/fsl_ucc_hdlc* 5774 5770 5775 5771 FREESCALE QUICC ENGINE UCC UART DRIVER 5776 - M: Timur Tabi <timur@tabi.org> 5772 + M: Timur Tabi <timur@kernel.org> 5777 5773 L: linuxppc-dev@lists.ozlabs.org 5778 5774 S: Maintained 5779 5775 F: drivers/tty/serial/ucc_uart.c ··· 5797 5793 F: include/linux/fs_enet_pd.h 5798 5794 5799 5795 FREESCALE SOC SOUND DRIVERS 5800 - M: Timur Tabi <timur@tabi.org> 5796 + M: Timur Tabi <timur@kernel.org> 5801 5797 M: Nicolin Chen <nicoleotsuka@gmail.com> 5802 5798 M: Xiubo Li <Xiubo.Lee@gmail.com> 5803 5799 R: Fabio Estevam <fabio.estevam@nxp.com> ··· 11482 11478 S: Obsolete 11483 11479 F: drivers/net/wireless/intersil/prism54/ 11484 11480 11481 + PROC FILESYSTEM 11482 + R: Alexey Dobriyan <adobriyan@gmail.com> 11483 + L: linux-kernel@vger.kernel.org 11484 + L: linux-fsdevel@vger.kernel.org 11485 + S: Maintained 11486 + F: fs/proc/ 11487 + F: include/linux/proc_fs.h 11488 + F: tools/testing/selftests/proc/ 11489 + 11485 11490 PROC SYSCTL 11486 11491 M: "Luis R. Rodriguez" <mcgrof@kernel.org> 11487 11492 M: Kees Cook <keescook@chromium.org> ··· 11823 11810 F: drivers/cpufreq/qcom-cpufreq-kryo.c 11824 11811 11825 11812 QUALCOMM EMAC GIGABIT ETHERNET DRIVER 11826 - M: Timur Tabi <timur@codeaurora.org> 11813 + M: Timur Tabi <timur@kernel.org> 11827 11814 L: netdev@vger.kernel.org 11828 - S: Supported 11815 + S: Maintained 11829 11816 F: drivers/net/ethernet/qualcomm/emac/ 11830 11817 QUALCOMM HEXAGON ARCHITECTURE
+1 -6
Makefile
··· 2 2 VERSION = 4 3 3 PATCHLEVEL = 18 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc2 5 + EXTRAVERSION = -rc3 6 6 NAME = Merciless Moray 7 7 8 8 # *DOCUMENTATION* ··· 505 505 CC_HAVE_ASM_GOTO := 1 506 506 KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO 507 507 KBUILD_AFLAGS += -DCC_HAVE_ASM_GOTO 508 - endif 509 - 510 - ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/cc-can-link.sh $(CC)), y) 511 - CC_CAN_LINK := y 512 - export CC_CAN_LINK 513 508 endif 514 509 515 510 # The expansion should be delayed until arch/$(SRCARCH)/Makefile is included.
+7 -1
arch/arm/Kconfig
··· 1245 1245 VESA. If you have PCI, say Y, otherwise N. 1246 1246 1247 1247 config PCI_DOMAINS 1248 - bool 1248 + bool "Support for multiple PCI domains" 1249 1249 depends on PCI 1250 + help 1251 + Enable PCI domains kernel management. Say Y if your machine 1252 + has a PCI bus hierarchy that requires more than one PCI 1253 + domain (aka segment) to be correctly managed. Say N otherwise. 1254 + 1255 + If you don't know what to do here, say N. 1250 1256 1251 1257 config PCI_DOMAINS_GENERIC 1252 1258 def_bool PCI_DOMAINS
+1 -1
arch/arm/boot/dts/armada-385-synology-ds116.dts
··· 139 139 3700 5 140 140 3900 6 141 141 4000 7>; 142 - cooling-cells = <2>; 142 + #cooling-cells = <2>; 143 143 }; 144 144 145 145 gpio-leds {
+12 -12
arch/arm/boot/dts/bcm-cygnus.dtsi
··· 216 216 reg = <0x18008000 0x100>; 217 217 #address-cells = <1>; 218 218 #size-cells = <0>; 219 - interrupts = <GIC_SPI 85 IRQ_TYPE_NONE>; 219 + interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>; 220 220 clock-frequency = <100000>; 221 221 status = "disabled"; 222 222 }; ··· 245 245 reg = <0x1800b000 0x100>; 246 246 #address-cells = <1>; 247 247 #size-cells = <0>; 248 - interrupts = <GIC_SPI 86 IRQ_TYPE_NONE>; 248 + interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>; 249 249 clock-frequency = <100000>; 250 250 status = "disabled"; 251 251 }; ··· 256 256 257 257 #interrupt-cells = <1>; 258 258 interrupt-map-mask = <0 0 0 0>; 259 - interrupt-map = <0 0 0 0 &gic GIC_SPI 100 IRQ_TYPE_NONE>; 259 + interrupt-map = <0 0 0 0 &gic GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>; 260 260 261 261 linux,pci-domain = <0>; 262 262 ··· 278 278 compatible = "brcm,iproc-msi"; 279 279 msi-controller; 280 280 interrupt-parent = <&gic>; 281 - interrupts = <GIC_SPI 96 IRQ_TYPE_NONE>, 282 - <GIC_SPI 97 IRQ_TYPE_NONE>, 283 - <GIC_SPI 98 IRQ_TYPE_NONE>, 284 - <GIC_SPI 99 IRQ_TYPE_NONE>; 281 + interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>, 282 + <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>, 283 + <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>, 284 + <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>; 285 285 }; 286 286 }; 287 287 ··· 291 291 292 292 #interrupt-cells = <1>; 293 293 interrupt-map-mask = <0 0 0 0>; 294 - interrupt-map = <0 0 0 0 &gic GIC_SPI 106 IRQ_TYPE_NONE>; 294 + interrupt-map = <0 0 0 0 &gic GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>; 295 295 296 296 linux,pci-domain = <1>; 297 297 ··· 313 313 compatible = "brcm,iproc-msi"; 314 314 msi-controller; 315 315 interrupt-parent = <&gic>; 316 - interrupts = <GIC_SPI 102 IRQ_TYPE_NONE>, 317 - <GIC_SPI 103 IRQ_TYPE_NONE>, 318 - <GIC_SPI 104 IRQ_TYPE_NONE>, 319 - <GIC_SPI 105 IRQ_TYPE_NONE>; 316 + interrupts = <GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>, 317 + <GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH>, 318 + <GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>, 319 + <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>; 320 320 }; 321 321 }; 322 322
+12 -12
arch/arm/boot/dts/bcm-hr2.dtsi
··· 264 264 reg = <0x38000 0x50>; 265 265 #address-cells = <1>; 266 266 #size-cells = <0>; 267 - interrupts = <GIC_SPI 95 IRQ_TYPE_NONE>; 267 + interrupts = <GIC_SPI 95 IRQ_TYPE_LEVEL_HIGH>; 268 268 clock-frequency = <100000>; 269 269 }; 270 270 ··· 279 279 reg = <0x3b000 0x50>; 280 280 #address-cells = <1>; 281 281 #size-cells = <0>; 282 - interrupts = <GIC_SPI 96 IRQ_TYPE_NONE>; 282 + interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>; 283 283 clock-frequency = <100000>; 284 284 }; 285 285 }; ··· 300 300 301 301 #interrupt-cells = <1>; 302 302 interrupt-map-mask = <0 0 0 0>; 303 - interrupt-map = <0 0 0 0 &gic GIC_SPI 186 IRQ_TYPE_NONE>; 303 + interrupt-map = <0 0 0 0 &gic GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>; 304 304 305 305 linux,pci-domain = <0>; 306 306 ··· 322 322 compatible = "brcm,iproc-msi"; 323 323 msi-controller; 324 324 interrupt-parent = <&gic>; 325 - interrupts = <GIC_SPI 182 IRQ_TYPE_NONE>, 326 - <GIC_SPI 183 IRQ_TYPE_NONE>, 327 - <GIC_SPI 184 IRQ_TYPE_NONE>, 328 - <GIC_SPI 185 IRQ_TYPE_NONE>; 325 + interrupts = <GIC_SPI 182 IRQ_TYPE_LEVEL_HIGH>, 326 + <GIC_SPI 183 IRQ_TYPE_LEVEL_HIGH>, 327 + <GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>, 328 + <GIC_SPI 185 IRQ_TYPE_LEVEL_HIGH>; 329 329 brcm,pcie-msi-inten; 330 330 }; 331 331 }; ··· 336 336 337 337 #interrupt-cells = <1>; 338 338 interrupt-map-mask = <0 0 0 0>; 339 - interrupt-map = <0 0 0 0 &gic GIC_SPI 192 IRQ_TYPE_NONE>; 339 + interrupt-map = <0 0 0 0 &gic GIC_SPI 192 IRQ_TYPE_LEVEL_HIGH>; 340 340 341 341 linux,pci-domain = <1>; 342 342 ··· 358 358 compatible = "brcm,iproc-msi"; 359 359 msi-controller; 360 360 interrupt-parent = <&gic>; 361 - interrupts = <GIC_SPI 188 IRQ_TYPE_NONE>, 362 - <GIC_SPI 189 IRQ_TYPE_NONE>, 363 - <GIC_SPI 190 IRQ_TYPE_NONE>, 364 - <GIC_SPI 191 IRQ_TYPE_NONE>; 361 + interrupts = <GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>, 362 + <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>, 363 + <GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>, 364 + <GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>; 365 365 brcm,pcie-msi-inten; 366 366 }; 367 367 };
+16 -16
arch/arm/boot/dts/bcm-nsp.dtsi
··· 391 391 reg = <0x38000 0x50>; 392 392 #address-cells = <1>; 393 393 #size-cells = <0>; 394 - interrupts = <GIC_SPI 89 IRQ_TYPE_NONE>; 394 + interrupts = <GIC_SPI 89 IRQ_TYPE_LEVEL_HIGH>; 395 395 clock-frequency = <100000>; 396 396 dma-coherent; 397 397 status = "disabled"; ··· 496 496 497 497 #interrupt-cells = <1>; 498 498 interrupt-map-mask = <0 0 0 0>; 499 - interrupt-map = <0 0 0 0 &gic GIC_SPI 131 IRQ_TYPE_NONE>; 499 + interrupt-map = <0 0 0 0 &gic GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>; 500 500 501 501 linux,pci-domain = <0>; 502 502 ··· 519 519 compatible = "brcm,iproc-msi"; 520 520 msi-controller; 521 521 interrupt-parent = <&gic>; 522 - interrupts = <GIC_SPI 127 IRQ_TYPE_NONE>, 523 - <GIC_SPI 128 IRQ_TYPE_NONE>, 524 - <GIC_SPI 129 IRQ_TYPE_NONE>, 525 - <GIC_SPI 130 IRQ_TYPE_NONE>; 522 + interrupts = <GIC_SPI 127 IRQ_TYPE_LEVEL_HIGH>, 523 + <GIC_SPI 128 IRQ_TYPE_LEVEL_HIGH>, 524 + <GIC_SPI 129 IRQ_TYPE_LEVEL_HIGH>, 525 + <GIC_SPI 130 IRQ_TYPE_LEVEL_HIGH>; 526 526 brcm,pcie-msi-inten; 527 527 }; 528 528 }; ··· 533 533 534 534 #interrupt-cells = <1>; 535 535 interrupt-map-mask = <0 0 0 0>; 536 - interrupt-map = <0 0 0 0 &gic GIC_SPI 137 IRQ_TYPE_NONE>; 536 + interrupt-map = <0 0 0 0 &gic GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>; 537 537 538 538 linux,pci-domain = <1>; 539 539 ··· 556 556 compatible = "brcm,iproc-msi"; 557 557 msi-controller; 558 558 interrupt-parent = <&gic>; 559 - interrupts = <GIC_SPI 133 IRQ_TYPE_NONE>, 560 - <GIC_SPI 134 IRQ_TYPE_NONE>, 561 - <GIC_SPI 135 IRQ_TYPE_NONE>, 562 - <GIC_SPI 136 IRQ_TYPE_NONE>; 559 + interrupts = <GIC_SPI 133 IRQ_TYPE_LEVEL_HIGH>, 560 + <GIC_SPI 134 IRQ_TYPE_LEVEL_HIGH>, 561 + <GIC_SPI 135 IRQ_TYPE_LEVEL_HIGH>, 562 + <GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>; 563 563 brcm,pcie-msi-inten; 564 564 }; 565 565 }; ··· 570 570 571 571 #interrupt-cells = <1>; 572 572 interrupt-map-mask = <0 0 0 0>; 573 - interrupt-map = <0 0 0 0 &gic GIC_SPI 143 IRQ_TYPE_NONE>; 573 + interrupt-map = <0 0 0 0 &gic GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>; 574 574 575 575 linux,pci-domain = <2>; 576 576 ··· 593 593 compatible = "brcm,iproc-msi"; 594 594 msi-controller; 595 595 interrupt-parent = <&gic>; 596 - interrupts = <GIC_SPI 139 IRQ_TYPE_NONE>, 597 - <GIC_SPI 140 IRQ_TYPE_NONE>, 598 - <GIC_SPI 141 IRQ_TYPE_NONE>, 599 - <GIC_SPI 142 IRQ_TYPE_NONE>; 596 + interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>, 597 + <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>, 598 + <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>, 599 + <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>; 600 600 brcm,pcie-msi-inten; 601 601 }; 602 602 };
+1 -1
arch/arm/boot/dts/bcm5301x.dtsi
··· 365 365 i2c0: i2c@18009000 { 366 366 compatible = "brcm,iproc-i2c"; 367 367 reg = <0x18009000 0x50>; 368 - interrupts = <GIC_SPI 121 IRQ_TYPE_NONE>; 368 + interrupts = <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>; 369 369 #address-cells = <1>; 370 370 #size-cells = <0>; 371 371 clock-frequency = <100000>;
+1 -5
arch/arm/boot/dts/da850.dtsi
··· 549 549 gpio-controller; 550 550 #gpio-cells = <2>; 551 551 reg = <0x226000 0x1000>; 552 - interrupts = <42 IRQ_TYPE_EDGE_BOTH 553 - 43 IRQ_TYPE_EDGE_BOTH 44 IRQ_TYPE_EDGE_BOTH 554 - 45 IRQ_TYPE_EDGE_BOTH 46 IRQ_TYPE_EDGE_BOTH 555 - 47 IRQ_TYPE_EDGE_BOTH 48 IRQ_TYPE_EDGE_BOTH 556 - 49 IRQ_TYPE_EDGE_BOTH 50 IRQ_TYPE_EDGE_BOTH>; 552 + interrupts = <42 43 44 45 46 47 48 49 50>; 557 553 ti,ngpio = <144>; 558 554 ti,davinci-gpio-unbanked = <0>; 559 555 status = "disabled";
+1 -1
arch/arm/boot/dts/imx6q.dtsi
··· 90 90 clocks = <&clks IMX6Q_CLK_ECSPI5>, 91 91 <&clks IMX6Q_CLK_ECSPI5>; 92 92 clock-names = "ipg", "per"; 93 - dmas = <&sdma 11 7 1>, <&sdma 12 7 2>; 93 + dmas = <&sdma 11 8 1>, <&sdma 12 8 2>; 94 94 dma-names = "rx", "tx"; 95 95 status = "disabled"; 96 96 };
+1 -1
arch/arm/boot/dts/imx6sx.dtsi
··· 1344 1344 ranges = <0x81000000 0 0 0x08f80000 0 0x00010000 /* downstream I/O */ 1345 1345 0x82000000 0 0x08000000 0x08000000 0 0x00f00000>; /* non-prefetchable memory */ 1346 1346 num-lanes = <1>; 1347 - interrupts = <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>; 1347 + interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>; 1348 1348 interrupt-names = "msi"; 1349 1349 #interrupt-cells = <1>; 1350 1350 interrupt-map-mask = <0 0 0 0x7>;
+2 -2
arch/arm/boot/dts/socfpga.dtsi
··· 748 748 nand0: nand@ff900000 { 749 749 #address-cells = <0x1>; 750 750 #size-cells = <0x1>; 751 - compatible = "denali,denali-nand-dt"; 751 + compatible = "altr,socfpga-denali-nand"; 752 752 reg = <0xff900000 0x100000>, 753 753 <0xffb80000 0x10000>; 754 754 reg-names = "nand_data", "denali_reg"; 755 755 interrupts = <0x0 0x90 0x4>; 756 756 dma-mask = <0xffffffff>; 757 - clocks = <&nand_clk>; 757 + clocks = <&nand_x_clk>; 758 758 status = "disabled"; 759 759 }; 760 760
+2 -3
arch/arm/boot/dts/socfpga_arria10.dtsi
··· 593 593 #size-cells = <0>; 594 594 reg = <0xffda5000 0x100>; 595 595 interrupts = <0 102 4>; 596 - num-chipselect = <4>; 597 - bus-num = <0>; 596 + num-cs = <4>; 598 597 /*32bit_access;*/ 599 598 tx-dma-channel = <&pdma 16>; 600 599 rx-dma-channel = <&pdma 17>; ··· 632 633 nand: nand@ffb90000 { 633 634 #address-cells = <1>; 634 635 #size-cells = <1>; 635 - compatible = "denali,denali-nand-dt", "altr,socfpga-denali-nand"; 636 + compatible = "altr,socfpga-denali-nand"; 636 637 reg = <0xffb90000 0x72000>, 637 638 <0xffb80000 0x10000>; 638 639 reg-names = "nand_data", "denali_reg";
+1 -1
arch/arm/common/Makefile
··· 10 10 obj-$(CONFIG_SHARP_LOCOMO) += locomo.o 11 11 obj-$(CONFIG_SHARP_PARAM) += sharpsl_param.o 12 12 obj-$(CONFIG_SHARP_SCOOP) += scoop.o 13 - obj-$(CONFIG_SMP) += secure_cntvoff.o 13 + obj-$(CONFIG_CPU_V7) += secure_cntvoff.o 14 14 obj-$(CONFIG_PCI_HOST_ITE8152) += it8152.o 15 15 obj-$(CONFIG_MCPM) += mcpm_head.o mcpm_entry.o mcpm_platsmp.o vlock.o 16 16 CFLAGS_REMOVE_mcpm_entry.o = -pg
+161 -229
arch/arm/configs/multi_v7_defconfig
··· 1 1 CONFIG_SYSVIPC=y 2 - CONFIG_FHANDLE=y 3 2 CONFIG_NO_HZ=y 4 3 CONFIG_HIGH_RES_TIMERS=y 5 4 CONFIG_CGROUPS=y ··· 9 10 CONFIG_MODULE_UNLOAD=y 10 11 CONFIG_PARTITION_ADVANCED=y 11 12 CONFIG_CMDLINE_PARTITION=y 12 - CONFIG_ARCH_MULTI_V7=y 13 - # CONFIG_ARCH_MULTI_V5 is not set 14 - # CONFIG_ARCH_MULTI_V4 is not set 15 13 CONFIG_ARCH_VIRT=y 16 14 CONFIG_ARCH_ALPINE=y 17 15 CONFIG_ARCH_ARTPEC=y 18 16 CONFIG_MACH_ARTPEC6=y 19 - CONFIG_ARCH_MVEBU=y 20 - CONFIG_MACH_ARMADA_370=y 21 - CONFIG_MACH_ARMADA_375=y 22 - CONFIG_MACH_ARMADA_38X=y 23 - CONFIG_MACH_ARMADA_39X=y 24 - CONFIG_MACH_ARMADA_XP=y 25 - CONFIG_MACH_DOVE=y 26 17 CONFIG_ARCH_AT91=y 27 18 CONFIG_SOC_SAMA5D2=y 28 19 CONFIG_SOC_SAMA5D3=y ··· 21 32 CONFIG_ARCH_BCM_CYGNUS=y 22 33 CONFIG_ARCH_BCM_HR2=y 23 34 CONFIG_ARCH_BCM_NSP=y 24 - CONFIG_ARCH_BCM_21664=y 25 - CONFIG_ARCH_BCM_281XX=y 26 35 CONFIG_ARCH_BCM_5301X=y 36 + CONFIG_ARCH_BCM_281XX=y 37 + CONFIG_ARCH_BCM_21664=y 27 38 CONFIG_ARCH_BCM2835=y 28 39 CONFIG_ARCH_BCM_63XX=y 29 40 CONFIG_ARCH_BRCMSTB=y ··· 32 43 CONFIG_MACH_BERLIN_BG2CD=y 33 44 CONFIG_MACH_BERLIN_BG2Q=y 34 45 CONFIG_ARCH_DIGICOLOR=y 46 + CONFIG_ARCH_EXYNOS=y 47 + CONFIG_EXYNOS5420_MCPM=y 35 48 CONFIG_ARCH_HIGHBANK=y 36 49 CONFIG_ARCH_HISI=y 37 50 CONFIG_ARCH_HI3xxx=y 38 - CONFIG_ARCH_HIX5HD2=y 39 51 CONFIG_ARCH_HIP01=y 40 52 CONFIG_ARCH_HIP04=y 41 - CONFIG_ARCH_KEYSTONE=y 42 - CONFIG_ARCH_MESON=y 53 + CONFIG_ARCH_HIX5HD2=y 43 54 CONFIG_ARCH_MXC=y 44 55 CONFIG_SOC_IMX50=y 45 56 CONFIG_SOC_IMX51=y ··· 49 60 CONFIG_SOC_IMX6SX=y 50 61 CONFIG_SOC_IMX6UL=y 51 62 CONFIG_SOC_IMX7D=y 52 - CONFIG_SOC_VF610=y 53 63 CONFIG_SOC_LS1021A=y 64 + CONFIG_SOC_VF610=y 65 + CONFIG_ARCH_KEYSTONE=y 66 + CONFIG_ARCH_MEDIATEK=y 67 + CONFIG_ARCH_MESON=y 68 + CONFIG_ARCH_MVEBU=y 69 + CONFIG_MACH_ARMADA_370=y 70 + CONFIG_MACH_ARMADA_375=y 71 + CONFIG_MACH_ARMADA_38X=y 72 + CONFIG_MACH_ARMADA_39X=y 73 + CONFIG_MACH_ARMADA_XP=y 74 + CONFIG_MACH_DOVE=y 54 75 CONFIG_ARCH_OMAP3=y 55 76 CONFIG_ARCH_OMAP4=y 56 77 CONFIG_SOC_OMAP5=y 57 78 CONFIG_SOC_AM33XX=y 58 79 CONFIG_SOC_AM43XX=y 59 80 CONFIG_SOC_DRA7XX=y 81 + CONFIG_ARCH_SIRF=y 60 82 CONFIG_ARCH_QCOM=y 61 - CONFIG_ARCH_MEDIATEK=y 62 83 CONFIG_ARCH_MSM8X60=y 63 84 CONFIG_ARCH_MSM8960=y 64 85 CONFIG_ARCH_MSM8974=y 65 86 CONFIG_ARCH_ROCKCHIP=y 66 - CONFIG_ARCH_SOCFPGA=y 67 - CONFIG_PLAT_SPEAR=y 68 - CONFIG_ARCH_SPEAR13XX=y 69 - CONFIG_MACH_SPEAR1310=y 70 - CONFIG_MACH_SPEAR1340=y 71 - CONFIG_ARCH_STI=y 72 - CONFIG_ARCH_STM32=y 73 - CONFIG_ARCH_EXYNOS=y 74 - CONFIG_EXYNOS5420_MCPM=y 75 87 CONFIG_ARCH_RENESAS=y 76 88 CONFIG_ARCH_EMEV2=y 77 89 CONFIG_ARCH_R7S72100=y ··· 89 99 CONFIG_ARCH_R8A7793=y 90 100 CONFIG_ARCH_R8A7794=y 91 101 CONFIG_ARCH_SH73A0=y 102 + CONFIG_ARCH_SOCFPGA=y 103 + CONFIG_PLAT_SPEAR=y 104 + CONFIG_ARCH_SPEAR13XX=y 105 + CONFIG_MACH_SPEAR1310=y 106 + CONFIG_MACH_SPEAR1340=y 107 + CONFIG_ARCH_STI=y 108 + CONFIG_ARCH_STM32=y 92 109 CONFIG_ARCH_SUNXI=y 93 - CONFIG_ARCH_SIRF=y 94 110 CONFIG_ARCH_TEGRA=y 95 - CONFIG_ARCH_TEGRA_2x_SOC=y 96 - CONFIG_ARCH_TEGRA_3x_SOC=y 97 - CONFIG_ARCH_TEGRA_114_SOC=y 98 - CONFIG_ARCH_TEGRA_124_SOC=y 99 111 CONFIG_ARCH_UNIPHIER=y 100 112 CONFIG_ARCH_U8500=y 101 - CONFIG_MACH_HREFV60=y 102 - CONFIG_MACH_SNOWBALL=y 103 113 CONFIG_ARCH_VEXPRESS=y 104 114 CONFIG_ARCH_VEXPRESS_TC2_PM=y 105 115 CONFIG_ARCH_WM8850=y 106 116 CONFIG_ARCH_ZYNQ=y 107 - CONFIG_TRUSTED_FOUNDATIONS=y 108 - CONFIG_PCI=y 109 - CONFIG_PCI_HOST_GENERIC=y 110 - CONFIG_PCI_DRA7XX=y 111 - CONFIG_PCI_DRA7XX_EP=y 112 - CONFIG_PCI_KEYSTONE=y 113 - CONFIG_PCI_MSI=y 117 + CONFIG_PCIEPORTBUS=y 114 118 CONFIG_PCI_MVEBU=y 115 119 CONFIG_PCI_TEGRA=y 116 120 CONFIG_PCI_RCAR_GEN2=y 117 121 CONFIG_PCIE_RCAR=y 118 - CONFIG_PCIEPORTBUS=y 122 + CONFIG_PCI_DRA7XX_EP=y 123 + CONFIG_PCI_KEYSTONE=y 119 124 CONFIG_PCI_ENDPOINT=y 120 125 CONFIG_PCI_ENDPOINT_CONFIGFS=y 121 126 CONFIG_PCI_EPF_TEST=m 122 127 CONFIG_SMP=y 123 128 CONFIG_NR_CPUS=16 124 - CONFIG_HIGHPTE=y 125 - CONFIG_CMA=y 126 129 CONFIG_SECCOMP=y 127 130 CONFIG_ARM_APPENDED_DTB=y 128 131 CONFIG_ARM_ATAG_DTB_COMPAT=y ··· 128 145 CONFIG_CPU_FREQ_GOV_USERSPACE=m 129 146 CONFIG_CPU_FREQ_GOV_CONSERVATIVE=m 130 147 CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y 148 + CONFIG_CPUFREQ_DT=y 131 149 CONFIG_ARM_IMX6Q_CPUFREQ=y 132 150 CONFIG_QORIQ_CPUFREQ=y 133 151 CONFIG_CPU_IDLE=y 134 152 CONFIG_ARM_CPUIDLE=y 135 - CONFIG_NEON=y 136 - CONFIG_KERNEL_MODE_NEON=y 137 153 CONFIG_ARM_ZYNQ_CPUIDLE=y 138 154 CONFIG_ARM_EXYNOS_CPUIDLE=y 155 + CONFIG_KERNEL_MODE_NEON=y 139 156 CONFIG_NET=y 140 157 CONFIG_PACKET=y 141 158 CONFIG_UNIX=y ··· 153 170 CONFIG_IPV6_TUNNEL=m 154 171 CONFIG_IPV6_MULTIPLE_TABLES=y 155 172 CONFIG_NET_DSA=m 156 - CONFIG_NET_SWITCHDEV=y 157 173 CONFIG_CAN=y 158 - CONFIG_CAN_RAW=y 159 - CONFIG_CAN_BCM=y 160 - CONFIG_CAN_DEV=y 161 174 CONFIG_CAN_AT91=m 162 175 CONFIG_CAN_FLEXCAN=m 163 - CONFIG_CAN_RCAR=m 164 - CONFIG_CAN_XILINXCAN=y 165 - CONFIG_CAN_MCP251X=y 166 - CONFIG_NET_DSA_BCM_SF2=m 167 - CONFIG_B53=m 168 - CONFIG_B53_SPI_DRIVER=m 169 - CONFIG_B53_MDIO_DRIVER=m 170 - CONFIG_B53_MMAP_DRIVER=m 171 - CONFIG_B53_SRAB_DRIVER=m 172 176 CONFIG_CAN_SUN4I=y 177 + CONFIG_CAN_XILINXCAN=y 178 + CONFIG_CAN_RCAR=m 179 + CONFIG_CAN_MCP251X=y 173 180 CONFIG_BT=m 174 181 CONFIG_BT_HCIUART=m 175 182 CONFIG_BT_HCIUART_BCM=y ··· 172 199 CONFIG_RFKILL_GPIO=y 173 200 CONFIG_DEVTMPFS=y 174 201 CONFIG_DEVTMPFS_MOUNT=y 175 - CONFIG_DMA_CMA=y 176 202 CONFIG_CMA_SIZE_MBYTES=64 177 203 CONFIG_OMAP_OCP2SCP=y 178 204 CONFIG_SIMPLE_PM_BUS=y 179 - CONFIG_SUNXI_RSB=y 180 205 CONFIG_MTD=y 181 206 CONFIG_MTD_CMDLINE_PARTS=y 182 207 CONFIG_MTD_BLOCK=y ··· 207 236 CONFIG_EEPROM_AT24=y 208 237 CONFIG_BLK_DEV_SD=y 209 238 CONFIG_BLK_DEV_SR=y 210 - CONFIG_SCSI_MULTI_LUN=y 211 239 CONFIG_ATA=y 212 240 CONFIG_SATA_AHCI=y 213 241 CONFIG_SATA_AHCI_PLATFORM=y ··· 221 251 CONFIG_SATA_RCAR=y 222 252 CONFIG_NETDEVICES=y 223 253 CONFIG_VIRTIO_NET=y 224 - CONFIG_HIX5HD2_GMAC=y 254 + CONFIG_B53_SPI_DRIVER=m 255 + CONFIG_B53_MDIO_DRIVER=m 256 + CONFIG_B53_MMAP_DRIVER=m 257 + CONFIG_B53_SRAB_DRIVER=m 258 + CONFIG_NET_DSA_BCM_SF2=m 225 259 CONFIG_SUN4I_EMAC=y 226 - CONFIG_MACB=y 227 260 CONFIG_BCMGENET=m 228 261 CONFIG_BGMAC_BCMA=y 229 262 CONFIG_SYSTEMPORT=m 263 + CONFIG_MACB=y 230 264 CONFIG_NET_CALXEDA_XGMAC=y 231 265 CONFIG_GIANFAR=y 266 + CONFIG_HIX5HD2_GMAC=y 267 + CONFIG_E1000E=y 232 268 CONFIG_IGB=y 233 269 CONFIG_MV643XX_ETH=y 234 270 CONFIG_MVNETA=y ··· 244 268 CONFIG_SH_ETH=y 245 269 CONFIG_SMSC911X=y 246 270 CONFIG_STMMAC_ETH=y 247 - CONFIG_STMMAC_PLATFORM=y 248 271 CONFIG_DWMAC_DWC_QOS_ETH=y 249 272 CONFIG_TI_CPSW=y 250 273 CONFIG_XILINX_EMACLITE=y 251 274 CONFIG_AT803X_PHY=y 252 - CONFIG_MARVELL_PHY=y 253 - CONFIG_SMSC_PHY=y 254 275 CONFIG_BROADCOM_PHY=y 255 276 CONFIG_ICPLUS_PHY=y 256 - CONFIG_REALTEK_PHY=y 277 + CONFIG_MARVELL_PHY=y 257 278 CONFIG_MICREL_PHY=y 258 - CONFIG_FIXED_PHY=y 279 + CONFIG_REALTEK_PHY=y 259 280 CONFIG_ROCKCHIP_PHY=y 281 + CONFIG_SMSC_PHY=y 260 282 CONFIG_USB_PEGASUS=y 261 283 CONFIG_USB_RTL8152=m 262 284 CONFIG_USB_LAN78XX=m ··· 262 288 CONFIG_USB_NET_SMSC75XX=y 263 289 CONFIG_USB_NET_SMSC95XX=y 264 290 CONFIG_BRCMFMAC=m 265 - CONFIG_RT2X00=m 266 - CONFIG_RT2800USB=m 267 291 CONFIG_MWIFIEX=m 268 292 CONFIG_MWIFIEX_SDIO=m 293 + CONFIG_RT2X00=m 294 + CONFIG_RT2800USB=m 269 295 CONFIG_INPUT_JOYDEV=y 270 296 CONFIG_INPUT_EVDEV=y 271 297 CONFIG_KEYBOARD_QT1070=m 272 298 CONFIG_KEYBOARD_GPIO=y 273 299 CONFIG_KEYBOARD_TEGRA=y 274 - CONFIG_KEYBOARD_SPEAR=y 275 - CONFIG_KEYBOARD_ST_KEYSCAN=y 276 - CONFIG_KEYBOARD_CROS_EC=m 277 300 CONFIG_KEYBOARD_SAMSUNG=m 301 + CONFIG_KEYBOARD_ST_KEYSCAN=y 302 + CONFIG_KEYBOARD_SPEAR=y 303 + CONFIG_KEYBOARD_CROS_EC=m 278 304 CONFIG_MOUSE_PS2_ELANTECH=y 279 305 CONFIG_MOUSE_CYAPA=m 280 306 CONFIG_MOUSE_ELAN_I2C=y 281 307 CONFIG_INPUT_TOUCHSCREEN=y 282 308 CONFIG_TOUCHSCREEN_ATMEL_MXT=m 283 309 CONFIG_TOUCHSCREEN_MMS114=m 310 + CONFIG_TOUCHSCREEN_WM97XX=m 284 311 CONFIG_TOUCHSCREEN_ST1232=m 285 312 CONFIG_TOUCHSCREEN_STMPE=y 286 313 CONFIG_TOUCHSCREEN_SUN4I=y 287 - CONFIG_TOUCHSCREEN_WM97XX=m 288 314 CONFIG_INPUT_MISC=y 289 315 CONFIG_INPUT_MAX77693_HAPTIC=m 290 316 CONFIG_INPUT_MAX8997_HAPTIC=m ··· 301 327 CONFIG_SERIAL_8250_EM=y 302 328 CONFIG_SERIAL_8250_MT6577=y 303 329 CONFIG_SERIAL_8250_UNIPHIER=y 330 + CONFIG_SERIAL_OF_PLATFORM=y 304 331 CONFIG_SERIAL_AMBA_PL011=y 305 332 CONFIG_SERIAL_AMBA_PL011_CONSOLE=y 306 333 CONFIG_SERIAL_ATMEL=y 307 334 CONFIG_SERIAL_ATMEL_CONSOLE=y 308 335 CONFIG_SERIAL_ATMEL_TTYAT=y 309 - CONFIG_SERIAL_BCM63XX=y 310 - CONFIG_SERIAL_BCM63XX_CONSOLE=y 311 336 CONFIG_SERIAL_MESON=y 312 337 CONFIG_SERIAL_MESON_CONSOLE=y 313 338 CONFIG_SERIAL_SAMSUNG=y ··· 318 345 CONFIG_SERIAL_IMX_CONSOLE=y 319 346 CONFIG_SERIAL_SH_SCI=y 320 347 CONFIG_SERIAL_SH_SCI_NR_UARTS=20 321 - CONFIG_SERIAL_SH_SCI_CONSOLE=y 322 - CONFIG_SERIAL_SH_SCI_DMA=y 323 348 CONFIG_SERIAL_MSM=y 324 349 CONFIG_SERIAL_MSM_CONSOLE=y 325 350 CONFIG_SERIAL_VT8500=y 326 351 CONFIG_SERIAL_VT8500_CONSOLE=y 327 - CONFIG_SERIAL_OF_PLATFORM=y 328 352 CONFIG_SERIAL_OMAP=y 329 353 CONFIG_SERIAL_OMAP_CONSOLE=y 354 + CONFIG_SERIAL_BCM63XX=y 355 + CONFIG_SERIAL_BCM63XX_CONSOLE=y 330 356 CONFIG_SERIAL_XILINX_PS_UART=y 331 357 CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y 332 358 CONFIG_SERIAL_FSL_LPUART=y ··· 337 365 CONFIG_SERIAL_STM32=y 338 366 CONFIG_SERIAL_STM32_CONSOLE=y 339 367 CONFIG_SERIAL_DEV_BUS=y 340 - CONFIG_HVC_DRIVER=y 341 368 CONFIG_VIRTIO_CONSOLE=y 369 + CONFIG_HW_RANDOM=y 370 + CONFIG_HW_RANDOM_ST=y 342 371 CONFIG_I2C_CHARDEV=y 343 - CONFIG_I2C_DAVINCI=y 344 - CONFIG_I2C_MESON=y 345 - CONFIG_I2C_MUX=y 346 372 CONFIG_I2C_ARB_GPIO_CHALLENGE=m 347 373 CONFIG_I2C_MUX_PCA954x=y 348 374 CONFIG_I2C_MUX_PINCTRL=y ··· 348 378 CONFIG_I2C_AT91=m 349 379 CONFIG_I2C_BCM2835=y 350 380 CONFIG_I2C_CADENCE=y 381 + CONFIG_I2C_DAVINCI=y 351 382 CONFIG_I2C_DESIGNWARE_PLATFORM=y 352 383 CONFIG_I2C_DIGICOLOR=m 353 384 CONFIG_I2C_EMEV2=m 354 385 CONFIG_I2C_GPIO=m 355 - CONFIG_I2C_EXYNOS5=y 356 386 CONFIG_I2C_IMX=y 387 + CONFIG_I2C_MESON=y 357 388 CONFIG_I2C_MV64XXX=y 358 389 CONFIG_I2C_RIIC=y 359 390 CONFIG_I2C_RK3X=y ··· 398 427 CONFIG_SPMI=y 399 428 CONFIG_PINCTRL_AS3722=y 400 429 CONFIG_PINCTRL_PALMAS=y 401 - CONFIG_PINCTRL_BCM2835=y 402 430 CONFIG_PINCTRL_APQ8064=y 403 431 CONFIG_PINCTRL_APQ8084=y 404 432 CONFIG_PINCTRL_IPQ8064=y ··· 407 437 CONFIG_PINCTRL_MSM8916=y 408 438 CONFIG_PINCTRL_QCOM_SPMI_PMIC=y 409 439 CONFIG_PINCTRL_QCOM_SSBI_PMIC=y 410 - CONFIG_GPIO_GENERIC_PLATFORM=y 411 440 CONFIG_GPIO_DAVINCI=y 412 441 CONFIG_GPIO_DWAPB=y 413 442 CONFIG_GPIO_EM=y 414 443 CONFIG_GPIO_RCAR=y 444 + CONFIG_GPIO_SYSCON=y 415 445 CONFIG_GPIO_UNIPHIER=y 416 446 CONFIG_GPIO_XILINX=y 417 447 CONFIG_GPIO_ZYNQ=y 418 448 CONFIG_GPIO_PCA953X=y 419 449 CONFIG_GPIO_PCA953X_IRQ=y 420 450 CONFIG_GPIO_PCF857X=y 421 - CONFIG_GPIO_TWL4030=y 422 451 CONFIG_GPIO_PALMAS=y 423 - CONFIG_GPIO_SYSCON=y 424 452 CONFIG_GPIO_TPS6586X=y 425 453 CONFIG_GPIO_TPS65910=y 454 + CONFIG_GPIO_TWL4030=y 455 + CONFIG_POWER_AVS=y 456 + CONFIG_ROCKCHIP_IODOMAIN=y 457 + CONFIG_POWER_RESET_AS3722=y 458 + CONFIG_POWER_RESET_GPIO=y 459 + CONFIG_POWER_RESET_GPIO_RESTART=y 460 + CONFIG_POWER_RESET_ST=y 461 + CONFIG_POWER_RESET_KEYSTONE=y 462 + CONFIG_POWER_RESET_RMOBILE=y 426 463 CONFIG_BATTERY_ACT8945A=y 427 464 CONFIG_BATTERY_CPCAP=m 428 465 CONFIG_BATTERY_SBS=y 466 + CONFIG_AXP20X_POWER=m 429 467 CONFIG_BATTERY_MAX17040=m 430 468 CONFIG_BATTERY_MAX17042=m 431 469 CONFIG_CHARGER_CPCAP=m ··· 442 464 CONFIG_CHARGER_MAX8997=m 443 465 CONFIG_CHARGER_MAX8998=m 444 466 CONFIG_CHARGER_TPS65090=y 445 - CONFIG_AXP20X_POWER=m 446 - CONFIG_POWER_RESET_AS3722=y 447 - CONFIG_POWER_RESET_GPIO=y 448 - CONFIG_POWER_RESET_GPIO_RESTART=y 449 - CONFIG_POWER_RESET_KEYSTONE=y 450 - CONFIG_POWER_RESET_RMOBILE=y 451 - CONFIG_POWER_RESET_ST=y 452 - CONFIG_POWER_AVS=y 453 - CONFIG_ROCKCHIP_IODOMAIN=y 454 467 CONFIG_SENSORS_IIO_HWMON=y 455 468 CONFIG_SENSORS_LM90=y 456 469 CONFIG_SENSORS_LM95245=y ··· 449 480 CONFIG_SENSORS_PWM_FAN=m 450 481 CONFIG_SENSORS_INA2XX=m 451 482 CONFIG_CPU_THERMAL=y 452 - CONFIG_BCM2835_THERMAL=m 453 - CONFIG_BRCMSTB_THERMAL=m 454 483 CONFIG_IMX_THERMAL=y 455 484 CONFIG_ROCKCHIP_THERMAL=y 456 485 CONFIG_RCAR_THERMAL=y 457 486 CONFIG_ARMADA_THERMAL=y 458 - CONFIG_DAVINCI_WATCHDOG=m 459 - CONFIG_EXYNOS_THERMAL=m 487 + CONFIG_BCM2835_THERMAL=m 488 + CONFIG_BRCMSTB_THERMAL=m 460 489 CONFIG_ST_THERMAL_MEMMAP=y 461 490 CONFIG_WATCHDOG=y 462 491 CONFIG_DA9063_WATCHDOG=m ··· 462 495 CONFIG_ARM_SP805_WATCHDOG=y 463 496 CONFIG_AT91SAM9X_WATCHDOG=y 464 497 CONFIG_SAMA5D4_WATCHDOG=y 498 + CONFIG_DW_WATCHDOG=y 499 + CONFIG_DAVINCI_WATCHDOG=m 465 500 CONFIG_ORION_WATCHDOG=y 466 501 CONFIG_RN5T618_WATCHDOG=y 467 - CONFIG_ST_LPC_WATCHDOG=y 468 502 CONFIG_SUNXI_WATCHDOG=y 469 503 CONFIG_IMX2_WDT=y 504 + CONFIG_ST_LPC_WATCHDOG=y 470 505 CONFIG_TEGRA_WATCHDOG=m 471 506 CONFIG_MESON_WATCHDOG=y 472 - CONFIG_DW_WATCHDOG=y 473 507 CONFIG_DIGICOLOR_WATCHDOG=y 474 508 CONFIG_RENESAS_WDT=m 475 - CONFIG_BCM2835_WDT=y 476 509 CONFIG_BCM47XX_WDT=y 477 - CONFIG_BCM7038_WDT=m 510 + CONFIG_BCM2835_WDT=y 478 511 CONFIG_BCM_KONA_WDT=y 512 + CONFIG_BCM7038_WDT=m 513 + CONFIG_BCMA_HOST_SOC=y 514 + CONFIG_BCMA_DRIVER_GMAC_CMN=y 515 + CONFIG_BCMA_DRIVER_GPIO=y 479 516 CONFIG_MFD_ACT8945A=y 480 517 CONFIG_MFD_AS3711=y 481 518 CONFIG_MFD_AS3722=y ··· 487 516 CONFIG_MFD_ATMEL_HLCDC=m 488 517 CONFIG_MFD_BCM590XX=y 489 518 CONFIG_MFD_AC100=y 490 - CONFIG_MFD_AXP20X=y 491 519 CONFIG_MFD_AXP20X_I2C=y 492 520 CONFIG_MFD_AXP20X_RSB=y 493 521 CONFIG_MFD_CROS_EC=m ··· 499 529 CONFIG_MFD_MAX8907=y 500 530 CONFIG_MFD_MAX8997=y 501 531 CONFIG_MFD_MAX8998=y 502 - CONFIG_MFD_RK808=y 503 532 CONFIG_MFD_CPCAP=y 504 533 CONFIG_MFD_PM8XXX=y 505 534 CONFIG_MFD_QCOM_RPM=y 506 535 CONFIG_MFD_SPMI_PMIC=y 536 + CONFIG_MFD_RK808=y 507 537 CONFIG_MFD_RN5T618=y 508 538 CONFIG_MFD_SEC_CORE=y 509 539 CONFIG_MFD_STMPE=y ··· 513 543 CONFIG_MFD_TPS65218=y 514 544 CONFIG_MFD_TPS6586X=y 515 545 CONFIG_MFD_TPS65910=y 516 - CONFIG_REGULATOR_ACT8945A=y 517 - CONFIG_REGULATOR_AB8500=y 518 546 CONFIG_REGULATOR_ACT8865=y 547 + CONFIG_REGULATOR_ACT8945A=y 519 548 CONFIG_REGULATOR_ANATOP=y 549 + CONFIG_REGULATOR_AB8500=y 520 550 CONFIG_REGULATOR_AS3711=y 521 551 CONFIG_REGULATOR_AS3722=y 522 552 CONFIG_REGULATOR_AXP20X=y ··· 524 554 CONFIG_REGULATOR_CPCAP=y 525 555 CONFIG_REGULATOR_DA9210=y 526 556 CONFIG_REGULATOR_FAN53555=y 527 - CONFIG_REGULATOR_RK808=y 528 557 CONFIG_REGULATOR_GPIO=y 529 - CONFIG_MFD_SYSCON=y 530 - CONFIG_POWER_RESET_SYSCON=y 531 558 CONFIG_REGULATOR_LP872X=y 532 559 CONFIG_REGULATOR_MAX14577=m 533 560 CONFIG_REGULATOR_MAX8907=y ··· 538 571 CONFIG_REGULATOR_PBIAS=y 539 572 CONFIG_REGULATOR_PWM=y 540 573 CONFIG_REGULATOR_QCOM_RPM=y 541 - CONFIG_REGULATOR_QCOM_SMD_RPM=y 574 + CONFIG_REGULATOR_QCOM_SMD_RPM=m 575 + CONFIG_REGULATOR_RK808=y 542 576 CONFIG_REGULATOR_RN5T618=y 543 577 CONFIG_REGULATOR_S2MPS11=y 544 578 CONFIG_REGULATOR_S5M8767=y ··· 560 592 CONFIG_MEDIA_CONTROLLER=y 561 593 CONFIG_VIDEO_V4L2_SUBDEV_API=y 562 594 CONFIG_MEDIA_USB_SUPPORT=y 563 - CONFIG_USB_VIDEO_CLASS=y 564 - CONFIG_USB_GSPCA=y 595 + CONFIG_USB_VIDEO_CLASS=m 565 596 CONFIG_V4L_PLATFORM_DRIVERS=y 566 597 CONFIG_SOC_CAMERA=m 567 598 CONFIG_SOC_CAMERA_PLATFORM=m 568 - CONFIG_VIDEO_RCAR_VIN=m 569 - CONFIG_VIDEO_ATMEL_ISI=m 570 599 CONFIG_VIDEO_SAMSUNG_EXYNOS4_IS=m 571 600 CONFIG_VIDEO_S5P_FIMC=m 572 601 CONFIG_VIDEO_S5P_MIPI_CSIS=m 573 602 CONFIG_VIDEO_EXYNOS_FIMC_LITE=m 574 603 CONFIG_VIDEO_EXYNOS4_FIMC_IS=m 604 + CONFIG_VIDEO_RCAR_VIN=m 605 + CONFIG_VIDEO_ATMEL_ISI=m 575 606 CONFIG_V4L_MEM2MEM_DRIVERS=y 576 607 CONFIG_VIDEO_SAMSUNG_S5P_JPEG=m 577 608 CONFIG_VIDEO_SAMSUNG_S5P_MFC=m ··· 581 614 CONFIG_VIDEO_RENESAS_JPU=m 582 615 CONFIG_VIDEO_RENESAS_VSP1=m 583 616 CONFIG_V4L_TEST_DRIVERS=y 617 + CONFIG_VIDEO_VIVID=m 584 618 CONFIG_CEC_PLATFORM_DRIVERS=y 585 619 CONFIG_VIDEO_SAMSUNG_S5P_CEC=m 586 620 # CONFIG_MEDIA_SUBDRV_AUTOSELECT is not set 587 621 CONFIG_VIDEO_ADV7180=m 588 622 CONFIG_VIDEO_ML86V7667=m 589 623 CONFIG_DRM=y 590 -
CONFIG_DRM_I2C_ADV7511=m 591 - CONFIG_DRM_I2C_ADV7511_AUDIO=y 592 624 # CONFIG_DRM_I2C_CH7006 is not set 593 625 # CONFIG_DRM_I2C_SIL164 is not set 594 - CONFIG_DRM_DUMB_VGA_DAC=m 595 - CONFIG_DRM_NXP_PTN3460=m 596 - CONFIG_DRM_PARADE_PS8622=m 597 626 CONFIG_DRM_NOUVEAU=m 598 627 CONFIG_DRM_EXYNOS=m 599 628 CONFIG_DRM_EXYNOS_FIMD=y ··· 608 645 CONFIG_DRM_SUN4I=m 609 646 CONFIG_DRM_FSL_DCU=m 610 647 CONFIG_DRM_TEGRA=y 648 + CONFIG_DRM_PANEL_SIMPLE=y 611 649 CONFIG_DRM_PANEL_SAMSUNG_LD9040=m 612 650 CONFIG_DRM_PANEL_SAMSUNG_S6E63J0X03=m 613 651 CONFIG_DRM_PANEL_SAMSUNG_S6E8AA0=m 614 - CONFIG_DRM_PANEL_SIMPLE=y 652 + CONFIG_DRM_DUMB_VGA_DAC=m 653 + CONFIG_DRM_NXP_PTN3460=m 654 + CONFIG_DRM_PARADE_PS8622=m 615 655 CONFIG_DRM_SII9234=m 656 + CONFIG_DRM_I2C_ADV7511=m 657 + CONFIG_DRM_I2C_ADV7511_AUDIO=y 616 658 CONFIG_DRM_STI=m 617 - CONFIG_DRM_VC4=y 659 + CONFIG_DRM_VC4=m 618 660 CONFIG_DRM_ETNAVIV=m 619 661 CONFIG_DRM_MXSFB=m 620 662 CONFIG_FB_ARMCLCD=y ··· 627 659 CONFIG_FB_WM8505=y 628 660 CONFIG_FB_SH_MOBILE_LCDC=y 629 661 CONFIG_FB_SIMPLE=y 630 - CONFIG_BACKLIGHT_LCD_SUPPORT=y 631 - CONFIG_BACKLIGHT_CLASS_DEVICE=y 632 662 CONFIG_LCD_PLATFORM=m 633 663 CONFIG_BACKLIGHT_PWM=y 634 664 CONFIG_BACKLIGHT_AS3711=y ··· 634 668 CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y 635 669 CONFIG_SOUND=m 636 670 CONFIG_SND=m 637 - CONFIG_SND_DYNAMIC_MINORS=y 638 671 CONFIG_SND_HDA_TEGRA=m 639 672 CONFIG_SND_HDA_INPUT_BEEP=y 640 673 CONFIG_SND_HDA_PATCH_LOADER=y ··· 657 692 CONFIG_SND_SOC_ODROID=m 658 693 CONFIG_SND_SOC_SH4_FSI=m 659 694 CONFIG_SND_SOC_RCAR=m 660 - CONFIG_SND_SIMPLE_SCU_CARD=m 695 + CONFIG_SND_SOC_STI=m 661 696 CONFIG_SND_SUN4I_CODEC=m 662 697 CONFIG_SND_SOC_TEGRA=m 663 698 CONFIG_SND_SOC_TEGRA20_I2S=m ··· 668 703 CONFIG_SND_SOC_TEGRA_WM9712=m 669 704 CONFIG_SND_SOC_TEGRA_TRIMSLICE=m 670 705 CONFIG_SND_SOC_TEGRA_ALC5632=m 671 - CONFIG_SND_SOC_CPCAP=m 672 706 CONFIG_SND_SOC_TEGRA_MAX98090=m 673 707 CONFIG_SND_SOC_AK4642=m 708 + CONFIG_SND_SOC_CPCAP=m 674 709 
CONFIG_SND_SOC_SGTL5000=m 675 710 CONFIG_SND_SOC_SPDIF=m 676 - CONFIG_SND_SOC_WM8978=m 677 - CONFIG_SND_SOC_STI=m 678 711 CONFIG_SND_SOC_STI_SAS=m 679 - CONFIG_SND_SIMPLE_CARD=m 712 + CONFIG_SND_SOC_WM8978=m 713 + CONFIG_SND_SIMPLE_SCU_CARD=m 680 714 CONFIG_USB=y 681 715 CONFIG_USB_OTG=y 682 716 CONFIG_USB_XHCI_HCD=y 683 717 CONFIG_USB_XHCI_MVEBU=y 684 - CONFIG_USB_XHCI_RCAR=m 685 718 CONFIG_USB_XHCI_TEGRA=m 686 719 CONFIG_USB_EHCI_HCD=y 687 - CONFIG_USB_EHCI_MSM=m 688 - CONFIG_USB_EHCI_EXYNOS=y 689 - CONFIG_USB_EHCI_TEGRA=y 690 720 CONFIG_USB_EHCI_HCD_STI=y 691 - CONFIG_USB_EHCI_HCD_PLATFORM=y 692 - CONFIG_USB_ISP1760=y 721 + CONFIG_USB_EHCI_TEGRA=y 722 + CONFIG_USB_EHCI_EXYNOS=y 693 723 CONFIG_USB_OHCI_HCD=y 694 724 CONFIG_USB_OHCI_HCD_STI=y 695 - CONFIG_USB_OHCI_HCD_PLATFORM=y 696 725 CONFIG_USB_OHCI_EXYNOS=m 697 726 CONFIG_USB_R8A66597_HCD=m 698 727 CONFIG_USB_RENESAS_USBHS=m ··· 705 746 CONFIG_USB_TUSB_OMAP_DMA=y 706 747 CONFIG_USB_DWC3=y 707 748 CONFIG_USB_DWC2=y 708 - CONFIG_USB_HSIC_USB3503=y 709 749 CONFIG_USB_CHIPIDEA=y 710 750 CONFIG_USB_CHIPIDEA_UDC=y 711 751 CONFIG_USB_CHIPIDEA_HOST=y 752 + CONFIG_USB_ISP1760=y 753 + CONFIG_USB_HSIC_USB3503=y 712 754 CONFIG_AB8500_USB=y 713 - CONFIG_KEYSTONE_USB_PHY=y 755 + CONFIG_KEYSTONE_USB_PHY=m 714 756 CONFIG_NOP_USB_XCEIV=m 715 757 CONFIG_AM335X_PHY_USB=m 716 758 CONFIG_TWL6030_USB=m 717 759 CONFIG_USB_GPIO_VBUS=y 718 760 CONFIG_USB_ISP1301=y 719 - CONFIG_USB_MSM_OTG=m 720 761 CONFIG_USB_MXS_PHY=y 721 762 CONFIG_USB_GADGET=y 722 763 CONFIG_USB_FSL_USB2=y ··· 752 793 CONFIG_MMC_SDHCI_ESDHC_IMX=y 753 794 CONFIG_MMC_SDHCI_DOVE=y 754 795 CONFIG_MMC_SDHCI_TEGRA=y 796 + CONFIG_MMC_SDHCI_S3C=y 755 797 CONFIG_MMC_SDHCI_PXAV3=y 756 798 CONFIG_MMC_SDHCI_SPEAR=y 757 - CONFIG_MMC_SDHCI_S3C=y 758 799 CONFIG_MMC_SDHCI_S3C_DMA=y 759 800 CONFIG_MMC_SDHCI_BCM_KONA=y 801 + CONFIG_MMC_MESON_MX_SDIO=y 760 802 CONFIG_MMC_SDHCI_ST=y 761 803 CONFIG_MMC_OMAP=y 762 804 CONFIG_MMC_OMAP_HS=y 763 805 CONFIG_MMC_ATMELMCI=y 764 806 
CONFIG_MMC_SDHCI_MSM=y 765 - CONFIG_MMC_MESON_MX_SDIO=y 766 807 CONFIG_MMC_MVSDIO=y 767 808 CONFIG_MMC_SDHI=y 768 809 CONFIG_MMC_DW=y 769 - CONFIG_MMC_DW_PLTFM=y 770 810 CONFIG_MMC_DW_EXYNOS=y 771 811 CONFIG_MMC_DW_ROCKCHIP=y 772 812 CONFIG_MMC_SH_MMCIF=y ··· 805 847 CONFIG_RTC_DRV_RK808=m 806 848 CONFIG_RTC_DRV_RS5C372=m 807 849 CONFIG_RTC_DRV_BQ32K=m 808 - CONFIG_RTC_DRV_PALMAS=y 809 - CONFIG_RTC_DRV_ST_LPC=y 810 850 CONFIG_RTC_DRV_TWL4030=y 851 + CONFIG_RTC_DRV_PALMAS=y 811 852 CONFIG_RTC_DRV_TPS6586X=y 812 853 CONFIG_RTC_DRV_TPS65910=y 813 854 CONFIG_RTC_DRV_S35390A=m 814 855 CONFIG_RTC_DRV_RX8581=m 815 856 CONFIG_RTC_DRV_EM3027=y 857 + CONFIG_RTC_DRV_S5M=m 816 858 CONFIG_RTC_DRV_DA9063=m 817 859 CONFIG_RTC_DRV_EFI=m 818 860 CONFIG_RTC_DRV_DIGICOLOR=m 819 - CONFIG_RTC_DRV_S5M=m 820 861 CONFIG_RTC_DRV_S3C=m 821 862 CONFIG_RTC_DRV_PL031=y 822 863 CONFIG_RTC_DRV_AT91RM9200=m 823 864 CONFIG_RTC_DRV_AT91SAM9=m 824 865 CONFIG_RTC_DRV_VT8500=y 825 - CONFIG_RTC_DRV_SUN6I=y 826 866 CONFIG_RTC_DRV_SUNXI=y 827 867 CONFIG_RTC_DRV_MV=y 828 868 CONFIG_RTC_DRV_TEGRA=y 869 + CONFIG_RTC_DRV_ST_LPC=y 829 870 CONFIG_RTC_DRV_CPCAP=m 830 871 CONFIG_DMADEVICES=y 831 - CONFIG_DW_DMAC=y 832 872 CONFIG_AT_HDMAC=y 833 873 CONFIG_AT_XDMAC=y 874 + CONFIG_DMA_BCM2835=y 875 + CONFIG_DMA_SUN6I=y 834 876 CONFIG_FSL_EDMA=y 877 + CONFIG_IMX_DMA=y 878 + CONFIG_IMX_SDMA=y 835 879 CONFIG_MV_XOR=y 880 + CONFIG_MXS_DMA=y 881 + CONFIG_PL330_DMA=y 882 + CONFIG_SIRF_DMA=y 883 + CONFIG_STE_DMA40=y 884 + CONFIG_ST_FDMA=m 836 885 CONFIG_TEGRA20_APB_DMA=y 886 + CONFIG_XILINX_DMA=y 887 + CONFIG_QCOM_BAM_DMA=y 888 + CONFIG_DW_DMAC=y 837 889 CONFIG_SH_DMAE=y 838 890 CONFIG_RCAR_DMAC=y 839 891 CONFIG_RENESAS_USB_DMAC=m 840 - CONFIG_STE_DMA40=y 841 - CONFIG_SIRF_DMA=y 842 - CONFIG_TI_EDMA=y 843 - CONFIG_PL330_DMA=y 844 - CONFIG_IMX_SDMA=y 845 - CONFIG_IMX_DMA=y 846 - CONFIG_MXS_DMA=y 847 - CONFIG_DMA_BCM2835=y 848 - CONFIG_DMA_OMAP=y 849 - CONFIG_QCOM_BAM_DMA=y 850 - CONFIG_XILINX_DMA=y 851 - CONFIG_DMA_SUN6I=y 
852 - CONFIG_ST_FDMA=m 892 + CONFIG_VIRTIO_PCI=y 893 + CONFIG_VIRTIO_MMIO=y 853 894 CONFIG_STAGING=y 854 - CONFIG_SENSORS_ISL29018=y 855 - CONFIG_SENSORS_ISL29028=y 856 895 CONFIG_MFD_NVEC=y 857 896 CONFIG_KEYBOARD_NVEC=y 858 897 CONFIG_SERIO_NVEC_PS2=y 859 898 CONFIG_NVEC_POWER=y 860 899 CONFIG_NVEC_PAZ00=y 861 - CONFIG_BCMA=y 862 - CONFIG_BCMA_HOST_SOC=y 863 - CONFIG_BCMA_DRIVER_GMAC_CMN=y 864 - CONFIG_BCMA_DRIVER_GPIO=y 865 - CONFIG_QCOM_GSBI=y 866 - CONFIG_QCOM_PM=y 867 - CONFIG_QCOM_SMEM=y 868 - CONFIG_QCOM_SMD_RPM=y 869 - CONFIG_QCOM_SMP2P=y 870 - CONFIG_QCOM_SMSM=y 871 - CONFIG_QCOM_WCNSS_CTRL=m 872 - CONFIG_ROCKCHIP_PM_DOMAINS=y 873 - CONFIG_COMMON_CLK_QCOM=y 874 - CONFIG_QCOM_CLK_RPM=y 875 - CONFIG_CHROME_PLATFORMS=y 876 900 CONFIG_STAGING_BOARD=y 877 - CONFIG_CROS_EC_CHARDEV=m 878 901 CONFIG_COMMON_CLK_MAX77686=y 879 902 CONFIG_COMMON_CLK_RK808=m 880 903 CONFIG_COMMON_CLK_S2MPS11=m 904 + CONFIG_COMMON_CLK_QCOM=y 905 + CONFIG_QCOM_CLK_RPM=y 881 906 CONFIG_APQ_MMCC_8084=y 882 907 CONFIG_MSM_GCC_8660=y 883 908 CONFIG_MSM_MMCC_8960=y 884 909 CONFIG_MSM_MMCC_8974=y 885 - CONFIG_HWSPINLOCK_QCOM=y 910 + CONFIG_BCM2835_MBOX=y 886 911 CONFIG_ROCKCHIP_IOMMU=y 887 912 CONFIG_TEGRA_IOMMU_GART=y 888 913 CONFIG_TEGRA_IOMMU_SMMU=y 889 914 CONFIG_REMOTEPROC=m 890 915 CONFIG_ST_REMOTEPROC=m 891 916 CONFIG_RPMSG_VIRTIO=m 917 + CONFIG_RASPBERRYPI_POWER=y 918 + CONFIG_QCOM_GSBI=y 919 + CONFIG_QCOM_PM=y 920 + CONFIG_QCOM_SMD_RPM=m 921 + CONFIG_QCOM_WCNSS_CTRL=m 922 + CONFIG_ROCKCHIP_PM_DOMAINS=y 923 + CONFIG_ARCH_TEGRA_2x_SOC=y 924 + CONFIG_ARCH_TEGRA_3x_SOC=y 925 + CONFIG_ARCH_TEGRA_114_SOC=y 926 + CONFIG_ARCH_TEGRA_124_SOC=y 892 927 CONFIG_PM_DEVFREQ=y 893 928 CONFIG_ARM_TEGRA_DEVFREQ=m 894 - CONFIG_MEMORY=y 895 - CONFIG_EXTCON=y 896 929 CONFIG_TI_AEMIF=y 897 930 CONFIG_IIO=y 898 931 CONFIG_IIO_SW_TRIGGER=y ··· 896 947 CONFIG_XILINX_XADC=y 897 948 CONFIG_MPU3050_I2C=y 898 949 CONFIG_CM36651=m 950 + CONFIG_SENSORS_ISL29018=y 951 + CONFIG_SENSORS_ISL29028=y 899 952 
CONFIG_AK8975=y 900 - CONFIG_RASPBERRYPI_POWER=y 901 953 CONFIG_IIO_HRTIMER_TRIGGER=y 902 954 CONFIG_PWM=y 903 955 CONFIG_PWM_ATMEL=m 904 956 CONFIG_PWM_ATMEL_HLCDC_PWM=m 905 957 CONFIG_PWM_ATMEL_TCB=m 958 + CONFIG_PWM_BCM2835=y 959 + CONFIG_PWM_BRCMSTB=m 906 960 CONFIG_PWM_FSL_FTM=m 907 961 CONFIG_PWM_MESON=m 908 962 CONFIG_PWM_RCAR=m 909 963 CONFIG_PWM_RENESAS_TPU=y 910 964 CONFIG_PWM_ROCKCHIP=m 911 965 CONFIG_PWM_SAMSUNG=m 966 + CONFIG_PWM_STI=y 912 967 CONFIG_PWM_SUN4I=y 913 968 CONFIG_PWM_TEGRA=y 914 969 CONFIG_PWM_VT8500=y 970 + CONFIG_KEYSTONE_IRQ=y 971 + CONFIG_PHY_SUN4I_USB=y 972 + CONFIG_PHY_SUN9I_USB=y 915 973 CONFIG_PHY_HIX5HD2_SATA=y 916 - CONFIG_E1000E=y 917 - CONFIG_PWM_STI=y 918 - CONFIG_PWM_BCM2835=y 919 - CONFIG_PWM_BRCMSTB=m 974 + CONFIG_PHY_BERLIN_SATA=y 975 + CONFIG_PHY_BERLIN_USB=y 976 + CONFIG_PHY_CPCAP_USB=m 977 + CONFIG_PHY_QCOM_APQ8064_SATA=m 978 + CONFIG_PHY_RCAR_GEN2=m 979 + CONFIG_PHY_ROCKCHIP_DP=m 980 + CONFIG_PHY_ROCKCHIP_USB=y 981 + CONFIG_PHY_SAMSUNG_USB2=m 982 + CONFIG_PHY_MIPHY28LP=y 983 + CONFIG_PHY_STIH407_USB=y 984 + CONFIG_PHY_STM32_USBPHYC=y 985 + CONFIG_PHY_TEGRA_XUSB=y 920 986 CONFIG_PHY_DM816X_USB=m 921 987 CONFIG_OMAP_USB2=y 922 988 CONFIG_TI_PIPE3=y 923 989 CONFIG_TWL4030_USB=m 924 - CONFIG_PHY_BERLIN_USB=y 925 - CONFIG_PHY_CPCAP_USB=m 926 - CONFIG_PHY_BERLIN_SATA=y 927 - CONFIG_PHY_ROCKCHIP_DP=m 928 - CONFIG_PHY_ROCKCHIP_USB=y 929 - CONFIG_PHY_QCOM_APQ8064_SATA=m 930 - CONFIG_PHY_MIPHY28LP=y 931 - CONFIG_PHY_RCAR_GEN2=m 932 - CONFIG_PHY_STIH407_USB=y 933 - CONFIG_PHY_STM32_USBPHYC=y 934 - CONFIG_PHY_SUN4I_USB=y 935 - CONFIG_PHY_SUN9I_USB=y 936 - CONFIG_PHY_SAMSUNG_USB2=m 937 - CONFIG_PHY_TEGRA_XUSB=y 938 - CONFIG_PHY_BRCM_SATA=y 939 - CONFIG_NVMEM=y 940 990 CONFIG_NVMEM_IMX_OCOTP=y 941 991 CONFIG_NVMEM_SUNXI_SID=y 942 992 CONFIG_NVMEM_VF610_OCOTP=y 943 - CONFIG_BCM2835_MBOX=y 944 993 CONFIG_RASPBERRYPI_FIRMWARE=y 945 - CONFIG_EFI_VARS=m 946 - CONFIG_EFI_CAPSULE_LOADER=m 947 994 CONFIG_BCM47XX_NVRAM=y 948 995 
CONFIG_BCM47XX_SPROM=y 996 + CONFIG_EFI_VARS=m 997 + CONFIG_EFI_CAPSULE_LOADER=m 949 998 CONFIG_EXT4_FS=y 950 999 CONFIG_AUTOFS4_FS=y 951 1000 CONFIG_MSDOS_FS=y ··· 951 1004 CONFIG_NTFS_FS=y 952 1005 CONFIG_TMPFS_POSIX_ACL=y 953 1006 CONFIG_UBIFS_FS=y 954 - CONFIG_TMPFS=y 955 1007 CONFIG_SQUASHFS=y 956 1008 CONFIG_SQUASHFS_LZO=y 957 1009 CONFIG_SQUASHFS_XZ=y ··· 966 1020 CONFIG_NLS_ISO8859_1=y 967 1021 CONFIG_NLS_UTF8=y 968 1022 CONFIG_PRINTK_TIME=y 969 - CONFIG_DEBUG_FS=y 970 1023 CONFIG_MAGIC_SYSRQ=y 971 - CONFIG_LOCKUP_DETECTOR=y 972 - CONFIG_CPUFREQ_DT=y 973 - CONFIG_KEYSTONE_IRQ=y 974 - CONFIG_HW_RANDOM=y 975 - CONFIG_HW_RANDOM_ST=y 976 1024 CONFIG_CRYPTO_USER=m 977 1025 CONFIG_CRYPTO_USER_API_HASH=m 978 1026 CONFIG_CRYPTO_USER_API_SKCIPHER=m ··· 975 1035 CONFIG_CRYPTO_DEV_MARVELL_CESA=m 976 1036 CONFIG_CRYPTO_DEV_EXYNOS_RNG=m 977 1037 CONFIG_CRYPTO_DEV_S5P=m 1038 + CONFIG_CRYPTO_DEV_ATMEL_AES=m 1039 + CONFIG_CRYPTO_DEV_ATMEL_TDES=m 1040 + CONFIG_CRYPTO_DEV_ATMEL_SHA=m 978 1041 CONFIG_CRYPTO_DEV_SUN4I_SS=m 979 1042 CONFIG_CRYPTO_DEV_ROCKCHIP=m 980 1043 CONFIG_ARM_CRYPTO=y 981 - CONFIG_CRYPTO_SHA1_ARM=m 982 1044 CONFIG_CRYPTO_SHA1_ARM_NEON=m 983 1045 CONFIG_CRYPTO_SHA1_ARM_CE=m 984 1046 CONFIG_CRYPTO_SHA2_ARM_CE=m 985 - CONFIG_CRYPTO_SHA256_ARM=m 986 1047 CONFIG_CRYPTO_SHA512_ARM=m 987 1048 CONFIG_CRYPTO_AES_ARM=m 988 1049 CONFIG_CRYPTO_AES_ARM_BS=m 989 1050 CONFIG_CRYPTO_AES_ARM_CE=m 990 - CONFIG_CRYPTO_CHACHA20_NEON=m 991 - CONFIG_CRYPTO_CRC32_ARM_CE=m 992 - CONFIG_CRYPTO_CRCT10DIF_ARM_CE=m 993 1051 CONFIG_CRYPTO_GHASH_ARM_CE=m 994 - CONFIG_CRYPTO_DEV_ATMEL_AES=m 995 - CONFIG_CRYPTO_DEV_ATMEL_TDES=m 996 - CONFIG_CRYPTO_DEV_ATMEL_SHA=m 997 - CONFIG_VIDEO_VIVID=m 998 - CONFIG_VIRTIO=y 999 - CONFIG_VIRTIO_PCI=y 1000 - CONFIG_VIRTIO_PCI_LEGACY=y 1001 - CONFIG_VIRTIO_MMIO=y 1052 + CONFIG_CRYPTO_CRC32_ARM_CE=m 1053 + CONFIG_CRYPTO_CHACHA20_NEON=m
+1
arch/arm/mach-bcm/Kconfig
···
20 20 	select GPIOLIB
21 21 	select ARM_AMBA
22 22 	select PINCTRL
23 + 	select PCI_DOMAINS if PCI
23 24 	help
24 25 	  This enables support for systems based on Broadcom IPROC architected SoCs.
25 26 	  The IPROC complex contains one or more ARM CPUs along with common
+1 -1
arch/arm/mach-davinci/board-da850-evm.c
···
774 774 		GPIO_LOOKUP("davinci_gpio.0", DA850_MMCSD_CD_PIN, "cd",
775 775 			    GPIO_ACTIVE_LOW),
776 776 		GPIO_LOOKUP("davinci_gpio.0", DA850_MMCSD_WP_PIN, "wp",
777 - 			    GPIO_ACTIVE_LOW),
777 + 			    GPIO_ACTIVE_HIGH),
778 778 	},
779 779 };
780 780 
+1
arch/arm/mach-socfpga/Kconfig
···
10 10 	select HAVE_ARM_SCU
11 11 	select HAVE_ARM_TWD if SMP
12 12 	select MFD_SYSCON
13 + 	select PCI_DOMAINS if PCI
13 14 
14 15 if ARCH_SOCFPGA
15 16 config SOCFPGA_SUSPEND
+1 -1
arch/arm/net/bpf_jit_32.c
···
1844 1844 	/* there are 2 passes here */
1845 1845 	bpf_jit_dump(prog->len, image_size, 2, ctx.target);
1846 1846 
1847 - 	set_memory_ro((unsigned long)header, header->pages);
1847 + 	bpf_jit_binary_lock_ro(header);
1848 1848 	prog->bpf_func = (void *)ctx.target;
1849 1849 	prog->jited = 1;
1850 1850 	prog->jited_len = image_size;
+2 -4
arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
···
309 309 	interrupts = <0 99 4>;
310 310 	resets = <&rst SPIM0_RESET>;
311 311 	reg-io-width = <4>;
312 - 	num-chipselect = <4>;
313 - 	bus-num = <0>;
312 + 	num-cs = <4>;
314 313 	status = "disabled";
315 314 };
316 315 
···
321 322 	interrupts = <0 100 4>;
322 323 	resets = <&rst SPIM1_RESET>;
323 324 	reg-io-width = <4>;
324 - 	num-chipselect = <4>;
325 - 	bus-num = <0>;
324 + 	num-cs = <4>;
325 326 	status = "disabled";
326 327 };
327 328 
+14 -1
arch/arm64/boot/dts/amlogic/meson-axg-s400.dts
···
66 66 
67 67 &ethmac {
68 68 	status = "okay";
69 - 	phy-mode = "rgmii";
70 69 	pinctrl-0 = <&eth_rgmii_y_pins>;
71 70 	pinctrl-names = "default";
71 + 	phy-handle = <&eth_phy0>;
72 + 	phy-mode = "rgmii";
73 + 
74 + 	mdio {
75 + 		compatible = "snps,dwmac-mdio";
76 + 		#address-cells = <1>;
77 + 		#size-cells = <0>;
78 + 
79 + 		eth_phy0: ethernet-phy@0 {
80 + 			/* Realtek RTL8211F (0x001cc916) */
81 + 			reg = <0>;
82 + 			eee-broken-1000t;
83 + 		};
84 + 	};
72 85 };
73 86 
74 87 &uart_A {
+2 -2
arch/arm64/boot/dts/amlogic/meson-axg.dtsi
···
132 132 
133 133 	sd_emmc_b: sd@5000 {
134 134 		compatible = "amlogic,meson-axg-mmc";
135 - 		reg = <0x0 0x5000 0x0 0x2000>;
135 + 		reg = <0x0 0x5000 0x0 0x800>;
136 136 		interrupts = <GIC_SPI 217 IRQ_TYPE_EDGE_RISING>;
137 137 		status = "disabled";
138 138 		clocks = <&clkc CLKID_SD_EMMC_B>,
···
144 144 
145 145 	sd_emmc_c: mmc@7000 {
146 146 		compatible = "amlogic,meson-axg-mmc";
147 - 		reg = <0x0 0x7000 0x0 0x2000>;
147 + 		reg = <0x0 0x7000 0x0 0x800>;
148 148 		interrupts = <GIC_SPI 218 IRQ_TYPE_EDGE_RISING>;
149 149 		status = "disabled";
150 150 		clocks = <&clkc CLKID_SD_EMMC_C>,
+9 -3
arch/arm64/boot/dts/amlogic/meson-gx.dtsi
···
35 35 		no-map;
36 36 	};
37 37 
38 + 	/* Alternate 3 MiB reserved for ARM Trusted Firmware (BL31) */
39 + 	secmon_reserved_alt: secmon@5000000 {
40 + 		reg = <0x0 0x05000000 0x0 0x300000>;
41 + 		no-map;
42 + 	};
43 + 
38 44 	linux,cma {
39 45 		compatible = "shared-dma-pool";
40 46 		reusable;
···
463 457 
464 458 	sd_emmc_a: mmc@70000 {
465 459 		compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
466 - 		reg = <0x0 0x70000 0x0 0x2000>;
460 + 		reg = <0x0 0x70000 0x0 0x800>;
467 461 		interrupts = <GIC_SPI 216 IRQ_TYPE_EDGE_RISING>;
468 462 		status = "disabled";
469 463 	};
470 464 
471 465 	sd_emmc_b: mmc@72000 {
472 466 		compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
473 - 		reg = <0x0 0x72000 0x0 0x2000>;
467 + 		reg = <0x0 0x72000 0x0 0x800>;
474 468 		interrupts = <GIC_SPI 217 IRQ_TYPE_EDGE_RISING>;
475 469 		status = "disabled";
476 470 	};
477 471 
478 472 	sd_emmc_c: mmc@74000 {
479 473 		compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
480 - 		reg = <0x0 0x74000 0x0 0x2000>;
474 + 		reg = <0x0 0x74000 0x0 0x800>;
481 475 		interrupts = <GIC_SPI 218 IRQ_TYPE_EDGE_RISING>;
482 476 		status = "disabled";
483 477 	};
+1 -1
arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi
···
6 6 
7 7 &apb {
8 8 	mali: gpu@c0000 {
9 - 		compatible = "amlogic,meson-gxbb-mali", "arm,mali-450";
9 + 		compatible = "amlogic,meson-gxl-mali", "arm,mali-450";
10 10 		reg = <0x0 0xc0000 0x0 0x40000>;
11 11 		interrupts = <GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>,
12 12 			     <GIC_SPI 161 IRQ_TYPE_LEVEL_HIGH>,
-3
arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
···
234 234 
235 235 	bus-width = <4>;
236 236 	cap-sd-highspeed;
237 - 	sd-uhs-sdr12;
238 - 	sd-uhs-sdr25;
239 - 	sd-uhs-sdr50;
240 237 	max-frequency = <100000000>;
241 238 	disable-wp;
242 239 
+7
arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi
···
189 189 &usb0 {
190 190 	status = "okay";
191 191 };
192 + 
193 + &usb2_phy0 {
194 + 	/*
195 + 	 * HDMI_5V is also used as supply for the USB VBUS.
196 + 	 */
197 + 	phy-supply = <&hdmi_5v>;
198 + };
-8
arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
···
13 13 / {
14 14 	compatible = "amlogic,meson-gxl";
15 15 
16 - 	reserved-memory {
17 - 		/* Alternate 3 MiB reserved for ARM Trusted Firmware (BL31) */
18 - 		secmon_reserved_alt: secmon@5000000 {
19 - 			reg = <0x0 0x05000000 0x0 0x300000>;
20 - 			no-map;
21 - 		};
22 - 	};
23 - 
24 16 	soc {
25 17 		usb0: usb@c9000000 {
26 18 			status = "disabled";
+4 -4
arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
···
118 118 
119 119 		#interrupt-cells = <1>;
120 120 		interrupt-map-mask = <0 0 0 0>;
121 - 		interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 281 IRQ_TYPE_NONE>;
121 + 		interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 281 IRQ_TYPE_LEVEL_HIGH>;
122 122 
123 123 		linux,pci-domain = <0>;
124 124 
···
149 149 
150 150 		#interrupt-cells = <1>;
151 151 		interrupt-map-mask = <0 0 0 0>;
152 - 		interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 305 IRQ_TYPE_NONE>;
152 + 		interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 305 IRQ_TYPE_LEVEL_HIGH>;
153 153 
154 154 		linux,pci-domain = <4>;
155 155 
···
566 566 			reg = <0x66080000 0x100>;
567 567 			#address-cells = <1>;
568 568 			#size-cells = <0>;
569 - 			interrupts = <GIC_SPI 394 IRQ_TYPE_NONE>;
569 + 			interrupts = <GIC_SPI 394 IRQ_TYPE_LEVEL_HIGH>;
570 570 			clock-frequency = <100000>;
571 571 			status = "disabled";
572 572 		};
···
594 594 			reg = <0x660b0000 0x100>;
595 595 			#address-cells = <1>;
596 596 			#size-cells = <0>;
597 - 			interrupts = <GIC_SPI 395 IRQ_TYPE_NONE>;
597 + 			interrupts = <GIC_SPI 395 IRQ_TYPE_LEVEL_HIGH>;
598 598 			clock-frequency = <100000>;
599 599 			status = "disabled";
600 600 		};
+4
arch/arm64/boot/dts/broadcom/stingray/bcm958742k.dts
···
43 43 	enet-phy-lane-swap;
44 44 };
45 45 
46 + &sdio0 {
47 + 	mmc-ddr-1_8v;
48 + };
49 + 
46 50 &uart2 {
47 51 	status = "okay";
48 52 };
+4
arch/arm64/boot/dts/broadcom/stingray/bcm958742t.dts
···
42 42 &gphy0 {
43 43 	enet-phy-lane-swap;
44 44 };
45 + 
46 + &sdio0 {
47 + 	mmc-ddr-1_8v;
48 + };
+2 -2
arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi
···
409 409 			reg = <0x000b0000 0x100>;
410 410 			#address-cells = <1>;
411 411 			#size-cells = <0>;
412 - 			interrupts = <GIC_SPI 177 IRQ_TYPE_NONE>;
412 + 			interrupts = <GIC_SPI 177 IRQ_TYPE_LEVEL_HIGH>;
413 413 			clock-frequency = <100000>;
414 414 			status = "disabled";
415 415 		};
···
453 453 			reg = <0x000e0000 0x100>;
454 454 			#address-cells = <1>;
455 455 			#size-cells = <0>;
456 - 			interrupts = <GIC_SPI 178 IRQ_TYPE_NONE>;
456 + 			interrupts = <GIC_SPI 178 IRQ_TYPE_LEVEL_HIGH>;
457 457 			clock-frequency = <100000>;
458 458 			status = "disabled";
459 459 		};
+2
arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts
···
585 585 	vmmc-supply = <&wlan_en>;
586 586 	ti,non-removable;
587 587 	non-removable;
588 + 	cap-power-off-card;
589 + 	keep-power-in-suspend;
588 590 	#address-cells = <0x1>;
589 591 	#size-cells = <0x0>;
590 592 	status = "ok";
+2
arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
···
322 322 dwmmc_2: dwmmc2@f723f000 {
323 323 	bus-width = <0x4>;
324 324 	non-removable;
325 + 	cap-power-off-card;
326 + 	keep-power-in-suspend;
325 327 	vmmc-supply = <&reg_vdd_3v3>;
326 328 	mmc-pwrseq = <&wl1835_pwrseq>;
327 329 
+1 -1
arch/arm64/boot/dts/marvell/armada-cp110.dtsi
···
149 149 
150 150 	CP110_LABEL(icu): interrupt-controller@1e0000 {
151 151 		compatible = "marvell,cp110-icu";
152 - 		reg = <0x1e0000 0x10>;
152 + 		reg = <0x1e0000 0x440>;
153 153 		#interrupt-cells = <3>;
154 154 		interrupt-controller;
155 155 		msi-parent = <&gicp>;
+1 -1
arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
···
75 75 
76 76 	serial@75b1000 {
77 77 		label = "LS-UART0";
78 - 		status = "okay";
78 + 		status = "disabled";
79 79 		pinctrl-names = "default", "sleep";
80 80 		pinctrl-0 = <&blsp2_uart2_4pins_default>;
81 81 		pinctrl-1 = <&blsp2_uart2_4pins_sleep>;
+2 -2
arch/arm64/boot/dts/qcom/msm8916.dtsi
···
1191 1191 
1192 1192 	port@0 {
1193 1193 		reg = <0>;
1194 - 		etf_out: endpoint {
1194 + 		etf_in: endpoint {
1195 1195 			slave-mode;
1196 1196 			remote-endpoint = <&funnel0_out>;
1197 1197 		};
1198 1198 	};
1199 1199 	port@1 {
1200 1200 		reg = <0>;
1201 - 		etf_in: endpoint {
1201 + 		etf_out: endpoint {
1202 1202 			remote-endpoint = <&replicator_in>;
1203 1203 		};
1204 1204 	};
+1 -1
arch/arm64/boot/dts/socionext/uniphier-ld11-global.dts
···
54 54 	sound {
55 55 		compatible = "audio-graph-card";
56 56 		label = "UniPhier LD11";
57 - 		widgets = "Headphone", "Headphone Jack";
57 + 		widgets = "Headphone", "Headphones";
58 58 		dais = <&i2s_port2
59 59 			&i2s_port3
60 60 			&i2s_port4
+1 -1
arch/arm64/boot/dts/socionext/uniphier-ld20-global.dts
···
54 54 	sound {
55 55 		compatible = "audio-graph-card";
56 56 		label = "UniPhier LD20";
57 - 		widgets = "Headphone", "Headphone Jack";
57 + 		widgets = "Headphone", "Headphones";
58 58 		dais = <&i2s_port2
59 59 			&i2s_port3
60 60 			&i2s_port4
+43 -67
arch/arm64/configs/defconfig
···
47 47 CONFIG_ARCH_QCOM=y
48 48 CONFIG_ARCH_ROCKCHIP=y
49 49 CONFIG_ARCH_SEATTLE=y
50 + CONFIG_ARCH_SYNQUACER=y
50 51 CONFIG_ARCH_RENESAS=y
51 52 CONFIG_ARCH_R8A7795=y
52 53 CONFIG_ARCH_R8A7796=y
···
59 58 CONFIG_ARCH_STRATIX10=y
60 59 CONFIG_ARCH_TEGRA=y
61 60 CONFIG_ARCH_SPRD=y
62 - CONFIG_ARCH_SYNQUACER=y
63 61 CONFIG_ARCH_THUNDER=y
64 62 CONFIG_ARCH_THUNDER2=y
65 63 CONFIG_ARCH_UNIPHIER=y
···
67 67 CONFIG_ARCH_ZX=y
68 68 CONFIG_ARCH_ZYNQMP=y
69 69 CONFIG_PCI=y
70 - CONFIG_HOTPLUG_PCI_PCIE=y
71 70 CONFIG_PCI_IOV=y
72 71 CONFIG_HOTPLUG_PCI=y
73 72 CONFIG_HOTPLUG_PCI_ACPI=y
74 - CONFIG_PCI_LAYERSCAPE=y
75 - CONFIG_PCI_HISI=y
76 - CONFIG_PCIE_QCOM=y
77 - CONFIG_PCIE_KIRIN=y
78 - CONFIG_PCIE_ARMADA_8K=y
79 - CONFIG_PCIE_HISI_STB=y
80 73 CONFIG_PCI_AARDVARK=y
81 74 CONFIG_PCI_TEGRA=y
82 75 CONFIG_PCIE_RCAR=y
83 - CONFIG_PCIE_ROCKCHIP=y
84 - CONFIG_PCIE_ROCKCHIP_HOST=m
85 76 CONFIG_PCI_HOST_GENERIC=y
86 77 CONFIG_PCI_XGENE=y
87 78 CONFIG_PCI_HOST_THUNDER_PEM=y
88 79 CONFIG_PCI_HOST_THUNDER_ECAM=y
80 + CONFIG_PCIE_ROCKCHIP_HOST=m
81 + CONFIG_PCI_LAYERSCAPE=y
82 + CONFIG_PCI_HISI=y
83 + CONFIG_PCIE_QCOM=y
84 + CONFIG_PCIE_ARMADA_8K=y
85 + CONFIG_PCIE_KIRIN=y
86 + CONFIG_PCIE_HISI_STB=y
89 87 CONFIG_ARM64_VA_BITS_48=y
90 88 CONFIG_SCHED_MC=y
91 89 CONFIG_NUMA=y
···
102 104 CONFIG_WQ_POWER_EFFICIENT_DEFAULT=y
103 105 CONFIG_ARM_CPUIDLE=y
104 106 CONFIG_CPU_FREQ=y
105 - CONFIG_CPU_FREQ_GOV_ATTR_SET=y
106 - CONFIG_CPU_FREQ_GOV_COMMON=y
107 107 CONFIG_CPU_FREQ_STAT=y
108 108 CONFIG_CPU_FREQ_GOV_POWERSAVE=m
109 109 CONFIG_CPU_FREQ_GOV_USERSPACE=y
···
109 113 CONFIG_CPU_FREQ_GOV_CONSERVATIVE=m
110 114 CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y
111 115 CONFIG_CPUFREQ_DT=y
116 + CONFIG_ACPI_CPPC_CPUFREQ=m
112 117 CONFIG_ARM_ARMADA_37XX_CPUFREQ=y
113 118 CONFIG_ARM_BIG_LITTLE_CPUFREQ=y
114 119 CONFIG_ARM_SCPI_CPUFREQ=y
115 120 CONFIG_ARM_TEGRA186_CPUFREQ=y
116 - CONFIG_ACPI_CPPC_CPUFREQ=m
117 121 CONFIG_NET=y
118 122 CONFIG_PACKET=y
119 123 CONFIG_UNIX=y
···
232 236 CONFIG_SNI_AVE=y
233 237 CONFIG_SNI_NETSEC=y
234 238 CONFIG_STMMAC_ETH=m
235 - CONFIG_DWMAC_IPQ806X=m
236 - CONFIG_DWMAC_MESON=m
237 - CONFIG_DWMAC_ROCKCHIP=m
238 - CONFIG_DWMAC_SUNXI=m
239 - CONFIG_DWMAC_SUN8I=m
240 239 CONFIG_MDIO_BUS_MUX_MMIOREG=y
241 240 CONFIG_AT803X_PHY=m
242 241 CONFIG_MARVELL_PHY=m
···
260 269 CONFIG_WLCORE_SDIO=m
261 270 CONFIG_INPUT_EVDEV=y
262 271 CONFIG_KEYBOARD_ADC=m
263 - CONFIG_KEYBOARD_CROS_EC=y
264 272 CONFIG_KEYBOARD_GPIO=y
273 + CONFIG_KEYBOARD_CROS_EC=y
265 274 CONFIG_INPUT_TOUCHSCREEN=y
266 275 CONFIG_TOUCHSCREEN_ATMEL_MXT=m
267 276 CONFIG_INPUT_MISC=y
···
287 296 CONFIG_SERIAL_SAMSUNG_CONSOLE=y
288 297 CONFIG_SERIAL_TEGRA=y
289 298 CONFIG_SERIAL_SH_SCI=y
290 - CONFIG_SERIAL_SH_SCI_NR_UARTS=11
291 - CONFIG_SERIAL_SH_SCI_CONSOLE=y
292 299 CONFIG_SERIAL_MSM=y
293 300 CONFIG_SERIAL_MSM_CONSOLE=y
294 301 CONFIG_SERIAL_XILINX_PS_UART=y
295 302 CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y
296 303 CONFIG_SERIAL_MVEBU_UART=y
297 304 CONFIG_SERIAL_DEV_BUS=y
298 - CONFIG_SERIAL_DEV_CTRL_TTYPORT=y
299 305 CONFIG_VIRTIO_CONSOLE=y
300 - CONFIG_I2C_HID=m
301 306 CONFIG_I2C_CHARDEV=y
302 307 CONFIG_I2C_MUX=y
303 308 CONFIG_I2C_MUX_PCA954x=y
···
312 325 CONFIG_I2C_CROS_EC_TUNNEL=y
313 326 CONFIG_SPI=y
314 327 CONFIG_SPI_ARMADA_3700=y
315 - CONFIG_SPI_MESON_SPICC=m
316 - CONFIG_SPI_MESON_SPIFC=m
317 328 CONFIG_SPI_BCM2835=m
318 329 CONFIG_SPI_BCM2835AUX=m
330 + CONFIG_SPI_MESON_SPICC=m
331 + CONFIG_SPI_MESON_SPIFC=m
319 332 CONFIG_SPI_ORION=y
320 333 CONFIG_SPI_PL022=y
321 - CONFIG_SPI_QUP=y
322 334 CONFIG_SPI_ROCKCHIP=y
335 + CONFIG_SPI_QUP=y
323 336 CONFIG_SPI_S3C64XX=y
324 337 CONFIG_SPI_SPIDEV=m
325 338 CONFIG_SPMI=y
326 - CONFIG_PINCTRL_IPQ8074=y
327 339 CONFIG_PINCTRL_SINGLE=y
328 340 CONFIG_PINCTRL_MAX77620=y
341 + CONFIG_PINCTRL_IPQ8074=y
329 342 CONFIG_PINCTRL_MSM8916=y
330 343 CONFIG_PINCTRL_MSM8994=y
331 344 CONFIG_PINCTRL_MSM8996=y
332 - CONFIG_PINCTRL_MT7622=y
333 345 CONFIG_PINCTRL_QDF2XXX=y
334 346 CONFIG_PINCTRL_QCOM_SPMI_PMIC=y
347 + CONFIG_PINCTRL_MT7622=y
335 348 CONFIG_GPIO_DWAPB=y
336 349 CONFIG_GPIO_MB86S7X=y
337 350 CONFIG_GPIO_PL061=y
···
355 368 CONFIG_THERMAL_GOV_POWER_ALLOCATOR=y
356 369 CONFIG_CPU_THERMAL=y
357 370 CONFIG_THERMAL_EMULATION=y
371 + CONFIG_ROCKCHIP_THERMAL=m
372 + CONFIG_RCAR_GEN3_THERMAL=y
358 373 CONFIG_ARMADA_THERMAL=y
359 374 CONFIG_BRCMSTB_THERMAL=m
360 375 CONFIG_EXYNOS_THERMAL=y
361 - CONFIG_RCAR_GEN3_THERMAL=y
362 - CONFIG_QCOM_TSENS=y
363 - CONFIG_ROCKCHIP_THERMAL=m
364 376 CONFIG_TEGRA_BPMP_THERMAL=m
377 + CONFIG_QCOM_TSENS=y
365 378 CONFIG_UNIPHIER_THERMAL=y
366 379 CONFIG_WATCHDOG=y
367 380 CONFIG_S3C2410_WATCHDOG=y
···
382 395 CONFIG_MFD_SPMI_PMIC=y
383 396 CONFIG_MFD_RK808=y
384 397 CONFIG_MFD_SEC_CORE=y
398 + CONFIG_REGULATOR_FIXED_VOLTAGE=y
385 399 CONFIG_REGULATOR_AXP20X=y
386 400 CONFIG_REGULATOR_FAN53555=y
387 - CONFIG_REGULATOR_FIXED_VOLTAGE=y
388 401 CONFIG_REGULATOR_GPIO=y
389 402 CONFIG_REGULATOR_HI6421V530=y
390 403 CONFIG_REGULATOR_HI655X=y
···
394 407 CONFIG_REGULATOR_QCOM_SPMI=y
395 408 CONFIG_REGULATOR_RK808=y
396 409 CONFIG_REGULATOR_S2MPS11=y
410 + CONFIG_RC_CORE=m
411 + CONFIG_RC_DECODERS=y
412 + CONFIG_RC_DEVICES=y
413 + CONFIG_IR_MESON=m
397 414 CONFIG_MEDIA_SUPPORT=m
398 415 CONFIG_MEDIA_CAMERA_SUPPORT=y
399 416 CONFIG_MEDIA_ANALOG_TV_SUPPORT=y
400 417 CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
401 418 CONFIG_MEDIA_CONTROLLER=y
402 - CONFIG_MEDIA_RC_SUPPORT=y
403 - CONFIG_RC_CORE=m
404 - CONFIG_RC_DEVICES=y
405 - CONFIG_RC_DECODERS=y
406 - CONFIG_IR_MESON=m
407 419 CONFIG_VIDEO_V4L2_SUBDEV_API=y
408 420 # CONFIG_DVB_NET is not set
409 421 CONFIG_V4L_MEM2MEM_DRIVERS=y
···
427 441 CONFIG_ROCKCHIP_DW_MIPI_DSI=y
428 442 CONFIG_ROCKCHIP_INNO_HDMI=y
429 443 CONFIG_DRM_RCAR_DU=m
430 - CONFIG_DRM_RCAR_LVDS=y
431 - CONFIG_DRM_RCAR_VSP=y
444 + CONFIG_DRM_RCAR_LVDS=m
432 445 CONFIG_DRM_TEGRA=m
433 446 CONFIG_DRM_PANEL_SIMPLE=m
434 447 CONFIG_DRM_I2C_ADV7511=m
···
440 455 CONFIG_BACKLIGHT_GENERIC=m
441 456 CONFIG_BACKLIGHT_PWM=m
442 457 CONFIG_BACKLIGHT_LP855X=m
443 - CONFIG_FRAMEBUFFER_CONSOLE=y
444 458 CONFIG_LOGO=y
445 459 # CONFIG_LOGO_LINUX_MONO is not set
446 460 # CONFIG_LOGO_LINUX_VGA16 is not set
···
452 468 CONFIG_SND_SOC_AK4613=m
453 469 CONFIG_SND_SIMPLE_CARD=m
454 470 CONFIG_SND_AUDIO_GRAPH_CARD=m
471 + CONFIG_I2C_HID=m
455 472 CONFIG_USB=y
456 473 CONFIG_USB_OTG=y
457 474 CONFIG_USB_XHCI_HCD=y
···
486 501 CONFIG_MMC_ARMMMCI=y
487 502 CONFIG_MMC_SDHCI=y
488 503 CONFIG_MMC_SDHCI_ACPI=y
489 - CONFIG_MMC_SDHCI_F_SDH30=y
490 504 CONFIG_MMC_SDHCI_PLTFM=y
491 505 CONFIG_MMC_SDHCI_OF_ARASAN=y
492 506 CONFIG_MMC_SDHCI_OF_ESDHC=y
493 507 CONFIG_MMC_SDHCI_CADENCE=y
494 508 CONFIG_MMC_SDHCI_TEGRA=y
509 + CONFIG_MMC_SDHCI_F_SDH30=y
495 510 CONFIG_MMC_MESON_GX=y
496 511 CONFIG_MMC_SDHCI_MSM=y
497 512 CONFIG_MMC_SPI=y
···
509 524 CONFIG_LEDS_GPIO=y
510 525 CONFIG_LEDS_PWM=y
511 526 CONFIG_LEDS_SYSCON=y
527 + CONFIG_LEDS_TRIGGER_DISK=y
512 528 CONFIG_LEDS_TRIGGER_HEARTBEAT=y
513 529 CONFIG_LEDS_TRIGGER_CPU=y
514 530 CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
515 531 CONFIG_LEDS_TRIGGER_PANIC=y
516 - CONFIG_LEDS_TRIGGER_DISK=y
517 532 CONFIG_EDAC=y
518 533 CONFIG_EDAC_GHES=y
519 534 CONFIG_RTC_CLASS=y
···
522 537 CONFIG_RTC_DRV_S5M=y
523 538 CONFIG_RTC_DRV_DS3232=y
524 539 CONFIG_RTC_DRV_EFI=y
540 + CONFIG_RTC_DRV_CROS_EC=y
525 541 CONFIG_RTC_DRV_S3C=y
526 542 CONFIG_RTC_DRV_PL031=y
527 543 CONFIG_RTC_DRV_SUN6I=y
528 544 CONFIG_RTC_DRV_ARMADA38X=y
529 545 CONFIG_RTC_DRV_TEGRA=y
530 546 CONFIG_RTC_DRV_XGENE=y
531 - CONFIG_RTC_DRV_CROS_EC=y
532 547 CONFIG_DMADEVICES=y
533 548 CONFIG_DMA_BCM2835=m
534 549 CONFIG_K3_DMA=y
···
564 579 CONFIG_ARM_MHU=y
565 580 CONFIG_PLATFORM_MHU=y
566 581 CONFIG_BCM2835_MBOX=y
567 - CONFIG_HI6220_MBOX=y
568 582 CONFIG_QCOM_APCS_IPC=y
569 583 CONFIG_ROCKCHIP_IOMMU=y
570 584 CONFIG_TEGRA_IOMMU_SMMU=y
···
586 602 CONFIG_EXTCON_USB_GPIO=y
587 603 CONFIG_EXTCON_USBC_CROS_EC=y
588 604 CONFIG_MEMORY=y
589 - CONFIG_TEGRA_MC=y
590 605 CONFIG_IIO=y
591 606 CONFIG_EXYNOS_ADC=y
592 607 CONFIG_ROCKCHIP_SARADC=m
···
601 618 CONFIG_PWM_ROCKCHIP=y
602 619 CONFIG_PWM_SAMSUNG=y
603 620 CONFIG_PWM_TEGRA=m
621 + CONFIG_PHY_XGENE=y
622 + CONFIG_PHY_SUN4I_USB=y
623 + CONFIG_PHY_HI6220_USB=y
604 624 CONFIG_PHY_HISTB_COMBPHY=y
605 625 CONFIG_PHY_HISI_INNO_USB2=y
606 - CONFIG_PHY_RCAR_GEN3_USB2=y
607 - CONFIG_PHY_RCAR_GEN3_USB3=m
608 - CONFIG_PHY_HI6220_USB=y
609 - CONFIG_PHY_QCOM_USB_HS=y
610 - CONFIG_PHY_SUN4I_USB=y
611 626 CONFIG_PHY_MVEBU_CP110_COMPHY=y
612 627 CONFIG_PHY_QCOM_QMP=m
613 - CONFIG_PHY_ROCKCHIP_INNO_USB2=y
628 + CONFIG_PHY_QCOM_USB_HS=y
629 + CONFIG_PHY_RCAR_GEN3_USB2=y
630 + CONFIG_PHY_RCAR_GEN3_USB3=m
614 631 CONFIG_PHY_ROCKCHIP_EMMC=y
632 + CONFIG_PHY_ROCKCHIP_INNO_USB2=y
615 633 CONFIG_PHY_ROCKCHIP_PCIE=m
616 634 CONFIG_PHY_ROCKCHIP_TYPEC=y
617 - CONFIG_PHY_XGENE=y
618 635 CONFIG_PHY_TEGRA_XUSB=y
619 636 CONFIG_QCOM_L2_PMU=y
620 637 CONFIG_QCOM_L3_PMU=y
621 - CONFIG_MESON_EFUSE=m
622 638 CONFIG_QCOM_QFPROM=y
623 639 CONFIG_ROCKCHIP_EFUSE=y
624 640 CONFIG_UNIPHIER_EFUSE=y
641 + CONFIG_MESON_EFUSE=m
625 642 CONFIG_TEE=y
626 643 CONFIG_OPTEE=y
627 644 CONFIG_ARM_SCPI_PROTOCOL=y
···
630 647 CONFIG_ACPI=y
631 648 CONFIG_ACPI_APEI=y
632 649 CONFIG_ACPI_APEI_GHES=y
633 - CONFIG_ACPI_APEI_PCIEAER=y
634 650 CONFIG_ACPI_APEI_MEMORY_FAILURE=y
635 651 CONFIG_ACPI_APEI_EINJ=y
636 652 CONFIG_EXT2_FS=y
···
664 682 CONFIG_DEBUG_FS=y
665 683 CONFIG_MAGIC_SYSRQ=y
666 684 CONFIG_DEBUG_KERNEL=y
667 - CONFIG_LOCKUP_DETECTOR=y
668 685 # CONFIG_SCHED_DEBUG is not set
669 686 # CONFIG_DEBUG_PREEMPT is not set
670 687 # CONFIG_FTRACE is not set
···
672 691 CONFIG_CRYPTO_ECHAINIV=y
673 692 CONFIG_CRYPTO_ANSI_CPRNG=y
674 693 CONFIG_ARM64_CRYPTO=y
675 - CONFIG_CRYPTO_SHA256_ARM64=m
676 - CONFIG_CRYPTO_SHA512_ARM64=m
677 694 CONFIG_CRYPTO_SHA1_ARM64_CE=y
678 695 CONFIG_CRYPTO_SHA2_ARM64_CE=y
679 - CONFIG_CRYPTO_GHASH_ARM64_CE=y
680 - CONFIG_CRYPTO_CRCT10DIF_ARM64_CE=m
681 - CONFIG_CRYPTO_CRC32_ARM64_CE=m
682 - CONFIG_CRYPTO_AES_ARM64=m
683 - CONFIG_CRYPTO_AES_ARM64_CE=m
684 - CONFIG_CRYPTO_AES_ARM64_CE_CCM=y
685 - CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
686 - CONFIG_CRYPTO_AES_ARM64_NEON_BLK=m
687 - CONFIG_CRYPTO_CHACHA20_NEON=m
688 - CONFIG_CRYPTO_AES_ARM64_BS=m
689 696 CONFIG_CRYPTO_SHA512_ARM64_CE=m
690 697 CONFIG_CRYPTO_SHA3_ARM64=m
691 698 CONFIG_CRYPTO_SM3_ARM64_CE=m
699 + CONFIG_CRYPTO_GHASH_ARM64_CE=y
700 + CONFIG_CRYPTO_CRCT10DIF_ARM64_CE=m
701 + CONFIG_CRYPTO_CRC32_ARM64_CE=m
702 + CONFIG_CRYPTO_AES_ARM64_CE_CCM=y
703 + CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
704 + CONFIG_CRYPTO_CHACHA20_NEON=m
705 + CONFIG_CRYPTO_AES_ARM64_BS=m
+6 -1
arch/arm64/include/asm/alternative.h
··· 28 28 __le32 *origptr, __le32 *updptr, int nr_inst); 29 29 30 30 void __init apply_alternatives_all(void); 31 - void apply_alternatives(void *start, size_t length); 31 + 32 + #ifdef CONFIG_MODULES 33 + void apply_alternatives_module(void *start, size_t length); 34 + #else 35 + static inline void apply_alternatives_module(void *start, size_t length) { } 36 + #endif 32 37 33 38 #define ALTINSTR_ENTRY(feature,cb) \ 34 39 " .word 661b - .\n" /* label */ \
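The header change above uses a common kernel idiom: when a feature (here, module support) is compiled out, a `static inline` no-op with the same signature is provided so call sites need no `#ifdef` of their own. A minimal userspace sketch of the pattern — the `FEATURE_MODULES` toggle and the `apply_patches` name are invented for illustration:

```c
#include <stddef.h>

/* Toggle this to emulate CONFIG_MODULES being set or not. */
#define FEATURE_MODULES 1

#if FEATURE_MODULES
/* Real implementation: here it just counts how many bytes were "patched". */
static size_t patched_bytes;
static void apply_patches(void *start, size_t length)
{
    (void)start;
    patched_bytes += length;
}
#else
/* Feature compiled out: callers still compile and link, the call is a no-op. */
static inline void apply_patches(void *start, size_t length)
{
    (void)start;
    (void)length;
}
#endif
```

Either way the caller (here, the module loader) stays unconditional, which is why module.c below loses its braces but not its call.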
+1 -5
arch/arm64/include/asm/pgtable.h
··· 224 224 * Only if the new pte is valid and kernel, otherwise TLB maintenance 225 225 * or update_mmu_cache() have the necessary barriers. 226 226 */ 227 - if (pte_valid_not_user(pte)) { 227 + if (pte_valid_not_user(pte)) 228 228 dsb(ishst); 229 - isb(); 230 - } 231 229 } 232 230 233 231 extern void __sync_icache_dcache(pte_t pteval); ··· 432 434 { 433 435 WRITE_ONCE(*pmdp, pmd); 434 436 dsb(ishst); 435 - isb(); 436 437 } 437 438 438 439 static inline void pmd_clear(pmd_t *pmdp) ··· 482 485 { 483 486 WRITE_ONCE(*pudp, pud); 484 487 dsb(ishst); 485 - isb(); 486 488 } 487 489 488 490 static inline void pud_clear(pud_t *pudp)
+44 -7
arch/arm64/kernel/alternative.c
··· 122 122 } 123 123 } 124 124 125 - static void __apply_alternatives(void *alt_region, bool use_linear_alias) 125 + /* 126 + * We provide our own, private D-cache cleaning function so that we don't 127 + * accidentally call into the cache.S code, which is patched by us at 128 + * runtime. 129 + */ 130 + static void clean_dcache_range_nopatch(u64 start, u64 end) 131 + { 132 + u64 cur, d_size, ctr_el0; 133 + 134 + ctr_el0 = read_sanitised_ftr_reg(SYS_CTR_EL0); 135 + d_size = 4 << cpuid_feature_extract_unsigned_field(ctr_el0, 136 + CTR_DMINLINE_SHIFT); 137 + cur = start & ~(d_size - 1); 138 + do { 139 + /* 140 + * We must clean+invalidate to the PoC in order to avoid 141 + * Cortex-A53 errata 826319, 827319, 824069 and 819472 142 + * (this corresponds to ARM64_WORKAROUND_CLEAN_CACHE) 143 + */ 144 + asm volatile("dc civac, %0" : : "r" (cur) : "memory"); 145 + } while (cur += d_size, cur < end); 146 + } 147 + 148 + static void __apply_alternatives(void *alt_region, bool is_module) 126 149 { 127 150 struct alt_instr *alt; 128 151 struct alt_region *region = alt_region; ··· 168 145 pr_info_once("patching kernel code\n"); 169 146 170 147 origptr = ALT_ORIG_PTR(alt); 171 - updptr = use_linear_alias ? lm_alias(origptr) : origptr; 148 + updptr = is_module ? origptr : lm_alias(origptr); 172 149 nr_inst = alt->orig_len / AARCH64_INSN_SIZE; 173 150 174 151 if (alt->cpufeature < ARM64_CB_PATCH) ··· 178 155 179 156 alt_cb(alt, origptr, updptr, nr_inst); 180 157 181 - flush_icache_range((uintptr_t)origptr, 182 - (uintptr_t)(origptr + nr_inst)); 158 + if (!is_module) { 159 + clean_dcache_range_nopatch((u64)origptr, 160 + (u64)(origptr + nr_inst)); 161 + } 162 + } 163 + 164 + /* 165 + * The core module code takes care of cache maintenance in 166 + * flush_module_icache(). 
167 + */ 168 + if (!is_module) { 169 + dsb(ish); 170 + __flush_icache_all(); 171 + isb(); 183 172 } 184 173 } 185 174 ··· 213 178 isb(); 214 179 } else { 215 180 BUG_ON(alternatives_applied); 216 - __apply_alternatives(&region, true); 181 + __apply_alternatives(&region, false); 217 182 /* Barriers provided by the cache flushing */ 218 183 WRITE_ONCE(alternatives_applied, 1); 219 184 } ··· 227 192 stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask); 228 193 } 229 194 230 - void apply_alternatives(void *start, size_t length) 195 + #ifdef CONFIG_MODULES 196 + void apply_alternatives_module(void *start, size_t length) 231 197 { 232 198 struct alt_region region = { 233 199 .begin = start, 234 200 .end = start + length, 235 201 }; 236 202 237 - __apply_alternatives(&region, false); 203 + __apply_alternatives(&region, true); 238 204 } 205 + #endif
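`clean_dcache_range_nopatch()` above decodes the D-cache line size from `CTR_EL0` (4 bytes shifted by the DminLine field), rounds the start address down to a line boundary, and walks the range one line at a time. A hedged host-side sketch of the same arithmetic — no actual `dc civac` is issued, and `count_cache_lines` is an invented helper that just counts the lines the real loop would touch:

```c
#include <stdint.h>

/* Decode the line size the way the patch does: 4 bytes << DminLine field. */
static uint64_t dcache_line_size(uint64_t ctr_el0, unsigned dminline_shift)
{
    return 4u << ((ctr_el0 >> dminline_shift) & 0xf);
}

/* Walk [start, end) exactly as clean_dcache_range_nopatch() does and return
 * how many lines a real `dc civac` loop would clean+invalidate. */
static unsigned count_cache_lines(uint64_t start, uint64_t end, uint64_t d_size)
{
    unsigned n = 0;
    uint64_t cur = start & ~(d_size - 1);   /* round down to line boundary */

    do {
        n++;                                /* `dc civac, cur` goes here */
    } while (cur += d_size, cur < end);
    return n;
}
```

Rounding down matters: a range starting mid-line still gets that whole line cleaned, which is what the `start & ~(d_size - 1)` in the patch guarantees.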
+2 -3
arch/arm64/kernel/module.c
··· 448 448 const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset; 449 449 450 450 for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) { 451 - if (strcmp(".altinstructions", secstrs + s->sh_name) == 0) { 452 - apply_alternatives((void *)s->sh_addr, s->sh_size); 453 - } 451 + if (strcmp(".altinstructions", secstrs + s->sh_name) == 0) 452 + apply_alternatives_module((void *)s->sh_addr, s->sh_size); 454 453 #ifdef CONFIG_ARM64_MODULE_PLTS 455 454 if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE) && 456 455 !strcmp(".text.ftrace_trampoline", secstrs + s->sh_name))
-7
arch/microblaze/Kconfig.debug
··· 8 8 9 9 source "lib/Kconfig.debug" 10 10 11 - config HEART_BEAT 12 - bool "Heart beat function for kernel" 13 - default n 14 - help 15 - This option turns on/off heart beat kernel functionality. 16 - First GPIO node is taken. 17 - 18 11 endmenu
-5
arch/microblaze/include/asm/setup.h
··· 19 19 20 20 extern char *klimit; 21 21 22 - void microblaze_heartbeat(void); 23 - void microblaze_setup_heartbeat(void); 24 - 25 22 # ifdef CONFIG_MMU 26 23 extern void mmu_reset(void); 27 24 # endif /* CONFIG_MMU */ 28 - 29 - extern void of_platform_reset_gpio_probe(void); 30 25 31 26 void time_init(void); 32 27 void init_IRQ(void);
+1 -1
arch/microblaze/include/asm/unistd.h
··· 38 38 39 39 #endif /* __ASSEMBLY__ */ 40 40 41 - #define __NR_syscalls 399 41 + #define __NR_syscalls 401 42 42 43 43 #endif /* _ASM_MICROBLAZE_UNISTD_H */
+2
arch/microblaze/include/uapi/asm/unistd.h
··· 415 415 #define __NR_pkey_alloc 396 416 416 #define __NR_pkey_free 397 417 417 #define __NR_statx 398 418 + #define __NR_io_pgetevents 399 419 + #define __NR_rseq 400 418 420 419 421 #endif /* _UAPI_ASM_MICROBLAZE_UNISTD_H */
+1 -3
arch/microblaze/kernel/Makefile
··· 8 8 CFLAGS_REMOVE_timer.o = -pg 9 9 CFLAGS_REMOVE_intc.o = -pg 10 10 CFLAGS_REMOVE_early_printk.o = -pg 11 - CFLAGS_REMOVE_heartbeat.o = -pg 12 11 CFLAGS_REMOVE_ftrace.o = -pg 13 12 CFLAGS_REMOVE_process.o = -pg 14 13 endif ··· 16 17 17 18 obj-y += dma.o exceptions.o \ 18 19 hw_exception_handler.o irq.o \ 19 - platform.o process.o prom.o ptrace.o \ 20 + process.o prom.o ptrace.o \ 20 21 reset.o setup.o signal.o sys_microblaze.o timer.o traps.o unwind.o 21 22 22 23 obj-y += cpu/ 23 24 24 - obj-$(CONFIG_HEART_BEAT) += heartbeat.o 25 25 obj-$(CONFIG_MODULES) += microblaze_ksyms.o module.o 26 26 obj-$(CONFIG_MMU) += misc.o 27 27 obj-$(CONFIG_STACKTRACE) += stacktrace.o
-72
arch/microblaze/kernel/heartbeat.c
··· 1 - /* 2 - * Copyright (C) 2007-2009 Michal Simek <monstr@monstr.eu> 3 - * Copyright (C) 2007-2009 PetaLogix 4 - * Copyright (C) 2006 Atmark Techno, Inc. 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 - */ 10 - 11 - #include <linux/sched.h> 12 - #include <linux/sched/loadavg.h> 13 - #include <linux/io.h> 14 - 15 - #include <asm/setup.h> 16 - #include <asm/page.h> 17 - #include <asm/prom.h> 18 - 19 - static unsigned int base_addr; 20 - 21 - void microblaze_heartbeat(void) 22 - { 23 - static unsigned int cnt, period, dist; 24 - 25 - if (base_addr) { 26 - if (cnt == 0 || cnt == dist) 27 - out_be32(base_addr, 1); 28 - else if (cnt == 7 || cnt == dist + 7) 29 - out_be32(base_addr, 0); 30 - 31 - if (++cnt > period) { 32 - cnt = 0; 33 - /* 34 - * The hyperbolic function below modifies the heartbeat 35 - * period length in dependency of the current (5min) 36 - * load. It goes through the points f(0)=126, f(1)=86, 37 - * f(5)=51, f(inf)->30. 38 - */ 39 - period = ((672 << FSHIFT) / (5 * avenrun[0] + 40 - (7 << FSHIFT))) + 30; 41 - dist = period / 4; 42 - } 43 - } 44 - } 45 - 46 - void microblaze_setup_heartbeat(void) 47 - { 48 - struct device_node *gpio = NULL; 49 - int *prop; 50 - int j; 51 - const char * const gpio_list[] = { 52 - "xlnx,xps-gpio-1.00.a", 53 - NULL 54 - }; 55 - 56 - for (j = 0; gpio_list[j] != NULL; j++) { 57 - gpio = of_find_compatible_node(NULL, NULL, gpio_list[j]); 58 - if (gpio) 59 - break; 60 - } 61 - 62 - if (gpio) { 63 - base_addr = be32_to_cpup(of_get_property(gpio, "reg", NULL)); 64 - base_addr = (unsigned long) ioremap(base_addr, PAGE_SIZE); 65 - pr_notice("Heartbeat GPIO at 0x%x\n", base_addr); 66 - 67 - /* GPIO is configured as output */ 68 - prop = (int *) of_get_property(gpio, "xlnx,is-bidir", NULL); 69 - if (prop) 70 - out_be32(base_addr + 4, 0); 71 - } 72 - }
-29
arch/microblaze/kernel/platform.c
··· 1 - /* 2 - * Copyright 2008 Michal Simek <monstr@monstr.eu> 3 - * 4 - * based on virtex.c file 5 - * 6 - * Copyright 2007 Secret Lab Technologies Ltd. 7 - * 8 - * This file is licensed under the terms of the GNU General Public License 9 - * version 2. This program is licensed "as is" without any warranty of any 10 - * kind, whether express or implied. 11 - */ 12 - 13 - #include <linux/init.h> 14 - #include <linux/of_platform.h> 15 - #include <asm/setup.h> 16 - 17 - static struct of_device_id xilinx_of_bus_ids[] __initdata = { 18 - { .compatible = "simple-bus", }, 19 - { .compatible = "xlnx,compound", }, 20 - {} 21 - }; 22 - 23 - static int __init microblaze_device_probe(void) 24 - { 25 - of_platform_bus_probe(NULL, xilinx_of_bus_ids, NULL); 26 - of_platform_reset_gpio_probe(); 27 - return 0; 28 - } 29 - device_initcall(microblaze_device_probe);
+6 -5
arch/microblaze/kernel/reset.c
··· 18 18 static int handle; /* reset pin handle */ 19 19 static unsigned int reset_val; 20 20 21 - void of_platform_reset_gpio_probe(void) 21 + static int of_platform_reset_gpio_probe(void) 22 22 { 23 23 int ret; 24 24 handle = of_get_named_gpio(of_find_node_by_path("/"), ··· 27 27 if (!gpio_is_valid(handle)) { 28 28 pr_info("Skipping unavailable RESET gpio %d (%s)\n", 29 29 handle, "reset"); 30 - return; 30 + return -ENODEV; 31 31 } 32 32 33 33 ret = gpio_request(handle, "reset"); 34 34 if (ret < 0) { 35 35 pr_info("GPIO pin is already allocated\n"); 36 - return; 36 + return ret; 37 37 } 38 38 39 39 /* get current setup value */ ··· 51 51 52 52 pr_info("RESET: Registered gpio device: %d, current val: %d\n", 53 53 handle, reset_val); 54 - return; 54 + return 0; 55 55 err: 56 56 gpio_free(handle); 57 - return; 57 + return ret; 58 58 } 59 + device_initcall(of_platform_reset_gpio_probe); 59 60 60 61 61 62 static void gpio_system_reset(void)
+2
arch/microblaze/kernel/syscall_table.S
··· 400 400 .long sys_pkey_alloc 401 401 .long sys_pkey_free 402 402 .long sys_statx 403 + .long sys_io_pgetevents 404 + .long sys_rseq
-7
arch/microblaze/kernel/timer.c
··· 156 156 static irqreturn_t timer_interrupt(int irq, void *dev_id) 157 157 { 158 158 struct clock_event_device *evt = &clockevent_xilinx_timer; 159 - #ifdef CONFIG_HEART_BEAT 160 - microblaze_heartbeat(); 161 - #endif 162 159 timer_ack(); 163 160 evt->event_handler(evt); 164 161 return IRQ_HANDLED; ··· 314 317 pr_err("Failed to setup IRQ"); 315 318 return ret; 316 319 } 317 - 318 - #ifdef CONFIG_HEART_BEAT 319 - microblaze_setup_heartbeat(); 320 - #endif 321 320 322 321 ret = xilinx_clocksource_init(); 323 322 if (ret)
+2 -2
arch/mips/kernel/signal.c
··· 801 801 regs->regs[0] = 0; /* Don't deal with this again. */ 802 802 } 803 803 804 - rseq_signal_deliver(regs); 804 + rseq_signal_deliver(ksig, regs); 805 805 806 806 if (sig_uses_siginfo(&ksig->ka, abi)) 807 807 ret = abi->setup_rt_frame(vdso + abi->vdso->off_rt_sigreturn, ··· 870 870 if (thread_info_flags & _TIF_NOTIFY_RESUME) { 871 871 clear_thread_flag(TIF_NOTIFY_RESUME); 872 872 tracehook_notify_resume(regs); 873 - rseq_handle_notify_resume(regs); 873 + rseq_handle_notify_resume(NULL, regs); 874 874 } 875 875 876 876 user_enter();
+5 -1
arch/openrisc/include/asm/pgalloc.h
··· 98 98 __free_page(pte); 99 99 } 100 100 101 + #define __pte_free_tlb(tlb, pte, addr) \ 102 + do { \ 103 + pgtable_page_dtor(pte); \ 104 + tlb_remove_page((tlb), (pte)); \ 105 + } while (0) 101 106 102 - #define __pte_free_tlb(tlb, pte, addr) tlb_remove_page((tlb), (pte)) 103 107 #define pmd_pgtable(pmd) pmd_page(pmd) 104 108 105 109 #define check_pgt_cache() do { } while (0)
+1 -7
arch/openrisc/kernel/entry.S
··· 277 277 l.addi r3,r1,0 // pt_regs 278 278 /* r4 set be EXCEPTION_HANDLE */ // effective address of fault 279 279 280 - /* 281 - * __PHX__: TODO 282 - * 283 - * all this can be written much simpler. look at 284 - * DTLB miss handler in the CONFIG_GUARD_PROTECTED_CORE part 285 - */ 286 280 #ifdef CONFIG_OPENRISC_NO_SPR_SR_DSX 287 281 l.lwz r6,PT_PC(r3) // address of an offending insn 288 282 l.lwz r6,0(r6) // instruction that caused pf ··· 308 314 309 315 #else 310 316 311 - l.lwz r6,PT_SR(r3) // SR 317 + l.mfspr r6,r0,SPR_SR // SR 312 318 l.andi r6,r6,SPR_SR_DSX // check for delay slot exception 313 319 l.sfne r6,r0 // exception happened in delay slot 314 320 l.bnf 7f
+6 -3
arch/openrisc/kernel/head.S
··· 210 210 * r4 - EEAR exception EA 211 211 * r10 - current pointing to current_thread_info struct 212 212 * r12 - syscall 0, since we didn't come from syscall 213 - * r13 - temp it actually contains new SR, not needed anymore 214 - * r31 - handler address of the handler we'll jump to 213 + * r30 - handler address of the handler we'll jump to 215 214 * 216 215 * handler has to save remaining registers to the exception 217 216 * ksp frame *before* tainting them! ··· 243 244 /* r1 is KSP, r30 is __pa(KSP) */ ;\ 244 245 tophys (r30,r1) ;\ 245 246 l.sw PT_GPR12(r30),r12 ;\ 247 + /* r4 use for tmp before EA */ ;\ 246 248 l.mfspr r12,r0,SPR_EPCR_BASE ;\ 247 249 l.sw PT_PC(r30),r12 ;\ 248 250 l.mfspr r12,r0,SPR_ESR_BASE ;\ ··· 263 263 /* r12 == 1 if we come from syscall */ ;\ 264 264 CLEAR_GPR(r12) ;\ 265 265 /* ----- turn on MMU ----- */ ;\ 266 - l.ori r30,r0,(EXCEPTION_SR) ;\ 266 + /* Carry DSX into exception SR */ ;\ 267 + l.mfspr r30,r0,SPR_SR ;\ 268 + l.andi r30,r30,SPR_SR_DSX ;\ 269 + l.ori r30,r30,(EXCEPTION_SR) ;\ 267 270 l.mtspr r0,r30,SPR_ESR_BASE ;\ 268 271 /* r30: EA address of handler */ ;\ 269 272 LOAD_SYMBOL_2_GPR(r30,handler) ;\
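The head.S change stops loading a fixed exception SR and instead carries the `SPR_SR_DSX` (delay-slot exception) bit over from the old SR, OR-ing it into the constant `EXCEPTION_SR` bits — the `l.mfspr` / `l.andi` / `l.ori` sequence above. The bit manipulation is simple enough to sketch in C; `SPR_SR_DSX` is bit 13 on real OpenRISC hardware, but the `EXCEPTION_SR` value here is illustrative only:

```c
#include <stdint.h>

#define SPR_SR_DSX   (1u << 13)   /* delay-slot exception flag */
#define EXCEPTION_SR 0x00000011u  /* fixed exception-mode SR bits (illustrative) */

/* Mirror of the three-instruction sequence in the EXCEPTION_ENTRY macro:
 * l.mfspr r30 / l.andi r30,SPR_SR_DSX / l.ori r30,EXCEPTION_SR */
static uint32_t exception_sr_from(uint32_t old_sr)
{
    return (old_sr & SPR_SR_DSX) | EXCEPTION_SR;
}
```

This is what lets the traps.c change below read DSX from the live SR rather than from the saved `pt_regs`.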
+1 -1
arch/openrisc/kernel/traps.c
··· 300 300 return 0; 301 301 } 302 302 #else 303 - return regs->sr & SPR_SR_DSX; 303 + return mfspr(SPR_SR) & SPR_SR_DSX; 304 304 #endif 305 305 } 306 306
+3 -3
arch/parisc/Kconfig
··· 244 244 245 245 config PARISC_PAGE_SIZE_16KB 246 246 bool "16KB" 247 - depends on PA8X00 247 + depends on PA8X00 && BROKEN 248 248 249 249 config PARISC_PAGE_SIZE_64KB 250 250 bool "64KB" 251 - depends on PA8X00 251 + depends on PA8X00 && BROKEN 252 252 253 253 endchoice 254 254 ··· 347 347 int "Maximum number of CPUs (2-32)" 348 348 range 2 32 349 349 depends on SMP 350 - default "32" 350 + default "4" 351 351 352 352 endmenu 353 353
-4
arch/parisc/Makefile
··· 65 65 # kernel. 66 66 cflags-y += -mdisable-fpregs 67 67 68 - # Without this, "ld -r" results in .text sections that are too big 69 - # (> 0x40000) for branches to reach stubs. 70 - cflags-y += -ffunction-sections 71 - 72 68 # Use long jumps instead of long branches (needed if your linker fails to 73 69 # link a too big vmlinux executable). Not enabled for building modules. 74 70 ifdef CONFIG_MLONGCALLS
-8
arch/parisc/include/asm/signal.h
··· 21 21 unsigned long sig[_NSIG_WORDS]; 22 22 } sigset_t; 23 23 24 - #ifndef __KERNEL__ 25 - struct sigaction { 26 - __sighandler_t sa_handler; 27 - unsigned long sa_flags; 28 - sigset_t sa_mask; /* mask last for extensibility */ 29 - }; 30 - #endif 31 - 32 24 #include <asm/sigcontext.h> 33 25 34 26 #endif /* !__ASSEMBLY */
+2 -1
arch/parisc/include/uapi/asm/unistd.h
··· 364 364 #define __NR_preadv2 (__NR_Linux + 347) 365 365 #define __NR_pwritev2 (__NR_Linux + 348) 366 366 #define __NR_statx (__NR_Linux + 349) 367 + #define __NR_io_pgetevents (__NR_Linux + 350) 367 368 368 - #define __NR_Linux_syscalls (__NR_statx + 1) 369 + #define __NR_Linux_syscalls (__NR_io_pgetevents + 1) 369 370 370 371 371 372 #define __IGNORE_select /* newselect */
+9 -16
arch/parisc/kernel/drivers.c
··· 154 154 { 155 155 /* FIXME: we need this because apparently the sti 156 156 * driver can be registered twice */ 157 - if(driver->drv.name) { 158 - printk(KERN_WARNING 159 - "BUG: skipping previously registered driver %s\n", 160 - driver->name); 157 + if (driver->drv.name) { 158 + pr_warn("BUG: skipping previously registered driver %s\n", 159 + driver->name); 161 160 return 1; 162 161 } 163 162 164 163 if (!driver->probe) { 165 - printk(KERN_WARNING 166 - "BUG: driver %s has no probe routine\n", 167 - driver->name); 164 + pr_warn("BUG: driver %s has no probe routine\n", driver->name); 168 165 return 1; 169 166 } 170 167 ··· 488 491 489 492 dev = create_parisc_device(mod_path); 490 493 if (dev->id.hw_type != HPHW_FAULTY) { 491 - printk(KERN_ERR "Two devices have hardware path [%s]. " 492 - "IODC data for second device: " 493 - "%02x%02x%02x%02x%02x%02x\n" 494 - "Rearranging GSC cards sometimes helps\n", 495 - parisc_pathname(dev), iodc_data[0], iodc_data[1], 496 - iodc_data[3], iodc_data[4], iodc_data[5], iodc_data[6]); 494 + pr_err("Two devices have hardware path [%s]. IODC data for second device: %7phN\n" 495 + "Rearranging GSC cards sometimes helps\n", 496 + parisc_pathname(dev), iodc_data); 497 497 return NULL; 498 498 } 499 499 ··· 522 528 * the keyboard controller 523 529 */ 524 530 if ((hpa & 0xfff) == 0 && insert_resource(&iomem_resource, &dev->hpa)) 525 - printk("Unable to claim HPA %lx for device %s\n", 526 - hpa, name); 531 + pr_warn("Unable to claim HPA %lx for device %s\n", hpa, name); 527 532 528 533 return dev; 529 534 } ··· 868 875 static int count; 869 876 870 877 print_pa_hwpath(dev, hw_path); 871 - printk(KERN_INFO "%d. %s at 0x%px [%s] { %d, 0x%x, 0x%.3x, 0x%.5x }", 878 + pr_info("%d. %s at 0x%px [%s] { %d, 0x%x, 0x%.3x, 0x%.5x }", 872 879 ++count, dev->name, (void*) dev->hpa.start, hw_path, dev->id.hw_type, 873 880 dev->id.hversion_rev, dev->id.hversion, dev->id.sversion); 874 881
+1
arch/parisc/kernel/syscall_table.S
··· 445 445 ENTRY_COMP(preadv2) 446 446 ENTRY_COMP(pwritev2) 447 447 ENTRY_SAME(statx) 448 + ENTRY_COMP(io_pgetevents) /* 350 */ 448 449 449 450 450 451 .ifne (. - 90b) - (__NR_Linux_syscalls * (91b - 90b))
+2 -2
arch/parisc/kernel/unwind.c
··· 25 25 26 26 /* #define DEBUG 1 */ 27 27 #ifdef DEBUG 28 - #define dbg(x...) printk(x) 28 + #define dbg(x...) pr_debug(x) 29 29 #else 30 30 #define dbg(x...) 31 31 #endif ··· 182 182 start = (long)&__start___unwind[0]; 183 183 stop = (long)&__stop___unwind[0]; 184 184 185 - printk("unwind_init: start = 0x%lx, end = 0x%lx, entries = %lu\n", 185 + dbg("unwind_init: start = 0x%lx, end = 0x%lx, entries = %lu\n", 186 186 start, stop, 187 187 (stop - start) / sizeof(struct unwind_table_entry)); 188 188
-1
arch/powerpc/include/asm/book3s/32/pgalloc.h
··· 138 138 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table, 139 139 unsigned long address) 140 140 { 141 - pgtable_page_dtor(table); 142 141 pgtable_free_tlb(tlb, page_address(table), 0); 143 142 } 144 143 #endif /* _ASM_POWERPC_BOOK3S_32_PGALLOC_H */
-1
arch/powerpc/include/asm/nohash/32/pgalloc.h
··· 140 140 unsigned long address) 141 141 { 142 142 tlb_flush_pgtable(tlb, address); 143 - pgtable_page_dtor(table); 144 143 pgtable_free_tlb(tlb, page_address(table), 0); 145 144 } 146 145 #endif /* _ASM_POWERPC_PGALLOC_32_H */
+1
arch/powerpc/include/asm/systbl.h
··· 393 393 SYSCALL(pkey_free) 394 394 SYSCALL(pkey_mprotect) 395 395 SYSCALL(rseq) 396 + COMPAT_SYS(io_pgetevents)
+1 -1
arch/powerpc/include/asm/unistd.h
··· 12 12 #include <uapi/asm/unistd.h> 13 13 14 14 15 - #define NR_syscalls 388 15 + #define NR_syscalls 389 16 16 17 17 #define __NR__exit __NR_exit 18 18
+1
arch/powerpc/include/uapi/asm/unistd.h
··· 399 399 #define __NR_pkey_free 385 400 400 #define __NR_pkey_mprotect 386 401 401 #define __NR_rseq 387 402 + #define __NR_io_pgetevents 388 402 403 403 404 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
-4
arch/powerpc/kernel/pci_32.c
··· 285 285 * Note that the returned IO or memory base is a physical address 286 286 */ 287 287 288 - #pragma GCC diagnostic push 289 - #pragma GCC diagnostic ignored "-Wpragmas" 290 - #pragma GCC diagnostic ignored "-Wattribute-alias" 291 288 SYSCALL_DEFINE3(pciconfig_iobase, long, which, 292 289 unsigned long, bus, unsigned long, devfn) 293 290 { ··· 310 313 311 314 return result; 312 315 } 313 - #pragma GCC diagnostic pop
-4
arch/powerpc/kernel/pci_64.c
··· 203 203 #define IOBASE_ISA_IO 3 204 204 #define IOBASE_ISA_MEM 4 205 205 206 - #pragma GCC diagnostic push 207 - #pragma GCC diagnostic ignored "-Wpragmas" 208 - #pragma GCC diagnostic ignored "-Wattribute-alias" 209 206 SYSCALL_DEFINE3(pciconfig_iobase, long, which, unsigned long, in_bus, 210 207 unsigned long, in_devfn) 211 208 { ··· 256 259 257 260 return -EOPNOTSUPP; 258 261 } 259 - #pragma GCC diagnostic pop 260 262 261 263 #ifdef CONFIG_NUMA 262 264 int pcibus_to_node(struct pci_bus *bus)
-4
arch/powerpc/kernel/rtas.c
··· 1051 1051 } 1052 1052 1053 1053 /* We assume to be passed big endian arguments */ 1054 - #pragma GCC diagnostic push 1055 - #pragma GCC diagnostic ignored "-Wpragmas" 1056 - #pragma GCC diagnostic ignored "-Wattribute-alias" 1057 1054 SYSCALL_DEFINE1(rtas, struct rtas_args __user *, uargs) 1058 1055 { 1059 1056 struct rtas_args args; ··· 1137 1140 1138 1141 return 0; 1139 1142 } 1140 - #pragma GCC diagnostic pop 1141 1143 1142 1144 /* 1143 1145 * Call early during boot, before mem init, to retrieve the RTAS
-8
arch/powerpc/kernel/signal_32.c
··· 1038 1038 } 1039 1039 #endif 1040 1040 1041 - #pragma GCC diagnostic push 1042 - #pragma GCC diagnostic ignored "-Wpragmas" 1043 - #pragma GCC diagnostic ignored "-Wattribute-alias" 1044 1041 #ifdef CONFIG_PPC64 1045 1042 COMPAT_SYSCALL_DEFINE3(swapcontext, struct ucontext __user *, old_ctx, 1046 1043 struct ucontext __user *, new_ctx, int, ctx_size) ··· 1131 1134 set_thread_flag(TIF_RESTOREALL); 1132 1135 return 0; 1133 1136 } 1134 - #pragma GCC diagnostic pop 1135 1137 1136 1138 #ifdef CONFIG_PPC64 1137 1139 COMPAT_SYSCALL_DEFINE0(rt_sigreturn) ··· 1227 1231 return 0; 1228 1232 } 1229 1233 1230 - #pragma GCC diagnostic push 1231 - #pragma GCC diagnostic ignored "-Wpragmas" 1232 - #pragma GCC diagnostic ignored "-Wattribute-alias" 1233 1234 #ifdef CONFIG_PPC32 1234 1235 SYSCALL_DEFINE3(debug_setcontext, struct ucontext __user *, ctx, 1235 1236 int, ndbg, struct sig_dbg_op __user *, dbg) ··· 1330 1337 return 0; 1331 1338 } 1332 1339 #endif 1333 - #pragma GCC diagnostic pop 1334 1340 1335 1341 /* 1336 1342 * OK, we're invoking a handler
-4
arch/powerpc/kernel/signal_64.c
··· 625 625 /* 626 626 * Handle {get,set,swap}_context operations 627 627 */ 628 - #pragma GCC diagnostic push 629 - #pragma GCC diagnostic ignored "-Wpragmas" 630 - #pragma GCC diagnostic ignored "-Wattribute-alias" 631 628 SYSCALL_DEFINE3(swapcontext, struct ucontext __user *, old_ctx, 632 629 struct ucontext __user *, new_ctx, long, ctx_size) 633 630 { ··· 690 693 set_thread_flag(TIF_RESTOREALL); 691 694 return 0; 692 695 } 693 - #pragma GCC diagnostic pop 694 696 695 697 696 698 /*
-4
arch/powerpc/kernel/syscalls.c
··· 62 62 return ret; 63 63 } 64 64 65 - #pragma GCC diagnostic push 66 - #pragma GCC diagnostic ignored "-Wpragmas" 67 - #pragma GCC diagnostic ignored "-Wattribute-alias" 68 65 SYSCALL_DEFINE6(mmap2, unsigned long, addr, size_t, len, 69 66 unsigned long, prot, unsigned long, flags, 70 67 unsigned long, fd, unsigned long, pgoff) ··· 75 78 { 76 79 return do_mmap2(addr, len, prot, flags, fd, offset, PAGE_SHIFT); 77 80 } 78 - #pragma GCC diagnostic pop 79 81 80 82 #ifdef CONFIG_PPC32 81 83 /*
-4
arch/powerpc/mm/subpage-prot.c
··· 186 186 * in a 2-bit field won't allow writes to a page that is otherwise 187 187 * write-protected. 188 188 */ 189 - #pragma GCC diagnostic push 190 - #pragma GCC diagnostic ignored "-Wpragmas" 191 - #pragma GCC diagnostic ignored "-Wattribute-alias" 192 189 SYSCALL_DEFINE3(subpage_prot, unsigned long, addr, 193 190 unsigned long, len, u32 __user *, map) 194 191 { ··· 269 272 up_write(&mm->mmap_sem); 270 273 return err; 271 274 } 272 - #pragma GCC diagnostic pop
+20 -9
arch/powerpc/platforms/powermac/time.c
··· 42 42 #define DBG(x...) 43 43 #endif 44 44 45 - /* Apparently the RTC stores seconds since 1 Jan 1904 */ 45 + /* 46 + * Offset between Unix time (1970-based) and Mac time (1904-based). Cuda and PMU 47 + * times wrap in 2040. If we need to handle later times, the read_time functions 48 + * need to be changed to interpret wrapped times as post-2040. 49 + */ 46 50 #define RTC_OFFSET 2082844800 47 51 48 52 /* ··· 101 97 if (req.reply_len != 7) 102 98 printk(KERN_ERR "cuda_get_time: got %d byte reply\n", 103 99 req.reply_len); 104 - now = (req.reply[3] << 24) + (req.reply[4] << 16) 105 - + (req.reply[5] << 8) + req.reply[6]; 100 + now = (u32)((req.reply[3] << 24) + (req.reply[4] << 16) + 101 + (req.reply[5] << 8) + req.reply[6]); 102 + /* it's either after year 2040, or the RTC has gone backwards */ 103 + WARN_ON(now < RTC_OFFSET); 104 + 106 105 return now - RTC_OFFSET; 107 106 } 108 107 ··· 113 106 114 107 static int cuda_set_rtc_time(struct rtc_time *tm) 115 108 { 116 - time64_t nowtime; 109 + u32 nowtime; 117 110 struct adb_request req; 118 111 119 - nowtime = rtc_tm_to_time64(tm) + RTC_OFFSET; 112 + nowtime = lower_32_bits(rtc_tm_to_time64(tm) + RTC_OFFSET); 120 113 if (cuda_request(&req, NULL, 6, CUDA_PACKET, CUDA_SET_TIME, 121 114 nowtime >> 24, nowtime >> 16, nowtime >> 8, 122 115 nowtime) < 0) ··· 147 140 if (req.reply_len != 4) 148 141 printk(KERN_ERR "pmu_get_time: got %d byte reply from PMU\n", 149 142 req.reply_len); 150 - now = (req.reply[0] << 24) + (req.reply[1] << 16) 151 - + (req.reply[2] << 8) + req.reply[3]; 143 + now = (u32)((req.reply[0] << 24) + (req.reply[1] << 16) + 144 + (req.reply[2] << 8) + req.reply[3]); 145 + 146 + /* it's either after year 2040, or the RTC has gone backwards */ 147 + WARN_ON(now < RTC_OFFSET); 148 + 152 149 return now - RTC_OFFSET; 153 150 } 154 151 ··· 160 149 161 150 static int pmu_set_rtc_time(struct rtc_time *tm) 162 151 { 163 - time64_t nowtime; 152 + u32 nowtime; 164 153 struct adb_request req; 165 154 166 - 
nowtime = rtc_tm_to_time64(tm) + RTC_OFFSET; 155 + nowtime = lower_32_bits(rtc_tm_to_time64(tm) + RTC_OFFSET); 167 156 if (pmu_request(&req, NULL, 5, PMU_SET_RTC, nowtime >> 24, 168 157 nowtime >> 16, nowtime >> 8, nowtime) < 0) 169 158 return -ENXIO;
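The powermac/time.c hunk revolves around one constant: Mac RTCs count seconds from 1904-01-01 in an unsigned 32-bit register, Unix time counts from 1970-01-01, so the two differ by `RTC_OFFSET = 2082844800` and the hardware counter wraps in 2040. A small sketch of the conversion and of the sanity check the patch adds — `mac_to_unix` and `unix_to_mac` are invented names:

```c
#include <stdint.h>

#define RTC_OFFSET 2082844800u  /* seconds between 1904-01-01 and 1970-01-01 */

/* Mac RTC value -> Unix time. A value below RTC_OFFSET means the 32-bit
 * counter wrapped (a date past 2040) or the RTC went backwards — the
 * condition the patch's WARN_ON() flags. */
static int64_t mac_to_unix(uint32_t mac, int *wrapped)
{
    *wrapped = mac < RTC_OFFSET;
    return (int64_t)mac - RTC_OFFSET;
}

/* Unix time -> 32-bit Mac RTC value, truncated like lower_32_bits(). */
static uint32_t unix_to_mac(int64_t t)
{
    return (uint32_t)(t + RTC_OFFSET);
}
```

Truncating on the set path is deliberate: the hardware register is only 32 bits wide, so `lower_32_bits()` just makes the narrowing explicit.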
+1
arch/s390/net/bpf_jit_comp.c
··· 1286 1286 goto free_addrs; 1287 1287 } 1288 1288 if (bpf_jit_prog(&jit, fp)) { 1289 + bpf_jit_binary_free(header); 1289 1290 fp = orig_fp; 1290 1291 goto free_addrs; 1291 1292 }
+1 -1
arch/x86/entry/entry_32.S
··· 477 477 * whereas POPF does not.) 478 478 */ 479 479 addl $PT_EFLAGS-PT_DS, %esp /* point esp at pt_regs->flags */ 480 - btr $X86_EFLAGS_IF_BIT, (%esp) 480 + btrl $X86_EFLAGS_IF_BIT, (%esp) 481 481 popfl 482 482 483 483 /*
+8 -8
arch/x86/entry/entry_64_compat.S
··· 84 84 pushq %rdx /* pt_regs->dx */ 85 85 pushq %rcx /* pt_regs->cx */ 86 86 pushq $-ENOSYS /* pt_regs->ax */ 87 - pushq %r8 /* pt_regs->r8 */ 87 + pushq $0 /* pt_regs->r8 = 0 */ 88 88 xorl %r8d, %r8d /* nospec r8 */ 89 - pushq %r9 /* pt_regs->r9 */ 89 + pushq $0 /* pt_regs->r9 = 0 */ 90 90 xorl %r9d, %r9d /* nospec r9 */ 91 - pushq %r10 /* pt_regs->r10 */ 91 + pushq $0 /* pt_regs->r10 = 0 */ 92 92 xorl %r10d, %r10d /* nospec r10 */ 93 - pushq %r11 /* pt_regs->r11 */ 93 + pushq $0 /* pt_regs->r11 = 0 */ 94 94 xorl %r11d, %r11d /* nospec r11 */ 95 95 pushq %rbx /* pt_regs->rbx */ 96 96 xorl %ebx, %ebx /* nospec rbx */ ··· 374 374 pushq %rcx /* pt_regs->cx */ 375 375 xorl %ecx, %ecx /* nospec cx */ 376 376 pushq $-ENOSYS /* pt_regs->ax */ 377 - pushq $0 /* pt_regs->r8 = 0 */ 377 + pushq %r8 /* pt_regs->r8 */ 378 378 xorl %r8d, %r8d /* nospec r8 */ 379 - pushq $0 /* pt_regs->r9 = 0 */ 379 + pushq %r9 /* pt_regs->r9 */ 380 380 xorl %r9d, %r9d /* nospec r9 */ 381 - pushq $0 /* pt_regs->r10 = 0 */ 381 + pushq %r10 /* pt_regs->r10*/ 382 382 xorl %r10d, %r10d /* nospec r10 */ 383 - pushq $0 /* pt_regs->r11 = 0 */ 383 + pushq %r11 /* pt_regs->r11 */ 384 384 xorl %r11d, %r11d /* nospec r11 */ 385 385 pushq %rbx /* pt_regs->rbx */ 386 386 xorl %ebx, %ebx /* nospec rbx */
+3
arch/x86/include/asm/pgalloc.h
··· 184 184 185 185 static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d) 186 186 { 187 + if (!pgtable_l5_enabled()) 188 + return; 189 + 187 190 BUG_ON((unsigned long)p4d & (PAGE_SIZE-1)); 188 191 free_page((unsigned long)p4d); 189 192 }
+1 -1
arch/x86/include/asm/pgtable.h
··· 898 898 #define pgd_page(pgd) pfn_to_page(pgd_pfn(pgd)) 899 899 900 900 /* to find an entry in a page-table-directory. */ 901 - static __always_inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address) 901 + static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address) 902 902 { 903 903 if (!pgtable_l5_enabled()) 904 904 return (p4d_t *)pgd;
+2 -2
arch/x86/include/asm/pgtable_64.h
··· 216 216 } 217 217 #endif 218 218 219 - static __always_inline void native_set_p4d(p4d_t *p4dp, p4d_t p4d) 219 + static inline void native_set_p4d(p4d_t *p4dp, p4d_t p4d) 220 220 { 221 221 pgd_t pgd; 222 222 ··· 230 230 *p4dp = native_make_p4d(native_pgd_val(pgd)); 231 231 } 232 232 233 - static __always_inline void native_p4d_clear(p4d_t *p4d) 233 + static inline void native_p4d_clear(p4d_t *p4d) 234 234 { 235 235 native_set_p4d(p4d, native_make_p4d(0)); 236 236 }
+12 -3
arch/x86/kernel/e820.c
··· 1248 1248 { 1249 1249 int i; 1250 1250 u64 end; 1251 + u64 addr = 0; 1251 1252 1252 1253 /* 1253 1254 * The bootstrap memblock region count maximum is 128 entries ··· 1265 1264 struct e820_entry *entry = &e820_table->entries[i]; 1266 1265 1267 1266 end = entry->addr + entry->size; 1267 + if (addr < entry->addr) 1268 + memblock_reserve(addr, entry->addr - addr); 1269 + addr = end; 1268 1270 if (end != (resource_size_t)end) 1269 1271 continue; 1270 1272 1273 + /* 1274 + * all !E820_TYPE_RAM ranges (including gap ranges) are put 1275 + * into memblock.reserved to make sure that struct pages in 1276 + * such regions are not left uninitialized after bootup. 1277 + */ 1271 1278 if (entry->type != E820_TYPE_RAM && entry->type != E820_TYPE_RESERVED_KERN) 1272 - continue; 1273 - 1274 - memblock_add(entry->addr, entry->size); 1279 + memblock_reserve(entry->addr, entry->size); 1280 + else 1281 + memblock_add(entry->addr, entry->size); 1275 1282 } 1276 1283 1277 1284 /* Throw away partial pages: */
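The e820 change walks the (sorted) table with a running `addr` cursor and `memblock_reserve()`s both the holes between entries and every non-RAM entry, so the struct pages covering those ranges are not left uninitialized. The gap-tracking cursor can be sketched in isolation — `find_gaps` and `struct range` are invented for illustration:

```c
#include <stddef.h>
#include <stdint.h>

struct range { uint64_t start, size; };

/* Given sorted, non-overlapping entries, record the holes between them,
 * mirroring the `addr` cursor added to e820__memblock_setup(). */
static size_t find_gaps(const struct range *ent, size_t n,
                        struct range *gaps, size_t max_gaps)
{
    uint64_t addr = 0;
    size_t ngaps = 0;

    for (size_t i = 0; i < n; i++) {
        if (addr < ent[i].start && ngaps < max_gaps) {
            gaps[ngaps].start = addr;               /* memblock_reserve() here */
            gaps[ngaps].size  = ent[i].start - addr;
            ngaps++;
        }
        addr = ent[i].start + ent[i].size;          /* advance past the entry */
    }
    return ngaps;
}
```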
+7 -14
arch/x86/mm/fault.c
··· 641 641 return 0; 642 642 } 643 643 644 - static const char nx_warning[] = KERN_CRIT 645 - "kernel tried to execute NX-protected page - exploit attempt? (uid: %d)\n"; 646 - static const char smep_warning[] = KERN_CRIT 647 - "unable to execute userspace code (SMEP?) (uid: %d)\n"; 648 - 649 644 static void 650 645 show_fault_oops(struct pt_regs *regs, unsigned long error_code, 651 646 unsigned long address) ··· 659 664 pte = lookup_address_in_pgd(pgd, address, &level); 660 665 661 666 if (pte && pte_present(*pte) && !pte_exec(*pte)) 662 - printk(nx_warning, from_kuid(&init_user_ns, current_uid())); 667 + pr_crit("kernel tried to execute NX-protected page - exploit attempt? (uid: %d)\n", 668 + from_kuid(&init_user_ns, current_uid())); 663 669 if (pte && pte_present(*pte) && pte_exec(*pte) && 664 670 (pgd_flags(*pgd) & _PAGE_USER) && 665 671 (__read_cr4() & X86_CR4_SMEP)) 666 - printk(smep_warning, from_kuid(&init_user_ns, current_uid())); 672 + pr_crit("unable to execute userspace code (SMEP?) (uid: %d)\n", 673 + from_kuid(&init_user_ns, current_uid())); 667 674 } 668 675 669 - printk(KERN_ALERT "BUG: unable to handle kernel "); 670 - if (address < PAGE_SIZE) 671 - printk(KERN_CONT "NULL pointer dereference"); 672 - else 673 - printk(KERN_CONT "paging request"); 674 - 675 - printk(KERN_CONT " at %px\n", (void *) address); 676 + pr_alert("BUG: unable to handle kernel %s at %px\n", 677 + address < PAGE_SIZE ? "NULL pointer dereference" : "paging request", 678 + (void *)address); 676 679 677 680 dump_pagetable(address); 678 681 }
+2 -2
arch/x86/platform/efi/efi_64.c
··· 166 166 pgd = pgd_offset_k(pgd_idx * PGDIR_SIZE); 167 167 set_pgd(pgd_offset_k(pgd_idx * PGDIR_SIZE), save_pgd[pgd_idx]); 168 168 169 - if (!(pgd_val(*pgd) & _PAGE_PRESENT)) 169 + if (!pgd_present(*pgd)) 170 170 continue; 171 171 172 172 for (i = 0; i < PTRS_PER_P4D; i++) { 173 173 p4d = p4d_offset(pgd, 174 174 pgd_idx * PGDIR_SIZE + i * P4D_SIZE); 175 175 176 - if (!(p4d_val(*p4d) & _PAGE_PRESENT)) 176 + if (!p4d_present(*p4d)) 177 177 continue; 178 178 179 179 pud = (pud_t *)p4d_page_vaddr(*p4d);
+4
block/blk-core.c
··· 3473 3473 dst->cpu = src->cpu; 3474 3474 dst->__sector = blk_rq_pos(src); 3475 3475 dst->__data_len = blk_rq_bytes(src); 3476 + if (src->rq_flags & RQF_SPECIAL_PAYLOAD) { 3477 + dst->rq_flags |= RQF_SPECIAL_PAYLOAD; 3478 + dst->special_vec = src->special_vec; 3479 + } 3476 3480 dst->nr_phys_segments = src->nr_phys_segments; 3477 3481 dst->ioprio = src->ioprio; 3478 3482 dst->extra_len = src->extra_len;
+12
block/blk-mq.c
··· 1075 1075 1076 1076 #define BLK_MQ_RESOURCE_DELAY 3 /* ms units */ 1077 1077 1078 + /* 1079 + * Returns true if we did some work AND can potentially do more. 1080 + */ 1078 1081 bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list, 1079 1082 bool got_budget) 1080 1083 { ··· 1208 1205 blk_mq_run_hw_queue(hctx, true); 1209 1206 else if (needs_restart && (ret == BLK_STS_RESOURCE)) 1210 1207 blk_mq_delay_run_hw_queue(hctx, BLK_MQ_RESOURCE_DELAY); 1208 + 1209 + return false; 1211 1210 } 1211 + 1212 + /* 1213 + * If the host/device is unable to accept more work, inform the 1214 + * caller of that. 1215 + */ 1216 + if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) 1217 + return false; 1212 1218 1213 1219 return (queued + errors) != 0; 1214 1220 }
+1 -1
certs/blacklist.h
··· 1 1 #include <linux/kernel.h> 2 2 3 - extern const char __initdata *const blacklist_hashes[]; 3 + extern const char __initconst *const blacklist_hashes[];
+10 -3
crypto/af_alg.c
··· 1060 1060 } 1061 1061 EXPORT_SYMBOL_GPL(af_alg_async_cb); 1062 1062 1063 - __poll_t af_alg_poll_mask(struct socket *sock, __poll_t events) 1063 + /** 1064 + * af_alg_poll - poll system call handler 1065 + */ 1066 + __poll_t af_alg_poll(struct file *file, struct socket *sock, 1067 + poll_table *wait) 1064 1068 { 1065 1069 struct sock *sk = sock->sk; 1066 1070 struct alg_sock *ask = alg_sk(sk); 1067 1071 struct af_alg_ctx *ctx = ask->private; 1068 - __poll_t mask = 0; 1072 + __poll_t mask; 1073 + 1074 + sock_poll_wait(file, sk_sleep(sk), wait); 1075 + mask = 0; 1069 1076 1070 1077 if (!ctx->more || ctx->used) 1071 1078 mask |= EPOLLIN | EPOLLRDNORM; ··· 1082 1075 1083 1076 return mask; 1084 1077 } 1085 - EXPORT_SYMBOL_GPL(af_alg_poll_mask); 1078 + EXPORT_SYMBOL_GPL(af_alg_poll); 1086 1079 1087 1080 /** 1088 1081 * af_alg_alloc_areq - allocate struct af_alg_async_req
+2 -2
crypto/algif_aead.c
··· 375 375 .sendmsg = aead_sendmsg, 376 376 .sendpage = af_alg_sendpage, 377 377 .recvmsg = aead_recvmsg, 378 - .poll_mask = af_alg_poll_mask, 378 + .poll = af_alg_poll, 379 379 }; 380 380 381 381 static int aead_check_key(struct socket *sock) ··· 471 471 .sendmsg = aead_sendmsg_nokey, 472 472 .sendpage = aead_sendpage_nokey, 473 473 .recvmsg = aead_recvmsg_nokey, 474 - .poll_mask = af_alg_poll_mask, 474 + .poll = af_alg_poll, 475 475 }; 476 476 477 477 static void *aead_bind(const char *name, u32 type, u32 mask)
+2 -2
crypto/algif_skcipher.c
··· 206 206 .sendmsg = skcipher_sendmsg, 207 207 .sendpage = af_alg_sendpage, 208 208 .recvmsg = skcipher_recvmsg, 209 - .poll_mask = af_alg_poll_mask, 209 + .poll = af_alg_poll, 210 210 }; 211 211 212 212 static int skcipher_check_key(struct socket *sock) ··· 302 302 .sendmsg = skcipher_sendmsg_nokey, 303 303 .sendpage = skcipher_sendpage_nokey, 304 304 .recvmsg = skcipher_recvmsg_nokey, 305 - .poll_mask = af_alg_poll_mask, 305 + .poll = af_alg_poll, 306 306 }; 307 307 308 308 static void *skcipher_bind(const char *name, u32 type, u32 mask)
+9
crypto/asymmetric_keys/x509_cert_parser.c
··· 249 249 return -EINVAL; 250 250 } 251 251 252 + if (strcmp(ctx->cert->sig->pkey_algo, "rsa") == 0) { 253 + /* Discard the BIT STRING metadata */ 254 + if (vlen < 1 || *(const u8 *)value != 0) 255 + return -EBADMSG; 256 + 257 + value++; 258 + vlen--; 259 + } 260 + 252 261 ctx->cert->raw_sig = value; 253 262 ctx->cert->raw_sig_size = vlen; 254 263 return 0;
+72
drivers/acpi/osl.c
··· 45 45 #include <linux/uaccess.h> 46 46 #include <linux/io-64-nonatomic-lo-hi.h> 47 47 48 + #include "acpica/accommon.h" 49 + #include "acpica/acnamesp.h" 48 50 #include "internal.h" 49 51 50 52 #define _COMPONENT ACPI_OS_SERVICES ··· 1491 1489 return acpi_check_resource_conflict(&res); 1492 1490 } 1493 1491 EXPORT_SYMBOL(acpi_check_region); 1492 + 1493 + static acpi_status acpi_deactivate_mem_region(acpi_handle handle, u32 level, 1494 + void *_res, void **return_value) 1495 + { 1496 + struct acpi_mem_space_context **mem_ctx; 1497 + union acpi_operand_object *handler_obj; 1498 + union acpi_operand_object *region_obj2; 1499 + union acpi_operand_object *region_obj; 1500 + struct resource *res = _res; 1501 + acpi_status status; 1502 + 1503 + region_obj = acpi_ns_get_attached_object(handle); 1504 + if (!region_obj) 1505 + return AE_OK; 1506 + 1507 + handler_obj = region_obj->region.handler; 1508 + if (!handler_obj) 1509 + return AE_OK; 1510 + 1511 + if (region_obj->region.space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY) 1512 + return AE_OK; 1513 + 1514 + if (!(region_obj->region.flags & AOPOBJ_SETUP_COMPLETE)) 1515 + return AE_OK; 1516 + 1517 + region_obj2 = acpi_ns_get_secondary_object(region_obj); 1518 + if (!region_obj2) 1519 + return AE_OK; 1520 + 1521 + mem_ctx = (void *)&region_obj2->extra.region_context; 1522 + 1523 + if (!(mem_ctx[0]->address >= res->start && 1524 + mem_ctx[0]->address < res->end)) 1525 + return AE_OK; 1526 + 1527 + status = handler_obj->address_space.setup(region_obj, 1528 + ACPI_REGION_DEACTIVATE, 1529 + NULL, (void **)mem_ctx); 1530 + if (ACPI_SUCCESS(status)) 1531 + region_obj->region.flags &= ~(AOPOBJ_SETUP_COMPLETE); 1532 + 1533 + return status; 1534 + } 1535 + 1536 + /** 1537 + * acpi_release_memory - Release any mappings done to a memory region 1538 + * @handle: Handle to namespace node 1539 + * @res: Memory resource 1540 + * @level: A level that terminates the search 1541 + * 1542 + * Walks through @handle and unmaps all SystemMemory Operation Regions that 1543 + * overlap with @res and that have already been activated (mapped). 1544 + * 1545 + * This is a helper that allows drivers to place special requirements on memory 1546 + * region that may overlap with operation regions, primarily allowing them to 1547 + * safely map the region as non-cached memory. 1548 + * 1549 + * The unmapped Operation Regions will be automatically remapped next time they 1550 + * are called, so the drivers do not need to do anything else. 1551 + */ 1552 + acpi_status acpi_release_memory(acpi_handle handle, struct resource *res, 1553 + u32 level) 1554 + { 1555 + if (!(res->flags & IORESOURCE_MEM)) 1556 + return AE_TYPE; 1557 + 1558 + return acpi_walk_namespace(ACPI_TYPE_REGION, handle, level, 1559 + acpi_deactivate_mem_region, NULL, res, NULL); 1560 + } 1561 + EXPORT_SYMBOL_GPL(acpi_release_memory); 1494 1562 1495 1563 /* 1496 1564 * Let drivers know whether the resource checks are effective
+1 -1
drivers/atm/iphase.c
··· 1618 1618 skb_queue_head_init(&iadev->rx_dma_q); 1619 1619 iadev->rx_free_desc_qhead = NULL; 1620 1620 1621 - iadev->rx_open = kcalloc(4, iadev->num_vc, GFP_KERNEL); 1621 + iadev->rx_open = kcalloc(iadev->num_vc, sizeof(void *), GFP_KERNEL); 1622 1622 if (!iadev->rx_open) { 1623 1623 printk(KERN_ERR DEV_LABEL "itf %d couldn't get free page\n", 1624 1624 dev->number);
+2
drivers/atm/zatm.c
··· 1481 1481 return -EFAULT; 1482 1482 if (pool < 0 || pool > ZATM_LAST_POOL) 1483 1483 return -EINVAL; 1484 + pool = array_index_nospec(pool, 1485 + ZATM_LAST_POOL + 1); 1484 1486 if (copy_from_user(&info, 1485 1487 &((struct zatm_pool_req __user *) arg)->info, 1486 1488 sizeof(info))) return -EFAULT;
+3 -4
drivers/base/power/domain.c
··· 2487 2487 * power domain corresponding to a DT node's "required-opps" property. 2488 2488 * 2489 2489 * @dev: Device for which the performance-state needs to be found. 2490 - * @opp_node: DT node where the "required-opps" property is present. This can be 2490 + * @np: DT node where the "required-opps" property is present. This can be 2491 2491 * the device node itself (if it doesn't have an OPP table) or a node 2492 2492 * within the OPP table of a device (if device has an OPP table). 2493 - * @state: Pointer to return performance state. 2494 2493 * 2495 2494 * Returns performance state corresponding to the "required-opps" property of 2496 2495 * a DT node. This calls platform specific genpd->opp_to_performance_state() ··· 2498 2499 * Returns performance state on success and 0 on failure. 2499 2500 */ 2500 2501 unsigned int of_genpd_opp_to_performance_state(struct device *dev, 2501 - struct device_node *opp_node) 2502 + struct device_node *np) 2502 2503 { 2503 2504 struct generic_pm_domain *genpd; 2504 2505 struct dev_pm_opp *opp; ··· 2513 2514 2514 2515 genpd_lock(genpd); 2515 2516 2516 - opp = of_dev_pm_opp_find_required_opp(&genpd->dev, opp_node); 2517 + opp = of_dev_pm_opp_find_required_opp(&genpd->dev, np); 2517 2518 if (IS_ERR(opp)) { 2518 2519 dev_err(dev, "Failed to find required OPP: %ld\n", 2519 2520 PTR_ERR(opp));
+2 -2
drivers/block/drbd/drbd_req.c
··· 1244 1244 _drbd_start_io_acct(device, req); 1245 1245 1246 1246 /* process discards always from our submitter thread */ 1247 - if ((bio_op(bio) & REQ_OP_WRITE_ZEROES) || 1248 - (bio_op(bio) & REQ_OP_DISCARD)) 1247 + if (bio_op(bio) == REQ_OP_WRITE_ZEROES || 1248 + bio_op(bio) == REQ_OP_DISCARD) 1249 1249 goto queue_for_submitter_thread; 1250 1250 1251 1251 if (rw == WRITE && req->private_bio && req->i.size
+13 -16
drivers/char/random.c
··· 402 402 /* 403 403 * Static global variables 404 404 */ 405 - static DECLARE_WAIT_QUEUE_HEAD(random_wait); 405 + static DECLARE_WAIT_QUEUE_HEAD(random_read_wait); 406 + static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); 406 407 static struct fasync_struct *fasync; 407 408 408 409 static DEFINE_SPINLOCK(random_ready_list_lock); ··· 722 721 723 722 /* should we wake readers? */ 724 723 if (entropy_bits >= random_read_wakeup_bits && 725 - wq_has_sleeper(&random_wait)) { 726 - wake_up_interruptible_poll(&random_wait, POLLIN); 724 + wq_has_sleeper(&random_read_wait)) { 725 + wake_up_interruptible(&random_read_wait); 727 726 kill_fasync(&fasync, SIGIO, POLL_IN); 728 727 } 729 728 /* If the input pool is getting full, send some ··· 1397 1396 trace_debit_entropy(r->name, 8 * ibytes); 1398 1397 if (ibytes && 1399 1398 (r->entropy_count >> ENTROPY_SHIFT) < random_write_wakeup_bits) { 1400 - wake_up_interruptible_poll(&random_wait, POLLOUT); 1399 + wake_up_interruptible(&random_write_wait); 1401 1400 kill_fasync(&fasync, SIGIO, POLL_OUT); 1402 1401 } 1403 1402 ··· 1839 1838 if (nonblock) 1840 1839 return -EAGAIN; 1841 1840 1842 - wait_event_interruptible(random_wait, 1841 + wait_event_interruptible(random_read_wait, 1843 1842 ENTROPY_BITS(&input_pool) >= 1844 1843 random_read_wakeup_bits); 1845 1844 if (signal_pending(current)) ··· 1876 1875 return ret; 1877 1876 } 1878 1877 1879 - static struct wait_queue_head * 1880 - random_get_poll_head(struct file *file, __poll_t events) 1881 - { 1882 - return &random_wait; 1883 - } 1884 - 1885 1878 static __poll_t 1886 - random_poll_mask(struct file *file, __poll_t events) 1879 + random_poll(struct file *file, poll_table * wait) 1887 1880 { 1888 - __poll_t mask = 0; 1881 + __poll_t mask; 1889 1882 1883 + poll_wait(file, &random_read_wait, wait); 1884 + poll_wait(file, &random_write_wait, wait); 1885 + mask = 0; 1890 1886 if (ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits) 1891 1887 mask |= EPOLLIN | EPOLLRDNORM; 1892 1888 if (ENTROPY_BITS(&input_pool) < random_write_wakeup_bits) ··· 1990 1992 const struct file_operations random_fops = { 1991 1993 .read = random_read, 1992 1994 .write = random_write, 1993 - .get_poll_head = random_get_poll_head, 1994 - .poll_mask = random_poll_mask, 1995 + .poll = random_poll, 1995 1996 .unlocked_ioctl = random_ioctl, 1996 1997 .fasync = random_fasync, 1997 1998 .llseek = noop_llseek, ··· 2323 2326 * We'll be woken up again once below random_write_wakeup_thresh, 2324 2327 * or when the calling thread is about to terminate. 2325 2328 */ 2326 - wait_event_interruptible(random_wait, kthread_should_stop() || 2329 + wait_event_interruptible(random_write_wait, kthread_should_stop() || 2327 2330 ENTROPY_BITS(&input_pool) <= random_write_wakeup_bits); 2328 2331 mix_pool_bytes(poolp, buffer, count); 2329 2332 credit_entropy_bits(poolp, entropy);
+4 -4
drivers/cpufreq/qcom-cpufreq-kryo.c
··· 87 87 int ret; 88 88 89 89 cpu_dev = get_cpu_device(0); 90 - if (NULL == cpu_dev) 91 - ret = -ENODEV; 90 + if (!cpu_dev) 91 + return -ENODEV; 92 92 93 93 msm8996_version = qcom_cpufreq_kryo_get_msm_id(); 94 94 if (NUM_OF_MSM8996_VERSIONS == msm8996_version) { ··· 97 97 } 98 98 99 99 np = dev_pm_opp_of_get_opp_desc_node(cpu_dev); 100 - if (IS_ERR(np)) 101 - return PTR_ERR(np); 100 + if (!np) 101 + return -ENOENT; 102 102 103 103 ret = of_device_is_compatible(np, "operating-points-v2-kryo-cpu"); 104 104 if (!ret) {
+8
drivers/dax/super.c
··· 86 86 { 87 87 struct dax_device *dax_dev; 88 88 bool dax_enabled = false; 89 + struct request_queue *q; 89 90 pgoff_t pgoff; 90 91 int err, id; 91 92 void *kaddr; ··· 96 95 97 96 if (blocksize != PAGE_SIZE) { 98 97 pr_debug("%s: error: unsupported blocksize for dax\n", 98 + bdevname(bdev, buf)); 99 + return false; 100 + } 101 + 102 + q = bdev_get_queue(bdev); 103 + if (!q || !blk_queue_dax(q)) { 104 + pr_debug("%s: error: request queue doesn't support dax\n", 99 105 bdevname(bdev, buf)); 100 106 return false; 101 107 }
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
··· 376 376 struct amdgpu_device *adev = ring->adev; 377 377 uint64_t index; 378 378 379 - if (ring != &adev->uvd.inst[ring->me].ring) { 379 + if (ring->funcs->type != AMDGPU_RING_TYPE_UVD) { 380 380 ring->fence_drv.cpu_addr = &adev->wb.wb[ring->fence_offs]; 381 381 ring->fence_drv.gpu_addr = adev->wb.gpu_addr + (ring->fence_offs * 4); 382 382 } else {
+27 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
··· 52 52 unsigned long bo_size; 53 53 const char *fw_name; 54 54 const struct common_firmware_header *hdr; 55 - unsigned version_major, version_minor, family_id; 55 + unsigned char fw_check; 56 56 int r; 57 57 58 58 INIT_DELAYED_WORK(&adev->vcn.idle_work, amdgpu_vcn_idle_work_handler); ··· 83 83 84 84 hdr = (const struct common_firmware_header *)adev->vcn.fw->data; 85 85 adev->vcn.fw_version = le32_to_cpu(hdr->ucode_version); 86 - family_id = le32_to_cpu(hdr->ucode_version) & 0xff; 87 - version_major = (le32_to_cpu(hdr->ucode_version) >> 24) & 0xff; 88 - version_minor = (le32_to_cpu(hdr->ucode_version) >> 8) & 0xff; 89 - DRM_INFO("Found VCN firmware Version: %hu.%hu Family ID: %hu\n", 90 - version_major, version_minor, family_id); 91 86 87 + /* Bit 20-23, it is encode major and non-zero for new naming convention. 88 + * This field is part of version minor and DRM_DISABLED_FLAG in old naming 89 + * convention. Since the latest version minor is 0x5B and DRM_DISABLED_FLAG 90 + * is zero in old naming convention, this field is always zero so far. 91 + * These four bits are used to tell which naming convention is present. 92 + */ 93 + fw_check = (le32_to_cpu(hdr->ucode_version) >> 20) & 0xf; 94 + if (fw_check) { 95 + unsigned int dec_ver, enc_major, enc_minor, vep, fw_rev; 96 + 97 + fw_rev = le32_to_cpu(hdr->ucode_version) & 0xfff; 98 + enc_minor = (le32_to_cpu(hdr->ucode_version) >> 12) & 0xff; 99 + enc_major = fw_check; 100 + dec_ver = (le32_to_cpu(hdr->ucode_version) >> 24) & 0xf; 101 + vep = (le32_to_cpu(hdr->ucode_version) >> 28) & 0xf; 102 + DRM_INFO("Found VCN firmware Version ENC: %hu.%hu DEC: %hu VEP: %hu Revision: %hu\n", 103 + enc_major, enc_minor, dec_ver, vep, fw_rev); 104 + } else { 105 + unsigned int version_major, version_minor, family_id; 106 + 107 + family_id = le32_to_cpu(hdr->ucode_version) & 0xff; 108 + version_major = (le32_to_cpu(hdr->ucode_version) >> 24) & 0xff; 109 + version_minor = (le32_to_cpu(hdr->ucode_version) >> 8) & 0xff; 110 + DRM_INFO("Found VCN firmware Version: %hu.%hu Family ID: %hu\n", 111 + version_major, version_minor, family_id); 112 + } 92 113 93 114 bo_size = AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8) 94 115 + AMDGPU_VCN_STACK_SIZE + AMDGPU_VCN_HEAP_SIZE
+5 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 1463 1463 uint64_t count; 1464 1464 1465 1465 max_entries = min(max_entries, 16ull * 1024ull); 1466 - for (count = 1; count < max_entries; ++count) { 1466 + for (count = 1; 1467 + count < max_entries / (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE); 1468 + ++count) { 1467 1469 uint64_t idx = pfn + count; 1468 1470 1469 1471 if (pages_addr[idx] != ··· 1478 1476 dma_addr = pages_addr; 1479 1477 } else { 1480 1478 addr = pages_addr[pfn]; 1481 - max_entries = count; 1479 + max_entries = count * (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE); 1482 1480 } 1483 1481 1484 1482 } else if (flags & AMDGPU_PTE_VALID) { ··· 1493 1491 if (r) 1494 1492 return r; 1495 1493 1496 - pfn += last - start + 1; 1494 + pfn += (last - start + 1) / (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE); 1497 1495 if (nodes && nodes->size == pfn) { 1498 1496 pfn = 0; 1499 1497 ++nodes;
+8 -8
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 3928 3928 if (acrtc->base.state->event) 3929 3929 prepare_flip_isr(acrtc); 3930 3930 3931 + spin_unlock_irqrestore(&crtc->dev->event_lock, flags); 3932 + 3931 3933 surface_updates->surface = dc_stream_get_status(acrtc_state->stream)->plane_states[0]; 3932 3934 surface_updates->flip_addr = &addr; 3933 - 3934 3935 3935 3936 dc_commit_updates_for_stream(adev->dm.dc, 3936 3937 surface_updates, ··· 3945 3944 __func__, 3946 3945 addr.address.grph.addr.high_part, 3947 3946 addr.address.grph.addr.low_part); 3948 - 3949 - 3950 - spin_unlock_irqrestore(&crtc->dev->event_lock, flags); 3951 3947 } 3952 3948 3953 3949 /* ··· 4204 4206 struct drm_connector *connector; 4205 4207 struct drm_connector_state *old_con_state, *new_con_state; 4206 4208 struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state; 4209 + int crtc_disable_count = 0; 4207 4210 4208 4211 drm_atomic_helper_update_legacy_modeset_state(dev, state); 4209 4212 ··· 4409 4410 struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc); 4410 4411 bool modeset_needed; 4411 4412 4413 + if (old_crtc_state->active && !new_crtc_state->active) 4414 + crtc_disable_count++; 4415 + 4412 4416 dm_new_crtc_state = to_dm_crtc_state(new_crtc_state); 4413 4417 dm_old_crtc_state = to_dm_crtc_state(old_crtc_state); 4414 4418 modeset_needed = modeset_required( ··· 4465 4463 * so we can put the GPU into runtime suspend if we're not driving any 4466 4464 * displays anymore 4467 4465 */ 4466 + for (i = 0; i < crtc_disable_count; i++) 4467 + pm_runtime_put_autosuspend(dev->dev); 4468 4468 pm_runtime_mark_last_busy(dev->dev); 4469 - for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { 4470 - if (old_crtc_state->active && !new_crtc_state->active) 4471 - pm_runtime_put_autosuspend(dev->dev); 4472 - } 4473 4469 } 4474 4470 4475 4471
+2 -1
drivers/gpu/drm/arm/malidp_drv.c
··· 278 278 279 279 static void malidp_fini(struct drm_device *drm) 280 280 { 281 - drm_atomic_helper_shutdown(drm); 282 281 drm_mode_config_cleanup(drm); 283 282 } 284 283 ··· 645 646 malidp_de_irq_fini(drm); 646 647 drm->irq_enabled = false; 647 648 irq_init_fail: 649 + drm_atomic_helper_shutdown(drm); 648 650 component_unbind_all(dev, drm); 649 651 bind_fail: 650 652 of_node_put(malidp->crtc.port); ··· 681 681 malidp_se_irq_fini(drm); 682 682 malidp_de_irq_fini(drm); 683 683 drm->irq_enabled = false; 684 + drm_atomic_helper_shutdown(drm); 684 685 component_unbind_all(dev, drm); 685 686 of_node_put(malidp->crtc.port); 686 687 malidp->crtc.port = NULL;
+2 -1
drivers/gpu/drm/arm/malidp_hw.c
··· 634 634 .vsync_irq = MALIDP500_DE_IRQ_VSYNC, 635 635 }, 636 636 .se_irq_map = { 637 - .irq_mask = MALIDP500_SE_IRQ_CONF_MODE, 637 + .irq_mask = MALIDP500_SE_IRQ_CONF_MODE | 638 + MALIDP500_SE_IRQ_GLOBAL, 638 639 .vsync_irq = 0, 639 640 }, 640 641 .dc_irq_map = {
+6 -3
drivers/gpu/drm/arm/malidp_planes.c
··· 23 23 24 24 /* Layer specific register offsets */ 25 25 #define MALIDP_LAYER_FORMAT 0x000 26 + #define LAYER_FORMAT_MASK 0x3f 26 27 #define MALIDP_LAYER_CONTROL 0x004 27 28 #define LAYER_ENABLE (1 << 0) 28 29 #define LAYER_FLOWCFG_MASK 7 ··· 236 235 if (state->rotation & MALIDP_ROTATED_MASK) { 237 236 int val; 238 237 239 - val = mp->hwdev->hw->rotmem_required(mp->hwdev, state->crtc_h, 240 - state->crtc_w, 238 + val = mp->hwdev->hw->rotmem_required(mp->hwdev, state->crtc_w, 239 + state->crtc_h, 241 240 fb->format->format); 242 241 if (val < 0) 243 242 return val; ··· 338 337 dest_w = plane->state->crtc_w; 339 338 dest_h = plane->state->crtc_h; 340 339 341 - malidp_hw_write(mp->hwdev, ms->format, mp->layer->base); 340 + val = malidp_hw_read(mp->hwdev, mp->layer->base); 341 + val = (val & ~LAYER_FORMAT_MASK) | ms->format; 342 + malidp_hw_write(mp->hwdev, val, mp->layer->base); 342 343 343 344 for (i = 0; i < ms->n_planes; i++) { 344 345 /* calculate the offset for the layer's plane registers */
-3
drivers/gpu/drm/i915/i915_drv.h
··· 2245 2245 **/ 2246 2246 static inline struct scatterlist *__sg_next(struct scatterlist *sg) 2247 2247 { 2248 - #ifdef CONFIG_DEBUG_SG 2249 - BUG_ON(sg->sg_magic != SG_MAGIC); 2250 - #endif 2251 2248 return sg_is_last(sg) ? NULL : ____sg_next(sg); 2252 2249 } 2253 2250
+8 -4
drivers/gpu/drm/meson/meson_drv.c
··· 197 197 priv->io_base = regs; 198 198 199 199 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "hhi"); 200 - if (!res) 201 - return -EINVAL; 200 + if (!res) { 201 + ret = -EINVAL; 202 + goto free_drm; 203 + } 202 204 /* Simply ioremap since it may be a shared register zone */ 203 205 regs = devm_ioremap(dev, res->start, resource_size(res)); 204 206 if (!regs) { ··· 217 215 } 218 216 219 217 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dmc"); 220 - if (!res) 221 - return -EINVAL; 218 + if (!res) { 219 + ret = -EINVAL; 220 + goto free_drm; 221 + } 222 222 /* Simply ioremap since it may be a shared register zone */ 223 223 regs = devm_ioremap(dev, res->start, resource_size(res)); 224 224 if (!regs) {
+4 -4
drivers/i2c/algos/i2c-algo-bit.c
··· 647 647 if (bit_adap->getscl == NULL) 648 648 adap->quirks = &i2c_bit_quirk_no_clk_stretch; 649 649 650 - /* Bring bus to a known state. Looks like STOP if bus is not free yet */ 651 - setscl(bit_adap, 1); 652 - udelay(bit_adap->udelay); 653 - setsda(bit_adap, 1); 650 + /* 651 + * We tried forcing SCL/SDA to an initial state here. But that caused a 652 + * regression, sadly. Check Bugzilla #200045 for details. 653 + */ 654 654 655 655 ret = add_adapter(adap); 656 656 if (ret < 0)
+2 -2
drivers/i2c/busses/i2c-gpio.c
··· 279 279 * required for an I2C bus. 280 280 */ 281 281 if (pdata->scl_is_open_drain) 282 - gflags = GPIOD_OUT_LOW; 282 + gflags = GPIOD_OUT_HIGH; 283 283 else 284 - gflags = GPIOD_OUT_LOW_OPEN_DRAIN; 284 + gflags = GPIOD_OUT_HIGH_OPEN_DRAIN; 285 285 priv->scl = i2c_gpio_get_desc(dev, "scl", 1, gflags); 286 286 if (IS_ERR(priv->scl)) 287 287 return PTR_ERR(priv->scl);
+9 -5
drivers/i2c/i2c-core-smbus.c
··· 465 465 466 466 status = i2c_transfer(adapter, msg, num); 467 467 if (status < 0) 468 - return status; 469 - if (status != num) 470 - return -EIO; 468 + goto cleanup; 469 + if (status != num) { 470 + status = -EIO; 471 + goto cleanup; 472 + } 473 + status = 0; 471 474 472 475 /* Check PEC if last message is a read */ 473 476 if (i && (msg[num-1].flags & I2C_M_RD)) { 474 477 status = i2c_smbus_check_pec(partial_pec, &msg[num-1]); 475 478 if (status < 0) 476 - return status; 479 + goto cleanup; 477 480 } 478 481 479 482 if (read_write == I2C_SMBUS_READ) ··· 502 499 break; 503 500 } 504 501 502 + cleanup: 505 503 if (msg[0].flags & I2C_M_DMA_SAFE) 506 504 kfree(msg[0].buf); 507 505 if (msg[1].flags & I2C_M_DMA_SAFE) 508 506 kfree(msg[1].buf); 509 507 510 - return 0; 508 + return status; 511 509 } 512 510 513 511 /**
+1 -1
drivers/iio/accel/mma8452.c
··· 1053 1053 if (src < 0) 1054 1054 return IRQ_NONE; 1055 1055 1056 - if (!(src & data->chip_info->enabled_events)) 1056 + if (!(src & (data->chip_info->enabled_events | MMA8452_INT_DRDY))) 1057 1057 return IRQ_NONE; 1058 1058 1059 1059 if (src & MMA8452_INT_DRDY) {
+2
drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
··· 959 959 } 960 960 961 961 irq_type = irqd_get_trigger_type(desc); 962 + if (!irq_type) 963 + irq_type = IRQF_TRIGGER_RISING; 962 964 if (irq_type == IRQF_TRIGGER_RISING) 963 965 st->irq_mask = INV_MPU6050_ACTIVE_HIGH; 964 966 else if (irq_type == IRQF_TRIGGER_FALLING)
+2
drivers/iio/light/tsl2772.c
··· 582 582 "%s: failed to get lux\n", __func__); 583 583 return lux_val; 584 584 } 585 + if (lux_val == 0) 586 + return -ERANGE; 585 587 586 588 ret = (chip->settings.als_cal_target * chip->settings.als_gain_trim) / 587 589 lux_val;
+2 -3
drivers/iio/pressure/bmp280-core.c
··· 415 415 } 416 416 comp_humidity = bmp280_compensate_humidity(data, adc_humidity); 417 417 418 - *val = comp_humidity; 419 - *val2 = 1024; 418 + *val = comp_humidity * 1000 / 1024; 420 419 421 - return IIO_VAL_FRACTIONAL; 420 + return IIO_VAL_INT; 422 421 } 423 422 424 423 static int bmp280_read_raw(struct iio_dev *indio_dev,
+1 -1
drivers/infiniband/hw/mlx5/main.c
··· 6113 6113 dev->num_ports = max(MLX5_CAP_GEN(mdev, num_ports), 6114 6114 MLX5_CAP_GEN(mdev, num_vhca_ports)); 6115 6115 6116 - if (MLX5_VPORT_MANAGER(mdev) && 6116 + if (MLX5_ESWITCH_MANAGER(mdev) && 6117 6117 mlx5_ib_eswitch_mode(mdev->priv.eswitch) == SRIOV_OFFLOADS) { 6118 6118 dev->rep = mlx5_ib_vport_rep(mdev->priv.eswitch, 0); 6119 6119
+8 -4
drivers/input/input-mt.c
··· 131 131 * inactive, or if the tool type is changed, a new tracking id is 132 132 * assigned to the slot. The tool type is only reported if the 133 133 * corresponding absbit field is set. 134 + * 135 + * Returns true if contact is active. 134 136 */ 135 - void input_mt_report_slot_state(struct input_dev *dev, 137 + bool input_mt_report_slot_state(struct input_dev *dev, 136 138 unsigned int tool_type, bool active) 137 139 { 138 140 struct input_mt *mt = dev->mt; ··· 142 140 int id; 143 141 144 142 if (!mt) 145 - return; 143 + return false; 146 144 147 145 slot = &mt->slots[mt->slot]; 148 146 slot->frame = mt->frame; 149 147 150 148 if (!active) { 151 149 input_event(dev, EV_ABS, ABS_MT_TRACKING_ID, -1); 152 - return; 150 + return false; 153 151 } 154 152 155 153 id = input_mt_get_value(slot, ABS_MT_TRACKING_ID); 156 - if (id < 0 || input_mt_get_value(slot, ABS_MT_TOOL_TYPE) != tool_type) 154 + if (id < 0) 157 155 id = input_mt_new_trkid(mt); 158 156 159 157 input_event(dev, EV_ABS, ABS_MT_TRACKING_ID, id); 160 158 input_event(dev, EV_ABS, ABS_MT_TOOL_TYPE, tool_type); 159 + 160 + return true; 161 161 } 162 162 EXPORT_SYMBOL(input_mt_report_slot_state); 163 163
+1 -1
drivers/input/joystick/xpad.c
··· 125 125 u8 mapping; 126 126 u8 xtype; 127 127 } xpad_device[] = { 128 - { 0x0079, 0x18d4, "GPD Win 2 Controller", 0, XTYPE_XBOX360 }, 128 + { 0x0079, 0x18d4, "GPD Win 2 X-Box Controller", 0, XTYPE_XBOX360 }, 129 129 { 0x044f, 0x0f00, "Thrustmaster Wheel", 0, XTYPE_XBOX }, 130 130 { 0x044f, 0x0f03, "Thrustmaster Wheel", 0, XTYPE_XBOX }, 131 131 { 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX },
+5 -4
drivers/input/keyboard/goldfish_events.c
··· 45 45 static irqreturn_t events_interrupt(int irq, void *dev_id) 46 46 { 47 47 struct event_dev *edev = dev_id; 48 - unsigned type, code, value; 48 + unsigned int type, code, value; 49 49 50 50 type = __raw_readl(edev->addr + REG_READ); 51 51 code = __raw_readl(edev->addr + REG_READ); ··· 57 57 } 58 58 59 59 static void events_import_bits(struct event_dev *edev, 60 - unsigned long bits[], unsigned type, size_t count) 60 + unsigned long bits[], unsigned int type, size_t count) 61 61 { 62 62 void __iomem *addr = edev->addr; 63 63 int i, j; ··· 99 99 100 100 for (j = 0; j < ARRAY_SIZE(val); j++) { 101 101 int offset = (i * ARRAY_SIZE(val) + j) * sizeof(u32); 102 + 102 103 val[j] = __raw_readl(edev->addr + REG_DATA + offset); 103 104 } 104 105 ··· 113 112 struct input_dev *input_dev; 114 113 struct event_dev *edev; 115 114 struct resource *res; 116 - unsigned keymapnamelen; 115 + unsigned int keymapnamelen; 117 116 void __iomem *addr; 118 117 int irq; 119 118 int i; ··· 151 150 for (i = 0; i < keymapnamelen; i++) 152 151 edev->name[i] = __raw_readb(edev->addr + REG_DATA + i); 153 152 154 - pr_debug("events_probe() keymap=%s\n", edev->name); 153 + pr_debug("%s: keymap=%s\n", __func__, edev->name); 155 154 156 155 input_dev->name = edev->name; 157 156 input_dev->id.bustype = BUS_HOST;
+10
drivers/input/misc/Kconfig
··· 841 841 To compile this driver as a module, choose M here: the 842 842 module will be called rave-sp-pwrbutton. 843 843 844 + config INPUT_SC27XX_VIBRA 845 + tristate "Spreadtrum sc27xx vibrator support" 846 + depends on MFD_SC27XX_PMIC || COMPILE_TEST 847 + select INPUT_FF_MEMLESS 848 + help 849 + This option enables support for Spreadtrum sc27xx vibrator driver. 850 + 851 + To compile this driver as a module, choose M here. The module will 852 + be called sc27xx_vibra. 853 + 844 854 endif
+1
drivers/input/misc/Makefile
··· 66 66 obj-$(CONFIG_INPUT_AXP20X_PEK) += axp20x-pek.o 67 67 obj-$(CONFIG_INPUT_GPIO_ROTARY_ENCODER) += rotary_encoder.o 68 68 obj-$(CONFIG_INPUT_RK805_PWRKEY) += rk805-pwrkey.o 69 + obj-$(CONFIG_INPUT_SC27XX_VIBRA) += sc27xx-vibra.o 69 70 obj-$(CONFIG_INPUT_SGI_BTNS) += sgi_btns.o 70 71 obj-$(CONFIG_INPUT_SIRFSOC_ONKEY) += sirfsoc-onkey.o 71 72 obj-$(CONFIG_INPUT_SOC_BUTTON_ARRAY) += soc_button_array.o
+154
drivers/input/misc/sc27xx-vibra.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2018 Spreadtrum Communications Inc. 4 + */ 5 + 6 + #include <linux/module.h> 7 + #include <linux/of_address.h> 8 + #include <linux/platform_device.h> 9 + #include <linux/regmap.h> 10 + #include <linux/input.h> 11 + #include <linux/workqueue.h> 12 + 13 + #define CUR_DRV_CAL_SEL GENMASK(13, 12) 14 + #define SLP_LDOVIBR_PD_EN BIT(9) 15 + #define LDO_VIBR_PD BIT(8) 16 + 17 + struct vibra_info { 18 + struct input_dev *input_dev; 19 + struct work_struct play_work; 20 + struct regmap *regmap; 21 + u32 base; 22 + u32 strength; 23 + bool enabled; 24 + }; 25 + 26 + static void sc27xx_vibra_set(struct vibra_info *info, bool on) 27 + { 28 + if (on) { 29 + regmap_update_bits(info->regmap, info->base, LDO_VIBR_PD, 0); 30 + regmap_update_bits(info->regmap, info->base, 31 + SLP_LDOVIBR_PD_EN, 0); 32 + info->enabled = true; 33 + } else { 34 + regmap_update_bits(info->regmap, info->base, LDO_VIBR_PD, 35 + LDO_VIBR_PD); 36 + regmap_update_bits(info->regmap, info->base, 37 + SLP_LDOVIBR_PD_EN, SLP_LDOVIBR_PD_EN); 38 + info->enabled = false; 39 + } 40 + } 41 + 42 + static int sc27xx_vibra_hw_init(struct vibra_info *info) 43 + { 44 + return regmap_update_bits(info->regmap, info->base, CUR_DRV_CAL_SEL, 0); 45 + } 46 + 47 + static void sc27xx_vibra_play_work(struct work_struct *work) 48 + { 49 + struct vibra_info *info = container_of(work, struct vibra_info, 50 + play_work); 51 + 52 + if (info->strength && !info->enabled) 53 + sc27xx_vibra_set(info, true); 54 + else if (info->strength == 0 && info->enabled) 55 + sc27xx_vibra_set(info, false); 56 + } 57 + 58 + static int sc27xx_vibra_play(struct input_dev *input, void *data, 59 + struct ff_effect *effect) 60 + { 61 + struct vibra_info *info = input_get_drvdata(input); 62 + 63 + info->strength = effect->u.rumble.weak_magnitude; 64 + schedule_work(&info->play_work); 65 + 66 + return 0; 67 + } 68 + 69 + static void sc27xx_vibra_close(struct input_dev *input) 70 + { 71 + struct vibra_info *info = input_get_drvdata(input); 72 + 73 + cancel_work_sync(&info->play_work); 74 + if (info->enabled) 75 + sc27xx_vibra_set(info, false); 76 + } 77 + 78 + static int sc27xx_vibra_probe(struct platform_device *pdev) 79 + { 80 + struct vibra_info *info; 81 + int error; 82 + 83 + info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL); 84 + if (!info) 85 + return -ENOMEM; 86 + 87 + info->regmap = dev_get_regmap(pdev->dev.parent, NULL); 88 + if (!info->regmap) { 89 + dev_err(&pdev->dev, "failed to get vibrator regmap.\n"); 90 + return -ENODEV; 91 + } 92 + 93 + error = device_property_read_u32(&pdev->dev, "reg", &info->base); 94 + if (error) { 95 + dev_err(&pdev->dev, "failed to get vibrator base address.\n"); 96 + return error; 97 + } 98 + 99 + info->input_dev = devm_input_allocate_device(&pdev->dev); 100 + if (!info->input_dev) { 101 + dev_err(&pdev->dev, "failed to allocate input device.\n"); 102 + return -ENOMEM; 103 + } 104 + 105 + info->input_dev->name = "sc27xx:vibrator"; 106 + info->input_dev->id.version = 0; 107 + info->input_dev->close = sc27xx_vibra_close; 108 + 109 + input_set_drvdata(info->input_dev, info); 110 + input_set_capability(info->input_dev, EV_FF, FF_RUMBLE); 111 + INIT_WORK(&info->play_work, sc27xx_vibra_play_work); 112 + info->enabled = false; 113 + 114 + error = sc27xx_vibra_hw_init(info); 115 + if (error) { 116 + dev_err(&pdev->dev, "failed to initialize the vibrator.\n"); 117 + return error; 118 + } 119 + 120 + error = input_ff_create_memless(info->input_dev, NULL, 121 + sc27xx_vibra_play); 122 + if (error) { 123 + dev_err(&pdev->dev, "failed to register vibrator to FF.\n"); 124 + return error; 125 + } 126 + 127 + error = input_register_device(info->input_dev); 128 + if (error) { 129 + dev_err(&pdev->dev, "failed to register input device.\n"); 130 + return error; 131 + } 132 + 133 + return 0; 134 + } 135 + 136 + static const struct of_device_id sc27xx_vibra_of_match[] = { 137 + { .compatible = "sprd,sc2731-vibrator", }, 
138 + {} 139 + }; 140 + MODULE_DEVICE_TABLE(of, sc27xx_vibra_of_match); 141 + 142 + static struct platform_driver sc27xx_vibra_driver = { 143 + .driver = { 144 + .name = "sc27xx-vibrator", 145 + .of_match_table = sc27xx_vibra_of_match, 146 + }, 147 + .probe = sc27xx_vibra_probe, 148 + }; 149 + 150 + module_platform_driver(sc27xx_vibra_driver); 151 + 152 + MODULE_DESCRIPTION("Spreadtrum SC27xx Vibrator Driver"); 153 + MODULE_LICENSE("GPL v2"); 154 + MODULE_AUTHOR("Xiaotong Lu <xiaotong.lu@spreadtrum.com>");
+2
drivers/input/mouse/elan_i2c.h
··· 27 27 #define ETP_DISABLE_POWER 0x0001 28 28 #define ETP_PRESSURE_OFFSET 25 29 29 30 + #define ETP_CALIBRATE_MAX_LEN 3 31 + 30 32 /* IAP Firmware handling */ 31 33 #define ETP_PRODUCT_ID_FORMAT_STRING "%d.0" 32 34 #define ETP_FW_NAME "elan_i2c_" ETP_PRODUCT_ID_FORMAT_STRING ".bin"
+2 -1
drivers/input/mouse/elan_i2c_core.c
··· 613 613 int tries = 20; 614 614 int retval; 615 615 int error; 616 - u8 val[3]; 616 + u8 val[ETP_CALIBRATE_MAX_LEN]; 617 617 618 618 retval = mutex_lock_interruptible(&data->sysfs_mutex); 619 619 if (retval) ··· 1345 1345 { "ELAN060C", 0 }, 1346 1346 { "ELAN0611", 0 }, 1347 1347 { "ELAN0612", 0 }, 1348 + { "ELAN0618", 0 }, 1348 1349 { "ELAN1000", 0 }, 1349 1350 { } 1350 1351 };
+8 -2
drivers/input/mouse/elan_i2c_smbus.c
··· 56 56 static int elan_smbus_initialize(struct i2c_client *client) 57 57 { 58 58 u8 check[ETP_SMBUS_HELLOPACKET_LEN] = { 0x55, 0x55, 0x55, 0x55, 0x55 }; 59 - u8 values[ETP_SMBUS_HELLOPACKET_LEN] = { 0, 0, 0, 0, 0 }; 59 + u8 values[I2C_SMBUS_BLOCK_MAX] = {0}; 60 60 int len, error; 61 61 62 62 /* Get hello packet */ ··· 117 117 static int elan_smbus_calibrate_result(struct i2c_client *client, u8 *val) 118 118 { 119 119 int error; 120 + u8 buf[I2C_SMBUS_BLOCK_MAX] = {0}; 121 + 122 + BUILD_BUG_ON(ETP_CALIBRATE_MAX_LEN > sizeof(buf)); 120 123 121 124 error = i2c_smbus_read_block_data(client, 122 - ETP_SMBUS_CALIBRATE_QUERY, val); 125 + ETP_SMBUS_CALIBRATE_QUERY, buf); 123 126 if (error < 0) 124 127 return error; 125 128 129 + memcpy(val, buf, ETP_CALIBRATE_MAX_LEN); 126 130 return 0; 127 131 } 128 132 ··· 475 471 static int elan_smbus_get_report(struct i2c_client *client, u8 *report) 476 472 { 477 473 int len; 474 + 475 + BUILD_BUG_ON(I2C_SMBUS_BLOCK_MAX > ETP_SMBUS_REPORT_LEN); 478 476 479 477 len = i2c_smbus_read_block_data(client, 480 478 ETP_SMBUS_PACKET_QUERY,
+9 -2
drivers/input/mouse/elantech.c
··· 799 799 else if (ic_version == 7 && etd->info.samples[1] == 0x2A) 800 800 sanity_check = ((packet[3] & 0x1c) == 0x10); 801 801 else 802 - sanity_check = ((packet[0] & 0x0c) == 0x04 && 802 + sanity_check = ((packet[0] & 0x08) == 0x00 && 803 803 (packet[3] & 0x1c) == 0x10); 804 804 805 805 if (!sanity_check) ··· 1175 1175 { } 1176 1176 }; 1177 1177 1178 + static const char * const middle_button_pnp_ids[] = { 1179 + "LEN2131", /* ThinkPad P52 w/ NFC */ 1180 + "LEN2132", /* ThinkPad P52 */ 1181 + NULL 1182 + }; 1183 + 1178 1184 /* 1179 1185 * Set the appropriate event bits for the input subsystem 1180 1186 */ ··· 1200 1194 __clear_bit(EV_REL, dev->evbit); 1201 1195 1202 1196 __set_bit(BTN_LEFT, dev->keybit); 1203 - if (dmi_check_system(elantech_dmi_has_middle_button)) 1197 + if (dmi_check_system(elantech_dmi_has_middle_button) || 1198 + psmouse_matches_pnp_id(psmouse, middle_button_pnp_ids)) 1204 1199 __set_bit(BTN_MIDDLE, dev->keybit); 1205 1200 __set_bit(BTN_RIGHT, dev->keybit); 1206 1201
+6 -6
drivers/input/mouse/psmouse-base.c
··· 192 192 else 193 193 input_report_rel(dev, REL_WHEEL, -wheel); 194 194 195 - input_report_key(dev, BTN_SIDE, BIT(4)); 196 - input_report_key(dev, BTN_EXTRA, BIT(5)); 195 + input_report_key(dev, BTN_SIDE, packet[3] & BIT(4)); 196 + input_report_key(dev, BTN_EXTRA, packet[3] & BIT(5)); 197 197 break; 198 198 } 199 199 break; ··· 203 203 input_report_rel(dev, REL_WHEEL, -(s8) packet[3]); 204 204 205 205 /* Extra buttons on Genius NewNet 3D */ 206 - input_report_key(dev, BTN_SIDE, BIT(6)); 207 - input_report_key(dev, BTN_EXTRA, BIT(7)); 206 + input_report_key(dev, BTN_SIDE, packet[0] & BIT(6)); 207 + input_report_key(dev, BTN_EXTRA, packet[0] & BIT(7)); 208 208 break; 209 209 210 210 case PSMOUSE_THINKPS: 211 211 /* Extra button on ThinkingMouse */ 212 - input_report_key(dev, BTN_EXTRA, BIT(3)); 212 + input_report_key(dev, BTN_EXTRA, packet[0] & BIT(3)); 213 213 214 214 /* 215 215 * Without this bit of weirdness moving up gives wildly ··· 223 223 * Cortron PS2 Trackball reports SIDE button in the 224 224 * 4th bit of the first byte. 225 225 */ 226 - input_report_key(dev, BTN_SIDE, BIT(3)); 226 + input_report_key(dev, BTN_SIDE, packet[0] & BIT(3)); 227 227 packet[0] |= BIT(3); 228 228 break; 229 229
+1
drivers/input/rmi4/Kconfig
··· 3 3 # 4 4 config RMI4_CORE 5 5 tristate "Synaptics RMI4 bus support" 6 + select IRQ_DOMAIN 6 7 help 7 8 Say Y here if you want to support the Synaptics RMI4 bus. This is 8 9 required for all RMI4 device support.
+16 -18
drivers/input/rmi4/rmi_2d_sensor.c
··· 32 32 if (obj->type == RMI_2D_OBJECT_NONE) 33 33 return; 34 34 35 - if (axis_align->swap_axes) 36 - swap(obj->x, obj->y); 37 - 38 35 if (axis_align->flip_x) 39 36 obj->x = sensor->max_x - obj->x; 40 37 41 38 if (axis_align->flip_y) 42 39 obj->y = sensor->max_y - obj->y; 40 + 41 + if (axis_align->swap_axes) 42 + swap(obj->x, obj->y); 43 43 44 44 /* 45 45 * Here checking if X offset or y offset are specified is ··· 120 120 x = min(RMI_2D_REL_POS_MAX, max(RMI_2D_REL_POS_MIN, (int)x)); 121 121 y = min(RMI_2D_REL_POS_MAX, max(RMI_2D_REL_POS_MIN, (int)y)); 122 122 123 - if (axis_align->swap_axes) 124 - swap(x, y); 125 - 126 123 if (axis_align->flip_x) 127 124 x = min(RMI_2D_REL_POS_MAX, -x); 128 125 129 126 if (axis_align->flip_y) 130 127 y = min(RMI_2D_REL_POS_MAX, -y); 128 + 129 + if (axis_align->swap_axes) 130 + swap(x, y); 131 131 132 132 if (x || y) { 133 133 input_report_rel(sensor->input, REL_X, x); ··· 141 141 struct input_dev *input = sensor->input; 142 142 int res_x; 143 143 int res_y; 144 + int max_x, max_y; 144 145 int input_flags = 0; 145 146 146 147 if (sensor->report_abs) { 147 - if (sensor->axis_align.swap_axes) { 148 - swap(sensor->max_x, sensor->max_y); 149 - swap(sensor->axis_align.clip_x_low, 150 - sensor->axis_align.clip_y_low); 151 - swap(sensor->axis_align.clip_x_high, 152 - sensor->axis_align.clip_y_high); 153 - } 154 - 155 148 sensor->min_x = sensor->axis_align.clip_x_low; 156 149 if (sensor->axis_align.clip_x_high) 157 150 sensor->max_x = min(sensor->max_x, ··· 156 163 sensor->axis_align.clip_y_high); 157 164 158 165 set_bit(EV_ABS, input->evbit); 159 - input_set_abs_params(input, ABS_MT_POSITION_X, 0, sensor->max_x, 160 - 0, 0); 161 - input_set_abs_params(input, ABS_MT_POSITION_Y, 0, sensor->max_y, 162 - 0, 0); 166 + 167 + max_x = sensor->max_x; 168 + max_y = sensor->max_y; 169 + if (sensor->axis_align.swap_axes) 170 + swap(max_x, max_y); 171 + input_set_abs_params(input, ABS_MT_POSITION_X, 0, max_x, 0, 0); 172 + input_set_abs_params(input, 
ABS_MT_POSITION_Y, 0, max_y, 0, 0); 163 173 164 174 if (sensor->x_mm && sensor->y_mm) { 165 175 res_x = (sensor->max_x - sensor->min_x) / sensor->x_mm; 166 176 res_y = (sensor->max_y - sensor->min_y) / sensor->y_mm; 177 + if (sensor->axis_align.swap_axes) 178 + swap(res_x, res_y); 167 179 168 180 input_abs_set_res(input, ABS_X, res_x); 169 181 input_abs_set_res(input, ABS_Y, res_y);
+49 -1
drivers/input/rmi4/rmi_bus.c
··· 9 9 10 10 #include <linux/kernel.h> 11 11 #include <linux/device.h> 12 + #include <linux/irq.h> 13 + #include <linux/irqdomain.h> 12 14 #include <linux/list.h> 13 15 #include <linux/pm.h> 14 16 #include <linux/rmi.h> ··· 169 167 {} 170 168 #endif 171 169 170 + static struct irq_chip rmi_irq_chip = { 171 + .name = "rmi4", 172 + }; 173 + 174 + static int rmi_create_function_irq(struct rmi_function *fn, 175 + struct rmi_function_handler *handler) 176 + { 177 + struct rmi_driver_data *drvdata = dev_get_drvdata(&fn->rmi_dev->dev); 178 + int i, error; 179 + 180 + for (i = 0; i < fn->num_of_irqs; i++) { 181 + set_bit(fn->irq_pos + i, fn->irq_mask); 182 + 183 + fn->irq[i] = irq_create_mapping(drvdata->irqdomain, 184 + fn->irq_pos + i); 185 + 186 + irq_set_chip_data(fn->irq[i], fn); 187 + irq_set_chip_and_handler(fn->irq[i], &rmi_irq_chip, 188 + handle_simple_irq); 189 + irq_set_nested_thread(fn->irq[i], 1); 190 + 191 + error = devm_request_threaded_irq(&fn->dev, fn->irq[i], NULL, 192 + handler->attention, IRQF_ONESHOT, 193 + dev_name(&fn->dev), fn); 194 + if (error) { 195 + dev_err(&fn->dev, "Error %d registering IRQ\n", error); 196 + return error; 197 + } 198 + } 199 + 200 + return 0; 201 + } 202 + 172 203 static int rmi_function_probe(struct device *dev) 173 204 { 174 205 struct rmi_function *fn = to_rmi_function(dev); ··· 213 178 214 179 if (handler->probe) { 215 180 error = handler->probe(fn); 216 - return error; 181 + if (error) 182 + return error; 183 + } 184 + 185 + if (fn->num_of_irqs && handler->attention) { 186 + error = rmi_create_function_irq(fn, handler); 187 + if (error) 188 + return error; 217 189 } 218 190 219 191 return 0; ··· 272 230 273 231 void rmi_unregister_function(struct rmi_function *fn) 274 232 { 233 + int i; 234 + 275 235 rmi_dbg(RMI_DEBUG_CORE, &fn->dev, "Unregistering F%02X.\n", 276 236 fn->fd.function_number); 277 237 278 238 device_del(&fn->dev); 279 239 of_node_put(fn->dev.of_node); 280 240 put_device(&fn->dev); 241 + 242 + for (i = 0; i 
< fn->num_of_irqs; i++) 243 + irq_dispose_mapping(fn->irq[i]); 244 + 281 245 } 282 246 283 247 /**
+9 -1
drivers/input/rmi4/rmi_bus.h
··· 14 14 15 15 struct rmi_device; 16 16 17 + /* 18 + * The interrupt source count in the function descriptor can represent up to 19 + * 6 interrupt sources in the normal manner. 20 + */ 21 + #define RMI_FN_MAX_IRQS 6 22 + 17 23 /** 18 24 * struct rmi_function - represents the implementation of an RMI4 19 25 * function for a particular device (basically, a driver for that RMI4 function) ··· 32 26 * @irq_pos: The position in the irq bitfield this function holds 33 27 * @irq_mask: For convenience, can be used to mask IRQ bits off during ATTN 34 28 * interrupt handling. 29 + * @irqs: assigned virq numbers (up to num_of_irqs) 35 30 * 36 31 * @node: entry in device's list of functions 37 32 */ ··· 43 36 struct list_head node; 44 37 45 38 unsigned int num_of_irqs; 39 + int irq[RMI_FN_MAX_IRQS]; 46 40 unsigned int irq_pos; 47 41 unsigned long irq_mask[]; 48 42 }; ··· 84 76 void (*remove)(struct rmi_function *fn); 85 77 int (*config)(struct rmi_function *fn); 86 78 int (*reset)(struct rmi_function *fn); 87 - int (*attention)(struct rmi_function *fn, unsigned long *irq_bits); 79 + irqreturn_t (*attention)(int irq, void *ctx); 88 80 int (*suspend)(struct rmi_function *fn); 89 81 int (*resume)(struct rmi_function *fn); 90 82 };
+20 -32
drivers/input/rmi4/rmi_driver.c
··· 21 21 #include <linux/pm.h> 22 22 #include <linux/slab.h> 23 23 #include <linux/of.h> 24 + #include <linux/irqdomain.h> 24 25 #include <uapi/linux/input.h> 25 26 #include <linux/rmi.h> 26 27 #include "rmi_bus.h" ··· 128 127 return 0; 129 128 } 130 129 131 - static void process_one_interrupt(struct rmi_driver_data *data, 132 - struct rmi_function *fn) 133 - { 134 - struct rmi_function_handler *fh; 135 - 136 - if (!fn || !fn->dev.driver) 137 - return; 138 - 139 - fh = to_rmi_function_handler(fn->dev.driver); 140 - if (fh->attention) { 141 - bitmap_and(data->fn_irq_bits, data->irq_status, fn->irq_mask, 142 - data->irq_count); 143 - if (!bitmap_empty(data->fn_irq_bits, data->irq_count)) 144 - fh->attention(fn, data->fn_irq_bits); 145 - } 146 - } 147 - 148 130 static int rmi_process_interrupt_requests(struct rmi_device *rmi_dev) 149 131 { 150 132 struct rmi_driver_data *data = dev_get_drvdata(&rmi_dev->dev); 151 133 struct device *dev = &rmi_dev->dev; 152 - struct rmi_function *entry; 134 + int i; 153 135 int error; 154 136 155 137 if (!data) ··· 157 173 */ 158 174 mutex_unlock(&data->irq_mutex); 159 175 160 - /* 161 - * It would be nice to be able to use irq_chip to handle these 162 - * nested IRQs. Unfortunately, most of the current customers for 163 - * this driver are using older kernels (3.0.x) that don't support 164 - * the features required for that. Once they've shifted to more 165 - * recent kernels (say, 3.3 and higher), this should be switched to 166 - * use irq_chip. 
167 - */ 168 - list_for_each_entry(entry, &data->function_list, node) 169 - process_one_interrupt(data, entry); 176 + for_each_set_bit(i, data->irq_status, data->irq_count) 177 + handle_nested_irq(irq_find_mapping(data->irqdomain, i)); 170 178 171 179 if (data->input) 172 180 input_sync(data->input); ··· 977 1001 static int rmi_driver_remove(struct device *dev) 978 1002 { 979 1003 struct rmi_device *rmi_dev = to_rmi_device(dev); 1004 + struct rmi_driver_data *data = dev_get_drvdata(&rmi_dev->dev); 980 1005 981 1006 rmi_disable_irq(rmi_dev, false); 1007 + 1008 + irq_domain_remove(data->irqdomain); 1009 + data->irqdomain = NULL; 982 1010 983 1011 rmi_f34_remove_sysfs(rmi_dev); 984 1012 rmi_free_function_list(rmi_dev); ··· 1015 1035 { 1016 1036 struct rmi_device *rmi_dev = data->rmi_dev; 1017 1037 struct device *dev = &rmi_dev->dev; 1018 - int irq_count; 1038 + struct fwnode_handle *fwnode = rmi_dev->xport->dev->fwnode; 1039 + int irq_count = 0; 1019 1040 size_t size; 1020 1041 int retval; 1021 1042 ··· 1027 1046 * being accessed. 
1028 1047 */ 1029 1048 rmi_dbg(RMI_DEBUG_CORE, dev, "%s: Counting IRQs.\n", __func__); 1030 - irq_count = 0; 1031 1049 data->bootloader_mode = false; 1032 1050 1033 1051 retval = rmi_scan_pdt(rmi_dev, &irq_count, rmi_count_irqs); ··· 1037 1057 1038 1058 if (data->bootloader_mode) 1039 1059 dev_warn(dev, "Device in bootloader mode.\n"); 1060 + 1061 + /* Allocate and register a linear revmap irq_domain */ 1062 + data->irqdomain = irq_domain_create_linear(fwnode, irq_count, 1063 + &irq_domain_simple_ops, 1064 + data); 1065 + if (!data->irqdomain) { 1066 + dev_err(&rmi_dev->dev, "Failed to create IRQ domain\n"); 1067 + return -ENOMEM; 1068 + } 1040 1069 1041 1070 data->irq_count = irq_count; 1042 1071 data->num_of_irq_regs = (data->irq_count + 7) / 8; ··· 1069 1080 { 1070 1081 struct rmi_device *rmi_dev = data->rmi_dev; 1071 1082 struct device *dev = &rmi_dev->dev; 1072 - int irq_count; 1083 + int irq_count = 0; 1073 1084 int retval; 1074 1085 1075 - irq_count = 0; 1076 1086 rmi_dbg(RMI_DEBUG_CORE, dev, "%s: Creating functions.\n", __func__); 1077 1087 retval = rmi_scan_pdt(rmi_dev, &irq_count, rmi_create_function); 1078 1088 if (retval < 0) {
+5 -5
drivers/input/rmi4/rmi_f01.c
··· 681 681 return 0; 682 682 } 683 683 684 - static int rmi_f01_attention(struct rmi_function *fn, 685 - unsigned long *irq_bits) 684 + static irqreturn_t rmi_f01_attention(int irq, void *ctx) 686 685 { 686 + struct rmi_function *fn = ctx; 687 687 struct rmi_device *rmi_dev = fn->rmi_dev; 688 688 int error; 689 689 u8 device_status; ··· 692 692 if (error) { 693 693 dev_err(&fn->dev, 694 694 "Failed to read device status: %d.\n", error); 695 - return error; 695 + return IRQ_RETVAL(error); 696 696 } 697 697 698 698 if (RMI_F01_STATUS_BOOTLOADER(device_status)) ··· 704 704 error = rmi_dev->driver->reset_handler(rmi_dev); 705 705 if (error) { 706 706 dev_err(&fn->dev, "Device reset failed: %d\n", error); 707 - return error; 707 + return IRQ_RETVAL(error); 708 708 } 709 709 } 710 710 711 - return 0; 711 + return IRQ_HANDLED; 712 712 } 713 713 714 714 struct rmi_function_handler rmi_f01_handler = {
+5 -4
drivers/input/rmi4/rmi_f03.c
··· 244 244 return 0; 245 245 } 246 246 247 - static int rmi_f03_attention(struct rmi_function *fn, unsigned long *irq_bits) 247 + static irqreturn_t rmi_f03_attention(int irq, void *ctx) 248 248 { 249 + struct rmi_function *fn = ctx; 249 250 struct rmi_device *rmi_dev = fn->rmi_dev; 250 251 struct rmi_driver_data *drvdata = dev_get_drvdata(&rmi_dev->dev); 251 252 struct f03_data *f03 = dev_get_drvdata(&fn->dev); ··· 263 262 /* First grab the data passed by the transport device */ 264 263 if (drvdata->attn_data.size < ob_len) { 265 264 dev_warn(&fn->dev, "F03 interrupted, but data is missing!\n"); 266 - return 0; 265 + return IRQ_HANDLED; 267 266 } 268 267 269 268 memcpy(obs, drvdata->attn_data.data, ob_len); ··· 278 277 "%s: Failed to read F03 output buffers: %d\n", 279 278 __func__, error); 280 279 serio_interrupt(f03->serio, 0, SERIO_TIMEOUT); 281 - return error; 280 + return IRQ_RETVAL(error); 282 281 } 283 282 } 284 283 ··· 304 303 serio_interrupt(f03->serio, ob_data, serio_flags); 305 304 } 306 305 307 - return 0; 306 + return IRQ_HANDLED; 308 307 } 309 308 310 309 static void rmi_f03_remove(struct rmi_function *fn)
+16 -26
drivers/input/rmi4/rmi_f11.c
··· 570 570 } 571 571 572 572 static void rmi_f11_finger_handler(struct f11_data *f11, 573 - struct rmi_2d_sensor *sensor, 574 - unsigned long *irq_bits, int num_irq_regs, 575 - int size) 573 + struct rmi_2d_sensor *sensor, int size) 576 574 { 577 575 const u8 *f_state = f11->data.f_state; 578 576 u8 finger_state; ··· 579 581 int rel_fingers; 580 582 int abs_size = sensor->nbr_fingers * RMI_F11_ABS_BYTES; 581 583 582 - int abs_bits = bitmap_and(f11->result_bits, irq_bits, f11->abs_mask, 583 - num_irq_regs * 8); 584 - int rel_bits = bitmap_and(f11->result_bits, irq_bits, f11->rel_mask, 585 - num_irq_regs * 8); 586 - 587 - if (abs_bits) { 584 + if (sensor->report_abs) { 588 585 if (abs_size > size) 589 586 abs_fingers = size / RMI_F11_ABS_BYTES; 590 587 else ··· 597 604 rmi_f11_abs_pos_process(f11, sensor, &sensor->objs[i], 598 605 finger_state, i); 599 606 } 600 - } 601 607 602 - if (rel_bits) { 603 - if ((abs_size + sensor->nbr_fingers * RMI_F11_REL_BYTES) > size) 604 - rel_fingers = (size - abs_size) / RMI_F11_REL_BYTES; 605 - else 606 - rel_fingers = sensor->nbr_fingers; 607 - 608 - for (i = 0; i < rel_fingers; i++) 609 - rmi_f11_rel_pos_report(f11, i); 610 - } 611 - 612 - if (abs_bits) { 613 608 /* 614 609 * the absolute part is made in 2 parts to allow the kernel 615 610 * tracking to take place. 
··· 619 638 } 620 639 621 640 input_mt_sync_frame(sensor->input); 641 + } else if (sensor->report_rel) { 642 + if ((abs_size + sensor->nbr_fingers * RMI_F11_REL_BYTES) > size) 643 + rel_fingers = (size - abs_size) / RMI_F11_REL_BYTES; 644 + else 645 + rel_fingers = sensor->nbr_fingers; 646 + 647 + for (i = 0; i < rel_fingers; i++) 648 + rmi_f11_rel_pos_report(f11, i); 622 649 } 650 + 623 651 } 624 652 625 653 static int f11_2d_construct_data(struct f11_data *f11) ··· 1266 1276 return 0; 1267 1277 } 1268 1278 1269 - static int rmi_f11_attention(struct rmi_function *fn, unsigned long *irq_bits) 1279 + static irqreturn_t rmi_f11_attention(int irq, void *ctx) 1270 1280 { 1281 + struct rmi_function *fn = ctx; 1271 1282 struct rmi_device *rmi_dev = fn->rmi_dev; 1272 1283 struct rmi_driver_data *drvdata = dev_get_drvdata(&rmi_dev->dev); 1273 1284 struct f11_data *f11 = dev_get_drvdata(&fn->dev); ··· 1294 1303 data_base_addr, f11->sensor.data_pkt, 1295 1304 f11->sensor.pkt_size); 1296 1305 if (error < 0) 1297 - return error; 1306 + return IRQ_RETVAL(error); 1298 1307 } 1299 1308 1300 - rmi_f11_finger_handler(f11, &f11->sensor, irq_bits, 1301 - drvdata->num_of_irq_regs, valid_bytes); 1309 + rmi_f11_finger_handler(f11, &f11->sensor, valid_bytes); 1302 1310 1303 - return 0; 1311 + return IRQ_HANDLED; 1304 1312 } 1305 1313 1306 1314 static int rmi_f11_resume(struct rmi_function *fn)
+4 -4
drivers/input/rmi4/rmi_f12.c
··· 197 197 rmi_2d_sensor_abs_report(sensor, &sensor->objs[i], i); 198 198 } 199 199 200 - static int rmi_f12_attention(struct rmi_function *fn, 201 - unsigned long *irq_nr_regs) 200 + static irqreturn_t rmi_f12_attention(int irq, void *ctx) 202 201 { 203 202 int retval; 203 + struct rmi_function *fn = ctx; 204 204 struct rmi_device *rmi_dev = fn->rmi_dev; 205 205 struct rmi_driver_data *drvdata = dev_get_drvdata(&rmi_dev->dev); 206 206 struct f12_data *f12 = dev_get_drvdata(&fn->dev); ··· 222 222 if (retval < 0) { 223 223 dev_err(&fn->dev, "Failed to read object data. Code: %d.\n", 224 224 retval); 225 - return retval; 225 + return IRQ_RETVAL(retval); 226 226 } 227 227 } 228 228 ··· 232 232 233 233 input_mt_sync_frame(sensor->input); 234 234 235 - return 0; 235 + return IRQ_HANDLED; 236 236 } 237 237 238 238 static int rmi_f12_write_control_regs(struct rmi_function *fn)
+5 -4
drivers/input/rmi4/rmi_f30.c
··· 122 122 } 123 123 } 124 124 125 - static int rmi_f30_attention(struct rmi_function *fn, unsigned long *irq_bits) 125 + static irqreturn_t rmi_f30_attention(int irq, void *ctx) 126 126 { 127 + struct rmi_function *fn = ctx; 127 128 struct f30_data *f30 = dev_get_drvdata(&fn->dev); 128 129 struct rmi_driver_data *drvdata = dev_get_drvdata(&fn->rmi_dev->dev); 129 130 int error; ··· 135 134 if (drvdata->attn_data.size < f30->register_count) { 136 135 dev_warn(&fn->dev, 137 136 "F30 interrupted, but data is missing\n"); 138 - return 0; 137 + return IRQ_HANDLED; 139 138 } 140 139 memcpy(f30->data_regs, drvdata->attn_data.data, 141 140 f30->register_count); ··· 148 147 dev_err(&fn->dev, 149 148 "%s: Failed to read F30 data registers: %d\n", 150 149 __func__, error); 151 - return error; 150 + return IRQ_RETVAL(error); 152 151 } 153 152 } 154 153 ··· 160 159 rmi_f03_commit_buttons(f30->f03); 161 160 } 162 161 163 - return 0; 162 + return IRQ_HANDLED; 164 163 } 165 164 166 165 static int rmi_f30_config(struct rmi_function *fn)
+3 -2
drivers/input/rmi4/rmi_f34.c
··· 100 100 return 0; 101 101 } 102 102 103 - static int rmi_f34_attention(struct rmi_function *fn, unsigned long *irq_bits) 103 + static irqreturn_t rmi_f34_attention(int irq, void *ctx) 104 104 { 105 + struct rmi_function *fn = ctx; 105 106 struct f34_data *f34 = dev_get_drvdata(&fn->dev); 106 107 int ret; 107 108 u8 status; ··· 127 126 complete(&f34->v7.cmd_done); 128 127 } 129 128 130 - return 0; 129 + return IRQ_HANDLED; 131 130 } 132 131 133 132 static int rmi_f34_write_blocks(struct f34_data *f34, const void *data,
-6
drivers/input/rmi4/rmi_f54.c
··· 610 610 mutex_unlock(&f54->data_mutex); 611 611 } 612 612 613 - static int rmi_f54_attention(struct rmi_function *fn, unsigned long *irqbits) 614 - { 615 - return 0; 616 - } 617 - 618 613 static int rmi_f54_config(struct rmi_function *fn) 619 614 { 620 615 struct rmi_driver *drv = fn->rmi_dev->driver; ··· 751 756 .func = 0x54, 752 757 .probe = rmi_f54_probe, 753 758 .config = rmi_f54_config, 754 - .attention = rmi_f54_attention, 755 759 .remove = rmi_f54_remove, 756 760 };
+1
drivers/input/touchscreen/silead.c
··· 603 603 { "GSL3692", 0 }, 604 604 { "MSSL1680", 0 }, 605 605 { "MSSL0001", 0 }, 606 + { "MSSL0002", 0 }, 606 607 { } 607 608 }; 608 609 MODULE_DEVICE_TABLE(acpi, silead_ts_acpi_match);
+1 -1
drivers/isdn/mISDN/socket.c
··· 588 588 .getname = data_sock_getname, 589 589 .sendmsg = mISDN_sock_sendmsg, 590 590 .recvmsg = mISDN_sock_recvmsg, 591 - .poll_mask = datagram_poll_mask, 591 + .poll = datagram_poll, 592 592 .listen = sock_no_listen, 593 593 .shutdown = sock_no_shutdown, 594 594 .setsockopt = data_sock_setsockopt,
+1 -1
drivers/md/dm-raid.c
··· 588 588 } 589 589 590 590 /* Return md raid10 algorithm for @name */ 591 - static const int raid10_name_to_format(const char *name) 591 + static int raid10_name_to_format(const char *name) 592 592 { 593 593 if (!strcasecmp(name, "near")) 594 594 return ALGORITHM_RAID10_NEAR;
+4 -3
drivers/md/dm-table.c
··· 885 885 static int device_supports_dax(struct dm_target *ti, struct dm_dev *dev, 886 886 sector_t start, sector_t len, void *data) 887 887 { 888 - struct request_queue *q = bdev_get_queue(dev->bdev); 889 - 890 - return q && blk_queue_dax(q); 888 + return bdev_dax_supported(dev->bdev, PAGE_SIZE); 891 889 } 892 890 893 891 static bool dm_table_supports_dax(struct dm_table *t) ··· 1905 1907 1906 1908 if (dm_table_supports_dax(t)) 1907 1909 blk_queue_flag_set(QUEUE_FLAG_DAX, q); 1910 + else 1911 + blk_queue_flag_clear(QUEUE_FLAG_DAX, q); 1912 + 1908 1913 if (dm_table_supports_dax_write_cache(t)) 1909 1914 dax_write_cache(t->md->dax_dev, true); 1910 1915
-9
drivers/md/dm-thin-metadata.c
··· 776 776 static int __commit_transaction(struct dm_pool_metadata *pmd) 777 777 { 778 778 int r; 779 - size_t metadata_len, data_len; 780 779 struct thin_disk_superblock *disk_super; 781 780 struct dm_block *sblock; 782 781 ··· 793 794 return r; 794 795 795 796 r = dm_tm_pre_commit(pmd->tm); 796 - if (r < 0) 797 - return r; 798 - 799 - r = dm_sm_root_size(pmd->metadata_sm, &metadata_len); 800 - if (r < 0) 801 - return r; 802 - 803 - r = dm_sm_root_size(pmd->data_sm, &data_len); 804 797 if (r < 0) 805 798 return r; 806 799
+9 -2
drivers/md/dm-thin.c
··· 1386 1386 1387 1387 static void set_pool_mode(struct pool *pool, enum pool_mode new_mode); 1388 1388 1389 + static void requeue_bios(struct pool *pool); 1390 + 1389 1391 static void check_for_space(struct pool *pool) 1390 1392 { 1391 1393 int r; ··· 1400 1398 if (r) 1401 1399 return; 1402 1400 1403 - if (nr_free) 1401 + if (nr_free) { 1404 1402 set_pool_mode(pool, PM_WRITE); 1403 + requeue_bios(pool); 1404 + } 1405 1405 } 1406 1406 1407 1407 /* ··· 1480 1476 1481 1477 r = dm_pool_alloc_data_block(pool->pmd, result); 1482 1478 if (r) { 1483 - metadata_operation_failed(pool, "dm_pool_alloc_data_block", r); 1479 + if (r == -ENOSPC) 1480 + set_pool_mode(pool, PM_OUT_OF_DATA_SPACE); 1481 + else 1482 + metadata_operation_failed(pool, "dm_pool_alloc_data_block", r); 1484 1483 return r; 1485 1484 } 1486 1485
+5 -5
drivers/md/dm-writecache.c
··· 259 259 if (da != p) { 260 260 long i; 261 261 wc->memory_map = NULL; 262 - pages = kvmalloc(p * sizeof(struct page *), GFP_KERNEL); 262 + pages = kvmalloc_array(p, sizeof(struct page *), GFP_KERNEL); 263 263 if (!pages) { 264 264 r = -ENOMEM; 265 265 goto err2; ··· 859 859 860 860 if (wc->entries) 861 861 return 0; 862 - wc->entries = vmalloc(sizeof(struct wc_entry) * wc->n_blocks); 862 + wc->entries = vmalloc(array_size(sizeof(struct wc_entry), wc->n_blocks)); 863 863 if (!wc->entries) 864 864 return -ENOMEM; 865 865 for (b = 0; b < wc->n_blocks; b++) { ··· 1481 1481 wb->bio.bi_iter.bi_sector = read_original_sector(wc, e); 1482 1482 wb->page_offset = PAGE_SIZE; 1483 1483 if (max_pages <= WB_LIST_INLINE || 1484 - unlikely(!(wb->wc_list = kmalloc(max_pages * sizeof(struct wc_entry *), 1485 - GFP_NOIO | __GFP_NORETRY | 1486 - __GFP_NOMEMALLOC | __GFP_NOWARN)))) { 1484 + unlikely(!(wb->wc_list = kmalloc_array(max_pages, sizeof(struct wc_entry *), 1485 + GFP_NOIO | __GFP_NORETRY | 1486 + __GFP_NOMEMALLOC | __GFP_NOWARN)))) { 1487 1487 wb->wc_list = wb->wc_list_inline; 1488 1488 max_pages = WB_LIST_INLINE; 1489 1489 }
+1 -1
drivers/md/dm-zoned-target.c
··· 787 787 788 788 /* Chunk BIO work */ 789 789 mutex_init(&dmz->chunk_lock); 790 - INIT_RADIX_TREE(&dmz->chunk_rxtree, GFP_KERNEL); 790 + INIT_RADIX_TREE(&dmz->chunk_rxtree, GFP_NOIO); 791 791 dmz->chunk_wq = alloc_workqueue("dmz_cwq_%s", WQ_MEM_RECLAIM | WQ_UNBOUND, 792 792 0, dev->name); 793 793 if (!dmz->chunk_wq) {
+3 -5
drivers/md/dm.c
··· 1056 1056 if (len < 1) 1057 1057 goto out; 1058 1058 nr_pages = min(len, nr_pages); 1059 - if (ti->type->direct_access) 1060 - ret = ti->type->direct_access(ti, pgoff, nr_pages, kaddr, pfn); 1059 + ret = ti->type->direct_access(ti, pgoff, nr_pages, kaddr, pfn); 1061 1060 1062 1061 out: 1063 1062 dm_put_live_table(md, srcu_idx); ··· 1605 1606 * the usage of io->orig_bio in dm_remap_zone_report() 1606 1607 * won't be affected by this reassignment. 1607 1608 */ 1608 - struct bio *b = bio_clone_bioset(bio, GFP_NOIO, 1609 - &md->queue->bio_split); 1609 + struct bio *b = bio_split(bio, bio_sectors(bio) - ci.sector_count, 1610 + GFP_NOIO, &md->queue->bio_split); 1610 1611 ci.io->orig_bio = b; 1611 - bio_advance(bio, (bio_sectors(bio) - ci.sector_count) << 9); 1612 1612 bio_chain(b, bio); 1613 1613 ret = generic_make_request(bio); 1614 1614 break;
+5 -3
drivers/md/md.c
··· 5547 5547 else 5548 5548 pr_warn("md: personality for level %s is not loaded!\n", 5549 5549 mddev->clevel); 5550 - return -EINVAL; 5550 + err = -EINVAL; 5551 + goto abort; 5551 5552 } 5552 5553 spin_unlock(&pers_lock); 5553 5554 if (mddev->level != pers->level) { ··· 5561 5560 pers->start_reshape == NULL) { 5562 5561 /* This personality cannot handle reshaping... */ 5563 5562 module_put(pers->owner); 5564 - return -EINVAL; 5563 + err = -EINVAL; 5564 + goto abort; 5565 5565 } 5566 5566 5567 5567 if (pers->sync_request) { ··· 5631 5629 mddev->private = NULL; 5632 5630 module_put(pers->owner); 5633 5631 bitmap_destroy(mddev); 5634 - return err; 5632 + goto abort; 5635 5633 } 5636 5634 if (mddev->queue) { 5637 5635 bool nonrot = true;
+7
drivers/md/raid10.c
··· 3893 3893 disk->rdev->saved_raid_disk < 0) 3894 3894 conf->fullsync = 1; 3895 3895 } 3896 + 3897 + if (disk->replacement && 3898 + !test_bit(In_sync, &disk->replacement->flags) && 3899 + disk->replacement->saved_raid_disk < 0) { 3900 + conf->fullsync = 1; 3901 + } 3902 + 3896 3903 disk->recovery_disabled = mddev->recovery_disabled - 1; 3897 3904 } 3898 3905
+2 -12
drivers/media/rc/bpf-lirc.c
··· 207 207 bpf_prog_array_free(rcdev->raw->progs); 208 208 } 209 209 210 - int lirc_prog_attach(const union bpf_attr *attr) 210 + int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog) 211 211 { 212 - struct bpf_prog *prog; 213 212 struct rc_dev *rcdev; 214 213 int ret; 215 214 216 215 if (attr->attach_flags) 217 216 return -EINVAL; 218 217 219 - prog = bpf_prog_get_type(attr->attach_bpf_fd, 220 - BPF_PROG_TYPE_LIRC_MODE2); 221 - if (IS_ERR(prog)) 222 - return PTR_ERR(prog); 223 - 224 218 rcdev = rc_dev_get_from_fd(attr->target_fd); 225 - if (IS_ERR(rcdev)) { 226 - bpf_prog_put(prog); 219 + if (IS_ERR(rcdev)) 227 220 return PTR_ERR(rcdev); 228 - } 229 221 230 222 ret = lirc_bpf_attach(rcdev, prog); 231 - if (ret) 232 - bpf_prog_put(prog); 233 223 234 224 put_device(&rcdev->dev); 235 225
+11 -8
drivers/mtd/chips/cfi_cmdset_0002.c
··· 2526 2526 2527 2527 struct ppb_lock { 2528 2528 struct flchip *chip; 2529 - loff_t offset; 2529 + unsigned long adr; 2530 2530 int locked; 2531 2531 }; 2532 2532 ··· 2544 2544 unsigned long timeo; 2545 2545 int ret; 2546 2546 2547 + adr += chip->start; 2547 2548 mutex_lock(&chip->mutex); 2548 - ret = get_chip(map, chip, adr + chip->start, FL_LOCKING); 2549 + ret = get_chip(map, chip, adr, FL_LOCKING); 2549 2550 if (ret) { 2550 2551 mutex_unlock(&chip->mutex); 2551 2552 return ret; ··· 2564 2563 2565 2564 if (thunk == DO_XXLOCK_ONEBLOCK_LOCK) { 2566 2565 chip->state = FL_LOCKING; 2567 - map_write(map, CMD(0xA0), chip->start + adr); 2568 - map_write(map, CMD(0x00), chip->start + adr); 2566 + map_write(map, CMD(0xA0), adr); 2567 + map_write(map, CMD(0x00), adr); 2569 2568 } else if (thunk == DO_XXLOCK_ONEBLOCK_UNLOCK) { 2570 2569 /* 2571 2570 * Unlocking of one specific sector is not supported, so we ··· 2603 2602 map_write(map, CMD(0x00), chip->start); 2604 2603 2605 2604 chip->state = FL_READY; 2606 - put_chip(map, chip, adr + chip->start); 2605 + put_chip(map, chip, adr); 2607 2606 mutex_unlock(&chip->mutex); 2608 2607 2609 2608 return ret; ··· 2660 2659 * sectors shall be unlocked, so lets keep their locking 2661 2660 * status at "unlocked" (locked=0) for the final re-locking. 
2662 2661 */ 2663 - if ((adr < ofs) || (adr >= (ofs + len))) { 2662 + if ((offset < ofs) || (offset >= (ofs + len))) { 2664 2663 sect[sectors].chip = &cfi->chips[chipnum]; 2665 - sect[sectors].offset = offset; 2664 + sect[sectors].adr = adr; 2666 2665 sect[sectors].locked = do_ppb_xxlock( 2667 2666 map, &cfi->chips[chipnum], adr, 0, 2668 2667 DO_XXLOCK_ONEBLOCK_GETLOCK); ··· 2676 2675 i++; 2677 2676 2678 2677 if (adr >> cfi->chipshift) { 2678 + if (offset >= (ofs + len)) 2679 + break; 2679 2680 adr = 0; 2680 2681 chipnum++; 2681 2682 ··· 2708 2705 */ 2709 2706 for (i = 0; i < sectors; i++) { 2710 2707 if (sect[i].locked) 2711 - do_ppb_xxlock(map, sect[i].chip, sect[i].offset, 0, 2708 + do_ppb_xxlock(map, sect[i].chip, sect[i].adr, 0, 2712 2709 DO_XXLOCK_ONEBLOCK_LOCK); 2713 2710 } 2714 2711
+2 -2
drivers/mtd/devices/mtd_dataflash.c
··· 733 733 { "AT45DB642x", 0x1f2800, 8192, 1056, 11, SUP_POW2PS}, 734 734 { "at45db642d", 0x1f2800, 8192, 1024, 10, SUP_POW2PS | IS_POW2PS}, 735 735 736 - { "AT45DB641E", 0x1f28000100, 32768, 264, 9, SUP_EXTID | SUP_POW2PS}, 737 - { "at45db641e", 0x1f28000100, 32768, 256, 8, SUP_EXTID | SUP_POW2PS | IS_POW2PS}, 736 + { "AT45DB641E", 0x1f28000100ULL, 32768, 264, 9, SUP_EXTID | SUP_POW2PS}, 737 + { "at45db641e", 0x1f28000100ULL, 32768, 256, 8, SUP_EXTID | SUP_POW2PS | IS_POW2PS}, 738 738 }; 739 739 740 740 static struct flash_info *jedec_lookup(struct spi_device *spi,
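The dataflash change above appends `ULL` to JEDEC IDs wider than 32 bits. A minimal sketch of why the suffix matters when such a constant passes through a 32-bit intermediate — the names here are hypothetical illustrations, not driver code:

```c
#include <assert.h>
#include <stdint.h>

/* Storing a >32-bit constant through a 32-bit type silently drops the
 * high byte: 0x1f28000100 becomes 0x28000100. */
static uint64_t id_truncated(void)
{
    uint32_t narrow = (uint32_t)0x1f28000100ULL; /* high bits lost here */
    return narrow;
}

/* The ULL suffix makes the 64-bit intent explicit on every target,
 * including 32-bit builds where plain integer promotion rules (or older
 * compilers) would otherwise warn or misbehave. */
static uint64_t id_full(void)
{
    return 0x1f28000100ULL;
}
```

The driver's `jedec_id` field is 64 bits wide, so only the suffixed form round-trips the full extended ID.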
+5 -1
drivers/mtd/nand/raw/denali_dt.c
··· 123 123 if (ret) 124 124 return ret; 125 125 126 - denali->clk_x_rate = clk_get_rate(dt->clk); 126 + /* 127 + * Hardcode the clock rate for the backward compatibility. 128 + * This works for both SOCFPGA and UniPhier. 129 + */ 130 + denali->clk_x_rate = 200000000; 127 131 128 132 ret = denali_init(denali); 129 133 if (ret)
+4 -1
drivers/mtd/nand/raw/mxc_nand.c
··· 48 48 #define NFC_V1_V2_CONFIG (host->regs + 0x0a) 49 49 #define NFC_V1_V2_ECC_STATUS_RESULT (host->regs + 0x0c) 50 50 #define NFC_V1_V2_RSLTMAIN_AREA (host->regs + 0x0e) 51 - #define NFC_V1_V2_RSLTSPARE_AREA (host->regs + 0x10) 51 + #define NFC_V21_RSLTSPARE_AREA (host->regs + 0x10) 52 52 #define NFC_V1_V2_WRPROT (host->regs + 0x12) 53 53 #define NFC_V1_UNLOCKSTART_BLKADDR (host->regs + 0x14) 54 54 #define NFC_V1_UNLOCKEND_BLKADDR (host->regs + 0x16) ··· 1273 1273 1274 1274 writew(config1, NFC_V1_V2_CONFIG1); 1275 1275 /* preset operation */ 1276 + 1277 + /* spare area size in 16-bit half-words */ 1278 + writew(mtd->oobsize / 2, NFC_V21_RSLTSPARE_AREA); 1276 1279 1277 1280 /* Unlock the internal RAM Buffer */ 1278 1281 writew(0x2, NFC_V1_V2_CONFIG);
+1 -1
drivers/mtd/nand/raw/nand_base.c
··· 440 440 441 441 for (; page < page_end; page++) { 442 442 res = chip->ecc.read_oob(mtd, chip, page); 443 - if (res) 443 + if (res < 0) 444 444 return res; 445 445 446 446 bad = chip->oob_poi[chip->badblockpos];
+36 -12
drivers/mtd/nand/raw/nand_macronix.c
··· 17 17 18 18 #include <linux/mtd/rawnand.h> 19 19 20 + /* 21 + * Macronix AC series does not support using SET/GET_FEATURES to change 22 + * the timings unlike what is declared in the parameter page. Unflag 23 + * this feature to avoid unnecessary downturns. 24 + */ 25 + static void macronix_nand_fix_broken_get_timings(struct nand_chip *chip) 26 + { 27 + unsigned int i; 28 + static const char * const broken_get_timings[] = { 29 + "MX30LF1G18AC", 30 + "MX30LF1G28AC", 31 + "MX30LF2G18AC", 32 + "MX30LF2G28AC", 33 + "MX30LF4G18AC", 34 + "MX30LF4G28AC", 35 + "MX60LF8G18AC", 36 + }; 37 + 38 + if (!chip->parameters.supports_set_get_features) 39 + return; 40 + 41 + for (i = 0; i < ARRAY_SIZE(broken_get_timings); i++) { 42 + if (!strcmp(broken_get_timings[i], chip->parameters.model)) 43 + break; 44 + } 45 + 46 + if (i == ARRAY_SIZE(broken_get_timings)) 47 + return; 48 + 49 + bitmap_clear(chip->parameters.get_feature_list, 50 + ONFI_FEATURE_ADDR_TIMING_MODE, 1); 51 + bitmap_clear(chip->parameters.set_feature_list, 52 + ONFI_FEATURE_ADDR_TIMING_MODE, 1); 53 + } 54 + 20 55 static int macronix_nand_init(struct nand_chip *chip) 21 56 { 22 57 if (nand_is_slc(chip)) 23 58 chip->bbt_options |= NAND_BBT_SCAN2NDPAGE; 24 59 25 - /* 26 - * MX30LF2G18AC chip does not support using SET/GET_FEATURES to change 27 - * the timings unlike what is declared in the parameter page. Unflag 28 - * this feature to avoid unnecessary downturns. 29 - */ 30 - if (chip->parameters.supports_set_get_features && 31 - !strcmp("MX30LF2G18AC", chip->parameters.model)) { 32 - bitmap_clear(chip->parameters.get_feature_list, 33 - ONFI_FEATURE_ADDR_TIMING_MODE, 1); 34 - bitmap_clear(chip->parameters.set_feature_list, 35 - ONFI_FEATURE_ADDR_TIMING_MODE, 1); 36 - } 60 + macronix_nand_fix_broken_get_timings(chip); 37 61 38 62 return 0; 39 63 }
+2
drivers/mtd/nand/raw/nand_micron.c
··· 66 66 67 67 if (p->supports_set_get_features) { 68 68 set_bit(ONFI_FEATURE_ADDR_READ_RETRY, p->set_feature_list); 69 + set_bit(ONFI_FEATURE_ON_DIE_ECC, p->set_feature_list); 69 70 set_bit(ONFI_FEATURE_ADDR_READ_RETRY, p->get_feature_list); 71 + set_bit(ONFI_FEATURE_ON_DIE_ECC, p->get_feature_list); 70 72 } 71 73 72 74 return 0;
+7 -1
drivers/net/ethernet/atheros/alx/main.c
··· 1897 1897 struct pci_dev *pdev = to_pci_dev(dev); 1898 1898 struct alx_priv *alx = pci_get_drvdata(pdev); 1899 1899 struct alx_hw *hw = &alx->hw; 1900 + int err; 1900 1901 1901 1902 alx_reset_phy(hw); 1902 1903 1903 1904 if (!netif_running(alx->dev)) 1904 1905 return 0; 1905 1906 netif_device_attach(alx->dev); 1906 - return __alx_open(alx, true); 1907 + 1908 + rtnl_lock(); 1909 + err = __alx_open(alx, true); 1910 + rtnl_unlock(); 1911 + 1912 + return err; 1907 1913 } 1908 1914 1909 1915 static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume);
+1
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 1533 1533 struct link_vars link_vars; 1534 1534 u32 link_cnt; 1535 1535 struct bnx2x_link_report_data last_reported_link; 1536 + bool force_link_down; 1536 1537 1537 1538 struct mdio_if_info mdio; 1538 1539
+6
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
··· 1261 1261 { 1262 1262 struct bnx2x_link_report_data cur_data; 1263 1263 1264 + if (bp->force_link_down) { 1265 + bp->link_vars.link_up = 0; 1266 + return; 1267 + } 1268 + 1264 1269 /* reread mf_cfg */ 1265 1270 if (IS_PF(bp) && !CHIP_IS_E1(bp)) 1266 1271 bnx2x_read_mf_cfg(bp); ··· 2822 2817 bp->pending_max = 0; 2823 2818 } 2824 2819 2820 + bp->force_link_down = false; 2825 2821 if (bp->port.pmf) { 2826 2822 rc = bnx2x_initial_phy_init(bp, load_mode); 2827 2823 if (rc)
+6
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 10279 10279 bp->sp_rtnl_state = 0; 10280 10280 smp_mb(); 10281 10281 10282 + /* Immediately indicate link as down */ 10283 + bp->link_vars.link_up = 0; 10284 + bp->force_link_down = true; 10285 + netif_carrier_off(bp->dev); 10286 + BNX2X_ERR("Indicating link is down due to Tx-timeout\n"); 10287 + 10282 10288 bnx2x_nic_unload(bp, UNLOAD_NORMAL, true); 10283 10289 /* When ret value shows failure of allocation failure, 10284 10290 * the nic is rebooted again. If open still fails, a error
+1 -1
drivers/net/ethernet/broadcom/cnic.c
··· 660 660 id_tbl->max = size; 661 661 id_tbl->next = next; 662 662 spin_lock_init(&id_tbl->lock); 663 - id_tbl->table = kcalloc(DIV_ROUND_UP(size, 32), 4, GFP_KERNEL); 663 + id_tbl->table = kcalloc(BITS_TO_LONGS(size), sizeof(long), GFP_KERNEL); 664 664 if (!id_tbl->table) 665 665 return -ENOMEM; 666 666
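The cnic change replaces `DIV_ROUND_UP(size, 32) * 4` with `BITS_TO_LONGS(size) * sizeof(long)`: the bitmap helpers operate on whole `long`s, so on a 64-bit kernel the old 32-bit-word rounding could allocate a buffer shorter than the last `long` accessed. A hedged re-creation of the arithmetic (kernel-style macros rebuilt here for illustration):

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define BITS_PER_LONG      (CHAR_BIT * sizeof(long))
#define BITS_TO_LONGS(nb)  DIV_ROUND_UP(nb, BITS_PER_LONG)

/* Bytes the fixed code allocates: always a whole number of longs. */
static size_t bitmap_bytes(size_t nbits)
{
    return BITS_TO_LONGS(nbits) * sizeof(long);
}

/* Bytes the replaced formula allocated: rounded to 32-bit words
 * regardless of the platform's long size. */
static size_t old_bytes(size_t nbits)
{
    return DIV_ROUND_UP(nbits, 32) * 4;
}
```

On LP64, `bitmap_bytes(1)` is 8 while `old_bytes(1)` is 4 — `test_bit()` on the single allocated word would read past the allocation.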
+2
drivers/net/ethernet/cadence/macb_main.c
··· 3726 3726 int err; 3727 3727 u32 reg; 3728 3728 3729 + bp->queues[0].bp = bp; 3730 + 3729 3731 dev->netdev_ops = &at91ether_netdev_ops; 3730 3732 dev->ethtool_ops = &macb_ethtool_ops; 3731 3733
+8 -7
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
··· 125 125 /* Default alignment for start of data in an Rx FD */ 126 126 #define DPAA_FD_DATA_ALIGNMENT 16 127 127 128 + /* The DPAA requires 256 bytes reserved and mapped for the SGT */ 129 + #define DPAA_SGT_SIZE 256 130 + 128 131 /* Values for the L3R field of the FM Parse Results 129 132 */ 130 133 /* L3 Type field: First IP Present IPv4 */ ··· 1634 1631 1635 1632 if (unlikely(qm_fd_get_format(fd) == qm_fd_sg)) { 1636 1633 nr_frags = skb_shinfo(skb)->nr_frags; 1637 - dma_unmap_single(dev, addr, qm_fd_get_offset(fd) + 1638 - sizeof(struct qm_sg_entry) * (1 + nr_frags), 1634 + dma_unmap_single(dev, addr, 1635 + qm_fd_get_offset(fd) + DPAA_SGT_SIZE, 1639 1636 dma_dir); 1640 1637 1641 1638 /* The sgt buffer has been allocated with netdev_alloc_frag(), ··· 1920 1917 void *sgt_buf; 1921 1918 1922 1919 /* get a page frag to store the SGTable */ 1923 - sz = SKB_DATA_ALIGN(priv->tx_headroom + 1924 - sizeof(struct qm_sg_entry) * (1 + nr_frags)); 1920 + sz = SKB_DATA_ALIGN(priv->tx_headroom + DPAA_SGT_SIZE); 1925 1921 sgt_buf = netdev_alloc_frag(sz); 1926 1922 if (unlikely(!sgt_buf)) { 1927 1923 netdev_err(net_dev, "netdev_alloc_frag() failed for size %d\n", ··· 1988 1986 skbh = (struct sk_buff **)buffer_start; 1989 1987 *skbh = skb; 1990 1988 1991 - addr = dma_map_single(dev, buffer_start, priv->tx_headroom + 1992 - sizeof(struct qm_sg_entry) * (1 + nr_frags), 1993 - dma_dir); 1989 + addr = dma_map_single(dev, buffer_start, 1990 + priv->tx_headroom + DPAA_SGT_SIZE, dma_dir); 1994 1991 if (unlikely(dma_mapping_error(dev, addr))) { 1995 1992 dev_err(dev, "DMA mapping failed"); 1996 1993 err = -EINVAL;
+8
drivers/net/ethernet/freescale/fman/fman_port.c
··· 324 324 #define HWP_HXS_PHE_REPORT 0x00000800 325 325 #define HWP_HXS_PCAC_PSTAT 0x00000100 326 326 #define HWP_HXS_PCAC_PSTOP 0x00000001 327 + #define HWP_HXS_TCP_OFFSET 0xA 328 + #define HWP_HXS_UDP_OFFSET 0xB 329 + #define HWP_HXS_SH_PAD_REM 0x80000000 330 + 327 331 struct fman_port_hwp_regs { 328 332 struct { 329 333 u32 ssa; /* Soft Sequence Attachment */ ··· 731 727 iowrite32be(0x00000000, &regs->pmda[i].ssa); 732 728 iowrite32be(0xffffffff, &regs->pmda[i].lcv); 733 729 } 730 + 731 + /* Short packet padding removal from checksum calculation */ 732 + iowrite32be(HWP_HXS_SH_PAD_REM, &regs->pmda[HWP_HXS_TCP_OFFSET].ssa); 733 + iowrite32be(HWP_HXS_SH_PAD_REM, &regs->pmda[HWP_HXS_UDP_OFFSET].ssa); 734 734 735 735 start_port_hwp(port); 736 736 }
+1
drivers/net/ethernet/huawei/hinic/hinic_rx.c
··· 439 439 { 440 440 struct hinic_rq *rq = rxq->rq; 441 441 442 + irq_set_affinity_hint(rq->irq, NULL); 442 443 free_irq(rq->irq, rxq); 443 444 rx_del_napi(rxq); 444 445 }
+15 -9
drivers/net/ethernet/intel/i40e/i40e_txrx.c
··· 2199 2199 return true; 2200 2200 } 2201 2201 2202 - #define I40E_XDP_PASS 0 2203 - #define I40E_XDP_CONSUMED 1 2204 - #define I40E_XDP_TX 2 2202 + #define I40E_XDP_PASS 0 2203 + #define I40E_XDP_CONSUMED BIT(0) 2204 + #define I40E_XDP_TX BIT(1) 2205 + #define I40E_XDP_REDIR BIT(2) 2205 2206 2206 2207 static int i40e_xmit_xdp_ring(struct xdp_frame *xdpf, 2207 2208 struct i40e_ring *xdp_ring); ··· 2249 2248 break; 2250 2249 case XDP_REDIRECT: 2251 2250 err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog); 2252 - result = !err ? I40E_XDP_TX : I40E_XDP_CONSUMED; 2251 + result = !err ? I40E_XDP_REDIR : I40E_XDP_CONSUMED; 2253 2252 break; 2254 2253 default: 2255 2254 bpf_warn_invalid_xdp_action(act); ··· 2312 2311 unsigned int total_rx_bytes = 0, total_rx_packets = 0; 2313 2312 struct sk_buff *skb = rx_ring->skb; 2314 2313 u16 cleaned_count = I40E_DESC_UNUSED(rx_ring); 2315 - bool failure = false, xdp_xmit = false; 2314 + unsigned int xdp_xmit = 0; 2315 + bool failure = false; 2316 2316 struct xdp_buff xdp; 2317 2317 2318 2318 xdp.rxq = &rx_ring->xdp_rxq; ··· 2374 2372 } 2375 2373 2376 2374 if (IS_ERR(skb)) { 2377 - if (PTR_ERR(skb) == -I40E_XDP_TX) { 2378 - xdp_xmit = true; 2375 + unsigned int xdp_res = -PTR_ERR(skb); 2376 + 2377 + if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR)) { 2378 + xdp_xmit |= xdp_res; 2379 2379 i40e_rx_buffer_flip(rx_ring, rx_buffer, size); 2380 2380 } else { 2381 2381 rx_buffer->pagecnt_bias++; ··· 2431 2427 total_rx_packets++; 2432 2428 } 2433 2429 2434 - if (xdp_xmit) { 2430 + if (xdp_xmit & I40E_XDP_REDIR) 2431 + xdp_do_flush_map(); 2432 + 2433 + if (xdp_xmit & I40E_XDP_TX) { 2435 2434 struct i40e_ring *xdp_ring = 2436 2435 rx_ring->vsi->xdp_rings[rx_ring->queue_index]; 2437 2436 2438 2437 i40e_xdp_ring_update_tail(xdp_ring); 2439 - xdp_do_flush_map(); 2440 2438 } 2441 2439 2442 2440 rx_ring->skb = skb;
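The i40e hunk converts the XDP verdicts from sequential values to distinct bits (`BIT(0)`..`BIT(2)`) so one poll loop can OR per-packet results together and then flush redirects and bump the Tx tail independently. A hedged model of that accumulation pattern (constant and function names are illustrative, not the driver's):

```c
#include <assert.h>

/* Distinct bits, mirroring the I40E_XDP_* encoding above. */
#define XDP_RES_PASS     0u
#define XDP_RES_CONSUMED (1u << 0)
#define XDP_RES_TX       (1u << 1)
#define XDP_RES_REDIR    (1u << 2)

/* Fold one packet's verdict into the per-poll summary. */
static unsigned int fold_verdict(unsigned int summary, unsigned int verdict)
{
    return summary | verdict;
}

/* After the loop, each completion action keys off its own bit, so a
 * poll that saw both TX and REDIRECT packets triggers both. */
static int needs_map_flush(unsigned int summary)
{
    return !!(summary & XDP_RES_REDIR);
}

static int needs_tail_bump(unsigned int summary)
{
    return !!(summary & XDP_RES_TX);
}
```

With the old sequential encoding a redirect was reported as `I40E_XDP_TX`, so the tail bump and map flush could not be separated; the bitmask makes each action conditional on exactly the packets that require it.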
+14 -10
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 2186 2186 return skb; 2187 2187 } 2188 2188 2189 - #define IXGBE_XDP_PASS 0 2190 - #define IXGBE_XDP_CONSUMED 1 2191 - #define IXGBE_XDP_TX 2 2189 + #define IXGBE_XDP_PASS 0 2190 + #define IXGBE_XDP_CONSUMED BIT(0) 2191 + #define IXGBE_XDP_TX BIT(1) 2192 + #define IXGBE_XDP_REDIR BIT(2) 2192 2193 2193 2194 static int ixgbe_xmit_xdp_ring(struct ixgbe_adapter *adapter, 2194 2195 struct xdp_frame *xdpf); ··· 2226 2225 case XDP_REDIRECT: 2227 2226 err = xdp_do_redirect(adapter->netdev, xdp, xdp_prog); 2228 2227 if (!err) 2229 - result = IXGBE_XDP_TX; 2228 + result = IXGBE_XDP_REDIR; 2230 2229 else 2231 2230 result = IXGBE_XDP_CONSUMED; 2232 2231 break; ··· 2286 2285 unsigned int mss = 0; 2287 2286 #endif /* IXGBE_FCOE */ 2288 2287 u16 cleaned_count = ixgbe_desc_unused(rx_ring); 2289 - bool xdp_xmit = false; 2288 + unsigned int xdp_xmit = 0; 2290 2289 struct xdp_buff xdp; 2291 2290 2292 2291 xdp.rxq = &rx_ring->xdp_rxq; ··· 2329 2328 } 2330 2329 2331 2330 if (IS_ERR(skb)) { 2332 - if (PTR_ERR(skb) == -IXGBE_XDP_TX) { 2333 - xdp_xmit = true; 2331 + unsigned int xdp_res = -PTR_ERR(skb); 2332 + 2333 + if (xdp_res & (IXGBE_XDP_TX | IXGBE_XDP_REDIR)) { 2334 + xdp_xmit |= xdp_res; 2334 2335 ixgbe_rx_buffer_flip(rx_ring, rx_buffer, size); 2335 2336 } else { 2336 2337 rx_buffer->pagecnt_bias++; ··· 2404 2401 total_rx_packets++; 2405 2402 } 2406 2403 2407 - if (xdp_xmit) { 2404 + if (xdp_xmit & IXGBE_XDP_REDIR) 2405 + xdp_do_flush_map(); 2406 + 2407 + if (xdp_xmit & IXGBE_XDP_TX) { 2408 2408 struct ixgbe_ring *ring = adapter->xdp_ring[smp_processor_id()]; 2409 2409 2410 2410 /* Force memory writes to complete before letting h/w ··· 2415 2409 */ 2416 2410 wmb(); 2417 2411 writel(ring->next_to_use, ring->tail); 2418 - 2419 - xdp_do_flush_map(); 2420 2412 } 2421 2413 2422 2414 u64_stats_update_begin(&rx_ring->syncp);
+4 -4
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
··· 807 807 unsigned long flags; 808 808 bool poll_cmd = ent->polling; 809 809 int alloc_ret; 810 + int cmd_mode; 810 811 811 812 sem = ent->page_queue ? &cmd->pages_sem : &cmd->sem; 812 813 down(sem); ··· 854 853 set_signature(ent, !cmd->checksum_disabled); 855 854 dump_command(dev, ent, 1); 856 855 ent->ts1 = ktime_get_ns(); 856 + cmd_mode = cmd->mode; 857 857 858 858 if (ent->callback) 859 859 schedule_delayed_work(&ent->cb_timeout_work, cb_timeout); ··· 879 877 iowrite32be(1 << ent->idx, &dev->iseg->cmd_dbell); 880 878 mmiowb(); 881 879 /* if not in polling don't use ent after this point */ 882 - if (cmd->mode == CMD_MODE_POLLING || poll_cmd) { 880 + if (cmd_mode == CMD_MODE_POLLING || poll_cmd) { 883 881 poll_timeout(ent); 884 882 /* make sure we read the descriptor after ownership is SW */ 885 883 rmb(); ··· 1278 1276 { 1279 1277 struct mlx5_core_dev *dev = filp->private_data; 1280 1278 struct mlx5_cmd_debug *dbg = &dev->cmd.dbg; 1281 - char outlen_str[8]; 1279 + char outlen_str[8] = {0}; 1282 1280 int outlen; 1283 1281 void *ptr; 1284 1282 int err; ··· 1292 1290 1293 1291 if (copy_from_user(outlen_str, buf, count)) 1294 1292 return -EFAULT; 1295 - 1296 - outlen_str[7] = 0; 1297 1293 1298 1294 err = sscanf(outlen_str, "%d", &outlen); 1299 1295 if (err < 0)
+6 -6
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 2843 2843 mlx5e_activate_channels(&priv->channels); 2844 2844 netif_tx_start_all_queues(priv->netdev); 2845 2845 2846 - if (MLX5_VPORT_MANAGER(priv->mdev)) 2846 + if (MLX5_ESWITCH_MANAGER(priv->mdev)) 2847 2847 mlx5e_add_sqs_fwd_rules(priv); 2848 2848 2849 2849 mlx5e_wait_channels_min_rx_wqes(&priv->channels); ··· 2854 2854 { 2855 2855 mlx5e_redirect_rqts_to_drop(priv); 2856 2856 2857 - if (MLX5_VPORT_MANAGER(priv->mdev)) 2857 + if (MLX5_ESWITCH_MANAGER(priv->mdev)) 2858 2858 mlx5e_remove_sqs_fwd_rules(priv); 2859 2859 2860 2860 /* FIXME: This is a W/A only for tx timeout watch dog false alarm when ··· 4600 4600 mlx5e_set_netdev_dev_addr(netdev); 4601 4601 4602 4602 #if IS_ENABLED(CONFIG_MLX5_ESWITCH) 4603 - if (MLX5_VPORT_MANAGER(mdev)) 4603 + if (MLX5_ESWITCH_MANAGER(mdev)) 4604 4604 netdev->switchdev_ops = &mlx5e_switchdev_ops; 4605 4605 #endif 4606 4606 ··· 4756 4756 4757 4757 mlx5e_enable_async_events(priv); 4758 4758 4759 - if (MLX5_VPORT_MANAGER(priv->mdev)) 4759 + if (MLX5_ESWITCH_MANAGER(priv->mdev)) 4760 4760 mlx5e_register_vport_reps(priv); 4761 4761 4762 4762 if (netdev->reg_state != NETREG_REGISTERED) ··· 4791 4791 4792 4792 queue_work(priv->wq, &priv->set_rx_mode_work); 4793 4793 4794 - if (MLX5_VPORT_MANAGER(priv->mdev)) 4794 + if (MLX5_ESWITCH_MANAGER(priv->mdev)) 4795 4795 mlx5e_unregister_vport_reps(priv); 4796 4796 4797 4797 mlx5e_disable_async_events(priv); ··· 4975 4975 return NULL; 4976 4976 4977 4977 #ifdef CONFIG_MLX5_ESWITCH 4978 - if (MLX5_VPORT_MANAGER(mdev)) { 4978 + if (MLX5_ESWITCH_MANAGER(mdev)) { 4979 4979 rpriv = mlx5e_alloc_nic_rep_priv(mdev); 4980 4980 if (!rpriv) { 4981 4981 mlx5_core_warn(mdev, "Failed to alloc NIC rep priv data\n");
+6 -2
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 823 823 struct mlx5e_rep_priv *rpriv = priv->ppriv; 824 824 struct mlx5_eswitch_rep *rep; 825 825 826 - if (!MLX5_CAP_GEN(priv->mdev, vport_group_manager)) 826 + if (!MLX5_ESWITCH_MANAGER(priv->mdev)) 827 827 return false; 828 828 829 829 rep = rpriv->rep; ··· 837 837 static bool mlx5e_is_vf_vport_rep(struct mlx5e_priv *priv) 838 838 { 839 839 struct mlx5e_rep_priv *rpriv = priv->ppriv; 840 - struct mlx5_eswitch_rep *rep = rpriv->rep; 840 + struct mlx5_eswitch_rep *rep; 841 841 842 + if (!MLX5_ESWITCH_MANAGER(priv->mdev)) 843 + return false; 844 + 845 + rep = rpriv->rep; 842 846 if (rep && rep->vport != FDB_UPLINK_VPORT) 843 847 return true; 844 848
+5 -7
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
··· 1594 1594 } 1595 1595 1596 1596 /* Public E-Switch API */ 1597 - #define ESW_ALLOWED(esw) ((esw) && MLX5_VPORT_MANAGER((esw)->dev)) 1597 + #define ESW_ALLOWED(esw) ((esw) && MLX5_ESWITCH_MANAGER((esw)->dev)) 1598 + 1598 1599 1599 1600 int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode) 1600 1601 { 1601 1602 int err; 1602 1603 int i, enabled_events; 1603 1604 1604 - if (!ESW_ALLOWED(esw)) 1605 - return 0; 1606 - 1607 - if (!MLX5_CAP_GEN(esw->dev, eswitch_flow_table) || 1605 + if (!ESW_ALLOWED(esw) || 1608 1606 !MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ft_support)) { 1609 1607 esw_warn(esw->dev, "E-Switch FDB is not supported, aborting ...\n"); 1610 1608 return -EOPNOTSUPP; ··· 1804 1806 u64 node_guid; 1805 1807 int err = 0; 1806 1808 1807 - if (!ESW_ALLOWED(esw)) 1809 + if (!MLX5_CAP_GEN(esw->dev, vport_group_manager)) 1808 1810 return -EPERM; 1809 1811 if (!LEGAL_VPORT(esw, vport) || is_multicast_ether_addr(mac)) 1810 1812 return -EINVAL; ··· 1881 1883 { 1882 1884 struct mlx5_vport *evport; 1883 1885 1884 - if (!ESW_ALLOWED(esw)) 1886 + if (!MLX5_CAP_GEN(esw->dev, vport_group_manager)) 1885 1887 return -EPERM; 1886 1888 if (!LEGAL_VPORT(esw, vport)) 1887 1889 return -EINVAL;
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 1079 1079 if (MLX5_CAP_GEN(dev, port_type) != MLX5_CAP_PORT_TYPE_ETH) 1080 1080 return -EOPNOTSUPP; 1081 1081 1082 - if (!MLX5_CAP_GEN(dev, vport_group_manager)) 1083 - return -EOPNOTSUPP; 1082 + if(!MLX5_ESWITCH_MANAGER(dev)) 1083 + return -EPERM; 1084 1084 1085 1085 if (dev->priv.eswitch->mode == SRIOV_NONE) 1086 1086 return -EOPNOTSUPP;
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
··· 32 32 33 33 #include <linux/mutex.h> 34 34 #include <linux/mlx5/driver.h> 35 + #include <linux/mlx5/eswitch.h> 35 36 36 37 #include "mlx5_core.h" 37 38 #include "fs_core.h" ··· 2653 2652 goto err; 2654 2653 } 2655 2654 2656 - if (MLX5_CAP_GEN(dev, eswitch_flow_table)) { 2655 + if (MLX5_ESWITCH_MANAGER(dev)) { 2657 2656 if (MLX5_CAP_ESW_FLOWTABLE_FDB(dev, ft_support)) { 2658 2657 err = init_fdb_root_ns(steering); 2659 2658 if (err)
+3 -2
drivers/net/ethernet/mellanox/mlx5/core/fw.c
··· 32 32 33 33 #include <linux/mlx5/driver.h> 34 34 #include <linux/mlx5/cmd.h> 35 + #include <linux/mlx5/eswitch.h> 35 36 #include <linux/module.h> 36 37 #include "mlx5_core.h" 37 38 #include "../../mlxfw/mlxfw.h" ··· 160 159 } 161 160 162 161 if (MLX5_CAP_GEN(dev, vport_group_manager) && 163 - MLX5_CAP_GEN(dev, eswitch_flow_table)) { 162 + MLX5_ESWITCH_MANAGER(dev)) { 164 163 err = mlx5_core_get_caps(dev, MLX5_CAP_ESWITCH_FLOW_TABLE); 165 164 if (err) 166 165 return err; 167 166 } 168 167 169 - if (MLX5_CAP_GEN(dev, eswitch_flow_table)) { 168 + if (MLX5_ESWITCH_MANAGER(dev)) { 170 169 err = mlx5_core_get_caps(dev, MLX5_CAP_ESWITCH); 171 170 if (err) 172 171 return err;
+5 -4
drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
··· 33 33 #include <linux/etherdevice.h> 34 34 #include <linux/mlx5/driver.h> 35 35 #include <linux/mlx5/mlx5_ifc.h> 36 + #include <linux/mlx5/eswitch.h> 36 37 #include "mlx5_core.h" 37 38 #include "lib/mpfs.h" 38 39 ··· 99 98 int l2table_size = 1 << MLX5_CAP_GEN(dev, log_max_l2_table); 100 99 struct mlx5_mpfs *mpfs; 101 100 102 - if (!MLX5_VPORT_MANAGER(dev)) 101 + if (!MLX5_ESWITCH_MANAGER(dev)) 103 102 return 0; 104 103 105 104 mpfs = kzalloc(sizeof(*mpfs), GFP_KERNEL); ··· 123 122 { 124 123 struct mlx5_mpfs *mpfs = dev->priv.mpfs; 125 124 126 - if (!MLX5_VPORT_MANAGER(dev)) 125 + if (!MLX5_ESWITCH_MANAGER(dev)) 127 126 return; 128 127 129 128 WARN_ON(!hlist_empty(mpfs->hash)); ··· 138 137 u32 index; 139 138 int err; 140 139 141 - if (!MLX5_VPORT_MANAGER(dev)) 140 + if (!MLX5_ESWITCH_MANAGER(dev)) 142 141 return 0; 143 142 144 143 mutex_lock(&mpfs->lock); ··· 180 179 int err = 0; 181 180 u32 index; 182 181 183 - if (!MLX5_VPORT_MANAGER(dev)) 182 + if (!MLX5_ESWITCH_MANAGER(dev)) 184 183 return 0; 185 184 186 185 mutex_lock(&mpfs->lock);
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/port.c
··· 701 701 static int mlx5_set_port_qetcr_reg(struct mlx5_core_dev *mdev, u32 *in, 702 702 int inlen) 703 703 { 704 - u32 out[MLX5_ST_SZ_DW(qtct_reg)]; 704 + u32 out[MLX5_ST_SZ_DW(qetc_reg)]; 705 705 706 706 if (!MLX5_CAP_GEN(mdev, ets)) 707 707 return -EOPNOTSUPP; ··· 713 713 static int mlx5_query_port_qetcr_reg(struct mlx5_core_dev *mdev, u32 *out, 714 714 int outlen) 715 715 { 716 - u32 in[MLX5_ST_SZ_DW(qtct_reg)]; 716 + u32 in[MLX5_ST_SZ_DW(qetc_reg)]; 717 717 718 718 if (!MLX5_CAP_GEN(mdev, ets)) 719 719 return -EOPNOTSUPP;
+6 -1
drivers/net/ethernet/mellanox/mlx5/core/sriov.c
··· 88 88 return -EBUSY; 89 89 } 90 90 91 + if (!MLX5_ESWITCH_MANAGER(dev)) 92 + goto enable_vfs_hca; 93 + 91 94 err = mlx5_eswitch_enable_sriov(dev->priv.eswitch, num_vfs, SRIOV_LEGACY); 92 95 if (err) { 93 96 mlx5_core_warn(dev, ··· 98 95 return err; 99 96 } 100 97 98 + enable_vfs_hca: 101 99 for (vf = 0; vf < num_vfs; vf++) { 102 100 err = mlx5_core_enable_hca(dev, vf + 1); 103 101 if (err) { ··· 144 140 } 145 141 146 142 out: 147 - mlx5_eswitch_disable_sriov(dev->priv.eswitch); 143 + if (MLX5_ESWITCH_MANAGER(dev)) 144 + mlx5_eswitch_disable_sriov(dev->priv.eswitch); 148 145 149 146 if (mlx5_wait_for_vf_pages(dev)) 150 147 mlx5_core_warn(dev, "timeout reclaiming VFs pages\n");
-2
drivers/net/ethernet/mellanox/mlx5/core/vport.c
··· 549 549 return -EINVAL; 550 550 if (!MLX5_CAP_GEN(mdev, vport_group_manager)) 551 551 return -EACCES; 552 - if (!MLX5_CAP_ESW(mdev, nic_vport_node_guid_modify)) 553 - return -EOPNOTSUPP; 554 552 555 553 in = kvzalloc(inlen, GFP_KERNEL); 556 554 if (!in)
+6 -3
drivers/net/ethernet/netronome/nfp/bpf/main.c
··· 81 81 82 82 ret = nfp_net_bpf_offload(nn, prog, running, extack); 83 83 /* Stop offload if replace not possible */ 84 - if (ret && prog) 85 - nfp_bpf_xdp_offload(app, nn, NULL, extack); 84 + if (ret) 85 + return ret; 86 86 87 - nn->dp.bpf_offload_xdp = prog && !ret; 87 + nn->dp.bpf_offload_xdp = !!prog; 88 88 return ret; 89 89 } 90 90 ··· 200 200 struct nfp_net *nn = netdev_priv(netdev); 201 201 202 202 if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS) 203 + return -EOPNOTSUPP; 204 + 205 + if (tcf_block_shared(f->block)) 203 206 return -EOPNOTSUPP; 204 207 205 208 switch (f->command) {
+14
drivers/net/ethernet/netronome/nfp/flower/match.c
··· 123 123 NFP_FLOWER_MASK_MPLS_Q; 124 124 125 125 frame->mpls_lse = cpu_to_be32(t_mpls); 126 + } else if (dissector_uses_key(flow->dissector, 127 + FLOW_DISSECTOR_KEY_BASIC)) { 128 + /* Check for mpls ether type and set NFP_FLOWER_MASK_MPLS_Q 129 + * bit, which indicates an mpls ether type but without any 130 + * mpls fields. 131 + */ 132 + struct flow_dissector_key_basic *key_basic; 133 + 134 + key_basic = skb_flow_dissector_target(flow->dissector, 135 + FLOW_DISSECTOR_KEY_BASIC, 136 + flow->key); 137 + if (key_basic->n_proto == cpu_to_be16(ETH_P_MPLS_UC) || 138 + key_basic->n_proto == cpu_to_be16(ETH_P_MPLS_MC)) 139 + frame->mpls_lse = cpu_to_be32(NFP_FLOWER_MASK_MPLS_Q); 126 140 } 127 141 } 128 142
+11
drivers/net/ethernet/netronome/nfp/flower/offload.c
··· 264 264 case cpu_to_be16(ETH_P_ARP): 265 265 return -EOPNOTSUPP; 266 266 267 + case cpu_to_be16(ETH_P_MPLS_UC): 268 + case cpu_to_be16(ETH_P_MPLS_MC): 269 + if (!(key_layer & NFP_FLOWER_LAYER_MAC)) { 270 + key_layer |= NFP_FLOWER_LAYER_MAC; 271 + key_size += sizeof(struct nfp_flower_mac_mpls); 272 + } 273 + break; 274 + 267 275 /* Will be included in layer 2. */ 268 276 case cpu_to_be16(ETH_P_8021Q): 269 277 break; ··· 629 621 struct nfp_repr *repr = netdev_priv(netdev); 630 622 631 623 if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS) 624 + return -EOPNOTSUPP; 625 + 626 + if (tcf_block_shared(f->block)) 632 627 return -EOPNOTSUPP; 633 628 634 629 switch (f->command) {
+1 -1
drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nffw.c
··· 232 232 err = nfp_cpp_read(cpp, nfp_resource_cpp_id(state->res), 233 233 nfp_resource_address(state->res), 234 234 fwinf, sizeof(*fwinf)); 235 - if (err < sizeof(*fwinf)) 235 + if (err < (int)sizeof(*fwinf)) 236 236 goto err_release; 237 237 238 238 if (!nffw_res_flg_init_get(fwinf))
+4 -4
drivers/net/ethernet/qlogic/qed/qed_dcbx.c
··· 709 709 p_local = &p_hwfn->p_dcbx_info->lldp_local[LLDP_NEAREST_BRIDGE]; 710 710 711 711 memcpy(params->lldp_local.local_chassis_id, p_local->local_chassis_id, 712 - ARRAY_SIZE(p_local->local_chassis_id)); 712 + sizeof(p_local->local_chassis_id)); 713 713 memcpy(params->lldp_local.local_port_id, p_local->local_port_id, 714 - ARRAY_SIZE(p_local->local_port_id)); 714 + sizeof(p_local->local_port_id)); 715 715 } 716 716 717 717 static void ··· 723 723 p_remote = &p_hwfn->p_dcbx_info->lldp_remote[LLDP_NEAREST_BRIDGE]; 724 724 725 725 memcpy(params->lldp_remote.peer_chassis_id, p_remote->peer_chassis_id, 726 - ARRAY_SIZE(p_remote->peer_chassis_id)); 726 + sizeof(p_remote->peer_chassis_id)); 727 727 memcpy(params->lldp_remote.peer_port_id, p_remote->peer_port_id, 728 - ARRAY_SIZE(p_remote->peer_port_id)); 728 + sizeof(p_remote->peer_port_id)); 729 729 } 730 730 731 731 static int
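The qed_dcbx fix swaps `ARRAY_SIZE()` for `sizeof()` in the `memcpy()` calls: `memcpy` takes a byte count, while `ARRAY_SIZE` yields an element count, so the old code copied only a fraction of each LLDP id. A small demonstration of the difference, using a hypothetical 4-element array of 32-bit words as a stand-in:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const uint32_t src[4] = {
    0x11111111, 0x22222222, 0x33333333, 0x44444444
};

/* Copy 'nbytes' from src into a zeroed destination and count how many
 * whole elements arrived intact. */
static int elems_copied(size_t nbytes)
{
    uint32_t dst[4] = {0};
    int i, n = 0;

    memcpy(dst, src, nbytes);
    for (i = 0; i < 4; i++)
        if (dst[i] == src[i])
            n++;
    return n;
}
```

Passing `ARRAY_SIZE(src)` (4) copies only the first 32-bit word; `sizeof(src)` (16) copies the whole array, which is what the corrected hunk does for the chassis and port ids.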
+1 -1
drivers/net/ethernet/qlogic/qed/qed_dev.c
··· 1804 1804 DP_INFO(p_hwfn, "Failed to update driver state\n"); 1805 1805 1806 1806 rc = qed_mcp_ov_update_eswitch(p_hwfn, p_hwfn->p_main_ptt, 1807 - QED_OV_ESWITCH_VEB); 1807 + QED_OV_ESWITCH_NONE); 1808 1808 if (rc) 1809 1809 DP_INFO(p_hwfn, "Failed to update eswitch mode\n"); 1810 1810 }
+8
drivers/net/ethernet/qlogic/qed/qed_main.c
··· 789 789 /* We want a minimum of one slowpath and one fastpath vector per hwfn */ 790 790 cdev->int_params.in.min_msix_cnt = cdev->num_hwfns * 2; 791 791 792 + if (is_kdump_kernel()) { 793 + DP_INFO(cdev, 794 + "Kdump kernel: Limit the max number of requested MSI-X vectors to %hd\n", 795 + cdev->int_params.in.min_msix_cnt); 796 + cdev->int_params.in.num_vectors = 797 + cdev->int_params.in.min_msix_cnt; 798 + } 799 + 792 800 rc = qed_set_int_mode(cdev, false); 793 801 if (rc) { 794 802 DP_ERR(cdev, "qed_slowpath_setup_int ERR\n");
+17 -2
drivers/net/ethernet/qlogic/qed/qed_sriov.c
··· 4513 4513 static int qed_sriov_enable(struct qed_dev *cdev, int num) 4514 4514 { 4515 4515 struct qed_iov_vf_init_params params; 4516 + struct qed_hwfn *hwfn; 4517 + struct qed_ptt *ptt; 4516 4518 int i, j, rc; 4517 4519 4518 4520 if (num >= RESC_NUM(&cdev->hwfns[0], QED_VPORT)) { ··· 4527 4525 4528 4526 /* Initialize HW for VF access */ 4529 4527 for_each_hwfn(cdev, j) { 4530 - struct qed_hwfn *hwfn = &cdev->hwfns[j]; 4531 - struct qed_ptt *ptt = qed_ptt_acquire(hwfn); 4528 + hwfn = &cdev->hwfns[j]; 4529 + ptt = qed_ptt_acquire(hwfn); 4532 4530 4533 4531 /* Make sure not to use more than 16 queues per VF */ 4534 4532 params.num_queues = min_t(int, ··· 4563 4561 DP_ERR(cdev, "Failed to enable sriov [%d]\n", rc); 4564 4562 goto err; 4565 4563 } 4564 + 4565 + hwfn = QED_LEADING_HWFN(cdev); 4566 + ptt = qed_ptt_acquire(hwfn); 4567 + if (!ptt) { 4568 + DP_ERR(hwfn, "Failed to acquire ptt\n"); 4569 + rc = -EBUSY; 4570 + goto err; 4571 + } 4572 + 4573 + rc = qed_mcp_ov_update_eswitch(hwfn, ptt, QED_OV_ESWITCH_VEB); 4574 + if (rc) 4575 + DP_INFO(cdev, "Failed to update eswitch mode\n"); 4576 + qed_ptt_release(hwfn, ptt); 4566 4577 4567 4578 return num; 4568 4579
+8 -2
drivers/net/ethernet/qlogic/qede/qede_ptp.c
··· 337 337 { 338 338 struct qede_ptp *ptp = edev->ptp; 339 339 340 - if (!ptp) 341 - return -EIO; 340 + if (!ptp) { 341 + info->so_timestamping = SOF_TIMESTAMPING_TX_SOFTWARE | 342 + SOF_TIMESTAMPING_RX_SOFTWARE | 343 + SOF_TIMESTAMPING_SOFTWARE; 344 + info->phc_index = -1; 345 + 346 + return 0; 347 + } 342 348 343 349 info->so_timestamping = SOF_TIMESTAMPING_TX_SOFTWARE | 344 350 SOF_TIMESTAMPING_RX_SOFTWARE |
+1
drivers/net/ethernet/sfc/farch.c
··· 2794 2794 if (!state) 2795 2795 return -ENOMEM; 2796 2796 efx->filter_state = state; 2797 + init_rwsem(&state->lock); 2797 2798 2798 2799 table = &state->table[EFX_FARCH_FILTER_TABLE_RX_IP]; 2799 2800 table->id = EFX_FARCH_FILTER_TABLE_RX_IP;
+12
drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
··· 420 420 writel(mtl_tx_op, ioaddr + MTL_CHAN_TX_OP_MODE(channel)); 421 421 } 422 422 423 + static void dwmac4_set_bfsize(void __iomem *ioaddr, int bfsize, u32 chan) 424 + { 425 + u32 value = readl(ioaddr + DMA_CHAN_RX_CONTROL(chan)); 426 + 427 + value &= ~DMA_RBSZ_MASK; 428 + value |= (bfsize << DMA_RBSZ_SHIFT) & DMA_RBSZ_MASK; 429 + 430 + writel(value, ioaddr + DMA_CHAN_RX_CONTROL(chan)); 431 + } 432 + 423 433 const struct stmmac_dma_ops dwmac4_dma_ops = { 424 434 .reset = dwmac4_dma_reset, 425 435 .init = dwmac4_dma_init, ··· 455 445 .set_tx_tail_ptr = dwmac4_set_tx_tail_ptr, 456 446 .enable_tso = dwmac4_enable_tso, 457 447 .qmode = dwmac4_qmode, 448 + .set_bfsize = dwmac4_set_bfsize, 458 449 }; 459 450 460 451 const struct stmmac_dma_ops dwmac410_dma_ops = { ··· 483 472 .set_tx_tail_ptr = dwmac4_set_tx_tail_ptr, 484 473 .enable_tso = dwmac4_enable_tso, 485 474 .qmode = dwmac4_qmode, 475 + .set_bfsize = dwmac4_set_bfsize, 486 476 };
+2
drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.h
··· 120 120 121 121 /* DMA Rx Channel X Control register defines */ 122 122 #define DMA_CONTROL_SR BIT(0) 123 + #define DMA_RBSZ_MASK GENMASK(14, 1) 124 + #define DMA_RBSZ_SHIFT 1 123 125 124 126 /* Interrupt status per channel */ 125 127 #define DMA_CHAN_STATUS_REB GENMASK(21, 19)
+3
drivers/net/ethernet/stmicro/stmmac/hwif.h
··· 184 184 void (*set_tx_tail_ptr)(void __iomem *ioaddr, u32 tail_ptr, u32 chan); 185 185 void (*enable_tso)(void __iomem *ioaddr, bool en, u32 chan); 186 186 void (*qmode)(void __iomem *ioaddr, u32 channel, u8 qmode); 187 + void (*set_bfsize)(void __iomem *ioaddr, int bfsize, u32 chan); 187 188 }; 188 189 189 190 #define stmmac_reset(__priv, __args...) \ ··· 239 238 stmmac_do_void_callback(__priv, dma, enable_tso, __args) 240 239 #define stmmac_dma_qmode(__priv, __args...) \ 241 240 stmmac_do_void_callback(__priv, dma, qmode, __args) 241 + #define stmmac_set_dma_bfsize(__priv, __args...) \ 242 + stmmac_do_void_callback(__priv, dma, set_bfsize, __args) 242 243 243 244 struct mac_device_info; 244 245 struct net_device;
+2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 1804 1804 1805 1805 stmmac_dma_rx_mode(priv, priv->ioaddr, rxmode, chan, 1806 1806 rxfifosz, qmode); 1807 + stmmac_set_dma_bfsize(priv, priv->ioaddr, priv->dma_buf_sz, 1808 + chan); 1807 1809 } 1808 1810 1809 1811 for (chan = 0; chan < tx_channels_count; chan++) {
+1 -1
drivers/net/geneve.c
··· 478 478 out_unlock: 479 479 rcu_read_unlock(); 480 480 out: 481 - NAPI_GRO_CB(skb)->flush |= flush; 481 + skb_gro_flush_final(skb, pp, flush); 482 482 483 483 return pp; 484 484 }
+1 -1
drivers/net/hyperv/hyperv_net.h
··· 210 210 void netvsc_channel_cb(void *context); 211 211 int netvsc_poll(struct napi_struct *napi, int budget); 212 212 213 - void rndis_set_subchannel(struct work_struct *w); 213 + int rndis_set_subchannel(struct net_device *ndev, struct netvsc_device *nvdev); 214 214 int rndis_filter_open(struct netvsc_device *nvdev); 215 215 int rndis_filter_close(struct netvsc_device *nvdev); 216 216 struct netvsc_device *rndis_filter_device_add(struct hv_device *dev,
+36 -1
drivers/net/hyperv/netvsc.c
··· 65 65 VM_PKT_DATA_INBAND, 0); 66 66 } 67 67 68 + /* Worker to setup sub channels on initial setup 69 + * Initial hotplug event occurs in softirq context 70 + * and can't wait for channels. 71 + */ 72 + static void netvsc_subchan_work(struct work_struct *w) 73 + { 74 + struct netvsc_device *nvdev = 75 + container_of(w, struct netvsc_device, subchan_work); 76 + struct rndis_device *rdev; 77 + int i, ret; 78 + 79 + /* Avoid deadlock with device removal already under RTNL */ 80 + if (!rtnl_trylock()) { 81 + schedule_work(w); 82 + return; 83 + } 84 + 85 + rdev = nvdev->extension; 86 + if (rdev) { 87 + ret = rndis_set_subchannel(rdev->ndev, nvdev); 88 + if (ret == 0) { 89 + netif_device_attach(rdev->ndev); 90 + } else { 91 + /* fallback to only primary channel */ 92 + for (i = 1; i < nvdev->num_chn; i++) 93 + netif_napi_del(&nvdev->chan_table[i].napi); 94 + 95 + nvdev->max_chn = 1; 96 + nvdev->num_chn = 1; 97 + } 98 + } 99 + 100 + rtnl_unlock(); 101 + } 102 + 68 103 static struct netvsc_device *alloc_net_device(void) 69 104 { 70 105 struct netvsc_device *net_device; ··· 116 81 117 82 init_completion(&net_device->channel_init_wait); 118 83 init_waitqueue_head(&net_device->subchan_open); 119 - INIT_WORK(&net_device->subchan_work, rndis_set_subchannel); 84 + INIT_WORK(&net_device->subchan_work, netvsc_subchan_work); 120 85 121 86 return net_device; 122 87 }
+16 -1
drivers/net/hyperv/netvsc_drv.c
··· 905 905 if (IS_ERR(nvdev)) 906 906 return PTR_ERR(nvdev); 907 907 908 - /* Note: enable and attach happen when sub-channels setup */ 908 + if (nvdev->num_chn > 1) { 909 + ret = rndis_set_subchannel(ndev, nvdev); 909 910 911 + /* if unavailable, just proceed with one queue */ 912 + if (ret) { 913 + nvdev->max_chn = 1; 914 + nvdev->num_chn = 1; 915 + } 916 + } 917 + 918 + /* In any case device is now ready */ 919 + netif_device_attach(ndev); 920 + 921 + /* Note: enable and attach happen when sub-channels setup */ 910 922 netif_carrier_off(ndev); 911 923 912 924 if (netif_running(ndev)) { ··· 2100 2088 } 2101 2089 2102 2090 memcpy(net->dev_addr, device_info.mac_adr, ETH_ALEN); 2091 + 2092 + if (nvdev->num_chn > 1) 2093 + schedule_work(&nvdev->subchan_work); 2103 2094 2104 2095 /* hw_features computed in rndis_netdev_set_hwcaps() */ 2105 2096 net->features = net->hw_features |
+12 -49
drivers/net/hyperv/rndis_filter.c
··· 1062 1062 * This breaks overlap of processing the host message for the 1063 1063 * new primary channel with the initialization of sub-channels. 1064 1064 */ 1065 - void rndis_set_subchannel(struct work_struct *w) 1065 + int rndis_set_subchannel(struct net_device *ndev, struct netvsc_device *nvdev) 1066 1066 { 1067 - struct netvsc_device *nvdev 1068 - = container_of(w, struct netvsc_device, subchan_work); 1069 1067 struct nvsp_message *init_packet = &nvdev->channel_init_pkt; 1070 - struct net_device_context *ndev_ctx; 1071 - struct rndis_device *rdev; 1072 - struct net_device *ndev; 1073 - struct hv_device *hv_dev; 1068 + struct net_device_context *ndev_ctx = netdev_priv(ndev); 1069 + struct hv_device *hv_dev = ndev_ctx->device_ctx; 1070 + struct rndis_device *rdev = nvdev->extension; 1074 1071 int i, ret; 1075 1072 1076 - if (!rtnl_trylock()) { 1077 - schedule_work(w); 1078 - return; 1079 - } 1080 - 1081 - rdev = nvdev->extension; 1082 - if (!rdev) 1083 - goto unlock; /* device was removed */ 1084 - 1085 - ndev = rdev->ndev; 1086 - ndev_ctx = netdev_priv(ndev); 1087 - hv_dev = ndev_ctx->device_ctx; 1073 + ASSERT_RTNL(); 1088 1074 1089 1075 memset(init_packet, 0, sizeof(struct nvsp_message)); 1090 1076 init_packet->hdr.msg_type = NVSP_MSG5_TYPE_SUBCHANNEL; ··· 1086 1100 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); 1087 1101 if (ret) { 1088 1102 netdev_err(ndev, "sub channel allocate send failed: %d\n", ret); 1089 - goto failed; 1103 + return ret; 1090 1104 } 1091 1105 1092 1106 wait_for_completion(&nvdev->channel_init_wait); 1093 1107 if (init_packet->msg.v5_msg.subchn_comp.status != NVSP_STAT_SUCCESS) { 1094 1108 netdev_err(ndev, "sub channel request failed\n"); 1095 - goto failed; 1109 + return -EIO; 1096 1110 } 1097 1111 1098 1112 nvdev->num_chn = 1 + ··· 1111 1125 for (i = 0; i < VRSS_SEND_TAB_SIZE; i++) 1112 1126 ndev_ctx->tx_table[i] = i % nvdev->num_chn; 1113 1127 1114 - netif_device_attach(ndev); 1115 - rtnl_unlock(); 1116 - return; 1117 - 1118 - failed: 1119 - /* fallback to only primary channel */ 1120 - for (i = 1; i < nvdev->num_chn; i++) 1121 - netif_napi_del(&nvdev->chan_table[i].napi); 1122 - 1123 - nvdev->max_chn = 1; 1124 - nvdev->num_chn = 1; 1125 - 1126 - netif_device_attach(ndev); 1127 - unlock: 1128 - rtnl_unlock(); 1128 + return 0; 1129 1129 } 1130 1130 1131 1131 static int rndis_netdev_set_hwcaps(struct rndis_device *rndis_device, ··· 1332 1360 netif_napi_add(net, &net_device->chan_table[i].napi, 1333 1361 netvsc_poll, NAPI_POLL_WEIGHT); 1334 1362 1335 - if (net_device->num_chn > 1) 1336 - schedule_work(&net_device->subchan_work); 1363 + return net_device; 1337 1364 1338 1365 out: 1339 - /* if unavailable, just proceed with one queue */ 1340 - if (ret) { 1341 - net_device->max_chn = 1; 1342 - net_device->num_chn = 1; 1343 - } 1344 - 1345 - /* No sub channels, device is ready */ 1346 - if (net_device->num_chn == 1) 1347 - netif_device_attach(net); 1348 - 1349 - return net_device; 1366 + /* setting up multiple channels failed */ 1367 + net_device->max_chn = 1; 1368 + net_device->num_chn = 1; 1350 1369 1351 1370 err_dev_remv: 1352 1371 rndis_filter_device_remove(dev, net_device);
+28 -8
drivers/net/ipvlan/ipvlan_main.c
··· 75 75 { 76 76 struct ipvl_dev *ipvlan; 77 77 struct net_device *mdev = port->dev; 78 - int err = 0; 78 + unsigned int flags; 79 + int err; 79 80 80 81 ASSERT_RTNL(); 81 82 if (port->mode != nval) { 83 + list_for_each_entry(ipvlan, &port->ipvlans, pnode) { 84 + flags = ipvlan->dev->flags; 85 + if (nval == IPVLAN_MODE_L3 || nval == IPVLAN_MODE_L3S) { 86 + err = dev_change_flags(ipvlan->dev, 87 + flags | IFF_NOARP); 88 + } else { 89 + err = dev_change_flags(ipvlan->dev, 90 + flags & ~IFF_NOARP); 91 + } 92 + if (unlikely(err)) 93 + goto fail; 94 + } 82 95 if (nval == IPVLAN_MODE_L3S) { 83 96 /* New mode is L3S */ 84 97 err = ipvlan_register_nf_hook(read_pnet(&port->pnet)); ··· 99 86 mdev->l3mdev_ops = &ipvl_l3mdev_ops; 100 87 mdev->priv_flags |= IFF_L3MDEV_MASTER; 101 88 } else 102 - return err; 89 + goto fail; 103 90 } else if (port->mode == IPVLAN_MODE_L3S) { 104 91 /* Old mode was L3S */ 105 92 mdev->priv_flags &= ~IFF_L3MDEV_MASTER; 106 93 ipvlan_unregister_nf_hook(read_pnet(&port->pnet)); 107 94 mdev->l3mdev_ops = NULL; 108 95 } 109 - list_for_each_entry(ipvlan, &port->ipvlans, pnode) { 110 - if (nval == IPVLAN_MODE_L3 || nval == IPVLAN_MODE_L3S) 111 - ipvlan->dev->flags |= IFF_NOARP; 112 - else 113 - ipvlan->dev->flags &= ~IFF_NOARP; 114 - } 115 96 port->mode = nval; 116 97 } 98 + return 0; 99 + 100 + fail: 101 + /* Undo the flags changes that have been done so far. */ 102 + list_for_each_entry_continue_reverse(ipvlan, &port->ipvlans, pnode) { 103 + flags = ipvlan->dev->flags; 104 + if (port->mode == IPVLAN_MODE_L3 || 105 + port->mode == IPVLAN_MODE_L3S) 106 + dev_change_flags(ipvlan->dev, flags | IFF_NOARP); 107 + else 108 + dev_change_flags(ipvlan->dev, flags & ~IFF_NOARP); 109 + } 110 + 117 111 return err; 118 112 } 119 113
+1 -1
drivers/net/phy/dp83tc811.c
··· 222 222 if (err < 0) 223 223 return err; 224 224 225 - err = phy_write(phydev, MII_DP83811_INT_STAT1, 0); 225 + err = phy_write(phydev, MII_DP83811_INT_STAT2, 0); 226 226 } 227 227 228 228 return err;
+1 -1
drivers/net/ppp/pppoe.c
··· 1107 1107 .socketpair = sock_no_socketpair, 1108 1108 .accept = sock_no_accept, 1109 1109 .getname = pppoe_getname, 1110 - .poll_mask = datagram_poll_mask, 1110 + .poll = datagram_poll, 1111 1111 .listen = sock_no_listen, 1112 1112 .shutdown = sock_no_shutdown, 1113 1113 .setsockopt = sock_no_setsockopt,
+34 -3
drivers/net/usb/lan78xx.c
··· 64 64 #define DEFAULT_RX_CSUM_ENABLE (true) 65 65 #define DEFAULT_TSO_CSUM_ENABLE (true) 66 66 #define DEFAULT_VLAN_FILTER_ENABLE (true) 67 + #define DEFAULT_VLAN_RX_OFFLOAD (true) 67 68 #define TX_OVERHEAD (8) 68 69 #define RXW_PADDING 2 69 70 ··· 2299 2298 if ((ll_mtu % dev->maxpacket) == 0) 2300 2299 return -EDOM; 2301 2300 2302 - ret = lan78xx_set_rx_max_frame_length(dev, new_mtu + ETH_HLEN); 2301 + ret = lan78xx_set_rx_max_frame_length(dev, new_mtu + VLAN_ETH_HLEN); 2303 2302 2304 2303 netdev->mtu = new_mtu; 2305 2304 ··· 2365 2364 } 2366 2365 2367 2366 if (features & NETIF_F_HW_VLAN_CTAG_RX) 2367 + pdata->rfe_ctl |= RFE_CTL_VLAN_STRIP_; 2368 + else 2369 + pdata->rfe_ctl &= ~RFE_CTL_VLAN_STRIP_; 2370 + 2371 + if (features & NETIF_F_HW_VLAN_CTAG_FILTER) 2368 2372 pdata->rfe_ctl |= RFE_CTL_VLAN_FILTER_; 2369 2373 else 2370 2374 pdata->rfe_ctl &= ~RFE_CTL_VLAN_FILTER_; ··· 2593 2587 buf |= FCT_TX_CTL_EN_; 2594 2588 ret = lan78xx_write_reg(dev, FCT_TX_CTL, buf); 2595 2589 2596 - ret = lan78xx_set_rx_max_frame_length(dev, dev->net->mtu + ETH_HLEN); 2590 + ret = lan78xx_set_rx_max_frame_length(dev, 2591 + dev->net->mtu + VLAN_ETH_HLEN); 2597 2592 2598 2593 ret = lan78xx_read_reg(dev, MAC_RX, &buf); 2599 2594 buf |= MAC_RX_RXEN_; ··· 2982 2975 if (DEFAULT_TSO_CSUM_ENABLE) 2983 2976 dev->net->features |= NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_SG; 2984 2977 2978 + if (DEFAULT_VLAN_RX_OFFLOAD) 2979 + dev->net->features |= NETIF_F_HW_VLAN_CTAG_RX; 2980 + 2981 + if (DEFAULT_VLAN_FILTER_ENABLE) 2982 + dev->net->features |= NETIF_F_HW_VLAN_CTAG_FILTER; 2983 + 2985 2984 dev->net->hw_features = dev->net->features; 2986 2985 2987 2986 ret = lan78xx_setup_irq_domain(dev); ··· 3052 3039 struct sk_buff *skb, 3053 3040 u32 rx_cmd_a, u32 rx_cmd_b) 3054 3041 { 3042 + /* HW Checksum offload appears to be flawed if used when not stripping 3043 + * VLAN headers. Drop back to S/W checksums under these conditions. 3044 + */ 3055 3045 if (!(dev->net->features & NETIF_F_RXCSUM) || 3046 + unlikely(rx_cmd_a & RX_CMD_A_ICSM_) || 3047 + ((rx_cmd_a & RX_CMD_A_FVTG_) && 3048 + !(dev->net->features & NETIF_F_HW_VLAN_CTAG_RX))) { 3057 3049 skb->ip_summed = CHECKSUM_NONE; 3058 3050 } else { 3059 3051 skb->csum = ntohs((u16)(rx_cmd_b >> RX_CMD_B_CSUM_SHIFT_)); 3060 3052 skb->ip_summed = CHECKSUM_COMPLETE; 3061 3053 } 3054 + } 3055 + 3056 + static void lan78xx_rx_vlan_offload(struct lan78xx_net *dev, 3057 + struct sk_buff *skb, 3058 + u32 rx_cmd_a, u32 rx_cmd_b) 3059 + { 3060 + if ((dev->net->features & NETIF_F_HW_VLAN_CTAG_RX) && 3061 + (rx_cmd_a & RX_CMD_A_FVTG_)) 3062 + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), 3063 + (rx_cmd_b & 0xffff)); 3062 3064 } 3063 3065 3064 3066 static void lan78xx_skb_return(struct lan78xx_net *dev, struct sk_buff *skb) ··· 3140 3112 if (skb->len == size) { 3141 3113 lan78xx_rx_csum_offload(dev, skb, 3142 3114 rx_cmd_a, rx_cmd_b); 3115 + lan78xx_rx_vlan_offload(dev, skb, 3116 + rx_cmd_a, rx_cmd_b); 3143 3117 3144 3118 skb_trim(skb, skb->len - 4); /* remove fcs */ 3145 3119 skb->truesize = size + sizeof(struct sk_buff); ··· 3160 3130 skb_set_tail_pointer(skb2, size); 3161 3131 3162 3132 lan78xx_rx_csum_offload(dev, skb2, rx_cmd_a, rx_cmd_b); 3133 + lan78xx_rx_vlan_offload(dev, skb2, rx_cmd_a, rx_cmd_b); 3163 3134 3164 3135 skb_trim(skb2, skb2->len - 4); /* remove fcs */ 3165 3136 skb2->truesize = size + sizeof(struct sk_buff);
+2 -1
drivers/net/usb/r8152.c
··· 3966 3966 #ifdef CONFIG_PM_SLEEP 3967 3967 unregister_pm_notifier(&tp->pm_notifier); 3968 3968 #endif 3969 - napi_disable(&tp->napi); 3969 + if (!test_bit(RTL8152_UNPLUG, &tp->flags)) 3970 + napi_disable(&tp->napi); 3970 3971 clear_bit(WORK_ENABLE, &tp->flags); 3971 3972 usb_kill_urb(tp->intr_urb); 3972 3973 cancel_delayed_work_sync(&tp->schedule);
+19 -11
drivers/net/virtio_net.c
··· 53 53 /* Amount of XDP headroom to prepend to packets for use by xdp_adjust_head */ 54 54 #define VIRTIO_XDP_HEADROOM 256 55 55 56 + /* Separating two types of XDP xmit */ 57 + #define VIRTIO_XDP_TX BIT(0) 58 + #define VIRTIO_XDP_REDIR BIT(1) 59 + 56 60 /* RX packet size EWMA. The average packet size is used to determine the packet 57 61 * buffer size when refilling RX rings. As the entire RX ring may be refilled 58 62 * at once, the weight is chosen so that the EWMA will be insensitive to short- ··· 586 582 struct receive_queue *rq, 587 583 void *buf, void *ctx, 588 584 unsigned int len, 589 - bool *xdp_xmit) 585 + unsigned int *xdp_xmit) 590 586 { 591 587 struct sk_buff *skb; 592 588 struct bpf_prog *xdp_prog; ··· 658 654 trace_xdp_exception(vi->dev, xdp_prog, act); 659 655 goto err_xdp; 660 656 } 661 - *xdp_xmit = true; 657 + *xdp_xmit |= VIRTIO_XDP_TX; 662 658 rcu_read_unlock(); 663 659 goto xdp_xmit; 664 660 case XDP_REDIRECT: 665 661 err = xdp_do_redirect(dev, &xdp, xdp_prog); 666 662 if (err) 667 663 goto err_xdp; 668 - *xdp_xmit = true; 664 + *xdp_xmit |= VIRTIO_XDP_REDIR; 669 665 rcu_read_unlock(); 670 666 goto xdp_xmit; 671 667 default: ··· 727 723 void *buf, 728 724 void *ctx, 729 725 unsigned int len, 730 - bool *xdp_xmit) 726 + unsigned int *xdp_xmit) 731 727 { 732 728 struct virtio_net_hdr_mrg_rxbuf *hdr = buf; 733 729 u16 num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers); ··· 822 818 put_page(xdp_page); 823 819 goto err_xdp; 824 820 } 825 - *xdp_xmit = true; 821 + *xdp_xmit |= VIRTIO_XDP_TX; 826 822 if (unlikely(xdp_page != page)) 827 823 put_page(page); 828 824 rcu_read_unlock(); ··· 834 830 put_page(xdp_page); 835 831 goto err_xdp; 836 832 } 837 - *xdp_xmit = true; 833 + *xdp_xmit |= VIRTIO_XDP_REDIR; 838 834 if (unlikely(xdp_page != page)) 839 835 put_page(page); 840 836 rcu_read_unlock(); ··· 943 939 } 944 940 945 941 static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq, 946 - void *buf, unsigned int len, void **ctx, bool *xdp_xmit) 942 + void *buf, unsigned int len, void **ctx, 943 + unsigned int *xdp_xmit) 947 944 { 948 945 struct net_device *dev = vi->dev; 949 946 struct sk_buff *skb; ··· 1237 1232 } 1238 1233 } 1239 1234 1240 - static int virtnet_receive(struct receive_queue *rq, int budget, bool *xdp_xmit) 1235 + static int virtnet_receive(struct receive_queue *rq, int budget, 1236 + unsigned int *xdp_xmit) 1241 1237 { 1242 1238 struct virtnet_info *vi = rq->vq->vdev->priv; 1243 1239 unsigned int len, received = 0, bytes = 0; ··· 1327 1321 struct virtnet_info *vi = rq->vq->vdev->priv; 1328 1322 struct send_queue *sq; 1329 1323 unsigned int received, qp; 1330 - bool xdp_xmit = false; 1324 + unsigned int xdp_xmit = 0; 1331 1325 1332 1326 virtnet_poll_cleantx(rq); 1333 1327 ··· 1337 1331 if (received < budget) 1338 1332 virtqueue_napi_complete(napi, rq->vq, received); 1339 1333 1340 - if (xdp_xmit) { 1334 + if (xdp_xmit & VIRTIO_XDP_REDIR) 1335 + xdp_do_flush_map(); 1336 + 1337 + if (xdp_xmit & VIRTIO_XDP_TX) { 1341 1338 qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + 1342 1339 smp_processor_id(); 1343 1340 sq = &vi->sq[qp]; 1344 1341 virtqueue_kick(sq->vq); 1345 - xdp_do_flush_map(); 1346 1342 } 1347 1343 1348 1344 return received;
+1 -3
drivers/net/vxlan.c
··· 624 624 flush = 0; 625 625 626 626 out: 627 - skb_gro_remcsum_cleanup(skb, &grc); 628 - skb->remcsum_offload = 0; 629 - NAPI_GRO_CB(skb)->flush |= flush; 627 + skb_gro_flush_final_remcsum(skb, pp, flush, &grc); 630 628 631 629 return pp; 632 630 }
+2 -2
drivers/nfc/pn533/usb.c
··· 74 74 struct sk_buff *skb = NULL; 75 75 76 76 if (!urb->status) { 77 - skb = alloc_skb(urb->actual_length, GFP_KERNEL); 77 + skb = alloc_skb(urb->actual_length, GFP_ATOMIC); 78 78 if (!skb) { 79 79 nfc_err(&phy->udev->dev, "failed to alloc memory\n"); 80 80 } else { ··· 186 186 187 187 if (dev->protocol_type == PN533_PROTO_REQ_RESP) { 188 188 /* request for response for sent packet directly */ 189 - rc = pn533_submit_urb_for_response(phy, GFP_ATOMIC); 189 + rc = pn533_submit_urb_for_response(phy, GFP_KERNEL); 190 190 if (rc) 191 191 goto error; 192 192 } else if (dev->protocol_type == PN533_PROTO_REQ_ACK_RESP) {
+2 -1
drivers/nvdimm/pmem.c
··· 414 414 blk_queue_logical_block_size(q, pmem_sector_size(ndns)); 415 415 blk_queue_max_hw_sectors(q, UINT_MAX); 416 416 blk_queue_flag_set(QUEUE_FLAG_NONROT, q); 417 - blk_queue_flag_set(QUEUE_FLAG_DAX, q); 417 + if (pmem->pfn_flags & PFN_MAP) 418 + blk_queue_flag_set(QUEUE_FLAG_DAX, q); 418 419 q->queuedata = pmem; 419 420 420 421 disk = alloc_disk_node(0, nid);
+5 -2
drivers/nvme/host/rdma.c
··· 732 732 blk_cleanup_queue(ctrl->ctrl.admin_q); 733 733 nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.admin_tagset); 734 734 } 735 - nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe, 736 - sizeof(struct nvme_command), DMA_TO_DEVICE); 735 + if (ctrl->async_event_sqe.data) { 736 + nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe, 737 + sizeof(struct nvme_command), DMA_TO_DEVICE); 738 + ctrl->async_event_sqe.data = NULL; 739 + } 737 740 nvme_rdma_free_queue(&ctrl->queues[0]); 738 741 } 739 742
+3 -3
drivers/pci/Makefile
··· 28 28 obj-$(CONFIG_PCI_ECAM) += ecam.o 29 29 obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o 30 30 31 - obj-y += controller/ 32 - obj-y += switch/ 33 - 34 31 # Endpoint library must be initialized before its users 35 32 obj-$(CONFIG_PCI_ENDPOINT) += endpoint/ 33 + 34 + obj-y += controller/ 35 + obj-y += switch/ 36 36 37 37 ccflags-$(CONFIG_PCI_DEBUG) := -DDEBUG
-3
drivers/pci/controller/Kconfig
··· 96 96 depends on OF 97 97 select PCI_HOST_COMMON 98 98 select IRQ_DOMAIN 99 - select PCI_DOMAINS 100 99 help 101 100 Say Y here if you want to support a simple generic PCI host 102 101 controller, such as the one emulated by kvmtool. ··· 137 138 138 139 config PCIE_IPROC 139 140 tristate 140 - select PCI_DOMAINS 141 141 help 142 142 This enables the iProc PCIe core controller support for Broadcom's 143 143 iProc family of SoCs. An appropriate bus interface driver needs ··· 174 176 config PCIE_ALTERA 175 177 bool "Altera PCIe controller" 176 178 depends on ARM || NIOS2 || COMPILE_TEST 177 - select PCI_DOMAINS 178 179 help 179 180 Say Y here if you want to enable PCIe controller support on Altera 180 181 FPGA.
+9 -1
drivers/pci/hotplug/acpi_pcihp.c
··· 7 7 * All rights reserved. 8 8 * 9 9 * Send feedback to <kristen.c.accardi@intel.com> 10 - * 11 10 */ 12 11 13 12 #include <linux/module.h> ··· 86 87 return 0; 87 88 88 89 /* If _OSC exists, we should not evaluate OSHP */ 90 + 91 + /* 92 + * If there's no ACPI host bridge (i.e., ACPI support is compiled 93 + * into the kernel but the hardware platform doesn't support ACPI), 94 + * there's nothing to do here. 95 + */ 89 96 host = pci_find_host_bridge(pdev->bus); 90 97 root = acpi_pci_find_root(ACPI_HANDLE(&host->dev)); 98 + if (!root) 99 + return 0; 100 + 91 101 if (root->osc_support_set) 92 102 goto no_control; 93 103
+1 -1
drivers/perf/xgene_pmu.c
··· 1463 1463 case PMU_TYPE_IOB: 1464 1464 return devm_kasprintf(dev, GFP_KERNEL, "iob%d", id); 1465 1465 case PMU_TYPE_IOB_SLOW: 1466 - return devm_kasprintf(dev, GFP_KERNEL, "iob-slow%d", id); 1466 + return devm_kasprintf(dev, GFP_KERNEL, "iob_slow%d", id); 1467 1467 case PMU_TYPE_MCB: 1468 1468 return devm_kasprintf(dev, GFP_KERNEL, "mcb%d", id); 1469 1469 case PMU_TYPE_MC:
+12 -1
drivers/s390/net/qeth_core.h
··· 829 829 /*some helper functions*/ 830 830 #define QETH_CARD_IFNAME(card) (((card)->dev)? (card)->dev->name : "") 831 831 832 + static inline void qeth_scrub_qdio_buffer(struct qdio_buffer *buf, 833 + unsigned int elements) 834 + { 835 + unsigned int i; 836 + 837 + for (i = 0; i < elements; i++) 838 + memset(&buf->element[i], 0, sizeof(struct qdio_buffer_element)); 839 + buf->element[14].sflags = 0; 840 + buf->element[15].sflags = 0; 841 + } 842 + 832 843 /** 833 844 * qeth_get_elements_for_range() - find number of SBALEs to cover range. 834 845 * @start: Start of the address range. ··· 1040 1029 __u16, __u16, 1041 1030 enum qeth_prot_versions); 1042 1031 int qeth_set_features(struct net_device *, netdev_features_t); 1043 - void qeth_recover_features(struct net_device *dev); 1032 + void qeth_enable_hw_features(struct net_device *dev); 1044 1033 netdev_features_t qeth_fix_features(struct net_device *, netdev_features_t); 1045 1034 netdev_features_t qeth_features_check(struct sk_buff *skb, 1046 1035 struct net_device *dev,
+28 -19
drivers/s390/net/qeth_core_main.c
··· 73 73 struct qeth_qdio_out_buffer *buf, 74 74 enum iucv_tx_notify notification); 75 75 static void qeth_release_skbs(struct qeth_qdio_out_buffer *buf); 76 - static void qeth_clear_output_buffer(struct qeth_qdio_out_q *queue, 77 - struct qeth_qdio_out_buffer *buf, 78 - enum qeth_qdio_buffer_states newbufstate); 79 76 static int qeth_init_qdio_out_buf(struct qeth_qdio_out_q *, int); 80 77 81 78 struct workqueue_struct *qeth_wq; ··· 486 489 struct qaob *aob; 487 490 struct qeth_qdio_out_buffer *buffer; 488 491 enum iucv_tx_notify notification; 492 + unsigned int i; 489 493 490 494 aob = (struct qaob *) phys_to_virt(phys_aob_addr); 491 495 QETH_CARD_TEXT(card, 5, "haob"); ··· 511 513 qeth_notify_skbs(buffer->q, buffer, notification); 512 514 513 515 buffer->aob = NULL; 514 - qeth_clear_output_buffer(buffer->q, buffer, 515 - QETH_QDIO_BUF_HANDLED_DELAYED); 516 + /* Free dangling allocations. The attached skbs are handled by 517 + * qeth_cleanup_handled_pending(). 518 + */ 519 + for (i = 0; 520 + i < aob->sb_count && i < QETH_MAX_BUFFER_ELEMENTS(card); 521 + i++) { 522 + if (aob->sba[i] && buffer->is_header[i]) 523 + kmem_cache_free(qeth_core_header_cache, 524 + (void *) aob->sba[i]); 525 + } 526 + atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED); 516 527 517 - /* from here on: do not touch buffer anymore */ 518 528 qdio_release_aob(aob); 519 529 520 530 ··· 3765 3759 QETH_CARD_TEXT(queue->card, 5, "aob"); 3766 3760 QETH_CARD_TEXT_(queue->card, 5, "%lx", 3767 3761 virt_to_phys(buffer->aob)); 3762 + 3763 + /* prepare the queue slot for re-use: */ 3764 + qeth_scrub_qdio_buffer(buffer->buffer, 3765 + QETH_MAX_BUFFER_ELEMENTS(card)); 3768 3766 if (qeth_init_qdio_out_buf(queue, bidx)) { 3769 3767 QETH_CARD_TEXT(card, 2, "outofbuf"); 3770 3768 qeth_schedule_recovery(card); ··· 4844 4834 goto out; 4845 4835 } 4846 4836 4847 - ccw_device_get_id(CARD_RDEV(card), &id); 4837 + ccw_device_get_id(CARD_DDEV(card), &id); 4848 4838 request->resp_buf_len = sizeof(*response); 4849 4839 request->resp_version = DIAG26C_VERSION2; 4850 4840 request->op_code = DIAG26C_GET_MAC; ··· 6469 6459 #define QETH_HW_FEATURES (NETIF_F_RXCSUM | NETIF_F_IP_CSUM | NETIF_F_TSO | \ 6470 6460 NETIF_F_IPV6_CSUM) 6471 6461 /** 6472 - * qeth_recover_features() - Restore device features after recovery 6473 - * @dev: the recovering net_device 6474 - * 6475 - * Caller must hold rtnl lock. 6462 + * qeth_enable_hw_features() - (Re-)Enable HW functions for device features 6463 + * @dev: a net_device 6476 6464 */ 6477 - void qeth_recover_features(struct net_device *dev) 6465 + void qeth_enable_hw_features(struct net_device *dev) 6478 6466 { 6479 - netdev_features_t features = dev->features; 6480 6467 struct qeth_card *card = dev->ml_priv; 6468 + netdev_features_t features; 6481 6469 6470 + rtnl_lock(); 6471 + features = dev->features; 6482 6472 /* force-off any feature that needs an IPA sequence. 6483 6473 * netdev_update_features() will restart them. 6484 6474 */ 6485 6475 dev->features &= ~QETH_HW_FEATURES; 6486 6476 netdev_update_features(dev); 6487 - 6488 - if (features == dev->features) 6489 - return; 6490 - dev_warn(&card->gdev->dev, 6491 - "Device recovery failed to restore all offload features\n"); 6477 + if (features != dev->features) 6478 + dev_warn(&card->gdev->dev, 6479 + "Device recovery failed to restore all offload features\n"); 6480 + rtnl_unlock(); 6492 6481 } 6493 - EXPORT_SYMBOL_GPL(qeth_recover_features); 6482 + EXPORT_SYMBOL_GPL(qeth_enable_hw_features); 6494 6483 6495 6484 int qeth_set_features(struct net_device *dev, netdev_features_t features) 6496 6485 {
+15 -9
drivers/s390/net/qeth_l2_main.c
··· 140 140 141 141 static int qeth_l2_write_mac(struct qeth_card *card, u8 *mac) 142 142 { 143 - enum qeth_ipa_cmds cmd = is_multicast_ether_addr_64bits(mac) ? 143 + enum qeth_ipa_cmds cmd = is_multicast_ether_addr(mac) ? 144 144 IPA_CMD_SETGMAC : IPA_CMD_SETVMAC; 145 145 int rc; 146 146 ··· 157 157 158 158 static int qeth_l2_remove_mac(struct qeth_card *card, u8 *mac) 159 159 { 160 - enum qeth_ipa_cmds cmd = is_multicast_ether_addr_64bits(mac) ? 160 + enum qeth_ipa_cmds cmd = is_multicast_ether_addr(mac) ? 161 161 IPA_CMD_DELGMAC : IPA_CMD_DELVMAC; 162 162 int rc; 163 163 ··· 501 501 return -ERESTARTSYS; 502 502 } 503 503 504 + /* avoid racing against concurrent state change: */ 505 + if (!mutex_trylock(&card->conf_mutex)) 506 + return -EAGAIN; 507 + 504 508 if (!qeth_card_hw_is_reachable(card)) { 505 509 ether_addr_copy(dev->dev_addr, addr->sa_data); 506 - return 0; 510 + goto out_unlock; 507 511 } 508 512 513 /* don't register the same address twice */ 510 514 if (ether_addr_equal_64bits(dev->dev_addr, addr->sa_data) && 511 515 (card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED)) 512 - return 0; 516 + goto out_unlock; 513 517 514 518 /* add the new address, switch over, drop the old */ 515 519 rc = qeth_l2_send_setmac(card, addr->sa_data); 516 520 if (rc) 517 - return rc; 521 + goto out_unlock; 518 522 ether_addr_copy(old_addr, dev->dev_addr); 519 523 ether_addr_copy(dev->dev_addr, addr->sa_data); 520 524 521 525 if (card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED) 522 526 qeth_l2_remove_mac(card, old_addr); 523 527 card->info.mac_bits |= QETH_LAYER2_MAC_REGISTERED; 524 - return 0; 528 + 529 + out_unlock: 530 + mutex_unlock(&card->conf_mutex); 531 + return rc; 525 532 } 526 533 527 534 static void qeth_promisc_to_bridge(struct qeth_card *card) ··· 1119 1112 netif_carrier_off(card->dev); 1120 1113 1121 1114 qeth_set_allowed_threads(card, 0xffffffff, 0); 1115 + 1116 + qeth_enable_hw_features(card->dev); 1122 1117 if (recover_flag == CARD_STATE_RECOVER) { 1123 1118 if (recovery_mode && 1124 1119 card->info.type != QETH_CARD_TYPE_OSN) { ··· 1132 1123 } 1133 1124 /* this also sets saved unicast addresses */ 1134 1125 qeth_l2_set_rx_mode(card->dev); 1135 - rtnl_lock(); 1136 - qeth_recover_features(card->dev); 1137 - rtnl_unlock(); 1138 1126 } 1139 1127 /* let user_space know that device is online */ 1140 1128 kobject_uevent(&gdev->dev.kobj, KOBJ_CHANGE);
+2 -1
drivers/s390/net/qeth_l3_main.c
··· 2662 2662 netif_carrier_on(card->dev); 2663 2663 else 2664 2664 netif_carrier_off(card->dev); 2665 + 2666 + qeth_enable_hw_features(card->dev); 2665 2667 if (recover_flag == CARD_STATE_RECOVER) { 2666 2668 rtnl_lock(); 2667 2669 if (recovery_mode) ··· 2671 2669 else 2672 2670 dev_open(card->dev); 2673 2671 qeth_l3_set_rx_mode(card->dev); 2674 - qeth_recover_features(card->dev); 2675 2672 rtnl_unlock(); 2676 2673 } 2677 2674 qeth_trace_features(card);
-2
drivers/scsi/ipr.c
··· 760 760 ioa_cfg->hrrq[i].allow_interrupts = 0; 761 761 spin_unlock(&ioa_cfg->hrrq[i]._lock); 762 762 } 763 - wmb(); 764 763 765 764 /* Set interrupt mask to stop all new interrupts */ 766 765 if (ioa_cfg->sis64) ··· 8402 8403 ioa_cfg->hrrq[i].allow_interrupts = 1; 8403 8404 spin_unlock(&ioa_cfg->hrrq[i]._lock); 8404 8405 } 8405 - wmb(); 8406 8406 if (ioa_cfg->sis64) { 8407 8407 /* Set the adapter to the correct endian mode. */ 8408 8408 writel(IPR_ENDIAN_SWAP_KEY, ioa_cfg->regs.endian_swap_reg);
+3 -4
drivers/scsi/qla2xxx/qla_target.c
··· 1224 1224 void qlt_schedule_sess_for_deletion(struct fc_port *sess) 1225 1225 { 1226 1226 struct qla_tgt *tgt = sess->tgt; 1227 - struct qla_hw_data *ha = sess->vha->hw; 1228 1227 unsigned long flags; 1229 1228 1230 1229 if (sess->disc_state == DSC_DELETE_PEND) ··· 1240 1241 return; 1241 1242 } 1242 1243 1243 - spin_lock_irqsave(&ha->tgt.sess_lock, flags); 1244 1244 if (sess->deleted == QLA_SESS_DELETED) 1245 1245 sess->logout_on_delete = 0; 1246 1246 1247 + spin_lock_irqsave(&sess->vha->work_lock, flags); 1247 1248 if (sess->deleted == QLA_SESS_DELETION_IN_PROGRESS) { 1248 - spin_unlock_irqrestore(&ha->tgt.sess_lock, flags); 1249 + spin_unlock_irqrestore(&sess->vha->work_lock, flags); 1249 1250 return; 1250 1251 } 1251 1252 sess->deleted = QLA_SESS_DELETION_IN_PROGRESS; 1252 - spin_unlock_irqrestore(&ha->tgt.sess_lock, flags); 1253 + spin_unlock_irqrestore(&sess->vha->work_lock, flags); 1253 1254 1254 1255 sess->disc_state = DSC_DELETE_PEND; 1255 1256
+1 -1
drivers/scsi/scsi_debug.c
··· 5507 5507 int k = sdebug_add_host; 5508 5508 5509 5509 stop_all_queued(); 5510 - free_all_queued(); 5511 5510 for (; k; k--) 5512 5511 sdebug_remove_adapter(); 5512 + free_all_queued(); 5513 5513 driver_unregister(&sdebug_driverfs_driver); 5514 5514 bus_unregister(&pseudo_lld_bus); 5515 5515 root_device_unregister(pseudo_primary);
+9 -4
drivers/soc/imx/gpcv2.c
··· 39 39 40 40 #define GPC_M4_PU_PDN_FLG 0x1bc 41 41 42 - 43 - #define PGC_MIPI 4 44 - #define PGC_PCIE 5 45 - #define PGC_USB_HSIC 8 42 + /* 43 + * The PGC offset values in Reference Manual 44 + * (Rev. 1, 01/2018 and the older ones) GPC chapter's 45 + * GPC_PGC memory map are incorrect, below offset 46 + * values are from design RTL. 47 + */ 48 + #define PGC_MIPI 16 49 + #define PGC_PCIE 17 50 + #define PGC_USB_HSIC 20 46 51 #define GPC_PGC_CTRL(n) (0x800 + (n) * 0x40) 47 52 #define GPC_PGC_SR(n) (GPC_PGC_CTRL(n) + 0xc) 48 53
+2 -1
drivers/soc/qcom/Kconfig
··· 5 5 6 6 config QCOM_COMMAND_DB 7 7 bool "Qualcomm Command DB" 8 - depends on (ARCH_QCOM && OF) || COMPILE_TEST 8 + depends on ARCH_QCOM || COMPILE_TEST 9 + depends on OF_RESERVED_MEM 9 10 help 10 11 Command DB queries shared memory by key string for shared system 11 12 resources. Platform drivers that require to set state of a shared
+29 -6
drivers/soc/renesas/rcar-sysc.c
··· 194 194 195 195 static bool has_cpg_mstp; 196 196 197 - static void __init rcar_sysc_pd_setup(struct rcar_sysc_pd *pd) 197 + static int __init rcar_sysc_pd_setup(struct rcar_sysc_pd *pd) 198 198 { 199 199 struct generic_pm_domain *genpd = &pd->genpd; 200 200 const char *name = pd->genpd.name; 201 201 struct dev_power_governor *gov = &simple_qos_governor; 202 + int error; 202 203 203 204 if (pd->flags & PD_CPU) { 204 205 /* ··· 252 251 rcar_sysc_power_up(&pd->ch); 253 252 254 253 finalize: 255 - pm_genpd_init(genpd, gov, false); 254 + error = pm_genpd_init(genpd, gov, false); 255 + if (error) 256 + pr_err("Failed to init PM domain %s: %d\n", name, error); 257 + 258 + return error; 256 259 } 257 260 258 261 static const struct of_device_id rcar_sysc_matches[] __initconst = { ··· 380 375 pr_debug("%pOF: syscier = 0x%08x\n", np, syscier); 381 376 iowrite32(syscier, base + SYSCIER); 382 377 378 + /* 379 + * First, create all PM domains 380 + */ 383 381 for (i = 0; i < info->num_areas; i++) { 384 382 const struct rcar_sysc_area *area = &info->areas[i]; 385 383 struct rcar_sysc_pd *pd; ··· 405 397 pd->ch.isr_bit = area->isr_bit; 406 398 pd->flags = area->flags; 407 399 408 - rcar_sysc_pd_setup(pd); 409 - if (area->parent >= 0) 410 - pm_genpd_add_subdomain(domains->domains[area->parent], 411 - &pd->genpd); 400 + error = rcar_sysc_pd_setup(pd); 401 + if (error) 402 + goto out_put; 412 403 413 404 domains->domains[area->isr_bit] = &pd->genpd; 405 + } 406 + 407 + /* 408 + * Second, link all PM domains to their parents 409 + */ 410 + for (i = 0; i < info->num_areas; i++) { 411 + const struct rcar_sysc_area *area = &info->areas[i]; 412 + 413 + if (!area->name || area->parent < 0) 414 + continue; 415 + 416 + error = pm_genpd_add_subdomain(domains->domains[area->parent], 417 + domains->domains[area->isr_bit]); 418 + if (error) 419 + pr_warn("Failed to add PM subdomain %s to parent %u\n", 420 + area->name, area->parent); 414 421 } 415 422 416 423 error = of_genpd_add_provider_onecell(np, &domains->onecell_data);
+1 -1
drivers/staging/android/ion/ion_heap.c
··· 30 30 struct page **tmp = pages; 31 31 32 32 if (!pages) 33 - return NULL; 33 + return ERR_PTR(-ENOMEM); 34 34 35 35 if (buffer->flags & ION_FLAG_CACHED) 36 36 pgprot = PAGE_KERNEL;
+1 -1
drivers/staging/comedi/drivers/quatech_daqp_cs.c
··· 642 642 /* Make sure D/A update mode is direct update */ 643 643 outb(0, dev->iobase + DAQP_AUX_REG); 644 644 645 - for (i = 0; i > insn->n; i++) { 645 + for (i = 0; i < insn->n; i++) { 646 646 unsigned int val = data[i]; 647 647 int ret; 648 648
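The quatech_daqp_cs hunk above is a one-character fix to a classic inverted loop condition: with `i > insn->n` the body never runs for any non-negative sample count, so no D/A samples were ever written. A minimal user-space sketch of the bug class (illustrative names, not the comedi API):

```c
#include <assert.h>

/*
 * Stand-in for the driver's insn->n sample loop: n samples should be
 * written, one per iteration.  The buggy form "for (i = 0; i > n; i++)"
 * never executes its body for n >= 0; the fix flips the comparison.
 */
static int write_samples(int n)
{
    int written = 0;
    int i;

    for (i = 0; i < n; i++)   /* was: i > n, which never iterates */
        written++;
    return written;
}
```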
+1
drivers/staging/typec/Kconfig
··· 11 11 12 12 config TYPEC_RT1711H 13 13 tristate "Richtek RT1711H Type-C chip driver" 14 + depends on I2C 14 15 select TYPEC_TCPCI 15 16 help 16 17 Richtek RT1711H Type-C chip driver that works with
+36 -8
drivers/target/target_core_user.c
··· 656 656 } 657 657 658 658 static void gather_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd, 659 - bool bidi) 659 + bool bidi, uint32_t read_len) 660 660 { 661 661 struct se_cmd *se_cmd = cmd->se_cmd; 662 662 int i, dbi; ··· 689 689 for_each_sg(data_sg, sg, data_nents, i) { 690 690 int sg_remaining = sg->length; 691 691 to = kmap_atomic(sg_page(sg)) + sg->offset; 692 - while (sg_remaining > 0) { 692 + while (sg_remaining > 0 && read_len > 0) { 693 693 if (block_remaining == 0) { 694 694 if (from) 695 695 kunmap_atomic(from); ··· 701 701 } 702 702 copy_bytes = min_t(size_t, sg_remaining, 703 703 block_remaining); 704 + if (read_len < copy_bytes) 705 + copy_bytes = read_len; 704 706 offset = DATA_BLOCK_SIZE - block_remaining; 705 707 tcmu_flush_dcache_range(from, copy_bytes); 706 708 memcpy(to + sg->length - sg_remaining, from + offset, ··· 710 708 711 709 sg_remaining -= copy_bytes; 712 710 block_remaining -= copy_bytes; 711 + read_len -= copy_bytes; 713 712 } 714 713 kunmap_atomic(to - sg->offset); 714 + if (read_len == 0) 715 + break; 715 716 } 716 717 if (from) 717 718 kunmap_atomic(from); ··· 1047 1042 { 1048 1043 struct se_cmd *se_cmd = cmd->se_cmd; 1049 1044 struct tcmu_dev *udev = cmd->tcmu_dev; 1045 + bool read_len_valid = false; 1046 + uint32_t read_len = se_cmd->data_length; 1050 1047 1051 1048 /* 1052 1049 * cmd has been completed already from timeout, just reclaim ··· 1063 1056 pr_warn("TCMU: Userspace set UNKNOWN_OP flag on se_cmd %p\n", 1064 1057 cmd->se_cmd); 1065 1058 entry->rsp.scsi_status = SAM_STAT_CHECK_CONDITION; 1066 - } else if (entry->rsp.scsi_status == SAM_STAT_CHECK_CONDITION) { 1059 + goto done; 1060 + } 1061 + 1062 + if (se_cmd->data_direction == DMA_FROM_DEVICE && 1063 + (entry->hdr.uflags & TCMU_UFLAG_READ_LEN) && entry->rsp.read_len) { 1064 + read_len_valid = true; 1065 + if (entry->rsp.read_len < read_len) 1066 + read_len = entry->rsp.read_len; 1067 + } 1068 + 1069 + if (entry->rsp.scsi_status == SAM_STAT_CHECK_CONDITION) { 
1067 1070 transport_copy_sense_to_cmd(se_cmd, entry->rsp.sense_buffer); 1068 - } else if (se_cmd->se_cmd_flags & SCF_BIDI) { 1071 + if (!read_len_valid ) 1072 + goto done; 1073 + else 1074 + se_cmd->se_cmd_flags |= SCF_TREAT_READ_AS_NORMAL; 1075 + } 1076 + if (se_cmd->se_cmd_flags & SCF_BIDI) { 1069 1077 /* Get Data-In buffer before clean up */ 1070 - gather_data_area(udev, cmd, true); 1078 + gather_data_area(udev, cmd, true, read_len); 1071 1079 } else if (se_cmd->data_direction == DMA_FROM_DEVICE) { 1072 - gather_data_area(udev, cmd, false); 1080 + gather_data_area(udev, cmd, false, read_len); 1073 1081 } else if (se_cmd->data_direction == DMA_TO_DEVICE) { 1074 1082 /* TODO: */ 1075 1083 } else if (se_cmd->data_direction != DMA_NONE) { ··· 1092 1070 se_cmd->data_direction); 1093 1071 } 1094 1072 1095 - target_complete_cmd(cmd->se_cmd, entry->rsp.scsi_status); 1073 + done: 1074 + if (read_len_valid) { 1075 + pr_debug("read_len = %d\n", read_len); 1076 + target_complete_cmd_with_length(cmd->se_cmd, 1077 + entry->rsp.scsi_status, read_len); 1078 + } else 1079 + target_complete_cmd(cmd->se_cmd, entry->rsp.scsi_status); 1096 1080 1097 1081 out: 1098 1082 cmd->se_cmd = NULL; ··· 1768 1740 /* Initialise the mailbox of the ring buffer */ 1769 1741 mb = udev->mb_addr; 1770 1742 mb->version = TCMU_MAILBOX_VERSION; 1771 - mb->flags = TCMU_MAILBOX_FLAG_CAP_OOOC; 1743 + mb->flags = TCMU_MAILBOX_FLAG_CAP_OOOC | TCMU_MAILBOX_FLAG_CAP_READ_LEN; 1772 1744 mb->cmdr_off = CMDR_OFF; 1773 1745 mb->cmdr_size = udev->cmdr_size; 1774 1746
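The `gather_data_area()` change above bounds every copy by the remaining read length, so a short READ reported by userspace via `rsp.read_len` never copies stale ring-buffer data past that point. The per-chunk size is now a minimum over three counters; `copy_chunk()` below is a hypothetical helper sketching that cap, not a target_core_user symbol:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Each memcpy in the gather loop is limited by the scatterlist bytes
 * left, the data-block bytes left, and (new in this patch) the read
 * length that userspace actually produced.
 */
static size_t copy_chunk(size_t sg_remaining, size_t block_remaining,
                         size_t read_len)
{
    size_t copy_bytes = sg_remaining < block_remaining ?
                        sg_remaining : block_remaining;

    if (read_len < copy_bytes)
        copy_bytes = read_len;
    return copy_bytes;
}
```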
+32 -23
drivers/tty/n_tty.c
··· 124 124 struct mutex output_lock; 125 125 }; 126 126 127 + #define MASK(x) ((x) & (N_TTY_BUF_SIZE - 1)) 128 + 127 129 static inline size_t read_cnt(struct n_tty_data *ldata) 128 130 { 129 131 return ldata->read_head - ldata->read_tail; ··· 143 141 144 142 static inline unsigned char echo_buf(struct n_tty_data *ldata, size_t i) 145 143 { 144 + smp_rmb(); /* Matches smp_wmb() in add_echo_byte(). */ 146 145 return ldata->echo_buf[i & (N_TTY_BUF_SIZE - 1)]; 147 146 } 148 147 ··· 319 316 static void reset_buffer_flags(struct n_tty_data *ldata) 320 317 { 321 318 ldata->read_head = ldata->canon_head = ldata->read_tail = 0; 322 - ldata->echo_head = ldata->echo_tail = ldata->echo_commit = 0; 323 319 ldata->commit_head = 0; 324 - ldata->echo_mark = 0; 325 320 ldata->line_start = 0; 326 321 327 322 ldata->erasing = 0; ··· 618 617 old_space = space = tty_write_room(tty); 619 618 620 619 tail = ldata->echo_tail; 621 - while (ldata->echo_commit != tail) { 620 + while (MASK(ldata->echo_commit) != MASK(tail)) { 622 621 c = echo_buf(ldata, tail); 623 622 if (c == ECHO_OP_START) { 624 623 unsigned char op; 625 624 int no_space_left = 0; 626 625 626 + /* 627 + * Since add_echo_byte() is called without holding 628 + * output_lock, we might see only portion of multi-byte 629 + * operation. 
630 + */ 631 + if (MASK(ldata->echo_commit) == MASK(tail + 1)) 632 + goto not_yet_stored; 627 633 /* 628 634 * If the buffer byte is the start of a multi-byte 629 635 * operation, get the next byte, which is either the ··· 642 634 unsigned int num_chars, num_bs; 643 635 644 636 case ECHO_OP_ERASE_TAB: 637 + if (MASK(ldata->echo_commit) == MASK(tail + 2)) 638 + goto not_yet_stored; 645 639 num_chars = echo_buf(ldata, tail + 2); 646 640 647 641 /* ··· 738 728 /* If the echo buffer is nearly full (so that the possibility exists 739 729 * of echo overrun before the next commit), then discard enough 740 730 * data at the tail to prevent a subsequent overrun */ 741 - while (ldata->echo_commit - tail >= ECHO_DISCARD_WATERMARK) { 731 + while (ldata->echo_commit > tail && 732 + ldata->echo_commit - tail >= ECHO_DISCARD_WATERMARK) { 742 733 if (echo_buf(ldata, tail) == ECHO_OP_START) { 743 734 if (echo_buf(ldata, tail + 1) == ECHO_OP_ERASE_TAB) 744 735 tail += 3; ··· 749 738 tail++; 750 739 } 751 740 741 + not_yet_stored: 752 742 ldata->echo_tail = tail; 753 743 return old_space - space; 754 744 } ··· 760 748 size_t nr, old, echoed; 761 749 size_t head; 762 750 751 + mutex_lock(&ldata->output_lock); 763 752 head = ldata->echo_head; 764 753 ldata->echo_mark = head; 765 754 old = ldata->echo_commit - ldata->echo_tail; ··· 769 756 * is over the threshold (and try again each time another 770 757 * block is accumulated) */ 771 758 nr = head - ldata->echo_tail; 772 - if (nr < ECHO_COMMIT_WATERMARK || (nr % ECHO_BLOCK > old % ECHO_BLOCK)) 759 + if (nr < ECHO_COMMIT_WATERMARK || 760 + (nr % ECHO_BLOCK > old % ECHO_BLOCK)) { 761 + mutex_unlock(&ldata->output_lock); 773 762 return; 763 + } 774 764 775 - mutex_lock(&ldata->output_lock); 776 765 ldata->echo_commit = head; 777 766 echoed = __process_echoes(tty); 778 767 mutex_unlock(&ldata->output_lock); ··· 825 810 826 811 static inline void add_echo_byte(unsigned char c, struct n_tty_data *ldata) 827 812 { 828 - *echo_buf_addr(ldata, 
ldata->echo_head++) = c; 813 + *echo_buf_addr(ldata, ldata->echo_head) = c; 814 + smp_wmb(); /* Matches smp_rmb() in echo_buf(). */ 815 + ldata->echo_head++; 829 816 } 830 817 831 818 /** ··· 995 978 } 996 979 997 980 seen_alnums = 0; 998 - while (ldata->read_head != ldata->canon_head) { 981 + while (MASK(ldata->read_head) != MASK(ldata->canon_head)) { 999 982 head = ldata->read_head; 1000 983 1001 984 /* erase a single possibly multibyte character */ 1002 985 do { 1003 986 head--; 1004 987 c = read_buf(ldata, head); 1005 - } while (is_continuation(c, tty) && head != ldata->canon_head); 988 + } while (is_continuation(c, tty) && 989 + MASK(head) != MASK(ldata->canon_head)); 1006 990 1007 991 /* do not partially erase */ 1008 992 if (is_continuation(c, tty)) ··· 1045 1027 * This info is used to go back the correct 1046 1028 * number of columns. 1047 1029 */ 1048 - while (tail != ldata->canon_head) { 1030 + while (MASK(tail) != MASK(ldata->canon_head)) { 1049 1031 tail--; 1050 1032 c = read_buf(ldata, tail); 1051 1033 if (c == '\t') { ··· 1320 1302 finish_erasing(ldata); 1321 1303 echo_char(c, tty); 1322 1304 echo_char_raw('\n', ldata); 1323 - while (tail != ldata->read_head) { 1305 + while (MASK(tail) != MASK(ldata->read_head)) { 1324 1306 echo_char(read_buf(ldata, tail), tty); 1325 1307 tail++; 1326 1308 } ··· 1896 1878 struct n_tty_data *ldata; 1897 1879 1898 1880 /* Currently a malloc failure here can panic */ 1899 - ldata = vmalloc(sizeof(*ldata)); 1881 + ldata = vzalloc(sizeof(*ldata)); 1900 1882 if (!ldata) 1901 - goto err; 1883 + return -ENOMEM; 1902 1884 1903 1885 ldata->overrun_time = jiffies; 1904 1886 mutex_init(&ldata->atomic_read_lock); 1905 1887 mutex_init(&ldata->output_lock); 1906 1888 1907 1889 tty->disc_data = ldata; 1908 - reset_buffer_flags(tty->disc_data); 1909 - ldata->column = 0; 1910 - ldata->canon_column = 0; 1911 - ldata->num_overrun = 0; 1912 - ldata->no_room = 0; 1913 - ldata->lnext = 0; 1914 1890 tty->closing = 0; 1915 1891 /* indicate 
buffer work may resume */ 1916 1892 clear_bit(TTY_LDISC_HALTED, &tty->flags); 1917 1893 n_tty_set_termios(tty, NULL); 1918 1894 tty_unthrottle(tty); 1919 - 1920 1895 return 0; 1921 - err: 1922 - return -ENOMEM; 1923 1896 } 1924 1897 1925 1898 static inline int input_available_p(struct tty_struct *tty, int poll) ··· 2420 2411 tail = ldata->read_tail; 2421 2412 nr = head - tail; 2422 2413 /* Skip EOF-chars.. */ 2423 - while (head != tail) { 2414 + while (MASK(head) != MASK(tail)) { 2424 2415 if (test_bit(tail & (N_TTY_BUF_SIZE - 1), ldata->read_flags) && 2425 2416 read_buf(ldata, tail) == __DISABLED_CHAR) 2426 2417 nr--;
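Most of the n_tty changes above route index comparisons through the new `MASK()` macro: `read_head`, `read_tail`, and the echo counters are free-running `size_t` values, and only their masked low bits index the power-of-two buffer. A user-space sketch of that scheme (illustrative names; single-threaded, so the kernel's `smp_wmb()`/`smp_rmb()` pairing from `add_echo_byte()`/`echo_buf()` is omitted):

```c
#include <assert.h>
#include <stddef.h>

#define BUF_SIZE 8                      /* power of two, like N_TTY_BUF_SIZE */
#define MASK(x) ((x) & (BUF_SIZE - 1))  /* same shape as the n_tty macro */

static unsigned char buf[BUF_SIZE];
static size_t head, tail;               /* free-running counters, never reset */

static void put_c(unsigned char c)
{
    buf[MASK(head)] = c;    /* store before advancing, as in add_echo_byte() */
    head++;
}

static unsigned char get_c(void)
{
    return buf[MASK(tail++)];
}

static size_t count(void)
{
    return head - tail;     /* stays correct even after the counters wrap */
}
```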
+1
drivers/tty/serdev/core.c
··· 617 617 static void __exit serdev_exit(void) 618 618 { 619 619 bus_unregister(&serdev_bus_type); 620 + ida_destroy(&ctrl_ida); 620 621 } 621 622 module_exit(serdev_exit); 622 623
-2
drivers/tty/serial/8250/8250_pci.c
··· 3339 3339 /* multi-io cards handled by parport_serial */ 3340 3340 { PCI_DEVICE(0x4348, 0x7053), }, /* WCH CH353 2S1P */ 3341 3341 { PCI_DEVICE(0x4348, 0x5053), }, /* WCH CH353 1S1P */ 3342 - { PCI_DEVICE(0x4348, 0x7173), }, /* WCH CH355 4S */ 3343 3342 { PCI_DEVICE(0x1c00, 0x3250), }, /* WCH CH382 2S1P */ 3344 - { PCI_DEVICE(0x1c00, 0x3470), }, /* WCH CH384 4S */ 3345 3343 3346 3344 /* Moxa Smartio MUE boards handled by 8250_moxa */ 3347 3345 { PCI_VDEVICE(MOXA, 0x1024), },
+2 -2
drivers/tty/vt/vt.c
··· 784 784 if (!*vc->vc_uni_pagedir_loc) 785 785 con_set_default_unimap(vc); 786 786 787 - vc->vc_screenbuf = kmalloc(vc->vc_screenbuf_size, GFP_KERNEL); 787 + vc->vc_screenbuf = kzalloc(vc->vc_screenbuf_size, GFP_KERNEL); 788 788 if (!vc->vc_screenbuf) 789 789 goto err_free; 790 790 ··· 871 871 872 872 if (new_screen_size > (4 << 20)) 873 873 return -EINVAL; 874 - newscreen = kmalloc(new_screen_size, GFP_USER); 874 + newscreen = kzalloc(new_screen_size, GFP_USER); 875 875 if (!newscreen) 876 876 return -ENOMEM; 877 877
+4 -1
drivers/usb/chipidea/host.c
··· 124 124 125 125 hcd->power_budget = ci->platdata->power_budget; 126 126 hcd->tpl_support = ci->platdata->tpl_support; 127 - if (ci->phy || ci->usb_phy) 127 + if (ci->phy || ci->usb_phy) { 128 128 hcd->skip_phy_initialization = 1; 129 + if (ci->usb_phy) 130 + hcd->usb_phy = ci->usb_phy; 131 + } 129 132 130 133 ehci = hcd_to_ehci(hcd); 131 134 ehci->caps = ci->hw_bank.cap;
+3
drivers/usb/class/cdc-acm.c
··· 1758 1758 { USB_DEVICE(0x11ca, 0x0201), /* VeriFone Mx870 Gadget Serial */ 1759 1759 .driver_info = SINGLE_RX_URB, 1760 1760 }, 1761 + { USB_DEVICE(0x1965, 0x0018), /* Uniden UBC125XLT */ 1762 + .driver_info = NO_UNION_NORMAL, /* has no union descriptor */ 1763 + }, 1761 1764 { USB_DEVICE(0x22b8, 0x7000), /* Motorola Q Phone */ 1762 1765 .driver_info = NO_UNION_NORMAL, /* has no union descriptor */ 1763 1766 },
+3
drivers/usb/dwc2/core.h
··· 1004 1004 * @frame_list_sz: Frame list size 1005 1005 * @desc_gen_cache: Kmem cache for generic descriptors 1006 1006 * @desc_hsisoc_cache: Kmem cache for hs isochronous descriptors 1007 + * @unaligned_cache: Kmem cache for DMA mode to handle non-aligned buf 1007 1008 * 1008 1009 * These are for peripheral mode: 1009 1010 * ··· 1178 1177 u32 frame_list_sz; 1179 1178 struct kmem_cache *desc_gen_cache; 1180 1179 struct kmem_cache *desc_hsisoc_cache; 1180 + struct kmem_cache *unaligned_cache; 1181 + #define DWC2_KMEM_UNALIGNED_BUF_SIZE 1024 1181 1182 1182 1183 #endif /* CONFIG_USB_DWC2_HOST || CONFIG_USB_DWC2_DUAL_ROLE */ 1183 1184
+12 -8
drivers/usb/dwc2/gadget.c
··· 812 812 u32 index; 813 813 u32 maxsize = 0; 814 814 u32 mask = 0; 815 + u8 pid = 0; 815 816 816 817 maxsize = dwc2_gadget_get_desc_params(hs_ep, &mask); 817 818 ··· 841 840 ((len << DEV_DMA_NBYTES_SHIFT) & mask)); 842 841 843 842 if (hs_ep->dir_in) { 844 - desc->status |= ((hs_ep->mc << DEV_DMA_ISOC_PID_SHIFT) & 843 + if (len) 844 + pid = DIV_ROUND_UP(len, hs_ep->ep.maxpacket); 845 + else 846 + pid = 1; 847 + desc->status |= ((pid << DEV_DMA_ISOC_PID_SHIFT) & 845 848 DEV_DMA_ISOC_PID_MASK) | 846 849 ((len % hs_ep->ep.maxpacket) ? 847 850 DEV_DMA_SHORT : 0) | ··· 889 884 struct dwc2_dma_desc *desc; 890 885 891 886 if (list_empty(&hs_ep->queue)) { 887 + hs_ep->target_frame = TARGET_FRAME_INITIAL; 892 888 dev_dbg(hsotg->dev, "%s: No requests in queue\n", __func__); 893 889 return; 894 890 } ··· 2761 2755 */ 2762 2756 tmp = dwc2_hsotg_read_frameno(hsotg); 2763 2757 2764 - dwc2_hsotg_complete_request(hsotg, ep, get_ep_head(ep), 0); 2765 - 2766 2758 if (using_desc_dma(hsotg)) { 2767 2759 if (ep->target_frame == TARGET_FRAME_INITIAL) { 2768 2760 /* Start first ISO Out */ ··· 2821 2817 2822 2818 tmp = dwc2_hsotg_read_frameno(hsotg); 2823 2819 if (using_desc_dma(hsotg)) { 2824 - dwc2_hsotg_complete_request(hsotg, hs_ep, 2825 - get_ep_head(hs_ep), 0); 2826 - 2827 2820 hs_ep->target_frame = tmp; 2828 2821 dwc2_gadget_incr_frame_num(hs_ep); 2829 2822 dwc2_gadget_start_isoc_ddma(hs_ep); ··· 4740 4739 } 4741 4740 4742 4741 ret = usb_add_gadget_udc(dev, &hsotg->gadget); 4743 - if (ret) 4742 + if (ret) { 4743 + dwc2_hsotg_ep_free_request(&hsotg->eps_out[0]->ep, 4744 + hsotg->ctrl_req); 4744 4745 return ret; 4745 - 4746 + } 4746 4747 dwc2_hsotg_dump(hsotg); 4747 4748 4748 4749 return 0; ··· 4758 4755 int dwc2_hsotg_remove(struct dwc2_hsotg *hsotg) 4759 4756 { 4760 4757 usb_del_gadget_udc(&hsotg->gadget); 4758 + dwc2_hsotg_ep_free_request(&hsotg->eps_out[0]->ep, hsotg->ctrl_req); 4761 4759 4762 4760 return 0; 4763 4761 }
+87 -6
drivers/usb/dwc2/hcd.c
··· 1567 1567 } 1568 1568 1569 1569 if (hsotg->params.host_dma) { 1570 - dwc2_writel((u32)chan->xfer_dma, 1571 - hsotg->regs + HCDMA(chan->hc_num)); 1570 + dma_addr_t dma_addr; 1571 + 1572 + if (chan->align_buf) { 1573 + if (dbg_hc(chan)) 1574 + dev_vdbg(hsotg->dev, "align_buf\n"); 1575 + dma_addr = chan->align_buf; 1576 + } else { 1577 + dma_addr = chan->xfer_dma; 1578 + } 1579 + dwc2_writel((u32)dma_addr, hsotg->regs + HCDMA(chan->hc_num)); 1580 + 1572 1581 if (dbg_hc(chan)) 1573 1582 dev_vdbg(hsotg->dev, "Wrote %08lx to HCDMA(%d)\n", 1574 - (unsigned long)chan->xfer_dma, chan->hc_num); 1583 + (unsigned long)dma_addr, chan->hc_num); 1575 1584 } 1576 1585 1577 1586 /* Start the split */ ··· 2634 2625 } 2635 2626 } 2636 2627 2628 + static int dwc2_alloc_split_dma_aligned_buf(struct dwc2_hsotg *hsotg, 2629 + struct dwc2_qh *qh, 2630 + struct dwc2_host_chan *chan) 2631 + { 2632 + if (!hsotg->unaligned_cache || 2633 + chan->max_packet > DWC2_KMEM_UNALIGNED_BUF_SIZE) 2634 + return -ENOMEM; 2635 + 2636 + if (!qh->dw_align_buf) { 2637 + qh->dw_align_buf = kmem_cache_alloc(hsotg->unaligned_cache, 2638 + GFP_ATOMIC | GFP_DMA); 2639 + if (!qh->dw_align_buf) 2640 + return -ENOMEM; 2641 + } 2642 + 2643 + qh->dw_align_buf_dma = dma_map_single(hsotg->dev, qh->dw_align_buf, 2644 + DWC2_KMEM_UNALIGNED_BUF_SIZE, 2645 + DMA_FROM_DEVICE); 2646 + 2647 + if (dma_mapping_error(hsotg->dev, qh->dw_align_buf_dma)) { 2648 + dev_err(hsotg->dev, "can't map align_buf\n"); 2649 + chan->align_buf = 0; 2650 + return -EINVAL; 2651 + } 2652 + 2653 + chan->align_buf = qh->dw_align_buf_dma; 2654 + return 0; 2655 + } 2656 + 2637 2657 #define DWC2_USB_DMA_ALIGN 4 2638 2658 2639 2659 struct dma_aligned_buffer { ··· 2839 2801 2840 2802 /* Set the transfer attributes */ 2841 2803 dwc2_hc_init_xfer(hsotg, chan, qtd); 2804 + 2805 + /* For non-dword aligned buffers */ 2806 + if (hsotg->params.host_dma && qh->do_split && 2807 + chan->ep_is_in && (chan->xfer_dma & 0x3)) { 2808 + dev_vdbg(hsotg->dev, 
"Non-aligned buffer\n"); 2809 + if (dwc2_alloc_split_dma_aligned_buf(hsotg, qh, chan)) { 2810 + dev_err(hsotg->dev, 2811 + "Failed to allocate memory to handle non-aligned buffer\n"); 2812 + /* Add channel back to free list */ 2813 + chan->align_buf = 0; 2814 + chan->multi_count = 0; 2815 + list_add_tail(&chan->hc_list_entry, 2816 + &hsotg->free_hc_list); 2817 + qtd->in_process = 0; 2818 + qh->channel = NULL; 2819 + return -ENOMEM; 2820 + } 2821 + } else { 2822 + /* 2823 + * We assume that DMA is always aligned in non-split 2824 + * case or split out case. Warn if not. 2825 + */ 2826 + WARN_ON_ONCE(hsotg->params.host_dma && 2827 + (chan->xfer_dma & 0x3)); 2828 + chan->align_buf = 0; 2829 + } 2842 2830 2843 2831 if (chan->ep_type == USB_ENDPOINT_XFER_INT || 2844 2832 chan->ep_type == USB_ENDPOINT_XFER_ISOC) ··· 5310 5246 } 5311 5247 } 5312 5248 5249 + if (hsotg->params.host_dma) { 5250 + /* 5251 + * Create kmem caches to handle non-aligned buffer 5252 + * in Buffer DMA mode. 5253 + */ 5254 + hsotg->unaligned_cache = kmem_cache_create("dwc2-unaligned-dma", 5255 + DWC2_KMEM_UNALIGNED_BUF_SIZE, 4, 5256 + SLAB_CACHE_DMA, NULL); 5257 + if (!hsotg->unaligned_cache) 5258 + dev_err(hsotg->dev, 5259 + "unable to create dwc2 unaligned cache\n"); 5260 + } 5261 + 5313 5262 hsotg->otg_port = 1; 5314 5263 hsotg->frame_list = NULL; 5315 5264 hsotg->frame_list_dma = 0; ··· 5357 5280 return 0; 5358 5281 5359 5282 error4: 5360 - kmem_cache_destroy(hsotg->desc_gen_cache); 5283 + kmem_cache_destroy(hsotg->unaligned_cache); 5361 5284 kmem_cache_destroy(hsotg->desc_hsisoc_cache); 5285 + kmem_cache_destroy(hsotg->desc_gen_cache); 5362 5286 error3: 5363 5287 dwc2_hcd_release(hsotg); 5364 5288 error2: ··· 5400 5322 usb_remove_hcd(hcd); 5401 5323 hsotg->priv = NULL; 5402 5324 5403 - kmem_cache_destroy(hsotg->desc_gen_cache); 5325 + kmem_cache_destroy(hsotg->unaligned_cache); 5404 5326 kmem_cache_destroy(hsotg->desc_hsisoc_cache); 5327 + kmem_cache_destroy(hsotg->desc_gen_cache); 5405 5328 
5406 5329 dwc2_hcd_release(hsotg); 5407 5330 usb_put_hcd(hcd); ··· 5514 5435 dwc2_writel(hprt0, hsotg->regs + HPRT0); 5515 5436 5516 5437 /* Wait for the HPRT0.PrtSusp register field to be set */ 5517 - if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 300)) 5438 + if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 3000)) 5518 5439 dev_warn(hsotg->dev, "Suspend wasn't generated\n"); 5519 5440 5520 5441 /* ··· 5694 5615 __func__); 5695 5616 return ret; 5696 5617 } 5618 + 5619 + dwc2_hcd_rem_wakeup(hsotg); 5697 5620 5698 5621 hsotg->hibernated = 0; 5699 5622 hsotg->bus_suspended = 0;
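`dwc2_alloc_split_dma_aligned_buf()` above swaps in a bounce buffer from the new `unaligned_cache` when a split-IN transfer's buffer is not DWORD aligned. The trigger is just a test of the two low address bits; a sketch (hypothetical helper mirroring the driver's `chan->xfer_dma & 0x3` check):

```c
#include <assert.h>
#include <stdint.h>

#define DWC2_USB_DMA_ALIGN 4    /* as in drivers/usb/dwc2/hcd.c */

/* Nonzero when the DMA address needs the unaligned-buffer fallback. */
static int needs_align_buf(uintptr_t xfer_dma)
{
    return (xfer_dma & (DWC2_USB_DMA_ALIGN - 1)) != 0;
}
```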
+8
drivers/usb/dwc2/hcd.h
··· 76 76 * (micro)frame 77 77 * @xfer_buf: Pointer to current transfer buffer position 78 78 * @xfer_dma: DMA address of xfer_buf 79 + * @align_buf: In Buffer DMA mode this will be used if xfer_buf is not 80 + * DWORD aligned 79 81 * @xfer_len: Total number of bytes to transfer 80 82 * @xfer_count: Number of bytes transferred so far 81 83 * @start_pkt_count: Packet count at start of transfer ··· 135 133 136 134 u8 *xfer_buf; 137 135 dma_addr_t xfer_dma; 136 + dma_addr_t align_buf; 138 137 u32 xfer_len; 139 138 u32 xfer_count; 140 139 u16 start_pkt_count; ··· 305 302 * speed. Note that this is in "schedule slice" which 306 303 * is tightly packed. 307 304 * @ntd: Actual number of transfer descriptors in a list 305 + * @dw_align_buf: Used instead of original buffer if its physical address 306 + * is not dword-aligned 307 + * @dw_align_buf_dma: DMA address for dw_align_buf 308 308 * @qtd_list: List of QTDs for this QH 309 309 * @channel: Host channel currently processing transfers for this QH 310 310 * @qh_list_entry: Entry for QH in either the periodic or non-periodic ··· 356 350 struct dwc2_hs_transfer_time hs_transfers[DWC2_HS_SCHEDULE_UFRAMES]; 357 351 u32 ls_start_schedule_slice; 358 352 u16 ntd; 353 + u8 *dw_align_buf; 354 + dma_addr_t dw_align_buf_dma; 359 355 struct list_head qtd_list; 360 356 struct dwc2_host_chan *channel; 361 357 struct list_head qh_list_entry;
+9 -2
drivers/usb/dwc2/hcd_intr.c
··· 942 942 frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index]; 943 943 len = dwc2_get_actual_xfer_length(hsotg, chan, chnum, qtd, 944 944 DWC2_HC_XFER_COMPLETE, NULL); 945 - if (!len) { 945 + if (!len && !qtd->isoc_split_offset) { 946 946 qtd->complete_split = 0; 947 - qtd->isoc_split_offset = 0; 948 947 return 0; 949 948 } 950 949 951 950 frame_desc->actual_length += len; 951 + 952 + if (chan->align_buf) { 953 + dev_vdbg(hsotg->dev, "non-aligned buffer\n"); 954 + dma_unmap_single(hsotg->dev, chan->qh->dw_align_buf_dma, 955 + DWC2_KMEM_UNALIGNED_BUF_SIZE, DMA_FROM_DEVICE); 956 + memcpy(qtd->urb->buf + (chan->xfer_dma - qtd->urb->dma), 957 + chan->qh->dw_align_buf, len); 958 + } 952 959 953 960 qtd->isoc_split_offset += len; 954 961
+4 -1
drivers/usb/dwc2/hcd_queue.c
··· 383 383 /* Get the map and adjust if this is a multi_tt hub */ 384 384 map = qh->dwc_tt->periodic_bitmaps; 385 385 if (qh->dwc_tt->usb_tt->multi) 386 - map += DWC2_ELEMENTS_PER_LS_BITMAP * qh->ttport; 386 + map += DWC2_ELEMENTS_PER_LS_BITMAP * (qh->ttport - 1); 387 387 388 388 return map; 389 389 } ··· 1696 1696 1697 1697 if (qh->desc_list) 1698 1698 dwc2_hcd_qh_free_ddma(hsotg, qh); 1699 + else if (hsotg->unaligned_cache && qh->dw_align_buf) 1700 + kmem_cache_free(hsotg->unaligned_cache, qh->dw_align_buf); 1701 + 1699 1702 kfree(qh); 1700 1703 } 1701 1704
+13 -10
drivers/usb/dwc3/core.c
··· 1272 1272 if (!dwc->clks) 1273 1273 return -ENOMEM; 1274 1274 1275 - dwc->num_clks = ARRAY_SIZE(dwc3_core_clks); 1276 1275 dwc->dev = dev; 1277 1276 1278 1277 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); ··· 1306 1307 if (IS_ERR(dwc->reset)) 1307 1308 return PTR_ERR(dwc->reset); 1308 1309 1309 - ret = clk_bulk_get(dev, dwc->num_clks, dwc->clks); 1310 - if (ret == -EPROBE_DEFER) 1311 - return ret; 1312 - /* 1313 - * Clocks are optional, but new DT platforms should support all clocks 1314 - * as required by the DT-binding. 1315 - */ 1316 - if (ret) 1317 - dwc->num_clks = 0; 1310 + if (dev->of_node) { 1311 + dwc->num_clks = ARRAY_SIZE(dwc3_core_clks); 1312 + 1313 + ret = clk_bulk_get(dev, dwc->num_clks, dwc->clks); 1314 + if (ret == -EPROBE_DEFER) 1315 + return ret; 1316 + /* 1317 + * Clocks are optional, but new DT platforms should support all 1318 + * clocks as required by the DT-binding. 1319 + */ 1320 + if (ret) 1321 + dwc->num_clks = 0; 1322 + } 1318 1323 1319 1324 ret = reset_control_deassert(dwc->reset); 1320 1325 if (ret)
+2 -1
drivers/usb/dwc3/dwc3-of-simple.c
··· 165 165 166 166 reset_control_put(simple->resets); 167 167 168 - pm_runtime_put_sync(dev); 169 168 pm_runtime_disable(dev); 169 + pm_runtime_put_noidle(dev); 170 + pm_runtime_set_suspended(dev); 170 171 171 172 return 0; 172 173 }
+2
drivers/usb/dwc3/dwc3-pci.c
··· 34 34 #define PCI_DEVICE_ID_INTEL_GLK 0x31aa 35 35 #define PCI_DEVICE_ID_INTEL_CNPLP 0x9dee 36 36 #define PCI_DEVICE_ID_INTEL_CNPH 0xa36e 37 + #define PCI_DEVICE_ID_INTEL_ICLLP 0x34ee 37 38 38 39 #define PCI_INTEL_BXT_DSM_GUID "732b85d5-b7a7-4a1b-9ba0-4bbd00ffd511" 39 40 #define PCI_INTEL_BXT_FUNC_PMU_PWR 4 ··· 290 289 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_GLK), }, 291 290 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CNPLP), }, 292 291 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CNPH), }, 292 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICLLP), }, 293 293 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_NL_USB), }, 294 294 { } /* Terminating Entry */ 295 295 };
+5 -8
drivers/usb/dwc3/dwc3-qcom.c
··· 490 490 qcom->dwc3 = of_find_device_by_node(dwc3_np); 491 491 if (!qcom->dwc3) { 492 492 dev_err(&pdev->dev, "failed to get dwc3 platform device\n"); 493 + ret = -ENODEV; 493 494 goto depopulate; 494 495 } 495 496 ··· 548 547 return 0; 549 548 } 550 549 551 - #ifdef CONFIG_PM_SLEEP 552 - static int dwc3_qcom_pm_suspend(struct device *dev) 550 + static int __maybe_unused dwc3_qcom_pm_suspend(struct device *dev) 553 551 { 554 552 struct dwc3_qcom *qcom = dev_get_drvdata(dev); 555 553 int ret = 0; ··· 560 560 return ret; 561 561 } 562 562 563 - static int dwc3_qcom_pm_resume(struct device *dev) 563 + static int __maybe_unused dwc3_qcom_pm_resume(struct device *dev) 564 564 { 565 565 struct dwc3_qcom *qcom = dev_get_drvdata(dev); 566 566 int ret; ··· 571 571 572 572 return ret; 573 573 } 574 - #endif 575 574 576 - #ifdef CONFIG_PM 577 - static int dwc3_qcom_runtime_suspend(struct device *dev) 575 + static int __maybe_unused dwc3_qcom_runtime_suspend(struct device *dev) 578 576 { 579 577 struct dwc3_qcom *qcom = dev_get_drvdata(dev); 580 578 581 579 return dwc3_qcom_suspend(qcom); 582 580 } 583 581 584 - static int dwc3_qcom_runtime_resume(struct device *dev) 582 + static int __maybe_unused dwc3_qcom_runtime_resume(struct device *dev) 585 583 { 586 584 struct dwc3_qcom *qcom = dev_get_drvdata(dev); 587 585 588 586 return dwc3_qcom_resume(qcom); 589 587 } 590 - #endif 591 588 592 589 static const struct dev_pm_ops dwc3_qcom_dev_pm_ops = { 593 590 SET_SYSTEM_SLEEP_PM_OPS(dwc3_qcom_pm_suspend, dwc3_qcom_pm_resume)
+3
drivers/usb/gadget/composite.c
··· 1719 1719 */ 1720 1720 if (w_value && !f->get_alt) 1721 1721 break; 1722 + 1723 + spin_lock(&cdev->lock); 1722 1724 value = f->set_alt(f, w_index, w_value); 1723 1725 if (value == USB_GADGET_DELAYED_STATUS) { 1724 1726 DBG(cdev, ··· 1730 1728 DBG(cdev, "delayed_status count %d\n", 1731 1729 cdev->delayed_status); 1732 1730 } 1731 + spin_unlock(&cdev->lock); 1733 1732 break; 1734 1733 case USB_REQ_GET_INTERFACE: 1735 1734 if (ctrl->bRequestType != (USB_DIR_IN|USB_RECIP_INTERFACE))
+18 -8
drivers/usb/gadget/function/f_fs.c
··· 215 215 216 216 struct mm_struct *mm; 217 217 struct work_struct work; 218 + struct work_struct cancellation_work; 218 219 219 220 struct usb_ep *ep; 220 221 struct usb_request *req; ··· 1073 1072 return 0; 1074 1073 } 1075 1074 1075 + static void ffs_aio_cancel_worker(struct work_struct *work) 1076 + { 1077 + struct ffs_io_data *io_data = container_of(work, struct ffs_io_data, 1078 + cancellation_work); 1079 + 1080 + ENTER(); 1081 + 1082 + usb_ep_dequeue(io_data->ep, io_data->req); 1083 + } 1084 + 1076 1085 static int ffs_aio_cancel(struct kiocb *kiocb) 1077 1086 { 1078 1087 struct ffs_io_data *io_data = kiocb->private; 1079 - struct ffs_epfile *epfile = kiocb->ki_filp->private_data; 1088 + struct ffs_data *ffs = io_data->ffs; 1080 1089 int value; 1081 1090 1082 1091 ENTER(); 1083 1092 1084 - spin_lock_irq(&epfile->ffs->eps_lock); 1085 - 1086 - if (likely(io_data && io_data->ep && io_data->req)) 1087 - value = usb_ep_dequeue(io_data->ep, io_data->req); 1088 - else 1093 + if (likely(io_data && io_data->ep && io_data->req)) { 1094 + INIT_WORK(&io_data->cancellation_work, ffs_aio_cancel_worker); 1095 + queue_work(ffs->io_completion_wq, &io_data->cancellation_work); 1096 + value = -EINPROGRESS; 1097 + } else { 1089 1098 value = -EINVAL; 1090 - 1091 - spin_unlock_irq(&epfile->ffs->eps_lock); 1099 + } 1092 1100 1093 1101 return value; 1094 1102 }
+2 -2
drivers/usb/host/xhci-mem.c
··· 886 886 887 887 dev = xhci->devs[slot_id]; 888 888 889 - trace_xhci_free_virt_device(dev); 890 - 891 889 xhci->dcbaa->dev_context_ptrs[slot_id] = 0; 892 890 if (!dev) 893 891 return; 892 + 893 + trace_xhci_free_virt_device(dev); 894 894 895 895 if (dev->tt_info) 896 896 old_active_eps = dev->tt_info->active_eps;
+3 -3
drivers/usb/host/xhci-tegra.c
··· 481 481 unsigned long mask; 482 482 unsigned int port; 483 483 bool idle, enable; 484 - int err; 484 + int err = 0; 485 485 486 486 memset(&rsp, 0, sizeof(rsp)); 487 487 ··· 1223 1223 pm_runtime_disable(&pdev->dev); 1224 1224 usb_put_hcd(tegra->hcd); 1225 1225 disable_xusbc: 1226 - if (!&pdev->dev.pm_domain) 1226 + if (!pdev->dev.pm_domain) 1227 1227 tegra_powergate_power_off(TEGRA_POWERGATE_XUSBC); 1228 1228 disable_xusba: 1229 - if (!&pdev->dev.pm_domain) 1229 + if (!pdev->dev.pm_domain) 1230 1230 tegra_powergate_power_off(TEGRA_POWERGATE_XUSBA); 1231 1231 put_padctl: 1232 1232 tegra_xusb_padctl_put(tegra->padctl);
+31 -5
drivers/usb/host/xhci-trace.h
··· 171 171 TP_ARGS(ring, trb) 172 172 ); 173 173 174 + DECLARE_EVENT_CLASS(xhci_log_free_virt_dev, 175 + TP_PROTO(struct xhci_virt_device *vdev), 176 + TP_ARGS(vdev), 177 + TP_STRUCT__entry( 178 + __field(void *, vdev) 179 + __field(unsigned long long, out_ctx) 180 + __field(unsigned long long, in_ctx) 181 + __field(u8, fake_port) 182 + __field(u8, real_port) 183 + __field(u16, current_mel) 184 + 185 + ), 186 + TP_fast_assign( 187 + __entry->vdev = vdev; 188 + __entry->in_ctx = (unsigned long long) vdev->in_ctx->dma; 189 + __entry->out_ctx = (unsigned long long) vdev->out_ctx->dma; 190 + __entry->fake_port = (u8) vdev->fake_port; 191 + __entry->real_port = (u8) vdev->real_port; 192 + __entry->current_mel = (u16) vdev->current_mel; 193 + ), 194 + TP_printk("vdev %p ctx %llx | %llx fake_port %d real_port %d current_mel %d", 195 + __entry->vdev, __entry->in_ctx, __entry->out_ctx, 196 + __entry->fake_port, __entry->real_port, __entry->current_mel 197 + ) 198 + ); 199 + 200 + DEFINE_EVENT(xhci_log_free_virt_dev, xhci_free_virt_device, 201 + TP_PROTO(struct xhci_virt_device *vdev), 202 + TP_ARGS(vdev) 203 + ); 204 + 174 205 DECLARE_EVENT_CLASS(xhci_log_virt_dev, 175 206 TP_PROTO(struct xhci_virt_device *vdev), 176 207 TP_ARGS(vdev), ··· 235 204 ); 236 205 237 206 DEFINE_EVENT(xhci_log_virt_dev, xhci_alloc_virt_device, 238 - TP_PROTO(struct xhci_virt_device *vdev), 239 - TP_ARGS(vdev) 240 - ); 241 - 242 - DEFINE_EVENT(xhci_log_virt_dev, xhci_free_virt_device, 243 207 TP_PROTO(struct xhci_virt_device *vdev), 244 208 TP_ARGS(vdev) 245 209 );
+43 -4
drivers/usb/host/xhci.c
··· 908 908 spin_unlock_irqrestore(&xhci->lock, flags); 909 909 } 910 910 911 + static bool xhci_pending_portevent(struct xhci_hcd *xhci) 912 + { 913 + struct xhci_port **ports; 914 + int port_index; 915 + u32 status; 916 + u32 portsc; 917 + 918 + status = readl(&xhci->op_regs->status); 919 + if (status & STS_EINT) 920 + return true; 921 + /* 922 + * Checking STS_EINT is not enough as there is a lag between a change 923 + * bit being set and the Port Status Change Event that it generated 924 + * being written to the Event Ring. See note in xhci 1.1 section 4.19.2. 925 + */ 926 + 927 + port_index = xhci->usb2_rhub.num_ports; 928 + ports = xhci->usb2_rhub.ports; 929 + while (port_index--) { 930 + portsc = readl(ports[port_index]->addr); 931 + if (portsc & PORT_CHANGE_MASK || 932 + (portsc & PORT_PLS_MASK) == XDEV_RESUME) 933 + return true; 934 + } 935 + port_index = xhci->usb3_rhub.num_ports; 936 + ports = xhci->usb3_rhub.ports; 937 + while (port_index--) { 938 + portsc = readl(ports[port_index]->addr); 939 + if (portsc & PORT_CHANGE_MASK || 940 + (portsc & PORT_PLS_MASK) == XDEV_RESUME) 941 + return true; 942 + } 943 + return false; 944 + } 945 + 911 946 /* 912 947 * Stop HC (not bus-specific) 913 948 * ··· 1044 1009 */ 1045 1010 int xhci_resume(struct xhci_hcd *xhci, bool hibernated) 1046 1011 { 1047 - u32 command, temp = 0, status; 1012 + u32 command, temp = 0; 1048 1013 struct usb_hcd *hcd = xhci_to_hcd(xhci); 1049 1014 struct usb_hcd *secondary_hcd; 1050 1015 int retval = 0; ··· 1078 1043 command = readl(&xhci->op_regs->command); 1079 1044 command |= CMD_CRS; 1080 1045 writel(command, &xhci->op_regs->command); 1046 + /* 1047 + * Some controllers take up to 55+ ms to complete the controller 1048 + * restore so setting the timeout to 100ms. Xhci specification 1049 + * doesn't mention any timeout value. 
1050 + */ 1081 1051 if (xhci_handshake(&xhci->op_regs->status, 1082 - STS_RESTORE, 0, 10 * 1000)) { 1052 + STS_RESTORE, 0, 100 * 1000)) { 1083 1053 xhci_warn(xhci, "WARN: xHC restore state timeout\n"); 1084 1054 spin_unlock_irq(&xhci->lock); 1085 1055 return -ETIMEDOUT; ··· 1174 1134 done: 1175 1135 if (retval == 0) { 1176 1136 /* Resume root hubs only when have pending events. */ 1177 - status = readl(&xhci->op_regs->status); 1178 - if (status & STS_EINT) { 1137 + if (xhci_pending_portevent(xhci)) { 1179 1138 usb_hcd_resume_root_hub(xhci->shared_hcd); 1180 1139 usb_hcd_resume_root_hub(hcd); 1181 1140 }
+4
drivers/usb/host/xhci.h
··· 382 382 #define PORT_PLC (1 << 22) 383 383 /* port configure error change - port failed to configure its link partner */ 384 384 #define PORT_CEC (1 << 23) 385 + #define PORT_CHANGE_MASK (PORT_CSC | PORT_PEC | PORT_WRC | PORT_OCC | \ 386 + PORT_RC | PORT_PLC | PORT_CEC) 387 + 388 + 385 389 /* Cold Attach Status - xHC can set this bit to report device attached during 386 390 * Sx state. Warm port reset should be perfomed to clear this bit and move port 387 391 * to connected state.
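The new `PORT_CHANGE_MASK` simply ORs the individual PORTSC change bits so that `xhci_pending_portevent()` can test each port with one comparison, also treating a link in Resume state as a pending event. A minimal user-space sketch of that per-port test (the PLC/CEC bit positions come from the hunk above; the remaining bit positions and the `XDEV_RESUME` encoding follow the xHCI PORTSC layout and are assumptions here, not copied from this diff):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* PLC/CEC match the hunk above; the other change bits follow the
 * xHCI PORTSC register layout (assumed, not shown in this diff). */
#define PORT_CSC (1 << 17)
#define PORT_PEC (1 << 18)
#define PORT_WRC (1 << 19)
#define PORT_OCC (1 << 20)
#define PORT_RC  (1 << 21)
#define PORT_PLC (1 << 22)
#define PORT_CEC (1 << 23)
#define PORT_CHANGE_MASK (PORT_CSC | PORT_PEC | PORT_WRC | PORT_OCC | \
                          PORT_RC | PORT_PLC | PORT_CEC)

/* Port Link State field; value 15 is Resume in the xHCI spec. */
#define PORT_PLS_MASK (0xf << 5)
#define XDEV_RESUME   (0xf << 5)

/* Mirrors the per-port check inside xhci_pending_portevent(). */
static bool port_has_pending_event(uint32_t portsc)
{
        return (portsc & PORT_CHANGE_MASK) ||
               (portsc & PORT_PLS_MASK) == XDEV_RESUME;
}
```

Note the mask deliberately excludes status bits such as current-connect, so a quiescent connected port does not spuriously resume the root hub.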
+14
drivers/usb/serial/cp210x.c
··· 95 95 { USB_DEVICE(0x10C4, 0x8156) }, /* B&G H3000 link cable */ 96 96 { USB_DEVICE(0x10C4, 0x815E) }, /* Helicomm IP-Link 1220-DVM */ 97 97 { USB_DEVICE(0x10C4, 0x815F) }, /* Timewave HamLinkUSB */ 98 + { USB_DEVICE(0x10C4, 0x817C) }, /* CESINEL MEDCAL N Power Quality Monitor */ 99 + { USB_DEVICE(0x10C4, 0x817D) }, /* CESINEL MEDCAL NT Power Quality Monitor */ 100 + { USB_DEVICE(0x10C4, 0x817E) }, /* CESINEL MEDCAL S Power Quality Monitor */ 98 101 { USB_DEVICE(0x10C4, 0x818B) }, /* AVIT Research USB to TTL */ 99 102 { USB_DEVICE(0x10C4, 0x819F) }, /* MJS USB Toslink Switcher */ 100 103 { USB_DEVICE(0x10C4, 0x81A6) }, /* ThinkOptics WavIt */ ··· 115 112 { USB_DEVICE(0x10C4, 0x826B) }, /* Cygnal Integrated Products, Inc., Fasttrax GPS demonstration module */ 116 113 { USB_DEVICE(0x10C4, 0x8281) }, /* Nanotec Plug & Drive */ 117 114 { USB_DEVICE(0x10C4, 0x8293) }, /* Telegesis ETRX2USB */ 115 + { USB_DEVICE(0x10C4, 0x82EF) }, /* CESINEL FALCO 6105 AC Power Supply */ 116 + { USB_DEVICE(0x10C4, 0x82F1) }, /* CESINEL MEDCAL EFD Earth Fault Detector */ 117 + { USB_DEVICE(0x10C4, 0x82F2) }, /* CESINEL MEDCAL ST Network Analyzer */ 118 118 { USB_DEVICE(0x10C4, 0x82F4) }, /* Starizona MicroTouch */ 119 119 { USB_DEVICE(0x10C4, 0x82F9) }, /* Procyon AVS */ 120 120 { USB_DEVICE(0x10C4, 0x8341) }, /* Siemens MC35PU GPRS Modem */ ··· 130 124 { USB_DEVICE(0x10C4, 0x8470) }, /* Juniper Networks BX Series System Console */ 131 125 { USB_DEVICE(0x10C4, 0x8477) }, /* Balluff RFID */ 132 126 { USB_DEVICE(0x10C4, 0x84B6) }, /* Starizona Hyperion */ 127 + { USB_DEVICE(0x10C4, 0x851E) }, /* CESINEL MEDCAL PT Network Analyzer */ 133 128 { USB_DEVICE(0x10C4, 0x85A7) }, /* LifeScan OneTouch Verio IQ */ 129 + { USB_DEVICE(0x10C4, 0x85B8) }, /* CESINEL ReCon T Energy Logger */ 134 130 { USB_DEVICE(0x10C4, 0x85EA) }, /* AC-Services IBUS-IF */ 135 131 { USB_DEVICE(0x10C4, 0x85EB) }, /* AC-Services CIS-IBUS */ 136 132 { USB_DEVICE(0x10C4, 0x85F8) }, /* Virtenio Preon32 */ ··· 142 134 { 
USB_DEVICE(0x10C4, 0x8857) }, /* CEL EM357 ZigBee USB Stick */ 143 135 { USB_DEVICE(0x10C4, 0x88A4) }, /* MMB Networks ZigBee USB Device */ 144 136 { USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */ 137 + { USB_DEVICE(0x10C4, 0x88FB) }, /* CESINEL MEDCAL STII Network Analyzer */ 138 + { USB_DEVICE(0x10C4, 0x8938) }, /* CESINEL MEDCAL S II Network Analyzer */ 145 139 { USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */ 146 140 { USB_DEVICE(0x10C4, 0x8962) }, /* Brim Brothers charging dock */ 147 141 { USB_DEVICE(0x10C4, 0x8977) }, /* CEL MeshWorks DevKit Device */ 148 142 { USB_DEVICE(0x10C4, 0x8998) }, /* KCF Technologies PRN */ 143 + { USB_DEVICE(0x10C4, 0x89A4) }, /* CESINEL FTBC Flexible Thyristor Bridge Controller */ 149 144 { USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */ 150 145 { USB_DEVICE(0x10C4, 0x8A5E) }, /* CEL EM3588 ZigBee USB Stick Long Range */ 151 146 { USB_DEVICE(0x10C4, 0x8B34) }, /* Qivicon ZigBee USB Radio Stick */ 152 147 { USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */ 153 148 { USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */ 149 + { USB_DEVICE(0x10C4, 0xEA63) }, /* Silicon Labs Windows Update (CP2101-4/CP2102N) */ 154 150 { USB_DEVICE(0x10C4, 0xEA70) }, /* Silicon Labs factory default */ 155 151 { USB_DEVICE(0x10C4, 0xEA71) }, /* Infinity GPS-MIC-1 Radio Monophone */ 152 + { USB_DEVICE(0x10C4, 0xEA7A) }, /* Silicon Labs Windows Update (CP2105) */ 153 + { USB_DEVICE(0x10C4, 0xEA7B) }, /* Silicon Labs Windows Update (CP2108) */ 156 154 { USB_DEVICE(0x10C4, 0xF001) }, /* Elan Digital Systems USBscope50 */ 157 155 { USB_DEVICE(0x10C4, 0xF002) }, /* Elan Digital Systems USBwave12 */ 158 156 { USB_DEVICE(0x10C4, 0xF003) }, /* Elan Digital Systems USBpulse100 */
+6 -4
drivers/usb/typec/tcpm.c
··· 418 418 u64 ts_nsec = local_clock(); 419 419 unsigned long rem_nsec; 420 420 421 + mutex_lock(&port->logbuffer_lock); 421 422 if (!port->logbuffer[port->logbuffer_head]) { 422 423 port->logbuffer[port->logbuffer_head] = 423 424 kzalloc(LOG_BUFFER_ENTRY_SIZE, GFP_KERNEL); 424 - if (!port->logbuffer[port->logbuffer_head]) 425 + if (!port->logbuffer[port->logbuffer_head]) { 426 + mutex_unlock(&port->logbuffer_lock); 425 427 return; 428 + } 426 429 } 427 430 428 431 vsnprintf(tmpbuffer, sizeof(tmpbuffer), fmt, args); 429 - 430 - mutex_lock(&port->logbuffer_lock); 431 432 432 433 if (tcpm_log_full(port)) { 433 434 port->logbuffer_head = max(port->logbuffer_head - 1, 0); ··· 3044 3043 tcpm_port_is_sink(port) && 3045 3044 time_is_after_jiffies(port->delayed_runtime)) { 3046 3045 tcpm_set_state(port, SNK_DISCOVERY, 3047 - port->delayed_runtime - jiffies); 3046 + jiffies_to_msecs(port->delayed_runtime - 3047 + jiffies)); 3048 3048 break; 3049 3049 } 3050 3050 tcpm_set_state(port, unattached_state(port), 0);
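The second tcpm hunk fixes a unit mismatch: `tcpm_set_state()` takes its delay in milliseconds, but the code was passing a raw jiffies delta, so the conversion via `jiffies_to_msecs()` was added. A simplified model of that helper for the exact-divisor case, with an assumed tick rate (the real kernel helper also handles `HZ` values that do not divide 1000):

```c
#include <assert.h>

/* Assumed tick rate for illustration; kernels commonly run at 100-1000 Hz. */
#define HZ 250

/* Simplified jiffies_to_msecs() for the case where 1000 % HZ == 0. */
static unsigned int jiffies_to_msecs(unsigned long j)
{
        return (unsigned int)(j * (1000 / HZ));
}
```

With `HZ` at 250, passing an unconverted jiffies delta would have shortened every delay by a factor of four, which is exactly the class of bug the hunk addresses.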
+13
drivers/usb/typec/ucsi/ucsi.c
··· 350 350 } 351 351 352 352 if (con->status.change & UCSI_CONSTAT_CONNECT_CHANGE) { 353 + typec_set_pwr_role(con->port, con->status.pwr_dir); 354 + 355 + switch (con->status.partner_type) { 356 + case UCSI_CONSTAT_PARTNER_TYPE_UFP: 357 + typec_set_data_role(con->port, TYPEC_HOST); 358 + break; 359 + case UCSI_CONSTAT_PARTNER_TYPE_DFP: 360 + typec_set_data_role(con->port, TYPEC_DEVICE); 361 + break; 362 + default: 363 + break; 364 + } 365 + 353 366 if (con->status.connected) 354 367 ucsi_register_partner(con); 355 368 else
+5
drivers/usb/typec/ucsi/ucsi_acpi.c
··· 79 79 return -ENODEV; 80 80 } 81 81 82 + /* This will make sure we can use ioremap_nocache() */ 83 + status = acpi_release_memory(ACPI_HANDLE(&pdev->dev), res, 1); 84 + if (ACPI_FAILURE(status)) 85 + return -ENOMEM; 86 + 82 87 /* 83 88 * NOTE: The memory region for the data structures is used also in an 84 89 * operation region, which means ACPI has already reserved it. Therefore
+1 -147
fs/aio.c
··· 5 5 * Implements an efficient asynchronous io interface. 6 6 * 7 7 * Copyright 2000, 2001, 2002 Red Hat, Inc. All Rights Reserved. 8 - * Copyright 2018 Christoph Hellwig. 9 8 * 10 9 * See ../COPYING for licensing terms. 11 10 */ ··· 164 165 bool datasync; 165 166 }; 166 167 167 - struct poll_iocb { 168 - struct file *file; 169 - __poll_t events; 170 - struct wait_queue_head *head; 171 - 172 - union { 173 - struct wait_queue_entry wait; 174 - struct work_struct work; 175 - }; 176 - }; 177 - 178 168 struct aio_kiocb { 179 169 union { 180 170 struct kiocb rw; 181 171 struct fsync_iocb fsync; 182 - struct poll_iocb poll; 183 172 }; 184 173 185 174 struct kioctx *ki_ctx; ··· 1577 1590 if (unlikely(iocb->aio_buf || iocb->aio_offset || iocb->aio_nbytes || 1578 1591 iocb->aio_rw_flags)) 1579 1592 return -EINVAL; 1593 + 1580 1594 req->file = fget(iocb->aio_fildes); 1581 1595 if (unlikely(!req->file)) 1582 1596 return -EBADF; ··· 1590 1602 INIT_WORK(&req->work, aio_fsync_work); 1591 1603 schedule_work(&req->work); 1592 1604 return 0; 1593 - } 1594 - 1595 - /* need to use list_del_init so we can check if item was present */ 1596 - static inline bool __aio_poll_remove(struct poll_iocb *req) 1597 - { 1598 - if (list_empty(&req->wait.entry)) 1599 - return false; 1600 - list_del_init(&req->wait.entry); 1601 - return true; 1602 - } 1603 - 1604 - static inline void __aio_poll_complete(struct aio_kiocb *iocb, __poll_t mask) 1605 - { 1606 - fput(iocb->poll.file); 1607 - aio_complete(iocb, mangle_poll(mask), 0); 1608 - } 1609 - 1610 - static void aio_poll_work(struct work_struct *work) 1611 - { 1612 - struct aio_kiocb *iocb = container_of(work, struct aio_kiocb, poll.work); 1613 - 1614 - if (!list_empty_careful(&iocb->ki_list)) 1615 - aio_remove_iocb(iocb); 1616 - __aio_poll_complete(iocb, iocb->poll.events); 1617 - } 1618 - 1619 - static int aio_poll_cancel(struct kiocb *iocb) 1620 - { 1621 - struct aio_kiocb *aiocb = container_of(iocb, struct aio_kiocb, rw); 1622 - struct 
poll_iocb *req = &aiocb->poll; 1623 - struct wait_queue_head *head = req->head; 1624 - bool found = false; 1625 - 1626 - spin_lock(&head->lock); 1627 - found = __aio_poll_remove(req); 1628 - spin_unlock(&head->lock); 1629 - 1630 - if (found) { 1631 - req->events = 0; 1632 - INIT_WORK(&req->work, aio_poll_work); 1633 - schedule_work(&req->work); 1634 - } 1635 - return 0; 1636 - } 1637 - 1638 - static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync, 1639 - void *key) 1640 - { 1641 - struct poll_iocb *req = container_of(wait, struct poll_iocb, wait); 1642 - struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll); 1643 - struct file *file = req->file; 1644 - __poll_t mask = key_to_poll(key); 1645 - 1646 - assert_spin_locked(&req->head->lock); 1647 - 1648 - /* for instances that support it check for an event match first: */ 1649 - if (mask && !(mask & req->events)) 1650 - return 0; 1651 - 1652 - mask = file->f_op->poll_mask(file, req->events) & req->events; 1653 - if (!mask) 1654 - return 0; 1655 - 1656 - __aio_poll_remove(req); 1657 - 1658 - /* 1659 - * Try completing without a context switch if we can acquire ctx_lock 1660 - * without spinning. Otherwise we need to defer to a workqueue to 1661 - * avoid a deadlock due to the lock order. 1662 - */ 1663 - if (spin_trylock(&iocb->ki_ctx->ctx_lock)) { 1664 - list_del_init(&iocb->ki_list); 1665 - spin_unlock(&iocb->ki_ctx->ctx_lock); 1666 - 1667 - __aio_poll_complete(iocb, mask); 1668 - } else { 1669 - req->events = mask; 1670 - INIT_WORK(&req->work, aio_poll_work); 1671 - schedule_work(&req->work); 1672 - } 1673 - 1674 - return 1; 1675 - } 1676 - 1677 - static ssize_t aio_poll(struct aio_kiocb *aiocb, struct iocb *iocb) 1678 - { 1679 - struct kioctx *ctx = aiocb->ki_ctx; 1680 - struct poll_iocb *req = &aiocb->poll; 1681 - __poll_t mask; 1682 - 1683 - /* reject any unknown events outside the normal event mask. 
*/ 1684 - if ((u16)iocb->aio_buf != iocb->aio_buf) 1685 - return -EINVAL; 1686 - /* reject fields that are not defined for poll */ 1687 - if (iocb->aio_offset || iocb->aio_nbytes || iocb->aio_rw_flags) 1688 - return -EINVAL; 1689 - 1690 - req->events = demangle_poll(iocb->aio_buf) | EPOLLERR | EPOLLHUP; 1691 - req->file = fget(iocb->aio_fildes); 1692 - if (unlikely(!req->file)) 1693 - return -EBADF; 1694 - if (!file_has_poll_mask(req->file)) 1695 - goto out_fail; 1696 - 1697 - req->head = req->file->f_op->get_poll_head(req->file, req->events); 1698 - if (!req->head) 1699 - goto out_fail; 1700 - if (IS_ERR(req->head)) { 1701 - mask = EPOLLERR; 1702 - goto done; 1703 - } 1704 - 1705 - init_waitqueue_func_entry(&req->wait, aio_poll_wake); 1706 - aiocb->ki_cancel = aio_poll_cancel; 1707 - 1708 - spin_lock_irq(&ctx->ctx_lock); 1709 - spin_lock(&req->head->lock); 1710 - mask = req->file->f_op->poll_mask(req->file, req->events) & req->events; 1711 - if (!mask) { 1712 - __add_wait_queue(req->head, &req->wait); 1713 - list_add_tail(&aiocb->ki_list, &ctx->active_reqs); 1714 - } 1715 - spin_unlock(&req->head->lock); 1716 - spin_unlock_irq(&ctx->ctx_lock); 1717 - done: 1718 - if (mask) 1719 - __aio_poll_complete(aiocb, mask); 1720 - return 0; 1721 - out_fail: 1722 - fput(req->file); 1723 - return -EINVAL; /* same as no support for IOCB_CMD_POLL */ 1724 1605 } 1725 1606 1726 1607 static int io_submit_one(struct kioctx *ctx, struct iocb __user *user_iocb, ··· 1664 1807 break; 1665 1808 case IOCB_CMD_FDSYNC: 1666 1809 ret = aio_fsync(&req->fsync, &iocb, true); 1667 - break; 1668 - case IOCB_CMD_POLL: 1669 - ret = aio_poll(req, &iocb); 1670 1810 break; 1671 1811 default: 1672 1812 pr_debug("invalid aio operation %d\n", iocb.aio_lio_opcode);
+4 -1
fs/btrfs/extent_io.c
··· 4542 4542 offset_in_extent = em_start - em->start; 4543 4543 em_end = extent_map_end(em); 4544 4544 em_len = em_end - em_start; 4545 - disko = em->block_start + offset_in_extent; 4546 4545 flags = 0; 4546 + if (em->block_start < EXTENT_MAP_LAST_BYTE) 4547 + disko = em->block_start + offset_in_extent; 4548 + else 4549 + disko = 0; 4547 4550 4548 4551 /* 4549 4552 * bump off for our next call to get_extent
+5 -2
fs/btrfs/inode.c
··· 9005 9005 9006 9006 unlock_extent_cached(io_tree, page_start, page_end, &cached_state); 9007 9007 9008 - out_unlock: 9009 9008 if (!ret2) { 9010 9009 btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE, true); 9011 9010 sb_end_pagefault(inode->i_sb); 9012 9011 extent_changeset_free(data_reserved); 9013 9012 return VM_FAULT_LOCKED; 9014 9013 } 9014 + 9015 + out_unlock: 9015 9016 unlock_page(page); 9016 9017 out: 9017 9018 btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE, (ret != 0)); ··· 9444 9443 u64 new_idx = 0; 9445 9444 u64 root_objectid; 9446 9445 int ret; 9446 + int ret2; 9447 9447 bool root_log_pinned = false; 9448 9448 bool dest_log_pinned = false; 9449 9449 ··· 9641 9639 dest_log_pinned = false; 9642 9640 } 9643 9641 } 9644 - ret = btrfs_end_transaction(trans); 9642 + ret2 = btrfs_end_transaction(trans); 9643 + ret = ret ? ret : ret2; 9645 9644 out_notrans: 9646 9645 if (new_ino == BTRFS_FIRST_FREE_OBJECTID) 9647 9646 up_read(&fs_info->subvol_sem);
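The `ret2` change in the second hunk follows a common cleanup idiom: the return value of `btrfs_end_transaction()` must not clobber an earlier failure, so the first nonzero error wins. As a tiny sketch of that selection:

```c
#include <assert.h>

/* Keep the first error seen: a failure during cleanup must not mask
 * an earlier one, matching the ret/ret2 pattern in the hunk above. */
static int first_error(int ret, int ret2)
{
        return ret ? ret : ret2;
}
```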
+5 -5
fs/btrfs/ioctl.c
··· 3577 3577 ret = btrfs_extent_same_range(src, loff, BTRFS_MAX_DEDUPE_LEN, 3578 3578 dst, dst_loff, &cmp); 3579 3579 if (ret) 3580 - goto out_unlock; 3580 + goto out_free; 3581 3581 3582 3582 loff += BTRFS_MAX_DEDUPE_LEN; 3583 3583 dst_loff += BTRFS_MAX_DEDUPE_LEN; ··· 3587 3587 ret = btrfs_extent_same_range(src, loff, tail_len, dst, 3588 3588 dst_loff, &cmp); 3589 3589 3590 + out_free: 3591 + kvfree(cmp.src_pages); 3592 + kvfree(cmp.dst_pages); 3593 + 3590 3594 out_unlock: 3591 3595 if (same_inode) 3592 3596 inode_unlock(src); 3593 3597 else 3594 3598 btrfs_double_inode_unlock(src, dst); 3595 - 3596 - out_free: 3597 - kvfree(cmp.src_pages); 3598 - kvfree(cmp.dst_pages); 3599 3599 3600 3600 return ret; 3601 3601 }
+13 -4
fs/btrfs/qgroup.c
··· 2680 2680 free_extent_buffer(scratch_leaf); 2681 2681 } 2682 2682 2683 - if (done && !ret) 2683 + if (done && !ret) { 2684 2684 ret = 1; 2685 + fs_info->qgroup_rescan_progress.objectid = (u64)-1; 2686 + } 2685 2687 return ret; 2686 2688 } 2687 2689 ··· 2786 2784 2787 2785 if (!init_flags) { 2788 2786 /* we're resuming qgroup rescan at mount time */ 2789 - if (!(fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN)) 2787 + if (!(fs_info->qgroup_flags & 2788 + BTRFS_QGROUP_STATUS_FLAG_RESCAN)) { 2790 2789 btrfs_warn(fs_info, 2791 2790 "qgroup rescan init failed, qgroup is not enabled"); 2792 - else if (!(fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_ON)) 2791 + ret = -EINVAL; 2792 + } else if (!(fs_info->qgroup_flags & 2793 + BTRFS_QGROUP_STATUS_FLAG_ON)) { 2793 2794 btrfs_warn(fs_info, 2794 2795 "qgroup rescan init failed, qgroup rescan is not queued"); 2795 - return -EINVAL; 2796 + ret = -EINVAL; 2797 + } 2798 + 2799 + if (ret) 2800 + return ret; 2796 2801 } 2797 2802 2798 2803 mutex_lock(&fs_info->qgroup_rescan_lock);
+1
fs/ceph/inode.c
··· 1135 1135 if (IS_ERR(realdn)) { 1136 1136 pr_err("splice_dentry error %ld %p inode %p ino %llx.%llx\n", 1137 1137 PTR_ERR(realdn), dn, in, ceph_vinop(in)); 1138 + dput(dn); 1138 1139 dn = realdn; /* note realdn contains the error */ 1139 1140 goto out; 1140 1141 } else if (realdn) {
+6 -13
fs/eventfd.c
··· 101 101 return 0; 102 102 } 103 103 104 - static struct wait_queue_head * 105 - eventfd_get_poll_head(struct file *file, __poll_t events) 106 - { 107 - struct eventfd_ctx *ctx = file->private_data; 108 - 109 - return &ctx->wqh; 110 - } 111 - 112 - static __poll_t eventfd_poll_mask(struct file *file, __poll_t eventmask) 104 + static __poll_t eventfd_poll(struct file *file, poll_table *wait) 113 105 { 114 106 struct eventfd_ctx *ctx = file->private_data; 115 107 __poll_t events = 0; 116 108 u64 count; 109 + 110 + poll_wait(file, &ctx->wqh, wait); 117 111 118 112 /* 119 113 * All writes to ctx->count occur within ctx->wqh.lock. This read ··· 150 156 count = READ_ONCE(ctx->count); 151 157 152 158 if (count > 0) 153 - events |= (EPOLLIN & eventmask); 159 + events |= EPOLLIN; 154 160 if (count == ULLONG_MAX) 155 161 events |= EPOLLERR; 156 162 if (ULLONG_MAX - 1 > count) 157 - events |= (EPOLLOUT & eventmask); 163 + events |= EPOLLOUT; 158 164 159 165 return events; 160 166 } ··· 305 311 .show_fdinfo = eventfd_show_fdinfo, 306 312 #endif 307 313 .release = eventfd_release, 308 - .get_poll_head = eventfd_get_poll_head, 309 - .poll_mask = eventfd_poll_mask, 314 + .poll = eventfd_poll, 310 315 .read = eventfd_read, 311 316 .write = eventfd_write, 312 317 .llseek = noop_llseek,
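The eventfd conversion back to `->poll` keeps the same readiness rules: readable while the counter is nonzero, writable while at least one more unit fits, and an error once the counter has saturated at `ULLONG_MAX`. The mask computation can be exercised in isolation (event bit values taken from the epoll UAPI):

```c
#include <assert.h>
#include <limits.h>

/* Event bit values from uapi/linux/eventpoll.h. */
#define EPOLLIN  0x001u
#define EPOLLOUT 0x004u
#define EPOLLERR 0x008u

/* Mirrors the mask logic in eventfd_poll(), minus the locking and
 * wait-queue registration that the real file operation performs. */
static unsigned int eventfd_mask(unsigned long long count)
{
        unsigned int events = 0;

        if (count > 0)
                events |= EPOLLIN;
        if (count == ULLONG_MAX)
                events |= EPOLLERR;
        if (ULLONG_MAX - 1 > count)
                events |= EPOLLOUT;
        return events;
}
```

The boundary cases are the interesting ones: at `ULLONG_MAX - 1` the fd is still readable but a write of even one unit would overflow, so `EPOLLOUT` drops out.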
+5 -10
fs/eventpoll.c
··· 922 922 return 0; 923 923 } 924 924 925 - static struct wait_queue_head *ep_eventpoll_get_poll_head(struct file *file, 926 - __poll_t eventmask) 927 - { 928 - struct eventpoll *ep = file->private_data; 929 - return &ep->poll_wait; 930 - } 931 - 932 - static __poll_t ep_eventpoll_poll_mask(struct file *file, __poll_t eventmask) 925 + static __poll_t ep_eventpoll_poll(struct file *file, poll_table *wait) 933 926 { 934 927 struct eventpoll *ep = file->private_data; 935 928 int depth = 0; 929 + 930 + /* Insert inside our poll wait queue */ 931 + poll_wait(file, &ep->poll_wait, wait); 936 932 937 933 /* 938 934 * Proceed to find out if wanted events are really available inside ··· 968 972 .show_fdinfo = ep_show_fdinfo, 969 973 #endif 970 974 .release = ep_eventpoll_release, 971 - .get_poll_head = ep_eventpoll_get_poll_head, 972 - .poll_mask = ep_eventpoll_poll_mask, 975 + .poll = ep_eventpoll_poll, 973 976 .llseek = noop_llseek, 974 977 }; 975 978
+9 -13
fs/pipe.c
··· 509 509 } 510 510 } 511 511 512 - static struct wait_queue_head * 513 - pipe_get_poll_head(struct file *filp, __poll_t events) 514 - { 515 - struct pipe_inode_info *pipe = filp->private_data; 516 - 517 - return &pipe->wait; 518 - } 519 - 520 512 /* No kernel lock held - fine */ 521 - static __poll_t pipe_poll_mask(struct file *filp, __poll_t events) 513 + static __poll_t 514 + pipe_poll(struct file *filp, poll_table *wait) 522 515 { 516 + __poll_t mask; 523 517 struct pipe_inode_info *pipe = filp->private_data; 524 - int nrbufs = pipe->nrbufs; 525 - __poll_t mask = 0; 518 + int nrbufs; 519 + 520 + poll_wait(filp, &pipe->wait, wait); 526 521 527 522 /* Reading only -- no need for acquiring the semaphore. */ 523 + nrbufs = pipe->nrbufs; 524 + mask = 0; 528 525 if (filp->f_mode & FMODE_READ) { 529 526 mask = (nrbufs > 0) ? EPOLLIN | EPOLLRDNORM : 0; 530 527 if (!pipe->writers && filp->f_version != pipe->w_counter) ··· 1020 1023 .llseek = no_llseek, 1021 1024 .read_iter = pipe_read, 1022 1025 .write_iter = pipe_write, 1023 - .get_poll_head = pipe_get_poll_head, 1024 - .poll_mask = pipe_poll_mask, 1026 + .poll = pipe_poll, 1025 1027 .unlocked_ioctl = pipe_ioctl, 1026 1028 .release = pipe_release, 1027 1029 .fasync = pipe_fasync,
+10 -1
fs/proc/generic.c
··· 564 564 return seq_open(file, de->seq_ops); 565 565 } 566 566 567 + static int proc_seq_release(struct inode *inode, struct file *file) 568 + { 569 + struct proc_dir_entry *de = PDE(inode); 570 + 571 + if (de->state_size) 572 + return seq_release_private(inode, file); 573 + return seq_release(inode, file); 574 + } 575 + 567 576 static const struct file_operations proc_seq_fops = { 568 577 .open = proc_seq_open, 569 578 .read = seq_read, 570 579 .llseek = seq_lseek, 571 - .release = seq_release, 580 + .release = proc_seq_release, 572 581 }; 573 582 574 583 struct proc_dir_entry *proc_create_seq_private(const char *name, umode_t mode,
-23
fs/select.c
··· 34 34 35 35 #include <linux/uaccess.h> 36 36 37 - __poll_t vfs_poll(struct file *file, struct poll_table_struct *pt) 38 - { 39 - if (file->f_op->poll) { 40 - return file->f_op->poll(file, pt); 41 - } else if (file_has_poll_mask(file)) { 42 - unsigned int events = poll_requested_events(pt); 43 - struct wait_queue_head *head; 44 - 45 - if (pt && pt->_qproc) { 46 - head = file->f_op->get_poll_head(file, events); 47 - if (!head) 48 - return DEFAULT_POLLMASK; 49 - if (IS_ERR(head)) 50 - return EPOLLERR; 51 - pt->_qproc(file, head, pt); 52 - } 53 - 54 - return file->f_op->poll_mask(file, events); 55 - } else { 56 - return DEFAULT_POLLMASK; 57 - } 58 - } 59 - EXPORT_SYMBOL_GPL(vfs_poll); 60 37 61 38 /* 62 39 * Estimate expected accuracy in ns from a timeval.
+11 -11
fs/timerfd.c
··· 226 226 kfree_rcu(ctx, rcu); 227 227 return 0; 228 228 } 229 - 230 - static struct wait_queue_head *timerfd_get_poll_head(struct file *file, 231 - __poll_t eventmask) 229 + 230 + static __poll_t timerfd_poll(struct file *file, poll_table *wait) 232 231 { 233 232 struct timerfd_ctx *ctx = file->private_data; 233 + __poll_t events = 0; 234 + unsigned long flags; 234 235 235 - return &ctx->wqh; 236 - } 236 + poll_wait(file, &ctx->wqh, wait); 237 237 238 - static __poll_t timerfd_poll_mask(struct file *file, __poll_t eventmask) 239 - { 240 - struct timerfd_ctx *ctx = file->private_data; 238 + spin_lock_irqsave(&ctx->wqh.lock, flags); 239 + if (ctx->ticks) 240 + events |= EPOLLIN; 241 + spin_unlock_irqrestore(&ctx->wqh.lock, flags); 241 242 242 - return ctx->ticks ? EPOLLIN : 0; 243 + return events; 243 244 } 244 245 245 246 static ssize_t timerfd_read(struct file *file, char __user *buf, size_t count, ··· 364 363 365 364 static const struct file_operations timerfd_fops = { 366 365 .release = timerfd_release, 367 - .get_poll_head = timerfd_get_poll_head, 368 - .poll_mask = timerfd_poll_mask, 366 + .poll = timerfd_poll, 369 367 .read = timerfd_read, 370 368 .llseek = noop_llseek, 371 369 .show_fdinfo = timerfd_show,
+27 -4
fs/xfs/libxfs/xfs_ag_resv.c
··· 157 157 error = xfs_mod_fdblocks(pag->pag_mount, oldresv, true); 158 158 resv->ar_reserved = 0; 159 159 resv->ar_asked = 0; 160 + resv->ar_orig_reserved = 0; 160 161 161 162 if (error) 162 163 trace_xfs_ag_resv_free_error(pag->pag_mount, pag->pag_agno, ··· 190 189 struct xfs_mount *mp = pag->pag_mount; 191 190 struct xfs_ag_resv *resv; 192 191 int error; 193 - xfs_extlen_t reserved; 192 + xfs_extlen_t hidden_space; 194 193 195 194 if (used > ask) 196 195 ask = used; 197 - reserved = ask - used; 198 196 199 - error = xfs_mod_fdblocks(mp, -(int64_t)reserved, true); 197 + switch (type) { 198 + case XFS_AG_RESV_RMAPBT: 199 + /* 200 + * Space taken by the rmapbt is not subtracted from fdblocks 201 + * because the rmapbt lives in the free space. Here we must 202 + * subtract the entire reservation from fdblocks so that we 203 + * always have blocks available for rmapbt expansion. 204 + */ 205 + hidden_space = ask; 206 + break; 207 + case XFS_AG_RESV_METADATA: 208 + /* 209 + * Space taken by all other metadata btrees are accounted 210 + * on-disk as used space. We therefore only hide the space 211 + * that is reserved but not used by the trees. 212 + */ 213 + hidden_space = ask - used; 214 + break; 215 + default: 216 + ASSERT(0); 217 + return -EINVAL; 218 + } 219 + error = xfs_mod_fdblocks(mp, -(int64_t)hidden_space, true); 200 220 if (error) { 201 221 trace_xfs_ag_resv_init_error(pag->pag_mount, pag->pag_agno, 202 222 error, _RET_IP_); ··· 238 216 239 217 resv = xfs_perag_resv(pag, type); 240 218 resv->ar_asked = ask; 241 - resv->ar_reserved = resv->ar_orig_reserved = reserved; 219 + resv->ar_orig_reserved = hidden_space; 220 + resv->ar_reserved = ask - used; 242 221 243 222 trace_xfs_ag_resv_init(pag, type, ask); 244 223 return 0;
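The `hidden_space` split makes the fdblocks adjustment depend on the reservation type: rmapbt blocks live inside free space, so the entire ask must be hidden from `fdblocks`, while other metadata trees are already accounted as used on disk, so only the not-yet-used remainder is hidden. A toy model of that arithmetic (the enum values are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the XFS_AG_RESV_* reservation types. */
enum ag_resv_type { RESV_METADATA, RESV_RMAPBT };

/* Blocks to subtract from fdblocks for a reservation, mirroring the
 * switch added to __xfs_ag_resv_init() in the hunk above. */
static uint64_t hidden_space(enum ag_resv_type type, uint64_t ask,
                             uint64_t used)
{
        if (used > ask)
                ask = used;
        return type == RESV_RMAPBT ? ask : ask - used;
}
```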
+26
fs/xfs/libxfs/xfs_bmap.c
··· 5780 5780 return error; 5781 5781 } 5782 5782 5783 + /* Make sure we won't be right-shifting an extent past the maximum bound. */ 5784 + int 5785 + xfs_bmap_can_insert_extents( 5786 + struct xfs_inode *ip, 5787 + xfs_fileoff_t off, 5788 + xfs_fileoff_t shift) 5789 + { 5790 + struct xfs_bmbt_irec got; 5791 + int is_empty; 5792 + int error = 0; 5793 + 5794 + ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL)); 5795 + 5796 + if (XFS_FORCED_SHUTDOWN(ip->i_mount)) 5797 + return -EIO; 5798 + 5799 + xfs_ilock(ip, XFS_ILOCK_EXCL); 5800 + error = xfs_bmap_last_extent(NULL, ip, XFS_DATA_FORK, &got, &is_empty); 5801 + if (!error && !is_empty && got.br_startoff >= off && 5802 + ((got.br_startoff + shift) & BMBT_STARTOFF_MASK) < got.br_startoff) 5803 + error = -EINVAL; 5804 + xfs_iunlock(ip, XFS_ILOCK_EXCL); 5805 + 5806 + return error; 5807 + } 5808 + 5783 5809 int 5784 5810 xfs_bmap_insert_extents( 5785 5811 struct xfs_trans *tp,
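The new `xfs_bmap_can_insert_extents()` rejects a right shift that would push the last extent past the on-disk 54-bit `startoff` field. The wrap test it uses, `((startoff + shift) & BMBT_STARTOFF_MASK) < startoff`, can be checked on its own (mask definition as in the `xfs_format.h` hunk in this same merge):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BMBT_STARTOFF_BITLEN 54
#define BMBT_STARTOFF_MASK   ((1ULL << BMBT_STARTOFF_BITLEN) - 1)

/* True when shifting an extent at @startoff right by @shift would wrap
 * past the 54-bit file-offset limit, per the check in the hunk above. */
static bool shift_overflows(uint64_t startoff, uint64_t shift)
{
        return ((startoff + shift) & BMBT_STARTOFF_MASK) < startoff;
}
```

Because the sum is masked back into 54 bits, any overflow lands below the original offset, which is what makes the single comparison sufficient.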
+2
fs/xfs/libxfs/xfs_bmap.h
··· 227 227 xfs_fileoff_t *next_fsb, xfs_fileoff_t offset_shift_fsb, 228 228 bool *done, xfs_fsblock_t *firstblock, 229 229 struct xfs_defer_ops *dfops); 230 + int xfs_bmap_can_insert_extents(struct xfs_inode *ip, xfs_fileoff_t off, 231 + xfs_fileoff_t shift); 230 232 int xfs_bmap_insert_extents(struct xfs_trans *tp, struct xfs_inode *ip, 231 233 xfs_fileoff_t *next_fsb, xfs_fileoff_t offset_shift_fsb, 232 234 bool *done, xfs_fileoff_t stop_fsb, xfs_fsblock_t *firstblock,
+5
fs/xfs/libxfs/xfs_format.h
··· 962 962 XFS_DFORK_DSIZE(dip, mp) : \ 963 963 XFS_DFORK_ASIZE(dip, mp)) 964 964 965 + #define XFS_DFORK_MAXEXT(dip, mp, w) \ 966 + (XFS_DFORK_SIZE(dip, mp, w) / sizeof(struct xfs_bmbt_rec)) 967 + 965 968 /* 966 969 * Return pointers to the data or attribute forks. 967 970 */ ··· 1528 1525 #define BMBT_STARTOFF_BITLEN 54 1529 1526 #define BMBT_STARTBLOCK_BITLEN 52 1530 1527 #define BMBT_BLOCKCOUNT_BITLEN 21 1528 + 1529 + #define BMBT_STARTOFF_MASK ((1ULL << BMBT_STARTOFF_BITLEN) - 1) 1531 1530 1532 1531 typedef struct xfs_bmbt_rec { 1533 1532 __be64 l0, l1;
+47 -29
fs/xfs/libxfs/xfs_inode_buf.c
··· 374 374 } 375 375 } 376 376 377 + static xfs_failaddr_t 378 + xfs_dinode_verify_fork( 379 + struct xfs_dinode *dip, 380 + struct xfs_mount *mp, 381 + int whichfork) 382 + { 383 + uint32_t di_nextents = XFS_DFORK_NEXTENTS(dip, whichfork); 384 + 385 + switch (XFS_DFORK_FORMAT(dip, whichfork)) { 386 + case XFS_DINODE_FMT_LOCAL: 387 + /* 388 + * no local regular files yet 389 + */ 390 + if (whichfork == XFS_DATA_FORK) { 391 + if (S_ISREG(be16_to_cpu(dip->di_mode))) 392 + return __this_address; 393 + if (be64_to_cpu(dip->di_size) > 394 + XFS_DFORK_SIZE(dip, mp, whichfork)) 395 + return __this_address; 396 + } 397 + if (di_nextents) 398 + return __this_address; 399 + break; 400 + case XFS_DINODE_FMT_EXTENTS: 401 + if (di_nextents > XFS_DFORK_MAXEXT(dip, mp, whichfork)) 402 + return __this_address; 403 + break; 404 + case XFS_DINODE_FMT_BTREE: 405 + if (whichfork == XFS_ATTR_FORK) { 406 + if (di_nextents > MAXAEXTNUM) 407 + return __this_address; 408 + } else if (di_nextents > MAXEXTNUM) { 409 + return __this_address; 410 + } 411 + break; 412 + default: 413 + return __this_address; 414 + } 415 + return NULL; 416 + } 417 + 377 418 xfs_failaddr_t 378 419 xfs_dinode_verify( 379 420 struct xfs_mount *mp, ··· 482 441 case S_IFREG: 483 442 case S_IFLNK: 484 443 case S_IFDIR: 485 - switch (dip->di_format) { 486 - case XFS_DINODE_FMT_LOCAL: 487 - /* 488 - * no local regular files yet 489 - */ 490 - if (S_ISREG(mode)) 491 - return __this_address; 492 - if (di_size > XFS_DFORK_DSIZE(dip, mp)) 493 - return __this_address; 494 - if (dip->di_nextents) 495 - return __this_address; 496 - /* fall through */ 497 - case XFS_DINODE_FMT_EXTENTS: 498 - case XFS_DINODE_FMT_BTREE: 499 - break; 500 - default: 501 - return __this_address; 502 - } 444 + fa = xfs_dinode_verify_fork(dip, mp, XFS_DATA_FORK); 445 + if (fa) 446 + return fa; 503 447 break; 504 448 case 0: 505 449 /* Uninitialized inode ok. 
*/ ··· 494 468 } 495 469 496 470 if (XFS_DFORK_Q(dip)) { 497 - switch (dip->di_aformat) { 498 - case XFS_DINODE_FMT_LOCAL: 499 - if (dip->di_anextents) 500 - return __this_address; 501 - /* fall through */ 502 - case XFS_DINODE_FMT_EXTENTS: 503 - case XFS_DINODE_FMT_BTREE: 504 - break; 505 - default: 506 - return __this_address; 507 - } 471 + fa = xfs_dinode_verify_fork(dip, mp, XFS_ATTR_FORK); 472 + if (fa) 473 + return fa; 508 474 } else { 509 475 /* 510 476 * If there is no fork offset, this may be a freshly-made inode
+2 -2
fs/xfs/libxfs/xfs_rtbitmap.c
··· 1029 1029 if (low_rec->ar_startext >= mp->m_sb.sb_rextents || 1030 1030 low_rec->ar_startext == high_rec->ar_startext) 1031 1031 return 0; 1032 - if (high_rec->ar_startext >= mp->m_sb.sb_rextents) 1033 - high_rec->ar_startext = mp->m_sb.sb_rextents - 1; 1032 + if (high_rec->ar_startext > mp->m_sb.sb_rextents) 1033 + high_rec->ar_startext = mp->m_sb.sb_rextents; 1034 1034 1035 1035 /* Iterate the bitmap, looking for discrepancies. */ 1036 1036 rtstart = low_rec->ar_startext;
+56 -58
fs/xfs/xfs_bmap_util.c
··· 685 685 } 686 686 687 687 /* 688 - * dead simple method of punching delalyed allocation blocks from a range in 689 - * the inode. Walks a block at a time so will be slow, but is only executed in 690 - * rare error cases so the overhead is not critical. This will always punch out 691 - * both the start and end blocks, even if the ranges only partially overlap 692 - * them, so it is up to the caller to ensure that partial blocks are not 693 - * passed in. 688 + * Dead simple method of punching delalyed allocation blocks from a range in 689 + * the inode. This will always punch out both the start and end blocks, even 690 + * if the ranges only partially overlap them, so it is up to the caller to 691 + * ensure that partial blocks are not passed in. 694 692 */ 695 693 int 696 694 xfs_bmap_punch_delalloc_range( ··· 696 698 xfs_fileoff_t start_fsb, 697 699 xfs_fileoff_t length) 698 700 { 699 - xfs_fileoff_t remaining = length; 701 + struct xfs_ifork *ifp = &ip->i_df; 702 + xfs_fileoff_t end_fsb = start_fsb + length; 703 + struct xfs_bmbt_irec got, del; 704 + struct xfs_iext_cursor icur; 700 705 int error = 0; 701 706 702 707 ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); 703 708 704 - do { 705 - int done; 706 - xfs_bmbt_irec_t imap; 707 - int nimaps = 1; 708 - xfs_fsblock_t firstblock; 709 - struct xfs_defer_ops dfops; 710 - 711 - /* 712 - * Map the range first and check that it is a delalloc extent 713 - * before trying to unmap the range. Otherwise we will be 714 - * trying to remove a real extent (which requires a 715 - * transaction) or a hole, which is probably a bad idea... 
716 - */ 717 - error = xfs_bmapi_read(ip, start_fsb, 1, &imap, &nimaps, 718 - XFS_BMAPI_ENTIRE); 719 - 720 - if (error) { 721 - /* something screwed, just bail */ 722 - if (!XFS_FORCED_SHUTDOWN(ip->i_mount)) { 723 - xfs_alert(ip->i_mount, 724 - "Failed delalloc mapping lookup ino %lld fsb %lld.", 725 - ip->i_ino, start_fsb); 726 - } 727 - break; 728 - } 729 - if (!nimaps) { 730 - /* nothing there */ 731 - goto next_block; 732 - } 733 - if (imap.br_startblock != DELAYSTARTBLOCK) { 734 - /* been converted, ignore */ 735 - goto next_block; 736 - } 737 - WARN_ON(imap.br_blockcount == 0); 738 - 739 - /* 740 - * Note: while we initialise the firstblock/dfops pair, they 741 - * should never be used because blocks should never be 742 - * allocated or freed for a delalloc extent and hence we need 743 - * don't cancel or finish them after the xfs_bunmapi() call. 744 - */ 745 - xfs_defer_init(&dfops, &firstblock); 746 - error = xfs_bunmapi(NULL, ip, start_fsb, 1, 0, 1, &firstblock, 747 - &dfops, &done); 709 + if (!(ifp->if_flags & XFS_IFEXTENTS)) { 710 + error = xfs_iread_extents(NULL, ip, XFS_DATA_FORK); 748 711 if (error) 749 - break; 712 + return error; 713 + } 750 714 751 - ASSERT(!xfs_defer_has_unfinished_work(&dfops)); 752 - next_block: 753 - start_fsb++; 754 - remaining--; 755 - } while(remaining > 0); 715 + if (!xfs_iext_lookup_extent_before(ip, ifp, &end_fsb, &icur, &got)) 716 + return 0; 717 + 718 + while (got.br_startoff + got.br_blockcount > start_fsb) { 719 + del = got; 720 + xfs_trim_extent(&del, start_fsb, length); 721 + 722 + /* 723 + * A delete can push the cursor forward. Step back to the 724 + * previous extent on non-delalloc or extents outside the 725 + * target range. 
726 + */ 727 + if (!del.br_blockcount || 728 + !isnullstartblock(del.br_startblock)) { 729 + if (!xfs_iext_prev_extent(ifp, &icur, &got)) 730 + break; 731 + continue; 732 + } 733 + 734 + error = xfs_bmap_del_extent_delay(ip, XFS_DATA_FORK, &icur, 735 + &got, &del); 736 + if (error || !xfs_iext_get_extent(ifp, &icur, &got)) 737 + break; 738 + } 756 739 757 740 return error; 758 741 } ··· 1187 1208 return 0; 1188 1209 if (offset + len > XFS_ISIZE(ip)) 1189 1210 len = XFS_ISIZE(ip) - offset; 1190 - return iomap_zero_range(VFS_I(ip), offset, len, NULL, &xfs_iomap_ops); 1211 + error = iomap_zero_range(VFS_I(ip), offset, len, NULL, &xfs_iomap_ops); 1212 + if (error) 1213 + return error; 1214 + 1215 + /* 1216 + * If we zeroed right up to EOF and EOF straddles a page boundary we 1217 + * must make sure that the post-EOF area is also zeroed because the 1218 + * page could be mmap'd and iomap_zero_range doesn't do that for us. 1219 + * Writeback of the eof page will do this, albeit clumsily. 1220 + */ 1221 + if (offset + len >= XFS_ISIZE(ip) && ((offset + len) & PAGE_MASK)) { 1222 + error = filemap_write_and_wait_range(VFS_I(ip)->i_mapping, 1223 + (offset + len) & ~PAGE_MASK, LLONG_MAX); 1224 + } 1225 + 1226 + return error; 1191 1227 } 1192 1228 1193 1229 /* ··· 1397 1403 ASSERT(xfs_isilocked(ip, XFS_MMAPLOCK_EXCL)); 1398 1404 1399 1405 trace_xfs_insert_file_space(ip); 1406 + 1407 + error = xfs_bmap_can_insert_extents(ip, stop_fsb, shift_fsb); 1408 + if (error) 1409 + return error; 1400 1410 1401 1411 error = xfs_prepare_shift(ip, offset); 1402 1412 if (error)
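The rewritten punch loop above leans on xfs_trim_extent() to clamp each cached extent to the target range before deciding whether it is deletable delalloc. A minimal standalone sketch of that clamping step (toy struct and field subset, not the kernel's types):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the kernel's xfs_bmbt_irec: just the two
 * fields the trim logic needs. */
struct irec {
	uint64_t br_startoff;   /* starting file offset, in blocks */
	uint64_t br_blockcount; /* number of blocks in the extent */
};

/* Clamp *irec to the window [start, start + len), mirroring what the
 * punch loop uses xfs_trim_extent for. An extent entirely outside the
 * window ends up with br_blockcount == 0, which the loop treats as
 * "step back to the previous extent". */
static void trim_extent(struct irec *irec, uint64_t start, uint64_t len)
{
	uint64_t end = start + len;
	uint64_t irec_end = irec->br_startoff + irec->br_blockcount;

	if (irec->br_startoff >= end || irec_end <= start) {
		irec->br_blockcount = 0; /* no overlap at all */
		return;
	}
	if (irec->br_startoff < start) {
		irec->br_blockcount -= start - irec->br_startoff;
		irec->br_startoff = start;
	}
	if (irec_end > end)
		irec->br_blockcount -= irec_end - end;
}
```

Walking extents backwards from end_fsb and trimming each one replaces the old block-at-a-time xfs_bmapi_read() probe, which is why the delalloc-only and in-range checks can now be a single test on the trimmed extent.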
+2 -2
fs/xfs/xfs_fsmap.c
··· 513 513 struct xfs_trans *tp, 514 514 struct xfs_getfsmap_info *info) 515 515 { 516 - struct xfs_rtalloc_rec alow; 517 - struct xfs_rtalloc_rec ahigh; 516 + struct xfs_rtalloc_rec alow = { 0 }; 517 + struct xfs_rtalloc_rec ahigh = { 0 }; 518 518 int error; 519 519 520 520 xfs_ilock(tp->t_mountp->m_rbmip, XFS_ILOCK_SHARED);
+1 -1
fs/xfs/xfs_fsops.c
··· 387 387 do { 388 388 free = percpu_counter_sum(&mp->m_fdblocks) - 389 389 mp->m_alloc_set_aside; 390 - if (!free) 390 + if (free <= 0) 391 391 break; 392 392 393 393 delta = request - mp->m_resblks;
+21 -36
fs/xfs/xfs_inode.c
··· 3236 3236 struct xfs_inode *cip; 3237 3237 int nr_found; 3238 3238 int clcount = 0; 3239 - int bufwasdelwri; 3240 3239 int i; 3241 3240 3242 3241 pag = xfs_perag_get(mp, XFS_INO_TO_AGNO(mp, ip->i_ino)); ··· 3359 3360 * inode buffer and shut down the filesystem. 3360 3361 */ 3361 3362 rcu_read_unlock(); 3362 - /* 3363 - * Clean up the buffer. If it was delwri, just release it -- 3364 - * brelse can handle it with no problems. If not, shut down the 3365 - * filesystem before releasing the buffer. 3366 - */ 3367 - bufwasdelwri = (bp->b_flags & _XBF_DELWRI_Q); 3368 - if (bufwasdelwri) 3369 - xfs_buf_relse(bp); 3370 - 3371 3363 xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE); 3372 3364 3373 - if (!bufwasdelwri) { 3374 - /* 3375 - * Just like incore_relse: if we have b_iodone functions, 3376 - * mark the buffer as an error and call them. Otherwise 3377 - * mark it as stale and brelse. 3378 - */ 3379 - if (bp->b_iodone) { 3380 - bp->b_flags &= ~XBF_DONE; 3381 - xfs_buf_stale(bp); 3382 - xfs_buf_ioerror(bp, -EIO); 3383 - xfs_buf_ioend(bp); 3384 - } else { 3385 - xfs_buf_stale(bp); 3386 - xfs_buf_relse(bp); 3387 - } 3388 - } 3389 - 3390 3365 /* 3391 - * Unlocks the flush lock 3366 + * We'll always have an inode attached to the buffer for completion 3367 + * processing by the time we are called from xfs_iflush(). Hence we 3368 + * always need to do IO completion processing to abort the inodes 3369 + * attached to the buffer. Handle them just like the shutdown case in 3370 + * xfs_buf_submit().
3392 3371 */ 3372 + ASSERT(bp->b_iodone); 3373 + bp->b_flags &= ~XBF_DONE; 3374 + xfs_buf_stale(bp); 3375 + xfs_buf_ioerror(bp, -EIO); 3376 + xfs_buf_ioend(bp); 3377 + 3378 + /* abort the corrupt inode, as it was not attached to the buffer */ 3393 3379 xfs_iflush_abort(cip, false); 3394 3380 kmem_free(cilist); 3395 3381 xfs_perag_put(pag); ··· 3470 3486 xfs_log_force(mp, 0); 3471 3487 3472 3488 /* 3473 - * inode clustering: 3474 - * see if other inodes can be gathered into this write 3489 + * inode clustering: try to gather other inodes into this write 3490 + * 3491 + * Note: Any error during clustering will result in the filesystem 3492 + * being shut down and completion callbacks run on the cluster buffer. 3493 + * As we have already flushed and attached this inode to the buffer, 3494 + * it has already been aborted and released by xfs_iflush_cluster() and 3495 + * so we have no further error handling to do here. 3475 3496 */ 3476 3497 error = xfs_iflush_cluster(ip, bp); 3477 3498 if (error) 3478 - goto cluster_corrupt_out; 3499 + return error; 3479 3500 3480 3501 *bpp = bp; 3481 3502 return 0; ··· 3489 3500 if (bp) 3490 3501 xfs_buf_relse(bp); 3491 3502 xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE); 3492 - cluster_corrupt_out: 3493 - error = -EFSCORRUPTED; 3494 3503 abort_out: 3495 - /* 3496 - * Unlocks the flush lock 3497 - */ 3504 + /* abort the corrupt inode, as it was not attached to the buffer */ 3498 3505 xfs_iflush_abort(ip, false); 3499 3506 return error; 3500 3507 }
+14 -1
fs/xfs/xfs_iomap.c
··· 963 963 unsigned *lockmode) 964 964 { 965 965 unsigned mode = XFS_ILOCK_SHARED; 966 + bool is_write = flags & (IOMAP_WRITE | IOMAP_ZERO); 966 967 967 968 /* 968 969 * COW writes may allocate delalloc space or convert unwritten COW 969 970 * extents, so we need to make sure to take the lock exclusively here. 970 971 */ 971 - if (xfs_is_reflink_inode(ip) && (flags & (IOMAP_WRITE | IOMAP_ZERO))) { 972 + if (xfs_is_reflink_inode(ip) && is_write) { 972 973 /* 973 974 * FIXME: It could still overwrite on unshared extents and not 974 975 * need allocation. ··· 990 989 mode = XFS_ILOCK_EXCL; 991 990 } 992 991 992 + relock: 993 993 if (flags & IOMAP_NOWAIT) { 994 994 if (!xfs_ilock_nowait(ip, mode)) 995 995 return -EAGAIN; 996 996 } else { 997 997 xfs_ilock(ip, mode); 998 + } 999 + 1000 + /* 1001 + * The reflink iflag could have changed since the earlier unlocked 1002 + * check, so if we got ILOCK_SHARED for a write but we're now a 1003 + * reflink inode we have to switch to ILOCK_EXCL and relock. 1004 + */ 1005 + if (mode == XFS_ILOCK_SHARED && is_write && xfs_is_reflink_inode(ip)) { 1006 + xfs_iunlock(ip, mode); 1007 + mode = XFS_ILOCK_EXCL; 1008 + goto relock; 998 1009 } 999 1010 1000 1011 *lockmode = mode;
+6 -1
fs/xfs/xfs_trans.c
··· 258 258 if (!(flags & XFS_TRANS_NO_WRITECOUNT)) 259 259 sb_start_intwrite(mp->m_super); 260 260 261 - WARN_ON(mp->m_super->s_writers.frozen == SB_FREEZE_COMPLETE); 261 + /* 262 + * Zero-reservation ("empty") transactions can't modify anything, so 263 + * they're allowed to run while we're frozen. 264 + */ 265 + WARN_ON(resp->tr_logres > 0 && 266 + mp->m_super->s_writers.frozen == SB_FREEZE_COMPLETE); 262 267 atomic_inc(&mp->m_active_trans); 263 268 264 269 tp = kmem_zone_zalloc(xfs_trans_zone,
+2 -1
include/crypto/if_alg.h
··· 245 245 int offset, size_t size, int flags); 246 246 void af_alg_free_resources(struct af_alg_async_req *areq); 247 247 void af_alg_async_cb(struct crypto_async_request *_req, int err); 248 - __poll_t af_alg_poll_mask(struct socket *sock, __poll_t events); 248 + __poll_t af_alg_poll(struct file *file, struct socket *sock, 249 + poll_table *wait); 249 250 struct af_alg_async_req *af_alg_alloc_areq(struct sock *sk, 250 251 unsigned int areqlen); 251 252 int af_alg_get_rsgl(struct sock *sk, struct msghdr *msg, int flags,
+3
include/linux/acpi.h
··· 443 443 int acpi_check_region(resource_size_t start, resource_size_t n, 444 444 const char *name); 445 445 446 + acpi_status acpi_release_memory(acpi_handle handle, struct resource *res, 447 + u32 level); 448 + 446 449 int acpi_resources_are_enforced(void); 447 450 448 451 #ifdef CONFIG_HIBERNATION
+2 -2
include/linux/blkdev.h
··· 1119 1119 if (!q->limits.chunk_sectors) 1120 1120 return q->limits.max_sectors; 1121 1121 1122 - return q->limits.chunk_sectors - 1123 - (offset & (q->limits.chunk_sectors - 1)); 1122 + return min(q->limits.max_sectors, (unsigned int)(q->limits.chunk_sectors - 1123 + (offset & (q->limits.chunk_sectors - 1)))); 1124 1124 } 1125 1125 1126 1126 static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
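The blkdev.h change above caps the distance-to-chunk-boundary by max_sectors as well; without the clamp, a device with large chunk_sectors could report a per-request limit above its hardware maximum. A standalone sketch of the fixed calculation (plain function, not the kernel's queue types; chunk_sectors is assumed to be a power of two, as in the kernel):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the corrected blk_queue_get_max_sectors() logic: the room
 * left before the next chunk boundary must itself be capped by the
 * queue-wide max_sectors limit. */
static unsigned int max_sectors_for_offset(unsigned int max_sectors,
					   unsigned int chunk_sectors,
					   uint64_t offset)
{
	unsigned int to_boundary;

	if (!chunk_sectors)
		return max_sectors; /* no chunking: hardware limit only */

	/* sectors remaining until the next chunk boundary */
	to_boundary = chunk_sectors - (offset & (chunk_sectors - 1));
	return to_boundary < max_sectors ? to_boundary : max_sectors;
}
```

With offset 1000 and 1024-sector chunks only 24 sectors remain to the boundary, but at offset 0 the unclamped answer would be 1024, which the min() now reduces to the hardware limit.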
+26
include/linux/bpf-cgroup.h
··· 188 188 \ 189 189 __ret; \ 190 190 }) 191 + int cgroup_bpf_prog_attach(const union bpf_attr *attr, 192 + enum bpf_prog_type ptype, struct bpf_prog *prog); 193 + int cgroup_bpf_prog_detach(const union bpf_attr *attr, 194 + enum bpf_prog_type ptype); 195 + int cgroup_bpf_prog_query(const union bpf_attr *attr, 196 + union bpf_attr __user *uattr); 191 197 #else 192 198 199 + struct bpf_prog; 193 200 struct cgroup_bpf {}; 194 201 static inline void cgroup_bpf_put(struct cgroup *cgrp) {} 195 202 static inline int cgroup_bpf_inherit(struct cgroup *cgrp) { return 0; } 203 + 204 + static inline int cgroup_bpf_prog_attach(const union bpf_attr *attr, 205 + enum bpf_prog_type ptype, 206 + struct bpf_prog *prog) 207 + { 208 + return -EINVAL; 209 + } 210 + 211 + static inline int cgroup_bpf_prog_detach(const union bpf_attr *attr, 212 + enum bpf_prog_type ptype) 213 + { 214 + return -EINVAL; 215 + } 216 + 217 + static inline int cgroup_bpf_prog_query(const union bpf_attr *attr, 218 + union bpf_attr __user *uattr) 219 + { 220 + return -EINVAL; 221 + } 196 222 197 223 #define cgroup_bpf_enabled (0) 198 224 #define BPF_CGROUP_PRE_CONNECT_ENABLED(sk) (0)
+8
include/linux/bpf.h
··· 696 696 struct sock *__sock_map_lookup_elem(struct bpf_map *map, u32 key); 697 697 struct sock *__sock_hash_lookup_elem(struct bpf_map *map, void *key); 698 698 int sock_map_prog(struct bpf_map *map, struct bpf_prog *prog, u32 type); 699 + int sockmap_get_from_fd(const union bpf_attr *attr, int type, 700 + struct bpf_prog *prog); 699 701 #else 700 702 static inline struct sock *__sock_map_lookup_elem(struct bpf_map *map, u32 key) 701 703 { ··· 715 713 u32 type) 716 714 { 717 715 return -EOPNOTSUPP; 716 + } 717 + 718 + static inline int sockmap_get_from_fd(const union bpf_attr *attr, int type, 719 + struct bpf_prog *prog) 720 + { 721 + return -EINVAL; 718 722 } 719 723 #endif 720 724
+3 -2
include/linux/bpf_lirc.h
··· 5 5 #include <uapi/linux/bpf.h> 6 6 7 7 #ifdef CONFIG_BPF_LIRC_MODE2 8 - int lirc_prog_attach(const union bpf_attr *attr); 8 + int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog); 9 9 int lirc_prog_detach(const union bpf_attr *attr); 10 10 int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr); 11 11 #else 12 - static inline int lirc_prog_attach(const union bpf_attr *attr) 12 + static inline int lirc_prog_attach(const union bpf_attr *attr, 13 + struct bpf_prog *prog) 13 14 { 14 15 return -EINVAL; 15 16 }
+7 -1
include/linux/compat.h
··· 72 72 */ 73 73 #ifndef COMPAT_SYSCALL_DEFINEx 74 74 #define COMPAT_SYSCALL_DEFINEx(x, name, ...) \ 75 + __diag_push(); \ 76 + __diag_ignore(GCC, 8, "-Wattribute-alias", \ 77 + "Type aliasing is used to sanitize syscall arguments");\ 75 78 asmlinkage long compat_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)); \ 76 79 asmlinkage long compat_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)) \ 77 80 __attribute__((alias(__stringify(__se_compat_sys##name)))); \ ··· 83 80 asmlinkage long __se_compat_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)); \ 84 81 asmlinkage long __se_compat_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)) \ 85 82 { \ 86 - return __do_compat_sys##name(__MAP(x,__SC_DELOUSE,__VA_ARGS__));\ 83 + long ret = __do_compat_sys##name(__MAP(x,__SC_DELOUSE,__VA_ARGS__));\ 84 + __MAP(x,__SC_TEST,__VA_ARGS__); \ 85 + return ret; \ 87 86 } \ 87 + __diag_pop(); \ 88 88 static inline long __do_compat_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)) 89 89 #endif /* COMPAT_SYSCALL_DEFINEx */ 90 90
+25
include/linux/compiler-gcc.h
··· 347 347 #if GCC_VERSION >= 50100 348 348 #define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1 349 349 #endif 350 + 351 + /* 352 + * Turn individual warnings and errors on and off locally, depending 353 + * on version. 354 + */ 355 + #define __diag_GCC(version, severity, s) \ 356 + __diag_GCC_ ## version(__diag_GCC_ ## severity s) 357 + 358 + /* Severity used in pragma directives */ 359 + #define __diag_GCC_ignore ignored 360 + #define __diag_GCC_warn warning 361 + #define __diag_GCC_error error 362 + 363 + /* Compilers before gcc-4.6 do not understand "#pragma GCC diagnostic push" */ 364 + #if GCC_VERSION >= 40600 365 + #define __diag_str1(s) #s 366 + #define __diag_str(s) __diag_str1(s) 367 + #define __diag(s) _Pragma(__diag_str(GCC diagnostic s)) 368 + #endif 369 + 370 + #if GCC_VERSION >= 80000 371 + #define __diag_GCC_8(s) __diag(s) 372 + #else 373 + #define __diag_GCC_8(s) 374 + #endif
+18
include/linux/compiler_types.h
··· 271 271 # define __native_word(t) (sizeof(t) == sizeof(char) || sizeof(t) == sizeof(short) || sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long)) 272 272 #endif 273 273 274 + #ifndef __diag 275 + #define __diag(string) 276 + #endif 277 + 278 + #ifndef __diag_GCC 279 + #define __diag_GCC(version, severity, string) 280 + #endif 281 + 282 + #define __diag_push() __diag(push) 283 + #define __diag_pop() __diag(pop) 284 + 285 + #define __diag_ignore(compiler, version, option, comment) \ 286 + __diag_ ## compiler(version, ignore, option) 287 + #define __diag_warn(compiler, version, option, comment) \ 288 + __diag_ ## compiler(version, warn, option) 289 + #define __diag_error(compiler, version, option, comment) \ 290 + __diag_ ## compiler(version, error, option) 291 + 274 292 #endif /* __LINUX_COMPILER_TYPES_H */
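The __diag machinery added in compiler-gcc.h and given no-op fallbacks here boils down to version-gated _Pragma() wrappers, so call sites like SYSCALL_DEFINEx can annotate code unconditionally. A standalone sketch of the expansion trick (local macro names, not the kernel's, to keep it self-contained):

```c
#include <assert.h>

/* Stringify-then-_Pragma layering, as in the new __diag() macro:
 * diag(push) expands to _Pragma("GCC diagnostic push"), and so on.
 * In the kernel, a version check additionally turns these into nothing
 * on compilers that don't understand the pragma. */
#define diag_str1(s) #s
#define diag_str(s)  diag_str1(s)
#define diag(s)      _Pragma(diag_str(GCC diagnostic s))

diag(push)
diag(ignored "-Wunused-variable")
static int silenced_example(void)
{
	int unused; /* would trip -Wunused-variable; the pragma hides it */
	return 42;
}
diag(pop)
```

The push/pop pair keeps the suppression scoped, which is exactly why __diag_push()/__diag_pop() bracket the syscall definition macros rather than disabling -Wattribute-alias globally.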
+1 -1
include/linux/dax.h
··· 135 135 136 136 ssize_t dax_iomap_rw(struct kiocb *iocb, struct iov_iter *iter, 137 137 const struct iomap_ops *ops); 138 - int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size, 138 + vm_fault_t dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size, 139 139 pfn_t *pfnp, int *errp, const struct iomap_ops *ops); 140 140 vm_fault_t dax_finish_sync_fault(struct vm_fault *vmf, 141 141 enum page_entry_size pe_size, pfn_t pfn);
+8 -48
include/linux/filter.h
··· 470 470 }; 471 471 472 472 struct bpf_binary_header { 473 - u16 pages; 474 - u16 locked:1; 475 - 473 + u32 pages; 476 474 /* Some arches need word alignment for their instructions */ 477 475 u8 image[] __aligned(4); 478 476 }; ··· 479 481 u16 pages; /* Number of allocated pages */ 480 482 u16 jited:1, /* Is our filter JIT'ed? */ 481 483 jit_requested:1,/* archs need to JIT the prog */ 482 - locked:1, /* Program image locked? */ 484 + undo_set_mem:1, /* Passed set_memory_ro() checkpoint */ 483 485 gpl_compatible:1, /* Is filter GPL compatible? */ 484 486 cb_access:1, /* Is control block accessed? */ 485 487 dst_needed:1, /* Do we need dst entry? */ ··· 675 677 676 678 static inline void bpf_prog_lock_ro(struct bpf_prog *fp) 677 679 { 678 - #ifdef CONFIG_ARCH_HAS_SET_MEMORY 679 - fp->locked = 1; 680 - if (set_memory_ro((unsigned long)fp, fp->pages)) 681 - fp->locked = 0; 682 - #endif 680 + fp->undo_set_mem = 1; 681 + set_memory_ro((unsigned long)fp, fp->pages); 683 682 } 684 683 685 684 static inline void bpf_prog_unlock_ro(struct bpf_prog *fp) 686 685 { 687 - #ifdef CONFIG_ARCH_HAS_SET_MEMORY 688 - if (fp->locked) { 689 - WARN_ON_ONCE(set_memory_rw((unsigned long)fp, fp->pages)); 690 - /* In case set_memory_rw() fails, we want to be the first 691 - * to crash here instead of some random place later on. 
692 - */ 693 - fp->locked = 0; 694 - } 695 - #endif 686 + if (fp->undo_set_mem) 687 + set_memory_rw((unsigned long)fp, fp->pages); 696 688 } 697 689 698 690 static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr) 699 691 { 700 - #ifdef CONFIG_ARCH_HAS_SET_MEMORY 701 - hdr->locked = 1; 702 - if (set_memory_ro((unsigned long)hdr, hdr->pages)) 703 - hdr->locked = 0; 704 - #endif 692 + set_memory_ro((unsigned long)hdr, hdr->pages); 705 693 } 706 694 707 695 static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr) 708 696 { 709 - #ifdef CONFIG_ARCH_HAS_SET_MEMORY 710 - if (hdr->locked) { 711 - WARN_ON_ONCE(set_memory_rw((unsigned long)hdr, hdr->pages)); 712 - /* In case set_memory_rw() fails, we want to be the first 713 - * to crash here instead of some random place later on. 714 - */ 715 - hdr->locked = 0; 716 - } 717 - #endif 697 + set_memory_rw((unsigned long)hdr, hdr->pages); 718 698 } 719 699 720 700 static inline struct bpf_binary_header * ··· 703 727 704 728 return (void *)addr; 705 729 } 706 - 707 - #ifdef CONFIG_ARCH_HAS_SET_MEMORY 708 - static inline int bpf_prog_check_pages_ro_single(const struct bpf_prog *fp) 709 - { 710 - if (!fp->locked) 711 - return -ENOLCK; 712 - if (fp->jited) { 713 - const struct bpf_binary_header *hdr = bpf_jit_binary_hdr(fp); 714 - 715 - if (!hdr->locked) 716 - return -ENOLCK; 717 - } 718 - 719 - return 0; 720 - } 721 - #endif 722 730 723 731 int sk_filter_trim_cap(struct sock *sk, struct sk_buff *skb, unsigned int cap); 724 732 static inline int sk_filter(struct sock *sk, struct sk_buff *skb)
-2
include/linux/fs.h
··· 1720 1720 int (*iterate) (struct file *, struct dir_context *); 1721 1721 int (*iterate_shared) (struct file *, struct dir_context *); 1722 1722 __poll_t (*poll) (struct file *, struct poll_table_struct *); 1723 - struct wait_queue_head * (*get_poll_head)(struct file *, __poll_t); 1724 - __poll_t (*poll_mask) (struct file *, __poll_t); 1725 1723 long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long); 1726 1724 long (*compat_ioctl) (struct file *, unsigned int, unsigned long); 1727 1725 int (*mmap) (struct file *, struct vm_area_struct *);
+1 -1
include/linux/iio/buffer-dma.h
··· 141 141 char __user *user_buffer); 142 142 size_t iio_dma_buffer_data_available(struct iio_buffer *buffer); 143 143 int iio_dma_buffer_set_bytes_per_datum(struct iio_buffer *buffer, size_t bpd); 144 - int iio_dma_buffer_set_length(struct iio_buffer *buffer, int length); 144 + int iio_dma_buffer_set_length(struct iio_buffer *buffer, unsigned int length); 145 145 int iio_dma_buffer_request_update(struct iio_buffer *buffer); 146 146 147 147 int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
+1 -1
include/linux/input/mt.h
··· 100 100 return axis == ABS_MT_SLOT || input_is_mt_value(axis); 101 101 } 102 102 103 - void input_mt_report_slot_state(struct input_dev *dev, 103 + bool input_mt_report_slot_state(struct input_dev *dev, 104 104 unsigned int tool_type, bool active); 105 105 106 106 void input_mt_report_finger_count(struct input_dev *dev, int count);
+2
include/linux/mlx5/eswitch.h
··· 8 8 9 9 #include <linux/mlx5/driver.h> 10 10 11 + #define MLX5_ESWITCH_MANAGER(mdev) MLX5_CAP_GEN(mdev, eswitch_manager) 12 + 11 13 enum { 12 14 SRIOV_NONE, 13 15 SRIOV_LEGACY,
+1 -1
include/linux/mlx5/mlx5_ifc.h
··· 922 922 u8 vnic_env_queue_counters[0x1]; 923 923 u8 ets[0x1]; 924 924 u8 nic_flow_table[0x1]; 925 - u8 eswitch_flow_table[0x1]; 925 + u8 eswitch_manager[0x1]; 926 926 u8 device_memory[0x1]; 927 927 u8 mcam_reg[0x1]; 928 928 u8 pcam_reg[0x1];
-1
include/linux/net.h
··· 147 147 int (*getname) (struct socket *sock, 148 148 struct sockaddr *addr, 149 149 int peer); 150 - __poll_t (*poll_mask) (struct socket *sock, __poll_t events); 151 150 __poll_t (*poll) (struct file *file, struct socket *sock, 152 151 struct poll_table_struct *wait); 153 152 int (*ioctl) (struct socket *sock, unsigned int cmd,
+20
include/linux/netdevice.h
··· 2796 2796 if (PTR_ERR(pp) != -EINPROGRESS) 2797 2797 NAPI_GRO_CB(skb)->flush |= flush; 2798 2798 } 2799 + static inline void skb_gro_flush_final_remcsum(struct sk_buff *skb, 2800 + struct sk_buff *pp, 2801 + int flush, 2802 + struct gro_remcsum *grc) 2803 + { 2804 + if (PTR_ERR(pp) != -EINPROGRESS) { 2805 + NAPI_GRO_CB(skb)->flush |= flush; 2806 + skb_gro_remcsum_cleanup(skb, grc); 2807 + skb->remcsum_offload = 0; 2808 + } 2809 + } 2799 2810 #else 2800 2811 static inline void skb_gro_flush_final(struct sk_buff *skb, struct sk_buff *pp, int flush) 2801 2812 { 2802 2813 NAPI_GRO_CB(skb)->flush |= flush; 2814 + } 2815 + static inline void skb_gro_flush_final_remcsum(struct sk_buff *skb, 2816 + struct sk_buff *pp, 2817 + int flush, 2818 + struct gro_remcsum *grc) 2819 + { 2820 + NAPI_GRO_CB(skb)->flush |= flush; 2821 + skb_gro_remcsum_cleanup(skb, grc); 2822 + skb->remcsum_offload = 0; 2803 2823 } 2804 2824 #endif 2805 2825
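The new skb_gro_flush_final_remcsum() (the merge-resolution point mentioned in the commit message) follows the same rule as skb_gro_flush_final(): an ERR_PTR(-EINPROGRESS) result means the skb was consumed for a merge, so neither the flush flag nor the remote-checksum cleanup may be applied yet. A toy model of that guard, with stand-in state instead of sk_buff/NAPI_GRO_CB:

```c
#include <assert.h>
#include <errno.h>

/* Toy stand-in for the per-skb GRO state touched by the helper. */
struct gro_state {
	int flush;           /* NAPI_GRO_CB(skb)->flush analogue */
	int remcsum_offload; /* skb->remcsum_offload analogue */
};

/* Only a final verdict (pp_err != -EINPROGRESS) may latch the flush
 * flag and undo the remote-checksum fixup, as in the new helper. */
static void flush_final_remcsum(struct gro_state *st, long pp_err, int flush)
{
	if (pp_err != -EINPROGRESS) {
		st->flush |= flush;
		st->remcsum_offload = 0; /* remcsum cleanup */
	}
}
```

The #else branch in the header (GRO compiled out) drops the guard entirely because there is no in-progress merge state to protect.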
+3 -3
include/linux/pm_domain.h
··· 234 234 int of_genpd_parse_idle_states(struct device_node *dn, 235 235 struct genpd_power_state **states, int *n); 236 236 unsigned int of_genpd_opp_to_performance_state(struct device *dev, 237 - struct device_node *opp_node); 237 + struct device_node *np); 238 238 239 239 int genpd_dev_pm_attach(struct device *dev); 240 240 struct device *genpd_dev_pm_attach_by_id(struct device *dev, ··· 274 274 275 275 static inline unsigned int 276 276 of_genpd_opp_to_performance_state(struct device *dev, 277 - struct device_node *opp_node) 277 + struct device_node *np) 278 278 { 279 - return -ENODEV; 279 + return 0; 280 280 } 281 281 282 282 static inline int genpd_dev_pm_attach(struct device *dev)
+7 -7
include/linux/poll.h
··· 74 74 pt->_key = ~(__poll_t)0; /* all events enabled */ 75 75 } 76 76 77 - static inline bool file_has_poll_mask(struct file *file) 78 - { 79 - return file->f_op->get_poll_head && file->f_op->poll_mask; 80 - } 81 - 82 77 static inline bool file_can_poll(struct file *file) 83 78 { 84 - return file->f_op->poll || file_has_poll_mask(file); 79 + return file->f_op->poll; 85 80 } 86 81 87 - __poll_t vfs_poll(struct file *file, struct poll_table_struct *pt); 82 + static inline __poll_t vfs_poll(struct file *file, struct poll_table_struct *pt) 83 + { 84 + if (unlikely(!file->f_op->poll)) 85 + return DEFAULT_POLLMASK; 86 + return file->f_op->poll(file, pt); 87 + } 88 88 89 89 struct poll_table_entry { 90 90 struct file *filp;
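With the ->get_poll_head/->poll_mask pair removed, polling collapses back to a single optional ->poll method, and vfs_poll() becomes a trivial inline: no method means "always ready" (DEFAULT_POLLMASK). A standalone model of that dispatch (simplified types and a stand-in mask value, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative value; the kernel's DEFAULT_POLLMASK is
 * EPOLLIN|EPOLLOUT|EPOLLRDNORM|EPOLLWRNORM. */
#define DEFAULT_POLLMASK 0x145u
#define EPOLLIN          0x001u

struct file;
struct file_operations {
	unsigned int (*poll)(struct file *file);
};
struct file {
	const struct file_operations *f_op;
};

/* Model of the reinstated inline vfs_poll(): fall back to "always
 * ready" when the driver provides no ->poll method. */
static unsigned int vfs_poll_sketch(struct file *file)
{
	if (!file->f_op->poll)
		return DEFAULT_POLLMASK;
	return file->f_op->poll(file);
}

/* Example ->poll implementation: reports readable only. */
static unsigned int poll_ready_to_read(struct file *file)
{
	(void)file;
	return EPOLLIN;
}
```

This is also why file_can_poll() above shrinks to a single f_op->poll check.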
+2
include/linux/rmi.h
··· 354 354 struct mutex irq_mutex; 355 355 struct input_dev *input; 356 356 357 + struct irq_domain *irqdomain; 358 + 357 359 u8 pdt_props; 358 360 359 361 u8 num_rx_electrodes;
-18
include/linux/scatterlist.h
··· 9 9 #include <asm/io.h> 10 10 11 11 struct scatterlist { 12 - #ifdef CONFIG_DEBUG_SG 13 - unsigned long sg_magic; 14 - #endif 15 12 unsigned long page_link; 16 13 unsigned int offset; 17 14 unsigned int length; ··· 61 64 * 62 65 */ 63 66 64 - #define SG_MAGIC 0x87654321 65 67 #define SG_CHAIN 0x01UL 66 68 #define SG_END 0x02UL 67 69 ··· 94 98 */ 95 99 BUG_ON((unsigned long) page & (SG_CHAIN | SG_END)); 96 100 #ifdef CONFIG_DEBUG_SG 97 - BUG_ON(sg->sg_magic != SG_MAGIC); 98 101 BUG_ON(sg_is_chain(sg)); 99 102 #endif 100 103 sg->page_link = page_link | (unsigned long) page; ··· 124 129 static inline struct page *sg_page(struct scatterlist *sg) 125 130 { 126 131 #ifdef CONFIG_DEBUG_SG 127 - BUG_ON(sg->sg_magic != SG_MAGIC); 128 132 BUG_ON(sg_is_chain(sg)); 129 133 #endif 130 134 return (struct page *)((sg)->page_link & ~(SG_CHAIN | SG_END)); ··· 189 195 **/ 190 196 static inline void sg_mark_end(struct scatterlist *sg) 191 197 { 192 - #ifdef CONFIG_DEBUG_SG 193 - BUG_ON(sg->sg_magic != SG_MAGIC); 194 - #endif 195 198 /* 196 199 * Set termination bit, clear potential chain bit 197 200 */ ··· 206 215 **/ 207 216 static inline void sg_unmark_end(struct scatterlist *sg) 208 217 { 209 - #ifdef CONFIG_DEBUG_SG 210 - BUG_ON(sg->sg_magic != SG_MAGIC); 211 - #endif 212 218 sg->page_link &= ~SG_END; 213 219 } 214 220 ··· 248 260 static inline void sg_init_marker(struct scatterlist *sgl, 249 261 unsigned int nents) 250 262 { 251 - #ifdef CONFIG_DEBUG_SG 252 - unsigned int i; 253 - 254 - for (i = 0; i < nents; i++) 255 - sgl[i].sg_magic = SG_MAGIC; 256 - #endif 257 263 sg_mark_end(&sgl[nents - 1]); 258 264 } 259 265
+2 -1
include/linux/skbuff.h
··· 3253 3253 int *peeked, int *off, int *err); 3254 3254 struct sk_buff *skb_recv_datagram(struct sock *sk, unsigned flags, int noblock, 3255 3255 int *err); 3256 - __poll_t datagram_poll_mask(struct socket *sock, __poll_t events); 3256 + __poll_t datagram_poll(struct file *file, struct socket *sock, 3257 + struct poll_table_struct *wait); 3257 3258 int skb_copy_datagram_iter(const struct sk_buff *from, int offset, 3258 3259 struct iov_iter *to, int size); 3259 3260 static inline int skb_copy_datagram_msg(const struct sk_buff *from, int offset,
+4
include/linux/slub_def.h
··· 155 155 156 156 #ifdef CONFIG_SYSFS 157 157 #define SLAB_SUPPORTS_SYSFS 158 + void sysfs_slab_unlink(struct kmem_cache *); 158 159 void sysfs_slab_release(struct kmem_cache *); 159 160 #else 161 + static inline void sysfs_slab_unlink(struct kmem_cache *s) 162 + { 163 + } 160 164 static inline void sysfs_slab_release(struct kmem_cache *s) 161 165 { 162 166 }
+4
include/linux/syscalls.h
··· 231 231 */ 232 232 #ifndef __SYSCALL_DEFINEx 233 233 #define __SYSCALL_DEFINEx(x, name, ...) \ 234 + __diag_push(); \ 235 + __diag_ignore(GCC, 8, "-Wattribute-alias", \ 236 + "Type aliasing is used to sanitize syscall arguments");\ 234 237 asmlinkage long sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)) \ 235 238 __attribute__((alias(__stringify(__se_sys##name)))); \ 236 239 ALLOW_ERROR_INJECTION(sys##name, ERRNO); \ ··· 246 243 __PROTECT(x, ret,__MAP(x,__SC_ARGS,__VA_ARGS__)); \ 247 244 return ret; \ 248 245 } \ 246 + __diag_pop(); \ 249 247 static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)) 250 248 #endif /* __SYSCALL_DEFINEx */ 251 249
+1 -1
include/net/bluetooth/bluetooth.h
··· 271 271 int flags); 272 272 int bt_sock_stream_recvmsg(struct socket *sock, struct msghdr *msg, 273 273 size_t len, int flags); 274 - __poll_t bt_sock_poll_mask(struct socket *sock, __poll_t events); 274 + __poll_t bt_sock_poll(struct file *file, struct socket *sock, poll_table *wait); 275 275 int bt_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg); 276 276 int bt_sock_wait_state(struct sock *sk, int state, unsigned long timeo); 277 277 int bt_sock_wait_ready(struct sock *sk, unsigned long flags);
+2
include/net/iucv/af_iucv.h
··· 153 153 atomic_t autobind_name; 154 154 }; 155 155 156 + __poll_t iucv_sock_poll(struct file *file, struct socket *sock, 157 + poll_table *wait); 156 158 void iucv_sock_link(struct iucv_sock_list *l, struct sock *s); 157 159 void iucv_sock_unlink(struct iucv_sock_list *l, struct sock *s); 158 160 void iucv_accept_enqueue(struct sock *parent, struct sock *sk);
+1
include/net/net_namespace.h
··· 128 128 #endif 129 129 #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6) 130 130 struct netns_nf_frag nf_frag; 131 + struct ctl_table_header *nf_frag_frags_hdr; 131 132 #endif 132 133 struct sock *nfnl; 133 134 struct sock *nfnl_stash;
-1
include/net/netns/ipv6.h
··· 109 109 110 110 #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6) 111 111 struct netns_nf_frag { 112 - struct netns_sysctl_ipv6 sysctl; 113 112 struct netns_frags frags; 114 113 }; 115 114 #endif
+5
include/net/pkt_cls.h
··· 113 113 { 114 114 } 115 115 116 + static inline bool tcf_block_shared(struct tcf_block *block) 117 + { 118 + return false; 119 + } 120 + 116 121 static inline struct Qdisc *tcf_block_q(struct tcf_block *block) 117 122 { 118 123 return NULL;
+2 -1
include/net/sctp/sctp.h
··· 109 109 int sctp_inet_listen(struct socket *sock, int backlog); 110 110 void sctp_write_space(struct sock *sk); 111 111 void sctp_data_ready(struct sock *sk); 112 - __poll_t sctp_poll_mask(struct socket *sock, __poll_t events); 112 + __poll_t sctp_poll(struct file *file, struct socket *sock, 113 + poll_table *wait); 113 114 void sctp_sock_rfree(struct sk_buff *skb); 114 115 void sctp_copy_sock(struct sock *newsk, struct sock *sk, 115 116 struct sctp_association *asoc);
+2 -1
include/net/tcp.h
··· 388 388 void tcp_close(struct sock *sk, long timeout); 389 389 void tcp_init_sock(struct sock *sk); 390 390 void tcp_init_transfer(struct sock *sk, int bpf_op); 391 - __poll_t tcp_poll_mask(struct socket *sock, __poll_t events); 391 + __poll_t tcp_poll(struct file *file, struct socket *sock, 392 + struct poll_table_struct *wait); 392 393 int tcp_getsockopt(struct sock *sk, int level, int optname, 393 394 char __user *optval, int __user *optlen); 394 395 int tcp_setsockopt(struct sock *sk, int level, int optname,
+4 -2
include/net/tls.h
··· 109 109 110 110 struct strparser strp; 111 111 void (*saved_data_ready)(struct sock *sk); 112 - __poll_t (*sk_poll_mask)(struct socket *sock, __poll_t events); 112 + unsigned int (*sk_poll)(struct file *file, struct socket *sock, 113 + struct poll_table_struct *wait); 113 114 struct sk_buff *recv_pkt; 114 115 u8 control; 115 116 bool decrypted; ··· 225 224 void tls_sw_free_resources_rx(struct sock *sk); 226 225 int tls_sw_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, 227 226 int nonblock, int flags, int *addr_len); 228 - __poll_t tls_sw_poll_mask(struct socket *sock, __poll_t events); 227 + unsigned int tls_sw_poll(struct file *file, struct socket *sock, 228 + struct poll_table_struct *wait); 229 229 ssize_t tls_sw_splice_read(struct socket *sock, loff_t *ppos, 230 230 struct pipe_inode_info *pipe, 231 231 size_t len, unsigned int flags);
+1 -1
include/net/udp.h
··· 285 285 int udp_pre_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len); 286 286 int __udp_disconnect(struct sock *sk, int flags); 287 287 int udp_disconnect(struct sock *sk, int flags); 288 - __poll_t udp_poll_mask(struct socket *sock, __poll_t events); 288 + __poll_t udp_poll(struct file *file, struct socket *sock, poll_table *wait); 289 289 struct sk_buff *skb_udp_tunnel_segment(struct sk_buff *skb, 290 290 netdev_features_t features, 291 291 bool is_ipv6);
+4 -2
include/uapi/linux/aio_abi.h
··· 39 39 IOCB_CMD_PWRITE = 1, 40 40 IOCB_CMD_FSYNC = 2, 41 41 IOCB_CMD_FDSYNC = 3, 42 - /* 4 was the experimental IOCB_CMD_PREADX */ 43 - IOCB_CMD_POLL = 5, 42 + /* These two are experimental. 43 + * IOCB_CMD_PREADX = 4, 44 + * IOCB_CMD_POLL = 5, 45 + */ 44 46 IOCB_CMD_NOOP = 6, 45 47 IOCB_CMD_PREADV = 7, 46 48 IOCB_CMD_PWRITEV = 8,
+23 -5
include/uapi/linux/bpf.h
··· 1857 1857 * is resolved), the nexthop address is returned in ipv4_dst 1858 1858 * or ipv6_dst based on family, smac is set to mac address of 1859 1859 * egress device, dmac is set to nexthop mac address, rt_metric 1860 - * is set to metric from route (IPv4/IPv6 only). 1860 + * is set to metric from route (IPv4/IPv6 only), and ifindex 1861 + * is set to the device index of the nexthop from the FIB lookup. 1861 1862 * 1862 1863 * *plen* argument is the size of the passed in struct. 1863 1864 * *flags* argument can be a combination of one or more of the ··· 1874 1873 * *ctx* is either **struct xdp_md** for XDP programs or 1875 1874 * **struct sk_buff** tc cls_act programs. 1876 1875 * Return 1877 - * Egress device index on success, 0 if packet needs to continue 1878 - * up the stack for further processing or a negative error in case 1879 - * of failure. 1876 + * * < 0 if any input argument is invalid 1877 + * * 0 on success (packet is forwarded, nexthop neighbor exists) 1878 + * * > 0 one of **BPF_FIB_LKUP_RET_** codes explaining why the 1879 + * * packet is not forwarded or needs assist from full stack 1880 1880 * 1881 1881 * int bpf_sock_hash_update(struct bpf_sock_ops_kern *skops, struct bpf_map *map, void *key, u64 flags) 1882 1882 * Description ··· 2614 2612 #define BPF_FIB_LOOKUP_DIRECT BIT(0) 2615 2613 #define BPF_FIB_LOOKUP_OUTPUT BIT(1) 2616 2614 2615 + enum { 2616 + BPF_FIB_LKUP_RET_SUCCESS, /* lookup successful */ 2617 + BPF_FIB_LKUP_RET_BLACKHOLE, /* dest is blackholed; can be dropped */ 2618 + BPF_FIB_LKUP_RET_UNREACHABLE, /* dest is unreachable; can be dropped */ 2619 + BPF_FIB_LKUP_RET_PROHIBIT, /* dest not allowed; can be dropped */ 2620 + BPF_FIB_LKUP_RET_NOT_FWDED, /* packet is not forwarded */ 2621 + BPF_FIB_LKUP_RET_FWD_DISABLED, /* fwding is not enabled on ingress */ 2622 + BPF_FIB_LKUP_RET_UNSUPP_LWT, /* fwd requires encapsulation */ 2623 + BPF_FIB_LKUP_RET_NO_NEIGH, /* no neighbor entry for nh */ 2624 + BPF_FIB_LKUP_RET_FRAG_NEEDED, /* 
fragmentation required to fwd */ 2625 + }; 2626 + 2617 2627 struct bpf_fib_lookup { 2618 2628 /* input: network family for lookup (AF_INET, AF_INET6) 2619 2629 * output: network family of egress nexthop ··· 2639 2625 2640 2626 /* total length of packet from network header - used for MTU check */ 2641 2627 __u16 tot_len; 2642 - __u32 ifindex; /* L3 device index for lookup */ 2628 + 2629 + /* input: L3 device index for lookup 2630 + * output: device index from FIB lookup 2631 + */ 2632 + __u32 ifindex; 2643 2633 2644 2634 union { 2645 2635 /* inputs to lookup */
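Per the updated bpf_fib_lookup() return contract above, a caller sees < 0 for bad arguments, 0 when the packet can be forwarded, and a positive BPF_FIB_LKUP_RET_* code otherwise. A hedged sketch of how an XDP-style program might act on those codes (enum values mirror the uapi header; the XDP_* action values here are illustrative stand-ins, not the real uapi constants):

```c
#include <assert.h>

/* Mirrors the BPF_FIB_LKUP_RET_* enum added to the uapi header. */
enum {
	BPF_FIB_LKUP_RET_SUCCESS,
	BPF_FIB_LKUP_RET_BLACKHOLE,
	BPF_FIB_LKUP_RET_UNREACHABLE,
	BPF_FIB_LKUP_RET_PROHIBIT,
	BPF_FIB_LKUP_RET_NOT_FWDED,
	BPF_FIB_LKUP_RET_FWD_DISABLED,
	BPF_FIB_LKUP_RET_UNSUPP_LWT,
	BPF_FIB_LKUP_RET_NO_NEIGH,
	BPF_FIB_LKUP_RET_FRAG_NEEDED,
};

/* Illustrative action values only. */
enum { XDP_DROP = 1, XDP_PASS = 2, XDP_REDIRECT = 4 };

/* Routes that declare the destination droppable can be dropped in the
 * fast path; everything else not forwarded needs full-stack assist
 * (neighbor resolution, fragmentation, ICMP generation, ...). */
static int act_on_fib_result(int rc)
{
	switch (rc) {
	case BPF_FIB_LKUP_RET_SUCCESS:
		return XDP_REDIRECT; /* forward out the returned ifindex */
	case BPF_FIB_LKUP_RET_BLACKHOLE:
	case BPF_FIB_LKUP_RET_UNREACHABLE:
	case BPF_FIB_LKUP_RET_PROHIBIT:
		return XDP_DROP;
	default:
		return XDP_PASS;
	}
}
```

Note that ifindex is now documented as both input (lookup device) and output (nexthop device), which is what makes the SUCCESS path redirectable.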
+3 -1
include/uapi/linux/target_core_user.h
··· 44 44 #define TCMU_MAILBOX_VERSION 2 45 45 #define ALIGN_SIZE 64 /* Should be enough for most CPUs */ 46 46 #define TCMU_MAILBOX_FLAG_CAP_OOOC (1 << 0) /* Out-of-order completions */ 47 + #define TCMU_MAILBOX_FLAG_CAP_READ_LEN (1 << 1) /* Read data length */ 47 48 48 49 struct tcmu_mailbox { 49 50 __u16 version; ··· 72 71 __u16 cmd_id; 73 72 __u8 kflags; 74 73 #define TCMU_UFLAG_UNKNOWN_OP 0x1 74 + #define TCMU_UFLAG_READ_LEN 0x2 75 75 __u8 uflags; 76 76 77 77 } __packed; ··· 121 119 __u8 scsi_status; 122 120 __u8 __pad1; 123 121 __u16 __pad2; 124 - __u32 __pad3; 122 + __u32 read_len; 125 123 char sense_buffer[TCMU_SENSE_BUFFERSIZE]; 126 124 } rsp; 127 125 };
+3 -4
init/Kconfig
··· 1051 1051 depends on HAVE_LD_DEAD_CODE_DATA_ELIMINATION 1052 1052 depends on EXPERT 1053 1053 help 1054 - Select this if the architecture wants to do dead code and 1055 - data elimination with the linker by compiling with 1056 - -ffunction-sections -fdata-sections, and linking with 1057 - --gc-sections. 1054 + Enable this if you want to do dead code and data elimination with 1055 + the linker by compiling with -ffunction-sections -fdata-sections, 1056 + and linking with --gc-sections. 1058 1057 1059 1058 This can reduce on disk and in-memory size of the kernel 1060 1059 code and static data, particularly for small configs and
+54
kernel/bpf/cgroup.c
··· 428 428 return ret; 429 429 } 430 430 431 + int cgroup_bpf_prog_attach(const union bpf_attr *attr, 432 + enum bpf_prog_type ptype, struct bpf_prog *prog) 433 + { 434 + struct cgroup *cgrp; 435 + int ret; 436 + 437 + cgrp = cgroup_get_from_fd(attr->target_fd); 438 + if (IS_ERR(cgrp)) 439 + return PTR_ERR(cgrp); 440 + 441 + ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type, 442 + attr->attach_flags); 443 + cgroup_put(cgrp); 444 + return ret; 445 + } 446 + 447 + int cgroup_bpf_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype) 448 + { 449 + struct bpf_prog *prog; 450 + struct cgroup *cgrp; 451 + int ret; 452 + 453 + cgrp = cgroup_get_from_fd(attr->target_fd); 454 + if (IS_ERR(cgrp)) 455 + return PTR_ERR(cgrp); 456 + 457 + prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype); 458 + if (IS_ERR(prog)) 459 + prog = NULL; 460 + 461 + ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0); 462 + if (prog) 463 + bpf_prog_put(prog); 464 + 465 + cgroup_put(cgrp); 466 + return ret; 467 + } 468 + 469 + int cgroup_bpf_prog_query(const union bpf_attr *attr, 470 + union bpf_attr __user *uattr) 471 + { 472 + struct cgroup *cgrp; 473 + int ret; 474 + 475 + cgrp = cgroup_get_from_fd(attr->query.target_fd); 476 + if (IS_ERR(cgrp)) 477 + return PTR_ERR(cgrp); 478 + 479 + ret = cgroup_bpf_query(cgrp, attr, uattr); 480 + 481 + cgroup_put(cgrp); 482 + return ret; 483 + } 484 + 431 485 /** 432 486 * __cgroup_bpf_run_filter_skb() - Run a program for packet filtering 433 487 * @sk: The socket sending or receiving traffic
-28
kernel/bpf/core.c
··· 598 598 bpf_fill_ill_insns(hdr, size); 599 599 600 600 hdr->pages = size / PAGE_SIZE; 601 - hdr->locked = 0; 602 - 603 601 hole = min_t(unsigned int, size - (proglen + sizeof(*hdr)), 604 602 PAGE_SIZE - sizeof(*hdr)); 605 603 start = (get_random_int() % hole) & ~(alignment - 1); ··· 1448 1450 return 0; 1449 1451 } 1450 1452 1451 - static int bpf_prog_check_pages_ro_locked(const struct bpf_prog *fp) 1452 - { 1453 - #ifdef CONFIG_ARCH_HAS_SET_MEMORY 1454 - int i, err; 1455 - 1456 - for (i = 0; i < fp->aux->func_cnt; i++) { 1457 - err = bpf_prog_check_pages_ro_single(fp->aux->func[i]); 1458 - if (err) 1459 - return err; 1460 - } 1461 - 1462 - return bpf_prog_check_pages_ro_single(fp); 1463 - #endif 1464 - return 0; 1465 - } 1466 - 1467 1453 static void bpf_prog_select_func(struct bpf_prog *fp) 1468 1454 { 1469 1455 #ifndef CONFIG_BPF_JIT_ALWAYS_ON ··· 1506 1524 * all eBPF JITs might immediately support all features. 1507 1525 */ 1508 1526 *err = bpf_check_tail_call(fp); 1509 - if (*err) 1510 - return fp; 1511 1527 1512 - /* Checkpoint: at this point onwards any cBPF -> eBPF or 1513 - * native eBPF program is read-only. If we failed to change 1514 - * the page attributes (e.g. allocation failure from 1515 - * splitting large pages), then reject the whole program 1516 - * in order to guarantee not ending up with any W+X pages 1517 - * from BPF side in kernel. 1518 - */ 1519 - *err = bpf_prog_check_pages_ro_locked(fp); 1520 1528 return fp; 1521 1529 } 1522 1530 EXPORT_SYMBOL_GPL(bpf_prog_select_runtime);
+184 -70
kernel/bpf/sockmap.c
··· 72 72 u32 n_buckets; 73 73 u32 elem_size; 74 74 struct bpf_sock_progs progs; 75 + struct rcu_head rcu; 75 76 }; 76 77 77 78 struct htab_elem { ··· 90 89 struct smap_psock_map_entry { 91 90 struct list_head list; 92 91 struct sock **entry; 93 - struct htab_elem *hash_link; 94 - struct bpf_htab *htab; 92 + struct htab_elem __rcu *hash_link; 93 + struct bpf_htab __rcu *htab; 95 94 }; 96 95 97 96 struct smap_psock { ··· 121 120 struct bpf_prog *bpf_parse; 122 121 struct bpf_prog *bpf_verdict; 123 122 struct list_head maps; 123 + spinlock_t maps_lock; 124 124 125 125 /* Back reference used when sock callback trigger sockmap operations */ 126 126 struct sock *sock; ··· 142 140 static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size); 143 141 static int bpf_tcp_sendpage(struct sock *sk, struct page *page, 144 142 int offset, size_t size, int flags); 143 + static void bpf_tcp_close(struct sock *sk, long timeout); 145 144 146 145 static inline struct smap_psock *smap_psock_sk(const struct sock *sk) 147 146 { ··· 164 161 return !empty; 165 162 } 166 163 167 - static struct proto tcp_bpf_proto; 164 + enum { 165 + SOCKMAP_IPV4, 166 + SOCKMAP_IPV6, 167 + SOCKMAP_NUM_PROTS, 168 + }; 169 + 170 + enum { 171 + SOCKMAP_BASE, 172 + SOCKMAP_TX, 173 + SOCKMAP_NUM_CONFIGS, 174 + }; 175 + 176 + static struct proto *saved_tcpv6_prot __read_mostly; 177 + static DEFINE_SPINLOCK(tcpv6_prot_lock); 178 + static struct proto bpf_tcp_prots[SOCKMAP_NUM_PROTS][SOCKMAP_NUM_CONFIGS]; 179 + static void build_protos(struct proto prot[SOCKMAP_NUM_CONFIGS], 180 + struct proto *base) 181 + { 182 + prot[SOCKMAP_BASE] = *base; 183 + prot[SOCKMAP_BASE].close = bpf_tcp_close; 184 + prot[SOCKMAP_BASE].recvmsg = bpf_tcp_recvmsg; 185 + prot[SOCKMAP_BASE].stream_memory_read = bpf_tcp_stream_read; 186 + 187 + prot[SOCKMAP_TX] = prot[SOCKMAP_BASE]; 188 + prot[SOCKMAP_TX].sendmsg = bpf_tcp_sendmsg; 189 + prot[SOCKMAP_TX].sendpage = bpf_tcp_sendpage; 190 + } 191 + 192 + static void 
update_sk_prot(struct sock *sk, struct smap_psock *psock) 193 + { 194 + int family = sk->sk_family == AF_INET6 ? SOCKMAP_IPV6 : SOCKMAP_IPV4; 195 + int conf = psock->bpf_tx_msg ? SOCKMAP_TX : SOCKMAP_BASE; 196 + 197 + sk->sk_prot = &bpf_tcp_prots[family][conf]; 198 + } 199 + 168 200 static int bpf_tcp_init(struct sock *sk) 169 201 { 170 202 struct smap_psock *psock; ··· 219 181 psock->save_close = sk->sk_prot->close; 220 182 psock->sk_proto = sk->sk_prot; 221 183 222 - if (psock->bpf_tx_msg) { 223 - tcp_bpf_proto.sendmsg = bpf_tcp_sendmsg; 224 - tcp_bpf_proto.sendpage = bpf_tcp_sendpage; 225 - tcp_bpf_proto.recvmsg = bpf_tcp_recvmsg; 226 - tcp_bpf_proto.stream_memory_read = bpf_tcp_stream_read; 184 + /* Build IPv6 sockmap whenever the address of tcpv6_prot changes */ 185 + if (sk->sk_family == AF_INET6 && 186 + unlikely(sk->sk_prot != smp_load_acquire(&saved_tcpv6_prot))) { 187 + spin_lock_bh(&tcpv6_prot_lock); 188 + if (likely(sk->sk_prot != saved_tcpv6_prot)) { 189 + build_protos(bpf_tcp_prots[SOCKMAP_IPV6], sk->sk_prot); 190 + smp_store_release(&saved_tcpv6_prot, sk->sk_prot); 191 + } 192 + spin_unlock_bh(&tcpv6_prot_lock); 227 193 } 228 - 229 - sk->sk_prot = &tcp_bpf_proto; 194 + update_sk_prot(sk, psock); 230 195 rcu_read_unlock(); 231 196 return 0; 232 197 } ··· 260 219 rcu_read_unlock(); 261 220 } 262 221 222 + static struct htab_elem *lookup_elem_raw(struct hlist_head *head, 223 + u32 hash, void *key, u32 key_size) 224 + { 225 + struct htab_elem *l; 226 + 227 + hlist_for_each_entry_rcu(l, head, hash_node) { 228 + if (l->hash == hash && !memcmp(&l->key, key, key_size)) 229 + return l; 230 + } 231 + 232 + return NULL; 233 + } 234 + 235 + static inline struct bucket *__select_bucket(struct bpf_htab *htab, u32 hash) 236 + { 237 + return &htab->buckets[hash & (htab->n_buckets - 1)]; 238 + } 239 + 240 + static inline struct hlist_head *select_bucket(struct bpf_htab *htab, u32 hash) 241 + { 242 + return &__select_bucket(htab, hash)->head; 243 + } 244 + 263 245 
static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l) 264 246 { 265 247 atomic_dec(&htab->count); 266 248 kfree_rcu(l, rcu); 267 249 } 268 250 251 + static struct smap_psock_map_entry *psock_map_pop(struct sock *sk, 252 + struct smap_psock *psock) 253 + { 254 + struct smap_psock_map_entry *e; 255 + 256 + spin_lock_bh(&psock->maps_lock); 257 + e = list_first_entry_or_null(&psock->maps, 258 + struct smap_psock_map_entry, 259 + list); 260 + if (e) 261 + list_del(&e->list); 262 + spin_unlock_bh(&psock->maps_lock); 263 + return e; 264 + } 265 + 269 266 static void bpf_tcp_close(struct sock *sk, long timeout) 270 267 { 271 268 void (*close_fun)(struct sock *sk, long timeout); 272 - struct smap_psock_map_entry *e, *tmp; 269 + struct smap_psock_map_entry *e; 273 270 struct sk_msg_buff *md, *mtmp; 274 271 struct smap_psock *psock; 275 272 struct sock *osk; ··· 326 247 */ 327 248 close_fun = psock->save_close; 328 249 329 - write_lock_bh(&sk->sk_callback_lock); 330 250 if (psock->cork) { 331 251 free_start_sg(psock->sock, psock->cork); 332 252 kfree(psock->cork); ··· 338 260 kfree(md); 339 261 } 340 262 341 - list_for_each_entry_safe(e, tmp, &psock->maps, list) { 263 + e = psock_map_pop(sk, psock); 264 + while (e) { 342 265 if (e->entry) { 343 266 osk = cmpxchg(e->entry, sk, NULL); 344 267 if (osk == sk) { 345 - list_del(&e->list); 346 268 smap_release_sock(psock, sk); 347 269 } 348 270 } else { 349 - hlist_del_rcu(&e->hash_link->hash_node); 350 - smap_release_sock(psock, e->hash_link->sk); 351 - free_htab_elem(e->htab, e->hash_link); 271 + struct htab_elem *link = rcu_dereference(e->hash_link); 272 + struct bpf_htab *htab = rcu_dereference(e->htab); 273 + struct hlist_head *head; 274 + struct htab_elem *l; 275 + struct bucket *b; 276 + 277 + b = __select_bucket(htab, link->hash); 278 + head = &b->head; 279 + raw_spin_lock_bh(&b->lock); 280 + l = lookup_elem_raw(head, 281 + link->hash, link->key, 282 + htab->map.key_size); 283 + /* If another thread deleted 
this object skip deletion. 284 + * The refcnt on psock may or may not be zero. 285 + */ 286 + if (l) { 287 + hlist_del_rcu(&link->hash_node); 288 + smap_release_sock(psock, link->sk); 289 + free_htab_elem(htab, link); 290 + } 291 + raw_spin_unlock_bh(&b->lock); 352 292 } 293 + e = psock_map_pop(sk, psock); 353 294 } 354 - write_unlock_bh(&sk->sk_callback_lock); 355 295 rcu_read_unlock(); 356 296 close_fun(sk, timeout); 357 297 } ··· 1207 1111 1208 1112 static int bpf_tcp_ulp_register(void) 1209 1113 { 1210 - tcp_bpf_proto = tcp_prot; 1211 - tcp_bpf_proto.close = bpf_tcp_close; 1114 + build_protos(bpf_tcp_prots[SOCKMAP_IPV4], &tcp_prot); 1212 1115 /* Once BPF TX ULP is registered it is never unregistered. It 1213 1116 * will be in the ULP list for the lifetime of the system. Doing 1214 1117 * duplicate registers is not a problem. ··· 1452 1357 { 1453 1358 if (refcount_dec_and_test(&psock->refcnt)) { 1454 1359 tcp_cleanup_ulp(sock); 1360 + write_lock_bh(&sock->sk_callback_lock); 1455 1361 smap_stop_sock(psock, sock); 1362 + write_unlock_bh(&sock->sk_callback_lock); 1456 1363 clear_bit(SMAP_TX_RUNNING, &psock->state); 1457 1364 rcu_assign_sk_user_data(sock, NULL); 1458 1365 call_rcu_sched(&psock->rcu, smap_destroy_psock); ··· 1605 1508 INIT_LIST_HEAD(&psock->maps); 1606 1509 INIT_LIST_HEAD(&psock->ingress); 1607 1510 refcount_set(&psock->refcnt, 1); 1511 + spin_lock_init(&psock->maps_lock); 1608 1512 1609 1513 rcu_assign_sk_user_data(sock, psock); 1610 1514 sock_hold(sock); ··· 1662 1564 return ERR_PTR(err); 1663 1565 } 1664 1566 1665 - static void smap_list_remove(struct smap_psock *psock, 1666 - struct sock **entry, 1667 - struct htab_elem *hash_link) 1567 + static void smap_list_map_remove(struct smap_psock *psock, 1568 + struct sock **entry) 1668 1569 { 1669 1570 struct smap_psock_map_entry *e, *tmp; 1670 1571 1572 + spin_lock_bh(&psock->maps_lock); 1671 1573 list_for_each_entry_safe(e, tmp, &psock->maps, list) { 1672 - if (e->entry == entry || e->hash_link == 
hash_link) { 1574 + if (e->entry == entry) 1673 1575 list_del(&e->list); 1674 - break; 1675 - } 1676 1576 } 1577 + spin_unlock_bh(&psock->maps_lock); 1578 + } 1579 + 1580 + static void smap_list_hash_remove(struct smap_psock *psock, 1581 + struct htab_elem *hash_link) 1582 + { 1583 + struct smap_psock_map_entry *e, *tmp; 1584 + 1585 + spin_lock_bh(&psock->maps_lock); 1586 + list_for_each_entry_safe(e, tmp, &psock->maps, list) { 1587 + struct htab_elem *c = rcu_dereference(e->hash_link); 1588 + 1589 + if (c == hash_link) 1590 + list_del(&e->list); 1591 + } 1592 + spin_unlock_bh(&psock->maps_lock); 1677 1593 } 1678 1594 1679 1595 static void sock_map_free(struct bpf_map *map) ··· 1713 1601 if (!sock) 1714 1602 continue; 1715 1603 1716 - write_lock_bh(&sock->sk_callback_lock); 1717 1604 psock = smap_psock_sk(sock); 1718 1605 /* This check handles a racing sock event that can get the 1719 1606 * sk_callback_lock before this case but after xchg happens ··· 1720 1609 * to be null and queued for garbage collection. 
1721 1610 */ 1722 1611 if (likely(psock)) { 1723 - smap_list_remove(psock, &stab->sock_map[i], NULL); 1612 + smap_list_map_remove(psock, &stab->sock_map[i]); 1724 1613 smap_release_sock(psock, sock); 1725 1614 } 1726 - write_unlock_bh(&sock->sk_callback_lock); 1727 1615 } 1728 1616 rcu_read_unlock(); 1729 1617 ··· 1771 1661 if (!sock) 1772 1662 return -EINVAL; 1773 1663 1774 - write_lock_bh(&sock->sk_callback_lock); 1775 1664 psock = smap_psock_sk(sock); 1776 1665 if (!psock) 1777 1666 goto out; 1778 1667 1779 1668 if (psock->bpf_parse) 1780 1669 smap_stop_sock(psock, sock); 1781 - smap_list_remove(psock, &stab->sock_map[k], NULL); 1670 + smap_list_map_remove(psock, &stab->sock_map[k]); 1782 1671 smap_release_sock(psock, sock); 1783 1672 out: 1784 - write_unlock_bh(&sock->sk_callback_lock); 1785 1673 return 0; 1786 1674 } 1787 1675 ··· 1860 1752 } 1861 1753 } 1862 1754 1863 - write_lock_bh(&sock->sk_callback_lock); 1864 1755 psock = smap_psock_sk(sock); 1865 1756 1866 1757 /* 2. Do not allow inheriting programs if psock exists and has ··· 1916 1809 if (err) 1917 1810 goto out_free; 1918 1811 smap_init_progs(psock, verdict, parse); 1812 + write_lock_bh(&sock->sk_callback_lock); 1919 1813 smap_start_sock(psock, sock); 1814 + write_unlock_bh(&sock->sk_callback_lock); 1920 1815 } 1921 1816 1922 1817 /* 4. 
Place psock in sockmap for use and stop any programs on ··· 1928 1819 */ 1929 1820 if (map_link) { 1930 1821 e->entry = map_link; 1822 + spin_lock_bh(&psock->maps_lock); 1931 1823 list_add_tail(&e->list, &psock->maps); 1824 + spin_unlock_bh(&psock->maps_lock); 1932 1825 } 1933 - write_unlock_bh(&sock->sk_callback_lock); 1934 1826 return err; 1935 1827 out_free: 1936 1828 smap_release_sock(psock, sock); ··· 1942 1832 } 1943 1833 if (tx_msg) 1944 1834 bpf_prog_put(tx_msg); 1945 - write_unlock_bh(&sock->sk_callback_lock); 1946 1835 kfree(e); 1947 1836 return err; 1948 1837 } ··· 1978 1869 if (osock) { 1979 1870 struct smap_psock *opsock = smap_psock_sk(osock); 1980 1871 1981 - write_lock_bh(&osock->sk_callback_lock); 1982 - smap_list_remove(opsock, &stab->sock_map[i], NULL); 1872 + smap_list_map_remove(opsock, &stab->sock_map[i]); 1983 1873 smap_release_sock(opsock, osock); 1984 - write_unlock_bh(&osock->sk_callback_lock); 1985 1874 } 1986 1875 out: 1987 1876 return err; ··· 2020 1913 bpf_prog_put(orig); 2021 1914 2022 1915 return 0; 1916 + } 1917 + 1918 + int sockmap_get_from_fd(const union bpf_attr *attr, int type, 1919 + struct bpf_prog *prog) 1920 + { 1921 + int ufd = attr->target_fd; 1922 + struct bpf_map *map; 1923 + struct fd f; 1924 + int err; 1925 + 1926 + f = fdget(ufd); 1927 + map = __bpf_map_get(f); 1928 + if (IS_ERR(map)) 1929 + return PTR_ERR(map); 1930 + 1931 + err = sock_map_prog(map, prog, attr->attach_type); 1932 + fdput(f); 1933 + return err; 2023 1934 } 2024 1935 2025 1936 static void *sock_map_lookup(struct bpf_map *map, void *key) ··· 2168 2043 return ERR_PTR(err); 2169 2044 } 2170 2045 2171 - static inline struct bucket *__select_bucket(struct bpf_htab *htab, u32 hash) 2046 + static void __bpf_htab_free(struct rcu_head *rcu) 2172 2047 { 2173 - return &htab->buckets[hash & (htab->n_buckets - 1)]; 2174 - } 2048 + struct bpf_htab *htab; 2175 2049 2176 - static inline struct hlist_head *select_bucket(struct bpf_htab *htab, u32 hash) 2177 - { 2178 - 
return &__select_bucket(htab, hash)->head; 2050 + htab = container_of(rcu, struct bpf_htab, rcu); 2051 + bpf_map_area_free(htab->buckets); 2052 + kfree(htab); 2179 2053 } 2180 2054 2181 2055 static void sock_hash_free(struct bpf_map *map) ··· 2193 2069 */ 2194 2070 rcu_read_lock(); 2195 2071 for (i = 0; i < htab->n_buckets; i++) { 2196 - struct hlist_head *head = select_bucket(htab, i); 2072 + struct bucket *b = __select_bucket(htab, i); 2073 + struct hlist_head *head; 2197 2074 struct hlist_node *n; 2198 2075 struct htab_elem *l; 2199 2076 2077 + raw_spin_lock_bh(&b->lock); 2078 + head = &b->head; 2200 2079 hlist_for_each_entry_safe(l, n, head, hash_node) { 2201 2080 struct sock *sock = l->sk; 2202 2081 struct smap_psock *psock; 2203 2082 2204 2083 hlist_del_rcu(&l->hash_node); 2205 - write_lock_bh(&sock->sk_callback_lock); 2206 2084 psock = smap_psock_sk(sock); 2207 2085 /* This check handles a racing sock event that can get 2208 2086 * the sk_callback_lock before this case but after xchg ··· 2212 2086 * (psock) to be null and queued for garbage collection. 
2213 2087 */ 2214 2088 if (likely(psock)) { 2215 - smap_list_remove(psock, NULL, l); 2089 + smap_list_hash_remove(psock, l); 2216 2090 smap_release_sock(psock, sock); 2217 2091 } 2218 - write_unlock_bh(&sock->sk_callback_lock); 2219 - kfree(l); 2092 + free_htab_elem(htab, l); 2220 2093 } 2094 + raw_spin_unlock_bh(&b->lock); 2221 2095 } 2222 2096 rcu_read_unlock(); 2223 - bpf_map_area_free(htab->buckets); 2224 - kfree(htab); 2097 + call_rcu(&htab->rcu, __bpf_htab_free); 2225 2098 } 2226 2099 2227 2100 static struct htab_elem *alloc_sock_hash_elem(struct bpf_htab *htab, ··· 2245 2120 l_new->sk = sk; 2246 2121 l_new->hash = hash; 2247 2122 return l_new; 2248 - } 2249 - 2250 - static struct htab_elem *lookup_elem_raw(struct hlist_head *head, 2251 - u32 hash, void *key, u32 key_size) 2252 - { 2253 - struct htab_elem *l; 2254 - 2255 - hlist_for_each_entry_rcu(l, head, hash_node) { 2256 - if (l->hash == hash && !memcmp(&l->key, key, key_size)) 2257 - return l; 2258 - } 2259 - 2260 - return NULL; 2261 2123 } 2262 2124 2263 2125 static inline u32 htab_map_hash(const void *key, u32 key_len) ··· 2366 2254 goto bucket_err; 2367 2255 } 2368 2256 2369 - e->hash_link = l_new; 2370 - e->htab = container_of(map, struct bpf_htab, map); 2257 + rcu_assign_pointer(e->hash_link, l_new); 2258 + rcu_assign_pointer(e->htab, 2259 + container_of(map, struct bpf_htab, map)); 2260 + spin_lock_bh(&psock->maps_lock); 2371 2261 list_add_tail(&e->list, &psock->maps); 2262 + spin_unlock_bh(&psock->maps_lock); 2372 2263 2373 2264 /* add new element to the head of the list, so that 2374 2265 * concurrent search will find it before old elem ··· 2381 2266 psock = smap_psock_sk(l_old->sk); 2382 2267 2383 2268 hlist_del_rcu(&l_old->hash_node); 2384 - smap_list_remove(psock, NULL, l_old); 2269 + smap_list_hash_remove(psock, l_old); 2385 2270 smap_release_sock(psock, l_old->sk); 2386 2271 free_htab_elem(htab, l_old); 2387 2272 } ··· 2441 2326 struct smap_psock *psock; 2442 2327 2443 2328 
hlist_del_rcu(&l->hash_node); 2444 - write_lock_bh(&sock->sk_callback_lock); 2445 2329 psock = smap_psock_sk(sock); 2446 2330 /* This check handles a racing sock event that can get the 2447 2331 * sk_callback_lock before this case but after xchg happens ··· 2448 2334 * to be null and queued for garbage collection. 2449 2335 */ 2450 2336 if (likely(psock)) { 2451 - smap_list_remove(psock, NULL, l); 2337 + smap_list_hash_remove(psock, l); 2452 2338 smap_release_sock(psock, sock); 2453 2339 } 2454 - write_unlock_bh(&sock->sk_callback_lock); 2455 2340 free_htab_elem(htab, l); 2456 2341 ret = 0; 2457 2342 } ··· 2496 2383 .map_get_next_key = sock_hash_get_next_key, 2497 2384 .map_update_elem = sock_hash_update_elem, 2498 2385 .map_delete_elem = sock_hash_delete_elem, 2386 + .map_release_uref = sock_map_release, 2499 2387 }; 2500 2388 2501 2389 BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock,
+21 -78
kernel/bpf/syscall.c
··· 1483 1483 return err; 1484 1484 } 1485 1485 1486 - #ifdef CONFIG_CGROUP_BPF 1487 - 1488 1486 static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog, 1489 1487 enum bpf_attach_type attach_type) 1490 1488 { ··· 1497 1499 1498 1500 #define BPF_PROG_ATTACH_LAST_FIELD attach_flags 1499 1501 1500 - static int sockmap_get_from_fd(const union bpf_attr *attr, 1501 - int type, bool attach) 1502 - { 1503 - struct bpf_prog *prog = NULL; 1504 - int ufd = attr->target_fd; 1505 - struct bpf_map *map; 1506 - struct fd f; 1507 - int err; 1508 - 1509 - f = fdget(ufd); 1510 - map = __bpf_map_get(f); 1511 - if (IS_ERR(map)) 1512 - return PTR_ERR(map); 1513 - 1514 - if (attach) { 1515 - prog = bpf_prog_get_type(attr->attach_bpf_fd, type); 1516 - if (IS_ERR(prog)) { 1517 - fdput(f); 1518 - return PTR_ERR(prog); 1519 - } 1520 - } 1521 - 1522 - err = sock_map_prog(map, prog, attr->attach_type); 1523 - if (err) { 1524 - fdput(f); 1525 - if (prog) 1526 - bpf_prog_put(prog); 1527 - return err; 1528 - } 1529 - 1530 - fdput(f); 1531 - return 0; 1532 - } 1533 - 1534 1502 #define BPF_F_ATTACH_MASK \ 1535 1503 (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI) 1536 1504 ··· 1504 1540 { 1505 1541 enum bpf_prog_type ptype; 1506 1542 struct bpf_prog *prog; 1507 - struct cgroup *cgrp; 1508 1543 int ret; 1509 1544 1510 1545 if (!capable(CAP_NET_ADMIN)) ··· 1540 1577 ptype = BPF_PROG_TYPE_CGROUP_DEVICE; 1541 1578 break; 1542 1579 case BPF_SK_MSG_VERDICT: 1543 - return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, true); 1580 + ptype = BPF_PROG_TYPE_SK_MSG; 1581 + break; 1544 1582 case BPF_SK_SKB_STREAM_PARSER: 1545 1583 case BPF_SK_SKB_STREAM_VERDICT: 1546 - return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, true); 1584 + ptype = BPF_PROG_TYPE_SK_SKB; 1585 + break; 1547 1586 case BPF_LIRC_MODE2: 1548 - return lirc_prog_attach(attr); 1587 + ptype = BPF_PROG_TYPE_LIRC_MODE2; 1588 + break; 1549 1589 default: 1550 1590 return -EINVAL; 1551 1591 } ··· 1562 1596 return -EINVAL; 1563 1597 
} 1564 1598 1565 - cgrp = cgroup_get_from_fd(attr->target_fd); 1566 - if (IS_ERR(cgrp)) { 1567 - bpf_prog_put(prog); 1568 - return PTR_ERR(cgrp); 1599 + switch (ptype) { 1600 + case BPF_PROG_TYPE_SK_SKB: 1601 + case BPF_PROG_TYPE_SK_MSG: 1602 + ret = sockmap_get_from_fd(attr, ptype, prog); 1603 + break; 1604 + case BPF_PROG_TYPE_LIRC_MODE2: 1605 + ret = lirc_prog_attach(attr, prog); 1606 + break; 1607 + default: 1608 + ret = cgroup_bpf_prog_attach(attr, ptype, prog); 1569 1609 } 1570 1610 1571 - ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type, 1572 - attr->attach_flags); 1573 1611 if (ret) 1574 1612 bpf_prog_put(prog); 1575 - cgroup_put(cgrp); 1576 - 1577 1613 return ret; 1578 1614 } 1579 1615 ··· 1584 1616 static int bpf_prog_detach(const union bpf_attr *attr) 1585 1617 { 1586 1618 enum bpf_prog_type ptype; 1587 - struct bpf_prog *prog; 1588 - struct cgroup *cgrp; 1589 - int ret; 1590 1619 1591 1620 if (!capable(CAP_NET_ADMIN)) 1592 1621 return -EPERM; ··· 1616 1651 ptype = BPF_PROG_TYPE_CGROUP_DEVICE; 1617 1652 break; 1618 1653 case BPF_SK_MSG_VERDICT: 1619 - return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, false); 1654 + return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, NULL); 1620 1655 case BPF_SK_SKB_STREAM_PARSER: 1621 1656 case BPF_SK_SKB_STREAM_VERDICT: 1622 - return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, false); 1657 + return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, NULL); 1623 1658 case BPF_LIRC_MODE2: 1624 1659 return lirc_prog_detach(attr); 1625 1660 default: 1626 1661 return -EINVAL; 1627 1662 } 1628 1663 1629 - cgrp = cgroup_get_from_fd(attr->target_fd); 1630 - if (IS_ERR(cgrp)) 1631 - return PTR_ERR(cgrp); 1632 - 1633 - prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype); 1634 - if (IS_ERR(prog)) 1635 - prog = NULL; 1636 - 1637 - ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0); 1638 - if (prog) 1639 - bpf_prog_put(prog); 1640 - cgroup_put(cgrp); 1641 - return ret; 1664 + return cgroup_bpf_prog_detach(attr, 
ptype); 1642 1665 } 1643 1666 1644 1667 #define BPF_PROG_QUERY_LAST_FIELD query.prog_cnt ··· 1634 1681 static int bpf_prog_query(const union bpf_attr *attr, 1635 1682 union bpf_attr __user *uattr) 1636 1683 { 1637 - struct cgroup *cgrp; 1638 - int ret; 1639 - 1640 1684 if (!capable(CAP_NET_ADMIN)) 1641 1685 return -EPERM; 1642 1686 if (CHECK_ATTR(BPF_PROG_QUERY)) ··· 1661 1711 default: 1662 1712 return -EINVAL; 1663 1713 } 1664 - cgrp = cgroup_get_from_fd(attr->query.target_fd); 1665 - if (IS_ERR(cgrp)) 1666 - return PTR_ERR(cgrp); 1667 - ret = cgroup_bpf_query(cgrp, attr, uattr); 1668 - cgroup_put(cgrp); 1669 - return ret; 1714 + 1715 + return cgroup_bpf_prog_query(attr, uattr); 1670 1716 } 1671 - #endif /* CONFIG_CGROUP_BPF */ 1672 1717 1673 1718 #define BPF_PROG_TEST_RUN_LAST_FIELD test.duration 1674 1719 ··· 2310 2365 case BPF_OBJ_GET: 2311 2366 err = bpf_obj_get(&attr); 2312 2367 break; 2313 - #ifdef CONFIG_CGROUP_BPF 2314 2368 case BPF_PROG_ATTACH: 2315 2369 err = bpf_prog_attach(&attr); 2316 2370 break; ··· 2319 2375 case BPF_PROG_QUERY: 2320 2376 err = bpf_prog_query(&attr, uattr); 2321 2377 break; 2322 - #endif 2323 2378 case BPF_PROG_TEST_RUN: 2324 2379 err = bpf_prog_test_run(&attr, uattr); 2325 2380 break;
+1
kernel/dma/swiotlb.c
··· 1085 1085 .unmap_page = swiotlb_unmap_page, 1086 1086 .dma_supported = dma_direct_supported, 1087 1087 }; 1088 + EXPORT_SYMBOL(swiotlb_dma_ops);
+1 -1
kernel/events/core.c
··· 6482 6482 data->phys_addr = perf_virt_to_phys(data->addr); 6483 6483 } 6484 6484 6485 - static void __always_inline 6485 + static __always_inline void 6486 6486 __perf_event_output(struct perf_event *event, 6487 6487 struct perf_sample_data *data, 6488 6488 struct pt_regs *regs,
+1
lib/Kconfig.kasan
··· 6 6 config KASAN 7 7 bool "KASan: runtime memory debugger" 8 8 depends on SLUB || (SLAB && !DEBUG_SLAB) 9 + select SLUB_DEBUG if SLUB 9 10 select CONSTRUCTORS 10 11 select STACKDEPOT 11 12 help
+1 -1
lib/percpu_ida.c
··· 141 141 spin_lock_irqsave(&tags->lock, flags); 142 142 143 143 /* Fastpath */ 144 - if (likely(tags->nr_free >= 0)) { 144 + if (likely(tags->nr_free)) { 145 145 tag = tags->freelist[--tags->nr_free]; 146 146 spin_unlock_irqrestore(&tags->lock, flags); 147 147 return tag;
-6
lib/scatterlist.c
··· 24 24 **/ 25 25 struct scatterlist *sg_next(struct scatterlist *sg) 26 26 { 27 - #ifdef CONFIG_DEBUG_SG 28 - BUG_ON(sg->sg_magic != SG_MAGIC); 29 - #endif 30 27 if (sg_is_last(sg)) 31 28 return NULL; 32 29 ··· 108 111 for_each_sg(sgl, sg, nents, i) 109 112 ret = sg; 110 113 111 - #ifdef CONFIG_DEBUG_SG 112 - BUG_ON(sgl[0].sg_magic != SG_MAGIC); 113 114 BUG_ON(!sg_is_last(ret)); 114 - #endif 115 115 return ret; 116 116 } 117 117 EXPORT_SYMBOL(sg_last);
+20
lib/test_bpf.c
··· 5282 5282 { /* Mainly checking JIT here. */ 5283 5283 "BPF_MAXINSNS: Ctx heavy transformations", 5284 5284 { }, 5285 + #if defined(CONFIG_BPF_JIT_ALWAYS_ON) && defined(CONFIG_S390) 5286 + CLASSIC | FLAG_EXPECTED_FAIL, 5287 + #else 5285 5288 CLASSIC, 5289 + #endif 5286 5290 { }, 5287 5291 { 5288 5292 { 1, !!(SKB_VLAN_TCI & VLAN_TAG_PRESENT) }, 5289 5293 { 10, !!(SKB_VLAN_TCI & VLAN_TAG_PRESENT) } 5290 5294 }, 5291 5295 .fill_helper = bpf_fill_maxinsns6, 5296 + .expected_errcode = -ENOTSUPP, 5292 5297 }, 5293 5298 { /* Mainly checking JIT here. */ 5294 5299 "BPF_MAXINSNS: Call heavy transformations", 5295 5300 { }, 5301 + #if defined(CONFIG_BPF_JIT_ALWAYS_ON) && defined(CONFIG_S390) 5302 + CLASSIC | FLAG_NO_DATA | FLAG_EXPECTED_FAIL, 5303 + #else 5296 5304 CLASSIC | FLAG_NO_DATA, 5305 + #endif 5297 5306 { }, 5298 5307 { { 1, 0 }, { 10, 0 } }, 5299 5308 .fill_helper = bpf_fill_maxinsns7, 5309 + .expected_errcode = -ENOTSUPP, 5300 5310 }, 5301 5311 { /* Mainly checking JIT here. */ 5302 5312 "BPF_MAXINSNS: Jump heavy test", ··· 5357 5347 { 5358 5348 "BPF_MAXINSNS: exec all MSH", 5359 5349 { }, 5350 + #if defined(CONFIG_BPF_JIT_ALWAYS_ON) && defined(CONFIG_S390) 5351 + CLASSIC | FLAG_EXPECTED_FAIL, 5352 + #else 5360 5353 CLASSIC, 5354 + #endif 5361 5355 { 0xfa, 0xfb, 0xfc, 0xfd, }, 5362 5356 { { 4, 0xababab83 } }, 5363 5357 .fill_helper = bpf_fill_maxinsns13, 5358 + .expected_errcode = -ENOTSUPP, 5364 5359 }, 5365 5360 { 5366 5361 "BPF_MAXINSNS: ld_abs+get_processor_id", 5367 5362 { }, 5363 + #if defined(CONFIG_BPF_JIT_ALWAYS_ON) && defined(CONFIG_S390) 5364 + CLASSIC | FLAG_EXPECTED_FAIL, 5365 + #else 5368 5366 CLASSIC, 5367 + #endif 5369 5368 { }, 5370 5369 { { 1, 0xbee } }, 5371 5370 .fill_helper = bpf_fill_ld_abs_get_processor_id, 5371 + .expected_errcode = -ENOTSUPP, 5372 5372 }, 5373 5373 /* 5374 5374 * LD_IND / LD_ABS on fragmented SKBs
-7
lib/test_printf.c
··· 260 260 { 261 261 int err; 262 262 263 - /* 264 - * Make sure crng is ready. Otherwise we get "(ptrval)" instead 265 - * of a hashed address when printing '%p' in plain_hash() and 266 - * plain_format(). 267 - */ 268 - wait_for_random_bytes(); 269 - 270 263 err = plain_hash(); 271 264 if (err) { 272 265 pr_warn("plain 'p' does not appear to be hashed\n");
+4
mm/slab_common.c
··· 567 567 list_del(&s->list); 568 568 569 569 if (s->flags & SLAB_TYPESAFE_BY_RCU) { 570 + #ifdef SLAB_SUPPORTS_SYSFS 571 + sysfs_slab_unlink(s); 572 + #endif 570 573 list_add_tail(&s->list, &slab_caches_to_rcu_destroy); 571 574 schedule_work(&slab_caches_to_rcu_destroy_work); 572 575 } else { 573 576 #ifdef SLAB_SUPPORTS_SYSFS 577 + sysfs_slab_unlink(s); 574 578 sysfs_slab_release(s); 575 579 #else 576 580 slab_kmem_cache_release(s);
+6 -1
mm/slub.c
··· 5667 5667 kset_unregister(s->memcg_kset); 5668 5668 #endif 5669 5669 kobject_uevent(&s->kobj, KOBJ_REMOVE); 5670 - kobject_del(&s->kobj); 5671 5670 out: 5672 5671 kobject_put(&s->kobj); 5673 5672 } ··· 5749 5750 5750 5751 kobject_get(&s->kobj); 5751 5752 schedule_work(&s->kobj_remove_work); 5753 + } 5754 + 5755 + void sysfs_slab_unlink(struct kmem_cache *s) 5756 + { 5757 + if (slab_state >= FULL) 5758 + kobject_del(&s->kobj); 5752 5759 } 5753 5760 5754 5761 void sysfs_slab_release(struct kmem_cache *s)
-2
mm/vmstat.c
··· 1796 1796 * to occur in the future. Keep on running the 1797 1797 * update worker thread. 1798 1798 */ 1799 - preempt_disable(); 1800 1799 queue_delayed_work_on(smp_processor_id(), mm_percpu_wq, 1801 1800 this_cpu_ptr(&vmstat_work), 1802 1801 round_jiffies_relative(sysctl_stat_interval)); 1803 - preempt_enable(); 1804 1802 } 1805 1803 } 1806 1804
+1 -1
net/8021q/vlan.c
··· 694 694 out_unlock: 695 695 rcu_read_unlock(); 696 696 out: 697 - NAPI_GRO_CB(skb)->flush |= flush; 697 + skb_gro_flush_final(skb, pp, flush); 698 698 699 699 return pp; 700 700 }
-4
net/Makefile
··· 20 20 obj-$(CONFIG_XFRM) += xfrm/ 21 21 obj-$(CONFIG_UNIX) += unix/ 22 22 obj-$(CONFIG_NET) += ipv6/ 23 - ifneq ($(CC_CAN_LINK),y) 24 - $(warning CC cannot link executables. Skipping bpfilter.) 25 - else 26 23 obj-$(CONFIG_BPFILTER) += bpfilter/ 27 - endif 28 24 obj-$(CONFIG_PACKET) += packet/ 29 25 obj-$(CONFIG_NET_KEY) += key/ 30 26 obj-$(CONFIG_BRIDGE) += bridge/
+1 -1
net/appletalk/ddp.c
··· 1869 1869 .socketpair = sock_no_socketpair, 1870 1870 .accept = sock_no_accept, 1871 1871 .getname = atalk_getname, 1872 - .poll_mask = datagram_poll_mask, 1872 + .poll = datagram_poll, 1873 1873 .ioctl = atalk_ioctl, 1874 1874 #ifdef CONFIG_COMPAT 1875 1875 .compat_ioctl = atalk_compat_ioctl,
+8 -3
net/atm/common.c
··· 647 647 return error; 648 648 } 649 649 650 - __poll_t vcc_poll_mask(struct socket *sock, __poll_t events) 650 + __poll_t vcc_poll(struct file *file, struct socket *sock, poll_table *wait) 651 651 { 652 652 struct sock *sk = sock->sk; 653 - struct atm_vcc *vcc = ATM_SD(sock); 654 - __poll_t mask = 0; 653 + struct atm_vcc *vcc; 654 + __poll_t mask; 655 + 656 + sock_poll_wait(file, sk_sleep(sk), wait); 657 + mask = 0; 658 + 659 + vcc = ATM_SD(sock); 655 660 656 661 /* exceptional events */ 657 662 if (sk->sk_err)
+1 -1
net/atm/common.h
··· 17 17 int vcc_recvmsg(struct socket *sock, struct msghdr *msg, size_t size, 18 18 int flags); 19 19 int vcc_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len); 20 - __poll_t vcc_poll_mask(struct socket *sock, __poll_t events); 20 + __poll_t vcc_poll(struct file *file, struct socket *sock, poll_table *wait); 21 21 int vcc_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg); 22 22 int vcc_compat_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg); 23 23 int vcc_setsockopt(struct socket *sock, int level, int optname,
+1 -1
net/atm/pvc.c
··· 113 113 .socketpair = sock_no_socketpair, 114 114 .accept = sock_no_accept, 115 115 .getname = pvc_getname, 116 - .poll_mask = vcc_poll_mask, 116 + .poll = vcc_poll, 117 117 .ioctl = vcc_ioctl, 118 118 #ifdef CONFIG_COMPAT 119 119 .compat_ioctl = vcc_compat_ioctl,
+1 -1
net/atm/svc.c
··· 636 636 .socketpair = sock_no_socketpair, 637 637 .accept = svc_accept, 638 638 .getname = svc_getname, 639 - .poll_mask = vcc_poll_mask, 639 + .poll = vcc_poll, 640 640 .ioctl = svc_ioctl, 641 641 #ifdef CONFIG_COMPAT 642 642 .compat_ioctl = svc_compat_ioctl,
+1 -1
net/ax25/af_ax25.c
··· 1941 1941 .socketpair = sock_no_socketpair, 1942 1942 .accept = ax25_accept, 1943 1943 .getname = ax25_getname, 1944 - .poll_mask = datagram_poll_mask, 1944 + .poll = datagram_poll, 1945 1945 .ioctl = ax25_ioctl, 1946 1946 .listen = ax25_listen, 1947 1947 .shutdown = ax25_shutdown,
+5 -2
net/bluetooth/af_bluetooth.c
··· 437 437 return 0; 438 438 } 439 439 440 - __poll_t bt_sock_poll_mask(struct socket *sock, __poll_t events) 440 + __poll_t bt_sock_poll(struct file *file, struct socket *sock, 441 + poll_table *wait) 441 442 { 442 443 struct sock *sk = sock->sk; 443 444 __poll_t mask = 0; 444 445 445 446 BT_DBG("sock %p, sk %p", sock, sk); 447 + 448 + poll_wait(file, sk_sleep(sk), wait); 446 449 447 450 if (sk->sk_state == BT_LISTEN) 448 451 return bt_accept_poll(sk); ··· 478 475 479 476 return mask; 480 477 } 481 - EXPORT_SYMBOL(bt_sock_poll_mask); 478 + EXPORT_SYMBOL(bt_sock_poll); 482 479 483 480 int bt_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg) 484 481 {
+1 -1
net/bluetooth/hci_sock.c
··· 1975 1975 .sendmsg = hci_sock_sendmsg, 1976 1976 .recvmsg = hci_sock_recvmsg, 1977 1977 .ioctl = hci_sock_ioctl, 1978 - .poll_mask = datagram_poll_mask, 1978 + .poll = datagram_poll, 1979 1979 .listen = sock_no_listen, 1980 1980 .shutdown = sock_no_shutdown, 1981 1981 .setsockopt = hci_sock_setsockopt,
+1 -1
net/bluetooth/l2cap_sock.c
··· 1653 1653 .getname = l2cap_sock_getname, 1654 1654 .sendmsg = l2cap_sock_sendmsg, 1655 1655 .recvmsg = l2cap_sock_recvmsg, 1656 - .poll_mask = bt_sock_poll_mask, 1656 + .poll = bt_sock_poll, 1657 1657 .ioctl = bt_sock_ioctl, 1658 1658 .mmap = sock_no_mmap, 1659 1659 .socketpair = sock_no_socketpair,
+1 -1
net/bluetooth/rfcomm/sock.c
··· 1049 1049 .setsockopt = rfcomm_sock_setsockopt, 1050 1050 .getsockopt = rfcomm_sock_getsockopt, 1051 1051 .ioctl = rfcomm_sock_ioctl, 1052 - .poll_mask = bt_sock_poll_mask, 1052 + .poll = bt_sock_poll, 1053 1053 .socketpair = sock_no_socketpair, 1054 1054 .mmap = sock_no_mmap 1055 1055 };
+1 -1
net/bluetooth/sco.c
··· 1197 1197 .getname = sco_sock_getname, 1198 1198 .sendmsg = sco_sock_sendmsg, 1199 1199 .recvmsg = sco_sock_recvmsg, 1200 - .poll_mask = bt_sock_poll_mask, 1200 + .poll = bt_sock_poll, 1201 1201 .ioctl = bt_sock_ioctl, 1202 1202 .mmap = sock_no_mmap, 1203 1203 .socketpair = sock_no_socketpair,
+1 -1
net/bpfilter/Kconfig
··· 1 1 menuconfig BPFILTER 2 2 bool "BPF based packet filtering framework (BPFILTER)" 3 - default n 4 3 depends on NET && BPF && INET 5 4 help 6 5 This builds experimental bpfilter framework that is aiming to ··· 8 9 if BPFILTER 9 10 config BPFILTER_UMH 10 11 tristate "bpfilter kernel module with user mode helper" 12 + depends on $(success,$(srctree)/scripts/cc-can-link.sh $(CC)) 11 13 default m 12 14 help 13 15 This builds bpfilter kernel module with embedded user mode helper
+2 -15
net/bpfilter/Makefile
··· 15 15 HOSTLDFLAGS += -static 16 16 endif 17 17 18 - # a bit of elf magic to convert bpfilter_umh binary into a binary blob 19 - # inside bpfilter_umh.o elf file referenced by 20 - # _binary_net_bpfilter_bpfilter_umh_start symbol 21 - # which bpfilter_kern.c passes further into umh blob loader at run-time 22 - quiet_cmd_copy_umh = GEN $@ 23 - cmd_copy_umh = echo ':' > $(obj)/.bpfilter_umh.o.cmd; \ 24 - $(OBJCOPY) -I binary \ 25 - `LC_ALL=C $(OBJDUMP) -f net/bpfilter/bpfilter_umh \ 26 - |awk -F' |,' '/file format/{print "-O",$$NF} \ 27 - /^architecture:/{print "-B",$$2}'` \ 28 - --rename-section .data=.init.rodata $< $@ 29 - 30 - $(obj)/bpfilter_umh.o: $(obj)/bpfilter_umh 31 - $(call cmd,copy_umh) 18 + $(obj)/bpfilter_umh_blob.o: $(obj)/bpfilter_umh 32 19 33 20 obj-$(CONFIG_BPFILTER_UMH) += bpfilter.o 34 - bpfilter-objs += bpfilter_kern.o bpfilter_umh.o 21 + bpfilter-objs += bpfilter_kern.o bpfilter_umh_blob.o
+5 -6
net/bpfilter/bpfilter_kern.c
··· 10 10 #include <linux/file.h> 11 11 #include "msgfmt.h" 12 12 13 - #define UMH_start _binary_net_bpfilter_bpfilter_umh_start 14 - #define UMH_end _binary_net_bpfilter_bpfilter_umh_end 15 - 16 - extern char UMH_start; 17 - extern char UMH_end; 13 + extern char bpfilter_umh_start; 14 + extern char bpfilter_umh_end; 18 15 19 16 static struct umh_info info; 20 17 /* since ip_getsockopt() can run in parallel, serialize access to umh */ ··· 90 93 int err; 91 94 92 95 /* fork usermode process */ 93 - err = fork_usermode_blob(&UMH_start, &UMH_end - &UMH_start, &info); 96 + err = fork_usermode_blob(&bpfilter_umh_start, 97 + &bpfilter_umh_end - &bpfilter_umh_start, 98 + &info); 94 99 if (err) 95 100 return err; 96 101 pr_info("Loaded bpfilter_umh pid %d\n", info.pid);
+7
net/bpfilter/bpfilter_umh_blob.S
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + .section .init.rodata, "a" 3 + .global bpfilter_umh_start 4 + bpfilter_umh_start: 5 + .incbin "net/bpfilter/bpfilter_umh" 6 + .global bpfilter_umh_end 7 + bpfilter_umh_end:
+8 -4
net/caif/caif_socket.c
··· 934 934 } 935 935 936 936 /* Copied from af_unix.c:unix_poll(), added CAIF tx_flow handling */ 937 - static __poll_t caif_poll_mask(struct socket *sock, __poll_t events) 937 + static __poll_t caif_poll(struct file *file, 938 + struct socket *sock, poll_table *wait) 938 939 { 939 940 struct sock *sk = sock->sk; 941 + __poll_t mask; 940 942 struct caifsock *cf_sk = container_of(sk, struct caifsock, sk); 941 - __poll_t mask = 0; 943 + 944 + sock_poll_wait(file, sk_sleep(sk), wait); 945 + mask = 0; 942 946 943 947 /* exceptional events? */ 944 948 if (sk->sk_err) ··· 976 972 .socketpair = sock_no_socketpair, 977 973 .accept = sock_no_accept, 978 974 .getname = sock_no_getname, 979 - .poll_mask = caif_poll_mask, 975 + .poll = caif_poll, 980 976 .ioctl = sock_no_ioctl, 981 977 .listen = sock_no_listen, 982 978 .shutdown = sock_no_shutdown, ··· 997 993 .socketpair = sock_no_socketpair, 998 994 .accept = sock_no_accept, 999 995 .getname = sock_no_getname, 1000 - .poll_mask = caif_poll_mask, 996 + .poll = caif_poll, 1001 997 .ioctl = sock_no_ioctl, 1002 998 .listen = sock_no_listen, 1003 999 .shutdown = sock_no_shutdown,
+1 -1
net/can/bcm.c
··· 1660 1660 .socketpair = sock_no_socketpair, 1661 1661 .accept = sock_no_accept, 1662 1662 .getname = sock_no_getname, 1663 - .poll_mask = datagram_poll_mask, 1663 + .poll = datagram_poll, 1664 1664 .ioctl = can_ioctl, /* use can_ioctl() from af_can.c */ 1665 1665 .listen = sock_no_listen, 1666 1666 .shutdown = sock_no_shutdown,
+1 -1
net/can/raw.c
··· 843 843 .socketpair = sock_no_socketpair, 844 844 .accept = sock_no_accept, 845 845 .getname = raw_getname, 846 - .poll_mask = datagram_poll_mask, 846 + .poll = datagram_poll, 847 847 .ioctl = can_ioctl, /* use can_ioctl() from af_can.c */ 848 848 .listen = sock_no_listen, 849 849 .shutdown = sock_no_shutdown,
+9 -4
net/core/datagram.c
··· 819 819 820 820 /** 821 821 * datagram_poll - generic datagram poll 822 + * @file: file struct 822 823 * @sock: socket 823 - * @events to wait for 824 + * @wait: poll table 824 825 * 825 826 * Datagram poll: Again totally generic. This also handles 826 827 * sequenced packet sockets providing the socket receive queue ··· 831 830 * and you use a different write policy from sock_writeable() 832 831 * then please supply your own write_space callback. 833 832 */ 834 - __poll_t datagram_poll_mask(struct socket *sock, __poll_t events) 833 + __poll_t datagram_poll(struct file *file, struct socket *sock, 834 + poll_table *wait) 835 835 { 836 836 struct sock *sk = sock->sk; 837 - __poll_t mask = 0; 837 + __poll_t mask; 838 + 839 + sock_poll_wait(file, sk_sleep(sk), wait); 840 + mask = 0; 838 841 839 842 /* exceptional events? */ 840 843 if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue)) ··· 871 866 872 867 return mask; 873 868 } 874 - EXPORT_SYMBOL(datagram_poll_mask); 869 + EXPORT_SYMBOL(datagram_poll);
+2 -9
net/core/dev_ioctl.c
··· 285 285 if (ifr->ifr_qlen < 0) 286 286 return -EINVAL; 287 287 if (dev->tx_queue_len ^ ifr->ifr_qlen) { 288 - unsigned int orig_len = dev->tx_queue_len; 289 - 290 - dev->tx_queue_len = ifr->ifr_qlen; 291 - err = call_netdevice_notifiers( 292 - NETDEV_CHANGE_TX_QUEUE_LEN, dev); 293 - err = notifier_to_errno(err); 294 - if (err) { 295 - dev->tx_queue_len = orig_len; 288 + err = dev_change_tx_queue_len(dev, ifr->ifr_qlen); 289 + if (err) 296 290 return err; 297 - } 298 291 } 299 292 return 0; 300 293
+79 -1
net/core/fib_rules.c
··· 416 416 if (rule->mark && r->mark != rule->mark) 417 417 continue; 418 418 419 + if (rule->suppress_ifgroup != -1 && 420 + r->suppress_ifgroup != rule->suppress_ifgroup) 421 + continue; 422 + 423 + if (rule->suppress_prefixlen != -1 && 424 + r->suppress_prefixlen != rule->suppress_prefixlen) 425 + continue; 426 + 419 427 if (rule->mark_mask && r->mark_mask != rule->mark_mask) 420 428 continue; ··· 442 434 continue; 443 435 444 436 if (rule->ip_proto && r->ip_proto != rule->ip_proto) 437 + continue; 438 + 439 + if (rule->proto && r->proto != rule->proto) 445 440 continue; 446 441 447 442 if (fib_rule_port_range_set(&rule->sport_range) && ··· 656 645 return err; 657 646 } 658 647 648 + static int rule_exists(struct fib_rules_ops *ops, struct fib_rule_hdr *frh, 649 + struct nlattr **tb, struct fib_rule *rule) 650 + { 651 + struct fib_rule *r; 652 + 653 + list_for_each_entry(r, &ops->rules_list, list) { 654 + if (r->action != rule->action) 655 + continue; 656 + 657 + if (r->table != rule->table) 658 + continue; 659 + 660 + if (r->pref != rule->pref) 661 + continue; 662 + 663 + if (memcmp(r->iifname, rule->iifname, IFNAMSIZ)) 664 + continue; 665 + 666 + if (memcmp(r->oifname, rule->oifname, IFNAMSIZ)) 667 + continue; 668 + 669 + if (r->mark != rule->mark) 670 + continue; 671 + 672 + if (r->suppress_ifgroup != rule->suppress_ifgroup) 673 + continue; 674 + 675 + if (r->suppress_prefixlen != rule->suppress_prefixlen) 676 + continue; 677 + 678 + if (r->mark_mask != rule->mark_mask) 679 + continue; 680 + 681 + if (r->tun_id != rule->tun_id) 682 + continue; 683 + 684 + if (r->fr_net != rule->fr_net) 685 + continue; 686 + 687 + if (r->l3mdev != rule->l3mdev) 688 + continue; 689 + 690 + if (!uid_eq(r->uid_range.start, rule->uid_range.start) || 691 + !uid_eq(r->uid_range.end, rule->uid_range.end)) 692 + continue; 693 + 694 + if (r->ip_proto != rule->ip_proto) 695 + continue; 696 + 697 + if (r->proto != rule->proto) 698 + continue; 699 + 700 + if (!fib_rule_port_range_compare(&r->sport_range, 701 + &rule->sport_range)) 702 + continue; 703 + 704 + if (!fib_rule_port_range_compare(&r->dport_range, 705 + &rule->dport_range)) 706 + continue; 707 + 708 + if (!ops->compare(r, frh, tb)) 709 + continue; 710 + return 1; 711 + } 712 + return 0; 713 + } 714 + 659 715 int fib_nl_newrule(struct sk_buff *skb, struct nlmsghdr *nlh, 660 716 struct netlink_ext_ack *extack) 661 717 { ··· 757 679 goto errout; 758 680 759 681 if ((nlh->nlmsg_flags & NLM_F_EXCL) && 760 - rule_find(ops, frh, tb, rule, user_priority)) { 682 + rule_exists(ops, frh, tb, rule)) { 761 683 err = -EEXIST; 762 684 goto errout_free; 763 685 }
+54 -32
net/core/filter.c
··· 4073 4073 memcpy(params->smac, dev->dev_addr, ETH_ALEN); 4074 4074 params->h_vlan_TCI = 0; 4075 4075 params->h_vlan_proto = 0; 4076 + params->ifindex = dev->ifindex; 4076 4077 4077 - return dev->ifindex; 4078 + return 0; 4078 4079 } 4079 4080 #endif 4080 4081 ··· 4099 4098 /* verify forwarding is enabled on this interface */ 4100 4099 in_dev = __in_dev_get_rcu(dev); 4101 4100 if (unlikely(!in_dev || !IN_DEV_FORWARD(in_dev))) 4102 - return 0; 4101 + return BPF_FIB_LKUP_RET_FWD_DISABLED; 4103 4102 4104 4103 if (flags & BPF_FIB_LOOKUP_OUTPUT) { 4105 4104 fl4.flowi4_iif = 1; ··· 4124 4123 4125 4124 tb = fib_get_table(net, tbid); 4126 4125 if (unlikely(!tb)) 4127 - return 0; 4126 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4128 4127 4129 4128 err = fib_table_lookup(tb, &fl4, &res, FIB_LOOKUP_NOREF); 4130 4129 } else { ··· 4136 4135 err = fib_lookup(net, &fl4, &res, FIB_LOOKUP_NOREF); 4137 4136 } 4138 4137 4139 - if (err || res.type != RTN_UNICAST) 4140 - return 0; 4138 + if (err) { 4139 + /* map fib lookup errors to RTN_ type */ 4140 + if (err == -EINVAL) 4141 + return BPF_FIB_LKUP_RET_BLACKHOLE; 4142 + if (err == -EHOSTUNREACH) 4143 + return BPF_FIB_LKUP_RET_UNREACHABLE; 4144 + if (err == -EACCES) 4145 + return BPF_FIB_LKUP_RET_PROHIBIT; 4146 + 4147 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4148 + } 4149 + 4150 + if (res.type != RTN_UNICAST) 4151 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4141 4152 4142 4153 if (res.fi->fib_nhs > 1) 4143 4154 fib_select_path(net, &res, &fl4, NULL); ··· 4157 4144 if (check_mtu) { 4158 4145 mtu = ip_mtu_from_fib_result(&res, params->ipv4_dst); 4159 4146 if (params->tot_len > mtu) 4160 - return 0; 4147 + return BPF_FIB_LKUP_RET_FRAG_NEEDED; 4161 4148 } 4162 4149 4163 4150 nh = &res.fi->fib_nh[res.nh_sel]; 4164 4151 4165 4152 /* do not handle lwt encaps right now */ 4166 4153 if (nh->nh_lwtstate) 4167 4154 return BPF_FIB_LKUP_RET_UNSUPP_LWT; 4168 4155 4169 4156 dev = nh->nh_dev; 4170 - if (unlikely(!dev)) 4171 - return 0; 4172 - 4173 4157 if (nh->nh_gw) 4174 4158 params->ipv4_dst = nh->nh_gw; 4175 4159 ··· 4176 4166 * rcu_read_lock_bh is not needed here 4177 4167 */ 4178 4168 neigh = __ipv4_neigh_lookup_noref(dev, (__force u32)params->ipv4_dst); 4179 - if (neigh) 4180 - return bpf_fib_set_fwd_params(params, neigh, dev); 4169 + if (!neigh) 4170 + return BPF_FIB_LKUP_RET_NO_NEIGH; 4181 4171 4182 - return 0; 4172 + return bpf_fib_set_fwd_params(params, neigh, dev); 4183 4173 } 4184 4174 #endif 4185 4175 ··· 4200 4190 4201 4191 /* link local addresses are never forwarded */ 4202 4192 if (rt6_need_strict(dst) || rt6_need_strict(src)) 4203 - return 0; 4193 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4204 4194 4205 4195 dev = dev_get_by_index_rcu(net, params->ifindex); 4206 4196 if (unlikely(!dev)) ··· 4208 4198 4209 4199 idev = __in6_dev_get_safely(dev); 4210 4200 if (unlikely(!idev || !net->ipv6.devconf_all->forwarding)) 4211 - return 0; 4201 + return BPF_FIB_LKUP_RET_FWD_DISABLED; 4212 4202 4213 4203 if (flags & BPF_FIB_LOOKUP_OUTPUT) { 4214 4204 fl6.flowi6_iif = 1; ··· 4235 4225 4236 4226 tb = ipv6_stub->fib6_get_table(net, tbid); 4237 4227 if (unlikely(!tb)) 4238 - return 0; 4228 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4239 4229 4240 4230 f6i = ipv6_stub->fib6_table_lookup(net, tb, oif, &fl6, strict); 4241 4231 } else { ··· 4248 4238 } 4249 4239 4250 4240 if (unlikely(IS_ERR_OR_NULL(f6i) || f6i == net->ipv6.fib6_null_entry)) 4251 - return 0; 4241 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4252 4242 4253 - if (unlikely(f6i->fib6_flags & RTF_REJECT || 4254 - f6i->fib6_type != RTN_UNICAST)) 4255 - return 0; 4243 + if (unlikely(f6i->fib6_flags & RTF_REJECT)) { 4244 + switch (f6i->fib6_type) { 4245 + case RTN_BLACKHOLE: 4246 + return BPF_FIB_LKUP_RET_BLACKHOLE; 4247 + case RTN_UNREACHABLE: 4248 + return BPF_FIB_LKUP_RET_UNREACHABLE; 4249 + case RTN_PROHIBIT: 4250 + return BPF_FIB_LKUP_RET_PROHIBIT; 4251 + default: 4252 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4253 + } 4254 + } 4255 + 4256 + if (f6i->fib6_type != RTN_UNICAST) 4257 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4256 4258 4257 4259 if (f6i->fib6_nsiblings && fl6.flowi6_oif == 0) 4258 4260 f6i = ipv6_stub->fib6_multipath_select(net, f6i, &fl6, ··· 4274 4252 if (check_mtu) { 4275 4253 mtu = ipv6_stub->ip6_mtu_from_fib6(f6i, dst, src); 4276 4254 if (params->tot_len > mtu) 4277 - return 0; 4255 + return BPF_FIB_LKUP_RET_FRAG_NEEDED; 4278 4256 } 4279 4257 4280 4258 if (f6i->fib6_nh.nh_lwtstate) 4281 - return 0; 4259 + return BPF_FIB_LKUP_RET_UNSUPP_LWT; 4282 4260 4283 4261 if (f6i->fib6_flags & RTF_GATEWAY) 4284 4262 *dst = f6i->fib6_nh.nh_gw; ··· 4292 4270 */ 4293 4271 neigh = ___neigh_lookup_noref(ipv6_stub->nd_tbl, neigh_key_eq128, 4294 4272 ndisc_hashfn, dst, dev); 4295 - if (neigh) 4296 - return bpf_fib_set_fwd_params(params, neigh, dev); 4273 + if (!neigh) 4274 + return BPF_FIB_LKUP_RET_NO_NEIGH; 4297 4275 4298 - return 0; 4276 + return bpf_fib_set_fwd_params(params, neigh, dev); 4299 4277 } 4300 4278 #endif 4301 4279 ··· 4337 4315 struct bpf_fib_lookup *, params, int, plen, u32, flags) 4338 4316 { 4339 4317 struct net *net = dev_net(skb->dev); 4340 - int index = -EAFNOSUPPORT; 4318 + int rc = -EAFNOSUPPORT; 4341 4319 4342 4320 if (plen < sizeof(*params)) 4343 4321 return -EINVAL; ··· 4348 4326 switch (params->family) { 4349 4327 #if IS_ENABLED(CONFIG_INET) 4350 4328 case AF_INET: 4351 - index = bpf_ipv4_fib_lookup(net, params, flags, false); 4329 + rc = bpf_ipv4_fib_lookup(net, params, flags, false); 4352 4330 break; 4353 4331 #endif 4354 4332 #if IS_ENABLED(CONFIG_IPV6) 4355 4333 case AF_INET6: 4356 - index = bpf_ipv6_fib_lookup(net, params, flags, false); 4334 + rc = bpf_ipv6_fib_lookup(net, params, flags, false); 4357 4335 break; 4358 4336 #endif 4359 4337 } 4360 4338 4361 - if (index > 0) { 4339 + if (!rc) { 4362 4340 struct net_device *dev; 4363 4341 4364 4342 dev = dev_get_by_index_rcu(net, index); 4365 4343 if (!is_skb_forwardable(dev, skb)) 4366 - index = 0; 4344 + rc = BPF_FIB_LKUP_RET_FRAG_NEEDED; 4367 4345 } 4368 4346 4369 - return index; 4347 + return rc; 4370 4348 } 4371 4349 4372 4350 static const struct bpf_func_proto bpf_skb_fib_lookup_proto = {
+1 -2
net/core/skbuff.c
··· 5275 5275 if (npages >= 1 << order) { 5276 5276 page = alloc_pages((gfp_mask & ~__GFP_DIRECT_RECLAIM) | 5277 5277 __GFP_COMP | 5278 - __GFP_NOWARN | 5279 - __GFP_NORETRY, 5278 + __GFP_NOWARN, 5280 5279 order); 5281 5280 if (page) 5282 5281 goto fill_page;
+5 -2
net/core/sock.c
··· 3247 3247 3248 3248 rsk_prot->slab = kmem_cache_create(rsk_prot->slab_name, 3249 3249 rsk_prot->obj_size, 0, 3250 - prot->slab_flags, NULL); 3250 + SLAB_ACCOUNT | prot->slab_flags, 3251 + NULL); 3251 3252 3252 3253 if (!rsk_prot->slab) { 3253 3254 pr_crit("%s: Can't create request sock SLAB cache!\n", ··· 3263 3262 if (alloc_slab) { 3264 3263 prot->slab = kmem_cache_create_usercopy(prot->name, 3265 3264 prot->obj_size, 0, 3266 - SLAB_HWCACHE_ALIGN | prot->slab_flags, 3265 + SLAB_HWCACHE_ALIGN | SLAB_ACCOUNT | 3266 + prot->slab_flags, 3267 3267 prot->useroffset, prot->usersize, 3268 3268 NULL); 3269 3269 ··· 3287 3285 kmem_cache_create(prot->twsk_prot->twsk_slab_name, 3288 3286 prot->twsk_prot->twsk_obj_size, 3289 3287 0, 3288 + SLAB_ACCOUNT | 3290 3289 prot->slab_flags, 3291 3290 NULL); 3292 3291 if (prot->twsk_prot->twsk_slab == NULL)
+2 -1
net/dccp/dccp.h
··· 316 316 int flags, int *addr_len); 317 317 void dccp_shutdown(struct sock *sk, int how); 318 318 int inet_dccp_listen(struct socket *sock, int backlog); 319 - __poll_t dccp_poll_mask(struct socket *sock, __poll_t events); 319 + __poll_t dccp_poll(struct file *file, struct socket *sock, 320 + poll_table *wait); 320 321 int dccp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len); 321 322 void dccp_req_err(struct sock *sk, u64 seq); 322 323
+1 -1
net/dccp/ipv4.c
··· 984 984 .accept = inet_accept, 985 985 .getname = inet_getname, 986 986 /* FIXME: work on tcp_poll to rename it to inet_csk_poll */ 987 - .poll_mask = dccp_poll_mask, 987 + .poll = dccp_poll, 988 988 .ioctl = inet_ioctl, 989 989 /* FIXME: work on inet_listen to rename it to sock_common_listen */ 990 990 .listen = inet_dccp_listen,
+1 -1
net/dccp/ipv6.c
··· 1070 1070 .socketpair = sock_no_socketpair, 1071 1071 .accept = inet_accept, 1072 1072 .getname = inet6_getname, 1073 - .poll_mask = dccp_poll_mask, 1073 + .poll = dccp_poll, 1074 1074 .ioctl = inet6_ioctl, 1075 1075 .listen = inet_dccp_listen, 1076 1076 .shutdown = inet_shutdown,
+11 -2
net/dccp/proto.c
··· 312 312 313 313 EXPORT_SYMBOL_GPL(dccp_disconnect); 314 314 315 - __poll_t dccp_poll_mask(struct socket *sock, __poll_t events) 315 + /* 316 + * Wait for a DCCP event. 317 + * 318 + * Note that we don't need to lock the socket, as the upper poll layers 319 + * take care of normal races (between the test and the event) and we don't 320 + * go look at any of the socket buffers directly. 321 + */ 322 + __poll_t dccp_poll(struct file *file, struct socket *sock, 323 + poll_table *wait) 316 324 { 317 325 __poll_t mask; 318 326 struct sock *sk = sock->sk; 319 327 328 + sock_poll_wait(file, sk_sleep(sk), wait); 320 329 if (sk->sk_state == DCCP_LISTEN) 321 330 return inet_csk_listen_poll(sk); 322 331 ··· 367 358 return mask; 368 359 } 369 360 370 - EXPORT_SYMBOL_GPL(dccp_poll_mask); 361 + EXPORT_SYMBOL_GPL(dccp_poll); 371 362 372 363 int dccp_ioctl(struct sock *sk, int cmd, unsigned long arg) 373 364 {
+3 -3
net/decnet/af_decnet.c
··· 1207 1207 } 1208 1208 1209 1209 1210 - static __poll_t dn_poll_mask(struct socket *sock, __poll_t events) 1210 + static __poll_t dn_poll(struct file *file, struct socket *sock, poll_table *wait) 1211 1211 { 1212 1212 struct sock *sk = sock->sk; 1213 1213 struct dn_scp *scp = DN_SK(sk); 1214 - __poll_t mask = datagram_poll_mask(sock, events); 1214 + __poll_t mask = datagram_poll(file, sock, wait); 1215 1215 1216 1216 if (!skb_queue_empty(&scp->other_receive_queue)) 1217 1217 mask |= EPOLLRDBAND; ··· 2331 2331 .socketpair = sock_no_socketpair, 2332 2332 .accept = dn_accept, 2333 2333 .getname = dn_getname, 2334 - .poll_mask = dn_poll_mask, 2334 + .poll = dn_poll, 2335 2335 .ioctl = dn_ioctl, 2336 2336 .listen = dn_listen, 2337 2337 .shutdown = dn_shutdown,
+2 -2
net/ieee802154/socket.c
··· 423 423 .socketpair = sock_no_socketpair, 424 424 .accept = sock_no_accept, 425 425 .getname = sock_no_getname, 426 - .poll_mask = datagram_poll_mask, 426 + .poll = datagram_poll, 427 427 .ioctl = ieee802154_sock_ioctl, 428 428 .listen = sock_no_listen, 429 429 .shutdown = sock_no_shutdown, ··· 969 969 .socketpair = sock_no_socketpair, 970 970 .accept = sock_no_accept, 971 971 .getname = sock_no_getname, 972 - .poll_mask = datagram_poll_mask, 972 + .poll = datagram_poll, 973 973 .ioctl = ieee802154_sock_ioctl, 974 974 .listen = sock_no_listen, 975 975 .shutdown = sock_no_shutdown,
+4 -4
net/ipv4/af_inet.c
··· 986 986 .socketpair = sock_no_socketpair, 987 987 .accept = inet_accept, 988 988 .getname = inet_getname, 989 - .poll_mask = tcp_poll_mask, 989 + .poll = tcp_poll, 990 990 .ioctl = inet_ioctl, 991 991 .listen = inet_listen, 992 992 .shutdown = inet_shutdown, ··· 1021 1021 .socketpair = sock_no_socketpair, 1022 1022 .accept = sock_no_accept, 1023 1023 .getname = inet_getname, 1024 - .poll_mask = udp_poll_mask, 1024 + .poll = udp_poll, 1025 1025 .ioctl = inet_ioctl, 1026 1026 .listen = sock_no_listen, 1027 1027 .shutdown = inet_shutdown, ··· 1042 1042 1043 1043 /* 1044 1044 * For SOCK_RAW sockets; should be the same as inet_dgram_ops but without 1045 - * udp_poll_mask 1045 + * udp_poll 1046 1046 */ 1047 1047 static const struct proto_ops inet_sockraw_ops = { 1048 1048 .family = PF_INET, ··· 1053 1053 .socketpair = sock_no_socketpair, 1054 1054 .accept = sock_no_accept, 1055 1055 .getname = inet_getname, 1056 - .poll_mask = datagram_poll_mask, 1056 + .poll = datagram_poll, 1057 1057 .ioctl = inet_ioctl, 1058 1058 .listen = sock_no_listen, 1059 1059 .shutdown = inet_shutdown,
+1 -3
net/ipv4/fou.c
··· 448 448 out_unlock: 449 449 rcu_read_unlock(); 450 450 out: 451 - NAPI_GRO_CB(skb)->flush |= flush; 452 - skb_gro_remcsum_cleanup(skb, &grc); 453 - skb->remcsum_offload = 0; 451 + skb_gro_flush_final_remcsum(skb, pp, flush, &grc); 454 452 455 453 return pp; 456 454 }
+1 -1
net/ipv4/gre_offload.c
··· 223 223 out_unlock: 224 224 rcu_read_unlock(); 225 225 out: 226 - NAPI_GRO_CB(skb)->flush |= flush; 226 + skb_gro_flush_final(skb, pp, flush); 227 227 228 228 return pp; 229 229 }
+13 -5
net/ipv4/sysctl_net_ipv4.c
··· 265 265 ipv4.sysctl_tcp_fastopen); 266 266 struct ctl_table tbl = { .maxlen = (TCP_FASTOPEN_KEY_LENGTH * 2 + 10) }; 267 267 struct tcp_fastopen_context *ctxt; 268 - int ret; 269 268 u32 user_key[4]; /* 16 bytes, matching TCP_FASTOPEN_KEY_LENGTH */ 269 + __le32 key[4]; 270 + int ret, i; 270 271 271 272 tbl.data = kmalloc(tbl.maxlen, GFP_KERNEL); 272 273 if (!tbl.data) ··· 276 275 rcu_read_lock(); 277 276 ctxt = rcu_dereference(net->ipv4.tcp_fastopen_ctx); 278 277 if (ctxt) 279 - memcpy(user_key, ctxt->key, TCP_FASTOPEN_KEY_LENGTH); 278 + memcpy(key, ctxt->key, TCP_FASTOPEN_KEY_LENGTH); 280 279 else 281 - memset(user_key, 0, sizeof(user_key)); 280 + memset(key, 0, sizeof(key)); 282 281 rcu_read_unlock(); 282 + 283 + for (i = 0; i < ARRAY_SIZE(key); i++) 284 + user_key[i] = le32_to_cpu(key[i]); 283 285 284 286 snprintf(tbl.data, tbl.maxlen, "%08x-%08x-%08x-%08x", 285 287 user_key[0], user_key[1], user_key[2], user_key[3]); ··· 294 290 ret = -EINVAL; 295 291 goto bad_key; 296 292 } 297 - tcp_fastopen_reset_cipher(net, NULL, user_key, 293 + 294 + for (i = 0; i < ARRAY_SIZE(user_key); i++) 295 + key[i] = cpu_to_le32(user_key[i]); 296 + 297 + tcp_fastopen_reset_cipher(net, NULL, key, 298 298 TCP_FASTOPEN_KEY_LENGTH); 299 299 } 300 300 301 301 bad_key: 302 302 pr_debug("proc FO key set 0x%x-%x-%x-%x <- 0x%s: %u\n", 303 - user_key[0], user_key[1], user_key[2], user_key[3], 303 + user_key[0], user_key[1], user_key[2], user_key[3], 304 304 (char *)tbl.data, ret); 305 305 kfree(tbl.data); 306 306 return ret;
+17 -6
net/ipv4/tcp.c
··· 494 494 } 495 495 496 496 /* 497 - * Socket is not locked. We are protected from async events by poll logic and 498 - * correct handling of state changes made by other threads is impossible in 499 - * any case. 497 + * Wait for a TCP event. 498 + * 499 + * Note that we don't need to lock the socket, as the upper poll layers 500 + * take care of normal races (between the test and the event) and we don't 501 + * go look at any of the socket buffers directly. 500 502 */ 501 - __poll_t tcp_poll_mask(struct socket *sock, __poll_t events) 503 + __poll_t tcp_poll(struct file *file, struct socket *sock, poll_table *wait) 502 504 { 505 + __poll_t mask; 503 506 struct sock *sk = sock->sk; 504 507 const struct tcp_sock *tp = tcp_sk(sk); 505 - __poll_t mask = 0; 506 508 int state; 509 + 510 + sock_poll_wait(file, sk_sleep(sk), wait); 507 511 508 512 state = inet_sk_state_load(sk); 509 513 if (state == TCP_LISTEN) 510 514 return inet_csk_listen_poll(sk); 515 + 516 + /* Socket is not locked. We are protected from async events 517 + * by poll logic and correct handling of state changes 518 + * made by other threads is impossible in any case. 519 + */ 520 + 521 + mask = 0; 511 522 512 523 /* 513 524 * EPOLLHUP is certainly not done right. But poll() doesn't ··· 600 589 601 590 return mask; 602 591 } 603 - EXPORT_SYMBOL(tcp_poll_mask); 592 + EXPORT_SYMBOL(tcp_poll); 604 593 605 594 int tcp_ioctl(struct sock *sk, int cmd, unsigned long arg) 606 595 {
+11 -2
net/ipv4/tcp_input.c
··· 266 266 * it is probably a retransmit. 267 267 */ 268 268 if (tp->ecn_flags & TCP_ECN_SEEN) 269 - tcp_enter_quickack_mode(sk, 1); 269 + tcp_enter_quickack_mode(sk, 2); 270 270 break; 271 271 case INET_ECN_CE: 272 272 if (tcp_ca_needs_ecn(sk)) ··· 274 274 275 275 if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR)) { 276 276 /* Better not delay acks, sender can have a very low cwnd */ 277 - tcp_enter_quickack_mode(sk, 1); 277 + tcp_enter_quickack_mode(sk, 2); 278 278 tp->ecn_flags |= TCP_ECN_DEMAND_CWR; 279 279 } 280 280 tp->ecn_flags |= TCP_ECN_SEEN; ··· 3185 3185 3186 3186 if (tcp_is_reno(tp)) { 3187 3187 tcp_remove_reno_sacks(sk, pkts_acked); 3188 + 3189 + /* If any of the cumulatively ACKed segments was 3190 + * retransmitted, non-SACK case cannot confirm that 3191 + * progress was due to original transmission due to 3192 + * lack of TCPCB_SACKED_ACKED bits even if some of 3193 + * the packets may have been never retransmitted. 3194 + */ 3195 + if (flag & FLAG_RETRANS_DATA_ACKED) 3196 + flag &= ~FLAG_ORIG_SACK_ACKED; 3188 3197 } else { 3189 3198 int delta; 3190 3199
+5 -5
net/ipv4/udp.c
··· 2591 2591 * udp_poll - wait for a UDP event. 2592 2592 * @file - file struct 2593 2593 * @sock - socket 2594 - * @events - events to wait for 2594 + * @wait - poll table 2595 2595 * 2596 2596 * This is same as datagram poll, except for the special case of 2597 2597 * blocking sockets. If application is using a blocking fd ··· 2600 2600 * but then block when reading it. Add special case code 2601 2601 * to work around these arguably broken applications. 2602 2602 */ 2603 - __poll_t udp_poll_mask(struct socket *sock, __poll_t events) 2603 + __poll_t udp_poll(struct file *file, struct socket *sock, poll_table *wait) 2604 2604 { 2605 - __poll_t mask = datagram_poll_mask(sock, events); 2605 + __poll_t mask = datagram_poll(file, sock, wait); 2606 2606 struct sock *sk = sock->sk; 2607 2607 2608 2608 if (!skb_queue_empty(&udp_sk(sk)->reader_queue)) 2609 2609 mask |= EPOLLIN | EPOLLRDNORM; 2610 2610 2611 2611 /* Check for false positives due to checksum errors */ 2612 - if ((mask & EPOLLRDNORM) && !(sock->file->f_flags & O_NONBLOCK) && 2612 + if ((mask & EPOLLRDNORM) && !(file->f_flags & O_NONBLOCK) && 2613 2613 !(sk->sk_shutdown & RCV_SHUTDOWN) && first_packet_length(sk) == -1) 2614 2614 mask &= ~(EPOLLIN | EPOLLRDNORM); 2615 2615 2616 2616 return mask; 2617 2617 2618 2618 } 2619 - EXPORT_SYMBOL(udp_poll_mask); 2619 + EXPORT_SYMBOL(udp_poll); 2620 2620 2621 2621 int udp_abort(struct sock *sk, int err) 2622 2622 {
+1 -1
net/ipv4/udp_offload.c
··· 395 395 out_unlock: 396 396 rcu_read_unlock(); 397 397 out: 398 - NAPI_GRO_CB(skb)->flush |= flush; 398 + skb_gro_flush_final(skb, pp, flush); 399 399 return pp; 400 400 } 401 401 EXPORT_SYMBOL(udp_gro_receive);
+6 -3
net/ipv6/addrconf.c
··· 4528 4528 unsigned long expires, u32 flags) 4529 4529 { 4530 4530 struct fib6_info *f6i; 4531 + u32 prio; 4531 4532 4532 4533 f6i = addrconf_get_prefix_route(&ifp->addr, 4533 4534 ifp->prefix_len, ··· 4537 4536 if (!f6i) 4538 4537 return -ENOENT; 4539 4538 4540 - if (f6i->fib6_metric != ifp->rt_priority) { 4539 + prio = ifp->rt_priority ? : IP6_RT_PRIO_ADDRCONF; 4540 + if (f6i->fib6_metric != prio) { 4541 + /* delete old one */ 4542 + ip6_del_rt(dev_net(ifp->idev->dev), f6i); 4543 + 4541 4544 /* add new one */ 4542 4545 addrconf_prefix_route(&ifp->addr, ifp->prefix_len, 4543 4546 ifp->rt_priority, ifp->idev->dev, 4544 4547 expires, flags, GFP_KERNEL); 4545 - /* delete old one */ 4546 - ip6_del_rt(dev_net(ifp->idev->dev), f6i); 4547 4548 } else { 4548 4549 if (!expires) 4549 4550 fib6_clean_expires(f6i);
+2 -2
net/ipv6/af_inet6.c
··· 570 570 .socketpair = sock_no_socketpair, /* a do nothing */ 571 571 .accept = inet_accept, /* ok */ 572 572 .getname = inet6_getname, 573 - .poll_mask = tcp_poll_mask, /* ok */ 573 + .poll = tcp_poll, /* ok */ 574 574 .ioctl = inet6_ioctl, /* must change */ 575 575 .listen = inet_listen, /* ok */ 576 576 .shutdown = inet_shutdown, /* ok */ ··· 603 603 .socketpair = sock_no_socketpair, /* a do nothing */ 604 604 .accept = sock_no_accept, /* a do nothing */ 605 605 .getname = inet6_getname, 606 - .poll_mask = udp_poll_mask, /* ok */ 606 + .poll = udp_poll, /* ok */ 607 607 .ioctl = inet6_ioctl, /* must change */ 608 608 .listen = sock_no_listen, /* ok */ 609 609 .shutdown = inet_shutdown, /* ok */
+3 -3
net/ipv6/netfilter/nf_conntrack_reasm.c
··· 107 107 if (hdr == NULL) 108 108 goto err_reg; 109 109 110 - net->nf_frag.sysctl.frags_hdr = hdr; 110 + net->nf_frag_frags_hdr = hdr; 111 111 return 0; 112 112 113 113 err_reg: ··· 121 121 { 122 122 struct ctl_table *table; 123 123 124 - table = net->nf_frag.sysctl.frags_hdr->ctl_table_arg; 125 - unregister_net_sysctl_table(net->nf_frag.sysctl.frags_hdr); 124 + table = net->nf_frag_frags_hdr->ctl_table_arg; 125 + unregister_net_sysctl_table(net->nf_frag_frags_hdr); 126 126 if (!net_eq(net, &init_net)) 127 127 kfree(table); 128 128 }
+2 -2
net/ipv6/raw.c
··· 1334 1334 } 1335 1335 #endif /* CONFIG_PROC_FS */ 1336 1336 1337 - /* Same as inet6_dgram_ops, sans udp_poll_mask. */ 1337 + /* Same as inet6_dgram_ops, sans udp_poll. */ 1338 1338 const struct proto_ops inet6_sockraw_ops = { 1339 1339 .family = PF_INET6, 1340 1340 .owner = THIS_MODULE, ··· 1344 1344 .socketpair = sock_no_socketpair, /* a do nothing */ 1345 1345 .accept = sock_no_accept, /* a do nothing */ 1346 1346 .getname = inet6_getname, 1347 - .poll_mask = datagram_poll_mask, /* ok */ 1347 + .poll = datagram_poll, /* ok */ 1348 1348 .ioctl = inet6_ioctl, /* must change */ 1349 1349 .listen = sock_no_listen, /* ok */ 1350 1350 .shutdown = inet_shutdown, /* ok */
+1 -1
net/ipv6/seg6_hmac.c
··· 374 374 return -ENOMEM; 375 375 376 376 for_each_possible_cpu(cpu) { 377 - tfm = crypto_alloc_shash(algo->name, 0, GFP_KERNEL); 377 + tfm = crypto_alloc_shash(algo->name, 0, 0); 378 378 if (IS_ERR(tfm)) 379 379 return PTR_ERR(tfm); 380 380 p_tfm = per_cpu_ptr(algo->tfms, cpu);
+5 -2
net/iucv/af_iucv.c
··· 1488 1488 return 0; 1489 1489 } 1490 1490 1491 - static __poll_t iucv_sock_poll_mask(struct socket *sock, __poll_t events) 1491 + __poll_t iucv_sock_poll(struct file *file, struct socket *sock, 1492 + poll_table *wait) 1492 1493 { 1493 1494 struct sock *sk = sock->sk; 1494 1495 __poll_t mask = 0; 1496 + 1497 + sock_poll_wait(file, sk_sleep(sk), wait); 1495 1498 1496 1499 if (sk->sk_state == IUCV_LISTEN) 1497 1500 return iucv_accept_poll(sk); ··· 2388 2385 .getname = iucv_sock_getname, 2389 2386 .sendmsg = iucv_sock_sendmsg, 2390 2387 .recvmsg = iucv_sock_recvmsg, 2391 - .poll_mask = iucv_sock_poll_mask, 2388 + .poll = iucv_sock_poll, 2392 2389 .ioctl = sock_no_ioctl, 2393 2390 .mmap = sock_no_mmap, 2394 2391 .socketpair = sock_no_socketpair,
+5 -5
net/kcm/kcmsock.c
··· 1336 1336 struct list_head *head; 1337 1337 int index = 0; 1338 1338 1339 - /* For SOCK_SEQPACKET sock type, datagram_poll_mask checks the sk_state, 1340 - * so we set sk_state, otherwise epoll_wait always returns right away 1341 - * with EPOLLHUP 1339 + /* For SOCK_SEQPACKET sock type, datagram_poll checks the sk_state, so 1340 + * we set sk_state, otherwise epoll_wait always returns right away with 1341 + * EPOLLHUP 1342 1342 */ 1343 1343 kcm->sk.sk_state = TCP_ESTABLISHED; 1344 1344 ··· 1903 1903 .socketpair = sock_no_socketpair, 1904 1904 .accept = sock_no_accept, 1905 1905 .getname = sock_no_getname, 1906 - .poll_mask = datagram_poll_mask, 1906 + .poll = datagram_poll, 1907 1907 .ioctl = kcm_ioctl, 1908 1908 .listen = sock_no_listen, 1909 1909 .shutdown = sock_no_shutdown, ··· 1924 1924 .socketpair = sock_no_socketpair, 1925 1925 .accept = sock_no_accept, 1926 1926 .getname = sock_no_getname, 1927 - .poll_mask = datagram_poll_mask, 1927 + .poll = datagram_poll, 1928 1928 .ioctl = kcm_ioctl, 1929 1929 .listen = sock_no_listen, 1930 1930 .shutdown = sock_no_shutdown,
+1 -1
net/key/af_key.c
··· 3751 3751 3752 3752 /* Now the operations that really occur. */ 3753 3753 .release = pfkey_release, 3754 - .poll_mask = datagram_poll_mask, 3754 + .poll = datagram_poll, 3755 3755 .sendmsg = pfkey_sendmsg, 3756 3756 .recvmsg = pfkey_recvmsg, 3757 3757 };
+1 -1
net/l2tp/l2tp_ip.c
··· 613 613 .socketpair = sock_no_socketpair, 614 614 .accept = sock_no_accept, 615 615 .getname = l2tp_ip_getname, 616 - .poll_mask = datagram_poll_mask, 616 + .poll = datagram_poll, 617 617 .ioctl = inet_ioctl, 618 618 .listen = sock_no_listen, 619 619 .shutdown = inet_shutdown,
+1 -1
net/l2tp/l2tp_ip6.c
··· 754 754 .socketpair = sock_no_socketpair, 755 755 .accept = sock_no_accept, 756 756 .getname = l2tp_ip6_getname, 757 - .poll_mask = datagram_poll_mask, 757 + .poll = datagram_poll, 758 758 .ioctl = inet6_ioctl, 759 759 .listen = sock_no_listen, 760 760 .shutdown = inet_shutdown,
+1 -1
net/l2tp/l2tp_ppp.c
··· 1844 1844 .socketpair = sock_no_socketpair, 1845 1845 .accept = sock_no_accept, 1846 1846 .getname = pppol2tp_getname, 1847 - .poll_mask = datagram_poll_mask, 1847 + .poll = datagram_poll, 1848 1848 .listen = sock_no_listen, 1849 1849 .shutdown = sock_no_shutdown, 1850 1850 .setsockopt = pppol2tp_setsockopt,
+1 -1
net/llc/af_llc.c
··· 1192 1192 .socketpair = sock_no_socketpair, 1193 1193 .accept = llc_ui_accept, 1194 1194 .getname = llc_ui_getname, 1195 - .poll_mask = datagram_poll_mask, 1195 + .poll = datagram_poll, 1196 1196 .ioctl = llc_ui_ioctl, 1197 1197 .listen = llc_ui_listen, 1198 1198 .shutdown = llc_ui_shutdown,
+2
net/mac80211/tx.c
··· 4850 4850 skb_reset_network_header(skb); 4851 4851 skb_reset_mac_header(skb); 4852 4852 4853 + local_bh_disable(); 4853 4854 __ieee80211_subif_start_xmit(skb, skb->dev, flags); 4855 + local_bh_enable(); 4854 4856 4855 4857 return 0; 4856 4858 }
+47 -5
net/netfilter/nf_conncount.c
··· 47 47 struct hlist_node node; 48 48 struct nf_conntrack_tuple tuple; 49 49 struct nf_conntrack_zone zone; 50 + int cpu; 51 + u32 jiffies32; 50 52 }; 51 53 52 54 struct nf_conncount_rb { ··· 93 91 return false; 94 92 conn->tuple = *tuple; 95 93 conn->zone = *zone; 94 + conn->cpu = raw_smp_processor_id(); 95 + conn->jiffies32 = (u32)jiffies; 96 96 hlist_add_head(&conn->node, head); 97 97 return true; 98 98 } 99 99 EXPORT_SYMBOL_GPL(nf_conncount_add); 100 100 + 101 + static const struct nf_conntrack_tuple_hash * 102 + find_or_evict(struct net *net, struct nf_conncount_tuple *conn) 103 + { 104 + const struct nf_conntrack_tuple_hash *found; 105 + unsigned long a, b; 106 + int cpu = raw_smp_processor_id(); 107 + __s32 age; 108 + 109 + found = nf_conntrack_find_get(net, &conn->zone, &conn->tuple); 110 + if (found) 111 + return found; 112 + b = conn->jiffies32; 113 + a = (u32)jiffies; 114 + 115 + /* conn might have been added just before by another cpu and 116 + * might still be unconfirmed. In this case, nf_conntrack_find() 117 + * returns no result. Thus only evict if this cpu added the 118 + * stale entry or if the entry is older than two jiffies. 119 + */ 120 + age = a - b; 121 + if (conn->cpu == cpu || age >= 2) { 122 + hlist_del(&conn->node); 123 + kmem_cache_free(conncount_conn_cachep, conn); 124 + return ERR_PTR(-ENOENT); 125 + } 126 + 127 + return ERR_PTR(-EAGAIN); 128 + } 100 129 101 130 unsigned int nf_conncount_lookup(struct net *net, struct hlist_head *head, 102 131 const struct nf_conntrack_tuple *tuple, ··· 136 103 { 137 104 const struct nf_conntrack_tuple_hash *found; 138 105 struct nf_conncount_tuple *conn; 139 - struct hlist_node *n; 140 106 struct nf_conn *found_ct; 107 + struct hlist_node *n; 141 108 unsigned int length = 0; 142 109 143 110 *addit = tuple ? true : false; 144 111 145 112 /* check the saved connections */ 146 113 hlist_for_each_entry_safe(conn, n, head, node) { 147 - found = nf_conntrack_find_get(net, &conn->zone, &conn->tuple); 148 - if (found == NULL) { 149 - hlist_del(&conn->node); 150 - kmem_cache_free(conncount_conn_cachep, conn); 114 + found = find_or_evict(net, conn); 115 + if (IS_ERR(found)) { 116 + /* Not found, but might be about to be confirmed */ 117 + if (PTR_ERR(found) == -EAGAIN) { 118 + length++; 119 + if (!tuple) 120 + continue; 121 + 122 + if (nf_ct_tuple_equal(&conn->tuple, tuple) && 123 + nf_ct_zone_id(&conn->zone, conn->zone.dir) == 124 + nf_ct_zone_id(zone, zone->dir)) 125 + *addit = false; 126 + } 151 127 continue; 152 128 } 153 129
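The nf_conncount.c hunk above ages entries with a `u32` jiffies snapshot: the timestamp difference is interpreted as signed, so the comparison stays correct across counter wraparound, and an unconfirmed entry from another CPU gets a one-tick grace period. A user-space sketch of that age test (`entry_cpu`/`cur_cpu` are plain ints standing in for `raw_smp_processor_id()`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirrors the decision in find_or_evict(): evict only when this CPU
 * created the (possibly still unconfirmed) entry, or the entry is at
 * least two ticks old. The unsigned subtraction cast to int32_t keeps
 * the age correct across u32 wraparound. */
static bool should_evict(uint32_t now, uint32_t stamp,
			 int entry_cpu, int cur_cpu)
{
	int32_t age = (int32_t)(now - stamp);

	return entry_cpu == cur_cpu || age >= 2;
}
```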
+5
net/netfilter/nf_conntrack_helper.c
··· 465 465 466 466 nf_ct_expect_iterate_destroy(expect_iter_me, NULL); 467 467 nf_ct_iterate_destroy(unhelp, me); 468 + 469 + /* Maybe someone has gotten the helper already when unhelp above. 470 + * So need to wait it. 471 + */ 472 + synchronize_rcu(); 468 473 } 469 474 EXPORT_SYMBOL_GPL(nf_conntrack_helper_unregister); 470 475
+10 -3
net/netfilter/nf_log.c
··· 424 424 if (write) { 425 425 struct ctl_table tmp = *table; 426 426 427 + /* proc_dostring() can append to existing strings, so we need to 428 + * initialize it as an empty string. 429 + */ 430 + buf[0] = '\0'; 427 431 tmp.data = buf; 428 432 r = proc_dostring(&tmp, write, buffer, lenp, ppos); 429 433 if (r) ··· 446 442 rcu_assign_pointer(net->nf.nf_loggers[tindex], logger); 447 443 mutex_unlock(&nf_log_mutex); 448 444 } else { 445 + struct ctl_table tmp = *table; 446 + 447 + tmp.data = buf; 449 448 mutex_lock(&nf_log_mutex); 450 449 logger = nft_log_dereference(net->nf.nf_loggers[tindex]); 451 450 if (!logger) 452 - table->data = "NONE"; 451 + strlcpy(buf, "NONE", sizeof(buf)); 453 452 else 454 - table->data = logger->name; 455 - r = proc_dostring(table, write, buffer, lenp, ppos); 453 + strlcpy(buf, logger->name, sizeof(buf)); 456 454 mutex_unlock(&nf_log_mutex); 455 + r = proc_dostring(&tmp, write, buffer, lenp, ppos); 457 456 } 458 457 459 458 return r;
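The nf_log.c hunk above initializes `buf[0] = '\0'` because `proc_dostring()` appends to whatever string is already in the buffer; an uninitialized stack buffer would make it append after garbage. A sketch of that append behaviour with a hypothetical stand-in writer:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* append_str() is a hypothetical stand-in for a proc_dostring()-style
 * writer: it appends after the existing string, so the caller must
 * start from an empty string (buf[0] = '\0') or the result is
 * undefined. */
static void append_str(char *buf, size_t size, const char *s)
{
	size_t used = strlen(buf);	/* assumes buf already holds a string */

	snprintf(buf + used, size - used, "%s", s);
}
```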
+1 -1
net/netlink/af_netlink.c
··· 2658 2658 .socketpair = sock_no_socketpair, 2659 2659 .accept = sock_no_accept, 2660 2660 .getname = netlink_getname, 2661 - .poll_mask = datagram_poll_mask, 2661 + .poll = datagram_poll, 2662 2662 .ioctl = netlink_ioctl, 2663 2663 .listen = sock_no_listen, 2664 2664 .shutdown = sock_no_shutdown,
+1 -1
net/netrom/af_netrom.c
··· 1355 1355 .socketpair = sock_no_socketpair, 1356 1356 .accept = nr_accept, 1357 1357 .getname = nr_getname, 1358 - .poll_mask = datagram_poll_mask, 1358 + .poll = datagram_poll, 1359 1359 .ioctl = nr_ioctl, 1360 1360 .listen = nr_listen, 1361 1361 .shutdown = sock_no_shutdown,
+6 -3
net/nfc/llcp_sock.c
··· 548 548 return 0; 549 549 } 550 550 551 - static __poll_t llcp_sock_poll_mask(struct socket *sock, __poll_t events) 551 + static __poll_t llcp_sock_poll(struct file *file, struct socket *sock, 552 + poll_table *wait) 552 553 { 553 554 struct sock *sk = sock->sk; 554 555 __poll_t mask = 0; 555 556 556 557 pr_debug("%p\n", sk); 558 + 559 + sock_poll_wait(file, sk_sleep(sk), wait); 557 560 558 561 if (sk->sk_state == LLCP_LISTEN) 559 562 return llcp_accept_poll(sk); ··· 899 896 .socketpair = sock_no_socketpair, 900 897 .accept = llcp_sock_accept, 901 898 .getname = llcp_sock_getname, 902 - .poll_mask = llcp_sock_poll_mask, 899 + .poll = llcp_sock_poll, 903 900 .ioctl = sock_no_ioctl, 904 901 .listen = llcp_sock_listen, 905 902 .shutdown = sock_no_shutdown, ··· 919 916 .socketpair = sock_no_socketpair, 920 917 .accept = sock_no_accept, 921 918 .getname = llcp_sock_getname, 922 - .poll_mask = llcp_sock_poll_mask, 919 + .poll = llcp_sock_poll, 923 920 .ioctl = sock_no_ioctl, 924 921 .listen = sock_no_listen, 925 922 .shutdown = sock_no_shutdown,
+2 -2
net/nfc/rawsock.c
··· 284 284 .socketpair = sock_no_socketpair, 285 285 .accept = sock_no_accept, 286 286 .getname = sock_no_getname, 287 - .poll_mask = datagram_poll_mask, 287 + .poll = datagram_poll, 288 288 .ioctl = sock_no_ioctl, 289 289 .listen = sock_no_listen, 290 290 .shutdown = sock_no_shutdown, ··· 304 304 .socketpair = sock_no_socketpair, 305 305 .accept = sock_no_accept, 306 306 .getname = sock_no_getname, 307 - .poll_mask = datagram_poll_mask, 307 + .poll = datagram_poll, 308 308 .ioctl = sock_no_ioctl, 309 309 .listen = sock_no_listen, 310 310 .shutdown = sock_no_shutdown,
+5 -4
net/packet/af_packet.c
··· 4076 4076 return 0; 4077 4077 } 4078 4078 4079 - static __poll_t packet_poll_mask(struct socket *sock, __poll_t events) 4079 + static __poll_t packet_poll(struct file *file, struct socket *sock, 4080 + poll_table *wait) 4080 4081 { 4081 4082 struct sock *sk = sock->sk; 4082 4083 struct packet_sock *po = pkt_sk(sk); 4083 - __poll_t mask = datagram_poll_mask(sock, events); 4084 + __poll_t mask = datagram_poll(file, sock, wait); 4084 4085 4085 4086 spin_lock_bh(&sk->sk_receive_queue.lock); 4086 4087 if (po->rx_ring.pg_vec) { ··· 4423 4422 .socketpair = sock_no_socketpair, 4424 4423 .accept = sock_no_accept, 4425 4424 .getname = packet_getname_spkt, 4426 - .poll_mask = datagram_poll_mask, 4425 + .poll = datagram_poll, 4427 4426 .ioctl = packet_ioctl, 4428 4427 .listen = sock_no_listen, 4429 4428 .shutdown = sock_no_shutdown, ··· 4444 4443 .socketpair = sock_no_socketpair, 4445 4444 .accept = sock_no_accept, 4446 4445 .getname = packet_getname, 4447 - .poll_mask = packet_poll_mask, 4446 + .poll = packet_poll, 4448 4447 .ioctl = packet_ioctl, 4449 4448 .listen = sock_no_listen, 4450 4449 .shutdown = sock_no_shutdown,
+6 -3
net/phonet/socket.c
··· 340 340 return sizeof(struct sockaddr_pn); 341 341 } 342 342 343 - static __poll_t pn_socket_poll_mask(struct socket *sock, __poll_t events) 343 + static __poll_t pn_socket_poll(struct file *file, struct socket *sock, 344 + poll_table *wait) 344 345 { 345 346 struct sock *sk = sock->sk; 346 347 struct pep_sock *pn = pep_sk(sk); 347 348 __poll_t mask = 0; 349 + 350 + poll_wait(file, sk_sleep(sk), wait); 348 351 349 352 if (sk->sk_state == TCP_CLOSE) 350 353 return EPOLLERR; ··· 448 445 .socketpair = sock_no_socketpair, 449 446 .accept = sock_no_accept, 450 447 .getname = pn_socket_getname, 451 - .poll_mask = datagram_poll_mask, 448 + .poll = datagram_poll, 452 449 .ioctl = pn_socket_ioctl, 453 450 .listen = sock_no_listen, 454 451 .shutdown = sock_no_shutdown, ··· 473 470 .socketpair = sock_no_socketpair, 474 471 .accept = pn_socket_accept, 475 472 .getname = pn_socket_getname, 476 - .poll_mask = pn_socket_poll_mask, 473 + .poll = pn_socket_poll, 477 474 .ioctl = pn_socket_ioctl, 478 475 .listen = pn_socket_listen, 479 476 .shutdown = sock_no_shutdown,
+1 -1
net/qrtr/qrtr.c
··· 1023 1023 .recvmsg = qrtr_recvmsg, 1024 1024 .getname = qrtr_getname, 1025 1025 .ioctl = qrtr_ioctl, 1026 - .poll_mask = datagram_poll_mask, 1026 + .poll = datagram_poll, 1027 1027 .shutdown = sock_no_shutdown, 1028 1028 .setsockopt = sock_no_setsockopt, 1029 1029 .getsockopt = sock_no_getsockopt,
+10 -1
net/rds/connection.c
··· 659 659 660 660 int rds_conn_init(void) 661 661 { 662 + int ret; 663 + 664 + ret = rds_loop_net_init(); /* register pernet callback */ 665 + if (ret) 666 + return ret; 667 + 662 668 rds_conn_slab = kmem_cache_create("rds_connection", 663 669 sizeof(struct rds_connection), 664 670 0, 0, NULL); 665 - if (!rds_conn_slab) 671 + if (!rds_conn_slab) { 672 + rds_loop_net_exit(); 666 673 return -ENOMEM; 674 + } 667 675 668 676 rds_info_register_func(RDS_INFO_CONNECTIONS, rds_conn_info); 669 677 rds_info_register_func(RDS_INFO_SEND_MESSAGES, ··· 684 676 685 677 void rds_conn_exit(void) 686 678 { 679 + rds_loop_net_exit(); /* unregister pernet callback */ 687 680 rds_loop_exit(); 688 681 689 682 WARN_ON(!hlist_empty(rds_conn_hash));
+56
net/rds/loop.c
··· 33 33 #include <linux/kernel.h> 34 34 #include <linux/slab.h> 35 35 #include <linux/in.h> 36 + #include <net/net_namespace.h> 37 + #include <net/netns/generic.h> 36 38 37 39 #include "rds_single_path.h" 38 40 #include "rds.h" ··· 42 40 43 41 static DEFINE_SPINLOCK(loop_conns_lock); 44 42 static LIST_HEAD(loop_conns); 43 + static atomic_t rds_loop_unloading = ATOMIC_INIT(0); 44 + 45 + static void rds_loop_set_unloading(void) 46 + { 47 + atomic_set(&rds_loop_unloading, 1); 48 + } 49 + 50 + static bool rds_loop_is_unloading(struct rds_connection *conn) 51 + { 52 + return atomic_read(&rds_loop_unloading) != 0; 53 + } 45 54 46 55 /* 47 56 * This 'loopback' transport is a special case for flows that originate ··· 178 165 struct rds_loop_connection *lc, *_lc; 179 166 LIST_HEAD(tmp_list); 180 167 168 + rds_loop_set_unloading(); 169 + synchronize_rcu(); 181 170 /* avoid calling conn_destroy with irqs off */ 182 171 spin_lock_irq(&loop_conns_lock); 183 172 list_splice(&loop_conns, &tmp_list); ··· 190 175 WARN_ON(lc->conn->c_passive); 191 176 rds_conn_destroy(lc->conn); 192 177 } 178 + } 179 + 180 + static void rds_loop_kill_conns(struct net *net) 181 + { 182 + struct rds_loop_connection *lc, *_lc; 183 + LIST_HEAD(tmp_list); 184 + 185 + spin_lock_irq(&loop_conns_lock); 186 + list_for_each_entry_safe(lc, _lc, &loop_conns, loop_node) { 187 + struct net *c_net = read_pnet(&lc->conn->c_net); 188 + 189 + if (net != c_net) 190 + continue; 191 + list_move_tail(&lc->loop_node, &tmp_list); 192 + } 193 + spin_unlock_irq(&loop_conns_lock); 194 + 195 + list_for_each_entry_safe(lc, _lc, &tmp_list, loop_node) { 196 + WARN_ON(lc->conn->c_passive); 197 + rds_conn_destroy(lc->conn); 198 + } 199 + } 200 + 201 + static void __net_exit rds_loop_exit_net(struct net *net) 202 + { 203 + rds_loop_kill_conns(net); 204 + } 205 + 206 + static struct pernet_operations rds_loop_net_ops = { 207 + .exit = rds_loop_exit_net, 208 + }; 209 + 210 + int rds_loop_net_init(void) 211 + { 212 + return register_pernet_device(&rds_loop_net_ops); 213 + } 214 + 215 + void rds_loop_net_exit(void) 216 + { 217 + unregister_pernet_device(&rds_loop_net_ops); 193 218 } 194 219 195 220 /* ··· 249 194 .inc_free = rds_loop_inc_free, 250 195 .t_name = "loopback", 251 196 .t_type = RDS_TRANS_LOOP, 197 + .t_unloading = rds_loop_is_unloading, 252 198 };
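The rds/connection.c hunk above registers the pernet callback first, so a later slab-creation failure has to unregister it again before returning. A sketch of that unwind ordering (all names here are hypothetical stand-ins for the kernel calls):

```c
#include <assert.h>
#include <stdbool.h>

/* Error-unwind sketch in the style of rds_conn_init(): resources are
 * released in reverse order of acquisition, so a failure after the
 * pernet registration must undo it. */
static int pernet_registered;

static int register_pernet(void)    { pernet_registered = 1; return 0; }
static void unregister_pernet(void) { pernet_registered = 0; }

static int conn_init(bool slab_ok)
{
	int ret = register_pernet();

	if (ret)
		return ret;
	if (!slab_ok) {
		unregister_pernet();	/* unwind the earlier step */
		return -12;		/* -ENOMEM */
	}
	return 0;
}
```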
+2
net/rds/loop.h
··· 5 5 /* loop.c */ 6 6 extern struct rds_transport rds_loop_transport; 7 7 8 + int rds_loop_net_init(void); 9 + void rds_loop_net_exit(void); 8 10 void rds_loop_exit(void); 9 11 10 12 #endif
+1 -1
net/rose/af_rose.c
··· 1470 1470 .socketpair = sock_no_socketpair, 1471 1471 .accept = rose_accept, 1472 1472 .getname = rose_getname, 1473 - .poll_mask = datagram_poll_mask, 1473 + .poll = datagram_poll, 1474 1474 .ioctl = rose_ioctl, 1475 1475 .listen = rose_listen, 1476 1476 .shutdown = sock_no_shutdown,
+7 -3
net/rxrpc/af_rxrpc.c
··· 734 734 /* 735 735 * permit an RxRPC socket to be polled 736 736 */ 737 - static __poll_t rxrpc_poll_mask(struct socket *sock, __poll_t events) 737 + static __poll_t rxrpc_poll(struct file *file, struct socket *sock, 738 + poll_table *wait) 738 739 { 739 740 struct sock *sk = sock->sk; 740 741 struct rxrpc_sock *rx = rxrpc_sk(sk); 741 - __poll_t mask = 0; 742 + __poll_t mask; 743 + 744 + sock_poll_wait(file, sk_sleep(sk), wait); 745 + mask = 0; 742 746 743 747 /* the socket is readable if there are any messages waiting on the Rx 744 748 * queue */ ··· 949 945 .socketpair = sock_no_socketpair, 950 946 .accept = sock_no_accept, 951 947 .getname = sock_no_getname, 952 - .poll_mask = rxrpc_poll_mask, 948 + .poll = rxrpc_poll, 953 949 .ioctl = sock_no_ioctl, 954 950 .listen = rxrpc_listen, 955 951 .shutdown = rxrpc_shutdown,
+1 -1
net/sctp/ipv6.c
··· 1010 1010 .socketpair = sock_no_socketpair, 1011 1011 .accept = inet_accept, 1012 1012 .getname = sctp_getname, 1013 - .poll_mask = sctp_poll_mask, 1013 + .poll = sctp_poll, 1014 1014 .ioctl = inet6_ioctl, 1015 1015 .listen = sctp_inet_listen, 1016 1016 .shutdown = inet_shutdown,
+1 -1
net/sctp/protocol.c
··· 1016 1016 .socketpair = sock_no_socketpair, 1017 1017 .accept = inet_accept, 1018 1018 .getname = inet_getname, /* Semantics are different. */ 1019 - .poll_mask = sctp_poll_mask, 1019 + .poll = sctp_poll, 1020 1020 .ioctl = inet_ioctl, 1021 1021 .listen = sctp_inet_listen, 1022 1022 .shutdown = inet_shutdown, /* Looks harmless. */
+3 -1
net/sctp/socket.c
··· 7765 7765 * here, again, by modeling the current TCP/UDP code. We don't have 7766 7766 * a good way to test with it yet. 7767 7767 */ 7768 - __poll_t sctp_poll_mask(struct socket *sock, __poll_t events) 7768 + __poll_t sctp_poll(struct file *file, struct socket *sock, poll_table *wait) 7769 7769 { 7770 7770 struct sock *sk = sock->sk; 7771 7771 struct sctp_sock *sp = sctp_sk(sk); 7772 7772 __poll_t mask; 7773 + 7774 + poll_wait(file, sk_sleep(sk), wait); 7773 7775 7774 7776 sock_rps_record_flow(sk); 7775 7777
+67 -31
net/smc/af_smc.c
··· 47 47 */ 48 48 49 49 static void smc_tcp_listen_work(struct work_struct *); 50 + static void smc_connect_work(struct work_struct *); 50 51 51 52 static void smc_set_keepalive(struct sock *sk, int val) 52 53 { ··· 125 124 goto out; 126 125 127 126 smc = smc_sk(sk); 127 + 128 + /* cleanup for a dangling non-blocking connect */ 129 + flush_work(&smc->connect_work); 130 + kfree(smc->connect_info); 131 + smc->connect_info = NULL; 132 + 128 133 if (sk->sk_state == SMC_LISTEN) 129 134 /* smc_close_non_accepted() is called and acquires 130 135 * sock lock for child sockets again ··· 195 188 sk->sk_protocol = protocol; 196 189 smc = smc_sk(sk); 197 190 INIT_WORK(&smc->tcp_listen_work, smc_tcp_listen_work); 191 + INIT_WORK(&smc->connect_work, smc_connect_work); 198 192 INIT_DELAYED_WORK(&smc->conn.tx_work, smc_tx_work); 199 193 INIT_LIST_HEAD(&smc->accept_q); 200 194 spin_lock_init(&smc->accept_q_lock); ··· 715 707 return 0; 716 708 } 717 709 710 + static void smc_connect_work(struct work_struct *work) 711 + { 712 + struct smc_sock *smc = container_of(work, struct smc_sock, 713 + connect_work); 714 + int rc; 715 + 716 + lock_sock(&smc->sk); 717 + rc = kernel_connect(smc->clcsock, &smc->connect_info->addr, 718 + smc->connect_info->alen, smc->connect_info->flags); 719 + if (smc->clcsock->sk->sk_err) { 720 + smc->sk.sk_err = smc->clcsock->sk->sk_err; 721 + goto out; 722 + } 723 + if (rc < 0) { 724 + smc->sk.sk_err = -rc; 725 + goto out; 726 + } 727 + 728 + rc = __smc_connect(smc); 729 + if (rc < 0) 730 + smc->sk.sk_err = -rc; 731 + 732 + out: 733 + smc->sk.sk_state_change(&smc->sk); 734 + kfree(smc->connect_info); 735 + smc->connect_info = NULL; 736 + release_sock(&smc->sk); 737 + } 738 + 718 739 static int smc_connect(struct socket *sock, struct sockaddr *addr, 719 740 int alen, int flags) 720 741 { ··· 773 736 774 737 smc_copy_sock_settings_to_clc(smc); 775 738 tcp_sk(smc->clcsock->sk)->syn_smc = 1; 776 - rc = kernel_connect(smc->clcsock, addr, alen, flags); 777 - if (rc) 
778 - goto out; 739 + if (flags & O_NONBLOCK) { 740 + if (smc->connect_info) { 741 + rc = -EALREADY; 742 + goto out; 743 + } 744 + smc->connect_info = kzalloc(alen + 2 * sizeof(int), GFP_KERNEL); 745 + if (!smc->connect_info) { 746 + rc = -ENOMEM; 747 + goto out; 748 + } 749 + smc->connect_info->alen = alen; 750 + smc->connect_info->flags = flags ^ O_NONBLOCK; 751 + memcpy(&smc->connect_info->addr, addr, alen); 752 + schedule_work(&smc->connect_work); 753 + rc = -EINPROGRESS; 754 + } else { 755 + rc = kernel_connect(smc->clcsock, addr, alen, flags); 756 + if (rc) 757 + goto out; 779 758 780 - rc = __smc_connect(smc); 781 - if (rc < 0) 782 - goto out; 783 - else 784 - rc = 0; /* success cases including fallback */ 759 + rc = __smc_connect(smc); 760 + if (rc < 0) 761 + goto out; 762 + else 763 + rc = 0; /* success cases including fallback */ 764 + } 785 765 786 766 out: 787 767 release_sock(sk); ··· 1509 1455 return mask; 1510 1456 } 1511 1457 1512 - static __poll_t smc_poll_mask(struct socket *sock, __poll_t events) 1458 + static __poll_t smc_poll(struct file *file, struct socket *sock, 1459 + poll_table *wait) 1513 1460 { 1514 1461 struct sock *sk = sock->sk; 1515 1462 __poll_t mask = 0; 1516 1463 struct smc_sock *smc; 1517 - int rc; 1518 1464 1519 1465 if (!sk) 1520 1466 return EPOLLNVAL; 1521 1467 1522 1468 smc = smc_sk(sock->sk); 1523 - sock_hold(sk); 1524 - lock_sock(sk); 1525 1469 if ((sk->sk_state == SMC_INIT) || smc->use_fallback) { 1526 1470 /* delegate to CLC child sock */ 1527 - release_sock(sk); 1528 - mask = smc->clcsock->ops->poll_mask(smc->clcsock, events); 1529 - lock_sock(sk); 1471 + mask = smc->clcsock->ops->poll(file, smc->clcsock, wait); 1530 1472 sk->sk_err = smc->clcsock->sk->sk_err; 1531 - if (sk->sk_err) { 1473 + if (sk->sk_err) 1532 1474 mask |= EPOLLERR; 1533 - } else { 1534 - /* if non-blocking connect finished ... */ 1535 - if (sk->sk_state == SMC_INIT && 1536 - mask & EPOLLOUT && 1537 - smc->clcsock->sk->sk_state != TCP_CLOSE) { 1538 - rc = __smc_connect(smc); 1539 - if (rc < 0) 1540 - mask |= EPOLLERR; 1541 - /* success cases including fallback */ 1542 - mask |= EPOLLOUT | EPOLLWRNORM; 1543 - } 1544 - } 1545 1475 } else { 1546 1476 if (sk->sk_err) 1547 1477 mask |= EPOLLERR; ··· 1554 1516 mask |= EPOLLPRI; 1555 1517 1556 1518 } 1557 - release_sock(sk); 1558 - sock_put(sk); 1559 1519 1560 1520 return mask; 1561 1521 } ··· 1837 1801 .socketpair = sock_no_socketpair, 1838 1802 .accept = smc_accept, 1839 1803 .getname = smc_getname, 1840 - .poll_mask = smc_poll_mask, 1804 + .poll = smc_poll, 1841 1805 .ioctl = smc_ioctl, 1842 1806 .listen = smc_listen, 1843 1807 .shutdown = smc_shutdown,
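The af_smc.c hunk above hands a non-blocking connect to a worker: the address and flags are copied into a heap buffer and `O_NONBLOCK` is stripped (`flags ^ O_NONBLOCK`) so the worker can perform a blocking connect on the caller's behalf. A user-space sketch of that save step, with a struct mirroring the patch's `smc_connect_info` (the flexible array here stands in for `struct sockaddr`):

```c
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the smc_connect() bookkeeping: stash the
 * connect arguments for a deferred worker, clearing O_NONBLOCK so the
 * worker's kernel_connect() call blocks. */
struct connect_info {
	int flags;
	int alen;
	char addr[];		/* flexible array standing in for sockaddr */
};

static struct connect_info *save_connect_info(const void *addr, int alen,
					      int flags)
{
	struct connect_info *ci = malloc(sizeof(*ci) + alen);

	if (!ci)
		return NULL;
	ci->alen = alen;
	ci->flags = flags ^ O_NONBLOCK;	/* worker connects blocking */
	memcpy(ci->addr, addr, alen);
	return ci;
}
```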
+8
net/smc/smc.h
··· 190 190 u64 peer_token; /* SMC-D token of peer */ 191 191 }; 192 192 193 + struct smc_connect_info { 194 + int flags; 195 + int alen; 196 + struct sockaddr addr; 197 + }; 198 + 193 199 struct smc_sock { /* smc sock container */ 194 200 struct sock sk; 195 201 struct socket *clcsock; /* internal tcp socket */ 196 202 struct smc_connection conn; /* smc connection */ 197 203 struct smc_sock *listen_smc; /* listen parent */ 204 + struct smc_connect_info *connect_info; /* connect address & flags */ 205 + struct work_struct connect_work; /* handle non-blocking connect*/ 198 206 struct work_struct tcp_listen_work;/* handle tcp socket accepts */ 199 207 struct work_struct smc_listen_work;/* prepare new accept socket */ 200 208 struct list_head accept_q; /* sockets to be accepted */
+7 -43
net/socket.c
··· 117 117 static int sock_mmap(struct file *file, struct vm_area_struct *vma); 118 118 119 119 static int sock_close(struct inode *inode, struct file *file); 120 - static struct wait_queue_head *sock_get_poll_head(struct file *file, 121 - __poll_t events); 122 - static __poll_t sock_poll_mask(struct file *file, __poll_t); 123 - static __poll_t sock_poll(struct file *file, struct poll_table_struct *wait); 120 + static __poll_t sock_poll(struct file *file, 121 + struct poll_table_struct *wait); 124 122 static long sock_ioctl(struct file *file, unsigned int cmd, unsigned long arg); 125 123 #ifdef CONFIG_COMPAT 126 124 static long compat_sock_ioctl(struct file *file, ··· 141 143 .llseek = no_llseek, 142 144 .read_iter = sock_read_iter, 143 145 .write_iter = sock_write_iter, 144 - .get_poll_head = sock_get_poll_head, 145 - .poll_mask = sock_poll_mask, 146 146 .poll = sock_poll, 147 147 .unlocked_ioctl = sock_ioctl, 148 148 #ifdef CONFIG_COMPAT ··· 1126 1130 } 1127 1131 EXPORT_SYMBOL(sock_create_lite); 1128 1132 1129 - static struct wait_queue_head *sock_get_poll_head(struct file *file, 1130 - __poll_t events) 1131 - { 1132 - struct socket *sock = file->private_data; 1133 - 1134 - if (!sock->ops->poll_mask) 1135 - return NULL; 1136 - sock_poll_busy_loop(sock, events); 1137 - return sk_sleep(sock->sk); 1138 - } 1139 - 1140 - static __poll_t sock_poll_mask(struct file *file, __poll_t events) 1141 - { 1142 - struct socket *sock = file->private_data; 1143 - 1144 - /* 1145 - * We need to be sure we are in sync with the socket flags modification. 1146 - * 1147 - * This memory barrier is paired in the wq_has_sleeper. 1148 - */ 1149 - smp_mb(); 1150 - 1151 - /* this socket can poll_ll so tell the system call */ 1152 - return sock->ops->poll_mask(sock, events) | 1153 - (sk_can_busy_loop(sock->sk) ? POLL_BUSY_LOOP : 0); 1154 - } 1155 - 1156 1133 /* No kernel lock held - perfect */ 1157 1134 static __poll_t sock_poll(struct file *file, poll_table *wait) 1158 1135 { 1159 1136 struct socket *sock = file->private_data; 1160 - __poll_t events = poll_requested_events(wait), mask = 0; 1137 + __poll_t events = poll_requested_events(wait); 1161 1138 1162 - if (sock->ops->poll) { 1163 - sock_poll_busy_loop(sock, events); 1164 - mask = sock->ops->poll(file, sock, wait); 1165 - } else if (sock->ops->poll_mask) { 1166 - sock_poll_wait(file, sock_get_poll_head(file, events), wait); 1167 - mask = sock->ops->poll_mask(sock, events); 1168 - } 1169 - 1170 - return mask | sock_poll_busy_flag(sock); 1139 + sock_poll_busy_loop(sock, events); 1140 + if (!sock->ops->poll) 1141 + return 0; 1142 + return sock->ops->poll(file, sock, wait) | sock_poll_busy_flag(sock); 1171 1143 } 1172 1144 1173 1145 static int sock_mmap(struct file *file, struct vm_area_struct *vma)
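The socket.c hunk above collapses the two dispatch paths into one: with `->poll_mask` gone, `sock_poll()` calls the protocol's `->poll` handler (or returns 0 if there is none) and ORs in the busy-poll flag. A minimal sketch of that control flow, with hypothetical stand-ins for the kernel helpers and flag value:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical busy-poll flag value; the kernel derives its flag from
 * sock_poll_busy_flag(). */
#define BUSY_FLAG 0x8000u

/* Sketch of the simplified sock_poll(): no handler means nothing to
 * report, otherwise the handler's mask is OR-ed with the busy flag. */
static uint32_t sock_poll_sketch(uint32_t (*proto_poll)(void),
				 int busy_flag_set)
{
	uint32_t busy = busy_flag_set ? BUSY_FLAG : 0;

	if (!proto_poll)
		return 0;
	return proto_poll() | busy;
}

static uint32_t fake_poll(void) { return 0x1; }	/* pretend "readable" */
```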
+1 -16
net/strparser/strparser.c
··· 35 35 */ 36 36 struct strp_msg strp; 37 37 int accum_len; 38 - int early_eaten; 39 38 }; 40 39 41 40 static inline struct _strp_msg *_strp_msg(struct sk_buff *skb) ··· 114 115 head = strp->skb_head; 115 116 if (head) { 116 117 /* Message already in progress */ 117 - 118 - stm = _strp_msg(head); 119 - if (unlikely(stm->early_eaten)) { 120 - /* Already some number of bytes on the receive sock 121 - * data saved in skb_head, just indicate they 122 - * are consumed. 123 - */ 124 - eaten = orig_len <= stm->early_eaten ? 125 - orig_len : stm->early_eaten; 126 - stm->early_eaten -= eaten; 127 - 128 - return eaten; 129 - } 130 - 131 118 if (unlikely(orig_offset)) { 132 119 /* Getting data with a non-zero offset when a message is 133 120 * in progress is not expected. If it does happen, we ··· 286 301 } 287 302 288 303 stm->accum_len += cand_len; 304 + eaten += cand_len; 289 305 strp->need_bytes = stm->strp.full_len - 290 306 stm->accum_len; 291 - stm->early_eaten = cand_len; 292 307 STRP_STATS_ADD(strp->stats.bytes, cand_len); 293 308 desc->count = 0; /* Stop reading socket */ 294 309 break;
+9 -5
net/tipc/socket.c
··· 692 692 } 693 693 694 694 /** 695 - * tipc_poll - read pollmask 695 + * tipc_poll - read and possibly block on pollmask 696 696 * @file: file structure associated with the socket 697 697 * @sock: socket for which to calculate the poll bits 698 + * @wait: ??? 698 699 * 699 700 * Returns pollmask value 700 701 * ··· 709 708 * imply that the operation will succeed, merely that it should be performed 710 709 * and will not block. 711 710 */ 712 - static __poll_t tipc_poll_mask(struct socket *sock, __poll_t events) 711 + static __poll_t tipc_poll(struct file *file, struct socket *sock, 712 + poll_table *wait) 713 713 { 714 714 struct sock *sk = sock->sk; 715 715 struct tipc_sock *tsk = tipc_sk(sk); 716 716 __poll_t revents = 0; 717 + 718 + sock_poll_wait(file, sk_sleep(sk), wait); 717 719 718 720 if (sk->sk_shutdown & RCV_SHUTDOWN) 719 721 revents |= EPOLLRDHUP | EPOLLIN | EPOLLRDNORM; ··· 3037 3033 .socketpair = tipc_socketpair, 3038 3034 .accept = sock_no_accept, 3039 3035 .getname = tipc_getname, 3040 - .poll_mask = tipc_poll_mask, 3036 + .poll = tipc_poll, 3041 3037 .ioctl = tipc_ioctl, 3042 3038 .listen = sock_no_listen, 3043 3039 .shutdown = tipc_shutdown, ··· 3058 3054 .socketpair = tipc_socketpair, 3059 3055 .accept = tipc_accept, 3060 3056 .getname = tipc_getname, 3061 - .poll_mask = tipc_poll_mask, 3057 + .poll = tipc_poll, 3062 3058 .ioctl = tipc_ioctl, 3063 3059 .listen = tipc_listen, 3064 3060 .shutdown = tipc_shutdown, ··· 3079 3075 .socketpair = tipc_socketpair, 3080 3076 .accept = tipc_accept, 3081 3077 .getname = tipc_getname, 3082 - .poll_mask = tipc_poll_mask, 3078 + .poll = tipc_poll, 3083 3079 .ioctl = tipc_ioctl, 3084 3080 .listen = tipc_listen, 3085 3081 .shutdown = tipc_shutdown,
+1 -1
net/tls/tls_main.c
··· 712 712 build_protos(tls_prots[TLSV4], &tcp_prot); 713 713 714 714 tls_sw_proto_ops = inet_stream_ops; 715 - tls_sw_proto_ops.poll_mask = tls_sw_poll_mask; 715 + tls_sw_proto_ops.poll = tls_sw_poll; 716 716 tls_sw_proto_ops.splice_read = tls_sw_splice_read; 717 717 718 718 #ifdef CONFIG_TLS_DEVICE
+10 -9
net/tls/tls_sw.c
··· 919 919 return copied ? : err; 920 920 } 921 921 922 - __poll_t tls_sw_poll_mask(struct socket *sock, __poll_t events) 922 + unsigned int tls_sw_poll(struct file *file, struct socket *sock, 923 + struct poll_table_struct *wait) 923 924 { 925 + unsigned int ret; 924 926 struct sock *sk = sock->sk; 925 927 struct tls_context *tls_ctx = tls_get_ctx(sk); 926 928 struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx); 927 - __poll_t mask; 928 929 929 - /* Grab EPOLLOUT and EPOLLHUP from the underlying socket */ 930 - mask = ctx->sk_poll_mask(sock, events); 930 + /* Grab POLLOUT and POLLHUP from the underlying socket */ 931 + ret = ctx->sk_poll(file, sock, wait); 931 932 932 - /* Clear EPOLLIN bits, and set based on recv_pkt */ 933 - mask &= ~(EPOLLIN | EPOLLRDNORM); 933 + /* Clear POLLIN bits, and set based on recv_pkt */ 934 + ret &= ~(POLLIN | POLLRDNORM); 934 935 if (ctx->recv_pkt) 935 - mask |= EPOLLIN | EPOLLRDNORM; 936 + ret |= POLLIN | POLLRDNORM; 936 937 937 - return mask; 938 + return ret; 938 939 } 939 940 940 941 static int tls_read_size(struct strparser *strp, struct sk_buff *skb) ··· 1195 1194 sk->sk_data_ready = tls_data_ready; 1196 1195 write_unlock_bh(&sk->sk_callback_lock); 1197 1196 1198 - sw_ctx_rx->sk_poll_mask = sk->sk_socket->ops->poll_mask; 1197 + sw_ctx_rx->sk_poll = sk->sk_socket->ops->poll; 1199 1198 1200 1199 strp_check_rcv(&sw_ctx_rx->strp); 1201 1200 }
+19 -11
net/unix/af_unix.c
··· 638 638 static int unix_socketpair(struct socket *, struct socket *); 639 639 static int unix_accept(struct socket *, struct socket *, int, bool); 640 640 static int unix_getname(struct socket *, struct sockaddr *, int); 641 - static __poll_t unix_poll_mask(struct socket *, __poll_t); 642 - static __poll_t unix_dgram_poll_mask(struct socket *, __poll_t); 641 + static __poll_t unix_poll(struct file *, struct socket *, poll_table *); 642 + static __poll_t unix_dgram_poll(struct file *, struct socket *, 643 + poll_table *); 643 644 static int unix_ioctl(struct socket *, unsigned int, unsigned long); 644 645 static int unix_shutdown(struct socket *, int); 645 646 static int unix_stream_sendmsg(struct socket *, struct msghdr *, size_t); ··· 681 680 .socketpair = unix_socketpair, 682 681 .accept = unix_accept, 683 682 .getname = unix_getname, 684 - .poll_mask = unix_poll_mask, 683 + .poll = unix_poll, 685 684 .ioctl = unix_ioctl, 686 685 .listen = unix_listen, 687 686 .shutdown = unix_shutdown, ··· 704 703 .socketpair = unix_socketpair, 705 704 .accept = sock_no_accept, 706 705 .getname = unix_getname, 707 - .poll_mask = unix_dgram_poll_mask, 706 + .poll = unix_dgram_poll, 708 707 .ioctl = unix_ioctl, 709 708 .listen = sock_no_listen, 710 709 .shutdown = unix_shutdown, ··· 726 725 .socketpair = unix_socketpair, 727 726 .accept = unix_accept, 728 727 .getname = unix_getname, 729 - .poll_mask = unix_dgram_poll_mask, 728 + .poll = unix_dgram_poll, 730 729 .ioctl = unix_ioctl, 731 730 .listen = unix_listen, 732 731 .shutdown = unix_shutdown, ··· 2630 2629 return err; 2631 2630 } 2632 2631 2633 - static __poll_t unix_poll_mask(struct socket *sock, __poll_t events) 2632 + static __poll_t unix_poll(struct file *file, struct socket *sock, poll_table *wait) 2634 2633 { 2635 2634 struct sock *sk = sock->sk; 2636 - __poll_t mask = 0; 2635 + __poll_t mask; 2636 + 2637 + sock_poll_wait(file, sk_sleep(sk), wait); 2638 + mask = 0; 2637 2639 2638 2640 /* exceptional events? */
2639 2641 if (sk->sk_err) ··· 2665 2661 return mask; 2666 2662 } 2667 2663 2668 - static __poll_t unix_dgram_poll_mask(struct socket *sock, __poll_t events) 2664 + static __poll_t unix_dgram_poll(struct file *file, struct socket *sock, 2665 + poll_table *wait) 2669 2666 { 2670 2667 struct sock *sk = sock->sk, *other; 2671 - int writable; 2672 - __poll_t mask = 0; 2668 + unsigned int writable; 2669 + __poll_t mask; 2670 + 2671 + sock_poll_wait(file, sk_sleep(sk), wait); 2672 + mask = 0; 2673 2673 2674 2674 /* exceptional events? */ 2675 2675 if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue)) ··· 2699 2691 } 2700 2692 2701 2693 /* No write status requested, avoid expensive OUT tests. */ 2702 - if (!(events & (EPOLLWRBAND|EPOLLWRNORM|EPOLLOUT))) 2694 + if (!(poll_requested_events(wait) & (EPOLLWRBAND|EPOLLWRNORM|EPOLLOUT))) 2703 2695 return mask; 2704 2696 2705 2697 writable = unix_writable(sk);
+13 -6
net/vmw_vsock/af_vsock.c
··· 850 850 return err; 851 851 } 852 852 853 - static __poll_t vsock_poll_mask(struct socket *sock, __poll_t events) 853 + static __poll_t vsock_poll(struct file *file, struct socket *sock, 854 + poll_table *wait) 854 855 { 855 - struct sock *sk = sock->sk; 856 - struct vsock_sock *vsk = vsock_sk(sk); 857 - __poll_t mask = 0; 856 + struct sock *sk; 857 + __poll_t mask; 858 + struct vsock_sock *vsk; 859 + 860 + sk = sock->sk; 861 + vsk = vsock_sk(sk); 862 + 863 + poll_wait(file, sk_sleep(sk), wait); 864 + mask = 0; 858 865 859 866 if (sk->sk_err) 860 867 /* Signify that there has been an error on this socket. */ ··· 1091 1084 .socketpair = sock_no_socketpair, 1092 1085 .accept = sock_no_accept, 1093 1086 .getname = vsock_getname, 1094 - .poll_mask = vsock_poll_mask, 1087 + .poll = vsock_poll, 1095 1088 .ioctl = sock_no_ioctl, 1096 1089 .listen = sock_no_listen, 1097 1090 .shutdown = vsock_shutdown, ··· 1849 1842 .socketpair = sock_no_socketpair, 1850 1843 .accept = vsock_accept, 1851 1844 .getname = vsock_getname, 1852 - .poll_mask = vsock_poll_mask, 1845 + .poll = vsock_poll, 1853 1846 .ioctl = sock_no_ioctl, 1854 1847 .listen = vsock_listen, 1855 1848 .shutdown = vsock_shutdown,
+14 -21
net/wireless/nl80211.c
··· 6329 6329 nl80211_check_s32); 6330 6330 /* 6331 6331 * Check HT operation mode based on 6332 - * IEEE 802.11 2012 8.4.2.59 HT Operation element. 6332 + * IEEE 802.11-2016 9.4.2.57 HT Operation element. 6333 6333 */ 6334 6334 if (tb[NL80211_MESHCONF_HT_OPMODE]) { 6335 6335 ht_opmode = nla_get_u16(tb[NL80211_MESHCONF_HT_OPMODE]); ··· 6339 6339 IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT)) 6340 6340 return -EINVAL; 6341 6341 6342 - if ((ht_opmode & IEEE80211_HT_OP_MODE_NON_GF_STA_PRSNT) && 6343 - (ht_opmode & IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT)) 6344 - return -EINVAL; 6342 + /* NON_HT_STA bit is reserved, but some programs set it */ 6343 + ht_opmode &= ~IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT; 6345 6344 6346 - switch (ht_opmode & IEEE80211_HT_OP_MODE_PROTECTION) { 6347 - case IEEE80211_HT_OP_MODE_PROTECTION_NONE: 6348 - case IEEE80211_HT_OP_MODE_PROTECTION_20MHZ: 6349 - if (ht_opmode & IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT) 6350 - return -EINVAL; 6351 - break; 6352 - case IEEE80211_HT_OP_MODE_PROTECTION_NONMEMBER: 6353 - case IEEE80211_HT_OP_MODE_PROTECTION_NONHT_MIXED: 6354 - if (!(ht_opmode & IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT)) 6355 - return -EINVAL; 6356 - break; 6357 - } 6358 6345 cfg->ht_opmode = ht_opmode; 6359 6346 mask |= (1 << (NL80211_MESHCONF_HT_OPMODE - 1)); 6360 6347 } ··· 11055 11068 rem) { 11056 11069 u8 *mask_pat; 11057 11070 11058 - nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat, 11059 - nl80211_packet_pattern_policy, 11060 - info->extack); 11071 + err = nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat, 11072 + nl80211_packet_pattern_policy, 11073 + info->extack); 11074 + if (err) 11075 + goto error; 11076 + 11061 11077 err = -EINVAL; 11062 11078 if (!pat_tb[NL80211_PKTPAT_MASK] || 11063 11079 !pat_tb[NL80211_PKTPAT_PATTERN]) ··· 11309 11319 rem) { 11310 11320 u8 *mask_pat; 11311 11321 11312 - nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat, 11313 - nl80211_packet_pattern_policy, NULL); 11322 + err = nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat,
11323 + nl80211_packet_pattern_policy, NULL); 11324 + if (err) 11325 + return err; 11326 + 11314 11327 if (!pat_tb[NL80211_PKTPAT_MASK] || 11315 11328 !pat_tb[NL80211_PKTPAT_PATTERN]) 11316 11329 return -EINVAL;
+1 -1
net/x25/af_x25.c
··· 1750 1750 .socketpair = sock_no_socketpair, 1751 1751 .accept = x25_accept, 1752 1752 .getname = x25_getname, 1753 - .poll_mask = datagram_poll_mask, 1753 + .poll = datagram_poll, 1754 1754 .ioctl = x25_ioctl, 1755 1755 #ifdef CONFIG_COMPAT 1756 1756 .compat_ioctl = compat_x25_ioctl,
+4 -3
net/xdp/xsk.c
··· 303 303 return (xs->zc) ? xsk_zc_xmit(sk) : xsk_generic_xmit(sk, m, total_len); 304 304 } 305 305 306 - static __poll_t xsk_poll_mask(struct socket *sock, __poll_t events) 306 + static unsigned int xsk_poll(struct file *file, struct socket *sock, 307 + struct poll_table_struct *wait) 307 308 { 308 - __poll_t mask = datagram_poll_mask(sock, events); 309 + unsigned int mask = datagram_poll(file, sock, wait); 309 310 struct sock *sk = sock->sk; 310 311 struct xdp_sock *xs = xdp_sk(sk); 311 312 ··· 697 696 .socketpair = sock_no_socketpair, 698 697 .accept = sock_no_accept, 699 698 .getname = sock_no_getname, 700 - .poll_mask = xsk_poll_mask, 699 + .poll = xsk_poll, 701 700 .ioctl = sock_no_ioctl, 702 701 .listen = sock_no_listen, 703 702 .shutdown = sock_no_shutdown,
+4 -4
samples/bpf/xdp_fwd_kern.c
··· 48 48 struct ethhdr *eth = data; 49 49 struct ipv6hdr *ip6h; 50 50 struct iphdr *iph; 51 - int out_index; 52 51 u16 h_proto; 53 52 u64 nh_off; 53 + int rc; 54 54 55 55 nh_off = sizeof(*eth); 56 56 if (data + nh_off > data_end) ··· 101 101 102 102 fib_params.ifindex = ctx->ingress_ifindex; 103 103 104 - out_index = bpf_fib_lookup(ctx, &fib_params, sizeof(fib_params), flags); 104 + rc = bpf_fib_lookup(ctx, &fib_params, sizeof(fib_params), flags); 105 105 106 106 /* verify egress index has xdp support 107 107 * TO-DO bpf_map_lookup_elem(&tx_port, &key) fails with ··· 109 109 * NOTE: without verification that egress index supports XDP 110 110 * forwarding packets are dropped. 111 111 */ 112 - if (out_index > 0) { 112 + if (rc == 0) { 113 113 if (h_proto == htons(ETH_P_IP)) 114 114 ip_decrease_ttl(iph); 115 115 else if (h_proto == htons(ETH_P_IPV6)) ··· 117 117 118 118 memcpy(eth->h_dest, fib_params.dmac, ETH_ALEN); 119 119 memcpy(eth->h_source, fib_params.smac, ETH_ALEN); 120 - return bpf_redirect_map(&tx_port, out_index, 0); 120 + return bpf_redirect_map(&tx_port, fib_params.ifindex, 0); 121 121 } 122 122 123 123 return XDP_PASS;
+1 -1
scripts/cc-can-link.sh
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 - cat << "END" | $@ -x c - -o /dev/null >/dev/null 2>&1 && echo "y" 4 + cat << "END" | $@ -x c - -o /dev/null >/dev/null 2>&1 5 5 #include <stdio.h> 6 6 int main(void) 7 7 {
-6
scripts/checkpatch.pl
··· 2606 2606 "A patch subject line should describe the change not the tool that found it\n" . $herecurr); 2607 2607 } 2608 2608 2609 - # Check for old stable address 2610 - if ($line =~ /^\s*cc:\s*.*<?\bstable\@kernel\.org\b>?.*$/i) { 2611 - ERROR("STABLE_ADDRESS", 2612 - "The 'stable' address should be 'stable\@vger.kernel.org'\n" . $herecurr); 2613 - } 2614 - 2615 2609 # Check for unwanted Gerrit info 2616 2610 if ($in_commit_log && $line =~ /^\s*change-id:/i) { 2617 2611 ERROR("GERRIT_CHANGE_ID",
+1 -1
scripts/gcc-x86_64-has-stack-protector.sh
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 - echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -O0 -mcmodel=kernel -fno-PIE -fstack-protector - -o - 2> /dev/null | grep -q "%gs" 4 + echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -m64 -O0 -mcmodel=kernel -fno-PIE -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
+3
scripts/kconfig/expr.h
··· 171 171 * config BAZ 172 172 * int "BAZ Value" 173 173 * range 1..255 174 + * 175 + * Please, also check zconf.y:print_symbol() when modifying the 176 + * list of property types! 174 177 */ 175 178 enum prop_type { 176 179 P_UNKNOWN,
+1 -1
scripts/kconfig/preprocess.c
··· 156 156 nread--; 157 157 158 158 /* remove trailing new lines */ 159 - while (buf[nread - 1] == '\n') 159 + while (nread > 0 && buf[nread - 1] == '\n') 160 160 nread--; 161 161 162 162 buf[nread] = 0;
+6 -2
scripts/kconfig/zconf.y
··· 31 31 static struct menu *current_menu, *current_entry; 32 32 33 33 %} 34 - %expect 32 34 + %expect 31 35 35 36 36 %union 37 37 { ··· 337 337 338 338 /* if entry */ 339 339 340 - if_entry: T_IF expr nl 340 + if_entry: T_IF expr T_EOL 341 341 { 342 342 printd(DEBUG_PARSE, "%s:%d:if\n", zconf_curname(), zconf_lineno()); 343 343 menu_add_entry(NULL); ··· 716 716 fputs( " menu ", out); 717 717 print_quoted_string(out, prop->text); 718 718 fputc('\n', out); 719 + break; 720 + case P_SYMBOL: 721 + fputs( " symbol ", out); 722 + fprintf(out, "%s\n", prop->sym->name); 719 723 break; 720 724 default: 721 725 fprintf(out, " unknown prop %d!\n", prop->type);
+4 -2
security/keys/dh.c
··· 142 142 * The src pointer is defined as Z || other info where Z is the shared secret 143 143 * from DH and other info is an arbitrary string (see SP800-56A section 144 144 * 5.8.1.2). 145 + * 146 + * 'dlen' must be a multiple of the digest size. 145 147 */ 146 148 static int kdf_ctr(struct kdf_sdesc *sdesc, const u8 *src, unsigned int slen, 147 149 u8 *dst, unsigned int dlen, unsigned int zlen) ··· 207 205 { 208 206 uint8_t *outbuf = NULL; 209 207 int ret; 210 - size_t outbuf_len = round_up(buflen, 211 - crypto_shash_digestsize(sdesc->shash.tfm)); 208 + size_t outbuf_len = roundup(buflen, 209 + crypto_shash_digestsize(sdesc->shash.tfm)); 212 210 213 211 outbuf = kmalloc(outbuf_len, GFP_KERNEL); 214 212 if (!outbuf) {
+33 -45
security/selinux/selinuxfs.c
··· 441 441 static ssize_t sel_read_policy(struct file *filp, char __user *buf, 442 442 size_t count, loff_t *ppos) 443 443 { 444 - struct selinux_fs_info *fsi = file_inode(filp)->i_sb->s_fs_info; 445 444 struct policy_load_memory *plm = filp->private_data; 446 445 int ret; 447 - 448 - mutex_lock(&fsi->mutex); 449 446 450 447 ret = avc_has_perm(&selinux_state, 451 448 current_sid(), SECINITSID_SECURITY, 452 449 SECCLASS_SECURITY, SECURITY__READ_POLICY, NULL); 453 450 if (ret) 454 - goto out; 451 + return ret; 455 452 456 - ret = simple_read_from_buffer(buf, count, ppos, plm->data, plm->len); 457 - out: 458 - mutex_unlock(&fsi->mutex); 459 - return ret; 453 + return simple_read_from_buffer(buf, count, ppos, plm->data, plm->len); 460 454 } 461 455 462 456 static vm_fault_t sel_mmap_policy_fault(struct vm_fault *vmf) ··· 1182 1188 ret = -EINVAL; 1183 1189 if (index >= fsi->bool_num || strcmp(name, 1184 1190 fsi->bool_pending_names[index])) 1185 - goto out; 1191 + goto out_unlock; 1186 1192 1187 1193 ret = -ENOMEM; 1188 1194 page = (char *)get_zeroed_page(GFP_KERNEL); 1189 1195 if (!page) 1190 - goto out; 1196 + goto out_unlock; 1191 1197 1192 1198 cur_enforcing = security_get_bool_value(fsi->state, index); 1193 1199 if (cur_enforcing < 0) { 1194 1200 ret = cur_enforcing; 1195 - goto out; 1201 + goto out_unlock; 1196 1202 } 1197 1203 length = scnprintf(page, PAGE_SIZE, "%d %d", cur_enforcing, 1198 1204 fsi->bool_pending_values[index]); 1199 - ret = simple_read_from_buffer(buf, count, ppos, page, length); 1200 - out: 1201 1205 mutex_unlock(&fsi->mutex); 1206 + ret = simple_read_from_buffer(buf, count, ppos, page, length); 1207 + out_free: 1202 1208 free_page((unsigned long)page); 1203 1209 return ret; 1210 + 1211 + out_unlock: 1212 + mutex_unlock(&fsi->mutex); 1213 + goto out_free; 1204 1214 } 1205 1215 1206 1216 static ssize_t sel_write_bool(struct file *filep, const char __user *buf, ··· 1216 1218 int new_value; 1217 1219 unsigned index = file_inode(filep)->i_ino & 
SEL_INO_MASK; 1218 1220 const char *name = filep->f_path.dentry->d_name.name; 1221 + 1222 + if (count >= PAGE_SIZE) 1223 + return -ENOMEM; 1224 + 1225 + /* No partial writes. */ 1226 + if (*ppos != 0) 1227 + return -EINVAL; 1228 + 1229 + page = memdup_user_nul(buf, count); 1230 + if (IS_ERR(page)) 1231 + return PTR_ERR(page); 1219 1232 1220 1233 mutex_lock(&fsi->mutex); 1221 1234 ··· 1241 1232 if (index >= fsi->bool_num || strcmp(name, 1242 1233 fsi->bool_pending_names[index])) 1243 1234 goto out; 1244 - 1245 - length = -ENOMEM; 1246 - if (count >= PAGE_SIZE) 1247 - goto out; 1248 - 1249 - /* No partial writes. */ 1250 - length = -EINVAL; 1251 - if (*ppos != 0) 1252 - goto out; 1253 - 1254 - page = memdup_user_nul(buf, count); 1255 - if (IS_ERR(page)) { 1256 - length = PTR_ERR(page); 1257 - page = NULL; 1258 - goto out; 1259 - } 1260 1235 1261 1236 length = -EINVAL; 1262 1237 if (sscanf(page, "%d", &new_value) != 1) ··· 1273 1280 ssize_t length; 1274 1281 int new_value; 1275 1282 1283 + if (count >= PAGE_SIZE) 1284 + return -ENOMEM; 1285 + 1286 + /* No partial writes. */ 1287 + if (*ppos != 0) 1288 + return -EINVAL; 1289 + 1290 + page = memdup_user_nul(buf, count); 1291 + if (IS_ERR(page)) 1292 + return PTR_ERR(page); 1293 + 1276 1294 mutex_lock(&fsi->mutex); 1277 1295 1278 1296 length = avc_has_perm(&selinux_state, ··· 1292 1288 NULL); 1293 1289 if (length) 1294 1290 goto out; 1295 - 1296 - length = -ENOMEM; 1297 - if (count >= PAGE_SIZE) 1298 - goto out; 1299 - 1300 - /* No partial writes. */ 1301 - length = -EINVAL; 1302 - if (*ppos != 0) 1303 - goto out; 1304 - 1305 - page = memdup_user_nul(buf, count); 1306 - if (IS_ERR(page)) { 1307 - length = PTR_ERR(page); 1308 - page = NULL; 1309 - goto out; 1310 - } 1311 1291 1312 1292 length = -EINVAL; 1313 1293 if (sscanf(page, "%d", &new_value) != 1)
+1
security/smack/smack_lsm.c
··· 2296 2296 struct smack_known *skp = smk_of_task_struct(p); 2297 2297 2298 2298 isp->smk_inode = skp; 2299 + isp->smk_flags |= SMK_INODE_INSTANT; 2299 2300 } 2300 2301 2301 2302 /*
+2 -1
sound/core/seq/seq_clientmgr.c
··· 2004 2004 struct snd_seq_client *cptr = NULL; 2005 2005 2006 2006 /* search for next client */ 2007 - info->client++; 2007 + if (info->client < INT_MAX) 2008 + info->client++; 2008 2009 if (info->client < 0) 2009 2010 info->client = 0; 2010 2011 for (; info->client < SNDRV_SEQ_MAX_CLIENTS; info->client++) {
+1 -1
sound/core/timer.c
··· 1520 1520 } else { 1521 1521 if (id.subdevice < 0) 1522 1522 id.subdevice = 0; 1523 - else 1523 + else if (id.subdevice < INT_MAX) 1524 1524 id.subdevice++; 1525 1525 } 1526 1526 }
+3 -2
sound/pci/hda/hda_codec.c
··· 2899 2899 list_for_each_entry(pcm, &codec->pcm_list_head, list) 2900 2900 snd_pcm_suspend_all(pcm->pcm); 2901 2901 state = hda_call_codec_suspend(codec); 2902 - if (codec_has_clkstop(codec) && codec_has_epss(codec) && 2903 - (state & AC_PWRST_CLK_STOP_OK)) 2902 + if (codec->link_down_at_suspend || 2903 + (codec_has_clkstop(codec) && codec_has_epss(codec) && 2904 + (state & AC_PWRST_CLK_STOP_OK))) 2904 2905 snd_hdac_codec_link_down(&codec->core); 2905 2906 snd_hdac_link_power(&codec->core, false); 2906 2907 return 0;
+1
sound/pci/hda/hda_codec.h
··· 258 258 unsigned int power_save_node:1; /* advanced PM for each widget */ 259 259 unsigned int auto_runtime_pm:1; /* enable automatic codec runtime pm */ 260 260 unsigned int force_pin_prefix:1; /* Add location prefix */ 261 + unsigned int link_down_at_suspend:1; /* link down at runtime suspend */ 261 262 #ifdef CONFIG_PM 262 263 unsigned long power_on_acct; 263 264 unsigned long power_off_acct;
+23 -41
sound/pci/hda/patch_ca0132.c
··· 991 991 enum { 992 992 QUIRK_NONE, 993 993 QUIRK_ALIENWARE, 994 + QUIRK_ALIENWARE_M17XR4, 994 995 QUIRK_SBZ, 995 996 QUIRK_R3DI, 996 997 }; ··· 1041 1040 }; 1042 1041 1043 1042 static const struct snd_pci_quirk ca0132_quirks[] = { 1043 + SND_PCI_QUIRK(0x1028, 0x057b, "Alienware M17x R4", QUIRK_ALIENWARE_M17XR4), 1044 1044 SND_PCI_QUIRK(0x1028, 0x0685, "Alienware 15 2015", QUIRK_ALIENWARE), 1045 1045 SND_PCI_QUIRK(0x1028, 0x0688, "Alienware 17 2015", QUIRK_ALIENWARE), 1046 1046 SND_PCI_QUIRK(0x1028, 0x0708, "Alienware 15 R2 2016", QUIRK_ALIENWARE), ··· 5665 5663 * I think this has to do with the pin for rear surround being 0x11, 5666 5664 * and the center/lfe being 0x10. Usually the pin order is the opposite. 5667 5665 */ 5668 - const struct snd_pcm_chmap_elem ca0132_alt_chmaps[] = { 5666 + static const struct snd_pcm_chmap_elem ca0132_alt_chmaps[] = { 5669 5667 { .channels = 2, 5670 5668 .map = { SNDRV_CHMAP_FL, SNDRV_CHMAP_FR } }, 5671 5669 { .channels = 4, ··· 5968 5966 info->stream[SNDRV_PCM_STREAM_CAPTURE].nid = spec->adcs[0]; 5969 5967 5970 5968 /* With the DSP enabled, desktops don't use this ADC. */
5971 - if (spec->use_alt_functions) { 5969 + if (!spec->use_alt_functions) { 5972 5970 info = snd_hda_codec_pcm_new(codec, "CA0132 Analog Mic-In2"); 5973 5971 if (!info) 5974 5972 return -ENOMEM; ··· 6132 6130 * Bit 6: set to select Data2, clear for Data1 6133 6131 * Bit 7: set to enable DMic, clear for AMic 6134 6132 */ 6135 - val = 0x23; 6133 + if (spec->quirk == QUIRK_ALIENWARE_M17XR4) 6134 + val = 0x33; 6135 + else 6136 + val = 0x23; 6136 6137 /* keep a copy of dmic ctl val for enable/disable dmic purpuse */ 6137 6138 spec->dmic_ctl = val; 6138 6139 snd_hda_codec_write(codec, spec->input_pins[0], 0, ··· 7228 7223 7229 7224 snd_hda_sequence_write(codec, spec->base_init_verbs); 7230 7225 7231 - if (spec->quirk != QUIRK_NONE) 7226 + if (spec->use_alt_functions) 7232 7227 ca0132_alt_init(codec); 7233 7228 7234 7229 ca0132_download_dsp(codec); ··· 7242 7237 case QUIRK_R3DI: 7243 7238 r3di_setup_defaults(codec); 7244 7239 break; 7245 - case QUIRK_NONE: 7246 - case QUIRK_ALIENWARE: 7240 + case QUIRK_SBZ: 7241 + break; 7242 + default: 7247 7243 ca0132_setup_defaults(codec); 7248 7244 ca0132_init_analog_mic2(codec); 7249 7245 ca0132_init_dmic(codec); ··· 7349 7343 static void ca0132_config(struct hda_codec *codec) 7350 7344 { 7351 7345 struct ca0132_spec *spec = codec->spec; 7352 - struct auto_pin_cfg *cfg = &spec->autocfg; 7353 7346 7354 7347 spec->dacs[0] = 0x2; 7355 7348 spec->dacs[1] = 0x3; ··· 7410 7405 /* SPDIF I/O */ 7411 7406 spec->dig_out = 0x05; 7412 7407 spec->multiout.dig_out_nid = spec->dig_out; 7413 - cfg->dig_out_pins[0] = 0x0c; 7414 - cfg->dig_outs = 1; 7415 - cfg->dig_out_type[0] = HDA_PCM_TYPE_SPDIF; 7416 7408 spec->dig_in = 0x09; 7417 - cfg->dig_in_pin = 0x0e; 7418 - cfg->dig_in_type = HDA_PCM_TYPE_SPDIF; 7419 7409 break; 7420 7410 case QUIRK_R3DI: 7421 7411 codec_dbg(codec, "%s: QUIRK_R3DI applied.\n", __func__); ··· 7438 7438 /* SPDIF I/O */ 7439 7439 spec->dig_out = 0x05; 7440 7440 spec->multiout.dig_out_nid = spec->dig_out; 7441 - cfg->dig_out_pins[0] = 0x0c;
7442 - cfg->dig_outs = 1; 7443 - cfg->dig_out_type[0] = HDA_PCM_TYPE_SPDIF; 7444 7441 break; 7445 7442 default: 7446 7443 spec->num_outputs = 2; ··· 7460 7463 /* SPDIF I/O */ 7461 7464 spec->dig_out = 0x05; 7462 7465 spec->multiout.dig_out_nid = spec->dig_out; 7463 - cfg->dig_out_pins[0] = 0x0c; 7464 - cfg->dig_outs = 1; 7465 - cfg->dig_out_type[0] = HDA_PCM_TYPE_SPDIF; 7466 7466 spec->dig_in = 0x09; 7467 - cfg->dig_in_pin = 0x0e; 7468 - cfg->dig_in_type = HDA_PCM_TYPE_SPDIF; 7469 7467 break; 7470 7468 } 7471 7469 } ··· 7468 7476 static int ca0132_prepare_verbs(struct hda_codec *codec) 7469 7477 { 7470 7478 /* Verbs + terminator (an empty element) */ 7471 - #define NUM_SPEC_VERBS 4 7479 + #define NUM_SPEC_VERBS 2 7472 7480 struct ca0132_spec *spec = codec->spec; 7473 7481 7474 7482 spec->chip_init_verbs = ca0132_init_verbs0; ··· 7480 7488 if (!spec->spec_init_verbs) 7481 7489 return -ENOMEM; 7482 7490 7483 - /* HP jack autodetection */ 7484 - spec->spec_init_verbs[0].nid = spec->unsol_tag_hp; 7485 - spec->spec_init_verbs[0].param = AC_VERB_SET_UNSOLICITED_ENABLE; 7486 - spec->spec_init_verbs[0].verb = AC_USRSP_EN | spec->unsol_tag_hp; 7487 - 7488 - /* MIC1 jack autodetection */ 7489 - spec->spec_init_verbs[1].nid = spec->unsol_tag_amic1; 7490 - spec->spec_init_verbs[1].param = AC_VERB_SET_UNSOLICITED_ENABLE; 7491 - spec->spec_init_verbs[1].verb = AC_USRSP_EN | spec->unsol_tag_amic1; 7492 - 7493 7491 /* config EAPD */ 7494 - spec->spec_init_verbs[2].nid = 0x0b; 7495 - spec->spec_init_verbs[2].param = 0x78D; 7496 - spec->spec_init_verbs[2].verb = 0x00; 7492 + spec->spec_init_verbs[0].nid = 0x0b; 7493 + spec->spec_init_verbs[0].param = 0x78D; 7494 + spec->spec_init_verbs[0].verb = 0x00; 7497 7495 7498 7496 /* Previously commented configuration */ 7499 7497 /* 7500 - spec->spec_init_verbs[3].nid = 0x0b; 7501 - spec->spec_init_verbs[3].param = AC_VERB_SET_EAPD_BTLENABLE; 7498 + spec->spec_init_verbs[2].nid = 0x0b; 7499 + spec->spec_init_verbs[2].param = AC_VERB_SET_EAPD_BTLENABLE;
7500 + spec->spec_init_verbs[2].verb = 0x02; 7501 + 7502 + spec->spec_init_verbs[3].nid = 0x10; 7503 + spec->spec_init_verbs[3].param = 0x78D; 7502 7504 spec->spec_init_verbs[3].verb = 0x02; 7503 7505 7504 7506 spec->spec_init_verbs[4].nid = 0x10; 7505 - spec->spec_init_verbs[4].param = 0x78D; 7507 + spec->spec_init_verbs[4].param = AC_VERB_SET_EAPD_BTLENABLE; 7506 7508 spec->spec_init_verbs[4].verb = 0x02; 7507 - 7508 - spec->spec_init_verbs[5].nid = 0x10; 7509 - spec->spec_init_verbs[5].param = AC_VERB_SET_EAPD_BTLENABLE; 7510 - spec->spec_init_verbs[5].verb = 0x02; 7511 7509 */ 7512 7510 7513 7511 /* Terminator: spec->spec_init_verbs[NUM_SPEC_VERBS-1] */
+5
sound/pci/hda/patch_hdmi.c
··· 3741 3741 3742 3742 spec->chmap.channels_max = max(spec->chmap.channels_max, 8u); 3743 3743 3744 + /* AMD GPUs have neither EPSS nor CLKSTOP bits, hence preventing 3745 + * the link-down as is. Tell the core to allow it. 3746 + */ 3747 + codec->link_down_at_suspend = 1; 3748 + 3744 3749 return 0; 3745 3750 } 3746 3751
+17 -3
sound/pci/hda/patch_realtek.c
··· 2545 2545 SND_PCI_QUIRK(0x10cf, 0x1397, "Fujitsu Lifebook S7110", ALC262_FIXUP_FSC_S7110), 2546 2546 SND_PCI_QUIRK(0x10cf, 0x142d, "Fujitsu Lifebook E8410", ALC262_FIXUP_BENQ), 2547 2547 SND_PCI_QUIRK(0x10f1, 0x2915, "Tyan Thunder n6650W", ALC262_FIXUP_TYAN), 2548 + SND_PCI_QUIRK(0x1734, 0x1141, "FSC ESPRIMO U9210", ALC262_FIXUP_FSC_H270), 2548 2549 SND_PCI_QUIRK(0x1734, 0x1147, "FSC Celsius H270", ALC262_FIXUP_FSC_H270), 2549 2550 SND_PCI_QUIRK(0x17aa, 0x384e, "Lenovo 3000", ALC262_FIXUP_LENOVO_3000), 2550 2551 SND_PCI_QUIRK(0x17ff, 0x0560, "Benq ED8", ALC262_FIXUP_BENQ), ··· 4996 4995 struct alc_spec *spec = codec->spec; 4997 4996 4998 4997 if (action == HDA_FIXUP_ACT_PRE_PROBE) { 4999 - spec->shutup = alc_no_shutup; /* reduce click noise */ 5000 4998 spec->reboot_notify = alc_d3_at_reboot; /* reduce noise */ 5001 4999 spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP; 5002 5000 codec->power_save_node = 0; /* avoid click noises */ ··· 5393 5393 5394 5394 /* for hda_fixup_thinkpad_acpi() */ 5395 5395 #include "thinkpad_helper.c" 5396 + 5397 + static void alc_fixup_thinkpad_acpi(struct hda_codec *codec, 5398 + const struct hda_fixup *fix, int action) 5399 + { 5400 + alc_fixup_no_shutup(codec, fix, action); /* reduce click noise */ 5401 + hda_fixup_thinkpad_acpi(codec, fix, action); 5402 + } 5396 5403 5397 5404 /* for dell wmi mic mute led */ 5398 5405 #include "dell_wmi_helper.c" ··· 5953 5946 }, 5954 5947 [ALC269_FIXUP_THINKPAD_ACPI] = { 5955 5948 .type = HDA_FIXUP_FUNC, 5956 - .v.func = hda_fixup_thinkpad_acpi, 5949 + .v.func = alc_fixup_thinkpad_acpi, 5957 5950 .chained = true, 5958 5951 .chain_id = ALC269_FIXUP_SKU_IGNORE, 5959 5952 }, ··· 6610 6603 SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), 6611 6604 SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), 6612 6605 SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6606 + SND_PCI_QUIRK(0x17aa, 0x312a, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
6613 6607 SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6614 - SND_PCI_QUIRK(0x17aa, 0x3138, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6608 + SND_PCI_QUIRK(0x17aa, 0x3136, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6615 6609 SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6616 6610 SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), 6617 6611 SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), ··· 6790 6782 {0x14, 0x90170110}, 6791 6783 {0x19, 0x02a11030}, 6792 6784 {0x21, 0x02211020}), 6785 + SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION, 6786 + {0x14, 0x90170110}, 6787 + {0x19, 0x02a11030}, 6788 + {0x1a, 0x02a11040}, 6789 + {0x1b, 0x01014020}, 6790 + {0x21, 0x0221101f}), 6793 6791 SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 6794 6792 {0x12, 0x90a60140}, 6795 6793 {0x14, 0x90170110},
+1
sound/pci/lx6464es/lx6464es.c
··· 1018 1018 chip->port_dsp_bar = pci_ioremap_bar(pci, 2); 1019 1019 if (!chip->port_dsp_bar) { 1020 1020 dev_err(card->dev, "cannot remap PCI memory region\n"); 1021 + err = -ENOMEM; 1021 1022 goto remap_pci_failed; 1022 1023 } 1023 1024
+1
tools/arch/arm/include/uapi/asm/kvm.h
··· 91 91 #define KVM_VGIC_V3_ADDR_TYPE_DIST 2 92 92 #define KVM_VGIC_V3_ADDR_TYPE_REDIST 3 93 93 #define KVM_VGIC_ITS_ADDR_TYPE 4 94 + #define KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION 5 94 95 95 96 #define KVM_VGIC_V3_DIST_SIZE SZ_64K 96 97 #define KVM_VGIC_V3_REDIST_SIZE (2 * SZ_64K)
+1
tools/arch/arm64/include/uapi/asm/kvm.h
··· 91 91 #define KVM_VGIC_V3_ADDR_TYPE_DIST 2 92 92 #define KVM_VGIC_V3_ADDR_TYPE_REDIST 3 93 93 #define KVM_VGIC_ITS_ADDR_TYPE 4 94 + #define KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION 5 94 95 95 96 #define KVM_VGIC_V3_DIST_SIZE SZ_64K 96 97 #define KVM_VGIC_V3_REDIST_SIZE (2 * SZ_64K)
+1
tools/arch/powerpc/include/uapi/asm/kvm.h
··· 633 633 #define KVM_REG_PPC_PSSCR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xbd) 634 634 635 635 #define KVM_REG_PPC_DEC_EXPIRY (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xbe) 636 + #define KVM_REG_PPC_ONLINE (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xbf) 636 637 637 638 /* Transactional Memory checkpointed state: 638 639 * This is all GPRs, all VSX regs and a subset of SPRs
+1
tools/arch/powerpc/include/uapi/asm/unistd.h
··· 398 398 #define __NR_pkey_alloc 384 399 399 #define __NR_pkey_free 385 400 400 #define __NR_pkey_mprotect 386 401 + #define __NR_rseq 387 401 402 402 403 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
+2
tools/arch/x86/include/asm/cpufeatures.h
··· 282 282 #define X86_FEATURE_AMD_IBPB (13*32+12) /* "" Indirect Branch Prediction Barrier */ 283 283 #define X86_FEATURE_AMD_IBRS (13*32+14) /* "" Indirect Branch Restricted Speculation */ 284 284 #define X86_FEATURE_AMD_STIBP (13*32+15) /* "" Single Thread Indirect Branch Predictors */ 285 + #define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */ 285 286 #define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */ 287 + #define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */ 286 288 287 289 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */ 288 290 #define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */
+8 -4
tools/bpf/bpftool/prog.c
··· 694 694 return -1; 695 695 } 696 696 697 - if (do_pin_fd(prog_fd, argv[1])) { 698 - p_err("failed to pin program"); 699 - return -1; 700 - } 697 + if (do_pin_fd(prog_fd, argv[1])) 698 + goto err_close_obj; 701 699 702 700 if (json_output) 703 701 jsonw_null(json_wtr); 704 702 703 + bpf_object__close(obj); 704 + 705 705 return 0; 706 + 707 + err_close_obj: 708 + bpf_object__close(obj); 709 + return -1; 706 710 } 707 711 708 712 static int do_help(int argc, char **argv)
+7
tools/include/uapi/drm/drm.h
··· 680 680 */ 681 681 #define DRM_CLIENT_CAP_ATOMIC 3 682 682 683 + /** 684 + * DRM_CLIENT_CAP_ASPECT_RATIO 685 + * 686 + * If set to 1, the DRM core will provide aspect ratio information in modes. 687 + */ 688 + #define DRM_CLIENT_CAP_ASPECT_RATIO 4 689 + 683 690 /** DRM_IOCTL_SET_CLIENT_CAP ioctl argument type */ 684 691 struct drm_set_client_cap { 685 692 __u64 capability;
+1 -1
tools/include/uapi/linux/bpf.h
··· 2630 2630 union { 2631 2631 /* inputs to lookup */ 2632 2632 __u8 tos; /* AF_INET */ 2633 - __be32 flowlabel; /* AF_INET6 */ 2633 + __be32 flowinfo; /* AF_INET6, flow_label + priority */ 2634 2634 2635 2635 /* output: metric of fib result (IPv4/IPv6 only) */ 2636 2636 __u32 rt_metric;
+2
tools/include/uapi/linux/if_link.h
··· 333 333 IFLA_BRPORT_BCAST_FLOOD, 334 334 IFLA_BRPORT_GROUP_FWD_MASK, 335 335 IFLA_BRPORT_NEIGH_SUPPRESS, 336 + IFLA_BRPORT_ISOLATED, 336 337 __IFLA_BRPORT_MAX 337 338 }; 338 339 #define IFLA_BRPORT_MAX (__IFLA_BRPORT_MAX - 1) ··· 517 516 IFLA_VXLAN_COLLECT_METADATA, 518 517 IFLA_VXLAN_LABEL, 519 518 IFLA_VXLAN_GPE, 519 + IFLA_VXLAN_TTL_INHERIT, 520 520 __IFLA_VXLAN_MAX 521 521 }; 522 522 #define IFLA_VXLAN_MAX (__IFLA_VXLAN_MAX - 1)
+1
tools/include/uapi/linux/kvm.h
··· 948 948 #define KVM_CAP_S390_BPB 152 949 949 #define KVM_CAP_GET_MSR_FEATURES 153 950 950 #define KVM_CAP_HYPERV_EVENTFD 154 951 + #define KVM_CAP_HYPERV_TLBFLUSH 155 951 952 952 953 #ifdef KVM_CAP_IRQ_ROUTING 953 954
+1 -1
tools/perf/arch/powerpc/util/skip-callchain-idx.c
··· 243 243 u64 ip; 244 244 u64 skip_slot = -1; 245 245 246 - if (chain->nr < 3) 246 + if (!chain || chain->nr < 3) 247 247 return skip_slot; 248 248 249 249 ip = chain->ips[2];
+2
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
··· 341 341 330 common pkey_alloc __x64_sys_pkey_alloc 342 342 331 common pkey_free __x64_sys_pkey_free 343 343 332 common statx __x64_sys_statx 344 + 333 common io_pgetevents __x64_sys_io_pgetevents 345 + 334 common rseq __x64_sys_rseq 344 346 345 347 # 346 348 # x32-specific system call numbers start at 512 to avoid cache impact
+3 -2
tools/perf/bench/numa.c
··· 1098 1098 u8 *global_data; 1099 1099 u8 *process_data; 1100 1100 u8 *thread_data; 1101 - u64 bytes_done; 1101 + u64 bytes_done, secs; 1102 1102 long work_done; 1103 1103 u32 l; 1104 1104 struct rusage rusage; ··· 1254 1254 timersub(&stop, &start0, &diff); 1255 1255 td->runtime_ns = diff.tv_sec * NSEC_PER_SEC; 1256 1256 td->runtime_ns += diff.tv_usec * NSEC_PER_USEC; 1257 - td->speed_gbs = bytes_done / (td->runtime_ns / NSEC_PER_SEC) / 1e9; 1257 + secs = td->runtime_ns / NSEC_PER_SEC; 1258 + td->speed_gbs = secs ? bytes_done / secs / 1e9 : 0; 1258 1259 1259 1260 getrusage(RUSAGE_THREAD, &rusage); 1260 1261 td->system_time_ns = rusage.ru_stime.tv_sec * NSEC_PER_SEC;
+10 -1
tools/perf/builtin-annotate.c
··· 283 283 return ret; 284 284 } 285 285 286 + static int process_feature_event(struct perf_tool *tool, 287 + union perf_event *event, 288 + struct perf_session *session) 289 + { 290 + if (event->feat.feat_id < HEADER_LAST_FEATURE) 291 + return perf_event__process_feature(tool, event, session); 292 + return 0; 293 + } 294 + 286 295 static int hist_entry__tty_annotate(struct hist_entry *he, 287 296 struct perf_evsel *evsel, 288 297 struct perf_annotate *ann) ··· 480 471 .attr = perf_event__process_attr, 481 472 .build_id = perf_event__process_build_id, 482 473 .tracing_data = perf_event__process_tracing_data, 483 - .feature = perf_event__process_feature, 474 + .feature = process_feature_event, 484 475 .ordered_events = true, 485 476 .ordering_requires_timestamps = true, 486 477 },
+2 -1
tools/perf/builtin-report.c
··· 217 217 } 218 218 219 219 /* 220 - * All features are received, we can force the 220 + * (feat_id = HEADER_LAST_FEATURE) is the end marker which 221 + * means all features are received, now we can force the 221 222 * group if needed. 222 223 */ 223 224 setup_forced_leader(rep, session->evlist);
+27 -3
tools/perf/builtin-script.c
··· 1834 1834 struct perf_evlist *evlist; 1835 1835 struct perf_evsel *evsel, *pos; 1836 1836 int err; 1837 + static struct perf_evsel_script *es; 1837 1838 1838 1839 err = perf_event__process_attr(tool, event, pevlist); 1839 1840 if (err) ··· 1842 1841 1843 1842 evlist = *pevlist; 1844 1843 evsel = perf_evlist__last(*pevlist); 1844 + 1845 + if (!evsel->priv) { 1846 + if (scr->per_event_dump) { 1847 + evsel->priv = perf_evsel_script__new(evsel, 1848 + scr->session->data); 1849 + } else { 1850 + es = zalloc(sizeof(*es)); 1851 + if (!es) 1852 + return -ENOMEM; 1853 + es->fp = stdout; 1854 + evsel->priv = es; 1855 + } 1856 + } 1845 1857 1846 1858 if (evsel->attr.type >= PERF_TYPE_MAX && 1847 1859 evsel->attr.type != PERF_TYPE_SYNTH) ··· 3044 3030 return set_maps(script); 3045 3031 } 3046 3032 3033 + static int process_feature_event(struct perf_tool *tool, 3034 + union perf_event *event, 3035 + struct perf_session *session) 3036 + { 3037 + if (event->feat.feat_id < HEADER_LAST_FEATURE) 3038 + return perf_event__process_feature(tool, event, session); 3039 + return 0; 3040 + } 3041 + 3047 3042 #ifdef HAVE_AUXTRACE_SUPPORT 3048 3043 static int perf_script__process_auxtrace_info(struct perf_tool *tool, 3049 3044 union perf_event *event, ··· 3097 3074 .attr = process_attr, 3098 3075 .event_update = perf_event__process_event_update, 3099 3076 .tracing_data = perf_event__process_tracing_data, 3100 - .feature = perf_event__process_feature, 3077 + .feature = process_feature_event, 3101 3078 .build_id = perf_event__process_build_id, 3102 3079 .id_index = perf_event__process_id_index, 3103 3080 .auxtrace_info = perf_script__process_auxtrace_info, ··· 3148 3125 "+field to add and -field to remove." 3149 3126 "Valid types: hw,sw,trace,raw,synth.\n
" 3150 3127 "Fields: comm,tid,pid,time,cpu,event,trace,ip,sym,dso," 3151 - "addr,symoff,period,iregs,uregs,brstack,brstacksym,flags," 3152 - "bpf-output,callindent,insn,insnlen,brstackinsn,synth,phys_addr", 3128 + "addr,symoff,srcline,period,iregs,uregs,brstack," 3129 + "brstacksym,flags,bpf-output,brstackinsn,brstackoff," 3130 + "callindent,insn,insnlen,synth,phys_addr,metric,misc", 3153 3131 parse_output_fields), 3154 3132 OPT_BOOLEAN('a', "all-cpus", &system_wide, 3155 3133 "system-wide collection from all CPUs"),
+20 -5
tools/perf/tests/parse-events.c
··· 1309 1309 return 0; 1310 1310 } 1311 1311 1312 + static bool test__intel_pt_valid(void) 1313 + { 1314 + return !!perf_pmu__find("intel_pt"); 1315 + } 1316 + 1312 1317 static int test__intel_pt(struct perf_evlist *evlist) 1313 1318 { 1314 1319 struct perf_evsel *evsel = perf_evlist__first(evlist); ··· 1380 1375 const char *name; 1381 1376 __u32 type; 1382 1377 const int id; 1378 + bool (*valid)(void); 1383 1379 int (*check)(struct perf_evlist *evlist); 1384 1380 }; ··· 1654 1648 }, 1655 1649 { 1656 1650 .name = "intel_pt//u", 1651 + .valid = test__intel_pt_valid, 1657 1652 .check = test__intel_pt, 1658 1653 .id = 52, 1659 1654 }, ··· 1693 1686 1694 1687 static int test_event(struct evlist_test *e) 1695 1688 { 1689 + struct parse_events_error err = { .idx = 0, }; 1696 1690 struct perf_evlist *evlist; 1697 1691 int ret; 1692 + 1693 + if (e->valid && !e->valid()) { 1694 + pr_debug("... SKIP"); 1695 + return 0; 1696 + } 1698 1697 1699 1698 evlist = perf_evlist__new(); 1700 1699 if (evlist == NULL) 1701 1700 return -ENOMEM; 1702 1701 1703 - ret = parse_events(evlist, e->name, NULL); 1702 + ret = parse_events(evlist, e->name, &err); 1704 1703 if (ret) { 1705 - pr_debug("failed to parse event '%s', err %d\n", 1706 - e->name, ret); 1704 + pr_debug("failed to parse event '%s', err %d, str '%s'\n", 1705 + e->name, ret, err.str); 1706 + parse_events_print_error(&err, e->name); 1707 1707 } else { 1708 1708 ret = e->check(evlist); 1709 1709 } ··· 1728 1714 for (i = 0; i < cnt; i++) { 1729 1715 struct evlist_test *e = &events[i]; 1730 1716 1731 - pr_debug("running test %d '%s'\n", e->id, e->name); 1717 + pr_debug("running test %d '%s'", e->id, e->name); 1732 1718 ret1 = test_event(e); 1733 1719 if (ret1) 1734 1720 ret2 = ret1; 1721 + pr_debug("\n"); 1735 1722 } 1736 1723 1737 1724 return ret2; ··· 1814 1799 } 1815 1800 1816 1801 while (!ret && (ent = readdir(dir))) { 1817 - struct evlist_test e; 1802 + struct evlist_test e = { .id = 0, }; 1818 1803 char name[2 * NAME_MAX + 1 + 12 + 3]; 1819 1804 1820 1805 /* Names containing . are special and cannot be used directly */
+1
tools/perf/tests/topology.c
··· 45 45 46 46 perf_header__set_feat(&session->header, HEADER_CPU_TOPOLOGY); 47 47 perf_header__set_feat(&session->header, HEADER_NRCPUS); 48 + perf_header__set_feat(&session->header, HEADER_ARCH); 48 49 49 50 session->header.data_size += DATA_SIZE; 50 51
+9 -2
tools/perf/util/c++/clang.cpp
··· 146 146 raw_svector_ostream ostream(*Buffer); 147 147 148 148 legacy::PassManager PM; 149 - if (TargetMachine->addPassesToEmitFile(PM, ostream, 150 - TargetMachine::CGFT_ObjectFile)) { 149 + bool NotAdded; 150 + #if CLANG_VERSION_MAJOR < 7 151 + NotAdded = TargetMachine->addPassesToEmitFile(PM, ostream, 152 + TargetMachine::CGFT_ObjectFile); 153 + #else 154 + NotAdded = TargetMachine->addPassesToEmitFile(PM, ostream, nullptr, 155 + TargetMachine::CGFT_ObjectFile); 156 + #endif 157 + if (NotAdded) { 151 158 llvm::errs() << "TargetMachine can't emit a file of this type\n"; 152 159 return std::unique_ptr<llvm::SmallVectorImpl<char>>(nullptr);; 153 160 }
+10 -2
tools/perf/util/header.c
··· 2129 2129 int cpu_nr = ff->ph->env.nr_cpus_avail; 2130 2130 u64 size = 0; 2131 2131 struct perf_header *ph = ff->ph; 2132 + bool do_core_id_test = true; 2132 2133 2133 2134 ph->env.cpu = calloc(cpu_nr, sizeof(*ph->env.cpu)); 2134 2135 if (!ph->env.cpu) ··· 2184 2183 return 0; 2185 2184 } 2186 2185 2186 + /* On s390 the socket_id number is not related to the numbers of cpus. 2187 + * The socket_id number might be higher than the numbers of cpus. 2188 + * This depends on the configuration. 2189 + */ 2190 + if (ph->env.arch && !strncmp(ph->env.arch, "s390", 4)) 2191 + do_core_id_test = false; 2192 + 2187 2193 for (i = 0; i < (u32)cpu_nr; i++) { 2188 2194 if (do_read_u32(ff, &nr)) 2189 2195 goto free_cpu; ··· 2200 2192 if (do_read_u32(ff, &nr)) 2201 2193 goto free_cpu; 2202 2194 2203 - if (nr != (u32)-1 && nr > (u32)cpu_nr) { 2195 + if (do_core_id_test && nr != (u32)-1 && nr > (u32)cpu_nr) { 2204 2196 pr_debug("socket_id number is too big." 2205 2197 "You may need to upgrade the perf tool.\n"); 2206 2198 goto free_cpu; ··· 3464 3456 pr_warning("invalid record type %d in pipe-mode\n", type); 3465 3457 return 0; 3466 3458 } 3467 - if (feat == HEADER_RESERVED || feat > HEADER_LAST_FEATURE) { 3459 + if (feat == HEADER_RESERVED || feat >= HEADER_LAST_FEATURE) { 3468 3460 pr_warning("invalid record type %d in pipe-mode\n", type); 3469 3461 return -1; 3470 3462 }
+1 -1
tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.c
··· 366 366 if (len < offs) 367 367 return INTEL_PT_NEED_MORE_BYTES; 368 368 byte = buf[offs++]; 369 - payload |= (byte >> 1) << shift; 369 + payload |= ((uint64_t)byte >> 1) << shift; 370 370 } 371 371 372 372 packet->type = INTEL_PT_CYC;
+97 -2
tools/perf/util/pmu.c
··· 234 234 return 0; 235 235 } 236 236 237 + static void perf_pmu_assign_str(char *name, const char *field, char **old_str, 238 + char **new_str) 239 + { 240 + if (!*old_str) 241 + goto set_new; 242 + 243 + if (*new_str) { /* Have new string, check with old */ 244 + if (strcasecmp(*old_str, *new_str)) 245 + pr_debug("alias %s differs in field '%s'\n", 246 + name, field); 247 + zfree(old_str); 248 + } else /* Nothing new --> keep old string */ 249 + return; 250 + set_new: 251 + *old_str = *new_str; 252 + *new_str = NULL; 253 + } 254 + 255 + static void perf_pmu_update_alias(struct perf_pmu_alias *old, 256 + struct perf_pmu_alias *newalias) 257 + { 258 + perf_pmu_assign_str(old->name, "desc", &old->desc, &newalias->desc); 259 + perf_pmu_assign_str(old->name, "long_desc", &old->long_desc, 260 + &newalias->long_desc); 261 + perf_pmu_assign_str(old->name, "topic", &old->topic, &newalias->topic); 262 + perf_pmu_assign_str(old->name, "metric_expr", &old->metric_expr, 263 + &newalias->metric_expr); 264 + perf_pmu_assign_str(old->name, "metric_name", &old->metric_name, 265 + &newalias->metric_name); 266 + perf_pmu_assign_str(old->name, "value", &old->str, &newalias->str); 267 + old->scale = newalias->scale; 268 + old->per_pkg = newalias->per_pkg; 269 + old->snapshot = newalias->snapshot; 270 + memcpy(old->unit, newalias->unit, sizeof(old->unit)); 271 + } 272 + 273 + /* Delete an alias entry. */ 274 + static void perf_pmu_free_alias(struct perf_pmu_alias *newalias) 275 + { 276 + zfree(&newalias->name); 277 + zfree(&newalias->desc); 278 + zfree(&newalias->long_desc); 279 + zfree(&newalias->topic); 280 + zfree(&newalias->str); 281 + zfree(&newalias->metric_expr); 282 + zfree(&newalias->metric_name); 283 + parse_events_terms__purge(&newalias->terms); 284 + free(newalias); 285 + } 286 + 287 + /* Merge an alias, search in alias list. If this name is already 288 + * present merge both of them to combine all information.
289 + */ 290 + static bool perf_pmu_merge_alias(struct perf_pmu_alias *newalias, 291 + struct list_head *alist) 292 + { 293 + struct perf_pmu_alias *a; 294 + 295 + list_for_each_entry(a, alist, list) { 296 + if (!strcasecmp(newalias->name, a->name)) { 297 + perf_pmu_update_alias(a, newalias); 298 + perf_pmu_free_alias(newalias); 299 + return true; 300 + } 301 + } 302 + return false; 303 + } 304 + 237 305 static int __perf_pmu__new_alias(struct list_head *list, char *dir, char *name, 238 306 char *desc, char *val, 239 307 char *long_desc, char *topic, ··· 309 241 char *metric_expr, 310 242 char *metric_name) 311 243 { 244 + struct parse_events_term *term; 312 245 struct perf_pmu_alias *alias; 313 246 int ret; 314 247 int num; 248 + char newval[256]; 315 249 316 250 alias = malloc(sizeof(*alias)); 317 251 if (!alias) ··· 330 260 pr_err("Cannot parse alias %s: %d\n", val, ret); 331 261 free(alias); 332 262 return ret; 263 + } 264 + 265 + /* Scan event and remove leading zeroes, spaces, newlines, some 266 + * platforms have terms specified as 267 + * event=0x0091 (read from files ../<PMU>/events/<FILE> 268 + * and terms specified as event=0x91 (read from JSON files). 269 + * 270 + * Rebuild string to make alias->str member comparable.
271 + */ 272 + memset(newval, 0, sizeof(newval)); 273 + ret = 0; 274 + list_for_each_entry(term, &alias->terms, list) { 275 + if (ret) 276 + ret += scnprintf(newval + ret, sizeof(newval) - ret, 277 + ","); 278 + if (term->type_val == PARSE_EVENTS__TERM_TYPE_NUM) 279 + ret += scnprintf(newval + ret, sizeof(newval) - ret, 280 + "%s=%#x", term->config, term->val.num); 281 + else if (term->type_val == PARSE_EVENTS__TERM_TYPE_STR) 282 + ret += scnprintf(newval + ret, sizeof(newval) - ret, 283 + "%s=%s", term->config, term->val.str); 333 284 } 334 285 335 286 alias->name = strdup(name); ··· 376 285 snprintf(alias->unit, sizeof(alias->unit), "%s", unit); 377 286 } 378 287 alias->per_pkg = perpkg && sscanf(perpkg, "%d", &num) == 1 && num == 1; 379 - alias->str = strdup(val); 288 + alias->str = strdup(newval); 380 289 381 - list_add_tail(&alias->list, list); 290 + if (!perf_pmu_merge_alias(alias, list)) 291 + list_add_tail(&alias->list, list); 382 292 383 293 return 0; 384 294 } ··· 394 302 return -EINVAL; 395 303 396 304 buf[ret] = 0; 305 + 306 + /* Remove trailing newline from sysfs file */ 307 + rtrim(buf); 397 308 398 309 return __perf_pmu__new_alias(list, dir, name, NULL, buf, NULL, NULL, NULL, 399 310 NULL, NULL, NULL);
+1
tools/testing/selftests/bpf/config
··· 6 6 CONFIG_CGROUP_BPF=y 7 7 CONFIG_NETDEVSIM=m 8 8 CONFIG_NET_CLS_ACT=y 9 + CONFIG_NET_SCHED=y 9 10 CONFIG_NET_SCH_INGRESS=y 10 11 CONFIG_NET_IPIP=y 11 12 CONFIG_IPV6=y
+9
tools/testing/selftests/bpf/test_kmod.sh
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + # Kselftest framework requirement - SKIP code is 4. 5 + ksft_skip=4 6 + 7 + msg="skip all tests:" 8 + if [ "$(id -u)" != "0" ]; then 9 + echo $msg please run this as root >&2 10 + exit $ksft_skip 11 + fi 12 + 4 13 SRC_TREE=../../../../ 5 14 6 15 test_run()
+9
tools/testing/selftests/bpf/test_lirc_mode2.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + # Kselftest framework requirement - SKIP code is 4. 5 + ksft_skip=4 6 + 7 + msg="skip all tests:" 8 + if [ $UID != 0 ]; then 9 + echo $msg please run this as root >&2 10 + exit $ksft_skip 11 + fi 12 + 4 13 GREEN='\033[0;92m' 5 14 RED='\033[0;31m' 6 15 NC='\033[0m' # No Color
+9
tools/testing/selftests/bpf/test_lwt_seg6local.sh
··· 21 21 # An UDP datagram is sent from fb00::1 to fb00::6. The test succeeds if this 22 22 # datagram can be read on NS6 when binding to fb00::6. 23 23 24 + # Kselftest framework requirement - SKIP code is 4. 25 + ksft_skip=4 26 + 27 + msg="skip all tests:" 28 + if [ $UID != 0 ]; then 29 + echo $msg please run this as root >&2 30 + exit $ksft_skip 31 + fi 32 + 24 33 TMP_FILE="/tmp/selftest_lwt_seg6local.txt" 25 34 26 35 cleanup()
-6
tools/testing/selftests/bpf/test_sockmap.c
··· 1413 1413 1414 1414 int main(int argc, char **argv) 1415 1415 { 1416 - struct rlimit r = {10 * 1024 * 1024, RLIM_INFINITY}; 1417 1416 int iov_count = 1, length = 1024, rate = 1; 1418 1417 struct sockmap_options options = {0}; 1419 1418 int opt, longindex, err, cg_fd = 0; 1420 1419 char *bpf_file = BPF_SOCKMAP_FILENAME; 1421 1420 int test = PING_PONG; 1422 - 1423 - if (setrlimit(RLIMIT_MEMLOCK, &r)) { 1424 - perror("setrlimit(RLIMIT_MEMLOCK)"); 1425 - return 1; 1426 - } 1427 1421 1428 1422 if (argc < 2) 1429 1423 return test_suite();
tools/testing/selftests/net/fib_tests.sh
+36 -23
tools/testing/selftests/x86/sigreturn.c
··· 610 610 */ 611 611 for (int i = 0; i < NGREG; i++) { 612 612 greg_t req = requested_regs[i], res = resulting_regs[i]; 613 + 613 614 if (i == REG_TRAPNO || i == REG_IP) 614 615 continue; /* don't care */ 615 - if (i == REG_SP) { 616 - printf("\tSP: %llx -> %llx\n", (unsigned long long)req, 617 - (unsigned long long)res); 618 616 617 + if (i == REG_SP) { 619 618 /* 620 - * In many circumstances, the high 32 bits of rsp 621 - * are zeroed. For example, we could be a real 622 - * 32-bit program, or we could hit any of a number 623 - * of poorly-documented IRET or segmented ESP 624 - * oddities. If this happens, it's okay. 619 + * If we were using a 16-bit stack segment, then 620 + * the kernel is a bit stuck: IRET only restores 621 + * the low 16 bits of ESP/RSP if SS is 16-bit. 622 + * The kernel uses a hack to restore bits 31:16, 623 + * but that hack doesn't help with bits 63:32. 624 + * On Intel CPUs, bits 63:32 end up zeroed, and, on 625 + * AMD CPUs, they leak the high bits of the kernel 626 + * espfix64 stack pointer. There's very little that 627 + * the kernel can do about it. 628 + * 629 + * Similarly, if we are returning to a 32-bit context, 630 + * the CPU will often lose the high 32 bits of RSP.
625 631 */ 626 - if (res == (req & 0xFFFFFFFF)) 627 - continue; /* OK; not expected to work */ 632 + 633 + if (res == req) 634 + continue; 635 + 636 + if (cs_bits != 64 && ((res ^ req) & 0xFFFFFFFF) == 0) { 637 + printf("[NOTE]\tSP: %llx -> %llx\n", 638 + (unsigned long long)req, 639 + (unsigned long long)res); 640 + continue; 641 + } 642 + 643 + printf("[FAIL]\tSP mismatch: requested 0x%llx; got 0x%llx\n", 644 + (unsigned long long)requested_regs[i], 645 + (unsigned long long)resulting_regs[i]); 646 + nerrs++; 647 + continue; 628 648 } 629 649 630 650 bool ignore_reg = false; ··· 674 654 #endif 675 655 676 656 /* Sanity check on the kernel */ 677 - if (i == REG_CX && requested_regs[i] != resulting_regs[i]) { 657 + if (i == REG_CX && req != res) { 678 658 printf("[FAIL]\tCX (saved SP) mismatch: requested 0x%llx; got 0x%llx\n", 679 - (unsigned long long)requested_regs[i], 680 - (unsigned long long)resulting_regs[i]); 659 + (unsigned long long)req, 660 + (unsigned long long)res); 681 661 nerrs++; 682 662 continue; 683 663 } 684 664 685 - if (requested_regs[i] != resulting_regs[i] && !ignore_reg) { 686 - /* 687 - * SP is particularly interesting here. The 688 - * usual cause of failures is that we hit the 689 - * nasty IRET case of returning to a 16-bit SS, 690 - * in which case bits 16:31 of the *kernel* 691 - * stack pointer persist in ESP. 692 - */ 665 + if (req != res && !ignore_reg) { 693 666 printf("[FAIL]\tReg %d mismatch: requested 0x%llx; got 0x%llx\n", 694 - i, (unsigned long long)requested_regs[i], 695 - (unsigned long long)resulting_regs[i]); 667 + i, (unsigned long long)req, 668 + (unsigned long long)res); 696 669 nerrs++; 697 670 } 698 671 }
-18
tools/virtio/linux/scatterlist.h
··· 36 36 */ 37 37 BUG_ON((unsigned long) page & 0x03); 38 38 #ifdef CONFIG_DEBUG_SG 39 - BUG_ON(sg->sg_magic != SG_MAGIC); 40 39 BUG_ON(sg_is_chain(sg)); 41 40 #endif 42 41 sg->page_link = page_link | (unsigned long) page; ··· 66 67 static inline struct page *sg_page(struct scatterlist *sg) 67 68 { 68 69 #ifdef CONFIG_DEBUG_SG 69 - BUG_ON(sg->sg_magic != SG_MAGIC); 70 70 BUG_ON(sg_is_chain(sg)); 71 71 #endif 72 72 return (struct page *)((sg)->page_link & ~0x3); ··· 114 116 **/ 115 117 static inline void sg_mark_end(struct scatterlist *sg) 116 118 { 117 - #ifdef CONFIG_DEBUG_SG 118 - BUG_ON(sg->sg_magic != SG_MAGIC); 119 - #endif 120 119 /* 121 120 * Set termination bit, clear potential chain bit 122 121 */ ··· 131 136 **/ 132 137 static inline void sg_unmark_end(struct scatterlist *sg) 133 138 { 134 - #ifdef CONFIG_DEBUG_SG 135 - BUG_ON(sg->sg_magic != SG_MAGIC); 136 - #endif 137 139 sg->page_link &= ~0x02; 138 140 } 139 141 140 142 static inline struct scatterlist *sg_next(struct scatterlist *sg) 141 143 { 142 - #ifdef CONFIG_DEBUG_SG 143 - BUG_ON(sg->sg_magic != SG_MAGIC); 144 - #endif 145 144 if (sg_is_last(sg)) 146 145 return NULL; 147 146 ··· 149 160 static inline void sg_init_table(struct scatterlist *sgl, unsigned int nents) 150 161 { 151 162 memset(sgl, 0, sizeof(*sgl) * nents); 152 - #ifdef CONFIG_DEBUG_SG 153 - { 154 - unsigned int i; 155 - for (i = 0; i < nents; i++) 156 - sgl[i].sg_magic = SG_MAGIC; 157 - } 158 - #endif 159 163 sg_mark_end(&sgl[nents - 1]); 160 164 } 161 165