Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.18-rc3 into usb-next

We want the USB and other fixes in here as well to make merges and
testing easier.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+2750 -1993
+12 -2
Documentation/admin-guide/pm/intel_pstate.rst
···
 ``intel_pstate`` exposes several global attributes (files) in ``sysfs`` to
 control its functionality at the system level. They are located in the
-``/sys/devices/system/cpu/cpufreq/intel_pstate/`` directory and affect all
-CPUs.
+``/sys/devices/system/cpu/intel_pstate/`` directory and affect all CPUs.
 
 Some of them are not present if the ``intel_pstate=per_cpu_perf_limits``
 argument is passed to the kernel in the command line.
···
 	supplied to the ``CPUFreq`` core and exposed via the policy interface,
 	but it affects the maximum possible value of per-policy P-state limits
 	(see `Interpretation of Policy Attributes`_ below for details).
+
+``hwp_dynamic_boost``
+	This attribute is only present if ``intel_pstate`` works in the
+	`active mode with the HWP feature enabled <Active Mode With HWP_>`_ in
+	the processor. If set (equal to 1), it causes the minimum P-state limit
+	to be increased dynamically for a short time whenever a task previously
+	waiting on I/O is selected to run on a given logical CPU (the purpose
+	of this mechanism is to improve performance).
+
+	This setting has no effect on logical CPUs whose minimum P-state limit
+	is directly set to the highest non-turbo P-state or above it.
 
 .. _status_attr:
···
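The `hwp_dynamic_boost` attribute documented in the intel_pstate hunk above is a plain sysfs file. A minimal sketch (ours, not part of the patch) of driving it; the helper name is hypothetical, and the directory is passed as a parameter so the logic can be exercised against a scratch directory on machines without HWP:

```shell
# enable_hwp_boost DIR: write 1 to DIR/hwp_dynamic_boost if it is
# present and writable, mirroring the sysfs knob described above.
enable_hwp_boost() {
    f="$1/hwp_dynamic_boost"
    if [ -w "$f" ]; then
        echo 1 > "$f"            # turn on the short I/O-wakeup boost
        echo "enabled"
    else
        echo "not available"     # no HWP, or intel_pstate not in active mode
    fi
}

# On real hardware (as root) you would run:
#   enable_hwp_boost /sys/devices/system/cpu/intel_pstate
```

Note that per the documentation the file only exists in active mode with HWP enabled, so the guard branch is the common case elsewhere.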
+23
Documentation/devicetree/bindings/input/sprd,sc27xx-vibra.txt
···
+Spreadtrum SC27xx PMIC Vibrator
+
+Required properties:
+- compatible: should be "sprd,sc2731-vibrator".
+- reg: address of vibrator control register.
+
+Example :
+
+	sc2731_pmic: pmic@0 {
+		compatible = "sprd,sc2731";
+		reg = <0>;
+		spi-max-frequency = <26000000>;
+		interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
+		interrupt-controller;
+		#interrupt-cells = <2>;
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		vibrator@eb4 {
+			compatible = "sprd,sc2731-vibrator";
+			reg = <0xeb4>;
+		};
+	};
+1 -6
Documentation/filesystems/Locking
···
 	int (*iterate) (struct file *, struct dir_context *);
 	int (*iterate_shared) (struct file *, struct dir_context *);
 	__poll_t (*poll) (struct file *, struct poll_table_struct *);
-	struct wait_queue_head * (*get_poll_head)(struct file *, __poll_t);
-	__poll_t (*poll_mask) (struct file *, __poll_t);
 	long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
 	long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
 	int (*mmap) (struct file *, struct vm_area_struct *);
···
 };
 
 locking rules:
-	All except for ->poll_mask may block.
+	All may block.
 
 ->llseek() locking has moved from llseek to the individual llseek
 implementations.  If your fs is not using generic_file_llseek, you
···
 ->setlease operations should call generic_setlease() before or after setting
 the lease within the individual filesystem to record the result of the
 operation
-
-->poll_mask can be called with or without the waitqueue lock for the waitqueue
-returned from ->get_poll_head.
 
 --------------------------- dquot_operations -------------------------------
 prototypes:
-13
Documentation/filesystems/vfs.txt
···
 	ssize_t (*write_iter) (struct kiocb *, struct iov_iter *);
 	int (*iterate) (struct file *, struct dir_context *);
 	__poll_t (*poll) (struct file *, struct poll_table_struct *);
-	struct wait_queue_head * (*get_poll_head)(struct file *, __poll_t);
-	__poll_t (*poll_mask) (struct file *, __poll_t);
 	long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
 	long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
 	int (*mmap) (struct file *, struct vm_area_struct *);
···
   poll: called by the VFS when a process wants to check if there is
 	activity on this file and (optionally) go to sleep until there
 	is activity. Called by the select(2) and poll(2) system calls
-
-  get_poll_head: Returns the struct wait_queue_head that callers can
-  wait on.  Callers need to check the returned events using ->poll_mask
-  once woken.  Can return NULL to indicate polling is not supported,
-  or any error code using the ERR_PTR convention to indicate that a
-  grave error occured and ->poll_mask shall not be called.
-
-  poll_mask: return the mask of EPOLL* values describing the file descriptor
-  state.  Called either before going to sleep on the waitqueue returned by
-  get_poll_head, or after it has been woken.  If ->get_poll_head and
-  ->poll_mask are implemented ->poll does not need to be implement.
 
   unlocked_ioctl: called by the ioctl(2) system call.
···
+6
Documentation/kbuild/kconfig-language.txt
···
 to use it. It should be placed at the top of the configuration, before any
 other statement.
 
+'#' Kconfig source file comment:
+
+An unquoted '#' character anywhere in a source file line indicates
+the beginning of a source file comment.  The remainder of that line
+is a comment.
+
 
 Kconfig hints
 -------------
···
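The comment rule added in the kconfig-language hunk above can be illustrated with a small fragment (the symbol name is hypothetical, not from the patch):

```
# This entire line is a Kconfig source file comment.
config EXAMPLE_FEATURE
	bool "example feature"	# an unquoted '#' starts a comment here too
```

A '#' inside a quoted prompt string, by contrast, is ordinary text, since only an unquoted '#' begins a comment.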
+57 -55
Documentation/networking/e100.rst
···
+==============================================================
 Linux* Base Driver for the Intel(R) PRO/100 Family of Adapters
 ==============================================================
 
···
 Additional Configurations
 =========================
 
-Configuring the Driver on Different Distributions
--------------------------------------------------
+Configuring the Driver on Different Distributions
+-------------------------------------------------
 
-Configuring a network driver to load properly when the system is started is
-distribution dependent. Typically, the configuration process involves adding
-an alias line to /etc/modprobe.d/*.conf as well as editing other system
-startup scripts and/or configuration files. Many popular Linux
-distributions ship with tools to make these changes for you. To learn the
-proper way to configure a network device for your system, refer to your
-distribution documentation. If during this process you are asked for the
-driver or module name, the name for the Linux Base Driver for the Intel
-PRO/100 Family of Adapters is e100.
+Configuring a network driver to load properly when the system is started
+is distribution dependent. Typically, the configuration process involves
+adding an alias line to /etc/modprobe.d/*.conf as well as editing other
+system startup scripts and/or configuration files. Many popular Linux
+distributions ship with tools to make these changes for you. To learn
+the proper way to configure a network device for your system, refer to
+your distribution documentation. If during this process you are asked
+for the driver or module name, the name for the Linux Base Driver for
+the Intel PRO/100 Family of Adapters is e100.
 
-As an example, if you install the e100 driver for two PRO/100 adapters
-(eth0 and eth1), add the following to a configuration file in /etc/modprobe.d/
+As an example, if you install the e100 driver for two PRO/100 adapters
+(eth0 and eth1), add the following to a configuration file in
+/etc/modprobe.d/::
 
 	alias eth0 e100
 	alias eth1 e100
 
-Viewing Link Messages
----------------------
-In order to see link messages and other Intel driver information on your
-console, you must set the dmesg level up to six. This can be done by
-entering the following on the command line before loading the e100 driver::
+Viewing Link Messages
+---------------------
+
+In order to see link messages and other Intel driver information on your
+console, you must set the dmesg level up to six. This can be done by
+entering the following on the command line before loading the e100
+driver::
 
 	dmesg -n 6
 
-If you wish to see all messages issued by the driver, including debug
-messages, set the dmesg level to eight.
+If you wish to see all messages issued by the driver, including debug
+messages, set the dmesg level to eight.
 
-NOTE: This setting is not saved across reboots.
+NOTE: This setting is not saved across reboots.
 
+ethtool
+-------
 
-ethtool
--------
+The driver utilizes the ethtool interface for driver configuration and
+diagnostics, as well as displaying statistical information. The ethtool
+version 1.6 or later is required for this functionality.
 
-The driver utilizes the ethtool interface for driver configuration and
-diagnostics, as well as displaying statistical information. The ethtool
-version 1.6 or later is required for this functionality.
+The latest release of ethtool can be found from
+https://www.kernel.org/pub/software/network/ethtool/
 
-The latest release of ethtool can be found from
-https://www.kernel.org/pub/software/network/ethtool/
+Enabling Wake on LAN* (WoL)
+---------------------------
+WoL is provided through the ethtool* utility. For instructions on
+enabling WoL with ethtool, refer to the ethtool man page. WoL will be
+enabled on the system during the next shut down or reboot. For this
+driver version, in order to enable WoL, the e100 driver must be loaded
+when shutting down or rebooting the system.
 
-Enabling Wake on LAN* (WoL)
----------------------------
-WoL is provided through the ethtool* utility. For instructions on enabling
-WoL with ethtool, refer to the ethtool man page.
+NAPI
+----
 
-WoL will be enabled on the system during the next shut down or reboot. For
-this driver version, in order to enable WoL, the e100 driver must be
-loaded when shutting down or rebooting the system.
+NAPI (Rx polling mode) is supported in the e100 driver.
 
-NAPI
-----
+See https://wiki.linuxfoundation.org/networking/napi for more
+information on NAPI.
 
-NAPI (Rx polling mode) is supported in the e100 driver.
+Multiple Interfaces on Same Ethernet Broadcast Network
+------------------------------------------------------
 
-See https://wiki.linuxfoundation.org/networking/napi for more information
-on NAPI.
+Due to the default ARP behavior on Linux, it is not possible to have one
+system on two IP networks in the same Ethernet broadcast domain
+(non-partitioned switch) behave as expected. All Ethernet interfaces
+will respond to IP traffic for any IP address assigned to the system.
+This results in unbalanced receive traffic.
 
-Multiple Interfaces on Same Ethernet Broadcast Network
-------------------------------------------------------
+If you have multiple interfaces in a server, either turn on ARP
+filtering by
 
-Due to the default ARP behavior on Linux, it is not possible to have
-one system on two IP networks in the same Ethernet broadcast domain
-(non-partitioned switch) behave as expected. All Ethernet interfaces
-will respond to IP traffic for any IP address assigned to the system.
-This results in unbalanced receive traffic.
+(1) entering:: echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
+    (this only works if your kernel's version is higher than 2.4.5), or
 
-If you have multiple interfaces in a server, either turn on ARP
-filtering by
-
-(1) entering:: echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
-    (this only works if your kernel's version is higher than 2.4.5), or
-
-(2) installing the interfaces in separate broadcast domains (either
-    in different switches or in a switch partitioned to VLANs).
+(2) installing the interfaces in separate broadcast domains (either
+    in different switches or in a switch partitioned to VLANs).
 
 
 Support
···
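The ARP-filtering step in the e100 notes above can be sketched as a small helper (ours, not from the document); the /proc root is a parameter so the write can be pointed at a scratch directory for a dry run, while a real system would use the default path as root:

```shell
# arp_filter_on [ROOT]: enable ARP filtering for all interfaces, i.e.
#   echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
# ROOT defaults to the real procfs location; pass a scratch directory
# (with an all/arp_filter file in it) to exercise the logic elsewhere.
arp_filter_on() {
    root="${1:-/proc/sys/net/ipv4/conf}"
    echo 1 > "$root/all/arp_filter"
    cat "$root/all/arp_filter"    # echo back the value that was set
}

# Real usage (requires root):
#   arp_filter_on
```

To make the setting persistent across reboots, most distributions also accept an equivalent `net.ipv4.conf.all.arp_filter = 1` line in sysctl configuration.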
+39 -37
Documentation/networking/e1000.rst
···
+===========================================================
 Linux* Base Driver for Intel(R) Ethernet Network Connection
 ===========================================================
 
···
 Additional Configurations
 =========================
 
-Jumbo Frames
-------------
-Jumbo Frames support is enabled by changing the MTU to a value larger than
-the default of 1500. Use the ifconfig command to increase the MTU size.
-For example::
+Jumbo Frames
+------------
+Jumbo Frames support is enabled by changing the MTU to a value larger
+than the default of 1500. Use the ifconfig command to increase the MTU
+size. For example::
 
 	ifconfig eth<x> mtu 9000 up
 
-This setting is not saved across reboots. It can be made permanent if
-you add::
+This setting is not saved across reboots. It can be made permanent if
+you add::
 
 	MTU=9000
 
-to the file /etc/sysconfig/network-scripts/ifcfg-eth<x>. This example
-applies to the Red Hat distributions; other distributions may store this
-setting in a different location.
+to the file /etc/sysconfig/network-scripts/ifcfg-eth<x>. This example
+applies to the Red Hat distributions; other distributions may store this
+setting in a different location.
 
-Notes:
-Degradation in throughput performance may be observed in some Jumbo frames
-environments. If this is observed, increasing the application's socket buffer
-size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help.
-See the specific application manual and /usr/src/linux*/Documentation/
-networking/ip-sysctl.txt for more details.
+Notes: Degradation in throughput performance may be observed in some
+Jumbo frames environments. If this is observed, increasing the
+application's socket buffer size and/or increasing the
+/proc/sys/net/ipv4/tcp_*mem entry values may help. See the specific
+application manual and /usr/src/linux*/Documentation/
+networking/ip-sysctl.txt for more details.
 
-- The maximum MTU setting for Jumbo Frames is 16110. This value coincides
-  with the maximum Jumbo Frames size of 16128.
+- The maximum MTU setting for Jumbo Frames is 16110. This value
+  coincides with the maximum Jumbo Frames size of 16128.
 
-- Using Jumbo frames at 10 or 100 Mbps is not supported and may result in
-  poor performance or loss of link.
+- Using Jumbo frames at 10 or 100 Mbps is not supported and may result
+  in poor performance or loss of link.
 
-- Adapters based on the Intel(R) 82542 and 82573V/E controller do not
-  support Jumbo Frames. These correspond to the following product names:
-  Intel(R) PRO/1000 Gigabit Server Adapter
-  Intel(R) PRO/1000 PM Network Connection
+- Adapters based on the Intel(R) 82542 and 82573V/E controller do not
+  support Jumbo Frames. These correspond to the following product names:
+  Intel(R) PRO/1000 Gigabit Server Adapter Intel(R) PRO/1000 PM Network
+  Connection
 
-ethtool
--------
-The driver utilizes the ethtool interface for driver configuration and
-diagnostics, as well as displaying statistical information. The ethtool
-version 1.6 or later is required for this functionality.
+ethtool
+-------
+The driver utilizes the ethtool interface for driver configuration and
+diagnostics, as well as displaying statistical information. The ethtool
+version 1.6 or later is required for this functionality.
 
-The latest release of ethtool can be found from
-https://www.kernel.org/pub/software/network/ethtool/
+The latest release of ethtool can be found from
+https://www.kernel.org/pub/software/network/ethtool/
 
-Enabling Wake on LAN* (WoL)
----------------------------
-WoL is configured through the ethtool* utility.
+Enabling Wake on LAN* (WoL)
+---------------------------
+WoL is configured through the ethtool* utility.
 
-WoL will be enabled on the system during the next shut down or reboot.
-For this driver version, in order to enable WoL, the e1000 driver must be
-loaded when shutting down or rebooting the system.
+WoL will be enabled on the system during the next shut down or reboot.
+For this driver version, in order to enable WoL, the e1000 driver must be
+loaded when shutting down or rebooting the system.
+
 
 Support
 =======
···
+1 -1
Documentation/networking/strparser.txt
···
 Temporarily pause a stream parser. Message parsing is suspended
 and no new messages are delivered to the upper layer.
 
-void strp_pause(struct strparser *strp)
+void strp_unpause(struct strparser *strp)
 
 Unpause a paused stream parser.
···
+1 -1
Documentation/usb/gadget_configfs.txt
···
 where <config name>.<number> specify the configuration and <function> is
 a symlink to a function being removed from the configuration, e.g.:
 
-$ rm configfs/c.1/ncm.usb0
+$ rm configs/c.1/ncm.usb0
 
 ...
 ...
···
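The corrected command above removes a symlink from a gadget's `configs/` directory. A sketch of that structure (ours, not from the document; the helper name is hypothetical, and the gadget directory is a parameter so the layout can be reproduced in a scratch directory rather than a mounted configfs, where a real gadget would live under /sys/kernel/config/usb_gadget/):

```shell
# demo_unlink_function GADGET_DIR: build the configfs-style layout the
# document describes (functions/ncm.usb0 linked into configs/c.1), then
# remove the function from the configuration exactly as documented.
demo_unlink_function() {
    g="$1"
    mkdir -p "$g/functions/ncm.usb0" "$g/configs/c.1"
    ln -s "$g/functions/ncm.usb0" "$g/configs/c.1/ncm.usb0"
    rm "$g/configs/c.1/ncm.usb0"        # the doc's corrected command
    # removing the symlink detaches the function from the configuration
    # without destroying the function itself:
    [ -d "$g/functions/ncm.usb0" ] && echo "function still exists"
}
```

On a real configfs mount the same mkdir/ln/rm sequence is how gadget functions are created, bound, and unbound.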
+20 -6
MAINTAINERS
···
 N:	bcm586*
 N:	bcm88312
 N:	hr2
-F:	arch/arm64/boot/dts/broadcom/ns2*
+N:	stingray
+F:	arch/arm64/boot/dts/broadcom/northstar2/*
+F:	arch/arm64/boot/dts/broadcom/stingray/*
 F:	drivers/clk/bcm/clk-ns*
+F:	drivers/clk/bcm/clk-sr*
 F:	drivers/pinctrl/bcm/pinctrl-ns*
+F:	include/dt-bindings/clock/bcm-sr*
 
 BROADCOM KONA GPIO DRIVER
 M:	Ray Jui <rjui@broadcom.com>
···
 F:	Documentation/devicetree/bindings/crypto/fsl-sec4.txt
 
 FREESCALE DIU FRAMEBUFFER DRIVER
-M:	Timur Tabi <timur@tabi.org>
+M:	Timur Tabi <timur@kernel.org>
 L:	linux-fbdev@vger.kernel.org
 S:	Maintained
 F:	drivers/video/fbdev/fsl-diu-fb.*
···
 F:	drivers/net/wan/fsl_ucc_hdlc*
 
 FREESCALE QUICC ENGINE UCC UART DRIVER
-M:	Timur Tabi <timur@tabi.org>
+M:	Timur Tabi <timur@kernel.org>
 L:	linuxppc-dev@lists.ozlabs.org
 S:	Maintained
 F:	drivers/tty/serial/ucc_uart.c
···
 F:	include/linux/fs_enet_pd.h
 
 FREESCALE SOC SOUND DRIVERS
-M:	Timur Tabi <timur@tabi.org>
+M:	Timur Tabi <timur@kernel.org>
 M:	Nicolin Chen <nicoleotsuka@gmail.com>
 M:	Xiubo Li <Xiubo.Lee@gmail.com>
 R:	Fabio Estevam <fabio.estevam@nxp.com>
···
 M:	Vivien Didelot <vivien.didelot@savoirfairelinux.com>
 M:	Florian Fainelli <f.fainelli@gmail.com>
 S:	Maintained
+F:	Documentation/devicetree/bindings/net/dsa/
 F:	net/dsa/
 F:	include/net/dsa.h
 F:	include/linux/dsa/
···
 S:	Obsolete
 F:	drivers/net/wireless/intersil/prism54/
 
+PROC FILESYSTEM
+R:	Alexey Dobriyan <adobriyan@gmail.com>
+L:	linux-kernel@vger.kernel.org
+L:	linux-fsdevel@vger.kernel.org
+S:	Maintained
+F:	fs/proc/
+F:	include/linux/proc_fs.h
+F:	tools/testing/selftests/proc/
+
 PROC SYSCTL
 M:	"Luis R. Rodriguez" <mcgrof@kernel.org>
 M:	Kees Cook <keescook@chromium.org>
···
 F:	drivers/cpufreq/qcom-cpufreq-kryo.c
 
 QUALCOMM EMAC GIGABIT ETHERNET DRIVER
-M:	Timur Tabi <timur@codeaurora.org>
+M:	Timur Tabi <timur@kernel.org>
 L:	netdev@vger.kernel.org
-S:	Supported
+S:	Maintained
 F:	drivers/net/ethernet/qualcomm/emac/
···
 QUALCOMM HEXAGON ARCHITECTURE
+1 -1
Makefile
···
 VERSION = 4
 PATCHLEVEL = 18
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc3
 NAME = Merciless Moray
 
 # *DOCUMENTATION*
+7 -1
arch/arm/Kconfig
···
 	  VESA. If you have PCI, say Y, otherwise N.
 
 config PCI_DOMAINS
-	bool
+	bool "Support for multiple PCI domains"
 	depends on PCI
+	help
+	  Enable PCI domains kernel management. Say Y if your machine
+	  has a PCI bus hierarchy that requires more than one PCI
+	  domain (aka segment) to be correctly managed. Say N otherwise.
+
+	  If you don't know what to do here, say N.
 
 config PCI_DOMAINS_GENERIC
 	def_bool PCI_DOMAINS
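The "domain (aka segment)" mentioned in the new help text is the leading field of a PCI device address such as `0000:00:1f.0`. A sketch (ours, not from the patch; the helper name is hypothetical) that lists the distinct domains present, with the sysfs root parameterized so it can be tried against a scratch directory:

```shell
# list_pci_domains [ROOT]: print the distinct PCI domain (segment)
# numbers found under ROOT (normally /sys/bus/pci/devices), where each
# entry is named DOMAIN:BUS:DEVICE.FUNCTION, e.g. 0000:00:1f.0.
list_pci_domains() {
    root="${1:-/sys/bus/pci/devices}"
    for d in "$root"/*; do
        basename "$d"           # DOMAIN:BUS:DEVICE.FUNCTION
    done | cut -d: -f1 | sort -u
}

# Real usage:
#   list_pci_domains
```

Machines needing PCI_DOMAINS are exactly those where this prints more than one value.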
+1 -1
arch/arm/boot/dts/armada-385-synology-ds116.dts
···
 				    3700 5
 				    3900 6
 				    4000 7>;
-			cooling-cells = <2>;
+			#cooling-cells = <2>;
 		};
 
 		gpio-leds {
···
+12 -12
arch/arm/boot/dts/bcm-cygnus.dtsi
···
 			reg = <0x18008000 0x100>;
 			#address-cells = <1>;
 			#size-cells = <0>;
-			interrupts = <GIC_SPI 85 IRQ_TYPE_NONE>;
+			interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
 			clock-frequency = <100000>;
 			status = "disabled";
 		};
···
 			reg = <0x1800b000 0x100>;
 			#address-cells = <1>;
 			#size-cells = <0>;
-			interrupts = <GIC_SPI 86 IRQ_TYPE_NONE>;
+			interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
 			clock-frequency = <100000>;
 			status = "disabled";
 		};
···
 
 			#interrupt-cells = <1>;
 			interrupt-map-mask = <0 0 0 0>;
-			interrupt-map = <0 0 0 0 &gic GIC_SPI 100 IRQ_TYPE_NONE>;
+			interrupt-map = <0 0 0 0 &gic GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>;
 
 			linux,pci-domain = <0>;
 
···
 				compatible = "brcm,iproc-msi";
 				msi-controller;
 				interrupt-parent = <&gic>;
-				interrupts = <GIC_SPI 96 IRQ_TYPE_NONE>,
-					     <GIC_SPI 97 IRQ_TYPE_NONE>,
-					     <GIC_SPI 98 IRQ_TYPE_NONE>,
-					     <GIC_SPI 99 IRQ_TYPE_NONE>;
+				interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>,
+					     <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>,
+					     <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>,
+					     <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>;
 			};
 		};
 
···
 
 			#interrupt-cells = <1>;
 			interrupt-map-mask = <0 0 0 0>;
-			interrupt-map = <0 0 0 0 &gic GIC_SPI 106 IRQ_TYPE_NONE>;
+			interrupt-map = <0 0 0 0 &gic GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>;
 
 			linux,pci-domain = <1>;
 
···
 				compatible = "brcm,iproc-msi";
 				msi-controller;
 				interrupt-parent = <&gic>;
-				interrupts = <GIC_SPI 102 IRQ_TYPE_NONE>,
-					     <GIC_SPI 103 IRQ_TYPE_NONE>,
-					     <GIC_SPI 104 IRQ_TYPE_NONE>,
-					     <GIC_SPI 105 IRQ_TYPE_NONE>;
+				interrupts = <GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>,
+					     <GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH>,
+					     <GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>,
+					     <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>;
 			};
 		};
 
+12 -12
arch/arm/boot/dts/bcm-hr2.dtsi
···
 			reg = <0x38000 0x50>;
 			#address-cells = <1>;
 			#size-cells = <0>;
-			interrupts = <GIC_SPI 95 IRQ_TYPE_NONE>;
+			interrupts = <GIC_SPI 95 IRQ_TYPE_LEVEL_HIGH>;
 			clock-frequency = <100000>;
 		};
 
···
 			reg = <0x3b000 0x50>;
 			#address-cells = <1>;
 			#size-cells = <0>;
-			interrupts = <GIC_SPI 96 IRQ_TYPE_NONE>;
+			interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>;
 			clock-frequency = <100000>;
 		};
 	};
···
 
 			#interrupt-cells = <1>;
 			interrupt-map-mask = <0 0 0 0>;
-			interrupt-map = <0 0 0 0 &gic GIC_SPI 186 IRQ_TYPE_NONE>;
+			interrupt-map = <0 0 0 0 &gic GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>;
 
 			linux,pci-domain = <0>;
 
···
 				compatible = "brcm,iproc-msi";
 				msi-controller;
 				interrupt-parent = <&gic>;
-				interrupts = <GIC_SPI 182 IRQ_TYPE_NONE>,
-					     <GIC_SPI 183 IRQ_TYPE_NONE>,
-					     <GIC_SPI 184 IRQ_TYPE_NONE>,
-					     <GIC_SPI 185 IRQ_TYPE_NONE>;
+				interrupts = <GIC_SPI 182 IRQ_TYPE_LEVEL_HIGH>,
+					     <GIC_SPI 183 IRQ_TYPE_LEVEL_HIGH>,
+					     <GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>,
+					     <GIC_SPI 185 IRQ_TYPE_LEVEL_HIGH>;
 				brcm,pcie-msi-inten;
 			};
 		};
···
 
 			#interrupt-cells = <1>;
 			interrupt-map-mask = <0 0 0 0>;
-			interrupt-map = <0 0 0 0 &gic GIC_SPI 192 IRQ_TYPE_NONE>;
+			interrupt-map = <0 0 0 0 &gic GIC_SPI 192 IRQ_TYPE_LEVEL_HIGH>;
 
 			linux,pci-domain = <1>;
 
···
 				compatible = "brcm,iproc-msi";
 				msi-controller;
 				interrupt-parent = <&gic>;
-				interrupts = <GIC_SPI 188 IRQ_TYPE_NONE>,
-					     <GIC_SPI 189 IRQ_TYPE_NONE>,
-					     <GIC_SPI 190 IRQ_TYPE_NONE>,
-					     <GIC_SPI 191 IRQ_TYPE_NONE>;
+				interrupts = <GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>,
+					     <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>,
+					     <GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>,
+					     <GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>;
 				brcm,pcie-msi-inten;
 			};
 		};
+16 -16
arch/arm/boot/dts/bcm-nsp.dtsi
··· 391 391 reg = <0x38000 0x50>; 392 392 #address-cells = <1>; 393 393 #size-cells = <0>; 394 - interrupts = <GIC_SPI 89 IRQ_TYPE_NONE>; 394 + interrupts = <GIC_SPI 89 IRQ_TYPE_LEVEL_HIGH>; 395 395 clock-frequency = <100000>; 396 396 dma-coherent; 397 397 status = "disabled"; ··· 496 496 497 497 #interrupt-cells = <1>; 498 498 interrupt-map-mask = <0 0 0 0>; 499 - interrupt-map = <0 0 0 0 &gic GIC_SPI 131 IRQ_TYPE_NONE>; 499 + interrupt-map = <0 0 0 0 &gic GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>; 500 500 501 501 linux,pci-domain = <0>; 502 502 ··· 519 519 compatible = "brcm,iproc-msi"; 520 520 msi-controller; 521 521 interrupt-parent = <&gic>; 522 - interrupts = <GIC_SPI 127 IRQ_TYPE_NONE>, 523 - <GIC_SPI 128 IRQ_TYPE_NONE>, 524 - <GIC_SPI 129 IRQ_TYPE_NONE>, 525 - <GIC_SPI 130 IRQ_TYPE_NONE>; 522 + interrupts = <GIC_SPI 127 IRQ_TYPE_LEVEL_HIGH>, 523 + <GIC_SPI 128 IRQ_TYPE_LEVEL_HIGH>, 524 + <GIC_SPI 129 IRQ_TYPE_LEVEL_HIGH>, 525 + <GIC_SPI 130 IRQ_TYPE_LEVEL_HIGH>; 526 526 brcm,pcie-msi-inten; 527 527 }; 528 528 }; ··· 533 533 534 534 #interrupt-cells = <1>; 535 535 interrupt-map-mask = <0 0 0 0>; 536 - interrupt-map = <0 0 0 0 &gic GIC_SPI 137 IRQ_TYPE_NONE>; 536 + interrupt-map = <0 0 0 0 &gic GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>; 537 537 538 538 linux,pci-domain = <1>; 539 539 ··· 556 556 compatible = "brcm,iproc-msi"; 557 557 msi-controller; 558 558 interrupt-parent = <&gic>; 559 - interrupts = <GIC_SPI 133 IRQ_TYPE_NONE>, 560 - <GIC_SPI 134 IRQ_TYPE_NONE>, 561 - <GIC_SPI 135 IRQ_TYPE_NONE>, 562 - <GIC_SPI 136 IRQ_TYPE_NONE>; 559 + interrupts = <GIC_SPI 133 IRQ_TYPE_LEVEL_HIGH>, 560 + <GIC_SPI 134 IRQ_TYPE_LEVEL_HIGH>, 561 + <GIC_SPI 135 IRQ_TYPE_LEVEL_HIGH>, 562 + <GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>; 563 563 brcm,pcie-msi-inten; 564 564 }; 565 565 }; ··· 570 570 571 571 #interrupt-cells = <1>; 572 572 interrupt-map-mask = <0 0 0 0>; 573 - interrupt-map = <0 0 0 0 &gic GIC_SPI 143 IRQ_TYPE_NONE>; 573 + interrupt-map = <0 0 0 0 &gic GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>; 574 
574 575 575 linux,pci-domain = <2>; 576 576 ··· 593 593 compatible = "brcm,iproc-msi"; 594 594 msi-controller; 595 595 interrupt-parent = <&gic>; 596 - interrupts = <GIC_SPI 139 IRQ_TYPE_NONE>, 597 - <GIC_SPI 140 IRQ_TYPE_NONE>, 598 - <GIC_SPI 141 IRQ_TYPE_NONE>, 599 - <GIC_SPI 142 IRQ_TYPE_NONE>; 596 + interrupts = <GIC_SPI 139 IRQ_TYPE_LEVEL_HIGH>, 597 + <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>, 598 + <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>, 599 + <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>; 600 600 brcm,pcie-msi-inten; 601 601 }; 602 602 };
+1 -1
arch/arm/boot/dts/bcm5301x.dtsi
···
 		i2c0: i2c@18009000 {
 			compatible = "brcm,iproc-i2c";
 			reg = <0x18009000 0x50>;
-			interrupts = <GIC_SPI 121 IRQ_TYPE_NONE>;
+			interrupts = <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>;
 			#address-cells = <1>;
 			#size-cells = <0>;
 			clock-frequency = <100000>;
···
+1 -5
arch/arm/boot/dts/da850.dtsi
···
 			gpio-controller;
 			#gpio-cells = <2>;
 			reg = <0x226000 0x1000>;
-			interrupts = <42 IRQ_TYPE_EDGE_BOTH
-				43 IRQ_TYPE_EDGE_BOTH 44 IRQ_TYPE_EDGE_BOTH
-				45 IRQ_TYPE_EDGE_BOTH 46 IRQ_TYPE_EDGE_BOTH
-				47 IRQ_TYPE_EDGE_BOTH 48 IRQ_TYPE_EDGE_BOTH
-				49 IRQ_TYPE_EDGE_BOTH 50 IRQ_TYPE_EDGE_BOTH>;
+			interrupts = <42 43 44 45 46 47 48 49 50>;
 			ti,ngpio = <144>;
 			ti,davinci-gpio-unbanked = <0>;
 			status = "disabled";
···
+1 -1
arch/arm/boot/dts/imx6q.dtsi
···
 			clocks = <&clks IMX6Q_CLK_ECSPI5>,
 				 <&clks IMX6Q_CLK_ECSPI5>;
 			clock-names = "ipg", "per";
-			dmas = <&sdma 11 7 1>, <&sdma 12 7 2>;
+			dmas = <&sdma 11 8 1>, <&sdma 12 8 2>;
 			dma-names = "rx", "tx";
 			status = "disabled";
 		};
···
+1 -1
arch/arm/boot/dts/imx6sx.dtsi
···
 			ranges = <0x81000000 0 0          0x08f80000 0 0x00010000   /* downstream I/O */
 				  0x82000000 0 0x08000000 0x08000000 0 0x00f00000>; /* non-prefetchable memory */
 			num-lanes = <1>;
-			interrupts = <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>;
+			interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>;
 			interrupt-names = "msi";
 			#interrupt-cells = <1>;
 			interrupt-map-mask = <0 0 0 0x7>;
···
+2 -2
arch/arm/boot/dts/socfpga.dtsi
···
 		nand0: nand@ff900000 {
 			#address-cells = <0x1>;
 			#size-cells = <0x1>;
-			compatible = "denali,denali-nand-dt";
+			compatible = "altr,socfpga-denali-nand";
 			reg = <0xff900000 0x100000>,
 			      <0xffb80000 0x10000>;
 			reg-names = "nand_data", "denali_reg";
 			interrupts = <0x0 0x90 0x4>;
 			dma-mask = <0xffffffff>;
-			clocks = <&nand_clk>;
+			clocks = <&nand_x_clk>;
 			status = "disabled";
 		};
 
···
+2 -3
arch/arm/boot/dts/socfpga_arria10.dtsi
···
 			#size-cells = <0>;
 			reg = <0xffda5000 0x100>;
 			interrupts = <0 102 4>;
-			num-chipselect = <4>;
-			bus-num = <0>;
+			num-cs = <4>;
 			/*32bit_access;*/
 			tx-dma-channel = <&pdma 16>;
 			rx-dma-channel = <&pdma 17>;
···
 		nand: nand@ffb90000 {
 			#address-cells = <1>;
 			#size-cells = <1>;
-			compatible = "denali,denali-nand-dt", "altr,socfpga-denali-nand";
+			compatible = "altr,socfpga-denali-nand";
 			reg = <0xffb90000 0x72000>,
 			      <0xffb80000 0x10000>;
 			reg-names = "nand_data", "denali_reg";
+1 -1
arch/arm/common/Makefile
···
 obj-$(CONFIG_SHARP_LOCOMO)	+= locomo.o
 obj-$(CONFIG_SHARP_PARAM)	+= sharpsl_param.o
 obj-$(CONFIG_SHARP_SCOOP)	+= scoop.o
-obj-$(CONFIG_SMP)		+= secure_cntvoff.o
+obj-$(CONFIG_CPU_V7)		+= secure_cntvoff.o
 obj-$(CONFIG_PCI_HOST_ITE8152)	+= it8152.o
 obj-$(CONFIG_MCPM)		+= mcpm_head.o mcpm_entry.o mcpm_platsmp.o vlock.o
 CFLAGS_REMOVE_mcpm_entry.o	= -pg
+161 -229
arch/arm/configs/multi_v7_defconfig
··· 1 1 CONFIG_SYSVIPC=y 2 - CONFIG_FHANDLE=y 3 2 CONFIG_NO_HZ=y 4 3 CONFIG_HIGH_RES_TIMERS=y 5 4 CONFIG_CGROUPS=y ··· 9 10 CONFIG_MODULE_UNLOAD=y 10 11 CONFIG_PARTITION_ADVANCED=y 11 12 CONFIG_CMDLINE_PARTITION=y 12 - CONFIG_ARCH_MULTI_V7=y 13 - # CONFIG_ARCH_MULTI_V5 is not set 14 - # CONFIG_ARCH_MULTI_V4 is not set 15 13 CONFIG_ARCH_VIRT=y 16 14 CONFIG_ARCH_ALPINE=y 17 15 CONFIG_ARCH_ARTPEC=y 18 16 CONFIG_MACH_ARTPEC6=y 19 - CONFIG_ARCH_MVEBU=y 20 - CONFIG_MACH_ARMADA_370=y 21 - CONFIG_MACH_ARMADA_375=y 22 - CONFIG_MACH_ARMADA_38X=y 23 - CONFIG_MACH_ARMADA_39X=y 24 - CONFIG_MACH_ARMADA_XP=y 25 - CONFIG_MACH_DOVE=y 26 17 CONFIG_ARCH_AT91=y 27 18 CONFIG_SOC_SAMA5D2=y 28 19 CONFIG_SOC_SAMA5D3=y ··· 21 32 CONFIG_ARCH_BCM_CYGNUS=y 22 33 CONFIG_ARCH_BCM_HR2=y 23 34 CONFIG_ARCH_BCM_NSP=y 24 - CONFIG_ARCH_BCM_21664=y 25 - CONFIG_ARCH_BCM_281XX=y 26 35 CONFIG_ARCH_BCM_5301X=y 36 + CONFIG_ARCH_BCM_281XX=y 37 + CONFIG_ARCH_BCM_21664=y 27 38 CONFIG_ARCH_BCM2835=y 28 39 CONFIG_ARCH_BCM_63XX=y 29 40 CONFIG_ARCH_BRCMSTB=y ··· 32 43 CONFIG_MACH_BERLIN_BG2CD=y 33 44 CONFIG_MACH_BERLIN_BG2Q=y 34 45 CONFIG_ARCH_DIGICOLOR=y 46 + CONFIG_ARCH_EXYNOS=y 47 + CONFIG_EXYNOS5420_MCPM=y 35 48 CONFIG_ARCH_HIGHBANK=y 36 49 CONFIG_ARCH_HISI=y 37 50 CONFIG_ARCH_HI3xxx=y 38 - CONFIG_ARCH_HIX5HD2=y 39 51 CONFIG_ARCH_HIP01=y 40 52 CONFIG_ARCH_HIP04=y 41 - CONFIG_ARCH_KEYSTONE=y 42 - CONFIG_ARCH_MESON=y 53 + CONFIG_ARCH_HIX5HD2=y 43 54 CONFIG_ARCH_MXC=y 44 55 CONFIG_SOC_IMX50=y 45 56 CONFIG_SOC_IMX51=y ··· 49 60 CONFIG_SOC_IMX6SX=y 50 61 CONFIG_SOC_IMX6UL=y 51 62 CONFIG_SOC_IMX7D=y 52 - CONFIG_SOC_VF610=y 53 63 CONFIG_SOC_LS1021A=y 64 + CONFIG_SOC_VF610=y 65 + CONFIG_ARCH_KEYSTONE=y 66 + CONFIG_ARCH_MEDIATEK=y 67 + CONFIG_ARCH_MESON=y 68 + CONFIG_ARCH_MVEBU=y 69 + CONFIG_MACH_ARMADA_370=y 70 + CONFIG_MACH_ARMADA_375=y 71 + CONFIG_MACH_ARMADA_38X=y 72 + CONFIG_MACH_ARMADA_39X=y 73 + CONFIG_MACH_ARMADA_XP=y 74 + CONFIG_MACH_DOVE=y 54 75 CONFIG_ARCH_OMAP3=y 55 76 CONFIG_ARCH_OMAP4=y 56 77 
CONFIG_SOC_OMAP5=y 57 78 CONFIG_SOC_AM33XX=y 58 79 CONFIG_SOC_AM43XX=y 59 80 CONFIG_SOC_DRA7XX=y 81 + CONFIG_ARCH_SIRF=y 60 82 CONFIG_ARCH_QCOM=y 61 - CONFIG_ARCH_MEDIATEK=y 62 83 CONFIG_ARCH_MSM8X60=y 63 84 CONFIG_ARCH_MSM8960=y 64 85 CONFIG_ARCH_MSM8974=y 65 86 CONFIG_ARCH_ROCKCHIP=y 66 - CONFIG_ARCH_SOCFPGA=y 67 - CONFIG_PLAT_SPEAR=y 68 - CONFIG_ARCH_SPEAR13XX=y 69 - CONFIG_MACH_SPEAR1310=y 70 - CONFIG_MACH_SPEAR1340=y 71 - CONFIG_ARCH_STI=y 72 - CONFIG_ARCH_STM32=y 73 - CONFIG_ARCH_EXYNOS=y 74 - CONFIG_EXYNOS5420_MCPM=y 75 87 CONFIG_ARCH_RENESAS=y 76 88 CONFIG_ARCH_EMEV2=y 77 89 CONFIG_ARCH_R7S72100=y ··· 89 99 CONFIG_ARCH_R8A7793=y 90 100 CONFIG_ARCH_R8A7794=y 91 101 CONFIG_ARCH_SH73A0=y 102 + CONFIG_ARCH_SOCFPGA=y 103 + CONFIG_PLAT_SPEAR=y 104 + CONFIG_ARCH_SPEAR13XX=y 105 + CONFIG_MACH_SPEAR1310=y 106 + CONFIG_MACH_SPEAR1340=y 107 + CONFIG_ARCH_STI=y 108 + CONFIG_ARCH_STM32=y 92 109 CONFIG_ARCH_SUNXI=y 93 - CONFIG_ARCH_SIRF=y 94 110 CONFIG_ARCH_TEGRA=y 95 - CONFIG_ARCH_TEGRA_2x_SOC=y 96 - CONFIG_ARCH_TEGRA_3x_SOC=y 97 - CONFIG_ARCH_TEGRA_114_SOC=y 98 - CONFIG_ARCH_TEGRA_124_SOC=y 99 111 CONFIG_ARCH_UNIPHIER=y 100 112 CONFIG_ARCH_U8500=y 101 - CONFIG_MACH_HREFV60=y 102 - CONFIG_MACH_SNOWBALL=y 103 113 CONFIG_ARCH_VEXPRESS=y 104 114 CONFIG_ARCH_VEXPRESS_TC2_PM=y 105 115 CONFIG_ARCH_WM8850=y 106 116 CONFIG_ARCH_ZYNQ=y 107 - CONFIG_TRUSTED_FOUNDATIONS=y 108 - CONFIG_PCI=y 109 - CONFIG_PCI_HOST_GENERIC=y 110 - CONFIG_PCI_DRA7XX=y 111 - CONFIG_PCI_DRA7XX_EP=y 112 - CONFIG_PCI_KEYSTONE=y 113 - CONFIG_PCI_MSI=y 117 + CONFIG_PCIEPORTBUS=y 114 118 CONFIG_PCI_MVEBU=y 115 119 CONFIG_PCI_TEGRA=y 116 120 CONFIG_PCI_RCAR_GEN2=y 117 121 CONFIG_PCIE_RCAR=y 118 - CONFIG_PCIEPORTBUS=y 122 + CONFIG_PCI_DRA7XX_EP=y 123 + CONFIG_PCI_KEYSTONE=y 119 124 CONFIG_PCI_ENDPOINT=y 120 125 CONFIG_PCI_ENDPOINT_CONFIGFS=y 121 126 CONFIG_PCI_EPF_TEST=m 122 127 CONFIG_SMP=y 123 128 CONFIG_NR_CPUS=16 124 - CONFIG_HIGHPTE=y 125 - CONFIG_CMA=y 126 129 CONFIG_SECCOMP=y 127 130 
CONFIG_ARM_APPENDED_DTB=y 128 131 CONFIG_ARM_ATAG_DTB_COMPAT=y ··· 128 145 CONFIG_CPU_FREQ_GOV_USERSPACE=m 129 146 CONFIG_CPU_FREQ_GOV_CONSERVATIVE=m 130 147 CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y 148 + CONFIG_CPUFREQ_DT=y 131 149 CONFIG_ARM_IMX6Q_CPUFREQ=y 132 150 CONFIG_QORIQ_CPUFREQ=y 133 151 CONFIG_CPU_IDLE=y 134 152 CONFIG_ARM_CPUIDLE=y 135 - CONFIG_NEON=y 136 - CONFIG_KERNEL_MODE_NEON=y 137 153 CONFIG_ARM_ZYNQ_CPUIDLE=y 138 154 CONFIG_ARM_EXYNOS_CPUIDLE=y 155 + CONFIG_KERNEL_MODE_NEON=y 139 156 CONFIG_NET=y 140 157 CONFIG_PACKET=y 141 158 CONFIG_UNIX=y ··· 153 170 CONFIG_IPV6_TUNNEL=m 154 171 CONFIG_IPV6_MULTIPLE_TABLES=y 155 172 CONFIG_NET_DSA=m 156 - CONFIG_NET_SWITCHDEV=y 157 173 CONFIG_CAN=y 158 - CONFIG_CAN_RAW=y 159 - CONFIG_CAN_BCM=y 160 - CONFIG_CAN_DEV=y 161 174 CONFIG_CAN_AT91=m 162 175 CONFIG_CAN_FLEXCAN=m 163 - CONFIG_CAN_RCAR=m 164 - CONFIG_CAN_XILINXCAN=y 165 - CONFIG_CAN_MCP251X=y 166 - CONFIG_NET_DSA_BCM_SF2=m 167 - CONFIG_B53=m 168 - CONFIG_B53_SPI_DRIVER=m 169 - CONFIG_B53_MDIO_DRIVER=m 170 - CONFIG_B53_MMAP_DRIVER=m 171 - CONFIG_B53_SRAB_DRIVER=m 172 176 CONFIG_CAN_SUN4I=y 177 + CONFIG_CAN_XILINXCAN=y 178 + CONFIG_CAN_RCAR=m 179 + CONFIG_CAN_MCP251X=y 173 180 CONFIG_BT=m 174 181 CONFIG_BT_HCIUART=m 175 182 CONFIG_BT_HCIUART_BCM=y ··· 172 199 CONFIG_RFKILL_GPIO=y 173 200 CONFIG_DEVTMPFS=y 174 201 CONFIG_DEVTMPFS_MOUNT=y 175 - CONFIG_DMA_CMA=y 176 202 CONFIG_CMA_SIZE_MBYTES=64 177 203 CONFIG_OMAP_OCP2SCP=y 178 204 CONFIG_SIMPLE_PM_BUS=y 179 - CONFIG_SUNXI_RSB=y 180 205 CONFIG_MTD=y 181 206 CONFIG_MTD_CMDLINE_PARTS=y 182 207 CONFIG_MTD_BLOCK=y ··· 207 236 CONFIG_EEPROM_AT24=y 208 237 CONFIG_BLK_DEV_SD=y 209 238 CONFIG_BLK_DEV_SR=y 210 - CONFIG_SCSI_MULTI_LUN=y 211 239 CONFIG_ATA=y 212 240 CONFIG_SATA_AHCI=y 213 241 CONFIG_SATA_AHCI_PLATFORM=y ··· 221 251 CONFIG_SATA_RCAR=y 222 252 CONFIG_NETDEVICES=y 223 253 CONFIG_VIRTIO_NET=y 224 - CONFIG_HIX5HD2_GMAC=y 254 + CONFIG_B53_SPI_DRIVER=m 255 + CONFIG_B53_MDIO_DRIVER=m 256 + CONFIG_B53_MMAP_DRIVER=m 257 
+ CONFIG_B53_SRAB_DRIVER=m 258 + CONFIG_NET_DSA_BCM_SF2=m 225 259 CONFIG_SUN4I_EMAC=y 226 - CONFIG_MACB=y 227 260 CONFIG_BCMGENET=m 228 261 CONFIG_BGMAC_BCMA=y 229 262 CONFIG_SYSTEMPORT=m 263 + CONFIG_MACB=y 230 264 CONFIG_NET_CALXEDA_XGMAC=y 231 265 CONFIG_GIANFAR=y 266 + CONFIG_HIX5HD2_GMAC=y 267 + CONFIG_E1000E=y 232 268 CONFIG_IGB=y 233 269 CONFIG_MV643XX_ETH=y 234 270 CONFIG_MVNETA=y ··· 244 268 CONFIG_SH_ETH=y 245 269 CONFIG_SMSC911X=y 246 270 CONFIG_STMMAC_ETH=y 247 - CONFIG_STMMAC_PLATFORM=y 248 271 CONFIG_DWMAC_DWC_QOS_ETH=y 249 272 CONFIG_TI_CPSW=y 250 273 CONFIG_XILINX_EMACLITE=y 251 274 CONFIG_AT803X_PHY=y 252 - CONFIG_MARVELL_PHY=y 253 - CONFIG_SMSC_PHY=y 254 275 CONFIG_BROADCOM_PHY=y 255 276 CONFIG_ICPLUS_PHY=y 256 - CONFIG_REALTEK_PHY=y 277 + CONFIG_MARVELL_PHY=y 257 278 CONFIG_MICREL_PHY=y 258 - CONFIG_FIXED_PHY=y 279 + CONFIG_REALTEK_PHY=y 259 280 CONFIG_ROCKCHIP_PHY=y 281 + CONFIG_SMSC_PHY=y 260 282 CONFIG_USB_PEGASUS=y 261 283 CONFIG_USB_RTL8152=m 262 284 CONFIG_USB_LAN78XX=m ··· 262 288 CONFIG_USB_NET_SMSC75XX=y 263 289 CONFIG_USB_NET_SMSC95XX=y 264 290 CONFIG_BRCMFMAC=m 265 - CONFIG_RT2X00=m 266 - CONFIG_RT2800USB=m 267 291 CONFIG_MWIFIEX=m 268 292 CONFIG_MWIFIEX_SDIO=m 293 + CONFIG_RT2X00=m 294 + CONFIG_RT2800USB=m 269 295 CONFIG_INPUT_JOYDEV=y 270 296 CONFIG_INPUT_EVDEV=y 271 297 CONFIG_KEYBOARD_QT1070=m 272 298 CONFIG_KEYBOARD_GPIO=y 273 299 CONFIG_KEYBOARD_TEGRA=y 274 - CONFIG_KEYBOARD_SPEAR=y 275 - CONFIG_KEYBOARD_ST_KEYSCAN=y 276 - CONFIG_KEYBOARD_CROS_EC=m 277 300 CONFIG_KEYBOARD_SAMSUNG=m 301 + CONFIG_KEYBOARD_ST_KEYSCAN=y 302 + CONFIG_KEYBOARD_SPEAR=y 303 + CONFIG_KEYBOARD_CROS_EC=m 278 304 CONFIG_MOUSE_PS2_ELANTECH=y 279 305 CONFIG_MOUSE_CYAPA=m 280 306 CONFIG_MOUSE_ELAN_I2C=y 281 307 CONFIG_INPUT_TOUCHSCREEN=y 282 308 CONFIG_TOUCHSCREEN_ATMEL_MXT=m 283 309 CONFIG_TOUCHSCREEN_MMS114=m 310 + CONFIG_TOUCHSCREEN_WM97XX=m 284 311 CONFIG_TOUCHSCREEN_ST1232=m 285 312 CONFIG_TOUCHSCREEN_STMPE=y 286 313 CONFIG_TOUCHSCREEN_SUN4I=y 287 - 
CONFIG_TOUCHSCREEN_WM97XX=m 288 314 CONFIG_INPUT_MISC=y 289 315 CONFIG_INPUT_MAX77693_HAPTIC=m 290 316 CONFIG_INPUT_MAX8997_HAPTIC=m ··· 301 327 CONFIG_SERIAL_8250_EM=y 302 328 CONFIG_SERIAL_8250_MT6577=y 303 329 CONFIG_SERIAL_8250_UNIPHIER=y 330 + CONFIG_SERIAL_OF_PLATFORM=y 304 331 CONFIG_SERIAL_AMBA_PL011=y 305 332 CONFIG_SERIAL_AMBA_PL011_CONSOLE=y 306 333 CONFIG_SERIAL_ATMEL=y 307 334 CONFIG_SERIAL_ATMEL_CONSOLE=y 308 335 CONFIG_SERIAL_ATMEL_TTYAT=y 309 - CONFIG_SERIAL_BCM63XX=y 310 - CONFIG_SERIAL_BCM63XX_CONSOLE=y 311 336 CONFIG_SERIAL_MESON=y 312 337 CONFIG_SERIAL_MESON_CONSOLE=y 313 338 CONFIG_SERIAL_SAMSUNG=y ··· 318 345 CONFIG_SERIAL_IMX_CONSOLE=y 319 346 CONFIG_SERIAL_SH_SCI=y 320 347 CONFIG_SERIAL_SH_SCI_NR_UARTS=20 321 - CONFIG_SERIAL_SH_SCI_CONSOLE=y 322 - CONFIG_SERIAL_SH_SCI_DMA=y 323 348 CONFIG_SERIAL_MSM=y 324 349 CONFIG_SERIAL_MSM_CONSOLE=y 325 350 CONFIG_SERIAL_VT8500=y 326 351 CONFIG_SERIAL_VT8500_CONSOLE=y 327 - CONFIG_SERIAL_OF_PLATFORM=y 328 352 CONFIG_SERIAL_OMAP=y 329 353 CONFIG_SERIAL_OMAP_CONSOLE=y 354 + CONFIG_SERIAL_BCM63XX=y 355 + CONFIG_SERIAL_BCM63XX_CONSOLE=y 330 356 CONFIG_SERIAL_XILINX_PS_UART=y 331 357 CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y 332 358 CONFIG_SERIAL_FSL_LPUART=y ··· 337 365 CONFIG_SERIAL_STM32=y 338 366 CONFIG_SERIAL_STM32_CONSOLE=y 339 367 CONFIG_SERIAL_DEV_BUS=y 340 - CONFIG_HVC_DRIVER=y 341 368 CONFIG_VIRTIO_CONSOLE=y 369 + CONFIG_HW_RANDOM=y 370 + CONFIG_HW_RANDOM_ST=y 342 371 CONFIG_I2C_CHARDEV=y 343 - CONFIG_I2C_DAVINCI=y 344 - CONFIG_I2C_MESON=y 345 - CONFIG_I2C_MUX=y 346 372 CONFIG_I2C_ARB_GPIO_CHALLENGE=m 347 373 CONFIG_I2C_MUX_PCA954x=y 348 374 CONFIG_I2C_MUX_PINCTRL=y ··· 348 378 CONFIG_I2C_AT91=m 349 379 CONFIG_I2C_BCM2835=y 350 380 CONFIG_I2C_CADENCE=y 381 + CONFIG_I2C_DAVINCI=y 351 382 CONFIG_I2C_DESIGNWARE_PLATFORM=y 352 383 CONFIG_I2C_DIGICOLOR=m 353 384 CONFIG_I2C_EMEV2=m 354 385 CONFIG_I2C_GPIO=m 355 - CONFIG_I2C_EXYNOS5=y 356 386 CONFIG_I2C_IMX=y 387 + CONFIG_I2C_MESON=y 357 388 
CONFIG_I2C_MV64XXX=y 358 389 CONFIG_I2C_RIIC=y 359 390 CONFIG_I2C_RK3X=y ··· 398 427 CONFIG_SPMI=y 399 428 CONFIG_PINCTRL_AS3722=y 400 429 CONFIG_PINCTRL_PALMAS=y 401 - CONFIG_PINCTRL_BCM2835=y 402 430 CONFIG_PINCTRL_APQ8064=y 403 431 CONFIG_PINCTRL_APQ8084=y 404 432 CONFIG_PINCTRL_IPQ8064=y ··· 407 437 CONFIG_PINCTRL_MSM8916=y 408 438 CONFIG_PINCTRL_QCOM_SPMI_PMIC=y 409 439 CONFIG_PINCTRL_QCOM_SSBI_PMIC=y 410 - CONFIG_GPIO_GENERIC_PLATFORM=y 411 440 CONFIG_GPIO_DAVINCI=y 412 441 CONFIG_GPIO_DWAPB=y 413 442 CONFIG_GPIO_EM=y 414 443 CONFIG_GPIO_RCAR=y 444 + CONFIG_GPIO_SYSCON=y 415 445 CONFIG_GPIO_UNIPHIER=y 416 446 CONFIG_GPIO_XILINX=y 417 447 CONFIG_GPIO_ZYNQ=y 418 448 CONFIG_GPIO_PCA953X=y 419 449 CONFIG_GPIO_PCA953X_IRQ=y 420 450 CONFIG_GPIO_PCF857X=y 421 - CONFIG_GPIO_TWL4030=y 422 451 CONFIG_GPIO_PALMAS=y 423 - CONFIG_GPIO_SYSCON=y 424 452 CONFIG_GPIO_TPS6586X=y 425 453 CONFIG_GPIO_TPS65910=y 454 + CONFIG_GPIO_TWL4030=y 455 + CONFIG_POWER_AVS=y 456 + CONFIG_ROCKCHIP_IODOMAIN=y 457 + CONFIG_POWER_RESET_AS3722=y 458 + CONFIG_POWER_RESET_GPIO=y 459 + CONFIG_POWER_RESET_GPIO_RESTART=y 460 + CONFIG_POWER_RESET_ST=y 461 + CONFIG_POWER_RESET_KEYSTONE=y 462 + CONFIG_POWER_RESET_RMOBILE=y 426 463 CONFIG_BATTERY_ACT8945A=y 427 464 CONFIG_BATTERY_CPCAP=m 428 465 CONFIG_BATTERY_SBS=y 466 + CONFIG_AXP20X_POWER=m 429 467 CONFIG_BATTERY_MAX17040=m 430 468 CONFIG_BATTERY_MAX17042=m 431 469 CONFIG_CHARGER_CPCAP=m ··· 442 464 CONFIG_CHARGER_MAX8997=m 443 465 CONFIG_CHARGER_MAX8998=m 444 466 CONFIG_CHARGER_TPS65090=y 445 - CONFIG_AXP20X_POWER=m 446 - CONFIG_POWER_RESET_AS3722=y 447 - CONFIG_POWER_RESET_GPIO=y 448 - CONFIG_POWER_RESET_GPIO_RESTART=y 449 - CONFIG_POWER_RESET_KEYSTONE=y 450 - CONFIG_POWER_RESET_RMOBILE=y 451 - CONFIG_POWER_RESET_ST=y 452 - CONFIG_POWER_AVS=y 453 - CONFIG_ROCKCHIP_IODOMAIN=y 454 467 CONFIG_SENSORS_IIO_HWMON=y 455 468 CONFIG_SENSORS_LM90=y 456 469 CONFIG_SENSORS_LM95245=y ··· 449 480 CONFIG_SENSORS_PWM_FAN=m 450 481 CONFIG_SENSORS_INA2XX=m 451 482 
CONFIG_CPU_THERMAL=y 452 - CONFIG_BCM2835_THERMAL=m 453 - CONFIG_BRCMSTB_THERMAL=m 454 483 CONFIG_IMX_THERMAL=y 455 484 CONFIG_ROCKCHIP_THERMAL=y 456 485 CONFIG_RCAR_THERMAL=y 457 486 CONFIG_ARMADA_THERMAL=y 458 - CONFIG_DAVINCI_WATCHDOG=m 459 - CONFIG_EXYNOS_THERMAL=m 487 + CONFIG_BCM2835_THERMAL=m 488 + CONFIG_BRCMSTB_THERMAL=m 460 489 CONFIG_ST_THERMAL_MEMMAP=y 461 490 CONFIG_WATCHDOG=y 462 491 CONFIG_DA9063_WATCHDOG=m ··· 462 495 CONFIG_ARM_SP805_WATCHDOG=y 463 496 CONFIG_AT91SAM9X_WATCHDOG=y 464 497 CONFIG_SAMA5D4_WATCHDOG=y 498 + CONFIG_DW_WATCHDOG=y 499 + CONFIG_DAVINCI_WATCHDOG=m 465 500 CONFIG_ORION_WATCHDOG=y 466 501 CONFIG_RN5T618_WATCHDOG=y 467 - CONFIG_ST_LPC_WATCHDOG=y 468 502 CONFIG_SUNXI_WATCHDOG=y 469 503 CONFIG_IMX2_WDT=y 504 + CONFIG_ST_LPC_WATCHDOG=y 470 505 CONFIG_TEGRA_WATCHDOG=m 471 506 CONFIG_MESON_WATCHDOG=y 472 - CONFIG_DW_WATCHDOG=y 473 507 CONFIG_DIGICOLOR_WATCHDOG=y 474 508 CONFIG_RENESAS_WDT=m 475 - CONFIG_BCM2835_WDT=y 476 509 CONFIG_BCM47XX_WDT=y 477 - CONFIG_BCM7038_WDT=m 510 + CONFIG_BCM2835_WDT=y 478 511 CONFIG_BCM_KONA_WDT=y 512 + CONFIG_BCM7038_WDT=m 513 + CONFIG_BCMA_HOST_SOC=y 514 + CONFIG_BCMA_DRIVER_GMAC_CMN=y 515 + CONFIG_BCMA_DRIVER_GPIO=y 479 516 CONFIG_MFD_ACT8945A=y 480 517 CONFIG_MFD_AS3711=y 481 518 CONFIG_MFD_AS3722=y ··· 487 516 CONFIG_MFD_ATMEL_HLCDC=m 488 517 CONFIG_MFD_BCM590XX=y 489 518 CONFIG_MFD_AC100=y 490 - CONFIG_MFD_AXP20X=y 491 519 CONFIG_MFD_AXP20X_I2C=y 492 520 CONFIG_MFD_AXP20X_RSB=y 493 521 CONFIG_MFD_CROS_EC=m ··· 499 529 CONFIG_MFD_MAX8907=y 500 530 CONFIG_MFD_MAX8997=y 501 531 CONFIG_MFD_MAX8998=y 502 - CONFIG_MFD_RK808=y 503 532 CONFIG_MFD_CPCAP=y 504 533 CONFIG_MFD_PM8XXX=y 505 534 CONFIG_MFD_QCOM_RPM=y 506 535 CONFIG_MFD_SPMI_PMIC=y 536 + CONFIG_MFD_RK808=y 507 537 CONFIG_MFD_RN5T618=y 508 538 CONFIG_MFD_SEC_CORE=y 509 539 CONFIG_MFD_STMPE=y ··· 513 543 CONFIG_MFD_TPS65218=y 514 544 CONFIG_MFD_TPS6586X=y 515 545 CONFIG_MFD_TPS65910=y 516 - CONFIG_REGULATOR_ACT8945A=y 517 - 
CONFIG_REGULATOR_AB8500=y 518 546 CONFIG_REGULATOR_ACT8865=y 547 + CONFIG_REGULATOR_ACT8945A=y 519 548 CONFIG_REGULATOR_ANATOP=y 549 + CONFIG_REGULATOR_AB8500=y 520 550 CONFIG_REGULATOR_AS3711=y 521 551 CONFIG_REGULATOR_AS3722=y 522 552 CONFIG_REGULATOR_AXP20X=y ··· 524 554 CONFIG_REGULATOR_CPCAP=y 525 555 CONFIG_REGULATOR_DA9210=y 526 556 CONFIG_REGULATOR_FAN53555=y 527 - CONFIG_REGULATOR_RK808=y 528 557 CONFIG_REGULATOR_GPIO=y 529 - CONFIG_MFD_SYSCON=y 530 - CONFIG_POWER_RESET_SYSCON=y 531 558 CONFIG_REGULATOR_LP872X=y 532 559 CONFIG_REGULATOR_MAX14577=m 533 560 CONFIG_REGULATOR_MAX8907=y ··· 538 571 CONFIG_REGULATOR_PBIAS=y 539 572 CONFIG_REGULATOR_PWM=y 540 573 CONFIG_REGULATOR_QCOM_RPM=y 541 - CONFIG_REGULATOR_QCOM_SMD_RPM=y 574 + CONFIG_REGULATOR_QCOM_SMD_RPM=m 575 + CONFIG_REGULATOR_RK808=y 542 576 CONFIG_REGULATOR_RN5T618=y 543 577 CONFIG_REGULATOR_S2MPS11=y 544 578 CONFIG_REGULATOR_S5M8767=y ··· 560 592 CONFIG_MEDIA_CONTROLLER=y 561 593 CONFIG_VIDEO_V4L2_SUBDEV_API=y 562 594 CONFIG_MEDIA_USB_SUPPORT=y 563 - CONFIG_USB_VIDEO_CLASS=y 564 - CONFIG_USB_GSPCA=y 595 + CONFIG_USB_VIDEO_CLASS=m 565 596 CONFIG_V4L_PLATFORM_DRIVERS=y 566 597 CONFIG_SOC_CAMERA=m 567 598 CONFIG_SOC_CAMERA_PLATFORM=m 568 - CONFIG_VIDEO_RCAR_VIN=m 569 - CONFIG_VIDEO_ATMEL_ISI=m 570 599 CONFIG_VIDEO_SAMSUNG_EXYNOS4_IS=m 571 600 CONFIG_VIDEO_S5P_FIMC=m 572 601 CONFIG_VIDEO_S5P_MIPI_CSIS=m 573 602 CONFIG_VIDEO_EXYNOS_FIMC_LITE=m 574 603 CONFIG_VIDEO_EXYNOS4_FIMC_IS=m 604 + CONFIG_VIDEO_RCAR_VIN=m 605 + CONFIG_VIDEO_ATMEL_ISI=m 575 606 CONFIG_V4L_MEM2MEM_DRIVERS=y 576 607 CONFIG_VIDEO_SAMSUNG_S5P_JPEG=m 577 608 CONFIG_VIDEO_SAMSUNG_S5P_MFC=m ··· 581 614 CONFIG_VIDEO_RENESAS_JPU=m 582 615 CONFIG_VIDEO_RENESAS_VSP1=m 583 616 CONFIG_V4L_TEST_DRIVERS=y 617 + CONFIG_VIDEO_VIVID=m 584 618 CONFIG_CEC_PLATFORM_DRIVERS=y 585 619 CONFIG_VIDEO_SAMSUNG_S5P_CEC=m 586 620 # CONFIG_MEDIA_SUBDRV_AUTOSELECT is not set 587 621 CONFIG_VIDEO_ADV7180=m 588 622 CONFIG_VIDEO_ML86V7667=m 589 623 CONFIG_DRM=y 590 - 
CONFIG_DRM_I2C_ADV7511=m 591 - CONFIG_DRM_I2C_ADV7511_AUDIO=y 592 624 # CONFIG_DRM_I2C_CH7006 is not set 593 625 # CONFIG_DRM_I2C_SIL164 is not set 594 - CONFIG_DRM_DUMB_VGA_DAC=m 595 - CONFIG_DRM_NXP_PTN3460=m 596 - CONFIG_DRM_PARADE_PS8622=m 597 626 CONFIG_DRM_NOUVEAU=m 598 627 CONFIG_DRM_EXYNOS=m 599 628 CONFIG_DRM_EXYNOS_FIMD=y ··· 608 645 CONFIG_DRM_SUN4I=m 609 646 CONFIG_DRM_FSL_DCU=m 610 647 CONFIG_DRM_TEGRA=y 648 + CONFIG_DRM_PANEL_SIMPLE=y 611 649 CONFIG_DRM_PANEL_SAMSUNG_LD9040=m 612 650 CONFIG_DRM_PANEL_SAMSUNG_S6E63J0X03=m 613 651 CONFIG_DRM_PANEL_SAMSUNG_S6E8AA0=m 614 - CONFIG_DRM_PANEL_SIMPLE=y 652 + CONFIG_DRM_DUMB_VGA_DAC=m 653 + CONFIG_DRM_NXP_PTN3460=m 654 + CONFIG_DRM_PARADE_PS8622=m 615 655 CONFIG_DRM_SII9234=m 656 + CONFIG_DRM_I2C_ADV7511=m 657 + CONFIG_DRM_I2C_ADV7511_AUDIO=y 616 658 CONFIG_DRM_STI=m 617 - CONFIG_DRM_VC4=y 659 + CONFIG_DRM_VC4=m 618 660 CONFIG_DRM_ETNAVIV=m 619 661 CONFIG_DRM_MXSFB=m 620 662 CONFIG_FB_ARMCLCD=y ··· 627 659 CONFIG_FB_WM8505=y 628 660 CONFIG_FB_SH_MOBILE_LCDC=y 629 661 CONFIG_FB_SIMPLE=y 630 - CONFIG_BACKLIGHT_LCD_SUPPORT=y 631 - CONFIG_BACKLIGHT_CLASS_DEVICE=y 632 662 CONFIG_LCD_PLATFORM=m 633 663 CONFIG_BACKLIGHT_PWM=y 634 664 CONFIG_BACKLIGHT_AS3711=y ··· 634 668 CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y 635 669 CONFIG_SOUND=m 636 670 CONFIG_SND=m 637 - CONFIG_SND_DYNAMIC_MINORS=y 638 671 CONFIG_SND_HDA_TEGRA=m 639 672 CONFIG_SND_HDA_INPUT_BEEP=y 640 673 CONFIG_SND_HDA_PATCH_LOADER=y ··· 657 692 CONFIG_SND_SOC_ODROID=m 658 693 CONFIG_SND_SOC_SH4_FSI=m 659 694 CONFIG_SND_SOC_RCAR=m 660 - CONFIG_SND_SIMPLE_SCU_CARD=m 695 + CONFIG_SND_SOC_STI=m 661 696 CONFIG_SND_SUN4I_CODEC=m 662 697 CONFIG_SND_SOC_TEGRA=m 663 698 CONFIG_SND_SOC_TEGRA20_I2S=m ··· 668 703 CONFIG_SND_SOC_TEGRA_WM9712=m 669 704 CONFIG_SND_SOC_TEGRA_TRIMSLICE=m 670 705 CONFIG_SND_SOC_TEGRA_ALC5632=m 671 - CONFIG_SND_SOC_CPCAP=m 672 706 CONFIG_SND_SOC_TEGRA_MAX98090=m 673 707 CONFIG_SND_SOC_AK4642=m 708 + CONFIG_SND_SOC_CPCAP=m 674 709 
CONFIG_SND_SOC_SGTL5000=m 675 710 CONFIG_SND_SOC_SPDIF=m 676 - CONFIG_SND_SOC_WM8978=m 677 - CONFIG_SND_SOC_STI=m 678 711 CONFIG_SND_SOC_STI_SAS=m 679 - CONFIG_SND_SIMPLE_CARD=m 712 + CONFIG_SND_SOC_WM8978=m 713 + CONFIG_SND_SIMPLE_SCU_CARD=m 680 714 CONFIG_USB=y 681 715 CONFIG_USB_OTG=y 682 716 CONFIG_USB_XHCI_HCD=y 683 717 CONFIG_USB_XHCI_MVEBU=y 684 - CONFIG_USB_XHCI_RCAR=m 685 718 CONFIG_USB_XHCI_TEGRA=m 686 719 CONFIG_USB_EHCI_HCD=y 687 - CONFIG_USB_EHCI_MSM=m 688 - CONFIG_USB_EHCI_EXYNOS=y 689 - CONFIG_USB_EHCI_TEGRA=y 690 720 CONFIG_USB_EHCI_HCD_STI=y 691 - CONFIG_USB_EHCI_HCD_PLATFORM=y 692 - CONFIG_USB_ISP1760=y 721 + CONFIG_USB_EHCI_TEGRA=y 722 + CONFIG_USB_EHCI_EXYNOS=y 693 723 CONFIG_USB_OHCI_HCD=y 694 724 CONFIG_USB_OHCI_HCD_STI=y 695 - CONFIG_USB_OHCI_HCD_PLATFORM=y 696 725 CONFIG_USB_OHCI_EXYNOS=m 697 726 CONFIG_USB_R8A66597_HCD=m 698 727 CONFIG_USB_RENESAS_USBHS=m ··· 705 746 CONFIG_USB_TUSB_OMAP_DMA=y 706 747 CONFIG_USB_DWC3=y 707 748 CONFIG_USB_DWC2=y 708 - CONFIG_USB_HSIC_USB3503=y 709 749 CONFIG_USB_CHIPIDEA=y 710 750 CONFIG_USB_CHIPIDEA_UDC=y 711 751 CONFIG_USB_CHIPIDEA_HOST=y 752 + CONFIG_USB_ISP1760=y 753 + CONFIG_USB_HSIC_USB3503=y 712 754 CONFIG_AB8500_USB=y 713 - CONFIG_KEYSTONE_USB_PHY=y 755 + CONFIG_KEYSTONE_USB_PHY=m 714 756 CONFIG_NOP_USB_XCEIV=m 715 757 CONFIG_AM335X_PHY_USB=m 716 758 CONFIG_TWL6030_USB=m 717 759 CONFIG_USB_GPIO_VBUS=y 718 760 CONFIG_USB_ISP1301=y 719 - CONFIG_USB_MSM_OTG=m 720 761 CONFIG_USB_MXS_PHY=y 721 762 CONFIG_USB_GADGET=y 722 763 CONFIG_USB_FSL_USB2=y ··· 752 793 CONFIG_MMC_SDHCI_ESDHC_IMX=y 753 794 CONFIG_MMC_SDHCI_DOVE=y 754 795 CONFIG_MMC_SDHCI_TEGRA=y 796 + CONFIG_MMC_SDHCI_S3C=y 755 797 CONFIG_MMC_SDHCI_PXAV3=y 756 798 CONFIG_MMC_SDHCI_SPEAR=y 757 - CONFIG_MMC_SDHCI_S3C=y 758 799 CONFIG_MMC_SDHCI_S3C_DMA=y 759 800 CONFIG_MMC_SDHCI_BCM_KONA=y 801 + CONFIG_MMC_MESON_MX_SDIO=y 760 802 CONFIG_MMC_SDHCI_ST=y 761 803 CONFIG_MMC_OMAP=y 762 804 CONFIG_MMC_OMAP_HS=y 763 805 CONFIG_MMC_ATMELMCI=y 764 806 
CONFIG_MMC_SDHCI_MSM=y 765 - CONFIG_MMC_MESON_MX_SDIO=y 766 807 CONFIG_MMC_MVSDIO=y 767 808 CONFIG_MMC_SDHI=y 768 809 CONFIG_MMC_DW=y 769 - CONFIG_MMC_DW_PLTFM=y 770 810 CONFIG_MMC_DW_EXYNOS=y 771 811 CONFIG_MMC_DW_ROCKCHIP=y 772 812 CONFIG_MMC_SH_MMCIF=y ··· 805 847 CONFIG_RTC_DRV_RK808=m 806 848 CONFIG_RTC_DRV_RS5C372=m 807 849 CONFIG_RTC_DRV_BQ32K=m 808 - CONFIG_RTC_DRV_PALMAS=y 809 - CONFIG_RTC_DRV_ST_LPC=y 810 850 CONFIG_RTC_DRV_TWL4030=y 851 + CONFIG_RTC_DRV_PALMAS=y 811 852 CONFIG_RTC_DRV_TPS6586X=y 812 853 CONFIG_RTC_DRV_TPS65910=y 813 854 CONFIG_RTC_DRV_S35390A=m 814 855 CONFIG_RTC_DRV_RX8581=m 815 856 CONFIG_RTC_DRV_EM3027=y 857 + CONFIG_RTC_DRV_S5M=m 816 858 CONFIG_RTC_DRV_DA9063=m 817 859 CONFIG_RTC_DRV_EFI=m 818 860 CONFIG_RTC_DRV_DIGICOLOR=m 819 - CONFIG_RTC_DRV_S5M=m 820 861 CONFIG_RTC_DRV_S3C=m 821 862 CONFIG_RTC_DRV_PL031=y 822 863 CONFIG_RTC_DRV_AT91RM9200=m 823 864 CONFIG_RTC_DRV_AT91SAM9=m 824 865 CONFIG_RTC_DRV_VT8500=y 825 - CONFIG_RTC_DRV_SUN6I=y 826 866 CONFIG_RTC_DRV_SUNXI=y 827 867 CONFIG_RTC_DRV_MV=y 828 868 CONFIG_RTC_DRV_TEGRA=y 869 + CONFIG_RTC_DRV_ST_LPC=y 829 870 CONFIG_RTC_DRV_CPCAP=m 830 871 CONFIG_DMADEVICES=y 831 - CONFIG_DW_DMAC=y 832 872 CONFIG_AT_HDMAC=y 833 873 CONFIG_AT_XDMAC=y 874 + CONFIG_DMA_BCM2835=y 875 + CONFIG_DMA_SUN6I=y 834 876 CONFIG_FSL_EDMA=y 877 + CONFIG_IMX_DMA=y 878 + CONFIG_IMX_SDMA=y 835 879 CONFIG_MV_XOR=y 880 + CONFIG_MXS_DMA=y 881 + CONFIG_PL330_DMA=y 882 + CONFIG_SIRF_DMA=y 883 + CONFIG_STE_DMA40=y 884 + CONFIG_ST_FDMA=m 836 885 CONFIG_TEGRA20_APB_DMA=y 886 + CONFIG_XILINX_DMA=y 887 + CONFIG_QCOM_BAM_DMA=y 888 + CONFIG_DW_DMAC=y 837 889 CONFIG_SH_DMAE=y 838 890 CONFIG_RCAR_DMAC=y 839 891 CONFIG_RENESAS_USB_DMAC=m 840 - CONFIG_STE_DMA40=y 841 - CONFIG_SIRF_DMA=y 842 - CONFIG_TI_EDMA=y 843 - CONFIG_PL330_DMA=y 844 - CONFIG_IMX_SDMA=y 845 - CONFIG_IMX_DMA=y 846 - CONFIG_MXS_DMA=y 847 - CONFIG_DMA_BCM2835=y 848 - CONFIG_DMA_OMAP=y 849 - CONFIG_QCOM_BAM_DMA=y 850 - CONFIG_XILINX_DMA=y 851 - CONFIG_DMA_SUN6I=y 
852 - CONFIG_ST_FDMA=m 892 + CONFIG_VIRTIO_PCI=y 893 + CONFIG_VIRTIO_MMIO=y 853 894 CONFIG_STAGING=y 854 - CONFIG_SENSORS_ISL29018=y 855 - CONFIG_SENSORS_ISL29028=y 856 895 CONFIG_MFD_NVEC=y 857 896 CONFIG_KEYBOARD_NVEC=y 858 897 CONFIG_SERIO_NVEC_PS2=y 859 898 CONFIG_NVEC_POWER=y 860 899 CONFIG_NVEC_PAZ00=y 861 - CONFIG_BCMA=y 862 - CONFIG_BCMA_HOST_SOC=y 863 - CONFIG_BCMA_DRIVER_GMAC_CMN=y 864 - CONFIG_BCMA_DRIVER_GPIO=y 865 - CONFIG_QCOM_GSBI=y 866 - CONFIG_QCOM_PM=y 867 - CONFIG_QCOM_SMEM=y 868 - CONFIG_QCOM_SMD_RPM=y 869 - CONFIG_QCOM_SMP2P=y 870 - CONFIG_QCOM_SMSM=y 871 - CONFIG_QCOM_WCNSS_CTRL=m 872 - CONFIG_ROCKCHIP_PM_DOMAINS=y 873 - CONFIG_COMMON_CLK_QCOM=y 874 - CONFIG_QCOM_CLK_RPM=y 875 - CONFIG_CHROME_PLATFORMS=y 876 900 CONFIG_STAGING_BOARD=y 877 - CONFIG_CROS_EC_CHARDEV=m 878 901 CONFIG_COMMON_CLK_MAX77686=y 879 902 CONFIG_COMMON_CLK_RK808=m 880 903 CONFIG_COMMON_CLK_S2MPS11=m 904 + CONFIG_COMMON_CLK_QCOM=y 905 + CONFIG_QCOM_CLK_RPM=y 881 906 CONFIG_APQ_MMCC_8084=y 882 907 CONFIG_MSM_GCC_8660=y 883 908 CONFIG_MSM_MMCC_8960=y 884 909 CONFIG_MSM_MMCC_8974=y 885 - CONFIG_HWSPINLOCK_QCOM=y 910 + CONFIG_BCM2835_MBOX=y 886 911 CONFIG_ROCKCHIP_IOMMU=y 887 912 CONFIG_TEGRA_IOMMU_GART=y 888 913 CONFIG_TEGRA_IOMMU_SMMU=y 889 914 CONFIG_REMOTEPROC=m 890 915 CONFIG_ST_REMOTEPROC=m 891 916 CONFIG_RPMSG_VIRTIO=m 917 + CONFIG_RASPBERRYPI_POWER=y 918 + CONFIG_QCOM_GSBI=y 919 + CONFIG_QCOM_PM=y 920 + CONFIG_QCOM_SMD_RPM=m 921 + CONFIG_QCOM_WCNSS_CTRL=m 922 + CONFIG_ROCKCHIP_PM_DOMAINS=y 923 + CONFIG_ARCH_TEGRA_2x_SOC=y 924 + CONFIG_ARCH_TEGRA_3x_SOC=y 925 + CONFIG_ARCH_TEGRA_114_SOC=y 926 + CONFIG_ARCH_TEGRA_124_SOC=y 892 927 CONFIG_PM_DEVFREQ=y 893 928 CONFIG_ARM_TEGRA_DEVFREQ=m 894 - CONFIG_MEMORY=y 895 - CONFIG_EXTCON=y 896 929 CONFIG_TI_AEMIF=y 897 930 CONFIG_IIO=y 898 931 CONFIG_IIO_SW_TRIGGER=y ··· 896 947 CONFIG_XILINX_XADC=y 897 948 CONFIG_MPU3050_I2C=y 898 949 CONFIG_CM36651=m 950 + CONFIG_SENSORS_ISL29018=y 951 + CONFIG_SENSORS_ISL29028=y 899 952 
CONFIG_AK8975=y 900 - CONFIG_RASPBERRYPI_POWER=y 901 953 CONFIG_IIO_HRTIMER_TRIGGER=y 902 954 CONFIG_PWM=y 903 955 CONFIG_PWM_ATMEL=m 904 956 CONFIG_PWM_ATMEL_HLCDC_PWM=m 905 957 CONFIG_PWM_ATMEL_TCB=m 958 + CONFIG_PWM_BCM2835=y 959 + CONFIG_PWM_BRCMSTB=m 906 960 CONFIG_PWM_FSL_FTM=m 907 961 CONFIG_PWM_MESON=m 908 962 CONFIG_PWM_RCAR=m 909 963 CONFIG_PWM_RENESAS_TPU=y 910 964 CONFIG_PWM_ROCKCHIP=m 911 965 CONFIG_PWM_SAMSUNG=m 966 + CONFIG_PWM_STI=y 912 967 CONFIG_PWM_SUN4I=y 913 968 CONFIG_PWM_TEGRA=y 914 969 CONFIG_PWM_VT8500=y 970 + CONFIG_KEYSTONE_IRQ=y 971 + CONFIG_PHY_SUN4I_USB=y 972 + CONFIG_PHY_SUN9I_USB=y 915 973 CONFIG_PHY_HIX5HD2_SATA=y 916 - CONFIG_E1000E=y 917 - CONFIG_PWM_STI=y 918 - CONFIG_PWM_BCM2835=y 919 - CONFIG_PWM_BRCMSTB=m 974 + CONFIG_PHY_BERLIN_SATA=y 975 + CONFIG_PHY_BERLIN_USB=y 976 + CONFIG_PHY_CPCAP_USB=m 977 + CONFIG_PHY_QCOM_APQ8064_SATA=m 978 + CONFIG_PHY_RCAR_GEN2=m 979 + CONFIG_PHY_ROCKCHIP_DP=m 980 + CONFIG_PHY_ROCKCHIP_USB=y 981 + CONFIG_PHY_SAMSUNG_USB2=m 982 + CONFIG_PHY_MIPHY28LP=y 983 + CONFIG_PHY_STIH407_USB=y 984 + CONFIG_PHY_STM32_USBPHYC=y 985 + CONFIG_PHY_TEGRA_XUSB=y 920 986 CONFIG_PHY_DM816X_USB=m 921 987 CONFIG_OMAP_USB2=y 922 988 CONFIG_TI_PIPE3=y 923 989 CONFIG_TWL4030_USB=m 924 - CONFIG_PHY_BERLIN_USB=y 925 - CONFIG_PHY_CPCAP_USB=m 926 - CONFIG_PHY_BERLIN_SATA=y 927 - CONFIG_PHY_ROCKCHIP_DP=m 928 - CONFIG_PHY_ROCKCHIP_USB=y 929 - CONFIG_PHY_QCOM_APQ8064_SATA=m 930 - CONFIG_PHY_MIPHY28LP=y 931 - CONFIG_PHY_RCAR_GEN2=m 932 - CONFIG_PHY_STIH407_USB=y 933 - CONFIG_PHY_STM32_USBPHYC=y 934 - CONFIG_PHY_SUN4I_USB=y 935 - CONFIG_PHY_SUN9I_USB=y 936 - CONFIG_PHY_SAMSUNG_USB2=m 937 - CONFIG_PHY_TEGRA_XUSB=y 938 - CONFIG_PHY_BRCM_SATA=y 939 - CONFIG_NVMEM=y 940 990 CONFIG_NVMEM_IMX_OCOTP=y 941 991 CONFIG_NVMEM_SUNXI_SID=y 942 992 CONFIG_NVMEM_VF610_OCOTP=y 943 - CONFIG_BCM2835_MBOX=y 944 993 CONFIG_RASPBERRYPI_FIRMWARE=y 945 - CONFIG_EFI_VARS=m 946 - CONFIG_EFI_CAPSULE_LOADER=m 947 994 CONFIG_BCM47XX_NVRAM=y 948 995 
CONFIG_BCM47XX_SPROM=y 996 + CONFIG_EFI_VARS=m 997 + CONFIG_EFI_CAPSULE_LOADER=m 949 998 CONFIG_EXT4_FS=y 950 999 CONFIG_AUTOFS4_FS=y 951 1000 CONFIG_MSDOS_FS=y ··· 951 1004 CONFIG_NTFS_FS=y 952 1005 CONFIG_TMPFS_POSIX_ACL=y 953 1006 CONFIG_UBIFS_FS=y 954 - CONFIG_TMPFS=y 955 1007 CONFIG_SQUASHFS=y 956 1008 CONFIG_SQUASHFS_LZO=y 957 1009 CONFIG_SQUASHFS_XZ=y ··· 966 1020 CONFIG_NLS_ISO8859_1=y 967 1021 CONFIG_NLS_UTF8=y 968 1022 CONFIG_PRINTK_TIME=y 969 - CONFIG_DEBUG_FS=y 970 1023 CONFIG_MAGIC_SYSRQ=y 971 - CONFIG_LOCKUP_DETECTOR=y 972 - CONFIG_CPUFREQ_DT=y 973 - CONFIG_KEYSTONE_IRQ=y 974 - CONFIG_HW_RANDOM=y 975 - CONFIG_HW_RANDOM_ST=y 976 1024 CONFIG_CRYPTO_USER=m 977 1025 CONFIG_CRYPTO_USER_API_HASH=m 978 1026 CONFIG_CRYPTO_USER_API_SKCIPHER=m ··· 975 1035 CONFIG_CRYPTO_DEV_MARVELL_CESA=m 976 1036 CONFIG_CRYPTO_DEV_EXYNOS_RNG=m 977 1037 CONFIG_CRYPTO_DEV_S5P=m 1038 + CONFIG_CRYPTO_DEV_ATMEL_AES=m 1039 + CONFIG_CRYPTO_DEV_ATMEL_TDES=m 1040 + CONFIG_CRYPTO_DEV_ATMEL_SHA=m 978 1041 CONFIG_CRYPTO_DEV_SUN4I_SS=m 979 1042 CONFIG_CRYPTO_DEV_ROCKCHIP=m 980 1043 CONFIG_ARM_CRYPTO=y 981 - CONFIG_CRYPTO_SHA1_ARM=m 982 1044 CONFIG_CRYPTO_SHA1_ARM_NEON=m 983 1045 CONFIG_CRYPTO_SHA1_ARM_CE=m 984 1046 CONFIG_CRYPTO_SHA2_ARM_CE=m 985 - CONFIG_CRYPTO_SHA256_ARM=m 986 1047 CONFIG_CRYPTO_SHA512_ARM=m 987 1048 CONFIG_CRYPTO_AES_ARM=m 988 1049 CONFIG_CRYPTO_AES_ARM_BS=m 989 1050 CONFIG_CRYPTO_AES_ARM_CE=m 990 - CONFIG_CRYPTO_CHACHA20_NEON=m 991 - CONFIG_CRYPTO_CRC32_ARM_CE=m 992 - CONFIG_CRYPTO_CRCT10DIF_ARM_CE=m 993 1051 CONFIG_CRYPTO_GHASH_ARM_CE=m 994 - CONFIG_CRYPTO_DEV_ATMEL_AES=m 995 - CONFIG_CRYPTO_DEV_ATMEL_TDES=m 996 - CONFIG_CRYPTO_DEV_ATMEL_SHA=m 997 - CONFIG_VIDEO_VIVID=m 998 - CONFIG_VIRTIO=y 999 - CONFIG_VIRTIO_PCI=y 1000 - CONFIG_VIRTIO_PCI_LEGACY=y 1001 - CONFIG_VIRTIO_MMIO=y 1052 + CONFIG_CRYPTO_CRC32_ARM_CE=m 1053 + CONFIG_CRYPTO_CHACHA20_NEON=m
+1
arch/arm/mach-bcm/Kconfig
··· 20 20 select GPIOLIB 21 21 select ARM_AMBA 22 22 select PINCTRL 23 + select PCI_DOMAINS if PCI 23 24 help 24 25 This enables support for systems based on Broadcom IPROC architected SoCs. 25 26 The IPROC complex contains one or more ARM CPUs along with common
+1 -1
arch/arm/mach-davinci/board-da850-evm.c
··· 774 774 GPIO_LOOKUP("davinci_gpio.0", DA850_MMCSD_CD_PIN, "cd", 775 775 GPIO_ACTIVE_LOW), 776 776 GPIO_LOOKUP("davinci_gpio.0", DA850_MMCSD_WP_PIN, "wp", 777 - GPIO_ACTIVE_LOW), 777 + GPIO_ACTIVE_HIGH), 778 778 }, 779 779 }; 780 780
+1
arch/arm/mach-socfpga/Kconfig
··· 10 10 select HAVE_ARM_SCU 11 11 select HAVE_ARM_TWD if SMP 12 12 select MFD_SYSCON 13 + select PCI_DOMAINS if PCI 13 14 14 15 if ARCH_SOCFPGA 15 16 config SOCFPGA_SUSPEND
+2 -4
arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
··· 309 309 interrupts = <0 99 4>; 310 310 resets = <&rst SPIM0_RESET>; 311 311 reg-io-width = <4>; 312 - num-chipselect = <4>; 313 - bus-num = <0>; 312 + num-cs = <4>; 314 313 status = "disabled"; 315 314 }; 316 315 ··· 321 322 interrupts = <0 100 4>; 322 323 resets = <&rst SPIM1_RESET>; 323 324 reg-io-width = <4>; 324 - num-chipselect = <4>; 325 - bus-num = <0>; 325 + num-cs = <4>; 326 326 status = "disabled"; 327 327 }; 328 328
+14 -1
arch/arm64/boot/dts/amlogic/meson-axg-s400.dts
··· 66 66 67 67 &ethmac { 68 68 status = "okay"; 69 - phy-mode = "rgmii"; 70 69 pinctrl-0 = <&eth_rgmii_y_pins>; 71 70 pinctrl-names = "default"; 71 + phy-handle = <&eth_phy0>; 72 + phy-mode = "rgmii"; 73 + 74 + mdio { 75 + compatible = "snps,dwmac-mdio"; 76 + #address-cells = <1>; 77 + #size-cells = <0>; 78 + 79 + eth_phy0: ethernet-phy@0 { 80 + /* Realtek RTL8211F (0x001cc916) */ 81 + reg = <0>; 82 + eee-broken-1000t; 83 + }; 84 + }; 72 85 }; 73 86 74 87 &uart_A {
+2 -2
arch/arm64/boot/dts/amlogic/meson-axg.dtsi
··· 132 132 133 133 sd_emmc_b: sd@5000 { 134 134 compatible = "amlogic,meson-axg-mmc"; 135 - reg = <0x0 0x5000 0x0 0x2000>; 135 + reg = <0x0 0x5000 0x0 0x800>; 136 136 interrupts = <GIC_SPI 217 IRQ_TYPE_EDGE_RISING>; 137 137 status = "disabled"; 138 138 clocks = <&clkc CLKID_SD_EMMC_B>, ··· 144 144 145 145 sd_emmc_c: mmc@7000 { 146 146 compatible = "amlogic,meson-axg-mmc"; 147 - reg = <0x0 0x7000 0x0 0x2000>; 147 + reg = <0x0 0x7000 0x0 0x800>; 148 148 interrupts = <GIC_SPI 218 IRQ_TYPE_EDGE_RISING>; 149 149 status = "disabled"; 150 150 clocks = <&clkc CLKID_SD_EMMC_C>,
+9 -3
arch/arm64/boot/dts/amlogic/meson-gx.dtsi
··· 35 35 no-map; 36 36 }; 37 37 38 + /* Alternate 3 MiB reserved for ARM Trusted Firmware (BL31) */ 39 + secmon_reserved_alt: secmon@5000000 { 40 + reg = <0x0 0x05000000 0x0 0x300000>; 41 + no-map; 42 + }; 43 + 38 44 linux,cma { 39 45 compatible = "shared-dma-pool"; 40 46 reusable; ··· 463 457 464 458 sd_emmc_a: mmc@70000 { 465 459 compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc"; 466 - reg = <0x0 0x70000 0x0 0x2000>; 460 + reg = <0x0 0x70000 0x0 0x800>; 467 461 interrupts = <GIC_SPI 216 IRQ_TYPE_EDGE_RISING>; 468 462 status = "disabled"; 469 463 }; 470 464 471 465 sd_emmc_b: mmc@72000 { 472 466 compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc"; 473 - reg = <0x0 0x72000 0x0 0x2000>; 467 + reg = <0x0 0x72000 0x0 0x800>; 474 468 interrupts = <GIC_SPI 217 IRQ_TYPE_EDGE_RISING>; 475 469 status = "disabled"; 476 470 }; 477 471 478 472 sd_emmc_c: mmc@74000 { 479 473 compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc"; 480 - reg = <0x0 0x74000 0x0 0x2000>; 474 + reg = <0x0 0x74000 0x0 0x800>; 481 475 interrupts = <GIC_SPI 218 IRQ_TYPE_EDGE_RISING>; 482 476 status = "disabled"; 483 477 };
+1 -1
arch/arm64/boot/dts/amlogic/meson-gxl-mali.dtsi
··· 6 6 7 7 &apb { 8 8 mali: gpu@c0000 { 9 - compatible = "amlogic,meson-gxbb-mali", "arm,mali-450"; 9 + compatible = "amlogic,meson-gxl-mali", "arm,mali-450"; 10 10 reg = <0x0 0xc0000 0x0 0x40000>; 11 11 interrupts = <GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>, 12 12 <GIC_SPI 161 IRQ_TYPE_LEVEL_HIGH>,
-3
arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
··· 234 234 235 235 bus-width = <4>; 236 236 cap-sd-highspeed; 237 - sd-uhs-sdr12; 238 - sd-uhs-sdr25; 239 - sd-uhs-sdr50; 240 237 max-frequency = <100000000>; 241 238 disable-wp; 242 239
+7
arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi
··· 189 189 &usb0 { 190 190 status = "okay"; 191 191 }; 192 + 193 + &usb2_phy0 { 194 + /* 195 + * HDMI_5V is also used as supply for the USB VBUS. 196 + */ 197 + phy-supply = <&hdmi_5v>; 198 + };
-8
arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
··· 13 13 / { 14 14 compatible = "amlogic,meson-gxl"; 15 15 16 - reserved-memory { 17 - /* Alternate 3 MiB reserved for ARM Trusted Firmware (BL31) */ 18 - secmon_reserved_alt: secmon@5000000 { 19 - reg = <0x0 0x05000000 0x0 0x300000>; 20 - no-map; 21 - }; 22 - }; 23 - 24 16 soc { 25 17 usb0: usb@c9000000 { 26 18 status = "disabled";
+4 -4
arch/arm64/boot/dts/broadcom/northstar2/ns2.dtsi
··· 118 118 119 119 #interrupt-cells = <1>; 120 120 interrupt-map-mask = <0 0 0 0>; 121 - interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 281 IRQ_TYPE_NONE>; 121 + interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 281 IRQ_TYPE_LEVEL_HIGH>; 122 122 123 123 linux,pci-domain = <0>; 124 124 ··· 149 149 150 150 #interrupt-cells = <1>; 151 151 interrupt-map-mask = <0 0 0 0>; 152 - interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 305 IRQ_TYPE_NONE>; 152 + interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 305 IRQ_TYPE_LEVEL_HIGH>; 153 153 154 154 linux,pci-domain = <4>; 155 155 ··· 566 566 reg = <0x66080000 0x100>; 567 567 #address-cells = <1>; 568 568 #size-cells = <0>; 569 - interrupts = <GIC_SPI 394 IRQ_TYPE_NONE>; 569 + interrupts = <GIC_SPI 394 IRQ_TYPE_LEVEL_HIGH>; 570 570 clock-frequency = <100000>; 571 571 status = "disabled"; 572 572 }; ··· 594 594 reg = <0x660b0000 0x100>; 595 595 #address-cells = <1>; 596 596 #size-cells = <0>; 597 - interrupts = <GIC_SPI 395 IRQ_TYPE_NONE>; 597 + interrupts = <GIC_SPI 395 IRQ_TYPE_LEVEL_HIGH>; 598 598 clock-frequency = <100000>; 599 599 status = "disabled"; 600 600 };
+4
arch/arm64/boot/dts/broadcom/stingray/bcm958742k.dts
··· 43 43 enet-phy-lane-swap; 44 44 }; 45 45 46 + &sdio0 { 47 + mmc-ddr-1_8v; 48 + }; 49 + 46 50 &uart2 { 47 51 status = "okay"; 48 52 };
+4
arch/arm64/boot/dts/broadcom/stingray/bcm958742t.dts
··· 42 42 &gphy0 { 43 43 enet-phy-lane-swap; 44 44 }; 45 + 46 + &sdio0 { 47 + mmc-ddr-1_8v; 48 + };
+2 -2
arch/arm64/boot/dts/broadcom/stingray/stingray.dtsi
··· 409 409 reg = <0x000b0000 0x100>; 410 410 #address-cells = <1>; 411 411 #size-cells = <0>; 412 - interrupts = <GIC_SPI 177 IRQ_TYPE_NONE>; 412 + interrupts = <GIC_SPI 177 IRQ_TYPE_LEVEL_HIGH>; 413 413 clock-frequency = <100000>; 414 414 status = "disabled"; 415 415 }; ··· 453 453 reg = <0x000e0000 0x100>; 454 454 #address-cells = <1>; 455 455 #size-cells = <0>; 456 - interrupts = <GIC_SPI 178 IRQ_TYPE_NONE>; 456 + interrupts = <GIC_SPI 178 IRQ_TYPE_LEVEL_HIGH>; 457 457 clock-frequency = <100000>; 458 458 status = "disabled"; 459 459 };
+2
arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts
··· 585 585 vmmc-supply = <&wlan_en>; 586 586 ti,non-removable; 587 587 non-removable; 588 + cap-power-off-card; 589 + keep-power-in-suspend; 588 590 #address-cells = <0x1>; 589 591 #size-cells = <0x0>; 590 592 status = "ok";
+2
arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
··· 322 322 dwmmc_2: dwmmc2@f723f000 { 323 323 bus-width = <0x4>; 324 324 non-removable; 325 + cap-power-off-card; 326 + keep-power-in-suspend; 325 327 vmmc-supply = <&reg_vdd_3v3>; 326 328 mmc-pwrseq = <&wl1835_pwrseq>; 327 329
+1 -1
arch/arm64/boot/dts/marvell/armada-cp110.dtsi
··· 149 149 150 150 CP110_LABEL(icu): interrupt-controller@1e0000 { 151 151 compatible = "marvell,cp110-icu"; 152 - reg = <0x1e0000 0x10>; 152 + reg = <0x1e0000 0x440>; 153 153 #interrupt-cells = <3>; 154 154 interrupt-controller; 155 155 msi-parent = <&gicp>;
+1 -1
arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
··· 75 75 76 76 serial@75b1000 { 77 77 label = "LS-UART0"; 78 - status = "okay"; 78 + status = "disabled"; 79 79 pinctrl-names = "default", "sleep"; 80 80 pinctrl-0 = <&blsp2_uart2_4pins_default>; 81 81 pinctrl-1 = <&blsp2_uart2_4pins_sleep>;
+2 -2
arch/arm64/boot/dts/qcom/msm8916.dtsi
··· 1191 1191 1192 1192 port@0 { 1193 1193 reg = <0>; 1194 - etf_out: endpoint { 1194 + etf_in: endpoint { 1195 1195 slave-mode; 1196 1196 remote-endpoint = <&funnel0_out>; 1197 1197 }; 1198 1198 }; 1199 1199 port@1 { 1200 1200 reg = <0>; 1201 - etf_in: endpoint { 1201 + etf_out: endpoint { 1202 1202 remote-endpoint = <&replicator_in>; 1203 1203 }; 1204 1204 };
+1 -1
arch/arm64/boot/dts/socionext/uniphier-ld11-global.dts
··· 54 54 sound { 55 55 compatible = "audio-graph-card"; 56 56 label = "UniPhier LD11"; 57 - widgets = "Headphone", "Headphone Jack"; 57 + widgets = "Headphone", "Headphones"; 58 58 dais = <&i2s_port2 59 59 &i2s_port3 60 60 &i2s_port4
+1 -1
arch/arm64/boot/dts/socionext/uniphier-ld20-global.dts
··· 54 54 sound { 55 55 compatible = "audio-graph-card"; 56 56 label = "UniPhier LD20"; 57 - widgets = "Headphone", "Headphone Jack"; 57 + widgets = "Headphone", "Headphones"; 58 58 dais = <&i2s_port2 59 59 &i2s_port3 60 60 &i2s_port4
+43 -67
arch/arm64/configs/defconfig
··· 47 47 CONFIG_ARCH_QCOM=y
48 48 CONFIG_ARCH_ROCKCHIP=y
49 49 CONFIG_ARCH_SEATTLE=y
50 + CONFIG_ARCH_SYNQUACER=y
50 51 CONFIG_ARCH_RENESAS=y
51 52 CONFIG_ARCH_R8A7795=y
52 53 CONFIG_ARCH_R8A7796=y
··· 59 58 CONFIG_ARCH_STRATIX10=y
60 59 CONFIG_ARCH_TEGRA=y
61 60 CONFIG_ARCH_SPRD=y
62 - CONFIG_ARCH_SYNQUACER=y
63 61 CONFIG_ARCH_THUNDER=y
64 62 CONFIG_ARCH_THUNDER2=y
65 63 CONFIG_ARCH_UNIPHIER=y
··· 67 67 CONFIG_ARCH_ZX=y
68 68 CONFIG_ARCH_ZYNQMP=y
69 69 CONFIG_PCI=y
70 - CONFIG_HOTPLUG_PCI_PCIE=y
71 70 CONFIG_PCI_IOV=y
72 71 CONFIG_HOTPLUG_PCI=y
73 72 CONFIG_HOTPLUG_PCI_ACPI=y
74 - CONFIG_PCI_LAYERSCAPE=y
75 - CONFIG_PCI_HISI=y
76 - CONFIG_PCIE_QCOM=y
77 - CONFIG_PCIE_KIRIN=y
78 - CONFIG_PCIE_ARMADA_8K=y
79 - CONFIG_PCIE_HISI_STB=y
80 73 CONFIG_PCI_AARDVARK=y
81 74 CONFIG_PCI_TEGRA=y
82 75 CONFIG_PCIE_RCAR=y
83 - CONFIG_PCIE_ROCKCHIP=y
84 - CONFIG_PCIE_ROCKCHIP_HOST=m
85 76 CONFIG_PCI_HOST_GENERIC=y
86 77 CONFIG_PCI_XGENE=y
87 78 CONFIG_PCI_HOST_THUNDER_PEM=y
88 79 CONFIG_PCI_HOST_THUNDER_ECAM=y
80 + CONFIG_PCIE_ROCKCHIP_HOST=m
81 + CONFIG_PCI_LAYERSCAPE=y
82 + CONFIG_PCI_HISI=y
83 + CONFIG_PCIE_QCOM=y
84 + CONFIG_PCIE_ARMADA_8K=y
85 + CONFIG_PCIE_KIRIN=y
86 + CONFIG_PCIE_HISI_STB=y
89 87 CONFIG_ARM64_VA_BITS_48=y
90 88 CONFIG_SCHED_MC=y
91 89 CONFIG_NUMA=y
··· 102 104 CONFIG_WQ_POWER_EFFICIENT_DEFAULT=y
103 105 CONFIG_ARM_CPUIDLE=y
104 106 CONFIG_CPU_FREQ=y
105 - CONFIG_CPU_FREQ_GOV_ATTR_SET=y
106 - CONFIG_CPU_FREQ_GOV_COMMON=y
107 107 CONFIG_CPU_FREQ_STAT=y
108 108 CONFIG_CPU_FREQ_GOV_POWERSAVE=m
109 109 CONFIG_CPU_FREQ_GOV_USERSPACE=y
··· 109 113 CONFIG_CPU_FREQ_GOV_CONSERVATIVE=m
110 114 CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y
111 115 CONFIG_CPUFREQ_DT=y
116 + CONFIG_ACPI_CPPC_CPUFREQ=m
112 117 CONFIG_ARM_ARMADA_37XX_CPUFREQ=y
113 118 CONFIG_ARM_BIG_LITTLE_CPUFREQ=y
114 119 CONFIG_ARM_SCPI_CPUFREQ=y
115 120 CONFIG_ARM_TEGRA186_CPUFREQ=y
116 - CONFIG_ACPI_CPPC_CPUFREQ=m
117 121 CONFIG_NET=y
118 122 CONFIG_PACKET=y
119 123 CONFIG_UNIX=y
··· 232 236 CONFIG_SNI_AVE=y
233 237 CONFIG_SNI_NETSEC=y
234 238 CONFIG_STMMAC_ETH=m
235 - CONFIG_DWMAC_IPQ806X=m
236 - CONFIG_DWMAC_MESON=m
237 - CONFIG_DWMAC_ROCKCHIP=m
238 - CONFIG_DWMAC_SUNXI=m
239 - CONFIG_DWMAC_SUN8I=m
240 239 CONFIG_MDIO_BUS_MUX_MMIOREG=y
241 240 CONFIG_AT803X_PHY=m
242 241 CONFIG_MARVELL_PHY=m
··· 260 269 CONFIG_WLCORE_SDIO=m
261 270 CONFIG_INPUT_EVDEV=y
262 271 CONFIG_KEYBOARD_ADC=m
263 - CONFIG_KEYBOARD_CROS_EC=y
264 272 CONFIG_KEYBOARD_GPIO=y
273 + CONFIG_KEYBOARD_CROS_EC=y
265 274 CONFIG_INPUT_TOUCHSCREEN=y
266 275 CONFIG_TOUCHSCREEN_ATMEL_MXT=m
267 276 CONFIG_INPUT_MISC=y
··· 287 296 CONFIG_SERIAL_SAMSUNG_CONSOLE=y
288 297 CONFIG_SERIAL_TEGRA=y
289 298 CONFIG_SERIAL_SH_SCI=y
290 - CONFIG_SERIAL_SH_SCI_NR_UARTS=11
291 - CONFIG_SERIAL_SH_SCI_CONSOLE=y
292 299 CONFIG_SERIAL_MSM=y
293 300 CONFIG_SERIAL_MSM_CONSOLE=y
294 301 CONFIG_SERIAL_XILINX_PS_UART=y
295 302 CONFIG_SERIAL_XILINX_PS_UART_CONSOLE=y
296 303 CONFIG_SERIAL_MVEBU_UART=y
297 304 CONFIG_SERIAL_DEV_BUS=y
298 - CONFIG_SERIAL_DEV_CTRL_TTYPORT=y
299 305 CONFIG_VIRTIO_CONSOLE=y
300 - CONFIG_I2C_HID=m
301 306 CONFIG_I2C_CHARDEV=y
302 307 CONFIG_I2C_MUX=y
303 308 CONFIG_I2C_MUX_PCA954x=y
··· 312 325 CONFIG_I2C_CROS_EC_TUNNEL=y
313 326 CONFIG_SPI=y
314 327 CONFIG_SPI_ARMADA_3700=y
315 - CONFIG_SPI_MESON_SPICC=m
316 - CONFIG_SPI_MESON_SPIFC=m
317 328 CONFIG_SPI_BCM2835=m
318 329 CONFIG_SPI_BCM2835AUX=m
330 + CONFIG_SPI_MESON_SPICC=m
331 + CONFIG_SPI_MESON_SPIFC=m
319 332 CONFIG_SPI_ORION=y
320 333 CONFIG_SPI_PL022=y
321 - CONFIG_SPI_QUP=y
322 334 CONFIG_SPI_ROCKCHIP=y
335 + CONFIG_SPI_QUP=y
323 336 CONFIG_SPI_S3C64XX=y
324 337 CONFIG_SPI_SPIDEV=m
325 338 CONFIG_SPMI=y
326 - CONFIG_PINCTRL_IPQ8074=y
327 339 CONFIG_PINCTRL_SINGLE=y
328 340 CONFIG_PINCTRL_MAX77620=y
341 + CONFIG_PINCTRL_IPQ8074=y
329 342 CONFIG_PINCTRL_MSM8916=y
330 343 CONFIG_PINCTRL_MSM8994=y
331 344 CONFIG_PINCTRL_MSM8996=y
332 - CONFIG_PINCTRL_MT7622=y
333 345 CONFIG_PINCTRL_QDF2XXX=y
334 346 CONFIG_PINCTRL_QCOM_SPMI_PMIC=y
347 + CONFIG_PINCTRL_MT7622=y
335 348 CONFIG_GPIO_DWAPB=y
336 349 CONFIG_GPIO_MB86S7X=y
337 350 CONFIG_GPIO_PL061=y
··· 355 368 CONFIG_THERMAL_GOV_POWER_ALLOCATOR=y
356 369 CONFIG_CPU_THERMAL=y
357 370 CONFIG_THERMAL_EMULATION=y
371 + CONFIG_ROCKCHIP_THERMAL=m
372 + CONFIG_RCAR_GEN3_THERMAL=y
358 373 CONFIG_ARMADA_THERMAL=y
359 374 CONFIG_BRCMSTB_THERMAL=m
360 375 CONFIG_EXYNOS_THERMAL=y
361 - CONFIG_RCAR_GEN3_THERMAL=y
362 - CONFIG_QCOM_TSENS=y
363 - CONFIG_ROCKCHIP_THERMAL=m
364 376 CONFIG_TEGRA_BPMP_THERMAL=m
377 + CONFIG_QCOM_TSENS=y
365 378 CONFIG_UNIPHIER_THERMAL=y
366 379 CONFIG_WATCHDOG=y
367 380 CONFIG_S3C2410_WATCHDOG=y
··· 382 395 CONFIG_MFD_SPMI_PMIC=y
383 396 CONFIG_MFD_RK808=y
384 397 CONFIG_MFD_SEC_CORE=y
398 + CONFIG_REGULATOR_FIXED_VOLTAGE=y
385 399 CONFIG_REGULATOR_AXP20X=y
386 400 CONFIG_REGULATOR_FAN53555=y
387 - CONFIG_REGULATOR_FIXED_VOLTAGE=y
388 401 CONFIG_REGULATOR_GPIO=y
389 402 CONFIG_REGULATOR_HI6421V530=y
390 403 CONFIG_REGULATOR_HI655X=y
··· 394 407 CONFIG_REGULATOR_QCOM_SPMI=y
395 408 CONFIG_REGULATOR_RK808=y
396 409 CONFIG_REGULATOR_S2MPS11=y
410 + CONFIG_RC_CORE=m
411 + CONFIG_RC_DECODERS=y
412 + CONFIG_RC_DEVICES=y
413 + CONFIG_IR_MESON=m
397 414 CONFIG_MEDIA_SUPPORT=m
398 415 CONFIG_MEDIA_CAMERA_SUPPORT=y
399 416 CONFIG_MEDIA_ANALOG_TV_SUPPORT=y
400 417 CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
401 418 CONFIG_MEDIA_CONTROLLER=y
402 - CONFIG_MEDIA_RC_SUPPORT=y
403 - CONFIG_RC_CORE=m
404 - CONFIG_RC_DEVICES=y
405 - CONFIG_RC_DECODERS=y
406 - CONFIG_IR_MESON=m
407 419 CONFIG_VIDEO_V4L2_SUBDEV_API=y
408 420 # CONFIG_DVB_NET is not set
409 421 CONFIG_V4L_MEM2MEM_DRIVERS=y
··· 427 441 CONFIG_ROCKCHIP_DW_MIPI_DSI=y
428 442 CONFIG_ROCKCHIP_INNO_HDMI=y
429 443 CONFIG_DRM_RCAR_DU=m
430 - CONFIG_DRM_RCAR_LVDS=y
431 - CONFIG_DRM_RCAR_VSP=y
444 + CONFIG_DRM_RCAR_LVDS=m
432 445 CONFIG_DRM_TEGRA=m
433 446 CONFIG_DRM_PANEL_SIMPLE=m
434 447 CONFIG_DRM_I2C_ADV7511=m
··· 440 455 CONFIG_BACKLIGHT_GENERIC=m
441 456 CONFIG_BACKLIGHT_PWM=m
442 457 CONFIG_BACKLIGHT_LP855X=m
443 - CONFIG_FRAMEBUFFER_CONSOLE=y
444 458 CONFIG_LOGO=y
445 459 # CONFIG_LOGO_LINUX_MONO is not set
446 460 # CONFIG_LOGO_LINUX_VGA16 is not set
··· 452 468 CONFIG_SND_SOC_AK4613=m
453 469 CONFIG_SND_SIMPLE_CARD=m
454 470 CONFIG_SND_AUDIO_GRAPH_CARD=m
471 + CONFIG_I2C_HID=m
455 472 CONFIG_USB=y
456 473 CONFIG_USB_OTG=y
457 474 CONFIG_USB_XHCI_HCD=y
··· 486 501 CONFIG_MMC_ARMMMCI=y
487 502 CONFIG_MMC_SDHCI=y
488 503 CONFIG_MMC_SDHCI_ACPI=y
489 - CONFIG_MMC_SDHCI_F_SDH30=y
490 504 CONFIG_MMC_SDHCI_PLTFM=y
491 505 CONFIG_MMC_SDHCI_OF_ARASAN=y
492 506 CONFIG_MMC_SDHCI_OF_ESDHC=y
493 507 CONFIG_MMC_SDHCI_CADENCE=y
494 508 CONFIG_MMC_SDHCI_TEGRA=y
509 + CONFIG_MMC_SDHCI_F_SDH30=y
495 510 CONFIG_MMC_MESON_GX=y
496 511 CONFIG_MMC_SDHCI_MSM=y
497 512 CONFIG_MMC_SPI=y
··· 509 524 CONFIG_LEDS_GPIO=y
510 525 CONFIG_LEDS_PWM=y
511 526 CONFIG_LEDS_SYSCON=y
527 + CONFIG_LEDS_TRIGGER_DISK=y
512 528 CONFIG_LEDS_TRIGGER_HEARTBEAT=y
513 529 CONFIG_LEDS_TRIGGER_CPU=y
514 530 CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
515 531 CONFIG_LEDS_TRIGGER_PANIC=y
516 - CONFIG_LEDS_TRIGGER_DISK=y
517 532 CONFIG_EDAC=y
518 533 CONFIG_EDAC_GHES=y
519 534 CONFIG_RTC_CLASS=y
··· 522 537 CONFIG_RTC_DRV_S5M=y
523 538 CONFIG_RTC_DRV_DS3232=y
524 539 CONFIG_RTC_DRV_EFI=y
540 + CONFIG_RTC_DRV_CROS_EC=y
525 541 CONFIG_RTC_DRV_S3C=y
526 542 CONFIG_RTC_DRV_PL031=y
527 543 CONFIG_RTC_DRV_SUN6I=y
528 544 CONFIG_RTC_DRV_ARMADA38X=y
529 545 CONFIG_RTC_DRV_TEGRA=y
530 546 CONFIG_RTC_DRV_XGENE=y
531 - CONFIG_RTC_DRV_CROS_EC=y
532 547 CONFIG_DMADEVICES=y
533 548 CONFIG_DMA_BCM2835=m
534 549 CONFIG_K3_DMA=y
··· 564 579 CONFIG_ARM_MHU=y
565 580 CONFIG_PLATFORM_MHU=y
566 581 CONFIG_BCM2835_MBOX=y
567 - CONFIG_HI6220_MBOX=y
568 582 CONFIG_QCOM_APCS_IPC=y
569 583 CONFIG_ROCKCHIP_IOMMU=y
570 584 CONFIG_TEGRA_IOMMU_SMMU=y
··· 586 602 CONFIG_EXTCON_USB_GPIO=y
587 603 CONFIG_EXTCON_USBC_CROS_EC=y
588 604 CONFIG_MEMORY=y
589 - CONFIG_TEGRA_MC=y
590 605 CONFIG_IIO=y
591 606 CONFIG_EXYNOS_ADC=y
592 607 CONFIG_ROCKCHIP_SARADC=m
··· 601 618 CONFIG_PWM_ROCKCHIP=y
602 619 CONFIG_PWM_SAMSUNG=y
603 620 CONFIG_PWM_TEGRA=m
621 + CONFIG_PHY_XGENE=y
622 + CONFIG_PHY_SUN4I_USB=y
623 + CONFIG_PHY_HI6220_USB=y
604 624 CONFIG_PHY_HISTB_COMBPHY=y
605 625 CONFIG_PHY_HISI_INNO_USB2=y
606 - CONFIG_PHY_RCAR_GEN3_USB2=y
607 - CONFIG_PHY_RCAR_GEN3_USB3=m
608 - CONFIG_PHY_HI6220_USB=y
609 - CONFIG_PHY_QCOM_USB_HS=y
610 - CONFIG_PHY_SUN4I_USB=y
611 626 CONFIG_PHY_MVEBU_CP110_COMPHY=y
612 627 CONFIG_PHY_QCOM_QMP=m
613 - CONFIG_PHY_ROCKCHIP_INNO_USB2=y
628 + CONFIG_PHY_QCOM_USB_HS=y
629 + CONFIG_PHY_RCAR_GEN3_USB2=y
630 + CONFIG_PHY_RCAR_GEN3_USB3=m
614 631 CONFIG_PHY_ROCKCHIP_EMMC=y
632 + CONFIG_PHY_ROCKCHIP_INNO_USB2=y
615 633 CONFIG_PHY_ROCKCHIP_PCIE=m
616 634 CONFIG_PHY_ROCKCHIP_TYPEC=y
617 - CONFIG_PHY_XGENE=y
618 635 CONFIG_PHY_TEGRA_XUSB=y
619 636 CONFIG_QCOM_L2_PMU=y
620 637 CONFIG_QCOM_L3_PMU=y
621 - CONFIG_MESON_EFUSE=m
622 638 CONFIG_QCOM_QFPROM=y
623 639 CONFIG_ROCKCHIP_EFUSE=y
624 640 CONFIG_UNIPHIER_EFUSE=y
641 + CONFIG_MESON_EFUSE=m
625 642 CONFIG_TEE=y
626 643 CONFIG_OPTEE=y
627 644 CONFIG_ARM_SCPI_PROTOCOL=y
··· 630 647 CONFIG_ACPI=y
631 648 CONFIG_ACPI_APEI=y
632 649 CONFIG_ACPI_APEI_GHES=y
633 - CONFIG_ACPI_APEI_PCIEAER=y
634 650 CONFIG_ACPI_APEI_MEMORY_FAILURE=y
635 651 CONFIG_ACPI_APEI_EINJ=y
636 652 CONFIG_EXT2_FS=y
··· 664 682 CONFIG_DEBUG_FS=y
665 683 CONFIG_MAGIC_SYSRQ=y
666 684 CONFIG_DEBUG_KERNEL=y
667 - CONFIG_LOCKUP_DETECTOR=y
668 685 # CONFIG_SCHED_DEBUG is not set
669 686 # CONFIG_DEBUG_PREEMPT is not set
670 687 # CONFIG_FTRACE is not set
··· 672 691 CONFIG_CRYPTO_ECHAINIV=y
673 692 CONFIG_CRYPTO_ANSI_CPRNG=y
674 693 CONFIG_ARM64_CRYPTO=y
675 - CONFIG_CRYPTO_SHA256_ARM64=m
676 - CONFIG_CRYPTO_SHA512_ARM64=m
677 694 CONFIG_CRYPTO_SHA1_ARM64_CE=y
678 695 CONFIG_CRYPTO_SHA2_ARM64_CE=y
679 - CONFIG_CRYPTO_GHASH_ARM64_CE=y
680 - CONFIG_CRYPTO_CRCT10DIF_ARM64_CE=m
681 - CONFIG_CRYPTO_CRC32_ARM64_CE=m
682 - CONFIG_CRYPTO_AES_ARM64=m
683 - CONFIG_CRYPTO_AES_ARM64_CE=m
684 - CONFIG_CRYPTO_AES_ARM64_CE_CCM=y
685 - CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
686 - CONFIG_CRYPTO_AES_ARM64_NEON_BLK=m
687 - CONFIG_CRYPTO_CHACHA20_NEON=m
688 - CONFIG_CRYPTO_AES_ARM64_BS=m
689 696 CONFIG_CRYPTO_SHA512_ARM64_CE=m
690 697 CONFIG_CRYPTO_SHA3_ARM64=m
691 698 CONFIG_CRYPTO_SM3_ARM64_CE=m
699 + CONFIG_CRYPTO_GHASH_ARM64_CE=y
700 + CONFIG_CRYPTO_CRCT10DIF_ARM64_CE=m
701 + CONFIG_CRYPTO_CRC32_ARM64_CE=m
702 + CONFIG_CRYPTO_AES_ARM64_CE_CCM=y
703 + CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
704 + CONFIG_CRYPTO_CHACHA20_NEON=m
705 + CONFIG_CRYPTO_AES_ARM64_BS=m
+6 -1
arch/arm64/include/asm/alternative.h
··· 28 28 __le32 *origptr, __le32 *updptr, int nr_inst); 29 29 30 30 void __init apply_alternatives_all(void); 31 - void apply_alternatives(void *start, size_t length); 31 + 32 + #ifdef CONFIG_MODULES 33 + void apply_alternatives_module(void *start, size_t length); 34 + #else 35 + static inline void apply_alternatives_module(void *start, size_t length) { } 36 + #endif 32 37 33 38 #define ALTINSTR_ENTRY(feature,cb) \ 34 39 " .word 661b - .\n" /* label */ \
+1 -5
arch/arm64/include/asm/pgtable.h
··· 224 224 * Only if the new pte is valid and kernel, otherwise TLB maintenance 225 225 * or update_mmu_cache() have the necessary barriers. 226 226 */ 227 - if (pte_valid_not_user(pte)) { 227 + if (pte_valid_not_user(pte)) 228 228 dsb(ishst); 229 - isb(); 230 - } 231 229 } 232 230 233 231 extern void __sync_icache_dcache(pte_t pteval); ··· 432 434 { 433 435 WRITE_ONCE(*pmdp, pmd); 434 436 dsb(ishst); 435 - isb(); 436 437 } 437 438 438 439 static inline void pmd_clear(pmd_t *pmdp) ··· 482 485 { 483 486 WRITE_ONCE(*pudp, pud); 484 487 dsb(ishst); 485 - isb(); 486 488 } 487 489 488 490 static inline void pud_clear(pud_t *pudp)
+44 -7
arch/arm64/kernel/alternative.c
··· 122 122 } 123 123 } 124 124 125 - static void __apply_alternatives(void *alt_region, bool use_linear_alias) 125 + /* 126 + * We provide our own, private D-cache cleaning function so that we don't 127 + * accidentally call into the cache.S code, which is patched by us at 128 + * runtime. 129 + */ 130 + static void clean_dcache_range_nopatch(u64 start, u64 end) 131 + { 132 + u64 cur, d_size, ctr_el0; 133 + 134 + ctr_el0 = read_sanitised_ftr_reg(SYS_CTR_EL0); 135 + d_size = 4 << cpuid_feature_extract_unsigned_field(ctr_el0, 136 + CTR_DMINLINE_SHIFT); 137 + cur = start & ~(d_size - 1); 138 + do { 139 + /* 140 + * We must clean+invalidate to the PoC in order to avoid 141 + * Cortex-A53 errata 826319, 827319, 824069 and 819472 142 + * (this corresponds to ARM64_WORKAROUND_CLEAN_CACHE) 143 + */ 144 + asm volatile("dc civac, %0" : : "r" (cur) : "memory"); 145 + } while (cur += d_size, cur < end); 146 + } 147 + 148 + static void __apply_alternatives(void *alt_region, bool is_module) 126 149 { 127 150 struct alt_instr *alt; 128 151 struct alt_region *region = alt_region; ··· 168 145 pr_info_once("patching kernel code\n"); 169 146 170 147 origptr = ALT_ORIG_PTR(alt); 171 - updptr = use_linear_alias ? lm_alias(origptr) : origptr; 148 + updptr = is_module ? origptr : lm_alias(origptr); 172 149 nr_inst = alt->orig_len / AARCH64_INSN_SIZE; 173 150 174 151 if (alt->cpufeature < ARM64_CB_PATCH) ··· 178 155 179 156 alt_cb(alt, origptr, updptr, nr_inst); 180 157 181 - flush_icache_range((uintptr_t)origptr, 182 - (uintptr_t)(origptr + nr_inst)); 158 + if (!is_module) { 159 + clean_dcache_range_nopatch((u64)origptr, 160 + (u64)(origptr + nr_inst)); 161 + } 162 + } 163 + 164 + /* 165 + * The core module code takes care of cache maintenance in 166 + * flush_module_icache(). 
167 + */ 168 + if (!is_module) { 169 + dsb(ish); 170 + __flush_icache_all(); 171 + isb(); 183 172 } 184 173 } 185 174 ··· 213 178 isb(); 214 179 } else { 215 180 BUG_ON(alternatives_applied); 216 - __apply_alternatives(&region, true); 181 + __apply_alternatives(&region, false); 217 182 /* Barriers provided by the cache flushing */ 218 183 WRITE_ONCE(alternatives_applied, 1); 219 184 } ··· 227 192 stop_machine(__apply_alternatives_multi_stop, NULL, cpu_online_mask); 228 193 } 229 194 230 - void apply_alternatives(void *start, size_t length) 195 + #ifdef CONFIG_MODULES 196 + void apply_alternatives_module(void *start, size_t length) 231 197 { 232 198 struct alt_region region = { 233 199 .begin = start, 234 200 .end = start + length, 235 201 }; 236 202 237 - __apply_alternatives(&region, false); 203 + __apply_alternatives(&region, true); 238 204 } 205 + #endif
+2 -3
arch/arm64/kernel/module.c
··· 448 448 const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset; 449 449 450 450 for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) { 451 - if (strcmp(".altinstructions", secstrs + s->sh_name) == 0) { 452 - apply_alternatives((void *)s->sh_addr, s->sh_size); 453 - } 451 + if (strcmp(".altinstructions", secstrs + s->sh_name) == 0) 452 + apply_alternatives_module((void *)s->sh_addr, s->sh_size); 454 453 #ifdef CONFIG_ARM64_MODULE_PLTS 455 454 if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE) && 456 455 !strcmp(".text.ftrace_trampoline", secstrs + s->sh_name))
-7
arch/microblaze/Kconfig.debug
··· 8 8 9 9 source "lib/Kconfig.debug" 10 10 11 - config HEART_BEAT 12 - bool "Heart beat function for kernel" 13 - default n 14 - help 15 - This option turns on/off heart beat kernel functionality. 16 - First GPIO node is taken. 17 - 18 11 endmenu
-5
arch/microblaze/include/asm/setup.h
··· 19 19 20 20 extern char *klimit; 21 21 22 - void microblaze_heartbeat(void); 23 - void microblaze_setup_heartbeat(void); 24 - 25 22 # ifdef CONFIG_MMU 26 23 extern void mmu_reset(void); 27 24 # endif /* CONFIG_MMU */ 28 - 29 - extern void of_platform_reset_gpio_probe(void); 30 25 31 26 void time_init(void); 32 27 void init_IRQ(void);
+1 -1
arch/microblaze/include/asm/unistd.h
··· 38 38 39 39 #endif /* __ASSEMBLY__ */ 40 40 41 - #define __NR_syscalls 399 41 + #define __NR_syscalls 401 42 42 43 43 #endif /* _ASM_MICROBLAZE_UNISTD_H */
+2
arch/microblaze/include/uapi/asm/unistd.h
··· 415 415 #define __NR_pkey_alloc 396 416 416 #define __NR_pkey_free 397 417 417 #define __NR_statx 398 418 + #define __NR_io_pgetevents 399 419 + #define __NR_rseq 400 418 420 419 421 #endif /* _UAPI_ASM_MICROBLAZE_UNISTD_H */
+1 -3
arch/microblaze/kernel/Makefile
··· 8 8 CFLAGS_REMOVE_timer.o = -pg 9 9 CFLAGS_REMOVE_intc.o = -pg 10 10 CFLAGS_REMOVE_early_printk.o = -pg 11 - CFLAGS_REMOVE_heartbeat.o = -pg 12 11 CFLAGS_REMOVE_ftrace.o = -pg 13 12 CFLAGS_REMOVE_process.o = -pg 14 13 endif ··· 16 17 17 18 obj-y += dma.o exceptions.o \ 18 19 hw_exception_handler.o irq.o \ 19 - platform.o process.o prom.o ptrace.o \ 20 + process.o prom.o ptrace.o \ 20 21 reset.o setup.o signal.o sys_microblaze.o timer.o traps.o unwind.o 21 22 22 23 obj-y += cpu/ 23 24 24 - obj-$(CONFIG_HEART_BEAT) += heartbeat.o 25 25 obj-$(CONFIG_MODULES) += microblaze_ksyms.o module.o 26 26 obj-$(CONFIG_MMU) += misc.o 27 27 obj-$(CONFIG_STACKTRACE) += stacktrace.o
-72
arch/microblaze/kernel/heartbeat.c
··· 1 - /* 2 - * Copyright (C) 2007-2009 Michal Simek <monstr@monstr.eu> 3 - * Copyright (C) 2007-2009 PetaLogix 4 - * Copyright (C) 2006 Atmark Techno, Inc. 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 - */ 10 - 11 - #include <linux/sched.h> 12 - #include <linux/sched/loadavg.h> 13 - #include <linux/io.h> 14 - 15 - #include <asm/setup.h> 16 - #include <asm/page.h> 17 - #include <asm/prom.h> 18 - 19 - static unsigned int base_addr; 20 - 21 - void microblaze_heartbeat(void) 22 - { 23 - static unsigned int cnt, period, dist; 24 - 25 - if (base_addr) { 26 - if (cnt == 0 || cnt == dist) 27 - out_be32(base_addr, 1); 28 - else if (cnt == 7 || cnt == dist + 7) 29 - out_be32(base_addr, 0); 30 - 31 - if (++cnt > period) { 32 - cnt = 0; 33 - /* 34 - * The hyperbolic function below modifies the heartbeat 35 - * period length in dependency of the current (5min) 36 - * load. It goes through the points f(0)=126, f(1)=86, 37 - * f(5)=51, f(inf)->30. 38 - */ 39 - period = ((672 << FSHIFT) / (5 * avenrun[0] + 40 - (7 << FSHIFT))) + 30; 41 - dist = period / 4; 42 - } 43 - } 44 - } 45 - 46 - void microblaze_setup_heartbeat(void) 47 - { 48 - struct device_node *gpio = NULL; 49 - int *prop; 50 - int j; 51 - const char * const gpio_list[] = { 52 - "xlnx,xps-gpio-1.00.a", 53 - NULL 54 - }; 55 - 56 - for (j = 0; gpio_list[j] != NULL; j++) { 57 - gpio = of_find_compatible_node(NULL, NULL, gpio_list[j]); 58 - if (gpio) 59 - break; 60 - } 61 - 62 - if (gpio) { 63 - base_addr = be32_to_cpup(of_get_property(gpio, "reg", NULL)); 64 - base_addr = (unsigned long) ioremap(base_addr, PAGE_SIZE); 65 - pr_notice("Heartbeat GPIO at 0x%x\n", base_addr); 66 - 67 - /* GPIO is configured as output */ 68 - prop = (int *) of_get_property(gpio, "xlnx,is-bidir", NULL); 69 - if (prop) 70 - out_be32(base_addr + 4, 0); 71 - } 72 - }
-29
arch/microblaze/kernel/platform.c
··· 1 - /* 2 - * Copyright 2008 Michal Simek <monstr@monstr.eu> 3 - * 4 - * based on virtex.c file 5 - * 6 - * Copyright 2007 Secret Lab Technologies Ltd. 7 - * 8 - * This file is licensed under the terms of the GNU General Public License 9 - * version 2. This program is licensed "as is" without any warranty of any 10 - * kind, whether express or implied. 11 - */ 12 - 13 - #include <linux/init.h> 14 - #include <linux/of_platform.h> 15 - #include <asm/setup.h> 16 - 17 - static struct of_device_id xilinx_of_bus_ids[] __initdata = { 18 - { .compatible = "simple-bus", }, 19 - { .compatible = "xlnx,compound", }, 20 - {} 21 - }; 22 - 23 - static int __init microblaze_device_probe(void) 24 - { 25 - of_platform_bus_probe(NULL, xilinx_of_bus_ids, NULL); 26 - of_platform_reset_gpio_probe(); 27 - return 0; 28 - } 29 - device_initcall(microblaze_device_probe);
+6 -5
arch/microblaze/kernel/reset.c
··· 18 18 static int handle; /* reset pin handle */ 19 19 static unsigned int reset_val; 20 20 21 - void of_platform_reset_gpio_probe(void) 21 + static int of_platform_reset_gpio_probe(void) 22 22 { 23 23 int ret; 24 24 handle = of_get_named_gpio(of_find_node_by_path("/"), ··· 27 27 if (!gpio_is_valid(handle)) { 28 28 pr_info("Skipping unavailable RESET gpio %d (%s)\n", 29 29 handle, "reset"); 30 - return; 30 + return -ENODEV; 31 31 } 32 32 33 33 ret = gpio_request(handle, "reset"); 34 34 if (ret < 0) { 35 35 pr_info("GPIO pin is already allocated\n"); 36 - return; 36 + return ret; 37 37 } 38 38 39 39 /* get current setup value */ ··· 51 51 52 52 pr_info("RESET: Registered gpio device: %d, current val: %d\n", 53 53 handle, reset_val); 54 - return; 54 + return 0; 55 55 err: 56 56 gpio_free(handle); 57 - return; 57 + return ret; 58 58 } 59 + device_initcall(of_platform_reset_gpio_probe); 59 60 60 61 61 62 static void gpio_system_reset(void)
+2
arch/microblaze/kernel/syscall_table.S
··· 400 400 .long sys_pkey_alloc 401 401 .long sys_pkey_free 402 402 .long sys_statx 403 + .long sys_io_pgetevents 404 + .long sys_rseq
-7
arch/microblaze/kernel/timer.c
··· 156 156 static irqreturn_t timer_interrupt(int irq, void *dev_id) 157 157 { 158 158 struct clock_event_device *evt = &clockevent_xilinx_timer; 159 - #ifdef CONFIG_HEART_BEAT 160 - microblaze_heartbeat(); 161 - #endif 162 159 timer_ack(); 163 160 evt->event_handler(evt); 164 161 return IRQ_HANDLED; ··· 314 317 pr_err("Failed to setup IRQ"); 315 318 return ret; 316 319 } 317 - 318 - #ifdef CONFIG_HEART_BEAT 319 - microblaze_setup_heartbeat(); 320 - #endif 321 320 322 321 ret = xilinx_clocksource_init(); 323 322 if (ret)
+2 -2
arch/mips/kernel/signal.c
··· 801 801 regs->regs[0] = 0; /* Don't deal with this again. */ 802 802 } 803 803 804 - rseq_signal_deliver(regs); 804 + rseq_signal_deliver(ksig, regs); 805 805 806 806 if (sig_uses_siginfo(&ksig->ka, abi)) 807 807 ret = abi->setup_rt_frame(vdso + abi->vdso->off_rt_sigreturn, ··· 870 870 if (thread_info_flags & _TIF_NOTIFY_RESUME) { 871 871 clear_thread_flag(TIF_NOTIFY_RESUME); 872 872 tracehook_notify_resume(regs); 873 - rseq_handle_notify_resume(regs); 873 + rseq_handle_notify_resume(NULL, regs); 874 874 } 875 875 876 876 user_enter();
+3 -3
arch/parisc/Kconfig
··· 244 244 245 245 config PARISC_PAGE_SIZE_16KB 246 246 bool "16KB" 247 - depends on PA8X00 247 + depends on PA8X00 && BROKEN 248 248 249 249 config PARISC_PAGE_SIZE_64KB 250 250 bool "64KB" 251 - depends on PA8X00 251 + depends on PA8X00 && BROKEN 252 252 253 253 endchoice 254 254 ··· 347 347 int "Maximum number of CPUs (2-32)" 348 348 range 2 32 349 349 depends on SMP 350 - default "32" 350 + default "4" 351 351 352 352 endmenu 353 353
-4
arch/parisc/Makefile
··· 65 65 # kernel. 66 66 cflags-y += -mdisable-fpregs 67 67 68 - # Without this, "ld -r" results in .text sections that are too big 69 - # (> 0x40000) for branches to reach stubs. 70 - cflags-y += -ffunction-sections 71 - 72 68 # Use long jumps instead of long branches (needed if your linker fails to 73 69 # link a too big vmlinux executable). Not enabled for building modules. 74 70 ifdef CONFIG_MLONGCALLS
-8
arch/parisc/include/asm/signal.h
··· 21 21 unsigned long sig[_NSIG_WORDS]; 22 22 } sigset_t; 23 23 24 - #ifndef __KERNEL__ 25 - struct sigaction { 26 - __sighandler_t sa_handler; 27 - unsigned long sa_flags; 28 - sigset_t sa_mask; /* mask last for extensibility */ 29 - }; 30 - #endif 31 - 32 24 #include <asm/sigcontext.h> 33 25 34 26 #endif /* !__ASSEMBLY */
+2 -1
arch/parisc/include/uapi/asm/unistd.h
··· 364 364 #define __NR_preadv2 (__NR_Linux + 347) 365 365 #define __NR_pwritev2 (__NR_Linux + 348) 366 366 #define __NR_statx (__NR_Linux + 349) 367 + #define __NR_io_pgetevents (__NR_Linux + 350) 367 368 368 - #define __NR_Linux_syscalls (__NR_statx + 1) 369 + #define __NR_Linux_syscalls (__NR_io_pgetevents + 1) 369 370 370 371 371 372 #define __IGNORE_select /* newselect */
+9 -16
arch/parisc/kernel/drivers.c
··· 154 154 { 155 155 /* FIXME: we need this because apparently the sti 156 156 * driver can be registered twice */ 157 - if(driver->drv.name) { 158 - printk(KERN_WARNING 159 - "BUG: skipping previously registered driver %s\n", 160 - driver->name); 157 + if (driver->drv.name) { 158 + pr_warn("BUG: skipping previously registered driver %s\n", 159 + driver->name); 161 160 return 1; 162 161 } 163 162 164 163 if (!driver->probe) { 165 - printk(KERN_WARNING 166 - "BUG: driver %s has no probe routine\n", 167 - driver->name); 164 + pr_warn("BUG: driver %s has no probe routine\n", driver->name); 168 165 return 1; 169 166 } 170 167 ··· 488 491 489 492 dev = create_parisc_device(mod_path); 490 493 if (dev->id.hw_type != HPHW_FAULTY) { 491 - printk(KERN_ERR "Two devices have hardware path [%s]. " 492 - "IODC data for second device: " 493 - "%02x%02x%02x%02x%02x%02x\n" 494 - "Rearranging GSC cards sometimes helps\n", 495 - parisc_pathname(dev), iodc_data[0], iodc_data[1], 496 - iodc_data[3], iodc_data[4], iodc_data[5], iodc_data[6]); 494 + pr_err("Two devices have hardware path [%s]. IODC data for second device: %7phN\n" 495 + "Rearranging GSC cards sometimes helps\n", 496 + parisc_pathname(dev), iodc_data); 497 497 return NULL; 498 498 } 499 499 ··· 522 528 * the keyboard controller 523 529 */ 524 530 if ((hpa & 0xfff) == 0 && insert_resource(&iomem_resource, &dev->hpa)) 525 - printk("Unable to claim HPA %lx for device %s\n", 526 - hpa, name); 531 + pr_warn("Unable to claim HPA %lx for device %s\n", hpa, name); 527 532 528 533 return dev; 529 534 } ··· 868 875 static int count; 869 876 870 877 print_pa_hwpath(dev, hw_path); 871 - printk(KERN_INFO "%d. %s at 0x%px [%s] { %d, 0x%x, 0x%.3x, 0x%.5x }", 878 + pr_info("%d. %s at 0x%px [%s] { %d, 0x%x, 0x%.3x, 0x%.5x }", 872 879 ++count, dev->name, (void*) dev->hpa.start, hw_path, dev->id.hw_type, 873 880 dev->id.hversion_rev, dev->id.hversion, dev->id.sversion); 874 881
+1
arch/parisc/kernel/syscall_table.S
··· 445 445 ENTRY_COMP(preadv2) 446 446 ENTRY_COMP(pwritev2) 447 447 ENTRY_SAME(statx) 448 + ENTRY_COMP(io_pgetevents) /* 350 */ 448 449 449 450 450 451 .ifne (. - 90b) - (__NR_Linux_syscalls * (91b - 90b))
+2 -2
arch/parisc/kernel/unwind.c
··· 25 25 26 26 /* #define DEBUG 1 */ 27 27 #ifdef DEBUG 28 - #define dbg(x...) printk(x) 28 + #define dbg(x...) pr_debug(x) 29 29 #else 30 30 #define dbg(x...) 31 31 #endif ··· 182 182 start = (long)&__start___unwind[0]; 183 183 stop = (long)&__stop___unwind[0]; 184 184 185 - printk("unwind_init: start = 0x%lx, end = 0x%lx, entries = %lu\n", 185 + dbg("unwind_init: start = 0x%lx, end = 0x%lx, entries = %lu\n", 186 186 start, stop, 187 187 (stop - start) / sizeof(struct unwind_table_entry)); 188 188
-1
arch/powerpc/include/asm/book3s/32/pgalloc.h
··· 138 138 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table, 139 139 unsigned long address) 140 140 { 141 - pgtable_page_dtor(table); 142 141 pgtable_free_tlb(tlb, page_address(table), 0); 143 142 } 144 143 #endif /* _ASM_POWERPC_BOOK3S_32_PGALLOC_H */
-1
arch/powerpc/include/asm/nohash/32/pgalloc.h
··· 140 140 unsigned long address) 141 141 { 142 142 tlb_flush_pgtable(tlb, address); 143 - pgtable_page_dtor(table); 144 143 pgtable_free_tlb(tlb, page_address(table), 0); 145 144 } 146 145 #endif /* _ASM_POWERPC_PGALLOC_32_H */
+1
arch/powerpc/include/asm/systbl.h
··· 393 393 SYSCALL(pkey_free) 394 394 SYSCALL(pkey_mprotect) 395 395 SYSCALL(rseq) 396 + COMPAT_SYS(io_pgetevents)
+1 -1
arch/powerpc/include/asm/unistd.h
··· 12 12 #include <uapi/asm/unistd.h> 13 13 14 14 15 - #define NR_syscalls 388 15 + #define NR_syscalls 389 16 16 17 17 #define __NR__exit __NR_exit 18 18
+1
arch/powerpc/include/uapi/asm/unistd.h
··· 399 399 #define __NR_pkey_free 385 400 400 #define __NR_pkey_mprotect 386 401 401 #define __NR_rseq 387 402 + #define __NR_io_pgetevents 388 402 403 403 404 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
-4
arch/powerpc/kernel/pci_32.c
··· 285 285 * Note that the returned IO or memory base is a physical address 286 286 */ 287 287 288 - #pragma GCC diagnostic push 289 - #pragma GCC diagnostic ignored "-Wpragmas" 290 - #pragma GCC diagnostic ignored "-Wattribute-alias" 291 288 SYSCALL_DEFINE3(pciconfig_iobase, long, which, 292 289 unsigned long, bus, unsigned long, devfn) 293 290 { ··· 310 313 311 314 return result; 312 315 } 313 - #pragma GCC diagnostic pop
-4
arch/powerpc/kernel/pci_64.c
··· 203 203 #define IOBASE_ISA_IO 3 204 204 #define IOBASE_ISA_MEM 4 205 205 206 - #pragma GCC diagnostic push 207 - #pragma GCC diagnostic ignored "-Wpragmas" 208 - #pragma GCC diagnostic ignored "-Wattribute-alias" 209 206 SYSCALL_DEFINE3(pciconfig_iobase, long, which, unsigned long, in_bus, 210 207 unsigned long, in_devfn) 211 208 { ··· 256 259 257 260 return -EOPNOTSUPP; 258 261 } 259 - #pragma GCC diagnostic pop 260 262 261 263 #ifdef CONFIG_NUMA 262 264 int pcibus_to_node(struct pci_bus *bus)
-4
arch/powerpc/kernel/rtas.c
··· 1051 1051 } 1052 1052 1053 1053 /* We assume to be passed big endian arguments */ 1054 - #pragma GCC diagnostic push 1055 - #pragma GCC diagnostic ignored "-Wpragmas" 1056 - #pragma GCC diagnostic ignored "-Wattribute-alias" 1057 1054 SYSCALL_DEFINE1(rtas, struct rtas_args __user *, uargs) 1058 1055 { 1059 1056 struct rtas_args args; ··· 1137 1140 1138 1141 return 0; 1139 1142 } 1140 - #pragma GCC diagnostic pop 1141 1143 1142 1144 /* 1143 1145 * Call early during boot, before mem init, to retrieve the RTAS
-8
arch/powerpc/kernel/signal_32.c
··· 1038 1038 } 1039 1039 #endif 1040 1040 1041 - #pragma GCC diagnostic push 1042 - #pragma GCC diagnostic ignored "-Wpragmas" 1043 - #pragma GCC diagnostic ignored "-Wattribute-alias" 1044 1041 #ifdef CONFIG_PPC64 1045 1042 COMPAT_SYSCALL_DEFINE3(swapcontext, struct ucontext __user *, old_ctx, 1046 1043 struct ucontext __user *, new_ctx, int, ctx_size) ··· 1131 1134 set_thread_flag(TIF_RESTOREALL); 1132 1135 return 0; 1133 1136 } 1134 - #pragma GCC diagnostic pop 1135 1137 1136 1138 #ifdef CONFIG_PPC64 1137 1139 COMPAT_SYSCALL_DEFINE0(rt_sigreturn) ··· 1227 1231 return 0; 1228 1232 } 1229 1233 1230 - #pragma GCC diagnostic push 1231 - #pragma GCC diagnostic ignored "-Wpragmas" 1232 - #pragma GCC diagnostic ignored "-Wattribute-alias" 1233 1234 #ifdef CONFIG_PPC32 1234 1235 SYSCALL_DEFINE3(debug_setcontext, struct ucontext __user *, ctx, 1235 1236 int, ndbg, struct sig_dbg_op __user *, dbg) ··· 1330 1337 return 0; 1331 1338 } 1332 1339 #endif 1333 - #pragma GCC diagnostic pop 1334 1340 1335 1341 /* 1336 1342 * OK, we're invoking a handler
-4
arch/powerpc/kernel/signal_64.c
··· 625 625 /* 626 626 * Handle {get,set,swap}_context operations 627 627 */ 628 - #pragma GCC diagnostic push 629 - #pragma GCC diagnostic ignored "-Wpragmas" 630 - #pragma GCC diagnostic ignored "-Wattribute-alias" 631 628 SYSCALL_DEFINE3(swapcontext, struct ucontext __user *, old_ctx, 632 629 struct ucontext __user *, new_ctx, long, ctx_size) 633 630 { ··· 690 693 set_thread_flag(TIF_RESTOREALL); 691 694 return 0; 692 695 } 693 - #pragma GCC diagnostic pop 694 696 695 697 696 698 /*
-4
arch/powerpc/kernel/syscalls.c
··· 62 62 return ret; 63 63 } 64 64 65 - #pragma GCC diagnostic push 66 - #pragma GCC diagnostic ignored "-Wpragmas" 67 - #pragma GCC diagnostic ignored "-Wattribute-alias" 68 65 SYSCALL_DEFINE6(mmap2, unsigned long, addr, size_t, len, 69 66 unsigned long, prot, unsigned long, flags, 70 67 unsigned long, fd, unsigned long, pgoff) ··· 75 78 { 76 79 return do_mmap2(addr, len, prot, flags, fd, offset, PAGE_SHIFT); 77 80 } 78 - #pragma GCC diagnostic pop 79 81 80 82 #ifdef CONFIG_PPC32 81 83 /*
-4
arch/powerpc/mm/subpage-prot.c
··· 186 186 * in a 2-bit field won't allow writes to a page that is otherwise 187 187 * write-protected. 188 188 */ 189 - #pragma GCC diagnostic push 190 - #pragma GCC diagnostic ignored "-Wpragmas" 191 - #pragma GCC diagnostic ignored "-Wattribute-alias" 192 189 SYSCALL_DEFINE3(subpage_prot, unsigned long, addr, 193 190 unsigned long, len, u32 __user *, map) 194 191 { ··· 269 272 up_write(&mm->mmap_sem); 270 273 return err; 271 274 } 272 - #pragma GCC diagnostic pop
+20 -9
arch/powerpc/platforms/powermac/time.c
··· 42 42 #define DBG(x...) 43 43 #endif 44 44 45 - /* Apparently the RTC stores seconds since 1 Jan 1904 */ 45 + /* 46 + * Offset between Unix time (1970-based) and Mac time (1904-based). Cuda and PMU 47 + * times wrap in 2040. If we need to handle later times, the read_time functions 48 + * need to be changed to interpret wrapped times as post-2040. 49 + */ 46 50 #define RTC_OFFSET 2082844800 47 51 48 52 /* ··· 101 97 if (req.reply_len != 7) 102 98 printk(KERN_ERR "cuda_get_time: got %d byte reply\n", 103 99 req.reply_len); 104 - now = (req.reply[3] << 24) + (req.reply[4] << 16) 105 - + (req.reply[5] << 8) + req.reply[6]; 100 + now = (u32)((req.reply[3] << 24) + (req.reply[4] << 16) + 101 + (req.reply[5] << 8) + req.reply[6]); 102 + /* it's either after year 2040, or the RTC has gone backwards */ 103 + WARN_ON(now < RTC_OFFSET); 104 + 106 105 return now - RTC_OFFSET; 107 106 } 108 107 ··· 113 106 114 107 static int cuda_set_rtc_time(struct rtc_time *tm) 115 108 { 116 - time64_t nowtime; 109 + u32 nowtime; 117 110 struct adb_request req; 118 111 119 - nowtime = rtc_tm_to_time64(tm) + RTC_OFFSET; 112 + nowtime = lower_32_bits(rtc_tm_to_time64(tm) + RTC_OFFSET); 120 113 if (cuda_request(&req, NULL, 6, CUDA_PACKET, CUDA_SET_TIME, 121 114 nowtime >> 24, nowtime >> 16, nowtime >> 8, 122 115 nowtime) < 0) ··· 147 140 if (req.reply_len != 4) 148 141 printk(KERN_ERR "pmu_get_time: got %d byte reply from PMU\n", 149 142 req.reply_len); 150 - now = (req.reply[0] << 24) + (req.reply[1] << 16) 151 - + (req.reply[2] << 8) + req.reply[3]; 143 + now = (u32)((req.reply[0] << 24) + (req.reply[1] << 16) + 144 + (req.reply[2] << 8) + req.reply[3]); 145 + 146 + /* it's either after year 2040, or the RTC has gone backwards */ 147 + WARN_ON(now < RTC_OFFSET); 148 + 152 149 return now - RTC_OFFSET; 153 150 } 154 151 ··· 160 149 161 150 static int pmu_set_rtc_time(struct rtc_time *tm) 162 151 { 163 - time64_t nowtime; 152 + u32 nowtime; 164 153 struct adb_request req; 165 154 166 - nowtime = rtc_tm_to_time64(tm) + RTC_OFFSET; 155 + nowtime = lower_32_bits(rtc_tm_to_time64(tm) + RTC_OFFSET); 167 156 if (pmu_request(&req, NULL, 5, PMU_SET_RTC, nowtime >> 24, 168 157 nowtime >> 16, nowtime >> 8, nowtime) < 0) 169 158 return -ENXIO;
+1 -1
arch/x86/entry/entry_32.S
··· 477 477 * whereas POPF does not.) 478 478 */ 479 479 addl $PT_EFLAGS-PT_DS, %esp /* point esp at pt_regs->flags */ 480 - btr $X86_EFLAGS_IF_BIT, (%esp) 480 + btrl $X86_EFLAGS_IF_BIT, (%esp) 481 481 popfl 482 482 483 483 /*
+8 -8
arch/x86/entry/entry_64_compat.S
··· 84 84 pushq %rdx /* pt_regs->dx */ 85 85 pushq %rcx /* pt_regs->cx */ 86 86 pushq $-ENOSYS /* pt_regs->ax */ 87 - pushq %r8 /* pt_regs->r8 */ 87 + pushq $0 /* pt_regs->r8 = 0 */ 88 88 xorl %r8d, %r8d /* nospec r8 */ 89 - pushq %r9 /* pt_regs->r9 */ 89 + pushq $0 /* pt_regs->r9 = 0 */ 90 90 xorl %r9d, %r9d /* nospec r9 */ 91 - pushq %r10 /* pt_regs->r10 */ 91 + pushq $0 /* pt_regs->r10 = 0 */ 92 92 xorl %r10d, %r10d /* nospec r10 */ 93 - pushq %r11 /* pt_regs->r11 */ 93 + pushq $0 /* pt_regs->r11 = 0 */ 94 94 xorl %r11d, %r11d /* nospec r11 */ 95 95 pushq %rbx /* pt_regs->rbx */ 96 96 xorl %ebx, %ebx /* nospec rbx */ ··· 374 374 pushq %rcx /* pt_regs->cx */ 375 375 xorl %ecx, %ecx /* nospec cx */ 376 376 pushq $-ENOSYS /* pt_regs->ax */ 377 - pushq $0 /* pt_regs->r8 = 0 */ 377 + pushq %r8 /* pt_regs->r8 */ 378 378 xorl %r8d, %r8d /* nospec r8 */ 379 - pushq $0 /* pt_regs->r9 = 0 */ 379 + pushq %r9 /* pt_regs->r9 */ 380 380 xorl %r9d, %r9d /* nospec r9 */ 381 - pushq $0 /* pt_regs->r10 = 0 */ 381 + pushq %r10 /* pt_regs->r10*/ 382 382 xorl %r10d, %r10d /* nospec r10 */ 383 - pushq $0 /* pt_regs->r11 = 0 */ 383 + pushq %r11 /* pt_regs->r11 */ 384 384 xorl %r11d, %r11d /* nospec r11 */ 385 385 pushq %rbx /* pt_regs->rbx */ 386 386 xorl %ebx, %ebx /* nospec rbx */
+3
arch/x86/include/asm/pgalloc.h
··· 184 184 185 185 static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d) 186 186 { 187 + if (!pgtable_l5_enabled()) 188 + return; 189 + 187 190 BUG_ON((unsigned long)p4d & (PAGE_SIZE-1)); 188 191 free_page((unsigned long)p4d); 189 192 }
+1 -1
arch/x86/include/asm/pgtable.h
··· 898 898 #define pgd_page(pgd) pfn_to_page(pgd_pfn(pgd)) 899 899 900 900 /* to find an entry in a page-table-directory. */ 901 - static __always_inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address) 901 + static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address) 902 902 { 903 903 if (!pgtable_l5_enabled()) 904 904 return (p4d_t *)pgd;
+2 -2
arch/x86/include/asm/pgtable_64.h
··· 216 216 } 217 217 #endif 218 218 219 - static __always_inline void native_set_p4d(p4d_t *p4dp, p4d_t p4d) 219 + static inline void native_set_p4d(p4d_t *p4dp, p4d_t p4d) 220 220 { 221 221 pgd_t pgd; 222 222 ··· 230 230 *p4dp = native_make_p4d(native_pgd_val(pgd)); 231 231 } 232 232 233 - static __always_inline void native_p4d_clear(p4d_t *p4d) 233 + static inline void native_p4d_clear(p4d_t *p4d) 234 234 { 235 235 native_set_p4d(p4d, native_make_p4d(0)); 236 236 }
+12 -3
arch/x86/kernel/e820.c
··· 1248 1248 { 1249 1249 int i; 1250 1250 u64 end; 1251 + u64 addr = 0; 1251 1252 1252 1253 /* 1253 1254 * The bootstrap memblock region count maximum is 128 entries ··· 1265 1264 struct e820_entry *entry = &e820_table->entries[i]; 1266 1265 1267 1266 end = entry->addr + entry->size; 1267 + if (addr < entry->addr) 1268 + memblock_reserve(addr, entry->addr - addr); 1269 + addr = end; 1268 1270 if (end != (resource_size_t)end) 1269 1271 continue; 1270 1272 1273 + /* 1274 + * all !E820_TYPE_RAM ranges (including gap ranges) are put 1275 + * into memblock.reserved to make sure that struct pages in 1276 + * such regions are not left uninitialized after bootup. 1277 + */ 1271 1278 if (entry->type != E820_TYPE_RAM && entry->type != E820_TYPE_RESERVED_KERN) 1272 - continue; 1273 - 1274 - memblock_add(entry->addr, entry->size); 1279 + memblock_reserve(entry->addr, entry->size); 1280 + else 1281 + memblock_add(entry->addr, entry->size); 1275 1282 } 1276 1283 1277 1284 /* Throw away partial pages: */
+7 -14
arch/x86/mm/fault.c
··· 641 641 return 0; 642 642 } 643 643 644 - static const char nx_warning[] = KERN_CRIT 645 - "kernel tried to execute NX-protected page - exploit attempt? (uid: %d)\n"; 646 - static const char smep_warning[] = KERN_CRIT 647 - "unable to execute userspace code (SMEP?) (uid: %d)\n"; 648 - 649 644 static void 650 645 show_fault_oops(struct pt_regs *regs, unsigned long error_code, 651 646 unsigned long address) ··· 659 664 pte = lookup_address_in_pgd(pgd, address, &level); 660 665 661 666 if (pte && pte_present(*pte) && !pte_exec(*pte)) 662 - printk(nx_warning, from_kuid(&init_user_ns, current_uid())); 667 + pr_crit("kernel tried to execute NX-protected page - exploit attempt? (uid: %d)\n", 668 + from_kuid(&init_user_ns, current_uid())); 663 669 if (pte && pte_present(*pte) && pte_exec(*pte) && 664 670 (pgd_flags(*pgd) & _PAGE_USER) && 665 671 (__read_cr4() & X86_CR4_SMEP)) 666 - printk(smep_warning, from_kuid(&init_user_ns, current_uid())); 672 + pr_crit("unable to execute userspace code (SMEP?) (uid: %d)\n", 673 + from_kuid(&init_user_ns, current_uid())); 667 674 } 668 675 669 - printk(KERN_ALERT "BUG: unable to handle kernel "); 670 - if (address < PAGE_SIZE) 671 - printk(KERN_CONT "NULL pointer dereference"); 672 - else 673 - printk(KERN_CONT "paging request"); 674 - 675 - printk(KERN_CONT " at %px\n", (void *) address); 676 + pr_alert("BUG: unable to handle kernel %s at %px\n", 677 + address < PAGE_SIZE ? "NULL pointer dereference" : "paging request", 678 + (void *)address); 676 679 677 680 dump_pagetable(address); 678 681 }
+2 -2
arch/x86/platform/efi/efi_64.c
··· 166 166 pgd = pgd_offset_k(pgd_idx * PGDIR_SIZE); 167 167 set_pgd(pgd_offset_k(pgd_idx * PGDIR_SIZE), save_pgd[pgd_idx]); 168 168 169 - if (!(pgd_val(*pgd) & _PAGE_PRESENT)) 169 + if (!pgd_present(*pgd)) 170 170 continue; 171 171 172 172 for (i = 0; i < PTRS_PER_P4D; i++) { 173 173 p4d = p4d_offset(pgd, 174 174 pgd_idx * PGDIR_SIZE + i * P4D_SIZE); 175 175 176 - if (!(p4d_val(*p4d) & _PAGE_PRESENT)) 176 + if (!p4d_present(*p4d)) 177 177 continue; 178 178 179 179 pud = (pud_t *)p4d_page_vaddr(*p4d);
+4
block/blk-core.c
··· 3473 3473 dst->cpu = src->cpu; 3474 3474 dst->__sector = blk_rq_pos(src); 3475 3475 dst->__data_len = blk_rq_bytes(src); 3476 + if (src->rq_flags & RQF_SPECIAL_PAYLOAD) { 3477 + dst->rq_flags |= RQF_SPECIAL_PAYLOAD; 3478 + dst->special_vec = src->special_vec; 3479 + } 3476 3480 dst->nr_phys_segments = src->nr_phys_segments; 3477 3481 dst->ioprio = src->ioprio; 3478 3482 dst->extra_len = src->extra_len;
+12
block/blk-mq.c
··· 1075 1075 1076 1076 #define BLK_MQ_RESOURCE_DELAY 3 /* ms units */ 1077 1077 1078 + /* 1079 + * Returns true if we did some work AND can potentially do more. 1080 + */ 1078 1081 bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list, 1079 1082 bool got_budget) 1080 1083 { ··· 1208 1205 blk_mq_run_hw_queue(hctx, true); 1209 1206 else if (needs_restart && (ret == BLK_STS_RESOURCE)) 1210 1207 blk_mq_delay_run_hw_queue(hctx, BLK_MQ_RESOURCE_DELAY); 1208 + 1209 + return false; 1211 1210 } 1211 + 1212 + /* 1213 + * If the host/device is unable to accept more work, inform the 1214 + * caller of that. 1215 + */ 1216 + if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) 1217 + return false; 1212 1218 1213 1219 return (queued + errors) != 0; 1214 1220 }
+1 -1
certs/blacklist.h
··· 1 1 #include <linux/kernel.h> 2 2 3 - extern const char __initdata *const blacklist_hashes[]; 3 + extern const char __initconst *const blacklist_hashes[];
+10 -3
crypto/af_alg.c
··· 1060 1060 } 1061 1061 EXPORT_SYMBOL_GPL(af_alg_async_cb); 1062 1062 1063 - __poll_t af_alg_poll_mask(struct socket *sock, __poll_t events) 1063 + /** 1064 + * af_alg_poll - poll system call handler 1065 + */ 1066 + __poll_t af_alg_poll(struct file *file, struct socket *sock, 1067 + poll_table *wait) 1064 1068 { 1065 1069 struct sock *sk = sock->sk; 1066 1070 struct alg_sock *ask = alg_sk(sk); 1067 1071 struct af_alg_ctx *ctx = ask->private; 1068 - __poll_t mask = 0; 1072 + __poll_t mask; 1073 + 1074 + sock_poll_wait(file, sk_sleep(sk), wait); 1075 + mask = 0; 1069 1076 1070 1077 if (!ctx->more || ctx->used) 1071 1078 mask |= EPOLLIN | EPOLLRDNORM; ··· 1082 1075 1083 1076 return mask; 1084 1077 } 1085 - EXPORT_SYMBOL_GPL(af_alg_poll_mask); 1078 + EXPORT_SYMBOL_GPL(af_alg_poll); 1086 1079 1087 1080 /** 1088 1081 * af_alg_alloc_areq - allocate struct af_alg_async_req
+2 -2
crypto/algif_aead.c
··· 375 375 .sendmsg = aead_sendmsg, 376 376 .sendpage = af_alg_sendpage, 377 377 .recvmsg = aead_recvmsg, 378 - .poll_mask = af_alg_poll_mask, 378 + .poll = af_alg_poll, 379 379 }; 380 380 381 381 static int aead_check_key(struct socket *sock) ··· 471 471 .sendmsg = aead_sendmsg_nokey, 472 472 .sendpage = aead_sendpage_nokey, 473 473 .recvmsg = aead_recvmsg_nokey, 474 - .poll_mask = af_alg_poll_mask, 474 + .poll = af_alg_poll, 475 475 }; 476 476 477 477 static void *aead_bind(const char *name, u32 type, u32 mask)
+2 -2
crypto/algif_skcipher.c
··· 206 206 .sendmsg = skcipher_sendmsg, 207 207 .sendpage = af_alg_sendpage, 208 208 .recvmsg = skcipher_recvmsg, 209 - .poll_mask = af_alg_poll_mask, 209 + .poll = af_alg_poll, 210 210 }; 211 211 212 212 static int skcipher_check_key(struct socket *sock) ··· 302 302 .sendmsg = skcipher_sendmsg_nokey, 303 303 .sendpage = skcipher_sendpage_nokey, 304 304 .recvmsg = skcipher_recvmsg_nokey, 305 - .poll_mask = af_alg_poll_mask, 305 + .poll = af_alg_poll, 306 306 }; 307 307 308 308 static void *skcipher_bind(const char *name, u32 type, u32 mask)
+9
crypto/asymmetric_keys/x509_cert_parser.c
··· 249 249 return -EINVAL; 250 250 } 251 251 252 + if (strcmp(ctx->cert->sig->pkey_algo, "rsa") == 0) { 253 + /* Discard the BIT STRING metadata */ 254 + if (vlen < 1 || *(const u8 *)value != 0) 255 + return -EBADMSG; 256 + 257 + value++; 258 + vlen--; 259 + } 260 + 252 261 ctx->cert->raw_sig = value; 253 262 ctx->cert->raw_sig_size = vlen; 254 263 return 0;
+72
drivers/acpi/osl.c
··· 45 45 #include <linux/uaccess.h> 46 46 #include <linux/io-64-nonatomic-lo-hi.h> 47 47 48 + #include "acpica/accommon.h" 49 + #include "acpica/acnamesp.h" 48 50 #include "internal.h" 49 51 50 52 #define _COMPONENT ACPI_OS_SERVICES ··· 1491 1489 return acpi_check_resource_conflict(&res); 1492 1490 } 1493 1491 EXPORT_SYMBOL(acpi_check_region); 1492 + 1493 + static acpi_status acpi_deactivate_mem_region(acpi_handle handle, u32 level, 1494 + void *_res, void **return_value) 1495 + { 1496 + struct acpi_mem_space_context **mem_ctx; 1497 + union acpi_operand_object *handler_obj; 1498 + union acpi_operand_object *region_obj2; 1499 + union acpi_operand_object *region_obj; 1500 + struct resource *res = _res; 1501 + acpi_status status; 1502 + 1503 + region_obj = acpi_ns_get_attached_object(handle); 1504 + if (!region_obj) 1505 + return AE_OK; 1506 + 1507 + handler_obj = region_obj->region.handler; 1508 + if (!handler_obj) 1509 + return AE_OK; 1510 + 1511 + if (region_obj->region.space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY) 1512 + return AE_OK; 1513 + 1514 + if (!(region_obj->region.flags & AOPOBJ_SETUP_COMPLETE)) 1515 + return AE_OK; 1516 + 1517 + region_obj2 = acpi_ns_get_secondary_object(region_obj); 1518 + if (!region_obj2) 1519 + return AE_OK; 1520 + 1521 + mem_ctx = (void *)&region_obj2->extra.region_context; 1522 + 1523 + if (!(mem_ctx[0]->address >= res->start && 1524 + mem_ctx[0]->address < res->end)) 1525 + return AE_OK; 1526 + 1527 + status = handler_obj->address_space.setup(region_obj, 1528 + ACPI_REGION_DEACTIVATE, 1529 + NULL, (void **)mem_ctx); 1530 + if (ACPI_SUCCESS(status)) 1531 + region_obj->region.flags &= ~(AOPOBJ_SETUP_COMPLETE); 1532 + 1533 + return status; 1534 + } 1535 + 1536 + /** 1537 + * acpi_release_memory - Release any mappings done to a memory region 1538 + * @handle: Handle to namespace node 1539 + * @res: Memory resource 1540 + * @level: A level that terminates the search 1541 + * 1542 + * Walks through @handle and unmaps all SystemMemory Operation Regions that 1543 + * overlap with @res and that have already been activated (mapped). 1544 + * 1545 + * This is a helper that allows drivers to place special requirements on memory 1546 + * region that may overlap with operation regions, primarily allowing them to 1547 + * safely map the region as non-cached memory. 1548 + * 1549 + * The unmapped Operation Regions will be automatically remapped next time they 1550 + * are called, so the drivers do not need to do anything else. 1551 + */ 1552 + acpi_status acpi_release_memory(acpi_handle handle, struct resource *res, 1553 + u32 level) 1554 + { 1555 + if (!(res->flags & IORESOURCE_MEM)) 1556 + return AE_TYPE; 1557 + 1558 + return acpi_walk_namespace(ACPI_TYPE_REGION, handle, level, 1559 + acpi_deactivate_mem_region, NULL, res, NULL); 1560 + } 1561 + EXPORT_SYMBOL_GPL(acpi_release_memory); 1494 1562 1495 1563 /* 1496 1564 * Let drivers know whether the resource checks are effective
+3 -4
drivers/base/power/domain.c
··· 2487 2487 * power domain corresponding to a DT node's "required-opps" property. 2488 2488 * 2489 2489 * @dev: Device for which the performance-state needs to be found. 2490 - * @opp_node: DT node where the "required-opps" property is present. This can be 2490 + * @np: DT node where the "required-opps" property is present. This can be 2491 2491 * the device node itself (if it doesn't have an OPP table) or a node 2492 2492 * within the OPP table of a device (if device has an OPP table). 2493 - * @state: Pointer to return performance state. 2494 2493 * 2495 2494 * Returns performance state corresponding to the "required-opps" property of 2496 2495 * a DT node. This calls platform specific genpd->opp_to_performance_state() ··· 2498 2499 * Returns performance state on success and 0 on failure. 2499 2500 */ 2500 2501 unsigned int of_genpd_opp_to_performance_state(struct device *dev, 2501 - struct device_node *opp_node) 2502 + struct device_node *np) 2502 2503 { 2503 2504 struct generic_pm_domain *genpd; 2504 2505 struct dev_pm_opp *opp; ··· 2513 2514 2514 2515 genpd_lock(genpd); 2515 2516 2516 - opp = of_dev_pm_opp_find_required_opp(&genpd->dev, opp_node); 2517 + opp = of_dev_pm_opp_find_required_opp(&genpd->dev, np); 2517 2518 if (IS_ERR(opp)) { 2518 2519 dev_err(dev, "Failed to find required OPP: %ld\n", 2519 2520 PTR_ERR(opp));
+2 -2
drivers/block/drbd/drbd_req.c
··· 1244 1244 _drbd_start_io_acct(device, req); 1245 1245 1246 1246 /* process discards always from our submitter thread */ 1247 - if ((bio_op(bio) & REQ_OP_WRITE_ZEROES) || 1248 - (bio_op(bio) & REQ_OP_DISCARD)) 1247 + if (bio_op(bio) == REQ_OP_WRITE_ZEROES || 1248 + bio_op(bio) == REQ_OP_DISCARD) 1249 1249 goto queue_for_submitter_thread; 1250 1250 1251 1251 if (rw == WRITE && req->private_bio && req->i.size
+13 -16
drivers/char/random.c
··· 402 402 /* 403 403 * Static global variables 404 404 */ 405 - static DECLARE_WAIT_QUEUE_HEAD(random_wait); 405 + static DECLARE_WAIT_QUEUE_HEAD(random_read_wait); 406 + static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); 406 407 static struct fasync_struct *fasync; 407 408 408 409 static DEFINE_SPINLOCK(random_ready_list_lock); ··· 722 721 723 722 /* should we wake readers? */ 724 723 if (entropy_bits >= random_read_wakeup_bits && 725 - wq_has_sleeper(&random_wait)) { 726 - wake_up_interruptible_poll(&random_wait, POLLIN); 724 + wq_has_sleeper(&random_read_wait)) { 725 + wake_up_interruptible(&random_read_wait); 727 726 kill_fasync(&fasync, SIGIO, POLL_IN); 728 727 } 729 728 /* If the input pool is getting full, send some ··· 1397 1396 trace_debit_entropy(r->name, 8 * ibytes); 1398 1397 if (ibytes && 1399 1398 (r->entropy_count >> ENTROPY_SHIFT) < random_write_wakeup_bits) { 1400 - wake_up_interruptible_poll(&random_wait, POLLOUT); 1399 + wake_up_interruptible(&random_write_wait); 1401 1400 kill_fasync(&fasync, SIGIO, POLL_OUT); 1402 1401 } 1403 1402 ··· 1839 1838 if (nonblock) 1840 1839 return -EAGAIN; 1841 1840 1842 - wait_event_interruptible(random_wait, 1841 + wait_event_interruptible(random_read_wait, 1843 1842 ENTROPY_BITS(&input_pool) >= 1844 1843 random_read_wakeup_bits); 1845 1844 if (signal_pending(current)) ··· 1876 1875 return ret; 1877 1876 } 1878 1877 1879 - static struct wait_queue_head * 1880 - random_get_poll_head(struct file *file, __poll_t events) 1881 - { 1882 - return &random_wait; 1883 - } 1884 - 1885 1878 static __poll_t 1886 - random_poll_mask(struct file *file, __poll_t events) 1879 + random_poll(struct file *file, poll_table * wait) 1887 1880 { 1888 - __poll_t mask = 0; 1881 + __poll_t mask; 1889 1882 1883 + poll_wait(file, &random_read_wait, wait); 1884 + poll_wait(file, &random_write_wait, wait); 1885 + mask = 0; 1890 1886 if (ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits) 1891 1887 mask |= EPOLLIN | EPOLLRDNORM; 1892 1888 if (ENTROPY_BITS(&input_pool) < random_write_wakeup_bits) ··· 1990 1992 const struct file_operations random_fops = { 1991 1993 .read = random_read, 1992 1994 .write = random_write, 1993 - .get_poll_head = random_get_poll_head, 1994 - .poll_mask = random_poll_mask, 1995 + .poll = random_poll, 1995 1996 .unlocked_ioctl = random_ioctl, 1996 1997 .fasync = random_fasync, 1997 1998 .llseek = noop_llseek, ··· 2323 2326 * We'll be woken up again once below random_write_wakeup_thresh, 2324 2327 * or when the calling thread is about to terminate. 2325 2328 */ 2326 - wait_event_interruptible(random_wait, kthread_should_stop() || 2329 + wait_event_interruptible(random_write_wait, kthread_should_stop() || 2327 2330 ENTROPY_BITS(&input_pool) <= random_write_wakeup_bits); 2328 2331 mix_pool_bytes(poolp, buffer, count); 2329 2332 credit_entropy_bits(poolp, entropy);
+4 -4
drivers/cpufreq/qcom-cpufreq-kryo.c
··· 87 87 int ret; 88 88 89 89 cpu_dev = get_cpu_device(0); 90 - if (NULL == cpu_dev) 91 - ret = -ENODEV; 90 + if (!cpu_dev) 91 + return -ENODEV; 92 92 93 93 msm8996_version = qcom_cpufreq_kryo_get_msm_id(); 94 94 if (NUM_OF_MSM8996_VERSIONS == msm8996_version) { ··· 97 97 } 98 98 99 99 np = dev_pm_opp_of_get_opp_desc_node(cpu_dev); 100 - if (IS_ERR(np)) 101 - return PTR_ERR(np); 100 + if (!np) 101 + return -ENOENT; 102 102 103 103 ret = of_device_is_compatible(np, "operating-points-v2-kryo-cpu"); 104 104 if (!ret) {
+8
drivers/dax/super.c
··· 86 86 { 87 87 struct dax_device *dax_dev; 88 88 bool dax_enabled = false; 89 + struct request_queue *q; 89 90 pgoff_t pgoff; 90 91 int err, id; 91 92 void *kaddr; ··· 96 95 97 96 if (blocksize != PAGE_SIZE) { 98 97 pr_debug("%s: error: unsupported blocksize for dax\n", 98 + bdevname(bdev, buf)); 99 + return false; 100 + } 101 + 102 + q = bdev_get_queue(bdev); 103 + if (!q || !blk_queue_dax(q)) { 104 + pr_debug("%s: error: request queue doesn't support dax\n", 99 105 bdevname(bdev, buf)); 100 106 return false; 101 107 }
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
··· 376 376 struct amdgpu_device *adev = ring->adev; 377 377 uint64_t index; 378 378 379 - if (ring != &adev->uvd.inst[ring->me].ring) { 379 + if (ring->funcs->type != AMDGPU_RING_TYPE_UVD) { 380 380 ring->fence_drv.cpu_addr = &adev->wb.wb[ring->fence_offs]; 381 381 ring->fence_drv.gpu_addr = adev->wb.gpu_addr + (ring->fence_offs * 4); 382 382 } else {
+27 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
··· 52 52 unsigned long bo_size; 53 53 const char *fw_name; 54 54 const struct common_firmware_header *hdr; 55 - unsigned version_major, version_minor, family_id; 55 + unsigned char fw_check; 56 56 int r; 57 57 58 58 INIT_DELAYED_WORK(&adev->vcn.idle_work, amdgpu_vcn_idle_work_handler); ··· 83 83 84 84 hdr = (const struct common_firmware_header *)adev->vcn.fw->data; 85 85 adev->vcn.fw_version = le32_to_cpu(hdr->ucode_version); 86 - family_id = le32_to_cpu(hdr->ucode_version) & 0xff; 87 - version_major = (le32_to_cpu(hdr->ucode_version) >> 24) & 0xff; 88 - version_minor = (le32_to_cpu(hdr->ucode_version) >> 8) & 0xff; 89 - DRM_INFO("Found VCN firmware Version: %hu.%hu Family ID: %hu\n", 90 - version_major, version_minor, family_id); 91 86 87 + /* Bit 20-23, it is encode major and non-zero for new naming convention. 88 + * This field is part of version minor and DRM_DISABLED_FLAG in old naming 89 + * convention. Since the latest version minor is 0x5B and DRM_DISABLED_FLAG 90 + * is zero in old naming convention, this field is always zero so far. 91 + * These four bits are used to tell which naming convention is present. 92 + */ 93 + fw_check = (le32_to_cpu(hdr->ucode_version) >> 20) & 0xf; 94 + if (fw_check) { 95 + unsigned int dec_ver, enc_major, enc_minor, vep, fw_rev; 96 + 97 + fw_rev = le32_to_cpu(hdr->ucode_version) & 0xfff; 98 + enc_minor = (le32_to_cpu(hdr->ucode_version) >> 12) & 0xff; 99 + enc_major = fw_check; 100 + dec_ver = (le32_to_cpu(hdr->ucode_version) >> 24) & 0xf; 101 + vep = (le32_to_cpu(hdr->ucode_version) >> 28) & 0xf; 102 + DRM_INFO("Found VCN firmware Version ENC: %hu.%hu DEC: %hu VEP: %hu Revision: %hu\n", 103 + enc_major, enc_minor, dec_ver, vep, fw_rev); 104 + } else { 105 + unsigned int version_major, version_minor, family_id; 106 + 107 + family_id = le32_to_cpu(hdr->ucode_version) & 0xff; 108 + version_major = (le32_to_cpu(hdr->ucode_version) >> 24) & 0xff; 109 + version_minor = (le32_to_cpu(hdr->ucode_version) >> 8) & 0xff; 110 + DRM_INFO("Found VCN firmware Version: %hu.%hu Family ID: %hu\n", 111 + version_major, version_minor, family_id); 112 + } 92 113 93 114 bo_size = AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8) 94 115 + AMDGPU_VCN_STACK_SIZE + AMDGPU_VCN_HEAP_SIZE
+5 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 1463 1463 uint64_t count; 1464 1464 1465 1465 max_entries = min(max_entries, 16ull * 1024ull); 1466 - for (count = 1; count < max_entries; ++count) { 1466 + for (count = 1; 1467 + count < max_entries / (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE); 1468 + ++count) { 1467 1469 uint64_t idx = pfn + count; 1468 1470 1469 1471 if (pages_addr[idx] != ··· 1478 1476 dma_addr = pages_addr; 1479 1477 } else { 1480 1478 addr = pages_addr[pfn]; 1481 - max_entries = count; 1479 + max_entries = count * (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE); 1482 1480 } 1483 1481 1484 1482 } else if (flags & AMDGPU_PTE_VALID) { ··· 1493 1491 if (r) 1494 1492 return r; 1495 1493 1496 - pfn += last - start + 1; 1494 + pfn += (last - start + 1) / (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE); 1497 1495 if (nodes && nodes->size == pfn) { 1498 1496 pfn = 0; 1499 1497 ++nodes;
+8 -8
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 3928 3928 if (acrtc->base.state->event) 3929 3929 prepare_flip_isr(acrtc); 3930 3930 3931 + spin_unlock_irqrestore(&crtc->dev->event_lock, flags); 3932 + 3931 3933 surface_updates->surface = dc_stream_get_status(acrtc_state->stream)->plane_states[0]; 3932 3934 surface_updates->flip_addr = &addr; 3933 - 3934 3935 3935 3936 dc_commit_updates_for_stream(adev->dm.dc, 3936 3937 surface_updates, ··· 3945 3944 __func__, 3946 3945 addr.address.grph.addr.high_part, 3947 3946 addr.address.grph.addr.low_part); 3948 - 3949 - 3950 - spin_unlock_irqrestore(&crtc->dev->event_lock, flags); 3951 3947 } 3952 3948 3953 3949 /* ··· 4204 4206 struct drm_connector *connector; 4205 4207 struct drm_connector_state *old_con_state, *new_con_state; 4206 4208 struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state; 4209 + int crtc_disable_count = 0; 4207 4210 4208 4211 drm_atomic_helper_update_legacy_modeset_state(dev, state); 4209 4212 ··· 4409 4410 struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc); 4410 4411 bool modeset_needed; 4411 4412 4413 + if (old_crtc_state->active && !new_crtc_state->active) 4414 + crtc_disable_count++; 4415 + 4412 4416 dm_new_crtc_state = to_dm_crtc_state(new_crtc_state); 4413 4417 dm_old_crtc_state = to_dm_crtc_state(old_crtc_state); 4414 4418 modeset_needed = modeset_required( ··· 4465 4463 * so we can put the GPU into runtime suspend if we're not driving any 4466 4464 * displays anymore 4467 4465 */ 4466 + for (i = 0; i < crtc_disable_count; i++) 4467 + pm_runtime_put_autosuspend(dev->dev); 4468 4468 pm_runtime_mark_last_busy(dev->dev); 4469 - for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) { 4470 - if (old_crtc_state->active && !new_crtc_state->active) 4471 - pm_runtime_put_autosuspend(dev->dev); 4472 - } 4473 4469 } 4474 4470 4475 4471
+2 -1
drivers/gpu/drm/arm/malidp_drv.c
··· 278 278 279 279 static void malidp_fini(struct drm_device *drm) 280 280 { 281 - drm_atomic_helper_shutdown(drm); 282 281 drm_mode_config_cleanup(drm); 283 282 } 284 283 ··· 645 646 malidp_de_irq_fini(drm); 646 647 drm->irq_enabled = false; 647 648 irq_init_fail: 649 + drm_atomic_helper_shutdown(drm); 648 650 component_unbind_all(dev, drm); 649 651 bind_fail: 650 652 of_node_put(malidp->crtc.port); ··· 681 681 malidp_se_irq_fini(drm); 682 682 malidp_de_irq_fini(drm); 683 683 drm->irq_enabled = false; 684 + drm_atomic_helper_shutdown(drm); 684 685 component_unbind_all(dev, drm); 685 686 of_node_put(malidp->crtc.port); 686 687 malidp->crtc.port = NULL;
+2 -1
drivers/gpu/drm/arm/malidp_hw.c
··· 634 634 .vsync_irq = MALIDP500_DE_IRQ_VSYNC, 635 635 }, 636 636 .se_irq_map = { 637 - .irq_mask = MALIDP500_SE_IRQ_CONF_MODE, 637 + .irq_mask = MALIDP500_SE_IRQ_CONF_MODE | 638 + MALIDP500_SE_IRQ_GLOBAL, 638 639 .vsync_irq = 0, 639 640 }, 640 641 .dc_irq_map = {
+6 -3
drivers/gpu/drm/arm/malidp_planes.c
··· 23 23 24 24 /* Layer specific register offsets */ 25 25 #define MALIDP_LAYER_FORMAT 0x000 26 + #define LAYER_FORMAT_MASK 0x3f 26 27 #define MALIDP_LAYER_CONTROL 0x004 27 28 #define LAYER_ENABLE (1 << 0) 28 29 #define LAYER_FLOWCFG_MASK 7 ··· 236 235 if (state->rotation & MALIDP_ROTATED_MASK) { 237 236 int val; 238 237 239 - val = mp->hwdev->hw->rotmem_required(mp->hwdev, state->crtc_h, 240 - state->crtc_w, 238 + val = mp->hwdev->hw->rotmem_required(mp->hwdev, state->crtc_w, 239 + state->crtc_h, 241 240 fb->format->format); 242 241 if (val < 0) 243 242 return val; ··· 338 337 dest_w = plane->state->crtc_w; 339 338 dest_h = plane->state->crtc_h; 340 339 341 - malidp_hw_write(mp->hwdev, ms->format, mp->layer->base); 340 + val = malidp_hw_read(mp->hwdev, mp->layer->base); 341 + val = (val & ~LAYER_FORMAT_MASK) | ms->format; 342 + malidp_hw_write(mp->hwdev, val, mp->layer->base); 342 343 343 344 for (i = 0; i < ms->n_planes; i++) { 344 345 /* calculate the offset for the layer's plane registers */
-3
drivers/gpu/drm/i915/i915_drv.h
··· 2245 2245 **/ 2246 2246 static inline struct scatterlist *__sg_next(struct scatterlist *sg) 2247 2247 { 2248 - #ifdef CONFIG_DEBUG_SG 2249 - BUG_ON(sg->sg_magic != SG_MAGIC); 2250 - #endif 2251 2248 return sg_is_last(sg) ? NULL : ____sg_next(sg); 2252 2249 } 2253 2250
+8 -4
drivers/gpu/drm/meson/meson_drv.c
··· 197 197 priv->io_base = regs; 198 198 199 199 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "hhi"); 200 - if (!res) 201 - return -EINVAL; 200 + if (!res) { 201 + ret = -EINVAL; 202 + goto free_drm; 203 + } 202 204 /* Simply ioremap since it may be a shared register zone */ 203 205 regs = devm_ioremap(dev, res->start, resource_size(res)); 204 206 if (!regs) { ··· 217 215 } 218 216 219 217 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dmc"); 220 - if (!res) 221 - return -EINVAL; 218 + if (!res) { 219 + ret = -EINVAL; 220 + goto free_drm; 221 + } 222 222 /* Simply ioremap since it may be a shared register zone */ 223 223 regs = devm_ioremap(dev, res->start, resource_size(res)); 224 224 if (!regs) {
+4 -4
drivers/i2c/algos/i2c-algo-bit.c
··· 647 647 if (bit_adap->getscl == NULL) 648 648 adap->quirks = &i2c_bit_quirk_no_clk_stretch; 649 649 650 - /* Bring bus to a known state. Looks like STOP if bus is not free yet */ 651 - setscl(bit_adap, 1); 652 - udelay(bit_adap->udelay); 653 - setsda(bit_adap, 1); 650 + /* 651 + * We tried forcing SCL/SDA to an initial state here. But that caused a 652 + * regression, sadly. Check Bugzilla #200045 for details. 653 + */ 654 654 655 655 ret = add_adapter(adap); 656 656 if (ret < 0)
+2 -2
drivers/i2c/busses/i2c-gpio.c
··· 279 279 * required for an I2C bus. 280 280 */ 281 281 if (pdata->scl_is_open_drain) 282 - gflags = GPIOD_OUT_LOW; 282 + gflags = GPIOD_OUT_HIGH; 283 283 else 284 - gflags = GPIOD_OUT_LOW_OPEN_DRAIN; 284 + gflags = GPIOD_OUT_HIGH_OPEN_DRAIN; 285 285 priv->scl = i2c_gpio_get_desc(dev, "scl", 1, gflags); 286 286 if (IS_ERR(priv->scl)) 287 287 return PTR_ERR(priv->scl);
+9 -5
drivers/i2c/i2c-core-smbus.c
··· 465 465 466 466 status = i2c_transfer(adapter, msg, num); 467 467 if (status < 0) 468 - return status; 469 - if (status != num) 470 - return -EIO; 468 + goto cleanup; 469 + if (status != num) { 470 + status = -EIO; 471 + goto cleanup; 472 + } 473 + status = 0; 471 474 472 475 /* Check PEC if last message is a read */ 473 476 if (i && (msg[num-1].flags & I2C_M_RD)) { 474 477 status = i2c_smbus_check_pec(partial_pec, &msg[num-1]); 475 478 if (status < 0) 476 - return status; 479 + goto cleanup; 477 480 } 478 481 479 482 if (read_write == I2C_SMBUS_READ) ··· 502 499 break; 503 500 } 504 501 502 + cleanup: 505 503 if (msg[0].flags & I2C_M_DMA_SAFE) 506 504 kfree(msg[0].buf); 507 505 if (msg[1].flags & I2C_M_DMA_SAFE) 508 506 kfree(msg[1].buf); 509 507 510 - return 0; 508 + return status; 511 509 } 512 510 513 511 /**
+1 -1
drivers/iio/accel/mma8452.c
··· 1053 1053 if (src < 0) 1054 1054 return IRQ_NONE; 1055 1055 1056 - if (!(src & data->chip_info->enabled_events)) 1056 + if (!(src & (data->chip_info->enabled_events | MMA8452_INT_DRDY))) 1057 1057 return IRQ_NONE; 1058 1058 1059 1059 if (src & MMA8452_INT_DRDY) {
+2
drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
··· 959 959 } 960 960 961 961 irq_type = irqd_get_trigger_type(desc); 962 + if (!irq_type) 963 + irq_type = IRQF_TRIGGER_RISING; 962 964 if (irq_type == IRQF_TRIGGER_RISING) 963 965 st->irq_mask = INV_MPU6050_ACTIVE_HIGH; 964 966 else if (irq_type == IRQF_TRIGGER_FALLING)
+2
drivers/iio/light/tsl2772.c
··· 582 582 "%s: failed to get lux\n", __func__); 583 583 return lux_val; 584 584 } 585 + if (lux_val == 0) 586 + return -ERANGE; 585 587 586 588 ret = (chip->settings.als_cal_target * chip->settings.als_gain_trim) / 587 589 lux_val;
+2 -3
drivers/iio/pressure/bmp280-core.c
··· 415 415 } 416 416 comp_humidity = bmp280_compensate_humidity(data, adc_humidity); 417 417 418 - *val = comp_humidity; 419 - *val2 = 1024; 418 + *val = comp_humidity * 1000 / 1024; 420 419 421 - return IIO_VAL_FRACTIONAL; 420 + return IIO_VAL_INT; 422 421 } 423 422 424 423 static int bmp280_read_raw(struct iio_dev *indio_dev,
+8 -4
drivers/input/input-mt.c
··· 131 131 * inactive, or if the tool type is changed, a new tracking id is 132 132 * assigned to the slot. The tool type is only reported if the 133 133 * corresponding absbit field is set. 134 + * 135 + * Returns true if contact is active. 134 136 */ 135 - void input_mt_report_slot_state(struct input_dev *dev, 137 + bool input_mt_report_slot_state(struct input_dev *dev, 136 138 unsigned int tool_type, bool active) 137 139 { 138 140 struct input_mt *mt = dev->mt; ··· 142 140 int id; 143 141 144 142 if (!mt) 145 - return; 143 + return false; 146 144 147 145 slot = &mt->slots[mt->slot]; 148 146 slot->frame = mt->frame; 149 147 150 148 if (!active) { 151 149 input_event(dev, EV_ABS, ABS_MT_TRACKING_ID, -1); 152 - return; 150 + return false; 153 151 } 154 152 155 153 id = input_mt_get_value(slot, ABS_MT_TRACKING_ID); 156 - if (id < 0 || input_mt_get_value(slot, ABS_MT_TOOL_TYPE) != tool_type) 154 + if (id < 0) 157 155 id = input_mt_new_trkid(mt); 158 156 159 157 input_event(dev, EV_ABS, ABS_MT_TRACKING_ID, id); 160 158 input_event(dev, EV_ABS, ABS_MT_TOOL_TYPE, tool_type); 159 + 160 + return true; 161 161 } 162 162 EXPORT_SYMBOL(input_mt_report_slot_state); 163 163
+1 -1
drivers/input/joystick/xpad.c
··· 125 125 u8 mapping; 126 126 u8 xtype; 127 127 } xpad_device[] = { 128 - { 0x0079, 0x18d4, "GPD Win 2 Controller", 0, XTYPE_XBOX360 }, 128 + { 0x0079, 0x18d4, "GPD Win 2 X-Box Controller", 0, XTYPE_XBOX360 }, 129 129 { 0x044f, 0x0f00, "Thrustmaster Wheel", 0, XTYPE_XBOX }, 130 130 { 0x044f, 0x0f03, "Thrustmaster Wheel", 0, XTYPE_XBOX }, 131 131 { 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX },
+5 -4
drivers/input/keyboard/goldfish_events.c
··· 45 45 static irqreturn_t events_interrupt(int irq, void *dev_id) 46 46 { 47 47 struct event_dev *edev = dev_id; 48 - unsigned type, code, value; 48 + unsigned int type, code, value; 49 49 50 50 type = __raw_readl(edev->addr + REG_READ); 51 51 code = __raw_readl(edev->addr + REG_READ); ··· 57 57 } 58 58 59 59 static void events_import_bits(struct event_dev *edev, 60 - unsigned long bits[], unsigned type, size_t count) 60 + unsigned long bits[], unsigned int type, size_t count) 61 61 { 62 62 void __iomem *addr = edev->addr; 63 63 int i, j; ··· 99 99 100 100 for (j = 0; j < ARRAY_SIZE(val); j++) { 101 101 int offset = (i * ARRAY_SIZE(val) + j) * sizeof(u32); 102 + 102 103 val[j] = __raw_readl(edev->addr + REG_DATA + offset); 103 104 } 104 105 ··· 113 112 struct input_dev *input_dev; 114 113 struct event_dev *edev; 115 114 struct resource *res; 116 - unsigned keymapnamelen; 115 + unsigned int keymapnamelen; 117 116 void __iomem *addr; 118 117 int irq; 119 118 int i; ··· 151 150 for (i = 0; i < keymapnamelen; i++) 152 151 edev->name[i] = __raw_readb(edev->addr + REG_DATA + i); 153 152 154 - pr_debug("events_probe() keymap=%s\n", edev->name); 153 + pr_debug("%s: keymap=%s\n", __func__, edev->name); 155 154 156 155 input_dev->name = edev->name; 157 156 input_dev->id.bustype = BUS_HOST;
+10
drivers/input/misc/Kconfig
··· 841 841 To compile this driver as a module, choose M here: the 842 842 module will be called rave-sp-pwrbutton. 843 843 844 + config INPUT_SC27XX_VIBRA 845 + tristate "Spreadtrum sc27xx vibrator support" 846 + depends on MFD_SC27XX_PMIC || COMPILE_TEST 847 + select INPUT_FF_MEMLESS 848 + help 849 + This option enables support for Spreadtrum sc27xx vibrator driver. 850 + 851 + To compile this driver as a module, choose M here. The module will 852 + be called sc27xx_vibra. 853 + 844 854 endif
+1
drivers/input/misc/Makefile
··· 66 66 obj-$(CONFIG_INPUT_AXP20X_PEK) += axp20x-pek.o 67 67 obj-$(CONFIG_INPUT_GPIO_ROTARY_ENCODER) += rotary_encoder.o 68 68 obj-$(CONFIG_INPUT_RK805_PWRKEY) += rk805-pwrkey.o 69 + obj-$(CONFIG_INPUT_SC27XX_VIBRA) += sc27xx-vibra.o 69 70 obj-$(CONFIG_INPUT_SGI_BTNS) += sgi_btns.o 70 71 obj-$(CONFIG_INPUT_SIRFSOC_ONKEY) += sirfsoc-onkey.o 71 72 obj-$(CONFIG_INPUT_SOC_BUTTON_ARRAY) += soc_button_array.o
+154
drivers/input/misc/sc27xx-vibra.c
··· 1 + // SPDX-License-Identifier: GPL-2.0
2 + /*
3 + * Copyright (C) 2018 Spreadtrum Communications Inc.
4 + */
5 +
6 + #include <linux/module.h>
7 + #include <linux/of_address.h>
8 + #include <linux/platform_device.h>
9 + #include <linux/regmap.h>
10 + #include <linux/input.h>
11 + #include <linux/workqueue.h>
12 +
13 + #define CUR_DRV_CAL_SEL GENMASK(13, 12)
14 + #define SLP_LDOVIBR_PD_EN BIT(9)
15 + #define LDO_VIBR_PD BIT(8)
16 +
17 + struct vibra_info {
18 + struct input_dev *input_dev;
19 + struct work_struct play_work;
20 + struct regmap *regmap;
21 + u32 base;
22 + u32 strength;
23 + bool enabled;
24 + };
25 +
26 + static void sc27xx_vibra_set(struct vibra_info *info, bool on)
27 + {
28 + if (on) {
29 + regmap_update_bits(info->regmap, info->base, LDO_VIBR_PD, 0);
30 + regmap_update_bits(info->regmap, info->base,
31 + SLP_LDOVIBR_PD_EN, 0);
32 + info->enabled = true;
33 + } else {
34 + regmap_update_bits(info->regmap, info->base, LDO_VIBR_PD,
35 + LDO_VIBR_PD);
36 + regmap_update_bits(info->regmap, info->base,
37 + SLP_LDOVIBR_PD_EN, SLP_LDOVIBR_PD_EN);
38 + info->enabled = false;
39 + }
40 + }
41 +
42 + static int sc27xx_vibra_hw_init(struct vibra_info *info)
43 + {
44 + return regmap_update_bits(info->regmap, info->base, CUR_DRV_CAL_SEL, 0);
45 + }
46 +
47 + static void sc27xx_vibra_play_work(struct work_struct *work)
48 + {
49 + struct vibra_info *info = container_of(work, struct vibra_info,
50 + play_work);
51 +
52 + if (info->strength && !info->enabled)
53 + sc27xx_vibra_set(info, true);
54 + else if (info->strength == 0 && info->enabled)
55 + sc27xx_vibra_set(info, false);
56 + }
57 +
58 + static int sc27xx_vibra_play(struct input_dev *input, void *data,
59 + struct ff_effect *effect)
60 + {
61 + struct vibra_info *info = input_get_drvdata(input);
62 +
63 + info->strength = effect->u.rumble.weak_magnitude;
64 + schedule_work(&info->play_work);
65 +
66 + return 0;
67 + }
68 +
69 + static void sc27xx_vibra_close(struct input_dev *input)
70 + {
71 + struct vibra_info *info = input_get_drvdata(input);
72 +
73 + cancel_work_sync(&info->play_work);
74 + if (info->enabled)
75 + sc27xx_vibra_set(info, false);
76 + }
77 +
78 + static int sc27xx_vibra_probe(struct platform_device *pdev)
79 + {
80 + struct vibra_info *info;
81 + int error;
82 +
83 + info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
84 + if (!info)
85 + return -ENOMEM;
86 +
87 + info->regmap = dev_get_regmap(pdev->dev.parent, NULL);
88 + if (!info->regmap) {
89 + dev_err(&pdev->dev, "failed to get vibrator regmap.\n");
90 + return -ENODEV;
91 + }
92 +
93 + error = device_property_read_u32(&pdev->dev, "reg", &info->base);
94 + if (error) {
95 + dev_err(&pdev->dev, "failed to get vibrator base address.\n");
96 + return error;
97 + }
98 +
99 + info->input_dev = devm_input_allocate_device(&pdev->dev);
100 + if (!info->input_dev) {
101 + dev_err(&pdev->dev, "failed to allocate input device.\n");
102 + return -ENOMEM;
103 + }
104 +
105 + info->input_dev->name = "sc27xx:vibrator";
106 + info->input_dev->id.version = 0;
107 + info->input_dev->close = sc27xx_vibra_close;
108 +
109 + input_set_drvdata(info->input_dev, info);
110 + input_set_capability(info->input_dev, EV_FF, FF_RUMBLE);
111 + INIT_WORK(&info->play_work, sc27xx_vibra_play_work);
112 + info->enabled = false;
113 +
114 + error = sc27xx_vibra_hw_init(info);
115 + if (error) {
116 + dev_err(&pdev->dev, "failed to initialize the vibrator.\n");
117 + return error;
118 + }
119 +
120 + error = input_ff_create_memless(info->input_dev, NULL,
121 + sc27xx_vibra_play);
122 + if (error) {
123 + dev_err(&pdev->dev, "failed to register vibrator to FF.\n");
124 + return error;
125 + }
126 +
127 + error = input_register_device(info->input_dev);
128 + if (error) {
129 + dev_err(&pdev->dev, "failed to register input device.\n");
130 + return error;
131 + }
132 +
133 + return 0;
134 + }
135 +
136 + static const struct of_device_id sc27xx_vibra_of_match[] = {
137 + { .compatible = "sprd,sc2731-vibrator", },
138 + {} 139 + }; 140 + MODULE_DEVICE_TABLE(of, sc27xx_vibra_of_match); 141 + 142 + static struct platform_driver sc27xx_vibra_driver = { 143 + .driver = { 144 + .name = "sc27xx-vibrator", 145 + .of_match_table = sc27xx_vibra_of_match, 146 + }, 147 + .probe = sc27xx_vibra_probe, 148 + }; 149 + 150 + module_platform_driver(sc27xx_vibra_driver); 151 + 152 + MODULE_DESCRIPTION("Spreadtrum SC27xx Vibrator Driver"); 153 + MODULE_LICENSE("GPL v2"); 154 + MODULE_AUTHOR("Xiaotong Lu <xiaotong.lu@spreadtrum.com>");
+2
drivers/input/mouse/elan_i2c.h
··· 27 27 #define ETP_DISABLE_POWER 0x0001 28 28 #define ETP_PRESSURE_OFFSET 25 29 29 30 + #define ETP_CALIBRATE_MAX_LEN 3 31 + 30 32 /* IAP Firmware handling */ 31 33 #define ETP_PRODUCT_ID_FORMAT_STRING "%d.0" 32 34 #define ETP_FW_NAME "elan_i2c_" ETP_PRODUCT_ID_FORMAT_STRING ".bin"
+2 -1
drivers/input/mouse/elan_i2c_core.c
··· 613 613 int tries = 20; 614 614 int retval; 615 615 int error; 616 - u8 val[3]; 616 + u8 val[ETP_CALIBRATE_MAX_LEN]; 617 617 618 618 retval = mutex_lock_interruptible(&data->sysfs_mutex); 619 619 if (retval) ··· 1345 1345 { "ELAN060C", 0 }, 1346 1346 { "ELAN0611", 0 }, 1347 1347 { "ELAN0612", 0 }, 1348 + { "ELAN0618", 0 }, 1348 1349 { "ELAN1000", 0 }, 1349 1350 { } 1350 1351 };
+8 -2
drivers/input/mouse/elan_i2c_smbus.c
··· 56 56 static int elan_smbus_initialize(struct i2c_client *client) 57 57 { 58 58 u8 check[ETP_SMBUS_HELLOPACKET_LEN] = { 0x55, 0x55, 0x55, 0x55, 0x55 }; 59 - u8 values[ETP_SMBUS_HELLOPACKET_LEN] = { 0, 0, 0, 0, 0 }; 59 + u8 values[I2C_SMBUS_BLOCK_MAX] = {0}; 60 60 int len, error; 61 61 62 62 /* Get hello packet */ ··· 117 117 static int elan_smbus_calibrate_result(struct i2c_client *client, u8 *val) 118 118 { 119 119 int error; 120 + u8 buf[I2C_SMBUS_BLOCK_MAX] = {0}; 121 + 122 + BUILD_BUG_ON(ETP_CALIBRATE_MAX_LEN > sizeof(buf)); 120 123 121 124 error = i2c_smbus_read_block_data(client, 122 - ETP_SMBUS_CALIBRATE_QUERY, val); 125 + ETP_SMBUS_CALIBRATE_QUERY, buf); 123 126 if (error < 0) 124 127 return error; 125 128 129 + memcpy(val, buf, ETP_CALIBRATE_MAX_LEN); 126 130 return 0; 127 131 } 128 132 ··· 475 471 static int elan_smbus_get_report(struct i2c_client *client, u8 *report) 476 472 { 477 473 int len; 474 + 475 + BUILD_BUG_ON(I2C_SMBUS_BLOCK_MAX > ETP_SMBUS_REPORT_LEN); 478 476 479 477 len = i2c_smbus_read_block_data(client, 480 478 ETP_SMBUS_PACKET_QUERY,
+9 -2
drivers/input/mouse/elantech.c
··· 799 799 else if (ic_version == 7 && etd->info.samples[1] == 0x2A) 800 800 sanity_check = ((packet[3] & 0x1c) == 0x10); 801 801 else 802 - sanity_check = ((packet[0] & 0x0c) == 0x04 && 802 + sanity_check = ((packet[0] & 0x08) == 0x00 && 803 803 (packet[3] & 0x1c) == 0x10); 804 804 805 805 if (!sanity_check) ··· 1175 1175 { } 1176 1176 }; 1177 1177 1178 + static const char * const middle_button_pnp_ids[] = { 1179 + "LEN2131", /* ThinkPad P52 w/ NFC */ 1180 + "LEN2132", /* ThinkPad P52 */ 1181 + NULL 1182 + }; 1183 + 1178 1184 /* 1179 1185 * Set the appropriate event bits for the input subsystem 1180 1186 */ ··· 1200 1194 __clear_bit(EV_REL, dev->evbit); 1201 1195 1202 1196 __set_bit(BTN_LEFT, dev->keybit); 1203 - if (dmi_check_system(elantech_dmi_has_middle_button)) 1197 + if (dmi_check_system(elantech_dmi_has_middle_button) || 1198 + psmouse_matches_pnp_id(psmouse, middle_button_pnp_ids)) 1204 1199 __set_bit(BTN_MIDDLE, dev->keybit); 1205 1200 __set_bit(BTN_RIGHT, dev->keybit); 1206 1201
+6 -6
drivers/input/mouse/psmouse-base.c
··· 192 192 else 193 193 input_report_rel(dev, REL_WHEEL, -wheel); 194 194 195 - input_report_key(dev, BTN_SIDE, BIT(4)); 196 - input_report_key(dev, BTN_EXTRA, BIT(5)); 195 + input_report_key(dev, BTN_SIDE, packet[3] & BIT(4)); 196 + input_report_key(dev, BTN_EXTRA, packet[3] & BIT(5)); 197 197 break; 198 198 } 199 199 break; ··· 203 203 input_report_rel(dev, REL_WHEEL, -(s8) packet[3]); 204 204 205 205 /* Extra buttons on Genius NewNet 3D */ 206 - input_report_key(dev, BTN_SIDE, BIT(6)); 207 - input_report_key(dev, BTN_EXTRA, BIT(7)); 206 + input_report_key(dev, BTN_SIDE, packet[0] & BIT(6)); 207 + input_report_key(dev, BTN_EXTRA, packet[0] & BIT(7)); 208 208 break; 209 209 210 210 case PSMOUSE_THINKPS: 211 211 /* Extra button on ThinkingMouse */ 212 - input_report_key(dev, BTN_EXTRA, BIT(3)); 212 + input_report_key(dev, BTN_EXTRA, packet[0] & BIT(3)); 213 213 214 214 /* 215 215 * Without this bit of weirdness moving up gives wildly ··· 223 223 * Cortron PS2 Trackball reports SIDE button in the 224 224 * 4th bit of the first byte. 225 225 */ 226 - input_report_key(dev, BTN_SIDE, BIT(3)); 226 + input_report_key(dev, BTN_SIDE, packet[0] & BIT(3)); 227 227 packet[0] |= BIT(3); 228 228 break; 229 229
+1
drivers/input/rmi4/Kconfig
··· 3 3 # 4 4 config RMI4_CORE 5 5 tristate "Synaptics RMI4 bus support" 6 + select IRQ_DOMAIN 6 7 help 7 8 Say Y here if you want to support the Synaptics RMI4 bus. This is 8 9 required for all RMI4 device support.
+16 -18
drivers/input/rmi4/rmi_2d_sensor.c
··· 32 32 if (obj->type == RMI_2D_OBJECT_NONE)
33 33 return;
34 34
35 - if (axis_align->swap_axes)
36 - swap(obj->x, obj->y);
37 -
38 35 if (axis_align->flip_x)
39 36 obj->x = sensor->max_x - obj->x;
40 37
41 38 if (axis_align->flip_y)
42 39 obj->y = sensor->max_y - obj->y;
40 +
41 + if (axis_align->swap_axes)
42 + swap(obj->x, obj->y);
43 43
44 44 /*
45 45 * Here checking if X offset or y offset are specified is
··· 120 120 x = min(RMI_2D_REL_POS_MAX, max(RMI_2D_REL_POS_MIN, (int)x));
121 121 y = min(RMI_2D_REL_POS_MAX, max(RMI_2D_REL_POS_MIN, (int)y));
122 122
123 - if (axis_align->swap_axes)
124 - swap(x, y);
125 -
126 123 if (axis_align->flip_x)
127 124 x = min(RMI_2D_REL_POS_MAX, -x);
128 125
129 126 if (axis_align->flip_y)
130 127 y = min(RMI_2D_REL_POS_MAX, -y);
128 +
129 + if (axis_align->swap_axes)
130 + swap(x, y);
131 131
132 132 if (x || y) {
133 133 input_report_rel(sensor->input, REL_X, x);
··· 141 141 struct input_dev *input = sensor->input;
142 142 int res_x;
143 143 int res_y;
144 + int max_x, max_y;
144 145 int input_flags = 0;
145 146
146 147 if (sensor->report_abs) {
147 - if (sensor->axis_align.swap_axes) {
148 - swap(sensor->max_x, sensor->max_y);
149 - swap(sensor->axis_align.clip_x_low,
150 - sensor->axis_align.clip_y_low);
151 - swap(sensor->axis_align.clip_x_high,
152 - sensor->axis_align.clip_y_high);
153 - }
154 -
155 148 sensor->min_x = sensor->axis_align.clip_x_low;
156 149 if (sensor->axis_align.clip_x_high)
157 150 sensor->max_x = min(sensor->max_x,
··· 156 163 sensor->axis_align.clip_y_high);
157 164
158 165 set_bit(EV_ABS, input->evbit);
159 - input_set_abs_params(input, ABS_MT_POSITION_X, 0, sensor->max_x,
160 - 0, 0);
161 - input_set_abs_params(input, ABS_MT_POSITION_Y, 0, sensor->max_y,
162 - 0, 0);
166 +
167 + max_x = sensor->max_x;
168 + max_y = sensor->max_y;
169 + if (sensor->axis_align.swap_axes)
170 + swap(max_x, max_y);
171 + input_set_abs_params(input, ABS_MT_POSITION_X, 0, max_x, 0, 0);
172 + input_set_abs_params(input, ABS_MT_POSITION_Y, 0, max_y, 0, 0);
163 173
164 174 if (sensor->x_mm && sensor->y_mm) {
165 175 res_x = (sensor->max_x - sensor->min_x) / sensor->x_mm;
166 176 res_y = (sensor->max_y - sensor->min_y) / sensor->y_mm;
177 + if (sensor->axis_align.swap_axes)
178 + swap(res_x, res_y);
167 179
168 180 input_abs_set_res(input, ABS_X, res_x);
169 181 input_abs_set_res(input, ABS_Y, res_y);
+49 -1
drivers/input/rmi4/rmi_bus.c
··· 9 9
10 10 #include <linux/kernel.h>
11 11 #include <linux/device.h>
12 + #include <linux/irq.h>
13 + #include <linux/irqdomain.h>
12 14 #include <linux/list.h>
13 15 #include <linux/pm.h>
14 16 #include <linux/rmi.h>
··· 169 167 {}
170 168 #endif
171 169
170 + static struct irq_chip rmi_irq_chip = {
171 + .name = "rmi4",
172 + };
173 +
174 + static int rmi_create_function_irq(struct rmi_function *fn,
175 + struct rmi_function_handler *handler)
176 + {
177 + struct rmi_driver_data *drvdata = dev_get_drvdata(&fn->rmi_dev->dev);
178 + int i, error;
179 +
180 + for (i = 0; i < fn->num_of_irqs; i++) {
181 + set_bit(fn->irq_pos + i, fn->irq_mask);
182 +
183 + fn->irq[i] = irq_create_mapping(drvdata->irqdomain,
184 + fn->irq_pos + i);
185 +
186 + irq_set_chip_data(fn->irq[i], fn);
187 + irq_set_chip_and_handler(fn->irq[i], &rmi_irq_chip,
188 + handle_simple_irq);
189 + irq_set_nested_thread(fn->irq[i], 1);
190 +
191 + error = devm_request_threaded_irq(&fn->dev, fn->irq[i], NULL,
192 + handler->attention, IRQF_ONESHOT,
193 + dev_name(&fn->dev), fn);
194 + if (error) {
195 + dev_err(&fn->dev, "Error %d registering IRQ\n", error);
196 + return error;
197 + }
198 + }
199 +
200 + return 0;
201 + }
202 +
172 203 static int rmi_function_probe(struct device *dev)
173 204 {
174 205 struct rmi_function *fn = to_rmi_function(dev);
··· 213 178
214 179 if (handler->probe) {
215 180 error = handler->probe(fn);
216 - return error;
181 + if (error)
182 + return error;
183 + }
184 +
185 + if (fn->num_of_irqs && handler->attention) {
186 + error = rmi_create_function_irq(fn, handler);
187 + if (error)
188 + return error;
217 189 }
218 190
219 191 return 0;
··· 272 230
273 231 void rmi_unregister_function(struct rmi_function *fn)
274 232 {
233 + int i;
234 +
275 235 rmi_dbg(RMI_DEBUG_CORE, &fn->dev, "Unregistering F%02X.\n",
276 236 fn->fd.function_number);
277 237
278 238 device_del(&fn->dev);
279 239 of_node_put(fn->dev.of_node);
280 240 put_device(&fn->dev);
241 +
242 + for (i = 0; i < fn->num_of_irqs; i++)
243 + irq_dispose_mapping(fn->irq[i]);
244 +
281 245 }
282 246
283 247 /**
+9 -1
drivers/input/rmi4/rmi_bus.h
··· 14 14 15 15 struct rmi_device; 16 16 17 + /* 18 + * The interrupt source count in the function descriptor can represent up to 19 + * 6 interrupt sources in the normal manner. 20 + */ 21 + #define RMI_FN_MAX_IRQS 6 22 + 17 23 /** 18 24 * struct rmi_function - represents the implementation of an RMI4 19 25 * function for a particular device (basically, a driver for that RMI4 function) ··· 32 26 * @irq_pos: The position in the irq bitfield this function holds 33 27 * @irq_mask: For convenience, can be used to mask IRQ bits off during ATTN 34 28 * interrupt handling. 29 + * @irqs: assigned virq numbers (up to num_of_irqs) 35 30 * 36 31 * @node: entry in device's list of functions 37 32 */ ··· 43 36 struct list_head node; 44 37 45 38 unsigned int num_of_irqs; 39 + int irq[RMI_FN_MAX_IRQS]; 46 40 unsigned int irq_pos; 47 41 unsigned long irq_mask[]; 48 42 }; ··· 84 76 void (*remove)(struct rmi_function *fn); 85 77 int (*config)(struct rmi_function *fn); 86 78 int (*reset)(struct rmi_function *fn); 87 - int (*attention)(struct rmi_function *fn, unsigned long *irq_bits); 79 + irqreturn_t (*attention)(int irq, void *ctx); 88 80 int (*suspend)(struct rmi_function *fn); 89 81 int (*resume)(struct rmi_function *fn); 90 82 };
+20 -32
drivers/input/rmi4/rmi_driver.c
··· 21 21 #include <linux/pm.h> 22 22 #include <linux/slab.h> 23 23 #include <linux/of.h> 24 + #include <linux/irqdomain.h> 24 25 #include <uapi/linux/input.h> 25 26 #include <linux/rmi.h> 26 27 #include "rmi_bus.h" ··· 128 127 return 0; 129 128 } 130 129 131 - static void process_one_interrupt(struct rmi_driver_data *data, 132 - struct rmi_function *fn) 133 - { 134 - struct rmi_function_handler *fh; 135 - 136 - if (!fn || !fn->dev.driver) 137 - return; 138 - 139 - fh = to_rmi_function_handler(fn->dev.driver); 140 - if (fh->attention) { 141 - bitmap_and(data->fn_irq_bits, data->irq_status, fn->irq_mask, 142 - data->irq_count); 143 - if (!bitmap_empty(data->fn_irq_bits, data->irq_count)) 144 - fh->attention(fn, data->fn_irq_bits); 145 - } 146 - } 147 - 148 130 static int rmi_process_interrupt_requests(struct rmi_device *rmi_dev) 149 131 { 150 132 struct rmi_driver_data *data = dev_get_drvdata(&rmi_dev->dev); 151 133 struct device *dev = &rmi_dev->dev; 152 - struct rmi_function *entry; 134 + int i; 153 135 int error; 154 136 155 137 if (!data) ··· 157 173 */ 158 174 mutex_unlock(&data->irq_mutex); 159 175 160 - /* 161 - * It would be nice to be able to use irq_chip to handle these 162 - * nested IRQs. Unfortunately, most of the current customers for 163 - * this driver are using older kernels (3.0.x) that don't support 164 - * the features required for that. Once they've shifted to more 165 - * recent kernels (say, 3.3 and higher), this should be switched to 166 - * use irq_chip. 
167 - */ 168 - list_for_each_entry(entry, &data->function_list, node) 169 - process_one_interrupt(data, entry); 176 + for_each_set_bit(i, data->irq_status, data->irq_count) 177 + handle_nested_irq(irq_find_mapping(data->irqdomain, i)); 170 178 171 179 if (data->input) 172 180 input_sync(data->input); ··· 977 1001 static int rmi_driver_remove(struct device *dev) 978 1002 { 979 1003 struct rmi_device *rmi_dev = to_rmi_device(dev); 1004 + struct rmi_driver_data *data = dev_get_drvdata(&rmi_dev->dev); 980 1005 981 1006 rmi_disable_irq(rmi_dev, false); 1007 + 1008 + irq_domain_remove(data->irqdomain); 1009 + data->irqdomain = NULL; 982 1010 983 1011 rmi_f34_remove_sysfs(rmi_dev); 984 1012 rmi_free_function_list(rmi_dev); ··· 1015 1035 { 1016 1036 struct rmi_device *rmi_dev = data->rmi_dev; 1017 1037 struct device *dev = &rmi_dev->dev; 1018 - int irq_count; 1038 + struct fwnode_handle *fwnode = rmi_dev->xport->dev->fwnode; 1039 + int irq_count = 0; 1019 1040 size_t size; 1020 1041 int retval; 1021 1042 ··· 1027 1046 * being accessed. 
1028 1047 */ 1029 1048 rmi_dbg(RMI_DEBUG_CORE, dev, "%s: Counting IRQs.\n", __func__); 1030 - irq_count = 0; 1031 1049 data->bootloader_mode = false; 1032 1050 1033 1051 retval = rmi_scan_pdt(rmi_dev, &irq_count, rmi_count_irqs); ··· 1037 1057 1038 1058 if (data->bootloader_mode) 1039 1059 dev_warn(dev, "Device in bootloader mode.\n"); 1060 + 1061 + /* Allocate and register a linear revmap irq_domain */ 1062 + data->irqdomain = irq_domain_create_linear(fwnode, irq_count, 1063 + &irq_domain_simple_ops, 1064 + data); 1065 + if (!data->irqdomain) { 1066 + dev_err(&rmi_dev->dev, "Failed to create IRQ domain\n"); 1067 + return -ENOMEM; 1068 + } 1040 1069 1041 1070 data->irq_count = irq_count; 1042 1071 data->num_of_irq_regs = (data->irq_count + 7) / 8; ··· 1069 1080 { 1070 1081 struct rmi_device *rmi_dev = data->rmi_dev; 1071 1082 struct device *dev = &rmi_dev->dev; 1072 - int irq_count; 1083 + int irq_count = 0; 1073 1084 int retval; 1074 1085 1075 - irq_count = 0; 1076 1086 rmi_dbg(RMI_DEBUG_CORE, dev, "%s: Creating functions.\n", __func__); 1077 1087 retval = rmi_scan_pdt(rmi_dev, &irq_count, rmi_create_function); 1078 1088 if (retval < 0) {
+5 -5
drivers/input/rmi4/rmi_f01.c
··· 681 681 return 0; 682 682 } 683 683 684 - static int rmi_f01_attention(struct rmi_function *fn, 685 - unsigned long *irq_bits) 684 + static irqreturn_t rmi_f01_attention(int irq, void *ctx) 686 685 { 686 + struct rmi_function *fn = ctx; 687 687 struct rmi_device *rmi_dev = fn->rmi_dev; 688 688 int error; 689 689 u8 device_status; ··· 692 692 if (error) { 693 693 dev_err(&fn->dev, 694 694 "Failed to read device status: %d.\n", error); 695 - return error; 695 + return IRQ_RETVAL(error); 696 696 } 697 697 698 698 if (RMI_F01_STATUS_BOOTLOADER(device_status)) ··· 704 704 error = rmi_dev->driver->reset_handler(rmi_dev); 705 705 if (error) { 706 706 dev_err(&fn->dev, "Device reset failed: %d\n", error); 707 - return error; 707 + return IRQ_RETVAL(error); 708 708 } 709 709 } 710 710 711 - return 0; 711 + return IRQ_HANDLED; 712 712 } 713 713 714 714 struct rmi_function_handler rmi_f01_handler = {
+5 -4
drivers/input/rmi4/rmi_f03.c
··· 244 244 return 0; 245 245 } 246 246 247 - static int rmi_f03_attention(struct rmi_function *fn, unsigned long *irq_bits) 247 + static irqreturn_t rmi_f03_attention(int irq, void *ctx) 248 248 { 249 + struct rmi_function *fn = ctx; 249 250 struct rmi_device *rmi_dev = fn->rmi_dev; 250 251 struct rmi_driver_data *drvdata = dev_get_drvdata(&rmi_dev->dev); 251 252 struct f03_data *f03 = dev_get_drvdata(&fn->dev); ··· 263 262 /* First grab the data passed by the transport device */ 264 263 if (drvdata->attn_data.size < ob_len) { 265 264 dev_warn(&fn->dev, "F03 interrupted, but data is missing!\n"); 266 - return 0; 265 + return IRQ_HANDLED; 267 266 } 268 267 269 268 memcpy(obs, drvdata->attn_data.data, ob_len); ··· 278 277 "%s: Failed to read F03 output buffers: %d\n", 279 278 __func__, error); 280 279 serio_interrupt(f03->serio, 0, SERIO_TIMEOUT); 281 - return error; 280 + return IRQ_RETVAL(error); 282 281 } 283 282 } 284 283 ··· 304 303 serio_interrupt(f03->serio, ob_data, serio_flags); 305 304 } 306 305 307 - return 0; 306 + return IRQ_HANDLED; 308 307 } 309 308 310 309 static void rmi_f03_remove(struct rmi_function *fn)
+16 -26
drivers/input/rmi4/rmi_f11.c
··· 570 570 } 571 571 572 572 static void rmi_f11_finger_handler(struct f11_data *f11, 573 - struct rmi_2d_sensor *sensor, 574 - unsigned long *irq_bits, int num_irq_regs, 575 - int size) 573 + struct rmi_2d_sensor *sensor, int size) 576 574 { 577 575 const u8 *f_state = f11->data.f_state; 578 576 u8 finger_state; ··· 579 581 int rel_fingers; 580 582 int abs_size = sensor->nbr_fingers * RMI_F11_ABS_BYTES; 581 583 582 - int abs_bits = bitmap_and(f11->result_bits, irq_bits, f11->abs_mask, 583 - num_irq_regs * 8); 584 - int rel_bits = bitmap_and(f11->result_bits, irq_bits, f11->rel_mask, 585 - num_irq_regs * 8); 586 - 587 - if (abs_bits) { 584 + if (sensor->report_abs) { 588 585 if (abs_size > size) 589 586 abs_fingers = size / RMI_F11_ABS_BYTES; 590 587 else ··· 597 604 rmi_f11_abs_pos_process(f11, sensor, &sensor->objs[i], 598 605 finger_state, i); 599 606 } 600 - } 601 607 602 - if (rel_bits) { 603 - if ((abs_size + sensor->nbr_fingers * RMI_F11_REL_BYTES) > size) 604 - rel_fingers = (size - abs_size) / RMI_F11_REL_BYTES; 605 - else 606 - rel_fingers = sensor->nbr_fingers; 607 - 608 - for (i = 0; i < rel_fingers; i++) 609 - rmi_f11_rel_pos_report(f11, i); 610 - } 611 - 612 - if (abs_bits) { 613 608 /* 614 609 * the absolute part is made in 2 parts to allow the kernel 615 610 * tracking to take place. 
··· 619 638 } 620 639 621 640 input_mt_sync_frame(sensor->input); 641 + } else if (sensor->report_rel) { 642 + if ((abs_size + sensor->nbr_fingers * RMI_F11_REL_BYTES) > size) 643 + rel_fingers = (size - abs_size) / RMI_F11_REL_BYTES; 644 + else 645 + rel_fingers = sensor->nbr_fingers; 646 + 647 + for (i = 0; i < rel_fingers; i++) 648 + rmi_f11_rel_pos_report(f11, i); 622 649 } 650 + 623 651 } 624 652 625 653 static int f11_2d_construct_data(struct f11_data *f11) ··· 1266 1276 return 0; 1267 1277 } 1268 1278 1269 - static int rmi_f11_attention(struct rmi_function *fn, unsigned long *irq_bits) 1279 + static irqreturn_t rmi_f11_attention(int irq, void *ctx) 1270 1280 { 1281 + struct rmi_function *fn = ctx; 1271 1282 struct rmi_device *rmi_dev = fn->rmi_dev; 1272 1283 struct rmi_driver_data *drvdata = dev_get_drvdata(&rmi_dev->dev); 1273 1284 struct f11_data *f11 = dev_get_drvdata(&fn->dev); ··· 1294 1303 data_base_addr, f11->sensor.data_pkt, 1295 1304 f11->sensor.pkt_size); 1296 1305 if (error < 0) 1297 - return error; 1306 + return IRQ_RETVAL(error); 1298 1307 } 1299 1308 1300 - rmi_f11_finger_handler(f11, &f11->sensor, irq_bits, 1301 - drvdata->num_of_irq_regs, valid_bytes); 1309 + rmi_f11_finger_handler(f11, &f11->sensor, valid_bytes); 1302 1310 1303 - return 0; 1311 + return IRQ_HANDLED; 1304 1312 } 1305 1313 1306 1314 static int rmi_f11_resume(struct rmi_function *fn)
+4 -4
drivers/input/rmi4/rmi_f12.c
··· 197 197 rmi_2d_sensor_abs_report(sensor, &sensor->objs[i], i); 198 198 } 199 199 200 - static int rmi_f12_attention(struct rmi_function *fn, 201 - unsigned long *irq_nr_regs) 200 + static irqreturn_t rmi_f12_attention(int irq, void *ctx) 202 201 { 203 202 int retval; 203 + struct rmi_function *fn = ctx; 204 204 struct rmi_device *rmi_dev = fn->rmi_dev; 205 205 struct rmi_driver_data *drvdata = dev_get_drvdata(&rmi_dev->dev); 206 206 struct f12_data *f12 = dev_get_drvdata(&fn->dev); ··· 222 222 if (retval < 0) { 223 223 dev_err(&fn->dev, "Failed to read object data. Code: %d.\n", 224 224 retval); 225 - return retval; 225 + return IRQ_RETVAL(retval); 226 226 } 227 227 } 228 228 ··· 232 232 233 233 input_mt_sync_frame(sensor->input); 234 234 235 - return 0; 235 + return IRQ_HANDLED; 236 236 } 237 237 238 238 static int rmi_f12_write_control_regs(struct rmi_function *fn)
+5 -4
drivers/input/rmi4/rmi_f30.c
··· 122 122 } 123 123 } 124 124 125 - static int rmi_f30_attention(struct rmi_function *fn, unsigned long *irq_bits) 125 + static irqreturn_t rmi_f30_attention(int irq, void *ctx) 126 126 { 127 + struct rmi_function *fn = ctx; 127 128 struct f30_data *f30 = dev_get_drvdata(&fn->dev); 128 129 struct rmi_driver_data *drvdata = dev_get_drvdata(&fn->rmi_dev->dev); 129 130 int error; ··· 135 134 if (drvdata->attn_data.size < f30->register_count) { 136 135 dev_warn(&fn->dev, 137 136 "F30 interrupted, but data is missing\n"); 138 - return 0; 137 + return IRQ_HANDLED; 139 138 } 140 139 memcpy(f30->data_regs, drvdata->attn_data.data, 141 140 f30->register_count); ··· 148 147 dev_err(&fn->dev, 149 148 "%s: Failed to read F30 data registers: %d\n", 150 149 __func__, error); 151 - return error; 150 + return IRQ_RETVAL(error); 152 151 } 153 152 } 154 153 ··· 160 159 rmi_f03_commit_buttons(f30->f03); 161 160 } 162 161 163 - return 0; 162 + return IRQ_HANDLED; 164 163 } 165 164 166 165 static int rmi_f30_config(struct rmi_function *fn)
+3 -2
drivers/input/rmi4/rmi_f34.c
··· 100 100 return 0; 101 101 } 102 102 103 - static int rmi_f34_attention(struct rmi_function *fn, unsigned long *irq_bits) 103 + static irqreturn_t rmi_f34_attention(int irq, void *ctx) 104 104 { 105 + struct rmi_function *fn = ctx; 105 106 struct f34_data *f34 = dev_get_drvdata(&fn->dev); 106 107 int ret; 107 108 u8 status; ··· 127 126 complete(&f34->v7.cmd_done); 128 127 } 129 128 130 - return 0; 129 + return IRQ_HANDLED; 131 130 } 132 131 133 132 static int rmi_f34_write_blocks(struct f34_data *f34, const void *data,
-6
drivers/input/rmi4/rmi_f54.c
··· 610 610 mutex_unlock(&f54->data_mutex); 611 611 } 612 612 613 - static int rmi_f54_attention(struct rmi_function *fn, unsigned long *irqbits) 614 - { 615 - return 0; 616 - } 617 - 618 613 static int rmi_f54_config(struct rmi_function *fn) 619 614 { 620 615 struct rmi_driver *drv = fn->rmi_dev->driver; ··· 751 756 .func = 0x54, 752 757 .probe = rmi_f54_probe, 753 758 .config = rmi_f54_config, 754 - .attention = rmi_f54_attention, 755 759 .remove = rmi_f54_remove, 756 760 };
+1
drivers/input/touchscreen/silead.c
··· 603 603 { "GSL3692", 0 }, 604 604 { "MSSL1680", 0 }, 605 605 { "MSSL0001", 0 }, 606 + { "MSSL0002", 0 }, 606 607 { } 607 608 }; 608 609 MODULE_DEVICE_TABLE(acpi, silead_ts_acpi_match);
+1 -1
drivers/isdn/mISDN/socket.c
··· 588 588 .getname = data_sock_getname, 589 589 .sendmsg = mISDN_sock_sendmsg, 590 590 .recvmsg = mISDN_sock_recvmsg, 591 - .poll_mask = datagram_poll_mask, 591 + .poll = datagram_poll, 592 592 .listen = sock_no_listen, 593 593 .shutdown = sock_no_shutdown, 594 594 .setsockopt = data_sock_setsockopt,
+1 -1
drivers/md/dm-raid.c
··· 588 588 } 589 589 590 590 /* Return md raid10 algorithm for @name */ 591 - static const int raid10_name_to_format(const char *name) 591 + static int raid10_name_to_format(const char *name) 592 592 { 593 593 if (!strcasecmp(name, "near")) 594 594 return ALGORITHM_RAID10_NEAR;
+4 -3
drivers/md/dm-table.c
··· 885 885 static int device_supports_dax(struct dm_target *ti, struct dm_dev *dev, 886 886 sector_t start, sector_t len, void *data) 887 887 { 888 - struct request_queue *q = bdev_get_queue(dev->bdev); 889 - 890 - return q && blk_queue_dax(q); 888 + return bdev_dax_supported(dev->bdev, PAGE_SIZE); 891 889 } 892 890 893 891 static bool dm_table_supports_dax(struct dm_table *t) ··· 1905 1907 1906 1908 if (dm_table_supports_dax(t)) 1907 1909 blk_queue_flag_set(QUEUE_FLAG_DAX, q); 1910 + else 1911 + blk_queue_flag_clear(QUEUE_FLAG_DAX, q); 1912 + 1908 1913 if (dm_table_supports_dax_write_cache(t)) 1909 1914 dax_write_cache(t->md->dax_dev, true); 1910 1915
-9
drivers/md/dm-thin-metadata.c
··· 776 776 static int __commit_transaction(struct dm_pool_metadata *pmd) 777 777 { 778 778 int r; 779 - size_t metadata_len, data_len; 780 779 struct thin_disk_superblock *disk_super; 781 780 struct dm_block *sblock; 782 781 ··· 793 794 return r; 794 795 795 796 r = dm_tm_pre_commit(pmd->tm); 796 - if (r < 0) 797 - return r; 798 - 799 - r = dm_sm_root_size(pmd->metadata_sm, &metadata_len); 800 - if (r < 0) 801 - return r; 802 - 803 - r = dm_sm_root_size(pmd->data_sm, &data_len); 804 797 if (r < 0) 805 798 return r; 806 799
+9 -2
drivers/md/dm-thin.c
··· 1386 1386 1387 1387 static void set_pool_mode(struct pool *pool, enum pool_mode new_mode); 1388 1388 1389 + static void requeue_bios(struct pool *pool); 1390 + 1389 1391 static void check_for_space(struct pool *pool) 1390 1392 { 1391 1393 int r; ··· 1400 1398 if (r) 1401 1399 return; 1402 1400 1403 - if (nr_free) 1401 + if (nr_free) { 1404 1402 set_pool_mode(pool, PM_WRITE); 1403 + requeue_bios(pool); 1404 + } 1405 1405 } 1406 1406 1407 1407 /* ··· 1480 1476 1481 1477 r = dm_pool_alloc_data_block(pool->pmd, result); 1482 1478 if (r) { 1483 - metadata_operation_failed(pool, "dm_pool_alloc_data_block", r); 1479 + if (r == -ENOSPC) 1480 + set_pool_mode(pool, PM_OUT_OF_DATA_SPACE); 1481 + else 1482 + metadata_operation_failed(pool, "dm_pool_alloc_data_block", r); 1484 1483 return r; 1485 1484 } 1486 1485
+5 -5
drivers/md/dm-writecache.c
··· 259 259 if (da != p) { 260 260 long i; 261 261 wc->memory_map = NULL; 262 - pages = kvmalloc(p * sizeof(struct page *), GFP_KERNEL); 262 + pages = kvmalloc_array(p, sizeof(struct page *), GFP_KERNEL); 263 263 if (!pages) { 264 264 r = -ENOMEM; 265 265 goto err2; ··· 859 859 860 860 if (wc->entries) 861 861 return 0; 862 - wc->entries = vmalloc(sizeof(struct wc_entry) * wc->n_blocks); 862 + wc->entries = vmalloc(array_size(sizeof(struct wc_entry), wc->n_blocks)); 863 863 if (!wc->entries) 864 864 return -ENOMEM; 865 865 for (b = 0; b < wc->n_blocks; b++) { ··· 1481 1481 wb->bio.bi_iter.bi_sector = read_original_sector(wc, e); 1482 1482 wb->page_offset = PAGE_SIZE; 1483 1483 if (max_pages <= WB_LIST_INLINE || 1484 - unlikely(!(wb->wc_list = kmalloc(max_pages * sizeof(struct wc_entry *), 1485 - GFP_NOIO | __GFP_NORETRY | 1486 - __GFP_NOMEMALLOC | __GFP_NOWARN)))) { 1484 + unlikely(!(wb->wc_list = kmalloc_array(max_pages, sizeof(struct wc_entry *), 1485 + GFP_NOIO | __GFP_NORETRY | 1486 + __GFP_NOMEMALLOC | __GFP_NOWARN)))) { 1487 1487 wb->wc_list = wb->wc_list_inline; 1488 1488 max_pages = WB_LIST_INLINE; 1489 1489 }
+1 -1
drivers/md/dm-zoned-target.c
··· 787 787 788 788 /* Chunk BIO work */ 789 789 mutex_init(&dmz->chunk_lock); 790 - INIT_RADIX_TREE(&dmz->chunk_rxtree, GFP_KERNEL); 790 + INIT_RADIX_TREE(&dmz->chunk_rxtree, GFP_NOIO); 791 791 dmz->chunk_wq = alloc_workqueue("dmz_cwq_%s", WQ_MEM_RECLAIM | WQ_UNBOUND, 792 792 0, dev->name); 793 793 if (!dmz->chunk_wq) {
+3 -5
drivers/md/dm.c
··· 1056 1056 if (len < 1) 1057 1057 goto out; 1058 1058 nr_pages = min(len, nr_pages); 1059 - if (ti->type->direct_access) 1060 - ret = ti->type->direct_access(ti, pgoff, nr_pages, kaddr, pfn); 1059 + ret = ti->type->direct_access(ti, pgoff, nr_pages, kaddr, pfn); 1061 1060 1062 1061 out: 1063 1062 dm_put_live_table(md, srcu_idx); ··· 1605 1606 * the usage of io->orig_bio in dm_remap_zone_report() 1606 1607 * won't be affected by this reassignment. 1607 1608 */ 1608 - struct bio *b = bio_clone_bioset(bio, GFP_NOIO, 1609 - &md->queue->bio_split); 1609 + struct bio *b = bio_split(bio, bio_sectors(bio) - ci.sector_count, 1610 + GFP_NOIO, &md->queue->bio_split); 1610 1611 ci.io->orig_bio = b; 1611 - bio_advance(bio, (bio_sectors(bio) - ci.sector_count) << 9); 1612 1612 bio_chain(b, bio); 1613 1613 ret = generic_make_request(bio); 1614 1614 break;
+11 -8
drivers/mtd/chips/cfi_cmdset_0002.c
··· 2526 2526 2527 2527 struct ppb_lock { 2528 2528 struct flchip *chip; 2529 - loff_t offset; 2529 + unsigned long adr; 2530 2530 int locked; 2531 2531 }; 2532 2532 ··· 2544 2544 unsigned long timeo; 2545 2545 int ret; 2546 2546 2547 + adr += chip->start; 2547 2548 mutex_lock(&chip->mutex); 2548 - ret = get_chip(map, chip, adr + chip->start, FL_LOCKING); 2549 + ret = get_chip(map, chip, adr, FL_LOCKING); 2549 2550 if (ret) { 2550 2551 mutex_unlock(&chip->mutex); 2551 2552 return ret; ··· 2564 2563 2565 2564 if (thunk == DO_XXLOCK_ONEBLOCK_LOCK) { 2566 2565 chip->state = FL_LOCKING; 2567 - map_write(map, CMD(0xA0), chip->start + adr); 2568 - map_write(map, CMD(0x00), chip->start + adr); 2566 + map_write(map, CMD(0xA0), adr); 2567 + map_write(map, CMD(0x00), adr); 2569 2568 } else if (thunk == DO_XXLOCK_ONEBLOCK_UNLOCK) { 2570 2569 /* 2571 2570 * Unlocking of one specific sector is not supported, so we ··· 2603 2602 map_write(map, CMD(0x00), chip->start); 2604 2603 2605 2604 chip->state = FL_READY; 2606 - put_chip(map, chip, adr + chip->start); 2605 + put_chip(map, chip, adr); 2607 2606 mutex_unlock(&chip->mutex); 2608 2607 2609 2608 return ret; ··· 2660 2659 * sectors shall be unlocked, so lets keep their locking 2661 2660 * status at "unlocked" (locked=0) for the final re-locking. 
2662 2661 */ 2663 - if ((adr < ofs) || (adr >= (ofs + len))) { 2662 + if ((offset < ofs) || (offset >= (ofs + len))) { 2664 2663 sect[sectors].chip = &cfi->chips[chipnum]; 2665 - sect[sectors].offset = offset; 2664 + sect[sectors].adr = adr; 2666 2665 sect[sectors].locked = do_ppb_xxlock( 2667 2666 map, &cfi->chips[chipnum], adr, 0, 2668 2667 DO_XXLOCK_ONEBLOCK_GETLOCK); ··· 2676 2675 i++; 2677 2676 2678 2677 if (adr >> cfi->chipshift) { 2678 + if (offset >= (ofs + len)) 2679 + break; 2679 2680 adr = 0; 2680 2681 chipnum++; 2681 2682 ··· 2708 2705 */ 2709 2706 for (i = 0; i < sectors; i++) { 2710 2707 if (sect[i].locked) 2711 - do_ppb_xxlock(map, sect[i].chip, sect[i].offset, 0, 2708 + do_ppb_xxlock(map, sect[i].chip, sect[i].adr, 0, 2712 2709 DO_XXLOCK_ONEBLOCK_LOCK); 2713 2710 } 2714 2711
+2 -2
drivers/mtd/devices/mtd_dataflash.c
··· 733 733 { "AT45DB642x", 0x1f2800, 8192, 1056, 11, SUP_POW2PS}, 734 734 { "at45db642d", 0x1f2800, 8192, 1024, 10, SUP_POW2PS | IS_POW2PS}, 735 735 736 - { "AT45DB641E", 0x1f28000100, 32768, 264, 9, SUP_EXTID | SUP_POW2PS}, 737 - { "at45db641e", 0x1f28000100, 32768, 256, 8, SUP_EXTID | SUP_POW2PS | IS_POW2PS}, 736 + { "AT45DB641E", 0x1f28000100ULL, 32768, 264, 9, SUP_EXTID | SUP_POW2PS}, 737 + { "at45db641e", 0x1f28000100ULL, 32768, 256, 8, SUP_EXTID | SUP_POW2PS | IS_POW2PS}, 738 738 }; 739 739 740 740 static struct flash_info *jedec_lookup(struct spi_device *spi,
+5 -1
drivers/mtd/nand/raw/denali_dt.c
··· 123 123 if (ret) 124 124 return ret; 125 125 126 - denali->clk_x_rate = clk_get_rate(dt->clk); 126 + /* 127 + * Hardcode the clock rate for the backward compatibility. 128 + * This works for both SOCFPGA and UniPhier. 129 + */ 130 + denali->clk_x_rate = 200000000; 127 131 128 132 ret = denali_init(denali); 129 133 if (ret)
+4 -1
drivers/mtd/nand/raw/mxc_nand.c
··· 48 48 #define NFC_V1_V2_CONFIG (host->regs + 0x0a) 49 49 #define NFC_V1_V2_ECC_STATUS_RESULT (host->regs + 0x0c) 50 50 #define NFC_V1_V2_RSLTMAIN_AREA (host->regs + 0x0e) 51 - #define NFC_V1_V2_RSLTSPARE_AREA (host->regs + 0x10) 51 + #define NFC_V21_RSLTSPARE_AREA (host->regs + 0x10) 52 52 #define NFC_V1_V2_WRPROT (host->regs + 0x12) 53 53 #define NFC_V1_UNLOCKSTART_BLKADDR (host->regs + 0x14) 54 54 #define NFC_V1_UNLOCKEND_BLKADDR (host->regs + 0x16) ··· 1273 1273 1274 1274 writew(config1, NFC_V1_V2_CONFIG1); 1275 1275 /* preset operation */ 1276 + 1277 + /* spare area size in 16-bit half-words */ 1278 + writew(mtd->oobsize / 2, NFC_V21_RSLTSPARE_AREA); 1276 1279 1277 1280 /* Unlock the internal RAM Buffer */ 1278 1281 writew(0x2, NFC_V1_V2_CONFIG);
+1 -1
drivers/mtd/nand/raw/nand_base.c
··· 440 440 441 441 for (; page < page_end; page++) { 442 442 res = chip->ecc.read_oob(mtd, chip, page); 443 - if (res) 443 + if (res < 0) 444 444 return res; 445 445 446 446 bad = chip->oob_poi[chip->badblockpos];
+36 -12
drivers/mtd/nand/raw/nand_macronix.c
··· 17 17 18 18 #include <linux/mtd/rawnand.h> 19 19 20 + /* 21 + * Macronix AC series does not support using SET/GET_FEATURES to change 22 + * the timings unlike what is declared in the parameter page. Unflag 23 + * this feature to avoid unnecessary downturns. 24 + */ 25 + static void macronix_nand_fix_broken_get_timings(struct nand_chip *chip) 26 + { 27 + unsigned int i; 28 + static const char * const broken_get_timings[] = { 29 + "MX30LF1G18AC", 30 + "MX30LF1G28AC", 31 + "MX30LF2G18AC", 32 + "MX30LF2G28AC", 33 + "MX30LF4G18AC", 34 + "MX30LF4G28AC", 35 + "MX60LF8G18AC", 36 + }; 37 + 38 + if (!chip->parameters.supports_set_get_features) 39 + return; 40 + 41 + for (i = 0; i < ARRAY_SIZE(broken_get_timings); i++) { 42 + if (!strcmp(broken_get_timings[i], chip->parameters.model)) 43 + break; 44 + } 45 + 46 + if (i == ARRAY_SIZE(broken_get_timings)) 47 + return; 48 + 49 + bitmap_clear(chip->parameters.get_feature_list, 50 + ONFI_FEATURE_ADDR_TIMING_MODE, 1); 51 + bitmap_clear(chip->parameters.set_feature_list, 52 + ONFI_FEATURE_ADDR_TIMING_MODE, 1); 53 + } 54 + 20 55 static int macronix_nand_init(struct nand_chip *chip) 21 56 { 22 57 if (nand_is_slc(chip)) 23 58 chip->bbt_options |= NAND_BBT_SCAN2NDPAGE; 24 59 25 - /* 26 - * MX30LF2G18AC chip does not support using SET/GET_FEATURES to change 27 - * the timings unlike what is declared in the parameter page. Unflag 28 - * this feature to avoid unnecessary downturns. 29 - */ 30 - if (chip->parameters.supports_set_get_features && 31 - !strcmp("MX30LF2G18AC", chip->parameters.model)) { 32 - bitmap_clear(chip->parameters.get_feature_list, 33 - ONFI_FEATURE_ADDR_TIMING_MODE, 1); 34 - bitmap_clear(chip->parameters.set_feature_list, 35 - ONFI_FEATURE_ADDR_TIMING_MODE, 1); 36 - } 60 + macronix_nand_fix_broken_get_timings(chip); 37 61 38 62 return 0; 39 63 }
+2
drivers/mtd/nand/raw/nand_micron.c
··· 66 66 67 67 if (p->supports_set_get_features) { 68 68 set_bit(ONFI_FEATURE_ADDR_READ_RETRY, p->set_feature_list); 69 + set_bit(ONFI_FEATURE_ON_DIE_ECC, p->set_feature_list); 69 70 set_bit(ONFI_FEATURE_ADDR_READ_RETRY, p->get_feature_list); 71 + set_bit(ONFI_FEATURE_ON_DIE_ECC, p->get_feature_list); 70 72 } 71 73 72 74 return 0;
+1 -1
drivers/net/ethernet/amd/Kconfig
··· 173 173 174 174 config AMD_XGBE 175 175 tristate "AMD 10GbE Ethernet driver" 176 - depends on ((OF_NET && OF_ADDRESS) || ACPI || PCI) && HAS_IOMEM && HAS_DMA 176 + depends on ((OF_NET && OF_ADDRESS) || ACPI || PCI) && HAS_IOMEM 177 177 depends on X86 || ARM64 || COMPILE_TEST 178 178 select BITREVERSE 179 179 select CRC32
-1
drivers/net/ethernet/apm/xgene-v2/Kconfig
··· 1 1 config NET_XGENE_V2 2 2 tristate "APM X-Gene SoC Ethernet-v2 Driver" 3 - depends on HAS_DMA 4 3 depends on ARCH_XGENE || COMPILE_TEST 5 4 help 6 5 This is the Ethernet driver for the on-chip ethernet interface
-1
drivers/net/ethernet/apm/xgene/Kconfig
··· 1 1 config NET_XGENE 2 2 tristate "APM X-Gene SoC Ethernet Driver" 3 - depends on HAS_DMA 4 3 depends on ARCH_XGENE || COMPILE_TEST 5 4 select PHYLIB 6 5 select MDIO_XGENE
+4 -2
drivers/net/ethernet/arc/Kconfig
··· 24 24 config ARC_EMAC 25 25 tristate "ARC EMAC support" 26 26 select ARC_EMAC_CORE 27 - depends on OF_IRQ && OF_NET && HAS_DMA && (ARC || COMPILE_TEST) 27 + depends on OF_IRQ && OF_NET 28 + depends on ARC || COMPILE_TEST 28 29 ---help--- 29 30 On some legacy ARC (Synopsys) FPGA boards such as ARCAngel4/ML50x 30 31 non-standard on-chip ethernet device ARC EMAC 10/100 is used. ··· 34 33 config EMAC_ROCKCHIP 35 34 tristate "Rockchip EMAC support" 36 35 select ARC_EMAC_CORE 37 - depends on OF_IRQ && OF_NET && REGULATOR && HAS_DMA && (ARCH_ROCKCHIP || COMPILE_TEST) 36 + depends on OF_IRQ && OF_NET && REGULATOR 37 + depends on ARCH_ROCKCHIP || COMPILE_TEST 38 38 ---help--- 39 39 Support for Rockchip RK3036/RK3066/RK3188 EMAC ethernet controllers. 40 40 This selects Rockchip SoC glue layer support for the
-2
drivers/net/ethernet/broadcom/Kconfig
··· 157 157 config BGMAC_BCMA 158 158 tristate "Broadcom iProc GBit BCMA support" 159 159 depends on BCMA && BCMA_HOST_SOC 160 - depends on HAS_DMA 161 160 depends on BCM47XX || ARCH_BCM_5301X || COMPILE_TEST 162 161 select BGMAC 163 162 select PHYLIB ··· 169 170 170 171 config BGMAC_PLATFORM 171 172 tristate "Broadcom iProc GBit platform support" 172 - depends on HAS_DMA 173 173 depends on ARCH_BCM_IPROC || COMPILE_TEST 174 174 depends on OF 175 175 select BGMAC
+1 -4
drivers/net/ethernet/cadence/macb_ptp.c
··· 170 170 171 171 if (delta > TSU_NSEC_MAX_VAL) { 172 172 gem_tsu_get_time(&bp->ptp_clock_info, &now); 173 - if (sign) 174 - now = timespec64_sub(now, then); 175 - else 176 - now = timespec64_add(now, then); 173 + now = timespec64_add(now, then); 177 174 178 175 gem_tsu_set_time(&bp->ptp_clock_info, 179 176 (const struct timespec64 *)&now);
+1 -1
drivers/net/ethernet/calxeda/Kconfig
··· 1 1 config NET_CALXEDA_XGMAC 2 2 tristate "Calxeda 1G/10G XGMAC Ethernet driver" 3 - depends on HAS_IOMEM && HAS_DMA 3 + depends on HAS_IOMEM 4 4 depends on ARCH_HIGHBANK || COMPILE_TEST 5 5 select CRC32 6 6 help
+1 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 263 263 "Can't %s DCB Priority on port %d, TX Queue %d: err=%d\n", 264 264 enable ? "set" : "unset", pi->port_id, i, -err); 265 265 else 266 - txq->dcb_prio = value; 266 + txq->dcb_prio = enable ? value : 0; 267 267 } 268 268 } 269 269
+1 -1
drivers/net/ethernet/hisilicon/Kconfig
··· 5 5 config NET_VENDOR_HISILICON 6 6 bool "Hisilicon devices" 7 7 default y 8 - depends on (OF || ACPI) && HAS_DMA 8 + depends on OF || ACPI 9 9 depends on ARM || ARM64 || COMPILE_TEST 10 10 ---help--- 11 11 If you have a network (Ethernet) card belonging to this class, say Y.
+3 -5
drivers/net/ethernet/marvell/Kconfig
··· 18 18 19 19 config MV643XX_ETH 20 20 tristate "Marvell Discovery (643XX) and Orion ethernet support" 21 - depends on (MV64X60 || PPC32 || PLAT_ORION || COMPILE_TEST) && INET 22 - depends on HAS_DMA 21 + depends on MV64X60 || PPC32 || PLAT_ORION || COMPILE_TEST 22 + depends on INET 23 23 select PHYLIB 24 24 select MVMDIO 25 25 ---help--- ··· 58 58 config MVNETA 59 59 tristate "Marvell Armada 370/38x/XP/37xx network interface support" 60 60 depends on ARCH_MVEBU || COMPILE_TEST 61 - depends on HAS_DMA 62 61 select MVMDIO 63 62 select PHYLINK 64 63 ---help--- ··· 83 84 config MVPP2 84 85 tristate "Marvell Armada 375/7K/8K network interface support" 85 86 depends on ARCH_MVEBU || COMPILE_TEST 86 - depends on HAS_DMA 87 87 select MVMDIO 88 88 select PHYLINK 89 89 ---help--- ··· 91 93 92 94 config PXA168_ETH 93 95 tristate "Marvell pxa168 ethernet support" 94 - depends on HAS_IOMEM && HAS_DMA 96 + depends on HAS_IOMEM 95 97 depends on CPU_PXA168 || ARCH_BERLIN || COMPILE_TEST 96 98 select PHYLIB 97 99 ---help---
+1 -1
drivers/net/ethernet/marvell/mvneta.c
··· 1932 1932 rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE); 1933 1933 index = rx_desc - rxq->descs; 1934 1934 data = rxq->buf_virt_addr[index]; 1935 - phys_addr = rx_desc->buf_phys_addr; 1935 + phys_addr = rx_desc->buf_phys_addr - pp->rx_offset_correction; 1936 1936 1937 1937 if (!mvneta_rxq_desc_is_first_last(rx_status) || 1938 1938 (rx_status & MVNETA_RXD_ERR_SUMMARY)) {
+1 -1
drivers/net/ethernet/mellanox/mlxsw/Kconfig
··· 30 30 31 31 config MLXSW_PCI 32 32 tristate "PCI bus implementation for Mellanox Technologies Switch ASICs" 33 - depends on PCI && HAS_DMA && HAS_IOMEM && MLXSW_CORE 33 + depends on PCI && HAS_IOMEM && MLXSW_CORE 34 34 default m 35 35 ---help--- 36 36 This is PCI bus implementation for Mellanox Technologies Switch ASICs.
+6 -5
drivers/net/ethernet/mscc/ocelot.c
··· 344 344 static int ocelot_gen_ifh(u32 *ifh, struct frame_info *info) 345 345 { 346 346 ifh[0] = IFH_INJ_BYPASS; 347 - ifh[1] = (0xff00 & info->port) >> 8; 347 + ifh[1] = (0xf00 & info->port) >> 8; 348 348 ifh[2] = (0xff & info->port) << 24; 349 - ifh[3] = IFH_INJ_POP_CNT_DISABLE | (info->cpuq << 20) | 350 - (info->tag_type << 16) | info->vid; 349 + ifh[3] = (info->tag_type << 16) | info->vid; 351 350 352 351 return 0; 353 352 } ··· 369 370 QS_INJ_CTRL_SOF, QS_INJ_CTRL, grp); 370 371 371 372 info.port = BIT(port->chip_port); 372 - info.cpuq = 0xff; 373 + info.tag_type = IFH_TAG_TYPE_C; 374 + info.vid = skb_vlan_tag_get(skb); 373 375 ocelot_gen_ifh(ifh, &info); 374 376 375 377 for (i = 0; i < IFH_LEN; i++) 376 - ocelot_write_rix(ocelot, ifh[i], QS_INJ_WR, grp); 378 + ocelot_write_rix(ocelot, (__force u32)cpu_to_be32(ifh[i]), 379 + QS_INJ_WR, grp); 377 380 378 381 count = (skb->len + 3) / 4; 379 382 last = skb->len % 4;
+1 -1
drivers/net/ethernet/realtek/r8169.c
··· 7148 7148 { 7149 7149 struct rtl8169_private *tp = netdev_priv(dev); 7150 7150 7151 - rtl8169_interrupt(pci_irq_vector(tp->pci_dev, 0), dev); 7151 + rtl8169_interrupt(pci_irq_vector(tp->pci_dev, 0), tp); 7152 7152 } 7153 7153 #endif 7154 7154
-2
drivers/net/ethernet/renesas/Kconfig
··· 17 17 18 18 config SH_ETH 19 19 tristate "Renesas SuperH Ethernet support" 20 - depends on HAS_DMA 21 20 depends on ARCH_RENESAS || SUPERH || COMPILE_TEST 22 21 select CRC32 23 22 select MII ··· 30 31 31 32 config RAVB 32 33 tristate "Renesas Ethernet AVB support" 33 - depends on HAS_DMA 34 34 depends on ARCH_RENESAS || COMPILE_TEST 35 35 select CRC32 36 36 select MII
+1
drivers/net/ethernet/sfc/efx.c
··· 3180 3180 return true; 3181 3181 } 3182 3182 3183 + static 3183 3184 struct hlist_head *efx_rps_hash_bucket(struct efx_nic *efx, 3184 3185 const struct efx_filter_spec *spec) 3185 3186 {
+1 -1
drivers/net/ethernet/ti/davinci_cpdma.c
··· 205 205 * devices (e.g. cpsw switches) use plain old memory. Descriptor pools 206 206 * abstract out these details 207 207 */ 208 - int cpdma_desc_pool_create(struct cpdma_ctlr *ctlr) 208 + static int cpdma_desc_pool_create(struct cpdma_ctlr *ctlr) 209 209 { 210 210 struct cpdma_params *cpdma_params = &ctlr->params; 211 211 struct cpdma_desc_pool *pool;
+4
drivers/net/ethernet/ti/davinci_emac.c
··· 1387 1387 1388 1388 static int match_first_device(struct device *dev, void *data) 1389 1389 { 1390 + if (dev->parent && dev->parent->of_node) 1391 + return of_device_is_compatible(dev->parent->of_node, 1392 + "ti,davinci_mdio"); 1393 + 1390 1394 return !strncmp(dev_name(dev), "davinci_mdio", 12); 1391 1395 } 1392 1396
+2 -1
drivers/net/ipvlan/ipvlan_main.c
··· 594 594 ipvlan->phy_dev = phy_dev; 595 595 ipvlan->dev = dev; 596 596 ipvlan->sfeatures = IPVLAN_FEATURES; 597 - ipvlan_adjust_mtu(ipvlan, phy_dev); 597 + if (!tb[IFLA_MTU]) 598 + ipvlan_adjust_mtu(ipvlan, phy_dev); 598 599 INIT_LIST_HEAD(&ipvlan->addrs); 599 600 spin_lock_init(&ipvlan->addrs_lock); 600 601
+1 -1
drivers/net/ppp/pppoe.c
··· 1107 1107 .socketpair = sock_no_socketpair, 1108 1108 .accept = sock_no_accept, 1109 1109 .getname = pppoe_getname, 1110 - .poll_mask = datagram_poll_mask, 1110 + .poll = datagram_poll, 1111 1111 .listen = sock_no_listen, 1112 1112 .shutdown = sock_no_shutdown, 1113 1113 .setsockopt = sock_no_setsockopt,
+1
drivers/net/usb/qmi_wwan.c
··· 1246 1246 {QMI_FIXED_INTF(0x413c, 0x81b3, 8)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */ 1247 1247 {QMI_FIXED_INTF(0x413c, 0x81b6, 8)}, /* Dell Wireless 5811e */ 1248 1248 {QMI_FIXED_INTF(0x413c, 0x81b6, 10)}, /* Dell Wireless 5811e */ 1249 + {QMI_FIXED_INTF(0x413c, 0x81d7, 1)}, /* Dell Wireless 5821e */ 1249 1250 {QMI_FIXED_INTF(0x03f0, 0x4e1d, 8)}, /* HP lt4111 LTE/EV-DO/HSPA+ Gobi 4G Module */ 1250 1251 {QMI_FIXED_INTF(0x03f0, 0x9d1d, 1)}, /* HP lt4120 Snapdragon X5 LTE */ 1251 1252 {QMI_FIXED_INTF(0x22de, 0x9061, 3)}, /* WeTelecom WPD-600N */
-1
drivers/net/wireless/broadcom/brcm80211/Kconfig
··· 60 60 bool "PCIE bus interface support for FullMAC driver" 61 61 depends on BRCMFMAC 62 62 depends on PCI 63 - depends on HAS_DMA 64 63 select BRCMFMAC_PROTO_MSGBUF 65 64 select FW_LOADER 66 65 ---help---
+1 -1
drivers/net/wireless/quantenna/qtnfmac/Kconfig
··· 7 7 config QTNFMAC_PEARL_PCIE 8 8 tristate "Quantenna QSR10g PCIe support" 9 9 default n 10 - depends on HAS_DMA && PCI && CFG80211 10 + depends on PCI && CFG80211 11 11 select QTNFMAC 12 12 select FW_LOADER 13 13 select CRC32
+6 -5
drivers/net/xen-netfront.c
··· 1810 1810 err = xen_net_read_mac(dev, info->netdev->dev_addr); 1811 1811 if (err) { 1812 1812 xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename); 1813 - goto out; 1813 + goto out_unlocked; 1814 1814 } 1815 1815 1816 1816 rtnl_lock(); ··· 1925 1925 xennet_destroy_queues(info); 1926 1926 out: 1927 1927 rtnl_unlock(); 1928 + out_unlocked: 1928 1929 device_unregister(&dev->dev); 1929 1930 return err; 1930 1931 } ··· 1951 1950 /* talk_to_netback() sets the correct number of queues */ 1952 1951 num_queues = dev->real_num_tx_queues; 1953 1952 1954 - rtnl_lock(); 1955 - netdev_update_features(dev); 1956 - rtnl_unlock(); 1957 - 1958 1953 if (dev->reg_state == NETREG_UNINITIALIZED) { 1959 1954 err = register_netdev(dev); 1960 1955 if (err) { ··· 1959 1962 return err; 1960 1963 } 1961 1964 } 1965 + 1966 + rtnl_lock(); 1967 + netdev_update_features(dev); 1968 + rtnl_unlock(); 1962 1969 1963 1970 /* 1964 1971 * All public and private state should now be sane. Get
+2 -2
drivers/nfc/pn533/usb.c
··· 74 74 struct sk_buff *skb = NULL; 75 75 76 76 if (!urb->status) { 77 - skb = alloc_skb(urb->actual_length, GFP_KERNEL); 77 + skb = alloc_skb(urb->actual_length, GFP_ATOMIC); 78 78 if (!skb) { 79 79 nfc_err(&phy->udev->dev, "failed to alloc memory\n"); 80 80 } else { ··· 186 186 187 187 if (dev->protocol_type == PN533_PROTO_REQ_RESP) { 188 188 /* request for response for sent packet directly */ 189 - rc = pn533_submit_urb_for_response(phy, GFP_ATOMIC); 189 + rc = pn533_submit_urb_for_response(phy, GFP_KERNEL); 190 190 if (rc) 191 191 goto error; 192 192 } else if (dev->protocol_type == PN533_PROTO_REQ_ACK_RESP) {
+2 -1
drivers/nvdimm/pmem.c
··· 414 414 blk_queue_logical_block_size(q, pmem_sector_size(ndns)); 415 415 blk_queue_max_hw_sectors(q, UINT_MAX); 416 416 blk_queue_flag_set(QUEUE_FLAG_NONROT, q); 417 - blk_queue_flag_set(QUEUE_FLAG_DAX, q); 417 + if (pmem->pfn_flags & PFN_MAP) 418 + blk_queue_flag_set(QUEUE_FLAG_DAX, q); 418 419 q->queuedata = pmem; 419 420 420 421 disk = alloc_disk_node(0, nid);
+5 -2
drivers/nvme/host/rdma.c
··· 732 732 blk_cleanup_queue(ctrl->ctrl.admin_q); 733 733 nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.admin_tagset); 734 734 } 735 - nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe, 736 - sizeof(struct nvme_command), DMA_TO_DEVICE); 735 + if (ctrl->async_event_sqe.data) { 736 + nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe, 737 + sizeof(struct nvme_command), DMA_TO_DEVICE); 738 + ctrl->async_event_sqe.data = NULL; 739 + } 737 740 nvme_rdma_free_queue(&ctrl->queues[0]); 738 741 } 739 742
+3 -3
drivers/pci/Makefile
··· 28 28 obj-$(CONFIG_PCI_ECAM) += ecam.o 29 29 obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o 30 30 31 - obj-y += controller/ 32 - obj-y += switch/ 33 - 34 31 # Endpoint library must be initialized before its users 35 32 obj-$(CONFIG_PCI_ENDPOINT) += endpoint/ 33 + 34 + obj-y += controller/ 35 + obj-y += switch/ 36 36 37 37 ccflags-$(CONFIG_PCI_DEBUG) := -DDEBUG
-3
drivers/pci/controller/Kconfig
··· 96 96 depends on OF 97 97 select PCI_HOST_COMMON 98 98 select IRQ_DOMAIN 99 - select PCI_DOMAINS 100 99 help 101 100 Say Y here if you want to support a simple generic PCI host 102 101 controller, such as the one emulated by kvmtool. ··· 137 138 138 139 config PCIE_IPROC 139 140 tristate 140 - select PCI_DOMAINS 141 141 help 142 142 This enables the iProc PCIe core controller support for Broadcom's 143 143 iProc family of SoCs. An appropriate bus interface driver needs ··· 174 176 config PCIE_ALTERA 175 177 bool "Altera PCIe controller" 176 178 depends on ARM || NIOS2 || COMPILE_TEST 177 - select PCI_DOMAINS 178 179 help 179 180 Say Y here if you want to enable PCIe controller support on Altera 180 181 FPGA.
+9 -1
drivers/pci/hotplug/acpi_pcihp.c
··· 7 7 * All rights reserved. 8 8 * 9 9 * Send feedback to <kristen.c.accardi@intel.com> 10 - * 11 10 */ 12 11 13 12 #include <linux/module.h> ··· 86 87 return 0; 87 88 88 89 /* If _OSC exists, we should not evaluate OSHP */ 90 + 91 + /* 92 + * If there's no ACPI host bridge (i.e., ACPI support is compiled 93 + * into the kernel but the hardware platform doesn't support ACPI), 94 + * there's nothing to do here. 95 + */ 89 96 host = pci_find_host_bridge(pdev->bus); 90 97 root = acpi_pci_find_root(ACPI_HANDLE(&host->dev)); 98 + if (!root) 99 + return 0; 100 + 91 101 if (root->osc_support_set) 92 102 goto no_control; 93 103
+1 -1
drivers/perf/xgene_pmu.c
··· 1463 1463 case PMU_TYPE_IOB: 1464 1464 return devm_kasprintf(dev, GFP_KERNEL, "iob%d", id); 1465 1465 case PMU_TYPE_IOB_SLOW: 1466 - return devm_kasprintf(dev, GFP_KERNEL, "iob-slow%d", id); 1466 + return devm_kasprintf(dev, GFP_KERNEL, "iob_slow%d", id); 1467 1467 case PMU_TYPE_MCB: 1468 1468 return devm_kasprintf(dev, GFP_KERNEL, "mcb%d", id); 1469 1469 case PMU_TYPE_MC:
-2
drivers/scsi/ipr.c
··· 760 760 ioa_cfg->hrrq[i].allow_interrupts = 0; 761 761 spin_unlock(&ioa_cfg->hrrq[i]._lock); 762 762 } 763 - wmb(); 764 763 765 764 /* Set interrupt mask to stop all new interrupts */ 766 765 if (ioa_cfg->sis64) ··· 8402 8403 ioa_cfg->hrrq[i].allow_interrupts = 1; 8403 8404 spin_unlock(&ioa_cfg->hrrq[i]._lock); 8404 8405 } 8405 - wmb(); 8406 8406 if (ioa_cfg->sis64) { 8407 8407 /* Set the adapter to the correct endian mode. */ 8408 8408 writel(IPR_ENDIAN_SWAP_KEY, ioa_cfg->regs.endian_swap_reg);
+3 -4
drivers/scsi/qla2xxx/qla_target.c
··· 1224 1224 void qlt_schedule_sess_for_deletion(struct fc_port *sess) 1225 1225 { 1226 1226 struct qla_tgt *tgt = sess->tgt; 1227 - struct qla_hw_data *ha = sess->vha->hw; 1228 1227 unsigned long flags; 1229 1228 1230 1229 if (sess->disc_state == DSC_DELETE_PEND) ··· 1240 1241 return; 1241 1242 } 1242 1243 1243 - spin_lock_irqsave(&ha->tgt.sess_lock, flags); 1244 1244 if (sess->deleted == QLA_SESS_DELETED) 1245 1245 sess->logout_on_delete = 0; 1246 1246 1247 + spin_lock_irqsave(&sess->vha->work_lock, flags); 1247 1248 if (sess->deleted == QLA_SESS_DELETION_IN_PROGRESS) { 1248 - spin_unlock_irqrestore(&ha->tgt.sess_lock, flags); 1249 + spin_unlock_irqrestore(&sess->vha->work_lock, flags); 1249 1250 return; 1250 1251 } 1251 1252 sess->deleted = QLA_SESS_DELETION_IN_PROGRESS; 1252 - spin_unlock_irqrestore(&ha->tgt.sess_lock, flags); 1253 + spin_unlock_irqrestore(&sess->vha->work_lock, flags); 1253 1254 1254 1255 sess->disc_state = DSC_DELETE_PEND; 1255 1256
+1 -1
drivers/scsi/scsi_debug.c
··· 5507 5507 int k = sdebug_add_host; 5508 5508 5509 5509 stop_all_queued(); 5510 - free_all_queued(); 5511 5510 for (; k; k--) 5512 5511 sdebug_remove_adapter(); 5512 + free_all_queued(); 5513 5513 driver_unregister(&sdebug_driverfs_driver); 5514 5514 bus_unregister(&pseudo_lld_bus); 5515 5515 root_device_unregister(pseudo_primary);
+9 -4
drivers/soc/imx/gpcv2.c
··· 39 39 40 40 #define GPC_M4_PU_PDN_FLG 0x1bc 41 41 42 - 43 - #define PGC_MIPI 4 44 - #define PGC_PCIE 5 45 - #define PGC_USB_HSIC 8 42 + /* 43 + * The PGC offset values in Reference Manual 44 + * (Rev. 1, 01/2018 and the older ones) GPC chapter's 45 + * GPC_PGC memory map are incorrect, below offset 46 + * values are from design RTL. 47 + */ 48 + #define PGC_MIPI 16 49 + #define PGC_PCIE 17 50 + #define PGC_USB_HSIC 20 46 51 #define GPC_PGC_CTRL(n) (0x800 + (n) * 0x40) 47 52 #define GPC_PGC_SR(n) (GPC_PGC_CTRL(n) + 0xc) 48 53
+2 -1
drivers/soc/qcom/Kconfig
··· 5 5 6 6 config QCOM_COMMAND_DB 7 7 bool "Qualcomm Command DB" 8 - depends on (ARCH_QCOM && OF) || COMPILE_TEST 8 + depends on ARCH_QCOM || COMPILE_TEST 9 + depends on OF_RESERVED_MEM 9 10 help 10 11 Command DB queries shared memory by key string for shared system 11 12 resources. Platform drivers that require to set state of a shared
+29 -6
drivers/soc/renesas/rcar-sysc.c
··· 194 194 
 195 195 static bool has_cpg_mstp;
 196 196 
 197 - static void __init rcar_sysc_pd_setup(struct rcar_sysc_pd *pd)
 197 + static int __init rcar_sysc_pd_setup(struct rcar_sysc_pd *pd)
 198 198 {
 199 199 struct generic_pm_domain *genpd = &pd->genpd;
 200 200 const char *name = pd->genpd.name;
 201 201 struct dev_power_governor *gov = &simple_qos_governor;
 202 + int error;
 202 203 
 203 204 if (pd->flags & PD_CPU) {
 204 205 /*
··· 252 251 rcar_sysc_power_up(&pd->ch);
 253 252 
 254 253 finalize:
 255 - pm_genpd_init(genpd, gov, false);
 254 + error = pm_genpd_init(genpd, gov, false);
 255 + if (error)
 256 + pr_err("Failed to init PM domain %s: %d\n", name, error);
 257 + 
 258 + return error;
 256 259 }
 257 260 
 258 261 static const struct of_device_id rcar_sysc_matches[] __initconst = {
··· 380 375 pr_debug("%pOF: syscier = 0x%08x\n", np, syscier);
 381 376 iowrite32(syscier, base + SYSCIER);
 382 377 
 378 + /*
 379 + * First, create all PM domains
 380 + */
 383 381 for (i = 0; i < info->num_areas; i++) {
 384 382 const struct rcar_sysc_area *area = &info->areas[i];
 385 383 struct rcar_sysc_pd *pd;
··· 405 397 pd->ch.isr_bit = area->isr_bit;
 406 398 pd->flags = area->flags;
 407 399 
 408 - rcar_sysc_pd_setup(pd);
 409 - if (area->parent >= 0)
 410 - pm_genpd_add_subdomain(domains->domains[area->parent],
 411 - &pd->genpd);
 400 + error = rcar_sysc_pd_setup(pd);
 401 + if (error)
 402 + goto out_put;
 412 403 
 413 404 domains->domains[area->isr_bit] = &pd->genpd;
 405 + }
 406 + 
 407 + /*
 408 + * Second, link all PM domains to their parents
 409 + */
 410 + for (i = 0; i < info->num_areas; i++) {
 411 + const struct rcar_sysc_area *area = &info->areas[i];
 412 + 
 413 + if (!area->name || area->parent < 0)
 414 + continue;
 415 + 
 416 + error = pm_genpd_add_subdomain(domains->domains[area->parent],
 417 + domains->domains[area->isr_bit]);
 418 + if (error)
 419 + pr_warn("Failed to add PM subdomain %s to parent %u\n",
 420 + area->name, area->parent);
 414 421 }
 415 422 
 416 423 error = of_genpd_add_provider_onecell(np, &domains->onecell_data);
+1 -1
drivers/staging/android/ion/ion_heap.c
··· 30 30 struct page **tmp = pages;
 31 31 
 32 32 if (!pages)
 33 - return NULL;
 33 + return ERR_PTR(-ENOMEM);
 34 34 
 35 35 if (buffer->flags & ION_FLAG_CACHED)
 36 36 pgprot = PAGE_KERNEL;
+1 -1
drivers/staging/comedi/drivers/quatech_daqp_cs.c
··· 642 642 /* Make sure D/A update mode is direct update */
 643 643 outb(0, dev->iobase + DAQP_AUX_REG);
 644 644 
 645 - for (i = 0; i > insn->n; i++) {
 645 + for (i = 0; i < insn->n; i++) {
 646 646 unsigned int val = data[i];
 647 647 int ret;
 648 648 
+36 -8
drivers/target/target_core_user.c
··· 656 656 }
 657 657 
 658 658 static void gather_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
 659 - bool bidi)
 659 + bool bidi, uint32_t read_len)
 660 660 {
 661 661 struct se_cmd *se_cmd = cmd->se_cmd;
 662 662 int i, dbi;
··· 689 689 for_each_sg(data_sg, sg, data_nents, i) {
 690 690 int sg_remaining = sg->length;
 691 691 to = kmap_atomic(sg_page(sg)) + sg->offset;
 692 - while (sg_remaining > 0) {
 692 + while (sg_remaining > 0 && read_len > 0) {
 693 693 if (block_remaining == 0) {
 694 694 if (from)
 695 695 kunmap_atomic(from);
··· 701 701 }
 702 702 copy_bytes = min_t(size_t, sg_remaining,
 703 703 block_remaining);
 704 + if (read_len < copy_bytes)
 705 + copy_bytes = read_len;
 704 706 offset = DATA_BLOCK_SIZE - block_remaining;
 705 707 tcmu_flush_dcache_range(from, copy_bytes);
 706 708 memcpy(to + sg->length - sg_remaining, from + offset,
··· 710 708 
 711 709 sg_remaining -= copy_bytes;
 712 710 block_remaining -= copy_bytes;
 711 + read_len -= copy_bytes;
 713 712 }
 714 713 kunmap_atomic(to - sg->offset);
 714 + if (read_len == 0)
 715 + break;
 715 716 }
 716 717 if (from)
 717 718 kunmap_atomic(from);
··· 1047 1042 {
 1048 1043 struct se_cmd *se_cmd = cmd->se_cmd;
 1049 1044 struct tcmu_dev *udev = cmd->tcmu_dev;
 1045 + bool read_len_valid = false;
 1046 + uint32_t read_len = se_cmd->data_length;
 1050 1047 
 1051 1048 /*
 1052 1049 * cmd has been completed already from timeout, just reclaim
··· 1063 1056 pr_warn("TCMU: Userspace set UNKNOWN_OP flag on se_cmd %p\n",
 1064 1057 cmd->se_cmd);
 1065 1058 entry->rsp.scsi_status = SAM_STAT_CHECK_CONDITION;
 1066 - } else if (entry->rsp.scsi_status == SAM_STAT_CHECK_CONDITION) {
 1059 + goto done;
 1060 + }
 1061 + 
 1062 + if (se_cmd->data_direction == DMA_FROM_DEVICE &&
 1063 + (entry->hdr.uflags & TCMU_UFLAG_READ_LEN) && entry->rsp.read_len) {
 1064 + read_len_valid = true;
 1065 + if (entry->rsp.read_len < read_len)
 1066 + read_len = entry->rsp.read_len;
 1067 + }
 1068 + 
 1069 + if (entry->rsp.scsi_status == SAM_STAT_CHECK_CONDITION) {
 1067 1070 transport_copy_sense_to_cmd(se_cmd, entry->rsp.sense_buffer);
 1068 - } else if (se_cmd->se_cmd_flags & SCF_BIDI) {
 1071 + if (!read_len_valid)
 1072 + goto done;
 1073 + else
 1074 + se_cmd->se_cmd_flags |= SCF_TREAT_READ_AS_NORMAL;
 1075 + }
 1076 + if (se_cmd->se_cmd_flags & SCF_BIDI) {
 1069 1077 /* Get Data-In buffer before clean up */
 1070 - gather_data_area(udev, cmd, true);
 1078 + gather_data_area(udev, cmd, true, read_len);
 1071 1079 } else if (se_cmd->data_direction == DMA_FROM_DEVICE) {
 1072 - gather_data_area(udev, cmd, false);
 1080 + gather_data_area(udev, cmd, false, read_len);
 1073 1081 } else if (se_cmd->data_direction == DMA_TO_DEVICE) {
 1074 1082 /* TODO: */
 1075 1083 } else if (se_cmd->data_direction != DMA_NONE) {
··· 1092 1070 se_cmd->data_direction);
 1093 1071 }
 1094 1072 
 1095 - target_complete_cmd(cmd->se_cmd, entry->rsp.scsi_status);
 1073 + done:
 1074 + if (read_len_valid) {
 1075 + pr_debug("read_len = %d\n", read_len);
 1076 + target_complete_cmd_with_length(cmd->se_cmd,
 1077 + entry->rsp.scsi_status, read_len);
 1078 + } else
 1079 + target_complete_cmd(cmd->se_cmd, entry->rsp.scsi_status);
 1096 1080 
 1097 1081 out:
 1098 1082 cmd->se_cmd = NULL;
··· 1768 1740 /* Initialise the mailbox of the ring buffer */
 1769 1741 mb = udev->mb_addr;
 1770 1742 mb->version = TCMU_MAILBOX_VERSION;
 1771 - mb->flags = TCMU_MAILBOX_FLAG_CAP_OOOC;
 1743 + mb->flags = TCMU_MAILBOX_FLAG_CAP_OOOC | TCMU_MAILBOX_FLAG_CAP_READ_LEN;
 1772 1744 mb->cmdr_off = CMDR_OFF;
 1773 1745 mb->cmdr_size = udev->cmdr_size;
 1774 1746 
+32 -23
drivers/tty/n_tty.c
··· 124 124 struct mutex output_lock;
 125 125 };
 126 126 
 127 + #define MASK(x) ((x) & (N_TTY_BUF_SIZE - 1))
 128 + 
 127 129 static inline size_t read_cnt(struct n_tty_data *ldata)
 128 130 {
 129 131 return ldata->read_head - ldata->read_tail;
··· 143 141 
 144 142 static inline unsigned char echo_buf(struct n_tty_data *ldata, size_t i)
 145 143 {
 144 + smp_rmb(); /* Matches smp_wmb() in add_echo_byte(). */
 146 145 return ldata->echo_buf[i & (N_TTY_BUF_SIZE - 1)];
 147 146 }
 148 147 
··· 319 316 static void reset_buffer_flags(struct n_tty_data *ldata)
 320 317 {
 321 318 ldata->read_head = ldata->canon_head = ldata->read_tail = 0;
 322 - ldata->echo_head = ldata->echo_tail = ldata->echo_commit = 0;
 323 319 ldata->commit_head = 0;
 324 - ldata->echo_mark = 0;
 325 320 ldata->line_start = 0;
 326 321 
 327 322 ldata->erasing = 0;
··· 618 617 old_space = space = tty_write_room(tty);
 619 618 
 620 619 tail = ldata->echo_tail;
 621 - while (ldata->echo_commit != tail) {
 620 + while (MASK(ldata->echo_commit) != MASK(tail)) {
 622 621 c = echo_buf(ldata, tail);
 623 622 if (c == ECHO_OP_START) {
 624 623 unsigned char op;
 625 624 int no_space_left = 0;
 626 625 
 626 + /*
 627 + * Since add_echo_byte() is called without holding
 628 + * output_lock, we might see only portion of multi-byte
 629 + * operation.
 630 + */
 631 + if (MASK(ldata->echo_commit) == MASK(tail + 1))
 632 + goto not_yet_stored;
 627 633 /*
 628 634 * If the buffer byte is the start of a multi-byte
 629 635 * operation, get the next byte, which is either the
··· 642 634 unsigned int num_chars, num_bs;
 643 635 
 644 636 case ECHO_OP_ERASE_TAB:
 637 + if (MASK(ldata->echo_commit) == MASK(tail + 2))
 638 + goto not_yet_stored;
 645 639 num_chars = echo_buf(ldata, tail + 2);
 646 640 
 647 641 /*
··· 738 728 /* If the echo buffer is nearly full (so that the possibility exists
 739 729 * of echo overrun before the next commit), then discard enough
 740 730 * data at the tail to prevent a subsequent overrun */
 741 - while (ldata->echo_commit - tail >= ECHO_DISCARD_WATERMARK) {
 731 + while (ldata->echo_commit > tail &&
 732 + ldata->echo_commit - tail >= ECHO_DISCARD_WATERMARK) {
 742 733 if (echo_buf(ldata, tail) == ECHO_OP_START) {
 743 734 if (echo_buf(ldata, tail + 1) == ECHO_OP_ERASE_TAB)
 744 735 tail += 3;
··· 749 738 tail++;
 750 739 }
 751 740 
 741 + not_yet_stored:
 752 742 ldata->echo_tail = tail;
 753 743 return old_space - space;
 754 744 }
··· 760 748 size_t nr, old, echoed;
 761 749 size_t head;
 762 750 
 751 + mutex_lock(&ldata->output_lock);
 763 752 head = ldata->echo_head;
 764 753 ldata->echo_mark = head;
 765 754 old = ldata->echo_commit - ldata->echo_tail;
··· 769 756 * is over the threshold (and try again each time another
 770 757 * block is accumulated) */
 771 758 nr = head - ldata->echo_tail;
 772 - if (nr < ECHO_COMMIT_WATERMARK || (nr % ECHO_BLOCK > old % ECHO_BLOCK))
 759 + if (nr < ECHO_COMMIT_WATERMARK ||
 760 + (nr % ECHO_BLOCK > old % ECHO_BLOCK)) {
 761 + mutex_unlock(&ldata->output_lock);
 773 762 return;
 763 + }
 774 764 
 775 - mutex_lock(&ldata->output_lock);
 776 765 ldata->echo_commit = head;
 777 766 echoed = __process_echoes(tty);
 778 767 mutex_unlock(&ldata->output_lock);
··· 825 810 
 826 811 static inline void add_echo_byte(unsigned char c, struct n_tty_data *ldata)
 827 812 {
 828 - *echo_buf_addr(ldata, ldata->echo_head++) = c;
 813 + *echo_buf_addr(ldata, ldata->echo_head) = c;
 814 + smp_wmb(); /* Matches smp_rmb() in echo_buf(). */
 815 + ldata->echo_head++;
 829 816 }
 830 817 
··· 995 978 }
 996 979 
 997 980 seen_alnums = 0;
 998 - while (ldata->read_head != ldata->canon_head) {
 981 + while (MASK(ldata->read_head) != MASK(ldata->canon_head)) {
 999 982 head = ldata->read_head;
 1000 983 
 1001 984 /* erase a single possibly multibyte character */
 1002 985 do {
 1003 986 head--;
 1004 987 c = read_buf(ldata, head);
 1005 - } while (is_continuation(c, tty) && head != ldata->canon_head);
 988 + } while (is_continuation(c, tty) &&
 989 + MASK(head) != MASK(ldata->canon_head));
 1006 990 
 1007 991 /* do not partially erase */
 1008 992 if (is_continuation(c, tty))
··· 1045 1027 * This info is used to go back the correct
 1046 1028 * number of columns.
 1047 1029 */
 1048 - while (tail != ldata->canon_head) {
 1030 + while (MASK(tail) != MASK(ldata->canon_head)) {
 1049 1031 tail--;
 1050 1032 c = read_buf(ldata, tail);
 1051 1033 if (c == '\t') {
··· 1320 1302 finish_erasing(ldata);
 1321 1303 echo_char(c, tty);
 1322 1304 echo_char_raw('\n', ldata);
 1323 - while (tail != ldata->read_head) {
 1305 + while (MASK(tail) != MASK(ldata->read_head)) {
 1324 1306 echo_char(read_buf(ldata, tail), tty);
 1325 1307 tail++;
 1326 1308 }
··· 1896 1878 struct n_tty_data *ldata;
 1897 1879 
 1898 1880 /* Currently a malloc failure here can panic */
 1899 - ldata = vmalloc(sizeof(*ldata));
 1881 + ldata = vzalloc(sizeof(*ldata));
 1900 1882 if (!ldata)
 1901 - goto err;
 1883 + return -ENOMEM;
 1902 1884 
 1903 1885 ldata->overrun_time = jiffies;
 1904 1886 mutex_init(&ldata->atomic_read_lock);
 1905 1887 mutex_init(&ldata->output_lock);
 1906 1888 
 1907 1889 tty->disc_data = ldata;
 1908 - reset_buffer_flags(tty->disc_data);
 1909 - ldata->column = 0;
 1910 - ldata->canon_column = 0;
 1911 - ldata->num_overrun = 0;
 1912 - ldata->no_room = 0;
 1913 - ldata->lnext = 0;
 1914 1890 tty->closing = 0;
 1915 1891 /* indicate buffer work may resume */
 1916 1892 clear_bit(TTY_LDISC_HALTED, &tty->flags);
 1917 1893 n_tty_set_termios(tty, NULL);
 1918 1894 tty_unthrottle(tty);
 1919 - 
 1920 1895 return 0;
 1921 - err:
 1922 - return -ENOMEM;
 1923 1896 }
··· 2420 2411 tail = ldata->read_tail;
 2421 2412 nr = head - tail;
 2422 2413 /* Skip EOF-chars.. */
 2423 - while (head != tail) {
 2414 + while (MASK(head) != MASK(tail)) {
 2424 2415 if (test_bit(tail & (N_TTY_BUF_SIZE - 1), ldata->read_flags) &&
 2425 2416 read_buf(ldata, tail) == __DISABLED_CHAR)
 2426 2417 nr--;
+1
drivers/tty/serdev/core.c
··· 617 617 static void __exit serdev_exit(void)
 618 618 {
 619 619 bus_unregister(&serdev_bus_type);
 620 + ida_destroy(&ctrl_ida);
 620 621 }
 621 622 module_exit(serdev_exit);
 622 623 
-2
drivers/tty/serial/8250/8250_pci.c
··· 3339 3339 /* multi-io cards handled by parport_serial */
 3340 3340 { PCI_DEVICE(0x4348, 0x7053), }, /* WCH CH353 2S1P */
 3341 3341 { PCI_DEVICE(0x4348, 0x5053), }, /* WCH CH353 1S1P */
 3342 - { PCI_DEVICE(0x4348, 0x7173), }, /* WCH CH355 4S */
 3343 3342 { PCI_DEVICE(0x1c00, 0x3250), }, /* WCH CH382 2S1P */
 3344 - { PCI_DEVICE(0x1c00, 0x3470), }, /* WCH CH384 4S */
 3345 3343 
 3346 3344 /* Moxa Smartio MUE boards handled by 8250_moxa */
 3347 3345 { PCI_VDEVICE(MOXA, 0x1024), },
+2 -2
drivers/tty/vt/vt.c
··· 784 784 if (!*vc->vc_uni_pagedir_loc)
 785 785 con_set_default_unimap(vc);
 786 786 
 787 - vc->vc_screenbuf = kmalloc(vc->vc_screenbuf_size, GFP_KERNEL);
 787 + vc->vc_screenbuf = kzalloc(vc->vc_screenbuf_size, GFP_KERNEL);
 788 788 if (!vc->vc_screenbuf)
 789 789 goto err_free;
 790 790 
··· 871 871 
 872 872 if (new_screen_size > (4 << 20))
 873 873 return -EINVAL;
 874 - newscreen = kmalloc(new_screen_size, GFP_USER);
 874 + newscreen = kzalloc(new_screen_size, GFP_USER);
 875 875 if (!newscreen)
 876 876 return -ENOMEM;
 877 877 
+4 -1
drivers/usb/chipidea/host.c
··· 124 124 
 125 125 hcd->power_budget = ci->platdata->power_budget;
 126 126 hcd->tpl_support = ci->platdata->tpl_support;
 127 - if (ci->phy || ci->usb_phy)
 127 + if (ci->phy || ci->usb_phy) {
 128 128 hcd->skip_phy_initialization = 1;
 129 + if (ci->usb_phy)
 130 + hcd->usb_phy = ci->usb_phy;
 131 + }
 129 132 
 130 133 ehci = hcd_to_ehci(hcd);
 131 134 ehci->caps = ci->hw_bank.cap;
+3
drivers/usb/class/cdc-acm.c
··· 1758 1758 { USB_DEVICE(0x11ca, 0x0201), /* VeriFone Mx870 Gadget Serial */
 1759 1759 .driver_info = SINGLE_RX_URB,
 1760 1760 },
 1761 + { USB_DEVICE(0x1965, 0x0018), /* Uniden UBC125XLT */
 1762 + .driver_info = NO_UNION_NORMAL, /* has no union descriptor */
 1763 + },
 1761 1764 { USB_DEVICE(0x22b8, 0x7000), /* Motorola Q Phone */
 1762 1765 .driver_info = NO_UNION_NORMAL, /* has no union descriptor */
 1763 1766 },
+3
drivers/usb/dwc2/core.h
··· 1004 1004 * @frame_list_sz: Frame list size
 1005 1005 * @desc_gen_cache: Kmem cache for generic descriptors
 1006 1006 * @desc_hsisoc_cache: Kmem cache for hs isochronous descriptors
 1007 + * @unaligned_cache: Kmem cache for DMA mode to handle non-aligned buf
 1007 1008 *
 1008 1009 * These are for peripheral mode:
 1009 1010 *
··· 1178 1177 u32 frame_list_sz;
 1179 1178 struct kmem_cache *desc_gen_cache;
 1180 1179 struct kmem_cache *desc_hsisoc_cache;
 1180 + struct kmem_cache *unaligned_cache;
 1181 + #define DWC2_KMEM_UNALIGNED_BUF_SIZE 1024
 1181 1182 
 1182 1183 #endif /* CONFIG_USB_DWC2_HOST || CONFIG_USB_DWC2_DUAL_ROLE */
 1183 1184 
+12 -8
drivers/usb/dwc2/gadget.c
··· 812 812 u32 index;
 813 813 u32 maxsize = 0;
 814 814 u32 mask = 0;
 815 + u8 pid = 0;
 815 816 
 816 817 maxsize = dwc2_gadget_get_desc_params(hs_ep, &mask);
 817 818 
··· 841 840 ((len << DEV_DMA_NBYTES_SHIFT) & mask));
 842 841 
 843 842 if (hs_ep->dir_in) {
 844 - desc->status |= ((hs_ep->mc << DEV_DMA_ISOC_PID_SHIFT) &
 843 + if (len)
 844 + pid = DIV_ROUND_UP(len, hs_ep->ep.maxpacket);
 845 + else
 846 + pid = 1;
 847 + desc->status |= ((pid << DEV_DMA_ISOC_PID_SHIFT) &
 845 848 DEV_DMA_ISOC_PID_MASK) |
 846 849 ((len % hs_ep->ep.maxpacket) ?
 847 850 DEV_DMA_SHORT : 0) |
··· 889 884 struct dwc2_dma_desc *desc;
 890 885 
 891 886 if (list_empty(&hs_ep->queue)) {
 887 + hs_ep->target_frame = TARGET_FRAME_INITIAL;
 892 888 dev_dbg(hsotg->dev, "%s: No requests in queue\n", __func__);
 893 889 return;
 894 890 }
··· 2761 2755 */
 2762 2756 tmp = dwc2_hsotg_read_frameno(hsotg);
 2763 2757 
 2764 - dwc2_hsotg_complete_request(hsotg, ep, get_ep_head(ep), 0);
 2765 - 
 2766 2758 if (using_desc_dma(hsotg)) {
 2767 2759 if (ep->target_frame == TARGET_FRAME_INITIAL) {
 2768 2760 /* Start first ISO Out */
··· 2821 2817 
 2822 2818 tmp = dwc2_hsotg_read_frameno(hsotg);
 2823 2819 if (using_desc_dma(hsotg)) {
 2824 - dwc2_hsotg_complete_request(hsotg, hs_ep,
 2825 - get_ep_head(hs_ep), 0);
 2826 - 
 2827 2820 hs_ep->target_frame = tmp;
 2828 2821 dwc2_gadget_incr_frame_num(hs_ep);
 2829 2822 dwc2_gadget_start_isoc_ddma(hs_ep);
··· 4740 4739 }
 4741 4740 
 4742 4741 ret = usb_add_gadget_udc(dev, &hsotg->gadget);
 4743 - if (ret)
 4742 + if (ret) {
 4743 + dwc2_hsotg_ep_free_request(&hsotg->eps_out[0]->ep,
 4744 + hsotg->ctrl_req);
 4744 4745 return ret;
 4745 - 
 4746 + }
 4746 4747 dwc2_hsotg_dump(hsotg);
 4747 4748 
 4748 4749 return 0;
··· 4758 4755 int dwc2_hsotg_remove(struct dwc2_hsotg *hsotg)
 4759 4756 {
 4760 4757 usb_del_gadget_udc(&hsotg->gadget);
 4758 + dwc2_hsotg_ep_free_request(&hsotg->eps_out[0]->ep, hsotg->ctrl_req);
 4761 4759 
 4762 4760 return 0;
 4763 4761 }
+87 -6
drivers/usb/dwc2/hcd.c
··· 1567 1567 } 1568 1568 1569 1569 if (hsotg->params.host_dma) { 1570 - dwc2_writel((u32)chan->xfer_dma, 1571 - hsotg->regs + HCDMA(chan->hc_num)); 1570 + dma_addr_t dma_addr; 1571 + 1572 + if (chan->align_buf) { 1573 + if (dbg_hc(chan)) 1574 + dev_vdbg(hsotg->dev, "align_buf\n"); 1575 + dma_addr = chan->align_buf; 1576 + } else { 1577 + dma_addr = chan->xfer_dma; 1578 + } 1579 + dwc2_writel((u32)dma_addr, hsotg->regs + HCDMA(chan->hc_num)); 1580 + 1572 1581 if (dbg_hc(chan)) 1573 1582 dev_vdbg(hsotg->dev, "Wrote %08lx to HCDMA(%d)\n", 1574 - (unsigned long)chan->xfer_dma, chan->hc_num); 1583 + (unsigned long)dma_addr, chan->hc_num); 1575 1584 } 1576 1585 1577 1586 /* Start the split */ ··· 2634 2625 } 2635 2626 } 2636 2627 2628 + static int dwc2_alloc_split_dma_aligned_buf(struct dwc2_hsotg *hsotg, 2629 + struct dwc2_qh *qh, 2630 + struct dwc2_host_chan *chan) 2631 + { 2632 + if (!hsotg->unaligned_cache || 2633 + chan->max_packet > DWC2_KMEM_UNALIGNED_BUF_SIZE) 2634 + return -ENOMEM; 2635 + 2636 + if (!qh->dw_align_buf) { 2637 + qh->dw_align_buf = kmem_cache_alloc(hsotg->unaligned_cache, 2638 + GFP_ATOMIC | GFP_DMA); 2639 + if (!qh->dw_align_buf) 2640 + return -ENOMEM; 2641 + } 2642 + 2643 + qh->dw_align_buf_dma = dma_map_single(hsotg->dev, qh->dw_align_buf, 2644 + DWC2_KMEM_UNALIGNED_BUF_SIZE, 2645 + DMA_FROM_DEVICE); 2646 + 2647 + if (dma_mapping_error(hsotg->dev, qh->dw_align_buf_dma)) { 2648 + dev_err(hsotg->dev, "can't map align_buf\n"); 2649 + chan->align_buf = 0; 2650 + return -EINVAL; 2651 + } 2652 + 2653 + chan->align_buf = qh->dw_align_buf_dma; 2654 + return 0; 2655 + } 2656 + 2637 2657 #define DWC2_USB_DMA_ALIGN 4 2638 2658 2639 2659 struct dma_aligned_buffer { ··· 2839 2801 2840 2802 /* Set the transfer attributes */ 2841 2803 dwc2_hc_init_xfer(hsotg, chan, qtd); 2804 + 2805 + /* For non-dword aligned buffers */ 2806 + if (hsotg->params.host_dma && qh->do_split && 2807 + chan->ep_is_in && (chan->xfer_dma & 0x3)) { 2808 + dev_vdbg(hsotg->dev, 
"Non-aligned buffer\n"); 2809 + if (dwc2_alloc_split_dma_aligned_buf(hsotg, qh, chan)) { 2810 + dev_err(hsotg->dev, 2811 + "Failed to allocate memory to handle non-aligned buffer\n"); 2812 + /* Add channel back to free list */ 2813 + chan->align_buf = 0; 2814 + chan->multi_count = 0; 2815 + list_add_tail(&chan->hc_list_entry, 2816 + &hsotg->free_hc_list); 2817 + qtd->in_process = 0; 2818 + qh->channel = NULL; 2819 + return -ENOMEM; 2820 + } 2821 + } else { 2822 + /* 2823 + * We assume that DMA is always aligned in non-split 2824 + * case or split out case. Warn if not. 2825 + */ 2826 + WARN_ON_ONCE(hsotg->params.host_dma && 2827 + (chan->xfer_dma & 0x3)); 2828 + chan->align_buf = 0; 2829 + } 2842 2830 2843 2831 if (chan->ep_type == USB_ENDPOINT_XFER_INT || 2844 2832 chan->ep_type == USB_ENDPOINT_XFER_ISOC) ··· 5310 5246 } 5311 5247 } 5312 5248 5249 + if (hsotg->params.host_dma) { 5250 + /* 5251 + * Create kmem caches to handle non-aligned buffer 5252 + * in Buffer DMA mode. 5253 + */ 5254 + hsotg->unaligned_cache = kmem_cache_create("dwc2-unaligned-dma", 5255 + DWC2_KMEM_UNALIGNED_BUF_SIZE, 4, 5256 + SLAB_CACHE_DMA, NULL); 5257 + if (!hsotg->unaligned_cache) 5258 + dev_err(hsotg->dev, 5259 + "unable to create dwc2 unaligned cache\n"); 5260 + } 5261 + 5313 5262 hsotg->otg_port = 1; 5314 5263 hsotg->frame_list = NULL; 5315 5264 hsotg->frame_list_dma = 0; ··· 5357 5280 return 0; 5358 5281 5359 5282 error4: 5360 - kmem_cache_destroy(hsotg->desc_gen_cache); 5283 + kmem_cache_destroy(hsotg->unaligned_cache); 5361 5284 kmem_cache_destroy(hsotg->desc_hsisoc_cache); 5285 + kmem_cache_destroy(hsotg->desc_gen_cache); 5362 5286 error3: 5363 5287 dwc2_hcd_release(hsotg); 5364 5288 error2: ··· 5400 5322 usb_remove_hcd(hcd); 5401 5323 hsotg->priv = NULL; 5402 5324 5403 - kmem_cache_destroy(hsotg->desc_gen_cache); 5325 + kmem_cache_destroy(hsotg->unaligned_cache); 5404 5326 kmem_cache_destroy(hsotg->desc_hsisoc_cache); 5327 + kmem_cache_destroy(hsotg->desc_gen_cache); 5405 5328 
5406 5329 dwc2_hcd_release(hsotg); 5407 5330 usb_put_hcd(hcd); ··· 5514 5435 dwc2_writel(hprt0, hsotg->regs + HPRT0); 5515 5436 5516 5437 /* Wait for the HPRT0.PrtSusp register field to be set */ 5517 - if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 300)) 5438 + if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 3000)) 5518 5439 dev_warn(hsotg->dev, "Suspend wasn't generated\n"); 5519 5440 5520 5441 /* ··· 5694 5615 __func__); 5695 5616 return ret; 5696 5617 } 5618 + 5619 + dwc2_hcd_rem_wakeup(hsotg); 5697 5620 5698 5621 hsotg->hibernated = 0; 5699 5622 hsotg->bus_suspended = 0;
+8
drivers/usb/dwc2/hcd.h
··· 76 76 * (micro)frame
 77 77 * @xfer_buf: Pointer to current transfer buffer position
 78 78 * @xfer_dma: DMA address of xfer_buf
 79 + * @align_buf: In Buffer DMA mode this will be used if xfer_buf is not
 80 + * DWORD aligned
 79 81 * @xfer_len: Total number of bytes to transfer
 80 82 * @xfer_count: Number of bytes transferred so far
 81 83 * @start_pkt_count: Packet count at start of transfer
··· 135 133 
 136 134 u8 *xfer_buf;
 137 135 dma_addr_t xfer_dma;
 136 + dma_addr_t align_buf;
 138 137 u32 xfer_len;
 139 138 u32 xfer_count;
 140 139 u16 start_pkt_count;
··· 305 302 * speed. Note that this is in "schedule slice" which
 306 303 * is tightly packed.
 307 304 * @ntd: Actual number of transfer descriptors in a list
 305 + * @dw_align_buf: Used instead of original buffer if its physical address
 306 + * is not dword-aligned
 307 + * @dw_align_buf_dma: DMA address for dw_align_buf
 308 308 * @qtd_list: List of QTDs for this QH
 309 309 * @channel: Host channel currently processing transfers for this QH
 310 310 * @qh_list_entry: Entry for QH in either the periodic or non-periodic
··· 356 350 struct dwc2_hs_transfer_time hs_transfers[DWC2_HS_SCHEDULE_UFRAMES];
 357 351 u32 ls_start_schedule_slice;
 358 352 u16 ntd;
 353 + u8 *dw_align_buf;
 354 + dma_addr_t dw_align_buf_dma;
 359 355 struct list_head qtd_list;
 360 356 struct dwc2_host_chan *channel;
 361 357 struct list_head qh_list_entry;
+9 -2
drivers/usb/dwc2/hcd_intr.c
··· 942 942 frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
 943 943 len = dwc2_get_actual_xfer_length(hsotg, chan, chnum, qtd,
 944 944 DWC2_HC_XFER_COMPLETE, NULL);
 945 - if (!len) {
 945 + if (!len && !qtd->isoc_split_offset) {
 946 946 qtd->complete_split = 0;
 947 - qtd->isoc_split_offset = 0;
 948 947 return 0;
 949 948 }
 950 949 
 951 950 frame_desc->actual_length += len;
 951 + 
 952 + if (chan->align_buf) {
 953 + dev_vdbg(hsotg->dev, "non-aligned buffer\n");
 954 + dma_unmap_single(hsotg->dev, chan->qh->dw_align_buf_dma,
 955 + DWC2_KMEM_UNALIGNED_BUF_SIZE, DMA_FROM_DEVICE);
 956 + memcpy(qtd->urb->buf + (chan->xfer_dma - qtd->urb->dma),
 957 + chan->qh->dw_align_buf, len);
 958 + }
 952 959 
 953 960 qtd->isoc_split_offset += len;
 954 961 
+4 -1
drivers/usb/dwc2/hcd_queue.c
··· 383 383 /* Get the map and adjust if this is a multi_tt hub */
 384 384 map = qh->dwc_tt->periodic_bitmaps;
 385 385 if (qh->dwc_tt->usb_tt->multi)
 386 - map += DWC2_ELEMENTS_PER_LS_BITMAP * qh->ttport;
 386 + map += DWC2_ELEMENTS_PER_LS_BITMAP * (qh->ttport - 1);
 387 387 
 388 388 return map;
 389 389 }
··· 1696 1696 
 1697 1697 if (qh->desc_list)
 1698 1698 dwc2_hcd_qh_free_ddma(hsotg, qh);
 1699 + else if (hsotg->unaligned_cache && qh->dw_align_buf)
 1700 + kmem_cache_free(hsotg->unaligned_cache, qh->dw_align_buf);
 1701 + 
 1699 1702 kfree(qh);
 1700 1703 }
 1701 1704 
+13 -10
drivers/usb/dwc3/core.c
··· 1272 1272 if (!dwc->clks)
 1273 1273 return -ENOMEM;
 1274 1274 
 1275 - dwc->num_clks = ARRAY_SIZE(dwc3_core_clks);
 1276 1275 dwc->dev = dev;
 1277 1276 
 1278 1277 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
··· 1306 1307 if (IS_ERR(dwc->reset))
 1307 1308 return PTR_ERR(dwc->reset);
 1308 1309 
 1309 - ret = clk_bulk_get(dev, dwc->num_clks, dwc->clks);
 1310 - if (ret == -EPROBE_DEFER)
 1311 - return ret;
 1312 - /*
 1313 - * Clocks are optional, but new DT platforms should support all clocks
 1314 - * as required by the DT-binding.
 1315 - */
 1316 - if (ret)
 1317 - dwc->num_clks = 0;
 1310 + if (dev->of_node) {
 1311 + dwc->num_clks = ARRAY_SIZE(dwc3_core_clks);
 1312 + 
 1313 + ret = clk_bulk_get(dev, dwc->num_clks, dwc->clks);
 1314 + if (ret == -EPROBE_DEFER)
 1315 + return ret;
 1316 + /*
 1317 + * Clocks are optional, but new DT platforms should support all
 1318 + * clocks as required by the DT-binding.
 1319 + */
 1320 + if (ret)
 1321 + dwc->num_clks = 0;
 1322 + }
 1318 1323 
 1319 1324 ret = reset_control_deassert(dwc->reset);
 1320 1325 if (ret)
+2 -1
drivers/usb/dwc3/dwc3-of-simple.c
··· 165 165 
 166 166 reset_control_put(simple->resets);
 167 167 
 168 - pm_runtime_put_sync(dev);
 169 168 pm_runtime_disable(dev);
 169 + pm_runtime_put_noidle(dev);
 170 + pm_runtime_set_suspended(dev);
 170 171 
 171 172 return 0;
 172 173 }
+2
drivers/usb/dwc3/dwc3-pci.c
··· 34 34 #define PCI_DEVICE_ID_INTEL_GLK 0x31aa
 35 35 #define PCI_DEVICE_ID_INTEL_CNPLP 0x9dee
 36 36 #define PCI_DEVICE_ID_INTEL_CNPH 0xa36e
 37 + #define PCI_DEVICE_ID_INTEL_ICLLP 0x34ee
 37 38 
 38 39 #define PCI_INTEL_BXT_DSM_GUID "732b85d5-b7a7-4a1b-9ba0-4bbd00ffd511"
 39 40 #define PCI_INTEL_BXT_FUNC_PMU_PWR 4
··· 290 289 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_GLK), },
 291 290 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CNPLP), },
 292 291 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CNPH), },
 292 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICLLP), },
 293 293 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_NL_USB), },
 294 294 { } /* Terminating Entry */
 295 295 };
+5 -8
drivers/usb/dwc3/dwc3-qcom.c
··· 490 490 qcom->dwc3 = of_find_device_by_node(dwc3_np);
 491 491 if (!qcom->dwc3) {
 492 492 dev_err(&pdev->dev, "failed to get dwc3 platform device\n");
 493 + ret = -ENODEV;
 493 494 goto depopulate;
 494 495 }
 495 496 
··· 548 547 return 0;
 549 548 }
 550 549 
 551 - #ifdef CONFIG_PM_SLEEP
 552 - static int dwc3_qcom_pm_suspend(struct device *dev)
 550 + static int __maybe_unused dwc3_qcom_pm_suspend(struct device *dev)
 553 551 {
 554 552 struct dwc3_qcom *qcom = dev_get_drvdata(dev);
 555 553 int ret = 0;
··· 560 560 return ret;
 561 561 }
 562 562 
 563 - static int dwc3_qcom_pm_resume(struct device *dev)
 563 + static int __maybe_unused dwc3_qcom_pm_resume(struct device *dev)
 564 564 {
 565 565 struct dwc3_qcom *qcom = dev_get_drvdata(dev);
 566 566 int ret;
··· 571 571 
 572 572 return ret;
 573 573 }
 574 - #endif
 575 574 
 576 - #ifdef CONFIG_PM
 577 - static int dwc3_qcom_runtime_suspend(struct device *dev)
 575 + static int __maybe_unused dwc3_qcom_runtime_suspend(struct device *dev)
 578 576 {
 579 577 struct dwc3_qcom *qcom = dev_get_drvdata(dev);
 580 578 
 581 579 return dwc3_qcom_suspend(qcom);
 582 580 }
 583 581 
 584 - static int dwc3_qcom_runtime_resume(struct device *dev)
 582 + static int __maybe_unused dwc3_qcom_runtime_resume(struct device *dev)
 585 583 {
 586 584 struct dwc3_qcom *qcom = dev_get_drvdata(dev);
 587 585 
 588 586 return dwc3_qcom_resume(qcom);
 589 587 }
 590 - #endif
 591 588 
 592 589 static const struct dev_pm_ops dwc3_qcom_dev_pm_ops = {
 593 590 SET_SYSTEM_SLEEP_PM_OPS(dwc3_qcom_pm_suspend, dwc3_qcom_pm_resume)
+3
drivers/usb/gadget/composite.c
··· 1719 1719 */
 1720 1720 if (w_value && !f->get_alt)
 1721 1721 break;
 1722 + 
 1723 + spin_lock(&cdev->lock);
 1722 1724 value = f->set_alt(f, w_index, w_value);
 1723 1725 if (value == USB_GADGET_DELAYED_STATUS) {
 1724 1726 DBG(cdev,
··· 1730 1728 DBG(cdev, "delayed_status count %d\n",
 1731 1729 cdev->delayed_status);
 1732 1730 }
 1731 + spin_unlock(&cdev->lock);
 1733 1732 break;
 1734 1733 case USB_REQ_GET_INTERFACE:
 1735 1734 if (ctrl->bRequestType != (USB_DIR_IN|USB_RECIP_INTERFACE))
+18 -8
drivers/usb/gadget/function/f_fs.c
··· 215 215 
 216 216 struct mm_struct *mm;
 217 217 struct work_struct work;
 218 + struct work_struct cancellation_work;
 218 219 
 219 220 struct usb_ep *ep;
 220 221 struct usb_request *req;
··· 1073 1072 return 0;
 1074 1073 }
 1075 1074 
 1075 + static void ffs_aio_cancel_worker(struct work_struct *work)
 1076 + {
 1077 + struct ffs_io_data *io_data = container_of(work, struct ffs_io_data,
 1078 + cancellation_work);
 1079 + 
 1080 + ENTER();
 1081 + 
 1082 + usb_ep_dequeue(io_data->ep, io_data->req);
 1083 + }
 1084 + 
 1076 1085 static int ffs_aio_cancel(struct kiocb *kiocb)
 1077 1086 {
 1078 1087 struct ffs_io_data *io_data = kiocb->private;
 1079 - struct ffs_epfile *epfile = kiocb->ki_filp->private_data;
 1088 + struct ffs_data *ffs = io_data->ffs;
 1080 1089 int value;
 1081 1090 
 1082 1091 ENTER();
 1083 1092 
 1084 - spin_lock_irq(&epfile->ffs->eps_lock);
 1085 - 
 1086 - if (likely(io_data && io_data->ep && io_data->req))
 1087 - value = usb_ep_dequeue(io_data->ep, io_data->req);
 1088 - else
 1093 + if (likely(io_data && io_data->ep && io_data->req)) {
 1094 + INIT_WORK(&io_data->cancellation_work, ffs_aio_cancel_worker);
 1095 + queue_work(ffs->io_completion_wq, &io_data->cancellation_work);
 1096 + value = -EINPROGRESS;
 1097 + } else {
 1089 1098 value = -EINVAL;
 1090 - 
 1091 - spin_unlock_irq(&epfile->ffs->eps_lock);
 1099 + }
 1092 1100 
 1093 1101 return value;
 1094 1102 }
+2 -2
drivers/usb/host/xhci-mem.c
··· 886 886 
 887 887 dev = xhci->devs[slot_id];
 888 888 
 889 - trace_xhci_free_virt_device(dev);
 890 - 
 891 889 xhci->dcbaa->dev_context_ptrs[slot_id] = 0;
 892 890 if (!dev)
 893 891 return;
 892 + 
 893 + trace_xhci_free_virt_device(dev);
 894 894 
 895 895 if (dev->tt_info)
 896 896 old_active_eps = dev->tt_info->active_eps;
+3 -3
drivers/usb/host/xhci-tegra.c
··· 481 481 unsigned long mask;
 482 482 unsigned int port;
 483 483 bool idle, enable;
 484 - int err;
 484 + int err = 0;
 485 485 
 486 486 memset(&rsp, 0, sizeof(rsp));
 487 487 
··· 1223 1223 pm_runtime_disable(&pdev->dev);
 1224 1224 usb_put_hcd(tegra->hcd);
 1225 1225 disable_xusbc:
 1226 - if (!&pdev->dev.pm_domain)
 1226 + if (!pdev->dev.pm_domain)
 1227 1227 tegra_powergate_power_off(TEGRA_POWERGATE_XUSBC);
 1228 1228 disable_xusba:
 1229 - if (!&pdev->dev.pm_domain)
 1229 + if (!pdev->dev.pm_domain)
 1230 1230 tegra_powergate_power_off(TEGRA_POWERGATE_XUSBA);
 1231 1231 put_padctl:
 1232 1232 tegra_xusb_padctl_put(tegra->padctl);
+31 -5
drivers/usb/host/xhci-trace.h
··· 171 171 TP_ARGS(ring, trb)
 172 172 );
 173 173 
 174 + DECLARE_EVENT_CLASS(xhci_log_free_virt_dev,
 175 + TP_PROTO(struct xhci_virt_device *vdev),
 176 + TP_ARGS(vdev),
 177 + TP_STRUCT__entry(
 178 + __field(void *, vdev)
 179 + __field(unsigned long long, out_ctx)
 180 + __field(unsigned long long, in_ctx)
 181 + __field(u8, fake_port)
 182 + __field(u8, real_port)
 183 + __field(u16, current_mel)
 184 + 
 185 + ),
 186 + TP_fast_assign(
 187 + __entry->vdev = vdev;
 188 + __entry->in_ctx = (unsigned long long) vdev->in_ctx->dma;
 189 + __entry->out_ctx = (unsigned long long) vdev->out_ctx->dma;
 190 + __entry->fake_port = (u8) vdev->fake_port;
 191 + __entry->real_port = (u8) vdev->real_port;
 192 + __entry->current_mel = (u16) vdev->current_mel;
 193 + ),
 194 + TP_printk("vdev %p ctx %llx | %llx fake_port %d real_port %d current_mel %d",
 195 + __entry->vdev, __entry->in_ctx, __entry->out_ctx,
 196 + __entry->fake_port, __entry->real_port, __entry->current_mel
 197 + )
 198 + );
 199 + 
 200 + DEFINE_EVENT(xhci_log_free_virt_dev, xhci_free_virt_device,
 201 + TP_PROTO(struct xhci_virt_device *vdev),
 202 + TP_ARGS(vdev)
 203 + );
 204 + 
 174 205 DECLARE_EVENT_CLASS(xhci_log_virt_dev,
 175 206 TP_PROTO(struct xhci_virt_device *vdev),
 176 207 TP_ARGS(vdev),
··· 235 204 );
 236 205 
 237 206 DEFINE_EVENT(xhci_log_virt_dev, xhci_alloc_virt_device,
 238 - TP_PROTO(struct xhci_virt_device *vdev),
 239 - TP_ARGS(vdev)
 240 - );
 241 - 
 242 - DEFINE_EVENT(xhci_log_virt_dev, xhci_free_virt_device,
 243 207 TP_PROTO(struct xhci_virt_device *vdev),
 244 208 TP_ARGS(vdev)
 245 209 );
+43 -4
drivers/usb/host/xhci.c
··· 908 908 spin_unlock_irqrestore(&xhci->lock, flags); 909 909 } 910 910 911 + static bool xhci_pending_portevent(struct xhci_hcd *xhci) 912 + { 913 + struct xhci_port **ports; 914 + int port_index; 915 + u32 status; 916 + u32 portsc; 917 + 918 + status = readl(&xhci->op_regs->status); 919 + if (status & STS_EINT) 920 + return true; 921 + /* 922 + * Checking STS_EINT is not enough as there is a lag between a change 923 + * bit being set and the Port Status Change Event that it generated 924 + * being written to the Event Ring. See note in xhci 1.1 section 4.19.2. 925 + */ 926 + 927 + port_index = xhci->usb2_rhub.num_ports; 928 + ports = xhci->usb2_rhub.ports; 929 + while (port_index--) { 930 + portsc = readl(ports[port_index]->addr); 931 + if (portsc & PORT_CHANGE_MASK || 932 + (portsc & PORT_PLS_MASK) == XDEV_RESUME) 933 + return true; 934 + } 935 + port_index = xhci->usb3_rhub.num_ports; 936 + ports = xhci->usb3_rhub.ports; 937 + while (port_index--) { 938 + portsc = readl(ports[port_index]->addr); 939 + if (portsc & PORT_CHANGE_MASK || 940 + (portsc & PORT_PLS_MASK) == XDEV_RESUME) 941 + return true; 942 + } 943 + return false; 944 + } 945 + 911 946 /* 912 947 * Stop HC (not bus-specific) 913 948 * ··· 1044 1009 */ 1045 1010 int xhci_resume(struct xhci_hcd *xhci, bool hibernated) 1046 1011 { 1047 - u32 command, temp = 0, status; 1012 + u32 command, temp = 0; 1048 1013 struct usb_hcd *hcd = xhci_to_hcd(xhci); 1049 1014 struct usb_hcd *secondary_hcd; 1050 1015 int retval = 0; ··· 1078 1043 command = readl(&xhci->op_regs->command); 1079 1044 command |= CMD_CRS; 1080 1045 writel(command, &xhci->op_regs->command); 1046 + /* 1047 + * Some controllers take up to 55+ ms to complete the controller 1048 + * restore so setting the timeout to 100ms. Xhci specification 1049 + * doesn't mention any timeout value. 
1050 + */ 1081 1051 if (xhci_handshake(&xhci->op_regs->status, 1082 - STS_RESTORE, 0, 10 * 1000)) { 1052 + STS_RESTORE, 0, 100 * 1000)) { 1083 1053 xhci_warn(xhci, "WARN: xHC restore state timeout\n"); 1084 1054 spin_unlock_irq(&xhci->lock); 1085 1055 return -ETIMEDOUT; ··· 1174 1134 done: 1175 1135 if (retval == 0) { 1176 1136 /* Resume root hubs only when have pending events. */ 1177 - status = readl(&xhci->op_regs->status); 1178 - if (status & STS_EINT) { 1137 + if (xhci_pending_portevent(xhci)) { 1179 1138 usb_hcd_resume_root_hub(xhci->shared_hcd); 1180 1139 usb_hcd_resume_root_hub(hcd); 1181 1140 }
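The resume path now calls `xhci_pending_portevent()` because `STS_EINT` alone can miss a port change whose event has not yet been written to the event ring (see the xhci 1.1 §4.19.2 note in the hunk). A minimal userspace sketch of the same scan; the constants below are simplified stand-ins, not the real xHCI register encoding:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for PORTSC fields; not the real xHCI encoding. */
#define SKETCH_PORT_CHANGE_MASK 0x00fe0000u /* connect/reset/link/... change bits */
#define SKETCH_PORT_PLS_MASK    (0xfu << 5) /* port link state field */
#define SKETCH_XDEV_RESUME      (0xfu << 5) /* assumed "resume" link state value */

/* True if any root-hub port has a pending change or is signalling resume. */
static bool pending_portevent(const uint32_t *portsc, int nports)
{
    while (nports--) {
        uint32_t v = portsc[nports];

        if ((v & SKETCH_PORT_CHANGE_MASK) ||
            (v & SKETCH_PORT_PLS_MASK) == SKETCH_XDEV_RESUME)
            return true;
    }
    return false;
}
```

The kernel version runs this loop twice, once per root hub (USB2 and USB3), reading each port's PORTSC through `ports[i]->addr`.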
+4
drivers/usb/host/xhci.h
··· 382 382 #define PORT_PLC (1 << 22) 383 383 /* port configure error change - port failed to configure its link partner */ 384 384 #define PORT_CEC (1 << 23) 385 + #define PORT_CHANGE_MASK (PORT_CSC | PORT_PEC | PORT_WRC | PORT_OCC | \ 386 + PORT_RC | PORT_PLC | PORT_CEC) 387 + 388 + 385 389 /* Cold Attach Status - xHC can set this bit to report device attached during 386 390 * Sx state. Warm port reset should be perfomed to clear this bit and move port 387 391 * to connected state.
+14
drivers/usb/serial/cp210x.c
··· 95 95 { USB_DEVICE(0x10C4, 0x8156) }, /* B&G H3000 link cable */ 96 96 { USB_DEVICE(0x10C4, 0x815E) }, /* Helicomm IP-Link 1220-DVM */ 97 97 { USB_DEVICE(0x10C4, 0x815F) }, /* Timewave HamLinkUSB */ 98 + { USB_DEVICE(0x10C4, 0x817C) }, /* CESINEL MEDCAL N Power Quality Monitor */ 99 + { USB_DEVICE(0x10C4, 0x817D) }, /* CESINEL MEDCAL NT Power Quality Monitor */ 100 + { USB_DEVICE(0x10C4, 0x817E) }, /* CESINEL MEDCAL S Power Quality Monitor */ 98 101 { USB_DEVICE(0x10C4, 0x818B) }, /* AVIT Research USB to TTL */ 99 102 { USB_DEVICE(0x10C4, 0x819F) }, /* MJS USB Toslink Switcher */ 100 103 { USB_DEVICE(0x10C4, 0x81A6) }, /* ThinkOptics WavIt */ ··· 115 112 { USB_DEVICE(0x10C4, 0x826B) }, /* Cygnal Integrated Products, Inc., Fasttrax GPS demonstration module */ 116 113 { USB_DEVICE(0x10C4, 0x8281) }, /* Nanotec Plug & Drive */ 117 114 { USB_DEVICE(0x10C4, 0x8293) }, /* Telegesis ETRX2USB */ 115 + { USB_DEVICE(0x10C4, 0x82EF) }, /* CESINEL FALCO 6105 AC Power Supply */ 116 + { USB_DEVICE(0x10C4, 0x82F1) }, /* CESINEL MEDCAL EFD Earth Fault Detector */ 117 + { USB_DEVICE(0x10C4, 0x82F2) }, /* CESINEL MEDCAL ST Network Analyzer */ 118 118 { USB_DEVICE(0x10C4, 0x82F4) }, /* Starizona MicroTouch */ 119 119 { USB_DEVICE(0x10C4, 0x82F9) }, /* Procyon AVS */ 120 120 { USB_DEVICE(0x10C4, 0x8341) }, /* Siemens MC35PU GPRS Modem */ ··· 130 124 { USB_DEVICE(0x10C4, 0x8470) }, /* Juniper Networks BX Series System Console */ 131 125 { USB_DEVICE(0x10C4, 0x8477) }, /* Balluff RFID */ 132 126 { USB_DEVICE(0x10C4, 0x84B6) }, /* Starizona Hyperion */ 127 + { USB_DEVICE(0x10C4, 0x851E) }, /* CESINEL MEDCAL PT Network Analyzer */ 133 128 { USB_DEVICE(0x10C4, 0x85A7) }, /* LifeScan OneTouch Verio IQ */ 129 + { USB_DEVICE(0x10C4, 0x85B8) }, /* CESINEL ReCon T Energy Logger */ 134 130 { USB_DEVICE(0x10C4, 0x85EA) }, /* AC-Services IBUS-IF */ 135 131 { USB_DEVICE(0x10C4, 0x85EB) }, /* AC-Services CIS-IBUS */ 136 132 { USB_DEVICE(0x10C4, 0x85F8) }, /* Virtenio Preon32 */ ··· 142 134 { USB_DEVICE(0x10C4, 0x8857) }, /* CEL EM357 ZigBee USB Stick */ 143 135 { USB_DEVICE(0x10C4, 0x88A4) }, /* MMB Networks ZigBee USB Device */ 144 136 { USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */ 137 + { USB_DEVICE(0x10C4, 0x88FB) }, /* CESINEL MEDCAL STII Network Analyzer */ 138 + { USB_DEVICE(0x10C4, 0x8938) }, /* CESINEL MEDCAL S II Network Analyzer */ 145 139 { USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */ 146 140 { USB_DEVICE(0x10C4, 0x8962) }, /* Brim Brothers charging dock */ 147 141 { USB_DEVICE(0x10C4, 0x8977) }, /* CEL MeshWorks DevKit Device */ 148 142 { USB_DEVICE(0x10C4, 0x8998) }, /* KCF Technologies PRN */ 143 + { USB_DEVICE(0x10C4, 0x89A4) }, /* CESINEL FTBC Flexible Thyristor Bridge Controller */ 149 144 { USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */ 150 145 { USB_DEVICE(0x10C4, 0x8A5E) }, /* CEL EM3588 ZigBee USB Stick Long Range */ 151 146 { USB_DEVICE(0x10C4, 0x8B34) }, /* Qivicon ZigBee USB Radio Stick */ 152 147 { USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */ 153 148 { USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */ 149 + { USB_DEVICE(0x10C4, 0xEA63) }, /* Silicon Labs Windows Update (CP2101-4/CP2102N) */ 154 150 { USB_DEVICE(0x10C4, 0xEA70) }, /* Silicon Labs factory default */ 155 151 { USB_DEVICE(0x10C4, 0xEA71) }, /* Infinity GPS-MIC-1 Radio Monophone */ 152 + { USB_DEVICE(0x10C4, 0xEA7A) }, /* Silicon Labs Windows Update (CP2105) */ 153 + { USB_DEVICE(0x10C4, 0xEA7B) }, /* Silicon Labs Windows Update (CP2108) */ 156 154 { USB_DEVICE(0x10C4, 0xF001) }, /* Elan Digital Systems USBscope50 */ 157 155 { USB_DEVICE(0x10C4, 0xF002) }, /* Elan Digital Systems USBwave12 */ 158 156 { USB_DEVICE(0x10C4, 0xF003) }, /* Elan Digital Systems USBpulse100 */
+6 -4
drivers/usb/typec/tcpm.c
··· 418 418 u64 ts_nsec = local_clock(); 419 419 unsigned long rem_nsec; 420 420 421 + mutex_lock(&port->logbuffer_lock); 421 422 if (!port->logbuffer[port->logbuffer_head]) { 422 423 port->logbuffer[port->logbuffer_head] = 423 424 kzalloc(LOG_BUFFER_ENTRY_SIZE, GFP_KERNEL); 424 - if (!port->logbuffer[port->logbuffer_head]) 425 + if (!port->logbuffer[port->logbuffer_head]) { 426 + mutex_unlock(&port->logbuffer_lock); 425 427 return; 428 + } 426 429 } 427 430 428 431 vsnprintf(tmpbuffer, sizeof(tmpbuffer), fmt, args); 429 - 430 - mutex_lock(&port->logbuffer_lock); 431 432 432 433 if (tcpm_log_full(port)) { 433 434 port->logbuffer_head = max(port->logbuffer_head - 1, 0); ··· 3044 3043 tcpm_port_is_sink(port) && 3045 3044 time_is_after_jiffies(port->delayed_runtime)) { 3046 3045 tcpm_set_state(port, SNK_DISCOVERY, 3047 - port->delayed_runtime - jiffies); 3046 + jiffies_to_msecs(port->delayed_runtime - 3047 + jiffies)); 3048 3048 break; 3049 3049 } 3050 3050 tcpm_set_state(port, unattached_state(port), 0);
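The second tcpm hunk fixes a unit mismatch: `tcpm_set_state()` takes its delay in milliseconds, but `port->delayed_runtime - jiffies` is a jiffies delta, so the wait was off by a factor of `1000/HZ`. A toy version of the conversion, assuming a fixed tick rate that divides 1000 evenly (the real `jiffies_to_msecs()` handles every HZ configuration):

```c
#include <assert.h>

#define SKETCH_HZ 250 /* assumed tick rate; real kernels use 100..1000 */

/* Toy jiffies_to_msecs() for a tick rate that divides 1000 evenly. */
static unsigned int toy_jiffies_to_msecs(unsigned long j)
{
    return (unsigned int)(j * (1000 / SKETCH_HZ));
}
```

With HZ=250 a raw jiffies value passed where milliseconds are expected would cut the intended delay to a quarter, which is why the delta has to be converted before being handed to the state machine.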
+13
drivers/usb/typec/ucsi/ucsi.c
··· 350 350 } 351 351 352 352 if (con->status.change & UCSI_CONSTAT_CONNECT_CHANGE) { 353 + typec_set_pwr_role(con->port, con->status.pwr_dir); 354 + 355 + switch (con->status.partner_type) { 356 + case UCSI_CONSTAT_PARTNER_TYPE_UFP: 357 + typec_set_data_role(con->port, TYPEC_HOST); 358 + break; 359 + case UCSI_CONSTAT_PARTNER_TYPE_DFP: 360 + typec_set_data_role(con->port, TYPEC_DEVICE); 361 + break; 362 + default: 363 + break; 364 + } 365 + 353 366 if (con->status.connected) 354 367 ucsi_register_partner(con); 355 368 else
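The UCSI hunk derives the local data role from the reported partner type: a UFP partner means this end acts as host, a DFP partner means it acts as device, anything else leaves the role untouched. That mapping as a pure function; the enum values here are illustrative, not the UCSI wire encoding:

```c
#include <assert.h>

/* Illustrative enums; not the UCSI wire encoding. */
enum partner_type { PARTNER_UFP, PARTNER_DFP, PARTNER_OTHER };
enum data_role    { ROLE_UNCHANGED, ROLE_HOST, ROLE_DEVICE };

/* A UFP partner makes this end the host; a DFP partner makes it the device. */
static enum data_role data_role_for(enum partner_type p)
{
    switch (p) {
    case PARTNER_UFP:
        return ROLE_HOST;
    case PARTNER_DFP:
        return ROLE_DEVICE;
    default:
        return ROLE_UNCHANGED;
    }
}
```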
+5
drivers/usb/typec/ucsi/ucsi_acpi.c
··· 79 79 return -ENODEV; 80 80 } 81 81 82 + /* This will make sure we can use ioremap_nocache() */ 83 + status = acpi_release_memory(ACPI_HANDLE(&pdev->dev), res, 1); 84 + if (ACPI_FAILURE(status)) 85 + return -ENOMEM; 86 + 82 87 /* 83 88 * NOTE: The memory region for the data structures is used also in an 84 89 * operation region, which means ACPI has already reserved it. Therefore
+2 -1
drivers/vhost/net.c
··· 1226 1226 if (ubufs) 1227 1227 vhost_net_ubuf_put_wait_and_free(ubufs); 1228 1228 err_ubufs: 1229 - sockfd_put(sock); 1229 + if (sock) 1230 + sockfd_put(sock); 1230 1231 err_vq: 1231 1232 mutex_unlock(&vq->mutex); 1232 1233 err:
+1 -147
fs/aio.c
··· 5 5 * Implements an efficient asynchronous io interface. 6 6 * 7 7 * Copyright 2000, 2001, 2002 Red Hat, Inc. All Rights Reserved. 8 - * Copyright 2018 Christoph Hellwig. 9 8 * 10 9 * See ../COPYING for licensing terms. 11 10 */ ··· 164 165 bool datasync; 165 166 }; 166 167 167 - struct poll_iocb { 168 - struct file *file; 169 - __poll_t events; 170 - struct wait_queue_head *head; 171 - 172 - union { 173 - struct wait_queue_entry wait; 174 - struct work_struct work; 175 - }; 176 - }; 177 - 178 168 struct aio_kiocb { 179 169 union { 180 170 struct kiocb rw; 181 171 struct fsync_iocb fsync; 182 - struct poll_iocb poll; 183 172 }; 184 173 185 174 struct kioctx *ki_ctx; ··· 1577 1590 if (unlikely(iocb->aio_buf || iocb->aio_offset || iocb->aio_nbytes || 1578 1591 iocb->aio_rw_flags)) 1579 1592 return -EINVAL; 1593 + 1580 1594 req->file = fget(iocb->aio_fildes); 1581 1595 if (unlikely(!req->file)) 1582 1596 return -EBADF; ··· 1590 1602 INIT_WORK(&req->work, aio_fsync_work); 1591 1603 schedule_work(&req->work); 1592 1604 return 0; 1593 - } 1594 - 1595 - /* need to use list_del_init so we can check if item was present */ 1596 - static inline bool __aio_poll_remove(struct poll_iocb *req) 1597 - { 1598 - if (list_empty(&req->wait.entry)) 1599 - return false; 1600 - list_del_init(&req->wait.entry); 1601 - return true; 1602 - } 1603 - 1604 - static inline void __aio_poll_complete(struct aio_kiocb *iocb, __poll_t mask) 1605 - { 1606 - fput(iocb->poll.file); 1607 - aio_complete(iocb, mangle_poll(mask), 0); 1608 - } 1609 - 1610 - static void aio_poll_work(struct work_struct *work) 1611 - { 1612 - struct aio_kiocb *iocb = container_of(work, struct aio_kiocb, poll.work); 1613 - 1614 - if (!list_empty_careful(&iocb->ki_list)) 1615 - aio_remove_iocb(iocb); 1616 - __aio_poll_complete(iocb, iocb->poll.events); 1617 - } 1618 - 1619 - static int aio_poll_cancel(struct kiocb *iocb) 1620 - { 1621 - struct aio_kiocb *aiocb = container_of(iocb, struct aio_kiocb, rw); 1622 - struct 
poll_iocb *req = &aiocb->poll; 1623 - struct wait_queue_head *head = req->head; 1624 - bool found = false; 1625 - 1626 - spin_lock(&head->lock); 1627 - found = __aio_poll_remove(req); 1628 - spin_unlock(&head->lock); 1629 - 1630 - if (found) { 1631 - req->events = 0; 1632 - INIT_WORK(&req->work, aio_poll_work); 1633 - schedule_work(&req->work); 1634 - } 1635 - return 0; 1636 - } 1637 - 1638 - static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync, 1639 - void *key) 1640 - { 1641 - struct poll_iocb *req = container_of(wait, struct poll_iocb, wait); 1642 - struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll); 1643 - struct file *file = req->file; 1644 - __poll_t mask = key_to_poll(key); 1645 - 1646 - assert_spin_locked(&req->head->lock); 1647 - 1648 - /* for instances that support it check for an event match first: */ 1649 - if (mask && !(mask & req->events)) 1650 - return 0; 1651 - 1652 - mask = file->f_op->poll_mask(file, req->events) & req->events; 1653 - if (!mask) 1654 - return 0; 1655 - 1656 - __aio_poll_remove(req); 1657 - 1658 - /* 1659 - * Try completing without a context switch if we can acquire ctx_lock 1660 - * without spinning. Otherwise we need to defer to a workqueue to 1661 - * avoid a deadlock due to the lock order. 1662 - */ 1663 - if (spin_trylock(&iocb->ki_ctx->ctx_lock)) { 1664 - list_del_init(&iocb->ki_list); 1665 - spin_unlock(&iocb->ki_ctx->ctx_lock); 1666 - 1667 - __aio_poll_complete(iocb, mask); 1668 - } else { 1669 - req->events = mask; 1670 - INIT_WORK(&req->work, aio_poll_work); 1671 - schedule_work(&req->work); 1672 - } 1673 - 1674 - return 1; 1675 - } 1676 - 1677 - static ssize_t aio_poll(struct aio_kiocb *aiocb, struct iocb *iocb) 1678 - { 1679 - struct kioctx *ctx = aiocb->ki_ctx; 1680 - struct poll_iocb *req = &aiocb->poll; 1681 - __poll_t mask; 1682 - 1683 - /* reject any unknown events outside the normal event mask. 
*/ 1684 - if ((u16)iocb->aio_buf != iocb->aio_buf) 1685 - return -EINVAL; 1686 - /* reject fields that are not defined for poll */ 1687 - if (iocb->aio_offset || iocb->aio_nbytes || iocb->aio_rw_flags) 1688 - return -EINVAL; 1689 - 1690 - req->events = demangle_poll(iocb->aio_buf) | EPOLLERR | EPOLLHUP; 1691 - req->file = fget(iocb->aio_fildes); 1692 - if (unlikely(!req->file)) 1693 - return -EBADF; 1694 - if (!file_has_poll_mask(req->file)) 1695 - goto out_fail; 1696 - 1697 - req->head = req->file->f_op->get_poll_head(req->file, req->events); 1698 - if (!req->head) 1699 - goto out_fail; 1700 - if (IS_ERR(req->head)) { 1701 - mask = EPOLLERR; 1702 - goto done; 1703 - } 1704 - 1705 - init_waitqueue_func_entry(&req->wait, aio_poll_wake); 1706 - aiocb->ki_cancel = aio_poll_cancel; 1707 - 1708 - spin_lock_irq(&ctx->ctx_lock); 1709 - spin_lock(&req->head->lock); 1710 - mask = req->file->f_op->poll_mask(req->file, req->events) & req->events; 1711 - if (!mask) { 1712 - __add_wait_queue(req->head, &req->wait); 1713 - list_add_tail(&aiocb->ki_list, &ctx->active_reqs); 1714 - } 1715 - spin_unlock(&req->head->lock); 1716 - spin_unlock_irq(&ctx->ctx_lock); 1717 - done: 1718 - if (mask) 1719 - __aio_poll_complete(aiocb, mask); 1720 - return 0; 1721 - out_fail: 1722 - fput(req->file); 1723 - return -EINVAL; /* same as no support for IOCB_CMD_POLL */ 1724 1605 } 1725 1606 1726 1607 static int io_submit_one(struct kioctx *ctx, struct iocb __user *user_iocb, ··· 1664 1807 break; 1665 1808 case IOCB_CMD_FDSYNC: 1666 1809 ret = aio_fsync(&req->fsync, &iocb, true); 1667 - break; 1668 - case IOCB_CMD_POLL: 1669 - ret = aio_poll(req, &iocb); 1670 1810 break; 1671 1811 default: 1672 1812 pr_debug("invalid aio operation %d\n", iocb.aio_lio_opcode);
+4 -1
fs/btrfs/extent_io.c
··· 4542 4542 offset_in_extent = em_start - em->start; 4543 4543 em_end = extent_map_end(em); 4544 4544 em_len = em_end - em_start; 4545 - disko = em->block_start + offset_in_extent; 4546 4545 flags = 0; 4546 + if (em->block_start < EXTENT_MAP_LAST_BYTE) 4547 + disko = em->block_start + offset_in_extent; 4548 + else 4549 + disko = 0; 4547 4550 4548 4551 /* 4549 4552 * bump off for our next call to get_extent
+5 -2
fs/btrfs/inode.c
··· 9005 9005 9006 9006 unlock_extent_cached(io_tree, page_start, page_end, &cached_state); 9007 9007 9008 - out_unlock: 9009 9008 if (!ret2) { 9010 9009 btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE, true); 9011 9010 sb_end_pagefault(inode->i_sb); 9012 9011 extent_changeset_free(data_reserved); 9013 9012 return VM_FAULT_LOCKED; 9014 9013 } 9014 + 9015 + out_unlock: 9015 9016 unlock_page(page); 9016 9017 out: 9017 9018 btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE, (ret != 0)); ··· 9444 9443 u64 new_idx = 0; 9445 9444 u64 root_objectid; 9446 9445 int ret; 9446 + int ret2; 9447 9447 bool root_log_pinned = false; 9448 9448 bool dest_log_pinned = false; 9449 9449 ··· 9641 9639 dest_log_pinned = false; 9642 9640 } 9643 9641 } 9644 - ret = btrfs_end_transaction(trans); 9642 + ret2 = btrfs_end_transaction(trans); 9643 + ret = ret ? ret : ret2; 9645 9644 out_notrans: 9646 9645 if (new_ino == BTRFS_FIRST_FREE_OBJECTID) 9647 9646 up_read(&fs_info->subvol_sem);
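Switching to `ret2` in the rename-exchange path keeps a failing `btrfs_end_transaction()` from clobbering an earlier, more specific error. The idiom in isolation:

```c
#include <assert.h>

/* Run a cleanup step's result through this so it cannot overwrite
 * the first error recorded on the main path. */
static int keep_first_error(int ret, int cleanup_ret)
{
    return ret ? ret : cleanup_ret;
}
```

This matches the `ret = ret ? ret : ret2;` line in the hunk: the cleanup error is only reported when the main path itself succeeded.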
+5 -5
fs/btrfs/ioctl.c
··· 3577 3577 ret = btrfs_extent_same_range(src, loff, BTRFS_MAX_DEDUPE_LEN, 3578 3578 dst, dst_loff, &cmp); 3579 3579 if (ret) 3580 - goto out_unlock; 3580 + goto out_free; 3581 3581 3582 3582 loff += BTRFS_MAX_DEDUPE_LEN; 3583 3583 dst_loff += BTRFS_MAX_DEDUPE_LEN; ··· 3587 3587 ret = btrfs_extent_same_range(src, loff, tail_len, dst, 3588 3588 dst_loff, &cmp); 3589 3589 3590 + out_free: 3591 + kvfree(cmp.src_pages); 3592 + kvfree(cmp.dst_pages); 3593 + 3590 3594 out_unlock: 3591 3595 if (same_inode) 3592 3596 inode_unlock(src); 3593 3597 else 3594 3598 btrfs_double_inode_unlock(src, dst); 3595 - 3596 - out_free: 3597 - kvfree(cmp.src_pages); 3598 - kvfree(cmp.dst_pages); 3599 3599 3600 3600 return ret; 3601 3601 }
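The dedupe hunk reorders the cleanup labels so the page arrays are freed before the inode locks are dropped, and the early failure jumps to `out_free` instead of skipping it. The general goto-cleanup shape, as a self-contained sketch with the resources replaced by counters:

```c
#include <assert.h>

static int locks_held, bufs_held;

/* Hypothetical op: take a lock, then a buffer; unwind in reverse order. */
static int do_op(int fail_early, int fail_work)
{
    int ret = 0;

    locks_held++;               /* acquire lock */
    if (fail_early) {
        ret = -1;
        goto out_unlock;        /* buffer never allocated: skip its free */
    }

    bufs_held++;                /* allocate buffer */
    if (fail_work) {
        ret = -2;
        goto out_free;          /* free buffer, then fall into unlock */
    }

out_free:
    bufs_held--;
out_unlock:
    locks_held--;
    return ret;
}
```

Labels sit in reverse order of acquisition, so every exit falls through exactly the cleanups for resources it actually holds; jumping to the wrong label, as before the fix, leaks whatever sits between the two.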
+13 -4
fs/btrfs/qgroup.c
··· 2680 2680 free_extent_buffer(scratch_leaf); 2681 2681 } 2682 2682 2683 - if (done && !ret) 2683 + if (done && !ret) { 2684 2684 ret = 1; 2685 + fs_info->qgroup_rescan_progress.objectid = (u64)-1; 2686 + } 2685 2687 return ret; 2686 2688 } 2687 2689 ··· 2786 2784 2787 2785 if (!init_flags) { 2788 2786 /* we're resuming qgroup rescan at mount time */ 2789 - if (!(fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN)) 2787 + if (!(fs_info->qgroup_flags & 2788 + BTRFS_QGROUP_STATUS_FLAG_RESCAN)) { 2790 2789 btrfs_warn(fs_info, 2791 2790 "qgroup rescan init failed, qgroup is not enabled"); 2792 - else if (!(fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_ON)) 2791 + ret = -EINVAL; 2792 + } else if (!(fs_info->qgroup_flags & 2793 + BTRFS_QGROUP_STATUS_FLAG_ON)) { 2793 2794 btrfs_warn(fs_info, 2794 2795 "qgroup rescan init failed, qgroup rescan is not queued"); 2795 - return -EINVAL; 2796 + ret = -EINVAL; 2797 + } 2798 + 2799 + if (ret) 2800 + return ret; 2796 2801 } 2797 2802 2798 2803 mutex_lock(&fs_info->qgroup_rescan_lock);
+1
fs/ceph/inode.c
··· 1135 1135 if (IS_ERR(realdn)) { 1136 1136 pr_err("splice_dentry error %ld %p inode %p ino %llx.%llx\n", 1137 1137 PTR_ERR(realdn), dn, in, ceph_vinop(in)); 1138 + dput(dn); 1138 1139 dn = realdn; /* note realdn contains the error */ 1139 1140 goto out; 1140 1141 } else if (realdn) {
+6 -13
fs/eventfd.c
··· 101 101 return 0; 102 102 } 103 103 104 - static struct wait_queue_head * 105 - eventfd_get_poll_head(struct file *file, __poll_t events) 106 - { 107 - struct eventfd_ctx *ctx = file->private_data; 108 - 109 - return &ctx->wqh; 110 - } 111 - 112 - static __poll_t eventfd_poll_mask(struct file *file, __poll_t eventmask) 104 + static __poll_t eventfd_poll(struct file *file, poll_table *wait) 113 105 { 114 106 struct eventfd_ctx *ctx = file->private_data; 115 107 __poll_t events = 0; 116 108 u64 count; 109 + 110 + poll_wait(file, &ctx->wqh, wait); 117 111 118 112 /* 119 113 * All writes to ctx->count occur within ctx->wqh.lock. This read ··· 150 156 count = READ_ONCE(ctx->count); 151 157 152 158 if (count > 0) 153 - events |= (EPOLLIN & eventmask); 159 + events |= EPOLLIN; 154 160 if (count == ULLONG_MAX) 155 161 events |= EPOLLERR; 156 162 if (ULLONG_MAX - 1 > count) 157 - events |= (EPOLLOUT & eventmask); 163 + events |= EPOLLOUT; 158 164 159 165 return events; 160 166 } ··· 305 311 .show_fdinfo = eventfd_show_fdinfo, 306 312 #endif 307 313 .release = eventfd_release, 308 - .get_poll_head = eventfd_get_poll_head, 309 - .poll_mask = eventfd_poll_mask, 314 + .poll = eventfd_poll, 310 315 .read = eventfd_read, 311 316 .write = eventfd_write, 312 317 .llseek = noop_llseek,
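The eventfd revert goes back to a single `.poll` method, but the mask computation itself is unchanged: readable when the count is nonzero, error at saturation, writable while at least one unit of headroom remains. The same logic as a pure function (the EPOLL bit values are hardcoded stand-ins for the uapi constants):

```c
#include <assert.h>
#include <limits.h>

/* Stand-ins for the uapi EPOLL* bits. */
#define SK_EPOLLIN  0x001u
#define SK_EPOLLOUT 0x004u
#define SK_EPOLLERR 0x008u

/* Event mask for an eventfd-style counter, mirroring eventfd_poll(). */
static unsigned int eventfd_mask(unsigned long long count)
{
    unsigned int events = 0;

    if (count > 0)
        events |= SK_EPOLLIN;
    if (count == ULLONG_MAX)
        events |= SK_EPOLLERR;
    if (ULLONG_MAX - 1 > count)
        events |= SK_EPOLLOUT;
    return events;
}
```

What the revert changes is only how the caller learns the wait queue: `.poll` registers it itself via `poll_wait()` instead of exposing it through a separate `get_poll_head` hook.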
+5 -10
fs/eventpoll.c
··· 922 922 return 0; 923 923 } 924 924 925 - static struct wait_queue_head *ep_eventpoll_get_poll_head(struct file *file, 926 - __poll_t eventmask) 927 - { 928 - struct eventpoll *ep = file->private_data; 929 - return &ep->poll_wait; 930 - } 931 - 932 - static __poll_t ep_eventpoll_poll_mask(struct file *file, __poll_t eventmask) 925 + static __poll_t ep_eventpoll_poll(struct file *file, poll_table *wait) 933 926 { 934 927 struct eventpoll *ep = file->private_data; 935 928 int depth = 0; 929 + 930 + /* Insert inside our poll wait queue */ 931 + poll_wait(file, &ep->poll_wait, wait); 936 932 937 933 /* 938 934 * Proceed to find out if wanted events are really available inside ··· 968 972 .show_fdinfo = ep_show_fdinfo, 969 973 #endif 970 974 .release = ep_eventpoll_release, 971 - .get_poll_head = ep_eventpoll_get_poll_head, 972 - .poll_mask = ep_eventpoll_poll_mask, 975 + .poll = ep_eventpoll_poll, 973 976 .llseek = noop_llseek, 974 977 }; 975 978
+9 -13
fs/pipe.c
··· 509 509 } 510 510 } 511 511 512 - static struct wait_queue_head * 513 - pipe_get_poll_head(struct file *filp, __poll_t events) 514 - { 515 - struct pipe_inode_info *pipe = filp->private_data; 516 - 517 - return &pipe->wait; 518 - } 519 - 520 512 /* No kernel lock held - fine */ 521 - static __poll_t pipe_poll_mask(struct file *filp, __poll_t events) 513 + static __poll_t 514 + pipe_poll(struct file *filp, poll_table *wait) 522 515 { 516 + __poll_t mask; 523 517 struct pipe_inode_info *pipe = filp->private_data; 524 - int nrbufs = pipe->nrbufs; 525 - __poll_t mask = 0; 518 + int nrbufs; 519 + 520 + poll_wait(filp, &pipe->wait, wait); 526 521 527 522 /* Reading only -- no need for acquiring the semaphore. */ 523 + nrbufs = pipe->nrbufs; 524 + mask = 0; 528 525 if (filp->f_mode & FMODE_READ) { 529 526 mask = (nrbufs > 0) ? EPOLLIN | EPOLLRDNORM : 0; 530 527 if (!pipe->writers && filp->f_version != pipe->w_counter) ··· 1020 1023 .llseek = no_llseek, 1021 1024 .read_iter = pipe_read, 1022 1025 .write_iter = pipe_write, 1023 - .get_poll_head = pipe_get_poll_head, 1024 - .poll_mask = pipe_poll_mask, 1026 + .poll = pipe_poll, 1025 1027 .unlocked_ioctl = pipe_ioctl, 1026 1028 .release = pipe_release, 1027 1029 .fasync = pipe_fasync,
+10 -1
fs/proc/generic.c
··· 564 564 return seq_open(file, de->seq_ops); 565 565 } 566 566 567 + static int proc_seq_release(struct inode *inode, struct file *file) 568 + { 569 + struct proc_dir_entry *de = PDE(inode); 570 + 571 + if (de->state_size) 572 + return seq_release_private(inode, file); 573 + return seq_release(inode, file); 574 + } 575 + 567 576 static const struct file_operations proc_seq_fops = { 568 577 .open = proc_seq_open, 569 578 .read = seq_read, 570 579 .llseek = seq_lseek, 571 - .release = seq_release, 580 + .release = proc_seq_release, 572 581 }; 573 582 574 583 struct proc_dir_entry *proc_create_seq_private(const char *name, umode_t mode,
-23
fs/select.c
··· 34 34 35 35 #include <linux/uaccess.h> 36 36 37 - __poll_t vfs_poll(struct file *file, struct poll_table_struct *pt) 38 - { 39 - if (file->f_op->poll) { 40 - return file->f_op->poll(file, pt); 41 - } else if (file_has_poll_mask(file)) { 42 - unsigned int events = poll_requested_events(pt); 43 - struct wait_queue_head *head; 44 - 45 - if (pt && pt->_qproc) { 46 - head = file->f_op->get_poll_head(file, events); 47 - if (!head) 48 - return DEFAULT_POLLMASK; 49 - if (IS_ERR(head)) 50 - return EPOLLERR; 51 - pt->_qproc(file, head, pt); 52 - } 53 - 54 - return file->f_op->poll_mask(file, events); 55 - } else { 56 - return DEFAULT_POLLMASK; 57 - } 58 - } 59 - EXPORT_SYMBOL_GPL(vfs_poll); 60 37 61 38 /* 62 39 * Estimate expected accuracy in ns from a timeval.
+11 -11
fs/timerfd.c
··· 226 226 kfree_rcu(ctx, rcu); 227 227 return 0; 228 228 } 229 - 230 - static struct wait_queue_head *timerfd_get_poll_head(struct file *file, 231 - __poll_t eventmask) 229 + 230 + static __poll_t timerfd_poll(struct file *file, poll_table *wait) 232 231 { 233 232 struct timerfd_ctx *ctx = file->private_data; 233 + __poll_t events = 0; 234 + unsigned long flags; 234 235 235 - return &ctx->wqh; 236 - } 236 + poll_wait(file, &ctx->wqh, wait); 237 237 238 - static __poll_t timerfd_poll_mask(struct file *file, __poll_t eventmask) 239 - { 240 - struct timerfd_ctx *ctx = file->private_data; 238 + spin_lock_irqsave(&ctx->wqh.lock, flags); 239 + if (ctx->ticks) 240 + events |= EPOLLIN; 241 + spin_unlock_irqrestore(&ctx->wqh.lock, flags); 241 242 242 - return ctx->ticks ? EPOLLIN : 0; 243 + return events; 243 244 } 244 245 245 246 static ssize_t timerfd_read(struct file *file, char __user *buf, size_t count, ··· 364 363 365 364 static const struct file_operations timerfd_fops = { 366 365 .release = timerfd_release, 367 - .get_poll_head = timerfd_get_poll_head, 368 - .poll_mask = timerfd_poll_mask, 366 + .poll = timerfd_poll, 369 367 .read = timerfd_read, 370 368 .llseek = noop_llseek, 371 369 .show_fdinfo = timerfd_show,
+27 -4
fs/xfs/libxfs/xfs_ag_resv.c
··· 157 157 error = xfs_mod_fdblocks(pag->pag_mount, oldresv, true); 158 158 resv->ar_reserved = 0; 159 159 resv->ar_asked = 0; 160 + resv->ar_orig_reserved = 0; 160 161 161 162 if (error) 162 163 trace_xfs_ag_resv_free_error(pag->pag_mount, pag->pag_agno, ··· 190 189 struct xfs_mount *mp = pag->pag_mount; 191 190 struct xfs_ag_resv *resv; 192 191 int error; 193 - xfs_extlen_t reserved; 192 + xfs_extlen_t hidden_space; 194 193 195 194 if (used > ask) 196 195 ask = used; 197 - reserved = ask - used; 198 196 199 - error = xfs_mod_fdblocks(mp, -(int64_t)reserved, true); 197 + switch (type) { 198 + case XFS_AG_RESV_RMAPBT: 199 + /* 200 + * Space taken by the rmapbt is not subtracted from fdblocks 201 + * because the rmapbt lives in the free space. Here we must 202 + * subtract the entire reservation from fdblocks so that we 203 + * always have blocks available for rmapbt expansion. 204 + */ 205 + hidden_space = ask; 206 + break; 207 + case XFS_AG_RESV_METADATA: 208 + /* 209 + * Space taken by all other metadata btrees are accounted 210 + * on-disk as used space. We therefore only hide the space 211 + * that is reserved but not used by the trees. 212 + */ 213 + hidden_space = ask - used; 214 + break; 215 + default: 216 + ASSERT(0); 217 + return -EINVAL; 218 + } 219 + error = xfs_mod_fdblocks(mp, -(int64_t)hidden_space, true); 200 220 if (error) { 201 221 trace_xfs_ag_resv_init_error(pag->pag_mount, pag->pag_agno, 202 222 error, _RET_IP_); ··· 238 216 239 217 resv = xfs_perag_resv(pag, type); 240 218 resv->ar_asked = ask; 241 - resv->ar_reserved = resv->ar_orig_reserved = reserved; 219 + resv->ar_orig_reserved = hidden_space; 220 + resv->ar_reserved = ask - used; 242 221 243 222 trace_xfs_ag_resv_init(pag, type, ask); 244 223 return 0;
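The AG reservation rework hides a different amount from `fdblocks` per reservation type: the rmapbt lives in free space, so its whole ask is hidden, while other metadata btrees are already accounted on-disk as used space, so only the reserved-but-unused remainder is. A pure sketch of that decision:

```c
#include <assert.h>
#include <stdint.h>

enum resv_type { RESV_RMAPBT, RESV_METADATA };

/* Blocks to hide from the free-space counter, per the new logic. */
static int64_t hidden_space(enum resv_type type, uint64_t ask, uint64_t used)
{
    if (used > ask)                    /* mirror the "ask = used" clamp */
        ask = used;

    switch (type) {
    case RESV_RMAPBT:
        return (int64_t)ask;           /* rmapbt lives in free space */
    case RESV_METADATA:
        return (int64_t)(ask - used);  /* trees already counted as used */
    }
    return -1;                         /* unknown type: error */
}
```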
+26
fs/xfs/libxfs/xfs_bmap.c
··· 5780 5780 return error; 5781 5781 } 5782 5782 5783 + /* Make sure we won't be right-shifting an extent past the maximum bound. */ 5784 + int 5785 + xfs_bmap_can_insert_extents( 5786 + struct xfs_inode *ip, 5787 + xfs_fileoff_t off, 5788 + xfs_fileoff_t shift) 5789 + { 5790 + struct xfs_bmbt_irec got; 5791 + int is_empty; 5792 + int error = 0; 5793 + 5794 + ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL)); 5795 + 5796 + if (XFS_FORCED_SHUTDOWN(ip->i_mount)) 5797 + return -EIO; 5798 + 5799 + xfs_ilock(ip, XFS_ILOCK_EXCL); 5800 + error = xfs_bmap_last_extent(NULL, ip, XFS_DATA_FORK, &got, &is_empty); 5801 + if (!error && !is_empty && got.br_startoff >= off && 5802 + ((got.br_startoff + shift) & BMBT_STARTOFF_MASK) < got.br_startoff) 5803 + error = -EINVAL; 5804 + xfs_iunlock(ip, XFS_ILOCK_EXCL); 5805 + 5806 + return error; 5807 + } 5808 + 5783 5809 int 5784 5810 xfs_bmap_insert_extents( 5785 5811 struct xfs_trans *tp,
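`xfs_bmap_can_insert_extents()` rejects a right shift that would wrap the 54-bit `br_startoff` field: after masking the sum to the field width, a wrapped result compares smaller than the original offset. The wrap test in isolation, with the bit width taken from `BMBT_STARTOFF_BITLEN`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define STARTOFF_BITLEN 54
#define STARTOFF_MASK   ((1ULL << STARTOFF_BITLEN) - 1)

/* True if shifting an extent start right by 'shift' wraps the 54-bit field. */
static bool shift_would_wrap(uint64_t startoff, uint64_t shift)
{
    return ((startoff + shift) & STARTOFF_MASK) < startoff;
}
```

Only the last extent needs the check, which is why the kernel code looks up the file's last extent before applying it.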
+2
fs/xfs/libxfs/xfs_bmap.h
··· 227 227 xfs_fileoff_t *next_fsb, xfs_fileoff_t offset_shift_fsb, 228 228 bool *done, xfs_fsblock_t *firstblock, 229 229 struct xfs_defer_ops *dfops); 230 + int xfs_bmap_can_insert_extents(struct xfs_inode *ip, xfs_fileoff_t off, 231 + xfs_fileoff_t shift); 230 232 int xfs_bmap_insert_extents(struct xfs_trans *tp, struct xfs_inode *ip, 231 233 xfs_fileoff_t *next_fsb, xfs_fileoff_t offset_shift_fsb, 232 234 bool *done, xfs_fileoff_t stop_fsb, xfs_fsblock_t *firstblock,
+5
fs/xfs/libxfs/xfs_format.h
··· 962 962 XFS_DFORK_DSIZE(dip, mp) : \ 963 963 XFS_DFORK_ASIZE(dip, mp)) 964 964 965 + #define XFS_DFORK_MAXEXT(dip, mp, w) \ 966 + (XFS_DFORK_SIZE(dip, mp, w) / sizeof(struct xfs_bmbt_rec)) 967 + 965 968 /* 966 969 * Return pointers to the data or attribute forks. 967 970 */ ··· 1528 1525 #define BMBT_STARTOFF_BITLEN 54 1529 1526 #define BMBT_STARTBLOCK_BITLEN 52 1530 1527 #define BMBT_BLOCKCOUNT_BITLEN 21 1528 + 1529 + #define BMBT_STARTOFF_MASK ((1ULL << BMBT_STARTOFF_BITLEN) - 1) 1531 1530 1532 1531 typedef struct xfs_bmbt_rec { 1533 1532 __be64 l0, l1;
+47 -29
fs/xfs/libxfs/xfs_inode_buf.c
··· 374 374 } 375 375 } 376 376 377 + static xfs_failaddr_t 378 + xfs_dinode_verify_fork( 379 + struct xfs_dinode *dip, 380 + struct xfs_mount *mp, 381 + int whichfork) 382 + { 383 + uint32_t di_nextents = XFS_DFORK_NEXTENTS(dip, whichfork); 384 + 385 + switch (XFS_DFORK_FORMAT(dip, whichfork)) { 386 + case XFS_DINODE_FMT_LOCAL: 387 + /* 388 + * no local regular files yet 389 + */ 390 + if (whichfork == XFS_DATA_FORK) { 391 + if (S_ISREG(be16_to_cpu(dip->di_mode))) 392 + return __this_address; 393 + if (be64_to_cpu(dip->di_size) > 394 + XFS_DFORK_SIZE(dip, mp, whichfork)) 395 + return __this_address; 396 + } 397 + if (di_nextents) 398 + return __this_address; 399 + break; 400 + case XFS_DINODE_FMT_EXTENTS: 401 + if (di_nextents > XFS_DFORK_MAXEXT(dip, mp, whichfork)) 402 + return __this_address; 403 + break; 404 + case XFS_DINODE_FMT_BTREE: 405 + if (whichfork == XFS_ATTR_FORK) { 406 + if (di_nextents > MAXAEXTNUM) 407 + return __this_address; 408 + } else if (di_nextents > MAXEXTNUM) { 409 + return __this_address; 410 + } 411 + break; 412 + default: 413 + return __this_address; 414 + } 415 + return NULL; 416 + } 417 + 377 418 xfs_failaddr_t 378 419 xfs_dinode_verify( 379 420 struct xfs_mount *mp, ··· 482 441 case S_IFREG: 483 442 case S_IFLNK: 484 443 case S_IFDIR: 485 - switch (dip->di_format) { 486 - case XFS_DINODE_FMT_LOCAL: 487 - /* 488 - * no local regular files yet 489 - */ 490 - if (S_ISREG(mode)) 491 - return __this_address; 492 - if (di_size > XFS_DFORK_DSIZE(dip, mp)) 493 - return __this_address; 494 - if (dip->di_nextents) 495 - return __this_address; 496 - /* fall through */ 497 - case XFS_DINODE_FMT_EXTENTS: 498 - case XFS_DINODE_FMT_BTREE: 499 - break; 500 - default: 501 - return __this_address; 502 - } 444 + fa = xfs_dinode_verify_fork(dip, mp, XFS_DATA_FORK); 445 + if (fa) 446 + return fa; 503 447 break; 504 448 case 0: 505 449 /* Uninitialized inode ok. 
*/ ··· 494 468 } 495 469 496 470 if (XFS_DFORK_Q(dip)) { 497 - switch (dip->di_aformat) { 498 - case XFS_DINODE_FMT_LOCAL: 499 - if (dip->di_anextents) 500 - return __this_address; 501 - /* fall through */ 502 - case XFS_DINODE_FMT_EXTENTS: 503 - case XFS_DINODE_FMT_BTREE: 504 - break; 505 - default: 506 - return __this_address; 507 - } 471 + fa = xfs_dinode_verify_fork(dip, mp, XFS_ATTR_FORK); 472 + if (fa) 473 + return fa; 508 474 } else { 509 475 /* 510 476 * If there is no fork offset, this may be a freshly-made inode
+2 -2
fs/xfs/libxfs/xfs_rtbitmap.c
··· 1029 1029 if (low_rec->ar_startext >= mp->m_sb.sb_rextents || 1030 1030 low_rec->ar_startext == high_rec->ar_startext) 1031 1031 return 0; 1032 - if (high_rec->ar_startext >= mp->m_sb.sb_rextents) 1033 - high_rec->ar_startext = mp->m_sb.sb_rextents - 1; 1032 + if (high_rec->ar_startext > mp->m_sb.sb_rextents) 1033 + high_rec->ar_startext = mp->m_sb.sb_rextents; 1034 1034 1035 1035 /* Iterate the bitmap, looking for discrepancies. */ 1036 1036 rtstart = low_rec->ar_startext;
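The rtbitmap change reads as an off-by-one in clamping: the high record's `ar_startext` bounds the query, so it may legitimately equal `sb_rextents` and should only be pulled back when it exceeds it, whereas the old `>= / - 1` form cut the range one extent short. The corrected clamp in isolation:

```c
#include <assert.h>
#include <stdint.h>

/* Clamp a query bound that may legitimately equal the total extent count. */
static uint64_t clamp_query_end(uint64_t end, uint64_t rextents)
{
    return end > rextents ? rextents : end;
}
```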
+56 -58
fs/xfs/xfs_bmap_util.c
··· 685 685 } 686 686 687 687 /* 688 - * dead simple method of punching delalyed allocation blocks from a range in 689 - * the inode. Walks a block at a time so will be slow, but is only executed in 690 - * rare error cases so the overhead is not critical. This will always punch out 691 - * both the start and end blocks, even if the ranges only partially overlap 692 - * them, so it is up to the caller to ensure that partial blocks are not 693 - * passed in. 688 + * Dead simple method of punching delalyed allocation blocks from a range in 689 + * the inode. This will always punch out both the start and end blocks, even 690 + * if the ranges only partially overlap them, so it is up to the caller to 691 + * ensure that partial blocks are not passed in. 694 692 */ 695 693 int 696 694 xfs_bmap_punch_delalloc_range( ··· 696 698 xfs_fileoff_t start_fsb, 697 699 xfs_fileoff_t length) 698 700 { 699 - xfs_fileoff_t remaining = length; 701 + struct xfs_ifork *ifp = &ip->i_df; 702 + xfs_fileoff_t end_fsb = start_fsb + length; 703 + struct xfs_bmbt_irec got, del; 704 + struct xfs_iext_cursor icur; 700 705 int error = 0; 701 706 702 707 ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL)); 703 708 704 - do { 705 - int done; 706 - xfs_bmbt_irec_t imap; 707 - int nimaps = 1; 708 - xfs_fsblock_t firstblock; 709 - struct xfs_defer_ops dfops; 710 - 711 - /* 712 - * Map the range first and check that it is a delalloc extent 713 - * before trying to unmap the range. Otherwise we will be 714 - * trying to remove a real extent (which requires a 715 - * transaction) or a hole, which is probably a bad idea... 
716 - */ 717 - error = xfs_bmapi_read(ip, start_fsb, 1, &imap, &nimaps, 718 - XFS_BMAPI_ENTIRE); 719 - 720 - if (error) { 721 - /* something screwed, just bail */ 722 - if (!XFS_FORCED_SHUTDOWN(ip->i_mount)) { 723 - xfs_alert(ip->i_mount, 724 - "Failed delalloc mapping lookup ino %lld fsb %lld.", 725 - ip->i_ino, start_fsb); 726 - } 727 - break; 728 - } 729 - if (!nimaps) { 730 - /* nothing there */ 731 - goto next_block; 732 - } 733 - if (imap.br_startblock != DELAYSTARTBLOCK) { 734 - /* been converted, ignore */ 735 - goto next_block; 736 - } 737 - WARN_ON(imap.br_blockcount == 0); 738 - 739 - /* 740 - * Note: while we initialise the firstblock/dfops pair, they 741 - * should never be used because blocks should never be 742 - * allocated or freed for a delalloc extent and hence we need 743 - * don't cancel or finish them after the xfs_bunmapi() call. 744 - */ 745 - xfs_defer_init(&dfops, &firstblock); 746 - error = xfs_bunmapi(NULL, ip, start_fsb, 1, 0, 1, &firstblock, 747 - &dfops, &done); 709 + if (!(ifp->if_flags & XFS_IFEXTENTS)) { 710 + error = xfs_iread_extents(NULL, ip, XFS_DATA_FORK); 748 711 if (error) 749 - break; 712 + return error; 713 + } 750 714 751 - ASSERT(!xfs_defer_has_unfinished_work(&dfops)); 752 - next_block: 753 - start_fsb++; 754 - remaining--; 755 - } while(remaining > 0); 715 + if (!xfs_iext_lookup_extent_before(ip, ifp, &end_fsb, &icur, &got)) 716 + return 0; 717 + 718 + while (got.br_startoff + got.br_blockcount > start_fsb) { 719 + del = got; 720 + xfs_trim_extent(&del, start_fsb, length); 721 + 722 + /* 723 + * A delete can push the cursor forward. Step back to the 724 + * previous extent on non-delalloc or extents outside the 725 + * target range. 
726 + */ 727 + if (!del.br_blockcount || 728 + !isnullstartblock(del.br_startblock)) { 729 + if (!xfs_iext_prev_extent(ifp, &icur, &got)) 730 + break; 731 + continue; 732 + } 733 + 734 + error = xfs_bmap_del_extent_delay(ip, XFS_DATA_FORK, &icur, 735 + &got, &del); 736 + if (error || !xfs_iext_get_extent(ifp, &icur, &got)) 737 + break; 738 + } 756 739 757 740 return error; 758 741 } ··· 1187 1208 return 0; 1188 1209 if (offset + len > XFS_ISIZE(ip)) 1189 1210 len = XFS_ISIZE(ip) - offset; 1190 - return iomap_zero_range(VFS_I(ip), offset, len, NULL, &xfs_iomap_ops); 1211 + error = iomap_zero_range(VFS_I(ip), offset, len, NULL, &xfs_iomap_ops); 1212 + if (error) 1213 + return error; 1214 + 1215 + /* 1216 + * If we zeroed right up to EOF and EOF straddles a page boundary we 1217 + * must make sure that the post-EOF area is also zeroed because the 1218 + * page could be mmap'd and iomap_zero_range doesn't do that for us. 1219 + * Writeback of the eof page will do this, albeit clumsily. 1220 + */ 1221 + if (offset + len >= XFS_ISIZE(ip) && ((offset + len) & PAGE_MASK)) { 1222 + error = filemap_write_and_wait_range(VFS_I(ip)->i_mapping, 1223 + (offset + len) & ~PAGE_MASK, LLONG_MAX); 1224 + } 1225 + 1226 + return error; 1191 1227 } 1192 1228 1193 1229 /* ··· 1397 1403 ASSERT(xfs_isilocked(ip, XFS_MMAPLOCK_EXCL)); 1398 1404 1399 1405 trace_xfs_insert_file_space(ip); 1406 + 1407 + error = xfs_bmap_can_insert_extents(ip, stop_fsb, shift_fsb); 1408 + if (error) 1409 + return error; 1400 1410 1401 1411 error = xfs_prepare_shift(ip, offset); 1402 1412 if (error)
+2 -2
fs/xfs/xfs_fsmap.c
··· 513 513 struct xfs_trans *tp, 514 514 struct xfs_getfsmap_info *info) 515 515 { 516 - struct xfs_rtalloc_rec alow; 517 - struct xfs_rtalloc_rec ahigh; 516 + struct xfs_rtalloc_rec alow = { 0 }; 517 + struct xfs_rtalloc_rec ahigh = { 0 }; 518 518 int error; 519 519 520 520 xfs_ilock(tp->t_mountp->m_rbmip, XFS_ILOCK_SHARED);
+1 -1
fs/xfs/xfs_fsops.c
··· 387 387 do { 388 388 free = percpu_counter_sum(&mp->m_fdblocks) - 389 389 mp->m_alloc_set_aside; 390 - if (!free) 390 + if (free <= 0) 391 391 break; 392 392 393 393 delta = request - mp->m_resblks;
+21 -36
fs/xfs/xfs_inode.c
··· 3236 3236 struct xfs_inode *cip; 3237 3237 int nr_found; 3238 3238 int clcount = 0; 3239 - int bufwasdelwri; 3240 3239 int i; 3241 3240 3242 3241 pag = xfs_perag_get(mp, XFS_INO_TO_AGNO(mp, ip->i_ino)); ··· 3359 3360 * inode buffer and shut down the filesystem. 3360 3361 */ 3361 3362 rcu_read_unlock(); 3362 - /* 3363 - * Clean up the buffer. If it was delwri, just release it -- 3364 - * brelse can handle it with no problems. If not, shut down the 3365 - * filesystem before releasing the buffer. 3366 - */ 3367 - bufwasdelwri = (bp->b_flags & _XBF_DELWRI_Q); 3368 - if (bufwasdelwri) 3369 - xfs_buf_relse(bp); 3370 - 3371 3363 xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE); 3372 3364 3373 - if (!bufwasdelwri) { 3374 - /* 3375 - * Just like incore_relse: if we have b_iodone functions, 3376 - * mark the buffer as an error and call them. Otherwise 3377 - * mark it as stale and brelse. 3378 - */ 3379 - if (bp->b_iodone) { 3380 - bp->b_flags &= ~XBF_DONE; 3381 - xfs_buf_stale(bp); 3382 - xfs_buf_ioerror(bp, -EIO); 3383 - xfs_buf_ioend(bp); 3384 - } else { 3385 - xfs_buf_stale(bp); 3386 - xfs_buf_relse(bp); 3387 - } 3388 - } 3389 - 3390 3365 /* 3391 - * Unlocks the flush lock 3366 + * We'll always have an inode attached to the buffer for completion 3367 + * process by the time we are called from xfs_iflush(). Hence we have 3368 + * always need to do IO completion processing to abort the inodes 3369 + * attached to the buffer. handle them just like the shutdown case in 3370 + * xfs_buf_submit(). 
3392 3371 */ 3372 + ASSERT(bp->b_iodone); 3373 + bp->b_flags &= ~XBF_DONE; 3374 + xfs_buf_stale(bp); 3375 + xfs_buf_ioerror(bp, -EIO); 3376 + xfs_buf_ioend(bp); 3377 + 3378 + /* abort the corrupt inode, as it was not attached to the buffer */ 3393 3379 xfs_iflush_abort(cip, false); 3394 3380 kmem_free(cilist); 3395 3381 xfs_perag_put(pag); ··· 3470 3486 xfs_log_force(mp, 0); 3471 3487 3472 3488 /* 3473 - * inode clustering: 3474 - * see if other inodes can be gathered into this write 3489 + * inode clustering: try to gather other inodes into this write 3490 + * 3491 + * Note: Any error during clustering will result in the filesystem 3492 + * being shut down and completion callbacks run on the cluster buffer. 3493 + * As we have already flushed and attached this inode to the buffer, 3494 + * it has already been aborted and released by xfs_iflush_cluster() and 3495 + * so we have no further error handling to do here. 3475 3496 */ 3476 3497 error = xfs_iflush_cluster(ip, bp); 3477 3498 if (error) 3478 - goto cluster_corrupt_out; 3499 + return error; 3479 3500 3480 3501 *bpp = bp; 3481 3502 return 0; ··· 3489 3500 if (bp) 3490 3501 xfs_buf_relse(bp); 3491 3502 xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE); 3492 - cluster_corrupt_out: 3493 - error = -EFSCORRUPTED; 3494 3503 abort_out: 3495 - /* 3496 - * Unlocks the flush lock 3497 - */ 3504 + /* abort the corrupt inode, as it was not attached to the buffer */ 3498 3505 xfs_iflush_abort(ip, false); 3499 3506 return error; 3500 3507 }
+14 -1
fs/xfs/xfs_iomap.c
··· 963 963 unsigned *lockmode) 964 964 { 965 965 unsigned mode = XFS_ILOCK_SHARED; 966 + bool is_write = flags & (IOMAP_WRITE | IOMAP_ZERO); 966 967 967 968 /* 968 969 * COW writes may allocate delalloc space or convert unwritten COW 969 970 * extents, so we need to make sure to take the lock exclusively here. 970 971 */ 971 - if (xfs_is_reflink_inode(ip) && (flags & (IOMAP_WRITE | IOMAP_ZERO))) { 972 + if (xfs_is_reflink_inode(ip) && is_write) { 972 973 /* 973 974 * FIXME: It could still overwrite on unshared extents and not 974 975 * need allocation. ··· 990 989 mode = XFS_ILOCK_EXCL; 991 990 } 992 991 992 + relock: 993 993 if (flags & IOMAP_NOWAIT) { 994 994 if (!xfs_ilock_nowait(ip, mode)) 995 995 return -EAGAIN; 996 996 } else { 997 997 xfs_ilock(ip, mode); 998 + } 999 + 1000 + /* 1001 + * The reflink iflag could have changed since the earlier unlocked 1002 + * check, so if we got ILOCK_SHARED for a write and but we're now a 1003 + * reflink inode we have to switch to ILOCK_EXCL and relock. 1004 + */ 1005 + if (mode == XFS_ILOCK_SHARED && is_write && xfs_is_reflink_inode(ip)) { 1006 + xfs_iunlock(ip, mode); 1007 + mode = XFS_ILOCK_EXCL; 1008 + goto relock; 998 1009 } 999 1010 1000 1011 *lockmode = mode;
+6 -1
fs/xfs/xfs_trans.c
··· 258 258 if (!(flags & XFS_TRANS_NO_WRITECOUNT)) 259 259 sb_start_intwrite(mp->m_super); 260 260 261 - WARN_ON(mp->m_super->s_writers.frozen == SB_FREEZE_COMPLETE); 261 + /* 262 + * Zero-reservation ("empty") transactions can't modify anything, so 263 + * they're allowed to run while we're frozen. 264 + */ 265 + WARN_ON(resp->tr_logres > 0 && 266 + mp->m_super->s_writers.frozen == SB_FREEZE_COMPLETE); 262 267 atomic_inc(&mp->m_active_trans); 263 268 264 269 tp = kmem_zone_zalloc(xfs_trans_zone,
+2 -1
include/crypto/if_alg.h
··· 245 245 int offset, size_t size, int flags); 246 246 void af_alg_free_resources(struct af_alg_async_req *areq); 247 247 void af_alg_async_cb(struct crypto_async_request *_req, int err); 248 - __poll_t af_alg_poll_mask(struct socket *sock, __poll_t events); 248 + __poll_t af_alg_poll(struct file *file, struct socket *sock, 249 + poll_table *wait); 249 250 struct af_alg_async_req *af_alg_alloc_areq(struct sock *sk, 250 251 unsigned int areqlen); 251 252 int af_alg_get_rsgl(struct sock *sk, struct msghdr *msg, int flags,
+3
include/linux/acpi.h
··· 443 443 int acpi_check_region(resource_size_t start, resource_size_t n, 444 444 const char *name); 445 445 446 + acpi_status acpi_release_memory(acpi_handle handle, struct resource *res, 447 + u32 level); 448 + 446 449 int acpi_resources_are_enforced(void); 447 450 448 451 #ifdef CONFIG_HIBERNATION
+2 -2
include/linux/blkdev.h
··· 1119 1119 if (!q->limits.chunk_sectors) 1120 1120 return q->limits.max_sectors; 1121 1121 1122 - return q->limits.chunk_sectors - 1123 - (offset & (q->limits.chunk_sectors - 1)); 1122 + return min(q->limits.max_sectors, (unsigned int)(q->limits.chunk_sectors - 1123 + (offset & (q->limits.chunk_sectors - 1)))); 1124 1124 } 1125 1125 1126 1126 static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
+7 -1
include/linux/compat.h
··· 72 72 */ 73 73 #ifndef COMPAT_SYSCALL_DEFINEx 74 74 #define COMPAT_SYSCALL_DEFINEx(x, name, ...) \ 75 + __diag_push(); \ 76 + __diag_ignore(GCC, 8, "-Wattribute-alias", \ 77 + "Type aliasing is used to sanitize syscall arguments");\ 75 78 asmlinkage long compat_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)); \ 76 79 asmlinkage long compat_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)) \ 77 80 __attribute__((alias(__stringify(__se_compat_sys##name)))); \ ··· 83 80 asmlinkage long __se_compat_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)); \ 84 81 asmlinkage long __se_compat_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)) \ 85 82 { \ 86 - return __do_compat_sys##name(__MAP(x,__SC_DELOUSE,__VA_ARGS__));\ 83 + long ret = __do_compat_sys##name(__MAP(x,__SC_DELOUSE,__VA_ARGS__));\ 84 + __MAP(x,__SC_TEST,__VA_ARGS__); \ 85 + return ret; \ 87 86 } \ 87 + __diag_pop(); \ 88 88 static inline long __do_compat_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)) 89 89 #endif /* COMPAT_SYSCALL_DEFINEx */ 90 90
+25
include/linux/compiler-gcc.h
··· 347 347 #if GCC_VERSION >= 50100 348 348 #define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1 349 349 #endif 350 + 351 + /* 352 + * Turn individual warnings and errors on and off locally, depending 353 + * on version. 354 + */ 355 + #define __diag_GCC(version, severity, s) \ 356 + __diag_GCC_ ## version(__diag_GCC_ ## severity s) 357 + 358 + /* Severity used in pragma directives */ 359 + #define __diag_GCC_ignore ignored 360 + #define __diag_GCC_warn warning 361 + #define __diag_GCC_error error 362 + 363 + /* Compilers before gcc-4.6 do not understand "#pragma GCC diagnostic push" */ 364 + #if GCC_VERSION >= 40600 365 + #define __diag_str1(s) #s 366 + #define __diag_str(s) __diag_str1(s) 367 + #define __diag(s) _Pragma(__diag_str(GCC diagnostic s)) 368 + #endif 369 + 370 + #if GCC_VERSION >= 80000 371 + #define __diag_GCC_8(s) __diag(s) 372 + #else 373 + #define __diag_GCC_8(s) 374 + #endif
+18
include/linux/compiler_types.h
··· 271 271 # define __native_word(t) (sizeof(t) == sizeof(char) || sizeof(t) == sizeof(short) || sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long)) 272 272 #endif 273 273 274 + #ifndef __diag 275 + #define __diag(string) 276 + #endif 277 + 278 + #ifndef __diag_GCC 279 + #define __diag_GCC(version, severity, string) 280 + #endif 281 + 282 + #define __diag_push() __diag(push) 283 + #define __diag_pop() __diag(pop) 284 + 285 + #define __diag_ignore(compiler, version, option, comment) \ 286 + __diag_ ## compiler(version, ignore, option) 287 + #define __diag_warn(compiler, version, option, comment) \ 288 + __diag_ ## compiler(version, warn, option) 289 + #define __diag_error(compiler, version, option, comment) \ 290 + __diag_ ## compiler(version, error, option) 291 + 274 292 #endif /* __LINUX_COMPILER_TYPES_H */
+1 -1
include/linux/dax.h
··· 135 135 136 136 ssize_t dax_iomap_rw(struct kiocb *iocb, struct iov_iter *iter, 137 137 const struct iomap_ops *ops); 138 - int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size, 138 + vm_fault_t dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size, 139 139 pfn_t *pfnp, int *errp, const struct iomap_ops *ops); 140 140 vm_fault_t dax_finish_sync_fault(struct vm_fault *vmf, 141 141 enum page_entry_size pe_size, pfn_t pfn);
+3 -1
include/linux/filter.h
··· 472 472 struct bpf_binary_header { 473 473 u16 pages; 474 474 u16 locked:1; 475 - u8 image[]; 475 + 476 + /* Some arches need word alignment for their instructions */ 477 + u8 image[] __aligned(4); 476 478 }; 477 479 478 480 struct bpf_prog {
-2
include/linux/fs.h
··· 1720 1720 int (*iterate) (struct file *, struct dir_context *); 1721 1721 int (*iterate_shared) (struct file *, struct dir_context *); 1722 1722 __poll_t (*poll) (struct file *, struct poll_table_struct *); 1723 - struct wait_queue_head * (*get_poll_head)(struct file *, __poll_t); 1724 - __poll_t (*poll_mask) (struct file *, __poll_t); 1725 1723 long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long); 1726 1724 long (*compat_ioctl) (struct file *, unsigned int, unsigned long); 1727 1725 int (*mmap) (struct file *, struct vm_area_struct *);
+1 -1
include/linux/iio/buffer-dma.h
··· 141 141 char __user *user_buffer); 142 142 size_t iio_dma_buffer_data_available(struct iio_buffer *buffer); 143 143 int iio_dma_buffer_set_bytes_per_datum(struct iio_buffer *buffer, size_t bpd); 144 - int iio_dma_buffer_set_length(struct iio_buffer *buffer, int length); 144 + int iio_dma_buffer_set_length(struct iio_buffer *buffer, unsigned int length); 145 145 int iio_dma_buffer_request_update(struct iio_buffer *buffer); 146 146 147 147 int iio_dma_buffer_init(struct iio_dma_buffer_queue *queue,
+1 -1
include/linux/input/mt.h
··· 100 100 return axis == ABS_MT_SLOT || input_is_mt_value(axis); 101 101 } 102 102 103 - void input_mt_report_slot_state(struct input_dev *dev, 103 + bool input_mt_report_slot_state(struct input_dev *dev, 104 104 unsigned int tool_type, bool active); 105 105 106 106 void input_mt_report_finger_count(struct input_dev *dev, int count);
-1
include/linux/net.h
··· 147 147 int (*getname) (struct socket *sock, 148 148 struct sockaddr *addr, 149 149 int peer); 150 - __poll_t (*poll_mask) (struct socket *sock, __poll_t events); 151 150 __poll_t (*poll) (struct file *file, struct socket *sock, 152 151 struct poll_table_struct *wait); 153 152 int (*ioctl) (struct socket *sock, unsigned int cmd,
+3 -3
include/linux/pm_domain.h
··· 234 234 int of_genpd_parse_idle_states(struct device_node *dn, 235 235 struct genpd_power_state **states, int *n); 236 236 unsigned int of_genpd_opp_to_performance_state(struct device *dev, 237 - struct device_node *opp_node); 237 + struct device_node *np); 238 238 239 239 int genpd_dev_pm_attach(struct device *dev); 240 240 struct device *genpd_dev_pm_attach_by_id(struct device *dev, ··· 274 274 275 275 static inline unsigned int 276 276 of_genpd_opp_to_performance_state(struct device *dev, 277 - struct device_node *opp_node) 277 + struct device_node *np) 278 278 { 279 - return -ENODEV; 279 + return 0; 280 280 } 281 281 282 282 static inline int genpd_dev_pm_attach(struct device *dev)
+7 -7
include/linux/poll.h
··· 74 74 pt->_key = ~(__poll_t)0; /* all events enabled */ 75 75 } 76 76 77 - static inline bool file_has_poll_mask(struct file *file) 78 - { 79 - return file->f_op->get_poll_head && file->f_op->poll_mask; 80 - } 81 - 82 77 static inline bool file_can_poll(struct file *file) 83 78 { 84 - return file->f_op->poll || file_has_poll_mask(file); 79 + return file->f_op->poll; 85 80 } 86 81 87 - __poll_t vfs_poll(struct file *file, struct poll_table_struct *pt); 82 + static inline __poll_t vfs_poll(struct file *file, struct poll_table_struct *pt) 83 + { 84 + if (unlikely(!file->f_op->poll)) 85 + return DEFAULT_POLLMASK; 86 + return file->f_op->poll(file, pt); 87 + } 88 88 89 89 struct poll_table_entry { 90 90 struct file *filp;
+2
include/linux/rmi.h
··· 354 354 struct mutex irq_mutex; 355 355 struct input_dev *input; 356 356 357 + struct irq_domain *irqdomain; 358 + 357 359 u8 pdt_props; 358 360 359 361 u8 num_rx_electrodes;
-18
include/linux/scatterlist.h
··· 9 9 #include <asm/io.h> 10 10 11 11 struct scatterlist { 12 - #ifdef CONFIG_DEBUG_SG 13 - unsigned long sg_magic; 14 - #endif 15 12 unsigned long page_link; 16 13 unsigned int offset; 17 14 unsigned int length; ··· 61 64 * 62 65 */ 63 66 64 - #define SG_MAGIC 0x87654321 65 67 #define SG_CHAIN 0x01UL 66 68 #define SG_END 0x02UL 67 69 ··· 94 98 */ 95 99 BUG_ON((unsigned long) page & (SG_CHAIN | SG_END)); 96 100 #ifdef CONFIG_DEBUG_SG 97 - BUG_ON(sg->sg_magic != SG_MAGIC); 98 101 BUG_ON(sg_is_chain(sg)); 99 102 #endif 100 103 sg->page_link = page_link | (unsigned long) page; ··· 124 129 static inline struct page *sg_page(struct scatterlist *sg) 125 130 { 126 131 #ifdef CONFIG_DEBUG_SG 127 - BUG_ON(sg->sg_magic != SG_MAGIC); 128 132 BUG_ON(sg_is_chain(sg)); 129 133 #endif 130 134 return (struct page *)((sg)->page_link & ~(SG_CHAIN | SG_END)); ··· 189 195 **/ 190 196 static inline void sg_mark_end(struct scatterlist *sg) 191 197 { 192 - #ifdef CONFIG_DEBUG_SG 193 - BUG_ON(sg->sg_magic != SG_MAGIC); 194 - #endif 195 198 /* 196 199 * Set termination bit, clear potential chain bit 197 200 */ ··· 206 215 **/ 207 216 static inline void sg_unmark_end(struct scatterlist *sg) 208 217 { 209 - #ifdef CONFIG_DEBUG_SG 210 - BUG_ON(sg->sg_magic != SG_MAGIC); 211 - #endif 212 218 sg->page_link &= ~SG_END; 213 219 } 214 220 ··· 248 260 static inline void sg_init_marker(struct scatterlist *sgl, 249 261 unsigned int nents) 250 262 { 251 - #ifdef CONFIG_DEBUG_SG 252 - unsigned int i; 253 - 254 - for (i = 0; i < nents; i++) 255 - sgl[i].sg_magic = SG_MAGIC; 256 - #endif 257 263 sg_mark_end(&sgl[nents - 1]); 258 264 } 259 265
+2 -1
include/linux/skbuff.h
··· 3252 3252 int *peeked, int *off, int *err); 3253 3253 struct sk_buff *skb_recv_datagram(struct sock *sk, unsigned flags, int noblock, 3254 3254 int *err); 3255 - __poll_t datagram_poll_mask(struct socket *sock, __poll_t events); 3255 + __poll_t datagram_poll(struct file *file, struct socket *sock, 3256 + struct poll_table_struct *wait); 3256 3257 int skb_copy_datagram_iter(const struct sk_buff *from, int offset, 3257 3258 struct iov_iter *to, int size); 3258 3259 static inline int skb_copy_datagram_msg(const struct sk_buff *from, int offset,
+4
include/linux/slub_def.h
··· 155 155 156 156 #ifdef CONFIG_SYSFS 157 157 #define SLAB_SUPPORTS_SYSFS 158 + void sysfs_slab_unlink(struct kmem_cache *); 158 159 void sysfs_slab_release(struct kmem_cache *); 159 160 #else 161 + static inline void sysfs_slab_unlink(struct kmem_cache *s) 162 + { 163 + } 160 164 static inline void sysfs_slab_release(struct kmem_cache *s) 161 165 { 162 166 }
+4
include/linux/syscalls.h
··· 231 231 */ 232 232 #ifndef __SYSCALL_DEFINEx 233 233 #define __SYSCALL_DEFINEx(x, name, ...) \ 234 + __diag_push(); \ 235 + __diag_ignore(GCC, 8, "-Wattribute-alias", \ 236 + "Type aliasing is used to sanitize syscall arguments");\ 234 237 asmlinkage long sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)) \ 235 238 __attribute__((alias(__stringify(__se_sys##name)))); \ 236 239 ALLOW_ERROR_INJECTION(sys##name, ERRNO); \ ··· 246 243 __PROTECT(x, ret,__MAP(x,__SC_ARGS,__VA_ARGS__)); \ 247 244 return ret; \ 248 245 } \ 246 + __diag_pop(); \ 249 247 static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)) 250 248 #endif /* __SYSCALL_DEFINEx */ 251 249
+1 -1
include/net/bluetooth/bluetooth.h
··· 271 271 int flags); 272 272 int bt_sock_stream_recvmsg(struct socket *sock, struct msghdr *msg, 273 273 size_t len, int flags); 274 - __poll_t bt_sock_poll_mask(struct socket *sock, __poll_t events); 274 + __poll_t bt_sock_poll(struct file *file, struct socket *sock, poll_table *wait); 275 275 int bt_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg); 276 276 int bt_sock_wait_state(struct sock *sk, int state, unsigned long timeo); 277 277 int bt_sock_wait_ready(struct sock *sk, unsigned long flags);
+2
include/net/iucv/af_iucv.h
··· 153 153 atomic_t autobind_name; 154 154 }; 155 155 156 + __poll_t iucv_sock_poll(struct file *file, struct socket *sock, 157 + poll_table *wait); 156 158 void iucv_sock_link(struct iucv_sock_list *l, struct sock *s); 157 159 void iucv_sock_unlink(struct iucv_sock_list *l, struct sock *s); 158 160 void iucv_accept_enqueue(struct sock *parent, struct sock *sk);
+2 -1
include/net/sctp/sctp.h
··· 109 109 int sctp_inet_listen(struct socket *sock, int backlog); 110 110 void sctp_write_space(struct sock *sk); 111 111 void sctp_data_ready(struct sock *sk); 112 - __poll_t sctp_poll_mask(struct socket *sock, __poll_t events); 112 + __poll_t sctp_poll(struct file *file, struct socket *sock, 113 + poll_table *wait); 113 114 void sctp_sock_rfree(struct sk_buff *skb); 114 115 void sctp_copy_sock(struct sock *newsk, struct sock *sk, 115 116 struct sctp_association *asoc);
+2 -1
include/net/tcp.h
··· 388 388 void tcp_close(struct sock *sk, long timeout); 389 389 void tcp_init_sock(struct sock *sk); 390 390 void tcp_init_transfer(struct sock *sk, int bpf_op); 391 - __poll_t tcp_poll_mask(struct socket *sock, __poll_t events); 391 + __poll_t tcp_poll(struct file *file, struct socket *sock, 392 + struct poll_table_struct *wait); 392 393 int tcp_getsockopt(struct sock *sk, int level, int optname, 393 394 char __user *optval, int __user *optlen); 394 395 int tcp_setsockopt(struct sock *sk, int level, int optname,
+4 -2
include/net/tls.h
··· 109 109 110 110 struct strparser strp; 111 111 void (*saved_data_ready)(struct sock *sk); 112 - __poll_t (*sk_poll_mask)(struct socket *sock, __poll_t events); 112 + unsigned int (*sk_poll)(struct file *file, struct socket *sock, 113 + struct poll_table_struct *wait); 113 114 struct sk_buff *recv_pkt; 114 115 u8 control; 115 116 bool decrypted; ··· 225 224 void tls_sw_free_resources_rx(struct sock *sk); 226 225 int tls_sw_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, 227 226 int nonblock, int flags, int *addr_len); 228 - __poll_t tls_sw_poll_mask(struct socket *sock, __poll_t events); 227 + unsigned int tls_sw_poll(struct file *file, struct socket *sock, 228 + struct poll_table_struct *wait); 229 229 ssize_t tls_sw_splice_read(struct socket *sock, loff_t *ppos, 230 230 struct pipe_inode_info *pipe, 231 231 size_t len, unsigned int flags);
+1 -1
include/net/udp.h
··· 285 285 int udp_pre_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len); 286 286 int __udp_disconnect(struct sock *sk, int flags); 287 287 int udp_disconnect(struct sock *sk, int flags); 288 - __poll_t udp_poll_mask(struct socket *sock, __poll_t events); 288 + __poll_t udp_poll(struct file *file, struct socket *sock, poll_table *wait); 289 289 struct sk_buff *skb_udp_tunnel_segment(struct sk_buff *skb, 290 290 netdev_features_t features, 291 291 bool is_ipv6);
+4 -2
include/uapi/linux/aio_abi.h
··· 39 39 IOCB_CMD_PWRITE = 1, 40 40 IOCB_CMD_FSYNC = 2, 41 41 IOCB_CMD_FDSYNC = 3, 42 - /* 4 was the experimental IOCB_CMD_PREADX */ 43 - IOCB_CMD_POLL = 5, 42 + /* These two are experimental. 43 + * IOCB_CMD_PREADX = 4, 44 + * IOCB_CMD_POLL = 5, 45 + */ 44 46 IOCB_CMD_NOOP = 6, 45 47 IOCB_CMD_PREADV = 7, 46 48 IOCB_CMD_PWRITEV = 8,
+3 -1
include/uapi/linux/target_core_user.h
··· 44 44 #define TCMU_MAILBOX_VERSION 2 45 45 #define ALIGN_SIZE 64 /* Should be enough for most CPUs */ 46 46 #define TCMU_MAILBOX_FLAG_CAP_OOOC (1 << 0) /* Out-of-order completions */ 47 + #define TCMU_MAILBOX_FLAG_CAP_READ_LEN (1 << 1) /* Read data length */ 47 48 48 49 struct tcmu_mailbox { 49 50 __u16 version; ··· 72 71 __u16 cmd_id; 73 72 __u8 kflags; 74 73 #define TCMU_UFLAG_UNKNOWN_OP 0x1 74 + #define TCMU_UFLAG_READ_LEN 0x2 75 75 __u8 uflags; 76 76 77 77 } __packed; ··· 121 119 __u8 scsi_status; 122 120 __u8 __pad1; 123 121 __u16 __pad2; 124 - __u32 __pad3; 122 + __u32 read_len; 125 123 char sense_buffer[TCMU_SENSE_BUFFERSIZE]; 126 124 } rsp; 127 125 };
+3 -4
init/Kconfig
··· 1051 1051 depends on HAVE_LD_DEAD_CODE_DATA_ELIMINATION 1052 1052 depends on EXPERT 1053 1053 help 1054 - Select this if the architecture wants to do dead code and 1055 - data elimination with the linker by compiling with 1056 - -ffunction-sections -fdata-sections, and linking with 1057 - --gc-sections. 1054 + Enable this if you want to do dead code and data elimination with 1055 + the linker by compiling with -ffunction-sections -fdata-sections, 1056 + and linking with --gc-sections. 1058 1057 1059 1058 This can reduce on disk and in-memory size of the kernel 1060 1059 code and static data, particularly for small configs and
+1
kernel/dma/swiotlb.c
··· 1085 1085 .unmap_page = swiotlb_unmap_page, 1086 1086 .dma_supported = dma_direct_supported, 1087 1087 }; 1088 + EXPORT_SYMBOL(swiotlb_dma_ops);
+1 -1
kernel/events/core.c
··· 6482 6482 data->phys_addr = perf_virt_to_phys(data->addr); 6483 6483 } 6484 6484 6485 - static void __always_inline 6485 + static __always_inline void 6486 6486 __perf_event_output(struct perf_event *event, 6487 6487 struct perf_sample_data *data, 6488 6488 struct pt_regs *regs,
+1
lib/Kconfig.kasan
··· 6 6 config KASAN 7 7 bool "KASan: runtime memory debugger" 8 8 depends on SLUB || (SLAB && !DEBUG_SLAB) 9 + select SLUB_DEBUG if SLUB 9 10 select CONSTRUCTORS 10 11 select STACKDEPOT 11 12 help
+1 -1
lib/percpu_ida.c
··· 141 141 spin_lock_irqsave(&tags->lock, flags); 142 142 143 143 /* Fastpath */ 144 - if (likely(tags->nr_free >= 0)) { 144 + if (likely(tags->nr_free)) { 145 145 tag = tags->freelist[--tags->nr_free]; 146 146 spin_unlock_irqrestore(&tags->lock, flags); 147 147 return tag;
-6
lib/scatterlist.c
··· 24 24 **/ 25 25 struct scatterlist *sg_next(struct scatterlist *sg) 26 26 { 27 - #ifdef CONFIG_DEBUG_SG 28 - BUG_ON(sg->sg_magic != SG_MAGIC); 29 - #endif 30 27 if (sg_is_last(sg)) 31 28 return NULL; 32 29 ··· 108 111 for_each_sg(sgl, sg, nents, i) 109 112 ret = sg; 110 113 111 - #ifdef CONFIG_DEBUG_SG 112 - BUG_ON(sgl[0].sg_magic != SG_MAGIC); 113 114 BUG_ON(!sg_is_last(ret)); 114 - #endif 115 115 return ret; 116 116 } 117 117 EXPORT_SYMBOL(sg_last);
-7
lib/test_printf.c
··· 260 260 { 261 261 int err; 262 262 263 - /* 264 - * Make sure crng is ready. Otherwise we get "(ptrval)" instead 265 - * of a hashed address when printing '%p' in plain_hash() and 266 - * plain_format(). 267 - */ 268 - wait_for_random_bytes(); 269 - 270 263 err = plain_hash(); 271 264 if (err) { 272 265 pr_warn("plain 'p' does not appear to be hashed\n");
+4
mm/slab_common.c
··· 567 567 list_del(&s->list); 568 568 569 569 if (s->flags & SLAB_TYPESAFE_BY_RCU) { 570 + #ifdef SLAB_SUPPORTS_SYSFS 571 + sysfs_slab_unlink(s); 572 + #endif 570 573 list_add_tail(&s->list, &slab_caches_to_rcu_destroy); 571 574 schedule_work(&slab_caches_to_rcu_destroy_work); 572 575 } else { 573 576 #ifdef SLAB_SUPPORTS_SYSFS 577 + sysfs_slab_unlink(s); 574 578 sysfs_slab_release(s); 575 579 #else 576 580 slab_kmem_cache_release(s);
+6 -1
mm/slub.c
··· 5667 5667 kset_unregister(s->memcg_kset); 5668 5668 #endif 5669 5669 kobject_uevent(&s->kobj, KOBJ_REMOVE); 5670 - kobject_del(&s->kobj); 5671 5670 out: 5672 5671 kobject_put(&s->kobj); 5673 5672 } ··· 5749 5750 5750 5751 kobject_get(&s->kobj); 5751 5752 schedule_work(&s->kobj_remove_work); 5753 + } 5754 + 5755 + void sysfs_slab_unlink(struct kmem_cache *s) 5756 + { 5757 + if (slab_state >= FULL) 5758 + kobject_del(&s->kobj); 5752 5759 } 5753 5760 5754 5761 void sysfs_slab_release(struct kmem_cache *s)
-2
mm/vmstat.c
··· 1796 1796 * to occur in the future. Keep on running the 1797 1797 * update worker thread. 1798 1798 */ 1799 - preempt_disable(); 1800 1799 queue_delayed_work_on(smp_processor_id(), mm_percpu_wq, 1801 1800 this_cpu_ptr(&vmstat_work), 1802 1801 round_jiffies_relative(sysctl_stat_interval)); 1803 - preempt_enable(); 1804 1802 } 1805 1803 } 1806 1804
+1 -1
net/appletalk/ddp.c
··· 1869 1869 .socketpair = sock_no_socketpair, 1870 1870 .accept = sock_no_accept, 1871 1871 .getname = atalk_getname, 1872 - .poll_mask = datagram_poll_mask, 1872 + .poll = datagram_poll, 1873 1873 .ioctl = atalk_ioctl, 1874 1874 #ifdef CONFIG_COMPAT 1875 1875 .compat_ioctl = atalk_compat_ioctl,
+8 -3
net/atm/common.c
··· 647 647 return error; 648 648 } 649 649 650 - __poll_t vcc_poll_mask(struct socket *sock, __poll_t events) 650 + __poll_t vcc_poll(struct file *file, struct socket *sock, poll_table *wait) 651 651 { 652 652 struct sock *sk = sock->sk; 653 - struct atm_vcc *vcc = ATM_SD(sock); 654 - __poll_t mask = 0; 653 + struct atm_vcc *vcc; 654 + __poll_t mask; 655 + 656 + sock_poll_wait(file, sk_sleep(sk), wait); 657 + mask = 0; 658 + 659 + vcc = ATM_SD(sock); 655 660 656 661 /* exceptional events */ 657 662 if (sk->sk_err)
+1 -1
net/atm/common.h
··· 17 17 int vcc_recvmsg(struct socket *sock, struct msghdr *msg, size_t size, 18 18 int flags); 19 19 int vcc_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len); 20 - __poll_t vcc_poll_mask(struct socket *sock, __poll_t events); 20 + __poll_t vcc_poll(struct file *file, struct socket *sock, poll_table *wait); 21 21 int vcc_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg); 22 22 int vcc_compat_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg); 23 23 int vcc_setsockopt(struct socket *sock, int level, int optname,
+1 -1
net/atm/pvc.c
··· 113 113 .socketpair = sock_no_socketpair, 114 114 .accept = sock_no_accept, 115 115 .getname = pvc_getname, 116 - .poll_mask = vcc_poll_mask, 116 + .poll = vcc_poll, 117 117 .ioctl = vcc_ioctl, 118 118 #ifdef CONFIG_COMPAT 119 119 .compat_ioctl = vcc_compat_ioctl,
+1 -1
net/atm/svc.c
··· 636 636 .socketpair = sock_no_socketpair, 637 637 .accept = svc_accept, 638 638 .getname = svc_getname, 639 - .poll_mask = vcc_poll_mask, 639 + .poll = vcc_poll, 640 640 .ioctl = svc_ioctl, 641 641 #ifdef CONFIG_COMPAT 642 642 .compat_ioctl = svc_compat_ioctl,
+1 -1
net/ax25/af_ax25.c
··· 1941 1941 .socketpair = sock_no_socketpair, 1942 1942 .accept = ax25_accept, 1943 1943 .getname = ax25_getname, 1944 - .poll_mask = datagram_poll_mask, 1944 + .poll = datagram_poll, 1945 1945 .ioctl = ax25_ioctl, 1946 1946 .listen = ax25_listen, 1947 1947 .shutdown = ax25_shutdown,
+5 -2
net/bluetooth/af_bluetooth.c
··· 437 437 return 0; 438 438 } 439 439 440 - __poll_t bt_sock_poll_mask(struct socket *sock, __poll_t events) 440 + __poll_t bt_sock_poll(struct file *file, struct socket *sock, 441 + poll_table *wait) 441 442 { 442 443 struct sock *sk = sock->sk; 443 444 __poll_t mask = 0; 444 445 445 446 BT_DBG("sock %p, sk %p", sock, sk); 447 + 448 + poll_wait(file, sk_sleep(sk), wait); 446 449 447 450 if (sk->sk_state == BT_LISTEN) 448 451 return bt_accept_poll(sk); ··· 478 475 479 476 return mask; 480 477 } 481 - EXPORT_SYMBOL(bt_sock_poll_mask); 478 + EXPORT_SYMBOL(bt_sock_poll); 482 479 483 480 int bt_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg) 484 481 {
+1 -1
net/bluetooth/hci_sock.c
··· 1975 1975 .sendmsg = hci_sock_sendmsg, 1976 1976 .recvmsg = hci_sock_recvmsg, 1977 1977 .ioctl = hci_sock_ioctl, 1978 - .poll_mask = datagram_poll_mask, 1978 + .poll = datagram_poll, 1979 1979 .listen = sock_no_listen, 1980 1980 .shutdown = sock_no_shutdown, 1981 1981 .setsockopt = hci_sock_setsockopt,
+1 -1
net/bluetooth/l2cap_sock.c
··· 1653 1653 .getname = l2cap_sock_getname, 1654 1654 .sendmsg = l2cap_sock_sendmsg, 1655 1655 .recvmsg = l2cap_sock_recvmsg, 1656 - .poll_mask = bt_sock_poll_mask, 1656 + .poll = bt_sock_poll, 1657 1657 .ioctl = bt_sock_ioctl, 1658 1658 .mmap = sock_no_mmap, 1659 1659 .socketpair = sock_no_socketpair,
+1 -1
net/bluetooth/rfcomm/sock.c
··· 1049 1049 .setsockopt = rfcomm_sock_setsockopt, 1050 1050 .getsockopt = rfcomm_sock_getsockopt, 1051 1051 .ioctl = rfcomm_sock_ioctl, 1052 - .poll_mask = bt_sock_poll_mask, 1052 + .poll = bt_sock_poll, 1053 1053 .socketpair = sock_no_socketpair, 1054 1054 .mmap = sock_no_mmap 1055 1055 };
+1 -1
net/bluetooth/sco.c
··· 1197 1197 .getname = sco_sock_getname, 1198 1198 .sendmsg = sco_sock_sendmsg, 1199 1199 .recvmsg = sco_sock_recvmsg, 1200 - .poll_mask = bt_sock_poll_mask, 1200 + .poll = bt_sock_poll, 1201 1201 .ioctl = bt_sock_ioctl, 1202 1202 .mmap = sock_no_mmap, 1203 1203 .socketpair = sock_no_socketpair,
+1 -1
net/bpfilter/Makefile
··· 22 22 quiet_cmd_copy_umh = GEN $@ 23 23 cmd_copy_umh = echo ':' > $(obj)/.bpfilter_umh.o.cmd; \ 24 24 $(OBJCOPY) -I binary \ 25 - `LC_ALL=C objdump -f net/bpfilter/bpfilter_umh \ 25 + `LC_ALL=C $(OBJDUMP) -f net/bpfilter/bpfilter_umh \ 26 26 |awk -F' |,' '/file format/{print "-O",$$NF} \ 27 27 /^architecture:/{print "-B",$$2}'` \ 28 28 --rename-section .data=.init.rodata $< $@
+8 -4
net/caif/caif_socket.c
··· 934 934 } 935 935 936 936 /* Copied from af_unix.c:unix_poll(), added CAIF tx_flow handling */ 937 - static __poll_t caif_poll_mask(struct socket *sock, __poll_t events) 937 + static __poll_t caif_poll(struct file *file, 938 + struct socket *sock, poll_table *wait) 938 939 { 939 940 struct sock *sk = sock->sk; 941 + __poll_t mask; 940 942 struct caifsock *cf_sk = container_of(sk, struct caifsock, sk); 941 - __poll_t mask = 0; 943 + 944 + sock_poll_wait(file, sk_sleep(sk), wait); 945 + mask = 0; 942 946 943 947 /* exceptional events? */ 944 948 if (sk->sk_err) ··· 976 972 .socketpair = sock_no_socketpair, 977 973 .accept = sock_no_accept, 978 974 .getname = sock_no_getname, 979 - .poll_mask = caif_poll_mask, 975 + .poll = caif_poll, 980 976 .ioctl = sock_no_ioctl, 981 977 .listen = sock_no_listen, 982 978 .shutdown = sock_no_shutdown, ··· 997 993 .socketpair = sock_no_socketpair, 998 994 .accept = sock_no_accept, 999 995 .getname = sock_no_getname, 1000 - .poll_mask = caif_poll_mask, 996 + .poll = caif_poll, 1001 997 .ioctl = sock_no_ioctl, 1002 998 .listen = sock_no_listen, 1003 999 .shutdown = sock_no_shutdown,
+1 -1
net/can/bcm.c
··· 1660 1660 .socketpair = sock_no_socketpair, 1661 1661 .accept = sock_no_accept, 1662 1662 .getname = sock_no_getname, 1663 - .poll_mask = datagram_poll_mask, 1663 + .poll = datagram_poll, 1664 1664 .ioctl = can_ioctl, /* use can_ioctl() from af_can.c */ 1665 1665 .listen = sock_no_listen, 1666 1666 .shutdown = sock_no_shutdown,
+1 -1
net/can/raw.c
··· 843 843 .socketpair = sock_no_socketpair, 844 844 .accept = sock_no_accept, 845 845 .getname = raw_getname, 846 - .poll_mask = datagram_poll_mask, 846 + .poll = datagram_poll, 847 847 .ioctl = can_ioctl, /* use can_ioctl() from af_can.c */ 848 848 .listen = sock_no_listen, 849 849 .shutdown = sock_no_shutdown,
+9 -4
net/core/datagram.c
··· 819 819 820 820 /** 821 821 * datagram_poll - generic datagram poll 822 + * @file: file struct 822 823 * @sock: socket 823 - * @events to wait for 824 + * @wait: poll table 824 825 * 825 826 * Datagram poll: Again totally generic. This also handles 826 827 * sequenced packet sockets providing the socket receive queue ··· 831 830 * and you use a different write policy from sock_writeable() 832 831 * then please supply your own write_space callback. 833 832 */ 834 - __poll_t datagram_poll_mask(struct socket *sock, __poll_t events) 833 + __poll_t datagram_poll(struct file *file, struct socket *sock, 834 + poll_table *wait) 835 835 { 836 836 struct sock *sk = sock->sk; 837 - __poll_t mask = 0; 837 + __poll_t mask; 838 + 839 + sock_poll_wait(file, sk_sleep(sk), wait); 840 + mask = 0; 838 841 839 842 /* exceptional events? */ 840 843 if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue)) ··· 871 866 872 867 return mask; 873 868 } 874 - EXPORT_SYMBOL(datagram_poll_mask); 869 + EXPORT_SYMBOL(datagram_poll);
+9 -7
net/dccp/ccids/ccid3.c
··· 600 600 { 601 601 struct ccid3_hc_rx_sock *hc = ccid3_hc_rx_sk(sk); 602 602 struct dccp_sock *dp = dccp_sk(sk); 603 - ktime_t now = ktime_get_real(); 603 + ktime_t now = ktime_get(); 604 604 s64 delta = 0; 605 605 606 606 switch (fbtype) { ··· 625 625 case CCID3_FBACK_PERIODIC: 626 626 delta = ktime_us_delta(now, hc->rx_tstamp_last_feedback); 627 627 if (delta <= 0) 628 - DCCP_BUG("delta (%ld) <= 0", (long)delta); 629 - else 630 - hc->rx_x_recv = scaled_div32(hc->rx_bytes_recv, delta); 628 + delta = 1; 629 + hc->rx_x_recv = scaled_div32(hc->rx_bytes_recv, delta); 631 630 break; 632 631 default: 633 632 return; 634 633 } 635 634 636 - ccid3_pr_debug("Interval %ldusec, X_recv=%u, 1/p=%u\n", (long)delta, 635 + ccid3_pr_debug("Interval %lldusec, X_recv=%u, 1/p=%u\n", delta, 637 636 hc->rx_x_recv, hc->rx_pinv); 638 637 639 638 hc->rx_tstamp_last_feedback = now; ··· 679 680 static u32 ccid3_first_li(struct sock *sk) 680 681 { 681 682 struct ccid3_hc_rx_sock *hc = ccid3_hc_rx_sk(sk); 682 - u32 x_recv, p, delta; 683 + u32 x_recv, p; 684 + s64 delta; 683 685 u64 fval; 684 686 685 687 if (hc->rx_rtt == 0) { ··· 688 688 hc->rx_rtt = DCCP_FALLBACK_RTT; 689 689 } 690 690 691 - delta = ktime_to_us(net_timedelta(hc->rx_tstamp_last_feedback)); 691 + delta = ktime_us_delta(ktime_get(), hc->rx_tstamp_last_feedback); 692 + if (delta <= 0) 693 + delta = 1; 692 694 x_recv = scaled_div32(hc->rx_bytes_recv, delta); 693 695 if (x_recv == 0) { /* would also trigger divide-by-zero */ 694 696 DCCP_WARN("X_recv==0\n");
+2 -1
net/dccp/dccp.h
··· 316 316 int flags, int *addr_len); 317 317 void dccp_shutdown(struct sock *sk, int how); 318 318 int inet_dccp_listen(struct socket *sock, int backlog); 319 - __poll_t dccp_poll_mask(struct socket *sock, __poll_t events); 319 + __poll_t dccp_poll(struct file *file, struct socket *sock, 320 + poll_table *wait); 320 321 int dccp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len); 321 322 void dccp_req_err(struct sock *sk, u64 seq); 322 323
+1 -1
net/dccp/ipv4.c
··· 984 984 .accept = inet_accept, 985 985 .getname = inet_getname, 986 986 /* FIXME: work on tcp_poll to rename it to inet_csk_poll */ 987 - .poll_mask = dccp_poll_mask, 987 + .poll = dccp_poll, 988 988 .ioctl = inet_ioctl, 989 989 /* FIXME: work on inet_listen to rename it to sock_common_listen */ 990 990 .listen = inet_dccp_listen,
+1 -1
net/dccp/ipv6.c
··· 1070 1070 .socketpair = sock_no_socketpair, 1071 1071 .accept = inet_accept, 1072 1072 .getname = inet6_getname, 1073 - .poll_mask = dccp_poll_mask, 1073 + .poll = dccp_poll, 1074 1074 .ioctl = inet6_ioctl, 1075 1075 .listen = inet_dccp_listen, 1076 1076 .shutdown = inet_shutdown,
+11 -2
net/dccp/proto.c
··· 312 312 313 313 EXPORT_SYMBOL_GPL(dccp_disconnect); 314 314 315 - __poll_t dccp_poll_mask(struct socket *sock, __poll_t events) 315 + /* 316 + * Wait for a DCCP event. 317 + * 318 + * Note that we don't need to lock the socket, as the upper poll layers 319 + * take care of normal races (between the test and the event) and we don't 320 + * go look at any of the socket buffers directly. 321 + */ 322 + __poll_t dccp_poll(struct file *file, struct socket *sock, 323 + poll_table *wait) 316 324 { 317 325 __poll_t mask; 318 326 struct sock *sk = sock->sk; 319 327 328 + sock_poll_wait(file, sk_sleep(sk), wait); 320 329 if (sk->sk_state == DCCP_LISTEN) 321 330 return inet_csk_listen_poll(sk); 322 331 ··· 367 358 return mask; 368 359 } 369 360 370 - EXPORT_SYMBOL_GPL(dccp_poll_mask); 361 + EXPORT_SYMBOL_GPL(dccp_poll); 371 362 372 363 int dccp_ioctl(struct sock *sk, int cmd, unsigned long arg) 373 364 {
+3 -3
net/decnet/af_decnet.c
··· 1207 1207 } 1208 1208 1209 1209 1210 - static __poll_t dn_poll_mask(struct socket *sock, __poll_t events) 1210 + static __poll_t dn_poll(struct file *file, struct socket *sock, poll_table *wait) 1211 1211 { 1212 1212 struct sock *sk = sock->sk; 1213 1213 struct dn_scp *scp = DN_SK(sk); 1214 - __poll_t mask = datagram_poll_mask(sock, events); 1214 + __poll_t mask = datagram_poll(file, sock, wait); 1215 1215 1216 1216 if (!skb_queue_empty(&scp->other_receive_queue)) 1217 1217 mask |= EPOLLRDBAND; ··· 2331 2331 .socketpair = sock_no_socketpair, 2332 2332 .accept = dn_accept, 2333 2333 .getname = dn_getname, 2334 - .poll_mask = dn_poll_mask, 2334 + .poll = dn_poll, 2335 2335 .ioctl = dn_ioctl, 2336 2336 .listen = dn_listen, 2337 2337 .shutdown = dn_shutdown,
+2 -2
net/ieee802154/socket.c
··· 423 423 .socketpair = sock_no_socketpair, 424 424 .accept = sock_no_accept, 425 425 .getname = sock_no_getname, 426 - .poll_mask = datagram_poll_mask, 426 + .poll = datagram_poll, 427 427 .ioctl = ieee802154_sock_ioctl, 428 428 .listen = sock_no_listen, 429 429 .shutdown = sock_no_shutdown, ··· 969 969 .socketpair = sock_no_socketpair, 970 970 .accept = sock_no_accept, 971 971 .getname = sock_no_getname, 972 - .poll_mask = datagram_poll_mask, 972 + .poll = datagram_poll, 973 973 .ioctl = ieee802154_sock_ioctl, 974 974 .listen = sock_no_listen, 975 975 .shutdown = sock_no_shutdown,
+4 -4
net/ipv4/af_inet.c
··· 986 986 .socketpair = sock_no_socketpair, 987 987 .accept = inet_accept, 988 988 .getname = inet_getname, 989 - .poll_mask = tcp_poll_mask, 989 + .poll = tcp_poll, 990 990 .ioctl = inet_ioctl, 991 991 .listen = inet_listen, 992 992 .shutdown = inet_shutdown, ··· 1021 1021 .socketpair = sock_no_socketpair, 1022 1022 .accept = sock_no_accept, 1023 1023 .getname = inet_getname, 1024 - .poll_mask = udp_poll_mask, 1024 + .poll = udp_poll, 1025 1025 .ioctl = inet_ioctl, 1026 1026 .listen = sock_no_listen, 1027 1027 .shutdown = inet_shutdown, ··· 1042 1042 1043 1043 /* 1044 1044 * For SOCK_RAW sockets; should be the same as inet_dgram_ops but without 1045 - * udp_poll_mask 1045 + * udp_poll 1046 1046 */ 1047 1047 static const struct proto_ops inet_sockraw_ops = { 1048 1048 .family = PF_INET, ··· 1053 1053 .socketpair = sock_no_socketpair, 1054 1054 .accept = sock_no_accept, 1055 1055 .getname = inet_getname, 1056 - .poll_mask = datagram_poll_mask, 1056 + .poll = datagram_poll, 1057 1057 .ioctl = inet_ioctl, 1058 1058 .listen = sock_no_listen, 1059 1059 .shutdown = inet_shutdown,
+17 -6
net/ipv4/tcp.c
··· 494 494 } 495 495 496 496 /* 497 - * Socket is not locked. We are protected from async events by poll logic and 498 - * correct handling of state changes made by other threads is impossible in 499 - * any case. 497 + * Wait for a TCP event. 498 + * 499 + * Note that we don't need to lock the socket, as the upper poll layers 500 + * take care of normal races (between the test and the event) and we don't 501 + * go look at any of the socket buffers directly. 500 502 */ 501 - __poll_t tcp_poll_mask(struct socket *sock, __poll_t events) 503 + __poll_t tcp_poll(struct file *file, struct socket *sock, poll_table *wait) 502 504 { 505 + __poll_t mask; 503 506 struct sock *sk = sock->sk; 504 507 const struct tcp_sock *tp = tcp_sk(sk); 505 - __poll_t mask = 0; 506 508 int state; 509 + 510 + sock_poll_wait(file, sk_sleep(sk), wait); 507 511 508 512 state = inet_sk_state_load(sk); 509 513 if (state == TCP_LISTEN) 510 514 return inet_csk_listen_poll(sk); 515 + 516 + /* Socket is not locked. We are protected from async events 517 + * by poll logic and correct handling of state changes 518 + * made by other threads is impossible in any case. 519 + */ 520 + 521 + mask = 0; 511 522 512 523 /* 513 524 * EPOLLHUP is certainly not done right. But poll() doesn't ··· 600 589 601 590 return mask; 602 591 } 603 - EXPORT_SYMBOL(tcp_poll_mask); 592 + EXPORT_SYMBOL(tcp_poll); 604 593 605 594 int tcp_ioctl(struct sock *sk, int cmd, unsigned long arg) 606 595 {
+5 -5
net/ipv4/udp.c
··· 2591 2591 * udp_poll - wait for a UDP event. 2592 2592 * @file - file struct 2593 2593 * @sock - socket 2594 - * @events - events to wait for 2594 + * @wait - poll table 2595 2595 * 2596 2596 * This is same as datagram poll, except for the special case of 2597 2597 * blocking sockets. If application is using a blocking fd ··· 2600 2600 * but then block when reading it. Add special case code 2601 2601 * to work around these arguably broken applications. 2602 2602 */ 2603 - __poll_t udp_poll_mask(struct socket *sock, __poll_t events) 2603 + __poll_t udp_poll(struct file *file, struct socket *sock, poll_table *wait) 2604 2604 { 2605 - __poll_t mask = datagram_poll_mask(sock, events); 2605 + __poll_t mask = datagram_poll(file, sock, wait); 2606 2606 struct sock *sk = sock->sk; 2607 2607 2608 2608 if (!skb_queue_empty(&udp_sk(sk)->reader_queue)) 2609 2609 mask |= EPOLLIN | EPOLLRDNORM; 2610 2610 2611 2611 /* Check for false positives due to checksum errors */ 2612 - if ((mask & EPOLLRDNORM) && !(sock->file->f_flags & O_NONBLOCK) && 2612 + if ((mask & EPOLLRDNORM) && !(file->f_flags & O_NONBLOCK) && 2613 2613 !(sk->sk_shutdown & RCV_SHUTDOWN) && first_packet_length(sk) == -1) 2614 2614 mask &= ~(EPOLLIN | EPOLLRDNORM); 2615 2615 2616 2616 return mask; 2617 2617 2618 2618 } 2619 - EXPORT_SYMBOL(udp_poll_mask); 2619 + EXPORT_SYMBOL(udp_poll); 2620 2620 2621 2621 int udp_abort(struct sock *sk, int err) 2622 2622 {
+2 -2
net/ipv6/af_inet6.c
··· 570 570 .socketpair = sock_no_socketpair, /* a do nothing */ 571 571 .accept = inet_accept, /* ok */ 572 572 .getname = inet6_getname, 573 - .poll_mask = tcp_poll_mask, /* ok */ 573 + .poll = tcp_poll, /* ok */ 574 574 .ioctl = inet6_ioctl, /* must change */ 575 575 .listen = inet_listen, /* ok */ 576 576 .shutdown = inet_shutdown, /* ok */ ··· 603 603 .socketpair = sock_no_socketpair, /* a do nothing */ 604 604 .accept = sock_no_accept, /* a do nothing */ 605 605 .getname = inet6_getname, 606 - .poll_mask = udp_poll_mask, /* ok */ 606 + .poll = udp_poll, /* ok */ 607 607 .ioctl = inet6_ioctl, /* must change */ 608 608 .listen = sock_no_listen, /* ok */ 609 609 .shutdown = inet_shutdown, /* ok */
+6 -3
net/ipv6/mcast.c
··· 2082 2082 mld_send_initial_cr(idev); 2083 2083 idev->mc_dad_count--; 2084 2084 if (idev->mc_dad_count) 2085 - mld_dad_start_timer(idev, idev->mc_maxdelay); 2085 + mld_dad_start_timer(idev, 2086 + unsolicited_report_interval(idev)); 2086 2087 } 2087 2088 } 2088 2089 ··· 2095 2094 if (idev->mc_dad_count) { 2096 2095 idev->mc_dad_count--; 2097 2096 if (idev->mc_dad_count) 2098 - mld_dad_start_timer(idev, idev->mc_maxdelay); 2097 + mld_dad_start_timer(idev, 2098 + unsolicited_report_interval(idev)); 2099 2099 } 2100 2100 in6_dev_put(idev); 2101 2101 } ··· 2454 2452 if (idev->mc_ifc_count) { 2455 2453 idev->mc_ifc_count--; 2456 2454 if (idev->mc_ifc_count) 2457 - mld_ifc_start_timer(idev, idev->mc_maxdelay); 2455 + mld_ifc_start_timer(idev, 2456 + unsolicited_report_interval(idev)); 2458 2457 } 2459 2458 in6_dev_put(idev); 2460 2459 }
+2 -2
net/ipv6/raw.c
··· 1334 1334 } 1335 1335 #endif /* CONFIG_PROC_FS */ 1336 1336 1337 - /* Same as inet6_dgram_ops, sans udp_poll_mask. */ 1337 + /* Same as inet6_dgram_ops, sans udp_poll. */ 1338 1338 const struct proto_ops inet6_sockraw_ops = { 1339 1339 .family = PF_INET6, 1340 1340 .owner = THIS_MODULE, ··· 1344 1344 .socketpair = sock_no_socketpair, /* a do nothing */ 1345 1345 .accept = sock_no_accept, /* a do nothing */ 1346 1346 .getname = inet6_getname, 1347 - .poll_mask = datagram_poll_mask, /* ok */ 1347 + .poll = datagram_poll, /* ok */ 1348 1348 .ioctl = inet6_ioctl, /* must change */ 1349 1349 .listen = sock_no_listen, /* ok */ 1350 1350 .shutdown = inet_shutdown, /* ok */
+5 -2
net/iucv/af_iucv.c
··· 1488 1488 return 0; 1489 1489 } 1490 1490 1491 - static __poll_t iucv_sock_poll_mask(struct socket *sock, __poll_t events) 1491 + __poll_t iucv_sock_poll(struct file *file, struct socket *sock, 1492 + poll_table *wait) 1492 1493 { 1493 1494 struct sock *sk = sock->sk; 1494 1495 __poll_t mask = 0; 1496 + 1497 + sock_poll_wait(file, sk_sleep(sk), wait); 1495 1498 1496 1499 if (sk->sk_state == IUCV_LISTEN) 1497 1500 return iucv_accept_poll(sk); ··· 2388 2385 .getname = iucv_sock_getname, 2389 2386 .sendmsg = iucv_sock_sendmsg, 2390 2387 .recvmsg = iucv_sock_recvmsg, 2391 - .poll_mask = iucv_sock_poll_mask, 2388 + .poll = iucv_sock_poll, 2392 2389 .ioctl = sock_no_ioctl, 2393 2390 .mmap = sock_no_mmap, 2394 2391 .socketpair = sock_no_socketpair,
+5 -5
net/kcm/kcmsock.c
··· 1336 1336 struct list_head *head; 1337 1337 int index = 0; 1338 1338 1339 - /* For SOCK_SEQPACKET sock type, datagram_poll_mask checks the sk_state, 1340 - * so we set sk_state, otherwise epoll_wait always returns right away 1341 - * with EPOLLHUP 1339 + /* For SOCK_SEQPACKET sock type, datagram_poll checks the sk_state, so 1340 + * we set sk_state, otherwise epoll_wait always returns right away with 1341 + * EPOLLHUP 1342 1342 */ 1343 1343 kcm->sk.sk_state = TCP_ESTABLISHED; 1344 1344 ··· 1903 1903 .socketpair = sock_no_socketpair, 1904 1904 .accept = sock_no_accept, 1905 1905 .getname = sock_no_getname, 1906 - .poll_mask = datagram_poll_mask, 1906 + .poll = datagram_poll, 1907 1907 .ioctl = kcm_ioctl, 1908 1908 .listen = sock_no_listen, 1909 1909 .shutdown = sock_no_shutdown, ··· 1924 1924 .socketpair = sock_no_socketpair, 1925 1925 .accept = sock_no_accept, 1926 1926 .getname = sock_no_getname, 1927 - .poll_mask = datagram_poll_mask, 1927 + .poll = datagram_poll, 1928 1928 .ioctl = kcm_ioctl, 1929 1929 .listen = sock_no_listen, 1930 1930 .shutdown = sock_no_shutdown,
+1 -1
net/key/af_key.c
··· 3751 3751 3752 3752 /* Now the operations that really occur. */ 3753 3753 .release = pfkey_release, 3754 - .poll_mask = datagram_poll_mask, 3754 + .poll = datagram_poll, 3755 3755 .sendmsg = pfkey_sendmsg, 3756 3756 .recvmsg = pfkey_recvmsg, 3757 3757 };
+1 -1
net/l2tp/l2tp_ip.c
··· 613 613 .socketpair = sock_no_socketpair, 614 614 .accept = sock_no_accept, 615 615 .getname = l2tp_ip_getname, 616 - .poll_mask = datagram_poll_mask, 616 + .poll = datagram_poll, 617 617 .ioctl = inet_ioctl, 618 618 .listen = sock_no_listen, 619 619 .shutdown = inet_shutdown,
+1 -1
net/l2tp/l2tp_ip6.c
··· 754 754 .socketpair = sock_no_socketpair, 755 755 .accept = sock_no_accept, 756 756 .getname = l2tp_ip6_getname, 757 - .poll_mask = datagram_poll_mask, 757 + .poll = datagram_poll, 758 758 .ioctl = inet6_ioctl, 759 759 .listen = sock_no_listen, 760 760 .shutdown = inet_shutdown,
+1 -1
net/l2tp/l2tp_ppp.c
··· 1818 1818 .socketpair = sock_no_socketpair, 1819 1819 .accept = sock_no_accept, 1820 1820 .getname = pppol2tp_getname, 1821 - .poll_mask = datagram_poll_mask, 1821 + .poll = datagram_poll, 1822 1822 .listen = sock_no_listen, 1823 1823 .shutdown = sock_no_shutdown, 1824 1824 .setsockopt = pppol2tp_setsockopt,
+1 -1
net/llc/af_llc.c
··· 1192 1192 .socketpair = sock_no_socketpair, 1193 1193 .accept = llc_ui_accept, 1194 1194 .getname = llc_ui_getname, 1195 - .poll_mask = datagram_poll_mask, 1195 + .poll = datagram_poll, 1196 1196 .ioctl = llc_ui_ioctl, 1197 1197 .listen = llc_ui_listen, 1198 1198 .shutdown = llc_ui_shutdown,
+1 -1
net/netlink/af_netlink.c
··· 2658 2658 .socketpair = sock_no_socketpair, 2659 2659 .accept = sock_no_accept, 2660 2660 .getname = netlink_getname, 2661 - .poll_mask = datagram_poll_mask, 2661 + .poll = datagram_poll, 2662 2662 .ioctl = netlink_ioctl, 2663 2663 .listen = sock_no_listen, 2664 2664 .shutdown = sock_no_shutdown,
+1 -1
net/netrom/af_netrom.c
··· 1355 1355 .socketpair = sock_no_socketpair, 1356 1356 .accept = nr_accept, 1357 1357 .getname = nr_getname, 1358 - .poll_mask = datagram_poll_mask, 1358 + .poll = datagram_poll, 1359 1359 .ioctl = nr_ioctl, 1360 1360 .listen = nr_listen, 1361 1361 .shutdown = sock_no_shutdown,
+6 -3
net/nfc/llcp_sock.c
··· 548 548 return 0; 549 549 } 550 550 551 - static __poll_t llcp_sock_poll_mask(struct socket *sock, __poll_t events) 551 + static __poll_t llcp_sock_poll(struct file *file, struct socket *sock, 552 + poll_table *wait) 552 553 { 553 554 struct sock *sk = sock->sk; 554 555 __poll_t mask = 0; 555 556 556 557 pr_debug("%p\n", sk); 558 + 559 + sock_poll_wait(file, sk_sleep(sk), wait); 557 560 558 561 if (sk->sk_state == LLCP_LISTEN) 559 562 return llcp_accept_poll(sk); ··· 899 896 .socketpair = sock_no_socketpair, 900 897 .accept = llcp_sock_accept, 901 898 .getname = llcp_sock_getname, 902 - .poll_mask = llcp_sock_poll_mask, 899 + .poll = llcp_sock_poll, 903 900 .ioctl = sock_no_ioctl, 904 901 .listen = llcp_sock_listen, 905 902 .shutdown = sock_no_shutdown, ··· 919 916 .socketpair = sock_no_socketpair, 920 917 .accept = sock_no_accept, 921 918 .getname = llcp_sock_getname, 922 - .poll_mask = llcp_sock_poll_mask, 919 + .poll = llcp_sock_poll, 923 920 .ioctl = sock_no_ioctl, 924 921 .listen = sock_no_listen, 925 922 .shutdown = sock_no_shutdown,
+2 -2
net/nfc/rawsock.c
··· 284 284 .socketpair = sock_no_socketpair, 285 285 .accept = sock_no_accept, 286 286 .getname = sock_no_getname, 287 - .poll_mask = datagram_poll_mask, 287 + .poll = datagram_poll, 288 288 .ioctl = sock_no_ioctl, 289 289 .listen = sock_no_listen, 290 290 .shutdown = sock_no_shutdown, ··· 304 304 .socketpair = sock_no_socketpair, 305 305 .accept = sock_no_accept, 306 306 .getname = sock_no_getname, 307 - .poll_mask = datagram_poll_mask, 307 + .poll = datagram_poll, 308 308 .ioctl = sock_no_ioctl, 309 309 .listen = sock_no_listen, 310 310 .shutdown = sock_no_shutdown,
+12 -13
net/packet/af_packet.c
··· 2262 2262 if (po->stats.stats1.tp_drops) 2263 2263 status |= TP_STATUS_LOSING; 2264 2264 } 2265 + 2266 + if (do_vnet && 2267 + virtio_net_hdr_from_skb(skb, h.raw + macoff - 2268 + sizeof(struct virtio_net_hdr), 2269 + vio_le(), true, 0)) 2270 + goto drop_n_account; 2271 + 2265 2272 po->stats.stats1.tp_packets++; 2266 2273 if (copy_skb) { 2267 2274 status |= TP_STATUS_COPY; 2268 2275 __skb_queue_tail(&sk->sk_receive_queue, copy_skb); 2269 2276 } 2270 2277 spin_unlock(&sk->sk_receive_queue.lock); 2271 - 2272 - if (do_vnet) { 2273 - if (virtio_net_hdr_from_skb(skb, h.raw + macoff - 2274 - sizeof(struct virtio_net_hdr), 2275 - vio_le(), true, 0)) { 2276 - spin_lock(&sk->sk_receive_queue.lock); 2277 - goto drop_n_account; 2278 - } 2279 - } 2280 2278 2281 2279 skb_copy_bits(skb, 0, h.raw + macoff, snaplen); 2282 2280 ··· 4076 4078 return 0; 4077 4079 } 4078 4080 4079 - static __poll_t packet_poll_mask(struct socket *sock, __poll_t events) 4081 + static __poll_t packet_poll(struct file *file, struct socket *sock, 4082 + poll_table *wait) 4080 4083 { 4081 4084 struct sock *sk = sock->sk; 4082 4085 struct packet_sock *po = pkt_sk(sk); 4083 - __poll_t mask = datagram_poll_mask(sock, events); 4086 + __poll_t mask = datagram_poll(file, sock, wait); 4084 4087 4085 4088 spin_lock_bh(&sk->sk_receive_queue.lock); 4086 4089 if (po->rx_ring.pg_vec) { ··· 4423 4424 .socketpair = sock_no_socketpair, 4424 4425 .accept = sock_no_accept, 4425 4426 .getname = packet_getname_spkt, 4426 - .poll_mask = datagram_poll_mask, 4427 + .poll = datagram_poll, 4427 4428 .ioctl = packet_ioctl, 4428 4429 .listen = sock_no_listen, 4429 4430 .shutdown = sock_no_shutdown, ··· 4444 4445 .socketpair = sock_no_socketpair, 4445 4446 .accept = sock_no_accept, 4446 4447 .getname = packet_getname, 4447 - .poll_mask = packet_poll_mask, 4448 + .poll = packet_poll, 4448 4449 .ioctl = packet_ioctl, 4449 4450 .listen = sock_no_listen, 4450 4451 .shutdown = sock_no_shutdown,
+6 -3
net/phonet/socket.c
··· 340 340 return sizeof(struct sockaddr_pn); 341 341 } 342 342 343 - static __poll_t pn_socket_poll_mask(struct socket *sock, __poll_t events) 343 + static __poll_t pn_socket_poll(struct file *file, struct socket *sock, 344 + poll_table *wait) 344 345 { 345 346 struct sock *sk = sock->sk; 346 347 struct pep_sock *pn = pep_sk(sk); 347 348 __poll_t mask = 0; 349 + 350 + poll_wait(file, sk_sleep(sk), wait); 348 351 349 352 if (sk->sk_state == TCP_CLOSE) 350 353 return EPOLLERR; ··· 448 445 .socketpair = sock_no_socketpair, 449 446 .accept = sock_no_accept, 450 447 .getname = pn_socket_getname, 451 - .poll_mask = datagram_poll_mask, 448 + .poll = datagram_poll, 452 449 .ioctl = pn_socket_ioctl, 453 450 .listen = sock_no_listen, 454 451 .shutdown = sock_no_shutdown, ··· 473 470 .socketpair = sock_no_socketpair, 474 471 .accept = pn_socket_accept, 475 472 .getname = pn_socket_getname, 476 - .poll_mask = pn_socket_poll_mask, 473 + .poll = pn_socket_poll, 477 474 .ioctl = pn_socket_ioctl, 478 475 .listen = pn_socket_listen, 479 476 .shutdown = sock_no_shutdown,
+1 -1
net/qrtr/qrtr.c
··· 1023 1023 .recvmsg = qrtr_recvmsg, 1024 1024 .getname = qrtr_getname, 1025 1025 .ioctl = qrtr_ioctl, 1026 - .poll_mask = datagram_poll_mask, 1026 + .poll = datagram_poll, 1027 1027 .shutdown = sock_no_shutdown, 1028 1028 .setsockopt = sock_no_setsockopt, 1029 1029 .getsockopt = sock_no_getsockopt,
+1 -1
net/rose/af_rose.c
··· 1470 1470 .socketpair = sock_no_socketpair, 1471 1471 .accept = rose_accept, 1472 1472 .getname = rose_getname, 1473 - .poll_mask = datagram_poll_mask, 1473 + .poll = datagram_poll, 1474 1474 .ioctl = rose_ioctl, 1475 1475 .listen = rose_listen, 1476 1476 .shutdown = sock_no_shutdown,
+7 -3
net/rxrpc/af_rxrpc.c
··· 734 734 /* 735 735 * permit an RxRPC socket to be polled 736 736 */ 737 - static __poll_t rxrpc_poll_mask(struct socket *sock, __poll_t events) 737 + static __poll_t rxrpc_poll(struct file *file, struct socket *sock, 738 + poll_table *wait) 738 739 { 739 740 struct sock *sk = sock->sk; 740 741 struct rxrpc_sock *rx = rxrpc_sk(sk); 741 - __poll_t mask = 0; 742 + __poll_t mask; 743 + 744 + sock_poll_wait(file, sk_sleep(sk), wait); 745 + mask = 0; 742 746 743 747 /* the socket is readable if there are any messages waiting on the Rx 744 748 * queue */ ··· 949 945 .socketpair = sock_no_socketpair, 950 946 .accept = sock_no_accept, 951 947 .getname = sock_no_getname, 952 - .poll_mask = rxrpc_poll_mask, 948 + .poll = rxrpc_poll, 953 949 .ioctl = sock_no_ioctl, 954 950 .listen = rxrpc_listen, 955 951 .shutdown = rxrpc_shutdown,
+17 -4
net/sched/cls_flower.c
··· 66 66 struct rhashtable_params filter_ht_params; 67 67 struct flow_dissector dissector; 68 68 struct list_head filters; 69 - struct rcu_head rcu; 69 + struct rcu_work rwork; 70 70 struct list_head list; 71 71 }; 72 72 ··· 203 203 return rhashtable_init(&head->ht, &mask_ht_params); 204 204 } 205 205 206 + static void fl_mask_free(struct fl_flow_mask *mask) 207 + { 208 + rhashtable_destroy(&mask->ht); 209 + kfree(mask); 210 + } 211 + 212 + static void fl_mask_free_work(struct work_struct *work) 213 + { 214 + struct fl_flow_mask *mask = container_of(to_rcu_work(work), 215 + struct fl_flow_mask, rwork); 216 + 217 + fl_mask_free(mask); 218 + } 219 + 206 220 static bool fl_mask_put(struct cls_fl_head *head, struct fl_flow_mask *mask, 207 221 bool async) 208 222 { ··· 224 210 return false; 225 211 226 212 rhashtable_remove_fast(&head->ht, &mask->ht_node, mask_ht_params); 227 - rhashtable_destroy(&mask->ht); 228 213 list_del_rcu(&mask->list); 229 214 if (async) 230 - kfree_rcu(mask, rcu); 215 + tcf_queue_work(&mask->rwork, fl_mask_free_work); 231 216 else 232 - kfree(mask); 217 + fl_mask_free(mask); 233 218 234 219 return true; 235 220 }
+2 -2
net/sched/sch_hfsc.c
··· 1385 1385 if (next_time == 0 || next_time > q->root.cl_cfmin) 1386 1386 next_time = q->root.cl_cfmin; 1387 1387 } 1388 - WARN_ON(next_time == 0); 1389 - qdisc_watchdog_schedule(&q->watchdog, next_time); 1388 + if (next_time) 1389 + qdisc_watchdog_schedule(&q->watchdog, next_time); 1390 1390 } 1391 1391 1392 1392 static int
+3 -1
net/sctp/chunk.c
··· 237 237 /* Account for a different sized first fragment */ 238 238 if (msg_len >= first_len) { 239 239 msg->can_delay = 0; 240 - SCTP_INC_STATS(sock_net(asoc->base.sk), SCTP_MIB_FRAGUSRMSGS); 240 + if (msg_len > first_len) 241 + SCTP_INC_STATS(sock_net(asoc->base.sk), 242 + SCTP_MIB_FRAGUSRMSGS); 241 243 } else { 242 244 /* Which may be the only one... */ 243 245 first_len = msg_len;
+1 -1
net/sctp/ipv6.c
··· 1010 1010 .socketpair = sock_no_socketpair, 1011 1011 .accept = inet_accept, 1012 1012 .getname = sctp_getname, 1013 - .poll_mask = sctp_poll_mask, 1013 + .poll = sctp_poll, 1014 1014 .ioctl = inet6_ioctl, 1015 1015 .listen = sctp_inet_listen, 1016 1016 .shutdown = inet_shutdown,
+1 -1
net/sctp/protocol.c
··· 1016 1016 .socketpair = sock_no_socketpair, 1017 1017 .accept = inet_accept, 1018 1018 .getname = inet_getname, /* Semantics are different. */ 1019 - .poll_mask = sctp_poll_mask, 1019 + .poll = sctp_poll, 1020 1020 .ioctl = inet_ioctl, 1021 1021 .listen = sctp_inet_listen, 1022 1022 .shutdown = inet_shutdown, /* Looks harmless. */
+3 -1
net/sctp/socket.c
··· 7717 7717 * here, again, by modeling the current TCP/UDP code. We don't have 7718 7718 * a good way to test with it yet. 7719 7719 */ 7720 - __poll_t sctp_poll_mask(struct socket *sock, __poll_t events) 7720 + __poll_t sctp_poll(struct file *file, struct socket *sock, poll_table *wait) 7721 7721 { 7722 7722 struct sock *sk = sock->sk; 7723 7723 struct sctp_sock *sp = sctp_sk(sk); 7724 7724 __poll_t mask; 7725 + 7726 + poll_wait(file, sk_sleep(sk), wait); 7725 7727 7726 7728 sock_rps_record_flow(sk); 7727 7729
+9 -3
net/smc/af_smc.c
··· 1273 1273 return mask; 1274 1274 } 1275 1275 1276 - static __poll_t smc_poll_mask(struct socket *sock, __poll_t events) 1276 + static __poll_t smc_poll(struct file *file, struct socket *sock, 1277 + poll_table *wait) 1277 1278 { 1278 1279 struct sock *sk = sock->sk; 1279 1280 __poll_t mask = 0; ··· 1290 1289 if ((sk->sk_state == SMC_INIT) || smc->use_fallback) { 1291 1290 /* delegate to CLC child sock */ 1292 1291 release_sock(sk); 1293 - mask = smc->clcsock->ops->poll_mask(smc->clcsock, events); 1292 + mask = smc->clcsock->ops->poll(file, smc->clcsock, wait); 1294 1293 lock_sock(sk); 1295 1294 sk->sk_err = smc->clcsock->sk->sk_err; 1296 1295 if (sk->sk_err) { ··· 1308 1307 } 1309 1308 } 1310 1309 } else { 1310 + if (sk->sk_state != SMC_CLOSED) { 1311 + release_sock(sk); 1312 + sock_poll_wait(file, sk_sleep(sk), wait); 1313 + lock_sock(sk); 1314 + } 1311 1315 if (sk->sk_err) 1312 1316 mask |= EPOLLERR; 1313 1317 if ((sk->sk_shutdown == SHUTDOWN_MASK) || ··· 1625 1619 .socketpair = sock_no_socketpair, 1626 1620 .accept = smc_accept, 1627 1621 .getname = smc_getname, 1628 - .poll_mask = smc_poll_mask, 1622 + .poll = smc_poll, 1629 1623 .ioctl = smc_ioctl, 1630 1624 .listen = smc_listen, 1631 1625 .shutdown = smc_shutdown,
+7 -43
net/socket.c
··· 117 117 static int sock_mmap(struct file *file, struct vm_area_struct *vma); 118 118 119 119 static int sock_close(struct inode *inode, struct file *file); 120 - static struct wait_queue_head *sock_get_poll_head(struct file *file, 121 - __poll_t events); 122 - static __poll_t sock_poll_mask(struct file *file, __poll_t); 123 - static __poll_t sock_poll(struct file *file, struct poll_table_struct *wait); 120 + static __poll_t sock_poll(struct file *file, 121 + struct poll_table_struct *wait); 124 122 static long sock_ioctl(struct file *file, unsigned int cmd, unsigned long arg); 125 123 #ifdef CONFIG_COMPAT 126 124 static long compat_sock_ioctl(struct file *file, ··· 141 143 .llseek = no_llseek, 142 144 .read_iter = sock_read_iter, 143 145 .write_iter = sock_write_iter, 144 - .get_poll_head = sock_get_poll_head, 145 - .poll_mask = sock_poll_mask, 146 146 .poll = sock_poll, 147 147 .unlocked_ioctl = sock_ioctl, 148 148 #ifdef CONFIG_COMPAT ··· 1126 1130 } 1127 1131 EXPORT_SYMBOL(sock_create_lite); 1128 1132 1129 - static struct wait_queue_head *sock_get_poll_head(struct file *file, 1130 - __poll_t events) 1131 - { 1132 - struct socket *sock = file->private_data; 1133 - 1134 - if (!sock->ops->poll_mask) 1135 - return NULL; 1136 - sock_poll_busy_loop(sock, events); 1137 - return sk_sleep(sock->sk); 1138 - } 1139 - 1140 - static __poll_t sock_poll_mask(struct file *file, __poll_t events) 1141 - { 1142 - struct socket *sock = file->private_data; 1143 - 1144 - /* 1145 - * We need to be sure we are in sync with the socket flags modification. 1146 - * 1147 - * This memory barrier is paired in the wq_has_sleeper. 1148 - */ 1149 - smp_mb(); 1150 - 1151 - /* this socket can poll_ll so tell the system call */ 1152 - return sock->ops->poll_mask(sock, events) | 1153 - (sk_can_busy_loop(sock->sk) ? POLL_BUSY_LOOP : 0); 1154 - } 1155 - 1156 1133 /* No kernel lock held - perfect */ 1157 1134 static __poll_t sock_poll(struct file *file, poll_table *wait) 1158 1135 { 1159 1136 struct socket *sock = file->private_data; 1160 - __poll_t events = poll_requested_events(wait), mask = 0; 1137 + __poll_t events = poll_requested_events(wait); 1161 1138 1162 - if (sock->ops->poll) { 1163 - sock_poll_busy_loop(sock, events); 1164 - mask = sock->ops->poll(file, sock, wait); 1165 - } else if (sock->ops->poll_mask) { 1166 - sock_poll_wait(file, sock_get_poll_head(file, events), wait); 1167 - mask = sock->ops->poll_mask(sock, events); 1168 - } 1169 - 1170 - return mask | sock_poll_busy_flag(sock); 1139 + sock_poll_busy_loop(sock, events); 1140 + if (!sock->ops->poll) 1141 + return 0; 1142 + return sock->ops->poll(file, sock, wait) | sock_poll_busy_flag(sock); 1171 1143 } 1172 1144 1173 1145 static int sock_mmap(struct file *file, struct vm_area_struct *vma)
+1 -4
net/strparser/strparser.c
··· 392 392 /* Lower sock lock held */ 393 393 void strp_data_ready(struct strparser *strp) 394 394 { 395 - if (unlikely(strp->stopped)) 395 + if (unlikely(strp->stopped) || strp->paused) 396 396 return; 397 397 398 398 /* This check is needed to synchronize with do_strp_work. ··· 406 406 queue_work(strp_wq, &strp->work); 407 407 return; 408 408 } 409 - 410 - if (strp->paused) 411 - return; 412 409 413 410 if (strp->need_bytes) { 414 411 if (strp_peek_len(strp) < strp->need_bytes)
+9 -5
net/tipc/socket.c
··· 692 692 } 693 693 694 694 /** 695 - * tipc_poll - read pollmask 695 + * tipc_poll - read and possibly block on pollmask 696 696 * @file: file structure associated with the socket 697 697 * @sock: socket for which to calculate the poll bits 698 + * @wait: ??? 698 699 * 699 700 * Returns pollmask value 700 701 * ··· 709 708 * imply that the operation will succeed, merely that it should be performed 710 709 * and will not block. 711 710 */ 712 - static __poll_t tipc_poll_mask(struct socket *sock, __poll_t events) 711 + static __poll_t tipc_poll(struct file *file, struct socket *sock, 712 + poll_table *wait) 713 713 { 714 714 struct sock *sk = sock->sk; 715 715 struct tipc_sock *tsk = tipc_sk(sk); 716 716 __poll_t revents = 0; 717 + 718 + sock_poll_wait(file, sk_sleep(sk), wait); 717 719 718 720 if (sk->sk_shutdown & RCV_SHUTDOWN) 719 721 revents |= EPOLLRDHUP | EPOLLIN | EPOLLRDNORM; ··· 3037 3033 .socketpair = tipc_socketpair, 3038 3034 .accept = sock_no_accept, 3039 3035 .getname = tipc_getname, 3040 - .poll_mask = tipc_poll_mask, 3036 + .poll = tipc_poll, 3041 3037 .ioctl = tipc_ioctl, 3042 3038 .listen = sock_no_listen, 3043 3039 .shutdown = tipc_shutdown, ··· 3058 3054 .socketpair = tipc_socketpair, 3059 3055 .accept = tipc_accept, 3060 3056 .getname = tipc_getname, 3061 - .poll_mask = tipc_poll_mask, 3057 + .poll = tipc_poll, 3062 3058 .ioctl = tipc_ioctl, 3063 3059 .listen = tipc_listen, 3064 3060 .shutdown = tipc_shutdown, ··· 3079 3075 .socketpair = tipc_socketpair, 3080 3076 .accept = tipc_accept, 3081 3077 .getname = tipc_getname, 3082 - .poll_mask = tipc_poll_mask, 3078 + .poll = tipc_poll, 3083 3079 .ioctl = tipc_ioctl, 3084 3080 .listen = tipc_listen, 3085 3081 .shutdown = tipc_shutdown,
+1 -1
net/tls/tls_main.c
··· 712 712 build_protos(tls_prots[TLSV4], &tcp_prot); 713 713 714 714 tls_sw_proto_ops = inet_stream_ops; 715 - tls_sw_proto_ops.poll_mask = tls_sw_poll_mask; 715 + tls_sw_proto_ops.poll = tls_sw_poll; 716 716 tls_sw_proto_ops.splice_read = tls_sw_splice_read; 717 717 718 718 #ifdef CONFIG_TLS_DEVICE
+10 -9
net/tls/tls_sw.c
··· 919 919 return copied ? : err; 920 920 } 921 921 922 - __poll_t tls_sw_poll_mask(struct socket *sock, __poll_t events) 922 + unsigned int tls_sw_poll(struct file *file, struct socket *sock, 923 + struct poll_table_struct *wait) 923 924 { 925 + unsigned int ret; 924 926 struct sock *sk = sock->sk; 925 927 struct tls_context *tls_ctx = tls_get_ctx(sk); 926 928 struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx); 927 - __poll_t mask; 928 929 929 - /* Grab EPOLLOUT and EPOLLHUP from the underlying socket */ 930 - mask = ctx->sk_poll_mask(sock, events); 930 + /* Grab POLLOUT and POLLHUP from the underlying socket */ 931 + ret = ctx->sk_poll(file, sock, wait); 931 932 932 - /* Clear EPOLLIN bits, and set based on recv_pkt */ 933 - mask &= ~(EPOLLIN | EPOLLRDNORM); 933 + /* Clear POLLIN bits, and set based on recv_pkt */ 934 + ret &= ~(POLLIN | POLLRDNORM); 934 935 if (ctx->recv_pkt) 935 - mask |= EPOLLIN | EPOLLRDNORM; 936 + ret |= POLLIN | POLLRDNORM; 936 937 937 - return mask; 938 + return ret; 938 939 } 939 940 940 941 static int tls_read_size(struct strparser *strp, struct sk_buff *skb) ··· 1192 1191 sk->sk_data_ready = tls_data_ready; 1193 1192 write_unlock_bh(&sk->sk_callback_lock); 1194 1193 1195 - sw_ctx_rx->sk_poll_mask = sk->sk_socket->ops->poll_mask; 1194 + sw_ctx_rx->sk_poll = sk->sk_socket->ops->poll; 1196 1195 1197 1196 strp_check_rcv(&sw_ctx_rx->strp); 1198 1197 }
+19 -11
net/unix/af_unix.c
··· 638 638 static int unix_socketpair(struct socket *, struct socket *); 639 639 static int unix_accept(struct socket *, struct socket *, int, bool); 640 640 static int unix_getname(struct socket *, struct sockaddr *, int); 641 - static __poll_t unix_poll_mask(struct socket *, __poll_t); 642 - static __poll_t unix_dgram_poll_mask(struct socket *, __poll_t); 641 + static __poll_t unix_poll(struct file *, struct socket *, poll_table *); 642 + static __poll_t unix_dgram_poll(struct file *, struct socket *, 643 + poll_table *); 643 644 static int unix_ioctl(struct socket *, unsigned int, unsigned long); 644 645 static int unix_shutdown(struct socket *, int); 645 646 static int unix_stream_sendmsg(struct socket *, struct msghdr *, size_t); ··· 681 680 .socketpair = unix_socketpair, 682 681 .accept = unix_accept, 683 682 .getname = unix_getname, 684 - .poll_mask = unix_poll_mask, 683 + .poll = unix_poll, 685 684 .ioctl = unix_ioctl, 686 685 .listen = unix_listen, 687 686 .shutdown = unix_shutdown, ··· 704 703 .socketpair = unix_socketpair, 705 704 .accept = sock_no_accept, 706 705 .getname = unix_getname, 707 - .poll_mask = unix_dgram_poll_mask, 706 + .poll = unix_dgram_poll, 708 707 .ioctl = unix_ioctl, 709 708 .listen = sock_no_listen, 710 709 .shutdown = unix_shutdown, ··· 726 725 .socketpair = unix_socketpair, 727 726 .accept = unix_accept, 728 727 .getname = unix_getname, 729 - .poll_mask = unix_dgram_poll_mask, 728 + .poll = unix_dgram_poll, 730 729 .ioctl = unix_ioctl, 731 730 .listen = unix_listen, 732 731 .shutdown = unix_shutdown, ··· 2630 2629 return err; 2631 2630 } 2632 2631 2633 - static __poll_t unix_poll_mask(struct socket *sock, __poll_t events) 2632 + static __poll_t unix_poll(struct file *file, struct socket *sock, poll_table *wait) 2634 2633 { 2635 2634 struct sock *sk = sock->sk; 2636 - __poll_t mask = 0; 2635 + __poll_t mask; 2636 + 2637 + sock_poll_wait(file, sk_sleep(sk), wait); 2638 + mask = 0; 2637 2639 2638 2640 /* exceptional events? */ 2639 2641 if (sk->sk_err) ··· 2665 2661 return mask; 2666 2662 } 2667 2663 2668 - static __poll_t unix_dgram_poll_mask(struct socket *sock, __poll_t events) 2664 + static __poll_t unix_dgram_poll(struct file *file, struct socket *sock, 2665 + poll_table *wait) 2669 2666 { 2670 2667 struct sock *sk = sock->sk, *other; 2671 - int writable; 2672 - __poll_t mask = 0; 2668 + unsigned int writable; 2669 + __poll_t mask; 2670 + 2671 + sock_poll_wait(file, sk_sleep(sk), wait); 2672 + mask = 0; 2673 2673 2674 2674 /* exceptional events? */ 2675 2675 if (sk->sk_err || !skb_queue_empty(&sk->sk_error_queue)) ··· 2699 2691 } 2700 2692 2701 2693 /* No write status requested, avoid expensive OUT tests. */ 2702 - if (!(events & (EPOLLWRBAND|EPOLLWRNORM|EPOLLOUT))) 2694 + if (!(poll_requested_events(wait) & (EPOLLWRBAND|EPOLLWRNORM|EPOLLOUT))) 2703 2695 return mask; 2704 2696 2705 2697 writable = unix_writable(sk);
+13 -6
net/vmw_vsock/af_vsock.c
··· 850 850 return err; 851 851 } 852 852 853 - static __poll_t vsock_poll_mask(struct socket *sock, __poll_t events) 853 + static __poll_t vsock_poll(struct file *file, struct socket *sock, 854 + poll_table *wait) 854 855 { 855 - struct sock *sk = sock->sk; 856 - struct vsock_sock *vsk = vsock_sk(sk); 857 - __poll_t mask = 0; 856 + struct sock *sk; 857 + __poll_t mask; 858 + struct vsock_sock *vsk; 859 + 860 + sk = sock->sk; 861 + vsk = vsock_sk(sk); 862 + 863 + poll_wait(file, sk_sleep(sk), wait); 864 + mask = 0; 858 865 859 866 if (sk->sk_err) 860 867 /* Signify that there has been an error on this socket. */ ··· 1091 1084 .socketpair = sock_no_socketpair, 1092 1085 .accept = sock_no_accept, 1093 1086 .getname = vsock_getname, 1094 - .poll_mask = vsock_poll_mask, 1087 + .poll = vsock_poll, 1095 1088 .ioctl = sock_no_ioctl, 1096 1089 .listen = sock_no_listen, 1097 1090 .shutdown = vsock_shutdown, ··· 1849 1842 .socketpair = sock_no_socketpair, 1850 1843 .accept = vsock_accept, 1851 1844 .getname = vsock_getname, 1852 - .poll_mask = vsock_poll_mask, 1845 + .poll = vsock_poll, 1853 1846 .ioctl = sock_no_ioctl, 1854 1847 .listen = vsock_listen, 1855 1848 .shutdown = vsock_shutdown,
+1 -1
net/vmw_vsock/virtio_transport.c
··· 201 201 return -ENODEV; 202 202 } 203 203 204 - if (le32_to_cpu(pkt->hdr.dst_cid) == vsock->guest_cid) 204 + if (le64_to_cpu(pkt->hdr.dst_cid) == vsock->guest_cid) 205 205 return virtio_transport_send_pkt_loopback(vsock, pkt); 206 206 207 207 if (pkt->reply)
+1 -1
net/x25/af_x25.c
··· 1750 1750 .socketpair = sock_no_socketpair, 1751 1751 .accept = x25_accept, 1752 1752 .getname = x25_getname, 1753 - .poll_mask = datagram_poll_mask, 1753 + .poll = datagram_poll, 1754 1754 .ioctl = x25_ioctl, 1755 1755 #ifdef CONFIG_COMPAT 1756 1756 .compat_ioctl = compat_x25_ioctl,
+4 -3
net/xdp/xsk.c
··· 303 303 return (xs->zc) ? xsk_zc_xmit(sk) : xsk_generic_xmit(sk, m, total_len); 304 304 } 305 305 306 - static __poll_t xsk_poll_mask(struct socket *sock, __poll_t events) 306 + static unsigned int xsk_poll(struct file *file, struct socket *sock, 307 + struct poll_table_struct *wait) 307 308 { 308 - __poll_t mask = datagram_poll_mask(sock, events); 309 + unsigned int mask = datagram_poll(file, sock, wait); 309 310 struct sock *sk = sock->sk; 310 311 struct xdp_sock *xs = xdp_sk(sk); 311 312 ··· 697 696 .socketpair = sock_no_socketpair, 698 697 .accept = sock_no_accept, 699 698 .getname = sock_no_getname, 700 - .poll_mask = xsk_poll_mask, 699 + .poll = xsk_poll, 701 700 .ioctl = sock_no_ioctl, 702 701 .listen = sock_no_listen, 703 702 .shutdown = sock_no_shutdown,
-6
scripts/checkpatch.pl
··· 2606 2606 "A patch subject line should describe the change not the tool that found it\n" . $herecurr); 2607 2607 } 2608 2608 2609 - # Check for old stable address 2610 - if ($line =~ /^\s*cc:\s*.*<?\bstable\@kernel\.org\b>?.*$/i) { 2611 - ERROR("STABLE_ADDRESS", 2612 - "The 'stable' address should be 'stable\@vger.kernel.org'\n" . $herecurr); 2613 - } 2614 - 2615 2609 # Check for unwanted Gerrit info 2616 2610 if ($in_commit_log && $line =~ /^\s*change-id:/i) { 2617 2611 ERROR("GERRIT_CHANGE_ID",
+1 -1
scripts/gcc-x86_64-has-stack-protector.sh
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 - echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -O0 -mcmodel=kernel -fno-PIE -fstack-protector - -o - 2> /dev/null | grep -q "%gs" 4 + echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -m64 -O0 -mcmodel=kernel -fno-PIE -fstack-protector - -o - 2> /dev/null | grep -q "%gs"
+3
scripts/kconfig/expr.h
··· 171 171 * config BAZ 172 172 * int "BAZ Value" 173 173 * range 1..255 174 + * 175 + * Please, also check zconf.y:print_symbol() when modifying the 176 + * list of property types! 174 177 */ 175 178 enum prop_type { 176 179 P_UNKNOWN,
+1 -1
scripts/kconfig/preprocess.c
··· 156 156 nread--; 157 157 158 158 /* remove trailing new lines */ 159 - while (buf[nread - 1] == '\n') 159 + while (nread > 0 && buf[nread - 1] == '\n') 160 160 nread--; 161 161 162 162 buf[nread] = 0;
+6 -2
scripts/kconfig/zconf.y
··· 31 31 static struct menu *current_menu, *current_entry; 32 32 33 33 %} 34 - %expect 32 34 + %expect 31 35 35 36 36 %union 37 37 { ··· 337 337 338 338 /* if entry */ 339 339 340 - if_entry: T_IF expr nl 340 + if_entry: T_IF expr T_EOL 341 341 { 342 342 printd(DEBUG_PARSE, "%s:%d:if\n", zconf_curname(), zconf_lineno()); 343 343 menu_add_entry(NULL); ··· 716 716 fputs( " menu ", out); 717 717 print_quoted_string(out, prop->text); 718 718 fputc('\n', out); 719 + break; 720 + case P_SYMBOL: 721 + fputs( " symbol ", out); 722 + fprintf(out, "%s\n", prop->sym->name); 719 723 break; 720 724 default: 721 725 fprintf(out, " unknown prop %d!\n", prop->type);
+4 -2
security/keys/dh.c
··· 142 142 * The src pointer is defined as Z || other info where Z is the shared secret 143 143 * from DH and other info is an arbitrary string (see SP800-56A section 144 144 * 5.8.1.2). 145 + * 146 + * 'dlen' must be a multiple of the digest size. 145 147 */ 146 148 static int kdf_ctr(struct kdf_sdesc *sdesc, const u8 *src, unsigned int slen, 147 149 u8 *dst, unsigned int dlen, unsigned int zlen) ··· 207 205 { 208 206 uint8_t *outbuf = NULL; 209 207 int ret; 210 - size_t outbuf_len = round_up(buflen, 211 - crypto_shash_digestsize(sdesc->shash.tfm)); 208 + size_t outbuf_len = roundup(buflen, 209 + crypto_shash_digestsize(sdesc->shash.tfm)); 212 210 213 211 outbuf = kmalloc(outbuf_len, GFP_KERNEL); 214 212 if (!outbuf) {
+33 -45
security/selinux/selinuxfs.c
··· 441 441 static ssize_t sel_read_policy(struct file *filp, char __user *buf, 442 442 size_t count, loff_t *ppos) 443 443 { 444 - struct selinux_fs_info *fsi = file_inode(filp)->i_sb->s_fs_info; 445 444 struct policy_load_memory *plm = filp->private_data; 446 445 int ret; 447 - 448 - mutex_lock(&fsi->mutex); 449 446 450 447 ret = avc_has_perm(&selinux_state, 451 448 current_sid(), SECINITSID_SECURITY, 452 449 SECCLASS_SECURITY, SECURITY__READ_POLICY, NULL); 453 450 if (ret) 454 - goto out; 451 + return ret; 455 452 456 - ret = simple_read_from_buffer(buf, count, ppos, plm->data, plm->len); 457 - out: 458 - mutex_unlock(&fsi->mutex); 459 - return ret; 453 + return simple_read_from_buffer(buf, count, ppos, plm->data, plm->len); 460 454 } 461 455 462 456 static vm_fault_t sel_mmap_policy_fault(struct vm_fault *vmf) ··· 1182 1188 ret = -EINVAL; 1183 1189 if (index >= fsi->bool_num || strcmp(name, 1184 1190 fsi->bool_pending_names[index])) 1185 - goto out; 1191 + goto out_unlock; 1186 1192 1187 1193 ret = -ENOMEM; 1188 1194 page = (char *)get_zeroed_page(GFP_KERNEL); 1189 1195 if (!page) 1190 - goto out; 1196 + goto out_unlock; 1191 1197 1192 1198 cur_enforcing = security_get_bool_value(fsi->state, index); 1193 1199 if (cur_enforcing < 0) { 1194 1200 ret = cur_enforcing; 1195 - goto out; 1201 + goto out_unlock; 1196 1202 } 1197 1203 length = scnprintf(page, PAGE_SIZE, "%d %d", cur_enforcing, 1198 1204 fsi->bool_pending_values[index]); 1199 - ret = simple_read_from_buffer(buf, count, ppos, page, length); 1200 - out: 1201 1205 mutex_unlock(&fsi->mutex); 1206 + ret = simple_read_from_buffer(buf, count, ppos, page, length); 1207 + out_free: 1202 1208 free_page((unsigned long)page); 1203 1209 return ret; 1210 + 1211 + out_unlock: 1212 + mutex_unlock(&fsi->mutex); 1213 + goto out_free; 1204 1214 } 1205 1215 1206 1216 static ssize_t sel_write_bool(struct file *filep, const char __user *buf, ··· 1216 1218 int new_value; 1217 1219 unsigned index = file_inode(filep)->i_ino & SEL_INO_MASK; 1218 1220 const char *name = filep->f_path.dentry->d_name.name; 1221 + 1222 + if (count >= PAGE_SIZE) 1223 + return -ENOMEM; 1224 + 1225 + /* No partial writes. */ 1226 + if (*ppos != 0) 1227 + return -EINVAL; 1228 + 1229 + page = memdup_user_nul(buf, count); 1230 + if (IS_ERR(page)) 1231 + return PTR_ERR(page); 1219 1232 1220 1233 mutex_lock(&fsi->mutex); 1221 1234 ··· 1241 1232 if (index >= fsi->bool_num || strcmp(name, 1242 1233 fsi->bool_pending_names[index])) 1243 1234 goto out; 1244 - 1245 - length = -ENOMEM; 1246 - if (count >= PAGE_SIZE) 1247 - goto out; 1248 - 1249 - /* No partial writes. */ 1250 - length = -EINVAL; 1251 - if (*ppos != 0) 1252 - goto out; 1253 - 1254 - page = memdup_user_nul(buf, count); 1255 - if (IS_ERR(page)) { 1256 - length = PTR_ERR(page); 1257 - page = NULL; 1258 - goto out; 1259 - } 1260 1235 1261 1236 length = -EINVAL; 1262 1237 if (sscanf(page, "%d", &new_value) != 1) ··· 1273 1280 ssize_t length; 1274 1281 int new_value; 1275 1282 1283 + if (count >= PAGE_SIZE) 1284 + return -ENOMEM; 1285 + 1286 + /* No partial writes. */ 1287 + if (*ppos != 0) 1288 + return -EINVAL; 1289 + 1290 + page = memdup_user_nul(buf, count); 1291 + if (IS_ERR(page)) 1292 + return PTR_ERR(page); 1293 + 1276 1294 mutex_lock(&fsi->mutex); 1277 1295 1278 1296 length = avc_has_perm(&selinux_state, ··· 1292 1288 NULL); 1293 1289 if (length) 1294 1290 goto out; 1295 - 1296 - length = -ENOMEM; 1297 - if (count >= PAGE_SIZE) 1298 - goto out; 1299 - 1300 - /* No partial writes. */ 1301 - length = -EINVAL; 1302 - if (*ppos != 0) 1303 - goto out; 1304 - 1305 - page = memdup_user_nul(buf, count); 1306 - if (IS_ERR(page)) { 1307 - length = PTR_ERR(page); 1308 - page = NULL; 1309 - goto out; 1310 - } 1311 1291 1312 1292 length = -EINVAL; 1313 1293 if (sscanf(page, "%d", &new_value) != 1)
+1
security/smack/smack_lsm.c
··· 2296 2296 struct smack_known *skp = smk_of_task_struct(p); 2297 2297 2298 2298 isp->smk_inode = skp; 2299 + isp->smk_flags |= SMK_INODE_INSTANT; 2299 2300 } 2300 2301 2301 2302 /*
+2 -1
sound/core/seq/seq_clientmgr.c
··· 2004 2004 struct snd_seq_client *cptr = NULL; 2005 2005 2006 2006 /* search for next client */ 2007 - info->client++; 2007 + if (info->client < INT_MAX) 2008 + info->client++; 2008 2009 if (info->client < 0) 2009 2010 info->client = 0; 2010 2011 for (; info->client < SNDRV_SEQ_MAX_CLIENTS; info->client++) {
+1 -1
sound/core/timer.c
··· 1520 1520 } else { 1521 1521 if (id.subdevice < 0) 1522 1522 id.subdevice = 0; 1523 - else 1523 + else if (id.subdevice < INT_MAX) 1524 1524 id.subdevice++; 1525 1525 } 1526 1526 }
+3 -2
sound/pci/hda/hda_codec.c
··· 2899 2899 list_for_each_entry(pcm, &codec->pcm_list_head, list) 2900 2900 snd_pcm_suspend_all(pcm->pcm); 2901 2901 state = hda_call_codec_suspend(codec); 2902 - if (codec_has_clkstop(codec) && codec_has_epss(codec) && 2903 - (state & AC_PWRST_CLK_STOP_OK)) 2902 + if (codec->link_down_at_suspend || 2903 + (codec_has_clkstop(codec) && codec_has_epss(codec) && 2904 + (state & AC_PWRST_CLK_STOP_OK))) 2904 2905 snd_hdac_codec_link_down(&codec->core); 2905 2906 snd_hdac_link_power(&codec->core, false); 2906 2907 return 0;
+1
sound/pci/hda/hda_codec.h
··· 258 258 unsigned int power_save_node:1; /* advanced PM for each widget */ 259 259 unsigned int auto_runtime_pm:1; /* enable automatic codec runtime pm */ 260 260 unsigned int force_pin_prefix:1; /* Add location prefix */ 261 + unsigned int link_down_at_suspend:1; /* link down at runtime suspend */ 261 262 #ifdef CONFIG_PM 262 263 unsigned long power_on_acct; 263 264 unsigned long power_off_acct;
+23 -41
sound/pci/hda/patch_ca0132.c
··· 991 991 enum { 992 992 QUIRK_NONE, 993 993 QUIRK_ALIENWARE, 994 + QUIRK_ALIENWARE_M17XR4, 994 995 QUIRK_SBZ, 995 996 QUIRK_R3DI, 996 997 }; ··· 1041 1040 }; 1042 1041 1043 1042 static const struct snd_pci_quirk ca0132_quirks[] = { 1043 + SND_PCI_QUIRK(0x1028, 0x057b, "Alienware M17x R4", QUIRK_ALIENWARE_M17XR4), 1044 1044 SND_PCI_QUIRK(0x1028, 0x0685, "Alienware 15 2015", QUIRK_ALIENWARE), 1045 1045 SND_PCI_QUIRK(0x1028, 0x0688, "Alienware 17 2015", QUIRK_ALIENWARE), 1046 1046 SND_PCI_QUIRK(0x1028, 0x0708, "Alienware 15 R2 2016", QUIRK_ALIENWARE), ··· 5665 5663 * I think this has to do with the pin for rear surround being 0x11, 5666 5664 * and the center/lfe being 0x10. Usually the pin order is the opposite. 5667 5665 */ 5668 - const struct snd_pcm_chmap_elem ca0132_alt_chmaps[] = { 5666 + static const struct snd_pcm_chmap_elem ca0132_alt_chmaps[] = { 5669 5667 { .channels = 2, 5670 5668 .map = { SNDRV_CHMAP_FL, SNDRV_CHMAP_FR } }, 5671 5669 { .channels = 4, ··· 5968 5966 info->stream[SNDRV_PCM_STREAM_CAPTURE].nid = spec->adcs[0]; 5969 5967 5970 5968 /* With the DSP enabled, desktops don't use this ADC. */ 5971 - if (spec->use_alt_functions) { 5969 + if (!spec->use_alt_functions) { 5972 5970 info = snd_hda_codec_pcm_new(codec, "CA0132 Analog Mic-In2"); 5973 5971 if (!info) 5974 5972 return -ENOMEM; ··· 6132 6130 * Bit 6: set to select Data2, clear for Data1 6133 6131 * Bit 7: set to enable DMic, clear for AMic 6134 6132 */ 6135 - val = 0x23; 6133 + if (spec->quirk == QUIRK_ALIENWARE_M17XR4) 6134 + val = 0x33; 6135 + else 6136 + val = 0x23; 6136 6137 /* keep a copy of dmic ctl val for enable/disable dmic purpuse */ 6137 6138 spec->dmic_ctl = val; 6138 6139 snd_hda_codec_write(codec, spec->input_pins[0], 0, ··· 7228 7223 7229 7224 snd_hda_sequence_write(codec, spec->base_init_verbs); 7230 7225 7231 - if (spec->quirk != QUIRK_NONE) 7226 + if (spec->use_alt_functions) 7232 7227 ca0132_alt_init(codec); 7233 7228 7234 7229 ca0132_download_dsp(codec); ··· 7242 7237 case QUIRK_R3DI: 7243 7238 r3di_setup_defaults(codec); 7244 7239 break; 7245 - case QUIRK_NONE: 7246 - case QUIRK_ALIENWARE: 7240 + case QUIRK_SBZ: 7241 + break; 7242 + default: 7247 7243 ca0132_setup_defaults(codec); 7248 7244 ca0132_init_analog_mic2(codec); 7249 7245 ca0132_init_dmic(codec); ··· 7349 7343 static void ca0132_config(struct hda_codec *codec) 7350 7344 { 7351 7345 struct ca0132_spec *spec = codec->spec; 7352 - struct auto_pin_cfg *cfg = &spec->autocfg; 7353 7346 7354 7347 spec->dacs[0] = 0x2; 7355 7348 spec->dacs[1] = 0x3; ··· 7410 7405 /* SPDIF I/O */ 7411 7406 spec->dig_out = 0x05; 7412 7407 spec->multiout.dig_out_nid = spec->dig_out; 7413 - cfg->dig_out_pins[0] = 0x0c; 7414 - cfg->dig_outs = 1; 7415 - cfg->dig_out_type[0] = HDA_PCM_TYPE_SPDIF; 7416 7408 spec->dig_in = 0x09; 7417 - cfg->dig_in_pin = 0x0e; 7418 - cfg->dig_in_type = HDA_PCM_TYPE_SPDIF; 7419 7409 break; 7420 7410 case QUIRK_R3DI: 7421 7411 codec_dbg(codec, "%s: QUIRK_R3DI applied.\n", __func__); ··· 7438 7438 /* SPDIF I/O */ 7439 7439 spec->dig_out = 0x05; 7440 7440 spec->multiout.dig_out_nid = spec->dig_out; 7441 - cfg->dig_out_pins[0] = 0x0c; 7442 - cfg->dig_outs = 1; 7443 - cfg->dig_out_type[0] = HDA_PCM_TYPE_SPDIF; 7444 7441 break; 7445 7442 default: 7446 7443 spec->num_outputs = 2; ··· 7460 7463 /* SPDIF I/O */ 7461 7464 spec->dig_out = 0x05; 7462 7465 spec->multiout.dig_out_nid = spec->dig_out; 7463 - cfg->dig_out_pins[0] = 0x0c; 7464 - cfg->dig_outs = 1; 7465 - cfg->dig_out_type[0] = HDA_PCM_TYPE_SPDIF; 7466 7466 spec->dig_in = 0x09; 7467 - cfg->dig_in_pin = 0x0e; 7468 - cfg->dig_in_type = HDA_PCM_TYPE_SPDIF; 7469 7467 break; 7470 7468 } 7471 7469 } ··· 7468 7476 static int ca0132_prepare_verbs(struct hda_codec *codec) 7469 7477 { 7470 7478 /* Verbs + terminator (an empty element) */ 7471 - #define NUM_SPEC_VERBS 4 7479 + #define NUM_SPEC_VERBS 2 7472 7480 struct ca0132_spec *spec = codec->spec; 7473 7481 7474 7482 spec->chip_init_verbs = ca0132_init_verbs0; ··· 7480 7488 if (!spec->spec_init_verbs) 7481 7489 return -ENOMEM; 7482 7490 7483 - /* HP jack autodetection */ 7484 - spec->spec_init_verbs[0].nid = spec->unsol_tag_hp; 7485 - spec->spec_init_verbs[0].param = AC_VERB_SET_UNSOLICITED_ENABLE; 7486 - spec->spec_init_verbs[0].verb = AC_USRSP_EN | spec->unsol_tag_hp; 7487 - 7488 - /* MIC1 jack autodetection */ 7489 - spec->spec_init_verbs[1].nid = spec->unsol_tag_amic1; 7490 - spec->spec_init_verbs[1].param = AC_VERB_SET_UNSOLICITED_ENABLE; 7491 - spec->spec_init_verbs[1].verb = AC_USRSP_EN | spec->unsol_tag_amic1; 7492 - 7493 7491 /* config EAPD */ 7494 - spec->spec_init_verbs[2].nid = 0x0b; 7495 - spec->spec_init_verbs[2].param = 0x78D; 7496 - spec->spec_init_verbs[2].verb = 0x00; 7492 + spec->spec_init_verbs[0].nid = 0x0b; 7493 + spec->spec_init_verbs[0].param = 0x78D; 7494 + spec->spec_init_verbs[0].verb = 0x00; 7497 7495 7498 7496 /* Previously commented configuration */ 7499 7497 /* 7500 - spec->spec_init_verbs[3].nid = 0x0b; 7501 - spec->spec_init_verbs[3].param = AC_VERB_SET_EAPD_BTLENABLE; 7498 + spec->spec_init_verbs[2].nid = 0x0b; 7499 + spec->spec_init_verbs[2].param = AC_VERB_SET_EAPD_BTLENABLE; 7500 + spec->spec_init_verbs[2].verb = 0x02; 7501 + 7502 + spec->spec_init_verbs[3].nid = 0x10; 7503 + spec->spec_init_verbs[3].param = 0x78D; 7502 7504 spec->spec_init_verbs[3].verb = 0x02; 7503 7505 7504 7506 spec->spec_init_verbs[4].nid = 0x10; 7505 - spec->spec_init_verbs[4].param = 0x78D; 7507 + spec->spec_init_verbs[4].param = AC_VERB_SET_EAPD_BTLENABLE; 7506 7508 spec->spec_init_verbs[4].verb = 0x02; 7507 - 7508 - spec->spec_init_verbs[5].nid = 0x10; 7509 - spec->spec_init_verbs[5].param = AC_VERB_SET_EAPD_BTLENABLE; 7510 - spec->spec_init_verbs[5].verb = 0x02; 7511 7509 */ 7512 7510 7513 7511 /* Terminator: spec->spec_init_verbs[NUM_SPEC_VERBS-1] */
+5
sound/pci/hda/patch_hdmi.c
··· 3741 3741 3742 3742 spec->chmap.channels_max = max(spec->chmap.channels_max, 8u); 3743 3743 3744 + /* AMD GPUs have neither EPSS nor CLKSTOP bits, hence preventing 3745 + * the link-down as is. Tell the core to allow it. 3746 + */ 3747 + codec->link_down_at_suspend = 1; 3748 + 3744 3749 return 0; 3745 3750 } 3746 3751
+17 -3
sound/pci/hda/patch_realtek.c
··· 2545 2545 SND_PCI_QUIRK(0x10cf, 0x1397, "Fujitsu Lifebook S7110", ALC262_FIXUP_FSC_S7110), 2546 2546 SND_PCI_QUIRK(0x10cf, 0x142d, "Fujitsu Lifebook E8410", ALC262_FIXUP_BENQ), 2547 2547 SND_PCI_QUIRK(0x10f1, 0x2915, "Tyan Thunder n6650W", ALC262_FIXUP_TYAN), 2548 + SND_PCI_QUIRK(0x1734, 0x1141, "FSC ESPRIMO U9210", ALC262_FIXUP_FSC_H270), 2548 2549 SND_PCI_QUIRK(0x1734, 0x1147, "FSC Celsius H270", ALC262_FIXUP_FSC_H270), 2549 2550 SND_PCI_QUIRK(0x17aa, 0x384e, "Lenovo 3000", ALC262_FIXUP_LENOVO_3000), 2550 2551 SND_PCI_QUIRK(0x17ff, 0x0560, "Benq ED8", ALC262_FIXUP_BENQ), ··· 4996 4995 struct alc_spec *spec = codec->spec; 4997 4996 4998 4997 if (action == HDA_FIXUP_ACT_PRE_PROBE) { 4999 - spec->shutup = alc_no_shutup; /* reduce click noise */ 5000 4998 spec->reboot_notify = alc_d3_at_reboot; /* reduce noise */ 5001 4999 spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP; 5002 5000 codec->power_save_node = 0; /* avoid click noises */ ··· 5393 5393 5394 5394 /* for hda_fixup_thinkpad_acpi() */ 5395 5395 #include "thinkpad_helper.c" 5396 + 5397 + static void alc_fixup_thinkpad_acpi(struct hda_codec *codec, 5398 + const struct hda_fixup *fix, int action) 5399 + { 5400 + alc_fixup_no_shutup(codec, fix, action); /* reduce click noise */ 5401 + hda_fixup_thinkpad_acpi(codec, fix, action); 5402 + } 5396 5403 5397 5404 /* for dell wmi mic mute led */ 5398 5405 #include "dell_wmi_helper.c" ··· 5953 5946 }, 5954 5947 [ALC269_FIXUP_THINKPAD_ACPI] = { 5955 5948 .type = HDA_FIXUP_FUNC, 5956 - .v.func = hda_fixup_thinkpad_acpi, 5949 + .v.func = alc_fixup_thinkpad_acpi, 5957 5950 .chained = true, 5958 5951 .chain_id = ALC269_FIXUP_SKU_IGNORE, 5959 5952 }, ··· 6610 6603 SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), 6611 6604 SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), 6612 6605 SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6606 + SND_PCI_QUIRK(0x17aa, 0x312a, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6613 6607 SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6614 - SND_PCI_QUIRK(0x17aa, 0x3138, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6608 + SND_PCI_QUIRK(0x17aa, 0x3136, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6615 6609 SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6616 6610 SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), 6617 6611 SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), ··· 6790 6782 {0x14, 0x90170110}, 6791 6783 {0x19, 0x02a11030}, 6792 6784 {0x21, 0x02211020}), 6785 + SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION, 6786 + {0x14, 0x90170110}, 6787 + {0x19, 0x02a11030}, 6788 + {0x1a, 0x02a11040}, 6789 + {0x1b, 0x01014020}, 6790 + {0x21, 0x0221101f}), 6793 6791 SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 6794 6792 {0x12, 0x90a60140}, 6795 6793 {0x14, 0x90170110},
+1
sound/pci/lx6464es/lx6464es.c
··· 1018 1018 chip->port_dsp_bar = pci_ioremap_bar(pci, 2); 1019 1019 if (!chip->port_dsp_bar) { 1020 1020 dev_err(card->dev, "cannot remap PCI memory region\n"); 1021 + err = -ENOMEM; 1021 1022 goto remap_pci_failed; 1022 1023 } 1023 1024
+1
tools/arch/arm/include/uapi/asm/kvm.h
··· 91 91 #define KVM_VGIC_V3_ADDR_TYPE_DIST 2 92 92 #define KVM_VGIC_V3_ADDR_TYPE_REDIST 3 93 93 #define KVM_VGIC_ITS_ADDR_TYPE 4 94 + #define KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION 5 94 95 95 96 #define KVM_VGIC_V3_DIST_SIZE SZ_64K 96 97 #define KVM_VGIC_V3_REDIST_SIZE (2 * SZ_64K)
+1
tools/arch/arm64/include/uapi/asm/kvm.h
··· 91 91 #define KVM_VGIC_V3_ADDR_TYPE_DIST 2 92 92 #define KVM_VGIC_V3_ADDR_TYPE_REDIST 3 93 93 #define KVM_VGIC_ITS_ADDR_TYPE 4 94 + #define KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION 5 94 95 95 96 #define KVM_VGIC_V3_DIST_SIZE SZ_64K 96 97 #define KVM_VGIC_V3_REDIST_SIZE (2 * SZ_64K)
+1
tools/arch/powerpc/include/uapi/asm/kvm.h
··· 633 633 #define KVM_REG_PPC_PSSCR (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xbd) 634 634 635 635 #define KVM_REG_PPC_DEC_EXPIRY (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xbe) 636 + #define KVM_REG_PPC_ONLINE (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xbf) 636 637 637 638 /* Transactional Memory checkpointed state: 638 639 * This is all GPRs, all VSX regs and a subset of SPRs
+1
tools/arch/powerpc/include/uapi/asm/unistd.h
··· 398 398 #define __NR_pkey_alloc 384 399 399 #define __NR_pkey_free 385 400 400 #define __NR_pkey_mprotect 386 401 + #define __NR_rseq 387 401 402 402 403 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
+2
tools/arch/x86/include/asm/cpufeatures.h
··· 282 282 #define X86_FEATURE_AMD_IBPB (13*32+12) /* "" Indirect Branch Prediction Barrier */ 283 283 #define X86_FEATURE_AMD_IBRS (13*32+14) /* "" Indirect Branch Restricted Speculation */ 284 284 #define X86_FEATURE_AMD_STIBP (13*32+15) /* "" Single Thread Indirect Branch Predictors */ 285 + #define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */ 285 286 #define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */ 287 + #define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */ 286 288 287 289 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */ 288 290 #define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */
+7
tools/include/uapi/drm/drm.h
··· 680 680 */ 681 681 #define DRM_CLIENT_CAP_ATOMIC 3 682 682 683 + /** 684 + * DRM_CLIENT_CAP_ASPECT_RATIO 685 + * 686 + * If set to 1, the DRM core will provide aspect ratio information in modes. 687 + */ 688 + #define DRM_CLIENT_CAP_ASPECT_RATIO 4 689 + 683 690 /** DRM_IOCTL_SET_CLIENT_CAP ioctl argument type */ 684 691 struct drm_set_client_cap { 685 692 __u64 capability;
+1 -1
tools/include/uapi/linux/bpf.h
··· 2630 2630 union { 2631 2631 /* inputs to lookup */ 2632 2632 __u8 tos; /* AF_INET */ 2633 - __be32 flowlabel; /* AF_INET6 */ 2633 + __be32 flowinfo; /* AF_INET6, flow_label + priority */ 2634 2634 2635 2635 /* output: metric of fib result (IPv4/IPv6 only) */ 2636 2636 __u32 rt_metric;
+2
tools/include/uapi/linux/if_link.h
··· 333 333 IFLA_BRPORT_BCAST_FLOOD, 334 334 IFLA_BRPORT_GROUP_FWD_MASK, 335 335 IFLA_BRPORT_NEIGH_SUPPRESS, 336 + IFLA_BRPORT_ISOLATED, 336 337 __IFLA_BRPORT_MAX 337 338 }; 338 339 #define IFLA_BRPORT_MAX (__IFLA_BRPORT_MAX - 1) ··· 517 516 IFLA_VXLAN_COLLECT_METADATA, 518 517 IFLA_VXLAN_LABEL, 519 518 IFLA_VXLAN_GPE, 519 + IFLA_VXLAN_TTL_INHERIT, 520 520 __IFLA_VXLAN_MAX 521 521 }; 522 522 #define IFLA_VXLAN_MAX (__IFLA_VXLAN_MAX - 1)
+1
tools/include/uapi/linux/kvm.h
··· 948 948 #define KVM_CAP_S390_BPB 152 949 949 #define KVM_CAP_GET_MSR_FEATURES 153 950 950 #define KVM_CAP_HYPERV_EVENTFD 154 951 + #define KVM_CAP_HYPERV_TLBFLUSH 155 951 952 952 953 #ifdef KVM_CAP_IRQ_ROUTING 953 954
+1 -1
tools/perf/arch/powerpc/util/skip-callchain-idx.c
··· 243 243 u64 ip; 244 244 u64 skip_slot = -1; 245 245 246 - if (chain->nr < 3) 246 + if (!chain || chain->nr < 3) 247 247 return skip_slot; 248 248 249 249 ip = chain->ips[2];
+2
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
··· 341 341 330 common pkey_alloc __x64_sys_pkey_alloc 342 342 331 common pkey_free __x64_sys_pkey_free 343 343 332 common statx __x64_sys_statx 344 + 333 common io_pgetevents __x64_sys_io_pgetevents 345 + 334 common rseq __x64_sys_rseq 344 346 345 347 # 346 348 # x32-specific system call numbers start at 512 to avoid cache impact
+3 -2
tools/perf/bench/numa.c
··· 1098 1098 u8 *global_data; 1099 1099 u8 *process_data; 1100 1100 u8 *thread_data; 1101 - u64 bytes_done; 1101 + u64 bytes_done, secs; 1102 1102 long work_done; 1103 1103 u32 l; 1104 1104 struct rusage rusage; ··· 1254 1254 timersub(&stop, &start0, &diff); 1255 1255 td->runtime_ns = diff.tv_sec * NSEC_PER_SEC; 1256 1256 td->runtime_ns += diff.tv_usec * NSEC_PER_USEC; 1257 - td->speed_gbs = bytes_done / (td->runtime_ns / NSEC_PER_SEC) / 1e9; 1257 + secs = td->runtime_ns / NSEC_PER_SEC; 1258 + td->speed_gbs = secs ? bytes_done / secs / 1e9 : 0; 1258 1259 1259 1260 getrusage(RUSAGE_THREAD, &rusage); 1260 1261 td->system_time_ns = rusage.ru_stime.tv_sec * NSEC_PER_SEC;
+10 -1
tools/perf/builtin-annotate.c
··· 283 283 return ret; 284 284 } 285 285 286 + static int process_feature_event(struct perf_tool *tool, 287 + union perf_event *event, 288 + struct perf_session *session) 289 + { 290 + if (event->feat.feat_id < HEADER_LAST_FEATURE) 291 + return perf_event__process_feature(tool, event, session); 292 + return 0; 293 + } 294 + 286 295 static int hist_entry__tty_annotate(struct hist_entry *he, 287 296 struct perf_evsel *evsel, 288 297 struct perf_annotate *ann) ··· 480 471 .attr = perf_event__process_attr, 481 472 .build_id = perf_event__process_build_id, 482 473 .tracing_data = perf_event__process_tracing_data, 483 - .feature = perf_event__process_feature, 474 + .feature = process_feature_event, 484 475 .ordered_events = true, 485 476 .ordering_requires_timestamps = true, 486 477 },
+2 -1
tools/perf/builtin-report.c
··· 217 217 } 218 218 219 219 /* 220 - * All features are received, we can force the 220 + * (feat_id = HEADER_LAST_FEATURE) is the end marker which 221 + * means all features are received, now we can force the 221 222 * group if needed. 222 223 */ 223 224 setup_forced_leader(rep, session->evlist);
+27 -3
tools/perf/builtin-script.c
··· 1834 1834 struct perf_evlist *evlist; 1835 1835 struct perf_evsel *evsel, *pos; 1836 1836 int err; 1837 + static struct perf_evsel_script *es; 1837 1838 1838 1839 err = perf_event__process_attr(tool, event, pevlist); 1839 1840 if (err) ··· 1842 1841 1843 1842 evlist = *pevlist; 1844 1843 evsel = perf_evlist__last(*pevlist); 1844 + 1845 + if (!evsel->priv) { 1846 + if (scr->per_event_dump) { 1847 + evsel->priv = perf_evsel_script__new(evsel, 1848 + scr->session->data); 1849 + } else { 1850 + es = zalloc(sizeof(*es)); 1851 + if (!es) 1852 + return -ENOMEM; 1853 + es->fp = stdout; 1854 + evsel->priv = es; 1855 + } 1856 + } 1845 1857 1846 1858 if (evsel->attr.type >= PERF_TYPE_MAX && 1847 1859 evsel->attr.type != PERF_TYPE_SYNTH) ··· 3044 3030 return set_maps(script); 3045 3031 } 3046 3032 3033 + static int process_feature_event(struct perf_tool *tool, 3034 + union perf_event *event, 3035 + struct perf_session *session) 3036 + { 3037 + if (event->feat.feat_id < HEADER_LAST_FEATURE) 3038 + return perf_event__process_feature(tool, event, session); 3039 + return 0; 3040 + } 3041 + 3047 3042 #ifdef HAVE_AUXTRACE_SUPPORT 3048 3043 static int perf_script__process_auxtrace_info(struct perf_tool *tool, 3049 3044 union perf_event *event, ··· 3097 3074 .attr = process_attr, 3098 3075 .event_update = perf_event__process_event_update, 3099 3076 .tracing_data = perf_event__process_tracing_data, 3100 - .feature = perf_event__process_feature, 3077 + .feature = process_feature_event, 3101 3078 .build_id = perf_event__process_build_id, 3102 3079 .id_index = perf_event__process_id_index, 3103 3080 .auxtrace_info = perf_script__process_auxtrace_info, ··· 3148 3125 "+field to add and -field to remove." 3149 3126 "Valid types: hw,sw,trace,raw,synth. " 3150 3127 "Fields: comm,tid,pid,time,cpu,event,trace,ip,sym,dso," 3151 - "addr,symoff,period,iregs,uregs,brstack,brstacksym,flags," 3152 - "bpf-output,callindent,insn,insnlen,brstackinsn,synth,phys_addr", 3128 + "addr,symoff,srcline,period,iregs,uregs,brstack," 3129 + "brstacksym,flags,bpf-output,brstackinsn,brstackoff," 3130 + "callindent,insn,insnlen,synth,phys_addr,metric,misc", 3153 3131 parse_output_fields), 3154 3132 OPT_BOOLEAN('a', "all-cpus", &system_wide, 3155 3133 "system-wide collection from all CPUs"),
+20 -5
tools/perf/tests/parse-events.c
··· 1309 1309 return 0;
 1310 1310 }
 1311 1311 
 1312 + static bool test__intel_pt_valid(void)
 1313 + {
 1314 + return !!perf_pmu__find("intel_pt");
 1315 + }
 1316 + 
 1312 1317 static int test__intel_pt(struct perf_evlist *evlist)
 1313 1318 {
 1314 1319 struct perf_evsel *evsel = perf_evlist__first(evlist);
··· 1380 1375 const char *name;
 1381 1376 __u32 type;
 1382 1377 const int id;
 1378 + bool (*valid)(void);
 1383 1379 int (*check)(struct perf_evlist *evlist);
 1384 1380 };
··· 1654 1648 },
 1655 1649 {
 1656 1650 .name = "intel_pt//u",
 1651 + .valid = test__intel_pt_valid,
 1657 1652 .check = test__intel_pt,
 1658 1653 .id = 52,
 1659 1654 },
··· 1693 1686 
 1694 1687 static int test_event(struct evlist_test *e)
 1695 1688 {
 1689 + struct parse_events_error err = { .idx = 0, };
 1696 1690 struct perf_evlist *evlist;
 1697 1691 int ret;
 1692 + 
 1693 + if (e->valid && !e->valid()) {
 1694 + pr_debug("... SKIP");
 1695 + return 0;
 1696 + }
 1698 1697 
 1699 1698 evlist = perf_evlist__new();
 1700 1699 if (evlist == NULL)
 1701 1700 return -ENOMEM;
 1702 1701 
 1703 - ret = parse_events(evlist, e->name, NULL);
 1702 + ret = parse_events(evlist, e->name, &err);
 1704 1703 if (ret) {
 1705 - pr_debug("failed to parse event '%s', err %d\n",
 1706 - e->name, ret);
 1704 + pr_debug("failed to parse event '%s', err %d, str '%s'\n",
 1705 + e->name, ret, err.str);
 1706 + parse_events_print_error(&err, e->name);
 1707 1707 } else {
 1708 1708 ret = e->check(evlist);
 1709 1709 }
··· 1728 1714 for (i = 0; i < cnt; i++) {
 1729 1715 struct evlist_test *e = &events[i];
 1730 1716 
 1731 - pr_debug("running test %d '%s'\n", e->id, e->name);
 1717 + pr_debug("running test %d '%s'", e->id, e->name);
 1732 1718 ret1 = test_event(e);
 1733 1719 if (ret1)
 1734 1720 ret2 = ret1;
 1721 + pr_debug("\n");
 1735 1722 }
 1736 1723 
 1737 1724 return ret2;
··· 1814 1799 }
 1815 1800 
 1816 1801 while (!ret && (ent = readdir(dir))) {
 1817 - struct evlist_test e;
 1802 + struct evlist_test e = { .id = 0, };
 1818 1803 char name[2 * NAME_MAX + 1 + 12 + 3];
 1819 1804 
 1820 1805 /* Names containing . are special and cannot be used directly */
+1
tools/perf/tests/topology.c
··· 45 45 46 46 perf_header__set_feat(&session->header, HEADER_CPU_TOPOLOGY); 47 47 perf_header__set_feat(&session->header, HEADER_NRCPUS); 48 + perf_header__set_feat(&session->header, HEADER_ARCH); 48 49 49 50 session->header.data_size += DATA_SIZE; 50 51
+9 -2
tools/perf/util/c++/clang.cpp
··· 146 146 raw_svector_ostream ostream(*Buffer); 147 147 148 148 legacy::PassManager PM; 149 - if (TargetMachine->addPassesToEmitFile(PM, ostream, 150 - TargetMachine::CGFT_ObjectFile)) { 149 + bool NotAdded; 150 + #if CLANG_VERSION_MAJOR < 7 151 + NotAdded = TargetMachine->addPassesToEmitFile(PM, ostream, 152 + TargetMachine::CGFT_ObjectFile); 153 + #else 154 + NotAdded = TargetMachine->addPassesToEmitFile(PM, ostream, nullptr, 155 + TargetMachine::CGFT_ObjectFile); 156 + #endif 157 + if (NotAdded) { 151 158 llvm::errs() << "TargetMachine can't emit a file of this type\n"; 152 159 return std::unique_ptr<llvm::SmallVectorImpl<char>>(nullptr);; 153 160 }
+10 -2
tools/perf/util/header.c
··· 2129 2129 int cpu_nr = ff->ph->env.nr_cpus_avail; 2130 2130 u64 size = 0; 2131 2131 struct perf_header *ph = ff->ph; 2132 + bool do_core_id_test = true; 2132 2133 2133 2134 ph->env.cpu = calloc(cpu_nr, sizeof(*ph->env.cpu)); 2134 2135 if (!ph->env.cpu) ··· 2184 2183 return 0; 2185 2184 } 2186 2185 2186 + /* On s390 the socket_id number is not related to the numbers of cpus. 2187 + * The socket_id number might be higher than the numbers of cpus. 2188 + * This depends on the configuration. 2189 + */ 2190 + if (ph->env.arch && !strncmp(ph->env.arch, "s390", 4)) 2191 + do_core_id_test = false; 2192 + 2187 2193 for (i = 0; i < (u32)cpu_nr; i++) { 2188 2194 if (do_read_u32(ff, &nr)) 2189 2195 goto free_cpu; ··· 2200 2192 if (do_read_u32(ff, &nr)) 2201 2193 goto free_cpu; 2202 2194 2203 - if (nr != (u32)-1 && nr > (u32)cpu_nr) { 2195 + if (do_core_id_test && nr != (u32)-1 && nr > (u32)cpu_nr) { 2204 2196 pr_debug("socket_id number is too big." 2205 2197 "You may need to upgrade the perf tool.\n"); 2206 2198 goto free_cpu; ··· 3464 3456 pr_warning("invalid record type %d in pipe-mode\n", type); 3465 3457 return 0; 3466 3458 } 3467 - if (feat == HEADER_RESERVED || feat > HEADER_LAST_FEATURE) { 3459 + if (feat == HEADER_RESERVED || feat >= HEADER_LAST_FEATURE) { 3468 3460 pr_warning("invalid record type %d in pipe-mode\n", type); 3469 3461 return -1; 3470 3462 }
+1 -1
tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.c
··· 366 366 if (len < offs) 367 367 return INTEL_PT_NEED_MORE_BYTES; 368 368 byte = buf[offs++]; 369 - payload |= (byte >> 1) << shift; 369 + payload |= ((uint64_t)byte >> 1) << shift; 370 370 } 371 371 372 372 packet->type = INTEL_PT_CYC;
+97 -2
tools/perf/util/pmu.c
··· 234 234 return 0;
 235 235 }
 236 236 
 237 + static void perf_pmu_assign_str(char *name, const char *field, char **old_str,
 238 + char **new_str)
 239 + {
 240 + if (!*old_str)
 241 + goto set_new;
 242 + 
 243 + if (*new_str) { /* Have new string, check with old */
 244 + if (strcasecmp(*old_str, *new_str))
 245 + pr_debug("alias %s differs in field '%s'\n",
 246 + name, field);
 247 + zfree(old_str);
 248 + } else /* Nothing new --> keep old string */
 249 + return;
 250 + set_new:
 251 + *old_str = *new_str;
 252 + *new_str = NULL;
 253 + }
 254 + 
 255 + static void perf_pmu_update_alias(struct perf_pmu_alias *old,
 256 + struct perf_pmu_alias *newalias)
 257 + {
 258 + perf_pmu_assign_str(old->name, "desc", &old->desc, &newalias->desc);
 259 + perf_pmu_assign_str(old->name, "long_desc", &old->long_desc,
 260 + &newalias->long_desc);
 261 + perf_pmu_assign_str(old->name, "topic", &old->topic, &newalias->topic);
 262 + perf_pmu_assign_str(old->name, "metric_expr", &old->metric_expr,
 263 + &newalias->metric_expr);
 264 + perf_pmu_assign_str(old->name, "metric_name", &old->metric_name,
 265 + &newalias->metric_name);
 266 + perf_pmu_assign_str(old->name, "value", &old->str, &newalias->str);
 267 + old->scale = newalias->scale;
 268 + old->per_pkg = newalias->per_pkg;
 269 + old->snapshot = newalias->snapshot;
 270 + memcpy(old->unit, newalias->unit, sizeof(old->unit));
 271 + }
 272 + 
 273 + /* Delete an alias entry. */
 274 + static void perf_pmu_free_alias(struct perf_pmu_alias *newalias)
 275 + {
 276 + zfree(&newalias->name);
 277 + zfree(&newalias->desc);
 278 + zfree(&newalias->long_desc);
 279 + zfree(&newalias->topic);
 280 + zfree(&newalias->str);
 281 + zfree(&newalias->metric_expr);
 282 + zfree(&newalias->metric_name);
 283 + parse_events_terms__purge(&newalias->terms);
 284 + free(newalias);
 285 + }
 286 + 
 287 + /* Merge an alias, search in alias list. If this name is already
 288 + * present merge both of them to combine all information.
289 + */
 290 + static bool perf_pmu_merge_alias(struct perf_pmu_alias *newalias,
 291 + struct list_head *alist)
 292 + {
 293 + struct perf_pmu_alias *a;
 294 + 
 295 + list_for_each_entry(a, alist, list) {
 296 + if (!strcasecmp(newalias->name, a->name)) {
 297 + perf_pmu_update_alias(a, newalias);
 298 + perf_pmu_free_alias(newalias);
 299 + return true;
 300 + }
 301 + }
 302 + return false;
 303 + }
 304 + 
 237 305 static int __perf_pmu__new_alias(struct list_head *list, char *dir, char *name,
 238 306 char *desc, char *val,
 239 307 char *long_desc, char *topic,
··· 309 241 char *metric_expr,
 310 242 char *metric_name)
 311 243 {
 244 + struct parse_events_term *term;
 312 245 struct perf_pmu_alias *alias;
 313 246 int ret;
 314 247 int num;
 248 + char newval[256];
 315 249 
 316 250 alias = malloc(sizeof(*alias));
 317 251 if (!alias)
··· 330 260 pr_err("Cannot parse alias %s: %d\n", val, ret);
 331 261 free(alias);
 332 262 return ret;
 263 + }
 264 + 
 265 + /* Scan event and remove leading zeroes, spaces, newlines, some
 266 + * platforms have terms specified as
 267 + * event=0x0091 (read from files ../<PMU>/events/<FILE>
 268 + * and terms specified as event=0x91 (read from JSON files).
 269 + *
 270 + * Rebuild string to make alias->str member comparable.
271 + */ 272 + memset(newval, 0, sizeof(newval)); 273 + ret = 0; 274 + list_for_each_entry(term, &alias->terms, list) { 275 + if (ret) 276 + ret += scnprintf(newval + ret, sizeof(newval) - ret, 277 + ","); 278 + if (term->type_val == PARSE_EVENTS__TERM_TYPE_NUM) 279 + ret += scnprintf(newval + ret, sizeof(newval) - ret, 280 + "%s=%#x", term->config, term->val.num); 281 + else if (term->type_val == PARSE_EVENTS__TERM_TYPE_STR) 282 + ret += scnprintf(newval + ret, sizeof(newval) - ret, 283 + "%s=%s", term->config, term->val.str); 333 284 } 334 285 335 286 alias->name = strdup(name); ··· 376 285 snprintf(alias->unit, sizeof(alias->unit), "%s", unit); 377 286 } 378 287 alias->per_pkg = perpkg && sscanf(perpkg, "%d", &num) == 1 && num == 1; 379 - alias->str = strdup(val); 288 + alias->str = strdup(newval); 380 289 381 - list_add_tail(&alias->list, list); 290 + if (!perf_pmu_merge_alias(alias, list)) 291 + list_add_tail(&alias->list, list); 382 292 383 293 return 0; 384 294 } ··· 394 302 return -EINVAL; 395 303 396 304 buf[ret] = 0; 305 + 306 + /* Remove trailing newline from sysfs file */ 307 + rtrim(buf); 397 308 398 309 return __perf_pmu__new_alias(list, dir, name, NULL, buf, NULL, NULL, NULL, 399 310 NULL, NULL, NULL);
+1
tools/testing/selftests/net/.gitignore
··· 12 12 udpgso 13 13 udpgso_bench_rx 14 14 udpgso_bench_tx 15 + tcp_inq
+2
tools/testing/selftests/net/config
··· 12 12 CONFIG_INET6_XFRM_MODE_TUNNEL=y 13 13 CONFIG_IPV6_VTI=y 14 14 CONFIG_DUMMY=y 15 + CONFIG_BRIDGE=y 16 + CONFIG_VLAN_8021Q=y
+36 -23
tools/testing/selftests/x86/sigreturn.c
··· 610 610 */
 611 611 for (int i = 0; i < NGREG; i++) {
 612 612 greg_t req = requested_regs[i], res = resulting_regs[i];
 613 + 
 613 614 if (i == REG_TRAPNO || i == REG_IP)
 614 615 continue; /* don't care */
 615 - if (i == REG_SP) {
 616 - printf("\tSP: %llx -> %llx\n", (unsigned long long)req,
 617 - (unsigned long long)res);
 618 616 
 617 + if (i == REG_SP) {
 619 618 /*
 620 - * In many circumstances, the high 32 bits of rsp
 621 - * are zeroed. For example, we could be a real
 622 - * 32-bit program, or we could hit any of a number
 623 - * of poorly-documented IRET or segmented ESP
 624 - * oddities. If this happens, it's okay.
 619 + * If we were using a 16-bit stack segment, then
 620 + * the kernel is a bit stuck: IRET only restores
 621 + * the low 16 bits of ESP/RSP if SS is 16-bit.
 622 + * The kernel uses a hack to restore bits 31:16,
 623 + * but that hack doesn't help with bits 63:32.
 624 + * On Intel CPUs, bits 63:32 end up zeroed, and, on
 625 + * AMD CPUs, they leak the high bits of the kernel
 626 + * espfix64 stack pointer. There's very little that
 627 + * the kernel can do about it.
 628 + *
 629 + * Similarly, if we are returning to a 32-bit context,
 630 + * the CPU will often lose the high 32 bits of RSP.
625 631 */ 626 - if (res == (req & 0xFFFFFFFF)) 627 - continue; /* OK; not expected to work */ 632 + 633 + if (res == req) 634 + continue; 635 + 636 + if (cs_bits != 64 && ((res ^ req) & 0xFFFFFFFF) == 0) { 637 + printf("[NOTE]\tSP: %llx -> %llx\n", 638 + (unsigned long long)req, 639 + (unsigned long long)res); 640 + continue; 641 + } 642 + 643 + printf("[FAIL]\tSP mismatch: requested 0x%llx; got 0x%llx\n", 644 + (unsigned long long)requested_regs[i], 645 + (unsigned long long)resulting_regs[i]); 646 + nerrs++; 647 + continue; 628 648 } 629 649 630 650 bool ignore_reg = false; ··· 674 654 #endif 675 655 676 656 /* Sanity check on the kernel */ 677 - if (i == REG_CX && requested_regs[i] != resulting_regs[i]) { 657 + if (i == REG_CX && req != res) { 678 658 printf("[FAIL]\tCX (saved SP) mismatch: requested 0x%llx; got 0x%llx\n", 679 - (unsigned long long)requested_regs[i], 680 - (unsigned long long)resulting_regs[i]); 659 + (unsigned long long)req, 660 + (unsigned long long)res); 681 661 nerrs++; 682 662 continue; 683 663 } 684 664 685 - if (requested_regs[i] != resulting_regs[i] && !ignore_reg) { 686 - /* 687 - * SP is particularly interesting here. The 688 - * usual cause of failures is that we hit the 689 - * nasty IRET case of returning to a 16-bit SS, 690 - * in which case bits 16:31 of the *kernel* 691 - * stack pointer persist in ESP. 692 - */ 665 + if (req != res && !ignore_reg) { 693 666 printf("[FAIL]\tReg %d mismatch: requested 0x%llx; got 0x%llx\n", 694 - i, (unsigned long long)requested_regs[i], 695 - (unsigned long long)resulting_regs[i]); 667 + i, (unsigned long long)req, 668 + (unsigned long long)res); 696 669 nerrs++; 697 670 } 698 671 }
-18
tools/virtio/linux/scatterlist.h
··· 36 36 */ 37 37 BUG_ON((unsigned long) page & 0x03); 38 38 #ifdef CONFIG_DEBUG_SG 39 - BUG_ON(sg->sg_magic != SG_MAGIC); 40 39 BUG_ON(sg_is_chain(sg)); 41 40 #endif 42 41 sg->page_link = page_link | (unsigned long) page; ··· 66 67 static inline struct page *sg_page(struct scatterlist *sg) 67 68 { 68 69 #ifdef CONFIG_DEBUG_SG 69 - BUG_ON(sg->sg_magic != SG_MAGIC); 70 70 BUG_ON(sg_is_chain(sg)); 71 71 #endif 72 72 return (struct page *)((sg)->page_link & ~0x3); ··· 114 116 **/ 115 117 static inline void sg_mark_end(struct scatterlist *sg) 116 118 { 117 - #ifdef CONFIG_DEBUG_SG 118 - BUG_ON(sg->sg_magic != SG_MAGIC); 119 - #endif 120 119 /* 121 120 * Set termination bit, clear potential chain bit 122 121 */ ··· 131 136 **/ 132 137 static inline void sg_unmark_end(struct scatterlist *sg) 133 138 { 134 - #ifdef CONFIG_DEBUG_SG 135 - BUG_ON(sg->sg_magic != SG_MAGIC); 136 - #endif 137 139 sg->page_link &= ~0x02; 138 140 } 139 141 140 142 static inline struct scatterlist *sg_next(struct scatterlist *sg) 141 143 { 142 - #ifdef CONFIG_DEBUG_SG 143 - BUG_ON(sg->sg_magic != SG_MAGIC); 144 - #endif 145 144 if (sg_is_last(sg)) 146 145 return NULL; 147 146 ··· 149 160 static inline void sg_init_table(struct scatterlist *sgl, unsigned int nents) 150 161 { 151 162 memset(sgl, 0, sizeof(*sgl) * nents); 152 - #ifdef CONFIG_DEBUG_SG 153 - { 154 - unsigned int i; 155 - for (i = 0; i < nents; i++) 156 - sgl[i].sg_magic = SG_MAGIC; 157 - } 158 - #endif 159 163 sg_mark_end(&sgl[nents - 1]); 160 164 } 161 165