Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/ieee802154/fakehard.c

A bug fix went into 'net' for ieee802154/fakehard.c, which is removed
in 'net-next'.

A build fix from Stephen Rothwell is folded into the merge for
openvswitch: the logging macros now take a new initial 'log' argument,
and a new call site was added in 'net', so when we merge it here we
have to explicitly pass the new 'log' argument or the build fails.

Signed-off-by: David S. Miller <davem@davemloft.net>

+2044 -946
+4 -2
Documentation/devicetree/bindings/ata/sata_rcar.txt
··· 3 3 Required properties: 4 4 - compatible : should contain one of the following: 5 5 - "renesas,sata-r8a7779" for R-Car H1 6 - - "renesas,sata-r8a7790" for R-Car H2 7 - - "renesas,sata-r8a7791" for R-Car M2 6 + - "renesas,sata-r8a7790-es1" for R-Car H2 ES1 7 + - "renesas,sata-r8a7790" for R-Car H2 other than ES1 8 + - "renesas,sata-r8a7791" for R-Car M2-W 9 + - "renesas,sata-r8a7793" for R-Car M2-N 8 10 - reg : address and length of the SATA registers; 9 11 - interrupts : must consist of one interrupt specifier. 10 12
-4
Documentation/devicetree/bindings/interrupt-controller/interrupts.txt
··· 30 30 Example: 31 31 interrupts-extended = <&intc1 5 1>, <&intc2 1 0>; 32 32 33 - A device node may contain either "interrupts" or "interrupts-extended", but not 34 - both. If both properties are present, then the operating system should log an 35 - error and use only the data in "interrupts". 36 - 37 33 2) Interrupt controller nodes 38 34 ----------------------------- 39 35
+11
Documentation/devicetree/bindings/pci/pci.txt
··· 7 7 8 8 Open Firmware Recommended Practice: Interrupt Mapping 9 9 http://www.openfirmware.org/1275/practice/imap/imap0_9d.pdf 10 + 11 + Additionally to the properties specified in the above standards a host bridge 12 + driver implementation may support the following properties: 13 + 14 + - linux,pci-domain: 15 + If present this property assigns a fixed PCI domain number to a host bridge, 16 + otherwise an unstable (across boots) unique number will be assigned. 17 + It is required to either not set this property at all or set it for all 18 + host bridges in the system, otherwise potentially conflicting domain numbers 19 + may be assigned to root buses behind different host bridges. The domain 20 + number for each host bridge in the system must be unique.
+1 -1
Documentation/devicetree/bindings/pinctrl/img,tz1090-pdc-pinctrl.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - TZ1090-PDC's pin configuration nodes act as a container for an abitrary number 12 + TZ1090-PDC's pin configuration nodes act as a container for an arbitrary number 13 13 of subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/img,tz1090-pinctrl.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - TZ1090's pin configuration nodes act as a container for an abitrary number of 12 + TZ1090's pin configuration nodes act as a container for an arbitrary number of 13 13 subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/lantiq,falcon-pinumx.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - Lantiq's pin configuration nodes act as a container for an abitrary number of 12 + Lantiq's pin configuration nodes act as a container for an arbitrary number of 13 13 subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those group(s), and two pin configuration parameters:
+1 -1
Documentation/devicetree/bindings/pinctrl/lantiq,xway-pinumx.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - Lantiq's pin configuration nodes act as a container for an abitrary number of 12 + Lantiq's pin configuration nodes act as a container for an arbitrary number of 13 13 subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those group(s), and two pin configuration parameters:
+1 -1
Documentation/devicetree/bindings/pinctrl/nvidia,tegra20-pinmux.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - Tegra's pin configuration nodes act as a container for an abitrary number of 12 + Tegra's pin configuration nodes act as a container for an arbitrary number of 13 13 subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/pinctrl-sirf.txt
··· 13 13 Please refer to pinctrl-bindings.txt in this directory for details of the common 14 14 pinctrl bindings used by client devices. 15 15 16 - SiRFprimaII's pinmux nodes act as a container for an abitrary number of subnodes. 16 + SiRFprimaII's pinmux nodes act as a container for an arbitrary number of subnodes. 17 17 Each of these subnodes represents some desired configuration for a group of pins. 18 18 19 19 Required subnode-properties:
+1 -1
Documentation/devicetree/bindings/pinctrl/pinctrl_spear.txt
··· 32 32 Please refer to pinctrl-bindings.txt in this directory for details of the common 33 33 pinctrl bindings used by client devices. 34 34 35 - SPEAr's pinmux nodes act as a container for an abitrary number of subnodes. Each 35 + SPEAr's pinmux nodes act as a container for an arbitrary number of subnodes. Each 36 36 of these subnodes represents muxing for a pin, a group, or a list of pins or 37 37 groups. 38 38
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,apq8064-pinctrl.txt
··· 18 18 common pinctrl bindings used by client devices, including the meaning of the 19 19 phrase "pin configuration node". 20 20 21 - Qualcomm's pin configuration nodes act as a container for an abitrary number of 21 + Qualcomm's pin configuration nodes act as a container for an arbitrary number of 22 22 subnodes. Each of these subnodes represents some desired configuration for a 23 23 pin, a group, or a list of pins or groups. This configuration can include the 24 24 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,apq8084-pinctrl.txt
··· 47 47 common pinctrl bindings used by client devices, including the meaning of the 48 48 phrase "pin configuration node". 49 49 50 - The pin configuration nodes act as a container for an abitrary number of 50 + The pin configuration nodes act as a container for an arbitrary number of 51 51 subnodes. Each of these subnodes represents some desired configuration for a 52 52 pin, a group, or a list of pins or groups. This configuration can include the 53 53 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,ipq8064-pinctrl.txt
··· 18 18 common pinctrl bindings used by client devices, including the meaning of the 19 19 phrase "pin configuration node". 20 20 21 - Qualcomm's pin configuration nodes act as a container for an abitrary number of 21 + Qualcomm's pin configuration nodes act as a container for an arbitrary number of 22 22 subnodes. Each of these subnodes represents some desired configuration for a 23 23 pin, a group, or a list of pins or groups. This configuration can include the 24 24 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,msm8960-pinctrl.txt
··· 47 47 common pinctrl bindings used by client devices, including the meaning of the 48 48 phrase "pin configuration node". 49 49 50 - The pin configuration nodes act as a container for an abitrary number of 50 + The pin configuration nodes act as a container for an arbitrary number of 51 51 subnodes. Each of these subnodes represents some desired configuration for a 52 52 pin, a group, or a list of pins or groups. This configuration can include the 53 53 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,msm8974-pinctrl.txt
··· 18 18 common pinctrl bindings used by client devices, including the meaning of the 19 19 phrase "pin configuration node". 20 20 21 - Qualcomm's pin configuration nodes act as a container for an abitrary number of 21 + Qualcomm's pin configuration nodes act as a container for an arbitrary number of 22 22 subnodes. Each of these subnodes represents some desired configuration for a 23 23 pin, a group, or a list of pins or groups. This configuration can include the 24 24 mux function to select on those pin(s)/group(s), and various pin configuration
+4 -1
Documentation/devicetree/bindings/vendor-prefixes.txt
··· 34 34 chrp Common Hardware Reference Platform 35 35 chunghwa Chunghwa Picture Tubes Ltd. 36 36 cirrus Cirrus Logic, Inc. 37 + cnm Chips&Media, Inc. 37 38 cortina Cortina Systems, Inc. 38 39 crystalfontz Crystalfontz America, Inc. 39 40 dallas Maxim Integrated Products (formerly Dallas Semiconductor) ··· 93 92 mediatek MediaTek Inc. 94 93 micrel Micrel Inc. 95 94 microchip Microchip Technology Inc. 95 + micron Micron Technology Inc. 96 96 mitsubishi Mitsubishi Electric Corporation 97 97 mosaixtech Mosaix Technologies, Inc. 98 98 moxa Moxa ··· 129 127 ricoh Ricoh Co. Ltd. 130 128 rockchip Fuzhou Rockchip Electronics Co., Ltd 131 129 samsung Samsung Semiconductor 130 + sandisk Sandisk Corporation 132 131 sbs Smart Battery System 133 132 schindler Schindler 134 133 seagate Seagate Technology PLC ··· 141 138 sirf SiRF Technology, Inc. 142 139 sitronix Sitronix Technology Corporation 143 140 smsc Standard Microsystems Corporation 144 - snps Synopsys, Inc. 141 + snps Synopsys, Inc. 145 142 solidrun SolidRun 146 143 sony Sony Corporation 147 144 spansion Spansion Inc.
+75 -6
Documentation/input/elantech.txt
··· 38 38 7.2.1 Status packet 39 39 7.2.2 Head packet 40 40 7.2.3 Motion packet 41 + 8. Trackpoint (for Hardware version 3 and 4) 42 + 8.1 Registers 43 + 8.2 Native relative mode 6 byte packet format 44 + 8.2.1 Status Packet 41 45 42 46 43 47 44 48 1. Introduction 45 49 ~~~~~~~~~~~~ 46 50 47 - Currently the Linux Elantech touchpad driver is aware of two different 48 - hardware versions unimaginatively called version 1 and version 2. Version 1 49 - is found in "older" laptops and uses 4 bytes per packet. Version 2 seems to 50 - be introduced with the EeePC and uses 6 bytes per packet, and provides 51 - additional features such as position of two fingers, and width of the touch. 51 + Currently the Linux Elantech touchpad driver is aware of four different 52 + hardware versions unimaginatively called version 1,version 2, version 3 53 + and version 4. Version 1 is found in "older" laptops and uses 4 bytes per 54 + packet. Version 2 seems to be introduced with the EeePC and uses 6 bytes 55 + per packet, and provides additional features such as position of two fingers, 56 + and width of the touch. Hardware version 3 uses 6 bytes per packet (and 57 + for 2 fingers the concatenation of two 6 bytes packets) and allows tracking 58 + of up to 3 fingers. Hardware version 4 uses 6 bytes per packet, and can 59 + combine a status packet with multiple head or motion packets. Hardware version 60 + 4 allows tracking up to 5 fingers. 61 + 62 + Some Hardware version 3 and version 4 also have a trackpoint which uses a 63 + separate packet format. It is also 6 bytes per packet. 52 64 53 65 The driver tries to support both hardware versions and should be compatible 54 66 with the Xorg Synaptics touchpad driver and its graphical configuration 55 67 utilities. 68 + 69 + Note that a mouse button is also associated with either the touchpad or the 70 + trackpoint when a trackpoint is available. 
Disabling the Touchpad in xorg 71 + (TouchPadOff=0) will also disable the buttons associated with the touchpad. 56 72 57 73 Additionally the operation of the touchpad can be altered by adjusting the 58 74 contents of some of its internal registers. These registers are represented ··· 94 78 2. Extra knobs 95 79 ~~~~~~~~~~~ 96 80 97 - Currently the Linux Elantech touchpad driver provides two extra knobs under 81 + Currently the Linux Elantech touchpad driver provides three extra knobs under 98 82 /sys/bus/serio/drivers/psmouse/serio? for the user. 99 83 100 84 * debug ··· 127 111 Hardware version 2 does not provide the same parity bits. Only some basic 128 112 data consistency checking can be done. For now checking is disabled by 129 113 default. Currently even turning it on will do nothing. 114 + 115 + * crc_enabled 116 + 117 + Sets crc_enabled to 0/1. The name "crc_enabled" is the official name of 118 + this integrity check, even though it is not an actual cyclic redundancy 119 + check. 120 + 121 + Depending on the state of crc_enabled, certain basic data integrity 122 + verification is done by the driver on hardware version 3 and 4. The 123 + driver will reject any packet that appears corrupted. 124 + The state of crc_enabled can be altered with this knob. 125 + 126 + Reading the crc_enabled value will show the active value. Echoing 127 + "0" or "1" to this file will set the state to "0" or "1". 130 128 131 129 ///////////////////////////////////////////////////////////////////////////// 132 130 ··· 776 746 777 747 byte 0 ~ 2 for one finger 778 748 byte 3 ~ 5 for another 749 + 750 + 751 + 8. Trackpoint (for Hardware version 3 and 4) 752 + ========================================= 753 + 8.1 Registers 754 + ~~~~~~~~~ 755 + No special registers have been identified. 
756 + 757 + 8.2 Native relative mode 6 byte packet format 758 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 759 + 8.2.1 Status Packet 760 + ~~~~~~~~~~~~~ 761 + 762 + byte 0: 763 + bit 7 6 5 4 3 2 1 0 764 + 0 0 sx sy 0 M R L 765 + byte 1: 766 + bit 7 6 5 4 3 2 1 0 767 + ~sx 0 0 0 0 0 0 0 768 + byte 2: 769 + bit 7 6 5 4 3 2 1 0 770 + ~sy 0 0 0 0 0 0 0 771 + byte 3: 772 + bit 7 6 5 4 3 2 1 0 773 + 0 0 ~sy ~sx 0 1 1 0 774 + byte 4: 775 + bit 7 6 5 4 3 2 1 0 776 + x7 x6 x5 x4 x3 x2 x1 x0 777 + byte 5: 778 + bit 7 6 5 4 3 2 1 0 779 + y7 y6 y5 y4 y3 y2 y1 y0 780 + 781 + 782 + x and y are written in two's complement spread 783 + over 9 bits with sx/sy the relative top bit and 784 + x7..x0 and y7..y0 the lower bits. 785 + ~sx is the inverse of sx, ~sy is the inverse of sy. 786 + The sign of y is opposite to what the input driver 787 + expects for a relative movement
+34
MAINTAINERS
··· 2752 2752 S: Supported 2753 2753 F: drivers/net/ethernet/chelsio/cxgb3/ 2754 2754 2755 + CXGB3 ISCSI DRIVER (CXGB3I) 2756 + M: Karen Xie <kxie@chelsio.com> 2757 + L: linux-scsi@vger.kernel.org 2758 + W: http://www.chelsio.com 2759 + S: Supported 2760 + F: drivers/scsi/cxgbi/cxgb3i 2761 + 2755 2762 CXGB3 IWARP RNIC DRIVER (IW_CXGB3) 2756 2763 M: Steve Wise <swise@chelsio.com> 2757 2764 L: linux-rdma@vger.kernel.org ··· 2772 2765 W: http://www.chelsio.com 2773 2766 S: Supported 2774 2767 F: drivers/net/ethernet/chelsio/cxgb4/ 2768 + 2769 + CXGB4 ISCSI DRIVER (CXGB4I) 2770 + M: Karen Xie <kxie@chelsio.com> 2771 + L: linux-scsi@vger.kernel.org 2772 + W: http://www.chelsio.com 2773 + S: Supported 2774 + F: drivers/scsi/cxgbi/cxgb4i 2775 2775 2776 2776 CXGB4 IWARP RNIC DRIVER (IW_CXGB4) 2777 2777 M: Steve Wise <swise@chelsio.com> ··· 6631 6617 S: Maintained 6632 6618 F: arch/arm/*omap*/ 6633 6619 F: drivers/i2c/busses/i2c-omap.c 6620 + F: drivers/irqchip/irq-omap-intc.c 6621 + F: drivers/mfd/*omap*.c 6622 + F: drivers/mfd/menelaus.c 6623 + F: drivers/mfd/palmas.c 6624 + F: drivers/mfd/tps65217.c 6625 + F: drivers/mfd/tps65218.c 6626 + F: drivers/mfd/tps65910.c 6627 + F: drivers/mfd/twl-core.[ch] 6628 + F: drivers/mfd/twl4030*.c 6629 + F: drivers/mfd/twl6030*.c 6630 + F: drivers/mfd/twl6040*.c 6631 + F: drivers/regulator/palmas-regulator*.c 6632 + F: drivers/regulator/pbias-regulator.c 6633 + F: drivers/regulator/tps65217-regulator.c 6634 + F: drivers/regulator/tps65218-regulator.c 6635 + F: drivers/regulator/tps65910-regulator.c 6636 + F: drivers/regulator/twl-regulator.c 6634 6637 F: include/linux/i2c-omap.h 6635 6638 6636 6639 OMAP DEVICE TREE SUPPORT ··· 6658 6627 S: Maintained 6659 6628 F: arch/arm/boot/dts/*omap* 6660 6629 F: arch/arm/boot/dts/*am3* 6630 + F: arch/arm/boot/dts/*am4* 6631 + F: arch/arm/boot/dts/*am5* 6632 + F: arch/arm/boot/dts/*dra7* 6661 6633 6662 6634 OMAP CLOCK FRAMEWORK SUPPORT 6663 6635 M: Paul Walmsley <paul@pwsan.com>
+4 -3
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 18 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc4 4 + EXTRAVERSION = -rc5 5 5 NAME = Diseased Newt 6 6 7 7 # *DOCUMENTATION* ··· 297 297 298 298 HOSTCC = gcc 299 299 HOSTCXX = g++ 300 - HOSTCFLAGS = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer 300 + HOSTCFLAGS = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -std=gnu89 301 301 HOSTCXXFLAGS = -O2 302 302 303 303 ifeq ($(shell $(HOSTCC) -v 2>&1 | grep -c "clang version"), 1) ··· 401 401 KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \ 402 402 -fno-strict-aliasing -fno-common \ 403 403 -Werror-implicit-function-declaration \ 404 - -Wno-format-security 404 + -Wno-format-security \ 405 + -std=gnu89 405 406 406 407 KBUILD_AFLAGS_KERNEL := 407 408 KBUILD_CFLAGS_KERNEL :=
+16 -4
arch/arm/boot/compressed/head.S
··· 397 397 add sp, sp, r6 398 398 #endif 399 399 400 - tst r4, #1 401 - bleq cache_clean_flush 400 + bl cache_clean_flush 402 401 403 402 adr r0, BSYM(restart) 404 403 add r0, r0, r6 ··· 1046 1047 b call_cache_fn 1047 1048 1048 1049 __armv4_mpu_cache_flush: 1050 + tst r4, #1 1051 + movne pc, lr 1049 1052 mov r2, #1 1050 1053 mov r3, #0 1051 1054 mcr p15, 0, ip, c7, c6, 0 @ invalidate D cache ··· 1065 1064 mov pc, lr 1066 1065 1067 1066 __fa526_cache_flush: 1067 + tst r4, #1 1068 + movne pc, lr 1068 1069 mov r1, #0 1069 1070 mcr p15, 0, r1, c7, c14, 0 @ clean and invalidate D cache 1070 1071 mcr p15, 0, r1, c7, c5, 0 @ flush I cache ··· 1075 1072 1076 1073 __armv6_mmu_cache_flush: 1077 1074 mov r1, #0 1078 - mcr p15, 0, r1, c7, c14, 0 @ clean+invalidate D 1075 + tst r4, #1 1076 + mcreq p15, 0, r1, c7, c14, 0 @ clean+invalidate D 1079 1077 mcr p15, 0, r1, c7, c5, 0 @ invalidate I+BTB 1080 - mcr p15, 0, r1, c7, c15, 0 @ clean+invalidate unified 1078 + mcreq p15, 0, r1, c7, c15, 0 @ clean+invalidate unified 1081 1079 mcr p15, 0, r1, c7, c10, 4 @ drain WB 1082 1080 mov pc, lr 1083 1081 1084 1082 __armv7_mmu_cache_flush: 1083 + tst r4, #1 1084 + bne iflush 1085 1085 mrc p15, 0, r10, c0, c1, 5 @ read ID_MMFR1 1086 1086 tst r10, #0xf << 16 @ hierarchical cache (ARMv7) 1087 1087 mov r10, #0 ··· 1145 1139 mov pc, lr 1146 1140 1147 1141 __armv5tej_mmu_cache_flush: 1142 + tst r4, #1 1143 + movne pc, lr 1148 1144 1: mrc p15, 0, r15, c7, c14, 3 @ test,clean,invalidate D cache 1149 1145 bne 1b 1150 1146 mcr p15, 0, r0, c7, c5, 0 @ flush I cache ··· 1154 1146 mov pc, lr 1155 1147 1156 1148 __armv4_mmu_cache_flush: 1149 + tst r4, #1 1150 + movne pc, lr 1157 1151 mov r2, #64*1024 @ default: 32K dcache size (*2) 1158 1152 mov r11, #32 @ default: 32 byte line size 1159 1153 mrc p15, 0, r3, c0, c0, 1 @ read cache type ··· 1189 1179 1190 1180 __armv3_mmu_cache_flush: 1191 1181 __armv3_mpu_cache_flush: 1182 + tst r4, #1 1183 + movne pc, lr 1192 1184 mov r1, #0 1193 1185 mcr p15, 0, r1, 
c7, c0, 0 @ invalidate whole cache v3 1194 1186 mov pc, lr
+1 -1
arch/arm/boot/dts/am335x-evm.dts
··· 489 489 reg = <0x00060000 0x00020000>; 490 490 }; 491 491 partition@4 { 492 - label = "NAND.u-boot-spl"; 492 + label = "NAND.u-boot-spl-os"; 493 493 reg = <0x00080000 0x00040000>; 494 494 }; 495 495 partition@5 {
+2 -2
arch/arm/boot/dts/am437x-gp-evm.dts
··· 291 291 dcdc3: regulator-dcdc3 { 292 292 compatible = "ti,tps65218-dcdc3"; 293 293 regulator-name = "vdcdc3"; 294 - regulator-min-microvolt = <1350000>; 295 - regulator-max-microvolt = <1350000>; 294 + regulator-min-microvolt = <1500000>; 295 + regulator-max-microvolt = <1500000>; 296 296 regulator-boot-on; 297 297 regulator-always-on; 298 298 };
+2 -2
arch/arm/boot/dts/am437x-sk-evm.dts
··· 363 363 dcdc3: regulator-dcdc3 { 364 364 compatible = "ti,tps65218-dcdc3"; 365 365 regulator-name = "vdds_ddr"; 366 - regulator-min-microvolt = <1350000>; 367 - regulator-max-microvolt = <1350000>; 366 + regulator-min-microvolt = <1500000>; 367 + regulator-max-microvolt = <1500000>; 368 368 regulator-boot-on; 369 369 regulator-always-on; 370 370 };
+2 -2
arch/arm/boot/dts/am43x-epos-evm.dts
··· 358 358 dcdc3: regulator-dcdc3 { 359 359 compatible = "ti,tps65218-dcdc3"; 360 360 regulator-name = "vdcdc3"; 361 - regulator-min-microvolt = <1350000>; 362 - regulator-max-microvolt = <1350000>; 361 + regulator-min-microvolt = <1500000>; 362 + regulator-max-microvolt = <1500000>; 363 363 regulator-boot-on; 364 364 regulator-always-on; 365 365 };
+1 -1
arch/arm/boot/dts/sama5d31.dtsi
··· 12 12 #include "sama5d3_uart.dtsi" 13 13 14 14 / { 15 - compatible = "atmel,samad31", "atmel,sama5d3", "atmel,sama5"; 15 + compatible = "atmel,sama5d31", "atmel,sama5d3", "atmel,sama5"; 16 16 };
+1 -1
arch/arm/boot/dts/sama5d33.dtsi
··· 10 10 #include "sama5d3_gmac.dtsi" 11 11 12 12 / { 13 - compatible = "atmel,samad33", "atmel,sama5d3", "atmel,sama5"; 13 + compatible = "atmel,sama5d33", "atmel,sama5d3", "atmel,sama5"; 14 14 };
+1 -1
arch/arm/boot/dts/sama5d34.dtsi
··· 12 12 #include "sama5d3_mci2.dtsi" 13 13 14 14 / { 15 - compatible = "atmel,samad34", "atmel,sama5d3", "atmel,sama5"; 15 + compatible = "atmel,sama5d34", "atmel,sama5d3", "atmel,sama5"; 16 16 };
+1 -1
arch/arm/boot/dts/sama5d35.dtsi
··· 14 14 #include "sama5d3_tcb1.dtsi" 15 15 16 16 / { 17 - compatible = "atmel,samad35", "atmel,sama5d3", "atmel,sama5"; 17 + compatible = "atmel,sama5d35", "atmel,sama5d3", "atmel,sama5"; 18 18 };
+1 -1
arch/arm/boot/dts/sama5d36.dtsi
··· 16 16 #include "sama5d3_uart.dtsi" 17 17 18 18 / { 19 - compatible = "atmel,samad36", "atmel,sama5d3", "atmel,sama5"; 19 + compatible = "atmel,sama5d36", "atmel,sama5d3", "atmel,sama5"; 20 20 };
+1 -1
arch/arm/boot/dts/sama5d3xcm.dtsi
··· 8 8 */ 9 9 10 10 / { 11 - compatible = "atmel,samad3xcm", "atmel,sama5d3", "atmel,sama5"; 11 + compatible = "atmel,sama5d3xcm", "atmel,sama5d3", "atmel,sama5"; 12 12 13 13 chosen { 14 14 bootargs = "console=ttyS0,115200 rootfstype=ubifs ubi.mtd=5 root=ubi0:rootfs";
+1 -1
arch/arm/mach-mvebu/board-v7.c
··· 188 188 189 189 static void __init mvebu_dt_init(void) 190 190 { 191 - if (of_machine_is_compatible("plathome,openblocks-ax3-4")) 191 + if (of_machine_is_compatible("marvell,armadaxp")) 192 192 i2c_quirk(); 193 193 if (of_machine_is_compatible("marvell,a375-db")) { 194 194 external_abort_quirk();
+1
arch/arm/mm/Kconfig
··· 798 798 799 799 config KUSER_HELPERS 800 800 bool "Enable kuser helpers in vector page" if !NEED_KUSER_HELPERS 801 + depends on MMU 801 802 default y 802 803 help 803 804 Warning: disabling this option may break user programs.
+32 -4
arch/arm/plat-orion/gpio.c
··· 497 497 #define orion_gpio_dbg_show NULL 498 498 #endif 499 499 500 + static void orion_gpio_unmask_irq(struct irq_data *d) 501 + { 502 + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 503 + struct irq_chip_type *ct = irq_data_get_chip_type(d); 504 + u32 reg_val; 505 + u32 mask = d->mask; 506 + 507 + irq_gc_lock(gc); 508 + reg_val = irq_reg_readl(gc->reg_base + ct->regs.mask); 509 + reg_val |= mask; 510 + irq_reg_writel(reg_val, gc->reg_base + ct->regs.mask); 511 + irq_gc_unlock(gc); 512 + } 513 + 514 + static void orion_gpio_mask_irq(struct irq_data *d) 515 + { 516 + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 517 + struct irq_chip_type *ct = irq_data_get_chip_type(d); 518 + u32 mask = d->mask; 519 + u32 reg_val; 520 + 521 + irq_gc_lock(gc); 522 + reg_val = irq_reg_readl(gc->reg_base + ct->regs.mask); 523 + reg_val &= ~mask; 524 + irq_reg_writel(reg_val, gc->reg_base + ct->regs.mask); 525 + irq_gc_unlock(gc); 526 + } 527 + 500 528 void __init orion_gpio_init(struct device_node *np, 501 529 int gpio_base, int ngpio, 502 530 void __iomem *base, int mask_offset, ··· 593 565 ct = gc->chip_types; 594 566 ct->regs.mask = ochip->mask_offset + GPIO_LEVEL_MASK_OFF; 595 567 ct->type = IRQ_TYPE_LEVEL_HIGH | IRQ_TYPE_LEVEL_LOW; 596 - ct->chip.irq_mask = irq_gc_mask_clr_bit; 597 - ct->chip.irq_unmask = irq_gc_mask_set_bit; 568 + ct->chip.irq_mask = orion_gpio_mask_irq; 569 + ct->chip.irq_unmask = orion_gpio_unmask_irq; 598 570 ct->chip.irq_set_type = gpio_irq_set_type; 599 571 ct->chip.name = ochip->chip.label; 600 572 ··· 603 575 ct->regs.ack = GPIO_EDGE_CAUSE_OFF; 604 576 ct->type = IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING; 605 577 ct->chip.irq_ack = irq_gc_ack_clr_bit; 606 - ct->chip.irq_mask = irq_gc_mask_clr_bit; 607 - ct->chip.irq_unmask = irq_gc_mask_set_bit; 578 + ct->chip.irq_mask = orion_gpio_mask_irq; 579 + ct->chip.irq_unmask = orion_gpio_unmask_irq; 608 580 ct->chip.irq_set_type = gpio_irq_set_type; 609 581 ct->handler = 
handle_edge_irq; 610 582 ct->chip.name = ochip->chip.label;
+1 -1
arch/arm64/include/asm/memory.h
··· 142 142 * virt_to_page(k) convert a _valid_ virtual address to struct page * 143 143 * virt_addr_valid(k) indicates whether a virtual address is valid 144 144 */ 145 - #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET 145 + #define ARCH_PFN_OFFSET ((unsigned long)PHYS_PFN_OFFSET) 146 146 147 147 #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT) 148 148 #define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+21 -6
arch/arm64/kernel/efi-entry.S
··· 54 54 b.eq efi_load_fail 55 55 56 56 /* 57 - * efi_entry() will have relocated the kernel image if necessary 58 - * and we return here with device tree address in x0 and the kernel 59 - * entry point stored at *image_addr. Save those values in registers 60 - * which are callee preserved. 57 + * efi_entry() will have copied the kernel image if necessary and we 58 + * return here with device tree address in x0 and the kernel entry 59 + * point stored at *image_addr. Save those values in registers which 60 + * are callee preserved. 61 61 */ 62 62 mov x20, x0 // DTB address 63 63 ldr x0, [sp, #16] // relocated _text address 64 64 mov x21, x0 65 65 66 66 /* 67 - * Flush dcache covering current runtime addresses 68 - * of kernel text/data. Then flush all of icache. 67 + * Calculate size of the kernel Image (same for original and copy). 69 68 */ 70 69 adrp x1, _text 71 70 add x1, x1, #:lo12:_text ··· 72 73 add x2, x2, #:lo12:_edata 73 74 sub x1, x2, x1 74 75 76 + /* 77 + * Flush the copied Image to the PoC, and ensure it is not shadowed by 78 + * stale icache entries from before relocation. 79 + */ 75 80 bl __flush_dcache_area 76 81 ic ialluis 82 + 83 + /* 84 + * Ensure that the rest of this function (in the original Image) is 85 + * visible when the caches are disabled. The I-cache can't have stale 86 + * entries for the VA range of the current image, so no maintenance is 87 + * necessary. 88 + */ 89 + adr x0, efi_stub_entry 90 + adr x1, efi_stub_entry_end 91 + sub x1, x1, x0 92 + bl __flush_dcache_area 77 93 78 94 /* Turn off Dcache and MMU */ 79 95 mrs x0, CurrentEL ··· 119 105 ldp x29, x30, [sp], #32 120 106 ret 121 107 108 + efi_stub_entry_end: 122 109 ENDPROC(efi_stub_entry)
+3 -2
arch/arm64/kernel/insn.c
··· 163 163 * which ends with "dsb; isb" pair guaranteeing global 164 164 * visibility. 165 165 */ 166 - atomic_set(&pp->cpu_count, -1); 166 + /* Notify other processors with an additional increment. */ 167 + atomic_inc(&pp->cpu_count); 167 168 } else { 168 - while (atomic_read(&pp->cpu_count) != -1) 169 + while (atomic_read(&pp->cpu_count) <= num_online_cpus()) 169 170 cpu_relax(); 170 171 isb(); 171 172 }
+1 -1
arch/arm64/lib/clear_user.S
··· 46 46 sub x1, x1, #2 47 47 4: adds x1, x1, #1 48 48 b.mi 5f 49 - strb wzr, [x0] 49 + USER(9f, strb wzr, [x0] ) 50 50 5: mov x0, #0 51 51 ret 52 52 ENDPROC(__clear_user)
+1 -1
arch/arm64/mm/mmu.c
··· 202 202 } 203 203 204 204 static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr, 205 - unsigned long end, unsigned long phys, 205 + unsigned long end, phys_addr_t phys, 206 206 int map_io) 207 207 { 208 208 pud_t *pud;
+7 -1
arch/mips/include/asm/jump_label.h
··· 20 20 #define WORD_INSN ".word" 21 21 #endif 22 22 23 + #ifdef CONFIG_CPU_MICROMIPS 24 + #define NOP_INSN "nop32" 25 + #else 26 + #define NOP_INSN "nop" 27 + #endif 28 + 23 29 static __always_inline bool arch_static_branch(struct static_key *key) 24 30 { 25 - asm_volatile_goto("1:\tnop\n\t" 31 + asm_volatile_goto("1:\t" NOP_INSN "\n\t" 26 32 "nop\n\t" 27 33 ".pushsection __jump_table, \"aw\"\n\t" 28 34 WORD_INSN " 1b, %l[l_yes], %0\n\t"
-2
arch/mips/include/asm/mach-loongson/cpu-feature-overrides.h
··· 41 41 #define cpu_has_mcheck 0 42 42 #define cpu_has_mdmx 0 43 43 #define cpu_has_mips16 0 44 - #define cpu_has_mips32r1 0 45 44 #define cpu_has_mips32r2 0 46 45 #define cpu_has_mips3d 0 47 - #define cpu_has_mips64r1 0 48 46 #define cpu_has_mips64r2 0 49 47 #define cpu_has_mipsmt 0 50 48 #define cpu_has_prefetch 0
+8 -4
arch/mips/include/asm/uaccess.h
··· 301 301 __get_kernel_common((x), size, __gu_ptr); \ 302 302 else \ 303 303 __get_user_common((x), size, __gu_ptr); \ 304 - } \ 304 + } else \ 305 + (x) = 0; \ 305 306 \ 306 307 __gu_err; \ 307 308 }) ··· 317 316 " .insn \n" \ 318 317 " .section .fixup,\"ax\" \n" \ 319 318 "3: li %0, %4 \n" \ 319 + " move %1, $0 \n" \ 320 320 " j 2b \n" \ 321 321 " .previous \n" \ 322 322 " .section __ex_table,\"a\" \n" \ ··· 632 630 " .insn \n" \ 633 631 " .section .fixup,\"ax\" \n" \ 634 632 "3: li %0, %4 \n" \ 633 + " move %1, $0 \n" \ 635 634 " j 2b \n" \ 636 635 " .previous \n" \ 637 636 " .section __ex_table,\"a\" \n" \ ··· 776 773 "jal\t" #destination "\n\t" 777 774 #endif 778 775 779 - #ifndef CONFIG_CPU_DADDI_WORKAROUNDS 780 - #define DADDI_SCRATCH "$0" 781 - #else 776 + #if defined(CONFIG_CPU_DADDI_WORKAROUNDS) || (defined(CONFIG_EVA) && \ 777 + defined(CONFIG_CPU_HAS_PREFETCH)) 782 778 #define DADDI_SCRATCH "$3" 779 + #else 780 + #define DADDI_SCRATCH "$0" 783 781 #endif 784 782 785 783 extern size_t __copy_user(void *__to, const void *__from, size_t __n);
+5 -2
arch/mips/kernel/cpu-probe.c
··· 757 757 c->cputype = CPU_LOONGSON2; 758 758 __cpu_name[cpu] = "ICT Loongson-2"; 759 759 set_elf_platform(cpu, "loongson2e"); 760 + set_isa(c, MIPS_CPU_ISA_III); 760 761 break; 761 762 case PRID_REV_LOONGSON2F: 762 763 c->cputype = CPU_LOONGSON2; 763 764 __cpu_name[cpu] = "ICT Loongson-2"; 764 765 set_elf_platform(cpu, "loongson2f"); 766 + set_isa(c, MIPS_CPU_ISA_III); 765 767 break; 766 768 case PRID_REV_LOONGSON3A: 767 769 c->cputype = CPU_LOONGSON3; 768 - c->writecombine = _CACHE_UNCACHED_ACCELERATED; 769 770 __cpu_name[cpu] = "ICT Loongson-3"; 770 771 set_elf_platform(cpu, "loongson3a"); 772 + set_isa(c, MIPS_CPU_ISA_M64R1); 771 773 break; 772 774 case PRID_REV_LOONGSON3B_R1: 773 775 case PRID_REV_LOONGSON3B_R2: 774 776 c->cputype = CPU_LOONGSON3; 775 777 __cpu_name[cpu] = "ICT Loongson-3"; 776 778 set_elf_platform(cpu, "loongson3b"); 779 + set_isa(c, MIPS_CPU_ISA_M64R1); 777 780 break; 778 781 } 779 782 780 - set_isa(c, MIPS_CPU_ISA_III); 781 783 c->options = R4K_OPTS | 782 784 MIPS_CPU_FPU | MIPS_CPU_LLSC | 783 785 MIPS_CPU_32FPR; 784 786 c->tlbsize = 64; 787 + c->writecombine = _CACHE_UNCACHED_ACCELERATED; 785 788 break; 786 789 case PRID_IMP_LOONGSON_32: /* Loongson-1 */ 787 790 decode_configs(c);
+32 -10
arch/mips/kernel/jump_label.c
··· 18 18 19 19 #ifdef HAVE_JUMP_LABEL 20 20 21 - #define J_RANGE_MASK ((1ul << 28) - 1) 21 + /* 22 + * Define parameters for the standard MIPS and the microMIPS jump 23 + * instruction encoding respectively: 24 + * 25 + * - the ISA bit of the target, either 0 or 1 respectively, 26 + * 27 + * - the amount the jump target address is shifted right to fit in the 28 + * immediate field of the machine instruction, either 2 or 1, 29 + * 30 + * - the mask determining the size of the jump region relative to the 31 + * delay-slot instruction, either 256MB or 128MB, 32 + * 33 + * - the jump target alignment, either 4 or 2 bytes. 34 + */ 35 + #define J_ISA_BIT IS_ENABLED(CONFIG_CPU_MICROMIPS) 36 + #define J_RANGE_SHIFT (2 - J_ISA_BIT) 37 + #define J_RANGE_MASK ((1ul << (26 + J_RANGE_SHIFT)) - 1) 38 + #define J_ALIGN_MASK ((1ul << J_RANGE_SHIFT) - 1) 22 39 23 40 void arch_jump_label_transform(struct jump_entry *e, 24 41 enum jump_label_type type) 25 42 { 43 + union mips_instruction *insn_p; 26 44 union mips_instruction insn; 27 - union mips_instruction *insn_p = 28 - (union mips_instruction *)(unsigned long)e->code; 29 45 30 - /* Jump only works within a 256MB aligned region. */ 31 - BUG_ON((e->target & ~J_RANGE_MASK) != (e->code & ~J_RANGE_MASK)); 46 + insn_p = (union mips_instruction *)msk_isa16_mode(e->code); 32 47 33 - /* Target must have 4 byte alignment. */ 34 - BUG_ON((e->target & 3) != 0); 48 + /* Jump only works within an aligned region its delay slot is in. */ 49 + BUG_ON((e->target & ~J_RANGE_MASK) != ((e->code + 4) & ~J_RANGE_MASK)); 50 + 51 + /* Target must have the right alignment and ISA must be preserved. */ 52 + BUG_ON((e->target & J_ALIGN_MASK) != J_ISA_BIT); 35 53 36 54 if (type == JUMP_LABEL_ENABLE) { 37 - insn.j_format.opcode = j_op; 38 - insn.j_format.target = (e->target & J_RANGE_MASK) >> 2; 55 + insn.j_format.opcode = J_ISA_BIT ? 
mm_j32_op : j_op; 56 + insn.j_format.target = e->target >> J_RANGE_SHIFT; 39 57 } else { 40 58 insn.word = 0; /* nop */ 41 59 } 42 60 43 61 get_online_cpus(); 44 62 mutex_lock(&text_mutex); 45 - *insn_p = insn; 63 + if (IS_ENABLED(CONFIG_CPU_MICROMIPS)) { 64 + insn_p->halfword[0] = insn.word >> 16; 65 + insn_p->halfword[1] = insn.word; 66 + } else 67 + *insn_p = insn; 46 68 47 69 flush_icache_range((unsigned long)insn_p, 48 70 (unsigned long)insn_p + sizeof(*insn_p));
+1
arch/mips/lib/memcpy.S
··· 503 503 STOREB(t0, NBYTES-2(dst), .Ls_exc_p1\@) 504 504 .Ldone\@: 505 505 jr ra 506 + nop 506 507 .if __memcpy == 1 507 508 END(memcpy) 508 509 .set __memcpy, 0
+1
arch/mips/loongson/loongson-3/numa.c
··· 33 33 34 34 static struct node_data prealloc__node_data[MAX_NUMNODES]; 35 35 unsigned char __node_distances[MAX_NUMNODES][MAX_NUMNODES]; 36 + EXPORT_SYMBOL(__node_distances); 36 37 struct node_data *__node_data[MAX_NUMNODES]; 37 38 EXPORT_SYMBOL(__node_data); 38 39
+4
arch/mips/mm/tlb-r4k.c
··· 299 299 300 300 local_irq_save(flags); 301 301 302 + htw_stop(); 302 303 pid = read_c0_entryhi() & ASID_MASK; 303 304 address &= (PAGE_MASK << 1); 304 305 write_c0_entryhi(address | pid); ··· 347 346 tlb_write_indexed(); 348 347 } 349 348 tlbw_use_hazard(); 349 + htw_start(); 350 350 flush_itlb_vm(vma); 351 351 local_irq_restore(flags); 352 352 } ··· 424 422 425 423 local_irq_save(flags); 426 424 /* Save old context and create impossible VPN2 value */ 425 + htw_stop(); 427 426 old_ctx = read_c0_entryhi(); 428 427 old_pagemask = read_c0_pagemask(); 429 428 wired = read_c0_wired(); ··· 446 443 447 444 write_c0_entryhi(old_ctx); 448 445 write_c0_pagemask(old_pagemask); 446 + htw_start(); 449 447 out: 450 448 local_irq_restore(flags); 451 449 return ret;
+1 -1
arch/mips/oprofile/backtrace.c
··· 92 92 /* This marks the end of the previous function, 93 93 which means we overran. */ 94 94 break; 95 - stack_size = (unsigned) stack_adjustment; 95 + stack_size = (unsigned long) stack_adjustment; 96 96 } else if (is_ra_save_ins(&ip)) { 97 97 int ra_slot = ip.i_format.simmediate; 98 98 if (ra_slot < 0)
+1
arch/mips/sgi-ip27/ip27-memory.c
··· 107 107 } 108 108 109 109 unsigned char __node_distances[MAX_COMPACT_NODES][MAX_COMPACT_NODES]; 110 + EXPORT_SYMBOL(__node_distances); 110 111 111 112 static int __init compute_node_distance(nasid_t nasid_a, nasid_t nasid_b) 112 113 {
+8 -11
arch/parisc/include/asm/uaccess.h
··· 9 9 #include <asm/errno.h> 10 10 #include <asm-generic/uaccess-unaligned.h> 11 11 12 + #include <linux/bug.h> 13 + 12 14 #define VERIFY_READ 0 13 15 #define VERIFY_WRITE 1 14 16 ··· 30 28 * that put_user is the same as __put_user, etc. 31 29 */ 32 30 33 - extern int __get_kernel_bad(void); 34 - extern int __get_user_bad(void); 35 - extern int __put_kernel_bad(void); 36 - extern int __put_user_bad(void); 37 - 38 31 static inline long access_ok(int type, const void __user * addr, 39 32 unsigned long size) 40 33 { ··· 40 43 #define get_user __get_user 41 44 42 45 #if !defined(CONFIG_64BIT) 43 - #define LDD_KERNEL(ptr) __get_kernel_bad(); 44 - #define LDD_USER(ptr) __get_user_bad(); 46 + #define LDD_KERNEL(ptr) BUILD_BUG() 47 + #define LDD_USER(ptr) BUILD_BUG() 45 48 #define STD_KERNEL(x, ptr) __put_kernel_asm64(x,ptr) 46 49 #define STD_USER(x, ptr) __put_user_asm64(x,ptr) 47 50 #define ASM_WORD_INSN ".word\t" ··· 91 94 case 2: __get_kernel_asm("ldh",ptr); break; \ 92 95 case 4: __get_kernel_asm("ldw",ptr); break; \ 93 96 case 8: LDD_KERNEL(ptr); break; \ 94 - default: __get_kernel_bad(); break; \ 97 + default: BUILD_BUG(); break; \ 95 98 } \ 96 99 } \ 97 100 else { \ ··· 100 103 case 2: __get_user_asm("ldh",ptr); break; \ 101 104 case 4: __get_user_asm("ldw",ptr); break; \ 102 105 case 8: LDD_USER(ptr); break; \ 103 - default: __get_user_bad(); break; \ 106 + default: BUILD_BUG(); break; \ 104 107 } \ 105 108 } \ 106 109 \ ··· 133 136 case 2: __put_kernel_asm("sth",__x,ptr); break; \ 134 137 case 4: __put_kernel_asm("stw",__x,ptr); break; \ 135 138 case 8: STD_KERNEL(__x,ptr); break; \ 136 - default: __put_kernel_bad(); break; \ 139 + default: BUILD_BUG(); break; \ 137 140 } \ 138 141 } \ 139 142 else { \ ··· 142 145 case 2: __put_user_asm("sth",__x,ptr); break; \ 143 146 case 4: __put_user_asm("stw",__x,ptr); break; \ 144 147 case 8: STD_USER(__x,ptr); break; \ 145 - default: __put_user_bad(); break; \ 148 + default: BUILD_BUG(); break; \ 146 149 } \ 147 150 } \ 
148 151 \
+1 -7
arch/parisc/include/uapi/asm/bitsperlong.h
··· 1 1 #ifndef __ASM_PARISC_BITSPERLONG_H 2 2 #define __ASM_PARISC_BITSPERLONG_H 3 3 4 - /* 5 - * using CONFIG_* outside of __KERNEL__ is wrong, 6 - * __LP64__ was also removed from headers, so what 7 - * is the right approach on parisc? 8 - * -arnd 9 - */ 10 - #if (defined(__KERNEL__) && defined(CONFIG_64BIT)) || defined (__LP64__) 4 + #if defined(__LP64__) 11 5 #define __BITS_PER_LONG 64 12 6 #define SHIFT_PER_LONG 6 13 7 #else
+5 -3
arch/parisc/include/uapi/asm/msgbuf.h
··· 1 1 #ifndef _PARISC_MSGBUF_H 2 2 #define _PARISC_MSGBUF_H 3 3 4 + #include <asm/bitsperlong.h> 5 + 4 6 /* 5 7 * The msqid64_ds structure for parisc architecture, copied from sparc. 6 8 * Note extra padding because this structure is passed back and forth ··· 15 13 16 14 struct msqid64_ds { 17 15 struct ipc64_perm msg_perm; 18 - #ifndef CONFIG_64BIT 16 + #if __BITS_PER_LONG != 64 19 17 unsigned int __pad1; 20 18 #endif 21 19 __kernel_time_t msg_stime; /* last msgsnd time */ 22 - #ifndef CONFIG_64BIT 20 + #if __BITS_PER_LONG != 64 23 21 unsigned int __pad2; 24 22 #endif 25 23 __kernel_time_t msg_rtime; /* last msgrcv time */ 26 - #ifndef CONFIG_64BIT 24 + #if __BITS_PER_LONG != 64 27 25 unsigned int __pad3; 28 26 #endif 29 27 __kernel_time_t msg_ctime; /* last change time */
+4 -2
arch/parisc/include/uapi/asm/sembuf.h
··· 1 1 #ifndef _PARISC_SEMBUF_H 2 2 #define _PARISC_SEMBUF_H 3 3 4 + #include <asm/bitsperlong.h> 5 + 4 6 /* 5 7 * The semid64_ds structure for parisc architecture. 6 8 * Note extra padding because this structure is passed back and forth ··· 15 13 16 14 struct semid64_ds { 17 15 struct ipc64_perm sem_perm; /* permissions .. see ipc.h */ 18 - #ifndef CONFIG_64BIT 16 + #if __BITS_PER_LONG != 64 19 17 unsigned int __pad1; 20 18 #endif 21 19 __kernel_time_t sem_otime; /* last semop time */ 22 - #ifndef CONFIG_64BIT 20 + #if __BITS_PER_LONG != 64 23 21 unsigned int __pad2; 24 22 #endif 25 23 __kernel_time_t sem_ctime; /* last change time */
+15 -20
arch/parisc/include/uapi/asm/shmbuf.h
··· 1 1 #ifndef _PARISC_SHMBUF_H 2 2 #define _PARISC_SHMBUF_H 3 3 4 + #include <asm/bitsperlong.h> 5 + 4 6 /* 5 7 * The shmid64_ds structure for parisc architecture. 6 8 * Note extra padding because this structure is passed back and forth ··· 15 13 16 14 struct shmid64_ds { 17 15 struct ipc64_perm shm_perm; /* operation perms */ 18 - #ifndef CONFIG_64BIT 16 + #if __BITS_PER_LONG != 64 19 17 unsigned int __pad1; 20 18 #endif 21 19 __kernel_time_t shm_atime; /* last attach time */ 22 - #ifndef CONFIG_64BIT 20 + #if __BITS_PER_LONG != 64 23 21 unsigned int __pad2; 24 22 #endif 25 23 __kernel_time_t shm_dtime; /* last detach time */ 26 - #ifndef CONFIG_64BIT 24 + #if __BITS_PER_LONG != 64 27 25 unsigned int __pad3; 28 26 #endif 29 27 __kernel_time_t shm_ctime; /* last change time */ 30 - #ifndef CONFIG_64BIT 28 + #if __BITS_PER_LONG != 64 31 29 unsigned int __pad4; 32 30 #endif 33 31 size_t shm_segsz; /* size of segment (bytes) */ ··· 38 36 unsigned int __unused2; 39 37 }; 40 38 41 - #ifdef CONFIG_64BIT 42 - /* The 'unsigned int' (formerly 'unsigned long') data types below will 43 - * ensure that a 32-bit app calling shmctl(*,IPC_INFO,*) will work on 44 - * a wide kernel, but if some of these values are meant to contain pointers 45 - * they may need to be 'long long' instead. -PB XXX FIXME 46 - */ 47 - #endif 48 39 struct shminfo64 { 49 - unsigned int shmmax; 50 - unsigned int shmmin; 51 - unsigned int shmmni; 52 - unsigned int shmseg; 53 - unsigned int shmall; 54 - unsigned int __unused1; 55 - unsigned int __unused2; 56 - unsigned int __unused3; 57 - unsigned int __unused4; 40 + unsigned long shmmax; 41 + unsigned long shmmin; 42 + unsigned long shmmni; 43 + unsigned long shmseg; 44 + unsigned long shmall; 45 + unsigned long __unused1; 46 + unsigned long __unused2; 47 + unsigned long __unused3; 48 + unsigned long __unused4; 58 49 }; 59 50 60 51 #endif /* _PARISC_SHMBUF_H */
+1 -1
arch/parisc/include/uapi/asm/signal.h
··· 85 85 struct siginfo; 86 86 87 87 /* Type of a signal handler. */ 88 - #ifdef CONFIG_64BIT 88 + #if defined(__LP64__) 89 89 /* function pointers on 64-bit parisc are pointers to little structs and the 90 90 * compiler doesn't support code which changes or tests the address of 91 91 * the function in the little struct. This is really ugly -PB
+2 -1
arch/parisc/include/uapi/asm/unistd.h
··· 833 833 #define __NR_seccomp (__NR_Linux + 338) 834 834 #define __NR_getrandom (__NR_Linux + 339) 835 835 #define __NR_memfd_create (__NR_Linux + 340) 836 + #define __NR_bpf (__NR_Linux + 341) 836 837 837 - #define __NR_Linux_syscalls (__NR_memfd_create + 1) 838 + #define __NR_Linux_syscalls (__NR_bpf + 1) 838 839 839 840 840 841 #define __IGNORE_select /* newselect */
+5 -4
arch/parisc/kernel/syscall_table.S
··· 286 286 ENTRY_COMP(msgsnd) 287 287 ENTRY_COMP(msgrcv) 288 288 ENTRY_SAME(msgget) /* 190 */ 289 - ENTRY_SAME(msgctl) 290 - ENTRY_SAME(shmat) 289 + ENTRY_COMP(msgctl) 290 + ENTRY_COMP(shmat) 291 291 ENTRY_SAME(shmdt) 292 292 ENTRY_SAME(shmget) 293 - ENTRY_SAME(shmctl) /* 195 */ 293 + ENTRY_COMP(shmctl) /* 195 */ 294 294 ENTRY_SAME(ni_syscall) /* streams1 */ 295 295 ENTRY_SAME(ni_syscall) /* streams2 */ 296 296 ENTRY_SAME(lstat64) ··· 323 323 ENTRY_SAME(epoll_ctl) /* 225 */ 324 324 ENTRY_SAME(epoll_wait) 325 325 ENTRY_SAME(remap_file_pages) 326 - ENTRY_SAME(semtimedop) 326 + ENTRY_COMP(semtimedop) 327 327 ENTRY_COMP(mq_open) 328 328 ENTRY_SAME(mq_unlink) /* 230 */ 329 329 ENTRY_COMP(mq_timedsend) ··· 436 436 ENTRY_SAME(seccomp) 437 437 ENTRY_SAME(getrandom) 438 438 ENTRY_SAME(memfd_create) /* 340 */ 439 + ENTRY_SAME(bpf) 439 440 440 441 /* Nothing yet */ 441 442
+1 -1
arch/powerpc/sysdev/fsl_msi.c
··· 361 361 cascade_data->virq = virt_msir; 362 362 msi->cascade_array[irq_index] = cascade_data; 363 363 364 - ret = request_irq(virt_msir, fsl_msi_cascade, 0, 364 + ret = request_irq(virt_msir, fsl_msi_cascade, IRQF_NO_THREAD, 365 365 "fsl-msi-cascade", cascade_data); 366 366 if (ret) { 367 367 dev_err(&dev->dev, "failed to request_irq(%d), ret = %d\n",
+1 -1
arch/sparc/include/asm/atomic_32.h
··· 22 22 23 23 int atomic_add_return(int, atomic_t *); 24 24 int atomic_cmpxchg(atomic_t *, int, int); 25 - #define atomic_xchg(v, new) (xchg(&((v)->counter), new)) 25 + int atomic_xchg(atomic_t *, int); 26 26 int __atomic_add_unless(atomic_t *, int, int); 27 27 void atomic_set(atomic_t *, int); 28 28
+2 -10
arch/sparc/include/asm/cmpxchg_32.h
··· 11 11 #ifndef __ARCH_SPARC_CMPXCHG__ 12 12 #define __ARCH_SPARC_CMPXCHG__ 13 13 14 - static inline unsigned long xchg_u32(__volatile__ unsigned long *m, unsigned long val) 15 - { 16 - __asm__ __volatile__("swap [%2], %0" 17 - : "=&r" (val) 18 - : "0" (val), "r" (m) 19 - : "memory"); 20 - return val; 21 - } 22 - 14 + unsigned long __xchg_u32(volatile u32 *m, u32 new); 23 15 void __xchg_called_with_bad_pointer(void); 24 16 25 17 static inline unsigned long __xchg(unsigned long x, __volatile__ void * ptr, int size) 26 18 { 27 19 switch (size) { 28 20 case 4: 29 - return xchg_u32(ptr, x); 21 + return __xchg_u32(ptr, x); 30 22 } 31 23 __xchg_called_with_bad_pointer(); 32 24 return x;
+6 -6
arch/sparc/include/uapi/asm/swab.h
··· 9 9 { 10 10 __u16 ret; 11 11 12 - __asm__ __volatile__ ("lduha [%1] %2, %0" 12 + __asm__ __volatile__ ("lduha [%2] %3, %0" 13 13 : "=r" (ret) 14 - : "r" (addr), "i" (ASI_PL)); 14 + : "m" (*addr), "r" (addr), "i" (ASI_PL)); 15 15 return ret; 16 16 } 17 17 #define __arch_swab16p __arch_swab16p ··· 20 20 { 21 21 __u32 ret; 22 22 23 - __asm__ __volatile__ ("lduwa [%1] %2, %0" 23 + __asm__ __volatile__ ("lduwa [%2] %3, %0" 24 24 : "=r" (ret) 25 - : "r" (addr), "i" (ASI_PL)); 25 + : "m" (*addr), "r" (addr), "i" (ASI_PL)); 26 26 return ret; 27 27 } 28 28 #define __arch_swab32p __arch_swab32p ··· 31 31 { 32 32 __u64 ret; 33 33 34 - __asm__ __volatile__ ("ldxa [%1] %2, %0" 34 + __asm__ __volatile__ ("ldxa [%2] %3, %0" 35 35 : "=r" (ret) 36 - : "r" (addr), "i" (ASI_PL)); 36 + : "m" (*addr), "r" (addr), "i" (ASI_PL)); 37 37 return ret; 38 38 } 39 39 #define __arch_swab64p __arch_swab64p
+3 -3
arch/sparc/kernel/pci_schizo.c
··· 581 581 { 582 582 unsigned long csr_reg, csr, csr_error_bits; 583 583 irqreturn_t ret = IRQ_NONE; 584 - u16 stat; 584 + u32 stat; 585 585 586 586 csr_reg = pbm->pbm_regs + SCHIZO_PCI_CTRL; 587 587 csr = upa_readq(csr_reg); ··· 617 617 pbm->name); 618 618 ret = IRQ_HANDLED; 619 619 } 620 - pci_read_config_word(pbm->pci_bus->self, PCI_STATUS, &stat); 620 + pbm->pci_ops->read(pbm->pci_bus, 0, PCI_STATUS, 2, &stat); 621 621 if (stat & (PCI_STATUS_PARITY | 622 622 PCI_STATUS_SIG_TARGET_ABORT | 623 623 PCI_STATUS_REC_TARGET_ABORT | ··· 625 625 PCI_STATUS_SIG_SYSTEM_ERROR)) { 626 626 printk("%s: PCI bus error, PCI_STATUS[%04x]\n", 627 627 pbm->name, stat); 628 - pci_write_config_word(pbm->pci_bus->self, PCI_STATUS, 0xffff); 628 + pbm->pci_ops->write(pbm->pci_bus, 0, PCI_STATUS, 2, 0xffff); 629 629 ret = IRQ_HANDLED; 630 630 } 631 631 return ret;
+4
arch/sparc/kernel/smp_64.c
··· 816 816 void __irq_entry smp_call_function_client(int irq, struct pt_regs *regs) 817 817 { 818 818 clear_softint(1 << irq); 819 + irq_enter(); 819 820 generic_smp_call_function_interrupt(); 821 + irq_exit(); 820 822 } 821 823 822 824 void __irq_entry smp_call_function_single_client(int irq, struct pt_regs *regs) 823 825 { 824 826 clear_softint(1 << irq); 827 + irq_enter(); 825 828 generic_smp_call_function_single_interrupt(); 829 + irq_exit(); 826 830 } 827 831 828 832 static void tsb_sync(void *info)
+27
arch/sparc/lib/atomic32.c
··· 45 45 46 46 #undef ATOMIC_OP 47 47 48 + int atomic_xchg(atomic_t *v, int new) 49 + { 50 + int ret; 51 + unsigned long flags; 52 + 53 + spin_lock_irqsave(ATOMIC_HASH(v), flags); 54 + ret = v->counter; 55 + v->counter = new; 56 + spin_unlock_irqrestore(ATOMIC_HASH(v), flags); 57 + return ret; 58 + } 59 + EXPORT_SYMBOL(atomic_xchg); 60 + 48 61 int atomic_cmpxchg(atomic_t *v, int old, int new) 49 62 { 50 63 int ret; ··· 150 137 return (unsigned long)prev; 151 138 } 152 139 EXPORT_SYMBOL(__cmpxchg_u32); 140 + 141 + unsigned long __xchg_u32(volatile u32 *ptr, u32 new) 142 + { 143 + unsigned long flags; 144 + u32 prev; 145 + 146 + spin_lock_irqsave(ATOMIC_HASH(ptr), flags); 147 + prev = *ptr; 148 + *ptr = new; 149 + spin_unlock_irqrestore(ATOMIC_HASH(ptr), flags); 150 + 151 + return (unsigned long)prev; 152 + } 153 + EXPORT_SYMBOL(__xchg_u32);
+1 -1
arch/x86/Kconfig
··· 144 144 145 145 config PERF_EVENTS_INTEL_UNCORE 146 146 def_bool y 147 - depends on PERF_EVENTS && SUP_SUP_INTEL && PCI 147 + depends on PERF_EVENTS && CPU_SUP_INTEL && PCI 148 148 149 149 config OUTPUT_FORMAT 150 150 string
+3 -1
arch/x86/boot/compressed/Makefile
··· 76 76 suffix-$(CONFIG_KERNEL_LZO) := lzo 77 77 suffix-$(CONFIG_KERNEL_LZ4) := lz4 78 78 79 + RUN_SIZE = $(shell objdump -h vmlinux | \ 80 + perl $(srctree)/arch/x86/tools/calc_run_size.pl) 79 81 quiet_cmd_mkpiggy = MKPIGGY $@ 80 - cmd_mkpiggy = $(obj)/mkpiggy $< > $@ || ( rm -f $@ ; false ) 82 + cmd_mkpiggy = $(obj)/mkpiggy $< $(RUN_SIZE) > $@ || ( rm -f $@ ; false ) 81 83 82 84 targets += piggy.S 83 85 $(obj)/piggy.S: $(obj)/vmlinux.bin.$(suffix-y) $(obj)/mkpiggy FORCE
+3 -2
arch/x86/boot/compressed/head_32.S
··· 207 207 * Do the decompression, and jump to the new kernel.. 208 208 */ 209 209 /* push arguments for decompress_kernel: */ 210 - pushl $z_output_len /* decompressed length */ 210 + pushl $z_run_size /* size of kernel with .bss and .brk */ 211 + pushl $z_output_len /* decompressed length, end of relocs */ 211 212 leal z_extract_offset_negative(%ebx), %ebp 212 213 pushl %ebp /* output address */ 213 214 pushl $z_input_len /* input_len */ ··· 218 217 pushl %eax /* heap area */ 219 218 pushl %esi /* real mode pointer */ 220 219 call decompress_kernel /* returns kernel location in %eax */ 221 - addl $24, %esp 220 + addl $28, %esp 222 221 223 222 /* 224 223 * Jump to the decompressed kernel.
+4 -1
arch/x86/boot/compressed/head_64.S
··· 402 402 * Do the decompression, and jump to the new kernel.. 403 403 */ 404 404 pushq %rsi /* Save the real mode argument */ 405 + movq $z_run_size, %r9 /* size of kernel with .bss and .brk */ 406 + pushq %r9 405 407 movq %rsi, %rdi /* real mode address */ 406 408 leaq boot_heap(%rip), %rsi /* malloc area for uncompression */ 407 409 leaq input_data(%rip), %rdx /* input_data */ 408 410 movl $z_input_len, %ecx /* input_len */ 409 411 movq %rbp, %r8 /* output target address */ 410 - movq $z_output_len, %r9 /* decompressed length */ 412 + movq $z_output_len, %r9 /* decompressed length, end of relocs */ 411 413 call decompress_kernel /* returns kernel location in %rax */ 414 + popq %r9 412 415 popq %rsi 413 416 414 417 /*
+10 -3
arch/x86/boot/compressed/misc.c
··· 358 358 unsigned char *input_data, 359 359 unsigned long input_len, 360 360 unsigned char *output, 361 - unsigned long output_len) 361 + unsigned long output_len, 362 + unsigned long run_size) 362 363 { 363 364 real_mode = rmode; 364 365 ··· 382 381 free_mem_ptr = heap; /* Heap */ 383 382 free_mem_end_ptr = heap + BOOT_HEAP_SIZE; 384 383 385 - output = choose_kernel_location(input_data, input_len, 386 - output, output_len); 384 + /* 385 + * The memory hole needed for the kernel is the larger of either 386 + * the entire decompressed kernel plus relocation table, or the 387 + * entire decompressed kernel plus .bss and .brk sections. 388 + */ 389 + output = choose_kernel_location(input_data, input_len, output, 390 + output_len > run_size ? output_len 391 + : run_size); 387 392 388 393 /* Validate memory location choices. */ 389 394 if ((unsigned long)output & (MIN_KERNEL_ALIGN - 1))
+7 -2
arch/x86/boot/compressed/mkpiggy.c
··· 36 36 uint32_t olen; 37 37 long ilen; 38 38 unsigned long offs; 39 + unsigned long run_size; 39 40 FILE *f = NULL; 40 41 int retval = 1; 41 42 42 - if (argc < 2) { 43 - fprintf(stderr, "Usage: %s compressed_file\n", argv[0]); 43 + if (argc < 3) { 44 + fprintf(stderr, "Usage: %s compressed_file run_size\n", 45 + argv[0]); 44 46 goto bail; 45 47 } 46 48 ··· 76 74 offs += olen >> 12; /* Add 8 bytes for each 32K block */ 77 75 offs += 64*1024 + 128; /* Add 64K + 128 bytes slack */ 78 76 offs = (offs+4095) & ~4095; /* Round to a 4K boundary */ 77 + run_size = atoi(argv[2]); 79 78 80 79 printf(".section \".rodata..compressed\",\"a\",@progbits\n"); 81 80 printf(".globl z_input_len\n"); ··· 88 85 /* z_extract_offset_negative allows simplification of head_32.S */ 89 86 printf(".globl z_extract_offset_negative\n"); 90 87 printf("z_extract_offset_negative = -0x%lx\n", offs); 88 + printf(".globl z_run_size\n"); 89 + printf("z_run_size = %lu\n", run_size); 91 90 92 91 printf(".globl input_data, input_data_end\n"); 93 92 printf("input_data:\n");
+1
arch/x86/include/asm/smp.h
··· 150 150 } 151 151 152 152 void cpu_disable_common(void); 153 + void cpu_die_common(unsigned int cpu); 153 154 void native_smp_prepare_boot_cpu(void); 154 155 void native_smp_prepare_cpus(unsigned int max_cpus); 155 156 void native_smp_cpus_done(unsigned int max_cpus);
+2
arch/x86/kernel/cpu/common.c
··· 146 146 147 147 static int __init x86_xsave_setup(char *s) 148 148 { 149 + if (strlen(s)) 150 + return 0; 149 151 setup_clear_cpu_cap(X86_FEATURE_XSAVE); 150 152 setup_clear_cpu_cap(X86_FEATURE_XSAVEOPT); 151 153 setup_clear_cpu_cap(X86_FEATURE_XSAVES);
+21 -12
arch/x86/kernel/cpu/microcode/amd_early.c
··· 108 108 * load_microcode_amd() to save equivalent cpu table and microcode patches in 109 109 * kernel heap memory. 110 110 */ 111 - static void apply_ucode_in_initrd(void *ucode, size_t size) 111 + static void apply_ucode_in_initrd(void *ucode, size_t size, bool save_patch) 112 112 { 113 113 struct equiv_cpu_entry *eq; 114 114 size_t *cont_sz; 115 115 u32 *header; 116 116 u8 *data, **cont; 117 + u8 (*patch)[PATCH_MAX_SIZE]; 117 118 u16 eq_id = 0; 118 119 int offset, left; 119 120 u32 rev, eax, ebx, ecx, edx; ··· 124 123 new_rev = (u32 *)__pa_nodebug(&ucode_new_rev); 125 124 cont_sz = (size_t *)__pa_nodebug(&container_size); 126 125 cont = (u8 **)__pa_nodebug(&container); 126 + patch = (u8 (*)[PATCH_MAX_SIZE])__pa_nodebug(&amd_ucode_patch); 127 127 #else 128 128 new_rev = &ucode_new_rev; 129 129 cont_sz = &container_size; 130 130 cont = &container; 131 + patch = &amd_ucode_patch; 131 132 #endif 132 133 133 134 data = ucode; ··· 216 213 rev = mc->hdr.patch_id; 217 214 *new_rev = rev; 218 215 219 - /* save ucode patch */ 220 - memcpy(amd_ucode_patch, mc, 221 - min_t(u32, header[1], PATCH_MAX_SIZE)); 216 + if (save_patch) 217 + memcpy(patch, mc, 218 + min_t(u32, header[1], PATCH_MAX_SIZE)); 222 219 } 223 220 } 224 221 ··· 249 246 *data = cp.data; 250 247 *size = cp.size; 251 248 252 - apply_ucode_in_initrd(cp.data, cp.size); 249 + apply_ucode_in_initrd(cp.data, cp.size, true); 253 250 } 254 251 255 252 #ifdef CONFIG_X86_32 ··· 266 263 size_t *usize; 267 264 void **ucode; 268 265 269 - mc = (struct microcode_amd *)__pa(amd_ucode_patch); 266 + mc = (struct microcode_amd *)__pa_nodebug(amd_ucode_patch); 270 267 if (mc->hdr.patch_id && mc->hdr.processor_rev_id) { 271 268 __apply_microcode_amd(mc); 272 269 return; ··· 278 275 if (!*ucode || !*usize) 279 276 return; 280 277 281 - apply_ucode_in_initrd(*ucode, *usize); 278 + apply_ucode_in_initrd(*ucode, *usize, false); 282 279 } 283 280 284 281 static void __init collect_cpu_sig_on_bsp(void *arg) ··· 342 339 * AP has a 
different equivalence ID than BSP, looks like 343 340 * mixed-steppings silicon so go through the ucode blob anew. 344 341 */ 345 - apply_ucode_in_initrd(ucode_cpio.data, ucode_cpio.size); 342 + apply_ucode_in_initrd(ucode_cpio.data, ucode_cpio.size, false); 346 343 } 347 344 } 348 345 #endif ··· 350 347 int __init save_microcode_in_initrd_amd(void) 351 348 { 352 349 unsigned long cont; 350 + int retval = 0; 353 351 enum ucode_state ret; 352 + u8 *cont_va; 354 353 u32 eax; 355 354 356 355 if (!container) ··· 360 355 361 356 #ifdef CONFIG_X86_32 362 357 get_bsp_sig(); 363 - cont = (unsigned long)container; 358 + cont = (unsigned long)container; 359 + cont_va = __va(container); 364 360 #else 365 361 /* 366 362 * We need the physical address of the container for both bitness since 367 363 * boot_params.hdr.ramdisk_image is a physical address. 368 364 */ 369 - cont = __pa(container); 365 + cont = __pa(container); 366 + cont_va = container; 370 367 #endif 371 368 372 369 /* ··· 379 372 if (relocated_ramdisk) 380 373 container = (u8 *)(__va(relocated_ramdisk) + 381 374 (cont - boot_params.hdr.ramdisk_image)); 375 + else 376 + container = cont_va; 382 377 383 378 if (ucode_new_rev) 384 379 pr_info("microcode: updated early to new patch_level=0x%08x\n", ··· 391 382 392 383 ret = load_microcode_amd(eax, container, container_size); 393 384 if (ret != UCODE_OK) 394 - return -EINVAL; 385 + retval = -EINVAL; 395 386 396 387 /* 397 388 * This will be freed any msec now, stash patches for the current ··· 400 391 container = NULL; 401 392 container_size = 0; 402 393 403 - return 0; 394 + return retval; 404 395 }
+8
arch/x86/kernel/cpu/microcode/core.c
··· 465 465 466 466 if (uci->valid && uci->mc) 467 467 microcode_ops->apply_microcode(cpu); 468 + else if (!uci->mc) 469 + /* 470 + * We might resume and not have applied late microcode but still 471 + * have a newer patch stashed from the early loader. We don't 472 + * have it in uci->mc so we have to load it the same way we're 473 + * applying patches early on the APs. 474 + */ 475 + load_ucode_ap(); 468 476 } 469 477 470 478 static struct syscore_ops mc_syscore_ops = {
+1 -1
arch/x86/kernel/cpu/microcode/core_early.c
··· 124 124 static bool check_loader_disabled_ap(void) 125 125 { 126 126 #ifdef CONFIG_X86_32 127 - return __pa_nodebug(dis_ucode_ldr); 127 + return *((bool *)__pa_nodebug(&dis_ucode_ldr)); 128 128 #else 129 129 return dis_ucode_ldr; 130 130 #endif
+45 -4
arch/x86/kernel/cpu/perf_event_intel_uncore_snbep.c
··· 486 486 .attrs = snbep_uncore_qpi_formats_attr, 487 487 }; 488 488 489 - #define SNBEP_UNCORE_MSR_OPS_COMMON_INIT() \ 490 - .init_box = snbep_uncore_msr_init_box, \ 489 + #define __SNBEP_UNCORE_MSR_OPS_COMMON_INIT() \ 491 490 .disable_box = snbep_uncore_msr_disable_box, \ 492 491 .enable_box = snbep_uncore_msr_enable_box, \ 493 492 .disable_event = snbep_uncore_msr_disable_event, \ 494 493 .enable_event = snbep_uncore_msr_enable_event, \ 495 494 .read_counter = uncore_msr_read_counter 495 + 496 + #define SNBEP_UNCORE_MSR_OPS_COMMON_INIT() \ 497 + __SNBEP_UNCORE_MSR_OPS_COMMON_INIT(), \ 498 + .init_box = snbep_uncore_msr_init_box \ 496 499 497 500 static struct intel_uncore_ops snbep_uncore_msr_ops = { 498 501 SNBEP_UNCORE_MSR_OPS_COMMON_INIT(), ··· 1922 1919 .format_group = &hswep_uncore_cbox_format_group, 1923 1920 }; 1924 1921 1922 + /* 1923 + * Write SBOX Initialization register bit by bit to avoid spurious #GPs 1924 + */ 1925 + static void hswep_uncore_sbox_msr_init_box(struct intel_uncore_box *box) 1926 + { 1927 + unsigned msr = uncore_msr_box_ctl(box); 1928 + 1929 + if (msr) { 1930 + u64 init = SNBEP_PMON_BOX_CTL_INT; 1931 + u64 flags = 0; 1932 + int i; 1933 + 1934 + for_each_set_bit(i, (unsigned long *)&init, 64) { 1935 + flags |= (1ULL << i); 1936 + wrmsrl(msr, flags); 1937 + } 1938 + } 1939 + } 1940 + 1941 + static struct intel_uncore_ops hswep_uncore_sbox_msr_ops = { 1942 + __SNBEP_UNCORE_MSR_OPS_COMMON_INIT(), 1943 + .init_box = hswep_uncore_sbox_msr_init_box 1944 + }; 1945 + 1925 1946 static struct attribute *hswep_uncore_sbox_formats_attr[] = { 1926 1947 &format_attr_event.attr, 1927 1948 &format_attr_umask.attr, ··· 1971 1944 .event_mask = HSWEP_S_MSR_PMON_RAW_EVENT_MASK, 1972 1945 .box_ctl = HSWEP_S0_MSR_PMON_BOX_CTL, 1973 1946 .msr_offset = HSWEP_SBOX_MSR_OFFSET, 1974 - .ops = &snbep_uncore_msr_ops, 1947 + .ops = &hswep_uncore_sbox_msr_ops, 1975 1948 .format_group = &hswep_uncore_sbox_format_group, 1976 1949 }; 1977 1950 ··· 2052 2025 
SNBEP_UNCORE_PCI_COMMON_INIT(), 2053 2026 }; 2054 2027 2028 + static unsigned hswep_uncore_irp_ctrs[] = {0xa0, 0xa8, 0xb0, 0xb8}; 2029 + 2030 + static u64 hswep_uncore_irp_read_counter(struct intel_uncore_box *box, struct perf_event *event) 2031 + { 2032 + struct pci_dev *pdev = box->pci_dev; 2033 + struct hw_perf_event *hwc = &event->hw; 2034 + u64 count = 0; 2035 + 2036 + pci_read_config_dword(pdev, hswep_uncore_irp_ctrs[hwc->idx], (u32 *)&count); 2037 + pci_read_config_dword(pdev, hswep_uncore_irp_ctrs[hwc->idx] + 4, (u32 *)&count + 1); 2038 + 2039 + return count; 2040 + } 2041 + 2055 2042 static struct intel_uncore_ops hswep_uncore_irp_ops = { 2056 2043 .init_box = snbep_uncore_pci_init_box, 2057 2044 .disable_box = snbep_uncore_pci_disable_box, 2058 2045 .enable_box = snbep_uncore_pci_enable_box, 2059 2046 .disable_event = ivbep_uncore_irp_disable_event, 2060 2047 .enable_event = ivbep_uncore_irp_enable_event, 2061 - .read_counter = ivbep_uncore_irp_read_counter, 2048 + .read_counter = hswep_uncore_irp_read_counter, 2062 2049 }; 2063 2050 2064 2051 static struct intel_uncore_type hswep_uncore_irp = {
+1 -1
arch/x86/kernel/ptrace.c
··· 1484 1484 */ 1485 1485 if (work & _TIF_NOHZ) { 1486 1486 user_exit(); 1487 - work &= ~TIF_NOHZ; 1487 + work &= ~_TIF_NOHZ; 1488 1488 } 1489 1489 1490 1490 #ifdef CONFIG_SECCOMP
+11 -4
arch/x86/kernel/smpboot.c
··· 1303 1303 numa_remove_cpu(cpu); 1304 1304 } 1305 1305 1306 + static DEFINE_PER_CPU(struct completion, die_complete); 1307 + 1306 1308 void cpu_disable_common(void) 1307 1309 { 1308 1310 int cpu = smp_processor_id(); 1311 + 1312 + init_completion(&per_cpu(die_complete, smp_processor_id())); 1309 1313 1310 1314 remove_siblinginfo(cpu); 1311 1315 ··· 1320 1316 fixup_irqs(); 1321 1317 } 1322 1318 1323 - static DEFINE_PER_CPU(struct completion, die_complete); 1324 - 1325 1319 int native_cpu_disable(void) 1326 1320 { 1327 1321 int ret; ··· 1329 1327 return ret; 1330 1328 1331 1329 clear_local_APIC(); 1332 - init_completion(&per_cpu(die_complete, smp_processor_id())); 1333 1330 cpu_disable_common(); 1334 1331 1335 1332 return 0; 1336 1333 } 1337 1334 1335 + void cpu_die_common(unsigned int cpu) 1336 + { 1337 + wait_for_completion_timeout(&per_cpu(die_complete, cpu), HZ); 1338 + } 1339 + 1338 1340 void native_cpu_die(unsigned int cpu) 1339 1341 { 1340 1342 /* We don't do anything here: idle task is faking death itself. */ 1341 - wait_for_completion_timeout(&per_cpu(die_complete, cpu), HZ); 1343 + 1344 + cpu_die_common(cpu); 1342 1345 1343 1346 /* They ack this in play_dead() by setting CPU_DEAD */ 1344 1347 if (per_cpu(cpu_state, cpu) == CPU_DEAD) {
+2 -3
arch/x86/lib/csum-wrappers_64.c
··· 41 41 while (((unsigned long)src & 6) && len >= 2) { 42 42 __u16 val16; 43 43 44 - *errp = __get_user(val16, (const __u16 __user *)src); 45 - if (*errp) 46 - return isum; 44 + if (__get_user(val16, (const __u16 __user *)src)) 45 + goto out_err; 47 46 48 47 *(__u16 *)dst = val16; 49 48 isum = (__force __wsum)add32_with_carry(
+10 -1
arch/x86/mm/init_64.c
··· 1123 1123 unsigned long end = (unsigned long) &__end_rodata_hpage_align; 1124 1124 unsigned long text_end = PFN_ALIGN(&__stop___ex_table); 1125 1125 unsigned long rodata_end = PFN_ALIGN(&__end_rodata); 1126 - unsigned long all_end = PFN_ALIGN(&_end); 1126 + unsigned long all_end; 1127 1127 1128 1128 printk(KERN_INFO "Write protecting the kernel read-only data: %luk\n", 1129 1129 (end - start) >> 10); ··· 1134 1134 /* 1135 1135 * The rodata/data/bss/brk section (but not the kernel text!) 1136 1136 * should also be not-executable. 1137 + * 1138 + * We align all_end to PMD_SIZE because the existing mapping 1139 + * is a full PMD. If we would align _brk_end to PAGE_SIZE we 1140 + * split the PMD and the remainder between _brk_end and the end 1141 + * of the PMD will remain mapped executable. 1142 + * 1143 + * Any PMD which was setup after the one which covers _brk_end 1144 + * has been zapped already via cleanup_highmap(). 1137 1145 */ 1146 + all_end = roundup((unsigned long)_brk_end, PMD_SIZE); 1138 1147 set_memory_nx(rodata_start, (all_end - rodata_start) >> PAGE_SHIFT); 1139 1148 1140 1149 rodata_test();
+39
arch/x86/tools/calc_run_size.pl
··· 1 + #!/usr/bin/perl 2 + # 3 + # Calculate the amount of space needed to run the kernel, including room for 4 + # the .bss and .brk sections. 5 + # 6 + # Usage: 7 + # objdump -h a.out | perl calc_run_size.pl 8 + use strict; 9 + 10 + my $mem_size = 0; 11 + my $file_offset = 0; 12 + 13 + my $sections=" *[0-9]+ \.(?:bss|brk) +"; 14 + while (<>) { 15 + if (/^$sections([0-9a-f]+) +(?:[0-9a-f]+ +){2}([0-9a-f]+)/) { 16 + my $size = hex($1); 17 + my $offset = hex($2); 18 + $mem_size += $size; 19 + if ($file_offset == 0) { 20 + $file_offset = $offset; 21 + } elsif ($file_offset != $offset) { 22 + # BFD linker shows the same file offset in ELF. 23 + # Gold linker shows them as consecutive. 24 + next if ($file_offset + $mem_size == $offset + $size); 25 + 26 + printf STDERR "file_offset: 0x%lx\n", $file_offset; 27 + printf STDERR "mem_size: 0x%lx\n", $mem_size; 28 + printf STDERR "offset: 0x%lx\n", $offset; 29 + printf STDERR "size: 0x%lx\n", $size; 30 + 31 + die ".bss and .brk are non-contiguous\n"; 32 + } 33 + } 34 + } 35 + 36 + if ($file_offset == 0) { 37 + die "Never found .bss or .brk file offset\n"; 38 + } 39 + printf("%d\n", $mem_size + $file_offset);
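The section accounting that calc_run_size.pl performs can be sketched in C (an illustrative reimplementation of the script's logic, not kernel code; struct and function names are invented): the BFD linker reports the same file offset for .bss and .brk, while gold reports consecutive offsets, and anything else is non-contiguous.

```c
#include <assert.h>
#include <stdint.h>

/* One section as reported by objdump -h: its size and file offset. */
struct sect { uint64_t size; uint64_t offset; };

/* Return mem_size + first file offset, or 0 if the sections are
 * non-contiguous.  Mirrors the perl loop: mem_size is accumulated
 * first, then the offset is checked against both linker layouts. */
static uint64_t run_size(const struct sect *s, int n)
{
	uint64_t mem_size = 0, file_offset = 0;
	int i;

	for (i = 0; i < n; i++) {
		mem_size += s[i].size;
		if (file_offset == 0)
			file_offset = s[i].offset;               /* first section seen */
		else if (file_offset != s[i].offset &&           /* BFD: same offset */
			 file_offset + mem_size != s[i].offset + s[i].size) /* gold: consecutive */
			return 0;                                /* non-contiguous */
	}
	return file_offset ? mem_size + file_offset : 0;
}
```

Both linker layouts yield the same result; a gap between the sections yields 0 where the script would die.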
+3
arch/x86/xen/smp.c
··· 510 510 current->state = TASK_UNINTERRUPTIBLE; 511 511 schedule_timeout(HZ/10); 512 512 } 513 + 514 + cpu_die_common(cpu); 515 + 513 516 xen_smp_intr_free(cpu); 514 517 xen_uninit_lock_cpu(cpu); 515 518 xen_teardown_timer(cpu);
+11 -8
block/blk-merge.c
··· 97 97 98 98 void blk_recount_segments(struct request_queue *q, struct bio *bio) 99 99 { 100 - bool no_sg_merge = !!test_bit(QUEUE_FLAG_NO_SG_MERGE, 101 - &q->queue_flags); 102 - bool merge_not_need = bio->bi_vcnt < queue_max_segments(q); 100 + unsigned short seg_cnt; 103 101 104 - if (no_sg_merge && !bio_flagged(bio, BIO_CLONED) && 105 - merge_not_need) 106 - bio->bi_phys_segments = bio->bi_vcnt; 102 + /* estimate segment number by bi_vcnt for non-cloned bio */ 103 + if (bio_flagged(bio, BIO_CLONED)) 104 + seg_cnt = bio_segments(bio); 105 + else 106 + seg_cnt = bio->bi_vcnt; 107 + 108 + if (test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags) && 109 + (seg_cnt < queue_max_segments(q))) 110 + bio->bi_phys_segments = seg_cnt; 107 111 else { 108 112 struct bio *nxt = bio->bi_next; 109 113 110 114 bio->bi_next = NULL; 111 - bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio, 112 - no_sg_merge && merge_not_need); 115 + bio->bi_phys_segments = __blk_recalc_rq_segments(q, bio, false); 113 116 bio->bi_next = nxt; 114 117 } 115 118
+33 -8
block/blk-mq.c
··· 107 107 wake_up_all(&q->mq_freeze_wq); 108 108 } 109 109 110 - /* 111 - * Guarantee no request is in use, so we can change any data structure of 112 - * the queue afterward. 113 - */ 114 - void blk_mq_freeze_queue(struct request_queue *q) 110 + static void blk_mq_freeze_queue_start(struct request_queue *q) 115 111 { 116 112 bool freeze; 117 113 ··· 119 123 percpu_ref_kill(&q->mq_usage_counter); 120 124 blk_mq_run_queues(q, false); 121 125 } 126 + } 127 + 128 + static void blk_mq_freeze_queue_wait(struct request_queue *q) 129 + { 122 130 wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->mq_usage_counter)); 131 + } 132 + 133 + /* 134 + * Guarantee no request is in use, so we can change any data structure of 135 + * the queue afterward. 136 + */ 137 + void blk_mq_freeze_queue(struct request_queue *q) 138 + { 139 + blk_mq_freeze_queue_start(q); 140 + blk_mq_freeze_queue_wait(q); 123 141 } 124 142 125 143 static void blk_mq_unfreeze_queue(struct request_queue *q) ··· 1931 1921 /* Basically redo blk_mq_init_queue with queue frozen */ 1932 1922 static void blk_mq_queue_reinit(struct request_queue *q) 1933 1923 { 1934 - blk_mq_freeze_queue(q); 1924 + WARN_ON_ONCE(!q->mq_freeze_depth); 1935 1925 1936 1926 blk_mq_sysfs_unregister(q); 1937 1927 ··· 1946 1936 blk_mq_map_swqueue(q); 1947 1937 1948 1938 blk_mq_sysfs_register(q); 1949 - 1950 - blk_mq_unfreeze_queue(q); 1951 1939 } 1952 1940 1953 1941 static int blk_mq_queue_reinit_notify(struct notifier_block *nb, ··· 1964 1956 return NOTIFY_OK; 1965 1957 1966 1958 mutex_lock(&all_q_mutex); 1959 + 1960 + /* 1961 + * We need to freeze and reinit all existing queues. Freezing 1962 + * involves synchronous wait for an RCU grace period and doing it 1963 + * one by one may take a long time. Start freezing all queues in 1964 + * one swoop and then wait for the completions so that freezing can 1965 + * take place in parallel. 
1965 + * take place in parallel. 1966 + */ 1967 + list_for_each_entry(q, &all_q_list, all_q_node) 1968 + blk_mq_freeze_queue_start(q); 1969 + list_for_each_entry(q, &all_q_list, all_q_node) 1970 + blk_mq_freeze_queue_wait(q); 1971 + 1967 1972 list_for_each_entry(q, &all_q_list, all_q_node) 1968 1973 blk_mq_queue_reinit(q); 1974 + 1975 + list_for_each_entry(q, &all_q_list, all_q_node) 1976 + blk_mq_unfreeze_queue(q); 1977 + 1969 1978 mutex_unlock(&all_q_mutex); 1970 1979 return NOTIFY_OK; 1971 1980 }
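The blk-mq hunk above splits freezing into an asynchronous start and a blocking wait so the expensive per-queue waits overlap. A toy model of that two-phase pattern (names and the boolean "queue" state are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy queue: start is asynchronous (kick off draining), wait observes
 * completion.  In the kernel the wait blocks on an RCU grace period. */
struct toy_queue { bool freeze_started; bool frozen; };

static void toy_freeze_start(struct toy_queue *q) { q->freeze_started = true; }
static void toy_freeze_wait(struct toy_queue *q)  { if (q->freeze_started) q->frozen = true; }

/* One pass to start every freeze, a second pass to wait, so the waits
 * run concurrently instead of serializing queue by queue. */
static int toy_freeze_all(struct toy_queue *qs, int n)
{
	int i;

	for (i = 0; i < n; i++)
		toy_freeze_start(&qs[i]);
	for (i = 0; i < n; i++)
		toy_freeze_wait(&qs[i]);
	for (i = 0; i < n; i++)
		if (!qs[i].frozen)
			return 0;
	return 1;
}
```

The design point is purely the loop split: correctness is unchanged, but total wait time drops from the sum of the grace periods to roughly the longest one.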
+8 -6
block/ioprio.c
··· 157 157 158 158 int ioprio_best(unsigned short aprio, unsigned short bprio) 159 159 { 160 - unsigned short aclass = IOPRIO_PRIO_CLASS(aprio); 161 - unsigned short bclass = IOPRIO_PRIO_CLASS(bprio); 160 + unsigned short aclass; 161 + unsigned short bclass; 162 162 163 - if (aclass == IOPRIO_CLASS_NONE) 164 - aclass = IOPRIO_CLASS_BE; 165 - if (bclass == IOPRIO_CLASS_NONE) 166 - bclass = IOPRIO_CLASS_BE; 163 + if (!ioprio_valid(aprio)) 164 + aprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM); 165 + if (!ioprio_valid(bprio)) 166 + bprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM); 167 167 168 + aclass = IOPRIO_PRIO_CLASS(aprio); 169 + bclass = IOPRIO_PRIO_CLASS(bprio); 168 170 if (aclass == bclass) 169 171 return min(aprio, bprio); 170 172 if (aclass > bclass)
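The ioprio.c fix normalizes an invalid (CLASS_NONE) priority to (BE, NORM) before comparing, so the raw value 0 can no longer win the min() within a class. A self-contained sketch of the fixed ioprio_best(), with the encoding macros re-declared from their <linux/ioprio.h> definitions for illustration:

```c
#include <assert.h>

/* Illustrative copies of the <linux/ioprio.h> encoding: class in the
 * top bits, priority level below. */
#define IOPRIO_CLASS_SHIFT          13
#define IOPRIO_PRIO_VALUE(cl, data) (((cl) << IOPRIO_CLASS_SHIFT) | (data))
#define IOPRIO_PRIO_CLASS(mask)     ((mask) >> IOPRIO_CLASS_SHIFT)
#define IOPRIO_CLASS_NONE           0
#define IOPRIO_CLASS_RT             1
#define IOPRIO_CLASS_BE             2
#define IOPRIO_NORM                 4
#define ioprio_valid(mask)          (IOPRIO_PRIO_CLASS(mask) != IOPRIO_CLASS_NONE)

/* The fixed logic: map CLASS_NONE to (BE, NORM) first, so both the
 * class and the level are compared on real values. */
static unsigned short ioprio_best(unsigned short aprio, unsigned short bprio)
{
	if (!ioprio_valid(aprio))
		aprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM);
	if (!ioprio_valid(bprio))
		bprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM);

	if (IOPRIO_PRIO_CLASS(aprio) == IOPRIO_PRIO_CLASS(bprio))
		return aprio < bprio ? aprio : bprio;   /* lower value = higher prio */
	if (IOPRIO_PRIO_CLASS(aprio) > IOPRIO_PRIO_CLASS(bprio))
		return bprio;
	return aprio;
}
```

With the old code, an unset priority (value 0) compared as the minimum and silently beat a real BE priority; after normalization it competes as (BE, NORM).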
+5 -3
block/scsi_ioctl.c
··· 458 458 rq = blk_get_request(q, in_len ? WRITE : READ, __GFP_WAIT); 459 459 if (IS_ERR(rq)) { 460 460 err = PTR_ERR(rq); 461 - goto error; 461 + goto error_free_buffer; 462 462 } 463 463 blk_rq_set_block_pc(rq); 464 464 ··· 531 531 } 532 532 533 533 error: 534 + blk_put_request(rq); 535 + 536 + error_free_buffer: 534 537 kfree(buffer); 535 - if (rq) 536 - blk_put_request(rq); 538 + 537 539 return err; 538 540 } 539 541 EXPORT_SYMBOL_GPL(sg_scsi_ioctl);
+8
drivers/acpi/blacklist.c
··· 290 290 DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3446"), 291 291 }, 292 292 }, 293 + { 294 + .callback = dmi_disable_osi_win8, 295 + .ident = "Dell Vostro 3546", 296 + .matches = { 297 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 298 + DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3546"), 299 + }, 300 + }, 293 301 294 302 /* 295 303 * BIOS invocation of _OSI(Linux) is almost always a BIOS bug.
+1 -1
drivers/acpi/device_pm.c
··· 878 878 return 0; 879 879 880 880 target_state = acpi_target_system_state(); 881 - wakeup = device_may_wakeup(dev); 881 + wakeup = device_may_wakeup(dev) && acpi_device_can_wakeup(adev); 882 882 error = acpi_device_wakeup(adev, target_state, wakeup); 883 883 if (wakeup && error) 884 884 return error;
+19 -9
drivers/ata/ahci.c
··· 60 60 /* board IDs by feature in alphabetical order */ 61 61 board_ahci, 62 62 board_ahci_ign_iferr, 63 + board_ahci_nomsi, 63 64 board_ahci_noncq, 64 65 board_ahci_nosntf, 65 66 board_ahci_yes_fbs, ··· 117 116 }, 118 117 [board_ahci_ign_iferr] = { 119 118 AHCI_HFLAGS (AHCI_HFLAG_IGN_IRQ_IF_ERR), 119 + .flags = AHCI_FLAG_COMMON, 120 + .pio_mask = ATA_PIO4, 121 + .udma_mask = ATA_UDMA6, 122 + .port_ops = &ahci_ops, 123 + }, 124 + [board_ahci_nomsi] = { 125 + AHCI_HFLAGS (AHCI_HFLAG_NO_MSI), 120 126 .flags = AHCI_FLAG_COMMON, 121 127 .pio_mask = ATA_PIO4, 122 128 .udma_mask = ATA_UDMA6, ··· 321 313 { PCI_VDEVICE(INTEL, 0x8c87), board_ahci }, /* 9 Series RAID */ 322 314 { PCI_VDEVICE(INTEL, 0x8c8e), board_ahci }, /* 9 Series RAID */ 323 315 { PCI_VDEVICE(INTEL, 0x8c8f), board_ahci }, /* 9 Series RAID */ 316 + { PCI_VDEVICE(INTEL, 0xa103), board_ahci }, /* Sunrise Point-H AHCI */ 317 + { PCI_VDEVICE(INTEL, 0xa103), board_ahci }, /* Sunrise Point-H RAID */ 318 + { PCI_VDEVICE(INTEL, 0xa105), board_ahci }, /* Sunrise Point-H RAID */ 319 + { PCI_VDEVICE(INTEL, 0xa107), board_ahci }, /* Sunrise Point-H RAID */ 320 + { PCI_VDEVICE(INTEL, 0xa10f), board_ahci }, /* Sunrise Point-H RAID */ 324 321 325 322 /* JMicron 360/1/3/5/6, match class to avoid IDE function */ 326 323 { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, ··· 488 475 { PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci }, /* ASM1062 */ 489 476 490 477 /* 491 - * Samsung SSDs found on some macbooks. NCQ times out. 492 - * https://bugzilla.kernel.org/show_bug.cgi?id=60731 478 + * Samsung SSDs found on some macbooks. NCQ times out if MSI is 479 + * enabled. 
https://bugzilla.kernel.org/show_bug.cgi?id=60731 493 480 */ 494 - { PCI_VDEVICE(SAMSUNG, 0x1600), board_ahci_noncq }, 481 + { PCI_VDEVICE(SAMSUNG, 0x1600), board_ahci_nomsi }, 495 482 496 483 /* Enmotus */ 497 484 { PCI_DEVICE(0x1c44, 0x8000), board_ahci }, ··· 527 514 static void ahci_pci_save_initial_config(struct pci_dev *pdev, 528 515 struct ahci_host_priv *hpriv) 529 516 { 530 - unsigned int force_port_map = 0; 531 - unsigned int mask_port_map = 0; 532 - 533 517 if (pdev->vendor == PCI_VENDOR_ID_JMICRON && pdev->device == 0x2361) { 534 518 dev_info(&pdev->dev, "JMB361 has only one port\n"); 535 - force_port_map = 1; 519 + hpriv->force_port_map = 1; 536 520 } 537 521 538 522 /* ··· 539 529 */ 540 530 if (hpriv->flags & AHCI_HFLAG_MV_PATA) { 541 531 if (pdev->device == 0x6121) 542 - mask_port_map = 0x3; 532 + hpriv->mask_port_map = 0x3; 543 533 else 544 - mask_port_map = 0xf; 534 + hpriv->mask_port_map = 0xf; 545 535 dev_info(&pdev->dev, 546 536 "Disabling your PATA port. Use the boot option 'ahci.marvell_enable=0' to avoid this.\n"); 547 537 }
+17 -61
drivers/ata/libahci.c
··· 1778 1778 } 1779 1779 } 1780 1780 1781 - static void ahci_update_intr_status(struct ata_port *ap) 1781 + static void ahci_port_intr(struct ata_port *ap) 1782 1782 { 1783 1783 void __iomem *port_mmio = ahci_port_base(ap); 1784 - struct ahci_port_priv *pp = ap->private_data; 1785 1784 u32 status; 1786 1785 1787 1786 status = readl(port_mmio + PORT_IRQ_STAT); 1788 1787 writel(status, port_mmio + PORT_IRQ_STAT); 1789 1788 1790 - atomic_or(status, &pp->intr_status); 1789 + ahci_handle_port_interrupt(ap, port_mmio, status); 1791 1790 } 1792 1791 1793 1792 static irqreturn_t ahci_port_thread_fn(int irq, void *dev_instance) ··· 1803 1804 spin_lock_bh(ap->lock); 1804 1805 ahci_handle_port_interrupt(ap, port_mmio, status); 1805 1806 spin_unlock_bh(ap->lock); 1806 - 1807 - return IRQ_HANDLED; 1808 - } 1809 - 1810 - irqreturn_t ahci_thread_fn(int irq, void *dev_instance) 1811 - { 1812 - struct ata_host *host = dev_instance; 1813 - struct ahci_host_priv *hpriv = host->private_data; 1814 - u32 irq_masked = hpriv->port_map; 1815 - unsigned int i; 1816 - 1817 - for (i = 0; i < host->n_ports; i++) { 1818 - struct ata_port *ap; 1819 - 1820 - if (!(irq_masked & (1 << i))) 1821 - continue; 1822 - 1823 - ap = host->ports[i]; 1824 - if (ap) { 1825 - ahci_port_thread_fn(irq, ap); 1826 - VPRINTK("port %u\n", i); 1827 - } else { 1828 - VPRINTK("port %u (no irq)\n", i); 1829 - if (ata_ratelimit()) 1830 - dev_warn(host->dev, 1831 - "interrupt on disabled port %u\n", i); 1832 - } 1833 - } 1834 1807 1835 1808 return IRQ_HANDLED; 1836 1809 } ··· 1846 1875 1847 1876 irq_masked = irq_stat & hpriv->port_map; 1848 1877 1878 + spin_lock(&host->lock); 1879 + 1849 1880 for (i = 0; i < host->n_ports; i++) { 1850 1881 struct ata_port *ap; 1851 1882 ··· 1856 1883 1857 1884 ap = host->ports[i]; 1858 1885 if (ap) { 1859 - ahci_update_intr_status(ap); 1886 + ahci_port_intr(ap); 1860 1887 VPRINTK("port %u\n", i); 1861 1888 } else { 1862 1889 VPRINTK("port %u (no irq)\n", i); ··· 1879 1906 */ 1880 1907 
writel(irq_stat, mmio + HOST_IRQ_STAT); 1881 1908 1909 + spin_unlock(&host->lock); 1910 + 1882 1911 VPRINTK("EXIT\n"); 1883 1912 1884 - return handled ? IRQ_WAKE_THREAD : IRQ_NONE; 1913 + return IRQ_RETVAL(handled); 1885 1914 } 1886 1915 1887 1916 unsigned int ahci_qc_issue(struct ata_queued_cmd *qc) ··· 2295 2320 */ 2296 2321 pp->intr_mask = DEF_PORT_IRQ; 2297 2322 2298 - spin_lock_init(&pp->lock); 2299 - ap->lock = &pp->lock; 2323 + /* 2324 + * Switch to per-port locking in case each port has its own MSI vector. 2325 + */ 2326 + if ((hpriv->flags & AHCI_HFLAG_MULTI_MSI)) { 2327 + spin_lock_init(&pp->lock); 2328 + ap->lock = &pp->lock; 2329 + } 2300 2330 2301 2331 ap->private_data = pp; 2302 2332 ··· 2462 2482 return rc; 2463 2483 } 2464 2484 2465 - static int ahci_host_activate_single_irq(struct ata_host *host, int irq, 2466 - struct scsi_host_template *sht) 2467 - { 2468 - int i, rc; 2469 - 2470 - rc = ata_host_start(host); 2471 - if (rc) 2472 - return rc; 2473 - 2474 - rc = devm_request_threaded_irq(host->dev, irq, ahci_single_irq_intr, 2475 - ahci_thread_fn, IRQF_SHARED, 2476 - dev_driver_string(host->dev), host); 2477 - if (rc) 2478 - return rc; 2479 - 2480 - for (i = 0; i < host->n_ports; i++) 2481 - ata_port_desc(host->ports[i], "irq %d", irq); 2482 - 2483 - rc = ata_host_register(host, sht); 2484 - if (rc) 2485 - devm_free_irq(host->dev, irq, host); 2486 - 2487 - return rc; 2488 - } 2489 - 2490 2485 /** 2491 2486 * ahci_host_activate - start AHCI host, request IRQs and register it 2492 2487 * @host: target ATA host ··· 2487 2532 if (hpriv->flags & AHCI_HFLAG_MULTI_MSI) 2488 2533 rc = ahci_host_activate_multi_irqs(host, irq, sht); 2489 2534 else 2490 - rc = ahci_host_activate_single_irq(host, irq, sht); 2535 + rc = ata_host_activate(host, irq, ahci_single_irq_intr, 2536 + IRQF_SHARED, sht); 2491 2537 return rc; 2492 2538 } 2493 2539 EXPORT_SYMBOL_GPL(ahci_host_activate);
+15
drivers/ata/sata_rcar.c
··· 146 146 enum sata_rcar_type { 147 147 RCAR_GEN1_SATA, 148 148 RCAR_GEN2_SATA, 149 + RCAR_R8A7790_ES1_SATA, 149 150 }; 150 151 151 152 struct sata_rcar_priv { ··· 764 763 ap->udma_mask = ATA_UDMA6; 765 764 ap->flags |= ATA_FLAG_SATA; 766 765 766 + if (priv->type == RCAR_R8A7790_ES1_SATA) 767 + ap->flags |= ATA_FLAG_NO_DIPM; 768 + 767 769 ioaddr->cmd_addr = base + SDATA_REG; 768 770 ioaddr->ctl_addr = base + SSDEVCON_REG; 769 771 ioaddr->scr_addr = base + SCRSSTS_REG; ··· 796 792 sata_rcar_gen1_phy_init(priv); 797 793 break; 798 794 case RCAR_GEN2_SATA: 795 + case RCAR_R8A7790_ES1_SATA: 799 796 sata_rcar_gen2_phy_init(priv); 800 797 break; 801 798 default: ··· 843 838 .data = (void *)RCAR_GEN2_SATA 844 839 }, 845 840 { 841 + .compatible = "renesas,sata-r8a7790-es1", 842 + .data = (void *)RCAR_R8A7790_ES1_SATA 843 + }, 844 + { 846 845 .compatible = "renesas,sata-r8a7791", 846 + .data = (void *)RCAR_GEN2_SATA 847 + }, 848 + { 849 + .compatible = "renesas,sata-r8a7793", 847 850 .data = (void *)RCAR_GEN2_SATA 848 851 }, 849 852 { }, ··· 862 849 { "sata_rcar", RCAR_GEN1_SATA }, /* Deprecated by "sata-r8a7779" */ 863 850 { "sata-r8a7779", RCAR_GEN1_SATA }, 864 851 { "sata-r8a7790", RCAR_GEN2_SATA }, 852 + { "sata-r8a7790-es1", RCAR_R8A7790_ES1_SATA }, 865 853 { "sata-r8a7791", RCAR_GEN2_SATA }, 854 + { "sata-r8a7793", RCAR_GEN2_SATA }, 866 855 { }, 867 856 }; 868 857 MODULE_DEVICE_TABLE(platform, sata_rcar_id_table);
+34 -8
drivers/base/power/domain.c
··· 361 361 struct device *dev = pdd->dev; 362 362 int ret = 0; 363 363 364 - if (gpd_data->need_restore) 364 + if (gpd_data->need_restore > 0) 365 365 return 0; 366 + 367 + /* 368 + * If the value of the need_restore flag is still unknown at this point, 369 + * we trust that pm_genpd_poweroff() has verified that the device is 370 + * already runtime PM suspended. 371 + */ 372 + if (gpd_data->need_restore < 0) { 373 + gpd_data->need_restore = 1; 374 + return 0; 375 + } 366 376 367 377 mutex_unlock(&genpd->lock); 368 378 ··· 383 373 mutex_lock(&genpd->lock); 384 374 385 375 if (!ret) 386 - gpd_data->need_restore = true; 376 + gpd_data->need_restore = 1; 387 377 388 378 return ret; 389 379 } ··· 399 389 { 400 390 struct generic_pm_domain_data *gpd_data = to_gpd_data(pdd); 401 391 struct device *dev = pdd->dev; 402 - bool need_restore = gpd_data->need_restore; 392 + int need_restore = gpd_data->need_restore; 403 393 404 - gpd_data->need_restore = false; 394 + gpd_data->need_restore = 0; 405 395 mutex_unlock(&genpd->lock); 406 396 407 397 genpd_start_dev(genpd, dev); 398 + 399 + /* 400 + * Call genpd_restore_dev() for recently added devices too (need_restore 401 + * is negative then). 402 + */ 408 403 if (need_restore) 409 404 genpd_restore_dev(genpd, dev); 410 405 ··· 618 603 static int pm_genpd_runtime_suspend(struct device *dev) 619 604 { 620 605 struct generic_pm_domain *genpd; 606 + struct generic_pm_domain_data *gpd_data; 621 607 bool (*stop_ok)(struct device *__dev); 622 608 int ret; 623 609 ··· 644 628 return 0; 645 629 646 630 mutex_lock(&genpd->lock); 631 + 632 + /* 633 + * If we have an unknown state of the need_restore flag, it means none 634 + * of the runtime PM callbacks has been invoked yet. Let's update the 635 + * flag to reflect that the current state is active. 
636 + */ 637 + gpd_data = to_gpd_data(dev->power.subsys_data->domain_data); 638 + if (gpd_data->need_restore < 0) 639 + gpd_data->need_restore = 0; 640 + 647 641 genpd->in_progress++; 648 642 pm_genpd_poweroff(genpd); 649 643 genpd->in_progress--; ··· 1463 1437 spin_unlock_irq(&dev->power.lock); 1464 1438 1465 1439 if (genpd->attach_dev) 1466 - genpd->attach_dev(dev); 1440 + genpd->attach_dev(genpd, dev); 1467 1441 1468 1442 mutex_lock(&gpd_data->lock); 1469 1443 gpd_data->base.dev = dev; 1470 1444 list_add_tail(&gpd_data->base.list_node, &genpd->dev_list); 1471 - gpd_data->need_restore = genpd->status == GPD_STATE_POWER_OFF; 1445 + gpd_data->need_restore = -1; 1472 1446 gpd_data->td.constraint_changed = true; 1473 1447 gpd_data->td.effective_constraint_ns = -1; 1474 1448 mutex_unlock(&gpd_data->lock); ··· 1525 1499 genpd->max_off_time_changed = true; 1526 1500 1527 1501 if (genpd->detach_dev) 1528 - genpd->detach_dev(dev); 1502 + genpd->detach_dev(genpd, dev); 1529 1503 1530 1504 spin_lock_irq(&dev->power.lock); 1531 1505 ··· 1572 1546 1573 1547 psd = dev_to_psd(dev); 1574 1548 if (psd && psd->domain_data) 1575 - to_gpd_data(psd->domain_data)->need_restore = val; 1549 + to_gpd_data(psd->domain_data)->need_restore = val ? 1 : 0; 1576 1550 1577 1551 spin_unlock_irqrestore(&dev->power.lock, flags); 1578 1552 }
+2 -2
drivers/cpufreq/cpufreq-dt.c
··· 166 166 if (ret == -EPROBE_DEFER) 167 167 dev_dbg(cpu_dev, "cpu%d clock not ready, retry\n", cpu); 168 168 else 169 - dev_err(cpu_dev, "failed to get cpu%d clock: %d\n", ret, 170 - cpu); 169 + dev_err(cpu_dev, "failed to get cpu%d clock: %d\n", cpu, 170 + ret); 171 171 } else { 172 172 *cdev = cpu_dev; 173 173 *creg = cpu_reg;
+2 -1
drivers/cpufreq/cpufreq.c
··· 1022 1022 1023 1023 read_unlock_irqrestore(&cpufreq_driver_lock, flags); 1024 1024 1025 - policy->governor = NULL; 1025 + if (policy) 1026 + policy->governor = NULL; 1026 1027 1027 1028 return policy; 1028 1029 }
+16 -7
drivers/dma/pl330.c
··· 271 271 #define DMAC_MODE_NS (1 << 0) 272 272 unsigned int mode; 273 273 unsigned int data_bus_width:10; /* In number of bits */ 274 - unsigned int data_buf_dep:10; 274 + unsigned int data_buf_dep:11; 275 275 unsigned int num_chan:4; 276 276 unsigned int num_peri:6; 277 277 u32 peri_ns; ··· 2336 2336 int burst_len; 2337 2337 2338 2338 burst_len = pl330->pcfg.data_bus_width / 8; 2339 - burst_len *= pl330->pcfg.data_buf_dep; 2339 + burst_len *= pl330->pcfg.data_buf_dep / pl330->pcfg.num_chan; 2340 2340 burst_len >>= desc->rqcfg.brst_size; 2341 2341 2342 2342 /* src/dst_burst_len can't be more than 16 */ ··· 2459 2459 /* Select max possible burst size */ 2460 2460 burst = pl330->pcfg.data_bus_width / 8; 2461 2461 2462 - while (burst > 1) { 2463 - if (!(len % burst)) 2464 - break; 2462 + /* 2463 + * Make sure we use a burst size that aligns with all the memcpy 2464 + * parameters because our DMA programming algorithm doesn't cope with 2465 + * transfers which straddle an entry in the DMA device's MFIFO. 2466 + */ 2467 + while ((src | dst | len) & (burst - 1)) 2465 2468 burst /= 2; 2466 - } 2467 2469 2468 2470 desc->rqcfg.brst_size = 0; 2469 2471 while (burst != (1 << desc->rqcfg.brst_size)) 2470 2472 desc->rqcfg.brst_size++; 2473 + 2474 + /* 2475 + * If burst size is smaller than bus width then make sure we only 2476 + * transfer one at a time to avoid a burst stradling an MFIFO entry. 2477 + */ 2478 + if (desc->rqcfg.brst_size * 8 < pl330->pcfg.data_bus_width) 2479 + desc->rqcfg.brst_len = 1; 2471 2480 2472 2481 desc->rqcfg.brst_len = get_burst_len(desc, len); 2473 2482 ··· 2741 2732 2742 2733 2743 2734 dev_info(&adev->dev, 2744 - "Loaded driver for PL330 DMAC-%d\n", adev->periphid); 2735 + "Loaded driver for PL330 DMAC-%x\n", adev->periphid); 2745 2736 dev_info(&adev->dev, 2746 2737 "\tDBUFF-%ux%ubytes Num_Chans-%u Num_Peri-%u Num_Events-%u\n", 2747 2738 pcfg->data_buf_dep, pcfg->data_bus_width / 8, pcfg->num_chan,
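The pl330.c hunk picks a burst size that divides the source address, destination address and length simultaneously, so no burst straddles an MFIFO entry. The alignment trick can be sketched on its own (function names are illustrative; the 8-byte bus width is just an example value):

```c
#include <assert.h>
#include <stdint.h>

/* Largest power-of-two burst (in bytes, up to the bus width) aligned
 * with src, dst and len at once: OR the three together and shrink the
 * burst until no low bit conflicts remain. */
static unsigned int pick_burst(uint64_t src, uint64_t dst, uint64_t len,
			       unsigned int bus_width_bytes)
{
	unsigned int burst = bus_width_bytes;

	while ((src | dst | len) & (burst - 1))
		burst /= 2;
	return burst;
}

/* brst_size is the log2 of the burst, as programmed into the DMAC. */
static unsigned int burst_to_brst_size(unsigned int burst)
{
	unsigned int sz = 0;

	while (burst != (1u << sz))
		sz++;
	return sz;
}
```

ORing the three operands means a single misaligned bit in any of them shrinks the burst, which is exactly the "aligns with all the memcpy parameters" condition in the comment above.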
+30 -31
drivers/dma/sun6i-dma.c
··· 230 230 readl(pchan->base + DMA_CHAN_CUR_PARA)); 231 231 } 232 232 233 - static inline int convert_burst(u32 maxburst, u8 *burst) 233 + static inline s8 convert_burst(u32 maxburst) 234 234 { 235 235 switch (maxburst) { 236 236 case 1: 237 - *burst = 0; 238 - break; 237 + return 0; 239 238 case 8: 240 - *burst = 2; 241 - break; 239 + return 2; 242 240 default: 243 241 return -EINVAL; 244 242 } 245 - 246 - return 0; 247 243 } 248 244 249 - static inline int convert_buswidth(enum dma_slave_buswidth addr_width, u8 *width) 245 + static inline s8 convert_buswidth(enum dma_slave_buswidth addr_width) 250 246 { 251 247 if ((addr_width < DMA_SLAVE_BUSWIDTH_1_BYTE) || 252 248 (addr_width > DMA_SLAVE_BUSWIDTH_4_BYTES)) 253 249 return -EINVAL; 254 250 255 - *width = addr_width >> 1; 256 - return 0; 251 + return addr_width >> 1; 257 252 } 258 253 259 254 static void *sun6i_dma_lli_add(struct sun6i_dma_lli *prev, ··· 279 284 struct dma_slave_config *config) 280 285 { 281 286 u8 src_width, dst_width, src_burst, dst_burst; 282 - int ret; 283 287 284 288 if (!config) 285 289 return -EINVAL; 286 290 287 - ret = convert_burst(config->src_maxburst, &src_burst); 288 - if (ret) 289 - return ret; 291 + src_burst = convert_burst(config->src_maxburst); 292 + if (src_burst) 293 + return src_burst; 290 294 291 - ret = convert_burst(config->dst_maxburst, &dst_burst); 292 - if (ret) 293 - return ret; 295 + dst_burst = convert_burst(config->dst_maxburst); 296 + if (dst_burst) 297 + return dst_burst; 294 298 295 - ret = convert_buswidth(config->src_addr_width, &src_width); 296 - if (ret) 297 - return ret; 299 + src_width = convert_buswidth(config->src_addr_width); 300 + if (src_width) 301 + return src_width; 298 302 299 - ret = convert_buswidth(config->dst_addr_width, &dst_width); 300 - if (ret) 301 - return ret; 303 + dst_width = convert_buswidth(config->dst_addr_width); 304 + if (dst_width) 305 + return dst_width; 302 306 303 307 lli->cfg = DMA_CHAN_CFG_SRC_BURST(src_burst) | 304 308 
DMA_CHAN_CFG_SRC_WIDTH(src_width) | ··· 536 542 { 537 543 struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device); 538 544 struct sun6i_vchan *vchan = to_sun6i_vchan(chan); 539 - struct dma_slave_config *sconfig = &vchan->cfg; 540 545 struct sun6i_dma_lli *v_lli; 541 546 struct sun6i_desc *txd; 542 547 dma_addr_t p_lli; 543 - int ret; 548 + s8 burst, width; 544 549 545 550 dev_dbg(chan2dev(chan), 546 551 "%s; chan: %d, dest: %pad, src: %pad, len: %zu. flags: 0x%08lx\n", ··· 558 565 goto err_txd_free; 559 566 } 560 567 561 - ret = sun6i_dma_cfg_lli(v_lli, src, dest, len, sconfig); 562 - if (ret) 563 - goto err_dma_free; 568 + v_lli->src = src; 569 + v_lli->dst = dest; 570 + v_lli->len = len; 571 + v_lli->para = NORMAL_WAIT; 564 572 573 + burst = convert_burst(8); 574 + width = convert_buswidth(DMA_SLAVE_BUSWIDTH_4_BYTES); 565 575 v_lli->cfg |= DMA_CHAN_CFG_SRC_DRQ(DRQ_SDRAM) | 566 576 DMA_CHAN_CFG_DST_DRQ(DRQ_SDRAM) | 567 577 DMA_CHAN_CFG_DST_LINEAR_MODE | 568 - DMA_CHAN_CFG_SRC_LINEAR_MODE; 578 + DMA_CHAN_CFG_SRC_LINEAR_MODE | 579 + DMA_CHAN_CFG_SRC_BURST(burst) | 580 + DMA_CHAN_CFG_SRC_WIDTH(width) | 581 + DMA_CHAN_CFG_DST_BURST(burst) | 582 + DMA_CHAN_CFG_DST_WIDTH(width); 569 583 570 584 sun6i_dma_lli_add(NULL, v_lli, p_lli, txd); 571 585 ··· 580 580 581 581 return vchan_tx_prep(&vchan->vc, &txd->vd, flags); 582 582 583 - err_dma_free: 584 - dma_pool_free(sdev->pool, v_lli, p_lli); 585 583 err_txd_free: 586 584 kfree(txd); 587 585 return NULL; ··· 913 915 sdc->slave.device_prep_dma_memcpy = sun6i_dma_prep_dma_memcpy; 914 916 sdc->slave.device_control = sun6i_dma_control; 915 917 sdc->slave.chancnt = NR_MAX_VCHANS; 918 + sdc->slave.copy_align = 4; 916 919 917 920 sdc->slave.dev = &pdev->dev; 918 921
+1 -2
drivers/firewire/core-cdev.c
··· 1637 1637 _IOC_SIZE(cmd) > sizeof(buffer)) 1638 1638 return -ENOTTY; 1639 1639 1640 - if (_IOC_DIR(cmd) == _IOC_READ) 1641 - memset(&buffer, 0, _IOC_SIZE(cmd)); 1640 + memset(&buffer, 0, sizeof(buffer)); 1642 1641 1643 1642 if (_IOC_DIR(cmd) & _IOC_WRITE) 1644 1643 if (copy_from_user(&buffer, arg, _IOC_SIZE(cmd)))
+33 -16
drivers/gpu/drm/exynos/exynos_drm_drv.c
··· 495 495 496 496 mutex_lock(&drm_component_lock); 497 497 498 + /* Do not retry to probe if there is no any kms driver regitered. */ 499 + if (list_empty(&drm_component_list)) { 500 + mutex_unlock(&drm_component_lock); 501 + return ERR_PTR(-ENODEV); 502 + } 503 + 498 504 list_for_each_entry(cdev, &drm_component_list, list) { 499 505 /* 500 506 * Add components to master only in case that crtc and ··· 591 585 goto err_unregister_mixer_drv; 592 586 #endif 593 587 588 + match = exynos_drm_match_add(&pdev->dev); 589 + if (IS_ERR(match)) { 590 + ret = PTR_ERR(match); 591 + goto err_unregister_hdmi_drv; 592 + } 593 + 594 + ret = component_master_add_with_match(&pdev->dev, &exynos_drm_ops, 595 + match); 596 + if (ret < 0) 597 + goto err_unregister_hdmi_drv; 598 + 594 599 #ifdef CONFIG_DRM_EXYNOS_G2D 595 600 ret = platform_driver_register(&g2d_driver); 596 601 if (ret < 0) 597 - goto err_unregister_hdmi_drv; 602 + goto err_del_component_master; 598 603 #endif 599 604 600 605 #ifdef CONFIG_DRM_EXYNOS_FIMC ··· 636 619 goto err_unregister_ipp_drv; 637 620 #endif 638 621 639 - match = exynos_drm_match_add(&pdev->dev); 640 - if (IS_ERR(match)) { 641 - ret = PTR_ERR(match); 642 - goto err_unregister_resources; 643 - } 644 - 645 - ret = component_master_add_with_match(&pdev->dev, &exynos_drm_ops, 646 - match); 647 - if (ret < 0) 648 - goto err_unregister_resources; 649 - 650 622 return ret; 651 623 652 - err_unregister_resources: 653 - 654 624 #ifdef CONFIG_DRM_EXYNOS_IPP 655 - exynos_platform_device_ipp_unregister(); 656 625 err_unregister_ipp_drv: 657 626 platform_driver_unregister(&ipp_driver); 658 627 err_unregister_gsc_drv: ··· 661 658 662 659 #ifdef CONFIG_DRM_EXYNOS_G2D 663 660 platform_driver_unregister(&g2d_driver); 664 - err_unregister_hdmi_drv: 661 + err_del_component_master: 665 662 #endif 663 + component_master_del(&pdev->dev, &exynos_drm_ops); 666 664 665 + err_unregister_hdmi_drv: 667 666 #ifdef CONFIG_DRM_EXYNOS_HDMI 668 667 
platform_driver_unregister(&hdmi_driver); 669 668 err_unregister_mixer_drv: ··· 745 740 static int exynos_drm_init(void) 746 741 { 747 742 int ret; 743 + 744 + /* 745 + * Register device object only in case of Exynos SoC. 746 + * 747 + * Below codes resolves temporarily infinite loop issue incurred 748 + * by Exynos drm driver when using multi-platform kernel. 749 + * So these codes will be replaced with more generic way later. 750 + */ 751 + if (!of_machine_is_compatible("samsung,exynos3") && 752 + !of_machine_is_compatible("samsung,exynos4") && 753 + !of_machine_is_compatible("samsung,exynos5")) 754 + return -ENODEV; 748 755 749 756 exynos_drm_pdev = platform_device_register_simple("exynos-drm", -1, 750 757 NULL, 0);
+6 -3
drivers/gpu/drm/exynos/exynos_drm_g2d.c
··· 302 302 struct exynos_drm_subdrv *subdrv = &g2d->subdrv; 303 303 304 304 kfree(g2d->cmdlist_node); 305 - dma_free_attrs(subdrv->drm_dev->dev, G2D_CMDLIST_POOL_SIZE, 306 - g2d->cmdlist_pool_virt, 307 - g2d->cmdlist_pool, &g2d->cmdlist_dma_attrs); 305 + 306 + if (g2d->cmdlist_pool_virt && g2d->cmdlist_pool) { 307 + dma_free_attrs(subdrv->drm_dev->dev, G2D_CMDLIST_POOL_SIZE, 308 + g2d->cmdlist_pool_virt, 309 + g2d->cmdlist_pool, &g2d->cmdlist_dma_attrs); 310 + } 308 311 } 309 312 310 313 static struct g2d_cmdlist_node *g2d_get_cmdlist(struct g2d_data *g2d)
+8 -6
drivers/gpu/drm/i915/i915_dma.c
··· 1670 1670 goto out_regs; 1671 1671 1672 1672 if (drm_core_check_feature(dev, DRIVER_MODESET)) { 1673 - ret = i915_kick_out_vgacon(dev_priv); 1674 - if (ret) { 1675 - DRM_ERROR("failed to remove conflicting VGA console\n"); 1676 - goto out_gtt; 1677 - } 1678 - 1673 + /* WARNING: Apparently we must kick fbdev drivers before vgacon, 1674 + * otherwise the vga fbdev driver falls over. */ 1679 1675 ret = i915_kick_out_firmware_fb(dev_priv); 1680 1676 if (ret) { 1681 1677 DRM_ERROR("failed to remove conflicting framebuffer drivers\n"); 1678 + goto out_gtt; 1679 + } 1680 + 1681 + ret = i915_kick_out_vgacon(dev_priv); 1682 + if (ret) { 1683 + DRM_ERROR("failed to remove conflicting VGA console\n"); 1682 1684 goto out_gtt; 1683 1685 } 1684 1686 }
+3 -16
drivers/gpu/drm/i915/i915_gem_tiling.c
··· 364 364 * has to also include the unfenced register the GPU uses 365 365 * whilst executing a fenced command for an untiled object. 366 366 */ 367 - 368 - obj->map_and_fenceable = 369 - !i915_gem_obj_ggtt_bound(obj) || 370 - (i915_gem_obj_ggtt_offset(obj) + 371 - obj->base.size <= dev_priv->gtt.mappable_end && 372 - i915_gem_object_fence_ok(obj, args->tiling_mode)); 373 - 374 - /* Rebind if we need a change of alignment */ 375 - if (!obj->map_and_fenceable) { 376 - u32 unfenced_align = 377 - i915_gem_get_gtt_alignment(dev, obj->base.size, 378 - args->tiling_mode, 379 - false); 380 - if (i915_gem_obj_ggtt_offset(obj) & (unfenced_align - 1)) 381 - ret = i915_gem_object_ggtt_unbind(obj); 382 - } 367 + if (obj->map_and_fenceable && 368 + !i915_gem_object_fence_ok(obj, args->tiling_mode)) 369 + ret = i915_gem_object_ggtt_unbind(obj); 383 370 384 371 if (ret == 0) { 385 372 obj->fence_dirty =
-5
drivers/gpu/drm/i915/intel_pm.c
··· 5469 5469 I915_WRITE(_3D_CHICKEN, 5470 5470 _MASKED_BIT_ENABLE(_3D_CHICKEN_HIZ_PLANE_DISABLE_MSAA_4X_SNB)); 5471 5471 5472 - /* WaSetupGtModeTdRowDispatch:snb */ 5473 - if (IS_SNB_GT1(dev)) 5474 - I915_WRITE(GEN6_GT_MODE, 5475 - _MASKED_BIT_ENABLE(GEN6_TD_FOUR_ROW_DISPATCH_DISABLE)); 5476 - 5477 5472 /* WaDisable_RenderCache_OperationalFlush:snb */ 5478 5473 I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE)); 5479 5474
+15 -1
drivers/gpu/drm/nouveau/core/subdev/fb/gk20a.c
··· 27 27 }; 28 28 29 29 static int 30 + gk20a_fb_init(struct nouveau_object *object) 31 + { 32 + struct gk20a_fb_priv *priv = (void *)object; 33 + int ret; 34 + 35 + ret = nouveau_fb_init(&priv->base); 36 + if (ret) 37 + return ret; 38 + 39 + nv_mask(priv, 0x100c80, 0x00000001, 0x00000000); /* 128KiB lpg */ 40 + return 0; 41 + } 42 + 43 + static int 30 44 gk20a_fb_ctor(struct nouveau_object *parent, struct nouveau_object *engine, 31 45 struct nouveau_oclass *oclass, void *data, u32 size, 32 46 struct nouveau_object **pobject) ··· 62 48 .base.ofuncs = &(struct nouveau_ofuncs) { 63 49 .ctor = gk20a_fb_ctor, 64 50 .dtor = _nouveau_fb_dtor, 65 - .init = _nouveau_fb_init, 51 + .init = gk20a_fb_init, 66 52 .fini = _nouveau_fb_fini, 67 53 }, 68 54 .memtype = nvc0_fb_memtype_valid,
+23 -2
drivers/gpu/drm/nouveau/nv50_display.c
··· 791 791 } 792 792 793 793 static int 794 + nv50_crtc_set_raster_vblank_dmi(struct nouveau_crtc *nv_crtc, u32 usec) 795 + { 796 + struct nv50_mast *mast = nv50_mast(nv_crtc->base.dev); 797 + u32 *push; 798 + 799 + push = evo_wait(mast, 8); 800 + if (!push) 801 + return -ENOMEM; 802 + 803 + evo_mthd(push, 0x0828 + (nv_crtc->index * 0x400), 1); 804 + evo_data(push, usec); 805 + evo_kick(push, mast); 806 + return 0; 807 + } 808 + 809 + static int 794 810 nv50_crtc_set_color_vibrance(struct nouveau_crtc *nv_crtc, bool update) 795 811 { 796 812 struct nv50_mast *mast = nv50_mast(nv_crtc->base.dev); ··· 1120 1104 evo_mthd(push, 0x0804 + (nv_crtc->index * 0x400), 2); 1121 1105 evo_data(push, 0x00800000 | mode->clock); 1122 1106 evo_data(push, (ilace == 2) ? 2 : 0); 1123 - evo_mthd(push, 0x0810 + (nv_crtc->index * 0x400), 8); 1107 + evo_mthd(push, 0x0810 + (nv_crtc->index * 0x400), 6); 1124 1108 evo_data(push, 0x00000000); 1125 1109 evo_data(push, (vactive << 16) | hactive); 1126 1110 evo_data(push, ( vsynce << 16) | hsynce); 1127 1111 evo_data(push, (vblanke << 16) | hblanke); 1128 1112 evo_data(push, (vblanks << 16) | hblanks); 1129 1113 evo_data(push, (vblan2e << 16) | vblan2s); 1130 - evo_data(push, vblankus); 1114 + evo_mthd(push, 0x082c + (nv_crtc->index * 0x400), 1); 1131 1115 evo_data(push, 0x00000000); 1132 1116 evo_mthd(push, 0x0900 + (nv_crtc->index * 0x400), 2); 1133 1117 evo_data(push, 0x00000311); ··· 1157 1141 nv_connector = nouveau_crtc_connector_get(nv_crtc); 1158 1142 nv50_crtc_set_dither(nv_crtc, false); 1159 1143 nv50_crtc_set_scale(nv_crtc, false); 1144 + 1145 + /* G94 only accepts this after setting scale */ 1146 + if (nv50_vers(mast) < GF110_DISP_CORE_CHANNEL_DMA) 1147 + nv50_crtc_set_raster_vblank_dmi(nv_crtc, vblankus); 1148 + 1160 1149 nv50_crtc_set_color_vibrance(nv_crtc, false); 1161 1150 nv50_crtc_set_image(nv_crtc, crtc->primary->fb, x, y, false); 1162 1151 return 0;
+10 -1
drivers/gpu/drm/radeon/atom.c
··· 1217 1217 return ret; 1218 1218 } 1219 1219 1220 - int atom_execute_table(struct atom_context *ctx, int index, uint32_t * params) 1220 + int atom_execute_table_scratch_unlocked(struct atom_context *ctx, int index, uint32_t * params) 1221 1221 { 1222 1222 int r; 1223 1223 ··· 1235 1235 ctx->divmul[1] = 0; 1236 1236 r = atom_execute_table_locked(ctx, index, params); 1237 1237 mutex_unlock(&ctx->mutex); 1238 + return r; 1239 + } 1240 + 1241 + int atom_execute_table(struct atom_context *ctx, int index, uint32_t * params) 1242 + { 1243 + int r; 1244 + mutex_lock(&ctx->scratch_mutex); 1245 + r = atom_execute_table_scratch_unlocked(ctx, index, params); 1246 + mutex_unlock(&ctx->scratch_mutex); 1238 1247 return r; 1239 1248 } 1240 1249
+2
drivers/gpu/drm/radeon/atom.h
··· 125 125 struct atom_context { 126 126 struct card_info *card; 127 127 struct mutex mutex; 128 + struct mutex scratch_mutex; 128 129 void *bios; 129 130 uint32_t cmd_table, data_table; 130 131 uint16_t *iio; ··· 146 145 147 146 struct atom_context *atom_parse(struct card_info *, void *); 148 147 int atom_execute_table(struct atom_context *, int, uint32_t *); 148 + int atom_execute_table_scratch_unlocked(struct atom_context *, int, uint32_t *); 149 149 int atom_asic_init(struct atom_context *); 150 150 void atom_destroy(struct atom_context *); 151 151 bool atom_parse_data_header(struct atom_context *ctx, int index, uint16_t *size,
+3 -1
drivers/gpu/drm/radeon/atombios_dp.c
··· 100 100 memset(&args, 0, sizeof(args)); 101 101 102 102 mutex_lock(&chan->mutex); 103 + mutex_lock(&rdev->mode_info.atom_context->scratch_mutex); 103 104 104 105 base = (unsigned char *)(rdev->mode_info.atom_context->scratch + 1); 105 106 ··· 114 113 if (ASIC_IS_DCE4(rdev)) 115 114 args.v2.ucHPD_ID = chan->rec.hpd; 116 115 117 - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 116 + atom_execute_table_scratch_unlocked(rdev->mode_info.atom_context, index, (uint32_t *)&args); 118 117 119 118 *ack = args.v1.ucReplyStatus; 120 119 ··· 148 147 149 148 r = recv_bytes; 150 149 done: 150 + mutex_unlock(&rdev->mode_info.atom_context->scratch_mutex); 151 151 mutex_unlock(&chan->mutex); 152 152 153 153 return r;
+3 -1
drivers/gpu/drm/radeon/atombios_i2c.c
··· 48 48 memset(&args, 0, sizeof(args)); 49 49 50 50 mutex_lock(&chan->mutex); 51 + mutex_lock(&rdev->mode_info.atom_context->scratch_mutex); 51 52 52 53 base = (unsigned char *)rdev->mode_info.atom_context->scratch; 53 54 ··· 83 82 args.ucSlaveAddr = slave_addr << 1; 84 83 args.ucLineNumber = chan->rec.i2c_id; 85 84 86 - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 85 + atom_execute_table_scratch_unlocked(rdev->mode_info.atom_context, index, (uint32_t *)&args); 87 86 88 87 /* error */ 89 88 if (args.ucStatus != HW_ASSISTED_I2C_STATUS_SUCCESS) { ··· 96 95 radeon_atom_copy_swap(buf, base, num, false); 97 96 98 97 done: 98 + mutex_unlock(&rdev->mode_info.atom_context->scratch_mutex); 99 99 mutex_unlock(&chan->mutex); 100 100 101 101 return r;
+1 -1
drivers/gpu/drm/radeon/r600_dpm.c
··· 1256 1256 (mode_info->atom_context->bios + data_offset + 1257 1257 le16_to_cpu(ext_hdr->usPowerTuneTableOffset)); 1258 1258 rdev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit = 1259 - ppt->usMaximumPowerDeliveryLimit; 1259 + le16_to_cpu(ppt->usMaximumPowerDeliveryLimit); 1260 1260 pt = &ppt->power_tune_table; 1261 1261 } else { 1262 1262 ATOM_PPLIB_POWERTUNE_Table *ppt = (ATOM_PPLIB_POWERTUNE_Table *)
+1
drivers/gpu/drm/radeon/radeon_device.c
··· 952 952 } 953 953 954 954 mutex_init(&rdev->mode_info.atom_context->mutex); 955 + mutex_init(&rdev->mode_info.atom_context->scratch_mutex); 955 956 radeon_atom_initialize_bios_scratch_regs(rdev->ddev); 956 957 atom_allocate_fb_scratch(rdev->mode_info.atom_context); 957 958 return 0;
+3
drivers/gpu/drm/radeon/radeon_encoders.c
··· 179 179 (rdev->pdev->subsystem_vendor == 0x1734) && 180 180 (rdev->pdev->subsystem_device == 0x1107)) 181 181 use_bl = false; 182 + /* disable native backlight control on older asics */ 183 + else if (rdev->family < CHIP_R600) 184 + use_bl = false; 182 185 else 183 186 use_bl = true; 184 187 }
+4 -5
drivers/gpu/drm/tegra/dc.c
··· 736 736 737 737 static void tegra_crtc_disable(struct drm_crtc *crtc) 738 738 { 739 - struct tegra_dc *dc = to_tegra_dc(crtc); 740 739 struct drm_device *drm = crtc->dev; 741 740 struct drm_plane *plane; 742 741 ··· 751 752 } 752 753 } 753 754 754 - drm_vblank_off(drm, dc->pipe); 755 + drm_crtc_vblank_off(crtc); 755 756 } 756 757 757 758 static bool tegra_crtc_mode_fixup(struct drm_crtc *crtc, ··· 840 841 u32 value; 841 842 int err; 842 843 843 - drm_vblank_pre_modeset(crtc->dev, dc->pipe); 844 - 845 844 err = tegra_crtc_setup_clk(crtc, mode); 846 845 if (err) { 847 846 dev_err(dc->dev, "failed to setup clock for CRTC: %d\n", err); ··· 893 896 unsigned int syncpt; 894 897 unsigned long value; 895 898 899 + drm_crtc_vblank_off(crtc); 900 + 896 901 /* hardware initialization */ 897 902 reset_control_deassert(dc->rst); 898 903 usleep_range(10000, 20000); ··· 942 943 value = GENERAL_ACT_REQ | WIN_A_ACT_REQ; 943 944 tegra_dc_writel(dc, value, DC_CMD_STATE_CONTROL); 944 945 945 - drm_vblank_post_modeset(crtc->dev, dc->pipe); 946 + drm_crtc_vblank_on(crtc); 946 947 } 947 948 948 949 static void tegra_crtc_load_lut(struct drm_crtc *crtc)
+30 -14
drivers/infiniband/ulp/isert/ib_isert.c
··· 115 115 attr.cap.max_recv_wr = ISERT_QP_MAX_RECV_DTOS; 116 116 /* 117 117 * FIXME: Use devattr.max_sge - 2 for max_send_sge as 118 - * work-around for RDMA_READ.. 118 + * work-around for RDMA_READs with ConnectX-2. 119 + * 120 + * Also, still make sure to have at least two SGEs for 121 + * outgoing control PDU responses. 119 122 */ 120 - attr.cap.max_send_sge = device->dev_attr.max_sge - 2; 123 + attr.cap.max_send_sge = max(2, device->dev_attr.max_sge - 2); 121 124 isert_conn->max_sge = attr.cap.max_send_sge; 122 125 123 126 attr.cap.max_recv_sge = 1; ··· 228 225 struct isert_cq_desc *cq_desc; 229 226 struct ib_device_attr *dev_attr; 230 227 int ret = 0, i, j; 228 + int max_rx_cqe, max_tx_cqe; 231 229 232 230 dev_attr = &device->dev_attr; 233 231 ret = isert_query_device(ib_dev, dev_attr); 234 232 if (ret) 235 233 return ret; 234 + 235 + max_rx_cqe = min(ISER_MAX_RX_CQ_LEN, dev_attr->max_cqe); 236 + max_tx_cqe = min(ISER_MAX_TX_CQ_LEN, dev_attr->max_cqe); 236 237 237 238 /* asign function handlers */ 238 239 if (dev_attr->device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS && ··· 279 272 isert_cq_rx_callback, 280 273 isert_cq_event_callback, 281 274 (void *)&cq_desc[i], 282 - ISER_MAX_RX_CQ_LEN, i); 275 + max_rx_cqe, i); 283 276 if (IS_ERR(device->dev_rx_cq[i])) { 284 277 ret = PTR_ERR(device->dev_rx_cq[i]); 285 278 device->dev_rx_cq[i] = NULL; ··· 291 284 isert_cq_tx_callback, 292 285 isert_cq_event_callback, 293 286 (void *)&cq_desc[i], 294 - ISER_MAX_TX_CQ_LEN, i); 287 + max_tx_cqe, i); 295 288 if (IS_ERR(device->dev_tx_cq[i])) { 296 289 ret = PTR_ERR(device->dev_tx_cq[i]); 297 290 device->dev_tx_cq[i] = NULL; ··· 810 803 complete(&isert_conn->conn_wait); 811 804 } 812 805 813 - static void 806 + static int 814 807 isert_disconnected_handler(struct rdma_cm_id *cma_id, bool disconnect) 815 808 { 816 - struct isert_conn *isert_conn = (struct isert_conn *)cma_id->context; 809 + struct isert_conn *isert_conn; 810 + 811 + if (!cma_id->qp) { 812 + struct isert_np *isert_np = cma_id->context; 813 + 814 + isert_np->np_cm_id = NULL; 815 + return -1; 816 + } 817 + 818 + isert_conn = (struct isert_conn *)cma_id->context; 817 819 818 820 isert_conn->disconnect = disconnect; 819 821 INIT_WORK(&isert_conn->conn_logout_work, isert_disconnect_work); 820 822 schedule_work(&isert_conn->conn_logout_work); 823 + 824 + return 0; 821 825 } 822 826 823 827 static int ··· 843 825 switch (event->event) { 844 826 case RDMA_CM_EVENT_CONNECT_REQUEST: 845 827 ret = isert_connect_request(cma_id, event); 828 + if (ret) 829 + pr_err("isert_cma_handler failed RDMA_CM_EVENT: 0x%08x %d\n", 830 + event->event, ret); 846 831 break; 847 832 case RDMA_CM_EVENT_ESTABLISHED: 848 833 isert_connected_handler(cma_id); ··· 855 834 case RDMA_CM_EVENT_DEVICE_REMOVAL: /* FALLTHRU */ 856 835 disconnect = true; 857 836 case RDMA_CM_EVENT_TIMEWAIT_EXIT: /* FALLTHRU */ 858 - isert_disconnected_handler(cma_id, disconnect); 837 + ret = isert_disconnected_handler(cma_id, disconnect); 859 838 break; 860 839 case RDMA_CM_EVENT_CONNECT_ERROR: 861 840 default: 862 841 pr_err("Unhandled RDMA CMA event: %d\n", event->event); 863 842 break; 864 - } 865 - 866 - if (ret != 0) { 867 - pr_err("isert_cma_handler failed RDMA_CM_EVENT: 0x%08x %d\n", 868 - event->event, ret); 869 - dump_stack(); 870 843 } 871 844 872 845 return ret; ··· 3205 3190 { 3206 3191 struct isert_np *isert_np = (struct isert_np *)np->np_context; 3207 3192 3208 - rdma_destroy_id(isert_np->np_cm_id); 3193 + if (isert_np->np_cm_id) 3194 + rdma_destroy_id(isert_np->np_cm_id); 3209 3195 3210 3196 np->np_context = NULL; 3211 3197 kfree(isert_np);
+8
drivers/infiniband/ulp/srpt/ib_srpt.c
··· 2092 2092 if (!qp_init) 2093 2093 goto out; 2094 2094 2095 + retry: 2095 2096 ch->cq = ib_create_cq(sdev->device, srpt_completion, NULL, ch, 2096 2097 ch->rq_size + srp_sq_size, 0); 2097 2098 if (IS_ERR(ch->cq)) { ··· 2116 2115 ch->qp = ib_create_qp(sdev->pd, qp_init); 2117 2116 if (IS_ERR(ch->qp)) { 2118 2117 ret = PTR_ERR(ch->qp); 2118 + if (ret == -ENOMEM) { 2119 + srp_sq_size /= 2; 2120 + if (srp_sq_size >= MIN_SRPT_SQ_SIZE) { 2121 + ib_destroy_cq(ch->cq); 2122 + goto retry; 2123 + } 2124 + } 2119 2125 printk(KERN_ERR "failed to create_qp ret= %d\n", ret); 2120 2126 goto err_destroy_cq; 2121 2127 }
+1
drivers/input/misc/twl4030-pwrbutton.c
··· 85 85 } 86 86 87 87 platform_set_drvdata(pdev, pwr); 88 + device_init_wakeup(&pdev->dev, true); 88 89 89 90 return 0; 90 91 }
+26 -2
drivers/input/mouse/alps.c
··· 1156 1156 { 1157 1157 struct alps_data *priv = psmouse->private; 1158 1158 1159 - if ((psmouse->packet[0] & 0xc8) == 0x08) { /* PS/2 packet */ 1159 + /* 1160 + * Check if we are dealing with a bare PS/2 packet, presumably from 1161 + * a device connected to the external PS/2 port. Because bare PS/2 1162 + * protocol does not have enough constant bits to self-synchronize 1163 + * properly we only do this if the device is fully synchronized. 1164 + */ 1165 + if (!psmouse->out_of_sync_cnt && (psmouse->packet[0] & 0xc8) == 0x08) { 1160 1166 if (psmouse->pktcnt == 3) { 1161 1167 alps_report_bare_ps2_packet(psmouse, psmouse->packet, 1162 1168 true); ··· 1186 1180 } 1187 1181 1188 1182 /* Bytes 2 - pktsize should have 0 in the highest bit */ 1189 - if ((priv->proto_version < ALPS_PROTO_V5) && 1183 + if (priv->proto_version < ALPS_PROTO_V5 && 1190 1184 psmouse->pktcnt >= 2 && psmouse->pktcnt <= psmouse->pktsize && 1191 1185 (psmouse->packet[psmouse->pktcnt - 1] & 0x80)) { 1192 1186 psmouse_dbg(psmouse, "refusing packet[%i] = %x\n", 1193 1187 psmouse->pktcnt - 1, 1194 1188 psmouse->packet[psmouse->pktcnt - 1]); 1189 + 1190 + if (priv->proto_version == ALPS_PROTO_V3 && 1191 + psmouse->pktcnt == psmouse->pktsize) { 1192 + /* 1193 + * Some Dell boxes, such as Latitude E6440 or E7440 1194 + * with closed lid, quite often smash last byte of 1195 + * otherwise valid packet with 0xff. Given that the 1196 + * next packet is very likely to be valid let's 1197 + * report PSMOUSE_FULL_PACKET but not process data, 1198 + * rather than reporting PSMOUSE_BAD_DATA and 1199 + * filling the logs. 1200 + */ 1201 + return PSMOUSE_FULL_PACKET; 1202 + } 1203 + 1195 1204 return PSMOUSE_BAD_DATA; 1196 1205 } ··· 2409 2388 2410 2389 /* We are having trouble resyncing ALPS touchpads so disable it for now */ 2411 2390 psmouse->resync_time = 0; 2391 + 2392 + /* Allow 2 invalid packets without resetting device */ 2393 + psmouse->resetafter = psmouse->pktsize * 2; 2412 2394 2413 2395 return 0; 2414 2396
+53 -3
drivers/input/mouse/elantech.c
··· 563 563 } else { 564 564 input_report_key(dev, BTN_LEFT, packet[0] & 0x01); 565 565 input_report_key(dev, BTN_RIGHT, packet[0] & 0x02); 566 + input_report_key(dev, BTN_MIDDLE, packet[0] & 0x04); 566 567 } 567 568 568 569 input_mt_report_pointer_emulation(dev, true); ··· 793 792 unsigned char packet_type = packet[3] & 0x03; 794 793 bool sanity_check; 795 794 795 + if ((packet[3] & 0x0f) == 0x06) 796 + return PACKET_TRACKPOINT; 797 + 796 798 /* 797 799 * Sanity check based on the constant bits of a packet. 798 800 * The constant bits change depending on the value of ··· 881 877 882 878 case 4: 883 879 packet_type = elantech_packet_check_v4(psmouse); 884 - if (packet_type == PACKET_UNKNOWN) 880 + switch (packet_type) { 881 + case PACKET_UNKNOWN: 885 882 return PSMOUSE_BAD_DATA; 886 883 887 - elantech_report_absolute_v4(psmouse, packet_type); 884 + case PACKET_TRACKPOINT: 885 + elantech_report_trackpoint(psmouse, packet_type); 886 + break; 887 + 888 + default: 889 + elantech_report_absolute_v4(psmouse, packet_type); 890 + break; 891 + } 892 + 888 893 break; 889 894 } 890 895 ··· 1133 1120 } 1134 1121 1135 1122 /* 1123 + * Some hw_version 4 models do have a middle button 1124 + */ 1125 + static const struct dmi_system_id elantech_dmi_has_middle_button[] = { 1126 + #if defined(CONFIG_DMI) && defined(CONFIG_X86) 1127 + { 1128 + /* Fujitsu H730 has a middle button */ 1129 + .matches = { 1130 + DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"), 1131 + DMI_MATCH(DMI_PRODUCT_NAME, "CELSIUS H730"), 1132 + }, 1133 + }, 1134 + #endif 1135 + { } 1136 + }; 1137 + 1138 + /* 1136 1139 * Set the appropriate event bits for the input subsystem 1137 1140 */ 1138 1141 static int elantech_set_input_params(struct psmouse *psmouse) ··· 1167 1138 __clear_bit(EV_REL, dev->evbit); 1168 1139 1169 1140 __set_bit(BTN_LEFT, dev->keybit); 1141 + if (dmi_check_system(elantech_dmi_has_middle_button)) 1142 + __set_bit(BTN_MIDDLE, dev->keybit); 1170 1143 __set_bit(BTN_RIGHT, dev->keybit); 1171 1144 1172 1145 __set_bit(BTN_TOUCH, dev->keybit); ··· 1330 1299 ELANTECH_INT_ATTR(reg_26, 0x26); 1331 1300 ELANTECH_INT_ATTR(debug, 0); 1332 1301 ELANTECH_INT_ATTR(paritycheck, 0); 1302 + ELANTECH_INT_ATTR(crc_enabled, 0); 1333 1303 1334 1304 static struct attribute *elantech_attrs[] = { 1335 1305 &psmouse_attr_reg_07.dattr.attr, ··· 1345 1313 &psmouse_attr_reg_26.dattr.attr, 1346 1314 &psmouse_attr_debug.dattr.attr, 1347 1315 &psmouse_attr_paritycheck.dattr.attr, 1316 + &psmouse_attr_crc_enabled.dattr.attr, 1348 1317 NULL 1349 1318 }; 1350 1319 ··· 1472 1439 } 1473 1440 1474 1441 /* 1442 + * Some hw_version 4 models do not work with crc_disabled 1443 + */ 1444 + static const struct dmi_system_id elantech_dmi_force_crc_enabled[] = { 1445 + #if defined(CONFIG_DMI) && defined(CONFIG_X86) 1446 + { 1447 + /* Fujitsu H730 does not work with crc_enabled == 0 */ 1448 + .matches = { 1449 + DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"), 1450 + DMI_MATCH(DMI_PRODUCT_NAME, "CELSIUS H730"), 1451 + }, 1452 + }, 1453 + #endif 1454 + { } 1455 + }; 1456 + 1457 + /* 1475 1458 * Some hw_version 3 models go into error state when we try to set 1476 1459 * bit 3 and/or bit 1 of r10. 1477 1460 */ ··· 1562 1513 * The signatures of v3 and v4 packets change depending on the 1563 1514 * value of this hardware flag. 1564 1515 */ 1565 - etd->crc_enabled = ((etd->fw_version & 0x4000) == 0x4000); 1516 + etd->crc_enabled = (etd->fw_version & 0x4000) == 0x4000 || 1517 + dmi_check_system(elantech_dmi_force_crc_enabled); 1566 1518 1567 1519 /* Enable real hardware resolution on hw_version 3 ? */ 1568 1520 etd->set_hw_resolution = !dmi_check_system(no_hw_res_dmi_table);
+3 -2
drivers/input/mouse/synaptics.c
··· 135 135 1232, 5710, 1156, 4696 136 136 }, 137 137 { 138 - (const char * const []){"LEN0034", "LEN0036", "LEN2002", 139 - "LEN2004", NULL}, 138 + (const char * const []){"LEN0034", "LEN0036", "LEN0039", 139 + "LEN2002", "LEN2004", NULL}, 140 140 1024, 5112, 2024, 4832 141 141 }, 142 142 { ··· 163 163 "LEN0036", /* T440 */ 164 164 "LEN0037", 165 165 "LEN0038", 166 + "LEN0039", /* T440s */ 166 167 "LEN0041", 167 168 "LEN0042", /* Yoga */ 168 169 "LEN0045",
+4
drivers/md/md.c
··· 5121 5121 printk("md: %s still in use.\n",mdname(mddev)); 5122 5122 if (did_freeze) { 5123 5123 clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery); 5124 + set_bit(MD_RECOVERY_NEEDED, &mddev->recovery); 5124 5125 md_wakeup_thread(mddev->thread); 5125 5126 } 5126 5127 err = -EBUSY; ··· 5136 5135 mddev->ro = 1; 5137 5136 set_disk_ro(mddev->gendisk, 1); 5138 5137 clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery); 5138 + set_bit(MD_RECOVERY_NEEDED, &mddev->recovery); 5139 + md_wakeup_thread(mddev->thread); 5139 5140 sysfs_notify_dirent_safe(mddev->sysfs_state); 5140 5141 err = 0; 5141 5142 } ··· 5181 5178 mutex_unlock(&mddev->open_mutex); 5182 5179 if (did_freeze) { 5183 5180 clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery); 5181 + set_bit(MD_RECOVERY_NEEDED, &mddev->recovery); 5184 5182 md_wakeup_thread(mddev->thread); 5185 5183 } 5186 5184 return -EBUSY;
+2 -1
drivers/net/bonding/bond_main.c
··· 2470 2470 bond_slave_state_change(bond); 2471 2471 if (BOND_MODE(bond) == BOND_MODE_XOR) 2472 2472 bond_update_slave_arr(bond, NULL); 2473 - } else if (do_failover) { 2473 + } 2474 + if (do_failover) { 2474 2475 block_netpoll_tx(); 2475 2476 bond_select_active_slave(bond); 2476 2477 unblock_netpoll_tx();
+2 -2
drivers/net/can/dev.c
··· 110 110 long rate; 111 111 u64 v64; 112 112 113 - /* Use CIA recommended sample points */ 113 + /* Use CiA recommended sample points */ 114 114 if (bt->sample_point) { 115 115 sampl_pt = bt->sample_point; 116 116 } else { ··· 382 382 BUG_ON(idx >= priv->echo_skb_max); 383 383 384 384 if (priv->echo_skb[idx]) { 385 - kfree_skb(priv->echo_skb[idx]); 385 + dev_kfree_skb_any(priv->echo_skb[idx]); 386 386 priv->echo_skb[idx] = NULL; 387 387 } 388 388 }
+1
drivers/net/can/m_can/Kconfig
··· 1 1 config CAN_M_CAN 2 + depends on HAS_IOMEM 2 3 tristate "Bosch M_CAN devices" 3 4 ---help--- 4 5 Say Y here if you want to support for Bosch M_CAN controller.
+166 -53
drivers/net/can/m_can/m_can.c
··· 105 105 MRAM_CFG_NUM, 106 106 }; 107 107 108 + /* Fast Bit Timing & Prescaler Register (FBTP) */ 109 + #define FBTR_FBRP_MASK 0x1f 110 + #define FBTR_FBRP_SHIFT 16 111 + #define FBTR_FTSEG1_SHIFT 8 112 + #define FBTR_FTSEG1_MASK (0xf << FBTR_FTSEG1_SHIFT) 113 + #define FBTR_FTSEG2_SHIFT 4 114 + #define FBTR_FTSEG2_MASK (0x7 << FBTR_FTSEG2_SHIFT) 115 + #define FBTR_FSJW_SHIFT 0 116 + #define FBTR_FSJW_MASK 0x3 117 + 108 118 /* Test Register (TEST) */ 109 119 #define TEST_LBCK BIT(4) 110 120 111 121 /* CC Control Register(CCCR) */ 112 - #define CCCR_TEST BIT(7) 113 - #define CCCR_MON BIT(5) 114 - #define CCCR_CCE BIT(1) 115 - #define CCCR_INIT BIT(0) 122 + #define CCCR_TEST BIT(7) 123 + #define CCCR_CMR_MASK 0x3 124 + #define CCCR_CMR_SHIFT 10 125 + #define CCCR_CMR_CANFD 0x1 126 + #define CCCR_CMR_CANFD_BRS 0x2 127 + #define CCCR_CMR_CAN 0x3 128 + #define CCCR_CME_MASK 0x3 129 + #define CCCR_CME_SHIFT 8 130 + #define CCCR_CME_CAN 0 131 + #define CCCR_CME_CANFD 0x1 132 + #define CCCR_CME_CANFD_BRS 0x2 133 + #define CCCR_TEST BIT(7) 134 + #define CCCR_MON BIT(5) 135 + #define CCCR_CCE BIT(1) 136 + #define CCCR_INIT BIT(0) 137 + #define CCCR_CANFD 0x10 116 138 117 139 /* Bit Timing & Prescaler Register (BTP) */ 118 140 #define BTR_BRP_MASK 0x3ff ··· 226 204 227 205 /* Rx Buffer / FIFO Element Size Configuration (RXESC) */ 228 206 #define M_CAN_RXESC_8BYTES 0x0 207 + #define M_CAN_RXESC_64BYTES 0x777 229 208 230 209 /* Tx Buffer Configuration(TXBC) */ 231 210 #define TXBC_NDTB_OFF 16 ··· 234 211 235 212 /* Tx Buffer Element Size Configuration(TXESC) */ 236 213 #define TXESC_TBDS_8BYTES 0x0 214 + #define TXESC_TBDS_64BYTES 0x7 237 215 238 216 /* Tx Event FIFO Con.guration (TXEFC) */ 239 217 #define TXEFC_EFS_OFF 16 ··· 243 219 /* Message RAM Configuration (in bytes) */ 244 220 #define SIDF_ELEMENT_SIZE 4 245 221 #define XIDF_ELEMENT_SIZE 8 246 - #define RXF0_ELEMENT_SIZE 16 247 - #define RXF1_ELEMENT_SIZE 16 222 + #define RXF0_ELEMENT_SIZE 72 223 + #define 
RXF1_ELEMENT_SIZE 72 248 224 #define RXB_ELEMENT_SIZE 16 249 225 #define TXE_ELEMENT_SIZE 8 250 - #define TXB_ELEMENT_SIZE 16 226 + #define TXB_ELEMENT_SIZE 72 251 227 252 228 /* Message RAM Elements */ 253 229 #define M_CAN_FIFO_ID 0x0 ··· 255 231 #define M_CAN_FIFO_DATA(n) (0x8 + ((n) << 2)) 256 232 257 233 /* Rx Buffer Element */ 234 + /* R0 */ 258 235 #define RX_BUF_ESI BIT(31) 259 236 #define RX_BUF_XTD BIT(30) 260 237 #define RX_BUF_RTR BIT(29) 238 + /* R1 */ 239 + #define RX_BUF_ANMF BIT(31) 240 + #define RX_BUF_EDL BIT(21) 241 + #define RX_BUF_BRS BIT(20) 261 242 262 243 /* Tx Buffer Element */ 244 + /* R0 */ 263 245 #define TX_BUF_XTD BIT(30) 264 246 #define TX_BUF_RTR BIT(29) 265 247 ··· 326 296 if (enable) { 327 297 /* enable m_can configuration */ 328 298 m_can_write(priv, M_CAN_CCCR, cccr | CCCR_INIT); 299 + udelay(5); 329 300 /* CCCR.CCE can only be set/reset while CCCR.INIT = '1' */ 330 301 m_can_write(priv, M_CAN_CCCR, cccr | CCCR_INIT | CCCR_CCE); 331 302 } else { ··· 357 326 m_can_write(priv, M_CAN_ILE, 0x0); 358 327 } 359 328 360 - static void m_can_read_fifo(const struct net_device *dev, struct can_frame *cf, 361 - u32 rxfs) 329 + static void m_can_read_fifo(struct net_device *dev, u32 rxfs) 362 330 { 331 + struct net_device_stats *stats = &dev->stats; 363 332 struct m_can_priv *priv = netdev_priv(dev); 364 - u32 id, fgi; 333 + struct canfd_frame *cf; 334 + struct sk_buff *skb; 335 + u32 id, fgi, dlc; 336 + int i; 365 337 366 338 /* calculate the fifo get index for where to read data */ 367 339 fgi = (rxfs & RXFS_FGI_MASK) >> RXFS_FGI_OFF; 340 + dlc = m_can_fifo_read(priv, fgi, M_CAN_FIFO_DLC); 341 + if (dlc & RX_BUF_EDL) 342 + skb = alloc_canfd_skb(dev, &cf); 343 + else 344 + skb = alloc_can_skb(dev, (struct can_frame **)&cf); 345 + if (!skb) { 346 + stats->rx_dropped++; 347 + return; 348 + } 349 + 350 + if (dlc & RX_BUF_EDL) 351 + cf->len = can_dlc2len((dlc >> 16) & 0x0F); 352 + else 353 + cf->len = get_can_dlc((dlc >> 16) & 0x0F); 354 + 368 
355 id = m_can_fifo_read(priv, fgi, M_CAN_FIFO_ID); 369 356 if (id & RX_BUF_XTD) 370 357 cf->can_id = (id & CAN_EFF_MASK) | CAN_EFF_FLAG; 371 358 else 372 359 cf->can_id = (id >> 18) & CAN_SFF_MASK; 373 360 374 - if (id & RX_BUF_RTR) { 361 + if (id & RX_BUF_ESI) { 362 + cf->flags |= CANFD_ESI; 363 + netdev_dbg(dev, "ESI Error\n"); 364 + } 365 + 366 + if (!(dlc & RX_BUF_EDL) && (id & RX_BUF_RTR)) { 375 367 cf->can_id |= CAN_RTR_FLAG; 376 368 } else { 377 - id = m_can_fifo_read(priv, fgi, M_CAN_FIFO_DLC); 378 - cf->can_dlc = get_can_dlc((id >> 16) & 0x0F); 379 - *(u32 *)(cf->data + 0) = m_can_fifo_read(priv, fgi, 380 - M_CAN_FIFO_DATA(0)); 381 - *(u32 *)(cf->data + 4) = m_can_fifo_read(priv, fgi, 382 - M_CAN_FIFO_DATA(1)); 369 + if (dlc & RX_BUF_BRS) 370 + cf->flags |= CANFD_BRS; 371 + 372 + for (i = 0; i < cf->len; i += 4) 373 + *(u32 *)(cf->data + i) = 374 + m_can_fifo_read(priv, fgi, 375 + M_CAN_FIFO_DATA(i / 4)); 383 376 } 384 377 385 378 /* acknowledge rx fifo 0 */ 386 379 m_can_write(priv, M_CAN_RXF0A, fgi); 380 + 381 + stats->rx_packets++; 382 + stats->rx_bytes += cf->len; 383 + 384 + netif_receive_skb(skb); 387 385 } 388 386 389 387 static int m_can_do_rx_poll(struct net_device *dev, int quota) 390 388 { 391 389 struct m_can_priv *priv = netdev_priv(dev); 392 - struct net_device_stats *stats = &dev->stats; 393 - struct sk_buff *skb; 394 - struct can_frame *frame; 395 390 u32 pkts = 0; 396 391 u32 rxfs; 397 392 ··· 431 374 if (rxfs & RXFS_RFL) 432 375 netdev_warn(dev, "Rx FIFO 0 Message Lost\n"); 433 376 434 - skb = alloc_can_skb(dev, &frame); 435 - if (!skb) { 436 - stats->rx_dropped++; 437 - return pkts; 438 - } 439 - 440 - m_can_read_fifo(dev, frame, rxfs); 441 - 442 - stats->rx_packets++; 443 - stats->rx_bytes += frame->can_dlc; 444 - 445 - netif_receive_skb(skb); 377 + m_can_read_fifo(dev, rxfs); 446 378 447 379 quota--; 448 380 pkts++; ··· 527 481 return 1; 528 482 } 529 483 484 + static int __m_can_get_berr_counter(const struct net_device *dev, 485 + 
struct can_berr_counter *bec) 486 + { 487 + struct m_can_priv *priv = netdev_priv(dev); 488 + unsigned int ecr; 489 + 490 + ecr = m_can_read(priv, M_CAN_ECR); 491 + bec->rxerr = (ecr & ECR_REC_MASK) >> ECR_REC_SHIFT; 492 + bec->txerr = ecr & ECR_TEC_MASK; 493 + 494 + return 0; 495 + } 496 + 530 497 static int m_can_get_berr_counter(const struct net_device *dev, 531 498 struct can_berr_counter *bec) 532 499 { 533 500 struct m_can_priv *priv = netdev_priv(dev); 534 - unsigned int ecr; 535 501 int err; 536 502 537 503 err = clk_prepare_enable(priv->hclk); ··· 556 498 return err; 557 499 } 558 500 559 - ecr = m_can_read(priv, M_CAN_ECR); 560 - bec->rxerr = (ecr & ECR_REC_MASK) >> ECR_REC_SHIFT; 561 - bec->txerr = ecr & ECR_TEC_MASK; 501 + __m_can_get_berr_counter(dev, bec); 562 502 563 503 clk_disable_unprepare(priv->cclk); 564 504 clk_disable_unprepare(priv->hclk); ··· 600 544 if (unlikely(!skb)) 601 545 return 0; 602 546 603 - m_can_get_berr_counter(dev, &bec); 547 + __m_can_get_berr_counter(dev, &bec); 604 548 605 549 switch (new_state) { 606 550 case CAN_STATE_ERROR_ACTIVE: ··· 652 596 653 597 if ((psr & PSR_EP) && 654 598 (priv->can.state != CAN_STATE_ERROR_PASSIVE)) { 655 - netdev_dbg(dev, "entered error warning state\n"); 599 + netdev_dbg(dev, "entered error passive state\n"); 656 600 work_done += m_can_handle_state_change(dev, 657 601 CAN_STATE_ERROR_PASSIVE); 658 602 } 659 603 660 604 if ((psr & PSR_BO) && 661 605 (priv->can.state != CAN_STATE_BUS_OFF)) { 662 - netdev_dbg(dev, "entered error warning state\n"); 606 + netdev_dbg(dev, "entered error bus off state\n"); 663 607 work_done += m_can_handle_state_change(dev, 664 608 CAN_STATE_BUS_OFF); 665 609 } ··· 671 615 { 672 616 if (irqstatus & IR_WDI) 673 617 netdev_err(dev, "Message RAM Watchdog event due to missing READY\n"); 674 - if (irqstatus & IR_BEU) 618 + if (irqstatus & IR_ELO) 675 619 netdev_err(dev, "Error Logging Overflow\n"); 676 620 if (irqstatus & IR_BEU) 677 621 netdev_err(dev, "Bit Error 
Uncorrected\n"); ··· 789 733 .brp_inc = 1, 790 734 }; 791 735 736 + static const struct can_bittiming_const m_can_data_bittiming_const = { 737 + .name = KBUILD_MODNAME, 738 + .tseg1_min = 2, /* Time segment 1 = prop_seg + phase_seg1 */ 739 + .tseg1_max = 16, 740 + .tseg2_min = 1, /* Time segment 2 = phase_seg2 */ 741 + .tseg2_max = 8, 742 + .sjw_max = 4, 743 + .brp_min = 1, 744 + .brp_max = 32, 745 + .brp_inc = 1, 746 + }; 747 + 792 748 static int m_can_set_bittiming(struct net_device *dev) 793 749 { 794 750 struct m_can_priv *priv = netdev_priv(dev); 795 751 const struct can_bittiming *bt = &priv->can.bittiming; 752 + const struct can_bittiming *dbt = &priv->can.data_bittiming; 796 753 u16 brp, sjw, tseg1, tseg2; 797 754 u32 reg_btp; 798 755 ··· 816 747 reg_btp = (brp << BTR_BRP_SHIFT) | (sjw << BTR_SJW_SHIFT) | 817 748 (tseg1 << BTR_TSEG1_SHIFT) | (tseg2 << BTR_TSEG2_SHIFT); 818 749 m_can_write(priv, M_CAN_BTP, reg_btp); 819 - netdev_dbg(dev, "setting BTP 0x%x\n", reg_btp); 750 + 751 + if (priv->can.ctrlmode & CAN_CTRLMODE_FD) { 752 + brp = dbt->brp - 1; 753 + sjw = dbt->sjw - 1; 754 + tseg1 = dbt->prop_seg + dbt->phase_seg1 - 1; 755 + tseg2 = dbt->phase_seg2 - 1; 756 + reg_btp = (brp << FBTR_FBRP_SHIFT) | (sjw << FBTR_FSJW_SHIFT) | 757 + (tseg1 << FBTR_FTSEG1_SHIFT) | 758 + (tseg2 << FBTR_FTSEG2_SHIFT); 759 + m_can_write(priv, M_CAN_FBTP, reg_btp); 760 + } 820 761 821 762 return 0; 822 763 } ··· 846 767 847 768 m_can_config_endisable(priv, true); 848 769 849 - /* RX Buffer/FIFO Element Size 8 bytes data field */ 850 - m_can_write(priv, M_CAN_RXESC, M_CAN_RXESC_8BYTES); 770 + /* RX Buffer/FIFO Element Size 64 bytes data field */ 771 + m_can_write(priv, M_CAN_RXESC, M_CAN_RXESC_64BYTES); 851 772 852 773 /* Accept Non-matching Frames Into FIFO 0 */ 853 774 m_can_write(priv, M_CAN_GFC, 0x0); ··· 856 777 m_can_write(priv, M_CAN_TXBC, (1 << TXBC_NDTB_OFF) | 857 778 priv->mcfg[MRAM_TXB].off); 858 779 859 - /* only support 8 bytes firstly */ 860 - m_can_write(priv, 
M_CAN_TXESC, TXESC_TBDS_8BYTES); 780 + /* support 64 bytes payload */ 781 + m_can_write(priv, M_CAN_TXESC, TXESC_TBDS_64BYTES); 861 782 862 783 m_can_write(priv, M_CAN_TXEFC, (1 << TXEFC_EFS_OFF) | 863 784 priv->mcfg[MRAM_TXE].off); ··· 872 793 RXFC_FWM_1 | priv->mcfg[MRAM_RXF1].off); 873 794 874 795 cccr = m_can_read(priv, M_CAN_CCCR); 875 - cccr &= ~(CCCR_TEST | CCCR_MON); 796 + cccr &= ~(CCCR_TEST | CCCR_MON | (CCCR_CMR_MASK << CCCR_CMR_SHIFT) | 797 + (CCCR_CME_MASK << CCCR_CME_SHIFT)); 876 798 test = m_can_read(priv, M_CAN_TEST); 877 799 test &= ~TEST_LBCK; 878 800 ··· 884 804 cccr |= CCCR_TEST; 885 805 test |= TEST_LBCK; 886 806 } 807 + 808 + if (priv->can.ctrlmode & CAN_CTRLMODE_FD) 809 + cccr |= CCCR_CME_CANFD_BRS << CCCR_CME_SHIFT; 887 810 888 811 m_can_write(priv, M_CAN_CCCR, cccr); 889 812 m_can_write(priv, M_CAN_TEST, test); ··· 952 869 953 870 priv->dev = dev; 954 871 priv->can.bittiming_const = &m_can_bittiming_const; 872 + priv->can.data_bittiming_const = &m_can_data_bittiming_const; 955 873 priv->can.do_set_mode = m_can_set_mode; 956 874 priv->can.do_get_berr_counter = m_can_get_berr_counter; 957 875 priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK | 958 876 CAN_CTRLMODE_LISTENONLY | 959 - CAN_CTRLMODE_BERR_REPORTING; 877 + CAN_CTRLMODE_BERR_REPORTING | 878 + CAN_CTRLMODE_FD; 960 879 961 880 return dev; 962 881 } ··· 1041 956 struct net_device *dev) 1042 957 { 1043 958 struct m_can_priv *priv = netdev_priv(dev); 1044 - struct can_frame *cf = (struct can_frame *)skb->data; 1045 - u32 id; 959 + struct canfd_frame *cf = (struct canfd_frame *)skb->data; 960 + u32 id, cccr; 961 + int i; 1046 962 1047 963 if (can_dropped_invalid_skb(dev, skb)) 1048 964 return NETDEV_TX_OK; ··· 1062 976 1063 977 /* message ram configuration */ 1064 978 m_can_fifo_write(priv, 0, M_CAN_FIFO_ID, id); 1065 - m_can_fifo_write(priv, 0, M_CAN_FIFO_DLC, cf->can_dlc << 16); 1066 - m_can_fifo_write(priv, 0, M_CAN_FIFO_DATA(0), *(u32 *)(cf->data + 0)); 1067 - 
m_can_fifo_write(priv, 0, M_CAN_FIFO_DATA(1), *(u32 *)(cf->data + 4)); 979 + m_can_fifo_write(priv, 0, M_CAN_FIFO_DLC, can_len2dlc(cf->len) << 16); 980 + 981 + for (i = 0; i < cf->len; i += 4) 982 + m_can_fifo_write(priv, 0, M_CAN_FIFO_DATA(i / 4), 983 + *(u32 *)(cf->data + i)); 984 + 1068 985 can_put_echo_skb(skb, dev, 0); 986 + 987 + if (priv->can.ctrlmode & CAN_CTRLMODE_FD) { 988 + cccr = m_can_read(priv, M_CAN_CCCR); 989 + cccr &= ~(CCCR_CMR_MASK << CCCR_CMR_SHIFT); 990 + if (can_is_canfd_skb(skb)) { 991 + if (cf->flags & CANFD_BRS) 992 + cccr |= CCCR_CMR_CANFD_BRS << CCCR_CMR_SHIFT; 993 + else 994 + cccr |= CCCR_CMR_CANFD << CCCR_CMR_SHIFT; 995 + } else { 996 + cccr |= CCCR_CMR_CAN << CCCR_CMR_SHIFT; 997 + } 998 + m_can_write(priv, M_CAN_CCCR, cccr); 999 + } 1069 1000 1070 1001 /* enable first TX buffer to start transfer */ 1071 1002 m_can_write(priv, M_CAN_TXBTIE, 0x1); ··· 1095 992 .ndo_open = m_can_open, 1096 993 .ndo_stop = m_can_close, 1097 994 .ndo_start_xmit = m_can_start_xmit, 995 + .ndo_change_mtu = can_change_mtu, 1098 996 }; 1099 997 1100 998 static int register_m_can_dev(struct net_device *dev) ··· 1113 1009 struct resource *res; 1114 1010 void __iomem *addr; 1115 1011 u32 out_val[MRAM_CFG_LEN]; 1116 - int ret; 1012 + int i, start, end, ret; 1117 1013 1118 1014 /* message ram could be shared */ 1119 1015 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "message_ram"); ··· 1163 1059 priv->mcfg[MRAM_RXB].off, priv->mcfg[MRAM_RXB].num, 1164 1060 priv->mcfg[MRAM_TXE].off, priv->mcfg[MRAM_TXE].num, 1165 1061 priv->mcfg[MRAM_TXB].off, priv->mcfg[MRAM_TXB].num); 1062 + 1063 + /* initialize the entire Message RAM in use to avoid possible 1064 + * ECC/parity checksum errors when reading an uninitialized buffer 1065 + */ 1066 + start = priv->mcfg[MRAM_SIDF].off; 1067 + end = priv->mcfg[MRAM_TXB].off + 1068 + priv->mcfg[MRAM_TXB].num * TXB_ELEMENT_SIZE; 1069 + for (i = start; i < end; i += 4) 1070 + writel(0x0, priv->mram_base + i); 1166 1071 1167 
1072 return 0; 1168 1073 }
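The m_can hunks above switch the TX path from a fixed two-word copy to a loop over `cf->len` and store `can_len2dlc(cf->len)` in the DLC field, since CAN FD payload lengths above 8 bytes are encoded in the 4-bit DLC non-linearly. A minimal sketch of that length/DLC mapping (helper names here are illustrative, modeled on the SocketCAN helpers the driver calls, not the kernel's internals):

```c
#include <assert.h>

/* CAN FD DLC encoding table: DLC 0..8 map 1:1, DLC 9..15 map to the
 * extended payload sizes. */
static const unsigned char dlc2len[] = {
	0, 1, 2, 3, 4, 5, 6, 7, 8,	/* DLC 0..8: classic CAN */
	12, 16, 20, 24, 32, 48, 64	/* DLC 9..15: CAN FD only */
};

/* payload length for a given 4-bit DLC */
static unsigned char sketch_dlc2len(unsigned char dlc)
{
	return dlc2len[dlc & 0x0f];
}

/* smallest DLC whose payload length covers 'len' bytes */
static unsigned char sketch_len2dlc(unsigned char len)
{
	unsigned char dlc;

	for (dlc = 0; dlc < 15; dlc++)
		if (dlc2len[dlc] >= len)
			return dlc;
	return 15;	/* anything up to 64 bytes */
}
```

Because the mapping rounds up (e.g. a 13-byte frame must use the 16-byte slot), the TX loop above copies `cf->len` bytes while the hardware element is sized by the DLC.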
+1
drivers/net/can/rcar_can.c
··· 628 628 .ndo_open = rcar_can_open, 629 629 .ndo_stop = rcar_can_close, 630 630 .ndo_start_xmit = rcar_can_start_xmit, 631 + .ndo_change_mtu = can_change_mtu, 631 632 }; 632 633 633 634 static void rcar_can_rx_pkt(struct rcar_can_priv *priv)
+1 -4
drivers/net/can/sja1000/kvaser_pci.c
··· 214 214 struct net_device *dev; 215 215 struct sja1000_priv *priv; 216 216 struct kvaser_pci *board; 217 - int err, init_step; 217 + int err; 218 218 219 219 dev = alloc_sja1000dev(sizeof(struct kvaser_pci)); 220 220 if (dev == NULL) ··· 235 235 if (channel == 0) { 236 236 board->xilinx_ver = 237 237 ioread8(board->res_addr + XILINX_VERINT) >> 4; 238 - init_step = 2; 239 238 240 239 /* Assert PTADR# - we're in passive mode so the other bits are 241 240 not important */ ··· 262 263 263 264 priv->irq_flags = IRQF_SHARED; 264 265 dev->irq = pdev->irq; 265 - 266 - init_step = 4; 267 266 268 267 dev_info(&pdev->dev, "reg_base=%p conf_addr=%p irq=%d\n", 269 268 priv->reg_base, board->conf_addr, dev->irq);
+1 -2
drivers/net/can/usb/ems_usb.c
··· 434 434 if (urb->actual_length > CPC_HEADER_SIZE) { 435 435 struct ems_cpc_msg *msg; 436 436 u8 *ibuf = urb->transfer_buffer; 437 - u8 msg_count, again, start; 437 + u8 msg_count, start; 438 438 439 439 msg_count = ibuf[0] & ~0x80; 440 - again = ibuf[0] & 0x80; 441 440 442 441 start = CPC_HEADER_SIZE; 443 442
+1 -2
drivers/net/can/usb/esd_usb2.c
··· 464 464 { 465 465 struct esd_tx_urb_context *context = urb->context; 466 466 struct esd_usb2_net_priv *priv; 467 - struct esd_usb2 *dev; 468 467 struct net_device *netdev; 469 468 size_t size = sizeof(struct esd_usb2_msg); 470 469 ··· 471 472 472 473 priv = context->priv; 473 474 netdev = priv->netdev; 474 - dev = priv->usb2; 475 475 476 476 /* free up our allocated buffer */ 477 477 usb_free_coherent(urb->dev, size, ··· 1141 1143 } 1142 1144 } 1143 1145 unlink_all_urbs(dev); 1146 + kfree(dev); 1144 1147 } 1145 1148 } 1146 1149
+1
drivers/net/can/usb/gs_usb.c
··· 718 718 .ndo_open = gs_can_open, 719 719 .ndo_stop = gs_can_close, 720 720 .ndo_start_xmit = gs_can_start_xmit, 721 + .ndo_change_mtu = can_change_mtu, 721 722 }; 722 723 723 724 static struct gs_can *gs_make_candev(unsigned int channel, struct usb_interface *intf)
+3 -1
drivers/net/can/xilinx_can.c
··· 300 300 static int xcan_chip_start(struct net_device *ndev) 301 301 { 302 302 struct xcan_priv *priv = netdev_priv(ndev); 303 - u32 err, reg_msr, reg_sr_mask; 303 + u32 reg_msr, reg_sr_mask; 304 + int err; 304 305 unsigned long timeout; 305 306 306 307 /* Check if it is in reset mode */ ··· 962 961 .ndo_open = xcan_open, 963 962 .ndo_stop = xcan_close, 964 963 .ndo_start_xmit = xcan_start_xmit, 964 + .ndo_change_mtu = can_change_mtu, 965 965 }; 966 966 967 967 /**
+1 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c
··· 1082 1082 pgid = be32_to_cpu(pcmd.u.dcb.pgid.pgid); 1083 1083 1084 1084 for (i = 0; i < CXGB4_MAX_PRIORITY; i++) 1085 - pg->prio_pg[i] = (pgid >> (i * 4)) & 0xF; 1085 + pg->prio_pg[7 - i] = (pgid >> (i * 4)) & 0xF; 1086 1086 1087 1087 INIT_PORT_DCB_READ_PEER_CMD(pcmd, pi->port_id); 1088 1088 pcmd.u.dcb.pgrate.type = FW_PORT_DCB_TYPE_PGRATE;
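The cxgb4 fix stores nibble i of the 32-bit `pgid` word into `prio_pg[7 - i]`, i.e. priority 0 lives in the most significant nibble of the firmware word. A small sketch of that unpacking (function name is illustrative only):

```c
#include <assert.h>
#include <stdint.h>

/* Unpack a 32-bit priority-group ID word, one 4-bit entry per priority.
 * Priority 0 occupies the most significant nibble, so nibble i of the
 * word maps to prio_pg[7 - i] -- the reversal the fix above introduces. */
static void sketch_unpack_pgid(uint32_t pgid, uint8_t prio_pg[8])
{
	int i;

	for (i = 0; i < 8; i++)
		prio_pg[7 - i] = (pgid >> (i * 4)) & 0xF;
}
```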
+6
drivers/net/ethernet/emulex/benet/be_main.c
··· 4423 4423 "Disabled VxLAN offloads for UDP port %d\n", 4424 4424 be16_to_cpu(port)); 4425 4425 } 4426 + 4427 + static bool be_gso_check(struct sk_buff *skb, struct net_device *dev) 4428 + { 4429 + return vxlan_gso_check(skb); 4430 + } 4426 4431 #endif 4427 4432 4428 4433 static const struct net_device_ops be_netdev_ops = { ··· 4457 4452 #ifdef CONFIG_BE2NET_VXLAN 4458 4453 .ndo_add_vxlan_port = be_add_vxlan_port, 4459 4454 .ndo_del_vxlan_port = be_del_vxlan_port, 4455 + .ndo_gso_check = be_gso_check, 4460 4456 #endif 4461 4457 }; 4462 4458
+12 -1
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 1693 1693 mlx4_set_stats_bitmap(mdev->dev, &priv->stats_bitmap); 1694 1694 1695 1695 #ifdef CONFIG_MLX4_EN_VXLAN 1696 - if (priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_VXLAN_OFFLOADS) 1696 + if (priv->mdev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) 1697 1697 vxlan_get_rx_port(dev); 1698 1698 #endif 1699 1699 priv->port_up = true; ··· 2365 2365 2366 2366 queue_work(priv->mdev->workqueue, &priv->vxlan_del_task); 2367 2367 } 2368 + 2369 + static bool mlx4_en_gso_check(struct sk_buff *skb, struct net_device *dev) 2370 + { 2371 + return vxlan_gso_check(skb); 2372 + } 2368 2373 #endif 2369 2374 2370 2375 static const struct net_device_ops mlx4_netdev_ops = { ··· 2401 2396 #ifdef CONFIG_MLX4_EN_VXLAN 2402 2397 .ndo_add_vxlan_port = mlx4_en_add_vxlan_port, 2403 2398 .ndo_del_vxlan_port = mlx4_en_del_vxlan_port, 2399 + .ndo_gso_check = mlx4_en_gso_check, 2404 2400 #endif 2405 2401 }; 2406 2402 ··· 2432 2426 .ndo_rx_flow_steer = mlx4_en_filter_rfs, 2433 2427 #endif 2434 2428 .ndo_get_phys_port_id = mlx4_en_get_phys_port_id, 2429 + #ifdef CONFIG_MLX4_EN_VXLAN 2430 + .ndo_add_vxlan_port = mlx4_en_add_vxlan_port, 2431 + .ndo_del_vxlan_port = mlx4_en_del_vxlan_port, 2432 + .ndo_gso_check = mlx4_en_gso_check, 2433 + #endif 2435 2434 }; 2436 2435 2437 2436 int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
+6
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
··· 503 503 504 504 adapter->flags |= QLCNIC_DEL_VXLAN_PORT; 505 505 } 506 + 507 + static bool qlcnic_gso_check(struct sk_buff *skb, struct net_device *dev) 508 + { 509 + return vxlan_gso_check(skb); 510 + } 506 511 #endif 507 512 508 513 static const struct net_device_ops qlcnic_netdev_ops = { ··· 531 526 #ifdef CONFIG_QLCNIC_VXLAN 532 527 .ndo_add_vxlan_port = qlcnic_add_vxlan_port, 533 528 .ndo_del_vxlan_port = qlcnic_del_vxlan_port, 529 + .ndo_gso_check = qlcnic_gso_check, 534 530 #endif 535 531 #ifdef CONFIG_NET_POLL_CONTROLLER 536 532 .ndo_poll_controller = qlcnic_poll_controller,
+3 -3
drivers/net/ethernet/ti/cpsw.c
··· 129 129 #define CPSW_VLAN_AWARE BIT(1) 130 130 #define CPSW_ALE_VLAN_AWARE 1 131 131 132 - #define CPSW_FIFO_NORMAL_MODE (0 << 15) 133 - #define CPSW_FIFO_DUAL_MAC_MODE (1 << 15) 134 - #define CPSW_FIFO_RATE_LIMIT_MODE (2 << 15) 132 + #define CPSW_FIFO_NORMAL_MODE (0 << 16) 133 + #define CPSW_FIFO_DUAL_MAC_MODE (1 << 16) 134 + #define CPSW_FIFO_RATE_LIMIT_MODE (2 << 16) 135 135 136 136 #define CPSW_INTPACEEN (0x3f << 16) 137 137 #define CPSW_INTPRESCALE_MASK (0x7FF << 0)
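The cpsw change moves the FIFO-mode field from bit 15 to bits 17:16. With the old shift, the rate-limit encoding `(2 << 15)` aliases the dual-MAC bit `(1 << 16)`, so requesting rate-limit mode actually programmed dual-MAC mode. A quick check of that aliasing (the `OLD_` names are ours, kept only to show the bug):

```c
#include <assert.h>

/* Corrected cpsw FIFO mode field (bits 17:16), as in the hunk above */
#define CPSW_FIFO_NORMAL_MODE		(0 << 16)
#define CPSW_FIFO_DUAL_MAC_MODE		(1 << 16)
#define CPSW_FIFO_RATE_LIMIT_MODE	(2 << 16)

/* The buggy shift-by-15 encodings, reproduced to demonstrate the overlap */
#define OLD_DUAL_MAC_MODE		(1 << 15)
#define OLD_RATE_LIMIT_MODE		(2 << 15)
```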
+3 -1
drivers/net/ppp/pptp.c
··· 506 506 int len = sizeof(struct sockaddr_pppox); 507 507 struct sockaddr_pppox sp; 508 508 509 - sp.sa_family = AF_PPPOX; 509 + memset(&sp.sa_addr, 0, sizeof(sp.sa_addr)); 510 + 511 + sp.sa_family = AF_PPPOX; 510 512 sp.sa_protocol = PX_PROTO_PPTP; 511 513 sp.sa_addr.pptp = pppox_sk(sock->sk)->proto.pptp.src_addr; 512 514
+1
drivers/net/usb/qmi_wwan.c
··· 780 780 {QMI_FIXED_INTF(0x413c, 0x81a4, 8)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */ 781 781 {QMI_FIXED_INTF(0x413c, 0x81a8, 8)}, /* Dell Wireless 5808 Gobi(TM) 4G LTE Mobile Broadband Card */ 782 782 {QMI_FIXED_INTF(0x413c, 0x81a9, 8)}, /* Dell Wireless 5808e Gobi(TM) 4G LTE Mobile Broadband Card */ 783 + {QMI_FIXED_INTF(0x03f0, 0x581d, 4)}, /* HP lt4112 LTE/HSPA+ Gobi 4G Module (Huawei me906e) */ 783 784 784 785 /* 4. Gobi 1000 devices */ 785 786 {QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */
+37
drivers/net/virtio_net.c
··· 1673 1673 }; 1674 1674 #endif 1675 1675 1676 + static bool virtnet_fail_on_feature(struct virtio_device *vdev, 1677 + unsigned int fbit, 1678 + const char *fname, const char *dname) 1679 + { 1680 + if (!virtio_has_feature(vdev, fbit)) 1681 + return false; 1682 + 1683 + dev_err(&vdev->dev, "device advertises feature %s but not %s", 1684 + fname, dname); 1685 + 1686 + return true; 1687 + } 1688 + 1689 + #define VIRTNET_FAIL_ON(vdev, fbit, dbit) \ 1690 + virtnet_fail_on_feature(vdev, fbit, #fbit, dbit) 1691 + 1692 + static bool virtnet_validate_features(struct virtio_device *vdev) 1693 + { 1694 + if (!virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ) && 1695 + (VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_RX, 1696 + "VIRTIO_NET_F_CTRL_VQ") || 1697 + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_VLAN, 1698 + "VIRTIO_NET_F_CTRL_VQ") || 1699 + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_GUEST_ANNOUNCE, 1700 + "VIRTIO_NET_F_CTRL_VQ") || 1701 + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_MQ, "VIRTIO_NET_F_CTRL_VQ") || 1702 + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_MAC_ADDR, 1703 + "VIRTIO_NET_F_CTRL_VQ"))) { 1704 + return false; 1705 + } 1706 + 1707 + return true; 1708 + } 1709 + 1676 1710 static int virtnet_probe(struct virtio_device *vdev) 1677 1711 { 1678 1712 int i, err; 1679 1713 struct net_device *dev; 1680 1714 struct virtnet_info *vi; 1681 1715 u16 max_queue_pairs; 1716 + 1717 + if (!virtnet_validate_features(vdev)) 1718 + return -EINVAL; 1682 1719 1683 1720 /* Find if host supports multiqueue virtio_net device */ 1684 1721 err = virtio_cread_feature(vdev, VIRTIO_NET_F_MQ,
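`virtnet_validate_features()` above rejects devices that advertise control-queue-dependent features (CTRL_RX, CTRL_VLAN, MQ, ...) without VIRTIO_NET_F_CTRL_VQ itself. The dependency check reduces to a bitmask rule, sketched here with illustrative feature bits rather than the real VIRTIO_NET_F_* values:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative feature bits (not the real VIRTIO_NET_F_* positions) */
#define F_CTRL_VQ	(1u << 0)
#define F_CTRL_RX	(1u << 1)
#define F_CTRL_VLAN	(1u << 2)
#define F_MQ		(1u << 3)

/* A feature set is consistent when no CTRL_VQ-dependent bit is set
 * while CTRL_VQ itself is absent -- the rule the hunk above enforces
 * before probe continues. */
static bool sketch_validate(uint32_t features)
{
	uint32_t need_ctrl_vq = F_CTRL_RX | F_CTRL_VLAN | F_MQ;

	if (!(features & F_CTRL_VQ) && (features & need_ctrl_vq))
		return false;
	return true;
}
```

Failing early with -EINVAL in probe, as the hunk does, is preferable to negotiating an inconsistent feature set and faulting later when the missing control queue is used.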
-6
drivers/net/vxlan.c
··· 67 67 68 68 #define VXLAN_FLAGS 0x08000000 /* struct vxlanhdr.vx_flags required value. */ 69 69 70 - /* VXLAN protocol header */ 71 - struct vxlanhdr { 72 - __be32 vx_flags; 73 - __be32 vx_vni; 74 - }; 75 - 76 70 /* UDP port for VXLAN traffic. 77 71 * The IANA assigned port is 4789, but the Linux default is 8472 78 72 * for compatibility with early adopters.
+6 -3
drivers/net/wireless/ath/ath9k/main.c
··· 996 996 struct ath_vif *avp; 997 997 998 998 /* 999 - * Pick the MAC address of the first interface as the new hardware 1000 - * MAC address. The hardware will use it together with the BSSID mask 1001 - * when matching addresses. 999 + * The hardware will use primary station addr together with the 1000 + * BSSID mask when matching addresses. 1002 1001 */ 1003 1002 memset(iter_data, 0, sizeof(*iter_data)); 1004 1003 memset(&iter_data->mask, 0xff, ETH_ALEN); ··· 1227 1228 list_add_tail(&avp->list, &avp->chanctx->vifs); 1228 1229 } 1229 1230 1231 + ath9k_calculate_summary_state(sc, avp->chanctx); 1232 + 1230 1233 ath9k_assign_hw_queues(hw, vif); 1231 1234 1232 1235 an->sc = sc; ··· 1297 1296 ath9k_beacon_remove_slot(sc, vif); 1298 1297 1299 1298 ath_tx_node_cleanup(sc, &avp->mcast_node); 1299 + 1300 + ath9k_calculate_summary_state(sc, avp->chanctx); 1300 1301 1301 1302 mutex_unlock(&sc->mutex); 1302 1303 }
+2 -2
drivers/net/wireless/brcm80211/brcmfmac/of.c
··· 40 40 return; 41 41 42 42 irq = irq_of_parse_and_map(np, 0); 43 - if (irq < 0) { 44 - brcmf_err("interrupt could not be mapped: err=%d\n", irq); 43 + if (!irq) { 44 + brcmf_err("interrupt could not be mapped\n"); 45 45 devm_kfree(dev, sdiodev->pdata); 46 46 return; 47 47 }
+1 -1
drivers/net/wireless/brcm80211/brcmfmac/pcie.c
··· 19 19 #include <linux/pci.h> 20 20 #include <linux/vmalloc.h> 21 21 #include <linux/delay.h> 22 - #include <linux/unaligned/access_ok.h> 23 22 #include <linux/interrupt.h> 24 23 #include <linux/bcma/bcma.h> 25 24 #include <linux/sched.h> 25 + #include <asm/unaligned.h> 26 26 27 27 #include <soc.h> 28 28 #include <chipcommon.h>
+4 -2
drivers/net/wireless/brcm80211/brcmfmac/usb.c
··· 738 738 goto finalize; 739 739 } 740 740 741 - if (!brcmf_usb_ioctl_resp_wait(devinfo)) 741 + if (!brcmf_usb_ioctl_resp_wait(devinfo)) { 742 + usb_kill_urb(devinfo->ctl_urb); 742 743 ret = -ETIMEDOUT; 743 - else 744 + } else { 744 745 memcpy(buffer, tmpbuf, buflen); 746 + } 745 747 746 748 finalize: 747 749 kfree(tmpbuf);
+16 -3
drivers/of/address.c
··· 450 450 return NULL; 451 451 } 452 452 453 + static int of_empty_ranges_quirk(void) 454 + { 455 + if (IS_ENABLED(CONFIG_PPC)) { 456 + /* To save cycles, we cache the result */ 457 + static int quirk_state = -1; 458 + 459 + if (quirk_state < 0) 460 + quirk_state = 461 + of_machine_is_compatible("Power Macintosh") || 462 + of_machine_is_compatible("MacRISC"); 463 + return quirk_state; 464 + } 465 + return false; 466 + } 467 + 453 468 static int of_translate_one(struct device_node *parent, struct of_bus *bus, 454 469 struct of_bus *pbus, __be32 *addr, 455 470 int na, int ns, int pna, const char *rprop) ··· 490 475 * This code is only enabled on powerpc. --gcl 491 476 */ 492 477 ranges = of_get_property(parent, rprop, &rlen); 493 - #if !defined(CONFIG_PPC) 494 - if (ranges == NULL) { 478 + if (ranges == NULL && !of_empty_ranges_quirk()) { 495 479 pr_err("OF: no ranges; cannot translate\n"); 496 480 return 1; 497 481 } 498 - #endif /* !defined(CONFIG_PPC) */ 499 482 if (ranges == NULL || rlen == 0) { 500 483 offset = of_read_number(addr, na); 501 484 memset(addr, 0, pna * 4);
+1 -1
drivers/of/dynamic.c
··· 247 247 * @allocflags: Allocation flags (typically pass GFP_KERNEL) 248 248 * 249 249 * Copy a property by dynamically allocating the memory of both the 250 - * property stucture and the property name & contents. The property's 250 + * property structure and the property name & contents. The property's 251 251 * flags have the OF_DYNAMIC bit set so that we can differentiate between 252 252 * dynamically allocated properties and not. 253 253 * Returns the newly allocated property or NULL on out of memory error.
+1 -1
drivers/of/fdt.c
··· 773 773 if (offset < 0) 774 774 return -ENODEV; 775 775 776 - while (match->compatible) { 776 + while (match->compatible[0]) { 777 777 unsigned long addr; 778 778 if (fdt_node_check_compatible(fdt, offset, match->compatible)) { 779 779 match++;
+8 -3
drivers/of/selftest.c
··· 896 896 return; 897 897 } 898 898 899 - while (last_node_index >= 0) { 899 + while (last_node_index-- > 0) { 900 900 if (nodes[last_node_index]) { 901 901 np = of_find_node_by_path(nodes[last_node_index]->full_name); 902 - if (strcmp(np->full_name, "/aliases") != 0) { 902 + if (np == nodes[last_node_index]) { 903 + if (of_aliases == np) { 904 + of_node_put(of_aliases); 905 + of_aliases = NULL; 906 + } 903 907 detach_node_and_children(np); 904 908 } else { 905 909 for_each_property_of_node(np, prop) { ··· 912 908 } 913 909 } 914 910 } 915 - last_node_index--; 916 911 } 917 912 } 918 913 ··· 924 921 res = selftest_data_add(); 925 922 if (res) 926 923 return res; 924 + if (!of_aliases) 925 + of_aliases = of_find_node_by_path("/aliases"); 927 926 928 927 np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a"); 929 928 if (!np) {
+1 -1
drivers/pci/access.c
··· 444 444 return pcie_caps_reg(dev) & PCI_EXP_FLAGS_VERS; 445 445 } 446 446 447 - static inline bool pcie_cap_has_lnkctl(const struct pci_dev *dev) 447 + bool pcie_cap_has_lnkctl(const struct pci_dev *dev) 448 448 { 449 449 int type = pci_pcie_type(dev); 450 450
+6 -1
drivers/pci/host/pci-xgene.c
··· 631 631 if (ret) 632 632 return ret; 633 633 634 - bus = pci_scan_root_bus(&pdev->dev, 0, &xgene_pcie_ops, port, &res); 634 + bus = pci_create_root_bus(&pdev->dev, 0, 635 + &xgene_pcie_ops, port, &res); 635 636 if (!bus) 636 637 return -ENOMEM; 638 + 639 + pci_scan_child_bus(bus); 640 + pci_assign_unassigned_bus_resources(bus); 641 + pci_bus_add_devices(bus); 637 642 638 643 platform_set_drvdata(pdev, port); 639 644 return 0;
+2
drivers/pci/pci.h
··· 6 6 7 7 extern const unsigned char pcie_link_speed[]; 8 8 9 + bool pcie_cap_has_lnkctl(const struct pci_dev *dev); 10 + 9 11 /* Functions internal to the PCI core code */ 10 12 11 13 int pci_create_sysfs_dev_files(struct pci_dev *pdev);
+17 -13
drivers/pci/probe.c
··· 407 407 { 408 408 struct pci_dev *dev = child->self; 409 409 u16 mem_base_lo, mem_limit_lo; 410 - unsigned long base, limit; 410 + u64 base64, limit64; 411 + dma_addr_t base, limit; 411 412 struct pci_bus_region region; 412 413 struct resource *res; 413 414 414 415 res = child->resource[2]; 415 416 pci_read_config_word(dev, PCI_PREF_MEMORY_BASE, &mem_base_lo); 416 417 pci_read_config_word(dev, PCI_PREF_MEMORY_LIMIT, &mem_limit_lo); 417 - base = ((unsigned long) mem_base_lo & PCI_PREF_RANGE_MASK) << 16; 418 - limit = ((unsigned long) mem_limit_lo & PCI_PREF_RANGE_MASK) << 16; 418 + base64 = (mem_base_lo & PCI_PREF_RANGE_MASK) << 16; 419 + limit64 = (mem_limit_lo & PCI_PREF_RANGE_MASK) << 16; 419 420 420 421 if ((mem_base_lo & PCI_PREF_RANGE_TYPE_MASK) == PCI_PREF_RANGE_TYPE_64) { 421 422 u32 mem_base_hi, mem_limit_hi; ··· 430 429 * this, just assume they are not being used. 431 430 */ 432 431 if (mem_base_hi <= mem_limit_hi) { 433 - #if BITS_PER_LONG == 64 434 - base |= ((unsigned long) mem_base_hi) << 32; 435 - limit |= ((unsigned long) mem_limit_hi) << 32; 436 - #else 437 - if (mem_base_hi || mem_limit_hi) { 438 - dev_err(&dev->dev, "can't handle 64-bit address space for bridge\n"); 439 - return; 440 - } 441 - #endif 432 + base64 |= (u64) mem_base_hi << 32; 433 + limit64 |= (u64) mem_limit_hi << 32; 442 434 } 443 435 } 436 + 437 + base = (dma_addr_t) base64; 438 + limit = (dma_addr_t) limit64; 439 + 440 + if (base != base64) { 441 + dev_err(&dev->dev, "can't handle bridge window above 4GB (bus address %#010llx)\n", 442 + (unsigned long long) base64); 443 + return; 444 + } 445 + 444 446 if (base <= limit) { 445 447 res->flags = (mem_base_lo & PCI_PREF_RANGE_TYPE_MASK) | 446 448 IORESOURCE_MEM | IORESOURCE_PREFETCH; ··· 1327 1323 ~hpp->pci_exp_devctl_and, hpp->pci_exp_devctl_or); 1328 1324 1329 1325 /* Initialize Link Control Register */ 1330 - if (dev->subordinate) 1326 + if (pcie_cap_has_lnkctl(dev)) 1331 1327 pcie_capability_clear_and_set_word(dev, 
PCI_EXP_LNKCTL, 1332 1328 ~hpp->pci_exp_lnkctl_and, hpp->pci_exp_lnkctl_or); 1333 1329
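The probe.c hunk assembles the bridge's prefetchable window from the 16-bit base/limit registers plus the optional upper-32-bit registers into a u64, and only then checks whether the value survives the cast to dma_addr_t, instead of refusing any 64-bit window whenever BITS_PER_LONG is 32. The address assembly can be sketched as (function name and the literal mask value are illustrative; the low nibble of the 16-bit register holds the type bits):

```c
#include <assert.h>
#include <stdint.h>

#define SKETCH_PREF_RANGE_MASK	0xfff0	/* keep bits 15:4 of the register */

/* Combine the 16-bit prefetchable base register and its upper-32-bits
 * companion into a full 64-bit bus address, as the hunk above does
 * before the dma_addr_t truncation check. */
static uint64_t sketch_pref_base(uint16_t mem_base_lo, uint32_t mem_base_hi)
{
	uint64_t base = (uint64_t)(mem_base_lo & SKETCH_PREF_RANGE_MASK) << 16;

	base |= (uint64_t)mem_base_hi << 32;
	return base;
}
```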
+1
drivers/platform/x86/Kconfig
··· 202 202 config HP_ACCEL 203 203 tristate "HP laptop accelerometer" 204 204 depends on INPUT && ACPI 205 + depends on SERIO_I8042 205 206 select SENSORS_LIS3LV02D 206 207 select NEW_LEDS 207 208 select LEDS_CLASS
+44
drivers/platform/x86/hp_accel.c
··· 37 37 #include <linux/leds.h> 38 38 #include <linux/atomic.h> 39 39 #include <linux/acpi.h> 40 + #include <linux/i8042.h> 41 + #include <linux/serio.h> 40 42 #include "../../misc/lis3lv02d/lis3lv02d.h" 41 43 42 44 #define DRIVER_NAME "hp_accel" ··· 74 72 } 75 73 76 74 /* HP-specific accelerometer driver ------------------------------------ */ 75 + 76 + /* e0 25, e0 26, e0 27, e0 28 are scan codes that the accelerometer with acpi id 77 + * HPQ6000 sends through the keyboard bus */ 78 + #define ACCEL_1 0x25 79 + #define ACCEL_2 0x26 80 + #define ACCEL_3 0x27 81 + #define ACCEL_4 0x28 77 82 78 83 /* For automatic insertion of the module */ 79 84 static const struct acpi_device_id lis3lv02d_device_ids[] = { ··· 303 294 printk(KERN_DEBUG DRIVER_NAME ": Error getting resources\n"); 304 295 } 305 296 297 + static bool hp_accel_i8042_filter(unsigned char data, unsigned char str, 298 + struct serio *port) 299 + { 300 + static bool extended; 301 + 302 + if (str & I8042_STR_AUXDATA) 303 + return false; 304 + 305 + if (data == 0xe0) { 306 + extended = true; 307 + return true; 308 + } else if (unlikely(extended)) { 309 + extended = false; 310 + 311 + switch (data) { 312 + case ACCEL_1: 313 + case ACCEL_2: 314 + case ACCEL_3: 315 + case ACCEL_4: 316 + return true; 317 + default: 318 + serio_interrupt(port, 0xe0, 0); 319 + return false; 320 + } 321 + } 322 + 323 + return false; 324 + } 325 + 306 326 static int lis3lv02d_add(struct acpi_device *device) 307 327 { 308 328 int ret; ··· 364 326 if (ret) 365 327 return ret; 366 328 329 + /* filter to remove HPQ6000 accelerometer data 330 + * from keyboard bus stream */ 331 + if (strstr(dev_name(&device->dev), "HPQ6000")) 332 + i8042_install_filter(hp_accel_i8042_filter); 333 + 367 334 INIT_WORK(&hpled_led.work, delayed_set_status_worker); 368 335 ret = led_classdev_register(NULL, &hpled_led.led_classdev); 369 336 if (ret) { ··· 386 343 if (!device) 387 344 return -EINVAL; 388 345 346 + i8042_remove_filter(hp_accel_i8042_filter); 
389 347 lis3lv02d_joystick_disable(&lis3_dev); 390 348 lis3lv02d_poweroff(&lis3_dev); 391 349
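The new i8042 filter above swallows the accelerometer's `e0 25`..`e0 28` scan-code pairs and reinjects the held-back `e0` prefix when the second byte turns out to belong to a normal extended key. A minimal model of that stateful prefix handling (names illustrative; the reinject counter stands in for the `serio_interrupt()` call):

```c
#include <assert.h>
#include <stdbool.h>

/* Scan codes the HPQ6000 accelerometer emits after an 0xe0 prefix */
#define ACCEL_1 0x25
#define ACCEL_2 0x26
#define ACCEL_3 0x27
#define ACCEL_4 0x28

static int reinjected;	/* counts 0xe0 bytes handed back to the keyboard */

/* Model of hp_accel_i8042_filter(): returns true to swallow a byte.
 * An 0xe0 is always held back; the following byte decides whether the
 * pair is dropped (accelerometer) or the 0xe0 is reinjected (real key). */
static bool sketch_filter(unsigned char data)
{
	static bool extended;

	if (data == 0xe0) {
		extended = true;
		return true;
	}
	if (extended) {
		extended = false;
		switch (data) {
		case ACCEL_1: case ACCEL_2: case ACCEL_3: case ACCEL_4:
			return true;		/* drop the whole pair */
		default:
			reinjected++;		/* would call serio_interrupt() */
			return false;
		}
	}
	return false;
}
```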
+9 -8
drivers/power/ab8500_fg.c
··· 25 25 #include <linux/slab.h> 26 26 #include <linux/delay.h> 27 27 #include <linux/time.h> 28 + #include <linux/time64.h> 28 29 #include <linux/of.h> 29 30 #include <linux/completion.h> 30 31 #include <linux/mfd/core.h> ··· 109 108 struct ab8500_fg_avg_cap { 110 109 int avg; 111 110 int samples[NBR_AVG_SAMPLES]; 112 - __kernel_time_t time_stamps[NBR_AVG_SAMPLES]; 111 + time64_t time_stamps[NBR_AVG_SAMPLES]; 113 112 int pos; 114 113 int nbr_samples; 115 114 int sum; ··· 387 386 */ 388 387 static int ab8500_fg_add_cap_sample(struct ab8500_fg *di, int sample) 389 388 { 390 - struct timespec ts; 389 + struct timespec64 ts64; 391 390 struct ab8500_fg_avg_cap *avg = &di->avg_cap; 392 391 393 - getnstimeofday(&ts); 392 + getnstimeofday64(&ts64); 394 393 395 394 do { 396 395 avg->sum += sample - avg->samples[avg->pos]; 397 396 avg->samples[avg->pos] = sample; 398 - avg->time_stamps[avg->pos] = ts.tv_sec; 397 + avg->time_stamps[avg->pos] = ts64.tv_sec; 399 398 avg->pos++; 400 399 401 400 if (avg->pos == NBR_AVG_SAMPLES) ··· 408 407 * Check the time stamp for each sample. If too old, 409 408 * replace with latest sample 410 409 */ 411 - } while (ts.tv_sec - VALID_CAPACITY_SEC > avg->time_stamps[avg->pos]); 410 + } while (ts64.tv_sec - VALID_CAPACITY_SEC > avg->time_stamps[avg->pos]); 412 411 413 412 avg->avg = avg->sum / avg->nbr_samples; 414 413 ··· 447 446 static void ab8500_fg_fill_cap_sample(struct ab8500_fg *di, int sample) 448 447 { 449 448 int i; 450 - struct timespec ts; 449 + struct timespec64 ts64; 451 450 struct ab8500_fg_avg_cap *avg = &di->avg_cap; 452 451 453 - getnstimeofday(&ts); 452 + getnstimeofday64(&ts64); 454 453 455 454 for (i = 0; i < NBR_AVG_SAMPLES; i++) { 456 455 avg->samples[i] = sample; 457 - avg->time_stamps[i] = ts.tv_sec; 456 + avg->time_stamps[i] = ts64.tv_sec; 458 457 } 459 458 460 459 avg->pos = 0;
+15 -8
drivers/power/bq2415x_charger.c
··· 1579 1579 if (np) { 1580 1580 bq->notify_psy = power_supply_get_by_phandle(np, "ti,usb-charger-detection"); 1581 1581 1582 - if (!bq->notify_psy) 1583 - return -EPROBE_DEFER; 1582 + if (IS_ERR(bq->notify_psy)) { 1583 + dev_info(&client->dev, 1584 + "no 'ti,usb-charger-detection' property (err=%ld)\n", 1585 + PTR_ERR(bq->notify_psy)); 1586 + bq->notify_psy = NULL; 1587 + } else if (!bq->notify_psy) { 1588 + ret = -EPROBE_DEFER; 1589 + goto error_2; 1590 + } 1584 1591 } 1585 1592 else if (pdata->notify_device) 1586 1593 bq->notify_psy = power_supply_get_by_name(pdata->notify_device); ··· 1609 1602 ret = of_property_read_u32(np, "ti,current-limit", 1610 1603 &bq->init_data.current_limit); 1611 1604 if (ret) 1612 - return ret; 1605 + goto error_2; 1613 1606 ret = of_property_read_u32(np, "ti,weak-battery-voltage", 1614 1607 &bq->init_data.weak_battery_voltage); 1615 1608 if (ret) 1616 - return ret; 1609 + goto error_2; 1617 1610 ret = of_property_read_u32(np, "ti,battery-regulation-voltage", 1618 1611 &bq->init_data.battery_regulation_voltage); 1619 1612 if (ret) 1620 - return ret; 1613 + goto error_2; 1621 1614 ret = of_property_read_u32(np, "ti,charge-current", 1622 1615 &bq->init_data.charge_current); 1623 1616 if (ret) 1624 - return ret; 1617 + goto error_2; 1625 1618 ret = of_property_read_u32(np, "ti,termination-current", 1626 1619 &bq->init_data.termination_current); 1627 1620 if (ret) 1628 - return ret; 1621 + goto error_2; 1629 1622 ret = of_property_read_u32(np, "ti,resistor-sense", 1630 1623 &bq->init_data.resistor_sense); 1631 1624 if (ret) 1632 - return ret; 1625 + goto error_2; 1633 1626 } else { 1634 1627 memcpy(&bq->init_data, pdata, sizeof(bq->init_data)); 1635 1628 }
+111 -53
drivers/power/charger-manager.c
··· 97 97 static bool is_batt_present(struct charger_manager *cm) 98 98 { 99 99 union power_supply_propval val; 100 + struct power_supply *psy; 100 101 bool present = false; 101 102 int i, ret; 102 103 ··· 108 107 case CM_NO_BATTERY: 109 108 break; 110 109 case CM_FUEL_GAUGE: 111 - ret = cm->fuel_gauge->get_property(cm->fuel_gauge, 110 + psy = power_supply_get_by_name(cm->desc->psy_fuel_gauge); 111 + if (!psy) 112 + break; 113 + 114 + ret = psy->get_property(psy, 112 115 POWER_SUPPLY_PROP_PRESENT, &val); 113 116 if (ret == 0 && val.intval) 114 117 present = true; 115 118 break; 116 119 case CM_CHARGER_STAT: 117 - for (i = 0; cm->charger_stat[i]; i++) { 118 - ret = cm->charger_stat[i]->get_property( 119 - cm->charger_stat[i], 120 - POWER_SUPPLY_PROP_PRESENT, &val); 120 + for (i = 0; cm->desc->psy_charger_stat[i]; i++) { 121 + psy = power_supply_get_by_name( 122 + cm->desc->psy_charger_stat[i]); 123 + if (!psy) { 124 + dev_err(cm->dev, "Cannot find power supply \"%s\"\n", 125 + cm->desc->psy_charger_stat[i]); 126 + continue; 127 + } 128 + 129 + ret = psy->get_property(psy, POWER_SUPPLY_PROP_PRESENT, 130 + &val); 121 131 if (ret == 0 && val.intval) { 122 132 present = true; 123 133 break; ··· 151 139 static bool is_ext_pwr_online(struct charger_manager *cm) 152 140 { 153 141 union power_supply_propval val; 142 + struct power_supply *psy; 154 143 bool online = false; 155 144 int i, ret; 156 145 157 146 /* If at least one of them has one, it's yes. 
*/ 158 - for (i = 0; cm->charger_stat[i]; i++) { 159 - ret = cm->charger_stat[i]->get_property( 160 - cm->charger_stat[i], 161 - POWER_SUPPLY_PROP_ONLINE, &val); 147 + for (i = 0; cm->desc->psy_charger_stat[i]; i++) { 148 + psy = power_supply_get_by_name(cm->desc->psy_charger_stat[i]); 149 + if (!psy) { 150 + dev_err(cm->dev, "Cannot find power supply \"%s\"\n", 151 + cm->desc->psy_charger_stat[i]); 152 + continue; 153 + } 154 + 155 + ret = psy->get_property(psy, POWER_SUPPLY_PROP_ONLINE, &val); 162 156 if (ret == 0 && val.intval) { 163 157 online = true; 164 158 break; ··· 185 167 static int get_batt_uV(struct charger_manager *cm, int *uV) 186 168 { 187 169 union power_supply_propval val; 170 + struct power_supply *fuel_gauge; 188 171 int ret; 189 172 190 - if (!cm->fuel_gauge) 173 + fuel_gauge = power_supply_get_by_name(cm->desc->psy_fuel_gauge); 174 + if (!fuel_gauge) 191 175 return -ENODEV; 192 176 193 - ret = cm->fuel_gauge->get_property(cm->fuel_gauge, 177 + ret = fuel_gauge->get_property(fuel_gauge, 194 178 POWER_SUPPLY_PROP_VOLTAGE_NOW, &val); 195 179 if (ret) 196 180 return ret; ··· 209 189 { 210 190 int i, ret; 211 191 bool charging = false; 192 + struct power_supply *psy; 212 193 union power_supply_propval val; 213 194 214 195 /* If there is no battery, it cannot be charged */ ··· 217 196 return false; 218 197 219 198 /* If at least one of the charger is charging, return yes */ 220 - for (i = 0; cm->charger_stat[i]; i++) { 199 + for (i = 0; cm->desc->psy_charger_stat[i]; i++) { 221 200 /* 1. The charger sholuld not be DISABLED */ 222 201 if (cm->emergency_stop) 223 202 continue; 224 203 if (!cm->charger_enabled) 225 204 continue; 226 205 206 + psy = power_supply_get_by_name(cm->desc->psy_charger_stat[i]); 207 + if (!psy) { 208 + dev_err(cm->dev, "Cannot find power supply \"%s\"\n", 209 + cm->desc->psy_charger_stat[i]); 210 + continue; 211 + } 212 + 227 213 /* 2. 
The charger should be online (ext-power) */ 228 - ret = cm->charger_stat[i]->get_property( 229 - cm->charger_stat[i], 230 - POWER_SUPPLY_PROP_ONLINE, &val); 214 + ret = psy->get_property(psy, POWER_SUPPLY_PROP_ONLINE, &val); 231 215 if (ret) { 232 216 dev_warn(cm->dev, "Cannot read ONLINE value from %s\n", 233 217 cm->desc->psy_charger_stat[i]); ··· 245 219 * 3. The charger should not be FULL, DISCHARGING, 246 220 * or NOT_CHARGING. 247 221 */ 248 - ret = cm->charger_stat[i]->get_property( 249 - cm->charger_stat[i], 250 - POWER_SUPPLY_PROP_STATUS, &val); 222 + ret = psy->get_property(psy, POWER_SUPPLY_PROP_STATUS, &val); 251 223 if (ret) { 252 224 dev_warn(cm->dev, "Cannot read STATUS value from %s\n", 253 225 cm->desc->psy_charger_stat[i]); ··· 272 248 { 273 249 struct charger_desc *desc = cm->desc; 274 250 union power_supply_propval val; 251 + struct power_supply *fuel_gauge; 275 252 int ret = 0; 276 253 int uV; 277 254 ··· 280 255 if (!is_batt_present(cm)) 281 256 return false; 282 257 283 - if (cm->fuel_gauge && desc->fullbatt_full_capacity > 0) { 258 + fuel_gauge = power_supply_get_by_name(cm->desc->psy_fuel_gauge); 259 + if (!fuel_gauge) 260 + return false; 261 + 262 + if (desc->fullbatt_full_capacity > 0) { 284 263 val.intval = 0; 285 264 286 265 /* Not full if capacity of fuel gauge isn't full */ 287 - ret = cm->fuel_gauge->get_property(cm->fuel_gauge, 266 + ret = fuel_gauge->get_property(fuel_gauge, 288 267 POWER_SUPPLY_PROP_CHARGE_FULL, &val); 289 268 if (!ret && val.intval > desc->fullbatt_full_capacity) 290 269 return true; ··· 302 273 } 303 274 304 275 /* Full, if the capacity is more than fullbatt_soc */ 305 - if (cm->fuel_gauge && desc->fullbatt_soc > 0) { 276 + if (desc->fullbatt_soc > 0) { 306 277 val.intval = 0; 307 278 308 - ret = cm->fuel_gauge->get_property(cm->fuel_gauge, 279 + ret = fuel_gauge->get_property(fuel_gauge, 309 280 POWER_SUPPLY_PROP_CAPACITY, &val); 310 281 if (!ret && val.intval >= desc->fullbatt_soc) 311 282 return true; ··· 580 
551 	return ret;
581 552 }
582 553 
554 +static int cm_get_battery_temperature_by_psy(struct charger_manager *cm,
555 +					int *temp)
556 +{
557 +	struct power_supply *fuel_gauge;
558 +
559 +	fuel_gauge = power_supply_get_by_name(cm->desc->psy_fuel_gauge);
560 +	if (!fuel_gauge)
561 +		return -ENODEV;
562 +
563 +	return fuel_gauge->get_property(fuel_gauge,
564 +				POWER_SUPPLY_PROP_TEMP,
565 +				(union power_supply_propval *)temp);
566 +}
567 +
583 568 static int cm_get_battery_temperature(struct charger_manager *cm,
584 569 					int *temp)
585 570 {
··· 603 560 		return -ENODEV;
604 561 
605 562 #ifdef CONFIG_THERMAL
606 -	ret = thermal_zone_get_temp(cm->tzd_batt, (unsigned long *)temp);
607 -	if (!ret)
608 -		/* Calibrate temperature unit */
609 -		*temp /= 100;
610 -#else
611 -	ret = cm->fuel_gauge->get_property(cm->fuel_gauge,
612 -				POWER_SUPPLY_PROP_TEMP,
613 -				(union power_supply_propval *)temp);
563 +	if (cm->tzd_batt) {
564 +		ret = thermal_zone_get_temp(cm->tzd_batt, (unsigned long *)temp);
565 +		if (!ret)
566 +			/* Calibrate temperature unit */
567 +			*temp /= 100;
568 +	} else
614 569 #endif
570 +	{
571 +		/* if-else continued from CONFIG_THERMAL */
572 +		ret = cm_get_battery_temperature_by_psy(cm, temp);
573 +	}
574 +
615 575 	return ret;
616 576 }
617 577 
··· 873 827 	struct charger_manager *cm = container_of(psy,
874 828 			struct charger_manager, charger_psy);
875 829 	struct charger_desc *desc = cm->desc;
830 +	struct power_supply *fuel_gauge;
876 831 	int ret = 0;
877 832 	int uV;
878 833 
··· 904 857 		ret = get_batt_uV(cm, &val->intval);
905 858 		break;
906 859 	case POWER_SUPPLY_PROP_CURRENT_NOW:
907 -		ret = cm->fuel_gauge->get_property(cm->fuel_gauge,
860 +		fuel_gauge = power_supply_get_by_name(cm->desc->psy_fuel_gauge);
861 +		if (!fuel_gauge) {
862 +			ret = -ENODEV;
863 +			break;
864 +		}
865 +		ret = fuel_gauge->get_property(fuel_gauge,
908 866 				POWER_SUPPLY_PROP_CURRENT_NOW, val);
909 867 		break;
910 868 	case POWER_SUPPLY_PROP_TEMP:
911 869 	case POWER_SUPPLY_PROP_TEMP_AMBIENT:
912 870 		return cm_get_battery_temperature(cm, &val->intval);
913 871 	case POWER_SUPPLY_PROP_CAPACITY:
914 -		if (!cm->fuel_gauge) {
872 +		fuel_gauge = power_supply_get_by_name(cm->desc->psy_fuel_gauge);
873 +		if (!fuel_gauge) {
915 874 			ret = -ENODEV;
916 875 			break;
917 876 		}
··· 928 875 			break;
929 876 		}
930 877 
931 -		ret = cm->fuel_gauge->get_property(cm->fuel_gauge,
878 +		ret = fuel_gauge->get_property(fuel_gauge,
932 879 				POWER_SUPPLY_PROP_CAPACITY, val);
933 880 		if (ret)
934 881 			break;
··· 977 924 		break;
978 925 	case POWER_SUPPLY_PROP_CHARGE_NOW:
979 926 		if (is_charging(cm)) {
980 -			ret = cm->fuel_gauge->get_property(cm->fuel_gauge,
927 +			fuel_gauge = power_supply_get_by_name(
928 +					cm->desc->psy_fuel_gauge);
929 +			if (!fuel_gauge) {
930 +				ret = -ENODEV;
931 +				break;
932 +			}
933 +
934 +			ret = fuel_gauge->get_property(fuel_gauge,
981 935 					POWER_SUPPLY_PROP_CHARGE_NOW,
982 936 					val);
983 937 			if (ret) {
··· 1030 970 	.properties = default_charger_props,
1031 971 	.num_properties = ARRAY_SIZE(default_charger_props),
1032 972 	.get_property = charger_get_property,
973 +	.no_thermal = true,
1033 974 };
1034 975 
1035 976 /**
··· 1546 1485 	return ret;
1547 1486 }
1548 1487 
1549 -static int cm_init_thermal_data(struct charger_manager *cm)
1488 +static int cm_init_thermal_data(struct charger_manager *cm,
1489 +		struct power_supply *fuel_gauge)
1550 1490 {
1551 1491 	struct charger_desc *desc = cm->desc;
1552 1492 	union power_supply_propval val;
1553 1493 	int ret;
1554 1494 
1555 1495 	/* Verify whether fuel gauge provides battery temperature */
1556 -	ret = cm->fuel_gauge->get_property(cm->fuel_gauge,
1496 +	ret = fuel_gauge->get_property(fuel_gauge,
1557 1497 			POWER_SUPPLY_PROP_TEMP, &val);
1558 1498 
1559 1499 	if (!ret) {
··· 1564 1502 		cm->desc->measure_battery_temp = true;
1565 1503 	}
1566 1504 #ifdef CONFIG_THERMAL
1567 -	cm->tzd_batt = cm->fuel_gauge->tzd;
1568 -
1569 1505 	if (ret && desc->thermal_zone) {
1570 1506 		cm->tzd_batt =
1571 1507 			thermal_zone_get_zone_by_name(desc->thermal_zone);
··· 1726 1666 	int ret = 0, i = 0;
1727 1667 	int j = 0;
1728 1668 	union power_supply_propval val;
1669 +	struct power_supply *fuel_gauge;
1729 1670 
1730 1671 	if (g_desc && !rtc_dev && g_desc->rtc_name) {
1731 1672 		rtc_dev = rtc_class_open(g_desc->rtc_name);
··· 1790 1729 	while (desc->psy_charger_stat[i])
1791 1730 		i++;
1792 1731 
1793 -	cm->charger_stat = devm_kzalloc(&pdev->dev,
1794 -				sizeof(struct power_supply *) * i, GFP_KERNEL);
1795 -	if (!cm->charger_stat)
1796 -		return -ENOMEM;
1797 -
1732 +	/* Check if charger's supplies are present at probe */
1798 1733 	for (i = 0; desc->psy_charger_stat[i]; i++) {
1799 -		cm->charger_stat[i] = power_supply_get_by_name(
1800 -					desc->psy_charger_stat[i]);
1801 -		if (!cm->charger_stat[i]) {
1734 +		struct power_supply *psy;
1735 +
1736 +		psy = power_supply_get_by_name(desc->psy_charger_stat[i]);
1737 +		if (!psy) {
1802 1738 			dev_err(&pdev->dev, "Cannot find power supply \"%s\"\n",
1803 1739 				desc->psy_charger_stat[i]);
1804 1740 			return -ENODEV;
1805 1741 		}
1806 1742 	}
1807 1743 
1808 -	cm->fuel_gauge = power_supply_get_by_name(desc->psy_fuel_gauge);
1809 -	if (!cm->fuel_gauge) {
1744 +	fuel_gauge = power_supply_get_by_name(desc->psy_fuel_gauge);
1745 +	if (!fuel_gauge) {
1810 1746 		dev_err(&pdev->dev, "Cannot find power supply \"%s\"\n",
1811 1747 			desc->psy_fuel_gauge);
1812 1748 		return -ENODEV;
··· 1846 1788 	cm->charger_psy.num_properties = psy_default.num_properties;
1847 1789 
1848 1790 	/* Find which optional psy-properties are available */
1849 -	if (!cm->fuel_gauge->get_property(cm->fuel_gauge,
1791 +	if (!fuel_gauge->get_property(fuel_gauge,
1850 1792 					  POWER_SUPPLY_PROP_CHARGE_NOW, &val)) {
1851 1793 		cm->charger_psy.properties[cm->charger_psy.num_properties] =
1852 1794 				POWER_SUPPLY_PROP_CHARGE_NOW;
1853 1795 		cm->charger_psy.num_properties++;
1854 1796 	}
1855 -	if (!cm->fuel_gauge->get_property(cm->fuel_gauge,
1797 +	if (!fuel_gauge->get_property(fuel_gauge,
1856 1798 					  POWER_SUPPLY_PROP_CURRENT_NOW,
1857 1799 					  &val)) {
1858 1800 		cm->charger_psy.properties[cm->charger_psy.num_properties] =
··· 1860 1802 		cm->charger_psy.num_properties++;
1861 1803 	}
1862 1804 
1863 -	ret = cm_init_thermal_data(cm);
1805 +	ret = cm_init_thermal_data(cm, fuel_gauge);
1864 1806 	if (ret) {
1865 1807 		dev_err(&pdev->dev, "Failed to initialize thermal data\n");
1866 1808 		cm->desc->measure_battery_temp = false;
··· 2124 2066 	int i;
2125 2067 	bool found = false;
2126 2068 
2127 -	for (i = 0; cm->charger_stat[i]; i++) {
2128 -		if (psy == cm->charger_stat[i]) {
2069 +	for (i = 0; cm->desc->psy_charger_stat[i]; i++) {
2070 +		if (!strcmp(psy->name, cm->desc->psy_charger_stat[i])) {
2129 2071 			found = true;
2130 2072 			break;
2131 2073 		}
+3
drivers/power/power_supply_core.c
··· 417 417 {
418 418 	int i;
419 419 
420 +	if (psy->no_thermal)
421 +		return 0;
422 +
420 423 	/* Register battery zone device psy reports temperature */
421 424 	for (i = 0; i < psy->num_properties; i++) {
422 425 		if (psy->properties[i] == POWER_SUPPLY_PROP_TEMP) {
-2
drivers/scsi/bnx2fc/bnx2fc_els.c
··· 480 480 	bnx2fc_initiate_cleanup(orig_io_req);
481 481 	/* Post a new IO req with the same sc_cmd */
482 482 	BNX2FC_IO_DBG(rec_req, "Post IO request again\n");
483 -	spin_unlock_bh(&tgt->tgt_lock);
484 483 	rc = bnx2fc_post_io_req(tgt, new_io_req);
485 -	spin_lock_bh(&tgt->tgt_lock);
486 484 	if (!rc)
487 485 		goto free_frame;
488 486 	BNX2FC_IO_DBG(rec_req, "REC: io post err\n");
+10 -9
drivers/scsi/bnx2fc/bnx2fc_io.c
··· 1894 1894 		goto exit_qcmd;
1895 1895 		}
1896 1896 	}
1897 +
1898 +	spin_lock_bh(&tgt->tgt_lock);
1899 +
1897 1900 	io_req = bnx2fc_cmd_alloc(tgt);
1898 1901 	if (!io_req) {
1899 1902 		rc = SCSI_MLQUEUE_HOST_BUSY;
1900 -		goto exit_qcmd;
1903 +		goto exit_qcmd_tgtlock;
1901 1904 	}
1902 1905 	io_req->sc_cmd = sc_cmd;
1903 1906 
1904 1907 	if (bnx2fc_post_io_req(tgt, io_req)) {
1905 1908 		printk(KERN_ERR PFX "Unable to post io_req\n");
1906 1909 		rc = SCSI_MLQUEUE_HOST_BUSY;
1907 -		goto exit_qcmd;
1910 +		goto exit_qcmd_tgtlock;
1908 1911 	}
1912 +
1913 + exit_qcmd_tgtlock:
1914 +	spin_unlock_bh(&tgt->tgt_lock);
1909 1915  exit_qcmd:
1910 1916 	return rc;
1911 1917 }
··· 2026 2020 	int task_idx, index;
2027 2021 	u16 xid;
2028 2022 
2023 +	/* bnx2fc_post_io_req() is called with the tgt_lock held */
2024 +
2029 2025 	/* Initialize rest of io_req fields */
2030 2026 	io_req->cmd_type = BNX2FC_SCSI_CMD;
2031 2027 	io_req->port = port;
··· 2055 2047 	/* Build buffer descriptor list for firmware from sg list */
2056 2048 	if (bnx2fc_build_bd_list_from_sg(io_req)) {
2057 2049 		printk(KERN_ERR PFX "BD list creation failed\n");
2058 -		spin_lock_bh(&tgt->tgt_lock);
2059 2050 		kref_put(&io_req->refcount, bnx2fc_cmd_release);
2060 -		spin_unlock_bh(&tgt->tgt_lock);
2061 2051 		return -EAGAIN;
2062 2052 	}
2063 2053 
··· 2067 2061 	task = &(task_page[index]);
2068 2062 	bnx2fc_init_task(io_req, task);
2069 2063 
2070 -	spin_lock_bh(&tgt->tgt_lock);
2071 -
2072 2064 	if (tgt->flush_in_prog) {
2073 2065 		printk(KERN_ERR PFX "Flush in progress..Host Busy\n");
2074 2066 		kref_put(&io_req->refcount, bnx2fc_cmd_release);
2075 -		spin_unlock_bh(&tgt->tgt_lock);
2076 2067 		return -EAGAIN;
2077 2068 	}
2078 2069 
2079 2070 	if (!test_bit(BNX2FC_FLAG_SESSION_READY, &tgt->flags)) {
2080 2071 		printk(KERN_ERR PFX "Session not ready...post_io\n");
2081 2072 		kref_put(&io_req->refcount, bnx2fc_cmd_release);
2082 -		spin_unlock_bh(&tgt->tgt_lock);
2083 2073 		return -EAGAIN;
2084 2074 	}
2085 2075 
··· 2093 2091 
2094 2092 	/* Ring doorbell */
2095 2093 	bnx2fc_ring_doorbell(tgt);
2096 -	spin_unlock_bh(&tgt->tgt_lock);
2097 2094 	return 0;
2098 2095 }
+11 -6
drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
··· 830 830 	if (status == CPL_ERR_RTX_NEG_ADVICE)
831 831 		goto rel_skb;
832 832 
833 +	module_put(THIS_MODULE);
834 +
833 835 	if (status && status != CPL_ERR_TCAM_FULL &&
834 836 	    status != CPL_ERR_CONN_EXIST &&
835 837 	    status != CPL_ERR_ARP_MISS)
··· 940 938 	cxgbi_sock_get(csk);
941 939 	spin_lock_bh(&csk->lock);
942 940 
943 -	if (!cxgbi_sock_flag(csk, CTPF_ABORT_REQ_RCVD)) {
944 -		cxgbi_sock_set_flag(csk, CTPF_ABORT_REQ_RCVD);
945 -		cxgbi_sock_set_state(csk, CTP_ABORTING);
946 -		goto done;
941 +	cxgbi_sock_clear_flag(csk, CTPF_ABORT_REQ_RCVD);
942 +
943 +	if (!cxgbi_sock_flag(csk, CTPF_TX_DATA_SENT)) {
944 +		send_tx_flowc_wr(csk);
945 +		cxgbi_sock_set_flag(csk, CTPF_TX_DATA_SENT);
947 946 	}
948 -
949 -	cxgbi_sock_clear_flag(csk, CTPF_ABORT_REQ_RCVD);
948 +	cxgbi_sock_set_flag(csk, CTPF_ABORT_REQ_RCVD);
949 +	cxgbi_sock_set_state(csk, CTP_ABORTING);
950 +
950 951 	send_abort_rpl(csk, rst_status);
952 953 	if (!cxgbi_sock_flag(csk, CTPF_ABORT_RPL_PENDING)) {
953 954 		csk->err = abort_status_to_errno(csk, req->status, &rst_status);
954 955 		cxgbi_sock_closed(csk);
955 956 	}
956 - done:
957 +
957 958 	spin_unlock_bh(&csk->lock);
958 959 	cxgbi_sock_put(csk);
959 960 rel_skb:
+9 -11
drivers/scsi/cxgbi/libcxgbi.c
··· 816 816 		read_lock_bh(&csk->callback_lock);
817 817 		if (csk->user_data)
818 818 			iscsi_conn_failure(csk->user_data,
819 -					ISCSI_ERR_CONN_FAILED);
819 +					ISCSI_ERR_TCP_CONN_CLOSE);
820 820 		read_unlock_bh(&csk->callback_lock);
821 821 	}
822 822 }
··· 905 905 {
906 906 	cxgbi_sock_get(csk);
907 907 	spin_lock_bh(&csk->lock);
908 +
909 +	cxgbi_sock_set_flag(csk, CTPF_ABORT_RPL_RCVD);
908 910 	if (cxgbi_sock_flag(csk, CTPF_ABORT_RPL_PENDING)) {
909 -		if (!cxgbi_sock_flag(csk, CTPF_ABORT_RPL_RCVD))
910 -			cxgbi_sock_set_flag(csk, CTPF_ABORT_RPL_RCVD);
911 -		else {
912 -			cxgbi_sock_clear_flag(csk, CTPF_ABORT_RPL_RCVD);
913 -			cxgbi_sock_clear_flag(csk, CTPF_ABORT_RPL_PENDING);
914 -			if (cxgbi_sock_flag(csk, CTPF_ABORT_REQ_RCVD))
915 -				pr_err("csk 0x%p,%u,0x%lx,%u,ABT_RPL_RSS.\n",
916 -					csk, csk->state, csk->flags, csk->tid);
917 -			cxgbi_sock_closed(csk);
918 -		}
911 +		cxgbi_sock_clear_flag(csk, CTPF_ABORT_RPL_PENDING);
912 +		if (cxgbi_sock_flag(csk, CTPF_ABORT_REQ_RCVD))
913 +			pr_err("csk 0x%p,%u,0x%lx,%u,ABT_RPL_RSS.\n",
914 +				csk, csk->state, csk->flags, csk->tid);
915 +		cxgbi_sock_closed(csk);
919 916 	}
917 +
920 918 	spin_unlock_bh(&csk->lock);
921 919 	cxgbi_sock_put(csk);
922 920 }
+7
drivers/scsi/device_handler/scsi_dh_alua.c
··· 474 474 		 * LUN Not Ready -- Offline
475 475 		 */
476 476 		return SUCCESS;
477 +	if (sdev->allow_restart &&
478 +	    sense_hdr->asc == 0x04 && sense_hdr->ascq == 0x02)
479 +		/*
480 +		 * if the device is not started, we need to wake
481 +		 * the error handler to start the motor
482 +		 */
483 +		return FAILED;
477 484 		break;
478 485 	case UNIT_ATTENTION:
479 486 		if (sense_hdr->asc == 0x29 && sense_hdr->ascq == 0x00)
+1 -1
drivers/scsi/megaraid/megaraid_sas_base.c
··· 4453 4453 		instance->msixentry[i].entry = i;
4454 4454 	i = pci_enable_msix_range(instance->pdev, instance->msixentry,
4455 4455 				  1, instance->msix_vectors);
4456 -	if (i)
4456 +	if (i > 0)
4457 4457 		instance->msix_vectors = i;
4458 4458 	else
4459 4459 		instance->msix_vectors = 0;
+11 -9
drivers/scsi/scsi_error.c
··· 459 459 	if (! scsi_command_normalize_sense(scmd, &sshdr))
460 460 		return FAILED;	/* no valid sense data */
461 461 
462 -	if (scmd->cmnd[0] == TEST_UNIT_READY && scmd->scsi_done != scsi_eh_done)
463 -		/*
464 -		 * nasty: for mid-layer issued TURs, we need to return the
465 -		 * actual sense data without any recovery attempt. For eh
466 -		 * issued ones, we need to try to recover and interpret
467 -		 */
468 -		return SUCCESS;
469 -
470 462 	scsi_report_sense(sdev, &sshdr);
471 463 
472 464 	if (scsi_sense_is_deferred(&sshdr))
··· 473 481 			return rc;
474 482 		/* handler does not care. Drop down to default handling */
475 483 	}
484 +
485 +	if (scmd->cmnd[0] == TEST_UNIT_READY && scmd->scsi_done != scsi_eh_done)
486 +		/*
487 +		 * nasty: for mid-layer issued TURs, we need to return the
488 +		 * actual sense data without any recovery attempt. For eh
489 +		 * issued ones, we need to try to recover and interpret
490 +		 */
491 +		return SUCCESS;
476 492 
477 493 	/*
478 494 	 * Previous logic looked for FILEMARK, EOM or ILI which are
··· 2001 2001 	 * is no point trying to lock the door of an off-line device.
2002 2002 	 */
2003 2003 	shost_for_each_device(sdev, shost) {
2004 -		if (scsi_device_online(sdev) && sdev->locked)
2004 +		if (scsi_device_online(sdev) && sdev->was_reset && sdev->locked) {
2005 2005 			scsi_eh_lock_door(sdev);
2006 +			sdev->was_reset = 0;
2007 +		}
2006 2008 	}
2007 2009 
2008 2010 	/*
+1 -1
drivers/target/iscsi/iscsi_target.c
··· 3491 3491 			len = sprintf(buf, "TargetAddress="
3492 3492 				"%s:%hu,%hu",
3493 3493 				inaddr_any ? conn->local_ip : np->np_ip,
3494 -				inaddr_any ? conn->local_port : np->np_port,
3494 +				np->np_port,
3495 3495 				tpg->tpgt);
3496 3496 			len += 1;
3497 3497 
+5 -4
drivers/target/target_core_pr.c
··· 2738 2738 	struct t10_pr_registration *pr_reg, *pr_reg_tmp, *pr_reg_n, *pr_res_holder;
2739 2739 	struct t10_reservation *pr_tmpl = &dev->t10_pr;
2740 2740 	u32 pr_res_mapped_lun = 0;
2741 -	int all_reg = 0, calling_it_nexus = 0, released_regs = 0;
2741 +	int all_reg = 0, calling_it_nexus = 0;
2742 +	bool sa_res_key_unmatched = sa_res_key != 0;
2742 2743 	int prh_type = 0, prh_scope = 0;
2743 2744 
2744 2745 	if (!se_sess)
··· 2814 2813 		if (!all_reg) {
2815 2814 			if (pr_reg->pr_res_key != sa_res_key)
2816 2815 				continue;
2816 +			sa_res_key_unmatched = false;
2817 2817 
2818 2818 			calling_it_nexus = (pr_reg_n == pr_reg) ? 1 : 0;
2819 2819 			pr_reg_nacl = pr_reg->pr_reg_nacl;
··· 2822 2820 			__core_scsi3_free_registration(dev, pr_reg,
2823 2821 				(preempt_type == PREEMPT_AND_ABORT) ? &preempt_and_abort_list :
2824 2822 				NULL, calling_it_nexus);
2825 -			released_regs++;
2826 2823 		} else {
2827 2824 			/*
2828 2825 			 * Case for any existing all registrants type
··· 2839 2838 			if ((sa_res_key) &&
2840 2839 			    (pr_reg->pr_res_key != sa_res_key))
2841 2840 				continue;
2841 +			sa_res_key_unmatched = false;
2842 2842 
2843 2843 			calling_it_nexus = (pr_reg_n == pr_reg) ? 1 : 0;
2844 2844 			if (calling_it_nexus)
··· 2850 2848 			__core_scsi3_free_registration(dev, pr_reg,
2851 2849 				(preempt_type == PREEMPT_AND_ABORT) ? &preempt_and_abort_list :
2852 2850 				NULL, 0);
2853 -			released_regs++;
2854 2851 		}
2855 2852 		if (!calling_it_nexus)
2856 2853 			core_scsi3_ua_allocate(pr_reg_nacl,
··· 2864 2863 	 * registered reservation key, then the device server shall
2865 2864 	 * complete the command with RESERVATION CONFLICT status.
2866 2865 	 */
2867 -	if (!released_regs) {
2866 +	if (sa_res_key_unmatched) {
2868 2867 		spin_unlock(&dev->dev_reservation_lock);
2869 2868 		core_scsi3_put_pr_reg(pr_reg_n);
2870 2869 		return TCM_RESERVATION_CONFLICT;
+1 -1
drivers/target/target_core_transport.c
··· 2292 2292 	 * and let it call back once the write buffers are ready.
2293 2293 	 */
2294 2294 	target_add_to_state_list(cmd);
2295 -	if (cmd->data_direction != DMA_TO_DEVICE) {
2295 +	if (cmd->data_direction != DMA_TO_DEVICE || cmd->data_length == 0) {
2296 2296 		target_execute_cmd(cmd);
2297 2297 		return 0;
2298 2298 	}
+24
drivers/vhost/scsi.c
··· 1312 1312 vhost_scsi_set_endpoint(struct vhost_scsi *vs,
1313 1313 			struct vhost_scsi_target *t)
1314 1314 {
1315 +	struct se_portal_group *se_tpg;
1315 1316 	struct tcm_vhost_tport *tv_tport;
1316 1317 	struct tcm_vhost_tpg *tpg;
1317 1318 	struct tcm_vhost_tpg **vs_tpg;
··· 1360 1359 				ret = -EEXIST;
1361 1360 				goto out;
1362 1361 			}
1362 +			/*
1363 +			 * In order to ensure individual vhost-scsi configfs
1364 +			 * groups cannot be removed while in use by vhost ioctl,
1365 +			 * go ahead and take an explicit se_tpg->tpg_group.cg_item
1366 +			 * dependency now.
1367 +			 */
1368 +			se_tpg = &tpg->se_tpg;
1369 +			ret = configfs_depend_item(se_tpg->se_tpg_tfo->tf_subsys,
1370 +						   &se_tpg->tpg_group.cg_item);
1371 +			if (ret) {
1372 +				pr_warn("configfs_depend_item() failed: %d\n", ret);
1373 +				kfree(vs_tpg);
1374 +				mutex_unlock(&tpg->tv_tpg_mutex);
1375 +				goto out;
1376 +			}
1363 1377 			tpg->tv_tpg_vhost_count++;
1364 1378 			tpg->vhost_scsi = vs;
1365 1379 			vs_tpg[tpg->tport_tpgt] = tpg;
··· 1417 1401 vhost_scsi_clear_endpoint(struct vhost_scsi *vs,
1418 1402 			  struct vhost_scsi_target *t)
1419 1403 {
1404 +	struct se_portal_group *se_tpg;
1420 1405 	struct tcm_vhost_tport *tv_tport;
1421 1406 	struct tcm_vhost_tpg *tpg;
1422 1407 	struct vhost_virtqueue *vq;
··· 1466 1449 		vs->vs_tpg[target] = NULL;
1467 1450 		match = true;
1468 1451 		mutex_unlock(&tpg->tv_tpg_mutex);
1452 +		/*
1453 +		 * Release se_tpg->tpg_group.cg_item configfs dependency now
1454 +		 * to allow vhost-scsi WWPN se_tpg->tpg_group shutdown to occur.
1455 +		 */
1456 +		se_tpg = &tpg->se_tpg;
1457 +		configfs_undepend_item(se_tpg->se_tpg_tfo->tf_subsys,
1458 +				       &se_tpg->tpg_group.cg_item);
1469 1459 	}
1470 1460 	if (match) {
1471 1461 		for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
+1 -1
fs/nfs/blocklayout/blocklayout.c
··· 378 378 	loff_t offset = header->args.offset;
379 379 	size_t count = header->args.count;
380 380 	struct page **pages = header->args.pages;
381 -	int pg_index = pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
381 +	int pg_index = header->args.pgbase >> PAGE_CACHE_SHIFT;
382 382 	unsigned int pg_len;
383 383 	struct blk_plug plug;
384 384 	int i;
+9 -5
fs/nfs/blocklayout/rpc_pipefs.c
··· 65 65 
66 66 	dprintk("%s CREATING PIPEFS MESSAGE\n", __func__);
67 67 
68 +	mutex_lock(&nn->bl_mutex);
68 69 	bl_pipe_msg.bl_wq = &nn->bl_wq;
69 70 
70 71 	b->simple.len += 4;	/* single volume */
71 72 	if (b->simple.len > PAGE_SIZE)
72 -		return -EIO;
73 +		goto out_unlock;
73 74 
74 75 	memset(msg, 0, sizeof(*msg));
75 76 	msg->len = sizeof(*bl_msg) + b->simple.len;
76 77 	msg->data = kzalloc(msg->len, gfp_mask);
77 78 	if (!msg->data)
78 -		goto out;
79 +		goto out_free_data;
79 80 
80 81 	bl_msg = msg->data;
81 82 	bl_msg->type = BL_DEVICE_MOUNT,
··· 88 87 	rc = rpc_queue_upcall(nn->bl_device_pipe, msg);
89 88 	if (rc < 0) {
90 89 		remove_wait_queue(&nn->bl_wq, &wq);
91 -		goto out;
90 +		goto out_free_data;
92 91 	}
93 92 
94 93 	set_current_state(TASK_UNINTERRUPTIBLE);
··· 98 97 	if (reply->status != BL_DEVICE_REQUEST_PROC) {
99 98 		printk(KERN_WARNING "%s failed to decode device: %d\n",
100 99 			__func__, reply->status);
101 -		goto out;
100 +		goto out_free_data;
102 101 	}
103 102 
104 103 	dev = MKDEV(reply->major, reply->minor);
105 - out:
104 + out_free_data:
106 105 	kfree(msg->data);
106 + out_unlock:
107 +	mutex_unlock(&nn->bl_mutex);
107 108 	return dev;
108 109 }
109 110 
··· 235 232 	struct nfs_net *nn = net_generic(net, nfs_net_id);
236 233 	struct dentry *dentry;
237 234 
235 +	mutex_init(&nn->bl_mutex);
238 236 	init_waitqueue_head(&nn->bl_wq);
239 237 	nn->bl_device_pipe = rpc_mkpipe_data(&bl_upcall_ops, 0);
240 238 	if (IS_ERR(nn->bl_device_pipe))
+23 -2
fs/nfs/delegation.c
··· 125 125 			continue;
126 126 		if (!test_bit(NFS_DELEGATED_STATE, &state->flags))
127 127 			continue;
128 +		if (!nfs4_valid_open_stateid(state))
129 +			continue;
128 130 		if (!nfs4_stateid_match(&state->stateid, stateid))
129 131 			continue;
130 132 		get_nfs_open_context(ctx);
··· 195 193 {
196 194 	int res = 0;
197 195 
198 -	res = nfs4_proc_delegreturn(inode, delegation->cred, &delegation->stateid, issync);
196 +	if (!test_bit(NFS_DELEGATION_REVOKED, &delegation->flags))
197 +		res = nfs4_proc_delegreturn(inode,
198 +				delegation->cred,
199 +				&delegation->stateid,
200 +				issync);
199 201 	nfs_free_delegation(delegation);
200 202 	return res;
201 203 }
··· 386 380 {
387 381 	struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
388 382 	struct nfs_inode *nfsi = NFS_I(inode);
389 -	int err;
383 +	int err = 0;
390 384 
391 385 	if (delegation == NULL)
392 386 		return 0;
393 387 	do {
388 +		if (test_bit(NFS_DELEGATION_REVOKED, &delegation->flags))
389 +			break;
394 390 		err = nfs_delegation_claim_opens(inode, &delegation->stateid);
395 391 		if (!issync || err != -EAGAIN)
396 392 			break;
··· 613 605 	rcu_read_unlock();
614 606 }
615 607 
608 +static void nfs_revoke_delegation(struct inode *inode)
609 +{
610 +	struct nfs_delegation *delegation;
611 +	rcu_read_lock();
612 +	delegation = rcu_dereference(NFS_I(inode)->delegation);
613 +	if (delegation != NULL) {
614 +		set_bit(NFS_DELEGATION_REVOKED, &delegation->flags);
615 +		nfs_mark_return_delegation(NFS_SERVER(inode), delegation);
616 +	}
617 +	rcu_read_unlock();
618 +}
619 +
616 620 void nfs_remove_bad_delegation(struct inode *inode)
617 621 {
618 622 	struct nfs_delegation *delegation;
619 623 
624 +	nfs_revoke_delegation(inode);
620 625 	delegation = nfs_inode_detach_delegation(inode);
621 626 	if (delegation) {
622 627 		nfs_inode_find_state_and_recover(inode, &delegation->stateid);
+1
fs/nfs/delegation.h
··· 31 31 	NFS_DELEGATION_RETURN_IF_CLOSED,
32 32 	NFS_DELEGATION_REFERENCED,
33 33 	NFS_DELEGATION_RETURNING,
34 +	NFS_DELEGATION_REVOKED,
34 35 };
35 36 
36 37 int nfs_inode_set_delegation(struct inode *inode, struct rpc_cred *cred, struct nfs_openres *res);
+1
fs/nfs/dir.c
··· 1527 1527 	case -ENOENT:
1528 1528 		d_drop(dentry);
1529 1529 		d_add(dentry, NULL);
1530 +		nfs_set_verifier(dentry, nfs_save_change_attribute(dir));
1530 1531 		break;
1531 1532 	case -EISDIR:
1532 1533 	case -ENOTDIR:
+1
fs/nfs/direct.c
··· 266 266 {
267 267 	struct nfs_direct_req *dreq = container_of(kref, struct nfs_direct_req, kref);
268 268 
269 +	nfs_free_pnfs_ds_cinfo(&dreq->ds_cinfo);
269 270 	if (dreq->l_ctx != NULL)
270 271 		nfs_put_lock_context(dreq->l_ctx);
271 272 	if (dreq->ctx != NULL)
-3
fs/nfs/filelayout/filelayout.c
··· 145 145 	case -NFS4ERR_DELEG_REVOKED:
146 146 	case -NFS4ERR_ADMIN_REVOKED:
147 147 	case -NFS4ERR_BAD_STATEID:
148 -		if (state == NULL)
149 -			break;
150 -		nfs_remove_bad_delegation(state->inode);
151 148 	case -NFS4ERR_OPENMODE:
152 149 		if (state == NULL)
153 150 			break;
+1 -1
fs/nfs/inode.c
··· 626 626 {
627 627 	struct inode *inode = dentry->d_inode;
628 628 	int need_atime = NFS_I(inode)->cache_validity & NFS_INO_INVALID_ATIME;
629 -	int err;
629 +	int err = 0;
630 630 
631 631 	trace_nfs_getattr_enter(inode);
632 632 	/* Flush out writes to the server in order to update c/mtime. */
+1
fs/nfs/netns.h
··· 19 19 	struct rpc_pipe *bl_device_pipe;
20 20 	struct bl_dev_msg bl_mount_reply;
21 21 	wait_queue_head_t bl_wq;
22 +	struct mutex bl_mutex;
22 23 	struct list_head nfs_client_list;
23 24 	struct list_head nfs_volume_list;
24 25 #if IS_ENABLED(CONFIG_NFS_V4)
+45 -50
fs/nfs/nfs4proc.c
··· 370 370 	case -NFS4ERR_DELEG_REVOKED:
371 371 	case -NFS4ERR_ADMIN_REVOKED:
372 372 	case -NFS4ERR_BAD_STATEID:
373 -		if (inode != NULL && nfs4_have_delegation(inode, FMODE_READ)) {
374 -			nfs_remove_bad_delegation(inode);
375 -			exception->retry = 1;
376 -			break;
377 -		}
378 373 		if (state == NULL)
379 374 			break;
380 375 		ret = nfs4_schedule_stateid_recovery(server, state);
··· 1649 1654 		nfs_inode_find_state_and_recover(state->inode,
1650 1655 				stateid);
1651 1656 		nfs4_schedule_stateid_recovery(server, state);
1652 -		return 0;
1657 +		return -EAGAIN;
1653 1658 	case -NFS4ERR_DELAY:
1654 1659 	case -NFS4ERR_GRACE:
1655 1660 		set_bit(NFS_DELEGATED_STATE, &state->flags);
··· 2104 2109 	return ret;
2105 2110 }
2106 2111 
2112 +static void nfs_finish_clear_delegation_stateid(struct nfs4_state *state)
2113 +{
2114 +	nfs_remove_bad_delegation(state->inode);
2115 +	write_seqlock(&state->seqlock);
2116 +	nfs4_stateid_copy(&state->stateid, &state->open_stateid);
2117 +	write_sequnlock(&state->seqlock);
2118 +	clear_bit(NFS_DELEGATED_STATE, &state->flags);
2119 +}
2120 +
2121 +static void nfs40_clear_delegation_stateid(struct nfs4_state *state)
2122 +{
2123 +	if (rcu_access_pointer(NFS_I(state->inode)->delegation) != NULL)
2124 +		nfs_finish_clear_delegation_stateid(state);
2125 +}
2126 +
2127 +static int nfs40_open_expired(struct nfs4_state_owner *sp, struct nfs4_state *state)
2128 +{
2129 +	/* NFSv4.0 doesn't allow for delegation recovery on open expire */
2130 +	nfs40_clear_delegation_stateid(state);
2131 +	return nfs4_open_expired(sp, state);
2132 +}
2133 +
2107 2134 #if defined(CONFIG_NFS_V4_1)
2108 -static void nfs41_clear_delegation_stateid(struct nfs4_state *state)
2135 +static void nfs41_check_delegation_stateid(struct nfs4_state *state)
2109 2136 {
2110 2137 	struct nfs_server *server = NFS_SERVER(state->inode);
2111 -	nfs4_stateid *stateid = &state->stateid;
2138 +	nfs4_stateid stateid;
2112 2139 	struct nfs_delegation *delegation;
2113 -	struct rpc_cred *cred = NULL;
2114 -	int status = -NFS4ERR_BAD_STATEID;
2115 -
2116 -	/* If a state reset has been done, test_stateid is unneeded */
2117 -	if (test_bit(NFS_DELEGATED_STATE, &state->flags) == 0)
2118 -		return;
2140 +	struct rpc_cred *cred;
2141 +	int status;
2119 2142 
2120 2143 	/* Get the delegation credential for use by test/free_stateid */
2121 2144 	rcu_read_lock();
2122 2145 	delegation = rcu_dereference(NFS_I(state->inode)->delegation);
2123 -	if (delegation != NULL &&
2124 -	    nfs4_stateid_match(&delegation->stateid, stateid)) {
2125 -		cred = get_rpccred(delegation->cred);
2146 +	if (delegation == NULL) {
2126 2147 		rcu_read_unlock();
2127 -		status = nfs41_test_stateid(server, stateid, cred);
2128 -		trace_nfs4_test_delegation_stateid(state, NULL, status);
2129 -	} else
2130 -		rcu_read_unlock();
2148 +		return;
2149 +	}
2150 +
2151 +	nfs4_stateid_copy(&stateid, &delegation->stateid);
2152 +	cred = get_rpccred(delegation->cred);
2153 +	rcu_read_unlock();
2154 +	status = nfs41_test_stateid(server, &stateid, cred);
2155 +	trace_nfs4_test_delegation_stateid(state, NULL, status);
2131 2156 
2132 2157 	if (status != NFS_OK) {
2133 2158 		/* Free the stateid unless the server explicitly
2134 2159 		 * informs us the stateid is unrecognized. */
2135 2160 		if (status != -NFS4ERR_BAD_STATEID)
2136 -			nfs41_free_stateid(server, stateid, cred);
2137 -		nfs_remove_bad_delegation(state->inode);
2138 -
2139 -		write_seqlock(&state->seqlock);
2140 -		nfs4_stateid_copy(&state->stateid, &state->open_stateid);
2141 -		write_sequnlock(&state->seqlock);
2142 -		clear_bit(NFS_DELEGATED_STATE, &state->flags);
2161 +			nfs41_free_stateid(server, &stateid, cred);
2162 +		nfs_finish_clear_delegation_stateid(state);
2143 2163 	}
2144 2164 
2145 -	if (cred != NULL)
2146 -		put_rpccred(cred);
2165 +	put_rpccred(cred);
2147 2166 }
2148 2167 
2149 2168 /**
··· 2201 2192 {
2202 2193 	int status;
2203 2194 
2204 -	nfs41_clear_delegation_stateid(state);
2195 +	nfs41_check_delegation_stateid(state);
2205 2196 	status = nfs41_check_open_stateid(state);
2206 2197 	if (status != NFS_OK)
2207 2198 		status = nfs4_open_expired(sp, state);
··· 2240 2231 	seq = raw_seqcount_begin(&sp->so_reclaim_seqcount);
2241 2232 
2242 2233 	ret = _nfs4_proc_open(opendata);
2243 -	if (ret != 0) {
2244 -		if (ret == -ENOENT) {
2245 -			dentry = opendata->dentry;
2246 -			if (dentry->d_inode)
2247 -				d_delete(dentry);
2248 -			else if (d_unhashed(dentry))
2249 -				d_add(dentry, NULL);
2250 -
2251 -			nfs_set_verifier(dentry,
2252 -					 nfs_save_change_attribute(opendata->dir->d_inode));
2253 -		}
2234 +	if (ret != 0)
2254 2235 		goto out;
2255 -	}
2256 2236 
2257 2237 	state = nfs4_opendata_to_nfs4_state(opendata);
2258 2238 	ret = PTR_ERR(state);
··· 4839 4841 	case -NFS4ERR_DELEG_REVOKED:
4840 4842 	case -NFS4ERR_ADMIN_REVOKED:
4841 4843 	case -NFS4ERR_BAD_STATEID:
4842 -		if (state == NULL)
4843 -			break;
4844 -		nfs_remove_bad_delegation(state->inode);
4845 4844 	case -NFS4ERR_OPENMODE:
4846 4845 		if (state == NULL)
4847 4846 			break;
··· 8336 8341 static const struct nfs4_state_recovery_ops nfs40_nograce_recovery_ops = {
8337 8342 	.owner_flag_bit = NFS_OWNER_RECLAIM_NOGRACE,
8338 8343 	.state_flag_bit	= NFS_STATE_RECLAIM_NOGRACE,
8339 -	.recover_open	= nfs4_open_expired,
8344 +	.recover_open	= nfs40_open_expired,
8340 8345 	.recover_lock	= nfs4_lock_expired,
8341 8346 	.establish_clid = nfs4_init_clientid,
8342 8347 };
··· 8403 8408 		| NFS_CAP_CHANGE_ATTR
8404 8409 		| NFS_CAP_POSIX_LOCK
8405 8410 		| NFS_CAP_STATEID_NFSV41
8406 -		| NFS_CAP_ATOMIC_OPEN_V1
8407 -		| NFS_CAP_SEEK,
8411 +		| NFS_CAP_ATOMIC_OPEN_V1,
8408 8412 	.init_client = nfs41_init_client,
8409 8413 	.shutdown_client = nfs41_shutdown_client,
8410 8414 	.match_stateid = nfs41_match_stateid,
··· 8425 8431 		| NFS_CAP_CHANGE_ATTR
8426 8432 		| NFS_CAP_POSIX_LOCK
8427 8433 		| NFS_CAP_STATEID_NFSV41
8428 -		| NFS_CAP_ATOMIC_OPEN_V1,
8434 +		| NFS_CAP_ATOMIC_OPEN_V1
8435 +		| NFS_CAP_SEEK,
8429 8436 	.init_client = nfs41_init_client,
8430 8437 	.shutdown_client = nfs41_shutdown_client,
8431 8438 	.match_stateid = nfs41_match_stateid,
-2
fs/nfs/write.c
··· 715 715 
716 716 	if (test_and_clear_bit(PG_INODE_REF, &req->wb_flags))
717 717 		nfs_release_request(req);
718 -	else
719 -		WARN_ON_ONCE(1);
720 718 }
721 719 
722 720 static void
+2 -2
include/dt-bindings/pinctrl/dra.h
··· 40 40 
41 41 /* Active pin states */
42 42 #define PIN_OUTPUT		(0 | PULL_DIS)
43 -#define PIN_OUTPUT_PULLUP	(PIN_OUTPUT | PULL_ENA | PULL_UP)
44 -#define PIN_OUTPUT_PULLDOWN	(PIN_OUTPUT | PULL_ENA)
43 +#define PIN_OUTPUT_PULLUP	(PULL_UP)
44 +#define PIN_OUTPUT_PULLDOWN	(0)
45 45 #define PIN_INPUT		(INPUT_EN | PULL_DIS)
46 46 #define PIN_INPUT_SLEW		(INPUT_EN | SLEWCONTROL)
47 47 #define PIN_INPUT_PULLUP	(PULL_ENA | INPUT_EN | PULL_UP)
+5 -2
include/linux/bitops.h
··· 18 18  * position @h. For example
19 19  * GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000.
20 20  */
21 -#define GENMASK(h, l)		(((U32_C(1) << ((h) - (l) + 1)) - 1) << (l))
22 -#define GENMASK_ULL(h, l)	(((U64_C(1) << ((h) - (l) + 1)) - 1) << (l))
21 +#define GENMASK(h, l) \
22 +	(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
23 +
24 +#define GENMASK_ULL(h, l) \
25 +	(((~0ULL) << (l)) & (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
23 26 
24 27 extern unsigned int __sw_hweight8(unsigned int w);
25 28 extern unsigned int __sw_hweight16(unsigned int w);
+6
include/linux/can/dev.h
··· 99 99 	return 1;
100 100 }
101 101 
102 +static inline bool can_is_canfd_skb(const struct sk_buff *skb)
103 +{
104 +	/* the CAN specific type of skb is identified by its data length */
105 +	return skb->len == CANFD_MTU;
106 +}
107 +
102 108 /* get data length from can_dlc with sanitized can_dlc */
103 109 u8 can_dlc2len(u8 can_dlc);
104 110 
+1 -1
include/linux/inetdevice.h
··· 242 242 static __inline__ __be32 inet_make_mask(int logmask)
243 243 {
244 244 	if (logmask)
245 -		return htonl(~((1<<(32-logmask))-1));
245 +		return htonl(~((1U<<(32-logmask))-1));
246 246 	return 0;
247 247 }
248 248 
-5
include/linux/kernel_stat.h
··· 77 77 	return kstat_cpu(cpu).irqs_sum;
78 78 }
79 79 
80 -/*
81 - * Lock/unlock the current runqueue - to extract task statistics:
82 - */
83 -extern unsigned long long task_delta_exec(struct task_struct *);
84 -
85 80 extern void account_user_time(struct task_struct *, cputime_t, cputime_t);
86 81 extern void account_system_time(struct task_struct *, int, cputime_t, cputime_t);
87 82 extern void account_steal_time(cputime_t);
+11
include/linux/nfs_xdr.h
··· 1224 1224 	unsigned int status;
1225 1225 };
1226 1226 
1227 +static inline void
1228 +nfs_free_pnfs_ds_cinfo(struct pnfs_ds_commit_info *cinfo)
1229 +{
1230 +	kfree(cinfo->buckets);
1231 +}
1232 +
1227 1233 #else
1228 1234 
1229 1235 struct pnfs_ds_commit_info {
1230 1236 };
1237 +
1238 +static inline void
1239 +nfs_free_pnfs_ds_cinfo(struct pnfs_ds_commit_info *cinfo)
1240 +{
1241 +}
1231 1242 
1232 1243 #endif /* CONFIG_NFS_V4_1 */
1233 1244 
+5 -3
include/linux/pm_domain.h
··· 72 72 	bool max_off_time_changed;
73 73 	bool cached_power_down_ok;
74 74 	struct gpd_cpuidle_data *cpuidle_data;
75 -	void (*attach_dev)(struct device *dev);
76 -	void (*detach_dev)(struct device *dev);
75 +	int (*attach_dev)(struct generic_pm_domain *domain,
76 +			  struct device *dev);
77 +	void (*detach_dev)(struct generic_pm_domain *domain,
78 +			   struct device *dev);
77 79 };
78 80 
79 81 static inline struct generic_pm_domain *pd_to_genpd(struct dev_pm_domain *pd)
··· 106 104 	struct notifier_block nb;
107 105 	struct mutex lock;
108 106 	unsigned int refcount;
109 -	bool need_restore;
107 +	int need_restore;
110 108 };
111 109 
112 110 #ifdef CONFIG_PM_GENERIC_DOMAINS
-3
include/linux/power/charger-manager.h
··· 253 253 	struct device *dev;
254 254 	struct charger_desc *desc;
255 255 
256 -	struct power_supply *fuel_gauge;
257 -	struct power_supply **charger_stat;
258 -
259 256 #ifdef CONFIG_THERMAL
260 257 	struct thermal_zone_device *tzd_batt;
261 258 #endif
+6
include/linux/power_supply.h
··· 200 200 	void (*external_power_changed)(struct power_supply *psy);
201 201 	void (*set_charged)(struct power_supply *psy);
202 202 
203 +	/*
204 +	 * Set if thermal zone should not be created for this power supply.
205 +	 * For example for virtual supplies forwarding calls to actual
206 +	 * sensors or other supplies.
207 +	 */
208 +	bool no_thermal;
203 209 	/* For APM emulation, think legacy userspace. */
204 210 	int use_for_apm;
205 211 
-2
include/net/netfilter/nf_tables.h
··· 396 396 /**
397 397  * struct nft_trans - nf_tables object update in transaction
398 398  *
399 -	* @rcu_head: rcu head to defer release of transaction data
400 399  * @list: used internally
401 400  * @msg_type: message type
402 401  * @ctx: transaction context
403 402  * @data: internal information related to the transaction
404 403  */
405 404 struct nft_trans {
406 -	struct rcu_head rcu_head;
407 405 	struct list_head list;
408 406 	int msg_type;
409 407 	struct nft_ctx ctx;
+18
include/net/vxlan.h
··· 8 8 #define VNI_HASH_BITS	10
9 9 #define VNI_HASH_SIZE	(1<<VNI_HASH_BITS)
10 10 
11 +/* VXLAN protocol header */
12 +struct vxlanhdr {
13 +	__be32 vx_flags;
14 +	__be32 vx_vni;
15 +};
16 +
11 17 struct vxlan_sock;
12 18 typedef void (vxlan_rcv_t)(struct vxlan_sock *vh, struct sk_buff *skb, __be32 key);
13 19 
··· 50 44 		   struct rtable *rt, struct sk_buff *skb,
51 45 		   __be32 src, __be32 dst, __u8 tos, __u8 ttl, __be16 df,
52 46 		   __be16 src_port, __be16 dst_port, __be32 vni, bool xnet);
47 +
48 +static inline bool vxlan_gso_check(struct sk_buff *skb)
49 +{
50 +	if ((skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL) &&
51 +	    (skb->inner_protocol_type != ENCAP_TYPE_ETHER ||
52 +	     skb->inner_protocol != htons(ETH_P_TEB) ||
53 +	     (skb_inner_mac_header(skb) - skb_transport_header(skb) !=
54 +	      sizeof(struct udphdr) + sizeof(struct vxlanhdr))))
55 +		return false;
56 +
57 +	return true;
58 +}
53 59 
54 60 /* IP header + UDP + VXLAN + Ethernet header */
55 61 #define VXLAN_HEADROOM (20 + 8 + 8 + 14)
+2
include/sound/soc-dpcm.h
··· 102 102 	/* state and update */
103 103 	enum snd_soc_dpcm_update runtime_update;
104 104 	enum snd_soc_dpcm_state state;
105 +
106 +	int trigger_pending; /* trigger cmd + 1 if pending, 0 if not */
105 107 };
106 108 
107 109 /* can this BE stop and free */
+5 -3
kernel/events/core.c
··· 1562 1562 1563 1563 if (!task) { 1564 1564 /* 1565 - * Per cpu events are removed via an smp call and 1566 - * the removal is always successful. 1565 + * Per cpu events are removed via an smp call. The removal can 1566 + * fail if the CPU is currently offline, but in that case we 1567 + * already called __perf_remove_from_context from 1568 + * perf_event_exit_cpu. 1567 1569 */ 1568 1570 cpu_function_call(event->cpu, __perf_remove_from_context, &re); 1569 1571 return; ··· 8119 8117 8120 8118 static void __perf_event_exit_context(void *__info) 8121 8119 { 8122 - struct remove_event re = { .detach_group = false }; 8120 + struct remove_event re = { .detach_group = true }; 8123 8121 struct perf_event_context *ctx = __info; 8124 8122 8125 8123 perf_pmu_rotate_stop(ctx->pmu);
+2 -2
kernel/power/suspend.c
··· 146 146 147 147 static int platform_suspend_prepare_late(suspend_state_t state) 148 148 { 149 - return state == PM_SUSPEND_FREEZE && freeze_ops->prepare ? 149 + return state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->prepare ? 150 150 freeze_ops->prepare() : 0; 151 151 } 152 152 ··· 164 164 165 165 static void platform_resume_early(suspend_state_t state) 166 166 { 167 - if (state == PM_SUSPEND_FREEZE && freeze_ops->restore) 167 + if (state == PM_SUSPEND_FREEZE && freeze_ops && freeze_ops->restore) 168 168 freeze_ops->restore(); 169 169 } 170 170
+21 -42
kernel/sched/core.c
··· 2475 2475 EXPORT_PER_CPU_SYMBOL(kernel_cpustat); 2476 2476 2477 2477 /* 2478 - * Return any ns on the sched_clock that have not yet been accounted in 2479 - * @p in case that task is currently running. 2480 - * 2481 - * Called with task_rq_lock() held on @rq. 2482 - */ 2483 - static u64 do_task_delta_exec(struct task_struct *p, struct rq *rq) 2484 - { 2485 - u64 ns = 0; 2486 - 2487 - /* 2488 - * Must be ->curr _and_ ->on_rq. If dequeued, we would 2489 - * project cycles that may never be accounted to this 2490 - * thread, breaking clock_gettime(). 2491 - */ 2492 - if (task_current(rq, p) && task_on_rq_queued(p)) { 2493 - update_rq_clock(rq); 2494 - ns = rq_clock_task(rq) - p->se.exec_start; 2495 - if ((s64)ns < 0) 2496 - ns = 0; 2497 - } 2498 - 2499 - return ns; 2500 - } 2501 - 2502 - unsigned long long task_delta_exec(struct task_struct *p) 2503 - { 2504 - unsigned long flags; 2505 - struct rq *rq; 2506 - u64 ns = 0; 2507 - 2508 - rq = task_rq_lock(p, &flags); 2509 - ns = do_task_delta_exec(p, rq); 2510 - task_rq_unlock(rq, p, &flags); 2511 - 2512 - return ns; 2513 - } 2514 - 2515 - /* 2516 2478 * Return accounted runtime for the task. 2517 2479 * In case the task is currently running, return the runtime plus current's 2518 2480 * pending runtime that have not been accounted yet. ··· 2483 2521 { 2484 2522 unsigned long flags; 2485 2523 struct rq *rq; 2486 - u64 ns = 0; 2524 + u64 ns; 2487 2525 2488 2526 #if defined(CONFIG_64BIT) && defined(CONFIG_SMP) 2489 2527 /* ··· 2502 2540 #endif 2503 2541 2504 2542 rq = task_rq_lock(p, &flags); 2505 - ns = p->se.sum_exec_runtime + do_task_delta_exec(p, rq); 2543 + /* 2544 + * Must be ->curr _and_ ->on_rq. If dequeued, we would 2545 + * project cycles that may never be accounted to this 2546 + * thread, breaking clock_gettime(). 
2547 + */ 2548 + if (task_current(rq, p) && task_on_rq_queued(p)) { 2549 + update_rq_clock(rq); 2550 + p->sched_class->update_curr(rq); 2551 + } 2552 + ns = p->se.sum_exec_runtime; 2506 2553 task_rq_unlock(rq, p, &flags); 2507 2554 2508 2555 return ns; ··· 6339 6368 if (!sched_debug()) 6340 6369 break; 6341 6370 } 6371 + 6372 + if (!level) 6373 + return; 6374 + 6342 6375 /* 6343 6376 * 'level' contains the number of unique distances, excluding the 6344 6377 * identity distance node_distance(i,i). ··· 7419 7444 if (unlikely(running)) 7420 7445 put_prev_task(rq, tsk); 7421 7446 7422 - tg = container_of(task_css_check(tsk, cpu_cgrp_id, 7423 - lockdep_is_held(&tsk->sighand->siglock)), 7447 + /* 7448 + * All callers are synchronized by task_rq_lock(); we do not use RCU 7449 + * which is pointless here. Thus, we pass "true" to task_css_check() 7450 + * to prevent lockdep warnings. 7451 + */ 7452 + tg = container_of(task_css_check(tsk, cpu_cgrp_id, true), 7424 7453 struct task_group, css); 7425 7454 tg = autogroup_task_group(tsk, tg); 7426 7455 tsk->sched_task_group = tg;
+2
kernel/sched/deadline.c
··· 1701 1701 .prio_changed = prio_changed_dl, 1702 1702 .switched_from = switched_from_dl, 1703 1703 .switched_to = switched_to_dl, 1704 + 1705 + .update_curr = update_curr_dl, 1704 1706 };
+14
kernel/sched/fair.c
··· 726 726 account_cfs_rq_runtime(cfs_rq, delta_exec); 727 727 } 728 728 729 + static void update_curr_fair(struct rq *rq) 730 + { 731 + update_curr(cfs_rq_of(&rq->curr->se)); 732 + } 733 + 729 734 static inline void 730 735 update_stats_wait_start(struct cfs_rq *cfs_rq, struct sched_entity *se) 731 736 { ··· 1183 1178 if ((cur->flags & PF_EXITING) || is_idle_task(cur)) 1184 1179 cur = NULL; 1185 1180 raw_spin_unlock_irq(&dst_rq->lock); 1181 + 1182 + /* 1183 + * Because we have preemption enabled we can get migrated around and 1184 + * end try selecting ourselves (current == env->p) as a swap candidate. 1185 + */ 1186 + if (cur == env->p) 1187 + goto unlock; 1186 1188 1187 1189 /* 1188 1190 * "imp" is the fault differential for the source task between the ··· 7960 7948 .switched_to = switched_to_fair, 7961 7949 7962 7950 .get_rr_interval = get_rr_interval_fair, 7951 + 7952 + .update_curr = update_curr_fair, 7963 7953 7964 7954 #ifdef CONFIG_FAIR_GROUP_SCHED 7965 7955 .task_move_group = task_move_group_fair,
+2
kernel/sched/rt.c
··· 2128 2128 2129 2129 .prio_changed = prio_changed_rt, 2130 2130 .switched_to = switched_to_rt, 2131 + 2132 + .update_curr = update_curr_rt, 2131 2133 }; 2132 2134 2133 2135 #ifdef CONFIG_SCHED_DEBUG
+2
kernel/sched/sched.h
··· 1135 1135 unsigned int (*get_rr_interval) (struct rq *rq, 1136 1136 struct task_struct *task); 1137 1137 1138 + void (*update_curr) (struct rq *rq); 1139 + 1138 1140 #ifdef CONFIG_FAIR_GROUP_SCHED 1139 1141 void (*task_move_group) (struct task_struct *p, int on_rq); 1140 1142 #endif
+1 -1
kernel/time/posix-cpu-timers.c
··· 553 553 *sample = cputime_to_expires(cputime.utime); 554 554 break; 555 555 case CPUCLOCK_SCHED: 556 - *sample = cputime.sum_exec_runtime + task_delta_exec(p); 556 + *sample = cputime.sum_exec_runtime; 557 557 break; 558 558 } 559 559 return 0;
+2 -2
lib/Makefile
··· 10 10 lib-y := ctype.o string.o vsprintf.o cmdline.o \ 11 11 rbtree.o radix-tree.o dump_stack.o timerqueue.o\ 12 12 idr.o int_sqrt.o extable.o \ 13 - sha1.o md5.o irq_regs.o reciprocal_div.o argv_split.o \ 13 + sha1.o md5.o irq_regs.o argv_split.o \ 14 14 proportions.o flex_proportions.o ratelimit.o show_mem.o \ 15 15 is_single_threaded.o plist.o decompress.o kobject_uevent.o \ 16 16 earlycpio.o ··· 26 26 bust_spinlocks.o hexdump.o kasprintf.o bitmap.o scatterlist.o \ 27 27 gcd.o lcm.o list_sort.o uuid.o flex_array.o iovec.o clz_ctz.o \ 28 28 bsearch.o find_last_bit.o find_next_bit.o llist.o memweight.o kfifo.o \ 29 - percpu-refcount.o percpu_ida.o hash.o rhashtable.o 29 + percpu-refcount.o percpu_ida.o hash.o rhashtable.o reciprocal_div.o 30 30 obj-y += string_helpers.o 31 31 obj-$(CONFIG_TEST_STRING_HELPERS) += test-string_helpers.o 32 32 obj-y += kstrtox.o
+2 -2
mm/iov_iter.c
··· 911 911 if (i->nr_segs == 1) 912 912 return i->count; 913 913 else if (i->type & ITER_BVEC) 914 - return min(i->count, i->iov->iov_len - i->iov_offset); 915 - else 916 914 return min(i->count, i->bvec->bv_len - i->iov_offset); 915 + else 916 + return min(i->count, i->iov->iov_len - i->iov_offset); 917 917 } 918 918 EXPORT_SYMBOL(iov_iter_single_seg_count); 919 919
+1 -2
net/bridge/br_multicast.c
··· 813 813 return; 814 814 815 815 if (port) { 816 - __skb_push(skb, sizeof(struct ethhdr)); 817 816 skb->dev = port->dev; 818 817 NF_HOOK(NFPROTO_BRIDGE, NF_BR_LOCAL_OUT, skb, NULL, skb->dev, 819 - dev_queue_xmit); 818 + br_dev_queue_push_xmit); 820 819 } else { 821 820 br_multicast_select_own_querier(br, ip, skb); 822 821 netif_rx(skb);
+6 -17
net/core/skbuff.c
··· 552 552 case SKB_FCLONE_CLONE: 553 553 fclones = container_of(skb, struct sk_buff_fclones, skb2); 554 554 555 - /* Warning : We must perform the atomic_dec_and_test() before 556 - * setting skb->fclone back to SKB_FCLONE_FREE, otherwise 557 - * skb_clone() could set clone_ref to 2 before our decrement. 558 - * Anyway, if we are going to free the structure, no need to 559 - * rewrite skb->fclone. 555 + /* The clone portion is available for 556 + * fast-cloning again. 560 557 */ 561 - if (atomic_dec_and_test(&fclones->fclone_ref)) { 558 + skb->fclone = SKB_FCLONE_FREE; 559 + 560 + if (atomic_dec_and_test(&fclones->fclone_ref)) 562 561 kmem_cache_free(skbuff_fclone_cache, fclones); 563 - } else { 564 - /* The clone portion is available for 565 - * fast-cloning again. 566 - */ 567 - skb->fclone = SKB_FCLONE_FREE; 568 - } 569 562 break; 570 563 } 571 564 } ··· 880 887 if (skb->fclone == SKB_FCLONE_ORIG && 881 888 n->fclone == SKB_FCLONE_FREE) { 882 889 n->fclone = SKB_FCLONE_CLONE; 883 - /* As our fastclone was free, clone_ref must be 1 at this point. 884 - * We could use atomic_inc() here, but it is faster 885 - * to set the final value. 886 - */ 887 - atomic_set(&fclones->fclone_ref, 2); 890 + atomic_inc(&fclones->fclone_ref); 888 891 } else { 889 892 if (skb_pfmemalloc(skb)) 890 893 gfp_mask |= __GFP_MEMALLOC;
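The skbuff.c hunk above replaces a racy "set the final refcount value" scheme with plain increments and decrements, and marks the clone slot free before the final decrement. A minimal stand-alone model of that pattern, using C11 atomics — the names (`fclones`, `take_clone`, `drop`) are illustrative, not kernel API:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Toy model of the fclone refcount pattern after this fix: the clone slot
 * is marked free *before* the shared count is dropped, and taking the slot
 * uses a plain increment rather than assuming a known starting value. */
struct fclones {
    atomic_int ref;   /* shared by the original skb and its clone slot */
    bool clone_free;  /* true when the clone half may be reused */
};

static void fclones_init(struct fclones *f)
{
    atomic_init(&f->ref, 1);
    f->clone_free = true;
}

static void take_clone(struct fclones *f)
{
    f->clone_free = false;
    /* atomic_inc: no assumption that ref was exactly 1 here */
    atomic_fetch_add(&f->ref, 1);
}

/* Returns true when the whole block can be freed. */
static bool drop(struct fclones *f, bool is_clone)
{
    if (is_clone)
        f->clone_free = true; /* publish reuse before the final decrement */
    return atomic_fetch_sub(&f->ref, 1) == 1;
}
```

The point of the reordering is that once the structure is going to be freed anyway, there is no need to rewrite the state field, and the plain `atomic_inc`/`atomic_dec_and_test` pair has no window in which another CPU can observe a stale count.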
+18 -18
net/dcb/dcbnl.c
··· 1080 1080 if (!app) 1081 1081 return -EMSGSIZE; 1082 1082 1083 - spin_lock(&dcb_lock); 1083 + spin_lock_bh(&dcb_lock); 1084 1084 list_for_each_entry(itr, &dcb_app_list, list) { 1085 1085 if (itr->ifindex == netdev->ifindex) { 1086 1086 err = nla_put(skb, DCB_ATTR_IEEE_APP, sizeof(itr->app), 1087 1087 &itr->app); 1088 1088 if (err) { 1089 - spin_unlock(&dcb_lock); 1089 + spin_unlock_bh(&dcb_lock); 1090 1090 return -EMSGSIZE; 1091 1091 } 1092 1092 } ··· 1097 1097 else 1098 1098 dcbx = -EOPNOTSUPP; 1099 1099 1100 - spin_unlock(&dcb_lock); 1100 + spin_unlock_bh(&dcb_lock); 1101 1101 nla_nest_end(skb, app); 1102 1102 1103 1103 /* get peer info if available */ ··· 1234 1234 } 1235 1235 1236 1236 /* local app */ 1237 - spin_lock(&dcb_lock); 1237 + spin_lock_bh(&dcb_lock); 1238 1238 app = nla_nest_start(skb, DCB_ATTR_CEE_APP_TABLE); 1239 1239 if (!app) 1240 1240 goto dcb_unlock; ··· 1271 1271 else 1272 1272 dcbx = -EOPNOTSUPP; 1273 1273 1274 - spin_unlock(&dcb_lock); 1274 + spin_unlock_bh(&dcb_lock); 1275 1275 1276 1276 /* features flags */ 1277 1277 if (ops->getfeatcfg) { ··· 1326 1326 return 0; 1327 1327 1328 1328 dcb_unlock: 1329 - spin_unlock(&dcb_lock); 1329 + spin_unlock_bh(&dcb_lock); 1330 1330 nla_put_failure: 1331 1331 return err; 1332 1332 } ··· 1762 1762 struct dcb_app_type *itr; 1763 1763 u8 prio = 0; 1764 1764 1765 - spin_lock(&dcb_lock); 1765 + spin_lock_bh(&dcb_lock); 1766 1766 if ((itr = dcb_app_lookup(app, dev->ifindex, 0))) 1767 1767 prio = itr->app.priority; 1768 - spin_unlock(&dcb_lock); 1768 + spin_unlock_bh(&dcb_lock); 1769 1769 1770 1770 return prio; 1771 1771 } ··· 1789 1789 if (dev->dcbnl_ops->getdcbx) 1790 1790 event.dcbx = dev->dcbnl_ops->getdcbx(dev); 1791 1791 1792 - spin_lock(&dcb_lock); 1792 + spin_lock_bh(&dcb_lock); 1793 1793 /* Search for existing match and replace */ 1794 1794 if ((itr = dcb_app_lookup(new, dev->ifindex, 0))) { 1795 1795 if (new->priority) ··· 1804 1804 if (new->priority) 1805 1805 err = dcb_app_add(new, 
dev->ifindex); 1806 1806 out: 1807 - spin_unlock(&dcb_lock); 1807 + spin_unlock_bh(&dcb_lock); 1808 1808 if (!err) 1809 1809 call_dcbevent_notifiers(DCB_APP_EVENT, &event); 1810 1810 return err; ··· 1823 1823 struct dcb_app_type *itr; 1824 1824 u8 prio = 0; 1825 1825 1826 - spin_lock(&dcb_lock); 1826 + spin_lock_bh(&dcb_lock); 1827 1827 if ((itr = dcb_app_lookup(app, dev->ifindex, 0))) 1828 1828 prio |= 1 << itr->app.priority; 1829 - spin_unlock(&dcb_lock); 1829 + spin_unlock_bh(&dcb_lock); 1830 1830 1831 1831 return prio; 1832 1832 } ··· 1850 1850 if (dev->dcbnl_ops->getdcbx) 1851 1851 event.dcbx = dev->dcbnl_ops->getdcbx(dev); 1852 1852 1853 - spin_lock(&dcb_lock); 1853 + spin_lock_bh(&dcb_lock); 1854 1854 /* Search for existing match and abort if found */ 1855 1855 if (dcb_app_lookup(new, dev->ifindex, new->priority)) { 1856 1856 err = -EEXIST; ··· 1859 1859 1860 1860 err = dcb_app_add(new, dev->ifindex); 1861 1861 out: 1862 - spin_unlock(&dcb_lock); 1862 + spin_unlock_bh(&dcb_lock); 1863 1863 if (!err) 1864 1864 call_dcbevent_notifiers(DCB_APP_EVENT, &event); 1865 1865 return err; ··· 1882 1882 if (dev->dcbnl_ops->getdcbx) 1883 1883 event.dcbx = dev->dcbnl_ops->getdcbx(dev); 1884 1884 1885 - spin_lock(&dcb_lock); 1885 + spin_lock_bh(&dcb_lock); 1886 1886 /* Search for existing match and remove it. 
*/ 1887 1887 if ((itr = dcb_app_lookup(del, dev->ifindex, del->priority))) { 1888 1888 list_del(&itr->list); ··· 1890 1890 err = 0; 1891 1891 } 1892 1892 1893 - spin_unlock(&dcb_lock); 1893 + spin_unlock_bh(&dcb_lock); 1894 1894 if (!err) 1895 1895 call_dcbevent_notifiers(DCB_APP_EVENT, &event); 1896 1896 return err; ··· 1902 1902 struct dcb_app_type *app; 1903 1903 struct dcb_app_type *tmp; 1904 1904 1905 - spin_lock(&dcb_lock); 1905 + spin_lock_bh(&dcb_lock); 1906 1906 list_for_each_entry_safe(app, tmp, &dcb_app_list, list) { 1907 1907 list_del(&app->list); 1908 1908 kfree(app); 1909 1909 } 1910 - spin_unlock(&dcb_lock); 1910 + spin_unlock_bh(&dcb_lock); 1911 1911 } 1912 1912 1913 1913 static int __init dcbnl_init(void)
+4
net/ipv4/fib_rules.c
··· 62 62 else 63 63 res->tclassid = 0; 64 64 #endif 65 + 66 + if (err == -ESRCH) 67 + err = -ENETUNREACH; 68 + 65 69 return err; 66 70 } 67 71 EXPORT_SYMBOL_GPL(__fib_lookup);
+1
net/ipv4/netfilter/nft_masq_ipv4.c
··· 24 24 struct nf_nat_range range; 25 25 unsigned int verdict; 26 26 27 + memset(&range, 0, sizeof(range)); 27 28 range.flags = priv->flags; 28 29 29 30 verdict = nf_nat_masquerade_ipv4(pkt->skb, pkt->ops->hooknum,
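This one-line fix (mirrored for IPv6 further down) zeroes the on-stack `nf_nat_range` before setting only `flags`, so the untouched min/max fields no longer carry stack garbage into the NAT core. A minimal sketch of the bug class, with an illustrative struct (not the real `nf_nat_range` layout):

```c
#include <string.h>

/* Simplified stand-in for nf_nat_range: several fields, of which the
 * caller sets only one. Field names here are made up for illustration. */
struct range {
    unsigned int flags;
    unsigned int min_addr;
    unsigned int max_addr;
};

/* Without the memset, min_addr/max_addr would hold whatever happened to
 * be on the stack; zeroing first gives every unset field a well-defined
 * "no restriction" value of 0. */
static void range_init(struct range *r, unsigned int flags)
{
    memset(r, 0, sizeof(*r));
    r->flags = flags;
}
```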
+2 -2
net/ipv4/tcp_input.c
··· 5232 5232 if (len < (th->doff << 2) || tcp_checksum_complete_user(sk, skb)) 5233 5233 goto csum_error; 5234 5234 5235 - if (!th->ack && !th->rst) 5235 + if (!th->ack && !th->rst && !th->syn) 5236 5236 goto discard; 5237 5237 5238 5238 /* ··· 5651 5651 goto discard; 5652 5652 } 5653 5653 5654 - if (!th->ack && !th->rst) 5654 + if (!th->ack && !th->rst && !th->syn) 5655 5655 goto discard; 5656 5656 5657 5657 if (!tcp_validate_incoming(sk, skb, th, 0))
+4
net/ipv6/ip6mr.c
··· 1439 1439 1440 1440 void ip6_mr_cleanup(void) 1441 1441 { 1442 + rtnl_unregister(RTNL_FAMILY_IP6MR, RTM_GETROUTE); 1443 + #ifdef CONFIG_IPV6_PIMSM_V2 1444 + inet6_del_protocol(&pim6_protocol, IPPROTO_PIM); 1445 + #endif 1442 1446 unregister_netdevice_notifier(&ip6_mr_notifier); 1443 1447 unregister_pernet_subsys(&ip6mr_net_ops); 1444 1448 kmem_cache_destroy(mrt_cachep);
+1
net/ipv6/netfilter/nft_masq_ipv6.c
··· 25 25 struct nf_nat_range range; 26 26 unsigned int verdict; 27 27 28 + memset(&range, 0, sizeof(range)); 28 29 range.flags = priv->flags; 29 30 30 31 verdict = nf_nat_masquerade_ipv6(pkt->skb, &range, pkt->out);
+5 -1
net/ipx/af_ipx.c
··· 1764 1764 struct ipxhdr *ipx = NULL; 1765 1765 struct sk_buff *skb; 1766 1766 int copied, rc; 1767 + bool locked = true; 1767 1768 1768 1769 lock_sock(sk); 1769 1770 /* put the autobinding in */ ··· 1791 1790 if (sock_flag(sk, SOCK_ZAPPED)) 1792 1791 goto out; 1793 1792 1793 + release_sock(sk); 1794 + locked = false; 1794 1795 skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT, 1795 1796 flags & MSG_DONTWAIT, &rc); 1796 1797 if (!skb) { ··· 1828 1825 out_free: 1829 1826 skb_free_datagram(sk, skb); 1830 1827 out: 1831 - release_sock(sk); 1828 + if (locked) 1829 + release_sock(sk); 1832 1830 return rc; 1833 1831 } 1834 1832
+6 -9
net/mac80211/rc80211_minstrel_ht.c
··· 394 394 cur_thr = mi->groups[cur_group].rates[cur_idx].cur_tp; 395 395 cur_prob = mi->groups[cur_group].rates[cur_idx].probability; 396 396 397 - tmp_group = tp_list[j - 1] / MCS_GROUP_RATES; 398 - tmp_idx = tp_list[j - 1] % MCS_GROUP_RATES; 399 - tmp_thr = mi->groups[tmp_group].rates[tmp_idx].cur_tp; 400 - tmp_prob = mi->groups[tmp_group].rates[tmp_idx].probability; 401 - 402 - while (j > 0 && (cur_thr > tmp_thr || 403 - (cur_thr == tmp_thr && cur_prob > tmp_prob))) { 404 - j--; 397 + do { 405 398 tmp_group = tp_list[j - 1] / MCS_GROUP_RATES; 406 399 tmp_idx = tp_list[j - 1] % MCS_GROUP_RATES; 407 400 tmp_thr = mi->groups[tmp_group].rates[tmp_idx].cur_tp; 408 401 tmp_prob = mi->groups[tmp_group].rates[tmp_idx].probability; 409 - } 402 + if (cur_thr < tmp_thr || 403 + (cur_thr == tmp_thr && cur_prob <= tmp_prob)) 404 + break; 405 + j--; 406 + } while (j > 0); 410 407 411 408 if (j < MAX_THR_RATES - 1) { 412 409 memmove(&tp_list[j + 1], &tp_list[j], (sizeof(*tp_list) *
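The minstrel_ht rewrite above folds a duplicated "prime then loop" comparison into a single `do/while` that evaluates the neighbor once per step and breaks as soon as the current rate no longer outranks it — the inner step of an insertion sort over a small fixed-size ranking array. A sketch of that step on plain ints standing in for the (throughput, probability) ranking; `insert_desc` is an illustrative name, not the kernel function:

```c
#include <string.h>

/* Insertion step in the style of the fixed minstrel_ht loop: walk j down
 * from the end while the candidate outranks list[j-1], then memmove the
 * displaced tail right and drop the candidate in. The list stays sorted
 * descending and keeps a fixed length; the lowest entry falls off. */
static void insert_desc(int *list, int len, int val)
{
    int j = len;

    do {
        if (val <= list[j - 1])
            break;          /* neighbor ranks at least as high: stop */
        j--;
    } while (j > 0);

    if (j >= len)
        return;             /* candidate ranks below every entry */

    memmove(&list[j + 1], &list[j], (len - 1 - j) * sizeof(*list));
    list[j] = val;
}
```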
+6
net/netfilter/ipset/ip_set_core.c
··· 1863 1863 if (*op < IP_SET_OP_VERSION) { 1864 1864 /* Check the version at the beginning of operations */ 1865 1865 struct ip_set_req_version *req_version = data; 1866 + 1867 + if (*len < sizeof(struct ip_set_req_version)) { 1868 + ret = -EINVAL; 1869 + goto done; 1870 + } 1871 + 1866 1872 if (req_version->version != IPSET_PROTOCOL) { 1867 1873 ret = -EPROTO; 1868 1874 goto done;
+2
net/netfilter/ipvs/ip_vs_xmit.c
··· 846 846 new_skb = skb_realloc_headroom(skb, max_headroom); 847 847 if (!new_skb) 848 848 goto error; 849 + if (skb->sk) 850 + skb_set_owner_w(new_skb, skb->sk); 849 851 consume_skb(skb); 850 852 skb = new_skb; 851 853 }
+8 -6
net/netfilter/nf_conntrack_core.c
··· 611 611 */ 612 612 NF_CT_ASSERT(!nf_ct_is_confirmed(ct)); 613 613 pr_debug("Confirming conntrack %p\n", ct); 614 - /* We have to check the DYING flag inside the lock to prevent 615 - a race against nf_ct_get_next_corpse() possibly called from 616 - user context, else we insert an already 'dead' hash, blocking 617 - further use of that particular connection -JM */ 614 + 615 + /* We have to check the DYING flag after unlink to prevent 616 + * a race against nf_ct_get_next_corpse() possibly called from 617 + * user context, else we insert an already 'dead' hash, blocking 618 + * further use of that particular connection -JM. 619 + */ 620 + nf_ct_del_from_dying_or_unconfirmed_list(ct); 618 621 619 622 if (unlikely(nf_ct_is_dying(ct))) { 623 + nf_ct_add_to_dying_list(ct); 620 624 nf_conntrack_double_unlock(hash, reply_hash); 621 625 local_bh_enable(); 622 626 return NF_ACCEPT; ··· 639 635 &h->tuple) && 640 636 zone == nf_ct_zone(nf_ct_tuplehash_to_ctrack(h))) 641 637 goto out; 642 - 643 - nf_ct_del_from_dying_or_unconfirmed_list(ct); 644 638 645 639 /* Timer relative to confirmation time, not original 646 640 setting time, otherwise we'd get timer wrap in
+8 -16
net/netfilter/nf_tables_api.c
··· 3484 3484 } 3485 3485 } 3486 3486 3487 - /* Schedule objects for release via rcu to make sure no packets are accesing 3488 - * removed rules. 3489 - */ 3490 - static void nf_tables_commit_release_rcu(struct rcu_head *rt) 3487 + static void nf_tables_commit_release(struct nft_trans *trans) 3491 3488 { 3492 - struct nft_trans *trans = container_of(rt, struct nft_trans, rcu_head); 3493 - 3494 3489 switch (trans->msg_type) { 3495 3490 case NFT_MSG_DELTABLE: 3496 3491 nf_tables_table_destroy(&trans->ctx); ··· 3607 3612 } 3608 3613 } 3609 3614 3615 + synchronize_rcu(); 3616 + 3610 3617 list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) { 3611 3618 list_del(&trans->list); 3612 - trans->ctx.nla = NULL; 3613 - call_rcu(&trans->rcu_head, nf_tables_commit_release_rcu); 3619 + nf_tables_commit_release(trans); 3614 3620 } 3615 3621 3616 3622 nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN); ··· 3619 3623 return 0; 3620 3624 } 3621 3625 3622 - /* Schedule objects for release via rcu to make sure no packets are accesing 3623 - * aborted rules. 3624 - */ 3625 - static void nf_tables_abort_release_rcu(struct rcu_head *rt) 3626 + static void nf_tables_abort_release(struct nft_trans *trans) 3626 3627 { 3627 - struct nft_trans *trans = container_of(rt, struct nft_trans, rcu_head); 3628 - 3629 3628 switch (trans->msg_type) { 3630 3629 case NFT_MSG_NEWTABLE: 3631 3630 nf_tables_table_destroy(&trans->ctx); ··· 3716 3725 } 3717 3726 } 3718 3727 3728 + synchronize_rcu(); 3729 + 3719 3730 list_for_each_entry_safe_reverse(trans, next, 3720 3731 &net->nft.commit_list, list) { 3721 3732 list_del(&trans->list); 3722 - trans->ctx.nla = NULL; 3723 - call_rcu(&trans->rcu_head, nf_tables_abort_release_rcu); 3733 + nf_tables_abort_release(trans); 3724 3734 } 3725 3735 3726 3736 return 0;
+11 -1
net/netfilter/nfnetlink.c
··· 47 47 [NFNLGRP_CONNTRACK_EXP_NEW] = NFNL_SUBSYS_CTNETLINK_EXP, 48 48 [NFNLGRP_CONNTRACK_EXP_UPDATE] = NFNL_SUBSYS_CTNETLINK_EXP, 49 49 [NFNLGRP_CONNTRACK_EXP_DESTROY] = NFNL_SUBSYS_CTNETLINK_EXP, 50 + [NFNLGRP_NFTABLES] = NFNL_SUBSYS_NFTABLES, 51 + [NFNLGRP_ACCT_QUOTA] = NFNL_SUBSYS_ACCT, 50 52 }; 51 53 52 54 void nfnl_lock(__u8 subsys_id) ··· 466 464 static int nfnetlink_bind(int group) 467 465 { 468 466 const struct nfnetlink_subsystem *ss; 469 - int type = nfnl_group2type[group]; 467 + int type; 468 + 469 + if (group <= NFNLGRP_NONE || group > NFNLGRP_MAX) 470 + return -EINVAL; 471 + 472 + type = nfnl_group2type[group]; 470 473 471 474 rcu_read_lock(); 472 475 ss = nfnetlink_get_subsys(type); ··· 520 513 static int __init nfnetlink_init(void) 521 514 { 522 515 int i; 516 + 517 + for (i = NFNLGRP_NONE + 1; i <= NFNLGRP_MAX; i++) 518 + BUG_ON(nfnl_group2type[i] == NFNL_SUBSYS_NONE); 523 519 524 520 for (i=0; i<NFNL_SUBSYS_COUNT; i++) 525 521 mutex_init(&table[i].mutex);
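The `nfnetlink_bind()` change validates the caller-supplied group number before using it to index `nfnl_group2type[]`, and the init-time loop asserts the table has no unfilled holes. The lookup side of that pattern can be sketched as follows, with made-up group names and type values:

```c
#include <stddef.h>

/* Bounds-checked table lookup in the style of the nfnetlink_bind() fix:
 * reject out-of-range group numbers before indexing the table. The enum
 * and table contents here are illustrative, not the kernel's. */
enum { GRP_NONE, GRP_A, GRP_B, GRP_MAX = GRP_B };

static const int group2type[GRP_MAX + 1] = {
    [GRP_A] = 10,
    [GRP_B] = 20,
};

static int group_type(int group)
{
    if (group <= GRP_NONE || group > GRP_MAX)
        return -1;          /* the kernel returns -EINVAL here */
    return group2type[group];
}
```

Without the check, a hostile or buggy caller passing a large group number reads past the end of the static array.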
+6 -34
net/netfilter/nft_compat.c
··· 21 21 #include <linux/netfilter_ipv6/ip6_tables.h> 22 22 #include <net/netfilter/nf_tables.h> 23 23 24 - static const struct { 25 - const char *name; 26 - u8 type; 27 - } table_to_chaintype[] = { 28 - { "filter", NFT_CHAIN_T_DEFAULT }, 29 - { "raw", NFT_CHAIN_T_DEFAULT }, 30 - { "security", NFT_CHAIN_T_DEFAULT }, 31 - { "mangle", NFT_CHAIN_T_ROUTE }, 32 - { "nat", NFT_CHAIN_T_NAT }, 33 - { }, 34 - }; 35 - 36 - static int nft_compat_table_to_chaintype(const char *table) 37 - { 38 - int i; 39 - 40 - for (i = 0; table_to_chaintype[i].name != NULL; i++) { 41 - if (strcmp(table_to_chaintype[i].name, table) == 0) 42 - return table_to_chaintype[i].type; 43 - } 44 - 45 - return -1; 46 - } 47 - 48 24 static int nft_compat_chain_validate_dependency(const char *tablename, 49 25 const struct nft_chain *chain) 50 26 { 51 - enum nft_chain_type type; 52 27 const struct nft_base_chain *basechain; 53 28 54 29 if (!tablename || !(chain->flags & NFT_BASE_CHAIN)) 55 30 return 0; 56 31 57 - type = nft_compat_table_to_chaintype(tablename); 58 - if (type < 0) 59 - return -EINVAL; 60 - 61 32 basechain = nft_base_chain(chain); 62 - if (basechain->type->type != type) 33 + if (strcmp(tablename, "nat") == 0 && 34 + basechain->type->type != NFT_CHAIN_T_NAT) 63 35 return -EINVAL; 64 36 65 37 return 0; ··· 89 117 struct xt_target *target, void *info, 90 118 union nft_entry *entry, u8 proto, bool inv) 91 119 { 92 - par->net = &init_net; 120 + par->net = ctx->net; 93 121 par->table = ctx->table->name; 94 122 switch (ctx->afi->family) { 95 123 case AF_INET: ··· 296 324 struct xt_match *match, void *info, 297 325 union nft_entry *entry, u8 proto, bool inv) 298 326 { 299 - par->net = &init_net; 327 + par->net = ctx->net; 300 328 par->table = ctx->table->name; 301 329 switch (ctx->afi->family) { 302 330 case AF_INET: ··· 346 374 union nft_entry e = {}; 347 375 int ret; 348 376 349 - ret = nft_compat_chain_validate_dependency(match->name, ctx->chain); 377 + ret = 
nft_compat_chain_validate_dependency(match->table, ctx->chain); 350 378 if (ret < 0) 351 379 goto err; 352 380 ··· 420 448 if (!(hook_mask & match->hooks)) 421 449 return -EINVAL; 422 450 423 - ret = nft_compat_chain_validate_dependency(match->name, 451 + ret = nft_compat_chain_validate_dependency(match->table, 424 452 ctx->chain); 425 453 if (ret < 0) 426 454 return ret;
+6 -4
net/openvswitch/actions.c
··· 281 281 { 282 282 int transport_len = skb->len - skb_transport_offset(skb); 283 283 284 - if (l4_proto == IPPROTO_TCP) { 284 + if (l4_proto == NEXTHDR_TCP) { 285 285 if (likely(transport_len >= sizeof(struct tcphdr))) 286 286 inet_proto_csum_replace16(&tcp_hdr(skb)->check, skb, 287 287 addr, new_addr, 1); 288 - } else if (l4_proto == IPPROTO_UDP) { 288 + } else if (l4_proto == NEXTHDR_UDP) { 289 289 if (likely(transport_len >= sizeof(struct udphdr))) { 290 290 struct udphdr *uh = udp_hdr(skb); 291 291 ··· 296 296 uh->check = CSUM_MANGLED_0; 297 297 } 298 298 } 299 + } else if (l4_proto == NEXTHDR_ICMP) { 300 + if (likely(transport_len >= sizeof(struct icmp6hdr))) 301 + inet_proto_csum_replace16(&icmp6_hdr(skb)->icmp6_cksum, 302 + skb, addr, new_addr, 1); 299 303 } 300 304 } 301 305 ··· 819 815 820 816 case OVS_ACTION_ATTR_SAMPLE: 821 817 err = sample(dp, skb, key, a); 822 - if (unlikely(err)) /* skb already freed. */ 823 - return err; 824 818 break; 825 819 } 826 820
+7 -7
net/openvswitch/datapath.c
··· 1316 1316 return msgsize; 1317 1317 } 1318 1318 1319 - /* Called with ovs_mutex or RCU read lock. */ 1319 + /* Called with ovs_mutex. */ 1320 1320 static int ovs_dp_cmd_fill_info(struct datapath *dp, struct sk_buff *skb, 1321 1321 u32 portid, u32 seq, u32 flags, u8 cmd) 1322 1322 { ··· 1604 1604 if (!reply) 1605 1605 return -ENOMEM; 1606 1606 1607 - rcu_read_lock(); 1607 + ovs_lock(); 1608 1608 dp = lookup_datapath(sock_net(skb->sk), info->userhdr, info->attrs); 1609 1609 if (IS_ERR(dp)) { 1610 1610 err = PTR_ERR(dp); ··· 1613 1613 err = ovs_dp_cmd_fill_info(dp, reply, info->snd_portid, 1614 1614 info->snd_seq, 0, OVS_DP_CMD_NEW); 1615 1615 BUG_ON(err < 0); 1616 - rcu_read_unlock(); 1616 + ovs_unlock(); 1617 1617 1618 1618 return genlmsg_reply(reply, info); 1619 1619 1620 1620 err_unlock_free: 1621 - rcu_read_unlock(); 1621 + ovs_unlock(); 1622 1622 kfree_skb(reply); 1623 1623 return err; 1624 1624 } ··· 1630 1630 int skip = cb->args[0]; 1631 1631 int i = 0; 1632 1632 1633 - rcu_read_lock(); 1634 - list_for_each_entry_rcu(dp, &ovs_net->dps, list_node) { 1633 + ovs_lock(); 1634 + list_for_each_entry(dp, &ovs_net->dps, list_node) { 1635 1635 if (i >= skip && 1636 1636 ovs_dp_cmd_fill_info(dp, skb, NETLINK_CB(cb->skb).portid, 1637 1637 cb->nlh->nlmsg_seq, NLM_F_MULTI, ··· 1639 1639 break; 1640 1640 i++; 1641 1641 } 1642 - rcu_read_unlock(); 1642 + ovs_unlock(); 1643 1643 1644 1644 cb->args[0] = i; 1645 1645
+8 -1
net/openvswitch/flow_netlink.c
··· 140 140 if (match->key->eth.type == htons(ETH_P_ARP) 141 141 || match->key->eth.type == htons(ETH_P_RARP)) { 142 142 key_expected |= 1 << OVS_KEY_ATTR_ARP; 143 - if (match->mask && (match->mask->key.eth.type == htons(0xffff))) 143 + if (match->mask && (match->mask->key.tp.src == htons(0xff))) 144 144 mask_allowed |= 1 << OVS_KEY_ATTR_ARP; 145 145 } 146 146 ··· 772 772 ipv6_key->ipv6_frag, OVS_FRAG_TYPE_MAX); 773 773 return -EINVAL; 774 774 } 775 + 776 + if (!is_mask && ipv6_key->ipv6_label & htonl(0xFFF00000)) { 777 + OVS_NLERR(log, "IPv6 flow label %x is out of range (max=%x).\n", 778 + ntohl(ipv6_key->ipv6_label), (1 << 20) - 1); 779 + return -EINVAL; 780 + } 781 + 775 782 SW_FLOW_KEY_PUT(match, ipv6.label, 776 783 ipv6_key->ipv6_label, is_mask); 777 784 SW_FLOW_KEY_PUT(match, ip.proto,
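The new flow_netlink check rejects IPv6 flow labels wider than 20 bits: the label sits in the low 20 bits of a 32-bit big-endian word, so any bit under the `htonl(0xFFF00000)` mask marks it out of range. A minimal userspace sketch of that test (the helper name is illustrative):

```c
#include <arpa/inet.h>
#include <stdint.h>

/* Range check in the style of the flow-label fix: because byte-swapping
 * distributes over bitwise AND, masking the network-order value with
 * htonl(0xFFF00000) isolates exactly the 12 disallowed high bits. */
static int ipv6_label_ok(uint32_t label_be)
{
    return (label_be & htonl(0xFFF00000u)) == 0;
}
```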
+30 -5
net/sunrpc/auth_gss/auth_gss.c
··· 1353 1353 char *string = NULL; 1354 1354 struct gss_cred *gss_cred = container_of(cred, struct gss_cred, gc_base); 1355 1355 struct gss_cl_ctx *ctx; 1356 + unsigned int len; 1356 1357 struct xdr_netobj *acceptor; 1357 1358 1358 1359 rcu_read_lock(); ··· 1361 1360 if (!ctx) 1362 1361 goto out; 1363 1362 1364 - acceptor = &ctx->gc_acceptor; 1363 + len = ctx->gc_acceptor.len; 1364 + rcu_read_unlock(); 1365 1365 1366 1366 /* no point if there's no string */ 1367 - if (!acceptor->len) 1368 - goto out; 1369 - 1370 - string = kmalloc(acceptor->len + 1, GFP_KERNEL); 1367 + if (!len) 1368 + return NULL; 1369 + realloc: 1370 + string = kmalloc(len + 1, GFP_KERNEL); 1371 1371 if (!string) 1372 + return NULL; 1373 + 1374 + rcu_read_lock(); 1375 + ctx = rcu_dereference(gss_cred->gc_ctx); 1376 + 1377 + /* did the ctx disappear or was it replaced by one with no acceptor? */ 1378 + if (!ctx || !ctx->gc_acceptor.len) { 1379 + kfree(string); 1380 + string = NULL; 1372 1381 goto out; 1382 + } 1383 + 1384 + acceptor = &ctx->gc_acceptor; 1385 + 1386 + /* 1387 + * Did we find a new acceptor that's longer than the original? Allocate 1388 + * a longer buffer and try again. 1389 + */ 1390 + if (len < acceptor->len) { 1391 + len = acceptor->len; 1392 + rcu_read_unlock(); 1393 + kfree(string); 1394 + goto realloc; 1395 + } 1373 1396 1374 1397 memcpy(string, acceptor->data, acceptor->len); 1375 1398 string[acceptor->len] = '\0';
+4 -4
sound/pci/hda/patch_realtek.c
··· 4520 4520 [ALC269_FIXUP_HEADSET_MODE] = { 4521 4521 .type = HDA_FIXUP_FUNC, 4522 4522 .v.func = alc_fixup_headset_mode, 4523 + .chained = true, 4524 + .chain_id = ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED 4523 4525 }, 4524 4526 [ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC] = { 4525 4527 .type = HDA_FIXUP_FUNC, ··· 4711 4709 [ALC255_FIXUP_HEADSET_MODE] = { 4712 4710 .type = HDA_FIXUP_FUNC, 4713 4711 .v.func = alc_fixup_headset_mode_alc255, 4712 + .chained = true, 4713 + .chain_id = ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED 4714 4714 }, 4715 4715 [ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC] = { 4716 4716 .type = HDA_FIXUP_FUNC, ··· 4748 4744 [ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED] = { 4749 4745 .type = HDA_FIXUP_FUNC, 4750 4746 .v.func = alc_fixup_dell_wmi, 4751 - .chained_before = true, 4752 - .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE 4753 4747 }, 4754 4748 [ALC282_FIXUP_ASPIRE_V5_PINS] = { 4755 4749 .type = HDA_FIXUP_PINS, ··· 4785 4783 SND_PCI_QUIRK(0x1028, 0x05f4, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4786 4784 SND_PCI_QUIRK(0x1028, 0x05f5, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4787 4785 SND_PCI_QUIRK(0x1028, 0x05f6, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 4788 - SND_PCI_QUIRK(0x1028, 0x0610, "Dell", ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED), 4789 4786 SND_PCI_QUIRK(0x1028, 0x0615, "Dell Vostro 5470", ALC290_FIXUP_SUBWOOFER_HSJACK), 4790 4787 SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_SUBWOOFER_HSJACK), 4791 - SND_PCI_QUIRK(0x1028, 0x061f, "Dell", ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED), 4792 4788 SND_PCI_QUIRK(0x1028, 0x0638, "Dell Inspiron 5439", ALC290_FIXUP_MONO_SPEAKERS_HSJACK), 4793 4789 SND_PCI_QUIRK(0x1028, 0x064a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), 4794 4790 SND_PCI_QUIRK(0x1028, 0x064b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+1
sound/soc/codecs/cs42l51-i2c.c
··· 46 46 .driver = { 47 47 .name = "cs42l51", 48 48 .owner = THIS_MODULE, 49 + .of_match_table = cs42l51_of_match, 49 50 }, 50 51 .probe = cs42l51_i2c_probe, 51 52 .remove = cs42l51_i2c_remove,
+3 -1
sound/soc/codecs/cs42l51.c
··· 558 558 } 559 559 EXPORT_SYMBOL_GPL(cs42l51_probe); 560 560 561 - static const struct of_device_id cs42l51_of_match[] = { 561 + const struct of_device_id cs42l51_of_match[] = { 562 562 { .compatible = "cirrus,cs42l51", }, 563 563 { } 564 564 }; 565 565 MODULE_DEVICE_TABLE(of, cs42l51_of_match); 566 + EXPORT_SYMBOL_GPL(cs42l51_of_match); 567 + 566 568 MODULE_AUTHOR("Arnaud Patard <arnaud.patard@rtp-net.org>"); 567 569 MODULE_DESCRIPTION("Cirrus Logic CS42L51 ALSA SoC Codec Driver"); 568 570 MODULE_LICENSE("GPL");
+1
sound/soc/codecs/cs42l51.h
··· 22 22 23 23 extern const struct regmap_config cs42l51_regmap; 24 24 int cs42l51_probe(struct device *dev, struct regmap *regmap); 25 + extern const struct of_device_id cs42l51_of_match[]; 25 26 26 27 #define CS42L51_CHIP_ID 0x1B 27 28 #define CS42L51_CHIP_REV_A 0x00
+1 -1
sound/soc/codecs/es8328-i2c.c
··· 19 19 #include "es8328.h" 20 20 21 21 static const struct i2c_device_id es8328_id[] = { 22 - { "everest,es8328", 0 }, 22 + { "es8328", 0 }, 23 23 { } 24 24 }; 25 25 MODULE_DEVICE_TABLE(i2c, es8328_id);
+3 -3
sound/soc/codecs/max98090.c
··· 1941 1941 * 0x02 (when master clk is 20MHz to 40MHz).. 1942 1942 * 0x03 (when master clk is 40MHz to 60MHz).. 1943 1943 */ 1944 - if ((freq >= 10000000) && (freq < 20000000)) { 1944 + if ((freq >= 10000000) && (freq <= 20000000)) { 1945 1945 snd_soc_write(codec, M98090_REG_SYSTEM_CLOCK, 1946 1946 M98090_PSCLK_DIV1); 1947 - } else if ((freq >= 20000000) && (freq < 40000000)) { 1947 + } else if ((freq > 20000000) && (freq <= 40000000)) { 1948 1948 snd_soc_write(codec, M98090_REG_SYSTEM_CLOCK, 1949 1949 M98090_PSCLK_DIV2); 1950 - } else if ((freq >= 40000000) && (freq < 60000000)) { 1950 + } else if ((freq > 40000000) && (freq <= 60000000)) { 1951 1951 snd_soc_write(codec, M98090_REG_SYSTEM_CLOCK, 1952 1952 M98090_PSCLK_DIV4); 1953 1953 } else {
+2
sound/soc/codecs/rt5645.c
··· 139 139 { 0x76, 0x000a }, 140 140 { 0x77, 0x0c00 }, 141 141 { 0x78, 0x0000 }, 142 + { 0x79, 0x0123 }, 142 143 { 0x80, 0x0000 }, 143 144 { 0x81, 0x0000 }, 144 145 { 0x82, 0x0000 }, ··· 335 334 case RT5645_DMIC_CTRL2: 336 335 case RT5645_TDM_CTRL_1: 337 336 case RT5645_TDM_CTRL_2: 337 + case RT5645_TDM_CTRL_3: 338 338 case RT5645_GLB_CLK: 339 339 case RT5645_PLL_CTRL1: 340 340 case RT5645_PLL_CTRL2:
+18 -18
sound/soc/codecs/rt5670.c
··· 100 100 { 0x4c, 0x5380 }, 101 101 { 0x4f, 0x0073 }, 102 102 { 0x52, 0x00d3 }, 103 - { 0x53, 0xf0f0 }, 103 + { 0x53, 0xf000 }, 104 104 { 0x61, 0x0000 }, 105 105 { 0x62, 0x0001 }, 106 106 { 0x63, 0x00c3 }, 107 107 { 0x64, 0x0000 }, 108 - { 0x65, 0x0000 }, 108 + { 0x65, 0x0001 }, 109 109 { 0x66, 0x0000 }, 110 110 { 0x6f, 0x8000 }, 111 111 { 0x70, 0x8000 }, 112 112 { 0x71, 0x8000 }, 113 113 { 0x72, 0x8000 }, 114 - { 0x73, 0x1110 }, 114 + { 0x73, 0x7770 }, 115 115 { 0x74, 0x0e00 }, 116 116 { 0x75, 0x1505 }, 117 117 { 0x76, 0x0015 }, ··· 125 125 { 0x83, 0x0000 }, 126 126 { 0x84, 0x0000 }, 127 127 { 0x85, 0x0000 }, 128 - { 0x86, 0x0008 }, 128 + { 0x86, 0x0004 }, 129 129 { 0x87, 0x0000 }, 130 130 { 0x88, 0x0000 }, 131 131 { 0x89, 0x0000 }, 132 132 { 0x8a, 0x0000 }, 133 133 { 0x8b, 0x0000 }, 134 - { 0x8c, 0x0007 }, 134 + { 0x8c, 0x0003 }, 135 135 { 0x8d, 0x0000 }, 136 136 { 0x8e, 0x0004 }, 137 137 { 0x8f, 0x1100 }, 138 138 { 0x90, 0x0646 }, 139 139 { 0x91, 0x0c06 }, 140 140 { 0x93, 0x0000 }, 141 - { 0x94, 0x0000 }, 142 - { 0x95, 0x0000 }, 141 + { 0x94, 0x1270 }, 142 + { 0x95, 0x1000 }, 143 143 { 0x97, 0x0000 }, 144 144 { 0x98, 0x0000 }, 145 145 { 0x99, 0x0000 }, ··· 150 150 { 0x9e, 0x0400 }, 151 151 { 0xae, 0x7000 }, 152 152 { 0xaf, 0x0000 }, 153 - { 0xb0, 0x6000 }, 153 + { 0xb0, 0x7000 }, 154 154 { 0xb1, 0x0000 }, 155 155 { 0xb2, 0x0000 }, 156 156 { 0xb3, 0x001f }, 157 - { 0xb4, 0x2206 }, 157 + { 0xb4, 0x220c }, 158 158 { 0xb5, 0x1f00 }, 159 159 { 0xb6, 0x0000 }, 160 160 { 0xb7, 0x0000 }, ··· 171 171 { 0xcf, 0x1813 }, 172 172 { 0xd0, 0x0690 }, 173 173 { 0xd1, 0x1c17 }, 174 - { 0xd3, 0xb320 }, 174 + { 0xd3, 0xa220 }, 175 175 { 0xd4, 0x0000 }, 176 176 { 0xd6, 0x0400 }, 177 177 { 0xd9, 0x0809 }, 178 178 { 0xda, 0x0000 }, 179 179 { 0xdb, 0x0001 }, 180 180 { 0xdc, 0x0049 }, 181 - { 0xdd, 0x0009 }, 181 + { 0xdd, 0x0024 }, 182 182 { 0xe6, 0x8000 }, 183 183 { 0xe7, 0x0000 }, 184 - { 0xec, 0xb300 }, 184 + { 0xec, 0xa200 }, 185 185 { 0xed, 0x0000 }, 186 - { 0xee, 0xb300 }, 186 + { 0xee, 0xa200 }, 187 187 { 0xef, 0x0000 }, 188 188 { 0xf8, 0x0000 }, 189 189 { 0xf9, 0x0000 }, 190 190 { 0xfa, 0x8010 }, 191 191 { 0xfb, 0x0033 }, 192 - { 0xfc, 0x0080 }, 192 + { 0xfc, 0x0100 }, 193 193 }; 194 194 195 195 static bool rt5670_volatile_register(struct device *dev, unsigned int reg) ··· 1877 1877 { "DAC1 MIXR", "DAC1 Switch", "DAC1 R Mux" }, 1878 1878 { "DAC1 MIXR", NULL, "DAC Stereo1 Filter" }, 1879 1879 1880 + { "DAC Stereo1 Filter", NULL, "PLL1", is_sys_clk_from_pll }, 1881 + { "DAC Mono Left Filter", NULL, "PLL1", is_sys_clk_from_pll }, 1882 + { "DAC Mono Right Filter", NULL, "PLL1", is_sys_clk_from_pll }, 1883 + 1880 1884 { "DAC MIX", NULL, "DAC1 MIXL" }, 1881 1885 { "DAC MIX", NULL, "DAC1 MIXR" }, ··· 1930 1926 1931 1927 { "DAC L1", NULL, "DAC L1 Power" }, 1932 1928 { "DAC L1", NULL, "Stereo DAC MIXL" }, 1933 - { "DAC L1", NULL, "PLL1", is_sys_clk_from_pll }, 1934 1929 { "DAC R1", NULL, "DAC R1 Power" }, 1935 1930 { "DAC R1", NULL, "Stereo DAC MIXR" }, 1936 - { "DAC R1", NULL, "PLL1", is_sys_clk_from_pll }, 1937 1931 { "DAC L2", NULL, "Mono DAC MIXL" }, 1938 - { "DAC L2", NULL, "PLL1", is_sys_clk_from_pll }, 1939 1932 { "DAC R2", NULL, "Mono DAC MIXR" }, 1940 - { "DAC R2", NULL, "PLL1", is_sys_clk_from_pll }, 1941 1933 1942 1934 { "OUT MIXL", "BST1 Switch", "BST1" }, 1943 1935 { "OUT MIXL", "INL Switch", "INL VOL" },
+1 -2
sound/soc/codecs/sgtl5000.c
··· 1299 1299 1300 1300 /* enable small pop, introduce 400ms delay in turning off */ 1301 1301 snd_soc_update_bits(codec, SGTL5000_CHIP_REF_CTRL, 1302 - SGTL5000_SMALL_POP, 1303 - SGTL5000_SMALL_POP); 1302 + SGTL5000_SMALL_POP, 1); 1304 1303 1305 1304 /* disable short cut detector */ 1306 1305 snd_soc_write(codec, SGTL5000_CHIP_SHORT_CTRL, 0);
+1 -1
sound/soc/codecs/sgtl5000.h
··· 275 275 #define SGTL5000_BIAS_CTRL_MASK 0x000e 276 276 #define SGTL5000_BIAS_CTRL_SHIFT 1 277 277 #define SGTL5000_BIAS_CTRL_WIDTH 3 278 - #define SGTL5000_SMALL_POP 0x0001 278 + #define SGTL5000_SMALL_POP 0 279 279 280 280 /* 281 281 * SGTL5000_CHIP_MIC_CTRL
+1
sound/soc/codecs/wm_adsp.c
··· 1355 1355 file, blocks, pos - firmware->size); 1356 1356 1357 1357 out_fw: 1358 + regmap_async_complete(regmap); 1358 1359 release_firmware(firmware); 1359 1360 wm_adsp_buf_free(&buf_list); 1360 1361 out:
+26
sound/soc/fsl/fsl_asrc.c
··· 684 684 } 685 685 } 686 686 687 + static struct reg_default fsl_asrc_reg[] = { 688 + { REG_ASRCTR, 0x0000 }, { REG_ASRIER, 0x0000 }, 689 + { REG_ASRCNCR, 0x0000 }, { REG_ASRCFG, 0x0000 }, 690 + { REG_ASRCSR, 0x0000 }, { REG_ASRCDR1, 0x0000 }, 691 + { REG_ASRCDR2, 0x0000 }, { REG_ASRSTR, 0x0000 }, 692 + { REG_ASRRA, 0x0000 }, { REG_ASRRB, 0x0000 }, 693 + { REG_ASRRC, 0x0000 }, { REG_ASRPM1, 0x0000 }, 694 + { REG_ASRPM2, 0x0000 }, { REG_ASRPM3, 0x0000 }, 695 + { REG_ASRPM4, 0x0000 }, { REG_ASRPM5, 0x0000 }, 696 + { REG_ASRTFR1, 0x0000 }, { REG_ASRCCR, 0x0000 }, 697 + { REG_ASRDIA, 0x0000 }, { REG_ASRDOA, 0x0000 }, 698 + { REG_ASRDIB, 0x0000 }, { REG_ASRDOB, 0x0000 }, 699 + { REG_ASRDIC, 0x0000 }, { REG_ASRDOC, 0x0000 }, 700 + { REG_ASRIDRHA, 0x0000 }, { REG_ASRIDRLA, 0x0000 }, 701 + { REG_ASRIDRHB, 0x0000 }, { REG_ASRIDRLB, 0x0000 }, 702 + { REG_ASRIDRHC, 0x0000 }, { REG_ASRIDRLC, 0x0000 }, 703 + { REG_ASR76K, 0x0A47 }, { REG_ASR56K, 0x0DF3 }, 704 + { REG_ASRMCRA, 0x0000 }, { REG_ASRFSTA, 0x0000 }, 705 + { REG_ASRMCRB, 0x0000 }, { REG_ASRFSTB, 0x0000 }, 706 + { REG_ASRMCRC, 0x0000 }, { REG_ASRFSTC, 0x0000 }, 707 + { REG_ASRMCR1A, 0x0000 }, { REG_ASRMCR1B, 0x0000 }, 708 + { REG_ASRMCR1C, 0x0000 }, 709 + }; 710 + 687 711 static const struct regmap_config fsl_asrc_regmap_config = { 688 712 .reg_bits = 32, 689 713 .reg_stride = 4, 690 714 .val_bits = 32, 691 715 692 716 .max_register = REG_ASRMCR1C, 717 + .reg_defaults = fsl_asrc_reg, 718 + .num_reg_defaults = ARRAY_SIZE(fsl_asrc_reg), 693 719 .readable_reg = fsl_asrc_readable_reg, 694 720 .volatile_reg = fsl_asrc_volatile_reg, 695 721 .writeable_reg = fsl_asrc_writeable_reg,
+3 -1
sound/soc/rockchip/rockchip_i2s.c
··· 154 154 while (val) { 155 155 regmap_read(i2s->regmap, I2S_CLR, &val); 156 156 retry--; 157 - if (!retry) 157 + if (!retry) { 158 158 dev_warn(i2s->dev, "fail to clear\n"); 159 + break; 160 + } 159 161 } 160 162 } 161 163 }
+1
sound/soc/samsung/snow.c
··· 110 110 { .compatible = "google,snow-audio-max98095", }, 111 111 {}, 112 112 }; 113 + MODULE_DEVICE_TABLE(of, snow_of_match); 113 114 114 115 static struct platform_driver snow_driver = { 115 116 .driver = {
+1 -2
sound/soc/sh/fsi.c
··· 1711 1711 static struct snd_pcm_hardware fsi_pcm_hardware = { 1712 1712 .info = SNDRV_PCM_INFO_INTERLEAVED | 1713 1713 SNDRV_PCM_INFO_MMAP | 1714 - SNDRV_PCM_INFO_MMAP_VALID | 1715 - SNDRV_PCM_INFO_PAUSE, 1714 + SNDRV_PCM_INFO_MMAP_VALID, 1716 1715 .buffer_bytes_max = 64 * 1024, 1717 1716 .period_bytes_min = 32, 1718 1717 .period_bytes_max = 8192,
+1 -2
sound/soc/sh/rcar/core.c
··· 886 886 static struct snd_pcm_hardware rsnd_pcm_hardware = { 887 887 .info = SNDRV_PCM_INFO_INTERLEAVED | 888 888 SNDRV_PCM_INFO_MMAP | 889 - SNDRV_PCM_INFO_MMAP_VALID | 890 - SNDRV_PCM_INFO_PAUSE, 889 + SNDRV_PCM_INFO_MMAP_VALID, 891 890 .buffer_bytes_max = 64 * 1024, 892 891 .period_bytes_min = 32, 893 892 .period_bytes_max = 8192,
+1 -1
sound/soc/soc-core.c
··· 884 884 list_for_each_entry(component, &component_list, list) { 885 885 if (dlc->of_node && component->dev->of_node != dlc->of_node) 886 886 continue; 887 - if (dlc->name && strcmp(dev_name(component->dev), dlc->name)) 887 + if (dlc->name && strcmp(component->name, dlc->name)) 888 888 continue; 889 889 list_for_each_entry(dai, &component->dai_list, list) { 890 890 if (dlc->dai_name && strcmp(dai->name, dlc->dai_name))
+56 -16
sound/soc/soc-pcm.c
··· 1522 1522 dpcm_init_runtime_hw(runtime, &cpu_dai_drv->capture); 1523 1523 } 1524 1524 1525 + static int dpcm_fe_dai_do_trigger(struct snd_pcm_substream *substream, int cmd); 1526 + 1527 + /* Set FE's runtime_update state; the state is protected via PCM stream lock 1528 + * for avoiding the race with trigger callback. 1529 + * If the state is unset and a trigger is pending while the previous operation, 1530 + * process the pending trigger action here. 1531 + */ 1532 + static void dpcm_set_fe_update_state(struct snd_soc_pcm_runtime *fe, 1533 + int stream, enum snd_soc_dpcm_update state) 1534 + { 1535 + struct snd_pcm_substream *substream = 1536 + snd_soc_dpcm_get_substream(fe, stream); 1537 + 1538 + snd_pcm_stream_lock_irq(substream); 1539 + if (state == SND_SOC_DPCM_UPDATE_NO && fe->dpcm[stream].trigger_pending) { 1540 + dpcm_fe_dai_do_trigger(substream, 1541 + fe->dpcm[stream].trigger_pending - 1); 1542 + fe->dpcm[stream].trigger_pending = 0; 1543 + } 1544 + fe->dpcm[stream].runtime_update = state; 1545 + snd_pcm_stream_unlock_irq(substream); 1546 + } 1547 + 1525 1548 static int dpcm_fe_dai_startup(struct snd_pcm_substream *fe_substream) 1526 1549 { 1527 1550 struct snd_soc_pcm_runtime *fe = fe_substream->private_data; 1528 1551 struct snd_pcm_runtime *runtime = fe_substream->runtime; 1529 1552 int stream = fe_substream->stream, ret = 0; 1530 1553 1531 - fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE; 1554 + dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE); 1532 1555 1533 1556 ret = dpcm_be_dai_startup(fe, fe_substream->stream); 1534 1557 if (ret < 0) { ··· 1573 1550 dpcm_set_fe_runtime(fe_substream); 1574 1551 snd_pcm_limit_hw_rates(runtime); 1575 1552 1576 - fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO; 1553 + dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO); 1577 1554 return 0; 1578 1555 1579 1556 unwind: 1580 1557 dpcm_be_dai_startup_unwind(fe, fe_substream->stream); 1581 1558 be_err: 1582 - fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO; 1559 + dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO); 1583 1560 return ret; 1584 1561 } 1585 1562 ··· 1626 1603 struct snd_soc_pcm_runtime *fe = substream->private_data; 1627 1604 int stream = substream->stream; 1628 1605 1629 - fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE; 1606 + dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE); 1630 1607 1631 1608 /* shutdown the BEs */ 1632 1609 dpcm_be_dai_shutdown(fe, substream->stream); ··· 1640 1617 dpcm_dapm_stream_event(fe, stream, SND_SOC_DAPM_STREAM_STOP); 1641 1618 1642 1619 fe->dpcm[stream].state = SND_SOC_DPCM_STATE_CLOSE; 1643 - fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO; 1620 + dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO); 1644 1621 return 0; 1645 1622 } ··· 1688 1665 int err, stream = substream->stream; 1689 1666 1690 1667 mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME); 1691 - fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE; 1668 + dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE); 1692 1669 1693 1670 dev_dbg(fe->dev, "ASoC: hw_free FE %s\n", fe->dai_link->name); ··· 1703 1680 err = dpcm_be_dai_hw_free(fe, stream); 1704 1681 1705 1682 fe->dpcm[stream].state = SND_SOC_DPCM_STATE_HW_FREE; 1706 - fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO; 1683 + dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO); 1707 1684 1708 1685 mutex_unlock(&fe->card->mutex); 1709 1686 return 0; ··· 1796 1773 int ret, stream = substream->stream; 1797 1774 1798 1775 mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME); 1799 - fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE; 1776 + dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE); 1800 1777 1801 1778 memcpy(&fe->dpcm[substream->stream].hw_params, params, 1802 1779 sizeof(struct snd_pcm_hw_params)); ··· 1819 1796 fe->dpcm[stream].state = SND_SOC_DPCM_STATE_HW_PARAMS; 1820 1797 1821 1798 out: 1822 - fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO; 1799 + dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO); 1823 1800 mutex_unlock(&fe->card->mutex); 1824 1801 return ret; 1825 1802 } ··· 1933 1910 } 1934 1911 EXPORT_SYMBOL_GPL(dpcm_be_dai_trigger); 1935 1912 1936 - static int dpcm_fe_dai_trigger(struct snd_pcm_substream *substream, int cmd) 1913 + static int dpcm_fe_dai_do_trigger(struct snd_pcm_substream *substream, int cmd) 1937 1914 { 1938 1915 struct snd_soc_pcm_runtime *fe = substream->private_data; 1939 1916 int stream = substream->stream, ret; ··· 2007 1984 return ret; 2008 1985 } 2009 1986 1987 + static int dpcm_fe_dai_trigger(struct snd_pcm_substream *substream, int cmd) 1988 + { 1989 + struct snd_soc_pcm_runtime *fe = substream->private_data; 1990 + int stream = substream->stream; 1991 + 1992 + /* if FE's runtime_update is already set, we're in race; 1993 + * process this trigger later at exit 1994 + */ 1995 + if (fe->dpcm[stream].runtime_update != SND_SOC_DPCM_UPDATE_NO) { 1996 + fe->dpcm[stream].trigger_pending = cmd + 1; 1997 + return 0; /* delayed, assuming it's successful */ 1998 + } 1999 + 2000 + /* we're alone, let's trigger */ 2001 + return dpcm_fe_dai_do_trigger(substream, cmd); 2002 + } 2003 + 2010 2004 int dpcm_be_dai_prepare(struct snd_soc_pcm_runtime *fe, int stream) 2011 2005 { 2012 2006 struct snd_soc_dpcm *dpcm; ··· 2067 2027 2068 2028 dev_dbg(fe->dev, "ASoC: prepare FE %s\n", fe->dai_link->name); 2069 2029 2070 - fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE; 2030 + dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE); 2071 2031 2072 2032 /* there is no point preparing this FE if there are no BEs */ 2073 2033 if (list_empty(&fe->dpcm[stream].be_clients)) { ··· 2094 2054 fe->dpcm[stream].state = SND_SOC_DPCM_STATE_PREPARE; 2095 2055 2096 2056 out: 2097 - fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO; 2057 + dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO); 2098 2058 mutex_unlock(&fe->card->mutex); 2099 2059 2100 2060 return ret; ··· 2241 2201 { 2242 2202 int ret; 2243 2203 2244 - fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_BE; 2204 + dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_BE); 2245 2205 ret = dpcm_run_update_startup(fe, stream); 2246 2206 if (ret < 0) 2247 2207 dev_err(fe->dev, "ASoC: failed to startup some BEs\n"); 2248 - fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO; 2208 + dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO); 2249 2209 2250 2210 return ret; 2251 2211 } ··· 2254 2214 { 2255 2215 int ret; 2256 2216 2257 - fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_BE; 2217 + dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_BE); 2258 2218 ret = dpcm_run_update_shutdown(fe, stream); 2259 2219 if (ret < 0) 2260 2220 dev_err(fe->dev, "ASoC: failed to shutdown some BEs\n"); 2261 2221 dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO); 2262 2222 2263 2223 return ret; 2264 2224 }
+4 -3
sound/usb/mixer.c
··· 2033 2033 cval->res = 1; 2034 2034 cval->initialized = 1; 2035 2035 2036 - if (desc->bDescriptorSubtype == UAC2_CLOCK_SELECTOR) 2037 - cval->control = UAC2_CX_CLOCK_SELECTOR; 2038 - else 2036 + if (state->mixer->protocol == UAC_VERSION_1) 2039 2037 cval->control = 0; 2038 + else /* UAC_VERSION_2 */ 2039 + cval->control = (desc->bDescriptorSubtype == UAC2_CLOCK_SELECTOR) ? 2040 + UAC2_CX_CLOCK_SELECTOR : UAC2_SU_SELECTOR; 2040 2041 2041 2042 namelist = kmalloc(sizeof(char *) * desc->bNrInPins, GFP_KERNEL); 2042 2043 if (!namelist) {
+14
sound/usb/quirks.c
··· 1146 1146 if ((le16_to_cpu(dev->descriptor.idVendor) == 0x23ba) && 1147 1147 (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS) 1148 1148 mdelay(20); 1149 + 1150 + /* Marantz/Denon devices with USB DAC functionality need a delay 1151 + * after each class compliant request 1152 + */ 1153 + if ((le16_to_cpu(dev->descriptor.idVendor) == 0x154e) && 1154 + (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS) { 1155 + 1156 + switch (le16_to_cpu(dev->descriptor.idProduct)) { 1157 + case 0x3005: /* Marantz HD-DAC1 */ 1158 + case 0x3006: /* Marantz SA-14S1 */ 1159 + mdelay(20); 1160 + break; 1161 + } 1162 + } 1149 1163 } 1150 1164 1151 1165 /*